
IE335 Stochastic Models Fall 23

Study Set 1 for Final Exam

1. Coin 1 comes up heads with probability 0.6 and coin 2 with probability 0.5. A coin is continually flipped until it comes up tails, at which time that coin is put aside and we start flipping the other one.

i. What proportion of flips use coin 1?

Solution: Let's define flipping coin 1 as state 1 and flipping coin 2 as state 2. Then we can find the transition probabilities from one state to another. We keep flipping coin 1 as long as it comes up heads, so the transition probability from state 1 to state 1 is 0.6. Similarly, we can calculate the rest of the transition probabilities.


             1     2
    P =  1 [ 0.6   0.4 ]
         2 [ 0.5   0.5 ]

The proportion of flips that use coin 1 is the long-run proportion of time the chain spends in state 1, so π1 is asked. Let's write out the equations of π^T = π^T P:

0.6π1 + 0.5π2 = π1
0.4π1 + 0.5π2 = π2

Additionally, we use the normalization of the π values, π1 + π2 = 1. Solving these equations gives π1 = 5/9 and π2 = 4/9.
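As a quick numerical check (not part of the original solution), here is a minimal Python sketch, assuming numpy is available, that solves the balance equations together with the normalization:

```python
import numpy as np

# Transition matrix of the coin chain (row/column order: coin 1, coin 2).
P = np.array([[0.6, 0.4],
              [0.5, 0.5]])

# Replace one row of the balance system (P.T - I) @ pi = 0 with the
# normalization pi_1 + pi_2 = 1, then solve the resulting linear system.
A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)  # [0.5556 0.4444], i.e. 5/9 and 4/9
```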

ii. If we start the process with coin 1, what is the probability that coin 2 is used on the fifth flip?

Solution: We need the four-step transition probability from state 1 to state 2, i.e. the (1, 2) entry of P^4 (sometimes written r12(4)). Please compute the P^4 matrix yourself; the answer is 0.4444.
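A minimal sketch of that computation, again assuming numpy:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.5, 0.5]])

# Four transitions after the first flip determine which coin the fifth flip uses.
P4 = np.linalg.matrix_power(P, 4)
print(P4[0, 1])  # ~0.4444 (0-based indices: row 0 = coin 1, column 1 = coin 2)
```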

2. There are m classes offered by a particular department, and each year the students rank each class from 1 to m in order of difficulty, with rank m being the highest. Unfortunately, the ranking is completely arbitrary. In fact, any given class is equally likely to receive any given rank in a given year (two classes may not receive the same rank). A certain professor chooses to remember only the highest ranking his class has ever gotten.

i. Find the transition probabilities of the Markov chain that models the ranking that the professor remembers.

Solution: Consider a Markov chain whose state is the professor's memory of the course's highest ranking; the states run from 1 to m. If, say, the class is ranked 4th in difficulty one year and 3rd the next, he will still remember only the 4th position. This means the transition probability from a state to a lower-ranked state is zero (pij = 0 for j < i). Since the chain stays in state i whenever the new rank is i or lower, the probability of remaining in the same state is pii = i/m (the 1/m probabilities for ranks 1 through i summed up). Finally, pij = 1/m for j > i, since the class is equally likely to receive any given rank. A small example is sketched below.
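An illustrative sketch (the helper name memory_chain is ours, not from the problem), assuming numpy, that builds this transition matrix for a small m:

```python
import numpy as np

def memory_chain(m):
    """Transition matrix of the 'highest rank remembered' chain on states 1..m."""
    P = np.zeros((m, m))
    for i in range(1, m + 1):
        P[i - 1, i - 1] = i / m    # any new rank <= i leaves the memory at i
        P[i - 1, i:] = 1.0 / m     # a new rank j > i replaces the memory
    return P

print(memory_chain(4))
# [[0.25 0.25 0.25 0.25]
#  [0.   0.5  0.25 0.25]
#  [0.   0.   0.75 0.25]
#  [0.   0.   0.   1.  ]]
```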

ii. Find the recurrent and the transient states.

Solution: On any given year there is a positive probability, namely 1/m, that the class receives the highest ranking. Therefore state m is accessible from every other state. From state m the transition probabilities to all other states are 0 and pmm = 1, so state m is absorbing. Therefore m is the only recurrent state, and all other states are transient.

iii. Find the expected number of years for the professor to achieve the highest ranking, given that in the first year he achieved the ith ranking.

Solution: This question can be answered by finding the mean first passage time from state i to the absorbing state m. Let's write the equations for the mean time to absorption. Since state m is absorbing, µm = 0. For the remaining states, µi = 1 + Σj pij µj. Let's write these out explicitly.


µ1 = 1 + (1/m)µ1 + (1/m)µ2 + ... + (1/m)µm
µ2 = 1 + (2/m)µ2 + (1/m)µ3 + ... + (1/m)µm
µ3 = 1 + (3/m)µ3 + (1/m)µ4 + ... + (1/m)µm
...
µm−2 = 1 + ((m−2)/m)µm−2 + (1/m)µm−1 + (1/m)µm
µm−1 = 1 + ((m−1)/m)µm−1 + (1/m)µm

The last equation gives µm−1 = m (since µm = 0). Using that, the next equation gives µm−2 = m. Solving each equation in turn, we get µi = m for all i < m.

You can also solve this question by considering only the transition to state m from any state. Since the probability of achieving the highest ranking in a given year is 1/m, independent of the current state, the required expected number of years is the expected number of trials until the first success in a Bernoulli process with success probability 1/m. Thus, the expected number of years is m.
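A minimal numerical check of the claim µi = m (a sketch assuming numpy); it solves (I − Q)µ = 1 over the transient states for m = 5:

```python
import numpy as np

m = 5
P = np.zeros((m, m))
for i in range(1, m + 1):
    P[i - 1, i - 1] = i / m    # stay at the remembered rank i
    P[i - 1, i:] = 1.0 / m     # jump to a higher rank j > i

# Mean time to absorption in state m: solve (I - Q) mu = 1 over states 1..m-1.
Q = P[:-1, :-1]
mu = np.linalg.solve(np.eye(m - 1) - Q, np.ones(m - 1))
print(mu)  # [5. 5. 5. 5.]
```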

3. Consider a mechanic shop that specializes in the periodic overhaul of diesel and gasoline engines. The overhaul of a diesel engine requires two days, while a gasoline engine requires a single day. Each morning the probability of receiving an overhaul job is pD = 1/3 for a diesel engine and pG = 1/2 for a gasoline engine. (These probabilities are independent: a diesel arrival does not change the arrival probability of gasoline for that day.) The profit per day is $20 for diesel and $23 for gasoline engines. Work that cannot be started immediately on the day it is received is lost to competitor shops. The following policy is in effect when deciding whether to accept a job: a) if only one day's work is complete on a diesel engine, any arriving jobs are refused; b) otherwise, if only one engine type is received, it is accepted. If the shop is not in the midst of overhauling a diesel engine and both engine types arrive, which engine type should receive priority? Compute the long-run average profit for both cases.


Solution: Let's define the states first.

0. Idle
1. First day of work on a diesel engine is in progress
2. Second day of work on a diesel engine is in progress
3. Work on a gasoline engine is in progress (the first and only day of gasoline work)

Now consider the transition probabilities of this Markov chain. Since the two arrival types are independent, the shop is idle on a given day only if neither type arrives, which happens with probability (2/3)(1/2) = 1/3. So when a job finishes at the end of a day, the shop is idle the next day with probability 1/3. Therefore p0,0 = 1/3, p2,0 = 1/3, p3,0 = 1/3. After finishing the first day of a diesel overhaul, the second day of work on the same engine must follow, so p1,2 = 1.

(State-transition diagram: states 0, 1, 2, 3 with the arcs found so far, p0,0 = p2,0 = p3,0 = 1/3 and p1,2 = 1.)

Now, suppose that whenever both arrival types occur, we choose diesel over gasoline. Let's find the rest of the probabilities based on this assumption. From state 2 to state 1 the probability is 1/3 (a diesel arrival, regardless of whether gasoline also arrives). The transition probability from state 3 to state 1 is the same, and so is the one from state 0 to state 1: in each case we only check the arrival probability of diesel. There is no way to move from state 0 to state 2. Let's write all of this into the transition matrix.
             0     1     2     3
    P =  0 [ 1/3   1/3   0     ?   ]
         1 [ 0     0     1     0   ]
         2 [ 1/3   1/3   0     ?   ]
         3 [ 1/3   1/3   0     ?   ]


Since each row must sum to 1, the remaining probabilities are p0,3 = 1/3, p2,3 = 1/3, p3,3 = 1/3.

(State-transition diagram for the diesel-priority policy: from each of states 0, 2, 3 the chain moves to states 0, 1, 3 with probability 1/3 each, and p1,2 = 1.)

             0     1     2     3
    P =  0 [ 1/3   1/3   0     1/3 ]
         1 [ 0     0     1     0   ]
         2 [ 1/3   1/3   0     1/3 ]
         3 [ 1/3   1/3   0     1/3 ]
Now, if we find the long-run probabilities of being in each state, we can calculate the expected profit of the system when diesel is preferred over gasoline. Please do this part yourself: π0 = 1/4, π1 = 1/4, π2 = 1/4, π3 = 1/4. The expected daily profit is (1/4)(0 + 20 + 20 + 23) = 63/4 = $15.75.
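A minimal numerical check of the diesel-priority policy (a sketch assuming numpy); it computes the stationary distribution and the long-run average daily profit:

```python
import numpy as np

# Diesel-priority transition matrix over states 0 (idle), 1 and 2 (diesel days), 3 (gasoline).
P = np.array([[1/3, 1/3, 0, 1/3],
              [0,   0,   1, 0  ],
              [1/3, 1/3, 0, 1/3],
              [1/3, 1/3, 0, 1/3]])

# Stationary distribution: replace one balance equation with the normalization.
A = np.vstack([(P.T - np.eye(4))[:-1], np.ones(4)])
pi = np.linalg.solve(A, np.r_[np.zeros(3), 1.0])
profit = pi @ np.array([0, 20, 20, 23])   # daily profit earned in each state
print(pi, profit)  # [0.25 0.25 0.25 0.25] 15.75  (= 63/4)
```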

Now, let's assume that we select gasoline over the diesel engine and find the expected value of this strategy. From state 0 to state 3 the probability is 1/2 (a gasoline arrival, regardless of whether diesel also arrives). The transition probability from state 2 to state 3 is the same, and so is the one from state 3 to state 3: in each case we only check the arrival probability of gasoline. From state 2 to state 1 the probability is 1/6 (no gasoline arrival and a diesel arrival, i.e. (1/2)(1/3)). The same probability applies to the transitions from state 3 to state 1 and from state 0 to state 1. There is no way to move from state 0 to state 2. Let's write all of this into the transition matrix.


             0     1     2     3
    P =  0 [ 1/3   1/6   0     1/2 ]
         1 [ 0     0     1     0   ]
         2 [ 1/3   1/6   0     1/2 ]
         3 [ 1/3   1/6   0     1/2 ]

(State-transition diagram for the gasoline-priority policy: from each of states 0, 2, 3 the chain moves to state 0 with probability 1/3, to state 1 with probability 1/6, to state 3 with probability 1/2, and p1,2 = 1.)

If we find the long-run probabilities of being in each state, we can calculate the expected profit of the system when gasoline is preferred over diesel. Please do this part yourself: π0 = 2/7, π1 = 1/7, π2 = 1/7, π3 = 3/7. The expected daily profit is (2/7)(0) + (1/7)(20) + (1/7)(20) + (3/7)(23) = 109/7 ≈ $15.57. Since 63/4 > 109/7, preferring diesel over gasoline is better.
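The same check for the gasoline-priority policy, again a sketch assuming numpy:

```python
import numpy as np

# Gasoline-priority transition matrix over the same four states.
P = np.array([[1/3, 1/6, 0, 1/2],
              [0,   0,   1, 0  ],
              [1/3, 1/6, 0, 1/2],
              [1/3, 1/6, 0, 1/2]])

A = np.vstack([(P.T - np.eye(4))[:-1], np.ones(4)])
pi = np.linalg.solve(A, np.r_[np.zeros(3), 1.0])
print(pi)                              # [0.2857 0.1429 0.1429 0.4286] = [2/7, 1/7, 1/7, 3/7]
print(pi @ np.array([0, 20, 20, 23]))  # ~15.571 (= 109/7), less than 63/4 = 15.75
```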
