
IEOR E4102 Stochastic Modeling for MSE Spring 2025

Dr. A. B. Dieker

Homework 3
due on Friday February 14, 2025, 11:59pm EST
Include all intermediate steps of the computations in your answers. If the answer is readily
available on the web (e.g., on Wikipedia), then credit is only given for the intermediate steps.

1. In this problem we assume that each match of your favorite sports team results in a win with
probability p, independently of the results of the other matches. (The assumption that p is
the same for all games is obviously questionable, for instance because we ignore the opponent’s
strength, home-field advantage, and much more.)
A run of k wins (‘winning streak’) corresponds to at least k successive wins in a row at any time
in the season, which consists of n games. Suppose S(n, k) is the probability of having a run of
k wins over n ≥ 1 games. Use conditioning to argue that for k ≤ n, we have that
$$S(n, k) = \sum_{j=1}^{k} S(n - j, k) \times p^{j-1} (1 - p) + p^k,$$

where S(n, k) = 0 if k > n. Clearly identify the random variable you condition on.
Solution. Suppose Y is the number of games until the team incurs its first loss. Then we have,
for k ≤ n,

$$S(n, k) = \sum_{j=1}^{\infty} P(\text{$k$ successive wins in first $n$ games} \mid Y = j)\, P(Y = j)$$
$$= \sum_{j=1}^{k} P(\text{$k$ successive wins in first $n$ games} \mid Y = j)\, P(Y = j) + P(Y > k).$$

We now observe that P(k successive wins in first n games | Y = j) = S(n − j, k) because the results of the games are i.i.d., so if the first loss occurs in game j, we need the run of length k to occur in the remaining n − j games. (It is possible that S(n − j, k) = 0 if k > n − j.) We therefore
conclude that
$$S(n, k) = \sum_{j=1}^{k} S(n - j, k)\, P(Y = j) + P(Y > k) = \sum_{j=1}^{k} S(n - j, k) \times p^{j-1}(1 - p) + p^k.$$
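As a sanity check (not part of the required solution), the recursion can be verified numerically against brute-force enumeration over all 2^n win/loss sequences; the values p = 0.6, n = 10, k = 3 below are arbitrary illustrative choices.

```python
from functools import lru_cache
from itertools import product

p = 0.6  # illustrative win probability, not from the problem

@lru_cache(maxsize=None)
def S(n, k):
    """Probability of a run of k wins in n games, via the recursion above."""
    if k > n:
        return 0.0
    # Condition on Y = j (first loss in game j) for j = 1..k; the term p**k
    # covers the event Y > k, in which case the first k games are all wins.
    return sum(S(n - j, k) * p**(j - 1) * (1 - p) for j in range(1, k + 1)) + p**k

def has_run(seq, k):
    """True if the 0/1 sequence seq contains k consecutive 1s."""
    streak = 0
    for w in seq:
        streak = streak + 1 if w else 0
        if streak >= k:
            return True
    return False

def S_brute(n, k):
    """Exact answer by summing over all 2**n win/loss sequences."""
    total = 0.0
    for seq in product([1, 0], repeat=n):
        if has_run(seq, k):
            prob = 1.0
            for w in seq:
                prob *= p if w else 1 - p
            total += prob
    return total

print(abs(S(10, 3) - S_brute(10, 3)) < 1e-12)  # True: the recursion matches
```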

2. The number of patients arriving to the Columbia emergency room tomorrow has a Poisson
distribution with parameter 100. Some of these patients will be trauma patients, meaning
that their condition is extremely severe and that they require immediate medical attention.
Whether or not each patient is a trauma patient is modeled via i.i.d. Bernoulli random variables
with success probability 0.01. (It’s macabre, but success means trauma here.) Calculate the
probability mass function of the number of trauma patients tomorrow by conditioning on an
appropriately chosen random variable.
Solution. Let X be the number of trauma patients and Y be the number of total patients.
It follows immediately that Y ≥ X. Whether or not the patients are trauma can be viewed in
terms of independent trials, so the conditional distribution of X = n given Y = m is binomial.
That is, we have for 0 ≤ n ≤ m that
$$P[X = n \mid Y = m] = \binom{m}{n}\, 0.01^n\, 0.99^{m-n},$$

and otherwise this probability is 0. (If m = 0, we have that P [X = 0|Y = 0] = 1.) Thus, the
probability mass function of X is given by
$$P[X = n] = \sum_{m \geq 0} P[X = n \mid Y = m]\, P(Y = m) = \sum_{m \geq n} \binom{m}{n}\, 0.01^n\, 0.99^{m-n} \times \frac{e^{-100}\, 100^m}{m!} = \sum_{k \geq 0} \binom{n+k}{n}\, 0.01^n\, 0.99^k \times \frac{e^{-100}\, 100^{n+k}}{(n+k)!},$$

where we have substituted k = m − n. Expanding and rearranging the summand, this equals
$$\sum_{k \geq 0} e^{-100}\, \frac{(n+k)!}{n!\,k!} \cdot \frac{0.01^n\, 0.99^k\, 100^{n+k}}{(n+k)!} = \frac{e^{-100}}{n!} \sum_{k \geq 0} \frac{99^k}{k!}.$$
By the Taylor expansion of the exponential function, we know that $\sum_{k \geq 0} \frac{99^k}{k!} = e^{99}$. Therefore, the sought-after pmf is
$$p_X(a) = \begin{cases} \dfrac{e^{-1}}{a!} & a = 0, 1, 2, \ldots \\ 0 & \text{otherwise.} \end{cases}$$
(That is, X is Poisson distributed with parameter 1.)
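One can confirm this numerically (a sketch, not part of the solution): truncate the conditioning sum at a large m and compare with the Poisson(1) pmf e⁻¹/n!. The Poisson(100) probabilities are updated iteratively to avoid evaluating huge factorials.

```python
from math import comb, exp, factorial

lam, q = 100.0, 0.01  # arrival rate and trauma probability from the problem

def pmf_by_conditioning(n, m_max=400):
    """P(X = n) via the sum over Y = m, truncated at m_max."""
    total = 0.0
    pois = exp(-lam)  # P(Y = 0); updated iteratively below
    for m in range(m_max + 1):
        if m >= n:
            total += comb(m, n) * q**n * (1 - q)**(m - n) * pois
        pois *= lam / (m + 1)  # P(Y = m + 1) from P(Y = m)
    return total

for n in range(4):
    print(n, pmf_by_conditioning(n), exp(-1) / factorial(n))
```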

3. Old exam problem. Suppose that X1, X2, . . . are i.i.d. geometric random variables with parameter p ∈ (0, 1) and that N is an independent geometric random variable with parameter q ∈ (0, 1).
In this problem, we study the random variable
$$Y = \sum_{i=1}^{N} X_i.$$

(a) Calculate E(Y |N ) and use it to show that E(Y ) = 1/(pq).


(b) Calculate Var(Y ) by conditioning on N .
(c) Calculate the probability mass function of Y. Hint: you may use that $P(X_1 + \cdots + X_n = k) = \binom{k-1}{n-1} p^n (1-p)^{k-n}$ for n ≥ 1 and k ≥ n.

Solution.

(a) We know that E(Xi) = 1/p, so we have that
$$E(Y \mid N) = E\Big(\sum_{i=1}^{N} X_i \,\Big|\, N\Big) = \sum_{i=1}^{N} E(X_i) = N\, E(X_1) = \frac{N}{p}.$$

You may also directly use that E(Y | N) = N E(X1) from class. We also know that E(N) = 1/q, so we obtain that
$$E(Y) = E(E(Y \mid N)) = E\Big(\frac{N}{p}\Big) = \frac{E(N)}{p} = \frac{1}{pq}.$$

(b) We use the law of total variance:

Var(Y ) = E(Var(Y |N )) + Var(E(Y |N )).

Using part (a) and Var(N) = (1 − q)/q², we find that
$$\mathrm{Var}(E(Y \mid N)) = \mathrm{Var}\Big(\frac{N}{p}\Big) = \frac{1}{p^2}\,\mathrm{Var}(N) = \frac{1}{p^2} \times \frac{1-q}{q^2} = \frac{1-q}{p^2 q^2}.$$
We know from class that Var(Y | N) = N × Var(X1), and so
$$E(\mathrm{Var}(Y \mid N)) = E(N \times \mathrm{Var}(X_1)) = E(N) \times \mathrm{Var}(X_1) = E(N) \times \frac{1-p}{p^2} = \frac{1-p}{p^2 q},$$
where we used Var(X1) = (1 − p)/p². To conclude, we have that
$$\mathrm{Var}(Y) = E(\mathrm{Var}(Y \mid N)) + \mathrm{Var}(E(Y \mid N)) = \frac{1-p}{p^2 q} + \frac{1-q}{p^2 q^2} = \frac{1 - pq}{p^2 q^2}.$$

(c) We condition on N to find that, for k ≥ 1,
$$P(Y = k) = \sum_{n=1}^{k} P(Y = k \mid N = n)\, P(N = n) = \sum_{n=1}^{k} P(X_1 + \cdots + X_n = k)\, P(N = n)$$
$$= \sum_{n=1}^{k} \binom{k-1}{n-1} p^n (1-p)^{k-n} \times (1-q)^{n-1} q = pq \sum_{n=1}^{k} \binom{k-1}{n-1} [p(1-q)]^{n-1} (1-p)^{(k-1)-(n-1)}$$
$$= pq \sum_{m=0}^{k-1} \binom{k-1}{m} [p(1-q)]^{m} (1-p)^{(k-1)-m} = pq\,[p(1-q) + (1-p)]^{k-1} = pq\,(1 - pq)^{k-1},$$
where we have substituted m = n − 1 and applied the binomial theorem.

In particular, Y is geometric with parameter pq.
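As a quick numerical check (with illustrative values p = 0.4 and q = 0.25, not from the text), one can evaluate the conditioning sum from part (c) directly and compare it with the closed form pq(1 − pq)^(k−1), and also verify the mean 1/(pq) from part (a).

```python
from math import comb

p, q = 0.4, 0.25  # illustrative parameters

def pmf_sum(k):
    """P(Y = k) via conditioning on N = n, using the negative-binomial hint."""
    return sum(
        comb(k - 1, n - 1) * p**n * (1 - p)**(k - n) * (1 - q)**(n - 1) * q
        for n in range(1, k + 1)
    )

for k in range(1, 6):
    print(k, pmf_sum(k), p * q * (1 - p * q)**(k - 1))

mean = sum(k * pmf_sum(k) for k in range(1, 400))
print(mean)  # close to 1/(p*q) = 10
```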

4. (By S. Ross, from a different edition of our text.) An urn initially contains 2 balls, one of which
is red and the other blue. At each stage a ball is randomly selected. If the selected ball is red,
then it is replaced with a red ball with probability 0.7 or with a blue ball with probability 0.3;
if the selected ball is blue, then it is equally likely to be replaced with either a red or blue ball.
(a) Let Xn equal 1 if the nth ball selected is red, and let it equal 0 otherwise. Is {Xn : n ≥ 1}
a Markov chain? If so, give its transition probability matrix.
(b) Let Yn denote the number of red balls in the urn immediately before the nth ball is selected.
Is {Yn : n ≥ 1} a Markov chain? If so, give its transition probability matrix.
(c) Find the probability that the second ball selected is red.
(d) Do this problem after the lecture on 2/10. Find the probability that the fourth ball selected
is red.
Solution.
(a) No, Xn as defined in the question is not a Markov chain. This is because the balls chosen before round n affect the probability of getting a red ball now. More concretely, one can verify that P(X2 = 0 | X1 = 0) ≠ P(X3 = 0 | X2 = 0), confirming that Xn is not a Markov chain. This can be seen by conditioning:
$$P(X_2 = 0 \mid X_1 = 0) = \frac{P(X_2 = 0, X_1 = 0)}{P(X_1 = 0)} = \frac{1/2 \times 0.5 \times 1/2}{1/2} = 0.25,$$
where {X2 = 0, X1 = 0} is realized if the first selected ball is blue (which happens with
probability 1/2) and the second as well, which means that the urn must contain one of each
color (probability 1/2) and then a blue ball must be selected (probability 1/2).
We now compute P (X2 = 0) in order to find P (X3 = 0|X2 = 0). Similarly to the argument
above, we find that
$$P(X_2 = 0 \mid X_1 = 1) = \frac{P(X_2 = 0, X_1 = 1)}{P(X_1 = 1)} = \frac{1/2 \times 0.3 \times 1 + 1/2 \times 0.7 \times 1/2}{1/2} = 0.65,$$
and therefore P (X2 = 0) = 1/2 × 0.25 + 1/2 × 0.65 = 0.45. Similarly, we can also conclude
that
$$P(X_3 = 0 \mid X_2 = 0) = \frac{P(X_3 = 0, X_2 = 0)}{P(X_2 = 0)} \approx 0.417.$$
Here, we again use conditioning to write P (X3 = 0, X2 = 0) as

P (X3 = 0, X2 = 0|X1 = 0)P (X1 = 0) + P (X3 = 0, X2 = 0|X1 = 1)P (X1 = 1),

and we note that P(X3 = 0, X2 = 0 | X1 = 0) = 0.5 × (1/2 × 0.5) × 1/2 = 0.0625 and P(X3 = 0, X2 = 0 | X1 = 1) = 0.3125. Plugging back in gives the value indicated, and we have therefore arrived at our desired contradiction.
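The numbers above can be confirmed by exhaustively enumerating pick-and-replacement histories. This sketch (not part of the required solution) tracks the number of red balls in the urn:

```python
# State: number of red balls among the two in the urn (starts at 1).
def paths(n_rounds):
    """All (probability, final red count, picks) histories over n_rounds."""
    frontier = [(1.0, 1, ())]
    for _ in range(n_rounds):
        nxt = []
        for prob, red, picks in frontier:
            if red > 0:  # a red ball is selected (probability red/2)
                pr = prob * red / 2
                nxt.append((pr * 0.7, red, picks + (1,)))      # replaced by red
                nxt.append((pr * 0.3, red - 1, picks + (1,)))  # replaced by blue
            if red < 2:  # a blue ball is selected (probability (2 - red)/2)
                pb = prob * (2 - red) / 2
                nxt.append((pb * 0.5, red + 1, picks + (0,)))  # replaced by red
                nxt.append((pb * 0.5, red, picks + (0,)))      # replaced by blue
        frontier = nxt
    return frontier

def prob(event, n_rounds):
    """Probability that the pick sequence satisfies `event`."""
    return sum(pr for pr, _, picks in paths(n_rounds) if event(picks))

p21 = prob(lambda s: s == (0, 0), 2) / prob(lambda s: s[0] == 0, 2)
p32 = prob(lambda s: s[1:] == (0, 0), 3) / prob(lambda s: s[1] == 0, 3)
print(round(p21, 6))  # 0.25
print(round(p32, 6))  # 0.416667, so the two conditional probabilities differ
```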
(b) Yes, {Yn } is a Markov chain. That is because P (Yn+1 = j|Yn = i, Yn−1 = in−1 , . . . , Y0 = i0 )
is independent of i0, . . . , in−1 and n. The transition matrix P, with rows and columns labeled by the number of red balls, is given by
$$P = \begin{array}{c|ccc} & 0 & 1 & 2 \\ \hline 0 & 0.5 & 0.5 & 0 \\ 1 & 0.15 & 0.6 & 0.25 \\ 2 & 0 & 0.3 & 0.7 \end{array}$$

We briefly derive the second row: P10 = P(RB → BB) = 1/2 × 0.3 (first we need to pick R and then replace it by B), P11 = P(RB → RB) = 1/2 × 0.7 + 1/2 × 0.5, and P12 = P(RB → RR) = 1/2 × 0.5.
(c) We condition on the color of the first selected ball. Below, we write Bk for the color of the
kth ball, which is either red (R) or blue (B). We find that

$$P(B_2 = R) = P(B_2 = R \mid B_1 = R)\, P(B_1 = R) + P(B_2 = R \mid B_1 = B)\, P(B_1 = B)$$
$$= 0.7 \times \frac{1}{2} \times \frac{1}{2} + \Big(0.5 \times 1 + 0.5 \times \frac{1}{2}\Big) \times \frac{1}{2} = 0.55.$$

Here, P(B2 = R | B1 = R) = 0.7 × 1/2, because we have to replace the R by R (so the urn still contains one R and one B) and then pick the R in the next step. Similarly, P(B2 = R | B1 = B) = 0.5 × 1 + 0.5 × 1/2, because if we replace with R then we end up in state RR and definitely pick R in the next step; otherwise we pick R with probability 1/2.
(d) Proceeding as in part (c) would be quite involved for this question, so we use the Chapman-Kolmogorov equations instead. Conditioning, we have
$$P(B_4 = R) = \frac{1}{2}\, P(Y_4 = 1 \mid Y_1 = 1) + P(Y_4 = 2 \mid Y_1 = 1),$$
where we have used that Y1 = 1 as per the question. Moreover, if Y4 = 1 we have RB in the urn and hence we pick R with probability 1/2, and if Y4 = 2 we have RR in the urn and we pick R with probability 1. We calculate the 3-step transition probability matrix for Yn as
$$Q = P^3 = \begin{array}{c|ccc} & 0 & 1 & 2 \\ \hline 0 & 0.2450 & 0.5300 & 0.2250 \\ 1 & 0.1590 & 0.4860 & 0.3550 \\ 2 & 0.0810 & 0.4260 & 0.4930 \end{array}$$
Hence, we have $P(B_4 = R) = \frac{1}{2} \times 0.4860 + 0.3550 = 0.5980$.
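The matrix Q and the final number can be reproduced with a few lines of plain Python (a check, not part of the required work):

```python
P = [[0.5, 0.5, 0.0],
     [0.15, 0.6, 0.25],
     [0.0, 0.3, 0.7]]

def matmul(A, B):
    """Multiply two 3x3 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Q = matmul(matmul(P, P), P)         # 3-step transition probabilities
p_b4_red = 0.5 * Q[1][1] + Q[1][2]  # start in state 1 (one red, one blue)
print([round(x, 4) for x in Q[1]])  # [0.159, 0.486, 0.355]
print(round(p_b4_red, 4))           # 0.598
```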

5. Do this problem after the lecture on 2/10. Consider “model 2” of the weather example from class.
Suppose that if it has rained yesterday and today, then it will rain tomorrow with probability
0.7. If it did not rain yesterday or today (i.e., no rain yesterday and no rain today), then it
will rain tomorrow with probability 0.2. In any other case the weather tomorrow will be the
same as it is today with probability 0.6. Determine the transition probability matrix P for this
Markov chain. (You can choose any order of the states to solve this problem, so mark the rows
and columns of your matrix with the state it represents.)
Solution. Suppose the first letter in a state represents the weather yesterday and the second
letter the weather today. The letters “R” and “N” stand for rain and no rain, respectively. The
transition probability matrix is given by

$$P = \begin{array}{c|cccc} & RR & NR & RN & NN \\ \hline RR & 0.7 & 0 & 0.3 & 0 \\ NR & 0.6 & 0 & 0.4 & 0 \\ RN & 0 & 0.4 & 0 & 0.6 \\ NN & 0 & 0.2 & 0 & 0.8 \end{array}$$
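The matrix can be double-checked by rebuilding it from the verbal rules (a sketch; the state encoding matches the solution above):

```python
states = ["RR", "NR", "RN", "NN"]  # (yesterday, today); R = rain, N = no rain

def rain_prob(state):
    """Probability of rain tomorrow, given the weather (yesterday, today)."""
    if state == "RR":
        return 0.7
    if state == "NN":
        return 0.2
    # Otherwise the weather tomorrow equals today's weather w.p. 0.6.
    return 0.6 if state[1] == "R" else 0.4

P = {s: {t: 0.0 for t in states} for s in states}
for s in states:
    r = rain_prob(s)
    P[s][s[1] + "R"] = r      # tomorrow: rain; new state is (today, R)
    P[s][s[1] + "N"] = 1 - r  # tomorrow: no rain; new state is (today, N)

for s in states:
    print(s, [P[s][t] for t in states])
```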

5
6. (Adapted from S. Ross, from a different edition of our text.) There are k players, and the skill
of player i is a number vi > 0, i = 1, . . . , k. In every period, two of the players play a game,
while the other k − 2 wait in an ordered line. The loser of a game joins the end of the line, and
the winner then plays a new game against the player who is first in line. Whenever i and j play,
i wins with probability vi /(vi + vj ).

(a) Define a Markov chain that is useful in analyzing this model.


(b) How many states does the Markov chain have?
(c) Give the transition probabilities of the chain.

Solution.

(a) Let the state space E be the set of all permutations, and identify a permutation π with a
tuple (π1 , . . . , πk ), such as for instance (3, 4, 2, 1, 5). Suppose the first two numbers stand
for the labels of the current players, and the next k − 2 numbers give the labels of the
players in the (ordered) queue. (With this choice of the state space we use the convention
that the first player won the previous game. It is also possible to answer this problem
without that convention, for instance by requiring that the first two elements are ordered,
and this cuts the state space in half.) If the current state is π = (π1 , . . . , πk ), the next state
is either $(\pi_1, \pi_3, \ldots, \pi_k, \pi_2)$ if the first player wins, which happens with probability $v_{\pi_1}/(v_{\pi_1} + v_{\pi_2})$, or $(\pi_2, \pi_3, \ldots, \pi_k, \pi_1)$ if the first player loses, which happens with probability $v_{\pi_2}/(v_{\pi_1} + v_{\pi_2})$. In particular, since the next state only depends on the current state, we have a Markov chain.
(b) The total number of states is k! (or k!/2 if the alternative state space is chosen). Borrowing a notion from the current week, it is interesting to note (and perhaps not obvious at first glance) that this Markov chain is irreducible.
(c) The transition matrix is given by, for all permutations π and π′,
$$P_{\pi, \pi'} = \begin{cases} \dfrac{v_{\pi_1}}{v_{\pi_1} + v_{\pi_2}}, & \text{if } \pi' = (\pi_1, \pi_3, \ldots, \pi_k, \pi_2) \\[1ex] \dfrac{v_{\pi_2}}{v_{\pi_1} + v_{\pi_2}}, & \text{if } \pi' = (\pi_2, \pi_3, \ldots, \pi_k, \pi_1) \\[1ex] 0 & \text{otherwise.} \end{cases}$$

For the alternative state space, we need to replace (π1 , π3 , . . . , πk , π2 ) and (π2 , π3 , . . . , πk , π1 )
in this expression by (min(π1 , π3 ), max(π1 , π3 ), . . . , πk , π2 ) and (min(π2 , π3 ), max(π2 , π3 ), . . . , πk , π1 ),
respectively.
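A minimal simulation sketch of one step of this chain (the skill values and the starting permutation are illustrative, not from the text):

```python
import random

v = {1: 1.0, 2: 2.0, 3: 0.5, 4: 3.0, 5: 1.5}  # illustrative skills v_i > 0

def step(state, rng):
    """One game: the first two entries play; the loser joins the end of the line."""
    i, j, *line = state
    if rng.random() < v[i] / (v[i] + v[j]):  # player i wins w.p. v_i/(v_i + v_j)
        return (i, *line, j)
    return (j, *line, i)

rng = random.Random(0)
state = (3, 4, 2, 1, 5)  # players 3 and 4 play; 2, 1, 5 wait in line
for _ in range(3):
    state = step(state, rng)
    print(state)
```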
