Department of Mathematics, Indian Institute of Technology Guwahati
1. The joint probability mass function of the lifetimes $X$ and $Y$ of two connected components in a
machine can be modeled by
\[
f(x,y) = \begin{cases} \dfrac{e^{-2}}{x!\,(y-x)!}, & x = 0, 1, 2, \ldots,\ y = x, x+1, x+2, \ldots \\[4pt] 0, & \text{otherwise.} \end{cases}
\]
What is the joint probability mass function of $X$ and $Y - X$? Are $X$ and $Y - X$ independent? What
is the correlation between $X$ and $Y$? [3 marks]
Solution: Let $V = Y - X$. The joint probability mass function of $X$ and $V$ is
\[
f_{X,V}(x,v) = P\{X = x, V = v\} = P\{X = x, Y = v + x\} = \begin{cases} \dfrac{e^{-2}}{x!\,v!}, & x, v = 0, 1, 2, \ldots \\[4pt] 0, & \text{otherwise.} \end{cases} \qquad \text{1 mark}
\]
Now, the marginal distribution of $X$ is $f_X(x) = \sum_{v=0}^{\infty} \frac{e^{-2}}{x!\,v!} = \frac{e^{-2}}{x!}\,e = \frac{e^{-1}}{x!}$ for $x = 0, 1, 2, \ldots$ (zero
otherwise), and the marginal distribution of $V$ is $f_V(v) = \sum_{x=0}^{\infty} \frac{e^{-2}}{x!\,v!} = \frac{e^{-1}}{v!}$ for $v = 0, 1, 2, \ldots$ (zero
otherwise). Since $f_{X,V}(x,v) = f_X(x)\,f_V(v)$ for all $x, v$, the random variables $X$ and $V = Y - X$ are independent.
1 mark
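[Optional numerical check, not part of the marking scheme: a minimal Python sketch, assuming numpy and scipy are available, confirms that the marginal sums above reproduce the Poisson$(1)$ PMF and that the joint PMF factorizes.]
\begin{verbatim}
import numpy as np
from scipy.stats import poisson
from math import factorial, exp

# Joint PMF of X and V = Y - X derived above: e^{-2} / (x! v!)
def f_xv(x, v):
    return exp(-2) / (factorial(x) * factorial(v))

# Marginal of X obtained by summing over v (series truncated at 50 terms)
for x in range(5):
    marginal = sum(f_xv(x, v) for v in range(50))
    print(x, marginal, poisson.pmf(x, 1))   # the two values should agree

# Factorization check at one point: f_{X,V}(x,v) == f_X(x) * f_V(v)
print(np.isclose(f_xv(2, 3), poisson.pmf(2, 1) * poisson.pmf(3, 1)))
\end{verbatim}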
To compute $E(XY)$, rather than using $E(XY) = \sum_{x=0}^{\infty}\sum_{y=x}^{\infty} xy\,\frac{e^{-2}}{x!\,(y-x)!}$, it can be computed from
$E(XY) = E[X(Y - X + X)] = E(X)\,E(Y - X) + E(X^2)$, using the independence of $X$ and $Y - X$.
Since both $X$ and $Y - X$ are Poisson$(1)$ random variables and hence have mean 1 and variance 1, it
follows immediately that $E(XY) = 1 + 2 = 3$. Also, it implies that $E(Y) = E(Y - X) + E(X) = 2$
and $\mathrm{Var}(Y) = \mathrm{Var}(Y - X) + \mathrm{Var}(X) = 2$. Hence $\mathrm{Cov}(X,Y) = E(XY) - E(X)E(Y) = 1$, which gives $\rho_{X,Y} = \frac{1}{\sqrt{2}}$. 1 mark
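[Optional simulation check: the moment calculations can also be verified by Monte Carlo. The sketch below, assuming numpy is available, samples $X$ and $V = Y - X$ independently from Poisson$(1)$, sets $Y = X + V$, and estimates $E(XY)$ and the correlation.]
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.poisson(1, size=n)        # X ~ Poisson(1)
v = rng.poisson(1, size=n)        # V = Y - X ~ Poisson(1), independent of X
y = x + v                         # so (X, Y) has the joint PMF of the problem

print(np.mean(x * y))             # should be close to 3
print(np.corrcoef(x, y)[0, 1])    # should be close to 1/sqrt(2) ~ 0.7071
\end{verbatim}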
2. The continuous random variables $X$ and $Y$ have the joint probability density function
\[
f(x,y) = \begin{cases} \dfrac{x}{\pi}\, e^{-\frac{1}{2}x(1+y^2)}, & x, y > 0 \\[4pt] 0, & \text{otherwise.} \end{cases}
\]
What is the marginal probability density function of $X$? Show that the random variables $Y\sqrt{X}$ and
$\sqrt{X}$ are independent. [Fact: $\Gamma(1/2) = \sqrt{\pi}$] [3 marks]
Solution: The marginal PDF of $X$, for $x > 0$, can be obtained as $f_X(x) = \frac{x}{\pi}\, e^{-\frac{1}{2}x} \int_0^{\infty} e^{-\frac{1}{2}xy^2}\, dy$.
Putting $\frac{xy^2}{2} = z$, we get $f_X(x) = \sqrt{\frac{x}{2}}\,\frac{e^{-\frac{1}{2}x}}{\pi} \int_0^{\infty} \frac{e^{-z}}{\sqrt{z}}\, dz = \frac{1}{\sqrt{2\pi}}\, x^{\frac{1}{2}}\, e^{-\frac{1}{2}x}$ (since $\Gamma(1/2) = \sqrt{\pi}$). The
PDF equals zero for $x \leq 0$. 1 mark
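[Optional numerical check: the sketch below, assuming scipy is available, integrates the joint density over $y$ for a few values of $x$ and compares with $x^{1/2} e^{-x/2}/\sqrt{2\pi}$.]
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def joint(x, y):
    # joint density f(x, y) = (x/pi) * exp(-x(1 + y^2)/2) for x, y > 0
    return (x / np.pi) * np.exp(-0.5 * x * (1 + y**2))

for x in (0.5, 1.0, 2.0, 5.0):
    numeric, _ = quad(lambda y: joint(x, y), 0, np.inf)
    closed_form = np.sqrt(x) * np.exp(-0.5 * x) / np.sqrt(2 * np.pi)
    print(x, numeric, closed_form)   # the two columns should match
\end{verbatim}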
Now, let $U = Y\sqrt{X}$ and $V = X$. Hence, $X = V$, $Y = \frac{U}{\sqrt{V}}$. Since $\left|\frac{\partial(x,y)}{\partial(u,v)}\right| = \frac{1}{\sqrt{v}}$, the joint PDF of $U$
and $V$ is $f_{U,V}(u,v) = f_{X,Y}\!\left(v, \frac{u}{\sqrt{v}}\right) \frac{1}{\sqrt{v}} = \frac{1}{\pi}\,\sqrt{v}\, e^{-\frac{1}{2}v}\, e^{-\frac{1}{2}u^2}$ for $u, v > 0$, and $f_{U,V}(u,v) = 0$ otherwise.
1 mark
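[Optional consistency check on the transformation: the derived joint density of $(U, V)$ should integrate to 1 over the positive quadrant. A sketch assuming scipy is available:]
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

# f_{U,V}(u, v) = (1/pi) * sqrt(v) * exp(-v/2) * exp(-u^2/2) for u, v > 0
f_uv = lambda u, v: np.sqrt(v) * np.exp(-0.5 * v) * np.exp(-0.5 * u**2) / np.pi

# dblquad integrates the inner variable (u) first, then the outer variable (v);
# both range over (0, infinity)
total, err = dblquad(lambda u, v: f_uv(u, v), 0, np.inf,
                     lambda v: 0, lambda v: np.inf)
print(total)   # should be close to 1
\end{verbatim}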
Now, the marginal PDF of $U$ is $f_U(u) = \frac{e^{-\frac{1}{2}u^2}}{\pi} \int_0^{\infty} \sqrt{v}\, e^{-\frac{1}{2}v}\, dv$, for $u > 0$ (and zero otherwise). Putting
$v = t^2$, we get $f_U(u) = \frac{e^{-\frac{1}{2}u^2}}{\pi}\, 2\sqrt{2\pi} \int_0^{\infty} \frac{t^2 e^{-\frac{1}{2}t^2}}{\sqrt{2\pi}}\, dt = \frac{e^{-\frac{1}{2}u^2}}{\pi}\, 2\sqrt{2\pi}\cdot\frac{1}{2} = \sqrt{\frac{2}{\pi}}\, e^{-\frac{1}{2}u^2}$ [using the standard normal density].
Similarly, we can obtain the marginal PDF of $V$ as $f_V(v) = \frac{1}{\sqrt{2\pi}}\,\sqrt{v}\, e^{-\frac{1}{2}v}$ [using the gamma density], for
$v > 0$ (and zero otherwise).
Since $f_{U,V}(u,v) = f_U(u)\, f_V(v)$ for all $u, v$, the random variables $U$ and $V$ are independent. [Note: $U$ is distributed as $|Z|$
with $Z$ a standard normal random variable, and $V$ is distributed as gamma with parameters $3/2$ and $1/2$.]
1 mark
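[Optional symbolic check: the marginals and the factorization can also be verified with a computer algebra system. A sketch assuming sympy is available:]
\begin{verbatim}
import sympy as sp

u, v = sp.symbols('u v', positive=True)
f_uv = sp.sqrt(v) * sp.exp(-v / 2) * sp.exp(-u**2 / 2) / sp.pi

f_u = sp.integrate(f_uv, (v, 0, sp.oo))   # should give sqrt(2/pi) * exp(-u^2/2)
f_v = sp.integrate(f_uv, (u, 0, sp.oo))   # should give sqrt(v) * exp(-v/2) / sqrt(2*pi)

print(sp.simplify(f_u))
print(sp.simplify(f_v))
print(sp.simplify(f_u * f_v - f_uv))      # should simplify to 0
\end{verbatim}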
Thus
\[
E(X) = \frac{d}{dt} M_X(t)\Big|_{t=0} = -\frac{1}{6} + \frac{3}{8} + \frac{15}{8} = \frac{25}{12} \qquad \text{1 mark}
\]
and
\[
E(X^2) = \frac{d^2}{dt^2} M_X(t)\Big|_{t=0} = \frac{1}{6} + \frac{9}{8} + \frac{75}{8} = \frac{32}{3}.
\]
Therefore
\[
V(X) = E(X^2) - E^2(X) = \frac{32}{3} - \left(\frac{25}{12}\right)^2 = \frac{911}{144}. \qquad \text{1 mark}
\]
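[Optional check: the arithmetic in these moment calculations can be verified exactly with Python's fractions module (independent of the particular MGF).]
\begin{verbatim}
from fractions import Fraction as F

e_x = F(-1, 6) + F(3, 8) + F(15, 8)       # E(X)   = 25/12
e_x2 = F(1, 6) + F(9, 8) + F(75, 8)       # E(X^2) = 32/3
var_x = e_x2 - e_x**2                     # V(X)   = 911/144

print(e_x, e_x2, var_x)                   # 25/12 32/3 911/144
\end{verbatim}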
4. Let $X_1, X_2, \ldots, X_{50}$ be a collection of i.i.d. random variables, each having exponential distribution
with parameter $\alpha > 0$. Find the distribution of $S = X_1 + X_2 + \cdots + X_{50}$. [3 marks]
Solution: The MGF of the exponential distribution with parameter $\alpha\,(>0)$ is given by
\[
M_X(t) = \alpha \int_0^{\infty} e^{tx}\, e^{-\alpha x}\, dx = \frac{\alpha}{\alpha - t}, \qquad t < \alpha. \qquad \text{1 mark}
\]
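[Optional numerical check of this MGF formula: a sketch assuming scipy is available; the values $\alpha = 2$ and $t = 0.7$ are arbitrary choices with $t < \alpha$.]
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

alpha, t = 2.0, 0.7   # arbitrary illustrative values with t < alpha
numeric, _ = quad(lambda x: alpha * np.exp(t * x) * np.exp(-alpha * x), 0, np.inf)
print(numeric, alpha / (alpha - t))   # the two values should agree
\end{verbatim}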
The MGF of the random variable $S$ is given by
\begin{align*}
M_S(t) = E[e^{tS}] &= E\!\left[e^{t\sum_{i=1}^{50} X_i}\right] \\
&= [M_{X_1}(t)]^{50}, \quad \text{since the } X_i\text{'s are i.i.d.} \qquad \text{1 mark} \\
&= \left(\frac{\alpha}{\alpha - t}\right)^{50}, \quad t < \alpha \\
&= \left(\frac{1}{1 - \frac{t}{\alpha}}\right)^{50}, \quad t < \alpha,
\end{align*}
which is the MGF of a gamma random variable with parameters $(50, \alpha)$. Therefore, by the uniqueness property of
the MGF, $S \sim \text{Gamma}(50, \alpha)$. 1 mark
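[Optional simulation check: a sketch assuming numpy and scipy are available, with $\alpha = 2$ as an arbitrary choice; note that scipy's gamma distribution is parameterized by shape and scale $= 1/\alpha$.]
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 2.0                                      # arbitrary rate parameter
# S = X_1 + ... + X_50, replicated 100_000 times
samples = rng.exponential(scale=1/alpha, size=(100_000, 50)).sum(axis=1)

# Compare against Gamma(shape=50, rate=alpha), i.e. scale = 1/alpha
print(samples.mean(), 50 / alpha)                # means should agree
print(samples.var(), 50 / alpha**2)              # variances should agree
print(stats.kstest(samples, 'gamma', args=(50, 0, 1/alpha)))  # small KS statistic
\end{verbatim}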
5. Let $\{X_n\}$ be a sequence of random variables with $E(X_n) = 8$ and $\mathrm{Var}(X_n) = 1/\sqrt{n}$, for $n \in \mathbb{N}$.
Prove or disprove: $\{X_n\}$ converges in probability to 8. [2 marks]
Solution: For $\{X_n\}$ to converge in probability to a constant $c$, we must have, for every $\epsilon > 0$,
$P\{|X_n - c| > \epsilon\} \to 0$ as $n \to \infty$. 1 mark
By Chebyshev's inequality, for any $\epsilon > 0$, we have
\[
P\{|X_n - E[X_n]| > \epsilon\} \leq \frac{\mathrm{Var}(X_n)}{\epsilon^2}.
\]
Applying this here, we get
\[
P\{|X_n - 8| > \epsilon\} \leq \frac{1}{\sqrt{n}\,\epsilon^2} \to 0 \quad \text{as } n \to \infty.
\]
Hence $\{X_n\}$ converges in probability to 8.
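[Optional illustration: the problem specifies only the mean and variance of $X_n$, not its distribution, so a sketch can only tabulate the Chebyshev bound $1/(\sqrt{n}\,\epsilon^2)$; the choice $\epsilon = 0.5$ below is arbitrary.]
\begin{verbatim}
import numpy as np

eps = 0.5                                   # arbitrary tolerance
for n in (10, 100, 10_000, 1_000_000):
    bound = 1 / (np.sqrt(n) * eps**2)       # Chebyshev bound on P{|X_n - 8| > eps}
    print(n, bound)                         # decreases to 0 as n grows
\end{verbatim}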
6. Let $\{Y_n, n \geq 0\}$ be a sequence of uncorrelated random variables with common mean 0 and common
variance $\sigma^2$. Define $X_n = aY_n + (1-a)Y_{n-1}$, for $n \in \mathbb{N}$ and $0 \leq a \leq 1$. Prove or disprove: $\{X_n\}$ is
a covariance stationary process. [2 marks]
Solution: Since the $Y_n$'s have mean 0 and variance $\sigma^2$ and are uncorrelated, we have $E[X_n] = 0$ for all $n$, and
\begin{align*}
\mathrm{Cov}(X_n, X_m) &= E[X_n X_m] \\
&= a^2 E[Y_n Y_m] + a(1-a)\left(E[Y_n Y_{m-1}] + E[Y_{n-1} Y_m]\right) + (1-a)^2 E[Y_{n-1} Y_{m-1}] \\
&= \begin{cases} a^2\sigma^2 + (1-a)^2\sigma^2, & |n - m| = 0 \\ a(1-a)\sigma^2, & |n - m| = 1 \\ 0, & |n - m| > 1, \end{cases}
\end{align*}
which is a function of $n - m$ only (and does not depend on $n$ or $m$ individually). Since the mean is constant and the covariance depends only on the lag, the process $\{X_n, n \in \mathbb{N}\}$
is a covariance stationary process. 1 mark
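[Optional simulation check: the sketch below, assuming numpy is available, takes the $Y_n$ to be i.i.d. normal with mean 0 and variance $\sigma^2$ (one particular uncorrelated choice) and an arbitrary $a = 0.3$, then estimates $\mathrm{Cov}(X_n, X_{n+k})$ and compares with the formulas above.]
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a, sigma2, n_steps = 0.3, 1.0, 1_000_000
y = rng.normal(0.0, np.sqrt(sigma2), size=n_steps + 1)  # mean 0, variance sigma^2
x = a * y[1:] + (1 - a) * y[:-1]                        # X_n = a Y_n + (1-a) Y_{n-1}

for k, expected in [(0, (a**2 + (1 - a)**2) * sigma2),
                    (1, a * (1 - a) * sigma2),
                    (2, 0.0)]:
    # sample estimate of Cov(X_n, X_{n+k}); the means are 0
    est = np.mean(x[:n_steps - k] * x[k:n_steps])
    print(k, est, expected)
\end{verbatim}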