
Problem & Solutions on Probability & Statistics

Problem Set-1

[1] A coin is tossed until the same result appears twice in succession for the first time. To an outcome requiring n tosses assign the probability 2^(-n). Describe the sample space. Evaluate the probability of the following events:

(a) A = the experiment ends before the 6th toss.

(b) B = an even number of tosses is required.

(c) A ∩ B, A^c ∩ B.

[2] Three tickets are drawn randomly without replacement from a set of tickets numbered 1 to 100. Show that the probability that the numbers on the selected tickets are in (i) arithmetic progression is 1/66 and (ii) geometric progression is 105/C(100, 3).

[3] Three players A, B and C play a series of games, none of which can be drawn, and their probabilities of winning any game are equal. The winner of each game scores 1 point and the series is won by the player who first scores 4 points. Out of the first three games A won 2 games and B won 1 game. Find the probability that C will win the series.

[4] A point P is randomly placed in a square with side of 1 cm. Find the probability that the distance from
P to the nearest side does not exceed x cm.

[5] Let there be n people in a room and let p denote the probability that no two of them share a birthday. Find an approximate value of p for n = 10.

[6] Suppose a lift has 3 occupants A, B and C and there are three possible floors (1, 2 and 3) on which
they can get out. Assuming that each person acts independently of the others and that each person has an
equally likely chance of getting off at each floor, calculate the probability that exactly one person will get
out on each floor.

[7] If n men, among whom are A and B, stand in a row, what is the probability that there will be exactly r
men between A and B ?

[8] In a town of n + 1 inhabitants, a person tells a rumor to a second person, who in turn tells it to a third person, and so on. At each step the recipient of the rumor is chosen at random from the n people available. Find the probability that the rumor will be told r times without

(a) returning to the originator,

(b) being repeated to any person.

Do the same problem when at each step the rumor is told to a gathering of N randomly chosen people.

[9] 2 points are taken at random and independently of each other on a line segment of length m. Find the
probability that the distance between 2 points is less than m/3.
[10] n points are taken at random and independently of one another inside a sphere of radius R. What is
the probability that the distance from the centre of the sphere to the nearest point is not less than r ?

[11] A car is parked among N cars in a row, not at either end. On his return, the owner finds that exactly r
of the N places are still occupied. What is the probability that both neighboring places are empty?

[12] 3 points X, Y, Z are taken at random and independently of each other on a line segment AB. What is
the probability that Y will be between X and Z?

[13] The coefficients of the equation ax^2 + bx + c = 0 are determined by throwing an ordinary die once for each. Find the probability that the resulting equation will have real roots.

[14] Let Ω = {1, 2, 3, 4}. Check whether any of the following is a σ-field of subsets of Ω:

F1 = {∅, {1, 2}, {3, 4}}

F2 = {∅, Ω, {1}, {2, 3, 4}, {3, 4}}

F3 = {∅, Ω, {1}, {2}, {1, 2}, {3, 4}, {2, 3, 4}, {1, 3, 4}}

[15] Prove that if F1 and F2 are σ-fields of subsets of Ω, then F1 ∩ F2 is also a σ-field. Give a counterexample to show that the corresponding result for the union of σ-fields does not hold.

[16] Let F be a σ-field of subsets of the sample space Ω and let A ∈ F be fixed. Show that F_A = {C : C = A ∩ B, B ∈ F} is a σ-field of subsets of A.

Solution Set-1

1) Ω = {HH, TT, HTT, THH, HTHH, THTT, …}

P(HH) = P(TT) = 1/4

P(HTT) = P(THH) = 1/2^3, and so on.

a) P(A) = Σ_{i=2}^{5} P(exp. ends in i tosses)

= P(ends in 2 tosses) + P(ends in 3) + P(ends in 4) + P(ends in 5)

= 2 × 1/2^2 + 2 × 1/2^3 + 2 × 1/2^4 + 2 × 1/2^5 = 15/16

b) P(B) = 2 Σ_{i=1}^{∞} 1/2^{2i} = 2 × (1/4)/(1 − 1/4) = 2/3

c) P(A ∩ B) = P(exp. ends in 2 tosses) + P(exp. ends in 4 tosses) = 2/2^2 + 2/2^4 = 5/8

P(A^c ∩ B) = 2 Σ_{i=3}^{∞} 1/2^{2i} = 1/24
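These sums can be checked with exact rational arithmetic; a small sketch (not part of the original solution) using P(experiment ends in exactly n tosses) = 2·2^(-n) for n ≥ 2:

```python
from fractions import Fraction

# P(experiment ends in exactly n tosses) = 2 * 2^(-n), n >= 2
# (for each n there are exactly two such outcomes: one starting H, one starting T)
def p_end(n):
    return 2 * Fraction(1, 2 ** n)

# (a) A: experiment ends before the 6th toss, i.e. in 2..5 tosses
p_A = sum(p_end(n) for n in range(2, 6))

# (b) B: an even number of tosses; geometric series 2 * sum_{i>=1} 4^(-i)
p_B = 2 * (Fraction(1, 4) / (1 - Fraction(1, 4)))

# (c) A ∩ B and A^c ∩ B
p_AB = p_end(2) + p_end(4)
p_AcB = p_B - p_AB
```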
2) Total # of cases: C(100, 3)

(i) Numbers in AP: common difference d = 1, 2, …, 49

# of favorable cases: 98, 96, …, 2 respectively

⟹ total # of favorable cases = 98 + 96 + ⋯ + 2 = 2 × (49 × 50)/2 = 49 × 50

Reqd. prob. = (49 × 50)/C(100, 3) = 2450/161700 = 1/66

(ii) Numbers in GP: the common ratio can be an integer or a fraction.

Case 1: common ratio an integer

2 ⟶ (1, 2, 4), …, (25, 50, 100) ⟶ 25

3 ⟶ (1, 3, 9), …, (11, 33, 99) ⟶ 11

4 ⟶ (1, 4, 16), …, (6, 24, 96) ⟶ 6

5 ⟶ (1, 5, 25), …, (4, 20, 100) ⟶ 4

6 ⟶ (1, 6, 36), (2, 12, 72) ⟶ 2

7 ⟶ (1, 7, 49), (2, 14, 98) ⟶ 2

8 ⟶ (1, 8, 64) ⟶ 1

9 ⟶ (1, 9, 81) ⟶ 1

10 ⟶ (1, 10, 100) ⟶ 1

Total 53

Case 2: common ratio p/q fractional (in lowest terms, q ≥ 2); the first term must then be a multiple of q^2.

First term a multiple of 4:

3/2 ⟶ (4, 6, 9), (8, 12, 18), …, (44, 66, 99) ⟶ 11

5/2 ⟶ (4, 10, 25), (8, 20, 50), (12, 30, 75), (16, 40, 100) ⟶ 4

7/2 ⟶ (4, 14, 49), (8, 28, 98) ⟶ 2

9/2 ⟶ (4, 18, 81) ⟶ 1

First term a multiple of 9: ratios 4/3, 5/3, 7/3, 8/3, 10/3 ⟶ 6 + 4 + 2 + 1 + 1

First term a multiple of 16: ratios 5/4, 7/4, 9/4 ⟶ 4 + 2 + 1

First term a multiple of 25: ratios 6/5, 7/5, 8/5, 9/5 ⟶ 2 + 2 + 1 + 1

First term a multiple of 36: ratio 7/6 ⟶ 2

First term a multiple of 49: ratios 8/7, 9/7, 10/7 ⟶ 1 + 1 + 1

First term a multiple of 64: ratio 9/8 ⟶ 1

First term a multiple of 81: ratio 10/9 ⟶ 1

Total 52

⟹ reqd. prob. = (53 + 52)/C(100, 3) = 105/C(100, 3).
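The AP and GP counts can be verified by brute force over all unordered triples; a quick check (not part of the original solution) using the characterizations 2b = a + c and b^2 = ac for a < b < c:

```python
from itertools import combinations
from math import comb

# Exhaustive count over all C(100, 3) = 161700 unordered triples.
ap = gp = 0
for a, b, c in combinations(range(1, 101), 3):   # always a < b < c
    if 2 * b == a + c:        # arithmetic progression
        ap += 1
    if b * b == a * c:        # geometric progression (common ratio b/a, rational)
        gp += 1
```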

3) A has 2 points, B has 1, C has 0. C can win the series in exactly 4, 5, 6 or 7 additional games.

Case 1: 4 additional games

C wins all 4 ⟶ prob = (1/3)^4 ___________(i)

Case 2: 5 additional games

C wins 3 of the first 4 and the 5th game; the remaining game goes to A or B:

prob = C(4, 3) (1/3)^3 (2/3) × (1/3) _________(ii)

Case 3: 6 additional games

C wins 3 of the first 5 and the 6th game; of the remaining 2 games, either (i) B wins both, or (ii) A wins 1 and B wins 1 (A cannot win both, else A reaches 4 points first):

prob = C(5, 3) (1/3)^3 (1/3)^2 × (1/3) + C(5, 3) C(2, 1) (1/3)^3 (1/3)(1/3) × (1/3) ___________(iii)

Case 4: 7 additional games

Of the first 6 games A wins 1, B wins 2 and C wins 3; C wins the 7th:

prob = C(6, 3) C(3, 1) (1/3)^3 (1/3) (1/3)^2 × (1/3) __________(iv)

Reqd. prob. = (i) + (ii) + (iii) + (iv) = 83/729 (the four cases being mutually exclusive).
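The case analysis can be cross-checked by an exact recursion on the score state (a sketch, not part of the original solution): each additional game is won by A, B or C with probability 1/3, and the series ends when anyone reaches 4 points.

```python
from fractions import Fraction
from functools import lru_cache

# P(C wins the series) from score state (a, b, c), first to 4 points.
@lru_cache(maxsize=None)
def p_c_wins(a, b, c):
    if c == 4:
        return Fraction(1)
    if a == 4 or b == 4:
        return Fraction(0)
    third = Fraction(1, 3)
    return third * (p_c_wins(a + 1, b, c) +
                    p_c_wins(a, b + 1, c) +
                    p_c_wins(a, b, c + 1))

answer = p_c_wins(2, 1, 0)   # start from A = 2, B = 1, C = 0
```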

(4) The point P must lie in the border region of width x (within distance x of some side of the square).

If x ≥ 1/2, then prob = 1.

If 0 < x < 1/2, then area of the border region = 1 − (1 − 2x)^2

⟹ reqd. prob. = 1 − (1 − 2x)^2.


5) Reqd. prob. = [365 × 364 × ⋯ × (365 − (n − 1))]/365^n = (1 − 1/365)(1 − 2/365) ⋯ (1 − (n − 1)/365) = p, say.

log_e p = Σ_{k=1}^{n−1} log_e(1 − k/365) ≈ −Σ_{k=1}^{n−1} k/365 = −n(n − 1)/730

For n = 10: log_e p ≈ −(10 × 9)/730 ≈ −0.123

⟹ p ≈ e^{−0.123} ≈ 0.88
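A quick numerical comparison (not in the original) of the exact product with the log-linear approximation for n = 10:

```python
import math

# Exact product (1 - 1/365)(1 - 2/365)...(1 - 9/365) vs. exp(-n(n-1)/730).
n = 10
p_exact = 1.0
for k in range(1, n):
    p_exact *= 1 - k / 365

p_approx = math.exp(-n * (n - 1) / 730)
```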

6) Total # of possible outcomes: 3^3 = 27

Favorable # of outcomes: 3! = 6

Reqd. prob. = 6/27 = 2/9.

7) Total # of ways in which n men can stand in a row ⟶ n!

# of possible position pairs for A & B such that there are exactly r positions between them: (n − r − 1), namely {1, r + 2}, {2, r + 3}, …, {n − r − 1, n}; A and B can be permuted between the two positions in 2! ways.

Further, the r men to stand between A & B can be chosen in C(n − 2, r) ways and arranged in r! ways, and the remaining (n − r − 2) men can be arranged in (n − r − 2)! ways.

Favorable # of cases = 2! × (n − r − 1) × C(n − 2, r) × r! × (n − r − 2)!

⟹ reqd. prob. = [2! × (n − r − 1) × C(n − 2, r) × r! × (n − r − 2)!]/n! = 2(n − r − 1)/[n(n − 1)].
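The closed form 2(n − r − 1)/[n(n − 1)] can be sanity-checked by enumerating all row arrangements for small n (a verification sketch, not in the original; men are labelled 0..n−1 with A = 0, B = 1):

```python
import math
from fractions import Fraction
from itertools import permutations

# "Exactly r men between A and B" means their positions differ by r + 1.
def prob_r_between(n, r):
    hits = sum(1 for p in permutations(range(n))
               if abs(p.index(0) - p.index(1)) == r + 1)
    return Fraction(hits, math.factorial(n))

def formula(n, r):
    return Fraction(2 * (n - r - 1), n * (n - 1))
```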
8)

(a) Total # of ways: n^r

Originator ⟶ n ways; 2nd person ⟶ (n − 1) ways (must avoid the originator); …; r-th person ⟶ (n − 1) ways ⟹ favorable # of ways = n(n − 1)^{r−1}

Reqd. prob. = n(n − 1)^{r−1}/n^r = ((n − 1)/n)^{r−1}

(b) Originator ⟶ n options; 2nd person ⟶ (n − 1); 3rd person ⟶ (n − 2); …; r-th person ⟶ (n − r + 1)

Reqd. prob. = n(n − 1) ⋯ (n − r + 1)/n^r

Second part: total # of cases: C(n, N)^r.

Cases favorable to the 1st event: C(n, N) × C(n − 1, N)^{r−1}

Reqd. prob. = [C(n, N) C(n − 1, N)^{r−1}]/C(n, N)^r

Applying the same idea to the 2nd event:

Reqd. prob. = [C(n, N) C(n − N, N) C(n − 2N, N) ⋯ C(n − (r − 1)N, N)]/C(n, N)^r (with the obvious assumptions).

9) Let the distances of the 2 randomly chosen points from a fixed endpoint A of the segment be x and y.

Reqd. condition: |x − y| < m/3, i.e. −m/3 < x − y < m/3.

Note that x, y ∈ [0, m].

Inside the square bounded by the x-axis, the y-axis, x = m and y = m, the region favorable to |x − y| < m/3 is the band about the diagonal: the square minus the two corner triangles with legs 2m/3.

Area of the favorable region = m^2 − 2 × (1/2) × (2m/3) × (2m/3) = m^2 − (4/9)m^2

⟹ reqd. prob. = [m^2 − (4/9)m^2]/m^2 = 5/9.
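A Monte Carlo sanity check of the 5/9 answer (not part of the original solution; the seed and sample size are arbitrary choices):

```python
import random

# Drop two independent uniform points on [0, m]; estimate P(|x - y| < m/3).
random.seed(12345)
m = 1.0
trials = 200_000
hits = sum(1 for _ in range(trials)
           if abs(random.uniform(0, m) - random.uniform(0, m)) < m / 3)
estimate = hits / trials   # should be close to 5/9 ≈ 0.5556
```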

10) All n points must lie on or outside a sphere of radius r having the same centre as the original sphere of radius R.

For any one of the n points,

P(lies inside the smaller sphere) = (volume of sphere of radius r)/(volume of sphere of radius R) = r^3/R^3

⟹ P(lies on or outside the smaller sphere) = 1 − r^3/R^3

As the points are taken independently, reqd. prob. = (1 − r^3/R^3)^n.
𝑅3

11) The owner's car can be in any of the (N − 2) places (leaving out the 2 ends).

The remaining (r − 1) cars occupy (r − 1) of the (N − 1) remaining places

⟹ total # of cases = (N − 2) × C(N − 1, r − 1)

Favorable # of cases: owner's car in any of the (N − 2) places with both neighboring places empty ⟹ the remaining (r − 1) cars occupy (r − 1) of the other (N − 3) places

⟹ favorable # of cases = (N − 2) × C(N − 3, r − 1)

Reqd. prob. = C(N − 3, r − 1)/C(N − 1, r − 1).
𝑟−1
12) Let x, y, z be the distances of X, Y, Z from a fixed point P on the line segment. The six possible orderings are

x < y < z ; x < z < y ; z < x < y ;

y < x < z ; y < z < x ; z < y < x.

These 6 orderings are equally likely, since the points are drawn independently and uniformly.

Y lies between X & Z in 2 cases (x < y < z and z < y < x)

⟹ reqd. prob. = 2/6 = 1/3.

13) Each coefficient a, b, c can take any of the values 1, 2, …, 6.

Total # of (a, b, c) combinations: 6 × 6 × 6 = 216

Real roots ⟶ requirement b^2 ≥ 4ac

Listing of favorable cases:

ac = 1 : (1, 1) ⟶ 4ac = 4 ; b = 2, 3, 4, 5, 6 ⟶ 5 cases

ac = 2 : (1, 2), (2, 1) ⟶ 8 ; b = 3, 4, 5, 6 ⟶ 2 × 4 = 8

ac = 3 : (1, 3), (3, 1) ⟶ 12 ; b = 4, 5, 6 ⟶ 2 × 3 = 6

ac = 4 : (1, 4), (4, 1), (2, 2) ⟶ 16 ; b = 4, 5, 6 ⟶ 3 × 3 = 9

ac = 5 : (1, 5), (5, 1) ⟶ 20 ; b = 5, 6 ⟶ 2 × 2 = 4

ac = 6 : (1, 6), (6, 1), (2, 3), (3, 2) ⟶ 24 ; b = 5, 6 ⟶ 4 × 2 = 8

ac = 7 : not possible to obtain with two dice

ac = 8 : (2, 4), (4, 2) ⟶ 32 ; b = 6 ⟶ 2 × 1 = 2

ac = 9 : (3, 3) ⟶ 36 ; b = 6 ⟶ 1

ac values higher than 9 admit no b with b^2 ≥ 4ac.

⟹ # of favorable cases for b^2 ≥ 4ac = 5 + 8 + 6 + 9 + 4 + 8 + 2 + 1 = 43

reqd. prob. = 43/216.
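The table can be verified exhaustively, since there are only 216 equally likely throws (a check sketch, not in the original):

```python
from itertools import product

# Count (a, b, c) die throws for which ax^2 + bx + c = 0 has real roots.
favorable = sum(1 for a, b, c in product(range(1, 7), repeat=3)
                if b * b >= 4 * a * c)
```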

14) (i) ∅^c = Ω ∉ F1 ⟹ F1 is not a σ-field.

(ii) {1} ∪ {3, 4} = {1, 3, 4} ∉ F2: F2 is not closed under union ⟹ F2 is not a σ-field.

Alternatively, {3, 4}^c = {1, 2} ∉ F2, so F2 is not closed under complementation either.

(iii) F3 contains Ω and is closed under complementation and union ⟹ F3 is a σ-field.
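For finite families the σ-field conditions reduce to finitely many checks, so they can be verified mechanically; a small checker (an illustration, not part of the original solution):

```python
from itertools import combinations

# A finite family is a sigma-field iff it contains Omega (and hence the empty
# set) and is closed under complement and pairwise union.
def is_sigma_field(omega, family):
    fam = {frozenset(s) for s in family}
    big = frozenset(omega)
    if big not in fam or frozenset() not in fam:
        return False
    if any(big - s not in fam for s in fam):
        return False
    return all(s | t in fam for s, t in combinations(fam, 2))

omega = {1, 2, 3, 4}
F1 = [set(), {1, 2}, {3, 4}]
F2 = [set(), omega, {1}, {2, 3, 4}, {3, 4}]
F3 = [set(), omega, {1}, {2}, {1, 2}, {3, 4}, {2, 3, 4}, {1, 3, 4}]
```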

15) Ω ∈ F1, F2 ⟹ Ω ∈ F1 ∩ F2 ______(i)

Let A ∈ F1 ∩ F2; then A ∈ F1 & A ∈ F2

⟹ A^c ∈ F1 & A^c ∈ F2

⟹ A^c ∈ F1 ∩ F2 ______(ii)

If A1, A2, … ∈ F1 ∩ F2, then

A1, A2, … ∈ F1 ⟹ ∪ Ai ∈ F1

A1, A2, … ∈ F2 ⟹ ∪ Ai ∈ F2

⟹ ∪ Ai ∈ F1 ∩ F2 _________(iii)

(i), (ii) & (iii) ⟹ F1 ∩ F2 is a σ-field.

Counterexample: Ω = {1, 2, 3}

F1 = {∅, Ω, {1}, {2, 3}} ⟶ σ-field

F2 = {∅, Ω, {2}, {1, 3}} ⟶ σ-field

F1 ∪ F2 = {∅, Ω, {1}, {2}, {1, 3}, {2, 3}}

F1 ∪ F2 is not a σ-field ({1} ∪ {2} = {1, 2} ∉ F1 ∪ F2).

16) (i) A ∈ F & A ∩ A = A ⟹ A ∈ F_A.

(ii) Let C ∈ F_A; then C = A ∩ B for some B ∈ F.

Its complement w.r.t. A is A − C = A − (A ∩ B) = A ∩ B^c ∈ F_A (as B^c ∈ F).

(iii) Let C1, C2, … ∈ F_A; then Ci = A ∩ Bi, i = 1, 2, …, for Bi ∈ F, and

∪_i Ci = ∪_i (A ∩ Bi) = A ∩ (∪_i Bi) ∈ F_A (as ∪_i Bi ∈ F)

⟹ F_A is a σ-field of subsets of A.

Problem Set -2

[1] Let Ω = {0, 1, 2, …}. If for an event A,

(a) P(A) = Σ_{x∈A} e^{−λ} λ^x/x!, λ > 0;

(b) P(A) = Σ_{x∈A} p(1 − p)^x, 0 < p < 1;

(c) P(A) = 1 if the number of elements in A is finite, and 0 otherwise.
Determine in each of the above cases whether P is a probability measure.

In case where your answer is in the affirmative, determine P(E), P(F), P(G), P(E∩F), P(E ∪F), P(F∪G),
P(E∩G) and P(F∩G) , where E= {x ∈𝛺 : x > 2},

F= {x ∈ 𝛺 : 0< x < 3} and G= {x ∈ 𝛺: 3 < x < 6}.

[2] Let Ω = ℝ. In each of the following cases determine whether P(·) is a probability measure. For an interval I,

(a) P(I) = ∫_I (1/2) e^{−|x|} dx

(b) P(I) = 0 if I ⊂ (−∞, 0), and P(I) = ∫_I 2x e^{−x²} dx if I ⊂ (0, ∞)

(c) P(I) = 1 if the length of I is finite, and 0 otherwise.
[3] Show that the probability that exactly one of the events A or B occurs is P(A) + P(B) − 2P(A ∩ B).

[4] Prove that

P(A ∩ B) − P(A)P(B) = P(A)P(B^c) − P(A ∩ B^c)

= P(A^c)P(B) − P(A^c ∩ B)

= P((A ∪ B)^c) − P(A^c)P(B^c)
[5] For events A1, A2, …, An show that

P(∪_{i=1}^{n} Ai) = Σ_{i=1}^{n} P(Ai) − Σ_{1≤i1<i2≤n} P(A_{i1} ∩ A_{i2}) + Σ_{1≤i1<i2<i3≤n} P(A_{i1} ∩ A_{i2} ∩ A_{i3}) − ⋯ + (−1)^{n−1} P(∩_{i=1}^{n} Ai)

[6] Consider the sample space Ω = {0, 1, 2, …} and F the σ-field of all subsets of Ω. To the elementary event {j} assign the probability

P({j}) = C 2^j/j!, j = 0, 1, 2, …

(a) Determine the constant c.


(b) Define the events A, B and C by A= {j :2≤j ≤ 4}, B= {j : j ≥ 3} and C= {j : j is an odd integer}.

Evaluate P(A), P(B), P(C), P(A∩B), P(A∩C), P(C∩B), P(A∩B∩C) and verify the formula for P(A∪
B∪C).

[7] Each packet of a certain cereal contains a small plastic model of one of five different dinosaurs; a given packet is equally likely to contain any one of the five dinosaurs. Find the probability that someone buying six packets of the cereal will acquire models of all three of his favorite dinosaurs.

[8] Suppose n cards numbered 1, 2, …, n are laid out at random in a row. Let Ai denote the event that 'card i appears in the ith position of the row', which is termed a match. What is the probability of obtaining at least one match?

[9] A man addresses n envelopes and writes n cheques for payment of n bills.

(a) If the n bills are placed at random in the n envelopes, what is the probability that each bill is placed in a wrong envelope?

(b) If the n bills and n cheques are placed at random in the n envelopes, one bill and one cheque in each envelope, what is the probability that in no instance are the enclosures completely correct?

[10] For events A, B and C such that P(C) > 0, prove that

(a) P(A ∪ B | C) = P(A|C) + P(B|C) − P(AB|C)

(b) P(A^c|C) = 1 − P(A|C).

[11] Let A and B be two events such that 0 < P(A) < 1. Which of the following statements are true?

(a) P(A|B) + P(A^c|B) = 1; (b) P(A|B) + P(A|B^c) = 1; (c) P(A|B) + P(A^c|B^c) = 1

[12] Consider two events A and B such that P(A) = 1/4, P(B|A) = 1/2 and P(A|B) = 1/4.

Which of the following statements are true?

(a) A and B are mutually exclusive events;

(b) A ⊂ B;

(c) P(A^c|B^c) = 3/4;

(d) P(A|B) + P(A|B^c) = 1.

[13] Consider an urn in which 4 balls have been placed by the following scheme: a fair coin is tossed 4 times; each time the coin comes up heads a white ball is placed in the urn, otherwise a red ball is placed in the urn.

(a) What is the probability that the urn will contain exactly 3 white balls?

(b) What is the probability that the urn will contain exactly 3 white balls, given that the first ball placed in
the urn was white?

[14] A random experiment has three possible outcomes, A, B and C, with probabilities p_A, p_B and p_C. What is the probability that, in independent performances of the experiment, A will occur before B?

[15] A system composed of n separate components is said to be a parallel system if it functions when at least one of the components functions. For such a system, if component i, independently of the other components, functions with probability p_i, i = 1(1)n, what is the probability that the system functions?

[16] A student has to sit for an examination consisting of 3 questions selected randomly from a list of 100 questions. To pass, the student needs to answer all three questions correctly. What is the probability that the student will pass the examination if he correctly remembers the answers to 90 questions on the list?

[17] A person has three coins in his pocket, two fair coins (heads and tails are equally likely) but the third one biased with probability of heads 2/3. One coin selected at random drops on the floor, landing heads up. How likely is it that it is one of the fair coins?

[18] A slip of paper is given to A, who marks it with either a + or a − sign, with probability 1/3 of writing a + sign. A passes the slip to B, who may either leave it unchanged or change the sign before passing it to C. C in turn passes the slip to D after perhaps changing the sign; finally D passes it to a referee after perhaps changing the sign. It is further known that B, C and D each change the sign with probability 2/3. Find the probability that A originally wrote a +, given that the referee sees a + sign on the slip.

[19] Each of three boxes A, B and C, identical in appearance, has two drawers. Box A contains a gold coin in each drawer, box B contains a silver coin in each drawer, and box C contains a gold coin in one drawer and a silver coin in the other. A box is chosen at random, one of its drawers is then chosen at random and opened, and a gold coin is found. What is the probability that the other drawer of this box contains a silver coin?

[20] Each of four persons fires one shot at a target. Let 𝐶𝑘 denote the event that the target is hit by person
k, k= 1, 2, 3, 4. If the events 𝐶1 , 𝐶2 , 𝐶3 , 𝐶4 are independent and if P(𝐶1 )= P(𝐶2 )= 0.7,

P(𝐶3 )= 0.9 and P(𝐶4 )= 0.4, compute the probability that : (a) all of them hit the target; (b) no one hits the
target; (c) exactly one hits the target; (d) at least one hits the target.
[21] Let A1, A2, …, An be n independent events. Show that P(∩_{i=1}^{n} Ai^c) ≤ exp(−Σ_{i=1}^{n} P(Ai)).
[22] Give a counterexample to show that pairwise independence of a set of events A1, A2, …, An does not imply mutual independence.

[23] We say that B carries negative information about event A if P(A|B) < P(A). Let A, B and C be three
events such that B carries negative information about A and C carries negative information about B. Is it
true C carries negative information about A ? Prove your assertion.

[24] Suppose in a class there are 5 boys and 3 girl students. A list of 4 students, to be interviewed, is made
by choosing 4 students at random from this class. If the first student selected at random from the list, for
interview, is a girl, then find the conditional probability of selecting a boy next from among the remaining
3 students in the list.

[25] During the course of an experiment with a particular brand of disinfectant on flies, it is found that 80% are killed in the first application. Those which survive develop a resistance, so that the percentage of survivors killed in any later application is half of that in the preceding application. Find the probability that (a) a fly will survive 4 applications; (b) it will survive 4 applications, given that it has survived the 1st one.

[26] An art dealer receives a group of 5 old paintings and, on the basis of past experience, he thinks that the probabilities are 0.76, 0.09, 0.02, 0.01, 0.02 and 0.10 that 0, 1, 2, 3, 4 or all 5 of them, respectively, are forgeries. The art dealer sends one randomly chosen (out of 5) painting for authentication. If this painting turns out to be a forgery, what probability should he now assign to the possibility that the other 4 are also forgeries?

Solution Key

(1) (a) Ω = {0, 1, 2, …}, P(A) = Σ_{x∈A} e^{−λ} λ^x/x!, λ > 0.

P(A) ≥ 0 obviously ____(i)

P(Ω) = Σ_{x∈Ω} e^{−λ} λ^x/x! = e^{−λ} Σ_{x=0}^{∞} λ^x/x! = 1 ____(ii)

For pairwise disjoint A1, A2, … (Ai ∩ Aj = ∅ ∀ i ≠ j):

P(∪_i Ai) = Σ_{x∈∪Ai} P({x}) = Σ_i Σ_{x∈Ai} P({x}) = Σ_i P(Ai) _________(iii)

⟹ P is a probability measure.

(b) Similar to (a).

(c) P(A) ≥ 0, but P(Ω) = 0 (∵ Ω is infinite) ⟹ P is not a probability measure.

Note: alternatively, take Ai = {i}, i = 1, 2, …. Then ∪Ai = {1, 2, …} has infinitely many elements, so P(∪_{i=1}^{∞} Ai) = 0, while Σ_{i=1}^{∞} P(Ai) diverges (each Ai is finite, so P(Ai) = 1); countable additivity also fails.

2nd part:

(a) P(E) = Σ_{x=3}^{∞} e^{−λ} λ^x/x! = 1 − e^{−λ}(1 + λ + λ^2/2!)

P(F) = Σ_{x=1}^{2} e^{−λ} λ^x/x! = e^{−λ}(λ + λ^2/2!)

P(E ∪ F) = Σ_{x=1}^{∞} e^{−λ} λ^x/x! = 1 − e^{−λ} λ^0/0! = 1 − e^{−λ}

The others follow similarly.

(2) Ω = ℝ

(a) P(I) = ∫_I (1/2) e^{−|x|} dx ≥ 0 ∀ I

P(Ω) = (1/2) ∫_{−∞}^{∞} e^{−|x|} dx = (1/2)[∫_{−∞}^{0} e^x dx + ∫_{0}^{∞} e^{−x} dx] = 1

For I1 ∩ I2 = ∅: P(I1 ∪ I2) = ∫_{I1} + ∫_{I2} = P(I1) + P(I2), extended to countable unions ⟹ P is a probability measure.

(b) Similar to (a).

(c) P(Ω) = P(ℝ) = 0 ≠ 1 ⟹ P is not a probability measure.

(3) P(exactly one of A or B)

= P((A ∩ B^c) ∪ (A^c ∩ B))

= P(A) + P(B) − 2P(A ∩ B) ___ on simplification.

(4) P(AB) − P(A)P(B) = P(A)P(B^c) − P(AB^c) _____ 1st equation

[Using P(A) = P(AB) + P(AB^c).]

The 2nd & 3rd equations can be proved in a similar way.

(5) For n = 2:

P(A1 ∪ A2) = P(A1 ∪ (A1^c A2))

= P(A1) + P(A1^c A2)

= P(A1) + [P(A2) − P(A1 A2)] ___ true for n = 2.

Proof by induction: assume the result is true for n = m. Then

P(∪_{k=1}^{m+1} Ak) = P((∪_{k=1}^{m} Ak) ∪ A_{m+1})

= P(∪_{k=1}^{m} Ak) + P(A_{m+1}) − P(∪_{k=1}^{m} (Ak ∩ A_{m+1}))

Expanding the first term by the induction hypothesis,

r.h.s. = Σ_{k=1}^{m} P(Ak) − Σ_{k1<k2} P(A_{k1} A_{k2}) + Σ_{k1<k2<k3} P(A_{k1} A_{k2} A_{k3}) − ⋯ + (−1)^{m−1} P(∩_{k=1}^{m} Ak)

+ P(A_{m+1}) − P(∪_{k=1}^{m} (Ak A_{m+1})) _______(1)

Applying the hypothesis again to the last term (noting A_{k1}A_{m+1} ∩ A_{k2}A_{m+1} = A_{k1}A_{k2}A_{m+1}),

P(∪_{k=1}^{m} (Ak A_{m+1})) = Σ_{k=1}^{m} P(Ak A_{m+1}) − Σ_{k1<k2} P(A_{k1} A_{k2} A_{m+1}) + Σ_{k1<k2<k3} P(A_{k1} A_{k2} A_{k3} A_{m+1}) − ⋯ + (−1)^{m−1} P(A1 ⋯ Am A_{m+1}) ____(2)

Using (2) in (1) and collecting terms of the same order gives

P(∪_{k=1}^{m+1} Ak) = Σ_{k=1}^{m+1} P(Ak) − Σ_{k1<k2} P(A_{k1} A_{k2}) + Σ_{k1<k2<k3} P(A_{k1} A_{k2} A_{k3}) − ⋯ + (−1)^{m} P(∩_{k=1}^{m+1} Ak),

i.e. the result holds for n = m + 1.
6) Ω = {0, 1, 2, …}, P({j}) = C 2^j/j!

Σ_j P({j}) = C e^2 = 1 ⟹ C = e^{−2}

(b) P(A) = Σ_{j=2}^{4} e^{−2} 2^j/j! ; P(B) = Σ_{j=3}^{∞} e^{−2} 2^j/j! ; P(C) = Σ_{j=0}^{∞} e^{−2} 2^{2j+1}/(2j + 1)!

P(B ∩ C) = P({3}) + P({5}) + ⋯ = ⋯

The other probabilities can be computed in a similar manner.

(7) Let the favorite models be the dinosaurs numbered 1, 2, 3 (say), and define the events

Ai = model # i not found in the 6 packets, i = 1, 2, 3.

Reqd. prob. = P(A1^c ∩ A2^c ∩ A3^c) = 1 − P((A1^c ∩ A2^c ∩ A3^c)^c) = 1 − P(A1 ∪ A2 ∪ A3)

= 1 − [P(A1) + P(A2) + P(A3) − P(A1 A2) − P(A1 A3) − P(A2 A3) + P(A1 A2 A3)] ______(1)

Note that P(Ai) = (4/5)^6 ∀ i; P(Ai Aj) = (3/5)^6 ∀ i ≠ j; P(A1 A2 A3) = (2/5)^6 _____(2)

Use (2) in (1) to get the desired probability.
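Since there are only 5^6 = 15625 equally likely packet sequences, the inclusion-exclusion value can be confirmed by direct enumeration (a check sketch, not in the original):

```python
from fractions import Fraction
from itertools import product

# Direct count: the buyer gets all of dinosaurs 1, 2, 3 iff none of A1, A2, A3 occurs.
favorable = sum(1 for seq in product(range(1, 6), repeat=6) if {1, 2, 3} <= set(seq))

# Inclusion-exclusion from the solution above.
incl_excl = 1 - (3 * Fraction(4, 5) ** 6 - 3 * Fraction(3, 5) ** 6 + Fraction(2, 5) ** 6)
```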

(8) Ai : match at position i

P(at least one match) = P(A1 ∪ A2 ∪ ⋯ ∪ An) = Σ_{i=1}^{n} P(Ai) − Σ_{i<j} P(Ai ∩ Aj) + ⋯ + (−1)^{n−1} P(∩_{i=1}^{n} Ai)

P(A_{i1} ∩ A_{i2} ∩ ⋯ ∩ A_{ir}) = (n − r)!/n! ; 1 ≤ i1 < i2 < ⋯ < ir ≤ n, r = 1(1)n

reqd. prob. = 1 − 1/2! + 1/3! − ⋯ + (−1)^{n−1} 1/n!
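The alternating series can be compared with a direct count of permutations with a fixed point for a small n (a verification sketch, not in the original; n = 5 is an arbitrary choice):

```python
import math
from fractions import Fraction
from itertools import permutations

n = 5
# Direct count of permutations of 0..n-1 with at least one fixed point.
hits = sum(1 for p in permutations(range(n)) if any(p[i] == i for i in range(n)))
brute = Fraction(hits, math.factorial(n))

# 1 - 1/2! + 1/3! - ... + (-1)^(n-1)/n!
series = sum(Fraction((-1) ** (r - 1), math.factorial(r)) for r in range(1, n + 1))
```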

(9)

(a) Ai : event that the ith bill goes into the ith envelope (i = 1(1)n)

reqd. prob. = P(∩_{i=1}^{n} Ai^c) = 1 − P(∪ Ai)

= 1 − [Σ P(Ai) − Σ_{i<j} P(Ai Aj) + ⋯ + (−1)^{n−1} P(A1 ⋯ An)]

= 1 − Q1 + Q2 − ⋯ + (−1)^n Qn, where Qi denotes the ith sum.

In Qi there are C(n, i) terms, each equal to (n − i)!/n! = 1/(n)_i, where (n)_i = n!/(n − i)!

⟹ P(∩ Ai^c) = 1 − C(n, 1)/(n)_1 + C(n, 2)/(n)_2 − ⋯ + (−1)^n C(n, n)/(n)_n

= 1 − 1/1! + 1/2! − ⋯ + (−1)^n/n! = Σ_{i=0}^{n} (−1)^i/i!

(b) Bi : ith envelope gets the ith bill & the ith cheque

reqd. prob. = P(∩ Bi^c) = 1 − P(∪ Bi) = 1 − [Σ P(Bi) − Σ_{i<j} P(Bi Bj) + ⋯ + (−1)^{n−1} P(B1 ⋯ Bn)]

= 1 − R1 + R2 − R3 + ⋯ + (−1)^n Rn, where Ri denotes the ith sum.

In Ri there are C(n, i) terms, each equal to [(n − i)!/n!] × [(n − i)!/n!] (bills and cheques are placed independently), so

Ri = C(n, i) × [(n − i)!/n!]^2 = [n!/(i!(n − i)!)] × [(n − i)!/n!] × [(n − i)!/n!] = 1/(i!(n)_i)

⟹ P(∩ Bi^c) = Σ_{i=0}^{n} (−1)^i/(i!(n)_i).

(10) (i) P(A ∪ B | C) = P((A ∪ B) ∩ C)/P(C) = P(AC ∪ BC)/P(C)

= P(A|C) + P(B|C) − P(AB|C)

(ii) P(A^c|C) = P(A^c C)/P(C) = [P(C) − P(AC)]/P(C) = 1 − P(A|C)

(11) (a) True: P(A|B) + P(A^c|B) = P(AB)/P(B) + P(A^c B)/P(B) = P(B)/P(B) = 1

(b) P(A|B) = P(AB)/P(B); P(A|B^c) = P(AB^c)/P(B^c) = [P(A) − P(AB)]/[1 − P(B)]

Take A ⊂ B with P(A) > 0 and P(B − A) > 0; then

P(A|B) + P(A|B^c) = P(AB)/P(B) + P(AB^c)/P(B^c) = P(A)/P(B) < 1. False.

(c) Take A ⊂ B, i.e. B^c ⊂ A^c; then

P(A|B) + P(A^c|B^c) = P(AB)/P(B) + P(A^c B^c)/P(B^c) = P(A)/P(B) + 1 > 1. False.
(12) (a) P(A|B) = 1/4 ⟹ P(AB) ≠ 0 (i.e. A ∩ B ≠ ∅) ⟹ A and B are not mutually exclusive. False.

(b) A ⊂ B would imply P(AB) = P(A).

Given P(B|A) = P(AB)/P(A) = 1/2 ⟹ P(AB) = P(A)/2 ≠ P(A). False.

(c) P(A^c|B^c) = P(A^c B^c)/P(B^c) = [1 − P(A) − P(B) + P(AB)]/[1 − P(B)] _____(*)

P(B|A) = 1/2 & P(A) = 1/4 ⟹ P(AB) = 1/8,

and P(A|B) = P(AB)/P(B) = (1/8)/P(B) = 1/4 ⟹ P(B) = 1/2

(*) ⟹ P(A^c|B^c) = [1 − 1/4 − 1/2 + 1/8]/(1/2) = (3/8)/(1/2) = 3/4. True.

(d) P(A|B^c) = [P(A) − P(AB)]/[1 − P(B)] = (1/8)/(1/2) = 1/4 ⟹ P(A|B) + P(A|B^c) = 1/4 + 1/4 = 1/2 ≠ 1. False.
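The recovered quantities can be reproduced with exact fractions (a check sketch, not in the original):

```python
from fractions import Fraction

# Given: P(A) = 1/4, P(B|A) = 1/2, P(A|B) = 1/4.
P_A = Fraction(1, 4)
P_AB = P_A * Fraction(1, 2)        # P(AB) = P(A) P(B|A)
P_B = P_AB / Fraction(1, 4)        # from P(A|B) = P(AB)/P(B)
P_AcBc = (1 - P_A - P_B + P_AB) / (1 - P_B)
```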

(13) (a) P(exactly 3 white balls out of 4) = C(4, 3)(1/2)^3 (1/2) = 1/4

(b) A : first ball placed is white; P(A) = 1/2

B : urn contains exactly 3 white balls

P(B|A) = P(AB)/P(A) = [(1/2) × C(3, 2)(1/2)^2 (1/2)]/(1/2) = 3/8.
2
(14) D : A occurs before B

P(D) = p_A + p_C p_A + p_C^2 p_A + ⋯ = p_A/(1 − p_C) = p_A/(p_A + p_B)

(15) Ai : component i functions

P(system functions) = 1 − P(∩ Ai^c) = 1 − ∏_{i=1}^{n} (1 − p_i)

(16) Ai : question i is among the 90 questions the student can answer correctly.

Reqd. prob. = P(A1 A2 A3) = P(A1) P(A2|A1) P(A3|A1 A2) = (90/100) × (89/99) × (88/98).
(17) Apply Bayes' theorem:

Reqd. prob. = [(2/3) × (1/2)]/[(2/3) × (1/2) + (1/3) × (2/3)] = (1/3)/(1/3 + 2/9) = 3/5.

(18) A+, B+, C+, D+ : events that A, B, C, D pass the slip with a + sign.

By Bayes' theorem, reqd. prob. = P(A+|D+) = P(A+) P(D+|A+)/P(D+).

Given A wrote +, the referee sees + iff an even number (0 or 2) of B, C, D change the sign:

P(D+|A+) = (1/3)^3 + C(3, 2)(2/3)^2 (1/3) = 1/27 + 12/27 = 13/27

Also P(D+) = P(D+|A+) P(A+) + P(D+|(A+)^c) P((A+)^c), where

P(D+|(A+)^c) = P(referee sees + | A wrote −) = P(an odd number (1 or 3) change the sign) = C(3, 1)(2/3)(1/3)^2 + (2/3)^3 = 6/27 + 8/27 = 14/27

P(D+) = (13/27)(1/3) + (14/27)(2/3) = 41/81

⟹ P(A+|D+) = [(13/27)(1/3)]/(41/81) = 13/41.
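Because there are only 2 × 2^3 sign histories, the Bayes computation can be verified by exhaustive enumeration with exact fractions (a check sketch, not in the original):

```python
from fractions import Fraction
from itertools import product

p_plus = Fraction(1, 3)   # A writes +
p_flip = Fraction(2, 3)   # each of B, C, D flips independently

p_ref_plus = Fraction(0)
p_plus_and_ref_plus = Fraction(0)
for a_plus in (True, False):
    for flips in product((True, False), repeat=3):
        p = p_plus if a_plus else 1 - p_plus
        for f in flips:
            p *= p_flip if f else 1 - p_flip
        ref_plus = a_plus ^ (sum(flips) % 2 == 1)   # odd # of flips reverses the sign
        if ref_plus:
            p_ref_plus += p
            if a_plus:
                p_plus_and_ref_plus += p

posterior = p_plus_and_ref_plus / p_ref_plus
```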

(19) Reqd. prob. = P(other drawer contains silver | a gold coin is found) = P(box C | gold coin found).

By Bayes' theorem,

= [(1/3) × (1/2)]/[(1/3) × 1 + (1/3) × 0 + (1/3) × (1/2)] = 1/3.

(20) (a) P(C1 C2 C3 C4) = ∏_{i=1}^{4} P(Ci) = 0.7 × 0.7 × 0.9 × 0.4 = 0.1764

(b) P(C1^c ∩ C2^c ∩ C3^c ∩ C4^c) = ∏_{i=1}^{4} P(Ci^c) = 0.3 × 0.3 × 0.1 × 0.6 = 0.0054

[C1, C2, C3, C4 independent ⟹ C1^c, C2^c, C3^c, C4^c are also independent.]

(c) P(exactly one hits) = P(C1 C2^c C3^c C4^c) + P(C1^c C2 C3^c C4^c) + P(C1^c C2^c C3 C4^c) + P(C1^c C2^c C3^c C4)

= 0.0126 + 0.0126 + 0.0486 + 0.0036 = 0.0774

(d) P(at least one hits) = 1 − P(no one hits) = 1 − P(C1^c C2^c C3^c C4^c) = 1 − 0.0054 = 0.9946.

(21) P(∩_{i=1}^{n} Ai^c) = ∏_{i=1}^{n} P(Ai^c) = ∏_{i=1}^{n} (1 − P(Ai)) ≤ ∏_{i=1}^{n} exp(−P(Ai)) [since 1 − x ≤ e^{−x} for 0 ≤ x ≤ 1]

i.e. P(∩_{i=1}^{n} Ai^c) ≤ exp(−Σ_{i=1}^{n} P(Ai)).

(22) Ω = {1, 2, 3, 4} with P({i}) = 1/4, i = 1, 2, 3, 4

A = {1, 4}, B = {2, 4}, C = {3, 4}

P(A) = P(B) = P(C) = 1/2

P(AB) = P(AC) = P(BC) = 1/4; P(ABC) = 1/4

⟹ P(AB) = P(A)P(B), P(AC) = P(A)P(C) & P(BC) = P(B)P(C),

i.e. A, B, C are pairwise independent,

but P(ABC) = 1/4 ≠ P(A)P(B)P(C) = 1/8

⟹ A, B, C are not mutually independent.
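The counterexample is finite, so both independence conditions can be checked directly (a sketch, not in the original):

```python
from fractions import Fraction

# Uniform measure on {1, 2, 3, 4}.
def P(event):
    return Fraction(len(event), 4)

A, B, C = {1, 4}, {2, 4}, {3, 4}
pairwise = (P(A & B) == P(A) * P(B) and
            P(A & C) == P(A) * P(C) and
            P(B & C) == P(B) * P(C))
mutual = P(A & B & C) == P(A) * P(B) * P(C)
```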

(23) Counterexample: in the setup of the previous problem take

A = {1, 2}, B = {3, 4}, C = {1}.

Then P(A|B) = 0 < P(A) and P(B|C) = 0 < P(B), but P(A|C) = 1 > P(A)

⟹ C does not carry negative information about A; the assertion is false in general.

(24) Ai : i girls are in the list, i = 0, 1, 2, 3

B : 1st student selected is a girl

C : 2nd student selected is a boy; we want P(C|B).

P(B) = P(A1)P(B|A1) + P(A2)P(B|A2) + P(A3)P(B|A3)

= [C(3, 1)C(5, 3)/C(8, 4)] × (1/4) + [C(3, 2)C(5, 2)/C(8, 4)] × (2/4) + [C(3, 3)C(5, 1)/C(8, 4)] × (3/4)

= 105/(4 × C(8, 4)) = 105/280 = 3/8

P(C|B) = Σ_{i=1}^{3} P(C|Ai ∩ B) P(Ai|B)

P(A1|B) = P(A1)P(B|A1)/P(B) = 2/7; similarly P(A2|B) = 4/7 and P(A3|B) = 1/7

Given Ai and B, the remaining 3 students in the list include (4 − i) boys, so P(C|Ai B) = (4 − i)/3

⟹ P(C|B) = 1 × (2/7) + (2/3) × (4/7) + (1/3) × (1/7) = (6 + 8 + 1)/21 = 15/21 = 5/7.
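By symmetry, the ordered pair (1st, 2nd interviewee) is a uniformly random ordered pair of distinct students, so the answer can be checked by enumerating ordered pairs (a sketch under that symmetry argument, not in the original):

```python
from fractions import Fraction
from itertools import permutations

students = 'B' * 5 + 'G' * 3      # 5 boys, 3 girls
first_girl = girl_then_boy = 0
for i, j in permutations(range(8), 2):   # ordered (1st, 2nd) interviewees
    if students[i] == 'G':
        first_girl += 1
        if students[j] == 'B':
            girl_then_boy += 1
cond = Fraction(girl_then_boy, first_girl)   # P(2nd is a boy | 1st is a girl)
```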
(25) Ai : event that a fly survives the ith application, i = 1, 2, 3, 4

Note that A4 ⊂ A3 ⊂ A2 ⊂ A1 ⟹ A4 = A1 ∩ A2 ∩ A3 ∩ A4

(a) Reqd. prob. = P(a fly survives 4 applications)

= P(A1 A2 A3 A4) = P(A4)

= P(A1) P(A2|A1) P(A3|A1 A2) P(A4|A1 A2 A3)

= (1 − 0.8)(1 − 0.4)(1 − 0.2)(1 − 0.1) (the kill rate halves at each application)

= 0.2 × 0.6 × 0.8 × 0.9

(b) P(A4|A1) = P(A4 ∩ A1)/P(A1) = P(A4)/P(A1)

= 0.6 × 0.8 × 0.9


(26) Bi : event that i of the paintings are forgeries, i = 0(1)5

P(B0) = 0.76, P(B1) = 0.09, P(B2) = 0.02, P(B3) = 0.01, P(B4) = 0.02 & P(B5) = 0.10 (given conditions)

A : event that the painting sent for authentication turns out to be a forgery; P(A|Bi) = i/5.

By Bayes' theorem, reqd. prob. = P(B5|A) = P(B5)P(A|B5)/Σ_{i=0}^{5} P(Bi)P(A|Bi)

P(A) = Σ_{i=0}^{5} P(Bi)P(A|Bi) = 0.76 × 0 + 0.09 × (1/5) + 0.02 × (2/5) + 0.01 × (3/5) + 0.02 × (4/5) + 0.10 × 1 = 0.148

P(B5|A) = (0.10 × 1)/0.148 = 25/37 ≈ 0.676.
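The posterior can be reproduced with exact fractions (a check sketch, not in the original):

```python
from fractions import Fraction

# Priors on "i of the 5 are forgeries"; likelihood of the sent painting being
# a forgery is i/5.
priors = [Fraction(76, 100), Fraction(9, 100), Fraction(2, 100),
          Fraction(1, 100), Fraction(2, 100), Fraction(10, 100)]
likelihood = [Fraction(i, 5) for i in range(6)]

p_A = sum(p * l for p, l in zip(priors, likelihood))
posterior_B5 = priors[5] * likelihood[5] / p_A
```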

Problem Set-3

[1] Let X be a random variable defined on (Ω, F, P). Show that the following are also random variables: (a) |X|, (b) X^2 and (c) √X, given that {X < 0} = ∅.

[2] Let Ω = [0, 1] and F be the Borel σ-field of subsets of Ω. Define X on Ω as follows:

X(ω) = ω if 0 ≤ ω ≤ 1/2; X(ω) = ω − 1/2 if 1/2 < ω ≤ 1.

Show that X defined above is a random variable.

[3] Let Ω = {1, 2, 3, 4} and F = {∅, Ω, {1}, {2, 3, 4}} be a σ-field of subsets of Ω. Verify whether X(ω) = ω + 1, ∀ ω ∈ Ω, is a random variable with respect to F.

[4] Let a card be selected from an ordinary pack of playing cards, so that the outcome ω is one of the 52 cards. Define X on Ω as:

X(ω) = 4 if ω is an ace; 3 if ω is a king; 2 if ω is a queen; 1 if ω is a jack; 0 otherwise.

Show that X is a random variable. Further, suppose that P(·) assigns probability 1/52 to each outcome ω. Derive the distribution function of X.

[5] Let F(x) = 0 if x < −1; (x + 2)/4 if −1 ≤ x < 1; 1 if x ≥ 1.

Show that F(·) is a distribution function. Sketch the graph of F(x) and compute the probabilities P(−1/2 < X ≤ 1/2), P(X = 0), P(X = 1) and P(−1 ≤ X < 1). Further, obtain the decomposition F(x) = α F_d(x) + (1 − α) F_c(x), where F_d(x) and F_c(x) are purely discrete and purely continuous distribution functions, respectively.

[6] Which of the following functions are distribution functions?

(a) F(x) = 0 for x < 0; x for 0 ≤ x ≤ 1/2; 1 for x > 1/2.

(b) F(x) = 0 for x < 0; 1 − e^{−x} for x ≥ 0.

(c) F(x) = 0 for x ≤ 1; 1 − 1/x for x > 1.

[7] Let F(x) = 0 if x ≤ 0; 1 − (2/3)e^{−x/3} − (1/3)e^{−[x/3]} if x > 0,

where [x] is the largest integer ≤ x. Show that F(·) is a distribution function and compute P(X > 6), P(X = 5) and P(5 ≤ X ≤ 8).

[8] The distribution function of a random variable X is given by

F(x) = 0 for x < −2; 1/3 for −2 ≤ x < 0; 1/2 for 0 ≤ x < 5; 1/2 + (x − 5)^2/2 for 5 ≤ x < 6; 1 for x ≥ 6.

Find P(−2 ≤ X < 5), P(0 < X < 5.5) and P(1.5 < X ≤ 5.5 | X > 2).
[9] Prove that if F1(·), …, Fn(·) are n distribution functions, then F(x) = Σ_{i=1}^{n} αi Fi(x) is also a distribution function for any (α1, …, αn) such that αi ≥ 0 and Σ_{i=1}^{n} αi = 1.

[10] Suppose F1 and F2 are distribution functions. Verify whether G(x) = F1(x) + F2(x) is also a distribution function.

[11] Find the values of α and k so that F given by

F(x) = 0 if x ≤ 0; α + k e^{−x²/2} if x > 0

is the distribution function of a continuous random variable.

[12] Let F(x) = 0 if x < 0; (x + 2)/8 if 0 ≤ x < 1; (x^2 + 2)/8 if 1 ≤ x < 2; (2x + c)/8 if 2 ≤ x ≤ 3; 1 if x > 3.

Find the value of c such that F is a distribution function. Using the obtained value of c, find the decomposition F(x) = α F_d(x) + (1 − α) F_c(x), where F_d(x) and F_c(x) are purely discrete and purely continuous distribution functions, respectively.

[13] Suppose F_X is the distribution function of a random variable X. Determine the distribution functions of (a) X^+ and (b) |X|, where

X^+ = X if X ≥ 0, and 0 if X < 0.

[14] The convolution F of two distribution functions F1 and F2 is defined by

F(x) = ∫_{−∞}^{∞} F1(x − y) dF2(y), x ∈ ℝ,

and is denoted by F = F1 ∗ F2. Show that F is also a distribution function.

[15] Which of the following functions are probability mass functions?

(a) f(x) = (x − 2)/2 if x = 1, 2, 3, 4; 0 otherwise.

(b) f(x) = e^{−λ} λ^x/x! if x = 0, 1, 2, 3, 4, …, where λ > 0; 0 otherwise.

(c) f(x) = e^{−λ} λ^x/x! if x = 1, 2, 3, 4, …, where λ > 0; 0 otherwise.

[16] Find the values of the constant c such that f(x) = (1 − c)c^x, x = 0, 1, 2, 3, …, defines a probability mass function.
[17] Let X be a discrete random variable taking values in 𝒳 = {−3, −2, −1, 0, 1, 2, 3} such that P(X = −3) = P(X = −2) = P(X = −1) = P(X = 1) = P(X = 2) = P(X = 3) and P(X < 0) = P(X = 0) = P(X > 0). Find the distribution function of X.

[18] A battery cell is labeled as good if it works for at least 300 days in a clock, otherwise it is labeled as bad. Three manufacturers, A, B and C, make cells with probabilities of making good cells 0.95, 0.90 and 0.80 respectively. Three identical clocks are selected and cells made by A, B and C are used in clock numbers 1, 2 and 3 respectively. Let X be the total number of clocks working after 300 days. Find the probability mass function of X and plot the corresponding distribution function.
2 −𝜃𝑥 𝑖𝑓 𝑥>0
[19] Prove that the function 𝑓𝜃 (𝑥) = 𝜃 𝑥𝑒
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

Defines a probability density function for 𝜃 > 0. Find the corresponding distribution function and hence
compute P(2 < X < 3) and P(X > 5).

[20] Find the value of the constant c such that the following function is a probability density function:

f_λ(x) = c(x + 1)e^{−λx} if x ≥ 0; 0 if x < 0,

where λ > 0. Obtain the distribution function of the random variable associated with the probability density function f_λ(x).

[21] Show that f(x) = x^2/18 if −3 < x < 3; 0 otherwise,

defines a probability density function. Find the corresponding distribution function and hence find P(|X| < 1) and P(X^2 < 9).

Solution Key

(1) (a) y= |X| ∀𝑥 ∈ ℝ 𝑦 −1 −∞, 𝑥 = 𝜔: 𝑦 𝜔 ≤ 𝑥 = 𝜔: 𝑋 𝜔 ≤𝑥

= {𝜔: −𝑥 ≤ 𝑋(𝜔) ≤ 𝑥}

= 𝑋 −1 −𝑥, ∞ ∩ 𝑋 −1 (−∞, 𝑥]

As X is a.r.v.

𝑋 −1 −𝑥, ∞ ∩ 𝑋 −1 (−∞, 𝑥] ∈ 𝓕

[X is a. r. v. ⟹𝑋 −1 𝐵 ∈ℱ∀ 𝐵 ∈ 𝔅

⟹ 𝑋 −1 −𝑥, ∞ ∈ ℱ &𝑋 −1 −∞, 𝑥 ∈ ℱ]

⟹ 𝑦 −1 −∞, 𝑥 ∈ ℱ ∀ 𝑥 ∈ ℝ
⟹ y = X 𝑖𝑠 𝑎. 𝑟. 𝑣.

(b)y= 𝑋 2 ; ∀ 𝑥 ∈ ℝ 𝑦 −1 −∞, 𝑥 = 𝜔: 𝑦 𝜔 ≤ 𝑥

= 𝜔: 𝑋 2 𝜔 ≤ 𝑥

= 𝜔: − 𝑥 ≤ 𝑋 𝜔 ≤ 𝑥

= 𝑋 −1 − 𝑥, ∞ ∩ 𝑋 −1 −∞, 𝑥 ∈ ℱ

⟹ 𝑦 = 𝑋 2 𝑖𝑠 𝑎. 𝑟. 𝑣.

Sly part (c).

(2) 𝛺 = [0, 1]; ℱ: Borel 𝜎-field of subsets of 𝛺

X(𝜔) = 𝜔, if 0 ≤ 𝜔 ≤ 1/2; = 𝜔 − 1/2, if 1/2 < 𝜔 ≤ 1

𝜙 ∈ ℱ, 𝑥<0
1 1 1
0, 𝑥 ∪ , + 𝑥 0 ≤ 𝑥 <
𝑋 −1 −∞, 𝑥 = 2 2 2
∈ℱ
1
𝛺 ∈ ℱ ,𝑥 ≥
2

⟹ 𝑋 −1 −∞, 𝑥 ∈ ℱ ∀ 𝑥 ∈ ℝ ⟹ X is a. r. v.

(3) 𝛺= {1, 2, 3, 4}

ℑ= {𝜙, 𝛺, {1}, {2, 3, 4}}

X(𝜔)= 𝜔+1 → 2, 3, 4, 5

𝜙 ∈ℑ 𝑥<2
𝑋 −1 −∞, 𝑥 = 1 ∈ℑ 2 ≤𝑥<3
1, 2 ∉ ℑ 3 ≤ 𝑥 < 4

⟹ 𝑋 𝑖𝑠 𝑛𝑜𝑡 𝑎. 𝑟. 𝑣.

(4) Ace king

↓ ↓

𝛺= {1 H, ……, k H, 1 S, ….., k S, 1D, …., k D, 1C, …. K C }

↔ ↔ ↔ ↔
All hearts Spades Diamond club

4 𝑎𝑐𝑒 ℑ = 𝑝𝑎𝑟𝑎𝑙𝑙𝑒𝑙
3 𝑘𝑖𝑛𝑔
X(𝜔)= 2 𝑞𝑢𝑒𝑒𝑛
1 𝑗𝑎𝑐𝑘
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝜙 𝐼𝐹 𝑋 < 0
2𝐻, … 10𝐻, 2𝑆, … 10𝑆, 2𝐷, … 10𝐷͍, 2𝐶, … 10𝐶 𝑖𝑓 0 ≤ 𝑥 < 1
2𝐻, 𝐽𝐻͍ , 2𝑆, … 𝐽𝑆͍ , 2𝐷, … 𝐽𝐷͍ , 2𝐶, … 𝐽𝐶͍ 𝑖𝑓 1 ≤ 𝑋 < 2
𝑋 −1 −∞, 𝑥 =
2𝐻, … 𝑄𝐻͍ , 2𝑆, … 𝑄𝑆͍ , 2𝐷, … 𝑄𝐷͍ , 2𝐶, … , 𝑄𝐶͍ , 2 ≤ 𝑋 < 3
2𝐻, … 𝐾𝐻͍, 2𝑆, … 𝐾𝑆͍ , 2𝐷, … 𝐾𝐷͍ , 2𝐶, … 𝐾𝐶͍ 3 ≤ 𝑋 < 4
𝛺 𝑋 ≥4

⟹ 𝑋 −1 −∞, 𝑥 ∈ ℱ∀ 𝑥 ∈ ℝ

⟹ X is a. r. v.

(5)

(i) F(.) is non decreasing


(ii) F(.) is right continuous everywhere
(iii) F(-∞)= 0 & F(∞)= 1

⟹ F(.) is a d. f.
P(−1/2 < X ≤ 1/2) = F(1/2) − F(−1/2) = 5/8 − 3/8 = 2/8

𝑃 𝑋 =0 =𝐹 0 − 𝐹 0− =0

𝑃 𝑋 = 1 = 𝐹 1 − 𝐹1— 𝐹(−1−)

3 3
= −0 =
4 4
(6) (a) F(x) is not right continuous at x = ½

⟹ F(.) is not a d. f.

(b)& (c) the n. s. c. holds for these 2 and hence they are d. f. s.

(7) F(.) is non decreasing

F(∞)= 1 & F(-∞)= 0

F(x) is right continuous ∀ 𝑥 ∈ ℝ


⟹ F(.) is a. d. f.

2 6 1 −6
𝑃 𝑋 > 6 = 1 − 𝑃 𝑋 ≤ 6 = 1 − 𝐹 6 = 1 − 1 − 𝑒 −3 − 𝑒 3
3 3
2 1
= 1 − 1 − 𝑒 −2 − 𝑒 −2 = 𝑒 −2
3 3

𝑃 𝑋 =5 =𝐹 5 −𝐹 5−

2 5 1 −5 2 5 1 −5
= 1 − 𝑒 −3 − 𝑒 3 − 1 − 𝑒 −3 − 𝑒 3
3 3 3 3
2 5 1 2 5 1
= 1 − 𝑒 −3 − 𝑒 −1 − 1 − 𝑒 −3 − 𝑒 −1 = 0
3 3 3 3

𝑃 5≤𝑋 ≤8 =𝐹 8 −𝐹 5−

=⋯

(8) P(−2 ≤ X < 5) = F(5−) − F(−2−) = 1/2 − 0 = 1/2

P(0 < X < 5.5) = F(5.5−) − F(0) = 1/2 + 1/8 − 1/2 = 1/8

P(1.5 < X ≤ 5.5 | X > 2) = P(2 < X ≤ 5.5)/P(X > 2)

= [F(5.5) − F(2)]/[1 − F(2)]

= (1/2 + 1/8 − 1/2)/(1 − 1/2) = 1/4

(9) For 𝑥1 < 𝑥2:

F(𝑥2) − F(𝑥1) = Σ_{i=1}^{n} 𝛼ᵢ [Fᵢ(𝑥2) − Fᵢ(𝑥1)] ≥ 0, since Fᵢ(𝑥2) − Fᵢ(𝑥1) ≥ 0 ∀ i

⟹ F(x) is non-decreasing.

𝐹 −∞ = 𝛼𝑖 𝐹𝑖 −∞ = 0 & 𝐹 ∞ = 1

𝐹 𝑥+ = lim 𝐹 𝑧 = lim 𝛼𝑖 𝐹𝑖 𝑧
𝑧↓𝑥 𝑧↓𝑥

= 𝛼𝑖 𝐹𝑖 𝑥+ = 𝛼𝑖 𝐹𝑖 𝑥 = 𝐹(𝑥)

⟹ F(.) is a d. f.

(10) 𝐺 ∞ = 𝐹1 ∞ + 𝐹2 ∞ = 2 ≠ 1

⟹ 𝐺 . 𝑖𝑠 𝑛𝑜𝑡 𝑎 𝑑. 𝑓.

(11) Right continuity at x = 0 ⟹ F(0) = F(0+)

⟹ 0 = 𝛼 + k ________(i)

F(∞) = 1 ⟹ 𝛼 = 1 ⟹ k = −1

(12) Right cont at x= 3 ⟹ F(3)= F(3+)


6+𝐶
⟹ 8
=1⟹𝐶=2
2
𝑓 𝑛 𝐹 . 𝑖𝑠 𝑕𝑎𝑣𝑖𝑛𝑔 𝑗𝑢𝑚𝑝 𝑎𝑡 𝑥 = 0 𝑜𝑛𝑙𝑦 𝑚𝑎𝑔𝑛𝑖𝑡𝑢𝑑𝑒
8

Discrete part of d. f.

0 𝑥<0
𝛼𝐹𝑑 𝑥 = 2
𝑥 ≥0
8

𝑐𝑜𝑛𝑡 𝑝𝑎𝑟𝑡

0 𝑖𝑓 𝑥 < 0
𝑥
0≤𝑥<1
8
𝑥2
1≤𝑥<2
1 − 𝛼 𝐹𝑐 𝑥 = 8
2𝑥
2≤𝑥≤3
8
6
𝑖𝑓 𝑥 > 3
8
2 1 3
⟹𝛼= = & 1−𝛼 =
8 4 4
0 𝑖𝑓 𝑥 < 0
𝑥
0≤𝑥<1
6
0 𝑥<0 𝑥2
𝐹𝑑 𝑥 = & 𝐹𝑐 𝑥 = 1≤𝑥<2
1 𝑥≥ 0 6
𝑥
2≤𝑥≤3
3
1 𝑖𝑓 𝑥 > 3

(13) Y= 𝑋 +

P(Y ≤ y)= 0 if y < 0

If y= 0 𝑃 𝑌 ≤ 0 = 𝑃 𝑋 + ≤ 0 = 𝑃 𝑋 + = 0 = 𝑃 𝑋 ≤ 0 = 𝐹(0)

If y> 0 𝑃 𝑌 ≤ 𝑦 = 𝑃 𝑋 + ≤ 𝑦 = 𝑃 𝑋 ≤ 𝑦 = 𝐹 𝑦

0 𝑦<0
𝐹𝑌 𝑦 =
𝐹 𝑦 𝑦≥0

𝑍= 𝑋

𝐹𝑍 𝔷 = 𝑃 1 × 1 ≤ 𝔷

= 𝑃 −𝔷 ≤ 𝑋 ≤ 𝔷

𝐹 𝔷 − 𝐹 −𝔷 − 𝑖𝑓 𝔷 ≥ 0
=
0 𝑖𝑓 𝔷 < 0
(14) for 𝑥1 < 𝑥2
∞ ∞ ∞
𝐹 𝑥2 − 𝐹 𝑥1 = 𝐹1 𝑥2 − 𝑦 𝑑𝐹2 𝑦 − 𝐹1 𝑥1 − 𝑦 𝑑𝐹2 𝑦
−∞ −∞ −∞


= 𝐹1 𝑥2 − 𝑦 − 𝐹1 𝑥1 − 𝑦 𝑑𝐹2 𝑦
−∞

≥ 0 ∀𝑥1 < 𝑥2

𝐹(∞) = ∫_{−∞}^{∞} 𝐹1(∞) d𝐹2(y) = 𝐹2(∞) − 𝐹2(−∞) = 1

𝐹 −∞ = 0
∞ ∞
𝐹 𝑋+ = 𝐹1 ( 𝑋+ − 𝑦) 𝑑𝐹2 𝑦 = 𝐹1 (𝑥 − 𝑦) 𝑑𝐹2 𝑦 = 𝐹(𝑥)
−∞ −∞

⟹ F(.) as defined is a d. f.

(15) (a) f(1) < 0 ⟹ f(.) is not a p. m. f.

(b) f(x) ≥ 0 ∀ x

−𝜆
𝜆𝑥
𝑓 𝑥 =𝑒 =1
𝑥!
𝑥 𝛼

⟹ f(.) is a p. m. f.

(c) Σₓ f(x) = 1 − e^{−𝜆} ≠ 1 (the x = 0 term is missing) ⟹ f(.) is not a p. m. f.

(16) f(x)≥ 0 ∀ C∈[0, 1]

𝑥 𝑓 𝑥 = 1 𝑎𝑙𝑠𝑜 ∀ C∈[0, 1]

⟹ ∀ C → 0≤C ≤ 1; f(.) is p. m .f.

(17) Suppose

P(X= -3)= P(X= -2) = P(X= -1)= P(X= 1) = P(X= 2) = P(X= 3)= p

⟹ P(X < 0)= 3p = P(X > 0)= P(X = 0)

P(X < 0) + P(X = 0) + P(X > 0)= 1


1
⟹𝑝=9
p. m. f.

X= x -3 -2 -1 0 1 2 3
1 1 1 3 1 1 1
P(X=x) 9 9 9 9 9 9 9

0 𝑥 < −3
1
− 3 ≤ 𝑥 < −2
9
2
− 2 ≤ 𝑥 < −1
9
3
−1≤𝑥 <0
𝐹𝑋 𝑥 = 9
6
0≤𝑥<1
9
7
1≤𝑥<2
9
8
2≤𝑥<3
9
1 𝑥≥3
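The tabulated p. m. f. and d. f. can be cross-checked numerically; the sketch below (Python, not part of the original key) accumulates the masses p = 1/9 derived above into the distribution function:

```python
from fractions import Fraction

# p.m.f. from the solution: mass 1/9 at each nonzero point, 3/9 at X = 0
pmf = {x: Fraction(1, 9) for x in (-3, -2, -1, 1, 2, 3)}
pmf[0] = Fraction(3, 9)

# distribution function F(x) = P(X <= x), evaluated at the support points
cdf, running = {}, Fraction(0)
for x in sorted(pmf):
    running += pmf[x]
    cdf[x] = running

print(cdf[0])  # 2/3, matching F_X(x) = 6/9 on [0, 1)
```

Using exact `Fraction` arithmetic avoids any rounding when comparing against the 1/9-grid values in the table.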

(18) X: r. v. denoting total # of clocks working after 300days

X can take values in {0, 1, 2, 3}

𝑃 𝑋 = 0 = 𝑃 𝐴𝑐 𝐵𝑐 𝐶 𝑐 = 𝑃 𝐴𝑐 𝑃 𝐵𝑐 𝑃 𝐶 𝑐

= 1 − 0.95 1 − 0.9 1 − 0.8 = 𝑝0 , 𝑠𝑎𝑦

𝑃 𝑋 = 1 = 𝑃 𝐴𝐵𝑐 𝐶 𝑐 ∪ 𝐴𝑐 𝐵𝐶 𝑐 ∪ 𝐴𝑐 𝐵𝑐 𝐶

= 𝑃 𝐴 𝑃 𝐵𝑐 𝑃 𝐶 𝑐 + 𝑃 𝐴𝑐 𝑃 𝐵 𝑃 𝐶 𝑐 + 𝑃 𝐴𝑐 𝑃 𝐵𝑐 𝑃 𝐶

=⋯

= 𝑝1 𝑠𝑎𝑦

𝑃 𝑋 = 2 = 𝑃 𝐴𝐵𝐶 𝑐 ∪ 𝐴 𝐵𝑐 𝐶 ∪ 𝐴𝑐 𝐵𝐶

= 𝑃 𝐴 𝑃 𝐵 𝑃 𝐶 𝑐 + 𝑃 𝐴 𝑃 𝐵𝑐 𝑃 𝐶 + 𝑃 𝐴𝑐 𝑃 𝐵 𝑃 𝐶

=⋯

= 𝑝2 𝑠𝑎𝑦

𝑃 𝑋 = 3 = 𝑃 𝐴𝐵𝐶 = 𝑃 𝐴 𝑃 𝐵 𝑃 𝐶 = 0.95 × 0.9 × 0.8 = 𝑝3 𝑠𝑎𝑦

p. m. f.

X= x 0 1 2
3
P(X= 𝑝0 𝑝1 𝑝2
x) 𝑝3
d. f.

(19) f(x) ≥0∀ x


∞ ∞
⎾2
𝑓(𝑥) 𝑑𝑥 = 𝜃 2 𝑥𝑒 −𝜃𝑥 𝑑𝑥 = 𝜃 2 . =1
−∞ 0 𝜃2

⟹ f(.) is a p. d. f.

P(2 < X < 3)= F(3)- F(2)

& P(X> 5)= 1- F(5)


𝑥 𝑥
Here, F(x)= ∫−∞ 𝑓 𝑥 𝑑𝑥 = 𝜃 2 ∫0 𝑥𝑒 −𝜃𝑥 𝑑𝑥 𝑖𝑓 𝑥 > 0

= 0 𝑖𝑓 𝑥 ≤ 0

Integration by parts.

(20) If C ≥ 0 then f(x) ≥ 0 ∀ x



𝑓 𝑥 𝑑𝑥 = 1
0


⟹𝐶 (𝑥 + 1) 𝑒 −𝜆𝑥 𝑑𝑥 = 1
0

⎾2 ⎾1 𝜆2
i.e. 𝐶 𝜆2
+ 𝜆
= 1 ⟹ 𝐶 = 1+𝜆

𝑥 𝜆2 𝑥
d. f. F(x)= ∫0 𝑓 𝑥 𝑑𝑥 = 1+𝜆 ∫0 1 + 𝑥 𝑒 −𝜆𝑥 𝑑𝑥 𝑖𝑓𝑥 ≥ 0

= 0 𝑖𝑓 𝑥 < 0

By parts.

(21) f(x) ≥ 0 ∀ x
𝑥 3
𝑥2 1 𝑥3 3 1
𝑓 𝑥 𝑑𝑥 = 𝑑𝑥 = | = . 18 = 1
−∞ −3 18 18 3 −3 18

⟹ f(.) is a p. d. f.

0 𝑖𝑓 𝑥 < −3
3
𝐹𝑋 𝑥 = 𝑥 + 27
−3≤𝑥 ≤3
54
1 𝑥>3
P(|X| < 1) = F(1) − F(−1) = 28/54 − 26/54 = 1/27

𝑃 𝑋2 < 9 = 1

Problem Set -4

[1] Find the expected number of throws of a fair die required to obtain a 6.

[2] Consider a sequence of independent coin flips, each of which has a probability p of being heads.
Define a random variable X as the length of the run (of either heads or tails) started by the first trial. Find
E(X).

[3] Verify whether E(X) exists in the following cases:


−1
(a) X has the p. m. f. P(X= x)= 𝑥 𝑥+1 , 𝑖𝑓 𝑥 = 1, 2, …
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

(2𝑥 2 )−1 , 𝑖𝑓 𝑋 > 1


𝑏 𝑋 𝑕𝑎𝑠 𝑡𝑕𝑒 𝑝. 𝑑. 𝑓. 𝑓 𝑥 =
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
1 1
(c)X (Cauchy r. v. ) has the p. d. f. f(x)= 𝜋 1+𝑥 2 ; −∞ < 𝑥 < ∞ .

[4] Find the mean and variance of the distributions having the following p. d. f. / p. m. f.

(a) 𝑓 𝑥 = 𝑎𝑥 𝑎−1 , 0 < 𝑥 < 1, 𝑎 > 0

1
𝑏 𝑓 𝑥 = ; 𝑥 = 1, 2, … , 𝑛; 𝑛 > 0 𝑖𝑠 𝑎𝑛 𝑖𝑛𝑡𝑒𝑔𝑒𝑟
𝑛
3
𝑐 𝑓 𝑥 = (𝑥 − 1)2 ; 0 < 𝑥 < 2
2
[5] Find the mean and variance of the Weibull random variable having the p. d. f.

𝑐 𝑥−𝜇 𝑐−1 𝑥−𝜇 𝑐


f(x)= 𝑎 𝑎
exp − 𝑎
𝑖𝑓 𝑥 > 𝜇 𝑤𝑕𝑒𝑟𝑒, 𝑐 > 0, 𝑎 > 0 𝑎𝑛𝑑 𝜇 ∈ −∞, ∞ .
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
[6] A median of a distribution is a value m such that P(X≥ m)≥ ½ and P(X≤ m)≥ ½ , with equality for a
continuous distribution. Find the median of the distribution with p. d. f. f(x)= 3𝑥 2 , 0 < 𝑥 < 1; =
0, otherwise.

[7] Let X be a continuous, nonnegative random variable with d. f. F(x). Show that

E(X) = ∫₀^∞ (1 − F(x)) dx.

[8] A target is made of three concentric circles of radii 1/3, 1, 3 feet. Shots within the inner circle give 4
, 1, 3 feet. Shots within the inner circle give 4
points, within the next ring 3 points and within the third ring 2 points. Shots outside the target give 0. Let
X be the distance of the hit from the centre (in feet) and let the p. d. f. of X be
f(x) = 2/(𝜋(1 + x²)) if x > 0; = 0 otherwise.
What is the expected value of the score in a single shot?

[9] Find the moment generating function (m. g. f.) for the following distributions

(a) X is a (Binomial r. v.) discrete random variable with p. m. f.


𝑛 𝑥
𝑝 (1 − 𝑝)𝑛 −𝑥 𝑥 = 0, 1, 2, … , 𝑛
𝑃 𝑋=𝑥 = 𝑥
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

n is a positive integer.

(b) X is a (Gamma r. v.) continuous random variable with p. d. f.

𝑒 −𝑥/𝛽 𝑥 𝛼−1
𝑓𝑥 𝑥 = ,𝑥 > 0
⎾𝛼𝛽 𝛼
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

Find E(X) and V(X) from the m. g. f. s.

[10] The m. g. f. of a random variable X is given by

1 1 1 5
𝑀𝑋 𝑡 = 𝑒 −5𝑡 + 𝑒 4𝑡 + 𝑒 5𝑡 + 𝑒 25𝑡
2 6 8 24

Find the distribution function of the random variable.

[11] Let X be a random variable with P(X ≤ 0)= 0 and let 𝜇= E(X) exists. Show that P(X≥ 2𝜇) ≤ 0.5.

[12] Let X be a random variable with E(X)= 3 and E(𝑋 2 )= 13, determine a lower bound for P(-2 < X < 8).

[13] Let x be a random variable with p. m. f.


1
8
𝑥 = −1, 1
P(X= x)= 6
𝑥=0
8
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

Using the p. m. f. , show that the bound for Chebychev’ s inequality cannot be improved.

[14] A communication system consists of n components, each of which will independently function with
probability p. The system will be able to operate effectively if al least one half of its components function.

(a) For what value of a p a 5-component system is more likely to operate effectively than a 3-component
system?

(b) In general, when is a (2k + 1) –component system better than a (2k- 1) –component system?

[15] An interviewer is given a list of 8 people whom he can attempt to interview. He is required to
interview exactly 5 people. If each person(independently) agrees to be interviewed with probability 2/3,
what is the probability that his list will enable him to complete his task?

[16] A pipe-smoking mathematician carries at all times 2 match boxes, 1 in his left-hand pocket and 1 in
his right- hand pocket. Each time he needs a match he is equally likely to take it from either pocket.
Consider the moment when the mathematician first discovers that one of his matchboxes is empty. If it is
assumed that both matchboxes initially contained N matches, what is the probability that there are exactly
k matches in the other box, k= 0, 1, …, N?

Solution Key

(1) Let X denote the # of throws reqd. to get a 6

𝔛 = 1, 2, 3, …
𝑥−1
5 1
𝑃 𝑋=𝑥 = 𝑥 ∈𝔛
6 6

= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
∞ 𝑥−1 ∞ 𝑥−1 2
5 1 1 5 5 5 1
𝐸 𝑋 = 𝑥 = 𝑥 = 1+2 +3 +⋯
6 6 6 6 6 6 6
1 1

2 2 2 2
1 5 5 5 5 5 5 5 5
= 1+ + +⋯ + 1+ + +⋯ + 1+ + +⋯ +⋯
6 6 6 6 6 6 6 6 6

2
1 1 5 1 5 1
= + + + …
6 1− 5 61 − 5 6 5
6 6 1−6

= (1/6) · 6 · [1 + 5/6 + (5/6)² + ⋯] = 6
(2) X: length of run of heads or tails starting with trial 1

𝔛= {1, 2, …}

𝑃 𝑋 = 𝑥 = (1 − 𝑝)𝑥 𝑝 + 𝑝 𝑥 (1 − 𝑝)

↑ ↑

Run of x 𝑇𝑠 run of x H
∞ ∞ ∞
𝑥 𝑥 𝑥−1
𝐸 𝑋 = 𝑥 1−𝑝 𝑝+ 𝑝 1−𝑝 =𝑝 1−𝑝 𝑥 1−𝑝 + 𝑥 𝑝 𝑥−1
1 1 1

1 1
=𝑝 1−𝑝 2
+ 2
← 𝑎𝑠 𝑖𝑛 (1)
𝑝 1−𝑝

1 − 2𝑝 + 2𝑝2
= .
𝑝(1 − 𝑝)

∞ 1 ∞ 1
(3) (a) 𝐸 𝑋 = 1 𝑥 𝑥 𝑥+1
= 1 𝑥+1 𝑛𝑜𝑡 𝑐𝑜𝑛𝑣𝑒𝑟𝑔𝑒𝑛𝑡

⟹ 𝐸 𝑋 𝑑𝑜𝑒𝑠 𝑛𝑜𝑡 𝑒𝑥𝑖𝑠𝑡

1
𝑏 𝐸 𝑋 = 𝑥 𝑑𝑥 = ∞ ⟹ 𝐸 𝑥 𝑑𝑜𝑒𝑠 𝑛𝑜𝑡 𝑒𝑥𝑖𝑠𝑡
𝑥 >1 2𝑥 2

(c) E|X| = ∫_{−∞}^{∞} |x| · 1/(𝜋(1 + x²)) dx = (2/𝜋) ∫₀^∞ x/(1 + x²) dx = (1/𝜋) log(1 + x²) |₀^∞ = ∞
⟹ 𝐸 𝑋 𝑑𝑜𝑒𝑠 𝑛𝑜𝑡 𝑒𝑥𝑖𝑠𝑡.

(4) Trial calculations


1 𝑎 𝑎
(a) e. g. 𝐸 𝑋 = 𝑎 ∫0 𝑥 𝑎 𝑑𝑥 = ;𝐸 𝑋2 =
𝑎+1 𝑎+2

𝑎 𝑎 2
𝑉 𝑋 = 𝐸 𝑋 2 − (𝐸(𝑋))2 = −
𝑎+2 𝑎+1

=⋯
𝑥−𝜇 𝑐
𝑐 ∞ 𝑥−𝜇 𝑐−1 −
(5) 𝐸 𝑋 = 𝑎 ∫𝜇 𝑥 𝑎
𝑒 𝑎 𝑑𝑥

𝑥−𝜇 𝑐 𝑐 𝑥−𝜇 𝑐−1


𝑦= 𝑑𝑦 =
𝑎 𝑎 𝑎
∞ 1 1
⟹𝐸 𝑋 = 𝑎𝑦 𝑐 + 𝜇 𝑒 −𝑦 𝑑𝑦 = 𝑎⎾ + 1 + 𝜇
0 𝑐
∞ 𝑐−1 𝑥−𝜇 𝑐
𝑐 𝑥−𝜇 −
𝐸 𝑋2 = 𝑥2 𝑒 𝑎 𝑑𝑥
𝑎 𝜇 𝑎

∞ 1 2
𝑎𝑠 𝑖𝑛𝐸 𝑋 = 𝑎𝑦 𝑐 + 𝜇 𝑒 −𝑦 𝑑𝑦
0

2 1
= 𝑎2 + 1 + 2𝑎𝜇 + 1 + 𝜇2
𝑐 𝑐

𝑉 𝑋 = 𝐸𝑋 2 − 𝐸𝑋 2

2
2 1 1
= 𝑎2 + 1 + 2𝑎𝜇 + 1 + 𝜇2 − 𝑎 +1+𝜇
𝑐 𝑐 𝑐

=⋯
(6) ∫₀^m 3x² dx = m³ = 1/2 ⟹ m = (1/2)^{1/3}

∞ ∞ ∞
(7) ∫0 1 − 𝐹 𝑥 𝑑𝑥 = ∫0 ∫𝑥 𝑓𝑥 𝑦 𝑑𝑦 𝑑𝑥

0<𝑥<𝑦<∞
∞ 𝑦 ∞
= 𝑓𝑥 𝑦 𝑑𝑥 𝑑𝑦 = 𝑦 𝑓𝑥 𝑦 𝑑𝑦 = 𝐸(𝑋)
0 0 0

(8) Let Z denote the r. v. ∋

Z: Score in a shot 𝔛𝑧 = 0, 1, 2, 3 4

2 1 2 ∞ 1
𝑃 𝑍 = => 3) = 2
𝑑𝑥 = tan−1 𝑥 | =
𝜋 3 1+𝑥 𝜋 3 3

3
2 1 1
𝑃 𝑍=2 =𝑃 1<𝑋< 3 2
𝑑𝑥 =
𝜋 1 1+𝑥 6

1 2 1 1 1
P(Z=3) = P 3
< 𝑥 < 1 = 𝜋 ∫0 1+𝑥 2
𝑑𝑥 =6

1
2 1 3 1 1
𝑃 𝑧=4 =𝑃 0<𝑥< = 2
=
3 𝜋 0 1+𝑥 3

Expected score: E(Z) = 0·(1/3) + 2·(1/6) + 3·(1/6) + 4·(1/3) = 1/3 + 1/2 + 4/3 = 13/6
𝑛 𝑛 𝑛 𝑛
(9)(a) 𝑀𝑋 𝑡 = 𝐸 𝑒 𝑡𝑋 = 0 𝑒 𝑡𝑥 𝑥
𝑝𝑥 1 − 𝑝 𝑛−𝑥
= 0 𝑥 (𝑝𝑒 𝑡 )𝑥 1 − 𝑝 𝑛−𝑥

= (1 − 𝑝 + 𝑝𝑒 𝑡 )𝑛

𝑞 =1−𝑝

𝑑
𝑀 𝑡 𝑛 𝑞 + 𝑝𝑒 𝑡 𝑛−1
𝑝𝑒 𝑡 𝑡 = 0
𝑑𝑡 𝑋 𝑡=0

= 𝑛𝑝 = 𝐸(𝑥)

𝑑2
𝑀 𝑡 = 𝑛 𝑛 − 1 𝑞 + 𝑝𝑒 𝑡 𝑛−2
𝑝𝑒 𝑡 2
+ 𝑛 𝑞 + 𝑝𝑒 𝑡 𝑛−1
𝑝𝑒 𝑡
𝑑𝑡 2 𝑋

𝑑2 𝑀𝑋 𝑡
| = 𝑛 𝑛 − 1 𝑝2 + 𝑛𝑝 = 𝜇2 1 = 𝐸(𝑋 2 )
𝑑𝑡 2 𝑡=0

𝑉 𝑋 = 𝐸𝑋 2 − (𝐸𝑋)2 = 𝑛 𝑛 − 1 𝑝2 + 𝑛𝑝 − 𝑛2 𝑝2 = 𝑛𝑝 1 − 𝑝 = 𝑛𝑝𝑞

Sly (b)

(10) (i) 𝑋 ∼ 𝐺 𝛼, 𝛽
∞ 𝑥
1 −
𝑀𝑋 𝑡 = 𝐸 𝑒 𝑡𝑋 = 𝑒 𝑡𝑋 𝑥 𝛼−1 𝑒 𝛽 𝑑𝑥
⎾𝛼𝛽 𝛼 0

∞ 1
1 −𝑥 −𝑡 1
= 𝑥 𝛼−1 𝑒 𝛽 𝑑𝑥 𝑟𝑒𝑔𝑖𝑜𝑛 𝑜𝑓 𝑒𝑥𝑖𝑠𝑡𝑎𝑛𝑐𝑒 𝑡 <
⎾𝛼𝛽 𝛼 0 𝛽
𝛼
⎾𝛼 1 1
= 𝛼
. 𝛼 = −𝑡
⎾𝛼𝛽 1 𝛽
−𝑡
𝛽

𝑑 −𝛼−1
𝐸 𝑋 = 𝑀 𝑡 |𝑡 = 0 = −𝛼 1 − 𝛽𝑡 (−𝛽)|𝑡=0
𝑑𝑡 𝑋

= 𝛼𝛽

𝑑2 𝑀𝑋 𝑡
𝐸 𝑋2 = |𝑡=0
𝑑𝑡 2
−𝛼−2
= 𝛼𝛽(− 𝛼 + 1 1 − 𝛽𝑡 (−𝛽))|𝑡=0

= 𝛼 𝛼 + 1 𝛽2

𝑉 𝑋 = 𝐸 𝑋 2 − ((𝐸𝑋))2 = 𝛼 2 𝛽2 + 𝛼𝛽 2 − 𝛼 2 𝛽 2 = 𝛼𝛽 2

Sly (e)
1 1 1 5
(ii)𝑀𝑋 𝑡 = 𝑒 −5𝑡 2 + 𝑒 4𝑡 6 + 𝑒 5𝑡 8 + 24 𝑒 25𝑡

A 4 pt distn.

p. m. f.

𝑋=𝑥 −5 4 5 25

1 1 1 5
𝑃 𝑋=𝑥
2 6 8 24
0 𝑥 < −5
1
2
−5 ≤𝑥 <4
1 1
d. f. 𝐹𝑋 𝑥 = 2
+6 4≤𝑥<5
1 1 1
+ + 5 ≤ 𝑥 < 25
2 6 8
1 1 1 5
2
+ 6 + 8 + 24 = 1 𝑥 ≥ 25

(11) X is (+) ve values r. v. , by Markov’s inequality

𝐸((𝑋)) 1
𝑃 𝑋 ≥ 2𝜇 = 𝑃 𝑋 ≥ 2𝜇 ≤ =
2𝜇 2

−2−3 𝑋−𝐸 𝑋 8−3


(12) 𝑃 −2 < 𝑋 < 8 = 𝑃 2
< < 2
𝑉 𝑋

5 𝑋−𝐸 𝑋 5 5
=𝑃 − < < = 𝑃 𝑋−𝜇 ≤ 𝑉 𝑋
2 𝑉 𝑋 2 2

5
= 1−𝑃 𝑋−𝜇 ≥ 𝑉 𝑋
2
𝑉 𝑋
≥1− 𝑐𝑕𝑒𝑏𝑦𝑠𝑕𝑒𝑣 ′ 𝑠 𝑠𝑖𝑛𝑒𝑞𝑢𝑎𝑙
25
.𝑉 𝑋
4
4 21
=1− =
25 25
1 1 1 1 1
(13) 𝐸 𝑋 = − 8 + 8 = 0 = 𝜇; 𝑉 𝑋 = 𝐸𝑋 2 = 8 + 8 = 4 = 𝜍 2

By chebyshev’s inequality

𝜍2
∀𝑡 >0 𝑃 𝑋−𝜇 ≥𝑡 ≤ 𝑡2

1
𝑖. 𝑒. 𝑃 𝑋 ≥ 𝑡 ≤
4𝑡 2
1 1 1
𝐴𝑙𝑠𝑜, 𝑃 𝑋 ≥ 𝑡 = 8 + 8 = 4 0 < 𝑡 ≤ 1
0 𝑡>1
1
𝑖. 𝑒. 𝑃 𝑋 ≥ 𝑡 = ∀ 𝑡 ∋ 0 < 𝑡 ≤ 1
4

⟹ for t= 1, bound from chebyshev’s inequality is attained exactly and hence cannot be improved

(14) Let X be the r. v. denoting the # of components functioning X∼ B(n, p)

P(system works effectively)= P(X≥ [n/2] +1).

(a) 𝑃 5 𝑐𝑜𝑚𝑝. 𝑠𝑦𝑠𝑡𝑒𝑚 𝑤𝑜𝑟𝑘𝑠 = 𝑝5

5 3 2
5 4
𝑖. 𝑒. 𝑝5 = 𝑝5 𝑋 ≥ 3 = 𝑝 1−𝑝 + 𝑝 1 − 𝑝 + 𝑝5
3 4
3 2
& 𝑝3 = 𝑝 1 − 𝑝 + 𝑝3 = 𝑃 3 𝑐𝑜𝑚𝑝. 𝑠𝑦𝑠𝑡𝑒𝑚 𝑤𝑜𝑟𝑘𝑠
2

𝑝5 > 𝑝3

5 3 2
5 4 3 2
𝑖𝑓 𝑝 1−𝑝 + 𝑝 1 − 𝑝 + 𝑝5 > 𝑝 1 − 𝑝 + 𝑝3
3 4 2
simplify to get the condition as p > 1/2
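The condition p > 1/2 from part (a) can be spot-checked by computing both system reliabilities directly (a sketch; the helper name p_works is mine, not from the key):

```python
from math import comb

def p_works(n, p):
    """P(at least floor(n/2)+1 of n independent components function), X ~ Bin(n, p)."""
    k = n // 2 + 1
    return sum(comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(k, n + 1))

# the 5-component system beats the 3-component one exactly when p > 1/2
better_at_07 = p_works(5, 0.7) > p_works(3, 0.7)
better_at_03 = p_works(5, 0.3) > p_works(3, 0.3)
```

At p = 1/2 the two reliabilities coincide (both equal 1/2 by symmetry), which is consistent with the strict inequality condition derived above.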

𝑏 𝑃2𝑘+1 𝑋 ≥ 𝑘 + 1 = 𝑝2𝑘+1
= 𝑃2𝑘−1 𝑋 ≥ 𝑘 + 1 + 𝑃2𝑘−1 𝑋 = 𝑘 𝑃2 𝑋 ≥ 1 + 𝑃2𝑘−1 𝑋 = 𝑘 − 1 𝑃2 𝑋 = 2
2
𝑖. 𝑒. 𝑝2𝑘+1 = 𝑃2𝑘+1 𝑋 ≥ 𝑘 + 1 + 𝑃2𝑘+1 𝑋 = 𝑘 1 − 1 − 𝑝 + 𝑃2𝑘−1 𝑋 = 𝑘 − 1 𝑝2

𝐹𝑢𝑟𝑡𝑕𝑒𝑟 𝑝2𝑘−1 = 𝑃2𝑘−1 𝑋 ≥ 𝑘 = 𝑃2𝑘−1 𝑋 = 𝑘 + 𝑃2𝑘−1 𝑋 ≥ 𝑘 + 1


2
𝑠𝑖𝑛𝑐𝑒 𝑝2𝑘−1 = 𝑃2𝑘−1 + 𝑃2𝑘−1 𝑋 = 𝑘 ͍ − 𝑃2𝑘−1 𝑋 = 𝑘 1 − 𝑝 + 𝑃2𝑘−1 𝑋 = 𝑘 − 1 𝑝2
2
⟹ 𝑝2𝑘+1 = 𝑝2𝑘−1 − 𝑃2𝑘−1 𝑋 = 𝑘 1 − 𝑝 + 𝑃2𝑘−1 𝑋 = 𝑘 − 1 𝑝2

⟹ 𝑝2𝑘+1 > 𝑝2𝑘−1 𝑖𝑓


2
𝑃2𝑘−1 𝑋 = 𝑘 − 1 − 𝑝 + 𝑃2𝑘−1 𝑋 = 𝑘 − 1 𝑝2 > 0

2𝑘 − 1 𝑘 𝑘−1
2𝑘 − 1 𝑘−1
𝑖. 𝑒. 𝑝 1−𝑝 −1 − 𝑝2 + 2𝑝 + 𝑝 1 − 𝑝 𝑘 𝑝2 > 0
𝑘 𝑘−1

𝑖. 𝑒. 𝑝𝑘 1 − 𝑝 𝑘−1
−1 − 𝑝2 + 2𝑝 + 𝑝 − 𝑝2 > 0

𝑖. 𝑒. −1 − 2𝑝2 + 3𝑝 > 0
𝑖. 𝑒. 2𝑝 − 1 1 − 𝑝 > 0

1
𝑖. 𝑒. 𝑝 > 𝑟𝑒𝑞𝑑 𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛.
2

(15) X: # of interview attempts needed to obtain 5 interviews (negative binomial)

P(X = x) = C(x − 1, 4) (2/3)⁵ (1/3)^{x−5}; x = 5, 6, …

reqd prob: P(X ≤ 8) = P(X = 5) + P(X = 6) + P(X = 7) + P(X = 8)

= C(4, 4)(2/3)⁵ + C(5, 4)(2/3)⁵(1/3) + C(6, 4)(2/3)⁵(1/3)² + C(7, 4)(2/3)⁵(1/3)³ = 4864/6561 ≈ 0.741
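The negative-binomial sum can be evaluated directly (a sketch, not part of the original key):

```python
from math import comb

p = 2 / 3
# P(X = x) = C(x-1, 4) p^5 (1-p)^(x-5), summed over x = 5, ..., 8
prob = sum(comb(x - 1, 4) * p**5 * (1 - p) ** (x - 5) for x in range(5, 9))
print(prob)  # ~ 0.7414
```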

(16) P(selecting Box 1)= P(selecting box 2)= ½

Suppose Box 2 is found empty, then Box 2 has been chosen (n+1)th times, at it is time Box 1 contains
k matches if it has been chosen n-k times

Chosing Box 2≡ success.

Chosing Box 1≡ failure} Bernoulli trial p= ½

⟹ Box 2 found empty with k matches left in Box 1

≡ N- k failures preceding (N+ 1)th success

𝑁+(𝑁−𝑘) 1 𝑁 1 𝑁−𝑘 1 2𝑁−𝑘 1 2𝑁−𝑘+1


𝑝𝑟𝑜𝑏 = 𝑁 2 2
×2 = 𝑁 2

𝑠𝑙𝑦 Box 1 found empty with k matches in Box 2


2𝑁−𝑘+1
2𝑁 − 𝑘 1
𝑝𝑟𝑜𝑏 =
𝑁 2
2𝑁−𝑘
2𝑁 − 𝑘 1
⟹ 𝑟𝑒𝑞𝑑 𝑝𝑟𝑜𝑏 =
𝑁 2
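As a consistency check of the Banach matchbox answer, the probabilities C(2N−k, N)(1/2)^{2N−k} should sum to 1 over k = 0, …, N. A sketch with a small illustrative N (the value N = 6 is an arbitrary choice):

```python
from math import comb

N = 6  # illustrative number of matches initially in each box
# pmf[k] = P(exactly k matches left in the other box)
pmf = [comb(2 * N - k, N) * 0.5 ** (2 * N - k) for k in range(N + 1)]
total = sum(pmf)
print(total)  # 1.0
```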
Problem Set-5

[1] A machine contains two belts of different lengths. These have times to failure which are exponentially
distributed, with means 𝛼 and 2𝛼. The machine will stop if either belt fails. The failure of the belts are
assumed to be independent. What is the probability that the system performs after time 𝛼 from the start?

[2] Let X be a normal random variable with parameters 𝜇= 10 and 𝜍 2 = 36. 𝐶𝑜𝑚𝑝𝑢𝑡𝑒

𝑎 𝑃 𝑋 > 5 , 𝑏 𝑃 4 < 𝑋 < 16 , 𝑐 𝑃 𝑋 < 8 .

[3] Let 𝑋 ∼ 𝑁 𝜇, 𝜍 2 . 𝐼𝑓 𝑃 𝑋 ≤ 0 = 0.5 𝑎𝑛𝑑 𝑃 −1.96 ≤ 𝑋 ≤ 1.96 = 0.95, 𝑓𝑖𝑛𝑑 𝜇 𝑎𝑛𝑑 𝜍 2 .

[4] It is assumed that the lifetime of computer chips produced by a certain semiconductor manufacturer
are normally distributed with parameters 𝜇= 1.4 × 106 𝑎𝑛𝑑 𝜍 2 = 3 × 105 hours. What is the
approximate probability that a batch of 10 chips will contain at least 2 chips whose lifetime are less than
1.8 × 106 hours?

[5] Let X be a normal random variable with mean 0 and variance 1 i.e. N (0, 1). Prove that

2 /2
2 𝑒 −𝑡
𝑃 𝑋 >𝑡 ≤ ;∀ 𝑡 > 0
𝜋 𝑡

[6] Show that if X is a discrete random variable with values 0, 1, 2, … then



𝐸 𝑋 = 𝑘=0(1 − 𝐹(𝑘)), where F(x) is the distribution function of the random variable X.

[7] The cumulative distribution function of a random variable X defined over 0 ≤ 𝑥 < ∞ is F(x) = 1 − e^{−𝛽x²}, where 𝛽 > 0. Find the mean, median and variance of X.
[8] Show that for any x > 0, 1 − Φ(x) ≤ 𝜙(x)/x, where Φ(x) is the c. d. f. and 𝜙(x) is the p. d. f. of the standard normal distribution.

[9] A point 𝑚0 is said to mode of a random variable X, if the p. m. f. or the p. d. f. of X has a maximum at
𝑚0 . For the distribution given in problem [7], if 𝑚0 denotes the mode; 𝜇 and 𝜍 2 , the variance of the
corresponding random variable, then show that

𝑚0 = √(2/𝜋) 𝜇 and 2𝑚0² − 𝜇² = 𝜍².

[10] Let X be a poison random variable with parameter 𝜆. Find the probability mass function of Y=
𝑋 2 − 5.

[11] Let X be a Binomial random variable with parameters n and p. Find the probability mass function of
Y= n- X.
[12] Consider the discrete random variable X with the probability mass function

1 1 1
𝑃 𝑋 = −2 = , 𝑃 𝑋 = −1 = , 𝑃 𝑋 = 0 = ,
5 6 5
1 10 1
𝑃 𝑋=1 = ,𝑃 𝑋 = 2 = ,𝑃 𝑋 = 3 = .
15 30 30

Find the probability mass function of Y= 𝑋 2 .

[13] The probability mass function of the random variable X is given by

1 2 𝑥
𝑃 𝑋 = 𝑥 = 3 3 𝑥 = 0, 1, 2, …
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
Find the distribution of Y= X/ (X +1).

[14] Let X be a random variable with probability mass function

𝑒 −1 , 𝑥 = 0
−1
𝑒
𝑃 𝑋=𝑥 = , 𝑥 ∈ {±1, ±2, … }
2 𝑋 !
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
Find the p. m. f. and distribution of the random variable Y= |X|.

……………………………………………………………………………………………………..

Useful data

Φ(1/3) = 0.6293, Φ(5/6) = 0.7967, Φ(1) = 0.8413, Φ(4/3) = 0.918

……………………………………………………………………………………………………..

Solution Key

(1) Belt 1
1
X∼ Exp with mean 𝛼 ∼ 𝛼 𝑒 −𝑥/𝛼 ; 𝑥 > 0

Belt 2
1
Y∼ Exp with mean 2𝛼 ∼ 𝑒 −𝑥/2𝛼 ; 𝑥 > 0
2𝛼

P(system works beyond 𝛼)

P(X >𝛼∩ Y> 𝛼)= P(X > 𝛼) P(Y > 𝛼)


= ∫_𝛼^∞ (1/𝛼) e^{−x/𝛼} dx · ∫_𝛼^∞ (1/2𝛼) e^{−x/2𝛼} dx = e^{−1} × e^{−1/2} = e^{−3/2}
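The answer e^{−3/2} ≈ 0.223 can be confirmed by simulation (a sketch, not part of the original key; sample size and seed are arbitrary choices):

```python
import math
import random

random.seed(0)
alpha = 1.0
n = 200_000
# belt lifetimes: exponential with means alpha and 2*alpha; the system
# survives past time alpha only if both belts do
survive = sum(
    1
    for _ in range(n)
    if random.expovariate(1 / alpha) > alpha
    and random.expovariate(1 / (2 * alpha)) > alpha
)
estimate = survive / n
exact = math.exp(-1.5)
```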
(2) (a) P(X > 5) = P((X − 10)/6 > (5 − 10)/6) = P(Z > −5/6); Z ∼ N(0, 1)

5 5
=1−𝜙 − = 1− 1−𝜙
6 6

5
=𝜙 = 0.7967
6
4 − 10 16 − 10
𝑏 𝑃 4 < 𝑋 < 16 = 𝑃 <𝑍< = 𝑃 −1 < 𝑍 < 1
6 6

= 𝜙 1 − 𝜙 −1 = 2𝜙 1 − 1

=⋯

8 − 10 1 1
𝑐 𝑃 𝑋<8 =𝑃 𝑍< = 𝜙 − =1−𝜙 =⋯
6 3 3
1
3 𝑃 𝑋≤0 = =𝑃 𝑋≥0 ⟹𝜇=0
2

𝑃 −1.96 ≤ 𝑋 ≤ 1.96 = 0.95

1.96 𝑋 1.96
𝑃 − ≤ ≤ = 0.95
𝜍 𝜍 𝜍
1.96 1.96
𝑃 − ≤𝑍≤ = 0.95; 𝑍 ∼ 𝑁(0, 1)
𝜍 𝜍
2Φ(1.96/𝜍) − 1 = 0.95
1.96
𝜙 = 0.975
𝜍
1.96
⟹ = 𝜙 −1 0.975 = 1.96 ⟹ 𝜍 = 1
𝜍

( 4) X : lifetime r. v.

X∼ 𝑁 𝜇, 𝜍 2

𝜇 = 1.4 × 10⁶ hrs

𝜍 2 = 3 × 105 𝑕𝑟𝑠

𝑃 𝑋 < 1.8 × 106

𝑋 − 1.4 × 106 0.4 × 106


= <
3 × 105 3 × 105
4
=𝑃 𝑍< 𝑍 ∼ 𝑁 0, 1
3
4
=𝜙 = 0.918
3

Y: r. v. denoting # of chips that have lifetime < 1.8 × 106 𝑕𝑟

Y∼ Bin(10, 0.918)

⟹ P(Y≥ 2)= 1- P(Y< 2)

= 1-P(Y= 0) – P(Y = 1)
10 10 10 1 9
= 1- 0
. 918 ° 1 − .918 − 1
0.918 1 − .918

=⋯
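The elided binomial arithmetic can be completed numerically (a sketch, not part of the original key):

```python
p = 0.918  # P(lifetime < 1.8e6 hrs) = Phi(4/3), from the table of useful data
n = 10
# Y ~ Bin(10, 0.918); P(Y >= 2) = 1 - P(Y = 0) - P(Y = 1)
p_at_least_2 = 1 - (1 - p) ** n - n * p * (1 - p) ** (n - 1)
```

Since p is close to 1, the result is very nearly 1: almost every batch of 10 will contain at least 2 such chips.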

(5) 𝑋 ∼ N 0, 1

∀t>0 𝑃 𝑋 ≥𝑡 =1−P 𝑋 <𝑡

= 1 − P −t < 𝑋 < 𝑡

= 1 − ϕ t − ϕ −t

= 1 − 2ϕ t − 1

=1− 2 1−p X> 𝑡 −1

= 2−2+2P X>𝑡 =2P X>𝑡


∞ −x 2 ∞
1 1 x −x 2
P X>𝑡 = e 2 dx ≤ e 2 dx t < 𝑥 < ∞
2π t 2π t t

x2
y=
2
−t 2

1 1 1 e 2
= e−y dy =
2π t t2 2π 𝑡
2

−t 2 −t 2
1 e2 2 e2
⟹P X ≥t ≤2 =
2π 𝑡 π 𝑡

(6) 𝑋 = 𝑥 0 1 2 … … ..

𝑃 𝑋=𝑥 𝑝0 𝑝1 𝑝2 … … . ..
Σ_{k=0}^{∞} (1 − F(k)) = Σ_{k=0}^{∞} P(X > k) = P(X > 0) + P(X > 1) + P(X > 2) + ⋯

= (𝑝1 + 𝑝2 + 𝑝3 + ⋯) + (𝑝2 + 𝑝3 + ⋯) + (𝑝3 + 𝑝4 + ⋯) + ⋯

= 𝑝1 + 2𝑝2 + 3𝑝3 + ⋯

= Σ_{i=1}^{∞} i 𝑝ᵢ = Σ_{i=0}^{∞} i P(X = i) = E(X)

(7) d. f.

0 , 𝑥<0
F(x)= 2
1 − 𝑒 −𝛽𝑥 , 𝑥 ≥ 0

𝛽>0
p. d. f. f(x) = 2𝛽x e^{−𝛽x²}, x ≥ 0; = 0 otherwise

E(X) = 2𝛽 ∫₀^∞ x² e^{−𝛽x²} dx

Substituting y = x²:

= 𝛽 ∫₀^∞ y^{1/2} e^{−𝛽y} dy = 𝛽 · Γ(3/2)/𝛽^{3/2} = (1/2)√(𝜋/𝛽) = 𝜇

E(X²) = 2𝛽 ∫₀^∞ x³ e^{−𝛽x²} dx = 𝛽 ∫₀^∞ y e^{−𝛽y} dy = 𝛽 · Γ(2)/𝛽² = 1/𝛽

V(X) = E(X²) − (EX)² = 1/𝛽 − 𝜇² = 1/𝛽 − 𝜋/(4𝛽)

median: 𝑚0 ⟹ F(𝑚0) = 1/2 = 1 − F(𝑚0)

i. e. 2𝛽 ∫₀^{𝑚0} x e^{−𝛽x²} dx = 2𝛽 ∫_{𝑚0}^∞ x e^{−𝛽x²} dx = 1/2

i. e. 1 − e^{−𝛽𝑚0²} = 1/2

⟹ 𝑚0 = √((log 2)/𝛽)
2
1 ∞ −𝑦
(8) 1 − 𝜙 𝑥 = ∫
2𝜋 𝑥
𝑒 2 𝑑𝑦

∞ −𝑦 2
1 1
= 𝑦𝑒 2 𝑑𝑦
2𝜋 𝑥 𝑦

−𝑦 2 ∞ −𝑦 2
1 1 ∞ 1
= . −𝑒 2 − − 2 −𝑒 2 𝑑𝑦
2𝜋 𝑦 𝑥 𝑥 𝑦

1 1 −𝑥 2 1 −𝑦 2
= 𝑒 2 − 𝑒 2 𝑑𝑦
2𝜋 𝑥 0 𝑦2

≥0

1 1 −𝑥 2 𝜙(𝑥)
⟹1−𝜙 𝑥 ≤ 𝑒 2 = .
𝑥 2𝜋 𝑥

(9)

Mode- pt at which f(x) is maximum.


2 2
𝑓 ′ 𝑥 = 2𝛽 𝑥𝑒 −𝛽𝑥 −2𝛽𝑥 + 𝑒 −𝛽𝑥

1
𝑓 ′ 𝑥 = 0 ⟹ 2𝛽𝑥 2 = 1 ⟹ 𝑥 =
2𝛽

𝑑 −𝛽𝑥 2
𝑓 ′′ 𝑥 = 2𝛽 𝑒 (1−2𝛽𝑥 2 )
𝑑𝑥
2 2
= 2𝛽 𝑒 −𝛽𝑥 −4𝛽𝑥 + 1 − 2𝛽𝑥 2 𝑒 −𝛽𝑥 −2𝛽𝑥

1 𝛽
𝑓 ′′ (𝑥)| 1 = 2𝛽 𝑒 −2 −4 <0
𝑥=
2𝛽
2

𝛽>0

1
⟹ 𝑚∗ , 𝑡𝑕𝑒 𝑚𝑜𝑑𝑒 𝑜𝑓 𝑡𝑕𝑒 𝑑𝑖𝑠𝑡𝑛 𝑖𝑠 𝑎𝑡 .
2𝛽

1
𝑚∗ =
2𝛽

𝜋 1 𝜋
𝜇=𝐸 𝑋 = . = 2 𝑚∗
2 𝛽 2
𝜋 ∗
𝑖. 𝑒. 𝜇 = 𝑚
2
2 2
& 2𝑚∗ 2 − 𝜇2 = 2 𝜇 − 𝜇2
𝜋
4 2 4 𝜋 1
= 𝜇 − 𝜇2 = . . − 𝜇2
𝜋 𝜋 4 𝛽

1
𝑖. 𝑒. 2𝑚∗ 2 − 𝜇2 = 𝜍 2 = − 𝜇2
𝛽

= 𝐸𝑋 2 − 𝜇2 = 𝑉 𝑋 .

(10) X∼ P(𝜆)

𝑒 −𝜆 𝜆 𝑥
p. m. f. 𝑃 𝑋 = 𝑥 = 𝑥!
, 𝑥 = 0, 1, 2, … .
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

𝑌 = 𝑋 2 − 5 ⟹ 𝑟𝑎𝑛𝑔𝑒 𝑠𝑝𝑎𝑐𝑒 𝑜𝑓 𝑌 = −5, −4, −1, 4, 11, … = 𝓎

𝑃 𝑌 = 𝑦 = 𝑃 𝑋2 − 5 = 𝑦 = 𝑃 𝑋2 = 𝑦 + 5

𝑒 −𝜆 𝜆 𝑦+5
, 𝑦∈𝓎
𝑝. 𝑚. 𝑓. 𝑜𝑓 𝑌: 𝑃 𝑌 = 𝑦 = 𝑃 𝑋 = 𝑦+5 = 𝑦+5 !
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

(11) X∼ B(n, p)
𝑛
𝑥
𝑝 𝑥 1 − 𝑝 𝑛−𝑥 , 𝑥 = 0, 1, … , 𝑛
p. m. f. 𝑃 𝑋 = 𝑥 =
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

𝑌 = 𝑛 − 𝑥 ⟹ 𝓎 = 0, 1, … , 𝑛 .

𝑃 𝑌 =𝑦 =𝑃 𝑛−𝑋 =𝑦 =𝑃 𝑋 =𝑛−𝑦

⟹ 𝑝. 𝑚. 𝑓. 𝑜𝑓 𝑌
𝑛
𝑝𝑛−𝑦 1 − 𝑝 𝑛− 𝑛−𝑦 ; 𝑦 = 0, 1, … , 𝑛
𝑃 𝑌=𝑦 = 𝑛−𝑦
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑛
1 − 𝑝 𝑦 𝑝𝑛−𝑦 , 𝑦 = 0, 1, … , 𝑛
= 𝑦
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

⟹ 𝑌 ∼ 𝐵 𝑛, 1 − 𝑝 .

(12) 𝑌 = 𝑋 2 → 𝑟𝑎𝑛𝑔𝑒 𝑟𝑝 = 0, 1, 4, 9
𝑃 𝑋=0 𝑦=0
𝑃 𝑋 = −1 + 𝑃 𝑋 = 1 𝑦=1
𝑝. 𝑚. 𝑓. 𝑃 𝑌 = 𝑦 =
𝑃 𝑋 = −2 + 𝑃 𝑋 = 2 𝑦=4
𝑃 𝑋=3 𝑦=9

1
, 𝑦=0
5
1 1
+ , 𝑦=1
= 6 15
1 1
+ , 𝑦=4
5 3
1
, 𝑦=9
30
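The mass-collection step for Y = X² can be mechanized (a sketch, not part of the original key), pushing each atom of the p. m. f. of X through x ↦ x²:

```python
from fractions import Fraction as F

# p.m.f. of X from Problem 12
pmf_x = {-2: F(1, 5), -1: F(1, 6), 0: F(1, 5), 1: F(1, 15), 2: F(10, 30), 3: F(1, 30)}

# collect mass at each value of y = x^2
pmf_y = {}
for x, px in pmf_x.items():
    pmf_y[x * x] = pmf_y.get(x * x, F(0)) + px
```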
1 2 𝑥
(13) 𝑃 𝑋 = 𝑥 = 3 3
, 𝑥 = 0, 1, 2, … . .
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

𝑋 𝑌
𝑌= ⟹𝑋=
𝑋+1 1−𝑌
1 2 3
𝑟𝑎𝑛𝑔𝑒 𝑠𝑝𝑎𝑐𝑒 𝑜𝑓 𝑌 = 0, , , , …
2 3 4
𝑋 𝑦
𝑃 𝑌=𝑦 =𝑃 =𝑦 = 𝑃 𝑋=
𝑋+1 1−𝑦
𝑦
1 2 1−𝑦 1 2
= , 𝑦 = 0, , , …
3 3 2 3
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

(14) 𝑝. 𝑚. 𝑓 𝑜𝑓 𝑋

𝑒 −1 ,𝑥=0
𝑒 −1
𝑃 𝑋=𝑥 = , 𝑥 ∈ ±1, ±2, …
2 𝑋 !
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

𝑌 = 𝑋 𝓎 = 0, 1, 2, …

𝑃 𝑌 = 0 = 𝑃 𝑋 = 0 = 𝑒 −1

𝑃 𝑌 = 1 = 𝑃 𝑋 = −1 + 𝑃 𝑋 = 1

𝑒 −1 𝑒 −1
= + = 𝑒 −1
2 2

𝑃 𝑌 = 2 = 𝑃 𝑋 = −2 + 𝑃 𝑋 = 2
𝑒 −1 𝑒 −1 𝑒 −1
= + =
2.2! 2.2! 2

𝑠𝑙𝑦 𝑓𝑜𝑟 𝑘 = 1, 2, ….

𝑃 𝑌 = 𝑘 = 𝑃 𝑋 = −𝑘 + 𝑃 𝑋 = 𝑘

𝑒 −1 𝑒 −1 𝑒 −1
= + =
2. 𝑘! 2. 𝑘! 𝑘!

𝑝. 𝑚. 𝑓. 𝑜𝑓 𝑌

𝑒 −1
𝑃 𝑌=𝑦 = , 𝑦 = 0, 1, 2, … .
𝑦!
0, 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒

𝑌∼𝑃 1 .

Problem Set-6

[1] The probability density function of the random variable X is

1 0<𝑥<1
𝑓𝑋 𝑥 =
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

𝑖. 𝑒. 𝑋 ∼ 𝑈(0, 1). Find the distribution of the following functions of X

(a) 𝑌 = 𝑋 ; 𝑏 𝑌 = 𝑋 2 ; 𝑐 𝑌 = 2𝑋 + 3; 𝑑 𝑌 = −𝜆 log 𝑋; 𝜆 > 0.

[2] Let X be a random variable with 𝑈 0, 𝜃 , 𝜃 > 0 𝑑𝑖𝑠𝑡𝑟𝑖𝑏𝑢𝑡𝑖𝑜𝑛. 𝐹𝑖𝑛𝑑 𝑡𝑕𝑒 𝑑𝑖𝑠𝑡𝑟𝑖𝑏𝑢𝑡𝑖𝑜𝑛 𝑜𝑓 𝑌 =
𝜃
min 𝑋, 2 .

[3] The probability density function of X is given by

1 1 3
𝑓𝑋 𝑥 = 2 − ≤𝑥≤
2 2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

Find the distribution of Y=𝑋 2 .

[4] The probability density function of X is by


𝑥 𝑝−1
𝑓𝑋 𝑥 = 𝑘 𝑥>0
1 + 𝑥 𝑝+𝑞
0 𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒,

𝑝, 𝑞 > 0. Derive the distribution of Y= (1 + 𝑋)−1 .

[5] The probability density function of X is by

𝑘 𝑥 𝛽−1 exp −𝛼𝑥 𝛽 𝑥>0


𝑓𝑋 𝑥 =
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒,

𝛼, 𝛽 > 0. 𝐷𝑒𝑟𝑖𝑣𝑒 𝑡𝑕𝑒 𝑑𝑖𝑠𝑡𝑟𝑖𝑏𝑢𝑡𝑖𝑜𝑛 𝑜𝑓 𝑌 = 𝑋𝛽

[6] According to the Maxwell-Boltzmann law of theoretical physics, the probability density function of V,
the velocity of a gas molecule, is

𝑘 𝑣 2 exp −𝛽𝑣 2 𝑣 > 0


𝑓𝑉 𝑣 =
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒,

𝑤𝑕𝑒𝑟𝑒 𝛽> 0 is a constant which depends on the mass and absolute temperature of the molecule and k > 0
is a normalizing constant. Derive the distribution of the kinetic energy E= 𝑚𝑉 2 /2.

[7] The probability density function of the random variable X is

3 2
𝑓𝑋 𝑥 = 8 (𝑥 + 1) −1<𝑥 <1
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

Find the distribution of the following functions of Y= 1- 𝑋 2 .

[8] Let X be a random variable with U(0, 1) distribution. Find the distribution function of Y= min (X, 1-
X) and the probability density function of Z= (1- Y)/ Y.

[9] Suppose X∼ N 𝜇, 𝜍 2 , 𝜇 ∈ ℜ, 𝜍 ∈ ℜ+. Find the distribution of 2X – 6.

[10] Let X be a continuous random variable on (a, b) with p. d. f. f and c. d. f. F. Find the p. d. f. of Z= -
log (F(X)).

[11] Let X be a continuous r. v. having the following p. d. f.

6𝑥 1 − 𝑥 𝑖𝑓 0 ≤ 𝑥 ≤ 1
f(x)=
0 𝑖𝑓 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

Derive the distribution function of X and hence find the p. d. f. of Y= 𝑋 2 (3 − 2𝑋).

[12] Let X be a distributed as double exponential with p. d. f. f(x)


1
= 𝑒 −|𝑥| ; 𝑥 ∈ ℜ. 𝐹𝑖𝑛𝑑 𝑡𝑕𝑒 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑌 = |𝑋|
2
[13] 3 balls are placed randomly in 3 boxes𝐵1 , 𝐵2 𝑎𝑛𝑑𝐵3 . Let N be the total number of boxes which are
occupied an 𝑋𝑖 be the total number of balls in the box𝐵𝑖 , i= 1, 2, 3. Find the joint p. m. f. of (N, 𝑋1 ) and
(𝑋1 , 𝑋2 ). Obtain the marginal distributions of N, 𝑋1 𝑎𝑛𝑑 𝑋2 from the joint p. m. f. s.

[14] The joint p. m. f. of X and Y is given by

𝑐 𝑥𝑦 𝑖𝑓 𝑥, 𝑦 ∈ { 1, 1 , 2, 1 , 2, 2 , (3, 1)}
p(x, y)=
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
Find the constant c, the marginal p. m. f. of X and Y and the conditional p. m. f. of X given Y= 2.

[15] The joint p. m. f. of X and Y is given by


(𝑥+2𝑦)
𝑖𝑓 𝑥, 𝑦 ∈ { 1, 1 , 1, 2 , 2, 1 , (2, 2)}
p(x, y)= 18
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
(a) Find the marginal distributions.
(b) Verify whether X and Y are independent random variables.
(c) Find P(X< Y), P(X+ Y > 2).
(d) Find the conditional p. m. f. of Y given X= x, x= 1, 2.

[16] 5 cards are drawn at random without replacement from a deck of 52 playing cards. Let the random
variables 𝑋1 , 𝑋2 , 𝑋3 denote the number of spades, the number of hearts, the number of diamonds,
respectively, that appear among the five cards. Find the joint p. m. f. of 𝑋1 , 𝑋2 , 𝑋3 . Also determine
whether the 3 random variables are independent.

[17] Consider a sample of size 3 drawn with replacement from an urn containing 3 white , 2 black and 3
red balls. Let the random variables𝑋1 , 𝑎𝑛𝑑 𝑋2 denote the number of white balls and number of black
balls in the sample, respectively. Determine whether the two random variables are independent.
𝑇
[18] Let 𝑋 = 𝑋1 , 𝑋2 , 𝑋3 𝑏𝑒 𝑎 𝑟𝑎𝑛𝑑𝑜𝑚 𝑣𝑒𝑐𝑡𝑜𝑟 𝑤𝑖𝑡𝑕 𝑗𝑜𝑖𝑛𝑡 𝑝. 𝑚. 𝑓.

1
𝑓𝑋1 ,𝑋2 ,𝑋3 𝑥1 , 𝑥2 , 𝑥3 = 4 𝑥1 , 𝑥2 , 𝑥3 ∈
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

X= {(1, 0, 0), (0, 1, 0), (0, 0, 1),(1, 1, 1)}. Show that 𝑋1 , 𝑋2 , 𝑋3 are pair wise independent but are not
mutually independent.

Solution Key

(1) X∼ U(0, 1)

0 𝑥<𝑈
𝑓𝑋 𝑥 = 𝑥 0 ≤ 𝑥 ≤ 1
1 𝑥>1

𝑠𝑜𝑙𝑢𝑡𝑖𝑜𝑛 𝑢𝑠𝑖𝑛𝑠 𝑑. 𝑓. 𝑚𝑒𝑡𝑕𝑜𝑑


𝑎 𝑌= 𝑋

𝑑. 𝑓. 𝑜𝑓 𝑌: 𝑓𝑌 𝑦 = 𝑃 𝑋≤𝑦

0 𝑦<0
= 𝑃 𝑋 ≤ 𝑦2 = 𝑦2 0≤𝑦≤1
1 𝑦>1

0, 𝑦 < 0
2𝑦, 0 ≤ 𝑦 ≤ 1
𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑌𝑖𝑠 𝑓𝑌 𝑦 = 2𝑦, 0≤𝑦≤1=
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
1, 𝑦 > 1

𝑏 𝑌 = 𝑋2

0, 𝑦 < 0
2
𝑑. 𝑓. 𝑜𝑓 𝑌: 𝑓𝑌 𝑦 = 𝑃 𝑋 ≤ 𝑦 = 𝑃 − 𝑦 ≤𝑥≤ 𝑦 , 0≤𝑦≤1
1, 𝑦>1

𝑓𝑜𝑟 0 ≤ 𝑦 ≤ 1

𝑃 − 𝑦 ≤𝑋≤ 𝑦 = 𝑃 0≤𝑋≤ 𝑦 = 𝐹𝑋 𝑦 = 𝑦

0, 𝑦 < 0
⟹ 𝐹𝑌 𝑦 = 𝑦, 0 ≤ 𝑦 ≤ 1
1, 𝑦>1

1
0≤𝑦≤1
𝑝. 𝑑. 𝑓 𝑜𝑓 𝑌: 𝑓𝑌 𝑦 = 2 𝑦
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

𝑐 𝑌 = 2𝑋 + 3 → 3, 5

𝑑. 𝑓. 𝑜𝑓 𝑌: 𝐹𝑌 𝑦 = 𝑃 2𝑋 + 3 ≤ 𝑦

0, 𝑦 < 3
𝑦−3 𝑦−3
=𝑃 𝑋≤ = , 3≤𝑦≤5
2 2
1, 𝑦 > 5

1
𝑝. 𝑑. 𝑓. 𝑓𝑌 𝑦 = 2 , 3 ≤ 𝑦 ≤ 5 ⟹ 𝑌 ∼ 𝑈 3, 5
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

𝑑 𝑌 = −𝜆 log 𝑋 → 0, ∞
𝑦

𝐹𝑌 𝑦 = 𝑃 𝑌 ≤ 𝑦 = 𝑃 −𝜆 log 𝑋 ≤ 𝑦 = 𝑃 𝑋 > 𝑒 𝜆

𝑦

=1−𝑃 𝑋 ≤𝑒 𝜆
0 𝑦<0
𝑖. 𝑒. 𝐹𝑌 𝑦 = −
𝑦
1−𝑒 𝜆 𝑦≥0

1 −𝑦
𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑌: 𝑓𝑌 𝑦 = 𝜆 𝑒 , 𝑦 ≥ 0
𝜆

0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

𝑖. 𝑒. 𝑌 ∼ 𝐸𝑥𝑝 𝜆 (𝑠𝑐𝑎𝑙𝑒 𝜆)
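Part (d) is the inverse-transform method for generating exponential variates; a simulation sketch (not part of the original key; sample size and seed are arbitrary) checks that the sample mean of −𝜆 log U approaches the scale 𝜆:

```python
import math
import random

random.seed(1)
lam = 2.0
n = 100_000
# if U ~ U(0,1), then -lam*log(U) ~ Exp with scale (mean) lam;
# 1 - random() lies in (0, 1], so log never sees 0
samples = [-lam * math.log(1 - random.random()) for _ in range(n)]
mean_est = sum(samples) / n
```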

(2) X∼𝑈 0, 𝜃

𝜃 𝜃
𝑌 = min 𝑋, → 0, ← 𝑟𝑎𝑛𝑔𝑒 𝑠𝑝 𝑜𝑓 𝑌
2 2
𝜃
𝐹𝑌 𝑦 = 𝑃 𝑌 ≤ 𝑦 = 𝑃 min 𝑋, ≤𝑦
2
𝜃
= 1 − 𝑝 min 𝑋, >𝑦
2
𝜃
=1−𝑝 𝑋 >𝑦∩ >𝑦
2
𝜃
𝑛𝑜𝑤 𝑃 𝑋 > 𝑦, > 𝑦 = 1 𝑖𝑓 𝑦 < 0
2
𝜃
= 0 𝑖𝑓 𝑦 ≥
2
𝜃
𝜃 𝜃 1 𝜃−𝑦
𝑓𝑜𝑟 0 ≤ 𝑦 < ; 𝑃 𝑋 > 𝑦, > 𝑦 = 𝑃 𝑋 > 𝑦 = 𝑑𝑥 =
2 2 𝜃 𝑦 𝜃

0, 𝑦 < 0
𝑦 𝜃
, 0≤𝑦<
⟹ 𝐹𝑌 𝑦 = 𝜃 2
𝜃
1, 𝑦 ≥
2
𝜃
𝑁𝑜𝑡𝑒: − 𝐹𝑌 𝑦 𝑕𝑎𝑠 𝑎 𝑗𝑢𝑚𝑝 𝑑𝑖𝑠𝑐𝑜𝑛𝑡𝑖𝑢𝑖𝑡𝑦 𝑎𝑡
2
1 1 3
, ≤𝑥≤2 1 3
(3) 𝑓𝑋 𝑥 = 2 2 𝑋 ∼ 𝑈 −2,2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
9
𝑌 = 𝑋 2 → 𝑦 ∈ 0,
4
2
𝐹𝑌 𝑦 = 𝑃 𝑋 ≤ 𝑦 = 𝑃 − 𝑦 ≤ 𝑋 ≤ 𝑦
𝑓𝑜𝑟 𝑢 < 0; 𝐹𝑌 𝑦 = 0
9
& 𝑦 > ; 𝐹𝑌 𝑦 = 1
4
𝑦
1 1
𝑓𝑜𝑟, 0 ≤ 𝑦 ≤ ; 𝐹𝑌 𝑦 = 𝑑𝑥 = 𝑦
4 − 𝑦2
1
− 𝑦
1 9 2 1 1 1
𝑓𝑜𝑟, < 𝑦 < ; 𝐹𝑌 𝑦 = 0. 𝑑𝑥 + 𝑑𝑥 = 𝑦+
4 4 − 𝑦 −
1 2 2 2
2
1 𝑦
= +
4 2
0 𝑦<0
1
𝑦 0≤𝑦≤
4
⟹ 𝐹𝑌 𝑦 = 1 1 1 9
𝑦+ , ≤𝑦≤
2 2 4 4
9
1, 𝑦 ≥
4
1 1
, 0≤𝑦≤
2 𝑦 4
𝑝. 𝑑. 𝑓. 𝑓𝑌 𝑦 =
1 1 9
, <𝑦 ≤
4 𝑦 4 4
𝑥 𝑝 −1
𝑘 , 𝑥>0
(4) 𝑓𝑋 𝑥 = 1+𝑥 𝑝 +𝑞
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
1 1−𝑌
𝑌= ⟹𝑋= = 𝑔−1 𝑌 ; 𝑌 ∈ 0, 1
1+𝑋 𝑌
𝑑𝑥 1
𝐽= = − 2
𝑑𝑦 𝑦
𝑓𝑋 𝑔−1 𝑦 𝐽 , 0 ≤ 𝑦 ≤ 1
𝑓𝑌 𝑦 =
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑝−1
1−𝑦 1 𝑝+𝑞 1
= 𝑘. 𝑦 𝑦 −1
. 2, 0≤𝑦 ≤1
𝑦
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑘. 𝑦 𝑞−1 1 − 𝑦 𝑝−1 , 0 ≤ 𝑥 ≤ 1
𝑖. 𝑒. 𝑓𝑌 𝑦 =
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
−1
𝑘 = 𝐵𝑒𝑡𝑎 𝑞, 𝑝
⟹ 𝑌 ∼ 𝐵𝑒𝑡𝑎 (𝑞, 𝑝)
𝛽−1 −𝛼𝑥 𝛽
(5) 𝑓𝑋 𝑥 = 𝑘 𝑥 𝑒 , 𝑥>0
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑑𝑥 1 𝛽1 −1 1
𝑌 = 𝑥𝛽 𝐽 = = 𝑦 𝑥 = 𝑦 𝛽 = 𝑔−1 𝑦
𝑑𝑦 𝛽
𝑓𝑋 𝑔−1 𝑦 𝐽 , 𝑦 > 0
𝑓𝑌 𝑦 =
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1 𝛽−1 1 𝛽1 −1
= 𝑘. 𝑦 𝛽 𝑒 −𝛼𝑦 .𝑦 , 𝑦>0
𝛽
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
1 1 𝛽1 −1
1−
𝑘𝑦 𝛽 𝑒 −𝛼𝑦 .𝑦 , 𝑦>0
= 𝛽
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑒 −𝛼𝑦
𝑘 , 𝑦>0
= 𝛽
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1
𝑘 = 𝛼𝛽 ⟹ 𝑌 ∼ 𝐸𝑥𝑝 .
𝛼
2 −𝛽𝑣
(6) 𝑓𝑉 𝑣 = 𝑘𝑣 𝑒 , 𝑣>0
0 , 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1 2𝐸
𝐸 = 𝑚𝑉 2 ; 𝑉 2 = .
2 𝑚
𝜕𝑒
= 𝑚𝑣
𝜕𝑣
𝜕𝑣 1
𝐽= = .
𝜕𝑒 2𝑚𝑒
2𝑒 −𝛽 2𝑒 1
𝑘. 𝑒 𝑚 . ; 𝑒>0
𝑓𝐸 𝑒 = 𝑚 2𝑚𝑒
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1 2𝛽
= 𝑐. 𝑒 2 exp − 𝑒 , 𝑒>0
𝑚
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
∞ 1 2𝛽
𝑐 𝑖𝑠 ∋ 𝐶. 𝑒 2 exp − 𝑒 𝑑𝑒 = 1
0 𝑚
3
3 2𝛽 2
⎾ 𝑚
𝑖. 𝑒. 𝐶. 2
3 =1 ⟹𝐶 = 3
2𝛽 2 ⎾2
𝑚
⟹ 𝐸 ∼ 𝐺𝑎𝑚𝑚𝑎 (… )
3
𝑥+1 2 , −1<𝑥<1
(7) 𝑓𝑋 𝑥 = 8
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑌 = 1 − 𝑋 2 , 𝑦 ∈ 0, 1
𝑥2 = 1 − 𝑦 ⟹ 𝑥 = ± 1 − 𝑦
𝑑𝑥 1
𝑥 ∈ −1, 0 → 𝑥 = − 1 − 𝑦 = 𝑔−1 1 𝑦 → =
𝑑𝑦 2 1−𝑦
𝑑𝑥 1
𝑥 ∈ 0, 1 → 𝑥 = 1 − 𝑦 = 𝑔2 −1 𝑦 → =
𝑑𝑦 2 1−𝑦
−1, 0 −1, 0 0, 1 (−1, 0)
↓ ↓ ↓ ↓
𝑑𝑥 𝑑𝑥
𝑓𝑌 𝑦 = 𝑓𝑋 𝑔1 −1 𝑦 + 𝑓𝑋 𝑔2 −1 𝑦 . 0<𝑦<1
𝑑𝑦 𝑑𝑦
3 2 1 3 2 1
= 1− 1−𝑦 . + 1+ 1−𝑦 .
8 2 1−𝑦 8 2 1−𝑦
3 2 2
= 1− 1−𝑦 + 1+ 1−𝑦
16 1 − 𝑦
3
= 2 1+ 1−𝑦
16 1 − 𝑦
3 1 1
1 − 𝑦 −2 + 1 − 𝑦 2 , 0 < 𝑦 < 1
𝑖. 𝑒. 𝑓𝑌 𝑦 = 8
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
(8) Y= min (X, 1- X) → range of Y(0, ½ )

𝐹𝑌 𝑦 = 𝑃 𝑌 ≤ 𝑦 = 𝑃 min 𝑋, 1 − 𝑋 ≤ 𝑦 = 1 − 𝑃 min 𝑋, 1 − 𝑋 > 𝑦

= 1 − 𝑃 𝑋 > 𝑦, 1 − 𝑋 > 𝑦

= 1 − 𝑃 𝑋 > 𝑦, 1 − 𝑦 > 𝑋

=1−𝑃 𝑦 <𝑋 <1−𝑦

1 𝑦≤0
1−𝑦
1
𝑑𝑥 𝑖𝑓 0 < 𝑦 <
𝑃 𝑦 <𝑥 <1−𝑦 = 𝑦 2
1
0 𝑦≥
2
0 𝑦≤0
1 1
2𝑦 0<𝑦< 2, 0 < 𝑦 <
⟹ 𝐹𝑌 𝑦 = 2 ⟹ 𝑝. 𝑑. 𝑓. 𝑓𝑌 𝑦 = 2
1 0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1 𝑦≥
2
1−𝑌 1
𝑍= = − 1 → 𝑟𝑎𝑛𝑔𝑒 𝑜𝑓 𝑍 𝑖𝑠 1, ∞
𝑌 𝑌

𝐹𝑍 Ʒ = 𝑃 𝑍 ≤ Ʒ

𝑖𝑓 Ʒ ≤ 1, 𝑡𝑕𝑒𝑛 𝐹𝑍 Ʒ = 0

1 1
𝑖𝑓 Ʒ > 1, 𝑡𝑕𝑒𝑛 𝑃 𝑍 ≤ Ʒ = 𝑃 −1≤Ʒ = 𝑃 ≤Ʒ+1
𝑌 𝑌
1 1
=𝑃 𝑌≥ = 1−𝑃 𝑦 <
Ʒ+1 Ʒ+1
2
=1− 𝑢𝑠𝑖𝑛𝑔 𝑑. 𝑓. 𝑜𝑓 𝑌
Ʒ+1
0, 𝑖𝑓Ʒ ≤ 1
⟹ 𝐹𝑍 Ʒ = 2
1− , 𝑖𝑓 Ʒ > 1
Ʒ+1
2
, Ʒ>1
⟹ 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑍 𝑖𝑠 𝑓𝑍 Ʒ = Ʒ+1 2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

(9) X∼ N 𝜇, 𝜍 2
𝑌 = 2𝑋 − 6; 𝑦 ∈ (−∞, ∞)
𝐹𝑌 𝑦 = 𝑃 𝑌 ≤ 𝑦 = 𝑃 2𝑋 − 6 ≤ 𝑦
𝑦+6
=𝑃 𝑋≤
2
𝑦+6
𝑋−𝜇 −𝜇 𝑦 + 6 − 2𝜆
=𝑃 ≤ 2 = 𝜙
𝜍 𝜍 2𝜍
𝑦 + 6 − 2𝜆 1
𝑓𝑌 𝑦 = 𝜙 . . 𝑦 ∈ −∞, ∞
2𝜍 2𝜍
2
1 1 𝑦 − 2𝜇 − 6 1 1 1 2
= 𝑒2 . = exp − 𝑦 − 2𝜇 − 6
2𝜋 2𝜍 2𝜍 2𝜋 2𝜍 2 4𝜍 2
⟹ 𝑌 ∼ 𝑁(2𝜇 − 6, 4𝜍 2 )
(10) X∼𝑓𝑋 𝑥
𝑍 = − log 𝐹 𝑋 ; Ʒ ∈ 0, ∞
𝜕𝑧 𝑓 𝑥 𝐹 𝑥
Ʒ = − log 𝐹 𝑋 ⟹ 𝑥 = 𝐹 −1 𝑒 −𝑧 = ⟹ 𝐽 =
𝜕𝑥 𝐹 𝑥 𝑓 𝑥
𝐹 𝐹 −1 𝑒 −𝑧
𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑍 = 𝑓 𝐹 −1 𝑒 −𝑧 .
𝑓 𝐹 −1 𝑒 −𝑧
= 𝐹 𝐹 −1 𝑒 −𝑧 = 𝑒 −𝑧 ;
𝑒 −Ʒ , Ʒ > 0
⟹ 𝑓𝑍 𝑧 =
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
6𝑥 1 − 𝑥 , 0 ≤ 𝑥 ≤ 1
(11) 𝑓𝑋 𝑥 =
0 𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
0, 𝑥 < 0
𝑥
𝐹𝑋 𝑥 = 𝑃 𝑋 ≤ 𝑥 = 6𝑦 − 6𝑦 2 𝑑𝑥, 0≤𝑥≤1
0
1, 𝑥>1
0, 𝑥 < 0
2
= 𝑥 3 − 2𝑥 , 0 ≤ 𝑥 ≤ 1
1, 𝑥 > 1
𝐹𝑋 𝑋 = 𝑋 2 3 − 2𝑋
If X has distribution function 𝐹𝑋(x), then Y = F(X) ∼ U(0, 1)

[X ∼𝑓𝑋 𝑥 𝑝. 𝑑. 𝑓. & 𝑑. 𝑓. 𝐹.]

Cont r. v. Y= F(X) →y ∈ (0, 1)[ General result as in prob. 10]
X= 𝐹 −1 𝑦
𝑑𝑦 𝑑𝑥 1
=𝑓 𝑥 =
𝑑𝑥 𝑑𝑦 𝑓 𝑥
𝑓𝑋 𝐹 −1 𝑦
= 1 𝑖𝑓 0 < 𝑦 < 1
⟹ 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑌 ∶ 𝑓𝑌 𝑦 = 𝑓𝑋 𝐹 −1 𝑦
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
⟹ 𝑌 ∼ 𝑈 0, 1
⟹ 𝐹𝑜𝑟 𝑡𝑕𝑒 𝑔𝑖𝑣𝑒𝑛 𝑑𝑖𝑠𝑡𝑛 𝑋 2 3 − 2𝑋 = 𝐹 𝑋 ∼ 𝑈(0, 1)
(12) X∼ Double exponential

1
𝑓𝑋 𝑥 = 𝑒 − 𝑥 ; −∞ < 𝑥 < ∞
2
𝑌 = 𝑋 𝑟𝑎𝑛𝑔𝑒 𝑜𝑓 𝑌 ∶ 0, ∞
𝑑𝑥
𝑥 ∈ −∞, 0 → 𝑥 = −𝑦 → = 1
𝑑𝑦
𝑑𝑥
𝑥 ∈ 0, ∞ → 𝑥 = 𝑦 → = 1
𝑑𝑦
⟹ 𝑓𝑌 𝑦 = 𝑓𝑋 𝑔1 −1 𝑦 𝐽 + 𝑓𝑋 𝑔2 −1 𝑦 𝐽

In (-∞, 0)
1 −𝑦 1 −𝑦
𝑖. 𝑒. 𝑓𝑌 𝑦 = 2 𝑒 + 2 𝑒 , 0 < 𝑦 < ∞
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑒 −𝑦 , 0 < 𝑦 < ∞
∴ 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑌 ∶ 𝑓𝑌 𝑦 =
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
(13) 𝑋𝑖 = 0, 1, 2, 3 𝑓𝑜𝑟 𝑖 = 1, 2, 3

𝑁 = 1, 2, 3

Possible configurations with 3 boxes and 3 balls

𝐵1 𝐵2 𝐵3 ↓N 𝑋1 𝑋2 𝑋3

1 3 0 0
3 0 0 → 1 0 3 0
1 0 0 3
0 3 0 2 2 1 0
2 2 0 1
0 0 3 2 1 2 0
1 1
2 0 2 1
2 1 0 → each with prob. = 3+3−1 = 10 2 1 0 2
3
2 0 1 2
2 0 1 3 1 1 1

1 2 0
0 2 1

1 0 2

0 1 2

1 1 1

𝑗𝑡 𝑝. 𝑚. 𝑓. 𝑜𝑓 (𝑁, 𝑋1 ) 𝑗𝑡 𝑝. 𝑚. 𝑓. 𝑜𝑓 (𝑁, 𝑋1 )

𝑋1 0 1 2 3
𝑋2 0 1 2 3
N
2 1 𝑋1
1 0 0 2 1
10 10 1 0 0
2 2 2 2 10 10
0 2 2 2 2
3 10 10 10 0
1 3 10 10 10
0 0 0 1
10 0 10
0 0
4 3 2 1
10 10 10 10

4 3 2
Marginal of 𝑋1 10 10 10
1
10

Marginal of 𝑋2

(14) 𝑥,𝑦 𝑝 𝑥, 𝑦 = 𝐶 𝑥,𝑦 𝑥𝑦 = 1

⟹ 𝐶 1, 1 + 2, 1 + 2, 2 + 3, 1 = 1

1
⟹𝐶=
10

𝑗𝑡 𝑝. 𝑚. 𝑓.

𝑌 1 2
X
1 1 1
10
0 10
2 2 4 6
3 10 10 10
} marginal of
3 X
10
0
3
10
6 4
10 10
Marginal of Y
𝑝 𝑥,2
Conditional p. m. f. of X given Y= 2, 𝑝𝑌 2
= 1 𝑖𝑓 𝑥 = 2

= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
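The marginals and the conditional p. m. f. in (14) can be recomputed from the joint table (a sketch, not part of the original key):

```python
from fractions import Fraction

# joint p.m.f. with c = 1/10: p(x, y) = x*y/10 on the four support points
joint = {(x, y): Fraction(x * y, 10) for (x, y) in [(1, 1), (2, 1), (2, 2), (3, 1)]}

marg_x, marg_y = {}, {}
for (x, y), p in joint.items():
    marg_x[x] = marg_x.get(x, Fraction(0)) + p
    marg_y[y] = marg_y.get(y, Fraction(0)) + p

# conditional p.m.f. of X given Y = 2: only x = 2 carries positive mass
cond_x_given_y2 = {x: joint[(x, y)] / marg_y[2] for (x, y) in joint if y == 2}
```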

(15) 𝑗𝑡 𝑝. 𝑚. 𝑓.

𝑌 1 2
X
1 3 5 8
18 18 18
2 4 6 10
18
} marginal of
18 18
X

7 11
18 18 ↔

Marg of Y
(b) P(X = 1, Y = 1) = 3/18 ≠ P(X = 1) P(Y = 1) = (8/18)(7/18)

⟹ X and Y are not independent

5
𝑐 𝑃 𝑋 < 𝑌 = 𝑃 𝑋 = 1, 𝑌 = 2 =
18
15
𝑃 𝑋 + 𝑌 > 2 = 𝑃 𝑋 = 1, 𝑌 = 2 + 𝑃 𝑋 = 2, 𝑌 = 1 + 𝑃 𝑋 = 2, 𝑌 = 2 =
18
𝑥+3
𝑑 𝑚𝑎𝑟𝑔 𝑜𝑓 𝑋: 𝑝𝑋 𝑥 = 𝑃 𝑋 = 𝑥 = ; 𝑥 = 1, 2
9
1
18 (𝑥 + 2𝑦) 𝑥 + 2𝑦
𝑝𝑌|𝑋=𝑥 = = ; 𝑦 = 1, 2
1 2𝑥 + 6
18 (2𝑥 + 6)

(16) 𝑖𝑓 𝑝. 𝑚. 𝑓. 𝑜𝑓 𝑋1 , 𝑋2 , 𝑋3
13 13 13 13 3
𝑥1 𝑥2 𝑥3 5−𝑥 1 −𝑥 2 −𝑥 3
𝑝𝑋1 ,𝑋2 ,𝑋3 𝑥1 , 𝑥2 , 𝑥3 = 52 ; 𝑥𝑖 ≥ 0 & 𝑥𝑖 ≤ 5
5 1
13 39
𝑥 5−𝑥
𝑝𝑋 𝑖 𝑥 = 52 𝑥 = 0, 1, 2, 3, 4, 5
5
𝑝𝑋1 𝑥1 𝑝𝑋2 𝑥2 𝑝𝑋3 𝑥3 ≠ 𝑝𝑋1 ,𝑋2 ,𝑋3 𝑥1 , 𝑥2 , 𝑥3
⟹(𝑋1 , 𝑋2 , 𝑋3 ) 𝑎𝑟𝑒 𝑛𝑜𝑡 𝑖𝑛𝑑𝑒𝑝.
(17) 𝑋1 : # 𝑜𝑓 𝑤𝑕𝑖𝑡𝑒 𝑏𝑎𝑙𝑙𝑠
𝑋2 : # 𝑜𝑓 𝑏𝑙𝑎𝑐𝑘 𝑏𝑎𝑙𝑙𝑠.
𝑊, 2 𝐵 , 1 𝑅 − 7
3! 3 𝑥 1 2 𝑥 2 3 3−𝑥 1 −𝑥 2
= 𝑝𝑋1 ,𝑋2 𝑥1 , 𝑥2 = ; 𝑥𝑖 ≥ 0, 𝑥1 + 𝑥2
𝑥1 ! 𝑥2 ! 3 − 𝑥1 − 𝑥2 ! 8 8 8
≤3
3 2
𝑋1 , 𝑋2 ∼ 𝑀𝑢𝑙𝑡 3, ,
8 8
3 3 𝑥1
5 3−𝑥 1
𝑝𝑋1 𝑥1 = ; 𝑥1 = 0, 1, 2, 3
𝑥1 8 8
3 2 𝑥 1 6 3−𝑥 2
𝑝𝑋2 𝑥2 = ; 𝑥2 = 0, 1, 2, 3
𝑥2 8 8
3 2
𝑖. 𝑒. 𝑋1 ∼ 𝐵 3, ; 𝑋2 ∼ 𝐵 3,
8 8
𝑝𝑋1 𝑥1 𝑝𝑋2 𝑥2 ≠ 𝑝𝑋1 ,𝑋2 𝑥1 , 𝑥2
⟹ 𝑋1 & 𝑋2 𝑎𝑟𝑒 𝑛𝑜𝑡 𝑖𝑛𝑑𝑒𝑝.
(18) From the jt p. m. f. of 𝑋1 , 𝑋2
1
𝑃 𝑋1 = 0, 𝑋2 = 0 = 𝑃 𝑋1 = 0, 𝑋2 = 1 = 𝑃 𝑋1 = 1, 𝑋2 = 0 = 𝑃 𝑋1 = 1, 𝑋2 = 1 =
4
𝐹𝑢𝑟𝑡𝑕𝑒𝑟 𝑋1 , 𝑋2 ≡ 𝑋1 , 𝑋3 ≡ 𝑋2 , 𝑋3
1
& 𝑃 𝑋𝑖 = 0 = = 𝑃 𝑋𝑖 = 1 ; 𝑖 = 1, 2, 3
2
⟹ 𝑋1 , 𝑋2 , 𝑋3 𝑎𝑟𝑒 𝑝𝑎𝑖𝑟 𝑤𝑖𝑠𝑒 𝑖𝑛𝑑𝑒𝑝.
1 1
𝐵𝑢𝑡 𝑃 𝑋1 = 0, 𝑋2 = 0, 𝑋3 = 0 = ≠ 𝑃 𝑋1 = 0 𝑃 𝑋2 = 0 𝑃 𝑋3 = 0 =
4 8
⟹ 𝑋1 , 𝑋2 , 𝑋3 𝑎𝑟𝑒 𝑛𝑜𝑡 𝑖𝑛𝑑𝑒𝑝.
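The pairwise-but-not-mutual independence claim in (18) can be verified exhaustively over the four support points (a sketch, not part of the original key; the helper name prob is mine):

```python
from fractions import Fraction
from itertools import product

# support of (X1, X2, X3): each of the four points carries mass 1/4
points = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
mass = Fraction(1, 4)

def prob(event):
    return sum(mass for pt in points if event(pt))

# pairwise independence: P(Xi=a, Xj=b) = P(Xi=a) P(Xj=b) for every pair and values
pairwise = all(
    prob(lambda t: t[i] == a and t[j] == b)
    == prob(lambda t: t[i] == a) * prob(lambda t: t[j] == b)
    for i, j in [(0, 1), (0, 2), (1, 2)]
    for a, b in product((0, 1), repeat=2)
)

# mutual independence fails: P(X1=0, X2=0, X3=0) = 0, not (1/2)^3 = 1/8
p000 = prob(lambda t: t == (0, 0, 0))
```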

Problem Set-7

[1] The joint p. d. f. of (X, Y) is given by f(x, y) = 4xy for 0 < x < 1, 0 < y < 1; and 0 otherwise.

Find the marginal p. d. f. s and verify whether the random variables are independent. Also find

P(0 < X < ½, ¼ < Y < 1) and P(X + Y < 1).

𝑒 −(𝑥+𝑦) 0 < 𝑥, 𝑦 < ∞


[2] If the joint p. d. f. of (X, Y) f(x, y)=
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒,
Show that X and Y are independent.

[3] If the joint p. d. f. of (X, Y) is f(x, y) = 2e^{−(x+y)} for 0 < x < y < ∞; and 0 otherwise,
show that X and Y are not independent.

[4] Show that the random variables X and Y with joint p. d. f.

12𝑥𝑦 1 − 𝑦 0 < 𝑥 < 1, 0 < 𝑦 < 1


f(x, y)=
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
Are independent.
[5] Suppose the joint p. d. f. of (X, Y) is f(x, y) = cx²y for 0 < x < y < 1; and 0 otherwise.
Find (a) the value of the constant c, (b) the marginal p. d. f. s of X and Y and (c) P(X + Y ≤ 1).

6 1 − 𝑥 − 𝑦 𝑥 > 0, 𝑦 > 0, 𝑥 + 𝑦 < 1


[6] The joint p. d. f. of (X, Y) is given by f(x, y)=
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
Find the marginal p. d. f. of X and Y and P(2X + 3Y < 1).
[7] The joint p. d. f. of (X, Y) is f(x, y) = x + y for 0 < x < 1, 0 < y < 1; and 0 otherwise.
Find the conditional distribution of Y given X = x, 0 < x < 1, and its conditional mean and conditional variance.
𝑐𝑥
0<𝑥<𝑦
[8] Suppose the conditional p. d. f. of X given Y = y is f(x|y)= 𝑦2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
4
Further, the marginal distribution of Y is g(y)= 𝑑𝑦 0 < 𝑦 < 1
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
(a) Find the constants c and d.
(b) The p. d. f. of (X, Y).
(c) P(0.25 < X < 0.5) and P(0.25 < X < 0.5 |Y= 0.625)

[9] Let f(x) and g(y) be two arbitrary p. d. f. s with corresponding distribution functions F(x) and G(y)
respectively. Suppose the joint p. d. f. of X and Y is given by

h(x, y)= f(x) g(y)[1+𝛼 2𝐹 𝑥 − 1 {2𝐺 𝑦 − 1}], |𝛼|≤ 1.

Show that the marginal p. d. f. of X and Y are f(x) and g(y), respectively. Does there exist a value of 𝛼 for
which the random variables X and Y are independent?

4𝑥 1 − 𝑥 2 , 0 < 𝑥 < 1
[10] Suppose the marginal density of the random variable is 𝑓𝑋 𝑥 =
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
And the conditional density of the random variable Y given X = x is

2𝑦
, 𝑥 < 𝑦 < 1, 0 < 𝑥 < 1
𝑓𝑌|𝑋=𝑥 𝑦|𝑥 = 1 − 𝑥2
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

Find the conditional p. d. f. of X given Y= y, E(X|Y= ½ ) and Var (X| Y = ½ ).


[11] The joint p. d. f. of (X, Y) is f(x, y) = e^{−(x+y)} for 0 < x, y < ∞; and 0 otherwise.
Find the joint m. g. f. of (X, Y) and the m. g. f. of Z = X + Y, and hence V(Z).

[12] Derive the joint m. g. f. of


𝑋1 , 𝑋2 ∼ 𝑁2 𝜇1 , 𝜇2 , 𝜍1 2 , 𝜍2 2 , 𝜌 𝑎𝑛𝑑 𝑢𝑠𝑖𝑛𝑔 𝑡𝑕𝑒 𝑗𝑜𝑖𝑛𝑡 𝑚. 𝑔. 𝑓. 𝑓𝑖𝑛𝑑 𝜌 𝑋1 , 𝑋2 .
2 0<𝑥<𝑦<1
[13] Let the joint p. d. f. of (X, Y) be f(x, y)=
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒,
Find the conditional mean and conditional variance of X given Y= y and that of Y given X= x. Compute
further 𝜌 𝑋, 𝑌 .

[14] Let X, Y and Z be three random variables and a and b be two scalar constants. Prove that (a) Cov(X,
b)= Cov(Y, b)= 𝐶𝑜𝑣 𝑍, 𝑏 = 0; 𝑏 𝐶𝑜𝑣 𝑋, 𝑎𝑌 + 𝑏 = 𝑎 𝐶𝑜𝑣 𝑋, 𝑌 ; 𝑐 𝐶𝑜𝑣 𝑋, 𝑌 + 𝑍 = 𝐶𝑜𝑣 𝑋, 𝑌 +
𝐶𝑜𝑣 𝑋, 𝑍 ; 𝑑 𝜌 𝑋, 𝑎𝑌 + 𝑏 = 𝜌(𝑋, 𝑌) for a>0.

[15] Let 𝑋1 , 𝑋2 , 𝑎𝑛𝑑𝑋3 be three independent random variables each with a variance
𝜍 2 . 𝐷𝑒𝑓𝑖𝑛𝑒 𝑡𝑕𝑒 𝑛𝑒𝑤 𝑟𝑎𝑛𝑑𝑜𝑚 𝑣𝑎𝑟𝑖𝑎𝑏𝑙𝑒𝑠

3−1 3− 3
𝑊1 = 𝑋1 , 𝑊2 = 𝑋1 + 𝑋2 𝑎𝑛𝑑 𝑊3
2 2
= 2 − 1 𝑋2 + 2 − 2 𝑋3 . 𝐹𝑖𝑛𝑑 𝜌 𝑊1 , 𝑊2 , 𝜌 𝑊1 , 𝑊3 𝑎𝑛𝑑 𝜌 𝑊2 , 𝑊3 .

[16] Let (X, Y)∼𝑁2 3, 1, 16, 25, 0.6 . Find (a)P(3< Y< 8); (b) P(3 < y< 8| X= 7); (c) P(-3 <X <3) and (d)
P(-3 <X < 3| Y= 4).

[17] Let (X, Y) ∼ 𝑁2 5, 10, 1, 25, 𝜌 𝑤𝑖𝑡𝑕 𝜌 > 0. 𝐼𝑓 𝑖𝑡 𝑖𝑠 𝑔𝑖𝑣𝑒𝑛 𝑡𝑕𝑎𝑡 𝑃 4 < 𝑦 < 16 𝑋 = 5) =
0.954 𝑎𝑛𝑑 𝜙 2 = 0.977, find the value of 𝜌.

[18] Let X₁, X₂, …, X₂₀ be independent random variables with identical distributions, each with mean 2 and variance 3. Define Y = Σ_{i=1}^{15} Xᵢ and Z = Σ_{i=11}^{20} Xᵢ. Find E(Y), E(Z), V(Y), V(Z) and ρ(Y, Z).

[19] Let X and Y be a jointly distributed random variables with E(X)= 15, E(Y)= 20, V(X)= 25, V(Y)=
100 and 𝜌(X, Y)= -0.6. Find 𝜌(X- Y, 2X – 3Y).

[20] Suppose that the lifetime of light bulbs of a certain kind follows an exponential distribution with p. d. f.

f_X(x) = (1/50) e^{−x/50} for x > 0; and 0 otherwise.

Find the probability that among 8 such bulbs, 2 will last less than 40 hours, 3 will last anywhere between 40 and 60 hours, 2 will last anywhere between 60 and 80 hours and 1 will last for more than 80 hours. Find the expected number of bulbs in a lot of 8 with lifetime between 60 and 80 hours, and also the expected number of bulbs in a lot of 8 with lifetime between 60 and 80 hours, given that the number of bulbs with lifetime anywhere between 40 and 60 hours is 2.

[21] Let the random variables X and Y have the following joint p. m. f. s

(a) P(X= x, Y= y)= 1/3, if (x, y)∈{(0, 0), (1, 1), (2, 2)} and 0 otherwise.

(b) P(X= x, Y= y)= 1/3 , if (x, y)∈ {(0, 2), (1, 1), (2, 0)} and 0 otherwise.

(c) P(X= x, Y= y)= 1/3 , if (x, y) ∈ {(0, 0), (1, 1), (2, 0)} and 0 otherwise.

In each of the above cases find the coefficient of correlation between X and Y.

[22] The joint p. m. f. of (X, Y) is

P(X= x, Y= y)= xy/10, if (x, y)∈ {(1, 1), (2, 1), (2, 2), (3, 1)} and 0 otherwise.

Find the joint m. g. f. of X and Y and the coefficient of correlation between X and Y. Using the joint m. g.
f. , find the p. m. f. Z= X+ Y.
[23] Let M_{X,Y}(u, v) denote the joint m. g. f. of (X, Y) and Ψ(u, v) = log M_{X,Y}(u, v). Show that

∂Ψ/∂u |_{u=v=0}, ∂Ψ/∂v |_{u=v=0}, ∂²Ψ/∂u² |_{u=v=0}, ∂²Ψ/∂v² |_{u=v=0} and ∂²Ψ/∂u∂v |_{u=v=0}

yield the means, the variances and the covariance of the two random variables.
[24] Let the joint p. d. f. of X and Y be

f_{X,Y}(x, y) = ½ [f_ρ(x, y) + f_{−ρ}(x, y)]; −∞ < x, y < ∞,

where f_ρ(x, y) is the probability density function of N₂(0, 0, 1, 1, ρ) and f_{−ρ}(x, y) is the probability density function of N₂(0, 0, 1, 1, −ρ). Find the marginal p. d. f. s of X and Y and the correlation coefficient between X and Y. Are the two variables independent?

[25] Let the joint p. d. f. of X and Y be given by


𝑘, 𝑖𝑓 − 𝑥 < 𝑦 < 𝑥; 0 < 𝑥 < 1
𝑓𝑋,𝑌 𝑥, 𝑦 =
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
Find the value of the constant k and obtain the conditional expectations E(X|Y= y) and E(Y| X= x). Verify
whether the 2 random variables are independent and / or uncorrelated.

[26] The joint moment generating function of X and Y is given by


1
𝑀𝑋,𝑌 𝑠, 𝑡 = 𝑎 𝑒 𝑠+𝑡 + 1 + 𝑏 𝑒 𝑠 + 𝑒 𝑡 , 𝑎, 𝑏 > 0; 𝑎 + 𝑏 = .
2
Find the correlation coefficient between X and Y.

[27] Let X and Y be jointly distributed random variables with

E(X) = E(Y) = 0, E(X²) = E(Y²) = 2 and ρ(X, Y) = 1/3.

Find ρ(X/3 + 2Y/3, 2X/3 + Y/3).

Solution Key

(1) 𝑗𝑡 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑋, 𝑌
4𝑥𝑦 0 < 𝑥 < 1, 0 < 𝑦 < 1
𝑓𝑋,𝑌 𝑥, 𝑦 =
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑀𝑎𝑟𝑔𝑖𝑛𝑎𝑙 𝑜𝑓 𝑋 ∶
1
𝑓𝑋 𝑥 = 4𝑥𝑦 𝑑𝑦 = 2𝑥 0 < 𝑥 < 1
0
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
2𝑦, 0<𝑦<1
𝑆𝑙𝑦 𝑓𝑌 𝑦 =
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
Observe that f_{X,Y}(x, y) = f_X(x) f_Y(y) ⟹ X & Y are indep.

P(0 < X < ½, ¼ < Y < 1) = P(0 < X < ½) P(¼ < Y < 1)

= [∫₀^{1/2} 2x dx][∫_{1/4}^{1} 2y dy] = (1/4)(15/16) = 15/64

P(X + Y < 1) = ∫₀¹ P(X < 1 − y) f_Y(y) dy   (since X & Y are indep.)

= ∫₀¹ [∫₀^{1−y} 2x dx] 2y dy = ∫₀¹ 2y(1 − y)² dy = 1/6
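The two probabilities in solution (1) work out to 15/64 ≈ 0.2344 and 1/6 ≈ 0.1667. A Monte Carlo sketch (standard library only; the seed and sample size are arbitrary choices) agrees: since X and Y each have d. f. F(t) = t² on (0, 1), they can be simulated by inverse transform as √U with U uniform.

```python
import random, math

random.seed(0)
N = 200_000
# X and Y independent with density 2t on (0,1): F(t) = t^2, so T = sqrt(U)
xs = [math.sqrt(random.random()) for _ in range(N)]
ys = [math.sqrt(random.random()) for _ in range(N)]

p1 = sum(1 for x, y in zip(xs, ys) if x < 0.5 and 0.25 < y) / N
p2 = sum(1 for x, y in zip(xs, ys) if x + y < 1) / N
print(p1, p2)   # close to 15/64 and 1/6
```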

(2) 𝑓𝑋 𝑥 = ∫0 𝑒 −𝑥 𝑒 −𝑦 𝑑𝑦 = 𝑒 −𝑥 𝑥 > 0
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑓𝑌 𝑦 = 𝑒 −𝑦 𝑦 > 0
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑓𝑋,𝑌 𝑥, 𝑦 = 𝑓𝑋 𝑥 𝑓𝑌 𝑦
⟹ 𝑋 & 𝑌 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝.
∞ −𝑥 −𝑦
(3) 𝑓𝑋 𝑥 = ∫𝑥 2𝑒 𝑒 𝑑𝑦
= 2𝑒 −𝑥 𝑒 −𝑥 = 2𝑒 −2𝑥 𝑥 > 0
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑦
𝑠𝑙𝑦 𝑓𝑌 𝑦 = 2 𝑒 −𝑦 𝑒 −𝑥 𝑑𝑥 = 2 𝑒 −𝑦 1 − 𝑒 −𝑦 𝑦 > 0
0
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑓 𝑥, 𝑦 ≠ 𝑓 𝑥 𝑓 𝑦
⟹ 𝑋 & 𝑌 𝑎𝑟𝑒 𝑛𝑜𝑡 𝑖𝑑𝑒𝑝.
1 𝑦2 𝑦3
(4) 𝑓𝑋 𝑥 = 12𝑥 ∫0 𝑦 − 𝑦 2 𝑑𝑦 = 12𝑥 | 10
2
− 3
2𝑥 0 < 𝑥 < 1
⟹ 𝑓𝑋 𝑥 =
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1
𝑓𝑌 𝑦 = 12𝑦 1 − 𝑦 𝑥 𝑑𝑥 = 6 𝑦 1 − 𝑦 0<𝑦<1
0
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑓 𝑥, 𝑦 = 𝑓 𝑥 𝑓 𝑦
⟹ 𝑋 & 𝑌 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝.
1 1 1 2 1
(5) ∫0 ∫𝑥 𝑓 𝑥, 𝑦 𝑑𝑦 𝑑𝑥 = 1, 𝑖. 𝑒. 𝐶 ∫0 𝑥 ∫𝑥 𝑦 𝑑𝑦 𝑑𝑥 = 1
1
1
⟹ 𝐶 𝑥 2 1 − 𝑥 2 𝑑𝑥 = 1
0 2
3 5
𝑐 𝑥 𝑥 1
⟹ − = 1 ⟹ 𝐶 = 15
2 3 5 0
15 2
1 𝑥 1 − 𝑥2 , 0 < 𝑥 < 1
(b) 𝑓𝑋 𝑥 = 15𝑥 2 ∫𝑥 𝑦 𝑑𝑦 = 2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑦
5𝑦 4 , 0 < 𝑦 < 1
𝑓𝑌 𝑦 = 15𝑦 𝑥 2 𝑑𝑥 =
0 0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

𝑐 𝑃 𝑋+𝑌 ≤1 = 15 𝑥 2 𝑦 𝑑𝑦 𝑑𝑥
𝑥+𝑦≤1
𝑥<𝑦
1 1
1−𝑥
2
2
2 𝑦2 1 − 𝑥
= 15 𝑥 𝑦 𝑑𝑦 𝑑𝑥 = 15 𝑥2 | 𝑑𝑥
0 𝑥 0 2 𝑥
15
=⋯= .
192

𝐴𝑙𝑡 𝑃 𝑋 + 𝑌 ≤ 1 = 15 𝑥 2 𝑦 𝑑𝑦 𝑑𝑥
𝑥+𝑦≤1
𝑥<𝑦
1
𝑦 1 1−𝑦
2
2
= 15 𝑦 𝑥 𝑑𝑥 𝑑𝑦 + 15 𝑦 𝑥 2 𝑑𝑥 𝑑𝑦
1
0 0 0
2
15 15 15
=⋯= + =
15 × 32 10 × 32 192
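A deterministic cross-check of solution (5), not part of the original: a midpoint-rule double integral of f(x, y) = 15x²y over the triangle 0 < x < y < 1 should return total mass ≈ 1 and P(X + Y ≤ 1) ≈ 15/192 = 5/64. The grid size is an arbitrary accuracy choice.

```python
n = 1000
h = 1.0 / n
total = 0.0    # integral of f over 0 < x < y < 1 (should be ~1)
p_sum = 0.0    # integral of f over the same region with x + y <= 1
for i in range(n):
    x = (i + 0.5) * h                  # midpoint in x
    for j in range(n):
        y = (j + 0.5) * h              # midpoint in y
        if x < y:
            w = 15 * x * x * y * h * h
            total += w
            if x + y <= 1:
                p_sum += w
print(total, p_sum)   # roughly 1.0 and 0.078125
```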
1−𝑥 1−𝑥 𝑦2 1−𝑥
(6) 𝑓𝑋 𝑥 = ∫0 𝑓𝑋,𝑌 𝑥, 𝑦 𝑑𝑦 = 6 ∫0 1 − 𝑥 − 𝑦 𝑑𝑦 = 6 1 − 𝑥 𝑦 − 0
2
3 1 − 𝑥 2, 0 < 𝑥 < 1
=
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑠𝑙𝑦 𝑏𝑦 𝑠𝑦𝑚𝑚𝑒𝑡𝑟𝑦
3 1 − 𝑦 2, 0 < 𝑦 < 1
𝑓𝑌 𝑦 =
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1 1−2𝑥
2 3
𝑃 2𝑋 + 3𝑌 < 1 = 6 1 − 𝑥 − 𝑦 𝑑𝑦 𝑑𝑥
0 0
1 1 − 2𝑥
2 𝑦2 3 𝑑𝑥
=6 1−𝑥 𝑦−
0 2 0
1 2
2 1 − 2𝑥 1 1 − 2𝑥
=6 1−𝑥 − 𝑑𝑥
0 3 2 3
1
2 1 + 2𝑥 2 − 3𝑥 1 + 4𝑥 2 − 4𝑥
=6 − 𝑑𝑥
0 3 18
1
2
− 14𝑥 + 5 2 8𝑥
=6 𝑑𝑥
0 18
1
6 𝑥3 𝑥2
= 8 − 14 + 5𝑥 2
18 3 2 0
6 8 1 1 5 13
= . −7. + =
18 3 8 4 2 36
1 1−3𝑦
3 2
𝐴𝑙𝑡 𝑃 2𝑋 + 3𝑌 < 1 = 6 1 − 𝑦 − 𝑥 𝑑𝑥 𝑑𝑦
0 0
1 1 − 3𝑦
3 𝑥2 2
=6 1−𝑦 𝑥− 𝑑𝑦
0 2 0
1 2
3 1 − 3𝑦 1 1 − 3𝑦
=6 1−𝑦 − 𝑑𝑦
0 2 2 2
13
=⋯=
36
1 1
(7) 𝑓𝑋 𝑥 = ∫0 𝑓 𝑥, 𝑦 𝑑𝑦 = ∫0 𝑥 + 𝑦 𝑑𝑦
1
= 𝑥+2 0<𝑥 <1
0 𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑓 𝑥, 𝑦 𝑥+𝑦 2 𝑥+𝑦
𝑓𝑌|𝑋 = = = 0<𝑦<1
𝑓𝑋 𝑥 1 2𝑥 + 1
2𝑥 + 1
2
1 1
2 𝑥+𝑦 2
𝐸 𝑌|𝑋 = 𝑦 𝑑𝑦 = 𝑥𝑦 + 𝑦 2 𝑑𝑦
0 2𝑥 + 1 2𝑥 + 1 0
2 𝑥 1 2 3𝑥 + 2 3𝑥 + 2
= + = =
2𝑥 + 1 2 3 6 2𝑥 + 1 6𝑥 + 3
1 1
2 2
2 𝑥 + 𝑦 2
𝐸 𝑌 𝑋 = 𝑦 𝑑𝑦 = 𝑦 2 𝑥 + 𝑦 3 𝑑𝑦
0 2𝑥 + 1 2𝑥 + 1 0
2 𝑥 1 2 4𝑥 + 3 4𝑥 + 3
= + = =
2𝑥 + 1 3 4 12 2𝑥 + 1 6 2𝑥 + 1
𝑉 𝑌 𝑋 = 𝐸 𝑌2 𝑋 − 𝐸2 𝑌 𝑋
4𝑥 + 3 3𝑥 + 2 2
= − =⋯
6(2𝑥 + 1) 3 2𝑥 + 1
(8) f(x, y)= f(x| y) g(y)= c d x 𝑦 2 ; 0 < 𝑥 < 𝑦, 0 < 𝑦 < 1
1 1
𝑔 𝑦 𝑑𝑦 = 1 ⟹ 𝑑 𝑦 4 𝑑𝑦 = 1 ⟹ 𝑑 = 5
0 0
⟹ 𝑓 𝑥, 𝑦 = 5 𝑐 𝑥𝑦 2 ; 0 < 𝑥 < 𝑦 < 1
1 𝑦
2
5𝑐 1 4
⟹ 5𝑐 𝑦 𝑥 𝑑𝑥 𝑑𝑦 = 1 ⟹ 𝑦 𝑑𝑦 = 1
0 0 2 0
⟹𝑐=2
⟹ 𝑓 𝑥, 𝑦 = 10𝑥 𝑦 2 ; 0 < 𝑥 < 𝑦 < 1
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
1
𝑓𝑋 𝑥 = 10𝑥 𝑦 2 𝑑𝑦 0 < 𝑥 < 1
0
10
𝑓𝑋 𝑥 = 𝑥 1 − 𝑥3 0 < 𝑥 < 1
3
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1
10 2
𝑃 0.25 < 𝑋 < 0.5 = 𝑥 − 𝑥 4 𝑑𝑥 = ⋯
3 1
4
1
1 1 2
𝑃 < 𝑋 < 𝑌 = 0.625 = 𝑓𝑋|𝑌=𝑦 𝑑𝑥
4 2 1
4
1 2 2
2 𝑥 2 1 1 1
=2 2
𝑑𝑥 = 22
− =⋯
1 (0.625) 0.625 2 4
4
(9) Marginal of X from h (x, y)
∞ ∞
𝑓𝑋 𝑥 = 𝑕 𝑥, 𝑦 𝑑𝑦 = 𝑓 𝑥 𝑔 𝑦 1 + 𝛼 2 𝐹 𝑥 − 1 2𝐺 𝑦 − 1 𝑑𝑦
−∞ −∞
∞ ∞
𝑓 𝑥 = 𝑔 𝑦 𝑑𝑦 + 𝑓 𝑥 𝛼 2𝐹 𝑥 − 1 𝑔 𝑦 2𝐺 𝑦 − 1 𝑑𝑦
−∞ −∞

= 𝑓 𝑥 × 1 + 𝑓 𝑥 𝛼 2𝐹 𝑥 − 1 𝑔 𝑦 2𝐺 𝑦 − 1 𝑑𝑦
−∞
∞ 1
𝑢2 1
𝑔 𝑦 2𝐺 𝑦 − 1 𝑑𝑦 ≟ 2𝑢 − 1 𝑑𝑢 = 2 − 𝑢| = 0
−∞ 0 2 0
⟹ 𝑓𝑋 𝑥 = 𝑓 𝑥 + 0

𝑠𝑙𝑦 𝑓𝑌 𝑦 = 𝑕 𝑥, 𝑦 𝑑𝑥 = 𝑔 𝑦
−∞
𝑕 𝑥, 𝑦 = 𝑓𝑋 𝑥 . 𝑓𝑌 𝑦 = 𝑓 𝑥 𝑔 𝑦 𝑖𝑓𝑓 𝛼 = 0
(10) 𝑓𝑋,𝑌 𝑥, 𝑦 = 𝑓𝑌|𝑋=𝑥 𝑦 𝑥 𝑓𝑋 𝑥
8𝑥𝑦, 0 < 𝑥 < 𝑦 < 1
=
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑀𝑎𝑟𝑔𝑖𝑛𝑎𝑙 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑌
𝑦
8𝑦 𝑥 𝑑𝑥 = 4𝑦 3 , 0<𝑦<1
𝑓𝑌 𝑦 = 0
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛𝑎𝑙 𝑝. 𝑑. 𝑓 𝑜𝑓 𝑋 𝑔𝑖𝑣𝑒𝑛 𝑌
8𝑥𝑦 2𝑥
= , 0 < 𝑥 < 𝑦; 0 < 𝑦 < 1
𝑓𝑋|𝑌=𝑦 𝑥 𝑦 = 4𝑦 3 𝑦 3
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
2 𝑦 2 2 𝑦 3 2𝑦
𝐸 𝑋 𝑌 = 𝑦 = 2 𝑥 𝑑𝑥 = 2 − =
𝑦 0 𝑦 3 3
1 1
⟹𝐸 𝑋𝑌= =
2 3
𝑦
2 2 𝑦4 𝑦2
𝐸 𝑋2 𝑌 = 𝑦 = 2 𝑥 3 𝑑𝑥 = 2 . =
𝑦 0 𝑦 4 2
1 1
⟹ 𝐸 𝑋2 𝑌 = =
2 8
𝑉 𝑋 𝑌 = 𝑦 = 𝐸 𝑋2 𝑌 = 𝑦 − 𝐸2 𝑋 𝑌 = 𝑦
1 1 1
= − = .
8 9 72
(11) Jt . m. g. f.
∞ ∞
𝑀𝑋1 ,𝑋2 𝑡1 , 𝑡2 = 𝐸 𝑒 𝑡 1 𝑋1 +𝑡2 𝑋2 = 𝑒 𝑡1 𝑥 1 +𝑡 2 𝑥 2 𝑒 − 𝑥 1 +𝑥 2 𝑑𝑥2 𝑑𝑥1
0 0
∞ ∞ ∞
= 𝑒 −𝑥 2 1−𝑡 1
𝑑𝑥1 𝑒 −𝑥 2 1−𝑡 2
𝑑𝑥2
0 0 0

−1 −1
= 1 − 𝑡1 1 − 𝑡2 𝑖𝑓𝑡1 , 𝑡2 < 1

𝑁𝑜𝑡𝑒: 𝑠𝑖𝑛𝑐𝑒 𝑋1 & 𝑋2 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝, 𝑤𝑒 𝑤𝑟𝑖𝑡𝑒

𝑀𝑋1 ,𝑋2 𝑡1 , 𝑡2 = 𝑀𝑋1 𝑡1 𝑀𝑋2 𝑡2

𝑚. 𝑔. 𝑓. 𝑜𝑓 𝑍 = 𝑋1 + 𝑋2

𝑀𝑍 𝑡 = 𝐸 𝑒 𝑡 𝑋1 + 𝑋2
= 1−𝑡 −2
,𝑡 < 1

𝜕𝑀𝑍 𝑡 −3
𝐸 𝑧 = |𝑡=0 = 2 1 − 𝑡 |𝑡=0 = 2
𝑑𝑡
𝜕 2 𝑀𝑍 𝑡
𝐸 𝑧2 = |𝑡=0 = 6(1 − 𝑡)−4 | 𝑡=0 = 6 ⟹ 𝑉 𝑧 = 2
𝑑𝑡 2
(12) 𝑀𝑋1 ,𝑋2 𝑡1 , 𝑡2 = 𝐸 𝑒 𝑡 1 𝑋1 +𝑡 2 𝑋2
= 𝐸𝐸 𝑒 𝑡 1 𝑋1 +𝑡2 𝑋2 |𝑋1 = 𝐸 𝑒 𝑡1 𝑋1 𝐸 𝑒 𝑡 2 𝑋2 |𝑋1
𝜍2
𝑠𝑖𝑛𝑐𝑒 𝑋2 𝑋1 ∼ 𝑁 𝜇2 + 𝜌 𝑥 − 𝜇1 , 𝜍2 2 1 − 𝜌2
𝜍1 1
𝐸 𝑒 𝑡 2 𝑋2 |𝑋1 → 𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛 𝑎𝑡 𝑚. 𝑔. 𝑓. 𝑜𝑓 𝑋2 𝑔𝑖𝑣𝑒𝑛 𝑋1
𝜍
𝑡 1 𝑋2
𝑡 2 𝜇 2 +𝜌 2 𝑥 1 −𝜇 1
𝜍 𝑡2 2 2
𝑀𝑋1 ,𝑋2 𝑡1 , 𝑡2 = 𝑒 𝑒 1
+ 𝜍 1 − 𝜌2
2 2
𝑡 2 𝜍
𝑡 1 𝑋1 +𝑡 2 𝜌 2 𝑋1
𝜍
−𝑡 2 𝜌 2 𝜇 1
𝑡 2 𝜇 2 + 2 𝜍2 2 1−𝜌 2 𝜍1 𝜍1
= 𝑒 2 𝐸 𝑒 𝑒
𝑡 2 𝜍 𝜍
𝑡 2 𝜇 2 + 2 𝜍2 2 1−𝜌 2 −𝑡 2 𝜌 2 𝜇 1 𝑡1 + 𝑡 2 𝜌 2
= 𝑒 2 𝜍1 𝑋𝐸 𝑒 𝜍1 𝑋1
2
𝑡 𝜍 𝜍 2
𝑡 2 𝜇 2 + 2 𝜍2 2 1−𝜌 2 −𝑡 2 𝜌 2 𝜇 1 𝑡 1 + 𝑡 2 𝜌 2 𝜇 1 𝜍1 𝜍2
=𝑒 2 𝜍1 𝑒 𝜍1 + 𝑡1 + 𝑡2 𝜌
2 𝜍1
𝑡2 2 2 𝜍2 𝜍2
= exp 𝑡2 𝜇2 + 𝜍 1 − 𝜌2 − 𝑡2 𝜌 𝜇1 + 𝑡1 𝜇1 + 𝑡2 𝜌
2 2 𝜍1 𝜍1
𝜍1 2 𝜍2 2 𝜍2
+ 𝑡1 2 + 𝑡2 2 𝜌2 2 + 2𝑡1 𝑡2 𝜌
2 𝜍1 𝜍1
2 2
𝑡2 𝑡1
= exp 𝑡2 𝜇2 + 𝜍2 2 + 𝑡1 𝜇1 + 𝜍 2 + 𝑡1 𝑡2 𝜌𝜍1 𝜍2
2 2 1
1
= exp 𝑡1 𝜇1 + 𝑡2 𝜇2 + 𝑡1 2 𝜍1 2 + 𝑡2 2 𝜍2 2 + 2𝑡1 𝑡2 𝜍1 𝜍2 𝜌
2
𝜕𝑀𝑋1 ,𝑋2 𝑡1 , 𝑡2 𝜕𝑀𝑋1 ,𝑋2
|𝑡 1 =0,𝑡2 =0 = 𝜇1 𝑠𝑙𝑦 | = 𝜇2 & 𝑉 𝑋1 = 𝜍1 2 , 𝑉 𝑋2 = 𝜍2 2
𝜕𝑡1 𝜕𝑡2 𝑡 1 =0,𝑡 2 =0
𝜕 2 𝑀𝑋1 ,𝑋2 𝑡1 , 𝑡2
𝐸 𝑋1 , 𝑋2 = |𝑡 1 =0,𝑡2 =0 = 𝜌𝜍1 𝜍2 + 𝜇1 𝜇2
𝜕𝑡1 𝜕𝑡2
⟹ 𝐶𝑜𝑣 𝑋1 , 𝑋2 = 𝜌𝜍1 𝜍2 + 𝜇1 𝜇2 − 𝜇1 𝜇2 = 𝜌𝜍1 𝜍2
⟹ 𝑐𝑜𝑟𝑣 𝑋1 , 𝑋2 = 𝜌
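The conclusion Cov(X₁, X₂) = ρσ₁σ₂ from the joint m. g. f. can be checked by simulation, using the standard construction X₂ = μ₂ + σ₂(ρZ₁ + √(1 − ρ²) Z₂) from independent standard normals. The parameter values, seed and sample size below are arbitrary illustrative choices.

```python
import random, math

random.seed(1)
mu1, mu2, s1, s2, rho = 1.0, -2.0, 2.0, 3.0, 0.6
N = 200_000
xs, ys = [], []
for _ in range(N):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(mu1 + s1 * z1)
    ys.append(mu2 + s2 * (rho * z1 + math.sqrt(1 - rho * rho) * z2))

mx, my = sum(xs) / N, sum(ys) / N
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / N
print(cov)   # close to rho * s1 * s2 = 3.6
```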
2, 0 < 𝑥 < 𝑦 < 1
(13) f(x, y)=
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1
𝑓𝑋 𝑥 = 2 𝑑𝑦 = 2(1 − 𝑥), 0 < 𝑥 < 1
𝑥
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑦
𝑓𝑌 𝑦 = 2 𝑑𝑥 = 2𝑦, 0 < 𝑦 < 1
0
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
2
𝑓𝑌|𝑋=𝑥 = 2 1−𝑥 , 𝑥 <𝑦 <1
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
2
, 0<𝑥<𝑦
𝑓𝑋|𝑌=𝑦 = 2𝑦
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1
1 1 − 𝑥2 1+𝑥
𝐸 𝑌𝑋 = 𝑦 𝑑𝑦 = =
𝑥 1−𝑥 2 1−𝑥 2
1 2 3
∫ 𝑦 1 1−𝑥
𝐸 𝑌2 𝑋 = 𝑥 𝑑𝑦 = .
1−𝑥 3 1−𝑥
1 − 𝑥3 1+𝑥
⟹ 𝑉 𝑌 𝑋 = 𝐸 𝑌2 𝑋 − 𝐸2 𝑌 𝑋 = −
3 1−𝑥 2
𝑠𝑙𝑦 𝐸 𝑋 𝑌 , 𝐸 𝑋 2 𝑌 𝑎𝑛𝑑 𝑕𝑒𝑛𝑐𝑒 𝑉 𝑋 𝑌 .
(14) (a) 𝐶𝑜𝑣 𝑋, 𝑏 = 𝐸 𝑋 − 𝐸 𝑋 𝑏 − 𝐸 𝑏 = 0 = 𝐶𝑜𝑣 𝑋, 𝑏 =
𝑍𝐶𝑜𝑣 𝑍, 𝑏
𝑏 𝐶𝑜𝑣 𝑋, 𝑎𝑌 + 𝑏 = 𝐸 𝑋 − 𝐸 𝑋 𝑎𝑌 + 𝑏 − 𝐸 𝑎𝑌 + 𝑏
= 𝐸 𝑋 − 𝐸 𝑋 𝑎𝑌 + 𝑏 − 𝑎𝐸 𝑌 − 𝑏
= 𝑎 𝐶𝑜𝑣 𝑋, 𝑌
𝑐 𝐶𝑜𝑣 𝑋, 𝑌 + 𝑍 = 𝐸 𝑋 − 𝐸 𝑋 𝑌 + 𝑍 − 𝐸 𝑌 − 𝐸 𝑍
= 𝐸 𝑋−𝐸 𝑋 𝑌−𝐸 𝑌 + 𝑧−𝐸 𝑧
= 𝐶𝑜𝑣 𝑋, 𝑌 + 𝐶𝑜𝑣 𝑋, 𝑧
𝑑 𝐶𝑜𝑣 𝑋, 𝑎𝑌 + 𝑏 = 𝑎 𝑐𝑜𝑣 𝑋, 𝑌
𝑐𝑜𝑣(𝑋, 𝑎𝑌 + 𝑏) 𝑎 𝑐𝑜𝑣(𝑥, 𝑌)
𝑐𝑜𝑣 𝑋, 𝑎𝑌 + 𝑏 = 1 = 1 = 𝑐𝑜𝑣 𝑋, 𝑌
𝑉 𝑋 𝑉 𝑎𝑌 + 𝑏 2 [𝑉(𝑋)𝑎2 𝑉(𝑌)]2
3−1 3− 3 3−1
(15) 𝑐𝑜𝑣 𝑤1 , 𝑤2 = 𝐶𝑜𝑣 𝑋1 , 2
𝑋1 + 2
𝑋2 = 2
𝑉 𝑋1 +
3− 3 3−1
𝑐𝑜𝑣 𝑋1 , 𝑋2 = 𝜍2 2
2 2
2 2
2
3−1 2
3− 3 2
𝑉 𝑤1 = 𝜍 & 𝑉 𝑤2 = 𝜍 + 𝜍2 = 3 − 1 𝜍2
2 2
1
⟹ 𝜌𝑤 1 ,𝑤 2 =
2
𝑠𝑙𝑦 𝜌𝑤 1 ,𝑤 3 & 𝜌𝑤 2 ,𝑤 3

𝑐𝑜𝑣 𝑤1 , 𝑤3 = 𝑐𝑜𝑣 𝑋1 , 2 − 1 𝑋2 + 2 − 2 𝑋3 = 0
⟹ 𝜌𝑤 1 ,𝑤 3 = 0
(16) (a) 𝑃 3 < 𝑌 < 8 𝑌 ∼ 𝑁 1, 25
3−1 𝑌−1 8−1 7 2
=𝑃 < < = 𝜙 −𝜙
5 5 5 5 5
= ⋯ 𝑓𝑟𝑜𝑚 𝑡𝑎𝑏𝑙𝑒
5
𝑏 𝑃 3 < 𝑌 < 8 𝑋 = 7 𝑌 𝑋 ∼ 𝑁 1 + 𝑃 𝑥 − 3 , 25 1 − 𝑝2
4
3−4 𝑌−4 8−4
=𝑃 < < 𝑋=7
4 4 4
= 𝜙 1 − 𝜙 −0.25
=⋯
𝑐 𝑃 −3 < 𝑋 < 3 𝑋 ∼ 𝑁 3, 16
−3 − 3 𝑋 − 3 3 − 3 6
=𝑃 < < = 𝜙 0 −𝜙 − = ⋯
4 4 4 4
4
𝑑 𝑃 −3 < 𝑋 < 3 𝑌 = 4 𝑋 𝑌 ∼ 𝑁 3+𝑝 𝑦 − 1 , 16 1 − 𝑝2
5
−3 − 4.44 𝑋 − 4.44 3 − 4.44 1.44 7.44
=𝑃 < < 𝑌 = 4) = 𝜙 − −𝜙 − =⋯
3.2 3.2 3.2 3.2 3.2
(17) 𝑋, 𝑌 ∼ 𝑁2 5, 10, 1, 25, 𝜌 ; 𝜌 > 0
𝑌|𝑋 = 5 ∼ 𝑁2 10, 25 1 − 𝜌2
4 − 10 𝑌 − 10 16 − 10
𝑃 4 < 𝑌 < 16 𝑋 = 5 = 𝑃 < < 𝑋=5
5 1−𝜌 2 5 1−𝜌 2 5 1 − 𝜌2
6 6
=𝜙 −𝜙 −
5 1−𝜌 2 5 1 − 𝜌2
6
= 2𝜙 − 1 = 0.954 𝑔𝑖𝑣𝑒𝑛 𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛
5 1 − 𝜌2
6
⟹𝜙 = 0.977 = 𝜙 2
5 1 − 𝜌2
6
⟹ = 2 ⟹ 1 − 𝜌2 = 0.36 ⟹ 𝜌 = 0.8 𝑎𝑠 𝜌 > 0
5 1−𝜌 2
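Solution (17) can be verified numerically; the normal c. d. f. Φ below is built from `math.erf`, and the positive root is taken since ρ > 0 is given.

```python
import math

# 2*Phi(6 / (5*sqrt(1 - rho^2))) - 1 = 0.954 with Phi(2) = 0.977
# forces 6 / (5*sqrt(1 - rho^2)) = 2
arg = 6 / (2 * 5)                # sqrt(1 - rho^2) = 0.6
rho = math.sqrt(1 - arg ** 2)    # positive root: rho = 0.8

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
check = 2 * Phi(6 / (5 * math.sqrt(1 - rho ** 2))) - 1
print(rho, check)   # rho close to 0.8, check close to 0.954
```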

(18) 𝐸 𝑌 = 15 1 𝐸 𝑋𝑖 = 30
𝑉 𝑌 = 15 𝑉 𝑋𝑖 = 45 ; 𝑉 𝑍 = 10 × 3 = 30
15 10

𝑐𝑜𝑣 𝑌, 𝑍 = 𝑐𝑜𝑣 𝑋𝑖 , 𝑋𝑖 = 5𝑉 𝑋𝑖 = 15
1 11
15
𝜌𝑌,𝑍 = 1
[45 × 30]2
(19) 𝑈 = 𝑋 − 𝑌; 𝑉 = 2𝑋 − 3𝑌
𝐸 𝑈 = −5
𝐸 𝑉 = 2 × 15 − 3 × 20 = −30
𝑉 𝑈 = 𝑉 𝑋 + 𝑉 𝑌 − 2𝑐𝑜𝑣 𝑋, 𝑌 𝑉 𝑣 = 4𝑉 𝑋 + 9𝑉 𝑌 − 12 𝑐𝑜𝑣 𝑋, 𝑌
𝑐𝑜𝑣 𝑋, 𝑌 𝑐𝑜𝑣 𝑋, 𝑌
𝑁𝑜𝑤𝜌𝑋,𝑌 = −0.6 = = ⟹ 𝑐𝑜𝑣 𝑋, 𝑌 = −30
25 × 100 50

⟹ 𝑉 𝑈 = 185 & 𝑉 𝑣 = 100 + 900 + 360 = 1360

⟹ 𝑐𝑜𝑣 𝑈, 𝑉 = 𝑐𝑜𝑣 𝑋 − 𝑌, 2𝑋 − 3𝑌

= 2𝑉 𝑋 − 3 𝑐𝑜𝑣 𝑋, 𝑌 − 2𝑐𝑜𝑣 𝑌, 𝑋 + 3𝑉 𝑌 = 2 × 25 − 5 −30 + 30 × 100 = 500


500
⟹ 𝜌𝑈,𝑉 =
185 × 1360
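The arithmetic of solution (19) can be replayed in a few lines; this only repeats the formulas already derived above.

```python
import math

VX, VY, rho_xy = 25, 100, -0.6
cov_xy = rho_xy * math.sqrt(VX * VY)                  # Cov(X, Y) = -30
VU = VX + VY - 2 * cov_xy                             # Var(X - Y)
VV = 4 * VX + 9 * VY - 12 * cov_xy                    # Var(2X - 3Y)
cov_uv = 2 * VX - 3 * cov_xy - 2 * cov_xy + 3 * VY    # Cov(X - Y, 2X - 3Y)
rho_uv = cov_uv / math.sqrt(VU * VV)
print(VU, VV, cov_uv, rho_uv)   # 185, 1360, 500, about 0.9968
```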
(20) X : r. v. denoting life time
1 −𝑥
𝑋 ∼ 𝐸𝑥𝑝 50 𝑝. 𝑑. 𝑓. 𝑓 𝑥 = 50 𝑒 , 𝑥 > 0
50

0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑥

𝐹𝑋 𝑥 = 1 − 𝑒 50 , 𝑥>0
𝑌1 : # 𝑜𝑓 𝑏𝑢𝑖𝑏𝑠 𝑜𝑢𝑡 𝑜𝑓 8 𝑡𝑜 𝑕𝑎𝑣𝑒 𝑙𝑖𝑓𝑒𝑡𝑖𝑚𝑒 < 40
𝑌2 ∶ … … … … … … … . . ≥ 40& < 60
𝑌3 : … … … … … … … … … … ≥ 60 & ≤ 80
𝑌4 ∶ … … … … … … … . > 80
40
𝑃 𝑋 < 40 = 𝐹𝑋 40 = 1 − 𝑒 −50 = 𝑝1 , 𝑠𝑎𝑦
40 60
𝑃 40 ≤ 𝑋 < 60 = 𝐹𝑋 60 − 𝐹𝑋 40 = 𝑒 −50 − 𝑒 −50 == 𝑝2 , 𝑠𝑎𝑦
60 80
𝑃 60 ≤ 𝑋 ≤ 80 = 𝐹𝑋 80 − 𝐹𝑋 60 = 𝑒 −50 − 𝑒 −50 = 𝑝3 , 𝑠𝑎𝑦
P(X > 80) = 1 − p₁ − p₂ − p₃ = e^{−80/50}
𝑗𝑡. 𝑝. 𝑚. 𝑓. 𝑜𝑓 𝑌1 , 𝑌2 , 𝑌3 𝑖𝑠 𝑚𝑎𝑙𝑡𝑖𝑛𝑜𝑚𝑖𝑎𝑙 8, 𝑝1 , 𝑝2 , 𝑝3
8!
⟹ 𝑃 𝑌1 = 2, 𝑌2 = 3, 𝑌3 = 2 = 𝑝 2 𝑝 3 𝑝 2 1 − 𝑝1 − 𝑝2 − 𝑝3
2! 3! 22! 1! 1 2 3
60 80
𝑚𝑎𝑟𝑔𝑖𝑛𝑎𝑙 𝑑𝑖𝑠𝑡𝑟𝑖𝑏𝑢𝑡𝑖𝑜𝑛𝑌3 ∼ 𝐵𝑖𝑛 8, 𝑝3 = 𝑒 −50 − 𝑒 −50
60 80
𝐸 𝑌3 = 8 𝑒 −50 − 𝑒 −50
𝑝3
𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛𝑎𝑙 𝑌3 |𝑌2 = 𝑦2 ∼ 𝐵𝑖𝑛 8 − 𝑦2 ,
1 − 𝑝2
60 80
𝑒 −50 − 𝑒 −50
⟹ 𝐸 𝑌3 𝑌2 = 1 = 8 − 1 40 60 .
1− 𝑒 −50 − 𝑒 −50
(21) (a)

Y 0 1 2
X
0 1 1
3
0 0 3
1 1 1
2 0 0
3 3
1 1
0 0 3 3

1 1 1
3 3 3

𝐸 𝑋 =1=𝐸 𝑌

𝑉 𝑋 = 𝐸 𝑋2 − 1

5 2
= −1= =𝑉 𝑌
3 3
1 1 1 5
𝐸 𝑋𝑌 = 0 × 0 + 1×1 + 2×2 × =
3 3 3 3
5 2
𝑐𝑜𝑣 𝑋, 𝑌 = −1=
3 3

𝜌𝑋,𝑌 = 1

𝑏
Y 0 1 2
X
0 1
0 0
3
1 1
2 0 0
3
1
3
0 0
𝑠𝑙𝑦 ⟹ 𝜌𝑋,𝑌 = −1

(c)

Y 0 1 2
X
0 1
0 0 3
1 1
2 0 0
3
1
3
0 0
𝜌𝑋,𝑌 = 0.

(22)

Y 1 2
X
1 1 1
10
0 10
2 2 4 6
3 10 10 10
3 3
0
10 10

6 4
10 10

1 6 3 22
𝐸 𝑋 = +2 +3 =
10 10 10 10
6 4 14
𝐸 𝑌 = + 2 =
10 10 10
1 6 3 52
𝐸 𝑋2 = +4 +9 =
10 10 10 10
2
6 4 22
𝐸 𝑌 = +4 =
10 10 10
52 22 2
𝑉 𝑋 = − =⋯
10 10
22 2 14 2
𝑉 𝑌 = − =⋯
10 10
1 2 4 3 30
𝐸 𝑋𝑌 = 1 × 1 + 2×1 + 2×2 + 3×1 = =3
10 10 10 10 10
22 14
𝑐𝑜𝑣 𝑋, 𝑌 = 𝐸 𝑋𝑌 − 𝐸 𝑋 𝐸 𝑌 = 3 − . =⋯
10 10
𝑐𝑜𝑣 𝑋, 𝑌
𝑐𝑜𝑣 𝑋, 𝑌 = 1 =⋯
𝑉 𝑋 𝑉 𝑌 2
𝑗𝑡 𝑚. 𝑔. 𝑓. 𝑜𝑓 𝑋, 𝑌
𝑀𝑋,𝑌 𝑡1 , 𝑡2 = 𝑒 𝑡 1 𝑥+𝑡 2 𝑦 𝜌 𝑋 = 𝑥, 𝑌 = 𝑦
𝑥,𝑦
1 2 4 3
= 𝑒 𝑡 1 +𝑡 2 × + 𝑒 2𝑡 1 +𝑡 2 + 𝑒 2(𝑡1 +𝑡2 ) + 𝑒 3𝑡1 +𝑡 2 . .
10 10 10 10
(23) 𝑀𝑋,𝑌 𝑢, 𝑣 = 𝐸 𝑒 𝑢𝑋 +𝑣𝑌
= 𝛹 𝑢, 𝑣 = 𝑙0𝑔𝑀𝑋,𝑌 𝑢, 𝑣
𝜕𝛹 𝑢, 𝑣 1 𝜕𝑀𝑋,𝑌 𝑢, 𝑣
= .
𝜕𝑢 𝑀𝑋,𝑌 𝑢, 𝑣 𝜕𝑢
𝜕𝛹 𝑢, 𝑣 1
|𝑢=0,𝑣=0 = .𝐸 𝑋 = 𝐸 𝑋
𝜕𝑢 𝑀 0, 0
𝜕𝛹 0, 0 𝜕𝛹 𝑢, 𝑣
𝑠𝑙𝑦 = |𝑢=0,𝑣=0 = 𝐸 𝑌
𝜕𝑣 𝜕𝑣
𝜕 2 𝛹 𝑢, 𝑣 1 𝜕 2 𝑀 𝑢, 𝑣 −1 𝜕𝑀 𝑢, 𝑣 𝜕𝑀 𝑢, 𝑣
2
= 2
+ 2
𝜕𝑢 𝑀 𝑢, 𝑣 𝜕𝑢 𝑀 𝑢, 𝑣 𝜕𝑢 𝜕𝑢
2
1 𝜕 2 𝑀 𝑢, 𝑣 𝜕𝑀 𝑢, 𝑣 1
= − .
𝑀 𝑢, 𝑣 𝜕𝑢2 𝜕𝑢 𝑀 𝑢, 𝑣
2
𝜕 𝛹 𝑢, 𝑣 𝜕 2 𝛹 0, 0
|𝑢=0,𝑣=0 = 𝐸 𝑋 2 − 𝐸2 𝑋 = 𝑉 𝑋 =
𝜕𝑢2 𝜕𝑢2
𝜕 2 𝛹 𝑢, 𝑣
𝑠𝑙𝑦 |𝑢 =0,𝑣=0 = 𝑉 𝑌
𝜕𝑢2
𝜕 2 𝛹 𝑢, 𝑣 1 𝜕 2 𝑀 𝑢, 𝑣 1 𝜕𝑀 𝑢, 𝑣 𝜕𝑀 𝑢, 𝑣
= . − 2. .
𝜕𝑣 𝜕𝑢 𝑀 𝑢, 𝑣 𝜕𝑣 𝜕𝑢 𝑀 𝑢, 𝑣 𝜕𝑣 𝜕𝑢
2
𝜕 𝛹 𝑢, 𝑣
| = 𝐸 𝑋𝑌 − 𝐸 𝑋 𝐸 𝑌
𝜕𝑣 𝜕𝑢 𝑢=0,𝑣=0
2
𝜕 𝛹 𝑢, 𝑣
𝑖. 𝑒. | = 𝑐𝑜𝑣 𝑋, 𝑌 .
𝜕𝑣 𝜕𝑢 𝑢=0,𝑣=0
(24) 𝑀𝑎𝑟𝑔𝑖𝑛𝑎𝑙 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑋

𝑓𝑋 𝑥 = 𝑓𝑋,𝑌 𝑥, 𝑦 𝑑𝑦
−∞
∞ ∞
1 1
= 𝑓𝜌 𝑥, 𝑦 𝑑𝑦 + 𝑓−𝜌 𝑥, 𝑦 𝑑𝑦
2 −∞ 2 −∞
1 1
= 𝜙 𝑥 + 𝜙 𝑥 𝜙 𝑥 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑁 0, 1
2 2
= 𝜙 𝑥 ⟹ 𝑋 ∼ 𝑁 0, 1
𝑠𝑙𝑦 𝑓𝑌 𝑦 = 𝜙 𝑦 ⟹ 𝑌 ∼ 𝑁 0, 1
∞ ∞
𝐸 𝑋𝑌 = 𝑥𝑦 𝑓𝑋,𝑌 𝑥, 𝑦 𝑑𝑥 𝑑𝑦
−∞ −∞
∞ ∞ ∞ ∞
1 1
= 𝑥𝑦 𝑓𝜌 𝑥, 𝑦 𝑑𝑥 𝑑𝑦 + 𝑥𝑦 𝑓−𝜌 𝑥, 𝑦 𝑑𝑥 𝑑𝑦
2 −∞ −∞ 2 −∞ −∞
1 1
=
𝜌 + −𝜌 = 0
2 2
𝑐𝑜𝑣 𝑋, 𝑌 = 𝐸 𝑋𝑌 − 𝐸 𝑋 𝐸 𝑌 = 0 − 0.0 = 0
𝜌 𝑋, 𝑌 = 0 ⟹ 𝑋 & 𝑌 𝑎𝑟𝑒 𝑢𝑛𝑐𝑟𝑟𝑒𝑙𝑎𝑡𝑒𝑑
𝑠𝑖𝑛𝑐𝑒, 𝑓𝑋,𝑌 𝑥, 𝑦 ≠ 𝑓𝑋 𝑥 𝑓𝑌 𝑦 .
𝑋 & 𝑌 𝑎𝑟𝑒 𝑛𝑜𝑡 𝑖𝑛𝑑𝑒𝑝𝑒𝑛𝑑𝑒𝑛𝑡 .
1 𝑥 1
(25) ∫0 ∫−𝑥 𝑘 𝑑𝑦 𝑑𝑥 = 1 ⟹ 𝑘 ∫0 2𝑥 𝑑𝑥 = 1 ⟹ 𝑘 = 1
𝑥
2𝑥, 0 < 𝑥 < 1
𝑀𝑎𝑟𝑔𝑖𝑛𝑎𝑙 𝑜𝑓 𝑋, 𝑓𝑋 𝑥 = 𝑘 𝑑𝑦 =
−𝑥 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1
1− 𝑦 , −1 < 𝑦 < 1
𝑀𝑎𝑟𝑔𝑖𝑛𝑎𝑙 𝑜𝑓 𝑌, 𝑓𝑌 𝑦 = 𝑑𝑥 =
𝑦 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1
𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛𝑎𝑙 𝑑𝑖𝑠𝑡𝑟𝑖𝑏𝑢𝑡𝑖𝑜𝑛 𝑜𝑓 𝑌|𝑋 = 𝑥; 𝑓𝑌|𝑋=𝑥 = 2𝑥 , −𝑥 < 𝑦 < 𝑥
0, 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑥
1
𝐸 𝑌𝑋=𝑥 = 𝑦 𝑑𝑦 = 0
−𝑥 2𝑥
1
1 1 − 𝑦2
𝑠𝑙𝑦 𝐸 𝑋 𝑌 = 𝑦 = 𝑥 𝑑𝑥 = .
𝑦 1− 𝑦 2 1− 𝑦
−1
𝑓𝑋|𝑌=𝑦 = 1 − 𝑦 , 𝑦 <𝑥<1
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑓𝑋,𝑌 𝑥, 𝑦 = 1 ≠ 𝑓𝑋 𝑥 𝑓𝑌 𝑦
⟹ 𝑋 & 𝑌 𝑎𝑟𝑒 𝑛𝑜𝑡 𝑖𝑛𝑑𝑒𝑝.
1 ∞
𝐸 𝑋𝑌 = 𝑥𝑦 𝑑𝑦 𝑑𝑥 = 0
0 −∞
𝐸 𝑌 = 𝐸. 𝐸 𝑌 𝑋 = 0
⟹ 𝑐𝑜𝑣 𝑋, 𝑦 = 𝜌𝑋,𝑌 = 0
⟹ 𝑋 & 𝑌 𝑎𝑟𝑒 𝑢𝑛𝑐𝑜𝑟𝑟𝑒𝑙𝑎𝑡𝑒𝑑.
(26) 𝑀𝑋,𝑌 𝑠, 𝑡 = 𝑎 𝑒 𝑠+𝑡 + 1 + 𝑏 𝑒 𝑠 + 𝑒 𝑡 , 𝑎, 𝑏 > 0, 𝑎 + 𝑏 =
1
2
𝜕
𝐸 𝑋 = (𝑎 𝑒 𝑠+𝑡 + 1 + 𝑏 𝑒 𝑠 + 𝑒 𝑡 )|𝑠=𝑡=0
𝜕𝑠
1
= 𝑎 𝑒 𝑡 𝑒 𝑠 + 𝑏 𝑒 𝑠 |𝑡=𝑠=0 = 𝑎 + 𝑏 = = 𝐸 𝑌
2
2
𝜕
𝐸 𝑋 2 = 2 𝑎 𝑒 𝑠+𝑡 + 1 + 𝑏 𝑒 𝑠 + 𝑒 𝑡 |𝑠=𝑡=0
𝜕𝑠
1
= 𝑎 𝑒 𝑒 + 𝑏 𝑒 𝑠 |𝑠=𝑡=0 = 𝑎 + 𝑏 = = 𝐸 𝑌 2
𝑡 𝑠
2
1 1 1
𝑉 𝑋 =𝑉 𝑌 = − =
2 4 4
𝜕2
𝐸 𝑋𝑌 = 𝑎 𝑒 𝑠+𝑡 + 1 + 𝑏 𝑒 𝑠 + 𝑒 𝑡 |𝑠=𝑡=0
𝜕𝑡 𝜕𝑠
= 𝑎 𝑒 𝑡 𝑒 𝑠 |𝑠=𝑡=0 = 𝑎
1
1 𝑎−4
∴ 𝑐𝑜𝑣 𝑋, 𝑌 = 𝑎 − ⟹ 𝜌𝑋,𝑌 = = 4𝑎 − 1.
4 1
4
𝑋 2𝑌 2𝑋 𝑌
(27) 𝑣𝑎𝑟 3 + 3 = 𝑣𝑎𝑟 3 + 3 ←∵ 𝑉 𝑋 = 𝑉 𝑌
1 4 𝑋 2𝑌
= 𝑉 𝑋 + 𝑉 𝑌 + 2 𝑐𝑜𝑣 ,
9 9 3 3
2 8 4 2 2 8 8 38
= + + × = + + =
9 9 9 3 9 9 27 27
𝑋 2𝑌 2𝑋 𝑌
𝑐𝑜𝑣 + , +
3 3 3 3
2 1 4 2
= 𝑉 𝑋 + 𝑐𝑜𝑣 𝑋, 𝑌 + 𝑐𝑜𝑣 𝑋, 𝑌 + 𝑉 𝑌
9 9 9 9
4 2 8 4 34
= + + + =
9 27 27 9 27
𝑋 2𝑌 2𝑋 𝑌
𝑐𝑜𝑣 + , +
3 3 3 3
34
34
= 27 = .
38 38
27
Problem Set-8

[1] The joint probability mass function of the random variables 𝑋1 𝑎𝑛𝑑 𝑋2 𝑖𝑠 𝑔𝑖𝑣𝑒𝑛 𝑏𝑦

𝑥 1 +𝑥 2 2−𝑥 1 −𝑥 2
2 1
𝑃 𝑋1 = 𝑥1 , 𝑋2 = 𝑥2 = 𝑖𝑓 𝑥1 , 𝑥2 = 0, 0 , 0, 1 , 1, 0 , (1, 1)
3 3
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

(a) Find the joint probability mass function of𝑌1 = 𝑋1 − 𝑋2 𝑎𝑛𝑑 𝑌2 = 𝑋1 + 𝑋2 .


(b) Find the marginal probability mass functions of 𝑌1 𝑎𝑛𝑑 𝑌2 .
(c) Verify whether 𝑌1 𝑎𝑛𝑑 𝑌2 are independent.

[2] Let the joint probability mass function of 𝑋1 𝑎𝑛𝑑 𝑋2 𝑏𝑒


𝑥1 𝑥2
𝑖𝑓 𝑥1 = 1, 2, 3; 𝑥2 = 1, 2, 3;
𝑃 𝑋1 = 𝑥1 , 𝑋2 = 𝑥2 = 36
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

(a) Find the joint probability mass function of𝑌1 = 𝑋1 𝑋2 𝑎𝑛𝑑 𝑌2 = 𝑋2 .


(b) Find the marginal probability mass functions of 𝑌1
(c) Find the probability mass function of Z=𝑋1 + 𝑋2 .

[3] (a) Let X∼


𝐵𝑖𝑛 𝑛1 , 𝑝 𝑎𝑛𝑑 𝑌 ∼
𝐵𝑖𝑛 𝑛2 , 𝑝 𝑏𝑒 𝑖𝑛𝑑𝑒𝑝𝑒𝑛𝑑𝑒𝑛𝑡 𝑟𝑎𝑛𝑑𝑜𝑚 𝑣𝑎𝑟𝑖𝑎𝑏𝑙𝑒𝑠. 𝐹𝑖𝑛𝑑 𝑡𝑕𝑒 𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛𝑎𝑙 𝑑𝑖𝑠𝑡𝑟𝑖𝑏𝑢𝑡𝑖𝑜𝑛 𝑜𝑓 𝑋 𝑔𝑖𝑣𝑒𝑛 𝑋 +
𝑌 = 𝑡, 𝑡 ∈ 0, 1, … , min 𝑛1 , 𝑛2 .
1 1
(b) Let X∼𝐵𝑖𝑛 𝑛1 , 𝑎𝑛𝑑𝑌 ∼ 𝐵𝑖𝑛 𝑛2 , be independent random variables.
2 2

Find the distribution of Y=𝑋1 − 𝑋2 + 𝑛2 .

[4] Let X∼ Poisson


𝜆1 𝑎𝑛𝑑 𝑌 ∼
𝑃𝑜𝑖𝑠𝑠𝑜𝑛 𝜆2 𝑏𝑒 𝑖𝑛𝑑𝑒𝑝𝑒𝑛𝑑𝑒𝑛𝑡 𝑟𝑎𝑛𝑑𝑜𝑚 𝑣𝑎𝑟𝑖𝑎𝑏𝑙𝑒𝑠. 𝐹𝑖𝑛𝑑 𝑡𝑕𝑒 𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛𝑎𝑙 𝑑𝑖𝑠𝑡𝑟𝑖𝑏𝑢𝑡𝑖𝑜𝑛 𝑜𝑓 𝑋 𝑔𝑖𝑣𝑒𝑛 𝑋 +
𝑌 = 𝑡, 𝑡 ∈ 0, 1, … .

[5] Let 𝑋1 , 𝑋2 , 𝑋3 𝑎𝑛𝑑 𝑋4 be four mutually independent random variables each having probability
density function
2
f(x)= 3 1 − 𝑥 0 < 𝑥 < 1
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

Find the probability density functions of Y = min(X₁, X₂, X₃, X₄) and Z = max(X₁, X₂, X₃, X₄).

[6] Suppose X₁, …, Xₙ are n independent random variables, where Xᵢ (i = 1, …, n) has the exponential distribution Exp(αᵢ), with probability density function

f_{Xᵢ}(x) = αᵢ e^{−αᵢx} for x > 0; and 0 otherwise.

Find the probability density functions of Y = min(X₁, …, Xₙ) and Z = max(X₁, …, Xₙ).

[7] Let X and Y be the respective arrival times of two friends A and B who agree to meet at a spot and
wait for the other only for t minutes. Supposing that X and Y are i. i. d. Exp (𝜆). Show that the probability
of A and B meeting each other is 1 − 𝑒 −𝜆𝑡 .

[8] Let 𝑋1 𝑎𝑛𝑑 𝑋2 𝑏𝑒 𝑖. 𝑖. 𝑑. 𝑈 0, 1 . Define two new random variables as 𝑌1 = 𝑋1 + 𝑋1 𝑎𝑛𝑑 𝑌2 = 𝑋2 −


𝑋1 . Find the joint probability density function of 𝑌1 𝑎𝑛𝑑 𝑌2 are also the marginal probability density
functions of 𝑌1 𝑎𝑛𝑑 𝑌2 .

[9] Let X and Y be i. i. d. N(0, 1). Find the probability density function of Z= X/ Y.

[10] Let X and Y be independent random variables with probability density functions

𝑥 𝛼 1 −1 −𝑦/𝜃
𝑓𝑋 𝑥 = 𝑒 𝑦>0
⎾𝛼2 𝜃 𝛼 2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑋
Find the distributions of 𝑈 = 𝑋 + 𝑌 𝑎𝑛𝑑 𝑉 = (𝑋+𝑌) and also that they are independently distributed.

[11] Let X and Y be i.i.d. random variables with common probability density function
𝑐
−∞<𝑥 <∞
f(x)= 1+𝑥 4
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒,

Where, c is a normalizing constant. Find the probability density function of Z= X / Y.

[12] Let X and Y be i. i. d. N(0, 1), define the random variables R and𝛩 𝑏𝑦 𝑋 = 𝑅 cos 𝛩 , 𝑌 = 𝑅 𝑠𝑖𝑛𝛩,

𝑅2
(a) show that R and 𝛩 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝𝑒𝑛𝑑𝑒𝑛𝑡 𝑤𝑖𝑡𝑕 2
∼ 𝐸𝑥𝑝 1 𝑎𝑛𝑑 𝛩 ∼ 𝑈 0, 2𝜋

𝑋
𝑏 𝑠𝑕𝑜𝑤 𝑡𝑕𝑎𝑡 𝑋 2 + 𝑌 2 𝑎𝑛𝑑 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝𝑒𝑛𝑑𝑒𝑛𝑡𝑙𝑦 𝑑𝑖𝑠𝑡𝑟𝑖𝑏𝑢𝑡𝑒𝑑.
𝑌

[13] Let 𝑈1 𝑎𝑛𝑑𝑈2 be i.i.d. U(0, 1) random variables. Show that

𝑋1 = −2𝑙𝑛𝑈1 cos(2𝜋𝑈2 ) 𝑎𝑛𝑑 𝑋2 = −2𝑙𝑛𝑈1 sin(2𝜋𝑈2 ) 𝑎𝑟𝑒 𝑖. 𝑖. 𝑑. 𝑁(0, 1) random variables.

[14] Let 𝑋1 , 𝑋2 , 𝑎𝑛𝑑 𝑋3 be i. i. d. with probability density function

𝑒 −𝑥 𝑥 > 0
f(x)=
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

Find the probability density function of 𝑌1 , 𝑌2 , 𝑌3 ; where


𝑋1 𝑋1 + 𝑋2
𝑌1 = ; 𝑌2 = ; 𝑌 = 𝑋1 + 𝑋2 + 𝑋3
𝑋1 + 𝑋2 𝑋1 + 𝑋2 + 𝑋3 3

[15] Let 𝑋1 , 𝑋2 , 𝑎𝑛𝑑 𝑋3 be three mutually independent chi-square random variables with 𝑛1 , 𝑛2 , 𝑛3
degrees of freedom respectively; i.e. 𝑋1 ∼ 𝜆𝑛 1 2 , 𝑋2 ∼ 𝜆𝑛 2 2 𝑎𝑛𝑑 𝑋3 ∼ 𝜆𝑛 3 2 and they are independent.

𝑋
(a) Show that 𝑌1 = 𝑋1 𝑎𝑛𝑑𝑌2 = 𝑋1 + 𝑋2 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝𝑒𝑛𝑑𝑒𝑛𝑡 𝑎𝑛𝑑 𝑡𝑕𝑎𝑡 𝑌2 is chi-square random variable
2
with 𝑛1 + 𝑛2 degree of freedom.

(b) Find the probability density functions of

𝑋1 /𝑛1 𝑋3 /𝑛3
𝑍1 = 𝑎𝑛𝑑 𝑍2 =
𝑋2 /𝑛2 (𝑋1 + 𝑋2 )/(𝑛1 + 𝑛2 )

[16] Let X and Y be independent random variables such that X∼ N(0, 1) and Y∼𝜆𝑛 2 2 .

𝑋
Find the probability density function of 𝑇 = .
𝑌/𝑛

[17] Let 𝑋1 , … , 𝑋𝑛 be a random sample from N(0, 1) distribution. Find the m.g. f. of Y= 𝑛𝑖=1 𝑋𝑖 2 and
identify its distribution. Further, suppose 𝑋𝑛+1 is another random sample from N(0, 1) independent

𝑋𝑛 +1
of 𝑋1 , … , 𝑋𝑛 . Derive the distribution of .
𝑌
𝑛

[18] X and Y are i. i. d. random variables each having geometric distribution with the following p. m.
f.

1 − 𝑝 𝑥 𝑝, 𝑥 = 0, 1, …
P(X= x)=
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑋
𝐼𝑑𝑒𝑛𝑡𝑖𝑓𝑦 𝑡𝑕𝑒 𝑑𝑖𝑠𝑡𝑟𝑖𝑏𝑢𝑡𝑖𝑜𝑛 𝑜𝑓 . 𝐹𝑢𝑟𝑡𝑕𝑒𝑟 𝑓𝑖𝑛𝑑 𝑡𝑕𝑒 𝑝. 𝑚. 𝑓. 𝑜𝑓 𝑍 = min⁡
(𝑋, 𝑌).
𝑋+𝑌

Solution Key

(1) 𝑃 𝑌1 = 𝑦1 , 𝑌2 = 𝑦2 = 𝑃 𝑋1 − 𝑋2 = 𝑦1 , 𝑋1 + 𝑋2 = 𝑦2
𝑦1 + 𝑦2 𝑦1 − 𝑦2
= 𝑃 𝑋1 = , 𝑋2 =
2 2
0 2−0
2 1 𝑦1 + 𝑦2 𝑦1 − 𝑦2
𝑖𝑓 = 0, = 0, 𝑖. 𝑒. 𝑦1 = 0, 𝑦2 = 0
3 3 2 2
2 1 1 2−1 𝑦1 + 𝑦2 𝑦1 − 𝑦2
𝑖𝑓 =1, = 0, 𝑖. 𝑒. 𝑦1 = 1, 𝑦2 = 1
= 3 1 3 2−1 2 2
2 1 𝑦1 + 𝑦2 𝑦1 − 𝑦2
𝑖𝑓 = 0, = 1, 𝑖. 𝑒. 𝑦1 = −1, 𝑦2 = 1
3 3 2 2
2 2 1 2−2 𝑦1 + 𝑦2 𝑦1 − 𝑦2
𝑖𝑓 =1, = 1, 𝑖. 𝑒. 𝑦1 = 0, 𝑦2 = 2
3 3 2 2
𝑖. 𝑒.
1
9
𝑖𝑓 𝑦1 , 𝑦2 = (0, 0)
2
𝑖𝑓 𝑦1 , 𝑦2 = −1, 1 , (1, 1)
𝑃 𝑌1 = 𝑦1 , 𝑌2 = 𝑦2 = 9
4
9
𝑖𝑓 𝑦1 , 𝑦2 = (0, 2)
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑌2 0 1 2
𝑌1
0 1 4
9
0 9
-1 2
1 0 9
0
2
0 9
0
5 1
𝑦1 = 0 𝑦 =0
9 9 2
2 4
𝑦1 = −1 𝑃 𝑌 = 𝑦 = 𝑦 =1
𝑃 𝑌1 = 𝑦1 = 9 2 2 9 2
2 4
𝑦1 = 1 𝑦 =2
9 9 2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑠𝑖𝑛𝑐𝑒 𝑃 𝑌1 = 𝑦1 , 𝑌2 = 𝑦2 ≠ 𝑃 𝑌1 = 𝑦1 𝑃 𝑌2 = 𝑦2 ∀ 𝑦1 , 𝑦2
𝑌1 & 𝑌2 𝑎𝑟𝑒 𝑛𝑜𝑡 𝑖𝑛𝑑𝑒𝑝.
(2) 𝑌1 = 𝑋1 𝑋2 ; 𝑌2 = 𝑋2
𝑗𝑡 𝑝. 𝑚. 𝑓.
𝑦1
𝑃 𝑌1 = 𝑦1 , 𝑌2 = 𝑦2 = 𝑃 𝑋1 𝑋2 = 𝑦1 , 𝑋2 = 𝑦2 = 𝑃 𝑋1 = , 𝑋2 = 𝑦2
𝑦2
𝑦1 𝑦1
𝑖𝑓 = 1, 2, 3; 𝑦2 = 1, 2, 3
= 36 𝑦2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑃𝑜𝑠𝑠𝑖𝑏𝑙𝑒 𝑣𝑎𝑙𝑢𝑒 𝑜𝑓 𝑌1 𝑖𝑛 1, 2, 3, 4, 6, 9 .
1
𝑃 𝑋1 = 1, 𝑋2 = 1 = 𝑦 =1
36 1
2 2 4
𝑃 𝑋1 = 1, 𝑋2 = 2 + 𝑃 𝑋1 = 2, 𝑋2 = 1 = + = 𝑦 =2
36 36 36 1
3 3 6
𝑃 𝑋1 = 1, 𝑋2 = 3 + 𝑃 𝑋1 = 3, 𝑋2 = 1 = + = 𝑦 =3
36 36 36 1
𝑃 𝑌1 = 𝑦1 = 4
𝑃 𝑋1 = 2, 𝑋2 = 2 = 𝑦1 = 4
36
6 6 12
𝑃 𝑋1 = 2, 𝑋2 = 3 + 𝑃 𝑋1 = 3, 𝑋2 = 2 = + = 𝑦 =6
36 36 36 1
9
𝑃 𝑋1 = 3, 𝑋2 = 3 = 𝑦1 = 9
36
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑍 = 𝑋1 + 𝑋2 → 𝑝𝑜𝑠𝑠𝑖𝑏𝑙𝑒 𝑣𝑎𝑙𝑢𝑒 𝑜𝑓 𝑍 𝑖𝑛 2, 3, 4, 5, 6
𝑃 𝑧=3
= 𝑃 𝑋1 + 𝑋2 = 3
1
𝑃 𝑋1 = 1, 𝑋2 = 1 = 𝔍=2
36
4
𝑃 𝑋1 = 1, 𝑋2 = 2 + 𝑃 𝑋1 = 2, 𝑋2 = 1 = 𝔍=3
36
10
= 𝑃 𝑋1 = 1, 𝑋2 = 3 + 𝑃 𝑋1 = 2, 𝑋2 = 2 + 𝑃 𝑋1 = 3, 𝑋2 = 1 = 𝔍=4
36
12
𝑃 𝑋1 = 2, 𝑋2 = 3 + 𝑃 𝑋1 = 3, 𝑋2 = 2 = 𝔍=5
36
9
𝑃 𝑋1 = 3, 𝑋2 = 2 = 𝔍=6
36
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑃 𝑋=𝑥,𝑋+𝑌=𝑡 𝑃 𝑋=𝑥,𝑌=𝑡−𝑥
(3) 𝑃 𝑋 = 𝑥 𝑋 + 𝑌 = 𝑡 = =
𝑃 𝑋+𝑌=𝑡 𝑃 𝑋+𝑌=𝑡
𝑛1 𝑛2
𝑃 𝑋 = 𝑥 𝑃 𝑌 = 𝑡−𝑥 𝑝 𝑥 1 − 𝑝 𝑛−𝑥 𝑡−𝑥 𝑝𝑡−𝑥 1 − 𝑝 𝑛 2 − 𝑡−𝑥
= = 𝑥 𝑛 1 +𝑛 2 𝑡
𝑃 𝑋+𝑌 =𝑡 𝑡
𝑝 1 − 𝑝 𝑛 1 +𝑛 2 −𝑡
𝑛1 𝑛2
𝑥 𝑡−𝑥
= 𝑛 1 +𝑛 2 ; 0 ≤ 𝑥 ≤ 𝑛1 0 ≤ 𝑡 − 𝑥 ≤ 𝑛2
𝑡
↑ 𝑕𝑦𝑝𝑒𝑟 𝑔𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐 𝑛1 , 𝑛2
𝑃 𝑋=𝑥,𝑌=𝑡−𝑥
(4) 𝑃 𝑋 = 𝑥 𝑋 + 𝑌 = 𝑡 = [𝑋 ∼ 𝑃 𝜆1 , 𝑌 ∼ 𝑃 𝜆2 ; 𝑋 + 𝑌 ∼ 𝑃 𝜆1 + 𝜆2 ]
𝑃 𝑋+𝑌=𝑡
𝑒 −𝜆 1 𝜆1 𝑥 𝑒 −𝜆 2 𝜆2 𝑡−𝑥
𝑃 𝑋 =𝑥 𝑃 𝑌 =𝑡−𝑥 𝑥! 𝑡−𝑥 !
= =
𝑃 𝑋+𝑌 =𝑡 𝑒 − 𝜆 1 +𝜆 2 𝜆1 + 𝜆2 𝑡
𝑡!
𝑥 1
𝑡 𝜆1 𝜆1
= 1−
𝑥 𝜆1 + 𝜆2 𝜆1 + 𝜆2
𝜆1
𝑖. 𝑒. 𝑋| 𝑋 + 𝑌 = 𝑡 𝐵𝑖𝑛 𝑡,
𝜆1 + 𝜆2
(5) f_X(x) = 3(1 − x)² for 0 < x < 1; 0 otherwise.

F_X(x) = 0 for x < 0; = 3 ∫₀ˣ (1 − t)² dt = 1 − (1 − x)³ for 0 ≤ x < 1; = 1 for x ≥ 1.

Y = min(X₁, X₂, X₃, X₄); Z = max(X₁, X₂, X₃, X₄); X₁, X₂, X₃, X₄ i. i. d. from f_X.

d. f. of Y: F_Y(y) = P(Y ≤ y) = 1 − P(Y > y) = 1 − Π₁⁴ P(Xᵢ > y) = 1 − [1 − F_X(y)]⁴

= 1 − [(1 − y)³]⁴ = 1 − (1 − y)¹², 0 < y < 1

f_Y(y) = 12(1 − y)¹¹ for 0 < y < 1; 0 otherwise.

d. f. of Z: F_Z(z) = P(Z ≤ z) = Π₁⁴ P(Xᵢ ≤ z) = [F_X(z)]⁴ = [1 − (1 − z)³]⁴, 0 < z < 1

f_Z(z) = 12(1 − z)²[1 − (1 − z)³]³ for 0 < z < 1; 0 otherwise.
(6) Similar to (5) .
(7) X: arrival time of A
Y : arrival time of B

X & Y i. i. d. Exp(𝜆)- p. d. f.

𝑓 𝑥 = 𝜆𝑒 −𝜆𝑥 𝑥 > 0

𝑟𝑒𝑞𝑑 𝑝𝑟𝑜𝑏 = 𝑃 𝑋 < 𝑌, 𝑌 − 𝑋 ≤ 𝑡 + 𝑃 𝑌 < 𝑋, 𝑋 − 𝑌 ≤ 𝑡

=𝑃 𝑌−𝑡 ≤𝑋 ≤𝑌 +𝑃 𝑋−𝑡 ≤𝑌 ≤𝑋

= 𝑃 𝑋 ≤ 𝑌 ≤ 𝑋 + 𝑡 + 𝑃 𝑌 ≤ 𝑋 ≤ 𝑌 + 𝑡 𝑗𝑡 𝑝. 𝑑. 𝑓 𝑜𝑓 𝑋, 𝑌 → 𝜆2 𝑒 −𝜆 𝑥+𝑦
𝑥 > 0, 𝑦 > 0
∞ 𝑥+𝑡 ∞ 𝑦+𝑡
= 𝜆2 𝑒 −𝜆 𝑥+𝑦
𝑑𝑦 𝑑𝑥 + 𝜆2 𝑒 −𝜆 𝑥+𝑦
𝑑𝑥 𝑑𝑦
0 𝑥 0 𝑥

∞ 𝑥+𝑡
= 2𝜆2 𝑒 −𝜆𝑥 𝑒 −𝜆𝑡 𝑑𝑦 𝑑𝑥
0 𝑥


1
= 2𝜆2 1 − 𝑒 −𝜆𝑡 𝑒 −2𝜆𝑥 𝑑𝑥 = 1 − 𝑒 −𝜆𝑡 .
𝜆 0

(8) 𝑋1 , 𝑋2 ∼ 𝑈 0, 1
𝜕𝑦1 𝜕𝑦1
1 𝜕𝑥1 𝜕𝑥2 1 1
𝑌1 = 𝑋1 + 𝑋2 ⟹ = = = 2
𝐽 𝜕𝑦2 𝜕𝑦2 −1 1
𝜕𝑥1 𝜕𝑥2
1
𝑌2 = 𝑋2 − 𝑋1 𝐽 =
2
𝑓𝑋1 ,𝑋2 𝑥1 , 𝑥2 = 1; 0 < 𝑥1 < 1, 0 < 𝑥2 < 1
1
⟹ 𝑓𝑌1 ,𝑌2 𝑦1 , 𝑦2 = ; 0 < 𝑦1 + 𝑦2 < 2, 0 < 𝑦1 − 𝑦2 < 2
2
𝑅𝑎𝑛𝑔𝑒 𝑢𝑛𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛𝑎𝑙𝑙𝑦 0 < 𝑦1 < 2 & − 1 < 𝑦2 < 1
𝑦1 + 𝑦2 𝑦1 − 𝑦2 𝑦1 − 𝑦2
𝑖2 = , 𝑖1 = 𝐴𝑙𝑠𝑜 0 < 𝑥1 < 1 ; ⟹ 0 < <1
2 2 2
⟹ 0 < 𝑦1 − 𝑦2 < 2
𝑦2 < 𝑦1 < 2 + 𝑦2 & 𝑦1 − 2 < 𝑦2 < 𝑦1 } ____(1)
𝑦1 + 𝑦2
𝐴𝑙𝑠𝑜 0 < 𝑥2 < 1 ; 0 < <1
2
0 < 𝑦1 + 𝑦1 < 2
−𝑦2 < 𝑦1 < 2 − 𝑦2 & − 𝑦1 < 𝑦2 < 2 − 𝑦1 } ____(2)
𝐶𝑜𝑚𝑏𝑖𝑛𝑖𝑛𝑔 1 & 2
max 𝑦2 , −𝑦2 < 𝑦1 < min 2 + 𝑦2 , 2 − 𝑦2
& max 𝑦1 − 2, −𝑦1 < 𝑦2 < min 𝑦1 , 2 − 𝑦1 ) ____(3)
𝐼𝑓 − 1 < 𝑦2 < 0 𝑡𝑕𝑒𝑛 𝑓𝑟𝑜𝑚 3 − 𝑦2 < 𝑦1 < 2 + 𝑦2 & 𝑖𝑓 0 < 𝑦2 < 1 𝑡𝑕𝑒𝑛 𝑓𝑟𝑜𝑚 3 𝑦2
< 𝑦1 < 2 − 𝑦2 } _____(4)
𝐴𝑙𝑡𝑒𝑟𝑛𝑎𝑡𝑖𝑣𝑒𝑙𝑦 𝑖𝑓 0 < 𝑦1 < 1 𝑡𝑕𝑒𝑛 𝑓𝑟𝑜𝑚 3 − 𝑦1 < 𝑦2 < 𝑦1 & 𝑖𝑓 1 < 𝑦1
< 2 𝑡𝑕𝑒𝑛 𝑓𝑟𝑜𝑚 3 𝑦1 − 2 < 𝑦2 < 2 − 𝑦1 }____(5)
⟹ 𝑀𝑎𝑟𝑔 𝑜𝑓 𝑌1
1 𝑦1
𝑓𝑌1 𝑦1 = 𝑑𝑦 = 𝑦1 𝑖𝑓 0 < 𝑦1 < 1
2 −𝑦1 2
2−𝑦1
1
𝑈𝑠𝑖𝑛𝑔 5 →= 𝑑𝑦2 = 2 − 𝑦1 𝑖𝑓 1 < 𝑦1 < 2
2 𝑦1 −2
& 𝑀𝑎𝑟𝑔 𝑜𝑓 𝑌2
2+𝑦2
1
𝑓𝑌2 𝑦2 = 𝑑𝑦1 = 1 + 𝑦2 𝑖𝑓 − 1 < 𝑦2 < 0
2 −𝑦2
2−𝑦2
1
𝑢𝑠𝑖𝑛𝑔 4 →= 𝑑𝑦1 = 1 − 𝑦2 𝑖𝑓 0 < 𝑦2 < 1
2 𝑦2
(9) X ∼N(0, 1)
Y∼ N(0, 1) > ind
1 1
exp − 𝑥 2 + 𝑦 2
𝑓𝑋,𝑌 𝑥, 𝑦 =
2𝜋 2
𝑧 𝑢
𝑈 = 𝑌} 𝑋 = 𝑈𝑍 𝐽 = = |𝑢|
1 0
1 1
𝑓𝑈,𝑍 𝑢, 𝑧 = exp − 𝑢2 Ʒ2 + 𝑢2 𝑢 ; −∞ < 𝑢 < ∞, −∞ < 𝑧 < ∞
2𝜋 2
1 ∞ 1
𝑓𝑍 Ʒ = 𝑢 exp − 𝑢2 1 + Ʒ2 𝑑𝑢
2𝜋 −∞ 2

1 𝑢2 1 1
= 𝑢 exp − 1 + Ʒ2 𝑑𝑢 = . ; −∞ < Ʒ < ∞
𝜋 0 2 𝜋 1 + Ʒ2
𝑖. 𝑒. 𝑍 ∼ 𝐶𝑎𝑢𝑐𝑕𝑦 𝑑𝑖𝑠𝑡𝑛 0, 1
𝜃 1
𝐼𝑛 𝑔𝑒𝑛𝑒𝑟𝑎𝑙 𝑋 ∼ 𝐶𝑎𝑢𝑐𝑕𝑦 𝜇, 𝜃 → 𝑓𝑋 𝑥 = ; −∞ < 𝑥 < ∞
𝜋 1 + (𝑥 − 𝜇)2
𝑥+𝑦
1
(10) 𝑓𝑋,𝑌 𝑥, 𝑦 = ⎾𝛼 ⎾𝛼 𝛼 +𝛼 2 𝑥 𝛼 1 −1 𝑦 𝛼 2 −1 𝑒 − 𝜃 , 𝑥 > 0, 𝑦 > 0
1 2𝜃 1
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑈 = 𝑋 + 𝑌} 𝑋 = 𝑈𝑉
𝑋
𝑉= } 𝑌 = 𝑈(1 − 𝑉)
𝑋+𝑌
𝑣 𝑢
𝐽= = −𝑢
1 − 𝑣 −𝑢
𝑅𝑎𝑛𝑔𝑒 𝑢 > 0, 0 < 𝑣 < 1
1 𝛼 2 −1 −𝑢
𝑓𝑈,𝑉 𝑢, 𝑣 = 𝑢𝑣 𝛼 1 −1 𝑢 1 − 𝑣 𝑒 𝜃. 𝑢 𝑢 > 0, 0 < 𝑣 < 1
⎾𝛼1 ⎾𝛼2 𝜃 𝛼 1 +𝛼 2
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑖. 𝑒. 𝑓𝑈,𝑉 𝑢, 𝑣
1 𝑢
𝛼 1 +𝛼 2 −1 −𝜃
1 𝛼 −1
𝑢 𝑒 × 𝑣 𝛼 1 −1 1 − 𝑣 𝑢 2 𝑢 > 0, 0 < 𝑣 < 1
= ⎾𝛼1 + 𝛼2 𝜃 𝛼 1 +𝛼 2 𝐵 𝛼1 , 𝛼2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
1 𝑢
𝛼 1 +𝛼 2 −1 −𝜃
⟹ 𝑓𝑈 𝑢 = 𝑢 𝑒 𝑢>0
⎾𝛼1 + 𝛼2 𝜃 𝛼 1 +𝛼 2
𝑈 ∼ 𝐺𝑎𝑚𝑚𝑎. = 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1 𝛼 −1
𝑓𝑉 𝑣 = 𝑣 𝛼 1 −1 1 − 𝑣 𝑢 2 0 < 𝑣 < 1
𝐵 𝛼1 , 𝛼2
𝑉 ∼ 𝐵𝑒𝑡𝑎. = 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
⟹ 𝑈 & 𝑉 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝.
(11) f_{X,Y}(x, y) = C² / [(1 + x⁴)(1 + y⁴)]; −∞ < x < ∞, −∞ < y < ∞

U₁ = X/Y, U₂ = Y ⟹ X = U₁U₂, Y = U₂; |J| = |u₂|

Range: −∞ < u₁ < ∞, −∞ < u₂ < ∞

f_{U₁,U₂}(u₁, u₂) = C² |u₂| / [(1 + u₁⁴u₂⁴)(1 + u₂⁴)]; −∞ < u₁ < ∞, −∞ < u₂ < ∞

f_{U₁}(u₁) = ∫ f_{U₁,U₂}(u₁, u₂) du₂ = 2C² ∫₀^∞ u₂ / [(1 + u₁⁴u₂⁴)(1 + u₂⁴)] du₂

= C²π / [2(1 + u₁²)]   (on integrating; substitute t = u₂²)

∫ f_{U₁}(u₁) du₁ = 1 ⟹ C² = 2/π²

⟹ f_{U₁}(u₁) = (1/π) · 1/(1 + u₁²); −∞ < u₁ < ∞.

↑ Cauchy distn.
1
1 𝑥 2 +𝑦 2
(12) 𝑓𝑋,𝑌 𝑥, 𝑦 = 2𝜋 𝑒 −2 − ∞ < 𝑥 < ∞, −∞ < 𝑦 < ∞
𝑋 = 𝑅 cos 𝛩
𝑌 = 𝑅 sin 𝛩
𝑐𝑜𝑠𝜃 −𝑟 𝑠𝑖𝑛𝜃
𝐽= =𝑟
𝑠𝑖𝑛𝜃 𝑟 𝑐𝑜𝑠𝜃
𝑅𝑎𝑛𝑔𝑒 𝑟 ≥ 0, 0 < 𝜃 < 2𝜋
1 −𝑟 2
𝑓𝑅,𝛩 𝑟, 𝜃 = 𝑒 2 𝑟, 𝑟 > 0, 0 < 𝜃 < 2𝜋
2𝜋
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑟2
𝑓𝑅 𝑟 = 𝑟𝑒 − 2 𝑟 > 0
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
1
𝑓𝛩 𝜃 = 0 < 𝜃 < 2𝜋 𝛩 ∼ 𝑈 0, 2𝜋
2𝜋
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
⟹ 𝑅 &𝛩 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝.
𝑅2
𝐷𝑒𝑓𝑖𝑛𝑒 𝑌 = 𝑦>0
2
𝑑𝑟 1
𝑅= 2 𝑦 =
𝑑𝑦 2𝑦
1
𝑓𝑌 𝑦 = 2𝑦𝑒 −𝑦 𝑦 > 0
2𝑦
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑖. 𝑒. 𝑓𝑌 𝑦 = 𝑒 −𝑦 𝑦 > 0
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑅2
⟹ ∼ 𝐸𝑥𝑝 1 .
2
𝑈 = 𝑋 2 + 𝑌 2 = 𝑅 2 − 𝑓 𝑛 𝑜𝑓 𝑟. 𝑣. 𝑘
𝑋
𝑉 = = 𝑐𝑜𝑡𝛩 − 𝑓 𝑛 𝑜𝑓 𝑟. 𝑣. 𝛩
𝑌
𝑠𝑖𝑛𝑐𝑒 𝑅 & 𝛩 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝, 𝑈 & 𝑉 𝑎𝑟𝑒 𝑎𝑙𝑠𝑜 𝑖𝑛𝑑𝑒𝑝.
𝑋
𝑖. 𝑒. 𝑋 2 + 𝑌 2 & 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝.
𝑌
(13) 𝑈1 ∼ 𝑈 0, 1
−𝑙𝑛𝑈1 ∼ 𝐸𝑥𝑝 1 − 𝑠𝑡𝑟𝑎𝑖𝑔𝑕𝑡 𝑓𝑜𝑟𝑤𝑎𝑟𝑑
𝑈2 ∼ 𝑈 0, 1
2𝜋𝑈2 ∼ 𝑈 0, 2𝜋 − 𝑠𝑡𝑟𝑎𝑖𝑔𝑕𝑡 𝑓𝑜𝑟𝑤𝑎𝑟𝑑.
⟹ −𝑙𝑛𝑈1 ∼ 𝐸𝑥𝑝 1 & 2𝜋𝑈2 ∼ 𝑈 0, 2𝜋 𝑎𝑛𝑑 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝. 𝑏𝑦 𝑝𝑟𝑜𝑏𝑙𝑒𝑚 # 12
𝑅2
𝑗𝑡 𝑑𝑖𝑠𝑡𝑛 𝑜𝑓 – 𝑙𝑛𝑈1 , 2𝜋𝑈2 𝑖𝑠 𝑠𝑎𝑚𝑒 𝑎𝑠 𝑗𝑡 𝑑𝑖𝑠𝑡𝑛 𝑜𝑓 ,𝛩
2
𝑅2
𝑖. 𝑒. – 𝑙𝑛𝑈1 , 2𝜋𝑈2 ≝ ,𝛩
2
𝑖. 𝑒. – 2𝑙𝑛𝑈1 , 2𝜋𝑈2 ≝ 𝑅 2 , 𝛩
𝑖. 𝑒. – 2𝑙𝑛𝑈1 cos 2𝜋𝑈2 , – 2𝑙𝑛𝑈1 sin 2𝜋𝑈2 ≝ 𝑅 𝑐𝑜𝑠𝛩, 𝑅 𝑠𝑖𝑛𝛩
𝑖. 𝑒. 𝑋1 , 𝑋2 ≝ 𝑅 𝑐𝑜𝑠𝛩, 𝑅 𝑠𝑖𝑛𝛩
⟹ 𝑋1 𝑎𝑛𝑑 𝑋2 𝑎𝑟𝑒 𝑖. 𝑖. 𝑑. 𝑁 0, 1 𝑟. 𝑣. 𝑠.
𝐷𝑖𝑣𝑒𝑟𝑡 𝑚𝑒𝑡𝑕𝑜𝑑 𝑈1 , 𝑈2 𝑖. 𝑖. 𝑑. 𝑈 0, 1 .
𝑓𝑈1 ,𝑈2 𝑢1 , 𝑢2 = 1 ; 0 < 𝑢1 < 1, 0 < 𝑢2 < 1
= 0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒
𝑋1 = – 2𝑙𝑛𝑈1 cos 2𝜋𝑈2
𝑋2 = – 2𝑙𝑛𝑈1 sin 2𝜋𝑈2
𝑅𝑎𝑛𝑔𝑒 𝑜𝑓 𝑋1 ; −∞ < 𝑥1 < ∞, 𝑠𝑙𝑦 − ∞ < 𝑥2 < ∞
𝑋1 2 + 𝑋2 2 = – 2𝑙𝑛𝑈1
𝑋2
= 𝑡𝑎𝑛 2𝜋𝑈2
𝑋1
1
𝑈1 = exp − 𝑋1 2 + 𝑋2 2
2
1 𝑋2
𝑈2 = tan−1
2𝜋 𝑋1
1 1
exp − 𝑋1 2 + 𝑋2 2 −𝑋1 exp − 𝑋1 2 + 𝑋2 2 −𝑋2
2 2
𝐽=
𝑋2 𝑋1
− 2 2
2𝜋 𝑋1 + 𝑋2 2𝜋 𝑋1 2 + 𝑋2 2
1 1
𝐽 = exp − 𝑋1 2 + 𝑋2 2 −
2 2𝜋
1
exp − 2 𝑋1 2 + 𝑋2 2
𝐽 =
2𝜋
1 1
⟹ 𝑓𝑋1 ,𝑋2 𝑥1 , 𝑥2 = exp − 𝑥1 2 + 𝑥2 2 ; −∞ < 𝑥1 < ∞, −∞ < 𝑥2 < ∞ ↓
2𝜋 2
1 −1𝑥 1 2 1 −1𝑥 2 2
= 𝑒 2 𝑒 2
2𝜋 2𝜋
⟹ 𝑋1 & 𝑋2 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝 𝑁 0, 1 𝑟. 𝑣. 𝑠.
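The Box-Muller construction of (13) can be demonstrated empirically: the pooled sample of X₁ and X₂ values should have mean ≈ 0, variance ≈ 1, and P(−1 < X < 1) ≈ 0.6827. Seed and sample size are arbitrary; u₁ is drawn from (0, 1] so log u₁ stays finite.

```python
import random, math

random.seed(3)
N = 100_000
xs = []
for _ in range(N):
    u1 = 1.0 - random.random()          # in (0, 1]
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    xs.append(r * math.cos(2 * math.pi * u2))
    xs.append(r * math.sin(2 * math.pi * u2))

mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean * mean
frac = sum(1 for x in xs if -1 < x < 1) / len(xs)
print(mean, var, frac)   # close to 0, 1 and 0.6827
```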
− 𝑥 1 ,𝑥 2 ,𝑥 3
(14) 𝑓𝑋1 ,𝑋2 ,𝑋3 𝑥1 , 𝑥2 , 𝑥3 = 𝑒 ; 𝑥1 > 0, 𝑥2 > 0, 𝑥3 > 0
𝑋1 𝑋1
𝑌1 = ; 𝑌2 = ; 𝑌 = 𝑋1 + 𝑋2 + 𝑋3
𝑋1 + 𝑋2 𝑋1 + 𝑋2 + 𝑋3 3
𝑖. 𝑒. 𝑋1 = 𝑌1 𝑌2 𝑌3
𝑋2 = 𝑌2 𝑌3 1 − 𝑌1
𝑋3 = 𝑌3 1 − 𝑌2
𝑋1 + 𝑋2 = 𝑌2 𝑌3 , 𝑋1 = 𝑌1 𝑌2 𝑌3 , 𝑋2 = 𝑌2 𝑌3
𝑦2 𝑦3 𝑦1 𝑦3 𝑦1 𝑦2
𝐽 = −𝑦2 𝑦3 𝑦3 1 − 𝑦1 𝑦2 1 − 𝑦1 = 𝑦2 𝑦3 2
0 −𝑦3 1 − 𝑦2
2 −𝑦3
𝑓𝑌1 ,𝑌2 ,𝑌3 𝑦1 , 𝑦2 , 𝑦3 = 𝑦2 𝑦3 𝑒 ; 0 < 𝑦1 < 1, 𝑦3 > 0
1 ∞
𝑓𝑌1 𝑦1 = 𝑦2 𝑑𝑦2 𝑦3 2 𝑒 −𝑦3 𝑑𝑦3 = 1 0 < 𝑦1 < 1
0 0
𝑖. 𝑒. 𝑌1 ∼ 𝑈 0, 1
𝑓𝑌2 𝑦2 = 𝑦2 × 1 × 2 0 < 𝑦2 < 1
⎾𝑚 + 𝑛 𝑚 −1 𝑛−1
𝑖. 𝑒. 𝑌2 ∼ 𝐵𝑒𝑡𝑎 2, 1 𝑋 ∼ 𝐵𝑒𝑡𝑎 𝑚, 𝑛 𝑓𝑋 𝑥 = 𝑥 1−𝑥
⎾𝑚⎾𝑛
1 1
&𝑓𝑌3 𝑦3 = 𝑑𝑦1 𝑑𝑦2 𝑦3 2 𝑒 −𝑦3
0 0
1
= 𝑒 −𝑦3 𝑦3 2 0 < 𝑦3 < ∞
2
𝑦2 𝑦3 𝑦3 1 − 𝑦1 1 − 𝑦2 + 𝑦2 𝑦3 1 − 𝑦1
𝑦1 𝑦3 𝑦2 𝑦3 𝑦1 𝑦3 𝑦1 𝑦2
0 𝑦3 𝑦2
0 − 𝑦3 1 − 𝑦2
𝑥𝑖 𝑛𝑖
1
(15) (a) 𝑓𝑋1 ,𝑋2 𝑥1 , 𝑥2 = 2
𝑖=1 𝑛 𝑖 𝑛 𝑒 − 2 𝑥𝑖 2 −1 ; 𝑥𝑖 > 0
22⎾ 𝑖
2
2
𝑥𝑖 𝑛𝑖
=𝐶 𝑒 − 2 𝑥𝑖 2 −1 ; 𝑥𝑖 > 0
𝑖=1
𝑌 𝑌
𝑋1 𝑋1 = 𝑌 1+21
1
𝑌1 = ; 𝑌 = 𝑋1 + 𝑋2 |
𝑋2 2 𝑋2 = 𝑌 +
𝑌2
1 1
1 1 𝑥 2
= 𝑥2 − 2 = 1 + 𝑥 = 𝑥1 + 𝑥2 = 𝑦2
=
1 + 𝑦1
𝑥2 2
𝐽 𝑥2 𝑥2 2 𝑥2 2 𝑦2 𝑦2
1 1 1 + 𝑦1
𝑦2
𝐽 =
1 + 𝑦1 2
𝑦2
𝑦1 𝑦2 𝑛 1 −1 𝑦2 𝑛 2 −1 𝑦2
𝑓𝑌1 ,𝑌2 𝑦1 , 𝑦2 = 𝐶 𝑒 − 2 𝑦1 +1
2
1+𝑦1
2
𝑦1 +1 2
𝑦1 > 0, 𝑦2 > 0

𝑦2 𝑛 1 +𝑛 2
𝑖. 𝑒. 𝑓𝑌1 ,𝑌2 𝑦1 , 𝑦2 = 𝐶1 𝑒 − 2 𝑦2 2
−1

𝑛 2 −1
𝑦1 2
𝑓𝑌2 𝑋 𝑛 1 +𝑛 2 𝑦1 > 0, 𝑦2 > 0
1 + 𝑦1 2

𝑓𝑌1

⟹ 𝑌1 & 𝑌2 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝


𝑦2 𝑛 1 +𝑛 2
& 𝑓𝑌2 𝑦2 = 𝐶1 𝑒 − 2 𝑦2 2
−1
𝑦2 >0

∞ −1
𝑛1 + 𝑛2 𝑛 1 +𝑛 2
𝑓𝑌2 𝑦2 𝑑𝑦2 = 1 ⟹ 𝐶1 = ⎾ .2 2
0 2

⟹ 𝑌2 ∼ 𝜆2 𝑤𝑖𝑡𝑕 𝑛1 + 𝑛2 𝑑. 𝑓.

𝑏 𝑠𝑖𝑚𝑖𝑙𝑎𝑟 𝑡𝑎 𝑎

𝑋1
𝑛
𝑍1 = 1 ∼ 𝐹𝑛 1 ,𝑛 2 → 𝐹 𝑑𝑖𝑠𝑡𝑛 𝑤𝑖𝑡𝑕 𝑛1 , 𝑛2 𝑑. 𝑓. &
𝑋2
𝑛2

𝑋3 /𝑛3
𝑍2 = ∼ 𝐹𝑛 3 ,𝑛 1 + 𝑛 2
𝑋1 + 𝑋2 / 𝑛1 + 𝑛2

(16) X∼ N(0, 1)
1 𝑥2 1 𝑦 𝑛
𝑓𝑋,𝑌 = 𝑒− 2 𝑛 𝑒 −2 𝑦 2 −1
2𝑥 𝑛
22 ⎾2
𝑋 𝑋 𝑇
𝑇= 𝑑𝑒𝑓𝑖𝑛𝑒 𝑑𝑢𝑚𝑚𝑦 𝑈 = 𝑌 →
𝑌 𝑌 𝑈=𝑌
𝑛
𝑈
⟹𝑋=𝑇
𝑛
𝑌=𝑈
𝑢 𝑡 𝑢
𝐽= 𝑛 2 𝑛 𝑢 = 𝑛
0 1
1 1 𝑡2𝑢 𝑢 𝑛 1

⟹ 𝑓𝑇,𝑈 𝑡, 𝑢 = exp − exp − , 𝑢 2 2 −∞<𝑡 < ∞ 𝑢>0
𝑛 𝑛 2 𝑛 2
2𝜋 22 𝑛
2

𝑓𝑇 𝑡 = 𝑓𝑇,𝑈 𝑡, 𝑢 𝑑𝑢
0

1 𝑛 1 𝑢 𝑡2
= . 𝑢 2 −2 exp − 1+ 𝑑𝑢
𝑛𝑛
0 2 𝑛
2𝜋 22
2 𝑛
𝑛+1
⎾ 2 1
= 𝑛 𝑛
. 𝑛+1 −∞<𝑡 <∞
2𝜋 22 𝑛 1 2
2 𝑡2
2 1 + 𝑛
=⋯
𝑛 2
(17) 𝑀𝑌 𝑡 = 𝐸 𝑒 𝑡 𝑌 = 𝐸 𝑒 𝑡 1 𝑋𝑖
𝑛 𝑛
𝑡𝑋 𝑖 2
= 𝐸 𝑒 = 𝑀𝑋 𝑖 2 𝑡
𝑖=1 𝑖=1
𝑛
1 𝑛
− −
𝑋𝑖 2 ∼ 𝜆1 2 → = 1 − 2𝑡 2 = 1 − 2𝑡 2
𝑖=1
⟹ 𝑌 ∼ 𝜆𝑛 2
𝑋𝑛+1 ∼ 𝑁 0, 1
> 𝑖𝑛𝑑𝑒𝑝.
𝑌 ∼ 𝜆𝑛 2
𝑗𝑡 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑌 & 𝑋𝑛+1
1 𝑦 𝑛 1 𝑥2
𝑓𝑌,𝑋𝑛 +1 𝑦, 𝑥 = 𝑛 𝑒 −2 𝑦 2 −1 × −
𝑒 2
𝑛 2𝜋
22 ⎾ 2
𝑋𝑛+1
𝑇=
𝑌 𝑢
𝑛 } ⟹ 𝑋𝑛+1 = 𝑇 𝑛 𝑌 = 𝑈
𝑈=𝑌
𝑢 𝑡 𝑢
𝐽= 𝑛 2 𝑛 𝑢 = 𝑛.
0 1
𝑗𝑡 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑇 & 𝑈
𝑛 𝑛 −1 1 𝑡2𝑢 𝑢 𝑛
𝑓𝑇,𝑈 𝑡, 𝑢 = 22 ⎾ 2𝜋 𝑛 exp − exp − 𝑢 2 −1 ; −∞ < 𝑡 < ∞, 𝑢 > 0
2 2 𝑛 2
1
𝑛 𝑛 −1 𝑛 𝑢 𝑡2
𝑓𝑇 𝑡 = 22 ⎾ 2𝜋 𝑛 𝑢 2 −1 exp − 1+ 𝑑𝑢
2 0 2 𝑛
𝑛+1
𝑛 𝑛 −1 ⎾ 2
= 22 ⎾ 2𝜋 𝑛 𝑛+1 ; −∞ < 𝑡 < ∞
2 2
1 𝑡2
1+
2 𝑛
𝑛+1 𝑛+1


2 𝑡2 2
= 𝑛 1 + ; −∞ < 𝑡 < ∞
𝜋 ⎾2 𝑛 𝑛
(18) Z = X + Y; Z ∈ {0, 1, …}

P(Z = z) = P(X + Y = z) = Σ_{x=0}^{z} P(X = x, Y = z − x)

= Σ_{x=0}^{z} P(X = x) P(Y = z − x)

= Σ_{x=0}^{z} qˣp · q^{z−x} p = p² qᶻ Σ_{x=0}^{z} 1

i.e. P(Z = z) = p² qᶻ (z + 1) for z = 0, 1, …; and 0 otherwise.

P(X = x, Z = z) = P(X = x, Y = z − x) = p² qᶻ for x = 0, 1, …, z; z = 0, 1, …; and 0 otherwise.

⟹ P(X = x | Z = z) = 1/(z + 1), x = 0, 1, …, z; i.e. X given X + Y = z is discrete uniform on {0, 1, …, z}.

Problem Set-9

1 1
[1] Let 𝑋𝑛 be a sequence of 𝑁 𝑛
,1 − 𝑛 , 𝑠𝑕𝑜𝑤 𝑡𝑕𝑎𝑡 𝑋𝑛 → 𝑍, 𝑤𝑕𝑒𝑟𝑒 𝑍 ∼ 𝑁 0, 1 .
[2] Let 𝑋𝑛 be a sequence of i.i.d. random variables with E(𝑋𝑖 )=𝜇, 𝑉𝑎𝑟 𝑋𝑖 = 𝜎 2 𝑎𝑛𝑑 𝐸(𝑋𝑖 −
1 𝑋1 −𝜇 2 +⋯+ 𝑋𝑛 −𝜇 2 1
𝜇)4 = 𝜎 4 + 1. 𝐹𝑖𝑛𝑑 lim𝑛→∞ 𝑃[𝜎 2 − 𝑛
≤ 𝑛
≤ 𝜎2 + 𝑛 ] .
𝑛
[3] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 𝑏𝑒 𝑖. 𝑖. 𝑑. 𝐵 1, 𝑝 , 𝑆𝑛 = 𝑖=1 𝑋𝑖 . 𝐹𝑖𝑛𝑑 𝑛 𝑤𝑕𝑖𝑐𝑕 𝑤𝑜𝑢𝑙𝑑 𝑔𝑢𝑎𝑟𝑎𝑛𝑡𝑒𝑒
𝑆𝑛
𝑃 𝑛
− 𝑝 ≥ 0.01 ≤ 0.01, no matter whatever the unknown p may be.
[4] Let 𝑋1 , … , 𝑋𝑛 be i.i.d. from a distribution with mean
𝑛 (𝑋𝑛 ͟−𝜇 )
𝜇 𝑎𝑛𝑑 𝑓𝑖𝑛𝑖𝑡𝑒 𝑣𝑎𝑟𝑖𝑎𝑛𝑐𝑒 𝜎 2 . 𝑃𝑟𝑜𝑣𝑒 𝑡𝑕𝑎𝑡 𝑆𝑛
→ 𝑍, 𝑤𝑕𝑒𝑟𝑒 𝑍 ∼ 𝑁 0, 1 .

[5] The p.d.f. of a random variable X is f(x) = { 1/x², x ≥ 1 ; 0, otherwise }.

Consider a random sample of size 72 from the distribution having the above p. d. f. compute,
approximately, the probability that more than 50 of these observations are less than 3.
[6] Let X₁, …, X₁₀₀ be i.i.d. from Poisson(3) distribution and let Y = Σ_{i=1}^{100} X_i. Using CLT, find an approximate value of P(100 ≤ Y ≤ 200).

[7] Let X∼ Bin (100, 0.6). Find an approximate value of P(10 ≤X ≤ 16).
[8] The p.d.f. of Xₙ is given by fₙ(x) = { (1/Γ(n)) e^{−x} x^{n−1}, x > 0 ; 0, otherwise }

Find the limiting distribution of Yₙ = Xₙ/n.

[9] Let X̄ denote the mean of a random sample of size 64 from the Gamma distribution with density

f(x) = { (1/(Γ(p) α^p)) e^{−x/α} x^{p−1}, x > 0 ; 0, otherwise }

with α = 2, p = 4. Compute the approximate value of P(7 < X̄ < 9).

[10] X₁, …, Xₙ is a random sample from U(0, 2). Let Yₙ = X̄ₙ; show that √n(Yₙ − 1) → N(0, 1/3).

Solution Set

(1) F_{Xₙ}(x) = P(Xₙ ≤ x) = P[ (Xₙ − 1/n)/√(1 − 1/n) ≤ (x − 1/n)/√(1 − 1/n) ]

= Φ( (x − 1/n)/√(1 − 1/n) ) ⟶ Φ(x) as n → ∞

⟹ Xₙ ⟶ X ∼ N(0, 1)

Alt: m.g.f. of Xₙ

M_{Xₙ}(t) = exp( t/n + (t²/2)(1 − 1/n) )

⟶ e^{t²/2} ← m.g.f. of N(0, 1)

⟹ Xₙ ⟶ X ∼ N(0, 1).
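The CDF convergence in (1) can be sketched numerically with the standard normal CDF written via the error function. A minimal check (the evaluation points and values of n are arbitrary choices, not from the original notes):

```python
import math

def std_normal_cdf(x):
    # Phi(x) expressed through the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F_Xn(x, n):
    # CDF of X_n ~ N(1/n, 1 - 1/n)
    return std_normal_cdf((x - 1.0 / n) / math.sqrt(1.0 - 1.0 / n))

# The gap to Phi(x) shrinks as n grows, at several points x.
for x in (-1.5, 0.0, 0.7, 2.0):
    gap_small_n = abs(F_Xn(x, 10) - std_normal_cdf(x))
    gap_large_n = abs(F_Xn(x, 10_000) - std_normal_cdf(x))
    assert gap_large_n < gap_small_n
    assert gap_large_n < 1e-3
```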
(2) Y_i = (X_i − μ)²

E(Y_i) = E(X_i − μ)² = σ²

V(Y_i) = E[ (X_i − μ)² − σ² ]²

= E(X_i − μ)⁴ + σ⁴ − 2σ² E(X_i − μ)²

= σ⁴ + 1 + σ⁴ − 2σ⁴ = 1

i.e. E(Y_i) = σ²; V(Y_i) = 1 ∀ i & Y₁ … Yₙ i.i.d.

Sₙ = Σ Y_i

E(Sₙ) = nσ²

V(Sₙ) = n

CLT ⟹ (Sₙ − E(Sₙ))/√(V(Sₙ)) → N(0, 1)

i.e. [ (X₁ − μ)² + … + (Xₙ − μ)² − nσ² ]/√n → X ∼ N(0, 1).

lim_{n→∞} P[ σ² − 1/√n ≤ ( (X₁ − μ)² + … + (Xₙ − μ)² )/n ≤ σ² + 1/√n ]

= lim_{n→∞} P[ −1 ≤ ( (X₁ − μ)² + … + (Xₙ − μ)² − nσ² )/√n ≤ 1 ]

= Φ(1) − Φ(−1) = 2Φ(1) − 1 = ⋯

(3) E(Sₙ) = np ; V(Sₙ) = npq

P[ |Sₙ/n − p| ≥ t ] ≤ E(Sₙ − np)²/(t² n²) = np(1 − p)/(n² t²) ≤ 1/(4n t²) ≤ 0.01 (given)

⟹ n ≥ 1/(0.04 t²) for t = 0.01

n ≥ ⋯
(4) CLT ⟹ √n(X̄ₙ − μ)/σ ⟶ Z ∼ N(0, 1)

Also Sₙ² = (1/n) Σ_{i=1}^n (X_i − X̄)² ⟶_p σ²

( Sₙ² = (1/n) Σ X_i² − X̄², where (1/n) Σ X_i² ⟶_p σ² + μ² and X̄² ⟶_p μ² ) ⟹ Sₙ ⟶_p σ

Using Slutsky's theorem ( Xₙ → X, Yₙ →_p c ⟹ Xₙ/Yₙ → X/c ):

( √n(X̄ₙ − μ)/σ ) / ( Sₙ/σ ) → X ∼ N(0, 1)

i.e. √n(X̄ₙ − μ)/Sₙ ⟶ X ∼ N(0, 1).
(5) X₁ … X₇₂ r.s. from f(x) = { 1/x², x > 1 ; 0, otherwise }

Define Y_i = { 1 if X_i < 3 ; 0 otherwise }

P(Y_i = 1) = P(X_i < 3) = ∫₁³ (1/x²) dx = 2/3 = θ (say)

Y₁, …, Y₇₂ are i.i.d. B(1, θ)

Y = Σ_{i=1}^{72} Y_i ∼ B(72, θ = 2/3)

CLT ⟹ (Y − 72 × 2/3)/√(72 × 2/3 × 1/3) ⟶ Z ∼ N(0, 1)

i.e. (Y − 48)/4 ⟶ Z ∼ N(0, 1)

P(Y > 50) = 1 − P(Y ≤ 50) = 1 − P(Y ≤ 50.5) ← continuity correction

= 1 − P( (Y − 48)/4 ≤ (50.5 − 48)/4 )

≈ 1 − Φ(2.5/4) = ⋯
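The final probability left as "⋯" in the solution above can be evaluated with the error function; this is a sketch of that last numeric step only.

```python
import math

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# 1 - Phi(2.5/4): normal approximation with continuity correction
approx = 1.0 - std_normal_cdf(2.5 / 4.0)
print(round(approx, 3))
```

The value comes out a little above one quarter, i.e. roughly 0.27.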

(6) X₁ … X₁₀₀ i.i.d. P(3)

E(X₁) = 3; V(X₁) = 3; Y = Σ_{i=1}^{100} X_i ∼ P(300) ⟹ E(Y) = V(Y) = 300

CLT ⟹ (Y − 300)/(10√3) = (Sₙ − E(Sₙ))/√(V(Sₙ)) → N(0, 1)

P(100 ≤ Y ≤ 200) = P(99.5 ≤ Y ≤ 200.5) ← continuity correction

= P( (99.5 − 300)/(10√3) ≤ (Y − 300)/(10√3) ≤ (200.5 − 300)/(10√3) )

≈ Φ( (200.5 − 300)/(10√3) ) − Φ( (99.5 − 300)/(10√3) ).

(7) X ∼ Bin(100, 0.6)

CLT ⟹ (X − 100 × 0.6)/√(100 × 0.6 × 0.4) = (X − 60)/√24 ⟶ Z ∼ N(0, 1)

⟹ P(10 ≤ X ≤ 16) = P(9.5 ≤ X ≤ 16.5) ← continuity correction

= P( (9.5 − 60)/√24 ≤ (X − 60)/√24 ≤ (16.5 − 60)/√24 )

≈ Φ( (16.5 − 60)/√24 ) − Φ( (9.5 − 60)/√24 )

= ⋯

(8) Xₙ has p.d.f.

fₙ(x) = { (1/Γ(n)) e^{−x} x^{n−1}, x > 0 ; 0, otherwise }

m.g.f. of Xₙ:

M_{Xₙ}(t) = (1/Γ(n)) ∫₀^∞ e^{tx} e^{−x} x^{n−1} dx

= (1/Γ(n)) ∫₀^∞ e^{−x(1−t)} x^{n−1} dx

= 1/(1 − t)ⁿ = (1 − t)^{−n}, t < 1

m.g.f. of Yₙ = Xₙ/n is

M_{Yₙ}(t) = E(e^{t Xₙ/n}) = (1 − t/n)^{−n} ⟶ e^t as n → ∞

↑ m.g.f. of r.v. degenerate at x = 1

⟹ Yₙ ⟶ X (degenerate at 1)

(9)

f_{X_i}(x) = { (1/(Γ(p) α^p)) e^{−x/α} x^{p−1}, x > 0 ; 0, otherwise }   α = 2, p = 4

E(X_i) = αp = 8; V(X_i) = α²p = 16 = σ²

E(X̄) = 8; V(X̄) = 16/64 = 1/4

By CLT √n(X̄ − αp)/√(α²p) → Z ∼ N(0, 1)

i.e. 2(X̄ − 8) ⟶ Z ∼ N(0, 1)

P(7 < X̄ < 9) = P( 2(7 − 8) < 2(X̄ − 8) < 2(9 − 8) )

≈ P(−2 < Z < 2)

= Φ(2) − Φ(−2) = 2Φ(2) − 1
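The closing expression 2Φ(2) − 1 can be evaluated numerically; a one-line sketch:

```python
import math

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P(7 < Xbar < 9) ≈ 2*Phi(2) - 1 for Xbar approximately N(8, 1/4)
approx = 2.0 * std_normal_cdf(2.0) - 1.0
print(round(approx, 4))
```

This is the familiar two-sigma probability, about 0.9545.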

(10)

X_i ∼ U(0, 2); E(X_i) = (1/2) ∫₀² x dx = 1

E(X_i²) = (1/2) ∫₀² x² dx = 4/3 ; V(X_i) = 1/3

X₁, …, Xₙ i.i.d. with E(X₁) = 1 & V(X₁) = 1/3

By CLT √n(X̄ₙ − 1) ⟶ N(0, 1/3).

i.e. √n(Yₙ − 1) ⟶ N(0, 1/3).

Problem Set-10

[1] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be a random sample from an exponential distribution with p.d.f.

1 𝑥
𝑓𝑥 𝑥 = exp − ; 𝑥 > 0
𝛽 𝛽
𝑛
Show that X̅= 𝑖=1 𝑋𝑖 /𝑛 is an unbiased estimator of𝛽.

[2] Let X₁, X₂, …, Xₙ be a random sample from U(0, θ); θ > 0. Show that

((n+1)/n) X_(n) and 2X̄ are both unbiased estimators of θ.

[3] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be a random sample from an exponential distribution with p.d.f.

𝑓 𝑥 = 𝛽 exp −𝛽 𝑥 ; 𝑥 > 0
1
Show that X̅ is an unbiased estimator of .
𝛽

[4] Let X₁, X₂, …, Xₙ be a random sample from N(θ, θ²), θ > 0. Show that ( Σ_{i=1}^n X_i )²/(n(n+1)) and ( Σ_{i=1}^n X_i² )/(2n) are both unbiased estimators of θ².

[5] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be a random sample from P( 𝜃); 𝜃 >0. Find an unbiased estimator of 𝜃 𝑒 −2𝜃 .

[6] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be a random sample from B(1, 𝜃); 0≤ 𝜃 ≤1.


(a) Show that the estimator T(X) = ( √n/2 + Σ_{i=1}^n X_i )/( n + √n ) is not unbiased for θ.

(b) Show that lim_{n→∞} E(T(X)) = θ.

(An estimator satisfying the condition in (b) is said to be unbiased in the limit.)
𝜇 𝜇
[7] 𝑋1 , … , 𝑋𝑛 be a random sample from N 𝜇, 𝜎 2 , 𝜇 ∈ ℜ, 𝜎 ∈ ℜ+ . Find unbiased estimators of 𝑎𝑛𝑑 .
𝜎2 𝜎

[8] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be a random sample from B(1, 𝜃); 0 ≤ 𝜃 ≤1. Find an unbiased estimator of
𝜃 2 (1 − 𝜃).

[9] Using Neyman Fisher Factorization Theorem, find a sufficient statistic based on a random sample
X₁, X₂, …, Xₙ from each of the following distributions
1 𝑥
exp − 𝛼 𝑖𝑓 𝑥 > 0
(a) 𝑓𝛼 𝑥 = 𝛼
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

exp − 𝑥 − 𝛽 𝑖𝑓 𝑥 > 𝛽
(b) 𝑓𝛽 𝑥 =
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
1 𝑥−𝛽
(c) 𝑓𝛼,𝛽 𝑥 = 𝛼
exp − 𝛼
𝑖𝑓 𝑥 > 𝛽
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
(d) f_{μ,σ}(x) = { (1/(xσ√(2π))) exp( −(log x − μ)²/(2σ²) ) if x > 0 ; 0, otherwise }

(e) f_θ(x) = { 1/θ, −θ/2 ≤ x ≤ θ/2 ; 0, otherwise }

[10] Let X₁ and X₂ be independent random variables with densities f₁(x₁) = θe^{−θx₁} and f₂(x₂) = 2θe^{−2θx₂} as the respective p.d.f.s, where θ > 0 is an unknown parameter and 0 < x₁, x₂ < ∞. Using Neyman Fisher Factorization Theorem find a sufficient statistic for θ.

[11]Let 𝑋1 , … , 𝑋𝑛 be a random sample with densities

exp 𝑖𝜃 − 𝑥 𝑖𝑓 𝑥 ≥ 𝑖𝜃
𝑓𝑥 𝑖 𝑥 =
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

Using Neyman Fisher Factorization Theorem find a sufficient statistic for 𝜃.

[12] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be a random sample from a Beta (𝛼, 𝛽 ) distribution 𝛼 > 0, 𝛽 > 0 𝑤𝑖𝑡𝑕 𝑝. 𝑑. 𝑓.

⎾𝛼 + 𝛽 𝛼−1
𝑓 𝑥 = 𝑥 (1 − 𝑥)𝛽 −1 0 < 𝑥 < 1
⎾𝛼⎾𝛽
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒

Show that
𝑛
(a) 𝑖=1 𝑋𝑖 is sufficient for 𝛼 if 𝛽 is known to be a given constant.
𝑛
(b) 𝑖=1(1 − 𝑋𝑖 ) is sufficient for 𝛽 if 𝛼 is known to be given constant.
𝑛 𝑛
(c) 𝑖=1 𝑋𝑖 , 𝑖=1 1 − 𝑋𝑖 is jointly sufficient for (𝛼, 𝛽) if both the parameters are unknown.

[13] Let T and T* be two statistic such that T= 𝛹(𝑇 ∗ ), Show that if T is sufficient then 𝑇 ∗ is also sufficient.

1 1
[14] 𝑋1 , … , 𝑋𝑛 be a random sample from U 𝜃 − 2 , 𝜃 + 2 , 𝜃 ∊ ℜ. Find a sufficient statistic for 𝜃.

[15] Let X₁, …, Xₙ be independent random variables with X_i (i = 1, 2, …, n) having the density

f_i(x_i) = { iθ e^{−iθ x_i}, x_i > 0 ; 0, otherwise }

Find a sufficient statistic for θ.

Solution Key
(1) E(X) = (1/β) ∫₀^∞ x e^{−x/β} dx = β

⟹ E(X̄) = E( (1/n) Σ X_i ) = (1/n) Σ E(X_i) = β

⟹ X̄ is u.e. of β
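Unbiasedness claims like the one above lend themselves to a quick seeded Monte Carlo sketch (β, n, the seed and the replication count are all arbitrary choices, not from the notes):

```python
import random

random.seed(42)
beta = 2.5          # arbitrary true mean of the exponential
n, reps = 20, 20000

# Average the sample mean over many replications; it should hover near beta.
est = sum(
    sum(random.expovariate(1.0 / beta) for _ in range(n)) / n
    for _ in range(reps)
) / reps
assert abs(est - beta) < 0.05
```

Note that `random.expovariate` takes the rate, so the mean-β density here uses rate 1/β.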
(2) f_{X_(n)}(x) = { (n/θⁿ) x^{n−1}, 0 < x < θ ; 0, o/w }

E(X_(n)) = (n/θⁿ) ∫₀^θ xⁿ dx = (n/(n+1)) θ

⟹ E( ((n+1)/n) X_(n) ) = θ

⟹ ((n+1)/n) X_(n) is u.e. of θ.

Also f_X(x) = { 1/θ, 0 < x < θ ; 0, o/w }

E(X) = θ/2

⟹ E(2X̄) = (2/n) E( Σ X_i ) = (2/n) Σ E(X_i) = θ

⟹ 2X̄ is u.e. for θ
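A seeded Monte Carlo sketch of the two unbiased estimators above (θ, n, seed and replication count are arbitrary choices):

```python
import random

random.seed(0)
theta = 3.0          # arbitrary true value
n, reps = 10, 20000

t1_sum = t2_sum = 0.0
for _ in range(reps):
    xs = [random.uniform(0.0, theta) for _ in range(n)]
    t1_sum += (n + 1) / n * max(xs)      # ((n+1)/n) X_(n)
    t2_sum += 2.0 * sum(xs) / n          # 2 * Xbar
t1_mean, t2_mean = t1_sum / reps, t2_sum / reps
assert abs(t1_mean - theta) < 0.05
assert abs(t2_mean - theta) < 0.05
```

Both averages settle near θ; a variance comparison of the same two estimators is taken up in Problem 11 of Set-11.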
(3) E(X) = β ∫₀^∞ x e^{−βx} dx = 1/β

E(X̄) = E( (1/n) Σ X_i ) = (1/n) Σ E(X_i) = 1/β

⟹ X̄ is u.e. of 1/β.
(4) T₁ = Σ X_i , T₂ = Σ X_i²

E(T₁²) = V(T₁) + E²(T₁)

= nθ² + n²θ² = θ² n(n + 1)

⟹ E( T₁²/(n(n+1)) ) = θ²

⟹ T₁²/(n(n+1)) is u.e. of θ²

E(T₂) = E( Σ X_i² ) = Σ E(X_i²)

= Σ [ V(X_i) + E²(X_i) ]

= Σ (θ² + θ²) = 2nθ²

⟹ T₂/(2n) is u.e. of θ²
(5) g(θ) = θe^{−2θ}

δ₀(X) = { 1 if X₁ = 0, X₂ = 1 ; 0 o/w }

E(δ₀(X)) = 1 · P(X₁ = 0, X₂ = 1)

= P(X₁ = 0) P(X₂ = 1)

= e^{−θ} · (e^{−θ} θ¹/1!) = θe^{−2θ}

⟹ δ₀(X) is u.e. of θe^{−2θ}.

(6) X₁, …, Xₙ i.i.d. B(1, θ)

Σ X_i ∼ B(n, θ)

E(T(X)) = ( √n/2 + E(Σ X_i) )/(n + √n) = ( √n/2 + nθ )/(n + √n) ≠ θ

⟹ T(X) is not u.e. of θ.

lim_{n→∞} E(T(X)) = lim_{n→∞} ( √n/2 + nθ )/(n + √n) = θ

⟹ T(X) is unbiased in the limit for θ
(7) X₁, …, Xₙ r.s. from N(μ, σ²)

X̄ ∼ N(μ, σ²/n) and Y = (n−1)S²/σ² ∼ χ²_{n−1}, with X̄ and S² indep.

If Z ∼ χ²_m, then

E(1/Z) = (1/(2^{m/2} Γ(m/2))) ∫₀^∞ z^{−1} e^{−z/2} z^{m/2 − 1} dz

= (1/(2^{m/2} Γ(m/2))) ∫₀^∞ e^{−z/2} z^{(m−2)/2 − 1} dz

= 2^{(m−2)/2} Γ((m−2)/2) / (2^{m/2} Γ(m/2)) = 1/(m − 2)

& E(1/√Z) = (1/(2^{m/2} Γ(m/2))) ∫₀^∞ e^{−z/2} z^{(m−1)/2 − 1} dz

= 2^{(m−1)/2} Γ((m−1)/2) / (2^{m/2} Γ(m/2)) = Γ((m−1)/2)/(√2 Γ(m/2))

⟹ E(1/Y) = E( σ²/((n−1)S²) ) = 1/((n−1) − 2) = 1/(n−3)

⟹ E(1/S²) = ((n−1)/(n−3)) · (1/σ²)

& E(1/√Y) = E( σ/(√(n−1) S) ) = Γ((n−2)/2)/(√2 Γ((n−1)/2))

⟹ E(1/S) = ( √(n−1) Γ((n−2)/2) / (√2 Γ((n−1)/2)) ) · (1/σ)

Since X̄ & S² are indep.,

E(X̄/S²) = E(X̄) · E(1/S²) = μ · ((n−1)/(n−3)) · (1/σ²)

⟹ E( ((n−3)/(n−1)) · X̄/S² ) = μ/σ²

⟹ ((n−3)/(n−1)) · X̄/S² is an unbiased estimator of μ/σ².

Further,

E(X̄/S) = E(X̄) · E(1/S) = μ · ( √(n−1) Γ((n−2)/2) / (√2 Γ((n−1)/2)) ) · (1/σ)

⟹ E( ( √2 Γ((n−1)/2) / (√(n−1) Γ((n−2)/2)) ) · X̄/S ) = μ/σ

⟹ ( √2 Γ((n−1)/2) / (√(n−1) Γ((n−2)/2)) ) · X̄/S is an unbiased estimator of μ/σ.
(8) 𝑋1 , … , 𝑋𝑛 are i. i. d. B(1, 𝜃)
𝑔 𝜃 = 𝜃 2 (1 − 𝜃)
1 𝑖𝑓 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 0
𝐷𝑒𝑓𝑖𝑛𝑒 (X̲)=
0 𝑜/𝑤
𝐸𝜃 δ X = P 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 0
= P 𝑋1 = 1 𝑃 𝑋2 = 1 𝑃 𝑋3 = 0
= 𝜃 2 (1 − 𝜃)
⟹ (X̲) is an u. e. of g(𝜃)= 𝜃 2 (1 − 𝜃).

(9)
𝑥
1
(a) f(x│𝛼)= 𝛼 𝑒 −𝛼 ; 𝑥 > 0

1
1
jt p. d. f. 𝑓 𝑥 𝛼 = 𝛼 𝑛 𝑒 −𝛼 𝑥𝑖
𝑋1 , … , 𝑋𝑛 > 0

1 1
= 𝑛
𝑒 −𝛼 𝑥𝑖
.1
𝛼
𝑛

𝑖. 𝑒. 𝑓 𝑥 𝛼 = 𝑔 𝛼, 𝑥𝑖 . 𝑕 𝑥 𝑕 𝑥 = 1 .
1

𝑛
By NFFT, T (X̲) = 1 𝑥𝑖 is suff for 𝛼

(9)(b) 𝑓 𝑥 𝛽 = 𝑒 − 𝑥−𝛽 𝑥>𝛽

𝑒− (𝑥 𝑖 −𝛽)
, 𝑋1 , … , 𝑋𝑛 > 𝛽
𝑓 𝑥𝛽 =
0 𝑜/𝑤

𝑒− 𝑥 𝑖 +𝑛𝛽
, 𝑥1 >𝛽
𝑖. 𝑒. 𝑓 𝑥 𝛽 =
0 𝑜/𝑤

1 𝑎<1
𝑖. 𝑒. 𝑓 𝑥 𝛽 = 𝑒 𝑛𝛽 − 𝑥𝑖
𝐼 𝛽,𝑥 1 𝐼 𝑎,𝑏 =
0 𝑜/𝑤

= 𝑒− 𝑥𝑖
𝑒 𝑛𝛽 𝐼 𝛽,𝑥 1

= 𝑕 𝑥 𝑔 𝛽, 𝑥 1

By NFFT, T(X̲)= 𝑋 1 is a suff statistic.

1 − 𝑥𝑖 𝑛𝛽
(9) (c) 𝑓 𝑥 𝛼, 𝛽 = 𝛼 𝑛 exp 𝛼
+ 𝛼
𝐼 𝛽 ,𝑥 1
1 − 𝑥𝑖 𝑛𝛽
= 𝑛
exp + .𝐼 𝛽,𝑥 1 .1
𝛼 𝛼 𝛼

=𝑔 𝛼, 𝛽 ; 𝑥𝑖 𝑥 1 . 𝑕(𝑥)

By, NFFT, T(X̲)= 𝑥𝑖 𝑥 1 is jointly sufficient for (𝛼, 𝛽).

1 𝑛 1 1
𝑛
(9)(d) 𝑓 𝑥 𝜇, 𝜎 = 𝑖=1 𝑥 . exp − (𝑙𝑜𝑔𝑥𝑖 − 𝜇)2
𝜎 2𝜋 𝑖 2𝜎 2

𝑛 𝑛
1 𝑛𝜇2 1 𝜇 2
1
= 𝑛 exp − 2 − 2 (𝑙𝑜𝑔𝑥𝑖 ) + 2 𝑙𝑜𝑔𝑥𝑖 × 𝑥𝑖 −1
𝜎 2𝜎 2𝜎 𝜎 2𝜋 𝑖=1
𝑛 𝑛
2
1
= 𝑔 𝜇, 𝜎 ; 𝑙𝑜𝑔𝑥𝑖 , (𝑙𝑜𝑔𝑥𝑖 ) . 𝑕(𝑥) 𝑥𝑖 −1
2𝜋 𝑖=1

By NFFT 𝑙𝑜𝑔𝑥𝑖 , (𝑙𝑜𝑔𝑥𝑖 )2 is jointly sufficient for (𝜇, 𝜎)

1 𝜃
; − 2 < 𝑥1 … 𝑥𝑛 < 𝜃/2
(9)(e)f(x̰│𝜃)= 𝜃𝑛
0 𝑜/𝑤

1 𝜃
; │𝑥𝑖 │ < ; 𝑖 = 1, 2, … 𝑛
= 𝜃𝑛 2
0 𝑜/𝑤
1 𝜃
; Max𝑖 │𝑥𝑖 │ < 2 ; 𝑖 = 1, 2, … 𝑛
i.e. f(x̰│𝜃)= 𝜃𝑛
0 𝑜/𝑤
1 𝜃
⟹ f (x̰│𝜃)= 𝐼 Max𝑖 │𝑥𝑖 │ ,
𝜃𝑛 2

⟹By NFFT T(X̲)= Max │𝑥𝑖 │ is suff for 𝜃.


(10) Joint p.d.f. of X₁ & X₂:

f(x₁, x₂) = θe^{−θx₁} · 2θe^{−2θx₂} = 2θ² e^{−θ(x₁ + 2x₂)}

By NFFT T(X₁, X₂) = X₁ + 2X₂ is suff for θ.
(11) jt p.d.f.

f(x|θ) = { Π_{i=1}^n exp(iθ − x_i) if x₁/1, x₂/2, …, xₙ/n ≥ θ ; 0 o/w }

= { e^{θ n(n+1)/2} e^{−Σ x_i} if min_i (x_i/i) ≥ θ ; 0 o/w }

i.e. f(x|θ) = e^{θ n(n+1)/2} I( θ, min_i (x_i/i) ) × e^{−Σ_{i=1}^n x_i}

= g( θ, min_i (x_i/i) ) × h(x)

⟹ T(X) = min_i (X_i/i) is suff.
(12)𝑋1 , … , 𝑋𝑛 𝑖. 𝑖. 𝑑 𝐵𝑒𝑡𝑎 𝛼, 𝛽
(a) 𝛽 is known – 𝛼 is the unknown parameter

𝛽 −1 1 𝑛
⎾𝛼+𝛽 𝑛 𝛼−1
𝜋 1 − 𝑥𝑖 ⎾𝛽
𝜋𝑥𝑖 0 < 𝑥1 , … , 𝑥𝑛 < 1
f (x̰│𝛼) = ⎾𝛼
𝑕 𝑥
0 𝑜/𝑤
𝑛
By NFFT 𝑖=1 𝑋𝑖 is suff for 𝛼.

(b) 𝛼 is known – 𝛽 is the unknown parameter

𝛼−1 1 𝑛
⎾𝛼+𝛽 𝑛 𝛽 −1 𝜋𝑥𝑖 ⎾𝛼
𝜋 1 − 𝑥𝑖 0 < 𝑥1 , … , 𝑥𝑛 < 1
f (x̰│𝛼) = ⎾𝛽
𝑕 𝑥
0 𝑜/𝑤
𝑛
By NFFT 𝑖=1(1 − 𝑋𝑖 ) is suff for 𝛽.

(c) 𝛼, 𝛽 both unknown

⎾𝛼+𝛽 𝑛 𝛼−1 𝛽 −1
𝜋𝑥𝑖 𝜋 1 − 𝑥𝑖 . 1 0 < 𝑥1 , … , 𝑥𝑛 < 1
f (x̰│𝛼, 𝛽) = ⎾𝛼⎾𝛽
0 𝑜/𝑤

By NFFT (𝜋𝑥𝑖 , 𝜋 1 − 𝑥𝑖 ) is jointly sufficient for (𝛼, 𝛽).

(13) T is suff for θ ∊ Θ & T = Ψ(T*)

By NFFT T is suff for θ iff

f(x|θ) = g(θ, t(x)) h(x)

i.e. f(x|θ) = g(θ, Ψ(t*(x))) · h(x)

= g′(θ, t*(x)) · h(x)

⟹ T*(X) is suff for θ.


1 1
1, 𝜃 − < 𝑥(1) , … , < 𝑥(𝑛) < 𝜃 +
(14) f (x̰│𝜃) = 2 2
0 𝑜/𝑤

i.e. f (x̰│𝜃) = 𝐼 1
𝜃− ,𝑥 1
𝐼 𝑥 𝑛 ,𝜃+
1
2 2

= 𝑔 𝜃, 𝑥 1 , 𝑥 𝑛 𝑕(𝑥)

By NFFT, T(X)= 𝑋 1 ,𝑋 𝑛 is jointly suff for 𝜃.

(15) f (x̰│𝜃)= 𝜃 − 𝑒 −𝜃𝑥 1 2𝜃 − 𝑒 −2𝜃𝑥 2 … 𝑛𝜃 − 𝑒 −𝑛𝜃 𝑥 𝑛


𝑛
𝑛
i.e. f (x̰│𝜃)= 𝜃 𝑛 𝑖=1 𝑖 𝑒 −𝜃 𝑖=1 𝑖𝑥 𝑖

𝑛
𝑛
= 𝑖 𝜃 𝑛 𝑒 −𝜃 𝑖=1 𝑖𝑥 𝑖

𝑖=1
𝑛

= 𝑕 𝑥 𝑔 𝜃, 𝑖𝑥𝑖
𝑖=1

𝑛
By NFFT, T(X)= 𝑖=1 𝑖𝑋𝑖 is sufficient for 𝜃.

Problem Set-11

[1] Let 𝑋1 , … , 𝑋𝑛 be a random sample from P(𝜃), 𝜃∊ (0, ∞). Show that T= 𝑛𝑖=1 𝑋𝑖 is complete
sufficient statistic. Find the Uniformly Minimum Variance Unbiased Estimator (UMVUE) of the
following parametric functions: (a) g(𝜃) = 𝜃, (b) g(𝜃) = 𝑒 −𝜃 (c) g(𝜃) = 𝑒 −𝜃 (1 + 𝜃).

[2] Suppose 𝑋1 , … , 𝑋𝑛 be a random sample from B(1, 𝜃), 𝜃∊ (0, 1). Show that T= 𝑛𝑖=1 𝑋𝑖 is
complete sufficient statistic and hence find the UMVUE for each of the following parametric
functions : (a) g(𝜃) = 𝜃, (b) g(𝜃)= 𝜃 4 and (c) g(𝜃) = 𝜃 1 − 𝜃 2 .

[3] Let X₁, …, Xₙ be a random sample from Exp(θ, 1), i.e.

f(x|θ) = { e^{−(x−θ)} if x > θ ; 0 otherwise }

Show that T = X_(1) = min{X₁, …, Xₙ} is a complete sufficient statistic and hence find the UMVUE of g(θ) = θ².

[4] X₁, …, Xₙ is a random sample from U(0, θ), θ > 0. Show that T = X_(n) = max{X₁, …, Xₙ} is a complete sufficient statistic and find the UMVUE of g(θ) = θᵏ; k > −n.

[5] X₁, …, Xₙ is a random sample from Gamma(2, θ), θ > 0, i.e.

f(x|θ) = { (1/(Γ(2) θ²)) e^{−x/θ} x if x > 0 ; 0, otherwise }

Show that T = Σ_{i=1}^n X_i is complete sufficient statistic and find the UMVUE of θ.

[6] Let 𝑋1 , … , 𝑋𝑛 be a random sample from U(𝜃 – ½ , 𝜃 + ½). Show that sufficient statistic is not
complete.

[7] Suppose the statistic T is UMVUE of 𝜃 such that V(T) ≠ 0. Show that 𝑇 2 cannot be UMVUE of 𝜃 2 .

[8] Let 𝑋1 , … , 𝑋𝑛 be a random sample from N(0, 𝜃). Find the UMVUE of𝜃 2 .

[9] Let 𝑋1 , … , 𝑋𝑛 be a random sample from N(𝜇, 𝜃). Find the UMVUE of (a) 𝜃 when 𝜇 is known and
(b) 𝜃 when 𝜇 is not known.

[10] Let 𝑋1 , … , 𝑋𝑛 be a random sample from N (𝜇, 𝜎 2 ). Find the UMVUE of (a) 𝜎 𝑟 when 𝜇 is known,
(b) 𝜎 𝑟 when 𝜇 is not known and (c) 𝛿, where 𝛿 is given by P X≥ 𝛿)= p for a given p.
[11] 𝑋1 , … , 𝑋𝑛 is a random sample from U (0, 𝜃), 𝜃 > 0. Consider the following 3 estimators for 𝜃;

𝑛+1
𝑇1 𝑋 = 𝑋(𝑛) , 𝑇2 𝑋 = 2𝑋 𝑎𝑛𝑑 𝑇3 𝑋 = 𝑋(1) + 𝑋(𝑛) .
𝑛

Show that all the estimators are unbiased for θ. Among the three estimators, which one would you prefer and why?

[12] X₁, …, Xₙ be a random sample from N(μ, σ²), μ ∊ ℜ, σ ∊ ℜ⁺. Assuming completeness of the associated sufficient statistic find the UMVUE of μ² and μ + σ.

[13] Let 𝑋1 , … , 𝑋𝑛 be a random sample from Exp (a, b). Assuming completeness of the associated
sufficient statistic, find the (a) UMVUE of a when b is known and (b) UMVUE of b when a is known.

Solution Key

(1) (a) g(𝜃)= 𝜃

E(T) = n𝜃 ⟹ E(T/n) = 𝜃

T/n u. e. based on CSS.


𝑇
⟹ is UMVUE of 𝜃
𝑛

(b) g(𝜃) = 𝑒 −𝜃

1, 𝑖𝑓 𝑋1 = 0
Consider 𝛿0 𝑋 =
0, 𝑜/𝑤

E𝛿0 𝑋 = 𝑃 𝑋1 = 0 = 𝑒 −𝜃 ⟹ 𝛿0 𝑋 𝑖𝑠 𝑢. 𝑒. 𝑜𝑓𝑒 −𝜃

Rao-blackwellization of 𝛿0 𝑋

η(t) = E( δ₀(X) | T = t )

= P(X₁ = 0 | T = t) = P(X₁ = 0, T = t)/P(T = t) = P(X₁ = 0, Σ_{i=2}^n X_i = t)/P(T = t)

= P(X₁ = 0) P(Σ_{i=2}^n X_i = t)/P(T = t)

= [ e^{−θ} · e^{−(n−1)θ} ((n−1)θ)ᵗ/t! ] / [ e^{−nθ} (nθ)ᵗ/t! ]

= ((n−1)/n)ᵗ

η(T) = ((n−1)/n)ᵀ is u.e. based on CSS

⟹ ((n−1)/n)ᵀ is UMVUE of e^{−θ}.

(c) g(θ) = e^{−θ}(1 + θ)

δ₀(X) = { 1, if X₁ ≤ 1 ; 0, o/w }

E(δ₀(X)) = P(X₁ ≤ 1) = P(X₁ = 0) + P(X₁ = 1)

= e^{−θ} + θe^{−θ} = e^{−θ}(θ + 1)

⟹ δ₀(X) is u.e. of e^{−θ}(θ + 1).

Rao-Blackwellization of δ₀(X):

η(t) = E( δ₀(X) | T = t )

= P(X₁ ≤ 1 | T = t)

= P(X₁ ≤ 1, T = t)/P(T = t)

= [ P(X₁ = 0, T = t) + P(X₁ = 1, T = t) ]/P(T = t)

= [ P(X₁ = 0, Σ₂ⁿ X_i = t) + P(X₁ = 1, Σ₂ⁿ X_i = t − 1) ]/P(T = t)

= [ P(X₁ = 0) P(Σ₂ⁿ X_i = t) + P(X₁ = 1) P(Σ₂ⁿ X_i = t − 1) ]/P(T = t)

= [ e^{−θ} e^{−(n−1)θ} ((n−1)θ)ᵗ/t! + θe^{−θ} e^{−(n−1)θ} ((n−1)θ)^{t−1}/(t−1)! ] / [ e^{−nθ} (nθ)ᵗ/t! ]

= ((n−1)/n)ᵗ ( 1 + t/(n−1) )

η(T) = ((n−1)/n)ᵀ (1 + T/(n−1)) is u.e. of g(θ) based on C.S.S. T

⟹ ((n−1)/n)ᵀ (1 + T/(n−1)) is UMVUE of g(θ) = e^{−θ}(1 + θ).

(2) 𝑋1 , … , 𝑋𝑛 r. s. from B(1, 𝜃), 𝜃∊ (0, 1)= 𝛩


𝑛
By NFFT T= 𝑖=1 𝑋𝑖 is sufficient

T ∼B n, 𝜃)

Note that p. m. f. of 𝑋𝑖 ∼ 𝐵(1, 𝜃) is

f(x)= 𝜃 𝑥 (1 − 𝜃)1−𝑥

𝜃 𝑥
= 1−𝜃
(1 − 𝜃)

𝜃
= exp(x log 1−𝜃
+ log(1 − 𝜃))

𝜃
With h(x) =1, (𝜃)= log , T(x)= x & 𝛽(𝜃)= - log (1-𝜃)
1−𝜃

This is 1-parameter exponential family

Natural parameter space: {η : η ∊ (−∞, ∞)} ( = {η(θ) : θ ∊ (0, 1)}, since the logit maps (0, 1) onto ℜ).

Natural parameter space contains open intervals.

⟹ The 1-parameter expo family is of full rank (complete)

⟹ T(X) = Σ_{i=1}^n X_i is C.S.S.

Alternate proof using convergent power series argument done class.

(a) g(𝜃)= 𝜃
𝑇
E(T) = n𝜃 ⟹ E 𝑛
=𝜃

𝑇
𝑛
u. e. based on CSS T
𝑇
⟹ is UMVUE of 𝜃.
𝑛
(b) g(𝜃)= 𝜃 4
1,
𝑖𝑓 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 1, 𝑋4 = 1
𝛿0 𝑋 =
0, 𝑜/𝑤
E𝛿0 𝑋 = 𝑃 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 1, 𝑋4 = 1 = 4𝑖=1 𝑃 𝑋𝑖 = 1 = 𝜃 4
⟹ 𝛿0 𝑋 is u.e. of 𝜃 4 .

Rao-blackwellization of 𝜃 4

(t) = E(𝛿0 𝑋 │𝑇 = 𝑡)

= 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 1, 𝑋4 = 1│𝑇 = 𝑡
𝑛
𝑃 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 1, 𝑋4 = 1, 𝑖=5 𝑋𝑡 =𝑡−4
=
𝑃 𝑇=𝑡
𝑛
𝑃 𝑋1 = 1 𝑃 𝑋2 = 1 𝑃 𝑋3 = 1 𝑃 𝑋4 = 1 𝑃 𝑖=5 𝑋𝑡 =𝑡−4
=
𝑃 𝑇=𝑡

𝑛−4
𝜃. 𝜃. 𝜃. 𝜃. 𝑡−4
𝜃 𝑡−4 1 − 𝜃 𝑛−𝑡
= 𝑛
𝑡
𝜃 𝑡 1 − 𝜃 𝑛−𝑡

𝑛−4
𝑡−4 𝑡 𝑡 − 1 𝑡 − 2 (𝑡 − 3)
= 𝑛 =
𝑡
𝑛 𝑛 − 1 𝑛 − 2 (𝑛 − 3)

𝑇 𝑇−1 𝑇−2 (𝑇−3)


(T)= is u.e. based on C.S.S.
𝑛 𝑛−1 𝑛−2 (𝑛−3)

⟹(T) is UMVUE of 𝜃 4 .
2
(c) g(𝜃) = 𝜃 1 − 𝜃

1, 𝑖𝑓 𝑋1 = 1, 𝑋2 = 0, 𝑋3 = 0
𝛿0 𝑋 =
0, 𝑜/𝑤
2
E𝛿0 𝑋 = 𝑃 𝑋1 = 1, 𝑋2 = 0, 𝑋3 = 0 = θ 1 − 𝜃
2
⟹ 𝛿0 𝑋 is u.e. of θ 1 − 𝜃

Rao-blackwellization of 𝛿0 𝑋

(t) = E(𝛿0 𝑋 │𝑇 = 𝑡)

= 𝑃 𝑋1 = 1, 𝑋2 = 0, 𝑋3 = 0 𝑇 = 𝑡
𝑃 𝑋1 = 1, 𝑋2 = 0, 𝑋3 = 0, 𝑛4 𝑋𝑖 = 𝑡 − 1
=
𝑃 𝑇=𝑡
𝑃 𝑋1 = 1 𝑃 𝑋2 = 0 𝑃 𝑋3 = 0 𝑃 𝑛4 𝑋𝑖 = 𝑡 − 1
=
𝑃 𝑇=𝑡
𝜃. 1 − 𝜃 . 1 − 𝜃 𝑛−3𝑡−1
𝜃 𝑡−1 1 − 𝜃 𝑛−3−𝑡+1
= 𝑛
𝑡
𝜃 𝑡 1 − 𝜃 𝑛−𝑡
𝑛−3
𝑛−3 !
𝑡−1 𝑡−1 ! 𝑛−𝑡−2 !
= 𝑛 =
𝑛!
𝑡
𝑡! 𝑛 − 𝑡 !
𝑡. 𝑛 − 𝑡 . (𝑛 − 𝑡 − 1)
=
𝑛 𝑛 − 1 (𝑛 − 2)
𝑇. 𝑛−𝑇 .(𝑛−𝑇−1)
(T)= 𝑛 𝑛−1 (𝑛−2)
is u.e. based on C.S.S. T
⟹(T) is UMVUE of θ 1 − 𝜃 2 .
(3) X₁, …, Xₙ r.s. from f(x) = { e^{−(x−θ)} if x > θ ; 0 otherwise }

By NFFT T = X_(1) is sufficient.

f_T(t) = { n e^{−n(t−θ)} if t > θ ; 0 o/w }

Now E(g(T)) = 0 ∀ θ ∊ Θ

⟹ ∫_θ^∞ g(t) n e^{−n(t−θ)} dt = 0 ∀ θ ∊ Θ

i.e. ∫_θ^∞ g(t) e^{−nt} dt = 0 ∀ θ ∊ Θ

Differentiating w.r.t. θ, we get

g(θ) e^{−nθ} = 0 ∀ θ ∊ Θ

⟹ g(t) = 0 a.e. on the range of T = X_(1)

⟹ T = X_(1) is complete.

g(θ) = θ²

E(X_(1)) = ∫_θ^∞ t n e^{−n(t−θ)} dt      (y = t − θ)

= n ∫₀^∞ (y + θ) e^{−ny} dy = n ( Γ(2)/n² + θ/n ) = θ + 1/n

⟹ E( X_(1) − 1/n ) = θ

Sly E(X_(1)²) = n ∫_θ^∞ t² e^{−n(t−θ)} dt      (y = t − θ)

= n ∫₀^∞ (y + θ)² e^{−ny} dy

= n ∫₀^∞ (y² + θ² + 2θy) e^{−ny} dy

= n ( Γ(3)/n³ + θ²/n + 2θ Γ(2)/n² )

= 2/n² + θ² + 2θ/n

⟹ E(X_(1)²) = 2/n² + θ² + (2/n) E( X_(1) − 1/n )

⟹ E( X_(1)² − 2/n² − (2/n)(X_(1) − 1/n) ) = θ²

i.e. E( X_(1)² − 2/n² − (2/n)X_(1) + 2/n² ) = θ²

⟹ E( X_(1)² − (2/n)X_(1) ) = θ²

X_(1)² − (2/n)X_(1) is u.e. of θ² based on C.S.S. X_(1)

⟹ X_(1)² − (2/n)X_(1) is UMVUE of θ²
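The unbiasedness of X_(1)² − (2/n)X_(1) for θ² can be sketched with a seeded simulation, using the fact that the sample minimum here is θ plus the minimum of n standard exponentials (θ, n, seed, replication count are arbitrary choices):

```python
import random

random.seed(1)
theta, n, reps = 2.0, 5, 40000     # arbitrary

acc = 0.0
for _ in range(reps):
    # X_(1) = theta + min of n standard exponentials
    x1 = theta + min(random.expovariate(1.0) for _ in range(n))
    acc += x1 * x1 - (2.0 / n) * x1
est = acc / reps
assert abs(est - theta ** 2) < 0.05
```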

(4) 𝑋1 , … , 𝑋𝑛 r.s. from U(0, 𝜃)

T= 𝑋(𝑛) is C.S.S. (proved in class)

g(θ) = θᵏ

p.d.f. of T: f_T(t) = { (n/θⁿ) t^{n−1}, 0 < t < θ ; 0, o/w }

E(X_(n)ᵏ) = (n/θⁿ) ∫₀^θ tᵏ t^{n−1} dt = ( n/(n+k) ) θᵏ.

⟹ E( ((n+k)/n) X_(n)ᵏ ) = θᵏ.

((n+k)/n) X_(n)ᵏ is u.e. based on C.S.S. X_(n)

⟹ ((n+k)/n) X_(n)ᵏ is UMVUE of θᵏ.

(5) 𝑋1 , … , 𝑋𝑛 r.s. from G(2, 𝜃); 𝜃> 0

p.d.f.
1
⎾2𝜃 2
𝑒 −𝑥/𝜃 𝑥 2−1
𝑖𝑓 𝑥 > 0
f(x)=
0 𝑜/𝑤

Note that the p.d.f. can be written as


𝑥
f(x)= x exp (− 𝜃 − 2 𝑙𝑜𝑔𝜃)

1
with h(x)= x ; 𝜂(𝜃)= − 𝜃 𝑇 𝑥 = 𝑥 ; 𝛽 𝜃 = 2 𝑙𝑜𝑔𝜃
the above is 1-parameter expo family distn.

Natural parameter space

{:𝜂 < 0} ({(𝜃) : 𝜃∊ (0, ∞)}).

The above contains open intervals

⟹ 1-parameter expo family is full rank (complete)


𝑛
⟹ T(X) = 𝑖=1 𝑋𝑖 is C.S.S.
𝑛 𝑛
E (T) = E 𝑖=1 𝑋𝑖 = 𝑖=1 𝐸 𝑋𝑖 = 2𝑛𝜃

1 𝑥 ⎾3 𝜃 3
𝐸 𝑋𝑖 = 𝑥 2 𝑒 −𝜃 𝑑𝑥 = = 2𝜃
𝜃2 0 𝜃2

𝑇
⟹𝐸 2𝑛
= 𝜃

𝑇 𝑛
is u.e. based on C.S.S. T= 𝑖=1 𝑋𝑖
2𝑛

𝑇
⟹ is UMVUE of 𝜃.
2𝑛

(6) 𝑋1 , … , 𝑋𝑛 be a r.s. from U(𝜃 – ½ , 𝜃 + ½).


1 1
1, 𝜃 − 2 < 𝑥(1) < ⋯ < 𝑥(𝑛) < 𝜃 + 2
jt. P.d.f f(x)=
0 𝑜/𝑤

i.e. f(x)= 1. 𝐼 1
𝜃− ,𝑥 1
𝐼 𝑥 𝑛 , 𝜃+
1
2 2

1, 𝑎 < 𝑏
𝐼 𝑎,𝑏 =
0, 𝑜/𝑤

By NFFT, T(X)= 𝑋 1 ,𝑋 𝑛 is jointly suff for 𝜃

𝑛−1 1 1
𝑓𝑋 1 𝑥 = 𝑛 1 − 𝐹𝑋 𝑥 𝑓𝑋 𝑥 ; 𝜃 − <𝑥<𝜃+
2 2
𝑛−1
1 1 1
= 𝑛 𝜃−𝑥+ , 𝜃− <𝑥<𝜃+
2 2 2
0 𝑜/𝑤
1 𝑛−1
𝜃+
2 1 1 𝑛
𝐸𝑋 1 =𝑛 𝑥 𝜃−𝑥+ 𝑑𝑥 = 𝜃 + −
𝜃−
1 2 2 𝑛+1
2

𝑛−1 1 1
𝑓𝑋 𝑛 𝑥 = 𝑛 𝐹𝑋 𝑥 𝑓𝑋 𝑥 ; 𝜃 − <𝑥<𝜃+
2 2
𝑛−1
1 1 1
= 𝑛 𝑥−𝜃+ , 𝜃− <𝑥<𝜃+
2 2 2
0 𝑜/𝑤
1 𝑛−1
𝜃+
2 1 𝑛 1
𝐸𝑋 𝑛 =𝑛 𝑥 𝑥−𝜃+ 𝑑𝑥 = − +𝜃
𝜃−
1 2 𝑛+1 2
2

𝑛−1
⟹𝐸 𝑋 𝑛 − 𝑋 1 − 𝑛+1 = 0 ∀ 𝜃

𝑛−1
⇏𝑋 𝑛 − 𝑋 1 = 𝑛+1 a.e.

𝑛−1
i.e. with g(T)= 𝑋 𝑛 − 𝑋 1 − 𝑛+1

E g(T)= 0 ∀ 𝜃 ⇏ g t = 0 a.e.

⟹ T= 𝑋 𝑛 − 𝑋 1 is not complete.

(7) T is UMVUE of 𝜃

⟹ E(T)= 𝜃

Suppose 𝑇 2 is UMVUE of 𝜃 2 ⟹𝑇 2 is u.e. of 𝜃 2

⟹𝐸 𝑇 2 = 𝜃 2

⟹ V(T)= 𝐸 𝑇 2 − 𝐸 2 (𝑇)

i.e. V(T) = θ² − θ² = 0

which is a contradiction

⟹ T² cannot be u.e. of θ²

⟹ T² cannot be UMVUE of θ²

(8) 𝑋1 , … , 𝑋𝑛 r.s. from N(0, 𝜃)


𝑛 2
T= 𝑖=1 𝑋𝑖 𝑖𝑠 𝐶. 𝑆. 𝑆.

𝑇 𝑇 𝑇
∼ 𝜒𝑛 2 ; 𝐸 = 𝑛; 𝑉 = 2𝑛
𝜃 𝜃 𝜃

𝐸 𝑇 = 𝑛𝜃; 𝑉 𝑇 = 2𝑛𝜃 2
2
𝐸𝑇 2 = 𝑉 𝑇 + 𝐸 𝑇 = 2𝑛𝜃 2 + 𝑛2 𝜃 2 = 𝑛 𝑛 + 2 𝜃 2

𝑇2
⟹𝐸 = 𝜃2
𝑛 𝑛+2
𝑇2
𝑛 𝑛+2
is u.e. based on C.S.S. 𝑋𝑖 2 = 𝑇

𝑇2
⟹𝑛 𝑛+2
is UMVUE of 𝜃 2 .

(9) (a) 𝑋1 , … , 𝑋𝑛 r.s. from N(𝜇, 𝜃); 𝜇 is known


𝑛
T= 𝑖=1(𝑋𝑖 − 𝜇)2 is CSS (𝜇 is known)

𝑇 (𝑋𝑖 − 𝜇)2
𝐸 =𝑛 ∼ 𝜒𝑛 2
𝜃 𝜃

(𝑋 𝑖 −𝜇 )2
⟹ is UMVUE for 𝜃 when 𝜇 is known.
𝑛

(9)(b) X₁, …, Xₙ r.s. from N(μ, θ); μ, θ both unknown

( Σ X_i, Σ X_i² ) ⇔ ( X̄, S² = (1/(n−1)) Σ(X_i − X̄)² ) is C.S.S. (2-parameter full rank expo family)

(n − 1)S²/θ = Σ(X_i − X̄)²/θ ∼ χ²_{n−1}

⟹ E( Σ(X_i − X̄)²/(n−1) ) = θ

Σ(X_i − X̄)²/(n−1) is u.e. of θ based on C.S.S.

⟹ Σ(X_i − X̄)²/(n−1) is UMVUE of θ.

(10) (a) 𝑋1 , … , 𝑋𝑛 r.s. from N(𝜇, 𝜎 2 ) 𝜇 is known

T= (𝑋𝑖 − 𝜇)2 is C.S.S.

g(𝜎)= 𝜎 𝑟
𝑇
Y= 𝜎 2 ∼ 𝜒𝑛 2


𝑟 1 𝑟 𝑦 𝑛
𝐸 𝑌2 = 𝑛 𝑦 2 𝑒 −2 𝑦 −2 −1 𝑑𝑦
𝑛
22 ⎾ 2 0


1 𝑦 𝑛+𝑟
= 𝑛 𝑒 −2 𝑦 − 2
−1
𝑑𝑦
𝑛
22 ⎾ 2 0

𝑟 𝑛+𝑟
22 ⎾ 2
= 𝑛
⎾2
𝑟 𝑟 𝑛+𝑟
𝑟 𝑇2 22 ⎾ 2
⟹𝐸 𝑌2 =𝐸 = 𝑛
𝜎𝑟 ⎾2

𝑛
⎾2
⟹𝐸 𝑟 = 𝜎𝑟
𝑛+𝑟
22 ⎾
2
𝑛 𝑟

𝑟
2
𝑛 +𝑟
𝑇 2 is u.e. of 𝜎 𝑟 based on C.S.S.
22 ⎾
2

𝑛 𝑟

⟹ 𝑟
2
𝑛 +𝑟
(𝑋𝑖 − 𝜇)2 2 is UMVUE of 𝜎 𝑟 (when 𝜇 is known).
22 ⎾
2

(b) 𝑋1 , … , 𝑋𝑛 r.s. from N(𝜇, 𝜎 2 ) ; 𝜇 & 𝜎 both unknown

𝑋, (𝑋𝑖 − 𝜇)2 is C.S.S.

= (𝑋, 𝑆𝑋 2 )

𝑆𝑋 2
Y= 𝜎2
∼ 𝜒𝑛−1 2

𝑟
𝑟 𝑛 −1+𝑟
𝑆𝑋 𝑟 22 ⎾
2
𝐸 𝑌 2 =𝐸 𝜎𝑟
= 𝑛 −1 (following the derivation in part (a))

2

𝑛 −1

⟹𝐸 𝑟
2
𝑛 −1+𝑟
𝑆𝑋 𝑟 = 𝜎 𝑟
22 ⎾
2

𝑛 −1

𝑟
2
𝑛 −1+𝑟
𝑆𝑋 𝑟 is u.e. of 𝜎 𝑟 based on C.S.S.
22 ⎾
2

𝑛 −1

⟹ 𝑟
2
𝑛 −1+𝑟
𝑆𝑋 𝑟 is UMVUE of 𝜎 𝑟 .
22 ⎾
2

(c) p = P(X ≤ δ) = P( (X − μ)/σ ≤ (δ − μ)/σ ) = Φ( (δ − μ)/σ )

Given p; to find the UMVUE of δ.

p = Φ( (δ − μ)/σ ) ⟹ (δ − μ)/σ = Φ⁻¹(p)

i.e. δ = μ + σ Φ⁻¹(p) = g(θ), with θ = (μ, σ).

Note that E(X̄) = μ & E( ( Γ((n−1)/2)/(√2 Γ(n/2)) ) S_X ) = σ   ← (from part (b) with r = 1)

⟹ E[ X̄ + ( Γ((n−1)/2)/(√2 Γ(n/2)) ) S_X Φ⁻¹(p) ] = μ + σ Φ⁻¹(p)

i.e. X̄ + ( Γ((n−1)/2)/(√2 Γ(n/2)) ) S_X Φ⁻¹(p) is u.e. of μ + σ Φ⁻¹(p) based on C.S.S.

⟹ X̄ + ( Γ((n−1)/2)/(√2 Γ(n/2)) ) S_X Φ⁻¹(p) is UMVUE of δ = μ + σ Φ⁻¹(p).

(11) X₁, …, Xₙ r.s. from U(0, θ)

It's easy to check that

E(X_i) = θ/2 ∀ i

E(X_(n)) = (n/(n+1)) θ & E(X_(1)) = θ/(n+1)

⟹ E(T₁(X)) = ((n+1)/n) E(X_(n)) = θ

E(T₂(X)) = θ & E(T₃(X)) = θ/(n+1) + (n/(n+1)) θ = θ

T₁, T₂, T₃ are all u.e. of θ

Among the 3 estimators, T₁ is UMVUE (unbiased and based on the complete sufficient statistic X_(n)) and hence is the preferred estimator.

(12) 𝑋1 , … , 𝑋𝑛 r.s. from N(𝜇, 𝜎 2 )

( X̄, S² = (1/(n−1)) Σ(X_i − X̄)² ) is C.S.S.

X̄ ∼ N(μ, σ²/n) and (n−1)S²/σ² ∼ χ²_{n−1}, indep.

E(X̄²) = V(X̄) + E²(X̄)

i.e. E(X̄²) = σ²/n + μ²

= (1/n) E(S²) + μ²

i.e. E( X̄² − (1/n) S² ) = μ²

⟹ X̄² − (1/n)S² is u.e. of μ² based only on the C.S.S.

⟹ X̄² − (1/n)S² is UMVUE of μ².

If g(μ, σ) = μ + σ:

E(X̄) = μ

and E( ( Γ((n−1)/2)/(√2 Γ(n/2)) ) ( Σ(X_i − X̄)² )^{1/2} ) = σ   (from problem # 10 (c)/(b))

⟹ E( X̄ + ( Γ((n−1)/2)/(√2 Γ(n/2)) ) ( Σ(X_i − X̄)² )^{1/2} ) = μ + σ

⟹ X̄ + ( Γ((n−1)/2)/(√2 Γ(n/2)) ) ( Σ(X_i − X̄)² )^{1/2} is u.e. of μ + σ based on C.S.S.

⟹ X̄ + ( Γ((n−1)/2)/(√2 Γ(n/2)) ) ( Σ(X_i − X̄)² )^{1/2} is UMVUE of μ + σ.

(13) 𝑋1 , … , 𝑋𝑛 r.s. from


f(x) = { (1/b) e^{−(x−a)/b}, x > a ; 0, o/w }

This is exponential distn with location parameter 'a' and scale parameter 'b'.

(a) If b is known then X_(1) is C.S.S.; in such a situation

E(X_(1)) = a + b/n

i.e. E( X_(1) − b/n ) = a

X_(1) − b/n is u.e. based on C.S.S.

⟹ X_(1) − b/n is UMVUE of a, when b is known.

(b) If a is known, then Σ X_i is C.S.S.

E( Σ X_i ) = Σ E(X_i) = n(a + b)

⟹ E( (Σ X_i)/n − a ) = b

⟹ X̄ − a is u.e. based on C.S.S. and hence is UMVUE of b.

Remark:

[When both a & b are unknown, then

T(X) = ( X_(1), Σ X_i ) or ( X_(1), Σ(X_i − X_(1)) ) is C.S.S.

We may note that under such a setup

X_(1) ∼ Exp(a, b/n) & (2/b) Σ(X_i − X_(1)) ∼ χ²_{2(n−1)}

and the two are independent, i.e. X_(1) & (2/b) Σ(X_i − X_(1)) are indep.]

Problem Set-12

[1] 𝑋1 , … , 𝑋𝑛 be a random sample from N(𝜇, 𝜎 2 ) distribution. Find the Cramer-Rao Lower Bounds
(CRLB) on the variances of unbiased estimators of 𝜇 and𝜎 2 . Can you find unbiased estimators 𝜇 and
𝜎 2 whose variance attains the respective CRLB?

[2] 𝑋1 , … , 𝑋𝑛 is a random sample from Gamma(𝛼, 𝛽)


1
𝑒 −𝑥/𝛽 𝑥 𝛼−1 𝑖𝑓 𝑥 > 0
f x∣𝛼, 𝛽) = ⎾𝛼 𝛽 𝛼
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

𝛼is assumed to be known. Find the Fisher Information I(𝛽) and CRLB on the variances of unbiased
estimators of 𝛽.

[3] 𝑋1 , … , 𝑋𝑛 be a random sample from P(𝜃), 𝜃∊(0, ∞). Find the CRLB on the variances of unbiased
estimators of the following estimands: (a) g(𝜃)= 𝜃 , (b) g(𝜃)= 𝜃 2 and (c) g(𝜃)= 𝑒 −𝜃 .

[4] Suppose 𝑋1 , … , 𝑋𝑛 be a random sample from B(1, 𝜃), 𝜃∊ (0, 1). Find the CRLB on the variances of
unbiased estimators of the following estimands: (a) g(𝜃)= 𝜃 4 (b) g(𝜃)= 𝜃(1- 𝜃).
𝑛
[5] 𝑋1 , … , 𝑋𝑛 be a random sample from U(0, 𝜃), 𝜃 > 0. Show that (a) 𝑛+1 𝑋(𝑛) is a consistent
estimator of 𝜃 and (b) 𝑒 𝑋 (𝑛 ) is consistent for 𝑒 𝜃 , where 𝑋(𝑛) = max 𝑋1 , … , 𝑋𝑛 .

[6] 𝑋1 , … , 𝑋𝑛 be a random sample from U(𝜃- ½ , 𝜃 + ½ ), 𝜃∊ℜ. Show that

1 1 𝑋 1 +𝑋 𝑛
𝑋 1 + 2,𝑋 𝑛 − 2 𝑎𝑛𝑑 2
are all consistent estimators of 𝜃, 𝑋 𝑛 = max 𝑋1 , … , 𝑋𝑛 and
𝑋 1 = min 𝑋1 , … , 𝑋𝑛 .

[7] X₁, …, Xₙ be a random sample from

f(x) = { (1/2)(1 + θx), −1 < x < 1 ; 0, otherwise }

where θ ∊ (−1, 1). Find a consistent estimator for θ.

[8] 𝑋1 , … , 𝑋𝑛 be a random sample from P(𝜃). Find a consistent estimator of 𝜃 3 3 𝜃 + 𝜃 + 12 .

[9] Let 𝑋1 , … , 𝑋𝑛 be a random sample from Gamma (𝛼, 𝛽) with density


1
𝑒 −𝑥/𝛽 𝑥 𝛼−1 𝑥 > 0
f(x)= ⎾𝛼 𝛽 𝛼
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑛
𝑖=1 𝑋 𝑖
Where, 𝛼 is a known constant and 𝛽 is an unknown parameter, show that is a consistent
𝑛𝛼
estimator of 𝛽.

[10] Let 𝑋1 , … , 𝑋𝑛 be a random sample from each of the following distributions having the following
density or mass functions. Find the maximum likelihood estimator (MLE) of 𝜃 in each case.

𝑒 −𝜃 𝜃 𝑥
(a) f(x; 𝜃)= 𝑥 = 0, 1, 2, …
𝑥!
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝜃−1
(b) f(x; 𝜃)= 𝜃 𝑥 0<𝑥<1
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
1
𝑒 −𝑥/𝜃 𝑥 > 0
(c) f(x; 𝜃)= 𝜃
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
(d) f(x; θ) = (1/2) e^{−|x−θ|}, −∞ < x < ∞.

(e) X ∼ U(−θ/2, θ/2).

[11] Let 𝑋1 , … , 𝑋𝑛 be a random sample from the distribution having p.d.f.


1
𝑒 −(𝑥−𝜃1 )/𝜃2 𝑥 ≥ 𝜃1
f(x ; 𝜃1 , 𝜃2 )= 𝜃2
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

Find the MLEs of 𝜃1 𝑎𝑛𝑑 𝜃2 .

[12] Let 𝑋1 , … , 𝑋𝑛 be a random sample from the distribution having p.d.f.

𝜆𝛼
f(x ; 𝛼, 𝜆)= ⎾𝛼
𝑒 −𝜆𝑥 𝑥 𝛼−1 𝑥 ≥ 0
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.

Find the MLEs of 𝛼 and 𝜆.

[13] Let 𝑋1 , … , 𝑋𝑛 be a random sample from the function having p.d.f.


f(x; μ, σ) = { 1/(2√3 σ), μ − √3σ < x < μ + √3σ ; 0, otherwise }
Find the MLEs of 𝜇 and 𝜎.

[14] Let 𝑋1 , … , 𝑋𝑛 be a random sample from U (𝜃- ½ , 𝜃 + ½ ), 𝜃∊ℜ. Show that any statistic
u(𝑋1 , … , 𝑋𝑛 ) such that it satisfies

1 1
𝑋(𝑛) − ≤ 𝑢 𝑋1 , … , 𝑋𝑛 ≤ 𝑋(1) +
2 2
𝑋 1 +𝑋 𝑛 3 1 1 1
Is a maximum likelihood estimator of 𝜃. in particular 𝑎𝑛𝑑 𝑋 1 + + 𝑋 𝑛 − are
2 4 2 4 2
MLEs of 𝜃.

[15] The lifetimes of a component are assumed to be exponential with parameter 𝜆. Ten of these
components were placed on a test independently. The only data recorded were the number of
components that had failed (out of 10 put to test) in less than 100 hours, which was recorded to be
3. Find the maximum likelihood estimate of 𝜆.

[16] A salesman of used cars is willing to assume that the number of sales he makes per day is a
Poisson random variable with parameter 𝜇. Over the past 30 days he made no sales on 20 days and
one or more sales on each of the remaining 10 days. Find the maximum likelihood estimate of 𝜇.

[17] Let 𝑋1 , … , 𝑋𝑛 be a random sample from each of the following distributions. Find the method of
moments estimator (MOME) of the corresponding unknown parameters in each of the situations.

(a) X ∼ P(θ);    (b) X ∼ U(−θ/2, θ/2);

(c) X ∼ Exp(0, θ);    (d) X ∼ Exp(α, β);

(e) X ∼ G(α, β) with p.d.f. f(x; α, β) = { (1/(β^α Γ(α))) e^{−x/β} x^{α−1} if x ≥ 0 ; 0, otherwise }

Solution Key

(1) 𝑋1 , … , 𝑋𝑛 i.i.d. N(𝜇, 𝜎 2 )


f(x | μ, σ²) = (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)}

log f = k − (1/2) log σ² − (x − μ)²/(2σ²); ∂log f/∂μ = (x − μ)/σ²; ∂²log f/∂μ² = −1/σ².

I(μ) = −E( ∂²log f/∂μ² ) = 1/σ²

CRLB for an u.e. of μ = σ²/n.

Since V(X̄) = σ²/n, X̄ attains the CRLB.

∂log f/∂σ² = −1/(2σ²) + (x − μ)²/(2σ⁴)

∂²log f/∂(σ²)² = 1/(2σ⁴) − (x − μ)²/σ⁶.

I(σ²) = −E( ∂²log f/∂(σ²)² ) = −1/(2σ⁴) + 1/σ⁴ = 1/(2σ⁴)

CRLB for an u.e. of σ² = 2σ⁴/n.

Now S² = (1/(n−1)) Σ₁ⁿ (X_i − X̄)² is UMVUE for σ² with

V(S²) = 2σ⁴/(n−1) > CRLB

Since the UMVUE has the lowest variance in the class of all unbiased estimators, the CRLB cannot be attained by any unbiased estimator of σ².
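The gap V(S²) = 2σ⁴/(n−1) > 2σ⁴/n = CRLB can be illustrated with a seeded simulation (μ, σ, n, seed, replication count are arbitrary choices):

```python
import random

random.seed(7)
mu, sigma, n, reps = 0.0, 1.0, 8, 30000   # arbitrary

s2_vals = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    s2_vals.append(sum((x - xbar) ** 2 for x in xs) / (n - 1))

mean_s2 = sum(s2_vals) / reps
var_s2 = sum((v - mean_s2) ** 2 for v in s2_vals) / reps
crlb = 2 * sigma**4 / n           # 0.25
exact = 2 * sigma**4 / (n - 1)    # 2/7, the true Var(S^2)
assert abs(mean_s2 - sigma**2) < 0.02   # S^2 unbiased
assert var_s2 > crlb                    # bound not attained
assert abs(var_s2 - exact) < 0.03
```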
𝑥
1 −
(2) f x∣𝛼, 𝛽) = ⎾𝛼 𝛽 𝛼 𝑒 𝛽 𝑥 𝛼−1 𝑖𝑓 𝑥 > 0

𝑥
log 𝑓 = −𝑙𝑜𝑔⎾𝛼 − 𝛼 log 𝛽 − + 𝛼 − 1 log 𝑥
𝛽

𝜕 log 𝑓 𝛼 𝑥
= − + 2
𝜕𝛽 𝛽 𝛽

𝜕 2 log 𝑓 𝛼 𝑥
2
= 2 − 2 3.
𝜕𝛽 𝛽 𝛽

𝜕 2 log 𝑓 𝛼 𝛼𝛽 𝛼
𝐼 𝛽 = −𝐸 2
= − 2 + 2 3 = 2.
𝜕𝛽 𝛽 𝛽 𝛽

1 𝛽2
⟹ CRLB for u.e. of 𝛽 : 𝛼 = 𝑛𝛼 .
𝑛. 2
𝛽

(3) 𝑋1 , … , 𝑋𝑛 i.i.d. P (𝜃)

𝑒 −𝜃 𝜃 𝑥
f x∣𝜃)= 𝑥!

log f x ∣ θ = −𝜃 + 𝑥𝑙𝑜𝑔𝜃 − log 𝑥!

𝜕 log 𝑓 𝑥 𝜕 2 log 𝑓 𝑥
= −1 + ; 2
= − 2.
𝜕𝜃 𝜃 𝜕𝜃 𝜃

𝜕 2 log 𝑓 1
𝐼 𝜃 = −𝐸 2
=
𝜕𝜃 𝜃
1 𝜃
CRLB for any u.e. of 𝜃= 1 =𝑛
𝑛.
𝜃

(2𝜃)2 4𝜃 3
CRLB for any u.e. of g (𝜃) = 𝜃 2 : 𝑛 = 𝑛
𝜃

2
−𝑒 −𝜃 𝜃 𝑒 −2𝜃
CRLB for any u.e. of g (𝜃) = 𝑒 −𝜃 ∶ 𝑛 = 𝑛
.
𝜃

(4) 𝑋1 , … , 𝑋𝑛 i.i.d. B (1, 𝜃)

f x∣𝜃)= 𝜃 𝑥 1 − 𝜃 1−𝑥

log f x ∣ θ = 𝑥𝑙𝑜𝑔 𝜃 + 1 − 𝑥 log 1 − 𝜃

𝜕 log 𝑓 𝑥 1−𝑥 𝑥 1
= + −1 = − .
𝜕𝜃 𝜃 1−𝜃 𝜃 1−𝜃 1−𝜃

𝜕 log 𝑓 2
𝜕 log 𝑓 𝜃(1 − 𝜃) 1
𝐸 𝜃 =𝐸 =𝑉 = 2 = .
𝜕𝜃 𝜕𝜃 𝜃 1−𝜃 𝜃(1 − 𝜃)

2
4𝜃 3 16𝜃 7 (1−𝜃)
CRLB for u.e. of 𝜃 4 : 1 =
𝑛. 𝑛
𝜃 (1−𝜃 )

(1−2𝜃)2 (1−2𝜃)2 𝜃(1−𝜃)


CRLB for u.e. of 𝜃(1-𝜃): 1 = 𝑛
.
𝑛.
𝜃 (1−𝜃 )

(5) 𝑋1 , … , 𝑋𝑛 i.i.d. U(0, 𝜃)


2
𝐸 𝑋 𝑛 −𝜃 𝐸𝑋 𝑛 2 +𝜃 2 −2𝜃 𝐸𝑋 𝑛
P[∣𝑋 𝑛 − 𝜃∣ ≥𝜖] ≤ 𝜖2
= 𝜖2

𝑛 𝑛−1
𝑥 , 0<𝑥<𝜃
𝑓𝑋 𝑛 𝑥 = 𝜃𝑛
0, 𝑜/𝑤
𝜃
𝑛 𝑛
𝐸𝑋 𝑛 = 𝑛 𝑥 𝑛 𝑑𝑥 = .𝜃
𝜃 0 𝑛+1

𝜃
2 𝑛 𝑛
𝐸𝑋 𝑛 = 𝑥 𝑛+1 𝑑𝑥 = 𝜃2
𝜃𝑛 0 𝑛+2

1 𝑛 𝑛
⟹ P[∣ 𝑋 𝑛 − 𝜃 ∣ ≥ ϵ] ≤ 2
𝜃 2 + 𝜃 2 − 2𝜃 .𝜃
𝜖 𝑛+2 𝑛+1

→ 0 as n→∞

⟹𝑋 𝑛 → 𝜃.
𝑛
⟹ 𝑋𝑛 →𝜃
𝑛+1

𝑛
⟹ 𝑛+1 𝑋 𝑛 is a consistent estimator for 𝜃

Further since 𝑋 𝑛 →𝜃

𝑒𝑋 𝑛 = 𝑔 𝑋 𝑛 → 𝑔 𝜃 = 𝑒𝜃 .

⟹ 𝑒 𝑋 𝑛 is a consistent estimator for 𝑒 𝜃 .

(6) 𝑋1 , … , 𝑋𝑛 i.i.d. U(𝜃- ½ , 𝜃 + ½ )


𝑥
1
𝐹𝑋 𝑥 = 𝑑𝑥 = 𝑥 − 𝜃 +
𝜃−
1 2
2

𝑛−1
𝑓𝑋 1 𝑥 = 𝑛 1 − 𝐹𝑋 𝑥 𝑓 𝑥

𝑛 −1
1 1 1
𝑖. 𝑒. 𝑓𝑋 1 𝑥 = 𝑛 𝜃−𝑥+ ,𝜃 − ≤𝑥≤𝜃+
2 2 2
0 , 𝑜/𝑤
1 𝑛−1
𝜃+
2 1 1 𝑛
𝐸𝑋 1 =𝑛 𝑥 𝜃−𝑥+ 𝑑𝑥 = 𝜃 + − .
𝜃−
1 2 2 𝑛+1
2

1 𝑛−1
𝜃+
2 2 1
𝐸𝑋 1 = 𝑥2 𝜃 − 𝑥 + 𝑑𝑥
𝜃−
1 2
2

2
1 𝑛 𝑛
= 𝜃+ + − 2𝜃 + 1
2 𝑛+2 𝑛+1
2
1
𝐸 𝑋 1 − 𝜃−2
1
𝑃 𝑋𝑛 − 𝜃− ≥ϵ ≤
2 𝜖2

1 2 1 2 1
r.h.s. = 𝜖 2 𝐸 𝑋 1 + 𝜃−2 −2 𝜃−2 𝐸 𝑋 1

2 2
1 1 𝑛 𝑛 1 1 1 𝑛
= 𝜃+ + − 2𝜃 + 1 + 𝜃− −2 𝜃− 𝜃+ −
𝜖2 2 𝑛+2 𝑛+1 2 2 2 𝑛+1
2 2
1 1 1 1 1
⟶ 2 𝜃+ + 1 − 2𝜃 + 1 + 𝜃− −2 𝜃− 𝜃− 𝑎𝑠 𝑛 → ∞
𝜖 2 2 2 2

=0
1
⟹𝑃 𝑋 𝑛 − 𝜃− ≥ ϵ ⟶ 0 𝑎𝑠 𝑛 → ∞
2

1
⟹𝑋 1 ⟶ 𝜃 − 2. ___________(1)

We can similarly prove that


1
𝑋 𝑛 ⟶ 𝜃 + 2. _________________(2)

Combining (1) & (2), we get.

𝑋 1 +𝑋 𝑛
⟶𝜃
2
𝑋 1 +𝑋 𝑛
⟹ is a consistent estimator for 𝜃
2

So,
1
𝑋 1 + is a consistent estimator for 𝜃 (from (1))
2

1
&𝑋 𝑛 − is a consistent estimator for 𝜃 (from (2)).
2

(7) X₁, …, Xₙ i.i.d. f_X(x) = { (1/2)(1 + θx), −1 < x < 1 ; 0, otherwise }

E(X) = (1/2) ∫₋₁¹ x(1 + θx) dx = θ/3

⟹ X₁, …, Xₙ are i.i.d. with E(X₁) = θ/3

By Khintchine's WLLN

(1/n) Σ₁ⁿ X_i ⟶ E(X₁)

i.e. X̄ ⟶ θ/3 ⟹ 3X̄ ⟶ θ

⟹ 3X̄ is a consistent estimator for θ.

(8) 𝑋1 , … , 𝑋𝑛 i. i. d. P (𝜃)

𝐸 𝑋𝑖 = 𝜃 ∀ 𝑖 = 1(1)𝑛

By WLLN 𝑋𝑛 ⟶ 𝜃

⟹ g(𝑋𝑛 )⟶g(𝜃)
3
⟹𝑋𝑛 3 𝑋𝑛 + 𝑋𝑛 + 12 ⟶ 𝜃 3 3 𝜃 + 𝜃 + 12

3
⟹ 𝑋𝑛 3 𝑋𝑛 + 𝑋𝑛 + 12 is a consistent estimator for 𝜃 3 3 𝜃 + 𝜃 + 12 .

(9) 𝑋1 , … , 𝑋𝑛 i. i. d. Gamma (𝛼, 𝛽)

𝛼 is known

𝐸 𝑋 = 𝛼𝛽 𝑓𝑜𝑟 𝑋 ∼ 𝐺𝑎𝑚𝑚𝑎 𝛼, 𝛽

By WLLN
𝑛
1
𝑋𝑖 ⟶ 𝐸 𝑋1
𝑛
1

𝑛
1
𝑖. 𝑒. 𝑋𝑖 ⟶ 𝛼𝛽
𝑛
1

1 𝑛
⟹𝑛𝛼 1 𝑋𝑖 ⟶ 𝛽.

1 𝑛
⟹ 𝑛𝛼 1 𝑋𝑖 is a consistent estimator for 𝛽.

𝑛
[Note: T= 1 𝑋𝑖 ∼ 𝐺𝑎𝑚𝑚𝑎 𝑛𝛼, 𝛽 can be proved using m. g. f. approach]

(10) (a) 𝑋1 , … , 𝑋𝑛 i.i.d. P (𝜃)


𝑥 𝑖 −𝑛𝜃 −1
Likelihood function L(𝜃|x)= 𝜃 𝑒 𝑥𝑖 !

𝑙 θ x = log L θ x = 𝑥𝑖 𝑙𝑜𝑔𝜃 − 𝑛𝜃 − log 𝑥𝑖 !

𝜕𝑙 𝑥𝑖 𝜕𝑙
= − 𝑛; = 0 ⟹ 𝜃ˆ = 𝑥̅
𝜕𝜃 𝜃 𝜕𝜃

𝜕2 𝑙 𝑥𝑖
2
𝜃 =𝜃ˆ = − 2 < 0
𝜕𝜃 𝜃ˆ

⟹ 𝜃ˆ𝑀𝐿𝐸 = 𝑋

(10) (b) 𝑋1 , … , 𝑋𝑛 i.i.d. with p. d. f.


𝜃−1
𝑓𝑋 𝑥 = 𝜃 𝑥 0<𝑥<1
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑛 𝜃−1

L θ x = 𝜃𝑛 𝑥𝑖
1
𝑛

𝑙 θ x = log L θ x = n log𝜃 + 𝜃 − 1 log 𝑥𝑖


1

𝜕𝑙 𝑛
= + 𝑙𝑜𝑔𝑥𝑖
𝜕𝜃 𝜃
𝜕𝑙 𝑛
= 0 ⟹ 𝜃ˆ = − 𝑛
𝜕𝜃 1 log 𝑥𝑖

𝜕2 𝑙 𝑛 𝑛
2
𝜃 =𝜃ˆ = − 2 < 0 ⟹ 𝜃ˆ𝑀𝐿𝐸 = − 𝑛 .
𝜕𝜃 𝜃ˆ 1 log 𝑥𝑖

(c) X₁, …, Xₙ i.i.d. with p.d.f.

f_X(x) = { (1/θ) e^{−x/θ}, x > 0 ; 0, otherwise }

L(θ|x) = (1/θⁿ) e^{−Σ x_i/θ}

ℓ(θ|x) = log L(θ|x) = −n log θ − (1/θ) Σ x_i

∂ℓ/∂θ = −n/θ + (1/θ²) Σ x_i

∂ℓ/∂θ = 0 ⟹ θ̂ = x̄

∂²ℓ/∂θ² |_{θ=θ̂} = ( n/θ² − 2nx̄/θ³ ) |_{θ=θ̂} = n/x̄² − 2n/x̄² = −n/x̄² < 0

⟹ θ̂_MLE = X̄.

(10)(d) X₁, …, Xₙ i.i.d. with p.d.f.

f_X(x) = (1/2) e^{−|x−θ|}, −∞ < x < ∞

L(θ|x) is maximized if Σ |x_i − θ| is minimized.

Realize that Σ |x_i − θ| is minimized w.r.t. θ at

θ̂ = median(x₁, …, xₙ)

⟹ θ̂_MLE = median(X₁, …, Xₙ)
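That the sample median minimizes S(t) = Σ |x_i − t| can be verified by a grid search on fixed toy data (the data values below are arbitrary, chosen with n odd so the minimizer is unique):

```python
# Numerical check that the sample median minimizes S(t) = sum |x_i - t|.
xs = [0.3, 1.1, 2.0, 4.7, 9.2]      # arbitrary sample, n odd
median = sorted(xs)[len(xs) // 2]   # middle order statistic

def s(t):
    return sum(abs(x - t) for x in xs)

grid = [i / 100.0 for i in range(-200, 1200)]
best = min(grid, key=s)
assert abs(best - median) < 1e-9
assert s(median) <= s(best) + 1e-12
```

For n odd, S(t) is piecewise linear with slope #(x_i < t) − #(x_i > t), which changes sign exactly at the median.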
(10)(e) X₁, …, Xₙ i.i.d. U(−θ/2, θ/2)

Likelihood function L(θ|x) = { 1/θⁿ, −θ/2 ≤ x₁, …, xₙ ≤ θ/2 ; 0, otherwise }

i.e. L(θ|x) = { 1/θⁿ, if |x_i| ≤ θ/2, i = 1(1)n ; 0, otherwise }

i.e. L(θ|x) = { 1/θⁿ, if max_i |x_i| ≤ θ/2 ; 0, otherwise }

L(θ|x) is maximized at the minimum value of θ given x

⟹ θ̂_MLE = 2 max_i |X_i|

(11) 𝑋1 , … , 𝑋𝑛 i.i.d. Exp(𝜃1 , 𝜃2 )

𝜃1 ˆ𝑀𝐿𝐸 = 𝑋(1)
𝑛
1
𝜃2 ˆ𝑀𝐿𝐸 = 𝑋𝑖 − 𝑋 1
𝑛
1

Done in class.
𝜆𝛼
(12) 𝑋1 , … , 𝑋𝑛 i.i.d. 𝑓𝑋 𝑥 = ⎾𝛼
𝑒 −𝜆𝑥 𝑥 𝛼−1 𝑥 ≥ 0
0 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒.
𝑛 𝛼−1
𝜆𝑛𝛼 𝑛
L θx = 𝑛
𝑒 −𝜆 1 𝑥 𝑖 𝑥𝑖
⎾𝛼
1

𝑙 θ x = log L θ x = n𝛼 log 𝜆 − 𝑛 log ⎾𝛼 + 𝛼 − 1 𝑙𝑜𝑔𝑥𝑖 − 𝜆 𝑥𝑖


1

Likelihood equations

𝜕 𝑙𝑜𝑔𝑙 𝑛𝛼
= − 𝑥𝑖 = 0__________(1)
𝜕𝜆 𝜆
𝜕 𝑙𝑜𝑔𝑙 ⎾𝛼′
= 𝑛𝑙𝑜𝑔𝜆 − 𝑛 + 𝑙𝑜𝑔𝑥𝑖 = 0 ________(2)
𝜕𝛼 ⎾𝛼
𝛼
(1) ⟹ 𝜆= 𝑥̅

From (2), we get


𝛼 ⎾𝛼′
N log −𝑛 + 𝑙𝑜𝑔𝑥𝑖 = 0 (*)
𝑥̅ ⎾𝛼

Solving (*) by numerical method gives 𝛼ˆ𝑀𝐿𝐸


𝛼ˆ𝑀𝐿𝐸
& 𝜆ˆ𝑀𝐿𝐸 = 𝑋
.
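A stdlib-only sketch of solving (*) numerically: ψ(α) = Γ′(α)/Γ(α) is approximated by a central difference of `math.lgamma`, and the root is located by bisection. The data, the bracket [0.1, 50], and the step h are arbitrary assumptions for illustration.

```python
import math

xs = [0.8, 1.5, 2.2, 0.9, 3.1, 1.7]   # arbitrary toy sample
n = len(xs)
xbar = sum(xs) / n
slog = sum(math.log(x) for x in xs)

def psi(a, h=1e-6):
    # digamma via numerical derivative of log-Gamma
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

def g(a):
    # left-hand side of the profile equation (*)
    return n * math.log(a / xbar) - n * psi(a) + slog

lo, hi = 0.1, 50.0        # bracket assumed to contain the root
for _ in range(200):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha_hat = (lo + hi) / 2
lambda_hat = alpha_hat / xbar
assert abs(g(alpha_hat)) < 1e-6
```

In practice a library digamma and a safeguarded root-finder would replace the hand-rolled pieces; the structure (profile out λ, solve for α) is the same.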

(13) X₁, …, Xₙ i.i.d. with p.d.f.

f(x; μ, σ) = 1/(2√3 σ), μ − √3 σ < x < μ + √3 σ;  0 otherwise.

Likelihood function

L((μ, σ)|x) = (1/(2√3 σ))ⁿ if μ − √3 σ ≤ x₍₁₎ ≤ ⋯ ≤ x₍ₙ₎ ≤ μ + √3 σ;  0 otherwise. (*)

Using condition (*),

μ − √3 σ ≤ x₍₁₎ and x₍ₙ₎ ≤ μ + √3 σ

⟹ μ ≤ x₍₁₎ + √3 σ and x₍ₙ₎ − √3 σ ≤ μ

⟹ x₍ₙ₎ − √3 σ ≤ μ ≤ x₍₁₎ + √3 σ

For a given σ, L((μ, σ)|x) is maximized w.r.t. μ iff

μ ∊ [x₍ₙ₎ − √3 σ, x₍₁₎ + √3 σ] (otherwise L((μ, σ)|x) = 0)

⟹ any value of μ in the above interval is an MLE of μ. In particular,

[(X₍ₙ₎ − √3 σ) + (X₍₁₎ + √3 σ)]/2 = (X₍ₙ₎ + X₍₁₎)/2 = μ̂(σ)_MLE

Since the above MLE does not depend on σ, it is the MLE of μ for every σ

⟹ μ̂_MLE = (X₍ₙ₎ + X₍₁₎)/2

Further, L(μ̂, σ) is maximized w.r.t. σ when σ is as small as possible. Observe that

√3 σ ≥ μ − x₍₁₎ and √3 σ ≥ x₍ₙ₎ − μ

At the MLE of μ: √3 σ ≥ (x₍ₙ₎ − x₍₁₎)/2

⟹ σ̂_MLE = (X₍ₙ₎ − X₍₁₎)/(2√3)
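The two closed-form estimators above can be checked on simulated data (a sketch; the helper name `mle_mu_sigma` and the parameter values are illustrative):

```python
import math
import random

def mle_mu_sigma(xs):
    """MLEs for U(mu - sqrt(3)*sigma, mu + sqrt(3)*sigma):
    mu_hat = (x_(1) + x_(n)) / 2,  sigma_hat = (x_(n) - x_(1)) / (2*sqrt(3))."""
    lo, hi = min(xs), max(xs)
    return (lo + hi) / 2, (hi - lo) / (2 * math.sqrt(3))

# Simulated sample with mu = 10, sigma = 2 (the interval half-width is sqrt(3)*sigma).
random.seed(2)
mu, sigma = 10.0, 2.0
half_width = math.sqrt(3) * sigma
xs = [random.uniform(mu - half_width, mu + half_width) for _ in range(100_000)]
mu_hat, sigma_hat = mle_mu_sigma(xs)
```

Because the estimators are built from sample extremes, they converge at rate 1/n, much faster than the usual 1/√n.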

(14) X₁, …, Xₙ i.i.d. U(θ − ½, θ + ½), θ ∊ ℜ

L(θ|x) = 1 if θ − ½ ≤ x₍₁₎ ≤ ⋯ ≤ x₍ₙ₎ ≤ θ + ½;  0 otherwise

L is maximized w.r.t. θ (with Max_θ L = 1) iff

θ − ½ ≤ x₍₁₎ and x₍ₙ₎ ≤ θ + ½,

i.e. x₍ₙ₎ − ½ ≤ θ ≤ x₍₁₎ + ½.

⟹ any statistic U(X̠) such that

x₍ₙ₎ − ½ ≤ u(x₁, …, xₙ) ≤ x₍₁₎ + ½ is an MLE of θ

In particular, (X₍₁₎ + X₍ₙ₎)/2 is an MLE of θ.

In general, α(X₍₁₎ + ½) + (1 − α)(X₍ₙ₎ − ½), for any 0 < α < 1, is an MLE of θ.

With α = ¾, the above estimator becomes (¾)(X₍₁₎ + ½) + (¼)(X₍ₙ₎ − ½), which is an MLE of θ.

(15) X: r.v. denoting the lifetime of a component

X ∼ exponential distribution with mean λ:

f_X(x) = (1/λ) e^(−x/λ), x > 0;  0 otherwise.

Define Yᵢ = 1 if the i-th component has life < 100 hrs;  0 otherwise.

P(Yᵢ = 1) = P(X < 100) = (1/λ) ∫₀¹⁰⁰ e^(−x/λ) dx = 1 − e^(−100/λ)

Y₁, …, Yₙ i.i.d. B(1, 1 − e^(−100/λ)) ≡ B(1, θ) (n = 10), with θ = 1 − e^(−100/λ).

⟹ θ̂_MLE = Ȳ (done in class)

Further, λ = −100 / log(1 − θ) = g(θ)

⟹ MLE of g(θ) is g(θ̂_MLE)

⟹ λ̂_MLE = −100 / log(1 − θ̂_MLE) = −100 / log(1 − Ȳ)

From the given data Ȳ = 3/10, so the maximum likelihood estimate of λ computed from the given data is −100 / log(7/10).
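Plugging the data into the invariance result above (a small numerical sketch; log(7/10) is computed with the standard library):

```python
import math

failed, n, t = 3, 10, 100.0        # 3 of the 10 components failed before t = 100 hrs
theta_hat = failed / n             # MLE of theta = P(X < 100) = 1 - exp(-100/lambda)
lam_hat = -t / math.log(1 - theta_hat)   # invariance: lambda = -100 / log(1 - theta)
# lam_hat = -100 / log(0.7), about 280.4 hours
```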

(16) X: r.v. denoting the number of sales per day

X ∼ P(μ) (from the assumptions)

Define Yᵢ = 1 if there are 0 sales on day i;  0 otherwise.

P(Yᵢ = 1) = P(X = 0) = e^(−μ);

Y₁, …, Y₃₀ i.i.d. B(1, e^(−μ)) ≡ B(1, θ) (θ = e^(−μ))

θ̂_MLE = Ȳ

Further, μ = −log θ ⟹ μ̂_MLE = −log θ̂_MLE

⟹ the maximum likelihood estimate of μ from the given data is −log(20/30).

(17) (a) X₁, …, Xₙ i.i.d. P(θ)

μ₁′ = E(X) = θ

MOME of θ: θ̂_MOME = m₁′ = X̄

(b) X₁, …, Xₙ i.i.d. U(−θ/2, θ/2)

Done in class.

(c) X₁, …, Xₙ i.i.d. Exp(0, θ)

μ₁′ = E(X) = (1/θ) ∫₀^∞ x e^(−x/θ) dx = θ

⟹ θ̂_MOME = m₁′ = X̄

(d) X₁, …, Xₙ i.i.d. Exp(α, β)

Done in class.

(e) X₁, …, Xₙ i.i.d. G(α, β) with p.d.f.

f(x; α, β) = (1/(β^α Γ(α))) e^(−x/β) x^(α−1) if x ≥ 0;  0 otherwise.

μ₁′ = E(X) = (1/(β^α Γ(α))) ∫₀^∞ x^((α+1)−1) e^(−x/β) dx = Γ(α+1) β^(α+1) / (Γ(α) β^α) = αβ

μ₂′ = E(X²) = (1/(β^α Γ(α))) ∫₀^∞ x^((α+2)−1) e^(−x/β) dx = Γ(α+2) β^(α+2) / (Γ(α) β^α) = (α+1)αβ²

Equate:  X̄ = m₁′ = αβ  and  (1/n) ΣXᵢ² = m₂′ = α(α+1)β²

⟹ m₂′/m₁′ = (α+1)β = αβ + β = m₁′ + β

⟹ β̂_MOME = (m₂′ − (m₁′)²)/m₁′ = [(1/n) Σ(Xᵢ − X̄)²] / X̄

and α̂_MOME = X̄/β̂_MOME = X̄² / [(1/n) Σ(Xᵢ − X̄)²].
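The two moment estimators in (e) can be sketched directly (illustrative only; `gamma_mome` is a name chosen here, and the data are simulated):

```python
import random

def gamma_mome(xs):
    """Method-of-moments estimators derived above:
    beta_hat = ((1/n) sum (x_i - xbar)^2) / xbar,  alpha_hat = xbar / beta_hat."""
    n = len(xs)
    xbar = sum(xs) / n
    m2_central = sum((x - xbar) ** 2 for x in xs) / n
    beta_hat = m2_central / xbar
    return xbar / beta_hat, beta_hat

# Illustrative check: G(alpha, beta) with shape alpha = 2 and scale beta = 3,
# so E(X) = 6 and Var(X) = 18 (random.gammavariate takes shape, scale).
random.seed(3)
xs = [random.gammavariate(2.0, 3.0) for _ in range(200_000)]
alpha_hat, beta_hat = gamma_mome(xs)
```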

Assignment-13

[1] The observed value of the mean of a random sample of size 20 from N(μ, 80) is 81.2. Find the equal-tail 95% and the equal-tail 99% confidence intervals for μ. Which one is shorter?

[2] Let X̄ be the mean of a random sample of size n from N(μ, 9). Find n such that, approximately, P(X̄ − 1 < μ < X̄ + 1) = 0.90.

[3] Let a random sample of size 25 from a normal distribution N(μ, σ²) yield x̄ = 4.7 and s² = (1/(n−1)) Σᵢ₌₁ⁿ (xᵢ − x̄)² = 5.76. Determine a 90% confidence interval for μ.

[4] Let X₁, …, X₉ be a random sample of size 9 from N(μ, σ²). Find the expected length of a 95% confidence interval for μ when (a) σ is known and (b) σ is unknown.

[5] Let X₁, …, Xₙ be a random sample of size n from a distribution with p.d.f.

f(x|θ) = 3x²/θ³, 0 < x < θ;  0 otherwise.

(a) Find the distribution of X₍ₙ₎/θ, where X₍ₙ₎ = max(X₁, …, Xₙ).

(b) Show that (X₍ₙ₎, α^(−1/(3n)) X₍ₙ₎) gives a 100(1 − α)% confidence interval for θ.

[6] Let X₁, …, Xₙ be a random sample of size n from U(0, θ), θ > 0. Show that (X₍ₙ₎, α^(−1/n) X₍ₙ₎) and ((1 − α)^(−1/n) X₍ₙ₎, ∞) are both 100(1 − α)% confidence intervals for θ.

[7] Let two independent random samples, each of size 5, from two normal distributions N(μ₁, σ₁²) and N(μ₂, σ₂²) be: 1.5, 2.8, 3.3, 3.9, 7.2 and 2.8, 1.8, 3.1, 6.5, 6.9 respectively.

(a) If it is known that σ₁² = σ₂² = 3.5, find a 95% confidence interval for μ₁ − μ₂.

(b) If it is known that μ₁ = μ₂ = 4, find a 95% confidence interval for σ₁²/σ₂².

Solution Key

(1) X₁, …, X₂₀ r.s. from N(μ, 80)

Confidence interval for μ:

X̄ ∼ N(μ, σ²/n), i.e. N(μ, 4)

⟹ (X̄ − μ)/2 ∼ N(0, 1)

⟹ 1 − α = P(−z(α/2) ≤ (X̄ − μ)/2 ≤ z(α/2)), where z(α/2) is such that P(Z > z(α/2)) = α/2 for Z ∼ N(0, 1)

⟹ 1 − α = P(X̄ − 2z(α/2) ≤ μ ≤ X̄ + 2z(α/2))

For 100(1 − α)% = 95%: α = 0.05, α/2 = 0.025

CI ⟶ (X̄ − 2z0.025, X̄ + 2z0.025)

Observed value of X̄ is 81.2 and z0.025 = 1.96:

(81.2 − 2 × 1.96, 81.2 + 2 × 1.96) = (77.28, 85.12) _________(1)

For 100(1 − α)% = 99%: α = 0.01, α/2 = 0.005

CI ⟶ (X̄ − 2z0.005, X̄ + 2z0.005); x̄ = 81.2, z0.005 = 2.575

(81.2 − 2 × 2.575, 81.2 + 2 × 2.575) = (76.05, 86.35) _________(2)

The 95% interval (1) is the shorter one (length 7.84 vs. 10.30).
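The arithmetic for both equal-tail intervals in (1), as a small sketch (the quantiles z0.025 = 1.96 and z0.005 = 2.575 are the table values used in the notes):

```python
xbar = 81.2
se = 2.0                                      # sqrt(sigma^2 / n) = sqrt(80 / 20)
ci95 = (xbar - se * 1.96, xbar + se * 1.96)   # (77.28, 85.12)
ci99 = (xbar - se * 2.575, xbar + se * 2.575) # (76.05, 86.35)
len95 = ci95[1] - ci95[0]                     # 7.84
len99 = ci99[1] - ci99[0]                     # 10.30 -> the 95% interval is shorter
```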

(2) X₁, …, Xₙ r.s. N(μ, 9)

X̄ ∼ N(μ, 9/n) ⟹ (X̄ − μ)/(3/√n) ∼ N(0, 1)

P(X̄ − 1 < μ < X̄ + 1) = P(−1 ≤ X̄ − μ ≤ 1)

= P(−√n/3 ≤ (X̄ − μ)/(3/√n) ≤ √n/3)

= Φ(√n/3) − Φ(−√n/3) = 2Φ(√n/3) − 1 = 0.90 (given condition)

⟹ Φ(√n/3) = 0.95 = Φ(1.645)

⟹ √n = 3 × 1.645 ⟹ n = (3 × 1.645)² ≈ 24.35, so take n = 25.
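The sample-size computation as a one-liner sketch (z0.05 = 1.645 is the table value satisfying Φ(1.645) = 0.95):

```python
import math

z = 1.645                       # Phi(1.645) = 0.95
n = math.ceil((3 * z) ** 2)     # need sqrt(n)/3 >= 1.645, i.e. n >= 24.35 -> n = 25
```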
(3) X₁, …, X₂₅ r.s. N(μ, σ²)

X̄ ∼ N(μ, σ²/n) and (n − 1)s²/σ² ∼ χ²(n−1), independent

⟹ (X̄ − μ)/(s/√n) ∼ t(n−1)

P(−t(α/2; n−1) ≤ (X̄ − μ)/(s/√n) ≤ t(α/2; n−1)) = 1 − α

For 1 − α = 0.90: α = 0.1, t(α/2; n−1) = t0.05,24 = 1.711

⟹ P(X̄ − (s/√n) t0.05,24 ≤ μ ≤ X̄ + (s/√n) t0.05,24) = 0.90

The 100(1 − α)% CI with observed x̄ = 4.7, s² = 5.76 is

(4.7 − √(5.76/25) × 1.711, 4.7 + √(5.76/25) × 1.711) = (4.7 − 0.48 × 1.711, 4.7 + 0.48 × 1.711) ≈ (3.88, 5.52).
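The computed 90% t-interval for (3), as a sketch (t0.05,24 = 1.711 from tables, as in the notes):

```python
import math

xbar, s2, n, t = 4.7, 5.76, 25, 1.711
half = t * math.sqrt(s2 / n)      # 1.711 * 2.4 / 5 = 0.82128
ci = (xbar - half, xbar + half)   # about (3.88, 5.52)
```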
2
(4) X₁, …, X₉ r.s. N(μ, σ²)

(a) CI for μ when σ² is known, at the 95% level:

P(−z(α/2) ≤ (X̄ − μ)/(σ/3) ≤ z(α/2)) = 0.95 with α = 0.05

= P(X̄ − (σ/3) × 1.96 ≤ μ ≤ X̄ + (σ/3) × 1.96) = 0.95 [z0.025 = 1.96]

CI: (X̄ − (σ/3) × 1.96, X̄ + (σ/3) × 1.96)

Length L = 2 × (σ/3) × 1.96, so E(L) = 2 × (σ/3) × 1.96 ≈ 1.31σ

(b) σ² unknown case: CI based on

P(−t(α/2; n−1) ≤ (X̄ − μ)/(s/√n) ≤ t(α/2; n−1)) = 0.95, t0.025,8 = 2.306

CI: (X̄ − (s/√n) t0.025,8, X̄ + (s/√n) t0.025,8)

Length L = 2 × (s/√n) × 2.306, so E(L) = (2 × 2.306/√n) E(S)

Using (n − 1)S²/σ² ∼ χ²(n−1), let Y = (n − 1)S²/σ². Then

E(√Y) = (√(n−1)/σ) E(S) = [1/(2^((n−1)/2) Γ((n−1)/2))] ∫₀^∞ y^(1/2) y^((n−1)/2 − 1) e^(−y/2) dy

= [1/(2^((n−1)/2) Γ((n−1)/2))] ∫₀^∞ y^(n/2 − 1) e^(−y/2) dy = Γ(n/2) 2^(n/2) / (2^((n−1)/2) Γ((n−1)/2)) = √2 Γ(n/2)/Γ((n−1)/2)

⟹ E(S) = (σ/√(n−1)) · √2 Γ(n/2)/Γ((n−1)/2)

E(L) = (2 × 2.306/√n) · (σ/√(n−1)) · √2 Γ(n/2)/Γ((n−1)/2) = (2 × 2.306/3) · (σ/2) · Γ(4.5)/Γ(4) ≈ 1.49σ.
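Evaluating both expected lengths for n = 9 (a numerical sketch; both are proportional to σ, so σ = 1 is used for illustration):

```python
import math

n, sigma = 9, 1.0
el_known = 2 * (sigma / 3) * 1.96            # = 1.3067 sigma
# E(S) = sigma * sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)
es = sigma * math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)
el_unknown = (2 * 2.306 / math.sqrt(n)) * es # about 1.49 sigma
```

Note that the expected length is larger when σ must be estimated, as one would expect.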
(5) X₁, …, Xₙ r.s. from f(x|θ) = 3x²/θ³, 0 < x < θ

(a) X₍ₙ₎ = max(X₁, …, Xₙ)

F_X(x) = ∫₀ˣ (3y²/θ³) dy = x³/θ³, 0 < x < θ

f(X₍ₙ₎)(x) = n [F_X(x)]^(n−1) f_X(x) = n (x³/θ³)^(n−1) · 3x²/θ³

i.e. f(X₍ₙ₎)(x) = (3n/θ^(3n)) x^(3n−1), 0 < x < θ

Y = X₍ₙ₎/θ:  f_Y(y) = (3n/θ^(3n)) (yθ)^(3n−1) θ = 3n y^(3n−1), 0 < y < 1

(b) P(X₍ₙ₎ ≤ θ ≤ α^(−1/(3n)) X₍ₙ₎) = P(α^(1/(3n)) ≤ X₍ₙ₎/θ ≤ 1) = P(α^(1/(3n)) ≤ Y ≤ 1)

= ∫ over (α^(1/(3n)), 1) of 3n y^(3n−1) dy = 1 − (α^(1/(3n)))^(3n) = 1 − α

⟹ (X₍ₙ₎, α^(−1/(3n)) X₍ₙ₎) provides a 100(1 − α)% CI for θ.
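A Monte Carlo check of the coverage claim in (b), as a sketch (illustrative parameter values; X is drawn via the inverse c.d.f. F(x) = (x/θ)³, so X = θU^(1/3) for U ∼ U(0,1)):

```python
import random

def ci_covers(theta, n, alpha):
    """One replication: does (x_(n), alpha**(-1/(3n)) * x_(n)) cover theta?"""
    xn = max(theta * random.random() ** (1 / 3) for _ in range(n))
    return xn <= theta <= alpha ** (-1 / (3 * n)) * xn

random.seed(4)
alpha, n, reps = 0.10, 5, 20_000
coverage = sum(ci_covers(2.0, n, alpha) for _ in range(reps)) / reps
# coverage should be close to 1 - alpha = 0.90
```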
(6) X₁, …, Xₙ r.s. U(0, θ)

f(X₍ₙ₎)(x) = (n/θⁿ) x^(n−1), 0 < x < θ

Y = X₍ₙ₎/θ:  f_Y(y) = n y^(n−1), 0 < y < 1

P(X₍ₙ₎ ≤ θ ≤ α^(−1/n) X₍ₙ₎) = P(1 ≤ θ/X₍ₙ₎ ≤ α^(−1/n)) = P(α^(1/n) ≤ X₍ₙ₎/θ ≤ 1)

= ∫ over (α^(1/n), 1) of n y^(n−1) dy = 1 − α

⟹ (X₍ₙ₎, α^(−1/n) X₍ₙ₎) is a 100(1 − α)% CI for θ

Also, P((1 − α)^(−1/n) X₍ₙ₎ ≤ θ < ∞) = P((1 − α)^(−1/n) ≤ θ/X₍ₙ₎ < ∞)

= P(0 < X₍ₙ₎/θ ≤ (1 − α)^(1/n)) = ((1 − α)^(1/n))ⁿ = 1 − α

⟹ ((1 − α)^(−1/n) X₍ₙ₎, ∞) is also a 100(1 − α)% CI for θ.
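Both U(0, θ) intervals can likewise be checked by simulation (a sketch with illustrative parameter values):

```python
import random

random.seed(5)
alpha, n, theta, reps = 0.10, 5, 3.0, 20_000
hits_two_sided = hits_one_sided = 0
for _ in range(reps):
    xn = max(random.uniform(0, theta) for _ in range(n))
    hits_two_sided += xn <= theta <= alpha ** (-1 / n) * xn
    hits_one_sided += (1 - alpha) ** (-1 / n) * xn <= theta
cov_two = hits_two_sided / reps   # ~ 0.90
cov_one = hits_one_sided / reps   # ~ 0.90
```

Both empirical coverages should land near 1 − α = 0.90, matching the exact calculation above.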
(7) X₁, …, X₅ i.i.d. N(μ₁, σ₁²) — values: 1.5, 2.8, 3.3, 3.9, 7.2

Y₁, …, Y₅ i.i.d. N(μ₂, σ₂²) — values: 2.8, 1.8, 3.1, 6.5, 6.9; the two samples independent

(a) σ₁² = σ₂² = 3.5 = σ² (say)

X̄ ∼ N(μ₁, σ²/5) and Ȳ ∼ N(μ₂, σ²/5), independent

⟹ [(X̄ − Ȳ) − (μ₁ − μ₂)] / √(2σ²/5) ∼ N(0, 1)

100(1 − α)% CI for μ₁ − μ₂ at α = 0.05:

(X̄ − Ȳ − z0.025 √(2 × 3.5/5), X̄ − Ȳ + z0.025 √(2 × 3.5/5))

Use z0.025 = 1.96 and the computed x̄ and ȳ to get the computed CI.
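The interval in (a) can be computed directly from the given samples (a small sketch; z0.025 = 1.96 as in the notes):

```python
import math

x = [1.5, 2.8, 3.3, 3.9, 7.2]
y = [2.8, 1.8, 3.1, 6.5, 6.9]
xbar, ybar = sum(x) / len(x), sum(y) / len(y)  # 3.74 and 4.22
half = 1.96 * math.sqrt(2 * 3.5 / 5)           # z0.025 * sqrt(2 sigma^2 / n)
ci = (xbar - ybar - half, xbar - ybar + half)  # about (-2.80, 1.84)
```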


(b) μ₁ = μ₂ = 4 = μ (say); μ known.

√n (X̄ − μ)/σ₁ ∼ N(0, 1) ⟹ n(X̄ − μ)²/σ₁² ∼ χ²₁

√n (Ȳ − μ)/σ₂ ∼ N(0, 1) ⟹ n(Ȳ − μ)²/σ₂² ∼ χ²₁, independent

⟹ [n(X̄ − μ)²/σ₁²] / [n(Ȳ − μ)²/σ₂²] = (σ₂²/σ₁²) · (X̄ − μ)²/(Ȳ − μ)² ∼ F1,1

⟹ P(F1,1;1−α/2 ≤ (σ₂²/σ₁²) (X̄ − μ)²/(Ȳ − μ)² ≤ F1,1;α/2) = 1 − α

⟹ P([(X̄ − μ)²/(Ȳ − μ)²] · 1/F1,1;α/2 ≤ σ₁²/σ₂² ≤ [(X̄ − μ)²/(Ȳ − μ)²] · 1/F1,1;1−α/2) = 1 − α

100(1 − α)% CI for σ₁²/σ₂² is

([(X̄ − μ)²/(Ȳ − μ)²] · 1/F1,1;α/2, [(X̄ − μ)²/(Ȳ − μ)²] · 1/F1,1;1−α/2)

Using the computed x̄, ȳ and the table values F1,1;α/2, F1,1;1−α/2 at α = 0.05 gives the computed CI.
