Problem Solutions on Probability and Statistics
Problem Set-1
[1] A coin is tossed until, for the first time, the same result appears twice in succession.
To an outcome requiring n tosses, assign the probability 2^(-n). Describe the sample space. Evaluate the
probability of the following events:
[2] Three tickets are drawn randomly without replacement from a set of tickets numbered 1 to 100. Show
that the probability that the numbers on the selected tickets are in (i) arithmetic progression is 1/66 and (ii)
geometric progression is 105/C(100, 3).
[3] Three players A, B and C play a series of games, none of which can be drawn, and their probabilities of
winning any game are equal. The winner of each game scores 1 point and the series is won by the player
who first scores 4 points. Of the first three games, A won 2 games and B won 1 game. Find the
probability that C will win the series.
[4] A point P is randomly placed in a square with side of 1 cm. Find the probability that the distance from
P to the nearest side does not exceed x cm.
[5] Let there be n people in a room and let p denote the probability that there are no common birthdays. Find
an approximate value of p for n = 10.
[6] Suppose a lift has 3 occupants A, B and C and there are three possible floors (1, 2 and 3) on which
they can get out. Assuming that each person acts independently of the others and that each person has an
equally likely chance of getting off at each floor, calculate the probability that exactly one person will get
out on each floor.
[7] If n men, among whom are A and B, stand in a row, what is the probability that there will be exactly r
men between A and B ?
[8] In a town of n + 1 inhabitants, a person tells a rumor to a second person, who in turn tells it to a third
person, and so on. At each step the recipient of the rumor is chosen at random from the n people
available. Find the probability that the rumor will be told r times (a) without returning to the originator, (b) without being repeated to any person.
Do the same problem when at each step the rumor is told to a gathering of N randomly chosen people.
[9] 2 points are taken at random and independently of each other on a line segment of length m. Find the
probability that the distance between 2 points is less than m/3.
[10] n points are taken at random and independently of one another inside a sphere of radius R. What is
the probability that the distance from the centre of the sphere to the nearest point is not less than r ?
[11] A car is parked among N cars in a row, not at either end. On his return, the owner finds that exactly r
of the N places are still occupied. What is the probability that both neighboring places are empty?
[12] 3 points X, Y, Z are taken at random and independently of each other on a line segment AB. What is
the probability that Y will be between X and Z?
[13] The coefficients of the equation ax^2 + bx + c = 0 are determined by throwing an ordinary die once for each coefficient. Find
the probability that the resulting equation will have real roots.
[14] Let Ω = {1, 2, 3, 4}. Check whether any of the following is a σ-field of subsets of Ω:
ℱ1 = {φ, {1, 2}, {3, 4}}
ℱ3 = {φ, Ω, {1}, {2}, {1, 2}, {3, 4}, {2, 3, 4}, {1, 3, 4}}
[15] Prove that if ℱ1 and ℱ2 are σ-fields of subsets of Ω, then ℱ1 ∩ ℱ2 is also a σ-field. Give a counterexample
to show that the similar result for the union of σ-fields does not hold.
[16] Let ℱ be a σ-field of subsets of the sample space Ω and let A ∈ ℱ be fixed. Show that ℱ_A = {C: C = A ∩ B, B ∈ ℱ} is a σ-field of subsets of A.
Solution Set-1
1) The sample space consists of HH, TT, THH, HTT, THTT, HTHH, …: for each n ≥ 2 there are exactly two outcomes requiring n tosses (the first n − 1 results alternate and the nth repeats the (n−1)th), each with probability 2^(-n).
P(HH) = P(TT) = 1/4, P(HTT) = P(THH) = 1/2^3, and so on.
a) A: the experiment ends before the sixth toss.
P(A) = Σ_{i=2}^{5} P(exp. ends in i tosses) = 2 × 1/2^2 + 2 × 1/2^3 + 2 × 1/2^4 + 2 × 1/2^5 = 15/16
b) B: an even number of tosses is required.
P(B) = 2 Σ_{i=1}^{∞} 1/2^{2i} = 2/3
P(A^c ∩ B) = 2 Σ_{i=3}^{∞} 1/2^{2i} = 1/24
2) Total # of cases: C(100, 3).
(i) Numbers in AP: the common difference d can be 1, 2, …, 49, and for difference d there are 100 − 2d triples.
Total = Σ_{d=1}^{49} (100 − 2d) = 49 × 50 = 2450
Reqd. prob. = 2450/C(100, 3) = 1/66
(ii) Numbers in GP.
Case 1: common ratio an integer r ≥ 2; the triples are (a, ar, ar^2) with ar^2 ≤ 100:
r = 2 → 25, r = 3 → 11, r = 4 → 6, r = 5 → 4, r = 6 → 2, r = 7 → 2, r = 8 → (1, 8, 64) → 1, r = 9 → (1, 9, 81) → 1, r = 10 → 1
Total: 53
Case 2: common ratio fractional, p/q in lowest terms with p > q ≥ 2; the triples are (kq^2, kpq, kp^2) with kp^2 ≤ 100:
q = 2: ratio 3/2 → 11; 5/2 → (4, 10, 25), (8, 20, 50), (12, 30, 75), (16, 40, 100) → 4; 7/2 → (4, 14, 49), (8, 28, 98) → 2; 9/2 → (4, 18, 81) → 1
q = 3: ratios 4/3, 5/3, 7/3, 8/3, 10/3 → 6 + 4 + 2 + 1 + 1
q = 4: ratios 5/4, 7/4, 9/4 → 4 + 2 + 1
q = 5: ratios 6/5, 7/5, 8/5, 9/5 → 2 + 2 + 1 + 1
q = 6: ratio 7/6 → 2
q = 7: ratios 8/7, 9/7, 10/7 → 1 + 1 + 1
q = 8: ratio 9/8 → 1
q = 9: ratio 10/9 → 1
Total: 52
⟹ reqd. prob. = (53 + 52)/C(100, 3) = 105/161700
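The two counts can be checked by brute force (a sketch, not part of the original solution): a triple a < b < c of ticket numbers is an AP iff a + c = 2b, and a GP iff b·b = a·c.

```python
from itertools import combinations
from math import comb

# Count AP and GP triples among all C(100, 3) equally likely draws.
ap = gp = 0
for a, b, c in combinations(range(1, 101), 3):
    if a + c == 2 * b:          # arithmetic progression
        ap += 1
    if b * b == a * c:          # geometric progression
        gp += 1

total = comb(100, 3)
print(ap, gp, total)            # 2450 105 161700
print(ap * 66 == total)         # AP probability is exactly 1/66 -> True
```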
3) After the first three games the scores are A: 2, B: 1, C: 0; the winner needs 4 points, so C must collect 4 wins while A wins at most 1 more game and B at most 2 more. Each further game is won by each player with probability 1/3.
C wins the next four games: prob. (1/3)^4 = 27/2187 ___(i)
C wins 3 of the next 4 and the 5th: the lost game can occupy 4 positions and go to A or B: prob. 4 × 2 × (1/3)^5 = 72/2187 ___(ii)
C wins 3 of the next 5 and the 6th; of the 2 lost games A wins at most 1 (assignments AB, BA, BB): prob. C(5, 2) × 3 × (1/3)^6 = 90/2187 ___(iii)
C wins 3 of the next 6 and the 7th; the 3 lost games must split as A: 1, B: 2 (A: 2 or B: 3 would end the series earlier): prob. C(6, 3) × 3 × (1/3)^7 = 60/2187 ___(iv)
⟹ P(C wins the series) = (27 + 72 + 90 + 60)/2187 = 249/2187 = 83/729
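The value 83/729 ≈ 0.1139 can be checked by simulation (a sketch; the seed and trial count are arbitrary choices):

```python
import random

# Simulate the remainder of the series from the 2-1-0 standing and
# estimate the probability that C reaches 4 points first.
random.seed(1)
trials = 200_000
c_wins = 0
for _ in range(trials):
    score = {"A": 2, "B": 1, "C": 0}
    while max(score.values()) < 4:
        score[random.choice("ABC")] += 1   # each player equally skilled
    if score["C"] == 4:
        c_wins += 1

est = c_wins / trials
print(round(est, 4), round(83 / 729, 4))
```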
(4) The point P must lie in the border strip of width x, so that the distance from P to the nearest side does not exceed x cm.
If x ≥ 1/2, then prob. = 1.
If 0 < x < 1/2, then the area of the shaded region, and hence the required probability, is 1 − (1 − 2x)^2.
(5) log_e p = Σ_{k=1}^{n−1} log_e(1 − k/365) ≈ − Σ_{k=1}^{n−1} k/365 = − n(n − 1)/(2 × 365)
For n = 10: log_e p ≈ − (1/365) × (10 × 9)/2 ≈ −0.123
⟹ p ≈ 0.88
(6) Total # of outcomes: 3^3 = 27. Favorable # of outcomes (one person per floor): 3! = 6.
Reqd. prob. = 6/27 = 2/9
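The 2/9 answer admits an exhaustive check (a sketch added here): enumerate all 3^3 equally likely choices of floors.

```python
from itertools import product

# Each of the 3 occupants independently picks one of 3 floors.
outcomes = list(product((1, 2, 3), repeat=3))
favorable = sum(1 for o in outcomes if len(set(o)) == 3)
print(favorable, len(outcomes))   # 6 27
```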
(7) # of possible position pairs for A & B such that there are exactly r positions between them
= (n − r − 1), namely {1, r + 2}, {2, r + 3}, …, {n − r − 1, n}, and A, B can be ordered among themselves in 2! ways.
Further, the r persons standing between A & B can be chosen in C(n − 2, r) ways and permuted in r! ways, and the remaining (n − r − 2) men can be permuted in (n − r − 2)! ways.
Favorable # of cases = 2! × (n − r − 1) × C(n − 2, r) × r! × (n − r − 2)! = 2(n − r − 1)(n − 2)!
⟹ reqd. prob. = 2(n − r − 1)(n − 2)!/n! = 2(n − r − 1)/(n(n − 1))
8) Total # of cases: n^r (at each of the r tellings the recipient is one of n people).
(a) Without returning to the originator:
originator → n ways; each of the following r − 1 recipients must avoid the originator → (n − 1) ways each, giving n(n − 1)^{r−1} favorable cases.
Reqd. prob. = n(n − 1)^{r−1}/n^r = ((n − 1)/n)^{r−1}
(b) Without being repeated to any person:
1st recipient → n ways, 2nd → (n − 1), 3rd → (n − 2), …, rth → (n − r + 1).
Reqd. prob. = n(n − 1)⋯(n − r + 1)/n^r
Second part (gatherings of N people): total # of cases C(n, N)^r.
Cases favorable to the first event: C(n, N) × C(n − 1, N)^{r−1}
Reqd. prob. = C(n, N) C(n − 1, N)^{r−1}/C(n, N)^r = [C(n − 1, N)/C(n, N)]^{r−1}
9) Let the distances of the 2 randomly chosen points from a fixed endpoint A of the segment be x & y, each uniform on (0, m).
Reqd. condition: |x − y| < m/3, i.e. −m/3 < x − y < m/3.
Inside the square bounded by the x-axis, the y-axis, x = m and y = m, the region favorable to |x − y| < m/3 is the strip OABCDE around the diagonal.
Area of OABCDE = m^2 − 2 × (1/2)(2m/3)^2 = m^2 − (4/9)m^2
⟹ reqd. prob. = (m^2 − (4/9)m^2)/m^2 = 5/9
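A quick Monte Carlo check of the 5/9 answer (a sketch; it takes m = 1, since the probability does not depend on m, and the seed and trial count are arbitrary):

```python
import random

# Estimate P(|x - y| < 1/3) for two independent uniform points on (0, 1).
random.seed(1)
trials = 200_000
hits = sum(1 for _ in range(trials)
           if abs(random.random() - random.random()) < 1 / 3)
est = hits / trials
print(round(est, 3), round(5 / 9, 3))
```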
10) Each of the n points must lie on or outside a sphere of radius r having the same centre as the original sphere of radius R.
P(a single point lies on or outside the smaller sphere) = 1 − r^3/R^3
As the points are taken independently, the reqd. prob. = (1 − r^3/R^3)^n.
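A simulation sketch of the formula for one parameter choice (R = 1, r = 0.5, n = 3; points uniform in the ball are drawn by rejection from the enclosing cube — all of these are arbitrary choices for the check):

```python
import random

random.seed(1)

def point_in_ball():
    # Rejection sampling: uniform in the cube until inside the unit ball.
    while True:
        x, y, z = (random.uniform(-1, 1) for _ in range(3))
        if x * x + y * y + z * z <= 1:
            return x, y, z

r, n, trials = 0.5, 3, 100_000
hits = 0
for _ in range(trials):
    if all(sum(c * c for c in point_in_ball()) >= r * r for _ in range(n)):
        hits += 1

est = hits / trials
exact = (1 - r ** 3) ** n        # (7/8)^3
print(round(est, 3), round(exact, 3))
```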
11) Besides the owner's car, the r − 1 remaining cars occupy r − 1 of the other N − 1 places, and all C(N − 1, r − 1) configurations are equally likely.
Favorable cases: the two places neighboring the owner's car are empty, so the r − 1 cars are confined to the remaining N − 3 places: C(N − 3, r − 1) cases.
Reqd. prob. = C(N − 3, r − 1)/C(N − 1, r − 1) = (N − r)(N − r − 1)/((N − 1)(N − 2))
(iii) ℱ3 contains Ω and is closed under complementation and union ⟹ ℱ3 is a σ-field.
15) Ω ∈ ℱ1, ℱ2 ⟹ Ω ∈ ℱ1 ∩ ℱ2 ______(i)
If A ∈ ℱ1 ∩ ℱ2, then A ∈ ℱ1 and A ∈ ℱ2 ⟹ A^c ∈ ℱ1 and A^c ∈ ℱ2
⟹ A^c ∈ ℱ1 ∩ ℱ2 ______(ii)
If A1, A2, … ∈ ℱ1 ∩ ℱ2, then
A1, A2, … ∈ ℱ1 ⟹ ∪A_i ∈ ℱ1, and A1, A2, … ∈ ℱ2 ⟹ ∪A_i ∈ ℱ2
⟹ ∪A_i ∈ ℱ1 ∩ ℱ2 _________(iii)
(i)–(iii) ⟹ ℱ1 ∩ ℱ2 is a σ-field.
Counterexample for the union: Ω = {1, 2, 3},
ℱ1 = {φ, Ω, {1}, {2, 3}} → σ-field
ℱ2 = {φ, Ω, {2}, {1, 3}} → σ-field
but {1}, {2} ∈ ℱ1 ∪ ℱ2 while {1} ∪ {2} = {1, 2} ∉ ℱ1 ∪ ℱ2, so ℱ1 ∪ ℱ2 is not a σ-field.
16) (i) A = A ∩ Ω ∈ ℱ_A.
(ii) For C = A ∩ B ∈ ℱ_A, the complement of C w.r.t. A is
C^c_A = A − C = A − A ∩ B = A ∩ B^c ∈ ℱ_A (as B^c ∈ ℱ)
(iii) Let C1, C2, … ∈ ℱ_A; then C_i = A ∩ B_i, i = 1, 2, …, for some B_i ∈ ℱ, and
∪_i C_i = ∪_i (A ∩ B_i) = A ∩ (∪_i B_i) ∈ ℱ_A, as ∪_i B_i ∈ ℱ
⟹ ℱ_A is a σ-field of subsets of A.
Problem Set-2
[1] Let Ω = {0, 1, 2, …} and, for A ⊆ Ω,
(a) P(A) = Σ_{x∈A} e^{−λ} λ^x/x!, λ > 0.
Verify whether P(.) is a probability measure. In case your answer is in the affirmative, determine P(E), P(F), P(G), P(E∩F), P(E∪F), P(F∪G),
P(E∩G) and P(F∩G), where E = {x ∈ Ω : x > 2},
[2] Let Ω = ℜ. In each of the following cases determine whether P(.) is a probability measure. For an interval I,
(a) P(I) = ∫_I (1/2) e^{−|x|} dx
(b) P(I) = 0 if I ⊂ (−∞, 0); ∫_I 2x e^{−x^2} dx if I ⊂ (0, ∞)
(c) P(I) = 1 if the length of I is finite; 0 otherwise
[3] Show that the probability of exactly one of the events A or B occurring is P(A) + P(B) − 2P(A∩B).
[4] Show that P(A) = P(A ∪ B^c) − P(A^c ∩ B^c).
[5] For events A1, A2, …, An show that P(∪_{i=1}^n A_i) = Σ_{i=1}^n P(A_i) − Σ_{1≤i1<i2≤n} P(A_{i1} ∩ A_{i2}) + Σ_{1≤i1<i2<i3≤n} P(A_{i1} ∩ A_{i2} ∩ A_{i3}) −
⋯ + (−1)^{n−1} P(∩_{i=1}^n A_i).
[6] Consider the sample space Ω = {0, 1, 2, …} and ℱ the σ-field of subsets of Ω. To the elementary event
{j} assign the probability
P({j}) = C 2^j/j!, j = 0, 1, 2, …
Evaluate P(A), P(B), P(C), P(A∩B), P(A∩C), P(C∩B), P(A∩B∩C) and verify the formula for P(A∪B∪C).
[7] Each packet of a certain cereal contains a small plastic model of one of five different dinosaurs; a
given packet is equally likely to contain any one of the five dinosaurs. Find the probability that someone
buying six packets of the cereal will acquire models of his three favorite dinosaurs.
[8] Suppose n cards numbered 1, 2, …, n are laid out at random in a row. Let A_i denote the event that
'card i appears in the ith position of the row', which is termed a match. What is the probability of
obtaining at least one match?
[9] A man addresses n envelopes and writes n cheques for payment of n bills.
(a) If the n bills are placed at random in the n envelopes, what would be the probability that each bill
would be placed in the wrong envelope?
(b) If the n bills and n cheques are placed at random in the n envelopes, one bill and one cheque in each
envelope, what would be the probability that in no instance would the enclosures be completely correct?
[10] For events A, B and C such that P(C) > 0, prove that
[11] Let A and B be two events such that 0 < P(A) < 1. Which of the following statements are true?
(a) P(A|B) + P(𝐴𝐶 |𝐵)= 1; (b) P(A|B) + P(A|𝐵𝐶 )= 1; (c) P(A|B)+ P(𝐴𝐶 | 𝐵𝐶 )= 1
[12] Consider two events A and B such that P(A) = 1/4, P(B|A) = 1/2 and P(A|B) = 1/4. Verify whether the following statements are true: (a) A and B are mutually exclusive; (b) A ⊂ B; (c) P(A^c|B^c) = 3/4.
[13] Consider an urn in which 4 balls have been placed by the following scheme. A fair coin is tossed, if
the coin comes up heads, a white ball is placed in the urn otherwise a red ball is placed in the urn.
(a) What is the probability that the urn will contain exactly 3 white balls?
(b) What is the probability that the urn will contain exactly 3 white balls, given that the first ball placed in
the urn was white?
[14] A random experiment has three possible outcomes, A, B and C, with probabilities p_A, p_B and p_C.
What is the probability that, in independent performances of the experiment, A will occur before B?
[15] A system composed of n separate components is said to be a parallel system if it functions when at
least one of the components functions. For such a system, if component i, independently of the other
components, functions with probability p_i, i = 1(1)n, what is the probability that the system functions?
[16] A student has to sit for an examination consisting of 3 questions selected randomly from a list of 100
questions. To pass, the student needs to answer all three questions correctly. What is the probability
that the student will pass the examination if he correctly remembers the answers to 90 questions on the list?
[17] A person has three coins in his pocket, two fair coins (heads and tails are equally likely) but the third
one biased with probability of heads 2/3. One coin selected at random drops on the floor, landing heads
up. How likely is it that it is one of the fair coins?
[18] A slip of paper is given to A, who marks it with either a + or a − sign, with probability 1/3 of writing
a + sign. A passes the slip to B, who may either leave it unchanged or change the sign before passing it to
C. C in turn passes the slip to D after perhaps changing the sign; finally D passes it to a referee after
perhaps changing the sign. It is further known that B, C and D each change the sign with probability 2/3.
Find the probability that A originally wrote a +, given that the referee sees a + sign on the slip.
[19] Each of three boxes A, B, and C, identical in appearance, has two drawers. Box A contains a gold
coin in each drawer, box B contains a silver coin in each drawer and box C contains a gold coin in one
drawer and a silver coin in the other. A box is chosen at random and one of its drawers is then chosen at
random and opened, and a gold coin is found. What is the probability that the other drawer of this box
contains a silver coin?
[20] Each of four persons fires one shot at a target. Let 𝐶𝑘 denote the event that the target is hit by person
k, k= 1, 2, 3, 4. If the events 𝐶1 , 𝐶2 , 𝐶3 , 𝐶4 are independent and if P(𝐶1 )= P(𝐶2 )= 0.7,
P(𝐶3 )= 0.9 and P(𝐶4 )= 0.4, compute the probability that : (a) all of them hit the target; (b) no one hits the
target; (c) exactly one hits the target; (d) at least one hits the target.
[21] Let A1, A2, …, An be n independent events. Show that P(∩_{i=1}^n A_i^c) ≤ exp(− Σ_{i=1}^n P(A_i)).
[22] Give a counterexample to show that pairwise independence of a set of events A1, A2, …, An does
not imply mutual independence.
[23] We say that B carries negative information about event A if P(A|B) < P(A). Let A, B and C be three
events such that B carries negative information about A and C carries negative information about B. Is it
true that C carries negative information about A? Prove your assertion.
[24] Suppose in a class there are 5 boys and 3 girl students. A list of 4 students, to be interviewed, is made
by choosing 4 students at random from this class. If the first student selected at random from the list, for
interview, is a girl, then find the conditional probability of selecting a boy next from among the remaining
3 students in the list.
[25] During the course of an experiment with a particular brand of a disinfectant on flies, it is found that
80% are killed in the first application. Those which survive develop a resistance, so that the percentage of
survivors killed in any later application is half of that in the preceding application. Find the probability
that (a) a fly will survive 4 applications; (b) it will survive 4 applications, given that it has survived the 1st
one.
[26] An art dealer receives a group of 5 old paintings and, on the basis of past experience, he thinks that
the probabilities are 0.76, 0.09, 0.02, 0.01, 0.02 and 0.10 that 0, 1, 2, 3, 4 or all 5 of them, respectively,
are forgeries. The art dealer sends one painting, chosen at random out of the 5, for authentication. If this
painting turns out to be a forgery, then what probability should he now assign to the possibility that the
other 4 are also forgeries?
Solution Key
(1)(a) P(A) = Σ_{x∈A} e^{−λ} λ^x/x!, λ > 0.
P(A) ≥ 0 obviously ____(i); P(Ω) = Σ_{x∈Ω} e^{−λ} λ^x/x! = e^{−λ} Σ_{x=0}^∞ λ^x/x! = 1 _______(ii)
For disjoint A1, A2, …: P(∪_i A_i) = Σ_i Σ_{x∈A_i} P({x}) = Σ_i P(A_i) ____(iii)
⟹ P is a probability measure.
(b) Similar to (a).
(c) P(A) ≥ 0, but additivity fails: P(∪C_i) = 0 ≠ Σ P(C_i) = 1 for suitably chosen disjoint C_i ⟹ P is not a probability measure.
2nd part (with E = {x > 2}, F = {1, 2}):
(a) P(E) = Σ_{x=3}^∞ e^{−λ} λ^x/x! = 1 − e^{−λ}(1 + λ + λ^2/2!)
P(F) = Σ_{x=1}^2 e^{−λ} λ^x/x! = e^{−λ}(λ + λ^2/2!)
P(E ∪ F) = Σ_{x=1}^∞ e^{−λ} λ^x/x! = 1 − e^{−λ} λ^0/0! = 1 − e^{−λ}
and similarly for the others.
(2) Ω = ℝ.
(a) P(I) = ∫_I (1/2) e^{−|x|} dx ≥ 0 ∀ I
P(Ω) = (1/2) ∫_{−∞}^∞ e^{−|x|} dx = (1/2)[∫_{−∞}^0 e^x dx + ∫_0^∞ e^{−x} dx] = 1
I1 ∩ I2 = φ: P(I1 ∪ I2) = ∫_{I1} + ∫_{I2} = P(I1) + P(I2), extended to countable unions ⟹ P is a probability measure.
(b) Similar to (a).
(3) P(exactly one of A or B)
= P((A ∩ B^c) ∪ (A^c ∩ B)) = P(A ∩ B^c) + P(A^c ∩ B) = P(A) + P(B) − 2P(A ∩ B)
[using P(A) = P(AB) + P(AB^c) and P(B) = P(AB) + P(A^c B).]
(5) For n = 2:
P(A1 ∪ A2) = P(A1 ∪ A1^c A2) = P(A1) + P(A1^c A2) = P(A1) + [P(A2) − P(A1A2)] ___true for n = 2
Proof by induction: suppose the formula holds for m events. Then
P(∪_{k=1}^{m+1} A_k) = P((∪_{k=1}^m A_k) ∪ A_{m+1}) = P(∪_{k=1}^m A_k) + P(A_{m+1}) − P(∪_{k=1}^m (A_k ∩ A_{m+1})) ___(1)
Applying the induction hypothesis to both unions:
P(∪_{k=1}^m A_k) = Σ_{k=1}^m P(A_k) − Σ_{k1<k2} P(A_{k1}A_{k2}) + Σ_{k1<k2<k3} P(A_{k1}A_{k2}A_{k3}) − ⋯ + (−1)^{m−1} P(A_1⋯A_m)
P(∪_{k=1}^m (A_k A_{m+1})) = Σ_{k=1}^m P(A_k A_{m+1}) − Σ_{k1<k2} P(A_{k1}A_{k2}A_{m+1}) + ⋯ + (−1)^{m−1} P(A_1⋯A_m A_{m+1}) ___(2)
Substituting (2) in (1) and collecting the sums over intersections of equal size:
P(∪_{k=1}^{m+1} A_k) = Σ_{k=1}^{m+1} P(A_k) − Σ_{k1<k2} P(A_{k1}A_{k2}) + Σ_{k1<k2<k3} P(A_{k1}A_{k2}A_{k3}) − ⋯ + (−1)^m P(A_1⋯A_{m+1}).
6) Ω = {0, 1, 2, …}. Σ_j P({j}) = C Σ_j 2^j/j! = C e^2 = 1 ⟹ C = e^{−2}
P(A) = e^{−2} Σ_{j∈A} 2^j/j!; P(B) = e^{−2} Σ_{j∈B} 2^j/j!; P(C) = e^{−2} Σ_j 2^{2j+1}/(2j + 1)! (sum over the odd integers)
P(B ∩ C) = P(3) + P(5) + ⋯ = ⋯, and so on; the formula for P(A∪B∪C) can then be verified directly.
(7) Let A_i be the event that no model of favorite dinosaur i is obtained in the six packets, i = 1, 2, 3.
Reqd. prob. = P(A1^c ∩ A2^c ∩ A3^c) = 1 − P((A1^c ∩ A2^c ∩ A3^c)^c)
= 1 − P(A1 ∪ A2 ∪ A3)
= 1 − [P(A1) + P(A2) + P(A3) − P(A1A2) − P(A1A3) − P(A2A3) + P(A1A2A3)] ______(1)
P(A_i) = (4/5)^6 ∀ i
Note that P(A_iA_j) = (3/5)^6 ∀ i ≠ j _____(2)
P(A1A2A3) = (2/5)^6
⟹ reqd. prob. = 1 − 3(4/5)^6 + 3(3/5)^6 − (2/5)^6
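The inclusion–exclusion value can be confirmed exhaustively (a sketch added here): enumerate all 5^6 equally likely sequences of dinosaurs in six packets, with the three favorites labelled 0, 1, 2.

```python
from itertools import product

favorites = {0, 1, 2}
count = sum(1 for seq in product(range(5), repeat=6)
            if favorites <= set(seq))
p = count / 5 ** 6
formula = 1 - 3 * (4 / 5) ** 6 + 3 * (3 / 5) ** 6 - (2 / 5) ** 6
print(count, round(p, 5), round(formula, 5))   # 5460 0.34944 0.34944
```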
P(A_{i1} ∩ A_{i2} ∩ … ∩ A_{ir}) = (n − r)!/n!; 1 ≤ i1 < i2 < ⋯ < ir ≤ n, r = 1(1)n
reqd. prob. = P(at least one match) = 1 − 1/2! + 1/3! − ⋯ + (−1)^{n−1} 1/n!
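For a small n the alternating series can be checked against a full enumeration of orderings (a sketch using n = 6, an arbitrary choice):

```python
from itertools import permutations
from math import factorial

n = 6
# Count orderings of cards 0..n-1 with at least one card in its own position.
hits = sum(1 for perm in permutations(range(n))
           if any(perm[i] == i for i in range(n)))
p = hits / factorial(n)
formula = sum((-1) ** (r - 1) / factorial(r) for r in range(1, n + 1))
print(round(p, 6), round(formula, 6))
```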
(9)
(a) A_i: event that the ith bill goes into the ith envelope, i = 1(1)n.
Reqd. prob. = P(∩_{i=1}^n A_i^c) = 1 − P(∪ A_i)
= 1 − [Σ P(A_i) − Σ_{i<j} P(A_iA_j) + ⋯ + (−1)^{n−1} P(A_1, …, A_n)]
= 1 − Q_1 + Q_2 − ⋯ + (−1)^n Q_n
In Q_i → C(n, i) terms, each equal to (n − i)!/n! = 1/(n)_i, so Q_i = C(n, i)(n − i)!/n! = 1/i!
⟹ P(∩ A_i^c) = 1 − 1/1! + 1/2! − ⋯ + (−1)^n 1/n! = Σ_{i=0}^n (−1)^i/i!
(b) B_i: event that envelope i receives both its correct bill and its correct cheque. The bills and cheques are placed independently, so
P(B_{i1} ⋯ B_{ir}) = ((n − r)!/n!)^2
⟹ P(∩_i B_i^c) = 1 − P(∪_i B_i) = 1 − Σ_i P(B_i) + Σ_{i<j} P(B_iB_j) − ⋯ + (−1)^n P(B_1 ⋯ B_n)
= 1 − R_1 + R_2 − R_3 + ⋯ + (−1)^n R_n
In R_i → C(n, i) terms, each equal to ((n − i)!/n!)^2, so
R_i = C(n, i) × ((n − i)!/n!)^2 = (n!/(i!(n − i)!)) × ((n − i)!/n!)^2 = 1/(i!(n)_i), where (n)_i = n!/(n − i)!
⟹ P(∩_i B_i^c) = Σ_{i=0}^n (−1)^i × 1/(i!(n)_i)
(10) (i) P(A∪B | C) = P((A∪B) ∩ C)/P(C) = P(AC ∪ BC)/P(C) = P(A|C) + P(B|C) − P(AB|C)
(ii) P(A^c|C) = P(A^c C)/P(C) = (P(C) − P(AC))/P(C) = 1 − P(A|C)
(11) (a) P(A|B) + P(A^c|B) = (P(AB) + P(A^c B))/P(B) = P(B)/P(B) = 1: true.
(b) P(A|B) = P(AB)/P(B); P(A|B^c) = P(AB^c)/P(B^c) = (P(A) − P(AB))/(1 − P(B))
Take A ⊂ B, P(A) > 0, P(B − A) > 0:
P(A|B) + P(A|B^c) = P(AB)/P(B) + P(AB^c)/P(B^c) = P(A)/P(B) < 1: false.
(c) With the same choice, P(A|B) + P(A^c|B^c) = P(AB)/P(B) + P(A^c B^c)/P(B^c) = P(A)/P(B) + 1 > 1: false.
(12) Given P(A) = 1/4 and P(B|A) = P(AB)/P(A) = 1/2 ⟹ P(AB) = 1/8
and P(A|B) = P(AB)/P(B) = (1/8)/P(B) = 1/4 ⟹ P(B) = 1/2
(a) P(A|B) = 1/4 ⟹ P(AB) ≠ 0 (i.e. AB ≠ φ), so A and B are not mutually exclusive: false.
(b) A ⊂ B ⟹ P(AB) = P(A); here P(AB) = 1/8 ≠ 1/4 = P(A): false.
(c) P(A^c|B^c) = P(A^c B^c)/P(B^c) = (1 − P(A) − P(B) + P(AB))/(1 − P(B)) _____(∗)
(∗) ⟹ P(A^c|B^c) = (1 − 1/4 − 1/2 + 1/8)/(1/2) = (3/8)/(1/2) = 3/4: true.
(13) Each ball placed is white with probability 1/2, independently.
(a) P(exactly 3 white) = C(4, 3)(1/2)^3 (1/2) = 4/16 = 1/4
(b) A: first ball placed is white, P(A) = 1/2; B: urn contains exactly 3 white balls.
P(B|A) = P(AB)/P(A) = [(1/2) × C(3, 2)(1/2)^3]/(1/2) = 3/8
(14) D: A occurs before B. Conditioning on the successive independent trials,
P(D) = p_A + p_C p_A + p_C^2 p_A + ⋯ = p_A/(1 − p_C) = p_A/(p_A + p_B)
(15) A_i: component i functions. P(system functions) = 1 − P(no component functions) = 1 − ∏_{i=1}^n (1 − p_i)
(16) A_i: question i (of the three selected) is among the 90 questions that the student can answer correctly.
Reqd. prob. = P(A1A2A3) = P(A1)P(A2|A1)P(A3|A1A2) = (90/100)(89/99)(88/98) ≈ 0.7265
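The sequential product agrees with the equivalent counting argument C(90, 3)/C(100, 3) (a small check added here):

```python
from math import comb

# Two routes to the same passing probability.
p_seq = (90 / 100) * (89 / 99) * (88 / 98)
p_count = comb(90, 3) / comb(100, 3)
print(round(p_seq, 4), round(p_count, 4))   # 0.7265 0.7265
```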
(17) Apply Bayes' theorem:
Reqd. prob. = P(fair | heads) = ((2/3) × (1/2))/((2/3) × (1/2) + (1/3) × (2/3)) = (1/3)/(5/9) = 3/5
(18) Bayes' theorem: reqd. prob. = P(A+ | D+) = P(A+) P(D+ | A+)/P(D+)
P(D+ | A+) = P(even # of sign changes by B, C, D) = (1/3)^3 + C(3, 2)(2/3)^2(1/3) = 13/27
P(D+ | (A+)^c) = (2/3)^3 + C(3, 1)(2/3)(1/3)^2 = 14/27
also P(D+) = P(D+ | A+)P(A+) + P(D+ | (A+)^c)P((A+)^c) = (13/27)(1/3) + (14/27)(2/3) = 41/81
⟹ reqd. prob. = ((1/3)(13/27))/(41/81) = 13/41
(19) Bayes' theorem: the boxes are chosen with probability 1/3 each, and the other drawer holds a silver coin exactly when box C was chosen:
P(box C | gold) = ((1/3) × (1/2))/((1/3) × 1 + (1/3) × 0 + (1/3) × (1/2)) = (1/6)/(1/2) = 1/3
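The 1/3 answer also falls out of a tiny exhaustive enumeration (a sketch added here): the six (box, drawer) pairs are equally likely, and we condition on the opened drawer showing gold.

```python
from fractions import Fraction

boxes = {"A": ("G", "G"), "B": ("S", "S"), "C": ("G", "S")}
outcomes = [(box, i) for box in boxes for i in (0, 1)]
gold = [(box, i) for box, i in outcomes if boxes[box][i] == "G"]
other_silver = [(box, i) for box, i in gold if boxes[box][1 - i] == "S"]
p = Fraction(len(other_silver), len(gold))
print(p)   # 1/3
```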
(20) (a) P(C1C2C3C4) = ∏_{i=1}^4 P(C_i) = 0.7 × 0.7 × 0.9 × 0.4 = 0.1764
(b) P(C1^c C2^c C3^c C4^c) = 0.3 × 0.3 × 0.1 × 0.6 = 0.0054
(c) P(C1 C2^c C3^c C4^c) + P(C1^c C2 C3^c C4^c) + P(C1^c C2^c C3 C4^c) + P(C1^c C2^c C3^c C4)
= 0.0126 + 0.0126 + 0.0486 + 0.0036 = 0.0774
(d) 1 − P(C1^c C2^c C3^c C4^c) = 1 − 0.0054 = 0.9946
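All four answers can be recomputed in one pass over the 2^4 hit/miss patterns (a sketch added here):

```python
from itertools import product

probs = [0.7, 0.7, 0.9, 0.4]
dist = {}
for pattern in product((0, 1), repeat=4):
    p = 1.0
    for hit, q in zip(pattern, probs):
        p *= q if hit else 1 - q          # independence of the shooters
    dist[sum(pattern)] = dist.get(sum(pattern), 0.0) + p

p_all, p_none, p_one = dist[4], dist[0], dist[1]
p_atleast = 1 - dist[0]
print(round(p_all, 4), round(p_none, 4), round(p_one, 4), round(p_atleast, 4))
# 0.1764 0.0054 0.0774 0.9946
```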
(21) P(∩_{i=1}^n A_i^c) = ∏_{i=1}^n P(A_i^c) = ∏_{i=1}^n (1 − P(A_i)) ≤ ∏_{i=1}^n exp(−P(A_i)) [since 1 − x ≤ e^{−x}]
i.e. P(∩_{i=1}^n A_i^c) ≤ exp(− Σ_{i=1}^n P(A_i)).
(22) Take Ω = {1, 2, 3, 4}, P({i}) = 1/4, i = 1, 2, 3, 4, and
A = {1, 4}, B = {2, 4}, C = {3, 4}.
P(AB) = P(AC) = P(BC) = 1/4 and P(A) = P(B) = P(C) = 1/2
⟹ P(AB) = P(A)P(B), P(AC) = P(A)P(C) & P(BC) = P(B)P(C) (pairwise independent)
but P(ABC) = P({4}) = 1/4 ≠ 1/8 = P(A)P(B)P(C), so the events are not mutually independent.
(24) B: the first student selected from the list is a girl; C: the next student selected is a boy; A_i: the list contains exactly i girls, i = 1, 2, 3, with P(A_i) = C(3, i)C(5, 4 − i)/C(8, 4) and P(B|A_i) = i/4.
P(B) = Σ_i (i/4) P(A_i) = 105/280 = 3/8
P(C|B) = P(C ∩ (∪_{i=1}^3 A_i) ∩ B)/P(B) = Σ_{i=1}^3 P(C | A_i B) P(A_i | B), where P(C | A_i B) = (4 − i)/3
= 1 × P(A1|B) + (2/3) × P(A2|B) + (1/3) × P(A3|B)
P(A1|B) = P(A1)P(B|A1)/P(B) = 2/7
P(A2|B) = P(A2)P(B|A2)/P(B) = 4/7; P(A3|B) = 1/7
⟹ P(C|B) = 1 × 2/7 + (2/3) × 4/7 + (1/3) × 1/7 = 15/21 = 5/7
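By symmetry, the first two students interviewed form a uniformly random ordered pair from the class of 8, so 5/7 can be checked with a small enumeration (a sketch; 'b' = boy, 'g' = girl):

```python
from itertools import permutations
from fractions import Fraction

students = "bbbbbggg"                       # 5 boys, 3 girls
pairs = list(permutations(students, 2))     # 8 * 7 ordered pairs
first_girl = [p for p in pairs if p[0] == "g"]
second_boy = [p for p in first_girl if p[1] == "b"]
p = Fraction(len(second_boy), len(first_girl))
print(p)   # 5/7
```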
(25) A_i: a fly survives the ith application, i = 1(1)4. The kill rates are 0.8, 0.4, 0.2, 0.1, so P(A1) = 0.2, P(A2|A1) = 0.6, P(A3|A1A2) = 0.8, P(A4|A1A2A3) = 0.9.
Note that A4 ⊂ A3 ⊂ A2 ⊂ A1
⟹ A4 = A1 ∩ A2 ∩ A3 ∩ A4
(a) P(A4) = P(A1A2A3A4) = P(A1)P(A2|A1)P(A3|A1A2)P(A4|A1A2A3) = 0.2 × 0.6 × 0.8 × 0.9 = 0.0864
(b) P(A4|A1) = P(A4 ∩ A1)/P(A1) = P(A4)/P(A1) = 0.0864/0.2 = 0.432
(26) B_i: exactly i of the 5 paintings are forgeries, i = 0(1)5; A: the painting sent for authentication turns out to be a forgery, so P(A|B_i) = i/5.
By Bayes' theorem, reqd. prob. = P(B5|A) = P(B5)P(A|B5)/Σ_{i=0}^5 P(B_i)P(A|B_i)
P(A) = Σ_{i=0}^5 P(B_i)P(A|B_i)
= 0.76 × 0 + 0.09 × (1/5) + 0.02 × (2/5) + 0.01 × (3/5) + 0.02 × (4/5) + 0.10 × 1 = 0.148
P(B5|A) = (0.10 × 1)/P(A) = 0.10/0.148 ≈ 0.676
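A direct recomputation of this posterior (priors and likelihoods taken from the problem statement):

```python
priors = [0.76, 0.09, 0.02, 0.01, 0.02, 0.10]   # P(B_i), i = 0..5
likelihoods = [i / 5 for i in range(6)]          # P(A | B_i)
p_a = sum(p * l for p, l in zip(priors, likelihoods))
posterior = priors[5] * likelihoods[5] / p_a
print(round(p_a, 3), round(posterior, 3))        # 0.148 0.676
```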
Problem Set-3
[1] Let X be a random variable defined on (Ω, ℑ, P). Show that the following are also random variables:
(a) |X|, (b) X^2 and (c) √X, given that {X < 0} = φ.
[2] Let Ω = [0, 1] and ℑ be the Borel σ-field of subsets of Ω. Define X on Ω as follows:
X(ω) = ω if 0 ≤ ω ≤ 1/2; ω − 1/2 if 1/2 < ω ≤ 1
Show that X defined above is a random variable.
[3] Let Ω = {1, 2, 3, 4} and ℑ = {φ, Ω, {1}, {2, 3, 4}} be a σ-field of subsets of Ω. Verify whether X(ω) = ω + 1 is a random variable with respect to ℑ.
[4] Let a card be selected from an ordinary pack of playing cards. The outcome ω is one of these 52 cards.
Define X on Ω as:
X(ω) = 4 if ω is an ace; 3 if ω is a king; 2 if ω is a queen; 1 if ω is a jack; 0 otherwise.
Show that X is a random variable. Further, suppose that P(.) assigns a probability of 1/52 to each outcome
ω. Derive the distribution function of X.
[5] Let F(x) = 0 if x < −1; (x + 2)/4 if −1 ≤ x < 1; 1 if x ≥ 1.
Show that F(.) is a distribution function. Sketch the graph of F(x) and compute the probabilities P(−1/2 < X ≤ 1/2), P(X = 0), P(X = 1) and P(−1 ≤ X < 1). Further, obtain the decomposition F(x) = α F_d(x) + (1 − α) F_c(x), where F_d(x) and F_c(x) are purely discrete and purely continuous distribution functions,
respectively.
[6] Verify whether the following are distribution functions:
(a) F(x) = 0, x < 0; x, 0 ≤ x ≤ 1/2; 1, x > 1/2
(b) F(x) = 0, x < 0; 1 − e^{−x}, x ≥ 0
(c) F(x) = 0, x ≤ 1; 1 − 1/x, x > 1
[7] Let F(x) = 0 if x ≤ 0; 1 − (2/3)e^{−x/3} − (1/3)e^{−[x/3]} if x > 0,
where [x] is the largest integer ≤ x. Show that F(.) is a distribution function and compute P(X > 6), P(X = 5) and P(5 ≤ X ≤ 8).
[8] Let F(x) = 0, x < −2; 1/3, −2 ≤ x < 0; 1/2, 0 ≤ x < 5; 1/2 + (x − 5)^2/2, 5 ≤ x < 6; 1, x ≥ 6.
Find P(−2 ≤ X < 5), P(0 < X < 5.5) and P(1.5 < X ≤ 5.5 | X > 2).
[9] Prove that if F1(.), …, Fn(.) are n distribution functions, then F(x) = Σ_{i=1}^n α_i F_i(x) is also a
distribution function for any (α1, …, αn) such that α_i ≥ 0 and Σ_{i=1}^n α_i = 1.
[10] Suppose 𝐹1 𝑎𝑛𝑑 𝐹2 are distribution functions. Verify whether G(x) = 𝐹1 𝑥 + 𝐹2 (𝑥)is also a
distribution function.
[11] Find constants α and k such that
F(x) = 0 if x ≤ 0; α + k e^{−x^2/2} if x > 0
is a distribution function.
[12] Let F(x) = 0 if x < 0; (x + 2)/8 if 0 ≤ x < 1; (x^2 + 2)/8 if 1 ≤ x < 2; (2x + c)/8 if 2 ≤ x ≤ 3; 1 if x > 3.
Find the value of c such that F is a distribution function. Using the obtained value of c, find the
decomposition F(x) = α F_d(x) + (1 − α) F_c(x), where F_d(x) and F_c(x) are purely discrete and purely
continuous distribution functions, respectively.
[13] Suppose F_X is the distribution function of a random variable X. Determine the distribution functions
of (a) X⁺ and (b) |X|, where
X⁺ = X if X ≥ 0; 0 if X < 0.
[15] Verify whether the following define probability mass functions:
(a) f(x) = (x − 2)/2 if x = 1, 2, 3, 4; 0 otherwise
(b) f(x) = e^{−λ} λ^x/x! if x = 0, 1, 2, 3, 4, …, where λ > 0; 0 otherwise
(c) f(x) = e^{−λ} λ^x/x! if x = 1, 2, 3, 4, …, where λ > 0; 0 otherwise
[16] Find the values of the constant c such that f(x) = (1 − c)c^x; x = 0, 1, 2, 3, … defines a probability mass
function.
[17] Let X be a discrete random variable taking values in 𝒳 = {−3, −2, −1, 0, 1, 2, 3} such that P(X = −3) = P(X = −2) = P(X = −1) = P(X = 1) = P(X = 2) = P(X = 3)
and P(X < 0) = P(X = 0) = P(X > 0). Find the distribution function of X.
[18] A battery cell is labeled as good if it works for at least 300 days in a clock; otherwise it is labeled as
bad. Three manufacturers, A, B, and C, make cells with probabilities of making good cells 0.95, 0.90
and 0.80 respectively. Three identical clocks are selected and cells made by A, B, and C are used in clock
numbers 1, 2 and 3 respectively. Let X be the total number of clocks working after 300 days. Find the
probability mass function of X and plot the corresponding distribution function.
[19] Prove that the function f_θ(x) = θ^2 x e^{−θx} if x > 0; 0 otherwise
defines a probability density function for θ > 0. Find the corresponding distribution function and hence
compute P(2 < X < 3) and P(X > 5).
[20] Find the value of the constant c such that the following function is a probability density function:
f_λ(x) = c(x + 1)e^{−λx} if x ≥ 0; 0 if x < 0,
where λ > 0. Obtain the distribution function of the random variable associated with the probability density
function f_λ(x).
[21] Show that f(x) = x^2/18 if −3 < x < 3; 0 otherwise
defines a probability density function. Find the corresponding distribution function and hence find P(|X| < 1) and P(X^2 < 9).
Solution Key
(1)(a) y = |X|: for x ≥ 0, y^{−1}((−∞, x]) = {ω: −x ≤ X(ω) ≤ x} = X^{−1}([−x, ∞)) ∩ X^{−1}((−∞, x]) ∈ ℱ, as X is a r.v.; for x < 0 it is φ ∈ ℱ
⟹ y = |X| is a r.v.
[X is a r.v. ⟹ X^{−1}(B) ∈ ℱ ∀ B ∈ 𝔅, and y^{−1}((−∞, x]) ∈ ℱ ∀ x ∈ ℝ suffices for y to be a r.v.]
(b) y = X^2: ∀ x ≥ 0, y^{−1}((−∞, x]) = {ω: X^2(ω) ≤ x} = {ω: −√x ≤ X(ω) ≤ √x}
= X^{−1}([−√x, ∞)) ∩ X^{−1}((−∞, √x]) ∈ ℱ
⟹ y = X^2 is a r.v.
(c) Similarly, with {X < 0} = φ, y = √X satisfies y^{−1}((−∞, x]) = X^{−1}((−∞, x^2]) ∈ ℱ for x ≥ 0 ⟹ y = √X is a r.v.
(2) X(ω) = ω, 0 ≤ ω ≤ 1/2; ω − 1/2, 1/2 < ω ≤ 1
X^{−1}((−∞, x]) = φ ∈ ℑ if x < 0; [0, x] ∪ (1/2, 1/2 + x] ∈ ℑ if 0 ≤ x < 1/2; Ω ∈ ℑ if x ≥ 1/2
⟹ X^{−1}((−∞, x]) ∈ ℑ ∀ x ∈ ℝ ⟹ X is a r.v.
(3) Ω = {1, 2, 3, 4}
X(ω) = ω + 1 → {2, 3, 4, 5}
X^{−1}((−∞, x]) = φ ∈ ℑ if x < 2; {1} ∈ ℑ if 2 ≤ x < 3; {1, 2} ∉ ℑ if 3 ≤ x < 4
⟹ X is not a r.v.
(4) Here ℑ is the power set of the 52 cards (hearts, spades, diamonds, clubs), so every subset of Ω is an event.
X^{−1}((−∞, x]) = φ if x < 0; {2H, …, 10H, 2S, …, 10S, 2D, …, 10D, 2C, …, 10C} if 0 ≤ x < 1; the same cards together with the jacks if 1 ≤ x < 2; together with the queens if 2 ≤ x < 3; together with the kings if 3 ≤ x < 4; Ω if x ≥ 4
⟹ X^{−1}((−∞, x]) ∈ ℑ ∀ x ∈ ℝ
⟹ X is a r.v.
With P({ω}) = 1/52: F_X(x) = 0, x < 0; 36/52, 0 ≤ x < 1; 40/52, 1 ≤ x < 2; 44/52, 2 ≤ x < 3; 48/52, 3 ≤ x < 4; 1, x ≥ 4.
(5) F is non-decreasing, right continuous, F(−∞) = 0, F(∞) = 1
⟹ F(.) is a d. f.
P(−1/2 < X ≤ 1/2) = F(1/2) − F(−1/2) = 5/8 − 3/8 = 1/4
P(X = 0) = F(0) − F(0−) = 0
P(X = 1) = F(1) − F(1−) = 1 − 3/4 = 1/4
P(−1 ≤ X < 1) = F(1−) − F(−1−) = 3/4 − 0 = 3/4
(6) (a) F(x) is not right continuous at x = 1/2
⟹ F(.) is not a d. f.
(b) & (c): the n. s. c. holds for these 2 and hence they are d. f.s.
(7) F(.) is non-decreasing, right continuous and has the correct limits ⟹ F(.) is a d. f.
P(X > 6) = 1 − P(X ≤ 6) = 1 − F(6) = 1 − (1 − (2/3)e^{−6/3} − (1/3)e^{−[6/3]})
= (2/3)e^{−2} + (1/3)e^{−2} = e^{−2}
P(X = 5) = F(5) − F(5−)
= (1 − (2/3)e^{−5/3} − (1/3)e^{−1}) − (1 − (2/3)e^{−5/3} − (1/3)e^{−1}) = 0
P(5 ≤ X ≤ 8) = F(8) − F(5−)
= ⋯
(8) P(−2 ≤ X < 5) = F(5−) − F(−2−) = 1/2 − 0 = 1/2
P(0 < X < 5.5) = F(5.5−) − F(0) = (1/2 + (0.5)^2/2) − 1/2 = 1/8
P(1.5 < X ≤ 5.5 | X > 2) = P(2 < X ≤ 5.5)/P(X > 2)
= (F(5.5) − F(2))/(1 − F(2))
= (1/2 + 1/8 − 1/2)/(1 − 1/2) = (1/8)/(1/2) = 1/4
(9) Each F_i is non-decreasing and α_i ≥ 0 ⟹ F(x) = Σ α_i F_i(x) is non-decreasing.
F(−∞) = Σ α_i F_i(−∞) = 0 & F(∞) = Σ α_i = 1
F(x+) = lim_{z↓x} F(z) = lim_{z↓x} Σ α_i F_i(z) = Σ α_i F_i(x+) = Σ α_i F_i(x) = F(x)
⟹ F(.) is a d. f.
(10) 𝐺 ∞ = 𝐹1 ∞ + 𝐹2 ∞ = 2 ≠ 1
⟹ 𝐺 . 𝑖𝑠 𝑛𝑜𝑡 𝑎 𝑑. 𝑓.
(11) Right continuity at 0 ⟹ 0 = F(0) = F(0+) = α + k ________(i)
F(∞) = 1 ⟹ α = 1 ⟹ k = −1, giving F(x) = 1 − e^{−x^2/2} for x > 0.
(12) Monotonicity and right continuity at x = 2 require (4 + c)/8 = F(2) = F(2−) = 6/8 ⟹ c = 2.
Discrete part of d. f. (the only jump is at x = 0, of size 2/8):
αF_d(x) = 0, x < 0; 2/8, x ≥ 0
continuous part:
(1 − α)F_c(x) = 0 if x < 0; x/8, 0 ≤ x < 1; x^2/8, 1 ≤ x < 2; 2x/8, 2 ≤ x ≤ 3; 6/8, x > 3
⟹ α = 2/8 = 1/4 & 1 − α = 3/4
F_d(x) = 0, x < 0; 1, x ≥ 0
F_c(x) = 0 if x < 0; x/6, 0 ≤ x < 1; x^2/6, 1 ≤ x < 2; x/3, 2 ≤ x ≤ 3; 1 if x > 3
(13) Y = X⁺:
If y = 0: P(Y ≤ 0) = P(X⁺ ≤ 0) = P(X⁺ = 0) = P(X ≤ 0) = F(0)
If y > 0: P(Y ≤ y) = P(X⁺ ≤ y) = P(X ≤ y) = F(y)
F_Y(y) = 0, y < 0; F(y), y ≥ 0
Z = |X|:
F_Z(z) = P(|X| ≤ z) = P(−z ≤ X ≤ z)
= F(z) − F(−z−) if z ≥ 0; 0 if z < 0
(14) For x1 < x2:
F(x2) − F(x1) = ∫_{−∞}^∞ F1(x2 − y) dF2(y) − ∫_{−∞}^∞ F1(x1 − y) dF2(y)
= ∫_{−∞}^∞ [F1(x2 − y) − F1(x1 − y)] dF2(y)
≥ 0 ∀ x1 < x2
F(∞) = ∫_{−∞}^∞ F1(∞) dF2(y) = F2(∞) − F2(−∞) = 1
F(−∞) = 0
F(x+) = ∫_{−∞}^∞ F1((x − y)+) dF2(y) = ∫_{−∞}^∞ F1(x − y) dF2(y) = F(x)
⟹ F(.) as defined is a d. f.
(15) (a) f(1) = −1/2 < 0 ⟹ f(.) is not a p. m. f.
(b) f(x) ≥ 0 ∀ x and Σ_x f(x) = e^{−λ} Σ_{x=0}^∞ λ^x/x! = 1
⟹ f(.) is a p. m. f.
(c) Σ_x f(x) = 1 − e^{−λ} ≠ 1 ⟹ f(.) is not a p. m. f.
(16) f(x) ≥ 0 and Σ_x f(x) = (1 − c) Σ_{x=0}^∞ c^x = 1 for every c ∈ [0, 1), so any such c works.
(17) Suppose
P(X = −3) = P(X = −2) = P(X = −1) = P(X = 1) = P(X = 2) = P(X = 3) = p
Then P(X < 0) = P(X > 0) = 3p and P(X < 0) = P(X = 0) = P(X > 0) forces each to equal 1/3 ⟹ p = 1/9, P(X = 0) = 3/9.
p. m. f.: P(X = x) = 1/9 for x = −3, −2, −1, 1, 2, 3 and P(X = 0) = 3/9
F_X(x) = 0, x < −3; 1/9, −3 ≤ x < −2; 2/9, −2 ≤ x < −1; 3/9, −1 ≤ x < 0; 6/9, 0 ≤ x < 1; 7/9, 1 ≤ x < 2; 8/9, 2 ≤ x < 3; 1, x ≥ 3
(18) Let A, B, C be the events that the cells in clocks 1, 2, 3 are good, independent with P(A) = 0.95, P(B) = 0.90, P(C) = 0.80.
P(X = 0) = P(A^c B^c C^c) = P(A^c)P(B^c)P(C^c) = 0.05 × 0.10 × 0.20 = 0.001
P(X = 1) = P(AB^c C^c ∪ A^c B C^c ∪ A^c B^c C)
= P(A)P(B^c)P(C^c) + P(A^c)P(B)P(C^c) + P(A^c)P(B^c)P(C)
= 0.019 + 0.009 + 0.004 = 0.032 = p1, say
P(X = 2) = P(ABC^c ∪ AB^c C ∪ A^c BC)
= P(A)P(B)P(C^c) + P(A)P(B^c)P(C) + P(A^c)P(B)P(C)
= 0.171 + 0.076 + 0.036 = 0.283 = p2, say
P(X = 3) = P(ABC) = 0.684 = p3, say
p. m. f.: P(X = x) = 0.001, 0.032, 0.283, 0.684 for x = 0, 1, 2, 3; the d. f. is the corresponding step function.
(19) f_θ(x) ≥ 0 and ∫_0^∞ θ^2 x e^{−θx} dx = θ^2 Γ(2)/θ^2 = 1 ⟹ f(.) is a p. d. f.
F(x) = ∫_0^x θ^2 t e^{−θt} dt = 1 − (1 + θx)e^{−θx} if x > 0 (integration by parts)
= 0 if x ≤ 0
(20) C ∫_0^∞ (x + 1) e^{−λx} dx = 1
i.e. C(Γ(2)/λ^2 + Γ(1)/λ) = 1 ⟹ C = λ^2/(1 + λ)
d. f. F(x) = ∫_0^x f(t) dt = (λ^2/(1 + λ)) ∫_0^x (1 + t)e^{−λt} dt if x ≥ 0 (by parts)
= 0 if x < 0
(21) f(x) ≥ 0 ∀ x
∫_{−∞}^∞ f(x) dx = ∫_{−3}^3 x^2/18 dx = (1/18)(x^3/3)|_{−3}^3 = (1/18) × 18 = 1
⟹ f(.) is a p. d. f.
F_X(x) = 0 if x < −3; (x^3 + 27)/54, −3 ≤ x ≤ 3; 1, x > 3
P(|X| < 1) = F(1) − F(−1) = 14/27 − 13/27 = 1/27
P(X^2 < 9) = P(−3 < X < 3) = 1
Problem Set -4
[1] Find the expected number of throws of a fair die required to obtain a 6.
[2] Consider a sequence of independent coin flips, each of which has a probability p of being heads.
Define a random variable X as the length of the run (of either heads or tails) started by the first trial. Find
E(X).
[4] Find the mean and variance of the distributions having the following p. d. f. / p. m. f.
(b) f(x) = 1/n; x = 1, 2, …, n; n > 0 is an integer
(c) f(x) = (3/2)(x − 1)^2; 0 < x < 2
[5] Find the mean and variance of the Weibull random variable having the p. d. f.
[7] Let X be a continuous, nonnegative random variable with d. f. F(x). Show that
E(X) = ∫_0^∞ (1 − F(x)) dx.
[8] A target is made of three concentric circles of radii 1/√3, 1, √3 feet. Shots within the inner circle give 4
points, within the next ring 3 points and within the third ring 2 points. Shots outside the target give 0. Let
X be the distance of the hit from the centre (in feet) and let the p. d. f. of X be
f(x) = 2/(π(1 + x^2)) if x > 0; 0 otherwise.
What is the expected value of the score in a single shot?
[9] Find the moment generating function (m. g. f.) for the following distributions:
(a) f(x) = C(n, x) p^x (1 − p)^{n−x}; x = 0, 1, …, n, where n is a positive integer.
[10] Find the mean and variance of the distributions with
(i) f_X(x) = e^{−x/β} x^{α−1}/(Γ(α) β^α), x > 0; 0, otherwise
(ii) M_X(t) = (1/2)e^{−5t} + (1/6)e^{4t} + (1/8)e^{5t} + (5/24)e^{25t}
[11] Let X be a random variable with P(X ≤ 0)= 0 and let 𝜇= E(X) exists. Show that P(X≥ 2𝜇) ≤ 0.5.
[12] Let X be a random variable with E(X) = 3 and E(X^2) = 13; determine a lower bound for P(−2 < X < 8).
[13] Let X have the p. m. f. P(X = −1) = P(X = 1) = 1/8, P(X = 0) = 3/4. Using this p. m. f., show that the bound from Chebyshev's inequality cannot be improved.
[14] A communication system consists of n components, each of which will independently function with
probability p. The system will be able to operate effectively if at least one half of its components function.
(a) For what values of p is a 5-component system more likely to operate effectively than a 3-component
system?
(b) In general, when is a (2k + 1)-component system better than a (2k − 1)-component system?
[15] An interviewer is given a list of 8 people whom he can attempt to interview. He is required to
interview exactly 5 people. If each person(independently) agrees to be interviewed with probability 2/3,
what is the probability that his list will enable him to complete his task?
[16] A pipe-smoking mathematician carries at all times 2 match boxes, 1 in his left-hand pocket and 1 in
his right- hand pocket. Each time he needs a match he is equally likely to take it from either pocket.
Consider the moment when the mathematician first discovers that one of his matchboxes is empty. If it is
assumed that both matchboxes initially contained N matches, what is the probability that there are exactly
k matches in the other box, k= 0, 1, …, N?
Solution Key
(1) 𝔛 = {1, 2, 3, …}
P(X = x) = (5/6)^{x−1}(1/6), x ∈ 𝔛; = 0 otherwise
E(X) = Σ_{x=1}^∞ x (5/6)^{x−1}(1/6) = (1/6)[1 + 2(5/6) + 3(5/6)^2 + ⋯]
= (1/6)[(1 + 5/6 + (5/6)^2 + ⋯) + (5/6)(1 + 5/6 + ⋯) + (5/6)^2(1 + ⋯) + ⋯]
= (1/6) × (1/(1 − 5/6)) × (1/(1 − 5/6)) = (1/6) × 36 = 6
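A simulation sketch of the answer E(X) = 6 (seed and trial count are arbitrary choices):

```python
import random

# Average number of throws of a fair die needed to see a 6.
random.seed(1)
trials = 200_000
total = 0
for _ in range(trials):
    throws = 1
    while random.randint(1, 6) != 6:
        throws += 1
    total += throws

est = total / trials
print(round(est, 2))
```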
(2) X: length of the run (of heads or tails) starting with trial 1; 𝔛 = {1, 2, …}
P(X = x) = (1 − p)^x p + p^x (1 − p) [run of x T's then an H, or run of x H's then a T]
E(X) = Σ_{x=1}^∞ x[(1 − p)^x p + p^x (1 − p)] = p(1 − p)[Σ_{x=1}^∞ x(1 − p)^{x−1} + Σ_{x=1}^∞ x p^{x−1}]
= p(1 − p)[1/p^2 + 1/(1 − p)^2] ← as in (1)
= (1 − p)/p + p/(1 − p) = (1 − 2p + 2p^2)/(p(1 − p)).
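A numerical check of the closed form (a sketch; the series converges geometrically, so a truncated sum suffices, and the truncation length is an arbitrary choice):

```python
def run_length_mean(p, terms=500):
    # Truncated series for E(X) using the p.m.f. above.
    return sum(x * ((1 - p) ** x * p + p ** x * (1 - p))
               for x in range(1, terms + 1))

for p in (0.2, 0.5, 0.7):
    formula = (1 - 2 * p + 2 * p * p) / (p * (1 - p))
    print(p, round(run_length_mean(p), 6), round(formula, 6))
```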
(3) (a) E(X) = Σ_{x=1}^∞ x × 1/(x(x + 1)) = Σ_{x=1}^∞ 1/(x + 1): not convergent ⟹ E(X) does not exist
(b) ∫_{|x|>1} |x| × 1/(2x^2) dx = ∫_{|x|>1} 1/(2|x|) dx = ∞ ⟹ E(X) does not exist
(d) E(X) = ∫_{−∞}^∞ x f(x) dx = ∫_0^∞ x × 2/(π(1 + x^2)) dx = (1/π) log(1 + x^2)|_0^∞ = ∞
⟹ E(X) does not exist.
(4) (a) With f(x) = a x^{a−1}, 0 < x < 1: E(X) = a/(a + 1), E(X^2) = a/(a + 2)
V(X) = E(X^2) − (E(X))^2 = a/(a + 2) − (a/(a + 1))^2
= ⋯
(5) E(X) = (c/a) ∫_μ^∞ x ((x − μ)/a)^{c−1} e^{−((x−μ)/a)^c} dx
= ∫_0^∞ (a y^{1/c} + μ) e^{−y} dy [substituting y = ((x − μ)/a)^c]
= a Γ(1/c + 1) + μ
E(X^2) = ∫_0^∞ (a y^{1/c} + μ)^2 e^{−y} dy = a^2 Γ(2/c + 1) + 2aμ Γ(1/c + 1) + μ^2
V(X) = E(X^2) − (E(X))^2
= a^2 Γ(2/c + 1) + 2aμ Γ(1/c + 1) + μ^2 − (a Γ(1/c + 1) + μ)^2
= a^2[Γ(2/c + 1) − Γ^2(1/c + 1)]
(6) The median m satisfies ∫_0^m 3x^2 dx = m^3 = 1/2 ⟹ m = (1/2)^{1/3} (note ∫_0^1 3x^2 dx = 1).
(7) ∫_0^∞ (1 − F(x)) dx = ∫_0^∞ ∫_x^∞ f_X(y) dy dx [region 0 < x < y < ∞]
= ∫_0^∞ (∫_0^y dx) f_X(y) dy = ∫_0^∞ y f_X(y) dy = E(X)
(8) Z: score in a shot, 𝔛_Z = {0, 2, 3, 4}
P(Z = 0) = P(X > √3) = (2/π) ∫_{√3}^∞ 1/(1 + x^2) dx = (2/π) tan^{−1}x |_{√3}^∞ = 1/3
P(Z = 2) = P(1 < X < √3) = (2/π) ∫_1^{√3} 1/(1 + x^2) dx = 1/6
P(Z = 3) = P(1/√3 < X < 1) = (2/π) ∫_{1/√3}^1 1/(1 + x^2) dx = 1/6
P(Z = 4) = P(0 < X < 1/√3) = (2/π) ∫_0^{1/√3} 1/(1 + x^2) dx = 1/3
Expected score:
E(Z) = 0 × 1/3 + 2 × 1/6 + 3 × 1/6 + 4 × 1/3 = 1/3 + 1/2 + 4/3 = 13/6
(9)(a) M_X(t) = E(e^{tX}) = Σ_{x=0}^n e^{tx} C(n, x) p^x (1 − p)^{n−x}
= Σ_{x=0}^n C(n, x)(pe^t)^x (1 − p)^{n−x}
= (1 − p + pe^t)^n
With q = 1 − p:
(d/dt) M_X(t)|_{t=0} = n(q + pe^t)^{n−1} pe^t |_{t=0} = np = E(X)
(d^2/dt^2) M_X(t) = n(n − 1)(q + pe^t)^{n−2}(pe^t)^2 + n(q + pe^t)^{n−1} pe^t
(d^2/dt^2) M_X(t)|_{t=0} = n(n − 1)p^2 + np = E(X^2)
V(X) = E(X^2) − (E(X))^2 = n(n − 1)p^2 + np − n^2p^2 = np(1 − p) = npq
Similarly (b).
(10)(i) X ∼ G(α, β):
M_X(t) = E(e^{tX}) = (1/(Γ(α)β^α)) ∫_0^∞ e^{tx} x^(α−1) e^{−x/β} dx
= (1/(Γ(α)β^α)) ∫_0^∞ x^(α−1) e^{−x(1/β − t)} dx   (region of existence: t < 1/β)
= (Γ(α)/(Γ(α)β^α)) · 1/(1/β − t)^α = (1 − βt)^{−α}
E(X) = (d/dt) M_X(t)|_{t=0} = −α(1 − βt)^{−α−1}(−β)|_{t=0} = αβ
E(X²) = (d²/dt²) M_X(t)|_{t=0} = αβ[−(α+1)(1 − βt)^{−α−2}(−β)]|_{t=0} = α(α+1)β²
V(X) = E(X²) − (E(X))² = α²β² + αβ² − α²β² = αβ²
Similarly for (e).
(ii) M_X(t) = (1/2)e^{−5t} + (1/6)e^{4t} + (1/8)e^{5t} + (5/24)e^{25t} — a 4-point distribution.
p.m.f.:
x       :  −5    4    5    25
P(X = x): 1/2  1/6  1/8  5/24
d.f.:
F_X(x) = 0 for x < −5;
= 1/2 for −5 ≤ x < 4;
= 1/2 + 1/6 for 4 ≤ x < 5;
= 1/2 + 1/6 + 1/8 for 5 ≤ x < 25;
= 1/2 + 1/6 + 1/8 + 5/24 = 1 for x ≥ 25.
By Markov's inequality, P(X ≥ 2μ) ≤ E(X)/(2μ) = 1/2.
P(−5/2 < (X − E(X))/√V(X) < 5/2) = P(|X − μ| < (5/2)√V(X))
= 1 − P(|X − μ| ≥ (5/2)√V(X))
≥ 1 − V(X)/((25/4)V(X))   (Chebyshev's inequality)
= 1 − 4/25 = 21/25
(13) E(X) = −1/8 + 1/8 = 0 = μ; V(X) = E(X²) = 1/8 + 1/8 = 1/4 = σ²
By Chebyshev's inequality, for all t > 0, P(|X − μ| ≥ t) ≤ σ²/t², i.e. P(|X| ≥ t) ≤ 1/(4t²).
Also, P(|X| ≥ t) = 1/8 + 1/8 = 1/4 for 0 < t ≤ 1, and = 0 for t > 1,
i.e. P(|X| ≥ t) = 1/4 for all t with 0 < t ≤ 1.
⟹ for t = 1 the bound from Chebyshev's inequality is attained exactly and hence cannot be improved.
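The attainment at t = 1 can be verified mechanically. The sketch below assumes the underlying distribution is the one implied by the moments in the solution: X = ±1 with probability 1/8 each and X = 0 with probability 3/4:

```python
# X = ±1 w.p. 1/8 each, 0 w.p. 3/4  ->  E(X) = 0, V(X) = 1/4
support = {-1: 1 / 8, 0: 3 / 4, 1: 1 / 8}
var = sum(x * x * p for x, p in support.items())          # 0.25
tail = sum(p for x, p in support.items() if abs(x) >= 1)  # P(|X| >= 1)
bound = var / 1 ** 2                                      # Chebyshev bound at t = 1
print(tail, bound)  # both 0.25: the bound is attained
```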
p₅ = P₅(X ≥ 3) = C(5,3) p³(1−p)² + C(5,4) p⁴(1−p) + p⁵
and p₃ = C(3,2) p²(1−p) + p³ = P(3-component system works).
p₅ > p₃
iff C(5,3) p³(1−p)² + C(5,4) p⁴(1−p) + p⁵ > C(3,2) p²(1−p) + p³;
simplify to get the condition p > 1/2.
(b) p₂ₖ₊₁ = P₂ₖ₊₁(X ≥ k+1)
= P₂ₖ₋₁(X ≥ k+1) + P₂ₖ₋₁(X = k) P₂(X ≥ 1) + P₂ₖ₋₁(X = k−1) P₂(X = 2)
i.e. p₂ₖ₊₁ = P₂ₖ₋₁(X ≥ k+1) + P₂ₖ₋₁(X = k)[1 − (1−p)²] + P₂ₖ₋₁(X = k−1) p²
So p₂ₖ₊₁ > p₂ₖ₋₁
iff C(2k−1, k) p^k (1−p)^(k−1) (−1 − p² + 2p) + C(2k−1, k−1) p^(k−1) (1−p)^k p² > 0
i.e., since C(2k−1, k) = C(2k−1, k−1),
p^k (1−p)^(k−1) [−1 − p² + 2p + p − p²] > 0
i.e. −1 − 2p² + 3p > 0
i.e. (2p − 1)(1 − p) > 0
i.e. p > 1/2 — the required condition.
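The majority-vote comparison can be checked numerically (a sketch I added; `maj(n, p)` is a hypothetical helper computing the probability that a majority of n components work):

```python
from math import comb

def maj(n, p):
    """P(at least (n//2)+1 of n independent components work)."""
    k = n // 2 + 1
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(k, n + 1))

# 5 components beat 3 exactly when p > 1/2, tie at p = 1/2
print(maj(5, 0.6) > maj(3, 0.6))  # True
print(maj(5, 0.4) < maj(3, 0.4))  # True
```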
Required probability: P(X ≤ 8) = P(X=5) + P(X=6) + P(X=7) + P(X=8),
where (reading X as the trial number of the 5th success, with success probability 1/3) P(X = x) = C(x−1, 4)(1/3)⁵(2/3)^(x−5), so
= C(4,4)(1/3)⁵ + C(5,4)(1/3)⁵(2/3) + ⋯ + C(7,4)(1/3)⁵(2/3)³ = ⋯
Suppose Box 2 is found empty; then Box 2 has been chosen for the (N+1)th time, and at that moment Box 1 contains
k matches if and only if it has been chosen N − k times.
[1] A machine contains two belts of different lengths. These have times to failure which are exponentially
distributed, with means α and 2α. The machine will stop if either belt fails. The failures of the belts are
assumed to be independent. What is the probability that the system still performs after time α from the start?
[2] Let X be a normal random variable with parameters μ = 10 and σ² = 36. Compute
[4] It is assumed that the lifetimes of computer chips produced by a certain semiconductor manufacturer
are normally distributed with parameters μ = 1.4 × 10⁶ and σ² = 3 × 10⁵ hours. What is the
approximate probability that a batch of 10 chips will contain at least 2 chips whose lifetimes are less than
1.8 × 10⁶ hours?
[5] Let X be a normal random variable with mean 0 and variance 1, i.e. N(0, 1). Prove that
P(|X| > t) ≤ √(2/π) · e^{−t²/2}/t for all t > 0.
[7] The cumulative distribution function of a random variable X defined over 0 ≤ x < ∞ is
F(x) = 1 − e^{−βx²}, where β > 0. Find the mean, median and variance of X.
[8] Show that for any x > 0, 1 − Φ(x) ≤ φ(x)/x, where Φ(x) is the c.d.f. and φ(x) is the p.d.f. of the standard
normal distribution.
[9] A point m₀ is said to be a mode of a random variable X if the p.m.f. or the p.d.f. of X has a maximum at
m₀. For the distribution given in problem [7], if m₀ denotes the mode, and μ and σ² the mean and variance of the
corresponding random variable, then show that
m₀ = √(2/π) μ and 2m₀² − μ² = σ².
[10] Let X be a Poisson random variable with parameter λ. Find the probability mass function of Y = X² − 5.
[11] Let X be a Binomial random variable with parameters n and p. Find the probability mass function of
Y= n- X.
[12] Consider the discrete random variable X with the probability mass function
P(X = −2) = 1/5, P(X = −1) = 1/6, P(X = 0) = 1/5,
P(X = 1) = 1/15, P(X = 2) = 10/30, P(X = 3) = 1/30.
[13] Let X be a discrete random variable with probability mass function
P(X = x) = (1/3)(2/3)^x for x = 0, 1, 2, …; 0 otherwise.
Find the distribution of Y = X/(X + 1).
[14] Let X be a random variable with probability mass function
P(X = x) = e^{−1} for x = 0; = e^{−1}/(2|x|!) for x ∈ {±1, ±2, …}; = 0 otherwise.
Find the p.m.f. and distribution of the random variable Y = |X|.
Solution Key
(1) Belt 1: X ∼ Exp with mean α, i.e. f_X(x) = (1/α)e^{−x/α}, x > 0.
Belt 2: Y ∼ Exp with mean 2α, i.e. f_Y(x) = (1/(2α))e^{−x/(2α)}, x > 0.
The system survives past time α iff both belts do; by independence,
P(X > α) P(Y > α) = e^{−1} · e^{−1/2} = e^{−3/2}.
(2)(a) P(X > 5) = 1 − Φ((5 − 10)/6) = 1 − Φ(−5/6) = 1 − [1 − Φ(5/6)]
= Φ(5/6) = 0.7967
(b) P(4 < X < 16) = P((4−10)/6 < Z < (16−10)/6) = P(−1 < Z < 1)
= Φ(1) − Φ(−1) = 2Φ(1) − 1 = ⋯
(c) P(X < 8) = P(Z < (8−10)/6) = Φ(−1/3) = 1 − Φ(1/3) = ⋯
(3) P(X ≤ 0) = 1/2 = P(X ≥ 0) ⟹ μ = 0
P(−1.96/σ ≤ X/σ ≤ 1.96/σ) = 0.95
P(−1.96/σ ≤ Z ≤ 1.96/σ) = 0.95;  Z ∼ N(0, 1)
2Φ(1.96/σ) − 1 = 0.95
Φ(1.96/σ) = 0.975
⟹ 1.96/σ = Φ⁻¹(0.975) = 1.96 ⟹ σ = 1
(4) X: lifetime r.v., X ∼ N(μ, σ²) with μ = 1.4 × 10⁶ hrs, σ² = 3 × 10⁵ hrs,
and p = P(X < 1.8 × 10⁶) = 0.918 (from the normal table).
Y: number of chips out of 10 with lifetime below 1.8 × 10⁶; Y ∼ Bin(10, 0.918).
P(Y ≥ 2) = 1 − P(Y = 0) − P(Y = 1)
= 1 − C(10,0)(0.918)⁰(1 − 0.918)¹⁰ − C(10,1)(0.918)¹(1 − 0.918)⁹
= ⋯
(5) X ∼ N(0, 1).
P(|X| > t) = 1 − P(−t < X < t) = 1 − [Φ(t) − Φ(−t)] = 1 − [2Φ(t) − 1] = 2[1 − Φ(t)]
With the substitution y = x²/2 (so dy = x dx),
1 − Φ(t) = (1/√(2π)) ∫_t^∞ e^{−x²/2} dx = (1/√(2π)) ∫_{t²/2}^∞ e^{−y}/√(2y) dy
≤ (1/t)(1/√(2π)) ∫_{t²/2}^∞ e^{−y} dy = (1/√(2π)) e^{−t²/2}/t
⟹ P(|X| ≥ t) ≤ 2 (1/√(2π)) e^{−t²/2}/t = √(2/π) e^{−t²/2}/t
(6) Let X take values 0, 1, 2, … with P(X = x) = p_x. Then
Σ_{i=1}^∞ P(X ≥ i) = (p₁ + p₂ + p₃ + ⋯) + (p₂ + p₃ + ⋯) + (p₃ + p₄ + ⋯) + ⋯
= p₁ + 2p₂ + 3p₃ + ⋯
= Σ_{i=1}^∞ i p_i = Σ_{i=0}^∞ i P(X = i) = E(X)
(7) d.f.: F(x) = 0 for x < 0; = 1 − e^{−βx²} for x ≥ 0 (β > 0).
p.d.f.: f(x) = 2βx e^{−βx²} for x ≥ 0; 0 otherwise.
E(X) = 2β ∫_0^∞ x² e^{−βx²} dx; with y = x²,
= β ∫_0^∞ y^{1/2} e^{−βy} dy = β · Γ(3/2)/β^{3/2} = (1/2)√(π/β) = μ
E(X²) = 2β ∫_0^∞ x³ e^{−βx²} dx = β ∫_0^∞ y e^{−βy} dy = β · Γ(2)/β² = 1/β
V(X) = E(X²) − (E(X))² = 1/β − μ² = 1/β − π/(4β)
Median m₀: F(m₀) = 1/2 = 1 − F(m₀),
i.e. 2β ∫_0^{m₀} x e^{−βx²} dx = 2β ∫_{m₀}^∞ x e^{−βx²} dx = 1/2
i.e. 1 − e^{−βm₀²} = 1/2
⟹ m₀ = √(log 2 / β)
(8) 1 − Φ(x) = (1/√(2π)) ∫_x^∞ e^{−y²/2} dy = (1/√(2π)) ∫_x^∞ (1/y) · y e^{−y²/2} dy
Integrating by parts,
= (1/√(2π)) [ (1/y)(−e^{−y²/2}) |_x^∞ − ∫_x^∞ (−1/y²)(−e^{−y²/2}) dy ]
= (1/√(2π)) [ (1/x) e^{−x²/2} − ∫_x^∞ (1/y²) e^{−y²/2} dy ],  and the last integral is ≥ 0,
⟹ 1 − Φ(x) ≤ (1/x)(1/√(2π)) e^{−x²/2} = φ(x)/x.
(9) f(x) = 2βx e^{−βx²}, x ≥ 0:
f′(x) = 2β e^{−βx²}(1 − 2βx²), so f′(x) = 0 ⟹ 2βx² = 1 ⟹ x = 1/√(2β)
f″(x) = 2β (d/dx)[e^{−βx²}(1 − 2βx²)]
= 2β [ e^{−βx²}(−4βx) + (1 − 2βx²) e^{−βx²}(−2βx) ]
f″(x)|_{x = 1/√(2β)} = 2β e^{−1/2} (−4β/√(2β)) < 0 since β > 0
⟹ m*, the mode of the distribution, is at 1/√(2β): m* = 1/√(2β).
μ = E(X) = (√π/2)(1/√β) = √(π/2) m*
i.e. μ = √(π/2) m*  (equivalently m* = √(2/π) μ)
and 2m*² − μ² = 2·(2/π)μ² − μ² = (4/π)μ² − μ² = (4/π)(π/4)(1/β) − μ²
i.e. 2m*² − μ² = 1/β − μ² = E(X²) − μ² = V(X) = σ².
(10) X ∼ P(λ), p.m.f. P(X = x) = e^{−λ} λ^x / x! for x = 0, 1, 2, …; 0 otherwise.
P(Y = y) = P(X² − 5 = y) = P(X² = y + 5) = P(X = √(y+5))
p.m.f. of Y: P(Y = y) = e^{−λ} λ^{√(y+5)} / (√(y+5))! for y ∈ 𝒴 = {x² − 5 : x = 0, 1, 2, …};
0 otherwise.
(11) X ∼ B(n, p), p.m.f. P(X = x) = C(n,x) p^x (1−p)^(n−x) for x = 0, 1, …, n; 0 otherwise.
Y = n − X ⟹ 𝒴 = {0, 1, …, n}.
P(Y = y) = P(n − X = y) = P(X = n − y)
⟹ p.m.f. of Y:
P(Y = y) = C(n, n−y) p^(n−y) (1−p)^(n−(n−y)) = C(n, y) (1−p)^y p^(n−y) for y = 0, 1, …, n;
0 otherwise.
⟹ Y ∼ B(n, 1−p).
(12) Y = X² → range 𝒴 = {0, 1, 4, 9}
p.m.f.:
P(Y = 0) = P(X = 0) = 1/5
P(Y = 1) = P(X = −1) + P(X = 1) = 1/6 + 1/15
P(Y = 4) = P(X = −2) + P(X = 2) = 1/5 + 1/3
P(Y = 9) = P(X = 3) = 1/30
(13) P(X = x) = (1/3)(2/3)^x for x = 0, 1, 2, …; 0 otherwise.
Y = X/(X+1) ⟹ X = Y/(1−Y); range space of Y = {0, 1/2, 2/3, 3/4, …}.
P(Y = y) = P(X/(X+1) = y) = P(X = y/(1−y))
= (1/3)(2/3)^{y/(1−y)} for y = 0, 1/2, 2/3, …; 0 otherwise.
(14) p.m.f. of X:
P(X = x) = e^{−1} for x = 0; = e^{−1}/(2|x|!) for x ∈ {±1, ±2, …}; = 0 otherwise.
Y = |X|; 𝒴 = {0, 1, 2, …}.
P(Y = 0) = P(X = 0) = e^{−1}
P(Y = 1) = P(X = −1) + P(X = 1) = e^{−1}/2 + e^{−1}/2 = e^{−1}
P(Y = 2) = P(X = −2) + P(X = 2) = e^{−1}/(2·2!) + e^{−1}/(2·2!) = e^{−1}/2
Similarly, for k = 1, 2, …,
P(Y = k) = P(X = −k) + P(X = k) = e^{−1}/(2·k!) + e^{−1}/(2·k!) = e^{−1}/k!
p.m.f. of Y: P(Y = y) = e^{−1}/y! for y = 0, 1, 2, …; 0 otherwise.
⟹ Y ∼ P(1).
Problem Set-6
1 0<𝑥<1
𝑓𝑋 𝑥 =
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
[2] Let X be a random variable with U(0, θ), θ > 0, distribution. Find the distribution of
Y = min(X, θ/2).
f_X(x) = 1/2 for −1/2 ≤ x ≤ 3/2; 0 otherwise.
[6] According to the Maxwell-Boltzmann law of theoretical physics, the probability density function of V,
the velocity of a gas molecule, is
where β > 0 is a constant which depends on the mass and absolute temperature of the molecule and k > 0
is a normalizing constant. Derive the distribution of the kinetic energy E = mV²/2.
f_X(x) = (3/8)(x + 1)² for −1 < x < 1; 0 otherwise.
[8] Let X be a random variable with U(0, 1) distribution. Find the distribution function of Y= min (X, 1-
X) and the probability density function of Z= (1- Y)/ Y.
[10] Let X be a continuous random variable on (a, b) with p. d. f. f and c. d. f. F. Find the p. d. f. of Z= -
log (F(X)).
f(x) = 6x(1 − x) if 0 ≤ x ≤ 1; 0 otherwise.
p(x, y) = c·xy if (x, y) ∈ {(1,1), (2,1), (2,2), (3,1)}; 0 otherwise.
Find the constant c, the marginal p. m. f. of X and Y and the conditional p. m. f. of X given Y= 2.
[16] 5 cards are drawn at random without replacement from a deck of 52 playing cards. Let the random
variables 𝑋1 , 𝑋2 , 𝑋3 denote the number of spades, the number of hearts, the number of diamonds,
respectively, that appear among the five cards. Find the joint p. m. f. of 𝑋1 , 𝑋2 , 𝑋3 . Also determine
whether the 3 random variables are independent.
[17] Consider a sample of size 3 drawn with replacement from an urn containing 3 white , 2 black and 3
red balls. Let the random variables𝑋1 , 𝑎𝑛𝑑 𝑋2 denote the number of white balls and number of black
balls in the sample, respectively. Determine whether the two random variables are independent.
[18] Let X = (X₁, X₂, X₃)ᵀ be a random vector with joint p.m.f.
f(x₁, x₂, x₃) = 1/4 if (x₁, x₂, x₃) ∈ 𝔛; 0 otherwise,
where 𝔛 = {(1,0,0), (0,1,0), (0,0,1), (1,1,1)}. Show that X₁, X₂, X₃ are pairwise independent but are not
mutually independent.
Solution Key
(1) X ∼ U(0, 1), d.f.:
F_X(x) = 0 for x < 0; = x for 0 ≤ x ≤ 1; = 1 for x > 1.
(a) Y = √X:
d.f. of Y: F_Y(y) = P(√X ≤ y) = P(X ≤ y²)
= 0 for y < 0; = y² for 0 ≤ y ≤ 1; = 1 for y > 1.
p.d.f. of Y: f_Y(y) = 2y for 0 ≤ y ≤ 1; 0 otherwise.
(b) Y = X²:
d.f. of Y: F_Y(y) = P(X² ≤ y) = 0 for y < 0; = P(−√y ≤ X ≤ √y) for 0 ≤ y ≤ 1; = 1 for y > 1.
For 0 ≤ y ≤ 1: P(−√y ≤ X ≤ √y) = P(0 ≤ X ≤ √y) = F_X(√y) = √y
⟹ F_Y(y) = 0 for y < 0; = √y for 0 ≤ y ≤ 1; = 1 for y > 1.
p.d.f. of Y: f_Y(y) = 1/(2√y) for 0 < y ≤ 1; 0 otherwise.
(c) Y = 2X + 3 → (3, 5)
d.f. of Y: F_Y(y) = P(2X + 3 ≤ y)
= 0 for y < 3; = P(X ≤ (y−3)/2) = (y−3)/2 for 3 ≤ y ≤ 5; = 1 for y > 5.
p.d.f.: f_Y(y) = 1/2 for 3 ≤ y ≤ 5; 0 otherwise ⟹ Y ∼ U(3, 5).
(d) Y = −λ log X → (0, ∞)
F_Y(y) = P(Y ≤ y) = P(−λ log X ≤ y) = P(X ≥ e^{−y/λ}) = 1 − P(X < e^{−y/λ})
i.e. F_Y(y) = 0 for y < 0; = 1 − e^{−y/λ} for y ≥ 0.
p.d.f. of Y: f_Y(y) = (1/λ) e^{−y/λ} for y ≥ 0; 0 otherwise,
i.e. Y ∼ Exp(λ) (scale λ).
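Result (d) — that −λ log X is exponential with mean λ when X ∼ U(0,1) — is the basis of inverse-transform sampling and is easy to check by simulation (a sketch I added, not from the original key):

```python
import math
import random

random.seed(0)
lam = 2.0
n = 200_000
# Y = -lam * log(X), X ~ U(0, 1)  ->  Y ~ Exp(scale lam), mean lam
ys = [-lam * math.log(random.random()) for _ in range(n)]
mean = sum(ys) / n
frac = sum(1 for y in ys if y <= 1.0) / n  # compare with F(1) = 1 - e^{-1/lam}
print(mean, frac)
```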
(2) X ∼ U(0, θ); Y = min(X, θ/2) → [0, θ/2] ← range space of Y
F_Y(y) = P(Y ≤ y) = P(min(X, θ/2) ≤ y) = 1 − P(min(X, θ/2) > y) = 1 − P(X > y ∩ θ/2 > y)
Now P(X > y, θ/2 > y) = 1 if y < 0; = 0 if y ≥ θ/2;
and for 0 ≤ y < θ/2, P(X > y, θ/2 > y) = P(X > y) = (1/θ)∫_y^θ dx = (θ − y)/θ
⟹ F_Y(y) = 0 for y < 0; = y/θ for 0 ≤ y < θ/2; = 1 for y ≥ θ/2.
Note: F_Y has a jump discontinuity (of size 1/2) at θ/2, so Y is neither purely discrete nor purely continuous.
(3) f_X(x) = 1/2 for −1/2 ≤ x ≤ 3/2; 0 otherwise, i.e. X ∼ U(−1/2, 3/2).
Y = X² → y ∈ [0, 9/4)
F_Y(y) = P(X² ≤ y) = P(−√y ≤ X ≤ √y)
For y < 0: F_Y(y) = 0; for y > 9/4: F_Y(y) = 1.
For 0 ≤ y ≤ 1/4: F_Y(y) = ∫_{−√y}^{√y} (1/2) dx = √y
For 1/4 < y < 9/4: F_Y(y) = ∫_{−1/2}^{√y} (1/2) dx = (√y + 1/2)/2 = √y/2 + 1/4
⟹ F_Y(y) = 0 for y < 0; = √y for 0 ≤ y ≤ 1/4; = √y/2 + 1/4 for 1/4 < y < 9/4; = 1 for y ≥ 9/4.
p.d.f.: f_Y(y) = 1/(2√y) for 0 < y ≤ 1/4; = 1/(4√y) for 1/4 < y < 9/4; 0 otherwise.
(4) f_X(x) = k x^{p−1}/(1 + x)^{p+q} for x > 0; 0 otherwise.
Y = 1/(1 + X) ⟹ X = (1 − Y)/Y = g⁻¹(Y); Y ∈ (0, 1)
J = dx/dy = −1/y²
f_Y(y) = f_X(g⁻¹(y))|J| for 0 < y < 1; 0 otherwise
= k ((1−y)/y)^{p−1} y^{p+q} (1/y²) for 0 < y < 1   [since 1 + (1−y)/y = 1/y]
i.e. f_Y(y) = k y^{q−1}(1 − y)^{p−1} for 0 < y < 1; 0 otherwise.
k = [B(q, p)]⁻¹ ⟹ Y ∼ Beta(q, p)
(5) f_X(x) = k x^{β−1} e^{−αx^β} for x > 0; 0 otherwise.
Y = X^β ⟹ x = y^{1/β} = g⁻¹(y); J = dx/dy = (1/β) y^{1/β − 1}
f_Y(y) = f_X(g⁻¹(y))|J| for y > 0; 0 otherwise
= k y^{(β−1)/β} e^{−αy} (1/β) y^{1/β − 1} for y > 0
= (k/β) e^{−αy} for y > 0; 0 otherwise.
With k = αβ, f_Y(y) = α e^{−αy} ⟹ Y ∼ Exp(mean 1/α).
(6) f_V(v) = k v² e^{−βv²} for v > 0; 0 otherwise.
E = mV²/2; V² = 2E/m.
∂e/∂v = mv; J = ∂v/∂e = 1/√(2me)
f_E(e) = k (2e/m) e^{−β(2e/m)} (1/√(2me)) for e > 0; 0 otherwise
= c e^{1/2} exp(−(2β/m)e) for e > 0; 0 otherwise,
where c is such that ∫_0^∞ c e^{1/2} exp(−(2β/m)e) de = 1,
i.e. c Γ(3/2)/(2β/m)^{3/2} = 1 ⟹ c = (2β/m)^{3/2}/Γ(3/2)
⟹ E ∼ Gamma(shape 3/2, scale m/(2β)).
(7) f_X(x) = (3/8)(x + 1)² for −1 < x < 1; 0 otherwise.
Y = 1 − X², y ∈ (0, 1); x² = 1 − y ⟹ x = ±√(1 − y)
x ∈ (−1, 0) → x = −√(1−y) = g₁⁻¹(y) → |dx/dy| = 1/(2√(1−y))
x ∈ (0, 1) → x = √(1−y) = g₂⁻¹(y) → |dx/dy| = 1/(2√(1−y))
f_Y(y) = f_X(g₁⁻¹(y))|dx/dy| + f_X(g₂⁻¹(y))|dx/dy| for 0 < y < 1
= (3/8)(1 − √(1−y))² · 1/(2√(1−y)) + (3/8)(1 + √(1−y))² · 1/(2√(1−y))
= (3/(16√(1−y)))[(1 − √(1−y))² + (1 + √(1−y))²]
= (3/(16√(1−y))) · 2(1 + (1 − y))
i.e. f_Y(y) = (3/8)[(1−y)^{−1/2} + (1−y)^{1/2}] for 0 < y < 1; 0 otherwise.
(8) Y = min(X, 1−X) → range of Y: (0, 1/2)
F_Y(y) = 1 − P(X > y, 1−X > y) = 1 − P(y < X < 1−y)
P(y < X < 1−y) = 1 for y ≤ 0; = ∫_y^{1−y} dx = 1 − 2y for 0 < y < 1/2; = 0 for y ≥ 1/2.
⟹ F_Y(y) = 0 for y ≤ 0; = 2y for 0 < y < 1/2; = 1 for y ≥ 1/2
⟹ p.d.f. f_Y(y) = 2 for 0 < y < 1/2; 0 otherwise.
Z = (1−Y)/Y = 1/Y − 1 → range of Z is (1, ∞)
F_Z(z) = P(Z ≤ z); if z ≤ 1, then F_Z(z) = 0.
If z > 1, then P(Z ≤ z) = P(1/Y − 1 ≤ z) = P(1/Y ≤ z + 1)
= P(Y ≥ 1/(z+1)) = 1 − P(Y < 1/(z+1)) = 1 − 2/(z+1)   (using the d.f. of Y)
⟹ F_Z(z) = 0 if z ≤ 1; = 1 − 2/(z+1) if z > 1
⟹ p.d.f. of Z: f_Z(z) = 2/(z+1)² for z > 1; 0 otherwise.
(9) X ∼ N(μ, σ²); Y = 2X − 6; y ∈ (−∞, ∞)
F_Y(y) = P(Y ≤ y) = P(2X − 6 ≤ y) = P(X ≤ (y+6)/2)
= P((X − μ)/σ ≤ ((y+6)/2 − μ)/σ) = Φ((y + 6 − 2μ)/(2σ))
f_Y(y) = φ((y + 6 − 2μ)/(2σ)) · 1/(2σ) for y ∈ (−∞, ∞)
= (1/(√(2π)·2σ)) exp(−(y − (2μ − 6))²/(2·4σ²))
⟹ Y ∼ N(2μ − 6, 4σ²)
(10) X ∼ f_X(x); Z = −log F(X); z ∈ (0, ∞)
z = −log F(x) ⟹ x = F⁻¹(e^{−z}); ∂z/∂x = −f(x)/F(x) ⟹ |J| = |dx/dz| = F(x)/f(x)
p.d.f. of Z: f_Z(z) = f(F⁻¹(e^{−z})) · F(F⁻¹(e^{−z}))/f(F⁻¹(e^{−z})) = F(F⁻¹(e^{−z})) = e^{−z}
⟹ f_Z(z) = e^{−z} for z > 0; 0 otherwise, i.e. Z ∼ Exp(1).
(11) f_X(x) = 6x(1 − x) for 0 ≤ x ≤ 1; 0 otherwise.
F_X(x) = P(X ≤ x) = 0 for x < 0; = ∫_0^x (6y − 6y²) dy = x²(3 − 2x) for 0 ≤ x ≤ 1; = 1 for x > 1.
So F_X(X) = X²(3 − 2X).
General result (as in problem 10): if X is a continuous r.v. with p.d.f. f and d.f. F, then Y = F(X) ∼ U(0, 1):
with y = F(x), x = F⁻¹(y), dy/dx = f(x) ⟹ dx/dy = 1/f(x), so
p.d.f. of Y: f_Y(y) = f_X(F⁻¹(y)) · 1/f_X(F⁻¹(y)) = 1 if 0 < y < 1; 0 otherwise
⟹ Y ∼ U(0, 1).
⟹ For the given distribution, X²(3 − 2X) = F(X) ∼ U(0, 1).
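The probability-integral-transform claim can be tested by simulation (my addition; samples of X with density 6x(1−x) are drawn by rejection sampling under the bound f ≤ 1.5):

```python
import random

random.seed(1)

def sample_x():
    """Rejection-sample X with density 6x(1-x) on (0,1); max density is 1.5."""
    while True:
        x, u = random.random(), random.random()
        if 1.5 * u <= 6 * x * (1 - x):
            return x

n = 100_000
ys = [x * x * (3 - 2 * x) for x in (sample_x() for _ in range(n))]  # Y = F(X)
print(sum(ys) / n)                                  # close to 0.5 (uniform mean)
print(sum(1 for y in ys if y <= 0.25) / n)          # close to 0.25
```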
(12) X ∼ double exponential, f_X(x) = (1/2) e^{−|x|}, −∞ < x < ∞.
Y = |X|; range of Y: (0, ∞)
x ∈ (−∞, 0) → x = −y → |dx/dy| = 1
x ∈ (0, ∞) → x = y → |dx/dy| = 1
⟹ f_Y(y) = f_X(g₁⁻¹(y))|J| + f_X(g₂⁻¹(y))|J|   (the first term from (−∞, 0))
i.e. f_Y(y) = (1/2)e^{−y} + (1/2)e^{−y} for 0 < y < ∞; 0 otherwise
∴ p.d.f. of Y: f_Y(y) = e^{−y} for 0 < y < ∞; 0 otherwise.
(13) X_i ∈ {0, 1, 2, 3} for i = 1, 2, 3; N ∈ {1, 2, 3}.
Three indistinguishable balls are placed in 3 boxes; each of the C(3+3−1, 3) = 10 arrangements (B₁, B₂, B₃) has probability 1/10. X_i = number of balls in box i; N = number of occupied boxes.
N = 1: (3,0,0), (0,3,0), (0,0,3)
N = 2: (2,1,0), (2,0,1), (1,2,0), (0,2,1), (1,0,2), (0,1,2)
N = 3: (1,1,1)
Joint p.m.f. of (N, X₁):
N \ X₁ :   0     1     2     3
1      : 2/10    0     0    1/10
2      : 2/10  2/10  2/10    0
3      :   0   1/10    0     0
Marginal of X₁: 4/10, 3/10, 2/10, 1/10 for x₁ = 0, 1, 2, 3.
By symmetry, the joint p.m.f. of (N, X₂) and the marginal of X₂ are identical.
⟹ C[(1·1) + (2·1) + (2·2) + (3·1)] = 1 ⟹ C = 1/10
Joint p.m.f.:
X \ Y :   1     2   | marginal of X
1     : 1/10    0   |  1/10
2     : 2/10  4/10  |  6/10
3     : 3/10    0   |  3/10
marg Y: 6/10  4/10
Conditional p.m.f. of X given Y = 2: p(x, 2)/p_Y(2) = 1 if x = 2; = 0 otherwise.
(15) Joint p.m.f. p(x, y) = (x + 2y)/18 for x = 1, 2 and y = 1, 2:
X \ Y :  1     2    | marginal of X
1     : 3/18  5/18  |  8/18
2     : 4/18  6/18  | 10/18
marg Y: 7/18 11/18
(b) P(X = 1, Y = 1) = 3/18 ≠ P(X = 1) P(Y = 1) = (8/18)(7/18)
⟹ X and Y are not independent.
(c) P(X < Y) = P(X = 1, Y = 2) = 5/18
P(X + Y > 2) = P(X = 1, Y = 2) + P(X = 2, Y = 1) + P(X = 2, Y = 2) = 15/18
(d) marginal of X: p_X(x) = P(X = x) = (x + 3)/9 for x = 1, 2
p_{Y|X=x}(y) = [(x + 2y)/18] / [(2x + 6)/18] = (x + 2y)/(2x + 6) for y = 1, 2.
(16) Joint p.m.f. of (X₁, X₂, X₃):
p(x₁, x₂, x₃) = C(13,x₁) C(13,x₂) C(13,x₃) C(13, 5−x₁−x₂−x₃) / C(52, 5) for x_i ≥ 0 and Σx_i ≤ 5.
Marginal: p_{X_i}(x) = C(13,x) C(39, 5−x) / C(52, 5) for x = 0, 1, 2, 3, 4, 5.
p_{X₁}(x₁) p_{X₂}(x₂) p_{X₃}(x₃) ≠ p(x₁, x₂, x₃)
⟹ (X₁, X₂, X₃) are not independent.
(17) X₁: # of white balls; X₂: # of black balls; the urn has 3 W, 2 B, 3 R (8 balls), sampled 3 times with replacement:
p(x₁, x₂) = [3!/(x₁! x₂! (3−x₁−x₂)!)] (3/8)^{x₁} (2/8)^{x₂} (3/8)^{3−x₁−x₂} for x_i ≥ 0, x₁ + x₂ ≤ 3,
i.e. (X₁, X₂) ∼ Mult(3; 3/8, 2/8).
p_{X₁}(x₁) = C(3, x₁)(3/8)^{x₁}(5/8)^{3−x₁} for x₁ = 0, 1, 2, 3
p_{X₂}(x₂) = C(3, x₂)(2/8)^{x₂}(6/8)^{3−x₂} for x₂ = 0, 1, 2, 3
i.e. X₁ ∼ B(3, 3/8); X₂ ∼ B(3, 2/8).
p_{X₁}(x₁) p_{X₂}(x₂) ≠ p(x₁, x₂) ⟹ X₁ and X₂ are not independent.
(18) From the joint p.m.f. of (X₁, X₂),
P(X₁=0, X₂=0) = P(X₁=0, X₂=1) = P(X₁=1, X₂=0) = P(X₁=1, X₂=1) = 1/4.
Further, (X₁, X₂) ≡ (X₁, X₃) ≡ (X₂, X₃) in distribution,
and P(X_i = 0) = 1/2 = P(X_i = 1) for i = 1, 2, 3.
⟹ X₁, X₂, X₃ are pairwise independent.
But P(X₁=1, X₂=1, X₃=1) = 1/4 ≠ P(X₁=1) P(X₂=1) P(X₃=1) = 1/8
⟹ X₁, X₂, X₃ are not (mutually) independent.
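The pairwise-but-not-mutual independence can be confirmed by brute-force enumeration over the four support points (a verification sketch I added):

```python
from itertools import product

# Support of (X1, X2, X3), each point with probability 1/4
support = {(1, 0, 0): 0.25, (0, 1, 0): 0.25, (0, 0, 1): 0.25, (1, 1, 1): 0.25}

def p(event):
    """Probability of the set {x in support : event(x)}."""
    return sum(w for x, w in support.items() if event(x))

pairwise = all(
    abs(p(lambda x: x[i] == a and x[j] == b) - p(lambda x: x[i] == a) * p(lambda x: x[j] == b)) < 1e-12
    for i, j in [(0, 1), (0, 2), (1, 2)]
    for a, b in product((0, 1), (0, 1))
)
triple = p(lambda x: x == (1, 1, 1))
print(pairwise, triple)  # pairwise independent, but P(1,1,1) = 1/4 != 1/8
```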
Problem Set-7
[9] Let f(x) and g(y) be two arbitrary p.d.f.s with corresponding distribution functions F(x) and G(y),
respectively. Suppose the joint p.d.f. of X and Y is given by (as used in the solution key below)
h(x, y) = f(x) g(y) [1 + α(2F(x) − 1)(2G(y) − 1)].
Show that the marginal p.d.f.s of X and Y are f(x) and g(y), respectively. Does there exist a value of α for
which the random variables X and Y are independent?
[10] Suppose the marginal density of the random variable X is
f_X(x) = 4x(1 − x²) for 0 < x < 1; 0 otherwise,
and the conditional density of the random variable Y given X = x is
f_{Y|X=x}(y|x) = 2y/(1 − x²) for x < y < 1, 0 < x < 1; 0 otherwise.
[14] Let X, Y and Z be three random variables and a and b be two scalar constants. Prove that
(a) Cov(X, b) = Cov(Y, b) = Cov(Z, b) = 0; (b) Cov(X, aY + b) = a Cov(X, Y);
(c) Cov(X, Y + Z) = Cov(X, Y) + Cov(X, Z); (d) ρ(X, aY + b) = ρ(X, Y) for a > 0.
[15] Let X₁, X₂ and X₃ be three independent random variables, each with variance σ². Define the new random variables
W₁ = X₁,  W₂ = ((√3 − 1)/2) X₁ + ((3 − √3)/2) X₂  and  W₃ = (√2 − 1) X₂ + (2 − √2) X₃.
Find ρ(W₁, W₂), ρ(W₁, W₃) and ρ(W₂, W₃).
[16] Let (X, Y) ∼ N₂(3, 1, 16, 25, 0.6). Find (a) P(3 < Y < 8); (b) P(3 < Y < 8 | X = 7); (c) P(−3 < X < 3) and (d)
P(−3 < X < 3 | Y = 4).
[17] Let (X, Y) ∼ N₂(5, 10, 1, 25, ρ) with ρ > 0. If it is given that P(4 < Y < 16 | X = 5) =
0.954 and Φ(2) = 0.977, find the value of ρ.
[18] Let X₁, X₂, …, X₂₀ be independent random variables with identical distributions, each with mean 2
and variance 3. Define Y = Σ_{i=1}^{15} X_i and Z = Σ_{i=11}^{20} X_i. Find E(Y), E(Z), V(Y), V(Z) and ρ(Y, Z).
[19] Let X and Y be jointly distributed random variables with E(X) = 15, E(Y) = 20, V(X) = 25, V(Y) =
100 and ρ(X, Y) = −0.6. Find ρ(X − Y, 2X − 3Y).
[20] Suppose that the lifetime of light bulbs of a certain kind follows an exponential distribution with p.d.f.
f_X(x) = (1/50) e^{−x/50} for x > 0; 0 otherwise.
Find the probability that among 8 such bulbs, 2 will last less than 40 hours, 3 will last anywhere between
40 and 60 hours, 2 will last anywhere between 60 and 80 hours and 1 will last for more than 80 hours. Find the
expected number of bulbs in a lot of 8 bulbs with lifetime between 60 and 80 hours, and also the expected
number of bulbs in a lot of 8 with lifetime between 60 and 80 hours, given that the number of bulbs with lifetime
anywhere between 40 and 60 hours is 2.
[21] Let the random variables X and Y have the following joint p. m. f. s
(a) P(X= x, Y= y)= 1/3, if (x, y)∈{(0, 0), (1, 1), (2, 2)} and 0 otherwise.
(b) P(X= x, Y= y)= 1/3 , if (x, y)∈ {(0, 2), (1, 1), (2, 0)} and 0 otherwise.
(c) P(X= x, Y= y)= 1/3 , if (x, y) ∈ {(0, 0), (1, 1), (2, 0)} and 0 otherwise.
In each of the above cases find the coefficient of correlation between X and Y.
[22] Let the random variables X and Y have the joint p.m.f.
P(X = x, Y = y) = xy/10, if (x, y) ∈ {(1, 1), (2, 1), (2, 2), (3, 1)} and 0 otherwise.
Find the joint m.g.f. of X and Y and the coefficient of correlation between X and Y. Using the joint m.g.f., find the p.m.f. of Z = X + Y.
[23] Let M_{X,Y}(u, v) denote the joint m.g.f. of (X, Y) and Ψ(u, v) = log M_{X,Y}(u, v). Show that
the partial derivatives of Ψ at (0, 0) give E(X), E(Y), V(X), V(Y) and Cov(X, Y).
Find ρ(X/3 + 2Y/3, 2X/3 + Y/3).
Solution Key
(1) Joint p.d.f. of (X, Y): f_{X,Y}(x, y) = 4xy for 0 < x < 1, 0 < y < 1; 0 otherwise.
Marginal of X: f_X(x) = ∫_0^1 4xy dy = 2x for 0 < x < 1; = 0 otherwise.
Similarly f_Y(y) = 2y for 0 < y < 1; 0 otherwise.
Observe that f_{X,Y}(x, y) = f_X(x) f_Y(y) ⟹ X and Y are independent.
P(0 < X < 1/2, 1/4 < Y < 1) = P(0 < X < 1/2) P(1/4 < Y < 1)
= (∫_0^{1/2} 2x dx)(∫_{1/4}^1 2y dy) = ⋯
P(X + Y < 1) = ∫_0^1 P(X < 1 − y) f_Y(y) dy   (X and Y are independent)
= ∫_0^1 (∫_0^{1−y} 2x dx) 2y dy = ⋯
(2) f_X(x) = ∫_0^∞ e^{−x} e^{−y} dy = e^{−x} for x > 0; = 0 otherwise.
f_Y(y) = e^{−y} for y > 0; = 0 otherwise.
f_{X,Y}(x, y) = f_X(x) f_Y(y) ⟹ X and Y are independent.
(3) f_X(x) = ∫_x^∞ 2e^{−x} e^{−y} dy = 2e^{−x} e^{−x} = 2e^{−2x} for x > 0; = 0 otherwise.
Similarly f_Y(y) = 2e^{−y} ∫_0^y e^{−x} dx = 2e^{−y}(1 − e^{−y}) for y > 0; = 0 otherwise.
f(x, y) ≠ f(x) f(y) ⟹ X and Y are not independent.
(4) f_X(x) = 12x ∫_0^1 (y − y²) dy = 12x (1/2 − 1/3) = 2x for 0 < x < 1; = 0 otherwise.
f_Y(y) = 12y(1 − y) ∫_0^1 x dx = 6y(1 − y) for 0 < y < 1; = 0 otherwise.
f(x, y) = f(x) f(y) ⟹ X and Y are independent.
(5) ∫_0^1 ∫_x^1 f(x, y) dy dx = 1, i.e. C ∫_0^1 x² (∫_x^1 y dy) dx = 1
⟹ C ∫_0^1 x² (1 − x²)/2 dx = 1
⟹ (C/2)(1/3 − 1/5) = 1 ⟹ C = 15
(b) f_X(x) = 15x² ∫_x^1 y dy = (15/2) x²(1 − x²) for 0 < x < 1; 0 otherwise.
f_Y(y) = 15y ∫_0^y x² dx = 5y⁴ for 0 < y < 1; 0 otherwise.
(c) P(X + Y ≤ 1) = 15 ∬_{x+y≤1, x<y} x² y dy dx
= 15 ∫_0^{1/2} x² (∫_x^{1−x} y dy) dx = 15 ∫_0^{1/2} x² [(1−x)² − x²]/2 dx
= ⋯ = 15/192.
Alt: P(X + Y ≤ 1) = 15 ∫_0^{1/2} y (∫_0^y x² dx) dy + 15 ∫_{1/2}^1 y (∫_0^{1−y} x² dx) dy
= ⋯ = 15/(15·32) + 15/(10·32) = 15/192.
(6) f_X(x) = ∫_0^{1−x} f_{X,Y}(x, y) dy = 6 ∫_0^{1−x} (1 − x − y) dy = 6[(1−x)y − y²/2]_0^{1−x}
= 3(1 − x)² for 0 < x < 1; 0 otherwise.
Similarly, by symmetry, f_Y(y) = 3(1 − y)² for 0 < y < 1; 0 otherwise.
P(2X + 3Y < 1) = 6 ∫_0^{1/2} ∫_0^{(1−2x)/3} (1 − x − y) dy dx
= 6 ∫_0^{1/2} [(1−x)y − y²/2]_0^{(1−2x)/3} dx
= 6 ∫_0^{1/2} [(1−x)(1−2x)/3 − (1/2)((1−2x)/3)²] dx
= 6 ∫_0^{1/2} [(1 + 2x² − 3x)/3 − (1 + 4x² − 4x)/18] dx
= 6 ∫_0^{1/2} (8x² − 14x + 5)/18 dx
= (6/18)[8x³/3 − 7x² + 5x]_0^{1/2}
= (6/18)[(8/3)(1/8) − 7/4 + 5/2] = 13/36
Alt: P(2X + 3Y < 1) = 6 ∫_0^{1/3} ∫_0^{(1−3y)/2} (1 − y − x) dx dy
= 6 ∫_0^{1/3} [(1−y)x − x²/2]_0^{(1−3y)/2} dy
= 6 ∫_0^{1/3} [(1−y)(1−3y)/2 − (1/2)((1−3y)/2)²] dy = ⋯ = 13/36
(7) f_X(x) = ∫_0^1 f(x, y) dy = ∫_0^1 (x + y) dy = x + 1/2 for 0 < x < 1; 0 otherwise.
f_{Y|X}(y|x) = f(x, y)/f_X(x) = (x + y)/(x + 1/2) = 2(x + y)/(2x + 1) for 0 < y < 1.
E(Y|X = x) = ∫_0^1 y · 2(x + y)/(2x + 1) dy = [2/(2x + 1)] ∫_0^1 (xy + y²) dy
= [2/(2x + 1)](x/2 + 1/3) = (3x + 2)/(3(2x + 1)) = (3x + 2)/(6x + 3)
E(Y²|X = x) = ∫_0^1 y² · 2(x + y)/(2x + 1) dy = [2/(2x + 1)] ∫_0^1 (y²x + y³) dy
= [2/(2x + 1)](x/3 + 1/4) = (4x + 3)/(6(2x + 1))
V(Y|X = x) = E(Y²|X) − E²(Y|X)
= (4x + 3)/(6(2x + 1)) − [(3x + 2)/(3(2x + 1))]² = ⋯
(8) f(x, y) = f(x|y) g(y) = c·d·x y², 0 < x < y, 0 < y < 1, with g(y) = d y⁴.
∫_0^1 g(y) dy = 1 ⟹ d ∫_0^1 y⁴ dy = 1 ⟹ d = 5
⟹ f(x, y) = 5c x y², 0 < x < y < 1
⟹ 5c ∫_0^1 y² ∫_0^y x dx dy = 1 ⟹ (5c/2) ∫_0^1 y⁴ dy = 1 ⟹ c = 2
⟹ f(x, y) = 10x y² for 0 < x < y < 1; = 0 otherwise.
f_X(x) = ∫_x^1 10x y² dy = (10/3) x (1 − x³) for 0 < x < 1; 0 otherwise.
P(0.25 < X < 0.5) = (10/3) ∫_{1/4}^{1/2} (x − x⁴) dx = ⋯
f_Y(y) = ∫_0^y 10x y² dx = 5y⁴, so f_{X|Y=y}(x) = 10xy²/(5y⁴) = 2x/y² for 0 < x < y.
P(1/4 < X < 1/2 | Y = 0.625) = ∫_{1/4}^{1/2} f_{X|Y=0.625}(x) dx
= ∫_{1/4}^{1/2} 2x/(0.625)² dx = [1/(0.625)²](1/4 − 1/16) = ⋯
(9) Marginal of X from h(x, y) = f(x) g(y)[1 + α(2F(x) − 1)(2G(y) − 1)]:
f_X(x) = ∫_{−∞}^∞ h(x, y) dy
= f(x) ∫_{−∞}^∞ g(y) dy + f(x) α (2F(x) − 1) ∫_{−∞}^∞ g(y)(2G(y) − 1) dy
= f(x) · 1 + f(x) α (2F(x) − 1) ∫_{−∞}^∞ g(y)(2G(y) − 1) dy
Now, with u = G(y): ∫_{−∞}^∞ g(y)(2G(y) − 1) dy = ∫_0^1 (2u − 1) du = [u² − u]_0^1 = 0
⟹ f_X(x) = f(x) + 0.
Similarly f_Y(y) = ∫_{−∞}^∞ h(x, y) dx = g(y).
h(x, y) = f_X(x) f_Y(y) = f(x) g(y) iff α = 0, so X and Y are independent iff α = 0.
(10) f_{X,Y}(x, y) = f_{Y|X=x}(y|x) f_X(x) = [2y/(1 − x²)] · 4x(1 − x²)
= 8xy for 0 < x < y < 1; 0 otherwise.
Marginal p.d.f. of Y: f_Y(y) = 8y ∫_0^y x dx = 4y³ for 0 < y < 1; 0 otherwise.
Conditional p.d.f. of X given Y: f_{X|Y=y}(x|y) = 8xy/(4y³) = 2x/y³·y = 2x/y² for 0 < x < y, 0 < y < 1; 0 otherwise.
E(X|Y = y) = (2/y²) ∫_0^y x² dx = (2/y²)(y³/3) = 2y/3 ⟹ E(X|Y = 1/2) = 1/3
E(X²|Y = y) = (2/y²) ∫_0^y x³ dx = (2/y²)(y⁴/4) = y²/2 ⟹ E(X²|Y = 1/2) = 1/8
V(X|Y = 1/2) = E(X²|Y = 1/2) − E²(X|Y = 1/2) = 1/8 − 1/9 = 1/72.
(11) Joint m.g.f.:
M_{X₁,X₂}(t₁, t₂) = E(e^{t₁X₁ + t₂X₂}) = ∫_0^∞ ∫_0^∞ e^{t₁x₁ + t₂x₂} e^{−(x₁ + x₂)} dx₂ dx₁
= (∫_0^∞ e^{−x₁(1 − t₁)} dx₁)(∫_0^∞ e^{−x₂(1 − t₂)} dx₂)
= (1 − t₁)⁻¹ (1 − t₂)⁻¹ if t₁, t₂ < 1
m.g.f. of Z = X₁ + X₂:
M_Z(t) = E(e^{t(X₁ + X₂)}) = (1 − t)⁻² for t < 1
E(Z) = (d/dt)M_Z(t)|_{t=0} = 2(1 − t)⁻³|_{t=0} = 2
E(Z²) = (d²/dt²)M_Z(t)|_{t=0} = 6(1 − t)⁻⁴|_{t=0} = 6 ⟹ V(Z) = 2
(12) M_{X₁,X₂}(t₁, t₂) = E(e^{t₁X₁ + t₂X₂}) = E[E(e^{t₁X₁ + t₂X₂} | X₁)] = E[e^{t₁X₁} E(e^{t₂X₂} | X₁)]
Since X₂ | X₁ = x₁ ∼ N(μ₂ + ρ(σ₂/σ₁)(x₁ − μ₁), σ₂²(1 − ρ²)),
E(e^{t₂X₂} | X₁ = x₁) = exp[t₂(μ₂ + ρ(σ₂/σ₁)(x₁ − μ₁)) + (t₂²/2) σ₂²(1 − ρ²)]   ← conditional m.g.f. of X₂ given X₁
M_{X₁,X₂}(t₁, t₂) = exp[t₂μ₂ + (t₂²/2)σ₂²(1 − ρ²) − t₂ρ(σ₂/σ₁)μ₁] · E[e^{(t₁ + t₂ρσ₂/σ₁)X₁}]
= exp[t₂μ₂ + (t₂²/2)σ₂²(1 − ρ²) − t₂ρ(σ₂/σ₁)μ₁] · exp[(t₁ + t₂ρσ₂/σ₁)μ₁ + (σ₁²/2)(t₁ + t₂ρσ₂/σ₁)²]
= exp[t₂μ₂ + (t₂²/2)σ₂²(1 − ρ²) + t₁μ₁ + (σ₁²/2)(t₁² + t₂²ρ²σ₂²/σ₁² + 2t₁t₂ρσ₂/σ₁)]
= exp[t₂μ₂ + (t₂²/2)σ₂² + t₁μ₁ + (t₁²/2)σ₁² + t₁t₂ρσ₁σ₂]
= exp[t₁μ₁ + t₂μ₂ + (1/2)(t₁²σ₁² + t₂²σ₂² + 2t₁t₂σ₁σ₂ρ)]
∂M/∂t₁|_{t₁=t₂=0} = μ₁; similarly ∂M/∂t₂|_{t₁=t₂=0} = μ₂, and V(X₁) = σ₁², V(X₂) = σ₂².
E(X₁X₂) = ∂²M/∂t₁∂t₂|_{t₁=t₂=0} = ρσ₁σ₂ + μ₁μ₂
⟹ Cov(X₁, X₂) = ρσ₁σ₂ + μ₁μ₂ − μ₁μ₂ = ρσ₁σ₂
⟹ corr(X₁, X₂) = ρ.
(13) f(x, y) = 2 for 0 < x < y < 1; 0 otherwise.
f_X(x) = ∫_x^1 2 dy = 2(1 − x) for 0 < x < 1; = 0 otherwise.
f_Y(y) = ∫_0^y 2 dx = 2y for 0 < y < 1; = 0 otherwise.
f_{Y|X=x}(y) = 2/(2(1 − x)) = 1/(1 − x) for x < y < 1; 0 otherwise.
f_{X|Y=y}(x) = 2/(2y) = 1/y for 0 < x < y; 0 otherwise.
E(Y|X = x) = ∫_x^1 y/(1 − x) dy = (1 − x²)/(2(1 − x)) = (1 + x)/2
E(Y²|X = x) = ∫_x^1 y²/(1 − x) dy = (1/3)(1 − x³)/(1 − x)
⟹ V(Y|X = x) = E(Y²|X) − E²(Y|X) = (1 − x³)/(3(1 − x)) − ((1 + x)/2)² = (1 − x)²/12
Similarly E(X|Y), E(X²|Y) and hence V(X|Y).
(14)(a) Cov(X, b) = E[(X − E(X))(b − E(b))] = 0 = Cov(Y, b) = Cov(Z, b)
(b) Cov(X, aY + b) = E[(X − E(X))(aY + b − E(aY + b))]
= E[(X − E(X))(aY + b − aE(Y) − b)] = a Cov(X, Y)
(c) Cov(X, Y + Z) = E[(X − E(X))(Y + Z − E(Y) − E(Z))]
= E[(X − E(X))((Y − E(Y)) + (Z − E(Z)))] = Cov(X, Y) + Cov(X, Z)
(d) Cov(X, aY + b) = a Cov(X, Y), so for a > 0,
ρ(X, aY + b) = Cov(X, aY + b)/[V(X) V(aY + b)]^{1/2} = a Cov(X, Y)/[V(X) a² V(Y)]^{1/2} = ρ(X, Y)
(15) Cov(W₁, W₂) = Cov(X₁, ((√3 − 1)/2)X₁ + ((3 − √3)/2)X₂)
= ((√3 − 1)/2) V(X₁) + ((3 − √3)/2) Cov(X₁, X₂) = ((√3 − 1)/2) σ²
V(W₁) = σ² and V(W₂) = [((√3 − 1)/2)² + ((3 − √3)/2)²] σ² = (√3 − 1)² σ²
⟹ ρ(W₁, W₂) = ((√3 − 1)/2)σ² / [σ · (√3 − 1)σ] = 1/2
Similarly ρ(W₁, W₃) and ρ(W₂, W₃):
Cov(W₁, W₃) = Cov(X₁, (√2 − 1)X₂ + (2 − √2)X₃) = 0
⟹ ρ(W₁, W₃) = 0
(16)(a) Y ∼ N(1, 25):
P(3 < Y < 8) = P((3−1)/5 < (Y−1)/5 < (8−1)/5) = Φ(7/5) − Φ(2/5) = ⋯ (from table)
(b) Y | X = 7 ∼ N(1 + 0.6·(5/4)(7 − 3), 25(1 − 0.6²)) = N(4, 16):
P(3 < Y < 8 | X = 7) = P((3−4)/4 < (Y−4)/4 < (8−4)/4 | X = 7)
= Φ(1) − Φ(−0.25) = ⋯
(c) X ∼ N(3, 16):
P(−3 < X < 3) = P((−3−3)/4 < (X−3)/4 < (3−3)/4) = Φ(0) − Φ(−6/4) = ⋯
(d) X | Y = 4 ∼ N(3 + 0.6·(4/5)(4 − 1), 16(1 − 0.6²)) = N(4.44, (3.2)²):
P(−3 < X < 3 | Y = 4) = P((−3 − 4.44)/3.2 < (X − 4.44)/3.2 < (3 − 4.44)/3.2 | Y = 4)
= Φ(−1.44/3.2) − Φ(−7.44/3.2) = ⋯
(17) (X, Y) ∼ N₂(5, 10, 1, 25, ρ); ρ > 0.
Y | X = 5 ∼ N(10, 25(1 − ρ²))
P(4 < Y < 16 | X = 5) = P((4−10)/(5√(1−ρ²)) < Z < (16−10)/(5√(1−ρ²)))
= Φ(6/(5√(1−ρ²))) − Φ(−6/(5√(1−ρ²)))
= 2Φ(6/(5√(1−ρ²))) − 1 = 0.954   (given condition)
⟹ Φ(6/(5√(1−ρ²))) = 0.977 = Φ(2)
⟹ 6/(5√(1−ρ²)) = 2 ⟹ 1 − ρ² = 0.36 ⟹ ρ = 0.8 (as ρ > 0)
(18) E(Y) = Σ_{i=1}^{15} E(X_i) = 30; E(Z) = Σ_{i=11}^{20} E(X_i) = 20
V(Y) = 15 V(X_i) = 45; V(Z) = 10 × 3 = 30
Cov(Y, Z) = Cov(Σ_{i=1}^{15} X_i, Σ_{i=11}^{20} X_i) = 5 V(X_i) = 15
ρ(Y, Z) = 15/[45 × 30]^{1/2} = 15/√1350 = 1/√6
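The overlap correlation 1/√6 ≈ 0.408 can be confirmed by Monte Carlo simulation (my addition, drawing the X_i as normals with the stated mean and variance purely for illustration; the result depends only on the first two moments):

```python
import math
import random

random.seed(7)
trials = 50_000
sy = sz = syy = szz = syz = 0.0
for _ in range(trials):
    xs = [random.gauss(2, math.sqrt(3)) for _ in range(20)]
    Y = sum(xs[:15])    # X_1 + ... + X_15
    Z = sum(xs[10:])    # X_11 + ... + X_20 (overlap of 5 terms)
    sy += Y; sz += Z; syy += Y * Y; szz += Z * Z; syz += Y * Z
my, mz = sy / trials, sz / trials
rho = (syz / trials - my * mz) / math.sqrt((syy / trials - my * my) * (szz / trials - mz * mz))
print(rho)  # close to 1/sqrt(6)
```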
(19) U = X − Y; V = 2X − 3Y
E(U) = −5; E(V) = 2 × 15 − 3 × 20 = −30
V(U) = V(X) + V(Y) − 2 Cov(X, Y); V(V) = 4V(X) + 9V(Y) − 12 Cov(X, Y)
Now ρ(X, Y) = −0.6 = Cov(X, Y)/√(25 × 100) = Cov(X, Y)/50 ⟹ Cov(X, Y) = −30
⟹ Cov(U, V) = Cov(X − Y, 2X − 3Y) = 2V(X) − 5 Cov(X, Y) + 3V(Y) = 50 + 150 + 300 = 500,
V(U) = 25 + 100 + 60 = 185, V(V) = 100 + 900 + 360 = 1360,
⟹ ρ(U, V) = 500/√(185 × 1360) ≈ 0.997
(20) F_X(x) = 1 − e^{−x/50} for x > 0; 0 otherwise.
Y₁: # of bulbs out of 8 with lifetime < 40
Y₂: ……………………………… ≥ 40 and < 60
Y₃: ……………………………… ≥ 60 and ≤ 80
Y₄: ……………………………… > 80
P(X < 40) = F_X(40) = 1 − e^{−40/50} = p₁, say
P(40 ≤ X < 60) = F_X(60) − F_X(40) = e^{−40/50} − e^{−60/50} = p₂, say
P(60 ≤ X ≤ 80) = F_X(80) − F_X(60) = e^{−60/50} − e^{−80/50} = p₃, say
P(X > 80) = 1 − p₁ − p₂ − p₃ = e^{−80/50}
The joint p.m.f. of (Y₁, Y₂, Y₃) is multinomial (8; p₁, p₂, p₃):
⟹ P(Y₁ = 2, Y₂ = 3, Y₃ = 2) = [8!/(2! 3! 2! 1!)] p₁² p₂³ p₃² (1 − p₁ − p₂ − p₃)
Marginal distribution: Y₃ ∼ Bin(8, p₃ = e^{−60/50} − e^{−80/50})
E(Y₃) = 8(e^{−60/50} − e^{−80/50})
Conditionally, Y₃ | Y₂ = y₂ ∼ Bin(8 − y₂, p₃/(1 − p₂))
⟹ E(Y₃ | Y₂ = 2) = (8 − 2) · (e^{−60/50} − e^{−80/50}) / (1 − (e^{−40/50} − e^{−60/50}))
(21)(a) joint p.m.f. with mass 1/3 at (0,0), (1,1), (2,2):
X \ Y :  0    1    2
0     : 1/3   0    0
1     :  0   1/3   0
2     :  0    0   1/3
(marginals of X and Y are each 1/3, 1/3, 1/3)
E(X) = 1 = E(Y); V(X) = E(X²) − 1 = 5/3 − 1 = 2/3 = V(Y)
E(XY) = (0·0 + 1·1 + 2·2) × 1/3 = 5/3
Cov(X, Y) = 5/3 − 1 = 2/3 ⟹ ρ(X, Y) = 1
(b) mass 1/3 at (0,2), (1,1), (2,0):
X \ Y :  0    1    2
0     :  0    0   1/3
1     :  0   1/3   0
2     : 1/3   0    0
Similarly ⟹ ρ(X, Y) = −1
(c) mass 1/3 at (0,0), (1,1), (2,0):
X \ Y :  0    1
0     : 1/3   0
1     :  0   1/3
2     : 1/3   0
E(X) = 1, E(Y) = 1/3, E(XY) = 1/3 ⟹ Cov(X, Y) = 0 ⟹ ρ(X, Y) = 0.
(22) joint p.m.f. P(X = x, Y = y) = xy/10:
X \ Y :   1     2   | marginal of X
1     : 1/10    0   |  1/10
2     : 2/10  4/10  |  6/10
3     : 3/10    0   |  3/10
marg Y: 6/10  4/10
E(X) = 1(1/10) + 2(6/10) + 3(3/10) = 22/10
E(Y) = 1(6/10) + 2(4/10) = 14/10
E(X²) = 1(1/10) + 4(6/10) + 9(3/10) = 52/10
E(Y²) = 1(6/10) + 4(4/10) = 22/10
V(X) = 52/10 − (22/10)² = ⋯; V(Y) = 22/10 − (14/10)² = ⋯
E(XY) = (1×1)(1/10) + (2×1)(2/10) + (2×2)(4/10) + (3×1)(3/10) = 30/10 = 3
Cov(X, Y) = E(XY) − E(X)E(Y) = 3 − (22/10)(14/10) = ⋯
ρ(X, Y) = Cov(X, Y)/[V(X) V(Y)]^{1/2} = ⋯
Joint m.g.f. of (X, Y):
M_{X,Y}(t₁, t₂) = Σ_{x,y} e^{t₁x + t₂y} P(X = x, Y = y)
= (1/10)e^{t₁+t₂} + (2/10)e^{2t₁+t₂} + (4/10)e^{2(t₁+t₂)} + (3/10)e^{3t₁+t₂}
Setting t₁ = t₂ = t gives M_Z(t) = (1/10)e^{2t} + (2/10)e^{3t} + (7/10)e^{4t}
⟹ P(Z = 2) = 1/10, P(Z = 3) = 2/10, P(Z = 4) = 7/10 for Z = X + Y.
(23) M_{X,Y}(u, v) = E(e^{uX + vY}); Ψ(u, v) = log M_{X,Y}(u, v)
∂Ψ/∂u = (1/M_{X,Y}(u, v)) ∂M_{X,Y}(u, v)/∂u
∂Ψ/∂u |_{u=0,v=0} = (1/M(0, 0)) E(X) = E(X)
Similarly ∂Ψ(0, 0)/∂v = ∂Ψ/∂v |_{u=0,v=0} = E(Y)
∂²Ψ/∂u² = (1/M) ∂²M/∂u² − (1/M²)(∂M/∂u)²
⟹ ∂²Ψ/∂u² |_{u=0,v=0} = E(X²) − E²(X) = V(X)
Similarly ∂²Ψ/∂v² |_{u=0,v=0} = V(Y)
∂²Ψ/∂v∂u = (1/M) ∂²M/∂v∂u − (1/M²)(∂M/∂v)(∂M/∂u)
⟹ ∂²Ψ/∂v∂u |_{u=0,v=0} = E(XY) − E(X)E(Y)
i.e. ∂²Ψ/∂v∂u |_{u=0,v=0} = Cov(X, Y).
(24) Marginal p.d.f. of X:
f_X(x) = ∫_{−∞}^∞ f_{X,Y}(x, y) dy = (1/2)∫_{−∞}^∞ f_ρ(x, y) dy + (1/2)∫_{−∞}^∞ f_{−ρ}(x, y) dy
= (1/2)φ(x) + (1/2)φ(x)   (φ: p.d.f. of N(0, 1))
= φ(x) ⟹ X ∼ N(0, 1)
Similarly f_Y(y) = φ(y) ⟹ Y ∼ N(0, 1)
E(XY) = ∬ xy f_{X,Y}(x, y) dx dy = (1/2)∬ xy f_ρ(x, y) dx dy + (1/2)∬ xy f_{−ρ}(x, y) dx dy
= (1/2)ρ + (1/2)(−ρ) = 0
Cov(X, Y) = E(XY) − E(X)E(Y) = 0 − 0·0 = 0
ρ(X, Y) = 0 ⟹ X and Y are uncorrelated.
Since f_{X,Y}(x, y) ≠ f_X(x) f_Y(y), X and Y are not independent.
(25) ∫_0^1 ∫_{−x}^x k dy dx = 1 ⟹ k ∫_0^1 2x dx = 1 ⟹ k = 1
Marginal of X: f_X(x) = ∫_{−x}^x dy = 2x for 0 < x < 1; 0 otherwise.
Marginal of Y: f_Y(y) = ∫_{|y|}^1 dx = 1 − |y| for −1 < y < 1; 0 otherwise.
Conditional distribution of Y|X = x: f_{Y|X=x}(y) = 1/(2x) for −x < y < x; 0 otherwise
E(Y|X = x) = ∫_{−x}^x y/(2x) dy = 0
Similarly f_{X|Y=y}(x) = 1/(1 − |y|) for |y| < x < 1; 0 otherwise,
E(X|Y = y) = ∫_{|y|}^1 x/(1 − |y|) dx = (1 − y²)/(2(1 − |y|)) = (1 + |y|)/2
f_{X,Y}(x, y) = 1 ≠ f_X(x) f_Y(y) ⟹ X and Y are not independent.
E(XY) = ∫_0^1 ∫_{−x}^x xy dy dx = 0; E(Y) = E[E(Y|X)] = 0
⟹ Cov(X, Y) = 0 and ρ(X, Y) = 0 ⟹ X and Y are uncorrelated.
(26) M_{X,Y}(s, t) = a(e^{s+t} + 1) + b(e^s + e^t), with a, b > 0 and a + b = 1/2.
E(X) = ∂/∂s [a(e^{s+t} + 1) + b(e^s + e^t)]|_{s=t=0} = (a e^t e^s + b e^s)|_{s=t=0} = a + b = 1/2 = E(Y)
E(X²) = ∂²/∂s² [a(e^{s+t} + 1) + b(e^s + e^t)]|_{s=t=0} = (a e^t e^s + b e^s)|_{s=t=0} = a + b = 1/2 = E(Y²)
V(X) = V(Y) = 1/2 − 1/4 = 1/4
E(XY) = ∂²/∂t∂s [a(e^{s+t} + 1) + b(e^s + e^t)]|_{s=t=0} = a e^t e^s |_{s=t=0} = a
∴ Cov(X, Y) = a − 1/4 ⟹ ρ(X, Y) = (a − 1/4)/(1/4) = 4a − 1.
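The m.g.f. in (26) corresponds to the p.m.f. P(0,0) = P(1,1) = a and P(1,0) = P(0,1) = b on {0,1}², so ρ = 4a − 1 can be checked by direct enumeration (a sketch I added, with a = 0.35 chosen arbitrarily):

```python
# Joint p.m.f. implied by M(s,t) = a(e^{s+t} + 1) + b(e^s + e^t), a + b = 1/2
a = 0.35
b = 0.5 - a
pmf = {(0, 0): a, (1, 1): a, (1, 0): b, (0, 1): b}
ex = sum(x * w for (x, y), w in pmf.items())
ey = sum(y * w for (x, y), w in pmf.items())
exy = sum(x * y * w for (x, y), w in pmf.items())
vx = sum(x * x * w for (x, y), w in pmf.items()) - ex * ex
rho = (exy - ex * ey) / vx      # V(X) = V(Y) here, so this is the correlation
print(rho, 4 * a - 1)           # the two agree
```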
(27) Var(X/3 + 2Y/3) = Var(2X/3 + Y/3)   ← since V(X) = V(Y)
= (1/9)V(X) + (4/9)V(Y) + (4/9)Cov(X, Y)
= 2/9 + 8/9 + (4/9)(2/3) = 2/9 + 8/9 + 8/27 = 38/27
(here V(X) = V(Y) = 2 and Cov(X, Y) = 2/3, as obtained from the given Ψ of problem [23])
Cov(X/3 + 2Y/3, 2X/3 + Y/3)
= (2/9)V(X) + (1/9)Cov(X, Y) + (4/9)Cov(X, Y) + (2/9)V(Y)
= 4/9 + 2/27 + 8/27 + 4/9 = 34/27
⟹ ρ(X/3 + 2Y/3, 2X/3 + Y/3) = (34/27)/(38/27) = 34/38.
Problem Set-8
[1] The joint probability mass function of the random variables X₁ and X₂ is given by
P(X₁ = x₁, X₂ = x₂) = (2/3)^{x₁+x₂} (1/3)^{2−x₁−x₂} if (x₁, x₂) ∈ {(0,0), (0,1), (1,0), (1,1)};
0 otherwise.
[5] Let 𝑋1 , 𝑋2 , 𝑋3 𝑎𝑛𝑑 𝑋4 be four mutually independent random variables each having probability
density function
f(x) = 3(1 − x)² for 0 < x < 1; 0 otherwise.
[6] Suppose X₁, …, X_n are n independent random variables, where X_i (i = 1, …, n) has the exponential
distribution Exp(α_i), with probability density function
f_{X_i}(x) = α_i e^{−α_i x} for x > 0; 0 otherwise.
[7] Let X and Y be the respective arrival times of two friends A and B who agree to meet at a spot and
wait for the other only for t minutes. Supposing that X and Y are i. i. d. Exp (𝜆). Show that the probability
of A and B meeting each other is 1 − 𝑒 −𝜆𝑡 .
[9] Let X and Y be i. i. d. N(0, 1). Find the probability density function of Z= X/ Y.
[10] Let X and Y be independent random variables with probability density functions
f_X(x) = x^{α₁−1} e^{−x/θ} / (Γ(α₁) θ^{α₁}) for x > 0; 0 otherwise, and
f_Y(y) = y^{α₂−1} e^{−y/θ} / (Γ(α₂) θ^{α₂}) for y > 0; 0 otherwise.
Find the distributions of U = X + Y and V = X/(X + Y), and also show that they are independently distributed.
[11] Let X and Y be i.i.d. random variables with common probability density function
f(x) = c/(1 + x⁴) for −∞ < x < ∞; 0 otherwise.
[12] Let X and Y be i.i.d. N(0, 1). Define the random variables R and Θ by X = R cos Θ, Y = R sin Θ.
(a) Show that R and Θ are independent, with R²/2 ∼ Exp(1) and Θ ∼ U(0, 2π).
(b) Show that X² + Y² and X/Y are independently distributed.
f(x) = e^{−x} for x > 0; 0 otherwise.
[15] Let X₁, X₂ and X₃ be three mutually independent chi-square random variables with n₁, n₂, n₃
degrees of freedom respectively, i.e. X₁ ∼ χ²_{n₁}, X₂ ∼ χ²_{n₂} and X₃ ∼ χ²_{n₃}.
(a) Show that Y₁ = X₁/X₂ and Y₂ = X₁ + X₂ are independent, and that Y₂ is a chi-square random variable
with n₁ + n₂ degrees of freedom.
(b) Consider
Z₁ = (X₁/n₁)/(X₂/n₂)  and  Z₂ = (X₃/n₃)/((X₁ + X₂)/(n₁ + n₂)).
[16] Let X and Y be independent random variables such that X ∼ N(0, 1) and Y ∼ χ²_n.
Find the probability density function of T = X/√(Y/n).
[17] Let X₁, …, X_n be a random sample from the N(0, 1) distribution. Find the m.g.f. of Y = Σ_{i=1}^n X_i²
and identify its distribution. Further, suppose X_{n+1} is another observation from N(0, 1), independent
of X₁, …, X_n. Derive the distribution of X_{n+1}/√(Y/n).
[18] X and Y are i.i.d. random variables, each having a geometric distribution with the following p.m.f.:
P(X = x) = (1 − p)^x p for x = 0, 1, …; 0 otherwise.
Identify the distribution of X/(X + Y). Further, find the p.m.f. of Z = min(X, Y).
Solution Key
(1) 𝑃 𝑌1 = 𝑦1 , 𝑌2 = 𝑦2 = 𝑃 𝑋1 − 𝑋2 = 𝑦1 , 𝑋1 + 𝑋2 = 𝑦2
𝑦1 + 𝑦2 𝑦1 − 𝑦2
= 𝑃 𝑋1 = , 𝑋2 =
2 2
0 2−0
2 1 𝑦1 + 𝑦2 𝑦1 − 𝑦2
𝑖𝑓 = 0, = 0, 𝑖. 𝑒. 𝑦1 = 0, 𝑦2 = 0
3 3 2 2
2 1 1 2−1 𝑦1 + 𝑦2 𝑦1 − 𝑦2
𝑖𝑓 =1, = 0, 𝑖. 𝑒. 𝑦1 = 1, 𝑦2 = 1
= 3 1 3 2−1 2 2
2 1 𝑦1 + 𝑦2 𝑦1 − 𝑦2
𝑖𝑓 = 0, = 1, 𝑖. 𝑒. 𝑦1 = −1, 𝑦2 = 1
3 3 2 2
2 2 1 2−2 𝑦1 + 𝑦2 𝑦1 − 𝑦2
𝑖𝑓 =1, = 1, 𝑖. 𝑒. 𝑦1 = 0, 𝑦2 = 2
3 3 2 2
𝑖. 𝑒.
1
9
𝑖𝑓 𝑦1 , 𝑦2 = (0, 0)
2
𝑖𝑓 𝑦1 , 𝑦2 = −1, 1 , (1, 1)
𝑃 𝑌1 = 𝑦1 , 𝑌2 = 𝑦2 = 9
4
9
𝑖𝑓 𝑦1 , 𝑦2 = (0, 2)
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
𝑌2 0 1 2
𝑌1
0 1 4
9
0 9
-1 2
1 0 9
0
2
0 9
0
5 1
𝑦1 = 0 𝑦 =0
9 9 2
2 4
𝑦1 = −1 𝑃 𝑌 = 𝑦 = 𝑦 =1
𝑃 𝑌1 = 𝑦1 = 9 2 2 9 2
2 4
𝑦1 = 1 𝑦 =2
9 9 2
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
𝑠𝑖𝑛𝑐𝑒 𝑃 𝑌1 = 𝑦1 , 𝑌2 = 𝑦2 ≠ 𝑃 𝑌1 = 𝑦1 𝑃 𝑌2 = 𝑦2 ∀ 𝑦1 , 𝑦2
𝑌1 & 𝑌2 𝑎𝑟𝑒 𝑛𝑜𝑡 𝑖𝑛𝑑𝑒𝑝.
(2) 𝑌1 = 𝑋1 𝑋2 ; 𝑌2 = 𝑋2
𝑗𝑡 𝑝. 𝑚. 𝑓.
𝑦1
𝑃 𝑌1 = 𝑦1 , 𝑌2 = 𝑦2 = 𝑃 𝑋1 𝑋2 = 𝑦1 , 𝑋2 = 𝑦2 = 𝑃 𝑋1 = , 𝑋2 = 𝑦2
𝑦2
𝑦1 𝑦1
𝑖𝑓 = 1, 2, 3; 𝑦2 = 1, 2, 3
= 36 𝑦2
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
𝑃𝑜𝑠𝑠𝑖𝑏𝑙𝑒 𝑣𝑎𝑙𝑢𝑒 𝑜𝑓 𝑌1 𝑖𝑛 1, 2, 3, 4, 6, 9 .
1
𝑃 𝑋1 = 1, 𝑋2 = 1 = 𝑦 =1
36 1
2 2 4
𝑃 𝑋1 = 1, 𝑋2 = 2 + 𝑃 𝑋1 = 2, 𝑋2 = 1 = + = 𝑦 =2
36 36 36 1
3 3 6
𝑃 𝑋1 = 1, 𝑋2 = 3 + 𝑃 𝑋1 = 3, 𝑋2 = 1 = + = 𝑦 =3
36 36 36 1
𝑃 𝑌1 = 𝑦1 = 4
𝑃 𝑋1 = 2, 𝑋2 = 2 = 𝑦1 = 4
36
6 6 12
𝑃 𝑋1 = 2, 𝑋2 = 3 + 𝑃 𝑋1 = 3, 𝑋2 = 2 = + = 𝑦 =6
36 36 36 1
9
𝑃 𝑋1 = 3, 𝑋2 = 3 = 𝑦1 = 9
36
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
𝑍 = 𝑋1 + 𝑋2 → 𝑝𝑜𝑠𝑠𝑖𝑏𝑙𝑒 𝑣𝑎𝑙𝑢𝑒 𝑜𝑓 𝑍 𝑖𝑛 2, 3, 4, 5, 6
𝑃 𝑧=3
= 𝑃 𝑋1 + 𝑋2 = 3
1
𝑃 𝑋1 = 1, 𝑋2 = 1 = 𝔍=2
36
4
𝑃 𝑋1 = 1, 𝑋2 = 2 + 𝑃 𝑋1 = 2, 𝑋2 = 1 = 𝔍=3
36
10
= 𝑃 𝑋1 = 1, 𝑋2 = 3 + 𝑃 𝑋1 = 2, 𝑋2 = 2 + 𝑃 𝑋1 = 3, 𝑋2 = 1 = 𝔍=4
36
12
𝑃 𝑋1 = 2, 𝑋2 = 3 + 𝑃 𝑋1 = 3, 𝑋2 = 2 = 𝔍=5
36
9
𝑃 𝑋1 = 3, 𝑋2 = 2 = 𝔍=6
36
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
(3) P(X = x | X + Y = t) = P(X = x, X + Y = t)/P(X + Y = t) = P(X = x, Y = t − x)/P(X + Y = t)
= P(X = x) P(Y = t − x) / P(X + Y = t)
= [C(n₁, x) pˣ (1−p)^{n₁−x} · C(n₂, t−x) p^{t−x} (1−p)^{n₂−(t−x)}] / [C(n₁+n₂, t) pᵗ (1−p)^{n₁+n₂−t}]
= C(n₁, x) C(n₂, t−x) / C(n₁+n₂, t);  0 ≤ x ≤ n₁, 0 ≤ t − x ≤ n₂
↑ hypergeometric(n₁, n₂; t)
(4) P(X = x | X + Y = t) = P(X = x, Y = t − x)/P(X + Y = t)  [X ∼ P(λ₁), Y ∼ P(λ₂); X + Y ∼ P(λ₁ + λ₂)]
= P(X = x) P(Y = t − x) / P(X + Y = t)
= [e^{−λ₁} λ₁ˣ / x!] · [e^{−λ₂} λ₂^{t−x} / (t−x)!] / [e^{−(λ₁+λ₂)} (λ₁+λ₂)ᵗ / t!]
= C(t, x) (λ₁/(λ₁+λ₂))ˣ (1 − λ₁/(λ₁+λ₂))^{t−x}
i.e. X | X + Y = t ∼ Bin(t, λ₁/(λ₁+λ₂)).
(5) f_X(x) = 3(1 − x)², 0 < x < 1, and 0 otherwise.
F_X(x) = 0 for x < 0; F_X(x) = 3 ∫₀ˣ (1 − t)² dt = 1 − (1 − x)³ for 0 ≤ x < 1; F_X(x) = 1 for x ≥ 1.
𝑌 = 𝑀𝑖𝑛 𝑋1 , 𝑋2 , 𝑋3 , 𝑋4 ; 𝑍 = 𝑀𝑎𝑥 𝑋1 , 𝑋2 , 𝑋3 , 𝑋4
𝑋1 , 𝑋2 , 𝑋3 , 𝑋4 𝑖. 𝑖. 𝑑. 𝑓𝑟𝑜𝑚 𝑓𝑋 𝑥
𝑑. 𝑓. 𝑜𝑓 𝑌 = 𝐹𝑌 𝑦 = 𝑃 𝑌 ≤ 𝑦 = 1 − 𝑃 𝑌 > 𝑦
4
=1− 𝑃 𝑋𝑖 > 𝑦
1
4
= 1− 1−𝑃 𝑋 ≤𝑦
= 1 − 1 − 1 − 1 − 𝑦 2 4 = 1 − 1 − 𝑦 12 0 < 𝑦 < 1
𝑓𝑌 𝑦 = 12 1 − 𝑦 11 0 < 𝑦 < 1
= 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
4
𝑑. 𝑓. 𝑜𝑔 𝑍: 𝑓𝑍 Ʒ = 𝑃 𝑍 ≤ Ʒ = 𝑃 𝑋𝑖 ≤ Ʒ = [𝑃 𝑋 ≤ Ʒ ]4
1
= 1− 1−Ʒ 3 40< Ʒ<1
2 3 3
𝑓𝑍 Ʒ = 12(1 − Ʒ) (1 − (1 − Ʒ) ) 0 < Ʒ < 1
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
(6) Similar to (5) .
(7) X: arrival time of A
Y : arrival time of B
X & Y i. i. d. Exp(𝜆)- p. d. f.
𝑓 𝑥 = 𝜆𝑒 −𝜆𝑥 𝑥 > 0
=𝑃 𝑌−𝑡 ≤𝑋 ≤𝑌 +𝑃 𝑋−𝑡 ≤𝑌 ≤𝑋
= 𝑃 𝑋 ≤ 𝑌 ≤ 𝑋 + 𝑡 + 𝑃 𝑌 ≤ 𝑋 ≤ 𝑌 + 𝑡 𝑗𝑡 𝑝. 𝑑. 𝑓 𝑜𝑓 𝑋, 𝑌 → 𝜆2 𝑒 −𝜆 𝑥+𝑦
𝑥 > 0, 𝑦 > 0
∞ 𝑥+𝑡 ∞ 𝑦+𝑡
= 𝜆2 𝑒 −𝜆 𝑥+𝑦
𝑑𝑦 𝑑𝑥 + 𝜆2 𝑒 −𝜆 𝑥+𝑦
𝑑𝑥 𝑑𝑦
0 𝑥 0 𝑥
∞ 𝑥+𝑡
= 2𝜆2 𝑒 −𝜆𝑥 𝑒 −𝜆𝑡 𝑑𝑦 𝑑𝑥
0 𝑥
∞
1
= 2𝜆2 1 − 𝑒 −𝜆𝑡 𝑒 −2𝜆𝑥 𝑑𝑥 = 1 − 𝑒 −𝜆𝑡 .
𝜆 0
(8) 𝑋1 , 𝑋2 ∼ 𝑈 0, 1
𝜕𝑦1 𝜕𝑦1
1 𝜕𝑥1 𝜕𝑥2 1 1
𝑌1 = 𝑋1 + 𝑋2 ⟹ = = = 2
𝐽 𝜕𝑦2 𝜕𝑦2 −1 1
𝜕𝑥1 𝜕𝑥2
1
𝑌2 = 𝑋2 − 𝑋1 𝐽 =
2
𝑓𝑋1 ,𝑋2 𝑥1 , 𝑥2 = 1; 0 < 𝑥1 < 1, 0 < 𝑥2 < 1
1
⟹ 𝑓𝑌1 ,𝑌2 𝑦1 , 𝑦2 = ; 0 < 𝑦1 + 𝑦2 < 2, 0 < 𝑦1 − 𝑦2 < 2
2
𝑅𝑎𝑛𝑔𝑒 𝑢𝑛𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛𝑎𝑙𝑙𝑦 0 < 𝑦1 < 2 & − 1 < 𝑦2 < 1
𝑦1 + 𝑦2 𝑦1 − 𝑦2 𝑦1 − 𝑦2
𝑖2 = , 𝑖1 = 𝐴𝑙𝑠𝑜 0 < 𝑥1 < 1 ; ⟹ 0 < <1
2 2 2
⟹ 0 < 𝑦1 − 𝑦2 < 2
𝑦2 < 𝑦1 < 2 + 𝑦2 & 𝑦1 − 2 < 𝑦2 < 𝑦1 } ____(1)
𝑦1 + 𝑦2
𝐴𝑙𝑠𝑜 0 < 𝑥2 < 1 ; 0 < <1
2
0 < 𝑦1 + 𝑦1 < 2
−𝑦2 < 𝑦1 < 2 − 𝑦2 & − 𝑦1 < 𝑦2 < 2 − 𝑦1 } ____(2)
𝐶𝑜𝑚𝑏𝑖𝑛𝑖𝑛𝑔 1 & 2
max 𝑦2 , −𝑦2 < 𝑦1 < min 2 + 𝑦2 , 2 − 𝑦2
& max 𝑦1 − 2, −𝑦1 < 𝑦2 < min 𝑦1 , 2 − 𝑦1 ) ____(3)
𝐼𝑓 − 1 < 𝑦2 < 0 𝑡𝑒𝑛 𝑓𝑟𝑜𝑚 3 − 𝑦2 < 𝑦1 < 2 + 𝑦2 & 𝑖𝑓 0 < 𝑦2 < 1 𝑡𝑒𝑛 𝑓𝑟𝑜𝑚 3 𝑦2
< 𝑦1 < 2 − 𝑦2 } _____(4)
𝐴𝑙𝑡𝑒𝑟𝑛𝑎𝑡𝑖𝑣𝑒𝑙𝑦 𝑖𝑓 0 < 𝑦1 < 1 𝑡𝑒𝑛 𝑓𝑟𝑜𝑚 3 − 𝑦1 < 𝑦2 < 𝑦1 & 𝑖𝑓 1 < 𝑦1
< 2 𝑡𝑒𝑛 𝑓𝑟𝑜𝑚 3 𝑦1 − 2 < 𝑦2 < 2 − 𝑦1 }____(5)
⟹ 𝑀𝑎𝑟𝑔 𝑜𝑓 𝑌1
1 𝑦1
𝑓𝑌1 𝑦1 = 𝑑𝑦 = 𝑦1 𝑖𝑓 0 < 𝑦1 < 1
2 −𝑦1 2
2−𝑦1
1
𝑈𝑠𝑖𝑛𝑔 5 →= 𝑑𝑦2 = 2 − 𝑦1 𝑖𝑓 1 < 𝑦1 < 2
2 𝑦1 −2
& 𝑀𝑎𝑟𝑔 𝑜𝑓 𝑌2
2+𝑦2
1
𝑓𝑌2 𝑦2 = 𝑑𝑦1 = 1 + 𝑦2 𝑖𝑓 − 1 < 𝑦2 < 0
2 −𝑦2
2−𝑦2
1
𝑢𝑠𝑖𝑛𝑔 4 →= 𝑑𝑦1 = 1 − 𝑦2 𝑖𝑓 0 < 𝑦2 < 1
2 𝑦2
(9) X ∼ N(0, 1), Y ∼ N(0, 1), independent.
f_{X,Y}(x, y) = (1/2π) exp(−(x² + y²)/2)
Z = X/Y; take U = Y as the auxiliary variable, so X = UZ, Y = U and |J| = |u|.
f_{U,Z}(u, z) = (1/2π) exp(−u²(z² + 1)/2) |u|;  −∞ < u < ∞, −∞ < z < ∞
f_Z(z) = (1/2π) ∫_{−∞}^{∞} |u| exp(−u²(1 + z²)/2) du
= (1/π) ∫₀^{∞} u exp(−u²(1 + z²)/2) du = (1/π) · 1/(1 + z²);  −∞ < z < ∞
i.e. Z ∼ Cauchy distn (0, 1).
In general X ∼ Cauchy(μ, θ) has f_X(x) = (θ/π) · 1/(θ² + (x − μ)²);  −∞ < x < ∞.
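The result that X/Y is standard Cauchy can be checked by simulation. The sketch below (standard library only; sample size and seed are arbitrary illustrative choices) draws the ratio of two independent standard normals and compares the sample median and quartiles with the Cauchy(0, 1) values 0 and ±1.

```python
import random

random.seed(0)
n = 200_000
# Z = X / Y with X, Y independent N(0, 1) should follow Cauchy(0, 1):
# median 0, lower/upper quartiles -1 and +1.
z = sorted(random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n))
med = z[n // 2]
q1, q3 = z[n // 4], z[3 * n // 4]
print(med, q1, q3)  # should be close to 0, -1, 1
```

Quartiles are used rather than the sample mean because a Cauchy variable has no mean.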
𝑥+𝑦
1
(10) 𝑓𝑋,𝑌 𝑥, 𝑦 = ⎾𝛼 ⎾𝛼 𝛼 +𝛼 2 𝑥 𝛼 1 −1 𝑦 𝛼 2 −1 𝑒 − 𝜃 , 𝑥 > 0, 𝑦 > 0
1 2𝜃 1
= 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
𝑈 = 𝑋 + 𝑌} 𝑋 = 𝑈𝑉
𝑋
𝑉= } 𝑌 = 𝑈(1 − 𝑉)
𝑋+𝑌
𝑣 𝑢
𝐽= = −𝑢
1 − 𝑣 −𝑢
𝑅𝑎𝑛𝑔𝑒 𝑢 > 0, 0 < 𝑣 < 1
1 𝛼 2 −1 −𝑢
𝑓𝑈,𝑉 𝑢, 𝑣 = 𝑢𝑣 𝛼 1 −1 𝑢 1 − 𝑣 𝑒 𝜃. 𝑢 𝑢 > 0, 0 < 𝑣 < 1
⎾𝛼1 ⎾𝛼2 𝜃 𝛼 1 +𝛼 2
= 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
𝑖. 𝑒. 𝑓𝑈,𝑉 𝑢, 𝑣
1 𝑢
𝛼 1 +𝛼 2 −1 −𝜃
1 𝛼 −1
𝑢 𝑒 × 𝑣 𝛼 1 −1 1 − 𝑣 𝑢 2 𝑢 > 0, 0 < 𝑣 < 1
= ⎾𝛼1 + 𝛼2 𝜃 𝛼 1 +𝛼 2 𝐵 𝛼1 , 𝛼2
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
1 𝑢
𝛼 1 +𝛼 2 −1 −𝜃
⟹ 𝑓𝑈 𝑢 = 𝑢 𝑒 𝑢>0
⎾𝛼1 + 𝛼2 𝜃 𝛼 1 +𝛼 2
𝑈 ∼ 𝐺𝑎𝑚𝑚𝑎. = 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
1 𝛼 −1
𝑓𝑉 𝑣 = 𝑣 𝛼 1 −1 1 − 𝑣 𝑢 2 0 < 𝑣 < 1
𝐵 𝛼1 , 𝛼2
𝑉 ∼ 𝐵𝑒𝑡𝑎. = 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
⟹ 𝑈 & 𝑉 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝.
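Since U = X + Y and V = X/(X + Y) are claimed to be independent Gamma and Beta variables, a quick Monte Carlo check is possible with the standard library's gamma sampler. The parameter values below (α₁ = 2, α₂ = 3, θ = 1.5) are arbitrary illustrative choices.

```python
import random

random.seed(1)
a1, a2, theta, n = 2.0, 3.0, 1.5, 100_000
us, vs = [], []
for _ in range(n):
    x = random.gammavariate(a1, theta)   # X ~ Gamma(alpha1, theta)
    y = random.gammavariate(a2, theta)   # Y ~ Gamma(alpha2, theta)
    us.append(x + y)                     # U should be Gamma(a1 + a2, theta)
    vs.append(x / (x + y))               # V should be Beta(a1, a2)
mu = sum(us) / n                         # expect (a1 + a2) * theta = 7.5
mv = sum(vs) / n                         # expect a1 / (a1 + a2) = 0.4
cov = sum((u - mu) * (v - mv) for u, v in zip(us, vs)) / n
su = (sum((u - mu) ** 2 for u in us) / n) ** 0.5
sv = (sum((v - mv) ** 2 for v in vs) / n) ** 0.5
corr = cov / (su * sv)                   # expect ~0 by independence
```

Zero correlation is of course weaker than independence, but it is a cheap sanity check on the factorization above.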
𝐶2
(11) 𝑓𝑋,𝑌 = 1+𝑥 4 1+𝑦 4
− ∞ < 𝑥 < ∞, −∞ < 𝑦 < ∞
𝑋 𝑋 = 𝑈1 𝑈2 𝑢 𝑢1
𝑈1 = , 𝑈2 = 𝑌} }𝐽 = 2 = 𝑢2
𝑌 𝑌 = 𝑈2 0 1
𝑅𝑎𝑛𝑔𝑒 − ∞ < 𝑢1 < ∞, −∞ < 𝑢2 < ∞
𝐶 2 𝑢2
𝑓𝑈1 ,𝑈2 𝑢1 , 𝑢2 = − ∞ < 𝑢1 < ∞, −∞ < 𝑢2 < ∞
1 + 𝑢1 4 𝑢2 4 1 + 𝑢2 4
∞ ∞
𝑢2
𝑓𝑈1 𝑢1 = 𝑓𝑈1 ,𝑈2 𝑢1 , 𝑢2 𝑑𝑢2 = 2𝐶 𝑑𝑢2
−∞ 0 1 + 𝑢1 𝑢2 4 1 + 𝑢2 4
4
𝐶𝜋 1
= . 𝑎𝑛 𝑖𝑛𝑡𝑒𝑔𝑟𝑎𝑡𝑖𝑛𝑔 .
2 1 + 𝑢1 2
∞
2
𝑓𝑈1 𝑢1 𝑑𝑢1 = 1 ⟹ 𝐶 = 2
−∞ 𝜋
2 1
⟹ 𝑓𝑈1 𝑢1 = . − ∞ < 𝑢1 < ∞.
𝜋 1 + 𝑢1 2
↑ Cauchy distn.
1
1 𝑥 2 +𝑦 2
(12) 𝑓𝑋,𝑌 𝑥, 𝑦 = 2𝜋 𝑒 −2 − ∞ < 𝑥 < ∞, −∞ < 𝑦 < ∞
𝑋 = 𝑅 cos 𝛩
𝑌 = 𝑅 sin 𝛩
𝑐𝑜𝑠𝜃 −𝑟 𝑠𝑖𝑛𝜃
𝐽= =𝑟
𝑠𝑖𝑛𝜃 𝑟 𝑐𝑜𝑠𝜃
𝑅𝑎𝑛𝑔𝑒 𝑟 ≥ 0, 0 < 𝜃 < 2𝜋
1 −𝑟 2
𝑓𝑅,𝛩 𝑟, 𝜃 = 𝑒 2 𝑟, 𝑟 > 0, 0 < 𝜃 < 2𝜋
2𝜋
= 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
𝑟2
𝑓𝑅 𝑟 = 𝑟𝑒 − 2 𝑟 > 0
= 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
1
𝑓𝛩 𝜃 = 0 < 𝜃 < 2𝜋 𝛩 ∼ 𝑈 0, 2𝜋
2𝜋
= 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
⟹ 𝑅 &𝛩 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝.
𝑅2
𝐷𝑒𝑓𝑖𝑛𝑒 𝑌 = 𝑦>0
2
𝑑𝑟 1
𝑅= 2 𝑦 =
𝑑𝑦 2𝑦
1
𝑓𝑌 𝑦 = 2𝑦𝑒 −𝑦 𝑦 > 0
2𝑦
= 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
𝑖. 𝑒. 𝑓𝑌 𝑦 = 𝑒 −𝑦 𝑦 > 0
= 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
𝑅2
⟹ ∼ 𝐸𝑥𝑝 1 .
2
𝑈 = 𝑋 2 + 𝑌 2 = 𝑅 2 − 𝑓 𝑛 𝑜𝑓 𝑟. 𝑣. 𝑘
𝑋
𝑉 = = 𝑐𝑜𝑡𝛩 − 𝑓 𝑛 𝑜𝑓 𝑟. 𝑣. 𝛩
𝑌
𝑠𝑖𝑛𝑐𝑒 𝑅 & 𝛩 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝, 𝑈 & 𝑉 𝑎𝑟𝑒 𝑎𝑙𝑠𝑜 𝑖𝑛𝑑𝑒𝑝.
𝑋
𝑖. 𝑒. 𝑋 2 + 𝑌 2 & 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝.
𝑌
(13) U₁ ∼ U(0, 1) ⟹ −ln U₁ ∼ Exp(1) (straightforward).
U₂ ∼ U(0, 1) ⟹ 2πU₂ ∼ U(0, 2π) (straightforward).
⟹ −ln U₁ ∼ Exp(1) and 2πU₂ ∼ U(0, 2π), and they are independent; by problem #12
𝑅2
𝑗𝑡 𝑑𝑖𝑠𝑡𝑛 𝑜𝑓 – 𝑙𝑛𝑈1 , 2𝜋𝑈2 𝑖𝑠 𝑠𝑎𝑚𝑒 𝑎𝑠 𝑗𝑡 𝑑𝑖𝑠𝑡𝑛 𝑜𝑓 ,𝛩
2
𝑅2
𝑖. 𝑒. – 𝑙𝑛𝑈1 , 2𝜋𝑈2 ≝ ,𝛩
2
𝑖. 𝑒. – 2𝑙𝑛𝑈1 , 2𝜋𝑈2 ≝ 𝑅 2 , 𝛩
𝑖. 𝑒. – 2𝑙𝑛𝑈1 cos 2𝜋𝑈2 , – 2𝑙𝑛𝑈1 sin 2𝜋𝑈2 ≝ 𝑅 𝑐𝑜𝑠𝛩, 𝑅 𝑠𝑖𝑛𝛩
𝑖. 𝑒. 𝑋1 , 𝑋2 ≝ 𝑅 𝑐𝑜𝑠𝛩, 𝑅 𝑠𝑖𝑛𝛩
⟹ 𝑋1 𝑎𝑛𝑑 𝑋2 𝑎𝑟𝑒 𝑖. 𝑖. 𝑑. 𝑁 0, 1 𝑟. 𝑣. 𝑠.
Direct method: U₁, U₂ i.i.d. U(0, 1).
𝑓𝑈1 ,𝑈2 𝑢1 , 𝑢2 = 1 ; 0 < 𝑢1 < 1, 0 < 𝑢2 < 1
= 0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
𝑋1 = – 2𝑙𝑛𝑈1 cos 2𝜋𝑈2
𝑋2 = – 2𝑙𝑛𝑈1 sin 2𝜋𝑈2
𝑅𝑎𝑛𝑔𝑒 𝑜𝑓 𝑋1 ; −∞ < 𝑥1 < ∞, 𝑠𝑙𝑦 − ∞ < 𝑥2 < ∞
𝑋1 2 + 𝑋2 2 = – 2𝑙𝑛𝑈1
𝑋2
= 𝑡𝑎𝑛 2𝜋𝑈2
𝑋1
1
𝑈1 = exp − 𝑋1 2 + 𝑋2 2
2
1 𝑋2
𝑈2 = tan−1
2𝜋 𝑋1
1 1
exp − 𝑋1 2 + 𝑋2 2 −𝑋1 exp − 𝑋1 2 + 𝑋2 2 −𝑋2
2 2
𝐽=
𝑋2 𝑋1
− 2 2
2𝜋 𝑋1 + 𝑋2 2𝜋 𝑋1 2 + 𝑋2 2
1 1
𝐽 = exp − 𝑋1 2 + 𝑋2 2 −
2 2𝜋
1
exp − 2 𝑋1 2 + 𝑋2 2
𝐽 =
2𝜋
1 1
⟹ 𝑓𝑋1 ,𝑋2 𝑥1 , 𝑥2 = exp − 𝑥1 2 + 𝑥2 2 ; −∞ < 𝑥1 < ∞, −∞ < 𝑥2 < ∞ ↓
2𝜋 2
1 −1𝑥 1 2 1 −1𝑥 2 2
= 𝑒 2 𝑒 2
2𝜋 2𝜋
⟹ 𝑋1 & 𝑋2 𝑎𝑟𝑒 𝑖𝑛𝑑𝑒𝑝 𝑁 0, 1 𝑟. 𝑣. 𝑠.
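The Box–Muller construction above translates directly into code. A minimal sketch (sample size and seed are arbitrary choices) generates standard normals from uniforms and checks the sample mean and variance:

```python
import math
import random

def box_muller(n_pairs, rng):
    # Each pair (U1, U2) of U(0, 1) variates yields two independent
    # N(0, 1) variates via X1 = sqrt(-2 ln U1) cos(2 pi U2),
    # X2 = sqrt(-2 ln U1) sin(2 pi U2).
    out = []
    for _ in range(n_pairs):
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out

xs = box_muller(100_000, random.Random(42))
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)  # sample mean ~0, variance ~1
```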
− 𝑥 1 ,𝑥 2 ,𝑥 3
(14) 𝑓𝑋1 ,𝑋2 ,𝑋3 𝑥1 , 𝑥2 , 𝑥3 = 𝑒 ; 𝑥1 > 0, 𝑥2 > 0, 𝑥3 > 0
𝑋1 𝑋1
𝑌1 = ; 𝑌2 = ; 𝑌 = 𝑋1 + 𝑋2 + 𝑋3
𝑋1 + 𝑋2 𝑋1 + 𝑋2 + 𝑋3 3
𝑖. 𝑒. 𝑋1 = 𝑌1 𝑌2 𝑌3
𝑋2 = 𝑌2 𝑌3 1 − 𝑌1
𝑋3 = 𝑌3 1 − 𝑌2
𝑋1 + 𝑋2 = 𝑌2 𝑌3 , 𝑋1 = 𝑌1 𝑌2 𝑌3 , 𝑋2 = 𝑌2 𝑌3
𝑦2 𝑦3 𝑦1 𝑦3 𝑦1 𝑦2
𝐽 = −𝑦2 𝑦3 𝑦3 1 − 𝑦1 𝑦2 1 − 𝑦1 = 𝑦2 𝑦3 2
0 −𝑦3 1 − 𝑦2
2 −𝑦3
𝑓𝑌1 ,𝑌2 ,𝑌3 𝑦1 , 𝑦2 , 𝑦3 = 𝑦2 𝑦3 𝑒 ; 0 < 𝑦1 < 1, 𝑦3 > 0
1 ∞
𝑓𝑌1 𝑦1 = 𝑦2 𝑑𝑦2 𝑦3 2 𝑒 −𝑦3 𝑑𝑦3 = 1 0 < 𝑦1 < 1
0 0
𝑖. 𝑒. 𝑌1 ∼ 𝑈 0, 1
𝑓𝑌2 𝑦2 = 𝑦2 × 1 × 2 0 < 𝑦2 < 1
⎾𝑚 + 𝑛 𝑚 −1 𝑛−1
𝑖. 𝑒. 𝑌2 ∼ 𝐵𝑒𝑡𝑎 2, 1 𝑋 ∼ 𝐵𝑒𝑡𝑎 𝑚, 𝑛 𝑓𝑋 𝑥 = 𝑥 1−𝑥
⎾𝑚⎾𝑛
1 1
&𝑓𝑌3 𝑦3 = 𝑑𝑦1 𝑑𝑦2 𝑦3 2 𝑒 −𝑦3
0 0
1
= 𝑒 −𝑦3 𝑦3 2 0 < 𝑦3 < ∞
2
𝑦2 𝑦3 𝑦3 1 − 𝑦1 1 − 𝑦2 + 𝑦2 𝑦3 1 − 𝑦1
𝑦1 𝑦3 𝑦2 𝑦3 𝑦1 𝑦3 𝑦1 𝑦2
0 𝑦3 𝑦2
0 − 𝑦3 1 − 𝑦2
𝑥𝑖 𝑛𝑖
1
(15) (a) 𝑓𝑋1 ,𝑋2 𝑥1 , 𝑥2 = 2
𝑖=1 𝑛 𝑖 𝑛 𝑒 − 2 𝑥𝑖 2 −1 ; 𝑥𝑖 > 0
22⎾ 𝑖
2
2
𝑥𝑖 𝑛𝑖
=𝐶 𝑒 − 2 𝑥𝑖 2 −1 ; 𝑥𝑖 > 0
𝑖=1
𝑌 𝑌
𝑋1 𝑋1 = 𝑌 1+21
1
𝑌1 = ; 𝑌 = 𝑋1 + 𝑋2 |
𝑋2 2 𝑋2 = 𝑌 +
𝑌2
1 1
1 1 𝑥 2
= 𝑥2 − 2 = 1 + 𝑥 = 𝑥1 + 𝑥2 = 𝑦2
=
1 + 𝑦1
𝑥2 2
𝐽 𝑥2 𝑥2 2 𝑥2 2 𝑦2 𝑦2
1 1 1 + 𝑦1
𝑦2
𝐽 =
1 + 𝑦1 2
𝑦2
𝑦1 𝑦2 𝑛 1 −1 𝑦2 𝑛 2 −1 𝑦2
𝑓𝑌1 ,𝑌2 𝑦1 , 𝑦2 = 𝐶 𝑒 − 2 𝑦1 +1
2
1+𝑦1
2
𝑦1 +1 2
𝑦1 > 0, 𝑦2 > 0
𝑦2 𝑛 1 +𝑛 2
𝑖. 𝑒. 𝑓𝑌1 ,𝑌2 𝑦1 , 𝑦2 = 𝐶1 𝑒 − 2 𝑦2 2
−1
↓
𝑛 2 −1
𝑦1 2
𝑓𝑌2 𝑋 𝑛 1 +𝑛 2 𝑦1 > 0, 𝑦2 > 0
1 + 𝑦1 2
𝑓𝑌1
∞ −1
𝑛1 + 𝑛2 𝑛 1 +𝑛 2
𝑓𝑌2 𝑦2 𝑑𝑦2 = 1 ⟹ 𝐶1 = ⎾ .2 2
0 2
⟹ 𝑌2 ∼ 𝜆2 𝑤𝑖𝑡 𝑛1 + 𝑛2 𝑑. 𝑓.
𝑏 𝑠𝑖𝑚𝑖𝑙𝑎𝑟 𝑡𝑎 𝑎
𝑋1
𝑛
𝑍1 = 1 ∼ 𝐹𝑛 1 ,𝑛 2 → 𝐹 𝑑𝑖𝑠𝑡𝑛 𝑤𝑖𝑡 𝑛1 , 𝑛2 𝑑. 𝑓. &
𝑋2
𝑛2
𝑋3 /𝑛3
𝑍2 = ∼ 𝐹𝑛 3 ,𝑛 1 + 𝑛 2
𝑋1 + 𝑋2 / 𝑛1 + 𝑛2
(16) X∼ N(0, 1)
1 𝑥2 1 𝑦 𝑛
𝑓𝑋,𝑌 = 𝑒− 2 𝑛 𝑒 −2 𝑦 2 −1
2𝑥 𝑛
22 ⎾2
𝑋 𝑋 𝑇
𝑇= 𝑑𝑒𝑓𝑖𝑛𝑒 𝑑𝑢𝑚𝑚𝑦 𝑈 = 𝑌 →
𝑌 𝑌 𝑈=𝑌
𝑛
𝑈
⟹𝑋=𝑇
𝑛
𝑌=𝑈
𝑢 𝑡 𝑢
𝐽= 𝑛 2 𝑛 𝑢 = 𝑛
0 1
1 1 𝑡2𝑢 𝑢 𝑛 1
−
⟹ 𝑓𝑇,𝑈 𝑡, 𝑢 = exp − exp − , 𝑢 2 2 −∞<𝑡 < ∞ 𝑢>0
𝑛 𝑛 2 𝑛 2
2𝜋 22 𝑛
2
∞
𝑓𝑇 𝑡 = 𝑓𝑇,𝑈 𝑡, 𝑢 𝑑𝑢
0
∞
1 𝑛 1 𝑢 𝑡2
= . 𝑢 2 −2 exp − 1+ 𝑑𝑢
𝑛𝑛
0 2 𝑛
2𝜋 22
2 𝑛
𝑛+1
⎾ 2 1
= 𝑛 𝑛
. 𝑛+1 −∞<𝑡 <∞
2𝜋 22 𝑛 1 2
2 𝑡2
2 1 + 𝑛
=⋯
𝑛 2
(17) 𝑀𝑌 𝑡 = 𝐸 𝑒 𝑡 𝑌 = 𝐸 𝑒 𝑡 1 𝑋𝑖
𝑛 𝑛
𝑡𝑋 𝑖 2
= 𝐸 𝑒 = 𝑀𝑋 𝑖 2 𝑡
𝑖=1 𝑖=1
𝑛
1 𝑛
− −
𝑋𝑖 2 ∼ 𝜆1 2 → = 1 − 2𝑡 2 = 1 − 2𝑡 2
𝑖=1
⟹ 𝑌 ∼ 𝜆𝑛 2
𝑋𝑛+1 ∼ 𝑁 0, 1
> 𝑖𝑛𝑑𝑒𝑝.
𝑌 ∼ 𝜆𝑛 2
𝑗𝑡 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑌 & 𝑋𝑛+1
1 𝑦 𝑛 1 𝑥2
𝑓𝑌,𝑋𝑛 +1 𝑦, 𝑥 = 𝑛 𝑒 −2 𝑦 2 −1 × −
𝑒 2
𝑛 2𝜋
22 ⎾ 2
𝑋𝑛+1
𝑇=
𝑌 𝑢
𝑛 } ⟹ 𝑋𝑛+1 = 𝑇 𝑛 𝑌 = 𝑈
𝑈=𝑌
𝑢 𝑡 𝑢
𝐽= 𝑛 2 𝑛 𝑢 = 𝑛.
0 1
𝑗𝑡 𝑝. 𝑑. 𝑓. 𝑜𝑓 𝑇 & 𝑈
𝑛 𝑛 −1 1 𝑡2𝑢 𝑢 𝑛
𝑓𝑇,𝑈 𝑡, 𝑢 = 22 ⎾ 2𝜋 𝑛 exp − exp − 𝑢 2 −1 ; −∞ < 𝑡 < ∞, 𝑢 > 0
2 2 𝑛 2
1
𝑛 𝑛 −1 𝑛 𝑢 𝑡2
𝑓𝑇 𝑡 = 22 ⎾ 2𝜋 𝑛 𝑢 2 −1 exp − 1+ 𝑑𝑢
2 0 2 𝑛
𝑛+1
𝑛 𝑛 −1 ⎾ 2
= 22 ⎾ 2𝜋 𝑛 𝑛+1 ; −∞ < 𝑡 < ∞
2 2
1 𝑡2
1+
2 𝑛
𝑛+1 𝑛+1
−
⎾
2 𝑡2 2
= 𝑛 1 + ; −∞ < 𝑡 < ∞
𝜋 ⎾2 𝑛 𝑛
(18) Z = X + Y; Z ∈ {0, 1, …}
P(Z = z) = P(X + Y = z) = Σ_{x=0}^{z} P(X = x ∩ Y = z − x)
= Σ_{x=0}^{z} P(X = x) P(Y = z − x)
= Σ_{x=0}^{z} qˣ p · q^{z−x} p = p² q^z Σ_{x=0}^{z} 1
i.e. P(Z = z) = p² q^z (z + 1), z = 0, 1, …, and 0 otherwise.
P(X = x, Z = z) = P(X = x, Y = z − x)
= p² q^z; x = 0, 1, …, z; z = 0, 1, …, and 0 otherwise.
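The closed form P(Z = z) = (z + 1)p²q^z can be verified numerically against a direct convolution of the two geometric p.m.f.s; p = 0.3 below is an arbitrary illustrative value.

```python
p = 0.3
q = 1.0 - p

def geom_pmf(x):
    # P(X = x) = q^x * p for x = 0, 1, ...
    return q ** x * p

# brute-force convolution of the two p.m.f.s
conv = [sum(geom_pmf(x) * geom_pmf(z - x) for x in range(z + 1))
        for z in range(10)]
# closed form derived above
closed = [(z + 1) * p * p * q ** z for z in range(10)]
```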
Problem Set-9
[1] Let {Xₙ} be a sequence of N(1/n, 1 − 1/n) random variables. Show that Xₙ → Z, where Z ∼ N(0, 1).
[2] Let {Xₙ} be a sequence of i.i.d. random variables with E(Xᵢ) = μ, Var(Xᵢ) = σ² and E(Xᵢ − μ)⁴ = σ⁴ + 1. Find lim_{n→∞} P[σ² − 1/√n ≤ ((X₁ − μ)² + ⋯ + (Xₙ − μ)²)/n ≤ σ² + 1/√n].
[3] Let X₁, X₂, …, Xₙ be i.i.d. B(1, p), Sₙ = Σᵢ₌₁ⁿ Xᵢ. Find n which would guarantee
P(|Sₙ/n − p| ≥ 0.01) ≤ 0.01, no matter what the unknown p may be.
[4] Let X₁, …, Xₙ be i.i.d. from a distribution with mean μ and finite variance σ². Prove that √n(X̄ₙ − μ)/Sₙ → Z, where Z ∼ N(0, 1).
[5] The p.d.f. of a random variable X is f(x) = 1/x², x ≥ 1, and 0 otherwise.
Consider a random sample of size 72 from the distribution having the above p. d. f. compute,
approximately, the probability that more than 50 of these observations are less than 3.
100
[6] Let 𝑋1 , … , 𝑋100 be i. i. d. from poisson (3) distribution and let Y= 𝑖=1 𝑋𝑖 . Using CLT, find an
approximate value of P(100≤ Y ≤ 200).
[7] Let X∼ Bin (100, 0.6). Find an approximate value of P(10 ≤X ≤ 16).
[8] The p.d.f. of Xₙ is given by fₙ(x) = (1/Γ(n)) e^{−x} x^{n−1}, x > 0, and 0 otherwise.
Find the limiting distribution of Yₙ = Xₙ/n.
[9] Let X̄ denote the mean of a random sample of size 64 from the Gamma distribution with density
f(x) = x^{p−1} e^{−x/α} / (Γ(p) α^p), x > 0, and 0 otherwise,
with α = 2, p = 4. Compute, approximately, P(7 < X̄ < 9).
[10] X₁, …, Xₙ is a random sample from U(0, 2). Let Yₙ = X̄ₙ; show that √n(Yₙ − 1) → N(0, 1/3).
Solution Set
1 1
𝑋𝑛 − 𝑥−
𝑛 𝑛
(1) 𝐹𝑋𝑛 𝑥 = 𝑃 𝑋𝑛 ≤ 𝑥 = 𝑃 ≤
1 1
1− 1−
𝑛 𝑛
1
𝑥−
=𝛷 𝑛 ⟶ 𝛷 𝑥 𝑎𝑠 𝑛 → ∞
1
1−𝑛
⟹ 𝑋𝑛 ⟶ 𝑋 ∼ 𝑁 0,1
𝐴𝑙𝑡 𝑚. 𝑔. 𝑓. 𝑜𝑓 𝑋𝑛
𝑡 𝑡2 1
𝑀𝑋𝑛 𝑡 = exp + 1−
𝑛 2 𝑛
𝑡2
⟶ 𝑒 2 ← 𝑚. 𝑔. 𝑓. 𝑜𝑓 𝑁 0, 1
⟹ 𝑋𝑛 ⟶ 𝑋 ∼ 𝑁 0, 1 .
2
(2) 𝑌𝑖 = 𝑋𝑖 − 𝜇
2
𝐸 𝑌𝑖 = 𝐸 𝑋𝑖 − 𝜇 = 𝜎2
2
𝑉 𝑌𝑖 = 𝐸 𝑋𝑖 − 𝜇 − 𝜎2 2
4
= 𝐸 𝑋𝑖 − 𝜇 + 𝜎 4 − 2𝜎 2 𝐸 𝑋𝑖 − 𝜇 2
= 𝜎 4 + 1 + 𝜎 4 − 2𝜎 4 = 1
𝑖. 𝑒. 𝐸 𝑌𝑖 = 𝜎 2 ; 𝑉 𝑌𝑖 = 1 ∀ 𝑖 & 𝑌1 … 𝑌𝑛 𝑖. 𝑖. 𝑑.
𝑆𝑛 = 𝑌𝑖
𝐸𝑆𝑛 = 𝑛𝜎 2
𝑉 𝑆𝑛 = 𝑛
𝑆𝑛 − 𝐸 𝑆𝑛
𝐶𝐿𝑇 ⟹ → 𝑁 0, 1
𝑉 𝑆𝑛
2 2
𝑋1 − 𝜇 + … + 𝑋𝑛 − 𝜇 − 𝑛𝜎 2
𝑖. 𝑒. → 𝑋 ∼ 𝑁 0, 1 .
𝑛
2 2
1 𝑋1 − 𝜇 + … + 𝑋𝑛 − 𝜇 1
lim 𝑃 𝜎 2 − ≤ ≤ 𝜎2 +
𝑛→∞ 𝑛 𝑛 𝑛
2 2
1 𝑋1 − 𝜇 + … + 𝑋𝑛 − 𝜇 − 𝑛𝜎 2 1
= lim 𝑃 − ≤ ≤
𝑛→∞ 𝑛 𝑛 𝑛
2 2
𝑋1 − 𝜇 + … + 𝑋𝑛 − 𝜇 − 𝑛𝜎 2
= lim 𝑃 −1 ≤ ≤1
𝑛→∞ 𝑛
= 𝛷 1 − 𝛷 −1 = 2𝛷 1 − 1 = ⋯
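The limiting value 2Φ(1) − 1 can be evaluated with the standard normal c.d.f. written in terms of the error function:

```python
import math

def Phi(x):
    # standard normal c.d.f. via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

limit = 2.0 * Phi(1.0) - 1.0  # = Phi(1) - Phi(-1), about 0.6827
```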
𝑛≥⋯
𝑛 𝑋𝑛 −𝜇
(4) 𝐶𝐿𝑇 ⟹ 𝜎
⟶ 𝑍 ∼ 𝑁 0, 1
𝑛
2 1 2
𝐴𝑙𝑠𝑜 𝑆𝑛 = 𝑋𝑖 − 𝑋 ⟶ 𝜎2
𝑛
𝑖=1
1
(𝑆𝑛 2 = 𝑛 𝑋𝑖 2 − 𝑋 2 ) ⟹ 𝑆𝑛 ⟶ 𝜎,
↓p ↓p
𝜎 2 + 𝜇2 𝜇2 →
↓p
𝜎2
Using Slutsky's theorem: Xₙ → X, Yₙ → c ⟹ Xₙ/Yₙ → X/c
𝑛 𝑋𝑛 − 𝜇
𝜎 → 𝑋 ∼ 𝑁 0, 1
𝑆𝑛
𝜎
𝑛 𝑋𝑛 − 𝜇
𝑖. 𝑒. ⟶ 𝑋 ∼ 𝑁 0, 1 .
𝑆𝑛
1
𝑥>1
(5) 𝑋1 … 𝑋72 𝑟. 𝑠. 𝑓𝑟𝑜𝑚 𝑓 𝑥 = 𝑥2
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
1 𝑖𝑓 𝑋𝑖 < 3
𝐷𝑒𝑓𝑖𝑛𝑒 𝑌𝑖 =
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
3
1 2
𝑃 𝑌𝑖 = 1 = 𝑃 𝑋𝑖 < 3 = 𝑑𝑥 = = 𝜃 𝑠𝑎𝑦
1 𝑥2 3
𝑌1 , … , 𝑌72 𝑎𝑟𝑒 𝑖. 𝑖. 𝑑. 𝐵 1, 𝜃
72
2
𝑌= 𝑌𝑖 ∼ 𝐵 72, 𝜃 =
3
𝑖=1
2
𝑌 − 72 × 3
𝐶𝐿𝑇 ⟹ ⟶ 𝑍 ∼ 𝑁 0, 1
2 1
72 × 3 × 3
𝑌 − 48
𝑖. 𝑒. ⟶ 𝑍 ∼ 𝑁 0, 1
4
P(Y > 50) = 1 − P(Y ≤ 50) ≈ 1 − P(Y ≤ 50.5) ← continuity correction
= 1 − P((Y − 48)/4 ≤ (50.5 − 48)/4)
≈ 1 − Φ(2.5/4) = ⋯
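Numerically, the continuity-corrected approximation works out as follows (Φ again written via the error function):

```python
import math

def Phi(x):
    # standard normal c.d.f.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Y ~ B(72, 2/3): E(Y) = 48, sd(Y) = sqrt(72 * (2/3) * (1/3)) = 4
approx = 1.0 - Phi((50.5 - 48.0) / 4.0)  # = 1 - Phi(0.625), roughly 0.27
```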
(6) 𝑋1 … 𝑋100 𝑖. 𝑖. 𝑑 𝑃 3
100
𝐸 𝑋1 = 3; 𝑉 𝑋1 = 3 ; 𝑌 = 𝑋1 ∼ 𝑃 300 ⟹ 𝐸 𝑌 = 𝑉 𝑦 = 3
𝑖=1
𝑌 − 300 𝑆𝑛 − 𝐸𝑆𝑛
𝐶𝐿𝑇 ⟹ = → 𝑁 0, 1
10 3 𝑉𝑆𝑛
𝑋 − 100 × 0.6 𝑋 − 60
𝐶𝐿𝑇 ⟹ = ⟶ 𝑍 ∼ 𝑁(0, 1)
100 × 0.6 × 0.4 24
⟹ 𝑃 10 ≤ 𝑋 ≤ 16 = 𝑃 9.5 ≤ 𝑋 ≤ 16.5
9.5 − 60 𝑋 − 60 16.5 − 60
=𝑃 ≤ ≤
24 24 24
16.5 − 60 9.5 − 60
≈ 𝛷 −𝛷
24 24
=⋯
1 −𝑥 𝑛−1
𝑓𝑛 𝑥 = ⎾𝑛 𝑒 𝑥 𝑥>0
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
m. g. f. of 𝑋𝑛
∞
1
𝑀𝑋𝑛 𝑡 = 𝑒 𝑡𝑥 𝑒 −𝑥 𝑥 𝑛−1 𝑑𝑥
⎾𝑛 0
∞
1
= 𝑒 −𝑥 1−𝑡
𝑥 𝑛−1 𝑑𝑥
⎾𝑛 0
1
= = (1 − 𝑡)𝑛
(1 − 𝑡)𝑛
𝑋𝑛
m. g. f. of 𝑌𝑛 = 𝑛
𝑖𝑠
𝑋𝑛 𝑡 −𝑛
𝑀𝑌𝑛 𝑡 = 𝐸 𝑒 𝑡 𝑛 = 1− ⟶ 𝑒 𝑡 𝑎𝑠 𝑛 → ∞
𝑛
↑ m.g.f. of a r.v. degenerate at x = 1
Yₙ ⟶ X (degenerate at 1)
(9)
1 −𝑥 𝑝−1
𝑥 = ⎾𝑝𝛼 𝑝 𝑒 𝑥 𝑥 > 0 𝛼 = 2, 𝑝 = 4
𝛼
𝑓𝑋𝑛
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
𝐸 𝑋𝑛 = 𝛼𝑝 ; 𝑉 𝑋𝑛 = 𝛼 2 𝑝 = 16 = 𝜎 2
16 1
𝐸 𝑋 = 8; 𝑉 𝑋 = =
𝑛 4
𝑛 𝑋 −𝛼𝑝
By C LT 𝛼 2𝑝
→ 𝑍 ∼ 𝑁 0, 1
𝑖𝑒. 2 𝑋 − 8 ⟶ 𝑍 ∼ 𝑁 0,1
≈ 𝑃 −2 < 𝑧 < 2
= 𝛷 2 − 𝛷 −2 = 2𝛷 2 − 1
(10)
2
1
𝑋𝑖 ∼ 𝑈 0, 2 𝐸𝑋𝑖 = 𝑥 𝑑𝑥 = 1
2 0
2
1 4 1
𝐸𝑋𝑖 2 = 𝑥 2 𝑑𝑥 = ; 𝑉 𝑋𝑖 =
2 0 3 3
1
𝑋1 , … , 𝑋𝑛 i. i. d. with E𝑋1 = 1 & 𝑉𝑋1 = 3
1
𝐵𝑦 𝐶𝐿𝑇 𝑛 𝑋𝑛 − 1 ⟶ 𝑁 0, .
3
1
𝑖. e. 𝑛 𝑌𝑛 − 1 ⟶ 𝑁 0, 3 .
Problem Set-10
1 𝑥
𝑓𝑥 𝑥 = exp − ; 𝑥 > 0
𝛽 𝛽
𝑛
Show that X̅= 𝑖=1 𝑋𝑖 /𝑛 is an unbiased estimator of𝛽.
𝑓 𝑥 = 𝛽 exp −𝛽 𝑥 ; 𝑥 > 0
1
Show that X̅ is an unbiased estimator of .
𝛽
[4] Let X₁, X₂, …, Xₙ be a random sample from N(θ, θ²), θ > 0. Show that (Σᵢ₌₁ⁿ Xᵢ)² / (n(n+1)) and Σᵢ₌₁ⁿ Xᵢ² / (2n) are both unbiased estimators of θ².
[5] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be a random sample from P( 𝜃); 𝜃 >0. Find an unbiased estimator of 𝜃 𝑒 −2𝜃 .
(An estimator satisfying the condition in (b) is said to be unbiased in the limit)
𝜇 𝜇
[7] 𝑋1 , … , 𝑋𝑛 be a random sample from N 𝜇, 𝜎 2 , 𝜇 ∈ ℜ, 𝜎 ∈ ℜ+ . Find unbiased estimators of 𝑎𝑛𝑑 .
𝜎2 𝜎
[8] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be a random sample from B(1, 𝜃); 0 ≤ 𝜃 ≤1. Find an unbiased estimator of
𝜃 2 (1 − 𝜃).
[9] Using the Neyman–Fisher Factorization Theorem, find a sufficient statistic based on a random sample
X₁, X₂, …, Xₙ from each of the following distributions.
1 𝑥
exp − 𝛼 𝑖𝑓 𝑥 > 0
(a) 𝑓𝛼 𝑥 = 𝛼
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
exp − 𝑥 − 𝛽 𝑖𝑓 𝑥 > 𝛽
(b) 𝑓𝛽 𝑥 =
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
1 𝑥−𝛽
(c) 𝑓𝛼,𝛽 𝑥 = 𝛼
exp − 𝛼
𝑖𝑓 𝑥 > 𝛽
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
1 𝑙𝑜𝑔 𝑥 1 −𝜇 2
(d) 𝑓𝜇 ,𝜎 𝑥 = 𝑥𝜎 2𝜋
exp − 2𝜎 2
𝑖𝑓 𝑥 > 0
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
1 𝜃 𝜃
(e) 𝑓𝜃 𝑥 = 𝜃
–2 ≤ 𝑥 ≤ 2
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
exp 𝑖𝜃 − 𝑥 𝑖𝑓 𝑥 ≥ 𝑖𝜃
𝑓𝑥 𝑖 𝑥 =
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
[12] Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be a random sample from a Beta (𝛼, 𝛽 ) distribution 𝛼 > 0, 𝛽 > 0 𝑤𝑖𝑡 𝑝. 𝑑. 𝑓.
⎾𝛼 + 𝛽 𝛼−1
𝑓 𝑥 = 𝑥 (1 − 𝑥)𝛽 −1 0 < 𝑥 < 1
⎾𝛼⎾𝛽
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
Show that
𝑛
(a) 𝑖=1 𝑋𝑖 is sufficient for 𝛼 if 𝛽 is known to be a given constant.
𝑛
(b) 𝑖=1(1 − 𝑋𝑖 ) is sufficient for 𝛽 if 𝛼 is known to be given constant.
𝑛 𝑛
(c) 𝑖=1 𝑋𝑖 , 𝑖=1 1 − 𝑋𝑖 is jointly sufficient for (𝛼, 𝛽) if both the parameters are unknown.
[13] Let T and T* be two statistics such that T = Ψ(T*). Show that if T is sufficient then T* is also sufficient.
1 1
[14] 𝑋1 , … , 𝑋𝑛 be a random sample from U 𝜃 − 2 , 𝜃 + 2 , 𝜃 ∊ ℜ. Find a sufficient statistic for 𝜃.
−𝑖𝜃 𝑥 𝑖
𝑓𝑖 𝑥𝑖 = 𝑖𝜃 𝑒 𝑥𝑖 > 0
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
Find a sufficient statistic for 𝜃.
Solution Key
𝛽
1 ∞
(1) 𝐸 𝑋 = ∫ 𝑥
𝛽 0
𝑒 −𝑥 𝑑𝑥 = 𝛽
1 1
⟹𝐸 𝑋 =𝐸 𝑋𝑖 = 𝐸 𝑋𝑖 = 𝛽
𝑛 𝑛
⟹ 𝑋 𝑖𝑠 𝑢. 𝑒. 𝑜𝑓 𝛽
𝑛
𝜃𝑛
𝑥 𝑛−1 0 < 𝑥 < 𝜃
(2) 𝑓𝑋 𝑛 𝑥 =
0 𝑜/𝑤
𝑛 𝜃 𝑛 𝑛
𝐸 𝑋𝑛 𝑛
= 𝑥 𝑑𝑥 = 𝜃
𝜃 0 𝑛+1
𝑛+1
⟹𝐸 𝑋𝑛 = 𝜃
𝑛
𝑛+1
⟹ 𝑋 𝑛 𝑖𝑠 𝑢. 𝑒. 𝑜𝑓 𝜃.
𝑛
1
0<𝑥<𝜃
Also 𝑓𝑋 𝑥 = 𝜃
0 𝑜/𝑤
𝜃
𝐸 𝑋 =
2
2
⟹ 𝐸 2𝑋 = 𝐸 𝑋𝑖
𝑛
2
= 𝐸(𝑋𝑖 ) = 𝜃
𝑛
⟹ 2 X̅ is u. e. for 𝜃
∞ 1
(3) 𝐸 𝑋 = 𝛽 ∫0 𝑥 𝑒 −𝛽𝑥 𝑑𝑥 = 𝛽
𝐸 𝑋 = 𝛽
⟹ 𝑋𝑖𝑠 𝑢. 𝑒. 𝑜𝑓 𝛽.
1 1
𝐸 𝑋 =𝐸 𝑋𝑖 = 𝐸 𝑋𝑖
𝑛 𝑛
(4) 𝑇1 = 𝑋𝑖 , 𝑇2 = 𝑋𝑖 2
𝐸 𝑇1 2 = 𝑉 𝑇1 + 𝐸 2 𝑇1
= 𝑛𝜃 2 + 𝑛2 𝜃 2 = 𝜃 2 𝑛 𝑛 + 1
𝑇1 2
⟹𝐸 = 𝜃2
𝑛 𝑛+1
𝑇1 2
⟹ 𝑖𝑠 𝑢. 𝑒. 𝑜𝑓 𝜃 2
𝑛 𝑛+1
𝐸 𝑇2 = 𝐸 𝑋𝑖 2 = 𝐸 𝑋𝑖 2
= 𝑉 𝑋𝑖 + 𝐸 2 𝑋𝑖
= 𝜃 2 + 𝜃 2 = 2𝑛𝜃 2
𝑇2
⟹ 𝑖𝑠 𝑢. 𝑒. 𝑜𝑓 𝜃 2
2𝑛
(5) 𝑔 𝜃 = 𝜃𝑒 −2𝜃
1 𝑖𝑓 𝑋1 = 0, 𝑋2 = 0
𝛿0 𝑋 =
0 𝑜/𝑤
𝐸 𝛿0 𝑋 = 1. 𝑃 𝑋1 = 0, 𝑋2 = 1
= 𝑃 𝑋1 = 0 𝑃 𝑋2 = 1
𝑒 −𝜃 𝜃1
= 𝑒 −𝜃 . = 𝜃𝑒 −2𝜃
1!
⟹ 𝛿0 𝑋 is u. e. of𝜃𝑒 −2𝜃 .
(6) 𝑋1 , … , 𝑋𝑛 i. i. d. B(1, 𝜃)
𝑋𝑖 ∼ 𝐵 𝑛, 𝜃
1 1
2 𝑛 + 𝐸( 𝑋𝑖 ) 2 𝑛 + 𝑛𝜃
𝐸 𝑇 𝑋 = = ≠𝜃
𝑛+ 𝑛 𝑛+ 𝑛
⟹ T(X̲) is not u. e. of 𝜃.
1
+ 𝑛𝜃
lim 𝐸(𝑇 𝑋 ) = lim 2 = 𝜃
𝑛→∞ 𝑛 →∞ 𝑛 + 𝑛
⟹ 𝑇 𝑋 is unbiased in the limit for 𝜃
(7) 𝑋1 , … , 𝑋𝑛 r. s. from 𝜇, 𝜎 2
𝜎2
𝑋 ∼ 𝑁 𝜇, 𝑛
⟹ > 𝑖𝑛𝑑𝑒𝑝.
(𝑛 − 1)𝑆 2 2
𝑌= ∼ 𝜒𝑛−1
𝜎2
2
If Z∼ 𝜒𝑚 , 𝑡𝑒𝑛
∞
1 1 𝑧 𝑚
𝐸 = 𝑚 𝑧 −1 𝑒 −2 𝑧 2 −1 𝑑𝑧
𝑧 2 2 𝑚/2 0
∞
1 𝑧 𝑚
= 𝑚
𝑒 −2 𝑧 2 −1−1 𝑑𝑧
𝑚 0
22 2
𝑚 𝑚
2 − 1 2 2 −1
1
= =
𝑚 𝑚 𝑚−2
22 2
∞
1 1 𝑧 𝑚 1
&𝐸 = 𝑚
𝑒 −2 𝑍 2 −2−1 𝑑𝑧
𝑧 𝑚 0
22 2
𝑚 − 1 𝑚2 −12 𝑚−1
2 2 2
= 𝑚 𝑚
=
𝑚
22 2 2 2
1 𝜎2 1 1
⟹𝐸 = 𝐸 2
= =
𝑌 𝑛−1 𝑠 𝑛−1 −2 𝑛−3
1 𝑛−1 1
⟹𝐸 2 = . .
𝑠 𝑛 − 3 𝜎2
𝑛 −2
1 𝜎 2
&E 𝑌
= 𝐸 𝑛−1 𝑆
= 𝑛 −1
2⎾
2
𝑛−1
1 𝑛 − 1⎾ 2 1
⟹𝐸 = =
𝑆 𝑛−1 𝜎
2⎾ 2
𝑠𝑖𝑛𝑐𝑒 X̅ & 𝑠 2 are indep.
X 1
𝐸 2
= 𝐸 𝑋 .𝐸 2
𝑠 𝑠
𝑛−1 1
= 𝜇. . .
𝑛 − 3 𝜎2
𝑛−3 𝑋 𝜇
⟹𝐸 .
𝑛−1 𝑠 2
= 𝜎2
𝑛−3 𝑋 𝜇
⟹ . is an unbiased estimator of 𝜎 2 .
𝑛−1 𝑠 2
Further
X 1
𝐸 = 𝐸 𝑋 .𝐸
𝑠 𝑆
𝑛−1
𝑛 − 1⎾ 2 1
= 𝜇. .
𝑛−1 𝜎
2⎾ 2
𝑛−1
2 ⎾ 2 X 𝜇
⟹𝐸 =
𝑛 − 1 ⎾𝑛 − 1 𝑠 𝜎
2
𝑛 −1
2 ⎾ 2 X 𝜇
⟹ . . is an unbiased estimator of .
𝑛−1 ⎾𝑛 −1 𝑠 𝜎
2
(8) 𝑋1 , … , 𝑋𝑛 are i. i. d. B(1, 𝜃)
𝑔 𝜃 = 𝜃 2 (1 − 𝜃)
1 𝑖𝑓 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 0
𝐷𝑒𝑓𝑖𝑛𝑒 (X̲)=
0 𝑜/𝑤
𝐸𝜃 δ X = P 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 0
= P 𝑋1 = 1 𝑃 𝑋2 = 1 𝑃 𝑋3 = 0
= 𝜃 2 (1 − 𝜃)
⟹ (X̲) is an u. e. of g(𝜃)= 𝜃 2 (1 − 𝜃).
(9)
𝑥
1
(a) f(x│𝛼)= 𝛼 𝑒 −𝛼 ; 𝑥 > 0
1
1
jt p. d. f. 𝑓 𝑥 𝛼 = 𝛼 𝑛 𝑒 −𝛼 𝑥𝑖
𝑋1 , … , 𝑋𝑛 > 0
1 1
= 𝑛
𝑒 −𝛼 𝑥𝑖
.1
𝛼
𝑛
𝑖. 𝑒. 𝑓 𝑥 𝛼 = 𝑔 𝛼, 𝑥𝑖 . 𝑥 𝑥 = 1 .
1
𝑛
By NFFT, T (X̲) = 1 𝑥𝑖 is suff for 𝛼
𝑒− (𝑥 𝑖 −𝛽)
, 𝑋1 , … , 𝑋𝑛 > 𝛽
𝑓 𝑥𝛽 =
0 𝑜/𝑤
𝑒− 𝑥 𝑖 +𝑛𝛽
, 𝑥1 >𝛽
𝑖. 𝑒. 𝑓 𝑥 𝛽 =
0 𝑜/𝑤
1 𝑎<1
𝑖. 𝑒. 𝑓 𝑥 𝛽 = 𝑒 𝑛𝛽 − 𝑥𝑖
𝐼 𝛽,𝑥 1 𝐼 𝑎,𝑏 =
0 𝑜/𝑤
= 𝑒− 𝑥𝑖
𝑒 𝑛𝛽 𝐼 𝛽,𝑥 1
= 𝑥 𝑔 𝛽, 𝑥 1
1 − 𝑥𝑖 𝑛𝛽
(9) (c) 𝑓 𝑥 𝛼, 𝛽 = 𝛼 𝑛 exp 𝛼
+ 𝛼
𝐼 𝛽 ,𝑥 1
1 − 𝑥𝑖 𝑛𝛽
= 𝑛
exp + .𝐼 𝛽,𝑥 1 .1
𝛼 𝛼 𝛼
↓
=𝑔 𝛼, 𝛽 ; 𝑥𝑖 𝑥 1 . (𝑥)
1 𝑛 1 1
𝑛
(9)(d) 𝑓 𝑥 𝜇, 𝜎 = 𝑖=1 𝑥 . exp − (𝑙𝑜𝑔𝑥𝑖 − 𝜇)2
𝜎 2𝜋 𝑖 2𝜎 2
𝑛 𝑛
1 𝑛𝜇2 1 𝜇 2
1
= 𝑛 exp − 2 − 2 (𝑙𝑜𝑔𝑥𝑖 ) + 2 𝑙𝑜𝑔𝑥𝑖 × 𝑥𝑖 −1
𝜎 2𝜎 2𝜎 𝜎 2𝜋 𝑖=1
𝑛 𝑛
2
1
= 𝑔 𝜇, 𝜎 ; 𝑙𝑜𝑔𝑥𝑖 , (𝑙𝑜𝑔𝑥𝑖 ) . (𝑥) 𝑥𝑖 −1
2𝜋 𝑖=1
1 𝜃
; − 2 < 𝑥1 … 𝑥𝑛 < 𝜃/2
(9)(e)f(x̰│𝜃)= 𝜃𝑛
0 𝑜/𝑤
1 𝜃
; │𝑥𝑖 │ < ; 𝑖 = 1, 2, … 𝑛
= 𝜃𝑛 2
0 𝑜/𝑤
1 𝜃
; Max𝑖 │𝑥𝑖 │ < 2 ; 𝑖 = 1, 2, … 𝑛
i.e. f(x̰│𝜃)= 𝜃𝑛
0 𝑜/𝑤
1 𝜃
⟹ f (x̰│𝜃)= 𝐼 Max𝑖 │𝑥𝑖 │ ,
𝜃𝑛 2
𝛽 −1 1 𝑛
⎾𝛼+𝛽 𝑛 𝛼−1
𝜋 1 − 𝑥𝑖 ⎾𝛽
𝜋𝑥𝑖 0 < 𝑥1 , … , 𝑥𝑛 < 1
f (x̰│𝛼) = ⎾𝛼
𝑥
0 𝑜/𝑤
𝑛
By NFFT 𝑖=1 𝑋𝑖 is suff for 𝛼.
𝛼−1 1 𝑛
⎾𝛼+𝛽 𝑛 𝛽 −1 𝜋𝑥𝑖 ⎾𝛼
𝜋 1 − 𝑥𝑖 0 < 𝑥1 , … , 𝑥𝑛 < 1
f (x̰│𝛼) = ⎾𝛽
𝑥
0 𝑜/𝑤
𝑛
By NFFT 𝑖=1(1 − 𝑋𝑖 ) is suff for 𝛽.
⎾𝛼+𝛽 𝑛 𝛼−1 𝛽 −1
𝜋𝑥𝑖 𝜋 1 − 𝑥𝑖 . 1 0 < 𝑥1 , … , 𝑥𝑛 < 1
f (x̰│𝛼, 𝛽) = ⎾𝛼⎾𝛽
0 𝑜/𝑤
i.e. f (x̰│𝜃) = 𝐼 1
𝜃− ,𝑥 1
𝐼 𝑥 𝑛 ,𝜃+
1
2 2
= 𝑔 𝜃, 𝑥 1 , 𝑥 𝑛 (𝑥)
𝑛
𝑛
= 𝑖 𝜃 𝑛 𝑒 −𝜃 𝑖=1 𝑖𝑥 𝑖
𝑖=1
𝑛
= 𝑥 𝑔 𝜃, 𝑖𝑥𝑖
𝑖=1
𝑛
By NFFT, T(X)= 𝑖=1 𝑖𝑋𝑖 is sufficient for 𝜃.
Problem Set-11
[1] Let 𝑋1 , … , 𝑋𝑛 be a random sample from P(𝜃), 𝜃∊ (0, ∞). Show that T= 𝑛𝑖=1 𝑋𝑖 is complete
sufficient statistic. Find the Uniformly Minimum Variance Unbiased Estimator (UMVUE) of the
following parametric functions: (a) g(𝜃) = 𝜃, (b) g(𝜃) = 𝑒 −𝜃 (c) g(𝜃) = 𝑒 −𝜃 (1 + 𝜃).
[2] Suppose 𝑋1 , … , 𝑋𝑛 be a random sample from B(1, 𝜃), 𝜃∊ (0, 1). Show that T= 𝑛𝑖=1 𝑋𝑖 is
complete sufficient statistic and hence find the UMVUE for each of the following parametric
functions : (a) g(𝜃) = 𝜃, (b) g(𝜃)= 𝜃 4 and (c) g(𝜃) = 𝜃 1 − 𝜃 2 .
[4] 𝑋1 , … , 𝑋𝑛 is a random sample from U(0, 𝜃), 𝜃 > 0. Show that T=𝑋 𝑛 = max
{𝑋1 , … , 𝑋𝑛 } is a
𝑘
complete sufficient statistic and find the UMVUE of g(𝜃) =𝜃 ; 𝑘 > −𝑛.
[6] Let 𝑋1 , … , 𝑋𝑛 be a random sample from U(𝜃 – ½ , 𝜃 + ½). Show that sufficient statistic is not
complete.
[7] Suppose the statistic T is UMVUE of 𝜃 such that V(T) ≠ 0. Show that 𝑇 2 cannot be UMVUE of 𝜃 2 .
[8] Let 𝑋1 , … , 𝑋𝑛 be a random sample from N(0, 𝜃). Find the UMVUE of𝜃 2 .
[9] Let 𝑋1 , … , 𝑋𝑛 be a random sample from N(𝜇, 𝜃). Find the UMVUE of (a) 𝜃 when 𝜇 is known and
(b) 𝜃 when 𝜇 is not known.
[10] Let 𝑋1 , … , 𝑋𝑛 be a random sample from N (𝜇, 𝜎 2 ). Find the UMVUE of (a) 𝜎 𝑟 when 𝜇 is known,
(b) 𝜎 𝑟 when 𝜇 is not known and (c) 𝛿, where 𝛿 is given by P X≥ 𝛿)= p for a given p.
[11] 𝑋1 , … , 𝑋𝑛 is a random sample from U (0, 𝜃), 𝜃 > 0. Consider the following 3 estimators for 𝜃;
𝑛+1
𝑇1 𝑋 = 𝑋(𝑛) , 𝑇2 𝑋 = 2𝑋 𝑎𝑛𝑑 𝑇3 𝑋 = 𝑋(1) + 𝑋(𝑛) .
𝑛
Show that all the estimators are unbiased for θ. Among the three estimators, which one would you
prefer and why?
[13] Let 𝑋1 , … , 𝑋𝑛 be a random sample from Exp (a, b). Assuming completeness of the associated
sufficient statistic, find the (a) UMVUE of a when b is known and (b) UMVUE of b when a is known.
Solution Key
E(T) = n𝜃 ⟹ E(T/n) = 𝜃
(b) g(𝜃) = 𝑒 −𝜃
1, 𝑖𝑓 𝑋1 = 0
Consider 𝛿0 𝑋 =
0, 𝑜/𝑤
E𝛿0 𝑋 = 𝑃 𝑋1 = 0 = 𝑒 −𝜃 ⟹ 𝛿0 𝑋 𝑖𝑠 𝑢. 𝑒. 𝑜𝑓𝑒 −𝜃
Rao-blackwellization of 𝛿0 𝑋
(t) = E(𝛿0 𝑋 │𝑇 = 𝑡)
𝑡
𝑒 − 𝑛−1 𝜃
𝑛−1 𝜃
𝑒 −𝜃 𝑡!
𝑃 𝑋1 = 0 𝑃 𝑛𝑖=2 𝑋𝑖 = 𝑡
= =
𝑃 𝑇=𝑡 𝑒 −𝑛𝜃 𝑛𝜃 𝑡
𝑡!
𝑡
𝑛−1
=
𝑛
𝑛−1 𝑇
(T)= 𝑛
is u. e. based on CSS
𝑛−1 𝑇
⟹ 𝑛
is UMVUE of 𝑒 −𝜃 .
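Unbiasedness of ((n − 1)/n)^T can be confirmed by simulation. The sketch below uses Knuth's multiplication method to draw Poisson variates (the standard library has no Poisson sampler); θ = 2 and n = 5 are arbitrary illustrative values.

```python
import math
import random

rng = random.Random(7)

def poisson(lam):
    # Knuth's multiplication method for one Poisson(lam) variate
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

theta, n, reps = 2.0, 5, 200_000
# T = X_1 + ... + X_n ~ Poisson(n * theta); the claimed UMVUE of
# e^{-theta} is ((n - 1) / n) ** T
est = sum(((n - 1) / n) ** poisson(n * theta) for _ in range(reps)) / reps
target = math.exp(-theta)  # est should be close to this
```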
(c)g(𝜃) = 𝑒 −𝜃 (1 + 𝜃)
1, 𝑖𝑓 𝑋1 = 0
𝛿0 𝑋 =
0, 𝑜/𝑤
E𝛿0 𝑋 = 𝑃 𝑋1 ≤ 1 = 𝑃 𝑋1 = 0 + 𝑃(𝑋1 = 1)
= 𝑒 −𝜃 + 𝜃𝑒 −𝜃 = 𝑒 −𝜃 (𝜃 + 1)
⟹ 𝛿0 𝑋 is u.e. of 𝑒 −𝜃 (𝜃 + 1).
Rao-blackwellization of 𝛿0 𝑋
(t) = E(𝛿0 𝑋 │𝑇 = 𝑡)
= P(𝑋1 ≤ 1│𝑇 = 𝑡)
P 𝑋1 ≤ 1 𝑇 = 𝑡
=
𝑃 𝑇=𝑡
𝑃 𝑋1 = 0 ∪ 𝑋1 = 1
=
𝑃 𝑇=𝑡
𝑃 𝑋1 = 0, 𝑇 = 𝑡) + 𝑃( 𝑋1 = 1, 𝑇 = 𝑡
=
𝑃 𝑇=𝑡
𝑛 𝑛
𝑃 𝑋1 = 0, 2 𝑋𝑖 = 𝑡 + 𝑃 𝑋1 = 1, 2 𝑋𝑖 = 𝑡 − 1
=
𝑃 𝑇=𝑡
𝑛 𝑛
𝑃 𝑋1 = 0 𝑃 2 𝑋𝑖 = 𝑡 + 𝑃 𝑋1 = 1 𝑃 2 𝑋𝑖 = 𝑡 − 1
=
𝑃 𝑇=𝑡
𝑡 𝑡−1
𝑒 − 𝑛−1 𝜃
𝑛−1 𝜃 𝑒 − 𝑛−1 𝜃
𝑛−1 𝜃
𝑒 −𝜃 𝑡! + 𝜃𝑒 −𝜃
𝑡−1 !
=
𝑒 −𝑛𝜃 𝑛𝜃 𝑡
𝑡!
𝑡
𝑛−1 𝑡
= 1+
𝑛 𝑛−1
𝑛−1 𝑇 𝑇
(T)= 𝑛
1 + 𝑛−1 is u.e. of g(𝜃) based on C.S.S T
𝑛−1 𝑇 𝑇
⟹ 𝑛
1 + 𝑛−1 is UMVUE of g(𝜃) = 𝑒 −𝜃 (1 + 𝜃).
T ∼B n, 𝜃)
f(x)= 𝜃 𝑥 (1 − 𝜃)1−𝑥
𝜃 𝑥
= 1−𝜃
(1 − 𝜃)
𝜃
= exp(x log 1−𝜃
+ log(1 − 𝜃))
𝜃
With h(x) =1, (𝜃)= log , T(x)= x & 𝛽(𝜃)= - log (1-𝜃)
1−𝜃
(a) g(𝜃)= 𝜃
𝑇
E(T) = n𝜃 ⟹ E 𝑛
=𝜃
𝑇
𝑛
u. e. based on CSS T
𝑇
⟹ is UMVUE of 𝜃.
𝑛
(b) g(𝜃)= 𝜃 4
1,
𝑖𝑓 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 1, 𝑋4 = 1
𝛿0 𝑋 =
0, 𝑜/𝑤
E𝛿0 𝑋 = 𝑃 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 1, 𝑋4 = 1 = 4𝑖=1 𝑃 𝑋𝑖 = 1 = 𝜃 4
⟹ 𝛿0 𝑋 is u.e. of 𝜃 4 .
Rao-blackwellization of 𝜃 4
(t) = E(𝛿0 𝑋 │𝑇 = 𝑡)
= 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 1, 𝑋4 = 1│𝑇 = 𝑡
𝑛
𝑃 𝑋1 = 1, 𝑋2 = 1, 𝑋3 = 1, 𝑋4 = 1, 𝑖=5 𝑋𝑡 =𝑡−4
=
𝑃 𝑇=𝑡
𝑛
𝑃 𝑋1 = 1 𝑃 𝑋2 = 1 𝑃 𝑋3 = 1 𝑃 𝑋4 = 1 𝑃 𝑖=5 𝑋𝑡 =𝑡−4
=
𝑃 𝑇=𝑡
𝑛−4
𝜃. 𝜃. 𝜃. 𝜃. 𝑡−4
𝜃 𝑡−4 1 − 𝜃 𝑛−𝑡
= 𝑛
𝑡
𝜃 𝑡 1 − 𝜃 𝑛−𝑡
𝑛−4
𝑡−4 𝑡 𝑡 − 1 𝑡 − 2 (𝑡 − 3)
= 𝑛 =
𝑡
𝑛 𝑛 − 1 𝑛 − 2 (𝑛 − 3)
⟹(T) is UMVUE of 𝜃 4 .
2
(c) g(𝜃) = 𝜃 1 − 𝜃
1, 𝑖𝑓 𝑋1 = 1, 𝑋2 = 0, 𝑋3 = 0
𝛿0 𝑋 =
0, 𝑜/𝑤
2
E𝛿0 𝑋 = 𝑃 𝑋1 = 1, 𝑋2 = 0, 𝑋3 = 0 = θ 1 − 𝜃
2
⟹ 𝛿0 𝑋 is u.e. of θ 1 − 𝜃
Rao-blackwellization of 𝛿0 𝑋
(t) = E(𝛿0 𝑋 │𝑇 = 𝑡)
= 𝑃 𝑋1 = 1, 𝑋2 = 0, 𝑋3 = 0 𝑇 = 𝑡
𝑃 𝑋1 = 1, 𝑋2 = 0, 𝑋3 = 0, 𝑛4 𝑋𝑖 = 𝑡 − 1
=
𝑃 𝑇=𝑡
𝑃 𝑋1 = 1 𝑃 𝑋2 = 0 𝑃 𝑋3 = 0 𝑃 𝑛4 𝑋𝑖 = 𝑡 − 1
=
𝑃 𝑇=𝑡
𝜃. 1 − 𝜃 . 1 − 𝜃 𝑛−3𝑡−1
𝜃 𝑡−1 1 − 𝜃 𝑛−3−𝑡+1
= 𝑛
𝑡
𝜃 𝑡 1 − 𝜃 𝑛−𝑡
𝑛−3
𝑛−3 !
𝑡−1 𝑡−1 ! 𝑛−𝑡−2 !
= 𝑛 =
𝑛!
𝑡
𝑡! 𝑛 − 𝑡 !
𝑡. 𝑛 − 𝑡 . (𝑛 − 𝑡 − 1)
=
𝑛 𝑛 − 1 (𝑛 − 2)
𝑇. 𝑛−𝑇 .(𝑛−𝑇−1)
(T)= 𝑛 𝑛−1 (𝑛−2)
is u.e. based on C.S.S. T
⟹(T) is UMVUE of θ 1 − 𝜃 2 .
− 𝑥−𝜃
(3) 𝑋1 , … , 𝑋𝑛 r.s. from f(x)= 𝑒 𝑖𝑓 𝑥 > 𝜃
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒
By NFFT T= 𝑋(1) is sufficient.
𝑛𝑒 −𝑛 𝑡−𝜃
𝑖𝑓 𝑡 > 𝜃
𝑓𝑇 𝑡 =
0 𝑜/𝑤
Now E g(T)= 0 ∀ 𝜃∊ 𝛩
∞
⟹ ∫𝜃 𝑔(𝑡) 𝑛𝑒 −𝑛 𝑡−𝜃
𝑑𝑡 = 0 ∀ 𝜃∊ 𝛩
∞
i.e. ∫𝜃 𝑔(𝑡) 𝑒 −𝑛𝑡 𝑑𝑡 = 0 ∀ θ ∊ Θ
g(𝜃) 𝑒 −𝑛𝜃 = 0 ∀ θ ∊ Θ
Range of T=𝑋(1)
g(𝜃)= 𝜃 2
∞
E𝑋 1 =∫𝜃 𝑡 𝑛𝑒 −𝑛 𝑡−𝜃
𝑑𝑡
∞
⎾2 1 1
=𝑛 𝑦 + 𝜃 𝑒 −𝑛𝑦 𝑑𝑦 = 𝑛 2
+ 𝜃. = 𝜃 +
0 𝑛 𝑛 𝑛
1
E(𝑋 1 − 𝑛 )= 𝜃
2 ∞
Sly E 𝑋 1 = 𝑛 ∫𝜃 𝑡 2 𝑒 −𝑛 𝑡−𝜃
𝑑𝑡
y= t- 𝜃
∞ 2
= n∫0 𝑦 + 𝜃 𝑒 −𝑛𝑦 𝑑𝑦
∞
=𝑛 (𝑦 2 + 𝜃 2 + 2𝜃𝑦 𝑒 −𝑛𝑦 𝑑𝑦
0
⎾3 1 ⎾2
=𝑛 3
+ 𝜃 2 + 2𝜃 2
𝑛 𝑛 𝑛
2 2𝜃
= 2
+ 𝜃2 +
𝑛 𝑛
2 2 2 1
⟹E𝑋 1 = 𝑛2
+ 𝜃2 + 𝑛 𝐸 𝑋 1 −𝑛
2 2 2 1
⟹𝐸 𝑋 1 − − 𝑋 1 − = 𝜃2
𝑛2 𝑛 𝑛
2 2 2 2
𝑖. 𝑒. 𝐸(𝑋 1 − − 𝑋 1 + = 𝜃2
𝑛2 𝑛 𝑛2
2 2
⟹E 𝑋 1 −𝑛𝑋 1 = 𝜃2
2 2
𝑋 1 −𝑛𝑋 1 is u.e. of 𝜃 2 based on C.S.S. 𝑋 1
2 2
⟹𝑋 1 −𝑛𝑋 1 is UMVUE of 𝜃 2
g(𝜃) = 𝜃 𝑘
𝑛
𝜃𝑛
𝑡 𝑛−1 , 0 < 𝑡 < 𝜃
p.d.f. of T; 𝑓𝑇 𝑡
0 , 𝑜/𝑤
𝑘 𝑛 𝜃 𝑛
E 𝑋𝑛
= 𝜃 𝑛 ∫0 𝑡 𝑘 𝑡 𝑛−1 𝑑𝑡 = 𝑛+𝑘 𝜃 𝑘 .
𝑛+𝑘
⟹𝐸 𝑛
𝑋𝑛 𝑘 = 𝜃𝑘 .
𝑛+𝑘 𝑘
𝑛
𝑋 𝑛 is u.e. based of C.S.S. 𝑋 𝑛
𝑛+𝑘 𝑘
⟹ 𝑛
𝑋 𝑛 is UMVUE of 𝜃 𝑘 .
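The same kind of Monte Carlo check applies to ((n + k)/n) X₍ₙ₎ᵏ; θ = 2, n = 5 and k = 2 below are arbitrary illustrative values.

```python
import random

rng = random.Random(3)
theta, n, k, reps = 2.0, 5, 2, 200_000
total = 0.0
for _ in range(reps):
    xmax = max(rng.uniform(0.0, theta) for _ in range(n))  # X_(n)
    total += ((n + k) / n) * xmax ** k  # the claimed UMVUE of theta^k
est = total / reps  # should be close to theta**k = 4
```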
p.d.f.
1
⎾2𝜃 2
𝑒 −𝑥/𝜃 𝑥 2−1
𝑖𝑓 𝑥 > 0
f(x)=
0 𝑜/𝑤
1
with h(x)= x ; 𝜂(𝜃)= − 𝜃 𝑇 𝑥 = 𝑥 ; 𝛽 𝜃 = 2 𝑙𝑜𝑔𝜃
the above is 1-parameter expo family distn.
𝑇
⟹𝐸 2𝑛
= 𝜃
𝑇 𝑛
is u.e. based on C.S.S. T= 𝑖=1 𝑋𝑖
2𝑛
𝑇
⟹ is UMVUE of 𝜃.
2𝑛
i.e. f(x)= 1. 𝐼 1
𝜃− ,𝑥 1
𝐼 𝑥 𝑛 , 𝜃+
1
2 2
1, 𝑎 < 𝑏
𝐼 𝑎,𝑏 =
0, 𝑜/𝑤
𝑛−1 1 1
𝑓𝑋 1 𝑥 = 𝑛 1 − 𝐹𝑋 𝑥 𝑓𝑋 𝑥 ; 𝜃 − <𝑥<𝜃+
2 2
𝑛−1
1 1 1
= 𝑛 𝜃−𝑥+ , 𝜃− <𝑥<𝜃+
2 2 2
0 𝑜/𝑤
1 𝑛−1
𝜃+
2 1 1 𝑛
𝐸𝑋 1 =𝑛 𝑥 𝜃−𝑥+ 𝑑𝑥 = 𝜃 + −
𝜃−
1 2 2 𝑛+1
2
𝑛−1 1 1
𝑓𝑋 𝑛 𝑥 = 𝑛 𝐹𝑋 𝑥 𝑓𝑋 𝑥 ; 𝜃 − <𝑥<𝜃+
2 2
𝑛−1
1 1 1
= 𝑛 𝑥−𝜃+ , 𝜃− <𝑥<𝜃+
2 2 2
0 𝑜/𝑤
1 𝑛−1
𝜃+
2 1 𝑛 1
𝐸𝑋 𝑛 =𝑛 𝑥 𝑥−𝜃+ 𝑑𝑥 = − +𝜃
𝜃−
1 2 𝑛+1 2
2
𝑛−1
⟹𝐸 𝑋 𝑛 − 𝑋 1 − 𝑛+1 = 0 ∀ 𝜃
𝑛−1
⇏𝑋 𝑛 − 𝑋 1 = 𝑛+1 a.e.
𝑛−1
i.e. with g(T)= 𝑋 𝑛 − 𝑋 1 − 𝑛+1
E g(T)= 0 ∀ 𝜃 ⇏ g t = 0 a.e.
⟹ T= 𝑋 𝑛 − 𝑋 1 is not complete.
(7) T is UMVUE of 𝜃
⟹ E(T)= 𝜃
⟹𝐸 𝑇 2 = 𝜃 2
⟹ V(T)= 𝐸 𝑇 2 − 𝐸 2 (𝑇)
i.e. V(T)= 𝜃 2 − 𝜃 2 = 0
which is a contradiction
⟹𝑇 2 cannot be u.e. of 𝜃 2
⟹ 𝑇 2 cannot be UMVUE of 𝜃 2
𝑇 𝑇 𝑇
∼ 𝜒𝑛 2 ; 𝐸 = 𝑛; 𝑉 = 2𝑛
𝜃 𝜃 𝜃
𝐸 𝑇 = 𝑛𝜃; 𝑉 𝑇 = 2𝑛𝜃 2
2
𝐸𝑇 2 = 𝑉 𝑇 + 𝐸 𝑇 = 2𝑛𝜃 2 + 𝑛2 𝜃 2 = 𝑛 𝑛 + 2 𝜃 2
𝑇2
⟹𝐸 = 𝜃2
𝑛 𝑛+2
𝑇2
𝑛 𝑛+2
is u.e. based on C.S.S. 𝑋𝑖 2 = 𝑇
𝑇2
⟹𝑛 𝑛+2
is UMVUE of 𝜃 2 .
𝑇 (𝑋𝑖 − 𝜇)2
𝐸 =𝑛 ∼ 𝜒𝑛 2
𝜃 𝜃
(𝑋 𝑖 −𝜇 )2
⟹ is UMVUE for 𝜃 when 𝜇 is known.
𝑛
𝑛 2 1
𝑋𝑖 , 𝑖=1 𝑋𝑖 ⇔ 𝑋, 𝑠 2 = 𝑛−1 (𝑋𝑖 − 𝑋)2 𝑖𝑠 𝐶. 𝑆. 𝑆. (2-parameter full rank expo family)
𝑛 − 1 𝑠2 (𝑋𝑖 − 𝑋)2
= ∼ 𝜒𝑛−1 2
𝜃 𝜃
(𝑋𝑖 − 𝑋)2
⟹𝐸 = 𝜃
𝑛−1
(𝑋 𝑖 −𝑋 )2
is u.e. of 𝜃 based on C.S.S.
𝑛−1
(𝑋 𝑖 −𝑋 )2
⟹ 𝑛−1
is UMVUE of 𝜃.
g(𝜎)= 𝜎 𝑟
𝑇
Y= 𝜎 2 ∼ 𝜒𝑛 2
∞
𝑟 1 𝑟 𝑦 𝑛
𝐸 𝑌2 = 𝑛 𝑦 2 𝑒 −2 𝑦 −2 −1 𝑑𝑦
𝑛
22 ⎾ 2 0
∞
1 𝑦 𝑛+𝑟
= 𝑛 𝑒 −2 𝑦 − 2
−1
𝑑𝑦
𝑛
22 ⎾ 2 0
𝑟 𝑛+𝑟
22 ⎾ 2
= 𝑛
⎾2
𝑟 𝑟 𝑛+𝑟
𝑟 𝑇2 22 ⎾ 2
⟹𝐸 𝑌2 =𝐸 = 𝑛
𝜎𝑟 ⎾2
𝑛
⎾2
⟹𝐸 𝑟 = 𝜎𝑟
𝑛+𝑟
22 ⎾
2
𝑛 𝑟
⎾
𝑟
2
𝑛 +𝑟
𝑇 2 is u.e. of 𝜎 𝑟 based on C.S.S.
22 ⎾
2
𝑛 𝑟
⎾
⟹ 𝑟
2
𝑛 +𝑟
(𝑋𝑖 − 𝜇)2 2 is UMVUE of 𝜎 𝑟 (when 𝜇 is known).
22 ⎾
2
= (𝑋, 𝑆𝑋 2 )
𝑆𝑋 2
Y= 𝜎2
∼ 𝜒𝑛−1 2
𝑟
𝑟 𝑛 −1+𝑟
𝑆𝑋 𝑟 22 ⎾
2
𝐸 𝑌 2 =𝐸 𝜎𝑟
= 𝑛 −1 (following the derivation in part (a))
⎾
2
𝑛 −1
⎾
⟹𝐸 𝑟
2
𝑛 −1+𝑟
𝑆𝑋 𝑟 = 𝜎 𝑟
22 ⎾
2
𝑛 −1
⎾
𝑟
2
𝑛 −1+𝑟
𝑆𝑋 𝑟 is u.e. of 𝜎 𝑟 based on C.S.S.
22 ⎾
2
𝑛 −1
⎾
⟹ 𝑟
2
𝑛 −1+𝑟
𝑆𝑋 𝑟 is UMVUE of 𝜎 𝑟 .
22 ⎾
2
𝑋−𝜇 𝛿−𝜇 𝛿 −𝜇
(c)p= P X ≤𝛿) = P 𝜎
≤ 𝜎
= 𝛷 𝜎
↓
Known 𝜃= (𝜇, 𝜎 ’.
𝑛 −1
⎾
2
Note that E(X̅ )= 𝜇 & E 1
𝑛
𝑆𝑋 = 𝜎 ↖(from part (b) with r= 1)
22 ⎾
2
𝑛 −1
⎾
⟹ [X̅ + 1
2
𝑛
𝑆𝑋 𝛷−1 (𝑝) ] is u.e. of 𝜇 + 𝜎 𝛷−1 (𝑝) based on C.S.S.
22 ⎾
2
𝑛 −1
⎾
⟹ X̅ + 1
2
𝑛
𝑆𝑋 𝛷−1 (𝑝) is UMVUE of 𝛿= 𝜇 + 𝛷−1 (𝑝) .
2 ⎾
2
2
𝜃
𝐸𝑋𝑖 = ∀ 𝑖
2
𝑛 𝜃
𝐸𝑋 𝑛 = 𝜃 & 𝐸𝑋 1 =
𝑛+1 𝑛+1
𝑛+1
⟹ 𝐸 𝑇1 𝑋 = 𝐸𝑋 𝑛 = 𝜃
𝑛
𝜃 𝑛
𝐸 𝑇2 𝑋 = 𝜃 & 𝐸 𝑇3 𝑋 = + 𝜃=𝜃
𝑛+1 𝑛+1
1
𝑋, 𝑆 2 = (𝑋𝑖 − 𝑋)2 𝑖𝑠 𝐶. 𝑆. 𝑆
𝑛−1
𝑛−1 𝑠 2
∼ 𝜒𝑛−1 2
𝜎2
2
𝐸 𝑋2 = 𝑉 𝑋 + 𝐸 𝑋
𝜎2
i.e. 𝐸 𝑋 2 = 𝑛
+ 𝜇2
1
𝐸 𝑋2 = 𝐸 𝑆 2 + 𝜇2
𝑛
1
i.e. 𝐸 𝑋 2 − 𝑛 𝑆 2 = 𝜇2
1
⟹ 𝑋 2 − 𝑛 𝑆 2 is u.e. of 𝜇2 based only on the C. S. S.
1
⟹𝑋 2 − 𝑛 𝑆 2 is UMVUE of 𝜇2 .
If g(𝜇, 𝜎) = 𝜇 + 𝜎
𝐸 𝑋 = 𝜇
𝑛 −1 1
⎾
And E 2
𝑛 (𝑋𝑖 − 𝑋)2 2 = 𝜎 (from problem # 10 (c)/(b))
2⎾
2
𝑛 −1 1
⎾
⟹E 𝑋+ 2
𝑛 (𝑋𝑖 − 𝑋)2 2 = 𝜇+𝜎
2⎾
2
𝑛 −1 1
⎾
⟹𝑋+ 2
𝑛 (𝑋𝑖 − 𝑋)2 2 is u.e. of 𝜇 + 𝜎 based on C.S.S.
2⎾
2
𝑛 −1 1
⎾
⟹𝑋 + 2
𝑛 (𝑋𝑖 − 𝑋)2 2 is UMVUE of 𝜇+ 𝜎.
2⎾
2
This is exponential distn with location parameter ‘a’ and scale parameter ‘b’
𝑏
⟹𝑋 1 − 𝑛 is UMVUE of a, when b is known
Remark:
[when both a & b are unknown, then
𝑏 2 2
𝑋 1 ∼ 𝐸𝑥𝑝 𝑎, & 𝑋𝑖 − 𝑋 1 ∼ 𝜒2 𝑛−1
𝑛 𝑏
Problem Set-12
[1] 𝑋1 , … , 𝑋𝑛 be a random sample from N(𝜇, 𝜎 2 ) distribution. Find the Cramer-Rao Lower Bounds
(CRLB) on the variances of unbiased estimators of 𝜇 and𝜎 2 . Can you find unbiased estimators 𝜇 and
𝜎 2 whose variance attains the respective CRLB?
𝛼is assumed to be known. Find the Fisher Information I(𝛽) and CRLB on the variances of unbiased
estimators of 𝛽.
[3] 𝑋1 , … , 𝑋𝑛 be a random sample from P(𝜃), 𝜃∊(0, ∞). Find the CRLB on the variances of unbiased
estimators of the following estimands: (a) g(𝜃)= 𝜃 , (b) g(𝜃)= 𝜃 2 and (c) g(𝜃)= 𝑒 −𝜃 .
[4] Suppose 𝑋1 , … , 𝑋𝑛 be a random sample from B(1, 𝜃), 𝜃∊ (0, 1). Find the CRLB on the variances of
unbiased estimators of the following estimands: (a) g(𝜃)= 𝜃 4 (b) g(𝜃)= 𝜃(1- 𝜃).
𝑛
[5] 𝑋1 , … , 𝑋𝑛 be a random sample from U(0, 𝜃), 𝜃 > 0. Show that (a) 𝑛+1 𝑋(𝑛) is a consistent
estimator of 𝜃 and (b) 𝑒 𝑋 (𝑛 ) is consistent for 𝑒 𝜃 , where 𝑋(𝑛) = max 𝑋1 , … , 𝑋𝑛 .
1 1 𝑋 1 +𝑋 𝑛
𝑋 1 + 2,𝑋 𝑛 − 2 𝑎𝑛𝑑 2
are all consistent estimators of 𝜃, 𝑋 𝑛 = max 𝑋1 , … , 𝑋𝑛 and
𝑋 1 = min 𝑋1 , … , 𝑋𝑛 .
[10] Let 𝑋1 , … , 𝑋𝑛 be a random sample from each of the following distributions having the following
density or mass functions. Find the maximum likelihood estimator (MLE) of 𝜃 in each case.
𝑒 −𝜃 𝜃 𝑥
(a) f(x; 𝜃)= 𝑥 = 0, 1, 2, …
𝑥!
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
𝜃−1
(b) f(x; 𝜃)= 𝜃 𝑥 0<𝑥<1
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
1
𝑒 −𝑥/𝜃 𝑥 > 0
(c) f(x; 𝜃)= 𝜃
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
1
𝑒 −∣𝑥−𝜃∣ − ∞ < 𝑥, ∞
(d) f(x; 𝜃)= 2
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
𝜃 𝜃
e X∼ U − 2 , 2 .
𝜆𝛼
f(x ; 𝛼, 𝜆)= ⎾𝛼
𝑒 −𝜆𝑥 𝑥 𝛼−1 𝑥 ≥ 0
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
[14] Let 𝑋1 , … , 𝑋𝑛 be a random sample from U (𝜃- ½ , 𝜃 + ½ ), 𝜃∊ℜ. Show that any statistic
u(𝑋1 , … , 𝑋𝑛 ) such that it satisfies
1 1
𝑋(𝑛) − ≤ 𝑢 𝑋1 , … , 𝑋𝑛 ≤ 𝑋(1) +
2 2
𝑋 1 +𝑋 𝑛 3 1 1 1
Is a maximum likelihood estimator of 𝜃. in particular 𝑎𝑛𝑑 𝑋 1 + + 𝑋 𝑛 − are
2 4 2 4 2
MLEs of 𝜃.
[15] The lifetimes of a component are assumed to be exponential with parameter 𝜆. Ten of these
components were placed on a test independently. The only data recorded were the number of
components that had failed (out of 10 put to test) in less than 100 hours, which was recorded to be
3. Find the maximum likelihood estimate of 𝜆.
[16] A salesman of used cars is willing to assume that the number of sales he makes per day is a
Poisson random variable with parameter μ. Over the past 30 days he made no sales on 20 days and
one or more sales on each of the remaining 10 days. Find the maximum likelihood estimate of μ.
[17] Let 𝑋1 , … , 𝑋𝑛 be a random sample from each of the following distributions. Find the method of
moments estimator (MOME) of the corresponding unknown parameters in each of the situations.
Solution Key
1 1 𝜕 log 𝑓 𝑥 − 𝜇 𝜕 2 log 𝑓 1
log 𝑓 = 𝑘 − log 𝜎 2 − 𝑥−𝜇 2
= ; = − 2.
2 2 𝜎2 𝜕𝜇 𝜎 2 𝜕𝜇 2 𝜎
𝜕 2 log 𝑓 1
−𝐸 2
= 2 = 𝐼(𝜇)
𝜕𝜇 𝜎
𝜎2
CRLB for an u.e. for 𝜇= 𝑛
.
𝜎2
Since V(X̅ )= 𝑛
; X̅ attains CRLB.
𝜕 log 𝑓 1 1 2
= − + 𝑥−𝜇
𝜕𝜎 2 2 𝜎2 2 𝜎4
𝜕 2 log 𝑓 1 𝑥−𝜇 2
= − .
𝜕 𝜎2 2 2 𝜎4 𝜎6
𝜕 2 log 𝑓 1 1 1
𝐼 𝜎 2 = −𝐸 2 2
= − 4
+ 4=
𝜕 𝜎 2𝜎 𝜎 2 𝜎4
2𝜎 4
CRLB for an u.e. for𝜎 2 = 𝑛
.
1 𝑛
Now 𝑆 2 = 𝑛−1 1 (𝑋𝑖 − 𝑋)2 is UMVUE for 𝜎 2 with
2𝜎 4
V(𝑆 2 ) = > CRLB
𝑛−1
Since UMVUE is the unbiased with lowest variance in the class of all unbiased estimators, CRLB
can’t be attained by any unbiased estimator of 𝜎 2 .
𝑥
1 −
(2) f x∣𝛼, 𝛽) = ⎾𝛼 𝛽 𝛼 𝑒 𝛽 𝑥 𝛼−1 𝑖𝑓 𝑥 > 0
𝑥
log 𝑓 = −𝑙𝑜𝑔⎾𝛼 − 𝛼 log 𝛽 − + 𝛼 − 1 log 𝑥
𝛽
𝜕 log 𝑓 𝛼 𝑥
= − + 2
𝜕𝛽 𝛽 𝛽
𝜕 2 log 𝑓 𝛼 𝑥
2
= 2 − 2 3.
𝜕𝛽 𝛽 𝛽
𝜕 2 log 𝑓 𝛼 𝛼𝛽 𝛼
𝐼 𝛽 = −𝐸 2
= − 2 + 2 3 = 2.
𝜕𝛽 𝛽 𝛽 𝛽
1 𝛽2
⟹ CRLB for u.e. of 𝛽 : 𝛼 = 𝑛𝛼 .
𝑛. 2
𝛽
𝑒 −𝜃 𝜃 𝑥
f x∣𝜃)= 𝑥!
𝜕 log 𝑓 𝑥 𝜕 2 log 𝑓 𝑥
= −1 + ; 2
= − 2.
𝜕𝜃 𝜃 𝜕𝜃 𝜃
𝜕 2 log 𝑓 1
𝐼 𝜃 = −𝐸 2
=
𝜕𝜃 𝜃
1 𝜃
CRLB for any u.e. of 𝜃= 1 =𝑛
𝑛.
𝜃
(2𝜃)2 4𝜃 3
CRLB for any u.e. of g (𝜃) = 𝜃 2 : 𝑛 = 𝑛
𝜃
2
−𝑒 −𝜃 𝜃 𝑒 −2𝜃
CRLB for any u.e. of g (𝜃) = 𝑒 −𝜃 ∶ 𝑛 = 𝑛
.
𝜃
f x∣𝜃)= 𝜃 𝑥 1 − 𝜃 1−𝑥
𝜕 log 𝑓 𝑥 1−𝑥 𝑥 1
= + −1 = − .
𝜕𝜃 𝜃 1−𝜃 𝜃 1−𝜃 1−𝜃
𝜕 log 𝑓 2
𝜕 log 𝑓 𝜃(1 − 𝜃) 1
𝐸 𝜃 =𝐸 =𝑉 = 2 = .
𝜕𝜃 𝜕𝜃 𝜃 1−𝜃 𝜃(1 − 𝜃)
2
4𝜃 3 16𝜃 7 (1−𝜃)
CRLB for u.e. of 𝜃 4 : 1 =
𝑛. 𝑛
𝜃 (1−𝜃 )
𝑛 𝑛−1
𝑥 , 0<𝑥<𝜃
𝑓𝑋 𝑛 𝑥 = 𝜃𝑛
0, 𝑜/𝑤
𝜃
𝑛 𝑛
𝐸𝑋 𝑛 = 𝑛 𝑥 𝑛 𝑑𝑥 = .𝜃
𝜃 0 𝑛+1
𝜃
2 𝑛 𝑛
𝐸𝑋 𝑛 = 𝑥 𝑛+1 𝑑𝑥 = 𝜃2
𝜃𝑛 0 𝑛+2
1 𝑛 𝑛
⟹ P[∣ 𝑋 𝑛 − 𝜃 ∣ ≥ ϵ] ≤ 2
𝜃 2 + 𝜃 2 − 2𝜃 .𝜃
𝜖 𝑛+2 𝑛+1
→ 0 as n→∞
⟹𝑋 𝑛 → 𝜃.
𝑛
⟹ 𝑋𝑛 →𝜃
𝑛+1
𝑛
⟹ 𝑛+1 𝑋 𝑛 is a consistent estimator for 𝜃
Further since 𝑋 𝑛 →𝜃
𝑒𝑋 𝑛 = 𝑔 𝑋 𝑛 → 𝑔 𝜃 = 𝑒𝜃 .
𝑛−1
𝑓𝑋 1 𝑥 = 𝑛 1 − 𝐹𝑋 𝑥 𝑓 𝑥
𝑛 −1
1 1 1
𝑖. 𝑒. 𝑓𝑋 1 𝑥 = 𝑛 𝜃−𝑥+ ,𝜃 − ≤𝑥≤𝜃+
2 2 2
0 , 𝑜/𝑤
1 𝑛−1
𝜃+
2 1 1 𝑛
𝐸𝑋 1 =𝑛 𝑥 𝜃−𝑥+ 𝑑𝑥 = 𝜃 + − .
𝜃−
1 2 2 𝑛+1
2
1 𝑛−1
𝜃+
2 2 1
𝐸𝑋 1 = 𝑥2 𝜃 − 𝑥 + 𝑑𝑥
𝜃−
1 2
2
2
1 𝑛 𝑛
= 𝜃+ + − 2𝜃 + 1
2 𝑛+2 𝑛+1
2
1
𝐸 𝑋 1 − 𝜃−2
1
𝑃 𝑋𝑛 − 𝜃− ≥ϵ ≤
2 𝜖2
1 2 1 2 1
r.h.s. = 𝜖 2 𝐸 𝑋 1 + 𝜃−2 −2 𝜃−2 𝐸 𝑋 1
2 2
1 1 𝑛 𝑛 1 1 1 𝑛
= 𝜃+ + − 2𝜃 + 1 + 𝜃− −2 𝜃− 𝜃+ −
𝜖2 2 𝑛+2 𝑛+1 2 2 2 𝑛+1
2 2
1 1 1 1 1
⟶ 2 𝜃+ + 1 − 2𝜃 + 1 + 𝜃− −2 𝜃− 𝜃− 𝑎𝑠 𝑛 → ∞
𝜖 2 2 2 2
=0
1
⟹𝑃 𝑋 𝑛 − 𝜃− ≥ ϵ ⟶ 0 𝑎𝑠 𝑛 → ∞
2
1
⟹𝑋 1 ⟶ 𝜃 − 2. ___________(1)
𝑋 1 +𝑋 𝑛
⟶𝜃
2
𝑋 1 +𝑋 𝑛
⟹ is a consistent estimator for 𝜃
2
So,
1
𝑋 1 + is a consistent estimator for 𝜃 (from (1))
2
1
&𝑋 𝑛 − is a consistent estimator for 𝜃 (from (2)).
2
1
1 + 𝜃𝑥 − 1 < 𝑥, 1
(7) 𝑋1 , … , 𝑋𝑛 i. i. d. 𝑓𝑋 𝑥 = 2
0 𝑜𝑡𝑒𝑟𝑤𝑖𝑠𝑒.
1
1 𝜃
𝐸 𝑋 = (1 + 𝜃𝑥) 𝑑𝑥 =
2 −1 3
𝜃
⟹ 𝑋1 , … , 𝑋𝑛 are i. i. d. with 𝐸 𝑋1 = 3
By Khintchine's WLLN
𝑛
1
𝑋𝑖 ⟶ 𝐸 𝑋1
𝑛
1
𝜃
𝑖. 𝑒. 𝑋 ⟶ ⟹ 3𝑋 ⟶ 𝜃
3
(8) 𝑋1 , … , 𝑋𝑛 i. i. d. P (𝜃)
𝐸 𝑋𝑖 = 𝜃 ∀ 𝑖 = 1(1)𝑛
By WLLN 𝑋𝑛 ⟶ 𝜃
⟹ g(𝑋𝑛 )⟶g(𝜃)
3
⟹𝑋𝑛 3 𝑋𝑛 + 𝑋𝑛 + 12 ⟶ 𝜃 3 3 𝜃 + 𝜃 + 12
3
⟹ 𝑋𝑛 3 𝑋𝑛 + 𝑋𝑛 + 12 is a consistent estimator for 𝜃 3 3 𝜃 + 𝜃 + 12 .
𝛼 is known
𝐸 𝑋 = 𝛼𝛽 𝑓𝑜𝑟 𝑋 ∼ 𝐺𝑎𝑚𝑚𝑎 𝛼, 𝛽
By WLLN
𝑛
1
𝑋𝑖 ⟶ 𝐸 𝑋1
𝑛
1
𝑛
1
𝑖. 𝑒. 𝑋𝑖 ⟶ 𝛼𝛽
𝑛
1
1 𝑛
⟹𝑛𝛼 1 𝑋𝑖 ⟶ 𝛽.
1 𝑛
⟹ 𝑛𝛼 1 𝑋𝑖 is a consistent estimator for 𝛽.
𝑛
[Note: T= 1 𝑋𝑖 ∼ 𝐺𝑎𝑚𝑚𝑎 𝑛𝛼, 𝛽 can be proved using m. g. f. approach]
∂l/∂θ = (Σxᵢ)/θ − n; ∂l/∂θ = 0 ⟹ θ̂ = x̄
∂²l/∂θ² |_{θ=θ̂} = −(Σxᵢ)/θ̂² < 0
⟹ θ̂_MLE = X̄
L(θ|x) = θⁿ (∏₁ⁿ xᵢ)^{θ−1}
∂l/∂θ = n/θ + Σ log xᵢ
∂l/∂θ = 0 ⟹ θ̂ = −n / Σ₁ⁿ log xᵢ
∂²l/∂θ² |_{θ=θ̂} = −n/θ̂² < 0 ⟹ θ̂_MLE = −n / Σ₁ⁿ log Xᵢ.
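A simulation sketch for the model with likelihood θⁿ(∏xᵢ)^{θ−1}, i.e. density f(x|θ) = θx^{θ−1} on (0, 1) (the support is implied by the likelihood; this check is my addition): since F(x) = x^θ, one can sample X = U^{1/θ} by inverse CDF and verify the closed-form MLE above.

```python
import numpy as np

# Sketch: inverse-CDF sampling X = U**(1/theta) for f(x|theta) = theta*x**(theta-1),
# then the closed-form MLE theta_hat = -n / sum(log x_i) should land near theta.
rng = np.random.default_rng(4)
theta, n = 2.5, 100_000
x = rng.uniform(size=n) ** (1 / theta)
theta_hat = -n / np.log(x).sum()
print(theta_hat)   # should be near 2.5
```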
f_X(x) = (1/2) e^{−|x−θ|}, −∞ < x < ∞.
L(θ|x) is maximized iff Σ |xᵢ − θ| is minimized
⟹ θ̂ = median(x₁, …, xₙ)
⟹ θ̂_MLE = median(X₁, …, Xₙ).
(10) (e) X₁, …, Xₙ i.i.d. U(−θ/2, θ/2)
Likelihood function L(θ|x) = 1/θⁿ, if −θ/2 ≤ x₁, …, xₙ ≤ θ/2; 0, otherwise
i.e. L(θ|x) = 1/θⁿ, if |xᵢ| ≤ θ/2, i = 1(1)n; 0, otherwise
i.e. L(θ|x) = 1/θⁿ, if max_i |xᵢ| ≤ θ/2; 0, otherwise
Since 1/θⁿ is decreasing in θ, L is maximized at the smallest admissible θ
⟹ θ̂_MLE = 2 max_i |Xᵢ|.
θ̂₁,MLE = X(1)
θ̂₂,MLE = (1/n) Σ₁ⁿ (Xᵢ − X(1))
Done in class.
(12) X₁, …, Xₙ i.i.d. f_X(x) = (λ^α/Γ(α)) e^{−λx} x^{α−1}, x ≥ 0; 0, otherwise.
L(θ|x) = (λ^{nα}/Γ(α)ⁿ) e^{−λ Σ₁ⁿ xᵢ} (∏₁ⁿ xᵢ)^{α−1}
Likelihood equations:
∂ log L/∂λ = nα/λ − Σ xᵢ = 0 __________(1)
∂ log L/∂α = n log λ − n Γ′(α)/Γ(α) + Σ log xᵢ = 0 __________(2)
(1) ⟹ λ̂ = α̂/x̄.
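Equation (2) has no closed form, but substituting λ = α/x̄ from (1) reduces it to one equation in α: log α − ψ(α) = log x̄ − (1/n) Σ log xᵢ, where ψ = Γ′/Γ is the digamma function. A numerical sketch (my addition; the parameter values are illustrative):

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

# Sketch: solve log(alpha) - digamma(alpha) = log(x-bar) - mean(log x) for alpha,
# then recover lambda = alpha / x-bar from likelihood equation (1).
rng = np.random.default_rng(5)
alpha_true, lam_true, n = 3.0, 2.0, 50_000
x = rng.gamma(shape=alpha_true, scale=1 / lam_true, size=n)
rhs = np.log(x.mean()) - np.log(x).mean()
alpha_hat = brentq(lambda a: np.log(a) - digamma(a) - rhs, 1e-6, 1e6)
lam_hat = alpha_hat / x.mean()
print(alpha_hat, lam_hat)   # should be near (3.0, 2.0)
```

The bracket [1e-6, 1e6] works because log α − ψ(α) decreases from +∞ to 0 while the right-hand side is strictly positive by Jensen's inequality.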
Likelihood function
L((μ, σ)|x) = (1/(2√3 σ))ⁿ, if μ − √3σ ≤ x(1) ≤ ⋯ ≤ x(n) ≤ μ + √3σ; 0, otherwise.
μ − √3σ ≤ x(1) & x(n) ≤ μ + √3σ
⟹ μ ≤ x(1) + √3σ & x(n) − √3σ ≤ μ
⟹ x(n) − √3σ ≤ μ ≤ x(1) + √3σ
μ ∊ [x(n) − √3σ, x(1) + √3σ] (o/w L((μ, σ)|x) = 0)
In particular,
μ̂(σ)_MLE = [(X(n) − √3σ) + (X(1) + √3σ)]/2 = (X(n) + X(1))/2.
Observe that
√3σ ≥ μ − x(1) & √3σ ≥ x(n) − μ.
At the MLE of μ,
√3σ ≥ (x(n) − x(1))/2
⟹ σ̂_MLE = (X(n) − X(1))/(2√3).
L is maximized w.r.t. θ iff
θ − 1/2 ≤ x(1) & x(n) ≤ θ + 1/2, and then max_θ L = 1,
i.e. x(n) − 1/2 ≤ θ ≤ x(1) + 1/2.
In particular, (X(1) + X(n))/2 is an MLE of θ.
In general, α(X(1) + 1/2) + (1 − α)(X(n) − 1/2), ∀ 0 < α < 1, is an MLE of θ.
With α = 3/4, we have the above estimator as
(3/4)(X(1) + 1/2) + (1/4)(X(n) − 1/2), which is an MLE of θ.
f_X(x) = (1/λ) e^{−x/λ}, x > 0; 0, otherwise.
Define Yᵢ = 1, if the i-th component has life < 100 hrs; 0, o/w.
P(Yᵢ = 1) = P(X < 100) = (1/λ) ∫₀^{100} e^{−x/λ} dx = 1 − e^{−100/λ}
Y₁, …, Yₙ are i.i.d. B(1, 1 − e^{−100/λ}) ≡ B(1, θ)
(n = 10) with θ = 1 − e^{−100/λ}.
From the given data, Ȳ = 3/10, so θ̂_MLE = Ȳ = 3/10.
Since λ = −100/log(1 − θ), the MLE of λ from the given data is −100/log(7/10).
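Evaluating the resulting estimate numerically (a small computational addition):

```python
import math

# theta-hat = y-bar = 3/10, and theta = 1 - exp(-100/lambda) inverts to
# lambda = -100 / log(1 - theta); plug in theta-hat (MLE invariance).
theta_hat = 3 / 10
lam_hat = -100 / math.log(1 - theta_hat)
print(lam_hat)   # about 280.4 hours
```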
Define Yᵢ = 1, if 0 sales on day i; 0, o/w.
P(Yᵢ = 1) = P(X = 0) = e^{−μ} = θ, say;
θ̂_MLE = Ȳ
Further, μ = −log θ, so the MLE of μ is given by −log(20/30).
(17) (a) X₁, …, Xₙ i.i.d. P(θ)
μ₁′ = E(X) = θ
MOME of θ: θ̂_MOME = m₁′ = X̄
(b) X₁, …, Xₙ i.i.d. U(−θ/2, θ/2)
Done in class.
Assignment-13
[1] The observed value of the mean of a random sample of size 20 from N(μ, 80) is 81.2. Find the equal-tail
95% and the equal-tail 99% confidence intervals for μ. Which one is shorter?
[2] Let X̅ be the mean of a random sample of size n from N (𝜇, 9). Find n such that, approximately, P (X̅ -1
<𝜇 < X̅ +1) = 0.90
[3] Let a random sample of size 25 from a normal distribution N(μ, σ²) yield x̄ = 4.7 and
s² = (1/(n−1)) Σᵢ₌₁ⁿ (xᵢ − x̄)² = 5.76. Determine a 90% confidence interval for μ.
[4] Let X₁, …, X₉ be a random sample of size 9 from N(μ, σ²). Find the
expected length of the 95% confidence interval for μ when (a) σ is known and (b) σ is unknown.
[5] Let X₁, …, Xₙ be a random sample of size n from
f(x|θ) = 3x²/θ³, 0 < x < θ; 0, otherwise.
(a) Find the distribution of X(n)/θ, where X(n) = max(X₁, …, Xₙ).
(b) Show that (X(n), α^{−1/(3n)} X(n)) gives a 100(1 − α)% confidence interval for θ.
[6] Let X₁, …, Xₙ be a random sample of size n from U(0, θ), θ > 0. Show that (X(n), α^{−1/n} X(n)) and
((1 − α)^{−1/n} X(n), ∞) are both 100(1 − α)% confidence intervals for θ.
[7] Let two independent random samples, each of size 5, from two normal distributions N(μ₁, σ₁²) and
N(μ₂, σ₂²) be 1.5, 2.8, 3.3, 3.9, 7.2 and 2.8, 1.8, 3.1, 6.5, 6.9, respectively.
Solution Key
(1) X̄ ∼ N(μ, σ²/n), i.e. N(μ, 4)
⟹ Y = (X̄ − μ)/2 ∼ N(0, 1)
⟹ 1 − α = P(−z_{α/2} ≤ (X̄ − μ)/2 ≤ z_{α/2}), where z_{α/2} is such that, for Z ∼ N(0, 1), P(Z > z_{α/2}) = α/2
⟹ 1 − α = P(X̄ − 2z_{α/2} ≤ μ ≤ X̄ + 2z_{α/2})
For 100(1 − α)% = 95%: α = 0.05, α/2 = 0.025, CI ⟶ (X̄ − 2z_{0.025}, X̄ + 2z_{0.025})
For 100(1 − α)% = 99%: α/2 = 0.005, CI ⟶ (X̄ − 2z_{0.005}, X̄ + 2z_{0.005})
Since z_{0.025} < z_{0.005}, the 95% interval is the shorter one.
(2) 1 − α = P(X̄ − 1 < μ < X̄ + 1) = P(−√n/3 ≤ (X̄ − μ)/(3/√n) ≤ √n/3)
= Φ(√n/3) − Φ(−√n/3) = 2Φ(√n/3) − 1 = 0.90 (given condition)
⟹ Φ(√n/3) = 0.95 = Φ(1.645)
⟹ √n = 3 × 1.645 ⟹ n ≈ 24.35, so take n = 25.
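The final step can be checked with a normal quantile (a sketch I added; scipy's norm.ppf returns the 0.95 quantile used above):

```python
import math
from scipy.stats import norm

# Solve Phi(sqrt(n)/3) = 0.95: sqrt(n) = 3 * z, then round n up to an integer.
z = norm.ppf(0.95)             # ≈ 1.645
n = math.ceil((3 * z) ** 2)
print(z, n)
```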
(3) X₁, …, Xₙ r.s. N(μ, σ²)
X̄ ∼ N(μ, σ²/n) and (n − 1)s²/σ² ∼ χ²_{n−1}, independently
⟹ (X̄ − μ)/(s/√n) ∼ t_{n−1}
P(−t_{α/2; n−1} ≤ (X̄ − μ)/(s/√n) ≤ t_{α/2; n−1}) = 1 − α
For 1 − α = 0.90, α = 0.1: t_{α/2; n−1} = t_{0.05, 24} = 1.711
⟹ P(X̄ − (s/√n) t_{0.05, 24} ≤ μ ≤ X̄ + (s/√n) t_{0.05, 24}) = 0.90
100(1 − α)% CI with observed x̄ = 4.7, s² = 5.76:
(4.7 − √(5.76/25) × 1.711, 4.7 + √(5.76/25) × 1.711) = ⋯
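Filling in the arithmetic left as "⋯" above, as a small computational sketch using scipy's t quantile:

```python
from scipy.stats import t

# x-bar = 4.7, s^2 = 5.76 (so s = 2.4), n = 25, 90% equal-tail t-interval.
n, xbar, s = 25, 4.7, 5.76 ** 0.5
tq = t.ppf(0.95, df=n - 1)         # t_{0.05, 24} ≈ 1.711
half = tq * s / n ** 0.5
print((xbar - half, xbar + half))  # roughly (3.88, 5.52)
```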
(4) X₁, …, X₉ r.s. N(μ, σ²)
(a) CI for μ when σ² is known, at the 95% level (here √n = 3):
P(−z_{α/2} ≤ (X̄ − μ)/(σ/3) ≤ z_{α/2}) = 0.95
With α = 0.05: P(−z_{0.025} ≤ (X̄ − μ)/(σ/3) ≤ z_{0.025}) = 0.95
= P(X̄ − (σ/3) × 1.96 ≤ μ ≤ X̄ + (σ/3) × 1.96) = 0.95 [z_{0.025} = 1.96]
CI: (X̄ − (σ/3) × 1.96, X̄ + (σ/3) × 1.96)
Length L = 2 × (σ/3) × 1.96
E(L) = 2 × (σ/3) × 1.96 = ⋯
(b) CI for μ when σ is unknown:
P(−t_{α/2; n−1} ≤ (X̄ − μ)/(s/√n) ≤ t_{α/2; n−1}) = 0.95
t_{0.025, 8} = 2.306
CI: (X̄ − (s/√n) t_{0.025, 8}, X̄ + (s/√n) t_{0.025, 8})
Length L = 2 × (s/√n) × 2.306
E(L) = (2 × 2.306/√n) × E(S)
Using (n − 1)s²/σ² ∼ χ²_{n−1}, let Y = (n − 1)s²/σ².
E(√Y) = E(√(n − 1) S/σ) = ∫₀^∞ y^{1/2} · [1/(2^{(n−1)/2} Γ((n−1)/2))] e^{−y/2} y^{(n−1)/2 − 1} dy
= [1/(2^{(n−1)/2} Γ((n−1)/2))] ∫₀^∞ e^{−y/2} y^{n/2 − 1} dy
= Γ(n/2) 2^{n/2} / [2^{(n−1)/2} Γ((n−1)/2)]
⟹ E(S) = [σ/√(n − 1)] · Γ(n/2) 2^{n/2} / [2^{(n−1)/2} Γ((n−1)/2)] = σ √(2/(n − 1)) · Γ(n/2)/Γ((n−1)/2)
⟹ E(L) = (2 × 2.306/√n) · σ √(2/(n − 1)) · Γ(n/2)/Γ((n−1)/2) = ⋯
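Evaluating E(L) for n = 9 (my numeric sketch of the formula above, in units of σ):

```python
import math

# Expected length of the 95% t-interval for n = 9, in units of sigma,
# using E(S) = sigma * sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2).
n, tq = 9, 2.306
es_over_sigma = math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)
el_over_sigma = 2 * tq / math.sqrt(n) * es_over_sigma
print(es_over_sigma, el_over_sigma)   # about 0.969 and 1.490
```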
(5) X₁, …, Xₙ r.s. from f(x|θ) = 3x²/θ³, 0 < x < θ
(a) X(n) = max(X₁, …, Xₙ)
f_{X(n)}(x) = n [F_X(x)]^{n−1} f_X(x), where F_X(x) = ∫₀^x (3y²/θ³) dy = x³/θ³
⟹ f_{X(n)}(x) = n (x³/θ³)^{n−1} · (3x²/θ³), 0 < x < θ
i.e. f_{X(n)}(x) = (3n/θ^{3n}) x^{3n−1}, 0 < x < θ
Let Y = X(n)/θ: f_Y(y) = (3n/θ^{3n}) (yθ)^{3n−1} θ, 0 < y < 1
i.e. f_Y(y) = 3n y^{3n−1}, 0 < y < 1
(b) P(X(n) ≤ θ ≤ α^{−1/(3n)} X(n)) = P(α^{1/(3n)} ≤ X(n)/θ ≤ 1)
= P(α^{1/(3n)} ≤ Y ≤ 1)
= 3n ∫_{α^{1/(3n)}}^{1} y^{3n−1} dy = (3n/3n)(1 − α) = 1 − α
⟹ (X(n), α^{−1/(3n)} X(n)) provides a 100(1 − α)% CI for θ.
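A Monte Carlo coverage check of this interval (my illustration; X is sampled by inverse CDF, X = θU^{1/3}, since F(x) = x³/θ³):

```python
import numpy as np

# Sketch: empirical coverage of (X_(n), alpha**(-1/(3n)) * X_(n)) for theta.
rng = np.random.default_rng(6)
theta, n, alpha, reps = 2.0, 8, 0.10, 100_000
xmax = (theta * rng.uniform(size=(reps, n)) ** (1 / 3)).max(axis=1)
cover = np.mean((xmax <= theta) & (theta <= alpha ** (-1 / (3 * n)) * xmax))
print(cover)   # should be near 1 - alpha = 0.90
```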
(6) X₁, …, Xₙ r.s. U(0, θ)
f_{X(n)}(x) = (n/θⁿ) x^{n−1}, 0 < x < θ
Let Y = X(n)/θ: f_Y(y) = n y^{n−1}, 0 < y < 1
P(X(n) ≤ θ ≤ α^{−1/n} X(n))
= P(1 ≤ θ/X(n) ≤ α^{−1/n})
= P(α^{1/n} ≤ X(n)/θ ≤ 1)
= n ∫_{α^{1/n}}^{1} y^{n−1} dy = (n/n)(1 − α) = 1 − α
⟹ (X(n), α^{−1/n} X(n)) is a 100(1 − α)% CI for θ.
Also, P((1 − α)^{−1/n} X(n) ≤ θ < ∞)
= P((1 − α)^{−1/n} ≤ θ/X(n) < ∞)
= P(0 < X(n)/θ ≤ (1 − α)^{1/n})
= n ∫₀^{(1−α)^{1/n}} y^{n−1} dy = 1 − α
⟹ ((1 − α)^{−1/n} X(n), ∞) is also a 100(1 − α)% CI for θ.
(7) X₁, …, X₅ i.i.d. N(μ₁, σ₁²), values (1.5, 2.8, 3.3, 3.9, 7.2)
Y₁, …, Y₅ i.i.d. N(μ₂, σ₂²), values (2.8, 1.8, 3.1, 6.5, 6.9)
(a) σ₁² = σ₂² = 3.5 = σ², say
X̄ ∼ N(μ₁, σ²/5) and Ȳ ∼ N(μ₂, σ²/5), independently
⟹ [X̄ − Ȳ − (μ₁ − μ₂)] / √(2σ²/5) ∼ N(0, 1)
100(1 − α)% CI for (μ₁ − μ₂) at α = 0.05:
(X̄ − Ȳ − z_{0.025} √(2 × 3.5/5), X̄ − Ȳ + z_{0.025} √(2 × 3.5/5))
(b) With μ₁ and μ₂ known, 5(X̄ − μ₁)²/σ₁² ∼ χ²₁ and 5(Ȳ − μ₂)²/σ₂² ∼ χ²₁ independently,
so [(X̄ − μ₁)²/(Ȳ − μ₂)²] · (σ₂²/σ₁²) ∼ F_{1,1}
⟹ P( (X̄ − μ₁)² / [(Ȳ − μ₂)² F_{1,1; α/2}] ≤ σ₁²/σ₂² ≤ (X̄ − μ₁)² / [(Ȳ − μ₂)² F_{1,1; 1−α/2}] ) = 1 − α
⟹ the 100(1 − α)% CI for σ₁²/σ₂² is
( (X̄ − μ₁)² / [(Ȳ − μ₂)² F_{1,1; α/2}], (X̄ − μ₁)² / [(Ȳ − μ₂)² F_{1,1; 1−α/2}] ).
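A coverage check for the F-based interval (my addition; it treats μ₁, μ₂ as known, as the derivation does, and simulates the sample means directly):

```python
import numpy as np
from scipy.stats import f

# Sketch: empirical coverage of the F_{1,1}-based interval for sigma1^2/sigma2^2.
rng = np.random.default_rng(7)
reps, n = 100_000, 5
mu1, mu2, s1, s2, alpha = 0.0, 0.0, 2.0, 1.0, 0.05
xbar = rng.normal(mu1, s1 / np.sqrt(n), size=reps)
ybar = rng.normal(mu2, s2 / np.sqrt(n), size=reps)
ratio = (xbar - mu1) ** 2 / (ybar - mu2) ** 2
lo = ratio / f.ppf(1 - alpha / 2, 1, 1)    # F_{1,1; alpha/2} upper point
hi = ratio / f.ppf(alpha / 2, 1, 1)        # F_{1,1; 1-alpha/2} upper point
cover = np.mean((lo <= (s1 / s2) ** 2) & ((s1 / s2) ** 2 <= hi))
print(cover)   # should be near 1 - alpha = 0.95
```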