
IIT-JAM Mathematical Statistics (MS) 2012 Solved Paper

1. An eigenvector of the matrix $M = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}$ is

(a) $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$
(b) $\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$
(c) $\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$
(d) $\begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix}$
Solution: (a) The eigenvalue of $M = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}$ is 2.

Let $X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$ be an eigenvector. Then

$(M - 2I)X = 0 \;\Rightarrow\; \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \;\Rightarrow\; y = 0,\ z = 0.$

Hence $(1, 0, 0)^T$ is an eigenvector.
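The eigenvector claim can be cross-checked numerically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

M = np.array([[2, 1, 0],
              [0, 2, 1],
              [0, 0, 2]], dtype=float)
v = np.array([1.0, 0.0, 0.0])

# If (1, 0, 0)^T is an eigenvector with eigenvalue 2, then M v = 2 v.
print(M @ v)                      # [2. 0. 0.]
print(np.allclose(M @ v, 2 * v))  # True
```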

2. The volume of the solid of revolution generated by revolving the area bounded by the curve $y = \sqrt{x}$ and the straight lines $x = 4$ and $y = 0$ about the x-axis, is
(a) $2\pi$
(b) $4\pi$
(c) $8\pi$
(d) $12\pi$

Solution: (c)

Volume of the solid of revolution about the x-axis:

$V = \int_0^4 \pi y^2\, dx = \int_0^4 \pi x\, dx = \left.\frac{\pi x^2}{2}\right|_0^4 = 8\pi.$
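A quick numerical cross-check of the disk-method integral (a sketch assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# V = ∫_0^4 π y^2 dx with y = sqrt(x), i.e. ∫_0^4 π x dx.
volume, _ = quad(lambda x: np.pi * np.sqrt(x) ** 2, 0, 4)
print(volume, 8 * np.pi)  # both ≈ 25.1327
```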
3. Let $I = \int_0^1 \int_{x^2}^{2-x} xy\, dy\, dx$. The change of order of integration in the integral gives I as
(a) $I = \int_0^1 \int_0^{\sqrt{y}} xy\, dx\, dy + \int_1^2 \int_0^{2-y} xy\, dx\, dy$
(b) $I = \int_0^1 \int_0^{2-y} xy\, dx\, dy + \int_1^2 \int_0^{2-y} xy\, dx\, dy$
(c) $I = \int_0^1 \int_0^{\sqrt{y}} xy\, dx\, dy + \int_1^2 \int_0^{2-y} xy\, dx\, dy$
(d) $I = \int_0^1 \int_0^{2-y} xy\, dx\, dy + \int_1^2 \int_0^{\sqrt{y}} xy\, dx\, dy$

Solution: (a) In the given integral, the region is bounded below by $y = x^2$ and above by $y = 2 - x$, for $x$ from 0 to 1.

Changing the order of integration (splitting the region at $y = 1$), we get

$I = \int_{y=0}^{1} \int_{x=0}^{\sqrt{y}} xy\, dx\, dy + \int_{y=1}^{2} \int_{x=0}^{2-y} xy\, dx\, dy.$
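Both orders of integration can be evaluated numerically to confirm that they agree; a sketch assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import dblquad

# Original order: outer x in (0, 1), inner y from x^2 to 2 - x; dblquad wants f(inner, outer).
I1, _ = dblquad(lambda y, x: x * y, 0, 1, lambda x: x**2, lambda x: 2 - x)

# Reversed order (option (a)): two pieces, with x as the inner variable.
I2a, _ = dblquad(lambda x, y: x * y, 0, 1, 0, lambda y: np.sqrt(y))
I2b, _ = dblquad(lambda x, y: x * y, 1, 2, 0, lambda y: 2 - y)

print(I1, I2a + I2b)  # both ≈ 0.375
```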

4. Let $L = \lim_{n \to \infty} n\left[ f\!\left(\tfrac{1}{n}\right) + f\!\left(\tfrac{2}{n}\right) + \cdots + f\!\left(\tfrac{k}{n}\right) - k f(0) \right]$,

where k is a positive integer. If $f(x) = \sin x$, then L is equal to

(a) $\frac{(k+1)(k+2)}{6}$
(b) $\frac{(k+1)(k+2)}{2}$
(c) $\frac{k(k+1)}{2}$
(d) $k(k+1)$
Solution: (c) $L = \lim_{n \to \infty} n\left[ f\!\left(\tfrac{1}{n}\right) + f\!\left(\tfrac{2}{n}\right) + \cdots + f\!\left(\tfrac{k}{n}\right) - k f(0) \right]$

$= \lim_{n \to \infty} n\left[ \sin\tfrac{1}{n} + \sin\tfrac{2}{n} + \cdots + \sin\tfrac{k}{n} - k \sin 0 \right]$

$= \lim_{n \to \infty} \left[ \frac{\sin(1/n)}{1/n} \cdot 1 + \frac{\sin(2/n)}{2/n} \cdot 2 + \cdots + \frac{\sin(k/n)}{k/n} \cdot k \right]$

$= 1 + 2 + \cdots + k = \frac{k(k+1)}{2}.$
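A numerical check of the limit for one value of k (a sketch assuming NumPy is available):

```python
import numpy as np

k, n = 5, 10**6
j = np.arange(1, k + 1)
L_approx = n * np.sum(np.sin(j / n) - np.sin(0))
print(L_approx, k * (k + 1) / 2)  # ≈ 15.0 and 15.0
```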

5. Let

$f(x, y) = \begin{cases} \sqrt{x^2 + y^2}\, \sin\!\left(\dfrac{1}{\sqrt{x^2 + y^2}}\right) & \text{if } (x, y) \neq (0, 0) \\ 0 & \text{if } (x, y) = (0, 0) \end{cases}$

Then at the point (0, 0)

(a) f is continuous and $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ exist.
(b) f is continuous and $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ do not exist.
(c) f is not continuous and $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ exist.
(d) f is not continuous and $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ do not exist.

Solution: (b)

Putting $x = r\cos\theta,\ y = r\sin\theta$:

$\lim_{(x,y) \to (0,0)} f(x, y) = \lim_{r \to 0} f(r\cos\theta, r\sin\theta) = \lim_{r \to 0} r \sin\frac{1}{r} = 0 = f(0, 0)$

So f(x, y) is continuous at (0, 0).

$\frac{\partial f}{\partial x}\Big|_{(0,0)} = \lim_{h \to 0} \frac{f(0 + h, 0) - f(0, 0)}{h} = \lim_{h \to 0} \frac{|h| \sin\frac{1}{|h|}}{h},$

which does not exist.

Similarly, $\frac{\partial f}{\partial y}\Big|_{(0,0)} = \lim_{k \to 0} \frac{f(0, 0 + k) - f(0, 0)}{k} = \lim_{k \to 0} \frac{|k| \sin\frac{1}{|k|}}{k}$ does not exist.
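The behaviour of the difference quotient can also be seen numerically: it keeps oscillating instead of settling to a limit (a sketch assuming NumPy is available).

```python
import numpy as np

def quotient(h):
    # [f(h, 0) - f(0, 0)] / h = |h| sin(1/|h|) / h
    return abs(h) * np.sin(1 / abs(h)) / h

for h in [1e-3, 1e-4, 1e-5, 1e-6]:
    print(h, quotient(h))  # values stay in [-1, 1] but do not converge
```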
6. Let $\{a_n\}$ be a real sequence converging to a, where a > 0. Then
(a) $\sum_1^\infty a_n$ converges but $\sum_1^\infty \frac{a_n}{n}$ diverges
(b) $\sum_1^\infty a_n$ diverges but $\sum_1^\infty \frac{a_n}{n}$ converges
(c) Both $\sum_1^\infty a_n$ and $\sum_1^\infty \frac{a_n}{n}$ converge
(d) Both $\sum_1^\infty a_n$ and $\sum_1^\infty \frac{a_n}{n}$ diverge

Solution: (d) As $\lim a_n = a > 0$,

$\sum_{n=1}^\infty a_n$ diverges, since $\lim a_n = 0$ is a necessary condition for convergence.

Also, $\sum_{n=1}^\infty \frac{a_n}{n}$ diverges: since $\lim n\left(\frac{a_n}{n}\right) = a > 0$, we have $\frac{a_n}{n} \geq \frac{a}{2n}$ for all large n, so the series diverges by comparison with the harmonic series $\sum \frac{1}{n}$.

7. A four digit number is chosen at random. The probability that there are exactly two zeroes
in that number is
(a) 0.73
(b) 0.973
(c) 0.027
(d) 0.27

Solution: (c) Number of 4-digit numbers $= 9 \times 10^3 = 9000$.

Number of 4-digit numbers with exactly 2 zeroes $= 9 \times \binom{3}{2} \times 1 \times 9 = 243$
(9 choices for the leading digit, $\binom{3}{2}$ positions for the two zeroes, and 9 choices for the remaining nonzero digit).

So, required probability $= \frac{243}{9000} = 0.027$.
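The count can be verified by brute-force enumeration of all four-digit numbers:

```python
# Count 4-digit numbers (1000..9999) containing exactly two zero digits.
count = sum(1 for n in range(1000, 10000) if str(n).count('0') == 2)
print(count, count / 9000)  # 243, 0.027
```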

8. A person makes repeated attempts to destroy a target. Attempts are made independently of each other. The probability of destroying the target in any attempt is 0.8. Given that he fails to destroy the target in the first five attempts, the probability that the target is destroyed in the 8th attempt is
(a) 0.128
(b) 0.032
(c) 0.160
(d) 0.064

Solution: (b) Since the attempts are independent, given failure in the first five attempts the required probability is the product of the probabilities of failure in the 6th and 7th attempts and success in the 8th attempt:

$P = (0.2)(0.2)(0.8) = 0.032$
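The same value follows from the memorylessness of the geometric distribution; a quick check, assuming SciPy is available:

```python
from scipy.stats import geom

p = 0.8
# P(first success on attempt 8 | no success in first 5 attempts)
cond = geom.pmf(8, p) / geom.sf(5, p)
print(cond, geom.pmf(3, p))  # both 0.032 (same as success on the 3rd fresh attempt)
```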
9. Let the random variable 𝑿~𝑩(𝟓, 𝒑) such that 𝑷(𝑿 = 𝟐) = 𝟐𝑷(𝑿 = 𝟑). Then the variance
of X is
(a) 10/3
(b) 10/9
(c) 5/3
(d) 5/9

Solution: (b) $P(X = 2) = 2P(X = 3)$

$\Rightarrow \binom{5}{2} p^2 (1-p)^3 = 2\binom{5}{3} p^3 (1-p)^2$

$\Rightarrow 1 - p = 2p \Rightarrow 3p = 1 \Rightarrow p = \frac{1}{3}$

Variance of $X = np(1-p) = 5 \times \frac{1}{3} \times \frac{2}{3} = \frac{10}{9}$.
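A quick check with the binomial pmf and variance (assuming SciPy is available):

```python
from scipy.stats import binom

p = 1 / 3
print(binom.pmf(2, 5, p), 2 * binom.pmf(3, 5, p))  # equal, ≈ 0.3292
print(binom.var(5, p), 10 / 9)                     # both ≈ 1.1111
```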

10. Let $X_1, \ldots, X_8$ be i.i.d. $N(0, \sigma^2)$ random variables. Further, let $U = X_1 + X_2$ and $V = \sum_{i=1}^{8} X_i$. The correlation coefficient between U and V is
(a) 1/8
(b) 1/4
(c) 3/4
(d) 1/2

Solution: (d) $E(X_i) = 0;\ V(X_i) = \sigma^2;\ E(X_i^2) = \sigma^2;\ E(UV) = E(X_1^2 + X_2^2) = 2\sigma^2$

$E(U)E(V) = 0;\ \mathrm{Var}(U) = 2\sigma^2;\ \mathrm{Var}(V) = 8\sigma^2;\ \mathrm{Cov}(U, V) = E(UV) - E(U)E(V) = 2\sigma^2$

The correlation coefficient between U and V is

$\frac{\mathrm{Cov}(U, V)}{\sqrt{\mathrm{Var}(U)\,\mathrm{Var}(V)}} = \frac{2\sigma^2}{\sqrt{2\sigma^2 \cdot 8\sigma^2}} = \frac{1}{2}.$
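A Monte Carlo check of the correlation coefficient (a sketch assuming NumPy; σ = 1 is used since the value does not depend on σ):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(1_000_000, 8))
U, V = X[:, :2].sum(axis=1), X.sum(axis=1)
print(np.corrcoef(U, V)[0, 1])  # ≈ 0.5
```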

11. Let $X \sim F_{8,15}$ and $Y \sim F_{15,8}$. If $P(X > 4) = 0.01$ and $P(Y \leq k) = 0.01$, then the value of k is
(a) 0.025
(b) 0.25
(c) 2
(d) 4
Solution: (b) $X \sim F_{8,15}$ and $Y \sim F_{15,8} \Rightarrow \frac{1}{X} \sim F_{15,8}$, i.e. $\frac{1}{X}$ has the same distribution as Y.

$0.01 = P(X > 4) = P\!\left(\frac{1}{X} \leq \frac{1}{4}\right) = P(Y \leq 0.25)$, so $k = 0.25$.
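The reciprocal relation between the two F distributions can be checked with quantiles (assuming SciPy is available):

```python
from scipy.stats import f

x = f.ppf(0.99, 8, 15)   # P(X > x) = 0.01 for X ~ F(8, 15); x ≈ 4.00
k = f.ppf(0.01, 15, 8)   # P(Y <= k) = 0.01 for Y ~ F(15, 8)
print(x, k, 1 / x)       # k ≈ 1/x ≈ 0.25
```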
12. Let $X_1, \ldots, X_n$ be i.i.d. Exp(1) random variables and $S_n = \sum_{i=1}^{n} X_i$. Using the central limit theorem, the value of $\lim_{n \to \infty} P(S_n > n)$ is
(a) 0
(b) 1/3
(c) 1/2
(d) 1

Solution: (c) $X_i \sim \mathrm{Exp}(1) \Rightarrow E(X_i) = \mu = 1;\ V(X_i) = \sigma^2 = 1$

By the central limit theorem, $\frac{S_n - n\mu}{\sigma\sqrt{n}} \to \xi$, where $\xi \sim N(0, 1)$.

$\lim_{n \to \infty} P(S_n > n) = \lim_{n \to \infty} P\!\left(\frac{S_n - n \cdot 1}{\sqrt{n}} > 0\right) = P(\xi > 0) = \frac{1}{2}.$
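Since $S_n \sim \mathrm{Gamma}(n, 1)$ exactly, $P(S_n > n)$ can also be computed for increasing n to watch it approach 1/2 (assuming SciPy is available):

```python
from scipy.stats import gamma

for n in [10, 100, 1000, 10000]:
    print(n, gamma.sf(n, n))  # P(S_n > n); the values climb toward 0.5 as n grows
```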

13. Let the random variable $X \sim U(5, 5 + \theta)$. Based on a random sample of size 1, say $X_1$, the unbiased estimator of $\theta^2$ is
(a) $3(X_1 - 5)^2$
(b) $\frac{X_1^2 - 5}{12}$
(c) $3(X_1 + 5)^2$
(d) $\frac{X_1^2 + 5}{12}$

Solution: (a)

$E(X^2) = \int_5^{5+\theta} x^2 \cdot \frac{1}{\theta}\, dx = \frac{(5+\theta)^3 - 5^3}{3\theta} = \frac{\theta^2}{3} + 5\theta + 25$

$E(X) = \int_5^{5+\theta} x \cdot \frac{1}{\theta}\, dx = \frac{(5+\theta)^2 - 5^2}{2\theta} = \frac{\theta}{2} + 5$

$E(X - 5)^2 = E(X^2) - 10E(X) + 25 = \frac{\theta^2}{3}$

$\Rightarrow E[3(X - 5)^2] = \theta^2$

So, $3(X_1 - 5)^2$ is an unbiased estimator of $\theta^2$.
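A Monte Carlo check of unbiasedness for one value of θ (a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.5
X = rng.uniform(5, 5 + theta, size=1_000_000)
print(np.mean(3 * (X - 5) ** 2), theta ** 2)  # both ≈ 6.25
```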


14. Let $X_1, \ldots, X_n$ be a random sample of size n from a $N(\mu, 16)$ population. If a 95% confidence interval for $\mu$ is $[\bar{X} - 0.98,\ \bar{X} + 0.98]$, then the value of n is
(a) 4
(b) 16
(c) 32
(d) 64

Solution: (d) The 95% confidence interval for $\mu$ is

$\left[\bar{X} - \frac{1.96\sigma}{\sqrt{n}},\ \bar{X} + \frac{1.96\sigma}{\sqrt{n}}\right]$

$\Rightarrow \frac{1.96\sigma}{\sqrt{n}} = 0.98 \Rightarrow \sqrt{n} = 2\sigma = 2(4) = 8 \Rightarrow n = 64$
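The same arithmetic with the exact normal quantile (assuming SciPy is available):

```python
from scipy.stats import norm

sigma, half_width = 4.0, 0.98
z = norm.ppf(0.975)                   # ≈ 1.96
print((z * sigma / half_width) ** 2)  # ≈ 64
```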

15. A coin is tossed 4 times and p is the probability of getting a head in a single trial. Let S be the number of head(s) obtained. It is decided to test

$H_0: p = \frac{1}{2}$ against $H_1: p \neq \frac{1}{2}$

using the decision rule: Reject $H_0$ if S is 0 or 4. The probabilities of Type I error ($\alpha$) and Type II error ($\beta$) when $p = \frac{3}{4}$, are

(a) $\alpha = \frac{1}{4},\ \beta = \frac{87}{128}$
(b) $\alpha = \frac{1}{8},\ \beta = \frac{87}{128}$
(c) $\alpha = \frac{1}{8},\ \beta = \frac{41}{256}$
(d) $\alpha = \frac{1}{4},\ \beta = \frac{41}{256}$

Solution: (b) $\alpha = P(\text{reject } H_0 \mid H_0 \text{ is true})$, i.e. p = 1/2, and rejecting $H_0$ means S = 0 or 4.

$\alpha = P(S = 0 \text{ or } 4 \mid p = 1/2) = \binom{4}{0}\left(\frac{1}{2}\right)^4 + \binom{4}{4}\left(\frac{1}{2}\right)^4 = \frac{1}{8}$

$\beta = P(\text{accept } H_0 \mid H_1 \text{ is true}) = P(S = 1, 2, 3 \mid p = 3/4)$

$= \binom{4}{1}\left(\frac{3}{4}\right)\left(\frac{1}{4}\right)^3 + \binom{4}{2}\left(\frac{3}{4}\right)^2\left(\frac{1}{4}\right)^2 + \binom{4}{3}\left(\frac{3}{4}\right)^3\left(\frac{1}{4}\right)$

$= \frac{12 + 54 + 108}{256} = \frac{174}{256} = \frac{87}{128}.$
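Both error probabilities can be confirmed with the binomial pmf (assuming SciPy is available):

```python
from scipy.stats import binom

alpha = binom.pmf(0, 4, 0.5) + binom.pmf(4, 4, 0.5)
beta = sum(binom.pmf(s, 4, 0.75) for s in (1, 2, 3))
print(alpha)  # 0.125 = 1/8
print(beta)   # 0.6796875 = 87/128
```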
16. (a) Find the value(s) of $\lambda$ for which the following system of linear equations

$\begin{pmatrix} \lambda & 1 & 1 \\ 1 & \lambda & 1 \\ 1 & 1 & \lambda \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$

(i) has a unique solution
(ii) has infinitely many solutions
(iii) has no solution

(b) Let $a_1 = 2,\ b_1 = 1$ and for $n \geq 1$,

$a_{n+1} = \frac{a_n + b_n}{2}, \qquad b_{n+1} = \frac{2 a_n b_n}{a_n + b_n}$

Show that

(i) $b_n \leq a_n$, for all n
(ii) $b_n \leq b_{n+1}$ for all n,
(iii) the sequences $\{a_n\}$ and $\{b_n\}$ converge to the same limit $\sqrt{2}$.

Solution: (a) Writing the augmented matrix with the rows reordered so that the first row is $(1, 1, \lambda \mid 1)$:

$[A : B] = \begin{pmatrix} 1 & 1 & \lambda & | & 1 \\ 1 & \lambda & 1 & | & 1 \\ \lambda & 1 & 1 & | & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 1 & \lambda & | & 1 \\ 0 & \lambda - 1 & 1 - \lambda & | & 0 \\ 0 & 1 - \lambda & 1 - \lambda^2 & | & 1 - \lambda \end{pmatrix}$

by $R_2 \leftarrow R_2 - R_1$ and $R_3 \leftarrow R_3 - \lambda R_1$,

$\sim \begin{pmatrix} 1 & 1 & \lambda & | & 1 \\ 0 & \lambda - 1 & 1 - \lambda & | & 0 \\ 0 & 0 & 2 - \lambda - \lambda^2 & | & 1 - \lambda \end{pmatrix}$

by $R_3 \leftarrow R_3 + R_2$.

(i) For a unique solution, $(\lambda - 1)(2 - \lambda - \lambda^2) \neq 0$

$\Rightarrow -(\lambda - 1)^2 (\lambda + 2) \neq 0 \Rightarrow \lambda \neq 1$ and $\lambda \neq -2$

(ii) For $\lambda = 1$, the system has infinitely many solutions.

(iii) For $\lambda = -2$, the system has no solution.

(b) Since $a_{n+1}$ is the arithmetic mean and $b_{n+1}$ is the harmonic mean of $a_n$ and $b_n$, we have $a_{n+1} \geq b_{n+1}$ for all n, as A.M. $\geq$ G.M. $\geq$ H.M.

Hence (i) $b_n \leq a_n$ for all n.

(ii) Each $b_{n+1}$, being the harmonic mean of $a_n$ and $b_n$, lies between them, so $b_1 \leq b_2 \leq b_3 \leq \cdots$; similarly $a_1 \geq a_2 \geq a_3 \geq \cdots$. Thus $\{a_n\}$ is monotonically decreasing and bounded below by $b_1$, so it is convergent, and $\{b_n\}$ is monotonically increasing and bounded above by $a_1$, so it is also convergent.

(iii) Let $\lim a_n = l$ and $\lim b_n = m$.

$\lim a_{n+1} = \lim \frac{a_n + b_n}{2} \Rightarrow l = \frac{l + m}{2} \Rightarrow l = m$

So $\{a_n\}$ and $\{b_n\}$ converge to the same limit; let l be this common limit. Since

$a_{n+1} b_{n+1} = a_n b_n = a_{n-1} b_{n-1} = \cdots = a_1 b_1 = 2,$

we get $\lim a_n b_n = 2 \Rightarrow l^2 = 2 \Rightarrow l = \sqrt{2}$ (all terms are positive).

Hence $\{a_n\}$ and $\{b_n\}$ converge to the same limit $\sqrt{2}$.
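Iterating the recursion numerically shows both sequences closing in on $\sqrt{2}$ from above and below:

```python
import math

a, b = 2.0, 1.0
for _ in range(6):
    a, b = (a + b) / 2, 2 * a * b / (a + b)  # AM and HM of the previous pair
    print(a, b)
print(math.sqrt(2))  # ≈ 1.41421356
```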

17. (a) Solve: $(x^2 y^3 + xy)\, dy = dx$

(b) Find the general solution of the differential equation

$(D^2 - 4D + 4)y = x \sin 2x$, where $D \equiv \frac{d}{dx}$
Solution: (a) $(x^2 y^3 + xy)\, dy = dx \Rightarrow \frac{dx}{dy} = xy + x^2 y^3$

$\Rightarrow \frac{1}{x^2}\frac{dx}{dy} = \frac{1}{x} y + y^3$; let $\frac{1}{x} = z \Rightarrow -\frac{1}{x^2}\frac{dx}{dy} = \frac{dz}{dy}$

$\Rightarrow -\frac{dz}{dy} = yz + y^3 \Rightarrow \frac{dz}{dy} + yz = -y^3$

This is linear in z with integrating factor $e^{\int y\, dy} = e^{y^2/2}$:

$z e^{y^2/2} = \int e^{y^2/2}(-y^3)\, dy + c$; let $\frac{y^2}{2} = t \Rightarrow y\, dy = dt$

$\Rightarrow z e^{t} = \int -2t e^{t}\, dt + c = -2[t e^{t} - e^{t}] + c$

$\Rightarrow z = -2[t - 1] + c e^{-t}$

$\Rightarrow \frac{1}{x} = 2 - y^2 + c e^{-y^2/2}$ is the solution.
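The solution of (a) can be verified symbolically (a sketch assuming SymPy is available):

```python
import sympy as sp

y, c = sp.symbols('y c')
x = 1 / (2 - y**2 + c * sp.exp(-y**2 / 2))   # 1/x = 2 - y^2 + c e^{-y^2/2}
# The difference below simplifies to 0, so dx/dy = x*y + x^2*y^3 holds.
print(sp.simplify(sp.diff(x, y) - (x * y + x**2 * y**3)))  # 0
```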

(b) $(D^2 - 4D + 4)y = x \sin 2x \Rightarrow (D - 2)^2 y = x \sin 2x$

The auxiliary equation is $(m - 2)^2 = 0 \Rightarrow m = 2, 2$, so the complementary function is

$y_c = (c_1 + c_2 x) e^{2x}$

The particular integral is

$y_p = \frac{1}{(D-2)^2}\, x \sin 2x = \left[x - \frac{2(D-2)}{(D-2)^2}\right] \frac{1}{(D-2)^2} \sin 2x$

$= \left[x - \frac{2}{D-2}\right] \frac{1}{D^2 - 4D + 4} \sin 2x = \left[x - \frac{2}{D-2}\right] \frac{1}{-4D} \sin 2x$

$= \left[x - \frac{2}{D-2}\right] \frac{\cos 2x}{8} = \frac{x \cos 2x}{8} - \frac{1}{4} \cdot \frac{D+2}{D^2 - 4} \cos 2x$

$= \frac{x \cos 2x}{8} + \frac{1}{32}(-2\sin 2x + 2\cos 2x) = \frac{x \cos 2x}{8} + \frac{\cos 2x - \sin 2x}{16}$

$= \frac{1}{16}\left[(1 + 2x)\cos 2x - \sin 2x\right]$

So, the complete solution is

$y = (c_1 + c_2 x) e^{2x} + \frac{1}{16}\left[(1 + 2x)\cos 2x - \sin 2x\right]$
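The general solution of (b) can be cross-checked symbolically (a sketch assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) - 4 * y(x).diff(x) + 4 * y(x), x * sp.sin(2 * x))
sol = sp.dsolve(ode, y(x))
print(sp.simplify(sol.rhs))
# equivalent to (C1 + C2*x)*exp(2*x) + ((1 + 2*x)*cos(2*x) - sin(2*x))/16 up to simplification
```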

18. (a) Find all the critical points of the function $f(x, y) = x^3 + y^3 + 3xy$ and examine those points for local maxima and local minima.

(b) If f is a continuous real-valued function on [0, 1], show that there exists a point $c \in (0, 1)$ such that

$\int_0^1 x f(x)\, dx = \int_c^1 f(x)\, dx.$

Solution: (a) $f(x, y) = x^3 + y^3 + 3xy$

$f_x = 3x^2 + 3y;\quad f_y = 3y^2 + 3x;\quad f_{xx} = 6x;\quad f_{yy} = 6y;\quad f_{xy} = 3$

For critical points, $f_x = 0$ and $f_y = 0 \Rightarrow x^2 + y = 0$ and $y^2 + x = 0$.

Substituting $y = -x^2$: $x^4 + x = 0 \Rightarrow x(1 + x^3) = 0 \Rightarrow x = 0, -1$

At $x = 0,\ y = 0$; at $x = -1,\ y = -1$. So the critical points are (0, 0) and (-1, -1).

$f_{xx} f_{yy} - (f_{xy})^2 = 36xy - 9 = 9(4xy - 1)$

At (0, 0): $f_{xx} f_{yy} - (f_{xy})^2 = -9 < 0$. So (0, 0) is a saddle point.

At (-1, -1): $f_{xx} f_{yy} - (f_{xy})^2 = 9(4 - 1) = 27 > 0$ and $f_{xx}(-1, -1) = -6 < 0$, so (-1, -1) is a point of local maximum.
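The second-derivative test at the two critical points can be tabulated directly:

```python
# f_xx = 6x, f_yy = 6y, f_xy = 3 for f(x, y) = x^3 + y^3 + 3xy.
for (x, y) in [(0, 0), (-1, -1)]:
    fxx, fyy, fxy = 6 * x, 6 * y, 3
    D = fxx * fyy - fxy ** 2
    print((x, y), D, fxx)
# (0, 0): D = -9 < 0 -> saddle point
# (-1, -1): D = 27 > 0, f_xx = -6 < 0 -> local maximum
```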

(b) Define $G(t) = \int_t^1 f(x)\, dx$ for $t \in [0, 1]$. Since f is continuous, G is continuously differentiable with $G'(t) = -f(t)$, and $G(1) = 0$.

Integrating by parts,

$\int_0^1 x f(x)\, dx = \int_0^1 x\,(-G'(x))\, dx = \left[-x G(x)\right]_0^1 + \int_0^1 G(x)\, dx = \int_0^1 G(x)\, dx.$

As G is continuous on [0, 1], by the mean value theorem for integrals there exists $c \in (0, 1)$ such that $\int_0^1 G(x)\, dx = G(c)$. Hence

$\int_0^1 x f(x)\, dx = G(c) = \int_c^1 f(x)\, dx \quad \text{for a point } c \in (0, 1).$
19. (a) Evaluate the triple integral

$\int_{z=0}^{4} \int_{x=0}^{2\sqrt{z}} \int_{y=0}^{\sqrt{4z - x^2}} dy\, dx\, dz$

(b) Let

$M = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 2 & 1 \\ 1 & 0 & 1 \end{pmatrix}.$

If $M^{-1} = \frac{5}{4} I + k M + \frac{1}{4} M^2$, where I is the identity matrix of order 3, find the value of k. Hence or otherwise, solve the system of equations:

$M \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$

Solution: (a)

$\int_{z=0}^{4} \int_{x=0}^{2\sqrt{z}} \int_{y=0}^{\sqrt{4z - x^2}} dy\, dx\, dz = \int_{z=0}^{4} \int_{x=0}^{2\sqrt{z}} \sqrt{4z - x^2}\, dx\, dz$

$= \int_{z=0}^{4} \left[ \frac{x}{2}\sqrt{4z - x^2} + \frac{4z}{2} \sin^{-1}\frac{x}{2\sqrt{z}} \right]_{x=0}^{x=2\sqrt{z}} dz = \int_{z=0}^{4} \pi z\, dz = 8\pi.$
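A numerical evaluation of the triple integral (a sketch assuming SciPy is available; the max(·, 0) guards against round-off just inside the boundary):

```python
import numpy as np
from scipy.integrate import tplquad

val, _ = tplquad(lambda y, x, z: 1.0,
                 0, 4,                                      # z from 0 to 4
                 lambda z: 0.0, lambda z: 2 * np.sqrt(z),   # x from 0 to 2*sqrt(z)
                 lambda z, x: 0.0,
                 lambda z, x: np.sqrt(max(4 * z - x**2, 0.0)))  # y from 0 to sqrt(4z - x^2)
print(val, 8 * np.pi)  # both ≈ 25.1327
```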

(b) By the Cayley-Hamilton theorem, every square matrix satisfies its characteristic equation, so M satisfies $|M - \lambda I| = 0$ with M in place of $\lambda$.

$|M - \lambda I| = \begin{vmatrix} 1-\lambda & 2 & 0 \\ 0 & 2-\lambda & 1 \\ 1 & 0 & 1-\lambda \end{vmatrix} = (1-\lambda)[\lambda^2 - 3\lambda + 2] + 2 = -\lambda^3 + 4\lambda^2 - 5\lambda + 4$

Hence $-M^3 + 4M^2 - 5M + 4I = 0 \Rightarrow -M^2 + 4M - 5I + 4M^{-1} = 0$

(multiplying by $M^{-1}$)

$\Rightarrow M^{-1} = \frac{1}{4}[M^2 - 4M + 5I] = \frac{1}{4} M^2 - M + \frac{5}{4} I \Rightarrow k = -1$

Now, $M = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 2 & 1 \\ 1 & 0 & 1 \end{pmatrix} \Rightarrow M^2 = \begin{pmatrix} 1 & 6 & 2 \\ 1 & 4 & 3 \\ 2 & 2 & 1 \end{pmatrix}$

$\Rightarrow M^{-1} = \frac{1}{4}[M^2 - 4M + 5I] = \frac{1}{4}\begin{pmatrix} 2 & -2 & 2 \\ 1 & 1 & -1 \\ -2 & 2 & 2 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{4} & \tfrac{1}{4} & -\tfrac{1}{4} \\ -\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix}$

$M \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \Rightarrow \begin{pmatrix} x \\ y \\ z \end{pmatrix} = M^{-1} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2} \\ \tfrac{1}{4} \\ -\tfrac{1}{2} \end{pmatrix}$

$\Rightarrow x = \frac{1}{2},\ y = \frac{1}{4},\ z = -\frac{1}{2}$
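A quick check of k = -1 and of the solution of the linear system (assuming NumPy is available):

```python
import numpy as np

M = np.array([[1, 2, 0], [0, 2, 1], [1, 0, 1]], dtype=float)
Minv = (M @ M - 4 * M + 5 * np.eye(3)) / 4   # (5/4)I - M + (1/4)M^2, i.e. k = -1
print(np.allclose(Minv, np.linalg.inv(M)))   # True
print(np.linalg.solve(M, [1, 0, 0]))         # [ 0.5   0.25 -0.5 ]
```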

20. (a) Let N be a random variable representing the number of fair dice thrown, with probability mass function

$P(N = i) = \frac{1}{2^i}, \quad i = 1, 2, \ldots$

Let S be the sum of the numbers appearing on the faces of the dice. Given that S = 3, what is the probability that 2 dice were thrown?

(b) Let $X \sim N(0, 1)$ and $Y = X + |X|$. Find $E(Y^3)$.

Solution: (a) P(2 dice were thrown | sum on the faces = 3)

$= \frac{P(N = 2 \text{ and } S = 3)}{P(S = 3)} = \frac{P(N = 2)\, P(S = 3 \mid N = 2)}{\sum_{i=1}^{3} P(N = i)\, P(S = 3 \mid N = i)}$

(only N = 1, 2, 3 can give S = 3)

$= \frac{\frac{1}{2^2} \times \frac{2}{36}}{\frac{1}{2} \times \frac{1}{6} + \frac{1}{2^2} \times \frac{2}{36} + \frac{1}{2^3} \times \frac{1}{216}} = \frac{\frac{1}{72}}{\frac{1}{12} + \frac{1}{72} + \frac{1}{1728}} = \frac{\frac{1}{72}}{\frac{144 + 24 + 1}{1728}} = \frac{24}{169}$
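The posterior probability 24/169 can be confirmed by exact enumeration over the dice outcomes:

```python
from fractions import Fraction
from itertools import product

def p_sum_is_3(n_dice):
    hits = sum(1 for o in product(range(1, 7), repeat=n_dice) if sum(o) == 3)
    return Fraction(hits, 6 ** n_dice)

# Only N = 1, 2, 3 dice can produce a sum of 3.
joint = {n: Fraction(1, 2 ** n) * p_sum_is_3(n) for n in (1, 2, 3)}
print(joint[2] / sum(joint.values()))  # 24/169
```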

(b) $X \sim N(0, 1)$

$Y = X + |X| = \begin{cases} 2X, & X > 0 \\ 0, & X \leq 0 \end{cases} \qquad \Rightarrow \qquad Y^3 = \begin{cases} 8X^3, & X > 0 \\ 0, & X \leq 0 \end{cases}$

$E(Y^3) = \int_0^\infty 8x^3 \cdot \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\, dx$

Let $z = \frac{x^2}{2} \Rightarrow dz = x\, dx$, so $x^3\, dx = 2z\, dz$:

$E(Y^3) = \frac{1}{\sqrt{2\pi}} \int_0^\infty 16 z e^{-z}\, dz = \frac{16}{\sqrt{2\pi}} = 8\sqrt{\frac{2}{\pi}}$
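A Monte Carlo check of $E(Y^3) = 8\sqrt{2/\pi}$ (a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal(2_000_000)
Y = X + np.abs(X)
print(np.mean(Y ** 3), 8 * np.sqrt(2 / np.pi))  # both ≈ 6.38
```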

21. Let $Y \sim N(\mu, \sigma_Y^2)$ and $Y = \ln X$.

(a) Find the probability density function of the random variable X and the median of X.
(b) Find the maximum likelihood estimator of the median of the random variable X based on a random sample of size n.

Solution: (a) $Y \sim N(\mu, \sigma_Y^2)$ and $Y = \ln X$, so X has a lognormal distribution with PDF

$f(x) = \frac{1}{x \sigma \sqrt{2\pi}} \exp\!\left( -\frac{(\ln x - \mu)^2}{2\sigma^2} \right); \quad x > 0$

If M is the median of X, then

$\int_0^M f(x)\, dx = \frac{1}{2} \;\Rightarrow\; P(\ln X \leq \ln M) = P(Y \leq \ln M) = \frac{1}{2} \;\Rightarrow\; \ln M = \mu \;\Rightarrow\; M = e^{\mu}$

(b) The likelihood function is

$L(\mu, \sigma \mid x_1, \ldots, x_n) = \prod_{i=1}^{n} \frac{1}{x_i}\, f_N(\ln x_i;\ \mu, \sigma)$

where $f_N$ is the normal $N(\mu, \sigma^2)$ density. Taking logarithms,

$\ell(\mu, \sigma \mid x_1, \ldots, x_n) = -\sum_k \ln x_k + \ell_N(\mu, \sigma \mid \ln x_1, \ldots, \ln x_n) = \text{constant} + \ell_N(\mu, \sigma \mid \ln x_1, \ldots, \ln x_n)$

So, maximizing exactly as for a normal sample (with $\ln x_k$ in place of the observations), we get

$\hat{\mu} = \frac{\sum_k \ln x_k}{n} \quad \text{and} \quad \hat{\sigma}^2 = \frac{\sum_k (\ln x_k - \hat{\mu})^2}{n}$

By the invariance property of the MLE, the maximum likelihood estimator of the median $e^{\mu}$ is $e^{\hat{\mu}} = \exp\!\left(\frac{1}{n}\sum_k \ln x_k\right)$.
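A simulation check that $e^{\hat{\mu}}$ recovers the true median $e^{\mu}$ (a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n = 1.2, 0.7, 100_000
X = rng.lognormal(mean=mu, sigma=sigma, size=n)
print(np.exp(np.mean(np.log(X))), np.exp(mu))  # both ≈ 3.32
```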

22. (a) A random variable X has probability density function

$f(x) = \alpha x e^{-\beta^2 x^2}, \quad x > 0,\ \alpha > 0,\ \beta > 0$

If $E(X) = \frac{\sqrt{\pi}}{2}$, determine $\alpha$ and $\beta$.

(b) Let X and Y be two random variables with joint probability density function

$f(x, y) = \begin{cases} e^{-y} & \text{if } 0 \leq x \leq y < \infty \\ 0 & \text{otherwise} \end{cases}$

(i) Find the marginal density functions of X and Y
(ii) Examine whether X and Y are independent
(iii) Find Cov(X, Y)

Solution: (a)

$E(X) = \int_0^\infty x f(x)\, dx = \int_0^\infty \alpha x^2 e^{-\beta^2 x^2}\, dx$

Let $y = (\beta x)^2 \Rightarrow dy = 2\beta^2 x\, dx$

$\Rightarrow E(X) = \int_0^\infty \frac{\alpha}{2\beta^2} \cdot \frac{y^{1/2}}{\beta}\, e^{-y}\, dy = \frac{\alpha}{2\beta^3}\, \Gamma\!\left(\frac{3}{2}\right) = \frac{\alpha}{4\beta^3}\sqrt{\pi} = \frac{\sqrt{\pi}}{2}$

$\Rightarrow \alpha = 2\beta^3 \quad \ldots (1)$

Also, $\int_0^\infty f(x)\, dx = 1$

$\Rightarrow \int_0^\infty \alpha x e^{-\beta^2 x^2}\, dx = 1 \Rightarrow \int_0^\infty \frac{\alpha}{2\beta^2}\, e^{-y}\, dy = 1 \Rightarrow \frac{\alpha}{2\beta^2} = 1$

$\Rightarrow \alpha = 2\beta^2 \quad \ldots (2)$

From (1) and (2) we get $\beta = 1 \Rightarrow \alpha = 2$.

(b) (i) Marginal density function of X:

$f_X(x) = \int_x^\infty e^{-y}\, dy = e^{-x}; \quad 0 \leq x < \infty$

Marginal density function of Y:

$f_Y(y) = \int_0^y e^{-y}\, dx = y e^{-y}; \quad 0 \leq y < \infty$

(ii) As $f_X(x) f_Y(y) = e^{-x}(y e^{-y}) = y e^{-(x+y)}$ and $f(x, y) = e^{-y}$,

$f(x, y) \neq f_X(x) f_Y(y)$, hence X and Y are not independent.

(iii)

$E(X) = \int_0^\infty x e^{-x}\, dx = \left[-x e^{-x} - e^{-x}\right]_0^\infty = 1$

$E(Y) = \int_0^\infty y^2 e^{-y}\, dy = \left[(-y^2 - 2y - 2) e^{-y}\right]_0^\infty = 2$

$E(XY) = \int_{y=0}^\infty \int_{x=0}^{y} x y e^{-y}\, dx\, dy = \int_{y=0}^\infty \frac{y^3}{2} e^{-y}\, dy = \frac{1}{2}\left[(-y^3 - 3y^2 - 6y - 6) e^{-y}\right]_0^\infty = 3$

$\mathrm{Cov}(X, Y) = E(XY) - E(X)E(Y) = 3 - (1)(2) = 3 - 2 = 1.$
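A simulation check of Cov(X, Y) = 1: sampling $Y \sim \mathrm{Gamma}(2, 1)$ and $X \mid Y \sim U(0, Y)$ reproduces the joint density $e^{-y}$ on $0 \leq x \leq y$ (a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(4)
Y = rng.gamma(shape=2.0, scale=1.0, size=2_000_000)  # marginal density y e^{-y}
X = rng.uniform(0.0, Y)                              # conditional density 1/y on (0, y)
print(np.cov(X, Y)[0, 1])  # ≈ 1.0
```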


23. (a) Let $X_1, \ldots, X_n$ be a random sample from an $\mathrm{Exp}\!\left(\frac{1}{\theta}\right)$ population. Obtain the Cramer-Rao lower bound for the variance of an unbiased estimator of $\theta^2$.

(b) Let $X_1, \ldots, X_n$ (n > 4) be a random sample from a population with mean $\mu$ and variance $\sigma^2$. Consider the following estimators of $\mu$:

$U = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad V = \frac{1}{8} X_1 + \frac{3}{4(n-2)}(X_2 + \cdots + X_{n-1}) + \frac{1}{8} X_n$

(i) Examine whether the estimators U and V are unbiased
(ii) Examine whether the estimators U and V are consistent
(iii) Which of these two estimators is more efficient? Justify your answer.

Solution: (a) If T(X) is an unbiased estimator of a function $\Psi(\theta)$ of the parameter $\theta$, then the Cramer-Rao lower bound for its variance is

$V(T(X)) \geq \frac{[\Psi'(\theta)]^2}{I(\theta)}$

where $I(\theta)$ is Fisher's information,

$I(\theta) = E\left[\left(\frac{\partial L(x, \theta)}{\partial \theta}\right)^2\right] = -E\left[\frac{\partial^2 L(x; \theta)}{\partial \theta^2}\right], \qquad L(x, \theta) = \log f(x, \theta)$

Here $f(x_i; \theta) = \frac{1}{\theta} e^{-x_i/\theta};\ \theta > 0,\ x_i > 0$, so

$f(x, \theta) = \prod_{i=1}^{n} \frac{1}{\theta} e^{-x_i/\theta} = \frac{1}{\theta^n} e^{-\sum x_i/\theta}$

$L(x, \theta) = \log f(x, \theta) = -n \log\theta - \frac{\sum x_i}{\theta} \quad \Rightarrow \quad \frac{\partial L}{\partial \theta} = -\frac{n}{\theta} + \frac{\sum x_i}{\theta^2}$

$I(\theta) = E\left[\left(-\frac{n}{\theta} + \frac{\sum x_i}{\theta^2}\right)^2\right] = \frac{\mathrm{Var}\left(\sum x_i\right)}{\theta^4} = \frac{n\theta^2}{\theta^4} = \frac{n}{\theta^2}$

(using $E\left(\sum x_i\right) = n\theta$ and $\mathrm{Var}(x_i) = \theta^2$).

Given $\Psi(\theta) = \theta^2 \Rightarrow \Psi'(\theta) = 2\theta$, so the lower bound for the variance is

$\mathrm{Var}[T(X)] \geq \frac{[\Psi'(\theta)]^2}{I(\theta)} = \frac{4\theta^2}{n/\theta^2} = \frac{4\theta^4}{n}.$

(b)

$U = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad V = \frac{1}{8} X_1 + \frac{3}{4(n-2)}(X_2 + \cdots + X_{n-1}) + \frac{1}{8} X_n$

(i)

$E(U) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \frac{n\mu}{n} = \mu$

$E(V) = \frac{\mu}{8} + \frac{3(n-2)}{4(n-2)}\mu + \frac{\mu}{8} = \mu\left(\frac{1}{8} + \frac{3}{4} + \frac{1}{8}\right) = \mu$

Both U and V are unbiased estimators of $\mu$, as $E(U) = E(V) = \mu$.

(ii) Using the variances computed in (iii) below: U is unbiased with $\mathrm{Var}(U) = \frac{\sigma^2}{n} \to 0$ as $n \to \infty$, so $U \to \mu$ in probability and U is consistent. However, $\mathrm{Var}(V) \to \frac{\sigma^2}{32} \neq 0$; the contribution $\frac{1}{8}X_1 + \frac{1}{8}X_n$ stays random no matter how large n is, so V does not converge in probability to $\mu$ and is not consistent.

(iii)

$\mathrm{Var}(U) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}[\mathrm{Var}\,X_1 + \mathrm{Var}\,X_2 + \cdots + \mathrm{Var}\,X_n] = \frac{1}{n^2}[\sigma^2 + \sigma^2 + \cdots + \sigma^2] = \frac{\sigma^2}{n}$

$\mathrm{Var}(V) = \frac{1}{64}\sigma^2 + \frac{9}{16(n-2)^2}(n-2)\sigma^2 + \frac{1}{64}\sigma^2 = \frac{1}{32}\sigma^2 + \frac{9}{16(n-2)}\sigma^2$

Comparing, $\frac{\sigma^2}{n} \leq \frac{\sigma^2}{32} + \frac{9\sigma^2}{16(n-2)}$ for all n > 4 (the difference is proportional to $(n-8)^2 \geq 0$), so $\mathrm{Var}(U) \leq \mathrm{Var}(V)$.

Hence U is the more efficient estimator.
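A simulation comparing Var(U) and Var(V) for one sample size (here n = 10, σ = 1; a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 10, 200_000
X = rng.normal(0.0, 1.0, size=(reps, n))
U = X.mean(axis=1)
V = X[:, 0] / 8 + 0.75 * X[:, 1:-1].mean(axis=1) + X[:, -1] / 8
print(U.var(), 1 / n)                         # ≈ 0.1000
print(V.var(), 1 / 32 + 9 / (16 * (n - 2)))   # ≈ 0.1016
```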

24. Let $X_1, \ldots, X_n$ be a random sample from a Bernoulli population with parameter p.

(a) (i) Find a sufficient statistic for p.

(ii) Consider an estimator $U(X_1, X_2)$ of $\frac{p(1-p)}{n}$ given by

$U(X_1, X_2) = \begin{cases} \frac{1}{2n} & \text{if } X_1 + X_2 = 1 \\ 0 & \text{otherwise} \end{cases}$

Examine whether $U(X_1, X_2)$ is an unbiased estimator.

(b) Using the results obtained in (a) above and the Rao-Blackwell theorem, find the uniformly minimum variance unbiased estimator (UMVUE) of

$\frac{p(1-p)}{n}.$

Solution: (a)

(i) $T(X) = X_1 + X_2 + \cdots + X_n$ is a sufficient statistic for p.

As $P(X = x) = P\{X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n\}$

$= p^{x_1}(1-p)^{1-x_1}\, p^{x_2}(1-p)^{1-x_2} \cdots p^{x_n}(1-p)^{1-x_n}$

$= p^{(x_1 + x_2 + \cdots + x_n)}(1-p)^{n - (x_1 + x_2 + \cdots + x_n)} = p^{\sum x_i}(1-p)^{n - \sum x_i} = p^{T(x)}(1-p)^{n - T(x)},$

the joint pmf depends on the sample only through $T(x) = \sum x_i$, so $T(X) = \sum X_i$ is sufficient for p.

(ii) U takes the value $\frac{1}{2n}$ exactly when $X_1 + X_2 = 1$, so

$E(U) = \frac{1}{2n} P(X_1 = 0, X_2 = 1) + \frac{1}{2n} P(X_1 = 1, X_2 = 0) = \frac{p(1-p)}{2n} + \frac{p(1-p)}{2n} = \frac{p(1-p)}{n}$

Hence U is an unbiased estimator of $\frac{p(1-p)}{n}$.

(b) Now, by the Rao-Blackwell theorem, conditioning the unbiased estimator U on the sufficient statistic $T = \sum X_i$ gives an estimator $E(U \mid T)$ that is still unbiased and whose mean square error does not exceed that of the original estimator; this is how the UMVUE is obtained.

The rest of the calculation is left as an exercise.
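A Monte Carlo check that $U(X_1, X_2)$ is unbiased for $p(1-p)/n$ (a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(6)
p, n, reps = 0.3, 10, 1_000_000
X = rng.binomial(1, p, size=(reps, n))
U = np.where(X[:, 0] + X[:, 1] == 1, 1 / (2 * n), 0.0)
print(U.mean(), p * (1 - p) / n)  # both ≈ 0.021
```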

25. (a) Let $X_1, \ldots, X_n$ be a random sample from the population having probability density function

$f(x, \theta) = \begin{cases} \frac{2x}{\theta^2} e^{-x^2/\theta^2} & \text{if } x > 0 \\ 0 & \text{otherwise} \end{cases}$

Obtain the most powerful test for testing $H_0: \theta = \theta_0$ against $H_1: \theta = \theta_1$ ($\theta_1 < \theta_0$).

(b) Let $X_1, \ldots, X_n$ be a random sample of size n from a $N(\mu, 1)$ population. To test $H_0: \mu = 5$ against $H_1: \mu = 4$, the decision rule is: Reject $H_0$ if $\bar{x} \leq c$. If $\alpha = 0.05$ and $\beta = 0.10$, determine n (rounded off to an integer) and hence c.

Solution: (a) The likelihood function is

$L(x, \theta) = \prod_{i=1}^{n} \frac{2x_i}{\theta^2}\, e^{-x_i^2/\theta^2}$

Consider $H_0: \theta = \theta_0$ and $H_1: \theta = \theta_1$ with $\theta_1 < \theta_0$. By the Neyman-Pearson lemma, the best critical region is given by $L(x, \theta_1) \geq k\, L(x, \theta_0)$, i.e.

$\frac{1}{\theta_1^{2n}} \prod_{i=1}^{n} 2x_i\, e^{-x_i^2/\theta_1^2} \geq k\, \frac{1}{\theta_0^{2n}} \prod_{i=1}^{n} 2x_i\, e^{-x_i^2/\theta_0^2}$

$\Rightarrow \left(\frac{\theta_0}{\theta_1}\right)^{2n} \prod_{i=1}^{n} e^{-x_i^2\left(\frac{1}{\theta_1^2} - \frac{1}{\theta_0^2}\right)} \geq k$

$\Rightarrow 2n(\log\theta_0 - \log\theta_1) + \sum_{i=1}^{n}\left(\frac{1}{\theta_0^2} - \frac{1}{\theta_1^2}\right) x_i^2 \geq \log k$

Since $\theta_1 < \theta_0$, the coefficient $\frac{1}{\theta_0^2} - \frac{1}{\theta_1^2}$ is negative, so the inequality is equivalent to $\sum_{i=1}^{n} x_i^2 \leq c$ for some constant c. Hence the most powerful test rejects $H_0$ when $\sum_{i=1}^{n} x_i^2 \leq c$, where c is chosen so that

$P\left(\sum_{i=1}^{n} X_i^2 \leq c \,\Big|\, H_0 \text{ is true}\right) = \alpha.$

(b) This is left as an exercise; it follows directly from the two error-probability conditions on the rejection region $\bar{x} \leq c$.
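As a sketch of how n and c can be obtained (assuming SciPy is available): the conditions $\alpha = P(\bar{X} \leq c \mid \mu = 5) = 0.05$ and $\beta = P(\bar{X} > c \mid \mu = 4) = 0.10$ give $(5 - c)\sqrt{n} = z_{0.95}$ and $(c - 4)\sqrt{n} = z_{0.90}$.

```python
import numpy as np
from scipy.stats import norm

z_a, z_b = norm.ppf(0.95), norm.ppf(0.90)   # ≈ 1.645, 1.282
n = round((z_a + z_b) ** 2)                 # sqrt(n) = z_a + z_b  ->  n ≈ 8.56, rounded to 9
c = 5 - z_a / np.sqrt(n)
print(n, c)                                 # 9, ≈ 4.45
```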
