Solutions To IIT JAM For Mathematical Statistics: December 2018
Mohd. Arshad
Department of Statistics & Operations Research
Aligarh Muslim University
Aligarh, India
Email: [email protected]
ISBN: 978-93-5346-351-9
Price: ₹380
Printed in India
Preface
From our teaching experience, we have observed that students preparing for various competitive examinations face difficulties in solving previous years' papers. One such entrance examination is IIT JAM, which has been conducted by the IITs for the last 14 years. Written solutions to JAM question papers are important for students who initially lack the skills to solve the papers completely and who do not have mentors available to clear their doubts. This motivated us to write this book. While framing the idea of the book, we discussed it with our students and received a very positive response, which further encouraged us to write it in a way that lets readers get the maximum benefit from it.
This book contains solutions to the IIT JAM (Mathematical Statistics) examination papers from the years 2005 to 2018. The questions have been solved in such a way that aspirants can gain an insight into the examination pattern as well as a thorough understanding of the concepts on which the questions are based. The purpose of the book is not only to provide solutions to the JAM examination papers but also to make students proficient in writing such solutions effectively in the examination. Graphs are given (wherever required) in support of the solutions to help visualize the concepts. Alternative solutions have also been provided to illustrate different approaches that reach the same conclusion. The book should be suitable for aspirants of various competitive examinations (such as IIT JAM, GATE and ISS) and for students interested in learning the problem-solving techniques and concepts of Mathematical Statistics.
In our country, any book of solutions, like ours, is often seen as a guide that merely spoon-feeds its readers. Because of this mindset, we struggled to find a publisher for this book and resorted to the only option left to us: self-publishing. Various reputed international publishers, such as Springer, have published similar books in different areas, and these have been whole-heartedly welcomed. We hope our book will be welcomed by readers in the same way, which might change the existing mindset.
A note to the students: they should not be completely driven by the solutions. They are encouraged to attempt the problems without looking at the solutions first. If a problem is solved with the help of the solution given in the book, they should try similar problems by themselves and also try to think of alternative solutions, if any.
We would like to thank our friends Pratyoosh, Vivek and Alok for several fruitful discussions, and our students Vaishali, Ruby, Sakshi, Saumya, Harshita and many more for their valuable support throughout the writing of this book. We are thankful to the Department of Statistics, BBAU, Lucknow and to the Department of Statistics and Operations Research, AMU, Aligarh for providing a wonderful environment and facilities. Our colleagues from these departments were very supportive and motivating. The support of family and friends is something without which one cannot go very far; without mentioning their names, we thank the Almighty that we are blessed with such wonderful people around us.
Any errors found are the authors' responsibility, and suggestions are welcome at [email protected].
Since the echelon form has pivots in three columns, namely the 1st, 2nd and 4th, the rank of the matrix P is 3. We can reach the same conclusion by counting the number of non-zero rows in the echelon form. There are three non-zero rows, namely the 1st, 2nd and 3rd, so the rank of P is 3. Hence option (c) is the correct choice.
x + y + z = 3, x + az = b, y + 2z = 3.
This would be satisfied if a + 1 = 0 = b, i.e., a = −1 and b = 0. Hence option (a) is the correct choice.
3. Six identical fair dice are thrown independently. Let S denote the number of dice showing even numbers
on their upper faces. Then the variance of the random variable S is
(a) 1/2 (b) 1 (c) 3/2 (d) 3.
Solution. Define the random variable
X_i = 1 if the ith die shows an even number on its upper face, and X_i = 0 otherwise, i = 1, 2, . . . , 6.
Clearly, the X_i's are i.i.d. Bernoulli random variables with probability of success 3/6 = 1/2. It is easy to see that S = Σ_{i=1}^{6} X_i ∼ Bin(6, 1/2). Therefore, Var(S) = 6 × 1/2 × 1/2 = 3/2. Hence option (c) is the correct choice.
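A quick way to sanity-check this value is a small Monte Carlo simulation; the sketch below (an illustrative addition, not part of the original solution) estimates Var(S) for six fair dice using NumPy.

import numpy as np

rng = np.random.default_rng(0)
# Simulate 6 fair dice, 100,000 times; S counts the dice showing an even face.
rolls = rng.integers(1, 7, size=(100_000, 6))
S = (rolls % 2 == 0).sum(axis=1)
print(S.var())  # close to 1.5 = Var(Bin(6, 1/2))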
4. Let X_1, X_2, . . . , X_21 be a random sample from a distribution having the variance 5. Let X̄ = (1/21) Σ_{i=1}^{21} X_i and S = Σ_{i=1}^{21} (X_i − X̄)². Then the value of E(S) is
(a) 5 (b) 100 (c) 0.25 (d) 105.
Solution. Since the sample variance is an unbiased estimator of the population variance, we have E[S/(21 − 1)] = 5, which implies that E(S) = 20 × 5 = 100. Hence option (b) is the correct choice.
Optional Solution: Let μ = E(X_i), i = 1, 2, . . . , 21. Consider
S = Σ_{i=1}^{21} (X_i − X̄)²
  = Σ_{i=1}^{21} (X_i − μ + μ − X̄)²
  = Σ_{i=1}^{21} [(X_i − μ)² + (X̄ − μ)² − 2(X_i − μ)(X̄ − μ)]
  = Σ_{i=1}^{21} (X_i − μ)² + 21(X̄ − μ)² − 2(X̄ − μ) Σ_{i=1}^{21} (X_i − μ)
  = Σ_{i=1}^{21} (X_i − μ)² − 21(X̄ − μ)²,
since Σ_{i=1}^{21} (X_i − μ) = 21(X̄ − μ). Taking expectations, E(S) = Σ_{i=1}^{21} E(X_i − μ)² − 21 E(X̄ − μ)² = 21 × 5 − 21 × (5/21) = 105 − 5 = 100.
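A numerical check of E(S) = 100 (not part of the original text): simulate many samples of size 21 from any distribution with variance 5; the normal distribution below is an arbitrary choice for illustration.

import numpy as np

rng = np.random.default_rng(1)
# 50,000 samples of size 21 from N(0, 5) -- any distribution with variance 5 would do.
x = rng.normal(loc=0.0, scale=np.sqrt(5), size=(50_000, 21))
S = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
print(S.mean())  # close to (21 - 1) * 5 = 100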
Since (Z_1, Z_2) has a bivariate normal distribution and Cov(Z_1, Z_2) = 0, it follows that Z_1 and Z_2 are independent. Thus, Z_1 and Z_2 are i.i.d. N(0, 1). Therefore, Z_1² and Z_2² are i.i.d. chi-square random variables with 1 degree of freedom. Clearly,
U = ((X − Y)/(X + Y))² = (Z_1²/1)/(Z_2²/1) ∼ F(1, 1),
where F(1, 1) denotes the F-distribution with degrees of freedom (1, 1). Hence option (d) is the correct choice.
Optional Solution: The joint density of X and Y is given by
f_{X,Y}(x, y) = (1/√(2π)) e^{−x²/2} × (1/√(2π)) e^{−y²/2} = (1/(2π)) e^{−(x² + y²)/2}, x, y ∈ R.
Let us define two new random variables Z_1 and Z_2 such that Z_1 = (X − Y)/√2 and Z_2 = (X + Y)/√2. On writing X and Y in terms of Z_1 and Z_2, we get X = (Z_1 + Z_2)/√2 and Y = (Z_2 − Z_1)/√2. Moreover, X² + Y² = Z_1² + Z_2².
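The conclusion U ∼ F(1, 1) can also be checked by simulation. The sketch below (an added illustration, assuming as in the problem that X and Y are independent standard normal) compares simulated values of ((X − Y)/(X + Y))² with the F(1, 1) distribution via a Kolmogorov–Smirnov test from SciPy.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.standard_normal(100_000)
y = rng.standard_normal(100_000)
u = ((x - y) / (x + y)) ** 2
# A large p-value indicates the sample is consistent with the F(1, 1) distribution.
print(stats.kstest(u, stats.f(1, 1).cdf))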
6. In three independent throws of a fair die, let X denote the number of upper faces showing six. Then the value of E(3 − X)² is
(a) 20/3 (b) 3/2 (c) 5/2 (d) 12/5.
Solution. Clearly, X ∼ Bin(3, 1/6). Then, E(X) = 3 × 1/6 = 1/2 and Var(X) = 3 × 1/6 × 5/6 = 5/12. Now,
E(3 − X)² = Var(3 − X) + [E(3 − X)]² = Var(X) + (3 − 1/2)² = 5/12 + 25/4 = 20/3.
Hence option (a) is the correct choice.
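A short simulation (an added check, not part of the original solution) confirms E(3 − X)² = 20/3 ≈ 6.67.

import numpy as np

rng = np.random.default_rng(3)
# X = number of sixes in three throws of a fair die.
X = (rng.integers(1, 7, size=(200_000, 3)) == 6).sum(axis=1)
print(((3 - X) ** 2).mean())  # close to 20/3 ≈ 6.667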
7. Let
        ⎡ 1     0       1 + x   1 + x ⎤
    P = ⎢ 0     1       1       1     ⎥
        ⎢ 1     1 + x   0       1 + x ⎥
        ⎣ 1     1 + x   1 + x   0     ⎦ .
[Figure: the region S in the xy-plane, bounded by the lines x + y = 3/4 and x + y = 3/2, with the points A, B and C marked.]
For a fixed x ∈ {1, 2, 3} and for y ∈ {1, 2, . . . , x}, the conditional pmf of Y , given that X = x, is
P(Y = y | X = x) = P(X = x, Y = y)/P(X = x) = 1/x.
It is also known that 1% of a population suffers from the disease, i.e., P (D) = 0.01 and P (Dc ) = 0.99. On
using Bayes’ theorem, the required probability is given by
P(D|Y) = P(Y|D)P(D) / [P(Y|D)P(D) + P(Y|D^c)P(D^c)] = (0.99 × 0.01)/(0.99 × 0.01 + 0.01 × 0.99) = 0.5.
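The same Bayes computation can be written as a few lines of code; the sketch below (an added illustration) simply re-evaluates the formula, with D the event of having the disease and Y the event of a positive test result, as in the problem.

# Prior and conditional probabilities as given in the problem.
p_d = 0.01           # P(D): prevalence of the disease
p_y_given_d = 0.99   # P(Y | D)
p_y_given_dc = 0.01  # P(Y | D^c)

posterior = (p_y_given_d * p_d) / (p_y_given_d * p_d + p_y_given_dc * (1 - p_d))
print(posterior)  # 0.5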
Y = (−ln U_1)/(−ln U_1 − ln(1 − U_2)) ∼ Beta(1, 1).
The variance of Beta(α, β) is αβ/[(α + β)²(α + β + 1)], and therefore,
Var(Y) = (1 × 1)/[(1 + 1)²(1 + 1 + 1)] = 1/12.
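A simulation check (an addition, assuming as in the problem that U_1 and U_2 are independent U(0, 1)): −ln U_1 and −ln(1 − U_2) are then independent Exp(1) variables, so Y should behave like Beta(1, 1) = U(0, 1) with variance 1/12.

import numpy as np

rng = np.random.default_rng(4)
u1, u2 = rng.uniform(size=(2, 200_000))
y = -np.log(u1) / (-np.log(u1) - np.log(1 - u2))
print(y.var())  # close to 1/12 ≈ 0.0833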
f(x, y) = (1/y) e^{−x/y}, x > 0, 0 < y < 1,
then
(a) E(X) = 0.5 and E(Y ) = 0.5 (b) E(X) = 1.0 and E(Y ) = 0.5
(c) E(X) = 0.5 and E(Y ) = 1.0 (d) E(X) = 1.0 and E(Y ) = 1.0.
Solution. For x > 0, the marginal pdf of X is f_X(x) = ∫_0^1 f(x, y) dy = ∫_0^1 (1/y) e^{−x/y} dy, and therefore,
E(X) = ∫_{−∞}^{∞} x f_X(x) dx
     = ∫_0^∞ x [∫_0^1 (1/y) e^{−x/y} dy] dx
     = ∫_0^1 [∫_0^∞ x (1/y) e^{−x/y} dx] dy   (changing the order of integration)
     = ∫_0^1 y dy   (using the formula for the mean of the exponential distribution)
     = 1/2.
Now, the marginal pdf of Y is given by
f_Y(y) = ∫_0^∞ (1/y) e^{−x/y} dx = 1 for 0 < y < 1, and f_Y(y) = 0 otherwise.
Clearly, Y ∼ U(0, 1), and therefore, E(Y) = 1/2. Hence option (a) is the correct choice.
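A Monte Carlo check of E(X) = E(Y) = 1/2 (an added illustration, not part of the original solution): the joint density factors as Y ∼ U(0, 1) and, given Y = y, X exponential with mean y.

import numpy as np

rng = np.random.default_rng(5)
y = rng.uniform(size=200_000)     # Y ~ U(0, 1)
x = rng.exponential(scale=y)      # X | Y = y ~ exponential with mean y
print(x.mean(), y.mean())         # both close to 0.5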
7. If X is an F(m, n) random variable, where m > 2, n > 2, then E(X) E(1/X) equals
(a) n(n − 2)/[m(m − 2)] (b) m(m − 2)/[n(n − 2)] (c) mn/[(m − 2)(n − 2)] (d) m(n − 2)/[n(m − 2)].
Solution. We know that if X ∼ F(m, n), m > 2, n > 2, then E(X) = n/(n − 2) and 1/X ∼ F(n, m). Therefore,
E(X) E(1/X) = n/(n − 2) × m/(m − 2) = mn/[(m − 2)(n − 2)].
Hence option (c) is the correct choice.
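The formula can be verified numerically with SciPy (an added check; the values m = 5, n = 8 below are an arbitrary choice, and scipy.stats.f takes dfn = m, dfd = n): E(X) is the mean of F(m, n), and E(1/X) is the mean of F(n, m).

from scipy import stats

m, n = 5, 8  # any m > 2, n > 2
ex = stats.f(dfn=m, dfd=n).mean()      # n / (n - 2)
ex_inv = stats.f(dfn=n, dfd=m).mean()  # m / (m - 2), since 1/X ~ F(n, m)
print(ex * ex_inv, m * n / ((m - 2) * (n - 2)))  # both ≈ 2.222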
where α1 ≥ 0 and α2 ≥ 0 are unknown parameters such that α1 + α2 ≤ 1. For testing the null hypothesis
H0 : α1 + α2 = 1 against the alternative hypothesis H1 : α1 = α2 = 0, suppose that the critical region is
C = {2, 3}. Then, this critical region has
(a) size = 1/2 and power = 2/3 (b) size = 1/4 and power = 2/3
(c) size = 1/2 and power = 1/4 (d) size = 2/3 and power = 1/3.
Solution. It is easy to verify that the pmfs of X under H0 : α1 + α2 = 1 and under H1 : α1 = α2 = 0 are
given by
f_{H0}(x) = (1 + α_1)/2 if x = 1, (1 − α_1)/2 if x = 2, and 0 otherwise,
and
f_{H1}(x) = 1/3 if x ∈ {1, 2, 3}, and 0 otherwise,
respectively. Recall that the size of a critical region is the supremum of the probability of type-I error,
where the supremum is taken over all the values of parameter(s) in H0 . Therefore, for the given problem,
size = sup_{α_1 + α_2 = 1} (f_{H0}(2) + f_{H0}(3))
     = sup_{α_1 + α_2 = 1} [(1 − α_1)/2 + 0]
     = sup_{α_1 + α_2 = 1} α_2/2
     = 1/2,
where the last equality follows from the fact that the maximum possible value of α2 , under H0 , is 1. Now,
the power of the critical region C is given by
P_{H1}(X ∈ C) = P_{H1}(X ∈ {2, 3}) = f_{H1}(2) + f_{H1}(3) = 1/3 + 1/3 = 2/3.
Hence option (a) is the correct choice.
9. The observed value of mean of a random sample from N (θ, 1) distribution is 2.3. If the parameter space is
Θ = {0, 1, 2, 3}, then the maximum likelihood estimate of θ is
(a) 1 (b) 2 (c) 2.3 (d) 3.
Solution. Let the observed sample be x = (x_1, x_2, . . . , x_n); then the likelihood function is given by
L(θ|x) = (1/√(2π))^n e^{−(1/2) Σ_{i=1}^{n} (x_i − θ)²} if θ ∈ {0, 1, 2, 3}, and L(θ|x) = 0 otherwise.
Clearly, to maximize L(θ|x), we must select θ ∈ {0, 1, 2, 3} to minimize Σ_{i=1}^{n} (x_i − θ)². Now,
Σ_{i=1}^{n} (x_i − θ)² = Σ_{i=1}^{n} (x_i − x̄ + x̄ − θ)²
                      = Σ_{i=1}^{n} (x_i − x̄)² + n(x̄ − θ)² + 2(x̄ − θ) Σ_{i=1}^{n} (x_i − x̄)
                      = Σ_{i=1}^{n} (x_i − x̄)² + n(x̄ − θ)² + 2(x̄ − θ)(nx̄ − nx̄)
                      = Σ_{i=1}^{n} (x_i − 2.3)² + n(2.3 − θ)².
It is easy to verify that (2.3 − θ)², θ ∈ {0, 1, 2, 3}, is minimized at θ = 2, and so is Σ_{i=1}^{n} (x_i − θ)². Thus, θ̂_MLE = 2. Hence option (b) is the correct choice.
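Since the parameter space is finite, the MLE can also be found by direct enumeration; the tiny sketch below (not in the original) evaluates the only θ-dependent part of the sum of squares, (x̄ − θ)², at each candidate value.

x_bar = 2.3
candidates = [0, 1, 2, 3]
# The MLE minimizes (x_bar - theta)^2 over the finite parameter space.
theta_hat = min(candidates, key=lambda t: (x_bar - t) ** 2)
print(theta_hat)  # 2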
(a) converges for x > 1 and diverges for x ≤ 1 (b) converges for x ≤ 1 and diverges for x > 1
(c) converges for all x > 0 (d) diverges for all x > 0.
Chapter 14
Solution. First we show that a_n ≥ 1.5 for every natural number n ≥ 1. Clearly, a_1 = 2 > 1.5. For any fixed k ∈ N, assume that a_k ≥ 1.5. Then
a_{k+1} = (2a_k + 1)/(a_k + 1) = 2 − 1/(a_k + 1) ≥ 2 − 1/(1.5 + 1) = 1.6 > 1.5.
By mathematical induction, we conclude that a_n ≥ 1.5, ∀n ∈ N. Next we show that a_n ≤ 2 for every natural number n ≥ 1. It is given that a_1 = 2. For any fixed k ∈ N, assume that a_k ≤ 2. Then
a_{k+1} = 2 − 1/(a_k + 1) ≤ 2 − 1/(2 + 1) = 5/3 < 2.
By mathematical induction, we conclude that a_n ≤ 2, ∀n ∈ N. Thus, 1.5 ≤ a_n ≤ 2, ∀n ∈ N. Hence option (a) is the correct choice.
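A short numerical check (an addition, not part of the original solution) iterates the recursion and confirms that every term stays in [1.5, 2]; the sequence in fact approaches the fixed point of a = (2a + 1)/(a + 1), namely (1 + √5)/2 ≈ 1.618.

a = 2.0
for n in range(1, 21):
    a = (2 * a + 1) / (a + 1)   # the recursion from the problem, with a_1 = 2
    assert 1.5 <= a <= 2.0
    print(n, a)                 # approaches (1 + sqrt(5)) / 2 ≈ 1.618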
2. The value of lim_{n→∞} (1 + 2/n)^{n²} e^{−2n} is
(a) e^{−2} (b) e^{−1} (c) e (d) e².
Solution. We have
lim_{n→∞} (1 + 2/n)^{n²} e^{−2n} = lim_{n→∞} exp{ n² ln(1 + 2/n) − 2n }
  = lim_{n→∞} exp{ n² [ 2/n − (1/2)(2/n)² + (1/3)(2/n)³ − ··· ] − 2n }
    (since ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ···, −1 < x < 1)
  = lim_{n→∞} exp{ −2 + (2³/3)(1/n) − ··· }
  = e^{−2}.
Hence option (a) is the correct choice.
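The limit can also be checked numerically by working on the log scale (an added check, not part of the original solution): n² ln(1 + 2/n) − 2n should approach −2.

import math

for n in (10, 100, 1_000, 10_000):
    log_value = n**2 * math.log1p(2 / n) - 2 * n
    print(n, math.exp(log_value))  # approaches e^{-2} ≈ 0.1353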
3. Let {an }n≥1 and {bn }n≥1 be two convergent sequences of real numbers. For n ≥ 1, define un = max{an , bn }
and vn = min{an , bn }. Then
Solution. Let lim_{n→∞} a_n = a and lim_{n→∞} b_n = b. Without loss of generality, assume that a ≤ b. If a = b, then, since |u_n − a| ≤ max(|a_n − a|, |b_n − a|) and |v_n − a| ≤ max(|a_n − a|, |b_n − a|) for every n, both {u_n}_{n≥1} and {v_n}_{n≥1} converge to a.
Now, assume that a < b. Let ε = b−a 2 > 0. Convergence of {an }n≥1 and {bn }n≥1 to a and b, respectively,
implies that ∃K1 , K2 ∈ N such that |an − a| < ε and |bn − b| < ε, ∀n ≥ K ∗ = max(K1 , K2 ). Equivalently,
we have
λ² − trace(M)λ + det(M) = 0 ⇒ λ² − (13/20)λ − 7/20 = 0 ⇒ 20λ² − 13λ − 7 = 0.
By the Cayley-Hamilton theorem, we have 20M 2 − 13M − 7I = 0. Hence option (b) is the correct choice.
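The original matrix M is not reproduced in this excerpt, but the Cayley–Hamilton identity can be illustrated with any 2 × 2 matrix having trace 13/20 and determinant −7/20; the companion matrix below is one such hypothetical choice.

import numpy as np

# A hypothetical 2x2 matrix with trace 13/20 and determinant -7/20.
M = np.array([[13/20, 7/20],
              [1.0,   0.0]])
I = np.eye(2)
print(20 * M @ M - 13 * M - 7 * I)  # the zero matrix, up to rounding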
Then
P (X = 0) + P (X = 1.5) + P (X = 2) + P (X ≥ 1)
equals
(a) 3/8 (b) 5/8 (c) 7/8 (d) 1.
Solution. We have
P(X = 0) = F(0) − F(0−) = F(0) − lim_{h→0+} F(0 − h) = 1/4 − lim_{h→0+} 0 = 1/4.
Since F is continuous at x = 1.5, we get P(X = 1.5) = 0. We also have
P(X = 2) = F(2) − F(2−)
         = F(2) − lim_{h→0+} F(2 − h)
         = 1 − lim_{h→0+} [ 1/4 + (4(2 − h) − (2 − h)²)/8 ]
         = 1 − [ 1/4 + (8 − 4)/8 ]
         = 1/4
and
P(X ≥ 1) = 1 − P(X < 1)
         = 1 − F(1−)
         = 1 − lim_{h→0+} F(1 − h)
         = 1 − lim_{h→0+} [ 1/4 + (4(1 − h) − (1 − h)²)/8 ]
         = 1 − [ 1/4 + (4 − 1)/8 ]
         = 3/8.
Then, we have
P(X = 0) + P(X = 1.5) + P(X = 2) + P(X ≥ 1) = 1/4 + 0 + 1/4 + 3/8 = 7/8.
Hence option (c) is the correct choice.
7. Let X_1, X_2 and X_3 be i.i.d. U(0, 1) random variables. Then E[(X_1 + X_2)/(X_1 + X_2 + X_3)] equals
(a) 1/3 (b) 1/2 (c) 2/3 (d) 3/4.
Solution. We have
(X_1 + X_2 + X_3)/(X_1 + X_2 + X_3) = 1
⇒ E[(X_1 + X_2 + X_3)/(X_1 + X_2 + X_3)] = 1
⇒ E[X_1/(X_1 + X_2 + X_3)] + E[X_2/(X_1 + X_2 + X_3)] + E[X_3/(X_1 + X_2 + X_3)] = 1
⇒ 3 E[X_1/(X_1 + X_2 + X_3)] = 1   (since X_1, X_2 and X_3 are i.i.d. random variables)
⇒ E[X_1/(X_1 + X_2 + X_3)] = 1/3,
and therefore,
E[(X_1 + X_2)/(X_1 + X_2 + X_3)] = 2 E[X_1/(X_1 + X_2 + X_3)] = 2/3.
Hence option (c) is the correct choice.
Remark: Note that the specific distribution U(0, 1) is not used anywhere beyond guaranteeing that the X_i are positive, so that the ratios are well defined; the same argument works for any i.i.d. positive random variables.
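A Monte Carlo check of the value 2/3 (an added illustration, not part of the original remark); the U(0, 1) draws could be replaced by any i.i.d. positive random variables.

import numpy as np

rng = np.random.default_rng(6)
x1, x2, x3 = rng.uniform(size=(3, 200_000))
print(((x1 + x2) / (x1 + x2 + x3)).mean())  # close to 2/3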
where θ ∈ [0, 1] is the unknown parameter. Then the maximum likelihood estimate of θ is
(a) 2/5 (b) 3/5 (c) 5/7 (d) 5/9.
Solution. The likelihood function is given by L(θ|x) ∝ θ³(1 − θ)², so the log-likelihood is l(θ) = 3 ln θ + 2 ln(1 − θ) + constant. Then,
l′(θ) = 3/θ − 2/(1 − θ) = 0 ⇒ θ = 3/5,
and
l″(θ) = −3/θ² − 2/(1 − θ)² < 0, ∀θ ∈ (0, 1),
which implies that the maximum likelihood estimate of θ is 3/5. Hence option (b) is the correct choice.
9. Consider four coins labelled 1, 2, 3 and 4. Suppose that the probability of obtaining a 'head' in a single toss of the ith coin is i/4, i = 1, 2, 3, 4. A coin is chosen uniformly at random and flipped. Given that the flip resulted in a 'head', the conditional probability that the coin was labelled either 1 or 2 equals
(a) 1/10 (b) 2/10 (c) 3/10 (d) 4/10.
Solution. Let C_i denote the event that the ith coin is chosen. Then P(C_i) = 1/4, i = 1, . . . , 4. Further, let E be the event that the flip resulted in a 'head'. Then, P(E|C_i) = i/4, i = 1, 2, 3, 4. The required probability is given by
P(C_1 ∪ C_2 | E) = P(C_1|E) + P(C_2|E)   (since the C_i's are mutually exclusive)
  = [P(E|C_1)P(C_1) + P(E|C_2)P(C_2)] / Σ_{i=1}^{4} P(E|C_i)P(C_i)
  = (1/4 × 1/4 + 2/4 × 1/4) / Σ_{i=1}^{4} (i/4 × 1/4)
  = (1 + 2) / Σ_{i=1}^{4} i
  = 3/10.
Hence option (c) is the correct choice.
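The answer 3/10 can also be confirmed by simulating the experiment directly; the sketch below (an illustrative addition) draws a coin uniformly, flips it with head-probability i/4, and conditions on getting a head.

import numpy as np

rng = np.random.default_rng(7)
n = 500_000
coin = rng.integers(1, 5, size=n)       # coin label 1..4, chosen uniformly
head = rng.uniform(size=n) < coin / 4   # head with probability i/4
# Conditional probability that the coin was 1 or 2, given a head.
print(((coin <= 2) & head).sum() / head.sum())  # close to 0.3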
10. Consider the linear regression model y_i = β_0 + β_1 x_i + ε_i, i = 1, 2, . . . , n, where the ε_i's are i.i.d. standard normal random variables. Given that
(1/n) Σ_{i=1}^{n} x_i = 3.2,  (1/n) Σ_{i=1}^{n} y_i = 4.2,  (1/n) Σ_{j=1}^{n} ( x_j − (1/n) Σ_{i=1}^{n} x_i )² = 1.5