
Solutions to IIT JAM for Mathematical Statistics

Book · December 2018

Authors: Amit Kumar Misra (Babasaheb Bhimrao Ambedkar University) and Mohd. Arshad (Indian Institute of Technology Indore)

Available at: https://www.researchgate.net/publication/329609889

All content following this page was uploaded by Mohd. Arshad on 25 February 2019.


About the Authors

Dr. Amit Kumar Misra is currently working as an Assistant Professor in the Department of Statistics, Babasaheb Bhimrao Ambedkar University, Lucknow. He has previously worked in the Department of Statistics at the Central University of South Bihar. After acquiring his M.Sc. degree in Statistics from C.S.J.M. University, Kanpur, he completed his Ph.D. in Statistics from the Indian Institute of Technology (IIT) Kanpur. He has published/communicated eight research papers in reputed international journals. He has participated in several conferences/workshops. He is a member of the Indian Society for Medical Statistics, the Indian Science Congress and the Institute of Actuaries of India. Dr. Misra has been teaching undergraduate and postgraduate courses for more than 7 years.

Dr. Mohd. Arshad is an Assistant Professor in the Department of Statistics & Operations Research, Aligarh Muslim University, Aligarh. He did his M.Sc. (Statistics), Gold Medallist, from C.S.J.M. University, Kanpur, and his Ph.D. (Statistics) from the Indian Institute of Technology (IIT) Kanpur. He has published/presented several research papers in reputed international journals/conferences. He is a member of the Editorial Boards of The Aligarh Journal of Statistics and Computer Simulation in Application. He is a member of the International Indian Statistical Association, the Indian Science Congress, the Indian Society for Probability and Statistics, and the Indian Mathematical Society. He is also associated with different universities and colleges in different capacities. Dr. Arshad has been teaching undergraduate and postgraduate courses for more than 4 years.

Solutions to IIT JAM for Mathematical Statistics

Amit Kumar Misra
Mohd. Arshad
Solutions to IIT JAM for Mathematical Statistics

Amit Kumar Misra


Department of Statistics
Babasaheb Bhimrao Ambedkar University
Lucknow, India
Email: [email protected]

Mohd. Arshad
Department of Statistics & Operations Research
Aligarh Muslim University
Aligarh, India
Email: [email protected]

Copyright © 2018, Authors: Self-publishing


ALL RIGHTS RESERVED. No part of this book may be reproduced, stored in or introduced into a retrieval
system, or transmitted in any form or by any means (photocopying, electronic, mechanical, recording, or other-
wise), without the prior written permission of the authors of this book. Any infringement will be strictly
dealt with according to the Copyright Act.

ISBN: 978-93-5346-351-9

Price: ₹ 380

Printed in India
Preface

From our teaching experience, we have observed that students preparing for various competitive examinations face difficulties in solving previous years' papers. One such entrance examination is IIT JAM, which has been conducted by the IITs for the last 14 years. Written solutions to JAM question papers are important for students who initially lack the skills to solve the papers completely and who do not have mentors available to clear their doubts. This motivated us to write this book. While framing the idea of the book, we discussed it with our students and received a very positive response, which further encouraged us to write it in a way that gives readers the maximum benefit.

This book contains solutions to the IIT JAM (Mathematical Statistics) examination papers from the years 2005 to 2018. The questions have been solved in such a way that aspirants gain an insight into the examination pattern as well as a maximum understanding of the concepts on which the questions are based. The purpose of the book is not only to provide solutions to the JAM examination papers but also to make students proficient in writing those solutions to get maximum output. Graphs are given (wherever required) in support of the solutions to visualize the concepts. Alternative solutions have also been provided to explain different approaches to reaching the same conclusion. The book should suit aspirants of different competitive examinations (such as IIT JAM, GATE and ISS) as well as students interested in learning the problem-solving techniques and concepts of Mathematical Statistics.

In our country, any book of solutions, like ours, is seen as a guide which can only spoon-feed its readers. Because of this mindset, we struggled to find a publisher for this book and resorted to the only option left to us: self-publishing. Various reputed international publishers, like Springer, have published such books in different areas, and they have been whole-heartedly welcomed. Hopefully, our book will be welcomed by its readers, which might change the existing mindset.

A note to the students: do not be completely driven by the solutions. You are encouraged to attempt the problems without looking at the solutions first. If a problem is solved with the help of the solution given in the book, try similar problems by yourself and also try to think of alternative solutions, if any.

We would like to thank our friends Pratyoosh, Vivek and Alok for several fruitful discussions, and our students Vaishali, Ruby, Sakshi, Saumya, Harshita and many more for their valuable support throughout the writing of this book. We are thankful to the Department of Statistics, BBAU, Lucknow and to the Department of Statistics and Operations Research, AMU, Aligarh for providing a wonderful environment and facilities. Our colleagues from these departments were very supportive and motivating. The support of family and friends is something without which one cannot go very far. Without mentioning their names, we are thankful to the Almighty that we are blessed with such wonderful people around us.

Any errors found are the authors' responsibility, and suggestions are welcome at [email protected].

Amit Kumar Misra


Mohd. Arshad

December 10, 2018


Contents

1 Questions and Solutions of IIT JAM (MS) – 2005 1


1.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2 Questions and Solutions of IIT JAM (MS) – 2006 17


2.1 Compulsory Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.2 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2 Optional Sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.1 For M.Sc. at IIT Bombay/Kharagpur . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.2 For M.Sc. at IIT Kanpur . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

3 Questions and Solutions of IIT JAM (MS) – 2007 37


3.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4 Questions and Solutions of IIT JAM (MS) – 2008 57


4.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

5 Questions and Solutions of IIT JAM (MS) – 2009 77


5.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.2 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

6 Questions and Solutions of IIT JAM (MS) – 2010 98


6.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.2 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

7 Questions and Solutions of IIT JAM (MS) – 2011 117


7.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.2 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

8 Questions and Solutions of IIT JAM (MS) – 2012 134


8.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.2 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

9 Questions and Solutions of IIT JAM (MS) – 2013 153


9.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
9.2 Fill in the Blank Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
9.3 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

10 Questions and Solutions of IIT JAM (MS) – 2014 172


10.1 Objective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
10.2 Subjective Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

11 Questions and Solutions of IIT JAM (MS) – 2015 197


11.1 Multiple Choice Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
11.2 Multiple Select Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
11.3 Numerical Answer Type Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

12 Questions and Solutions of IIT JAM (MS) – 2016 218


12.1 Multiple Choice Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
12.2 Multiple Select Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
12.3 Numerical Answer Type Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234

13 Questions and Solutions of IIT JAM (MS) – 2017 243


13.1 Multiple Choice Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
13.2 Multiple Select Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
13.3 Numerical Answer Type Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262

14 Questions and Solutions of IIT JAM (MS) – 2018 272


14.1 Multiple Choice Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
14.2 Multiple Select Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
14.3 Numerical Answer Type Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Chapter 1

Questions and Solutions of IIT JAM (MS) – 2005

1.1 Objective Questions


1. Let
$$P = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 1 \\ 2 & 3 & 4 & 8 & 6 & 3 \\ 2 & 4 & 6 & 7 & 10 & 3 \\ 4 & 7 & 10 & 14 & 16 & 7 \end{pmatrix}.$$
Then the rank of the matrix P is
(a) 1 (b) 2 (c) 3 (d) 4.
Solution. Using elementary row operations, we get
$$P \sim \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 1 \\ 0 & -1 & -2 & 0 & -4 & 1 \\ 0 & 0 & 0 & -1 & 0 & 1 \\ 0 & -1 & -2 & -2 & -4 & 3 \end{pmatrix} \quad (R_2 \to R_2 - 2R_1,\; R_3 \to R_3 - 2R_1,\; R_4 \to R_4 - 4R_1)$$
$$\sim \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 1 \\ 0 & -1 & -2 & 0 & -4 & 1 \\ 0 & 0 & 0 & -1 & 0 & 1 \\ 0 & 0 & 0 & -2 & 0 & 2 \end{pmatrix} \quad (R_4 \to R_4 - R_2)$$
$$\sim \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 1 \\ 0 & -1 & -2 & 0 & -4 & 1 \\ 0 & 0 & 0 & -1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \quad (R_4 \to R_4 - 2R_3).$$
Since the echelon form has pivots in three columns, namely the 1st, 2nd and 4th, the rank of the matrix P is 3. We can reach the same conclusion by counting the number of non-zero rows in the echelon form. There are three non-zero rows, namely the 1st, 2nd and 3rd, so the rank of P is 3. Hence option (c) is the correct choice.
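As a sanity check on the row reduction above (our illustration, not part of the original solution), the rank can be reproduced in a few lines of Python. Exact rational arithmetic is used so that no floating-point issues arise:

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    r = 0  # index of the next pivot row
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

P = [[1, 2, 3, 4, 5, 1],
     [2, 3, 4, 8, 6, 3],
     [2, 4, 6, 7, 10, 3],
     [4, 7, 10, 14, 16, 7]]
print(rank(P))  # 3
```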

2. Consider the following system of linear equations:

x + y + z = 3, x + az = b, y + 2z = 3.

This system has an infinite number of solutions if

(a) a = −1, b = 0 (b) a = 1, b = 2 (c) a = 0, b = 1 (d) a = −1, b = 1.

Solution. Adding the second and third equations, we get x + y + (a + 2)z = b + 3. For an infinite number of solutions, this equation must be the same as the first equation of the given system, i.e., a + 2 = 1 and b + 3 = 3. This implies that a = −1 and b = 0. Hence option (a) is the correct choice.
Alternative Solution: The given system of linear equations can be written as AX = B, where
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & a \\ 0 & 1 & 2 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \quad \text{and} \quad B = \begin{pmatrix} 3 \\ b \\ 3 \end{pmatrix}.$$
Using elementary row operations on the augmented matrix [A : B], we have
$$\begin{pmatrix} 1 & 1 & 1 &:& 3 \\ 1 & 0 & a &:& b \\ 0 & 1 & 2 &:& 3 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 1 &:& 3 \\ 0 & -1 & a-1 &:& b-3 \\ 0 & 1 & 2 &:& 3 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 1 &:& 3 \\ 0 & -1 & a-1 &:& b-3 \\ 0 & 0 & a+1 &:& b \end{pmatrix}.$$
For an infinite number of solutions to exist,

rank(A : B) = rank(A) < number of columns in A.

This is satisfied if a + 1 = 0 = b, i.e., a = −1 and b = 0. Hence option (a) is the correct choice.
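The rank condition can also be checked mechanically; the sketch below (ours, with hypothetical helper names) runs exact Gaussian elimination over the rationals on A and on the augmented matrix for each candidate pair (a, b):

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination."""
    m = [[Fraction(v) for v in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [p - f * q for p, q in zip(m[i], m[r])]
        r += 1
    return r

def has_infinitely_many(a, b):
    """True iff rank(A : B) = rank(A) < 3 for the given system."""
    A = [[1, 1, 1], [1, 0, a], [0, 1, 2]]
    Ab = [row + [rhs] for row, rhs in zip(A, [3, b, 3])]
    return rank(A) == rank(Ab) < 3

print(has_infinitely_many(-1, 0))  # True: option (a)
print(has_infinitely_many(1, 2))   # False
```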

3. Six identical fair dice are thrown independently. Let S denote the number of dice showing even numbers on their upper faces. Then the variance of the random variable S is
(a) 1/2 (b) 1 (c) 3/2 (d) 3.
Solution. Define the random variable
$$X_i = \begin{cases} 1 & \text{if the } i\text{th die shows an even number on its upper face},\\ 0 & \text{otherwise}, \end{cases} \quad i = 1, 2, \ldots, 6.$$
Clearly, the $X_i$'s are iid Bernoulli random variables with probability of success equal to 3/6 = 1/2. It is easy to see that $S = \sum_{i=1}^{6} X_i \sim \text{Bin}\left(6, \frac 12\right)$. Therefore, Var(S) = 6 × (1/2) × (1/2) = 3/2. Hence option (c) is the correct choice.
4. Let $X_1, X_2, \ldots, X_{21}$ be a random sample from a distribution having variance 5. Let $\bar X = \frac{1}{21}\sum_{i=1}^{21} X_i$ and $S = \sum_{i=1}^{21} (X_i - \bar X)^2$. Then the value of E(S) is
(a) 5 (b) 100 (c) 0.25 (d) 105.
Solution. Since the sample variance is an unbiased estimator of the population variance, we have $E\left(\frac{S}{21-1}\right) = 5$, which implies that E(S) = 20 × 5 = 100. Hence option (b) is the correct choice.
Alternative Solution: Let $\mu = E(X_i)$, i = 1, 2, . . . , 21. Consider
$$S = \sum_{i=1}^{21} (X_i - \bar X)^2 = \sum_{i=1}^{21} (X_i - \mu + \mu - \bar X)^2 = \sum_{i=1}^{21} \left[(X_i - \mu)^2 + (\bar X - \mu)^2 - 2(X_i - \mu)(\bar X - \mu)\right]$$
$$= \sum_{i=1}^{21} (X_i - \mu)^2 + 21(\bar X - \mu)^2 - 2(\bar X - \mu)\sum_{i=1}^{21}(X_i - \mu) = \sum_{i=1}^{21} (X_i - \mu)^2 - 21(\bar X - \mu)^2.$$
Taking expectation on both sides, we get
$$E(S) = \sum_{i=1}^{21} E(X_i - \mu)^2 - 21\,E(\bar X - \mu)^2 = \sum_{i=1}^{21} \mathrm{Var}(X_i) - 21\,\mathrm{Var}(\bar X) = 21 \times 5 - 21 \times \frac{5}{21} = 105 - 5 = 100.$$
Hence option (b) is the correct choice.


5. Let X and Y be independent standard normal random variables. Then the distribution of $U = \left(\frac{X-Y}{X+Y}\right)^2$ is
(a) chi-square with 2 degrees of freedom (b) chi-square with 1 degree of freedom
(c) F with (2, 2) degrees of freedom (d) F with (1, 1) degrees of freedom.
Solution. Let us define two new random variables $Z_1 = \frac{X-Y}{\sqrt 2}$ and $Z_2 = \frac{X+Y}{\sqrt 2}$. Since $Z_1$ and $Z_2$ are linear combinations of two independent normal random variables, it follows that $(Z_1, Z_2)$ has a bivariate normal distribution and the marginal distributions of $Z_1$ and $Z_2$ are univariate normal. It is easy to verify that $Z_i \sim N(0, 1)$, i = 1, 2. Now consider the covariance
$$\mathrm{Cov}(Z_1, Z_2) = \mathrm{Cov}\left(\frac{X-Y}{\sqrt 2}, \frac{X+Y}{\sqrt 2}\right) = \frac 12\left[\mathrm{Var}(X) - \mathrm{Var}(Y)\right] = 0.$$
Since $(Z_1, Z_2)$ has a bivariate normal distribution and $\mathrm{Cov}(Z_1, Z_2) = 0$, it follows that the $Z_i$'s are independent. Thus, the $Z_i$'s are i.i.d. N(0, 1), i = 1, 2. Therefore, $Z_1^2$ and $Z_2^2$ are i.i.d. chi-square random variables with 1 degree of freedom. Clearly,
$$U = \left(\frac{X-Y}{X+Y}\right)^2 = \frac{Z_1^2/1}{Z_2^2/1} \sim F(1, 1),$$
where F(1, 1) denotes the F-distribution with (1, 1) degrees of freedom. Hence option (d) is the correct choice.
Alternative Solution: The joint density of X and Y is given by
$$f_{X,Y}(x, y) = \frac{1}{\sqrt{2\pi}} e^{-\frac 12 x^2} \times \frac{1}{\sqrt{2\pi}} e^{-\frac 12 y^2} = \frac{1}{2\pi} e^{-\frac 12 (x^2 + y^2)}, \quad x, y \in \mathbb{R}.$$
Let us define two new random variables $Z_1 = \frac{X-Y}{\sqrt 2}$ and $Z_2 = \frac{X+Y}{\sqrt 2}$. Writing X and Y in terms of $Z_1$ and $Z_2$, we get $X = \frac{1}{\sqrt 2}(Z_1 + Z_2)$ and $Y = \frac{1}{\sqrt 2}(Z_2 - Z_1)$. Moreover, $X^2 + Y^2 = Z_1^2 + Z_2^2$, and the Jacobian of the transformation is given by
$$J = \frac{\partial(x, y)}{\partial(z_1, z_2)} = \begin{vmatrix} 1/\sqrt 2 & 1/\sqrt 2 \\ -1/\sqrt 2 & 1/\sqrt 2 \end{vmatrix} = 1.$$
Then, the joint density of $Z_1$ and $Z_2$ is given by
$$f_{Z_1, Z_2}(z_1, z_2) = \frac{1}{2\pi} e^{-\frac 12 (z_1^2 + z_2^2)} = \frac{1}{\sqrt{2\pi}} e^{-\frac 12 z_1^2} \times \frac{1}{\sqrt{2\pi}} e^{-\frac 12 z_2^2}, \quad z_1, z_2 \in \mathbb{R}.$$
Thus, the $Z_i$'s are i.i.d. N(0, 1), i = 1, 2. Therefore, $Z_1^2$ and $Z_2^2$ are i.i.d. chi-square random variables with 1 degree of freedom. Clearly,
$$U = \left(\frac{X-Y}{X+Y}\right)^2 = \frac{Z_1^2/1}{Z_2^2/1} \sim F(1, 1),$$
where F(1, 1) denotes the F-distribution with (1, 1) degrees of freedom. Hence option (d) is the correct choice.

6. In three independent throws of a fair die, let X denote the number of upper faces showing six. Then the value of E(3 − X)² is
(a) 20/3 (b) 2/3 (c) 2/5 (d) 5/12.
Solution. Clearly, X ∼ Bin(3, 1/6). Then, E(X) = 3 × (1/6) = 1/2 and Var(X) = 3 × (1/6) × (5/6) = 5/12. Now,
$$E(3 - X)^2 = E(9 + X^2 - 6X) = 9 + E(X^2) - 6E(X) = 9 + \mathrm{Var}(X) + [E(X)]^2 - 6E(X) = 9 + \frac{5}{12} + \frac 14 - 6 \times \frac 12 = \frac{20}{3}.$$
Hence option (a) is the correct choice.
Alternative Solution: Clearly, X ∼ Bin(3, 1/6). Let Y = 3 − X. Then, Y ∼ Bin(3, 5/6), E(Y) = 3 × (5/6) = 5/2 and Var(Y) = 3 × (5/6) × (1/6) = 5/12. Now,
$$E(3 - X)^2 = E(Y^2) = \mathrm{Var}(Y) + [E(Y)]^2 = \frac{5}{12} + \left(\frac 52\right)^2 = \frac{20}{3}.$$
Hence option (a) is the correct choice.
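Both solutions can be cross-checked by direct enumeration over the Bin(3, 1/6) pmf; the sketch below (our illustration) computes E(3 − X)² exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

# pmf of X ~ Bin(3, 1/6)
p = Fraction(1, 6)
pmf = {x: comb(3, x) * p**x * (1 - p)**(3 - x) for x in range(4)}

# direct enumeration of E(3 - X)^2
expectation = sum(prob * (3 - x)**2 for x, prob in pmf.items())
print(expectation)  # 20/3
```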

7. Let
$$P = \begin{pmatrix} 1 & 0 & 1+x & 1+x \\ 0 & 1 & 1 & 1 \\ 1 & 1+x & 0 & 1+x \\ 1 & 1+x & 1+x & 0 \end{pmatrix}.$$
Then the determinant of the matrix P is
(a) 3(x + 1)³ (b) 3(x + 1)² (c) 3(x + 1) (d) (x + 1)(2x + 3).
Solution. Expanding along the first row, the determinant of the matrix P is
$$|P| = 1\begin{vmatrix} 1 & 1 & 1 \\ 1+x & 0 & 1+x \\ 1+x & 1+x & 0 \end{vmatrix} - 0\begin{vmatrix} 0 & 1 & 1 \\ 1 & 0 & 1+x \\ 1 & 1+x & 0 \end{vmatrix} + (1+x)\begin{vmatrix} 0 & 1 & 1 \\ 1 & 1+x & 1+x \\ 1 & 1+x & 0 \end{vmatrix} - (1+x)\begin{vmatrix} 0 & 1 & 1 \\ 1 & 1+x & 0 \\ 1 & 1+x & 1+x \end{vmatrix}$$
$$= 1\left\{1[0 - (1+x)^2] - 1[0 - (1+x)^2] + 1[(1+x)^2 - 0]\right\} + (1+x)\left\{0 - 1[0 - (1+x)] + 1[(1+x) - (1+x)]\right\}$$
$$\quad - (1+x)\left\{0 - 1[(1+x) - 0] + 1[(1+x) - (1+x)]\right\} = 3(1+x)^2.$$
Hence option (b) is the correct choice.
Alternative Solution: Using elementary row operations, the determinant of the matrix P is
$$|P| = \begin{vmatrix} 1 & 0 & 1+x & 1+x \\ 0 & 1 & 1 & 1 \\ 0 & 1+x & -(1+x) & 0 \\ 0 & 1+x & 0 & -(1+x) \end{vmatrix} \quad (R_3 \to R_3 - R_1,\; R_4 \to R_4 - R_1)$$
$$= \begin{vmatrix} 1 & 0 & 1+x & 1+x \\ 0 & 1 & 1 & 1 \\ 0 & 0 & -2(1+x) & -(1+x) \\ 0 & 0 & -(1+x) & -2(1+x) \end{vmatrix} \quad (R_3 \to R_3 - (1+x)R_2,\; R_4 \to R_4 - (1+x)R_2)$$
$$= \begin{vmatrix} 1 & 0 & 1+x & 1+x \\ 0 & 1 & 1 & 1 \\ 0 & 0 & -2(1+x) & -(1+x) \\ 0 & 0 & 0 & -\frac 32(1+x) \end{vmatrix} \quad \left(R_4 \to R_4 - \tfrac 12 R_3\right)$$
$$= 1 \times 1 \times (-2(1+x)) \times \left(-\tfrac 32(1+x)\right) = 3(1+x)^2.$$
Hence option (b) is the correct choice.
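As an independent check (ours, not part of the book's solution), the determinant can be evaluated exactly at a few sample values of x and compared against 3(1 + x)²:

```python
from fractions import Fraction

def det(rows):
    """Exact determinant via Gaussian elimination with Fractions."""
    m = [[Fraction(v) for v in r] for r in rows]
    n, sign, d = len(m), 1, Fraction(1)
    for c in range(n):
        piv = next((i for i in range(c, n) if m[i][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign  # a row swap flips the sign
        d *= m[c][c]
        for i in range(c + 1, n):
            f = m[i][c] / m[c][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[c])]
    return sign * d

for x in (-2, 0, 1, 3, 10):
    P = [[1, 0, 1 + x, 1 + x],
         [0, 1, 1, 1],
         [1, 1 + x, 0, 1 + x],
         [1, 1 + x, 1 + x, 0]]
    assert det(P) == 3 * (1 + x)**2
print("det(P) = 3(1+x)^2 confirmed at the sample points")
```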

8. The area of the region $\left\{(x, y) : 0 \le x, y \le 1,\; \frac 34 \le x + y \le \frac 32\right\}$ is
(a) 9/16 (b) 7/16 (c) 13/32 (d) 19/32.
Solution. Let S = {(x, y) : 0 ≤ x, y ≤ 1}, $A = \left\{(x, y) : 0 \le x, y \le 1,\; x + y < \frac 34\right\}$, $B = \left\{(x, y) : 0 \le x, y \le 1,\; \frac 34 \le x + y \le \frac 32\right\}$ and $C = \left\{(x, y) : 0 \le x, y \le 1,\; x + y > \frac 32\right\}$ be sets such that S = A ∪ B ∪ C. Clearly, A, B and C are disjoint sets. The set S forms a square of unit side length, and the sets A and C form triangles (see Figure 1.1). Therefore, Area(S) = 1, Area(A) = (1/2) × (3/4) × (3/4) = 9/32, and Area(C) = (1/2) × (1/2) × (1/2) = 1/8. Then, the required (shaded) area is given by
$$\text{Area}(B) = \text{Area}(S) - [\text{Area}(A) + \text{Area}(C)] = 1 - \left(\frac{9}{32} + \frac 18\right) = \frac{19}{32}.$$
Hence option (d) is the correct choice.
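The geometric answer can be confirmed by a quick Monte Carlo estimate over the unit square; this is an illustrative sketch, and the sample size and seed are arbitrary choices:

```python
import random

random.seed(0)
n = 200_000
# draw (x, y) uniformly in the unit square and count points with 3/4 <= x+y <= 3/2
hits = sum(1 for _ in range(n)
           if 0.75 <= random.random() + random.random() <= 1.5)
estimate = hits / n
print(estimate)  # close to 19/32 = 0.59375
```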
9. Let E, F and G be three events such that the events E and F are mutually exclusive, P(E ∪ F) = 1, P(E ∩ G) = 1/4 and P(G) = 7/12. Then P(F ∩ G) equals
(a) 1/12 (b) 1/4 (c) 5/12 (d) 1/3.
Solution. The events E and F are mutually exclusive, i.e., E ∩ F = ∅. Also,
$$P(G) = P(F \cap G) + P(E \cap G) \quad (\text{since } P(E \cup F) = 1 \text{ and } E \cap F = \emptyset)$$
$$\Rightarrow P(F \cap G) = P(G) - P(E \cap G) = \frac{7}{12} - \frac 14 = \frac 13.$$
Hence option (d) is the correct choice.
10. Let X and Y have the joint probability mass function
$$P(X = x, Y = y) = \frac{1}{3x}, \quad y = 1, 2, \ldots, x;\; x = 1, 2, 3.$$
Then the value of the conditional expectation E(Y | X = 3) is
(a) 1 (b) 2 (c) 1.5 (d) 2.5.
Solution. For x ∈ {1, 2, 3},
$$P(X = x) = \sum_{y=1}^{x} P(X = x, Y = y) = \sum_{y=1}^{x} \frac{1}{3x} = \frac 13.$$
Figure 1.1: Problem 8 — the unit square S, divided by the lines x + y = 3/4 and x + y = 3/2 into the triangle A (below the first line), the shaded region B, and the triangle C (above the second line).
For a fixed x ∈ {1, 2, 3} and for y ∈ {1, 2, . . . , x}, the conditional pmf of Y, given that X = x, is
$$P(Y = y \mid X = x) = \frac{P(X = x, Y = y)}{P(X = x)} = \frac 1x.$$
In particular, P(Y = y | X = 3) = 1/3 for all y = 1, 2, 3. Clearly, the conditional distribution of Y given X = 3 is discrete uniform over the set {1, 2, 3}, and therefore E(Y | X = 3) = (3 + 1)/2 = 2. Hence option (b) is the correct choice.
11. Let $X_1$ and $X_2$ be independent random variables with respective moment generating functions
$$M_1(t) = \left(\frac 34 + \frac 14 e^t\right)^3 \quad \text{and} \quad M_2(t) = e^{2(e^t - 1)}, \quad -\infty < t < \infty.$$
Then the value of $P(X_1 + X_2 = 1)$ is
(a) $\frac{81}{64}e^{-2}$ (b) $\frac{27}{64}e^{-2}$ (c) $\frac{11}{64}e^{-2}$ (d) $\frac{27}{32}e^{-2}$.
Solution. It follows from the uniqueness property of the MGF that $X_1 \sim \text{Bin}(3, \frac 14)$ and $X_2 \sim \text{Poisson}(2)$. Therefore,
$$P(X_1 + X_2 = 1) = P(X_1 = 0, X_2 = 1) + P(X_1 = 1, X_2 = 0)$$
$$= P(X_1 = 0)P(X_2 = 1) + P(X_1 = 1)P(X_2 = 0) \quad (\text{since } X_1 \text{ and } X_2 \text{ are independent})$$
$$= \binom 30 \left(\frac 14\right)^0 \left(\frac 34\right)^3 \frac{e^{-2}2^1}{1!} + \binom 31 \left(\frac 14\right)^1 \left(\frac 34\right)^2 \frac{e^{-2}2^0}{0!} = \frac{81}{64}e^{-2}.$$
Hence option (a) is the correct choice.
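The identified distributions and the final probability can be verified numerically; the helper functions below are ours, written directly from the pmfs of Bin(3, 1/4) and Poisson(2) read off the given MGFs:

```python
from math import comb, exp, factorial

def p_x1(k):
    """pmf of X1 ~ Bin(3, 1/4), from M1(t) = (3/4 + e^t/4)^3."""
    return comb(3, k) * (0.25**k) * (0.75**(3 - k))

def p_x2(k):
    """pmf of X2 ~ Poisson(2), from M2(t) = exp(2(e^t - 1))."""
    return exp(-2) * 2**k / factorial(k)

# P(X1 + X2 = 1) by independence
prob = p_x1(0) * p_x2(1) + p_x1(1) * p_x2(0)
print(abs(prob - (81 / 64) * exp(-2)) < 1e-12)  # True
```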
" Z ∞ #
1 − t n
−1
12. lim n  √ e 2 t 2 dt equals
n→∞ 2 2 Γ n n+ 2n
2
(a) 0.5 (b) 0 (c) 0.0228 (d) 0.1587.
Solution.
" Z #
∞ √ 
1 − 2t n
lim  e t 2 −1 dt = lim P Sn > n + 2n ,
n n √
n→∞ 22Γ 2 n+ 2n n→∞
Chapter 5

Questions and Solutions of IIT JAM (MS) – 2009

5.1 Objective Questions


1. For detecting a disease, a test gives the correct diagnosis with probability 0.99. It is known that 1% of a population suffers from this disease. If a randomly selected individual from this population tests positive, then the probability that the selected individual actually has the disease is
(a) 0.01 (b) 0.05 (c) 0.5 (d) 0.99.
Solution. Let the events Y and D denote, respectively, that the test is positive and that the disease is present. It is given that the test gives the correct diagnosis with probability 0.99. This can be expressed as
$$P(Y \mid D) = 0.99, \quad P(Y^c \mid D) = 0.01, \quad P(Y^c \mid D^c) = 0.99, \quad P(Y \mid D^c) = 0.01.$$
It is also known that 1% of the population suffers from the disease, i.e., $P(D) = 0.01$ and $P(D^c) = 0.99$. Using Bayes' theorem, the required probability is given by
$$P(D \mid Y) = \frac{P(Y \mid D)P(D)}{P(Y \mid D)P(D) + P(Y \mid D^c)P(D^c)} = \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.01 \times 0.99} = 0.5.$$
Hence option (c) is the correct choice.
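The Bayes computation is easy to reproduce; the variable names below are ours, chosen to mirror the events in the solution:

```python
# Bayes' theorem for the diagnostic-test problem
p_d = 0.01          # prevalence P(D)
p_pos_d = 0.99      # P(Y | D), correct diagnosis for the diseased
p_pos_not_d = 0.01  # P(Y | D^c), false-positive rate

# total probability of a positive test
p_pos = p_pos_d * p_d + p_pos_not_d * (1 - p_d)
p_d_pos = p_pos_d * p_d / p_pos
print(p_d_pos)  # 0.5
```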


2. Let X be any random variable with mean µ and variance 9. Then the smallest value of m such that P(|X − µ| < m) ≥ 0.99 is
(a) 90 (b) $\sqrt{90}$ (c) $\sqrt{100/11}$ (d) 30.
Solution. Using Chebyshev's inequality, we have
$$P(|X - \mu| < m) = P\left(|X - \mu| < \frac{m}{3} \times \sqrt 9\right) \ge 1 - \frac{1}{(m/3)^2} = 1 - \frac{9}{m^2}.$$
Since $1 - \frac{9}{m^2}$ is increasing in m ∈ (0, ∞), we should choose the smallest value of m such that $1 - \frac{9}{m^2} \ge 0.99$. On solving the inequality, we get m ≥ 30 or m ≤ −30. But m > 0, and therefore, the desired value of m is 30. Hence option (d) is the correct choice.
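The boundary case of the Chebyshev bound can also be inverted numerically; this small sketch (ours) solves 1 − 9/m² = 0.99 for m:

```python
import math

# smallest m with 1 - 9/m^2 >= 0.99, i.e. m^2 >= 9/0.01 = 900
m = math.sqrt(9 / (1 - 0.99))
print(m)  # approximately 30
```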
3. If a random variable X has the cumulative distribution function
$$F(x) = \begin{cases} 0, & \text{if } x < 0,\\ \frac 13, & \text{if } x = 0,\\ \frac{1+x}{3}, & \text{if } 0 < x < 1,\\ 1, & \text{if } x \ge 1, \end{cases}$$
then E(X) equals
(a) 1/3 (b) 1 (c) 1/6 (d) 1/2.
Solution. It is easy to see that F(·) is neither a step function nor a continuous function on R, and therefore, the random variable X is neither discrete nor continuous. It has a mixed distribution with discrete part given by
$$P(X = 0) = F(0) - F(0-) = \frac 13 - 0 = \frac 13, \qquad P(X = 1) = F(1) - F(1-) = 1 - \frac{1+1}{3} = \frac 13,$$
and continuous part given by
$$f(x) = \frac{d}{dx}F(x) = \frac 13, \quad 0 < x < 1.$$
Then,
$$E(X) = 0 \times \frac 13 + 1 \times \frac 13 + \int_0^1 \frac x3\, dx = \frac 12.$$
Hence option (d) is the correct choice.
4. If $Y = \dfrac{\ln U_1}{\ln U_1 + \ln(1 - U_2)}$, where $U_1$ and $U_2$ are independent U(0, 1) random variables, then the variance of Y equals
(a) 1/12 (b) 1/3 (c) 1/4 (d) 1/16.
Solution. Given that $U_1$ and $U_2$ are independent U(0, 1) random variables, it is straightforward to see that $1 - U_2 \sim U(0, 1)$. Then, it can be shown that $-\ln U_1$ and $-\ln(1 - U_2)$ are iid Exp(1), or G(1, 1), random variables. It is well known that if X and Y are independent with X ∼ G(α, λ) and Y ∼ G(β, λ), then $\frac{X}{X+Y} \sim \text{Beta}(\alpha, \beta)$. Using this, we get
$$Y = \frac{-\ln U_1}{-\ln U_1 - \ln(1 - U_2)} \sim \text{Beta}(1, 1).$$
The variance of Beta(α, β) is given by $\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$, and therefore,
$$\mathrm{Var}(Y) = \frac{1 \times 1}{(1+1)^2(1+1+1)} = \frac{1}{12}.$$
Hence option (a) is the correct choice.
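The Beta(1, 1) conclusion can be checked by simulation; the sketch below (ours) generates Y directly from its definition, with an arbitrary seed and sample size:

```python
import math
import random

random.seed(1)
n = 200_000
samples = []
for _ in range(n):
    a = -math.log(random.random())      # -ln U1 ~ Exp(1)
    b = -math.log(1 - random.random())  # -ln(1 - U2) ~ Exp(1)
    samples.append(a / (a + b))         # Y = a / (a + b)

mean = sum(samples) / n
var = sum((s - mean)**2 for s in samples) / n
print(var)  # close to 1/12 ~ 0.0833
```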

5. If X is a Bin(30, 0.5) random variable, then
(a) P(X > 15) = 0.5 (b) P(X < 15) = 0.5 (c) P(X > 15) > 0.5 (d) P(X < 15) < 0.5.
Solution. Clearly, the random variable X can take 31 values, viz., 0, 1, . . . , 30. Since the probability of success is 0.5, it follows that the pmf of X is symmetric about the point 15. Now, we have
$$P(X \le 14) + P(X = 15) + P(X \ge 16) = 1$$
$$\Rightarrow 2P(X \le 14) = 1 - P(X = 15) \quad (\text{using the symmetry of the pmf})$$
$$\Rightarrow 2P(X \le 14) < 1 \quad (\text{since } P(X = 15) > 0)$$
$$\Rightarrow P(X < 15) = P(X \le 14) < 0.5.$$
Hence option (d) is the correct choice.
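The symmetry argument can be confirmed by exact computation of the binomial probabilities (our verification sketch, using rational arithmetic):

```python
from fractions import Fraction
from math import comb

# exact pmf of X ~ Bin(30, 1/2)
pmf = {k: Fraction(comb(30, k), 2**30) for k in range(31)}

p_below = sum(pmf[k] for k in range(15))      # P(X < 15)
p_above = sum(pmf[k] for k in range(16, 31))  # P(X > 15)
print(p_below == p_above, p_below < Fraction(1, 2))  # True True
```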

6. If the joint probability density function of (X, Y) is given by
$$f(x, y) = \frac 1y e^{-\frac xy}, \quad x > 0,\; 0 < y < 1,$$
then
(a) E(X) = 0.5 and E(Y) = 0.5 (b) E(X) = 1.0 and E(Y) = 0.5
(c) E(X) = 0.5 and E(Y) = 1.0 (d) E(X) = 1.0 and E(Y) = 1.0.
Solution. The marginal pdf of X is given by
$$f_X(x) = \begin{cases} \int_0^1 \frac 1y e^{-\frac xy}\, dy, & x > 0,\\ 0, & \text{otherwise}, \end{cases}$$
and therefore,
$$E(X) = \int_{-\infty}^{\infty} x f_X(x)\, dx = \int_0^\infty x \left(\int_0^1 \frac 1y e^{-\frac xy}\, dy\right) dx = \int_0^1 \left(\int_0^\infty \frac xy e^{-\frac xy}\, dx\right) dy \quad (\text{changing the order of integration})$$
$$= \int_0^1 y\, dy \quad (\text{using the formula for the mean of the exponential distribution}) = \frac 12.$$
Now, the marginal pdf of Y is given by
$$f_Y(y) = \begin{cases} \int_0^\infty \frac 1y e^{-\frac xy}\, dx, & 0 < y < 1,\\ 0, & \text{otherwise}, \end{cases} = \begin{cases} 1, & 0 < y < 1,\\ 0, & \text{otherwise}. \end{cases}$$
Clearly, Y ∼ U(0, 1), and therefore, E(Y) = 1/2. Hence option (a) is the correct choice.
7. If X is an F(m, n) random variable, where m > 2, n > 2, then $E(X)\,E\!\left(\frac 1X\right)$ equals
(a) $\frac{n(n-2)}{m(m-2)}$ (b) $\frac{m(m-2)}{n(n-2)}$ (c) $\frac{mn}{(m-2)(n-2)}$ (d) $\frac{m(n-2)}{n(m-2)}$.
Solution. We know that if X ∼ F(m, n), m > 2, n > 2, then $E(X) = \frac{n}{n-2}$ and $\frac 1X \sim F(n, m)$. Therefore,
$$E(X)\,E\!\left(\frac 1X\right) = \frac{n}{n-2} \times \frac{m}{m-2} = \frac{mn}{(m-2)(n-2)}.$$
Hence option (c) is the correct choice.

8. Let X be a random variable having probability mass function
$$f(x) = \begin{cases} \frac{2 + 4\alpha_1 + \alpha_2}{6}, & \text{if } x = 1,\\ \frac{2 - 2\alpha_1 + \alpha_2}{6}, & \text{if } x = 2,\\ \frac{1 - \alpha_1 - \alpha_2}{3}, & \text{if } x = 3, \end{cases}$$
where $\alpha_1 \ge 0$ and $\alpha_2 \ge 0$ are unknown parameters such that $\alpha_1 + \alpha_2 \le 1$. For testing the null hypothesis $H_0 : \alpha_1 + \alpha_2 = 1$ against the alternative hypothesis $H_1 : \alpha_1 = \alpha_2 = 0$, suppose that the critical region is C = {2, 3}. Then, this critical region has
(a) size = 1/2 and power = 2/3 (b) size = 1/4 and power = 2/3
(c) size = 1/2 and power = 1/4 (d) size = 2/3 and power = 1/3.
Solution. It is easy to verify that the pmfs of X under $H_0 : \alpha_1 + \alpha_2 = 1$ and under $H_1 : \alpha_1 = \alpha_2 = 0$ are given by
$$f_{H_0}(x) = \begin{cases} \frac{1 + \alpha_1}{2}, & \text{if } x = 1,\\ \frac{1 - \alpha_1}{2}, & \text{if } x = 2,\\ 0, & \text{otherwise}, \end{cases} \qquad f_{H_1}(x) = \begin{cases} \frac 13, & \text{if } x \in \{1, 2, 3\},\\ 0, & \text{otherwise}, \end{cases}$$
respectively. Recall that the size of a critical region is the supremum of the probability of type-I error, where the supremum is taken over all values of the parameter(s) in $H_0$. Therefore, for the given problem,
$$\text{size} = \sup_{\alpha_1 + \alpha_2 = 1} P_{H_0}(X \in C) = \sup_{\alpha_1 + \alpha_2 = 1} \left(f_{H_0}(2) + f_{H_0}(3)\right) = \sup_{\alpha_1 + \alpha_2 = 1} \left(\frac{1 - \alpha_1}{2} + 0\right) = \sup_{\alpha_1 + \alpha_2 = 1} \frac{\alpha_2}{2} = \frac 12,$$
where the last equality follows from the fact that the maximum possible value of $\alpha_2$, under $H_0$, is 1. Now, the power of the critical region C is given by
$$P_{H_1}(X \in C) = P_{H_1}(X \in \{2, 3\}) = f_{H_1}(2) + f_{H_1}(3) = \frac 13 + \frac 13 = \frac 23.$$
Hence option (a) is the correct choice.

9. The observed value of the mean of a random sample from a N(θ, 1) distribution is 2.3. If the parameter space is Θ = {0, 1, 2, 3}, then the maximum likelihood estimate of θ is
(a) 1 (b) 2 (c) 2.3 (d) 3.
Solution. Let the observed sample be $x = (x_1, x_2, \ldots, x_n)$. Then the likelihood function is given by
$$L(\theta \mid x) = \begin{cases} \left(\frac{1}{\sqrt{2\pi}}\right)^n e^{-\frac 12 \sum_{i=1}^n (x_i - \theta)^2}, & \text{if } \theta \in \{0, 1, 2, 3\},\\ 0, & \text{otherwise}. \end{cases}$$
Clearly, to maximize L(θ | x), we must select θ ∈ {0, 1, 2, 3} to minimize $\sum_{i=1}^n (x_i - \theta)^2$. Now,
$$\sum_{i=1}^n (x_i - \theta)^2 = \sum_{i=1}^n (x_i - \bar x + \bar x - \theta)^2 = \sum_{i=1}^n (x_i - \bar x)^2 + n(\bar x - \theta)^2 + 2(\bar x - \theta)\sum_{i=1}^n (x_i - \bar x)$$
$$= \sum_{i=1}^n (x_i - \bar x)^2 + n(\bar x - \theta)^2 + 2(\bar x - \theta)(n\bar x - n\bar x) = \sum_{i=1}^n (x_i - 2.3)^2 + n(2.3 - \theta)^2.$$
It is easy to verify that (2.3 − θ)², θ ∈ {0, 1, 2, 3}, is minimized at θ = 2, and so is $\sum_{i=1}^n (x_i - \theta)^2$. Thus, $\hat\theta_{\text{MLE}} = 2$. Hence option (b) is the correct choice.
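Since Θ is finite, the MLE can also be found by direct search; the one-line sketch below (ours) minimizes (2.3 − θ)² over Θ, which by the identity above is equivalent to maximizing the likelihood:

```python
# MLE over the finite parameter space Theta = {0, 1, 2, 3}
# with observed sample mean 2.3: minimize (2.3 - theta)^2
theta_hat = min([0, 1, 2, 3], key=lambda t: (2.3 - t)**2)
print(theta_hat)  # 2
```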

10. The series
$$\sum_{n=1}^{\infty} \frac{\sqrt n}{x^n \sqrt{n^2 + 1}}, \quad x > 0,$$
(a) converges for x > 1 and diverges for x ≤ 1 (b) converges for x ≤ 1 and diverges for x > 1
(c) converges for all x > 0 (d) diverges for all x > 0.
Chapter 14

Questions and Solutions of IIT JAM (MS) – 2018

14.1 Multiple Choice Questions


1. Let $\{a_n\}_{n \ge 1}$ be a sequence of real numbers such that $a_1 = 2$ and, for n ≥ 1, $a_{n+1} = \frac{2a_n + 1}{a_n + 1}$. Then
(a) 1.5 ≤ an ≤ 2, for all natural numbers n ≥ 1.
(b) there exists a natural number n ≥ 1 such that an > 2.
(c) there exists a natural number n ≥ 1 such that an < 1.5.
(d) there exists a natural number n ≥ 1 such that $a_n = \frac{1 + \sqrt 5}{2}$.
Solution. First we show that an ≥ 1.5 for all natural numbers n ≥ 1. Clearly, a1 = 2 > 1.5. For any fixed k ∈ N, assume that ak ≥ 1.5. Then
$$a_{k+1} = \frac{2a_k + 1}{a_k + 1} = 2 - \frac{1}{a_k + 1} \ge 2 - \frac{1}{1.5 + 1} = 1.6 > 1.5.$$
By mathematical induction, we conclude that an ≥ 1.5 for all n ∈ N. Next we show that an ≤ 2 for all natural numbers n ≥ 1. It is given that a1 = 2. For any fixed k ∈ N, assume that ak ≤ 2. Then
$$a_{k+1} = 2 - \frac{1}{a_k + 1} \le 2 - \frac{1}{2 + 1} = \frac 53 < 2.$$
By mathematical induction, we conclude that an ≤ 2 for all n ∈ N. Thus, 1.5 ≤ an ≤ 2 for all n ∈ N, so option (a) is true and options (b) and (c) are false. Moreover, a1 = 2 is rational and the recursion maps rationals to rationals, so (by induction) every an is rational, while $\frac{1 + \sqrt 5}{2}$ is irrational; hence option (d) is also false. Hence option (a) is the correct choice.
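The induction bounds, and the limit of the sequence (which is the fixed point $(1+\sqrt 5)/2$ of the recursion, obtained by solving x = (2x + 1)/(x + 1)), can be checked by iterating; this is an illustrative sketch, and the iteration count is an arbitrary choice:

```python
a = 2.0
for _ in range(50):
    assert 1.5 <= a <= 2          # the bounds proved by induction
    a = (2 * a + 1) / (a + 1)     # the recursion a_{n+1} = (2a_n+1)/(a_n+1)

golden = (1 + 5**0.5) / 2         # fixed point: x^2 - x - 1 = 0
print(abs(a - golden) < 1e-9)     # True: the sequence approaches (1+sqrt(5))/2
```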
2. The value of $\lim\limits_{n\to\infty} \left(1 + \frac 2n\right)^{n^2} e^{-2n}$ is
(a) $e^{-2}$ (b) $e^{-1}$ (c) $e$ (d) $e^{2}$.
Solution. We have
$$\lim_{n\to\infty} \left(1 + \frac 2n\right)^{n^2} e^{-2n} = \lim_{n\to\infty} \exp\left\{n^2 \ln\left(1 + \frac 2n\right) - 2n\right\}$$
$$= \lim_{n\to\infty} \exp\left\{n^2\left(\frac 2n - \left(\frac 2n\right)^2 \frac 12 + \left(\frac 2n\right)^3 \frac 13 - \cdots\right) - 2n\right\} \quad \left(\text{since } \ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots,\; -1 < x < 1\right)$$
$$= \lim_{n\to\infty} \exp\left\{-2 + \frac{2^3}{3n} - \cdots\right\} = e^{-2}.$$
Hence option (a) is the correct choice.

3. Let {an }n≥1 and {bn }n≥1 be two convergent sequences of real numbers. For n ≥ 1, define un = max{an , bn }
and vn = min{an , bn }. Then

(a) neither {un }n≥1 nor {vn }n≥1 converges.


(b) {un }n≥1 converges but {vn }n≥1 does not converge.
(c) {un }n≥1 does not converge but {vn }n≥1 converges.
(d) both {un }n≥1 and {vn }n≥1 converge.

Solution. Let lim_{n→∞} a_n = a and lim_{n→∞} b_n = b. Without loss of generality, assume that a ≤ b. First suppose a = b. Given ε > 0, convergence of the two sequences gives K ∈ N such that |a_n − a| < ε and |b_n − a| < ε, ∀n ≥ K. Since u_n and v_n each equal either a_n or b_n, it follows that |u_n − a| < ε and |v_n − a| < ε, ∀n ≥ K, and hence both {u_n}_{n≥1} and {v_n}_{n≥1} converge to a.
Now, assume that a < b. Let ε = (b − a)/2 > 0. Convergence of {a_n}_{n≥1} and {b_n}_{n≥1} to a and b, respectively, implies that ∃ K_1, K_2 ∈ N such that |a_n − a| < ε and |b_n − b| < ε, ∀n ≥ K* = max(K_1, K_2). Equivalently, we have

    a − ε < a_n < a + ε  and  b − ε < b_n < b + ε,  ∀n ≥ K*
    ⇒ (3a − b)/2 < a_n < (a + b)/2 < b_n < (3b − a)/2,  ∀n ≥ K*
    ⇒ u_n = b_n and v_n = a_n,  ∀n ≥ K*
    ⇒ both {u_n}_{n≥1} and {v_n}_{n≥1} converge (to b and a, respectively).

Hence option (d) is the correct choice.
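A short numerical illustration (not part of the original solution) of the a = b case: the two sequences below share the limit 2 but keep overtaking each other, so u_n and v_n switch between a_n and b_n infinitely often, yet both still converge.

```python
# a_n = 2 + (-1)^n / n oscillates around its limit 2, while b_n = 2
# is constant, so max and min keep switching between the two sequences.
N = 2000
a_seq = [2 + (-1) ** n / n for n in range(1, N + 1)]
b_seq = [2.0] * N
u_seq = [max(x, y) for x, y in zip(a_seq, b_seq)]  # u_n = max(a_n, b_n)
v_seq = [min(x, y) for x, y in zip(a_seq, b_seq)]  # v_n = min(a_n, b_n)

# Both u_n and v_n are close to the common limit 2 for large n.
assert abs(u_seq[-1] - 2) < 1e-2
assert abs(v_seq[-1] - 2) < 1e-2
```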


4. Let M be the 2 × 2 matrix

    M = [ 1/4  3/4 ]
        [ 3/5  2/5 ].

If I is the 2 × 2 identity matrix and 0 is the 2 × 2 zero matrix, then
(a) 20M² − 13M + 7I = 0   (b) 20M² − 13M − 7I = 0
(c) 20M² + 13M + 7I = 0   (d) 20M² + 13M − 7I = 0.
Solution. The trace of M is 1/4 + 2/5 = 13/20 and the determinant of M is (1/4)(2/5) − (3/4)(3/5) = −7/20. Then, the characteristic equation of M is given by

    λ² − trace(M)λ + det(M) = 0  ⇒  λ² − (13/20)λ − 7/20 = 0  ⇒  20λ² − 13λ − 7 = 0.

By the Cayley–Hamilton theorem, we have 20M² − 13M − 7I = 0. Hence option (b) is the correct choice.
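The Cayley–Hamilton identity can be confirmed directly with NumPy (a sanity check, not part of the original solution):

```python
import numpy as np

M = np.array([[1 / 4, 3 / 4],
              [3 / 5, 2 / 5]])
I = np.eye(2)

# trace and determinant match the hand computation above
assert abs(np.trace(M) - 13 / 20) < 1e-12
assert abs(np.linalg.det(M) + 7 / 20) < 1e-12

# Cayley-Hamilton: 20*M^2 - 13*M - 7*I should be the zero matrix
R = 20 * (M @ M) - 13 * M - 7 * I
assert np.allclose(R, np.zeros((2, 2)))
```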

5. Let X be a random variable with the probability density function

    f(x) = { (α^p/Γ(p)) e^{−αx} x^{p−1},   if x ≥ 0,
           { 0,                             otherwise,

where α > 0 and p > 0. If E(X) = 20 and Var(X) = 10, then (α, p) is
(a) (2, 20)   (b) (2, 40)   (c) (4, 20)   (d) (4, 40).
Solution. Clearly, X has a gamma distribution with mean p/α and variance p/α². Therefore, p/α = 20 and p/α² = 10. On solving, we obtain α = 2 and p = 40. Hence option (b) is the correct choice.
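A Monte Carlo sanity check (not part of the original solution): NumPy parameterizes the gamma distribution by shape p and scale 1/α, so with p = 40 and α = 2 the sample mean and variance should be close to 20 and 10.

```python
import numpy as np

rng = np.random.default_rng(0)
# shape = p = 40, scale = 1/alpha = 1/2
x = rng.gamma(shape=40, scale=0.5, size=10 ** 6)

assert abs(x.mean() - 20) < 0.05  # E(X) = p/alpha = 20
assert abs(x.var() - 10) < 0.1    # Var(X) = p/alpha^2 = 10
```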

6. Let X be a random variable with the distribution function

    F(x) = { 0,                   if x < 0,
           { 1/4 + (4x − x²)/8,   if 0 ≤ x < 2,
           { 1,                   if x ≥ 2.

Then

    P(X = 0) + P(X = 1.5) + P(X = 2) + P(X ≥ 1)

equals
(a) 3/8   (b) 5/8   (c) 7/8   (d) 1.

Solution. We have

    P(X = 0) = F(0) − F(0−) = F(0) − lim_{h→0+} F(0 − h) = 1/4 − 0 = 1/4.

Since F is continuous at x = 1.5, we get P(X = 1.5) = 0. We also have

    P(X = 2) = F(2) − F(2−)
             = F(2) − lim_{h→0+} F(2 − h)
             = 1 − lim_{h→0+} [ 1/4 + (4(2 − h) − (2 − h)²)/8 ]
             = 1 − ( 1/4 + (8 − 4)/8 )
             = 1/4

and

    P(X ≥ 1) = 1 − P(X < 1)
             = 1 − F(1−)
             = 1 − lim_{h→0+} F(1 − h)
             = 1 − lim_{h→0+} [ 1/4 + (4(1 − h) − (1 − h)²)/8 ]
             = 1 − ( 1/4 + (4 − 1)/8 )
             = 3/8.

Then, we have

    P(X = 0) + P(X = 1.5) + P(X = 2) + P(X ≥ 1) = 1/4 + 0 + 1/4 + 3/8 = 7/8.

Hence option (c) is the correct choice.
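The jump sizes of F can be checked numerically (a sketch, not part of the original solution), approximating each left limit F(x−) by F(x − h) for a small h > 0:

```python
def F(x):
    """Distribution function from the problem statement."""
    if x < 0:
        return 0.0
    if x < 2:
        return 0.25 + (4 * x - x * x) / 8
    return 1.0

h = 1e-9  # small step for approximating left limits
p0 = F(0) - F(0 - h)       # P(X = 0)   = 1/4  (jump at 0)
p15 = F(1.5) - F(1.5 - h)  # P(X = 1.5) = 0    (F continuous there)
p2 = F(2) - F(2 - h)       # P(X = 2)   = 1/4  (jump at 2)
pge1 = 1 - F(1 - h)        # P(X >= 1)  = 3/8

total = p0 + p15 + p2 + pge1
assert abs(total - 7 / 8) < 1e-6
```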
7. Let X1, X2 and X3 be i.i.d. U(0, 1) random variables. Then E[(X1 + X2)/(X1 + X2 + X3)] equals
(a) 1/3   (b) 1/2   (c) 2/3   (d) 3/4.
Solution. We have

    (X1 + X2 + X3)/(X1 + X2 + X3) = 1
    ⇒ E[(X1 + X2 + X3)/(X1 + X2 + X3)] = 1
    ⇒ E[X1/(X1 + X2 + X3)] + E[X2/(X1 + X2 + X3)] + E[X3/(X1 + X2 + X3)] = 1
    ⇒ 3E[X1/(X1 + X2 + X3)] = 1   (since X1, X2 and X3 are i.i.d. random variables)
    ⇒ E[X1/(X1 + X2 + X3)] = 1/3,

and therefore,

    E[(X1 + X2)/(X1 + X2 + X3)] = 2E[X1/(X1 + X2 + X3)] = 2/3.

Hence option (c) is the correct choice.
Remark: Note that the specific distribution of the random variables, i.e., U(0, 1), has not been used anywhere beyond the i.i.d. assumption; this information is redundant.
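The symmetry argument can be illustrated with a quick Monte Carlo simulation (not from the original text); since only the i.i.d. structure matters, an exponential sample gives the same value 2/3 as the uniform one.

```python
import numpy as np

rng = np.random.default_rng(42)

# i.i.d. U(0, 1) triples: E[(X1 + X2)/(X1 + X2 + X3)] should be 2/3
x = rng.uniform(size=(10 ** 6, 3))
est = np.mean((x[:, 0] + x[:, 1]) / x.sum(axis=1))
assert abs(est - 2 / 3) < 0.005

# Same symmetry with i.i.d. exponential triples: still 2/3
y = rng.exponential(size=(10 ** 6, 3))
est2 = np.mean((y[:, 0] + y[:, 1]) / y.sum(axis=1))
assert abs(est2 - 2 / 3) < 0.005
```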

8. Let x_1 = 0, x_2 = 1, x_3 = 2, x_4 = 3 and x_5 = 0 be the observed values of a random sample of size 5 from a discrete distribution with the probability mass function

    f(x; θ) = P(X = x) = { θ/3,         if x = 0,
                         { 2θ/3,        if x = 1,
                         { (1 − θ)/2,   if x = 2, 3,

where θ ∈ [0, 1] is the unknown parameter. Then the maximum likelihood estimate of θ is
(a) 2/5   (b) 3/5   (c) 5/7   (d) 5/9.
Solution. The likelihood function is given by

    L(θ) = P(X = 0) P(X = 1) P(X = 2) P(X = 3) P(X = 0)
         = (θ/3)(2θ/3)((1 − θ)/2)((1 − θ)/2)(θ/3)
         = θ³(1 − θ)²/54,   θ ∈ [0, 1].

The log-likelihood function is given by

    l(θ) = log L(θ) = 3 log θ + 2 log(1 − θ) − log 54,   θ ∈ (0, 1).

Then,

    l′(θ) = 3/θ − 2/(1 − θ) = 0  ⇒  θ = 3/5

and

    l″(θ) = −3/θ² − 2/(1 − θ)² < 0,  ∀θ ∈ (0, 1),

which implies that the maximum likelihood estimate of θ is 3/5. Hence option (b) is the correct choice.
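A numerical check (not part of the original solution): maximize the log-likelihood 3 log θ + 2 log(1 − θ) over a fine grid of θ values; the maximizer should be (close to) 3/5.

```python
import numpy as np

# Grid search over theta in (0, 1); the additive constant -log 54
# does not affect the location of the maximum.
theta = np.linspace(0.001, 0.999, 999)
loglik = 3 * np.log(theta) + 2 * np.log(1 - theta)
mle = theta[np.argmax(loglik)]
assert abs(mle - 3 / 5) < 1e-3
```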

9. Consider four coins labelled as 1, 2, 3 and 4. Suppose that the probability of obtaining a ‘head’ in a single toss of the ith coin is i/4, i = 1, 2, 3, 4. A coin is chosen uniformly at random and flipped. Given that the flip resulted in a ‘head’, the conditional probability that the coin was labelled either 1 or 2 equals
(a) 1/10   (b) 2/10   (c) 3/10   (d) 4/10.
Solution. Let C_i denote the event that the ith coin is chosen. Then P(C_i) = 1/4, i = 1, . . . , 4. Further, let E be the event that the flip resulted in a ‘head’. Then P(E|C_i) = i/4, i = 1, 2, 3, 4. The required probability is given by

    P(C_1 ∪ C_2 | E) = P(C_1|E) + P(C_2|E)   (since the C_i's are mutually exclusive)
                     = [P(E|C_1)P(C_1) + P(E|C_2)P(C_2)] / Σ_{i=1}^4 P(E|C_i)P(C_i)
                     = [(1/4)(1/4) + (2/4)(1/4)] / Σ_{i=1}^4 (i/4)(1/4)
                     = (1 + 2)/Σ_{i=1}^4 i
                     = 3/10.

Hence option (c) is the correct choice.
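Bayes' rule above reduces to simple arithmetic, which the following Python sketch (not part of the original solution) reproduces:

```python
# Prior: each coin chosen with probability 1/4.
# Likelihood of a head for coin i: i/4.
priors = [1 / 4] * 4
heads = [i / 4 for i in (1, 2, 3, 4)]

joint = [p * l for p, l in zip(priors, heads)]  # P(C_i and head)
p_head = sum(joint)                             # total probability of a head
posterior_12 = (joint[0] + joint[1]) / p_head   # P(C_1 or C_2 | head)

assert abs(posterior_12 - 3 / 10) < 1e-12
```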

10. Consider the linear regression model y_i = β_0 + β_1 x_i + ε_i; i = 1, 2, . . . , n, where the ε_i's are i.i.d. standard normal random variables. Given that

    (1/n) Σ_{i=1}^n x_i = 3.2,   (1/n) Σ_{i=1}^n y_i = 4.2,   (1/n) Σ_{j=1}^n ( x_j − (1/n) Σ_{i=1}^n x_i )² = 1.5
