
Exercises in Introduction to Mathematical Statistics (Ch. 5)

Tomoki Okuno

September 14, 2022

Note
• Not all solutions are provided: exercises that are too simple or less important to me are skipped.
• Text in red consists of notes to myself; please ignore it.

5 Consistency and Limiting Distributions


5.1 Convergence in Probability
5.1.1. Let {an} be a sequence of real numbers; hence we can also regard {an} as a sequence of constant (degenerate) random variables. Let a be a real number. Show that an → a is equivalent to an →^P a.
Solution.

an → a ⇔ ∀ϵ > 0, ∃N ∈ N such that |an − a| ≤ ϵ for all n > N
       ⇔ P(|an − a| ≤ ϵ) → 1 as n → ∞
       ⇔ P(|an − a| > ϵ) → 0 as n → ∞
       ⇔ an →^P a.

5.1.2. Let the random variable Yn have a distribution that is Binomial(n, p).
(a) Prove that Yn /n converges in probability to p. This result is one form of the weak law of large numbers.
Solution.
Let X1, ..., Xn ∼ iid Bernoulli(p) with µ = p and σ² = p(1 − p). Since Yn = Σ_{i=1}^n Xi,

Yn/n = Σ_{i=1}^n Xi/n = X̄n →^P p by the WLLN.

(b) Prove that 1 − Yn /n converges in probability to 1 − p.


Solution. Let g(x) = 1 − x, which is continuous everywhere. Then 1 − Yn/n = g(Yn/n) →^P g(p) = 1 − p.
(c) Prove that (Yn/n)(1 − Yn/n) converges in probability to p(1 − p).

Solution. Let g(x) = x(1 − x), which is continuous everywhere. Then (Yn/n)(1 − Yn/n) = g(Yn/n) →^P g(p) = p(1 − p).
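The convergence in (a)–(c) is easy to check numerically. Below is a small Monte Carlo sketch of mine, not part of the text (the text's own snippets use R; this uses Python with numpy, and the choices of p, sample sizes, and tolerance are illustrative):

```python
import numpy as np

# Check that the proportion Y_n/n concentrates around p as n grows.
rng = np.random.default_rng(0)
p = 0.3
misses = []
for n in (10, 100, 10_000):
    y = rng.binomial(n, p, size=5_000)                 # 5,000 replicates of Y_n
    misses.append(np.mean(np.abs(y / n - p) > 0.05))   # estimated P(|Y_n/n - p| > 0.05)
print(misses)  # the estimated probabilities shrink toward 0
```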
5.1.3. Let Wn denote a random variable with mean µ and variance b/n^p, where p > 0, µ, and b are constants (not functions of n). Prove that Wn converges in probability to µ.
Solution.

By Chebyshev’s inequality, for every ϵ > 0,

P(|Wn − µ| ≥ ϵ) ≤ Var(Wn)/ϵ² = b/(ϵ²n^p) → 0 as n → ∞.

5.1.4. Derive the cdf given in expression (5.1.1).


Solution.
The cdf of X is FX(x) = 0 for x ≤ 0; x/θ for 0 < x ≤ θ; and 1 for x > θ. Hence, with Yn = max{X1, ..., Xn},

FYn(t) = P(Yn ≤ t) = P(Xi ≤ t, i = 1, ..., n) = [FX(t)]^n =
  { 0          t ≤ 0
  { (t/θ)^n    0 < t ≤ θ
  { 1          t > θ.

5.1.7. Let X1 , ..., Xn be iid random variables with common pdf


f(x) = { e^{−(x−θ)}   x > θ, −∞ < θ < ∞
       { 0            elsewhere.

This pdf is called the shifted exponential. Let Yn = min{X1 , ..., Xn }. Prove that Yn → θ in probability
by first obtaining the cdf of Yn .
Solution.

P(Yn ≥ y) = P(min{X1, ..., Xn} ≥ y)
          = P(Xi ≥ y, i = 1, ..., n)
          = [P(X1 ≥ y)]^n    since X1, ..., Xn are iid
          = { 1                                          y ≤ θ
            { [∫_y^∞ e^{−(t−θ)} dt]^n = e^{−n(y−θ)}      y > θ.

Hence, the cdf of Yn is


FYn(y) = 1 − P(Yn ≥ y) = { 0                  y ≤ θ
                          { 1 − e^{−n(y−θ)}    y > θ.

Let ϵ > 0 be given.

P(|Yn − θ| ≤ ϵ) = P(−ϵ < Yn − θ < ϵ)    (equality holds since Yn is continuous)
                = P(θ − ϵ < Yn < θ + ϵ)
                = FYn(θ + ϵ) − FYn(θ − ϵ)
                = (1 − e^{−nϵ}) − 0
                → 1 as n → ∞,

which means Yn →^P θ.
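A quick simulation (my own Python/numpy sketch, not from the text; the sample sizes and tolerance are arbitrary) agrees with this:

```python
import numpy as np

# Y_n = min of n shifted-exponential draws should pile up at theta as n grows.
rng = np.random.default_rng(1)
theta = 2.0
tails = []
for n in (10, 1_000):
    x = theta + rng.exponential(size=(10_000, n))   # 10,000 replicates of the sample
    yn = x.min(axis=1)                              # Y_n for each replicate
    tails.append(np.mean(np.abs(yn - theta) > 0.05))
print(tails)  # estimated P(|Y_n - theta| > 0.05); theory gives e^{-0.05 n}
```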
5.1.8. Using the assumptions behind the confidence interval given in expression (4.2.9), show that
√(S1²/n1 + S2²/n2) / √(σ1²/n1 + σ2²/n2) →^P 1.

Solution.
Let

g(x, y) = √(x/n1 + y/n2) / √(σ1²/n1 + σ2²/n2).

Since S1² →^P σ1², S2² →^P σ2², and g(x, y) is continuous at (σ1², σ2²),

√(S1²/n1 + S2²/n2) / √(σ1²/n1 + σ2²/n2) = g(S1², S2²) →^P g(σ1², σ2²) = 1.

5.1.9. For Exercise 5.1.7, obtain the mean of Yn . Is Yn an unbiased estimator of θ? Obtain an unbiased
estimator of θ based on Yn .
Solution.
First, we obtain the pdf, fYn (y):
fYn(y) = F′Yn(y) = { 0                y ≤ θ
                    { n e^{−n(y−θ)}    y > θ.

Hence, the mean of Yn is


E(Yn) = ∫_θ^∞ y · n e^{−n(y−θ)} dy
      = ∫_0^∞ (t/n + θ) e^{−t} dt    (substituting t = n(y − θ))
      = 1/n + θ,    since ∫_0^∞ t e^{−t} dt = ∫_0^∞ e^{−t} dt = 1.

Hence, Yn is biased for θ, but Yn − 1/n is unbiased since E(Yn − 1/n) = E(Yn) − 1/n = θ.
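The bias 1/n is easy to see in simulation (my Python/numpy sketch, not from the text; θ and n are arbitrary choices):

```python
import numpy as np

# E(Y_n) = theta + 1/n, so Y_n - 1/n should be unbiased for theta.
rng = np.random.default_rng(2)
theta, n = 2.0, 5
yn = (theta + rng.exponential(size=(200_000, n))).min(axis=1)
print(yn.mean())            # close to theta + 1/n = 2.2
print((yn - 1 / n).mean())  # close to theta = 2.0
```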

5.2 Convergence in Distribution


Simple example that illustrates the difference between the two modes of convergence:
Let X be a continuous random variable with a pdf fX(x) that is symmetric about 0. Then it is easy to show that the pdf of −X is also fX(x); thus X and −X have the same distribution. Define the sequence of random variables Xn by

Xn = { X     if n is odd
     { −X    if n is even.

Clearly FXn = FX for all n, so Xn →^D X. On the other hand, Xn does not get close to X as n grows; in particular, Xn does not converge to X in probability.
5.2.1. Let X̄n denote the mean of a random sample of size n from a distribution that is N(µ, σ²). Find the limiting distribution of X̄n.
Solution. Since X̄n ∼ N(µ, σ²/n), Var(X̄n) → 0 as n → ∞, so the limiting distribution of X̄n is degenerate at µ.
5.2.2. Let Y1 denote the minimum of a random sample of size n from a distribution that has pdf f (x) =
e−(x−θ) , θ < x < ∞, zero elsewhere. Let Zn = n(Y1 − θ). Investigate the limiting distribution of Zn .
Solution.
By Exercise 5.1.7,
FY1(y) = { 0                  y ≤ θ
         { 1 − e^{−n(y−θ)}    y > θ.

Thus,

FZn(z) = P(Zn ≤ z) = P(n(Y1 − θ) ≤ z)
       = P(Y1 ≤ z/n + θ)
       = { 0            z ≤ 0
         { 1 − e^{−z}    z > 0,

which holds for every n. Hence fZn(z) = e^{−z}, z > 0, i.e., Zn →^D Γ(1, 1).
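In fact Zn is exactly standard exponential for every n, which a simulation confirms (my Python/numpy sketch, not from the text; θ and n are arbitrary):

```python
import numpy as np

# Z_n = n(Y_1 - theta) should be Exp(1) = Gamma(1, 1) for any n.
rng = np.random.default_rng(3)
theta, n = 1.5, 40
y1 = (theta + rng.exponential(size=(100_000, n))).min(axis=1)
zn = n * (y1 - theta)
# Exp(1) has mean 1 and P(Z <= 1) = 1 - e^{-1}, about 0.632.
print(zn.mean(), np.mean(zn <= 1.0))
```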
5.2.3. Let Yn denote the maximum of a random sample of size n from a distribution of the continuous type
that has cdf F (x) and pdf f (x) = F ′ (x). Find the limiting distribution of Zn = n[1 − F (Yn )].
Solution.

FYn(y) = P(Yn ≤ y) = [P(X ≤ y)]^n = [F(y)]^n.

Hence

FZn(z) = P(n[1 − F(Yn)] ≤ z)
       = P[F(Yn) ≥ 1 − z/n]
       = P[Yn ≥ F^{−1}(1 − z/n)]    since F is nondecreasing
       = 1 − FYn{F^{−1}(1 − z/n)}
       = 1 − [F{F^{−1}(1 − z/n)}]^n
       = 1 − (1 − z/n)^n → 1 − e^{−z} as n → ∞,

which means that Zn →^D Γ(1, 1).
5.2.4. Let Y2 denote the second smallest item of a random sample of size n from a distribution of the
continuous type that has cdf F (x) and pdf f (x) = F ′ (x). Find the limiting distribution of Wn = nF (Y2 ).
Solution.
The pdf of Y2 is

fY2(y) = n(n − 1)F(y)[1 − F(y)]^{n−2} f(y).

Hence, integrating by parts,

FWn(z) = P(F(Y2) ≤ w/n)
       = P(Y2 ≤ F^{−1}(w/n))
       = ∫_{−∞}^{F^{−1}(w/n)} n(n − 1)F(y)[1 − F(y)]^{n−2} f(y) dy
       = [−nF(y)(1 − F(y))^{n−1}]_{−∞}^{F^{−1}(w/n)} + ∫_{−∞}^{F^{−1}(w/n)} n f(y)[1 − F(y)]^{n−1} dy
       = −w(1 − w/n)^{n−1} + [−(1 − F(y))^n]_{−∞}^{F^{−1}(w/n)}
       = −w(1 − w/n)^{n−1} + [1 − (1 − w/n)^n]
       → −w e^{−w} + (1 − e^{−w}) ≡ FW(w) as n → ∞,

and hence

fW(w) = F′W(w) = −e^{−w} + w e^{−w} + e^{−w} = w e^{−w},

which means Wn →^D W ∼ Γ(2, 1).
5.2.5. Let the pmf of Yn be pn (y) = 1, y = n, zero elsewhere. Show that Yn does not have a limiting
distribution. (In this case, the probability has “escaped” to infinity.)

Solution.
The cdf of Yn is

FYn(y) = { 0    y < n
         { 1    y ≥ n.

If Yn had a limiting distribution with cdf FY, then

FY(y) = lim_{n→∞} FYn(y) = 0, −∞ < y < ∞.

But a cdf must tend to 1 as y → ∞, so this limit function is not a cdf; hence Yn has no limiting distribution.


5.2.6. Let X1, X2, ..., Xn be a random sample of size n from a distribution that is N(µ, σ²), where σ² > 0. Show that the sum Zn = Σ_{i=1}^n Xi does not have a limiting distribution.
Solution.
Since X1, ..., Xn are iid N(µ, σ²), Zn = Σ_i Xi ∼ N(nµ, nσ²). Thus, the cdf of Zn is

FZn(z) = P(Zn ≤ z) = P((Zn − nµ)/√(nσ²) ≤ (z − nµ)/√(nσ²))
       = Φ((z − nµ)/√(nσ²))
       → { Φ(−∞) = 0     µ > 0
         { Φ(0) = 1/2    µ = 0    as n → ∞,
         { Φ(∞) = 1      µ < 0

and in no case is the limit function a valid cdf, so Zn does not have a limiting distribution.


5.2.7. Let Xn have a gamma distribution with parameter α = n and β, where β is not a function of n. Let
Yn = Xn /n. Find the limiting distribution of Yn .
Solution.

MXn(t) = (1 − βt)^{−n} ⇒ MYn(t) = MXn(t/n) = (1 − βt/n)^{−n} → e^{βt} as n → ∞,

which indicates that the limiting distribution of Yn is degenerate at β.


5.2.8. Let Zn be χ²(n) and let Wn = Zn/n². Find the limiting distribution of Wn.
Solution.
Since MZn(t) = (1 − 2t)^{−n/2},

MWn(t) = MZn(t/n²) = (1 − 2t/n²)^{−n/2}
       = (1 + √(2t)/n)^{−n/2} (1 − √(2t)/n)^{−n/2}
       → exp(−√(2t)/2) exp(√(2t)/2) = e^0 as n → ∞,

which implies that the limiting distribution of Wn is degenerate at 0.


5.2.11. Let p = 0.95 be the probability that a man, in a certain age group, lives at least 5 years.

(a) If we are to observe 60 such men and if we assume independence, use R to compute the probability
that at least 56 of them live 5 or more years.
Solution.
Let X ∼ Binomial(60, 0.95). Then P(X ≥ 56) = 1 - pbinom(55, 60, 0.95) = 0.820.
(b) Find an approximation to the result of part (a) by using the Poisson distribution.
Solution.
Let Y = 60 − X ∼ Binomial(60, 0.05); then Y is approximately Poisson(3). Using this approximation,

P(X ≥ 56) = P(Y ≤ 4) ≈ ppois(4, 3) = 0.815,

which is close to the exact probability.
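The same two numbers can be reproduced in Python with scipy (my sketch; the text itself uses R):

```python
from scipy import stats

# Exact: X ~ Binomial(60, 0.95), P(X >= 56) via the survival function.
exact = stats.binom.sf(55, 60, 0.95)
# Poisson approximation: Y = 60 - X is approximately Poisson(3), P(Y <= 4).
approx = stats.poisson.cdf(4, 3)
print(round(exact, 3), round(approx, 3))  # 0.82 0.815
```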


5.2.12. Let the random variable Zn have a Poisson distribution with parameter µ = n. Show that the limiting distribution of the random variable Yn = (Zn − n)/√n is normal with mean zero and variance 1.
Solution.
MZn(t) = e^{n(e^t − 1)},

so

MYn(t) = e^{−√n t} MZn(t/√n)
       = e^{−√n t} e^{n(e^{t/√n} − 1)}
       = e^{−√n t} e^{n(t/√n + t²/(2n) + O(t³/(n√n)))}
       → e^{t²/2} as n → ∞,

indicating that Yn →^D N(0, 1). This is a special case of the Central Limit Theorem.
5.2.15. Let X̄n denote the mean of a random sample of size n from a Poisson distribution with parameter µ = 1.
(a) Show that the mgf of Yn = √n(X̄n − µ)/σ = √n(X̄n − 1) is given by exp[−t√n + n(e^{t/√n} − 1)].
Solution.
The mgf of Xi ∼ iid Poisson(1) is MX(t) = e^{e^t − 1}, so MX̄n(t) = [MX(t/n)]^n = e^{n(e^{t/n} − 1)}. Hence

MYn(t) = e^{−√n t} MX̄n(√n t) = e^{−√n t} e^{n(e^{t/√n} − 1)} = exp[−t√n + n(e^{t/√n} − 1)].
(b) Investigate the limiting distribution of Yn as n → ∞.
Solution. By the CLT, Yn = √n(X̄n − 1) →^D N(0, 1).
5.2.16. Using Exercise 5.2.15 and the ∆-method, find the limiting distribution of √n(√X̄n − 1).
Solution.

Let g(x) = √x, which is differentiable at x = 1 with g′(1) = 1/2. Then, by the Delta method,

√n(g(X̄n) − g(1)) →^D N(0, {g′(1)}²) ⇒ √n(√X̄n − 1) →^D N(0, 1/4).
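A simulation supports the N(0, 1/4) limit (my Python/numpy sketch, not from the text; the choice n = 100 is illustrative):

```python
import numpy as np

# For X_i ~ Poisson(1), W = sqrt(n)(sqrt(Xbar) - 1) should have variance near 1/4.
rng = np.random.default_rng(4)
n = 100
xbar = rng.poisson(1.0, size=(50_000, n)).mean(axis=1)
w = np.sqrt(n) * (np.sqrt(xbar) - 1.0)
print(w.var())  # close to 0.25
```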

5.2.17. Let X̄n denote the mean of a random sample of size n from a distribution that has pdf f(x) = e^{−x}, 0 < x < ∞, zero elsewhere.

(a) Show that the mgf MYn(t) of Yn = √n(X̄n − 1) is

MYn(t) = [e^{t/√n} − (t/√n) e^{t/√n}]^{−n},    t < √n.

Solution.

Since MX(t) = (1 − t)^{−1}, t < 1,

MX̄n(t) = [MX(t/n)]^n = (1 − t/n)^{−n}.

Hence,

MYn(t) = e^{−t√n} MX̄n(√n t) = e^{−t√n}(1 − t/√n)^{−n} = [e^{t/√n} − (t/√n) e^{t/√n}]^{−n}.

(b) Find the limiting distribution of Yn as n → ∞.


Solution.
By the CLT, Yn →^D N(0, 1). Alternatively, expand the mgf directly:

MYn(t) = [e^{t/√n} − (t/√n) e^{t/√n}]^{−n}
       = [e^{t/√n}(1 − t/√n)]^{−n}
       ≈ [(1 + t/√n + t²/(2n))(1 − t/√n)]^{−n}
       = [1 − t²/n + t²/(2n) − t³/(2n√n)]^{−n}
       = [1 − t²/(2n) − t³/(2n√n)]^{−n}
       → e^{t²/2} as n → ∞.

5.2.18. Continuing with Exercise 5.2.17, use the ∆-method to find the limiting distribution of √n(√X̄n − 1).
Solution. Exactly the same as Exercise 5.2.16: √n(√X̄n − 1) →^D N(0, 1/4).
5.2.19. Let Y1 < Y2 < · · · < Yn be the order statistics of a random sample (see Section 5.2) from a
distribution with pdf f (x) = e−x , 0 < x < ∞, zero elsewhere. Determine the limiting distribution of
Zn = (Yn − log n).
Solution.

FZn(z) = P(Yn − log n ≤ z)
       = P(Yn ≤ z + log n)
       = [P(X ≤ z + log n)]^n
       = [FX(z + log n)]^n
       = [1 − e^{−(z+log n)}]^n
       = [1 − e^{−z}/n]^n → e^{−e^{−z}} as n → ∞,

which is the cdf of the (standard Gumbel) extreme-value distribution.
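This Gumbel limit shows up clearly in simulation (my Python/numpy sketch, not from the text; n and the replicate count are arbitrary):

```python
import numpy as np

# For the max Y_n of n Exp(1) draws, Z_n = Y_n - log n has limiting cdf exp(-e^{-z}).
rng = np.random.default_rng(5)
n = 1_000
zn = rng.exponential(size=(10_000, n)).max(axis=1) - np.log(n)
# At z = 0 the limiting cdf equals e^{-1}, about 0.368.
print(np.mean(zn <= 0.0))
```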
5.2.20. Let Y1 < Y2 < · · · < Yn be the order statistics of a random sample (see Section 5.2) from a distribution with pdf f(x) = 5x⁴, 0 < x < 1, zero elsewhere. Find p so that Zn = n^p Y1 converges in distribution.
Solution.

FZn(z) = P(n^p Y1 ≤ z) = P(Y1 ≤ z/n^p) = 1 − P(Y1 > z/n^p)
       = 1 − [1 − P(X ≤ z/n^p)]^n = 1 − [1 − F(z/n^p)]^n
       = 1 − [1 − z⁵/n^{5p}]^n → 1 − e^{−z⁵} = FZ(z) as n → ∞,

which holds only when 5p = 1, that is, p = 1/5.

5.3 Central Limit Theorem
5.3.1. Let X̄ denote the mean of a random sample of size 100 from a distribution that is χ²(50). Compute an approximate value of P(49 < X̄ < 51).
Solution.
Since E(X̄) = 50 and Var(X̄) = 100/100 = 1, by the CLT, √100(X̄ − 50)/10 = X̄ − 50 →^D N(0, 1) approximately. Hence

P(49 < X̄ < 51) = P(−1 < X̄ − 50 < 1) ≈ Φ(1) − Φ(−1) = 0.683.
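The approximation can be compared with the exact probability, since the sum of 100 iid χ²(50) variates is χ²(5000) (my Python/scipy sketch, not from the text):

```python
from scipy import stats

# CLT approximation from the solution: Xbar - 50 is approximately N(0, 1).
approx = stats.norm.cdf(1) - stats.norm.cdf(-1)
# Exact: P(49 < Xbar < 51) = P(4900 < chi2(5000) < 5100).
exact = stats.chi2.cdf(5100, 5000) - stats.chi2.cdf(4900, 5000)
print(round(approx, 3), round(exact, 3))
```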

5.3.2. Let X̄ denote the mean of a random sample of size 128 from a gamma distribution with α = 2 and β = 4. Approximate P(7 < X̄ < 9).
Solution.
Since E(X) = αβ = 8 and Var(X) = αβ² = 32, √128(X̄ − 8)/√32 = 2(X̄ − 8) →^D N(0, 1) approximately by the CLT. Hence

P(7 < X̄ < 9) = P(−2 < 2(X̄ − 8) < 2) ≈ Φ(2) − Φ(−2) = 0.954.

5.3.3. Let Y be b(72, 1/3). Approximate P(22 ≤ Y ≤ 28).


Solution.
Since E(Y) = np = 24 and Var(Y) = np(1 − p) = 16, (Y − 24)/4 →^D N(0, 1) approximately by the CLT. Hence

P(22 ≤ Y ≤ 28) = P(−0.5 ≤ (Y − 24)/4 ≤ 1) ≈ Φ(1) − Φ(−0.5) = 0.532.

Note that the official answer uses a continuity correction. Then

P (21.5 ≤ Y ≤ 28.5) = P (−0.625 < (Y − 24)/4 < 1.125) = Φ(1.125) − Φ(−0.625) = 0.604.

5.3.4. Compute an approximate probability that the mean of a random sample of size 15 from a distribution having pdf f(x) = 3x², 0 < x < 1, zero elsewhere, is between 3/5 and 4/5.
Solution.
Since E(X) = 3/4 and E(X²) = 3/5, Var(X) = 3/5 − (3/4)² = 3/80. Hence

√n(X̄ − µ)/σ = √15(X̄ − 3/4)/√(3/80) = 20(X̄ − 0.75) →^D N(0, 1) approximately by the CLT.

Thus,

P(3/5 < X̄ < 4/5) = P(−3 < 20(X̄ − 0.75) < 1) ≈ Φ(1) − Φ(−3) = 0.840.

5.3.6. Let Y be b(400, 1/5). Compute an approximate value of P(0.25 < Y/400).


Solution.
(Y − np)/√(np(1 − p)) = (Y − 80)/√64 = (Y − 80)/8 →^D N(0, 1) approximately by the CLT.

Hence,

P(0.25 < Y/400) = P(Y > 100) = P((Y − 80)/8 > 2.5) = 1-pnorm(2.5) = 0.0062.

If a continuity correction is used, the probability is

P(Y > 100) = P(Y ≥ 100.5) = P((Y − 80)/8 ≥ 2.5625) = 1-pnorm(2.5625) = 0.0052.

5.3.7. If Y is b(100, 1/2), approximate the value of P(Y = 50).
Solution.
(Y − np)/√(np(1 − p)) = (Y − 50)/√25 = (Y − 50)/5 →^D N(0, 1) approximately by the CLT.

Use a continuity correction to obtain

P (Y = 50) = P (49.5 ≤ Y ≤ 50.5) = P (−0.1 ≤ (Y − 50)/5 ≤ 0.1) = 0.080.

5.3.8. Let Y be b(n, 0.55). Find the smallest value of n such that (approximately) P(Y/n > 1/2) ≥ 0.95.
Solution.

n(Y /n − 0.55) D
p → N (0, 1) by CLT.
(0.55)(0.45)

Hence,
√ √ ! √ !
n(Y /n − 0.55) −0.05 n 0.05 n
0.95 ≤ P (Y /n > 1/2) = P p >p =Φ p
(0.55)(0.45) (0.55)(0.45) (0.55)(0.45)
√ 2
0.05 n 1.645 (0.55)(0.45)
⇒p > 1.645 ⇒ n > = 267.90,
(0.55)(0.45) 0.052

which indicates that the smallest n is 268.
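The arithmetic can be reproduced directly (my Python/scipy sketch, not from the text):

```python
from math import ceil
from scipy import stats

z = stats.norm.ppf(0.95)  # one-sided 95% critical value, about 1.645
n_min = ceil((z / 0.05) ** 2 * 0.55 * 0.45)
print(n_min)  # 268
```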


5.3.9. Let f(x) = 1/x², 1 < x < ∞, zero elsewhere, be the pdf of a random variable X. Consider a random sample of size 72 from the distribution having this pdf. Compute approximately the probability that more than 50 of the observations of the random sample are less than 3.
Solution.
Let Wi be a Bernoulli trial:

Wi = { 1 with probability p = P(X < 3)
     { 0 with probability 1 − p,

where

p = ∫_1^3 (1/x²) dx = 1 − 1/3 = 2/3.

Further, let Y = Σ_{i=1}^{72} Wi ∼ b(72, p = 2/3). By the CLT,

(Y − np)/√(np(1 − p)) = (Y − 48)/√16 = (Y − 48)/4 →^D N(0, 1) approximately.

Hence, the desired probability is

P(Y ≥ 50) = P((Y − 48)/4 ≥ 0.5) = pnorm(0.5, lower = F) = 0.309.

If a continuity correction is used (reading “more than 50” strictly, i.e., Y ≥ 51),

P(Y > 50) = P(Y ≥ 50.5) = pnorm(0.625, lower = F) = 0.266.
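These normal approximations can be checked against the exact binomial tail (my Python/scipy sketch, not from the text):

```python
from scipy import stats

n, p = 72, 2 / 3
exact = stats.binom.sf(49, n, p)            # exact P(Y >= 50)
plain = stats.norm.sf((50 - 48) / 4)        # no continuity correction
corrected = stats.norm.sf((50.5 - 48) / 4)  # with continuity correction
print(exact, round(plain, 3), round(corrected, 3))
```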

5.3.10. Forty-eight measurements are recorded to several decimal places. Each of these 48 numbers is rounded off to the nearest integer. The sum of the original 48 numbers is approximated by the sum of these integers. If we assume that the errors made by rounding off are iid and have a uniform distribution over the interval (−1/2, 1/2), compute approximately the probability that the sum of the integers is within two units of the true sum.
Solution.
Let Ui ∼ U(−0.5, 0.5). Then E(Ui) = 0 and Var(Ui) = [0.5 − (−0.5)]²/12 = 1/12, which gives us

Σ_{i=1}^{48} Ui/√(48/12) = Σ_{i=1}^{48} Ui/2 →^D N(0, 1) approximately by the CLT.

Then, the desired probability is

P(|Σ_{i=1}^{48} Ui| ≤ 2) = P(−2 ≤ Σ_{i=1}^{48} Ui ≤ 2) = P(−1 ≤ Σ_{i=1}^{48} Ui/2 ≤ 1) ≈ Φ(1) − Φ(−1) = 0.683.
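A simulation of the rounding errors gives the same figure (my Python/numpy sketch, not from the text):

```python
import numpy as np

# Sum of 48 iid U(-0.5, 0.5) rounding errors; the sd of the sum is sqrt(48/12) = 2.
rng = np.random.default_rng(6)
s = rng.uniform(-0.5, 0.5, size=(200_000, 48)).sum(axis=1)
print(np.mean(np.abs(s) <= 2))  # close to the CLT value 0.683
```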

5.3.11. We know that X̄ is approximately N(µ, σ²/n) for large n. Find the approximate distribution of u(X̄) = X̄³, provided that µ ≠ 0.
Solution.
X̄ approx. N(µ, σ²/n) ⇔ √n(X̄ − µ) →^D N(0, σ²).

Let g(x) = x³, which is continuous and differentiable with g′(x) = 3x². Then, by the Delta method,

√n(X̄³ − µ³) →^D N(0, {g′(µ)}²σ²) = N(0, 9µ⁴σ²) ⇔ X̄³ approx. N(µ³, 9µ⁴σ²/n).
5.3.12. Let X1, X2, ..., Xn be a random sample from a Poisson distribution with mean µ. Thus, Y = Σ_{i=1}^n Xi has a Poisson distribution with mean nµ. Moreover, X̄ = Y/n is approximately N(µ, µ/n) for large n. Show that u(Y/n) = √(Y/n) is a function of Y/n whose variance is essentially free of µ.
Solution.
X̄ approx. N(µ, µ/n) ⇔ √n(X̄ − µ) →^D N(0, µ).

Let g(x) = √x, which is continuous and differentiable for x > 0 with g′(x) = 1/(2√x). Then, by the Delta method,

√n(√X̄ − √µ) →^D N(0, {g′(µ)}²µ) = N(0, 1/4) ⇔ √X̄ approx. N(√µ, 1/(4n)),

whose variance is free of µ.
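The variance-stabilizing effect is visible in simulation: n·Var(√X̄) stays near 1/4 across different µ (my Python/numpy sketch, not from the text; the means and n are arbitrary choices):

```python
import numpy as np

# For several Poisson means, n * Var(sqrt(Xbar)) should all be near 1/4.
rng = np.random.default_rng(7)
n = 200
vals = []
for mu in (1.0, 4.0, 9.0):
    xbar = rng.poisson(mu, size=(50_000, n)).mean(axis=1)
    vals.append(n * np.sqrt(xbar).var())
print(vals)  # each entry close to 0.25, regardless of mu
```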
5.3.13. Using the notation of Example 5.3.5, show that equation (5.3.4) is true.
Solution.
(p̂ − p)/√(p̂(1 − p̂)/n) = [(p̂ − p)/√(p(1 − p)/n)] · [√(p(1 − p))/√(p̂(1 − p̂))] →^D N(0, 1) by Slutsky’s theorem, because

(p̂ − p)/√(p(1 − p)/n) →^D N(0, 1) by the CLT, and

√(p(1 − p))/√(p̂(1 − p̂)) →^P 1 by the continuity of g(x) = √(p(1 − p)/(x(1 − x))) at x = p and the WLLN (p̂ →^P E(X) = p).

5.3.14. Assume that X1, X2, ..., Xn is a random sample from a Γ(1, β) distribution. Determine the asymptotic distribution of √n(X̄ − β). Then find a transformation g(X̄) whose asymptotic variance is free of β.

Solution.
Since E(X) = β and Var(X) = β²,

√n(X̄ − β) →^D N(0, β²) by the CLT.

Let g(x) = log x, which is continuous for x > 0 with g′(β) = 1/β. Thus, by the Delta method,

√n(log X̄ − log β) →^D N(0, {g′(β)}²β²) = N(0, 1).

Hence, log X̄ is a transformation that makes the asymptotic variance free of β.
