Lecture 3
Indian Institute of Technology Delhi
Contents
The Bootstrap
Why Does the Bootstrap Work?
※ The Bootstrap
The plug-in estimator of θ is defined as
\[
\hat{\theta} = T(\hat{F}) = g(X_1, \ldots, X_n).
\]
In the bootstrap world, we simulate X₁*, …, Xₙ* from F̂ and then compute the estimator over these values, θ̂* = g(X₁*, …, Xₙ*).
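The real-world/bootstrap-world correspondence can be sketched in a few lines of code. This is a minimal illustration, not part of the original notes; the choice of the median as the statistic g and the normal sample are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real world: a sample X_1, ..., X_n from the unknown F, and the
# plug-in estimate theta_hat = g(X_1, ..., X_n).
g = np.median                 # the statistic g; median chosen for illustration
x = rng.normal(size=200)      # sample from F (standard normal, as an example)
theta_hat = g(x)

# Bootstrap world: X_1*, ..., X_n* are drawn from F_hat, which means
# resampling the observed data with replacement.
x_star = rng.choice(x, size=x.size, replace=True)
theta_hat_star = g(x_star)
```

Repeating the last two lines B times yields the bootstrap replications θ̂*₁, …, θ̂*_B used throughout these notes.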
The next theorem states that v̂_B² approximates Var(θ̂). There are two sources of error in this approximation: the first is due to the fact that n is finite, and the second is due to the fact that B is finite. However, we can make B as large as we like (B = 10,000 suffices in practice).
Theorem 1.1
Under appropriate regularity conditions,
\[
\frac{\hat{v}_B^2}{\operatorname{Var}(\hat{\theta})} \xrightarrow{P} 1 \quad \text{as } n \to \infty.
\]
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Sample of size n = 100 from Uniform(0, 1).
y = np.random.rand(100)
tmean = np.mean(y)  # plug-in estimate of the mean

# B = 10,000 bootstrap replications of the sample mean:
# resample n values with replacement and average each column.
resample = np.mean(np.random.choice(y, size=(100, 10000), replace=True),
                   axis=0)
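Theorem 1.1 can be checked numerically for this example. The sketch below recomputes the bootstrap replications and compares v̂_B² against the exact variance of the sample mean; the closed-form value 1/(12n), which uses Var(X) = 1/12 for Uniform(0, 1), is an addition for illustration and is not in the original notes.

```python
import numpy as np

rng = np.random.default_rng(1)
n, B = 100, 10_000

y = rng.random(n)  # sample of size n from Uniform(0, 1)

# B bootstrap replications of the sample mean.
boot_means = np.mean(rng.choice(y, size=(n, B), replace=True), axis=0)

# Bootstrap variance estimate v_hat_B^2.
v_hat_sq = np.var(boot_means, ddof=1)

# Exact variance of the sample mean: Var(X)/n = 1/(12 n) for Uniform(0, 1).
true_var = 1 / (12 * n)
```

For n = 100 the ratio v_hat_sq / true_var is typically close to 1; the residual gap reflects both finite n and finite B, exactly the two error sources discussed above.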
The next theorem states that our bootstrap confidence interval has the right coverage asymptotically.
Theorem 1.2
Under appropriate regularity conditions,
\[
\Pr(\theta \in C_n) = 1 - \alpha + O\!\left(\frac{1}{\sqrt{n}}\right).
\]
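The theorem does not pin down which interval C_n is meant, so as an illustration the sketch below simulates the coverage of one standard construction, the normal-interval bootstrap CI θ̂ ± z_{1−α/2} · ŝe_B; the population, sample size, and replication counts are all assumptions chosen to keep the run fast.

```python
import numpy as np

rng = np.random.default_rng(2)
n, B, alpha = 50, 1000, 0.05
theta = 0.5              # true mean of Uniform(0, 1)
z = 1.959963984540054    # Phi^{-1}(1 - alpha/2); hard-coded to avoid scipy

covered = 0
reps = 300
for _ in range(reps):
    x = rng.random(n)
    theta_hat = x.mean()
    # Bootstrap standard error from B replications of the sample mean.
    boot = rng.choice(x, size=(B, n), replace=True).mean(axis=1)
    se = boot.std(ddof=1)
    lo, hi = theta_hat - z * se, theta_hat + z * se
    covered += (lo <= theta <= hi)

coverage = covered / reps
```

With n = 50 the empirical coverage lands near the nominal 95%, with a discrepancy of the order 1/√n predicted by the theorem (plus Monte Carlo noise from the finite number of repetitions).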
※ Why Does the Bootstrap Work?
Example 2.1
Suppose that θ = ∫ x dF(x); then its plug-in estimator is
\[
\hat{\theta} = \frac{1}{n} \sum_{i=1}^{n} X_i.
\]
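This identity can be verified directly: since F̂ puts mass 1/n on each observation, integrating x dF̂(x) is an explicit sum over the point masses, which equals the sample mean exactly. The snippet below is a small check of that fact; the sample itself is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=25)  # any sample will do

# Plug-in estimator of the mean: integrate x dF_hat(x), where F_hat
# places probability 1/n on each observed point.
plug_in = np.sum(x * (1 / x.size))

# This equals the sample mean exactly, not just approximately.
theta_hat = x.mean()
```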
Along with asymptotic normality, we also have the Edgeworth expansion: for each t,
\[
\Pr\!\left(\frac{\sqrt{n}(\hat{\theta} - \theta)}{\hat{\sigma}} \le t\right) - \Phi(t)
= \frac{p_1(t)\varphi(t)}{\sqrt{n}} + \cdots + \frac{p_j(t)\varphi(t)}{n^{j/2}} + \cdots
= O\!\left(\frac{1}{\sqrt{n}}\right),
\]
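The O(1/√n) error in the normal approximation can be seen empirically. The sketch below is a Monte Carlo illustration, not part of the original notes: it draws from a skewed population (exponential, chosen because its nonzero skewness makes the leading p₁(t)φ(t)/√n term visible) and measures the largest gap between the CDF of √n(θ̂ − θ)/σ̂ and Φ over a grid of t values, at a small and a larger n.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

def max_cdf_gap(n, reps=20_000):
    """Max gap, over a grid of t, between the Monte Carlo CDF of
    sqrt(n)(theta_hat - theta)/sigma_hat and the standard normal CDF."""
    x = rng.exponential(size=(reps, n))   # skewed population, theta = 1
    t_stat = sqrt(n) * (x.mean(axis=1) - 1.0) / x.std(axis=1, ddof=1)
    grid = np.linspace(-3, 3, 61)
    ecdf = np.array([(t_stat <= t).mean() for t in grid])
    phi = np.array([0.5 * (1 + erf(t / sqrt(2))) for t in grid])
    return np.abs(ecdf - phi).max()

gap_small_n = max_cdf_gap(10)
gap_large_n = max_cdf_gap(160)   # gap shrinks roughly like 1/sqrt(n)
```

Going from n = 10 to n = 160 multiplies √n by 4, and the observed gap shrinks by roughly that factor, consistent with the leading Edgeworth term dominating the error.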