Lecture On Bootstrap - Lecture Notes
Academia Sinica
December 4, 2014
This lecture is based on Xiaoxia Shi’s lecture note and my
understanding of bootstrap.
This is an introductory lecture on the bootstrap method; no proofs
will be provided.
For further reading, please see
Horowitz, J.L. (2001). "The Bootstrap," in J.J. Heckman and
E. Leamer, eds., Handbook of Econometrics, vol. 5, Elsevier
Science B.V., pp. 3159-3228,
and the references therein.
Xiaoxia Shi’s lecture note is available at
http://www.ssc.wisc.edu/~xshi/econ715/Lecture_10_bootstrap.pdf
Introduction
Introduction (cont’d)
How does bootstrap work?
Confidence Interval
Suppose that the limiting distribution of $\sqrt{n}(\hat\theta^* - \hat\theta)$ is close
to that of $\sqrt{n}(\hat\theta - \theta_0)$.
Let's pretend for now that we know the 2.5% and 97.5% quantiles of
$\sqrt{n}(\hat\theta^* - \hat\theta)$, denoted $q^*_{2.5}$ and $q^*_{97.5}$.
Then the 95% CI is $\left(\hat\theta - q^*_{97.5}/\sqrt{n},\ \hat\theta - q^*_{2.5}/\sqrt{n}\right)$.
Note that
\begin{align*}
&P\left(q^*_{2.5} < \sqrt{n}(\hat\theta^* - \hat\theta) < q^*_{97.5}\right) = 95\% \\
\Rightarrow\ &P\left(q^*_{2.5} < \sqrt{n}(\hat\theta - \theta_0) < q^*_{97.5}\right) \approx 95\% \\
\Rightarrow\ &P\left(q^*_{2.5}/\sqrt{n} < \hat\theta - \theta_0 < q^*_{97.5}/\sqrt{n}\right) \approx 95\% \\
\Rightarrow\ &P\left(-q^*_{97.5}/\sqrt{n} < \theta_0 - \hat\theta < -q^*_{2.5}/\sqrt{n}\right) \approx 95\% \\
\Rightarrow\ &P\left(\hat\theta - q^*_{97.5}/\sqrt{n} < \theta_0 < \hat\theta - q^*_{2.5}/\sqrt{n}\right) \approx 95\%
\end{align*}
Remarks:
How to obtain those quantiles?
So far, we have pretended that $q^*_{2.5}$, $q^*_{97.5}$ and $\alpha^*_{95}$ are known.
The distribution of $\sqrt{n}(\hat\theta^* - \hat\theta)$ is known in principle, as we
pointed out, because we know the $W_i^*$ are drawn from $\hat{F}$, the
empirical CDF.
Of course, the closed form of the CDF of $\sqrt{n}(\hat\theta^* - \hat\theta)$ is still
hard to get!
This is where the well-known bootstrap simulations come into play.
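The simulation recipe can be sketched in Python with NumPy. This is a minimal illustration, not part of the lecture: the exponential data, the sample size, and B = 2000 bootstrap draws are all hypothetical choices, and the estimator is taken to be the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample W_1, ..., W_n from an unknown F; theta is the mean.
n = 200
w = rng.exponential(scale=2.0, size=n)   # true theta_0 = 2 in this toy setup
theta_hat = w.mean()

# Draw W_i* i.i.d. from F-hat, i.e. resample the data with replacement,
# and record sqrt(n)(theta*-hat - theta-hat) for each bootstrap sample.
B = 2000
stats = np.empty(B)
for b in range(B):
    w_star = rng.choice(w, size=n, replace=True)
    stats[b] = np.sqrt(n) * (w_star.mean() - theta_hat)

# The empirical quantiles of these draws approximate q*_{2.5} and q*_{97.5}.
q025, q975 = np.quantile(stats, [0.025, 0.975])

# 95% CI from the earlier derivation:
ci = (theta_hat - q975 / np.sqrt(n), theta_hat - q025 / np.sqrt(n))
print(ci)
```

With B large, the Monte Carlo error in the quantiles is negligible relative to the sampling error in $\hat\theta$ itself.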
How to obtain those quantiles? (Cont’d)
Bootstrap simulations:
Hypothesis testing
A wrong bootstrap procedure!
Note that
\begin{align*}
P\left(\sqrt{n}(\hat\mu_n - 1) < \hat{q}^*_{(25)}\right)
&= P\left(\sqrt{n}(\hat\mu_n - 1) < \sqrt{n}(\hat\mu^*_{(25)} - 1)\right) \\
&= P\left(0 < \sqrt{n}(\hat\mu^*_{(25)} - \hat\mu_n)\right) \to 0.
\end{align*}
Similarly,
\begin{align*}
P\left(\sqrt{n}(\hat\mu_n - 1) > \hat{q}^*_{(975)}\right)
&= P\left(\sqrt{n}(\hat\mu_n - 1) > \sqrt{n}(\hat\mu^*_{(975)} - 1)\right) \\
&= P\left(0 > \sqrt{n}(\hat\mu^*_{(975)} - \hat\mu_n)\right) \to 0.
\end{align*}
Note that the previous two results hold no matter what the true
parameter is.
Therefore, whether under the null or under the alternative, the
rejection probability of such a test is zero: both its size and its
power are zero.
A right way to do it!
Under the null hypothesis $H_0: \mu = 1$,
\begin{align*}
&P\left(\sqrt{n}(\hat\mu_n - 1) < \hat{q}^*_{(25)} \text{ or } \sqrt{n}(\hat\mu_n - 1) > \hat{q}^*_{(975)}\right) \\
=\ & 1 - P\left(\hat{q}^*_{(25)} < \sqrt{n}(\hat\mu_n - 1) < \hat{q}^*_{(975)}\right) \\
=\ & 1 - P\left(\hat{q}^*_{(25)} < \sqrt{n}(\hat\mu_n - \mu) < \hat{q}^*_{(975)}\right) \approx 0.05.
\end{align*}
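A minimal sketch of this correctly centered test in Python. The normal data, sample size, B, and the function name `bootstrap_test` are illustrative assumptions; the key point is that the bootstrap quantiles are computed for $\sqrt{n}(\hat\mu^* - \hat\mu_n)$, not $\sqrt{n}(\hat\mu^* - 1)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_test(y, mu0=1.0, B=2000, alpha=0.05):
    """Bootstrap test of H0: mu = mu0, centering at mu-hat (the right way)."""
    n = y.size
    mu_hat = y.mean()
    t_star = np.empty(B)
    for b in range(B):
        y_star = rng.choice(y, size=n, replace=True)
        # Correct centering: resampled statistic around mu-hat, not mu0.
        t_star[b] = np.sqrt(n) * (y_star.mean() - mu_hat)
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    t = np.sqrt(n) * (mu_hat - mu0)
    return bool((t < q_lo) or (t > q_hi))  # True = reject H0

# Under H0 (true mean 1) the test rejects about 5% of the time;
# under a fixed alternative (true mean 2) it rejects with high probability.
reject_null = bootstrap_test(rng.normal(loc=1.0, size=300))
reject_alt = bootstrap_test(rng.normal(loc=2.0, size=300))
print(reject_null, reject_alt)
```

Re-running the wrong procedure (quantiles of $\sqrt{n}(\hat\mu^* - 1)$) in this sketch would reject essentially never, matching the derivation on the previous slide.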
Remarks
Other uses of Bootstrap
Standard Error
We can use bootstrap method to approximate the asymptotic
standard error of an estimator, σ.
As we mentioned, when constructing CI’s or conducting
hypothesis testing, we need a consistent estimator for σ.
We can use the bootstrap to obtain a consistent estimator:
$$\hat\sigma_n^* = \sqrt{\frac{1}{B}\sum_{b=1}^{B}\left(\hat\theta_b^* - \bar\theta^*\right)^2},$$
where $\bar\theta^*$ is the sample average of the $\hat\theta_b^*$'s.
Then we can replace $\hat\sigma$ with $\hat\sigma_n^*$ in the previous cases.
Shi’s remark: To use bootstrap for standard error, the
estimator under consideration must be asymptotically normal.
Otherwise, the use of standard error itself is misguided.
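A sketch of the bootstrap standard error in Python. The data and B are hypothetical, and the estimator is taken to be the sample median, an asymptotically normal estimator, as Shi's remark requires.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data; the estimator is the sample median, which is
# asymptotically normal, so a standard error is meaningful.
x = rng.normal(size=500)
B = 1000
theta_star = np.array([
    np.median(rng.choice(x, size=x.size, replace=True)) for _ in range(B)
])

# The standard deviation of the bootstrap replicates around their
# average: the square root of the displayed average of squared deviations.
theta_bar = theta_star.mean()
se_boot = np.sqrt(np.mean((theta_star - theta_bar) ** 2))
print(se_boot)
```

This is especially convenient when, as for the median, the asymptotic variance involves nuisance quantities (here a density) that are awkward to estimate directly.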
Other uses of Bootstrap
Bias Correction
We can use bootstrap to correct the bias of an estimator.
The exact bias is Bias(θ̂n , θ) = E [θ̂n ] − θ and is unknown.
The bootstrap estimator of the bias is:
$$\widehat{\mathrm{Bias}}^*(\hat\theta_n, \theta) = \frac{1}{B}\sum_{b=1}^{B}\hat\theta_b^* - \hat\theta_n.$$
The bias-corrected estimator is then $\hat\theta_n - \widehat{\mathrm{Bias}}^*(\hat\theta_n, \theta)$.
Bootstrap for Regression Models
$$Y_i = X_i \beta + U_i, \quad \text{for } i = 1, \dots, n,$$
Under regularity conditions, $\sqrt{n}(\hat\beta_n - \beta) \xrightarrow{D} N(0, V)$, where
Bootstrap for Regression Models (Cont’d)
Wild Bootstrap for Regression Models
Then we can show that $\sqrt{n}(\hat\beta_n^* - \hat\beta_n)$ can approximate
$\sqrt{n}(\hat\beta_n - \beta)$ well.
This is a residual-based bootstrap, and it only works for OLS.
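A sketch of the wild bootstrap in Python under an assumed heteroskedastic design (the data-generating process, B, and the Rademacher multiplier, one common choice of $e_i^*$, are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical heteroskedastic design: Var(U_i | X_i) depends on X_i,
# which is the case the wild bootstrap is meant to handle.
n = 300
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
u = rng.normal(size=n) * (1 + np.abs(x[:, 1]))
y = x @ beta + u

beta_hat = np.linalg.lstsq(x, y, rcond=None)[0]
resid = y - x @ beta_hat

# Wild bootstrap: keep X fixed, set U_i* = e_i* u-hat_i with e_i* i.i.d.
# Rademacher (+1/-1), rebuild Y*, and re-run OLS.
B = 999
beta_star = np.empty((B, 2))
for b in range(B):
    e = rng.choice([-1.0, 1.0], size=n)
    y_star = x @ beta_hat + e * resid
    beta_star[b] = np.linalg.lstsq(x, y_star, rcond=None)[0]

# The spread of sqrt(n)(beta*_b - beta-hat) mimics that of
# sqrt(n)(beta-hat - beta).
t_slope = np.sqrt(n) * (beta_star[:, 1] - beta_hat[1])
print(np.std(t_slope))
```

Multiplying each residual by a mean-zero, unit-variance draw preserves the conditional heteroskedasticity pattern observation by observation, which i.i.d. residual resampling would destroy.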
Bootstrap method for weakly dependent data
Blockwise Bootstrap
Conclusion!