Chapter 2: The linear model

Set of Exercises
Exercise 1. Gender wage gap
We consider the Mincer equation:

y_i = α s_i + x_i'β + u_i,   i = 1, ..., N

where y_i = ln(w_i) are log wages, s_i = 1 if individual i is male and zero otherwise, and x_i gathers individual characteristics such as labor market experience and education, together with a constant.
We assume that (y_i, s_i, x_i'), i = 1, ..., N is a random sample, and that u_i is independent of s_i and x_i.
We are interested in the quantity:

∆ = 100 E( E(w_i | s_i = 1, x_i) / E(w_i | s_i = 0, x_i) − 1 )

1. Interpret ∆.

Hint: you may start by considering

∆(x_i) = 100 ( E(w_i | s_i = 1, x_i) / E(w_i | s_i = 0, x_i) − 1 )

2. Compute ∆ as a function of α only:


∆ = f (α)

where you will give the expression of f .

3. We estimate α and β by regressing y_i on s_i and x_i. Let α̂ be the OLS estimate of α.
Then we estimate ∆ as:

∆̂ = f(α̂)

Show that ∆̂ is a consistent estimate of ∆.

4. Let σ̂_α be the robust asymptotic standard error of α̂. Show that the robust asymptotic standard error of ∆̂ is:

100 exp(α̂) σ̂_α

Hint: use the delta method.
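Numerically, once the regression output is in hand, the delta-method standard error stated above can be evaluated directly. The values of α̂ and σ̂_α below are hypothetical, purely for illustration:

```python
import math

# Hypothetical regression output: OLS estimate of alpha and its
# robust standard error (not taken from any real dataset).
alpha_hat = 0.15
se_alpha = 0.02

# Delta method: s.e.(Delta_hat) = |f'(alpha_hat)| * se_alpha,
# which the exercise states equals 100 * exp(alpha_hat) * se_alpha.
se_delta = 100 * math.exp(alpha_hat) * se_alpha
print(f"s.e. of Delta_hat: {se_delta:.3f} percentage points")
```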

Exercise 2. Grouped data
We consider the classical regression model:

E(y|X) = Xβ,   Var(y|X) = σ² I_N

where there are K regressors and N observations.
We assume here that the observations (y_i, x_i) are grouped into J groups of sizes n_1, ..., n_J, and that we only observe the means of y and X within each group:

y_j* = (1/n_j) Σ_{i∈j} y_i,   x_j* = (1/n_j) Σ_{i∈j} x_i

We construct a J×1 vector y* and a J×K matrix X* by stacking these group means.

1. Show that

E(y*|X*) = X*β,   Var(y*|X*) = D_N

where D_N = diag(σ²/n_1, ..., σ²/n_J).

Hint: find a matrix M such that y* = M y and X* = M X.
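To make the hint concrete, here is a small numerical sketch of the averaging matrix M and the implied variance of the group means. The group sizes are hypothetical toy values:

```python
import numpy as np

# Toy example: J = 2 groups of sizes n1 = 2, n2 = 3 (hypothetical sizes).
# Row j of M averages the observations belonging to group j.
n = [2, 3]
N = sum(n)
M = np.zeros((len(n), N))
start = 0
for j, nj in enumerate(n):
    M[j, start:start + nj] = 1.0 / nj
    start += nj

y = np.arange(1.0, N + 1)   # y = [1, 2, 3, 4, 5]
y_star = M @ y              # group means: [1.5, 4.0]

# Var(y*|X*) = M (sigma^2 I_N) M' = sigma^2 M M' = diag(sigma^2 / n_j)
sigma2 = 1.0
V = sigma2 * M @ M.T
print(y_star)
print(np.diag(V))           # sigma^2/n_1 = 0.5, sigma^2/n_2 = 1/3
```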

2. Show that

β̂_GLS = ( Σ_{j=1}^J n_j x_j* x_j*' )⁻¹ Σ_{j=1}^J n_j x_j* y_j*

Interpret.
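The formula can be checked numerically: grouped GLS is weighted least squares with weights n_j, i.e. OLS after rescaling each group's row by √n_j. The data below are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
J, K = 5, 2
n = np.array([2, 3, 4, 5, 6], dtype=float)   # hypothetical group sizes
X_star = rng.normal(size=(J, K))             # group means of the regressors
y_star = rng.normal(size=J)                  # group means of y

# Closed-form grouped GLS: weight each group by its size n_j.
A = (X_star * n[:, None]).T @ X_star         # sum_j n_j x_j* x_j*'
b = (X_star * n[:, None]).T @ y_star         # sum_j n_j x_j* y_j*
beta_gls = np.linalg.solve(A, b)

# Equivalent WLS: rescale each row by sqrt(n_j) and run OLS.
w = np.sqrt(n)
beta_wls, *_ = np.linalg.lstsq(X_star * w[:, None], y_star * w, rcond=None)
print(np.allclose(beta_gls, beta_wls))       # the two coincide
```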

3. If we estimate β by OLS from the grouped data, how do we have to correct standard
errors?

Exercise 3. Household data


We want to estimate β in the classical regression model:

E(y|X) = Xβ,   Var(y|X) = σ² I_{2N}

where i = 1, ..., 2N indexes individual observations.
However, we do not have access to the individual data. Instead, we observe data at the household level. Each household is assumed to comprise two individuals. We observe x_j* and y_j*, j = 1, ..., N, which are the average values within each household. The sample size is N = 1000.
We regress y_j* on x_j* by OLS, and use the standard formula to compute the standard errors.

1. Give the value of Var(y*|X*), where y* is the N×1 vector of the y_j* and X* is the N×K matrix with rows x_j*', as a function of σ².

2. Is the way we have computed the standard error correct?

3. What is the (infeasible) GLS estimate of the education coefficient in the regression?

4. In fact, half of the households in the sample comprise one single person. Does this
finding modify the previous results?

Explain how you would compute the GLS estimator in this case.

Exercise 4. Estimation with parameterized conditional heteroskedasticity


A researcher is interested in the following model:

y_i = x_i'β + u_i

where x_i is a vector of K regressors, observations are iid, E(u_i|x_i) = 0, and E(u_i²|x_i) = x_i'α.

1. Assume first that α is known. Show that the GLS estimator of β can be written as:

β̂_GLS = ( Σ_{i=1}^N (1/(x_i'α)) x_i x_i' )⁻¹ Σ_{i=1}^N (1/(x_i'α)) x_i y_i

2. Give the expression of the asymptotic variance of β̂_GLS.

From now on, we assume that α is not known. The researcher then proposes to estimate the parameters in two steps:

• Step 1: Regress y_i on x_i by OLS, and compute the prediction errors û_i. Then regress û_i² on x_i, again by OLS. This yields an estimate of α, say α̃.
• Step 2: Estimate β by weighted least squares, proceeding as if α̃ were the true α. This yields β̃.
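The two steps can be sketched in a short simulation. The design and the values of β and α are hypothetical; the regressors are kept positive so that the skedastic function x_i'α stays positive:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 5000, 2
X = rng.uniform(0.5, 2.0, size=(N, K))       # hypothetical design
beta_true = np.array([1.0, -0.5])
alpha_true = np.array([0.3, 0.6])
u = rng.normal(size=N) * np.sqrt(X @ alpha_true)   # E(u^2|x) = x'alpha
y = X @ beta_true + u

# Step 1: OLS, then regress squared residuals on x to estimate alpha.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ beta_ols
alpha_tilde, *_ = np.linalg.lstsq(X, u_hat**2, rcond=None)

# Step 2: weighted least squares, treating alpha_tilde as the true alpha.
# Clip fitted variances away from zero as a numerical safeguard.
w = 1.0 / np.clip(X @ alpha_tilde, 1e-3, None)
A = (X * w[:, None]).T @ X
b = (X * w[:, None]).T @ y
beta_fgls = np.linalg.solve(A, b)
print(beta_fgls)   # should be close to beta_true
```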

3. Show that:

plim_{N→∞} ( Σ_{i=1}^N x_i x_i' )⁻¹ Σ_{i=1}^N x_i u_i² = α

4. Show that α̃ is a consistent estimator of α, under the condition E(x_i³) < ∞. In this question, you may assume K = 1 to simplify the notation.

5. Show that β̃ is a consistent estimator of β. It is sufficient to give an intuition of the proof.

Remark: it can be shown that

plim_{N→∞} √N (β̃ − β̂_GLS) = 0

so that β̃ and β̂_GLS are asymptotically equivalent.

6. The researcher then changes her mind, and considers the following model:

y_i = x_i'β + u_i

where E(u_i|x_i) = 0, and E(u_i²|x_i) = exp(x_i'α).

Why this specification? Propose a way to estimate β efficiently.

Hint: recall the Nonlinear Least Squares estimation method.
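As a sketch of one possible implementation (not necessarily the intended solution), the exponential specification, which guarantees a positive conditional variance for any α, can be fitted by nonlinear least squares of the squared OLS residuals on exp(x_i'α), here via a damped Gauss-Newton loop on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 8000, 2
X = rng.uniform(0.5, 2.0, size=(N, K))     # hypothetical design
beta_true = np.array([1.0, -0.5])
alpha_true = np.array([0.4, -0.2])
u = rng.normal(size=N) * np.sqrt(np.exp(X @ alpha_true))
y = X @ beta_true + u

# Squared OLS residuals serve as noisy observations of E(u_i^2 | x_i).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
u2 = (y - X @ beta_ols) ** 2

# NLS of u2_i on exp(x_i'alpha): damped Gauss-Newton iterations.
alpha = np.zeros(K)
for _ in range(100):
    m = np.exp(X @ alpha)        # model for the conditional variance
    J = m[:, None] * X           # Jacobian of m with respect to alpha
    step = np.linalg.solve(J.T @ J, J.T @ (u2 - m))
    alpha += 0.5 * step          # damping for numerical stability
print(alpha)                     # should be close to alpha_true
```

Given α̃ from this step, β can then be estimated by weighted least squares with weights 1/exp(x_i'α̃), as in the two-step procedure above.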
