MOOC Econometrics Test Exercise 2

Let's address each part of the exercise step by step:

**(a) Proof that \( E(b_R) = \beta_1 + P\beta_2 \):**

Given the partitioned model \( y = X_1\beta_1 + X_2\beta_2 + \epsilon \), where \( X = (X_1 \quad X_2) \), the restricted estimator \( b_R \) is obtained by regressing \( y \) on \( X_1 \) alone. Therefore, \( b_R = (X_1^TX_1)^{-1}X_1^Ty \).

The expected value of \( b_R \) is:

\[ E(b_R) = E((X_1^TX_1)^{-1}X_1^Ty) \]

Substituting \( y = X_1\beta_1 + X_2\beta_2 + \epsilon \) and treating the regressors as fixed (so the expectation acts only on \( \epsilon \)):

\[ E(b_R) = E((X_1^TX_1)^{-1}X_1^T(X_1\beta_1 + X_2\beta_2 + \epsilon)) \]

\[ E(b_R) = (X_1^TX_1)^{-1}X_1^TX_1\beta_1 + (X_1^TX_1)^{-1}X_1^TX_2\beta_2 + (X_1^TX_1)^{-1}X_1^TE(\epsilon) \]

Since \( E(\epsilon) = 0 \), the last term vanishes, and since \( (X_1^TX_1)^{-1}X_1^TX_1 = I \), the first term simplifies:

\[ E(b_R) = \beta_1 + (X_1^TX_1)^{-1}X_1^TX_2\beta_2 \]

Defining \( P = (X_1^TX_1)^{-1}X_1^TX_2 \), this is

\[ E(b_R) = \beta_1 + P\beta_2 \]

Therefore, \( E(b_R) = \beta_1 + P\beta_2 \).
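As a hedged illustration (not part of the original exercise), the following numpy simulation sketch draws many samples from a process in which \( X_2 \) is correlated with \( X_1 \), regresses \( y \) on \( X_1 \) alone each time, and checks that the average of the \( b_R \) draws is close to \( \beta_1 + P\beta_2 \). All variable names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# X1 holds the constant and one regressor; X2 is one omitted variable
# correlated with X1 (an illustrative design, not the lecture data).
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = 0.5 * X1[:, [1]] + rng.normal(size=(n, 1))
beta1 = np.array([1.0, 2.0])
beta2 = np.array([3.0])
sigma = 1.0

P = np.linalg.solve(X1.T @ X1, X1.T @ X2)      # P = (X1'X1)^{-1} X1'X2

draws = []
for _ in range(5000):
    y = X1 @ beta1 + X2 @ beta2 + sigma * rng.normal(size=n)
    bR = np.linalg.solve(X1.T @ X1, X1.T @ y)  # regress y on X1 only
    draws.append(bR)

print("mean of b_R draws:", np.mean(draws, axis=0))
print("beta_1 + P beta_2:", beta1 + P @ beta2)
```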

**(b) Proof that \( \text{var}(b_R) = \sigma^2 (X_1^TX_1)^{-1} \):**

The variance of \( b_R \) is:

\[ \text{var}(b_R) = \text{var}((X_1^TX_1)^{-1}X_1^Ty) \]

Under the assumption \( \text{var}(\epsilon) = \sigma^2 I \), with fixed regressors we have \( \text{var}(y) = \text{var}(\epsilon) = \sigma^2 I \), so:

\[ \text{var}(b_R) = (X_1^TX_1)^{-1}X_1^T\text{var}(y)X_1(X_1^TX_1)^{-1} \]

\[ \text{var}(b_R) = (X_1^TX_1)^{-1}X_1^T\sigma^2 I X_1(X_1^TX_1)^{-1} = \sigma^2 (X_1^TX_1)^{-1}(X_1^TX_1)(X_1^TX_1)^{-1} \]

\[ \text{var}(b_R) = \sigma^2 (X_1^TX_1)^{-1} \]

Therefore, \( \text{var}(b_R) = \sigma^2 (X_1^TX_1)^{-1} \).
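Continuing the simulation sketch from part (a) (it reuses `X1`, `sigma`, and `draws`), the sample covariance matrix of the simulated \( b_R \) draws should be close to \( \sigma^2(X_1^TX_1)^{-1} \), since the regressors are held fixed across draws:

```python
# Empirical covariance of the b_R draws versus sigma^2 (X1'X1)^{-1}.
emp_cov = np.cov(np.array(draws).T)             # 2x2 sample covariance
theo_cov = sigma**2 * np.linalg.inv(X1.T @ X1)  # theoretical covariance
print(np.round(emp_cov, 5))
print(np.round(theo_cov, 5))
```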

**(c) Proof that \( b_R = b_1 + Pb_2 \):**

In the unrestricted regression of \( y \) on \( X = (X_1 \quad X_2) \), the fitted decomposition is

\[ y = X_1b_1 + X_2b_2 + e \]

where \( b_1, b_2 \) are the OLS estimates and \( e \) is the residual vector, which satisfies \( X^Te = 0 \) and hence \( X_1^Te = 0 \).

The OLS estimator \( b_R \) for \( \beta_1 \) is:

\[ b_R = (X_1^TX_1)^{-1}X_1^Ty \]

Substituting \( y = X_1b_1 + X_2b_2 + e \):

\[ b_R = (X_1^TX_1)^{-1}X_1^T(X_1b_1 + X_2b_2 + e) \]

\[ b_R = (X_1^TX_1)^{-1}X_1^TX_1b_1 + (X_1^TX_1)^{-1}X_1^TX_2b_2 + (X_1^TX_1)^{-1}X_1^Te \]

Because \( X_1^Te = 0 \), the last term is exactly zero (not merely zero in expectation):

\[ b_R = b_1 + (X_1^TX_1)^{-1}X_1^TX_2b_2 \]

\[ b_R = b_1 + Pb_2 \]

Therefore, \( b_R = b_1 + Pb_2 \).
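Unlike parts (a) and (b), which hold in expectation, this identity holds exactly in every sample. Below is a minimal sketch verifying it on one simulated dataset (all names and values are illustrative, not the lecture data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])  # constant + regressor
X2 = rng.normal(size=(n, 2)) + X1[:, [1]]               # correlated with X1
y = X1 @ np.array([1.0, 2.0]) + X2 @ np.array([3.0, -1.0]) + rng.normal(size=n)

X = np.hstack([X1, X2])
b = np.linalg.lstsq(X, y, rcond=None)[0]                # unrestricted regression
b1, b2 = b[:2], b[2:]

bR = np.linalg.lstsq(X1, y, rcond=None)[0]              # restricted regression
P = np.linalg.solve(X1.T @ X1, X1.T @ X2)

print(np.allclose(bR, b1 + P @ b2))                     # True (up to float precision)
```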

**(d) Argue that \( P \) can be obtained by regressing each variable in \( X_2 \) on a constant term and \( X_1 \):**

The \( j \)-th column of \( P = (X_1^TX_1)^{-1}X_1^TX_2 \) is \( (X_1^TX_1)^{-1}X_1^Tx_{2j} \), where \( x_{2j} \) denotes the \( j \)-th column of \( X_2 \). This is exactly the OLS coefficient vector \( p_j \) from the regression

\[ x_{2j} = X_1 p_j + \text{error} \]

Hence \( P \) can be built column by column by regressing each variable in \( X_2 \) on \( X_1 \) (with the constant term included among the regressors).
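A short sketch of this, reusing `X1`, `X2`, and `P` from the simulated sample in part (c) (the constant column is already part of `X1` there):

```python
# Regress each column of X2 on X1; the stacked coefficient vectors
# reproduce P = (X1'X1)^{-1} X1'X2 column by column.
P_cols = np.column_stack([
    np.linalg.lstsq(X1, X2[:, j], rcond=None)[0]
    for j in range(X2.shape[1])
])
print(np.allclose(P_cols, P))                           # True
```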

**(e) Determine the values of \( P \) from Lecture 2.1:**

The values of \( P \) follow from the wage data of Lecture 2.1: regress each variable in \( X_2 \) on \( X_1 \), as argued in part (d), and collect the resulting coefficient vectors as the columns of \( P \). The exact numbers depend on the lecture's dataset and must be read off from the lecture material, so they are not reproduced here.

**(f) Numerical check of \( b_R = b_1 + Pb_2 \):**

To check the result numerically, compare \( b_R \) (the coefficients from the restricted regression in Lecture 2.1) with \( b_1 + Pb_2 \), computed from the unrestricted coefficients of Lecture 2.5 and the \( P \) matrix of part (e). Because the identity \( b_R = b_1 + Pb_2 \) holds exactly, the two should agree; any small discrepancy is due to rounding of the reported coefficients. The sketch below illustrates the size of such rounding errors.
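A hedged sketch, reusing `b1`, `b2`, `P`, and `bR` from part (c); rounding to three decimals mimics coefficients read off lecture output:

```python
# With rounded inputs, b1 + P b2 matches bR only approximately.
b1_r, b2_r, P_r = np.round(b1, 3), np.round(b2, 3), np.round(P, 3)
print("exact b_R:          ", bR)
print("rounded b_1 + P b_2:", b1_r + P_r @ b2_r)
```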
