
Sample Answers to Analytical Exercises of Chapter 8

Exercise 1 Suppose the moment conditions are $E[g(w_i,\theta)]=0$ with $g(w,\theta)=g_1(w)-g_2(w)\theta$. Set up $J_n(\theta)$ as in (??) and derive the asymptotic distribution of the corresponding GMM estimator of $\theta$.

Solution: This is parallel to the case where $g(w_i,\theta)=z_i(y_i-x_i'\theta)$. Define $G=E[g_2(w)]$ and $\Omega=E[g(w,\theta)g(w,\theta)']$; then
$$\sqrt{n}\left(\hat\theta_{GMM}-\theta\right)\xrightarrow{d}N(0,V),$$
where
$$V=\left(G'WG\right)^{-1}G'W\Omega WG\left(G'WG\right)^{-1}.$$
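As a numerical aside (a minimal sketch, not from the original text; it simply evaluates the sandwich formula above with numpy for any conformable $G$, $W$, $\Omega$):

import numpy as np

def gmm_avar(G, W, Omega):
    # bread = (G'WG)^{-1}, meat = G'W Omega W G, so V = bread @ meat @ bread
    bread = np.linalg.inv(G.T @ W @ G)
    meat = G.T @ W @ Omega @ W @ G
    return bread @ meat @ bread

# With W = Omega^{-1}, the sandwich collapses to (G'Omega^{-1}G)^{-1}, as in Exercise 2(i).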

Exercise 2 In the linear model estimated by GMM with general weight matrix $W$, the asymptotic variance of $\hat\beta_{GMM}$ is $V$ in (??).

(i) Let $V_0$ be this matrix when $W=\Omega^{-1}$. Show that $V_0=(G'\Omega^{-1}G)^{-1}$.

(ii) We want to show that for any $W$, $V-V_0\geq 0$. To do this, start by finding matrices $A$ and $B$ such that $V=A'\Omega A$ and $V_0=B'\Omega B$.

(iii) Show that $B'\Omega A=B'\Omega B$ and therefore $B'\Omega(A-B)=0$.

(iv) Use the expressions $V=A'\Omega A$, $A=B+(A-B)$, and $B'\Omega(A-B)=0$ to show that $V\geq V_0$.

Solution:

(i) $V_0=\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\Omega\,\Omega^{-1}G\left(G'\Omega^{-1}G\right)^{-1}=\left(G'\Omega^{-1}G\right)^{-1}$.

(ii) $A=WG\left(G'WG\right)^{-1}$ and $B=\Omega^{-1}G\left(G'\Omega^{-1}G\right)^{-1}$.

(iii)
$$B'\Omega A=\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\Omega WG\left(G'WG\right)^{-1}=\left(G'\Omega^{-1}G\right)^{-1}
=\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\Omega\,\Omega^{-1}G\left(G'\Omega^{-1}G\right)^{-1}=B'\Omega B,$$
so $B'\Omega(A-B)=0$.

(iv) $V-V_0=A'\Omega A-B'\Omega B=\left(B+(A-B)\right)'\Omega\left(B+(A-B)\right)-B'\Omega B=(A-B)'\Omega(A-B)\geq 0$.
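A quick numerical check (illustrative only; the matrices below are random draws, not from the exercise) that $V-V_0$ is positive semi-definite for an arbitrary positive definite weight matrix:

import numpy as np

rng = np.random.default_rng(0)
l, k = 5, 2
G = rng.normal(size=(l, k))
A = rng.normal(size=(l, l)); Omega = A @ A.T + l * np.eye(l)   # positive definite Omega
B = rng.normal(size=(l, l)); W = B @ B.T + np.eye(l)           # arbitrary p.d. weight

V = np.linalg.inv(G.T @ W @ G) @ (G.T @ W @ Omega @ W @ G) @ np.linalg.inv(G.T @ W @ G)
V0 = np.linalg.inv(G.T @ np.linalg.inv(Omega) @ G)
print(np.linalg.eigvalsh(V - V0).min() >= -1e-10)   # True: V - V0 is positive semi-definite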

Exercise 3 Show that when a new group of instrumental variables is added in, the optimal asymp-
totic variance matrix V0 will not increase. Discuss when the two asymptotic variance matrices will
be equal. (Hint: use the result in Exercise 2.)

Solution: Denote the original instruments as $z_1\in\mathbb{R}^{l_1}$, the new group of instruments as $z_2\in\mathbb{R}^{l_2}$, and
$$g(w_i,\theta)=\begin{pmatrix}g_1(w_i,\theta)\\ g_2(w_i,\theta)\end{pmatrix}=\begin{pmatrix}z_1u\\ z_2u\end{pmatrix}.$$
When the new group of instruments is added in, the optimal asymptotic variance matrix changes from $V_1=\left(G_1'\Omega_{11}^{-1}G_1\right)^{-1}$ to $V_0=\left(G'\Omega^{-1}G\right)^{-1}$, where
$$G=\begin{pmatrix}E[z_1x']\\ E[z_2x']\end{pmatrix}=\begin{pmatrix}G_1\\ G_2\end{pmatrix},\qquad
\Omega=\begin{pmatrix}E[z_1z_1'u^2]&E[z_1z_2'u^2]\\ E[z_2z_1'u^2]&E[z_2z_2'u^2]\end{pmatrix}=\begin{pmatrix}\Omega_{11}&\Omega_{12}\\ \Omega_{21}&\Omega_{22}\end{pmatrix}.$$
Define
$$W=\begin{pmatrix}\Omega_{11}^{-1}&0\\ 0&0\end{pmatrix}.$$
We can check that $W\Omega W=W$ and $G'WG=G_1'\Omega_{11}^{-1}G_1$, so
$$\left(G'WG\right)^{-1}\left(G'W\Omega WG\right)\left(G'WG\right)^{-1}=\left(G_1'\Omega_{11}^{-1}G_1\right)^{-1}.$$
From Exercise 2, $\left(G'\Omega^{-1}G\right)^{-1}\leq\left(G_1'\Omega_{11}^{-1}G_1\right)^{-1}$.

The equality would hold if $\Omega^{-1}=W$. From the partitioned matrix inversion formula,
$$\Omega^{-1}=\begin{pmatrix}\Omega_{11\cdot2}^{-1}&-\Omega_{11\cdot2}^{-1}\Omega_{12}\Omega_{22}^{-1}\\ -\Omega_{22\cdot1}^{-1}\Omega_{21}\Omega_{11}^{-1}&\Omega_{22\cdot1}^{-1}\end{pmatrix},$$
where $\Omega_{ii\cdot j}=\Omega_{ii}-\Omega_{ij}\Omega_{jj}^{-1}\Omega_{ji}$, $i,j=1,2$. Since $\Omega_{22\cdot1}>0$, $W$ cannot be equal to $\Omega^{-1}$. A sufficient condition for the equality to hold is $E[z_2x']=0$ and $E[z_1z_2'u^2]=0$ (in the homoskedastic case, the latter is equivalent to $E[z_1z_2']=0$). Intuitively, if $z_2$ is not correlated with the existing instruments and the covariates, then it is not correlated with the existing system at all and so will not provide any further information.

For a general "if and only if" condition under which $\left(G'\Omega^{-1}G\right)^{-1}=\left(G_1'\Omega_{11}^{-1}G_1\right)^{-1}$, we orthogonalize the two groups of moment conditions. Define
$$r_2(w_i,\theta)=g_2(w_i,\theta)-\Omega_{21}\Omega_{11}^{-1}g_1(w_i,\theta)=\left(-\Omega_{21}\Omega_{11}^{-1},\,I_{l_2}\right)g(w_i,\theta);$$
now $G$ changes to
$$\begin{pmatrix}G_1\\ G_2-\Omega_{21}\Omega_{11}^{-1}G_1\end{pmatrix}\equiv\begin{pmatrix}G_1\\ D_2\end{pmatrix}$$
and $\Omega$ changes to
$$E\left[\begin{pmatrix}g_1(w_i,\theta)\\ r_2(w_i,\theta)\end{pmatrix}\left(g_1(w_i,\theta)',\,r_2(w_i,\theta)'\right)\right]
=\begin{pmatrix}\Omega_{11}&0\\ 0&\Omega_{22}-\Omega_{21}\Omega_{11}^{-1}\Omega_{12}\end{pmatrix}
=\begin{pmatrix}\Omega_{11}&0\\ 0&\Omega_{22\cdot1}\end{pmatrix}.$$
These different formulations of the moment conditions generate the same efficient estimator of $\theta$: noticing that
$$\begin{pmatrix}g_1(w_i,\theta)\\ r_2(w_i,\theta)\end{pmatrix}=\begin{pmatrix}I_{l_1}&0\\ -\Omega_{21}\Omega_{11}^{-1}&I_{l_2}\end{pmatrix}g(w_i,\theta),$$
the original optimal weight matrix $\Omega^{-1}$ applied to $g(w_i,\theta)$ and the new optimal weight matrix
$$\left[\begin{pmatrix}I_{l_1}&0\\ -\Omega_{21}\Omega_{11}^{-1}&I_{l_2}\end{pmatrix}\begin{pmatrix}\Omega_{11}&\Omega_{12}\\ \Omega_{21}&\Omega_{22}\end{pmatrix}\begin{pmatrix}I_{l_1}&-\Omega_{11}^{-1}\Omega_{12}\\ 0&I_{l_2}\end{pmatrix}\right]^{-1}$$
applied to $\left(g_1(w_i,\theta)',\,r_2(w_i,\theta)'\right)'$ generate the same estimator of $\theta$. It is not hard to see that the inverse of the asymptotic variance matrix of this efficient estimator of $\theta$ is $G'\Omega^{-1}G=G_1'\Omega_{11}^{-1}G_1+D_2'\Omega_{22\cdot1}^{-1}D_2$. So $\left(G'\Omega^{-1}G\right)^{-1}=\left(G_1'\Omega_{11}^{-1}G_1\right)^{-1}$ if and only if $D_2'\Omega_{22\cdot1}^{-1}D_2=0$, or $D_2=0$, or $G_2=\Omega_{21}\Omega_{11}^{-1}G_1$ (in the homoskedastic case, this is equivalent to $E[z_2x']-E[z_2z_1']E[z_1z_1']^{-1}E[z_1x']=E\left[\left(z_2-E[z_2z_1']E[z_1z_1']^{-1}z_1\right)x'\right]=0$, which is in turn equivalent to $\Pi_2=0$ in the first-stage regression $x=\Pi_1z_1+\Pi_2z_2+v$ by the FWL theorem). For more discussion, see Breusch et al. (1999).

Exercise 4 Take the single equation
$$y=X\beta+u,\qquad E[u|Z]=0.$$
Assume $E[u_i^2|z_i]=\sigma^2$. Show that if $\hat\beta$ is estimated by GMM with weight matrix $W_n=(Z'Z)^{-1}$, then
$$\sqrt n\left(\hat\beta-\beta\right)\xrightarrow{d}N\left(0,\,\sigma^2\left(G'M^{-1}G\right)^{-1}\right),$$
where $G=E[z_ix_i']$ and $M=E[z_iz_i']$.

Solution: Consider the moment conditions $E[z_iu_i]=E[z_i(y_i-x_i'\beta)]=0$. Then the GMM estimator can be written as
$$\hat\beta=\beta+\left[\frac{X'Z}{n}\left(\frac{Z'Z}{n}\right)^{-1}\frac{Z'X}{n}\right]^{-1}\frac{X'Z}{n}\left(\frac{Z'Z}{n}\right)^{-1}\frac{Z'u}{n}.$$
By the LLN, $\frac{Z'X}{n}\xrightarrow{p}E[z_ix_i']=G$, $\frac{Z'Z}{n}\xrightarrow{p}E[z_iz_i']=M$, and $\frac{Z'u}{n}\xrightarrow{p}E[z_iu_i]=0$. By the CLT, $\sqrt n\,\frac{Z'u}{n}\xrightarrow{d}N\left(0,E[z_iz_i'u_i^2]\right)=N\left(0,\sigma^2M\right)$. Therefore, $\hat\beta\xrightarrow{p}\beta+\left(G'M^{-1}G\right)^{-1}G'M^{-1}\cdot0=\beta$, and
$$\sqrt n\left(\hat\beta-\beta\right)\xrightarrow{d}\left(G'M^{-1}G\right)^{-1}G'M^{-1}N\left(0,\sigma^2M\right)=N\left(0,\,\sigma^2\left(G'M^{-1}G\right)^{-1}\right).$$
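As a numerical illustration (a minimal sketch; the data-generating process below is hypothetical and not taken from the exercise), GMM with $W_n=(Z'Z)^{-1}$ is just 2SLS, and its estimated variance matches $\sigma^2(G'M^{-1}G)^{-1}/n$:

import numpy as np

rng = np.random.default_rng(1)
n, sigma = 500, 1.0
Z = rng.normal(size=(n, 3))                                  # l = 3 instruments
x = Z @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)      # k = 1 regressor
u = sigma * rng.normal(size=n)
y = x * 2.0 + u                                              # true beta = 2

X = x.reshape(-1, 1)
Wn = np.linalg.inv(Z.T @ Z)
beta_hat = np.linalg.solve(X.T @ Z @ Wn @ Z.T @ X, X.T @ Z @ Wn @ Z.T @ y)

G, M = Z.T @ X / n, Z.T @ Z / n
avar = sigma**2 * np.linalg.inv(G.T @ np.linalg.inv(M) @ G)  # sigma^2 (G'M^{-1}G)^{-1}
print(beta_hat, avar / n)                                    # estimate and its approximate variance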

Exercise 5 In Exercise 12 of Chapter 7, find the efficient GMM estimator of $\beta$ based on the moment condition $E[z_i(y_i-x_i\beta)]=0$. Does this differ from 2SLS and/or OLS?

Solution: Let us use the 2SLS estimator to obtain an estimate of the optimal weight matrix. Compute the 2SLS residuals $\hat u_i=y_i-x_i\hat\beta_{2SLS}$. Then the efficient GMM estimator is
$$\hat\beta=\arg\min_\beta\left[\sum_{i=1}^n\left(y_i-x_i\beta\right)\left(x_i,\,x_i^2\right)\right]\left[\sum_{i=1}^n\begin{pmatrix}x_i^2&x_i^3\\ x_i^3&x_i^4\end{pmatrix}\hat u_i^2\right]^{-1}\left[\sum_{i=1}^n\begin{pmatrix}x_i\\ x_i^2\end{pmatrix}\left(y_i-x_i\beta\right)\right]$$
$$=\left[\left(\sum_{i=1}^nx_i^2,\,\sum_{i=1}^nx_i^3\right)\left[\sum_{i=1}^n\begin{pmatrix}x_i^2&x_i^3\\ x_i^3&x_i^4\end{pmatrix}\hat u_i^2\right]^{-1}\begin{pmatrix}\sum_{i=1}^nx_i^2\\ \sum_{i=1}^nx_i^3\end{pmatrix}\right]^{-1}
\left(\sum_{i=1}^nx_i^2,\,\sum_{i=1}^nx_i^3\right)\left[\sum_{i=1}^n\begin{pmatrix}x_i^2&x_i^3\\ x_i^3&x_i^4\end{pmatrix}\hat u_i^2\right]^{-1}\begin{pmatrix}\sum_{i=1}^nx_iy_i\\ \sum_{i=1}^nx_i^2y_i\end{pmatrix}.$$
It differs from 2SLS/OLS. Even under homoskedasticity, it is different from them (although asymptotically equivalent).
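A two-step sketch of the computation above (the data-generating process and the instrument set $z_i=(x_i,x_i^2)'$ below are assumptions for illustration, not taken from Exercise 12 of Chapter 7):

import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
u = (0.5 + x**2) * rng.normal(size=n)           # heteroskedastic error with E[u|x] = 0
y = 1.5 * x + u                                 # true beta = 1.5

Z = np.column_stack([x, x**2])                  # moment conditions E[z_i (y_i - x_i b)] = 0
# Step 1: 2SLS, i.e., weight matrix (Z'Z)^{-1}
W1 = np.linalg.inv(Z.T @ Z)
b1 = (x @ Z @ W1 @ Z.T @ y) / (x @ Z @ W1 @ Z.T @ x)
# Step 2: efficient GMM with weight [sum z_i z_i' uhat_i^2]^{-1}
uhat = y - x * b1
W2 = np.linalg.inv(Z.T @ (Z * uhat[:, None]**2))
b2 = (x @ Z @ W2 @ Z.T @ y) / (x @ Z @ W2 @ Z.T @ x)
print(b1, b2)                                   # close but generally not identical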

Exercise 6 Take the model $y_i=x_i'\beta+u_i$ with $E[z_iu_i]=0$. Let $\hat u_i=y_i-x_i'\tilde\beta$ where $\tilde\beta$ is consistent for $\beta$ (e.g., a GMM estimator with an arbitrary weight matrix). Define the estimate of the optimal GMM weight matrix
$$W_n=\left(\frac{1}{n}\sum_{i=1}^nz_iz_i'\hat u_i^2\right)^{-1}.$$
Show that $W_n\xrightarrow{p}\Omega^{-1}$ where $\Omega=E[z_iz_i'u_i^2]$.

Solution: Note that $\hat u_i=u_i+x_i'\left(\beta-\tilde\beta\right)$, so
$$\hat u_i^2=u_i^2+2u_ix_i'\left(\beta-\tilde\beta\right)+\left(x_i'\left(\beta-\tilde\beta\right)\right)^2,$$
and
$$\frac{1}{n}\sum_{i=1}^nz_iz_i'\hat u_i^2=\frac{1}{n}\sum_{i=1}^nz_iz_i'u_i^2+\frac{2}{n}\sum_{i=1}^nz_iz_i'u_ix_i'\left(\beta-\tilde\beta\right)+\frac{1}{n}\sum_{i=1}^nz_iz_i'\left(x_i'\left(\beta-\tilde\beta\right)\right)^2.$$
The first term goes to $\Omega$ in probability by the LLN. As for the second and third terms, the consistency of $\tilde\beta$ implies
$$\left\|\frac{1}{n}\sum_{i=1}^nz_iz_i'u_ix_i'\left(\beta-\tilde\beta\right)\right\|\leq\left(\frac{1}{n}\sum_{i=1}^n\|z_i\|^2\|x_i\||u_i|\right)\left\|\beta-\tilde\beta\right\|\xrightarrow{p}E\left[\|z_i\|^2\|x_i\||u_i|\right]\cdot0=0,$$
where $\|\cdot\|$ is the Euclidean norm of a matrix, the inequality is from the triangle inequality and the Schwarz matrix inequality, and the convergence is from the WLLN and the CMT. To apply the WLLN, we need $E\left[\|z_i\|^2\|x_i\||u_i|\right]<\infty$, which is implied by
$$E\left[\|z_i\|^2\|x_i\||u_i|\right]\leq E\left[\|z_i\|^4\right]^{1/2}E\left[\|x_i\|^2|u_i|^2\right]^{1/2}\leq E\left[\|z_i\|^4\right]^{1/2}E\left[\|x_i\|^4\right]^{1/4}E\left[|u_i|^4\right]^{1/4}<\infty,$$
where both inequalities are from the Cauchy-Schwarz inequality, and we assume $E[\|z_i\|^4]<\infty$, $E[\|x_i\|^4]<\infty$ and $E[|u_i|^4]<\infty$. Similarly, we have
$$\left\|\frac{1}{n}\sum_{i=1}^nz_iz_i'\left(x_i'\left(\beta-\tilde\beta\right)\right)^2\right\|\leq\frac{1}{n}\sum_{i=1}^n\|z_i\|^2\|x_i\|^2\left\|\beta-\tilde\beta\right\|^2=\left\|\beta-\tilde\beta\right\|^2\frac{1}{n}\sum_{i=1}^n\|z_i\|^2\|x_i\|^2\xrightarrow{p}E\left[\|z_i\|^2\|x_i\|^2\right]\cdot0=0,$$
where $E\left[\|z_i\|^2\|x_i\|^2\right]<\infty$ by the Cauchy-Schwarz inequality. So $\frac{1}{n}\sum_{i=1}^nz_iz_i'\hat u_i^2\xrightarrow{p}\Omega$, and we have
$$W_n=\left(\frac{1}{n}\sum_{i=1}^nz_iz_i'\hat u_i^2\right)^{-1}\xrightarrow{p}\Omega^{-1}.$$
Exercise 7 Suppose we want to estimate $\sigma^2$ in Exercise 4. (i) Show that $n^{-1}\sum_{i=1}^n\hat u_i^2$ is consistent, where $\hat u_i=y_i-x_i'\hat\beta$. (ii) In the structural model
$$y=X_1\beta_1+X_2\beta_2+u,$$
$$X_2=X_1\Gamma_{12}+Z_2\Gamma_{22}+V,$$
show that $\frac{1}{n-l_2}\left(1,-\hat\beta_{2,2SLS}'\right)Y'M_ZY\left(1,-\hat\beta_{2,2SLS}'\right)'$ is also a consistent estimator of $\sigma^2$, where $Y=(y,X_2)$.

Solution: (i) This proof is similar to that of Theorem 4 in Chapter 5 with $x_i=1$.

(ii) $Y\left(1,-\hat\beta_{2,2SLS}'\right)'=y-X_2\hat\beta_{2,2SLS}=X_1\hat\beta_{1,2SLS}+\hat u$, where $\hat u$ is the 2SLS residual. Since $Z=[X_1,Z_2]$,
$$M_ZY\left(1,-\hat\beta_{2,2SLS}'\right)'=M_Z\hat u=M_Z\left(X_1\beta_1+X_2\beta_2+u-X_1\hat\beta_{1,2SLS}-X_2\hat\beta_{2,2SLS}\right)=M_ZX_2\left(\beta_2-\hat\beta_{2,2SLS}\right)+M_Zu.$$
As usual, $M_ZX_2\left(\beta_2-\hat\beta_{2,2SLS}\right)$ will not contribute to the probability limit, so we only need to show that
$$\frac{1}{n-l_2}u'M_Zu\xrightarrow{p}\sigma^2,$$
which is equivalent to showing that
$$\frac{1}{n}u'P_Zu\xrightarrow{p}0,\tag{1}$$
which is obvious from the moment condition $E[z_iu_i]=0$.

Exercise 8 The equation of interest is
$$y_i=m(x_i,\beta)+u_i,\qquad E[z_iu_i]=0.$$
The observed data are $(y_i,x_i,z_i)$; $z_i$ is $l\times1$ and $\beta$ is $k\times1$, $l\geq k$. Show how to construct an efficient GMM estimator for $\beta$.

Solution: The $l\times1$ moment conditions are $E[g_i(\beta)]=E\left[z_i\left(y_i-m(x_i,\beta)\right)\right]=0$. The optimal weighting matrix is $W_0=\left(E[g_i(\beta)g_i(\beta)']\right)^{-1}=\left(E\left[z_iz_i'\left(y_i-m(x_i,\beta)\right)^2\right]\right)^{-1}$, which can be consistently estimated by $W_n=\left(\frac{1}{n}\sum_{i=1}^nz_iz_i'\left(y_i-m(x_i,\tilde\beta)\right)^2\right)^{-1}$, where $\tilde\beta$ is a consistent estimator of $\beta$. In summary, we can run a two-step procedure:

Step 1. Find an initial consistent estimate $\hat\beta_{GMM}$, for example, by
$$\hat\beta_{GMM}=\arg\min_\beta\left(\sum_{i=1}^n\left(y_i-m(x_i,\beta)\right)z_i'\right)\left(\sum_{i=1}^n\left(y_i-m(x_i,\beta)\right)z_i\right).$$

Step 2. Find the efficient GMM estimate $\hat\beta$ by
$$\hat\beta=\arg\min_\beta\left(\frac{1}{n}\sum_{i=1}^n\left(y_i-m(x_i,\beta)\right)z_i'\right)\left(\frac{1}{n}\sum_{i=1}^nz_iz_i'\left(y_i-m(x_i,\hat\beta_{GMM})\right)^2\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n\left(y_i-m(x_i,\beta)\right)z_i\right).$$
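A minimal sketch of this two-step procedure (the functional form $m(x,\beta)=\exp(x\beta)$ and the data-generating process below are hypothetical choices made only for illustration):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 2000
z = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])  # l = 3
x = z[:, 1] + 0.5 * rng.normal(size=n)
y = np.exp(0.8 * x) + rng.normal(size=n)         # true beta = 0.8, k = 1

def gbar(b):                                      # sample moment (1/n) sum z_i (y_i - m(x_i, b))
    return z.T @ (y - np.exp(x * b)) / n

# Step 1: identity weight matrix gives an initial consistent estimate
step1 = minimize(lambda b: gbar(b[0]) @ gbar(b[0]), x0=[0.1], method="Nelder-Mead")
b0 = step1.x[0]

# Step 2: weight by the inverse of (1/n) sum z_i z_i' (y_i - m(x_i, b0))^2
e0 = y - np.exp(x * b0)
Wn = np.linalg.inv(z.T @ (z * e0[:, None]**2) / n)
step2 = minimize(lambda b: gbar(b[0]) @ Wn @ gbar(b[0]), x0=[b0], method="Nelder-Mead")
print(b0, step2.x[0])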

Exercise 9 Take the linear model
$$y_i=x_i'\beta+u_i,\qquad E[z_iu_i]=0,$$
and consider the GMM estimator $\hat\beta$ of $\beta$. Let
$$J_n=n\,g_n(\hat\beta)'\hat\Omega^{-1}g_n(\hat\beta)$$
denote the test of overidentifying restrictions. Show that $J_n\xrightarrow{d}\chi^2_{l-k}$ by demonstrating each of the following:

(i) Since $\Omega>0$, we can write $\Omega^{-1}=CC'$ and $\Omega=(C')^{-1}C^{-1}$.

(ii) $J_n=n\left(C'g_n(\hat\beta)\right)'\left(C'\hat\Omega C\right)^{-1}\left(C'g_n(\hat\beta)\right)$.

(iii) $C'g_n(\hat\beta)=D_nC'g_n(\beta_0)$ where
$$D_n=I_l-C'\,\frac{1}{n}Z'X\left[\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right]^{-1}\frac{1}{n}X'Z\,\hat\Omega^{-1}(C')^{-1},\qquad g_n(\beta_0)=\frac{1}{n}Z'u.$$

(iv) $D_n\xrightarrow{p}I_l-R(R'R)^{-1}R'$ where $R=C'E[z_ix_i']=C'G$.

(v) $n^{1/2}C'g_n(\beta_0)\xrightarrow{d}\mathcal{Z}\sim N(0,I_l)$.

(vi) $J_n\xrightarrow{d}\mathcal{Z}'\left(I_l-R(R'R)^{-1}R'\right)\mathcal{Z}$.

(vii) $\mathcal{Z}'\left(I_l-R(R'R)^{-1}R'\right)\mathcal{Z}\sim\chi^2_{l-k}$.

(Hint: $I_l-R(R'R)^{-1}R'$ is a projection matrix. Note also the difference in the notation $\mathcal{Z}$ and $Z$.)

Solution: (i) Since $\Omega^{-1}$ is symmetric, there exists a matrix $H$ such that (s.t.) $HH'=I$ and $\Omega^{-1}=H\Lambda H'$, where
$$\Lambda=\begin{pmatrix}\lambda_1&&0\\ &\ddots&\\ 0&&\lambda_l\end{pmatrix}$$
is the eigenvalue matrix. Since $\Omega^{-1}$ is positive definite, all eigenvalues are positive. Therefore,
$$\Lambda^{1/2}=\begin{pmatrix}\sqrt{\lambda_1}&&0\\ &\ddots&\\ 0&&\sqrt{\lambda_l}\end{pmatrix}$$
is well-defined. Set $C=H\Lambda^{1/2}$; then we have $\Omega^{-1}=H\Lambda^{1/2}\Lambda^{1/2}H'=CC'$.

(ii) $J_n=n\,g_n(\hat\beta)'CC^{-1}\hat\Omega^{-1}(C')^{-1}C'g_n(\hat\beta)=n\left(C'g_n(\hat\beta)\right)'\left(C'\hat\Omega C\right)^{-1}\left(C'g_n(\hat\beta)\right)$.

(iii) Since
$$\hat\beta-\beta_0=\left[\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right]^{-1}\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'u=\left[\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right]^{-1}\frac{1}{n}X'Z\,\hat\Omega^{-1}g_n(\beta_0),$$
we have
$$g_n(\hat\beta)=\frac{1}{n}Z'\left(y-X\hat\beta\right)=\frac{1}{n}Z'\left[\left(y-X\beta_0\right)-X\left(\hat\beta-\beta_0\right)\right]
=g_n(\beta_0)-\frac{1}{n}Z'X\left[\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right]^{-1}\frac{1}{n}X'Z\,\hat\Omega^{-1}g_n(\beta_0)$$
$$=\left[I_l-\frac{1}{n}Z'X\left(\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right)^{-1}\frac{1}{n}X'Z\,\hat\Omega^{-1}\right]g_n(\beta_0),$$
so
$$C'g_n(\hat\beta)=\left[I_l-C'\,\frac{1}{n}Z'X\left(\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right)^{-1}\frac{1}{n}X'Z\,\hat\Omega^{-1}(C')^{-1}\right]C'g_n(\beta_0)=D_nC'g_n(\beta_0).$$

(iv)
$$D_n=I_l-C'\,\frac{1}{n}Z'X\left(\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right)^{-1}\frac{1}{n}X'Z\,\hat\Omega^{-1}(C')^{-1}
\xrightarrow{p}I_l-C'E[z_ix_i']\left(E[x_iz_i']\Omega^{-1}E[z_ix_i']\right)^{-1}E[x_iz_i']\Omega^{-1}(C')^{-1}$$
$$=I_l-C'E[z_ix_i']\left(E[x_iz_i']CC'E[z_ix_i']\right)^{-1}E[x_iz_i']CC'(C')^{-1}=I_l-R\left(R'R\right)^{-1}R'.$$

(v) Since $\sqrt n\,g_n(\beta_0)=\frac{1}{\sqrt n}Z'u\xrightarrow{d}N(0,\Omega)$, where $\Omega=E[z_iz_i'u_i^2]$,
$$\sqrt n\,C'g_n(\beta_0)\xrightarrow{d}C'N(0,\Omega)=N\left(0,\,C'\left((C')^{-1}C^{-1}\right)C\right)=N(0,I_l).$$

(vi) By the results of (ii) and (iii),
$$J_n=\left(\sqrt n\,C'g_n(\beta_0)\right)'D_n'\left(C'\hat\Omega C\right)^{-1}D_n\left(\sqrt n\,C'g_n(\beta_0)\right)$$
$$\xrightarrow{d}N(0,I_l)'\left(I_l-R(R'R)^{-1}R'\right)'\left(C'\left((C')^{-1}C^{-1}\right)C\right)^{-1}\left(I_l-R(R'R)^{-1}R'\right)N(0,I_l)
=N(0,I_l)'\left(I_l-R(R'R)^{-1}R'\right)N(0,I_l).$$

(vii) Set $M=I_l-R(R'R)^{-1}R'$; then since $M$ is symmetric, there exists a matrix $H$ s.t. $M=H'\Lambda_MH$ and $HH'=I_l$, where $\Lambda_M$ denotes the eigenvalue matrix of $M$. Therefore,
$$N(0,I_l)'MN(0,I_l)=N(0,I_l)'H'\Lambda_MHN(0,I_l)=N(0,HH')'\Lambda_MN(0,HH')=N(0,I_l)'\Lambda_MN(0,I_l).$$
Now we use the following properties of the trace:
$$\mathrm{tr}(A+B)=\mathrm{tr}(A)+\mathrm{tr}(B),\qquad\mathrm{tr}(AB)=\mathrm{tr}(BA).$$
Since $R$ is $l\times k$,
$$\mathrm{tr}(M)=\mathrm{tr}(I_l)-\mathrm{tr}\left(R(R'R)^{-1}R'\right)=\mathrm{tr}(I_l)-\mathrm{tr}\left((R'R)^{-1}R'R\right)=\mathrm{tr}(I_l)-\mathrm{tr}(I_k)=l-k.$$
On the other hand,
$$\mathrm{tr}(M)=\mathrm{tr}\left(H'\Lambda_MH\right)=\mathrm{tr}\left(\Lambda_MHH'\right)=\mathrm{tr}\left(\Lambda_M\right),$$
thus we have $\mathrm{tr}(\Lambda_M)=\sum_{i=1}^l\lambda_i=l-k$. Moreover, since $M$ satisfies $MM=M$, the eigenvalues of $M$ are either 0 or 1. These facts imply that $\Lambda_M$ has the following form:
$$\Lambda_M=\begin{pmatrix}I_{l-k}&O\\ O&O_k\end{pmatrix},$$
where $O$ represents a zero matrix. Consequently, we get
$$J_n\xrightarrow{d}N(0,I_l)'\Lambda_MN(0,I_l)=\sum_{j=1}^{l-k}\mathcal{Z}_j^2\sim\chi^2_{l-k},$$
where $\mathcal{Z}_j\overset{i.i.d.}{\sim}N(0,1)$.
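A Monte Carlo sketch of this result (the design below is arbitrary and only for illustration): with $l=3$ valid instruments and $k=1$ regressor, the two-step $J_n$ should be approximately $\chi^2_2$.

import numpy as np

rng = np.random.default_rng(4)
n, reps, l = 300, 2000, 3
J = np.empty(reps)
for r in range(reps):
    Z = rng.normal(size=(n, l))
    x = Z.sum(axis=1) + rng.normal(size=n)
    u = rng.normal(size=n) * (1 + 0.5 * np.abs(Z[:, 0]))     # heteroskedastic error
    y = x + u                                                # true beta = 1
    # two-step efficient GMM for the scalar coefficient
    W1 = np.linalg.inv(Z.T @ Z / n)
    b1 = (x @ Z @ W1 @ Z.T @ y) / (x @ Z @ W1 @ Z.T @ x)
    e1 = y - x * b1
    Wi = np.linalg.inv(Z.T @ (Z * e1[:, None]**2) / n)
    b2 = (x @ Z @ Wi @ Z.T @ y) / (x @ Z @ Wi @ Z.T @ x)
    g = Z.T @ (y - x * b2) / n
    J[r] = n * g @ Wi @ g
print(np.mean(J), np.mean(J > 5.991))   # roughly 2 (= l - k) and roughly 0.05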

Exercise 10 In the homoskedastic linear model (??), show that (i) $J_n=nR_u^2=\frac{\hat u'P_{\tilde Z_2}\hat u}{\hat u'\hat u/n}$, where $R_u^2$ is the uncentered $R^2$ in the regression (??), and $\tilde Z_2=M_{X_1}Z_2$; (ii) $l_2F$ has the same asymptotic distribution as $J_n$.

Solution: (i) In the homoskedastic model,
$$J_n=n\left(\frac{1}{n}\sum_{i=1}^n\hat u_iz_i'\right)\left(\hat\sigma_u^2\,\frac{1}{n}\sum_{i=1}^nz_iz_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^nz_i\hat u_i\right)=\frac{\hat u'Z(Z'Z)^{-1}Z'\hat u}{\hat u'\hat u/n}=nR_u^2.$$
The numerator is the projection of $\hat u$ onto $\mathrm{span}(Z)$. Since $\hat u$ is orthogonal to $X_1$, this is equivalent to the projection onto $\tilde Z_2=M_{X_1}Z_2$. In other words, $J_n=\frac{\hat u'P_{\tilde Z_2}\hat u}{\hat u'\hat u/n}$. Note also that $\hat u'Z(Z'Z)^{-1}Z'\hat u$ is the sum of squared residuals of the "second-stage" regression of the fitted $y$ ($P_Zy$) on the fitted $X$ ($P_ZX$).

(ii) From Theorem 4 of Chapter 3,
$$l_2F=\frac{SSR_R-SSR_U}{SSR_U/(n-l)}=\frac{\hat u'\hat u-\hat u'M_Z\hat u}{\hat u'M_Z\hat u/(n-l)}=\frac{\hat u'P_Z\hat u}{\hat u'M_Z\hat u/(n-l)}.$$
The numerator of $l_2F$ is the same as that of $J_n$, so as long as $\hat u'M_Z\hat u/(n-l)$ has the same probability limit as $\hat u'\hat u/n$, we are done. This is equivalent to showing that $\hat u'P_Z\hat u/n\xrightarrow{p}0$, which is exactly (1).

Exercise 11 We use the same setup and notation as in Exercise 9 except that $E\left[z\left(y-x'\beta_n\right)\right]=E[zu]=\delta/\sqrt n$. Show the following results.

(i) $\sqrt n\left(\hat\beta-\beta_n\right)\xrightarrow{d}N\left(\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\delta,\,V\right)$, where $V=\left(G'\Omega^{-1}G\right)^{-1}$.

(ii) $J_n\xrightarrow{d}\chi^2_{l-k}(\mu)$, where $\mu=\delta'\Omega^{-1}\left(\Omega-GVG'\right)\Omega^{-1}\delta$.

(iii) $\mu=0$ if and only if $\delta\in\mathrm{span}(G)$.

Solution: (i) Note that
$$\sqrt n\left(\hat\beta-\beta_n\right)=\left[\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right]^{-1}\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{\sqrt n}Z'u$$
$$=\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\frac{1}{\sqrt n}\sum_{i=1}^n\left\{\left(z_iu_i-E[z_iu_i]\right)+E[z_iu_i]\right\}+o_p(1)$$
$$\xrightarrow{d}\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}N(0,\Omega)+\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\delta
=N\left(\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\delta,\;\left(G'\Omega^{-1}G\right)^{-1}\right),$$
where the second equality is due to $\hat\Omega\xrightarrow{p}\Omega$ even under the local alternative (see Exercise 18 of Chapter 5), and the convergence uses $\sqrt n\,E[z_iu_i]=\delta$.

(ii) Repeating the analysis in Exercise 9, we can see that
$$J_n=n\left(C'g_n(\hat\beta)\right)'\left(C'\hat\Omega C\right)^{-1}\left(C'g_n(\hat\beta)\right),\qquad C'g_n(\hat\beta)=D_nC'g_n(\beta_n),\qquad D_n\xrightarrow{p}I_l-R(R'R)^{-1}R'\text{ with }R=C'G$$
still hold; the only difference is that
$$n^{1/2}C'g_n(\beta_n)\xrightarrow{d}N\left(C'\delta,\,I_l\right).$$
As a result,
$$J_n\xrightarrow{d}N\left(C'\delta,I_l\right)'\left(I_l-R(R'R)^{-1}R'\right)N\left(C'\delta,I_l\right)\sim\chi^2_{l-k}(\mu),$$
where $\mu=\delta'C\left(I_l-R(R'R)^{-1}R'\right)C'\delta=\delta'\Omega^{-1}\left(\Omega-GVG'\right)\Omega^{-1}\delta$.

(iii) Given that $\mu=\left\|\left(I_l-R(R'R)^{-1}R'\right)C'\delta\right\|^2$, $\mu=0$ is equivalent to $\left(I_l-R(R'R)^{-1}R'\right)C'\delta=0$. Because $\mathrm{rank}\left(\left(I_l-R(R'R)^{-1}R'\right)C'\right)=l-k$, the dimension of the set of $\delta$ such that $\mu=0$ must be $k$. Because $\left(I_l-R(R'R)^{-1}R'\right)C'G=0$ and $\dim\left(\mathrm{span}(G)\right)=k$, the result follows.

Exercise 12 Take the linear model
$$y_i=x_i'\beta+u_i,\qquad E[z_iu_i]=0,$$
and consider the unrestricted GMM estimator $\hat\beta$ and the restricted GMM estimator $\tilde\beta$ of $\beta$ under the linear constraints $R'\beta=c$. Define
$$J_n(\beta)=n\,g_n(\beta)'\hat\Omega^{-1}g_n(\beta),$$
and then $\hat\beta=\arg\min_\beta J_n(\beta)$ and $\tilde\beta=\arg\min_{R'\beta=c}J_n(\beta)$. Define the Lagrangian
$$L(\beta,\lambda)=\frac{1}{2}J_n(\beta)+\lambda'\left(R'\beta-c\right).$$

(i) Show that
$$\tilde\beta=\hat\beta-\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\left[R'\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\right]^{-1}\left(R'\hat\beta-c\right),$$
$$\hat\lambda=\frac{1}{n}\left[R'\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\right]^{-1}\left(R'\hat\beta-c\right).$$

(ii) Derive the asymptotic distribution of $\tilde\beta$ under the null.

(iii) Show that
$$J_n(\tilde\beta)=J_n(\hat\beta)+\frac{1}{n}\left(\hat\beta-\tilde\beta\right)'X'Z\hat\Omega^{-1}Z'X\left(\hat\beta-\tilde\beta\right).$$

(iv) Show that the distance statistic is equal to the Wald statistic.

Solution: (i) The FOC with respect to $\beta$ for the Lagrangian problem is
$$-\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}\left(Z'y-Z'X\tilde\beta\right)+R\lambda=0,$$
so
$$\tilde\beta=\hat\beta-\left(\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right)^{-1}R\lambda.$$
Substitute this into the constraint $R'\beta=c$ to obtain the expression for $\hat\lambda$ in the question. Then substitute this expression for $\hat\lambda$ back into the FOC to obtain the expression for $\tilde\beta$ in the question.

(ii) When $R'\beta=c$,
$$\sqrt n\left(\tilde\beta-\beta\right)=\sqrt n\left(\hat\beta-\beta\right)-\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\left[R'\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\right]^{-1}R'\sqrt n\left(\hat\beta-\beta\right)$$
$$=\left\{I_k-\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\left[R'\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\right]^{-1}R'\right\}\sqrt n\left(\hat\beta-\beta\right)$$
$$=\left[I_k-\hat VR\left(R'\hat VR\right)^{-1}R'\right]\sqrt n\left(\hat\beta-\beta\right)\xrightarrow{d}\left[I_k-VR\left(R'VR\right)^{-1}R'\right]N(0,V),$$
where $\hat V=\left(\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}Z'X\right)^{-1}\xrightarrow{p}V=\left(E[x_iz_i']\Omega^{-1}E[z_ix_i']\right)^{-1}$.

(iii)
$$J_n(\tilde\beta)=n\left(\frac{1}{n}Z'y-\frac{1}{n}Z'X\tilde\beta\right)'\hat\Omega^{-1}\left(\frac{1}{n}Z'y-\frac{1}{n}Z'X\tilde\beta\right)$$
$$=n\left(\frac{1}{n}Z'y-\frac{1}{n}Z'X\hat\beta+\frac{1}{n}Z'X\left(\hat\beta-\tilde\beta\right)\right)'\hat\Omega^{-1}\left(\frac{1}{n}Z'y-\frac{1}{n}Z'X\hat\beta+\frac{1}{n}Z'X\left(\hat\beta-\tilde\beta\right)\right)$$
$$=J_n(\hat\beta)+\frac{1}{n}\left(\hat\beta-\tilde\beta\right)'X'Z\hat\Omega^{-1}Z'X\left(\hat\beta-\tilde\beta\right),$$
where the third equality uses the FOC for the unrestricted GMM:
$$\frac{1}{n}X'Z\,\hat\Omega^{-1}\,\frac{1}{n}\left(Z'y-Z'X\hat\beta\right)=0.$$

(iv) What needs to be shown is that $\frac{1}{n}\left(\hat\beta-\tilde\beta\right)'X'Z\hat\Omega^{-1}Z'X\left(\hat\beta-\tilde\beta\right)$ equals the Wald statistic. But this is immediate from substituting the expression for $\tilde\beta$ in (i):
$$\frac{1}{n}\left(\hat\beta-\tilde\beta\right)'X'Z\hat\Omega^{-1}Z'X\left(\hat\beta-\tilde\beta\right)$$
$$=\frac{1}{n}\left(R'\hat\beta-c\right)'\left[R'\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\right]^{-1}R'\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}\left(X'Z\hat\Omega^{-1}Z'X\right)\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\left[R'\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\right]^{-1}\left(R'\hat\beta-c\right)$$
$$=\frac{1}{n}\left(R'\hat\beta-c\right)'\left[R'\left(X'Z\hat\Omega^{-1}Z'X\right)^{-1}R\right]^{-1}\left(R'\hat\beta-c\right)
=n\left(R'\hat\beta-c\right)'\left[R'\hat VR\right]^{-1}\left(R'\hat\beta-c\right).$$

Exercise 13 Suppose $\beta\in\mathbb{R}$, $H_0$ is $\beta-c=0$, and the curvature of $Q_n(\beta)$ at $\tilde\beta$ is $\tilde H$ and at $\hat\beta$ is $\hat H$. Show that (i) $W_n$ is twice the difference in $Q_n(\beta)$ at $\tilde\beta$ and $\hat\beta$, using a quadratic approximation to $Q_n(\beta)$ at $\hat\beta$; (ii) $LM_n$ is twice the difference in $Q_n(\beta)$ at $\tilde\beta$ and $\hat\beta$, using a quadratic approximation to $Q_n(\beta)$ at $\tilde\beta$.

Solution: (i) The quadratic approximation to $Q_n(\beta)$ at $\hat\beta$ is
$$\frac{1}{2}\hat H\left(\beta-\hat\beta\right)^2+Q_n(\hat\beta).$$
The height difference of this approximation at $\tilde\beta=c$ and at $\hat\beta$ is $\frac{1}{2}\hat H\left(c-\hat\beta\right)^2$, which is equal to $W_n/2$ by noticing that $r(\hat\beta)=\hat\beta-c$ and $R(\hat\beta)=1$.

(ii) The quadratic approximation to $Q_n(\beta)$ at $\tilde\beta$ is
$$\frac{1}{2}\tilde H\left(\beta-\tilde\beta\right)^2+\frac{\partial Q_n(\tilde\beta)}{\partial\beta}\left(\beta-\tilde\beta\right)+Q_n(\tilde\beta).$$
The height difference of this approximation at $\tilde\beta$ and at its extremum $\tilde\beta-\tilde H^{-1}\frac{\partial Q_n(\tilde\beta)}{\partial\beta}$ is $\frac{1}{2}\tilde H^{-1}\left(\frac{\partial Q_n(\tilde\beta)}{\partial\beta}\right)^2$, which is equal to $LM_n/2$ by noticing that $\tilde H^{-1}=1/\tilde H$ when $\tilde H$ is a scalar.

Exercise 14 When $\sigma_u^2\equiv E[u^2]\neq1$, what is the form of the Wald statistic, and what is its asymptotic distribution?

Solution: If $E[u^2]\neq1$, then
$$W_n=\frac{u'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'Y_2\left[Y_2'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'Y_2\right]^{-1}Y_2'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'u}{\hat\sigma_u^2},$$
where $\hat\sigma_u^2=\frac{1}{n-k}\hat u'\hat u$ with $\hat u=\tilde y-\tilde Y_2\hat\beta_2$, $\tilde y=M_{Z_1}y$, $\tilde Y_2=M_{Z_1}Y_2$, and
$$\hat\beta_2=\left[Y_2'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'Y_2\right]^{-1}Y_2'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'\tilde y.$$
Now,
$$\frac{1}{\sqrt n}\tilde Z_2'u\xrightarrow{d}Q^{1/2}\psi_0\sigma_u,\qquad
\frac{1}{\sqrt n}\tilde Z_2'Y_2=\frac{1}{n}\tilde Z_2'\tilde Z_2\,C+\frac{1}{\sqrt n}\tilde Z_2'V_2\xrightarrow{d}Q^{1/2}\bar C\Sigma^{1/2}+Q^{1/2}\psi_2\Sigma^{1/2},$$
where $\bar C\equiv Q^{1/2}C\Sigma^{-1/2}$,
$$\begin{pmatrix}\psi_0\\ \mathrm{vec}(\psi_2)\end{pmatrix}\sim N\left(0,\bar\Sigma\otimes I_{l_2}\right),\qquad
\bar\Sigma=\begin{pmatrix}1&\rho'\\ \rho&I_{k_2}\end{pmatrix},\qquad \rho=\Sigma^{-1/2}E[v_2u]\sigma_u^{-1}.$$
So, under the null,
$$\hat\beta_2=\left[Y_2'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'Y_2\right]^{-1}Y_2'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'u
\xrightarrow{d}\sigma_u\Sigma^{-1/2}\left[\left(\bar C+\psi_2\right)'\left(\bar C+\psi_2\right)\right]^{-1}\left(\bar C+\psi_2\right)'\psi_0\equiv\sigma_u\Sigma^{-1/2}\eta.$$
As a result,
$$\hat u=\tilde y-\tilde Y_2\hat\beta_2=M_{Z_1}u-\left(n^{-1/2}\tilde Z_2C+M_{Z_1}V_2\right)\hat\beta_2,$$
and
$$\hat\sigma_u^2=\frac{1}{n-k}\hat u'\hat u=\frac{1}{n}\left(u-V_2\hat\beta_2\right)'\left(u-V_2\hat\beta_2\right)+o_p(1)
\xrightarrow{d}\sigma_u^2-2\sigma_uE[uv_2']\Sigma^{-1/2}\eta+\sigma_u^2\eta'\eta=\sigma_u^2\left(1-2\rho'\eta+\eta'\eta\right).$$
Combining all these elements, we have
$$W_n\xrightarrow{d}\frac{\psi_0'\left(\bar C+\psi_2\right)\left[\left(\bar C+\psi_2\right)'\left(\bar C+\psi_2\right)\right]^{-1}\left(\bar C+\psi_2\right)'\psi_0}{1-2\rho'\eta+\eta'\eta}.$$

Exercise 15 In the LIML estimator, what is the asymptotic distribution of $n(\hat\kappa-1)$ when $\Gamma_{22}=n^{-1/2}C$?

Solution: From the analysis in Appendix B of Chapter 7, we can see that $n(\hat\kappa-1)$ is the smallest root $\nu$ of
$$\left|Y'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'Y-\nu\,\frac{1}{n}Y'M_ZY\right|=0,$$
and
$$M_ZY\bar\beta=M_Zu,\qquad M_1Y\bar\beta=M_1u,$$
where $\bar\beta=\left(1,-\beta_2'\right)'$. Note that this smallest root is the same as the smallest root of
$$\left|\begin{pmatrix}\sigma_u^{-1}&0'\\ 0&\Sigma^{-1/2}\end{pmatrix}\begin{pmatrix}1&-\beta_2'\\ 0&I_{k_2}\end{pmatrix}\left[Y'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'Y-\nu\,\frac{1}{n}Y'M_ZY\right]\begin{pmatrix}1&0'\\ -\beta_2&I_{k_2}\end{pmatrix}\begin{pmatrix}\sigma_u^{-1}&0'\\ 0&\Sigma^{-1/2}\end{pmatrix}\right|\tag{2}$$
$$=\left|\begin{pmatrix}\sigma_u^{-1}&0'\\ 0&\Sigma^{-1/2}\end{pmatrix}\left[(u,Y_2)'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'(u,Y_2)-\nu\,\frac{1}{n}(u,V_2)'M_Z(u,V_2)\right]\begin{pmatrix}\sigma_u^{-1}&0'\\ 0&\Sigma^{-1/2}\end{pmatrix}\right|=0.$$
From the main text and the last exercise,
$$\frac{1}{n}\tilde Z_2'\tilde Z_2\xrightarrow{p}Q,\qquad
\frac{1}{\sqrt n}\tilde Z_2'u\xrightarrow{d}Q^{1/2}\psi_0\sigma_u,\qquad
\frac{1}{\sqrt n}\tilde Z_2'Y_2\xrightarrow{d}Q^{1/2}\left(\bar C+\psi_2\right)\Sigma^{1/2}.$$
Also,
$$\frac{1}{n}u'M_Zu=\frac{1}{n}u'u-\frac{1}{n}u'Z\left(\frac{1}{n}Z'Z\right)^{-1}\frac{1}{n}Z'u\xrightarrow{p}\sigma_u^2,$$
$$\frac{1}{n}u'M_ZV_2=\frac{1}{n}u'V_2-\frac{1}{n}u'Z\left(\frac{1}{n}Z'Z\right)^{-1}\frac{1}{n}Z'V_2\xrightarrow{p}E[uv_2'],$$
$$\frac{1}{n}V_2'M_ZV_2=\frac{1}{n}V_2'V_2-\frac{1}{n}V_2'Z\left(\frac{1}{n}Z'Z\right)^{-1}\frac{1}{n}Z'V_2\xrightarrow{p}\Sigma.$$
As a result,
$$(u,Y_2)'\tilde Z_2\left(\tilde Z_2'\tilde Z_2\right)^{-1}\tilde Z_2'(u,Y_2)-\nu\,\frac{1}{n}(u,V_2)'M_Z(u,V_2)$$
$$\xrightarrow{d}\begin{pmatrix}\sigma_u&0'\\ 0&\Sigma^{1/2}\end{pmatrix}\left(\psi_0,\bar C+\psi_2\right)'\left(\psi_0,\bar C+\psi_2\right)\begin{pmatrix}\sigma_u&0'\\ 0&\Sigma^{1/2}\end{pmatrix}-\nu\begin{pmatrix}\sigma_u^2&E[uv_2']\\ E[v_2u]&\Sigma\end{pmatrix},$$
and the expression in (2) converges in distribution to
$$\left|\left(\psi_0,\bar C+\psi_2\right)'\left(\psi_0,\bar C+\psi_2\right)-\nu\,\bar\Sigma\right|,$$
where $\bar\Sigma$ is defined in the last exercise. By the CMT, $n(\hat\kappa-1)$ converges in distribution to the smallest root of $\left|\left(\psi_0,\bar C+\psi_2\right)'\left(\psi_0,\bar C+\psi_2\right)-\nu\,\bar\Sigma\right|=0$.

Exercise 16 (i) Write out the objective function of the CUE in the linear homoskedastic endogenous model. (ii) Show that this CUE is equivalent to the LIML estimator. (iii) Show that if $g_i(\beta)$ is replaced by $g_i(\beta)-\bar g_n(\beta)$ in the weight matrix of $J_n(\beta)$, then the new objective function $\tilde J_n(\beta)$ satisfies $\tilde J_n(\beta)=J_n(\beta)/\left(1-J_n(\beta)/n\right)$, or equivalently, $J_n(\beta)=\tilde J_n(\beta)/\left(1+\tilde J_n(\beta)/n\right)$.

Solution: (i) and (ii). In the linear homoskedastic endogenous model, $\hat\Omega(\beta)=\hat\sigma^2(\beta)\,\frac{Z'Z}{n}$ with $\hat\sigma^2(\beta)=n^{-1}(y-X\beta)'(y-X\beta)$, so
$$J_n(\beta)=\frac{(y-X\beta)'P_Z(y-X\beta)}{n^{-1}(y-X\beta)'(y-X\beta)}=n\left(1+AR(\beta)^{-1}\right)^{-1},$$
where
$$AR(\beta)=\frac{(y-X\beta)'P_Z(y-X\beta)}{(y-X\beta)'M_Z(y-X\beta)}.$$
So the minimizer of $J_n(\beta)$ is the same as the minimizer of $\frac{(y-X\beta)'P_Z(y-X\beta)}{(y-X\beta)'M_Z(y-X\beta)}$, which is in turn the same as the minimizer of $\frac{(y-X\beta)'P_Z(y-X\beta)}{(y-X\beta)'M_Z(y-X\beta)}+1=\frac{(y-X\beta)'(y-X\beta)}{(y-X\beta)'M_Z(y-X\beta)}$, that is, the LIML estimator.
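A grid-search sketch of this equivalence for a scalar $\beta$ (the simulated data below are a hypothetical design used only to illustrate that the two criteria share the same minimizer):

import numpy as np

rng = np.random.default_rng(7)
n = 300
Z = rng.normal(size=(n, 3))
V = rng.normal(size=n)
x = Z @ np.array([0.5, 0.5, 0.5]) + V
y = x * 1.0 + 0.7 * V + rng.normal(size=n)        # endogenous regressor, true beta = 1

Pz = Z @ np.linalg.inv(Z.T @ Z) @ Z.T
grid = np.linspace(0.0, 2.0, 2001)
def resid(b): return y - x * b
J    = [n * (resid(b) @ Pz @ resid(b)) / (resid(b) @ resid(b)) for b in grid]        # CUE objective
LIML = [(resid(b) @ resid(b)) / (resid(b) @ (resid(b) - Pz @ resid(b))) for b in grid] # LIML variance ratio
print(grid[np.argmin(J)], grid[np.argmin(LIML)])  # same grid point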
(iii) Recall that if $T=A+CBD$, then $T^{-1}=A^{-1}-A^{-1}CB\left(B+BDA^{-1}CB\right)^{-1}BDA^{-1}$. Let $A=\frac{1}{n}\sum_{i=1}^ng_i(\beta)g_i(\beta)'$, $C=\bar g_n(\beta)$, $D=\bar g_n(\beta)'$ and $B=-1$; then
$$\left(\frac{1}{n}\sum_{i=1}^n\left(g_i(\beta)-\bar g_n(\beta)\right)\left(g_i(\beta)-\bar g_n(\beta)\right)'\right)^{-1}=\left(A-\bar g_n(\beta)\bar g_n(\beta)'\right)^{-1}
=A^{-1}+A^{-1}\bar g_n(\beta)\left(1-\bar g_n(\beta)'A^{-1}\bar g_n(\beta)\right)^{-1}\bar g_n(\beta)'A^{-1}.$$
As a result, with $J_n(\beta)=n\,\bar g_n(\beta)'A^{-1}\bar g_n(\beta)$ and $\tilde J_n(\beta)=n\,\bar g_n(\beta)'\left(A-\bar g_n(\beta)\bar g_n(\beta)'\right)^{-1}\bar g_n(\beta)$,
$$\frac{\tilde J_n(\beta)}{n}=\bar g_n(\beta)'A^{-1}\bar g_n(\beta)+\frac{\left(\bar g_n(\beta)'A^{-1}\bar g_n(\beta)\right)^2}{1-\bar g_n(\beta)'A^{-1}\bar g_n(\beta)}
=\frac{\bar g_n(\beta)'A^{-1}\bar g_n(\beta)}{1-\bar g_n(\beta)'A^{-1}\bar g_n(\beta)}=\frac{J_n(\beta)/n}{1-J_n(\beta)/n},$$
so $\tilde J_n(\beta)=J_n(\beta)/\left(1-J_n(\beta)/n\right)$, or equivalently $J_n(\beta)=\tilde J_n(\beta)/\left(1+\tilde J_n(\beta)/n\right)$. In particular, $\tilde J_n(\beta)$ is a monotone transformation of $J_n(\beta)$, so the two objective functions have the same minimizer.

Exercise 17 Define $P_{ij}$ as the $ij$th element of $P_Z$. Show that (i) $z_i'\hat\Gamma_{(-i)}=\frac{z_i'\hat\Gamma-P_{ii}x_i'}{1-P_{ii}}$; (ii) $\hat\beta_{JIVE2}$ can be rewritten as $\left(\sum_{i\neq j}x_iP_{ij}x_j'\right)^{-1}\left(\sum_{i\neq j}x_iP_{ij}y_j\right)$.

Solution: (i) From Exercise 14 of Chapter 3, $\hat\Gamma_{(-i)}=\hat\Gamma-\left(1-P_{ii}\right)^{-1}\left(Z'Z\right)^{-1}z_i\left(x_i'-z_i'\hat\Gamma\right)$. So
$$z_i'\hat\Gamma_{(-i)}=z_i'\hat\Gamma-\frac{P_{ii}\left(x_i'-z_i'\hat\Gamma\right)}{1-P_{ii}}
=\frac{z_i'\hat\Gamma-P_{ii}z_i'\hat\Gamma-P_{ii}x_i'+P_{ii}z_i'\hat\Gamma}{1-P_{ii}}
=\frac{z_i'\hat\Gamma-P_{ii}x_i'}{1-P_{ii}}.$$

(ii) We only show that $\tilde X'y=\sum_{i\neq j}x_iP_{ij}y_j$ since $\tilde X'X=\sum_{i\neq j}x_iP_{ij}x_j'$ can be shown similarly:
$$\tilde X'y=\sum_{i=1}^n\hat\Gamma_{(-i)}'z_iy_i=\sum_{i=1}^nX_{(-i)}'Z_{(-i)}\left(Z'Z\right)^{-1}z_iy_i
=\sum_{i=1}^n\left(X'Z-x_iz_i'\right)\left(Z'Z\right)^{-1}z_iy_i=\sum_{i\neq j}x_iP_{ij}y_j.$$
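A brief sketch of part (ii) for a scalar regressor (the data-generating design below is assumed for illustration): the double sum over $i\neq j$ is just the full quadratic form with the diagonal of $P_Z$ removed.

import numpy as np

rng = np.random.default_rng(6)
n = 200
Z = rng.normal(size=(n, 4))
V = rng.normal(size=n)
x = Z @ np.array([0.3, 0.3, 0.3, 0.3]) + V
u = 0.8 * V + rng.normal(size=n)                 # endogeneity through V
y = x * 1.0 + u

P = Z @ np.linalg.inv(Z.T @ Z) @ Z.T
P0 = P - np.diag(np.diag(P))                     # set P_ii = 0, leaving only the i != j terms
beta_jive2 = (x @ P0 @ y) / (x @ P0 @ x)         # (sum_{i!=j} x_i P_ij x_j)^{-1} sum_{i!=j} x_i P_ij y_j
beta_2sls = (x @ P @ y) / (x @ P @ x)
print(beta_jive2, beta_2sls)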

Exercise 18 (Empirical)

Exercise 19 Show that $N\left(0,\bar V\right)'\Omega^{-1}N\left(0,\bar V\right)\sim\chi^2_{l-k}$.
Solution: Recall that $\bar V=\Omega-G\left(G'\Omega^{-1}G\right)^{-1}G'$. So
$$N\left(0,\bar V\right)'\Omega^{-1}N\left(0,\bar V\right)=N\left(0,\Omega^{-1/2}\bar V\Omega^{-1/2}\right)'N\left(0,\Omega^{-1/2}\bar V\Omega^{-1/2}\right)$$
$$=N\left(0,I_l-\Omega^{-1/2}G\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1/2}\right)'N\left(0,I_l-\Omega^{-1/2}G\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1/2}\right)$$
$$=N(0,I_l)'\left[I_l-\Omega^{-1/2}G\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1/2}\right]N(0,I_l).$$
Note that $\Omega^{-1/2}G\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1/2}$ is a projection matrix onto $\mathrm{span}\left(\Omega^{-1/2}G\right)$, so by the arguments in Exercise 9(vii),
$$N\left(0,\bar V\right)'\Omega^{-1}N\left(0,\bar V\right)\sim\chi^2_{l-k}$$
since $\mathrm{tr}\left(I_l-\Omega^{-1/2}G\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1/2}\right)=l-\mathrm{tr}\left(\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1/2}\Omega^{-1/2}G\right)=l-k$.
