Econometric Analysis MT Official Problem Set Solution 5
KAMILA NOWAKOWICZ
Rényi's inequality
Let $X_i$ be a sequence of zero mean independent random variables with variance $\sigma_i^2$. If $c_1, c_2, \dots$ is a non-increasing sequence of positive constants, then for any $m, n$ with $m < n$ and arbitrary $\varepsilon > 0$:
$$P\left(\sup_{m \le k \le n} c_k |X_1 + \dots + X_k| \ge \varepsilon\right) \le \frac{1}{\varepsilon^2}\left(c_m^2 \sum_{i=1}^{m} \sigma_i^2 + \sum_{i=m+1}^{n} c_i^2 \sigma_i^2\right).$$
Solution. From the setup of this question we can guess that we will need to use the following definition of almost sure convergence:
$$Z_n \xrightarrow{a.s.} Z \quad \text{if } \forall \varepsilon > 0: \quad \lim_{m\to\infty} P\left(\sup_{m \le k} |Z_k - Z| > \varepsilon\right) = 0.$$
Let $Y_i = X_i - \mu_i$ and $\bar{Y}_n = \frac{1}{n}\sum_{i=1}^{n} Y_i$. Note that $Y_i$ has mean zero and $V(X_i) = V(X_i - \mu_i) = V(Y_i)$. We want to show that $\bar{Y}_n \xrightarrow{a.s.} 0$. Pick any $\varepsilon > 0$. We have almost sure convergence if:
$$P\left(\sup_{m \le k \le n}\left|\bar{Y}_k - 0\right| > \varepsilon\right) = P\left(\sup_{m \le k \le n}\left|\frac{1}{k}\sum_{i=1}^{k} Y_i\right| > \varepsilon\right)$$
converges to zero (after taking $n \to \infty$) as $m \to \infty$.
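This criterion can be illustrated numerically: along a single simulated path, the supremum $\sup_{m \le k \le N} |\bar{Y}_k|$ shrinks as the starting index $m$ grows. A minimal sketch; drawing $Y_i \sim N(0,1)$ is an illustrative assumption, not part of the problem (any mean-zero sequence with $\sum_i \sigma_i^2/i^2 < \infty$ works):

```python
import numpy as np

# One simulated path of running means Y_bar_k = (1/k) * sum_{i<=k} Y_i,
# with Y_i ~ N(0, 1) as an illustrative assumption.
rng = np.random.default_rng(3)
N = 100000
y = rng.normal(size=N)
ybar = np.cumsum(y) / np.arange(1, N + 1)  # Y_bar_k for k = 1..N

# sup_{m <= k <= N} |Y_bar_k| is non-increasing in m by construction,
# and shrinks toward zero along the path.
sups = []
for m in [10, 1000, 100000]:
    sups.append(np.abs(ybar[m - 1:]).max())
    print(m, sups[-1])
```

The suprema are monotone in $m$ by construction (each is taken over a subset of the previous range), so the printed values can only decrease.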
These solutions are adapted from solutions by Chen Qiu, which were based on Prof Hidalgo’s notes and
solutions for EC484. Their aim is to fill some gaps between notes and exercises. Prof Hidalgo’s notes should
always be the final reference. If you spot any mistakes in this file please contact me: [email protected].
1 You can find a proof of the theorem here: https://math.stackexchange.com/questions/2661290/continuity-of-probability-measure-and-monotonicity.
By continuity of probability measure¹ (if $A_n$ is an increasing sequence of events, then $\lim_{n\to\infty} P(A_n) = P(\bigcup_n A_n)$), we can pass from finite $n$ to the full supremum. In our setting $A_n = \bigcup_{k=m}^{n}\left\{\left|\frac{1}{k}\sum_{i=1}^{k} Y_i\right| > \varepsilon\right\}$. It's easy to check that the sequence is increasing ($\bigcup_{k=m}^{n}\{\cdot\} \subseteq \bigcup_{k=m}^{n+1}\{\cdot\}$), hence we get:
$$\lim_{n\to\infty} P\left(\sup_{m \le k \le n}\left|\frac{1}{k}\sum_{i=1}^{k} Y_i\right| > \varepsilon\right) = \lim_{n\to\infty} P\left(\bigcup_{k=m}^{n}\left\{\left|\frac{1}{k}\sum_{i=1}^{k} Y_i\right| > \varepsilon\right\}\right) = P\left(\bigcup_{k=m}^{\infty}\left\{\left|\frac{1}{k}\sum_{i=1}^{k} Y_i\right| > \varepsilon\right\}\right) = P\left(\sup_{m \le k}\left|\frac{1}{k}\sum_{i=1}^{k} Y_i\right| > \varepsilon\right).$$
Note that $c_k = \frac{1}{k}$ for $k \in \mathbb{N}$ is a non-increasing sequence of positive constants. We can apply Rényi's inequality with $c_k = \frac{1}{k}$ and take limits as $n \to \infty$ to get:
$$P\left(\sup_{m \le k}\left|\bar{Y}_k\right| > \varepsilon\right) = \lim_{n\to\infty} P\left(\sup_{m \le k \le n}\left|\bar{Y}_k\right| > \varepsilon\right) \le \lim_{n\to\infty}\frac{1}{\varepsilon^2}\left(\frac{1}{m^2}\sum_{i=1}^{m}\sigma_i^2 + \sum_{i=m+1}^{n}\frac{\sigma_i^2}{i^2}\right) = \frac{1}{\varepsilon^2}\left(\frac{1}{m^2}\sum_{i=1}^{m}\sigma_i^2 + \sum_{i=m+1}^{\infty}\frac{\sigma_i^2}{i^2}\right).$$
$\frac{1}{\varepsilon^2}$ is a constant. We need to show that if $\sum_{i=1}^{\infty}\frac{\sigma_i^2}{i^2} < \infty$ then the two terms on the right hand side go to zero as $m \to \infty$.
• Since $\sum_{i=1}^{\infty}\frac{\sigma_i^2}{i^2} < \infty$ the tail contributions must go to zero, hence $\sum_{i=m+1}^{\infty}\frac{\sigma_i^2}{i^2} \to 0$ as $m \to \infty$.
• For the other term we need another result²:

Kronecker's Lemma
If $\sum_{i=1}^{\infty} x_i < \infty$, then for any sequence $\{b_i\}_{i\in\mathbb{N}}$ which satisfies $0 < b_1 \le b_2 \le \cdots$ and $b_i \to \infty$ as $i \to \infty$:
$$\lim_{m\to\infty}\frac{1}{b_m}\sum_{i=1}^{m} b_i x_i = 0.$$

Here take $x_i = \frac{\sigma_i^2}{i^2}$, $b_i = i^2$ (increasing, non-negative, goes to infinity). Then:
$$\lim_{m\to\infty}\frac{1}{m^2}\sum_{i=1}^{m}\underbrace{i^2 \frac{\sigma_i^2}{i^2}}_{=\sigma_i^2} = 0,$$
i.e. the first term $\frac{1}{m^2}\sum_{i=1}^{m}\sigma_i^2 \to 0$ as well, which completes the argument.
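A quick numerical sanity check of Kronecker's Lemma for this choice of sequences; setting $\sigma_i^2 = 1$ for all $i$ is an illustrative assumption (then $x_i = 1/i^2$ is summable):

```python
# Kronecker's Lemma check: with b_i = i^2 and x_i = sigma_i^2 / i^2
# (illustratively sigma_i^2 = 1), the weighted average
# (1/b_m) * sum_{i<=m} b_i * x_i = (1/m^2) * sum_{i<=m} 1 = 1/m -> 0.

def kronecker_average(m: int) -> float:
    """(1/b_m) * sum_{i=1}^m b_i * x_i with b_i = i^2, x_i = 1/i^2."""
    return sum((i ** 2) * (1.0 / i ** 2) for i in range(1, m + 1)) / m ** 2

for m in [10, 100, 1000, 10000]:
    print(m, kronecker_average(m))  # decays like 1/m
```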
The topic of this question is super interesting, the result we focus on – not so much. The
main purpose of this exercise is to practice working with the algebra of stochastic orders.
Solution. We start by rewriting $\tilde{\beta} - \hat{\beta}$ in a way which will simplify our analysis. The key trick is to factor out $\left(\Lambda^{-1} + Z'Z\right)^{-1}$:
$$\begin{aligned}
\tilde{\beta} - \hat{\beta} &= \left(\Lambda^{-1} + Z'Z\right)^{-1}\left(\Lambda^{-1}\theta + Z'y\right) - (Z'Z)^{-1}Z'y \\
&= \left(\Lambda^{-1} + Z'Z\right)^{-1}\left(\Lambda^{-1}\theta + Z'y - \left(\Lambda^{-1} + Z'Z\right)(Z'Z)^{-1}Z'y\right) \\
&= \left(\Lambda^{-1} + Z'Z\right)^{-1}\left(\Lambda^{-1}\theta + Z'y - \Lambda^{-1}\hat{\beta} - Z'y\right) \\
&= \left(\Lambda^{-1} + Z'Z\right)^{-1}\Lambda^{-1}\left(\theta - \hat{\beta}\right) \\
&= \left(\frac{\Lambda^{-1}}{n} + \frac{Z'Z}{n}\right)^{-1}\frac{\Lambda^{-1}}{n}\left(\theta - \hat{\beta}\right) \\
&= \left(\frac{\Lambda^{-1}}{n} + \hat{M}\right)^{-1}\frac{\Lambda^{-1}}{n}\left(\theta - \hat{\beta}\right)
\end{aligned}$$
where $\hat{M} = \frac{Z'Z}{n}$. Now we can consider each term separately:
• $\Lambda^{-1}$ is non-stochastic, it's a non-zero constant, so $\Lambda^{-1} = O(1)$, hence also $\Lambda^{-1} = O_p(1)$.
• $\frac{\Lambda^{-1}}{n} = O\left(\frac{1}{n}\right)O(1) = O\left(\frac{1}{n}\right)$, so also $\frac{\Lambda^{-1}}{n} = O_p\left(\frac{1}{n}\right)$.
• $\hat{M} \to M > 0$ so $\hat{M} = O(1)$ hence $\hat{M} = O_p(1)$.
• From above we get $\left(\frac{\Lambda^{-1}}{n} + \hat{M}\right)^{-1} \to M^{-1} < \infty$, so $\left(\frac{\Lambda^{-1}}{n} + \hat{M}\right)^{-1} = O(1)$, hence also $O_p(1)$.
• $\theta$ is a constant parameter so $\theta = O(1)$ and $\theta = O_p(1)$.
• Finally, for a more interesting term: $\hat{\beta} = \beta + \left(\frac{Z'Z}{n}\right)^{-1}\frac{Z'u}{n}$, where $\beta = O_p(1)$ and $\left(\frac{Z'Z}{n}\right)^{-1} = \hat{M}^{-1} = O_p(1)$. To find the rate of convergence of $\frac{Z'u}{n}$ we can use Theorem 15 (iii) with $r = 2$:
$$\frac{Z'u}{n} = O_p\left(\sqrt{E\left\|\frac{Z'u}{n}\right\|^2}\right)$$
$$E\left\|\frac{Z'u}{n}\right\|^2 = \frac{1}{n^2}E\|Z'u\|^2 = \frac{1}{n^2}E\left((Z'u)'(Z'u)\right) = \frac{1}{n^2}E\left(u'ZZ'u\right) = \frac{1}{n^2}E\left(\operatorname{tr}(u'ZZ'u)\right)$$
$$= \frac{1}{n^2}E\left(\operatorname{tr}(ZZ'uu')\right) = \frac{1}{n^2}\operatorname{tr}\left(E(ZZ'uu')\right) = \frac{1}{n^2}\operatorname{tr}\left(ZZ'E(uu')\right) = \frac{1}{n^2}\operatorname{tr}\left(ZZ'\sigma^2 I\right)$$
$$= \frac{\sigma^2}{n^2}\operatorname{tr}(Z'Z) = \frac{\sigma^2}{n}\underbrace{\operatorname{tr}\left(\frac{Z'Z}{n}\right)}_{\to\operatorname{tr}(M),\text{ so }O(1)} = O\left(\frac{1}{n}\right)$$
Therefore $\frac{Z'u}{n} = O_p\left(\frac{1}{\sqrt{n}}\right)$ and
$$\hat{\beta} - \beta = O_p(1)O_p\left(\frac{1}{\sqrt{n}}\right) = O_p\left(\frac{1}{\sqrt{n}}\right)$$
$$\hat{\beta} = \beta + (\hat{\beta} - \beta) = O_p(1) + O_p\left(\frac{1}{\sqrt{n}}\right) = O_p\left(\max\left\{1, \frac{1}{\sqrt{n}}\right\}\right) = O_p(1)$$
by Theorem 16.
• Combining the last two points:
$$\theta - \hat{\beta} = (\theta - \beta) + (\beta - \hat{\beta}) = \begin{cases} O_p\left(\frac{1}{\sqrt{n}}\right) & \text{if } \theta = \beta \\ O_p(1) & \text{if } \theta \ne \beta \end{cases}$$
as required.
We can conclude that the prior information makes no difference in the limit. However, it can
be helpful in getting more stable results in small samples.
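The vanishing difference between the two estimators can be seen in a small simulation. A sketch only: a scalar regressor and the values $\beta = 2$, $\theta = 0$, $\Lambda = 1$, $u_i \sim N(0,1)$ are illustrative assumptions, not taken from the problem:

```python
import numpy as np

# Compare the OLS estimator beta_hat = (Z'Z)^{-1} Z'y with the shrinkage
# estimator beta_tilde = (Lambda^{-1} + Z'Z)^{-1} (Lambda^{-1} theta + Z'y)
# in the scalar case. All numeric values below are illustrative assumptions.
rng = np.random.default_rng(0)
beta, theta, lam = 2.0, 0.0, 1.0

def estimators(n):
    z = rng.normal(size=n)
    y = beta * z + rng.normal(size=n)
    zz, zy = z @ z, z @ y
    beta_hat = zy / zz
    beta_tilde = (theta / lam + zy) / (1.0 / lam + zz)
    return beta_hat, beta_tilde

diffs = {}
for n in [10, 100, 10000]:
    bh, bt = estimators(n)
    diffs[n] = abs(bt - bh)
    print(n, diffs[n])  # the gap shrinks as n grows
```

In line with the solution, the prior ($\theta$, $\Lambda$) moves the estimate noticeably at $n = 10$ but is negligible at $n = 10{,}000$.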
We are only given information about first and second moments. In this case the sharpest result follows from an adapted version of Theorem 15 (iii) for $r = 2$:
$$X_i - E(X_i) = O_p\left(\sqrt{V(\|X_i\|)}\right).$$
We will also repeatedly use Theorem 16: if $X_n = O_p(f_n)$ and $Y_n = O_p(g_n)$, then
$$X_n Y_n = O_p(f_n g_n), \qquad X_n + Y_n = O_p\left(\max\{f_n, g_n\}\right).$$
(i) Then
$$\bar{x} = O_p\left(\sqrt{V(\bar{x})}\right) = O_p\left(\frac{1}{\sqrt{n}}\right)$$
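The rate $\bar{x} = O_p\left(\frac{1}{\sqrt{n}}\right)$ means $\sqrt{n}\,\bar{x}$ stays bounded in probability. A quick simulation sketch; $x_i \sim N(0,1)$ is an assumed distribution (the problem only pins down the first two moments):

```python
import numpy as np

# x_bar = O_p(1/sqrt(n)): the rescaled statistic sqrt(n) * |x_bar| should
# remain bounded as n grows. x_i ~ N(0, 1) is an illustrative assumption.
rng = np.random.default_rng(42)

scaled = []
for n in [100, 10000, 1000000]:
    x = rng.normal(size=n)
    scaled.append(np.sqrt(n) * abs(x.mean()))
    print(n, scaled[-1])  # stays O(1), does not diverge with n
```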
(ii) Similarly for $\bar{y}$ we have:
$$E(\bar{y}) = E\left(\frac{1}{n}\sum_{i=1}^{n} y_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(y_i) = \frac{1}{n}\sum_{i=1}^{n} 3 = 3$$
and
$$V(\bar{y}) = V\left(\frac{1}{n}\sum_{i=1}^{n} y_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} V(y_i) = \frac{1}{n^2}\sum_{i=1}^{n} 2 = \frac{2}{n}$$
where the covariances are omitted since they are zero (by independence). Hence:
$$\bar{y} - 3 = O_p\left(\sqrt{\frac{2}{n}}\right) = O_p\left(\sqrt{\frac{1}{n}}\right)$$
where the last equality follows from the fact that multiplying by a constant does not affect the rate of convergence. Then by Theorem 16 and the fact that $3 = O_p(1)$ (e.g. by Theorem 14):
$$\bar{y} = (\bar{y} - 3) + 3 = O_p\left(\sqrt{\frac{1}{n}}\right) + O_p(1) = O_p\left(\max\left\{1, \sqrt{\frac{1}{n}}\right\}\right) = O_p(1)$$
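The moment calculations for $\bar{y}$ can be verified by Monte Carlo. A sketch; only the first two moments ($E(y_i) = 3$, $V(y_i) = 2$) come from the problem, and drawing $y_i \sim N(3, \sqrt{2})$ is an illustrative assumption:

```python
import numpy as np

# Monte Carlo check of E(y_bar) = 3 and V(y_bar) = 2/n for n = 50.
# y_i ~ N(3, sqrt(2)) is an assumed distribution matching the given moments.
rng = np.random.default_rng(1)
n, reps = 50, 100000

ybars = rng.normal(3.0, np.sqrt(2.0), size=(reps, n)).mean(axis=1)
print(ybars.mean())  # close to 3
print(ybars.var())   # close to 2/n = 0.04
```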
The remaining results follow directly from Theorem 16:
(iii)
$$\bar{x}^2 = \bar{x} \times \bar{x} = O_p\left(\frac{1}{\sqrt{n}}\right) \times O_p\left(\frac{1}{\sqrt{n}}\right) = O_p\left(\frac{1}{\sqrt{n}} \times \frac{1}{\sqrt{n}}\right) = O_p\left(\frac{1}{n}\right)$$
(iv)
$$\bar{y}^3 = \left(O_p(1)\right)^3 = O_p(1^3) = O_p(1)$$
(v)
$$\bar{x}\bar{y} = O_p\left(\frac{1}{\sqrt{n}}\right)O_p(1) = O_p\left(\frac{1}{\sqrt{n}} \times 1\right) = O_p\left(\frac{1}{\sqrt{n}}\right)$$
(vi)
$$\bar{x} + 2 = O_p\left(\frac{1}{\sqrt{n}}\right) + O_p(1) = O_p\left(\max\left\{\frac{1}{\sqrt{n}}, 1\right\}\right) = O_p(1)$$
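The four rates above can be checked the same way: scale each statistic by the inverse of its claimed rate and confirm the result stays bounded as $n$ grows. A sketch under the same illustrative distributional assumptions ($x_i \sim N(0,1)$, $y_i \sim N(3,\sqrt{2})$, neither given by the problem):

```python
import numpy as np

# Rescale each statistic in (iii)-(vi) by the inverse of its claimed O_p rate;
# the rescaled values should stay bounded as n grows.
rng = np.random.default_rng(7)

rows = []
for n in [100, 10000, 1000000]:
    xb = rng.normal(0.0, 1.0, size=n).mean()            # x_bar
    yb = rng.normal(3.0, np.sqrt(2.0), size=n).mean()   # y_bar
    rows.append((
        n * xb ** 2,                 # (iii) x_bar^2     = O_p(1/n)
        yb ** 3,                     # (iv)  y_bar^3     = O_p(1)
        np.sqrt(n) * abs(xb * yb),   # (v)   x_bar*y_bar = O_p(1/sqrt(n))
        xb + 2,                      # (vi)  x_bar + 2   = O_p(1)
    ))
    print(n, rows[-1])
```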