Econometric Analysis MT Official Problem Set Solution 5

This document summarizes the steps to prove Kolmogorov's strong law of large numbers using Renyi's inequality. It first defines convergence almost surely and introduces the necessary variables. It then applies Renyi's inequality and uses Kronecker's lemma to show that the two terms on the right-hand side go to zero as m approaches infinity, completing the proof. The document also provides context and references for some of the mathematical results and definitions used in the proof.


EC484 CLASSES: WEEK 6

KAMILA NOWAKOWICZ

Question 6, Problem Set 2

In this question we prove Kolmogorov’s SLLN using:

Renyi's inequality
Let $X_i$ be a sequence of zero mean independent random variables with variances $\sigma_i^2$. If $c_1, c_2, \dots$ is a non-increasing sequence of positive constants, then for any $m, n$ with $m < n$ and arbitrary $\varepsilon > 0$:
$$P\left( \sup_{m \le k \le n} c_k |X_1 + \cdots + X_k| \ge \varepsilon \right) \le \frac{1}{\varepsilon^2} \left( c_m^2 \sum_{i=1}^{m} \sigma_i^2 + \sum_{i=m+1}^{n} c_i^2 \sigma_i^2 \right).$$

Solution. From the setup of this question we can guess that we will need to use the following definition of convergence almost surely:
$$Z_n \xrightarrow{a.s.} Z \quad \text{if } \forall \varepsilon > 0: \quad \lim_{m \to \infty} P\left( \sup_{m \le k} |Z_k - Z| > \varepsilon \right) = 0.$$
Let $Y_i = X_i - \mu_i$ and $\bar{Y}_n = \frac{1}{n} \sum_{i=1}^{n} Y_i$. Note that $Y_i$ has mean zero and $V(X_i) = V(X_i - \mu_i) = V(Y_i)$. We want to show that $\bar{Y}_n \xrightarrow{a.s.} 0$. Pick any $\varepsilon > 0$. We have almost sure convergence if:

$$P\left( \sup_{m \le k \le n} \left| \bar{Y}_k - 0 \right| > \varepsilon \right) = P\left( \sup_{m \le k \le n} \left| \frac{1}{k} \sum_{i=1}^{k} Y_i \right| > \varepsilon \right) = P\left( \sup_{m \le k \le n} \frac{1}{k} \left| \sum_{i=1}^{k} Y_i \right| > \varepsilon \right)$$
converges to 0 as $m \to \infty$, for $n = \infty$. We should be cautious when replacing a finite constant $n$ with infinity; in many cases such a substitution would lead to incorrect results. Here we are able to set $n = \infty$ because of a property known as continuity of increasing sequences.$^1$ We say that a sequence of events $\{A_n\}_{n \in \mathbb{N}}$ is increasing if $\forall n\ A_n \subset A_{n+1}$, and we define its limit as $\lim_{n \to \infty} A_n = \bigcup_{n=1}^{\infty} A_n$. The continuity theorem for increasing sequences gives that if $\{A_n\}_{n \in \mathbb{N}}$ is an increasing sequence then
$$P\left( \lim_{n \to \infty} A_n \right) = \lim_{n \to \infty} P(A_n).$$

These solutions are adapted from solutions by Chen Qiu, which were based on Prof Hidalgo’s notes and
solutions for EC484. Their aim is to fill some gaps between notes and exercises. Prof Hidalgo’s notes should
always be the final reference. If you spot any mistakes in this file please contact me: [email protected].
$^1$You can find a proof of the theorem here: https://math.stackexchange.com/questions/2661290/continuity-of-probability-measure-and-monotonicity.
In our setting $A_n = \bigcup_{k=m}^{n} \left\{ \frac{1}{k} \left| \sum_{i=1}^{k} Y_i \right| > \varepsilon \right\}$. It's easy to check that the sequence is increasing ($\bigcup_{k=m}^{n} \{\cdot\} \subset \bigcup_{k=m}^{n+1} \{\cdot\}$), hence we get:
$$\lim_{n \to \infty} P\left( \sup_{m \le k \le n} \frac{1}{k} \left| \sum_{i=1}^{k} Y_i \right| > \varepsilon \right) = \lim_{n \to \infty} P\left( \bigcup_{k=m}^{n} \left\{ \frac{1}{k} \left| \sum_{i=1}^{k} Y_i \right| > \varepsilon \right\} \right) = P\left( \bigcup_{k=m}^{\infty} \left\{ \frac{1}{k} \left| \sum_{i=1}^{k} Y_i \right| > \varepsilon \right\} \right) = P\left( \sup_{m \le k} \frac{1}{k} \left| \sum_{i=1}^{k} Y_i \right| > \varepsilon \right).$$

Note that $c_k = \frac{1}{k}$ for $k \in \mathbb{N}$ is a non-increasing sequence of positive constants. We can apply Renyi's inequality with $c_k = \frac{1}{k}$ and take limits as $n \to \infty$ to get:
$$P\left( \sup_{m \le k} \left| \bar{Y}_k \right| > \varepsilon \right) = \lim_{n \to \infty} P\left( \sup_{m \le k \le n} \left| \bar{Y}_k \right| > \varepsilon \right) \le \lim_{n \to \infty} \frac{1}{\varepsilon^2} \left( \frac{1}{m^2} \sum_{i=1}^{m} \sigma_i^2 + \sum_{i=m+1}^{n} \frac{1}{i^2} \sigma_i^2 \right) = \frac{1}{\varepsilon^2} \left( \frac{1}{m^2} \sum_{i=1}^{m} \sigma_i^2 + \sum_{i=m+1}^{\infty} \frac{1}{i^2} \sigma_i^2 \right).$$
$\frac{1}{\varepsilon^2}$ is a constant. We need to show that if $\sum_{i=1}^{\infty} \frac{\sigma_i^2}{i^2} < \infty$ then the two terms on the right hand side go to zero as $m \to \infty$.

• Since $\sum_{i=1}^{\infty} \frac{\sigma_i^2}{i^2} < \infty$, the tail contributions must go to zero, hence $\sum_{i=m+1}^{\infty} \frac{\sigma_i^2}{i^2} \to 0$ as $m \to \infty$.
• For the other term we need another result$^2$:

Kronecker's Lemma
If $\sum_{i=1}^{\infty} x_i < \infty$, then for any sequence $\{b_i\}_{i \in \mathbb{N}}$ which satisfies $0 < b_1 \le b_2 \le \cdots$ and $b_i \to \infty$ as $i \to \infty$:
$$\lim_{m \to \infty} \frac{1}{b_m} \sum_{i=1}^{m} b_i x_i = 0.$$

Here take $x_i = \frac{\sigma_i^2}{i^2}$, $b_i = i^2$ (increasing, positive, goes to infinity). Then:
$$\lim_{m \to \infty} \frac{1}{m^2} \sum_{i=1}^{m} \underbrace{i^2 \, \frac{\sigma_i^2}{i^2}}_{= \sigma_i^2} = 0,$$
which completes the proof.
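The result just proved is easy to see in simulation. The following is a hedged sketch, with assumed Gaussian $X_i$ and an arbitrary choice of means $\mu_i$ and slowly growing variances $\sigma_i^2 = i^{0.2}$ (so that $\sum \sigma_i^2 / i^2 < \infty$, as the theorem requires): the centered sample mean $\bar{Y}_n$ drifts to zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch of Kolmogorov's SLLN: independent X_i with means mu_i and
# variances sigma_i^2 = i^0.2, so sum_i sigma_i^2 / i^2 = sum_i i^{-1.8} < inf.
n = 200_000
i = np.arange(1, n + 1)
mu = np.sin(i)                          # arbitrary bounded means (an assumption)
sigma = i ** 0.1                        # sigma_i = i^0.1, i.e. sigma_i^2 = i^0.2
X = mu + sigma * rng.standard_normal(n)

Y_bar = np.cumsum(X - mu) / i           # \bar Y_k = (1/k) sum_{i<=k} (X_i - mu_i)
print(abs(Y_bar[-1]))                   # close to 0 for large n
```

Note the variances here are unbounded, so a plain "iid with finite variance" LLN would not cover this case; the summability condition $\sum \sigma_i^2/i^2 < \infty$ is what does the work.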


$^2$If you are interested in the proof: https://en.wikipedia.org/wiki/Kronecker%27s_lemma. The first line in the proof follows from this result: https://proofwiki.org/wiki/Abel%27s_Lemma/Formulation_2.
Question 1, Problem Set 3

The topic of this question is super interesting; the result we focus on, not so much. The main purpose of this exercise is to practice working with the algebra of stochastic orders.

Solution. We start by rewriting $\tilde{\beta} - \hat{\beta}$ in a way which will simplify our analysis. The key trick is to factor out $\left( \Lambda^{-1} + Z'Z \right)^{-1}$:
\begin{align*}
\tilde{\beta} - \hat{\beta} &= \left( \Lambda^{-1} + Z'Z \right)^{-1} \left( \Lambda^{-1} \theta + Z'y \right) - (Z'Z)^{-1} Z'y \\
&= \left( \Lambda^{-1} + Z'Z \right)^{-1} \left( \Lambda^{-1} \theta + Z'y - \left( \Lambda^{-1} + Z'Z \right) (Z'Z)^{-1} Z'y \right) \\
&= \left( \Lambda^{-1} + Z'Z \right)^{-1} \left( \Lambda^{-1} \theta + Z'y - \Lambda^{-1} \hat{\beta} - Z'y \right) \\
&= \left( \Lambda^{-1} + Z'Z \right)^{-1} \Lambda^{-1} \left( \theta - \hat{\beta} \right) \\
&= \left( \frac{\Lambda^{-1}}{n} + \frac{Z'Z}{n} \right)^{-1} \frac{1}{n} \Lambda^{-1} \left( \theta - \hat{\beta} \right) \\
&= \left( \frac{\Lambda^{-1}}{n} + \hat{M} \right)^{-1} \frac{1}{n} \Lambda^{-1} \left( \theta - \hat{\beta} \right)
\end{align*}
where $\hat{M} = \frac{Z'Z}{n}$. Now we can consider each term separately:
• $\Lambda^{-1}$ is non-stochastic, it's a non-zero constant, so $\Lambda^{-1} = O(1)$, hence also $\Lambda^{-1} = O_p(1)$.
• $\frac{1}{n} \Lambda^{-1} = O\left(\frac{1}{n}\right) O(1) = O\left(\frac{1}{n}\right)$, so also $\frac{1}{n} \Lambda^{-1} = O_p\left(\frac{1}{n}\right)$.
• $\hat{M} \to M > 0$ so $\hat{M} = O(1)$ hence $\hat{M} = O_p(1)$.
• From above we get $\left( \frac{\Lambda^{-1}}{n} + \hat{M} \right)^{-1} \to M^{-1} < \infty$, so $\left( \frac{\Lambda^{-1}}{n} + \hat{M} \right)^{-1} = O(1)$, hence also $O_p(1)$.
• $\theta$ is a constant parameter so $\theta = O(1)$ and $\theta = O_p(1)$.
• Finally, for a more interesting term: $\hat{\beta} = \beta + (\hat{\beta} - \beta) = \underbrace{\beta}_{=O_p(1)} + \underbrace{\left( \frac{Z'Z}{n} \right)^{-1}}_{= \hat{M}^{-1} = O_p(1)} \frac{Z'u}{n}$. To find the rate of convergence of $\frac{Z'u}{n}$ we can use Theorem 15 (iii) with $r = 2$:
$$\frac{Z'u}{n} = O_p\left( \sqrt{E \left\| \frac{Z'u}{n} \right\|^2} \right)$$
\begin{align*}
E \left\| \frac{Z'u}{n} \right\|^2 &= \frac{1}{n^2} E \|Z'u\|^2 = \frac{1}{n^2} E\left( (Z'u)'(Z'u) \right) = \frac{1}{n^2} E(u'ZZ'u) = \frac{1}{n^2} E\left( \mathrm{tr}(u'ZZ'u) \right) \\
&= \frac{1}{n^2} E\left( \mathrm{tr}(ZZ'uu') \right) = \frac{1}{n^2} \mathrm{tr}\left( E(ZZ'uu') \right) = \frac{1}{n^2} \mathrm{tr}\left( ZZ' E(uu') \right) = \frac{1}{n^2} \mathrm{tr}\left( ZZ' \sigma^2 I \right) \\
&= \frac{\sigma^2}{n^2} \mathrm{tr}(Z'Z) = \frac{\sigma^2}{n} \underbrace{\mathrm{tr}\left( \frac{Z'Z}{n} \right)}_{\to \mathrm{tr}(M), \text{ so } O(1)} = O\left( \frac{1}{n} \right)
\end{align*}
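The identity $E\|Z'u/n\|^2 = \frac{\sigma^2}{n^2}\,\mathrm{tr}(Z'Z)$ can be sanity-checked numerically. This is a hedged sketch assuming fixed regressors $Z$ and spherical errors with $E(uu') = \sigma^2 I$, exactly as used in the derivation; the dimensions and $\sigma^2$ below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hedged check: for fixed Z and errors u with E(uu') = s2 * I,
# E || Z'u / n ||^2 equals (s2 / n^2) * tr(Z'Z).
n, p, s2, reps = 50, 3, 2.0, 100_000
Z = rng.standard_normal((n, p))               # held fixed across error draws

u = np.sqrt(s2) * rng.standard_normal((reps, n))
vals = np.sum((u @ Z / n) ** 2, axis=1)       # ||Z'u/n||^2 for each draw of u
mc = vals.mean()                              # Monte Carlo estimate of the expectation
exact = s2 * np.trace(Z.T @ Z) / n ** 2       # the closed-form value from the text
print(mc, exact)                              # the two should agree closely
```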
3
Therefore $\frac{Z'u}{n} = O_p\left( \frac{1}{\sqrt{n}} \right)$ and
$$\hat{\beta} - \beta = O_p(1) O_p\left( \frac{1}{\sqrt{n}} \right) = O_p\left( \frac{1}{\sqrt{n}} \right)$$
$$\hat{\beta} = \beta + (\hat{\beta} - \beta) = O_p(1) + O_p\left( \frac{1}{\sqrt{n}} \right) = O_p\left( \max\left\{ 1, \frac{1}{\sqrt{n}} \right\} \right) = O_p(1)$$
by Theorem 16.
• Combining the last two points:
$$\theta - \hat{\beta} = (\theta - \beta) + (\beta - \hat{\beta}) = \begin{cases} O_p\left( \frac{1}{\sqrt{n}} \right) & \text{if } \theta = \beta \\ O_p(1) & \text{if } \theta \ne \beta \end{cases}$$
Combining all results, and assuming $\theta \ne \beta$, we get:
$$\tilde{\beta} - \hat{\beta} = \left( \frac{\Lambda^{-1}}{n} + \hat{M} \right)^{-1} \frac{1}{n} \Lambda^{-1} \left( \theta - \hat{\beta} \right) = O_p(1) O_p\left( \frac{1}{n} \right) O_p(1) = O_p\left( \frac{1}{n} \right)$$
as required.
We can conclude that the prior information makes no difference in the limit. However, it can
be helpful in getting more stable results in small samples.
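The $O_p\left(\frac{1}{n}\right)$ rate can be made visible in a small simulation. This is a hedged sketch, not part of the official solution: $\Lambda = I$, $\theta$, and $\beta$ are arbitrary choices, and we simply check that quadrupling $n$ roughly quarters the average gap between the two estimators.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hedged sketch: the gap between the shrinkage estimator
#   beta_tilde = (Lam^{-1} + Z'Z)^{-1} (Lam^{-1} theta + Z'y)
# and OLS beta_hat = (Z'Z)^{-1} Z'y should shrink like 1/n when theta != beta.
p = 2
beta = np.array([1.0, -2.0])                  # true coefficients (an assumption)
theta = np.array([0.5, 0.5])                  # prior mean, theta != beta
Lam_inv = np.eye(p)                           # Lambda = I for simplicity

def avg_gap(n, reps=200):
    """Average ||beta_tilde - beta_hat|| over simulated samples of size n."""
    total = 0.0
    for _ in range(reps):
        Z = rng.standard_normal((n, p))
        y = Z @ beta + rng.standard_normal(n)
        ZtZ, Zty = Z.T @ Z, Z.T @ y
        b_hat = np.linalg.solve(ZtZ, Zty)
        b_til = np.linalg.solve(Lam_inv + ZtZ, Lam_inv @ theta + Zty)
        total += np.linalg.norm(b_til - b_hat)
    return total / reps

g1, g2 = avg_gap(200), avg_gap(800)
print(g1 / g2)    # roughly 4: quadrupling n about quarters the gap
```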

Question 2, Problem Set 3

We are only given information about first and second moments. In this case the sharpest result follows from an adapted version of Theorem 15 (iii) for $r = 2$:
$$X_i - E(X_i) = O_p\left( \sqrt{V(\|X_i\|)} \right).$$
We also use results from Theorem 16: for $X_n = O_p(f_n)$, $Y_n = O_p(g_n)$:
$$X_n Y_n = O_p(f_n g_n)$$
$$X_n + Y_n = O_p(\max\{f_n, g_n\})$$
The main takeaways from this question are:


• when asked to find orders in probability of a mean zero estimator always start from
calculating the variance;
• when the estimator does not have mean zero, the best we can do is Op (1).
Solution. The question has many parts, but only the first two require a bit of work. The rest follows from the algebra of stochastic orders (Theorem 16).
(i) In this part we use Theorem 15 (iii):
$$X_i - E(X_i) = O_p\left( \sqrt{V(\|X_i\|)} \right)$$
By definition, $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ so
$$E(\bar{x}) = E\left( \frac{1}{n} \sum_{i=1}^{n} x_i \right) = \frac{1}{n} \sum_{i=1}^{n} E(x_i) = \frac{1}{n} \sum_{i=1}^{n} 0 = 0$$
and
$$V(\bar{x}) = V\left( \frac{1}{n} \sum_{i=1}^{n} x_i \right) = \frac{1}{n^2} \sum_{i=1}^{n} \underbrace{V(x_i)}_{=1} + \frac{1}{n^2} \sum_{i=1}^{n} \sum_{\substack{j=1 \\ j \ne i}}^{n} \underbrace{\mathrm{Cov}(x_i, x_j)}_{=0 \text{ by independence}} = \frac{1}{n^2} \sum_{i=1}^{n} 1 = \frac{1}{n}.$$
Then
$$\bar{x} = O_p\left( \sqrt{V(\bar{x})} \right) = O_p\left( \frac{1}{\sqrt{n}} \right)$$
(ii) Similarly for $\bar{y}$ we have:
$$E(\bar{y}) = E\left( \frac{1}{n} \sum_{i=1}^{n} y_i \right) = \frac{1}{n} \sum_{i=1}^{n} E(y_i) = \frac{1}{n} \sum_{i=1}^{n} 3 = 3$$
and
$$V(\bar{y}) = V\left( \frac{1}{n} \sum_{i=1}^{n} y_i \right) = \frac{1}{n^2} \sum_{i=1}^{n} V(y_i) = \frac{1}{n^2} \sum_{i=1}^{n} 2 = \frac{2}{n}$$
where the covariances are omitted since they are zero (by independence). Hence:
$$\bar{y} - 3 = O_p\left( \sqrt{\frac{2}{n}} \right) = O_p\left( \sqrt{\frac{1}{n}} \right)$$
where the last equality follows from the fact that multiplying by a constant does not affect the rate of convergence. Then by Theorem 16 and the fact that $3 = O_p(1)$ (e.g. by Theorem 14):
$$\bar{y} = (\bar{y} - 3) + 3 = O_p\left( \sqrt{\frac{1}{n}} \right) + O_p(1) = O_p\left( \max\left\{ 1, \sqrt{\frac{1}{n}} \right\} \right) = O_p(1)$$
The remaining results follow directly from Theorem 16:
(iii)
$$\bar{x}^2 = \bar{x} \times \bar{x} = O_p\left( \frac{1}{\sqrt{n}} \right) \times O_p\left( \frac{1}{\sqrt{n}} \right) = O_p\left( \frac{1}{\sqrt{n}} \times \frac{1}{\sqrt{n}} \right) = O_p\left( \frac{1}{n} \right)$$
(iv)
$$\bar{y}^3 = (O_p(1))^3 = O_p(1^3) = O_p(1)$$
(v)
$$\bar{x}\bar{y} = O_p\left( \frac{1}{\sqrt{n}} \right) O_p(1) = O_p\left( \frac{1}{\sqrt{n}} \times 1 \right) = O_p\left( \frac{1}{\sqrt{n}} \right)$$
(vi)
$$\bar{x} + 2 = O_p\left( \frac{1}{\sqrt{n}} \right) + O_p(1) = O_p\left( \max\left\{ \frac{1}{\sqrt{n}}, 1 \right\} \right) = O_p(1)$$
