
7.2.6 Convergence in Mean

One way of interpreting the convergence of a sequence X_n to X is to say that the "distance" between X_n and X is getting smaller and smaller. For example, if we define the distance between X_n and X as P(|X_n − X| ≥ ϵ), we have convergence in probability. Another way to define the distance between X_n and X is

E(|X_n − X|^r),

where r ≥ 1 is a fixed number. This leads to convergence in mean. (Note: for convergence in mean, it is usually required that E(|X_n|^r) < ∞.) The most common choice is r = 2, in which case it is called mean-square convergence. (Note: some authors refer to the case r = 1 as convergence in mean.)

Convergence in Mean

Let r ≥ 1 be a fixed number. A sequence of random variables X_1, X_2, X_3, ⋯ converges in the rth mean or in the L^r norm to a random variable X, shown by X_n →(L^r) X, if

lim_{n→∞} E(|X_n − X|^r) = 0.

If r = 2, it is called mean-square convergence, and it is shown by X_n →(m.s.) X.

Example 7.10

Let X_n ∼ Uniform(0, 1/n). Show that X_n →(L^r) 0, for any r ≥ 1.

Solution
The PDF of X_n is given by

f_{X_n}(x) = n for 0 ≤ x ≤ 1/n, and 0 otherwise.

We have

E(|X_n − 0|^r) = ∫_0^{1/n} x^r n dx = 1 / ((r + 1) n^r) → 0, for all r ≥ 1.
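The closed-form value above can be cross-checked by simulation. This is a minimal sketch assuming NumPy is available; the sample size and the particular values of n and r are arbitrary illustrative choices, not part of the text.

```python
import numpy as np

# Monte Carlo check of Example 7.10: for X_n ~ Uniform(0, 1/n),
# E(|X_n|^r) = 1 / ((r + 1) n^r), which tends to 0 as n grows.
rng = np.random.default_rng(seed=0)

r = 2
for n in [1, 10, 100]:
    samples = rng.uniform(0, 1 / n, size=200_000)
    estimate = np.mean(samples ** r)      # Monte Carlo estimate of E(|X_n|^r)
    exact = 1 / ((r + 1) * n ** r)        # closed form from the integral above
    print(f"n={n:4d}  estimate={estimate:.6f}  exact={exact:.6f}")
```

Both columns shrink toward 0 together as n grows, matching the limit computed in the solution.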

Theorem 7.3

Let 1 ≤ r ≤ s. If X_n →(L^s) X, then X_n →(L^r) X.
Proof
We can use Hölder's inequality, which was proved in Section 6.2.6. Hölder's inequality states that

E|XY| ≤ (E(|X|^p))^{1/p} (E(|Y|^q))^{1/q},

where 1 < p, q < ∞ and 1/p + 1/q = 1. In Hölder's inequality, choose

X = |X_n − X|^r,   Y = 1,   p = s/r > 1.

Since Y = 1, the second factor on the right equals 1, and we obtain

E(|X_n − X|^r) ≤ (E(|X_n − X|^s))^{1/p}.

Now, by assumption X_n →(L^s) X, which means

lim_{n→∞} E(|X_n − X|^s) = 0.

We conclude

lim_{n→∞} E(|X_n − X|^r) ≤ lim_{n→∞} (E(|X_n − X|^s))^{1/p} = 0.

Therefore, X_n →(L^r) X.

As we mentioned before, convergence in mean is stronger than convergence in probability. We can prove this using Markov's inequality.

Theorem 7.4

If X_n →(L^r) X for some r ≥ 1, then X_n →(p) X.

Proof
For any ϵ > 0, we have

P(|X_n − X| ≥ ϵ) = P(|X_n − X|^r ≥ ϵ^r)   (since r ≥ 1)
                 ≤ E(|X_n − X|^r) / ϵ^r    (by Markov's inequality).

Since by assumption lim_{n→∞} E(|X_n − X|^r) = 0, we conclude

lim_{n→∞} P(|X_n − X| ≥ ϵ) = 0, for all ϵ > 0.
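As a numerical sanity check of the Markov bound used above, both sides can be computed in closed form for the Uniform(0, 1/n) sequence of Example 7.10. The values of n, r, and ϵ below are arbitrary illustrative choices.

```python
# Check the bound from Theorem 7.4 for X_n ~ Uniform(0, 1/n):
#   P(|X_n| >= eps) <= E(|X_n|^r) / eps^r.
n, r, eps = 10, 2, 0.08

# Exact tail probability: P(X_n >= eps) = n * (1/n - eps) when eps < 1/n, else 0.
tail = max(0.0, n * (1 / n - eps))

# Exact moment from Example 7.10: E(|X_n|^r) = 1 / ((r + 1) n^r).
markov_bound = (1 / ((r + 1) * n ** r)) / eps ** r

print(f"P(|X_n| >= eps) = {tail:.4f}  <=  Markov bound = {markov_bound:.4f}")
```

The bound is not tight, but it holds, and as n grows both sides go to 0 as the theorem requires.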
The converse of Theorem 7.4 is not true in general. That is, there are sequences that converge
in probability but not in mean. Let us look at an example.

Example 7.11

Consider a sequence {X n, n = 1, 2, 3, ⋯} such that


2 1

⎪ n with probability
⎪ n

Xn = ⎨


⎪ 1
0 with probability 1 −
n

Show that
p

a. X → 0.
n

b. X does not converge in the rth mean for any r ≥ 1.


n

Solution

a. To show X_n →(p) 0, we can write, for any ϵ > 0,

lim_{n→∞} P(|X_n| ≥ ϵ) = lim_{n→∞} P(X_n = n^2) = lim_{n→∞} 1/n = 0.

We conclude that X_n →(p) 0.

b. For any r ≥ 1, we can write

lim_{n→∞} E(|X_n|^r) = lim_{n→∞} (n^{2r} · 1/n + 0 · (1 − 1/n)) = lim_{n→∞} n^{2r−1} = ∞   (since r ≥ 1).

Therefore, X_n does not converge in the rth mean for any r ≥ 1. In particular, it is interesting to note that, although X_n →(p) 0, the expected value of X_n does not converge to 0.
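The contrast between parts (a) and (b) is easy to see by simulation. This is a minimal sketch assuming NumPy is available; the sample sizes, the values of n, and the threshold ϵ are arbitrary illustrative choices.

```python
import numpy as np

# Simulation of Example 7.11: X_n = n^2 with probability 1/n, else 0.
# Empirically, P(|X_n| >= eps) shrinks like 1/n, while the sample mean of X_n
# (which estimates E(X_n) = n^2 * 1/n = n) grows without bound.
rng = np.random.default_rng(seed=1)

eps = 0.5
for n in [10, 100, 1000]:
    x = np.where(rng.random(100_000) < 1 / n, n ** 2, 0)
    print(f"n={n:5d}  P(|X_n| >= eps) ~ {np.mean(np.abs(x) >= eps):.4f}  "
          f"E(X_n) ~ {np.mean(x):.1f}")  # theory: 1/n and n, respectively
```

The first column tends to 0 (convergence in probability) while the second diverges, exactly the behavior proved above.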
