
Math 710 Homework 7

Austin Mohr
December 2, 2010

Problem 1
Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $\{A_n\}$ and $A$ belong to $\mathcal{F}$. Let $X$ be a random variable defined on this probability space with $X \in L^1$.

Proposition 1.
\[
\lim_{n\to\infty} \int_{[|X|>n]} X \, dP = 0.
\]

Proof. Define the sequence of random variables $X_n$ for $n \in \mathbb{N}$ via
\[
X_n(\omega) =
\begin{cases}
X(\omega) & \text{if } |X(\omega)| > n \\
0 & \text{otherwise.}
\end{cases}
\]

Observe that, for all $n$,
\[
\int_{[|X|>n]} X \, dP = \int_\Omega X_n \, dP.
\]

Since
\[
\left| \int_\Omega X_n \, dP \right| \le \int_\Omega |X_n| \, dP,
\]
it suffices to show that $\int_\Omega |X_n| \, dP \to 0$.
Now, $|X_n| \to 0$ almost everywhere, since $|X|$ is finite almost everywhere. Moreover, $|X_n| \le |X|$ for all $n$. Thus, by the Dominated Convergence Theorem,
\[
\lim_{n\to\infty} \int_\Omega |X_n| \, dP = \int_\Omega \lim_{n\to\infty} |X_n| \, dP = \int_\Omega 0 \, dP = 0.
\]

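As an aside, Proposition 1 can be sanity-checked numerically. The sketch below assumes $X \sim \mathrm{Exp}(1)$ (an illustrative choice, not part of the proof); for that distribution the tail integral $\int_{[|X|>n]} X \, dP = \int_n^\infty x e^{-x} \, dx$ has the closed form $(n+1)e^{-n}$, which indeed shrinks to $0$.

```python
import math

# Sanity check of Proposition 1 for the assumed example X ~ Exp(1):
# the tail integral over [|X| > n] equals (n + 1) * e^{-n}.
def tail_integral(n: int) -> float:
    return (n + 1) * math.exp(-n)

values = [tail_integral(n) for n in range(20)]
# The tail integrals decrease monotonically toward 0.
assert all(a > b for a, b in zip(values, values[1:]))
assert values[-1] < 1e-6
```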
Proposition 2. If
\[
\lim_{n\to\infty} P(A_n) = 0,
\]
then
\[
\lim_{n\to\infty} \int_{A_n} X \, dP = 0.
\]

Proof. Define the sequence of random variables $X_n$ for $n \in \mathbb{N}$ via $X_n = X \cdot 1_{A_n}$.


Observe that, for all $n$,
\[
\int_{A_n} X \, dP = \int_\Omega X_n \, dP.
\]

Since
\[
\left| \int_\Omega X_n \, dP \right| \le \int_\Omega |X_n| \, dP,
\]
it suffices to show that $\int_\Omega |X_n| \, dP \to 0$.
Now, fix $\epsilon > 0$. By Proposition 1, we may choose $k$ so large that $\int_{[|X|>k]} |X| \, dP < \epsilon/2$. Then
\[
\int_\Omega |X_n| \, dP = \int_{A_n} |X| \, dP \le \int_{A_n \cap [|X| \le k]} |X| \, dP + \int_{[|X|>k]} |X| \, dP \le k \, P(A_n) + \frac{\epsilon}{2} < \epsilon
\]
for all sufficiently large $n$, since $P(A_n) \to 0$. (Note that the Dominated Convergence Theorem does not apply directly here: $P(A_n) \to 0$ alone does not force $|X_n| \to 0$ almost everywhere.)

Proposition 3.
\[
\int_A |X| \, dP = 0
\]
if and only if
\[
P\{A \cap [|X| > 0]\} = 0.
\]
Proof. Observe first that
\[
\int_A |X| \, dP = \int_{A \cap [|X|>0]} |X| \, dP + \int_{A \cap [|X|=0]} |X| \, dP = \int_{A \cap [|X|>0]} |X| \, dP + 0.
\]
Thus, $\int_A |X| \, dP = 0$ if and only if $\int_{A \cap [|X|>0]} |X| \, dP = 0$, which holds if and only if $P\{A \cap [|X|>0]\} = 0$ (since the integrand is strictly positive on the set of integration).
Proposition 4. Let $X \in L^2$. If $V(X) = 0$, then $P[X = E(X)] = 1$.

Proof. Let $\epsilon > 0$. By Chebyshev's Inequality,
\[
P[|X - E(X)| \ge \epsilon] \le \frac{V(X)}{\epsilon^2} = 0.
\]
Equivalently,
\[
P[|X - E(X)| < \epsilon] = 1
\]
for all $\epsilon > 0$. Letting $\epsilon \downarrow 0$ along a countable sequence, we conclude
\[
P[X = E(X)] = 1.
\]

Problem 2
If $X$ and $Y$ are independent random variables and $E(X)$ exists, then, for all $B \in \mathcal{B}(\mathbb{R})$,
\[
\int_{[Y \in B]} X \, dP = E(X) \, P\{Y \in B\}.
\]

Proof. For ease of notation, let $A = [Y \in B]$. Thus,
\[
\int_{[Y \in B]} X \, dP = \int_A X \, dP = \int_\Omega X \cdot 1_A \, dP = E(X \cdot 1_A).
\]

Now, write $X = X^+ - X^-$. Since each of $X^+$ and $X^-$ is measurable, we can find sequences of nonnegative, simple random variables $\{X_n^+\}$ and $\{X_n^-\}$ with $X_n^+ \uparrow X^+$ and $X_n^- \uparrow X^-$. By the Monotone Convergence Theorem, $E(X_n^+) \uparrow E(X^+)$ and $E(X_n^-) \uparrow E(X^-)$. Thus, by linearity of expectation, $E(X_n^+ - X_n^-) \to E(X)$. Moreover, $E(X_n^+ \cdot 1_A - X_n^- \cdot 1_A) \to E(X \cdot 1_A)$. We show next that $E(X_n^+ \cdot 1_A - X_n^- \cdot 1_A) \to E(X) P(A)$ and so conclude that $E(X \cdot 1_A) = E(X) P(A)$.
For each $n$, we have
\[
X_n^+ = \sum_{i=1}^{n} a_i 1_{A_i},
\]
where the $a_i$ are constants and the $A_i$ partition $\Omega$ (since $X_n^+$ is built from $X$, each $A_i$ belongs to $\sigma(X)$, hence is independent of $A = [Y \in B]$). It follows that
\[
X_n^+ \cdot 1_A = \sum_{i=1}^{n} a_i 1_{A_i} 1_A = \sum_{i=1}^{n} a_i 1_{A_i \cap A},
\]

3
and so
\[
E(X_n^+ \cdot 1_A) = \sum_{i=1}^{n} a_i P(A_i \cap A) = \sum_{i=1}^{n} a_i P(A_i) P(A) = P(A) \sum_{i=1}^{n} a_i P(A_i) = P(A) E(X_n^+) \to P(A) E(X^+),
\]
where the second equality holds by independence of $X$ and $Y$.

Similarly, $E(X_n^- \cdot 1_A) \to P(A) E(X^-)$, and so
\[
E(X_n^+ \cdot 1_A - X_n^- \cdot 1_A) = E(X_n^+ \cdot 1_A) - E(X_n^- \cdot 1_A) \to P(A) E(X^+) - P(A) E(X^-) = P(A) E(X^+ - X^-) = P(A) E(X),
\]
as desired.
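The identity of Problem 2 can be verified directly on a toy discrete example (the distributions below are assumed purely for illustration): enumerate all outcome pairs and compare both sides.

```python
import itertools

# Toy check of the identity  ∫_{[Y∈B]} X dP = E(X) · P(Y ∈ B)
# for independent discrete X and Y with small finite supports (assumed data).
X_dist = {-1: 0.2, 0: 0.3, 2: 0.5}   # value -> probability
Y_dist = {1: 0.6, 3: 0.4}
B = {3}

# Left side: integrate X over the event [Y in B] under the product measure.
lhs = sum(x * px * py for (x, px), (y, py) in
          itertools.product(X_dist.items(), Y_dist.items()) if y in B)
EX = sum(x * p for x, p in X_dist.items())
PB = sum(p for y, p in Y_dist.items() if y in B)
assert abs(lhs - EX * PB) < 1e-12
```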

Problem 3
Proposition 5. For all $n \ge 1$, let $X_n$ and $X$ be uniformly bounded random variables. If
\[
\lim_{n\to\infty} X_n = X,
\]
then
\[
\lim_{n\to\infty} E|X_n - X| = 0.
\]

Proof. Let $K$ be a uniform bound for the $|X_n|$ and $|X|$, so that $|X_n - X| \le 2K$ for all $n$. The random variable that is identically $2K$ belongs to $L^1$, since
\[
\int_\Omega 2K \, dP = 2K \cdot P(\Omega) = 2K < \infty.
\]
Thus, the identically-$2K$ random variable dominates the $|X_n - X|$, which converge to $0$ pointwise, and so by the Dominated Convergence Theorem, $E|X_n - X| \to 0$.

Problem 4
On the Lebesgue interval $(\Omega = [0,1], \mathcal{B}([0,1]), P = \lambda)$ define, for $n \ge 2$ (so that $\log n > 0$), the random variables
\[
X_n = \frac{n}{\log n} \, 1_{[0, \frac{1}{n}]}.
\]

Proposition 6. For $X_n$ defined as above,
\[
\lim_{n\to\infty} X_n = 0 \text{ almost everywhere}
\]
and
\[
\lim_{n\to\infty} E(X_n) = 0,
\]
yet the $X_n$ are not uniformly bounded.
Proof. For any $x \in (0,1]$, we can choose $N$ such that $\frac{1}{N} < x$. Then $X_n(x) = 0$ for all $n \ge N$, since $x \notin [0, \frac{1}{n}]$. As $x$ was arbitrary (and $\{0\}$ has measure zero), we conclude that $X_n \to 0$ almost everywhere.
Next, observe that, for all $n$,
\[
E(X_n) = \frac{n}{\log n} \, \lambda\left(\left[0, \frac{1}{n}\right]\right) = \frac{n}{\log n} \cdot \frac{1}{n} = \frac{1}{\log n},
\]
and so $E(X_n) \to 0$.
Finally, $\frac{n}{\log n} \to \infty$, and so the $X_n$ are not uniformly bounded. Hence, $X_n \to 0$ and $E(X_n) \to 0$, yet the uniform boundedness hypothesis used in Problem 3 fails.
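A quick numerical illustration of this example (values computed from the formulas above): the expectations $1/\log n$ shrink to $0$ while the peak heights $n/\log n$ blow up, so no uniform bound exists.

```python
import math

# Illustration of Problem 4: E(X_n) = 1/log(n) -> 0 while the peak
# height n/log(n) -> infinity, so the X_n are not uniformly bounded.
ns = [10, 100, 1000, 10**6]
expectations = [1 / math.log(n) for n in ns]
peaks = [n / math.log(n) for n in ns]
assert all(a > b for a, b in zip(expectations, expectations[1:]))
assert all(a < b for a, b in zip(peaks, peaks[1:]))
```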

Problem 5
Proposition 7. Let $X_n \in L^1$ for all $n \ge 1$ satisfying
\[
\sup_n E(X_n) < \infty.
\]
If $X_n \uparrow X$, then $X \in L^1$ and $E(X_n) \to E(X)$.


Proof. Since $X_n \uparrow X$, we see that $X_n^+ \uparrow X^+$ and $X_n^- \downarrow X^-$. Since each of $X_n^+$ and $X_n^-$ belongs to $L^1$ for all $n$, we have by the Monotone Convergence Theorem that $E(X_n^+) \to E(X^+)$ and $E(X_n^-) \to E(X^-)$. By linearity of expectation, this implies that $E(X_n) \to E(X)$. Since $\sup_n E(X_n)$ is finite, so is $E(X)$ by uniqueness of limits.
To show that $X \in L^1$, it remains to rule out the case that $E(X^+) = E(X^-) = \infty$ (if only one of them were infinite, then $E(X) = \pm\infty$, contradicting $\sup_n E(X_n) < \infty$). Observe, however, that $X_n^- \downarrow X^-$. Thus, if $E(X^-) = \infty$, then $E(X_n^-) = \infty$ for all $n$, contradicting the fact that $X_n^- \in L^1$ for all $n$.
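A concrete instance of Proposition 7 (an assumed example, chosen for illustration): take $X \sim \mathrm{Exp}(1)$ and $X_n = \min(X, n)$, so that $X_n \uparrow X$ and $E(X_n) = 1 - e^{-n} \uparrow 1 = E(X)$.

```python
import math

# Illustration of Proposition 7 with the assumed example X ~ Exp(1)
# and X_n = min(X, n): the expectations E(X_n) = 1 - e^{-n} increase
# to E(X) = 1, and sup_n E(X_n) = 1 < infinity.
expectations = [1 - math.exp(-n) for n in range(1, 30)]
assert all(a < b for a, b in zip(expectations, expectations[1:]))
assert abs(expectations[-1] - 1.0) < 1e-12
```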

Problem 6
Proposition 8. For any positive random variable $X$,
\[
E(X) = \int_{[0,\infty)} P(X > t) \, dt.
\]

Proof. We may view the region of integration as a subset $A$ of the product space $\Omega \times [0, \infty)$, where
\[
A = \{(\omega, t) \mid X(\omega) > t\},
\]
with product measure
\[
P' = P \times \mu,
\]
where $\mu$ denotes Lebesgue measure on $[0, \infty)$. Now, by Fubini's Theorem,
\[
\int_{\Omega \times [0,\infty)} 1_A \, dP' = \int_\Omega \int_{[0,\infty)} 1_A(\omega, t) \, dt \, dP = \int_\Omega X(\omega) \, dP = E(X),
\]
since, for fixed $\omega$, the set $\{t \ge 0 \mid X(\omega) > t\}$ is the interval $[0, X(\omega))$, of length $X(\omega)$.

On the other hand,
\[
\int_{\Omega \times [0,\infty)} 1_A \, dP' = \int_{[0,\infty)} \int_\Omega 1_A(\omega, t) \, dP \, dt = \int_{[0,\infty)} P\{\omega \mid X(\omega) > t\} \, dt = \int_{[0,\infty)} P(X > t) \, dt.
\]
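Proposition 8 is easy to sanity-check numerically. The sketch below assumes $X \sim \mathrm{Exp}(1)$ (illustrative only), for which $E(X) = 1$ and $P(X > t) = e^{-t}$; a Riemann sum of the survival function should come out near $1$.

```python
import math

# Numerical check of E(X) = ∫ P(X > t) dt for the assumed X ~ Exp(1):
# P(X > t) = e^{-t}, and the integral should be approximately E(X) = 1.
dt = 1e-4
upper = 50.0  # truncation point; the tail beyond 50 is negligible
integral = sum(math.exp(-i * dt) * dt for i in range(int(upper / dt) + 1))
assert abs(integral - 1.0) < 1e-3
```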

Proposition 9. For any positive random variable $X$ and any constant $\alpha > 0$,
\[
E(X^\alpha) = \alpha \int_{[0,\infty)} t^{\alpha - 1} P(X > t) \, dt.
\]

Proof. By direct computation, we have $X^\alpha(\omega) = \int_0^{X(\omega)} \alpha t^{\alpha-1} \, dt$. It follows that
\[
E(X^\alpha) = \int_\Omega X^\alpha(\omega) \, P(d\omega) = \int_\Omega \int_0^{X(\omega)} \alpha t^{\alpha-1} \, dt \, P(d\omega) = \int_0^\infty \alpha t^{\alpha-1} \int_{[X > t]} P(d\omega) \, dt \quad \text{(by Fubini's Theorem)}
\]
\[
= \int_0^\infty \alpha t^{\alpha-1} P(X > t) \, dt.
\]
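As with Proposition 8, this identity can be checked numerically on an assumed example: for $X \sim \mathrm{Exp}(1)$ and $\alpha = 2$, we have $E(X^2) = 2$, and $\alpha \int_0^\infty t^{\alpha-1} e^{-t} \, dt = 2\,\Gamma(2) = 2$.

```python
import math

# Check of Proposition 9 with the assumed example X ~ Exp(1), α = 2:
# α ∫ t^{α-1} P(X > t) dt = 2 ∫ t e^{-t} dt = 2 = E(X²).
alpha, dt, upper = 2.0, 1e-4, 50.0
integral = sum(alpha * (i * dt) ** (alpha - 1) * math.exp(-i * dt) * dt
               for i in range(int(upper / dt) + 1))
assert abs(integral - 2.0) < 1e-2
```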

Problem 7
Proposition 10. Let $X$ be a nonnegative random variable and let $\delta > 0$, $0 < \beta < 1$, and $C$ be constants. If
\[
P\{X > n\delta\} \le C\beta^n
\]
for all $n \ge 1$, then $E(X^\alpha) < \infty$ for all $\alpha > 0$.

Proof. By the previous problem, it is equivalent to show that $\alpha \int_{[0,\infty)} t^{\alpha-1} P(X > t) \, dt$ is finite.
To begin, pick $N$ such that $t^{\alpha-1} P(X > t)$ is decreasing in $t$ for all $t \ge N\delta$. Such an $N$ exists, as $t^{\alpha-1}$ grows only polynomially while $P(X > t)$ decays exponentially. It follows that
\[
\alpha \int_{[N\delta,\infty)} t^{\alpha-1} P(X > t) \, dt \le \alpha \sum_{n=N}^\infty \delta \cdot (n\delta)^{\alpha-1} P(X > n\delta) \quad \text{(bounding each interval $[n\delta, (n+1)\delta]$ by its left endpoint)}
\]
\[
\le \alpha \delta^\alpha \sum_{n=N}^\infty n^{\alpha-1} C\beta^n = C\alpha\delta^\alpha \sum_{n=N}^\infty n^{\alpha-1} \beta^n < \infty \quad \text{(by the ratio test)}.
\]
Since $\int_{[0, N\delta]} t^{\alpha-1} P(X > t) \, dt$ is also finite, we conclude that $\int_{[0,\infty)} t^{\alpha-1} P(X > t) \, dt$ is finite.
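The ratio-test step can be illustrated concretely (the values $\beta = 1/2$, $\alpha = 3$ below are assumed for illustration): the successive term ratios of $\sum n^{\alpha-1}\beta^n$ tend to $\beta < 1$, and the partial sums stabilize quickly.

```python
# Illustration of the ratio-test step in Problem 7, with assumed
# parameters beta = 0.5 and alpha = 3: the terms n^{α-1} β^n have
# successive ratios tending to β < 1, so the series converges.
beta, alpha = 0.5, 3.0
terms = [n ** (alpha - 1) * beta ** n for n in range(1, 200)]
ratios = [b / a for a, b in zip(terms, terms[1:])]
assert abs(ratios[-1] - beta) < 1e-2
# The tail beyond n = 50 is already negligible.
assert abs(sum(terms) - sum(terms[:50])) < 1e-9
```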
