Probability Theory
Prof. Dr. Igor Kortchemski
January 24, 2024
Problems and suggested solutions
Question 1
[10 Points] Let (Xn)n≥1 be a sequence of independent random variables such that for every n ≥ 1
we have P(Xn = n^2 − 1) = 1/n^2 and P(Xn = −1) = 1 − 1/n^2. For n ≥ 1 we define Sn = X1 + · · · + Xn.
(1) [1 Point] Show that E [Xn ] = 0 for every n ≥ 1.
(2) [3 Points] State the Borel-Cantelli lemmas.
(3) [5 Points] Show that almost surely
Sn/n −→ −1 as n → ∞.
(4) [1 Point] Why is it not possible to apply the strong law of large numbers? Justify your answer.
Solution:
(1) We have
E[Xn] = (n^2 − 1) · P(Xn = n^2 − 1) − 1 · P(Xn = −1) = (n^2 − 1) · (1/n^2) − (1 − 1/n^2) = 0.
(2) Let (An )n≥1 be a sequence of events.
Borel–Cantelli 1. If Σ_{n≥1} P(An) < ∞, then P(lim sup An) = 0.
Borel–Cantelli 2. If Σ_{n≥1} P(An) = ∞ and the events (An)n≥1 are independent, then P(lim sup An) = 1.
(3) Set An = {Xn = n^2 − 1}. Since Σ_{n≥1} P(An) = Σ_{n≥1} 1/n^2 < ∞, Borel–Cantelli 1 gives
P(lim sup An) = 0. As a consequence, almost surely An happens only finitely often. Thus almost
surely there exists N ≥ 1 such that n ≥ N implies Xn = −1. Thus, almost surely, for n ≥ N:
Sn/n = SN/n − (n − N)/n,
which tends to −1 as n → ∞.
(4) The random variables (Xi )i≥1 do not have the same law, so the strong law of large numbers
cannot be applied.
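The conclusion of (3) can also be observed by simulation; a minimal sketch (the horizon N and the seed are arbitrary choices, not part of the exercise):

```python
import random

random.seed(42)

def sample_X(n):
    # X_n = n^2 - 1 with probability 1/n^2, and -1 otherwise
    return float(n * n - 1) if random.random() < 1.0 / (n * n) else -1.0

N = 200_000
S = sum(sample_X(n) for n in range(1, N + 1))
print(S / N)  # typically close to -1, even though E[X_n] = 0 for every n
```

Since each Xn ≥ −1, the ratio S/N is always at least −1; the run illustrates that the rare large values n^2 − 1 almost surely stop occurring, so the running average drifts to −1 rather than to the mean 0.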
Question 2
[5 Points] Let (E, A) and (F, B) be two sets equipped with σ-fields. Recall that on E × F , the
product σ-field is defined by A ⊗ B = σ({A × B : A ∈ A, B ∈ B}). For C ∈ A ⊗ B and x ∈ E, we set
Cx = {y ∈ F : (x, y) ∈ C}.
(1) [3 Points] Fix x ∈ E. Show that U = {C ∈ A ⊗ B : Cx ∈ B} is a σ-field on E × F .
(2) [2 Points] Show that for every C ∈ A ⊗ B and x ∈ E we have Cx ∈ B.
Solution:
(1) We check the three items of the definition of a σ-field:
– E × F ∈ U since (E × F )x = F ∈ B.
– If C ∈ U, then (C^c)x = {y ∈ F : (x, y) ∉ C} = (Cx)^c ∈ B because B is stable under
complementation.
– If (Ci)i≥1 is a sequence of elements of U, then
(∪_{i≥1} Ci)x = {y ∈ F : (x, y) ∈ ∪_{i≥1} Ci} = ∪_{i≥1} (Ci)x ∈ B
because B is stable under countable unions.
(2) Fix x ∈ E. Observe that U contains all elements of the form A × B with A ∈ A, B ∈ B.
Indeed, (A × B)x = B if x ∈ A and (A × B)x = ∅ if x ̸∈ A. As a consequence, since U is a
σ-field by (1), U contains σ({A × B : A ∈ A, B ∈ B}), which is precisely A ⊗ B. This implies
that for every C ∈ A ⊗ B we have Cx ∈ B.
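For intuition, the section operation is easy to picture on finite sets; a toy sketch (the sets E, F, C below are illustrative choices, not part of the exercise):

```python
# E, F finite; C a subset of E x F that is not itself a rectangle A x B
E = {0, 1, 2}
F = {"a", "b"}
C = {(0, "a"), (0, "b"), (1, "a"), (2, "b")}

def section(C, x):
    # C_x = {y in F : (x, y) in C}
    return {y for y in F if (x, y) in C}

for x in sorted(E):
    print(x, section(C, x))  # every section is a subset of F
```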
Question 3
[20 Points] Let λ > 0 and let X be a real-valued random variable such that P(X ≥ a) = a^{−λ} for
all a ≥ 1. Let (Xn)n≥1 be a sequence of independent random variables all having the same law as X.
We define for every n ≥ 1
Tn = (∏_{i=1}^{n} Xi)^{1/n}.
Remark: In the following, Part 1 and Part 2 can be treated independently: question (6) can be solved
without using the other questions.
Part 1.
(1) [2 Points] Show that X has a density and give its expression.
(2) [4 Points] As n → ∞, does Tn converge almost surely? Justify your answer.
(3) [1 Point] As n → ∞, does Tn converge in probability? Justify your answer.
(4) [4 Points] Does E[Tn^2] converge as n → ∞? Justify your answer.
(5) [3 Points] As n → ∞, does Tn converge in L1 ? Justify your answer.
Part 2.
(6) [6 Points] Show that max(X1, . . . , Xn)/n^{1/λ} converges in distribution as n → ∞.
Solution:
(1) Observe that the cumulative distribution function (cdf) of X is given by P(X ≤ a) =
1 − a^{−λ} for a ≥ 1 and P(X ≤ a) = 0 for a < 1. The cdf is piecewise C^1, so X has a density
given by
f(x) = (d/dx)(1 − x^{−λ}) · 1_{x≥1} = (λ/x^{λ+1}) · 1_{x≥1}.
(2) Yes, Tn converges almost surely. Observe that P(X ≥ 1) = 1 and that P(ln(X) ≥ a) = e^{−λa}
for every a ≥ 0, so ln(X) follows an exponential law of parameter λ. In addition,
ln(Tn) = (1/n) · Σ_{i=1}^{n} ln(Xi).
By the composition principle, the random variables ln(X1), . . . , ln(Xn) are independent and
each distributed as an exponential random variable of parameter λ. By the strong law
of large numbers, ln(Tn ) converges almost surely to 1/λ. By continuity of the exponential
function, it follows that Tn converges almost surely to exp(1/λ).
(3) Yes, Tn converges in probability to exp(1/λ): we saw in the lecture that almost sure conver-
gence implies convergence in probability.
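The almost sure limit exp(1/λ) can be observed numerically; a quick sketch (λ = 2, n and the seed are arbitrary choices), using the fact that X is a Pareto variable with minimum 1 and shape λ:

```python
import math
import random

random.seed(0)
lam = 2.0       # illustrative choice of lambda
n = 100_000

# P(X >= a) = a^(-lam) for a >= 1: paretovariate(lam) has exactly this tail
log_T = sum(math.log(random.paretovariate(lam)) for _ in range(n)) / n
T_n = math.exp(log_T)   # T_n = (X_1 ... X_n)^(1/n)
print(T_n, math.exp(1 / lam))  # the two values should be close
```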
(4) Write
E[Tn^2] = E[∏_{i=1}^{n} Xi^{2/n}] = ∏_{i=1}^{n} E[Xi^{2/n}] = E[X^{2/n}]^n,
where we have used the independence of (Xi)1≤i≤n for the second equality and the fact that
these random variables all have the same law for the last equality. We compute E[X^{2/n}] using
(1) and the transfer theorem:
E[X^{2/n}] = ∫_1^∞ x^{2/n} · (λ/x^{λ+1}) dx = ∫_1^∞ λ/x^{λ−2/n+1} dx,
which is finite for n such that λ − 2/n > 0. Thus for n sufficiently large E[X^{2/n}] < ∞ and
E[X^{2/n}] = λn/(λn − 2) = 1 + 2/(λn − 2).
Thus, using the Taylor expansion ln(1 + x) = x + o(x) as x → 0:
E[Tn^2] = (1 + 2/(λn − 2))^n = exp(n ln(1 + 2/(λn − 2))) = exp(n(2/(λn − 2) + o(1/n))) = exp(2/λ + o(1)),
which converges to exp(2/λ), so the answer to the question is yes.
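This computation needs no simulation to check, since E[Tn^2] = (λn/(λn − 2))^n is explicit once λ − 2/n > 0; a small sketch (λ = 2 is an arbitrary choice):

```python
import math

lam = 2.0  # illustrative choice of lambda
for n in (10, 100, 10_000):
    # E[T_n^2] = (lam * n / (lam * n - 2))^n, valid once lam - 2/n > 0
    print(n, (lam * n / (lam * n - 2)) ** n)
print("limit:", math.exp(2 / lam))  # exp(2/lam)
```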
(5) The answer is yes.
Solution 1. We check that (Tn)n≥1 converges in probability and is uniformly integrable. The
convergence in probability was established in (3), and uniform integrability comes from
the fact that (Tn) is bounded in L2 since E[Tn^2] converges as n → ∞ (we saw in the lecture
that a sequence of random variables bounded in Lp for p > 1 is uniformly integrable).
For Solutions 2 and 3, we first show that E [Tn ] → exp(1/λ). As in question (4), we have
" n # n h i h in
1/n 1/n
= E X 1/n
Y Y
E [Tn ] = E Xi = E Xi ,
i=1 i=1
h i
and we similarly compute E X 1/n :
E[X^{1/n}] = ∫_1^∞ x^{1/n} · (λ/x^{λ+1}) dx = λn/(λn − 1) = 1 + 1/(λn − 1).
Thus, similarly to (4):
E[Tn] = (1 + 1/(λn − 1))^n = exp(n ln(1 + 1/(λn − 1))) = exp(n(1/(λn − 1) + o(1/n))) = exp(1/λ + o(1)).
This entails E[Tn] → exp(1/λ) as n → ∞.
Solution 2. We show that E[(Tn − exp(1/λ))^2] → 0. Indeed, by the Cauchy–Schwarz
inequality it then follows that E[|Tn − exp(1/λ)|] ≤ E[(Tn − exp(1/λ))^2]^{1/2} → 0. To this end just write
E[(Tn − exp(1/λ))^2] = E[Tn^2] − 2 exp(1/λ) E[Tn] + exp(2/λ) −→ 0 as n → ∞,
since E[Tn^2] → exp(2/λ) and E[Tn] → exp(1/λ).
Solution 3. We have Tn ≥ 0, Tn → exp(1/λ) almost surely and E [Tn ] → exp(1/λ). Then
Scheffé’s lemma (seen in the exercise sheet) implies that Tn → exp(1/λ) in L1 .
(6) We first compute the pointwise limit of the cdf of max(X1, . . . , Xn)/n^{1/λ}. First, for a ≤ 0
we have P(max(X1, . . . , Xn)/n^{1/λ} ≤ a) = 0. Next, for a > 0 and for n sufficiently large so
that a n^{1/λ} ≥ 1, by independence we have
P(max(X1, . . . , Xn)/n^{1/λ} ≤ a) = P(X1 ≤ a n^{1/λ}, . . . , Xn ≤ a n^{1/λ})
= P(X1 ≤ a n^{1/λ})^n
= (1 − P(X1 > a n^{1/λ}))^n
= (1 − 1/(a n^{1/λ})^λ)^n,
because P(X1 = a n^{1/λ}) = 0. Thus
P(max(X1, . . . , Xn)/n^{1/λ} ≤ a) = (1 − 1/(a^λ n))^n
= exp(n ln(1 − 1/(a^λ n)))
= exp(n(−1/(a^λ n) + o(1/n))),
so that
P(max(X1, . . . , Xn)/n^{1/λ} ≤ a) −→ exp(−a^{−λ}) as n → ∞.
Now observe that F(a) = exp(−a^{−λ}) · 1_{a≥0} is the cdf of a certain random variable Z (a
Fréchet law of parameter λ). Indeed, F has limit 0 at −∞, limit 1 at +∞, and is continuous
and weakly increasing. We conclude that max(X1, . . . , Xn)/n^{1/λ} converges in distribution to Z.
Question 4
[12 Points] Let (Ui )i≥1 be a sequence of independent and identically distributed random variables,
all following the uniform distribution on [0, 1]. Fix x0 ∈ (0, 1). We define by induction a sequence of
random variables (Xn )n≥0 as follows: X0 = x0 , and for n ≥ 0:
Xn+1 = (Xn/2) · 1_{Un+1 > Xn} + ((Xn + 1)/2) · 1_{Un+1 ≤ Xn}.
In other words,
Xn+1 = Xn/2 if Un+1 > Xn, and Xn+1 = (Xn + 1)/2 if Un+1 ≤ Xn.
Finally, define F0 = {∅, Ω} and Fn = σ(U1 , . . . , Un ) for n ≥ 1.
In this exercise, you may use without proof the following fact (seen in one of the training exercises):
Let X, Y be two real-valued random variables, and A be a σ-field. Assume that Y is independent of
A and that X is A-measurable. Then for any measurable function g : R2 → R+ , we have
E [g(X, Y ) | A] = h(X) a.s., where h(x) = E [g(x, Y )] .
(1) [4 Points] Show that (Xn )n≥0 is a (Fn )n≥0 -martingale.
(2) [2 Points] Show that (Xn )n≥0 converges almost surely and in L1 .
(3) [2 Points] Show that for every n ≥ 0 we have 2|Xn+1 − Xn | ≥ min(Xn , 1 − Xn ).
(4) [4 Points] Denote by X∞ the almost sure limit of (Xn )n≥0 . Show that X∞ follows a Bernoulli
distribution and find its parameter, justifying your answer.
Solution:
(1) First of all, using the fact that 0 ≤ x/2 ≤ 1 and 0 ≤ (x + 1)/2 ≤ 1 for every 0 ≤ x ≤ 1, we
readily check by induction that for every n ≥ 0 we have 0 ≤ Xn ≤ 1. As a consequence Xn is
bounded and thus integrable.
Next, by induction, we check that Xn is Fn-measurable:
– since X0 = x0 is constant, it is indeed F0-measurable.
– assume that Xn is Fn-measurable. Then by definition Xn+1 is σ(Un+1, Xn)-measurable, as
a measurable function of (Un+1, Xn). But both Xn and Un+1 are Fn+1-measurable, so
σ(Un+1, Xn) ⊂ Fn+1, which shows that Xn+1 is Fn+1-measurable.
Finally we check that for every n ≥ 0 we have E[Xn+1 | Fn] = Xn. To this end, write, by
linearity of conditional expectation and using the fact that Xn is Fn-measurable:
E[Xn+1 | Fn] = (Xn/2) · E[1_{Un+1 > Xn} | Fn] + ((Xn + 1)/2) · E[1_{Un+1 ≤ Xn} | Fn].
Using the fact given in the statement of the exercise twice, we get
E[Xn+1 | Fn] = (1 − Xn) · (Xn/2) + Xn · ((Xn + 1)/2) = Xn,
which completes the proof.
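The last identity is pure algebra; a quick numerical check (a sketch: h below is the function from the quoted fact, specialized to Y = Un+1 uniform on [0, 1] and g(x, u) = (x/2) 1_{u>x} + ((x+1)/2) 1_{u≤x}):

```python
def h(x):
    # h(x) = E[g(x, U)] = (1 - x) * (x / 2) + x * ((x + 1) / 2)
    return (1 - x) * (x / 2) + x * ((x + 1) / 2)

for x in (0.0, 0.25, 0.5, 0.9, 1.0):
    print(x, h(x))  # h(x) = x on [0, 1]: the martingale property
```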
(2) We have seen that (Xn)n≥0 takes its values in [0, 1], so it is a bounded martingale. It
therefore converges almost surely and in L1.
(3) We have either Xn+1 = Xn /2 or Xn+1 = (Xn + 1)/2, so that 2|Xn+1 − Xn | is equal to either
Xn or 1 − Xn . Thus 2|Xn+1 − Xn | is at least equal to the minimum of these two quantities.
(4) Since (Xn)n≥0 converges almost surely, Xn+1 − Xn → 0 almost surely; passing to the limit
in (3) then gives min(X∞, 1 − X∞) ≤ 0 almost surely. Since X∞ ∈ [0, 1] we conclude that
P(X∞ ∈ {0, 1}) = 1, so that X∞ follows a Bernoulli distribution.
Its parameter is equal to its mean E [X∞ ], which by L1 convergence is the limit of E [Xn ]. But
since (Xn ) is a martingale we have E [Xn ] = E [X0 ] = x0 for every n ≥ 0. We conclude that
X∞ is a Bernoulli random variable with parameter x0 .
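The Bernoulli(x0) limit can be seen in simulation; a sketch (x0 = 0.3, the number of steps, runs, and the seed are arbitrary choices):

```python
import random

random.seed(1)
x0 = 0.3  # starting point in (0, 1)

def run_chain(steps=100):
    # iterate X_{n+1} = (X_n + 1)/2 if U_{n+1} <= X_n, else X_n / 2
    x = x0
    for _ in range(steps):
        x = (x + 1) / 2 if random.random() <= x else x / 2
    return x

runs = 20_000
freq = sum(run_chain() > 0.5 for _ in range(runs)) / runs
print(freq)  # fraction of chains absorbed near 1; should be close to x0 = 0.3
```

After 100 steps each chain sits extremely close to 0 or 1, so thresholding at 0.5 recovers the limiting Bernoulli variable, whose parameter matches the martingale mean x0.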