
Prof. Jean-Paul Renne, Spring semester

Statistics and Econometrics II


Problem Set 1

Exercise 1 Law of iterated expectations


Show that (i) E(U) = E(E(U|W)) and that (ii) E(U|W) = 0 implies E(UW) = 0.

(i) Let us denote by f_U, f_W and f_{UW} the probability density functions (p.d.f.) of U, W and (U, W), respectively. We have:

\[ f_U(u) = \int f_{UW}(u, w) \, dw. \]

Besides, we have (Bayes' equality):

\[ f_{UW}(u, w) = f_{U|W}(u, w) \, f_W(w). \]

Therefore:

\begin{align*}
E(U) &= \int u f_U(u) \, du = \iint u f_{UW}(u, w) \, dw \, du = \iint u f_{U|W}(u, w) f_W(w) \, dw \, du \\
&= \int \underbrace{\left( \int u f_{U|W}(u, w) \, du \right)}_{E[U|W=w]} f_W(w) \, dw = E\left( E[U|W] \right).
\end{align*}

(Note that E[U|W = w] is a function of w, not of u.)

(ii) By the law of iterated expectations, and since W can be taken out of an expectation conditioned on W:

\[ E(UW) = E(E(UW|W)) = E(W \, E(U|W)). \]

Hence, if E(U|W) = 0, then E(UW) = 0.
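Both results are easy to check by simulation. Below is a minimal Monte Carlo sketch in Python (not part of the original solution); the joint specification of (U, W) is a hypothetical choice made only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Hypothetical joint specification: W ~ Exp(1) and U = W**2 + W*eps with
    # eps ~ N(0,1) independent of W, so E(U|W) = W**2 and E(E(U|W)) = E(W**2) = 2.
    W = rng.exponential(1.0, n)
    eps = rng.standard_normal(n)
    U = W**2 + W * eps

    # (i) E(U) and E(E(U|W)) should agree (both are close to 2 here).
    print(U.mean(), (W**2).mean())

    # (ii) With U0 = W*eps we have E(U0|W) = 0, hence E(U0*W) should be 0.
    U0 = W * eps
    print((U0 * W).mean())   # close to 0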

Exercise 2 P.d.f. and quantiles


Consider the random variable X that admits the probability density function (p.d.f.) f ,
assumed to be continuous, and the cumulative distribution function (c.d.f.) F . Let’s denote
by qα the quantile of X associated with probability α (i.e. F (qα ) = α).

(a) How can f be deduced from F?

(b) What is the c.d.f. of X conditional on X > qα? [Express it using F. Let's denote this conditional c.d.f. by Fα.]

(c) What is the p.d.f. of X conditional on X > qα? [Let's denote it by fα.]

(a) We have f = F', i.e. the p.d.f. is the first derivative of the c.d.f.

(b) The c.d.f. of X conditional on X > qα is the function:

\[ F_\alpha : x \mapsto P(X < x \mid X > q_\alpha). \]

We have:

\begin{align*}
F_\alpha(x) = P(X < x \mid X > q_\alpha) &= \frac{P(X < x; \, X > q_\alpha)}{P(X > q_\alpha)} \\
&= \frac{P(q_\alpha < X < x)}{P(X > q_\alpha)} = \frac{F(x) - F(q_\alpha)}{1 - F(q_\alpha)} \, \mathbb{I}_{\{q_\alpha \le x\}}.
\end{align*}

(c) This p.d.f. is the first derivative (w.r.t. x) of the c.d.f. computed in the previous question. Let us denote by fα this p.d.f.; we have:

\[ f_\alpha(x) = F_\alpha'(x) = \frac{f(x)}{1 - F(q_\alpha)} \, \mathbb{I}_{\{q_\alpha \le x\}}. \]
(It is easily checked that \( \int f_\alpha(x) \, dx = 1 \).)
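A quick numerical check (not in the original), assuming for illustration that X is standard normal with α = 0.9, and comparing the empirical conditional c.d.f. with the formula above:

    import numpy as np
    from scipy.stats import norm

    alpha = 0.9
    q = norm.ppf(alpha)                 # q_alpha, so that F(q_alpha) = alpha

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1_000_000)
    tail = x[x > q]                     # draws from X | X > q_alpha

    # Compare the empirical conditional c.d.f. with
    # F_alpha(t) = (F(t) - F(q_alpha)) / (1 - F(q_alpha)) for t >= q_alpha.
    for t in (1.5, 2.0, 2.5):
        empirical = (tail < t).mean()
        theoretical = (norm.cdf(t) - alpha) / (1 - alpha)
        print(t, empirical, theoretical)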

Exercise 3 Estimator properties

We consider a parameter θ and an estimator of it, denoted by θ̂.

(a) What is the name of the property of an estimator that guarantees E(θ̂) = θ?

(b) In the general case (that is, when E(θ̂) ≠ θ), give the relationship between the expected squared error E((θ̂ − θ)²), the variance of θ̂ and the expected bias of the estimator.

(c) If the variance of θ̂ is small, does it necessarily mean that θ̂ is a “precise” estimator of
θ?


(a) Unbiasedness.
(b)

\begin{align*}
E[(\hat{\theta} - \theta)^2] &= E[(\hat{\theta} - E[\hat{\theta}] + E[\hat{\theta}] - \theta)^2] \\
&= E[(\hat{\theta} - E[\hat{\theta}])^2 + 2(\hat{\theta} - E[\hat{\theta}])(E[\hat{\theta}] - \theta) + (E[\hat{\theta}] - \theta)^2] \\
&= E[(\hat{\theta} - E[\hat{\theta}])^2] + 2(E[\hat{\theta}] - \theta) E[\hat{\theta} - E[\hat{\theta}]] + (E[\hat{\theta}] - \theta)^2 \\
&= E[(\hat{\theta} - E[\hat{\theta}])^2] + 2(E[\hat{\theta}] - \theta)(E[\hat{\theta}] - E[\hat{\theta}]) + (E[\hat{\theta}] - \theta)^2 \\
&= E[(\hat{\theta} - E[\hat{\theta}])^2] + (E[\hat{\theta}] - \theta)^2 \\
&= \mathrm{Var}(\hat{\theta}) + \mathrm{Bias}^2.
\end{align*}

Alternative method: because θ is not random, we have Var(θ̂) = Var(θ̂ − θ). Using the definition of Var(θ̂ − θ), we have:

\[ \mathrm{Var}(\hat{\theta} - \theta) = E[(\hat{\theta} - \theta)^2] - [\underbrace{E(\hat{\theta} - \theta)}_{=\mathrm{Bias}}]^2. \]

Using Var(θ̂) = Var(θ̂ − θ), the previous equality leads to:

\[ \mathrm{Var}(\hat{\theta}) = E[(\hat{\theta} - \theta)^2] - \mathrm{Bias}^2. \]

(c) No. "Precision" is driven by both the variance of the estimator and its bias. Hence, even if the variance is low, the estimator can still be imprecise due to a large bias.
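The decomposition in (b) is easy to verify by simulation. A minimal sketch (the deliberately biased estimator, a shrunken sample mean, is a hypothetical choice made only for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n_obs, n_sims = 2.0, 50, 200_000

    # Hypothetical estimator: theta_hat = 0.9 * sample mean of N(theta, 1) data,
    # so Bias = E(theta_hat) - theta = -0.1 * theta.
    x = rng.normal(theta, 1.0, size=(n_sims, n_obs))
    theta_hat = 0.9 * x.mean(axis=1)

    mse = ((theta_hat - theta) ** 2).mean()
    var = theta_hat.var()
    bias = theta_hat.mean() - theta
    print(mse, var + bias**2)   # the two numbers should coincide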

Exercise 4 i.i.d. financial returns


We consider a financial asset whose value on day i is S_i. We set s_i = log(S_i / S_{i−1}) and we assume that s_i = µ + σ Z_i, where the Z_i are identically and independently distributed (i.i.d.) and are drawn from a Student distribution with ν degrees of freedom (i.e. Z_i ∼ i.i.d. t(ν)).

We consider an investor who buys this asset at date 0 and keeps it until date n. We denote by s̄_n the average of the s_i between periods 0 and n, that is:

\[ \bar{s}_n = \frac{1}{n} \sum_{i=1}^{n} s_i. \]

We recall that if Z ∼ t(ν), then E(Z) = 0 and Var(Z) = ν/(ν − 2) for ν > 2.
(a) How can s_i be interpreted?

(b) Express S_n as a function of s̄_n and of S_0.

(c) Assuming that n is large, give an approximation of the distribution of √n s̄_n.

(d) Show that P(S_n < S_0) = P(√n s̄_n < 0).

(e) Propose an approximation of the probability that the investor loses money by holding the asset between dates 0 and n (if n is large).


(a) s_i is the daily log-return of the asset (approximately its daily relative price change).


(b)

\begin{align*}
\frac{S_n}{S_0} &= \frac{S_n}{S_{n-1}} \, \frac{S_{n-1}}{S_{n-2}} \cdots \frac{S_1}{S_0} \\
\log \frac{S_n}{S_0} &= \log \frac{S_n}{S_{n-1}} + \log \frac{S_{n-1}}{S_{n-2}} + \cdots + \log \frac{S_1}{S_0} \\
\log S_n &= \log S_0 + s_n + s_{n-1} + \cdots + s_1 = \log S_0 + n \bar{s}_n \\
S_n &= e^{\log S_0 + n \bar{s}_n} = S_0 \, e^{n \bar{s}_n}.
\end{align*}


(c) By the CLT: √n (s̄_n − E(s_i)) ∼ N(0, Var(s_i)), with:

\begin{align*}
\mathrm{Var}(s_i) &= \mathrm{Var}(\mu + \sigma Z_i) = \sigma^2 \mathrm{Var}(Z_i) = \sigma^2 \frac{\nu}{\nu - 2}, \\
E(s_i) &= \mu + \underbrace{E(Z_i)}_{=0} = \mu.
\end{align*}

Hence:

\[ \sqrt{n} (\bar{s}_n - \mu) \sim N\left(0, \sigma^2 \frac{\nu}{\nu - 2}\right) \quad \Rightarrow \quad \sqrt{n} \, \bar{s}_n \sim N\left(\sqrt{n} \, \mu, \; \sigma^2 \frac{\nu}{\nu - 2}\right). \]
(d) Since log S_n = log S_0 + n s̄_n, we have log S_n − log S_0 = n s̄_n, and therefore:

\[ S_n < S_0 \;\Leftrightarrow\; \log S_n < \log S_0 \;\Leftrightarrow\; n \bar{s}_n < 0 \;\Leftrightarrow\; \sqrt{n} \, \bar{s}_n < 0. \]

(e) Since

\[ \sqrt{n} \, \bar{s}_n \sim N\left(\sqrt{n} \, \mu, \; \sigma^2 \frac{\nu}{\nu - 2}\right), \quad \text{we have} \quad \frac{\sqrt{n} \, \bar{s}_n - \sqrt{n} \, \mu}{\sqrt{\sigma^2 \frac{\nu}{\nu - 2}}} \sim N(0, 1). \]

We are interested in:

\begin{align*}
P(\sqrt{n} \, \bar{s}_n < 0) &= P\left( \sqrt{n} (\bar{s}_n - \mu) < -\sqrt{n} \, \mu \right) \\
&= P\Bigg( \underbrace{\frac{\sqrt{n} (\bar{s}_n - \mu)}{\sqrt{\sigma^2 \frac{\nu}{\nu - 2}}}}_{\text{standard normal}} < \frac{-\sqrt{n} \, \mu}{\sqrt{\sigma^2 \frac{\nu}{\nu - 2}}} \Bigg) \approx \Phi\Bigg( \frac{-\sqrt{n} \, \mu}{\sqrt{\sigma^2 \frac{\nu}{\nu - 2}}} \Bigg),
\end{align*}

where Φ denotes the c.d.f. of the standard normal distribution.
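The quality of this approximation can be gauged by simulation. A sketch follows (the parameter values µ, σ, ν and the horizon n are assumptions chosen only for illustration):

    import numpy as np
    from scipy.stats import norm, t

    mu, sigma, nu, n = 0.0005, 0.01, 5, 250   # hypothetical parameter values
    rng = np.random.default_rng(0)

    # Simulate daily log-returns s_i = mu + sigma * Z_i with Z_i ~ t(nu),
    # and estimate the probability that S_n < S_0, i.e. that sum(s_i) < 0.
    s = mu + sigma * t.rvs(nu, size=(50_000, n), random_state=rng)
    p_sim = (s.sum(axis=1) < 0).mean()

    # Normal approximation derived in (e).
    p_clt = norm.cdf(-np.sqrt(n) * mu / np.sqrt(sigma**2 * nu / (nu - 2)))
    print(p_sim, p_clt)   # the two probabilities should be close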

Exercise 5 Roulette and Central Limit Theorem


The pockets of the roulette wheel are numbered from 0 to 36. There are 18 black pockets, 18 red pockets and one green pocket (numbered 0). The probability that the ball stops in pocket i, i ∈ {0, 1, . . . , 36}, is 1/37. Each player chooses one color (black or red) and gives $1 to the casino. If the ball lands in a pocket whose color matches the player's bet, then the player receives $2. Otherwise, she gets nothing.

(a) Determine the payoff of the casino when a single player plays once.

(b) One player plays three times. Compute the probability that the casino loses money.

(c) There are 100 players. Each of them plays 100 times. The 10,000 resulting games are assumed to be independent. Propose an approximation of the distribution of the casino's payoff.

(d) Propose an analytical approximation of the probability that the casino loses money after n games, where n is a large number.

(a) If g is the casino's payoff on one game, we have:

• g = 1 with probability θ = 19/37 (the casino wins whenever the ball does not land on the player's color, i.e. in 19 of the 37 pockets);

• g = −1 with probability 1 − θ = 18/37.

Hence E(g) = 2θ − 1 and Var(g) = E(g²) − E(g)² = 1 − (2θ − 1)².

(b) The casino loses money if and only if it loses two or three of the three games. We have:

\[ P(\text{casino loses twice}) = C_3^2 (1 - \theta)^2 \theta \quad \text{and} \quad P(\text{casino loses three times}) = (1 - \theta)^3. \]

Hence, the casino loses money with a probability equal to C_3^2 (1 − θ)² θ + (1 − θ)³ ≈ 0.48.
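This closed form can be double-checked by brute-force enumeration of the 2³ win/loss patterns. A small sketch (exact arithmetic via fractions; not part of the original solution):

    from fractions import Fraction
    from itertools import product
    from math import comb

    theta = Fraction(19, 37)   # probability that the casino wins one game

    # Closed form from the solution above.
    p_closed = comb(3, 2) * (1 - theta)**2 * theta + (1 - theta)**3

    # Brute force: enumerate every sequence of three casino payoffs (+1 or -1).
    p_brute = Fraction(0)
    for pattern in product((1, -1), repeat=3):
        if sum(pattern) < 0:   # the casino's total payoff is negative
            p = Fraction(1)
            for g in pattern:
                p *= theta if g == 1 else 1 - theta
            p_brute += p

    print(p_closed, p_brute, float(p_closed))   # 24300/50653, about 0.48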

(c) We have n = 100 × 100 = 10,000 games. We denote by P the casino's total payoff: P = ∑_{i=1}^{n} g_i, where the g_i are assumed to be independent. The CLT yields:

\[ \sqrt{n} (\bar{g} - E(g)) \to N(0, \mathrm{Var}(g)), \]

which leads to (using P = n \bar{g}):

\[ P \approx N\big( n(2\theta - 1), \; n(1 - (2\theta - 1)^2) \big). \tag{1} \]

Numerically: P ≈ N(270, 100²).

(d) Using Eq. (1), we have:

\begin{align*}
P(P < 0) &= P\left( \frac{P - n(2\theta - 1)}{\sqrt{n(1 - (2\theta - 1)^2)}} < \frac{-n(2\theta - 1)}{\sqrt{n(1 - (2\theta - 1)^2)}} \right) \\
&\approx P\left( X < -\sqrt{n} \, \frac{2\theta - 1}{\sqrt{1 - (2\theta - 1)^2}} \right),
\end{align*}

where X ∼ N(0, 1) (by the CLT, (P − n(2θ − 1)) / √(n(1 − (2θ − 1)²)) converges to a standard normal distribution).

Hence:

\[ P(P < 0) \approx \Phi\left( -\sqrt{n} \, \frac{2\theta - 1}{\sqrt{1 - (2\theta - 1)^2}} \right), \]

where Φ denotes the c.d.f. of the standard normal distribution. For n = 10,000, this probability is about 0.35%.
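As a final check (not in the original), this figure can be reproduced both by simulation and from the formula above:

    import numpy as np
    from scipy.stats import norm

    theta, n = 19/37, 10_000
    rng = np.random.default_rng(0)

    # Simulate the casino's total payoff over n games, many times:
    # with W wins out of n games, the payoff is W - (n - W) = 2W - n.
    wins = rng.binomial(n, theta, size=1_000_000)
    p_sim = (2 * wins - n < 0).mean()

    # Analytical approximation from (d).
    m = 2 * theta - 1
    p_clt = norm.cdf(-np.sqrt(n) * m / np.sqrt(1 - m**2))
    print(p_sim, p_clt)   # both around 0.003-0.0035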
