
Stochastic Calculus

Midterm exam solutions

30.10.2018

Exercise 1:
a) f(x) takes positive values and we need to ensure that \(\int_{-\infty}^{\infty} f(x)\,dx = 1\). Substituting \(y = (x-a)/b\),
\[
\begin{aligned}
\int_{-\infty}^{\infty} f(x)\,dx &= \int_{-\infty}^{\infty} \frac{1}{2b}\, e^{-\frac{|x-a|}{b}}\,dx \\
&= \int_{-\infty}^{\infty} \frac{1}{2b}\, e^{-|y|}\, b\,dy \\
&= \frac{1}{2} \int_{-\infty}^{\infty} e^{-|y|}\,dy \\
&= \frac{1}{2}\left( \int_{-\infty}^{0} e^{-|y|}\,dy + \int_{0}^{\infty} e^{-|y|}\,dy \right) \\
&= \frac{1}{2}\left( \int_{-\infty}^{0} e^{y}\,dy + \int_{0}^{\infty} e^{-y}\,dy \right) \\
&= \frac{1}{2}\left( e^{y}\big|_{-\infty}^{0} - e^{-y}\big|_{0}^{\infty} \right) \\
&= \frac{1}{2}\big( (1-0) - (0-1) \big) = 1.
\end{aligned}
\]
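
As a quick numerical sanity check (a sketch, not part of the exam solution), the integral can be evaluated with scipy for arbitrarily chosen parameters; the values a = 1 and b = 2 below are assumptions for illustration only.

```python
import numpy as np
from scipy import integrate

# Laplace-type density f(x) = exp(-|x - a| / b) / (2b); a and b are
# illustrative choices, not values from the exam.
a, b = 1.0, 2.0
f = lambda x: np.exp(-np.abs(x - a) / b) / (2 * b)

# Integrating over the whole real line should give approximately 1.
total, _ = integrate.quad(f, -np.inf, np.inf)
print(total)  # ~1.0
```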
b) We can calculate the moment generating function
\[
\begin{aligned}
M(s) = E[e^{sX}] &= \int_{-\infty}^{\infty} e^{sx}\, \frac{1}{2b}\, e^{-\frac{|x-a|}{b}}\,dx \\
&= \int_{-\infty}^{\infty} e^{s(by+a)}\, \frac{1}{2b}\, e^{-|y|}\, b\,dy \\
&= \frac{e^{sa}}{2} \int_{-\infty}^{\infty} e^{sby} e^{-|y|}\,dy \\
&= \frac{e^{sa}}{2}\left( \int_{-\infty}^{0} e^{sby} e^{y}\,dy + \int_{0}^{\infty} e^{sby} e^{-y}\,dy \right) \\
&= \frac{e^{sa}}{2}\left( \int_{-\infty}^{0} e^{(sb+1)y}\,dy + \int_{0}^{\infty} e^{(sb-1)y}\,dy \right) \\
&= \frac{e^{sa}}{2}\left( \frac{1}{sb+1}\, e^{(sb+1)y}\Big|_{-\infty}^{0} + \frac{1}{sb-1}\, e^{(sb-1)y}\Big|_{0}^{\infty} \right) \\
&= \frac{e^{sa}}{2}\left( \frac{1}{sb+1} - 0 + 0 - \frac{1}{sb-1} \right) \\
&= \frac{e^{sa}}{2} \cdot \frac{(sb-1) - (sb+1)}{(sb+1)(sb-1)} \\
&= \frac{e^{sa}}{2} \cdot \frac{-2}{s^2 b^2 - 1} \\
&= \frac{e^{sa}}{1 - s^2 b^2},
\end{aligned}
\]
where we again substitute \(y = (x-a)/b\) and the integrals converge for \(|s| < 1/b\).
The first and second derivatives are given by
\[
M'(s) = \frac{e^{sa}\big( a(1 - s^2 b^2) + 2 s b^2 \big)}{(1 - s^2 b^2)^2}
\]
and
\[
M''(s) = \frac{e^{sa}\big( 2b^2 + 6 b^4 s^2 + a^2 (1 - b^2 s^2)^2 + a(4 b^2 s - 4 b^4 s^3) \big)}{(1 - b^2 s^2)^3},
\]
with \(M'(0) = a\) and \(M''(0) = 2b^2 + a^2\), which gives \(E[X] = a\) and \(\operatorname{Var}(X) = M''(0) - M'(0)^2 = 2b^2\).
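
The derivatives above can be double-checked symbolically; a minimal sketch using sympy, assuming only the MGF derived above:

```python
import sympy as sp

a, b, s = sp.symbols('a b s', positive=True)
M = sp.exp(s * a) / (1 - s**2 * b**2)   # MGF derived above

M1 = sp.diff(M, s)       # first derivative
M2 = sp.diff(M, s, 2)    # second derivative

print(sp.simplify(M1.subs(s, 0)))                      # a, i.e. E[X]
print(sp.simplify(M2.subs(s, 0)))                      # a**2 + 2*b**2
print(sp.simplify(M2.subs(s, 0) - M1.subs(s, 0)**2))   # 2*b**2, i.e. Var(X)
```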

10 points

Exercise 2:
a) Y can take three values satisfying Y ≥ 0: Y = 2, Y = 1 and Y = 0.

• Y = 2: This happens if X = 1 and Z = 1 and we have V (2) = 1.


• Y = 1: This happens if X = 1 and Z = 0 or X = 0 and Z = 1. We need P(X = 1 | Y = 1). From Bayes' rule
\[
\begin{aligned}
P(X=1 \mid Y=1) &= \frac{P(Y=1 \mid X=1)\,P(X=1)}{P(Y=1)} \\
&= \frac{P(Z=0)\,P(X=1)}{P(X=1)P(Z=0) + P(X=0)P(Z=1)} \\
&= \frac{(1-2p)\,\tfrac{1}{3}}{\tfrac{1}{3}(1-2p) + \tfrac{1}{3}p} \\
&= \frac{1-2p}{1-p}.
\end{aligned}
\]

We use this to calculate V (1):
\[
V(1) = 1 \cdot P(X=1 \mid Y=1) + 0 \cdot P(X=0 \mid Y=1) = \frac{1-2p}{1-p}.
\]
• Y = 0: This happens if X = 1 and Z = −1, X = 0 and Z = 0, or X = −1 and Z = 1. We need P(X = 1 | Y = 0) and P(X = −1 | Y = 0). By Bayes' rule
\[
\begin{aligned}
P(X=1 \mid Y=0) &= \frac{P(Y=0 \mid X=1)\,P(X=1)}{P(Y=0)} \\
&= \frac{P(Z=-1)\,P(X=1)}{P(X=1)P(Z=-1) + P(X=0)P(Z=0) + P(X=-1)P(Z=1)} \\
&= \frac{p\,\tfrac{1}{3}}{\tfrac{1}{3}p + \tfrac{1}{3}(1-2p) + \tfrac{1}{3}p} \\
&= \frac{\tfrac{p}{3}}{\tfrac{1}{3}} = p.
\end{aligned}
\]
The exact same calculation gives P(X = −1 | Y = 0) = p:
\[
\begin{aligned}
P(X=-1 \mid Y=0) &= \frac{P(Y=0 \mid X=-1)\,P(X=-1)}{P(Y=0)} \\
&= \frac{P(Z=1)\,P(X=-1)}{P(X=1)P(Z=-1) + P(X=0)P(Z=0) + P(X=-1)P(Z=1)} \\
&= \frac{\tfrac{p}{3}}{\tfrac{1}{3}} = p.
\end{aligned}
\]
This gives
\[
V(0) = 1 \cdot P(X=1 \mid Y=0) + 0 \cdot P(X=0 \mid Y=0) + (-1) \cdot P(X=-1 \mid Y=0) = 0.
\]
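
A Monte Carlo sketch of these conditional expectations, under the distributional assumptions the calculation above relies on (X uniform on {−1, 0, 1}, Z taking −1, 0, 1 with probabilities p, 1 − 2p, p, independent of X, and Y = X + Z); the value of p is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.2, 2_000_000

# Assumed model: X uniform on {-1, 0, 1}, Z in {-1, 0, 1} with
# probabilities (p, 1 - 2p, p), independent of X, and Y = X + Z.
X = rng.choice([-1, 0, 1], size=n)
Z = rng.choice([-1, 0, 1], size=n, p=[p, 1 - 2 * p, p])
Y = X + Z

print(X[Y == 1].mean(), (1 - 2 * p) / (1 - p))   # V(1) vs closed form
print(X[Y == 0].mean(), 0.0)                     # V(0) vs 0
print(X[Y == 2].mean(), 1.0)                     # V(2) vs 1
```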
b) Both X and Y are normally distributed. From Gaussian conditioning we know that X
conditional on Y is normally distributed with
\[
E[X \mid Y] = E[X] + \frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(Y)}\,(Y - E[Y])
\]
and
\[
\operatorname{Var}(X \mid Y) = \operatorname{Var}(X) - \frac{\operatorname{Cov}(X,Y)^2}{\operatorname{Var}(Y)}.
\]
In this particular question we have \(E[X] = \mu\), \(E[Y] = E[X + Z] = \mu\), \(\operatorname{Cov}(X, Y) = \operatorname{Cov}(X, X + Z) = \operatorname{Cov}(X, X) + \operatorname{Cov}(X, Z) = \sigma^2\) and \(\operatorname{Var}(Y) = \operatorname{Var}(X + Z) = \operatorname{Var}(X) + \operatorname{Var}(Z) = \sigma^2 + \sigma_z^2\). Define the constant \(c = \frac{\sigma^2}{\sigma^2 + \sigma_z^2}\). \(E[X \mid Y]\) can be simplified to
\[
E[X \mid Y] = \mu + \frac{\sigma^2}{\sigma^2 + \sigma_z^2}(Y - \mu) = \mu + c(Y - \mu) = (1-c)\mu + cY
\]
and the variance to
\[
\operatorname{Var}(X \mid Y) = \sigma^2 - \frac{\sigma^4}{\sigma^2 + \sigma_z^2} = \sigma^2(1 - c).
\]
From the previous part we have V (Y ) = E[X|Y ] = (1 − c)µ + cY . This can be rewritten
as V (Y ) = (1 − c)µ + c(X + Z). Hence
\[
\operatorname{Var}(X - V(Y)) = \operatorname{Var}\big((1-c)X - cZ - (1-c)\mu\big) = (1-c)^2\sigma^2 + c^2\sigma_z^2
\]
and
\[
\operatorname{Var}(V(Y)) = \operatorname{Var}\big((1-c)\mu + c(X+Z)\big) = c^2(\sigma^2 + \sigma_z^2) = c\sigma^2.
\]
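
A numerical sketch of these variance identities under illustrative parameter values (the choices of µ, σ and σ_z below are assumptions, not exam data):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, sigma_z, n = 1.0, 2.0, 1.5, 2_000_000

# X ~ N(mu, sigma^2), Z ~ N(0, sigma_z^2) independent, Y = X + Z.
X = rng.normal(mu, sigma, n)
Z = rng.normal(0.0, sigma_z, n)
Y = X + Z

c = sigma**2 / (sigma**2 + sigma_z**2)
V = (1 - c) * mu + c * Y                  # V(Y) = E[X | Y]

print(np.var(X - V), (1 - c)**2 * sigma**2 + c**2 * sigma_z**2)
print(np.var(V), c * sigma**2)
```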

c) The payoff for the investor is \(w = \alpha(X - V(Y)) = \alpha((1-c)X - (1-c)\mu - cZ)\), using the expression for \(V(Y)\) from the previous part. Conditional on \(X\), \(Aw\) is normally distributed and hence
\[
E[-e^{-Aw} \mid X] = -e^{E[-Aw \mid X] + \frac{1}{2}\operatorname{Var}(-Aw \mid X)}.
\]
We need to find the \(\alpha\) that maximizes \(E[Aw \mid X] - \frac{1}{2}\operatorname{Var}(-Aw \mid X)\). We have
\[
E[Aw \mid X] = E[A\alpha((1-c)X - (1-c)\mu - cZ) \mid X] = A\alpha(1-c)(X - \mu),
\]
since \(E[Z] = 0\), and
\[
\operatorname{Var}(-Aw \mid X) = \operatorname{Var}\big(-A\alpha((1-c)X - (1-c)\mu - cZ) \mid X\big) = A^2\alpha^2 c^2 \sigma_z^2.
\]
The investor wants to maximize
\[
A\alpha(1-c)(X - \mu) - \frac{1}{2}A^2\alpha^2 c^2 \sigma_z^2 = A\Big(\alpha(1-c)(X - \mu) - \frac{1}{2}A\alpha^2 c^2 \sigma_z^2\Big).
\]
This gives the first-order condition
\[
(1-c)(X - \mu) - A\alpha c^2 \sigma_z^2 = 0
\]
with the solution
\[
\alpha = \frac{(1-c)(X - \mu)}{A c^2 \sigma_z^2}.
\]
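
The optimizer can be checked against a grid search over the closed-form conditional expected utility; the parameter values (and the observed value x of X) below are illustrative assumptions.

```python
import numpy as np

mu, sigma, sigma_z, A, x = 1.0, 2.0, 1.5, 3.0, 2.5
c = sigma**2 / (sigma**2 + sigma_z**2)

def expected_utility(alpha):
    # Conditional on X = x, w = alpha(X - V(Y)) is normal with mean
    # alpha(1-c)(x - mu) and variance alpha^2 c^2 sigma_z^2, so
    # E[-exp(-A w) | X = x] has the closed form below.
    mean_w = alpha * (1 - c) * (x - mu)
    var_w = alpha**2 * c**2 * sigma_z**2
    return -np.exp(-A * mean_w + 0.5 * A**2 * var_w)

alpha_star = (1 - c) * (x - mu) / (A * c**2 * sigma_z**2)
grid = np.linspace(alpha_star - 1, alpha_star + 1, 2001)

print(alpha_star, grid[np.argmax(expected_utility(grid))])  # should agree
```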

10 points

Exercise 3:

a)
\[
\begin{aligned}
E[X_T \mid X_0 = x_1] &= \begin{pmatrix} 1 & 0 \end{pmatrix} \Pi^T \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \\
&= \begin{pmatrix} 1 & 0 \end{pmatrix} \frac{1}{2}\begin{pmatrix} -1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} (p_{11} - p_{12})^T & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} -1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \\
&= \frac{1}{2}\begin{pmatrix} (p_{11} - p_{12})^T + 1 & -(p_{11} - p_{12})^T + 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \\
&= \frac{(p_{11} - p_{12})^T + 1}{2}\, x_1 + \frac{-(p_{11} - p_{12})^T + 1}{2}\, x_2.
\end{aligned}
\]
We can simplify notation by introducing \(\delta = p_{11} - p_{12}\) and obtain
\[
E[X_T \mid X_0 = x_1] = \frac{\delta^T + 1}{2}\, x_1 + \frac{-\delta^T + 1}{2}\, x_2 = \frac{x_1 + x_2}{2} + \frac{\delta^T (x_1 - x_2)}{2}.
\]
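
A quick check of the closed form against a direct matrix power, assuming the symmetric transition matrix Π = [[p11, p12], [p12, p11]] that the diagonalization above corresponds to; the numerical values are illustrative.

```python
import numpy as np

p11, x1, x2, T = 0.7, 3.0, -1.0, 5
p12 = 1 - p11
Pi = np.array([[p11, p12], [p12, p11]])   # assumed symmetric two-state chain

# Direct computation: first row of Pi^T applied to the state values
lhs = (np.linalg.matrix_power(Pi, T) @ np.array([x1, x2]))[0]

# Closed form derived above
delta = p11 - p12
rhs = (x1 + x2) / 2 + delta**T * (x1 - x2) / 2

print(lhs, rhs)   # should match
```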

b) We can use the solution from the previous part and get
\[
\begin{aligned}
E\Big[\sum_{t=0}^{\infty} e^{-rt} X_t \,\Big|\, X_0 = x_1\Big] &= \sum_{t=0}^{\infty} e^{-rt}\Big( \frac{x_1 + x_2}{2} + \frac{\delta^t (x_1 - x_2)}{2} \Big) \\
&= \frac{x_1 + x_2}{2} \sum_{t=0}^{\infty} e^{-rt} + \frac{x_1 - x_2}{2} \sum_{t=0}^{\infty} e^{-rt} \delta^t \\
&= \frac{x_1 + x_2}{2} \cdot \frac{1}{1 - e^{-r}} + \frac{x_1 - x_2}{2} \cdot \frac{1}{1 - \delta e^{-r}}.
\end{aligned}
\]

Alternatively we can use the Kolmogorov equation and define
\[
V(X) = E\Big[\sum_{t=0}^{\infty} e^{-rt} X_t\Big] = X + e^{-r}\,\Pi\, V(X)
\]
with the solution
\[
V(X) = (I - e^{-r}\Pi)^{-1} X
\]
and
\[
E\Big[\sum_{t=0}^{\infty} e^{-rt} X_t \,\Big|\, X_0 = x_1\Big] = \begin{pmatrix} 1 & 0 \end{pmatrix} (I - e^{-r}\Pi)^{-1} X.
\]
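
Both expressions can be compared numerically; a sketch with illustrative values of p11, the state values, and the discount rate r:

```python
import numpy as np

p11, x1, x2, r = 0.7, 3.0, -1.0, 0.1
p12 = 1 - p11
delta = p11 - p12
Pi = np.array([[p11, p12], [p12, p11]])
X = np.array([x1, x2])

# Closed form from the geometric sums
closed_form = (x1 + x2) / 2 / (1 - np.exp(-r)) \
            + (x1 - x2) / 2 / (1 - delta * np.exp(-r))

# Resolvent form (1 0)(I - e^{-r} Pi)^{-1} X
resolvent = np.linalg.solve(np.eye(2) - np.exp(-r) * Pi, X)[0]

print(closed_form, resolvent)   # should match
```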

c) Start by finding \(F_2(s) = E[s^{T_1} \mid X_0 = x_2]\). This solves
\[
\begin{aligned}
F_2(s) &= s p_{12} + s p_{11} F_2(s) \\
(1 - s p_{11}) F_2(s) &= s p_{12} \\
F_2(s) &= \frac{s p_{12}}{1 - s p_{11}}.
\end{aligned}
\]
Use this to calculate \(F_1(s) = E[s^{T_1} \mid X_0 = x_1]\):
\[
\begin{aligned}
F_1(s) &= s p_{12} F_2(s) + s p_{11} F_1(s) \\
(1 - s p_{11}) F_1(s) &= s p_{12} F_2(s) \\
F_1(s) &= \frac{s p_{12}}{1 - s p_{11}}\, F_2(s) = \Big( \frac{s p_{12}}{1 - s p_{11}} \Big)^2.
\end{aligned}
\]

The first derivative is given by

\[
F_1'(s) = \frac{2 p_{12}^2\, s}{(1 - p_{11} s)^3}
\]

and the second derivative by

\[
F_1''(s) = \frac{2 p_{12}^2 (1 + 2 p_{11} s)}{(1 - p_{11} s)^4}.
\]

We need to evaluate both at s = 1 to obtain


\[
E[T_1] = F_1'(1) = \frac{2 p_{12}^2}{(1 - p_{11})^3} = \frac{2}{1 - p_{11}}
\]
and
\[
E[T_1^2 - T_1] = F_1''(1) = \frac{2 p_{12}^2 (1 + 2 p_{11})}{(1 - p_{11})^4} = \frac{2(1 + 2 p_{11})}{(1 - p_{11})^2},
\]
where we use \(p_{12} = 1 - p_{11}\) to simplify.
We find the variance by

\[
\begin{aligned}
\operatorname{Var}(T_1) &= E[T_1^2] - E[T_1]^2 = E[T_1^2 - T_1] + E[T_1] - E[T_1]^2 \\
&= \frac{2(1 + 2 p_{11})}{(1 - p_{11})^2} + \frac{2}{1 - p_{11}} - \Big( \frac{2}{1 - p_{11}} \Big)^2 = \frac{2 p_{11}}{(1 - p_{11})^2}.
\end{aligned}
\]
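
The moment calculations follow mechanically from the generating function, so they can be verified symbolically; a minimal sympy sketch (using p12 = 1 − p11):

```python
import sympy as sp

s, p11 = sp.symbols('s p11', positive=True)
p12 = 1 - p11
F1 = (s * p12 / (1 - p11 * s))**2        # generating function derived above

ET1 = sp.simplify(sp.diff(F1, s).subs(s, 1))                 # E[T1]
ET1sq_minus_ET1 = sp.simplify(sp.diff(F1, s, 2).subs(s, 1))  # E[T1^2 - T1]
VarT1 = sp.simplify(ET1sq_minus_ET1 + ET1 - ET1**2)

print(ET1)     # 2/(1 - p11)
print(VarT1)   # 2*p11/(1 - p11)**2
```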

10 points

Exercise 4:

a) We can see that all events happen with positive or zero probability, and we need to verify that the probabilities sum to 1. We can do so by induction. It obviously holds for \(N = 1\) with \(P(X = 2) = 2^{1-1} = 2^0 = 1\), and there are no other events with positive probability. Suppose it holds for \(N = a\); we show it also holds for \(N = a + 1\). There are only two events whose probabilities differ in the two cases, \(X = 2^a\) and \(X = 2^{a+1}\). We need to verify that \(P(X = 2^a) + P(X = 2^{a+1})\) is the same for both \(N = a\) and \(N = a + 1\). Start with \(N = a\) and obtain
\[
P(X = 2^a) + P(X = 2^{a+1}) = 2^{1-a} + 0 = 2^{1-a}
\]
and for \(N = a + 1\) we obtain
\[
P(X = 2^a) + P(X = 2^{a+1}) = 2^{-a} + 2^{1-(a+1)} = 2^{-a} + 2^{-a} = 2^{1-a}.
\]
The probability is the same and, as a result, if it is a probability distribution for \(N = a\), it is also a probability distribution for \(N = a + 1\).
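
The induction can also be checked directly by summing the probabilities for a range of N (a plain Python sketch, assuming the distribution P(X = 2^i) = 2^{-i} for i = 1, ..., N−1 and P(X = 2^N) = 2^{1−N} used above):

```python
# Probabilities should sum to 1 for every N >= 1.
for N in range(1, 11):
    total = sum(2.0**-i for i in range(1, N)) + 2.0**(1 - N)
    print(N, total)   # prints 1.0 for each N
```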

b) The expected value is given by


\[
E[X] = \Big( \sum_{i=1}^{N-1} 2^i\, 2^{-i} \Big) + 2^N\, 2^{1-N} = \Big( \sum_{i=1}^{N-1} 1 \Big) + 2 = (N - 1) + 2 = N + 1.
\]

When N → ∞, E[X] → ∞.
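
A one-line check of E[X] = N + 1 for several N, under the same assumed distribution:

```python
# E[X] = sum_{i=1}^{N-1} 2^i 2^{-i} + 2^N 2^{1-N} should equal N + 1.
for N in range(1, 11):
    EX = sum(2**i * 2.0**-i for i in range(1, N)) + 2**N * 2.0**(1 - N)
    print(N, EX, N + 1)   # the last two columns agree
```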

c) The probability of eventually running out of money conditional on wealth \(w\) is given by \(f_m(w)\). Define the event \(A\) as the event that the player eventually goes bankrupt. This gives
\[
f_m(w) = E[\mathbf{1}_A \mid w].
\]
The probability of eventually going bankrupt after playing the lottery one additional time is given by \(f_m(w + X_N - m)\) and we get
\[
f_m(w + X_N - m) = E[\mathbf{1}_A \mid w + X_N - m]
\]
and by the law of iterated expectations
\[
f_m(w) = E\big[ E[\mathbf{1}_A \mid w + X_N - m] \,\big|\, w \big].
\]
Therefore we have \(f_m(w) = E[f_m(w + X_N - m)]\). This gives the following difference equation
\[
f_m(w) = \sum_{i=1}^{N-1} f_m(w + 2^i - m)\, 2^{-i} + f_m(w + 2^N - m)\, 2^{1-N}.
\]
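
The difference equation can be sanity-checked by Monte Carlo. The sketch below assumes that "running out of money" means the wealth drops below the entry fee m (so the player can no longer pay), and it truncates the horizon at a finite number of rounds; these modelling details and all numerical values are illustrative assumptions, not given in the exam, so the two sides only agree approximately.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, max_rounds = 3, 5, 200        # illustrative choices (note m > E[X] = N + 1)

values = [2**i for i in range(1, N)] + [2**N]
probs = [2.0**-i for i in range(1, N)] + [2.0**(1 - N)]

def f_hat(w, n_paths=50_000):
    """Monte Carlo estimate of f_m(w): bankrupt if wealth < m before a round."""
    draws = rng.choice(values, size=(n_paths, max_rounds), p=probs)
    # Wealth held before each round (before paying the fee), starting at w.
    pre = w + np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(draws - m, axis=1)[:, :-1]], axis=1)
    return np.mean((pre < m).any(axis=1))

w = 6
lhs = f_hat(w)
rhs = sum(pr * f_hat(w + v - m) for v, pr in zip(values, probs))
print(lhs, rhs)   # approximately equal
```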

10 points
