Stochastic Target Problems with Controlled Loss

by B. Bouchard, R. Elie, and N. Touzi

Chris Hammond

March 7, 2009
Motivation: Quantile Hedging

Suppose we have d risky assets X and wealth process Y. We must
pay out a contingent claim g(X) at time T. We fix p ∈ [0, 1]. We
would like to find the minimum initial wealth y such that we can
find an investment strategy that will allow us to cover our
contingent claim at time T with probability p.

More precisely, X^ν_{t,x} is an R^d-valued process with
X^ν_{t,x}(t) = x, and ν_i tells us how much is invested in asset i.
Also, Y^ν_{t,x,y} is our wealth process with Y^ν_{t,x,y}(t) = y.
Given p ∈ [0, 1], we wish to find

  V̄(t, x, p) = inf{y ≥ 0 : P[Y^ν_{t,x,y}(T) ≥ g(X^ν_{t,x}(T))] ≥ p
                for some admissible control ν}.

For p = 1, this problem was solved by Soner and Touzi when the
controls are bounded.
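The objective can be illustrated numerically. The sketch below uses a hypothetical one-dimensional Black-Scholes market (all parameter values are illustrative, not from the talk): delta-hedge a call from an initial wealth y and estimate P[Y_T ≥ g(X_T)] by Monte Carlo. With continuous rebalancing and y equal to the Black-Scholes price the claim is covered with probability one; discrete rebalancing leaves a hedging error, and cutting the initial wealth to 80% of the price makes the success probability drop sharply — exactly the trade-off V̄ quantifies.

```python
import math
import random

# Hypothetical Black-Scholes market (illustrative parameters, not from the talk):
# one asset, dX = X(mu dt + sigma dW), claim g(X_T) = (X_T - K)^+, zero interest rate.
mu, sigma = 0.08, 0.2
S0, K, T = 100.0, 100.0, 1.0

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(s, tau):
    """Black-Scholes price of (X_T - K)^+ with time tau to maturity."""
    d1 = (math.log(s / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
    return s * norm_cdf(d1) - K * norm_cdf(d1 - sigma * math.sqrt(tau))

def success_prob(y0, n_paths=10000, n_steps=50, seed=1):
    """Estimate P[Y_T >= g(X_T)] when delta-hedging from initial wealth y0."""
    rng = random.Random(seed)
    dt = T / n_steps
    hits = 0
    for _ in range(n_paths):
        s, y = S0, y0
        for k in range(n_steps):
            tau = T - k * dt
            d1 = (math.log(s / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
            nu = norm_cdf(d1)                      # shares held (Black-Scholes delta)
            ds = s * (mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            y += nu * ds                           # self-financing: dY = nu dX
            s += ds
        if y >= max(s - K, 0.0):
            hits += 1
    return hits / n_paths

p_full = success_prob(bs_call(S0, T))           # start from the full hedging price
p_partial = success_prob(0.8 * bs_call(S0, T))  # start from 80% of it
print(p_full, p_partial)
```

The probabilities are estimates under discretization error, so p_full is below 1, but the gap between the two starting capitals is what matters.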
Outline

I Characterize V̄(t, x, p) = inf{y ≥ 0 : P[Y^ν_{t,x,y}(T) ≥
  g(X^ν_{t,x}(T))] ≥ p for some control ν}
I Reduce this to the case where p = 1 but with unbounded
  controls ν
I Extend the existing characterization of V(t, x) = inf{y ≥ 0 :
  G(X^ν_{t,x}(T), Y^ν_{t,x,y}(T)) ≥ 0 for some control ν} to the
  case of unbounded controls
I Apply this to the quantile hedging problem
Problem Formulation and MANY definitions

I Let T > 0 be a finite time horizon, W = {W_t : 0 ≤ t ≤ T} a
  d-dimensional Brownian motion on (Ω, F, P), with F = {F_t}
  the P-augmentation of the filtration generated by W
I Let U_0 be a subset of the F-progressively measurable
  processes ν which are in L^2([0, T]) P-almost surely and take
  values in a closed (possibly unbounded) subset U of R^d
I For t ∈ [0, T], z = (x, y) ∈ R^d × R and ν ∈ U_0, define
  Z^ν_{t,z} = (X^ν_{t,x}, Y^ν_{t,z}) as the R^d × R-valued solution of the SDE

    dX(r) = µ(X(r), ν_r) dr + σ(X(r), ν_r) dW(r)          (1)
    dY(r) = µ_Y(Z(r), ν_r) dr + σ_Y(Z(r), ν_r) dW(r)      (2)

  for t ≤ r ≤ T with Z(t) = (X(t), Y(t)) = (x, y), where
  (µ_Y, σ_Y) : R^d × R × U → R × R^d and
  (µ, σ) : R^d × U → R^d × M^d are locally Lipschitz and satisfy

    |µ_Y(x, y, u)| + |µ(x, u)| + |σ_Y(x, y, u)| + |σ(x, u)| ≤ K(x, y)(1 + |u|),

  where K is locally bounded
Problem Formulation and MANY definitions (continued)

I Let U denote the subset of U_0 allowing a strong solution of
  (1) and (2) for all initial data
I Let X be the interior of the support of X
I Given u ∈ U, let L^u denote the Dynkin operator, L^u φ(t, x) =
  ∂_t φ(t, x) + µ(x, u)·Dφ(t, x) + (1/2) Tr(σσ^T(x, u) D^2 φ(t, x))
I N^u(x, y, q) = σ_Y(x, y, u) − σ^T(x, u) q and
  N_0(x, y, q) = {u ∈ U : N^u(x, y, q) = 0}
I Let G : R^{d+1} → R be measurable such that, for all x ∈ R^d,
  the function y ↦ G(x, y) is non-decreasing and right continuous
I The Stochastic Target Problem is V(t, x) = inf{y ∈ R :
  G(X^ν_{t,x}(T), Y^ν_{t,x,y}(T)) ≥ 0 for some ν ∈ U}, or equivalently
  V(t, x) = inf{y ∈ R : Y^ν_{t,x,y}(T) ≥ g(X^ν_{t,x}(T)) for some ν ∈ U},
  where g(x) = inf{y : G(x, y) ≥ 0}
What is known

Soner and Touzi
When U is bounded, Soner and Touzi proved that V is a
discontinuous viscosity solution of

  sup{µ_Y(x, v(t, x), u) − L^u v(t, x) : u ∈ N_0(x, v(t, x), Dv(t, x))} = 0

What is a viscosity solution?
Let D(x, u(x), Du(x), D^2 u(x)) be a second-order elliptic
differential operator, i.e., if A − B is positive semidefinite, then
D(x, y, v, A) ≤ D(x, y, v, B). Then u is a viscosity supersolution
of D(x, u(x), Du(x), D^2 u(x)) ≥ 0 if for any x_0 and any smooth φ
for which x_0 is a local minimum of u − φ, we have
D(x_0, u(x_0), Dφ(x_0), D^2 φ(x_0)) ≥ 0.
Dynamic Programming Equation - Auxiliary Definitions and Assumptions

Since U is not necessarily bounded, we introduce relaxed
semilimits.

I Let Θ = (x, y, q, A) ∈ R^d × R × R^d × S^d and let
  N_ε(x, y, q) = {u ∈ U : |N^u(x, y, q)| ≤ ε}
I For ε > 0 let
  F_ε(Θ) = sup{µ_Y(x, y, u) − µ(x, u)·q − (1/2) Tr(σσ^T(x, u) A) : u ∈ N_ε(x, y, q)}.
  Then F^*(Θ) = lim sup_{ε↘0, Θ'→Θ} F_ε(Θ') and
  F_*(Θ) = lim inf_{ε↘0, Θ'→Θ} F_ε(Θ')

Assumption 2.1 (needed for the subsolution property)
Let B ⊂ X × R × R^d be such that N_0 ≠ ∅ on B. Then for each
ε > 0, (x_0, y_0, q_0) ∈ int B and u_0 ∈ N_0(x_0, y_0, q_0), there exists an
open neighborhood B' of (x_0, y_0, q_0) and a locally Lipschitz map ν̂
on B' such that |ν̂(x_0, y_0, q_0) − u_0| < ε and ν̂(x, y, q) ∈ N_0(x, y, q) on B'.

Assume also that V is locally bounded.
Dynamic Programming Equation

Theorem (2.1)
V_* is a viscosity supersolution of −∂_t V_* + F^* V_* ≥ 0 on [0, T) × X.
If Assumption 2.1 holds, then V^* is a viscosity subsolution of
−∂_t V^* + F_* V^* ≤ 0 on [0, T) × X.

Let N(x, y, q) = {r ∈ R^d : r = N^u(x, y, q) for some u ∈ U} and let
δ = dist(0, N^c) − dist(0, N).

Theorem (2.2)
The function x ∈ X ↦ V_*(T, x) is a viscosity supersolution of
min{(V_*(T, •) − g_*) 1_{{F^* V_*(T,•) < ∞}}, δ^* V_*(T, •)} ≥ 0 on X, and if
Assumption 2.1 holds, x ↦ V^*(T, x) is a viscosity subsolution of
min{(V^*(T, •) − g^*), δ_* V^*(T, •)} ≤ 0 on X.
Problem Reduction

I For α an F-progressively measurable, R^d-valued, P-almost
  surely square integrable process and p ∈ [0, 1], define P^α_{t,p} by
  P^α_{t,p}(t) = p and dP^α_{t,p}(s) = P^α_{t,p}(s)(1 − P^α_{t,p}(s)) α_s dW_s
  for s ∈ [t, T]
I Let X̄ = (X, P^α), X̄ = X × (0, 1), Ū = U × R^d
I Define Ḡ(x̄, y) = 1_{{G(x,y) ≥ 0}} − p.

Theorem
For t ∈ [0, T] and x̄ = (x, p) ∈ X̄, V̄(t, x̄) = inf{y ∈ R_+ :
Ḡ(X̄^ν̄_{t,x̄}(T), Y^ν_{t,x,y}(T)) ≥ 0 for some ν̄ ∈ Ū}
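The auxiliary process is easy to simulate. A minimal Euler sketch (constant α = 1 and all other values are hypothetical choices) shows the two features the reduction relies on: the diffusion coefficient p(1 − p)α vanishes at the boundary, so P^α_{t,p} stays in [0, 1], and P^α_{t,p} is a martingale, so E[P^α_{t,p}(T)] = p.

```python
import math
import random

# Euler simulation of dP = P(1 - P) alpha dW with constant alpha = 1
# (hypothetical parameters; the exact solution lives in [0,1]).
def simulate_P(p0, alpha=1.0, T=1.0, n_steps=200, n_paths=5000, seed=2):
    rng = random.Random(seed)
    dt = T / n_steps
    finals = []
    for _ in range(n_paths):
        p = p0
        for _ in range(n_steps):
            p += p * (1.0 - p) * alpha * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            p = min(max(p, 0.0), 1.0)   # clamp rare Euler overshoot near the boundary
        finals.append(p)
    return finals

finals = simulate_P(0.7)
mean_PT = sum(finals) / len(finals)
print(mean_PT)   # martingale property: E[P_T] should be close to p0 = 0.7
```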
Proof of Theorem

I Suppose y > (RHS). Then for some ν̄ ∈ Ū we have
  Ḡ(X̄^ν̄_{t,x̄}(T), Y^ν_{t,x,y}(T)) ≥ 0.
I Taking expectations in this inequality and using the fact that
  P^α_{t,p} is a martingale, we get
  E[1_{{G(X^ν_{t,x}(T), Y^ν_{t,x,y}(T)) ≥ 0}}] − p ≥ 0
I Conversely, suppose y > V̄(t, x, p). Then there exists ν ∈ U with
  P[G(X^ν_{t,x}(T), Y^ν_{t,x,y}(T)) ≥ 0] ≥ p
I By the stochastic integral representation theorem, there exists φ with
  1_{{G(X^ν_{t,x}(T), Y^ν_{t,x,y}(T)) ≥ 0}} =
  E[1_{{G(X^ν_{t,x}(T), Y^ν_{t,x,y}(T)) ≥ 0}}] + ∫_t^T φ_s dW_s
I Let τ be the first time that P^φ_{t,p} hits 0, and let α = φ 1_{[t,τ]}.
  Then 1_{{G(X^ν_{t,x}(T), Y^ν_{t,x,y}(T)) ≥ 0}} − P^α_{t,p}(T) ≥ 0
Dynamic Programming Equation

I µ̄(x̄, ū) = (µ(x, u), 0)^T and σ̄(x̄, ū) = (σ(x, u), α^T)^T
I For (y, q, A) ∈ R × R^{d+1} × S^{d+1} and ū = (u, α) ∈ Ū,
  N̄^ū(x̄, y, q) = σ_Y(x, y, u) − σ̄(x̄, ū)^T q = N^u(x, y, q_x) − q_p α
I N̄_ε(x̄, y, q) = {ū ∈ Ū : |N̄^ū(x̄, y, q)| ≤ ε}
I F̄_ε(x̄, y, q, A) =
  sup_{ū ∈ N̄_ε(x̄,y,q)} {µ_Y(x, y, u) − µ̄(x̄, ū)·q − (1/2) Tr(σ̄σ̄^T(x̄, ū) A)}
I N̄(x̄, y, q) = {N̄^ū(x̄, y, q) : ū ∈ Ū} and
  δ̄ = dist(0, N̄^c) − dist(0, N̄)
Dynamic Programming Equation (continued)

Assumption 3.1
Let B ⊂ X × [0, 1] × R × R^{d+1} be such that N̄_0 ≠ ∅ on B. Then for
every ε > 0, (x_0, p_0, y_0, q_0) ∈ int B and ū_0 ∈ N̄_0(x_0, p_0, y_0, q_0), there
exists an open neighborhood B' of (x_0, p_0, y_0, q_0) and a locally
Lipschitz map ν̂ on B' such that |ν̂(x_0, p_0, y_0, q_0) − ū_0| ≤ ε and
ν̂(x, p, y, q) ∈ N̄_0(x, p, y, q) on B'.

Corollary 3.1
The function V̄_* is a viscosity supersolution of −∂_t V̄_* + F̄^* V̄_* ≥ 0
on [0, T) × X̄. Under the additional Assumption 3.1, V̄^* is a
viscosity subsolution of min{V̄^*, −∂_t V̄^* + F̄_* V̄^*} ≤ 0 on [0, T) × X̄.
Boundary Conditions

Remark
Note that V̄(•, 1) = V and V̄(•, 0) = 0. Since G is non-decreasing
in y, 0 ≤ V̄_*(•, 0) ≤ V̄^*(•, 1) ≤ V^*.

Assumption 3.2
For all (x, y, q) ∈ X × (0, ∞) × R^d, the set N_0(x, y, q) is a proper
subset of U.

Assumption 3.3
For all compact subsets A of R^d × R × R^d × S^d, there exists
C > 0 such that F_ε(Θ) ≤ C(1 + ε^2) for all ε ≥ 0 and Θ ∈ A.
Boundary Conditions (continued)

Theorem 3.1
Assume the function sup_{u∈U} |σ(•, u)| is locally bounded on X and
that Assumption 3.1 holds. Then (i) under Assumption 3.2,
V̄^*(•, 0) = 0 on [0, T) × X and V̄_*(•, 0) = 0 on [0, T] × X; (ii)
under Assumption 3.3, V̄^*(•, 1) is a viscosity supersolution of
(2.10)-(2.19) on [0, T] × X. In particular, if Assumption 2.2 is
satisfied, then V̄^*(•, 1) = V̄_*(•, 1) = V_* = V^* on [0, T] × X.
Quantile Hedging Problem

I X = (0, ∞)^d
I µ(x, u) = µ(x), σ(x, u) = σ(x), with σ invertible and
  sup_{x∈X} |λ(x)| < ∞, where λ = σ^{-1} µ
I µ_Y(x, y, u) = u·µ(x), σ_Y(x, y, u) = σ^T(x) u
I G(x, y) = y − g(x) for g Lipschitz
I dX_{t,x}(s) = µ(X_{t,x}(s)) ds + σ(X_{t,x}(s)) dW_s, where X_{t,x}(t) = x
I Y^ν_{t,x,y}(s) = y + ∫_t^s ν_r dX_{t,x}(r)
I V(t, x) = E^{Q_{t,x}}[g(X_{t,x}(T))], where Q_{t,x} is the P-equivalent
  martingale measure defined by
  dQ_{t,x}/dP = exp(−(1/2) ∫_t^T |λ(X_{t,x}(s))|^2 ds − ∫_t^T λ(X_{t,x}(s)) dW_s)
I Define W^{Q_{t,x}} = W + ∫_t^• λ(X_{t,x}(s)) ds, the Q_{t,x}-Brownian
  motion defined on [t, T]
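In the constant-coefficient case (one asset with σ(x) = s̄x for a constant s̄, and g a call payoff; the parameter values below are hypothetical), the p = 1 price V(t, x) = E^{Q_{t,x}}[g(X_{t,x}(T))] can be checked by Monte Carlo against the Black-Scholes formula, since X is driftless under Q_{t,x}:

```python
import math
import random

# Constant-coefficient illustration (hypothetical values): under Q_{t,x}
# the asset is driftless, X_T = x exp(-sbar^2 T/2 + sbar sqrt(T) Z).
sbar, x, K, T = 0.2, 100.0, 100.0, 1.0

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

d1 = (math.log(x / K) + 0.5 * sbar**2 * T) / (sbar * math.sqrt(T))
bs_price = x * norm_cdf(d1) - K * norm_cdf(d1 - sbar * math.sqrt(T))

rng = random.Random(3)
n = 200_000
acc = 0.0
for _ in range(n):
    xT = x * math.exp(-0.5 * sbar**2 * T + sbar * math.sqrt(T) * rng.gauss(0.0, 1.0))
    acc += max(xT - K, 0.0)          # g(X_T) = (X_T - K)^+
mc_price = acc / n

print(mc_price, bs_price)            # the two estimates should agree closely
```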
Quantile Hedging Problem

DPE
By Corollary 3.1, Theorem 3.1 and (i) of Proposition 3.2, V̄_* is a
viscosity supersolution on [0, T) × X̄ of

  0 ≤ −∂_t V̄_* + F̄^* V̄_* = −∂_t V̄_* − (1/2) Tr(σσ^T D_{xx} V̄_*)
      − inf_{α∈R^d} {−α^T σ^{-1}µ D_p V̄_* + Tr(σα D_{xp} V̄_*) + (1/2)|α|^2 D_{pp} V̄_*}

Boundary Conditions
V̄_*(•, 1) = V and V̄_*(•, 0) = 0 on [0, T] × X, and V̄_*(T, x, p) ≥ p g(x)
on X × [0, 1]. We define V̄_*(•, p) = 0 for p < 0 and V̄_*(•, p) = ∞
for p > 1.

How do we solve this system?
They use the (standard) trick of taking the Legendre transform (in
the p-variable) to convert this to a linear problem. Let
v(t, x, q) = sup_{p∈R} {pq − V̄_*(t, x, p)} for
(t, x, q) ∈ [0, T] × (0, ∞)^d × R.
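A small grid sketch of the transform (everything here is a hypothetical illustration: the convex profile p ↦ p² on [0, 1] stands in for p ↦ V̄_*(t, x, p) at fixed (t, x)). Computing the transform and then transforming back recovers the original convex profile, which is why passing to v loses no information:

```python
# Discrete Legendre transform: v(q) = sup_p {p q - f(p)} over a grid in p.
def legendre(f, ps, q):
    return max(p * q - f(p) for p in ps)

ps = [i / 1000 for i in range(1001)]     # grid on [0, 1]
f = lambda p: p * p                      # hypothetical convex profile

# For f(p) = p^2 on [0,1]: v(q) = q^2/4 for 0 <= q <= 2 (optimizer p = q/2)
print(legendre(f, ps, 1.0))              # ~ 0.25

# Biconjugate: transforming back recovers the (convex, lsc) profile
qs = [i / 100 for i in range(301)]       # grid on [0, 3]
v = {q: legendre(f, ps, q) for q in qs}
f_back = lambda p: max(p * q - v[q] for q in qs)
print(f_back(0.6))                       # ~ 0.36 = f(0.6)
```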
Transformed Problem

I v(•, q) = ∞ for q < 0 and
  v(•, q) = sup_{p∈[0,1]} {pq − V̄_*(•, p)} for q > 0.
I (4.11) v is an upper-semicontinuous viscosity subsolution of
  −∂_t v − (1/2) Tr(σσ^T D_{xx} v) − (1/2)|λ|^2 q^2 D_{qq} v − Tr(σλ D_{xq} v) ≤ 0
I (4.12) v(T, x, q) ≤ (q − g(x))^+
I Upper semicontinuity of v follows from the lower
  semicontinuity of V̄_*.
I To prove (4.11), let (t_0, x_0, q_0) ∈ [0, T) × (0, ∞)^d × (0, ∞) and
  let φ be a smooth function such that v − φ has a local maximum at
  (t_0, x_0, q_0) with (v − φ)(t_0, x_0, q_0) = 0
I WLOG q ↦ φ(•, q) is strictly convex. Since v is convex in q,
  D_{qq} φ(t_0, x_0, q_0) ≥ 0. Given ε, η > 0, define φ_{ε,η}(t, x, q) =
  φ(t, x, q) + ε|q − q_0|^2 + η|q − q_0|^2 (|q − q_0|^2 + |t − t_0|^2 + |x − x_0|^2)
Transformed Problem (continued)

I (t_0, x_0, q_0) still maximizes v − φ_{ε,η}, and D_{qq} φ(t_0, x_0, q_0) ≥ 0
  implies D_{qq} φ_{ε,η}(t_0, x_0, q_0) ≥ 2ε > 0. Since φ has bounded
  derivatives, we can choose η large enough that
  D_{qq} φ_{ε,η}(t, x, q) > 0
I (4.11) for φ_{ε,η} implies (4.11) for φ, since we have
  convergence in C^2
I So assume q ↦ φ(•, q) is strictly convex. Let
  φ̃(t, x, p) = sup_{q∈R} {pq − φ(t, x, q)}. Since φ is strictly
  convex in q and smooth, φ̃ is strictly convex in p and smooth
Transformed Problem (continued)

I φ(t, x, q) = sup_{p∈R} {pq − φ̃(t, x, p)} =
  J(t, x, q) q − φ̃(t, x, J(t, x, q)) on
  (0, T) × (0, ∞)^d × (0, ∞) ⊂ int(dom(φ)), where q ↦ J(•, q)
  denotes the inverse of p ↦ D_p φ̃(•, p)
I Since q_0 > 0, we can find p_0 ∈ [0, 1] such that
  v(t_0, x_0, q_0) = p_0 q_0 − V̄_*(t_0, x_0, p_0), which implies that
  (t_0, x_0, p_0) is a minimizer of V̄_* − φ̃ with
  (V̄_* − φ̃)(t_0, x_0, p_0) = 0 and
  φ(t_0, x_0, q_0) = sup_{p∈R} {p q_0 − φ̃(t_0, x_0, p)} = p_0 q_0 − φ̃(t_0, x_0, p_0),
  with p_0 = J(t_0, x_0, q_0)
Solution of Transformed Problem

I Recall (4.11): v is an upper-semicontinuous viscosity
  subsolution of
  −∂_t v − (1/2) Tr(σσ^T D_{xx} v) − (1/2)|λ|^2 q^2 D_{qq} v − Tr(σλ D_{xq} v) ≤ 0
I Since (4.11) is linear, Feynman-Kac yields an upper bound, namely
  v(t, x, q) ≤ v̄(t, x, q) = E^{Q_{t,x}}[(Q_{t,x,q}(T) − g(X_{t,x}(T)))^+],
  where dQ(s)/Q(s) = λ(X_{t,x}(s)) dW_s and Q_{t,x,q}(t) = q
I v̄ provides a lower bound for V̄
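The Feynman-Kac bound can be estimated by Monte Carlo. The sketch below uses the constant-coefficient case (µ(x) = mx, σ(x) = s̄x, so λ = m/s̄ is constant; values hypothetical). Under Q_{t,x} the asset is driftless and dQ/Q = −|λ|² ds + λ dW^Q, so both terminal values are explicit functions of the same W^Q_T. Reusing one seed across the three values of q makes the pathwise convexity of q ↦ (Q_T − g(X_T))^+ carry over to the estimates exactly:

```python
import math
import random

# Monte Carlo sketch of v_bar(t,x,q) = E^{Q_{t,x}}[(Q_{t,x,q}(T) - g(X_{t,x}(T)))^+]
# in the constant-coefficient case (hypothetical parameters).
m, s, x, T = 0.08, 0.2, 100.0, 1.0
lam = m / s
g = lambda xT: max(xT - 100.0, 0.0)      # call payoff with strike 100

def v_bar(q, n=100_000, seed=4):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        w = math.sqrt(T) * rng.gauss(0.0, 1.0)           # W^Q_T
        xT = x * math.exp(-0.5 * s**2 * T + s * w)       # driftless under Q_{t,x}
        qT = q * math.exp(-1.5 * lam**2 * T + lam * w)   # dQ/Q = lam dW under P
        acc += max(qT - g(xT), 0.0)
    return acc / n

# v_bar is nondecreasing and convex in q (common random numbers):
vals = [v_bar(q) for q in (0.0, 5.0, 10.0)]
print(vals)
```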
Solution (continued)

I (4.15) v̄ is convex in q and there is a unique solution q̂ = q̂(t, x, p) of
  ∂v̄/∂q(t, x, q̂) = E^{Q_{t,x}}[Q_{t,x,1}(T) 1_{{Q_{t,x,q̂}(T) ≥ g(X_{t,x}(T))}}] =
  P[Q_{t,x,q̂}(T) ≥ g(X_{t,x}(T))] = p, where we used
  dP/dQ_{t,x} = Q_{t,x,1}(T)
I V̄(t, x, p) ≥ V̄_*(t, x, p) ≥ p q̂(t, x, p) − v̄(t, x, q̂(t, x, p)) =
  q̂(t, x, p)(p − E^{Q_{t,x}}[Q_{t,x,1}(T) 1_{{q̂(t,x,p) Q_{t,x,1}(T) ≥ g(X_{t,x}(T))}}]) +
  E^{Q_{t,x}}[g(X_{t,x}(T)) 1_{{q̂(t,x,p) Q_{t,x,1}(T) ≥ g(X_{t,x}(T))}}] =
  E^{Q_{t,x}}[g(X_{t,x}(T)) 1_{{q̂(t,x,p) Q_{t,x,1}(T) ≥ g(X_{t,x}(T))}}] =: y(t, x, p)
I By the Martingale Representation Theorem, there exists ν ∈ U with
  Y^ν_{t,x,y(t,x,p)}(T) ≥ g(X_{t,x}(T)) 1_{{q̂(t,x,p) Q_{t,x,1}(T) ≥ g(X_{t,x}(T))}}
I By (4.15), this implies that V̄(t, x, p) = y(t, x, p)
Derivation of DPE for Singular Stochastic Target Problems

Theorem (Geometric Dynamic Programming Principle)
Let (t, x) ∈ [0, T) × X and θ be a [t, T]-valued stopping time.
Then V(t, x) = inf{y ∈ R : Y^ν_{t,x,y}(θ) ≥
V(θ, X^ν_{t,x}(θ)) a.s. for some ν ∈ U}.

One obtains the DPE by using the GDPP and choosing stopping times
appropriately. The arguments adapt those in Soner and Touzi.
Open Questions and Future Work

I The authors find a way to solve a control problem where the
  constraints are satisfied with a fixed probability. Can this
  method be used to solve the corresponding problem for other
  constrained control problems?
I Are there other interesting applications of DPEs with
  unbounded controls?
I Can the regularity assumptions be further relaxed?
