Gilles Pagès
LPSM-Sorbonne-Université
DU Financial Engineering
April 2021
Very fast but also very unstable, especially when $J_h(\theta^*)$ is “small”.
Yet another local recursive zero search if $h \in C^1$ (Levenberg–Marquardt algorithm): let $\lambda_n > 0$, $n \ge 1$,
$$
\theta_{n+1} = \theta_n - \bigl[J_h(\theta_n) + \lambda_{n+1}\,\mathrm{Id}\bigr]^{-1} h(\theta_n), \qquad n \ge 0,
$$
turns out to be more stable\ldots{} by an appropriate choice of $\lambda_n$.
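As an illustration, here is a minimal Python sketch of this damped recursion; the function names, the toy map $h$ and the damping choice $\lambda_n = 1/n$ are my own assumptions, not part of the course material.

```python
import numpy as np

def levenberg_marquardt(h, jac, theta0, lambdas, n_iter=50):
    """Minimal sketch of the damped Newton (Levenberg-Marquardt) zero search
    theta_{n+1} = theta_n - [J_h(theta_n) + lambda_{n+1} Id]^{-1} h(theta_n).
    `h`, `jac` and the damping sequence `lambdas` are user-supplied."""
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    for n in range(n_iter):
        J = jac(theta)                      # Jacobian of h at theta_n
        A = J + lambdas(n + 1) * np.eye(d)  # damped Jacobian
        theta = theta - np.linalg.solve(A, h(theta))
    return theta

# Toy usage (hypothetical map): solve h(theta) = theta**3 - 1 = 0 in dimension 1
root = levenberg_marquardt(
    h=lambda t: t**3 - 1.0,
    jac=lambda t: np.atleast_2d(3.0 * t**2),
    theta0=np.array([5.0]),
    lambdas=lambda n: 1.0 / n,   # vanishing damping, an assumed choice
)
```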
$$\mathrm{Call}_{BS}(\ldots, \sigma, \ldots) - \mathrm{Call}^{M2M} = 0,$$
where $\mathrm{Call}^{M2M}$ is the marked-to-market (quoted) price.
The function is even in $\sigma$ and the equation has two opposite solutions.
As $\sigma < 0$ is meaningless, one considers, on the whole real line $\mathbb{R}$,
$$\sigma \longmapsto \mathrm{Call}_{BS}(\sigma^+).$$
[This is the actual algorithm with the “good choice” of
$$\sigma_0 = \sqrt{\tfrac{2}{T}\,\bigl(\log(s_0/K)\bigr)^2}$$
avoiding the negative side and ensuring a fast convergence$^{1}$.]
$^{1}$ S. Manaster, G. Koehler (1982). The Calculation of Implied Variance from the Black–Scholes Model: A Note, The Journal of Finance, 37(1):227–230.
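Assuming the zero search referred to here is the usual Newton iteration on $\sigma \mapsto \mathrm{Call}_{BS}(\sigma^+) - \mathrm{Call}^{M2M}$ (the setting in which the Manaster–Koehler seed is normally used), a minimal sketch could look as follows; the function names and the at-the-money safeguard are mine.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_bs(s0, K, r, T, sigma):
    """Black-Scholes call price; sigma is replaced by its positive part
    so that the map is defined on the whole real line."""
    sig = max(sigma, 0.0)
    if sig == 0.0:
        return max(s0 - K * math.exp(-r * T), 0.0)
    d1 = (math.log(s0 / K) + (r + 0.5 * sig**2) * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    return s0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def vega_bs(s0, K, r, T, sigma):
    sig = max(sigma, 1e-12)
    d1 = (math.log(s0 / K) + (r + 0.5 * sig**2) * T) / (sig * math.sqrt(T))
    return s0 * math.sqrt(T) * math.exp(-0.5 * d1**2) / math.sqrt(2.0 * math.pi)

def implied_vol(price_mkt, s0, K, r, T, n_iter=20):
    """Newton search started at the seed sigma_0 = sqrt((2/T) (log(s0/K))^2)."""
    sigma = math.sqrt(2.0 / T * math.log(s0 / K) ** 2)
    sigma = max(sigma, 0.1)   # assumed floor for at-the-money quotes (log(s0/K) = 0)
    for _ in range(n_iter):
        sigma -= (call_bs(s0, K, r, T, sigma) - price_mkt) / vega_bs(s0, K, r, T, sigma)
    return sigma
```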
Implicitation: Implied Correlation I
2-dim (correlated) Black-Scholes model:
$$X^i_t = x^i_0\, e^{\left(r - \frac{\sigma_i^2}{2}\right)t + \sigma_i W^i_t}, \qquad x^i_0,\ \sigma_i > 0, \quad i = 1, 2,$$
with $\langle W^1, W^2\rangle_t = \rho\, t$.
Best-of-Call payoff:
$$\bigl(\max(X^1_T, X^2_T) - K\bigr)_+.$$
Premium at time 0:
$$\text{Best-of-Call}_{BS}(\ldots, \rho, \ldots) = e^{-rT}\,\mathbb{E}\Bigl[\bigl(\max(X^1_T, X^2_T) - K\bigr)_+\Bigr].$$
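Since the payoff only involves the terminal values, the premium can be estimated by a plain Monte Carlo simulation of $(X^1_T, X^2_T)$; a minimal sketch (the function name and its arguments are my own).

```python
import numpy as np

def best_of_call_mc(x0, sigma, rho, r, T, K, n_paths=100_000, rng=None):
    """Monte Carlo estimate of e^{-rT} E[(max(X_T^1, X_T^2) - K)_+]
    in the 2-dim correlated Black-Scholes model above."""
    rng = np.random.default_rng(rng)
    g1 = rng.standard_normal(n_paths)
    g2 = rho * g1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)  # corr(g1, g2) = rho
    xT1 = x0[0] * np.exp((r - 0.5 * sigma[0]**2) * T + sigma[0] * np.sqrt(T) * g1)
    xT2 = x0[1] * np.exp((r - 0.5 * sigma[1]**2) * T + sigma[1] * np.sqrt(T) * g2)
    payoff = np.maximum(np.maximum(xT1, xT2) - K, 0.0)
    return np.exp(-r * T) * payoff.mean()
```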
Organized markets for such options are, in effect, markets for the correlation $\rho$.
The volatilities $\sigma_i$, $i = 1, 2$, are known from the vanilla option markets on $X^1$ and $X^2$.
How to “extract” the correlation $\rho$?
Deterministic algo(s):
Except that we have no (simple) closed form for the B-S price and its $\rho$-derivative.
The correlation $\rho \in [-1, 1]$. Projections are possible but\ldots
What to do?
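One illustrative possibility, not necessarily the route developed in the sequel: freeze the random numbers and run a bracketing search on $\rho \in [-1, 1]$ against the Monte Carlo price sketched above, using the fact that the best-of-call price decreases in $\rho$. The helper below and its arguments are my own.

```python
def implied_corr(price_mkt, x0, sigma, r, T, K, n_paths=200_000, tol=1e-4, seed=0):
    """Bracketing search for rho in [-1, 1] matching the quoted price price_mkt,
    reusing best_of_call_mc from the sketch above. The same seed is used at
    every rho (common random numbers), so the estimated price is a smooth,
    decreasing function of rho and a bisection applies."""
    def price(rho):
        return best_of_call_mc(x0, sigma, rho, r, T, K, n_paths, rng=seed)

    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if price(mid) > price_mkt:   # price decreases in rho => true rho is larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```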
and
$$
V(\xi) \;=\; \xi + \frac{1}{1-\alpha}\,\mathbb{E}\bigl[(X - \xi)_+\bigr]
\;\ge\; \xi + \frac{1}{1-\alpha}\bigl(\mathbb{E}X - \xi\bigr)
\qquad \text{by Jensen's inequality}
$$
$$
\;=\; -\frac{\alpha}{1-\alpha}\,\xi + \frac{1}{1-\alpha}\,\mathbb{E}X
\;\longrightarrow\; +\infty \quad \text{as } \xi \to -\infty.
$$
By exchanging differentiation and $\mathbb{E}$, we get
$$
V'(\xi) = 1 - \frac{1}{1-\alpha}\,\mathbb{P}(X > \xi).
$$
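Both $V(\xi)$ and $V'(\xi)$ are plain expectations, so they can be estimated from i.i.d. samples of $X$; a minimal sketch, where the Gaussian toy check at the end is an assumption of mine.

```python
import numpy as np

def V_and_derivative(xi, x_samples, alpha):
    """Monte Carlo estimates of V(xi) = xi + E[(X - xi)_+]/(1 - alpha)
    and V'(xi) = 1 - P(X > xi)/(1 - alpha) from i.i.d. samples of X."""
    x = np.asarray(x_samples, dtype=float)
    v = xi + np.maximum(x - xi, 0.0).mean() / (1.0 - alpha)
    dv = 1.0 - (x > xi).mean() / (1.0 - alpha)
    return v, dv

# Toy check with (assumed) standard Gaussian X and alpha = 0.95:
# at the critical point of V one has P(X > xi) = 1 - alpha, i.e. xi ~ 1.6449,
# so the estimated derivative should be close to 0 there.
rng = np.random.default_rng(0)
samples = rng.standard_normal(1_000_000)
print(V_and_derivative(1.6449, samples, alpha=0.95))
```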
Huge dataset $(z_k)_{k=1:N}$ of possibly high dimension $d$: $N \simeq 10^6$, even $10^9$, and $d \simeq 10^3$.
[Image, profile, text, \ldots]
Local loss function: $v(\theta, z)$.
Global loss function:
$$
V(\theta) = \frac{1}{N}\sum_{k=1}^{N} v(\theta, z_k)
\qquad\text{with gradient}\qquad
\nabla V(\theta) = \frac{1}{N}\sum_{k=1}^{N} \nabla_\theta v(\theta, z_k).
$$
$$
\min_{\theta \in \Theta} V(\theta).
$$
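With $N$ of order $10^6$–$10^9$, computing $\nabla V(\theta)$ exactly at every iteration is expensive; one standard way to attack the minimization at this scale is stochastic gradient descent, where a single $\nabla_\theta v(\theta, z_k)$ replaces the full average. A minimal sketch (the names and the shuffling strategy are my own choices).

```python
import numpy as np

def sgd(grad_v, z_data, theta0, step, n_epochs=5, rng=None):
    """Minimal stochastic gradient descent on V(theta) = (1/N) sum_k v(theta, z_k):
    at each step, one data point z_k replaces the full gradient (1/N) sum_k grad v."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float).copy()
    n = 0
    for _ in range(n_epochs):
        for k in rng.permutation(len(z_data)):   # one pass over the shuffled data
            n += 1
            theta -= step(n) * grad_v(theta, z_data[k])
    return theta
```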
Prediction/loss function (local): with $z_k = (x_k, y_k)$,
$$
v(\theta, z_k) = \tfrac{1}{2}\bigl(f(\theta, x_k) - y_k\bigr)^2, \qquad k = 1{:}N,
$$
so that
$$
\nabla_\theta v(\theta, z) = \nabla_\theta f(\theta, x)^{\top}\bigl(f(\theta, x) - y\bigr).
$$
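For a concrete instance of this gradient, take the (hypothetical) linear prediction $f(\theta, x) = \theta \cdot x$, so that $\nabla_\theta v(\theta, (x, y)) = (\theta \cdot x - y)\,x$; this plugs directly into the `sgd` sketch above. All data below is synthetic.

```python
import numpy as np

def grad_v_regression(theta, z):
    """Gradient of v(theta, (x, y)) = 0.5 * (f(theta, x) - y)^2 for the
    (assumed) linear prediction f(theta, x) = theta . x."""
    x, y = z
    residual = theta @ x - y
    return residual * x

# Usage with the sgd sketch above on synthetic data:
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3))
theta_true = np.array([1.0, -2.0, 0.5])
Y = X @ theta_true + 0.1 * rng.standard_normal(1000)
data = list(zip(X, Y))
theta_hat = sgd(grad_v_regression, data, theta0=np.zeros(3), step=lambda n: 1.0 / (100 + n))
```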
Only inputs: $z_k = x_k \in \mathbb{R}^d$, $k = 1{:}N$.
Prototype parameter set: $\theta := (\theta_1, \ldots, \theta_r) \in \Theta = (\mathbb{R}^d)^r$, $r \in \mathbb{N}$.
(An example of) Local loss function: nearest neighbor among the prototypes: for $x \in \mathbb{R}^d$, $\theta \in \Theta$,
$$
v(\theta, x) = \tfrac{1}{2}\,\min_{i=1:r} |\theta_i - x|^2 = \tfrac{1}{2}\,\mathrm{dist}\bigl(x, \{\theta_1, \ldots, \theta_r\}\bigr)^2.
$$
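For this loss, $\nabla_\theta v(\theta, x)$ vanishes (a.e.) except at the prototype nearest to $x$, where it equals $\theta_{i^*} - x$; plugged into the `sgd` sketch above, this gives a stochastic gradient update that only moves the winning prototype. The illustration and its synthetic data are my own.

```python
import numpy as np

def grad_v_quantization(theta, x):
    """(A.e.) gradient of v(theta, x) = 0.5 * min_i |theta_i - x|^2:
    only the prototype nearest to x gets the gradient theta_i* - x."""
    i_star = np.argmin(np.sum((theta - x) ** 2, axis=1))  # winning prototype
    g = np.zeros_like(theta)
    g[i_star] = theta[i_star] - x
    return g

# Usage with the sgd sketch above: theta is an (r, d) array of r prototypes in R^d.
rng = np.random.default_rng(2)
data = rng.standard_normal((5000, 2))                         # synthetic inputs x_k in R^2
theta0 = data[rng.choice(len(data), size=4, replace=False)]   # r = 4 prototypes
prototypes = sgd(grad_v_quantization, data, theta0, step=lambda n: 1.0 / (50 + n))
```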