Interest Rate Modelling
Sebastian Schlenkrich
WS, 2019/20
Part VII
Sensitivity Calculation
Outline
Why do we need sensitivities?
Derivative pricing is based on hedging and risk replication
Recall the fundamental derivative replication result.
Consider the portfolio π(t) = V(t, X(t)) − φ(t)⊤ X(t) and apply Itô's lemma.
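As a reminder of the step the slide alludes to (a sketch of the standard argument, under the usual self-financing assumption for the hedge portfolio):

    dπ(t) = dV(t, X(t)) − φ(t)⊤ dX(t)
          = [∂V/∂t] dt + [∇X V − φ(t)]⊤ dX(t) + ½ dX(t)⊤ [HX V] dX(t),

so the choice φ(t) = ∇X V(t, X(t)) removes the first-order exposure to dX(t); the hedge ratios are exactly the first-order sensitivities (Deltas) of V.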
Market risk calculation relies on accurate sensitivities
Consider portfolio value π(t), time horizon ∆t and returns ∆π(t) = π(t + ∆t) − π(t).
Market risk measure Value at Risk (VaR) is the lower quantile q of the distribution of
portfolio returns ∆π(t) given a confidence level 1 − α, formally P[∆π(t) ≤ q] = α.
A second-order (Delta-Gamma) approximation of the return is

∆π ≈ [∇X π(X)]⊤ ∆X + ½ ∆X⊤ [HX π(X)] ∆X.
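A minimal sketch (my own illustration, not lecture code) of how the Delta-Gamma approximation can feed a simulation-based VaR estimate; the Delta vector, Gamma matrix, risk-factor covariance and confidence level below are made-up inputs.

    import numpy as np

    def delta_gamma_var(delta, gamma, cov, alpha=0.01, n_samples=100_000, seed=42):
        """Estimate VaR from a Delta-Gamma approximation of the portfolio return.

        delta : (n,) first-order sensitivities (nabla_X pi)
        gamma : (n, n) second-order sensitivities (H_X pi)
        cov   : (n, n) covariance of risk factor changes Delta X over the horizon
        alpha : lower tail probability, i.e. confidence level 1 - alpha
        """
        rng = np.random.default_rng(seed)
        dX = rng.multivariate_normal(np.zeros(len(delta)), cov, size=n_samples)
        dPi = dX @ delta + 0.5 * np.einsum("ij,jk,ik->i", dX, gamma, dX)
        q = np.quantile(dPi, alpha)        # lower quantile of simulated returns
        return -q                          # report VaR as a positive loss number

    # example with two risk factors (illustrative numbers only)
    delta = np.array([100.0, -50.0])
    gamma = np.array([[2.0, 0.5], [0.5, 1.0]])
    cov = np.array([[1.0e-4, 2.0e-5], [2.0e-5, 5.0e-5]])
    print("99% VaR estimate:", delta_gamma_var(delta, gamma, cov, alpha=0.01))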
Par rate Delta and Gamma are sensitivities w.r.t. changes in market rates I
◮ For multiple projection and discounting yield curves, sensitivities are calculated
for each curve individually.
Par rate Delta and Gamma are sensitivities w.r.t. changes in market rates II
The parallel Gamma aggregates the bucketed Gammas Γ_R = [1bp]² · (∂²V/∂R_k²)_k via

Γ̄_R = 1⊤ Γ_R = [1bp]² · Σ_k ∂²V/∂R_k² ≈ V(R̄ + 1bp · 1) − 2 V(R̄) + V(R̄ − 1bp · 1).
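To make the bump-and-reprice recipe concrete, here is a small sketch (my own illustration, with a toy pricing function standing in for V(R)) of parallel Delta and Gamma from 1bp shifts of the par rate vector:

    import numpy as np

    def price(R):
        # toy stand-in for a pricing function V(R); any smooth function of the par rates works
        return 1000.0 * np.exp(-np.sum(R)) + 50.0 * np.sum(R ** 2)

    bp = 1e-4
    R_bar = np.full(10, 0.02)          # flat 2% par rate curve with 10 buckets
    one = np.ones_like(R_bar)

    # parallel Delta per 1bp: central difference of a parallel 1bp shift
    delta_parallel = (price(R_bar + bp * one) - price(R_bar - bp * one)) / 2.0
    # parallel Gamma: second difference under a parallel 1bp shift, as on the slide
    gamma_parallel = price(R_bar + bp * one) - 2.0 * price(R_bar) + price(R_bar - bp * one)

    print("parallel Delta (per 1bp):  ", delta_parallel)
    print("parallel Gamma (per 1bp^2):", gamma_parallel)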
Vega is the sensitivity w.r.t. changes in market volatilities
Bucketed ATM Normal Volatility Vega
Denote σ̄ = [σ_N^{k,l}] the matrix of market-implied at-the-money normal volatilities for
expiries k = 1, . . . , q and swap terms l = 1, . . . , r. The Bucketed ATM Normal Volatility
Vega of an instrument with model price V = V(σ̄) is specified as

Vega = 1bp · [∂V/∂σ_N^{k,l}]_{k=1,...,q, l=1,...,r}.
Outline
Crucial part of sensitivity calculation is the evaluation or approximation of partial derivatives
Consider again general pricing function V = V (p) in terms of a scalar
parameter p. Assume differentiability of V w.r.t. p and sensitivity
∆V = dV(p)/dp · ∆p.
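In practice dV(p)/dp is often approximated by bumping p and repricing. A minimal sketch (illustrative only, with a toy pricing function):

    def dV_dp_forward(V, p, h=1e-6):
        # one-sided (forward) finite difference
        return (V(p + h) - V(p)) / h

    def dV_dp_central(V, p, h=1e-6):
        # central finite difference: higher order accuracy, but needs an up and a down bump
        return (V(p + h) - V(p - h)) / (2.0 * h)

    V = lambda p: p ** 3              # toy pricing function
    print(dV_dp_forward(V, 2.0))      # approx. 12
    print(dV_dp_central(V, 2.0))      # approx. 12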
We do a case study for European swaption Vega I
Recall pricing function
V^Swpt = Ann(t) · Bachelier(S(t), K, σ √(T − t), φ)

with

Bachelier(F, K, ν, φ) = ν · [Φ(h) · h + Φ′(h)],   h = φ [F − K] / ν.

First, analyse the Bachelier formula. We get

d/dν Bachelier(ν) = Bachelier(ν)/ν + ν [Φ′(h) h + Φ(h) − Φ′(h) h] dh/dν
                  = Bachelier(ν)/ν + ν Φ(h) dh/dν.

With dh/dν = −h/ν follows

d/dν Bachelier(ν) = Φ(h) · h + Φ′(h) − Φ(h) · h = Φ′(h).
We do a case study for European swaption Vega II
d²/dν² Bachelier(ν) = −h Φ′(h) dh/dν = (h²/ν) Φ′(h).

d³/dν³ Bachelier(ν) = (h² − 3) (h²/ν²) Φ′(h).
We do a case study for European swaption Vega III
d/dσ V^Swpt = Ann(t) · d/dν Bachelier(ν) · √(T − t).
Test case
◮ Rates flat at 5%, implied normal volatilities flat at 100bp.
◮ 10y into 10y European payer swaption (call on swap rate).
◮ Strike at 5% + 100bp · √(10y) · √3 = 10.48% (maximizing Volga).
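A small sketch of the test case numbers (my own code, not the lecture's): the Bachelier formula from the previous slide, its analytic Vega Φ′(h) · √(T − t) per the chain rule above, and the strike choice; the annuity is set to 1 for illustration.

    import math

    def norm_pdf(x):
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    def norm_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def bachelier(F, K, nu, phi):
        h = phi * (F - K) / nu
        return nu * (norm_cdf(h) * h + norm_pdf(h))

    # 10y into 10y payer swaption, rates flat at 5%, normal vols flat at 100bp
    S, sigma, T, annuity, phi = 0.05, 0.01, 10.0, 1.0, +1.0
    K = S + sigma * math.sqrt(T) * math.sqrt(3.0)   # strike from the slide, approx. 10.48%
    nu = sigma * math.sqrt(T)

    price = annuity * bachelier(S, K, nu, phi)
    h = phi * (S - K) / nu
    vega = annuity * norm_pdf(h) * math.sqrt(T)     # d/dsigma = Phi'(h) * sqrt(T - t)
    print(f"strike = {K:.4%}, price = {price:.6f}, Vega = {vega:.6f}")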
What is the problem with finite difference approximation? I
|RelErr| = | Vega_FD / Vega − 1 |
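A sketch (my own illustration) of the issue: comparing the central finite difference Vega against the analytic Vega for a range of bump sizes shows the truncation error for large bumps and amplified round-off error for very small bumps.

    import math

    def norm_pdf(x):
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    def norm_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def swaption_price(sigma, S=0.05, K=0.1048, T=10.0, phi=1.0):
        # Bachelier price with annuity 1; K is the test-case strike from the previous slide
        nu = sigma * math.sqrt(T)
        h = phi * (S - K) / nu
        return nu * (norm_cdf(h) * h + norm_pdf(h))

    def analytic_vega(sigma, S=0.05, K=0.1048, T=10.0, phi=1.0):
        h = phi * (S - K) / (sigma * math.sqrt(T))
        return norm_pdf(h) * math.sqrt(T)

    sigma0 = 0.01
    vega = analytic_vega(sigma0)
    for exp in range(3, 13):
        d = 10.0 ** (-exp)                          # bump size for sigma
        vega_fd = (swaption_price(sigma0 + d) - swaption_price(sigma0 - d)) / (2.0 * d)
        rel_err = abs(vega_fd / vega - 1.0)
        print(f"bump = 1e-{exp:02d}, |RelErr| = {rel_err:.3e}")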
What is the problem with finite difference approximation? II
Outline
Derivative pricing usually involves model calibration
Consider the swap pricing function V^Swap as a function of yield curve model parameters z, i.e.

V^Swap = V^Swap(z).

Model parameters z are themselves derived from market quotes R for par swaps and FRAs. That is

z = z(R).
The par rate Delta then becomes

∆R = 1bp · dV^Swap/dz (z(R)) · dz/dR (R),

where the first factor dV^Swap/dz is the Pricing part and the second factor dz/dR is the Calibration part.
Can we calculate the calibration Jacobian more efficiently?
Theorem (Implicit Function Theorem)
Let H : R^q × R^r → R^q be a continuously differentiable function with
H(z̄, R̄) = 0 for some pair (z̄, R̄). If the Jacobian

J_z = dH/dz (z̄, R̄)

is invertible, then there exists an open domain U ⊂ R^r with R̄ ∈ U and a
continuously differentiable function g : U → R^q with

H(g(R), R) = 0   ∀ R ∈ U.

Moreover,

dg(R)/dR = − [dH/dz (g(R), R)]^{−1} · [dH/dR (g(R), R)].
Proof.
See Analysis.
How does the Implicit Function Theorem help for sensitivity calculation? I
How does the Implicit Function Theorem help for sensitivity calculation? II
If the pair (z̄, R̄) solves the calibration problem H(z̄, R̄) = 0 and dH/dz (z̄, R̄) is invertible,
then there exists a function

z = z(R)

in a vicinity of R̄ and

dz/dR (R) = − [dH/dz (g(R), R)]^{−1} · [dH/dR (g(R), R)].

Reformulation of the calibration helpers gives

dH/dz (g(R), R) = [ d/dz ModelRate_1(z) ; . . . ; d/dz ModelRate_q(z) ]   (one row per calibration helper), and

dH/dR (g(R), R) = diag(−1, . . . , −1) = −I.
How does the Implicit Function Theorem help for sensitivity calculation? III
Consequently

dz/dR (R) = [dH/dz (g(R), R)]^{−1} = [ d/dz ModelRate_1(z) ; . . . ; d/dz ModelRate_q(z) ]^{−1}.
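A numerical sketch (illustrative only, with made-up model rate functions and sensitivities) of the last identity: build the Jacobian of the model rates w.r.t. z, invert it, and chain it with the pricing gradient dV/dz to get par rate Deltas without bumping and re-calibrating.

    import numpy as np

    # toy "model rates" as functions of model parameters z (stand-ins for swap/FRA par rates)
    def model_rates(z):
        return np.array([z[0] + 0.1 * z[1] ** 2,
                         0.5 * z[0] * z[1] + z[1]])

    def model_rate_jacobian(z):
        # d ModelRate_k / dz, here written out analytically; in practice AD or FD
        return np.array([[1.0, 0.2 * z[1]],
                         [0.5 * z[1], 0.5 * z[0] + 1.0]])

    z_bar = np.array([0.02, 0.03])          # calibrated model parameters
    dV_dz = np.array([120.0, -40.0])        # pricing sensitivities dV/dz (made-up numbers)

    J = model_rate_jacobian(z_bar)          # = dH/dz, since H_k(z, R) = ModelRate_k(z) - R_k
    dz_dR = np.linalg.inv(J)                # dz/dR = [d ModelRate / dz]^{-1}
    delta_R = 1e-4 * dV_dz @ dz_dR          # 1bp * dV/dz * dz/dR, one Delta per market rate
    print("par rate Deltas per 1bp:", delta_R)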
We can adapt the Jacobian method to Vega calculation as well II
The Implicit Function Theorem yields

dσ/dσ_N = − [dH/dσ (σ(σ_N), σ_N)]^{−1} · [dH/dσ_N (σ(σ_N), σ_N)]
        = [d/dσ Model[σ]]^{−1} · diag( d/dσ_N V_1^Swpt(σ_N^1), . . . , d/dσ_N V_k̄^Swpt(σ_N^k̄) ).
◮ d/dσ Model[σ] are Hull-White model Vega(s) of co-terminal European swaptions.
◮ d/dσ_N V_k^Swpt(σ_N^k) are Bachelier or market Vega(s) of co-terminal European swaptions.
Bermudan Vega becomes

d/dσ_N V^Berm = d/dσ V^Berm · [d/dσ Model[σ]]^{−1} · d/dσ_N Market[σ_N^k].
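Once the three ingredients are available, the Bermudan Vega formula is a plain matrix product. A sketch with made-up numbers for three co-terminal calibration swaptions (not lecture data):

    import numpy as np

    # made-up ingredients for three co-terminal European swaptions
    dVberm_dsigma = np.array([0.8, 0.5, 0.2])        # Bermudan sensitivities to model vols sigma_k
    dModel_dsigma = np.array([[0.90, 0.00, 0.00],    # model Vegas of the calibration swaptions
                              [0.30, 0.85, 0.00],    # (lower triangular: swaption k depends on
                              [0.20, 0.25, 0.80]])   #  sigma_1, ..., sigma_k)
    market_vega = np.diag([1.00, 0.95, 0.90])        # Bachelier Vegas dV_k^Swpt / dsigma_N^k

    # d V^Berm / d sigma_N = dV^Berm/dsigma * [dModel/dsigma]^{-1} * diag(market Vegas)
    dVberm_dsigmaN = dVberm_dsigma @ np.linalg.inv(dModel_dsigma) @ market_vega
    print("Bermudan Vega buckets:", dVberm_dsigmaN)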
Outline
What is the idea behind Algorithmic Differentiation (AD)?
Functions are represented as Evaluation Procedures
consisting of a sequence of elementary operations
Example: Black Formula

Black(·) = ω [F Φ(ω d1) − K Φ(ω d2)]

with d_{1,2} = log(F/K) / (σ √τ) ± σ √τ / 2

◮ Inputs F, K, σ, τ
◮ Discrete parameter ω ∈ {−1, 1}
◮ Output Black(·)

Evaluation procedure:

v−3 = x1 = F
v−2 = x2 = K
v−1 = x3 = σ
v0 = x4 = τ
v1 = v−3 / v−2 ≡ f1(v−3, v−2)
v2 = log(v1) ≡ f2(v1)
v3 = √v0 ≡ f3(v0)
v4 = v−1 · v3 ≡ f4(v−1, v3)
v5 = v2 / v4 ≡ f5(v2, v4)
v6 = 0.5 · v4 ≡ f6(v4)
v7 = v5 + v6 ≡ f7(v5, v6)
v8 = v7 − v4 ≡ f8(v7, v4)
v9 = ω · v7 ≡ f9(v7)
v10 = ω · v8 ≡ f10(v8)
v11 = Φ(v9) ≡ f11(v9)
v12 = Φ(v10) ≡ f12(v10)
v13 = v−3 · v11 ≡ f13(v−3, v11)
v14 = v−2 · v12 ≡ f14(v−2, v12)
v15 = v13 − v14 ≡ f15(v13, v14)
v16 = ω · v15 ≡ f16(v15)
y1 = v16
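The same evaluation procedure written as straight-line Python (a sketch; the variable names follow the v_i on the slide):

    import math

    def Phi(x):                                   # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def black(F, K, sigma, tau, omega):
        v1 = F / K
        v2 = math.log(v1)
        v3 = math.sqrt(tau)
        v4 = sigma * v3
        v5 = v2 / v4
        v6 = 0.5 * v4
        v7 = v5 + v6                              # d1
        v8 = v7 - v4                              # d2
        v9 = omega * v7
        v10 = omega * v8
        v11 = Phi(v9)
        v12 = Phi(v10)
        v13 = F * v11
        v14 = K * v12
        v15 = v13 - v14
        return omega * v15

    print(black(F=0.05, K=0.04, sigma=0.20, tau=2.0, omega=+1))   # call on F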
Alternative representation is Directed Acyclic Graph (DAG)
v−3 = x1 = F
v−2 = x2 = K
v−1 = x3 = σ
v0 = x4 = τ
v1 = v−3 / v−2 ≡ f1(v−3, v−2)
v2 = log(v1) ≡ f2(v1)
v3 = √v0 ≡ f3(v0)
v4 = v−1 · v3 ≡ f4(v−1, v3)
v5 = v2 / v4 ≡ f5(v2, v4)
v6 = 0.5 · v4 ≡ f6(v4)
v7 = v5 + v6 ≡ f7(v5, v6)
v8 = v7 − v4 ≡ f8(v7, v4)
v9 = ω · v7 ≡ f9(v7)
v10 = ω · v8 ≡ f10(v8)

[Figure: DAG with input nodes v−3, v−2, v−1, v0 at the top and directed edges down through v1, . . . , v8 to v9 and v10]
Evaluation Procedure can be formalized to make it more
tractable
vi−n = xi i = 1, . . . , n
vi = fi (vj )j≺i i = 1, . . . , l
ym−i = vl−i i = m − 1, . . . , 0,
Forward mode of AD calculates tangents
◮ In addition to the function evaluation vi = fi(ui) evaluate the derivative

v̇i = Σ_{j≺i} ∂fi(ui)/∂vj · v̇j.

[vi−n, v̇i−n] = [xi, ẋi]                  i = 1, . . . , n
[vi, v̇i] = [fi(ui), ḟi(ui, u̇i)]          i = 1, . . . , l
[ym−i, ẏm−i] = [vl−i, v̇l−i]              i = m − 1, . . . , 0.

Here, the initializing derivative values ẋi for i = 1, . . . , n are given and determine the
direction of the tangent.
◮ With ẋ = (ẋi ) ∈ Rn and ẏ = (ẏi ) ∈ Rm , the forward mode of AD evaluates
ẏ = F ′ (x )ẋ .
v−3 = x1 = F v̇−3 = 0
v−2 = x2 = K v̇−2 = 0
v−1 = x3 = σ v̇−1 = 1
v0 = x4 = τ v̇0 = 0
v1 = v−3 /v−2 v̇1 = v̇−3 /v−2 − v1 · v̇−2 /v−2
v2 = log(v1 ) v̇2 = v̇1 /v1
v3 = √v0                     v̇3 = 0.5 · v̇0 /v3
v4 = v−1 · v3 v̇4 = v̇−1 · v3 + v−1 · v̇3
v5 = v2 /v4 v̇5 = v̇2 /v4 − v5 · v̇4 /v4
v6 = 0.5 · v4 v̇6 = 0.5 · v̇4
v7 = v5 + v6 v̇7 = v̇5 + v̇6
v8 = v7 − v4 v̇8 = v̇7 − v̇4
v9 = ω · v7 v̇9 = ω · v̇7
v10 = ω · v8 v̇10 = ω · v̇8
v11 = Φ(v9 ) v̇11 = φ(v9 ) · v̇9
v12 = Φ(v10 ) v̇12 = φ(v10 ) · v̇10
v13 = v−3 · v11 v̇13 = v̇−3 · v11 + v−3 · v̇11
v14 = v−2 · v12 v̇14 = v̇−2 · v12 + v−2 · v̇12
v15 = v13 − v14 v̇15 = v̇13 − v̇14
v16 = ω · v15 v̇16 = ω · v̇15
y1 = v16 ẏ1 = v̇16
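The tangent table transcribed into Python (a sketch): each line carries the value/derivative pair (v_i, v̇_i); seeding the σ direction (ẋ3 = 1, as in the table above) makes the output tangent the Black Vega.

    import math

    def Phi(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def phi(x):                                   # standard normal density
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    def black_tangent(F, K, sigma, tau, omega, dF=0.0, dK=0.0, dsigma=1.0, dtau=0.0):
        v1, d1 = F / K, dF / K - (F / K) * dK / K
        v2, d2 = math.log(v1), d1 / v1
        v3, d3 = math.sqrt(tau), 0.5 * dtau / math.sqrt(tau)
        v4, d4 = sigma * v3, dsigma * v3 + sigma * d3
        v5, d5 = v2 / v4, d2 / v4 - (v2 / v4) * d4 / v4
        v6, d6 = 0.5 * v4, 0.5 * d4
        v7, d7 = v5 + v6, d5 + d6
        v8, d8 = v7 - v4, d7 - d4
        v9, d9 = omega * v7, omega * d7
        v10, d10 = omega * v8, omega * d8
        v11, d11 = Phi(v9), phi(v9) * d9
        v12, d12 = Phi(v10), phi(v10) * d10
        v13, d13 = F * v11, dF * v11 + F * d11
        v14, d14 = K * v12, dK * v12 + K * d12
        v15, d15 = v13 - v14, d13 - d14
        return omega * v15, omega * d15           # (price, directional derivative)

    price, vega = black_tangent(0.05, 0.04, 0.20, 2.0, +1)   # default seed dsigma = 1 -> Vega
    print(price, vega)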
Reverse Mode of AD calculates adjoints
◮ Forward Mode calculates derivatives and applies the chain rule in the same order as
the function evaluation. Reverse Mode propagates derivatives (adjoints) in the opposite
order, from outputs back to inputs.
◮ Define auxiliary derivative values v̄j and assume initialisation v̄j = 0 before
reverse mode evaluation.
Reverse Mode of AD calculates adjoints 2/2
vi−n = xi                     i = 1, . . . , n
vi = fi(vj)j≺i                i = 1, . . . , l
ym−i = vl−i                   i = m − 1, . . . , 0
v̄l−i = ȳm−i                   i = 0, . . . , m − 1
ūi += f̄i(ui, v̄i)              i = l, . . . , 1
x̄i = v̄i−n                     i = n, . . . , 1.
Here, all intermediate variables vi are assigned only once. The initializing values ȳi are
given and represent a weighting of the dependent variables yi .
◮ Vector ȳ = (ȳi ) can also be interpreted as normal vector of a hyperplane in the
range of F .
◮ With ȳ = (ȳi ) and x̄ = (x̄i ), reverse mode of AD yields
x̄⊤ = ∇x [ȳ⊤ F(x)] = ȳ⊤ F′(x).
Black formula Reverse Mode evaluation procedure ... I
v−3 = x1 = F
v−2 = x2 = K
v−1 = x3 = σ
v0 = x4 = τ
v1 = v−3 /v−2
v2 = log(v1 )
v3 = √v0
v4 = v−1 · v3
v5 = v2 /v4
v6 = 0.5 · v4
v7 = v5 + v6
v8 = v7 − v4
v9 = ω · v7
v10 = ω · v8
v11 = Φ(v9 )
v12 = Φ(v10 )
v13 = v−3 · v11
v14 = v−2 · v12
v15 = v13 − v14
v16 = ω · v15
y1 = v16
v̄16 = ȳ1 = 1
. . .
Black formula Reverse Mode evaluation procedure ... II
. . .
y1 = v16
v̄16 = ȳ1 = 1
v̄15 += ω · v̄16
v̄13 += v̄15 ; v̄14 += (−1) · v̄15
v̄−2 += v12 · v̄14 ; v̄12 += v−2 · v̄14
v̄−3 += v11 · v̄13 ; v̄11 += v−3 · v̄13
v̄10 += φ(v10 ) · v̄12
v̄9 += φ(v9 ) · v̄11
v̄8 += ω · v̄10
v̄7 += ω · v̄9
v̄7 += v̄8 ; v̄4 += (−1) · v̄8
v̄5 += v̄7 ; v̄6 += v̄7
v̄4 += 0.5 · v̄6
v̄2 += v̄5 /v4 ; v̄4 += (−1) · v5 · v̄5 /v4
v̄−1 += v3 · v̄4 ; v̄3 += v−1 · v̄4
v̄0 += 0.5 · v̄3 /v3
v̄1 += v̄2 /v1
v̄−3 += v̄1 /v−2 ; v̄−2 += (−1) · v1 · v̄1 /v−2
τ̄ = x̄4 = v̄0
σ̄ = x̄3 = v̄−1
K̄ = x̄2 = v̄−2
F̄ = x̄1 = v̄−3
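The adjoint sweep transcribed into Python (a sketch): one forward sweep storing the v_i, then the reverse sweep from the slide; a single evaluation returns the price and all four derivatives F̄, K̄, σ̄, τ̄.

    import math

    def Phi(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def phi(x):
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    def black_adjoint(F, K, sigma, tau, omega):
        # forward sweep (store intermediates)
        v1 = F / K; v2 = math.log(v1); v3 = math.sqrt(tau); v4 = sigma * v3
        v5 = v2 / v4; v6 = 0.5 * v4; v7 = v5 + v6; v8 = v7 - v4
        v9 = omega * v7; v10 = omega * v8; v11 = Phi(v9); v12 = Phi(v10)
        v13 = F * v11; v14 = K * v12; v15 = v13 - v14
        y = omega * v15

        # reverse sweep, seeded with ybar = 1
        b15 = omega * 1.0
        b13, b14 = b15, -b15
        bK, b12 = v12 * b14, K * b14
        bF, b11 = v11 * b13, F * b13
        b10 = phi(v10) * b12
        b9 = phi(v9) * b11
        b8 = omega * b10
        b7 = omega * b9
        b7 += b8; b4 = -b8
        b5, b6 = b7, b7
        b4 += 0.5 * b6
        b2 = b5 / v4; b4 += -v5 * b5 / v4
        bsigma = v3 * b4; b3 = sigma * b4
        btau = 0.5 * b3 / v3
        b1 = b2 / v1
        bF += b1 / K; bK += -v1 * b1 / K
        return y, (bF, bK, bsigma, btau)

    price, (dF, dK, dsigma, dtau) = black_adjoint(0.05, 0.04, 0.20, 2.0, +1)
    print(price, dF, dK, dsigma, dtau)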
We summarise the properties of Forward and Reverse Mode
◮ Memory consumption/management is a key challenge for implementations.
◮ Computational effort can be improved by AD vector mode.
◮ Reverse Mode memory consumption can be managed via checkpointing
techniques.
How is AD applied in practice?
◮ Typically, you don’t want to differentiate all your source code by hand.
◮ Tools help to augment existing programs for tangent and adjoint computations.
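As an illustration of the tool-based workflow (my own sketch using the open-source JAX library, which is not specifically referenced in the lecture): write the pricing function once and let the tool generate the gradient.

    import jax
    import jax.numpy as jnp
    from jax.scipy.stats import norm

    def black(F, K, sigma, tau, omega=1.0):
        d1 = jnp.log(F / K) / (sigma * jnp.sqrt(tau)) + 0.5 * sigma * jnp.sqrt(tau)
        d2 = d1 - sigma * jnp.sqrt(tau)
        return omega * (F * norm.cdf(omega * d1) - K * norm.cdf(omega * d2))

    # reverse-mode gradient w.r.t. all four continuous inputs in one call
    black_grad = jax.grad(black, argnums=(0, 1, 2, 3))
    print(black(0.05, 0.04, 0.20, 2.0))
    print(black_grad(0.05, 0.04, 0.20, 2.0))      # (dF, dK, dsigma, dtau)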
There is quite some literature on AD and its application in
finance
Part VIII
Wrap-up
Outline
What was this lecture about?
Date calculations
Market conventions
Optionalities
Bank A may decide to terminate the deal early in 10, 11, 12, . . . years
Contact
d-fine GmbH
Mobile: +49-162-263-1525
Mail: [email protected]