Interacting Particle Systems For The Computation of Rare Credit Portfolio Losses

$(X_n)_{n\ge 0}$. This chain is not assumed to be time homogeneous. In fact, the random element $X_n$ takes values in some measurable state space $(E_n, \mathcal{E}_n)$ that can change with $n$. We denote by $K_n(x_{n-1}, dx_n)$ the Markov transition kernels. Throughout the paper, for any measurable space $(E, \mathcal{E})$ we denote by $B_b(E)$ the space of bounded, measurable functions.
2.1 Feynman-Kac Path Expectations
We denote by $Y_n$ the history of $X_n$ as defined by
$$Y_n \stackrel{\text{def.}}{=} (X_0, \ldots, X_n) \in F_n \stackrel{\text{def.}}{=} (E_0 \times \cdots \times E_n), \qquad n \ge 0.$$
$(Y_n)_{n\ge 0}$ is itself a Markov chain and we denote by $M_n(y_{n-1}, dy_n)$ its transition kernel. For each $n \ge 0$, we choose a multiplicative potential function $G_n$ defined on $F_n$ and we define the Feynman-Kac expectations by
$$\gamma_n(f_n) = \mathbb{E}\Big[ f_n(Y_n) \prod_{1 \le p < n} G_p(Y_p) \Big]. \tag{2.1}$$
We denote by $\eta_n(\cdot)$ the corresponding normalized measure defined as
$$\eta_n(f_n) = \frac{\mathbb{E}\big[ f_n(Y_n) \prod_{1 \le p < n} G_p(Y_p) \big]}{\mathbb{E}\big[ \prod_{1 \le p < n} G_p(Y_p) \big]} = \gamma_n(f_n)/\gamma_n(1). \tag{2.2}$$
A very important observation is that
$$\gamma_{n+1}(1) = \gamma_n(G_n) = \eta_n(G_n)\,\gamma_n(1) = \prod_{p=1}^{n} \eta_p(G_p).$$
Therefore, given any bounded measurable function $f_n$ on $F_n$, we have
$$\gamma_n(f_n) = \eta_n(f_n) \prod_{1 \le p < n} \eta_p(G_p).$$
Using the notation $\bar{G}_p = 1/G_p$ for the reciprocal of the multiplicative potential function and the above definitions of $\gamma_n$ and $\eta_n$, we see that
$$\mathbb{E}[f_n(Y_n)] = \mathbb{E}\Big[ f_n(Y_n) \prod_{1 \le p < n} \bar{G}_p(Y_p) \prod_{1 \le p < n} G_p(Y_p) \Big] = \gamma_n\Big( f_n \prod_{1 \le p < n} \bar{G}_p \Big) \tag{2.3}$$
$$= \eta_n\Big( f_n \prod_{1 \le p < n} \bar{G}_p \Big) \prod_{1 \le p < n} \eta_p(G_p).$$
Finally, one can check by inspection that the measures $(\eta_n)_{n \ge 1}$ satisfy the nonlinear recursive equation
$$\eta_n = \Phi_n(\eta_{n-1}) \stackrel{\text{def.}}{=} \int_{F_{n-1}} \eta_{n-1}(dy_{n-1})\, \frac{G_{n-1}(y_{n-1})}{\eta_{n-1}(G_{n-1})}\, M_n(y_{n-1}, \cdot),$$
starting from $\eta_1 = M_1(x_0, \cdot)$. This dynamical equation on the space of measures is known as Stettner's equation in filtering theory. We state it to justify the selection/mutation decomposition of each step of the particle algorithm introduced below.
2.2 IPS Interpretation and Monte Carlo Algorithm
Motivated by the above definitions and results, we introduce a very natural interacting path-particle system. For a given integer $M$, using the transformations $\Phi_n$, we construct a Markov chain $(\xi_n)_{n \ge 0}$ whose state $\xi_n = (\xi_n^j)_{1 \le j \le M}$ at time $n$ can be interpreted as a set of $M$ Monte Carlo samples of path-particles
$$\xi_n^j = (\xi_{0,n}^j, \xi_{1,n}^j, \ldots, \xi_{n,n}^j) \in F_n = (E_0 \times \cdots \times E_n).$$
The transition mechanism of this Markov chain can be described as follows. We start with an initial configuration $\xi_1 = (\xi_1^j)_{1 \le j \le M}$ that consists of $M$ independent and identically distributed random variables with distribution
$$\eta_1(d(y_0, y_1)) = M_1(x_0, d(y_0, y_1)) = \delta_{x_0}(dy_0)\, K_1(y_0, dy_1),$$
i.e., $\xi_1^j \stackrel{\text{def.}}{=} (\xi_{0,1}^j, \xi_{1,1}^j) = (x_0, \xi_{1,1}^j) \in F_1 = (E_0 \times E_1)$, where the $\xi_{1,1}^j$ are drawn independently of each other from the distribution $K_1(x_0, \cdot)$. Then, the one-step transition taking $\xi_{n-1} \in F_{n-1}^M$ into $\xi_n \in F_n^M$ is given by a random draw from the distribution
$$\mathbb{P}\big[ \xi_n \in d(y_n^1, \ldots, y_n^M) \,\big|\, \xi_{n-1} \big] = \prod_{j=1}^{M} \Phi_n(m(\xi_{n-1}))(dy_n^j), \tag{2.4}$$
where the notation $m(\xi_{n-1})$ is used for the empirical distribution of the $\xi_{n-1}^j$, i.e.
$$m(\xi_{n-1}) \stackrel{\text{def.}}{=} \frac{1}{M} \sum_{j=1}^{M} \delta_{\xi_{n-1}^j}.$$
From the definition of $\Phi_n$, one can see that (2.4) is the superposition of a selection procedure followed by a mutation given by the transition of the original Markov chain. More precisely:
$$\xi_{n-1} \in F_{n-1}^M \xrightarrow{\ \text{selection}\ } \hat{\xi}_{n-1} \in F_{n-1}^M \xrightarrow{\ \text{mutation}\ } \xi_n \in F_n^M,$$
where the selection stage is performed by choosing randomly and independently $M$ path-particles
$$\hat{\xi}_{n-1}^j = (\hat{\xi}_{0,n-1}^j, \hat{\xi}_{1,n-1}^j, \ldots, \hat{\xi}_{n-1,n-1}^j) \in F_{n-1},$$
according to the Boltzmann-Gibbs measure
$$\sum_{j=1}^{M} \frac{G_{n-1}(\xi_{0,n-1}^j, \ldots, \xi_{n-1,n-1}^j)}{\sum_{k=1}^{M} G_{n-1}(\xi_{0,n-1}^k, \ldots, \xi_{n-1,n-1}^k)}\, \delta_{(\xi_{0,n-1}^j, \ldots, \xi_{n-1,n-1}^j)}, \tag{2.5}$$
and for the mutation stage, each selected path-particle $\hat{\xi}_{n-1}^j$ is extended as follows:
$$\xi_n^j = (\hat{\xi}_{n-1}^j, \xi_{n,n}^j) = \big( (\hat{\xi}_{0,n-1}^j, \ldots, \hat{\xi}_{n-1,n-1}^j), \xi_{n,n}^j \big) \in F_n = F_{n-1} \times E_n,$$
where $\xi_{n,n}^j$ is a random variable with distribution $K_n(\hat{\xi}_{n-1,n-1}^j, \cdot)$. In other words, the transition step is a mere extension of the path-particle with an element drawn at random using the transition kernel $K_n$ of the original Markov chain. All of the mutations are performed independently. Most importantly, all these mutations happen under the original transition distribution of the chain. This is in sharp contrast with importance sampling, where the Monte Carlo transitions are drawn from twisted transition distributions obtained from a Girsanov-like change of measure. So from a practical point of view, a black box providing random samples from the original chain transition distribution is enough for the implementation of the IPS algorithm: there is no need to know the details of how those samples are generated.
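The selection/mutation step just described can be sketched generically. This is a minimal illustration, not the authors' implementation: the names `G` and `kernel_sample` are hypothetical stand-ins for the potential $G_{n-1}$ and a black-box sampler of the kernel $K_n$.

```python
import numpy as np

def ips_step(paths, G, kernel_sample, rng):
    """One IPS transition: Boltzmann-Gibbs selection (2.5) on the current
    path-particles, then mutation with the ORIGINAL kernel K_n (no tilt)."""
    M = len(paths)
    # Selection: resample M paths with probabilities proportional to G(path).
    weights = np.array([G(y) for y in paths], dtype=float)
    idx = rng.choice(M, size=M, p=weights / weights.sum())
    selected = [paths[j] for j in idx]
    # Mutation: extend each selected path by one draw from K_n(last state, .).
    return [y + [kernel_sample(y[-1], rng)] for y in selected]
```

Note that `kernel_sample` only needs to produce samples from the original transition distribution; no density evaluation or change of measure is involved.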
A result of [4], reproduced in [5], states that for each fixed time $n$, the empirical historical path measure
$$\eta_n^M \stackrel{\text{def.}}{=} m(\xi_n) = \frac{1}{M} \sum_{j=1}^{M} \delta_{(\xi_{0,n}^j, \xi_{1,n}^j, \ldots, \xi_{n,n}^j)}$$
converges in distribution, as $M \to \infty$, toward the normalized Feynman-Kac measure $\eta_n$. Moreover, there are several propagation of chaos estimates that ensure that the $(\xi_{0,n}^j, \xi_{1,n}^j, \ldots, \xi_{n,n}^j)$ are asymptotically independent and identically distributed with distribution $\eta_n$ [4]. This justifies, for each measurable function $\bar{f}_n$ on $F_n$, the choice of
$$\gamma_n^M(\bar{f}_n) = \eta_n^M(\bar{f}_n) \prod_{1 \le p < n} \eta_p^M(G_p) \tag{2.6}$$
for a particle approximation of the expectation $\gamma_n(\bar{f}_n)$. The main properties of the particle approximation $\gamma_n^M$ are stated in the following lemma, whose statement is borrowed from [5].

Lemma 1 ([5]). Under the assumption
$$\sup_{(y_n, \bar{y}_n) \in F_n^2} G_n(y_n)/G_n(\bar{y}_n) < \infty,$$
$\gamma_n^M$ is an unbiased estimator for $\gamma_n$, in the sense that for any $p \ge 1$ and $\bar{f}_n \in B_b(F_n)$ with $\|\bar{f}_n\| \le 1$, we have
$$\mathbb{E}\big[ \gamma_n^M(\bar{f}_n) \big] = \gamma_n(\bar{f}_n),$$
and in addition
$$\sup_{M \ge 1} \sqrt{M}\, \mathbb{E}\big[ \big| \gamma_n^M(\bar{f}_n) - \gamma_n(\bar{f}_n) \big|^p \big]^{1/p} \le c_p(n),$$
for some constant $c_p(n) < \infty$ whose value does not depend on the function $\bar{f}_n$.
We refer to [5] for a complete proof of this result. From formula (2.3), it is clear that, starting from a function $f_n$ on $F_n$, we will apply the above result to
$$\bar{f}_n = f_n \prod_{p < n} \bar{G}_p. \tag{2.7}$$
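As a sanity check on the product formula $\gamma_{n+1}(1) = \prod_{p \le n} \eta_p(G_p)$ underlying (2.6), one can verify it by exact computation on a toy finite-state chain. The transition matrix, potential, and initial law below are illustrative, not taken from the paper.

```python
import numpy as np

P = np.array([[0.9, 0.1], [0.4, 0.6]])   # toy transition matrix of the chain
g = np.array([0.5, 2.0])                 # toy potential G_p(x) = g[x]
gamma = np.array([1.0, 0.0])             # unnormalized measure gamma_1 = law of X_1

n = 6
prod_eta = 1.0
for p in range(1, n):
    prod_eta *= (gamma @ g) / gamma.sum()   # eta_p(G_p) = gamma_p(g) / gamma_p(1)
    gamma = (gamma * g) @ P                 # gamma_{p+1} = (gamma_p . g) P
# gamma_n(1) equals the product of the normalized one-step quantities
assert np.isclose(gamma.sum(), prod_eta)
```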
3 Loss Distributions of Credit Portfolios

In this section, we explain how we apply the interacting particle system approach described in the previous section to the computation of the probabilities of rare credit losses in a large portfolio of credit-sensitive instruments modeled in the structural approach. Since the language of continuous-time finance is commonly used in the industry, we choose to first introduce our model in continuous time. We will concentrate on the discrete-time version implemented for the purpose of computations in the next subsection.
3.1 Credit Portfolio Model

We consider a portfolio of $N$ firms. $N$ will typically be 125 in the numerical applications presented later in the paper, and 1 for the single-name case presented in the Appendix. The dynamics of the asset values of these $N$ firms are given by the following system of stochastic differential equations:
$$dS_i(t) = r S_i(t)\, dt + \sigma_i\, \sigma(t)\, S_i(t)\, dW_i(t), \qquad i = 1, \ldots, N, \tag{3.1}$$
where $r$ is the risk-free interest rate, $\sigma_i$ is an idiosyncratic (non-random) volatility factor, the correlation structure of the driving Wiener processes $W_i$ is given by
$$d\langle W_i, W_{i'} \rangle_t = \rho_{i i'}\, dt,$$
and the common stochastic volatility factor $\sigma(t)$ is a square-root diffusion satisfying the stochastic differential equation:
$$d\sigma(t) = \kappa(\bar{\sigma} - \sigma(t))\, dt + \gamma \sqrt{\sigma(t)}\, dW_t, \tag{3.2}$$
where $\kappa$, $\bar{\sigma}$ and $\gamma$ are positive constants and the Wiener process $W_t$ satisfies
$$d\langle W_i, W \rangle_t = \tilde{\rho}\, dt, \qquad i = 1, 2, \ldots, N.$$
We impose the condition $\gamma^2 < 2\kappa\bar{\sigma}$ so that $\sigma(t)$ remains positive at all times. Note also that, in contrast to the classical Heston model, the volatility $\sigma(t)$ is a square-root process and not the squared volatility. This is not an issue since we are not using explicit formulas for the Heston model, which are in fact not available in the correlated multi-dimensional case. Also, for each firm $i$, we assume the existence of a deterministic boundary $t \mapsto B_i(t)$, and the time of default for firm $i$ is assumed to be given by
$$\tau_i = \inf\{ t :\ S_i(t) \le B_i(t) \}. \tag{3.3}$$
For the sake of simplicity, we define the portfolio loss function $L(t)$ as the number of defaults prior to time $t$, i.e.
$$L(t) = \sum_{i=1}^{N} \mathbf{1}_{\{\tau_i \le t\}}, \qquad t > 0. \tag{3.4}$$
Since the spreads of CDO tranches are derived from the knowledge of a finite number of expectations of the form
$$\mathbb{E}\big[ (L(T) - K)^+ \big],$$
where $T$ is a coupon payment date and $K$ is proportional to the attachment or detachment points of the tranche, we will restrict ourselves to the evaluation of these expectations.
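For concreteness, one Euler path of the dynamics (3.1)-(3.2) and the resulting loss count $L(T)$ can be sketched as follows. This is a hedged sketch, not the paper's implementation: all parameter values are illustrative, and for simplicity the volatility noise is taken equal to the common Brownian factor (which amounts to $\tilde{\rho} = \sqrt{\rho}$).

```python
import numpy as np

def simulate_loss(rng, N=5, T=1.0, dt=1e-3, r=0.06, sigma_i=1.0,
                  kappa=3.0, sigma_bar=0.3, gam=0.5, rho=0.1,
                  S0=90.0, B=36.0):
    """One Euler path of (3.1)-(3.2); returns L(T), the number of names whose
    running minimum has crossed the flat barrier B by maturity."""
    steps = int(round(T / dt))
    S = np.full(N, float(S0))
    running_min = S.copy()
    sig = sigma_bar                       # start sigma(t) at its long-run mean
    sqdt = np.sqrt(dt)
    for _ in range(steps):
        dB0 = rng.normal() * sqdt         # common Brownian factor
        dBi = rng.normal(size=N) * sqdt   # idiosyncratic factors
        dW = np.sqrt(rho) * dB0 + np.sqrt(1.0 - rho) * dBi  # corr(W_i, W_j) = rho
        S = S + r * S * dt + sigma_i * sig * S * dW
        running_min = np.minimum(running_min, S)
        # CIR factor, reflected at 0 to keep the Euler iterate nonnegative
        sig = abs(sig + kappa * (sigma_bar - sig) * dt + gam * np.sqrt(sig) * dB0)
    return int(np.sum(running_min <= B))
```

Plain Monte Carlo would average indicators of `simulate_loss` outcomes over many independent paths; the IPS scheme of Section 3.3 replaces this by interleaved selection and mutation.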
Clearly, the only interesting case is when all of the names in the portfolio are dependent. In [12], the exact distribution of losses is derived for $N = 2$ from the distributions of the hitting times of a pair of correlated Brownian motions. Unfortunately, a tractable general result is not available for $N > 2$, let alone in the case of stochastic volatility. Since the distribution of $L(T)$ is not known in the dependent case for $N > 2$, one usually relies on approximation methods. Moreover, since $N$ is typically very large (125 in a standard CDO contract), PDE-based methods are ruled out.

Instead of computing the spreads of the tranches directly, we compute the probability mass function of $L(T)$, that is, we calculate
$$\mathbb{P}(L(T) = k) = p_k(T), \qquad k = 0, \ldots, N. \tag{3.5}$$
3.2 The Background Markov Chain

We discretize the time variable $t$ of the above continuous-time model using a time step $\Delta t$, which will be chosen as $\Delta t = (1/20)\,\mathrm{yr}$ in the numerical experiments reported later in the paper. Notice that we will also need a smaller time step $\delta t$. The latter will be chosen to be $\delta t = 10^{-3}\,\mathrm{yr}$ in our numerical experiments. The Markov chain $(X_n)_n$ on which we construct the IPSs used to compute small probabilities is given by:
$$X_n = \Big( \sigma(n\Delta t),\ \big( S_i(n\Delta t) \big)_{1 \le i \le N},\ \big( \min_{0 \le m \le n} S_i(m\Delta t) \big)_{1 \le i \le N} \Big), \qquad n \ge 0. \tag{3.6}$$
The state space of $X_n$ is $E_n = [0, \infty)^{2N+1}$, so this chain is $(2N+1)$-dimensional. We assume a constant (i.e. time-independent) barrier $B_i$ for each firm $1 \le i \le N$, and we define the time $\tau_i$ of default of firm $i$ as
$$\tau_i = \min\{ n \ge 0 ;\ S_i(n\Delta t) \le B_i \}$$
in analogy with its continuous-time version (3.3). In this way, the value of $\tau_i$ can be read off the sequence of running minima. Notice also that, with this definition of the default times, we do not have to correct for the bias introduced by the discretization of a continuous-time boundary crossing problem.
The very form of the resampling distribution (2.5) shows that in order to have more simulation paths realizing rare events corresponding to unusually high numbers of defaults, an obvious strategy is to choose a set of potential functions becoming larger as the likelihood of default increases. Indeed, the resampling step will select paths with high Gibbs resampling weights, and the paths with small weights will have a greater chance of not being selected, and hence of disappearing. For the purpose of our numerical experiments we choose a parameter $\alpha > 0$, and we define the multiplicative potential functions $G_p^{(\alpha)}$ by:
$$G_p^{(\alpha)}(Y_p) = \exp\big[ \alpha \big( V(X_p) - V(X_{p-1}) \big) \big], \tag{3.7}$$
where
$$V(X_p) = -\sum_{i=1}^{N} \log\Big( \min_{0 \le m \le p} S_i(m\Delta t) \Big).$$
We shall drop the superscript $\alpha$ when the dependence upon this free parameter is irrelevant. Notice that
$$G_p^{(\alpha)}(Y_p) = \exp\big[ \alpha \big( V(X_p) - V(X_{p-1}) \big) \big] = \exp\Big[ -\alpha \sum_{i=1}^{N} \log \frac{\min_{0 \le m \le p} S_i(m\Delta t)}{\min_{0 \le m \le p-1} S_i(m\Delta t)} \Big],$$
where the above logarithm is obviously less than or equal to zero. Clearly, different choices of $\alpha$ give different distributions for the resampling weights, and as a result, we expect that different choices of $\alpha$ will give different sets of loss levels $k$ for which the probability $\mathbb{P}(L(t) = k)$ can be computed by IPS as a positive number. For a given value of $k$, contrary to a plain Monte Carlo computation, the IPS algorithm produces enough sample paths with $k$ losses for the estimation procedure to be acceptable if we choose $\alpha$ appropriately. In the numerical computations reported below, we use an idea which can be traced back to [5], at least in an implicit form, and which was used systematically in [3]. Instead of choosing $\alpha$ and getting reasonable estimates of $\mathbb{P}(L(t) = k)$ for some values of $k$ depending upon $\alpha$, we reverse the procedure, and for each $k$, we pick the best $\alpha$. Note that in the single-name case presented in the Appendix, since we can compare the variances of the IPS and MC estimators over the range of $k$'s, we can afford to use the standard approach fixing $\alpha$ first.
Finally, it is worth noticing that, because of the special form (3.7) of the resampling weights:

1. we only need to keep track of $(X_{p-1}, X_p)$ instead of the full history $Y_p = (X_0, X_1, \ldots, X_p)$, thereby minimizing the storage space needed for the implementation of the algorithm;

2. $\prod_{1 \le k < p} G_k(Y_k) = \exp\big[ \alpha \big( V(X_{p-1}) - V(X_0) \big) \big]$, thereby providing a significant simplification of the computations.
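The telescoping identity in item 2 is immediate to check numerically; the running-minimum trajectory below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic running minima of N = 3 names over p = 0..9 coarse steps
mins = np.minimum.accumulate(rng.uniform(50.0, 100.0, size=(10, 3)), axis=0)
V = -np.log(mins).sum(axis=1)          # V(X_p) = -sum_i log(running min of name i)
alpha = 0.1
G = np.exp(alpha * (V[1:] - V[:-1]))   # G_p for p = 1..9
# the product over 1 <= k < p telescopes to exp(alpha (V(X_{p-1}) - V(X_0)))
p = 7
assert np.isclose(np.prod(G[:p - 1]), np.exp(alpha * (V[p - 1] - V[0])))
```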
3.3 Detailed IPS Algorithm

We divide the time interval $[0, T]$ into $n$ equal intervals $[(p-1)T/n,\ pT/n]$, $p = 1, 2, \ldots, n$. These are the times at which we stop and perform the selection step. We introduce the chain $(X_p)_{0 \le p \le n} = (X(pT/n))_{0 \le p \le n}$, and the whole history of the chain is denoted by $Y_p = (X_0, \ldots, X_p)$.

Since it is not possible to sample directly from the distribution of $(X_p)_{0 \le p \le n}$ for $N > 2$, we will have to apply an Euler scheme during the mutation stage; we let $\delta t$ denote the sufficiently small time step used. In general $\delta t$ will be chosen so that $\delta t \ll \Delta t = T/n$.
Our algorithm is built with the weight function defined in equation (3.7). As mentioned earlier, because of the special form of the resampling weights, instead of working with the entire histories $Y_p$, we only need to keep track of $X_p$ and its parent $X_{p-1}$. We introduce a special notation, say $W_p$, for the parent of $X_p$. So for implementation purposes, instead of being an entire historical path $y_p = (x_0, x_1, \ldots, x_p)$, a particle will only be a couple $y_p = (w_p, x_p)$.
Initialization. We start with $M$ identical copies $\hat{X}_0^{(j)}$, $1 \le j \le M$, of the initial condition $X_0$. That is,
$$\hat{X}_0^{(j)} = \big( \sigma(0),\ (S_1(0), \ldots, S_N(0)),\ (S_1(0), \ldots, S_N(0)) \big), \qquad 1 \le j \le M,$$
and we define their parents by $\hat{W}_0^{(j)} = \hat{X}_0^{(j)}$. In this way we have our initial set of $M$ particles $(\hat{W}_0^{(j)}, \hat{X}_0^{(j)})$, $1 \le j \le M$.

Now suppose that at time $p$, we have a set of $M$ particles $(\hat{W}_p^{(j)}, \hat{X}_p^{(j)})$, $1 \le j \le M$.
Selection Stage. We compute the normalizing constant:
$$\hat{\eta}_p^M = \frac{1}{M} \sum_{j=1}^{M} \exp\Big[ \alpha \big( V(\hat{X}_p^{(j)}) - V(\hat{W}_p^{(j)}) \big) \Big]. \tag{3.8}$$
Then, we choose independently $M$ particles according to the empirical distribution
$$\eta_p^M(d\hat{W}, d\hat{X}) = \frac{1}{M \hat{\eta}_p^M} \sum_{j=1}^{M} \exp\Big[ \alpha \big( V(\hat{X}_p^{(j)}) - V(\hat{W}_p^{(j)}) \big) \Big]\, \delta_{(\hat{W}_p^{(j)}, \hat{X}_p^{(j)})}(d\hat{W}, d\hat{X}). \tag{3.9}$$
The particles that are selected are denoted by $(\check{W}_p^{(j)}, \check{X}_p^{(j)})$.
Mutation Stage. The stochastic volatility $\sigma(t)$ and the correlation between the Brownian motions prevent us from knowing the transition probability of $X_n$ in closed form and from performing the mutation in one step. We need Monte Carlo simulations based on an approximation scheme. We choose a plain Euler scheme to make our life easier. We fix a time step $\delta t \ll \Delta t$ (as already mentioned, we will choose $\delta t = 10^{-3}$ in the numerical experiments reported below). For each of the selected particles $(\check{W}_p^{(j)}, \check{X}_p^{(j)})$, we apply an Euler scheme from time $t_p$ to time $t_{p+1}$ with step size $\delta t$ to each $\check{X}_p^{(j)}$, so that $\check{X}_p^{(j)}$ becomes $\hat{X}_{p+1}^{(j)}$. We then set $\hat{W}_{p+1}^{(j)} = \check{X}_p^{(j)}$. It should be noted that each of the particles is evolved independently, and that the true dynamics of $X_p$, given by the discretization of (3.1)-(3.2), is applied rather than some other measure. It is this fact that separates IPS from importance sampling.
Conclusion. At maturity, i.e. at time $n$ such that $n\Delta t = T$, we tally the total number of losses for each of the $M$ particles by computing the function $f_n$ defined by
$$f(\hat{X}_n^{(j)}) = \sum_{i=1}^{N} \mathbf{1}_{\{ \hat{X}_n^{(j)}(N+1+i) \le B_i \}},$$
where we use the last $N$ components of $X_n$ defined in (3.6). The estimator $\hat{p}_k^M(T)$ of $\mathbb{P}(L(T) = k) = p_k(T)$ is then given by
$$\hat{p}_k^M(T) = \Big[ \frac{1}{M} \sum_{j=1}^{M} \mathbf{1}_{\{ f(\hat{X}_n^{(j)}) = k \}} \exp\big[ -\alpha \big( V(\hat{W}_n^{(j)}) - V(X_0) \big) \big] \Big] \Big[ \prod_{p=0}^{n-1} \hat{\eta}_p^M \Big]. \tag{3.10}$$
As we explained earlier, this estimator is unbiased in the sense that $\mathbb{E}[\hat{p}_k^M(T)] = p_k(T)$.
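Putting the three stages together, the control flow of the estimator (3.10) fits in a few lines. This is a sketch under stated assumptions, not a tuned implementation: `mutate` (the Euler propagation over one coarse step $\Delta t$), `V`, and `loss_count` are placeholders to be supplied by the user.

```python
import numpy as np

def ips_loss_distribution(M, n, N, alpha, x0, mutate, V, loss_count, rng):
    """IPS estimate of (p_k(T))_{k=0..N} via (3.8)-(3.10) with weights (3.7)."""
    X = [x0] * M                         # current states X_p^{(j)}
    W = [x0] * M                         # parents W_p^{(j)} = previous states
    norm_consts = []                     # the normalizing constants of (3.8)
    for _ in range(n):
        # Selection stage: Gibbs weights exp(alpha (V(X) - V(W)))
        g = np.exp([alpha * (V(x) - V(w)) for w, x in zip(W, X)])
        norm_consts.append(g.mean())
        idx = rng.choice(M, size=M, p=g / g.sum())
        X = [X[j] for j in idx]
        # Mutation stage: parents become the selected states, then each
        # particle evolves independently under the ORIGINAL dynamics
        W = X
        X = [mutate(x, rng) for x in X]
    # Conclusion: unbias with the reciprocal weights and the normalizers (3.10)
    p_hat = np.zeros(N + 1)
    for w, x in zip(W, X):
        p_hat[loss_count(x)] += np.exp(-alpha * (V(w) - V(x0))) / M
    return p_hat * np.prod(norm_consts)
```

A quick consistency check: with $\alpha = 0$ the weights are constant, the estimator reduces to the plain Monte Carlo histogram, and its entries sum to one.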
4 Numerical Results
In this section we report on numerical experiments with an implementation of
the IPS procedure described in Section 2 with the stochastic volatility model
described in Section 3.
4.1 Parameters of the Numerical Experiments

All the computations reported in this section were done for a homogeneous portfolio of $N = 125$ names. The following table gives the parameters used for the stochastic volatility dynamics (3.2) of the process $\sigma(t)$.

Also, as before, we work with a time-independent barrier, say $B$. Using IPS, we
compute the probability of default before maturity $T$:
$$P_B(0, T) = \mathbb{P}\big( \min_{u \le T} S(u) \le B \big) = \mathbb{E}\big[ \mathbf{1}_{\{\min_{u \le T} S(u) \le B\}} \big].$$
We use the classical explicit formula for $P_B(0, T)$ as a benchmark:
$$P_B(0, T) = 1 - \Big[ N(d_2^+) - \Big( \frac{S_0}{B} \Big)^{p} N(d_2^-) \Big], \tag{A.1}$$
with $N$ denoting the standard normal $\mathcal{N}(0, 1)$ cumulative distribution function and
$$d_2^{\pm} = \frac{\pm \ln\frac{S_0}{B} + \big( r - \frac{1}{2}\sigma^2 \big) T}{\sigma \sqrt{T}}, \qquad p = 1 - \frac{2r}{\sigma^2}.$$
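Equation (A.1) translates directly into code; the sketch below uses `NormalDist` from the Python standard library for $N$.

```python
from math import log, sqrt
from statistics import NormalDist

def default_prob(S0, B, r, sigma, T):
    """First-passage probability P(min_{u<=T} S(u) <= B) from (A.1),
    for a constant-volatility geometric Brownian motion with drift r."""
    Phi = NormalDist().cdf
    d2_plus = (log(S0 / B) + (r - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2_minus = (-log(S0 / B) + (r - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    p = 1.0 - 2.0 * r / sigma ** 2
    return 1.0 - (Phi(d2_plus) - (S0 / B) ** p * Phi(d2_minus))
```

Two quick sanity checks: the probability equals 1 when $B = S_0$ (default is immediate), and it decreases as the barrier is lowered.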
We are only interested in values of $B$ that make the above event rare. We remark that it is a standard result that the variance associated with the traditional Monte Carlo method for computing $P_B(0, T)$ is $P_B(0, T)(1 - P_B(0, T))$. We also remark that for a single name, the Markov chain $(X_p)_{0 \le p \le n}$ defined in Section 3.2 simplifies to
$$X_p = \Big( S(t_p),\ \min_{u \le t_p} S(u) \Big).$$
Then, following the IPS setup described in Section 2, the rare event probability $P_B(0, T)$ admits the Feynman-Kac representation
$$P_B(0, T) = \gamma_n\big( L_n^{(B)} \big),$$
where $L_n^{(B)}$ is the weighted indicator function defined for any path $y_n = (x_0, \ldots, x_n) \in F_n$ by
$$L_n^{(B)}(y_n) = L_n^{(B)}(x_0, \ldots, x_n) = \mathbf{1}_{\{\min_{u \le T} S(u) \le B\}} \prod_{1 \le p < n} \bar{G}_p(x_0, \ldots, x_p)$$
$$= \mathbf{1}_{\{\min_{u \le T} S(u) \le B\}}\, e^{-\alpha ( V(x_{n-1}) - V(x_0) )} = \mathbf{1}_{\{\min_{u \le T} S(u) \le B\}}\, e^{\alpha \log ( \min_{u \le t_{n-1}} S(u)/S_0 )}$$
for our choice (3.7) of multiplicative potential function. Also, notice that we have $\big| L_n^{(B)}(y_n) \big| \le 1$ since $\log\big( \min_{u \le t_{n-1}} S(u)/S_0 \big) \le 0$ and $\alpha > 0$ by assumption. Next, following the IPS selection-mutation algorithm outlined in Section 3.3, we form the estimator
$$P_B^M(0, T) = \gamma_n^M\big( L_n^{(B)} \big) = \eta_n^M\big( L_n^{(B)} \big) \prod_{1 \le p < n} \eta_p^M(G_p). \tag{A.2}$$
By Lemma 1, $P_B^M(0, T)$ is an unbiased, consistent estimator of $P_B(0, T)$. While many estimators are unbiased, the key to determining the efficiency of our estimator is to study its variance and prove a central limit theorem.
A.2 Variance Analysis

In the present situation we can prove:
Theorem 1. The estimator $P_B^M(0, T)$ given in equation (A.2) is unbiased, and it satisfies the central limit theorem
$$\sqrt{M}\, \big[ P_B^M(0, T) - P_B(0, T) \big] \xrightarrow[M \to \infty]{} \mathcal{N}\big( 0, \sigma_n^B(\alpha)^2 \big),$$
with the asymptotic variance
$$\sigma_n^B(\alpha)^2 = \sum_{p=1}^{n} \Big[ \mathbb{E}\Big\{ e^{-\alpha \log ( \min_{u \le t_{p-1}} S(u)/S_0 )} \Big\}\, \mathbb{E}\Big\{ P_{B,p,n}(X_p)^2\, e^{\alpha \log ( \min_{u \le t_{p-1}} S(u)/S_0 )} \Big\} - P_B(0, T)^2 \Big], \tag{A.3}$$
where $P_{B,p,n}$ is the collection of functions defined by
$$P_{B,p,n}(x) = \mathbb{E}\Big\{ \mathbf{1}_{\{\min_{t \le T} S(t) \le B\}} \,\Big|\, X_p = x \Big\},$$
and $P_B(0, T)$ is given by (A.1).

The proof follows directly by applying Theorem 2.3 in [5] with the weight function that we have defined in (3.7).
In the constant-volatility single-name case, the asymptotic variance $\sigma_n^B(\alpha)^2$ can be obtained explicitly in terms of double and triple integrals with respect to explicit densities. This will be used in our comparison of the variances of IPS and pure Monte Carlo in the following section. The details of these explicit formulas are given in the Ph.D. dissertation [11].

As shown numerically in the next section, the variance for IPS is of order $p^2$, with $p = P_B(0, T)$ (small in the regime of interest), in contrast to being of order $p$ for the direct Monte Carlo simulation. This is indeed a very significant variance reduction in the regime of small $p$, as already observed in [5] in a different context.
A.3 Numerical Results

In this subsection, we compute the probability of default for different values of the barrier, comparing IPS to the standard Monte Carlo method. Notice that, in both cases, we implemented the continuity correction for the barrier level described in [2] to account for the fact that we are using a discrete approximation to the continuous barrier for both IPS and Monte Carlo. For the different values of the barrier we use, we can calculate the exact probability of default from equation (A.1).

The following are the parameters we used for both IPS and Monte Carlo: $r = .06$, $\sigma = .25$, $S_0 = 80$, $\delta t = .001$, $T = 1$, $n = 20$ (# of mutations in IPS), and $M = 20000$.

The number of simulations $M$ is the same for IPS and Monte Carlo, and from an empirical investigation, we chose $\alpha = 18.5$ in the IPS method (note that $18.5/125$ is within the range of $\alpha$'s used in Section 4 in the case of 125 names). The results are shown in Figure A.1.
Admittedly, probabilities of order $10^{-14}$ are irrelevant in the context of default probabilities, but the reader can see that IPS captures the rare event probabilities in the single-name case, whereas traditional Monte Carlo is not able to capture values below $10^{-4}$.

In Figure A.2 we show how the variance decreases with the barrier level, and therefore with the default probability, for Monte Carlo and IPS. In the IPS case, the variance is obtained both empirically and using the integral formulas derived in [11]. We deduce that the variance for IPS decreases as $p^2$ ($p$ being the default probability), as opposed to $p$ in the case of Monte Carlo simulation.
Each Monte Carlo and IPS simulation gives an estimate of the probability of default (whose theoretical value does not depend on the method) as well as an estimate of the standard deviation of the estimator (whose theoretical value does depend on the method). Therefore, it is instructive from a practical point of view to compare the two methods through the ratios of their standard deviations to the probability of default. If $p(B)$ is the probability of default for a certain barrier level $B$, then the standard deviation for traditional Monte Carlo is given by
$$\sigma_{\mathrm{MC}}(B) = \sqrt{p(B)}\, \sqrt{1 - p(B)},$$
and the theoretical ratio for Monte Carlo is given by
$$\frac{\sigma_{\mathrm{MC}}(B)}{p(B)} = \frac{\sqrt{1 - p(B)}}{\sqrt{p(B)}},$$
which can be computed using (A.1). For IPS, the corresponding ratio is
$$\frac{\sigma_{\mathrm{IPS}}(B)}{p(B)} = \frac{\sigma_n^B(\alpha)}{p(B)},$$
where $\sigma_n^B(\alpha)$ is given in Theorem 1. It is computed using the formula given in [11].

In Figure A.3 one sees that it is more efficient to use IPS than traditional Monte Carlo for certain values of the barrier level (below $0.65\, S_0$). This is to be expected since IPS is well suited to rare event probabilities whereas Monte Carlo is not.
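The Monte Carlo ratio above is easy to tabulate and makes the rare-event difficulty explicit (the IPS ratio requires the integral formulas of [11] and is not reproduced here):

```python
from math import sqrt

def mc_std_to_prob_ratio(p):
    """Std-to-probability ratio sqrt(1 - p)/sqrt(p) of one Monte Carlo sample;
    it grows like 1/sqrt(p), so a fixed relative accuracy costs O(1/p) samples."""
    return sqrt(1.0 - p) / sqrt(p)
```

For instance, at $p = 10^{-4}$ the ratio is already close to 100, i.e. a single Monte Carlo sample carries a standard deviation about a hundred times the quantity being estimated.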
References

1. A. Bassamboo and S. Jain, Efficient importance sampling for reduced form models in credit risk, in WSC '06: Proceedings of the 38th Conference on Winter Simulation, Winter Simulation Conference, 2006, pp. 741-748.
2. M. Broadie, P. Glasserman, and S. Kou, A continuity correction for discrete barrier options, Mathematical Finance, 7 (1997), pp. 325-349.
3. R. Carmona and S. Crépey, Importance sampling and interacting particle systems for the estimation of Markovian credit portfolios loss distributions, tech. rep., July 2008.
4. P. Del Moral, Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications, Springer-Verlag, 2004.
5. P. Del Moral and J. Garnier, Genealogical particle analysis of rare events, Annals of Applied Probability, 15 (2005), pp. 2496-2534.
6. J. Fouque, R. Sircar, and K. Solna, Stochastic volatility effects on defaultable bonds, Applied Mathematical Finance, 13 (2006), pp. 215-244.
7. J. Fouque, B. Wignall, and X. Zhou, Modeling correlated defaults: First passage model under stochastic volatility, Journal of Computational Finance, 11 (2008).
8. K. Giesecke, in preparation, tech. rep., 2008.
9. P. Glasserman, Monte Carlo Methods in Financial Engineering, Springer-Verlag, 2004.
10. P. Glasserman and J. Li, Importance sampling for portfolio credit risk, Management Science, 51 (2005), pp. 1643-1656.
11. D. Vestal, Interacting particle systems for pricing credit derivatives, Ph.D. Dissertation, University of California Santa Barbara, 2008.
12. C. Zhou, An analysis of default correlations and multiple defaults, Review of Financial Studies, 14 (2001), pp. 555-576.
Fig. 4.1 Loss distributions estimated from $M = 10^5$ Monte Carlo samples, of a homogeneous portfolio of $N = 125$ names for maturities $T = 1, 2, 3, 4, 5$. The parameters used are given in the tables above.

Fig. 4.2 Comparison of the estimates of the loss distribution (log-scale) given by plain Monte Carlo with $M = 10^5$ samples and $M = 50000$ samples for maturities $T = 1$ yr and $T = 5$ yr. The portfolio is homogeneous, has $N = 125$ names, and the parameters are given in the tables in the text.
Fig. 4.3 Comparison of the estimates of the loss distribution (log-scale) given by plain Monte Carlo with $M = 10^5$ samples and IPS with $M = 200$ samples. The portfolio is homogeneous, has $N = 125$ names, and the parameters are given in the tables in the text.

Fig. 4.4 Surface plot of the numbers of samples against the number $k$ of losses at maturity $T = 1$ and the value of the parameter $\alpha$.
[Figure: log-scale plot of the probability of default (from $10^{-14}$ to $10^{0}$) against the barrier level $B/S_0$ (from 0.1 to 0.8), with curves for IPS, MC, and the true value.]

Fig. A.1 Default probabilities for different barrier levels for IPS and Monte Carlo
[Figure: log-scale plot of the variance (from $10^{-30}$ to $10^{5}$) against the barrier level $B/S_0$ (from 0.1 to 0.8), with curves for the IPS empirical variance, the IPS theoretical variance, the MC theoretical variance, and the squared MC default probability.]

Fig. A.2 Variances for different barrier levels for IPS and Monte Carlo
[Figure: log-scale plot of the standard deviation-to-probability ratio $\sigma(B)/p(B)$ (from $10^{0}$ to $10^{7}$) against the barrier level $B/S_0$ (from 0.1 to 0.8), with curves for IPS theoretical, IPS empirical, MC theoretical, and MC empirical.]

Fig. A.3 Standard deviation-to-probability ratio for Monte Carlo and IPS