Technical Guide
Christopher C. Finger
August 2000
Abstract
For single horizon models of defaults in a portfolio, the effect of model and distribution choice on the
model results is well understood. Collateralized Debt Obligations in particular have sparked interest in
default models over multiple horizons. For these, however, there has been little research, and there is little
understanding of the impact of various model assumptions. In this article, we investigate four approaches
to multiple horizon modeling of defaults in a portfolio. We calibrate the four models to the same set of
input data (average defaults and a single period correlation parameter), and examine the resulting default
distributions. The differences we observe can be attributed to the model structures, and to some extent,
to the choice of distributions that drive the models. Our results show a significant disparity. In the single
period case, studies have concluded that when calibrated to the same first and second order information,
the various models do not produce vastly different conclusions. Here, the issue of model choice is much
more important, and any analysis of structures over multiple horizons should bear this in mind.
1 Introduction
In recent years, models of defaults in a portfolio context have been well studied. Three separate
approaches (CreditMetrics, CreditRisk+, and CreditPortfolioView1 ) were made public in 1997.
Subsequently, researchers2 have examined the mathematical structure of the various models. Each
of these studies has revealed that it is possible to calibrate the models to each other and that the
differences between the models lie in subtle choices of the driving distributions and in the data
sources one would naturally use to feed the models.
Common to all of these models, and to the subsequent examinations thereof, is the fact that the
models describe only a single period. In other words, the models describe, for a specific risk horizon,
whether each asset of interest defaults within the horizon. The timing of defaults within the risk
horizon is not considered, nor is the possibility of defaults beyond the horizon. This is not a flaw
of the current models, but rather an indication of their genesis as approaches to risk management
and capital allocation for a fixed portfolio.
Not entirely by chance, the development of portfolio models for credit risk management has coin-
cided with an explosion in issuance of Collateralized Debt Obligations (CDO’s). The performance
of a CDO structure depends on the default behavior of a pool of assets. Significantly, the depen-
dence is not just on whether the assets default over the life of the structure, but also on when the
defaults occur. Thus, while an application of the existing models can give a cursory view of the
structure (by describing, for instance, the distribution of the number of assets that will default over
the structure’s life), a more rigorous analysis requires a model of the timing of defaults.
In this paper, we will survey a number of extensions of the standard single-period models that allow
for a treatment of default timing over longer horizons. We will examine two extensions of the Cred-
itMetrics approach, one that models only defaults over time and a second that effectively accounts
1 See Wilson (1997).
2 See Finger (1998), Gordy (2000), and Kolyoglu and Hickman (1998).
for rating migrations. In addition, we will examine the copula function approach introduced by Li
(1999 and 2000), as well as a simple version of the stochastic intensity model applied by Duffie
and Garleanu (1998).
We will seek to investigate the differences among the four approaches that arise from the models
themselves, rather than from the data. Thus, we will suppose that we begin with satisfactory estimates of expected
default rates over time, and of the correlation of default events over one period. Higher order
information, such as the correlation of defaults in subsequent periods or the joint behavior of three
or more assets, will be driven by the structure of the models. The analysis of the models will
then illuminate the range of results that can arise given the same initial data. Nagpal and Bahar
(1999) adopt a similar approach in the single horizon context, investigating the range of possible
full distributions that can be calibrated to first and second order default statistics.
In the following section, we present terminology and notation to be used throughout. We proceed to
detail the four models. Finally, we present two comparison exercises: in the first, we use closed form
results to analyze default rate volatilities and conditional default probabilities, while in the second,
we implement Monte Carlo simulations in order to investigate the full distribution of realized default
rates.
2 Terminology and notation
In order to compare the properties of the four models, we will consider a large homogeneous pool
of assets. By homogeneous, we mean that each asset has the same probability of default (first order
statistics) at every time we consider; further, each pair of assets has the same joint probability of
default (second order statistics) at every time.
To describe the first order statistics of the pool, we specify the cumulative default probability qk
– the probability that a given asset defaults in the next k years – for k = 1, 2, . . . , T, where T is
the maximum horizon we consider. Equivalently, we may specify the marginal default probability
pk – the probability that a given asset defaults in year k. Clearly, cumulative and marginal default
probabilities are related through
\[
q_k = \sum_{i=1}^{k} p_i, \qquad \text{or equivalently,} \qquad p_k = q_k - q_{k-1}. \tag{1}
\]
Finally, to describe the second order statistics of the pool, we specify the joint cumulative default
probability qj,k – the probability that for a given pair of assets, the first asset defaults sometime in
the first j years and the second defaults sometime in the first k years – or equivalently, the joint
marginal default probability pj,k – the probability that the first asset defaults in year j and the
second defaults in year k. These two notions are related through
\[
q_{j,k} = q_{j-1,k-1} + \sum_{i=1}^{j-1} p_{i,k} + \sum_{i=1}^{k-1} p_{j,i} + p_{j,k},
\qquad \text{for } j, k = 2, \ldots, T. \tag{2}
\]
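To make the bookkeeping in (1) and (2) concrete, the following minimal Python sketch (ours, not the paper's) builds cumulative quantities from marginal ones; the arrays p and pj are hypothetical inputs used purely for illustration.

```python
import numpy as np

# Hypothetical inputs: p[k-1] = p_k and pj[j-1, k-1] = p_{j,k}.
p = np.array([0.03, 0.03, 0.03])
pj = np.full((3, 3), 0.001)

q = np.cumsum(p)                                   # q_k = sum_{i<=k} p_i, as in (1)
qj = np.cumsum(np.cumsum(pj, axis=0), axis=1)      # q_{j,k} = sum_{a<=j, b<=k} p_{a,b}

# The recursion (2) gives the same numbers, e.g. for j = k = 2 (zero-based index 1):
lhs = qj[1, 1]
rhs = qj[0, 0] + pj[0, 1] + pj[1, 0] + pj[1, 1]
assert np.isclose(lhs, rhs)
```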
In practice, it is possible to obtain first order statistics for relatively long horizons, either by observing
market prices of risky debt and calibrating cumulative default probabilities as in Duffie and Singleton
(1999), or by taking historical cumulative default experience from a study such as Keenan et al (2000)
or Standard & Poor’s (2000). Less information is available for second order statistics, however, and
therefore we will assume that we can obtain the joint default probability for the first year (p1,1 )3 ,
but not any of the joint default probabilities for subsequent years. Thus, our exercise will be to
calibrate each of the four models to fixed values of q1 , q2 , . . . qT and p1,1 , and then to compare the
higher order statistics implied by the models.
The model comparison can be a simple task of comparing values of p1,2 , p2,2 , q2,2 , and so on.
However, to make the comparisons a bit more tangible, we will consider the distributions of realized
3 This is a reasonable supposition, since all of the single period models mentioned previously essentially require p1,1 as an input.
default rates. The term "default rate" is often used loosely in the literature, without a clear notion
of whether default rate is synonymous with default probability, or rather is itself a random variable.
To be clear, in this article, default rate is a random variable equal to the proportion of assets in
a portfolio that default. For instance, if the random variable Xi(k) is equal to one if the ith asset
defaults in year k and zero otherwise, then the year k default rate is equal to
\[
\frac{1}{n} \sum_{i=1}^{n} X_i^{(k)}. \tag{3}
\]
For our homogeneous portfolio, the mean year k default rate is simply pk , the marginal default
probability for year k. Furthermore, the standard deviation of the year k default rate (which we will
refer to as the year k default rate volatility) is
\[
\sqrt{p_{k,k} - p_k^2 + (p_k - p_{k,k})/n}. \tag{4}
\]
Of interest to us is the large portfolio limit (that is, n → ∞) of this quantity, normalized by the
default probability. We will refer to this as the normalized year k default volatility, which is given
by
\[
\frac{\sqrt{p_{k,k} - p_k^2}}{p_k}. \tag{5}
\]
Additionally, we will examine the normalized cumulative year k default volatility, which is defined
similarly to the above, with the exception that the default rate is computed over the first k years
rather than year k only. The normalized cumulative default volatility is given by
\[
\frac{\sqrt{q_{k,k} - q_k^2}}{q_k}. \tag{6}
\]
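The default volatility formulas (4) through (6) are straightforward to evaluate; a short Python sketch (our illustration, with hypothetical values of pk and pk,k) is:

```python
import numpy as np

def default_rate_vol(pkk, pk, n=None):
    """Standard deviation of the year k default rate: (4) for a portfolio of n assets,
    or its large portfolio limit when n is None."""
    var = pkk - pk**2
    if n is not None:
        var += (pk - pkk) / n
    return np.sqrt(var)

def normalized_vol(pkk, pk):
    """Normalized (large portfolio) default volatility, as in (5); with cumulative
    inputs q_{k,k} and q_k it gives the cumulative version (6)."""
    return np.sqrt(pkk - pk**2) / pk

# Hypothetical second order statistics, for illustration only.
print(default_rate_vol(0.0030, 0.0335, n=100), normalized_vol(0.0030, 0.0335))
```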
Finally, we will use Φ to denote the standard normal cumulative distribution function. In the
bivariate setting, we will use Φ2(z1, z2; ρ) to indicate the probability that Z1 < z1 and Z2 < z2,
where Z1 and Z2 are standard normal random variables with correlation ρ.
In the following four sections, we describe the models to be considered, and discuss in detail the
calibration to our initial data.
3 Discrete CreditMetrics extension
In its simplest form, the single period CreditMetrics model, calibrated for our homogeneous port-
folio, can be stated as follows:
(i) Define the default threshold $\alpha = \Phi^{-1}(p_1)$.
(ii) To each asset i, assign a standard normal random variable Z (i) , where the correlation between
distinct Z (i) and Z (j ) is equal to ρ, such that
\[
\Phi_2(\alpha, \alpha; \rho) = p_{1,1}. \tag{7}
\]
(iii) Asset i defaults if Z (i) < α.
The simplest extension of this model to multiple horizons is to simply repeat the one period model.
We then have default thresholds α1 , α2 , . . . , αT corresponding to each period. For the first period,
we assign standard normal random variables Z1(i) to each asset as above, and asset i defaults in the
first period if Z1(i) < α1 . For assets that survive the first period, we assign a second set of standard
normal random variables Z2(i), such that the correlation between distinct Z2(i) and Z2(j) is ρ but the
variables from one period to the next are independent. Asset i then defaults in the second period
if Z1(i) > α1 (it survives the first period) and Z2(i) < α2 . The extension to subsequent periods
should be clear. In the end, the model is specified by the default thresholds α1 , α2 , . . . , αT and the
correlation parameter ρ.
To calibrate this model to our cumulative default probabilities q1, q2, . . . , qT and joint default
probability, we begin by setting the first period default threshold:
\[
\alpha_1 = \Phi^{-1}(q_1). \tag{8}
\]
For subsequent periods, we set αk such that the probability that Zk(i) < αk is equal to the conditional
default probability for period k:
\[
\alpha_k = \Phi^{-1}\!\left( \frac{q_k - q_{k-1}}{1 - q_{k-1}} \right). \tag{9}
\]
We complete the calibration by choosing ρ to satisfy (7), with α replaced by α1 .
The joint default probabilities and default volatilities are easily obtained in this context. For instance,
the marginal year two joint default probability is given by (for distinct i and j):
\[
\begin{aligned}
p_{2,2} &= P\left\{ Z_1^{(i)} > \alpha_1 \cap Z_1^{(j)} > \alpha_1 \cap Z_2^{(i)} < \alpha_2 \cap Z_2^{(j)} < \alpha_2 \right\} \\
&= P\left\{ Z_1^{(i)} > \alpha_1 \cap Z_1^{(j)} > \alpha_1 \right\} \cdot P\left\{ Z_2^{(i)} < \alpha_2 \cap Z_2^{(j)} < \alpha_2 \right\} \\
&= (1 - 2p_1 + p_{1,1}) \cdot \Phi_2(\alpha_2, \alpha_2; \rho).
\end{aligned} \tag{10}
\]
Similarly, the probability that asset i defaults in the first period, and asset j in the second period is
\[
p_{1,2} = P\left\{ Z_1^{(i)} < \alpha_1 \cap Z_1^{(j)} > \alpha_1 \cap Z_2^{(j)} < \alpha_2 \right\}
= (p_1 - p_{1,1}) \cdot \frac{q_2 - p_1}{1 - p_1}. \tag{11}
\]
It is then possible to obtain q2,2 using (2) and the default volatilities using (5) and (6).
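The calibration and the closed form results (8) through (11) translate directly into code. The following Python sketch is ours (the paper provides no code); it uses scipy for the normal CDF, its inverse and the bivariate CDF, and the input values are illustrative rather than the paper's calibration.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def phi2(z1, z2, rho):
    """Bivariate standard normal CDF, Phi_2(z1, z2; rho)."""
    return multivariate_normal.cdf([z1, z2], mean=[0.0, 0.0],
                                   cov=[[1.0, rho], [rho, 1.0]])

# Illustrative inputs: two year cumulative default probabilities and a one year
# joint default probability (hypothetical, not the paper's calibration tables).
q1, q2 = 0.0335, 0.0676
p11 = 0.0030

# Thresholds from (8) and (9).
alpha1 = norm.ppf(q1)
alpha2 = norm.ppf((q2 - q1) / (1.0 - q1))

# Asset correlation from (7): Phi_2(alpha1, alpha1; rho) = p_{1,1}.
rho = brentq(lambda r: phi2(alpha1, alpha1, r) - p11, 0.0, 0.99)

# Second year joint probabilities from (10) and (11), and q_{2,2} via (2).
p1 = q1
p22 = (1.0 - 2.0 * p1 + p11) * phi2(alpha2, alpha2, rho)
p12 = (p1 - p11) * (q2 - p1) / (1.0 - p1)
q22 = p11 + 2.0 * p12 + p22

# Normalized marginal and cumulative year two default volatilities, (5) and (6).
p2 = q2 - q1
vol_marginal = np.sqrt(p22 - p2**2) / p2
vol_cumulative = np.sqrt(q22 - q2**2) / q2
```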
4 Diffusion-driven CreditMetrics extension
By construction, the discrete CreditMetrics extension above does not allow for any correlation of
default rates through time. For instance, if a high default rate is realized in the first period, this has
no bearing on the default rate in the second period, since the default drivers for the second period
(the Z2(i) above) are independent of the default drivers for the first. Intuitively, we would not expect
this behavior from the market. If a high default rate occurs in one period, then it is likely that those
obligors that did not default would have generally decreased in credit quality. The impact would
then be that the default rate for the second period would also have a tendency to be high.
In order to capture this behavior, we introduce a CreditMetrics extension where defaults in con-
secutive periods are not driven by independent random variables, but rather by a single diffusion
process. Our diffusion-driven CreditMetrics extension is described by:
(i) Define default thresholds α1, α2, . . . , αT.
(ii) To each obligor, assign a standard Wiener process W(i), with W0(i) = 0, where the instantaneous correlation between distinct W(i) and W(j) is ρ.4
(iii) Obligor i defaults in year one if W1(i) < α1.
(iv) For k > 1, obligor i defaults in year k if it survives the first k − 1 years (that is, W1(i) > α1, . . . , Wk−1(i) > αk−1) and Wk(i) < αk.
Note that this approach allows for the behavior mentioned above. If the default rate is high in the
first year, this is because many of the Wiener processes have fallen below the threshold α1 . The
Wiener processes for non-defaulting obligors will have generally trended downward as well, since
all of the Wiener processes are correlated. This implies a greater likelihood of a high number of
defaults in the second year. In effect, then, this approach introduces a notion of credit migration.
Cases where the Wiener process trends downward but does not cross the default threshold can be
thought of as downgrades, while cases where the process trends upward are essentially upgrades.
To calibrate the thresholds, we require that the probability of a default in year one is p1, that is,
\[
P\left\{ W_1^{(i)} < \alpha_1 \right\} = \Phi(\alpha_1) = p_1, \tag{12}
\]
and thus that α1 is given by (8). For the second threshold, we require that the probability that an
obligor defaults in year two is equal to p2:
\[
P\left\{ W_1^{(i)} > \alpha_1 \cap W_2^{(i)} < \alpha_2 \right\} = p_2. \tag{13}
\]
Since W(i) is a Wiener process, we know that the standard deviation of Wt(i) is √t and that for
s < t, the correlation between Ws(i) and Wt(i) is √(s/t). Thus, given α1, we find the value of α2 that
satisfies
\[
\Phi\!\left( \alpha_2/\sqrt{2} \right) - \Phi_2\!\left( \alpha_1,\; \alpha_2/\sqrt{2};\; \sqrt{1/2} \right) = p_2. \tag{14}
\]
For the kth period, given α1, . . . , αk−1, we calibrate αk by solving
\[
P\left\{ W_1^{(i)} > \alpha_1 \cap \ldots \cap W_{k-1}^{(i)} > \alpha_{k-1} \cap W_k^{(i)} < \alpha_k \right\} = p_k, \tag{15}
\]
again utilizing the properties of the Wiener process W (i) to compute the probability on the left hand
side.
We complete the calibration by finding ρ such that the year one joint default probability is p1,1:
\[
P\left\{ W_1^{(i)} < \alpha_1 \cap W_1^{(j)} < \alpha_1 \right\} = p_{1,1}. \tag{16}
\]
Since W1(i) and W1(j) each follow a standard normal distribution, and have a correlation of ρ, the
solution for ρ here is identical to that of the previous section.
With the calibration complete, it is a simple task to compute the joint default probabilities. For
instance, the joint year two default probability is given by
\[
p_{2,2} = P\left\{ W_1^{(i)} > \alpha_1 \cap W_1^{(j)} > \alpha_1 \cap W_2^{(i)} < \alpha_2 \cap W_2^{(j)} < \alpha_2 \right\}, \tag{17}
\]
where we use the fact that {W1(i), W1(j), W2(i), W2(j)} follow a multivariate normal distribution with
covariance
\[
\mathrm{Cov}\left\{ W_1^{(i)}, W_1^{(j)}, W_2^{(i)}, W_2^{(j)} \right\} =
\begin{pmatrix}
1 & \rho & 1 & \rho \\
\rho & 1 & \rho & 1 \\
1 & \rho & 2 & 2\rho \\
\rho & 1 & 2\rho & 2
\end{pmatrix}. \tag{18}
\]
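As an illustration (again in Python, and again with illustrative inputs rather than the paper's calibration), the threshold α2 can be found from (13) and (14) with a one dimensional root search, and the joint probability (17) evaluated as a four dimensional normal probability using the covariance (18); the only trick is to flip the signs of the first year variables so that all inequalities point the same way.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

# Illustrative inputs.
q1, q2 = 0.0335, 0.0676
p2 = q2 - q1
rho = 0.10                      # asset correlation; calibrated from p_{1,1} as in the previous sketch

alpha1 = norm.ppf(q1)           # (8)

def prob_survive1_default2(a2):
    """Left hand side of (13): P{ W_1 > alpha1 and W_2 < a2 } for a standard
    Wiener process, computed as a bivariate normal probability of (-W_1, W_2)."""
    cov = [[1.0, -1.0], [-1.0, 2.0]]
    return multivariate_normal.cdf([-alpha1, a2], mean=[0.0, 0.0], cov=cov)

alpha2 = brentq(lambda a: prob_survive1_default2(a) - p2, -10.0, 10.0)

# Joint year two default probability (17), using the covariance (18).
cov4 = np.array([[1.0, rho, 1.0,       rho],
                 [rho, 1.0, rho,       1.0],
                 [1.0, rho, 2.0, 2.0 * rho],
                 [rho, 1.0, 2.0 * rho, 2.0]])
flip = np.diag([-1.0, -1.0, 1.0, 1.0])           # work with (-W_1^i, -W_1^j, W_2^i, W_2^j)
p22 = multivariate_normal.cdf([-alpha1, -alpha1, alpha2, alpha2],
                              mean=np.zeros(4), cov=flip @ cov4 @ flip)
```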
5 Copula functions
A drawback of both the CreditMetrics extensions above is that in a Monte Carlo setting, they require
a stepwise simulation approach. In other words, we must simulate the pool of assets over the first
year, tabulate the ones that default, then simulate the remaining assets over the second year, and so
on. Li (1999 and 2000) introduces an approach wherein it is possible to simulate the default times
directly, thus avoiding the need to simulate each period individually.
The approach can be described as follows:
(i) Specify the cumulative default time distribution F, such that F(t) gives the probability that a
given asset defaults prior to time t.
(ii) Assign a standard normal random variable Z (i) to each asset, where the correlation between
distinct Z (i) and Z (j ) is ρ.
(iii) Asset i defaults prior to time t if Φ(Z (i)) < F(t); that is, the default time of asset i is $F^{-1}(\Phi(Z^{(i)}))$.
Since we are concerned here only with the year in which an asset defaults, and not the precise
timing within the year, we will consider a discrete version of the copula approach: define thresholds
$\alpha_k = \Phi^{-1}(q_k)$ for k = 1, . . . , T (with $\alpha_0 = -\infty$), and let asset i default in year k if
\[
\alpha_{k-1} \le Z^{(i)} < \alpha_k, \tag{19}
\]
so that asset i defaults within the first k years if Z (i) < αk.
The calibration to the cumulative default probabilities is already given. Further, it is easy to observe5
that the correlation parameter ρ is calibrated exactly as in the previous two sections.
The joint default probabilities are perhaps simplest to obtain for this approach. For example, the
joint cumulative default probability qk,l is given by
\[
q_{k,l} = P\left\{ Z^{(i)} < \alpha_k \cap Z^{(j)} < \alpha_l \right\} = \Phi_2(\alpha_k, \alpha_l; \rho). \tag{20}
\]
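The practical appeal of the discrete copula approach is that default years can be simulated in one pass. A Python sketch of this (ours; the cumulative probabilities below are those of Table 2, and ρ = 10% is one of the correlation settings used later) relies on the familiar one factor representation of equicorrelated normals.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_default_years(q, rho, n_assets=100, n_scenarios=1000):
    """Simulate the default year of each asset directly: asset i defaults within the
    first k years iff Z_i < Phi^{-1}(q_k), as in (19). Returns an integer array of
    shape (n_scenarios, n_assets); 0 means no default by the final horizon."""
    alphas = norm.ppf(np.asarray(q))                  # thresholds alpha_k = Phi^{-1}(q_k)
    T = len(alphas)
    # One factor representation of equicorrelated standard normals.
    Y = rng.standard_normal((n_scenarios, 1))
    eps = rng.standard_normal((n_scenarios, n_assets))
    Z = np.sqrt(rho) * Y + np.sqrt(1.0 - rho) * eps
    idx = np.searchsorted(alphas, Z)                  # number of thresholds below Z
    return np.where(idx < T, idx + 1, 0)

# Speculative grade cumulative default probabilities (Table 2), low correlation.
years = simulate_default_years([0.0335, 0.0676, 0.0998, 0.1289, 0.1557, 0.1791], rho=0.10)
defaults_per_year = np.stack([(years == k).sum(axis=1) for k in range(1, 7)], axis=1)
```

The scenarios-by-years array of default counts produced here is the form of output used by the later sketches for default volatilities and for the example structure.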
6 Stochastic default intensity
The approaches of the three previous sections can all be thought of as extensions of the single
period CreditMetrics framework. Each approach relies on standard normal random variables to
drive defaults, and calibrates thresholds for these variables. Furthermore, it is easy to see that over
the first period, the three approaches are identical; they only differ in their behavior over multiple
periods.
Our fourth model takes a different approach to the construction of correlated defaults over time, and
can be thought of as an extension of the single period CreditRisk+ framework. In the CreditRisk+
model, correlations between default events are constructed through the assets’ dependence on a
common default probability, which itself is a random variable.6 Importantly, given the realization
of the default probability, defaults are conditionally independent. The volatility of the common
default probability is in effect the correlation parameter for this model; a higher default volatility
induces stronger correlations, while a zero volatility produces independent defaults.7
The natural extension of the CreditRisk+ framework to continuous time is the stochastic intensity
approach presented in Duffie and Garleanu (1998) and Duffie and Singleton (1999). Intuitively, the
stochastic intensity model stipulates that in a given small time interval, assets default independently,
with probability proportional to a common default intensity.8 In the next time interval, the intensity
changes, and defaults are once again independent, but with the default probability proportional to
the new intensity level. The evolution of the intensity is described through a stochastic process. In
practice, since the intensity must remain positive, it is common to apply similar stochastic processes
as are utilized in models of interest rates.
6 More precisely, assets may depend on different default probabilities, each of which are correlated.
7 See Finger (1998), Gordy (2000), and Kolyoglu and Hickman (1998) for further discussion.
8 As with our description of the CreditRisk+ model, this is a simplification. The Duffie-Garleanu framework provides for an intensity process for each asset, with the processes being correlated.
For our purposes, we will model a single intensity process h. Conditional on h, the default time
for each asset is then the first arrival of a Poisson process with arrival rate given by h. The Poisson
processes driving the defaults for distinct assets are independent, meaning that given a realization
of the intensity process h, defaults are independent. The Poisson process framework implies that
given h, the probability that a given asset survives until time t is
\[
\exp\left( - \int_0^t h_u \, du \right). \tag{21}
\]
Further, because defaults are conditionally independent, the conditional probability, given h, that
two assets both survive until time t is
\[
\exp\left( -2 \int_0^t h_u \, du \right). \tag{22}
\]
The unconditional survival probabilities are given by expectations over the process h, so that in
particular, the survival probability for a single asset is given by
\[
1 - q_t = E\left[ \exp\left( - \int_0^t h_u \, du \right) \right]. \tag{23}
\]
For the intensity process, we assume that h evolves according to the stochastic differential equation
\[
dh_t = -\kappa\left( h_t - \bar{h}_k \right) dt + \sigma \sqrt{h_t}\, dW_t, \tag{24}
\]
where W is a Wiener process and h̄k is the level to which the process trends during year k. (That
is, the mean reversion is toward h̄1 for t < 1, toward h̄2 for 1 ≤ t < 2, etc.) Let h0 = h̄1 . Note
that this is essentially the model for the instantaneous discount rate used in the Cox-Ingersoll-Ross
interest rate model. Note also that in Duffie-Garleanu, there is a jump component to the evolution
of h, while the level of mean reversion is constant.
In order to express the default probabilities implied by the stochastic intensity model in closed
form, we will rely on the following result from Duffie-Garleanu.9 For a process h with h0 = h̄ and
9 We have changed the notation slightly from the Duffie-Garleanu result, in order to make more explicit the dependence on h̄.
evolving according to (24) with h̄k = h̄ for all k, we have
\[
E_t\left[ \exp\left( - \int_t^{t+s} h_u \, du \right) \exp\left( x + y h_{t+s} \right) \right]
= \exp\left( x + \alpha_s(y)\,\bar{h} + \beta_s(y)\, h_t \right), \tag{25}
\]
where Et denotes conditional expectation given information available at time t. The functions αs
and βs are given by
\[
\alpha_s(y) = \frac{\kappa}{c}\, s + \frac{\kappa\left( a(y) c - d(y) \right)}{b\, c\, d(y)}
\log \frac{c + d(y) e^{bs}}{c + d(y)}, \quad \text{and} \tag{26}
\]
\[
\beta_s(y) = \frac{1 + a(y) e^{bs}}{c + d(y) e^{bs}}, \tag{27}
\]
where
\[
c = -\frac{\kappa + \sqrt{\kappa^2 + 2\sigma^2}}{2}, \tag{28}
\]
\[
d(y) = (1 - cy)\, \frac{\sigma^2 y - \kappa + \sqrt{(\sigma^2 y - \kappa)^2 - \sigma^2 (\sigma^2 y^2 - 2\kappa y - 2)}}{\sigma^2 y^2 - 2\kappa y - 2}, \tag{29}
\]
\[
a(y) = \left( d(y) + c \right) y - 1, \tag{30}
\]
\[
b = \frac{-d(y)(\kappa + 2c) + a(y)(\sigma^2 - \kappa c)}{a(y) c - d(y)}. \tag{31}
\]
6.2 Calibration
Our calibration approach for this model will be to fix the mean reversion speed κ, solve for h̄1 and
σ to match p1 and p1,1 , and then to solve in turn for h̄2 , . . . , h̄T to match p2 , . . . , pT . To begin,
we apply (23) and (25) to obtain
\[
p_1 = 1 - \exp\left[ \alpha_1(0)\bar{h}_1 + \beta_1(0) h_0 \right]
= 1 - \exp\left[ \left( \alpha_1(0) + \beta_1(0) \right) \bar{h}_1 \right]. \tag{32}
\]
To compute the joint probability that two obligors each survive the first year, we must take the
expectation of (22), which is essentially the same computation as above, but with the process h
replaced by 2h. We observe that the process 2h also evolves according to (24) with the same mean
reversion speed κ, and with h̄k replaced by 2h̄k and σ replaced by σ√2. Thus, we define the
functions α̂s and β̂s in the same way as αs and βs, with σ replaced by σ√2. We can then compute
the joint one year survival probability:
\[
E\left[ \exp\left( -2 \int_0^1 h_u \, du \right) \right]
= \exp\left[ 2\left( \hat{\alpha}_1(0) + \hat{\beta}_1(0) \right) \bar{h}_1 \right]. \tag{33}
\]
Finally, since the joint survival probability is equal to 1 − 2p1 + p1,1 , we have
\[
p_{1,1} = 2 p_1 - 1 + \exp\left[ 2\left( \hat{\alpha}_1(0) + \hat{\beta}_1(0) \right) \bar{h}_1 \right]. \tag{34}
\]
To calibrate σ and h̄1 to (32) and (34), we first find the value of σ such that
\[
\frac{2\left( \hat{\alpha}_1(0) + \hat{\beta}_1(0) \right)}{\alpha_1(0) + \beta_1(0)}
= \frac{\log\left[ 1 - 2 p_1 + p_{1,1} \right]}{\log\left[ 1 - p_1 \right]}, \tag{35}
\]
and then set
\[
\bar{h}_1 = \frac{\log\left[ 1 - p_1 \right]}{\alpha_1(0) + \beta_1(0)}. \tag{36}
\]
Note that though the equations are lengthy, the calibration is actually quite straightforward, in that
we are only ever required to fit one parameter at a time.
In order to calibrate h̄2 , we need to obtain an expression for the two year cumulative default
probability q2 . To this end, we must compute the two year survival probability
\[
1 - q_2 = E\left[ \exp\left( - \int_0^2 h_u \, du \right) \right]. \tag{37}
\]
Since the process h does not have a constant level of mean reversion over the first two years, we
cannot apply (25) directly here. However (25) can be applied once we express the two year survival
probability as
\[
1 - q_2 = E\left[ \exp\left( - \int_0^1 h_u \, du \right)
E_1\left[ \exp\left( - \int_1^2 h_u \, du \right) \right] \right]. \tag{38}
\]
Now given h1 , the process h evolves according to (24) from t = 1 to t = 2 with a constant mean
reversion level h̄2 , meaning we can apply (25) to the conditional expectation in (38), yielding
\[
1 - q_2 = E\left[ \exp\left( - \int_0^1 h_u \, du \right)
\exp\left( \alpha_1(0)\bar{h}_2 + \beta_1(0) h_1 \right) \right]. \tag{39}
\]
The same argument allows us to apply (25) again to (39), giving
\[
1 - q_2 = \exp\left( \alpha_1(0)\bar{h}_2
+ \left[ \alpha_1(\beta_1(0)) + \beta_1(\beta_1(0)) \right] \bar{h}_1 \right). \tag{40}
\]
The remaining mean reversion levels h̄3 , . . . , h̄T are calibrated similarly.
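For readers who prefer code to transform formulas, the following Python sketch (ours, not the paper's) reproduces the calibration steps (35), (36) and (40) numerically: rather than evaluating the closed form (26)–(31), it integrates the Riccati equations associated with the affine model (24)–(25), which should give the same αs(y) and βs(y). The input values are illustrative, not the paper's Table 1 calibration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def transform_coeffs(kappa, sigma, y=0.0, s=1.0):
    """Numerical alpha_s(y) and beta_s(y) for the transform (25): beta solves the
    Riccati equation beta' = -kappa*beta + 0.5*sigma^2*beta^2 - 1 with beta(0) = y,
    and alpha_s(y) = kappa * integral_0^s beta(u) du (the level h-bar is kept outside)."""
    def rhs(u, state):
        beta, _ = state
        return [-kappa * beta + 0.5 * sigma**2 * beta**2 - 1.0, beta]
    sol = solve_ivp(rhs, [0.0, s], [y, 0.0], rtol=1e-10, atol=1e-12)
    return kappa * sol.y[1, -1], sol.y[0, -1]             # alpha_s(y), beta_s(y)

def calibrate(p1, p11, q2, kappa):
    """sigma from (35), then h-bar_1 from (36), then h-bar_2 from (40)."""
    target = np.log(1.0 - 2.0 * p1 + p11) / np.log(1.0 - p1)
    def mismatch(sigma):
        a, b = transform_coeffs(kappa, sigma)                    # alpha_1(0), beta_1(0)
        ah, bh = transform_coeffs(kappa, sigma * np.sqrt(2.0))   # hatted: sigma -> sigma*sqrt(2)
        return 2.0 * (ah + bh) / (a + b) - target
    sigma = brentq(mismatch, 1e-4, 5.0)                          # bracket may need adjusting
    a0, b0 = transform_coeffs(kappa, sigma)
    hbar1 = np.log(1.0 - p1) / (a0 + b0)
    a1, b1 = transform_coeffs(kappa, sigma, y=b0)                # alpha_1(beta_1(0)), beta_1(beta_1(0))
    hbar2 = (np.log(1.0 - q2) - (a1 + b1) * hbar1) / a0          # (40), linear in h-bar_2
    return sigma, hbar1, hbar2

# Illustrative use: speculative grade inputs and the "slow" mean reversion speed.
sigma, hbar1, hbar2 = calibrate(p1=0.0335, p11=0.0030, q2=0.0676, kappa=0.29)
```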
The computation of joint probabilities for longer horizons is similar to (34). The joint probability
that two obligors each survive the first two years is given by
\[
E\left[ \exp\left( -2 \int_0^2 h_u \, du \right) \right], \tag{42}
\]
which, by the same argument that produced (40), applied now to the process 2h, is equal to
\[
\exp\left( 2\hat{\alpha}_1(0)\bar{h}_2
+ 2\left[ \hat{\alpha}_1(\hat{\beta}_1(0)) + \hat{\beta}_1(\hat{\beta}_1(0)) \right] \bar{h}_1 \right). \tag{43}
\]
For the joint probability that the first obligor survives the first year and the second survives the first
two years, we must compute
\[
E\left[ \exp\left( - \int_0^1 h_u \, du \right) \exp\left( - \int_0^2 h_u \, du \right) \right]
= E\left[ \exp\left( -2 \int_0^1 h_u \, du \right) \exp\left( - \int_1^2 h_u \, du \right) \right], \tag{44}
\]
which evaluates in the same fashion to
\[
\exp\left( \alpha_1(0)\bar{h}_2
+ 2\left[ \hat{\alpha}_1(\beta_1(0)/2) + \hat{\beta}_1(\beta_1(0)/2) \right] \bar{h}_1 \right). \tag{45}
\]
The joint default probabilities p2,2 and p1,2 then follow from (43) and (45).
7 Model comparisons – closed form results
Our first set of model comparisons will utilize the closed form results described in the previous
sections. We will restrict the comparisons here to the two period setting, and to second order results
(that is, default volatilities and joint probabilities for two assets); results for multiple periods and
actual distributions of default rates will be analyzed through Monte Carlo in the next section.
For our two period comparisons, we will analyze four sets of parameters: investment and speculative
grade default probabilities10 , each with two correlation values. The low and high correlation settings
will correspond to values of 10% and 40%, respectively, for the asset correlation parameter ρ in
the first three models. For the stochastic intensity model, we will investigate two values for the
mean reversion speed κ. The "slow" setting will correspond to κ = 0.29, such that a random shock
to the intensity process will decay by 25% over the next year; the "fast" setting will correspond
to κ = 1.39, such that a random shock to the intensity process will decay by 75% over one year.
Calibration results are presented in Table 1.
We present the normalized year two default volatilities for each model in Figure 1. As defined in (5)
and (6), the marginal and cumulative default volatilities are the standard deviation of the marginal
and cumulative two year default rates of a large, homogeneous portfolio. As we would expect, the
default volatilities are greater in the high correlation cases than in the low correlation cases. Of the
five models tested, the stochastic intensity model with slow mean reversion seems to produce the
highest levels of default volatility, indicating that correlations in the second period tend to be higher
for this model than for the others.
It is interesting to note that of the first three models, all of which are based on the normal distribution
and default thresholds, the copula approach in all four cases has a relatively low marginal default
volatility but a relatively high cumulative default volatility. (The slow stochastic intensity model is
in fact the only other model to show a marginal volatility less than the cumulative volatility.) Note
10 Taken from Exhibit 30 of Keenan et al (2000).
that the cumulative two year default rate is the sum of the first and second year marginal default
rates, and thus that the two year cumulative default volatility is composed of three terms: the first
and second year marginal default volatilities and the covariance between the first and second years.
Our calibration guarantees that the first year default volatilities are identical across the models.
Thus, the behavior of the copula model suggests a stronger covariance term (that is, a stronger link
between year one and year two defaults) than for either of the two CreditMetrics extensions.
To further investigate the links between default events, we examine conditional probability of a
default in the second year, given the default of another asset. To be precise, for two distinct assets i
and j , we will calculate the conditional probability that asset i defaults in year two, given that asset
j defaults in year one, normalized by the unconditional probability that asset i defaults in year two.
In terms of quantities we have already defined, this normalized conditional probability is equal to
p1,2 /(p1 p2 ). We will also calculate the normalized conditional probability that asset i defaults in
year two, given that asset j defaults in year two, given by p2,2 /p22 . For both of these quantities, a
value of one indicates that the first asset defaulting does not affect the chance that the second asset
defaults; a value of four indicates that the second asset is four times more likely to default if the
first asset defaults than it is if we have no information about the first asset. Thus, the probability
conditional on a year two default can be interpreted as an indicator of contemporaneous correlation
of defaults, and the probability conditional on a year one default as an indicator of lagged default
correlation.
The normalized conditional probabilities under the five models are presented in Figure 2. As we
expect, there is no lagged correlation for the discrete CreditMetrics extension. Interestingly, the
copula and both stochastic intensity models often show a higher lagged than contemporaneous
correlation. While it is difficult to establish much intuition for the copula model, this phenomenon
can be rationalized in the stochastic intensity setting. For this model, any shock to the default
intensity will tend to persist longer than one year. If one asset defaults in the first year, it is most
likely due to a positive shock to the intensity process; this shock then persists into the second year,
where the other asset is more likely to default than normal. Further, shocks are more persistent for the
slower mean reversion, explaining why the difference in lagged and contemporaneous correlation
is more pronounced in this case. By contrast, the two CreditMetrics extensions show much higher
contemporaneous than lagged correlation; this lack of persistence in the correlation structure will
manifest itself more strongly over longer horizons.
To this point, we have calibrated the collection of models to have the same means over two periods,
and the same volatilities over one period. We have then investigated the remaining second order
statistics – the second period volatility and the correlation between the first and second periods – that
depend on the particular models. In the next section, we will extend the analysis on two fronts: first,
we will investigate more horizons in order to examine the effects of lagged and contemporaneous
correlations over longer times; second, we will investigate the entire distribution of portfolio defaults
rather than just the second order moments.
8 Model comparisons – Monte Carlo results
In this section, we perform Monte Carlo simulations for the five models investigated previously.
In each case, we begin with a homogeneous portfolio of one hundred speculative grade bonds. We
calibrate the model to the cumulative default probabilities in Table 2 and to the two correlation
settings from the previous section. Over 1,000 trials, we simulate the number of bonds that default
within each year, up to a final horizon of six years.11
The simulation procedures are straightforward for the two CreditMetrics extensions and the copula
approach. For the stochastic intensity framework, we simulate the evolution of the intensity process
according to (24). This requires a discretization of (24):
\[
h_{t+\Delta t} \approx h_t - \kappa\left( h_t - \bar{h}_k \right) \Delta t
+ \sigma \sqrt{h_t}\, \varepsilon\, \sqrt{\Delta t}, \tag{46}
\]
11 As we have pointed out before, it is possible to simulate continuous default times under the copula and stochastic intensity frameworks. In order to compare with the two CreditMetrics extensions, we restrict the analysis to annual buckets.
where ε is a standard normal random variable.12 Given the intensity process path for a particular
scenario, we then compute the conditional survival probability for each annual period as in (21). Fi-
nally, we generate defaults by drawing independent binomial random variables with the appropriate
probability.
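A Python sketch of one scenario of this procedure (ours; for simplicity it uses a fixed Euler step and floors the intensity at zero, whereas the text instead shrinks the timestep to keep negative values unlikely):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_default_counts(hbar, kappa, sigma, n_assets=100, years=6, steps_per_year=250):
    """One scenario: Euler scheme for the intensity as in (46), then conditionally
    independent defaults drawn year by year. `hbar` holds the levels h-bar_1 ... h-bar_T."""
    dt = 1.0 / steps_per_year
    h = hbar[0]                               # h_0 = h-bar_1
    alive = n_assets
    defaults = np.zeros(years, dtype=int)
    for k in range(years):
        integral = 0.0
        for _ in range(steps_per_year):
            integral += h * dt
            shock = sigma * np.sqrt(h * dt) * rng.standard_normal()
            h = max(h - kappa * (h - hbar[k]) * dt + shock, 0.0)   # crude floor at zero
        p_default = 1.0 - np.exp(-integral)   # conditional one year default probability, cf. (21)
        defaults[k] = rng.binomial(alive, p_default)
        alive -= defaults[k]
    return defaults
```

Repeating this over, say, 1,000 scenarios and stacking the results gives the scenarios-by-years array of default counts used below.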
The simulation time for the five models is a direct result of the number of timesteps needed. The
copula model simulates the default times directly, and is therefore the fastest. The two CreditMetrics
models require only annual timesteps, and require roughly 50% more runtime than the copula model.
For the stochastic intensity model, the need to simulate over many timesteps produces a runtime
over one hundred times greater than the simpler models.
We first examine default rate volatilities over the six horizons. As in the previous section, we
consider the normalized cumulative default rate volatility. For year k, this is the standard deviation
of the number of defaults that occur in years one through k, divided by the expected number of
defaults in that period. This is essentially the quantity defined in (6), with the exception that
here we consider a finite portfolio. The default volatilities from our simulations are presented
in Figure 3. Our calibration guarantees that the first year default volatilities are essentially the
same. The second year results are similar to those in Figure 1, with slightly higher volatility for
the slow stochastic intensity model, and slightly lower volatility for the discrete CreditMetrics
extension. At longer horizons, these differences are amplified: the slow stochastic intensity and
discrete CreditMetrics models show high and low volatilities, respectively, while the remaining
three models are indistinguishable.
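Given a scenarios-by-years array of simulated default counts (for example the output of the copula sketch earlier), the normalized cumulative default rate volatility plotted in Figure 3 is a two line computation; the following is our illustration.

```python
import numpy as np

def normalized_cumulative_vol(defaults_per_year, n_assets=100):
    """Normalized cumulative default rate volatility by horizon: the standard deviation
    of the cumulative default rate divided by its mean, the finite portfolio analogue of (6).
    `defaults_per_year` has shape (scenarios, years)."""
    cum_rate = np.cumsum(defaults_per_year, axis=1) / n_assets
    return cum_rate.std(axis=0, ddof=1) / cum_rate.mean(axis=0)
```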
Though default rate volatilities are illustrative, they do not provide us with information about the full dis-
tribution of defaults through time. At the one year horizon, our calibration guarantees that volatility
will be consistent across the five models; the distributional assumptions, however, influence the precise
12 Note that while (24) guarantees a non-negative solution for h, the discretized version admits a small probability that h_{t+Δt} will be negative. To reduce this possibility, we choose Δt for each timestep such that the probability that h_{t+Δt} < 0 is sufficiently small. The result is that while we only need 50 timesteps per year in some cases, we require as many as one thousand when the value of σ is large, as in the high correlation, fast mean reversion case.
shape of the portfolio distribution. We see in Table 3 that there is actually very little difference
between even the 1st percentiles of the distributions, particularly in the low correlation case. For
the full six year horizon, Table 4 shows more differences between the percentiles. Consistent with
the default volatility results, the tail percentiles are most extreme for the slow stochastic intensity
model, and least extreme for discrete CreditMetrics. Interestingly, though the CreditMetrics diffu-
sion model shows similar volatility to the copula and fast stochastic intensity models, it produces
less extreme percentiles than these other models. Note also that among distributions with similar
means, the median serves well as an indicator of skewness. The high correlation setting generally,
and the slow stochastic intensity model in particular, show lower medians. For these cases, the
distribution places higher probability on the worst default scenarios as well as the scenarios with
few or no defaults.
The cumulative probability distributions for the six year horizons are presented in Figures 4 through
7. As in the other comparisons, the slow stochastic intensity model is notable for placing large prob-
ability on the very low and high default rate scenarios, while the discrete CreditMetrics extension
stands out as the most benign of the distributions. Most striking, however, is the similarity between
the fast stochastic intensity and copula models, which are difficult to differentiate even at the most
extreme percentile levels.
As a final comparison of the default distributions, we consider the pricing of a simple structure
written on our portfolio. Suppose each of the one hundred bonds in the portfolio has a notional
value of $1 million, and that in the event of a default the recovery rate on each bond is forty percent.
The structure is composed of three elements:
(i) First loss protection. As defaults occur, the protection seller reimburses the structure up to a
total payment of $10 million. Thus, the seller pays $600,000 at the time of the first default,
$600,000 at the time of each of the subsequent fifteen defaults, and $400,000 at the time of
the seventeenth default.
(ii) Second loss protection. The protection seller reimburses the structure for losses in excess of
$10 million, up to a total payment of $20 million. This amounts to reimbursing the losses on
the seventeenth through the fiftieth defaults.
(iii) Senior notes. Notes with a notional value of $100 million maturing after six years. The notes
suffer a principal loss if the first and second loss protection are fully utilized – that is, if more
than fifty defaults occur.
For the first and second loss protection, we will estimate the cost of the protection based on a
constant discount rate of 7%. In each scenario, we produce the timing and amounts of the protection
payments, and discount these back to the present time. The price of the protection is then the average
discounted value across the 1,000 scenarios. For the senior notes, we compute the expected principal
loss at maturity, which is used by Moody’s along with Table 5 to determine the notes’ rating.
Additionally, we compute the total amount of protection (capital) required to achieve a rating of A3
(an expected loss of 0.5%) and Aa3 (an expected loss of 0.101%).
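The payoff arithmetic for this structure is mechanical once the scenario default counts are in hand. The Python sketch below is our illustration of it, under the simplifying assumption that defaults within a year generate protection payments at the end of that year (the text discounts the payments from their simulated timing).

```python
import numpy as np

def price_structure(default_counts, r=0.07, notional=1.0, recovery=0.4):
    """Discounted cost of the first and second loss protection and the expected senior
    principal loss, from default counts per year (shape: scenarios x years, dollar
    amounts in $MM). Payments are assumed to fall at the end of the year of default."""
    n_scen, years = default_counts.shape
    loss_per_default = notional * (1.0 - recovery)                 # $0.6MM per default
    disc = (1.0 + r) ** -np.arange(1, years + 1)                   # end of year discount factors
    cum_loss = np.cumsum(default_counts, axis=1) * loss_per_default
    prev_loss = np.hstack([np.zeros((n_scen, 1)), cum_loss[:, :-1]])

    def tranche_pv(attach, detach):
        # Payments in year k: growth of the cumulative loss within the layer.
        paid = np.clip(cum_loss, attach, detach) - np.clip(prev_loss, attach, detach)
        return (paid @ disc).mean()

    first_loss = tranche_pv(0.0, 10.0)
    second_loss = tranche_pv(10.0, 30.0)
    senior_frac_loss = np.clip(cum_loss[:, -1] - 30.0, 0.0, None).mean() / 100.0
    return first_loss, second_loss, senior_frac_loss
```

Applied to the defaults_per_year array from the copula sketch, this produces numbers of the same kind as those reported in Table 6, though the precise figures depend on the model, the calibration, and the random seed.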
We present the first and second loss prices in Table 6, along with the expected loss, current rating,
and required capital for the senior notes. The slow stochastic intensity model yields the lowest
pricing for the first loss protection, the worst rating for the senior notes, and the highest required
capital. The results for the other models are as expected, with the copula and fast mean reversion
models yielding the most similar results.
9 Conclusion
The analysis of Collateralized Debt Obligations, and other structured products written on credit
portfolios, requires a model of correlated defaults over multiple horizons. For single horizon
models, the effect of model and distribution choice on the model results is well understood. For
the multiple horizon models, however, there has been little research.
In this article, we have examined four approaches to the multiple horizon modeling of defaults in a
portfolio. We have calibrated the four models to the same set of input data (average defaults and a single period
correlation parameter), and have investigated the resulting default distributions. The differences we
observe can be attributed to the model structures, and to some extent, to the choice of distributions
that drive the models. Our results show a significant disparity. The rating on a class of senior
notes under our low correlation assumption varied from Aaa to A3, and under our high correlation
assumption from A1 to Baa3. Additionally, the capital required to achieve a target investment grade
rating varied by as much as a factor of two.
In the single period case, a number of studies have concluded that when calibrated to the same
first and second order information, the various models do not produce vastly different conclusions.
Here, the issue of model choice is much more important, and any analysis of structures over multiple
horizons should heed this potential model error.
References
Cifuentes, A., Choi, E., and Waite, J. (1998). Stability of ratings of CBO/CLO tranches. Moody’s
Investors Service.
Credit Suisse Financial Products. (1997). CreditRisk+: A credit risk management framework.
Duffie, D. and Garleanu, N. (1998). Risk and valuation of Collateralized Debt Obligations. Working
paper. Graduate School of Business, Stanford University.
http://www.stanford.edu/~duffie/working.htm
Duffie, D. and Singleton, K. (1998). Simulating correlated defaults. Working paper. Graduate
School of Business, Stanford University.
http://www.stanford.edu/~duffie/working.htm
Duffie, D. and Singleton, K. (1999). Modeling term structures of defaultable bonds. Review of
Financial Studies, 12, 687-720.
Finger, C. (1998). Sticks and stones. Working paper. RiskMetrics Group.
http://www.riskmetrics.com/research/working
Gordy, M. (2000). A comparative anatomy of credit risk models. Journal of Banking & Finance,
24 (January), 119-149.
Gupton, G., Finger, C., and Bhatia, M. (1997). CreditMetrics – Technical Document. Morgan
Guaranty Trust Co. http://www.riskmetrics.com/research/techdoc
Li, D. (1999). The valuation of basket credit derivatives. CreditMetrics Monitor, April, 34-50.
http://www.riskmetrics.com/research/journals
Li, D. (2000). On default correlation: a copula approach. The Journal of Fixed Income, 9 (March),
43-54.
Keenan, S., Hamilton, D. and Berthault, A. (2000). Historical default rates of corporate bond
issuers, 1920-1999. Moody’s Investors Service.
Nagpal, K. and Bahar, R. (1999). An analytical approach for credit risk analysis under correlated de-
faults. CreditMetrics Monitor, April, 51-74. http://www.riskmetrics.com/research/journals
Standard & Poor’s. (2000). Ratings performance 1999: Stability & Transition.
Table 1: Calibration results.
Table 2: Moody’s speculative grade cumulative default probabilities. From Exhibit 30, Keenan et al (2000).
Year          1       2       3       4        5        6
Probability   3.35%   6.76%   9.98%   12.89%   15.57%   17.91%
Table 3: One year default statistics. Speculative grade.
Table 5: Target expected losses for six year maturity. From Chart 3, Cifuentes et al (2000).
Table 6: Prices (in $M) for first and second loss protection. Expected loss, rating, and required capital ($M)
for senior notes. Speculative grade collateral.
Senior notes
First loss Second loss Exp. loss Rating Capital (Aa3) Capital (A3)
Low correlation
CM Discrete 7.227 1.350 0.000% Aaa 17.3 13.8
CM Diffusion 6.676 1.533 0.017% Aa1 21.6 15.9
Copula 6.788 1.936 0.022% Aa1 24.5 18.0
Stoch. int. – slow 5.533 2.501 0.466% A3 39.8 29.4
Stoch. int. – fast 6.763 1.911 0.038% Aa2 25.7 18.3
High correlation
CM Discrete 6.117 2.698 0.159% A1 32.3 23.6
CM Diffusion 5.144 2.832 0.514% Baa1 41.1 30.2
Copula 5.210 3.200 0.821% Baa2 43.7 34.4
Stoch. int. – slow 4.856 3.307 1.903% Baa3 54.5 46.1
Stoch. int. – fast 5.685 3.500 0.918% Baa2 45.9 35.2
Figure 1: Marginal and cumulative year two default volatility.
[Bar charts of the normalized marginal and cumulative year two default volatilities by model (CM Discrete, CM Diffusion, Copula, Stoch. int. slow, Stoch. int. fast), for investment grade and speculative grade portfolios under the low and high correlation settings.]
Figure 2: Year two conditional default probability given default of a second asset.
[Bar charts of the normalized conditional default probabilities, conditional on a first year default and on a second year default, by model, for investment grade and speculative grade portfolios under the low and high correlation settings.]
Figure 3: Normalized cumulative default rate volatilities. Speculative grade.
[Normalized cumulative default rate volatility against horizon (years one through six) for the five models, under the low correlation (upper panel) and high correlation (lower panel) settings.]
Figure 4: Distribution of cumulative six year defaults. Speculative grade, low correlation.
[Cumulative probability against number of defaults (0 to 100) for the five models.]
Figure 5: Distribution of cumulative six year defaults, extreme cases. Speculative grade, low correlation.
[Cumulative probability from 80% to 100% against number of defaults (20 to 100) for the five models.]
Figure 6: Distribution of cumulative six year defaults. Speculative grade, high correlation.
[Cumulative probability against number of defaults (0 to 100) for the five models.]
Figure 7: Distribution of cumulative six year defaults, extreme cases. Speculative grade, high correlation.
[Cumulative probability from 80% to 100% against number of defaults (20 to 100) for the five models.]