Credit Default Swap Valuation Using Copula Fitting Method
Bruce Haydon
Citigroup Treasury
Discounting
For the Exposure Profile, each cashflow is discounted from the time it is received, not from the present time. As such, the first cashflow is discounted using DF(0,0.5), corresponding to the 6M Libor rate just fixed (cash to be paid out at 0.5Y).
Discount rates were obtained from the Bank of England's OIS spot curve, found at
http://www.bankofengland.co.uk.
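As a sketch of this convention (in Python, with hypothetical flat OIS zero rates standing in for the actual Bank of England curve, and nominal cashflows of 100), each cashflow is discounted from its payment date back to today:

```python
import math

# Hypothetical flat OIS zero rates by maturity in years; the actual values
# come from the Bank of England OIS spot curve.
ois_zero_rate = {0.5: 0.0045, 1.0: 0.0050, 1.5: 0.0055, 2.0: 0.0060}

def discount_factor(t, rate):
    """Continuously compounded discount factor DF(0, t)."""
    return math.exp(-rate * t)

# Each cashflow is discounted from the time it is RECEIVED back to today,
# so the first (6M) cashflow uses DF(0, 0.5).
cashflows = {0.5: 100.0, 1.0: 100.0, 1.5: 100.0, 2.0: 100.0}
pv = sum(cf * discount_factor(t, ois_zero_rate[t])
         for t, cf in cashflows.items())
```

The same pattern extends to the full exposure profile: one discount factor per payment date, each anchored at the cashflow's own receipt time.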
f̂_h(x) = (1/(n·h)) Σ_{i=1}^{n} K((x − x_i)/h)

where n is the sample size, K(·) is the kernel smoothing function, and h is the
bandwidth.
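The estimator can be coded directly from this formula. The following is a Python sketch using a Gaussian kernel and a toy sample standing in for one asset's historical dPDs (the document itself works in MATLAB via fitdist):

```python
import math

def gaussian_kernel(u):
    """Standard normal density, used as the kernel K(u)."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, samples, h):
    """Kernel density estimate: f_h(x) = 1/(n*h) * sum_i K((x - x_i)/h)."""
    n = len(samples)
    return sum(gaussian_kernel((x - xi) / h) for xi in samples) / (n * h)

# Toy sample (hypothetical dPD values) just to exercise the estimator.
samples = [-0.02, -0.01, 0.0, 0.01, 0.03]
density_at_zero = kde(0.0, samples, h=0.02)
```

Each data point contributes one scaled kernel bump; summing the bumps yields a smooth, continuous pdf that integrates to one.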
Kernel Smoothing Function
The kernel smoothing function defines the shape of the curve used to
generate the pdf. Similar to a histogram, the kernel distribution builds a
function to represent the probability distribution using the sample data. But
unlike a histogram, which places the values into discrete bins, a kernel
distribution sums the component smoothing functions for each data value to
produce a smooth, continuous probability curve. The following plots show a
visual comparison of a histogram and a kernel distribution generated from
the same sample data.
A histogram represents the probability distribution by establishing bins and
placing each data value in the appropriate bin. Below is a histogram of the
first sample set of dPDs (Italy):
MATLAB code used to generate histogram for original dPD historical data for
Obligor 3 (Portugal 5Y):
>> figure;
histogram(PDDELTAS(:,3));
str = sprintf('Histogram Original Samples Asset %d', 3);
title(str);
The resulting histogram:
The entire set of histograms for the original historical data sample sets can
be found in Appendix A.
Because of this bin-count approach, the histogram produces a discrete
probability density function. This is unsuitable for our purpose of
generating random numbers from a fitted distribution using a Copula, which
requires a smooth marginal distribution to invert.
Alternatively, the kernel distribution builds the pdf by creating an individual
probability density curve for each data value, then summing the smooth
curves. This approach creates one smooth, continuous probability density
function for the data set.
We can see the result of kernel smoothing in the graph of asset 3 below,
where a kernel-smoothed curve is overlaid on the original histogram. The
solid line represents the smoothed pdf. The full set of smoothed curves can
be found in Appendix A as well.
(7) Copula
Now that we have our matrix of pseudo-samples U-hist, there are two ways we
can proceed to obtain the correlation matrix necessary for the Copulas. The
first method converts the U-hist matrix into a matrix of Z-hist values
distributed N(0,1) using the inverse normal CDF, and takes the linear
correlation of that matrix. The second method uses MATLAB functions that
operate on the pseudo-samples U-hist directly, without transformation into
Z-hist, to generate correlation matrices for both Gaussian and t-copulas.
(a) Generate Z-hist, and obtain correlation from resulting matrix

For this, we use the transformation Z_hist = Φ⁻¹(U_hist), where Φ⁻¹ is the
inverse standard normal CDF applied elementwise to the pseudo-samples. The
linear correlation matrix obtained from Z_hist, ρ̂ = corr(Z_hist), is the
correlation matrix for the Gaussian Copula.
(b) Use copulafit directly on U-hist

The MATLAB function copulafit estimates the correlation matrix directly from
the pseudo-samples U_hist, without the intermediate transformation to Z_hist,
for the Gaussian Copula. In addition, this same function also allows us to
obtain the correlation matrix and degrees of freedom for a t-Copula through a
separate invocation specifying that Copula type.
rhohatGaussian = copulafit('Gaussian', uhist);
[rhohatTcop, dfhat] = copulafit('t', uhist);
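Method (a) can be sketched in Python (the document uses MATLAB's copulafit; here the bivariate pseudo-samples are simulated with a known correlation of 0.5 so the estimate can be checked against the truth):

```python
import math
import random
from statistics import NormalDist

random.seed(0)
nd = NormalDist()

# Hypothetical bivariate pseudo-samples with known dependence; in the paper,
# uhist comes from the empirical CDFs of the historical dPD samples.
n = 5000
rho_true = 0.5
z1 = [random.gauss(0.0, 1.0) for _ in range(n)]
z2 = [rho_true * a + math.sqrt(1.0 - rho_true ** 2) * random.gauss(0.0, 1.0)
      for a in z1]
u_hist = [(nd.cdf(a), nd.cdf(b)) for a, b in zip(z1, z2)]

# Method (a): map the uniforms back to normals with the inverse normal CDF,
# then take the linear (Pearson) correlation; this estimates the
# Gaussian-copula correlation matrix.
xs = [nd.inv_cdf(u) for u, _ in u_hist]
ys = [nd.inv_cdf(v) for _, v in u_hist]
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
rho_hat = cov / (sx * sy)
```

With genuine historical pseudo-samples the same transform-and-correlate step recovers the Gaussian-copula correlation matrix that copulafit returns.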
ρ̂ = A·A′.
[Rcholesky, posdefindicator] = chol(rhohatGaussian);

After this block of code was run, the following (5x5) matrix Rcholesky was
generated; its transpose represents A in the equation ρ̂ = A·A′.
>> invRho = transpose(Rcholesky)

invRho =

    1.0000         0         0         0         0
    0.1928    0.9812         0         0         0
    0.5265    0.2170    0.8220         0         0
   -0.0809    0.0639    0.0142    0.9946         0
    0.1307    0.4645    0.1659    0.1805    0.8409

>> test = invRho*Rcholesky

test =

    1.0000    0.1928    0.5265   -0.0809    0.1307
    0.1928    1.0000    0.3144    0.0471    0.4810
    0.5265    0.3144    1.0000   -0.0170    0.3059
   -0.0809    0.0471   -0.0170    1.0000    0.2011
    0.1307    0.4810    0.3059    0.2011    1.0000
We can see that the matrix test, which is the product of the generated
Cholesky factor and its transpose, equals our original correlation matrix
ρ̂ (rhohatGaussian), confirming that ρ̂ = A·A′ holds.
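The decompose-and-verify step can be sketched in Python on the leading 3x3 block of the matrix above (a hand-rolled Cholesky routine stands in for MATLAB's chol):

```python
import math

def cholesky_lower(m):
    """Lower-triangular A with A*A' = m (classic Cholesky factorization)."""
    n = len(m)
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(a[i][k] * a[j][k] for k in range(j))
            if i == j:
                a[i][j] = math.sqrt(m[i][i] - s)
            else:
                a[i][j] = (m[i][j] - s) / a[j][j]
    return a

# Leading 3x3 block of the fitted correlation matrix shown above.
rho = [[1.0,    0.1928, 0.5265],
       [0.1928, 1.0,    0.3144],
       [0.5265, 0.3144, 1.0]]
A = cholesky_lower(rho)

# Reconstruct rho = A*A' entrywise, mirroring the invRho*Rcholesky check.
recon = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(3)]
         for i in range(3)]
```

The factor reproduces the document's values (e.g. A[1][1] ≈ 0.9812, A[2][2] ≈ 0.8220), and the entrywise product recovers the original correlation matrix.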
After the command was run, the following (10000 x n) matrix was generated.
Each row represents a vector U. To confirm the variables are uniformly
distributed, I generated a histogram of the data in column 1:
% Generate histogram on first column as a test to ensure uniform distribution
figure;
histogram(Urandom(:,1));
str = sprintf('Histogram Random Generated Samples Asset 1');
title(str);
After the code above was executed, the following histogram was generated:
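The simulation step above (drawing the Urandom uniforms and checking their margins) can be sketched in Python for a hypothetical two-asset case: independent normals are correlated through the Cholesky factor A, then mapped through the normal CDF to uniforms, whose first column can be checked for uniformity by decile counts:

```python
import math
import random
from statistics import NormalDist

random.seed(42)
nd = NormalDist()

# Hypothetical 2x2 case: lower-triangular Cholesky factor A of a
# correlation matrix with off-diagonal rho.
rho = 0.1928
a21 = math.sqrt(1.0 - rho * rho)

n_sims = 10000
u_random = []
for _ in range(n_sims):
    e1, e2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    # Correlate the independent normals: Z = A * eps ...
    z1, z2 = e1, rho * e1 + a21 * e2
    # ... then map each Z through the normal CDF to get uniforms on (0,1).
    u_random.append((nd.cdf(z1), nd.cdf(z2)))

# Column 1 should be approximately uniform on (0,1): count samples per decile.
decile_counts = [0] * 10
for u1, _ in u_random:
    decile_counts[min(int(u1 * 10), 9)] += 1
```

Each decile should hold roughly 1000 of the 10000 samples, the uniform-margin behaviour the histogram check is looking for, while the dependence between columns carries the copula correlation.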
Z = Φ⁻¹(U).
% Now convert "U" into vector of standard Normal distributed variables "Z"
% through the use of the Normal inverse cumulative distribution function.
ZSimulation=norminv(Urandom,0,1);
%Z=NormInverseCDF(U)
for ii = 1:nAssets
    figure;
    histogram(PDDELTAS(:,ii));                 % plot histogram first
    hold on;
    pdKS = fitdist(PDDELTAS(:,ii),'Kernel');   % fit kernel-smoothed distribution
    x = -1:.05:1;
    ySix = pdf(pdKS,x);
    ySix = ySix*15;                            % scale pdf to overlay on histogram
    plot(x,ySix,'k-','LineWidth',2);
    str = sprintf('Kernel Smoothed PDF - Asset %d', ii);
    title(str);
end
nSimulations = 10000;