6 The Multivariate Normal Distribution and Copulas
Introduction
In this chapter we introduce the multivariate normal distribution and show how to
generate random variables having this joint distribution. We also introduce copulas,
which are useful when choosing joint distributions to model random variables
whose marginal distributions are known.
A multivariate normal random vector $X = (X_1, \ldots, X_n)$ can be expressed as
$$X = AZ + \mu \tag{6.2}$$
where $A$ is an $n \times m$ matrix of constants, $\mu = (\mu_1, \ldots, \mu_n)$ is the vector of means, and $Z = (Z_1, \ldots, Z_m)$ is a vector of independent standard normal random variables. Consequently, any linear combination $\sum_{i=1}^n t_i X_i$ is itself a linear combination of the independent normals $Z_1, \ldots, Z_m$, and is thus also a normal random variable. Hence, using that $E[e^W] = \exp\{E[W] + \mathrm{Var}(W)/2\}$ when $W$ is normal, we see that
$$E\left[\exp\left\{\sum_{i=1}^n t_i X_i\right\}\right] = \exp\left\{E\left[\sum_{i=1}^n t_i X_i\right] + \frac{1}{2}\,\mathrm{Var}\left(\sum_{i=1}^n t_i X_i\right)\right\}$$
As
$$E\left[\sum_{i=1}^n t_i X_i\right] = \sum_{i=1}^n t_i \mu_i$$
and
$$\mathrm{Var}\left(\sum_{i=1}^n t_i X_i\right) = \mathrm{Cov}\left(\sum_{i=1}^n t_i X_i,\ \sum_{j=1}^n t_j X_j\right) = \sum_{i=1}^n \sum_{j=1}^n t_i t_j\,\mathrm{Cov}(X_i, X_j)$$
we see that the joint moment generating function, and thus the joint distribution,
of the multivariate normal vector is specified by knowledge of the mean values
and the covariances.
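As a concrete illustration of this fact, the following minimal sketch (assuming NumPy; the particular mean vector and covariance matrix are illustrative choices, not from the text) samples a multivariate normal vector and checks that the empirical moments match the specified ones:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, 2.0])          # illustrative mean vector
C = np.array([[2.0, 0.6],
              [0.6, 1.0]])         # illustrative covariance matrix

# The joint distribution is fully determined by (mu, C).
X = rng.multivariate_normal(mu, C, size=100_000)

print(X.mean(axis=0))   # close to mu
print(np.cov(X.T))      # close to C
```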
That is, the requirement that $AA^T = C$, with $A$ lower triangular, becomes
$$\begin{pmatrix} a_{11}^2 & a_{11}a_{21} \\ a_{11}a_{21} & a_{21}^2 + a_{22}^2 \end{pmatrix} = \begin{pmatrix} \sigma_1^2 & c \\ c & \sigma_2^2 \end{pmatrix}$$
Letting $\rho = \frac{c}{\sigma_1 \sigma_2}$ be the correlation between $X_1$ and $X_2$, the preceding gives that
$$a_{11} = \sigma_1, \qquad a_{21} = c/\sigma_1 = \rho\sigma_2, \qquad a_{22} = \sqrt{\sigma_2^2 - \rho^2\sigma_2^2} = \sigma_2\sqrt{1 - \rho^2}$$
Hence, letting
$$A = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sigma_2\sqrt{1-\rho^2} \end{pmatrix} \tag{6.5}$$
it follows from $X = AZ + \mu$ that we can generate the bivariate normal pair by setting
$$X_1 = \sigma_1 Z_1 + \mu_1$$
$$X_2 = \rho\sigma_2 Z_1 + \sigma_2\sqrt{1-\rho^2}\, Z_2 + \mu_2$$
where $Z_1$ and $Z_2$ are independent standard normals.
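A minimal sketch of this generation scheme in Python (NumPy assumed; the function name and parameter values are ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def bivariate_normal(mu1, mu2, sigma1, sigma2, rho, n, rng=rng):
    """Generate n pairs (X1, X2) via X = AZ + mu with A as in Equation (6.5)."""
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    x1 = sigma1 * z1 + mu1
    x2 = rho * sigma2 * z1 + sigma2 * np.sqrt(1 - rho**2) * z2 + mu2
    return x1, x2

x1, x2 = bivariate_normal(mu1=0.0, mu2=0.0, sigma1=1.0, sigma2=2.0,
                          rho=0.5, n=100_000)
print(np.corrcoef(x1, x2)[0, 1])   # close to rho = 0.5
```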
The preceding can also be used to derive the joint density of the bivariate normal vector $X_1, X_2$. Start with the joint density function of $Z_1, Z_2$:
$$f_{Z_1, Z_2}(z_1, z_2) = \frac{1}{2\pi}\exp\left\{-\frac{z_1^2 + z_2^2}{2}\right\}$$
and consider the transformation
$$x_1 = \sigma_1 z_1 + \mu_1 \tag{6.6}$$
$$x_2 = \rho\sigma_2 z_1 + \sigma_2\sqrt{1-\rho^2}\, z_2 + \mu_2 \tag{6.7}$$
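Solving (6.6) and (6.7) for $z_1, z_2$ and applying the change-of-variables formula (a standard step, sketched here for completeness) recovers the familiar bivariate normal density. Inverting gives
$$z_1 = \frac{x_1 - \mu_1}{\sigma_1}, \qquad z_2 = \frac{1}{\sqrt{1-\rho^2}}\left(\frac{x_2 - \mu_2}{\sigma_2} - \rho\,\frac{x_1 - \mu_1}{\sigma_1}\right)$$
and the Jacobian of the transformation is $\left|\frac{\partial(x_1, x_2)}{\partial(z_1, z_2)}\right| = \sigma_1\sigma_2\sqrt{1-\rho^2}$, so
$$f_{X_1, X_2}(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x_1-\mu_1}{\sigma_1}\right)^2 - 2\rho\,\frac{(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2} + \left(\frac{x_2-\mu_2}{\sigma_2}\right)^2\right]\right\}$$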
For the general Choleski decomposition, the equations determining the entries of $A$ from $AA^T = C$ can be solved in the order
$$(1, 1), (2, 1), \ldots, (n, 1), (2, 2), (3, 2), \ldots, (n, 2), (3, 3), \ldots, (n, 3), \ldots, (n-1, n-1), (n, n-1), (n, n)$$
By symmetry the equations obtained for $(i, j)$ and $(j, i)$ would be the same, and so only the first to appear is given.
For instance, suppose we want the Choleski decomposition of the matrix
$$C = \begin{pmatrix} 9 & 4 & 2 \\ 4 & 8 & 3 \\ 2 & 3 & 7 \end{pmatrix} \tag{6.9}$$
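As a quick numerical check (a sketch assuming NumPy; `np.linalg.cholesky` returns the lower-triangular factor):

```python
import numpy as np

C = np.array([[9.0, 4.0, 2.0],
              [4.0, 8.0, 3.0],
              [2.0, 3.0, 7.0]])   # the matrix of Equation (6.9)

A = np.linalg.cholesky(C)        # lower-triangular A with A @ A.T == C
print(A)                         # first column: 3, 4/3, 2/3
print(A @ A.T)                   # reproduces C
```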
6.3 Copulas
A joint probability distribution function that results in both marginal distributions
being uniformly distributed on (0, 1) is called a copula. That is, the joint distribution function $C(x, y)$ is a copula if
$$C(x, 1) = x, \qquad C(1, y) = y$$
Suppose now that we want to model a pair of random variables $X$ and $Y$ whose continuous marginal distribution functions are known:
$$P(X \le x) = F(x)$$
and
$$P(Y \le y) = G(y)$$
Having some knowledge about the type of dependency between $X$ and $Y$, we want to choose an appropriate joint distribution function $H(x, y) = P(X \le x, Y \le y)$. Because $X$ has distribution $F$ and $Y$ has distribution $G$, it follows that $F(X)$ and $G(Y)$ are both uniform on (0, 1). Consequently, the joint distribution function of $F(X), G(Y)$ is a copula. Also, because $F$ and $G$ are both increasing functions, it follows that $X \le x, Y \le y$ if and only if $F(X) \le F(x), G(Y) \le G(y)$. Consequently, if we choose the copula $C(x, y)$ as the joint distribution function of $F(X), G(Y)$, then
$$H(x, y) = P(X \le x, Y \le y) = P(F(X) \le F(x),\ G(Y) \le G(y)) = C(F(x), G(y))$$
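This identity suggests the general recipe for generating from a copula model: draw a pair $(U, V)$ from the copula, then set $X = F^{-1}(U)$, $Y = G^{-1}(V)$. A minimal sketch (the function names are ours; the independence copula and exponential marginals are placeholder choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_from_copula_model(copula_sampler, F_inv, G_inv, n):
    """Draw (U, V) from a copula, then apply the marginal quantile
    functions; (X, Y) then has joint distribution C(F(x), G(y))."""
    u, v = copula_sampler(n)
    return F_inv(u), G_inv(v)

# Placeholder copula sampler: independent uniforms (the independence copula).
def independence_copula(n):
    return rng.random(n), rng.random(n)

# Placeholder marginals: exponentials with rates 1 and 2.
F_inv = lambda u: -np.log(1.0 - u)          # Exp(1) quantile function
G_inv = lambda v: -np.log(1.0 - v) / 2.0    # Exp(2) quantile function

x, y = sample_from_copula_model(independence_copula, F_inv, G_inv, 10_000)
```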
When $X, Y$ have a bivariate normal distribution with standard normal marginals and correlation $\rho$, the joint distribution function of $\Phi(X)$ and $\Phi(Y)$, where $\Phi$ denotes the standard normal distribution function, is called the Gaussian copula. That is, the Gaussian copula $C$ is given by
$$C(x, y) = P(\Phi(X) \le x,\ \Phi(Y) \le y) = P(X \le \Phi^{-1}(x),\ Y \le \Phi^{-1}(y))$$
$$= \int_{-\infty}^{\Phi^{-1}(x)} \int_{-\infty}^{\Phi^{-1}(y)} \frac{1}{2\pi\sqrt{1-\rho^2}}\, \exp\left\{-\frac{s^2 - 2\rho s t + t^2}{2(1-\rho^2)}\right\}\, dt\, ds$$
where $s$ and $t$ are the integration variables.
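A pair distributed according to this copula is easily generated by sampling the bivariate normal pair and applying $\Phi$. A minimal sketch (NumPy and SciPy assumed; the function name is ours):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def gaussian_copula_sample(rho, n, rng=rng):
    """Return n pairs (Phi(X), Phi(Y)) where (X, Y) is standard bivariate
    normal with correlation rho; each coordinate is uniform on (0, 1)."""
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    x = z1
    y = rho * z1 + np.sqrt(1 - rho**2) * z2
    return norm.cdf(x), norm.cdf(y)

u, v = gaussian_copula_sample(rho=0.7, n=100_000)
```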
Remark The terminology “Gaussian copula” is used because the normal
distribution is often called the Gaussian distribution in honor of the famous
mathematician Carl Friedrich Gauss, who made important use of the normal distribution
in his astronomical studies.
Note that the marginal distribution functions can be recovered from the joint distribution function $H$ via
$$F(x) = \lim_{y \to \infty} H(x, y)$$
and
$$G(y) = \lim_{x \to \infty} H(x, y)$$
Now let $s$ and $t$ be increasing functions, and let $F_s$ and $F_t$ denote the distribution functions of $s(X)$ and $t(Y)$. Because $s$ is increasing, $F_s(x) = P(s(X) \le x) = P(X \le s^{-1}(x)) = F(s^{-1}(x))$, and similarly $F_t(y) = G(t^{-1}(y))$. Consequently,
$$F_s(s(X)) = F(s^{-1}(s(X))) = F(X)$$
and
$$F_t(t(Y)) = G(Y)$$
showing that the joint distribution of $F_s(s(X)), F_t(t(Y))$ is the same as that of $F(X), G(Y)$. That is, the copula of $s(X), t(Y)$ is the same as the copula of $X, Y$: a copula is unaffected by increasing transformations of the underlying random variables.
Suppose again that $X, Y$ has a joint distribution function $H(x, y)$ and that the continuous marginal distribution functions are $F$ and $G$. Another way to obtain a copula, aside from using that $F(X)$ and $G(Y)$ are both uniform on (0, 1), is to use that $1 - F(X)$ and $1 - G(Y)$ are also uniform on (0, 1). Hence, the joint distribution function of $1 - F(X), 1 - G(Y)$, namely
$$C(x, y) = P(1 - F(X) \le x,\ 1 - G(Y) \le y) \tag{6.10}$$
is also a copula. It is sometimes called the copula generated by the tail distributions of $X$ and $Y$.
Now, for $0 < x, y < 1$,
$$\exp\{-(\lambda_1 + \lambda_3) F^{-1}(1 - x)\} = x, \qquad \exp\{-(\lambda_2 + \lambda_3) G^{-1}(1 - y)\} = y$$
or, equivalently,
$$\exp\{\lambda_3 F^{-1}(1 - x)\} = x^{-\frac{\lambda_3}{\lambda_1 + \lambda_3}}, \qquad \exp\{\lambda_3 G^{-1}(1 - y)\} = y^{-\frac{\lambda_3}{\lambda_2 + \lambda_3}}$$
Hence, from Equations (6.10) and (6.13) we obtain that the copula generated by the tail distributions of $X$ and $Y$, referred to as the Marshall–Olkin copula, is
$$C(x, y) = \min(x^{\alpha} y,\ x y^{\beta})$$
where $\alpha = \frac{\lambda_1}{\lambda_1 + \lambda_3}$ and $\beta = \frac{\lambda_2}{\lambda_2 + \lambda_3}$.
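The closed form is easy to verify by simulation. The sketch below (NumPy assumed; the rates and the evaluation point are illustrative) compares the empirical joint distribution of the tail-transformed pair against $\min(x^{\alpha} y, x y^{\beta})$:

```python
import numpy as np

rng = np.random.default_rng(4)
lam1, lam2, lam3 = 1.0, 2.0, 3.0            # illustrative rates
alpha = lam1 / (lam1 + lam3)
beta = lam2 / (lam2 + lam3)

# Simulate the tail-transformed pair, whose joint distribution is the copula.
n = 200_000
t1, t2, t3 = (rng.exponential(1 / lam, n) for lam in (lam1, lam2, lam3))
u = np.exp(-(lam1 + lam3) * np.minimum(t1, t3))   # 1 - F(X)
v = np.exp(-(lam2 + lam3) * np.minimum(t2, t3))   # 1 - G(Y)

x, y = 0.3, 0.6                                   # illustrative point
print(np.mean((u <= x) & (v <= y)))               # empirical C(x, y)
print(min(x**alpha * y, x * y**beta))             # closed form, ~0.245 here
```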
Multidimensional Copulas
We can also use copulas to model $n$-dimensional probability distributions. The $n$-dimensional distribution function $C(x_1, \ldots, x_n)$ is said to be a copula if all $n$ marginal distributions are uniform on (0, 1). We can now choose a joint distribution of a random vector $X_1, \ldots, X_n$ by first choosing the marginal distribution functions $F_i$, $i = 1, \ldots, n$, and then choosing a copula for the joint distribution of $F_1(X_1), \ldots, F_n(X_n)$. Again a popular choice is the Gaussian copula, which takes $C$ to be the joint distribution function of $\Phi(W_1), \ldots, \Phi(W_n)$ when $W_1, \ldots, W_n$ has a multivariate normal distribution with mean vector 0 and a specified covariance matrix whose diagonal (variance) values are all 1. (The diagonal values of the covariance matrix are taken equal to 1 so that the distribution of $\Phi(W_i)$ is uniform on (0, 1).) In addition, so that the relationship between $X_i$ and $X_j$ is similar to that between $W_i$ and $W_j$, it is usual to let $\mathrm{Cov}(W_i, W_j) = \mathrm{Cov}(X_i, X_j)$, $i \ne j$.
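A minimal sketch of this $n$-dimensional construction (NumPy and SciPy assumed; the correlation matrix below is illustrative, and the function name is ours):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Illustrative covariance matrix with unit diagonal, as required.
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])

def gaussian_copula_nd(Sigma, n, rng=rng):
    """Return n draws of (Phi(W1), ..., Phi(Wn)) for W multivariate
    normal with mean 0 and covariance Sigma (unit variances)."""
    W = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size=n)
    return norm.cdf(W)

U = gaussian_copula_nd(Sigma, 10_000)   # each column is uniform on (0, 1)
```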
6.4 Generating Variables from Copula Models

Suppose we want to generate a random vector $V, W$ whose marginal distribution functions are $H$ and $R$, and whose dependence structure is that of the Marshall–Olkin copula. Because the pair $\left(e^{-(\lambda_1+\lambda_3)X},\ e^{-(\lambda_2+\lambda_3)Y}\right)$ is a generated value of the vector having the distribution of the copula, we then set these values equal to $H(V)$ and to $R(W)$ and solve for $V$ and $W$. That is, we use the following approach (a code sketch follows the list):
1. Generate $T_1, T_2, T_3$, independent exponential random variables with rates $\lambda_1, \lambda_2, \lambda_3$.
2. Let $X = \min(T_1, T_3)$, $Y = \min(T_2, T_3)$.
3. Set $H(V) = e^{-(\lambda_1 + \lambda_3)X}$, $R(W) = e^{-(\lambda_2 + \lambda_3)Y}$.
4. Solve the preceding to obtain $V, W$.
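A sketch of these four steps in Python (NumPy assumed). The marginals $H$ and $R$ are not specified in the text, so exponential marginals with hypothetical rates serve as placeholders; only their quantile functions are needed:

```python
import numpy as np

rng = np.random.default_rng(6)
lam1, lam2, lam3 = 1.0, 2.0, 3.0               # illustrative rates

# Placeholder marginals for V and W: Exp(0.5) and Exp(1.5). Any continuous
# distributions with invertible H and R would do.
H_inv = lambda u: -np.log(1.0 - u) / 0.5       # quantile function of H
R_inv = lambda u: -np.log(1.0 - u) / 1.5       # quantile function of R

def marshall_olkin_pair(rng=rng):
    # Step 1: independent exponentials with rates lam1, lam2, lam3.
    t1, t2, t3 = (rng.exponential(1 / lam) for lam in (lam1, lam2, lam3))
    # Step 2: X, Y with the Marshall-Olkin dependence structure.
    x, y = min(t1, t3), min(t2, t3)
    # Step 3: (u, v) has the distribution of the copula.
    u = np.exp(-(lam1 + lam3) * x)
    v = np.exp(-(lam2 + lam3) * y)
    # Step 4: solve H(V) = u and R(W) = v for V and W.
    return H_inv(u), R_inv(v)

v, w = marshall_olkin_pair()
```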
Exercises
1. Suppose $Y_1, \ldots, Y_m$ are independent normal random variables with means $E[Y_i] = \mu_i$ and variances $\mathrm{Var}(Y_i) = \sigma_i^2$, $i = 1, \ldots, m$. If
$$\mathrm{Cov}(X_i, X_j) = 0 \quad \text{when } i \ne j$$