Bivariate Normal Distribution

The document discusses the bivariate normal distribution and conditional distributions. It defines a general bivariate normal distribution with arbitrary parameters µX, µY, σX, σY, ρ, shows that the marginal distributions of X and Y are both normal, and derives Cov(X, Y) = ρσXσY, so that the correlation of X and Y equals ρ. It also describes how to generate bivariate (and multivariate) normal random variables, states the multivariate change of variables theorem, and derives the conditional mean and variance.


Lecture 22: Bivariate Normal Distribution
Statistics 104
Colin Rundel
April 11, 2012

6.5 Conditional Distributions

General Bivariate Normal

Let Z_1, Z_2 ∼ N(0, 1) be independent standard normals, which we will use to build a general bivariate normal distribution. Their joint density is

    f(z_1, z_2) = \frac{1}{2\pi} \exp\left[ -\frac{1}{2}(z_1^2 + z_2^2) \right]

We want to transform these unit normals to have the following arbitrary parameters: µX, µY, σX, σY, ρ. To do so we set

    X = \sigma_X Z_1 + \mu_X
    Y = \sigma_Y \left[ \rho Z_1 + \sqrt{1 - \rho^2}\, Z_2 \right] + \mu_Y

General Bivariate Normal - Marginals

First, let's examine the marginal distributions of X and Y:

    X = \sigma_X Z_1 + \mu_X
      = \sigma_X N(0, 1) + \mu_X
      = N(\mu_X, \sigma_X^2)

    Y = \sigma_Y [\rho Z_1 + \sqrt{1 - \rho^2}\, Z_2] + \mu_Y
      = \sigma_Y [\rho N(0, 1) + \sqrt{1 - \rho^2}\, N(0, 1)] + \mu_Y
      = \sigma_Y [N(0, \rho^2) + N(0, 1 - \rho^2)] + \mu_Y
      = \sigma_Y N(0, 1) + \mu_Y
      = N(\mu_Y, \sigma_Y^2)

General Bivariate Normal - Cov/Corr

Second, we can find Cov(X, Y) and ρ(X, Y):

    Cov(X, Y) = E[(X - E(X))(Y - E(Y))]
              = E\left[ (\sigma_X Z_1 + \mu_X - \mu_X)\,\left(\sigma_Y [\rho Z_1 + \sqrt{1 - \rho^2}\, Z_2] + \mu_Y - \mu_Y\right) \right]
              = E\left[ (\sigma_X Z_1)\,\left(\sigma_Y [\rho Z_1 + \sqrt{1 - \rho^2}\, Z_2]\right) \right]
              = \sigma_X \sigma_Y\, E\left[ \rho Z_1^2 + \sqrt{1 - \rho^2}\, Z_1 Z_2 \right]
              = \sigma_X \sigma_Y \rho\, E[Z_1^2]
              = \sigma_X \sigma_Y \rho

    \rho(X, Y) = \frac{Cov(X, Y)}{\sigma_X \sigma_Y} = \rho
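These identities are easy to sanity-check by simulation. The sketch below is not from the slides and the parameter values are arbitrary; it draws independent unit normals, applies the transformation from the previous slide, and compares the empirical moments to µX, σX, µY, σY, and ρ.

```python
import numpy as np

# Arbitrary illustrative parameters (not from the slides)
mu_X, mu_Y = 1.0, -2.0
sigma_X, sigma_Y = 2.0, 0.5
rho = 0.75

rng = np.random.default_rng(0)
n = 1_000_000

# Independent standard normals
Z1 = rng.standard_normal(n)
Z2 = rng.standard_normal(n)

# The transformation from the slides
X = sigma_X * Z1 + mu_X
Y = sigma_Y * (rho * Z1 + np.sqrt(1 - rho**2) * Z2) + mu_Y

print(X.mean(), X.std())        # ~ mu_X, sigma_X
print(Y.mean(), Y.std())        # ~ mu_Y, sigma_Y
print(np.corrcoef(X, Y)[0, 1])  # ~ rho
```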
General Bivariate Normal - RNG

Consequently, if we want to generate a bivariate normal random variable with X ∼ N(µX, σX²) and Y ∼ N(µY, σY²) where the correlation of X and Y is ρ, we can generate two independent unit normals Z_1 and Z_2 and use the transformation:

    X = \sigma_X Z_1 + \mu_X
    Y = \sigma_Y \left[ \rho Z_1 + \sqrt{1 - \rho^2}\, Z_2 \right] + \mu_Y

We can also use this result to find the joint density of the bivariate normal using a 2d change of variables.

Multivariate Change of Variables

Let X_1, ..., X_n have a continuous joint distribution with pdf f defined on S. We can define n new random variables Y_1, ..., Y_n as follows:

    Y_1 = r_1(X_1, ..., X_n)   ···   Y_n = r_n(X_1, ..., X_n)

If we assume that the n functions r_1, ..., r_n define a one-to-one differentiable transformation from S to T, then let the inverse of this transformation be

    x_1 = s_1(y_1, ..., y_n)   ···   x_n = s_n(y_1, ..., y_n)

Then the joint pdf g of Y_1, ..., Y_n is

    g(y_1, ..., y_n) = f(s_1, ..., s_n)\,|J|   for (y_1, ..., y_n) ∈ T,   and 0 otherwise,

where J is the determinant of the Jacobian matrix of the inverse transformation,

    J = \det \begin{bmatrix} \partial s_1 / \partial y_1 & \cdots & \partial s_1 / \partial y_n \\ \vdots & \ddots & \vdots \\ \partial s_n / \partial y_1 & \cdots & \partial s_n / \partial y_n \end{bmatrix}
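To connect this theorem back to the bivariate normal transformation, the following SymPy sketch (not from the slides) builds the inverse transformation s_1, s_2 symbolically and computes the Jacobian determinant; the result anticipates what is derived on the next slide.

```python
import sympy as sp

x, y = sp.symbols('x y')
mu_X, mu_Y, sig_X, sig_Y = sp.symbols('mu_X mu_Y sigma_X sigma_Y', positive=True)
rho = sp.symbols('rho')

# Inverse transformation: z1 = s1(x, y), z2 = s2(x, y)
s1 = (x - mu_X) / sig_X
s2 = ((y - mu_Y) / sig_Y - rho * (x - mu_X) / sig_X) / sp.sqrt(1 - rho**2)

# Jacobian determinant of the inverse transformation
J = sp.Matrix([s1, s2]).jacobian([x, y]).det()
print(sp.simplify(J))  # should simplify to 1/(sigma_X*sigma_Y*sqrt(1 - rho**2))
```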

General Bivariate Normal - Density

The first thing we need to find are the inverses of the transformation. If x = r_1(z_1, z_2) and y = r_2(z_1, z_2), we need to find functions s_1 and s_2 such that Z_1 = s_1(X, Y) and Z_2 = s_2(X, Y).

    X = \sigma_X Z_1 + \mu_X
    \;\Rightarrow\; Z_1 = \frac{X - \mu_X}{\sigma_X}

    Y = \sigma_Y [\rho Z_1 + \sqrt{1 - \rho^2}\, Z_2] + \mu_Y
    \;\Rightarrow\; \frac{Y - \mu_Y}{\sigma_Y} = \rho \frac{X - \mu_X}{\sigma_X} + \sqrt{1 - \rho^2}\, Z_2
    \;\Rightarrow\; Z_2 = \frac{1}{\sqrt{1 - \rho^2}} \left( \frac{Y - \mu_Y}{\sigma_Y} - \rho \frac{X - \mu_X}{\sigma_X} \right)

Therefore,

    s_1(x, y) = \frac{x - \mu_X}{\sigma_X}
    \qquad
    s_2(x, y) = \frac{1}{\sqrt{1 - \rho^2}} \left( \frac{y - \mu_Y}{\sigma_Y} - \rho \frac{x - \mu_X}{\sigma_X} \right)

Next we calculate the Jacobian,

    J = \det \begin{bmatrix} \partial s_1 / \partial x & \partial s_1 / \partial y \\ \partial s_2 / \partial x & \partial s_2 / \partial y \end{bmatrix}
      = \det \begin{bmatrix} \frac{1}{\sigma_X} & 0 \\ \frac{-\rho}{\sigma_X \sqrt{1 - \rho^2}} & \frac{1}{\sigma_Y \sqrt{1 - \rho^2}} \end{bmatrix}
      = \frac{1}{\sigma_X \sigma_Y \sqrt{1 - \rho^2}}

The joint density of X and Y is then given by

    f(x, y) = f(z_1, z_2)\,|J|
            = \frac{1}{2\pi} \exp\left[ -\frac{1}{2}(z_1^2 + z_2^2) \right] |J|
            = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1 - \rho^2}} \exp\left[ -\frac{1}{2} \left( \left(\frac{x - \mu_X}{\sigma_X}\right)^2 + \frac{1}{1 - \rho^2} \left( \frac{y - \mu_Y}{\sigma_Y} - \rho \frac{x - \mu_X}{\sigma_X} \right)^2 \right) \right]
            = \frac{1}{2\pi \sigma_X \sigma_Y (1 - \rho^2)^{1/2}} \exp\left[ -\frac{1}{2(1 - \rho^2)} \left( \frac{(x - \mu_X)^2}{\sigma_X^2} + \frac{(y - \mu_Y)^2}{\sigma_Y^2} - 2\rho \frac{(x - \mu_X)(y - \mu_Y)}{\sigma_X \sigma_Y} \right) \right]
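As a quick numerical check (not from the slides; the parameters and evaluation point are arbitrary), the closed form above can be compared against scipy.stats.multivariate_normal:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary illustrative parameters (not from the slides)
mu_X, mu_Y, sigma_X, sigma_Y, rho = 1.0, -2.0, 2.0, 0.5, 0.75

def bvn_pdf(x, y):
    """Bivariate normal density via the closed form derived above."""
    zx = (x - mu_X) / sigma_X
    zy = (y - mu_Y) / sigma_Y
    norm_const = 2 * np.pi * sigma_X * sigma_Y * np.sqrt(1 - rho**2)
    quad = (zx**2 - 2 * rho * zx * zy + zy**2) / (1 - rho**2)
    return np.exp(-quad / 2) / norm_const

cov = [[sigma_X**2, rho * sigma_X * sigma_Y],
       [rho * sigma_X * sigma_Y, sigma_Y**2]]
ref = multivariate_normal(mean=[mu_X, mu_Y], cov=cov)

x, y = 0.3, -1.1
print(bvn_pdf(x, y), ref.pdf([x, y]))  # the two values should agree
```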
General Bivariate Normal - Density (Matrix Notation)

Obviously, the density for the bivariate normal is ugly, and it only gets worse when we consider higher dimensional joint densities of normals. We can write the density in a more compact form using matrix notation,

    \mathbf{x} = \begin{bmatrix} x \\ y \end{bmatrix}
    \qquad
    \boldsymbol{\mu} = \begin{bmatrix} \mu_X \\ \mu_Y \end{bmatrix}
    \qquad
    \Sigma = \begin{bmatrix} \sigma_X^2 & \rho \sigma_X \sigma_Y \\ \rho \sigma_X \sigma_Y & \sigma_Y^2 \end{bmatrix}

    f(\mathbf{x}) = \frac{1}{2\pi} (\det \Sigma)^{-1/2} \exp\left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right]

We can confirm our results by checking the value of (det Σ)^{-1/2} and (x − µ)^T Σ^{-1} (x − µ) for the bivariate case.

    (\det \Sigma)^{-1/2} = \left( \sigma_X^2 \sigma_Y^2 - \rho^2 \sigma_X^2 \sigma_Y^2 \right)^{-1/2} = \frac{1}{\sigma_X \sigma_Y (1 - \rho^2)^{1/2}}

Recall for a 2 × 2 matrix,

    A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}
    \qquad
    A^{-1} = \frac{1}{\det A} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}

Then,

    (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})
      = \frac{1}{\sigma_X^2 \sigma_Y^2 (1 - \rho^2)} \begin{bmatrix} x - \mu_X \\ y - \mu_Y \end{bmatrix}^T \begin{bmatrix} \sigma_Y^2 & -\rho \sigma_X \sigma_Y \\ -\rho \sigma_X \sigma_Y & \sigma_X^2 \end{bmatrix} \begin{bmatrix} x - \mu_X \\ y - \mu_Y \end{bmatrix}
      = \frac{1}{\sigma_X^2 \sigma_Y^2 (1 - \rho^2)} \begin{bmatrix} \sigma_Y^2 (x - \mu_X) - \rho \sigma_X \sigma_Y (y - \mu_Y) \\ -\rho \sigma_X \sigma_Y (x - \mu_X) + \sigma_X^2 (y - \mu_Y) \end{bmatrix}^T \begin{bmatrix} x - \mu_X \\ y - \mu_Y \end{bmatrix}
      = \frac{1}{\sigma_X^2 \sigma_Y^2 (1 - \rho^2)} \left[ \sigma_Y^2 (x - \mu_X)^2 - 2\rho \sigma_X \sigma_Y (x - \mu_X)(y - \mu_Y) + \sigma_X^2 (y - \mu_Y)^2 \right]
      = \frac{1}{1 - \rho^2} \left( \frac{(x - \mu_X)^2}{\sigma_X^2} - 2\rho \frac{(x - \mu_X)(y - \mu_Y)}{\sigma_X \sigma_Y} + \frac{(y - \mu_Y)^2}{\sigma_Y^2} \right)
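The matrix identities above can also be checked numerically. This sketch (not from the slides; parameter values and the evaluation point are arbitrary) compares the matrix quadratic form and normalizing constant with the expanded scalar forms:

```python
import numpy as np

# Arbitrary illustrative parameters (not from the slides)
mu_X, mu_Y, sigma_X, sigma_Y, rho = 1.0, -2.0, 2.0, 0.5, 0.75

mu = np.array([mu_X, mu_Y])
Sigma = np.array([[sigma_X**2, rho * sigma_X * sigma_Y],
                  [rho * sigma_X * sigma_Y, sigma_Y**2]])

v = np.array([0.3, -1.1]) - mu  # x - mu for an arbitrary point

# Matrix form of the quadratic form and normalizing term
quad_matrix = v @ np.linalg.solve(Sigma, v)
det_term = np.linalg.det(Sigma) ** -0.5

# Expanded scalar forms from the slide
dx, dy = v
quad_scalar = ((dx / sigma_X)**2 - 2 * rho * dx * dy / (sigma_X * sigma_Y)
               + (dy / sigma_Y)**2) / (1 - rho**2)
det_scalar = 1 / (sigma_X * sigma_Y * np.sqrt(1 - rho**2))

print(np.isclose(quad_matrix, quad_scalar), np.isclose(det_term, det_scalar))
```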

General Bivariate Normal - Examples

(Density plots for the following parameter settings; the figures themselves are not reproduced here.)

    X ∼ N(0, 1), Y ∼ N(0, 1), ρ = 0         X ∼ N(0, 2), Y ∼ N(0, 1), ρ = 0         X ∼ N(0, 1), Y ∼ N(0, 2), ρ = 0
    X ∼ N(0, 1), Y ∼ N(0, 1), ρ = 0.25      X ∼ N(0, 1), Y ∼ N(0, 1), ρ = 0.5       X ∼ N(0, 1), Y ∼ N(0, 1), ρ = 0.75
    X ∼ N(0, 1), Y ∼ N(0, 1), ρ = −0.25     X ∼ N(0, 1), Y ∼ N(0, 1), ρ = −0.5      X ∼ N(0, 1), Y ∼ N(0, 1), ρ = −0.75
    X ∼ N(0, 1), Y ∼ N(0, 1), ρ = −0.75     X ∼ N(0, 2), Y ∼ N(0, 1), ρ = −0.75     X ∼ N(0, 1), Y ∼ N(0, 2), ρ = −0.75

Multivariate Normal Distribution

Matrix notation allows us to easily express the density of the multivariate normal distribution for an arbitrary number of dimensions. We express the k-dimensional multivariate normal distribution as follows,

    \mathbf{X} \sim N_k(\boldsymbol{\mu}, \Sigma)

where µ is the k × 1 column vector of means and Σ is the k × k covariance matrix with {Σ}_{i,j} = Cov(X_i, X_j).

The density of the distribution is

    f(\mathbf{x}) = \frac{1}{(2\pi)^{k/2}} (\det \Sigma)^{-1/2} \exp\left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right]

Multivariate Normal Distribution - Cholesky

In the bivariate case, we had a nice transformation such that we could generate two independent unit normal values and transform them into a sample from an arbitrary bivariate normal distribution.

There is a similar method for the multivariate normal distribution that takes advantage of the Cholesky decomposition of the covariance matrix.

The Cholesky decomposition is defined for a symmetric, positive definite matrix X as

    L = \text{Chol}(X)

where L is a lower triangular matrix such that L L^T = X.
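A brief illustration with NumPy (np.linalg.cholesky returns the lower-triangular factor); the matrix below is an arbitrary positive definite example, not one from the slides:

```python
import numpy as np

# An arbitrary symmetric, positive definite covariance matrix (illustrative only)
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.8],
                  [0.5, 0.8, 2.0]])

L = np.linalg.cholesky(Sigma)       # lower-triangular factor
print(np.allclose(L @ L.T, Sigma))  # True: L L^T reproduces Sigma
print(np.allclose(L, np.tril(L)))   # True: L is lower triangular
```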
Multivariate Normal Distribution - RNG

Let Z_1, ..., Z_k ∼ N(0, 1) be independent and Z = (Z_1, ..., Z_k)^T; then

    \boldsymbol{\mu} + \text{Chol}(\Sigma)\, \mathbf{Z} \sim N_k(\boldsymbol{\mu}, \Sigma)

This is offered without proof in the general k-dimensional case, but we can check that it results in the same transformation we started with in the bivariate case, which should justify how we knew to use that particular transformation.

Cholesky and the Bivariate Transformation

We need to find the Cholesky decomposition of Σ for the general bivariate case where

    \Sigma = \begin{bmatrix} \sigma_X^2 & \rho \sigma_X \sigma_Y \\ \rho \sigma_X \sigma_Y & \sigma_Y^2 \end{bmatrix}

That is, we need to solve the following for a, b, c:

    \begin{bmatrix} a & 0 \\ b & c \end{bmatrix} \begin{bmatrix} a & b \\ 0 & c \end{bmatrix}
    = \begin{bmatrix} a^2 & ab \\ ab & b^2 + c^2 \end{bmatrix}
    = \begin{bmatrix} \sigma_X^2 & \rho \sigma_X \sigma_Y \\ \rho \sigma_X \sigma_Y & \sigma_Y^2 \end{bmatrix}

This gives us three (unique) equations in three unknowns,

    a^2 = \sigma_X^2 \qquad ab = \rho \sigma_X \sigma_Y \qquad b^2 + c^2 = \sigma_Y^2

so that

    a = \sigma_X
    b = \rho \sigma_X \sigma_Y / a = \rho \sigma_Y
    c = \sqrt{\sigma_Y^2 - b^2} = \sigma_Y (1 - \rho^2)^{1/2}
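Putting the two previous slides together, here is a minimal NumPy sketch (not from the slides; parameter values are arbitrary) of sampling via µ + Chol(Σ)Z and checking both the closed-form factor and the empirical moments:

```python
import numpy as np

# Illustrative parameters for the bivariate case (not from the slides)
mu = np.array([1.0, -2.0])
sigma_X, sigma_Y, rho = 2.0, 0.5, 0.75
Sigma = np.array([[sigma_X**2, rho * sigma_X * sigma_Y],
                  [rho * sigma_X * sigma_Y, sigma_Y**2]])

# The Cholesky factor should match the slide's closed form:
# [[sigma_X, 0], [rho*sigma_Y, sigma_Y*sqrt(1 - rho**2)]]
L = np.linalg.cholesky(Sigma)
print(L)

# Sampling: X = mu + L Z, with Z a vector of independent standard normals
rng = np.random.default_rng(1)
Z = rng.standard_normal((2, 100_000))
X = mu[:, None] + L @ Z

print(X.mean(axis=1))  # ~ mu
print(np.cov(X))       # ~ Sigma
```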

Cholesky and the Bivariate Transformation (cont.)

Let Z_1, Z_2 ∼ N(0, 1); then

    \begin{bmatrix} X \\ Y \end{bmatrix}
    = \boldsymbol{\mu} + \text{Chol}(\Sigma)\, \mathbf{Z}
    = \begin{bmatrix} \mu_X \\ \mu_Y \end{bmatrix} + \begin{bmatrix} \sigma_X & 0 \\ \rho \sigma_Y & \sigma_Y (1 - \rho^2)^{1/2} \end{bmatrix} \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix}
    = \begin{bmatrix} \mu_X \\ \mu_Y \end{bmatrix} + \begin{bmatrix} \sigma_X Z_1 \\ \rho \sigma_Y Z_1 + \sigma_Y (1 - \rho^2)^{1/2} Z_2 \end{bmatrix}

so that

    X = \mu_X + \sigma_X Z_1
    Y = \mu_Y + \sigma_Y [\rho Z_1 + (1 - \rho^2)^{1/2} Z_2]

Conditional Expectation of the Bivariate Normal

Using X = µX + σX Z_1 and Y = µY + σY[ρ Z_1 + (1 − ρ²)^{1/2} Z_2] where Z_1, Z_2 ∼ N(0, 1), we can find E(Y|X).

    E[Y \mid X = x] = E\left[ \mu_Y + \sigma_Y \left( \rho Z_1 + (1 - \rho^2)^{1/2} Z_2 \right) \,\middle|\, X = x \right]
                    = E\left[ \mu_Y + \sigma_Y \left( \rho \frac{x - \mu_X}{\sigma_X} + (1 - \rho^2)^{1/2} Z_2 \right) \,\middle|\, X = x \right]
                    = \mu_Y + \sigma_Y \left( \rho \frac{x - \mu_X}{\sigma_X} + (1 - \rho^2)^{1/2}\, E[Z_2 \mid X = x] \right)
                    = \mu_Y + \sigma_Y \rho \frac{x - \mu_X}{\sigma_X}

where the last step uses the fact that Z_2 is independent of X, so E[Z_2 | X = x] = E[Z_2] = 0.

By symmetry,

    E[X \mid Y = y] = \mu_X + \sigma_X \rho \frac{y - \mu_Y}{\sigma_Y}
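A crude Monte Carlo check of the conditional-mean formula (not from the slides; parameters and the conditioning point are arbitrary), conditioning on X falling in a narrow band around x0:

```python
import numpy as np

# Illustrative parameters (not from the slides)
mu_X, mu_Y, sigma_X, sigma_Y, rho = 1.0, -2.0, 2.0, 0.5, 0.75

rng = np.random.default_rng(2)
n = 2_000_000
Z1, Z2 = rng.standard_normal((2, n))
X = mu_X + sigma_X * Z1
Y = mu_Y + sigma_Y * (rho * Z1 + np.sqrt(1 - rho**2) * Z2)

x0 = 2.0
near = np.abs(X - x0) < 0.05  # crude conditioning on X ~ x0
print(Y[near].mean())                                 # empirical E[Y | X ~ x0]
print(mu_Y + sigma_Y * rho * (x0 - mu_X) / sigma_X)   # formula from the slide
```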
Conditional Variance of the Bivariate Normal

Using X = µX + σX Z_1 and Y = µY + σY[ρ Z_1 + (1 − ρ²)^{1/2} Z_2] where Z_1, Z_2 ∼ N(0, 1), we can find Var(Y|X).

    Var[Y \mid X = x] = Var\left[ \mu_Y + \sigma_Y \left( \rho Z_1 + (1 - \rho^2)^{1/2} Z_2 \right) \,\middle|\, X = x \right]
                      = Var\left[ \mu_Y + \sigma_Y \left( \rho \frac{x - \mu_X}{\sigma_X} + (1 - \rho^2)^{1/2} Z_2 \right) \,\middle|\, X = x \right]
                      = Var\left[ \sigma_Y (1 - \rho^2)^{1/2} Z_2 \,\middle|\, X = x \right]
                      = \sigma_Y^2 (1 - \rho^2)

By symmetry,

    Var[X \mid Y = y] = \sigma_X^2 (1 - \rho^2)

Example - Husbands and Wives (Example 5.10.6, deGroot)

Suppose that the heights of married couples can be explained by a bivariate normal distribution. The wives have a mean height of 66.8 inches and a standard deviation of 2 inches, while the husbands have a mean height of 70 inches and a standard deviation of 2 inches. The correlation between the heights is 0.68. What is the probability that, for a randomly selected couple, the wife is taller than her husband?
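The worked solution is not shown on this slide. One way to do it, using only facts stated above: since (W, H) is bivariate normal, the difference D = W − H is normal with mean 66.8 − 70 = −3.2 and variance 2² + 2² − 2(0.68)(2)(2) = 2.56, so P(W > H) = P(D > 0) = 1 − Φ(3.2/1.6) = 1 − Φ(2) ≈ 0.0228. The same computation in Python:

```python
from scipy.stats import norm

# Wife (W) and husband (H) heights, bivariate normal (values from the slide)
mu_W, mu_H = 66.8, 70.0
sd_W, sd_H = 2.0, 2.0
rho = 0.68

# D = W - H is normal (a linear combination of jointly normal variables)
mu_D = mu_W - mu_H                                 # -3.2
var_D = sd_W**2 + sd_H**2 - 2 * rho * sd_W * sd_H  # 2.56
print(norm.sf(0, loc=mu_D, scale=var_D**0.5))      # P(W > H) ~ 0.0228
```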

Example - Conditionals

Suppose that X_1 and X_2 have a bivariate normal distribution where E(X_1 | X_2) = 3.7 − 0.15 X_2, E(X_2 | X_1) = 0.4 − 0.6 X_1, and Var(X_2 | X_1) = 3.64.

Find E(X_1), Var(X_1), E(X_2), Var(X_2), and ρ(X_1, X_2).

Example - Conditionals, cont.
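The worked solution on the continuation slide did not survive the text extraction; the following is a sketch of one way to solve it (not the original slide content). For a bivariate normal, E(X_1 | X_2) = µ_1 + ρ(σ_1/σ_2)(X_2 − µ_2) and E(X_2 | X_1) = µ_2 + ρ(σ_2/σ_1)(X_1 − µ_1), so the product of the two slopes is ρ²: (−0.15)(−0.6) = 0.09, giving ρ = −0.3 (negative because both slopes are negative). From Var(X_2 | X_1) = σ_2²(1 − ρ²) = 3.64 we get σ_2² = 3.64/0.91 = 4. From ρ σ_1/σ_2 = −0.15 we get σ_1 = (−0.15/−0.3)(2) = 1, so σ_1² = 1. The intercepts give µ_1 = 3.7 − 0.15 µ_2 and µ_2 = 0.4 − 0.6 µ_1; solving simultaneously yields µ_2 = −2 and µ_1 = 4. Hence E(X_1) = 4, Var(X_1) = 1, E(X_2) = −2, Var(X_2) = 4, and ρ(X_1, X_2) = −0.3.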
