P10-Transformation of Random Variables

This document discusses the transformation of random variables. It presents the general principle that if X1, ..., Xn are random variables and Y1, ..., Yn are obtained from them by a one-to-one transformation, then the joint distribution of Y1, ..., Yn can be derived from that of X1, ..., Xn: for discrete variables the joint mass function of the new variables is the original mass function evaluated at the inverse transformation, and for continuous variables the joint density additionally involves the absolute Jacobian determinant of the inverse transformation. The document then works through examples in which the original variables are independent Poisson or Gamma variables, or have a given joint density.


§10 Transformation of random variables

§10.1 General principle


10.1.1 Let $X_1, \ldots, X_n$ be $n$ random variables. For each $i = 1, \ldots, n$, let $y_i = y_i(x_1, \ldots, x_n)$ be a real-valued transformation of $(x_1, \ldots, x_n) \in \operatorname{supp}(X_1, \ldots, X_n)$. Assume that the mapping $(x_1, \ldots, x_n) \mapsto (y_1, \ldots, y_n)$ is one-to-one between the supports of $(X_1, \ldots, X_n)$ and $(Y_1, \ldots, Y_n)$, where $Y_i = y_i(X_1, \ldots, X_n)$, $i = 1, \ldots, n$. Thus, we may map $(Y_1, \ldots, Y_n)$ back to $(X_1, \ldots, X_n)$ by an inverse transformation $X_i = x_i(Y_1, \ldots, Y_n)$, $i = 1, \ldots, n$.
To simplify the presentation, we henceforth use matrix notation:
\[
\mathbf{X} = \begin{bmatrix} X_1 \\ \vdots \\ X_n \end{bmatrix} = \mathbf{x}(\mathbf{Y}) = \begin{bmatrix} x_1(\mathbf{Y}) \\ \vdots \\ x_n(\mathbf{Y}) \end{bmatrix}, \quad
\mathbf{Y} = \begin{bmatrix} Y_1 \\ \vdots \\ Y_n \end{bmatrix} = \mathbf{y}(\mathbf{X}) = \begin{bmatrix} y_1(\mathbf{X}) \\ \vdots \\ y_n(\mathbf{X}) \end{bmatrix}, \quad
\mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \quad
\mathbf{y} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}, \quad \text{etc.}
\]

Given the joint probability function $f_{\mathbf{X}}(\mathbf{x})$ of $\mathbf{X}$, it is of interest to derive the joint probability function $f_{\mathbf{Y}}(\mathbf{y})$ of $\mathbf{Y}$.

10.1.2 For discrete $\mathbf{X}$ with joint mass function $f_{\mathbf{X}}(\mathbf{x})$, the corresponding joint mass function of $\mathbf{Y}$ is
\[
f_{\mathbf{Y}}(\mathbf{y}) = P(\mathbf{Y} = \mathbf{y}) = P\big(\mathbf{x}(\mathbf{Y}) = \mathbf{x}(\mathbf{y})\big) = P\big(\mathbf{X} = \mathbf{x}(\mathbf{y})\big) = f_{\mathbf{X}}\big(\mathbf{x}(\mathbf{y})\big), \qquad \mathbf{y} \in \operatorname{supp}(\mathbf{Y}).
\]
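For instance, with $n = 1$, $X \sim \mathrm{Poisson}(\lambda)$ and $Y = 2X$, the map is one-to-one with inverse $x(y) = y/2$, so
\[
f_Y(y) = f_X(y/2) = \frac{e^{-\lambda}\lambda^{y/2}}{(y/2)!}, \qquad y \in \operatorname{supp}(Y) = \{0, 2, 4, \ldots\}.
\]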

10.1.3 For continuous $\mathbf{X}$ with joint pdf $f_{\mathbf{X}}(\mathbf{x})$, and assuming differentiability of $\mathbf{y}(\cdot)$ [hence also of $\mathbf{x}(\cdot)$], the corresponding joint pdf of $\mathbf{Y}$ is
\[
f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}\big(\mathbf{x}(\mathbf{y})\big)\,\big|\det \mathbf{x}'(\mathbf{y})\big|, \qquad \mathbf{y} \in \operatorname{supp}(\mathbf{Y}),
\]
where
\[
\mathbf{x}'(\mathbf{y}) = \frac{\partial \mathbf{x}}{\partial \mathbf{y}} = \begin{bmatrix} \partial x_1(\mathbf{y})/\partial y_1 & \cdots & \partial x_1(\mathbf{y})/\partial y_n \\ \vdots & \ddots & \vdots \\ \partial x_n(\mathbf{y})/\partial y_1 & \cdots & \partial x_n(\mathbf{y})/\partial y_n \end{bmatrix}
\]
is the Jacobian matrix of the transformation.
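As a numerical sanity check of 10.1.3, the following Python snippet (a minimal sketch, not from the original notes; the function `transformed_pdf` and the test map are illustrative) evaluates $f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}(\mathbf{x}(\mathbf{y}))\,|\det \mathbf{x}'(\mathbf{y})|$ with the Jacobian approximated by central finite differences, and tests it on $Y_1 = X_1 + X_2$, $Y_2 = X_1 - X_2$ for iid $N(0,1)$ variables, in which case $Y_1, Y_2$ are iid $N(0,2)$:

```python
import numpy as np

def transformed_pdf(f_X, x_of_y, y, eps=1e-6):
    """Evaluate f_Y(y) = f_X(x(y)) |det x'(y)| for a smooth one-to-one map.

    f_X    : joint pdf of X, taking a length-n array
    x_of_y : inverse transformation y -> x(y)
    The Jacobian x'(y) is built column-by-column from central differences.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (np.asarray(x_of_y(y + e)) - np.asarray(x_of_y(y - e))) / (2 * eps)
    return f_X(x_of_y(y)) * abs(np.linalg.det(J))

# Check: X1, X2 iid N(0,1); Y1 = X1 + X2, Y2 = X1 - X2,
# so x(y) = ((y1 + y2)/2, (y1 - y2)/2) and Y1, Y2 are iid N(0,2).
f_X = lambda x: np.exp(-np.sum(np.square(x)) / 2) / (2 * np.pi)  # bivariate standard normal pdf
x_of_y = lambda y: np.array([(y[0] + y[1]) / 2, (y[0] - y[1]) / 2])
y = np.array([0.7, -1.2])
print(transformed_pdf(f_X, x_of_y, y))     # ~0.0491
print(np.exp(-(y @ y) / 4) / (4 * np.pi))  # closed form N(0,2)xN(0,2); agrees
```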

§10.2 Transformation of more than one random variable: examples


10.2.1 For independent $X_1, X_2$ with $X_i \sim \mathrm{Poisson}(\lambda_i)$ $(i = 1, 2)$, define $Y_1 = X_1$ and $Y_2 = X_1 + X_2$.

Here
\[
\mathbf{y}(\mathbf{x}) = \begin{bmatrix} x_1 \\ x_1 + x_2 \end{bmatrix} \quad \Rightarrow \quad \mathbf{x}(\mathbf{y}) = \begin{bmatrix} y_1 \\ y_2 - y_1 \end{bmatrix}.
\]
Joint mass function of $\mathbf{Y} = [Y_1, Y_2]^\top$:
\begin{align*}
f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}\big(\mathbf{x}(\mathbf{y})\big) &= f_{X_1}(y_1)\, f_{X_2}(y_2 - y_1) = \frac{e^{-\lambda_1}\lambda_1^{y_1}}{y_1!}\,\frac{e^{-\lambda_2}\lambda_2^{y_2 - y_1}}{(y_2 - y_1)!}\,\mathbf{1}\{y_1,\, y_2 - y_1 = 0, 1, 2, \ldots\} \\
&= \frac{e^{-(\lambda_1+\lambda_2)}\lambda_2^{y_2}}{y_2!}\binom{y_2}{y_1}(\lambda_1/\lambda_2)^{y_1}\,\mathbf{1}\{y_2 = 0, 1, 2, \ldots,\; y_1 = 0, \ldots, y_2\}.
\end{align*}

Conditional mass function of $Y_1$ given $Y_2 = y_2$:
\begin{align*}
f_{Y_1|Y_2}(y_1 \mid y_2) \propto f_{\mathbf{Y}}(\mathbf{y}) &\propto \binom{y_2}{y_1}(\lambda_1/\lambda_2)^{y_1}\,\mathbf{1}\{y_1 = 0, \ldots, y_2\} \\
\Rightarrow \quad f_{Y_1|Y_2}(y_1 \mid y_2) &= \binom{y_2}{y_1}\left(\frac{\lambda_1}{\lambda_1+\lambda_2}\right)^{y_1}\left(\frac{\lambda_2}{\lambda_1+\lambda_2}\right)^{y_2 - y_1}\mathbf{1}\{y_1 = 0, \ldots, y_2\} \\
\Rightarrow \quad Y_1 \mid Y_2 = y_2 &\sim \mathrm{Binomial}\left(y_2,\, \frac{\lambda_1}{\lambda_1+\lambda_2}\right).
\end{align*}

Marginal mass function of $Y_2$:
\[
f_{Y_2}(y_2) = f_{\mathbf{Y}}(\mathbf{y}) / f_{Y_1|Y_2}(y_1 \mid y_2) = \frac{e^{-(\lambda_1+\lambda_2)}(\lambda_1+\lambda_2)^{y_2}}{y_2!}\,\mathbf{1}\{y_2 = 0, 1, 2, \ldots\}
\quad \Rightarrow \quad Y_2 \sim \mathrm{Poisson}(\lambda_1 + \lambda_2).
\]
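A quick Monte Carlo check of these results (a minimal sketch; the parameter values $\lambda_1 = 2$, $\lambda_2 = 3$ are arbitrary choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2, n = 2.0, 3.0, 100_000
x1 = rng.poisson(lam1, n)
x2 = rng.poisson(lam2, n)
y1, y2 = x1, x1 + x2

# Y2 ~ Poisson(lam1 + lam2): sample mean and variance both near 5.0.
print(y2.mean(), y2.var())
# Y1 | Y2 = 5 ~ Binomial(5, lam1/(lam1 + lam2)): conditional mean near 2.0.
print(y1[y2 == 5].mean(), 5 * lam1 / (lam1 + lam2))
```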

10.2.2 Consider independent $X_1, X_2$ with $X_i \sim \mathrm{Gamma}(\alpha_i, \lambda)$ $(i = 1, 2)$, so that
\[
f_{\mathbf{X}}(x_1, x_2) = \prod_{i=1}^{2} \frac{\lambda^{\alpha_i} x_i^{\alpha_i - 1} e^{-\lambda x_i}}{\Gamma(\alpha_i)} = \frac{\lambda^{\alpha_1+\alpha_2}\, x_1^{\alpha_1 - 1} x_2^{\alpha_2 - 1} e^{-\lambda(x_1 + x_2)}}{\Gamma(\alpha_1)\Gamma(\alpha_2)} \qquad (x_1, x_2 > 0).
\]

Define, for a constant $c > 0$, $Y_1 = c(X_1 + X_2)$ and $Y_2 = X_1/(X_1 + X_2)$. Here
\[
\mathbf{y}(\mathbf{x}) = \begin{bmatrix} c(x_1 + x_2) \\ x_1/(x_1 + x_2) \end{bmatrix} \quad \Rightarrow \quad \mathbf{x}(\mathbf{y}) = \begin{bmatrix} y_1 y_2 / c \\ y_1(1 - y_2)/c \end{bmatrix}.
\]

Thus, the Jacobian matrix is
\[
\mathbf{x}'(\mathbf{y}) = \frac{1}{c}\begin{bmatrix} \partial(y_1 y_2)/\partial y_1 & \partial(y_1 y_2)/\partial y_2 \\ \partial\big(y_1(1 - y_2)\big)/\partial y_1 & \partial\big(y_1(1 - y_2)\big)/\partial y_2 \end{bmatrix} = \frac{1}{c}\begin{bmatrix} y_2 & y_1 \\ 1 - y_2 & -y_1 \end{bmatrix}
\quad \Rightarrow \quad \det \mathbf{x}'(\mathbf{y}) = -c^{-2} y_1.
\]

Joint pdf of $(Y_1, Y_2)$ is
\begin{align*}
f_{\mathbf{Y}}(\mathbf{y}) &= f_{\mathbf{X}}\big(\mathbf{x}(\mathbf{y})\big)\,\big|\det \mathbf{x}'(\mathbf{y})\big| \\
&= \frac{\lambda^{\alpha_1+\alpha_2}(y_1 y_2/c)^{\alpha_1 - 1}\big(y_1(1 - y_2)/c\big)^{\alpha_2 - 1} e^{-(\lambda/c)\,y_1}}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\,\frac{y_1}{c^2}\,\mathbf{1}\{y_1 > 0,\; y_2 \in [0,1]\} \\
&= \frac{(\lambda/c)^{\alpha_1+\alpha_2}\, y_1^{\alpha_1+\alpha_2-1} e^{-(\lambda/c)\,y_1}}{\Gamma(\alpha_1+\alpha_2)}\,\mathbf{1}\{y_1 > 0\} \times \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\, y_2^{\alpha_1-1}(1 - y_2)^{\alpha_2-1}\,\mathbf{1}\{y_2 \in [0,1]\}.
\end{align*}
It follows that $Y_1, Y_2$ are independent with
\[
Y_1 \sim \mathrm{Gamma}(\alpha_1 + \alpha_2,\, \lambda/c), \qquad Y_2 \sim \mathrm{Beta}(\alpha_1, \alpha_2).
\]
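Again a simulation sketch (illustrative values of $\alpha_1, \alpha_2, \lambda, c$; note that numpy parametrises the gamma distribution by shape and scale, so rate $\lambda$ corresponds to scale $1/\lambda$):

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2, lam, c, n = 2.0, 3.0, 1.5, 2.0, 100_000
x1 = rng.gamma(a1, 1 / lam, n)  # Gamma(shape a1, rate lam)
x2 = rng.gamma(a2, 1 / lam, n)
y1 = c * (x1 + x2)
y2 = x1 / (x1 + x2)

# Y1 ~ Gamma(a1 + a2, rate lam/c): mean (a1 + a2) * c / lam = 20/3.
print(y1.mean(), (a1 + a2) * c / lam)
# Y2 ~ Beta(a1, a2): mean a1 / (a1 + a2) = 0.4.
print(y2.mean(), a1 / (a1 + a2))
# Independence: sample correlation should be near 0.
print(np.corrcoef(y1, y2)[0, 1])
```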

10.2.3 Given the joint pdf of $\mathbf{X} = (X_1, X_2, X_3)$:
\[
f_{\mathbf{X}}(x_1, x_2, x_3) = 40\, x_1 x_3\, \mathbf{1}\{x_1, x_2, x_3 \ge 0,\; x_1 + x_3 \le 1,\; x_2 + x_3 \le 1\}.
\]
Define $Y_1 = X_1/(1 - X_3)$, $Y_2 = X_2/(1 - X_3)$ and $Y_3 = 1 - X_3$.


Here
\[
\mathbf{y}(\mathbf{x}) = \begin{bmatrix} x_1/(1 - x_3) \\ x_2/(1 - x_3) \\ 1 - x_3 \end{bmatrix} \quad \Rightarrow \quad \mathbf{x}(\mathbf{y}) = \begin{bmatrix} y_1 y_3 \\ y_2 y_3 \\ 1 - y_3 \end{bmatrix}.
\]
Thus, the Jacobian matrix is
\[
\mathbf{x}'(\mathbf{y}) = \begin{bmatrix} \partial(y_1 y_3)/\partial y_1 & \partial(y_1 y_3)/\partial y_2 & \partial(y_1 y_3)/\partial y_3 \\ \partial(y_2 y_3)/\partial y_1 & \partial(y_2 y_3)/\partial y_2 & \partial(y_2 y_3)/\partial y_3 \\ \partial(1 - y_3)/\partial y_1 & \partial(1 - y_3)/\partial y_2 & \partial(1 - y_3)/\partial y_3 \end{bmatrix} = \begin{bmatrix} y_3 & 0 & y_1 \\ 0 & y_3 & y_2 \\ 0 & 0 & -1 \end{bmatrix}
\quad \Rightarrow \quad \det \mathbf{x}'(\mathbf{y}) = -y_3^2.
\]

Joint pdf of $(Y_1, Y_2, Y_3)$ is
\begin{align*}
f_{\mathbf{Y}}(\mathbf{y}) &= f_{\mathbf{X}}\big(\mathbf{x}(\mathbf{y})\big)\,\big|\det \mathbf{x}'(\mathbf{y})\big| \\
&= 40\, y_1 y_3^3 (1 - y_3)\, \mathbf{1}\{y_1, y_2, y_3 \in [0,1]\} \\
&= \big(2 y_1\, \mathbf{1}\{0 \le y_1 \le 1\}\big)\big(\mathbf{1}\{0 \le y_2 \le 1\}\big)\big(20 y_3^3 (1 - y_3)\, \mathbf{1}\{0 \le y_3 \le 1\}\big).
\end{align*}
It follows that $Y_1, Y_2, Y_3$ are independent with marginal pdf's
\[
f_{Y_1}(u) = 2u\, \mathbf{1}\{0 \le u \le 1\}, \qquad f_{Y_2}(u) = \mathbf{1}\{0 \le u \le 1\}, \qquad f_{Y_3}(u) = 20 u^3 (1 - u)\, \mathbf{1}\{0 \le u \le 1\}.
\]
In particular, $Y_2 \sim U[0, 1]$.
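To check this numerically, one can draw from $f_{\mathbf{X}}$ by rejection sampling (on the support, $x_1 + x_3 \le 1$ implies $x_1 x_3 \le 1/4$, so $f_{\mathbf{X}} \le 10$) and compare the sample means of $Y_1, Y_2, Y_3$ with the values $E[Y_1] = 2/3$, $E[Y_2] = 1/2$, $E[Y_3] = 2/3$ implied by the marginal pdf's above. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.uniform(size=(n, 3))  # proposals uniform on the unit cube
u = rng.uniform(size=n)
support = (x[:, 0] + x[:, 2] <= 1) & (x[:, 1] + x[:, 2] <= 1)
# Accept with probability f_X(x) / M, where M = 10 bounds f_X on the support.
accept = support & (u < 40 * x[:, 0] * x[:, 2] / 10)
x1, x2, x3 = x[accept].T

y1, y2, y3 = x1 / (1 - x3), x2 / (1 - x3), 1 - x3
# Expected: E[Y1] = 2/3, E[Y2] = 1/2, E[Y3] = 2/3.
print(y1.mean(), y2.mean(), y3.mean())
```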

§10.3 *** More challenges ***
10.3.1 Let $(X, Y)$ be a random coordinate pair uniformly distributed over the quadrangle with vertices $(0, 0)$, $(a, 0)$, $(a, 1)$ and $(2a, 1)$, where $a > 0$. Find the means and variances of $X$ and $Y$ and their correlation.

10.3.2 Suppose that $U$ has a uniform distribution over $[0, 1]$, $V$ has an exponential distribution of unit rate, and that $U$ and $V$ are independent.

(a) Find the conditional density function and expectation of $U + V$ given that $U = V$.
(b) Find the conditional density function and expectation of $U + V$ given that $U \le V$.

