
Transformations of random variables - Examples

Example 1: Square of normal


Task: Let X be a standard normal random variable, i.e., the PDF of X is given by f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}. Find the PDF of Y = g(X), where g(x) = x^2.

Solution 1: Start with the CDF FY (y) = Pr[Y ≤ y] and insert Y = g(X):

F_Y(y) = \Pr[Y \le y] = \Pr[X^2 \le y] = \Pr[|X| \le \sqrt{y}]
       = \Pr[-\sqrt{y} \le X \le \sqrt{y}] = \Pr[X \le \sqrt{y}] - \Pr[X \le -\sqrt{y}] = F_X(\sqrt{y}) - F_X(-\sqrt{y}),

where we used \Pr[a \le X \le b] = \Pr[X \le b] - \Pr[X \le a] and the fact that \sqrt{X^2} = |X| (and not X). To find
the PDF we simply form the derivative of the CDF with respect to its argument, i.e.,

f_Y(y) = \frac{dF_Y(y)}{dy} = \frac{1}{2\sqrt{y}} f_X(\sqrt{y}) - \frac{-1}{2\sqrt{y}} f_X(-\sqrt{y}) = \frac{1}{2\sqrt{y}} \left( f_X(\sqrt{y}) + f_X(-\sqrt{y}) \right),

where the chain rule was used to evaluate the derivative. Finally, insert the known PDF for a standard normal random variable, i.e., f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}. We obtain

f_Y(y) = \frac{1}{2\sqrt{y}} \left( \frac{1}{\sqrt{2\pi}} e^{-y/2} + \frac{1}{\sqrt{2\pi}} e^{-y/2} \right) = \frac{1}{\sqrt{2\pi y}} e^{-y/2}.

What we should not forget is that while X can be in (−∞, ∞), Y = X 2 obviously only takes values in [0, ∞)
so the full answer is
f_Y(y) = \begin{cases} \frac{1}{\sqrt{2\pi y}} e^{-y/2} & y \ge 0 \\ 0 & \text{otherwise} \end{cases}

Solution 2: Using the algorithm given in class and in the tutorials

1. Find the range of Y by inserting the range of X into g(X). Here: X ∈ (−∞, ∞), therefore Y ∈ [0, ∞). Then we know that f_Y(y) = 0 for all y < 0.
2. Solve y = g(x) for x and find all solutions x_i(y). Here: x = \pm\sqrt{y}, i.e., x_1 = \sqrt{y}, x_2 = -\sqrt{y}.

3. Determine g'(x_i), where g'(x) = dg(x)/dx. Here: g'(x) = 2x, therefore g'(x_1) = 2\sqrt{y}, g'(x_2) = -2\sqrt{y}.
4. Final solution: f_Y(y) = \sum_i \frac{1}{|g'(x_i)|} f_X(x_i). Here:

   f_Y(y) = \frac{1}{|2\sqrt{y}|} f_X(\sqrt{y}) + \frac{1}{|-2\sqrt{y}|} f_X(-\sqrt{y}) = \frac{1}{2\sqrt{y}} \left( \frac{1}{\sqrt{2\pi}} e^{-y/2} + \frac{1}{\sqrt{2\pi}} e^{-y/2} \right) = \frac{1}{\sqrt{2\pi y}} e^{-y/2}

for y ≥ 0 and fY (y) = 0 otherwise.
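Both solutions give the same density. As a sanity check, the sketch below (a Monte Carlo check in Python; the seed, sample size, and grid size are arbitrary choices) compares the empirical probability \Pr[X^2 \le 1] against the midpoint-rule integral of the derived f_Y:

```python
import math
import random

def f_Y(y):
    """Derived PDF of Y = X^2 for standard normal X."""
    return math.exp(-y / 2.0) / math.sqrt(2.0 * math.pi * y) if y > 0 else 0.0

# Empirical probability Pr[Y <= 1] = Pr[X^2 <= 1] from samples of X.
random.seed(0)
n = 200_000
empirical = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) ** 2 <= 1.0) / n

# Midpoint-rule integral of f_Y over (0, 1]; crude near y = 0, but the
# 1/sqrt(y) singularity is integrable, so the sum still converges.
m = 10_000
theoretical = sum(f_Y((k + 0.5) / m) for k in range(m)) / m

assert abs(empirical - theoretical) < 0.02
```

Both numbers should land near 2\Phi(1) - 1 \approx 0.683, the probability that a standard normal falls in [-1, 1].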

Example 2: Max of exponentials

Task: Let X, Y be independent and exponentially distributed random variables with parameters \lambda_X, \lambda_Y, i.e., f_X(x) = \lambda_X e^{-\lambda_X x} for x \ge 0 and similarly f_Y(y) = \lambda_Y e^{-\lambda_Y y} for y \ge 0.
Find the PDF of Z = max{X, Y }.

Solution: It is easiest to start with the CDF F_Z(z) = \Pr[Z \le z] = \Pr[\max(X, Y) \le z]. Obviously, (\max(X, Y) \le z) \Leftrightarrow ((X \le z) \text{ and } (Y \le z)). Since X and Y are independent, this joint event can be factored into the individual events on X and Y and we obtain

F_Z(z) = \Pr[Z \le z] = \Pr[\max(X, Y) \le z] = \Pr[(X \le z), (Y \le z)]
       = \Pr[X \le z] \cdot \Pr[Y \le z]
       = F_X(z) \cdot F_Y(z)

As before, to find the PDF we differentiate. Using the product rule of differentiation we obtain

f_Z(z) = \frac{dF_Z(z)}{dz} = f_X(z) \cdot F_Y(z) + F_X(z) \cdot f_Y(z).
Since we know that X and Y are exponential, we can insert their PDFs and CDFs. The PDF was given above; the CDF evaluates to F_X(x) = \int_0^x f_X(\tau)\, d\tau = 1 - e^{-\lambda_X x} for x \ge 0. Consequently, we obtain for Z:

f_Z(z) = \lambda_X e^{-\lambda_X z} \left( 1 - e^{-\lambda_Y z} \right) + \left( 1 - e^{-\lambda_X z} \right) \lambda_Y e^{-\lambda_Y z}
       = \lambda_X e^{-\lambda_X z} + \lambda_Y e^{-\lambda_Y z} - (\lambda_X + \lambda_Y) e^{-(\lambda_X + \lambda_Y) z}

for z ≥ 0 and fZ (z) = 0 otherwise.
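The derived density can be checked numerically. The sketch below assumes arbitrary example rates \lambda_X = 1.5, \lambda_Y = 0.7 (any positive values work) and compares a Monte Carlo estimate of the CDF with the closed form F_X(z) F_Y(z), then verifies that f_Z integrates back to it:

```python
import math
import random

lam_x, lam_y = 1.5, 0.7  # arbitrary example rates (assumption, not from the text)

def f_Z(z):
    """Derived PDF of Z = max(X, Y) for independent exponentials."""
    if z < 0:
        return 0.0
    return (lam_x * math.exp(-lam_x * z) + lam_y * math.exp(-lam_y * z)
            - (lam_x + lam_y) * math.exp(-(lam_x + lam_y) * z))

def F_Z(z):
    """CDF F_X(z) * F_Y(z), valid because X and Y are independent."""
    return (1.0 - math.exp(-lam_x * z)) * (1.0 - math.exp(-lam_y * z))

# Monte Carlo estimate of Pr[max(X, Y) <= z0] versus the closed form.
random.seed(1)
n = 200_000
z0 = 2.0
hits = sum(1 for _ in range(n)
           if max(random.expovariate(lam_x), random.expovariate(lam_y)) <= z0)
assert abs(hits / n - F_Z(z0)) < 0.01

# f_Z should integrate (midpoint rule) back to F_Z on [0, z0].
m = 2_000
integral = sum(f_Z((k + 0.5) * z0 / m) for k in range(m)) * (z0 / m)
assert abs(integral - F_Z(z0)) < 1e-3
```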

What if they are not independent? In this case we need to integrate the joint PDF over a region that satisfies ((X \le z) \text{ and } (Y \le z)). This region can either be constructed by intersecting the half-planes (X \le z) and (Y \le z), or by factoring the joint event into ((X \le z) \text{ and } (X > Y)) or ((Y \le z) \text{ and } (X \le Y)), where now the two "OR"-ed events are disjoint and hence their probabilities add. Therefore,

F_Z(z) = \Pr[(X \le z), (X > Y)] + \Pr[(Y \le z), (X \le Y)]
       = \int_{-\infty}^{z} \int_{-\infty}^{x} f_{X,Y}(x, y)\, dy\, dx + \int_{-\infty}^{z} \int_{-\infty}^{y} f_{X,Y}(x, y)\, dx\, dy
       = \int_{-\infty}^{z} \int_{-\infty}^{z} f_{X,Y}(x, y)\, dx\, dy

For the special case of independent variables, f_{X,Y}(x, y) factors into f_X(x) \cdot f_Y(y), and hence F_Z(z) becomes F_X(z) \cdot F_Y(z), as mentioned earlier.
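A small numerical illustration of why independence matters (a toy example of my own, not from the text): take Y = X with X uniform on [0, 1], a fully dependent pair. Then \max(X, Y) = X has CDF F_Z(z) = z on [0, 1], while the product formula would wrongly give z^2:

```python
import random

random.seed(3)
n = 100_000
z0 = 0.5

# Fully dependent pair: Y = X, both uniform on [0, 1], so max(X, Y) = X.
hits = sum(1 for _ in range(n) if random.random() <= z0)
empirical_cdf = hits / n

assert abs(empirical_cdf - z0) < 0.01      # true CDF: F_Z(z) = z
assert abs(empirical_cdf - z0 ** 2) > 0.1  # product formula z^2 fails here
```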

Example 3: Auxiliary random variable

Task: Consider X, Y to be two mutually independent, uniformly distributed random variables on the interval [0, 1], i.e., f_{X,Y}(x, y) = 1 for 0 \le x, y \le 1 and 0 otherwise. Show that Z = \sqrt{-2 \ln(X)} \cdot \cos(2\pi Y) is standard normal distributed.

Solution:
This one is easiest solved by introducing an auxiliary random variable. Since Z involves the cosine of Y, it may be a good idea to define one that involves the sine of Y, so that the determinant of the Jacobian becomes simple. A reasonable choice would be W = \sqrt{-2 \ln(X)} \cdot \sin(2\pi Y).
Now we go through the usual steps to find fZ,W (z, w):

1. Determine the range for Z and W . It is not difficult to see that for X, Y ∈ [0, 1] we obtain all values
Z, W ∈ (−∞, +∞).

2. Solve for Z and W. It is easy to see that Z^2 + W^2 = -2 \ln(X) and W/Z = \tan(2\pi Y), eventually leading to the solutions

   X = e^{-\frac{Z^2 + W^2}{2}} \quad \text{and} \quad Y = \frac{1}{2\pi} \tan^{-1}\left( \frac{W}{Z} \right)
3. Compute the Jacobian matrix for g(x, y) = \sqrt{-2 \ln(x)} \cos(2\pi y) and h(x, y) = \sqrt{-2 \ln(x)} \sin(2\pi y). We obtain

   J = \begin{bmatrix} \frac{\partial g(x,y)}{\partial x} & \frac{\partial g(x,y)}{\partial y} \\ \frac{\partial h(x,y)}{\partial x} & \frac{\partial h(x,y)}{\partial y} \end{bmatrix} = \begin{bmatrix} \frac{-\cos(2\pi y)}{x\sqrt{-2\ln(x)}} & -2\pi \sqrt{-2\ln(x)} \sin(2\pi y) \\ \frac{-\sin(2\pi y)}{x\sqrt{-2\ln(x)}} & 2\pi \sqrt{-2\ln(x)} \cos(2\pi y) \end{bmatrix}

   Therefore, the determinant becomes

   \det(J) = -\frac{1}{x} \cdot \frac{\sqrt{-2\ln(x)}}{\sqrt{-2\ln(x)}} \cdot \left( 2\pi \cos^2(2\pi y) + 2\pi \sin^2(2\pi y) \right) = \frac{-2\pi}{x}

   Inserting the solution for X from the previous step, we have

   \det(J) = \frac{-2\pi}{e^{-\frac{z^2 + w^2}{2}}} \quad \Rightarrow \quad \frac{1}{|\det(J)|} = \frac{1}{2\pi} e^{-\frac{z^2 + w^2}{2}}

4. Now we can write down the final result for the joint PDF of Z and W:

   f_{Z,W}(z, w) = \sum_i \frac{1}{|\det(J(x_i, y_i))|} f_{X,Y}(x_i, y_i) = \frac{1}{2\pi} e^{-\frac{z^2 + w^2}{2}}.

From fZ,W (z, w) we observe that Z and W are jointly Gaussian and mutually independent. Consequently,
if we marginalize over W we find fZ (z) to be Gaussian as well. Mathematically,
f_Z(z) = \int_{-\infty}^{\infty} f_{Z,W}(z, w)\, dw = \int_{-\infty}^{\infty} \frac{1}{2\pi} e^{-\frac{z^2 + w^2}{2}}\, dw = \frac{1}{2\pi} e^{-\frac{z^2}{2}} \underbrace{\int_{-\infty}^{\infty} e^{-\frac{w^2}{2}}\, dw}_{= \sqrt{2\pi}} = \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}},
which is the desired result.
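This transformation is the well-known Box–Muller construction. A quick simulation sketch (seed and sample size are arbitrary choices) checks that Z has the mean, variance, and tail probability of a standard normal:

```python
import math
import random
import statistics

random.seed(4)
n = 100_000

def box_muller_z():
    """Z = sqrt(-2 ln X) * cos(2 pi Y) for independent uniforms X, Y."""
    x = random.random() or 1e-300  # guard against the (measure-zero) case X = 0
    y = random.random()
    return math.sqrt(-2.0 * math.log(x)) * math.cos(2.0 * math.pi * y)

z = [box_muller_z() for _ in range(n)]

# Standard normal: mean 0, variance 1, Pr[Z <= 1] = Phi(1).
phi_1 = 0.5 * (1.0 + math.erf(1.0 / math.sqrt(2.0)))
assert abs(statistics.fmean(z)) < 0.02
assert abs(statistics.pvariance(z) - 1.0) < 0.03
assert abs(sum(1 for v in z if v <= 1.0) / n - phi_1) < 0.01
```

The auxiliary variable W from the derivation is the second output of Box–Muller; replacing cos with sin in the same function yields an independent standard normal sample.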
