
10 Transformation of Random Variables

In probability and statistics, we frequently encounter situations where we start
with a known random variable X having a certain probability distribution, and
then define a new random variable Y by transforming X through some function
g. For example, we may have Y = √X or Y = e^X. The task is then to determine
the probability distribution of Y based on what we know about the distribution
of X.
Why do we need transformations?
In data analysis and theoretical work, we often transform variables to simplify
models, to study new quantities derived from basic measurements, or to work
more conveniently with certain functions. For instance:
If X is exponentially distributed and we set Y = ln(X), the transformed
variable might have a simpler form or a more familiar distribution.
If we know the distribution of a uniform random variable U and we set Y = √U,
we may be able to deduce that Y follows a particular known distribution.
When dealing with transformations, the general question is: if we know how X
is distributed, can we derive how Y = g(X) is distributed?

10.1 Transformations in the Discrete Case


Let X be a discrete random variable that takes values in some countable set
(e.g., the integers), with probability mass function (PMF) given by:
P(X = x) = p_X(x).
Suppose we define Y = g(X) for some function g. If g is not one-to-one,
different values of X might map to the same value of Y. To find the PMF of
Y, we sum over all x that map to a given y:

p_Y(y) = P(Y = y) = Σ_{x : g(x) = y} p_X(x).

Example 10.1 (Discrete). Let X be a random variable that takes values
{1, 2, 3, 4, 5, 6}, each with probability 1/6. Define Y = X². The possible
values of Y are {1, 4, 9, 16, 25, 36} and:
p_Y(1) = p_X(1) = 1/6
p_Y(4) = p_X(2) = 1/6
p_Y(9) = p_X(3) = 1/6
p_Y(16) = p_X(4) = 1/6
p_Y(25) = p_X(5) = 1/6
p_Y(36) = p_X(6) = 1/6
In this example, the function is one-to-one on the support of X, so the PMF is
straightforward. If the transformation were Y = X mod 2, then multiple values
of X would map to each value of Y, and we would sum accordingly.
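To make the summation rule concrete, here is a small Python sketch (an illustration added here, not part of the original notes) that tabulates the PMF of a transformed discrete variable by summing p_X(x) over each preimage; the helper name pmf_of_transformed is my own.

from collections import defaultdict
from fractions import Fraction

# PMF of X: a fair six-sided die, p_X(x) = 1/6 for x = 1, ..., 6.
p_X = {x: Fraction(1, 6) for x in range(1, 7)}

def pmf_of_transformed(p_X, g):
    # PMF of Y = g(X): sum p_X(x) over all x with g(x) = y.
    p_Y = defaultdict(Fraction)
    for x, prob in p_X.items():
        p_Y[g(x)] += prob
    return dict(p_Y)

print(pmf_of_transformed(p_X, lambda x: x ** 2))  # one-to-one: each value keeps probability 1/6
print(pmf_of_transformed(p_X, lambda x: x % 2))   # not one-to-one: {1, 3, 5} -> 1 and {2, 4, 6} -> 0, each 1/2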

10.2 Transformations in the Continuous Case


Let the random variable X be continuous with probability density function
(PDF) f_X(x) and cumulative distribution function (CDF) F_X(x). To find the
probability distribution of a transformation of X, we mainly proceed by one of
the two approaches described below:
1. CDF Method
2. Change of Variables Formula (PDF Method)

10.2.1 Using the CDF


For a continuous random variable Y = g(X), the CDF of Y is defined as:

F_Y(y) = P(Y ≤ y) = P(g(X) ≤ y).

If g is a strictly increasing function, then:

P(g(X) ≤ y) = P(X ≤ g^{-1}(y)) = F_X(g^{-1}(y)).

Thus, for a strictly increasing g:

F_Y(y) = F_X(g^{-1}(y)).

If g is strictly decreasing, we must be careful with the inequality, and we get:

F_Y(y) = P(g(X) ≤ y) = P(X ≥ g^{-1}(y)) = 1 − F_X(g^{-1}(y)).

When g is monotone, the procedure is straightforward. When it is not, the process
involves splitting the range of X into parts and carefully accounting for all
intervals where g(X) ≤ y.
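As a quick numerical check of the decreasing case (an illustration added here, not from the original notes), the following Python sketch applies the strictly decreasing map g(x) = e^{−x} to X ∼ Exponential(1) and compares the empirical CDF of Y with 1 − F_X(g^{-1}(y)); the seed and sample size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)    # X ~ Exponential(1), so F_X(x) = 1 - exp(-x)
y = np.exp(-x)                                  # g(x) = exp(-x) is strictly decreasing

for q in [0.1, 0.5, 0.9]:
    empirical = np.mean(y <= q)                 # Monte Carlo estimate of P(Y <= q)
    g_inv = -np.log(q)                          # g^{-1}(q)
    theoretical = 1.0 - (1.0 - np.exp(-g_inv))  # 1 - F_X(g^{-1}(q)); here this simplifies to q
    print(q, round(empirical, 3), round(theoretical, 3))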

10.3 Using the PDF (Change of Variables)


When dealing directly with PDFs, suppose Y = g(X) and assume g is differentiable
and strictly monotone, so that it has a well-defined inverse g^{-1}. The PDF
of Y can be found using the change of variables formula:

f_Y(y) = f_X(g^{-1}(y)) |d g^{-1}(y)/dy|.

This comes from the rule of transformations: if X has PDF f_X, then for a
strictly monotone g,

f_Y(y) = f_X(x) |dx/dy|, where x = g^{-1}(y).

Important Points:
We need to ensure that g is one-to-one (strictly monotone) so that the inverse
exists and is unique.
The absolute value of the derivative ensures that the transformed variable's
PDF is non-negative; for a strictly decreasing g, the derivative of g^{-1} is
negative, so the absolute value is essential.

Example 10.2. Let X be a continuous random variable with PDF:

f_X(x) = 2x for 0 ≤ x ≤ 1, and f_X(x) = 0 otherwise.

Define Y = X². Find the PDF of Y.

Solution:
The CDF of X is:

F_X(x) = ∫_0^x 2t dt = x², 0 ≤ x ≤ 1.

Therefore, the CDF of Y is:

F_Y(y) = P(Y ≤ y) = P(X² ≤ y) = P(X ≤ √y).

For 0 ≤ y ≤ 1:

F_Y(y) = F_X(√y) = (√y)² = y.

Now we differentiate to find f_Y(y):

f_Y(y) = d F_Y(y)/dy = 1, 0 ≤ y ≤ 1.

Thus f_Y(y) = 1 for 0 ≤ y ≤ 1; that is, Y ∼ Uniform(0, 1).
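The result of Example 10.2 can be checked by simulation. The short Python sketch below (added for illustration, not part of the original notes) samples X with density 2x on [0, 1] by inverse-transform sampling and verifies that Y = X² looks uniform.

import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=200_000)
x = np.sqrt(u)          # inverse-transform sampling: F_X(x) = x^2, so X = sqrt(U)
y = x ** 2              # the transformation from Example 10.2

# If f_Y(y) = 1 on [0, 1], each quarter of [0, 1] should contain about 25% of the sample.
counts, _ = np.histogram(y, bins=[0.0, 0.25, 0.5, 0.75, 1.0])
print(counts / len(y))  # approximately [0.25, 0.25, 0.25, 0.25]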


Example 10.3. Let X ∼ Uniform(0, 1), so f_X(x) = 1 for 0 ≤ x ≤ 1. Define
Y = √X. We want f_Y(y).
1. Identify the inverse: x = g^{-1}(y) = y².
2. Compute the derivative: dx/dy = 2y.
3. Determine the range of Y: since X ∈ [0, 1], Y ∈ [0, 1].
Now:

f_Y(y) = f_X(y²) · |2y|.

Since f_X(x) = 1 for x ∈ [0, 1], and y² ∈ [0, 1] when y ∈ [0, 1], we have:

f_Y(y) = 1 · 2y = 2y, 0 ≤ y ≤ 1.

This is the PDF of Y, which turns out to be the Beta distribution with
parameters (2, 1), a simple increasing density on [0, 1].
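Again, a simulation check is straightforward (this sketch is my own addition and assumes SciPy is available): it compares a histogram of Y = √X, with X ∼ Uniform(0, 1), against the Beta(2, 1) PDF, which equals 2y on [0, 1].

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = np.sqrt(rng.uniform(size=200_000))          # Y = sqrt(X) with X ~ Uniform(0, 1)

hist, edges = np.histogram(y, bins=50, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for point in [0.1, 0.3, 0.5, 0.7, 0.9]:
    i = int(np.argmin(np.abs(centers - point)))
    # Empirical density vs. f_Y(y) = 2y, i.e. the Beta(2, 1) PDF.
    print(point, round(float(hist[i]), 2), round(float(stats.beta.pdf(point, 2, 1)), 2))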

10.4 Non-Monotone Transformations


If the transformation is not one-to-one, then we need to proceed differently. For
example, consider Y = X² where X is a continuous variable on (−∞, ∞). For
a given y > 0, there are two x-values that map to y: +√y and −√y. In such
cases, we must sum the contributions from all branches of the inverse:

f_Y(y) = Σ_{x : g(x) = y} f_X(x) |dx/dy|.

Example 10.4. Let X have the PDF

f_X(x) = (1/√(2π)) e^{−x²/2}, −∞ < x < ∞

(namely the standard normal distribution). Define Y = X². To find f_Y(y):
For y ≥ 0, the inverse branches are x = √y and x = −√y.
The derivative is |dx/dy| = 1/(2√y).
Thus:

f_Y(y) = f_X(√y) · 1/(2√y) + f_X(−√y) · 1/(2√y).

Since f_X(x) is symmetric about zero, f_X(√y) = f_X(−√y). For the standard
normal, f_X(x) = (1/√(2π)) e^{−x²/2}, so:

f_Y(y) = 2 · (1/√(2π)) e^{−y/2} · 1/(2√y) = (1/√(2πy)) e^{−y/2}, y > 0.

This is the well-known Chi-square distribution with 1 degree of freedom
(which is indeed the distribution of X² when X ∼ N(0, 1)).
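This derivation can also be verified numerically. The Python sketch below (an addition for illustration, assuming SciPy is available) squares standard normal draws and compares both the derived density and the sample with the Chi-square distribution with 1 degree of freedom.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.standard_normal(500_000) ** 2           # Y = X^2 with X ~ N(0, 1)

# The derived density (1 / sqrt(2*pi*y)) * exp(-y / 2) should equal the chi-square(1) PDF.
for q in [0.1, 0.5, 1.0, 2.0]:
    derived = np.exp(-q / 2) / np.sqrt(2 * np.pi * q)
    print(q, round(derived, 4), round(float(stats.chi2.pdf(q, df=1)), 4))

# The sample should also match chi-square(1) probabilities, e.g. P(Y <= 1) ~ 0.683.
print(round(float(np.mean(y <= 1.0)), 3), round(float(stats.chi2.cdf(1.0, df=1)), 3))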

10.4.1 Jacobian Methods for Higher Dimensions


For multivariate transformations, the density transformation formula involves
the Jacobian of the transformation. If X = (X_1, X_2, ..., X_n) has joint PDF
f_X(x), and we consider a transformation Y = g(X), where Y = (Y_1, Y_2, ..., Y_n)
and g is invertible, then the joint PDF of Y is given by:

f_Y(y) = f_X(g^{-1}(y)) · |det(J)|,

where J is the Jacobian matrix of partial derivatives of the inverse
transformation x = g^{-1}(y).
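As a simple instance of the Jacobian formula (a sketch of my own, not taken from the notes; it assumes SciPy is available), take X = (X_1, X_2) standard bivariate normal and an invertible linear map Y = AX. Then g^{-1}(y) = A^{-1}y, the Jacobian of the inverse is A^{-1}, and the resulting density should agree with the N(0, AAᵀ) density.

import numpy as np
from scipy import stats

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])                      # invertible linear transformation Y = A X
A_inv = np.linalg.inv(A)
std_normal_2d = stats.multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))

def f_Y(y):
    # Jacobian formula: f_Y(y) = f_X(A^{-1} y) * |det(A^{-1})|.
    return std_normal_2d.pdf(A_inv @ y) * abs(np.linalg.det(A_inv))

y = np.array([0.7, -1.2])
print(f_Y(y))
print(stats.multivariate_normal(mean=[0.0, 0.0], cov=A @ A.T).pdf(y))  # should match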

10.5 Summary
1. Identify the transformation and whether it is monotone:
If monotone, use the straightforward CDF or PDF change-of-variable formulas.
If not monotone, break the range of X into parts and sum over contributions
from each inverse branch.
2. For discrete variables:
Sum the original PMF over all x that map to the desired y.
3. For continuous variables:
Use the CDF approach: F_Y(y) = P(g(X) ≤ y), solve the inequality in terms of X,
and differentiate if the PDF is needed.
Use the PDF approach: f_Y(y) = f_X(g^{-1}(y)) |d g^{-1}(y)/dy| for one-to-one g.
For multiple solutions to g(x) = y, sum the contributions from all appropriate
inverses.
4. Check the support: ensure you correctly determine the support of Y. Different
transformations can restrict or expand the range of possible values.
Transforming random variables is a fundamental technique that appears in many
areas of probability and statistics. Understanding how to handle both monotone
and non-monotone transformations, applying the CDF and PDF methods, and
using Jacobians in multivariate cases are key skills. With these tools, one can
derive the distribution of complex, transformed variables from simpler, known
distributions.

10.6 Exercise
1. Suppose X ∼ Exponential(λ) with PDF f_X(x) = λe^{−λx} for x ≥ 0. Let
Y = ln(X). Find f_Y(y).
2. Let X be a random variable with PDF f_X(x) = (1/2) e^{−|x|} (a Laplace
distribution). Define Y = X². Find f_Y(y) for y ≥ 0.
3. (Exponential to a Log-Transformed Variable) Let X ∼ Exponential(λ) with
PDF f_X(x) = λe^{−λx} for x ≥ 0. Define Y = −ln(X). Find the PDF of Y.
4. (Uniform to an Exponential-Like Transformation) If X ∼ Uniform(0, 1)
and we define Y = −ln(X), find f_Y(y). What distribution does Y follow?
5. (Normal Standardization) Let X ∼ N(µ, σ²). Define Y = (X − µ)/σ. Show
that Y ∼ N(0, 1).
6. (Absolute Value of a Normal Distribution) Let X ∼ N(0, 1). Define Y = |X|.
Find the PDF of Y. Hint: for each y ≥ 0, there are two x-values, +y and −y.
7. (Square of a Non-Normal Variable) Let X have PDF f_X(x) = 1/(π(1 + x²))
(a standard Cauchy distribution). Define Y = X². Find f_Y(y) for y > 0.
8. (Gamma Distribution and Root Transformation) Let X ∼ Gamma(k, θ) with
PDF f_X(x) = x^{k−1} e^{−x/θ} / (Γ(k) θ^k) for x > 0. Define Y = X^{1/3}.
Find the PDF f_Y(y).
9. (Log-Transformation of a Cauchy Variable) Let X ∼ Cauchy(0, 1) (standard
Cauchy). Define Y = e^X. Find the PDF f_Y(y).
10. Let X ∼ Uniform(a, b) and define Y = a + b − X. Find f_Y(y) and show
that Y also has a Uniform(a, b) distribution.
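For checking answers to these exercises, a generic Monte Carlo comparison between a candidate density and simulated values of Y = g(X) can be handy. The helper below is my own sketch (the function name check_density and its arguments are illustrative, not from the notes); it is demonstrated on the already-worked Example 10.3 rather than on an exercise.

import numpy as np

def check_density(sample_X, g, candidate_pdf, lo, hi, bins=40, seed=0, n=200_000):
    # Compare a candidate PDF for Y = g(X) against a histogram of simulated Y values.
    rng = np.random.default_rng(seed)
    y = g(sample_X(rng, n))
    hist, edges = np.histogram(y, bins=bins, range=(lo, hi), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return float(np.max(np.abs(hist - candidate_pdf(centers))))  # a large gap suggests rechecking the derivation

# Example 10.3: X ~ Uniform(0, 1), Y = sqrt(X), derived f_Y(y) = 2y on [0, 1].
gap = check_density(lambda rng, n: rng.uniform(size=n),
                    np.sqrt,
                    lambda y: 2 * y,
                    lo=0.0, hi=1.0)
print(gap)  # should be small (sampling noise only)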

