Functions of Continuous Random Variables PDF CDF
Example 4.7
Let X be a Uniform(0,1) random variable, and let Y = e^X. Find the CDF and the PDF of Y.
• Solution
◦ First, note that we already know the CDF and PDF of X. In particular,
        ⎧ 0   for x < 0
FX(x) = ⎨ x   for 0 ≤ x ≤ 1
        ⎩ 1   for x > 1
It is a good idea to think about the range of Y before finding the distribution. Since e^x is an increasing function of x and
RX = [0, 1], we conclude that RY = [1, e]. So we immediately know that
FY(y) = P(Y ≤ y) = 0,   for y < 1,
FY(y) = P(Y ≤ y) = 1,   for y ≥ e.
a. To find FY(y) for y ∈ [1, e], we can write
FY(y) = P(Y ≤ y)
      = P(e^X ≤ y)
      = P(X ≤ ln y)        since e^x is an increasing function
      = FX(ln y) = ln y    since 0 ≤ ln y ≤ 1.
To summarize,

        ⎧ 0      for y < 1
FY(y) = ⎨ ln y   for 1 ≤ y < e
        ⎩ 1      for y ≥ e
b. The above CDF is a continuous function, so we can obtain the PDF of Y by taking its derivative. We have

fY(y) = FY′(y) = ⎧ 1/y   for 1 ≤ y ≤ e
                 ⎩ 0     otherwise
Note that the CDF is not technically differentiable at points 1 and e , but as we mentioned earlier we do not worry about this
since this is a continuous random variable and changing the PDF at a finite number of points does not change probabilities.
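As a quick sanity check (a sketch, not part of the original example), we can simulate X ~ Uniform(0,1), set Y = e^X, and compare the empirical CDF of Y against the derived FY(y) = ln y:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200_000)  # X ~ Uniform(0, 1)
y = np.exp(x)                            # Y = e^X, so RY = [1, e]

# Empirical CDF at a few points of [1, e] vs. the derived FY(y) = ln y
for t in [1.2, 1.8, 2.5]:
    print(f"F_Y({t}) ~ {np.mean(y <= t):.4f}  (exact: {np.log(t):.4f})")
```

With 200,000 samples the empirical and exact values should agree to about two decimal places.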
1 of 5 10/12/22, 2:59 pm
Functions of Continuous Random Variables | PDF | CDF https://www.probabilitycourse.com/chapter4/4_1_3_functions_continu...
Note that since we have already found the PDF of Y, it did not matter which method we used to find E[Y]; either way, E[Y] = e − 1. However, if the
problem had asked only for E[Y] without asking for the PDF of Y, then using LOTUS would be much easier.
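To spell out the two routes (a sketch, not from the original text): LOTUS gives E[Y] = ∫₀¹ eˣ dx = e − 1, while the direct route gives E[Y] = ∫₁ᵉ y · (1/y) dy = e − 1 as well. A Monte Carlo estimate confirms the value:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=500_000)

# LOTUS: E[Y] = E[e^X] = integral of e^x over [0, 1], estimated by the mean of e^U
lotus_mc = np.exp(u).mean()

# Direct route: E[Y] = integral of y * f_Y(y) = y * (1/y) over [1, e] = e - 1 exactly
exact = np.e - 1

print(lotus_mc, exact)  # both ≈ 1.71828
```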
Example 4.8
Suppose that Y is a random variable with the following CDF:
        ⎧ 0    for y < 0
FY(y) = ⎨ √y   for 0 ≤ y ≤ 1
        ⎩ 1    for y > 1
Note that the CDF is a continuous function of y, so Y is a continuous random variable. Thus, we can find the PDF of Y by
differentiating FY(y),

fY(y) = FY′(y) = ⎧ 1/(2√y)   for 0 < y ≤ 1
                 ⎩ 0         otherwise
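We can sanity-check this result by inverse-transform sampling (a sketch, not part of the original example): since FY(y) = √y on [0, 1], the inverse is FY⁻¹(u) = u², so U² with U ~ Uniform(0,1) has exactly this distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 1.0, size=200_000)
y = u**2   # inverse-transform sampling: F_Y^{-1}(u) = u^2 for F_Y(y) = sqrt(y)

# Compare the empirical CDF with the stated F_Y(y) = sqrt(y)
for t in [0.1, 0.5, 0.9]:
    print(f"F_Y({t}) ~ {np.mean(y <= t):.4f}  (exact: {np.sqrt(t):.4f})")
```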
So far, we have discussed how we can find the distribution of a function of a continuous random variable starting from finding the CDF. If we
are interested in finding the PDF of Y = g(X), and the function g satisfies some properties, it might be easier to use a method called the
method of transformations. Let's start with the case where g is a function satisfying the following properties:
• g(x) is differentiable;
• g(x) is a strictly increasing function, that is, if x1 < x2 , then g(x1 ) < g(x2 ) .
Now, let X be a continuous random variable and Y = g(X). We will show that you can directly find the PDF of Y using the following
formula.
fY(y) = ⎧ fX(x1)/g′(x1) = fX(x1) · dx1/dy    where g(x1) = y
        ⎩ 0                                  if g(x) = y does not have a solution
Note that since g is strictly increasing, its inverse function g⁻¹ is well defined. That is, for each y ∈ RY, there exists a unique x1 such that
g(x1) = y. We can write x1 = g⁻¹(y).
To see this, first write
FY(y) = P(Y ≤ y)
      = P(g(X) ≤ y)
      = P(X ≤ g⁻¹(y))    since g is strictly increasing
      = FX(g⁻¹(y)).
Differentiating with respect to y and applying the chain rule, we obtain
fY(y) = d/dy FX(g⁻¹(y)) = fX(x1) · dx1/dy = fX(x1)/g′(x1),    since dx1/dy = 1/(dy/dx) = 1/g′(x1).
We can repeat the same argument for the case where g is strictly decreasing. In that case, g′ (x1 ) will be negative, so we need to use |g′ (x1 )|.
Thus, we can state the following theorem for a strictly monotonic function. (A function g : ℝ → ℝ is called strictly monotonic if it is
strictly increasing or strictly decreasing.)
Theorem 4.1
Suppose that X is a continuous random variable and g : ℝ → ℝ is a strictly monotonic differentiable function. Let Y = g(X). Then
the PDF of Y is given by
fY(y) = ⎧ fX(x1)/|g′(x1)| = fX(x1) · |dx1/dy|    where g(x1) = y                          (4.5)
        ⎩ 0                                      if g(x) = y does not have a solution
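Equation 4.5 translates directly into a small helper function. The sketch below (transform_pdf is a hypothetical name, not from the text) evaluates it for Example 4.7, where g(x) = e^x and the known answer is fY(y) = 1/y on [1, e]:

```python
import numpy as np

def transform_pdf(f_X, g_inv, g_prime, y):
    """Evaluate Equation 4.5: the PDF of Y = g(X) for strictly monotonic,
    differentiable g, where x1 = g_inv(y) is the unique solution of g(x1) = y."""
    x1 = g_inv(y)
    return f_X(x1) / abs(g_prime(x1))

# Check against Example 4.7: X ~ Uniform(0,1), Y = e^X, so f_Y(y) = 1/y on [1, e]
f_X = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0   # uniform PDF on [0, 1]
f_Y = transform_pdf(f_X, np.log, np.exp, 2.0)     # g(x) = e^x: g'(x) = e^x, g^(-1)(y) = ln y
print(f_Y)  # ≈ 0.5, matching f_Y(2) = 1/2
```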
Example 4.9
Let X be a continuous random variable with PDF

fX(x) = ⎧ 4x³   for 0 < x ≤ 1
        ⎩ 0     otherwise

and let Y = 1/X. Find fY(y).
• Solution
◦ First note that RY = [1, ∞). Also, note that g(x) = 1/x is a strictly decreasing and differentiable function on (0, 1], so we may use
Equation 4.5. We have g′(x) = −1/x². For any y ∈ [1, ∞), x1 = g⁻¹(y) = 1/y. So, for y ∈ [1, ∞),

fY(y) = fX(x1)/|g′(x1)|
      = 4x1³ / |−1/x1²|
      = 4x1⁵
      = 4/y⁵.
Thus, we conclude

fY(y) = ⎧ 4/y⁵   for y ≥ 1
        ⎩ 0      otherwise
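As a check (a sketch, not part of the original example), we can sample X by inverse-CDF sampling — FX(x) = x⁴ on (0, 1], so X = U^(1/4) — and compare the empirical CDF of Y = 1/X with FY(y) = 1 − 1/y⁴, obtained by integrating 4/y⁵:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.uniform(0.0, 1.0, size=200_000)
x = u**0.25      # inverse-CDF sampling: F_X(x) = x^4 on (0, 1], so X = U^(1/4)
y = 1.0 / x      # Y = 1/X

# Integrating the derived PDF: F_Y(y) = 1 - 1/y^4 for y >= 1
for t in [1.5, 2.0, 4.0]:
    print(f"F_Y({t}) ~ {np.mean(y <= t):.4f}  (exact: {1 - t**-4:.4f})")
```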
Theorem 4.1 can be extended to a more general case. In particular, if g is not monotonic, we can usually divide it into a finite number of
monotonic differentiable functions. Figure 4.3 shows a function g that has been divided into monotonic parts. We may state a more general
form of Theorem 4.1.
Theorem 4.2
Consider a continuous random variable X with domain RX, and let Y = g(X). Suppose that we can partition RX into a finite number of
intervals such that g(x) is strictly monotonic and differentiable on each of them. Then the PDF of Y is given by

fY(y) = Σⁿᵢ₌₁ fX(xi)/|g′(xi)| = Σⁿᵢ₌₁ fX(xi) · |dxi/dy|    (4.6)

where x1, x2, …, xn are the real solutions to g(x) = y.
Example 4.10
Let X be a standard normal random variable,

fX(x) = (1/√(2π)) e^(−x²/2),   for all x ∈ ℝ

and let Y = X². Find fY(y).
• Solution
◦ We note that the function g(x) = x² is strictly decreasing on the interval (−∞, 0), strictly increasing on the interval (0, ∞),
and differentiable on both intervals, with g′(x) = 2x. Thus, we can use Equation 4.6. First, note that RY = (0, ∞). Next, for any
y ∈ (0, ∞) we have two solutions for y = g(x), in particular,
x1 = √y,   x2 = −√y.
Note that although 0 ∈ RX it has not been included in our partition of RX . This is not a problem, since P(X = 0) = 0.
Indeed, in the statement of Theorem 4.2, we could replace RX by RX − A, where A is any set for which P(X ∈ A) = 0. In
particular, this is convenient when we exclude the endpoints of the intervals. Thus, we have
fY(y) = fX(x1)/|g′(x1)| + fX(x2)/|g′(x2)|
      = fX(√y)/|2√y| + fX(−√y)/|−2√y|
      = (1/(2√(2πy))) e^(−y/2) + (1/(2√(2πy))) e^(−y/2)
      = (1/√(2πy)) e^(−y/2),   for y ∈ (0, ∞).
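This is the chi-squared distribution with one degree of freedom. As a numeric check (a sketch, not part of the original example), the derived PDF integrates to P(Y ≤ t) = P(|X| ≤ √t) = erf(√(t/2)), which we can compare against simulation:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(200_000)
y = x**2   # Y = X^2 for standard normal X

# From the derived PDF: P(Y <= t) = P(|X| <= sqrt(t)) = erf(sqrt(t/2))
for t in [0.5, 1.0, 3.0]:
    exact = math.erf(math.sqrt(t / 2.0))
    print(f"F_Y({t}) ~ {np.mean(y <= t):.4f}  (exact: {exact:.4f})")
```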