Functions of One & Two Dimensional Random Variables

Functions of One dimensional random variables

If X is a discrete random variable and Y = H(X) is a continuous function of X, then Y is also a discrete random variable.
Example:

X      -1     0     1
P(x)   1/3   1/2   1/6

Suppose Y = 3X + 1, then the pmf of Y is given by

Y      -2     1     4
P(y)   1/3   1/2   1/6

Suppose Y = X^2, then the pmf of Y is

Y       1     0
P(y)   1/2   1/2

Suppose X is a continuous random variable with pdf f(x) and H(X) is a continuous function of X. Then Y = H(X) is a continuous random variable. To obtain the pdf of Y we use the following steps.
1. Obtain the cdf of Y, i.e., G(y) = P(Y ≤ y).
2. Differentiate G(y) with respect to y to get the pdf of Y, i.e., g(y) = G'(y).
3. Determine the range space of Y, i.e., the values of y for which g(y) > 0.

Problems:

1. If f(x) = 2x for 0 < x < 1 (and 0 otherwise), and Y = 3X + 1, find the pdf of Y.
Soln: G(y) = P(Y ≤ y) = P(3X + 1 ≤ y) = P(X ≤ (y - 1)/3)
G(y) = ∫ from 0 to (y-1)/3 of 2x dx = ((y - 1)/3)².
g(y) = G'(y) = 2(y - 1)/9.
0 < x < 1 ⟹ 0 < (y - 1)/3 < 1 ⟹ 1 < y < 4.
Therefore, g(y) = 2(y - 1)/9 for 1 < y < 4, and 0 otherwise.
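As a quick numerical sanity check (not part of the original notes; a sketch assuming NumPy is available), one can sample X by inverse transform and compare a histogram of Y = 3X + 1 with the derived pdf g(y) = 2(y - 1)/9:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)
x = np.sqrt(u)              # inverse transform for f(x) = 2x on (0, 1), since F(x) = x^2
y = 3 * x + 1               # transformed variable

hist, edges = np.histogram(y, bins=50, range=(1, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
g = 2 * (centers - 1) / 9   # pdf derived above
print("max abs difference:", np.max(np.abs(hist - g)))   # should be close to 0
```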

2. If f(x) = 2x for 0 < x < 1 (and 0 otherwise), and Y = e^(-X), find the pdf of Y.
Soln: G(y) = P(Y ≤ y) = P(e^(-X) ≤ y) = P(X ≥ log_e(1/y))
G(y) = ∫ from log_e(1/y) to 1 of 2x dx = 1 - (log_e(1/y))².
g(y) = G'(y) = (2/y) log_e(1/y).
0 < x < 1 ⟹ 0 < log_e(1/y) < 1 ⟹ 1/e < y < 1.
Therefore, g(y) = (2/y) log_e(1/y) for 1/e < y < 1, and 0 otherwise.

Result: Let X be a continuous random variable with pdf f(x). Let Y = X². Then the pdf of Y is
g(y) = (1/(2√y)) (f(√y) + f(-√y)).
Example 1: Suppose f(x) = 2x e^(-x²) for 0 < x < ∞ (and 0 otherwise). Find the pdf of Y = X².
Soln: g(y) = (1/(2√y)) (f(√y) + f(-√y)) = (1/(2√y)) (2√y e^(-y) + 0) = e^(-y), 0 < y < ∞.
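A hedged check of this result (assuming NumPy; not from the original notes): if X is sampled from f(x) = 2x e^(-x²) by inverse transform (its cdf is F(x) = 1 - e^(-x²)), then Y = X² should behave like an Exp(1) variable, whose mean and variance are both 1.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=200_000)
x = np.sqrt(-np.log(1 - u))   # inverse transform: F(x) = 1 - exp(-x^2)
y = x ** 2                    # should follow g(y) = exp(-y), i.e. Exp(1)
print(y.mean(), y.var())      # both should be close to 1
```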

Example 2: Suppose f(x) = (2/9)(x + 1) for -1 < x < 1 (and 0 otherwise). Find the pdf of Y = X².
Soln: g(y) = (1/(2√y)) (f(√y) + f(-√y)) = (1/(2√y)) ((2/9)(√y + 1) + (2/9)(-√y + 1)) = 2/(9√y), 0 < y < 1.

Theorem: Let X be a continuous random variable with pdf f(x). Suppose Y = H(X) is a strictly monotone (increasing or decreasing) function of X. Then the pdf of Y is given by
g(y) = f(x) |dx/dy|, where x = H^(-1)(y).
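The same formula can be applied symbolically. The sketch below (a sketch only, assuming SymPy is available) rederives the pdf of Y = e^(-X) from Problem 2 above, where f(x) = 2x and the cdf method gave g(y) = (2/y) log_e(1/y):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.symbols('y', positive=True)

f = 2 * x                                        # pdf of X on (0, 1)
x_of_y = sp.solve(sp.Eq(y, sp.exp(-x)), x)[0]    # inverse transformation: x = log(1/y)
g = sp.simplify(f.subs(x, x_of_y) * sp.Abs(sp.diff(x_of_y, y)))
print(g)                                         # -2*log(y)/y, i.e. (2/y)*log(1/y) on (1/e, 1)
```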

Example:
1. Suppose X is uniformly distributed over (0, 1); find the pdf of Y = 1/(X + 1).
Soln: We know that Y is a strictly monotone function of X.
f(x) = 1 for 0 < x < 1, and 0 otherwise.
Note that X = 1/Y - 1 ⟹ f(x) = f(1/y - 1) = 1.
|dx/dy| = 1/y².
0 < x < 1 ⟹ 0 < 1/y - 1 < 1 ⟹ 1 < 1/y < 2 ⟹ 1/2 < y < 1.
Therefore, g(y) = 1/y², 1/2 < y < 1.

2. If X is uniformly distributed over (-π/2, π/2), find the pdf of Y = tan X. (Or: show that Y = tan X follows the Cauchy distribution.)
Soln: Given f(x) = 1/π for -π/2 < x < π/2, and 0 otherwise.
We know that Y is a strictly monotone function of X.
Then X = tan⁻¹(Y) ⟹ f(tan⁻¹(y)) = 1/π, and |dx/dy| = 1/(1 + y²).
Therefore, g(y) = 1/(π(1 + y²)), -∞ < y < ∞.
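A quick empirical check (a sketch assuming NumPy; not from the original notes): the standard Cauchy quantile function is Q(p) = tan(π(p - 1/2)), so sample quantiles of Y = tan X should track it.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-np.pi / 2, np.pi / 2, size=200_000)
y = np.tan(x)

p = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
print(np.quantile(y, p))            # empirical quantiles of tan(X)
print(np.tan(np.pi * (p - 0.5)))    # standard Cauchy quantiles Q(p) = tan(pi*(p - 1/2))
```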

3. If X ~ N(μ, σ²), show that Z = (X - μ)/σ ~ N(0, 1) and Y = Z² ~ χ²(1).
Soln: G(z) = P(Z ≤ z) = P((X - μ)/σ ≤ z) = P(X ≤ σz + μ)

G(z) = F(σz + μ), where F is the cdf of X.

g(z) = G'(z) = F'(σz + μ) σ = f(σz + μ) σ = (1/√(2π)) e^(-z²/2), so Z ~ N(0, 1).

Now, with f denoting the N(0, 1) pdf of Z,
g(y) = (1/(2√y)) (f(√y) + f(-√y)) = (1/(2√y)) ((1/√(2π)) e^(-y/2) + (1/√(2π)) e^(-y/2))

g(y) = (1/√(2πy)) e^(-y/2).

Hence, Y ~ χ²(1).
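As a sanity check (a sketch assuming NumPy and SciPy; not part of the original notes), squaring standard normal samples and comparing against the χ²(1) distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
z = rng.standard_normal(200_000)
y = z ** 2

print(stats.kstest(y, 'chi2', args=(1,)))   # KS statistic should be small
print(y.mean(), y.var())                    # chi-square(1) has mean 1 and variance 2
```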


Extra Problem:
A random variable X has the Cauchy distribution. Show that 1/X also has the Cauchy distribution.

Functions of two dimensional random variables


Let (𝑋, 𝑌) be a continuous two dimensional random variable with pdf 𝑓(𝑥, 𝑦). If 𝑍 = 𝐻1 (𝑋, 𝑌)
is a continuous function of (𝑋, 𝑌), then 𝑍 will be a continuous (one-dimensional) random
variable. In order to find the pdf of 𝑍, we shall follow the procedure discussed below.
To find the pdf of Z = H₁(X, Y), we first introduce a second random variable, say W = H₂(X, Y), and obtain the joint pdf of Z and W, say k(z, w). From the knowledge of k(z, w), we can then obtain the desired pdf of Z, say g(z), by simply integrating k(z, w) with respect to w. That is, g(z) = ∫ from -∞ to ∞ of k(z, w) dw.
Two problems which arise here are
i. how to find the joint pdf k(z, w) of Z and W,
ii. how to choose the appropriate random variable W = H₂(X, Y).
To resolve these problems, let us simply state that we usually make the simplest possible choice
for 𝑊 as it plays only an intermediate role. In order to find the joint pdf 𝑘(𝑧, 𝑤), we need the
following theorem.

Theorem:
Suppose that (𝑋, 𝑌) is a two-dimensional continuous random
variable with joint pdf 𝑓(𝑥, 𝑦). Let 𝑍 = 𝐻1 (𝑋, 𝑌) and 𝑊 = 𝐻2 (𝑋, 𝑌) and assume that the
functions 𝐻1 and 𝐻2 satisfy the following conditions:
i. The equations 𝑧 = 𝐻1 (𝑥, 𝑦) and 𝑤 = 𝐻2 (𝑥, 𝑦) may be uniquely solved for 𝑥 and
𝑦 in terms of 𝑧 and 𝑤, say 𝑥 = 𝐺1 (𝑧, 𝑤) and
𝑦 = 𝐺2 (𝑧, 𝑤).
ii. The partial derivatives ∂x/∂z, ∂x/∂w, ∂y/∂z and ∂y/∂w exist and are continuous.
Then the joint pdf of (Z, W), say k(z, w), is given by the following expression:
k(z, w) = f[G₁(z, w), G₂(z, w)] |J(z, w)|,
where J(z, w) is the following 2 × 2 determinant:
J(z, w) = | ∂x/∂z   ∂x/∂w |
          | ∂y/∂z   ∂y/∂w |
This determinant is called the 'Jacobian' of the transformation (x, y) → (z, w) and is sometimes denoted by ∂(x, y)/∂(z, w). We note that k(z, w) will be nonzero for those values of (z, w) corresponding to values of (x, y) for which f(x, y) is nonzero.
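A short symbolic sketch (assuming SymPy; not part of the original notes) of how such a Jacobian is computed, using the polar transformation x = r cos θ, y = r sin θ, whose Jacobian r is quoted without derivation in Problem 2 below:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

J = sp.Matrix([x, y]).jacobian([r, theta])   # matrix of partial derivatives d(x,y)/d(r,theta)
print(sp.simplify(J.det()))                  # r
```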

Problems

1. Suppose that X and Y are two independent random variables having pdfs f(x) = e^(-x), x ≥ 0 and g(y) = 2e^(-2y), y ≥ 0. Find the pdf of X + Y.
Solution:
Since X and Y are independent, the joint pdf of (X, Y) is given by
f(x, y) = f(x) g(y) = 2e^(-(x + 2y)), x, y ≥ 0.
Let Z = X + Y and W = Y, that is, Y = W and X = Z - W.
The Jacobian J = | ∂x/∂z   ∂x/∂w | = | 1   -1 | = 1
                 | ∂y/∂z   ∂y/∂w |   | 0    1 |
Thus the joint pdf of (Z, W) is
k(z, w) = f(x, y) |J| = 2e^(-(x + 2y)) = 2e^(-(z + w)).
0 ≤ y < ∞ ⟹ 0 ≤ w < ∞
0 ≤ x < ∞ ⟹ 0 ≤ z - w < ∞ ⟹ w ≤ z < ∞

Thus k(z, w) = 2e^(-(z + w)), 0 ≤ w ≤ z < ∞.

The required pdf of Z is h(z) = ∫ from w = 0 to z of 2e^(-(z + w)) dw
= 2(e^(-z) - e^(-2z)), z ≥ 0.
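A numerical check of this answer (a sketch assuming NumPy; not from the original notes): sample X ~ Exp(rate 1) and Y ~ Exp(rate 2) directly and compare a histogram of Z = X + Y with h(z) = 2(e^(-z) - e^(-2z)).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300_000
x = rng.exponential(scale=1.0, size=n)   # pdf e^(-x)
y = rng.exponential(scale=0.5, size=n)   # pdf 2 e^(-2y)
z = x + y

hist, edges = np.histogram(z, bins=80, range=(0, 8), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
h = 2 * (np.exp(-centers) - np.exp(-2 * centers))
print("max abs difference:", np.max(np.abs(hist - h)))   # should be close to 0
```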

2. If X ~ N(0, σ²), Y ~ N(0, σ²) and X, Y are independent, find the pdf of R = √(X² + Y²).

Solution:
Pdf of X: f(x) = (1/(√(2π) σ)) e^(-x²/(2σ²)), -∞ < x < ∞
Pdf of Y: g(y) = (1/(√(2π) σ)) e^(-y²/(2σ²)), -∞ < y < ∞
Since X and Y are independent, the joint pdf of (X, Y) is given by
f(x, y) = f(x) g(y) = (1/(2πσ²)) e^(-(x² + y²)/(2σ²)), -∞ < x, y < ∞.
Let R = √(X² + Y²) and θ = tan⁻¹(Y/X), that is, X = R cos θ and Y = R sin θ, and the Jacobian J = r.
Thus the joint pdf of (R, θ) is
k(r, θ) = f(x, y) |J| = (r/(2πσ²)) e^(-r²/(2σ²)), r ≥ 0, 0 ≤ θ ≤ 2π.
The required pdf of R is h(r) = ∫ from θ = 0 to 2π of (r/(2πσ²)) e^(-r²/(2σ²)) dθ
= (r/σ²) e^(-r²/(2σ²)), r ≥ 0.
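This h(r) is the Rayleigh pdf with scale σ, which can be checked empirically (a sketch assuming NumPy and SciPy; not part of the original notes):

```python
import numpy as np
from scipy import stats

sigma = 2.0
rng = np.random.default_rng(5)
x = rng.normal(0, sigma, size=200_000)
y = rng.normal(0, sigma, size=200_000)
r = np.hypot(x, y)                                    # R = sqrt(X^2 + Y^2)

print(stats.kstest(r, 'rayleigh', args=(0, sigma)))   # loc = 0, scale = sigma; KS statistic small
```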

3. If X₁, X₂ are independent and have the standard normal distribution, X₁, X₂ ~ N(0, 1), find the pdf of X₁/X₂.
Solution:
Pdf of X₁: f(x₁) = (1/√(2π)) e^(-x₁²/2), -∞ < x₁ < ∞
Pdf of X₂: g(x₂) = (1/√(2π)) e^(-x₂²/2), -∞ < x₂ < ∞
Since X₁, X₂ are independent, the joint pdf of (X₁, X₂) is given by
f(x₁, x₂) = (1/(2π)) e^(-(x₁² + x₂²)/2), -∞ < x₁, x₂ < ∞.
Let Z = X₁/X₂ and W = X₂, that is, X₂ = W and X₁ = ZW.
The Jacobian J = | w   z | = w
                 | 0   1 |
Thus the joint pdf of (Z, W) is
k(z, w) = (|w|/(2π)) e^(-w²(1 + z²)/2), -∞ < w, z < ∞.
The required pdf of Z is h(z) = ∫ from -∞ to ∞ of (|w|/(2π)) e^(-w²(1 + z²)/2) dw
= (2/(2π)) ∫ from 0 to ∞ of w e^(-w²(1 + z²)/2) dw.
On substituting w²(1 + z²)/2 = t, so that w(1 + z²) dw = dt,
we get h(z) = (1/π) ∫ from 0 to ∞ of e^(-t)/(1 + z²) dt = 1/(π(1 + z²)), -∞ < z < ∞.
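The last integral can also be verified numerically (a sketch assuming NumPy and SciPy; not part of the original notes): integrating k(z, w) over w at a few fixed values of z should reproduce the Cauchy pdf 1/(π(1 + z²)).

```python
import numpy as np
from scipy.integrate import quad

def k(w, z):
    # joint pdf of (Z, W) derived above
    return np.abs(w) / (2 * np.pi) * np.exp(-w**2 * (1 + z**2) / 2)

for z in (0.0, 0.5, 2.0):
    val, _ = quad(k, 0, np.inf, args=(z,))           # integrate over w > 0
    print(z, 2 * val, 1 / (np.pi * (1 + z**2)))      # doubled (by symmetry) vs Cauchy pdf
```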
4. The joint pdf of the random variable (X, Y) is given by
f(x, y) = (x/2) e^(-y), 0 < x < 2, y > 0.
Find the pdf of X + Y.
Solution:
Let Z = X + Y and W = Y, that is, Y = W and X = Z - W.
The Jacobian J = 1.
Thus the joint pdf of (Z, W) is k(z, w) = f(x, y) |J| = ((z - w)/2) e^(-w).
0 < y < ∞ ⟹ 0 < w < ∞
0 < x < 2 ⟹ 0 < z - w < 2 ⟹ w < z < 2 + w
k(z, w) = ((z - w)/2) e^(-w), 0 < w < z < 2 + w.

The required pdf of Z is
h(z) = ∫ from 0 to z of ((z - w)/2) e^(-w) dw, when 0 < z < 2
h(z) = ∫ from z - 2 to z of ((z - w)/2) e^(-w) dw, when 2 < z < ∞
That is,
h(z) = (1/2)(z + e^(-z) - 1), when 0 < z < 2
h(z) = (1/2)(e^(-z) + e^(2 - z)), when 2 < z < ∞.
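A numerical check of this piecewise answer (a sketch assuming NumPy; not part of the original notes): since f(x, y) factorizes, X can be sampled by inverse transform from its marginal pdf x/2 on (0, 2) (so X = 2√U) and Y ~ Exp(1) independently, and a histogram of Z = X + Y should follow h(z).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300_000
x = 2 * np.sqrt(rng.uniform(size=n))     # marginal pdf x/2 on (0, 2): F(x) = x^2/4
y = rng.exponential(scale=1.0, size=n)   # marginal pdf e^(-y)
z = x + y

hist, edges = np.histogram(z, bins=80, range=(0, 10), density=True)
c = 0.5 * (edges[:-1] + edges[1:])
h = np.where(c < 2, 0.5 * (c + np.exp(-c) - 1), 0.5 * (np.exp(-c) + np.exp(2 - c)))
print("max abs difference:", np.max(np.abs(hist - h)))   # should be close to 0
```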
5. Let f(x, y) = α^(-2) e^(-(x + y)/2) for x, y > 0, α > 0, and 0 elsewhere. Find the distribution of (X - Y)/2.

*********
