Chapter 7: Distribution for Functions of Random Variables (Student)

Chapter 7 discusses the transformation techniques for both discrete and continuous random variables, detailing how to derive the probability density functions (pdf) for transformed variables. It includes examples of transformations for univariate and multivariate cases, as well as methods for determining marginal densities. The chapter concludes with a summary of key concepts related to random variable transformations.


STS3402

CHAPTER 7
DISTRIBUTION FOR FUNCTIONS OF
RANDOM VARIABLES
TRANSFORMATION TECHNIQUE FOR
DISCRETE RANDOM VARIABLE
◦ Let $X$ be a discrete random variable with pdf $p(x)$. Let $Y = u(X)$ define a one-to-one transformation between the values of $X$ and $Y$, so that the equation $y = u(x)$ can be uniquely solved for $x$ in terms of $y$, where $x = u^{-1}(y)$. Then $Y$ is also a random variable and the pdf of $Y$ is:

$$p(y) = P(Y = y) = P(u(X) = y) = P(X = u^{-1}(y))$$
EXAMPLE
$X$ is a random variable with $p(x) = pq^{x-1}$, $x = 1, 2, 3, \dots$ (where $q = 1 - p$).
Determine the distribution of $Y = X^2$.
Solution:
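One way to proceed, using the transformation technique above: $y = u(x) = x^2$ is one-to-one on $x = 1, 2, 3, \dots$, with inverse $x = u^{-1}(y) = \sqrt{y}$. Hence

$$p_Y(y) = P(Y = y) = P(X = \sqrt{y}) = pq^{\sqrt{y}-1}, \qquad y = 1, 4, 9, 16, \dots$$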
Discrete case with two random variables

◦ Let $X_1$ and $X_2$ be discrete random variables with joint probability distribution function $p(x_1, x_2)$. Let $Y_1 = u_1(X_1, X_2)$ and $Y_2 = u_2(X_1, X_2)$, where $u_1$ and $u_2$ define a one-to-one transformation between the points $(x_1, x_2)$ and $(y_1, y_2)$. Then,

◦ $x_1 = u_1^{-1}(y_1, y_2)$
◦ $x_2 = u_2^{-1}(y_1, y_2)$ (obtained by solving the equations $y_1 = u_1(x_1, x_2)$ and $y_2 = u_2(x_1, x_2)$ simultaneously for $x_1$ and $x_2$)

$$p(y_1, y_2) = P(Y_1 = y_1, Y_2 = y_2) = P\big[X_1 = u_1^{-1}(y_1, y_2),\; X_2 = u_2^{-1}(y_1, y_2)\big]$$
EXAMPLE
$X_1$ and $X_2$ are independent Poisson random variables with parameters $\lambda_1$ and $\lambda_2$. Find $p(y_1, y_2)$ if $Y_1 = X_1 + X_2$ and $Y_2 = X_2$. Hence, obtain the marginal distribution of $Y_1 = X_1 + X_2$.
Solution:
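A sketch of the derivation: the inverse transformation is $x_1 = y_1 - y_2$, $x_2 = y_2$. Since $X_1$ and $X_2$ are independent,

$$p(y_1, y_2) = P(X_1 = y_1 - y_2,\; X_2 = y_2) = \frac{e^{-\lambda_1}\lambda_1^{y_1 - y_2}}{(y_1 - y_2)!}\cdot\frac{e^{-\lambda_2}\lambda_2^{y_2}}{y_2!}, \qquad y_2 = 0, 1, \dots, y_1.$$

Summing over $y_2$ and using the binomial theorem gives the marginal distribution of $Y_1$:

$$p_1(y_1) = \sum_{y_2=0}^{y_1} p(y_1, y_2) = \frac{e^{-(\lambda_1+\lambda_2)}}{y_1!}\sum_{y_2=0}^{y_1}\binom{y_1}{y_2}\lambda_1^{y_1-y_2}\lambda_2^{y_2} = \frac{e^{-(\lambda_1+\lambda_2)}(\lambda_1+\lambda_2)^{y_1}}{y_1!}, \qquad y_1 = 0, 1, 2, \dots$$

so $Y_1 = X_1 + X_2 \sim \text{Poisson}(\lambda_1 + \lambda_2)$.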
TRANSFORMATION TECHNIQUE FOR
CONTINUOUS RANDOM VARIABLE
THEOREM

◦ Let $X$ be a continuous random variable with pdf $f_X(x)$. Suppose that $Y = g(X)$ is a function of $X$ that is strictly monotone (increasing or decreasing). Then $Y = g(X)$ is also a random variable and has probability density function

$$f_Y(y) = f_X\big[g^{-1}(y)\big]\left|\frac{d\,g^{-1}(y)}{dy}\right|$$

where $g^{-1}(y)$ is defined to be the value of $x$ such that $g(x) = y$.


Proof when g(x) is an increasing function

Note: if $g(x)$ is an increasing function, then $g(x) \le y$ exactly when $x \le g^{-1}(y)$.

◦ $Y = g(X)$
◦ $F_Y(y) = P(g(X) \le y) = P(X \le g^{-1}(y)) = F_X(g^{-1}(y))$

◦ Differentiation gives

$$f_Y(y) = f_X\big(g^{-1}(y)\big)\,\frac{d\,g^{-1}(y)}{dy}$$

◦ Since $g^{-1}(y)$ is an increasing function, its derivative is non-negative, so the absolute value in the theorem can be dropped.


Proof when g(x) is a decreasing function

Note: if $g(x)$ is a decreasing function, then $g(x) \le y$ exactly when $x \ge g^{-1}(y)$.

◦ So, $F_Y(y) = P(Y \le y) = P(g(X) \le y) = P(X \ge g^{-1}(y)) = 1 - P(X \le g^{-1}(y)) = 1 - F_X(g^{-1}(y))$

◦ Differentiation gives

$$f_Y(y) = -f_X\big(g^{-1}(y)\big)\,\frac{d\,g^{-1}(y)}{dy} = f_X\big(g^{-1}(y)\big)\left|\frac{d\,g^{-1}(y)}{dy}\right|$$

◦ Note: $\dfrac{d\,g^{-1}(y)}{dy} < 0$ because $g^{-1}(y)$ is a decreasing function, so the minus sign makes the density non-negative.
EXAMPLE
Let $X$ be a continuous r.v. with

$$f(x) = \frac{x^2}{3}, \qquad -1 < x < 2$$

If $Y = X^2$, find $f_Y(y)$.

Solution:
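A sketch, taking the support as $-1 < x < 2$ (needed so that $f$ integrates to 1). Since $g(x) = x^2$ is not monotone over $(-1, 2)$, split at $y = 1$ and account for both roots $x = \pm\sqrt{y}$ where they lie in the support:

For $0 < y < 1$, both roots lie in the support:
$$f_Y(y) = \big[f(\sqrt{y}) + f(-\sqrt{y})\big]\frac{1}{2\sqrt{y}} = \left(\frac{y}{3} + \frac{y}{3}\right)\frac{1}{2\sqrt{y}} = \frac{\sqrt{y}}{3}$$

For $1 < y < 4$, only the root $\sqrt{y}$ lies in the support:
$$f_Y(y) = f(\sqrt{y})\,\frac{1}{2\sqrt{y}} = \frac{y}{3}\cdot\frac{1}{2\sqrt{y}} = \frac{\sqrt{y}}{6}$$

As a check, $\int_0^1 \frac{\sqrt{y}}{3}\,dy + \int_1^4 \frac{\sqrt{y}}{6}\,dy = \frac{2}{9} + \frac{7}{9} = 1$.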
EXAMPLE
$$f(x) = 2x, \qquad 0 < x < 1$$
Find the pdf of $Y = -4X + 3$.

Solution:
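A sketch: $g(x) = -4x + 3$ is strictly decreasing, with $g^{-1}(y) = \dfrac{3-y}{4}$ and $\dfrac{d\,g^{-1}(y)}{dy} = -\dfrac{1}{4}$. Applying the theorem,

$$f_Y(y) = f_X\!\left(\frac{3-y}{4}\right)\left|-\frac{1}{4}\right| = 2\cdot\frac{3-y}{4}\cdot\frac{1}{4} = \frac{3-y}{8}, \qquad -1 < y < 3$$

(the support follows from $0 < x < 1 \Rightarrow y = 3 - 4x \in (-1, 3)$; a quick check confirms $\int_{-1}^{3}\frac{3-y}{8}\,dy = 1$).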
JOINT PROBABILITY DISTRIBUTIONS OF
FUNCTIONS OF RANDOM VARIABLES
Let $X_1$ and $X_2$ be jointly continuous random variables with joint pdf $f(x_1, x_2)$. Suppose $Y_1 = g_1(X_1, X_2)$ and $Y_2 = g_2(X_1, X_2)$ for some functions $g_1$, $g_2$.

Assume $g_1$, $g_2$ satisfy the following:

The equations $y_1 = g_1(x_1, x_2)$ and $y_2 = g_2(x_1, x_2)$ can be uniquely solved for $x_1$ and $x_2$ in terms of $y_1$ and $y_2$, with solutions given by $x_1 = h_1(y_1, y_2)$ and $x_2 = h_2(y_1, y_2)$.

$g_1$ and $g_2$ have continuous partial derivatives at all points $(x_1, x_2)$, such that the determinant

$$J(x_1, x_2) = \begin{vmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\[2mm] \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{vmatrix} \ne 0.$$

Under these conditions, the random variables $Y_1$ and $Y_2$ are jointly continuous with joint pdf

$$f_Y(y_1, y_2) = f_X\big(h_1(y_1, y_2),\; h_2(y_1, y_2)\big)\,\big|J(x_1, x_2)\big|$$

$J(x_1, x_2)$, or simply $J$, is the Jacobian of the transformation.


EXAMPLE
Let $X_1$ and $X_2$ be jointly continuous random variables with pdf
$$f(x_1, x_2) = 2x_1; \qquad 0 \le x_1, x_2 \le 1.$$
Find the density function of $Y = X_1 + X_2$.

Solution:
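A sketch: introduce the auxiliary variable $Y_2 = X_2$ alongside $Y_1 = X_1 + X_2$. The inverse transformation is $x_1 = y_1 - y_2$, $x_2 = y_2$, with Jacobian

$$J = \begin{vmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\[2mm] \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{vmatrix} = \begin{vmatrix} 1 & -1 \\ 0 & 1 \end{vmatrix} = 1,$$

so the joint pdf of $(Y_1, Y_2)$ is

$$f(y_1, y_2) = 2(y_1 - y_2), \qquad 0 \le y_1 - y_2 \le 1,\; 0 \le y_2 \le 1.$$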
Transformation region
To determine the marginal density $f_1(y_1)$, two regions are considered:

1) when $0 \le y_1 \le 1$
2) when $1 \le y_1 \le 2$
Hence, the transformation region in the $(y_1, y_2)$-plane becomes $\{(y_1, y_2) : 0 \le y_1 - y_2 \le 1,\; 0 \le y_2 \le 1\}$.


To find the pdf of $Y_1$, integrate the joint density over $y_2$ in each region, as sketched below.
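A sketch of the two integrations:

For $0 \le y_1 \le 1$ ($y_2$ ranges over $0 \le y_2 \le y_1$):
$$f_1(y_1) = \int_0^{y_1} 2(y_1 - y_2)\,dy_2 = \big[2y_1 y_2 - y_2^2\big]_0^{y_1} = y_1^2$$

For $1 \le y_1 \le 2$ ($y_2$ ranges over $y_1 - 1 \le y_2 \le 1$):
$$f_1(y_1) = \int_{y_1-1}^{1} 2(y_1 - y_2)\,dy_2 = (2y_1 - 1) - (y_1^2 - 1) = y_1(2 - y_1)$$

A quick check: $\int_0^1 y_1^2\,dy_1 + \int_1^2 y_1(2 - y_1)\,dy_1 = \tfrac{1}{3} + \tfrac{2}{3} = 1$.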

EXAMPLE
$X_1$ and $X_2$ are random variables with joint pdf
$$f(x_1, x_2) = e^{-(x_1 + x_2)}; \qquad 0 \le x_1, x_2 < \infty.$$
Determine the pdf of $X_1 + X_2$.
Solution:
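A sketch, taking $Y_1 = X_1 + X_2$ with auxiliary variable $Y_2 = X_1 - X_2$ (an assumption, but it matches the two regions for $y_2$ considered below). The inverse transformation is $x_1 = \frac{y_1 + y_2}{2}$, $x_2 = \frac{y_1 - y_2}{2}$, with Jacobian

$$J = \begin{vmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\[1mm] \tfrac{1}{2} & -\tfrac{1}{2} \end{vmatrix} = -\tfrac{1}{2},$$

so

$$f(y_1, y_2) = e^{-y_1}\left|-\tfrac{1}{2}\right| = \tfrac{1}{2}e^{-y_1}, \qquad y_1 \ge 0,\; |y_2| \le y_1.$$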
The pdf of $X_1 + X_2$ is the marginal density of $Y_1$:
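$$f_1(y_1) = \int_{-y_1}^{y_1} \tfrac{1}{2}e^{-y_1}\,dy_2 = y_1 e^{-y_1}, \qquad y_1 \ge 0,$$

which is a Gamma density with shape 2 and rate 1.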

The marginal pdf of $Y_2$ is obtained by integrating out $y_1$ over two regions:

1) when $0 \le y_2 < \infty$

2) when $-\infty < y_2 \le 0$
So, the marginal pdf of Y2 is
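A sketch of the two cases (the constraint $|y_2| \le y_1$ forces $y_1 \ge |y_2|$ in both):

$$f_2(y_2) = \int_{|y_2|}^{\infty} \tfrac{1}{2}e^{-y_1}\,dy_1 = \tfrac{1}{2}e^{-|y_2|}, \qquad -\infty < y_2 < \infty,$$

i.e. $\tfrac{1}{2}e^{-y_2}$ for $y_2 \ge 0$ and $\tfrac{1}{2}e^{y_2}$ for $y_2 \le 0$ (a standard Laplace density).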

OTHER METHODS OF
DETERMINING DENSITIES OF
FUNCTIONS OF RANDOM
VARIABLES
The Moment Generating Function (MGF) Method

Theorem (Uniqueness)
Let $M_X(t)$ and $M_Y(t)$ be the mgfs of $X$ and $Y$. If

$$M_X(t) = M_Y(t)$$

for all $t$, then $X$ and $Y$ are identically distributed.
EXAMPLE
$X_i$, $i = 1, 2, \dots, n$ are independent exponential random variables,
$$f(x_i) = \lambda e^{-\lambda x_i}, \qquad x_i > 0,\; i = 1, \dots, n$$
Determine the distribution of $Z = \sum_{i=1}^{n} X_i$.

Solution:
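A sketch using the MGF method: for an exponential variable, $M_{X_i}(t) = \dfrac{\lambda}{\lambda - t}$ for $t < \lambda$. By independence,

$$M_Z(t) = \prod_{i=1}^{n} M_{X_i}(t) = \left(\frac{\lambda}{\lambda - t}\right)^n,$$

which is the mgf of a Gamma distribution with shape $n$ and rate $\lambda$ (written $\Gamma(\lambda, n)$ in these slides). By the uniqueness theorem, $Z \sim \Gamma(\lambda, n)$.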
EXAMPLE
$X_i$, $i = 1, 2, \dots, n$ are independent and normally distributed with $E(X_i) = \mu_i$ and $Var(X_i) = \sigma_i^2$. Determine the distribution of $Z = \sum_{i=1}^{n} a_i X_i$.

Solution:
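A sketch using the MGF method: $M_{X_i}(t) = \exp\!\left(\mu_i t + \tfrac{1}{2}\sigma_i^2 t^2\right)$, so $M_{a_i X_i}(t) = M_{X_i}(a_i t) = \exp\!\left(a_i\mu_i t + \tfrac{1}{2}a_i^2\sigma_i^2 t^2\right)$. By independence,

$$M_Z(t) = \prod_{i=1}^{n} \exp\!\left(a_i\mu_i t + \tfrac{1}{2}a_i^2\sigma_i^2 t^2\right) = \exp\!\left(\mu t + \tfrac{1}{2}\sigma^2 t^2\right),$$

with $\mu = \sum_{i=1}^{n} a_i\mu_i$ and $\sigma^2 = \sum_{i=1}^{n} a_i^2\sigma_i^2$. By the uniqueness theorem, $Z \sim N(\mu, \sigma^2)$.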
Some Transformation Distributions

1. $X_i \sim$ exponential$(\lambda)$ and independent:
$$Z = \sum_{i=1}^{n} X_i \sim \Gamma(\lambda, n)$$

2. $X_i \sim N(\mu_i, \sigma_i^2)$ and independent:
$$Z = \sum_{i=1}^{n} a_i X_i \sim N(\mu, \sigma^2)$$
where $\mu = \sum_{i=1}^{n} a_i\mu_i$ and $\sigma^2 = \sum_{i=1}^{n} a_i^2\sigma_i^2$.

A quick simulation check of both facts appears below.
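As a numerical sanity check (not part of the original slides), the Python sketch below simulates both results and compares sample moments against the theoretical values; the sample sizes and parameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, lam = 5, 100_000, 2.0

# Fact 1: the sum of n iid Exponential(rate lam) variables is Gamma with
# shape n and rate lam, so it has mean n/lam and variance n/lam^2.
z = rng.exponential(scale=1 / lam, size=(reps, n)).sum(axis=1)
print("Gamma check: mean", z.mean(), "vs", n / lam,
      "| var", z.var(), "vs", n / lam**2)

# Fact 2: a linear combination of independent normals is normal with
# mean sum(a_i * mu_i) and variance sum(a_i^2 * sigma_i^2).
a = np.array([1.0, -2.0, 0.5, 3.0, 1.5])
mu = np.array([0.0, 1.0, -1.0, 2.0, 0.5])
sigma = np.array([1.0, 0.5, 2.0, 1.0, 1.5])
x = rng.normal(loc=mu, scale=sigma, size=(reps, n))
z2 = (a * x).sum(axis=1)
print("Normal check: mean", z2.mean(), "vs", (a * mu).sum(),
      "| var", z2.var(), "vs", (a**2 * sigma**2).sum())
```

With 100,000 replications the sample mean and variance should agree with the theoretical values to roughly two decimal places.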


Some Transformation Distributions (continued)

3. $Z_i \sim N(0, 1)$ and independent:
$$W = \sum_{i=1}^{n} Z_i^2 \sim \chi_n^2$$

PROOF: $Z_i \sim N(0,1)$, so

$$M_{Z^2}(t) = E\big(e^{tZ^2}\big) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{tz^2}\, e^{-\frac{1}{2}z^2}\,dz = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-\frac{1}{2}(1-2t)z^2}\,dz$$

Let $u^2 = (1-2t)z^2$, i.e. $u = \sqrt{1-2t}\,z$, so $dz = \dfrac{du}{\sqrt{1-2t}}$ (valid for $t < \tfrac{1}{2}$):

$$M_{Z^2}(t) = \frac{1}{\sqrt{1-2t}}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}u^2}\,du = \frac{1}{\sqrt{1-2t}} = \left(\frac{1}{1-2t}\right)^{1/2}$$

since the remaining integrand is the $N(0,1)$ density, which integrates to 1.

For $W = \sum_{i=1}^{n} Z_i^2$, where the $Z_i$ are independent,

$$M_W(t) = \prod_{i=1}^{n} M_{Z_i^2}(t) = \left(\frac{1}{1-2t}\right)^{n/2},$$

which is the mgf of $\chi_n^2$. By the uniqueness theorem, $W \sim \chi_n^2$.
4. $Z \sim N(0,1)$ and $W \sim \chi_\nu^2$ are independent random variables; then

$$T = \frac{Z}{\sqrt{W/\nu}} \sim t_\nu$$
Proof: Show in assignment/SCL 2
SUMMARY OF CHAPTER 7
By the end of this chapter students should be familiar with the following:

• Transformations of univariate random variables, discrete and continuous
• Transformations of multivariate random variables, discrete and continuous
• Methods of determining marginal densities of functions of random variables
