ST 1010 2020 Week 09-13 Transformations

ST 1010: Statistical Theory (2C,30L) DST @UOC

Transformations
Function of Random Variables
Here we will consider ways of finding the probability densities of functions of one or more random variables. That is, given a set of random variables X1, X2, X3, ..., Xn and their joint probability distribution function or density f_{X1,X2,X3,...,Xn}(·, ·, ·, ..., ·), we shall be interested in finding the probability density of some random variable Y = u(X1, X2, X3, ..., Xn). There are three main methods available for this purpose.
1. Distribution Function Technique
2. Transformation Technique
3. Moment-Generating Function Technique

Distribution Function Technique (See pages 1-4)


A straightforward method of obtaining the probability density of a function of continuous random variables consists of first finding its cumulative distribution function and then its probability density by differentiation. Suppose that X1, X2, X3, ..., Xn are continuous random variables with a given joint probability density; then use the following three steps to obtain the probability density function of Y.
Step 1: Determine an expression for the cumulative distribution function of Y, that is, find
G(y) = P[Y ≤ y] = P[u(X1, X2, X3, ..., Xn) ≤ y]
Step 2: Differentiate G(y) with respect to y to get g_Y(y), that is, find
g_Y(y) = dG(y)/dy
Step 3: Find the correct range for y.

Example 01: If the probability density of X is given by


f(x) = { 6x(1 − x)  for 0 < x < 1
       { 0          otherwise
Find the probability density of Y = X³.

Answer 01:
G(y) = P(Y ≤ y) = P(X³ ≤ y) = P(X ≤ y^(1/3)) = ∫₀^(y^(1/3)) 6x(1 − x) dx = 3y^(2/3) − 2y
Then g(y) = G′(y) = 2(y^(−1/3) − 1).
Therefore, the probability density function of Y is
g(y) = { 2(y^(−1/3) − 1)  for 0 < y < 1
       { 0                otherwise
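A quick way to sanity-check this result is by simulation. A minimal Python/NumPy sketch (it uses the observation, not stated in the notes, that f(x) = 6x(1 − x) is the Beta(2, 2) density, so X can be sampled directly):

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x) = 6x(1 - x) on (0, 1) is the Beta(2, 2) density, so sample X directly.
x = rng.beta(2.0, 2.0, size=200_000)
y = x ** 3

# Compare the empirical CDF of Y with the derived G(y) = 3y^(2/3) - 2y.
for q in (0.1, 0.3, 0.5, 0.7, 0.9):
    exact = 3 * q ** (2 / 3) - 2 * q
    print(f"P(Y <= {q}): empirical {np.mean(y <= q):.4f}, exact {exact:.4f}")
```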

Example 02: If Y = |X|, show that
g(y) = { f(y) + f(−y)  for y > 0
       { 0             otherwise
where f(x) and g(y) are the probability density functions of X and Y respectively.
Answer 02: For y > 0 we have
G(y) = P(Y ≤ y) = P(|X| ≤ y) = P(−y ≤ X ≤ y) = F(y) − F(−y)
Then
g(y) = f(y) + f(−y)
Since y = |x| cannot be negative,
g(y) = 0 for y ≤ 0.
Therefore, the probability density function of Y is
g(y) = { f(y) + f(−y)  for y > 0
       { 0             otherwise

Example 03: For the random variables X1 and X2 the joint probability density function f_{X1,X2}(x1, x2) is given as follows.
f_{X1,X2}(x1, x2) = { 6e^(−3x1 − 2x2)  for x1 > 0, x2 > 0
                   { 0                otherwise
Find the probability density of Y = X1 + X2.

Answer 03:
G(y) = P[Y ≤ y] = P[X1 + X2 ≤ y] = ∫₀^y ∫₀^(y−x2) 6e^(−3x1 − 2x2) dx1 dx2 = 1 − 3e^(−2y) + 2e^(−3y)
Then
g_Y(y) = 6(e^(−2y) − e^(−3y))
Therefore the probability density function of Y is
g_Y(y) = { 6(e^(−2y) − e^(−3y))  for y > 0
         { 0                     otherwise
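A short simulation can confirm the algebra. The sketch below assumes the factorization of f(x1, x2) into independent exponential components, which follows from the product form of the density:

```python
import numpy as np

rng = np.random.default_rng(1)

# f(x1, x2) = 6e^(-3x1 - 2x2) factors as (3e^(-3x1)) x (2e^(-2x2)), so
# X1 ~ Exp(rate 3) and X2 ~ Exp(rate 2) are independent and easy to sample.
x1 = rng.exponential(scale=1/3, size=200_000)
x2 = rng.exponential(scale=1/2, size=200_000)
y = x1 + x2

# Derived CDF: G(y) = 1 - 3e^(-2y) + 2e^(-3y).
for q in (0.2, 0.5, 1.0, 2.0):
    exact = 1 - 3 * np.exp(-2 * q) + 2 * np.exp(-3 * q)
    print(f"P(Y <= {q}): empirical {np.mean(y <= q):.4f}, exact {exact:.4f}")
```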


Transformation Technique- One Variable


Here the probability density function of a function of a random variable is obtained without first finding its cumulative distribution function.

Discrete case
In the discrete case there is no real problem as long as the relationship between the values of X and Y = u(X) is one-to-one; all we have to do is make the appropriate substitution.

Example 04: If X is the number of heads obtained in four tosses of a balanced coin,
i. find the probability density function of Y = 1/(1 + X);
ii. find the probability density function of Z = (X − 2)².
Answer 04:
i. Method 1: Since X, the number of heads, follows Bin(n = 4, p = 0.5), we can find the corresponding probabilities.

x:    0     1     2     3     4
f(x): 1/16  4/16  6/16  4/16  1/16

Using the relationship y = 1/(1 + x):

x:    0     1     2     3     4
y:    1     1/2   1/3   1/4   1/5
f(y): 1/16  4/16  6/16  4/16  1/16

Method 2: By substituting x = (1/y) − 1 in the binomial distribution with n = 4 and p = 0.5,
g(y) = f(1/y − 1) = C(4, 1/y − 1)(0.5)⁴  for y = 1, 1/2, 1/3, 1/4, 1/5 (corresponding to x = 0, 1, 2, 3, 4).
ii. Z = (X − 2)² is not one-to-one, so we add the probabilities of the x-values that map to the same z. Calculating the probabilities h(z) associated with the various values of Z, we get

z:    0     1     4
h(z): 6/16  8/16  2/16
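The same bookkeeping is easy to express in code. A small Python sketch using exact arithmetic (the variable names are ours):

```python
from fractions import Fraction
from math import comb

# pmf of X ~ Bin(4, 1/2)
f = {x: Fraction(comb(4, x), 16) for x in range(5)}

# (i) Y = 1/(1 + X) is one-to-one in x, so each probability carries over unchanged.
g = {Fraction(1, 1 + x): p for x, p in f.items()}

# (ii) Z = (X - 2)^2 is NOT one-to-one (x = 1 and x = 3 both give z = 1),
# so probabilities of x-values mapping to the same z must be added.
h = {}
for x, p in f.items():
    z = (x - 2) ** 2
    h[z] = h.get(z, Fraction(0)) + p

print({str(y): str(p) for y, p in g.items()})  # 1: 1/16, 1/2: 1/4 (= 4/16), ...
print({z: str(p) for z, p in h.items()})       # 4: 1/8, 1: 1/2, 0: 3/8
```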

Continuous case
Here we assume that the function y = u(x) is differentiable and either increasing or decreasing for all values within the range of X for which f(x) ≠ 0, so that the inverse function, given by x = w(y), exists for all the corresponding values of y and is differentiable except where u′(x) = 0.

Theorem: Let f(x) be the value of the probability density of the continuous random variable X at x. If the function given by y = u(x) is differentiable and either increasing or decreasing for all values within the range of X for which f(x) ≠ 0, then for these values of x the equation y = u(x) can be uniquely solved for x to give x = w(y), and for the corresponding values of y the probability density of Y = u(X) is given by
g(y) = f(w(y)) · |w′(y)|   provided u′(x) ≠ 0; elsewhere g(y) = 0.

Proof: See pages 1-4

Example 05: If X has the exponential distribution given by
f(x) = { e^(−x)  for x > 0
       { 0       otherwise
find the probability density of the random variable Y = √X.


Answer 05: Since x = y² = w(y), we have w′(y) = dx/dy = 2y, and therefore
g(y) = e^(−y²) · 2y = 2y e^(−y²)  for y > 0.

Two diagrams (not reproduced here) illustrate what happens when we transform from X to Y.
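A brief simulation check of the derived density (Python/NumPy; the CDF form 1 − e^(−q²) follows from P(Y ≤ q) = P(X ≤ q²)):

```python
import numpy as np

rng = np.random.default_rng(2)

# X ~ Exp(1); Y = sqrt(X) has the derived density g(y) = 2y e^(-y^2),
# equivalently P(Y <= q) = P(X <= q^2) = 1 - e^(-q^2).
x = rng.exponential(1.0, size=200_000)
y = np.sqrt(x)

for q in (0.5, 1.0, 1.5, 2.0):
    print(f"P(Y <= {q}): empirical {np.mean(y <= q):.4f}, "
          f"exact {1 - np.exp(-q * q):.4f}")
```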

Example 06: If F(x) is the value of the cumulative distribution function of the continuous random variable X at x, find the probability density of Y = F(X).

Answer 06:
Differentiating y = F(x) with respect to x, we get
dy/dx = F′(x) = f(x)
and hence, provided that f(x) ≠ 0,
dx/dy = 1/(dy/dx) = 1/f(x).
Hence
g(y) = f(x) · |dx/dy| = f(x) · 1/f(x) = 1  for 0 < y < 1.
That is, Y = F(X) has the Uniform(0, 1) distribution.
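This is the probability integral transform, and it can be checked with any continuous distribution. A minimal sketch using X ~ Exp(1), whose CDF is F(x) = 1 − e^(−x) (the choice of distribution is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Probability integral transform: for continuous X, Y = F(X) ~ Uniform(0, 1).
# Illustrated with X ~ Exp(1), whose CDF is F(x) = 1 - e^(-x).
x = rng.exponential(1.0, size=100_000)
y = 1 - np.exp(-x)

for q in (0.25, 0.5, 0.75):
    print(f"P(Y <= {q}) ~= {np.mean(y <= q):.4f}  (exact {q})")
```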


Transformation Technique- Several Variables


Methods of the transformation technique for one variable can also be used to find the distribution of a random variable that is a function of two or more random variables. Suppose that you are given the joint distribution of two random variables X1 and X2 and that you want to determine the probability distribution of the random variable Y = u(X1, X2).
If the relationship between y and x1 with x2 held constant, or the relationship between y and x2 with x1 held constant, is one-to-one, we can proceed as in the discrete case given earlier to find the joint distribution of Y and X2, or that of X1 and Y, and then sum over the values of the other random variable to get the marginal distribution of Y.
In the continuous case, we first use the previous theorem with the transformation written as
g(y, x2) = f(x1, x2) |∂x1/∂y|   or as   g(x1, y) = f(x1, x2) |∂x2/∂y|.
Here f(x1, x2) and the partial derivative must be expressed in terms of "y and x2" or "x1 and y". Then we integrate out the other random variable to get the marginal density of Y.

Example 07: If X1 and X2 are independent random variables having Poisson distributions with parameters λ1 and λ2, find the probability distribution function of the random variable Y = X1 + X2.

Answer 07: Since X1 and X2 are independent, the joint distribution of X1 and X2 is given by

f_{X1,X2}(x1, x2) = f_{X1}(x1) × f_{X2}(x2)
                 = [e^(−λ1) λ1^(x1) / x1!] × [e^(−λ2) λ2^(x2) / x2!]
                 = e^(−(λ1+λ2)) λ1^(x1) λ2^(x2) / (x1! x2!)   for x1, x2 = 0, 1, 2, 3, ...

Since y = x1 + x2 and hence x1 = y − x2, we can substitute y − x2 for x1 to get the joint distribution of Y and X2:

g(y, x2) = e^(−(λ1+λ2)) λ1^(y−x2) λ2^(x2) / (x2!(y − x2)!)   for y = 0, 1, 2, 3, ... and x2 = 0, 1, ..., y

Then, summing over x2 from 0 to y, we get

h(y) = Σ_{x2=0}^{y} e^(−(λ1+λ2)) λ1^(y−x2) λ2^(x2) / (x2!(y − x2)!)
     = [e^(−(λ1+λ2)) / y!] Σ_{x2=0}^{y} [y! / (x2!(y − x2)!)] λ1^(y−x2) λ2^(x2).

The sum is the binomial expansion of (λ1 + λ2)^y. Hence we have

h(y) = e^(−(λ1+λ2)) (λ1 + λ2)^y / y!   for y = 0, 1, 2, ...

This is the Poisson distribution with parameter λ = λ1 + λ2.
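A simulation check of this additivity result (Python/NumPy; the parameter values 1.5 and 2.5 are arbitrary choices of ours):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(4)
lam1, lam2 = 1.5, 2.5

# Independent Poisson samples and their sum.
x1 = rng.poisson(lam1, size=200_000)
x2 = rng.poisson(lam2, size=200_000)
y = x1 + x2

# Y should be Poisson(lam1 + lam2); compare a few pmf values.
lam = lam1 + lam2
for k in range(6):
    exact = exp(-lam) * lam ** k / factorial(k)
    print(f"P(Y = {k}): empirical {np.mean(y == k):.4f}, exact {exact:.4f}")
```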

Example 08: Let X1 and X2 be two independent random variables with X1 ~ Bin(n1, p) and X2 ~ Bin(n2, p). Find the probability distribution function of the random variables Y1 = X1 + X2 and Y2 = X2.
Answer 08: Make sure that the transformation from X to Y is one-to-one.
Here X1 = Y1 − Y2 and X2 = Y2, so
P[Y1 = y1, Y2 = y2] = P[X1 + X2 = y1, X2 = y2]
                    = P[X1 = y1 − y2, X2 = y2]
                    = P[X1 = y1 − y2] × P[X2 = y2]
                    = C(n1, y1 − y2) p^(y1−y2) q^(n1−(y1−y2)) × C(n2, y2) p^(y2) q^(n2−y2)
                    = C(n1, y1 − y2) C(n2, y2) p^(y1) q^(n1+n2−y1)
for y1 = 0, 1, 2, 3, ..., (n1 + n2) and y2 = 0, 1, 2, 3, ..., y1.
This is the joint probability density function of Y1 and Y2. Then, summing over y2 from 0 to y1, we get the probability density function of Y1:
P[Y1 = y1] = Σ_{y2=0}^{y1} C(n1, y1 − y2) C(n2, y2) p^(y1) q^(n1+n2−y1)
           = p^(y1) q^(n1+n2−y1) Σ_{y2=0}^{y1} C(n1, y1 − y2) C(n2, y2)
           = p^(y1) q^(n1+n2−y1) C(n1 + n2, y1)     (by Vandermonde's identity)
           = C(n1 + n2, y1) p^(y1) q^(n1+n2−y1)
⇒ Y1 ~ Bin(n1 + n2, p)
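The summation step can be checked exactly in code. A small Python sketch (the function name convolve_binomials is ours):

```python
from math import comb

def convolve_binomials(n1, n2, p):
    """pmf of Y1 = X1 + X2, with X1 ~ Bin(n1, p) and X2 ~ Bin(n2, p) independent,
    obtained by summing the joint pmf over y2 as in Example 08.
    (math.comb(n, k) returns 0 when k > n, matching the convention used above.)"""
    q = 1.0 - p
    return [sum(comb(n1, y1 - y2) * comb(n2, y2) for y2 in range(y1 + 1))
            * p ** y1 * q ** (n1 + n2 - y1)
            for y1 in range(n1 + n2 + 1)]

# Vandermonde's identity collapses the inner sum to C(n1+n2, y1), so the
# result should match Bin(n1 + n2, p) exactly.
n1, n2, p = 3, 5, 0.4
direct = [comb(n1 + n2, y) * p ** y * (1 - p) ** (n1 + n2 - y)
          for y in range(n1 + n2 + 1)]
print(all(abs(a - b) < 1e-12
          for a, b in zip(convolve_binomials(n1, n2, p), direct)))  # True
```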

Example 09: Let X1 and X2 be independent with X1, X2 ~ Bin(n, p). Find the probability density function of Y = X1 + X2.

Answer 09:
f_Y(y) = P[Y = y] = P[X1 + X2 = y] = Σ_{k=0}^{y} P[X1 = k] P[X2 = y − k]
       = Σ_{k=0}^{y} C(n, k) p^k q^(n−k) × C(n, y − k) p^(y−k) q^(n−(y−k))
       = Σ_{k=0}^{y} C(n, k) C(n, y − k) p^y q^(2n−y)
       = p^y q^(2n−y) Σ_{k=0}^{y} C(n, k) C(n, y − k)
       = p^y q^(2n−y) C(2n, y)
       = C(2n, y) p^y q^(2n−y)   for y = 0, 1, 2, 3, ..., 2n
⇒ Y = X1 + X2 ~ Bin(2n, p)

Example 10: If the joint probability density of X1 and X2 is given by
f(x1, x2) = { e^(−(x1+x2))  for x1 > 0, x2 > 0
            { 0             otherwise
find the probability density of Y = X1/(X1 + X2).
Answer 10: Since y decreases as x2 increases with x1 held constant, we can find the joint density of X1 and Y.
Since y = x1/(x1 + x2), we have x2 = x1(1 − y)/y and hence ∂x2/∂y = −x1/y², and it follows that
g(x1, y) = f(x1, x2) |∂x2/∂y| = e^(−x1/y) × |−x1/y²| = (x1/y²) e^(−x1/y)   for x1 > 0 and 0 < y < 1.
Finally, integrating out x1 and changing the variable of integration to u = x1/y, we get
h(y) = ∫₀^∞ (x1/y²) e^(−x1/y) dx1 = ∫₀^∞ u e^(−u) du = Γ(2) = 1   for 0 < y < 1.
Therefore
h(y) = { 1  for 0 < y < 1
       { 0  otherwise

Thus the random variable Y has the Uniform distribution with a = 0 and b = 1 (that is, on the interval (0, 1)).
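A short simulation confirms the uniformity (Python/NumPy; it relies on the density e^(−(x1+x2)) factoring into two independent Exp(1) components):

```python
import numpy as np

rng = np.random.default_rng(5)

# f(x1, x2) = e^(-(x1+x2)) factors into two independent Exp(1) components.
x1 = rng.exponential(1.0, size=200_000)
x2 = rng.exponential(1.0, size=200_000)
y = x1 / (x1 + x2)

# Derived result: Y ~ Uniform(0, 1), so P(Y <= q) should be close to q.
for q in (0.2, 0.5, 0.8):
    print(f"P(Y <= {q}) ~= {np.mean(y <= q):.4f}")
```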
Theorem: Let f(x1, x2) be the value of the joint probability density of the continuous random variables X1 and X2 at (x1, x2). If the functions given by y1 = u1(x1, x2) and y2 = u2(x1, x2) are partially differentiable with respect to both x1 and x2, and represent a one-to-one transformation for all values within the range of X1 and X2 for which f(x1, x2) ≠ 0, then, for these values of x1 and x2, the equations y1 = u1(x1, x2) and y2 = u2(x1, x2) can be uniquely solved for x1 and x2 to give x1 = w1(y1, y2) and x2 = w2(y1, y2), and for the corresponding values of y1 and y2 the joint probability density of Y1 = u1(X1, X2) and Y2 = u2(X1, X2) is given by

g(y1, y2) = f[w1(y1, y2), w2(y1, y2)] |J|;   elsewhere g(y1, y2) = 0.

Here J, called the Jacobian of the transformation, is the determinant

J = | ∂x1/∂y1  ∂x1/∂y2 |
    | ∂x2/∂y1  ∂x2/∂y2 |

Note
• This theorem can be easily generalized to functions of three or more random variables.
• If we are given the joint probability density of three random variables X1, X2, and X3 and we want to find the joint probability density of the random variables Y1 = u1(X1, X2, X3), Y2 = u2(X1, X2, X3), and Y3 = u3(X1, X2, X3), the general approach is the same, but the Jacobian is now the 3×3 determinant

J = | ∂x1/∂y1  ∂x1/∂y2  ∂x1/∂y3 |
    | ∂x2/∂y1  ∂x2/∂y2  ∂x2/∂y3 |
    | ∂x3/∂y1  ∂x3/∂y2  ∂x3/∂y3 |

• Once we have determined the joint probability density of the three new random variables, we can find the marginal density of any two of the random variables, or of any one, by integration.
• More general case: If X = (X1, X2, ..., Xn)ᵀ is a random vector of the continuous type with probability density function f_X(x) and Y = (Y1, Y2, ..., Yn)ᵀ = g(X), where g is a one-to-one function of X, then the joint probability density function of Y is given by

f_Y(y) = f_X[x(y)] |J|

where J is the n×n determinant with (i, j) entry ∂xi/∂yj:

J = | ∂x1/∂y1  ∂x1/∂y2  ...  ∂x1/∂yn |
    | ∂x2/∂y1  ∂x2/∂y2  ...  ∂x2/∂yn |
    |   ...      ...     ...    ...   |
    | ∂xn/∂y1  ∂xn/∂y2  ...  ∂xn/∂yn |

Example 11: If the joint probability density of X1 and X2 is given by
f(x1, x2) = { e^(−(x1+x2))  for x1 > 0, x2 > 0
            { 0             otherwise
find the joint probability density of Y1 = X1 + X2 and Y2 = X1/(X1 + X2), and the marginal density of Y2.

Answer 11: Solving y1 = x1 + x2 and y2 = x1/(x1 + x2) for x1 and x2, we get x1 = y1 y2 and x2 = y1(1 − y2), and it follows that

J = | ∂x1/∂y1  ∂x1/∂y2 | = |   y2      y1  | = −y1 y2 − y1(1 − y2) = −y1
    | ∂x2/∂y1  ∂x2/∂y2 |   | 1 − y2   −y1  |
The mapping of the region x1 > 0, x2 > 0 in the x1x2-plane onto the region y1 > 0 and 0 < y2 < 1 in the y1y2-plane is one-to-one.
Therefore
g(y1, y2) = { e^(−y1) |−y1| = y1 e^(−y1)  for y1 > 0 and 0 < y2 < 1
            { 0                           elsewhere
Integrating g(y1, y2) with respect to y1, we get
h(y2) = ∫₀^∞ g(y1, y2) dy1 = ∫₀^∞ y1 e^(−y1) dy1 = Γ(2) = 1   for 0 < y2 < 1; elsewhere h(y2) = 0.
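Since g(y1, y2) = y1 e^(−y1) does not involve y2, Y1 and Y2 are in fact independent, with Y1 ~ Gamma(2, 1) and Y2 ~ Uniform(0, 1). A quick numerical check (Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(6)

x1 = rng.exponential(1.0, size=200_000)
x2 = rng.exponential(1.0, size=200_000)
y1 = x1 + x2
y2 = x1 / (x1 + x2)

# g(y1, y2) = y1 e^(-y1) factors, so Y1 and Y2 are independent,
# with Y1 ~ Gamma(2, 1) and Y2 ~ Uniform(0, 1).
print(np.mean(y1), np.var(y1))    # Gamma(2, 1): mean 2, variance 2
print(np.mean(y2), np.var(y2))    # Uniform(0, 1): mean 0.5, variance 1/12 ~ 0.0833
print(np.corrcoef(y1, y2)[0, 1])  # near 0, consistent with independence
```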

Example 12: If the joint probability density of X1 and X2 is given by
f(x1, x2) = { 1  for 0 < x1 < 1, 0 < x2 < 1
            { 0  otherwise
find the joint probability density of Y = X1 + X2 and Z = X2, and the marginal density of Y.

Answer 12: Solving y = x1 + x2 and z = x2 for x1 and x2, we get x1 = y − z and x2 = z, and it follows that

J = | ∂(y − z)/∂y  ∂(y − z)/∂z | = | 1  −1 | = 1
    |   ∂z/∂y        ∂z/∂z     |   | 0   1 |

The mapping of the region 0 < x1 < 1, 0 < x2 < 1 in the x1x2-plane onto the region z < y < z + 1 and 0 < z < 1 in the yz-plane is one-to-one.
Therefore
g(y, z) = { 1 · |1| = 1  for z < y < z + 1 and 0 < z < 1
          { 0            elsewhere
Integrating g(y, z) with respect to z, we get

h(y) = { 0                         for y ≤ 0
       { ∫₀^y 1 dz = y             for 0 < y < 1
       { ∫_(y−1)^1 1 dz = 2 − y    for 1 ≤ y < 2
       { 0                         for y ≥ 2
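A simulation check of this triangular density (Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(7)

# Y = X1 + X2 with independent Uniform(0, 1) components.
y = rng.random(200_000) + rng.random(200_000)

# Derived triangular density: h(y) = y on (0, 1) and 2 - y on (1, 2),
# so the CDF is y^2/2 for y <= 1 and 1 - (2 - y)^2/2 for y >= 1.
for q in (0.5, 1.0, 1.5):
    exact = q * q / 2 if q <= 1 else 1 - (2 - q) ** 2 / 2
    print(f"P(Y <= {q}): empirical {np.mean(y <= q):.4f}, exact {exact:.4f}")
```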

Example 13: If the joint probability density of X1, X2 and X3 is given by
f(x1, x2, x3) = { e^(−(x1+x2+x3))  for x1 > 0, x2 > 0, x3 > 0
                { 0                otherwise
find the joint probability density of Y1 = X1 + X2 + X3, Y2 = X2, and Y3 = X3, and the marginal density of Y1.

Answer 13: Solving y1 = x1 + x2 + x3, y2 = x2 and y3 = x3 for x1, x2 and x3, we get x1 = y1 − y2 − y3, x2 = y2, and x3 = y3, and it follows that

J = | 1  −1  −1 |
    | 0   1   0 | = 1
    | 0   0   1 |

Since the transformation is one-to-one,
g(y1, y2, y3) = { e^(−y1) |1| = e^(−y1)  for y2 > 0, y3 > 0, y1 > y2 + y3
                { 0                      elsewhere
Integrating g(y1, y2, y3) with respect to y2 and y3, we get
h(y1) = { ∫₀^(y1) ∫₀^(y1−y3) e^(−y1) dy2 dy3 = (1/2) y1² e^(−y1)  for y1 > 0
        { 0                                                       otherwise
⇒ This is a Gamma distribution with α = 3 and β = 1.
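A quick check against the Gamma(3, 1) CDF, which is 1 − e^(−y)(1 + y + y²/2) (Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(8)

# Y1 = X1 + X2 + X3 with three independent Exp(1) components.
y1 = rng.exponential(1.0, size=(200_000, 3)).sum(axis=1)

# Derived marginal: Gamma(alpha = 3, beta = 1), whose CDF is
# 1 - e^(-y)(1 + y + y^2/2).
for q in (1.0, 3.0, 5.0):
    exact = 1 - np.exp(-q) * (1 + q + q * q / 2)
    print(f"P(Y1 <= {q}): empirical {np.mean(y1 <= q):.4f}, exact {exact:.4f}")
```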


Moment-Generating Function Technique


Moment-generating functions can play an important role in determining the probability distribution or density of a function of random variables when the function is a linear combination of n independent random variables.

Theorem: If X1, X2, X3, ..., Xn are independent random variables and Y = X1 + X2 + ... + Xn, then

M_Y(t) = Π_{i=1}^{n} M_{Xi}(t).

Here M_{Xi}(t) is the moment-generating function of Xi at t.

Proof: Since X1, X2, X3, ..., Xn are independent random variables, we have
f(x1, x2, x3, ..., xn) = f1(x1) f2(x2) f3(x3) ... fn(xn)
and

M_Y(t) = E(e^(Yt)) = E[e^((X1 + X2 + X3 + ... + Xn)t)]
       = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ... ∫_{−∞}^{∞} e^((x1 + x2 + x3 + ... + xn)t) f(x1, x2, x3, ..., xn) dx1 dx2 dx3 ... dxn
       = ∫_{−∞}^{∞} e^(x1 t) f1(x1) dx1 × ∫_{−∞}^{∞} e^(x2 t) f2(x2) dx2 × ... × ∫_{−∞}^{∞} e^(xn t) fn(xn) dxn
       = Π_{i=1}^{n} M_{Xi}(t)
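The multiplicative property can be checked empirically with any independent pair. A minimal sketch (the Exp(1)/Poisson(2) choice is ours, and t is kept below 0.5 so the empirical MGF of the exponential part has finite variance):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500_000

# Independent X1 ~ Exp(1) and X2 ~ Poisson(2); Y = X1 + X2.
x1 = rng.exponential(1.0, size=n)
x2 = rng.poisson(2.0, size=n)
y = x1 + x2

# The theorem says M_Y(t) = M_X1(t) * M_X2(t); check with empirical MGFs.
for t in (0.1, 0.2, 0.3):
    lhs = np.mean(np.exp(t * y))
    rhs = np.mean(np.exp(t * x1)) * np.mean(np.exp(t * x2))
    print(f"t = {t}: empirical M_Y {lhs:.4f}, product of MGFs {rhs:.4f}")
```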

Example 14: Find the probability distribution of the sum of n independent random variables X1, X2, X3, ..., Xn having Poisson distributions with respective parameters λ1, λ2, λ3, ..., λn.

Answer 14: We can show that M_{Xi}(t) = e^(λi(e^t − 1)) and hence, for Y = X1 + X2 + ... + Xn, we obtain

M_Y(t) = Π_{i=1}^{n} e^(λi(e^t − 1)) = e^((λ1 + λ2 + λ3 + ... + λn)(e^t − 1)).

This is the MGF of the Poisson distribution with parameter λ1 + λ2 + ... + λn.

Example 15: If X1, X2, X3, ..., Xn are independent random variables having the exponential distribution with the same parameter θ, find the probability density of the random variable Y = X1 + X2 + ... + Xn.

Answer 15: We can show that M_{Xi}(t) = (1 − θt)^(−1) and hence, for Y = X1 + X2 + ... + Xn, we obtain

M_Y(t) = Π_{i=1}^{n} (1 − θt)^(−1) = (1 − θt)^(−n).

This is the MGF of a Gamma distribution with α = n and β = θ, so Y ~ Gamma(n, θ).
