
CONCEPTUAL TOOLS
By: Neil E. Cotter

PROBABILITY
LINEAR FUNCS OF RAND VARS

TOOL: Given the probability density function, fX(x), for X, the probability density function (pdf), fY(y), for Y = aX + b (a ≠ 0), is

$$f_Y(y) = \frac{1}{|a|}\, f_X\!\left(x = \frac{y - b}{a}\right).$$

Also, the mean and variance transform as follows:


$$\mu_Y = a\mu_X + b, \qquad \sigma_Y^2 = a^2 \sigma_X^2.$$
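To make the result concrete, here is a brief numerical sketch (an addition, not part of the original handout) that checks the mean and variance formulas by simulation; the normal distribution for X and the values a = −2, b = 3 are arbitrary assumptions chosen for illustration.

import numpy as np

# Illustrative Monte Carlo check of mu_Y = a*mu_X + b and sigma_Y^2 = a^2*sigma_X^2,
# assuming X ~ N(mu_X = 2, sigma_X = 1.5) and example values a = -2, b = 3.
rng = np.random.default_rng(0)
a, b = -2.0, 3.0
mu_X, sigma_X = 2.0, 1.5

x = rng.normal(mu_X, sigma_X, size=1_000_000)
y = a * x + b

print(y.mean(), a * mu_X + b)        # sample mean vs. a*mu_X + b = -1
print(y.var(), a**2 * sigma_X**2)    # sample variance vs. a^2*sigma_X^2 = 9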

PROOF: By definition, fY(y) is the derivative of the cumulative probability distribution function:

$$f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{d}{dy} P(Y \le y)$$

Making a direct substitution for Y, we have an expression that we can transform into a statement about the probability of X:

$$\frac{d}{dy} P(Y \le y) = \frac{d}{dy} P(aX + b \le y) =
\begin{cases}
\dfrac{d}{dy}\, P\!\left(X \le \dfrac{y - b}{a}\right) & a > 0 \\[2ex]
\dfrac{d}{dy}\, P\!\left(X \ge \dfrac{y - b}{a}\right) & a < 0
\end{cases}$$
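As a concrete illustration (an addition, not part of the original handout), take the arbitrary values a = −2 and b = 3, so that Y = −2X + 3. Then

$$P(Y \le y) = P(-2X + 3 \le y) = P\!\left(X \ge \frac{y - 3}{-2}\right),$$

where the inequality reverses because both sides are divided by the negative number a.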

The last expressions are statements about the cumulative distribution function of X.

$$f_Y(y) =
\begin{cases}
\dfrac{d}{dy}\, F_X\!\left(x = \dfrac{y - b}{a}\right) & a > 0 \\[2ex]
\dfrac{d}{dy}\left[1 - F_X\!\left(x = \dfrac{y - b}{a}\right)\right] & a < 0
\end{cases}$$

Using the chain rule from calculus, it is possible to write the above derivatives in terms of x:

$$f_Y(y) =
\begin{cases}
\dfrac{d}{dx}\, F_X\!\left(x = \dfrac{y - b}{a}\right) \bigg/ \dfrac{dy}{dx} & a > 0 \\[2ex]
\dfrac{d}{dx}\left[1 - F_X\!\left(x = \dfrac{y - b}{a}\right)\right] \bigg/ \dfrac{dy}{dx} & a < 0
\end{cases}$$



The derivatives in terms of x are probability density functions for X, and the derivative of y = ax + b is equal to a:

$$f_Y(y) =
\begin{cases}
f_X\!\left(x = \dfrac{y - b}{a}\right) \Big/ a & a > 0 \\[2ex]
-\,f_X\!\left(x = \dfrac{y - b}{a}\right) \Big/ a & a < 0
\end{cases}$$

This may be written more compactly as follows:

$$f_Y(y) = \frac{1}{|a|}\, f_X\!\left(x = \frac{y - b}{a}\right), \qquad a \ne 0$$
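As an added sanity check (not part of the original handout), the compact formula can be compared with the known fact that a linear function of a normal random variable is again normal; the distribution chosen for X and the example values a, b below are assumptions for illustration.

import numpy as np
from scipy.stats import norm

# Illustrative check: for X ~ N(mu_X, sigma_X^2) and Y = aX + b, the formula
# f_Y(y) = (1/|a|) f_X((y - b)/a) should equal the N(a*mu_X + b, (|a|*sigma_X)^2) pdf.
a, b = -2.0, 3.0                  # assumed example values
mu_X, sigma_X = 2.0, 1.5

y = np.linspace(-10.0, 10.0, 201)
f_Y_formula = norm.pdf((y - b) / a, loc=mu_X, scale=sigma_X) / abs(a)
f_Y_known = norm.pdf(y, loc=a * mu_X + b, scale=abs(a) * sigma_X)

print(np.allclose(f_Y_formula, f_Y_known))   # expected: True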

For the mean of Y, we write the integral formula:

$$\mu_Y = E(aX + b) = \int_{-\infty}^{\infty} (ax + b)\, f_X(x)\, dx$$

We rewrite the integral in two parts and exploit the property that the area under the pdf is equal to one:

$$\mu_Y = a \int_{-\infty}^{\infty} x\, f_X(x)\, dx + b \int_{-\infty}^{\infty} f_X(x)\, dx = a\mu_X + b$$
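To see the integral produce a number (an added illustration, not in the original), it can be evaluated numerically for an assumed exponential X with rate 0.5, so that mu_X = 2; the values a = −2, b = 3 are again arbitrary example choices.

from scipy.integrate import quad
from scipy.stats import expon

# Illustrative numerical evaluation of mu_Y = integral of (a*x + b) f_X(x) dx,
# assuming X ~ Exponential(rate = 0.5), i.e. mu_X = 2, with example a = -2, b = 3.
a, b = -2.0, 3.0
lam = 0.5
mu_X = 1.0 / lam

mu_Y, _ = quad(lambda x: (a * x + b) * expon.pdf(x, scale=1.0 / lam), 0.0, float("inf"))
print(mu_Y, a * mu_X + b)        # both approximately -1.0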

For the variance, we substitute for Y in the variance formula:

$$\sigma_Y^2 = E\!\left([Y - \mu_Y]^2\right) = E\!\left([aX + b - (a\mu_X + b)]^2\right)$$

or

$$\sigma_Y^2 = E\!\left(a^2 [X - \mu_X]^2\right) = a^2\, E\!\left([X - \mu_X]^2\right) = a^2 \sigma_X^2$$
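The variance result can also be confirmed symbolically; the sketch below (an addition, not part of the original handout, assuming an exponential X for which sigma_X^2 = 1/lam^2) uses sympy to check that var(aX + b) = a^2/lam^2.

import sympy as sp
from sympy.stats import Exponential, variance

# Illustrative symbolic check of sigma_Y^2 = a^2 * sigma_X^2,
# assuming X ~ Exponential(rate = lam), for which sigma_X^2 = 1/lam^2.
# a is declared positive only for simplicity; its sign does not matter because a appears squared.
a, b, lam = sp.symbols('a b lam', positive=True)
X = Exponential('X', lam)
Y = a * X + b

print(sp.simplify(variance(Y) - a**2 / lam**2))   # expected: 0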
