The Distribution of a Function of a Random Variable
Examples:
i. Let
f(x) = 1 for 0 < x < 1, and f(x) = 0 otherwise.
Find the distribution of Y = −log X.
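As a quick check (not part of the original notes): the standard calculation gives Pr(Y ≤ y) = Pr(X ≥ e^{−y}) = 1 − e^{−y}, i.e. Y is Exponential(1). Here is a minimal simulation sketch, assuming numpy is available, comparing the empirical CDF of Y with that answer.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=100_000)   # X ~ Uniform(0, 1)
y = -np.log(x)                  # Y = -log X

# Compare the empirical CDF of Y with the Exp(1) CDF, 1 - e^{-t}
for t in [0.5, 1.0, 2.0]:
    print(f"t={t}: empirical {np.mean(y <= t):.4f} vs exact {1 - np.exp(-t):.4f}")
```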
Sometimes we are interested not only in an individual variable but in two or more variables jointly.
To specify the relationship between two random variables, we define the joint cumulative distribution function of X and Y by
F(x, y) = Pr{X ≤ x, Y ≤ y}
The distribution of X can be obtained from the joint distribution:
F_X(x) = Pr{X ≤ x}
= Pr{X ≤ x, Y < ∞}
= lim_{y→∞} Pr{X ≤ x, Y ≤ y}
= F(x, ∞)
Similarly, we can obtain the distribution of Y.
All joint probability statements about X and Y can be answered by F(x, y).
Example: by inclusion-exclusion, Pr{X > a, Y > b} = 1 − F_X(a) − F_Y(b) + F(a, b).
If X and Y are both discrete, then we can define the joint probability mass function by
f(x, y) = Pr{X = x, Y = y}
For example, the following table gives a joint pmf:
        Y = 1   Y = 2   Y = 3   Y = 4
X = 1    0.08    0.10    0.04    0.02
X = 2    0.05    0.06    0.05    0.03
X = 3    0.04    0.04    0.06    0.06
X = 4    0.02    0.03    0.07    0.06
X = 5    0.01    0.02    0.08    0.08
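To make the table concrete, here is a small sketch (my addition, assuming numpy) that stores the table and recovers the marginal pmfs by summing across rows and columns:

```python
import numpy as np

# Joint pmf from the table: rows are X = 1..5, columns are Y = 1..4
pmf = np.array([
    [0.08, 0.10, 0.04, 0.02],
    [0.05, 0.06, 0.05, 0.03],
    [0.04, 0.04, 0.06, 0.06],
    [0.02, 0.03, 0.07, 0.06],
    [0.01, 0.02, 0.08, 0.08],
])

print(pmf.sum())        # 1.0: the entries form a valid joint pmf
print(pmf.sum(axis=1))  # marginal pmf of X (sum over y)
print(pmf.sum(axis=0))  # marginal pmf of Y (sum over x)
```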
Similarly, if X and Y are both continuous, then the joint probability density function f(x, y) is the function satisfying
Pr{X ∈ C, Y ∈ D} = ∫∫_{(x,y): x∈C, y∈D} f(x, y) dx dy
Since
F(a, b) = Pr{X ≤ a, Y ≤ b} = ∫_{−∞}^{a} ∫_{−∞}^{b} f(x, y) dy dx
Therefore,
f(a, b) = ∂²F(a, b) / ∂a ∂b
whenever the derivative exists.
If X and Y are jointly continuous, then they are individually continuous, and the probability density function of X is
f_X(x) = ∫_{−∞}^{∞} f(x, y) dy
and the density function of Y is
f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx
Example: Let X and Y be random variables with joint density function
f(x, y) = 1/2 for 0 ≤ x ≤ y ≤ 2, and f(x, y) = 0 otherwise.
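One can check that integrating y out gives the marginal f_X(x) = ∫_x^2 (1/2) dy = (2 − x)/2 on [0, 2]. The following simulation sketch (my addition, assuming numpy) samples from this density, which is uniform on the triangle, and compares the empirical marginal of X with that formula:

```python
import numpy as np

rng = np.random.default_rng(1)

# f(x, y) = 1/2 on the triangle {0 <= x <= y <= 2} (area 2), i.e. uniform there.
# Sample uniformly from the triangle by rejection from the square [0, 2]^2.
pts = rng.uniform(0, 2, size=(400_000, 2))
xs = pts[pts[:, 0] <= pts[:, 1], 0]

# Empirical marginal of X vs the exact f_X(x) = (2 - x)/2 on [0, 2]
hist, edges = np.histogram(xs, bins=10, range=(0, 2), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
for m, h in zip(mids, hist):
    print(f"x={m:.1f}: empirical {h:.3f} vs exact {(2 - m) / 2:.3f}")
```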
Conditional Distribution: Discrete Case:
Recall the definition of the conditional probability of E given F:
Pr(E|F) = Pr(E ∩ F) / Pr(F)
Accordingly, if X and Y are discrete, the conditional probability mass function of X given Y = y is
f(x|y) = Pr{X = x|Y = y} = f(x, y) / f_Y(y), for f_Y(y) > 0,
and the conditional cumulative distribution function of X given Y = y is
F_{X|Y}(a|y) = Σ_{x ≤ a} f(x|y)
Example:
Suppose that f (x, y), the joint probability mass function of
X and Y , is given by
f (0, 0) = 0.45 f (0, 1) = 0.05 f (1, 0) = 0.05 f (1, 1) = 0.45
Find the marginal distribution of X and the conditional distributions of X given Y = 0 and Y = 1.
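Here is a small sketch (my addition, plain Python) computing the requested marginal and conditional pmfs directly from the definition f(x|y) = f(x, y)/f_Y(y):

```python
# Joint pmf of (X, Y) from the example
f = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

# Marginal pmf of X: sum over the values of y
fX = {x: f[(x, 0)] + f[(x, 1)] for x in (0, 1)}
print("P(X=x) =", fX)  # {0: 0.5, 1: 0.5}

# Conditional pmf of X given Y = y: f(x, y) / fY(y)
for y in (0, 1):
    fY = f[(0, y)] + f[(1, y)]
    print(f"P(X=x | Y={y}) =", {x: f[(x, y)] / fY for x in (0, 1)})
```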
Continuous Case:
If X and Y have the joint probability density function f(x, y), define the conditional density function of X given Y = y by
f(x|y) = f(x, y) / f_Y(y)
wherever f_Y(y) > 0.
It is consistent with the discrete case.
Heuristically,
f(x|y) dx = f(x, y) dx dy / (f_Y(y) dy)
= Pr(x ≤ X < x + dx, y ≤ Y < y + dy) / Pr(y ≤ Y < y + dy)
= Pr(x ≤ X < x + dx | y ≤ Y < y + dy)
The conditional cumulative distribution function of X given Y = y is
F_{X|Y}(a|y) = Pr(X ≤ a|Y = y) = ∫_{−∞}^{a} f(x|y) dx
Example: Suppose X, Y have a joint density function defined as
f(x, y) = 1 for 0 < x < 1, 0 < y < 1, and f(x, y) = 0 otherwise.
We can compute the marginal density function of X by integrating y out:
f_X(x) = ∫_0^1 f(x, y) dy
= ∫_0^1 1 dy
= y |_{y=0}^{1}
= 1, for 0 < x < 1
Now suppose we want to find Pr(X > Y ).
Note that the event {X > Y} corresponds to the region {(x, y) : 0 < y < x < 1}.
Therefore,
Pr(X > Y) = ∫∫_{(x,y): 0<y<x<1} f(x, y) dx dy
= ∫_0^1 ∫_0^x 1 dy dx
= ∫_0^1 y |_{y=0}^{x} dx
= ∫_0^1 x dx
= x²/2 |_0^1
= 1/2
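A one-line simulation check (my addition, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(size=(2, 1_000_000))  # independent Uniform(0,1) pairs
print(np.mean(x > y))                    # should be close to 1/2
```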
Let X and Y have the joint pdf
f_{X,Y}(x, y) = 2e^{−(x+y)}, 0 < x < y < ∞
Then the marginal density of X is
f_X(x) = ∫_x^∞ f_{X,Y}(x, y) dy
= ∫_x^∞ 2e^{−(x+y)} dy
= 2e^{−x} ∫_x^∞ e^{−y} dy
= 2e^{−x} (−e^{−y} |_x^∞)
= 2e^{−x} e^{−x}
= 2e^{−2x}, 0 < x < ∞
and the marginal density of Y is
f_Y(y) = ∫_0^y f_{X,Y}(x, y) dx
= ∫_0^y 2e^{−(x+y)} dx
= 2e^{−y} ∫_0^y e^{−x} dx
= 2e^{−y} (−e^{−x} |_0^y)
= 2e^{−y}(1 − e^{−y}), 0 < y < ∞
The conditional density of X given Y = y is
f_{X|Y}(x|y) = f_{X,Y}(x, y) / f_Y(y)
= 2e^{−(x+y)} / (2e^{−y}(1 − e^{−y}))
= e^{−x} / (1 − e^{−y}), 0 < x < y
Similarly, the conditional density of Y given X = x is
f_{Y|X}(y|x) = e^{−y} / e^{−x} = e^{−(y−x)}, x < y < ∞
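As a numerical sanity check (my addition, assuming numpy and scipy are available), integrating the joint density reproduces both marginals computed above:

```python
import numpy as np
from scipy.integrate import quad

joint = lambda x, y: 2 * np.exp(-(x + y))  # valid only on 0 < x < y

# Marginal of X: integrate over y in (x, inf); compare with 2e^{-2x}
for x in [0.3, 1.0, 2.0]:
    val, _ = quad(lambda y: joint(x, y), x, np.inf)
    print(f"f_X({x}) = {val:.6f} vs {2 * np.exp(-2 * x):.6f}")

# Marginal of Y: integrate over x in (0, y); compare with 2e^{-y}(1 - e^{-y})
for y in [0.5, 1.0, 3.0]:
    val, _ = quad(lambda x: joint(x, y), 0, y)
    print(f"f_Y({y}) = {val:.6f} vs {2 * np.exp(-y) * (1 - np.exp(-y)):.6f}")
```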
(c) Expected Values and Variance
• If X is discrete, taking values x1, x2, . . ., then the expectation (also called the expected value or the mean) of X is defined by
E(X) = Σ_i x_i Pr{X = x_i}
Examples:
If the probability mass function of X is given by
f(0) = 1/2 = f(1)
then
E(X) = 0 · (1/2) + 1 · (1/2) = 1/2
If I is the indicator variable for the event A, that is, if
I = 1 if A occurs, and I = 0 otherwise,
then
E(I) = 1 · Pr(A) + 0 · Pr(A^c) = Pr(A)
• Variance
To measure the spread of the values of a distribution, we use the variance.
If X is a random variable with mean µ, then the variance of X is defined by
Var(X) = E[(X − µ)²]
Alternative formula:
Var(X) = E(X²) − µ²
• If we have two random variables X1 and X2, then to measure their dependence structure we can use the covariance:
Cov(X1, X2) = E[(X1 − µ1)(X2 − µ2)]
where µi = E(Xi), i = 1, 2.
Alternative formula:
Cov(X1, X2) = E(X1X2) − µ1µ2
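Both alternative formulas (for the variance above and the covariance here) are easy to verify numerically; a brief sketch (my addition, assuming numpy), using correlated samples built purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(size=200_000)
x2 = x1 + rng.normal(size=200_000)  # correlated with x1 by construction

# Var(X) = E[(X - mu)^2] = E(X^2) - mu^2
mu = x1.mean()
print(np.mean((x1 - mu) ** 2), np.mean(x1 ** 2) - mu ** 2)

# Cov(X1, X2) = E[(X1 - mu1)(X2 - mu2)] = E(X1 X2) - mu1 mu2
mu1, mu2 = x1.mean(), x2.mean()
print(np.mean((x1 - mu1) * (x2 - mu2)), np.mean(x1 * x2) - mu1 * mu2)
```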
Returning to the example with f_{X,Y}(x, y) = 2e^{−(x+y)}, we can also compute the conditional mean of X given Y = y:
E(X|Y = y) = ∫_0^y x f_{X|Y}(x|y) dx
= ∫_0^y x e^{−x} / (1 − e^{−y}) dx
= (1 / (1 − e^{−y})) ∫_0^y x e^{−x} dx
Integrating by parts with u = x and dv = e^{−x} dx, so that du = dx and v = −e^{−x},
∫_0^y x e^{−x} dx = −x e^{−x} |_0^y + ∫_0^y e^{−x} dx
= −y e^{−y} + (−e^{−x} |_0^y)
= 1 − e^{−y} − y e^{−y}
Therefore,
E(X|Y = y) = (1 − e^{−y} − y e^{−y}) / (1 − e^{−y})
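A quick quadrature check of this closed form (my addition, assuming numpy and scipy):

```python
import numpy as np
from scipy.integrate import quad

# E(X | Y = y) = integral of x e^{-x} / (1 - e^{-y}) over 0 < x < y
def cond_mean(y):
    val, _ = quad(lambda x: x * np.exp(-x) / (1 - np.exp(-y)), 0, y)
    return val

for y in [0.5, 1.0, 4.0]:
    formula = (1 - np.exp(-y) - y * np.exp(-y)) / (1 - np.exp(-y))
    print(f"y={y}: quadrature {cond_mean(y):.6f} vs formula {formula:.6f}")
```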