Chapter 4 Expected Values
Definition. Let X be a random variable having a pdf f(x). Also, suppose that the following conditions are satisfied: $\sum_x |x| f(x)$ converges (discrete case), or $\int_{-\infty}^{\infty} |x| f(x)\,dx$ converges to a finite limit (continuous case). Then the expected value of X is
$$E[X] = \sum_x x f(x) \quad \text{(discrete case)} \qquad \text{or} \qquad E[X] = \int_{-\infty}^{\infty} x f(x)\,dx \quad \text{(continuous case)}.$$
For instance, consider a discrete variable with space {1, 2, 3, . . .} and f(n) = c/n² for a constant c that allows f to sum to 1. Then
$$\sum_{n=1}^{\infty} n f(n) = \sum_{n=1}^{\infty} \frac{c}{n},$$
which does not converge (it is a constant multiple of the harmonic series), so this X has no expected value.
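The divergence is easy to see numerically. The following Python sketch uses c = 6/π², the standard value making $\sum_n c/n^2 = 1$, and prints partial sums of $\sum_n n f(n)$, which keep growing without bound:

import math

c = 6 / math.pi**2  # normalizing constant: sum over n >= 1 of c/n^2 equals 1

# Partial sums of sum_n n*f(n) = sum_n c/n grow without bound,
# so this distribution has no expected value.
total = 0.0
for n in range(1, 10**6 + 1):
    total += c / n
    if n in (10, 10**3, 10**6):
        print(f"partial sum up to n={n:>7}: {total:.3f}")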
Examples
4.1.1 Expectations of Functions of Random Variables
Suppose Y = g(X), where g is strictly increasing and differentiable. Then
$$F_Y(y) = P[Y \le y] = P[g(X) \le y] = P[X \le g^{-1}(y)] = F(g^{-1}(y)) = \int_{-\infty}^{g^{-1}(y)} f(x)\,dx.$$
Then we have
$$E[Y] = \int_{-\infty}^{\infty} y f_Y(y)\,dy.$$
How does E[Y] compare to the integral
$$I = \int_{-\infty}^{\infty} g(x) f(x)\,dx\,?$$
By applying the change of variables x = g^{-1}(y), we have
$$\frac{dx}{dy} = (g^{-1})'(y) > 0$$
and
$$I = \int_{-\infty}^{\infty} y\, f(g^{-1}(y))\,(g^{-1})'(y)\,dy = \int_{-\infty}^{\infty} y f_Y(y)\,dy = E[Y],$$
since differentiating F_Y(y) = F(g^{-1}(y)) gives f_Y(y) = f(g^{-1}(y))(g^{-1})'(y). Hence E[g(X)] can be computed directly from the pdf of X, without first finding f_Y.
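As a numerical check of this identity (a minimal sketch in Python, assuming X ~ Exponential(1) and g(x) = x², choices made here only for illustration; g is increasing on the support [0, ∞) of X):

import math
import random

# Compare a Monte Carlo estimate of E[g(X)] with the integral of g(x) f(x):
# here X ~ Exponential(1), g(x) = x^2, and E[X^2] = 2 exactly.
random.seed(0)
n = 200_000
mc_estimate = sum(random.expovariate(1.0) ** 2 for _ in range(n)) / n

# Riemann-sum approximation of the integral of x^2 e^{-x} over [0, 50]
# (the tail beyond 50 is negligible).
dx = 0.001
integral = 0.0
for i in range(1, 50_000):
    x = i * dx
    integral += x**2 * math.exp(-x) * dx

print(f"Monte Carlo E[X^2]      : {mc_estimate:.3f}")  # ~ 2
print(f"integral of g(x) f(x) dx: {integral:.3f}")     # ~ 2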
Examples.
Example 1 The following result holds for continuous random variables that take only nonnegative values:
$$E[X] = \int_0^{\infty} P[X > t]\,dt = \int_0^{\infty} (1 - F(t))\,dt.$$
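A quick numerical check (a sketch, assuming X ~ Exponential(rate 2), a distribution chosen here only for illustration; then E[X] = 1/2 and P[X > t] = e^{−2t}):

import math

# For X ~ Exponential(rate=2): E[X] = 1/2 and P[X > t] = exp(-2t),
# so the tail integral should also come out near 1/2.
rate = 2.0
dt = 0.0001
tail_integral = 0.0
for i in range(200_000):  # integrate the tail probability over [0, 20]
    t = i * dt
    tail_integral += math.exp(-rate * t) * dt

print(f"integral of P[X > t] dt: {tail_integral:.4f}")  # ~ 0.5
print(f"E[X] = 1/rate          : {1 / rate:.4f}")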
Example 2 Suppose X is uniformly distributed on {51, 52, . . . , 100}, so that f(x) = 1/50. Then
$$E[1/X] = \sum_{x=51}^{100} \frac{1}{50x}.$$
This sum has no simple closed form. However, comparing it with integrals of the decreasing function 1/x,
$$\frac{1}{50} \int_{50}^{100} \frac{dx}{x+1} \;\le\; \sum_{x=51}^{100} \frac{1}{50x} \;\le\; \frac{1}{50} \int_{50}^{100} \frac{dx}{x},$$
so that
$$0.0137 \approx \frac{1}{50}\ln\frac{101}{51} \;\le\; \sum_{x=51}^{100} \frac{1}{50x} \;\le\; \frac{\ln 2}{50} \approx 0.0139.$$
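The bounds are easy to verify directly in Python (a sketch of the arithmetic above):

import math

# E[1/X] for X uniform on {51, ..., 100}, together with the two
# integral bounds derived above.
exact = sum(1 / (50 * x) for x in range(51, 101))
lower = math.log(101 / 51) / 50   # (1/50) * integral of 1/(x+1) over [50, 100]
upper = math.log(2) / 50          # (1/50) * integral of 1/x over [50, 100]

print(f"lower bound: {lower:.5f}")  # ~ 0.01367
print(f"E[1/X]     : {exact:.5f}")  # ~ 0.01376
print(f"upper bound: {upper:.5f}")  # ~ 0.01386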
E[X] is often called the mean value or mean of X. It serves as the primary
moment-based measure of the location of a distribution, and is often denoted by µ.
The variance of X is defined by
$$\text{Var}(X) = E[(X - \mu)^2] = \sum_x (x - \mu)^2 f(x)$$
in the discrete case, and by
$$E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx$$
in the continuous case. It is common to let the notation σ² denote the variance.
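For a concrete discrete illustration (a sketch with a small hypothetical pmf, not taken from the notes):

# Mean and variance of a small hypothetical discrete distribution
# with pmf f(1)=0.1, f(2)=0.2, f(3)=0.3, f(4)=0.4.
pmf = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}

mu = sum(x * p for x, p in pmf.items())               # E[X]
var = sum((x - mu) ** 2 * p for x, p in pmf.items())  # E[(X - mu)^2]

print(f"mu = {mu}, sigma^2 = {var}")  # mu = 3.0, sigma^2 = 1.0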
Theorem (Markov's inequality). Let g(X) be a nonnegative function of the random variable X, and let c > 0. Then
$$P[g(X) \ge c] \le \frac{E[g(X)]}{c}.$$
A proof is given below for the continuous case. Let A = {x : g(x) ≥ c}. Then
$$E[g(X)] = \int_A g(x) f(x)\,dx + \int_{A^c} g(x) f(x)\,dx.$$
From this and the nonnegativity of g(X) we have
$$E[g(X)] \ge \int_A g(x) f(x)\,dx \ge \int_A c f(x)\,dx = c\,P(A),$$
which implies
$$\frac{E[g(X)]}{c} \ge P(A) = P[g(X) \ge c].$$
Theorem (Chebyshev's inequality). If X has mean µ and variance σ², then for every k > 0,
$$P[|X - \mu| \ge k\sigma] \le \frac{1}{k^2}.$$
Proof: In the previous theorem take g(X) = (X − µ)² and let c = k²σ². Since E[(X − µ)²] = σ² and the events (X − µ)² ≥ k²σ² and |X − µ| ≥ kσ coincide, the proof follows immediately.
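An empirical sanity check (a sketch, assuming X is normal with hypothetical parameters; any distribution with finite variance would do):

import random

# Chebyshev check for X ~ Normal(mu=50, sigma=5): P[|X - mu| >= 2*sigma]
# must not exceed 1/2^2 = 0.25 (for the normal it is in fact ~ 0.0455).
random.seed(0)
mu, sigma, k = 50.0, 5.0, 2.0
n = 100_000
hits = sum(abs(random.gauss(mu, sigma) - mu) >= k * sigma for _ in range(n))

print(f"empirical P[|X - mu| >= k*sigma]: {hits / n:.4f}")
print(f"Chebyshev bound 1/k^2           : {1 / k**2:.4f}")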
Example 3: Suppose that it is known that the number of items produced in a
factory during a week is a random variable with mean 50.
a) What can be said about the probability that a week’s production will exceed 75?
By Markov's inequality,
$$P[X > 75] \le \frac{E[X]}{75} = \frac{50}{75} = \frac{2}{3}.$$
b) If the variance of the week's production is known to equal 25, then what can be said about the probability that this week's production will be between 40 and 60?
By Chebyshev's inequality,
$$P[|X - 50| \ge 10] = P[|X - 50| \ge 2\sqrt{25}] \le \frac{1}{2^2} = \frac{1}{4}.$$
Hence
$$P[40 < X < 60] \ge 1 - \frac{1}{4} = \frac{3}{4}.$$
Example 4 Suppose X has pdf f(x) = 1/10 on the interval (0, 10). Then µ = 5 and σ² = 25/3, and the exact tail probability is
$$P[|X - 5| \ge 4] = 1 - \int_1^9 \frac{1}{10}\,dx = 1 - 0.8 = 0.2.$$
By contrast, Chebyshev's inequality only guarantees P[|X − 5| ≥ 4] ≤ σ²/4² = 25/48 ≈ 0.52, illustrating how conservative the bound can be.
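Checking this example numerically (a minimal sketch):

import random

# Example 4: X uniform on (0, 10). Exact P[|X - 5| >= 4] = 0.2,
# while the Chebyshev bound is (25/3)/16 = 25/48 ~ 0.52.
random.seed(0)
n = 100_000
hits = sum(abs(random.uniform(0, 10) - 5) >= 4 for _ in range(n))

print(f"empirical P[|X - 5| >= 4]: {hits / n:.4f}")  # ~ 0.2
print(f"Chebyshev bound          : {25 / 48:.4f}")   # ~ 0.5208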
Suppose that there is a positive number h such that E[e^{tX}] exists for t ∈ (−h, h). Then M(t) = E[e^{tX}] is called the moment-generating function (mgf) of X. In the continuous case
$$M(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx,$$
and in the discrete case
$$M(t) = \sum_x e^{tx} f(x).$$
Not every random variable has a moment-generating function. However, for those that do, the moment-generating function completely determines the distribution: if two random variables have the same mgf, they must have identical distributions.
The existence of M (t) for t ∈ (−h, h) implies the existence of derivatives of M (t)
of all orders at t = 0.
Furthermore, by a theorem that allows us to change the order of differentiation and
integration, we have
$$M'(t) = \frac{dM(t)}{dt} = \int_{-\infty}^{\infty} x e^{tx} f(x)\,dx,$$
or, for discrete variables,
$$M'(t) = \sum_x x e^{tx} f(x).$$
Setting t = 0 gives M'(0) = E[X]. In general, we can see that M''(0) = E[X²], M'''(0) = E[X³], and so on.
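This is easy to carry out symbolically (a sketch using sympy, with X ~ Exponential(rate 3) as a hypothetical example, so that M(t) = 3/(3 − t) for t < 3):

import sympy as sp

# Moments from the mgf of X ~ Exponential(rate=3): M(t) = 3/(3 - t).
t = sp.symbols('t')
M = 3 / (3 - t)

first_moment = sp.diff(M, t).subs(t, 0)      # M'(0)  = E[X]   = 1/3
second_moment = sp.diff(M, t, 2).subs(t, 0)  # M''(0) = E[X^2] = 2/9
variance = second_moment - first_moment**2   # Var(X) = 1/9

print(first_moment, second_moment, variance)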
Examples.