
Chapter 4 Expected Values

4.1 The Expected Value of a Random Variable

Definition. Let X be a random variable having a pdf f(x). Also, suppose that the
appropriate one of the following conditions is satisfied:

\sum_x |x| f(x) converges to a finite limit (discrete case)

\int_{-\infty}^{\infty} |x| f(x) dx converges to a finite limit (continuous case)

Then the expected value of X is given by

E[X] = \sum_x x f(x)

in the discrete case, and

E[X] = \int_{-\infty}^{\infty} x f(x) dx

in the continuous case.

Note that the expected value does not always exist.

For instance, consider a discrete variable with space {1, 2, 3, ...} and f(n) = c/n²
for a constant c that allows f to sum to 1. Then

\sum_{n=1}^{\infty} n f(n) = \sum_{n=1}^{\infty} c/n,

which does not converge.
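As a quick numerical illustration (not from the text), the Python sketch below computes E[X] for a small, made-up discrete pmf by the weighted-sum definition, and shows that the partial sums of \sum n f(n) for f(n) = c/n² keep growing, consistent with the divergence above.

import math

# Expected value of a discrete random variable as a weighted sum.
# The pmf below is a made-up example: X takes the values 1, 2, 3.
pmf = {1: 0.2, 2: 0.5, 3: 0.3}
expected_value = sum(x * p for x, p in pmf.items())
print("E[X] =", expected_value)  # 0.2 + 1.0 + 0.9 = 2.1

# Divergence in the example above: f(n) = c/n^2 with c = 6/pi^2 so that f sums to 1,
# but the partial sums of n*f(n) = c/n grow without bound (harmonic series).
c = 6 / math.pi ** 2
for N in (10, 100, 1000, 10000, 100000):
    partial = sum(c / n for n in range(1, N + 1))
    print(f"partial sum up to n = {N}: {partial:.4f}")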

Book Examples: Expectation of the


Geometric Random Variable
Poisson Distribution
Gamma Density
Normal Distribution
Cauchy Density

Examples
4.1.1 Expectations of Functions of Random Variables

Consider Y = g(X), where g is a continuous, increasing function of X.

F_Y(y) = P[Y ≤ y] = P[g(X) ≤ y]

= P[X ≤ g^{-1}(y)] = F(g^{-1}(y)) = \int_{-\infty}^{g^{-1}(y)} f(x) dx,

where f(x) is the pdf of X.

Let f_Y denote the pdf of Y. To find this we differentiate:

F_Y'(y) = f_Y(y) = F'(g^{-1}(y)) (g^{-1})'(y) = f(g^{-1}(y)) (g^{-1})'(y)

Then we have

E[Y] = \int_{-\infty}^{\infty} y f_Y(y) dy

How does E[Y] compare to the integral

I = \int_{-\infty}^{\infty} g(x) f(x) dx ?

By applying the change of variables x = g^{-1}(y), so that g(x) = y, we have

dx/dy = (g^{-1})'(y) > 0

and

I = \int_{-\infty}^{\infty} y f(g^{-1}(y)) (g^{-1})'(y) dy

= \int_{-\infty}^{\infty} y f_Y(y) dy

= E[Y].
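The identity E[g(X)] = \int g(x) f(x) dx can be checked numerically. The sketch below is my own example, not from the text: it takes X ~ Uniform(0, 1) and g(x) = x², so that f_Y(y) = 1/(2√y) on (0, 1), and approximates both integrals with a midpoint rule; both come out near 1/3.

import math

def midpoint_integral(h_func, a, b, n=100000):
    # Midpoint-rule approximation of the integral of h_func over (a, b).
    width = (b - a) / n
    return sum(h_func(a + (i + 0.5) * width) for i in range(n)) * width

f = lambda x: 1.0                              # pdf of X on (0, 1)
g = lambda x: x ** 2                           # continuous, increasing on (0, 1)
f_Y = lambda y: 1.0 / (2.0 * math.sqrt(y))     # pdf of Y = g(X) on (0, 1)

E_Y = midpoint_integral(lambda y: y * f_Y(y), 0.0, 1.0)
I = midpoint_integral(lambda x: g(x) * f(x), 0.0, 1.0)
print(E_Y, I)  # both approximately 1/3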

Examples.
Example 1. The following result holds true for continuous random variables that
take only nonnegative values:

E[X] = \int_0^{\infty} P[X > t] dt = \int_0^{\infty} (1 − F(t)) dt

Let's verify this with the uniform distribution.

X has pdf f(x) = 1 for x ∈ (0, 1) and f(x) = 0 elsewhere. Then

E[X] = \int_0^1 x dx = 1²/2 − 0²/2 = 1/2

and

F(x) = \int_0^x dt = x

for x ∈ (0, 1). Now

\int_0^1 [1 − F(x)] dx = \int_0^1 (1 − x) dx = 1 − 1²/2 = 1/2.
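The same identity can be checked for another nonnegative random variable. The sketch below uses an Exponential distribution with rate 2 (my choice, not the text's), for which 1 − F(t) = e^{−2t} and E[X] = 1/2, and approximates \int_0^{\infty} (1 − F(t)) dt by truncating the integral.

import math

rate = 2.0                                   # Exponential(rate); E[X] = 1/rate
survival = lambda t: math.exp(-rate * t)     # 1 - F(t)

upper, n = 20.0, 200000                      # truncate where the tail is negligible
width = upper / n
integral = sum(survival((i + 0.5) * width) for i in range(n)) * width
print(integral, 1 / rate)                    # both approximately 0.5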
Example 2. Let X have a discrete uniform distribution on the integers {51, 52, 53, ..., 100},
and approximate E[1/X]. Note that f(x) = 1/50 on this space.

E[1/X] = \sum_{x=51}^{100} 1/(50x)

However,

(1/50) \int_{50}^{100} 1/(x + 1) dx ≤ \sum_{x=51}^{100} 1/(50x) ≤ (1/50) \int_{50}^{100} (1/x) dx

that is,

(1/50)[ln(101) − ln(51)] = 0.0136659 < \sum_{x=51}^{100} 1/(50x) ≈ 0.0137 < (1/50)[ln(100) − ln(50)] = 0.0138629
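A short Python check of these numbers (a sketch, not part of the text): it computes the exact sum and both logarithmic bounds.

import math

exact = sum(1.0 / (50 * x) for x in range(51, 101))
lower = (math.log(101) - math.log(51)) / 50
upper = (math.log(100) - math.log(50)) / 50
print(lower, exact, upper)  # approximately 0.013666 <= 0.013763 <= 0.013863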


Some Special Expectations: 4.1.2, 4.2, 4.5

E[X] is often called the mean value or mean of X. It serves as the primary
moment-based measure of the location of a distribution, and is often denoted by µ.

The dispersion of a distribution is often described by the second central moment,
more commonly known as the variance:

E[(X − µ)²] = \sum_x (x − µ)² f(x)

in the discrete case, and

E[(X − µ)²] = \int_{-\infty}^{\infty} (x − µ)² f(x) dx

in the continuous case. It is common to let the notation σ² denote the variance.

σ² = E[(X − µ)²] = E[X²] − 2µE[X] + µ² = E[X²] − µ²

The standard deviation of a random variable X is defined to be the square root
of its variance, and is often denoted by σ.
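As a small illustration (my own example, a fair six-sided die, not from the text), the sketch below computes the variance both directly as E[(X − µ)²] and via the identity E[X²] − µ², and takes the square root for the standard deviation.

pmf = {x: 1 / 6 for x in range(1, 7)}          # fair six-sided die (made-up example)

mu = sum(x * p for x, p in pmf.items())        # E[X] = 3.5
var_direct = sum((x - mu) ** 2 * p for x, p in pmf.items())       # E[(X - mu)^2]
var_identity = sum(x ** 2 * p for x, p in pmf.items()) - mu ** 2  # E[X^2] - mu^2
sigma = var_direct ** 0.5                      # standard deviation

print(mu, var_direct, var_identity, sigma)     # 3.5, 2.9167, 2.9167, 1.7078 (approx.)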
Examples in Book for:
Bernoulli Distribution
Normal Distribution
Uniform Distribution

Theorem (Markov): Let g(X) be a nonnegative function of a random variable
X. If E[g(X)] exists, then for every positive c,

P[g(X) ≥ c] ≤ E[g(X)]/c

A proof is given below for the continuous case.

Proof: Let A = {x : g(x) ≥ c}, and let f(x) be the pdf of X.

E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x) dx

= \int_A g(x) f(x) dx + \int_{A^c} g(x) f(x) dx

From this and the nonnegativity of g(X) we have

E[g(X)] ≥ \int_A g(x) f(x) dx ≥ \int_A c f(x) dx = c P(A)

which implies

E[g(X)]/c ≥ P(A) = P[g(X) ≥ c]
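A Monte Carlo illustration of Markov's inequality (a sketch; the Exponential(1) distribution, seed, and sample size are my choices, not the text's): with g(X) = X ≥ 0 and E[X] = 1, the bound P[X ≥ c] ≤ E[X]/c should hold for every positive c.

import random

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100000)]   # X ~ Exponential(1)
mean = sum(samples) / len(samples)                           # approximately E[X] = 1

for c in (1, 2, 4, 8):
    empirical = sum(x >= c for x in samples) / len(samples)  # estimate of P[X >= c]
    print(f"c = {c}: P[X >= c] ~ {empirical:.4f} <= E[X]/c ~ {mean / c:.4f}")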

A special case of Markov’s inequality is Chebyshev’s Inequality.

Theorem (Chebyshev): Let X be a random variable with a finite variance σ².
Then for every k > 0,

P[|X − µ| ≥ kσ] ≤ 1/k²

Proof: In the previous theorem take g(X) = (X − µ)² and c = k²σ². Since
E[(X − µ)²] = σ² and the event {(X − µ)² ≥ k²σ²} is the same as {|X − µ| ≥ kσ},
the result follows immediately.
Example 3: Suppose that it is known that the number of items produced in a
factory during a week is a random variable with mean 50.

a) What can be said about the probability that a week’s production will exceed 75?
By Markov’s inequality

P [X ≥ 75] ≤ E[X]/75 = 50/75 = 2/3.

b) If the variance of the week's production is known to equal 25, then what can be
said about the probability that this week's production will be between 40 and 60?

P[|X − 50| ≥ 10] = P[|X − 50| ≥ 2√25] = P[|X − 50| ≥ 2σ] ≤ 1/2² = 1/4

Hence

P[|X − 50| < 10] ≥ 1 − 1/4 = 3/4
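The two bounds can be computed directly (a trivial sketch of the arithmetic above):

mean, c = 50, 75
markov_bound = mean / c                       # P[X >= 75] <= 2/3
variance, k_sigma = 25, 10                    # sigma = 5, so 10 = 2*sigma and k = 2
chebyshev_bound = variance / k_sigma ** 2     # 1/k^2 = 25/100 = 1/4
print(markov_bound, chebyshev_bound, 1 - chebyshev_bound)  # 0.667, 0.25, 0.75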


Because Chebyshev's inequality is valid for all distributions with a variance, it
cannot always be expected to give the most precise bound. Consider the example below.

Example 4. Suppose X has pdf f(x) = 1/10 on the interval (0, 10). Then µ = 5 and
σ² = 25/3.

P[|X − 5| ≥ 4] = 1 − \int_1^9 (1/10) dx = 1 − 0.8 = 0.2

However, if we use Chebyshev's inequality,

P[|X − 5| ≥ 4] ≤ 25/(3 · 16) ≈ 0.52.
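A quick numerical check of Example 4 (a sketch; the seed and sample size are arbitrary choices): the empirical frequency of |X − 5| ≥ 4 for Uniform(0, 10) samples sits near the exact value 0.2, well below the Chebyshev bound 25/48 ≈ 0.52.

import random

random.seed(0)
samples = [random.uniform(0, 10) for _ in range(100000)]
empirical = sum(abs(x - 5) >= 4 for x in samples) / len(samples)

sigma_sq = 25 / 3
chebyshev_bound = sigma_sq / 4 ** 2            # 1/k^2 with k*sigma = 4, i.e. 25/48
print(empirical, chebyshev_bound)              # approximately 0.2 versus 0.52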
4.5 The Moment-Generating Function

Another special expectation is E[e^{tX}], which is a function of t and is known as the
moment-generating function M(t).

Suppose that there is a positive number h such that E[e^{tX}] exists for t ∈ (−h, h).
Then, in the continuous case

M(t) = \int_{-\infty}^{\infty} e^{tx} f(x) dx

and in the discrete case

M(t) = \sum_x e^{tx} f(x).

Not every random variable has a moment-generating function. However, for those
that do, the moment-generating function completely determines the distribution. If
two random variables have the same mgf, they must have identical distributions.
The existence of M (t) for t ∈ (−h, h) implies the existence of derivatives of M (t)
of all orders at t = 0.
Furthermore, by a theorem that allows us to change the order of differentiation and
integration, we have

M'(t) = dM(t)/dt = \int_{-\infty}^{\infty} x e^{tx} f(x) dx

or for discrete variables

M'(t) = \sum_x x e^{tx} f(x)
Thus, it is clear from the definition of E[X] that M 0(0) = E[X] = µ.

In general, we can see that M''(0) = E[X²], M'''(0) = E[X³], and so on.
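As an illustration (a sketch; the Bernoulli(0.3) example and the finite-difference step are my choices, not the text's), the moments can be recovered numerically from M(t) by differentiating at t = 0. For X ~ Bernoulli(p), M(t) = 1 − p + p e^t, so M'(0) = M''(0) = p.

import math

p = 0.3                                        # Bernoulli(p): made-up parameter
M = lambda t: 1 - p + p * math.exp(t)          # mgf of Bernoulli(p)

h = 1e-4                                       # finite-difference step
first_moment = (M(h) - M(-h)) / (2 * h)        # M'(0), approximately E[X] = p
second_moment = (M(h) - 2 * M(0) + M(-h)) / h ** 2   # M''(0), approximately E[X^2] = p
variance = second_moment - first_moment ** 2   # approximately p(1 - p) = 0.21

print(first_moment, second_moment, variance)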

Book Examples: mgf of the


Poisson Distribution
Gamma Distribution
Normal Distribution

Examples.
