
2.14 Mean and Variance

Great Expectations

Mean (Expected Value)

Law of the Unconscious Statistician

Variance

Chebychev’s Inequality


Definition: The mean or expected value or average of a RV X is

$$\mu \equiv E[X] \equiv \begin{cases} \sum_x x f(x) & \text{if } X \text{ is discrete} \\[4pt] \int_{\mathbb{R}} x f(x)\,dx & \text{if } X \text{ is cts} \end{cases}$$

The mean gives an indication of a RV’s central tendency.


Example: Suppose X has the Bernoulli distribution with parameter p, i.e., Pr(X = 1) = p, Pr(X = 0) = q = 1 − p. Then

$$E[X] = \sum_x x \Pr(X = x) = 1 \cdot p + 0 \cdot q = p.$$

Example: Die. X = 1, 2, . . . , 6, each w.p. 1/6. Then

$$E[X] = \sum_x x f(x) = 1 \cdot \frac{1}{6} + \cdots + 6 \cdot \frac{1}{6} = 3.5.$$
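As a small sanity check (not part of the slides), the die calculation can be reproduced straight from the discrete-case formula, using exact rational arithmetic:

```python
from fractions import Fraction

# E[X] = sum over x of x * f(x), where f(x) = 1/6 for each face of a fair die
faces = range(1, 7)
f = Fraction(1, 6)
mean = sum(x * f for x in faces)
print(mean)  # 7/2, i.e., 3.5
```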


Example: X ∼ Exp(λ). f(x) = λe^{−λx}, x ≥ 0. Then

$$E[X] = \int_{\mathbb{R}} x f(x)\,dx = \int_0^\infty x \lambda e^{-\lambda x}\,dx$$
$$= \Bigl[-x e^{-\lambda x}\Bigr]_0^\infty - \int_0^\infty \bigl(-e^{-\lambda x}\bigr)\,dx \quad \text{(by parts)}$$
$$= \int_0^\infty e^{-\lambda x}\,dx \quad \text{(the boundary term vanishes by L'Hôpital's rule)}$$
$$= 1/\lambda.$$
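The answer 1/λ can also be spot-checked numerically (an illustrative sketch, not from the slides) by approximating the integral with a midpoint Riemann sum for an assumed rate λ = 2, where E[X] should come out to 1/λ = 0.5:

```python
import math

lam = 2.0                  # illustrative rate parameter (an assumption for the demo)
n, upper = 500_000, 50.0   # truncate the integral at x = 50; the tail is negligible
dx = upper / n
# midpoint Riemann sum of x * lam * exp(-lam * x) over [0, upper]
mean = sum((i + 0.5) * dx * lam * math.exp(-lam * (i + 0.5) * dx) * dx
           for i in range(n))
```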


Law of the Unconscious Statistician

Definition/Theorem: The expected value of a function of X, say g(X), is

$$E[g(X)] \equiv \begin{cases} \sum_x g(x) f(x) & \text{if } X \text{ is discrete} \\[4pt] \int_{\mathbb{R}} g(x) f(x)\,dx & \text{if } X \text{ is cts} \end{cases}$$

Examples (cts case): $E[X^2] = \int_{\mathbb{R}} x^2 f(x)\,dx$ and $E[\sin X] = \int_{\mathbb{R}} (\sin x) f(x)\,dx$.
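LOTUS is easy to sketch in code; here is an illustrative version for the discrete case, using the fair die as the concrete RV:

```python
import math

# LOTUS for a discrete RV: E[g(X)] = sum of g(x) * f(x); fair die, f(x) = 1/6
faces = range(1, 7)

def lotus(g):
    return sum(g(x) * (1 / 6) for x in faces)

e_x2 = lotus(lambda x: x**2)         # E[X^2] = 91/6
e_sin = lotus(lambda x: math.sin(x)) # E[sin X]
```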


Just a moment please. . .

Definition: The kth moment of X is

$$E[X^k] = \begin{cases} \sum_x x^k f(x) & \text{if } X \text{ is discrete} \\[4pt] \int_{\mathbb{R}} x^k f(x)\,dx & \text{if } X \text{ is cts} \end{cases}$$

Definition: The kth central moment of X is

$$E[(X-\mu)^k] = \begin{cases} \sum_x (x-\mu)^k f(x) & \text{if } X \text{ is discrete} \\[4pt] \int_{\mathbb{R}} (x-\mu)^k f(x)\,dx & \text{if } X \text{ is cts} \end{cases}$$
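The two definitions can be placed side by side for a concrete discrete RV (again the fair die, purely as an illustration): the first moment is the mean, the first central moment is always 0, and the second central moment is the variance defined next.

```python
# kth moment and kth central moment of a fair die, straight from the definitions
faces = range(1, 7)
f = 1 / 6

def moment(k):
    return sum(x**k * f for x in faces)

mu = moment(1)  # first moment = mean = 3.5

def central_moment(k):
    return sum((x - mu)**k * f for x in faces)
```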


Definition: The variance of X is the second central moment, i.e., Var(X) ≡ E[(X − µ)²]. It’s a measure of spread or dispersion.

Notation: σ² ≡ Var(X).

Definition: The standard deviation of X is $\sigma \equiv +\sqrt{\mathrm{Var}(X)}$.


Theorem: For any g(X) and constants a and b, we have E[ag(X) + b] = aE[g(X)] + b.

Proof (just do the cts case):

$$E[ag(X) + b] = \int_{\mathbb{R}} \bigl(a g(x) + b\bigr) f(x)\,dx = a \int_{\mathbb{R}} g(x) f(x)\,dx + b \int_{\mathbb{R}} f(x)\,dx = aE[g(X)] + b,$$

since $\int_{\mathbb{R}} f(x)\,dx = 1$.

Remark: In particular, E[aX + b] = aE[X] + b. Similarly, E[g(X) + h(X)] = E[g(X)] + E[h(X)].
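The linearity property can be spot-checked numerically; this is an illustrative sketch (a = 2, b = 3 and the fair die are arbitrary choices, not from the slides):

```python
# Verify E[aX + b] = a*E[X] + b on a fair die for one choice of a and b
faces = range(1, 7)
f = 1 / 6
a, b = 2.0, 3.0
lhs = sum((a * x + b) * f for x in faces)  # E[aX + b] computed via LOTUS
rhs = a * sum(x * f for x in faces) + b    # a*E[X] + b
```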

Theorem (easier way to calculate variance): Var(X) = E[X²] − (E[X])².

Proof:

$$\mathrm{Var}(X) = E[(X-\mu)^2] = E[X^2 - 2\mu X + \mu^2] = E[X^2] - 2\mu E[X] + \mu^2 \quad \text{(by the above remarks)}$$
$$= E[X^2] - 2\mu^2 + \mu^2 = E[X^2] - \mu^2 = E[X^2] - (E[X])^2,$$

since µ = E[X].
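Both expressions for the variance can be compared on a concrete RV (the fair die again, as an illustration not taken from the slides):

```python
# Var(X) two ways for a fair die: the definition vs. the E[X^2] - (E[X])^2 shortcut
faces = range(1, 7)
f = 1 / 6
mu = sum(x * f for x in faces)
var_def = sum((x - mu)**2 * f for x in faces)     # E[(X - mu)^2]
var_short = sum(x**2 * f for x in faces) - mu**2  # E[X^2] - (E[X])^2
```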


Example: X ∼ Bern(p).

$$X = \begin{cases} 1 & \text{w.p. } p \\ 0 & \text{w.p. } q \end{cases}$$

Recall E[X] = p. In fact, for any k,

$$E[X^k] = 0^k \cdot q + 1^k \cdot p = p.$$

So Var(X) = E[X²] − (E[X])² = p − p² = p(1 − p) = pq.


Example: X ∼ U(a, b). f(x) = 1/(b − a), a < x < b.

$$E[X] = \int_a^b \frac{x}{b-a}\,dx = \frac{a+b}{2}$$
$$E[X^2] = \int_a^b \frac{x^2}{b-a}\,dx = \frac{a^2 + ab + b^2}{3}$$
$$\mathrm{Var}(X) = E[X^2] - (E[X])^2 = \frac{(b-a)^2}{12} \quad \text{(algebra)}.$$
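These closed forms can be checked against a midpoint-rule numerical integration; the endpoints a = 2, b = 5 below are an arbitrary choice for the demo, where the formulas predict mean 3.5 and variance 9/12 = 0.75:

```python
# Numerical check of E[X] = (a+b)/2 and Var(X) = (b-a)^2/12 for X ~ U(2, 5)
a, b = 2.0, 5.0
n = 100_000
dx = (b - a) / n
mean = ex2 = 0.0
for i in range(n):
    x = a + (i + 0.5) * dx        # midpoint of the i-th panel
    mean += x / (b - a) * dx      # accumulates integral of x * f(x)
    ex2 += x**2 / (b - a) * dx    # accumulates integral of x^2 * f(x)
var = ex2 - mean**2
```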


Theorem: Var(aX + b) = a²Var(X). A “shift” doesn’t matter!

Proof:

$$\mathrm{Var}(aX+b) = E[(aX+b)^2] - (E[aX+b])^2$$
$$= E[a^2 X^2 + 2abX + b^2] - (aE[X] + b)^2$$
$$= a^2 E[X^2] + 2ab E[X] + b^2 - \bigl(a^2 (E[X])^2 + 2ab E[X] + b^2\bigr)$$
$$= a^2 \bigl(E[X^2] - (E[X])^2\bigr) = a^2 \mathrm{Var}(X).$$


Example: X ∼ Bern(0.3).

Recall that E[X] = p = 0.3 and Var(X) = pq = (0.3)(0.7) = 0.21.

Let Y = g(X) = 4X + 5. Then

E[Y] = E[4X + 5] = 4E[X] + 5 = 6.2 and Var(Y) = Var(4X + 5) = 16 Var(X) = 3.36.
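The same numbers drop out of a direct pmf computation, since Y takes the value 9 w.p. 0.3 and 5 w.p. 0.7 (a sketch of the example above, not part of the slides):

```python
# X ~ Bern(0.3); Y = 4X + 5 takes the value 9 w.p. 0.3 and 5 w.p. 0.7
p, q = 0.3, 0.7
ex = 1 * p + 0 * q                    # E[X] = 0.3
varx = (1**2 * p + 0**2 * q) - ex**2  # pq = 0.21
ey = 9 * p + 5 * q                    # E[Y] = 6.2
vary = (9**2 * p + 5**2 * q) - ey**2  # 16 * Var(X) = 3.36
```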


Chebychev’s Inequality

Theorem: Suppose that E[X] = µ and Var(X) = σ². Then for any ε > 0,

$$\Pr(|X - \mu| \ge \varepsilon) \le \sigma^2 / \varepsilon^2.$$

Proof: See text.


Remarks:

If ε = kσ, then Pr(|X − µ| ≥ kσ) ≤ 1/k².

Equivalently, Pr(|X − µ| < ε) ≥ 1 − σ²/ε².

Chebychev gives a crude bound on the probability that X deviates from the mean by more than a constant, in terms of the constant and the variance. You can always use Chebychev, but it’s crude.

Example: Suppose X ∼ U(0, 1). f(x) = 1, 0 < x < 1.

E[X] = (a + b)/2 = 1/2, Var(X) = (b − a)²/12 = 1/12.

Then Chebychev implies

$$\Pr\bigl(|X - \tfrac{1}{2}| \ge \varepsilon\bigr) \le \frac{1}{12\varepsilon^2}.$$

In particular, for ε = 1/4,

$$\Pr\bigl(|X - \tfrac{1}{2}| \ge \tfrac{1}{4}\bigr) \le \frac{4}{3} \quad \text{(TERRIBLE bound!)}.$$


Example (cont’d): Let’s compare the above bound to the exact answer.

$$\Pr\bigl(|X - \tfrac{1}{2}| \ge \tfrac{1}{4}\bigr) = 1 - \Pr\bigl(|X - \tfrac{1}{2}| < \tfrac{1}{4}\bigr)$$
$$= 1 - \Pr\bigl(-\tfrac{1}{4} < X - \tfrac{1}{2} < \tfrac{1}{4}\bigr)$$
$$= 1 - \Pr\bigl(\tfrac{1}{4} < X < \tfrac{3}{4}\bigr)$$
$$= 1 - \int_{1/4}^{3/4} f(x)\,dx = 1 - \tfrac{1}{2} = \tfrac{1}{2}.$$
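Putting the two side by side in code (an illustrative sketch, not from the slides): the Chebychev bound is 4/3, which exceeds 1 and so carries no information, while the exact probability is 1/2.

```python
# Chebychev bound vs. exact tail probability for X ~ U(0, 1) with eps = 1/4
var = 1 / 12
eps = 0.25
bound = var / eps**2     # sigma^2 / eps^2 = 4/3 > 1: a vacuous bound
exact = 1 - (3/4 - 1/4)  # P(1/4 < X < 3/4) = 1/2 for U(0, 1), so the tail is 1/2
```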