
STAT 234 Lecture 4

Expected Values of Discrete Random Variables


Section 3.3

Yibi Huang
Department of Statistics
University of Chicago

Random Variable & Probability Mass Function (Review)

A random variable is a real-valued function on the sample space S; it maps each element ω of S to a real number:

  X : S → R,  ω ↦ x = X(ω)

The probability mass function (pmf) of a random variable X is a function p(x) that maps each possible value x_i to the corresponding probability P(X = x_i).

• A pmf p(x) must satisfy 0 ≤ p(x) ≤ 1 and Σ_x p(x) = 1.
Example: Geometric Distribution

Let X be the number of tosses required to obtain the first heads, when tossing a coin with probability p of landing heads.

The pmf of X is

  p(x) = P(X = x) = P(TT···TH)              (x − 1 tails, then one head)
       = P(T)P(T) · · · P(T)P(H)            (by independence)
       = (1 − p)(1 − p) · · · (1 − p) p     (x − 1 copies of (1 − p))
       = (1 − p)^{x−1} p

if x is a positive integer, and p(x) = 0 if not.

[Figure: bar plot of the geometric pmf p(x), supported on x = 1, 2, 3, . . .]

• We say X has a geometric distribution since the pmf is a geometric sequence.
• Does Σ_{x=1}^∞ p(x) = 1?
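The slides contain no code, but the derivation above is easy to sanity-check by simulation. Here is a minimal Python sketch (the function and variable names are my own, not from the slides) comparing empirical frequencies to (1 − p)^{x−1} p:

```python
import random
from collections import Counter

def tosses_until_first_heads(p):
    """Number of coin tosses until the first heads, with P(heads) = p per toss."""
    x = 1
    while random.random() >= p:  # this toss came up tails
        x += 1
    return x

p, n = 0.3, 100_000
counts = Counter(tosses_until_first_heads(p) for _ in range(n))
for x in range(1, 6):
    print(f"x={x}: empirical {counts[x] / n:.4f}  vs  (1-p)^(x-1) p = {(1 - p) ** (x - 1) * p:.4f}")
```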
Example: Geometric Distribution (Cont’d)

Does Σ_{x=1}^∞ p(x) = Σ_{x=1}^∞ (1 − p)^{x−1} p = 1?

Recall the geometric sum

  Σ_{k=0}^∞ ar^k = a + ar + ar² + · · · + ar^k + · · · = a / (1 − r)   if |r| < 1.

The sum of the pmf of the geometric distribution

  Σ_{x=1}^∞ p(x) = Σ_{x=1}^∞ (1 − p)^{x−1} p
                 = p + (1 − p)p + (1 − p)²p + · · · + (1 − p)^{x−1} p + · · ·

is simply the case a = p and r = 1 − p, and hence the sum is

  a / (1 − r) = p / (1 − (1 − p)) = p / p = 1.
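A quick numerical check of this sum (my addition, not from the slides): truncating the series at x = 200 gives a value very close to 1 for any 0 < p < 1.

```python
p = 0.3
# partial sum of (1 - p)^(x-1) * p for x = 1 .. 200; the neglected tail equals (1 - p)^200
print(sum((1 - p) ** (x - 1) * p for x in range(1, 201)))  # ≈ 1.0
```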
Example: A Card Game

Consider a card game in which you draw ONE card from a well-shuffled deck of cards. You win

• $1 if you draw a heart,
• $5 if you draw an ace (including the ace of hearts),
• $10 if you draw the king of spades, and
• $0 for any other card you draw.

What’s the pmf of your reward X?

  Outcome          x    p(x)
  Heart (not ace)  1    12/52
  Ace              5    4/52
  King of spades   10   1/52
  All else         0    35/52

Equivalently,

  p(x) = 35/52 if x = 0,
         12/52 if x = 1,
         4/52  if x = 5,
         1/52  if x = 10,
         0     for all other values of x.
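For the examples that follow it is handy to have this pmf in code; a minimal sketch, where the dictionary representation and the use of exact fractions are my own choices, not from the slides:

```python
from fractions import Fraction

# pmf of the card-game reward X, stored as {value: probability}
card_pmf = {
    0: Fraction(35, 52),   # all other cards
    1: Fraction(12, 52),   # hearts that are not the ace
    5: Fraction(4, 52),    # the four aces
    10: Fraction(1, 52),   # the king of spades
}

assert sum(card_pmf.values()) == 1                     # a pmf must sum to 1 ...
assert all(0 <= px <= 1 for px in card_pmf.values())   # ... with each p(x) in [0, 1]
```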
Expected Value of a Random Variable

Expected Value = Expectation = Mean

Let X be a discrete random variable with pmf p(x). The expected value (or the expectation, or the mean) of X, denoted by E[X] or µ, is a weighted average of the possible values of X, where the weights are the probabilities of those values:

  µ = E[X] = Σ_{all x} x P(X = x) = Σ_{all x} x p(x)
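The definition translates directly into code; a small helper (my own naming) applied to the card-game pmf sketched earlier:

```python
def expectation(pmf):
    """E[X] = sum of x * p(x) over all x, for a pmf given as {value: probability}."""
    return sum(x * px for x, px in pmf.items())

print(expectation(card_pmf))  # Fraction(21, 26), i.e. 42/52 ≈ 0.81
```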
Example: Card Game — Expected Value

  p(x) = 35/52 if x = 0,
         12/52 if x = 1,
         4/52  if x = 5,
         1/52  if x = 10,
         0     if x ≠ 0, 1, 5, 10

  E[X] = Σ_x x p(x) = 0 × 35/52 + 1 × 12/52 + 5 × 4/52 + 10 × 1/52 = 42/52 ≈ 0.81

[Figure: bar plot of the pmf p(x) at x = 0, 1, 5, 10, with the mean E(X) = 42/52 marked.]
Interpretation of the Expected Value

If one plays the card game 5200 times (where the card is drawn with replacement), then in the 5200 games, one expects to win

• $10 about 100 times (why?)
• $5 about 400 times
• $1 about 1200 times
• $0 about 3500 times

The average reward over the 5200 games is hence about

  (100 × $10 + 400 × $5 + 1200 × $1 + 3500 × $0) / 5200
    = (100/5200) × $10 + (400/5200) × $5 + (1200/5200) × $1 + (3500/5200) × $0
    = (1/52) × $10 + (4/52) × $5 + (12/52) × $1 + (35/52) × $0
    = Σ_x p(x) x = $42/52 ≈ $0.81

So the long-run average reward per game is just the expected value.
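The long-run-average interpretation is easy to reproduce by simulation; a sketch of the 5200-game experiment (the sampling code is mine, not from the slides):

```python
import random

values, probs = [0, 1, 5, 10], [35/52, 12/52, 4/52, 1/52]
rewards = random.choices(values, weights=probs, k=5200)  # 5200 independent games
print(sum(rewards) / 5200)  # long-run average; typically near 42/52 ≈ 0.81
```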
Fair Game

For the card game we have discussed so far,

• will you play the game if it costs $1 to play once?
• will you play the game if it costs 50 cents to play once?
• what is the maximum amount you would be willing to pay to play this game?

A fair game is defined as a game that costs as much as its expected payout, i.e., the expected profit is 0.
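To make the fair-price question concrete (my own illustration, not part of the slides): the expected profit at entry cost c is E[X] − c, so the game is fair exactly when c = E[X] = 42/52 ≈ $0.81.

```python
expected_reward = 42 / 52  # ≈ $0.81 per game

for cost in (1.00, 0.50, 42 / 52):
    print(f"cost ${cost:.2f}: expected profit ${expected_reward - cost:+.3f}")
# $1.00 loses ≈ $0.19 per game on average, $0.50 gains ≈ $0.31;
# an entry fee of exactly $42/52 makes the expected profit 0, i.e. a fair game.
```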
Expected Value of the Geometric Distribution

Recall the pmf of the geometric distribution is p(x) = (1 − p)^{x−1} p for x = 1, 2, 3, . . .. Find the expected value

  Σ_x x p(x) = Σ_{x=1}^∞ x (1 − p)^{x−1} p.

Sol. Recall the geometric sum

  Σ_{k=0}^∞ ar^k = a / (1 − r)   if |r| < 1.

Differentiating both sides of the identity above with respect to r, we get another identity:

  d/dr Σ_{k=0}^∞ ar^k = d/dr [a / (1 − r)]

which is

  Σ_{k=1}^∞ a k r^{k−1} = a / (1 − r)².
Expected Value of the Geometric Distribution (Cont’d)

Observe the expected value

  E(X) = Σ_{x=0}^∞ x p(x) = Σ_{x=1}^∞ x p(x)     (we can ignore x = 0 since x p(x) = 0 when x = 0)
       = Σ_{x=1}^∞ x (1 − p)^{x−1} p

is simply the second identity above with a = p and r = 1 − p, and hence the expected value is

  a / (1 − r)² = p / (1 − (1 − p))² = p / p² = 1/p.
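A quick numerical check of E(X) = 1/p (my addition): truncating the series at a large cutoff reproduces 1/p closely.

```python
p = 0.3
ex = sum(x * (1 - p) ** (x - 1) * p for x in range(1, 501))  # truncated series for E(X)
print(ex, 1 / p)  # both ≈ 3.3333; the truncated tail is negligibly small
```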
Expected Value of a Function of a Random Variable

Functions of a Random Variable

Example 1 (Card Game w/ Tax). Suppose it costs 50 cents = $0.5 to play the game and the player has to pay 10% of the reward as tax. One’s net profit from the game is

  X − 0.1X − 0.5
  (reward) (tax) (cost)

The net profit of the game is hence h(X) = 0.9X − 0.5.

  Outcome          Reward X
  Heart (not ace)  1
  Ace              5
  King of spades   10
  All else         0

Example 2 (Card Game w/ a New Tax Rule). Suppose the tax is 0.02X² dollars for a reward of X dollars (so those who earn more pay a higher percentage of their rewards as tax). Then the net profit is

  h(X) = X − 0.02X² − 0.5.
Expected Value of a Function of a Random Variable

In addition to the expected value of a random variable X itself, we might also be interested in the expected value of a function h(X) of a random variable, e.g.,

• the net profit h(X) = 0.9X − 0.5 from the card game with tax
• the net profit h(X) = X − 0.02X² − 0.5 from the card game with the new tax rule

Definition: If the pmf of X is p_X(x), the expected value of h(X) is

  E[h(X)] = Σ_x h(x) p_X(x).
Example 1 (Card Game w/ Tax)

  Reward x   pmf p(x)   Net profit h(x) = 0.9x − 0.5
  1          12/52      0.9 · 1 − 0.5 = 0.4
  5          4/52       0.9 · 5 − 0.5 = 4.0
  10         1/52       0.9 · 10 − 0.5 = 8.5
  0          35/52      0.9 · 0 − 0.5 = −0.5

One’s expected net profit from the game is

  E[h(X)] = Σ_x h(x) p(x) = 0.4 × 12/52 + 4.0 × 4/52 + 8.5 × 1/52 + (−0.5) × 35/52 = 11.8/52 ≈ 0.227
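The definition E[h(X)] = Σ_x h(x) p(x) also translates directly into code, continuing the earlier sketch (the helper names are mine):

```python
def expectation_of(h, pmf):
    """E[h(X)] = sum of h(x) * p(x) over all x."""
    return sum(h(x) * px for x, px in pmf.items())

net_profit = lambda x: 0.9 * x - 0.5         # reward minus 10% tax minus $0.50 entry fee
print(expectation_of(net_profit, card_pmf))  # 11.8/52 ≈ 0.227
```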
Variance of a Random Variable

One measure of the spread of a random variable (or its probability distribution) is the variance.

The variance of a random variable X, denoted as σ²_X or V(X), is defined as the average squared distance from the mean:

  Var(X) = σ² ("sigma squared") = E[(X − µ)²]

Variance is in squared units. The square root of the variance is the standard deviation (SD):

  SD(X) = σ = √Var(X)
Example (Card Game)

Recall for the card game reward X:

  x      0      1      5      10
  p(x)   35/52  12/52  4/52   1/52

and mean = µ = E(X) = 42/52. Its variance is hence

  Var(X) = E[(X − µ)²] = E[(X − 42/52)²] = Σ_x (x − 42/52)² p(x)
         = (0 − 42/52)² · 35/52 + (1 − 42/52)² · 12/52 + (5 − 42/52)² · 4/52 + (10 − 42/52)² · 1/52
         = 9260/52² ≈ 3.42

  SD(X) = √Var(X) = √(9260/52²) ≈ √3.42 ≈ 1.85.

Observe that the computation of the variance can be awkward if the expected value µ is not an integer.
A Shortcut Formula for Calculating Variance

  Var(X) = E[(X − µ)²] = E(X²) − µ²

Proof.

  E[(X − µ)²] = Σ_x (x − µ)² p(x)
              = Σ_x (x² − 2µx + µ²) p(x)
              = Σ_x x² p(x) − 2µ Σ_x x p(x) + µ² Σ_x p(x)
              = E(X²) − 2µ · µ + µ² · 1
              = E(X²) − 2µ² + µ² = E(X²) − µ²

where we used Σ_x x² p(x) = E(X²), Σ_x x p(x) = µ, and Σ_x p(x) = 1.
Example (Card Game)

  x      0      1      5      10
  p(x)   35/52  12/52  4/52   1/52

Let’s calculate the variance again using the shortcut formula Var(X) = E(X²) − µ². First we calculate E[X²]:

  E[X²] = 0² · 35/52 + 1² · 12/52 + 5² · 4/52 + 10² · 1/52 = 212/52

and the variance is hence

  Var(X) = E(X²) − µ² = 212/52 − (42/52)² = 9260/52²

which matches our previous calculation.
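Given the helpers sketched earlier, both routes to the variance are one-liners (my own code, not from the slides); exact fractions sidestep the awkward non-integer arithmetic the slides mention:

```python
mu = expectation(card_pmf)  # 42/52
var_definition = sum((x - mu) ** 2 * px for x, px in card_pmf.items())  # E[(X - mu)^2]
var_shortcut = expectation_of(lambda x: x ** 2, card_pmf) - mu ** 2     # E(X^2) - mu^2
print(var_definition, var_shortcut)  # both Fraction(2315, 676) = 9260/52² ≈ 3.42
```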
Linear Transformation of a Random Variable

A linear transformation of a random variable, h(X) = aX + b, is also a function of interest, e.g.,

• the net profit h(X) = X − 0.1X − 0.5 = 0.9X − 0.5 from the card game with tax

For Y = aX + b, we can show that

  E(aX + b) = a E(X) + b,   and   Var(aX + b) = a² Var(X).

Before we get to the proofs, let’s review properties of summation.
Review: Summation Notation and Its Properties

In the following, a is a fixed constant.

  Σ_{i=1}^n a = a + a + · · · + a  (n copies) = na

  Σ_{i=1}^n (a x_i) = a x_1 + a x_2 + · · · + a x_n
                    = a (x_1 + x_2 + · · · + x_n)
                    = a Σ_{i=1}^n x_i

  Σ_{i=1}^n (x_i + y_i) = (x_1 + y_1) + (x_2 + y_2) + · · · + (x_n + y_n)
                        = (x_1 + x_2 + · · · + x_n) + (y_1 + y_2 + · · · + y_n)
                        = Σ_{i=1}^n x_i + Σ_{i=1}^n y_i
Proof of E(aX + b) = a E(X) + b

We prove it for the case that X is discrete with pmf p(x). This relation is also true when X is continuous.

  E(aX + b) = Σ_x (ax + b) p(x)               (definition of E(aX + b))
            = Σ_x (ax p(x) + b p(x))
            = Σ_x ax p(x) + Σ_x b p(x)        (since Σ_i (x_i + y_i) = Σ_i x_i + Σ_i y_i)
            = a Σ_x x p(x) + b Σ_x p(x)       (since Σ_i (a x_i) = a Σ_i x_i)
            = a E(X) + b                      (since Σ_x x p(x) = E(X) and Σ_x p(x) = 1)
Proof of Var(aX + b) = a² Var(X)

Recall Var(Y) is the expected value of [Y − E(Y)]².

For Y = aX + b, we have proved that E(Y) = E(aX + b) = aµ + b, where µ = E(X), and hence

  [Y − E(Y)]² = [(aX + b) − E(aX + b)]² = [aX + b − (aµ + b)]² = a²(X − µ)².

Taking the expected value of the above, we get

  Var(Y) = E[Y − E(Y)]² = E[a²(X − µ)²] = a² E[(X − µ)²] = a² Var(X),

in which the step E[a²(X − µ)²] = a² E[(X − µ)²] is justified using E[cW + d] = c E[W] + d, which we just proved, with c = a², W = (X − E(X))², and d = 0.
Example (Card Game w/ Tax)

For the card game, recall the mean and variance of the reward X are

  E(X) = 42/52,   Var(X) = 9260/52².

The mean and variance of the net profit with tax, h(X) = 0.9X − 0.5, are

  E(0.9X − 0.5) = 0.9 E(X) − 0.5 = 0.9 × 42/52 − 0.5 = 11.8/52
  Var(0.9X − 0.5) = 0.9² Var(X) = 0.9² × 9260/52² = 7500.6/52²
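As a closing check (my own code, building on the snippets above), the two linear-transformation rules can be verified against direct computations over the pmf:

```python
a, b = 0.9, -0.5  # net profit h(X) = 0.9X - 0.5

# Direct computation of the mean and variance of h(X) over the pmf
mu_h = expectation_of(lambda x: a * x + b, card_pmf)
var_h = expectation_of(lambda x: (a * x + b - mu_h) ** 2, card_pmf)

# The shortcut rules E(aX + b) = a E(X) + b and Var(aX + b) = a^2 Var(X)
print(mu_h, a * float(mu) + b)              # both 11.8/52 ≈ 0.227
print(var_h, a ** 2 * float(var_shortcut))  # both 7500.6/52² ≈ 2.77
```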
