
Statistics II

Module Code: STAT01C03


Instructor: Dr. Alaa Ibrahim

Copyright © 2009 Cengage Learning


Chapter 7

Random Variables and Discrete Probability Distributions



Population Mean (Expected Value)
The population mean is the weighted average of all of its values; the weights are the probabilities.

This parameter is also called the expected value of X and is represented by E(X):

E(X) = μ = Σ x·P(x)    (summed over all values x)


Population Variance…
The population variance is calculated similarly. It is the weighted average of the squared deviations from the mean:

V(X) = σ² = Σ (x – μ)²·P(x)

As before, there is a “short-cut” formulation:

V(X) = σ² = Σ x²·P(x) – μ²

The standard deviation is defined as before:

σ = √σ²
Example 7.3…
Find the mean, variance, and standard deviation for the population of the number of color televisions per household… (from Example 7.1)

μ = E(X) = 0(.012) + 1(.319) + 2(.374) + 3(.191) + 4(.076) + 5(.028) = 2.084

σ² = V(X) = (0 – 2.084)²(.012) + (1 – 2.084)²(.319) + … + (5 – 2.084)²(.028) = 1.107

σ = √1.107 = 1.052
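As a quick check, the three computations above can be reproduced in a few lines of Python (a sketch; the dictionary name is illustrative):

```python
# Distribution of the number of color televisions per household (Example 7.1)
p = {0: .012, 1: .319, 2: .374, 3: .191, 4: .076, 5: .028}

mean = sum(x * px for x, px in p.items())                    # E(X) = mu
variance = sum((x - mean) ** 2 * px for x, px in p.items())  # V(X) = sigma^2
std_dev = variance ** 0.5                                    # sigma

print(round(mean, 3), round(variance, 3), round(std_dev, 3))  # 2.084 1.107 1.052
```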


Laws of Expected Value…
E(c) = c
The expected value of a constant (c) is just
the value of the constant.

E(X + c) = E(X) + c
E(cX) = cE(X)
We can “pull” a constant out of the expected
value expression (either as part of a sum
with a random variable X or as a coefficient
of random variable X).

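These laws hold for any discrete distribution; a minimal sketch verifying them numerically against the television distribution from Example 7.3 (the constant c = 3 is an arbitrary choice):

```python
p = {0: .012, 1: .319, 2: .374, 3: .191, 4: .076, 5: .028}

def expected(f):
    """E[f(X)] for the discrete distribution p."""
    return sum(f(x) * px for x, px in p.items())

c = 3                         # arbitrary constant
ex = expected(lambda x: x)    # E(X)

assert abs(expected(lambda x: c) - c) < 1e-12          # E(c) = c
assert abs(expected(lambda x: x + c) - (ex + c)) < 1e-12  # E(X + c) = E(X) + c
assert abs(expected(lambda x: c * x) - c * ex) < 1e-12    # E(cX) = cE(X)
```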


Laws of Variance…
V(c) = 0
The variance of a constant (c) is zero.

V(X + c) = V(X)
Adding a constant to a random variable does not change its variance; the constant contributes no variability (per law 1 above).

V(cX) = c²V(X)
Multiplying a random variable by a constant multiplies the variance by the square of that constant.
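The same numerical check works for the variance laws (a sketch, again using the Example 7.3 distribution and an arbitrary c = 3):

```python
p = {0: .012, 1: .319, 2: .374, 3: .191, 4: .076, 5: .028}

def var(f):
    """V[f(X)] for the discrete distribution p."""
    m = sum(f(x) * px for x, px in p.items())
    return sum((f(x) - m) ** 2 * px for x, px in p.items())

c = 3
vx = var(lambda x: x)   # V(X)

assert abs(var(lambda x: c)) < 1e-12                  # V(c) = 0
assert abs(var(lambda x: x + c) - vx) < 1e-9          # V(X + c) = V(X)
assert abs(var(lambda x: c * x) - c**2 * vx) < 1e-9   # V(cX) = c^2 V(X)
```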
Bivariate Distributions…
Up to now, we have looked at univariate distributions, i.e. probability distributions in one variable.

As you might guess, bivariate distributions assign probabilities to combinations of two variables.

Bivariate probability distributions are also called joint probability distributions. A joint probability distribution of X and Y is a table or formula that lists the joint probabilities for all pairs of values x and y, and is denoted P(x,y):

P(x,y) = P(X=x and Y=y)


Discrete Bivariate Distribution…
As you might expect, the requirements for a bivariate distribution are similar to those for a univariate distribution, with only minor changes to the notation:

1. 0 ≤ P(x,y) ≤ 1 for all pairs (x,y)
2. Σ Σ P(x,y) = 1, summed over all pairs (x,y)


Example 7.5…
Xavier and Yvette are real estate agents. Let X denote the number of houses that Xavier will sell in a month and let Y denote the number of houses Yvette will sell in a month. An analysis of their past monthly performances gives the following joint probabilities (bivariate probability distribution):

         Y=0    Y=1    Y=2
  X=0    .12    .21    .07
  X=1    .42    .06    .02
  X=2    .06    .03    .01
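A convenient way to hold a joint distribution in code is a dict keyed by (x, y) pairs (a sketch based on the table above; the name `joint` is illustrative):

```python
# Joint probabilities P(x, y) for Xavier (X) and Yvette (Y)
joint = {
    (0, 0): .12, (0, 1): .21, (0, 2): .07,
    (1, 0): .42, (1, 1): .06, (1, 2): .02,
    (2, 0): .06, (2, 1): .03, (2, 2): .01,
}

# Requirement 2: the joint probabilities sum to 1
assert abs(sum(joint.values()) - 1.0) < 1e-12

# P(X=x and Y=y) is then a simple lookup
print(joint[(1, 1)])  # 0.06
```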


Marginal Probabilities…
As before, we can calculate the marginal probabilities by summing across rows and down columns to determine the probabilities of X and Y individually:

P(X=0) = .12 + .21 + .07 = .40     P(Y=0) = .12 + .42 + .06 = .60
P(X=1) = .42 + .06 + .02 = .50     P(Y=1) = .21 + .06 + .03 = .30
P(X=2) = .06 + .03 + .01 = .10     P(Y=2) = .07 + .02 + .01 = .10

e.g. the probability that Xavier sells 1 house = P(X=1) = .50
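Summing across rows and down columns is a short loop over the joint table (a sketch; variable names are illustrative):

```python
from collections import defaultdict

joint = {
    (0, 0): .12, (0, 1): .21, (0, 2): .07,
    (1, 0): .42, (1, 1): .06, (1, 2): .02,
    (2, 0): .06, (2, 1): .03, (2, 2): .01,
}

# Sum out one variable to get the marginal distribution of the other
px, py = defaultdict(float), defaultdict(float)
for (x, y), p in joint.items():
    px[x] += p
    py[y] += p

print(round(px[1], 2))  # 0.5 -> probability that Xavier sells 1 house
```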
Describing the Bivariate Distribution…
We can describe the mean, variance, and standard deviation of each variable in a bivariate distribution by working with the marginal probabilities, using the same formulae as for univariate distributions…
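For instance, applying the univariate formulae to the marginals from Example 7.5 (a sketch; the marginal values are those computed on the previous slide):

```python
px = {0: .40, 1: .50, 2: .10}   # marginal distribution of X (rows summed)
py = {0: .60, 1: .30, 2: .10}   # marginal distribution of Y (columns summed)

def mean_var(dist):
    """Population mean and variance of a discrete distribution."""
    m = sum(v * p for v, p in dist.items())
    return m, sum((v - m) ** 2 * p for v, p in dist.items())

mu_x, var_x = mean_var(px)   # E(X) = .7, V(X) = .41
mu_y, var_y = mean_var(py)   # E(Y) = .5, V(Y) = .45

print(round(var_x ** 0.5, 2), round(var_y ** 0.5, 2))  # 0.64 0.67
```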


Covariance…
The covariance of two discrete variables is defined as:

COV(X,Y) = σxy = Σ (x – μx)(y – μy)·P(x,y)

or alternatively using this shortcut method:

COV(X,Y) = σxy = Σ x·y·P(x,y) – μx·μy


Coefficient of Correlation…
The coefficient of correlation is calculated in the same way as described earlier:

ρ = COV(X,Y) / (σx·σy)


Example 7.6…
Compute the covariance and the coefficient of correlation between the numbers of houses sold by Xavier and Yvette.

COV(X,Y) = (0 – .7)(0 – .5)(.12) + (1 – .7)(0 – .5)(.42) + … + (2 – .7)(2 – .5)(.01) = –.15

ρ = COV(X,Y) / (σx·σy) = –0.15 ÷ [(.64)(.67)] = –.35

There is a weak, negative relationship between the two variables.


Example 7.6…
  X   Y   P(x,y)   x – μx   y – μy   (x – μx)(y – μy)·P(x,y)
  0   0   0.12     –0.7     –0.5       0.042
  0   1   0.21     –0.7      0.5      –0.074
  0   2   0.07     –0.7      1.5      –0.074
  1   0   0.42      0.3     –0.5      –0.063
  1   1   0.06      0.3      0.5       0.009
  1   2   0.02      0.3      1.5       0.009
  2   0   0.06      1.3     –0.5      –0.039
  2   1   0.03      1.3      0.5       0.020
  2   2   0.01      1.3      1.5       0.020
          μx = 0.7   μy = 0.5   COV(X,Y) = –0.150
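The table's column sums can be reproduced directly from the joint distribution, including the shortcut form of the covariance (a sketch; μx, μy, σx, σy are the marginal moments computed earlier):

```python
joint = {
    (0, 0): .12, (0, 1): .21, (0, 2): .07,
    (1, 0): .42, (1, 1): .06, (1, 2): .02,
    (2, 0): .06, (2, 1): .03, (2, 2): .01,
}
mu_x, mu_y = 0.7, 0.5                       # marginal means
sd_x, sd_y = 0.41 ** 0.5, 0.45 ** 0.5       # marginal standard deviations

# Definition: COV(X,Y) = sum of (x - mu_x)(y - mu_y) P(x,y)
cov = sum((x - mu_x) * (y - mu_y) * p for (x, y), p in joint.items())

# Shortcut: COV(X,Y) = sum of x*y*P(x,y) - mu_x*mu_y
cov_shortcut = sum(x * y * p for (x, y), p in joint.items()) - mu_x * mu_y

rho = cov / (sd_x * sd_y)   # coefficient of correlation

print(round(cov, 3), round(rho, 2))  # -0.15 -0.35
```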




Sum of Two Variables…
The bivariate distribution allows us to develop the probability distribution of any combination of the two variables; of particular interest is the sum of the two variables.

If we consider our example of Xavier and Yvette selling houses, we can create a probability distribution for X + Y…

…to answer questions like “what is the probability that two houses are sold?”

P(X+Y=2) = P(0,2) + P(1,1) + P(2,0) = .07 + .06 + .06 = .19
Sum of Two Variables…
Likewise, we can compute the expected value, variance, and standard deviation of X+Y in the usual way…

E(X + Y) = 0(.12) + 1(.63) + 2(.19) + 3(.05) + 4(.01) = 1.2

V(X + Y) = (0 – 1.2)²(.12) + … + (4 – 1.2)²(.01) = .56

σ(X+Y) = √.56 = .75
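Building the distribution of X + Y and its moments is a matter of grouping the joint probabilities by the value of the sum (a sketch; names are illustrative):

```python
from collections import defaultdict

joint = {
    (0, 0): .12, (0, 1): .21, (0, 2): .07,
    (1, 0): .42, (1, 1): .06, (1, 2): .02,
    (2, 0): .06, (2, 1): .03, (2, 2): .01,
}

# Probability distribution of the total number of houses sold, X + Y
total = defaultdict(float)
for (x, y), p in joint.items():
    total[x + y] += p

mean = sum(s * p for s, p in total.items())                  # E(X+Y)
variance = sum((s - mean) ** 2 * p for s, p in total.items())  # V(X+Y)

print(round(total[2], 2), round(mean, 1), round(variance, 2))  # 0.19 1.2 0.56
```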


Laws…
We can derive laws of expected value and variance for the sum of two variables as follows…

E(X + Y) = E(X) + E(Y)

V(X + Y) = V(X) + V(Y) + 2COV(X, Y)

If X and Y are independent, COV(X, Y) = 0 and thus:
V(X + Y) = V(X) + V(Y)
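The variance law can be checked against the direct computation from the joint distribution (a sketch; V(X), V(Y), and COV(X,Y) are the values derived earlier in the example):

```python
joint = {
    (0, 0): .12, (0, 1): .21, (0, 2): .07,
    (1, 0): .42, (1, 1): .06, (1, 2): .02,
    (2, 0): .06, (2, 1): .03, (2, 2): .01,
}
vx, vy, cov = 0.41, 0.45, -0.15   # V(X), V(Y), COV(X,Y) from Example 7.6

# V(X+Y) computed directly from the joint distribution
mean = sum((x + y) * p for (x, y), p in joint.items())
v_direct = sum((x + y - mean) ** 2 * p for (x, y), p in joint.items())

# Law: V(X+Y) = V(X) + V(Y) + 2 COV(X,Y) -> both sides equal .56
assert abs(v_direct - (vx + vy + 2 * cov)) < 1e-9
```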
Laws…
E(X + Y) = E(X) + E(Y) = .7 + .5 = 1.2

V(X + Y) = V(X) + V(Y) + 2COV(X, Y) = .41 + .45 + 2(–.15) = .56
