
UNIT 8 MATHEMATICAL EXPECTATION

Structure
8.1 Introduction
Objectives

8.2 Expectation of a Random Variable


8.3 Properties of Expectation of One-dimensional Random Variable
8.4 Moments and Other Measures in Terms of Expectations
8.5 Addition and Multiplication Theorems of Expectation
8.6 Summary
8.7 Solutions/Answers

8.1 INTRODUCTION
In Units 1 to 4 of this course, you have studied the probabilities of different events in various situations. The concept of a univariate random variable was introduced in Unit 5, and that of a bivariate random variable in Units 6 and 7. Before studying the present unit, we advise you to go through those units.
You have studied the methods of finding the mean, variance and other measures in the context of frequency distributions in MST-002 (Descriptive Statistics). Here, in this unit, we discuss the mean, variance and other measures in the context of probability distributions of random variables. The mean or average value of a random variable, taken over all its possible values, is called the expected value or the expectation of the random variable. In the present unit, we discuss the expectations of random variables and their properties.
In Secs. 8.2, 8.3 and 8.4, we deal with expectation and its properties. The addition and multiplication laws of expectation are discussed in Sec. 8.5.
Objectives
After studying this unit, you should be able to:
• find the expected values of random variables;
• establish the properties of expectation;
• obtain various measures for probability distributions; and
• apply the laws of addition and multiplication of expectation in appropriate situations.

8.2 EXPECTATION OF A RANDOM VARIABLE


In Unit 1 of MST-002, you have studied that the mean for a frequency
distribution of a variable X is defined as

$$\text{Mean} = \frac{\sum_{i=1}^{n} f_i x_i}{\sum_{i=1}^{n} f_i}.$$
If the frequency distribution of the variable X is given as

x: $x_1 \quad x_2 \quad x_3 \quad \ldots \quad x_n$
f: $f_1 \quad f_2 \quad f_3 \quad \ldots \quad f_n$

the above formula for finding the mean may be written as

$$\text{Mean} = \frac{\sum_{i=1}^{n} f_i x_i}{\sum_{i=1}^{n} f_i} = \frac{f_1 x_1 + f_2 x_2 + \ldots + f_n x_n}{\sum_{i=1}^{n} f_i}$$

$$= x_1\left(\frac{f_1}{\sum_{i=1}^{n} f_i}\right) + x_2\left(\frac{f_2}{\sum_{i=1}^{n} f_i}\right) + \ldots + x_n\left(\frac{f_n}{\sum_{i=1}^{n} f_i}\right)$$
Notice that $\frac{f_1}{\sum f_i}, \frac{f_2}{\sum f_i}, \ldots, \frac{f_n}{\sum f_i}$ are, in fact, the relative frequencies, i.e. the proportions of individuals corresponding to the values $x_1, x_2, \ldots, x_n$ respectively of the variable X, and hence can be replaced by probabilities. [See Unit 2 of this course.]
Let us now define a similar measure for the probability distribution of a random variable X which assumes the values $x_1, x_2, \ldots, x_n$ with associated probabilities $p_1, p_2, \ldots, p_n$. This measure is known as the expected value of X and is given in a similar way as
$$x_1 p_1 + x_2 p_2 + \ldots + x_n p_n = \sum_{i=1}^{n} x_i p_i,$$
the only difference being that the role of the relative frequencies has now been taken over by the probabilities. The expected value of X is written as E(X).
The above aspect can be viewed in the following way also:

The mean of a frequency distribution of X is $\frac{\sum_{i=1}^{n} x_i f_i}{\sum_{i=1}^{n} f_i}$; similarly, the mean of a probability distribution of a r.v. X is $\frac{\sum_{i=1}^{n} x_i p_i}{\sum_{i=1}^{n} p_i}$.

Now, as we know that $\sum_{i=1}^{n} p_i = 1$ for a probability distribution, the mean of the probability distribution becomes $\sum_{i=1}^{n} x_i p_i$.

$\therefore$ The expected value of a random variable X is $E(X) = \sum_{i=1}^{n} x_i p_i$.

The above formula for finding the expected value of a random variable X is used only if X is a discrete random variable which takes the values $x_1, x_2, \ldots, x_n$ with probability mass function $p(x_i) = P(X = x_i),\ i = 1, 2, \ldots, n$.

But if X is a continuous random variable having probability density function $f(x)$, then in place of summation we use integration, and in this case the expected value of X is defined as
$$E(X) = \int_{-\infty}^{\infty} x f(x)\, dx.$$


The expectation, as defined above, also agrees with the logical/theoretical argument, as is illustrated in the following example.
Suppose a fair coin is tossed twice; then the answer to the question, "How many heads do we expect theoretically/logically in two tosses?" is obviously 1, as the coin is unbiased, and hence we would undoubtedly expect one head in two tosses. Expectation actually means "what we get on an average". Now, let us obtain the expected value in the above question using the formula.
Let X be the number of heads in two tosses of the coin; we are to obtain E(X), i.e. the expected number of heads. As X is the number of heads in two tosses of the coin, X can take the values 0, 1, 2, and its probability distribution is given as
X:    0    1    2
p(x): 1/4  1/2  1/4      [Refer Unit 5 of MST-003]

$$\therefore\ E(X) = \sum_{i=1}^{3} x_i p_i = x_1 p_1 + x_2 p_2 + x_3 p_3 = (0)\left(\frac{1}{4}\right) + (1)\left(\frac{1}{2}\right) + (2)\left(\frac{1}{4}\right) = 0 + \frac{1}{2} + \frac{1}{2} = 1$$
So, we get the same answer, i.e. 1, using the formula also.
So, the expectation of a random variable is nothing but the average (mean) taken over all the possible values of the random variable, or the value which we get on an average when the random experiment is performed repeatedly.
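This "average over repeated performances" reading can be checked empirically. A small simulation sketch (the trial count is chosen arbitrarily) repeats the two-toss experiment many times; the sample mean of the number of heads settles near E(X) = 1:

```python
import random

trials = 100_000
# Each toss of a fair coin contributes 1 head with probability 1/2.
total_heads = sum(random.randint(0, 1) + random.randint(0, 1)
                  for _ in range(trials))
print(total_heads / trials)  # close to 1.0
```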
Remark 1: Sometimes the summations and integrals considered in the above definitions may not be convergent, and hence expectations in such cases do not exist. But we will deal only with those summations (series) and integrals which are convergent, as the topic of checking the convergence of series or integrals is beyond the scope of this course. You need not bother about whether the series or integral is convergent, i.e. whether the expectation exists, as we are dealing only with expectations which exist.
Example 1: If it rains, a raincoat dealer can earn Rs 500 per day. If it is a dry day, he can lose Rs 100 per day. What is his expectation if the probability of rain is 0.4?
Solution: Let X be the amount earned on a day by the dealer. Therefore, X can take the values Rs 500 and −Rs 100 (a loss of Rs 100 is equivalent to a negative earning of Rs 100).
∴ The probability distribution of X is given as

              Rainy day   Dry day
X (in Rs):       500       −100
p(x):            0.4        0.6

Hence, the expectation of the amount earned by him is
$$E(X) = \sum_{i=1}^{2} x_i p_i = x_1 p_1 + x_2 p_2 = (500)(0.4) + (-100)(0.6) = 200 - 60 = 140.$$
Thus, his expectation is Rs 140, i.e. on an average he earns Rs 140 per day.

Example 2: A player tosses two unbiased coins. He wins Rs 5 if 2 heads appear, Rs 2 if one head appears and Rs 1 if no head appears. Find the expected value of the amount won by him.
Solution: In tossing two unbiased coins, the sample space is
S = {HH, HT, TH, TT}.
$$\therefore\ P(2\text{ heads}) = \frac{1}{4},\quad P(\text{one head}) = \frac{2}{4},\quad P(\text{no head}) = \frac{1}{4}.$$
Let X be the amount in rupees won by him.
∴ X can take the values 5, 2 and 1 with
$$P(X = 5) = P(2\text{ heads}) = \frac{1}{4},\quad P(X = 2) = P(1\text{ head}) = \frac{2}{4},\quad P(X = 1) = P(\text{no head}) = \frac{1}{4}.$$
∴ The probability distribution of X is

X:    5    2    1
p(x): 1/4  2/4  1/4

The expected value of X is given as
$$E(X) = \sum_{i=1}^{3} x_i p_i = x_1 p_1 + x_2 p_2 + x_3 p_3 = (5)\left(\frac{1}{4}\right) + (2)\left(\frac{2}{4}\right) + (1)\left(\frac{1}{4}\right) = \frac{5}{4} + \frac{4}{4} + \frac{1}{4} = \frac{10}{4} = 2.5.$$
Thus, the expected value of the amount won by him is Rs 2.50.
Example 3: Find the expectation of the number on an unbiased die when thrown.
Solution: Let X be a random variable representing the number on a die when thrown. X can take the values 1, 2, 3, 4, 5, 6 with
$$P(X = 1) = P(X = 2) = P(X = 3) = P(X = 4) = P(X = 5) = P(X = 6) = \frac{1}{6}.$$
Thus, the probability distribution of X is given by

X:    1    2    3    4    5    6
p(x): 1/6  1/6  1/6  1/6  1/6  1/6

Hence, the expectation of the number on the die when thrown is
$$E(X) = \sum_{i=1}^{6} x_i p_i = 1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{6} + 3 \cdot \frac{1}{6} + 4 \cdot \frac{1}{6} + 5 \cdot \frac{1}{6} + 6 \cdot \frac{1}{6} = \frac{21}{6} = \frac{7}{2}.$$
Example 4: Two cards are drawn successively with replacement from a well-shuffled pack of 52 cards. Find the expected value of the number of aces.
Solution: Let A₁, A₂ be the events of getting an ace in the first and second draws, respectively. Let X be the number of aces drawn. Thus, X can take the values 0, 1, 2 with
$$P(X = 0) = P(\text{no ace}) = P(\bar{A}_1 \cap \bar{A}_2)$$
$$= P(\bar{A}_1)\,P(\bar{A}_2) \quad [\text{cards are drawn with replacement and hence the events are independent}]$$
$$= \frac{48}{52} \times \frac{48}{52} = \frac{12}{13} \times \frac{12}{13} = \frac{144}{169},$$
P  X  1   one Ace and one other card 

 P  A1  A 2    A1  A 2  

 By Addition theorem of probability 


 P  A1  A 2   P  A1  A 2  for mutually exclusive events 
 
 By multiplication theorem of 
 P  A1  P  A 2   P  A1  P  A 2   
 probability for independent events 
4 48 48 4 1 12 12 1 24
         , and
52 52 52 52 13 13 13 13 169
P  X  2   P  both aces   P  A1  A 2 

4 4 1
 P  A1  P  A 2  =   .
52 52 169

Hence, the probability distribution of the random variable X is

X:    0        1       2
p(x): 144/169  24/169  1/169

∴ The expected value of X is given by
$$E(X) = \sum_{i=1}^{3} x_i p_i = 0 \times \frac{144}{169} + 1 \times \frac{24}{169} + 2 \times \frac{1}{169} = \frac{26}{169} = \frac{2}{13}.$$
Example 5: For a continuous distribution whose probability density function is given by
$$f(x) = \frac{3x}{4}(2 - x), \quad 0 \le x \le 2,$$
find the expected value of X.
Solution: The expected value of a continuous random variable X is given by
$$E(X) = \int_{-\infty}^{\infty} x f(x)\, dx = \int_{0}^{2} x \cdot \frac{3x}{4}(2 - x)\, dx = \frac{3}{4}\int_{0}^{2} x^2 (2 - x)\, dx$$
$$= \frac{3}{4}\int_{0}^{2} (2x^2 - x^3)\, dx = \frac{3}{4}\left[\frac{2x^3}{3} - \frac{x^4}{4}\right]_0^2 = \frac{3}{4}\left[\frac{2(2)^3}{3} - \frac{(2)^4}{4} - 0\right]$$
$$= \frac{3}{4}\left(\frac{16}{3} - \frac{16}{4}\right) = \frac{3}{4} \times \frac{16}{12} = 1.$$
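The integral in Example 5 is easy to verify numerically. The sketch below assumes SciPy is available; any numerical quadrature routine would do:

```python
from scipy.integrate import quad

# p.d.f. from Example 5: f(x) = (3x/4)(2 - x) on [0, 2].
f = lambda x: 0.75 * x * (2 - x)

# E(X) = integral of x * f(x) over the support.
expected, _abs_err = quad(lambda x: x * f(x), 0, 2)
print(expected)  # 1.0, up to numerical error
```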
Now, you can try the following exercises.
E1) You toss a fair coin. If the outcome is head, you win Rs 100; if the
outcome is tail, you win nothing. What is the expected amount won
by you?
E2) A fair coin is tossed until a tail appears. What is the expectation of the number of tosses?
E3) The distribution of a continuous random variable X is defined by
$$f(x) = \begin{cases} x^3, & 0 \le x \le 1 \\ (2 - x)^3, & 1 \le x \le 2 \\ 0, & \text{elsewhere.} \end{cases}$$
Obtain the expected value of X.

Let us now discuss some properties of expectation in the next section.

8.3 PROPERTIES OF EXPECTATION OF ONE-DIMENSIONAL RANDOM VARIABLE
Properties of the mathematical expectation of a random variable X are:
1. E(k) = k, where k is a constant;
2. E(kX) = kE(X), k being a constant;
3. E(aX + b) = aE(X) + b, where a and b are constants.
Proof:
Discrete case:
Let X be a discrete r.v. which takes the values $x_1, x_2, x_3, \ldots$ with respective probabilities $p_1, p_2, p_3, \ldots$

1. $E(k) = \sum_i k\,p_i$  [by definition of expectation]
   $= k \sum_i p_i$
   $= k(1) = k$  [$\because$ the sum of the probabilities of all possible values of a r.v. is 1]

2. $E(kX) = \sum_i (k x_i)\,p_i$  [by definition]
   $= k \sum_i x_i p_i = k\,E(X)$

3. $E(aX + b) = \sum_i (a x_i + b)\,p_i$  [by definition]
   $= \sum_i (a x_i p_i + b p_i) = a \sum_i x_i p_i + b \sum_i p_i$
   $= a\,E(X) + b(1) = a\,E(X) + b$

Continuous case:
Let X be a continuous random variable having f(x) as its probability density function. Then:

1. $E(k) = \int_{-\infty}^{\infty} k\,f(x)\,dx$  [by definition]
   $= k \int_{-\infty}^{\infty} f(x)\,dx$
   $= k(1) = k$  [$\because$ the integral of the p.d.f. over the entire range is 1]

2. $E(kX) = \int_{-\infty}^{\infty} (kx)\,f(x)\,dx$  [by definition]
   $= k \int_{-\infty}^{\infty} x f(x)\,dx = k\,E(X)$

3. $E(aX + b) = \int_{-\infty}^{\infty} (ax + b)\,f(x)\,dx = \int_{-\infty}^{\infty} (ax)\,f(x)\,dx + \int_{-\infty}^{\infty} b\,f(x)\,dx$
   $= a \int_{-\infty}^{\infty} x f(x)\,dx + b \int_{-\infty}^{\infty} f(x)\,dx = a\,E(X) + b(1) = a\,E(X) + b$

Example 6: Given the following probability distribution:

X:    −2    −1    0    1     2
p(x): 0.15  0.30  0    0.30  0.25

find
i) E(X)
ii) E(2X + 3)
iii) E(X²)
iv) E(4X − 5)
Solution:
i) $E(X) = \sum_{i=1}^{5} x_i p_i = x_1 p_1 + x_2 p_2 + x_3 p_3 + x_4 p_4 + x_5 p_5$
   $= (-2)(0.15) + (-1)(0.30) + (0)(0) + (1)(0.30) + (2)(0.25)$
   $= -0.3 - 0.3 + 0 + 0.3 + 0.5 = 0.2$

ii) E(2X + 3) = 2E(X) + 3  [using property 3 of this section]
    = 2(0.2) + 3  [using the solution of part (i)]
    = 0.4 + 3 = 3.4
iii) $E(X^2) = \sum_{i=1}^{5} x_i^2 p_i$  [by definition]
   $= x_1^2 p_1 + x_2^2 p_2 + x_3^2 p_3 + x_4^2 p_4 + x_5^2 p_5$
   $= (-2)^2(0.15) + (-1)^2(0.30) + (0)^2(0) + (1)^2(0.30) + (2)^2(0.25)$
   $= (4)(0.15) + (1)(0.30) + 0 + (1)(0.30) + (4)(0.25)$
   $= 0.6 + 0.3 + 0 + 0.3 + 1 = 2.2$

iv) E(4X − 5) = 4E(X) − 5  [using property 3 of this section]
    = 4(0.2) − 5
    = 0.8 − 5 = −4.2

Here is an exercise for you.


E4) If X is a random variable with mean μ and standard deviation σ, then what is the expectation of $Z = \frac{X - \mu}{\sigma}$?
[Note: Z so defined is called the standard random variate.]

Let us now express the moments and other measures for a random variable
in terms of expectations in the following section.

8.4 MOMENTS AND OTHER MEASURES IN TERMS OF EXPECTATIONS
Moments
The moments for a frequency distribution have already been studied by you in Unit 3 of MST-002. Here, we deal with moments for probability distributions. The rth order moment about any point 'A' (say) of a variable X, already defined in Unit 3 of MST-002, is given by
$$\mu_r' = \frac{\sum_{i=1}^{n} f_i (x_i - A)^r}{\sum_{i=1}^{n} f_i}.$$
So, the rth order moment about any point 'A' of a random variable X having probability mass function $P(X = x_i) = p(x_i) = p_i$ is defined as
$$\mu_r' = \frac{\sum_{i=1}^{n} p_i (x_i - A)^r}{\sum_{i=1}^{n} p_i}$$
[replacing frequencies by probabilities, as discussed in Sec. 8.2 of this unit]
$$= \sum_{i=1}^{n} p_i (x_i - A)^r \quad \left[\because \sum_{i=1}^{n} p_i = 1\right]$$
The above formula is valid if X is a discrete random variable. But if X is a continuous random variable having probability density function f(x), then the rth order moment about A is defined as $\mu_r' = \int_{-\infty}^{\infty} (x - A)^r f(x)\, dx$.

So, the rth order moment about any point 'A' of a random variable X is defined as
$$\mu_r' = \begin{cases} \sum_i p_i (x_i - A)^r, & \text{if X is a discrete r.v.} \\ \int_{-\infty}^{\infty} (x - A)^r f(x)\, dx, & \text{if X is a continuous r.v.} \end{cases}$$
$$= E(X - A)^r$$
Similarly, the rth order moment about the mean (μ), i.e. the rth order central moment, is defined as
$$\mu_r = \begin{cases} \sum_i p_i (x_i - \mu)^r, & \text{if X is a discrete r.v.} \\ \int_{-\infty}^{\infty} (x - \mu)^r f(x)\, dx, & \text{if X is a continuous r.v.} \end{cases}$$
$$= E(X - \mu)^r = E[X - E(X)]^r$$

Variance
The variance of a random variable X is the second order central moment and is defined as
$$\mu_2 = V(X) = E(X - \mu)^2 = E[X - E(X)]^2.$$
Also, we know that
$$V(X) = \mu_2' - (\mu_1')^2,$$
where $\mu_1', \mu_2'$ are the moments about the origin. Since
$$\mu_1' = E(X - 0)^1 = E(X) \quad \text{and} \quad \mu_2' = E(X - 0)^2 = E(X^2),$$
we have
$$V(X) = E(X^2) - [E(X)]^2.$$


 
Theorem 8.1: If X is a random variable, then $V(aX + b) = a^2 V(X)$, where a and b are constants.
Proof:
$$V(aX + b) = E[(aX + b) - E(aX + b)]^2 \quad [\text{by definition of variance}]$$
$$= E[(aX + b) - (aE(X) + b)]^2 \quad [\text{using property 3 of Sec. 8.3}]$$
$$= E[aX + b - aE(X) - b]^2$$
$$= E[a(X - E(X))]^2$$
$$= E[a^2 (X - E(X))^2]$$
$$= a^2 E[X - E(X)]^2 \quad [\text{using property 2 of Sec. 8.3}]$$
$$= a^2 V(X) \quad [\text{by definition of variance}]$$

Cor. (i) $V(aX) = a^2 V(X)$
(ii) V(b) = 0
(iii) V(X + b) = V(X)

Proof: (i) This result is obtained on putting b = 0 in the above theorem.
(ii) This result is obtained on putting a = 0 in the above theorem.
(iii) This result is obtained on putting a = 1 in the above theorem.
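Both the shortcut $V(X) = E(X^2) - [E(X)]^2$ and Theorem 8.1 are easy to confirm numerically. A sketch using a fair die (our choice of illustration; helper names are ours):

```python
def expectation(values, probs):
    return sum(x * p for x, p in zip(values, probs))

def variance(values, probs):
    mu = expectation(values, probs)
    return expectation([(x - mu) ** 2 for x in values], probs)

values, probs = [1, 2, 3, 4, 5, 6], [1 / 6] * 6  # a fair die

# Shortcut: V(X) = E(X^2) - [E(X)]^2
mu = expectation(values, probs)
print(variance(values, probs),
      expectation([x * x for x in values], probs) - mu ** 2)  # both 35/12

# Theorem 8.1: V(aX + b) = a^2 V(X), with a = 2, b = 3
a, b = 2, 3
print(variance([a * x + b for x in values], probs),
      a ** 2 * variance(values, probs))  # both 35/3
```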
Covariance
For a bivariate frequency distribution, you have already studied in Unit 6 of MST-002 that the covariance between two variables X and Y is defined as
$$\text{Cov}(X, Y) = \frac{\sum_i f_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i f_i}.$$
∴ For a bivariate probability distribution, Cov(X, Y) is defined as
$$\text{Cov}(X, Y) = \begin{cases} \sum_i \sum_j p_{ij} (x_i - \bar{X})(y_j - \bar{Y}), & \text{if } (X, Y) \text{ is a two-dimensional discrete r.v.} \\ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - \bar{X})(y - \bar{Y})\, f(x, y)\, dy\, dx, & \text{if } (X, Y) \text{ is a two-dimensional continuous r.v.} \end{cases}$$
where $p_{ij} = P(X = x_i, Y = y_j)$. By the definition of expectation,
$$\text{Cov}(X, Y) = E[(X - \bar{X})(Y - \bar{Y})] = E[(X - E(X))(Y - E(Y))] \quad [\because E(X) = \text{mean of X, i.e. } \bar{X}, \text{ and } E(Y) = \text{mean of Y, i.e. } \bar{Y}]$$
On simplifying,
$$\text{Cov}(X, Y) = E(XY) - E(X)E(Y).$$
Now, if X and Y are independent random variables then, by the multiplication theorem, E(XY) = E(X)E(Y), and hence in this case Cov(X, Y) = 0.
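The formula Cov(X, Y) = E(XY) − E(X)E(Y) translates directly into code for a discrete joint distribution. In this sketch the joint p.m.f. is invented purely for illustration:

```python
# Joint p.m.f. stored as {(x, y): probability}; the values are illustrative.
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

ex  = sum(x * p for (x, y), p in joint.items())      # E(X) = 0.5
ey  = sum(y * p for (x, y), p in joint.items())      # E(Y) = 0.7
exy = sum(x * y * p for (x, y), p in joint.items())  # E(XY) = 0.4
print(exy - ex * ey)  # Cov(X, Y) = 0.4 - 0.35 = 0.05
```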
Remark 2:
i) If X and Y are independent random variables, then V(X + Y) = V(X) + V(Y).
Proof:
$$V(X + Y) = E[(X + Y) - E(X + Y)]^2$$
$$= E[(X + Y) - (E(X) + E(Y))]^2$$
$$= E[(X - E(X)) + (Y - E(Y))]^2$$
$$= E[(X - E(X))^2 + (Y - E(Y))^2 + 2(X - E(X))(Y - E(Y))]$$
$$= E[X - E(X)]^2 + E[Y - E(Y)]^2 + 2E[(X - E(X))(Y - E(Y))]$$
$$= V(X) + V(Y) + 2\,\text{Cov}(X, Y)$$
$$= V(X) + V(Y) + 0 \quad [\because X \text{ and } Y \text{ are independent}]$$
$$= V(X) + V(Y)$$

ii) If X and Y are independent random variables, then V(X − Y) = V(X) + V(Y).
Proof: This can be proved in a similar manner as Remark 2(i) above.

iii) If X and Y are independent random variables, then $V(aX + bY) = a^2 V(X) + b^2 V(Y)$.
Proof: Prove this result yourself, proceeding in a similar fashion as in the proof of Remark 2(i).

Mean Deviation about Mean
The mean deviation about the mean in the context of a frequency distribution is
$$\frac{\sum_{i=1}^{n} f_i\, |x_i - \bar{x}|}{\sum_{i=1}^{n} f_i},$$
and therefore the mean deviation about the mean in the context of a probability distribution is
$$\frac{\sum_{i=1}^{n} p_i\, |x_i - \text{mean}|}{\sum_{i=1}^{n} p_i} = \sum_{i=1}^{n} p_i\, |x_i - \text{mean}|.$$
∴ By the definition of expectation, we have
$$\text{M.D. about mean} = E|X - \text{Mean}| = E|X - E(X)|$$
$$= \begin{cases} \sum_i p_i\, |x_i - \text{Mean}|, & \text{for a discrete r.v.} \\ \int_{-\infty}^{\infty} |x - \text{Mean}|\, f(x)\, dx, & \text{for a continuous r.v.} \end{cases}$$
Note: Other measures defined for frequency distributions in MST-002 can be defined for probability distributions also, and hence can be expressed in terms of expectations in the same manner as the moments, variance and covariance have been defined in this section of the unit.
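As a small illustration, here is the M.D. about the mean for a fair die (a sketch; the die and the helper name are our choices, not part of the unit):

```python
def expectation(values, probs):
    return sum(x * p for x, p in zip(values, probs))

values, probs = [1, 2, 3, 4, 5, 6], [1 / 6] * 6
mean = expectation(values, probs)                         # 3.5
md = expectation([abs(x - mean) for x in values], probs)  # E|X - E(X)|
print(md)  # 1.5
```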
Example 7: Considering the probability distribution given in Example 6, obtain
i) V(X)
ii) V(2X + 3).
Solution:
i) $V(X) = E(X^2) - [E(X)]^2$
   $= 2.2 - (0.2)^2$  [the values have already been obtained in the solution of Example 6]
   $= 2.2 - 0.04 = 2.16$

ii) $V(2X + 3) = (2)^2\, V(X)$  [using the result of Theorem 8.1]
    $= 4V(X) = 4(2.16) = 8.64$

Example 8: If X and Y are independent random variables with variances 2 and 3 respectively, find the variance of 3X + 4Y.
Solution: $V(3X + 4Y) = (3)^2\, V(X) + (4)^2\, V(Y)$  [by Remark 2(iii) of this section]
$= 9(2) + 16(3) = 18 + 48 = 66$

Here are two exercises for you:

E5) If X is a random variable with mean μ and standard deviation σ, then find the variance of the standard random variable $Z = \frac{X - \mu}{\sigma}$.

E6) Suppose that X is a random variable for which E(X) = 10 and V(X) = 25. Find the positive values of a and b such that Y = aX − b has expectation 0 and variance 1.

8.5 ADDITION AND MULTIPLICATION THEOREMS OF EXPECTATION
Now we are going to deal with the properties of expectation in the case of a two-dimensional random variable. Two important properties, i.e. the addition and multiplication laws of expectation, are discussed in the present section.
Addition Theorem of Expectation
Theorem 8.2: If X and Y are random variables, then E(X + Y) = E(X) + E(Y).
Proof:
Discrete case:
Let (X, Y) be a discrete two-dimensional random variable which takes the values $(x_i, y_j)$ with joint probability mass function $p_{ij} = P(X = x_i \cap Y = y_j)$.
Then the probability distribution of X is given by
$$p_i = p(x_i) = P(X = x_i)$$
$$= P(X = x_i \cap Y = y_1) + P(X = x_i \cap Y = y_2) + \ldots \quad [\text{the event } X = x_i \text{ can happen with } Y = y_1 \text{ or } Y = y_2 \text{ or } Y = y_3 \text{ or } \ldots]$$
$$= p_{i1} + p_{i2} + p_{i3} + \ldots = \sum_j p_{ij}.$$
Similarly, the probability distribution of Y is given by
$$p_j = p(y_j) = P(Y = y_j) = \sum_i p_{ij}.$$
$$\therefore\ E(X) = \sum_i x_i p_i, \quad E(Y) = \sum_j y_j p_j, \quad \text{and} \quad E(X + Y) = \sum_i \sum_j (x_i + y_j)\, p_{ij}.$$

Now,
$$E(X + Y) = \sum_i \sum_j (x_i + y_j)\, p_{ij} = \sum_i \sum_j x_i p_{ij} + \sum_i \sum_j y_j p_{ij} = \sum_i x_i \sum_j p_{ij} + \sum_j y_j \sum_i p_{ij}$$
[in the first term of the right-hand side, $x_i$ is free from j and hence can be taken outside the summation over j; in the second term, $y_j$ is free from i and hence can be taken outside the summation over i]
$$\therefore\ E(X + Y) = \sum_i x_i p_i + \sum_j y_j p_j = E(X) + E(Y).$$

Continuous case:
Let (X, Y) be a bivariate continuous random variable with probability density function f(x, y). Let f(x) and f(y) be the marginal probability density functions of the random variables X and Y respectively.
$$\therefore\ E(X) = \int_{-\infty}^{\infty} x f(x)\, dx, \quad E(Y) = \int_{-\infty}^{\infty} y f(y)\, dy,$$
$$\text{and} \quad E(X + Y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x + y)\, f(x, y)\, dy\, dx.$$
Now,
$$E(X + Y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x + y)\, f(x, y)\, dy\, dx$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\, f(x, y)\, dy\, dx + \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y\, f(x, y)\, dy\, dx$$
$$= \int_{-\infty}^{\infty} x \left[\int_{-\infty}^{\infty} f(x, y)\, dy\right] dx + \int_{-\infty}^{\infty} y \left[\int_{-\infty}^{\infty} f(x, y)\, dx\right] dy$$
[in the first term of the R.H.S., x is free from the integral w.r.t. y and hence can be taken outside that integral; similarly, in the second term, y is free from the integral w.r.t. x and can be taken outside that integral]
$$= \int_{-\infty}^{\infty} x f(x)\, dx + \int_{-\infty}^{\infty} y f(y)\, dy \quad [\text{refer to the definition of marginal density functions given in Unit 7 of this course}]$$
$$= E(X) + E(Y)$$

Remark 3: The result can be extended similarly to more than two random variables.
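Note that Theorem 8.2 nowhere uses independence. The sketch below checks E(X + Y) = E(X) + E(Y) on a deliberately dependent joint p.m.f. (the numbers are invented for illustration):

```python
# Dependent joint p.m.f.: X and Y tend to take the same value.
joint = {(1, 1): 0.4, (1, 2): 0.1, (2, 1): 0.1, (2, 2): 0.4}

ex    = sum(x * p for (x, y), p in joint.items())        # E(X) = 1.5
ey    = sum(y * p for (x, y), p in joint.items())        # E(Y) = 1.5
e_sum = sum((x + y) * p for (x, y), p in joint.items())  # E(X + Y)
print(e_sum, ex + ey)  # 3.0 3.0 -- equal despite the dependence
```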
Multiplication Theorem of Expectation
Theorem 8.3: If X and Y are independent random variables, then E(XY) = E(X)E(Y).
Proof:
Discrete case:
Let (X, Y) be a two-dimensional discrete random variable which takes the values $(x_i, y_j)$ with joint probability mass function $p_{ij} = P(X = x_i \cap Y = y_j)$. Let $p_i$ and $p_j'$ be the marginal probability mass functions of X and Y respectively.
$$\therefore\ E(X) = \sum_i x_i p_i, \quad E(Y) = \sum_j y_j p_j', \quad \text{and} \quad E(XY) = \sum_i \sum_j (x_i y_j)\, p_{ij}.$$
But as X and Y are independent,
$$p_{ij} = P(X = x_i \cap Y = y_j) = P(X = x_i)\, P(Y = y_j) \quad [\text{if events A and B are independent, then } P(A \cap B) = P(A)P(B)]$$
$$= p_i\, p_j'.$$
Hence,
$$E(XY) = \sum_i \sum_j (x_i y_j)\, p_i\, p_j' = \sum_i \sum_j (x_i p_i)(y_j p_j') = \sum_i x_i p_i \sum_j y_j p_j' \quad [x_i p_i \text{ is free from j and hence can be taken outside the summation over j}]$$
$$= E(X)E(Y)$$
Continuous case:
Let (X, Y) be a bivariate continuous random variable with probability density function f(x, y). Let f(x) and f(y) be the marginal probability density functions of the random variables X and Y respectively.
$$\therefore\ E(X) = \int_{-\infty}^{\infty} x f(x)\, dx, \quad E(Y) = \int_{-\infty}^{\infty} y f(y)\, dy,$$
$$\text{and} \quad E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f(x, y)\, dy\, dx.$$
Now,
$$E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f(x, y)\, dy\, dx$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f(x) f(y)\, dy\, dx \quad [\because X \text{ and } Y \text{ are independent, } f(x, y) = f(x)f(y) \text{ (see Unit 7 of this course)}]$$
$$= \int_{-\infty}^{\infty} x f(x) \left[\int_{-\infty}^{\infty} y f(y)\, dy\right] dx$$
$$= \left[\int_{-\infty}^{\infty} x f(x)\, dx\right]\left[\int_{-\infty}^{\infty} y f(y)\, dy\right]$$
$$= E(X)E(Y)$$

Remark 4: The result can be extended similarly to more than two random variables.
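Unlike the addition theorem, Theorem 8.3 does require independence. Building the joint p.m.f. as the product of two illustrative marginals (our own numbers) makes E(XY) = E(X)E(Y) hold exactly:

```python
# Independent X and Y: the joint p.m.f. factors into the marginals.
xs, px = [1, 2, 3], [0.2, 0.5, 0.3]  # illustrative marginal of X
ys, py = [0, 1],    [0.6, 0.4]       # illustrative marginal of Y

ex  = sum(x * p for x, p in zip(xs, px))  # E(X) = 2.1
ey  = sum(y * q for y, q in zip(ys, py))  # E(Y) = 0.4
exy = sum(x * y * p * q                   # E(XY) over the product p.m.f.
          for x, p in zip(xs, px)
          for y, q in zip(ys, py))
print(exy, ex * ey)  # both 0.84, up to floating-point rounding
```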
Example 9: Two unbiased dice are thrown. Find the expected value of the sum of the number of points on them.
Solution: Let X be the number obtained on the first die and Y the number obtained on the second die. Then
$$E(X) = \frac{7}{2} \quad \text{and} \quad E(Y) = \frac{7}{2}. \quad [\text{see Example 3 given in Sec. 8.2}]$$
∴ The required expected value is
$$E(X + Y) = E(X) + E(Y) \quad [\text{using the addition theorem of expectation}]$$
$$= \frac{7}{2} + \frac{7}{2} = 7$$
Remark 5: This example can also be done considering one random variable only, as follows:
Let X be the random variable denoting "the sum of the number of points on the dice"; then the probability distribution in this case is

X:    2     3     4     5     6     7     8     9     10    11    12
p(x): 1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

and hence
$$E(X) = 2 \cdot \frac{1}{36} + 3 \cdot \frac{2}{36} + \ldots + 12 \cdot \frac{1}{36} = 7.$$
Example 10: Two cards are drawn one by one with replacement from 8 cards numbered 1 to 8. Find the expectation of the product of the numbers on the drawn cards.
Solution: Let X be the number on the first card and Y the number on the second card. Then the probability distribution of X is

X:    1    2    3    4    5    6    7    8
p(x): 1/8  1/8  1/8  1/8  1/8  1/8  1/8  1/8

and the probability distribution of Y is

Y:    1    2    3    4    5    6    7    8
p(y): 1/8  1/8  1/8  1/8  1/8  1/8  1/8  1/8

$$\therefore\ E(X) = E(Y) = 1 \cdot \frac{1}{8} + 2 \cdot \frac{1}{8} + \ldots + 8 \cdot \frac{1}{8} = \frac{1}{8}(1 + 2 + 3 + 4 + 5 + 6 + 7 + 8) = \frac{1}{8}(36) = \frac{9}{2}.$$
Thus, the required expected value is
$$E(XY) = E(X)E(Y) \quad [\text{using the multiplication theorem of expectation}]$$
$$= \frac{9}{2} \times \frac{9}{2} = \frac{81}{4}.$$

Expectation of a Linear Combination of Random Variables
Theorem 8.4: Let $X_1, X_2, \ldots, X_n$ be any n random variables and let $a_1, a_2, \ldots, a_n$ be any n constants; then
$$E(a_1 X_1 + a_2 X_2 + \ldots + a_n X_n) = a_1 E(X_1) + a_2 E(X_2) + \ldots + a_n E(X_n).$$
[Note: Here $a_1 X_1 + a_2 X_2 + \ldots + a_n X_n$ is a linear combination of $X_1, X_2, \ldots, X_n$.]
Proof: Using the addition theorem of expectation, we have
$$E(a_1 X_1 + a_2 X_2 + \ldots + a_n X_n) = E(a_1 X_1) + E(a_2 X_2) + \ldots + E(a_n X_n)$$
$$= a_1 E(X_1) + a_2 E(X_2) + \ldots + a_n E(X_n). \quad [\text{using the second property of Sec. 8.3}]$$
Now, you can try the following exercises.
E7) Two cards are drawn one by one with replacement from ten cards
numbered 1 to 10. Find the expectation of the sum of points on two
cards.
E8) Find the expectation of the product of number of points on two dice.

Now before ending this unit, let’s summarize what we have covered in it.

8.6 SUMMARY
The following main points have been covered in this unit:
1) The expected value of a random variable X is defined as
$$E(X) = \sum_{i=1}^{n} x_i p_i, \quad \text{if X is a discrete random variable,}$$
$$= \int_{-\infty}^{\infty} x f(x)\, dx, \quad \text{if X is a continuous random variable.}$$
2) Important properties of expectation are:
i) E(k) = k, where k is a constant.
ii) E(kX) = kE(X), k being a constant.
iii) E(aX + b) = aE(X) + b, where a and b are constants.
iv) Addition theorem of expectation: if X and Y are random variables, then E(X + Y) = E(X) + E(Y).
v) Multiplication theorem of expectation: if X and Y are independent random variables, then E(XY) = E(X)E(Y).

vi) If $X_1, X_2, \ldots, X_n$ are any n random variables and $a_1, a_2, \ldots, a_n$ are any n constants, then
$$E(a_1 X_1 + a_2 X_2 + \ldots + a_n X_n) = a_1 E(X_1) + a_2 E(X_2) + \ldots + a_n E(X_n).$$

3) Moments and other measures in terms of expectation are given as:
i) The rth order moment about any point A is
$$\mu_r' = \begin{cases} \sum_i p_i (x_i - A)^r, & \text{if X is a discrete r.v.} \\ \int_{-\infty}^{\infty} (x - A)^r f(x)\, dx, & \text{if X is a continuous r.v.} \end{cases} = E(X - A)^r$$
ii) The variance of a random variable X is
$$V(X) = E(X - \mu)^2 = E[X - E(X)]^2$$
iii) $$\text{Cov}(X, Y) = \begin{cases} \sum_i \sum_j p_{ij} (x_i - \bar{x})(y_j - \bar{y}), & \text{if } (X, Y) \text{ is a discrete r.v.} \\ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - \bar{x})(y - \bar{y})\, f(x, y)\, dy\, dx, & \text{if } (X, Y) \text{ is a continuous r.v.} \end{cases}$$
$$= E[(X - E(X))(Y - E(Y))] = E(XY) - E(X)E(Y)$$
iv) $$\text{M.D. about mean} = E|X - E(X)| = \begin{cases} \sum_i p_i\, |x_i - \text{Mean}|, & \text{for a discrete r.v.} \\ \int_{-\infty}^{\infty} |x - \text{Mean}|\, f(x)\, dx, & \text{for a continuous r.v.} \end{cases}$$
If you want to see what our solutions to the exercises in the unit are, we
have given them in the following section.

8.7 SOLUTIONS/ANSWERS
E1) Let X be the amount (in rupees) won by you.
∴ X can take the values 100 and 0, with P[X = 100] = P[Head] = 1/2 and P[X = 0] = P[Tail] = 1/2.
∴ The probability distribution of X is

X:    100  0
p(x): 1/2  1/2

and hence the expected amount won by you is
$$E(X) = 100 \times \frac{1}{2} + 0 \times \frac{1}{2} = 50.$$
E2) Let X be the number of tosses until a tail turns up.
∴ X can take the values 1, 2, 3, 4, … with
P[X = 1] = P[Tail in the first toss] = 1/2,
P[X = 2] = P[Head in the first and tail in the second toss] = (1/2)(1/2) = (1/2)²,
P[X = 3] = P[HHT] = (1/2)(1/2)(1/2) = (1/2)³, and so on.
∴ The probability distribution of X is

X:    1    2       3       4       5 …
p(x): 1/2  (1/2)²  (1/2)³  (1/2)⁴  (1/2)⁵ …

and hence
$$E(X) = 1 \cdot \frac{1}{2} + 2\left(\frac{1}{2}\right)^2 + 3\left(\frac{1}{2}\right)^3 + 4\left(\frac{1}{2}\right)^4 + \ldots \quad \text{(1)}$$
Multiplying both sides by 1/2, we get
$$\frac{1}{2}E(X) = 1\left(\frac{1}{2}\right)^2 + 2\left(\frac{1}{2}\right)^3 + 3\left(\frac{1}{2}\right)^4 + \ldots \quad \text{(2)}$$
[shifting the position one step towards the right so that terms with the same power sit at the same positions as in (1)]
Now, subtracting (2) from (1), we have
$$E(X) - \frac{1}{2}E(X) = \frac{1}{2} + \left(\frac{1}{2}\right)^2 + \left(\frac{1}{2}\right)^3 + \left(\frac{1}{2}\right)^4 + \ldots$$
$$\Rightarrow \frac{1}{2}E(X) = \frac{1}{2}\left[1 + \frac{1}{2} + \left(\frac{1}{2}\right)^2 + \left(\frac{1}{2}\right)^3 + \ldots\right]$$
$$\Rightarrow E(X) = 1 + \frac{1}{2} + \left(\frac{1}{2}\right)^2 + \left(\frac{1}{2}\right)^3 + \ldots,$$
which is an infinite G.P. with first term a = 1 and common ratio r = 1/2.
$$\therefore\ E(X) = \frac{1}{1 - \frac{1}{2}} = 2. \quad \left[\because S_\infty = \frac{a}{1 - r} \text{ (see Unit 3 of course MST-001)}\right]$$
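The value E(X) = 2 can also be sanity-checked numerically: the partial sums of $\sum_k k(1/2)^k$ converge to 2 very quickly. A one-line sketch:

```python
# Partial sum of k * (1/2)^k for k = 1 .. 59; the remaining tail is negligible.
print(sum(k * 0.5 ** k for k in range(1, 60)))  # ~2.0
```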

E3) $E(X) = \int_{-\infty}^{\infty} x f(x)\, dx$
$$= \int_{-\infty}^{0} x f(x)\, dx + \int_{0}^{1} x f(x)\, dx + \int_{1}^{2} x f(x)\, dx + \int_{2}^{\infty} x f(x)\, dx$$
$$= \int_{-\infty}^{0} x(0)\, dx + \int_{0}^{1} x(x^3)\, dx + \int_{1}^{2} x(2 - x)^3\, dx + \int_{2}^{\infty} x(0)\, dx$$
$$= 0 + \int_{0}^{1} x^4\, dx + \int_{1}^{2} (8x - x^4 - 12x^2 + 6x^3)\, dx + 0$$
$$= \left[\frac{x^5}{5}\right]_0^1 + \left[\frac{8x^2}{2} - \frac{x^5}{5} - \frac{12x^3}{3} + \frac{6x^4}{4}\right]_1^2$$
$$= \frac{1}{5} + \left[\left(16 - \frac{32}{5} - 32 + 24\right) - \left(4 - \frac{1}{5} - 4 + \frac{3}{2}\right)\right]$$
$$= \frac{1}{5} + \left[\left(8 - \frac{32}{5}\right) - \frac{13}{10}\right] = \frac{1}{5} + \left[\frac{8}{5} - \frac{13}{10}\right] = \frac{1}{5} + \frac{3}{10} = \frac{1}{2}.$$
E4) As X is a random variable with mean μ,
$$E(X) = \mu \quad \text{(1)}$$
[expectation is nothing but the average taken over all the possible values of the random variable, as defined in Sec. 8.2]
Now,
$$E(Z) = E\left(\frac{X - \mu}{\sigma}\right) = E\left[\frac{1}{\sigma}(X - \mu)\right]$$
$$= \frac{1}{\sigma}E(X - \mu) \quad [\text{using property 2 of Sec. 8.3}]$$
$$= \frac{1}{\sigma}[E(X) - \mu] \quad [\text{using property 3 of Sec. 8.3}]$$
$$= \frac{1}{\sigma}[\mu - \mu] \quad [\text{using (1)}]$$
$$= 0$$
Note: The mean of a standard random variable is zero.
E5) The variance of the standard random variable $Z = \frac{X - \mu}{\sigma}$ is given as
$$V(Z) = V\left(\frac{X - \mu}{\sigma}\right) = V\left(\frac{X}{\sigma} - \frac{\mu}{\sigma}\right)$$
$$= \frac{1}{\sigma^2}V(X) \quad [\text{using the result of Theorem 8.1 of Sec. 8.4 of this unit}]$$
$$= \frac{1}{\sigma^2}\cdot\sigma^2 = 1 \quad [\text{it is given that the standard deviation of X is } \sigma \text{ and hence its variance is } \sigma^2]$$
Note: The mean of the standard random variate is 0 [see E4)] and its variance is 1.
E6) Given that E(Y) = 0, i.e. E(aX − b) = 0:
$$aE(X) - b = 0 \Rightarrow a(10) - b = 0 \Rightarrow 10a - b = 0 \quad \text{(1)}$$
Also, as V(Y) = 1, we have V(aX − b) = 1:
$$a^2 V(X) = 1 \Rightarrow a^2(25) = 1 \Rightarrow a^2 = \frac{1}{25} \Rightarrow a = \frac{1}{5} \quad [\because a \text{ is positive}]$$
∴ From (1), we have
$$10\left(\frac{1}{5}\right) - b = 0 \Rightarrow 2 - b = 0 \Rightarrow b = 2.$$
Hence, a = 1/5, b = 2.
E7) Let X be the number on the first card and Y the number on the second card. Then the probability distribution of X is

X:    1     2     3     4     5     6     7     8     9     10
p(x): 1/10  1/10  1/10  1/10  1/10  1/10  1/10  1/10  1/10  1/10

and the probability distribution of Y is the same.
$$\therefore\ E(X) = E(Y) = 1 \cdot \frac{1}{10} + 2 \cdot \frac{1}{10} + \ldots + 10 \cdot \frac{1}{10} = \frac{1}{10}(1 + 2 + \ldots + 10) = \frac{1}{10}(55) = 5.5,$$
and hence the required expected value is
$$E(X + Y) = E(X) + E(Y) = 5.5 + 5.5 = 11.$$

E8) Let X be the number obtained on the first die and Y the number obtained on the second die.
Then E(X) = E(Y) = 7/2. [See Example 3 given in Sec. 8.2]
Hence, the required expected value is
$$E(XY) = E(X)E(Y) \quad [\text{using the multiplication theorem of expectation}]$$
$$= \frac{7}{2} \times \frac{7}{2} = \frac{49}{4}.$$
