Chapter 6, Section 5: Transformations of Variables

1. The moment-generating function (mgf) of a random variable characterizes its probability distribution: if two random variables have the same mgf, they have the same distribution.
2. The mgf of a transformed random variable can be determined from the mgf of the original variable using properties such as the mgf of an affine transformation, the mgf of a sum of independent variables, and the mgf of a random sample.
3. Examples show how to use mgfs to determine the distributions of transformed variables, such as sums, and to find their expected values and variances. Results for the normal and binomial distributions can be derived using mgfs.


Chapter 6, Section 5

Transformations of Variables

Method of Moment-Generating
Functions

© John J. Currano, 04/16/2010


1
Method of Moment-Generating Functions

The crucial theorem is:

Theorem 6.1 (p. 318): If X and Y are random variables which both have moment-generating functions, and if

$$m_X(t) = m_Y(t) \quad \text{for all } t \text{ in some interval around } t = 0,$$

then X and Y have the same probability distribution.

2
Method of Moment-Generating Functions

Some other useful facts:

1. If $U = aY + b$, then
$$m_U(t) = E\left(e^{tU}\right) = E\left(e^{aYt + bt}\right) = e^{bt}\, E\left(e^{Y(at)}\right) = e^{bt}\, m_Y(at).$$

2. If $Y_1, Y_2, \ldots, Y_n$ are independent and $U = Y_1 + Y_2 + \cdots + Y_n$, then
$$m_U(t) = E\left(e^{tU}\right) = E\left(e^{tY_1}e^{tY_2}\cdots e^{tY_n}\right)$$
$$= E\left(e^{tY_1}\right) E\left(e^{tY_2}\right) \cdots E\left(e^{tY_n}\right) \quad \text{(by independence)}$$
$$= m_{Y_1}(t)\, m_{Y_2}(t) \cdots m_{Y_n}(t).$$

3
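Property 1 is easy to sanity-check numerically. The sketch below is not part of the original slides: it assumes NumPy, and the exponential test distribution and parameter values are arbitrary choices. It compares a Monte Carlo estimate of $m_U(t)$ for $U = aY + b$ with the closed form $e^{bt} m_Y(at)$, using $m_Y(t) = (1 - \theta t)^{-1}$ for $Y \sim \text{Exp}(\theta)$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, a, b, t = 2.0, 0.5, 1.0, 0.3   # arbitrary test values; need a*t < 1/theta

y = rng.exponential(scale=theta, size=200_000)  # Y ~ Exp(mean theta)
u = a * y + b                                   # affine transformation

emp = np.mean(np.exp(t * u))                    # Monte Carlo estimate of m_U(t)
closed = np.exp(b * t) / (1 - theta * a * t)    # e^{bt} m_Y(at), Property 1

print(emp, closed)   # agree up to Monte Carlo error
```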
Method of Moment-Generating Functions

Some other useful facts:


3. If $Y_1, Y_2, \ldots, Y_n$ are independent and $U = \sum_{i=1}^n a_i Y_i$, then
$$m_U(t) = \prod_{i=1}^n m_{Y_i}(a_i t).$$

4. If $Y_1, Y_2, \ldots, Y_n$ are independent and identically distributed (iid) with common distribution Y, i.e., a Random Sample from Y, and $U = Y_1 + Y_2 + \cdots + Y_n$, then
$$m_U(t) = \left[m_Y(t)\right]^n.$$

4
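Property 4 can be checked the same way. A minimal sketch (again assuming NumPy; the choice of an exponential sample is arbitrary) compares the empirical mgf of $U = Y_1 + \cdots + Y_n$ against $[m_Y(t)]^n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta, t = 5, 2.0, 0.05          # need t < 1/theta for the mgf to exist

y = rng.exponential(scale=theta, size=(200_000, n))  # n iid Exp(mean theta) per row
u = y.sum(axis=1)                                    # U = Y1 + ... + Yn

emp = np.mean(np.exp(t * u))        # Monte Carlo estimate of m_U(t)
closed = (1 - theta * t) ** (-n)    # [m_Y(t)]^n, Property 4

print(emp, closed)                  # both near (1/0.9)^5 = 1.6935
```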
Method of Moment-Generating Functions

Example 1. Suppose that $Y_1, Y_2, \ldots, Y_m$ are independent binomial RVs with $Y_i \sim \text{bin}(n_i, p)$ [same p]. Then for $i = 1, 2, \ldots, m$,
$$m_{Y_i}(t) = \left(q + pe^t\right)^{n_i}.$$
Let $Y = \sum_{i=1}^m Y_i$. Then by Property 2 on slide 3,
$$m_Y(t) = \prod_{i=1}^m m_{Y_i}(t) = \left(q + pe^t\right)^{n_1 + n_2 + \cdots + n_m},$$
so Y has a binomial distribution with $n = \sum_{i=1}^m n_i$ trials and probability of success p. Thus, the sum of independent binomials with the same probability of success p is also binomial.
5
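A quick simulation (assuming NumPy; the $n_i$ and $p$ below are arbitrary) illustrates the conclusion: a sum of independent bin($n_i$, $p$) draws behaves like a single bin($\sum n_i$, $p$) draw:

```python
import numpy as np

rng = np.random.default_rng(2)
ns, p, N = [3, 5, 7], 0.4, 200_000

y = sum(rng.binomial(n, p, size=N) for n in ns)  # Y = Y1 + Y2 + Y3, Yi ~ bin(ni, p)
direct = rng.binomial(sum(ns), p, size=N)        # bin(15, 0.4) sampled directly

print(y.mean(), direct.mean(), sum(ns) * p)          # means near np = 6
print(y.var(), direct.var(), sum(ns) * p * (1 - p))  # variances near npq = 3.6
print(np.mean(y == 6), np.mean(direct == 6))         # matching pmf values at 6
```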
Example 2. Let $Y \sim \text{NegBin}(r, p)$, $X_0 = 0$, and $X_i$ = the number of the trial on which the $i$th success occurs. Then
$$Y_i = X_i - X_{i-1} \sim \text{Geom}(p) \quad \text{for } i = 1, 2, \ldots, r,$$
and $Y_1, Y_2, \ldots, Y_r$ are independent. Also,
$$\sum_{i=1}^r Y_i = \sum_{i=1}^r \left(X_i - X_{i-1}\right) = X_r - X_0 = X_r - 0 = Y$$
(a telescoping sum). Thus, by Property 2 on slide 3,
$$m_Y(t) = \prod_{i=1}^r m_{Y_i}(t) = \left(\frac{pe^t}{1 - qe^t}\right)^{\!r}.$$
We have found the moment-generating functions of the negative binomial distributions using those of the geometric distributions.
6
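This decomposition can be verified by simulation. The sketch below assumes NumPy; note that NumPy's `geometric` counts trials, matching the convention on this slide, while its `negative_binomial` counts failures before the $r$th success, so $r$ must be added back:

```python
import numpy as np

rng = np.random.default_rng(3)
r, p, N = 4, 0.3, 200_000

# Sum of r independent Geom(p) waiting times (NumPy's geometric counts trials).
y_sum = rng.geometric(p, size=(N, r)).sum(axis=1)

# NumPy's negative_binomial counts failures before the r-th success;
# adding r converts to the trial-of-the-r-th-success convention used here.
y_nb = rng.negative_binomial(r, p, size=N) + r

print(y_sum.mean(), y_nb.mean())                  # both near r/p = 13.33
print(np.mean(y_sum == 10), np.mean(y_nb == 10))  # matching pmf values at 10
```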
Example 2 (continued). $Y \sim \text{NegBin}(r, p)$, so
$$m_Y(t) = \left(\frac{pe^t}{1 - qe^t}\right)^{\!r}.$$
We can now use the mgf of Y to find its mean and variance. For example,
$$E(Y) = m_Y'(0) = r\left(\frac{pe^0}{1 - qe^0}\right)^{\!r-1} \cdot \frac{pe^0\left(1 - qe^0\right) + pe^0 \cdot qe^0}{\left(1 - qe^0\right)^2}$$
$$= r\left(\frac{p}{1-q}\right)^{\!r-1} \cdot \frac{p(1-q) + pq}{(1-q)^2} = r\left(\frac{p}{p}\right)^{\!r-1} \cdot \frac{p(p+q)}{p^2} = \frac{r}{p},$$
since $1 - q = p$ and $p + q = 1$.
7
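The differentiation above is mechanical but easy to get wrong by hand; a computer algebra system can confirm it. A minimal sketch assuming SymPy:

```python
import sympy as sp

t, p = sp.symbols('t p', positive=True)
r = sp.symbols('r', positive=True)
q = 1 - p

# mgf of NegBin(r, p) from slide 6
m = (p * sp.exp(t) / (1 - q * sp.exp(t))) ** r

# E(Y) = m'(0)
EY = sp.simplify(sp.diff(m, t).subs(t, 0))
print(EY)                        # r/p

# Var(Y) = m''(0) - [m'(0)]^2
EY2 = sp.simplify(sp.diff(m, t, 2).subs(t, 0))
print(sp.simplify(EY2 - EY**2))  # r*(1 - p)/p**2, i.e. rq/p^2
```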
Example 3. Let $Y \sim N(\mu, \sigma^2)$ and $Z = \dfrac{Y - \mu}{\sigma} = \dfrac{1}{\sigma}\,(Y - \mu)$. Then
$$m_Y(t) = \exp\left(\mu t + \tfrac{1}{2}\,\sigma^2 t^2\right),$$
so, by Property 1 on slide 3,
$$m_Z(t) = \exp\left(-\frac{\mu}{\sigma}\,t\right) m_Y\!\left(\frac{1}{\sigma}\,t\right) = \exp\left(-\frac{\mu}{\sigma}\,t\right) \exp\left(\mu\,\frac{t}{\sigma} + \tfrac{1}{2}\,\sigma^2\,\frac{t^2}{\sigma^2}\right) = \exp\left(-\frac{\mu t}{\sigma} + \frac{\mu t}{\sigma} + \tfrac{1}{2}\,t^2\right) = \exp\left(\tfrac{1}{2}\,t^2\right).$$

So $Z \sim N(0, 1)$: Z has a standard normal distribution. We proved this fact in Chapter 4 using the Distribution Function method; this is simpler.

Transforming a random variable Y by subtracting its mean and dividing by its standard deviation is called standardizing Y.
8
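Standardizing is easy to see in simulation. A minimal sketch (assuming NumPy; $\mu = 10$ and $\sigma = 3$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 10.0, 3.0

y = rng.normal(mu, sigma, size=200_000)  # Y ~ N(10, 9)
z = (y - mu) / sigma                     # standardized

print(z.mean(), z.std())   # near 0 and 1
print(np.mean(z > 1.96))   # near the standard normal tail P(Z > 1.96) = 0.025
```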
Example 4. Let $Z \sim N(0, 1)$ and $Y = \mu + \sigma Z$. Then
$$m_Z(t) = \exp\left(\tfrac{1}{2}\,t^2\right),$$
so, by Property 1 on slide 3,
$$m_Y(t) = \exp(\mu t)\, m_Z(\sigma t) = \exp(\mu t)\exp\left(\tfrac{1}{2}\,\sigma^2 t^2\right) = \exp\left(\mu t + \tfrac{1}{2}\,\sigma^2 t^2\right).$$

So $Y \sim N(\mu, \sigma^2)$: Y has a normal distribution with mean $\mu$ and variance $\sigma^2$. We also proved this in Chapter 4 using the Distribution Function method; again this method is simpler.

Transforming a standard normal random variable Z in this fashion is a way of simulating other normal random variables.
9
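This is in fact how general normals are commonly simulated from standard normal draws. A short sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma = -2.0, 0.5

z = rng.standard_normal(200_000)   # Z ~ N(0, 1)
y = mu + sigma * z                 # Y = mu + sigma*Z ~ N(mu, sigma^2)

print(y.mean(), y.var())           # near -2 and 0.25
```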
Example 5. If $Y_i \sim N(\mu_i, \sigma_i^2)$ are independent for $i = 1, 2, \ldots, n$, then
$$m_{Y_i}(t) = \exp\left(\mu_i t + \tfrac{1}{2}\,\sigma_i^2 t^2\right) \quad \text{for } 1 \le i \le n.$$
Let $Y = \sum_{i=1}^n a_i Y_i$. Then by Property 3 on slide 4,
$$m_Y(t) = \prod_{i=1}^n m_{Y_i}(a_i t) = \prod_{i=1}^n \exp\left(\mu_i a_i t + \tfrac{1}{2}\,\sigma_i^2 (a_i t)^2\right) = \exp\left(t \sum_{i=1}^n a_i \mu_i + \tfrac{1}{2}\, t^2 \sum_{i=1}^n a_i^2 \sigma_i^2\right),$$
so
$$Y \sim N\!\left(\sum_{i=1}^n a_i \mu_i,\; \sum_{i=1}^n a_i^2 \sigma_i^2\right).$$
Thus, a linear combination of independent normal random variables is also normal.
10
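A simulation sketch of Example 5 (assuming NumPy; the coefficients and parameters below are arbitrary) checks the claimed mean and variance:

```python
import numpy as np

rng = np.random.default_rng(6)
a = np.array([2.0, -1.0, 0.5])      # coefficients a_i
mus = np.array([1.0, 4.0, -3.0])    # means mu_i
sigmas = np.array([1.0, 2.0, 0.5])  # standard deviations sigma_i

ys = rng.normal(mus, sigmas, size=(200_000, 3))  # independent Yi ~ N(mu_i, sigma_i^2)
y = ys @ a                                       # Y = sum a_i Y_i

print(y.mean(), a @ mus)          # empirical vs. sum a_i mu_i = -3.5
print(y.var(), a**2 @ sigmas**2)  # empirical vs. sum a_i^2 sigma_i^2 = 8.0625
```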
Special Case. Suppose $Y_1, Y_2, \ldots, Y_n$ are a random sample from a $N(\mu, \sigma^2)$ distribution, and let
$$\bar{Y} = \frac{1}{n}\sum_{i=1}^n Y_i,$$
the sample mean. Then by Example 5 (with each $a_i = 1/n$),
$$\bar{Y} \sim N\!\left(\mu,\; \sigma^2/n\right).$$

Thus the sample mean of a random sample of size n from a normal distribution also has a normal distribution. Its mean is the same as that of the original distribution, and its variance is smaller: $\sigma^2/n$ instead of $\sigma^2$.

This will be used often next semester.
11
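The variance shrinkage is easy to demonstrate. A minimal sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n = 5.0, 2.0, 16

# 100,000 samples of size n; each row gives one sample mean
ybar = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

print(ybar.mean())   # near mu = 5
print(ybar.var())    # near sigma^2 / n = 0.25
```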
Example 6. (p. 319, Example 6.11) Let $Z \sim N(0, 1)$, so $f(z) = \dfrac{1}{\sqrt{2\pi}}\, e^{-z^2/2}$, and let $Y = Z^2$. Then
$$m_Y(t) = E\left(e^{tZ^2}\right) = \int_{-\infty}^{\infty} e^{tz^2}\, \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\, dz = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-(1-2t)z^2/2}\, dz = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-z^2/\left(2(1-2t)^{-1}\right)}\, dz.$$
12
Example 6 (continued). Let $Z \sim N(0, 1)$ and $Y = Z^2$. Then
$$m_Y(t) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-z^2/\left(2(1-2t)^{-1}\right)}\, dz.$$

For $t < \tfrac{1}{2}$, the integrand is proportional to the density function of the normal distribution with $\mu = 0$ and $\sigma^2 = (1 - 2t)^{-1}$. Thus, if we multiply by $(1-2t)^{1/2}$ inside the integral and by $(1-2t)^{-1/2}$ outside, the integral becomes 1, and we obtain
$$m_Y(t) = (1-2t)^{-1/2} \cdot 1 = (1-2t)^{-1/2}.$$

Therefore, $Y = Z^2$ has a $\chi^2(1)$ distribution. We proved this in Chapter 4 using the Distribution Function method. Once again, this method is simpler.
13
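As a final check, a simulation sketch (assuming NumPy) compares a Monte Carlo estimate of $m_Y(t)$ for $Y = Z^2$ against $(1-2t)^{-1/2}$, and the sample moments against the $\chi^2(1)$ values:

```python
import numpy as np

rng = np.random.default_rng(8)
y = rng.standard_normal(200_000) ** 2   # Y = Z^2

t = 0.2                                 # the mgf exists for t < 1/2
print(np.mean(np.exp(t * y)), (1 - 2 * t) ** -0.5)  # both near 1.291

print(y.mean(), y.var())                # chi-square(1) has mean 1, variance 2
```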
