Chapter 6, Section 5: Transformations of Variables
Method of Moment-Generating Functions
Two properties of moment-generating functions:

1. If $U = aY + b$, then
$m_U(t) = E(e^{tU}) = E(e^{aYt + bt}) = e^{bt} E(e^{(at)Y}) = e^{bt} m_Y(at).$

2. If $Y_1, Y_2, \dots, Y_n$ are independent and $U = Y_1 + Y_2 + \cdots + Y_n$, then
$m_U(t) = E(e^{tU}) = E(e^{tY_1} e^{tY_2} \cdots e^{tY_n}) = E(e^{tY_1}) E(e^{tY_2}) \cdots E(e^{tY_n})$ by independence
$= m_{Y_1}(t)\, m_{Y_2}(t) \cdots m_{Y_n}(t).$
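These two properties are easy to sanity-check numerically. The sketch below is not part of the original slides: it estimates the mgfs by Monte Carlo, and the Gamma inputs and the values of a, b, and t are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
t = 0.3                                   # a value of t at which all mgfs below exist

# Property 1: U = a*Y + b  implies  m_U(t) = e^{bt} m_Y(at)
a, b = 2.0, 1.5
Y = rng.gamma(shape=3.0, scale=0.5, size=n)
lhs = np.mean(np.exp(t * (a * Y + b)))                 # empirical m_U(t)
rhs = np.exp(b * t) * np.mean(np.exp(a * t * Y))       # e^{bt} * empirical m_Y(at)
print(lhs, rhs)                                        # agree up to simulation error

# Property 2: U = Y1 + Y2 + Y3 (independent)  implies  m_U(t) = m_{Y1}(t) m_{Y2}(t) m_{Y3}(t)
Ys = [rng.gamma(shape=k, scale=0.5, size=n) for k in (1.0, 2.0, 3.0)]
lhs = np.mean(np.exp(t * sum(Ys)))                     # empirical m_U(t)
rhs = np.prod([np.mean(np.exp(t * Yi)) for Yi in Ys])  # product of empirical mgfs
print(lhs, rhs)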
Example 1. Let $Y_1, Y_2, \dots, Y_m$ be independent binomial random variables, where $Y_i$ is based on $n_i$ trials with the same probability of success $p$, so that $m_{Y_i}(t) = (q + pe^t)^{n_i}$ with $q = 1 - p$.
Let $Y = \sum_{i=1}^{m} Y_i$. Then by Property 2 above,

$m_Y(t) = \prod_{i=1}^{m} m_{Y_i}(t) = \left(q + pe^t\right)^{n_1 + n_2 + \cdots + n_m},$

so $Y$ has a binomial distribution with $n = \sum_{i=1}^{m} n_i$ trials and probability of success $p$. Thus, the sum of independent binomials with the same probability of success $p$ is also binomial.
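As a quick illustration (not from the slides), the conclusion can be verified by convolving the exact pmfs; the values of p and the n_i below are arbitrary, and scipy is assumed to be available.

import numpy as np
from scipy.stats import binom

p = 0.3
ns = [4, 6, 10]

pmf = np.array([1.0])                     # pmf of the empty sum (point mass at 0)
for ni in ns:
    pmf = np.convolve(pmf, binom.pmf(np.arange(ni + 1), ni, p))   # add an independent Bin(ni, p)

target = binom.pmf(np.arange(sum(ns) + 1), sum(ns), p)            # Bin(n1+n2+n3, p)
print(np.max(np.abs(pmf - target)))       # essentially 0: the two pmfs coincide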
Example 2. Let $Y \sim \text{NegBin}(r, p)$, $X_0 = 0$, and $X_i$ = the number of the trial on which the $i$th success occurs. Then

$Y_i = X_i - X_{i-1} \sim \text{Geom}(p)$ for $i = 1, 2, \dots, r$,

and $Y_1, Y_2, \dots, Y_r$ are independent. Also,

$\sum_{i=1}^{r} Y_i = \sum_{i=1}^{r} (X_i - X_{i-1}) = X_r - X_0 = X_r - 0 = Y$  (telescoping sum).

Thus, by Property 2 above,

$m_Y(t) = \prod_{i=1}^{r} m_{Y_i}(t) = \left(\frac{pe^t}{1 - qe^t}\right)^{r}.$

We have found the moment-generating functions of the negative binomial distributions using those of the geometric distributions.
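A numerical cross-check (not from the slides): estimate $E(e^{tY})$ by simulating sums of r independent geometrics and compare with the formula above. The values of r, p, and t are arbitrary, with t < -ln(q) so the mgf exists.

import numpy as np

rng = np.random.default_rng(1)
r, p = 5, 0.4
q = 1 - p
t = 0.2                                   # -ln(q) ≈ 0.51, so the mgf exists at t

# numpy's geometric counts the trial of the first success, matching the convention here
Y = rng.geometric(p, size=(200_000, r)).sum(axis=1)     # Y = Y1 + ... + Yr, Yi ~ Geom(p)
mc = np.mean(np.exp(t * Y))                             # Monte Carlo estimate of E(e^{tY})
formula = (p * np.exp(t) / (1 - q * np.exp(t)))**r      # (p e^t / (1 - q e^t))^r
print(mc, formula)                                      # should agree closely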
Example 2 (continued). $Y \sim \text{NegBin}(r, p)$, with $m_Y(t) = \left(\dfrac{pe^t}{1 - qe^t}\right)^{r}$.

We can now use the mgf of $Y$ to find its mean and variance. For example,

$E(Y) = m_Y'(0) = r\left(\frac{pe^0}{1 - qe^0}\right)^{r-1}\frac{pe^0(1 - qe^0) + pe^0 \cdot qe^0}{(1 - qe^0)^2}
= r\left(\frac{p}{1 - q}\right)^{r-1}\frac{p(1 - q) + pq}{(1 - q)^2}
= r\left(\frac{p}{p}\right)^{r-1}\frac{p(p + q)}{p^2}
= \frac{r}{p},$

since $1 - q = p$ and $p + q = 1$.
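The differentiation can also be left to a computer algebra system; the sketch below (not from the slides) uses sympy to differentiate the mgf and evaluate at t = 0, and likewise extracts the variance.

import sympy as sp

t, p, r = sp.symbols('t p r', positive=True)
q = 1 - p
mgf = (p * sp.exp(t) / (1 - q * sp.exp(t)))**r               # negative binomial mgf

mean = sp.simplify(sp.diff(mgf, t).subs(t, 0))               # m_Y'(0)
var = sp.simplify(sp.diff(mgf, t, 2).subs(t, 0) - mean**2)   # m_Y''(0) - [m_Y'(0)]^2
print(mean)    # r/p
print(var)     # r*q/p**2 (possibly printed in an equivalent form)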
Example 3. Let $Y \sim N(\mu, \sigma^2)$ and $Z = \dfrac{Y - \mu}{\sigma} = \dfrac{1}{\sigma}Y - \dfrac{\mu}{\sigma}$.

Then $m_Y(t) = \exp\!\left(\mu t + \tfrac{1}{2}\sigma^2 t^2\right)$, so by Property 1 above,

$m_Z(t) = \exp\!\left(-\tfrac{\mu}{\sigma}t\right) m_Y\!\left(\tfrac{t}{\sigma}\right)
= \exp\!\left(-\tfrac{\mu t}{\sigma}\right)\exp\!\left(\tfrac{\mu t}{\sigma} + \tfrac{1}{2}\sigma^2 \tfrac{t^2}{\sigma^2}\right)
= \exp\!\left(\tfrac{t^2}{2}\right),$

which is the mgf of a standard normal, so $Z \sim N(0, 1)$.
Example 4. Conversely, let $Z \sim N(0, 1)$ and $Y = \sigma Z + \mu$. Then $m_Z(t) = \exp\!\left(\tfrac{1}{2}t^2\right)$, so by Property 1 above,

$m_Y(t) = \exp(\mu t)\, m_Z(\sigma t) = \exp(\mu t)\exp\!\left(\tfrac{1}{2}\sigma^2 t^2\right) = \exp\!\left(\mu t + \tfrac{1}{2}\sigma^2 t^2\right).$

So $Y \sim N(\mu, \sigma^2)$: $Y$ has a normal distribution with mean $\mu$ and variance $\sigma^2$. We also proved this in Chapter 4 using the Distribution Function method; again, the mgf method is simpler.
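Both directions can be checked symbolically; the sketch below (not from the slides) uses sympy and assumes only the normal mgf $\exp(\mu t + \tfrac{1}{2}\sigma^2 t^2)$ quoted above.

import sympy as sp

t, mu = sp.symbols('t mu', real=True)
sigma = sp.symbols('sigma', positive=True)

m_Y = lambda s: sp.exp(mu * s + sp.Rational(1, 2) * sigma**2 * s**2)   # mgf of N(mu, sigma^2)
m_Z = lambda s: sp.exp(sp.Rational(1, 2) * s**2)                       # mgf of N(0, 1)

# Example 3: Z = (Y - mu)/sigma, so m_Z(t) = exp(-(mu/sigma) t) * m_Y(t/sigma)
print(sp.simplify(sp.exp(-mu * t / sigma) * m_Y(t / sigma)))    # exp(t**2/2)

# Example 4: Y = sigma*Z + mu, so m_Y(t) = exp(mu t) * m_Z(sigma t)
print(sp.simplify(sp.exp(mu * t) * m_Z(sigma * t)))             # exp(mu*t + sigma**2*t**2/2)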
Example 5. If $Y_i \sim N(\mu_i, \sigma_i^2)$ are independent for $i = 1, 2, \dots, n$, then $m_{Y_i}(t) = \exp\!\left(\mu_i t + \tfrac{1}{2}\sigma_i^2 t^2\right)$ for $1 \le i \le n$. Let $Y = \sum_{i=1}^{n} a_i Y_i$. Then by Properties 1 and 2 above,

$m_Y(t) = \prod_{i=1}^{n} m_{Y_i}(a_i t) = \exp\!\left(t\sum_{i=1}^{n} a_i \mu_i + \tfrac{1}{2}t^2\sum_{i=1}^{n} a_i^2 \sigma_i^2\right),$

so $Y \sim N\!\left(\sum_{i=1}^{n} a_i \mu_i,\; \sum_{i=1}^{n} a_i^2 \sigma_i^2\right)$. Thus, a linear combination of independent normal random variables is also normally distributed.
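A small simulation (not from the slides) illustrates the result: the coefficients $a_i$ and the parameters below are arbitrary, and the empirical mean, standard deviation, and quantiles of Y are compared with the normal distribution predicted by Example 5.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
a   = np.array([2.0, -1.0, 0.5])          # coefficients a_i
mus = np.array([1.0,  3.0, -2.0])         # means mu_i
sds = np.array([1.0,  2.0,  0.5])         # standard deviations sigma_i

draws = rng.normal(mus, sds, size=(200_000, 3))
Y = draws @ a                              # Y = sum_i a_i Y_i

m = a @ mus                                # predicted mean: sum a_i mu_i
s = np.sqrt(a**2 @ sds**2)                 # predicted sd: sqrt(sum a_i^2 sigma_i^2)
print(Y.mean(), m)
print(Y.std(), s)
print(np.quantile(Y, [0.1, 0.5, 0.9]))                  # empirical quantiles of Y
print(norm.ppf([0.1, 0.5, 0.9], loc=m, scale=s))        # quantiles of N(m, s^2)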
Special Case. Suppose $Y_1, Y_2, \dots, Y_n$ are a random sample from a $N(\mu, \sigma^2)$ distribution, and let $\bar{Y} = \dfrac{1}{n}\sum_{i=1}^{n} Y_i$, the sample mean. Then by Example 5 (with $a_i = 1/n$), $\bar{Y} \sim N(\mu, \sigma^2/n)$. Its mean is the same as that of the original distribution, and its variance is smaller: $\sigma^2/n$ instead of $\sigma^2$.
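The mgf calculation behind this special case can be spelled out symbolically; the sketch below (not from the slides) uses sympy with a concrete sample size n = 4 so the simplification is automatic, but the same pattern holds for any n.

import sympy as sp

t, mu = sp.symbols('t mu', real=True)
sigma = sp.symbols('sigma', positive=True)
n = 4                                      # concrete sample size for illustration

m_Y = lambda s: sp.exp(mu * s + sp.Rational(1, 2) * sigma**2 * s**2)   # mgf of N(mu, sigma^2)

# Ybar = (1/n) sum Y_i, so by Properties 1 and 2:  m_Ybar(t) = [m_Y(t/n)]^n
m_Ybar = sp.simplify(m_Y(t / n)**n)
print(m_Ybar)                              # exp(mu*t + sigma**2*t**2/8), the N(mu, sigma^2/n) mgf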
Example 6. (p. 319, Example 6.11) Let $Z \sim N(0, 1)$, so $f(z) = \dfrac{1}{\sqrt{2\pi}}e^{-z^2/2}$, and let $Y = Z^2$. Then

$m_Y(t) = E\!\left(e^{tZ^2}\right) = \int_{-\infty}^{\infty} e^{tz^2}\,\frac{1}{\sqrt{2\pi}}e^{-z^2/2}\,dz
= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-(1-2t)z^2/2}\,dz
= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-z^2/\left(2(1-2t)^{-1}\right)}\,dz.$
Example 6 (continued). Let $Z \sim N(0, 1)$ and $Y = Z^2$. Then

$m_Y(t) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-z^2/\left(2(1-2t)^{-1}\right)}\,dz.$
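As a numerical sanity check (not from the book), for a few values of t < 1/2 the integral above can be evaluated with scipy and compared with a direct Monte Carlo estimate of $E(e^{tZ^2})$; the values of t and the sample size are arbitrary.

import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(3)
Z = rng.standard_normal(500_000)

for t in (0.05, 0.1, 0.2):                 # the mgf of Z^2 exists only for t < 1/2
    integrand = lambda z: np.exp(-(1 - 2 * t) * z**2 / 2) / np.sqrt(2 * np.pi)
    integral, _ = quad(integrand, -np.inf, np.inf)     # the integral derived above
    mc = np.mean(np.exp(t * Z**2))                     # Monte Carlo estimate of E(e^{t Z^2})
    print(t, integral, mc)                             # the two values should agree closely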