Lecture 6 - Fall 2023
\[
E(X^n) =
\begin{cases}
\sum_{x \in R_X} x^n f(x) & \text{if } X \text{ is discrete} \\[4pt]
\int_{-\infty}^{\infty} x^n f(x)\,dx & \text{if } X \text{ is continuous}
\end{cases}
\]
MOMENT GENERATING FUNCTIONS
\[
M(t) = E(e^{tX})
\]
is called the moment generating function of X if this expected value exists for all t in the interval \(-h < t < h\) for some \(h > 0\).
\[
e^{tx} = 1 + tx + \frac{t^2 x^2}{2!} + \frac{t^3 x^3}{3!} + \cdots + \frac{t^r x^r}{r!} + \cdots
\]
Thus, in the discrete case, we get
\[
\begin{aligned}
M(t) &= \sum_x \left[ 1 + tx + \frac{t^2 x^2}{2!} + \frac{t^3 x^3}{3!} + \cdots + \frac{t^r x^r}{r!} + \cdots \right] f(x) \\
&= \sum_x f(x) + t \sum_x x f(x) + \frac{t^2}{2!} \sum_x x^2 f(x) + \cdots + \frac{t^r}{r!} \sum_x x^r f(x) + \cdots \\
&= 1 + E(X)\,t + E(X^2)\,\frac{t^2}{2!} + \cdots + E(X^r)\,\frac{t^r}{r!} + \cdots
\end{aligned}
\]
Theorem: \(\left.\dfrac{d^r M(t)}{dt^r}\right|_{t=0} = E(X^r)\).
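As a quick numerical sanity check of this theorem (a sketch, not part of the lecture): differentiating an MGF at \(t = 0\) by finite differences should reproduce the moments. Here X is taken to be Bernoulli with the illustrative choice \(p = 0.3\), whose MGF is \(M(t) = (1-p) + p e^t\); both \(E(X)\) and \(E(X^2)\) equal \(p\) since \(X^2 = X\).

```python
import math

# Illustrative example: X ~ Bernoulli(p) has MGF M(t) = (1 - p) + p*e^t.
p = 0.3

def M(t):
    return (1 - p) + p * math.exp(t)

h = 1e-4
# First derivative of M at 0 via central difference -> E(X) = p
m1 = (M(h) - M(-h)) / (2 * h)
# Second derivative of M at 0 -> E(X^2) = p (since X^2 = X for Bernoulli)
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2

print(round(m1, 4), round(m2, 4))
```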
Example: Let X have the PDF
\[
f(x) =
\begin{cases}
\frac{1}{2} e^{-x/2} & \text{if } x > 0 \\
0 & \text{otherwise.}
\end{cases}
\]
Recall
\[
M(t) = \frac{1}{1 - 2t}, \qquad t < \frac{1}{2}.
\]
Then
\[
M'(t) = \frac{2}{(1 - 2t)^2}, \qquad M''(t) = \frac{8}{(1 - 2t)^3}, \qquad t < \frac{1}{2},
\]
and hence
\[
E(X) = 2, \quad E(X^2) = 8, \quad \text{and} \quad Var(X) = E(X^2) - [E(X)]^2 = 8 - 4 = 4.
\]
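A quick numerical check of this example (a sketch, not from the lecture): differentiating \(M(t) = 1/(1-2t)\) at \(t = 0\) by finite differences should reproduce \(E(X) = 2\), \(E(X^2) = 8\), and \(Var(X) = 4\).

```python
# Sketch: recover the moments of M(t) = 1/(1 - 2t) numerically.
def M(t):
    return 1.0 / (1.0 - 2.0 * t)

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)          # M'(0)  -> E(X)   = 2
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # M''(0) -> E(X^2) = 8
var = m2 - m1**2                       # Var(X) -> 4
print(round(m1, 3), round(m2, 3), round(var, 3))
```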
1. Uniform Distribution

A random variable X is said to be uniform on the interval [a, b] if its probability density function is of the form
\[
f(x) = \frac{1}{b - a}, \qquad a \le x \le b,
\]
where a and b are constants.
\[
E(X) = \mu_X = \frac{b + a}{2}
\]
\[
Var(X) = \sigma_X^2 = \frac{(b - a)^2}{12}
\]
\[
M_X(t) =
\begin{cases}
1 & \text{if } t = 0 \\[4pt]
\dfrac{e^{tb} - e^{ta}}{t(b - a)} & \text{otherwise.}
\end{cases}
\]
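The mean and variance formulas above can be verified numerically; this sketch integrates \(x f(x)\) and \(x^2 f(x)\) by the midpoint rule on an illustrative interval \([a, b] = [2, 5]\) (these endpoints are not from the lecture).

```python
# Sketch: check E(X) and Var(X) for X ~ UNIF(a, b) by numerical integration,
# with f(x) = 1/(b - a) and illustrative endpoints a = 2, b = 5.
a, b = 2.0, 5.0
n = 100000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]  # midpoints of each subinterval

mean = sum(x / (b - a) for x in xs) * dx           # -> (a + b)/2 = 3.5
second = sum(x * x / (b - a) for x in xs) * dx     # second moment E(X^2)
var = second - mean**2                             # -> (b - a)^2/12 = 0.75
print(round(mean, 4), round(var, 4))
```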
EXERCISE: Suppose \(Y \sim \mathrm{UNIF}(0, 1)\) and \(Y = \frac{1}{4} X^2\). What is the probability density function of X?
2. Exponential Distribution: A continuous random variable is said to be an exponential random variable with parameter \(\theta\) if its probability density function is of the form
\[
f(x; \theta) =
\begin{cases}
\frac{1}{\theta} e^{-x/\theta} & \text{if } x > 0 \\
0 & \text{otherwise,}
\end{cases}
\]
where \(\theta > 0\). Equivalently, in terms of the rate \(\lambda = 1/\theta\), the density is \(f(x; \lambda) = \lambda e^{-\lambda x}\) for \(x > 0\), where \(\lambda > 0\).
Theorem: If \(X \sim \mathrm{EXP}(\lambda)\) (rate \(\lambda = 1/\theta\)), then \(E(X) = \dfrac{1}{\lambda}\) and \(Var(X) = \dfrac{1}{\lambda^2}\).
EXERCISE: What is the cumulative distribution function of a random variable which has an exponential distribution with variance 25?
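One way to sketch this exercise (a worked outline, not an official solution): \(Var(X) = \theta^2 = 25\) forces \(\theta = 5\), so \(F(x) = 1 - e^{-x/5}\) for \(x > 0\) and \(F(x) = 0\) otherwise. The snippet below cross-checks this CDF against a numerical integral of the density.

```python
import math

# Sketch of the exercise: variance theta^2 = 25 gives theta = 5, so the CDF
# should be F(x) = 1 - exp(-x/5) for x > 0 (and 0 otherwise).
theta = 5.0

def F(x):
    return 1.0 - math.exp(-x / theta) if x > 0 else 0.0

# Cross-check F(4) by integrating the density (1/theta)*exp(-x/theta) on [0, 4]
# with the midpoint rule.
n = 200000
dx = 4.0 / n
integral = sum(
    (1.0 / theta) * math.exp(-(i + 0.5) * dx / theta) for i in range(n)
) * dx
print(round(F(4), 6), round(integral, 6))
```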
3. Normal Distribution: The normal distribution plays a central role in probability and statistics and was discovered by the French mathematician Abraham DeMoivre (1667-1754). This distribution is also called the Gaussian distribution after Carl Friedrich Gauss, who proposed it as a model for measurement errors.
If \(X \sim N(\mu, \sigma^2)\), then
\[
E(X) = \mu_X = \mu, \qquad Var(X) = \sigma_X^2 = \sigma^2, \qquad M_X(t) = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.
\]
Proof.
\[
\begin{aligned}
M_X(t) &= \int_{-\infty}^{\infty} e^{tx} \, \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{tx} \, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2\sigma^2}\left(-2\sigma^2 x t + (x-\mu)^2\right)} dx.
\end{aligned}
\]
Note that
\[
-2\sigma^2 x t + (x - \mu)^2 = \left[x - (\mu + \sigma^2 t)\right]^2 - 2\mu t \sigma^2 - t^2 \sigma^4,
\]
and hence
\[
M_X(t) = e^{\mu t + \frac{1}{2}\sigma^2 t^2} \left\{ \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}\left(\frac{x - (\mu + \sigma^2 t)}{\sigma}\right)^2} dx \right\}.
\]
Since the quantity inside the braces is the integral from \(-\infty\) to \(\infty\) of a normal probability density function with parameters \(\mu + \sigma^2 t\) and \(\sigma\), it equals 1, and it follows that
\[
M_X(t) = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.
\]
Further,
\[
M_X'(t) = (\mu + \sigma^2 t)\, e^{\mu t + \frac{1}{2}\sigma^2 t^2},
\]
\[
M_X''(t) = \sigma^2 e^{\mu t + \frac{1}{2}\sigma^2 t^2} + (\mu + \sigma^2 t)^2\, e^{\mu t + \frac{1}{2}\sigma^2 t^2}.
\]
\[
\implies E(X) = M_X'(0) = \mu \quad \text{and} \quad Var(X) = M_X''(0) - [M_X'(0)]^2 = \sigma^2.
\]
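These two derivatives can be spot-checked numerically (a sketch; the values \(\mu = 1.5\), \(\sigma = 2\) are illustrative, not from the lecture): finite differences of \(M_X(t) = e^{\mu t + \sigma^2 t^2/2}\) at \(t = 0\) should recover \(\mu\) and \(\sigma^2\).

```python
import math

# Sketch: check E(X) = mu and Var(X) = sigma^2 from the normal MGF
# M(t) = exp(mu*t + sigma^2*t^2/2); mu and sigma chosen for illustration.
mu, sigma = 1.5, 2.0

def M(t):
    return math.exp(mu * t + 0.5 * sigma**2 * t**2)

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)          # M'(0)  -> mu
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # M''(0) -> mu^2 + sigma^2
print(round(m1, 3), round(m2 - m1**2, 3))  # mean and variance
```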
Examples:
1. P(Z > 1.26)
2. P(Z > 1.37)
3. P(Z < 0.86)
4. P(-1.25 < Z < 0.37)
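Rather than a printed table, the standard normal CDF can be evaluated directly via the error function, \(\Phi(z) = \frac{1}{2}\left(1 + \operatorname{erf}(z/\sqrt{2})\right)\). This sketch computes the four probabilities above as written (the lecture itself would read the values off a Z-table).

```python
import math

# Sketch: standard normal CDF via the error function,
# Phi(z) = (1 + erf(z/sqrt(2)))/2.
def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(1 - Phi(1.26), 4))           # 1. P(Z > 1.26)
print(round(1 - Phi(1.37), 4))           # 2. P(Z > 1.37)
print(round(Phi(0.86), 4))               # 3. P(Z < 0.86)
print(round(Phi(0.37) - Phi(-1.25), 4))  # 4. P(-1.25 < Z < 0.37)
```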
Theorem: If \(X \sim N(\mu, \sigma^2)\), then the random variable \(Z = \dfrac{X - \mu}{\sigma} \sim N(0, 1)\).
Proof. For the CDF of Z,
\[
\begin{aligned}
F(z) &= P(Z \le z) \\
&= P\!\left(\frac{X - \mu}{\sigma} \le z\right) \\
&= P(X \le \sigma z + \mu) \\
&= \int_{-\infty}^{\sigma z + \mu} \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2} dx \\
&= \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} \, e^{-\frac{1}{2}w^2} dw \qquad \left(\text{where } w = \frac{x - \mu}{\sigma}\right).
\end{aligned}
\]
Hence
\[
f(z) = F'(z) = \frac{1}{\sqrt{2\pi}} \, e^{-\frac{1}{2}z^2},
\]
which is the N(0, 1) density. This completes the proof.
Example: If \(X \sim N(3, 16)\), then what is \(P(4 \le X \le 8)\)?
\[
\begin{aligned}
P(4 \le X \le 8) &= P\!\left(\frac{4 - 3}{4} \le \frac{X - 3}{4} \le \frac{8 - 3}{4}\right) \\
&= P\!\left(\frac{1}{4} \le Z \le \frac{5}{4}\right) \\
&= 0.8944 - 0.5987 \\
&= 0.2957.
\end{aligned}
\]
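The same example can be checked without a table (a sketch using Python's standard-library error function; it agrees with the table-based 0.2957 to about three decimal places, the small gap being table rounding).

```python
import math

# Sketch: redo the example X ~ N(3, 16) numerically, using
# Phi(z) = (1 + erf(z/sqrt(2)))/2 for the standard normal CDF.
mu, sigma = 3.0, 4.0  # variance 16 => sigma = 4

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p = Phi((8 - mu) / sigma) - Phi((4 - mu) / sigma)  # P(0.25 <= Z <= 1.25)
print(round(p, 4))  # close to the table-based answer 0.2957
```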