Lecture 6 - Fall 2023
Dipti Dubey
Department of Mathematics
Shiv Nadar University
September 4, 2023
Moments of Random Variables: Let X be a random variable
with space RX and probability density function f .
The nth moment about the origin of a random variable X, denoted by E(X^n), is defined to be
\[
E(X^n) =
\begin{cases}
\sum_{x \in R_X} x^n f(x) & \text{if } X \text{ is discrete} \\[4pt]
\int_{-\infty}^{\infty} x^n f(x)\, dx & \text{if } X \text{ is continuous.}
\end{cases}
\]
MOMENT GENERATING FUNCTIONS
\[
M(t) = E(e^{tX})
\]
is called the moment generating function of X if this expected value exists for all t in the interval −h < t < h for some h > 0.
Example: Let X have the PDF f(x) = \frac{1}{2} e^{-x/2} for x > 0 (and 0 otherwise). Then
\[
M(t) = \int_0^{\infty} e^{tx} \cdot \tfrac{1}{2} e^{-x/2}\, dx
     = \tfrac{1}{2} \int_0^{\infty} e^{(t - \frac{1}{2})x}\, dx
     = \frac{1}{1 - 2t}, \qquad t < \tfrac{1}{2}.
\]
[Use: \(\int_0^{\infty} e^{-ax}\, dx = \frac{1}{a}\), \(a > 0\).]
To explain why we refer to this function as a “moment-generating” function, let us substitute for e^{tx} its Maclaurin series expansion,
\[
e^{tx} = 1 + tx + \frac{t^2 x^2}{2!} + \frac{t^3 x^3}{3!} + \cdots + \frac{t^r x^r}{r!} + \cdots
\]
In the discrete case we thus get
\[
M(t) = \sum_x \Big[ 1 + tx + \frac{t^2 x^2}{2!} + \cdots + \frac{t^r x^r}{r!} + \cdots \Big] f(x)
\]
\[
= \sum_x f(x) + t \sum_x x f(x) + \frac{t^2}{2!} \sum_x x^2 f(x) + \cdots + \frac{t^r}{r!} \sum_x x^r f(x) + \cdots
\]
\[
= 1 + E(X)\, t + E(X^2)\, \frac{t^2}{2!} + \cdots + E(X^r)\, \frac{t^r}{r!} + \cdots
\]
Theorem: \(\displaystyle \frac{d^r M(t)}{dt^r}\Big|_{t=0} = E(X^r)\).
Example: Let X have the PDF
\[
f(x) =
\begin{cases}
\frac{1}{2} e^{-x/2} & \text{if } x > 0 \\
0 & \text{otherwise.}
\end{cases}
\]
Recall
\[
M(t) = \frac{1}{1 - 2t}, \qquad t < \tfrac{1}{2}.
\]
Then
\[
M'(t) = \frac{2}{(1 - 2t)^2}, \qquad M''(t) = \frac{8}{(1 - 2t)^3}, \qquad t < \tfrac{1}{2},
\]
and hence
\[
E(X) = M'(0) = 2, \qquad E(X^2) = M''(0) = 8, \qquad Var(X) = 8 - 2^2 = 4.
\]
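As a sanity check (not part of the lecture), the moments read off the MGF M(t) = 1/(1 − 2t) can be compared with direct numerical integration against the density f(x) = ½ e^{−x/2}, x > 0; Simpson's rule on [0, 200] suffices because the tail beyond 200 is negligible.

```python
import math

def simpson(g, a, b, n=20000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

f = lambda x: 0.5 * math.exp(-x / 2)          # the example density
m1 = simpson(lambda x: x * f(x), 0, 200)      # E(X)   -> should match M'(0)  = 2
m2 = simpson(lambda x: x * x * f(x), 0, 200)  # E(X^2) -> should match M''(0) = 8

print(m1, m2, m2 - m1 ** 2)   # approximately 2, 8, 4
```

The three printed values agree with E(X) = 2, E(X²) = 8, and Var(X) = 4 to several decimal places.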
1. Uniform Distribution
A random variable X is said to be uniform on the interval [a, b] if its probability density function is of the form
\[
f(x) = \frac{1}{b - a}, \qquad a \le x \le b,
\]
where a and b are constants. Its mean and variance are
\[
E(X) = \mu_X = \frac{b + a}{2}, \qquad Var(X) = \sigma_X^2 = \frac{(b - a)^2}{12},
\]
and its moment generating function is
\[
M_X(t) =
\begin{cases}
1 & \text{if } t = 0 \\[4pt]
\dfrac{e^{tb} - e^{ta}}{t(b - a)} & \text{otherwise.}
\end{cases}
\]
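The three formulas above can be verified numerically; a minimal sketch (ours, not from the notes), taking a = 2 and b = 5 as an illustrative choice and integrating with the midpoint rule:

```python
import math

a, b = 2.0, 5.0
n = 100000
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]     # midpoint-rule nodes on [a, b]

density = 1.0 / (b - a)
mean = sum(x * density * h for x in xs)                 # should be (b + a)/2 = 3.5
var = sum((x - mean) ** 2 * density * h for x in xs)    # should be (b - a)^2/12 = 0.75
mgf_at = lambda t: sum(math.exp(t * x) * density * h for x in xs)

print(mean, var)
print(mgf_at(0.3))   # should match (e^{0.3 b} - e^{0.3 a}) / (0.3 (b - a))
```

All three quantities match the closed-form expressions to high accuracy.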
EXERCISE: Suppose Y ∼ UNIF(0, 1) and Y = \frac{1}{4} X^2. What is the probability density function of X?
2. Exponential Distribution: A continuous random variable is said to be an exponential random variable with parameter θ if its probability density function is of the form
\[
f(x; \theta) =
\begin{cases}
\frac{1}{\theta} e^{-x/\theta} & \text{if } x > 0 \\
0 & \text{otherwise,}
\end{cases}
\]
where θ > 0. Equivalently, in the rate parametrization with λ = 1/θ, f(x; λ) = λ e^{-λx} for x > 0, where λ > 0.
Theorem: If X ∼ EXP(λ), then \(E(X) = \frac{1}{\lambda}\) and \(Var(X) = \frac{1}{\lambda^2}\).
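A quick simulation sketch (not part of the lecture) illustrates the theorem: `random.expovariate` draws from the rate parametrization, so with rate λ the sample mean and sample variance should approach 1/λ and 1/λ², respectively.

```python
import random

random.seed(0)
lam = 0.5                                  # rate; theory: E(X) = 1/lam = 2, Var(X) = 1/lam**2 = 4
xs = [random.expovariate(lam) for _ in range(200000)]

mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)   # close to 2 and 4
```

With 200,000 draws the estimates land within a few hundredths of the theoretical values.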
EXERCISE: What is the cumulative distribution function of a random variable which has an exponential distribution with variance 25?
3. Normal Distribution: The normal distribution plays a central role in probability and statistics and was discovered by the French mathematician Abraham de Moivre (1667–1754). This distribution is also called the Gaussian distribution after Carl Friedrich Gauss, who proposed it as a model for measurement errors.
A random variable X ∼ N(μ, σ²) has probability density function
\[
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \qquad -\infty < x < \infty,
\]
and
\[
E(X) = \mu_X = \mu, \qquad Var(X) = \sigma_X^2 = \sigma^2, \qquad M_X(t) = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.
\]
Proof.
\[
M_X(t) = \int_{-\infty}^{\infty} e^{tx}\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx
= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{tx}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx
= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2\sigma^2}\left(-2xt\sigma^2 + (x-\mu)^2\right)} dx.
\]
Completing the square in the exponent gives
\[
M_X(t) = e^{\mu t + \frac{1}{2}\sigma^2 t^2} \left\{ \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}\left(\frac{x - (\mu + t\sigma^2)}{\sigma}\right)^2} dx \right\}.
\]
Since the quantity inside the braces is the integral from −∞ to ∞ of a normal probability density function with parameters μ + tσ² and σ, it equals 1, and it follows that
\[
M_X(t) = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.
\]
Further,
\[
M_X'(t) = (\mu + \sigma^2 t)\, e^{\mu t + \frac{1}{2}\sigma^2 t^2}, \qquad
M_X''(t) = \sigma^2 e^{\mu t + \frac{1}{2}\sigma^2 t^2} + (\mu + \sigma^2 t)^2\, e^{\mu t + \frac{1}{2}\sigma^2 t^2},
\]
so E(X) = M_X'(0) = μ and Var(X) = M_X''(0) − (M_X'(0))² = (σ² + μ²) − μ² = σ².
Examples:
1. P(Z > 1.26)
2. P(Z > −1.37)
3. P(Z < −0.86)
4. P(−1.25 < Z < 0.37)
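These four probabilities are normally read from a standard normal table, but they can also be computed directly: Φ(z) = (1 + erf(z/√2))/2 is the standard normal CDF, and `math.erf` is in the Python standard library (the helper below is ours, not from the lecture).

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p1 = 1 - phi(1.26)            # P(Z > 1.26)
p2 = 1 - phi(-1.37)           # P(Z > -1.37)
p3 = phi(-0.86)               # P(Z < -0.86)
p4 = phi(0.37) - phi(-1.25)   # P(-1.25 < Z < 0.37)
print(round(p1, 4), round(p2, 4), round(p3, 4), round(p4, 4))
```

The results agree with the usual z-table values (about 0.1038, 0.9147, 0.1949, and 0.5387).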
Theorem: If X ∼ N(μ, σ²), then the random variable \(Z = \frac{X - \mu}{\sigma} \sim N(0, 1)\).
\[
F(z) = P(Z \le z)
= P\!\left(\frac{X - \mu}{\sigma} \le z\right)
= P(X \le \sigma z + \mu)
= \int_{-\infty}^{\sigma z + \mu} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx
= \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}w^2} dw,
\]
where \(w = \frac{x - \mu}{\sigma}\) (so dx = σ dw, and the factor σ cancels). Hence
\[
f(z) = F'(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2},
\]
which is the N(0, 1) density. This completes the proof.
Example: If X ∼ N(3, 16), then what is P(4 ≤ X ≤ 8)?
\[
P(4 \le X \le 8)
= P\!\left(\frac{4 - 3}{4} \le \frac{X - 3}{4} \le \frac{8 - 3}{4}\right)
= P\!\left(\frac{1}{4} \le Z \le \frac{5}{4}\right)
= 0.8944 - 0.5987
= 0.2957.
\]
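The worked example can be checked the same way (our check, not in the notes): X ∼ N(3, 16) means σ = 4, so P(4 ≤ X ≤ 8) = Φ(5/4) − Φ(1/4).

```python
import math

phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
p = phi(5 / 4) - phi(1 / 4)
print(p)   # agrees with the table answer 0.2957 to three decimals
```

The small discrepancy in the fourth decimal comes from the four-digit rounding of the table entries 0.8944 and 0.5987.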