Lecture 6 - Fall 2023


Introduction to Statistics (MAT 283)

Dipti Dubey

Department of Mathematics
Shiv Nadar University

September 4, 2023
Moments of Random Variables: Let X be a random variable with space $R_X$ and probability density function $f$. The $n$th moment about the origin of a random variable $X$, denoted by $E(X^n)$, is defined to be

$$E(X^n) = \begin{cases} \sum_{x \in R_X} x^n f(x) & \text{if } X \text{ is discrete} \\[4pt] \int_{-\infty}^{\infty} x^n f(x)\,dx & \text{if } X \text{ is continuous} \end{cases}$$

for $n = 0, 1, 2, 3, \ldots$, provided the right side converges absolutely.
If $n = 1$, then $E(X)$ is called the first moment about the origin. If $n = 2$, then $E(X^2)$ is called the second moment of $X$ about the origin.

In general, these moments may or may not exist for a given random variable. If a particular moment of a random variable does not exist, we say that the random variable does not have that moment.
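As a quick illustration of the discrete case, here is a minimal Python sketch (a hypothetical example, not from the lecture) that computes the first two moments of a fair six-sided die:

```python
from fractions import Fraction

# nth moment about the origin of a discrete X: sum over the space of x^n * f(x).
# Assumed example: a fair six-sided die, f(x) = 1/6 on {1, ..., 6}.
space = range(1, 7)
f = Fraction(1, 6)

def moment(n):
    return sum(Fraction(x) ** n * f for x in space)

print(moment(1))  # E(X)   = 7/2
print(moment(2))  # E(X^2) = 91/6
```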
MOMENT GENERATING FUNCTIONS

Moment Generating Function: Let X be a random variable with probability density function $f$. A real-valued function $M : \mathbb{R} \to \mathbb{R}$ defined by

$$M(t) = E(e^{tX})$$

is called the moment generating function of $X$ if this expected value exists for all $t$ in the interval $-h < t < h$ for some $h > 0$.

Using the definition of expected value of a random variable, we obtain

$$M(t) = \begin{cases} \sum_{x \in R_X} e^{tx} f(x) & \text{if } X \text{ is discrete} \\[4pt] \int_{-\infty}^{\infty} e^{tx} f(x)\,dx & \text{if } X \text{ is continuous.} \end{cases}$$
Example: Let X have the PDF

$$f(x) = \begin{cases} \frac{1}{2} e^{-x/2} & \text{if } x > 0 \\ 0 & \text{otherwise.} \end{cases}$$

Then

$$M(t) = \int_0^{\infty} e^{tx}\, \frac{1}{2} e^{-x/2}\,dx = \frac{1}{2} \int_0^{\infty} e^{\left(t - \frac{1}{2}\right)x}\,dx = \frac{1}{1 - 2t}, \qquad t < \frac{1}{2}.$$

[Use: $\int_0^{\infty} e^{-ax}\,dx = \frac{1}{a}$, $a > 0$.]
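This integral is easy to verify symbolically; below is a minimal SymPy sketch (an illustration, not part of the lecture), with `conds='none'` suppressing the convergence condition $t < \frac{1}{2}$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.symbols('t')

# M(t) = integral over (0, oo) of e^{tx} * (1/2) e^{-x/2} dx; converges for t < 1/2.
M = sp.integrate(sp.exp(t * x) * sp.Rational(1, 2) * sp.exp(-x / 2),
                 (x, 0, sp.oo), conds='none')
print(sp.simplify(M))  # -1/(2*t - 1), i.e. 1/(1 - 2t)
```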
To explain why we refer to this function as a "moment-generating" function, let us substitute for $e^{tx}$ its Maclaurin series expansion,

$$e^{tx} = 1 + tx + \frac{t^2 x^2}{2!} + \frac{t^3 x^3}{3!} + \cdots + \frac{t^r x^r}{r!} + \cdots$$

In the discrete case we thus get

$$M(t) = \sum_x \left[ 1 + tx + \frac{t^2 x^2}{2!} + \cdots + \frac{t^r x^r}{r!} + \cdots \right] f(x)$$
$$= \sum_x f(x) + t \sum_x x f(x) + \frac{t^2}{2!} \sum_x x^2 f(x) + \cdots + \frac{t^r}{r!} \sum_x x^r f(x) + \cdots$$
$$= 1 + E(X)\,t + E(X^2)\,\frac{t^2}{2!} + \cdots + E(X^r)\,\frac{t^r}{r!} + \cdots$$

Theorem: $\left.\frac{d^r M(t)}{dt^r}\right|_{t=0} = E(X^r)$.
Example: Let X have the PDF

$$f(x) = \begin{cases} \frac{1}{2} e^{-x/2} & \text{if } x > 0 \\ 0 & \text{otherwise.} \end{cases}$$

Recall

$$M(t) = \frac{1}{1 - 2t}, \qquad t < \frac{1}{2}.$$

Then

$$M'(t) = \frac{2}{(1 - 2t)^2}, \qquad M''(t) = \frac{8}{(1 - 2t)^3}, \qquad t < \frac{1}{2},$$

and hence

$$E(X) = 2, \qquad E(X^2) = 8, \qquad \text{and} \qquad Var(X) = 4.$$
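The same moments drop out of a symbolic differentiation; a short SymPy sketch (illustration only):

```python
import sympy as sp

t = sp.symbols('t')
M = 1 / (1 - 2 * t)                  # MGF of the example, valid for t < 1/2

EX = sp.diff(M, t).subs(t, 0)        # M'(0)  -> 2
EX2 = sp.diff(M, t, 2).subs(t, 0)    # M''(0) -> 8
print(EX, EX2, EX2 - EX**2)          # 2 8 4  (so Var(X) = 4)
```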
SOME SPECIAL CONTINUOUS DISTRIBUTIONS

1. Uniform Distribution
A random variable X is said to be uniform on the interval $[a, b]$ if its probability density function is of the form

$$f(x) = \frac{1}{b - a}, \qquad a \le x \le b,$$

where $a$ and $b$ are constants.

We denote a random variable X with the uniform distribution on the interval $[a, b]$ as $X \sim UNIF(a, b)$.
APPLICATION: Random number generation.
THEOREM: If X is a uniform random variable on the interval $[a, b]$, then the mean, variance and moment generating function are respectively given by

$$E(X) = \mu_X = \frac{b + a}{2}$$
$$Var(X) = \sigma_X^2 = \frac{(b - a)^2}{12}$$
$$M_X(t) = \begin{cases} 1 & \text{if } t = 0 \\[4pt] \dfrac{e^{tb} - e^{ta}}{t(b - a)} & \text{otherwise.} \end{cases}$$
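These formulas are easy to spot-check numerically; a minimal sketch with scipy.stats, where the endpoints $a = 2$, $b = 5$ are arbitrary values chosen for illustration:

```python
from scipy.stats import uniform

a, b = 2.0, 5.0
X = uniform(loc=a, scale=b - a)      # scipy encodes UNIF(a, b) as loc=a, scale=b-a
print(X.mean(), (a + b) / 2)         # 3.5  3.5
print(X.var(), (b - a) ** 2 / 12)    # 0.75 0.75
```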
EXERCISE: Suppose $Y \sim UNIF(0, 1)$ and $Y = \frac{1}{4} X^2$. What is the probability density function of X?
2. Exponential Distribution: A continuous random variable is said to be an exponential random variable with parameter $\theta$ if its probability density function is of the form

$$f(x; \theta) = \begin{cases} \frac{1}{\theta} e^{-x/\theta} & \text{if } x > 0 \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta > 0$.

If a random variable X has an exponential density function with parameter $\theta$, then we denote it by writing $X \sim EXP(\theta)$.
APPLICATION: To model the lifetime of electronic components.
Alternate Definition of Exponential Distribution: A continuous random variable is said to be an exponential random variable with parameter $\lambda = \frac{1}{\theta}$ if its probability density function is of the form

$$f(x; \lambda) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x > 0 \\ 0 & \text{otherwise,} \end{cases}$$

where $\lambda > 0$.

Theorem: If $X \sim EXP(\lambda)$, then $E(X) = \frac{1}{\lambda}$ and $Var(X) = \frac{1}{\lambda^2}$.
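A quick numerical check of this theorem with scipy.stats, assuming $\theta = 2$ for illustration (scipy's `scale` parameter plays the role of $\theta = 1/\lambda$):

```python
from scipy.stats import expon

theta = 2.0                  # EXP(theta), i.e. lambda = 1/theta = 0.5
X = expon(scale=theta)
print(X.mean(), theta)       # E(X)   = theta   = 1/lambda   -> 2.0
print(X.var(), theta ** 2)   # Var(X) = theta^2 = 1/lambda^2 -> 4.0
```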
EXERCISE: What is the cumulative distribution function of a random variable which has an exponential distribution with variance 25?
The normal distribution plays a central role in probability and statistics. It was discovered by the French mathematician Abraham de Moivre (1667-1754) and is also called the Gaussian distribution after Carl Friedrich Gauss, who proposed it as a model for measurement errors.

3. The Normal (or Gaussian) Distribution:

A random variable X is said to have a normal distribution if its probability density function is given by

$$f(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2}, \qquad -\infty < x < \infty,$$

with parameters $\mu$ and $\sigma$, where $-\infty < \mu < \infty$ and $\sigma > 0$.

If X has a normal distribution with parameters $\mu$ and $\sigma^2$, then we write $X \sim N(\mu, \sigma^2)$.
The graph of a normal probability density function is shaped like the cross section of a bell.
• From the form of the probability density function, we see that the density is symmetric about $\mu$, i.e. $f(\mu - x) = f(\mu + x)$, where it has its maximum, and that the rate at which it falls off is determined by $\sigma$.
Theorem: If $X \sim N(\mu, \sigma^2)$, then

$$E(X) = \mu_X = \mu$$
$$Var(X) = \sigma_X^2 = \sigma^2$$
$$M_X(t) = e^{\mu t + \frac{1}{2} \sigma^2 t^2}.$$
Proof.

$$M_X(t) = \int_{-\infty}^{\infty} e^{tx}\, \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2} dx = \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{\infty} e^{tx}\, e^{-\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2} dx$$
$$= \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2\sigma^2} \left( -2xt\sigma^2 + (x - \mu)^2 \right)} dx.$$

Note that

$$-2xt\sigma^2 + (x - \mu)^2 = \left[ x - (\mu + t\sigma^2) \right]^2 - 2\mu t \sigma^2 - t^2 \sigma^4,$$

and hence

$$M_X(t) = e^{\mu t + \frac{1}{2} \sigma^2 t^2} \left\{ \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} \left( \frac{x - (\mu + t\sigma^2)}{\sigma} \right)^2} dx \right\}.$$

Since the quantity inside the braces is the integral from $-\infty$ to $\infty$ of a normal probability density function with parameters $\mu + t\sigma^2$ and $\sigma$, and hence is equal to 1, it follows that

$$M_X(t) = e^{\mu t + \frac{1}{2} \sigma^2 t^2}.$$

Further,

$$M_X'(t) = (\mu + \sigma^2 t)\, e^{\mu t + \frac{1}{2} \sigma^2 t^2},$$
$$M_X''(t) = \sigma^2 e^{\mu t + \frac{1}{2} \sigma^2 t^2} + (\mu + \sigma^2 t)^2 e^{\mu t + \frac{1}{2} \sigma^2 t^2},$$

so that $E(X) = M_X'(0) = \mu$, $E(X^2) = M_X''(0) = \sigma^2 + \mu^2$, and therefore $Var(X) = \sigma^2$.

Hence the proof.
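The differentiation at the end of the proof can be verified symbolically; a minimal SymPy sketch (illustration only, not part of the lecture):

```python
import sympy as sp

t, mu = sp.symbols('t mu', real=True)
sigma = sp.symbols('sigma', positive=True)

M = sp.exp(mu * t + sp.Rational(1, 2) * sigma**2 * t**2)   # MGF of N(mu, sigma^2)

EX = sp.diff(M, t).subs(t, 0)         # M'(0)  -> mu
EX2 = sp.diff(M, t, 2).subs(t, 0)     # M''(0) -> mu^2 + sigma^2
print(EX, sp.simplify(EX2 - EX**2))   # mu sigma**2
```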


Standard Normal Random Variable: A normal random variable is said to be standard normal if its mean is zero and its variance is one. We denote a standard normal random variable X by $X \sim N(0, 1)$; its probability density function is given by

$$f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} x^2}, \qquad -\infty < x < \infty.$$

Example: If $X \sim N(0, 1)$, what is the probability that the random variable X is less than or equal to $-1.72$?
Solution: Using a Standard Normal Table of type I,

$$P(X \le -1.72) = 1 - P(X \le 1.72) = 1 - 0.9573 = 0.0427.$$

Using a Standard Normal Table of type II,

$$P(X \le -1.72) = 0.0427.$$


• Probabilities that are not of the form $P(Z \le z)$ are found by using the basic rules of probability and the symmetry of the normal distribution; see the sketch after the following examples.

Examples:
1. $P(Z > 1.26)$
2. $P(Z > -1.37)$
3. $P(Z < -0.86)$
4. $P(-1.25 < Z < 0.37)$
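The sketch below (an illustration, not from the slides) shows how these four probabilities reduce to $P(Z \le z)$ evaluations, here computed with scipy.stats.norm.cdf instead of a table:

```python
from scipy.stats import norm

Phi = norm.cdf                    # Phi(z) = P(Z <= z) for Z ~ N(0, 1)

print(1 - Phi(1.26))              # 1. P(Z > 1.26)  = 1 - Phi(1.26)
print(1 - Phi(-1.37))             # 2. P(Z > -1.37) = 1 - Phi(-1.37) = Phi(1.37)
print(Phi(-0.86))                 # 3. P(Z < -0.86) = Phi(-0.86)
print(Phi(0.37) - Phi(-1.25))     # 4. P(-1.25 < Z < 0.37)
```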
Theorem: If $X \sim N(\mu, \sigma^2)$ then the random variable $Z = \frac{X - \mu}{\sigma} \sim N(0, 1)$.

Proof: We will show that Z is standard normal by finding the probability density function of Z. We compute the probability density of Z by first computing its cumulative distribution function.

$$F(z) = P(Z \le z) = P\left( \frac{X - \mu}{\sigma} \le z \right) = P(X \le \sigma z + \mu)$$
$$= \int_{-\infty}^{\sigma z + \mu} \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2} dx = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} w^2} dw$$

(where $w = \frac{x - \mu}{\sigma}$, so $dx = \sigma\, dw$ and the factor $\sigma$ cancels). Hence

$$f(z) = F'(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} z^2}.$$

Hence the proof.
Example: If $X \sim N(3, 16)$, then what is $P(4 \le X \le 8)$?

$$P(4 \le X \le 8) = P\left( \frac{4 - 3}{4} \le \frac{X - 3}{4} \le \frac{8 - 3}{4} \right) = P\left( \frac{1}{4} \le Z \le \frac{5}{4} \right)$$
$$= P(Z \le 1.25) - P(Z \le 0.25) = 0.8944 - 0.5987 = 0.2957.$$
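The same probability can be computed directly from the $N(3, 16)$ distribution; a one-line check with scipy.stats (note that scipy takes the standard deviation $\sigma = 4$, not the variance):

```python
from scipy.stats import norm

X = norm(loc=3, scale=4)      # N(mu = 3, sigma^2 = 16), so sigma = 4
print(X.cdf(8) - X.cdf(4))    # P(4 <= X <= 8) ~ 0.2957
```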
