
Lecture 3:4 (Part 2)

The document discusses several probability distributions: 1) If random variables come from normal or chi-squared distributions, then sums of those variables will have normal or chi-squared distributions respectively. 2) A ratio of a normal and independent chi-squared variable has a Student's t distribution. 3) The mean and variance are derived for an exponential distribution and its square. 4) Independent gamma variables lead to a Dirichlet distribution for their normalized sums.


RANDOM VARIABLES

Normal distribution

If the random variables $X_1, X_2, \ldots, X_n$ are independent normal variables with distributions given by $X_i \sim N(\mu_i, \sigma_i^2)$ for $i = 1, 2, \ldots, n$, then the random variable $Y = k_1 X_1 + k_2 X_2 + \ldots + k_n X_n$ has a normal distribution with mean $\sum k_i \mu_i$ and variance $\sum k_i^2 \sigma_i^2$.

Proof

As $X_1, X_2, \ldots, X_n$ are independent, the moment generating function of $Y$ is given by
\[
E(e^{tY}) = E\bigl(e^{t \sum k_i X_i}\bigr)
          = E(e^{t k_1 X_1})\, E(e^{t k_2 X_2}) \cdots E(e^{t k_n X_n})
          = \prod_i \exp\Bigl[(k_i \mu_i) t + \frac{(k_i^2 \sigma_i^2) t^2}{2}\Bigr]
          = \exp\Bigl[\Bigl(\sum k_i \mu_i\Bigr) t + \frac{\bigl(\sum k_i^2 \sigma_i^2\bigr) t^2}{2}\Bigr]
\]
which is the moment generating function of a normal variable with mean $\sum k_i \mu_i$ and variance $\sum k_i^2 \sigma_i^2$.
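As a quick sanity check, the result can be simulated; the coefficients and distributions below ($k_1 = 2$, $k_2 = 5$, $X_1 \sim N(1, 2^2)$, $X_2 \sim N(-3, 1)$) are arbitrary illustrative choices, not values from the text.

```python
import random
import statistics

# Arbitrary illustrative choices: X1 ~ N(1, 2^2), X2 ~ N(-3, 1^2) and
# Y = 2*X1 + 5*X2.  The theorem gives
#   E(Y) = 2*1 + 5*(-3)          = -13
#   V(Y) = 2^2 * 2^2 + 5^2 * 1^2 = 41
random.seed(0)
N = 200_000
ys = [2 * random.gauss(1, 2) + 5 * random.gauss(-3, 1) for _ in range(N)]

mean_y = statistics.fmean(ys)
var_y = statistics.variance(ys)
print(mean_y, var_y)  # close to -13 and 41
```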

Chi–squared distribution

If the random variables $X_1, X_2, \ldots, X_n$ are independent chi–squared variables with distributions given by $X_i \sim \chi^2_{\nu_i}$ for $i = 1, 2, \ldots, n$, then the random variable $Y = X_1 + X_2 + \ldots + X_n$ has a chi–squared distribution with parameter $\sum \nu_i$.

Proof

As $X_1, X_2, \ldots, X_n$ are independent, the moment generating function of $Y$ is given by
\[
E(e^{tY}) = E\bigl(e^{t \sum X_i}\bigr)
          = E(e^{t X_1})\, E(e^{t X_2}) \cdots E(e^{t X_n})
          = \prod_i \Bigl(\frac{1}{1 - 2t}\Bigr)^{\frac{\nu_i}{2}}
          = \Bigl(\frac{1}{1 - 2t}\Bigr)^{\frac{1}{2} \sum \nu_i}
\]
which is the moment generating function of a chi–squared variable with parameter $\sum \nu_i$.
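The additivity can also be seen by simulation, using the fact that a chi–squared variable with $\nu$ degrees of freedom is a sum of $\nu$ squared independent $N(0,1)$ variables; the degrees of freedom $\nu_1 = 3$, $\nu_2 = 5$ below are arbitrary illustrative choices.

```python
import random
import statistics

# A chi-squared variable with nu degrees of freedom is a sum of nu squared
# independent N(0,1) variables; nu1 = 3 and nu2 = 5 are arbitrary choices.
random.seed(1)

def chi2(nu):
    return sum(random.gauss(0, 1) ** 2 for _ in range(nu))

N = 100_000
ys = [chi2(3) + chi2(5) for _ in range(N)]

# A chi-squared variable with parameter nu has mean nu and variance 2*nu,
# so Y should behave like a chi-squared variable with parameter 3 + 5 = 8.
mean_y = statistics.fmean(ys)
var_y = statistics.variance(ys)
print(mean_y, var_y)  # close to 8 and 16
```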

Student’s t–distribution

It was previously established that if $X_1 \sim N(0, 1)$ and $X_2 \sim \chi^2_\nu$ are independent then $Y = \dfrac{X_1}{\sqrt{X_2/\nu}}$ has a t–distribution with probability density function
\[
f(y) = \frac{\Gamma(\frac{\nu+1}{2})}{\sqrt{\nu\pi}\,\Gamma(\frac{\nu}{2})} \Bigl(\frac{y^2}{\nu} + 1\Bigr)^{-\frac{\nu+1}{2}}.
\]

The mean of $Y$ is zero as the probability density function of $Y$ is symmetric about zero. The variance of $Y$ can be found by evaluating $E(Y^2)$, which is given by
\[
\int_{-\infty}^{\infty} y^2 \, \frac{\Gamma(\frac{\nu+1}{2})}{\sqrt{\nu\pi}\,\Gamma(\frac{\nu}{2})} \Bigl(\frac{y^2}{\nu} + 1\Bigr)^{-\frac{\nu+1}{2}} dy
\]
but can be found more easily by considering
\[
E(Y^2) = E\Bigl(X_1^2 \, \frac{\nu}{X_2}\Bigr) = E(X_1^2)\, E\Bigl(\frac{\nu}{X_2}\Bigr).
\]
As $X_1^2 \sim \chi^2_1$ we have that $E(X_1^2) = 1$, and $E\bigl(\frac{\nu}{X_2}\bigr)$ is given by
\[
E\Bigl(\frac{\nu}{X_2}\Bigr) = \int_0^\infty \frac{\nu}{x} \, \frac{1}{2^{\frac{\nu}{2}} \Gamma(\frac{\nu}{2})} \, x^{\frac{\nu}{2}-1} e^{-\frac{x}{2}} \, dx
= \frac{\nu}{2^{\frac{\nu}{2}} \Gamma(\frac{\nu}{2})} \int_0^\infty x^{\frac{\nu}{2}-2} e^{-\frac{x}{2}} \, dx
\]
\[
= \frac{\nu\, 2^{\frac{\nu}{2}-1}}{2^{\frac{\nu}{2}} \Gamma(\frac{\nu}{2})} \int_0^\infty y^{\frac{\nu}{2}-2} e^{-y} \, dy \quad \text{setting } y = \frac{x}{2}
= \frac{\nu\, 2^{\frac{\nu}{2}-1}}{2^{\frac{\nu}{2}} \Gamma(\frac{\nu}{2})} \, \Gamma\Bigl(\frac{\nu}{2} - 1\Bigr)
= \frac{\nu}{2} \, \frac{\Gamma(\frac{\nu}{2} - 1)}{(\frac{\nu}{2} - 1)\,\Gamma(\frac{\nu}{2} - 1)}
= \frac{\nu}{\nu - 2}
\]
and so the variance of the t–distribution is $\frac{\nu}{\nu-2}$ (for $\nu > 2$).
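The gamma-function step above can be checked numerically: the chain of integrals reduces $E(\nu/X_2)$ to $\frac{\nu}{2}\,\Gamma(\frac{\nu}{2}-1)/\Gamma(\frac{\nu}{2})$, which should equal $\nu/(\nu-2)$. The values of $\nu$ sampled below are arbitrary.

```python
import math

# The derivation reduces E(nu/X2) to (nu/2) * Gamma(nu/2 - 1) / Gamma(nu/2),
# which should equal nu/(nu - 2) for every nu > 2.
for nu in (3, 4.5, 10, 30):
    lhs = (nu / 2) * math.gamma(nu / 2 - 1) / math.gamma(nu / 2)
    rhs = nu / (nu - 2)
    assert math.isclose(lhs, rhs), (nu, lhs, rhs)
print("all checks passed")
```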

Mean and variance

Let the random variable $X$ have probability density function
\[
f(x) = \frac{2x}{\theta} e^{-\frac{x^2}{\theta}}, \qquad x > 0,
\]
and let the random variable $Y$ be defined by $Y = X^2$. In this case $x = \sqrt{y}$ and $\frac{dx}{dy} = \frac{1}{2\sqrt{y}}$, and the probability density function of $Y$ is
\[
f(y) = \frac{2\sqrt{y}}{\theta} e^{-\frac{y}{\theta}} \, \frac{1}{2\sqrt{y}} = \frac{1}{\theta} e^{-\frac{y}{\theta}}, \qquad y > 0,
\]
which is the exponential distribution with mean $\theta$.
θ

The mean of $X$ is given by
\[
E(X) = E(\sqrt{Y}) = \int_0^\infty \sqrt{y} \, \frac{1}{\theta} e^{-\frac{y}{\theta}} \, dy
= \int_0^\infty \sqrt{\theta z} \, e^{-z} \, dz \quad \text{setting } z = \frac{y}{\theta}
= \sqrt{\theta}\, \Gamma\Bigl(\frac{3}{2}\Bigr)
= \frac{\sqrt{\theta}}{2} \sqrt{\pi}
\]
and $E(X^2)$ by
\[
E(X^2) = E(Y) = \int_0^\infty y \, \frac{1}{\theta} e^{-\frac{y}{\theta}} \, dy
= \int_0^\infty \theta z \, e^{-z} \, dz \quad \text{setting } z = \frac{y}{\theta}
= \theta\, \Gamma(2) = \theta
\]
and so the variance of $X$ is
\[
V(X) = E(X^2) - E^2(X) = \theta - \frac{\pi\theta}{4} = \theta\Bigl(1 - \frac{\pi}{4}\Bigr).
\]
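Since $Y = X^2$ is exponential with mean $\theta$, $X$ can be sampled by inverse transform as $\sqrt{-\theta \ln U}$ with $U$ uniform on $(0,1)$; this simulation sketch uses the arbitrary value $\theta = 2$.

```python
import math
import random
import statistics

# Y = X^2 is exponential with mean theta, so X = sqrt(-theta * ln(U));
# theta = 2.0 is an arbitrary illustrative choice.
random.seed(2)
theta = 2.0
N = 200_000
xs = [math.sqrt(-theta * math.log(random.random())) for _ in range(N)]

mean_x = statistics.fmean(xs)    # expect sqrt(theta) * sqrt(pi) / 2
var_x = statistics.variance(xs)  # expect theta * (1 - pi/4)
print(mean_x, var_x)
```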

Dirichlet distribution

Let $X_1$, $X_2$ and $X_3$ be independent gamma variables with $X_i \sim \Gamma(\alpha_i, 1)$, $i = 1, 2, 3$. Their joint probability density function is
\[
f(x_1, x_2, x_3) = \prod_{i=1}^{3} \frac{1}{\Gamma(\alpha_i)} x_i^{\alpha_i - 1} e^{-x_i}, \qquad 0 < x_i < \infty.
\]

Let
\[
Y_1 = \frac{X_1}{X_1 + X_2 + X_3}, \qquad
Y_2 = \frac{X_2}{X_1 + X_2 + X_3}, \qquad
Y_3 = X_1 + X_2 + X_3
\]
so that
\[
x_1 = y_1 y_3, \qquad x_2 = y_2 y_3, \qquad x_3 = y_3 (1 - y_1 - y_2)
\]
with Jacobian
\[
J = \begin{vmatrix} y_3 & 0 & y_1 \\ 0 & y_3 & y_2 \\ -y_3 & -y_3 & 1 - y_1 - y_2 \end{vmatrix} = y_3^2
\]
and the joint probability density function of $Y_1$, $Y_2$ and $Y_3$ is
\[
f(y_1, y_2, y_3) = \frac{y_3^{\alpha_1 + \alpha_2 + \alpha_3 - 1} e^{-y_3} \, y_1^{\alpha_1 - 1} y_2^{\alpha_2 - 1} (1 - y_1 - y_2)^{\alpha_3 - 1}}{\Gamma(\alpha_1)\Gamma(\alpha_2)\Gamma(\alpha_3)}
\]
for
\[
0 < y_1, \quad 0 < y_2, \quad y_1 + y_2 < 1, \quad 0 < y_3 < \infty.
\]

The variable $Y_3$ has a gamma distribution, $Y_3 \sim \Gamma(\alpha_1 + \alpha_2 + \alpha_3, 1)$, and is independent of $Y_1$ and $Y_2$, while the variables $Y_1$ and $Y_2$, though not independent of each other, have a joint density function which is known as the Dirichlet distribution.
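The construction can be simulated directly with the standard library's gamma sampler; the parameter values $\alpha = (2, 3, 5)$ are arbitrary, and the check uses the standard fact that the Dirichlet marginal means are $\alpha_i / \sum_j \alpha_j$.

```python
import random
import statistics

# X_i ~ Gamma(alpha_i, 1) independent, Y1 = X1/S, Y2 = X2/S with
# S = X1 + X2 + X3.  The Dirichlet marginal means are alpha_i / sum(alphas);
# the alpha values below are arbitrary illustrative choices.
random.seed(3)
alphas = (2.0, 3.0, 5.0)
N = 100_000

y1s, y2s = [], []
for _ in range(N):
    xs = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(xs)
    y1s.append(xs[0] / s)
    y2s.append(xs[1] / s)

mean_y1 = statistics.fmean(y1s)
mean_y2 = statistics.fmean(y2s)
print(mean_y1, mean_y2)  # close to 0.2 and 0.3
```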

Pearsonian class of pdfs

A probability density function $f(x)$ which satisfies the differential equation
\[
\frac{1}{f(x)} \frac{df(x)}{dx} = \frac{x + a}{b_0 + b_1 x + b_2 x^2}
\]
for constants $a$, $b_0$, $b_1$ and $b_2$ is said to belong to the Pearsonian class of density functions.

Example

The gamma variable has density function
\[
f(x) = \frac{1}{\Gamma(\alpha)\theta^\alpha} x^{\alpha - 1} e^{-\frac{x}{\theta}}
\]
with
\[
\frac{df(x)}{dx} = \frac{1}{\Gamma(\alpha)\theta^\alpha} (\alpha - 1) x^{\alpha - 2} e^{-\frac{x}{\theta}} + \frac{1}{\Gamma(\alpha)\theta^\alpha} x^{\alpha - 1} \Bigl(-\frac{1}{\theta}\Bigr) e^{-\frac{x}{\theta}}
\]
and so
\[
\frac{1}{f(x)} \frac{df(x)}{dx} = (\alpha - 1) x^{-1} - \frac{1}{\theta} = \frac{x - \theta(\alpha - 1)}{-\theta x}
\]
which identifies it as a member of the Pearsonian class with $a = -\theta(\alpha - 1)$, $b_1 = -\theta$ and $b_0 = b_2 = 0$.
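A finite-difference check of this identification, using the arbitrary parameter values $\alpha = 3.5$, $\theta = 2$:

```python
import math

# Finite-difference check that the gamma density satisfies the Pearsonian
# equation with a = -theta*(alpha - 1), b1 = -theta, b0 = b2 = 0; the
# values alpha = 3.5 and theta = 2.0 are arbitrary illustrative choices.
alpha, theta = 3.5, 2.0

def log_f(x):
    return -math.lgamma(alpha) - alpha * math.log(theta) \
        + (alpha - 1) * math.log(x) - x / theta

a, b1 = -theta * (alpha - 1), -theta
h = 1e-6
for x in (0.5, 1.0, 4.0, 10.0):
    numeric = (log_f(x + h) - log_f(x - h)) / (2 * h)  # d/dx log f = f'/f
    pearson = (x + a) / (b1 * x)
    assert math.isclose(numeric, pearson, rel_tol=1e-5, abs_tol=1e-8)
print("all checks passed")
```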

Exponential class of pdfs

A family of probability density functions $\{f(x; \theta) : \theta \in \Omega\}$, where $\Omega$ consists of an interval set $\Omega = \{\theta : \gamma < \theta < \delta\}$ with $\gamma$ and $\delta$ known constants and where $f(x; \theta)$ can be written in the form
\[
f(x; \theta) = \exp\bigl[p(\theta) K(x) + S(x) + q(\theta)\bigr], \qquad a < x < b,
\]
is known as an exponential class of probability density functions of the continuous type.

If additionally

(i) neither $a$ nor $b$ depends on $\theta$, $\gamma < \theta < \delta$,

(ii) $p(\theta)$ is a non–trivial continuous function of $\theta$, $\gamma < \theta < \delta$,

(iii) each of $K'(x) \not\equiv 0$ and $S(x)$ is a continuous function of $x$, $a < x < b$,

then the family is said to be a regular case of the exponential class.

A probability density function of the form
\[
f(x; \theta) = \exp\bigl[p(\theta) K(x) + S(x) + q(\theta)\bigr], \qquad x = a_1, a_2, a_3, \ldots,
\]
is a regular case of the exponential class of probability density functions of the discrete type if

(i) the set $\{x : x = a_1, a_2, a_3, \ldots\}$ does not depend on $\theta$,

(ii) $p(\theta)$ is a non–trivial continuous function of $\theta$, $\gamma < \theta < \delta$,

(iii) $K(x)$ is a non–trivial function of $x$ on the set $\{x : x = a_1, a_2, a_3, \ldots\}$.


Example 1

The probability density function of a Poisson variable with parameter $\theta$ is
\[
f(x; \theta) = \frac{e^{-\theta} \theta^x}{x!} = \exp\bigl[(\ln \theta)(x) - \ln(x!) - \theta\bigr]
\]
and so is a member of an exponential class of probability density functions with
\[
p(\theta) = \ln \theta, \qquad K(x) = x, \qquad S(x) = -\ln(x!), \qquad q(\theta) = -\theta.
\]
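The factorisation can be verified numerically; $\theta = 2.7$ is an arbitrary choice.

```python
import math

# Check that exp[p(theta)K(x) + S(x) + q(theta)] reproduces the Poisson pmf;
# theta = 2.7 is an arbitrary illustrative choice.
theta = 2.7
for x in range(10):
    pmf = math.exp(-theta) * theta ** x / math.factorial(x)
    expo = math.exp(math.log(theta) * x - math.log(math.factorial(x)) - theta)
    assert math.isclose(pmf, expo), (x, pmf, expo)
print("all checks passed")
```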

Example 2

The probability density function of a $N(0, \theta)$ variable is
\[
f(x; \theta) = \frac{1}{\sqrt{2\pi\theta}} e^{-\frac{1}{2}\frac{x^2}{\theta}} = \exp\Bigl[-\frac{1}{2\theta}(x^2) - \ln\sqrt{2\pi\theta}\Bigr]
\]
and so is a member of an exponential class of probability density functions with
\[
p(\theta) = -\frac{1}{2\theta}, \qquad K(x) = x^2, \qquad S(x) = 0, \qquad q(\theta) = -\ln\sqrt{2\pi\theta}.
\]
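The same numerical check for the $N(0, \theta)$ density, with the arbitrary choice $\theta = 1.5$:

```python
import math

# Check that exp[-(x^2)/(2 theta) - ln sqrt(2 pi theta)] equals the N(0, theta)
# density; theta = 1.5 is an arbitrary illustrative choice.
theta = 1.5
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    pdf = math.exp(-x * x / (2 * theta)) / math.sqrt(2 * math.pi * theta)
    expo = math.exp(-(1 / (2 * theta)) * x ** 2
                    - math.log(math.sqrt(2 * math.pi * theta)))
    assert math.isclose(pdf, expo), (x, pdf, expo)
print("all checks passed")
```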
