Definition 3.2.5.
A random variable X has a beta distribution and it is referred to as a beta random
variable if and only if its probability density function is given by
$$
f(x)=\begin{cases}\dfrac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\,x^{\alpha-1}(1-x)^{\beta-1}, & \text{for } 0<x<1\\[6pt] 0, & \text{elsewhere.}\end{cases}
$$
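As a quick sanity check on this formula, one can evaluate it directly and compare against a library implementation. The following is a minimal sketch, assuming Python with NumPy and SciPy (neither the language nor the parameter values $\alpha=2$, $\beta=3$ come from the text; they are purely illustrative):

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import beta as beta_dist

a, b = 2.0, 3.0                      # illustrative parameter choices
x = np.linspace(0.01, 0.99, 5)

# Density as written in Definition 3.2.5
f = gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

# Compare with the library implementation
print(np.allclose(f, beta_dist.pdf(x, a, b)))    # True
```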
We shall not prove here that the total area under the curve of the beta distribution, like
that of any probability density, is equal to 1, but in the proof of the theorem that
follows, we shall make use of the fact that
$$
\int_{0}^{1}\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\,x^{\alpha-1}(1-x)^{\beta-1}\,dx=1
$$
or, equivalently,

$$
\int_{0}^{1}x^{\alpha-1}(1-x)^{\beta-1}\,dx=\frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}
$$

This integral defines the beta function, whose values are denoted $B(\alpha,\beta)$; in other words,

$$
B(\alpha,\beta)=\frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}.
$$
Detailed discussion of the beta function may be found in any textbook on advanced
calculus.
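The identity $B(\alpha,\beta)=\Gamma(\alpha)\,\Gamma(\beta)/\Gamma(\alpha+\beta)$ is also easy to confirm numerically. A minimal sketch, again in Python with SciPy, with arbitrary illustrative parameter values:

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

a, b = 2.5, 4.0   # illustrative parameter choices

# Left-hand side: numerical integral of x^(a-1) (1-x)^(b-1) over (0, 1)
lhs, _ = quad(lambda x: x**(a - 1) * (1 - x)**(b - 1), 0, 1)

# Right-hand side: Gamma(a) Gamma(b) / Gamma(a + b)
rhs = gamma(a) * gamma(b) / gamma(a + b)

print(np.isclose(lhs, rhs))   # True
```

SciPy also exposes `scipy.special.beta`, which implements $B(\alpha,\beta)$ directly.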
It is quite straightforward to evaluate the moments of a beta random variable:

$$
E(X^{k})=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\int_{0}^{1}x^{k+\alpha-1}(1-x)^{\beta-1}\,dx
$$
Theorem 3.2.1

The mean of the beta distribution is

$$
E(X)=\frac{\alpha}{\alpha+\beta}
$$

and the variance is

$$
\operatorname{Var}(X)=\frac{\alpha\beta}{(\alpha+\beta)^{2}(\alpha+\beta+1)}
$$
Proof

Applying the beta-function identity above with $\alpha$ replaced by $\alpha+k$, the moment integral evaluates to

$$
E(X^{k})=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\cdot\frac{\Gamma(\alpha+k)\,\Gamma(\beta)}{\Gamma(\alpha+\beta+k)}=\frac{\Gamma(\alpha+\beta)\,\Gamma(\alpha+k)}{\Gamma(\alpha)\,\Gamma(\alpha+\beta+k)}
$$
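Before specializing to $k=1$ and $k=2$ below, this closed form can be checked against direct numerical integration (an illustrative Python sketch; the values of $\alpha$, $\beta$, and $k$ are assumptions, not from the text):

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

a, b, k = 3.0, 2.0, 4   # illustrative values

# E(X^k) by direct integration of the moment integral
integral, _ = quad(lambda x: x**(k + a - 1) * (1 - x)**(b - 1), 0, 1)
moment_int = gamma(a + b) / (gamma(a) * gamma(b)) * integral

# Closed form: Gamma(a+b) Gamma(a+k) / [Gamma(a) Gamma(a+b+k)]
moment_cf = gamma(a + b) * gamma(a + k) / (gamma(a) * gamma(a + b + k))

print(np.isclose(moment_int, moment_cf))   # True
```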
If $k=1$, we have

$$
E(X)=\frac{\Gamma(\alpha+\beta)\,\Gamma(\alpha+1)}{\Gamma(\alpha)\,\Gamma(\alpha+\beta+1)}
=\frac{(\alpha+\beta-1)!\,\alpha!}{(\alpha-1)!\,(\alpha+\beta)!}
=\frac{(\alpha+\beta-1)!\,\alpha\,(\alpha-1)!}{(\alpha-1)!\,(\alpha+\beta)(\alpha+\beta-1)!}
=\frac{\alpha}{\alpha+\beta}
$$

$$
\Rightarrow\;\text{Mean}=E(X)=\frac{\alpha}{\alpha+\beta}
$$
If $k=2$, we have

$$
E(X^{2})=\frac{\Gamma(\alpha+\beta)\,\Gamma(\alpha+2)}{\Gamma(\alpha)\,\Gamma(\alpha+\beta+2)}
=\frac{(\alpha+\beta-1)!\,(\alpha+1)!}{(\alpha-1)!\,(\alpha+\beta+1)!}
=\frac{(\alpha+\beta-1)!\,(\alpha+1)\,\alpha\,(\alpha-1)!}{(\alpha-1)!\,(\alpha+\beta+1)(\alpha+\beta)(\alpha+\beta-1)!}
=\frac{\alpha(\alpha+1)}{(\alpha+\beta+1)(\alpha+\beta)}
$$
$$
\Rightarrow\;\operatorname{Var}(X)=E(X^{2})-[E(X)]^{2}
=\frac{\alpha(\alpha+1)}{(\alpha+\beta+1)(\alpha+\beta)}-\left[\frac{\alpha}{\alpha+\beta}\right]^{2}
=\frac{\alpha(\alpha+1)}{(\alpha+\beta+1)(\alpha+\beta)}-\frac{\alpha^{2}}{(\alpha+\beta)^{2}}
=\frac{\alpha(\alpha+1)(\alpha+\beta)-\alpha^{2}(\alpha+\beta+1)}{(\alpha+\beta+1)(\alpha+\beta)^{2}}
=\frac{\alpha\beta}{(\alpha+\beta+1)(\alpha+\beta)^{2}}
$$

i.e. $\operatorname{Var}(X)=\dfrac{\alpha\beta}{(\alpha+\beta+1)(\alpha+\beta)^{2}}$
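The theorem is easy to verify numerically against a library implementation (illustrative Python sketch; the parameter values are arbitrary choices, not from the text):

```python
from scipy.stats import beta as beta_dist

a, b = 2.0, 5.0   # illustrative parameter choices

# Closed forms from Theorem 3.2.1
mean_cf = a / (a + b)
var_cf = a * b / ((a + b)**2 * (a + b + 1))

# Library values for comparison
mean_lib, var_lib = beta_dist.stats(a, b, moments='mv')
print(mean_cf, float(mean_lib))   # 0.2857... for both
print(var_cf, float(var_lib))    # 0.0255... for both
```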
The moment generating function for the beta distribution is neither simple nor useful. The beta density can take on a variety of shapes, including the uniform on $(0,1)$ when $\alpha=\beta=1$. Figure 1 indicates how these shapes change with the values of the parameters.
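A short plotting sketch along the lines of Figure 1 (Python with Matplotlib assumed; the parameter pairs below are illustrative choices, not necessarily those used in the figure):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta as beta_dist

x = np.linspace(0.001, 0.999, 400)
# Illustrative pairs: uniform, U-shaped, right/left-skewed, symmetric bell
for a, b in [(1, 1), (0.5, 0.5), (2, 5), (5, 2), (5, 5)]:
    plt.plot(x, beta_dist.pdf(x, a, b), label=f"alpha={a}, beta={b}")
plt.legend(); plt.xlabel("x"); plt.ylabel("f(x)")
plt.title("Beta densities for several parameter choices")
plt.show()
```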
3.2.4. THE NORMAL DISTRIBUTION
The normal distribution, which we shall study in this section, is in many ways the
cornerstone of modern statistical theory. It was investigated first in the eighteenth
century when scientists observed an astonishing degree of regularity in errors of
measurement. They found that the patterns (distributions) that they observed could be
closely approximated by continuous curves, which they referred to as “normal curves
of errors” and attributed to the laws of chance.
Definition 3.2.6
A random variable X has a normal distribution and it is referred to as a normal random
variable if and only if its probability density function is given by
$$
N(x;\mu,\sigma^{2})=f(x)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}},\quad\text{for }-\infty<x<\infty\qquad\qquad(3.2.3)
$$

where $-\infty<\mu<\infty$, $\sigma>0$.
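As with the beta density, Eq. (3.2.3) can be checked against a library implementation (illustrative Python sketch; the values of $\mu$ and $\sigma$ are arbitrary):

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.0, 2.0   # illustrative parameter choices
x = np.linspace(-5, 7, 9)

# Density from Eq. (3.2.3)
f = 1 / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - mu) / sigma)**2)

print(np.allclose(f, norm.pdf(x, loc=mu, scale=sigma)))   # True
```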
A random variable having this distribution is said to be normal or normally
distributed. This distribution is very important, because many random variables of
practical interest are normal or approximately normal or can be transformed into
normal random variables in a relatively simple fashion. Furthermore, the normal
distribution is a useful approximation of more complicated distributions. It also appears
in the mathematical proofs of various statistical tests.
In Eq. (3.2.3), μ is the mean and σ is the standard deviation of the distribution.
The curve of $f(x)$ is called the bell-shaped curve. It is symmetric with respect to $\mu$. The figure above shows $f(x)$ for $\mu=0$. For $\mu>0$ ($\mu<0$) the curves have the same shape, but are shifted $|\mu|$ units to the right (left).
First, though, let us show that the formula of Definition 3.2.6 can serve as a probability density. Since the values of $N(x;\mu,\sigma^{2})$ are evidently positive as long as $\sigma>0$, we must show that the total area under the curve is equal to 1. Integrating from $-\infty$ to $\infty$ and making the substitution

$$
z=\frac{x-\mu}{\sigma}\;\Rightarrow\;\frac{dz}{dx}=\frac{1}{\sigma}\;\Rightarrow\;dz=\frac{1}{\sigma}\,dx,
$$

we get
$$
\int_{-\infty}^{\infty}f(x)\,dx
=\int_{-\infty}^{\infty}\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}dx
=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\frac{1}{2}z^{2}}\,dz
=\frac{2}{\sqrt{2\pi}}\int_{0}^{\infty}e^{-\frac{1}{2}z^{2}}\,dz\quad\text{(by symmetry).}
$$

Then, since the integral on the right equals $\Gamma\!\left(\tfrac{1}{2}\right)/\sqrt{2}=\sqrt{\pi}/\sqrt{2}$ according to Equation (3.2.2), it follows that the total area under the curve is equal to

$$
\frac{2}{\sqrt{2\pi}}\times\frac{\sqrt{\pi}}{\sqrt{2}}=1.
$$
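The same conclusion can be reached numerically (a minimal Python sketch with SciPy; the parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = -0.5, 1.5   # illustrative parameter choices
f = lambda x: 1 / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - mu) / sigma)**2)

# Total area under the density over the whole real line
area, _ = quad(f, -np.inf, np.inf)
print(area)   # 1.0 (up to numerical tolerance)
```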
We first work out the m.g.f., which will in turn be used to obtain the mean and variance expressions.

By definition,

$$
M_X(t)=E(e^{tX})=E\!\left[e^{tX+\mu t-\mu t}\right]=E\!\left[e^{t(X-\mu)}\cdot e^{\mu t}\right]=e^{\mu t}\,E\!\left[e^{t(X-\mu)}\right]
$$
$$
=\int_{-\infty}^{\infty}e^{\mu t}\,e^{t(x-\mu)}f(x)\,dx
=e^{\mu t}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma^{2}}}\,e^{t(x-\mu)}\,e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}\,dx
=e^{\mu t}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma^{2}}}\,e^{-\frac{1}{2\sigma^{2}}\left[(x-\mu)^{2}-2\sigma^{2}t(x-\mu)\right]}\,dx
$$
Completing the square in the exponent,

$$
(x-\mu)^{2}-2\sigma^{2}t(x-\mu)=\left[(x-\mu)-\sigma^{2}t\right]^{2}-\sigma^{4}t^{2}
$$

Then,
$$
M_X(t)=e^{\mu t}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma^{2}}}\,e^{-\frac{1}{2\sigma^{2}}\left\{\left[(x-\mu)-\sigma^{2}t\right]^{2}-\sigma^{4}t^{2}\right\}}\,dx
=e^{\mu t}\,e^{\frac{\sigma^{2}t^{2}}{2}}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma^{2}}}\,e^{-\frac{1}{2\sigma^{2}}\left[(x-\mu)-\sigma^{2}t\right]^{2}}\,dx
=e^{\mu t+\frac{\sigma^{2}t^{2}}{2}}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma^{2}}}\,e^{-\frac{1}{2\sigma^{2}}\left[(x-\mu)-\sigma^{2}t\right]^{2}}\,dx
$$
The integral

$$
\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma^{2}}}\,e^{-\frac{1}{2\sigma^{2}}\left[(x-\mu)-\sigma^{2}t\right]^{2}}\,dx
$$

is 1, since it is the total area under a normal density with mean $\mu+\sigma^{2}t$ and variance $\sigma^{2}$.

Hence

$$
M_X(t)=e^{\mu t+\frac{\sigma^{2}t^{2}}{2}}.
$$
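Before differentiating, this closed form can be compared with a direct Monte Carlo estimate of $E(e^{tX})$ (illustrative Python sketch; the values of $\mu$, $\sigma$, and $t$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, t = 0.7, 1.3, 0.4   # illustrative values

# Closed form: M_X(t) = exp(mu t + sigma^2 t^2 / 2)
mgf_cf = np.exp(mu * t + sigma**2 * t**2 / 2)

# Monte Carlo estimate of E(e^{tX}) from normal samples
x = rng.normal(mu, sigma, size=1_000_000)
mgf_mc = np.exp(t * x).mean()

print(mgf_cf, mgf_mc)   # agree to roughly three decimal places
```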
Mean $=M_X'(0)$:

$$
M_X'(t)=\frac{d}{dt}\left[e^{\mu t+\frac{\sigma^{2}t^{2}}{2}}\right]=(\mu+\sigma^{2}t)\,e^{\mu t+\frac{\sigma^{2}t^{2}}{2}}
$$

At $t=0$, we have $M_X'(0)=\mu$.

$$
\Rightarrow\;\text{Mean}=\mu
$$
Next we work out $\operatorname{Var}(X)$. We know that

$$
\operatorname{Var}(X)=\sigma^{2}=E(X^{2})-[E(X)]^{2}=M_X''(0)-\left[M_X'(0)\right]^{2}.
$$

To differentiate

$$
M_X'(t)=(\mu+\sigma^{2}t)\,e^{\mu t+\frac{\sigma^{2}t^{2}}{2}}
$$

by the product rule, let
$$
U=\mu+\sigma^{2}t\;\Rightarrow\;\frac{dU}{dt}=\sigma^{2}
$$

$$
V=e^{\mu t+\frac{\sigma^{2}t^{2}}{2}}\;\Rightarrow\;\frac{dV}{dt}=(\mu+\sigma^{2}t)\,e^{\mu t+\frac{\sigma^{2}t^{2}}{2}}
$$

$$
\therefore\;M_X''(t)=V\frac{dU}{dt}+U\frac{dV}{dt}=e^{\mu t+\frac{\sigma^{2}t^{2}}{2}}\cdot\sigma^{2}+(\mu+\sigma^{2}t)(\mu+\sigma^{2}t)\,e^{\mu t+\frac{\sigma^{2}t^{2}}{2}}
$$
At $t=0$:

$$
M_X''(0)=e^{0}\cdot\sigma^{2}+(\mu+\sigma^{2}\cdot 0)(\mu+\sigma^{2}\cdot 0)\,e^{0}=\mu^{2}+\sigma^{2}
$$
But

$$
\operatorname{Var}(X)=M_X''(0)-\left[M_X'(0)\right]^{2}=\mu^{2}+\sigma^{2}-(\mu)^{2}=\sigma^{2}.
$$

Hence $\operatorname{Var}(X)=\sigma^{2}$, which is the variance of the normal variable $X$.
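As a final check, the mean and variance can be recovered from the m.g.f. by numerical differentiation at $t=0$, mirroring the proof above (illustrative Python sketch; the parameter values are arbitrary):

```python
import numpy as np

mu, sigma = 2.0, 3.0   # illustrative parameter choices
M = lambda t: np.exp(mu * t + sigma**2 * t**2 / 2)

h = 1e-5
# Central finite differences approximate M'(0) and M''(0)
m1 = (M(h) - M(-h)) / (2 * h)            # ~ mu
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2    # ~ mu^2 + sigma^2

print(m1, m2, m2 - m1**2)   # ~ 2.0, ~ 13.0, ~ 9.0 = sigma^2
```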