Lecture Prob
Department of Mathematics,
4-Apr-08 BITS Pilani, Goa Campus 2
Continuous Random Variables (cont’d)
P(a ≤ X ≤ b) =?
Continuous Random Variables (cont’d)
Continuous Random Variables (cont’d)
Note: The value of f(x) does not give the probability that the corresponding random variable takes on the value x; in the continuous case, probabilities are given by integrals, not by the values of f(x).
Continuous Random Variables (cont’d)
P(a ≤ X ≤ b) = area under f(x) from a to b = ∫_a^b f(x) dx
[Figure: density curve f(x), with the area between a and b shaded as P(a ≤ X ≤ b)]
2. ∫_{−∞}^{∞} f(x) dx = 1.
F(x) represents the probability that a random variable with
probability density f(x) takes on a value less than or equal to
x and the corresponding function F is called the cumulative
distribution function or simply distribution function of the
random variable X.
Continuous Random Variables (cont’d)
µ = ∫_{−∞}^{∞} x f(x) dx
Continuous Random Variables (cont’d)
σ² = ∫_{−∞}^{∞} (x − µ)² f(x) dx = µ₂′ − µ²
(b) P(0.6 ≤ X ≤ 1.2) = ∫_{0.6}^{1.2} f(x) dx = ∫_{0.6}^{1.0} x dx + ∫_{1.0}^{1.2} (2 − x) dx
= [x²/2]_{0.6}^{1.0} + [2x − x²/2]_{1.0}^{1.2} = 0.50
Continuous Random Variables (cont’d)
Solution: F(x) = ∫_{−∞}^{x} f(t) dt
If x ≤ 0, F(x) = 0.
If 0 < x < 1, F(x) = ∫_0^x t dt = x²/2.
If 1 ≤ x < 2, F(x) = ∫_0^1 t dt + ∫_1^x (2 − t) dt = 2x − x²/2 − 1.
If x ≥ 2, F(x) = ∫_0^1 t dt + ∫_1^2 (2 − t) dt = 1.
Continuous Random Variables (cont’d)
F(x) = 0 for x ≤ 0; x²/2 for 0 < x < 1; 2x − x²/2 − 1 for 1 ≤ x < 2; 1 for x ≥ 2.
(a) P(X > 1.8) = 1 − F(1.8) = 0.02
(b) P(0.4 ≤ X ≤ 1.6) = F(1.6) − F(0.4) = 0.84.
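The piecewise CDF and the two probabilities above are easy to sanity-check in code. The following is an illustrative sketch (not part of the slides) of F for the triangular density f(x) = x on (0, 1), 2 − x on [1, 2):

```python
def F(x: float) -> float:
    """CDF of the triangular density f(x) = x on (0,1), 2 - x on [1,2)."""
    if x <= 0:
        return 0.0
    if x < 1:
        return x * x / 2
    if x < 2:
        return 2 * x - x * x / 2 - 1
    return 1.0

p_a = 1 - F(1.8)        # P(X > 1.8)
p_b = F(1.6) - F(0.4)   # P(0.4 <= X <= 1.6)
print(round(p_a, 2), round(p_b, 2))  # 0.02 0.84
```

Both values match the results computed analytically on the slide.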
Continuous Random Variables (cont’d)
µ₂′ = ∫_{−∞}^{∞} x² f(x) dx = 7/6, so
σ² = µ₂′ − µ² = 7/6 − 1 = 1/6.
Find the mean of
f(x) = (1/π) · 1/(1 + x²), −∞ < x < ∞
Find the median and mode of
f(x) = λe^(−λx), x > 0
median = (log 2)/λ, mode = 0
Density Curves
∫_{−∞}^{∞} e^(−x²) dx = √π
The Normal Distribution (cont’d)
The antiderivative of the Gaussian function is the error
function.
Gaussian functions are used as pre-smoothing kernels in
image processing.
A Gaussian function is the wave function of the ground state
of the quantum harmonic oscillator.
Gaussian functions are also associated with the vacuum state
in quantum field theory.
Gaussian beams are used in optical and microwave systems.
Gaussian orbitals are used in computational chemistry.
Normal Distributions
Symmetric
Single-peaked
Bell-shaped
Tails fall off quickly
The mean, median, and mode are the same (Unimodal).
The points where the curvature changes are one standard deviation on either side of the mean.
The mean and standard deviation completely specify the
curve.
The Empirical Rule
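The empirical (68-95-99.7) rule can be checked numerically from the standard normal CDF, written here with math.erf. This code is an illustration, not part of the slides:

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF F(z) via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for k in (1, 2, 3):
    p = phi(k) - phi(-k)   # P(mu - k*sigma < X < mu + k*sigma)
    print(f"within {k} sd: {p:.4f}")
# within 1 sd: 0.6827, within 2 sd: 0.9545, within 3 sd: 0.9973
```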
The Normal Distribution (cont’d)
Mean = ∫_{−∞}^{∞} x f(x; µ, σ²) dx
= (1/(√(2π)σ)) ∫_{−∞}^{∞} (x − µ) e^(−(x−µ)²/2σ²) dx + (µ/(√(2π)σ)) ∫_{−∞}^{∞} e^(−(x−µ)²/2σ²) dx
(put (x − µ)/σ = u in the first integral)
= (σ/√(2π)) ∫_{−∞}^{∞} u e^(−u²/2) du + µ ∫_{−∞}^{∞} f(x; µ, σ²) dx.
The first integral is 0 (odd integrand) and the second is 1, so Mean = µ.
For the variance, the same substitution and the symmetry of the integrand give
Variance = (2σ²/√(2π)) ∫_0^∞ u² e^(−u²/2) du
and, integrating by parts,
= (2σ²/√(2π)) ( [−u e^(−u²/2)]_0^∞ + ∫_0^∞ e^(−u²/2) du ) = (2σ²/√(2π)) ∫_0^∞ e^(−u²/2) du,
since ∫ u e^(−u²/2) du = −e^(−u²/2), which we can easily get by putting u²/2 = v in this integral.
The Normal Distribution (cont’d)
We know that
∫_{−∞}^{∞} f(x; µ, σ²) dx = 1 ⇔ (1/(√(2π)σ)) ∫_{−∞}^{∞} e^(−(x−µ)²/2σ²) dx = 1
Putting µ = 0 and σ = 1, we get
(1/√(2π)) ∫_{−∞}^{∞} e^(−x²/2) dx = 1 ⇔ (2/√(2π)) ∫_0^∞ e^(−x²/2) dx = 1.
Hence,
Variance = σ²
Moment Generating function for Normal
Distribution
f(x; µ, σ²) = (1/(√(2π)σ)) e^(−(x−µ)²/2σ²) for −∞ < x < ∞
m.g.f. = E(e^(tX)) = (1/(√(2π)σ)) ∫_{−∞}^{∞} e^(tx) e^(−(x−µ)²/2σ²) dx
= (1/(√(2π)σ)) ∫_{−∞}^{∞} e^(tx) e^(−(x−µ)²/2σ²) dx
= (e^(tµ)/(√(2π)σ)) ∫_{−∞}^{∞} e^(tx − tµ) e^(−(x−µ)²/2σ²) dx
= (e^(tµ)/(√(2π)σ)) ∫_{−∞}^{∞} e^(t(x−µ)) e^(−(x−µ)²/2σ²) dx
= (e^(tµ)/(√(2π)σ)) ∫_{−∞}^{∞} e^((2σ²t(x−µ) − (x−µ)²)/2σ²) dx
= (e^(tµ)/(√(2π)σ)) ∫_{−∞}^{∞} e^(−((x−µ)² − 2σ²t(x−µ))/2σ²) dx
= (e^(tµ)/(√(2π)σ)) ∫_{−∞}^{∞} e^(−((x−µ)² − 2σ²t(x−µ) + σ⁴t² − σ⁴t²)/2σ²) dx
= (e^(tµ)/(√(2π)σ)) ∫_{−∞}^{∞} e^(−((x − µ − σ²t)² − σ⁴t²)/2σ²) dx
= (e^(tµ)/(√(2π)σ)) ∫_{−∞}^{∞} e^(−(x − µ − σ²t)²/2σ²) e^(σ⁴t²/2σ²) dx
= (e^(tµ)/(√(2π)σ)) ∫_{−∞}^{∞} e^(−(x − µ − σ²t)²/2σ²) e^(σ²t²/2) dx
= (e^(tµ + σ²t²/2)/(√(2π)σ)) ∫_{−∞}^{∞} e^(−(x − µ − σ²t)²/2σ²) dx
= e^(tµ + σ²t²/2) · (1/(√(2π)σ)) ∫_{−∞}^{∞} e^(−(x − µ − σ²t)²/2σ²) dx
= e^(tµ + σ²t²/2)
(the remaining integral equals 1, being the total probability of a normal density with mean µ + σ²t and standard deviation σ)
M(t) = e^(tµ + σ²t²/2)
log M(t) = tµ + σ²t²/2
dM(t)/dt |_{t=0} = [ e^(tµ + σ²t²/2) (µ + σ²t) ]_{t=0}
E(X) = µ
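The closed form M(t) = e^(tµ + σ²t²/2) can also be checked by simulation: the sample average of e^(tX) for X ~ N(µ, σ²) should approach it. A quick Monte Carlo sketch (not from the slides; the parameter values are arbitrary):

```python
import math
import random

random.seed(0)
mu, sigma, t = 1.0, 2.0, 0.3
n = 200_000
# sample average of e^{tX} estimates the m.g.f. at t
estimate = sum(math.exp(t * random.gauss(mu, sigma)) for _ in range(n)) / n
closed_form = math.exp(t * mu + sigma**2 * t**2 / 2)
print(estimate, closed_form)  # the two numbers should agree to about 2 decimals
```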
The Normal Distribution (cont’d)
The normal distribution with µ = 0 and σ = 1 is called the standard normal distribution.
Distribution function for the standard normal distribution:
F(z) = (1/√(2π)) ∫_{−∞}^{z} e^(−t²/2) dt = P(Z ≤ z)
The standard normal table at the end of the book gives the values of F(z) for positive or negative z = 0.00, 0.01, 0.02, …, 3.49, and for z = 3.50, z = 4.00 and z = 5.00.
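Where no table is at hand, the same values of F(z) can be reproduced with math.erf (an illustration, not part of the original slides):

```python
import math

def F(z: float) -> float:
    """Standard normal distribution function F(z) = P(Z <= z)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(F(1.00), 4))   # 0.8413, matching the table entry for z = 1.00
print(round(F(-1.37), 4))  # 0.0853
```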
The Normal Distribution (cont’d)
[Figure: the standard normal probability F(z) = P(Z ≤ z), and the standard normal probability F(b) − F(a) = P(a ≤ Z ≤ b)]
F(−z) = 1 − F(z)
Proof: The standard normal density function is given by
f(z) = (1/√(2π)) e^(−z²/2),
which is an even function, i.e. f(−z) = f(z).
The Normal Distribution (cont’d)
Hence, substituting t = −s,
F(−z) = ∫_{−∞}^{−z} f(t) dt = −∫_{∞}^{z} f(−s) ds = ∫_{z}^{∞} f(−s) ds
= ∫_{z}^{∞} f(s) ds = 1 − ∫_{−∞}^{z} f(s) ds = 1 − F(z)
The Normal Distribution (cont’d)
The Normal Distribution (cont’d)
P(Z ≥ zα) = α ⇔ F(zα) = 1 − α
[Figure: the zα notation for a standard normal distribution]
The Normal Distribution (cont’d)
If a random variable X has a normal distribution with the
mean µ and the standard deviation σ, then
Z = (X − µ)/σ
is a random variable which has the standard normal distribution.
In this case, Z is called standardized random variable.
The probability that the random variable X will take on a
value less than or equal to a, is given by
P(X ≤ a) = P((X − µ)/σ ≤ (a − µ)/σ) = P(Z ≤ (a − µ)/σ) = F((a − µ)/σ)
which we can get from Table 3.
The Normal Distribution (cont’d)
(b) P(X < 14.9) = F((14.9 − 16.2)/1.25) = F(−1.04) = 0.1492.
[Figure: number of outages, region from 7.5 to 11.6]
5.3 The Normal Approximation to the Binomial Distribution
The normal distribution can be used to approximate the binomial distribution when n is large and p, the probability of a success, is close to 0.50 and hence not small enough to use the Poisson approximation.
Normal approximation to binomial distribution
Theorem 5.1 If X is a random variable having the binomial
distribution with the parameters n and p, and if
Z = (X − np)/√(np(1 − p))
then the limiting form of the distribution function of this
standardized random variable as n → ∞ is given by
The Normal Approximation to the Binomial Distribution (cont’d)
F(z) = (1/√(2π)) ∫_{−∞}^{z} e^(−t²/2) dt, −∞ < z < ∞.
Although X takes on only the values 0, 1, 2, … , n, in the limit
as n → ∞ the distribution of the corresponding standardized
random variable is continuous and the corresponding probability
density is the standard normal density.
A good rule of thumb for the normal approximation
Use the normal approximation to the binomial distribution only
when np and n(1 - p) are both greater than 15.
The Normal Approximation to the Binomial Distribution (cont’d)
Example 8: If a random variable has the binomial distribution
with n = 30 and p = 0.60, use the normal approximation to
determine the probabilities that it will take on
(a) a value less than 12;
(b) the value 14;
(c) a value greater than 16.
Solution: µ = np = 18; σ² = np(1 − p) = 7.2; σ = 2.6833
(a) P(X < 12) = P(X < 11.5) (using continuity correction)
= F((11.5 − 18)/2.6833) = F(−2.42) = 0.0078.
The Normal Approximation to the Binomial Distribution (cont’d)
(b) P(X = 14) = P(13.5 < X < 14.5) (using continuity correction)
= F((14.5 − 18)/2.6833) − F((13.5 − 18)/2.6833)
= F(−1.3044) − F(−1.677)
= 0.0961 − 0.0468 = 0.0493.
(c) P(X > 16) = P(X > 16.5) (using continuity correction)
= 1 − P(X ≤ 16.5) = 1 − F((16.5 − 18)/2.6833)
= 1 − F(−0.559) = F(0.559) = 0.7120.
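Part (b) of Example 8 can be compared against the exact binomial probability, which shows how good the continuity-corrected approximation is here (illustrative code, not part of the slides):

```python
import math

n, p = 30, 0.60
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

def F(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def binom_pmf(k: int) -> float:
    """Exact binomial probability P(X = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

exact_b = binom_pmf(14)                               # exact P(X = 14)
approx_b = F((14.5 - mu) / sigma) - F((13.5 - mu) / sigma)
print(round(exact_b, 4), round(approx_b, 4))          # both near 0.049
```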
5.5 The Uniform Distribution
The uniform distribution, with parameters α and β, has probability density function
f(x) = 1/(β − α) for α < x < β, and f(x) = 0 elsewhere.
[Figure: graph of the uniform probability density, constant at height 1/(β − α) between α and β]
The Uniform Distribution (cont’d)
Mean and variance: µ = (α + β)/2 and σ² = (β − α)²/12.
Proof:
µ₂′ = ∫_α^β x² · (1/(β − α)) dx = (1/(β − α)) [x³/3]_α^β = (β² + αβ + α²)/3
Hence
σ² = µ₂′ − µ² = (β² + αβ + α²)/3 − (α + β)²/4 = (β − α)²/12.
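A Monte Carlo check of these two formulas, using random.uniform (an illustration, not from the slides; the endpoint values are arbitrary):

```python
import random

random.seed(1)
a, b = 2.0, 5.0
xs = [random.uniform(a, b) for _ in range(100_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)  # near (a+b)/2 = 3.5 and (b-a)^2/12 = 0.75
```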
The Uniform Distribution (cont’d)
Distribution function for the uniform density:
F(x) = 0 for x ≤ α; (x − α)/(β − α) for α < x < β; 1 for x ≥ β.
Example: In certain experiments, the error made in determining the
solubility of a substance is a random variable having the uniform
density with α = - 0.025 and β = 0.025. What are the probabilities
that such an error will be
(a) between 0.010 and 0.015;
(b) between –0.012 and 0.012?
5.6 The Log-Normal Distribution
The Log-Normal Distribution (cont’d)
P(a < X < b) = F((ln b − α)/β) − F((ln a − α)/β)
where F is the distribution function of the standard normal distribution.
Mean of the log-normal distribution: µ = e^(α + β²/2)
Variance of the log-normal distribution: σ² = e^(2α + β²)(e^(β²) − 1)
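The stated mean and variance can be checked against samples drawn with random.lognormvariate, whose two arguments are exactly this α and β (illustrative code, not part of the slides; the parameter values are arbitrary):

```python
import math
import random

random.seed(2)
alpha, beta = 0.5, 0.4                 # ln X ~ N(alpha, beta^2)
mean_formula = math.exp(alpha + beta**2 / 2)
var_formula = math.exp(2 * alpha + beta**2) * (math.exp(beta**2) - 1)

xs = [random.lognormvariate(alpha, beta) for _ in range(200_000)]
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)
print(mean_formula, m)  # should agree to about 2 decimals
print(var_formula, v)
```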
The Log-Normal Distribution (cont’d)
= (1/(√(2π)β)) ∫_{−∞}^{∞} e^(−{y² + α² − 2(α + β²)y}/2β²) dy
The Log-Normal Distribution (cont’d)
µ = (1/(√(2π)β)) ∫_{−∞}^{∞} e^(−{(y − (α + β²))² − β⁴ − 2αβ²}/2β²) dy
= e^(α + β²/2) ∫_{−∞}^{∞} (1/(√(2π)β)) e^(−(y − (α + β²))²/2β²) dy
The Log-Normal Distribution (cont’d)
µ₂′ = (1/(√(2π)β)) ∫_{−∞}^{∞} e^(−{(y − (α + 2β²))² − 4β⁴ − 4αβ²}/2β²) dy
= e^(2(α + β²)) ∫_{−∞}^{∞} (1/(√(2π)β)) e^(−(y − (α + 2β²))²/2β²) dy = e^(2(α + β²))
Since the integrand is a normal density function with µ = α + 2β² and σ = β, the value of the integral is 1. Hence we have
σ² = µ₂′ − µ² = e^(2(α + β²)) − e^(2α + β²) = e^(2α + β²)(e^(β²) − 1).
The Log-Normal Distribution (cont’d)
[Figure: log-normal density curves f(x) for (α, β) = (1, 1), (1, 2), (2, 3), plotted for 0 ≤ x ≤ 7]
The Gamma Distribution (cont’d)
= (β²/Γ(α)) ∫_0^∞ y^(α+1) e^(−y) dy = β² Γ(α + 2)/Γ(α) = αβ²(α + 1)
Hence σ² = µ₂′ − µ² = αβ²(α + 1) − α²β² = αβ².
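A Monte Carlo check of µ = αβ and σ² = αβ², using random.gammavariate(shape, scale), which matches the slides' parameterization (illustrative code, not part of the slides; the parameter values are arbitrary):

```python
import random

random.seed(3)
alpha, beta = 3.0, 2.0
xs = [random.gammavariate(alpha, beta) for _ in range(200_000)]
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)
print(m, v)  # near alpha*beta = 6 and alpha*beta^2 = 12
```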
The Gamma Distribution (cont’d)
For α = 1 (the exponential distribution): µ = β and σ² = β²
The Gamma Distribution (cont’d)
So if the waiting time between successive arrivals is a random variable with the distribution function
F(t) = 1 − e^(−αt),
the probability density of the waiting time between successive arrivals is given by
f(t) = (d/dt) F(t) = αe^(−αt),
which is an exponential distribution with β = 1/α.
Note: If X represents the waiting time to the first arrival, then
P(X > t) = 1 − P(X ≤ t) = 1 − F(t) = e^(−αt).
The Gamma Distribution (cont’d)
(b) P(t > 3) = ∫_3^∞ 0.6 e^(−0.6t) dt = [−e^(−0.6t)]_3^∞ = e^(−1.8).
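The closed form e^(−1.8) can be checked against simulation with random.expovariate, whose argument is the rate α = 0.6 (illustrative code, not part of the slides):

```python
import math
import random

random.seed(4)
rate = 0.6                                  # alpha in the slides
closed = math.exp(-rate * 3)                # e^{-1.8}
n = 200_000
# fraction of simulated waiting times exceeding 3
sim = sum(random.expovariate(rate) > 3 for _ in range(n)) / n
print(closed, sim)
```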
The Beta Distribution
µ = α/(α + β) and σ² = αβ/((α + β)²(α + β + 1))
The Beta Distribution (cont’d)
Beta Function:
B(m, n) = ∫_0^1 x^(m−1) (1 − x)^(n−1) dx, m > 0, n > 0
The Beta Distribution (cont’d)
Proof for mean
µ = ∫_0^1 x · (Γ(α + β)/(Γ(α)Γ(β))) x^(α−1) (1 − x)^(β−1) dx
= (Γ(α + β)/(Γ(α)Γ(β))) ∫_0^1 x^(α+1−1) (1 − x)^(β−1) dx
= (Γ(α + β)/(Γ(α)Γ(β))) B(α + 1, β)
= (Γ(α + β)/(Γ(α)Γ(β))) · (Γ(α + 1)Γ(β)/Γ(α + β + 1)) = α/(α + β)
The Beta Distribution (cont’d)
Proof for variance
µ₂′ = ∫_0^1 x² · (Γ(α + β)/(Γ(α)Γ(β))) x^(α−1) (1 − x)^(β−1) dx
= (Γ(α + β)/(Γ(α)Γ(β))) ∫_0^1 x^(α+2−1) (1 − x)^(β−1) dx
= (Γ(α + β)/(Γ(α)Γ(β))) B(α + 2, β)
= (Γ(α + β)/(Γ(α)Γ(β))) · (Γ(α + 2)Γ(β)/Γ(α + β + 2)) = α(α + 1)/((α + β)(α + β + 1))
The Beta Distribution (cont’d)
Hence
σ² = µ₂′ − µ² = α(α + 1)/((α + β)(α + β + 1)) − α²/(α + β)² = αβ/((α + β)²(α + β + 1))
Notes. (1) For α = 1 and β = 1, we obtain the uniform distribution on the interval from 0 to 1 as a special case.
(2) If α = β, then the density function is symmetric about 1/2 (red & purple plots).
The Beta Distribution (cont’d)
= 90[ ∫_0^{0.1} (1 − x)^8 dx − ∫_0^{0.1} (1 − x)^9 dx ]
= 90[ [(1 − x)^9/(−9)]_0^{0.1} + [(1 − x)^{10}/10]_0^{0.1} ]
= 0.2639.
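Evaluating the same antiderivative numerically confirms 0.2639. The integrand here is evidently the beta density 90 x (1 − x)^8, i.e. α = 2, β = 9 (illustrative code, not part of the slides):

```python
def P_less(c: float) -> float:
    """P(X < c) for density 90*x*(1-x)^8, via the antiderivative above."""
    g = lambda x: 90 * (-(1 - x) ** 9 / 9 + (1 - x) ** 10 / 10)
    return g(c) - g(0)

print(round(P_less(0.1), 4))  # 0.2639
```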
The Weibull Distribution
σ² = α^(−2/β) [ Γ(1 + 2/β) − {Γ(1 + 1/β)}² ]
The Weibull Distribution
(cont’d)
Proof for mean:
µ = ∫_0^∞ x · αβ x^(β−1) e^(−αx^β) dx = ∫_0^∞ (u/α)^(1/β) e^(−u) du  (put u = αx^β)
= α^(−1/β) ∫_0^∞ u^(1 + 1/β − 1) e^(−u) du = α^(−1/β) Γ(1 + 1/β)
Proof for variance:
µ₂′ = ∫_0^∞ x² · αβ x^(β−1) e^(−αx^β) dx = ∫_0^∞ (u/α)^(2/β) e^(−u) du  (put u = αx^β)
The Weibull Distribution
(cont’d)
µ₂′ = α^(−2/β) ∫_0^∞ u^(1 + 2/β − 1) e^(−u) du = α^(−2/β) Γ(1 + 2/β)
Hence
σ² = µ₂′ − µ² = α^(−2/β) Γ(1 + 2/β) − α^(−2/β) (Γ(1 + 1/β))²
= α^(−2/β) [ Γ(1 + 2/β) − {Γ(1 + 1/β)}² ]
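The derived mean can be checked by simulation. Note that random.weibullvariate(scale, shape) uses X = scale·(−ln U)^(1/shape), so for the slides' density f(x) = αβ x^(β−1) e^(−αx^β) the scale is α^(−1/β) (illustrative code, not part of the slides; the parameter values are arbitrary):

```python
import math
import random

random.seed(5)
alpha, beta = 2.0, 1.5
mean_formula = alpha ** (-1 / beta) * math.gamma(1 + 1 / beta)
scale = alpha ** (-1 / beta)
xs = [random.weibullvariate(scale, beta) for _ in range(200_000)]
print(mean_formula, sum(xs) / len(xs))  # should agree to about 2 decimals
```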
In the equation x² + 2x − Q = 0, Q is a random variable. Find the probability density of the larger root.
x² + 2x − Q = 0
roots are:
x = (−2 ± √(4 + 4Q))/2, i.e. x = −1 ± √(1 + Q)
Let Y denote the larger root:
Y = g(Q) = −1 + √(1 + Q)
Now, Q is a random variable, uniformly distributed over the interval (0, 2):
f(q) = 1/2 for 0 < q < 2, and 0 elsewhere.
Since y = √(1 + q) − 1 and 0 < q < 2:
1 < 1 + q < 3 ⇒ 1 < √(1 + q) < √3 ⇒ 0 < √(1 + q) − 1 < √3 − 1
∴ 0 < y < √3 − 1
Hence, the density function for Y is given by
f(y) = f(g⁻¹(y)) |dq/dy|
Since y = √(1 + q) − 1, dy/dq = 1/(2√(1 + q)), hence dq/dy = 2√(1 + q).
f(y) = f(q) · 2√(1 + q)
or, f(y) = (1/2) · 2√(1 + q) = √(1 + q)  [but √(1 + q) = y + 1]
∴ f(y) = 1 + y
Density function for Y is given by:
f(y) = y + 1 for 0 < y < √3 − 1, and 0 otherwise.
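The transformation can be verified by simulating Q ~ Uniform(0, 2) and comparing the empirical distribution of Y = −1 + √(1 + Q) with the CDF implied by the derived density, F(y) = y²/2 + y (illustrative code, not part of the slides):

```python
import math
import random

random.seed(6)
n = 200_000
ys = [-1 + math.sqrt(1 + random.uniform(0, 2)) for _ in range(n)]
y0 = 0.5
empirical = sum(y <= y0 for y in ys) / n
exact = y0**2 / 2 + y0        # integral of (t + 1) from 0 to y0
print(exact, empirical)       # both near 0.625
```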
5.10 Joint Distributions—Discrete
and Continuous
In many statistical investigations, one is frequently
interested in studying the relationship between two
or more r.v.'s, such as the relationship between
annual income and yearly savings per family or the
relationship between occupation and hypertension.
In this chapter, we consider n-dimensional vector-valued r.v.'s; however, we start with the case of n = 2. We will study them simultaneously in order to determine not only their individual behavior but also the degree of relationship between them.
5.10 Joint Distributions—Discrete
and Continuous (cont’d)
Discrete Variables
• For two discrete random variables X1 and X2, the probability that X1 will take the value x1 and X2 will take the value x2 is written as P(X1 = x1, X2 = x2).
• Consequently, P(X1 = x1, X2 = x2) is the probability of the intersection of the events X1 = x1 and X2 = x2.
• If X1 and X2 are discrete random variables, the function given by f(x1, x2) = P(X1 = x1, X2 = x2) for each pair of values (x1, x2) within the range of X1 and X2 is called the joint probability distribution of X1 and X2.
Joint Distributions—Discrete and
Continuous (cont’d)
• The distribution of probability is specified by listing the probabilities associated with all possible pairs of values x1 and x2, either by formula or in a table.
• A function of two variables can serve as the joint probability distribution of a pair of discrete random variables X1 and X2 if and only if its values, f(x1, x2), satisfy the conditions
1. f(x1, x2) ≥ 0 for each pair of values (x1, x2) within its domain;
2. ∑_{x1} ∑_{x2} f(x1, x2) = 1, where the double summation extends over all possible pairs (x1, x2) within its domain.
Joint Distributions—Discrete and
Continuous (cont’d)
Joint distribution function: If X1 and X2 are discrete random variables, the function given by
F(x1, x2) = P(X1 ≤ x1, X2 ≤ x2) = ∑_{s ≤ x1} ∑_{t ≤ x2} f(s, t)
is called the joint distribution function of X1 and X2.
Joint Distributions—Discrete and
Continuous (cont’d)
Marginal distribution: If X1 and X2 are discrete random
variables and f (x1, x2) is the value of their joint probability
distribution at (x1, x2), the function given by
P(X1 = x1) = f1(x1) = ∑_{x2} f(x1, x2)
for each x1 within the range of X1 is called the marginal distribution of X1. Correspondingly, the function given by
P(X2 = x2) = f2(x2) = ∑_{x1} f(x1, x2)
Joint Distributions—Discrete and
Continuous (cont’d)
for each x2 within the range of X2 is called the marginal
distribution of X2.
Conditional probability distribution: Consistent with the definition of
conditional probability of events when A is the event X1 = x1 and B is the
event X2 = x2 , the conditional probability distribution of X1 = x1
given X2 = x2 is defined as
f1(x1 | x2) = f(x1, x2)/f2(x2) for all x1, provided f2(x2) ≠ 0
Similarly, the conditional probability distribution of X2 given X1 = x1 is defined as
f2(x2 | x1) = f(x1, x2)/f1(x1) for all x2, provided f1(x1) ≠ 0
Joint Distributions—Discrete and
Continuous (cont’d)
If f1(x1 | x2) = f1(x1) for all x1 and x2, so that the conditional probability distribution of X1 is free of x2, or equivalently, if
f(x1, x2) = f1(x1) f2(x2) for all x1, x2,
the two random variables are independent.
Example: Two scanners are needed for an experiment. Of the five, two
have electronic defects, another one has a defect in memory, and two
are in good working order. Two units are selected at random.
(a) Find the joint probability distribution of X1 = the number with
electronic defects and X2 = the number with a defect in memory.
(b) Find the probability of 0 and 1 total defects among the two selected.
(c) Find the marginal probability distribution of X1.
(d) Find the conditional probability distribution of X1 given X2 = 0.
Solution: (a) The joint probability distribution of X1 and X2 is given by
Joint Distributions—Discrete and
Continuous (cont’d)
f(x1, x2) = C(2, x1) C(1, x2) C(2, 2 − x1 − x2) / C(5, 2), where x1 = 0, 1, 2 and x2 = 0, 1, with 0 ≤ x1 + x2 ≤ 2.
The joint probability distribution f(x1, x2) of X1 and X2 can be summarized in the following table:
f(x1, x2)    x2 = 0   x2 = 1
x1 = 0       0.1      0.2
x1 = 1       0.4      0.2
x1 = 2       0.1      0.0
Joint Distributions—Discrete and
Continuous (cont’d)
(b) Let A be the event that X1 + X2 equals 0 or 1
P(A) = f (0 , 0) + f (0 , 1) + f (1, 0) = 0.1 + 0.2 + 0.4 = 0 .7
(c) The marginal probability distribution of X1 is given by
f1(x1) = ∑_{x2} f(x1, x2) = f(x1, 0) + f(x1, 1)
f1 ( 0 ) = f ( 0,0 ) + f ( 0,1) = 0 .1 + 0 .2 = 0 .3
f1 (1) = f (1,0 ) + f (1,1) = 0 .4 + 0 .2 = 0 .6
f1 ( 2 ) = f ( 2,0 ) + f ( 2,1) = 0 .1 + 0 .0 = 0 .1
Joint Distributions—Discrete and
Continuous (cont’d)
f(x1, x2)    x2 = 0   x2 = 1   f1(x1)
x1 = 0       0.1      0.2      0.3
x1 = 1       0.4      0.2      0.6
x1 = 2       0.1      0.0      0.1
f2(x2)       0.6      0.4      1
(d) f2(0) = 0.1 + 0.4 + 0.1 = 0.6
f1(0 | 0) = f(0, 0)/f2(0) = 0.1/0.6 = 1/6
f1(1 | 0) = f(1, 0)/f2(0) = 0.4/0.6 = 4/6
f1(2 | 0) = f(2, 0)/f2(0) = 0.1/0.6 = 1/6
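The whole scanner table, the marginal f2(0) and the conditional distribution can be recomputed with math.comb (illustrative code, not part of the slides):

```python
import math

def f(x1: int, x2: int) -> float:
    """Joint probability for the scanner example."""
    if 0 <= x1 + x2 <= 2:
        return (math.comb(2, x1) * math.comb(1, x2)
                * math.comb(2, 2 - x1 - x2)) / math.comb(5, 2)
    return 0.0

f2_0 = sum(f(x1, 0) for x1 in range(3))   # marginal f2(0)
print(f(0, 0), f(1, 0), f2_0)             # 0.1, 0.4 and 0.6
print(f(1, 0) / f2_0)                     # f1(1 | 0) = 4/6
```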
Joint Distributions—Discrete and
Continuous (cont’d)
All the definitions concerned with two random
variables X1 and X2 can be generalized for k random
variables X1, X2, …, Xk.
Let x1 be a possible value for the first random variable X1, x2 a possible value for the second random variable X2, and so on, with xk a possible value for the k-th random variable Xk. The values of the
joint probability distribution of k discrete random
variables X1, X2, …, Xk are given by
f(x1, x2, …, xk) = P(X1 = x1 , X2 = x2, …, Xk = xk)
Joint Distributions—Discrete and
Continuous (cont’d)
Continuous Variables
There are many situations in which we describe an
outcome by giving the value of several continuous
variables. For instance, we may measure the weight
and the hardness of a rock, the volume, pressure
and temperature of a gas, or the thickness, color,
compressive strength and potassium content of a
piece of glass.
Joint Distributions—Discrete and
Continuous (cont’d)
Example 1: If two random variables have the joint density
f(x1, x2) = x1x2 for 0 < x1 < 1, 0 < x2 < 2, and f(x1, x2) = 0 elsewhere,
find the probabilities that
(a) both random variables will take on values less than 1;
(b) the sum of the values taken on by the two random
variables will be less than 1.
Joint Distributions—Discrete and
Continuous (cont’d)
Solution:
(a) P(X1 < 1, X2 < 1) = ∫_{−∞}^{1} ∫_{−∞}^{1} f(x1, x2) dx1 dx2 = ∫_0^1 ∫_0^1 x1 x2 dx1 dx2
= ∫_0^1 x2 [x1²/2]_0^1 dx2 = (1/2) ∫_0^1 x2 dx2 = (1/2) · [x2²/2]_0^1 = 1/4.
Joint Distributions—Discrete and
Continuous (cont’d)
(b) P(X1 + X2 < 1) = ∫_{−∞}^{1} ∫_{−∞}^{1−x1} f(x1, x2) dx2 dx1 = ∫_0^1 ∫_0^{1−x1} x1 x2 dx2 dx1
[Figure: the region of integration below the line x1 + x2 = 1]
= ∫_0^1 x1 [x2²/2]_0^{1−x1} dx1 = (1/2) ∫_0^1 x1 (1 − x1)² dx1
= (1/2) ∫_0^1 (x1 − 2x1² + x1³) dx1
= (1/2) [x1²/2 − 2x1³/3 + x1⁴/4]_0^1 = 1/24.
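Both probabilities can be checked by Monte Carlo. Since f(x1, x2) = x1x2 factors as (2x1)(x2/2), the components are independent with CDFs F1(x) = x² and F2(x) = x²/4, which gives a direct inverse-transform sampler (illustrative code, not part of the slides):

```python
import random

random.seed(7)
n = 400_000
hits_a = hits_b = 0
for _ in range(n):
    x1 = random.random() ** 0.5       # inverse CDF of F1(x) = x^2
    x2 = 2 * random.random() ** 0.5   # inverse CDF of F2(x) = x^2/4
    hits_a += (x1 < 1) and (x2 < 1)
    hits_b += x1 + x2 < 1
print(hits_a / n, hits_b / n)  # near 1/4 and 1/24
```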
Joint Distributions—Discrete and
Continuous (cont’d)
Example 2: With reference to Example 1, find the marginal densities of the two random variables.
Solution: The marginal density of X1 is given by
f1(x1) = ∫_{−∞}^{∞} f(x1, x2) dx2 = ∫_0^2 x1 x2 dx2 = x1 [x2²/2]_0^2 = 2x1, 0 < x1 < 1.
The marginal density of X2 is given by
f2(x2) = ∫_{−∞}^{∞} f(x1, x2) dx1 = ∫_0^1 x1 x2 dx1 = x2 [x1²/2]_0^1 = x2/2, 0 < x2 < 2.
Joint Distributions—Discrete and
Continuous (cont’d)
Department of Mathematics,
4-Apr-08 BITS Pilani, Goa Campus 139
Joint Distributions—Discrete and
Continuous (cont’d)
Example 17: If two random variables have the joint density
f(x, y) = (6/5)(x + y²) for 0 < x < 1, 0 < y < 1, and f(x, y) = 0 elsewhere,
find
(a) an expression for f1(x | y) for 0 < y < 1;
(b) an expression for f1(x | 0.5).
Joint Distributions—Discrete and
Continuous (cont’d)
(a) f1(x | y) = f(x, y)/f2(y), provided f2(y) ≠ 0.
But for 0 < y < 1,
f2(y) = ∫_{−∞}^{∞} f(x, y) dx = (6/5) ∫_0^1 (x + y²) dx = (6/5)(y² + 1/2),
and f2(y) = 0 elsewhere.
Joint Distributions—Discrete and
Continuous (cont’d)
Therefore, for 0 < y < 1,
f1(x | y) = (x + y²)/(y² + 1/2) for 0 < x < 1, and 0 elsewhere.
(b) f1(x | 0.5) = (x + 0.25)/0.75 for 0 < x < 1, and 0 elsewhere.
Joint Distributions—Discrete and
Continuous (cont’d)
f1(x | 0.5) = (4x + 1)/3 for 0 < x < 1, and 0 elsewhere.
(c) Mean of the conditional density of X when Y = 0.5:
∫_0^1 x · f1(x | 0.5) dx = ∫_0^1 x(4x + 1)/3 dx = [4x³/9 + x²/6]_0^1 = 11/18.
Joint Distributions—Discrete and
Continuous (cont’d)
Properties of Expectation
Consider a function g(X) of a single continuous random variables X.
For instance, if X is an oven temperature in degrees centigrade, then g(X) = (9/5)X + 32 is the same temperature in degrees Fahrenheit.
If X has probability density function f, then the mean or expectation of g(X)
is given by
E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx
In the discrete case, where X has probability distribution f,
E[g(X)] = ∑_{xi} g(xi) f(xi)
where xi is a possible value for X.
Joint Distributions—Discrete and
Continuous (cont’d)
If X has mean µ = E(X), then taking g(x) = (x − µ)², we have E[g(X)] = E[(X − µ)²] = σ² (the variance of X).
For any random variable Y, let E(Y) denote its expectation, which is also its mean µY. Its variance is Var(Y), which is also written as σY².
When g(x) = ax + b, for given constants a and b, the random variable g(X)
has expectation
E(aX + b) = ∫_{−∞}^{∞} (ax + b) f(x) dx = a ∫_{−∞}^{∞} x f(x) dx + b ∫_{−∞}^{∞} f(x) dx = aE(X) + b = aµX + b
and variance
Var(aX + b) = ∫_{−∞}^{∞} (ax + b − aµX − b)² f(x) dx = a² ∫_{−∞}^{∞} (x − µX)² f(x) dx = a² Var(X)
Joint Distributions—Discrete and
Continuous (cont’d)
For given constants a and b
E ( aX + b ) = aE ( X ) + b and Var ( aX + b ) = a 2Var ( X )
Given any collection of k random variables, the function Y = g(X1, X2, …, Xk)
is also a random variable.
For example Y = X1 – X2 is a random variable when g(x1, x2) = x1 – x2 and
Y = 2X1 +3X2 is a random variable when g(x1, x2) = 2x1 +3x2 .
The random variable g(X1, X2, …, Xk) has expected value, or mean, given by
E[g(X1, X2, …, Xk)] = ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} g(x1, x2, …, xk) f(x1, x2, …, xk) dx1 ⋯ dxk
or, in the discrete case,
∑_{x1} ∑_{x2} ⋯ ∑_{xk} g(x1, x2, …, xk) f(x1, x2, …, xk)
where the summation is over all k-tuples (x1, x2, …, xk) of possible values.
Joint Distributions—Discrete and
Continuous (cont’d)
Let X1 and X2 be two random variables with means µ1 and µ2 respectively. Taking g(x1, x2) = (x1 − µ1)(x2 − µ2), we see that the product (x1 − µ1)(x2 − µ2) will be positive if both values x1 and x2 are above their respective means, or both are below their respective means; otherwise it will be negative.
The expected value E[(X1 – µ1)(X2 - µ2)] will tend to be positive when large
X1 and X2 tend to occur together and small X1 and X2 tend to occur
together with high probability.
This measure E[(X1 – µ1)(X2 - µ2)] of joint variation is called the population
covariance of X1 and X2.
When X1 and X2 are independent, their covariance E[(X1 – µ1)(X2 - µ2)] = 0.
Joint Distributions—Discrete and
Continuous (cont’d)
Proof: If X1 and X2 are independent, then f(x1, x2) = f1(x1) f2(x2), so
E[(X1 − µ1)(X2 − µ2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x1 − µ1)(x2 − µ2) f(x1, x2) dx1 dx2
= ∫_{−∞}^{∞} (x1 − µ1) f1(x1) dx1 · ∫_{−∞}^{∞} (x2 − µ2) f2(x2) dx2 = 0
The expectation of a linear combination of two independent random variables Y = a1X1 + a2X2 is
µY = E(Y) = E(a1X1 + a2X2) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (a1x1 + a2x2) f(x1, x2) dx1 dx2
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} (a1x1 + a2x2) f1(x1) f2(x2) dx1 dx2  (since X1 and X2 are independent)
Joint Distributions—Discrete and
Continuous (cont’d)
µY = a1 ∫_{−∞}^{∞} x1 f1(x1) dx1 ∫_{−∞}^{∞} f2(x2) dx2 + a2 ∫_{−∞}^{∞} f1(x1) dx1 ∫_{−∞}^{∞} x2 f2(x2) dx2
= a1E(X1) + a2E(X2) = a1µ1 + a2µ2
Note: This result holds even if the two random variables are not independent.
Var(Y) = E(Y − µY)² = E[(a1X1 + a2X2 − a1µ1 − a2µ2)²]
= E[(a1(X1 − µ1) + a2(X2 − µ2))²]
= E[a1²(X1 − µ1)² + a2²(X2 − µ2)² + 2a1a2(X1 − µ1)(X2 − µ2)]
= a1² E[(X1 − µ1)²] + a2² E[(X2 − µ2)²] + 2a1a2 E[(X1 − µ1)(X2 − µ2)]
= a1² Var(X1) + a2² Var(X2)
since the third term is zero because X1 and X2 are independent.
Joint Distributions—Discrete and
Continuous (cont’d)
These properties hold for any number of variables whether they are
continuous or discrete.
The mean and variance of linear combinations
Let Xi have mean µi and variance σi² for i = 1, 2, …, k. The linear
combination Y = a1 X1 + a2 X2 + ….. + ak Xk has
E(a1 X1 + a2 X2 + ….. + ak Xk) = a1 E(X1) + a2 E( X2) + ….. + ak E( Xk)
or
µY = ∑_{i=1}^{k} ai µi
When the random variables are independent
Var(a1 X1 + a2 X2 + ….. + ak Xk) = a12 Var(X1) + a22 Var( X2) + ….. + ak2 Var( Xk)
or
σY² = ∑_{i=1}^{k} ai² σi²
Joint Distributions—Discrete and
Continuous (cont’d)
Example: If X1 has mean 4 and variance 9 while X2 has mean –2 and
variance 5, and the two are independent, find
(a) E(2X1 + X2 – 5)
(b) Var(2X1 + X2 – 5)
Solution:
(a) E(2X1 + X2 – 5) = E(2X1 + X2) – 5 = 2E(X1) +E(X2) – 5
= 2(4) + ( - 2) – 5 = 1.
(b) Var (2X1 + X2 – 5) = Var(2X1 + X2) = 22 Var(X1) + Var(X2)
= 22(9) + 5 = 41.
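The example's answers, E(2X1 + X2 − 5) = 1 and Var(2X1 + X2 − 5) = 41, can be confirmed by simulation. The slides only fix the means and variances, so the choice of normal distributions below is an assumption for concreteness (illustrative code, not part of the slides):

```python
import math
import random

random.seed(8)
n = 200_000
# X1 ~ N(4, 9), X2 ~ N(-2, 5), independent
ys = [2 * random.gauss(4, 3) + random.gauss(-2, math.sqrt(5)) - 5
      for _ in range(n)]
m = sum(ys) / n
v = sum((y - m) ** 2 for y in ys) / n
print(m, v)  # near 1 and 41
```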
5.13 Simulation
Simulation is a technique of manipulating a model of a system through a
process of imitation.
To simulate the observation of continuous random variables we usually
start with uniform random numbers and relate these to the distribution
function of interest.
Let X be a continuous random variable with cumulative distribution function F(x); then U = F(X) is uniformly distributed on [0, 1]. So to find a random observation x of X, we select an n-digit uniform random number u and solve the equation
u = F(x) for x as x = F⁻¹(u).
Further, to generate a random sample of size r from X, we take a sequence
of r independent n-digit uniform random numbers say u1, u2, …., ur, and
then generate x1, x2, …., xr where
xi = F -1(ui); i = 1, 2, …..,r.
Simulation (cont’d)
Further, to generate a random sample of size n from X, we take n independent n-digit random numbers u1, u2, …, un, and then generate x1, x2, …, xn as
xi = β ln(1/ui), i = 1, 2, …, n.
{x1, x2, …., xn} is then the required random sample from X, whose distribution
is exponential with parameter β.
The simulated times it takes four pairs to learn how to operate the machine are
6.16327, 5.37122, 6.44433 and 6.35715.
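The inverse-transform recipe xi = β ln(1/ui) is straightforward to code; samples generated this way should have mean close to β (illustrative sketch, not part of the slides; the value of β is arbitrary):

```python
import math
import random

random.seed(9)
beta = 6.0
us = [random.random() for _ in range(200_000)]   # uniform random numbers
xs = [beta * math.log(1 / u) for u in us]        # exponential sample
print(sum(xs) / len(xs))  # near beta = 6
```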
Simulation (cont’d)
Simulation of an observation from uniform distribution on [a , b]
The density function for the uniform distribution is given by
f(x) = 1/(b − a) for a < x < b, and f(x) = 0 elsewhere.
The cumulative distribution function is
F(x) = 0 for x ≤ a; (x − a)/(b − a) for a < x < b; 1 for x ≥ b.
Solving for u = F(x) we get x = a + (b – a) u.