UNIT 13 NORMAL DISTRIBUTION


Structure
13.1 Introduction
Objectives

13.2 Normal Distribution


13.3 Chief Characteristics of Normal Distribution
13.4 Moments of Normal Distribution
13.5 Mode and Median of Normal Distribution
13.6 Mean Deviation about Mean
13.7 Some Problems Based on Properties of Normal Distribution
13.8 Summary
13.9 Solutions/Answers

13.1 INTRODUCTION
In Units 9 to 12, we have studied standard discrete distributions. From this unit
onwards, we are going to discuss standard continuous univariate distributions.
This unit and the next deal with the normal distribution, which has widespread
applications. It is used in almost all data-based research in agriculture, trade,
business, industry and the social sciences. For instance, the normal distribution
is a good approximation to the distribution of heights of a large number of
randomly selected students studying at the same level in a university.
The normal distribution occupies a unique position in probability theory, and it
can be used as an approximation to most other distributions. Discrete
distributions occurring in practice, including the binomial, Poisson and
hypergeometric distributions already studied in the previous block (Block 3),
can also be approximated by the normal distribution. You will notice in
subsequent courses that the theory of estimation of population parameters and
testing of hypotheses on the basis of sample statistics has also been developed
using the normal distribution, as most sampling distributions tend to normality
for large samples. Therefore, the study of the normal distribution is very
important.
Due to the various properties and applications of the normal distribution, we
have covered it in two units – Units 13 and 14. In the present unit, the normal
distribution is introduced and explained in Sec. 13.2. Chief characteristics of
the normal distribution are discussed in Sec. 13.3. Secs. 13.4, 13.5 and 13.6
describe the moments, mode, median and mean deviation about mean of the
distribution.
Objectives
After studying this unit, you would be able to:
• introduce and explain the normal distribution;
• know the conditions under which binomial and Poisson distributions tend to the normal distribution;
• state various characteristics of the normal distribution;
• compute the moments, mode, median and mean deviation about mean of the distribution; and
• solve various practical problems based on the above properties of the normal distribution.

13.2 NORMAL DISTRIBUTION


The concept of the normal distribution was initially discovered by the
French-born mathematician Abraham De Moivre (1667-1754), working in England,
in 1733. De Moivre obtained this continuous distribution as a limiting case of
the binomial distribution. His work was further refined by Pierre S. Laplace
(1749-1827) in 1774. But the contribution of Laplace remained unnoticed for a
long time, till it was given concrete shape by Karl Gauss (1777-1855), who
first made reference to it in 1809 as the distribution of errors in astronomy.
That is why the normal distribution is sometimes called the Gaussian
distribution. Though the normal distribution can be used as an approximation to
most other distributions, here we are going to discuss (without proof) its
approximation to (i) the binomial distribution and (ii) the Poisson
distribution.
Normal Distribution as a Limiting Case of Binomial Distribution
Normal distribution is a limiting case of binomial distribution under the
following conditions:
i) n, the number of trials, is indefinitely large i.e. n  ;
ii) neither p (the probability of success) nor q (the probability of failure) is too
close to zero.
Under these conditions, the binomial distribution can be closely associated by
X  np
a normal distribution with standardized variable given by Z  . The
npq
approximation becomes better with increasing n. In practice, the
approximation is very good if both np and nq are greater than 5.
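As a quick illustration of this rule of thumb, the following sketch (not part of the unit; it assumes Python with scipy is available) compares an exact binomial probability with its normal approximation, using a continuity correction:

```python
# Illustrative sketch: normal approximation to the binomial, assuming scipy.
from scipy.stats import binom, norm

n, p = 100, 0.4                          # np = 40 and nq = 60, both well above 5
mu, sigma = n * p, (n * p * (1 - p)) ** 0.5

# Exact binomial probability P(35 <= X <= 45)
exact = binom.cdf(45, n, p) - binom.cdf(34, n, p)

# Normal approximation with continuity correction: P(34.5 < X < 45.5)
approx = norm.cdf(45.5, mu, sigma) - norm.cdf(34.5, mu, sigma)

print(f"exact = {exact:.4f}, normal approximation = {approx:.4f}")
```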
For binomial distribution, you have already studied [see Unit 9 of this course]
that
$\mu_2 = npq$,

$\beta_1 = \frac{\mu_3^2}{\mu_2^3} = \frac{[npq(q-p)]^2}{(npq)^3} = \frac{(q-p)^2}{npq}$,

$\beta_2 = \frac{\mu_4}{\mu_2^2} = \frac{npq[1 + 3(n-2)pq]}{(npq)^2} = 3 + \frac{1 - 6pq}{npq}$,

$\gamma_1 = \sqrt{\beta_1} = \frac{q-p}{\sqrt{npq}} = \frac{1-2p}{\sqrt{npq}}$, and

$\gamma_2 = \beta_2 - 3 = \frac{1 - 6pq}{npq}$.

From the above results, it may be noticed that if n → ∞, then the moment
coefficient of skewness γ₁ → 0 and the moment coefficient of kurtosis β₂ → 3,
i.e. γ₂ → 0. Hence, as n → ∞, the distribution becomes symmetrical and the
curve of the distribution becomes mesokurtic, which is the main feature of the
normal distribution.
Normal Distribution as a Limiting Case of Poisson Distribution
You have already studied in Unit 10 of this course that Poisson distribution is a
limiting case of binomial distribution under the following conditions:
i) n, the number of trials, is indefinitely large, i.e. n → ∞;
ii) p, the constant probability of success for each trial, is very small, i.e. p → 0;
iii) np is a finite quantity, say λ.
As discussed above, there is a relation between the binomial and normal
distributions. It can, in fact, be shown that the Poisson distribution approaches
a normal distribution with standardized variable given by
$Z = \frac{X - \lambda}{\sqrt{\lambda}}$
as λ increases indefinitely.

For the Poisson distribution, you have already studied in Unit 10 of the course that

$\beta_1 = \frac{\mu_3^2}{\mu_2^3} = \frac{\lambda^2}{\lambda^3} = \frac{1}{\lambda}$,  $\gamma_1 = \sqrt{\beta_1} = \frac{1}{\sqrt{\lambda}}$; and

$\beta_2 = \frac{\mu_4}{\mu_2^2} = \frac{3\lambda^2 + \lambda}{\lambda^2} = 3 + \frac{1}{\lambda}$,  $\gamma_2 = \beta_2 - 3 = \frac{1}{\lambda}$.

As in the case of the binomial distribution, it may be noticed from the above
results that the moment coefficient of skewness γ₁ → 0 and the moment
coefficient of kurtosis β₂ → 3, i.e. γ₂ → 0, as λ → ∞. Hence, as λ → ∞, the
distribution becomes symmetrical and the curve of the distribution becomes
mesokurtic, which is the main feature of the normal distribution.
Under the conditions discussed above, a random variable following a binomial
distribution or a Poisson distribution approaches a normal distribution, which
is defined as follows:

Definition: A continuous random variable X is said to follow the normal
distribution with parameters μ (−∞ < μ < ∞) and σ² (> 0) if it takes on any real
value and its probability density function is given by

$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad -\infty < x < \infty$;

which may also be written as

$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right], \quad -\infty < x < \infty.$
Remark
i) The probability function represented by f  x  may also be written as

f x; ,  2 .
ii) If a random variable X follows normal distribution with mean  and
variance 2, then we may write, “X is distributed to N(, 2)” and is
expressed as X  N(, 2).
iii) No continuous probability function and hence the normal distribution
can be used to obtain the probability of occurrence of a particular value
of the random variable. This is because such probability is very small,
so instead of specifying the probability of taking a particular value by
the random variable, we specify the probability of its lying within
interval. For detail discussion on the concept, Sec. 5.4 of Unit 5 may be
referred to.
iv) If X ~ N(μ, σ²), then $Z = \frac{X-\mu}{\sigma}$ is the standard normal
variate, having mean 0 and variance 1. The values of the mean and variance of
the standard normal variate are obtained as follows, using the properties of
expectation and variance (see Unit 8 of this course):

Mean of Z, i.e. $E(Z) = E\left(\frac{X-\mu}{\sigma}\right) = \frac{1}{\sigma}E(X-\mu) = \frac{1}{\sigma}[E(X)-\mu] = \frac{1}{\sigma}(\mu-\mu) = 0$   [since E(X) = Mean of X = μ]

Variance of Z, i.e. $V(Z) = V\left(\frac{X-\mu}{\sigma}\right) = \frac{1}{\sigma^2}V(X-\mu) = \frac{1}{\sigma^2}V(X) = \frac{\sigma^2}{\sigma^2} = 1$   [since the variance of X is σ²]
v) The probability density function of the standard normal variate
$Z = \frac{X-\mu}{\sigma}$ is given by

$\phi(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}, \quad -\infty < z < \infty.$

This result can be obtained on replacing f(x) by φ(z), x by z, μ by 0 and σ by 1
in the probability density function of the normal variate X, i.e. in

$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad -\infty < x < \infty.$

vi) The graph of the normal probability density function f(x) with respect to x
is the famous 'bell-shaped' curve. The top of the bell is directly above the
mean μ. For large values of σ the curve tends to flatten out, and for small
values of σ it has a sharp peak, as shown in Fig. 13.1.

Fig. 13.1

The normal distribution has various properties and a large number of
applications. It can be used as an approximation to most other distributions and
hence is the most important probability distribution in statistical analysis.
The theory of estimation of population parameters and testing of hypotheses on
the basis of sample statistics (to be discussed in the next course, MST-004) has
also been developed using the normal distribution, as most sampling
distributions tend to normality for large samples. The normal distribution has
become widely accepted on the basis of much practical work and, as a result,
holds a central position in Statistics.
Let us now take some examples of writing the probability density function of the
normal distribution when the mean and variance are specified, and vice versa:
Example 1: (i) If X ~ N (40, 25) then write down the p.d.f. of X
(ii) If X ~ N (  36, 20) then write down the p.d.f. of X
(iii) If X ~ N (0, 2) then write down the p.d.f. of X
Solution: (i) Here we are given X ~ N(40, 25).
∴ in usual notations, we have
μ = 40, σ² = 25 ⇒ σ = 5   [σ > 0 always]
Now, the p.d.f. of the random variable X is given by
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$
$= \frac{1}{5\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-40}{5}\right)^2}, \quad -\infty < x < \infty$

(ii) Here we are given X ~ N(−36, 20).
∴ in usual notations, we have
μ = −36, σ² = 20 ⇒ σ = √20
Now, the p.d.f. of the random variable X is given by
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$
$= \frac{1}{\sqrt{20}\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-(-36)}{\sqrt{20}}\right)^2} = \frac{1}{2\sqrt{10\pi}}\, e^{-\frac{1}{40}(x+36)^2}, \quad -\infty < x < \infty$

(iii) Here we are given X ~ N(0, 2).
∴ in usual notations, we have
μ = 0, σ² = 2 ⇒ σ = √2
Now, the p.d.f. of the random variable X is given by
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} = \frac{1}{\sqrt{2}\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-0}{\sqrt{2}}\right)^2}$
$= \frac{1}{2\sqrt{\pi}}\, e^{-\frac{1}{4}x^2}, \quad -\infty < x < \infty$
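A small numerical check may help here. The sketch below (an illustration only, assuming Python with scipy is available) evaluates the formula written for part (i) and compares it with scipy's built-in normal density:

```python
# Illustrative check: the p.d.f. of N(40, 25) written above, versus scipy.
import math
from scipy.stats import norm

mu, sigma = 40, 5          # sigma = 5 is the standard deviation, not the variance
x = 42.0
by_formula = (1 / (sigma * math.sqrt(2 * math.pi))) * math.exp(-0.5 * ((x - mu) / sigma) ** 2)
by_scipy = norm.pdf(x, loc=mu, scale=sigma)
print(by_formula, by_scipy)   # both print the same density value
```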
Example 2: Below, in each case, is given the p.d.f. of a normally distributed
random variable. Obtain the parameters (mean and variance) of the variable.

(i) $f(x) = \frac{1}{6\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-46}{6}\right)^2}, \quad -\infty < x < \infty$

(ii) $f(x) = \frac{1}{4\sqrt{2\pi}}\, e^{-\frac{1}{32}(x-60)^2}, \quad -\infty < x < \infty$

Solution: (i) $f(x) = \frac{1}{6\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-46}{6}\right)^2}, \quad -\infty < x < \infty$
Comparing it with
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$,
we have
μ = 46, σ = 6
∴ Mean = μ = 46, variance = σ² = 36

(ii) $f(x) = \frac{1}{4\sqrt{2\pi}}\, e^{-\frac{1}{32}(x-60)^2} = \frac{1}{4\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-60}{4}\right)^2}, \quad -\infty < x < \infty$
Comparing it with
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$,
we get
μ = 60, σ = 4
∴ Mean = μ = 60, variance = σ² = 16
Here are some exercises for you.

E 1) Write down the p.d.f. of the r.v. X in each of the following cases:
(i) X ~ N(1/2, 4/9)
(ii) X ~ N(−40, 16)

E 2) Below, in each case, is given the p.d.f. of a normally distributed random
variable. Obtain the parameters (mean and variance) of the variable.
(i) $f(x) = \frac{1}{2\sqrt{2\pi}}\, e^{-\frac{x^2}{8}}, \quad -\infty < x < \infty$
(ii) $f(x) = \frac{1}{2\sqrt{\pi}}\, e^{-\frac{1}{4}(x-2)^2}, \quad -\infty < x < \infty$

Now, we are going to state some important properties of the normal distribution
in the next section.

13.3 CHIEF CHARACTERISTICS OF NORMAL DISTRIBUTION
The normal probability distribution with mean  and variance 2 has the
following properties:
i) The curve of the normal distribution is bell-shaped as shown in Fig. 13.1
given in Remark (vi) of Sec. 13.2.
ii) The curve of the distribution is completely symmetrical about x = μ, i.e. if
we fold the curve at x = μ, the two parts of the curve are mirror images of
each other.
iii) For normal distribution, Mean = Median = Mode

iv) f(x), being a probability density, can never be negative and hence no
portion of the curve lies below the x-axis.
v) The normal curve comes closer and closer to the x-axis as x moves towards
+∞ or −∞, yet it never touches it.

vi) The normal curve has only one mode.
vii) The central moments of the normal distribution are
μ₁ = 0, μ₂ = σ², μ₃ = 0, μ₄ = 3σ⁴, and
$\beta_1 = \frac{\mu_3^2}{\mu_2^3} = 0$, $\beta_2 = \frac{\mu_4}{\mu_2^2} = 3$,
i.e. the distribution is symmetrical and the curve is always mesokurtic.
Note: Not only μ₁ and μ₃ but all the odd-order central moments are zero for a
normal distribution.

viii) For normal curve,


Q3 – Median = Median – Q1
i.e. First and third quartiles of normal distribution are equidistant from
median.
Q  Q1 2
ix) Quartile Deviation (Q.D.) = 3 is approximately equal to of the
2 3
standard deviation.
4
x) Mean deviation is approximately equal to of the standard deviation.
5
2 4
xi) Q.D. : M.D. : S.D. =  :  :  = 10 : 12 : 15
3 5
xii) The points of inflexion of the curve are at
$x = \mu \pm \sigma$, where $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}}$.
xiii) If X₁, X₂, …, Xₙ are independent normal variables with means
μ₁, μ₂, …, μₙ and variances σ₁², σ₂², …, σₙ² respectively, then the linear
combination a₁X₁ + a₂X₂ + … + aₙXₙ of X₁, X₂, …, Xₙ is also a normal variable
with mean a₁μ₁ + a₂μ₂ + … + aₙμₙ and variance a₁²σ₁² + a₂²σ₂² + … + aₙ²σₙ².

xiv) In particular, the sum or difference of two independent normal variates is
also a normal variate. If X and Y are two independent normal variates with
means μ₁, μ₂ and variances σ₁², σ₂², then
X + Y ~ N(μ₁ + μ₂, σ₁² + σ₂²) and X − Y ~ N(μ₁ − μ₂, σ₁² + σ₂²).
Also, if X₁, X₂, …, Xₙ are independent variates each distributed as N(μ, σ²),
then their mean $\bar{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right)$.
xv) Area property:

$P(\mu - \sigma < X < \mu + \sigma) = \int_{\mu-\sigma}^{\mu+\sigma} f(x)\,dx = 0.6827$,
or $P(-1 < Z < 1) = \int_{-1}^{1} \phi(z)\,dz = 0.6827$;

$P(\mu - 2\sigma < X < \mu + 2\sigma) = \int_{\mu-2\sigma}^{\mu+2\sigma} f(x)\,dx = 0.9544$,
or $P(-2 < Z < 2) = \int_{-2}^{2} \phi(z)\,dz = 0.9544$; and

$P(\mu - 3\sigma < X < \mu + 3\sigma) = \int_{\mu-3\sigma}^{\mu+3\sigma} f(x)\,dx = 0.9973$,
or $P(-3 < Z < 3) = \int_{-3}^{3} \phi(z)\,dz = 0.9973$.

This property and its applications will be discussed in detail in Unit 14.
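The area property is easy to verify numerically. The following sketch (an illustration only, assuming Python with scipy is available; the unit itself works from the printed table) computes these three probabilities for the standard normal variate, and the same values hold for any μ and σ:

```python
# Illustrative check of the area property, assuming scipy is available.
from scipy.stats import norm

for k in (1, 2, 3):
    prob = norm.cdf(k) - norm.cdf(-k)      # standard normal, Z ~ N(0, 1)
    print(f"P(|Z| < {k}) = {prob:.4f}")    # approximately 0.6827, 0.9545, 0.9973
```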
Let us now establish some of these properties.

13.4 MOMENTS OF NORMAL DISTRIBUTION


Before finding the moments, the gamma function is defined below [see Unit 16 of
the course for a detailed discussion]; it is used for computing the even-order
central moments.

Gamma Function

If n > 0, the integral $\int_0^{\infty} x^{n-1} e^{-x}\,dx$ is called a gamma
function and is denoted by Γ(n).

e.g. $\int_0^{\infty} x^{2} e^{-x}\,dx = \int_0^{\infty} x^{3-1} e^{-x}\,dx = \Gamma(3)$

and $\int_0^{\infty} x^{-1/2} e^{-x}\,dx = \int_0^{\infty} x^{\frac{1}{2}-1} e^{-x}\,dx = \Gamma\!\left(\frac{1}{2}\right)$.

Some properties of the gamma function are
i) If n > 1, $\Gamma(n) = (n-1)\,\Gamma(n-1)$
ii) If n is a positive integer, then $\Gamma(n) = (n-1)!$
iii) $\Gamma\!\left(\frac{1}{2}\right) = \sqrt{\pi}$.
Now, the first four central moments of the normal distribution are obtained as
follows:

First Order Central Moment

As the first order central moment (μ₁) of any distribution is always zero [see
Unit 3 of MST-002], the first order central moment (μ₁) of the normal
distribution = 0.
Second Order Central Moment

$\mu_2 = \int_{-\infty}^{\infty} (x-\mu)^2 f(x)\,dx$   [See Unit 8 of MST-003]

$= \int_{-\infty}^{\infty} (x-\mu)^2\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx$

Put $\frac{x-\mu}{\sigma} = z \Rightarrow x - \mu = \sigma z$.
Differentiating, $\frac{dx}{\sigma} = dz \Rightarrow dx = \sigma\,dz$.
Also, when x → −∞, z → −∞, and when x → ∞, z → ∞.

$\therefore\ \mu_2 = \int_{-\infty}^{\infty} \sigma^2 z^2\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}\, \sigma\,dz = \frac{\sigma^2}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z^2 e^{-\frac{1}{2}z^2}\,dz = \frac{2\sigma^2}{\sqrt{2\pi}} \int_{0}^{\infty} z^2 e^{-\frac{1}{2}z^2}\,dz$

[On changing z to −z, the integrand $z^2 e^{-\frac{z^2}{2}}$ does not change, i.e.
it is an even function of z (see Unit 2 of MST-001). Hence the following property
of the definite integral has been used:
$\int_{-\infty}^{\infty} f(z)\,dz = 2\int_{0}^{\infty} f(z)\,dz$ if f(z) is an even function of z.]

Now, put $\frac{z^2}{2} = t \Rightarrow z = \sqrt{2t} = \sqrt{2}\,t^{1/2} \Rightarrow dz = \frac{\sqrt{2}}{2}\, t^{-1/2}\,dt = \frac{dt}{\sqrt{2t}}$

$\therefore\ \mu_2 = \frac{2\sigma^2}{\sqrt{2\pi}} \int_{0}^{\infty} (2t)\, e^{-t}\, \frac{dt}{\sqrt{2t}} = \frac{2\sqrt{2}\,\sigma^2}{\sqrt{2\pi}} \int_{0}^{\infty} t^{1/2} e^{-t}\,dt = \frac{2\sigma^2}{\sqrt{\pi}} \int_{0}^{\infty} t^{\frac{3}{2}-1} e^{-t}\,dt$

$= \frac{2\sigma^2}{\sqrt{\pi}}\, \Gamma\!\left(\frac{3}{2}\right)$   [by the definition of the gamma function]

$= \frac{2\sigma^2}{\sqrt{\pi}} \cdot \frac{1}{2}\, \Gamma\!\left(\frac{1}{2}\right)$   [by Property (i) of the gamma function]

$= \frac{2\sigma^2}{\sqrt{\pi}} \cdot \frac{1}{2}\, \sqrt{\pi} = \sigma^2$   [by Property (iii) of the gamma function]
Third Order Central Moment

$\mu_3 = \int_{-\infty}^{\infty} (x-\mu)^3 f(x)\,dx = \int_{-\infty}^{\infty} (x-\mu)^3\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx$

Put $\frac{x-\mu}{\sigma} = z \Rightarrow x - \mu = \sigma z \Rightarrow dx = \sigma\,dz$, and hence

$\mu_3 = \int_{-\infty}^{\infty} \sigma^3 z^3\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}\, \sigma\,dz = \frac{\sigma^3}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z^3 e^{-\frac{1}{2}z^2}\,dz$

Now, the integrand $z^3 e^{-\frac{1}{2}z^2}$ changes to $-z^3 e^{-\frac{1}{2}z^2}$
on changing z to −z, i.e. it is an odd function of z.
Therefore, using the following property of the definite integral:
$\int_{-a}^{a} f(z)\,dz = 0$ if f(z) is an odd function of z,
we have
$\mu_3 = \frac{\sigma^3}{\sqrt{2\pi}} \times 0 = 0.$

Fourth Order Central Moment
$\mu_4 = \int_{-\infty}^{\infty} (x-\mu)^4 f(x)\,dx = \int_{-\infty}^{\infty} (x-\mu)^4\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx$

Putting $\frac{x-\mu}{\sigma} = z \Rightarrow dx = \sigma\,dz$,

$\mu_4 = \int_{-\infty}^{\infty} \sigma^4 z^4\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}\,dz = \frac{\sigma^4}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z^4 e^{-\frac{1}{2}z^2}\,dz = \frac{2\sigma^4}{\sqrt{2\pi}} \int_{0}^{\infty} z^4 e^{-\frac{1}{2}z^2}\,dz$

[the integrand $z^4 e^{-\frac{z^2}{2}}$ does not change on changing z to −z and
hence it is an even function of z; the same property as used in the case of μ₂
has been applied]

Put $\frac{z^2}{2} = t \Rightarrow z^2 = 2t \Rightarrow 2z\,dz = 2\,dt \Rightarrow z\,dz = dt \Rightarrow dz = \frac{dt}{z} = \frac{dt}{\sqrt{2t}}$

$\therefore\ \mu_4 = \frac{2\sigma^4}{\sqrt{2\pi}} \int_{0}^{\infty} (2t)^2\, e^{-t}\, \frac{dt}{\sqrt{2t}} = \frac{4\sqrt{2}\,\sigma^4}{\sqrt{2\pi}} \int_{0}^{\infty} t^{3/2} e^{-t}\,dt = \frac{4\sigma^4}{\sqrt{\pi}} \int_{0}^{\infty} t^{\frac{5}{2}-1} e^{-t}\,dt$

$= \frac{4\sigma^4}{\sqrt{\pi}}\, \Gamma\!\left(\frac{5}{2}\right)$   [by the definition of the gamma function]

$= \frac{4\sigma^4}{\sqrt{\pi}} \cdot \frac{3}{2}\, \Gamma\!\left(\frac{3}{2}\right)$   [by Property (i) of the gamma function]

$= \frac{4\sigma^4}{\sqrt{\pi}} \cdot \frac{3}{2} \cdot \frac{1}{2}\, \Gamma\!\left(\frac{1}{2}\right)$   [by Property (i) of the gamma function]

$= \frac{3\sigma^4}{\sqrt{\pi}}\, \sqrt{\pi}$   [using $\Gamma\!\left(\frac{1}{2}\right) = \sqrt{\pi}$, Property (iii)]

$= 3\sigma^4$
Thus, the first four central moments of the normal distribution are
μ₁ = 0, μ₂ = σ², μ₃ = 0, μ₄ = 3σ⁴.

$\therefore\ \beta_1 = \frac{\mu_3^2}{\mu_2^3} = 0$, $\beta_2 = \frac{\mu_4}{\mu_2^2} = \frac{3\sigma^4}{\sigma^4} = 3$.

Therefore, the moment coefficient of skewness γ₁ = 0
⇒ the distribution is symmetrical.
The moment coefficient of kurtosis is β₂ = 3, i.e. γ₂ = 0
⇒ the curve of the normal distribution is mesokurtic.

Now, let us obtain the mode and median of the normal distribution in the next
section.

13.5 MODE AND MEDIAN OF NORMAL DISTRIBUTION
Mode

Let X ~ N(μ, σ²); then the p.d.f. of X is given by
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad -\infty < x < \infty$   ...(1)

Taking logarithms on both sides of (1), we get
$\log f(x) = \log\frac{1}{\sigma\sqrt{2\pi}} - \frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2 \log e$   [log mn = log m + log n and log mⁿ = n log m]
$= \log\frac{1}{\sigma\sqrt{2\pi}} - \frac{1}{2\sigma^2}(x-\mu)^2$   [as logₑ e = 1]

Differentiating w.r.t. x,
$\frac{1}{f(x)}\, f'(x) = 0 - \frac{1}{2\sigma^2}\cdot 2(x-\mu) = -\frac{(x-\mu)}{\sigma^2}$
$\Rightarrow f'(x) = -\frac{(x-\mu)}{\sigma^2}\, f(x)$   … (2)

For a maximum or minimum,
$f'(x) = 0$
$\Rightarrow -\frac{(x-\mu)}{\sigma^2}\, f(x) = 0$
$\Rightarrow x - \mu = 0$   [as f(x) ≠ 0]
$\Rightarrow x = \mu$

Now, differentiating (2) w.r.t. x, we have
$f''(x) = -\frac{(x-\mu)}{\sigma^2}\, f'(x) - \frac{1}{\sigma^2}\, f(x)$
$f''(x)$ at x = μ is $0 - \frac{f(\mu)}{\sigma^2} = -\frac{f(\mu)}{\sigma^2} < 0$
∴ x = μ is the point where the function has a maximum value.
∴ Mode of X is μ.
Median

Let M denote the median of the normally distributed random variable X.
We know that the median divides the distribution into two equal parts
$\Rightarrow \int_{-\infty}^{M} f(x)\,dx = \int_{M}^{\infty} f(x)\,dx = \frac{1}{2}$
$\Rightarrow \int_{-\infty}^{M} f(x)\,dx = \frac{1}{2}$
$\Rightarrow \int_{-\infty}^{\mu} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx + \int_{\mu}^{M} f(x)\,dx = \frac{1}{2}$

In the first integral, let us put $\frac{x-\mu}{\sigma} = z$; therefore dx = σ dz.
Also, when x = μ, z = 0, and when x → −∞, z → −∞.
Thus, we have
$\int_{-\infty}^{0} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}\,dz + \int_{\mu}^{M} f(x)\,dx = \frac{1}{2}$
$\Rightarrow \frac{1}{2} + \int_{\mu}^{M} f(x)\,dx = \frac{1}{2}$
[Z is a standard normal variate with p.d.f. $\phi(z) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}z^2}$, so $\int_{-\infty}^{\infty}\phi(z)\,dz = 1 \Rightarrow \int_{-\infty}^{0}\phi(z)\,dz = \frac{1}{2}$]
$\Rightarrow \int_{\mu}^{M} f(x)\,dx = 0$
$\Rightarrow M = \mu$   [as f(x) ≠ 0]
∴ Median of X = μ.

From the above two results, we see that
Mean = Median = Mode = μ.

13.6 MEAN DEVIATION ABOUT THE MEAN


The mean deviation about the mean for the normal distribution is
$\int_{-\infty}^{\infty} |x - \text{Mean}|\, f(x)\,dx$   [See Section 8.4 of Unit 8]
$= \int_{-\infty}^{\infty} |x - \mu|\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx$

Put $\frac{x-\mu}{\sigma} = z \Rightarrow x - \mu = \sigma z$, $dx = \sigma\,dz$

∴ M.D. about mean $= \int_{-\infty}^{\infty} |\sigma z|\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}\, \sigma\,dz = \frac{\sigma}{\sqrt{2\pi}} \int_{-\infty}^{\infty} |z|\, e^{-\frac{1}{2}z^2}\,dz$

Now, the integrand $|z|\, e^{-\frac{1}{2}z^2}$ is an even function of z as it does
not change on changing z to −z; ∴ by the property
"$\int_{-a}^{a} f(x)\,dx = 2\int_{0}^{a} f(x)\,dx$, if f(x) is an even function of x", we have

M.D. about mean $= \frac{2\sigma}{\sqrt{2\pi}} \int_{0}^{\infty} |z|\, e^{-\frac{1}{2}z^2}\,dz$

Now, as the range of z is from 0 to ∞, i.e. z takes non-negative values,
|z| = z and hence

M.D. about mean $= \frac{2\sigma}{\sqrt{2\pi}} \int_{0}^{\infty} z\, e^{-\frac{1}{2}z^2}\,dz$

Put $\frac{z^2}{2} = t \Rightarrow z^2 = 2t \Rightarrow 2z\,dz = 2\,dt \Rightarrow z\,dz = dt$

∴ M.D. about mean $= \frac{2\sigma}{\sqrt{2\pi}} \int_{0}^{\infty} e^{-t}\,dt = \frac{2\sigma}{\sqrt{2\pi}} \left[\frac{e^{-t}}{-1}\right]_0^{\infty} = \frac{2\sigma}{\sqrt{2\pi}}\,(0+1) = \sqrt{\frac{2}{\pi}}\,\sigma$

In practice, instead of $\sqrt{\frac{2}{\pi}}$, its approximate value is mostly
used, and that is 4/5:
$\sqrt{\frac{2}{\pi}} = \sqrt{\frac{2 \times 7}{22}} = \sqrt{0.6364} \approx 0.7977 \approx 0.8 = \frac{4}{5}$ (approx.)   [taking π ≈ 22/7]
Let us now take up some problems based on properties of Normal Distribution
in the next section.

13.7 SOME PROBLEMS BASED ON PROPERTIES OF NORMAL DISTRIBUTION
Example 3: If X₁ and X₂ are two independent variates each distributed as
N(0, 1), then write the distribution of (i) X₁ + X₂, (ii) X₁ − X₂.

Solution: We know that if X₁ and X₂ are two independent normal variates such
that X₁ ~ N(μ₁, σ₁²) and X₂ ~ N(μ₂, σ₂²), then
X₁ + X₂ ~ N(μ₁ + μ₂, σ₁² + σ₂²), and
X₁ − X₂ ~ N(μ₁ − μ₂, σ₁² + σ₂²)   [See Property xiii (Sec. 13.3)]

Here, X₁ ~ N(0, 1), X₂ ~ N(0, 1).
∴ i) X₁ + X₂ ~ N(0 + 0, 1 + 1), i.e. X₁ + X₂ ~ N(0, 2), and
ii) X₁ − X₂ ~ N(0 − 0, 1 + 1), i.e. X₁ − X₂ ~ N(0, 2).

Example 4: If X ~ N  30, 25  , find the mean deviation about mean.

Solution: Here  = 30, 2 = 25   = 5.

2 2 2
 Mean deviation about mean =  = .5 = 5
  

Example 5: If X ~ N  0, 1 , what are its first four central moments?

Solution: Here  = 0, 2 = 1  σ = 1.
 first four central moments are:
1  0,  2   2  1,  3  0,  4  3 4  3.

Example 6: If X₁, X₂ are independent variates such that X₁ ~ N(40, 25) and
X₂ ~ N(60, 36), then find the mean and variance of (i) X = 2X₁ + 3X₂,
(ii) Y = 3X₁ − 2X₂.

Solution: Here X₁ ~ N(40, 25), X₂ ~ N(60, 36).
∴ Mean of X₁ = E(X₁) = 40, Variance of X₁ = Var(X₁) = 25,
Mean of X₂ = E(X₂) = 60, Variance of X₂ = Var(X₂) = 36.

Now,
(i) Mean of X = E(X) = E(2X₁ + 3X₂) = E(2X₁) + E(3X₂)
= 2E(X₁) + 3E(X₂) = 2 × 40 + 3 × 60 = 80 + 180 = 260

Var(X) = Var(2X₁ + 3X₂)
= Var(2X₁) + Var(3X₂)   [X₁ and X₂ are independent]
= 4 Var(X₁) + 9 Var(X₂)
= 4 × 25 + 9 × 36 = 100 + 324 = 424

(ii) Mean of Y = E(Y) = E(3X₁ − 2X₂) = E(3X₁) + E(−2X₂)
= 3E(X₁) + (−2)E(X₂) = 3 × 40 − 2 × 60 = 120 − 120 = 0

Var(Y) = Var(3X₁ − 2X₂)
= Var(3X₁) + Var(−2X₂)
= (3)² Var(X₁) + (−2)² Var(X₂)
= 9 × 25 + 4 × 36 = 225 + 144 = 369

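These results are easy to confirm by simulation. The sketch below (illustrative only, assuming Python with numpy is available) draws a large sample from the two distributions and checks the sample means and variances of the two linear combinations:

```python
# Illustrative simulation check of Example 6, assuming numpy is available.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(40, 5, size=1_000_000)    # S.D. 5 since the variance is 25
x2 = rng.normal(60, 6, size=1_000_000)    # S.D. 6 since the variance is 36

x = 2 * x1 + 3 * x2
y = 3 * x1 - 2 * x2
print(x.mean(), x.var())   # close to 260 and 424
print(y.mean(), y.var())   # close to 0 and 369
```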

You can now try some exercises based on the properties of the normal
distribution which you have studied in the present unit.

E3) If X₁ and X₂ are two independent normal variates with means 30, 40 and
variances 25, 35 respectively, find the mean and variance of
i) X₁ + X₂
ii) X₁ − X₂

E4) If X ~ N(50, 225), find its quartile deviation.

E5) If X₁ and X₂ are independent variates each distributed as N(50, 64), what
is the distribution of (X₁ + X₂)/2?

E6) For a normal distribution, the first moment about 5 is 30 and the fourth
moment about 35 is 768. Find the mean and standard deviation of the
distribution.

13.8 SUMMARY

The following main points have been covered in this unit:

1) A continuous random variable X is said to follow the normal distribution
with parameters μ (−∞ < μ < ∞) and σ² (> 0) if it takes on any real value and
its probability density function is given by
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad -\infty < x < \infty$

2) If X ~ N(μ, σ²), then $Z = \frac{X-\mu}{\sigma}$ is the standard normal variate.

3) The curve of the normal distribution is bell-shaped and is completely
symmetrical about x = μ.

4) For the normal distribution, Mean = Median = Mode.

5) Q₃ − Median = Median − Q₁.

6) Quartile Deviation (Q.D.) $= \frac{Q_3 - Q_1}{2}$ is approximately equal to
2/3 of the standard deviation.

7) Mean deviation is approximately equal to 4/5 of the standard deviation.

8) The central moments of the normal distribution are
μ₁ = 0, μ₂ = σ², μ₃ = 0, μ₄ = 3σ⁴.

9) The moment coefficient of skewness is zero and the curve is always mesokurtic.

10) A sum of independent normal variables is also a normal variable.

13.9 SOLUTIONS/ANSWERS

E 1) (i) Here we are given X ~ N(1/2, 4/9).
∴ in usual notations, we have
μ = 1/2, σ² = 4/9 ⇒ σ = 2/3
Now, the p.d.f. of the r.v. X is given by
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad -\infty < x < \infty$
$= \frac{1}{\frac{2}{3}\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-1/2}{2/3}\right)^2}$
$= \frac{3}{2\sqrt{2\pi}}\, e^{-\frac{9}{2}\left(\frac{2x-1}{4}\right)^2}, \quad -\infty < x < \infty$
(ii) Here we are given X ~ N(40, 16)
 in usual notations, we have
   40, 2  16 4
Now, p.d.f. of r.v. X is given by
2
1  x  
1  
2  

f(x) = e ,   x 
σ 2π
2
1  x  (  40) 
1  
2 4


= e
4 2π
2
1  x + 40 
1  
2 4 

= e ,   x  
4 2π
E 2) (i) $f(x) = \frac{1}{2\sqrt{2\pi}}\, e^{-\frac{x^2}{8}}, \quad -\infty < x < \infty$
$= \frac{1}{2\sqrt{2\pi}}\, e^{-\frac{1}{2}\cdot\frac{x^2}{4}}$
$= \frac{1}{2\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-0}{2}\right)^2}, \quad -\infty < x < \infty$   ...(1)
Comparing (1) with
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad -\infty < x < \infty$,
we get
μ = 0, σ = 2
∴ Mean = μ = 0 and variance = σ² = (2)² = 4

(ii) $f(x) = \frac{1}{2\sqrt{\pi}}\, e^{-\frac{1}{4}(x-2)^2}, \quad -\infty < x < \infty$
$= \frac{1}{\sqrt{2}\sqrt{2\pi}}\, e^{-\frac{1}{2\times 2}(x-2)^2}$
$= \frac{1}{\sqrt{2}\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-2}{\sqrt{2}}\right)^2}, \quad -\infty < x < \infty$   ...(1)
Comparing (1) with
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad -\infty < x < \infty$,
we get
μ = 2, σ = √2
∴ Mean = μ = 2 and variance = σ² = (√2)² = 2


E3) i) X1 + X2  N(1 + 2, 12 + 22)
 X1 + X2  N(30 + 40, 25 + 35)
 X1 + X2  N(70, 60)
ii) X1 – X2  N(30  40, 25 + 35)
 X1 – X2  N(  10, 60)
E4) As 2 = 225
  = 15
2 2
and hence Q.D. =   15  10
3 3
E5) We know that if X1, X2, .., Xn are independent variates each distributed as
 2 
N(, 2), then X ~ N  , 
 n 
Here X1 and X2 are independent variates each distributed as N(50, 64),
X1  X 2  64 
 their mean i.e. X i.e. ~ N  50, 
2  2 

i.e X ~ N(50, 32).

E6) We know that 1  x  A [See Unit 3 of MST-002]

where 1 is the first moment about A.

 30 = x  5  x  35  Mean = 35
Given that fourth moment about 35 is 768. But mean is 35, and hence the
fourth moment about mean = 768.
 4 = 768
 34 = 768
768
 4 =   4  3 4 
3
 4 = 256 = (4)4   = 4.

UNIT 14 AREA PROPERTY OF NORMAL DISTRIBUTION
Structure
14.1 Introduction
Objectives

14.2 Area Property of Normal Distribution


14.3 Fitting of Normal Curve using Area Property
14.4 Summary
14.5 Solutions/Answers

14.1 INTRODUCTION
In Unit 13, you studied the normal distribution and its chief characteristics.
Some characteristics, including the moments, mode, median and mean deviation
about mean, were also established there. The area property of the normal
distribution was only touched upon in the preceding unit. The area property is
a very important property, has a lot of applications, and hence needs to be
studied in detail. Therefore, in Unit 14 this property, with its diversified
applications, is discussed in detail. Fitting of the normal distribution to
observed data and computation of expected frequencies are also discussed in
Sec. 14.3 of this unit.
Objectives
After studying this unit, you would be able to:
 describe the importance of area property of normal distribution;
 explain use of the area property to solve many practical life problems; and
 fit a normal distribution to the observed data and compute the expected
frequencies using area property.

14.2 AREA PROPERTY OF NORMAL DISTRIBUTION
Let X be a normal variate having the mean  and variance 2.
Suppose we are interested in finding P    X  x1  See Fig.14.1 
2
x1 x1 1  x  
1   
Now, P    X  x1    f  x  dx   e 2  
dx
   2


Fig. 14.1: P[µ < X < x1 ]


Put $\frac{x-\mu}{\sigma} = z \Rightarrow x - \mu = \sigma z \Rightarrow dx = \sigma\,dz$.
Also, when X = μ, Z = 0, and when X = x₁, $Z = \frac{x_1 - \mu}{\sigma} = z_1$ (say).

$\therefore\ P(\mu < X < x_1) = P(0 < Z < z_1) = \int_{0}^{z_1} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}\,dz = \int_{0}^{z_1} \phi(z)\,dz$,

where $\phi(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}$ is the probability
density function of the standard normal variate, and the definite integral
$\int_{0}^{z_1} \phi(z)\,dz$ represents the area under the standard normal curve
between the ordinates at Z = 0 and Z = z₁ (Fig. 14.2).

Fig. 14.2: P[0 < Z < z1 ]

You need not evaluate the integral to find the area; a table is available to
give such areas for different values of z₁.
Here, we have transformed the integral from
$\int_{\mu}^{x_1} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx$ to $\int_{0}^{z_1} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}\,dz$,
i.e. we have transformed the normal variate X to the standard normal variate
(S.N.V.) $Z = \frac{X-\mu}{\sigma}$.
This is because the computation of
$\int_{\mu}^{x_1} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} dx$
would require the construction of separate tables for different values of μ and
σ, as the normal variate X may have any values of mean and standard deviation,
and hence different tables would be needed for different μ and σ. So, infinitely
many tables would have to be constructed, which is impossible. But the beauty of
the standard normal variate is that its mean is always 0 and its standard
deviation is always 1, as shown in Unit 13. So, whatever the values of the mean
and standard deviation of a normal variate, the mean and standard deviation on
transforming it to the standard normal variate are always 0 and 1 respectively,
and hence only one table is required.

In particular,

$P(\mu - \sigma < X < \mu + \sigma) = \int_{\mu-\sigma}^{\mu+\sigma} f(x)\,dx$   [See Fig. 14.3]

$= P(-1 < Z < 1) = \int_{-1}^{1} \phi(z)\,dz$   [for $Z = \frac{X-\mu}{\sigma}$: when X = μ − σ, Z = −1; when X = μ + σ, Z = 1]

$= 2\int_{0}^{1} \phi(z)\,dz$   [by symmetry]

= 2 × 0.34135   [from the table given in the Appendix at the end of the unit]

= 0.6827

Fig. 14.3: Area within the Range μ − σ to μ + σ


Similarly,

$P(\mu - 2\sigma < X < \mu + 2\sigma) = \int_{\mu-2\sigma}^{\mu+2\sigma} f(x)\,dx$   [See Fig. 14.4]

$= P(-2 < Z < 2) = \int_{-2}^{2} \phi(z)\,dz = 2\int_{0}^{2} \phi(z)\,dz$   [Z = −2 when X = μ − 2σ and Z = 2 when X = μ + 2σ]

= 2 × 0.4772   [from the table given in the Appendix at the end of the unit]

= 0.9544

Fig. 14.4: Area within the Range μ − 2σ to μ + 2σ


and
P(μ − 3σ < X < μ + 3σ) = P(−3 < Z < 3) = 2 P[0 < Z < 3]   [See Fig. 14.5]
= 2 × 0.49865 = 0.9973
∴ P[X lies within the range μ ± 3σ] = 0.9973
⇒ P[X lies outside the range μ ± 3σ] = 1 − 0.9973 = 0.0027,
which is very small. Hence we usually expect a normal variate to lie within the
range from μ − 3σ to μ + 3σ, though theoretically it ranges from −∞ to ∞.

Fig. 14.5: Area within the Range μ − 3σ to μ + 3σ


From the above discussion, we conclude that while solving numerical problems we
need to transform the given normal variate into the standard normal variate,
because tables of the area under every normal curve (being infinitely many)
cannot be made available, whereas the standard normal curve is unique and hence
a table of the area under this curve can be provided; it is given in the
Appendix at the end of this unit.
Example 1: If X ~ N(45, 16) and Z is the standard normal variate (S.N.V.), i.e.
$Z = \frac{X-\mu}{\sigma}$, then find the Z scores corresponding to the
following values of X:
(i) X = 45 (ii) X = 53 (iii) X = 41 (iv) X = 47
Solution: We are given X ~ N(45, 16).
∴ In usual notations, we have
μ = 45, σ² = 16 ⇒ σ = √16 = 4   [σ > 0 always]
Now Z = (X − μ)/σ = (X − 45)/4
(i) When X = 45, Z = (45 − 45)/4 = 0
(ii) When X = 53, Z = (53 − 45)/4 = 8/4 = 2
(iii) When X = 41, Z = (41 − 45)/4 = −4/4 = −1
(iv) When X = 47, Z = (47 − 45)/4 = 2/4 = 0.5
Example 2: If the r.v. X is normally distributed with mean 80 and standard
deviation 5, then find
(i) P[X > 95], (ii) P[X < 72], (iii) P[60.5 < X < 90], (iv) P[85 < X < 97], and
(v) P[64 < X < 76].

Solution: Here we are given that X is normally distributed with mean 80 and
standard deviation (S.D.) 5,
i.e. Mean = μ = 80 and variance = σ² = (S.D.)² = 25.
If Z is the S.N.V., then Z = (X − μ)/σ = (X − 80)/5.
Now

(i) For X = 95, Z = (95 − 80)/5 = 15/5 = 3
∴ P[X > 95] = P[Z > 3]   [See Fig. 14.6]
= 0.5 − P[0 < Z < 3]
= 0.5 − 0.4987   [using the table of areas under the normal curve]
= 0.0013

Fig. 14.6: Area to the Right of X = 95

(ii) For X = 72, Z = (72 − 80)/5 = −8/5 = −1.6
∴ P[X < 72] = P[Z < −1.6]   [See Fig. 14.7]
= P[Z > 1.6]   [the normal curve is symmetrical about the line Z = 0]
= 0.5 − P[0 < Z < 1.6]
= 0.5 − 0.4452   [using the table of areas under the normal curve]
= 0.0548

Fig. 14.7: Area to the Left of X = 72

(iii) For X = 60.5, Z = (60.5 − 80)/5 = −19.5/5 = −3.9
For X = 90, Z = (90 − 80)/5 = 10/5 = 2
∴ P[60.5 < X < 90] = P[−3.9 < Z < 2]   [See Fig. 14.8]
= P[−3.9 < Z < 0] + P[0 < Z < 2]
= P[0 < Z < 3.9] + P[0 < Z < 2]   [the normal curve is symmetrical about the line Z = 0]
= 0.5000 + 0.4772   [using the table of areas under the normal curve]
= 0.9772

Fig. 14.8: Area between X = 60.5 and X = 90

(iv) For X = 85, Z = (85 − 80)/5 = 5/5 = 1
For X = 97, Z = (97 − 80)/5 = 17/5 = 3.4
∴ P[85 < X < 97] = P[1 < Z < 3.4]   [See Fig. 14.9]
= P[0 < Z < 3.4] − P[0 < Z < 1]
= 0.4997 − 0.3413   [using the table of areas under the normal curve]
= 0.1584

Fig. 14.9: Area between X = 85 and X = 97

(v) For X = 64, Z = (64 − 80)/5 = −16/5 = −3.2
For X = 76, Z = (76 − 80)/5 = −4/5 = −0.8
∴ P[64 < X < 76] = P[−3.2 < Z < −0.8]   [See Fig. 14.10]
= P[0.8 < Z < 3.2]   [the normal curve is symmetrical about the line Z = 0]
= P[0 < Z < 3.2] − P[0 < Z < 0.8]
= 0.4993 − 0.2881   [using the table of areas under the normal curve]
= 0.2112


Fig. 14.10: Area between X = 64 and X= 76
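The same probabilities can be cross-checked without the table. The sketch below (illustrative only, assuming Python with scipy is available) uses the normal c.d.f. directly; small differences from the values above come from rounding in the printed table:

```python
# Illustrative cross-check of Example 2 using scipy's normal c.d.f.
from scipy.stats import norm

mu, sigma = 80, 5
print(1 - norm.cdf(95, mu, sigma))                          # (i)   P[X > 95]
print(norm.cdf(72, mu, sigma))                              # (ii)  P[X < 72]
print(norm.cdf(90, mu, sigma) - norm.cdf(60.5, mu, sigma))  # (iii) P[60.5 < X < 90]
print(norm.cdf(97, mu, sigma) - norm.cdf(85, mu, sigma))    # (iv)  P[85 < X < 97]
print(norm.cdf(76, mu, sigma) - norm.cdf(64, mu, sigma))    # (v)   P[64 < X < 76]
```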

Example 3: In a university the mean weight of 1000 male students is 60 kg and


standard deviation is 16 kg.
(a) Find the number of male students having their weights
i) less than 55 kg
ii) more than 70 kg
iii) between 45 kg and 65 kg

(b) What is the lowest weight of the 100 heaviest male students?
(Assuming that the weights are normally distributed)
Solution: Let X be a normal variate, “The weights of the male students of the
university”. Here, we are given that µ = 60 kg, σ = 16 kg, therefore,
X ~ N(60, 256).
We know that if X ~ N(μ, σ²), then the standard normal variate is given by
Z = (X − μ)/σ.
Hence, for the given information, Z = (X − 60)/16.

(a) i) For X = 55, Z = (55 − 60)/16 = −0.3125 ≈ −0.31.
Therefore,
P[X < 55] = P[Z < −0.31] = P[Z > 0.31]   [See Fig. 14.11]
= 0.5 − P[0 < Z < 0.31]   [the area on each side of Z = 0 is 0.5]
= 0.5 − 0.1217   [using the table of areas under the normal curve]
= 0.3783

Fig. 14.11: Area Representing Students having Less than 55 kg Weight

Number of male students having weight less than 55 kg = N × P[X < 55]
= 1000 × 0.3783 ≈ 378

ii) For X = 70, Z = (70 − 60)/16 = 0.625 ≈ 0.63
∴ P[X > 70] = P[Z > 0.63]   [See Fig. 14.12]
= 0.5 − P[0 < Z < 0.63]   [the area on each side of Z = 0 is 0.5]
= 0.5 − 0.2357   [using the table of areas under the normal curve]
= 0.2643

Fig. 14.12: Area Representing Students having More than 70 kg Weight

Number of male students having weight more than 70 kg = N × P[X > 70]
= 1000 × 0.2643 ≈ 264

iii) For X = 45, Z = (45 − 60)/16 = −0.9375 ≈ −0.94
For X = 65, Z = (65 − 60)/16 = 0.3125 ≈ 0.31
P[45 < X < 65] = P[−0.94 < Z < 0.31]   [See Fig. 14.13]
= P[−0.94 < Z < 0] + P[0 < Z < 0.31]
= P[0 < Z < 0.94] + P[0 < Z < 0.31]
= 0.3264 + 0.1217 = 0.4481

Fig. 14.13: Area Representing Students having Weight between 45 kg and 65 kg

∴ Number of male students having weight between 45 kg and 65 kg
= N × P[45 < X < 65]
= 1000 × 0.4481 ≈ 448

b) Let x₁ be the lowest weight amongst the 100 heaviest students.
Now, for X = x₁, Z = (x₁ − 60)/16 = z₁ (say).
P[X ≥ x₁] = 100/1000 = 0.1   [See Fig. 14.14]
⇒ P[Z ≥ z₁] = 0.1
⇒ P[0 < Z < z₁] = 0.5 − 0.1 = 0.4
⇒ z₁ = 1.28   [from the table]
⇒ x₁ = 60 + 16 × 1.28 = 60 + 20.48 = 80.48
Therefore, the lowest weight of the 100 heaviest male students is 80.48 kg.

Fig. 14.14: Area Representing the 100 Heaviest Male Students
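Part (b) is simply a percentile calculation, so it can also be done with an inverse c.d.f. The sketch below (illustrative only, assuming Python with scipy is available) finds the 90th percentile of N(60, 16²), since 10% of the students lie above it:

```python
# Illustrative sketch: the cutoff weight for the heaviest 10% of students.
from scipy.stats import norm

cutoff = norm.ppf(0.9, loc=60, scale=16)   # inverse of the normal c.d.f.
print(cutoff)                              # about 80.5 kg, matching 80.48 kg above
```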

Example 4: In a normal distribution, 10% of the items are over 125 and 35% are
under 60. Find the mean and standard deviation of the distribution.

Solution:

Fig. 14.15: Area Representing the Items under 60 and over 125

Let X ~ N(μ, σ²), where μ and σ² are unknown and are to be obtained.
Here we are given
P[X > 125] = 0.1 and P[X < 60] = 0.35.   [See Fig. 14.15]
We know that if X ~ N(μ, σ²), then Z = (X − μ)/σ.
For X = 60, Z = (60 − μ)/σ = −z₁ (say)   ... (1)   [the −ve sign is taken because the area to the left of this value is less than 0.5, so it lies to the left of Z = 0]
For X = 125, Z = (125 − μ)/σ = z₂ (say)   ... (2)
Now P[X < 60] = P[Z < −z₁] = 0.35
⇒ P[Z > z₁] = 0.35   [by symmetry of the normal curve]
⇒ 0.5 − P[0 < Z < z₁] = 0.35
⇒ P[0 < Z < z₁] = 0.15
⇒ z₁ = 0.39   [from the table of areas under the normal curve]
and P[X > 125] = P[Z > z₂] = 0.10
⇒ 0.5 − P[0 < Z < z₂] = 0.10
⇒ P[0 < Z < z₂] = 0.40
⇒ z₂ = 1.28   [from the table]
Putting the values of z₁ and z₂ in Equations (1) and (2), we get
(60 − μ)/σ = −0.39   … (3)
(125 − μ)/σ = 1.28   … (4)
(4) − (3) gives
(125 − μ)/σ − (60 − μ)/σ = 1.28 + 0.39
⇒ 65/σ = 1.67 ⇒ σ = 65/1.67 = 38.92
From Eq. (4), μ = 125 − 1.28σ = 125 − 1.28 × 38.92 = 75.18
Hence μ = mean = 75.18 and σ = S.D. = 38.92.
Example 5: Find the quartile deviation of the normal distribution having mean μ
and variance σ².

Solution: Let X ~ N(μ, σ²), and let Q₁ and Q₃ be the first and third quartiles.
Now, as Q₁, Q₂ and Q₃ divide the distribution into four equal parts, the areas
under the normal curve to the left of Q₁, between Q₁ and Q₂ (the median),
between Q₂ and Q₃, and to the right of Q₃ are all equal to 25 percent of the
total area. This is shown in Fig. 14.16.

Fig. 14.16: Area to the Left of X = Q1 and to the Right of X = Q3

i.e. here we have
P[X < Q₁] = 0.25, P[Q₁ < X < μ] = 0.25, P[μ < X < Q₃] = 0.25 and
P[X > Q₃] = 0.25.   [See Fig. 14.16]
Now, when X = Q₁, Z = (Q₁ − μ)/σ = −z₁ (say);
the value of Z corresponding to Q₁ lies to the left of the mean, which is zero
for Z, and hence the value is negative. Thus a negative value of Z has been
taken here.
∴ Q₁ − μ = −σz₁ ⇒ Q₁ = μ − σz₁
and when X = Q₃, Z = (Q₃ − μ)/σ = z₁.
Due to the symmetry of the normal curve, the values of Z corresponding to Q₁
and Q₃ are equal in magnitude because they are equidistant from the mean.
∴ Q₃ − μ = σz₁ ⇒ Q₃ = μ + σz₁
Now, as P[μ < X < Q₃] = 0.25, therefore
P[0 < Z < z₁] = 0.25
⇒ z₁ = 0.67   [from the normal table]
Now, Q.D. = (Q₃ − Q₁)/2 = [(μ + σz₁) − (μ − σz₁)]/2 = σz₁ = σ(0.67), i.e.
approximately (2/3)σ.
Now, we are sure that you can try the following exercises:

E1) If X ~ N(150, 9) and Z is the S.N.V., i.e. Z = (X − μ)/σ, then find the Z
scores corresponding to the following values of X:
(i) X = 165 (ii) X = 120

E2) Suppose X ~ N(25, 4); then find
(i) P[X < 22], (ii) P[X > 23], (iii) P[|X − 24| < 3], and (iv) P[|X − 21| > 2]

E3) Suppose X ~ N(30, 16); then find λ in each case:
(i) P[X ≥ λ] = 0.2492
(ii) P[X ≤ λ] = 0.0496

E4) Let the random variable X denote the chest measurements (in cm) of 2000
boys, where X ~ N(85, 36).
a) Find the number of boys having chest measurement
i) less than or equal to 87 cm,
ii) between 86 cm and 90 cm,
iii) more than 80 cm.
b) What is the lowest value of the chest measurement among the 100 boys having
the largest chest measurements?

E5) In a particular branch of a bank, it is noted that the waiting time of the
customers for being served by the teller is normally distributed with mean 5.5
minutes and standard deviation 0.6 minutes. Find the probability that a
customer has to wait
a) between 4.2 and 4.5 minutes, b) for less than 5.2 minutes, and c) for more
than 6.8 minutes.

E6) Suppose that the temperature of a particular city in the month of March is
normally distributed with mean 24°C and standard deviation 6°C. Find the
probability that the temperature of the city on a day of the month of March is
(a) less than 20°C (b) more than 26°C (c) between 23°C and 27°C.

14.3 FITTING OF NORMAL CURVE USING AREA PROPERTY
To fit a normal curve to observed data, we first find the mean and variance from
the given data; the mean and variance so obtained are μ and σ² respectively.
Substituting these values of μ and σ² in the probability density function
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$,
we get the normal curve fitted to the given data.
Now, the expected frequencies can be computed using either of the following two
methods:
1. Area method
2. Method of ordinates
Here we deal only with the area method. The process of finding the expected
frequencies by the area method is described in the following steps:
(i) Write the lower limit of each of the given class intervals.

(ii) Find the standard normal variate Z = (X − μ)/σ corresponding to each lower
limit. Suppose the values of the standard normal variate are obtained as
z₁, z₂, z₃, …

(iii) Find P[Z ≤ z₁], P[Z ≤ z₂], P[Z ≤ z₃], …, i.e. the areas under the normal
curve to the left of the ordinate at each value of Z obtained in step (ii),
using the table given in the Appendix at the end of the unit. Z = zᵢ may be to
the right or to the left of Z = 0.
If Z = zᵢ is to the right of Z = 0, as shown in the following figure:

Fig. 14.17: Area to the Left of Z = z i , when zi is to the Right of Z = 0

then P[Z ≤ zᵢ] is obtained as
P[Z ≤ zᵢ] = 0.5 + P[0 ≤ Z ≤ zᵢ].
But if Z = zᵢ is to the left of Z = 0 (this is the case when zᵢ is negative), as
shown in the following figure:

Fig. 14.18: Area to the Left of Z = z i , when z i is to the left of Z = 0

then
P[Z ≤ zᵢ] = 0.5 − P[zᵢ ≤ Z ≤ 0]
= 0.5 − P[0 ≤ Z ≤ −zᵢ]   [due to symmetry]
e.g. if zᵢ = −2 (say), then
P[Z ≤ −2] = 0.5 − P[−2 ≤ Z ≤ 0] = 0.5 − P[0 ≤ Z ≤ −(−2)] = 0.5 − P[0 ≤ Z ≤ 2].

(iv) Obtain the areas for the successive class intervals by subtracting the area
corresponding to every lower limit from the area corresponding to the
succeeding lower limit.
e.g. suppose 10, 20, 30 are three successive lower limits. Then the areas
corresponding to these limits are P[X ≤ 10], P[X ≤ 20], P[X ≤ 30] respectively.
Now the difference P[X ≤ 30] − P[X ≤ 20] gives the area corresponding to the
interval 20-30.

(v) Finally, on multiplying the differences obtained in step (iv), i.e. the
areas corresponding to the intervals, by N (the sum of the observed
frequencies), we get the expected frequencies.

The above procedure is explained through the following example.


Example 6: Fit a normal curve by area method to the following data and find the
expected frequencies.

X f
0-10 3
10-20 5
20-30 8
30-40 3
40-50 1

Solution: First we are to find the mean and variance of the given frequency
distribution. This you can obtain yourself as you did in Unit 2 of MST-002 and
at many other stages. So, this is left an exercise for you.
You will get the mean and variance as
μ = 22 and σ² = 111 respectively
⇒ σ = 10.54.
Hence, the equation of the fitted normal curve is
$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} = \frac{1}{10.54\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-22}{10.54}\right)^2}, \quad -\infty < x < \infty$

Expected frequencies are computed as follows:

| Class interval | Lower limit X | S.N.V. z = (X − 22)/10.54 | Area to the left, P[Z ≤ z] | Difference between successive areas | Expected frequency = 20 × col. V |
| Below 0       | −∞ | −∞    | P[Z < −∞] = 0              | 0.0183 − 0 = 0.0183      | 0.366 ≈ 0 |
| 0-10          | 0  | −2.09 | P[Z ≤ −2.09] = 0.0183      | 0.1271 − 0.0183 = 0.1088 | 2.176 ≈ 2 |
| 10-20         | 10 | −1.14 | P[Z ≤ −1.14] = 0.1271      | 0.4241 − 0.1271 = 0.2970 | 5.94 ≈ 6 |
| 20-30         | 20 | −0.19 | P[Z ≤ −0.19] = 0.4241      | 0.7764 − 0.4241 = 0.3523 | 7.05 ≈ 7 |
| 30-40         | 30 | 0.76  | P[Z ≤ 0.76] = 0.7764       | 0.9564 − 0.7764 = 0.1800 | 3.6 ≈ 4 |
| 40-50         | 40 | 1.71  | P[Z ≤ 1.71] = 0.9564       | 0.9961 − 0.9564 = 0.0397 | 0.79 ≈ 1 |
| 50 and above  | 50 | 2.66  | P[Z ≤ 2.66] = 0.9961       | –                        | – |

The areas under the normal curve shown in the fourth column of the above table
are obtained as follows:
P[Z < −∞] = 0   [there is no area to the left of −∞]
P[Z ≤ −2.09] = 0.5 − P[−2.09 ≤ Z ≤ 0]   [See Fig. 14.19]
= 0.5 − P[0 ≤ Z ≤ 2.09]   [due to symmetry]
= 0.5 − 0.4817   [from the table given at the end of the unit]
= 0.0183

Fig. 14.19: Area to the Left of Z = –2.09

Similarly,
P[Z ≤ −1.14] = 0.5 − 0.3729 = 0.1271
P[Z ≤ −0.19] = 0.5 − 0.0759 = 0.4241
Now, P[Z ≤ 0.76] = 0.5 + P[0 ≤ Z ≤ 0.76]   [See Fig. 14.20]
= 0.5 + 0.2764
= 0.7764

Fig. 14.20: Area to the Left of Z = 0.76

Similarly,
P[Z ≤ 1.71] = 0.5 + 0.4564 = 0.9564
P[Z ≤ 2.66] = 0.5 + 0.4961 = 0.9961
You can now try the following exercises:
E7) Fit a normal curve to the following distribution and find the expected
frequencies by the area method.

X:  60-65  65-70  70-75  75-80  80-85
f:    5      8     12      8      7

E8) The following table gives the frequencies of occurrence of a variate X
between certain limits. The distribution is normal. Find the mean and S.D.
of X.

X:  Less than 40   40-50   50 and more
f:       30          33        37


14.4 SUMMARY
The main points covered in this unit are:
1) The area property and its various applications have been discussed in detail.
2) The quartile deviation has also been obtained using the area property in an
example.
3) Fitting of the normal distribution using the area property and computation of
expected frequencies by the area method have been explained.

14.5 SOLUTIONS/ANSWERS
E1) We are given X ~ N(150, 9).
∴ in usual notations, we have
μ = 150, σ² = 9 ⇒ σ = 3
Now, Z = (X − μ)/σ = (X − 150)/3
(i) When X = 165, Z = (165 − 150)/3 = 15/3 = 5
(ii) When X = 120, Z = (120 − 150)/3 = −30/3 = −10
E2) Here X ~ N(25, 4)
 in usual notations, we have
Mean =   25, var iance   2  4    2
X   X  25
If Z is the S.N.V then Z  
 2
22  25 3
i) X = 22, Z   1.5
2 2
P[X  22]  P[Z  1.5] [See Fig. 14.21]
 due to symmetry of 
 P[Z  1.5]  normal curve 
 
 0.5  P[0  Z  1.5]

 Using table area 


= 0.5  0.4332  under normal curve 
 
= 0.0668

42
Area Property of
Normal Distribution

Fig. 14.21: Area to the Left of X = 22

23  25 2
ii) X = 23, Z    1
2 2
P[X  23]  P[Z  1] [See Fig.14.22]

 due tosymmetry of 
= P[Z  1]  normal curve 
 
= 0.5  P[0  Z  1]

 Using table area 


= 0.5 + 0.3413  under normal curve 
 
= 0.8413

Fig. 14.22: Area to the Right of X = 23

 x  a  b 
iii) P[| X  24 | 3]  P[ 3  X  24  3]  
  b  x  a  b 
= P[  3  24  X  3  24)
= P[21<X  27]
21  25 4
X = 21, Z    2
2 2
27  25 2
X = 27, Z   1
2 2
 P[| X  24 | 3]  P[21  X  27] See Fig.14.23

 P[  2  Z  1]
43
Continuous Probability
Distributions
= P[–2< Z< 0] + P[0 < Z < 1]
= P[0  Z  2]  P[0  Z  1]
= 0.4772 – 0.3413 = 0.1359

Fig. 14.23: Area between X = 21 and X = 27

iv) P[|X  21| 2]  P[X  21  2 or   X  21  2]

 x  a  b    x  a   b 
 
 x  a  b or   x  a   b 
= P[X  23or  X  2  21]

 y   a 
= P[X  23or X  19]  y  a 
 
19  25 6
For X=19, Z    3
2 2
23  25 2
For X=23, Z    1
2 2
 P[| X  21| 2]  P[X  23or X  19] See Fig14.24 
= P[Z  1or Z  3]

 By addition theorem for 


= P[Z  1]  P[Z  3]  mutually exclusive events 
 
 1  P[ 3  Z  1]
 1  P[1  Z  3]
= 1– [P[0  Z  3]  P[0  Z  1]]

= 1– [0.4987 – 0.3413]  From table 


= 1– 0.1574 = 0.8426.

44
Area Property of
Normal Distribution

Fig. 14.24: Area between X = 19 and X = 23

E3) Here X ~ N(30, 16)


 in usual notations, we have
Mean =   30, variance  2  16    4
X   X  30
If Z is S.N.V then Z  
 4
  30
i) X  , Z   z1 (say) ...(1)
4
Now P[X   ]  0.2492 See Fig.14.25
 P[Z  z1 ]  0.2492  0.5  P 0  Z  z1   0.2492

 P  0  Z  z1   0.2508

 z1  0.67 [From the table]

Putting z1  0.67 in (1), we get

  30
= 0.67
4
  30  2.68
  30  2.68= 32.68

Fig. 14.25: z1 Corresponding to 24.92 % Area to its Right

  30
ii) For X  , Z    z 2 (say) … (2)
4
Now P[X   ]  0.0496

 P[Z   z 2 ]  0.0496 [See Fig.14.26]

 P[Z  z 2 ]  0.0496 [Due to symmetry]


45
Continuous Probability
Distributions

Fig. 14.26: z2 Corresponding to 4.96 % Area to its Right

 0.5  P[0  Z  z 2 ]  0.0496

 P[0  Z  z 2 ]  0.5  0.0496  0.4504

 z 2  1.65 [From the table]

Putting  z 2  1.65 in (2), we get

  30
 1.65
4
   30  1.65  4
   30  6.60
   30  6.6 = 23.4
E4) We are given X ~ N(85, 36), N = 2000
i.e.   85cm, 2  36cm, N  2000
x
If X ~ N(µ, σ2) and Z  then we know that Z ~ N(0, 1)

87  85 2
a) i) For X = 87, Z   0.33
6 6
Now P[X < 87] = P [Z < 0.33] [See Fig. 14.27]
= 0.5 + P [0 < Z < 0.33]

Fig. 14.27: Area to the Left of X = 87 or Z = 0.33

= 0.5 + P [0 < Z < 0.33]


 From the table of areas 
= 0.5 + 0.1293  under normal curve 
 
= 0.6293
46
Therefore, number of boys having chests measurement  87 Area Property of
Normal Distribution
 N.P[X  87]
= 2000  0.6293 = 1259
86  85 1
ii) For X = 86, Z   0.17
6 6
90  85 5
For X = 90, Z   0.83
6 6

Fig. 14.28: Area between X= 86 and X= 90

 P 86  X  90  P 0.17  Z  0.83 See Fig.14.28


= P  0  Z  0.83  P  0  Z  0.17 
 From the table of areas 
= 0.2967 – 0.0675  under normal curve 
 
= 0.2292
 number of boys having chests measurement between 86 cm
and 90 cm
= N. P [86  x  90 ]
= 2000  0.2292 = 458
80  85 5
iii) For X = 80, Z   0.83
6 6
P [X > 80] = P [Z > – 0.83] [See Fig. 14.29]
= P [Z < 0.83]
= 0.5 + P [0 < Z < 0.83]
 From the table of areas 
= 0.5 + 0.2967  under normal curve 
 
= 0.7967
 number of boys having chest measurement more than 80 cm
= N.P[X > 80]
= 2000  0.7967
= 1593

47
Continuous Probability
Distributions

Fig. 14.29: Area to the Right of X = 80 or Z =  0.83

b) Let x1 be the lowest chest measurement amongst 100 boys having the
largest chest measurements.
x  85
Now, for X x1 , Z  1  z1 (say) .
6
100
P[X  x1 ]   0.05
2000
 P  Z  z1   0.05 See Fig.14.30 

Fig. 14.30: Area Representing the 100 Boys having Largest Chest Measurements

 P[0  Z  z1 ]  0.5  0.05  0.45

 z1 =1.64 [From Table]

 x1 = 85 + 6  1.64  85  9.84 = 94.84.

Therefore, the lowest value of the chest measurement among the 100
boys having the largest chest measurement is 94.84 cm.
E5) We are given
  5.5 minutes,  = 0.6 minutes
X
If X ~ N(,  2 ) and Z  then we know that Z ~ N (0, 1)

4.2  5.5 1.3 13
a) For X = 4.2, Z    2.17
0.6 0.6 6
4.5  5.5 1.0 10 5
For X = 4.5, Z     1.67
0.6 0.6 6 3

48
Area Property of
Normal Distribution

Fig. 14.31: Area Representing Probability of Waiting Time between 4.2 and 4.5 Minutes

P[4.2  x  4.5]  P  2.17  Z  1.67  See Fig.14.31


= P [1.67 < Z < 2.17]
= P [0 < Z < 2.17] – P [0 < Z < 1.67]
= 0.4850 – 0.4525
= 0.0325
Therefore, probability that customer has to wait between 4.2 min and
4.5 min = 0.0325
5.2  5.5 0.3 3 1
b) For X  5.2, Z      0.5
0.6 0.6 6 2

Fig. 14.32: Area Representing Probability of Waiting Time Less than 5.2 Minutes

P[X < 5.2] = P [Z < –0.5] [See Fig 14.32]


= P [Z > 0.5]
= 0.5 – P [0 < Z < 0.5]
 From the table of areas 
= 0.5 – 0.1915  under normal curve 
 
= 0.3085
Therefore, probability that customer has to work for less than 5.2 min
= 0.3085
6.8  5.5 1.3 13
c) For X  6.8, Z    2.17
0.6 0.6 6

49
Continuous Probability
Distributions

Fig. 14.33: Area Representing Probability of Waiting Time Greater than 6.8 Minutes

P[X > 6.8] = P [Z > 2.17] [See Fig 14.33]


= 0.5 – P [0 < Z < 2.17]
= 0.5 – 0.4850 = 0.0150
Therefore, probability that customer has to wait for more than
6.8 min = 0.0150
E6) Let the random variable X denotes the temperature of the city in the
month of March. Then we are given
X ~ N(, 2 ), where   24  C,   6  C

X 
We know that if X ~ N(, 2 ), and Z  then Z ~ N (0, 1)

20  24 4 2
a) For X = 20, Z    0.67
6 6 3


Fig. 14.34: Area Representing Probability of Temperature Less than 20 C

P[X < 20] = P[Z < –0.67] [See Fig. 14.34]


= P [Z > 0.67]
= 0.5 – P[0 < Z < 0.67]
= 0.5 – 0.2486 = 0.2514
Therefore, probability that temperature of the city is less than 20  C is
0.2514
26  24 2 1
b) For X= 26, Z    0.33
6 6 3

50
Area Property of
Normal Distribution


Fig. 14.35: Area Representing Probability of Temperature Greater than 26 C
Since, P[X > 26] = P [Z > 0.33] [See Fig. 14.35]
= 0.5 – P[0 < Z < 0.33]
 From the table of areas 
= 0.5 – 0.1293  
 under normal curve 
= 0.3707
Therefore, probability that temperature of the city is more than
26  C is 0.3707
23  24 1
c) For X= 23, Z   0.17
6 6
27  24 3 1
For X= 27, Z     0.5
6 6 2

Fig. 14.36: Area Representing Probability of Temperature


 
between 23 C and 27 C

P[23 < X < 27] = P[–0.17 < Z < 0.5] [See Fig. 14.36]
= P [–0.17 < Z < 0] + P [0 < Z < 0.5]
= P[0 < Z < 0.17] + P[0 < Z < 0.5]
 From the table of areas 
= 0.0675 + 0.1915  under Normal Curve 
 
= 0.2590
Therefore, probability that temperature of the city is between
23  C and 27  C is 0.2590

51
Continuous Probability
Distributions E7) Mean ()  73, variance( 2 )  39.75
and hence S.D. ( )  6.3
 The equation of the normal curve fitted to the given data is
2
1  x  73 
1   
f(x)= e 2 6.3 
,   x  
(6.3) 2
Using area method,
The expected frequencies are obtained as follows:
Class Lower X Area under Difference Expected
interval limit Z= normal curve between frequency

X X  73 to the left of successive areas 40  col. V
 z
6.3
Below   0 0.0197 – 0 0.8 1
60
= 0.0197

60 – 65 60 –2.06 0.5 – 0.4803 0.1020 – 0.0197 3.3 3


= 0.0197
= 0.0823

65 – 70 65 –1.27 0.5 – 0.3980 0.3156 – 0.1020 8.5 9


= 0.1020
= 0.2136

70 – 75 70 –0.48 0.5 – 0.1844 0.6255 – 0.3156 12.4 12


= 0.3156
= 0.3099

75 – 80 75 + 0.32 0.5 + 0.1255 0.8655 – 0.6255 9.6 10


= 0.6255
= 0.2400

80 – 85 80 1.11 0.5 + 0.3655 0.9713 – 0.8655 4.2 4


= 0.8655
= 1.1058

85 and 85 1.90 0.5 + 0.4713


above = 0.9713

30
E8) P[X  40]   0.3,
100
33
P[40  X  50]   0.33, and
100

52
37 Area Property of
P[X  50]   0.37, Normal Distribution
100
Now, Let X ~ N(,  2 ),
 Standard normal variate is
X 
Z=

 It is taken as  ve as area to 
40  
When X = 40, Z   z1 , (say)  the left of this value is 30% 

as probabilityis 0.3 

 It is taken as +ve as 
50   area to the right of this 
When X = 50, Z =  z2 (say)  

 valueis given as37% 

Now,
P[X  40]  P[Z  z1 ]  0.3

 0.5  P[  z1  Z  0]  0.3 [See Fig14.37]

 0.5  P[0  Z  z1 ]  0.3 [Due to symmetry]

 P[0  Z  z1 ]  0.2
From table at the end of this unit,

Fig. 14.37: -z1 Corresponding to the 30% Area to its Left

The value of Z corresponding to probability/area is

 As values of Z are 0.52 and 0.53 


z1  0.525 corresponding to the proabability 
 
0.1985 

P[X  50]  0.37

 P[Z  z 2 ]  0.37 See Fig.14.38


 0.5  P[0  Z  z 2 ]  0.37

 P[0  Z  z 2 ]  0.13

z 2 =0.33(approx.) [From the table]

53
Continuous Probability
Distributions

Fig. 14.38: z2 Corresponding to the 37% Area to its Right

40   50  
  0.525 and  0.33
 
 40    0.525 and 50    0.33
Solving these equations for  and , we have
  11.7 and   46.14

54
APPENDIX Area Property of
Normal Distribution
AREAS UNDER NORMAL CURVE
The standard normal probability curve is given by
1  1 
(z)= exp   z 2  ,   z  
2  2 
The following table gives probability corresponding to the shaded area as shown
in the following figure i.e. P[0  Z  z] for different values of z

TABLE OF AREAS

z 0 1 2 3 4 5 6 7 8 9

0.0 .0000 .0040 .0080 .0120 .0160 .0199 .0239 .0279 .0319 .0359
0.1 .0398 .0438 .0478 .0517 .0557 .0596 .0636 .0675 .0714 .0759

0.2 .0793 .0832 .0871 .0910 .0948 .0987 .1026 .1064 .1103 .1141
0.3 .1179 .1217 .1255 .1293 .1331 .1368 .1406 .1443 .1480 .1517
0.4 .1554 .1591 .1628 .1664 .1700 .1736 .1772 .1808 .1844 .1879

0.5 .1915 .1950 .1985 .2019 .2054 .2088 .2123 .2157 .2190 .2224
0.6 .2257 .2291 .2324 .2357 .2389 .2422 .2454 .2486 .2517 .2549
0.7 .2580 .2611 .2642 .2673 .2703 .2734 .2764 .2794 .2823 .2852
0.8 .2881 .2910 .2939 .2967 .2005 .3023 .3051 .3078 .3106 .3133
0.9 .3159 .3186 .3212 .3238 .3264 .3289 .3315 .3340 .3365 .3389

1.0 .3413 .3438 .3461 .3485 .3508 .3531 .3554 .3577 .3599 .3621
1.1 .3643 .3665 .3686 .3708 .3729 .3749 .3770 .3790 .3810 .3830
1.2 .3849 .3869 .3888 .3907 .3925 .3944 .3962 .3980 .3997 .4015
1.3 .4032 .4049 .4066 .4082 .4099 .4115 .4131 .4147 .4162 .4177
1.4 .4192 .4207 .4222 .4236 .4251 .4265 .4279 .4292 .4306 .4319


1.5 .4332 .4345 .4357 .4370 .4382 .4394 .4406 .4418 .4429 .4441
1.6 .4452 .4463 .4474 .4484 .4495 .4505 .4515 .4525 .4535 .4545
1.7 .4554 .4564 .4573 .4582 .4591 .4599 .4608 .4616 .4625 .4633
1.8 .4641 .4649 .4656 .4664 .4671 .4678 .4686 .4693 .4699 .4706
1.9 .4713 .4719 .4726 .4732 .4738 .4744 .4750 .4756 .4761 .4767

2.0 .4772 .4778 .4783 .4788 .4793 .4798 .4803 .4808 .4812 .4817
2.1 .4821 .4826 .4830 .4834 .4838 .4842 .4846 .4850 .4854 .4857
2.2 .4861 .4864 .4868 .4871 .4875 .4878 .4881 .4884 .4887 .4890
2.3 .4893 .4896 .4898 .4901 .4904 .4906 .4909 .4911 .4913 .4916
2.4 .4918 .4920 .4922 .4925 .4927 .4929 .4931 .4932 .4934 .4936

2.5 .4938 .4940 .4941 .4943 .4945 .4946 .4948 .4949 .4951 .4952
2.6 .4953 .4955 .4956 .4957 .4959 .4960 .4961 .4962 .4963 .4964
2.7 .4965 .4966 .4967 .4968 .4969 .4970 .4971 .4972 .4973 .4974
2.8 .4974 .4975 .4976 .4977 .4977 .4978 .4979 .4979 .4980 .4981

2.9 .4981 .4982 .4982 .4983 .4984 .4984 .4985 .4985 .4986 .4986

3.0 .4987 .4987 .4987 .4988 .4988 .4989 .4989 .4989 .4990 .4990

3.1 .4990 .4991 .4991 .4991 .4992 .4992 .4992 .4992 .4993 .4993
3.2 .4993 .4993 .4994 .4994 .4994 .4994 .4994 .4995 .4995 .4995
3.3 .4995 .4995 .4995 .4996 .4996 .4996 .4996 .4996 .4996 .4997
3.4 .4997 .4997 .4997 .4997 .4997 .4997 .4997 .4997 .4997 .4998

3.5 .4998 .4998 .4998 .4998 .4998 .4998 .4998 .4998 .4998 .4998
3.6 .4998 .4998 .4999 .4999 .4999 .4999 .4999 .4999 .4999 .4999
3.7 .4999 .4999 .4999 .4999 .4999 .4999 .4999 .4999 .4999 .4999
3.9 .5000 .5000 .5000 .5000 .5000 .5000 .5000 .5000 .5000 .5000
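Each entry of the above table is simply Φ(z) − 0.5, where Φ is the standard normal c.d.f. A short Python sketch (scipy assumed; an illustration, not part of the unit) that reproduces the tabulated areas is:

from scipy.stats import norm

def area_0_to_z(z):
    # P[0 <= Z <= z] for a standard normal variate, as tabulated above
    return norm.cdf(z) - 0.5

print(round(area_0_to_z(0.50), 4))   # 0.1915
print(round(area_0_to_z(1.96), 4))   # 0.4750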

UNIT 15 CONTINUOUS UNIFORM AND EXPONENTIAL DISTRIBUTIONS
Structure
15.1 Introduction
Objectives

15.2 Continuous Uniform Distribution


15.3 Exponential Distribution
15.4 Summary
15.5 Solutions/Answers

15.1 INTRODUCTION
In Units 13 and 14, you have studied normal distribution with its various
properties and applications. Continuing our study on continuous distributions,
we, in this unit, discuss continuous uniform and exponential distributions. It
may be seen that discrete uniform and geometric distributions studied in Unit
11 and Unit 12 are the discrete analogs of continuous uniform and exponential
distributions. Like the geometric distribution, the exponential distribution also has the
memoryless property. You have studied that the geometric distribution is the only
discrete distribution which has the memoryless property; similarly, the exponential
distribution is the only continuous distribution having this property.
The present unit discusses continuous uniform distribution in Sec. 15.2 and
exponential distribution in Sec. 15.3.
Objectives
After studying the unit, you would be able to:
 define continuous uniform and exponential distributions;
 state the properties of these distributions;
 explain the memoryless property of exponential distribution; and
 solve various problems on the situations related to these distributions.

15.2 CONTINUOUS UNIFORM DISTRIBUTION


The uniform (or rectangular) distribution is a very simple distribution. It provides a
useful model for a few random phenomena; for instance, if one picks a random
number from the interval [0, 1], one is thinking of the value of a uniformly
distributed random variable over the interval [0, 1].
Definition: A random variable X is said to follow a continuous uniform
(rectangular) distribution over an interval (a, b) if its probability density
function is given by
f(x) = 1/(b − a)  for a < x < b
     = 0,         otherwise

The distribution is called uniform distribution since it assumes a constant (uniform)
value for all x in (a, b). If we draw the graph of y = f(x) over the x-axis between the
ordinates x = a and x = b, it describes a rectangle of height 1/(b − a), as shown in Fig. 15.1.

Fig. 15.1: Graph of the uniform density function

A uniform variate X on the interval (a, b) is written as X ~ U[a, b]


Cumulative Distribution Function
The cumulative distribution function of the uniform random variate over the
interval (a, b) is given by:
For x ≤ a,  F(x) = P[X ≤ x] = ∫_{−∞}^{x} 0 dx = 0
For a < x < b,
F(x) = P[X ≤ x] = ∫_a^x f(x) dx = ∫_a^x [1/(b − a)] dx = [x]_a^x/(b − a) = (x − a)/(b − a).
For x ≥ b,
F(x) = P[X ≤ x] = ∫_{−∞}^{x} f(x) dx
= ∫_{−∞}^{a} f(x) dx + ∫_a^b f(x) dx + ∫_b^x f(x) dx
= ∫_{−∞}^{a} 0 dx + ∫_a^b [1/(b − a)] dx + ∫_b^x 0 dx
= 0 + [x]_a^b/(b − a) + 0 = (b − a)/(b − a) = 1.
So,
F(x) = 0                  for x ≤ a
     = (x − a)/(b − a)    for a < x < b
     = 1                  for x ≥ b
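The c.d.f. derived above can be checked numerically. The sketch below (Python with scipy assumed; a = 2, b = 5 are arbitrary illustrative values) compares it with scipy's built-in uniform distribution, which is parametrised through loc = a and scale = b − a:

from scipy.stats import uniform

a, b = 2.0, 5.0
X = uniform(loc=a, scale=b - a)      # U(a, b)

def F(x):
    # c.d.f. of U(a, b) as derived above
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

for x in (1.0, 3.5, 6.0):
    print(x, F(x), X.cdf(x))         # the two values agree for every x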

On plotting its graph, we have

Fig. 15.2: Graph of the distribution function F(x)

Mean and Variance of Uniform Distribution


Mean = 1st order moment about origin (μ1′)
= ∫_a^b x f(x) dx = ∫_a^b x·[1/(b − a)] dx
= [1/(b − a)]·[x²/2]_a^b = [1/(b − a)]·[(b² − a²)/2]
= (b − a)(b + a)/[2(b − a)] = (a + b)/2
Second order moment about origin (μ2′)
= ∫_a^b x² f(x) dx = ∫_a^b x²·[1/(b − a)] dx = [1/(b − a)]·[x³/3]_a^b = (b³ − a³)/[3(b − a)]
= (b − a)(b² + ab + a²)/[3(b − a)]   [∵ x³ − y³ = (x − y)(x² + xy + y²)]
= (a² + ab + b²)/3
∴ Variance of X = E(X²) − [E(X)]² = (a² + ab + b²)/3 − [(a + b)/2]²
= [4(a² + ab + b²) − 3(a + b)²]/12
= [4a² + 4ab + 4b² − 3a² − 3b² − 6ab]/12
= (a² − 2ab + b²)/12 = (b − a)²/12.
So, Mean = (a + b)/2 and Variance = (b − a)²/12.
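These two formulae can be verified numerically; the following Python sketch (scipy assumed, with the illustrative choice a = 2, b = 8) does so:

from scipy.stats import uniform

a, b = 2.0, 8.0
X = uniform(loc=a, scale=b - a)
print(X.mean(), (a + b) / 2)           # both are 5.0
print(X.var(), (b - a) ** 2 / 12)      # both are 3.0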
Let us now take up some examples on continuous uniform distribution.
Example 1: If X is uniformly distributed with mean 2 and variance 12, find
P[X < 3].
Solution: Let X ~ U[a, b]
∴ probability density function of X is
f(x) = 1/(b − a),  a < x < b.
Now as Mean = 2
⇒ (a + b)/2 = 2
⇒ a + b = 4        … (1)
Variance = 12
⇒ (b − a)²/12 = 12
⇒ (b − a)² = 144
⇒ b − a = ±12
⇒ b − a = 12       … (2)   [b − a = −12, being negative, is rejected as b should be greater than a, so b − a should be positive]
Adding (1) and (2), we have
2b = 16
⇒ b = 8 and hence a = 4 − 8 = −4
∴ f(x) = 1/(b − a) = 1/(8 − (−4)) = 1/12  for −4 < x < 8.
Thus, the desired probability = P[X < 3] = ∫_{−4}^{3} (1/12) dx = (1/12)[x]_{−4}^{3}
= (1/12)[3 − (−4)] = 7/12.
Example 2: Calculate the coefficient of variation for the rectangular
distribution in (0, 12).
Solution: Here a = 0, b = 12.
∴ Mean = (a + b)/2 = (0 + 12)/2 = 6,
Variance = (b − a)²/12 = (12 − 0)²/12 = 144/12 = 12.
∴ S.D. = √12
Thus, the coefficient of variation
= (S.D./Mean) × 100   [Also see Unit 2 of MST-002]
= (√12/6) × 100 = 57.74%
Example 3: Metro trains are scheduled every 5 minutes at a certain station. A
person comes to the station at a random time. Let the random variable X count
the number of minutes he/she has to wait for the next train. Assume X has a
uniform distribution over the interval (0, 5). Find the probability that he/she
has to wait at least 3 minutes for the train.
Solution: As X follows uniform distribution over the interval (0, 5),
∴ probability density function of X is
f(x) = 1/(b − a) = 1/(5 − 0) = 1/5,  0 < x < 5
Thus, the desired probability
P[X ≥ 3] = ∫_3^5 f(x) dx = ∫_3^5 (1/5) dx = (1/5)∫_3^5 1 dx
= (1/5)[x]_3^5 = (1/5)(5 − 3) = 2/5 = 0.4
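The same probability may be obtained with a line of Python (scipy assumed; an illustration only):

from scipy.stats import uniform

X = uniform(loc=0, scale=5)     # waiting time ~ U(0, 5)
print(X.sf(3))                  # P[X >= 3] = 0.4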

Now, you can try the following exercises.


E1) Suppose that X is uniformly distributed over (−a, a). Determine 'a' so that
i) P[X > 4] = 1/3
ii) P[X < 1] = 3/4
iii) P[|X| < 2] = P[|X| > 2]

E2) A random variable X has a uniform distribution over (−2, 2). Find k for which
P[X > k] = 1/2.

Now, let us discuss exponential distribution in the next section.

15.3 EXPONENTIAL DISTRIBUTION


The exponential distribution finds applications in situations related to the lifetime of
equipment or the service time at a counter in a queue. So, the exponential distribution
serves as a good model whenever there is a waiting time involved for a specific event
to occur, e.g. the waiting time for a failure to occur in a machine. The exponential
distribution is defined as follows:
Definition: A random variable X is said to follow exponential distribution
with parameter θ > 0, if it takes any non-negative real value and its probability
density function is given by
f(x) = θe^(−θx)  for x ≥ 0
     = 0,        elsewhere

Its cumulative distribution function (c.d.f.) is thus given by
F(x) = P[X ≤ x] = ∫_0^x f(x) dx = ∫_0^x θe^(−θx) dx
= θ[−e^(−θx)/θ]_0^x = −[e^(−θx) − e⁰] = −[e^(−θx) − 1] = 1 − e^(−θx).
So, F(x) = 1 − e^(−θx)  for x ≥ 0
         = 0,            elsewhere

Mean and Variance of Exponential Distribution


 
Mean = E(X) = ∫_0^∞ x f(x) dx = ∫_0^∞ x·θe^(−θx) dx = θ∫_0^∞ x e^(−θx) dx
= θ{[x·(−e^(−θx)/θ)]_0^∞ − ∫_0^∞ (−e^(−θx)/θ)·1 dx}   [Integrating by parts]
In case of integration of a product of two different types of functions, we do
integration by parts, i.e. the following formula is applied:
∫(First function)(Second function) dx
= (First function as it is)(Integral of second) − ∫(Differential coefficient of first)(Integral of second) dx
∴ Mean = θ{(0 − 0) + (1/θ)∫_0^∞ e^(−θx) dx}
= θ·(1/θ)[−e^(−θx)/θ]_0^∞ = θ·(1/θ)·(1/θ) = 1/θ.
Now, E(X²) = ∫_0^∞ x² f(x) dx = ∫_0^∞ x²·θe^(−θx) dx = θ∫_0^∞ x² e^(−θx) dx
= θ{[x²·(−e^(−θx)/θ)]_0^∞ − ∫_0^∞ 2x·(−e^(−θx)/θ) dx}   [Integrating by parts]
= θ{(0 − 0) + (2/θ)∫_0^∞ x e^(−θx) dx}
= 2∫_0^∞ x e^(−θx) dx = (2/θ)∫_0^∞ x·θe^(−θx) dx = (2/θ)·E(X)
= (2/θ)·(1/θ)   [E(X) is the mean and has already been obtained]
= 2/θ²
Thus, Variance = E(X²) − [E(X)]² = 2/θ² − 1/θ² = 1/θ²
So, Mean = 1/θ and Variance = 1/θ².
Remark 1: Variance = 1/θ² = (1/θ)·(1/θ) = Mean·(1/θ)  ⇒  Mean = θ × Variance.
So,
Value of θ      Implies
θ < 1           Mean < Variance
θ = 1           Mean = Variance
θ > 1           Mean > Variance
Hence, for exponential distribution,
Mean > or = or < Variance according to whether θ > or = or < 1.
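A quick numerical check of these formulae is possible with scipy, which parametrises the exponential distribution through scale = 1/θ. The value θ = 2.5 below is only an illustrative assumption:

from scipy.stats import expon

theta = 2.5
X = expon(scale=1 / theta)
print(X.mean(), 1 / theta)          # both are 0.4
print(X.var(), 1 / theta ** 2)      # both are 0.16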

Memoryless Property of Exponential Distribution
Now, let us discuss a very important property of exponential distribution and
that is the memoryless (or forgetfulness) property. Like geometric distribution
in the family of discrete distributions, exponential distribution is the only
distribution in the family of continuous distributions which has memoryless
property. The memoryless property of exponential distribution is stated as:
If X has an exponential distribution, then for every constant a ≥ 0, one has
P[X ≤ x + a | X ≥ a] = P[X ≤ x] for all x, i.e. the conditional probability of waiting up
to the time 'x + a', given that the waiting time exceeds 'a', is the same as the
probability of waiting up to the time 'x'. To make you understand the above concept
clearly, let us take the following example: Suppose you purchase a TV set, assuming
that its lifetime follows exponential distribution, for which the expected lifetime has
been told to you as 10 years (say). Now, if you use this TV set for, say, 4 years and
then ask a TV mechanic, without informing him/her that you had purchased it 4 years
ago, about its expected lifetime, he/she, on finding the TV set as good as new, will say
that its expected lifetime is 10 years.
So, here, in the above example, the 4-year period has been forgotten, in a way, and for
this example:
P[lifetime up to 10 years] = P[lifetime up to 14 years | lifetime exceeds 4 years]
i.e. P[X ≤ 10] = P[X ≤ 14 | X ≥ 4]
or P[X ≤ 10] = P[X ≤ 10 + 4 | X ≥ 4]
Here a = 4 and x = 10.
Let us now prove the memoryless property of exponential distribution.
Proof: P[X ≤ x + a | X ≥ a] = P[(X ≤ x + a) ∩ (X ≥ a)]/P[X ≥ a]   [By conditional probability]
where
P[(X ≤ x + a) ∩ (X ≥ a)] = P[a ≤ X ≤ x + a]
= ∫_a^(x+a) f(x) dx = ∫_a^(x+a) θe^(−θx) dx
= θ[−e^(−θx)/θ]_a^(x+a) = −[e^(−θ(x+a)) − e^(−θa)]
= −[e^(−θx)·e^(−θa) − e^(−θa)] = e^(−θa)[1 − e^(−θx)], and
P[X ≥ a] = ∫_a^∞ f(x) dx = ∫_a^∞ θe^(−θx) dx = θ[−e^(−θx)/θ]_a^∞ = −[0 − e^(−θa)] = e^(−θa)
∴ P[X ≤ x + a | X ≥ a] = e^(−θa)[1 − e^(−θx)]/e^(−θa) = 1 − e^(−θx)
Also, P[X ≤ x] = ∫_0^x θe^(−θx) dx = 1 − e^(−θx)   [On simplification]
Thus,
P[X ≤ x + a | X ≥ a] = P[X ≤ x].
Hence proved
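The result just proved can also be illustrated numerically. In the Python sketch below (scipy assumed), the values θ = 0.5, a = 4 and x = 10 are arbitrary; any non-negative choices give the same agreement:

from scipy.stats import expon

theta, a, x = 0.5, 4.0, 10.0
X = expon(scale=1 / theta)

lhs = (X.cdf(x + a) - X.cdf(a)) / X.sf(a)   # P[X <= x + a | X >= a]
rhs = X.cdf(x)                              # P[X <= x]
print(round(lhs, 6), round(rhs, 6))         # equal, as the memoryless property asserts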
Example 4: Show that for the exponential distribution:
f(x) = Ae^(−x), 0 ≤ x < ∞, mean and variance are equal.
Solution: As f(x) is a probability density function,
∫_0^∞ f(x) dx = 1
⇒ ∫_0^∞ Ae^(−x) dx = 1  ⇒  A[e^(−x)/(−1)]_0^∞ = 1
⇒ −A[0 − 1] = 1  ⇒  A = 1
∴ f(x) = e^(−x)
Now, comparing it with the exponential distribution
f(x) = θe^(−θx), we have
θ = 1
Hence, mean = 1/θ = 1/1 = 1,
and variance = 1/θ² = 1/1 = 1.
So, the mean and variance are equal for the given exponential distribution.
Example 5: Telephone calls arrive at a switchboard following an exponential
distribution with parameter θ = 12 per hour. If we are at the switchboard, what
is the probability that the waiting time for a call is
i) at least 15 minutes
ii) not more than 10 minutes.
Solution: Let X be the waiting time (in hours) for a call.
∴ f(x) = θe^(−θx), x ≥ 0
∴ F(x) = P[X ≤ x] = 1 − e^(−θx)   [c.d.f. of exponential distribution]
= 1 − e^(−12x)   … (1)   [θ = 12]
Now,
i) P[waiting time is at least 15 minutes] = P[waiting time is at least 1/4 hour]
= P[X ≥ 1/4] = 1 − P[X < 1/4]
= 1 − [1 − e^(−12 × 1/4)]   [Using (1) above]
= e^(−3)
= 0.0498   [See the table given at the end of Unit 10]
ii) P[waiting time not more than 10 minutes]
= P[waiting time not more than 1/6 hour]
= P[X ≤ 1/6] = 1 − e^(−12 × 1/6)
= 1 − e^(−2) = 1 − 0.1353 = 0.8647
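Both probabilities can be confirmed with scipy (an illustration only; not part of the unit):

from scipy.stats import expon

X = expon(scale=1 / 12)            # waiting time in hours, theta = 12 per hour
print(round(X.sf(15 / 60), 4))     # P[X >= 1/4 hour]  ≈ 0.0498
print(round(X.cdf(10 / 60), 4))    # P[X <= 1/6 hour]  ≈ 0.8647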

Now, we are sure that you can try the following exercises.
E3) What are the mean and variance of the exponential distribution given by:
f(x) = 3e^(−3x), x ≥ 0

E4) Obtain the value of k > 0 for which the function given by
f(x) = 2e^(−kx), x ≥ 0
follows an exponential distribution.

E5) Suppose that accidents occur in a factory at a rate of θ = 1/20 per
working day. Suppose in the factory six days (from Monday to
Saturday) are working. Suppose we begin observing the occurrence of
accidents at the starting of work on Monday. Let X be the number of
days until the first accident occurs. Find the probability that
i) first week is accident free
ii) first accident occurs any time from starting of working day on
Tuesday in second week till end of working day on Wednesday in
the same week.

We now conclude this unit by giving a summary of what we have covered in it.
15.4 SUMMARY
Following main points have been covered in this unit.
1) A random variable X is said to follow a continuous uniform (rectangular)
   distribution over an interval (a, b) if its probability density function is given by
   f(x) = 1/(b − a)  for a < x < b
        = 0,         otherwise
2) For continuous uniform distribution, Mean = (a + b)/2 and Variance = (b − a)²/12.
3) A random variable X is said to follow exponential distribution with parameter
   θ > 0, if it takes any non-negative real value and its probability density function
   is given by
   f(x) = θe^(−θx)  for x ≥ 0
        = 0,        elsewhere
4) For exponential distribution, Mean = 1/θ and Variance = 1/θ².
5) Mean > or = or < Variance according to whether θ > or = or < 1.
6) Exponential distribution is the only continuous distribution which has
   the memoryless property given by:
   P[X ≤ x + a | X ≥ a] = P[X ≤ x].

15.5 SOLUTIONS/ANSWERS
E1) As X ~ U[−a, a],
∴ probability density function of X is
f(x) = 1/(a − (−a)) = 1/(a + a) = 1/(2a),  −a < x < a.
i) Given that P[X > 4] = 1/3
⇒ ∫_4^a [1/(2a)] dx = 1/3
⇒ (1/(2a))[x]_4^a = 1/3
⇒ (a − 4)/(2a) = 1/3
⇒ 3a − 12 = 2a
⇒ a = 12.
ii) P[X < 1] = 3/4
⇒ ∫_{−a}^{1} [1/(2a)] dx = 3/4
⇒ (1/(2a))[x]_{−a}^{1} = 3/4
⇒ (1/(2a))(1 + a) = 3/4
⇒ 1 + a = (3/2)a
⇒ 2 + 2a = 3a
⇒ a = 2
iii) P[|X| < 2] = P[|X| > 2]
⇒ P[−2 < X < 2] = P[X < −2 or X > 2]   [∵ |X| < 2 ⇔ −2 < X < 2, and |X| > 2 ⇔ X > 2 or X < −2]
⇒ P[−2 < X < 2] = P[X < −2] + P[X > 2]   [By the addition law of probability for mutually exclusive events]
⇒ ∫_{−2}^{2} [1/(2a)] dx = ∫_{−a}^{−2} [1/(2a)] dx + ∫_{2}^{a} [1/(2a)] dx
⇒ (1/(2a))·4 = (1/(2a))(−2 + a) + (1/(2a))(a − 2)
⇒ 4 = (a − 2) + (a − 2)
⇒ 4 = 2a − 4
⇒ 2a = 8
⇒ a = 4
E2) As X ~ U[−2, 2],
∴ f(x) = 1/4,  −2 < x < 2.
Now P[X > k] = 1/2
⇒ ∫_k^2 (1/4) dx = 1/2
⇒ (2 − k)/4 = 1/2
⇒ 2 − k = 2
⇒ k = 0.
E3) Comparing it with the exponential distribution given by
f(x) = θe^(−θx), x ≥ 0,
we have θ = 3
∴ Mean = 1/θ = 1/3 and Variance = 1/θ² = 1/9
E4) As the given function is an exponential distribution, i.e. a p.d.f.,
∫_0^∞ f(x) dx = 1
⇒ ∫_0^∞ 2e^(−kx) dx = 2/k = 1
⇒ k = 2
Alternatively, you may compare the given function with the exponential distribution
f(x) = θe^(−θx);
we have θ = 2 (from the coefficient) and θ = k (from the exponent)
⇒ k = 2
E5) Here P[X ≤ x] = F(x) = 1 − e^(−θx) = 1 − e^(−x/20)
i) P[First week is accident free] = P[Accident occurs after six working days]
= P[X > 6] = 1 − P[X ≤ 6]
= 1 − (1 − e^(−6/20)) = e^(−0.3) ≈ 0.7408

ii) P[First accident occurs in the second week, from the start of the working day on
Tuesday till the end of the working day on Wednesday]
= P[First accident occurs after 7 working days and before the end of 9 working days]
= P[7 < X ≤ 9]
= P[X ≤ 9] − P[X ≤ 7]
= (1 − e^(−9/20)) − (1 − e^(−7/20))
= e^(−7/20) − e^(−9/20)
= e^(−0.35) − e^(−0.45)
= 0.7047 − 0.6376   [See the table given at the end of Unit 10]
= 0.0671.

UNIT 16 GAMMA AND BETA DISTRIBUTIONS
Structure
16.1 Introduction
Objectives

16.2 Beta and Gamma Functions


16.3 Gamma Distribution
16.4 Beta Distribution of First Kind
16.5 Beta Distribution of Second Kind
16.6 Summary
16.7 Solutions/Answers

16.1 INTRODUCTION
In Unit 15, you have studied continuous uniform and exponential
distributions. Here, we will discuss gamma and beta distributions. Gamma
distribution reduces to exponential distribution and beta distribution reduces
to uniform distribution for special cases. Gamma distribution is a
generalization of exponential distribution in the same sense as the negative
binomial distribution is a generalization of geometric distribution. In a sense,
the geometric distribution and negative binomial distribution are the discrete
analogs of the exponential and gamma distributions, respectively. The present
unit discusses the gamma and beta distributions which are defined with the
help of special functions known as gamma and beta functions, respectively.
So, before defining these distributions, we first define gamma and beta
functions in Sec. 16.2 of this unit. Then gamma distribution and beta
distribution of first kind followed by beta distribution of second kind are
discussed in Secs. 16.3 to 16.5.
Objectives
After studying this unit, you would be able to:
 define beta and gamma functions;
 define gamma and beta distributions;
 discuss various properties of these distributions;
 identify the situations where these distributions can be employed; and
 solve various practical problems related to these distributions.

16.2 BETA AND GAMMA FUNCTIONS


In this section, some special functions i.e. beta and gamma functions are
defined with their properties and the relation between them. These will be
helpful in defining beta and gamma distributions to be defined in the
subsequent sections.

Beta Function
Definition: If m > 0, n > 0, the integral ∫_0^1 x^(m−1) (1 − x)^(n−1) dx is called a beta
function and is denoted by β(m, n), e.g.
i) ∫_0^1 x^(1/2) (1 − x)² dx = ∫_0^1 x^((3/2)−1) (1 − x)^(3−1) dx = β(3/2, 3)
   or ∫_0^1 x^(1/2) (1 − x)² dx = β(1/2 + 1, 2 + 1) = β(3/2, 3)
ii) ∫_0^1 x^(−1/3) (1 − x)^(−1/3) dx = β(−1/3 + 1, −1/3 + 1) = β(2/3, 2/3)

Properties of Beta Function


1. Beta function is a symmetric function, i.e. β(m, n) = β(n, m)
2. There are some other forms of the beta function as well. One of these forms,
   which will be helpful in defining the beta distribution of second kind, is
   β(m, n) = ∫_0^∞ x^(m−1)/(1 + x)^(m+n) dx
3. (i) β(p, q + 1)/q = β(p + 1, q)/p
   (ii) β(p, q) = β(p + 1, q) + β(p, q + 1)

On the basis of the above discussion, you can try the following exercise.
E1) Express the following as a beta function:
i) ∫_0^1 x^(−1/3) (1 − x)^(−1/2) dx
ii) ∫_0^1 x^(−2) (1 − x)^5 dx
iii) ∫_0^∞ x²/(1 + x)^5 dx
iv) ∫_0^∞ x^(1/2)/(1 + x)² dx

Gamma Function
Though we have defined the gamma function in Unit 13, yet we are again defining it
with more properties, examples and exercises to make you clearly understand this
special function.
Definition: If n > 0, the integral ∫_0^∞ x^(n−1) e^(−x) dx is called a gamma function and
is denoted by Γ(n), e.g.
(i) ∫_0^∞ x² e^(−x) dx = Γ(2 + 1) = Γ(3)
(ii) ∫_0^∞ x^(1/2) e^(−x) dx = Γ(1/2 + 1) = Γ(3/2)
Some Important Results on Gamma Function

1. If n > 1, Γ(n) = (n − 1) Γ(n − 1)
2. If n is a positive integer, Γ(n) = (n − 1)!
3. Γ(1/2) = √π
Relationship between Beta and Gamma Functions
If m > 0, n > 0, then β(m, n) = Γ(m) Γ(n)/Γ(m + n)
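This relation gives an easy way of computing beta-function values. A short Python sketch (using only the standard library's math.gamma; an illustration, not part of the unit) is:

from math import gamma

def beta(m, n):
    # beta function computed through the gamma-function relation above
    return gamma(m) * gamma(n) / gamma(m + n)

print(beta(3, 2))        # 1/12 ≈ 0.0833, since Γ(3)Γ(2)/Γ(5) = (2)(1)/24
print(beta(0.5, 0.5))    # π ≈ 3.1416, since Γ(1/2)Γ(1/2)/Γ(1) = π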
You can now try the following exercise.
E2) Evaluate:
(i) ∫_0^∞ e^(−x) x^(5/2) dx
(ii) ∫_0^1 x (1 − x)^10 dx
(iii) ∫_0^∞ x^(−1/2) e^(−x) dx

16.3 GAMMA DISTRIBUTION


Gamma distribution is a generalisation of exponential distribution. Both the
distributions are good models for waiting times. For exponential distribution,
the length of time interval between successive happenings is considered i.e.
the time is considered till one happening occurs whereas for gamma
distribution, the length of time between 0 and the instant when rth happening
occurs is considered. So, if r = 1, then the situation becomes the exponential
situation. Let us now define gamma distribution:
Definition: A random variable X is said to follow gamma distribution with
parameters r > 0 and λ > 0 if its probability density function is given by
f(x) = λ^r e^(−λx) x^(r−1)/Γ(r),  x > 0
     = 0,  elsewhere

Remark 1:
(i) It can be verified that
∫_0^∞ f(x) dx = 1
Verification:
∫_0^∞ f(x) dx = ∫_0^∞ [λ^r e^(−λx) x^(r−1)/Γ(r)] dx = [1/Γ(r)] ∫_0^∞ e^(−λx) (λx)^(r−1) λ dx
Putting λx = y ⇒ λ dx = dy. Also, when x → 0, y → 0 and when x → ∞, y → ∞
= [1/Γ(r)] ∫_0^∞ e^(−y) y^(r−1) dy
= Γ(r)/Γ(r)   [Using the gamma function defined in Sec. 16.2]
= 1
(ii) If X is a gamma variate with two parameters r > 0 and λ > 0, it is expressed
as X ~ γ(λ, r).
(iii) If we put r = 1, we have
f(x) = λ e^(−λx) x⁰/Γ(1), x > 0
     = λ e^(−λx), x > 0
which is the probability density function of exponential distribution.
Hence, exponential distribution is a particular case of gamma distribution.

(iv) If we put λ = 1, we have
f(x) = e^(−x) x^(r−1)/Γ(r), x > 0, r > 0
It is known as gamma distribution with single parameter r. This form of the gamma
distribution is also widely used. If X follows gamma distribution with single
parameter r > 0, it is expressed as X ~ γ(r).


Mean and Variance of Gamma Distribution
If X has a gamma distribution with parameters r > 0 and λ > 0, then its
Mean = r/λ,  Variance = r/λ².
If X has a gamma distribution with single parameter r > 0, then its
Mean = Variance = r.
Additive Property of Gamma Distribution
1. If X1, X2, …, Xk are independent gamma variates with parameters
   (λ, r1), (λ, r2), ..., (λ, rk) respectively, then X1 + X2 + … + Xk is also a
   gamma variate with parameter (λ, r1 + r2 + … + rk).
2. If X1, X2, ..., Xk are independent gamma variates with single parameters r1,
   r2, …, rk respectively, then X1 + X2 + … + Xk is also a gamma variate with
   parameter r1 + r2 + … + rk.
Example 1: Suppose that on an average 1 customer per minute arrives at a
shop. What is the probability that the shopkeeper will wait more than 5
minutes before
(i) both of the first two customers arrive, and
(ii) the first customer arrives?
Assume that the waiting times follow gamma distribution.
Solution:
i) Let X denote the waiting time in minutes until the second customer arrives;
then X has gamma distribution with r = 2 (as the waiting time is to be considered
up to the 2nd customer) and λ = 1 customer per minute.
∴ P[X > 5] = ∫_5^∞ f(x) dx = ∫_5^∞ [λ^r e^(−λx) x^(r−1)/Γ(r)] dx
= ∫_5^∞ [(1)² e^(−x) x^(2−1)/Γ(2)] dx = ∫_5^∞ x e^(−x) dx
= [x(−e^(−x))]_5^∞ − ∫_5^∞ (−e^(−x)) dx   [Integrating by parts]
= [0 + 5e^(−5)] + ∫_5^∞ e^(−x) dx = 5e^(−5) + [−e^(−x)]_5^∞
= 5e^(−5) + (0 + e^(−5)) = 6e^(−5)
= 6 × 0.0067   [∵ e^(−5) ≈ 0.0067]
≈ 0.040
ii) In this case r = 1, λ = 1 and hence
P[X > 5] = ∫_5^∞ [λ^r e^(−λx) x^(r−1)/Γ(r)] dx
= ∫_5^∞ [(1)¹ e^(−x) x⁰/Γ(1)] dx = ∫_5^∞ e^(−x) dx = [−e^(−x)]_5^∞ = 0 + e^(−5) ≈ 0.0067
Alternatively,
as r = 1, it is a case of exponential distribution, for which
f(x) = λe^(−λx), x ≥ 0, with λ = 1
∴ P[X > 5] = ∫_5^∞ e^(−x) dx = [−e^(−x)]_5^∞ = 0 + e^(−5) ≈ 0.0067
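Both parts of Example 1 can be checked with scipy's gamma distribution, which takes the shape r as the parameter 'a' and scale = 1/λ (an illustrative sketch only):

from scipy.stats import gamma

lam = 1.0                              # 1 customer per minute
X2 = gamma(a=2, scale=1 / lam)         # waiting time until the 2nd arrival
X1 = gamma(a=1, scale=1 / lam)         # waiting time until the 1st arrival
print(round(X2.sf(5), 4))              # P[X > 5] = 6e^(-5) ≈ 0.0404
print(round(X1.sf(5), 4))              # P[X > 5] = e^(-5)  ≈ 0.0067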
Here is an exercise for you.
E3) Telephone calls arrive at a switchboard at an average rate of 2 per minute.
Let X denote the waiting time in minutes until the 4th call arrives and
follows gamma distribution. Write the probability density function of X.
Also find its mean and variance.

Let us now discuss the beta distributions in the next two sections:

16.4 BETA DISTRIBUTION OF FIRST KIND


You have studied in Sec. 16.2 that the beta function is related to the gamma function
in the following manner:
β(m, n) = Γ(m) Γ(n)/Γ(m + n)
Now, we are in a position to define beta distribution which is defined with the
help of the beta function. There are two kinds of beta distribution: beta
distribution of first kind and beta distribution of second kind. Beta distribution
of second kind is defined in next section of the unit whereas beta distribution
of first kind is defined as follows:
Definition: A random variable X is said to follow beta distribution of first
kind with parameters m > 0 and n > 0, if its probability density function is
given by
f(x) = [1/β(m, n)] x^(m−1) (1 − x)^(n−1),  0 < x < 1
     = 0,  otherwise
The random variable X is known as a beta variate of first kind and can be
expressed as X ~ β1(m, n)
Remark 5: If m = 1 and n = 1, then the beta distribution reduces to
f(x) = [1/β(1, 1)] x^(1−1) (1 − x)^(1−1), 0 < x < 1
= x⁰(1 − x)⁰/β(1, 1), 0 < x < 1
= 1/β(1, 1), 0 < x < 1
But β(1, 1) = Γ(1)Γ(1)/Γ(2) = (1 × 1)/1 = 1
Therefore, f(x) = 1/1
⇒ f(x) = 1, 0 < x < 1
= 1/(1 − 0), 0 < x < 1
which is the uniform distribution on (0, 1).
[p.d.f. of uniform distribution on (a, b) is f(x) = 1/(b − a), a < x < b]
So, continuous uniform distribution is a particular case of beta distribution.
Mean and Variance of Beta Distribution of First Kind
The mean and variance of this distribution are given as
Mean = m/(m + n)
Variance = mn/[(m + n)²(m + n + 1)]
Example 4: Determine the constant C such that the function
f(x) = Cx³(1 − x)⁶, 0 < x < 1, is a beta distribution of first kind. Also, find its
mean and variance.
Solution: As f(x) is a beta distribution of first kind,
∫_0^1 f(x) dx = 1
⇒ ∫_0^1 Cx³(1 − x)⁶ dx = 1
⇒ C ∫_0^1 x³(1 − x)⁶ dx = 1
⇒ C β(3 + 1, 6 + 1) = 1   [By definition of the beta function]
⇒ C = 1/β(4, 7)
= Γ(4 + 7)/[Γ(4) Γ(7)]   [∵ β(m, n) = Γ(m)Γ(n)/Γ(m + n)]
= Γ(11)/[Γ(4) Γ(7)] = 10!/(3! × 6!)
= (10 × 9 × 8 × 7)/(3 × 2 × 1) = 840
Thus, f(x) = 840 x³(1 − x)⁶
= 840 x^(4−1) (1 − x)^(7−1)
= x^(4−1) (1 − x)^(7−1)/β(4, 7)   [∵ 1/β(4, 7) = 840, just obtained above]
⇒ m = 4, n = 7
∴ Mean = m/(m + n) = 4/(4 + 7) = 4/11,
and Variance = mn/[(m + n)²(m + n + 1)]
= (4 × 7)/[(4 + 7)²(4 + 7 + 1)]
= 28/(121 × 12) = 7/363
Now, you can try the following exercises.
E4) Using the beta function, prove that
∫_0^1 60x²(1 − x)³ dx = 1

E5) Determine the constant k such that the function
f(x) = k x^(−1/2) (1 − x)^(1/2), 0 < x < 1,
is a beta distribution of first kind. Also find its mean and variance.

16.5 BETA DISTRIBUTION OF SECOND KIND
Let us now define beta distribution of second kind.
Definition: A random variable X is said to follow beta distribution of second
kind with parameters m > 0, n > 0 if its probability density function is given
by
f(x) = x^(m−1)/[β(m, n) (1 + x)^(m+n)],  0 < x < ∞
     = 0,  elsewhere

Remark 6: It can be verified that ∫_0^∞ x^(m−1)/[β(m, n)(1 + x)^(m+n)] dx = 1
Verification:
∫_0^∞ x^(m−1)/[β(m, n)(1 + x)^(m+n)] dx = [1/β(m, n)] ∫_0^∞ x^(m−1)/(1 + x)^(m+n) dx
= [1/β(m, n)] β(m, n)   [∵ ∫_0^∞ x^(m−1)/(1 + x)^(m+n) dx is another form of the beta function (see Sec. 16.2 of this unit)]
= 1

Remark 7: If X is a beta variate of second kind with parameters m > 0, n > 0,


then it is expressed as X ~ β2(m, n)
Mean and Variance of Beta Distribution of Second Kind
Mean = m/(n − 1), n > 1;
Variance = m(m + n − 1)/[(n − 1)²(n − 2)], n > 2
Example 5: Determine the constant k such that the function
f(x) = kx³/(1 + x)⁷, 0 < x < ∞,
is the p.d.f. of beta distribution of second kind. Also find its mean and
variance.
Solution: As f(x) is a beta distribution of second kind,
∫_0^∞ f(x) dx = 1
⇒ ∫_0^∞ kx³/(1 + x)⁷ dx = 1
⇒ k ∫_0^∞ x^(4−1)/(1 + x)^(4+3) dx = 1
⇒ k β(4, 3) = 1
⇒ k = 1/β(4, 3) = Γ(4 + 3)/[Γ(4) Γ(3)] = 6!/(3! × 2!) = (6 × 5 × 4)/2 = 60
Here m = 4, n = 3
∴ Mean = m/(n − 1) = 4/(3 − 1) = 4/2 = 2
Variance = m(m + n − 1)/[(n − 1)²(n − 2)] = 4(4 + 3 − 1)/[(3 − 1)²(3 − 2)]
= (4 × 6)/(4 × 1) = 6

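scipy implements the beta distribution of second kind under the name betaprime, with a = m and b = n. The following sketch (an illustration only, not part of the unit) reproduces the results of Example 5:

from scipy.stats import betaprime

X = betaprime(a=4, b=3)            # m = 4, n = 3
print(round(X.pdf(1.0), 4))        # 60 * 1 / 2**7 ≈ 0.4688
print(X.mean())                    # m/(n - 1) = 2.0
print(X.var())                     # m(m + n - 1)/[(n - 1)^2 (n - 2)] = 6.0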
Now, you can try the following exercises.


E6) Using the beta function, prove that
∫_0^∞ x³/(1 + x)^(13/2) dx = 32/1155

E7) Obtain the mean and variance for the beta distribution whose density is given by
f(x) = 60x²/(1 + x)⁷, 0 < x < ∞

16.6 SUMMARY
The following main points have been covered in this unit:
1) A random variable X is said to follow gamma distribution with parameters
   r > 0 and λ > 0 if its probability density function is given by
   f(x) = λ^r e^(−λx) x^(r−1)/Γ(r), x > 0
        = 0, elsewhere
2) Gamma distribution of a random variable X with single parameter r > 0 is
   defined as f(x) = e^(−x) x^(r−1)/Γ(r), x > 0, r > 0
3) For gamma distribution with two parameters λ and r, Mean = r/λ and
   Variance = r/λ².
4) A random variable X is said to follow beta distribution of first kind with
   parameters m > 0 and n > 0, if its probability density function is given by
   f(x) = [1/β(m, n)] x^(m−1)(1 − x)^(n−1), 0 < x < 1
        = 0, otherwise
   Its mean and variance are m/(m + n) and mn/[(m + n)²(m + n + 1)], respectively.
5) A random variable X is said to follow beta distribution of second kind with
   parameters m > 0, n > 0 if its probability density function is given by:
   f(x) = x^(m−1)/[β(m, n)(1 + x)^(m+n)], 0 < x < ∞
        = 0, elsewhere
   Its mean and variance are m/(n − 1), n > 1, and m(m + n − 1)/[(n − 1)²(n − 2)], n > 2,
   respectively.
6) Exponential distribution is a particular case of gamma distribution and
continuous uniform distribution is a particular case of beta distribution.

16.7 SOLUTIONS/ANSWERS
E1) (i) ∫_0^1 x^(−1/3)(1 − x)^(−1/2) dx = β(−1/3 + 1, −1/2 + 1) = β(2/3, 1/2)
(ii) ∫_0^1 x^(−2)(1 − x)⁵ dx = ∫_0^1 x^(−1−1)(1 − x)^(6−1) dx
is not a beta function, since m = −1 < 0, but m and n both should be positive.
(iii) ∫_0^∞ x²/(1 + x)⁵ dx = ∫_0^∞ x^(3−1)/(1 + x)^(3+2) dx = β(3, 2)
[∵ m = 3, n = 2 (see Property 2 of the beta function, Sec. 16.2)]
(iv) ∫_0^∞ √x/(1 + x)² dx = ∫_0^∞ x^((3/2)−1)/(1 + x)^((3/2)+(1/2)) dx = β(3/2, 1/2)
E2) (i) ∫_0^∞ e^(−x) x^(5/2) dx = Γ(5/2 + 1) = Γ(7/2)
= (5/2)Γ(5/2) = (5/2)(3/2)Γ(3/2) = (5/2)(3/2)(1/2)Γ(1/2)   [Result 1 on gamma function (see Sec. 16.2)]
= (5/2)(3/2)(1/2)√π   [Result 3 on gamma function]
= (15/8)√π
(ii) ∫_0^1 x(1 − x)^10 dx = β(1 + 1, 10 + 1)
= β(2, 11)
= Γ(2)Γ(11)/Γ(13)   [See the relation between beta and gamma functions]
= 1! × 10!/12!   [Result 2 on gamma function]
= 10!/(12 × 11 × 10!) = 1/(12 × 11) = 1/132
(iii) ∫_0^∞ x^(−1/2) e^(−x) dx = Γ(−1/2 + 1) = Γ(1/2) = √π

E3) Here λ = 2, r = 4.
∴ f(x) = λ^r e^(−λx) x^(r−1)/Γ(r), x > 0
= 2⁴ e^(−2x) x³/Γ(4), x > 0
= 16 e^(−2x) x³/3!, x > 0
= (8/3) x³ e^(−2x), x > 0
Mean = r/λ = 4/2 = 2,
Variance = r/λ² = 4/2² = 4/4 = 1
E4) ∫_0^1 60x²(1 − x)³ dx = 60 ∫_0^1 x^(3−1)(1 − x)^(4−1) dx = 60 β(3, 4)
= 60 Γ(3)Γ(4)/Γ(7) = 60 × (2 × 6)/720 = 60 × (1/60) = 1
E5) ∫_0^1 k x^(−1/2)(1 − x)^(1/2) dx = 1
⇒ k β(−1/2 + 1, 1/2 + 1) = 1
⇒ k = 1/β(1/2, 3/2) = Γ(2)/[Γ(1/2) Γ(3/2)] = 1/[√π × (1/2)√π] = 2/π
Now, the given p.d.f. of beta distribution of first kind is
f(x) = (2/π) x^((1/2)−1)(1 − x)^((3/2)−1), 0 < x < 1
= x^((1/2)−1)(1 − x)^((3/2)−1)/β(1/2, 3/2), 0 < x < 1
∴ m = 1/2, n = 3/2
and hence Mean = m/(m + n) = (1/2)/[(1/2) + (3/2)] = 1/4
Variance = mn/[(m + n)²(m + n + 1)]
= [(1/2)(3/2)]/[((1/2) + (3/2))²((1/2) + (3/2) + 1)]
= (3/4)/(4 × 3) = 3/48 = 1/16
E6) ∫_0^∞ x³/(1 + x)^(13/2) dx = ∫_0^∞ x^(4−1)/(1 + x)^(4+(5/2)) dx
= β(4, 5/2) = Γ(4) Γ(5/2)/Γ(13/2)
= [6 Γ(5/2)]/[(11/2)(9/2)(7/2)(5/2) Γ(5/2)]   [∵ Γ(13/2) = (11/2)(9/2)(7/2)(5/2) Γ(5/2)]
= (6 × 16)/(11 × 9 × 7 × 5) = 96/3465 = 32/1155
E7) f(x) = 60x²/(1 + x)⁷, 0 < x < ∞
= 60 x^(3−1)/(1 + x)^(3+4), 0 < x < ∞
= x^(3−1)/[β(3, 4)(1 + x)^(3+4)], 0 < x < ∞   [∵ β(3, 4) = Γ(3)Γ(4)/Γ(7) = (2 × 6)/720 = 1/60]
∴ m = 3, n = 4
Hence, Mean = m/(n − 1) = 3/(4 − 1) = 1
Variance = m(m + n − 1)/[(n − 1)²(n − 2)] = 3(3 + 4 − 1)/[(4 − 1)²(4 − 2)] = (3 × 6)/(9 × 2) = 1.
