
Some Discrete Probability Distributions (STA 211)

A random experiment that results in one of two mutually exclusive and exhaustive outcomes is said to follow Bernoulli trials: a success $\{s\}$ with probability $p$ or a failure $\{f\}$ with probability $q = 1 - p$. Thus $P[\{s\}] = p$, $P[\{f\}] = q$, and the sample description space is $\Omega = \{s, f\}$. The $\sigma$-field of events $\mathcal{F}$ is $\{\varnothing, \Omega, \{s\}, \{f\}\}$. The product $\Omega \times \Omega$ is called the Cartesian product. If we do n independent trials, the sample space is

$$\Omega^n = \underbrace{\Omega \times \Omega \times \cdots \times \Omega}_{n \text{ times}}$$

and contains $2^n$ elementary outcomes, each of which is an n-tuple. Thus,

$$\Omega^n = \{a_1, a_2, \ldots, a_M\}, \qquad \text{where } M = 2^n \text{ and each component of } a_i \text{ is } s \text{ or } f.$$

For example, consider an experiment of throwing a coin 3 times with $p = P[\{H\}]$ and $q = P[\{T\}]$. The probability of the event $\{THT\}$ is $qpq = pq^2$. The probability of the event $\{HTT\}$ is also $pq^2$. The different events leading to one head and two tails are listed as:

$$E_1 = \{HTT\}, \qquad E_2 = \{THT\}, \qquad E_3 = \{TTH\}$$

If A denotes the event of getting one head and two tails without regard to order, then $A = E_1 \cup E_2 \cup E_3$. Since $E_i \cap E_j = \varnothing$ for $i \ne j$, we obtain $P(A) = P(E_1) + P(E_2) + P(E_3) = 3pq^2$.

A discrete random variable takes on a finite or countable set of values (often whole numbers) and has a staircase type of distribution function. The probability measure for a discrete random variable is the probability mass function (pmf), which is used when there is at most a countable set of outcomes of the random experiment. It is defined as

$$p(x_i) = P(X = x_i) = F(x_i) - F(x_i^-),$$

where $x_i^-$ denotes the point just to the left of the jump at $x_i$. The probability distribution function (PDF) for a discrete random variable is given by

$$F(x) = P(X \le x) = \sum_{\text{all } x_i \le x} p(x_i).$$

For any event A when X is discrete:

$$P(A) = \sum_{\text{all } x_i \in A} p(x_i).$$

Suppose p(x) depends on a quantity that can be assigned any one of a number of possible values, with each different value determining a different probability distribution. Such a quantity is called a parameter of the distribution. The collection of all probability distributions for different values of the parameter is called a family of probability distributions. For example, the pmf of any Bernoulli random variable can be expressed in the form $p(1) = \alpha$ and $p(0) = 1 - \alpha$, where $0 < \alpha < 1$. Because the pmf depends on the particular value of $\alpha$, we often write $p(x; \alpha)$ rather than just $p(x)$:

$$p(x; \alpha) = \begin{cases} 1 - \alpha & \text{if } x = 0 \\ \alpha & \text{if } x = 1 \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

Then each choice of $\alpha$ in Expression (1) yields a different pmf. Every probability distribution for a Bernoulli random variable has the form of Expression (1), so it is called the family of Bernoulli distributions.

The cumulative distribution function (cdf) F(x) of a discrete random variable X with pmf p(x) is defined for every number x by

$$F(x) = P(X \le x) = \sum_{y:\, y \le x} p(y).$$

For any number x, F(x) is the probability that the observed value of X will be at most x.
Example 1
Determine the value of F(y) for each y in the set {1, 2, 3, 4}, given that the pmf of Y is

y:     1    2    3    4
p(y):  .4   .3   .1   .1

Solution
The values of F(y) are
$F(1) = P(Y \le 1) = P(Y = 1) = p(1) = .4$
$F(2) = P(Y \le 2) = P(Y = 1 \text{ or } 2) = p(1) + p(2) = .7$
$F(3) = P(Y \le 3) = P(Y = 1 \text{ or } 2 \text{ or } 3) = p(1) + p(2) + p(3) = .9$
$F(4) = P(Y \le 4) = P(Y = 1 \text{ or } 2 \text{ or } 3 \text{ or } 4) = 1$
The cdf is

$$F(y) = \begin{cases} 0 & \text{if } y < 1 \\ .4 & \text{if } 1 \le y < 2 \\ .7 & \text{if } 2 \le y < 3 \\ .9 & \text{if } 3 \le y < 4 \\ 1 & \text{if } y \ge 4 \end{cases}$$

Graphically, F(y) is a step function with a jump of height p(y) at each of y = 1, 2, 3, 4.
For X a discrete random variable, the graph of F(x) will have a jump at every possible value of X
and will be flat between possible values. Such a graph is called a step function.
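As a quick illustration, the following Python sketch (the pmf dictionary and helper name are ours, not part of the text) builds the step-function cdf of Example 1 directly from its pmf:

```python
# pmf of Example 1: P(Y = y) for y = 1, 2, 3, 4.
pmf = {1: 0.4, 2: 0.3, 3: 0.1, 4: 0.1}

def cdf(x, pmf):
    """F(x) = P(Y <= x): sum the pmf over all support points y <= x."""
    return sum(prob for y, prob in pmf.items() if y <= x)

for x in [0.5, 1, 1.7, 2, 3.9, 4]:
    print(f"F({x}) = {cdf(x, pmf):.1f}")
# F(0.5) = 0.0, F(1) = 0.4, F(1.7) = 0.4, F(2) = 0.7, F(3.9) = 0.9, F(4) = 1.0
```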

The Binomial Random Variable and Distribution


A binomial experiment satisfies the following conditions:
(i) The experiment consists of a sequence of n smaller experiments called trials, where n is fixed in advance of the experiment.
(ii) Each trial can result in one of the same two possible outcomes (dichotomous trials), which we denote by success (S) and failure (F).
(iii) The trials are independent, so that the outcome on any particular trial does not influence the outcome on any other trial.
(iv) The probability of success is constant from trial to trial; we denote this probability by p.

Given a binomial experiment consisting of n trials, the binomial random variable X associated with this experiment is defined as X = the number of S's among the n trials. Since the ordering of S's and F's is not important, the probability of any particular sequence with x S's and n - x F's, such as the first x trials resulting in S and the last n - x resulting in F, is $p^x(1-p)^{n-x}$. Multiplying this by the number of ways of choosing x of the n trials to be S's (that is, the number of combinations of size x that can be constructed from n distinct objects) gives the pmf

$$b(x; n, p) = \begin{cases} \dbinom{n}{x} p^x (1-p)^{n-x} & x = 0, 1, 2, \ldots, n \\ 0 & \text{otherwise} \end{cases}$$

The probability of a given ordered set of x successes and n - x failures in Bernoulli trials is simply $p^x q^{n-x}$ regardless of the ordering of the s's and f's, so we obtain

$$P(X = x) = \binom{n}{x} p^x q^{n-x} = b(x; n, p).$$

The symbol b(x; n, p) denotes the binomial law, which is the probability of getting x successes in n independent trials with individual Bernoulli trial success probability p.

The binomial coefficient

$${}^nC_x = \binom{n}{x}$$

is the number of subpopulations of size x that can be formed from a population of size n.
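As a minimal sketch, the binomial law can be computed directly from the formula above using Python's math.comb for the binomial coefficient (the function name b simply mirrors the notation b(x; n, p)):

```python
from math import comb

def b(x, n, p):
    """Binomial pmf b(x; n, p): probability of exactly x successes in n trials."""
    if not 0 <= x <= n:
        return 0.0
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Sanity check: the pmf sums to 1 over x = 0, ..., n.
assert abs(sum(b(x, 10, 0.3) for x in range(11)) - 1.0) < 1e-12
print(b(3, 6, 0.4))  # 0.27648, as computed in Example 3(i) below
```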
Example 2
Ten independent binary pulses per second arrive at a receiver. The error probability (that is, a zero received as a one or vice versa) is 0.001. What is the probability of at least one error per second?
Solution
P(at least one error/sec) = 1 - P(no errors/sec)

$$= 1 - \binom{10}{0}(0.001)^0(0.999)^{10} = 1 - (0.999)^{10} \approx 0.01$$
Example 3
Obtain the following probabilities.
(i) b(3; 6, .4)
(ii) b(7; 8, .6)
(iii) Given X ~ Bi(10, .3), find (a) $P(2 \le X \le 4)$ (b) $P(2 < X < 6)$ (c) $P(2 \le X)$
(iv) $P(X \le 1)$ when X ~ Bi(10, .7)
(v) P(X = 8) when X ~ Bi(10, .7)
Solution
(i) $P(X = 3) = b(3; 6, .4) = \dbinom{6}{3}(.4)^3(1-.4)^{6-3} = \dfrac{6!}{3!(6-3)!}(.4)^3(.6)^3 = 20(.4)^3(.6)^3 = 0.27648$

Alternatively, from Tables (n = 6, p = .4): $P(X = 3) = P(X \le 3) - P(X \le 2) = 0.821 - 0.544 = 0.277$

(ii) $P(X = 7) = P(X \le 7) - P(X \le 6) = 0.983 - 0.894 \approx 0.090$ (from Tables, n = 8, p = .6)

(iii) X ~ Bi(10, .3), i.e. n = 10, p = .3
(a) $P(2 \le X \le 4) = P(X \le 4) - P(X \le 1) = 0.850 - 0.149 = 0.701$
(b) $P(2 < X < 6) = P(X \le 5) - P(X \le 2) = 0.953 - 0.383 = 0.570$
(c) $P(2 \le X) = 1 - P(X \le 1) = 1 - 0.149 = 0.851$
(iv) $P(X \le 1) = 0.000$ (from Tables, n = 10, p = .7)
(v) $P(X = 8) = P(X \le 8) - P(X \le 7) = 0.851 - 0.617 = 0.234$ (from Tables, n = 10, p = .7)
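The table lookups in Example 3 can be cross-checked numerically; a sketch assuming SciPy is available:

```python
from scipy.stats import binom

print(binom.pmf(3, 6, 0.4))                            # (i)   ~0.27648
print(binom.pmf(7, 8, 0.6))                            # (ii)  ~0.0896
print(binom.cdf(4, 10, 0.3) - binom.cdf(1, 10, 0.3))   # (iii)(a) ~0.700
print(binom.cdf(5, 10, 0.3) - binom.cdf(2, 10, 0.3))   # (iii)(b) ~0.570
print(binom.sf(1, 10, 0.3))                            # (iii)(c) ~0.851; sf(x) = 1 - cdf(x)
print(binom.cdf(1, 10, 0.7))                           # (iv)  ~0.0001
print(binom.pmf(8, 10, 0.7))                           # (v)   ~0.2335
```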

Hypergeometric and Negative Binomial Distributions


The hypergeometric and negative binomial distributions are both closely related to the binomial
distribution. The binomial distribution is the approximate probability model for sampling without
replacement from a finite dichotomous (S-F) population while the hypergeometric distribution is
the exact probability model for the number of S’s in the sample. The binomial random variable X
is the number of S’s when the number n of trials is fixed, whereas the negative binomial
distribution arises from fixing the number of S’s desired and letting the number of trials be
random.

The Hypergeometric Distribution: The assumptions leading to the hypergeometric distribution are as follows:
(i) The population or set to be sampled consists of N individuals, objects or elements (a finite
population).
(ii) Each individual can be characterized as a success (S) or a failure (F), and there are M
successes in the population.
(iii) A sample of n individuals is selected without replacement in such a way that each subset of
size n is equally likely to be chosen.

The random variable of interest is X = the number of S's in the sample. The probability distribution of X depends on the parameters n, M and N, and so we wish to obtain P(X = x) = h(x; n, M, N). In general, if the sample size n is smaller than the number of successes in the population (M), then the largest possible X value is n. However, if M < n (e.g., a sample size of 25 and only 15 successes in the population), then X can be at most M. Similarly, whenever the number of population failures (N - M) exceeds the sample size, the smallest possible X value is 0 (since all sampled individuals might then be failures). However, if N - M < n, the smallest possible X value is n - (N - M). Summarizing, the possible values of X satisfy the restriction

$$\max(0,\, n - (N - M)) \le x \le \min(n,\, M).$$

Proposition 1: If X is the number of S's in a completely random sample of size n drawn from a population consisting of M S's and (N - M) F's, then the probability distribution of X, called the hypergeometric distribution, is given by

$$P(X = x) = h(x; n, M, N) = \frac{\dbinom{M}{x}\dbinom{N-M}{n-x}}{\dbinom{N}{n}}$$

for x an integer satisfying $\max(0, n - N + M) \le x \le \min(n, M)$.
The mean and variance of the hypergeometric random variable X having pmf h(x; n, M, N) are

$$E(X) = n \cdot \frac{M}{N}, \qquad V(X) = \left(\frac{N-n}{N-1}\right) \cdot n \cdot \frac{M}{N}\left(1 - \frac{M}{N}\right)$$
The ratio M/N is the proportion of S's in the population. If we replace M/N by p in E(X) and V(X), we get

$$E(X) = np, \qquad V(X) = \left(\frac{N-n}{N-1}\right) np(1-p) \qquad (2)$$

Expression (2) shows that the means of the binomial and hypergeometric random variables are equal, whereas the variances of the two random variables differ by the factor (N - n)/(N - 1), often called the finite population correction factor. This factor is less than 1, so the hypergeometric variable has smaller variance than does the binomial random variable. The correction factor can be written (1 - n/N)/(1 - 1/N), which is approximately 1 when n is small relative to N.
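A short numerical sketch of the finite population correction, with illustrative values N = 100 and M = 30 (so p = .3):

```python
# Compare the binomial variance npq with the hypergeometric variance
# for increasing sample sizes n drawn from a population of size N.
N, M = 100, 30
p = M / N
for n in [5, 20, 50]:
    fpc = (N - n) / (N - 1)        # finite population correction factor
    v_binom = n * p * (1 - p)      # binomial variance
    v_hyper = fpc * v_binom        # hypergeometric variance
    print(n, round(fpc, 3), round(v_binom, 3), round(v_hyper, 3))
# As n grows relative to N, fpc drops below 1 and the hypergeometric
# variance falls increasingly below the binomial variance.
```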
Example 4
Five individuals from an animal population thought to be near extinction in a certain region have
been caught, tagged and released to mix into the population. After they have had an opportunity
to mix, a random sample of 10 of these animals is selected. Let X = the number of tagged
animals in the second sample. If there are actually 25 animals of this type in the region, what is
the probability that
(a) X = 2? (b) $X \le 2$? (c) Find the mean and variance of X.
Solution
The parameter values are n = 10, M = 5 (5 tagged animals in the population), and N = 25, so

$$P(X = x) = h(x; 10, 5, 25) = \frac{\dbinom{5}{x}\dbinom{20}{10-x}}{\dbinom{25}{10}}, \qquad x = 0, 1, 2, 3, 4, 5$$

(a) $P(X = 2) = h(2; 10, 5, 25) = \dfrac{\dbinom{5}{2}\dbinom{20}{8}}{\dbinom{25}{10}} \approx 0.385$

(b) $P(X \le 2) = P(X = 0, 1 \text{ or } 2) = \displaystyle\sum_{x=0}^{2} h(x; 10, 5, 25) = 0.057 + 0.257 + 0.385 = 0.699$

(c) Using n = 10, M = 5 and N = 25, we have $p = \dfrac{M}{N} = \dfrac{5}{25} = .2$, so

Mean = $E(X) = 10(.2) = 2$

Variance = $V(X) = \dfrac{15}{24}(10)(.2)(.8) = (.625)(1.6) = 1$

Note: If the sampling were carried out with replacement, V(X) = 1.6.
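Example 4 can be verified with scipy.stats; note that SciPy's hypergeom is parameterized as (population size, number of S's in the population, sample size):

```python
from scipy.stats import hypergeom

rv = hypergeom(25, 5, 10)   # N = 25 animals, M = 5 tagged, n = 10 sampled
print(rv.pmf(2))            # (a) ~0.385
print(rv.cdf(2))            # (b) ~0.699
print(rv.mean(), rv.var())  # (c) 2.0 and 1.0
```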

Suppose the population size N is not actually known, so the value x is observed and we wish to estimate N. It is reasonable to equate the observed sample proportion of S's, x/n, with the population proportion, M/N, giving the estimate

$$\hat{N} = \frac{M \cdot n}{x}.$$

If M = 100, n = 40 and x = 16, then $\hat{N} = 250$.
Let the population size, N, and number of population S’s, M, get large with the ratio M/N
approaching p. Then h(x; n, M, N) approaches b(x; n, p); so for n/N small, the two are
approximately equal provided that p is not too near either 0 or 1. In general, if sampling was
without replacement but n/N was at most .05, then the binomial distribution could be used to
compute approximate probabilities involving the number of S’s in the sample.
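A quick sketch of this rule of thumb, comparing the two pmfs for illustrative values with n/N = .02:

```python
from scipy.stats import binom, hypergeom

# Population N = 1000 with M = 400 S's (p = .4); sample size n = 20.
N, M, n = 1000, 400, 20
for x in [5, 8, 11]:
    # SciPy's hypergeom.pmf takes (x, population size, successes, sample size).
    print(x, round(hypergeom.pmf(x, N, M, n), 4), round(binom.pmf(x, n, M / N), 4))
# The hypergeometric and binomial probabilities agree to roughly two decimal places.
```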
The Negative Binomial Distribution: The negative binomial random variable and distribution are
based on an experiment satisfying the following conditions:
(i) The experiment consists of a sequence of independent trials.
(ii) Each trial can result in either a success (S) or a failure (F).
(iii) The probability of success is constant from trial to trial, so P(S on trial i) = p for i = 1, 2, 3, ….
(iv) The experiment continues (trials are performed) until a total of r successes have been observed, where r is a specified positive integer.

The random variable of interest is X = the number of failures that precede the rth success; X is called a negative binomial random variable because, in contrast to the binomial random variable, the number of successes is fixed and the number of trials is random.
Proposition 2: The probability mass function (pmf) of the negative binomial random variable X with parameters r = number of S's and p = P(S) is

$$nb(x; r, p) = \binom{x + r - 1}{r - 1} p^r (1-p)^x, \qquad x = 0, 1, 2, \ldots$$
Example 5
A paediatrician wishes to recruit 5 couples, each of whom is expecting a child, to participate
in a new natural childbirth regime. Let p = P(a randomly selected couple agrees to participate). If
p = .2, what is the probability that 15 couples must be asked before 5 are found who agree to
participate? That is, with S = {agrees to participate}, what is the probability that 10 F’s occur
before the fifth S? (Jay L. Devore, 2004).
Solution
Given r = 5, p = .2 and x = 10, so

$$nb(10; 5, .2) = \binom{14}{4}(.2)^5(.8)^{10} \approx .034$$

The probability that at most 10 F's are observed (at most 15 couples are asked) is

$$P(X \le 10) = \sum_{x=0}^{10} nb(x; 5, .2) = (.2)^5 \sum_{x=0}^{10} \binom{x+4}{4}(.8)^x \approx 0.164$$

In some treatments, the negative binomial random variable is taken to be the number of trials X + r rather than the number of failures. In the special case r = 1, the pmf is

$$nb(x; 1, p) = (1-p)^x p, \qquad x = 0, 1, 2, \ldots \qquad (3)$$

Expression (3) is called the geometric distribution. Both X = the number of F's and Y = the number of trials (= 1 + X) are referred to in the literature as geometric random variables.
If X is a negative binomial random variable with pmf nb(x; r, p), then

$$E(X) = \frac{r(1-p)}{p}, \qquad V(X) = \frac{r(1-p)}{p^2}$$

Finally, by expanding the binomial coefficient in front of $p^r(1-p)^x$ and doing some cancellation, it can be seen that nb(x; r, p) is well defined even when r is not an integer. This generalized negative binomial distribution has been found to fit observed data quite well in a wide variety of applications.

Expected Values of Discrete Random Variables

The expected or average value of a discrete random variable X taking on values $x_i$ with pmf $p(x_i) = P(X = x_i)$, $i = 1, 2, \ldots$, is defined by

$$E(X) = \sum_i x_i\,p(x_i).$$

Examples of probability mass functions are:

Bernoulli ($p \ge 0$, $q \ge 0$, $p + q = 1$): Any random variable whose only possible values are 0 and 1 is called a Bernoulli random variable. Its pmf is

$$p(0) = q, \qquad p(1) = p,$$

i.e. $p(x) = p^x q^{1-x}$, $x = 0, 1$.

The Bernoulli law applies in those situations where the outcome results in one of two mutually exclusive and exhaustive states: a success (with probability p) or a failure (with probability q). A random variable (r.v.) that has the Bernoulli pmf is said to be a Bernoulli random variable.
Properties of the Bernoulli distribution
If we denote the mean and standard deviation of a Bernoulli distribution by $\mu$ and $\sigma$ respectively, then
(a) Mean $\mu = p$
(b) Variance $\sigma^2 = pq$
(c) Standard deviation $\sigma = \sqrt{pq}$


Proof
The pmf of a Bernoulli random variable is given as

$$P(X = x) = p^x (1-p)^{1-x}, \qquad x = 0, 1$$

The expected value of X is

$$E(X) = \mu = \sum_{x=0,1} x\,p^x(1-p)^{1-x} = 0(p^0)(1-p)^{1-0} + 1(p^1)(1-p)^{1-1} = 0 + p = p$$

The variance of X is

$$V(X) = \sigma^2 = \sum x^2 p(x) - \mu^2 = \sum_{x=0,1} x^2 p^x (1-p)^{1-x} - p^2 = (0)^2(p^0)(1-p)^{1-0} + (1)^2(p^1)(1-p)^{1-1} - p^2$$
$$= p - p^2 = p(1-p) = pq, \quad \text{where } q = 1 - p.$$

Binomial ($n = 1, 2, \ldots$; $0 \le p \le 1$): A binomial random variable has a pmf given as

$$p(k) = \begin{cases} {}^nC_k\, p^k q^{n-k}, & k = 0, 1, 2, \ldots, n \\ 0 & \text{otherwise} \end{cases}$$

The binomial law applies in games of chance, military defence strategies, failure analysis and many other situations.
Properties of the Binomial distribution
If we denote the mean and standard deviation of a binomial distribution by $\mu$ and $\sigma$ respectively, then
(a) Mean $\mu = np$
(b) Variance $\sigma^2 = npq$
(c) Standard deviation $\sigma = \sqrt{npq}$


Proof
The pmf of a binomial random variable X is given as

$$P(X = x) = {}^nC_x\,p^x(1-p)^{n-x}, \qquad x = 0, 1, 2, \ldots, n$$

The expected value of X is

$$E(X) = \mu = \sum_{x=0}^{n} x\,{}^nC_x\,p^x(1-p)^{n-x} = \sum_{x=0}^{n} x\,\frac{n!}{x!(n-x)!}p^x(1-p)^{n-x} = np\sum_{x=1}^{n}\frac{(n-1)!}{(x-1)!(n-x)!}p^{x-1}(1-p)^{n-x} = np,$$

where $\displaystyle\sum_{x=1}^{n}\frac{(n-1)!}{(x-1)!(n-x)!}p^{x-1}(1-p)^{n-x} = 1$ (it is the sum of a binomial pmf over n - 1 trials).

The variance of X is

$$\operatorname{var}(X) = E(X^2) - [E(X)]^2$$

To find $E(X^2)$, we use E[X(X - 1)] in the pmf:

$$E[X(X-1)] = \sum_{x=0}^{n} x(x-1)\frac{n!}{x!(n-x)!}p^x(1-p)^{n-x} = n(n-1)p^2\sum_{x=2}^{n}\frac{(n-2)!}{(x-2)!(n-x)!}p^{x-2}(1-p)^{n-x} = n(n-1)p^2$$

$$E[X(X-1)] = E(X^2) - E(X) \;\Rightarrow\; E(X^2) = E[X(X-1)] + E(X)$$

$$\therefore \operatorname{var}(X) = n(n-1)p^2 + np - (np)^2 = n^2p^2 - np^2 + np - n^2p^2 = np - np^2 = np(1-p) = npq$$
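These identities can be confirmed numerically by summing the pmf directly (pure Python; the parameters are arbitrary illustrative choices):

```python
from math import comb

n, p = 12, 0.35
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

mean = sum(x * pmf[x] for x in range(n + 1))
var = sum(x**2 * pmf[x] for x in range(n + 1)) - mean**2
print(mean, n * p)           # both 4.2
print(var, n * p * (1 - p))  # both 2.73
```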
Example 6
Find the probability that in 10 throws of a fair six-faced die, a perfect square number shows up three times.
Solution
When a fair six-faced die is thrown, the possible outcomes are $\Omega = \{1, 2, 3, 4, 5, 6\}$.
The perfect squares are 1 and 4.
Let p be the probability that a perfect square number shows up.
Let q be the probability that a perfect square number does not show up, then:

$$p = \frac{2}{6} = \frac{1}{3}; \qquad q = \frac{4}{6} = \frac{2}{3}$$

This experiment is a sequence of Bernoulli trials. Let $P(X = x)$ be the probability that in n trials, there will be x perfect square numbers. Then

$$P(X = x) = {}^nC_x\,p^x q^{n-x}.$$

Therefore,

$$P(X = 3) = {}^{10}C_3\left(\frac{1}{3}\right)^3\left(\frac{2}{3}\right)^7 = \frac{10 \cdot 9 \cdot 8 \cdot 7!}{3 \cdot 2 \cdot 1 \cdot 7!}\left(\frac{1}{3}\right)^3\left(\frac{2}{3}\right)^7 = 120\left(\frac{1}{3}\right)^3\left(\frac{2}{3}\right)^7 = \frac{5120}{19683} \approx 0.26$$
Example 7
A gardener produces seeds in baskets for sale. The probability that a seed selected at random will grow is 0.8. If twenty of these seeds are sown, what is the probability that
(i) less than seven will grow;
(ii) less than seven will not grow;
(iii) exactly half the seeds will grow?
Solution
Let p be the probability that a seed selected at random will grow.
Let q be the probability that a seed selected at random will not grow, then:

$$p = 0.8 = \frac{4}{5}; \qquad q = 1 - 0.8 = 0.2 = \frac{1}{5}$$

This experiment is a sequence of Bernoulli trials. Let X be the number of the n = 20 seeds that grow, so X ~ Bi(20, .8).
(i) $P(X < 7) = P(X = 0) + P(X = 1) + \cdots + P(X = 6) = P(X \le 6)$
Using Tables (n = 20, p = .8), $P(X < 7) = P(X \le 6) = 0.000$
(ii) Let Y = 20 - X be the number of seeds that do not grow, so Y ~ Bi(20, .2). Then
P(less than 7 will not grow) = $P(Y < 7) = P(Y \le 6)$
From Tables (n = 20, p = .2), $P(Y \le 6) = 0.913$
(iii) P(exactly half the seeds will grow) = $P(X = 10)$

$$= {}^{20}C_{10}\left(\frac{4}{5}\right)^{10}\left(\frac{1}{5}\right)^{10} \approx 0.002031$$

Using Tables, $P(X = 10) = P(X \le 10) - P(X \le 9) = 0.003 - 0.001 = 0.002$
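The table values in Example 7 can be cross-checked numerically (assuming SciPy):

```python
from scipy.stats import binom

print(binom.cdf(6, 20, 0.8))    # (i)   ~1.5e-06, i.e. 0.000 to three decimal places
print(binom.cdf(6, 20, 0.2))    # (ii)  ~0.913
print(binom.pmf(10, 20, 0.8))   # (iii) ~0.002031
```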
Example 8
The binomial probability distribution function of a random variable X is

$$P(X = x) = {}^3C_x\left(\frac{1}{4}\right)^x\left(\frac{3}{4}\right)^{3-x}, \qquad x = 0, 1, 2, 3$$

Calculate (a) Mean (b) Variance (c) Standard deviation (d) Coefficient of variation
Solution
Evaluating the pmf at x = 0, 1, 2, 3:

$$P(X=0) = {}^3C_0\left(\frac14\right)^0\left(\frac34\right)^3 = \frac{27}{64}; \qquad P(X=1) = {}^3C_1\left(\frac14\right)^1\left(\frac34\right)^2 = \frac{27}{64};$$
$$P(X=2) = {}^3C_2\left(\frac14\right)^2\left(\frac34\right)^1 = \frac{9}{64}; \qquad P(X=3) = {}^3C_3\left(\frac14\right)^3\left(\frac34\right)^0 = \frac{1}{64}$$

(a) Mean, $\mu = E(X) = \sum x\,p(x)$

$$= 0 \cdot \frac{27}{64} + 1 \cdot \frac{27}{64} + 2 \cdot \frac{9}{64} + 3 \cdot \frac{1}{64} = \frac{27 + 18 + 3}{64} = \frac{48}{64} = 0.75$$

(b) Variance of X, $\operatorname{var}(X) = \sum (x - E(X))^2 p(x)$:

x      p(x)     x - E(X)   (x - E(X))^2   (x - E(X))^2 p(x)
0      27/64     -3/4        9/16           243/1024
1      27/64      1/4        1/16            27/1024
2       9/64      5/4       25/16           225/1024
3       1/64      9/4       81/16            81/1024

$$\therefore \operatorname{var}(X) = \sigma^2 = \frac{243}{1024} + \frac{27}{1024} + \frac{225}{1024} + \frac{81}{1024} = \frac{576}{1024} = 0.5625$$

(c) Standard deviation of X, $\sigma = \sqrt{\operatorname{var}(X)} = \sqrt{0.5625} = 0.75$

(d) Coefficient of variation, $C.V. = \dfrac{\sigma}{\mu} \times 100\% = \dfrac{0.75}{0.75} \times 100\% = 100\%$

Geometric ($0 < p < 1$): The geometric random variable is the number of trials until the first success occurs. The pmf of a geometric random variable is given as

$$P(X = x) = pq^{x-1}, \qquad x = 1, 2, 3, \ldots$$

Properties of a Geometric distribution
(a) Mean $\mu = \dfrac{1}{p}$
(b) Variance $\sigma^2 = \dfrac{q}{p^2}$
(c) Standard deviation $\sigma = \sqrt{\dfrac{q}{p^2}} = \dfrac{\sqrt{q}}{p}$
Proof
The probability mass function for the geometric distribution is

$$P(X = x) = pq^{x-1}, \qquad x = 1, 2, 3, \ldots$$

The expectation of X is

$$E(X) = \sum_{x=1}^{\infty} x\,pq^{x-1} = p\sum_{x=1}^{\infty} xq^{x-1} = p\,\frac{d}{dq}\left(\sum_{x=1}^{\infty} q^x\right),$$

where

$$\sum_{x=1}^{\infty} q^x = q + q^2 + q^3 + \cdots = \frac{q}{1-q}$$

(sum of an infinite geometric series, valid since 0 < q < 1). Hence

$$E(X) = p\,\frac{d}{dq}\left(\frac{q}{1-q}\right) = p \cdot \frac{1}{(1-q)^2} = \frac{p}{p^2} = \frac{1}{p}.$$

For the variance, first compute E[X(X - 1)]:

$$E[X(X-1)] = \sum_{x=1}^{\infty} x(x-1)pq^{x-1} = p[2 \cdot 1\,q + 3 \cdot 2\,q^2 + \cdots + n(n-1)q^{n-1} + \cdots]$$
$$= p[2q + 6q^2 + \cdots + n(n-1)q^{n-1} + \cdots] \qquad \text{(i)}$$

Multiplying both sides by q, we have

$$q\,E[X(X-1)] = p[2q^2 + 6q^3 + \cdots + n(n-1)q^n + \cdots] \qquad \text{(ii)}$$

Subtracting Equation (ii) from (i), we get

$$E[X(X-1)] - q\,E[X(X-1)] = p[2q + 4q^2 + 6q^3 + \cdots + 2nq^n + \cdots] = 2pq(1 + 2q + 3q^2 + \cdots + nq^{n-1} + \cdots)$$

Now $1 + 2q + 3q^2 + \cdots = \dfrac{1}{(1-q)^2} = \dfrac{1}{p^2}$. Hence

$$E[X(X-1)](1 - q) = 2pq \cdot \frac{1}{p^2} = \frac{2q}{p}$$
$$E[X(X-1)]\,p = \frac{2q}{p} \quad\Rightarrow\quad E[X(X-1)] = \frac{2q}{p^2}$$

Recall $E[X(X-1)] = E(X^2) - E(X)$, so $E(X^2) = E[X(X-1)] + E(X)$. Therefore,

$$\operatorname{var}(X) = E(X^2) - [E(X)]^2 = \frac{2q}{p^2} + \frac{1}{p} - \left(\frac{1}{p}\right)^2 = \frac{2q + p - 1}{p^2} = \frac{2(1-p) + p - 1}{p^2} = \frac{1-p}{p^2} = \frac{q}{p^2}$$
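A numerical sanity check of E(X) = 1/p and V(X) = q/p², truncating the infinite sums at a point where the tail is negligible (p = .3 is an arbitrary illustrative value):

```python
p = 0.3
q = 1 - p
mean = sum(x * p * q**(x - 1) for x in range(1, 500))
ex2 = sum(x**2 * p * q**(x - 1) for x in range(1, 500))
var = ex2 - mean**2
print(mean, 1 / p)    # both ~3.3333
print(var, q / p**2)  # both ~7.7778
```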

Poisson ($\lambda > 0$): When the number of trials is very large and the probability of success is comparatively very small, the binomial distribution may not be a very suitable model for random experiments with repeated trials. A Poisson distribution is more suitable for this purpose and has a pmf given as

$$p(x) = \begin{cases} \dfrac{e^{-\lambda}\lambda^x}{x!} & x = 0, 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}$$

where e is the base of the natural logarithm (e = 2.71828…), x! is the factorial of x, and $\lambda$ is the expected value of X ($\lambda$ is a positive real number). The Poisson law is widely used in every branch of science and engineering, e.g. the number of cars arriving at a traffic light, the number of claims/losses occurring in a given period of time, the number of customers arriving at a cashier's desk in a bank, etc.

Many practical situations where we want to know the average of a subset of the population fall within the realm of conditional expectations. Examples are: the average of the passing grades of an examination; the average height of exit classes in secondary schools; the average lifespan of electric bulbs which last beyond 40 hours of continuous burning, etc.
If we denote the mean and standard deviation of a Poisson distribution by $\mu$ and $\sigma$ respectively, then
(a) Mean $\mu = \lambda$
(b) Variance $\sigma^2 = \lambda$
(c) Standard deviation $\sigma = \sqrt{\lambda}$.
Proof
The probability mass function of the Poisson distribution is given as

$$P(X = x) = \frac{e^{-\lambda}\lambda^x}{x!}, \qquad x = 0, 1, 2, \ldots$$

The expectation of X is

$$E(X) = \sum_{x=0}^{\infty} x\,\frac{e^{-\lambda}\lambda^x}{x!} = 0 + e^{-\lambda}\lambda + e^{-\lambda}\lambda^2 + \frac{e^{-\lambda}\lambda^3}{2!} + \cdots + \frac{e^{-\lambda}\lambda^n}{(n-1)!} + \cdots$$
$$= \lambda e^{-\lambda}\left(1 + \frac{\lambda}{1!} + \frac{\lambda^2}{2!} + \cdots + \frac{\lambda^{n-1}}{(n-1)!} + \cdots\right) = \lambda e^{-\lambda} e^{\lambda} = \lambda$$

For the variance, first compute

$$E(X^2) = \sum_{x=0}^{\infty} x^2\,\frac{e^{-\lambda}\lambda^x}{x!} = 0 + e^{-\lambda}\lambda + 2e^{-\lambda}\lambda^2 + \frac{3e^{-\lambda}\lambda^3}{2!} + \cdots + \frac{n e^{-\lambda}\lambda^n}{(n-1)!} + \cdots$$
$$= \lambda e^{-\lambda}\left(1 + 2\lambda + \frac{3\lambda^2}{2!} + \frac{4\lambda^3}{3!} + \cdots + \frac{n\lambda^{n-1}}{(n-1)!} + \cdots\right)$$

Writing $n = (n-1) + 1$ in each numerator splits this into two exponential series:

$$= \lambda e^{-\lambda}\left[\left(1 + \lambda + \frac{\lambda^2}{2!} + \cdots + \frac{\lambda^{n-1}}{(n-1)!} + \cdots\right) + \lambda\left(1 + \frac{\lambda}{1!} + \frac{\lambda^2}{2!} + \cdots + \frac{\lambda^{n-2}}{(n-2)!} + \cdots\right)\right]$$
$$= \lambda e^{-\lambda}\left[e^{\lambda} + \lambda e^{\lambda}\right] = \lambda + \lambda^2$$

Therefore,

$$\operatorname{var}(X) = E(X^2) - [E(X)]^2 = \lambda + \lambda^2 - \lambda^2 = \lambda$$

The Poisson distribution is notable for having its mean equal to its variance.
Example 9
One out of a thousand people reacted to a newly manufactured COVID-19 vaccine against a
pandemic disease. If 3000 people were treated with this vaccine, find the probability that:
(i) Exactly two people react to the vaccine;
(ii) At most two people react to the vaccine;
(iii) At least three people react to the vaccine;
(iv) No one reacts to the vaccine;
(v) Not less than one person reacts to the vaccine.
Solution
Let the probability of success be p, then

$$p = \frac{1}{1000} = 0.001, \qquad n = 3000$$

Since p is small and n is comparatively large, the Poisson distribution rather than the binomial distribution will be the better probability model.

$$\lambda = np = 0.001 \times 3000 = 3$$

(i) $P(X = 2) = \dfrac{e^{-\lambda}\lambda^2}{2!} = \dfrac{e^{-3}3^2}{2!} = \dfrac{9}{2e^3} = \dfrac{9}{40.17} \approx 0.224$

or, using Statistical Tables, $P(X = 2) = P(X \le 2) - P(X \le 1) = 0.423 - 0.199 = 0.224$.
Thus, the probability that exactly two people react to the vaccine is 0.224.

(ii) $P(X \le 2) = P(X = 0) + P(X = 1) + P(X = 2) = \dfrac{e^{-3}3^0}{0!} + \dfrac{e^{-3}3^1}{1!} + \dfrac{e^{-3}3^2}{2!}$

$$= \frac{1}{20.09} + \frac{3}{20.09} + \frac{4.5}{20.09} = \frac{8.5}{20.09} \approx 0.423$$

or, using Statistical Tables, $P(X \le 2) = 0.423$.
Therefore, the probability that at most two people react to the vaccine is 0.423.

(iii) $P(X \ge 3) = 1 - P(X \le 2) = 1 - 0.423 = 0.577$
Hence, the probability that at least three people react to the vaccine is 0.577.

(iv) $P(X = 0) = \dfrac{e^{-3}3^0}{0!} = \dfrac{1}{e^3} = \dfrac{1}{20.09} \approx 0.05$
∴ The probability that no one reacts to the vaccine is 0.05.

(v) $P(X \ge 1) = 1 - P(X = 0) = 1 - 0.05 = 0.95$
∴ The probability that not less than one person reacts to the vaccine is 0.95.
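A numerical check of Example 9 (assuming SciPy), with the exact binomial probability shown to confirm the Poisson approximation:

```python
from scipy.stats import binom, poisson

lam = 3                            # lambda = np = 3000 x 0.001
print(poisson.pmf(2, lam))         # (i)   ~0.224
print(poisson.cdf(2, lam))         # (ii)  ~0.423
print(poisson.sf(2, lam))          # (iii) ~0.577; sf(x) = 1 - cdf(x)
print(poisson.pmf(0, lam))         # (iv)  ~0.0498
print(poisson.sf(0, lam))          # (v)   ~0.950
print(binom.pmf(2, 3000, 0.001))   # exact binomial ~0.224, matching (i)
```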

Example 10
Five percent of the television sets produced by a firm on the average are known to be defective. If the sets are sold in consignments of 100 with a guarantee that not more than 2 sets will be defective, calculate:
(i) the mean number of defective products;
(ii) the variance of the number of defective products;
(iii) the standard deviation of the number of defective products;
(iv) the coefficient of variation of the number of defective products;
(v) the probability that a consignment will fail to meet the guaranteed quality.
Solution
Let success be "product is defective", let X be the number of successes and p the probability of success.
That is, n = 100; p = 5% = 0.05; $\lambda = np = 100 \times 0.05 = 5$.
(i) Mean, $\mu = E(X) = \lambda = np = 100 \times 0.05 = 5$
(ii) Variance, $\sigma^2 = \lambda = 5$
(iii) Standard deviation, $\sigma = \sqrt{\operatorname{var}(X)} = \sqrt{5} \approx 2.236$
(iv) Coefficient of variation $= \dfrac{\sigma}{\mu} \times 100\% = \dfrac{2.236}{5} \times 100\% = 44.72\%$
(v) The consignment fails the guarantee if more than 2 sets are defective:

$$P(X > 2) = 1 - P(X \le 2) = 1 - [P(X = 0) + P(X = 1) + P(X = 2)]$$
$$= 1 - \left[\frac{5^0 e^{-5}}{0!} + \frac{5^1 e^{-5}}{1!} + \frac{5^2 e^{-5}}{2!}\right] = 1 - e^{-5}\left(1 + 5 + \frac{25}{2}\right) = 1 - 0.12465 = 0.87535$$
Assignments
1. A family decides to have children until it has three children of the same gender.
Assuming P(B) = P(G) = .5, what is the pmf of X = the number of children in the family?

2. Individuals A and B have red and white dice (both fair). If they each roll until they
obtain five “doubles” (1-1, . . . , 6-6), what is the pmf of X = the total number of times a
die is rolled? What are E(X) and V(X)?

3. It is known that 2 out of every 5 cigarette smokers in a village have cancer of the lungs. Find
the probability that out of a random sample of 8 smokers from the village, 5 will have
cancer of the lungs.
4. If 20% of bolts produced by a machine are defective, determine the probability, correct
to two decimal places, that out of 8 bolts chosen at random
(a) one is defective;
(b) at most two are defective;
(c) exactly four are defective.

5. The binomial probability distribution function of a random variable X is defined by

$$f(x) = {}^4C_x\left(\frac{1}{3}\right)^x\left(\frac{2}{3}\right)^{4-x}, \qquad x = 0, 1, 2, 3, 4$$
Calculate the
(a) Mean
(b) Variance
(c) Standard deviation
(d) Coefficient of variation
