Lecture-4: Binomial R.V Approximations and Conditional Probability Density Functions
Table 4.1: values of the error function erf(x).
Since $x_1 < 0$, from Fig. 4.1(b), the above probability is given by
$$P(2475 \le X \le 2525) = \mathrm{erf}(x_2) - \mathrm{erf}(x_1) = \mathrm{erf}(x_2) + \mathrm{erf}(|x_1|) = 2\,\mathrm{erf}(0.7) = 0.516,$$
where we have used Table 4.1 ($\mathrm{erf}(0.7) = 0.258$).
Fig. 4.1: the standard Gaussian density $\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$ with the interval $(x_1, x_2)$ marked: (a) $x_1 > 0,\ x_2 > 0$; (b) $x_1 < 0,\ x_2 > 0$.
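As a numerical check, here is a minimal Python sketch. It assumes the setup behind this example is a fair coin tossed n = 5000 times (an assumption consistent with the numbers above, since then np = 2500 and √(npq) ≈ 35.36), and it uses the lecture's convention $\mathrm{erf}(x) = \int_0^x \frac{1}{\sqrt{2\pi}} e^{-y^2/2}\, dy$:

```python
import math

def lec_erf(x):
    # Lecture's convention: erf(x) = integral_0^x (1/sqrt(2*pi)) e^{-y^2/2} dy,
    # which equals 0.5 * math.erf(x / sqrt(2)) in terms of the standard erf.
    return 0.5 * math.erf(x / math.sqrt(2))

# Assumed setup (not stated in this excerpt): fair coin, n = 5000 tosses.
n, p = 5000, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

x1 = (2475 - mu) / sigma   # ~ -0.707
x2 = (2525 - mu) / sigma   # ~ +0.707

approx = lec_erf(x2) + lec_erf(abs(x1))                         # Gaussian approximation
exact = sum(math.comb(n, k) for k in range(2475, 2526)) / 2**n  # exact binomial sum

print(f"Gaussian approximation: {approx:.4f}")   # ~ 0.52
print(f"Exact binomial sum:     {exact:.4f}")    # ~ 0.53
```

The small discrepancy against the 0.516 quoted above comes from rounding $x_2 \approx 0.707$ down to the tabulated value 0.7.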
b. The Poisson Approximation
As we have mentioned earlier, for large n, the Gaussian approximation of a binomial r.v. is valid only if p is fixed, i.e., only if np ≫ 1 and npq ≫ 1. What if np is small, or if it does not increase with n?
Obviously that is the case if, for example, p → 0 as n → ∞,
such that np = λ is a fixed number.
Many random phenomena in nature in fact follow this pattern. The total number of calls on a telephone line, the number of claims arriving at an insurance company, etc., tend to follow this type of behavior. Consider random arrivals such as telephone calls
over a line. Let n represent the total number of calls in the
interval (0, T ). From our experience, as T → ∞ we have n → ∞
so that we may assume n = µT . Consider a small interval of
duration ∆ as in Fig. 4.2. Had there been only a single call
coming in, the probability p of that single call occurring in
that interval must depend on its relative size with respect to
T.

Fig. 4.2: n random arrival instants in the interval (0, T), with a small subinterval of duration ∆.
Hence we may assume $p = \frac{\Delta}{T}$. Note that p → 0 as T → ∞.
However, in this case $np = \mu T \cdot \frac{\Delta}{T} = \mu \Delta = \lambda$ is a constant, and the normal approximation is invalid here.
Suppose the interval ∆ in Fig. 4.2 is of interest to us. A call
inside that interval is a “success” (H), whereas one outside is
a “failure” (T ). This is equivalent to the coin tossing
situation, and hence the probability Pn ( k ) of obtaining k
calls (in any order) in an interval of duration ∆ is given by
the binomial p.m.f. Thus
$$P_n(k) = \frac{n!}{(n-k)!\,k!}\, p^k (1-p)^{n-k}. \qquad (6)$$
Substituting $p = \lambda/n$ and expanding the factorials, this can be rewritten as
$$P_n(k) = \frac{\lambda^k}{k!} \left(1 - \frac{1}{n}\right)\left(1 - \frac{2}{n}\right)\cdots\left(1 - \frac{k-1}{n}\right) \frac{(1 - \lambda/n)^n}{(1 - \lambda/n)^k}. \qquad (7)$$
Thus
$$\lim_{n \to \infty,\; p \to 0,\; np = \lambda} P_n(k) = \frac{\lambda^k}{k!}\, e^{-\lambda}, \qquad (8)$$
since the finite products $\left(1 - \frac{1}{n}\right)\left(1 - \frac{2}{n}\right)\cdots\left(1 - \frac{k-1}{n}\right)$ as well as $\left(1 - \frac{\lambda}{n}\right)^k$ tend to unity as $n \to \infty$, and
$$\lim_{n \to \infty} \left(1 - \frac{\lambda}{n}\right)^n = e^{-\lambda}.$$
The right side of (8) represents the Poisson p.m.f., and the Poisson approximation to the binomial r.v. is valid in situations where the binomial parameters n and p diverge to two extremes (n → ∞, p → 0) such that their product np stays constant.
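To see this limit numerically, the following Python sketch (the values n = 1000 and p = 0.002 are illustrative choices, not from the lecture) compares the binomial p.m.f. (6) with its Poisson limit (8):

```python
import math

def binomial_pmf(k, n, p):
    # Exact binomial p.m.f., eq. (6)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    # Poisson p.m.f., the limiting form in eq. (8)
    return math.exp(-lam) * lam**k / math.factorial(k)

# Illustrative values: large n, small p, with np = lambda held fixed
n, p = 1000, 0.002
lam = n * p   # lambda = 2

for k in range(6):
    print(f"k={k}: binomial={binomial_pmf(k, n, p):.5f}  "
          f"poisson={poisson_pmf(k, lam):.5f}")
```

The two columns agree to about three decimal places, and the agreement improves as n grows with np held fixed.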
Example 4.2: Winning a Lottery: Suppose one million
lottery tickets are issued with 100 winning tickets among
them. (a) If a person purchases 100 tickets, what is the
probability of winning? (b) How many tickets should one
buy to be 95% confident of having a winning ticket?
Solution: The probability of buying a winning ticket is
$$p = \frac{\text{No. of winning tickets}}{\text{Total no. of tickets}} = \frac{100}{10^6} = 10^{-4}.$$
Here n = 100, and the number of winning tickets X among the n purchased tickets has an approximate Poisson distribution with parameter $\lambda = np = 100 \times 10^{-4} = 10^{-2}$. Thus
$$P(X = k) = e^{-\lambda}\, \frac{\lambda^k}{k!},$$
and (a) the probability of winning is $P(X \ge 1) = 1 - P(X = 0) = 1 - e^{-\lambda} \approx 0.01$.
(b) In this case we need $P(X \ge 1) \ge 0.95$, and $P(X \ge 1) = 1 - e^{-\lambda} \ge 0.95$ implies $\lambda \ge \ln 20 \approx 3$. Since $\lambda = np = n \times 10^{-4}$, this requires $n \ge 30{,}000$: one must buy about 30,000 tickets to be 95% confident of having a winning ticket.
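Both parts can be checked with a few lines of Python (a sketch that simply re-evaluates the Poisson formulas above):

```python
import math

p = 100 / 1_000_000           # probability that a single ticket wins

# (a) 100 tickets: lambda = np = 0.01
lam = 100 * p
print(1 - math.exp(-lam))     # ~ 0.00995, i.e. about 0.01

# (b) smallest n with P(X >= 1) = 1 - exp(-n*p) >= 0.95
n_needed = math.ceil(math.log(20) / p)
print(n_needed)               # 29958, i.e. about 30,000 tickets
```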
For example, with λ = 2 the probability of five or more occurrences is
$$P(X \ge 5) = 1 - \sum_{k=0}^{4} e^{-\lambda}\, \frac{\lambda^k}{k!} = 1 - e^{-2}\left(1 + 2 + 2 + \frac{4}{3} + \frac{2}{3}\right) = 0.052.$$
Conditional probabilities over an interval are obtained by integrating the conditional p.d.f.:
$$P(x_1 < X(\xi) \le x_2 \mid B) = \int_{x_1}^{x_2} f_X(x \mid B)\, dx. \qquad (17)$$
Example 4.4: Refer to Example 3.2. Toss a coin, with X(T) = 0 and X(H) = 1. Suppose B = {H}. Determine $F_X(x \mid B)$.
Solution: From Example 3.2, $F_X(x)$ has the form shown in Fig. 4.3(a). We need $F_X(x \mid B)$ for all x.
For x < 0, $\{X(\xi) \le x\} = \emptyset$, so that $\{(X(\xi) \le x) \cap B\} = \emptyset$ and $F_X(x \mid B) = 0$.
Fig. 4.3: (a) the unconditional d.f. $F_X(x)$, with a jump of size q at x = 0 and reaching 1 at x = 1; (b) the conditional d.f. $F_X(x \mid B)$, with a single jump at x = 1.
For $0 \le x < 1$, $\{X(\xi) \le x\} = \{T\}$, so that $\{(X(\xi) \le x) \cap B\} = \{T\} \cap \{H\} = \emptyset$ and $F_X(x \mid B) = 0$.
For $x \ge 1$, $\{X(\xi) \le x\} = \Omega$, so that $\{(X(\xi) \le x) \cap B\} = \Omega \cap B = B$ and
$$F_X(x \mid B) = \frac{P(B)}{P(B)} = 1$$
(see Fig. 4.3(b)).
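A tiny Monte Carlo sketch confirms the step shape of $F_X(x \mid B)$; the value P(H) = 0.5 is an assumption here, since Example 3.2 is not reproduced in this excerpt:

```python
import random

random.seed(1)
p_H = 0.5     # assumed P(H); Example 3.2 is not shown in this excerpt
samples = [1 if random.random() < p_H else 0 for _ in range(100_000)]

# Condition on B = {H}, i.e. keep only the outcomes with X = 1
given_B = [x for x in samples if x == 1]

for x in (-0.5, 0.5, 1.0):
    F = sum(1 for v in given_B if v <= x) / len(given_B)
    print(f"F_X({x} | B) ~ {F}")   # prints 0.0, 0.0, 1.0
```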
Example 4.5: Given FX ( x ), suppose B = {X (ξ ) ≤ a}. Find f X ( x | B ).
Solution: We will first determine FX ( x | B ). From (11) and B
as given above, we have
$$F_X(x \mid B) = \frac{P\{(X \le x) \cap (X \le a)\}}{P(X \le a)}. \qquad (18)$$
For $x < a$, $(X \le x) \cap (X \le a) = (X \le x)$, so that
$$F_X(x \mid B) = \frac{P(X \le x)}{P(X \le a)} = \frac{F_X(x)}{F_X(a)}. \qquad (19)$$
For $x \ge a$, $(X \le x) \cap (X \le a) = (X \le a)$, so that $F_X(x \mid B) = 1$. Thus
$$F_X(x \mid B) = \begin{cases} \dfrac{F_X(x)}{F_X(a)}, & x < a, \\ 1, & x \ge a, \end{cases} \qquad (20)$$
and hence
$$f_X(x \mid B) = \frac{d}{dx} F_X(x \mid B) = \begin{cases} \dfrac{f_X(x)}{F_X(a)}, & x < a, \\ 0, & \text{otherwise.} \end{cases} \qquad (21)$$
Fig. 4.4: for $B = \{X(\xi) \le a\}$: (a) the conditional d.f. $F_X(x \mid B)$ compared with $F_X(x)$, reaching 1 at x = a; (b) the conditional p.d.f. $f_X(x \mid B)$ compared with $f_X(x)$, supported on x < a.
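As a quick check of (21), this Python sketch takes $F_X$ to be the standard Gaussian d.f. (an illustrative choice, not one made in the lecture) and verifies that $f_X(x \mid B)$ integrates to unity:

```python
import math

def f_X(x):
    # Standard Gaussian p.d.f. (illustrative choice)
    return math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

def F_X(x):
    # Standard Gaussian d.f.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

a, dx = 0.5, 0.001
# Riemann sum of f_X(x | B) = f_X(x) / F_X(a) over x < a, eq. (21)
total = sum(f_X(-10 + i * dx) / F_X(a) * dx
            for i in range(int((a + 10) / dx)))
print(total)   # ~ 1.0, as any p.d.f. must
```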
Fig. 4.5: the conditional p.d.f. $f_X(x \mid B)$ compared with $f_X(x)$, concentrated between a and b.
We can use the conditional p.d.f. together with Bayes' theorem to update our a priori knowledge about the probability of events in the presence of new observations. Ideally, any new information should be used to update our knowledge. As we see in the next example, the conditional p.d.f. together with Bayes' theorem allows systematic updating. For any two events A and B, Bayes' theorem gives
$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}. \qquad (27)$$
Setting $B = \{x_1 < X(\xi) \le x_2\}$ in (27),
$$P(A \mid x_1 < X(\xi) \le x_2) = \frac{F_X(x_2 \mid A) - F_X(x_1 \mid A)}{F_X(x_2) - F_X(x_1)}\, P(A) = \frac{\int_{x_1}^{x_2} f_X(x \mid A)\, dx}{\int_{x_1}^{x_2} f_X(x)\, dx}\, P(A). \qquad (28)$$
Further, let $x_1 = x$, $x_2 = x + \varepsilon$, $\varepsilon > 0$, so that in the limit as $\varepsilon \to 0$,
$$\lim_{\varepsilon \to 0} P\{A \mid x < X(\xi) \le x + \varepsilon\} = P(A \mid X(\xi) = x) = \frac{f_X(x \mid A)}{f_X(x)}\, P(A), \qquad (29)$$
or
$$f_{X \mid A}(x \mid A) = \frac{P(A \mid X = x)\, f_X(x)}{P(A)}. \qquad (30)$$
Integrating (30) over all x, and noting that the left side integrates to unity, we obtain the continuous version of the total probability theorem:
$$P(A) = \int_{-\infty}^{+\infty} P(A \mid X = x)\, f_X(x)\, dx. \qquad (32)$$
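Here is a brief numerical sketch of (30) and (32); the setup, X uniform on (0, 1) with P(A | X = x) = x, is an illustrative assumption, not taken from the lecture:

```python
# Illustrative setup (not from the lecture): X ~ uniform(0, 1),
# and the event A occurs with probability P(A | X = x) = x.
def f_X(x):
    return 1.0 if 0 <= x <= 1 else 0.0

def P_A_given_x(x):
    return x

# Total probability, eq. (32): P(A) = integral of P(A | X = x) f_X(x) dx
dx = 1e-4
P_A = sum(P_A_given_x(i * dx) * f_X(i * dx) * dx for i in range(int(1 / dx)))
print(P_A)                                # ~ 0.5

# Bayes' rule for densities, eq. (30): f_{X|A}(x) = P(A | X = x) f_X(x) / P(A)
x = 0.75
print(P_A_given_x(x) * f_X(x) / P_A)      # ~ 1.5, i.e. f_{X|A}(x) = 2x
```

Observing A makes larger values of x more likely, which is exactly the updating of a priori knowledge described above.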
$$P(A \mid P = p) = p^k q^{n-k}, \qquad (34)$$

Fig. 4.6: the p.d.f. $f_P(p)$, uniform over the interval (0, 1).
and using (32) we get
$$P(A) = \int_0^1 P(A \mid P = p)\, f_P(p)\, dp = \int_0^1 p^k (1-p)^{n-k}\, dp = \frac{(n-k)!\, k!}{(n+1)!}. \qquad (35)$$
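The beta integral in (35) is easy to verify numerically; in this minimal sketch the values of n and k are illustrative:

```python
import math

def beta_integral(k, n, steps=100_000):
    # Riemann sum of p^k (1 - p)^(n - k) over (0, 1)
    dp = 1.0 / steps
    return sum((i * dp)**k * (1 - i * dp)**(n - k) * dp for i in range(steps))

n, k = 10, 3   # illustrative values
closed_form = math.factorial(n - k) * math.factorial(k) / math.factorial(n + 1)
print(beta_integral(k, n))   # ~ 0.000758
print(closed_form)           # 7! 3! / 11! = 0.000757...
```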