


Applied Probability
Concepts & Examples

Johar M. Ashfaque

1 What is a Random Variable?


Let X be a function from the sample space Ω into the set of reals R. In order for X to be a RV, an extra requirement is needed. What is it?
The necessary and sufficient condition is: for each x ∈ R, the set

P(x) := {ω ∈ Ω : X(ω) ≤ x}

must be an event, so that a probability P ∈ R≥0, the probability of that event taking place, is well-defined on P(x).
The point here is that for an arbitrary function X : Ω → R, the set P(x) is not necessarily an event (in that case such an X is not a RV). Only when X guarantees that P(x) is an event for every x ∈ R is X a RV.
There are two types of random variables: discrete and continuous. Discrete random variables can only take particular, specified values in R, whereas continuous random variables can take any value in an interval of R.
A random variable has either an associated probability distribution (discrete random variable) or prob-
ability density function (continuous random variable).

1.1 Summary

Ω: The Sample Space (the set of all outcomes for the named event).
ω: An element of the sample space.
R: The set of real numbers.
X: A random variable if and only if, for each x ∈ R,

P(x) := {ω ∈ Ω : X(ω) ≤ x}

is an event, so that its probability P ∈ R≥0 is well-defined on P(x).

2 What is Conditional Probability?


Given any two events A and B, the probability of event A taking place given that event B has already taken place is defined by

P(A|B) = P(A ∩ B)/P(B),   P(B) > 0.
That is the probability of event A given that event B has already taken place is defined as the probability
of both event A and event B taking place divided by the probability of event B taking place.
If event A and event B are two independent events then

P (A ∩ B) = P (A)P (B)

and
P (A|B) = P (A).
This can be interpreted as the definition of independence.

3 The Law of Total Probability


Let B1, B2, ... be mutually exclusive events with ∪_{i=1}^{∞} Bi = Ω, where Ω is the sample space, and for any i ≠ j

Bi ∩ Bj = ∅,

that is, Bi and Bj are disjoint. Then, for any event A,

P(A) = Σ_{i=1}^{∞} P(Bi)P(A|Bi).

4 Conditional Probability: The Discrete Case


Suppose that X and Y are discrete RVs. For any x, y ∈ R consider the events {X = x} and {Y = y}
with
P (Y = y) > 0.
Define the conditional probability mass function of X given that Y = y by

pX|Y(x|y) = p(x, y)/pY(y).

4.1 Example

Suppose that the joint mass function p(x, y) of X and Y is

p(1, 1) = 0.5, p(1, 2) = 0.1, p(2, 1) = 0.1, p(2, 2) = 0.3.

Let us calculate pX|Y (x|Y = 1).


To begin with let us compute pY(1), which is

pY(1) = Σ_x p(x, 1) = p(1, 1) + p(2, 1) = 0.6.

Now we can compute pX|Y(1|1) and pX|Y(2|1) very easily, which can be found to be

pX|Y(1|1) = p(1, 1)/pY(1) = 5/6

and

pX|Y(2|1) = p(2, 1)/pY(1) = 1/6.
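As a quick numerical check, the short Python sketch below recomputes these conditional probabilities from the joint mass function (the dictionary encoding of p(x, y) is only for illustration):

joint = {(1, 1): 0.5, (1, 2): 0.1, (2, 1): 0.1, (2, 2): 0.3}  # p(x, y)

def p_Y(y):
    # marginal mass function of Y: sum the joint mass over x
    return sum(p for (xx, yy), p in joint.items() if yy == y)

def p_X_given_Y(x, y):
    # conditional mass function p_{X|Y}(x|y) = p(x, y) / p_Y(y)
    return joint[(x, y)] / p_Y(y)

print(p_X_given_Y(1, 1))  # 0.8333... = 5/6
print(p_X_given_Y(2, 1))  # 0.1666... = 1/6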

5 Conditional Probability: The Continuous Case


Suppose that X and Y are continuous RVs with joint density f(x, y). Since P(Y = y) = 0 for every y when Y is continuous, we condition instead on the marginal density satisfying fY(y) > 0.
Define the conditional probability density function of X given that Y = y by

fX|Y(x|y) = f(x, y)/fY(y) = f(x, y) / ∫_{−∞}^{∞} f(x, y) dx,

and the associated conditional expectation is given by

E(X|Y = y) = ∫_{−∞}^{∞} x fX|Y(x|y) dx.

5.1 Example 1

Suppose the joint density of X and Y is given by


f(x, y) = 6xy(2 − x − y) for 0 < x < 1, 0 < y < 1, and f(x, y) = 0 otherwise.

Let us compute the conditional expectation of X given Y = y where 0 < y < 1.


If y ∈ (0, 1) we have, for x ∈ (0, 1), that

fX|Y(x|y) = f(x, y)/fY(y) = f(x, y) / ∫_{−∞}^{∞} f(x, y) dx = 6xy(2 − x − y) / ∫_0^1 6xy(2 − x − y) dx = 6x(2 − x − y)/(4 − 3y).

Then

E(X|Y = y) = (5 − 4y)/(8 − 6y)

where y ∈ (0, 1).
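A small symbolic check of this calculation (an added sketch using sympy; the symbol names are illustrative):

import sympy as sp

x, y = sp.symbols("x y", positive=True)
f = 6 * x * y * (2 - x - y)                       # joint density on (0,1) x (0,1)
f_Y = sp.integrate(f, (x, 0, 1))                  # marginal density of Y
f_cond = sp.simplify(f / f_Y)                     # conditional density f_{X|Y}(x|y)
E_cond = sp.simplify(sp.integrate(x * f_cond, (x, 0, 1)))

print(f_cond)   # an expression equivalent to 6x(2 − x − y)/(4 − 3y)
print(E_cond)   # an expression equivalent to (5 − 4y)/(8 − 6y)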

5.2 Example 2

Consider the triangle in the plane R² whose vertices are (0, 0), (0, 1) and (1, 0).
Let X and Y be continuous random variables for which the joint density is given by

f(x, y) = 2 for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x + y ≤ 1, and f(x, y) = 0 otherwise.

Let us compute the conditional expectation of X given Y = y where 0 ≤ y < 1. This condition tells us that x ∈ [0, 1 − y], and so

fX|Y(x|y) = 2 / ∫_0^{1−y} 2 dx = 1/(1 − y).

Then

E(X|Y = y) = ∫_0^{1−y} x(1 − y)^{−1} dx = (1 − y)/2.

6 Discrete-Time Markov Chains & Transition Matrix


Let X1, X2, ... be independent RVs, each taking the values −1 and 1 with probabilities q = 1 − p and p respectively, where p ∈ (0, 1).
Define the random walk

Sn = a + Σ_{i=1}^{n} Xi

as the position of the walk after n steps, having started at S0 = a (in most cases a = 0).
For p = 1/2 we have q = 1 − p = 1/2, and so we would have a symmetric random walk.

6.1 Example: The Gambler’s Ruin

A man is saving to buy a BMW at a cost of N units of money. He starts with a units (0 ≤ a ≤ N) and tries to win the remainder by gambling.
He flips a fair coin repeatedly; if it comes up heads his friend pays him one unit, and for tails he pays his friend one unit. He keeps playing this game until either he runs out of money or he has won enough to buy the car.
This is an example of a symmetric random walk, with absorbing barriers at 0 and N.
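For a fair coin the probability of reaching N before 0, starting from a, is a/N. The Monte Carlo sketch below (the values a = 3, N = 10 are illustrative) estimates this by simulating the symmetric random walk:

import random

def success_probability(a, N, trials=50_000):
    # Estimate P(reach N before 0 | start at a) for the symmetric walk.
    wins = 0
    for _ in range(trials):
        s = a
        while 0 < s < N:
            s += 1 if random.random() < 0.5 else -1   # fair coin: +1 or -1
        wins += (s == N)
    return wins / trials

print(success_probability(a=3, N=10))   # should be close to 3/10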

6.2 Random Stochastic Process

A stochastic (random) process is a collection of random variables

{Xt : t ∈ T }

indexed by some set T ⊂ R, where T can be thought of as time.

If T ⊂ R is a discrete set, we speak of a discrete-time process {Xt}, t ∈ T.
If T happens to be continuous, we speak of a continuous-time process {Xt}, t ∈ T.
If the state space S of the process is finite then {Xt} is called a chain.

6.3 Discrete Time Markov Chain (DTMC)

{Xn } is a DTMC if it satisfies for all n



P{Xn+1 = in+1 | X0 = i0, ..., Xn−1 = in−1, Xn = in} = P{Xn+1 = in+1 | Xn = in}
holds for arbitrary ik ∈ S for k = 0, 1, ..., n + 1, n ≥ 1. This means that the probability of any future
behaviour of the process when its present state is known is not altered by the additional knowledge of its
past behaviour. This is the Markovian property.

6.4 Example: Random Walk on the Cube

Let S = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)} where each state represents
one corner of a cube in the obvious way in R3 :

(Diagram: the eight states drawn as the corners of the unit cube, with an edge between two corners when they differ in exactly one coordinate.)

We wish to write down the transition matrix of the Markov chain in which, from any corner and independently of the past, the next transition is equally likely to be to any of the three adjacent corners, each with probability 1/3.
With the states ordered (0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1), the transition matrix is

(0,0,0):   0    1/3  1/3   0    1/3   0    0    0
(0,0,1):  1/3    0    0   1/3    0   1/3   0    0
(0,1,0):  1/3    0    0   1/3    0    0   1/3   0
(0,1,1):   0    1/3  1/3   0     0    0    0   1/3
(1,0,0):  1/3    0    0    0     0   1/3  1/3   0
(1,0,1):   0    1/3   0    0    1/3   0    0   1/3
(1,1,0):   0     0   1/3   0    1/3   0    0   1/3
(1,1,1):   0     0    0   1/3    0   1/3  1/3   0
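This matrix can also be generated programmatically. The sketch below builds it from the rule that two corners are adjacent exactly when they differ in one coordinate:

from itertools import product
import numpy as np

states = list(product((0, 1), repeat=3))            # the 8 corners of the cube
P = np.zeros((8, 8))
for i, s in enumerate(states):
    for j, t in enumerate(states):
        if sum(a != b for a, b in zip(s, t)) == 1:   # adjacent corners
            P[i, j] = 1 / 3

print(P.sum(axis=1))   # each row sums to 1, as required of a transition matrix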

6.5 Another Example

Consider the Markov chain with three states S = {1, 2, 3} with

P = [ 1/2  1/4  1/4
      1/3   0   2/3
      1/2  1/2   0  ]

then the state transition diagram for this chain is as shown below:

Figure 1: The State Transition Diagram

Furthermore, given

P(X1 = 1) = P(X1 = 2) = 1/4,

we can find P(X1 = 3, X2 = 2, X3 = 1). Firstly, we will have to compute P(X1 = 3). We proceed as follows:

P(X1 = 3) = 1 − P(X1 = 1) − P(X1 = 2) = 1 − 1/4 − 1/4 = 1/2.

Then

P(X1 = 3, X2 = 2, X3 = 1) = P(X1 = 3) · p3,2 · p2,1 = (1/2)(1/2)(1/3) = 1/12.

7 Regular Transition Matrices


A transition matrix is regular if some power of the transition matrix contains all positive entries.
For example

A = [ 0.75  0.25  0
      0     0.5   0.5
      0.6   0.4   0   ]

which squares to

A² = [ 0.5625  0.3125  0.125
       0.3     0.45    0.25
       0.45    0.35    0.2   ].

Since all the entries in A² are positive, matrix A is regular. But

B = [ 0.5  0  0.5
      0    1  0
      0    0  1   ]

is not regular since

B² = [ 0.25  0  0.75
       0     1  0
       0     0  1    ]

and if zeros occur in identical places in both B and B², then they appear in the same places for all higher powers. So no power of matrix B contains all positive entries.
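A small numerical sketch of this regularity test (the cut-off of 20 powers is an arbitrary illustrative choice):

import numpy as np

def is_regular(P, max_power=20):
    # Look for a power of P in which every entry is strictly positive.
    Q = np.eye(len(P))
    for _ in range(max_power):
        Q = Q @ P
        if np.all(Q > 0):
            return True
    return False

A = np.array([[0.75, 0.25, 0.0], [0.0, 0.5, 0.5], [0.6, 0.4, 0.0]])
B = np.array([[0.5, 0.0, 0.5], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(is_regular(A))   # True: already positive at the second power
print(is_regular(B))   # False: zeros persist in every power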

8 Absorbing Markov Chains


State i of a Markov chain is an absorbing state if pii = 1.
A Markov chain is an absorbing chain if
• the chain has at least one absorbing state
• it is possible to go from any non-absorbing state to an absorbing state.
For example

[ 1    0    0
  0.3  0.5  0.2
  0    0    1   ]

where p11 = 1 = p33, and we deduce that state 1 and state 3 are absorbing states. This is an example of a Markov chain that is an absorbing chain, since state 2 is non-absorbing and it is possible to go from state 2 to state 1 and to state 3.
However,

[ 0.6  0  0.4  0
  0    1  0    0
  0.9  0  0.1  0
  0    0  0    1 ]

is a Markov chain that is not absorbing: states 2 and 4 are absorbing, but from state 1 or state 3 the chain can never reach them.

9 Steady-State Distribution
We wish to find the steady state distribution for the following chain

Figure 2: The State Transition Diagram

we find

(1/2)π1 + (1/3)π2 + (1/2)π3 = π1
(1/4)π1 + (1/2)π3 = π2
(1/4)π1 + (2/3)π2 = π3
π1 + π2 + π3 = 1

from which we deduce that

π3 = (10/9)π2 = (5/8)π1

and

π1 = (16/9)π2,

giving

π1 = 16/35,   π2 = 9/35,   π3 = 10/35.
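These values can be checked numerically by solving πP = π together with the normalisation condition; the sketch below uses the transition matrix read off from the equations above:

import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/3, 0.0, 2/3],
              [1/2, 1/2, 0.0]])

# (P^T - I) pi = 0 together with the extra equation sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                              # approximately [16/35, 9/35, 10/35]
print(np.array([16, 9, 10]) / 35)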

10 More Examples of DTMC
A mobile robot randomly moves along a circular path divided into three sectors. At every sampling instant the robot moves clockwise with probability p and counter-clockwise with probability 1 − p. During a sampling interval the robot covers the length of one sector.
• Define a discrete Markov chain for the random walk of the robot.
• Study the stationary probability that the robot is localized in each sector for p ∈ [0, 1].

The state X is the sector occupied by the robot, X ∈ {1, 2, 3}. The transition matrix reads

P = [ 0      p     1−p
      1−p    0     p
      p      1−p   0   ]

For p = 0 the robot moves deterministically counter-clockwise, cycling through the three sectors. Each state of the Markov chain is then periodic with period 3; therefore, limiting (steady-state) probabilities do not exist. The p = 1 case is similar to the p = 0 case.
If p ∈ (0, 1) the Markov chain is irreducible, aperiodic and finite; therefore, the steady-state probabilities exist.
The unique steady-state probability vector is given by

π = (1/3, 1/3, 1/3),

which is the solution of the system of equations

π = πP,   Σ_{i=1}^{3} πi = 1.
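For p ∈ (0, 1) this can be checked by raising the transition matrix to a large power, since the rows of Pⁿ then converge to the steady-state vector. A short sketch with a few illustrative values of p:

import numpy as np

def P_robot(p):
    return np.array([[0.0,   p,     1 - p],
                     [1 - p, 0.0,   p    ],
                     [p,     1 - p, 0.0  ]])

for p in (0.2, 0.5, 0.8):
    Pn = np.linalg.matrix_power(P_robot(p), 200)
    print(p, Pn[0])   # first row of P^200, approximately [1/3, 1/3, 1/3]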

11 Reversible Markov Chains


A Markov chain with stationary distribution π is reversible if and only if

πi pij = πj pji

for all states i and j (the detailed balance equations).

11.1 Random Walk on a Graph

Recall that a graph is a collection of vertices where the i-th vertex vi is directly joined by at most one edge each to di other vertices (called its neighbours).
If

σ = Σ_i di < ∞,

then a random walk on the vertices of the graph goes from vi to any one of its neighbours with equal probability 1/di. Such a walk is reversible. To see this, note that the detailed balance equations take the form

πi/di = πj/dj

for neighbours i and j, and the solution such that

Σ_i πi = 1

is obviously

πi = di/σ.

11.2 The Ehrenfest Model

M balls distributed in two urns. Each time, pick a ball at random, move it from one urn to the other
one. Let Xn be the number of balls in urn 1. Prove that the chain has a reversible distribution.
The non-zero transition probabilities are

pi,i−1 = i/M,   pi,i+1 = (M − i)/M.

We are seeking solutions to

πi pi,i+1 = πi+1 pi+1,i   ⇒   πi (M − i)/M = πi+1 (i + 1)/M,

leading to

πi/π0 = [M(M − 1) · · · (M − i + 1)] / [i(i − 1) · · · 1] = C(M, i),

the binomial coefficient.
Note.

Σ_i πi = 1 = π0 Σ_i C(M, i)   ⇒   π0 = 1/2^M.

Note. Using pi,i−1 = i/M and replacing i → i + 1 we arrive at pi+1,i = (i + 1)/M.
The chain is irreducible and periodic.
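A short numerical sketch verifying that πi = C(M, i)/2^M satisfies the detailed balance equations (M = 10 is an illustrative choice):

from math import comb

M = 10
pi = [comb(M, i) / 2**M for i in range(M + 1)]

def p(i, j):
    # non-zero transition probabilities of the Ehrenfest chain
    if j == i - 1:
        return i / M
    if j == i + 1:
        return (M - i) / M
    return 0.0

print(abs(sum(pi) - 1.0) < 1e-12)                                  # normalised
print(all(abs(pi[i] * p(i, i + 1) - pi[i + 1] * p(i + 1, i)) < 1e-12
          for i in range(M)))                                       # detailed balance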

12 Continuous-Time Markov Chains


We will consider a random process
{X(t), t ∈ [0, ∞)}.
We assume that we have a countable state space
S ⊂ {0, 1, 2, ...}.
If X(0) = i, then X(t) stays in state i for a random amount of time say T1 where T1 is a continuous
random variable. At time T1 , the process jumps to a new state j and will spend a random amount of
time T2 in that state and so on. The random variables T1 , T2 , ... follow an exponential distribution. The
probability of going from state i to state j is denoted pij .
In particular, suppose that at time t we know that X(t) = i. To make any prediction about the future, it should not matter how long the process has already spent in state i; the holding times must therefore have a "memoryless" property. The exponential distribution is the only continuous distribution with the memoryless property. Thus, the time that a continuous-time Markov chain spends in state i, known as the holding time, will have an Exponential(λi) distribution, where λi is a non-negative real number. We further assume that the λi are bounded, that is, there exists a real number M < ∞ such that λi < M for all i ∈ S.
A continuous-time Markov chain has two components. The first is the discrete-time Markov chain, known as the jump chain or embedded Markov chain, which gives the transition probabilities pij. The second is, for each state, a holding time parameter λi that controls the time spent in that state.

12.1 An Example

Consider a Markov chain with two states S = {0, 1}. Assume the holding time parameters are given by
λ0 = λ1 = λ > 0. That is, the time that the chain spends in each state before going to the other state
has an Exponential(λ) distribution.
• Draw the state diagram of the embedded (jump) chain.
• Find the transition matrix P (t). You may find the following identities useful

sinh(x) = (e^x − e^{−x})/2 = Σ_{n=0}^{∞} x^{2n+1}/(2n + 1)!

cosh(x) = (e^x + e^{−x})/2 = Σ_{n=0}^{∞} x^{2n}/(2n)!

12.2 The Solution: Part 1

There are two states in the chain and none of them are absorbing as λ0 = λ1 = λ > 0. The jump chain
must therefore have the following transition matrix

P = [ 0  1
      1  0 ]
where the state-transition diagram of the embedded (jump) chain is

Figure 3: The State Transition Diagram

12.3 The Solution: Part 2

The Markov chain has a simple structure. Let's find P00(t). By definition
P00 (t) = P (X(t) = 0|X(0) = 0), ∀t ∈ [0, ∞).
Assuming that X(0) = 0, X(t) will be 0 if and only if we have an even number of transitions in the time
interval [0, t]. The time between each transition is an Exponential(λ) random variable. Therefore, the transitions
occur according to a Poisson process with parameter λ.
We find

P00(t) = Σ_{n=0}^{∞} e^{−λt} (λt)^{2n}/(2n)!
       = e^{−λt} Σ_{n=0}^{∞} (λt)^{2n}/(2n)!
       = e^{−λt} (e^{λt} + e^{−λt})/2
       = 1/2 + (1/2) e^{−2λt},

and

P01(t) = 1 − P00(t) = 1/2 − (1/2) e^{−2λt}.

P11 (t) = P00 (t).
P10 (t) = P01 (t).
Therefore, the transition matrix for any t ≥ 0 is given by

P(t) = [ 1/2 + (1/2)e^{−2λt}   1/2 − (1/2)e^{−2λt}
         1/2 − (1/2)e^{−2λt}   1/2 + (1/2)e^{−2λt} ]

Letting t go to infinity, the transition matrix reads

lim_{t→∞} P(t) = [ 1/2  1/2
                   1/2  1/2 ].
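As a numerical sanity check, summing the even-order Poisson terms reproduces the closed form for P00(t) (the values of λ and t below are illustrative):

import math

def P00(t, lam, terms=60):
    # Probability of an even number of Poisson(lam*t) transitions in [0, t].
    return sum(math.exp(-lam * t) * (lam * t) ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

lam, t = 1.5, 2.0
print(P00(t, lam))                          # series value
print(0.5 + 0.5 * math.exp(-2 * lam * t))   # closed form; matches the series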

12.4 CTMC Transition Matrix: Properties

The transition matrix satisfies the following properties:


1. P (0) is equal to the identity matrix, P (0) = I;
2. The rows of the transition matrix must sum to 1
Σ_{j∈S} pij(t) = 1, ∀t ≥ 0;

3. For all s, t ≥ 0,
P (s + t) = P (s)P (t).

13 CTMC: Limiting Distribution Examples

13.1 Example 1

Consider a continuous-time Markov chain X(t) that has the jump chain

Figure 4: The State Transition Diagram

Assume the holding parameters are given by

λ1 = 2, λ2 = 1, λ3 = 3.

We wish to find the limiting distribution of X(t).


First notice that the chain is irreducible. The transition matrix of the jump chain reads

P = [ 0    1    0
      0    0    1
      1/2  1/2  0 ]

the stationary distribution of the jump chain being

π̃ = (1/5)(1, 2, 2).

This is obtained by solving the following system of equations:

π̃3 = 2π̃1
π̃2 = 2π̃1
π̃2 = π̃3
Σ_{i=1}^{3} π̃i = 1

The limiting distribution of X(t) can then be obtained by using

πj = (π̃j/λj) / Σ_{k∈S} (π̃k/λk).

Hence, we conclude that

π = (1/19)(3, 12, 4)

is the limiting distribution of X(t).
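A quick numerical check of this example, combining the jump-chain stationary distribution with the holding time parameters:

import numpy as np

pi_tilde = np.array([1, 2, 2]) / 5      # stationary distribution of the jump chain
lam = np.array([2.0, 1.0, 3.0])         # holding time parameters

w = pi_tilde / lam
pi = w / w.sum()
print(pi)                               # approximately [3/19, 12/19, 4/19]
print(np.array([3, 12, 4]) / 19)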

13.2 Example 2

Consider a continuous-time Markov chain X(t) that has the jump chain

Figure 5: The State Transition Diagram

Assume the holding parameters are given by

λ1 = 2, λ2 = 3, λ3 = 4.

We wish to find the limiting distribution of X(t).


First notice that the chain is irreducible. The transition matrix of the jump chain reads

P = [ 0    1/2  1/2
      0    1/3  2/3
      1/2  0    1/2 ]

the stationary distribution of the jump chain being

π̃ = (1/15)(4, 3, 8).

The limiting distribution of X(t) can then be obtained by using

πj = (π̃j/λj) / Σ_{k∈S} (π̃k/λk),

which gives

π = (1/5)(2, 1, 2)

as the limiting distribution of X(t).

14 Continuous-Time Markov Chains: The Generator Matrix
For a continuous-time Markov chain, define the generator matrix G as the matrix whose (i, j)-th entry is given by

gij = λi pij  if i ≠ j,
gij = −λi     if i = j.

14.1 Example

A chain with two states S = {0, 1} and λ0 = λ1 = λ > 0. We found that the transition matrix for any t ≥ 0 is given by

P(t) = [ 1/2 + (1/2)e^{−2λt}   1/2 − (1/2)e^{−2λt}
         1/2 − (1/2)e^{−2λt}   1/2 + (1/2)e^{−2λt} ].
We wish

1. to find the generator matrix G;
2. to show that for any t ≥ 0

P′(t) = P(t)G = GP(t).

By definition

g00 = −λ0 = −λ
g11 = −λ1 = −λ
g01 = λ0 p01 = λ
g10 = λ1 p10 = λ.

Thus, the generator matrix is given by

G = [ −λ   λ
       λ  −λ ].

To show that for any t ≥ 0

P′(t) = P(t)G = GP(t),

we note that

P′(t) = [ −λe^{−2λt}   λe^{−2λt}
           λe^{−2λt}  −λe^{−2λt} ].

We also have that

P(t)G = [ −λe^{−2λt}   λe^{−2λt}
           λe^{−2λt}  −λe^{−2λt} ] = GP(t).
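Numerically, P(t) is the matrix exponential of tG, and the identity P′(t) = GP(t) = P(t)G can be checked with scipy (the values of λ and t below are illustrative):

import numpy as np
from scipy.linalg import expm

lam, t = 1.3, 0.7
G = np.array([[-lam, lam],
              [lam, -lam]])

P_t = expm(t * G)                                  # transition matrix at time t
closed_form = 0.5 * np.array([[1 + np.exp(-2 * lam * t), 1 - np.exp(-2 * lam * t)],
                              [1 - np.exp(-2 * lam * t), 1 + np.exp(-2 * lam * t)]])
print(np.allclose(P_t, closed_form))        # True: matches the formula above
print(np.allclose(G @ P_t, P_t @ G))        # True: forward and backward equations agree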

14.2 The Forward and Backward Equations

• The forward equations state that

P′(t) = P(t)G.

• The backward equations state that

P′(t) = GP(t).

14.3 The Limiting Distribution Via Generator Matrix

Consider a continuous-time Markov chain X(t) with the state space S and the generator matrix G. The
probability distribution π on S is a stationary distribution for X(t) if and only if it satisfies

πG = 0.

14.4 Example 1

The generator matrix for the continuous-time Markov chain is given by

G = [ −λ   λ
       λ  −λ ].

We wish to find the stationary distribution for this chain by solving

πG = 0.

We have that π0 = π1 and, by imposing that Σ_i πi = 1, we obtain

π = (1/2)(1, 1).

14.5 Example 2

Consider the Markov chain X(t) that has the following jump chain

Figure 6: The State Transition Diagram

and assume λ1 = 2, λ2 = 1 and λ3 = 3.


1. We wish to find the generator matrix for this chain.
2. We wish to find the limiting distribution for X(t) by solving πG = 0.

The jump chain is irreducible and its transition matrix is given by

P = [ 0    1    0
      0    0    1
      1/2  1/2  0 ]

and by definition we have the generator matrix

G = [ −2    2    0
       0   −1    1
      3/2  3/2  −3 ].

Now by solving

πG = 0

we will be able to find the limiting distribution. We obtain the following system of equations:

−2π1 + (3/2)π3 = 0
2π1 − π2 + (3/2)π3 = 0
π2 − 3π3 = 0

Then, imposing Σ_i πi = 1, we find that

π = (1/19)(3, 12, 4)

is the limiting distribution of X(t).
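A numerical check of the same computation, solving πG = 0 together with the normalisation Σ_i πi = 1 as a least-squares problem:

import numpy as np

G = np.array([[-2.0, 2.0, 0.0],
              [0.0, -1.0, 1.0],
              [1.5, 1.5, -3.0]])

A = np.vstack([G.T, np.ones(3)])      # rows encode (pi G)^T = 0 and sum(pi) = 1
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                              # approximately [3/19, 12/19, 4/19]
print(np.array([3, 12, 4]) / 19)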

A Axioms of Probability
Given
• P (Ω) = 1
• 0 ≤ P (A) ≤ 1 for any event A ⊆ Ω
• if A1, ..., An are events and Ai ∩ Aj = ∅ for all i ≠ j, then

P(A1 ∪ A2 ∪ ... ∪ An) = Σ_{i=1}^{n} P(Ai).

A.1 P (Ac ) = 1 − P (A)

A and Ac are disjoint, that is A ∩ Ac = ∅. Then

A ∪ Ac = Ω ⇒ P(A) + P(Ac) = P(Ω) = 1,

from which it follows that

P(Ac) = 1 − P(A).

A.2 P (∅) = 0

P (∅) = P (Ωc ) = 1 − P (Ω) = 1 − 1 = 0.

A.3 P (B) ≥ P (A) given A ⊆ B

If A ⊆ B then
B = A ∪ (B ∩ Ac )
and
A ∩ (B ∩ Ac ) = ∅.
Then

P(B) = P(A) + P(B ∩ Ac) ≥ P(A),

since P(B ∩ Ac) ≥ 0.

A.4 P (A ∪ B) = P (A) + P (B) − P (A ∩ B)

A = (A ∩ B c ) ∪ (A ∩ B)
B = (B ∩ Ac ) ∪ (A ∩ B)
⇒ A ∪ B = (A ∩ B c ) ∪ (A ∩ B) ∪ (B ∩ Ac ) ⇒ P (A ∪ B) = P (A ∩ B c ) + P (A ∩ B) + P (B ∩ Ac )
But
P (A ∩ B c ) = P (A) − P (A ∩ B)
and
P (B ∩ Ac ) = P (B) − P (A ∩ B)
which leads to
P (A ∪ B) = P (A) + P (B) − P (A ∩ B).

B Poisson Distribution Through Examples


A Poisson random variable is the number of successes that result from a Poisson experiment. The
probability distribution of a Poisson random variable is called a Poisson distribution.
Given the mean number of successes λ, we have the Poisson formula, which states that if we were to conduct a Poisson experiment in which the average number of successes within a given region is λ, then the Poisson probability is

P(n; λ) = e^{−λ} λ^n / n!,   λ > 0,
where n is the actual number of successes.
A cumulative Poisson distribution refers to the probability that the Poisson random variable is greater
than some specified lower limit and less than some specified upper limit.

B.1 Example 1

Each customer who enters Rebecca’s clothing store will purchase a suit with probability p > 0. If the
number of customers entering the store is Poisson distributed with parameter λ > 0
• we want to know the probability that Rebecca does not sell any suits:

Let X be the number of suits that Rebecca sells and N the number of customers who enter the store. Then

P{X = 0} = Σ_{n=0}^{∞} P{X = 0|N = n} P{N = n}
         = Σ_{n=0}^{∞} P{X = 0|N = n} e^{−λ} λ^n / n!
         = Σ_{n=0}^{∞} (1 − p)^n e^{−λ} λ^n / n!
         = e^{−λ} Σ_{n=0}^{∞} (λ(1 − p))^n / n!
         = e^{−λ} e^{λ(1−p)}
         = e^{−λp}.

• we want to know the probability that Rebecca sells k suits:


First note that, given N = n, X has a binomial distribution with parameters n and p:

P{X = k|N = n} = C(n, k) p^k (1 − p)^{n−k}   if n ≥ k,

and

P{X = k|N = n} = 0   if n < k.

Then

P{X = k} = Σ_{n=0}^{∞} P{X = k|N = n} P{N = n}
         = Σ_{n=0}^{∞} P{X = k|N = n} e^{−λ} λ^n / n!
         = Σ_{n=k}^{∞} C(n, k) p^k (1 − p)^{n−k} e^{−λ} λ^n / n!
         = (e^{−λ} (λp)^k / k!) Σ_{n=k}^{∞} (λ(1 − p))^{n−k} / (n − k)!
         = (e^{−λ} (λp)^k / k!) e^{λ(1−p)}
         = e^{−λp} (λp)^k / k!
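In other words X is itself Poisson with mean λp (the thinning property). A short simulation sketch of this conclusion (λ = 4 and p = 0.3 are illustrative values):

import math
import numpy as np

rng = np.random.default_rng(0)
lam, p, trials = 4.0, 0.3, 200_000

n_customers = rng.poisson(lam, size=trials)     # customers per day, Poisson(lam)
suits_sold = rng.binomial(n_customers, p)       # each customer buys with probability p
print((suits_sold == 0).mean())                 # empirical P(X = 0)
print(math.exp(-lam * p))                       # theoretical e^{-lam p}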

B.2 Example 2

Suppose the average number of lions seen on a one-day safari is 5. What is the probability that tourists
will see fewer than four lions on the next one-day safari?
This is a Poisson experiment in which λ = 5 and n = 0, 1, 2, 3 and we wish to calculate the sum of the
probabilities P (0; 5), P (1; 5), P (2; 5), P (3; 5).
This leads to

P(n ≤ 3; 5) = Σ_{n=0}^{3} P(n; 5) = 0.2650.
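A one-line check of this cumulative probability:

import math

def poisson_pmf(n, lam):
    return math.exp(-lam) * lam**n / math.factorial(n)

print(sum(poisson_pmf(n, 5) for n in range(4)))   # approximately 0.2650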

B.3 Example 3

The number of baseball games rained out is a Poisson process with arrival rate of 5 per 30 days.
• We wish to find the probability that there are more than 5 rained out games in 15 days.
• We wish to find the probability that there are no rained out games in 7 days.
First note that

λ = 5/30 = 1/6 (per day),

so the total number of rained-out games in t days is Poisson(t/6). The probability that there are more than 5 rained-out games in 15 days therefore reduces to

1 − P(n ≤ 5; 5/2) = 0.042.

Similarly, the probability that there are no rained-out games in 7 days is

e^{−7/6} ≈ 0.311.
