Coxian random variables often arise in the following manner. Suppose that an item
must go through m stages of treatment to be cured. However, suppose that after each
stage there is a probability that the item will quit the program. If we suppose that
the amounts of time that it takes the item to pass through the successive stages are
independent exponential random variables, and that the probability that an item that
has just completed stage n quits the program is (independent of how long it took to
go through the n stages) equal to r(n), then the total time that an item spends in the
program is a Coxian random variable.
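To make the construction concrete, here is a minimal simulation sketch in Python; the stage rates and the quit probabilities r(n) below are illustrative assumptions, not values from the text:

```python
import random

def coxian_sample(rates, quit_probs):
    """One draw of a Coxian random variable: the item moves through
    exponential stages, quitting after stage n with probability quit_probs[n]."""
    total = 0.0
    for rate, r in zip(rates, quit_probs):
        total += random.expovariate(rate)  # exponential time to pass this stage
        if random.random() < r:            # item quits after completing the stage
            break
    return total

# Illustrative parameters: m = 3 stages with rates mu_n and quit probabilities r(n);
# r(3) = 1.0 so the process always terminates after the final stage.
samples = [coxian_sample([2.0, 1.0, 0.5], [0.3, 0.2, 1.0]) for _ in range(10_000)]
print(sum(samples) / len(samples))         # empirical mean of the Coxian time
```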
(c) If f (·) is o(h) and g(·) is o(h), then so is f (·) + g(·). This follows since
lim_{h→0} [f(h) + g(h)]/h = lim_{h→0} f(h)/h + lim_{h→0} g(h)/h = 0 + 0 = 0
(e) From (c) and (d) it follows that any finite linear combination of functions, each of
which is o(h), is o(h).
In order for the function f (·) to be o(h) it is necessary that f (h)/h go to zero as h
goes to zero. But if h goes to zero, the only way for f (h)/h to go to zero is for f (h) to
go to zero faster than h does. That is, for h small, f (h) must be small compared with h.
The o(h) notation can be used to make statements more precise. For instance, if
X is continuous with density f and failure rate function λ(t), then the approximate
statements P{X ∈ (t, t + h)} ≈ f(t)h and P{X ∈ (t, t + h) | X > t} ≈ λ(t)h can be
stated precisely as P{X ∈ (t, t + h)} = f(t)h + o(h) and P{X ∈ (t, t + h) | X > t} = λ(t)h + o(h).
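As a quick numerical illustration of the first statement (a sketch assuming X is exponential with rate 1, so f and F are available in closed form):

```python
import math

lam, t = 1.0, 2.0
f = lambda x: lam * math.exp(-lam * x)   # density of an exponential(1)
F = lambda x: 1 - math.exp(-lam * x)     # its distribution function

# P{X in (t, t+h)} - f(t)h is o(h): its ratio to h should vanish as h -> 0
for h in (0.1, 0.01, 0.001, 0.0001):
    err = (F(t + h) - F(t)) - f(t) * h
    print(h, err / h)                    # ratio shrinks roughly in proportion to h
```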
Because the Laplace transform uniquely determines the distribution, we can thus con-
clude that N (t) is Poisson with mean λt.
To show that N(s + t) − N(s) is also Poisson with mean λt, fix s and let N_s(t) =
N(s + t) − N(s) equal the number of events in the first t time units when we start
our count at time s. It is now straightforward to verify that the counting process
{N_s(t), t ≥ 0} satisfies all the axioms for being a Poisson process with rate λ. Conse-
quently, by our preceding result, we can conclude that N_s(t) is Poisson distributed with
mean λt.
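A simulation sketch of this conclusion (λ, s, and t are illustrative choices): generate the process from exponential interarrival times and check that the mean and variance of N(s + t) − N(s) are both close to λt, as a Poisson distribution requires.

```python
import random

lam, s, t, trials = 2.0, 5.0, 3.0, 20_000

def count_in_window(lam, start, end):
    """Count events of a rate-lam Poisson process in the interval (start, end]."""
    clock, n = 0.0, 0
    while True:
        clock += random.expovariate(lam)   # next exponential interarrival time
        if clock > end:
            return n
        if clock > start:
            n += 1

counts = [count_in_window(lam, s, s + t) for _ in range(trials)]
mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
print(mean, var, lam * t)   # mean and variance should both be close to 6.0
```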
Remarks
(i) The result that N (t), or more generally N (t + s) − N (s), has a Poisson distribution
is a consequence of the Poisson approximation to the binomial distribution (see
Section 2.2.4). To see this, subdivide the interval [0, t] into k equal parts where k
is very large (Figure 5.1). Now it can be shown using axiom (iv) of Definition 5.2
that as k increases to ∞ the probability of having two or more events in any of
the k subintervals goes to 0. Hence, N (t) will (with a probability going to 1) just
equal the number of subintervals in which an event occurs. However, by stationary
and independent increments this number will have a binomial distribution with
parameters k and p = λt/k + o(t/k). Hence, by the Poisson approximation to the
binomial we see by letting k approach ∞ that N (t) will have a Poisson distribution
with mean equal to
lim_{k→∞} k[λ(t/k) + o(t/k)] = λt + lim_{k→∞} t · o(t/k)/(t/k)
= λt
by using the definition of o(h) and the fact that t/k → 0 as k → ∞ (a numerical
check of this limit appears after these remarks).
(ii) Because the distribution of N (t + s) − N (s) is the same for all s, it follows that
the Poisson process has stationary increments.
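The binomial-to-Poisson limit in remark (i) is easy to check numerically (a sketch with illustrative λ, t, and a fixed count n):

```python
from math import comb, exp, factorial

lam, t, n = 1.5, 2.0, 4
poisson = exp(-lam * t) * (lam * t) ** n / factorial(n)  # target Poisson probability

for k in (10, 100, 1000, 10_000):
    p = lam * t / k                                      # per-subinterval success probability
    binom = comb(k, n) * p ** n * (1 - p) ** (k - n)     # binomial(k, p) probability of n
    print(k, binom, poisson)                             # binomial column approaches Poisson
```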
Figure 5.1 The interval [0, t] partitioned into k equal subintervals.
where the last two equations followed from independent and stationary increments.
Therefore, from Equation (5.12) we conclude that T2 is also an exponential random
variable with mean 1/λ and, furthermore, that T2 is independent of T1 . Repeating the
same argument yields the following.
Proposition 5.1 T_n, n = 1, 2, . . . , are independent identically distributed exponential
random variables having mean 1/λ.
Remark The proposition should not surprise us. The assumption of stationary and
independent increments is basically equivalent to asserting that, at any point in time,
the process probabilistically restarts itself. That is, the process from any point on is
independent of all that has previously occurred (by independent increments), and also
has the same distribution as the original process (by stationary increments). In other
words, the process has no memory, and hence exponential interarrival times are to be
expected.
Another quantity of interest is Sn , the arrival time of the nth event, also called the
waiting time until the nth event. It is easily seen that
S_n = Σ_{i=1}^{n} T_i ,   n ≥ 1
and hence from Proposition 5.1 and the results of Section 2.2 it follows that Sn has a
gamma distribution with parameters n and λ. That is, the probability density of Sn is
given by
f_{S_n}(t) = λe^{−λt} (λt)^{n−1}/(n − 1)! ,   t ≥ 0    (5.13)
Equation (5.13) may also be derived by noting that the nth event will occur prior to or
at time t if and only if the number of events occurring by time t is at least n. That is,
N(t) ≥ n ⇔ S_n ≤ t
Hence,
F_{S_n}(t) = P{S_n ≤ t} = P{N(t) ≥ n} = Σ_{j=n}^{∞} e^{−λt} (λt)^j / j!
which, upon differentiating term by term (the sum telescopes), yields
f_{S_n}(t) = λe^{−λt} (λt)^{n−1}/(n − 1)!
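The identity F_{S_n}(t) = P{N(t) ≥ n} can be verified directly (a sketch assuming scipy is available; the values of n, λ, and t are arbitrary):

```python
from scipy.stats import gamma, poisson

n, lam, t = 4, 2.0, 1.7
lhs = gamma(a=n, scale=1 / lam).cdf(t)   # gamma(n, lam) distribution function F_{S_n}(t)
rhs = poisson(lam * t).sf(n - 1)         # P{N(t) >= n} for N(t) ~ Poisson(lam * t)
print(lhs, rhs)                          # the two values agree
```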
Example 5.13 Suppose that people immigrate into a territory at a Poisson rate λ = 1
per day.
(a) What is the expected time until the tenth immigrant arrives?
(b) What is the probability that the elapsed time between the tenth and the eleventh
arrival exceeds two days?
Solution:
(a) E[S_{10}] = 10/λ = 10 days.
(b) P{T_{11} > 2} = e^{−2λ} = e^{−2} ≈ 0.135.
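Both answers are easy to confirm by simulation (a sketch; 10^5 replications):

```python
import random
from math import exp

lam, trials = 1.0, 100_000
s10 = [sum(random.expovariate(lam) for _ in range(10)) for _ in range(trials)]
print(sum(s10) / trials)                          # close to E[S_10] = 10 days
t11 = [random.expovariate(lam) for _ in range(trials)]
print(sum(x > 2 for x in t11) / trials, exp(-2))  # both close to 0.135
```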
Proposition 5.1 also gives us another way of defining a Poisson process. Suppose
we start with a sequence {T_n, n ≥ 1} of independent identically distributed exponential
random variables each having mean 1/λ. Now let us define a counting process by saying
that the nth event of this process occurs at time
S_n ≡ T_1 + T_2 + · · · + T_n
The resultant counting process {N(t), t ≥ 0}∗ will be Poisson with rate λ.
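This construction translates directly into code (a sketch; λ and t are illustrative): generate the T_n, accumulate the partial sums S_n, and let N(t) count how many fall in [0, t]. The empirical distribution of N(t) then matches the Poisson(λt) mass function.

```python
import random
from math import exp, factorial

lam, t, trials = 1.0, 4.0, 50_000

def N(t, lam):
    """Number of partial sums S_n = T_1 + ... + T_n that fall in [0, t]."""
    s, n = 0.0, 0
    while True:
        s += random.expovariate(lam)   # the next interarrival time T_{n+1}
        if s > t:
            return n
        n += 1

counts = [N(t, lam) for _ in range(trials)]
for j in range(8):
    empirical = sum(c == j for c in counts) / trials
    pmf = exp(-lam * t) * (lam * t) ** j / factorial(j)
    print(j, round(empirical, 4), round(pmf, 4))   # the two columns agree closely
```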
Remark Another way of obtaining the density function of Sn is to note that because
Sn is the time of the nth event,
P{t < S_n < t + h} = P{N(t) = n − 1, one event in (t, t + h)} + o(h)
= P{N(t) = n − 1} P{one event in (t, t + h)} + o(h)
= e^{−λt} (λt)^{n−1}/(n − 1)! · [λh + o(h)] + o(h)
= λe^{−λt} (λt)^{n−1}/(n − 1)! · h + o(h)
where the first equality uses the fact that the probability of 2 or more events in (t, t + h)
is o(h). If we now divide both sides of the preceding equation by h and then let h → 0,
we obtain
f_{S_n}(t) = λe^{−λt} (λt)^{n−1}/(n − 1)!
Let N_1(t) and N_2(t) denote respectively the number of type I and type II events
occurring in [0, t]. Note that N(t) = N_1(t) + N_2(t).
Proposition 5.2 {N_1(t), t ≥ 0} and {N_2(t), t ≥ 0} are both Poisson processes having
respective rates λp and λ(1 − p). Furthermore, the two processes are independent.
Proof. It is easy to verify that {N_1(t), t ≥ 0} is a Poisson process with rate λp by
verifying that it satisfies Definition 5.3.
• N_1(0) = 0 follows from the fact that N(0) = 0.
• It is easy to see that {N_1(t), t ≥ 0} inherits the stationary and independent increment
properties of the process {N(t), t ≥ 0}. This is true because the distribution of
the number of type I events in an interval can be obtained by conditioning on the
number of events in that interval, and the distribution of this latter quantity depends
only on the length of the interval and is independent of what has occurred in any
nonoverlapping interval.
• P{N_1(h) = 1} = P{N_1(h) = 1 | N(h) = 1} P{N(h) = 1}
  + P{N_1(h) = 1 | N(h) ≥ 2} P{N(h) ≥ 2}
  = p(λh + o(h)) + o(h)
  = λph + o(h)
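A simulation sketch of the proposition (λ, p, and t are illustrative): classify each event of a rate-λ process as type I with probability p, then compare the means of the two counts with λpt and λ(1 − p)t, and check that their covariance is near zero.

```python
import random

lam, p, t, trials = 3.0, 0.4, 2.0, 30_000
pairs = []
for _ in range(trials):
    clock, n1, n2 = 0.0, 0, 0
    while True:
        clock += random.expovariate(lam)   # next event of the rate-lam process
        if clock > t:
            break
        if random.random() < p:
            n1 += 1                        # classified as a type I event
        else:
            n2 += 1                        # classified as a type II event
    pairs.append((n1, n2))

m1 = sum(a for a, _ in pairs) / trials
m2 = sum(b for _, b in pairs) / trials
cov = sum(a * b for a, b in pairs) / trials - m1 * m2
print(m1, lam * p * t)          # close to 2.4, the mean of N1(t)
print(m2, lam * (1 - p) * t)    # close to 3.6, the mean of N2(t)
print(cov)                      # near 0, consistent with independence
```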
Example 5.15 Suppose nonnegative offers to buy an item that you want to sell arrive
according to a Poisson process with rate λ. Assume that each offer is the value of a
continuous random variable having density function f (x). Once the offer is presented
to you, you must either accept it or reject it and wait for the next offer. We suppose that
you incur costs at a rate c per unit time until the item is sold, and that your objective is
to maximize your expected total return, where the total return is equal to the amount
received minus the total cost incurred. Suppose you employ the policy of accepting the
first offer that is greater than some specified value y. (Such a type of policy, which we
call a y-policy, can be shown to be optimal.) What is the best value of y? What is the
maximal expected net return?
Solution: Let us compute the expected total return when you use the y-policy, and
then choose the value of y that maximizes this quantity.
Let X denote the value of
a random offer, and let F̄(x) = P{X > x} = ∫_x^∞ f(u) du be its tail distribution
function. Because each offer will be greater than y with probability F̄(y), it follows
that such offers occur according to a Poisson process with rate λ F̄(y). Hence, the
time until an offer is accepted is an exponential random variable with rate λ F̄(y).
Letting R(y) denote the total return from the policy that accepts the first offer that
is greater than y, we have
E[R(y)] = E[accepted offer] − c E[time until an offer is accepted]
= ∫_y^∞ x f(x) dx / F̄(y) − c/(λF̄(y))    (5.14)
Differentiation yields
(d/dy) E[R(y)] = 0 ⇔ −F̄(y) y f(y) + f(y) ∫_y^∞ x f(x) dx − (c/λ) f(y) = 0
or, after dividing through by f(y),
y ∫_y^∞ f(x) dx = ∫_y^∞ x f(x) dx − c/λ
or
∫_y^∞ (x − y) f(x) dx = c/λ
It is not difficult to show that there is a unique value of y that satisfies the preceding.
Hence, the optimal policy is the one that accepts the first offer that is greater than
y ∗ , where y ∗ is such that
∫_{y∗}^∞ (x − y∗) f(x) dx = c/λ
Putting y = y ∗ in Equation (5.14) shows that the maximal expected net return is
E[R(y∗)] = (1/F̄(y∗)) ( ∫_{y∗}^∞ (x − y∗ + y∗) f(x) dx − c/λ )
= (1/F̄(y∗)) ( ∫_{y∗}^∞ (x − y∗) f(x) dx + y∗ ∫_{y∗}^∞ f(x) dx − c/λ )
= (1/F̄(y∗)) ( c/λ + y∗ F̄(y∗) − c/λ )
= y∗
Thus, the optimal critical value is also the maximal expected net return. To understand
why this is so, let m be the maximal expected net return, and note that when an offer
is rejected the problem basically starts anew and so the maximal expected additional
net return from then on is m. But this implies that it is optimal to accept an offer
if and only if it is at least as large as m, showing that m is the optimal critical
value.
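As a concrete illustration (an assumed special case, not from the text): if offers are exponential with rate μ, then ∫_y^∞ (x − y) μe^{−μx} dx = e^{−μy}/μ, so the optimality equation has the closed-form solution y∗ = (1/μ) ln(λ/(μc)), which can be checked numerically.

```python
from math import exp, log
from scipy.integrate import quad

mu, lam, c = 0.5, 2.0, 1.0               # illustrative offer rate, arrival rate, cost rate
y_star = log(lam / (mu * c)) / mu        # closed-form y* for exponential(mu) offers

# Check that y* satisfies the optimality equation: the integral should equal c / lam
integral, _ = quad(lambda x: (x - y_star) * mu * exp(-mu * x), y_star, float("inf"))
print(y_star, integral, c / lam)         # integral and c/lam both equal 0.5
```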
It follows from Proposition 5.2 that if each of a Poisson number of individuals is
independently classified into one of two possible groups with respective probabilities p
and 1 − p, then the number of individuals in each of the two groups will be independent
Poisson random variables. Because this result easily generalizes to the case where the
classification is into any one of r possible groups, we have the following application
to a model of employees moving about in an organization.
Example 5.16 Consider a system in which individuals at any time are classified as
being in one of r possible states, and assume that an individual changes states in accor-
dance with a Markov chain having transition probabilities Pi j , i, j = 1, . . . , r . That
is, if an individual is in state i during a time period then, independently of its previous
states, it will be in state j during the next time period with probability Pi j . The individ-
uals are assumed to move through the system independently of each other. Suppose that
the numbers of people initially in states 1, 2, . . . , r are independent Poisson random
variables with respective means λ1 , λ2 , . . . , λr . We are interested in determining the
joint distribution of the numbers of individuals in states 1, 2, . . . , r at some time n.
Solution: For fixed i, let N_j(i), j = 1, . . . , r denote the number of those indi-
viduals, initially in state i, that are in state j at time n. Now each of the (Poisson
distributed) number of people initially in state i will, independently of each other,
be in state j at time n with probability P^n_{ij}, where P^n_{ij} is the n-stage transition
probability for the Markov chain having transition probabilities P_{ij}. Hence, the
N_j(i), j = 1, . . . , r will be independent Poisson random variables with respective
means λ_i P^n_{ij}, j = 1, . . . , r. Because the sum of independent Poisson random vari-
ables is itself a Poisson random variable, it follows that the number of individuals
in state j at time n, namely Σ_{i=1}^{r} N_j(i), will be independent Poisson random
variables with respective means Σ_i λ_i P^n_{ij}, for j = 1, . . . , r.
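A short numerical sketch of this result (the 3-state chain, the initial means λ_i, and n are illustrative assumptions):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],      # illustrative transition probabilities P_ij, r = 3
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
lam = np.array([5.0, 3.0, 2.0])     # initial Poisson means lambda_1, lambda_2, lambda_3
n = 4

Pn = np.linalg.matrix_power(P, n)   # n-stage transition probabilities P^n_ij
means = lam @ Pn                    # sum_i lambda_i * P^n_ij for each state j at time n
print(means)
```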
Example 5.17 (The Coupon Collecting Problem) There are m different types of
coupons. Each time a person collects a coupon it is, independently of ones previously