Poisson Process
5.1 Introduction
(i) A radioactive system emits a stream of $\alpha$-particles, which reach a Geiger counter. A Geiger counter is a defective counter, which gets locked for some time when a radioactive particle strikes it. While an unlocked counter is capable of recording the arrival of particles, a locked counter does not record the event. Let X(t) be the number of recordings by a Geiger counter during the time-interval (0,t]. Suppose that the half-life of the particle (the time taken till the mass of the matter reduces to its half by the process of disintegration) is large as compared to t. Then X(t) is the sum of a large number of independent Bernoulli trials.
(ii) A large pond has a large number of fish. Let X(t) be the number of fish caught in the time-interval (0,t]. The chance of catching a fish is independent of the number of fish already caught, and the catching of a fish is a Bernoulli trial with probability p (say) of success. Further, the chances of catching a fish at the next time point are the same irrespective of the time elapsed since the last success. In other words, there is "no premium for waiting".
(iii) Recall the queuing system having one server who serves the customers on a "first come, first served" basis. Let X(t) be the length of the queue at any time t, when the customers join the system at a constant rate and hence the inter-arrival times are i.i.d. random variables.
All the above processes represent a situation where the number of trials is large and each trial is independently subject to a Bernoulli law. The probability of one occurrence of the event in a very small interval is constant, while two or more occurrences in the same interval have a probability of smaller order.
Let $\{N(t),\ t \in T = [0,\infty)\}$ be a non-negative counting process with discrete state space $S = \{0,1,2,\dots\}$, where $N(t)$ denotes the number of occurrences of a random, rare event $E$ in the interval $(0,t]$. Further, let
\[
p_n(t) = P(N(t) = n), \qquad \sum_{n \ge 0} p_n(t) = 1,
\]
i.e., $p_n(t)$ is a function of time. Under certain conditions, called the postulates or assumptions of the process, $N(t)$ is a Poisson process.
Postulates of Poisson process: There are three basic postulates or assumptions of the Poisson process:
(i) Independence: $N(t)$ is Markovian, i.e., the number of occurrences of the event in $(t, t+h]$ is independent of the number of occurrences in $(0,t]$.
(ii) Homogeneity in time: $p_n(t)$ depends only on the length $t$ of the time interval $(0,t]$ and not on where the interval is situated on the time axis.
(iii) Regularity (Orderliness): In an interval $(t, t+h]$ of infinitesimal length $h$, exactly one event can occur with probability $\lambda h + o(h)$, and the probability of more than one event is of order $o(h)$, i.e.,
\[
p_1(h) = \lambda h + o(h), \qquad \lim_{h \to 0} \frac{o(h)}{h} = 0,
\]
and
\[
\sum_{k \ge 2} p_k(h) = o(h).
\]
Since $\sum_{n \ge 0} p_n(h) = 1$, we get $p_0(h) = 1 - p_1(h) - \sum_{n \ge 2} p_n(h)$, so that
\[
p_0(h) = 1 - \lambda h + o(h) = P(\text{no event in } (t, t+h]).
\]
Under these postulates, we shall show that
\[
p_n(t) = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}, \qquad n = 0, 1, 2, \dots
\]
To derive this, divide $(0, t+h]$ into the intervals $(0,t]$ and $(t, t+h]$.
For $n \ge 1$,
\[
p_n(t+h) = \sum_{k \ge 0} P(n-k \text{ events occur in } (0,t])\, P(k \text{ events occur in } (t, t+h]),
\]
where the terms with $k \ge 2$ together contribute only $o(h)$. Hence
\[
p_n(t+h) = p_n(t)(1 - \lambda h) + p_{n-1}(t)\,\lambda h + o(h)
\]
\[
\frac{p_n(t+h) - p_n(t)}{h} = -\lambda p_n(t) + \lambda p_{n-1}(t) + \frac{o(h)}{h}
\]
\[
\lim_{h \to 0} \frac{p_n(t+h) - p_n(t)}{h} = -\lambda p_n(t) + \lambda p_{n-1}(t) + \lim_{h \to 0} \frac{o(h)}{h}
\]
\[
\Rightarrow p_n'(t) = -\lambda p_n(t) + \lambda p_{n-1}(t). \tag{4.1}
\]
For $n = 0$,
\[
p_0(t+h) = p_0(t)\, p_0(h) = p_0(t)(1 - \lambda h + o(h))
\]
\[
\lim_{h \to 0} \frac{p_0(t+h) - p_0(t)}{h} = -\lambda p_0(t)
\]
\[
\frac{p_0'(t)}{p_0(t)} = -\lambda
\]
\[
p_0(t) = c_1 e^{-\lambda t}, \qquad c_1 = e^{c}.
\]
Initially, at $t = 0$, $p_0(0) = 1 \Rightarrow c_1 = 1$, so
\[
p_0(t) = e^{-\lambda t}. \tag{4.2}
\]
For $n = 1$, equation (4.1) gives
\[
p_1'(t) + \lambda p_1(t) = \lambda p_0(t) = \lambda e^{-\lambda t}.
\]
The integrating factor (I.F.) of this differential equation is $e^{\lambda t}$, so
\[
\frac{d}{dt}\left( p_1(t)\, e^{\lambda t} \right) = \lambda
\]
\[
p_1(t)\, e^{\lambda t} = \lambda t + c.
\]
At $t = 0$, $p_1(0) = 0 \Rightarrow c = 0$, hence
\[
p_1(t) = \lambda t\, e^{-\lambda t}.
\]
Let
\[
p_{n-1}(t) = e^{-\lambda t}\, \frac{(\lambda t)^{n-1}}{(n-1)!}. \tag{4.3}
\]
Then (4.1) gives
\[
p_n'(t) + \lambda p_n(t) = \lambda p_{n-1}(t) = \lambda e^{-\lambda t}\, \frac{(\lambda t)^{n-1}}{(n-1)!}.
\]
Multiplying both sides by $e^{\lambda t}$, we have
\[
\frac{d}{dt}\left( e^{\lambda t} p_n(t) \right) = \frac{\lambda^n t^{n-1}}{(n-1)!}.
\]
Integrating both sides with respect to $t$,
\[
e^{\lambda t} p_n(t) = \frac{(\lambda t)^n}{n!} + c.
\]
Since $p_n(0) = 0 \Rightarrow c = 0$, by induction
\[
p_n(t) = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}, \qquad n = 1, 2, 3, \dots
\]
An alternate and more elegant technique of obtaining the result is the generating function technique. Define
\[
P(s,t) = \sum_{n \ge 0} p_n(t)\, s^n = \sum_{n \ge 0} P(N(t) = n)\, s^n = E\!\left(s^{N(t)}\right),
\]
so,
\[
P(s,0) = \sum_{n \ge 0} p_n(0)\, s^n = p_0(0) + s\, p_1(0) + s^2 p_2(0) + \dots = p_0(0) = 1,
\]
since $N(0) = 0$. Also,
\[
\frac{\partial}{\partial t} P(s,t) = \frac{\partial}{\partial t} \sum_{n \ge 0} p_n(t)\, s^n = \sum_{n \ge 0} p_n'(t)\, s^n = p_0'(t) + \sum_{n \ge 1} p_n'(t)\, s^n. \tag{4.4}
\]
From (4.1), we have $p_n'(t) = -\lambda p_n(t) + \lambda p_{n-1}(t),\ n \ge 1$.
Multiplying this equation by $s^n$ and summing over all possible values of $n$, we have
\[
\sum_{n \ge 1} p_n'(t)\, s^n = -\lambda \sum_{n \ge 1} p_n(t)\, s^n + \lambda \sum_{n \ge 1} p_{n-1}(t)\, s^n, \tag{4.5}
\]
where
\[
\sum_{n \ge 1} p_n'(t)\, s^n = \frac{\partial}{\partial t} P(s,t) - p_0'(t) \quad \text{(from (4.4))},
\]
\[
\sum_{n \ge 1} p_n(t)\, s^n = P(s,t) - p_0(t),
\]
\[
\sum_{n \ge 1} p_{n-1}(t)\, s^n = s\, P(s,t).
\]
Hence
\[
\frac{\partial}{\partial t} P(s,t) - p_0'(t) = -\lambda\left( P(s,t) - p_0(t) \right) + \lambda s\, P(s,t)
\]
\[
\frac{\partial}{\partial t} P(s,t) = \lambda(s-1)\, P(s,t) + \left( p_0'(t) + \lambda p_0(t) \right) = \lambda(s-1)\, P(s,t),
\]
since $p_0'(t) = -\lambda p_0(t)$. Therefore
\[
P(s,t) = c\, e^{\lambda(s-1)t}.
\]
As $P(s,0) = 1$, we get $c = 1$ and
\[
P(s,t) = e^{\lambda(s-1)t} = e^{-\lambda t} \sum_{n \ge 0} \frac{(\lambda t s)^n}{n!}.
\]
Thus
\[
p_n(t) = \text{coefficient of } s^n \text{ in } P(s,t) = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}, \qquad n \ge 0,
\]
i.e., $N(t) \sim \text{Pois}(\lambda t)$, $n = 0, 1, 2, \dots$ The characteristic function of $N(t)$ is
\[
\phi(s) = E\!\left( e^{isN(t)} \right) = \sum_{n \ge 0} e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}\, e^{isn} = e^{\lambda t (e^{is} - 1)}.
\]
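As a quick numerical check of this result (not from the text; all parameter values are illustrative assumptions), the sketch below builds $N(t)$ exactly as the postulates describe: the interval $(0,t]$ is cut into many slots of length $h$, each carrying an event with probability $\lambda h$, independently. The slot count is Binomial$(t/h,\ \lambda h)$, which for small $h$ should be close to the Poisson pmf $e^{-\lambda t}(\lambda t)^n/n!$.

```python
# Minimal sketch: a Poisson count as the limit of many Bernoulli slots.
# Parameter values (lam, t, h) are illustrative assumptions.
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
lam, t, h = 2.0, 3.0, 1e-4             # rate, horizon, slot width
n_slots = int(t / h)                   # number of Bernoulli slots in (0, t]
# N(t) = number of slots containing an event ~ Binomial(n_slots, lam*h)
samples = rng.binomial(n_slots, lam * h, size=100_000)

for n in range(10):
    empirical = np.mean(samples == n)
    poisson_pmf = exp(-lam * t) * (lam * t) ** n / factorial(n)
    print(f"n={n}  simulated {empirical:.4f}  Poisson pmf {poisson_pmf:.4f}")
```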
Deductions:
(i) $E(N(t)) = \lambda t$ and $\operatorname{Var}(N(t)) = \lambda t$, i.e., the mean and the variance of $N(t)$ depend on $t$ and, as such, the process is evolutionary (non-stationary).
(ii) In an interval of unit length, the mean number of occurrences is $\lambda$. This is called the parameter of the process.
(iii) The Poisson process is a continuous parameter, discrete state space stochastic process, but $E(N(t)) = \lambda t$ is a continuous (linear) function of $t$.
(v) If $E$ occurred $r$ times up to the initial instant 0 from which $t$ is measured, then the initial condition will be $p_r(0) = 1$, $p_n(0) = 0$ for $n \ne r$, and
\[
p_n(t) = P(N(t) - N(0) = n - r) =
\begin{cases}
e^{-\lambda t}\, \dfrac{(\lambda t)^{n-r}}{(n-r)!}, & n \ge r \\[6pt]
0, & n < r.
\end{cases}
\]
By Chebyshev's inequality,
\[
P(|X - E(X)| \ge a) \le \frac{\operatorname{Var}(X)}{a^2}, \qquad a > 0.
\]
Put $X = N(t)$:
\[
P(|N(t) - \lambda t| \ge a) \le \frac{\lambda t}{a^2}, \qquad a > 0,
\]
or,
\[
P\!\left( \left| \frac{N(t)}{t} - \lambda \right| \ge \frac{a}{t} \right) \le \frac{\lambda t}{a^2}.
\]
Let $\epsilon = \dfrac{a}{t}$; then
\[
P\!\left( \left| \frac{N(t)}{t} - \lambda \right| \ge \epsilon \right) \le \frac{\lambda}{t\, \epsilon^2}
\]
\[
\Rightarrow \lim_{t \to \infty} P\!\left( \left| \frac{N(t)}{t} - \lambda \right| \ge \epsilon \right) = 0,
\]
i.e., for large $t$, $\dfrac{N(t)}{t}$ can be taken as an estimate of $\lambda$.
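A short sketch of this consistency claim (illustrative values, and using the fact just established that $N(t) \sim \text{Pois}(\lambda t)$): as $t$ grows, $N(t)/t$ settles near $\lambda$.

```python
# Minimal sketch: N(t)/t as an estimator of lam for growing t (values assumed).
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0
for t in (10, 100, 1_000, 10_000):
    n_t = rng.poisson(lam * t)         # one observation of N(t)
    print(f"t={t:>6}  N(t)/t = {n_t / t:.4f}   (true lam = {lam})")
```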
Some properties of the Poisson process:
(i) Additive property: The sum of two independent Poisson processes is again a Poisson process.
Proof: Let $N_1(t)$ and $N_2(t)$ be two independent Poisson processes with parameters $\lambda_1$ and $\lambda_2$ respectively, and let $N(t) = N_1(t) + N_2(t)$. Then
\[
P(N(t) = n) = \sum_{r=0}^{n} P(N_1(t) = r,\, N_2(t) = n-r)
\]
\[
= \sum_{r=0}^{n} P(N_1(t) = r)\, P(N_2(t) = n-r) \qquad \text{(due to independence)}
\]
\[
= \sum_{r=0}^{n} e^{-\lambda_1 t}\, \frac{(\lambda_1 t)^r}{r!}\; e^{-\lambda_2 t}\, \frac{(\lambda_2 t)^{n-r}}{(n-r)!}
\]
\[
= e^{-(\lambda_1+\lambda_2)t}\, \frac{t^n}{n!} \sum_{r=0}^{n} \binom{n}{r} \lambda_1^{r} \lambda_2^{n-r}
\]
\[
= e^{-(\lambda_1+\lambda_2)t}\, \frac{\left( (\lambda_1+\lambda_2)t \right)^n}{n!}, \qquad n \ge 0
\]
\[
\Rightarrow N(t) \sim \text{Pois}\!\left( (\lambda_1+\lambda_2)t \right).
\]
Alternatively, let the p.g.f. of $N_i(t)$, $i = 1, 2$, be $E\!\left( s^{N_i(t)} \right) = e^{\lambda_i (s-1) t}$.
The p.g.f. of $N(t)$ is
\[
E\!\left( s^{N(t)} \right) = E\!\left( s^{N_1(t) + N_2(t)} \right) = E\!\left( s^{N_1(t)} \right) E\!\left( s^{N_2(t)} \right) = e^{(\lambda_1+\lambda_2)(s-1)t}
\]
\[
\Rightarrow N(t) \sim \text{Pois}\!\left( (\lambda_1+\lambda_2)t \right).
\]
In terms of characteristic functions,
\[
\phi(s) = E\!\left( e^{isN(t)} \right) = E\!\left( e^{is(N_1(t)+N_2(t))} \right) = E\!\left( e^{isN_1(t)} \right) E\!\left( e^{isN_2(t)} \right) = e^{(\lambda_1+\lambda_2)t(e^{is}-1)}.
\]
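The additive property is easy to see numerically; a minimal sketch (illustrative parameters) sums two independent simulated Poisson counts and compares the result with the Pois$((\lambda_1+\lambda_2)t)$ pmf.

```python
# Minimal sketch: N1(t) + N2(t) behaves like a Poisson((l1+l2)t) count.
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(2)
l1, l2, t, reps = 1.2, 0.8, 2.0, 200_000
total = rng.poisson(l1 * t, reps) + rng.poisson(l2 * t, reps)

lam_t = (l1 + l2) * t
for n in range(8):
    print(n, round(float(np.mean(total == n)), 4),
          round(exp(-lam_t) * lam_t ** n / factorial(n), 4))
```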
(ii) Difference of two independent Poisson processes: The difference of two independent Poisson processes is not a Poisson process.
Proof: Let $N_1(t)$ and $N_2(t)$ be two independent Poisson processes with parameters $\lambda_1$ and $\lambda_2$ respectively, and let $N(t) = N_1(t) - N_2(t)$. Then, for $n \ge 0$,
\[
P(N(t) = n) = \sum_{r=0}^{\infty} e^{-\lambda_1 t}\, \frac{(\lambda_1 t)^{n+r}}{(n+r)!}\; e^{-\lambda_2 t}\, \frac{(\lambda_2 t)^{r}}{r!}
\]
\[
= e^{-(\lambda_1+\lambda_2)t} \left( \frac{\lambda_1}{\lambda_2} \right)^{n/2} \sum_{r=0}^{\infty} \frac{\left( t\sqrt{\lambda_1 \lambda_2} \right)^{n+2r}}{r!\,(n+r)!}
= e^{-(\lambda_1+\lambda_2)t} \left( \frac{\lambda_1}{\lambda_2} \right)^{n/2} I_n\!\left( 2t\sqrt{\lambda_1 \lambda_2} \right),
\]
where
\[
I_n(x) = \sum_{r=0}^{\infty} \frac{(x/2)^{n+2r}}{r!\,(n+r)!}
\]
is the modified Bessel function of order $n$ ($n \ge -1$). Hence $N(t)$ is not a Poisson process.
In terms of p.g.f.s,
\[
E\!\left( s^{N(t)} \right) = E\!\left( s^{N_1(t) - N_2(t)} \right) = E\!\left( s^{N_1(t)} \right) E\!\left( \left( \tfrac{1}{s} \right)^{N_2(t)} \right) = e^{-(\lambda_1+\lambda_2)t}\, e^{\left( \lambda_1 s + \frac{\lambda_2}{s} \right) t}.
\]
Then $p_n(t)$ is the coefficient of $s^n$ in the expansion of $E\!\left( s^{N(t)} \right)$. Further,
\[
E(N(t)) = (\lambda_1 - \lambda_2)t,
\]
\[
E(N^2(t)) = (\lambda_1+\lambda_2)t + (\lambda_1-\lambda_2)^2 t^2,
\]
\[
\operatorname{Var}(N(t)) = (\lambda_1+\lambda_2)t.
\]
The characteristic function is
\[
\phi(s) = E\!\left( e^{isN(t)} \right) = E\!\left( e^{isN_1(t)} \right) E\!\left( e^{-isN_2(t)} \right) = e^{t(\lambda_1 e^{is} + \lambda_2 e^{-is}) - t(\lambda_1+\lambda_2)}.
\]
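A quick numerical illustration (parameters assumed): for the difference, the mean and variance disagree, which already rules out a Poisson law.

```python
# Minimal sketch: N1(t) - N2(t) has mean (l1-l2)t but variance (l1+l2)t,
# so it cannot be Poisson (where mean and variance coincide).
import numpy as np

rng = np.random.default_rng(3)
l1, l2, t, reps = 3.0, 1.0, 5.0, 200_000
diff = rng.poisson(l1 * t, reps) - rng.poisson(l2 * t, reps)

print("mean    ", diff.mean(), " vs (l1-l2)t =", (l1 - l2) * t)
print("variance", diff.var(),  " vs (l1+l2)t =", (l1 + l2) * t)
```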
(iii) Decomposition of a Poisson process: A random selection from a Poisson process.
Let $N(t)$, the number of occurrences of an event $E$ in an interval of length $t$, be a Poisson process with parameter $\lambda$. Further, let each occurrence of $E$ have a constant probability $p$ of being recorded, independently of the other occurrences. If $M(t)$ is the number of occurrences recorded in an interval of length $t$, then $M(t)$ is also a Poisson process, with parameter $\lambda p$.
Proof: Writing $q = 1 - p$,
\[
P(M(t) = n) = \sum_{r=0}^{\infty} P(E \text{ occurs } (n+r) \text{ times by epoch } t \text{ and exactly } n \text{ out of } n+r \text{ occurrences are recorded})
\]
\[
= \sum_{r=0}^{\infty} P(N(t) = n+r)\, \binom{n+r}{n} p^n q^r
\]
\[
= \sum_{r=0}^{\infty} e^{-\lambda t}\, \frac{(\lambda t)^{n+r}}{(n+r)!}\; \frac{(n+r)!}{n!\, r!}\, p^n q^r
\]
\[
= e^{-\lambda t}\, \frac{(\lambda t p)^n}{n!} \sum_{r=0}^{\infty} \frac{(\lambda q t)^r}{r!}
\]
\[
= e^{-\lambda t}\, \frac{(\lambda p t)^n}{n!}\, e^{\lambda q t} = e^{-\lambda p t}\, \frac{(\lambda p t)^n}{n!}
\]
\[
\Rightarrow M(t) \sim \text{Pois}(\lambda p t),
\]
with characteristic function
\[
\phi(s) = E\!\left( e^{isM(t)} \right) = e^{\lambda p t (e^{is} - 1)}.
\]
Corollary:
1. If $M_1(t)$ is the number of events not being recorded, then $M_1(t)$ is a Poisson process with parameter $\lambda(1-p) = \lambda q$. For example, a Geiger counter records radioactive disintegrations according to a Poisson law; the disintegrations which have not been recorded also follow a Poisson law. (Both claims are checked numerically in the sketch below.)
2. If each occurrence of $E$ is channeled into one of $r$ independent streams with respective probabilities $p_1, p_2, \dots, p_r$, $\sum_{i=1}^{r} p_i = 1$, then these $r$ independent streams are Poisson processes with parameters $\lambda p_1, \lambda p_2, \dots, \lambda p_r$ respectively.
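A minimal thinning sketch (illustrative parameters): the total count is Poisson, each event is independently recorded with probability $p$, and both the recorded and unrecorded counts come out with mean equal to variance, near $\lambda p t$ and $\lambda q t$.

```python
# Minimal sketch of random selection (thinning) from a Poisson process.
import numpy as np

rng = np.random.default_rng(4)
lam, p, t, reps = 4.0, 0.3, 2.0, 200_000
n = rng.poisson(lam * t, reps)        # total occurrences N(t)
m = rng.binomial(n, p)                # recorded occurrences M(t)

print("recorded:   mean", m.mean(), " var", m.var(), " target", lam * p * t)
print("unrecorded: mean", (n - m).mean(), " var", (n - m).var(),
      " target", lam * (1 - p) * t)
```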
(iv) Poisson process and binomial distribution: If $N(t)$ is a Poisson process, then for $s < t$,
\[
P(N(s) = k \mid N(t) = n) = \binom{n}{k} \left( \frac{s}{t} \right)^{k} \left( 1 - \frac{s}{t} \right)^{n-k}.
\]
Proof:
\[
P(N(s) = k \mid N(t) = n) = \frac{P(N(s) = k,\, N(t) = n)}{P(N(t) = n)}
\]
\[
= \frac{P(N(s) = k,\, N(t) - N(s) = n-k)}{P(N(t) = n)}
\]
\[
= \frac{P(N(s) = k)\, P(N(t-s) = n-k)}{P(N(t) = n)}
\]
\[
= \frac{e^{-\lambda s}\, \dfrac{(\lambda s)^k}{k!}\; e^{-\lambda(t-s)}\, \dfrac{(\lambda(t-s))^{n-k}}{(n-k)!}}{e^{-\lambda t}\, \dfrac{(\lambda t)^n}{n!}}
\]
\[
= \binom{n}{k} \frac{s^k (t-s)^{n-k}}{t^n}
= \binom{n}{k} \left( \frac{s}{t} \right)^{k} \left( 1 - \frac{s}{t} \right)^{n-k}.
\]
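The conditional binomial law can be checked by simulation; in the sketch below (assumed parameter values), $N(s)$ and the independent increment $N(t) - N(s)$ are drawn separately and then conditioned on their total.

```python
# Minimal sketch: given N(t) = n, the count N(s) is Binomial(n, s/t).
import numpy as np
from math import comb

rng = np.random.default_rng(5)
lam, s, t, n, reps = 1.5, 2.0, 5.0, 8, 1_000_000
ns = rng.poisson(lam * s, reps)            # N(s)
inc = rng.poisson(lam * (t - s), reps)     # N(t) - N(s), independent increment
cond = ns[ns + inc == n]                   # keep paths with N(t) = n

for k in range(n + 1):
    emp = float(np.mean(cond == k))
    theo = comb(n, k) * (s / t) ** k * (1 - s / t) ** (n - k)
    print(k, round(emp, 4), round(theo, 4))
```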
(v) If $\{N(t),\ t \ge 0\}$ is a Poisson process, then the (auto)correlation coefficient between $N(t)$ and $N(t+s)$ is $\sqrt{\dfrac{t}{t+s}}$.
Proof: Let $\lambda$ be the parameter of the process. Then
\[
E(N(T)) = \lambda T; \qquad \operatorname{Var}(N(T)) = \lambda T; \qquad E(N^2(T)) = \lambda T + (\lambda T)^2; \qquad T = t,\ t+s.
\]
Now
\[
E(N(t)N(t+s)) = E\big( N(t)\,(N(t+s) - N(t) + N(t)) \big)
\]
\[
= E(N^2(t)) + E\big( N(t)(N(t+s) - N(t)) \big)
\]
\[
= E(N^2(t)) + E(N(t))\, E(N(t+s) - N(t))
\]
\[
= \lambda t + \lambda^2 t^2 + \lambda t \cdot \lambda s,
\]
so that $\operatorname{Cov}(N(t), N(t+s)) = \lambda t$ and
\[
\rho(N(t), N(t+s)) = \frac{\lambda t}{\sqrt{\lambda t \cdot \lambda(t+s)}} = \sqrt{\frac{t}{t+s}}.
\]
In general,
\[
\rho(N(t), N(t')) = \sqrt{\frac{\min(t, t')}{\max(t, t')}}.
\]
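A short check of the autocorrelation formula (values assumed), using the independent-increment representation $N(t+s) = N(t) + (N(t+s) - N(t))$:

```python
# Minimal sketch: corr(N(t), N(t+s)) should be close to sqrt(t/(t+s)).
import numpy as np

rng = np.random.default_rng(6)
lam, t, s, reps = 2.0, 3.0, 1.0, 500_000
nt = rng.poisson(lam * t, reps)                # N(t)
nts = nt + rng.poisson(lam * s, reps)          # N(t+s) via independent increment
print(np.corrcoef(nt, nts)[0, 1], "vs", np.sqrt(t / (t + s)))
```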
Example
(1) M/G/1 queue: Recall the queuing system where customers join the system hoping for some service. There is a server who serves one customer (if any is present) only at the time points 0, 1, 2, …. The numbers of customers $Y_n$ arriving in the time intervals $(n, n+1]$ are i.i.d. random variables. The service station has a capacity of at most $c$ customers, including the one being served, and further arrivals are not entertained by the service station (lost customers). Further, the service times of successive arrivals are assumed to be independent random variables with a common distribution, say $G$, and they are independent of the arrivals. Then $\{X_n,\ n \ge 1\}$, the number of customers at time point $n$, is a Markov chain with state space $S = \{0, 1, 2, \dots, c\}$. We have
\[
X_{n+1} =
\begin{cases}
Y_n, & \text{if } X_n = 0 \text{ and } 0 \le Y_n \le c-1 \\
X_n - 1 + Y_n, & \text{if } 1 \le X_n \le c \text{ and } 0 \le Y_n \le c+1-X_n \\
c, & \text{otherwise}.
\end{cases}
\]
With arrivals occurring according to a Poisson process with parameter $\lambda$,
\[
P(Y_n = j) = \int_0^{\infty} e^{-\lambda x}\, \frac{(\lambda x)^j}{j!}\, dG(x); \qquad j = 0, 1, \dots
\]
and the transition probabilities are
\[
p_{ij} =
\begin{cases}
\displaystyle \int_0^{\infty} e^{-\lambda x}\, \frac{(\lambda x)^j}{j!}\, dG(x), & i = 0,\ j \ge 0 \\[8pt]
\displaystyle \int_0^{\infty} e^{-\lambda x}\, \frac{(\lambda x)^{j-i+1}}{(j-i+1)!}\, dG(x), & j \ge i-1,\ i \ge 1 \\[8pt]
0, & \text{otherwise}.
\end{cases}
\]
Taking a cue from the above example, we can now identify some distributions which are closely associated with the Poisson process.
Inter-arrival time: Let $\{N(t),\ t \ge 0\}$ be a Poisson process with parameter $\lambda$, and let $X$ be the interval between two successive occurrences of the event $E$ for which $N(t)$ is the counting process. Then $X$ is known as the inter-arrival time, and its distribution is derived below.
Theorem 4.2: The interval between two successive occurrences of a Poisson process $\{N(t),\ t \ge 0\}$ with parameter $\lambda$ has a negative exponential distribution with mean $\dfrac{1}{\lambda}$.
Proof: Let $X$ be the interval between two successive occurrences of $\{N(t),\ t \ge 0\}$, and let $F_X(x) = P(X \le x)$ be the c.d.f. of $X$. Let $E_i$ and $E_{i+1}$ be two successive occurrences of the event $E$ for which $N(t)$ is the counting process, with $E_i$ occurring at the instant $t_i$. Then
\[
P(X > x) = P(N(x) = 0 \mid N(t_i) = i) = p_0(x) = e^{-\lambda x}, \qquad x \ge 0.
\]
Since $i$ is arbitrary, for the interval $X$ between any two successive occurrences
\[
F_X(x) = P(X \le x) = 1 - P(X > x) = 1 - e^{-\lambda x}; \qquad x \ge 0
\]
\[
f(x) = \frac{dF_X(x)}{dx} = \lambda e^{-\lambda x}; \qquad x \ge 0
\]
\[
\Rightarrow X \sim \exp(\lambda).
\]
Theorem 4.3: The intervals between successive occurrences of a Poisson process are i.i.d. exponential variables with common mean $\dfrac{1}{\lambda}$.
Proof: We prove the result by mathematical induction. Let the successive occurrence points of the Poisson process be $0 \le t_1 \le t_2 \le \dots$, and write $X_i = t_{i+1} - t_i$. In the earlier theorem, we proved that the inter-arrival time between two successive occurrences is an exponential variable with mean $\dfrac{1}{\lambda}$. For three successive occurrences,
\[
P(X_1 \le x_1,\, X_2 > x_2) = \int_0^{x_1} P(t_2 > x + x_2 \mid X_1 = x)\, f(x)\, dx,
\]
where, by the independence of increments,
\[
P(t_2 > x_1 + x_2 \mid X_1 = x_1) = P(N(x_2) = 0) = e^{-\lambda x_2}.
\]
Hence
\[
P(X_1 \le x_1,\, X_2 > x_2) = \int_0^{x_1} e^{-\lambda x_2}\, \lambda e^{-\lambda x}\, dx = e^{-\lambda x_2}\left( 1 - e^{-\lambda x_1} \right).
\]
In general,
\[
P(X_1 \le x_1,\, X_2 \le x_2, \dots, X_k \le x_k,\, X_{k+1} > x_{k+1})
= \int_0^{x_1} \!\!\cdots\! \int_0^{x_k} P\!\left( W_{k+1} > \sum_{i=1}^{k+1} x_i \,\Big|\, X_1 = x_1, \dots, X_k = x_k \right) \lambda^k e^{-\lambda \sum_{i=1}^{k} x_i}\, dx_1 \cdots dx_k,
\]
where $W_{k+1} = t_{k+1}$ and
\[
P\!\left( W_{k+1} > \sum_{i=1}^{k+1} x_i \,\Big|\, X_1 = x_1, \dots, X_k = x_k \right)
= P\!\left( N\!\left( \sum_{i=1}^{k+1} x_i \right) - N\!\left( \sum_{i=1}^{k} x_i \right) = 0 \right)
= P(N(x_{k+1}) = 0) = e^{-\lambda x_{k+1}}.
\]
Therefore
\[
P(X_1 \le x_1, \dots, X_k \le x_k,\, X_{k+1} > x_{k+1}) = e^{-\lambda x_{k+1}}\left( 1 - e^{-\lambda x_1} \right) \cdots \left( 1 - e^{-\lambda x_k} \right).
\]
The converse of this theorem is equally true, which along with this theorem gives a characterization of the Poisson process.
Theorem 4.4: If the intervals between successive occurrences of an event $E$ are independently distributed exponentially with common mean $\dfrac{1}{\lambda}$, then the event $E$ has the Poisson process as its counting process.
Proof: Let $\{Z_i,\ i \ge 1\}$ be a sequence of i.i.d. negative exponential variables with common mean $\dfrac{1}{\lambda}$, where $Z_i$ is the interval between the $(i-1)$th and $i$th occurrences of the event $E$.
Define $W_n = Z_1 + Z_2 + \dots + Z_n$ as the waiting time up to the $n$th occurrence, i.e., the time from the origin to the $n$th occurrence. Then $W_n$ has the gamma density
\[
g_{W_n}(x) = \frac{\lambda^n x^{n-1} e^{-\lambda x}}{(n-1)!}, \qquad x > 0,
\]
and c.d.f.
\[
F_{W_n}(t) = P(W_n \le t) = \int_0^t g_{W_n}(x)\, dx.
\]
Since the events $\{W_n \le t\}$ and $\{N(t) \ge n\}$ are equivalent,
\[
F_{W_n}(t) = P(W_n \le t) = 1 - P(W_n > t) = 1 - P(N(t) < n) = 1 - P(N(t) \le n-1) = 1 - F_{N(t)}(n-1)
\]
\[
\Rightarrow F_{N(t)}(n-1) = 1 - F_{W_n}(t)
\]
\[
= \int_t^{\infty} \frac{\lambda^n x^{n-1} e^{-\lambda x}}{(n-1)!}\, dx
\]
\[
= \int_{\lambda t}^{\infty} \frac{y^{n-1} e^{-y}}{(n-1)!}\, dy \qquad (\text{put } \lambda x = y)
\]
\[
= \sum_{j=0}^{n-1} e^{-\lambda t}\, \frac{(\lambda t)^j}{j!} \qquad (\text{integration by parts}).
\]
Hence
\[
p_n(t) = P(N(t) = n) = F_{N(t)}(n) - F_{N(t)}(n-1) = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}
\]
\[
\Rightarrow N(t) \sim \text{Pois}(\lambda t); \qquad t \ge 0.
\]
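Conversely to the earlier simulations, the sketch below (illustrative parameters) builds the process from i.i.d. exponential gaps and checks that the count in $(0,t]$ has the Poisson mean and variance $\lambda t$, and that $W_n$ has mean $n/\lambda$.

```python
# Minimal sketch: counting process built from i.i.d. Exp(lam) inter-arrival times.
import numpy as np

rng = np.random.default_rng(7)
lam, t, n, reps = 2.0, 4.0, 5, 100_000
gaps = rng.exponential(1 / lam, (reps, 80))    # 80 gaps per path is ample here
arrivals = gaps.cumsum(axis=1)                 # W_1, W_2, ... for each path

counts = (arrivals <= t).sum(axis=1)           # N(t) per path
print("N(t): mean", counts.mean(), " var", counts.var(), " target", lam * t)
print("W_n:  mean", arrivals[:, n - 1].mean(), " target", n / lam)
```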
It may be noted that the Poisson process has independent, exponentially distributed inter-arrival times and gamma distributed waiting times.
The next result explains the purely random nature of a Poisson process.
Theorem 4.5: If a Poisson process $N(t)$ has occurred only once by the time-point $T$, then the time $\tau$ of that occurrence is uniformly distributed over $(0, T)$:
\[
P(\tau \in (t, t+dt] \mid N(T) = 1) = \frac{dt}{T}; \qquad 0 \le t \le T.
\]
Proof: We have
\[
P(N(T) = 1) = \lambda T e^{-\lambda T},
\]
and $P(\tau \in (t, t+dt],\ N(T) = 1) = \lambda e^{-\lambda t}\, dt \cdot e^{-\lambda(T-t)}$, where $e^{-\lambda(T-t)}$ is the probability that there was no occurrence in $(t, T]$. Hence
\[
P(\tau \in (t, t+dt] \mid N(T) = 1) = \frac{P(\tau \in (t, t+dt],\ N(T) = 1)}{P(N(T) = 1)} = \frac{\lambda e^{-\lambda t}\, dt\; e^{-\lambda(T-t)}}{\lambda T e^{-\lambda T}} = \frac{dt}{T}.
\]
The result can be interpreted as follows: if a Poisson process $N(t)$ has occurred only once by the time-point $T$, the occurrence is equally likely to have happened anywhere in $(0, T]$.
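A minimal simulation of this fact (assumed parameters): draw the first two arrival epochs from exponential gaps, keep the paths with exactly one event in $(0,T]$, and look at where that event falls.

```python
# Minimal sketch: conditioned on N(T) = 1, the event time is Uniform(0, T).
import numpy as np

rng = np.random.default_rng(8)
lam, T, reps = 0.8, 5.0, 500_000
w1 = rng.exponential(1 / lam, reps)            # first arrival epoch
w2 = w1 + rng.exponential(1 / lam, reps)       # second arrival epoch
times = w1[(w1 <= T) & (w2 > T)]               # paths with exactly one event in (0,T]

print("mean", times.mean(), " vs T/2 =", T / 2)
print("bin frequencies:", np.histogram(times, bins=5, range=(0, T))[0] / len(times))
```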
We state some more results, which further emphasize the random nature of the Poisson process.
(i) For a Poisson process with parameter $\lambda$, the time interval up to the first occurrence also follows an exponential distribution with mean $\dfrac{1}{\lambda}$, i.e., if $X_0$ is the time up to the first occurrence, then
\[
P(X_0 > x) = P(N(x) = 0) = p_0(x) = e^{-\lambda x}, \qquad x \ge 0,
\]
i.e., $P(X > x) = e^{-\lambda x}$ is independent of $i$ and $t$.
(ii) Suppose that the interval $X$ is measured from an arbitrary point $t_i + r$ ($r > 0$) in the interval $(t_i, t_{i+1})$, and not from the point $t_i$ of the occurrence of $E_i$. Let $Y = t_{i+1} - (t_i + r)$. $Y$ is called the random modification of $X$, and by the lack of memory of the exponential distribution, $Y$ has the same exponential distribution as $X$, with the same mean. In other words, there is "no premium for waiting".
(iii) Suppose that $A$ and $B$ are two independent series of Poisson events with parameters $\lambda_1$ and $\lambda_2$ respectively. Define a random variable $N$ as the number of occurrences of $A$ between two successive occurrences of $B$. Then
\[
N \sim \text{Geo}\!\left( \frac{\lambda_2}{\lambda_1 + \lambda_2} \right).
\]
Let $X$ be the random variable denoting the interval between two successive occurrences of $B$. Then $f_X(x) = \lambda_2 e^{-\lambda_2 x}$, $x > 0$. Hence
\[
P(A \text{ occurs } k \text{ times in an arbitrary interval between two successive occurrences of } B) = P(N = k)
\]
\[
= \int_0^{\infty} e^{-\lambda_1 t}\, \frac{(\lambda_1 t)^k}{k!}\, f(t)\, dt
\]
\[
= \int_0^{\infty} e^{-\lambda_1 t}\, \frac{(\lambda_1 t)^k}{k!}\, \lambda_2 e^{-\lambda_2 t}\, dt
\]
\[
= \frac{\lambda_1^{k} \lambda_2}{k!} \int_0^{\infty} t^{k} e^{-(\lambda_1+\lambda_2)t}\, dt
\]
\[
= \frac{\lambda_1^{k} \lambda_2}{k!} \cdot \frac{k!}{(\lambda_1+\lambda_2)^{k+1}}
\]
\[
= \frac{\lambda_2}{\lambda_1+\lambda_2} \left( \frac{\lambda_1}{\lambda_1+\lambda_2} \right)^{k}; \qquad k = 0, 1, 2, \dots
\]
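A short sketch of this mixture argument (values assumed): draw the $\exp(\lambda_2)$ gap between two $B$-events, then a Poisson number of $A$-events in that gap, and compare with the geometric pmf.

```python
# Minimal sketch: # of A-events between consecutive B-events is geometric.
import numpy as np

rng = np.random.default_rng(9)
l1, l2, reps = 2.0, 1.0, 500_000
gap = rng.exponential(1 / l2, reps)     # interval between successive B events
k = rng.poisson(l1 * gap)               # A-events falling inside the gap

p = l2 / (l1 + l2)
for j in range(6):
    print(j, round(float(np.mean(k == j)), 4),
          round(p * (l1 / (l1 + l2)) ** j, 4))
```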
(iv) The above property can be generalized to define what we call a Poisson count process.
Poisson count process: Let $E$ and $E'$ be two random sequences of events occurring at the instants $(t_1, t_2, \dots)$ and $(t_1', t_2', \dots)$ respectively. The number, $N_n$, of occurrences of $E'$ in the interval $(t_n, t_{n+1})$ between two successive occurrences of $E$ is called a count process.
If $E$ is a Poisson process, then the count process is called the Poisson count process.
If, along with $E$, $E'$ is also a Poisson process, then the count process $N_n$ has a geometric distribution.
(v) Suppose that $A$ and $B$ are two independent series of Poisson events with parameters $\lambda_1$ and $\lambda_2$ respectively. Define a random variable $N$ as the number of occurrences of $A$ between every second occurrence of $B$. The interval between two alternate occurrences of $B$ is the sum of two independent exponential variates and has the density
\[
f(x) = \lambda_2 e^{-\lambda_2 x} * \lambda_2 e^{-\lambda_2 x} = \lambda_2^2\, x\, e^{-\lambda_2 x}, \qquad x > 0.
\]
Hence
\[
P(k \text{ occurrences of } A \text{ between every second occurrence of } B) = \int_0^{\infty} e^{-\lambda_1 t}\, \frac{(\lambda_1 t)^k}{k!}\, \lambda_2^2\, t\, e^{-\lambda_2 t}\, dt
\]
\[
= \frac{\lambda_1^{k} \lambda_2^{2}}{k!} \int_0^{\infty} t^{k+1} e^{-(\lambda_1+\lambda_2)t}\, dt
\]
\[
= \frac{\lambda_1^{k} \lambda_2^{2}}{k!} \cdot \frac{(k+1)!}{(\lambda_1+\lambda_2)^{k+2}}
\]
\[
= \binom{k+1}{1} \left( \frac{\lambda_2}{\lambda_1+\lambda_2} \right)^{2} \left( \frac{\lambda_1}{\lambda_1+\lambda_2} \right)^{k}; \qquad k = 0, 1, 2, \dots,
\]
which is a negative binomial distribution.
In the classical Poisson process, it is assumed that the conditional probabilities are constant, i.e., the probability of $k$ events in the interval $(t, t+h]$, given the occurrence of $n$ events by the time-point $t$, is given by
\[
p_k(h) = P(N(t+h) - N(t) = k \mid N(t) = n) =
\begin{cases}
\lambda h + o(h), & k = 1 \\
o(h), & k \ge 2 \\
1 - \lambda h + o(h), & k = 0,
\end{cases}
\]
i.e., $p_k(h)$ is independent of $n$ as well as $t$. This process can be generalized by considering, in place of the constant $\lambda$, a rate that depends on the state of the process.
This generalized process has excellent interpretations in terms of birth-death processes. Consider a population of
organisms, which reproduce to create similar organisms. The population is dynamic as there are additions in
terms of births and deletions in terms of deaths. Let n be the size of the population at instant t. Depending upon
the nature of additions and deletions in the population, various types of processes can be defined.
4.5.1 Pure birth process: Let $\lambda_n$ be a function of $n$, the size of the population at instant $t$. Then
\[
p(k, h \mid n, t) = P(N(t+h) - N(t) = k \mid N(t) = n) =
\begin{cases}
\lambda_n h + o(h), & k = 1 \\
o(h), & k \ge 2 \\
1 - \lambda_n h + o(h), & k = 0.
\end{cases} \tag{4.6}
\]
Then,
\[
p_n(t+h) = p_n(t)(1 - \lambda_n h) + p_{n-1}(t)\, \lambda_{n-1} h + o(h), \qquad n \ge 1
\]
\[
p_0(t+h) = p_0(t)(1 - \lambda_0 h) + o(h),
\]
so that
\[
p_n'(t) = -\lambda_n p_n(t) + \lambda_{n-1} p_{n-1}(t), \qquad n \ge 1 \tag{4.7}
\]
\[
p_0'(t) = -\lambda_0 p_0(t). \tag{4.8}
\]
This is a pure birth process (there are only births and no deaths, as $k$ is a non-negative integer). For specified forms of $\lambda_n$, particular processes are obtained.
(i) Yule-Furry process: Let $\lambda_n = n\lambda$. Then (4.7) and (4.8) can be written as
\[
p_n'(t) = -n\lambda\, p_n(t) + (n-1)\lambda\, p_{n-1}(t), \qquad n \ge 1
\]
\[
p_0'(t) = 0.
\]
Let the initial conditions be $p_1(0) = 1$, $p_i(0) = 0$ for $i \ne 1$, i.e., the process starts with only one member at time $t = 0$.
For $n = 1$,
\[
p_1'(t) = -\lambda p_1(t) \Rightarrow p_1(t) = c_1 e^{-\lambda t}; \qquad p_1(0) = 1 \Rightarrow p_1(t) = e^{-\lambda t}.
\]
For $n = 2$,
\[
p_2'(t) + 2\lambda p_2(t) = \lambda p_1(t) = \lambda e^{-\lambda t}.
\]
The integrating factor for this equation is $e^{2\lambda t}$, so
\[
e^{2\lambda t} p_2(t) = \int \lambda e^{2\lambda t} e^{-\lambda t}\, dt + c_2 = e^{\lambda t} + c_2.
\]
Since $p_2(0) = 0 \Rightarrow c_2 = -1$,
\[
p_2(t) = e^{-\lambda t}\left( 1 - e^{-\lambda t} \right).
\]
Let
\[
p_{n-1}(t) = e^{-\lambda t}\left( 1 - e^{-\lambda t} \right)^{n-2}.
\]
Now,
\[
p_n'(t) + n\lambda\, p_n(t) = (n-1)\lambda\, p_{n-1}(t) = (n-1)\lambda\, e^{-\lambda t}\left( 1 - e^{-\lambda t} \right)^{n-2}.
\]
Multiplying both sides by $e^{n\lambda t}$, we have
\[
e^{n\lambda t} p_n(t) = (n-1)\lambda \int e^{n\lambda t} e^{-\lambda t}\left( 1 - e^{-\lambda t} \right)^{n-2} dt + c_n = e^{(n-1)\lambda t}\left( 1 - e^{-\lambda t} \right)^{n-1} + c_n.
\]
Since $p_n(0) = 0 \Rightarrow c_n = 0$, by induction
\[
p_n(t) = e^{-\lambda t}\left( 1 - e^{-\lambda t} \right)^{n-1}; \qquad n \ge 1,
\]
and $p_0(t) = 0$. Thus $\{p_n(t),\ n \ge 1\}$ has a geometric distribution with parameter $e^{-\lambda t}$ and p.g.f.
\[
P(s,t) = \sum_{n \ge 1} e^{-\lambda t}\left( 1 - e^{-\lambda t} \right)^{n-1} s^n = \frac{s\, e^{-\lambda t}}{1 - s\left( 1 - e^{-\lambda t} \right)},
\]
whence
\[
E(N(t)) = e^{\lambda t}, \qquad \operatorname{Var}(N(t)) = e^{\lambda t}\left( e^{\lambda t} - 1 \right).
\]
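A minimal simulation of the Yule-Furry process (assumed rate and horizon): starting from one member, the wait for the next birth in a population of size $n$ is $\exp(n\lambda)$; the population size at time $t$ should then be geometric with parameter $e^{-\lambda t}$.

```python
# Minimal sketch: Yule-Furry (pure birth) process via exponential holding times.
import numpy as np

rng = np.random.default_rng(10)
lam, t, reps = 0.7, 2.0, 50_000
sizes = np.empty(reps, dtype=int)
for r in range(reps):
    n, clock = 1, 0.0
    while True:
        clock += rng.exponential(1 / (lam * n))   # next birth after Exp(n*lam)
        if clock > t:
            break
        n += 1
    sizes[r] = n

print("mean", sizes.mean(), " vs e^(lam t) =", np.exp(lam * t))
p = np.exp(-lam * t)
for n in (1, 2, 3, 4):
    print(n, round(float(np.mean(sizes == n)), 4),
          round(p * (1 - p) ** (n - 1), 4))
```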
4.5.2 Birth and death process: Now, along with additions to the population, we consider deletions also, i.e.,
\[
q(k, h \mid n, t) = P(\text{number of deaths in } (t, t+h] = k \mid N(t) = n) =
\begin{cases}
\mu_n h + o(h), & k = 1 \\
o(h), & k \ge 2 \\
1 - \mu_n h + o(h), & k = 0,
\end{cases} \tag{4.9}
\]
with $\mu_0 = 0$ (no deaths can occur in an empty population).
(4.6) and (4.9) together constitute a birth and death process. The probability of more than one birth or death in $(t, t+h]$ is $o(h)$. Let
\[
p_n(t) = P(N(t) = n).
\]
To obtain the differential-difference equation for $p_n(t)$, we consider the time interval
\[
(0, t+h] = (0, t] \cup (t, t+h].
\]
Since births and deaths are both possible in the population, the event $\{N(t+h) = n\}$, $n \ge 1$, can occur in the following mutually exclusive ways $E_{bd}$ ($b$ births and $d$ deaths in $(t, t+h]$), where
\[
P(E_{00}) = p_n(t)\, P(\text{no birth and no death in } (t, t+h] \mid N(t) = n) = p_n(t)(1 - \lambda_n h + o(h))(1 - \mu_n h + o(h)) = p_n(t)\left( 1 - (\lambda_n + \mu_n)h + o(h) \right)
\]
\[
P(E_{10}) = p_{n-1}(t)(\lambda_{n-1} h + o(h))
\]
\[
P(E_{01}) = p_{n+1}(t)(\mu_{n+1} h + o(h))
\]
\[
P(E_{11}) = p_n(t)(\lambda_n h + o(h))(\mu_n h + o(h)) = p_n(t)\, o(h) = o(h).
\]
So for $n \ge 1$,
\[
p_n(t+h) = p_n(t)\left( 1 - (\lambda_n+\mu_n)h \right) + p_{n-1}(t)\lambda_{n-1} h + p_{n+1}(t)\mu_{n+1} h + o(h)
\]
\[
\Rightarrow p_n'(t) = -(\lambda_n+\mu_n)p_n(t) + \lambda_{n-1}p_{n-1}(t) + \mu_{n+1}p_{n+1}(t). \tag{4.10}
\]
For $n = 0$,
\[
p_0(t+h) = p_0(t) - \lambda_0 h\, p_0(t) + \mu_1 h\, p_1(t) + o(h)
\]
\[
\Rightarrow p_0'(t) = -\lambda_0 p_0(t) + \mu_1 p_1(t). \tag{4.11}
\]
If the process starts with $i$ members, the initial conditions are
\[
p_i(0) = 1, \qquad p_n(0) = 0,\ n \ne i.
\]
(4.10) and (4.11) represent the differential-difference equations of a birth and death process.
4.5.3 Birth and death rates: Depending upon the values of $\lambda_n$ and $\mu_n$, various types of birth and death processes arise.
(i) Immigration: When $\lambda_n = \lambda$, i.e., $\lambda_n$ is independent of the population size $n$, the increase in the population can be regarded as due to an external source. The process is, then, known as an immigration process.
(ii) Emigration: When $\mu_n = \mu$, i.e., $\mu_n$ is independent of the population size $n$, the decrease in the population can be regarded as due to the elimination of some elements present in the population. The process is then known as an emigration process.
(iii) Linear birth process: When $\lambda_n = n\lambda$, then $\lambda_n h = n\lambda h$ is the conditional probability of one birth in an interval of length $h$, given that $n$ organisms are present at the beginning of the interval.
(iv) Linear death process: When $\mu_n = n\mu$, the process is known as a linear death process.
When specific values of both $\lambda_n$ and $\mu_n$ are considered simultaneously, we get further processes. For instance, if
\[
P(\text{an element of the population gives birth to a new member in a small interval of length } h) = \lambda h + o(h),
\]
then
\[
P(\text{one birth in } (t, t+h] \mid N(t) = n) = n\lambda h + o(h),
\]
and similarly for deaths. Thus, if for a birth and death process $\lambda_n = n\lambda$ and $\mu_n = n\mu$ $(n \ge 1)$, with $\lambda_0 = \mu_0 = 0$, then the process is a linear growth process. This process, which is evolutionary in nature, has extensive applications in various fields.
(a) The p.g.f. of the process: For the linear growth process, equations (4.10) and (4.11) become
\[
p_n'(t) = -(\lambda+\mu)n\, p_n(t) + \lambda(n-1)p_{n-1}(t) + \mu(n+1)p_{n+1}(t), \qquad n \ge 1 \tag{4.12}
\]
\[
p_0'(t) = \mu\, p_1(t). \tag{4.13}
\]
Let
\[
P(s,t) = \sum_{n \ge 0} p_n(t)\, s^n.
\]
Then,
\[
\frac{\partial}{\partial s} P(s,t) = \sum_{n \ge 1} n\, p_n(t)\, s^{n-1}
\]
and
\[
\frac{\partial}{\partial t} P(s,t) = \sum_{n \ge 0} p_n'(t)\, s^n.
\]
Multiplying (4.12) by $s^n$, summing over $n = 1, 2, 3, \dots$ and then adding (4.13) to the result, we have
\[
p_0'(t) + \sum_{n \ge 1} p_n'(t)\, s^n = -(\lambda+\mu) \sum_{n \ge 1} n\, p_n(t)\, s^n + \lambda \sum_{n \ge 1} (n-1)\, p_{n-1}(t)\, s^n + \mu \sum_{n \ge 1} (n+1)\, p_{n+1}(t)\, s^n + \mu\, p_1(t)
\]
\[
\frac{\partial P}{\partial t} = -(\lambda+\mu)s\, \frac{\partial P}{\partial s} + \lambda s^2\, \frac{\partial P}{\partial s} + \mu\, \frac{\partial P}{\partial s}
\]
\[
\Rightarrow \frac{\partial P}{\partial t} = \left( \lambda s^2 - (\lambda+\mu)s + \mu \right) \frac{\partial P}{\partial s} = (\lambda s - \mu)(s - 1)\, \frac{\partial P}{\partial s}.
\]
Under the initial condition $N(0) = i$, the solution of this partial differential equation is given by
\[
P(s,t) = \left[ \frac{\mu(1-s) - (\mu - \lambda s)\, e^{-(\lambda-\mu)t}}{\lambda(1-s) - (\mu - \lambda s)\, e^{-(\lambda-\mu)t}} \right]^{i}. \tag{4.14}
\]
(b) Mean population size: Differentiating $P(s,t)$ partially w.r.t. $s$ at $s = 1$, we get the mean population size $M(t)$ as
\[
M(t) = \frac{\partial P(s,t)}{\partial s}\bigg|_{s=1} = i\, e^{(\lambda-\mu)t}.
\]
As $t \to \infty$,
\[
M(t) \to
\begin{cases}
0, & \text{if } \lambda < \mu \\
\infty, & \text{if } \lambda > \mu \\
i, & \text{if } \lambda = \mu.
\end{cases}
\]
Since this method involves differentiation of a not-so-easy p.g.f., obtaining $M(t)$ this way may be a somewhat involved exercise. Alternatively, $M(t)$ can be obtained from (4.12) and (4.13) directly.
Now,
\[
M(t) = E(N(t)) = \sum_{n \ge 1} n\, p_n(t).
\]
Multiplying both sides of (4.12) by $n$ and adding over the different values of $n$, we have
\[
\sum_{n \ge 1} n\, p_n'(t) = -(\lambda+\mu) \sum_{n \ge 1} n^2 p_n(t) + \lambda \sum_{n \ge 1} n(n-1)\, p_{n-1}(t) + \mu \sum_{n \ge 1} n(n+1)\, p_{n+1}(t), \tag{4.15}
\]
where
\[
\sum_{n \ge 1} n(n-1)\, p_{n-1}(t) = \sum_{n \ge 1} \left[ (n-1)^2 + (n-1) \right] p_{n-1}(t) = M_2(t) + M(t),
\]
with $M_2(t) = E(N^2(t)) = \sum_{n \ge 1} n^2 p_n(t)$; and
\[
\sum_{n \ge 1} n(n+1)\, p_{n+1}(t) = \sum_{n \ge 1} \left[ (n+1)^2 - (n+1) \right] p_{n+1}(t) = \left( M_2(t) - p_1(t) \right) - \left( M(t) - p_1(t) \right) = M_2(t) - M(t),
\]
and $\sum_{n \ge 1} n\, p_n'(t) = M'(t)$. Hence
\[
M'(t) = -(\lambda+\mu)M_2(t) + \lambda\left( M_2(t) + M(t) \right) + \mu\left( M_2(t) - M(t) \right) = (\lambda - \mu)M(t)
\]
\[
\Rightarrow M(t) = c\, e^{(\lambda-\mu)t}, \qquad c \text{ being the constant of integration.}
\]
Initially, $M(0) = i \Rightarrow c = i \Rightarrow M(t) = i\, e^{(\lambda-\mu)t}$.
For the variance, multiplying (4.12) by $n^2$ and summing,
\[
\sum_{n \ge 1} n^2 p_n'(t) = -(\lambda+\mu) \sum_{n \ge 1} n^3 p_n(t) + \lambda \sum_{n \ge 1} n^2(n-1)\, p_{n-1}(t) + \mu \sum_{n \ge 1} n^2(n+1)\, p_{n+1}(t)
\]
\[
M_2'(t) = -(\lambda+\mu)M_3(t) + \lambda \sum_{n \ge 1} \left[ (n-1)^3 + 2(n-1)^2 + (n-1) \right] p_{n-1}(t) + \mu \sum_{n \ge 1} \left[ (n+1)^3 - 2(n+1)^2 + (n+1) \right] p_{n+1}(t),
\]
where $M_3(t) = \sum_{n \ge 1} n^3 p_n(t)$. This simplifies to
\[
M_2'(t) = 2(\lambda-\mu)M_2(t) + (\lambda+\mu)M(t),
\]
or,
\[
M_2'(t) - 2(\lambda-\mu)M_2(t) = (\lambda+\mu)\, i\, e^{(\lambda-\mu)t}.
\]
Solving,
\[
M_2(t) = c\, e^{2(\lambda-\mu)t} - \frac{i(\lambda+\mu)}{\lambda-\mu}\, e^{(\lambda-\mu)t}.
\]
Initially, $M_2(0) = i^2$, so
\[
c = i^2 + \frac{i(\lambda+\mu)}{\lambda-\mu}
\]
\[
\Rightarrow M_2(t) = \left( i^2 + \frac{i(\lambda+\mu)}{\lambda-\mu} \right) e^{2(\lambda-\mu)t} - \frac{i(\lambda+\mu)}{\lambda-\mu}\, e^{(\lambda-\mu)t}
\]
\[
\Rightarrow \operatorname{Var}(N(t)) = M_2(t) - M^2(t) = \frac{i(\lambda+\mu)}{\lambda-\mu}\, e^{(\lambda-\mu)t}\left( e^{(\lambda-\mu)t} - 1 \right), \qquad \text{if } \lambda \ne \mu.
\]
If $\lambda = \mu$, then $M(t) = i$ and
\[
M_2'(t) = (\lambda+\mu)M(t) = 2i\lambda
\]
\[
\Rightarrow M_2(t) = 2i\lambda t + c.
\]
At $t = 0$, $M_2(0) = i^2$, so
\[
M_2(t) = 2i\lambda t + i^2 \quad \Rightarrow \quad \operatorname{Var}(N(t)) = 2i\lambda t.
\]
(c) Probability of extinction: Since $\lambda_0 = \mu_0 = 0$, the state 0 is an absorbing state, i.e., once the population reaches 0, it remains there forever. Taking $i = 1$ and writing (4.14) as
\[
P(s,t) = \frac{\mu(1-s) - (\mu - \lambda s)\, e^{-(\lambda-\mu)t}}{\lambda(1-s) - (\mu - \lambda s)\, e^{-(\lambda-\mu)t}} = \frac{a - bs}{c - ds},
\]
where
\[
a = \mu\left( 1 - e^{-(\lambda-\mu)t} \right), \quad b = \mu - \lambda\, e^{-(\lambda-\mu)t}, \quad c = \lambda - \mu\, e^{-(\lambda-\mu)t}, \quad d = \lambda\left( 1 - e^{-(\lambda-\mu)t} \right)
\]
(note that $a - b = c - d = (\lambda-\mu)e^{-(\lambda-\mu)t}$, so that $P(1,t) = 1$), we get
\[
P(N(t) = 0) = p_0(t) = P(0,t) = \frac{a}{c} = \frac{\mu\left( 1 - e^{-(\lambda-\mu)t} \right)}{\lambda - \mu\, e^{-(\lambda-\mu)t}}
\]
\[
\lim_{t \to \infty} p_0(t) =
\begin{cases}
1, & \text{if } \lambda < \mu \\[4pt]
\dfrac{\mu}{\lambda}, & \text{if } \lambda > \mu.
\end{cases}
\]
The physical interpretation of the probability of extinction is that if the birth rate is less than the death rate in a population, the population will ultimately become extinct with probability 1. If the birth rate is more than the death rate, then the population becomes extinct with probability less than unity.
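A minimal Gillespie-style sketch of the extinction probability (all parameter values and the population cap are illustrative assumptions): starting from one member, each event is a birth with probability $\lambda/(\lambda+\mu)$ and a death otherwise. Only the embedded jump chain matters for whether extinction occurs, so the exponential waiting times are omitted, and paths that grow past a large cap are treated as having escaped extinction.

```python
# Minimal sketch: extinction probability of a linear birth-death process,
# started at N(0) = 1, compared with the limit mu/lam (valid for lam > mu).
import numpy as np

rng = np.random.default_rng(11)
lam, mu, reps, cap = 1.0, 0.5, 5_000, 500   # cap: treat n >= cap as non-extinct
p_birth = lam / (lam + mu)
extinct = 0
for _ in range(reps):
    n = 1
    while 0 < n < cap:
        # each event is a birth w.p. lam/(lam+mu), else a death
        n += 1 if rng.random() < p_birth else -1
    extinct += (n == 0)

print("simulated extinction prob:", extinct / reps, " vs mu/lam =", mu / lam)
```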
(iii) Linear growth with immigration: For linear growth, $\lambda_0 = 0$, and once the population reaches 0, it is bound to remain there; 0 becomes an absorbing state. However, if we assume that, along with births, additions to the population are possible through immigration also, i.e., some organisms from other populations may join the population under consideration, so that $\lambda_n = n\lambda + a$ for some $a > 0$, then 0 is no longer an absorbing state. As soon as the population reaches 0, some other organisms join the system and the population will never become permanently extinct (0 acts as a reflecting barrier). This process is the process of linear growth with immigration.
(iv) Immigration-death process: If $\lambda_n = \lambda$ and $\mu_n = n\mu$ ($n \ge 0$), then the birth rate is constant and the death rate is a linear function of $n$; the process is then known as an immigration-death process.
(v) Pure death process: In this case, $\lambda_n = 0$ for all $n$, i.e., there are no new births, and the population can only decrease from the members present at the start. With a linear death rate $\mu_n = n\mu$,
\[
P(\text{a death in } (t, t+h] \mid N(t) = n) = n\mu h + o(h),
\]
and,
\[
p_n(t+h) = p_n(t)(1 - n\mu h) + p_{n+1}(t)(n+1)\mu h + o(h)
\]
\[
\Rightarrow p_n'(t) = -n\mu\, p_n(t) + (n+1)\mu\, p_{n+1}(t), \qquad n \ge 1 \tag{4.16}
\]
\[
p_0(t+h) = p_0(t) + p_1(t)\, \mu h + o(h)
\]
\[
\Rightarrow p_0'(t) = \mu\, p_1(t). \tag{4.17}
\]
(4.16) and (4.17) are the differential-difference equations of a pure death process.
To obtain an expression for $p_n(t)$, we assume that initially $i$ individuals were present when the process began, i.e., $p_i(0) = 1$.
For $n = i$,
\[
p_i'(t) = -i\mu\, p_i(t)
\]
\[
\frac{p_i'(t)}{p_i(t)} = -i\mu, \quad \text{or} \quad \frac{d}{dt}\left( \ln p_i(t) \right) = -i\mu
\]
\[
\Rightarrow p_i(t) = c\, e^{-i\mu t}, \qquad c \text{ being the constant of integration.}
\]
Initially, $p_i(0) = 1 \Rightarrow c = 1 \Rightarrow p_i(t) = e^{-i\mu t}$.
For $n = i-1$,
\[
p_{i-1}'(t) = -(i-1)\mu\, p_{i-1}(t) + i\mu\, p_i(t),
\]
\[
p_{i-1}'(t) + (i-1)\mu\, p_{i-1}(t) = i\mu\, e^{-i\mu t}.
\]
The integrating factor for the equation is $e^{(i-1)\mu t}$:
\[
\frac{d}{dt}\left( e^{(i-1)\mu t}\, p_{i-1}(t) \right) = i\mu\, e^{-\mu t}
\]
\[
\text{or,} \quad e^{(i-1)\mu t}\, p_{i-1}(t) = -i\, e^{-\mu t} + c.
\]
At $t = 0$, $p_{i-1}(0) = 0 \Rightarrow c = i$, so
\[
p_{i-1}(t) = i\left( 1 - e^{-\mu t} \right) e^{-(i-1)\mu t} = \binom{i}{1}\left( 1 - e^{-\mu t} \right) e^{-(i-1)\mu t}.
\]
Proceeding in the same way, we get
\[
p_n(t) = \binom{i}{n}\left( 1 - e^{-\mu t} \right)^{i-n} e^{-n\mu t}; \qquad n = 0, 1, 2, \dots, i,
\]
i.e., $N(t) \sim \text{Bin}\!\left( i,\ e^{-\mu t} \right)$.
For the mean and variance, multiplying (4.16) by $n$ and summing,
\[
\sum_{n \ge 1} n\, p_n'(t) = -\mu \sum_{n \ge 1} n^2 p_n(t) + \mu \sum_{n \ge 1} n(n+1)\, p_{n+1}(t)
\]
\[
M'(t) = -\mu M_2(t) + \mu\left( M_2(t) - M(t) \right) = -\mu M(t),
\]
where, as before,
\[
M_2(t) = E(N^2(t)) = \sum_{n \ge 1} n^2 p_n(t), \qquad M(t) = E(N(t)) = \sum_{n \ge 1} n\, p_n(t).
\]
Hence
\[
M(t) = c\, e^{-\mu t}; \qquad M(0) = i \Rightarrow c = i \Rightarrow M(t) = i\, e^{-\mu t}.
\]
Similarly, multiplying (4.16) by $n^2$ and summing,
\[
M_2'(t) = -\mu M_3(t) + \mu \sum_{n \ge 1} \left[ (n+1)^3 - 2(n+1)^2 + (n+1) \right] p_{n+1}(t) = -\mu M_3(t) + \mu\left( M_3(t) - 2M_2(t) + M(t) \right),
\]
where $M_3(t) = \sum_{n \ge 1} n^3 p_n(t)$, so that
\[
M_2'(t) = -2\mu M_2(t) + \mu M(t),
\]
or,
\[
M_2'(t) + 2\mu M_2(t) = i\mu\, e^{-\mu t}
\]
\[
\Rightarrow e^{2\mu t} M_2(t) = i\, e^{\mu t} + c, \qquad c \text{ being the constant of integration.}
\]
Now, $M_2(0) = i^2 \Rightarrow c = i^2 - i$, so
\[
M_2(t) = i\, e^{-\mu t} + (i^2 - i)\, e^{-2\mu t}
\]
\[
\Rightarrow \operatorname{Var}(N(t)) = M_2(t) - M^2(t) = i\, e^{-\mu t}\left( 1 - e^{-\mu t} \right),
\]
in agreement with the $\text{Bin}\!\left( i,\ e^{-\mu t} \right)$ distribution obtained above.
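The binomial form of $p_n(t)$ has a direct probabilistic reading: each of the $i$ initial individuals survives to time $t$ independently with probability $e^{-\mu t}$. A minimal sketch (assumed values) checks the mean and variance just derived.

```python
# Minimal sketch: pure death process as i independent Exp(mu) lifetimes,
# so N(t) ~ Bin(i, e^{-mu t}).
import numpy as np

rng = np.random.default_rng(12)
mu, t, i, reps = 0.5, 2.0, 10, 200_000
lifetimes = rng.exponential(1 / mu, (reps, i))
survivors = (lifetimes > t).sum(axis=1)        # N(t) in each replicate

p_survive = np.exp(-mu * t)
print("mean", survivors.mean(), " vs i e^(-mu t) =", i * p_survive)
print("var ", survivors.var(), " vs i e^(-mu t)(1 - e^(-mu t)) =",
      i * p_survive * (1 - p_survive))
```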