
Continuous Time Markov Chains

4.1 Introduction

Consider the following processes:

(i) A radioactive system emits a stream of α-particles, which reach a Geiger counter. A Geiger counter is a defective counter: it gets locked for some time when a radioactive particle strikes it. While an unlocked counter is capable of recording the arrival of particles, a locked counter does not record the event. Let X(t) be the number of recordings by a Geiger counter during the time interval (0, t]. Suppose that the half-life of the substance (the time taken for the mass of the matter to reduce to half through disintegration) is large compared to t. Then X(t) is the sum of a large number of i.i.d. Bernoulli events, each having a probability p (say) of being recorded.

(ii) A large pond contains a large number of fish. Let X(t) be the number of fish caught in the time interval (0, t]. The chance of catching a fish is independent of the number of fish already caught, and each attempt at catching a fish is a Bernoulli trial with probability p (say) of success. Further, the chance of catching a fish at the next time point is the same irrespective of the time elapsed since the last success. In other words, there is "no premium for waiting".

(iii) Recall the queuing system with one server who serves the customers on a "first come, first served" basis. Let X(t) be the length of the queue at time t, when customers join the system at a constant rate and hence the inter-arrival times are i.i.d. random variables.

All the above processes represent situations in which the number of trials is large and each trial independently follows a Bernoulli law. The probability of exactly one occurrence of the event in a very small interval is essentially constant, while the probability of two or more occurrences in the same interval is of smaller order, $o(h)$. All of these processes are Poisson processes.


4.2 Poisson process

Let $\{N(t),\ t \in T = [0,\infty)\}$ be a non-negative counting process with discrete state space $S = \{0, 1, 2, \dots\}$, where $N(t)$ denotes the number of occurrences of a random, rare event $E$ by time $t$. Further, let

$$p_n(t) = P(N(t) = n) = P(n \text{ occurrences of } E \text{ in the interval } (0,t]), \quad n = 0, 1, 2, \dots$$

$$\sum_{n=0}^{\infty} p_n(t) = 1$$

i.e., $p_n(t)$ is a function of time. Under certain conditions, called the postulates or assumptions, $N(t) \sim \text{Pois}(\lambda t)$; then $\{N(t),\ t \in T = [0,\infty)\}$ is called a Poisson process.

Postulates of the Poisson process: There are three basic postulates or assumptions of the Poisson process:

(i) Independence: $N(t)$ is Markovian, i.e., the number of occurrences of $E$ in $(0,t]$ is independent of the number of occurrences of $E$ in any interval prior to $(0,t]$.

(ii) Homogeneity in time: $p_n(t)$ depends only on the length $t$ of the time interval $(0,t]$ and not on where the interval is situated on the time axis.

(iii) Regularity (orderliness): In an interval $(t, t+h]$ of infinitesimal length $h$, exactly one event occurs with probability $\lambda h + o(h)$, and the probability of more than one event is $o(h)$, i.e.,

$$p_1(h) = \lambda h + o(h), \qquad \lim_{h\to 0}\frac{o(h)}{h} = 0$$

and

$$\sum_{k=2}^{\infty} p_k(h) = o(h)$$

Since $\sum_{n=0}^{\infty} p_n(h) = 1$, we get $p_0(h) = 1 - p_1(h) - \sum_{k\ge 2} p_k(h)$, so

$$p_0(h) = 1 - \lambda h + o(h) = P(\text{no event in } (t, t+h])$$

Under these postulates, we prove the following theorem:


Theorem 4.1: Under the conditions of independence, homogeneity and orderliness, $N(t) \sim \text{Pois}(\lambda t)$, i.e.,

$$p_n(t) = e^{-\lambda t}\frac{(\lambda t)^n}{n!}, \quad n = 0, 1, 2, \dots$$

Proof: Consider $p_n(t+h)$ for $n \ge 0$, splitting the interval $(0, t+h]$ into $(0,t]$ and $(t, t+h]$.

For $n \ge 1$,

$$p_n(t+h) = P(n \text{ events occur in the interval } (0, t+h])$$

$$= P(n \text{ events in } (0,t])\,P(\text{no event in } (t, t+h]) + P(n-1 \text{ events in } (0,t])\,P(\text{one event in } (t, t+h]) + \sum_{k=2}^{n} P(n-k \text{ events in } (0,t])\,P(k \text{ events in } (t, t+h])$$

$$= p_n(t)(1 - \lambda h) + p_{n-1}(t)\lambda h + o(h)$$

$$\Rightarrow \frac{p_n(t+h) - p_n(t)}{h} = -\lambda p_n(t) + \lambda p_{n-1}(t) + \frac{o(h)}{h}$$

$$\Rightarrow \lim_{h\to 0}\frac{p_n(t+h) - p_n(t)}{h} = -\lambda p_n(t) + \lambda p_{n-1}(t) + \lim_{h\to 0}\frac{o(h)}{h}$$

or,

$$p_n'(t) = -\lambda p_n(t) + \lambda p_{n-1}(t), \quad n \ge 1 \qquad (4.1)$$

For $n = 0$,

$$p_0(t+h) = p_0(t)\,p_0(h) = p_0(t)(1 - \lambda h + o(h))$$

$$\Rightarrow \lim_{h\to 0}\frac{p_0(t+h) - p_0(t)}{h} = -\lambda p_0(t) \Rightarrow \frac{p_0'(t)}{p_0(t)} = -\lambda$$

or, $\log p_0(t) = -\lambda t + c$, where $c$ is the arbitrary constant of integration,

$$\Rightarrow p_0(t) = c_1 e^{-\lambda t}, \quad c_1 = e^{c}$$

Initially, at $t = 0$, $p_0(0) = 1 \Rightarrow c_1 = 1$, so

$$p_0(t) = e^{-\lambda t} \qquad (4.2)$$

Putting $n = 1$ in (4.1), we have

$$p_1'(t) + \lambda p_1(t) = \lambda p_0(t) = \lambda e^{-\lambda t}$$

The integrating factor of this differential equation is $e^{\lambda t}$, so

$$\frac{d}{dt}\left(p_1(t)e^{\lambda t}\right) = \lambda \Rightarrow p_1(t)e^{\lambda t} = \lambda t + c$$

At $t = 0$, $p_1(0) = 0 \Rightarrow c = 0$

$$\Rightarrow p_1(t) = \lambda t\, e^{-\lambda t}$$

Let

$$p_{n-1}(t) = e^{-\lambda t}\frac{(\lambda t)^{n-1}}{(n-1)!} \qquad (4.3)$$

Then from (4.1),

$$p_n'(t) + \lambda p_n(t) = \lambda e^{-\lambda t}\frac{(\lambda t)^{n-1}}{(n-1)!}$$

Multiplying both sides by $e^{\lambda t}$, we have

$$e^{\lambda t}\left(p_n'(t) + \lambda p_n(t)\right) = \frac{\lambda^n t^{n-1}}{(n-1)!}$$

Integrating both sides with respect to $t$,

$$e^{\lambda t}p_n(t) = \frac{(\lambda t)^n}{n!} + c$$

$p_n(0) = 0 \Rightarrow c = 0$

$$\Rightarrow p_n(t) = e^{-\lambda t}\frac{(\lambda t)^n}{n!}, \quad n = 1, 2, 3, \dots$$

Thus, by mathematical induction, we have shown that $N(t) \sim \text{Pois}(\lambda t)$, $n = 0, 1, 2, \dots$
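Before moving on, a quick numerical sanity check is possible (not part of the proof): the postulates say that on a fine grid each slot of width $h$ carries one event with probability roughly $\lambda h$. A minimal Python sketch, with $\lambda$, $t$ and $h$ chosen as illustrative values, compares the resulting counts with the Poisson pmf:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
lam, t, h = 2.0, 3.0, 1e-4          # assumed rate, horizon, slot width
n_slots = int(t / h)
reps = 100_000

# Sum of n_slots i.i.d. Bernoulli(lam*h) trials = Binomial(n_slots, lam*h)
counts = rng.binomial(n_slots, lam * h, size=reps)

for n in range(9):
    print(n, np.mean(counts == n).round(4), poisson.pmf(n, lam * t).round(4))
```

The empirical frequencies agree with $e^{-\lambda t}(\lambda t)^n/n!$ to within Monte Carlo error, as the theorem predicts.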

Probability generating function and characteristic function of a Poisson process:

An alternative and more elegant technique for obtaining the result is the generating-function technique. Define the probability generating function

$$P(s,t) = \sum_{n=0}^{\infty} p_n(t)s^n = \sum_{n=0}^{\infty} P(N(t) = n)s^n = E\left(s^{N(t)}\right)$$

so that

$$P(s,0) = \sum_{n=0}^{\infty} p_n(0)s^n = p_0(0) + sp_1(0) + s^2 p_2(0) + \cdots = p_0(0) = 1$$

Differentiating $P(s,t)$ partially w.r.t. $t$, we have

$$\frac{\partial P(s,t)}{\partial t} = \frac{\partial}{\partial t}\sum_{n=0}^{\infty} p_n(t)s^n = \sum_{n=0}^{\infty} p_n'(t)s^n = p_0'(t) + \sum_{n=1}^{\infty} p_n'(t)s^n \qquad (4.4)$$

From (4.1), we have $p_n'(t) = -\lambda p_n(t) + \lambda p_{n-1}(t)$, $n \ge 1$.

Multiplying this equation by $s^n$ and summing over all possible values of $n$, we have

$$\sum_{n=1}^{\infty} p_n'(t)s^n = -\lambda\sum_{n=1}^{\infty} p_n(t)s^n + \lambda\sum_{n=1}^{\infty} p_{n-1}(t)s^n \qquad (4.5)$$

where

$$\sum_{n=1}^{\infty} p_n'(t)s^n = \frac{\partial}{\partial t}P(s,t) - p_0'(t) \quad \text{(from (4.4))}$$

$$\sum_{n=1}^{\infty} p_n(t)s^n = P(s,t) - p_0(t), \qquad \sum_{n=1}^{\infty} p_{n-1}(t)s^n = sP(s,t)$$

so (4.5) becomes

$$\frac{\partial}{\partial t}P(s,t) - p_0'(t) = -\lambda\left(P(s,t) - p_0(t)\right) + \lambda s P(s,t)$$

$$\Rightarrow \frac{\partial}{\partial t}P(s,t) = \lambda(s-1)P(s,t) + \left(p_0'(t) + \lambda p_0(t)\right)$$

Since $p_0(t) = e^{-\lambda t}$ satisfies $p_0'(t) + \lambda p_0(t) = 0$,

$$\Rightarrow P(s,t) = ce^{\lambda(s-1)t}$$

As $P(s,0) = 1$, we get $c = 1$. Hence, the p.g.f. of the process is given by

$$P(s,t) = e^{\lambda(s-1)t} = e^{-\lambda t}\sum_{n=0}^{\infty}\frac{(\lambda t s)^n}{n!}$$

$$\Rightarrow p_n(t) = \text{coefficient of } s^n \text{ in } P(s,t) = e^{-\lambda t}\frac{(\lambda t)^n}{n!}, \quad n \ge 0$$

$$\Rightarrow N(t) \sim \text{Pois}(\lambda t), \quad n = 0, 1, 2, \dots$$

The characteristic function of this process is given by

$$\phi(s) = E\left(e^{isN(t)}\right) = \sum_{n=0}^{\infty} e^{-\lambda t}\frac{(\lambda t)^n}{n!}e^{isn} = e^{\lambda t(e^{is}-1)}$$

Deductions:

(i) $E(N(t)) = \lambda t$ and $\text{Var}(N(t)) = \lambda t$, i.e., the mean and the variance of $N(t)$ depend on $t$; as such, the process is evolutionary.

(ii) In an interval of unit length, the mean number of occurrences is $\lambda$. This is called the parameter of the process.

(iii) The Poisson process is a continuous-parameter, discrete-state-space stochastic process, but $E(N(t))$ is a continuous, non-random function of $t$.

(iv) The Poisson process has independent and stationary (time-homogeneous) increments.

(v) If $E$ occurred $r$ times up to the initial instant 0 from which $t$ is measured, then the initial condition is $p_r(0) = 1$, $p_n(0) = 0\ \forall\, n \ne r$, and

$$p_n(t) = P(N(t) - N(0) = n - r) = \begin{cases} e^{-\lambda t}\dfrac{(\lambda t)^{n-r}}{(n-r)!}, & n \ge r \\[1ex] 0, & n < r \end{cases}$$

(vi) As $t \to \infty$, for a Poisson process $N(t)$,

$$P\left(\left|\frac{N(t)}{t} - \lambda\right| > \epsilon\right) \to 0, \quad \text{where } \epsilon > 0 \text{ is a preassigned number.}$$

Using Chebyshev's inequality for a random variable $X$,

$$P(|X - E(X)| > a) \le \frac{\text{Var}(X)}{a^2}, \quad a > 0$$

Put $X = N(t)$:

$$P(|N(t) - \lambda t| > a) \le \frac{\lambda t}{a^2}, \quad a > 0$$

or,

$$P\left(\left|\frac{N(t)}{t} - \lambda\right| > \frac{a}{t}\right) \le \frac{\lambda t}{a^2}$$

Let $\epsilon = \dfrac{a}{t}$; then

$$P\left(\left|\frac{N(t)}{t} - \lambda\right| > \epsilon\right) \le \frac{\lambda}{\epsilon^2 t}$$

$$\Rightarrow \lim_{t\to\infty} P\left(\left|\frac{N(t)}{t} - \lambda\right| > \epsilon\right) = 0$$

i.e., for large $t$, $\dfrac{N(t)}{t}$ can be taken as an estimate of $\lambda$.
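This estimator is easy to illustrate numerically; a minimal sketch, with $\lambda$ and the horizon $t$ chosen as assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t = 2.5, 10_000.0              # assumed rate and a long horizon
n_events = rng.poisson(lam * t)     # N(t) ~ Pois(lam * t)
print("N(t)/t =", n_events / t)     # close to lam = 2.5 for large t
```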

4.3 Properties of Poisson process

(i) Additive property: The sum of two independent Poisson processes is again a Poisson process.

Proof: Let $N_1(t)$ and $N_2(t)$ be two independent Poisson processes with parameters $\lambda_1$ and $\lambda_2$ respectively, and let $N(t) = N_1(t) + N_2(t)$. Then

$$P(N(t) = n) = \sum_{r=0}^{n} P(N_1(t) = r,\ N_2(t) = n-r)$$

$$= \sum_{r=0}^{n} P(N_1(t) = r)\,P(N_2(t) = n-r) \quad \text{(by independence)}$$

$$= \sum_{r=0}^{n} e^{-\lambda_1 t}\frac{(\lambda_1 t)^r}{r!}\, e^{-\lambda_2 t}\frac{(\lambda_2 t)^{n-r}}{(n-r)!} = e^{-(\lambda_1+\lambda_2)t}\, t^n \sum_{r=0}^{n}\frac{\lambda_1^r\,\lambda_2^{n-r}}{r!\,(n-r)!}$$

$$= e^{-(\lambda_1+\lambda_2)t}\, t^n\,\frac{(\lambda_1+\lambda_2)^n}{n!} = e^{-(\lambda_1+\lambda_2)t}\,\frac{\left((\lambda_1+\lambda_2)t\right)^n}{n!}, \quad n \ge 0$$

$$\Rightarrow N(t) \sim \text{Pois}((\lambda_1+\lambda_2)t)$$

Alternatively, let the p.g.f. of $N_i(t)$, $i = 1, 2$, be $E\left(s^{N_i(t)}\right) = e^{\lambda_i(s-1)t}$.

The p.g.f. of $N(t)$ is

$$E\left(s^{N(t)}\right) = E\left(s^{N_1(t)+N_2(t)}\right) = E\left(s^{N_1(t)}\right)E\left(s^{N_2(t)}\right) = e^{\lambda_1(s-1)t}\,e^{\lambda_2(s-1)t} = e^{(\lambda_1+\lambda_2)(s-1)t}$$

$$\Rightarrow N(t) \sim \text{Pois}((\lambda_1+\lambda_2)t)$$

The c.f. of $N(t)$ is

$$\phi(s) = E\left(e^{isN(t)}\right) = E\left(e^{is(N_1(t)+N_2(t))}\right) = E\left(e^{isN_1(t)}\right)E\left(e^{isN_2(t)}\right) = e^{(\lambda_1+\lambda_2)t(e^{is}-1)}$$
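A short simulation confirms the additive property; the rates and horizon below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
lam1, lam2, t, reps = 1.0, 2.0, 1.5, 100_000

total = rng.poisson(lam1 * t, reps) + rng.poisson(lam2 * t, reps)
for n in range(7):
    print(n, np.mean(total == n).round(4), poisson.pmf(n, (lam1 + lam2) * t).round(4))
```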

(ii) Difference of two independent Poisson processes.

Proof: Let $N_1(t)$ and $N_2(t)$ be two independent Poisson processes with parameters $\lambda_1$ and $\lambda_2$ respectively, and let $N(t) = N_1(t) - N_2(t)$. Then

$$P(N(t) = n) = \sum_{r=0}^{\infty} P(N_1(t) = n+r,\ N_2(t) = r) = \sum_{r=0}^{\infty} e^{-\lambda_1 t}\frac{(\lambda_1 t)^{n+r}}{(n+r)!}\, e^{-\lambda_2 t}\frac{(\lambda_2 t)^r}{r!}$$

$$= e^{-(\lambda_1+\lambda_2)t}\left(\frac{\lambda_1}{\lambda_2}\right)^{n/2}\sum_{r=0}^{\infty}\frac{\left(t\sqrt{\lambda_1\lambda_2}\right)^{n+2r}}{r!\,(n+r)!} = e^{-(\lambda_1+\lambda_2)t}\left(\frac{\lambda_1}{\lambda_2}\right)^{n/2} I_n\!\left(2t\sqrt{\lambda_1\lambda_2}\right)$$

where

$$I_n(x) = \sum_{r=0}^{\infty}\frac{(x/2)^{n+2r}}{r!\,(n+r)!}$$

is the modified Bessel function of order $n\ (\ge -1)$. Thus $N(t)$ is not a Poisson process.

Alternatively, the p.g.f. of $N(t)$ is

$$E\left(s^{N(t)}\right) = E\left(s^{N_1(t)-N_2(t)}\right) = E\left(s^{N_1(t)}\right)E\left(s^{-N_2(t)}\right) = E\left(s^{N_1(t)}\right)E\left(\left(\tfrac{1}{s}\right)^{N_2(t)}\right)$$

$$= e^{\lambda_1(s-1)t}\,e^{\lambda_2\left(\frac{1}{s}-1\right)t} = e^{-(\lambda_1+\lambda_2)t}\,e^{\left(\lambda_1 s + \frac{\lambda_2}{s}\right)t}$$

Then $p_n(t)$ is the coefficient of $s^n$ in the expansion of $E\left(s^{N(t)}\right)$.

$$E(N(t)) = (\lambda_1 - \lambda_2)t$$

$$E(N^2(t)) = (\lambda_1 + \lambda_2)t + (\lambda_1 - \lambda_2)^2 t^2$$

$$\Rightarrow \text{Var}(N(t)) = (\lambda_1 + \lambda_2)t$$

The c.f. of $N(t)$ is

$$\phi(s) = E\left(e^{isN(t)}\right) = E\left(e^{is(N_1(t)-N_2(t))}\right) = E\left(e^{isN_1(t)}\right)E\left(e^{-isN_2(t)}\right) = e^{t\left(\lambda_1 e^{is} + \lambda_2 e^{-is}\right) - t(\lambda_1+\lambda_2)}$$
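The distribution of the difference is known as the Skellam distribution, which scipy exposes directly; a sketch, with assumed rates:

```python
import numpy as np
from scipy.stats import skellam

rng = np.random.default_rng(3)
lam1, lam2, t, reps = 3.0, 1.0, 1.0, 100_000

diff = rng.poisson(lam1 * t, reps) - rng.poisson(lam2 * t, reps)
for n in range(-2, 5):
    print(n, np.mean(diff == n).round(4), skellam.pmf(n, lam1 * t, lam2 * t).round(4))
```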
(iii) Decomposition of a Poisson process: a random selection from a Poisson process.

Let $N(t)$, the number of occurrences of an event $E$ in an interval of length $t$, be a Poisson process with parameter $\lambda$. Further, let each occurrence of $E$ have a constant probability $p$ of being recorded, with the recording of one occurrence independent of the recording of any other occurrence and of $N(t)$.

If $M(t)$ is the number of occurrences recorded in an interval of length $t$, then $M(t)$ is also a Poisson process, with parameter $\lambda p$.
Proof:

$$P(M(t) = n) = \sum_{r=0}^{\infty} P(E \text{ occurs } n+r \text{ times by epoch } t \text{ and exactly } n \text{ of the } n+r \text{ occurrences are recorded})$$

$$= \sum_{r=0}^{\infty} P(N(t) = n+r)\binom{n+r}{n}p^n q^r, \quad q = 1 - p$$

$$= \sum_{r=0}^{\infty} e^{-\lambda t}\frac{(\lambda t)^{n+r}}{(n+r)!}\cdot\frac{(n+r)!}{n!\,r!}\,p^n q^r = e^{-\lambda t}\frac{(\lambda t p)^n}{n!}\sum_{r=0}^{\infty}\frac{(\lambda q t)^r}{r!}$$

$$= e^{-\lambda t}\frac{(\lambda t p)^n}{n!}\,e^{\lambda q t} = e^{-\lambda t(1-q)}\frac{(\lambda t p)^n}{n!} = e^{-\lambda t p}\frac{(\lambda t p)^n}{n!}$$

$$\Rightarrow M(t) \sim \text{Pois}(\lambda p t)$$

The c.f. of $M(t)$ is

$$\phi(s) = E\left(e^{isM(t)}\right) = e^{\lambda p t(e^{is}-1)}$$

Corollary:

1. If $M_1(t)$ is the number of events not recorded, then $M_1(t)$ is a Poisson process with parameter $\lambda(1-p) = \lambda q$. For example, a Geiger counter records radioactive disintegrations according to a Poisson law; the disintegrations that are not recorded also follow a Poisson law.

2. If a Poisson process is broken up into $r$ independent streams with probabilities $p_1, p_2, \dots, p_r$, $\sum_{i=1}^{r} p_i = 1$, then these $r$ independent streams are Poisson processes with parameters $\lambda p_1, \lambda p_2, \dots, \lambda p_r$ respectively.
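This "thinning" is also easy to check by simulation; a minimal sketch, with $\lambda$, $p$ and $t$ as assumed values:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
lam, p, t, reps = 4.0, 0.3, 1.0, 100_000

n_total = rng.poisson(lam * t, reps)    # occurrences of E by time t
recorded = rng.binomial(n_total, p)     # each occurrence recorded with prob p
for n in range(5):
    print(n, np.mean(recorded == n).round(4), poisson.pmf(n, lam * p * t).round(4))
```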

(iv) Poisson process and the binomial distribution: If $N(t)$ is a Poisson process, then for $s < t$,

$$P(N(s) = k \mid N(t) = n) = \binom{n}{k}\left(\frac{s}{t}\right)^k\left(1 - \frac{s}{t}\right)^{n-k}$$

Proof:

$$P(N(s) = k \mid N(t) = n) = \frac{P(N(s) = k,\ N(t) = n)}{P(N(t) = n)} = \frac{P(N(s) = k,\ N(t-s) = n-k)}{P(N(t) = n)}$$

$$= \frac{P(N(s) = k)\,P(N(t-s) = n-k)}{P(N(t) = n)} = \frac{e^{-\lambda s}\dfrac{(\lambda s)^k}{k!}\; e^{-\lambda(t-s)}\dfrac{\left(\lambda(t-s)\right)^{n-k}}{(n-k)!}}{e^{-\lambda t}\dfrac{(\lambda t)^n}{n!}}$$

$$= \frac{n!}{k!\,(n-k)!}\cdot\frac{s^k(t-s)^{n-k}}{t^n} = \binom{n}{k}\left(\frac{s}{t}\right)^k\left(1-\frac{s}{t}\right)^{n-k}$$
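This can be checked numerically using the standard fact (of which Theorem 4.5 below is the single-event case) that, given $N(t) = n$, the $n$ occurrence epochs are i.i.d. uniform on $(0, t]$; the values below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(5)
s, t, n, reps = 1.0, 3.0, 6, 100_000

epochs = rng.uniform(0, t, size=(reps, n))   # n occurrence epochs given N(t)=n
k = (epochs <= s).sum(axis=1)                # N(s) given N(t) = n
for kk in range(n + 1):
    print(kk, np.mean(k == kk).round(4), binom.pmf(kk, n, s / t).round(4))
```

Note that $\lambda$ does not appear: the conditional law depends only on the ratio $s/t$.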

(v) If $\{N(t),\ t \ge 0\}$ is a Poisson process, then the (auto)correlation coefficient between $N(t)$ and $N(t+s)$ is $\sqrt{\dfrac{t}{t+s}}$.

Proof: Let $\lambda$ be the parameter of the process; then

$$E(N(T)) = \lambda T; \quad \text{Var}(N(T)) = \lambda T; \quad E(N^2(T)) = \lambda T + (\lambda T)^2; \quad T = t,\ t+s.$$

$$E(N(t)N(t+s)) = E\left(N(t)\left(N(t+s) - N(t) + N(t)\right)\right)$$

$$= E(N^2(t)) + E\left(N(t)\left(N(t+s) - N(t)\right)\right) = E(N^2(t)) + E(N(t))\,E\left(N(t+s) - N(t)\right)$$

since $N(t)$ and the increment $N(t+s) - N(t)$ are independent. Hence

$$E(N(t)N(t+s)) = \lambda t + \lambda^2 t^2 + \lambda t\cdot\lambda s$$

$$\Rightarrow \text{Cov}(N(t), N(t+s)) = \lambda t + \lambda^2 t^2 + \lambda^2 ts - \lambda t\cdot\lambda(t+s) = \lambda t$$

$$\Rightarrow \text{Corr}(N(t), N(t+s)) = \frac{\lambda t}{\sqrt{\lambda t\cdot\lambda(t+s)}} = \sqrt{\frac{t}{t+s}}$$

In general,

$$\rho(N(t), N(t')) = \sqrt{\frac{\min(t, t')}{\max(t, t')}}$$
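A Monte Carlo sketch, built directly on the independent-increments property and with assumed parameter values, reproduces this correlation:

```python
import numpy as np

rng = np.random.default_rng(6)
lam, t, s, reps = 2.0, 2.0, 3.0, 200_000

n_t = rng.poisson(lam * t, reps)
n_ts = n_t + rng.poisson(lam * s, reps)   # N(t+s) = N(t) + independent increment
print("empirical:", np.corrcoef(n_t, n_ts)[0, 1].round(4))
print("theory:   ", np.sqrt(t / (t + s)).round(4))   # sqrt(2/5) ~ 0.632
```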

Example

(1) M/G/1 queue: Recall the queuing system in which customers join the system hoping for some service. A single server serves one customer (if any is present) only at the time points 0, 1, 2, ... The numbers of customers $Y_n$ arriving in the time intervals $(n, n+1)$ are i.i.d. random variables. The service station has a capacity of at most $c$ customers, including the one being served, and further arrivals are not entertained by the service station (lost customers). Further, the service times of successive arrivals are assumed to be independent random variables with a common distribution, say $G$, and they are independent of the arrivals. Then $\{X_n, n \ge 1\}$, the number of customers at time point $n$, is a Markov chain with state space $S = \{0, 1, 2, \dots, c\}$. We have

$$X_{n+1} = \begin{cases} Y_n, & \text{if } X_n = 0 \text{ and } 0 \le Y_n \le c-1 \\ X_n - 1 + Y_n, & \text{if } 1 \le X_n \le c \text{ and } 0 \le Y_n \le c - X_n \\ c, & \text{otherwise} \end{cases}$$

If the arrivals occur according to a Poisson process with rate $\lambda$, then

$$P(Y_n = j) = \int_0^{\infty} e^{-\lambda x}\frac{(\lambda x)^j}{j!}\,dG(x), \quad j = 0, 1, \dots$$

and the transition probabilities of the Markov chain $\{X_n, n \ge 0\}$ are

$$p_{ij} = \begin{cases} \displaystyle\int_0^{\infty} e^{-\lambda x}\frac{(\lambda x)^j}{j!}\,dG(x), & i = 0,\ j \ge 0 \\[2ex] \displaystyle\int_0^{\infty} e^{-\lambda x}\frac{(\lambda x)^{j-i+1}}{(j-i+1)!}\,dG(x), & j \ge i-1,\ i \ge 1 \\[1ex] 0, & \text{otherwise.} \end{cases}$$
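For a concrete illustration (an assumption, not part of the model above), take $G$ to be exponential with rate $\mu$; the integral then has the closed form $\mu\lambda^j/(\lambda+\mu)^{j+1}$, which a quadrature easily verifies:

```python
from math import exp, factorial
from scipy.integrate import quad

lam, mu = 2.0, 3.0   # assumed arrival rate and service rate

for j in range(5):
    # integrand: e^{-lam x} (lam x)^j / j!  times the Exp(mu) density mu e^{-mu x}
    f = lambda x, j=j: exp(-lam * x) * (lam * x) ** j / factorial(j) * mu * exp(-mu * x)
    val, _ = quad(f, 0, float("inf"))
    print(j, round(val, 5), round(mu * lam ** j / (lam + mu) ** (j + 1), 5))
```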



4.4 Poisson distribution and related distributions

Taking a cue from the above example, we can now identify some distributions that are closely associated with the Poisson process.

Inter-arrival time: Let $\{N(t),\ t \ge 0\}$ be a Poisson process with parameter $\lambda$. Let $X$ be the interval between two successive occurrences of the event $E$ for which $N(t)$ is the counting process. Then $X$, known as the inter-arrival time, is a random variable following an exponential distribution.

We state and prove the following result.

Theorem 4.2: The interval between two successive occurrences of a Poisson process $\{N(t),\ t \ge 0\}$ with parameter $\lambda$ has a negative exponential distribution with mean $\dfrac{1}{\lambda}$.

Proof: Let $X$ be the interval between two successive occurrences of $\{N(t),\ t \ge 0\}$, and let $F_X(x) = P(X \le x)$ be the c.d.f. of $X$.

Let $E_i$ and $E_{i+1}$ be two successive occurrences of the event $E$ for which $N(t)$ is the counting process, occurring at time epochs $t_i$ and $t_{i+1}$ respectively. Then,

$$P(X > x) = P(E_{i+1} \text{ did not occur in } (t_i, t_i + x] \mid E_i \text{ occurred at instant } t_i)$$

$$= P(E_{i+1} \text{ did not occur in } (t_i, t_i + x] \mid N(t_i) = i) = P(N(x) = 0 \mid N(t_i) = i) = p_0(x) = e^{-\lambda x}, \quad x \ge 0$$

Since $i$ is arbitrary, for the interval $X$ between any two successive occurrences,

$$F_X(x) = P(X \le x) = 1 - P(X > x) = 1 - e^{-\lambda x}, \quad x \ge 0$$

$$\Rightarrow f_X(x) = \frac{dF_X(x)}{dx} = \lambda e^{-\lambda x}, \quad x \ge 0$$

$$\Rightarrow X \sim \exp(\lambda)$$

The next result is an extension of this result.

Theorem 4.3: The intervals between successive occurrences of a Poisson process are i.i.d. exponential variables with common mean $\dfrac{1}{\lambda}$.

Proof: We prove the result by mathematical induction. Let the successive occurrence points of the Poisson process be $0 < t_1 < t_2 < \dots$. In the earlier theorem, we proved that the inter-arrival time between two successive occurrences is an exponential variable with mean $\dfrac{1}{\lambda}$. For three successive occurrences, if $X_i = t_{i+1} - t_i$, $i = 1, 2$, are the inter-arrival times, then

$$P(X_1 \le x_1,\ X_2 > x_2) = \int_0^{x_1} P(X_2 > x_2 \mid X_1 = x)\,f_{X_1}(x)\,dx$$

where, by independence of increments,

$$P(X_2 > x_2 \mid X_1 = x) = P(\text{no occurrence in an interval of length } x_2) = P(N(x_2) = 0) = e^{-\lambda x_2}$$

$$\Rightarrow P(X_1 \le x_1,\ X_2 > x_2) = e^{-\lambda x_2}\int_0^{x_1} \lambda e^{-\lambda x}\,dx = e^{-\lambda x_2}\left(1 - e^{-\lambda x_1}\right)$$

so $X_1$ and $X_2$ are independent exponential variables. Let the result hold for $k$ inter-arrival times $X_1, X_2, \dots, X_k$. Then, with $W_{k+1}$ the epoch of the $(k+1)$th occurrence,

$$P(X_1 \le x_1, \dots, X_k \le x_k,\ X_{k+1} > x_{k+1}) = \int_0^{x_1}\!\!\cdots\!\int_0^{x_k} P\left(X_{k+1} > x_{k+1} \mid X_1 = u_1, \dots, X_k = u_k\right)\lambda^k e^{-\lambda\sum_{i=1}^{k} u_i}\,du_1\cdots du_k$$

where

$$P\left(X_{k+1} > x_{k+1} \mid X_1 = u_1, \dots, X_k = u_k\right) = P\left(N\left(\sum_{i=1}^{k} u_i + x_{k+1}\right) - N\left(\sum_{i=1}^{k} u_i\right) = 0\right) = P(N(x_{k+1}) = 0) = e^{-\lambda x_{k+1}}$$

$$\Rightarrow P(X_1 \le x_1, \dots, X_k \le x_k,\ X_{k+1} > x_{k+1}) = e^{-\lambda x_{k+1}}\left(1 - e^{-\lambda x_1}\right)\cdots\left(1 - e^{-\lambda x_k}\right)$$

$$\Rightarrow X_1, X_2, \dots, X_{k+1} \text{ are i.i.d. exponential random variables.}$$

The converse of this theorem is equally true, and together with this theorem it gives a characterization of the Poisson process.

Theorem 4.4: If the intervals between successive occurrences of an event $E$ are independently and exponentially distributed with common mean $\dfrac{1}{\lambda}$, then the event $E$ has a Poisson process as its counting process.

Proof: Let $\{Z_i,\ i \ge 1\}$ be a sequence of i.i.d. negative exponential variables with common mean $\dfrac{1}{\lambda}$, where $Z_n$ is the interval between the $(n-1)$th and $n$th occurrences of the event $E$.

Define $W_n = Z_1 + Z_2 + \dots + Z_n$ as the waiting time up to the $n$th occurrence, i.e., the time from the origin to the $n$th occurrence. Then $W_n \sim \text{Gamma}(\lambda, n)$ with p.d.f.

$$g_{W_n}(x) = \frac{\lambda^n x^{n-1}e^{-\lambda x}}{(n-1)!}, \quad x > 0$$

and c.d.f. $F_{W_n}(t) = P(W_n \le t) = \displaystyle\int_0^t g_{W_n}(x)\,dx$.

Obviously, $\{N(t) < n\} \equiv \{W_n = Z_1 + Z_2 + \dots + Z_n > t\}$, i.e., the two c.d.f.'s $F_{N(t)}$ and $F_{W_n}$ satisfy the relation

$$F_{W_n}(t) = P(W_n \le t) = 1 - P(W_n > t) = 1 - P(N(t) < n) = 1 - P(N(t) \le n-1) = 1 - F_{N(t)}(n-1)$$

$$\Rightarrow F_{N(t)}(n-1) = 1 - F_{W_n}(t) = \int_t^{\infty}\frac{\lambda^n}{(n-1)!}x^{n-1}e^{-\lambda x}\,dx$$

$$= \frac{1}{(n-1)!}\int_{\lambda t}^{\infty} y^{n-1}e^{-y}\,dy \quad (\text{put } \lambda x = y)$$

$$= \sum_{j=0}^{n-1} e^{-\lambda t}\frac{(\lambda t)^j}{j!} \quad (\text{integration by parts})$$

$$\Rightarrow p_n(t) = P(N(t) = n) = F_{N(t)}(n) - F_{N(t)}(n-1) = e^{-\lambda t}\frac{(\lambda t)^n}{n!}$$

$$\Rightarrow N(t) \sim \text{Pois}(\lambda t), \quad t > 0$$

It may be noted that the Poisson process has independent, exponentially distributed inter-arrival times and Gamma-distributed waiting times.
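This characterization also gives the standard way to simulate a Poisson process: generate exponential gaps, accumulate them into waiting times, and count. A minimal sketch with assumed values of $\lambda$ and $t$:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
lam, t, reps, cap = 1.5, 2.0, 50_000, 40   # cap >> typical N(t); P(N(t) > 40) is negligible

gaps = rng.exponential(1 / lam, size=(reps, cap))   # Z_i, mean 1/lam
waits = gaps.cumsum(axis=1)                         # W_n = Z_1 + ... + Z_n
counts = (waits <= t).sum(axis=1)                   # N(t) = #{n : W_n <= t}
for n in range(6):
    print(n, np.mean(counts == n).round(4), poisson.pmf(n, lam * t).round(4))
```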

The next result explains the purely random nature of a Poisson process.

Theorem 4.5: If a Poisson process $N(t)$ has occurred exactly once by the time-point $T$, then the epoch $\tau \in [0,T]$ at which it occurred is uniformly distributed over $[0,T]$, i.e.,

$$P(t < \tau \le t + dt \mid N(T) = 1) = \frac{dt}{T}, \quad 0 \le t < T$$

Proof: We have

$$P(t < \tau \le t + dt) = \lambda e^{-\lambda t}\,dt, \qquad P(N(T) = 1) = \lambda T e^{-\lambda T}$$

and $P(N(T) = 1 \mid \tau = t) = e^{-\lambda(T-t)}$ is the probability that there was no further occurrence in the interval $(t, T]$. Hence,

$$P(t < \tau \le t + dt \mid N(T) = 1) = \frac{P(t < \tau \le t + dt,\ N(T) = 1)}{P(N(T) = 1)} = \frac{P(t < \tau \le t + dt)\,P(N(T) = 1 \mid \tau = t)}{P(N(T) = 1)}$$

$$= \frac{\lambda e^{-\lambda t}\,dt \cdot e^{-\lambda(T-t)}}{\lambda T e^{-\lambda T}} = \frac{dt}{T}$$

The result can be interpreted as follows: if a Poisson process has occurred exactly once by the time-point $T$, the occurrence is equally likely to have happened anywhere in $[0,T]$. This is why the Poisson process is called purely random.

We state some more results, which further emphasize the random nature of the Poisson process.

(i) For a Poisson process with parameter $\lambda$, the time interval up to the first occurrence also follows an exponential distribution with mean $\dfrac{1}{\lambda}$: if $X_0$ is the time up to the first occurrence, then

$$P(X_0 > x) = P(N(x) = 0) = p_0(x) = e^{-\lambda x}, \quad x \ge 0$$

i.e., $P(X_0 > x) = e^{-\lambda x}$, independent of $i$ and $t$.

(ii) Suppose that the interval $X$ is measured from an arbitrary point $t_i + r\ (r > 0)$ in the interval $(t_i, t_{i+1})$, and not from the point $t_i$ of the occurrence of $E_i$. Let $Y = t_{i+1} - (t_i + r)$. $Y$ is called the random modification of $X$, or the residual time of $X$. If $X$ is exponentially distributed, then so is its random modification $Y$, with the same mean. In other words, there is no premium for "waiting".

(iii) Suppose that $A$ and $B$ are two independent series of Poisson events with parameters $\lambda_1$ and $\lambda_2$ respectively. Define a random variable $N$ as the number of occurrences of $A$ between two successive occurrences of $B$. Then

$$N \sim \text{Geo}\left(\frac{\lambda_2}{\lambda_1+\lambda_2}\right).$$

Let $X$ be the random variable denoting the interval between two successive occurrences of $B$; then $f_X(x) = \lambda_2 e^{-\lambda_2 x}$, $x \ge 0$. Hence,

$$P(A \text{ occurs } k \text{ times in an arbitrary interval between two successive occurrences of } B) = P(N = k)$$

$$= \int_0^{\infty} e^{-\lambda_1 t}\frac{(\lambda_1 t)^k}{k!}\,f_X(t)\,dt = \int_0^{\infty} e^{-\lambda_1 t}\frac{(\lambda_1 t)^k}{k!}\,\lambda_2 e^{-\lambda_2 t}\,dt$$

$$= \frac{\lambda_1^k\lambda_2}{k!}\int_0^{\infty} t^k e^{-(\lambda_1+\lambda_2)t}\,dt = \frac{\lambda_1^k\lambda_2}{k!}\cdot\frac{k!}{(\lambda_1+\lambda_2)^{k+1}}$$

$$= \frac{\lambda_2}{\lambda_1+\lambda_2}\left(\frac{\lambda_1}{\lambda_1+\lambda_2}\right)^k, \quad k = 0, 1, 2, \dots$$

(iv) The above property can be generalized to define what we call a Poisson count process.

Poisson count process: Let $E$ and $E'$ be two random sequences of events occurring at instants $(t_1, t_2, \dots)$ and $(t_1', t_2', \dots)$ respectively. The number $N_n$ of occurrences of $E'$ in the interval $(t_{n-1}, t_n]$ is known as the count process of $E'$ in $E$.

If $E$ is a Poisson process, then the count process is called a Poisson count process. If, along with $E$, $E'$ is also a Poisson process, then the count process $N_n$ has a geometric distribution, and the $N_n\ (n = 1, 2, \dots)$ are i.i.d. geometric variates.

(v) Suppose that $A$ and $B$ are two independent series of Poisson events with parameters $\lambda_1$ and $\lambda_2$ respectively. Define a random variable $N$ as the number of occurrences of $A$ between every second occurrence of $B$. The interval between two alternate occurrences of $B$ is the sum of two independent exponential variates and has the density

$$f_X(x) = \lambda_2^2\, x\, e^{-\lambda_2 x}, \quad x \ge 0$$

Hence

$$P(k \text{ occurrences of } A \text{ between every second occurrence of } B) = \int_0^{\infty} e^{-\lambda_1 t}\frac{(\lambda_1 t)^k}{k!}\,\lambda_2^2\, t\, e^{-\lambda_2 t}\,dt$$

$$= \frac{\lambda_1^k\lambda_2^2}{k!}\int_0^{\infty} t^{k+1}e^{-(\lambda_1+\lambda_2)t}\,dt = \frac{\lambda_1^k\lambda_2^2}{k!}\cdot\frac{(k+1)!}{(\lambda_1+\lambda_2)^{k+2}}$$

$$= \binom{k+1}{1}\left(\frac{\lambda_2}{\lambda_1+\lambda_2}\right)^2\left(\frac{\lambda_1}{\lambda_1+\lambda_2}\right)^k, \quad k = 0, 1, 2, \dots$$

i.e., the distribution is negative binomial (corresponding to the convolution of two exponential distributions).

4.5 Generalizations of Poisson process

In the classical Poisson process, it is assumed that the conditional probabilities are constant, i.e., the probability of $k$ events in the interval $(t, t+h]$, given the occurrence of $n$ events by time-point $t$, is

$$p_k(h) = P(N(h) = k \mid N(t) = n) = \begin{cases} \lambda h + o(h), & k = 1 \\ o(h), & k \ge 2 \\ 1 - \lambda h + o(h), & k = 0 \end{cases}$$

i.e., $p_k(h)$ is independent of $n$ as well as $t$. This process can be generalized by considering $\lambda$ to be no longer a constant but a function of $n$ or $t$ or both. The generalized process is again Markovian in nature.

This generalized process has excellent interpretations in terms of birth-death processes. Consider a population of organisms, which reproduce to create similar organisms. The population is dynamic, as there are additions in terms of births and deletions in terms of deaths. Let $n$ be the size of the population at instant $t$. Depending upon the nature of additions and deletions in the population, various types of processes can be defined.
4.5.1 Pure birth process: Let $\lambda$ be a function of $n$, the size of the population at instant $t$. Then

$$p(k, h \mid n, t) = P(N(h) = k \mid N(t) = n) = \begin{cases} \lambda_n h + o(h), & k = 1 \\ o(h), & k \ge 2 \\ 1 - \lambda_n h + o(h), & k = 0 \end{cases} \qquad (4.6)$$

Then,

$$p_n(t+h) = p_n(t)(1 - \lambda_n h) + p_{n-1}(t)\lambda_{n-1}h + o(h), \quad n \ge 1$$

$$p_0(t+h) = p_0(t)(1 - \lambda_0 h) + o(h)$$

$$\Rightarrow p_n'(t) = -\lambda_n p_n(t) + \lambda_{n-1}p_{n-1}(t), \quad n \ge 1 \qquad (4.7)$$

$$p_0'(t) = -\lambda_0 p_0(t) \qquad (4.8)$$

This is a pure birth process (there are only births and no deaths, as $k$ is a non-negative integer). For specified initial conditions, an explicit expression for $p_n(t)$ can be obtained. Depending upon the form of $\lambda_n$, different processes are obtained.

(i) Yule-Furry process: Let $\lambda_n = n\lambda$. Then (4.7) and (4.8) can be written as

$$p_n'(t) = -n\lambda p_n(t) + (n-1)\lambda p_{n-1}(t), \quad n \ge 1$$

$$p_0'(t) = 0$$

Let the initial conditions be $p_1(0) = 1$, $p_i(0) = 0\ \forall\, i \ne 1$, i.e., the process starts with exactly one member at time $t = 0$.

Using the principle of mathematical induction, we now obtain an expression for $p_n(t)$.

For $n = 1$,

$$p_1'(t) = -\lambda p_1(t) \Rightarrow p_1(t) = c_1 e^{-\lambda t},$$

$c_1$ being the constant of integration. At $t = 0$, $p_1(0) = 1 \Rightarrow c_1 = 1$

$$\Rightarrow p_1(t) = e^{-\lambda t}$$

For $n = 2$,

$$p_2'(t) = -2\lambda p_2(t) + \lambda p_1(t) \Rightarrow p_2'(t) + 2\lambda p_2(t) = \lambda e^{-\lambda t}$$

The integrating factor for this equation is $e^{2\lambda t}$:

$$e^{2\lambda t}p_2(t) = \int \lambda e^{2\lambda t}e^{-\lambda t}\,dt + c_2 = e^{\lambda t} + c_2$$

Since $p_2(0) = 0 \Rightarrow c_2 = -1$

$$\Rightarrow p_2(t) = e^{-\lambda t}\left(1 - e^{-\lambda t}\right)$$

Let

$$p_{n-1}(t) = e^{-\lambda t}\left(1 - e^{-\lambda t}\right)^{n-2}$$

Now,

$$p_n'(t) + n\lambda p_n(t) = (n-1)\lambda p_{n-1}(t) = (n-1)\lambda e^{-\lambda t}\left(1 - e^{-\lambda t}\right)^{n-2}$$

Multiplying both sides by $e^{n\lambda t}$ and integrating, we have

$$e^{n\lambda t}p_n(t) = \int (n-1)\lambda e^{n\lambda t}e^{-\lambda t}\left(1 - e^{-\lambda t}\right)^{n-2}\,dt + c_n = e^{(n-1)\lambda t}\left(1 - e^{-\lambda t}\right)^{n-1} + c_n$$

Since $p_n(0) = 0 \Rightarrow c_n = 0$

$$\Rightarrow p_n(t) = e^{-\lambda t}\left(1 - e^{-\lambda t}\right)^{n-1}, \quad n \ge 1$$

and $p_0(t) = 0$.

$\Rightarrow \{p_n(t), n \ge 1\}$ is a geometric distribution with parameter $e^{-\lambda t}$ and p.g.f.

$$P(s,t) = \sum_{n=1}^{\infty} e^{-\lambda t}\left(1 - e^{-\lambda t}\right)^{n-1}s^n = \frac{se^{-\lambda t}}{1 - s\left(1 - e^{-\lambda t}\right)}$$

$$E(N(t)) = \left.\frac{\partial P(s,t)}{\partial s}\right|_{s=1} = e^{\lambda t}$$

$$\text{Var}(N(t)) = e^{\lambda t}\left(e^{\lambda t} - 1\right)$$
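The Yule-Furry process is simple to simulate directly from its definition (successive exponential holding times with rate $n\lambda$); a sketch, with $\lambda$ and $t$ as assumed values, checks the geometric law:

```python
import numpy as np

rng = np.random.default_rng(9)
lam, t, reps = 0.5, 2.0, 50_000

sizes = np.empty(reps, dtype=int)
for i in range(reps):
    n = 1
    clock = rng.exponential(1 / (n * lam))        # time of the first birth
    while clock <= t:
        n += 1
        clock += rng.exponential(1 / (n * lam))   # next birth occurs at rate n*lam
    sizes[i] = n

q = np.exp(-lam * t)                              # geometric parameter e^{-lam t}
for n in range(1, 6):
    print(n, np.mean(sizes == n).round(4), (q * (1 - q) ** (n - 1)).round(4))
print("mean:", sizes.mean().round(3), " theory:", np.exp(lam * t).round(3))
```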

4.5.2 Birth and death process: Now, along with additions to the population, we consider deletions as well, i.e., along with births, deaths are also possible. Define

$$q(k, h \mid n, t) = P(\text{number of deaths in } (t, t+h] = k \mid N(t) = n) = \begin{cases} \mu_n h + o(h), & k = 1 \\ o(h), & k \ge 2 \\ 1 - \mu_n h + o(h), & k = 0 \end{cases} \qquad (4.9)$$

For $k > n$, $q(k, h \mid n, t) = 0$, and $\mu_0 = 0$.

(4.6) and (4.9) together constitute a birth and death process. The probability of more than one birth or more than one death is $o(h)$. We wish to obtain

$$p_n(t) = P(N(t) = n)$$

To obtain the differential-difference equation for $p_n(t)$, we consider the time interval

$$(0, t+h] = (0, t] \cup (t, t+h]$$

Since both births and deaths are possible in the population, the event $\{N(t+h) = n\}$, $n \ge 1$, can occur in the following mutually exclusive ways:

$$E_{ij} \equiv \{n - i + j \text{ individuals at time-point } t,\ i \text{ births and } j \text{ deaths in } (t, t+h]\}, \quad i, j = 0, 1, 2, \dots$$

It is easy to see that $P(E_{ij}) = o(h)$ whenever $i + j \ge 2$. Therefore,

$$p_n(t+h) = P(E_{00}) + P(E_{10}) + P(E_{01}) + P(E_{11}) + o(h)$$

where

$$P(E_{00}) = p_n(t)\,P(\text{no birth and no death in } (t, t+h] \mid N(t) = n) = p_n(t)(1 - \lambda_n h + o(h))(1 - \mu_n h + o(h)) = p_n(t)\left(1 - (\lambda_n + \mu_n)h + o(h)\right)$$

$$P(E_{10}) = p_{n-1}(t)(\lambda_{n-1}h + o(h))(1 - \mu_{n-1}h + o(h)) = p_{n-1}(t)(\lambda_{n-1}h + o(h))$$

$$P(E_{01}) = p_{n+1}(t)(1 - \lambda_{n+1}h + o(h))(\mu_{n+1}h + o(h)) = p_{n+1}(t)(\mu_{n+1}h + o(h))$$

$$P(E_{11}) = p_n(t)(\lambda_n h + o(h))(\mu_n h + o(h)) = p_n(t)\,o(h) = o(h)$$

So for $n \ge 1$,

$$p_n(t+h) = p_n(t)\left(1 - (\lambda_n + \mu_n)h\right) + p_{n-1}(t)\lambda_{n-1}h + p_{n+1}(t)\mu_{n+1}h + o(h)$$

$$\Rightarrow p_n'(t) = -(\lambda_n + \mu_n)p_n(t) + \lambda_{n-1}p_{n-1}(t) + \mu_{n+1}p_{n+1}(t) \qquad (4.10)$$

For $n = 0$,

$$p_0(t+h) = p_0(t)(1 - \lambda_0 h + o(h))(1 - \mu_0 h + o(h)) + p_1(t)(1 - \lambda_1 h + o(h))(\mu_1 h + o(h))$$

$$= p_0(t) - \lambda_0 h\,p_0(t) + \mu_1 h\,p_1(t) + o(h)$$

$$\Rightarrow p_0'(t) = -\lambda_0 p_0(t) + \mu_1 p_1(t) \qquad (4.11)$$

Initially, at $t = 0$, if $i$ organisms are present in the population, then $p_n(0) = 0$, $n \ne i$, and $p_i(0) = 1$.

(4.10) and (4.11) represent the differential-difference equations of a birth and death process.

We make the following assertion: for arbitrary $\lambda_n \ge 0$, $\mu_n \ge 0$, there always exists a solution $p_n(t)\ (\ge 0)$ such that $\sum_n p_n(t) \le 1$. If $\lambda_n$ and $\mu_n$ are bounded, the solution is unique and satisfies $\sum_n p_n(t) = 1$.

4.5.3 Birth and death rates: Depending upon the values of $\lambda_n$ and $\mu_n$, various types of birth and death processes can be defined.

(i) Immigration: When $\lambda_n = \lambda$, i.e., $\lambda_n$ is independent of the population size $n$, the increase in the population can be regarded as due to an external source. The process is then known as an immigration process.

(ii) Emigration: When $\mu_n = \mu$, i.e., $\mu_n$ is independent of the population size $n$, the decrease in the population can be regarded as due to the elimination of some elements present in the population. The process is then known as an emigration process.

(iii) Linear birth process: When $\lambda_n = n\lambda$, then $\lambda_n h = n\lambda h$ is the conditional probability of one birth in an interval of length $h$, given that $n$ organisms are present at the beginning of the interval. Here $\lambda$ is the birth rate per organism per unit interval, and $\lambda_0 = 0$.

(iv) Linear death process: When $\mu_n = n\mu$, the process is known as a linear death process.

When specific values of both $\lambda_n$ and $\mu_n$ are considered simultaneously, we get the following processes:

(i) Immigration-emigration process: When $\lambda_n = \lambda$ and $\mu_n = \mu$, the process is known as an immigration-emigration process. This is the M/M/1 queue.

(ii) Linear growth process: If for a birth and death process

$$P(\text{an element of the population gives birth to a new member in a small interval of length } h) = \lambda h + o(h)$$

$$\Rightarrow P(\text{one birth in } (t, t+h] \mid N(t) = n) = n\lambda h + o(h)$$

and

$$P(\text{an element of the population dies in a small interval of length } h) = \mu h + o(h)$$

$$\Rightarrow P(\text{one death in } (t, t+h] \mid N(t) = n) = n\mu h + o(h)$$

i.e., if for a birth and death process $\lambda_n = n\lambda$ and $\mu_n = n\mu\ (n \ge 1)$, with $\lambda_0 = \mu_0 = 0$, then the process is a linear growth process.

This process, which is evolutionary in nature, has extensive applications in various fields, particularly in queuing theory. Now we shall analyze this process.

The differential-difference equations for this process are

$$p_n'(t) = -n(\lambda + \mu)p_n(t) + (n-1)\lambda p_{n-1}(t) + (n+1)\mu p_{n+1}(t), \quad n \ge 1 \qquad (4.12)$$

and

$$p_0'(t) = \mu p_1(t) \qquad (4.13)$$

(a) Generating function: Let the p.g.f. of $\{p_n(t)\}$ be

$$P(s,t) = \sum_{n=0}^{\infty} p_n(t)s^n$$

Then

$$\frac{\partial}{\partial s}P(s,t) = \sum_{n=1}^{\infty} n p_n(t)s^{n-1}, \qquad \frac{\partial}{\partial t}P(s,t) = \sum_{n=0}^{\infty} p_n'(t)s^n$$

Multiplying (4.12) by $s^n$, summing over $n = 1, 2, 3, \dots$, and then adding (4.13) to the result, we have

$$p_0'(t) + \sum_{n=1}^{\infty} p_n'(t)s^n = -(\lambda+\mu)\sum_{n=1}^{\infty} n p_n(t)s^n + \lambda\sum_{n=1}^{\infty}(n-1)p_{n-1}(t)s^n + \mu\sum_{n=1}^{\infty}(n+1)p_{n+1}(t)s^n + \mu p_1(t)$$

$$\Rightarrow \frac{\partial P}{\partial t} = -(\lambda+\mu)s\frac{\partial P}{\partial s} + \lambda s^2\frac{\partial P}{\partial s} + \mu\frac{\partial P}{\partial s} = \left(\mu - (\lambda+\mu)s + \lambda s^2\right)\frac{\partial P}{\partial s} = (\lambda s - \mu)(s-1)\frac{\partial P}{\partial s}$$

Under the initial condition $N(0) = i$, the solution of this partial differential equation is given by

$$P(s,t) = \left[\frac{\mu(1-s) - (\mu - \lambda s)e^{-(\lambda-\mu)t}}{\lambda(1-s) - (\mu - \lambda s)e^{-(\lambda-\mu)t}}\right]^i \qquad (4.14)$$

Expanding $P(s,t)$ as a power series in $s$, we get $p_n(t)$.

(b) Mean population size: Differentiating $P(s,t)$ partially w.r.t. $s$ at $s = 1$, we get the mean population size $M(t)$ as

$$M(t) = \left.\frac{\partial P(s,t)}{\partial s}\right|_{s=1} = ie^{(\lambda-\mu)t}$$

$$\text{As } t \to \infty, \quad M(t) \to \begin{cases} 0, & \lambda < \mu \\ \infty, & \lambda > \mu \\ i, & \lambda = \mu \end{cases}$$

Since this method involves differentiating a not-so-easy p.g.f., obtaining $M(t)$ this way may be a somewhat involved exercise. Alternatively, $M(t)$ can be obtained from (4.12) and (4.13) directly.


Now, M ( t )  E ( N ( t ))   np n ( t )
n 1

Multiplying both sides of (4.12) by n and adding over different values of n, we have

   
 np n '( t )   (    )  n2 p n ( t )    n ( n1) p n 1 ( t )    n ( n1) p n 1 ( t ) (4.15)
n 1 n 1 n 1 n 1

where,

  
 n ( n1) p n 1 ( t )   ( n 1) 2 p n 1 ( t )   ( n 1) p n 1 ( t )
n 1 n 1 n 1

 M 2 (t )  M (t )


2
where, M 2 ( t )  E ( N ( t ))   n2 p n ( t )
n 1
and,

  
 n ( n1) p n 1 ( t )   ( n 1) 2 p n 1 ( t )   ( n 1) p n 1 ( t )
n 1 n 1 n 1

 ( M 2 ( t ) p1 ( t ))( M ( t ) p1 ( t ))

 M 2 (t )  M (t )


and,  np n '( t )  M '( t )
n 1

Therefore, from (4.15), we get

M '( t )   (    ) M 2 ( t )    M 2 ( t ) M ( t )     M 2 ( t )M ( t ) 

 (   ) M (t )

(   ) t
 M ( t )  ce , c being the constant of integration.

(   ) t
Initially, M (0)  i  c  i  M ( t )  ie

Again, from (4.12),

$$\sum_{n=1}^{\infty} n^2 p_n'(t) = -(\lambda+\mu)\sum_{n=1}^{\infty} n^3 p_n(t) + \lambda\sum_{n=1}^{\infty} n^2(n-1)p_{n-1}(t) + \mu\sum_{n=1}^{\infty} n^2(n+1)p_{n+1}(t)$$

Writing $n^2(n-1) = (n-1)^3 + 2(n-1)^2 + (n-1)$ and $n^2(n+1) = (n+1)^3 - 2(n+1)^2 + (n+1)$, the two sums reduce to $M_3(t) + 2M_2(t) + M(t)$ and $M_3(t) - 2M_2(t) + M(t)$ respectively, so

$$M_2'(t) = 2(\lambda - \mu)M_2(t) + (\lambda + \mu)M(t)$$

or,

$$M_2'(t) - 2(\lambda-\mu)M_2(t) = (\lambda+\mu)\,ie^{(\lambda-\mu)t}$$

$$\Rightarrow M_2(t)e^{-2(\lambda-\mu)t} = -\frac{i(\lambda+\mu)}{\lambda-\mu}e^{-(\lambda-\mu)t} + c$$

Initially, $M_2(0) = i^2$

$$\Rightarrow c = i^2 + \frac{i(\lambda+\mu)}{\lambda-\mu}$$

$$\Rightarrow M_2(t) = -\frac{i(\lambda+\mu)}{\lambda-\mu}e^{(\lambda-\mu)t} + \left(i^2 + \frac{i(\lambda+\mu)}{\lambda-\mu}\right)e^{2(\lambda-\mu)t}$$

$$\Rightarrow \text{Var}(N(t)) = \frac{i(\lambda+\mu)}{\lambda-\mu}e^{(\lambda-\mu)t}\left(e^{(\lambda-\mu)t} - 1\right), \quad \lambda \ne \mu$$

If $\lambda = \mu$, then

$$M_2'(t) = (\lambda+\mu)M(t) = 2\lambda i \Rightarrow M_2(t) = 2\lambda it + c$$

At $t = 0$, $M_2(0) = i^2 \Rightarrow M_2(t) = 2\lambda it + i^2$

and $\text{Var}(N(t)) = 2\lambda it$.

(c) Probability of extinction: Since $\lambda_0 = \mu_0 = 0$, state 0 is an absorbing state: once the population reaches 0 it remains there, and the population is extinct.

Without any loss of generality, let $N(0) = 1$. Then (4.14) becomes

$$P(s,t) = \frac{\mu(1-s) - (\mu - \lambda s)e^{-(\lambda-\mu)t}}{\lambda(1-s) - (\mu - \lambda s)e^{-(\lambda-\mu)t}} = \frac{\mu\left(1 - e^{-(\lambda-\mu)t}\right) - s\left(\mu - \lambda e^{-(\lambda-\mu)t}\right)}{\left(\lambda - \mu e^{-(\lambda-\mu)t}\right) - \lambda s\left(1 - e^{-(\lambda-\mu)t}\right)}$$

$$= \frac{a - bs}{c - ds} = \frac{a}{c}\cdot\frac{1 - \frac{b}{a}s}{1 - \frac{d}{c}s}$$

where

$$a = \mu\left(1 - e^{-(\lambda-\mu)t}\right), \quad b = \mu - \lambda e^{-(\lambda-\mu)t}, \quad c = \lambda - \mu e^{-(\lambda-\mu)t}, \quad d = \lambda\left(1 - e^{-(\lambda-\mu)t}\right)$$

So,

$$p_0(t) = P(N(t) = 0) = P(0,t) = \frac{a}{c} = \frac{\mu\left(1 - e^{-(\lambda-\mu)t}\right)}{\lambda - \mu e^{-(\lambda-\mu)t}}$$

$$P(\text{the population will eventually become extinct}) = \lim_{t\to\infty} p_0(t) = \begin{cases} \dfrac{\mu}{\lambda}, & \lambda > \mu \\[1ex] 1, & \lambda \le \mu \end{cases}$$

and $\lim_{t\to\infty} p_n(t) = 0$ for $n > 0$ if $\lambda \ne \mu$.

The physical interpretation of the probability of extinction is that if the birth rate is less than (or equal to) the death rate, the population will ultimately become extinct with probability 1. If the birth rate is more than the death rate, the population becomes extinct with probability $\mu/\lambda$, which is less than unity.
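A Gillespie-style simulation illustrates the extinction probability; this is a sketch under assumed rates, in which runs reaching size 500 are treated as surviving (extinction from such a size has negligible probability when $\lambda > \mu$):

```python
import numpy as np

rng = np.random.default_rng(10)
lam, mu, cap, horizon, reps = 1.0, 0.6, 500, 200.0, 2_000

extinct = 0
for _ in range(reps):
    n, t = 1, 0.0
    while 0 < n < cap and t < horizon:
        t += rng.exponential(1 / (n * (lam + mu)))          # time to the next event
        n += 1 if rng.random() < lam / (lam + mu) else -1   # birth w.p. lam/(lam+mu)
    extinct += (n == 0)

print("empirical:", extinct / reps, " theory:", min(1.0, mu / lam))
```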

(iii) Linear growth with immigration: For linear growth, $\lambda_0 = 0$, and once the population reaches 0 it is bound to remain there; 0 is an absorbing state. However, if we assume that, along with births, additions to the population are possible through immigration as well, i.e., organisms from other populations may also join the population under consideration, then $\lambda_n = n\lambda + a\ (a > 0)$ and $\mu_n = n\mu\ (n \ge 1)$, with $\lambda_0 = a$ and $\mu_0 = 0$, and state 0 is no longer an absorbing state. As soon as the population reaches 0, other organisms join the system, and the population never becomes extinct (0 acts as a reflecting barrier). This process is the process of linear growth with immigration.

(iv) Immigration-death process: If $\lambda_n = \lambda$ and $\mu_n = n\mu\ (n \ge 0)$, i.e., the birth rate is constant and the death rate is a linear function of $n$, then the process is known as an immigration-death process. This is the self-service model of queuing theory ($M/M/\infty$).

(v) Pure death process: In this case, $\lambda_n = 0\ \forall\, n$, i.e., no new births beyond those present at the beginning of the process are possible, and

$$P(\text{a given individual dies in } (t, t+h]) = \mu h + o(h)$$

so,

$$P(\text{one death in } (t, t+h] \mid N(t) = n) = n\mu h + o(h)$$

This process is called a pure death process. Now, for this process, $\lambda_n = 0$ and $\mu_n = n\mu\ (n \ge 0)$:

$$p_n(t+h) = p_n(t)(1 - n\mu h) + p_{n+1}(t)(n+1)\mu h + o(h)$$

$$\Rightarrow p_n'(t) = -n\mu p_n(t) + (n+1)\mu p_{n+1}(t), \quad n \ge 1 \qquad (4.16)$$

and,

$$p_0(t+h) = p_0(t) + p_1(t)\mu h + o(h) \Rightarrow p_0'(t) = \mu p_1(t) \qquad (4.17)$$

(4.16) and (4.17) are the differential-difference equations of a pure death process.

To obtain an expression for $p_n(t)$, we assume that $i$ individuals were present initially, when the process began.

For $n = i$,

$$p_i'(t) = -i\mu p_i(t) \Rightarrow \frac{p_i'(t)}{p_i(t)} = -i\mu \Rightarrow \frac{d}{dt}\left(\ln p_i(t)\right) = -i\mu$$

$$\Rightarrow p_i(t) = ce^{-i\mu t}, \quad c \text{ being the constant of integration.}$$

Initially, $p_i(0) = 1 \Rightarrow c = 1 \Rightarrow p_i(t) = e^{-i\mu t}$

For $n = i - 1$,

$$p_{i-1}'(t) = -(i-1)\mu p_{i-1}(t) + i\mu p_i(t) \Rightarrow p_{i-1}'(t) + (i-1)\mu p_{i-1}(t) = i\mu e^{-i\mu t}$$

The integrating factor for the equation is $e^{(i-1)\mu t}$:

$$\frac{d}{dt}\left(e^{(i-1)\mu t}p_{i-1}(t)\right) = i\mu e^{-\mu t}$$

$$\Rightarrow e^{(i-1)\mu t}p_{i-1}(t) = i\mu\int e^{-\mu t}\,dt = -ie^{-\mu t} + c$$

At $t = 0$, $p_{i-1}(0) = 0 \Rightarrow c = i$

$$\Rightarrow p_{i-1}(t) = i\left(1 - e^{-\mu t}\right)e^{-(i-1)\mu t} = \binom{i}{1}\left(1 - e^{-\mu t}\right)\left(e^{-\mu t}\right)^{i-1}$$

Proceeding in a similar manner, we have

$$p_n(t) = \binom{i}{n}\left(1 - e^{-\mu t}\right)^{i-n}\left(e^{-\mu t}\right)^n, \quad n = 0, 1, 2, \dots, i$$

i.e., $N(t) \sim \text{Bin}\left(i, e^{-\mu t}\right)$.

Now we proceed to obtain the mean and variance of a pure death process.

Multiplying (4.16) by $n$ and summing over all possible values of $n$, we have

$$\sum_{n=1}^{\infty} n p_n'(t) = -\mu\sum_{n=1}^{\infty} n^2 p_n(t) + \mu\sum_{n=1}^{\infty} n(n+1)p_{n+1}(t) = -\mu M_2(t) + \mu\left(M_2(t) - M(t)\right)$$

where

$$M_2(t) = E(N^2(t)) = \sum_{n=1}^{\infty} n^2 p_n(t), \qquad M(t) = E(N(t)) = \sum_{n=1}^{\infty} n p_n(t)$$

$$\Rightarrow M'(t) = -\mu M(t) \Rightarrow M(t) = ce^{-\mu t}$$

Initially, $M(0) = i \Rightarrow c = i \Rightarrow M(t) = ie^{-\mu t}$.

Again, from (4.16),

$$\sum_{n=1}^{\infty} n^2 p_n'(t) = -\mu\sum_{n=1}^{\infty} n^3 p_n(t) + \mu\sum_{n=1}^{\infty} n^2(n+1)p_{n+1}(t) = -\mu M_3(t) + \mu\left(M_3(t) - 2M_2(t) + M(t)\right)$$

where $M_3(t) = \sum_{n=1}^{\infty} n^3 p_n(t)$,

$$\Rightarrow M_2'(t) = -2\mu M_2(t) + \mu M(t)$$

or,

$$M_2'(t) + 2\mu M_2(t) = i\mu e^{-\mu t}$$

$$\Rightarrow e^{2\mu t}M_2(t) = ie^{\mu t} + c, \quad c \text{ the constant of integration}$$

Now, $M_2(0) = i^2 \Rightarrow c = i^2 - i$

$$\Rightarrow M_2(t) = ie^{-\mu t} + \left(i^2 - i\right)e^{-2\mu t}$$

$$\Rightarrow \text{Var}(N(t)) = M_2(t) - M^2(t) = ie^{-\mu t}\left(1 - e^{-\mu t}\right)$$
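The binomial form of $p_n(t)$ suggests a direct simulation: each of the $i$ initial individuals survives to time $t$ independently with probability $e^{-\mu t}$. A sketch, with $\mu$, $t$ and $i$ as assumed values:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(11)
mu, t, i, reps = 0.7, 1.0, 10, 100_000

lifetimes = rng.exponential(1 / mu, size=(reps, i))   # i.i.d. Exp(mu) lifetimes
alive = (lifetimes > t).sum(axis=1)                   # N(t) = survivors at time t
p = np.exp(-mu * t)
for n in range(4):
    print(n, np.mean(alive == n).round(4), binom.pmf(n, i, p).round(4))
print("mean:", alive.mean().round(3), " theory:", (i * p).round(3))
```

The empirical mean and variance of the run agree with $ie^{-\mu t}$ and $ie^{-\mu t}(1 - e^{-\mu t})$ derived above.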
