Computer Networks: Modeling Arrivals and Service With Poisson
Introduction
In computer networks, packet arrivals and service are modeled as a stochastic process in which events occur at times $t_1, t_2, \ldots$ For instance, in the figure below, $t_1, t_2, \ldots$ can be interpreted as the packet arrival times, or the service completion times. Accordingly, $T_i$, defined as $t_i - t_{i-1}$ for $i > 1$, can be interpreted as the inter-arrival times of packets (intervals between subsequent arrivals), or the delays experienced by the served packets (assuming that the server is always busy). Similarly, $A(t)$ denotes the number of packets that arrive in $[0, t]$, or the number of packets served in $[0, t]$. In the rest of this document, we will refer to packets rather than service, but it should be clear that the discussion applies to both.
[Figure: a sample path showing the event times $t_1, t_2, t_3$, the intervals $T_1, T_2, T_3$, and the counting process $A(t)$.]
Poisson
The packet inter-arrival times are typically modeled as a Poisson process, i.e., the $T_i$ are independent and identically distributed (IID, so we drop the index $i$ from $T_i$ in the following expressions) and obey an exponential distribution:

$$F_T(t) = P(T \le t) = 1 - e^{-\lambda t}$$

where $\lambda$ is a parameter. We will give $\lambda$ a name shortly.
The probability density function of $T$ is therefore (the derivative of $1 - e^{-\lambda t}$ with respect to $t$):

$$f_T(t) = \lambda e^{-\lambda t}$$
Therefore,

$$P(t_1 \le T \le t_2) = \int_{t_1}^{t_2} \lambda e^{-\lambda t}\, dt = e^{-\lambda t_1} - e^{-\lambda t_2}$$

and

$$E[T] = \int_0^\infty t\, \lambda e^{-\lambda t}\, dt = \frac{1}{\lambda}$$

Therefore, the parameter $\lambda$ is called the arrival rate, or simply rate¹. Similarly,

$$E[T^2] = \int_0^\infty t^2\, \lambda e^{-\lambda t}\, dt = \frac{2}{\lambda^2}$$
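These two moments are easy to verify by Monte Carlo. The sketch below (my own illustration, not part of the original notes) samples from an exponential distribution with an arbitrarily chosen rate $\lambda = 2$ and compares the sample moments against $1/\lambda$ and $2/\lambda^2$:

```python
import random

random.seed(0)
lam = 2.0            # rate parameter lambda (arbitrary choice for this check)
n = 200_000

# Draw n exponential inter-arrival times T ~ Exp(lam).
samples = [random.expovariate(lam) for _ in range(n)]

mean_T = sum(samples) / n                   # estimate of E[T] = 1/lam
mean_T2 = sum(x * x for x in samples) / n   # estimate of E[T^2] = 2/lam^2
```

With $\lambda = 2$, both estimates should come out near $0.5$.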
Now we prove a unique property of the exponential distribution, known as the memoryless property. Consider the waiting time until some arrival occurs. The memoryless property states that, given that no arrival has occurred by time $\tau$, the distribution of the remaining waiting time is the same as it was originally. Mathematically,

$$P(T > \tau + t \mid T > \tau) = P(T > t)$$

The proof is a direct consequence of the exponential distribution:

$$P(T > \tau + t \mid T > \tau) = \frac{P(T > \tau + t)}{P(T > \tau)} = \frac{e^{-\lambda(\tau + t)}}{e^{-\lambda\tau}} = e^{-\lambda t} = P(T > t)$$
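The memoryless property can also be observed empirically: among samples that exceed some time $\tau$, the overshoot beyond $\tau$ has the same mean as the original distribution. A small sketch (mine; the parameter choices are arbitrary):

```python
import random

random.seed(1)
lam = 1.0
tau = 1.5            # conditioning time (arbitrary)
samples = [random.expovariate(lam) for _ in range(500_000)]

# Residual waiting time T - tau, for the samples with T > tau.
residual = [x - tau for x in samples if x > tau]

mean_T = sum(samples) / len(samples)           # estimate of E[T] = 1/lam
mean_residual = sum(residual) / len(residual)  # memorylessness: also ~1/lam
```

Both averages should be close to $1/\lambda = 1$; a non-memoryless distribution (e.g. uniform) would show a smaller residual mean.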
¹A mathematical result, known as the law of large numbers, says that if $X_i$ are IID, then $\lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n X_i = E[X]$. Therefore, $\lim_{n\to\infty} \frac{t_n}{n} = \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n T_i = E[T] = \frac{1}{\lambda}$, and hence $\lim_{n\to\infty} \frac{A(t_n)}{t_n} = \lambda$: arrivals occur at long-run rate $\lambda$.
A paradox

Consider a station where arrivals form a Poisson process with rate $\lambda$ such that $1/\lambda = 20$ minutes. When a customer arrives at the station at time $t$, the waiting time until the next arrival is $Z_1$, which is exponentially distributed (with parameter $\lambda$) and independent of everything prior to $t$. Therefore, $E[Z_1] = 1/\lambda = 20$ minutes. Now, when a customer arrives at the station at time $t$, what is the average time since the last departure? Let us call this time $Y_1$. We don't know how many arrivals/departures we had in $[0, t]$, so let us condition on $A(t)$:

$$P(Y_1 > y \mid A(t) = n) = P(T_{n+1} > y) = e^{-\lambda y}$$
Since this is independent of $n$, $Y_1$ is independent of $A(t)$ and is exponentially distributed (with parameter $\lambda$). Therefore, $E[Y_1] = 1/\lambda = 20$ minutes.

By definition, the inter-arrival time is the time from the last departure to the next arrival. Therefore, the average inter-arrival time is $E[Y_1 + Z_1] = E[Y_1] + E[Z_1] = 20 + 20 = 40$ minutes, by the linearity of expectation. But it should be 20 minutes only! What happened? This is an example of what is known as random incidence.
The customer is more likely to fall in a large interval. For an intuition, consider the
following example of randomly throwing a ball into bins.
[Figure 5: the interval $Y(t) + Z(t)$ containing a random observation time $t$.]

More formally, let $Y(t)$ be the time since the last arrival before $t$ and $Z(t)$ the time until the next arrival after $t$, so that $Y(t) + Z(t)$ is the length of the inter-arrival interval containing $t$. Within the $i$th interval, $Y(t) + Z(t) = T_i$ throughout, so the interval contributes $T_i^2$ to the integral (a "sum of areas of squares" over a total "length" $\sum_i T_i$):

$$\lim_{t\to\infty} \frac{1}{t}\int_0^t \big(Y(\tau) + Z(\tau)\big)\, d\tau = \lim_{n\to\infty} \frac{\sum_{i=1}^n T_i^2}{\sum_{i=1}^n T_i} = \lim_{n\to\infty} \frac{\sum_{i=1}^n T_i^2/n}{\sum_{i=1}^n T_i/n}$$

But $\lim_{n\to\infty} \sum_{i=1}^n T_i^2/n = E[T^2]$ and $\lim_{n\to\infty} \sum_{i=1}^n T_i/n = E[T]$. Therefore,

$$\lim_{t\to\infty} \frac{1}{t}\int_0^t \big(Y(\tau) + Z(\tau)\big)\, d\tau = \frac{E[T^2]}{E[T]} = \frac{2/\lambda^2}{1/\lambda} = \frac{2}{\lambda}$$

which is twice $E[T] = 1/\lambda$, consistent with the 40-minute answer above.
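The random-incidence effect is easy to reproduce in simulation: probe a Poisson process at uniformly random times and measure the length of the inter-arrival interval containing each probe. A sketch of mine, with $1/\lambda = 20$ minutes as in the example:

```python
import bisect
import random

random.seed(2)
lam = 1 / 20            # one arrival every 20 minutes on average
horizon = 2_000_000.0   # simulated time span, in minutes

# Arrival times of a Poisson process on [0, horizon].
arrivals = []
t = random.expovariate(lam)
while t < horizon:
    arrivals.append(t)
    t += random.expovariate(lam)

# Probe at uniformly random times; record the length Y + Z of the
# inter-arrival interval that contains the probe.
n_probe = 100_000
total = 0.0
for _ in range(n_probe):
    u = random.uniform(arrivals[0], arrivals[-2])
    i = bisect.bisect_right(arrivals, u)
    total += arrivals[i] - arrivals[i - 1]

avg_containing = total / n_probe   # near E[T^2]/E[T] = 2/lam = 40, not 20
```

Probing favors long intervals in proportion to their length, which is exactly the length-biasing that makes the answer $2/\lambda = 40$ minutes.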
By definition, the time of the $n$th arrival is

$$t_n = \sum_{i=1}^n T_i = T_1 + T_2 + \cdots + T_n$$
From this relation, we can show that the probability density function for the arrival times is:

$$f_{t_n}(t) = \frac{\lambda^n t^{n-1} e^{-\lambda t}}{(n-1)!}$$
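This is the Erlang (gamma) density. As a sanity check (my own sketch, with arbitrary $\lambda$ and $n$), one can sample $t_n$ as a sum of exponentials and compare the sample mean and variance against the Erlang values $n/\lambda$ and $n/\lambda^2$:

```python
import random

random.seed(3)
lam, n = 1.5, 5
trials = 100_000

# t_n = T_1 + ... + T_n, a sum of n IID Exp(lam) variables.
tn = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(trials)]

mean_tn = sum(x for x in tn) / trials                  # Erlang mean: n/lam
var_tn = sum((x - mean_tn) ** 2 for x in tn) / trials  # Erlang variance: n/lam^2
```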
Next, we derive the distribution of $A(t)$. Let $Z_1$ denote the time after $t$ until the next arrival, and let $\delta > 0$. Then

$$\begin{aligned}
P(t_{n+1} \in (t, t+\delta]) &= P(A(t) = n \text{ and } Z_1 \le \delta) \\
&= P(A(t) = n)\, P(Z_1 \le \delta \mid A(t) = n) \\
&= P(A(t) = n)\, P(Z_1 \le \delta) \quad \text{(memoryless property)} \\
&= P(A(t) = n)\,\big(1 - e^{-\lambda\delta}\big)
\end{aligned}$$

Therefore,

$$P(A(t) = n) = \frac{P(t_{n+1} \in (t, t+\delta])}{1 - e^{-\lambda\delta}}$$

Now, using the density of $t_{n+1}$,

$$P(t_{n+1} \in (t, t+\delta]) = \int_t^{t+\delta} \frac{\lambda^{n+1} \tau^n e^{-\lambda\tau}}{n!}\, d\tau \approx \delta\, \frac{\lambda^{n+1} t^n e^{-\lambda t}}{n!}$$

for small $\delta$. Taking the limit $\delta \to 0$ and using $(1 - e^{-\lambda\delta})/\delta \to \lambda$,

$$P(A(t) = n) = \lim_{\delta\to 0} \frac{\delta\, \lambda^{n+1} t^n e^{-\lambda t}/n!}{1 - e^{-\lambda\delta}} = \frac{\lambda^{n+1} t^n e^{-\lambda t}}{n!\,\lambda} = \frac{(\lambda t)^n e^{-\lambda t}}{n!}$$
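As a sanity check (a sketch of mine, with arbitrary parameters), one can generate arrival counts by accumulating exponential gaps and compare the empirical distribution of $A(t)$ against this pmf:

```python
import math
import random

random.seed(4)
lam, t_end = 2.0, 3.0   # arbitrary rate and observation window; lam * t_end = 6
trials = 100_000

def count_arrivals() -> int:
    """Count arrivals in [0, t_end] by accumulating exponential gaps."""
    n, s = 0, random.expovariate(lam)
    while s < t_end:
        n += 1
        s += random.expovariate(lam)
    return n

counts = [count_arrivals() for _ in range(trials)]

mean_count = sum(counts) / trials        # E[A(t)] = lam * t_end = 6
empirical_p6 = counts.count(6) / trials  # estimate of P(A(t) = 6)
theory_p6 = (lam * t_end) ** 6 * math.exp(-lam * t_end) / math.factorial(6)
```

The empirical probability of seeing exactly 6 arrivals should match the Poisson pmf value $(\lambda t)^6 e^{-\lambda t}/6! \approx 0.16$.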
It is not hard to show that any counting process that satisfies the above probability mass function, in addition to the stationary and independent increment properties, is a Poisson process. For instance, $P(T_1 > t) = P(A(t) = 0) = e^{-\lambda t}$. Also,

$$P(T_n > t \mid t_{n-1} = \tau) = \lim_{\delta\to 0} P\big(A(\tau, \tau + t) = 0 \mid A(\tau - \delta, \tau) = 1,\ A(0, \tau - \delta) = n - 2\big) = P(A(t) = 0)$$

by the stationary and independent increment properties.
From this probability mass function, we can obtain the following:

$$E[A(t)] = \lambda t, \qquad \sigma^2_{A(t)} = \lambda t$$
PASTA

Let $N(t)$ be the number of customers in the system at time $t$, let $p_n(t) = P(N(t) = n)$, and let $p_n = \lim_{t\to\infty} p_n(t)$ be the long-run (time-average) probability of finding $n$ customers in the system. This of course means that

$$\bar{N} = \sum_{n=0}^{\infty} n\, p_n = \lim_{t\to\infty} \frac{1}{t} \int_0^t N(\tau)\, d\tau$$

A natural question is whether $p_n$ is also the probability $a_n$ seen by an arriving customer, i.e., the probability that an arriving customer finds $n$ customers in the system. In general it is not, as the following example shows.
6.1 Example 1
Assume customer inter-arrival times are uniformly distributed between 2 and 4 seconds. Assume also that customer service times are all equal to 1. Then an arriving customer always finds an empty system. Therefore, $a_0 = 1$. On the other hand, the average number of customers in the system is $\bar{N} = 1/3$ (apply Little's theorem with $\lambda = 1/3$ and $T = 1$). Therefore, $p_0 \ne 1$.
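This example is easy to verify by simulation. The following sketch (mine, not part of the notes) confirms that every arrival finds the system empty while the time-average number in the system is about $1/3$:

```python
import random

random.seed(5)
n_cust = 50_000
clock = 0.0        # arrival time of the current customer
departure = -1.0   # departure time of the previous customer
found_empty = 0
busy_time = 0.0

for _ in range(n_cust):
    clock += random.uniform(2, 4)   # inter-arrival time in [2, 4] seconds
    if clock >= departure:          # previous customer has already left?
        found_empty += 1
    departure = clock + 1.0         # unit service time
    busy_time += 1.0

a0 = found_empty / n_cust       # every gap >= 2 > 1, so this is exactly 1
N_bar = busy_time / departure   # time-average number in system, ~1/3
```

Since every inter-arrival gap is at least 2 seconds and service takes 1 second, `found_empty` increments on every arrival, so $a_0 = 1$ while $\bar{N} \approx 1/3$.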
6.2 Example 2
Now assume that the arrival process is Poisson. Then

$$a_n(t) = \lim_{\delta\to 0} P\big(N(t) = n \mid A(t, t+\delta) = 1\big) = \lim_{\delta\to 0} P\big(A(t) - D(t) = n \mid A(t, t+\delta) = 1\big)$$

where $D(t)$ is the number of customers that depart in $[0, t]$, so that $N(t) = A(t) - D(t)$. Since $A(t, t+\delta)$ and $A(t)$ are independent, and service times are independent of future arrivals, $A(t) - D(t)$ and $A(t, t+\delta)$ are independent. Therefore,

$$\lim_{\delta\to 0} P\big(A(t) - D(t) = n \mid A(t, t+\delta) = 1\big) = P\big(A(t) - D(t) = n\big) = P(N(t) = n) = p_n(t)$$

Taking the limit as $t \to \infty$, we have $a_n = p_n$. This property of a Poisson process is called PASTA (Poisson Arrivals See Time Averages).
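PASTA can also be checked numerically. The sketch below (my own, not from the notes) simulates an M/M/1 queue with $\lambda = 1$ and $\mu = 2$ and compares the fraction of time the system is empty against the fraction of arrivals that find it empty; both should approach $1 - \lambda/\mu = 0.5$:

```python
import random

random.seed(6)
lam, mu = 1.0, 2.0       # M/M/1 queue, utilization rho = lam/mu = 0.5
n_arrivals = 200_000

t = 0.0                  # simulation clock
n = 0                    # customers currently in the system
next_arr = random.expovariate(lam)
next_dep = float("inf")  # no departure scheduled while the system is empty

time_empty = 0.0         # total time spent with n == 0
seen_empty = 0           # arrivals that found the system empty
arrived = 0

while arrived < n_arrivals:
    if next_arr < next_dep:      # next event: an arrival
        if n == 0:
            time_empty += next_arr - t   # the system was empty until now
            seen_empty += 1
            next_dep = next_arr + random.expovariate(mu)
        t = next_arr
        n += 1
        arrived += 1
        next_arr = t + random.expovariate(lam)
    else:                        # next event: a departure
        t = next_dep
        n -= 1
        next_dep = t + random.expovariate(mu) if n > 0 else float("inf")

p0 = time_empty / t           # time-average probability of an empty system
a0 = seen_empty / n_arrivals  # fraction of arrivals that saw it empty
```

The empty periods always end with an arrival (no departures occur while the system is empty), so accumulating `time_empty` in the arrival branch captures all of them.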
Alternatively, a Poisson process of rate $\lambda$ can be defined as a counting process $A(t)$ with stationary and independent increments such that, for small $\delta$,

$$P(A(t, t+\delta) = 0) = 1 - \lambda\delta + o(\delta), \quad P(A(t, t+\delta) = 1) = \lambda\delta + o(\delta), \quad P(A(t, t+\delta) \ge 2) = o(\delta)$$

To see that this definition is equivalent, note that $P(A(t+\delta) = 0) = P(A(t) = 0)\,(1 - \lambda\delta + o(\delta))$, so

$$\frac{P(A(t+\delta) = 0) - P(A(t) = 0)}{\delta} = -\lambda P(A(t) = 0) + \frac{o(\delta)}{\delta}$$

Taking $\delta \to 0$,

$$\frac{d}{dt} P(A(t) = 0) = -\lambda P(A(t) = 0)$$

So $P(A(t) = 0) = e^{-\lambda t}$. In general, one can show that $P(A(t) = n) = (\lambda t)^n e^{-\lambda t}/n!$. This alternative definition of a Poisson process allows us to think about merging and splitting Poisson processes.
7.1 Merging
We can show that the sum of two independent Poisson processes with rates $\lambda_1$ and $\lambda_2$ is a Poisson process with rate $\lambda_1 + \lambda_2$. For the merged process $A$, over a small interval of length $\delta$:

$$P(A(t, t+\delta) = 0) = (1 - \lambda_1\delta + o(\delta))(1 - \lambda_2\delta + o(\delta)) = 1 - (\lambda_1 + \lambda_2)\delta + o(\delta)$$

$$P(A(t, t+\delta) = 1) = (\lambda_1\delta + o(\delta))(1 - \lambda_2\delta + o(\delta)) + (\lambda_2\delta + o(\delta))(1 - \lambda_1\delta + o(\delta)) = (\lambda_1 + \lambda_2)\delta + o(\delta)$$

$$P(A(t, t+\delta) \ge 2) = o(\delta)$$

Since the two processes satisfy the stationary and independent increment properties, the resulting process does too. Therefore, the resulting process is a Poisson process with rate $\lambda_1 + \lambda_2$.
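A quick numerical check (my own sketch; rates chosen arbitrarily so that $\lambda_1 + \lambda_2 = 2$): generate two independent Poisson streams, merge them, and verify that the gaps of the merged stream look exponential with rate $\lambda_1 + \lambda_2$:

```python
import random

random.seed(7)
l1, l2 = 0.7, 1.3
horizon = 100_000.0

def poisson_times(rate):
    """Arrival times of a Poisson process with the given rate on [0, horizon]."""
    times, t = [], random.expovariate(rate)
    while t < horizon:
        times.append(t)
        t += random.expovariate(rate)
    return times

merged = sorted(poisson_times(l1) + poisson_times(l2))
gaps = [b - a for a, b in zip(merged, merged[1:])]

# Exp(l1 + l2) gaps have mean 1/(l1 + l2) = 0.5 and
# second moment 2/(l1 + l2)^2 = 0.5.
mean_gap = sum(gaps) / len(gaps)
mean_gap2 = sum(g * g for g in gaps) / len(gaps)
```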
7.2 Splitting
A Poisson process with rate $\lambda$ can be split into two independent Poisson processes as follows: each arrival is independently directed to process 1 with probability $p$ and to process 2 with probability $1 - p$.

$$P(A_1(t, t+\delta) = 0) = (1 - \lambda\delta) + \lambda\delta(1 - p) + o(\delta) = 1 - \lambda p\,\delta + o(\delta)$$

$$P(A_1(t, t+\delta) = 1) = p(\lambda\delta + o(\delta)) + o(\delta) = \lambda p\,\delta + o(\delta)$$

$$P(A_1(t, t+\delta) \ge 2) = o(\delta)$$

A similar calculation can be done for process 2 by exchanging $p$ and $1 - p$. Since the original process satisfies the stationary and independent increment properties, so do process 1 and process 2. Therefore, process 1 and process 2 are both Poisson processes, with rates $\lambda p$ and $\lambda(1 - p)$ respectively.
It remains to show independence. First, for a single time $t$:

$$P(A_1(t) = m,\ A_2(t) = n) = P(A(t) = m+n)\, P(A_1(t) = m \mid A(t) = m+n)$$

where

$$P(A_1(t) = m \mid A(t) = m+n) = \binom{m+n}{m} p^m (1-p)^n = \frac{(m+n)!}{m!\,n!}\, p^m (1-p)^n$$

This is simply the binomial distribution, since, given $m + n$ arrivals to the original process, each independently goes to process 1 with probability $p$. Therefore,

$$P(A_1(t) = m,\ A_2(t) = n) = \frac{(\lambda t)^{m+n} e^{-\lambda t}}{(m+n)!} \cdot \frac{(m+n)!}{m!\,n!}\, p^m (1-p)^n = \frac{(\lambda p t)^m e^{-\lambda p t}}{m!} \cdot \frac{(\lambda(1-p)t)^n e^{-\lambda(1-p)t}}{n!}$$

which proves that $A_1(t)$ and $A_2(t)$ are independent. To show that process 1 and process 2 are independent we must show that for any $t_1 < t_2 < \ldots < t_k$, $\{A_1(t_{i-1}, t_i),\ 1 \le i \le k\}$ and $\{A_2(t_{j-1}, t_j),\ 1 \le j \le k\}$ are independent. The argument above shows this independence for $i = j$. For $i \ne j$, the independence follows from the independent increment property of $A(t)$.
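The thinning construction can be checked by simulation. The sketch below (mine; parameters are arbitrary) splits a Poisson process, then checks the means of the two counts and that their sample covariance is near zero, as independence predicts:

```python
import random

random.seed(8)
lam, p, t_end = 2.0, 0.3, 10.0
trials = 50_000

counts1, counts2 = [], []
for _ in range(trials):
    n1 = n2 = 0
    t = random.expovariate(lam)
    while t < t_end:              # one realization of the original process
        if random.random() < p:
            n1 += 1               # arrival routed to process 1
        else:
            n2 += 1               # arrival routed to process 2
        t += random.expovariate(lam)
    counts1.append(n1)
    counts2.append(n2)

m1 = sum(counts1) / trials   # should approach lam * p * t_end = 6
m2 = sum(counts2) / trials   # should approach lam * (1 - p) * t_end = 14
# Independence predicts a sample covariance near zero.
cov = sum((a - m1) * (b - m2) for a, b in zip(counts1, counts2)) / trials
```

Note that the near-zero covariance is special to Poisson splitting: with a fixed total number of arrivals, the two counts would be negatively correlated.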
References
Dimitri Bertsekas and Robert Gallager, Data Networks
Robert Gallager, Discrete Stochastic Processes