Sum of independent exponentials

Lemma 1. Let $(X_i)_{i=1,\dots,n}$, $n \ge 2$, be independent exponential random variables with pairwise distinct
respective parameters $\lambda_i$. Then the density of their sum is

\[
(1)\qquad f_{X_1+X_2+\cdots+X_n}(x) \;=\; \Bigl[\prod_{i=1}^{n}\lambda_i\Bigr]\sum_{j=1}^{n}\frac{e^{-\lambda_j x}}{\prod_{\substack{k=1\\ k\ne j}}^{n}(\lambda_k-\lambda_j)}, \qquad x>0.
\]
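
As a quick numerical sanity check of (1) (not part of the original argument), the following Python sketch compares the closed-form density with a Monte Carlo estimate; the rates in lam and the helper name hypoexp_pdf are arbitrary choices made only for this illustration.

import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 2.5, 4.0])   # arbitrary, pairwise distinct rates

def hypoexp_pdf(x, lam):
    # density from (1): prod(lam) * sum_j exp(-lam_j x) / prod_{k != j} (lam_k - lam_j)
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for j in range(len(lam)):
        total += np.exp(-lam[j] * x) / np.prod(np.delete(lam, j) - lam[j])
    return np.prod(lam) * total

# simulate the sum of independent exponentials with the rates above
samples = sum(rng.exponential(1.0 / l, size=200_000) for l in lam)

# compare the empirical density (histogram) with the closed form at the bin centers
hist, edges = np.histogram(samples, bins=200, range=(0.0, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - hypoexp_pdf(centers, lam))))   # small Monte Carlo error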

Remark. I once (in 2005, to be more precise) thought this stuff would be part of some research-related
arguments, but I ended up not using it. Later on I realized it is actually Problem 12 of Chapter I in
Feller: An Introduction to Probability Theory and Its Applications, Volume II. And recently I have
read about it, together with further references, in "Notes on the sum and maximum of independent
exponentially distributed random variables with different scale parameters" by Markus Bibinger, available
at http://arxiv.org/abs/1307.3945. Moreover, I now know that this distribution is known as the
hypoexponential distribution (thanks János!).
Proof. First we compute the convolutions needed in the proof.
\[
e^{-ax} * e^{-bx} \;=\; \int_0^x e^{-a(x-u)}\,e^{-bu}\,du \;=\; e^{-ax}\,\frac{e^{(a-b)x}-1}{a-b} \;=\; \frac{e^{-bx}-e^{-ax}}{a-b}.
\]
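
For what it's worth, this convolution can be double-checked numerically; the values of a, b and x below are arbitrary test values.

import numpy as np
from scipy.integrate import quad

a, b, x = 1.3, 0.4, 2.0   # arbitrary test values with a != b
numeric, _ = quad(lambda u: np.exp(-a * (x - u)) * np.exp(-b * u), 0.0, x)
closed_form = (np.exp(-b * x) - np.exp(-a * x)) / (a - b)
print(numeric, closed_form)   # the two values agree up to quadrature error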

For $n = 2$,

\[
f_{X_1+X_2}(x) = f_{X_1}(x) * f_{X_2}(x) = \lambda_1\lambda_2\,\frac{e^{-\lambda_2 x}-e^{-\lambda_1 x}}{\lambda_1-\lambda_2} = \lambda_1\lambda_2\left[\frac{e^{-\lambda_1 x}}{\lambda_2-\lambda_1}+\frac{e^{-\lambda_2 x}}{\lambda_1-\lambda_2}\right],
\]

in accordance with (1). Now, inductively, fix $n \ge 3$ and assume the statement is true for $n-1$. Then
"n−1 # n−1
Y X e−λj x
fX1 +X2 +···+Xn (x) = fX1 +X2 +···+Xn−1 (x) ∗ fXn (x) = λi n−1
∗ fXn (x)
Q
i=1 j=1 (λk − λj )
k6=j
k=1
" n
# n−1 " n # "n−1 n−1
#
Y X e−λn x − e−λj x Y X e−λj x X e−λn x
= λi n−1
= λi n
Q − n
Q .
(λk − λj ) j=1 (λk − λj )
Q
i=1 j=1 (λj − λn ) (λk − λj ) i=1 j=1
k6=j k6=j k6=j
k=1 k=1 k=1
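
The induction step itself can also be checked numerically: convolving the $(n-1)$-term density with $f_{X_n}$ by quadrature reproduces the $n$-term formula (1). This is only a sketch, not part of the proof; the helper pdf, the rates and the evaluation point are arbitrary choices.

import numpy as np
from scipy.integrate import quad

def pdf(x, lam):
    # closed-form density (1) for pairwise distinct rates lam
    lam = np.asarray(lam, dtype=float)
    return np.prod(lam) * sum(
        np.exp(-l * x) / np.prod(np.delete(lam, j) - l) for j, l in enumerate(lam))

lam = [1.0, 2.5, 4.0, 6.0]   # arbitrary, pairwise distinct rates
x = 1.7                      # arbitrary evaluation point
convolved, _ = quad(lambda u: pdf(x - u, lam[:-1]) * lam[-1] * np.exp(-lam[-1] * u), 0.0, x)
print(convolved, pdf(x, lam))   # both values coincide up to quadrature error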

The proof is done as soon as we show that the coefficient of $e^{-\lambda_n x}$ matches the corresponding coefficient
in the sum of (1), i.e.

\[
(2)\qquad -\sum_{j=1}^{n-1}\frac{1}{\prod_{\substack{k=1\\ k\ne j}}^{n}(\lambda_k-\lambda_j)} \;=\; \frac{1}{\prod_{k=1}^{n-1}(\lambda_k-\lambda_n)},
\]

or, equivalently (note that the $j = n$ term of the sum below is precisely the right-hand side of (2)),

\[
\sum_{j=1}^{n}\frac{1}{\prod_{\substack{k=1\\ k\ne j}}^{n}(\lambda_k-\lambda_j)} \;=\; 0.
\]
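
This identity is easy to test numerically before proving it; the rates below are arbitrary pairwise distinct values.

import numpy as np

lam = np.array([0.7, 1.9, 3.2, 5.5, 8.1])   # arbitrary, pairwise distinct rates
total = sum(1.0 / np.prod(np.delete(lam, j) - lam[j]) for j in range(len(lam)))
print(total)   # zero up to floating-point round-off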

To this end, we write


\[
\sum_{j=1}^{n}\frac{1}{\prod_{\substack{k=1\\ k\ne j}}^{n}(\lambda_k-\lambda_j)}
\;=\;
\frac{\displaystyle\sum_{j=1}^{n}\,\prod_{\substack{k,l=1\\ k\ne l\ne j}}^{n}(\lambda_k-\lambda_l)}{\displaystyle\prod_{\substack{k,l=1\\ k\ne l}}^{n}(\lambda_k-\lambda_l)},
\]

which is zero if and only if


\[
\sum_{j=1}^{n}\,\prod_{\substack{k,l=1\\ k\ne l\ne j}}^{n}(\lambda_k-\lambda_l)
\]

is zero. We transform the latter expression in the following display; the nontrivial steps consist of changing the
order of the $\lambda$'s, and thus the signs, in the factors of the products.
\[
\begin{aligned}
\sum_{j=1}^{n}\,\prod_{\substack{k,l=1\\ k\ne l\ne j}}^{n}(\lambda_k-\lambda_l)
&= \sum_{j=1}^{n}\,\prod_{\substack{k,l=1\\ j\ne k\ne l\ne j}}^{n}(\lambda_k-\lambda_l)\,\prod_{\substack{k,l=1\\ k=j\ne l}}^{n}(\lambda_k-\lambda_l) \\
&= \pm\sum_{j=1}^{n}\,\prod_{\substack{k,l=1\\ j\ne k>l\ne j}}^{n}(\lambda_k-\lambda_l)^{2}\,\prod_{\substack{k,l=1\\ k=j>l}}^{n}(\lambda_k-\lambda_l)\,\prod_{\substack{k,l=1\\ k=j<l}}^{n}(\lambda_k-\lambda_l) \\
&= \pm\sum_{j=1}^{n}\,\prod_{\substack{k,l=1\\ j\ne k>l\ne j}}^{n}(\lambda_k-\lambda_l)^{2}\,\prod_{\substack{k,l=1\\ j=k>l}}^{n}(\lambda_k-\lambda_l)\,\prod_{\substack{k,l=1\\ k>l=j}}^{n}(\lambda_k-\lambda_l)\,(-1)^{n-j} \\
&= \pm\prod_{\substack{k,l=1\\ k>l}}^{n}(\lambda_k-\lambda_l)\,\sum_{j=1}^{n}\,\prod_{\substack{k,l=1\\ j\ne k>l\ne j}}^{n}(\lambda_k-\lambda_l)\,(-1)^{n-j},
\end{aligned}
\]

which is zero if and only if


\[
(3)\qquad \sum_{j=1}^{n}\,\prod_{\substack{k,l=1\\ j\ne k>l\ne j}}^{n}(\lambda_k-\lambda_l)\,(-1)^{j}
\]

is zero. Notice that the product here is a Vandermonde determinant of the form

\[
\begin{vmatrix}
1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{n-2} \\
1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{n-2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \lambda_{j-1} & \lambda_{j-1}^2 & \cdots & \lambda_{j-1}^{n-2} \\
1 & \lambda_{j+1} & \lambda_{j+1}^2 & \cdots & \lambda_{j+1}^{n-2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \lambda_n & \lambda_n^2 & \cdots & \lambda_n^{n-2}
\end{vmatrix},
\]

and hence (3) is nothing but the expansion of the determinant

\[
\begin{vmatrix}
1 & 1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{n-2} \\
1 & 1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{n-2} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 1 & \lambda_n & \lambda_n^2 & \cdots & \lambda_n^{n-2}
\end{vmatrix}
\]

w.r.t. its second column. As this determinant is zero (its first two columns are equal), so is (3), and thus (2) is proven.
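
As a final numerical sanity check (again not part of the argument), the alternating sum (3) can be evaluated directly via the Vandermonde determinants it expands into; the rates and the helper name vandermonde_det are arbitrary choices for illustration.

import numpy as np

lam = np.array([0.7, 1.9, 3.2, 5.5, 8.1])   # arbitrary, pairwise distinct rates

def vandermonde_det(v):
    # determinant of the Vandermonde matrix with rows (1, v_i, v_i^2, ..., v_i^{m-1})
    return np.linalg.det(np.vander(v, len(v), increasing=True))

# alternating sum of the minors obtained by deleting lambda_j, as in (3)
total = sum((-1) ** j * vandermonde_det(np.delete(lam, j)) for j in range(len(lam)))
print(total)   # zero up to floating-point round-off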
