SIMON Probability Distributions Involving Gaussian Random Variables 2006
The Gaussian distribution and those derived from it are at the very core of
a huge number of problems in multi-disciplinary fields of engineering,
mathematics and science. All scientists and professional engineers face,
sooner or later, a problem whose solution involves processing, either ana-
lytically, or by simulation, or by a mixture of the two, random variables
whose distribution is related to Gaussian variables. The book, with its
comprehensive information in analytical, tabular, and graphical form, is an
invaluable tool for scientists and engineers.
-- Sergio Benedetto, Politecnico di Torino
Praise for the Hardcover Edition of Simon's
Probability Distributions Involving Gaussian Random Variables
The fact that the book collected all these results (after double-checking
them) with unified notations and in an organized fashion made it very use-
ful for many of the graduate students and researchers doing analytical
types of investigations. Not only did I use it, I also recommend it to my
graduate students and many of our papers benefited from the formulas
available in the handbook. The handbook saves valuable time for all the
researchers that rely on it to simplify their results or write them in terms of
well-known functions that are available in popular mathematical packages
such as Matlab and Mathematica.
-- Mohamed-Slim Alouini, Texas A&M University at Qatar
This is a unique book and an invaluable reference for engineers and scien-
tists in the fields of electrical engineering, mathematics and statistics.
There is no other single reference book that covers Gaussian and Gaussian-
related distributions of random variables that are encountered in research
work and practice. This is a reference book that I recommend to all my
graduate students working in the area of telecommunication system analy-
sis and design.
-- John Proakis, Professor Emeritus, Northeastern University
and Adjunct Professor, University of California, San Diego
Marvin Simon has created an important and very popular book that is use-
ful to a very large community of engineers and scientists. This work is
also very carefully done with many corrections to earlier works that were
not easily available to this community.
-- Jim Omura, Gordon and Betty Moore Foundation
Marvin K. Simon
Principal Scientist
Jet Propulsion Laboratory
Pasadena, California, U.S.A.
Springer
Library of Congress Control Number: 2006933933
Simon, Marvin K.
Probability Distributions Involving Gaussian Random Variables
A Handbook for Engineers and Scientists / by Marvin K. Simon
p. cm.
Preface xxi
Acknowledgment xxiii
INTRODUCTION
B. Rayleigh
C. Rician
D. Central Chi-square
E. Noncentral Chi-square
F. Log-Normal
C. Rician
D. Central Chi-Square
E. Noncentral Chi-Square
F. Log-Normal
C. Independent Gaussian (x) Gaussian (One Has Zero Mean, Both Have
Identical Variance)
E. Independent Rayleigh (x) Rayleigh
B. Dependent Gaussian
C. Independent Rayleigh
D. Dependent Rayleigh
E. Independent Log-Normal
F. Dependent Log-Normal
9. QUADRATIC FORMS 89
A. Both Vectors Have Zero Mean 91
REFERENCES 139
ILLUSTRATIONS 143
Johann Carl Friedrich Gauss
1777-1855
In 1788 Gauss began his education at the Gymnasium with the help of
Büttner and Bartels, where he learnt High German and Latin. After
receiving a stipend from the Duke of Brunswick-Wolfenbüttel, Gauss
entered Brunswick Collegium Carolinum in 1792. At the academy
Gauss independently discovered Bode's law, the binomial theorem
and the arithmetic-geometric mean, as well as the law of quadratic
reciprocity and the prime number theorem.
With his stipend to support him, Gauss did not need to find a job so
he devoted himself to research. He published the book Disquisitiones
Arithmeticae in the summer of 1801. There were seven sections, all but
the last section, referred to above, being devoted to number theory.
Gauss arrived in Göttingen in late 1807. In 1808 his father died, and a
year later Gauss's wife Johanna died after giving birth to their second
son, who was to die soon after her. Gauss was shattered and wrote to
Olbers asking him to give him a home for a few weeks,
to gather new strength in the arms of your friendship - strength for a life
which is only valuable because it belongs to my three small children.
Gauss was married for a second time the next year, to Minna, the best
friend of Johanna, and although they had three children, this marriage
seemed to be one of convenience for Gauss.
Gauss had been asked in 1818 to carry out a geodesic survey of the
state of Hanover to link up with the existing Danish grid. Gauss was
pleased to accept and took personal charge of the survey, making
measurements during the day and reducing them at night, using his
extraordinary mental capacity for calculations. He regularly wrote to
Schumacher, Olbers and Bessel, reporting on his progress and
discussing problems.
From the early 1800's Gauss had an interest in the question of the
possible existence of a non-Euclidean geometry. He discussed this
topic at length with Farkas Bolyai and in his correspondence with
Gerling and Schumacher. In a book review in 1816 he discussed
proofs which deduced the axiom of parallels from the other Euclidean
axioms, suggesting that he believed in the existence of non-Euclidean
geometry, although he was rather vague. Gauss confided in
Schumacher, telling him that he believed his reputation would suffer
if he admitted in public that he believed in the existence of such a
geometry.
In 1831 Farkas Bolyai sent to Gauss his son János Bolyai's work on the
subject. Gauss replied
The period 1817-1832 was a particularly distressing time for Gauss.
He took in his sick mother in 1817, who stayed until her death in 1839,
while he was arguing with his wife and her family about whether
they should go to Berlin. He had been offered a position at Berlin
University and Minna and her family were keen to move there.
Gauss, however, never liked change and decided to stay in Göttingen.
In 1831 Gauss's second wife died after a long illness.
Allgemeine Theorie... showed that there can only be two poles in the
globe and went on to prove an important theorem, which concerned
the determination of the intensity of the horizontal component of the
magnetic force along with the angle of inclination. Gauss used the
Laplace equation to aid him with his calculations, and ended up
specifying a location for the magnetic South pole.
Humboldt greatly. However, Gauss's changes obtained more accurate
results with less effort.
Gauss and Weber achieved much in their six years together. They
discovered Kirchhoff's laws, as well as building a primitive telegraph
device which could send messages over a distance of 5000 ft.
However, this was just an enjoyable pastime for Gauss. He was more
interested in the task of establishing a world-wide net of magnetic
observation points. This occupation produced many concrete results.
The Magnetischer Verein and its journal were founded, and the atlas of
geomagnetism was published, while Gauss and Weber's own journal
in which their results were published ran from 1836 to 1841.
Gauss spent the years from 1845 to 1851 updating the Göttingen
University widow's fund. This work gave him practical experience in
financial matters, and he went on to make his fortune through shrewd
investments in bonds issued by private companies.
Gauss presented his golden jubilee lecture in 1849, fifty years after his
diploma had been granted by Helmstedt University. It was
appropriately a variation on his dissertation of 1799. From the
mathematical community only Jacobi and Dirichlet were present, but
Gauss received many messages and honours.
From 1850 onwards Gauss's work was again nearly all of a practical
nature, although he did approve Riemann's doctoral thesis and heard
his probationary lecture. His last known scientific exchange was with
Gerling. He discussed a modified Foucault pendulum in 1854. He was
also able to attend the opening of the new railway link between
Hanover and Göttingen, but this proved to be his last outing. His
health deteriorated slowly, and Gauss died in his sleep early in the
morning of 23 February, 1855.
PREFACE
There are certain reference works that engineers and scientists alike
find invaluable in their day-to-day work activities. Many of these
reference volumes are of a generic nature such as tables of integrals,
tables of series, handbooks of mathematical formulas and transforms,
etc. (see Refs. 1, 2, 3, and 4 for example), whereas others are
collections of technical papers and textbooks that directly relate to the
individual's specific field of specialty. Continuing along this train of
thought, there exists a great deal of valuable information that, in its
original form, was published in university and company reports, and as
such the general public was in many cases not aware of its existence.
Even worse, today this archival material is no longer available to the
public in any form, since its original source declared it out of print
quite some time ago. Furthermore, most of the authors of these works
have long since retired or, sadder yet, have passed on; however, the
material contained in the documents themselves has intrinsic value and
is timeless in terms of its value to today's practicing engineer or
scientist. As time marches
on and new young engineers and scientists replace the old ones, the
passing of the torch must include a means by which this valuable
information be communicated to the new generation. Such is the
primary motivation behind this book, the more detailed objective
being described as follows.
One of the most important probability distributions, from both the
theoretical and the practical viewpoint, is the Gaussian distribution,
or, as mathematicians prefer to call it, the normal distribution.
Although the statistical characterization of the basic Gaussian random
variable (RV), e.g., its probability density function (PDF), cumulative
distribution function (CDF), and characteristic function (CF), is well
known and widely documented in the literature (e.g., [5]), in dealing
with applications one is quite often in need of similar characterizations
for arithmetic combinations, e.g., sums, differences, products, and
ratios of Gaussian RVs, and also the squares of Gaussian RVs
(so-called chi-square RVs). Other applications involve log-normal RVs,
and thus their statistical characterization is of interest as well.
where
$$Q(x) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{x}{\sqrt{2}}\right)$$
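In practice this relation is how $Q(x)$ is usually evaluated, since erfc is available in every numerical library. A minimal sketch of the identity, with illustrative function names (not from the handbook itself):

```python
# Numerical check of Q(x) = (1/2) * erfc(x / sqrt(2)).
import math
from scipy import integrate

def gaussian_q(x: float) -> float:
    """Gaussian Q-function: tail probability P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_by_quadrature(x: float) -> float:
    """Q(x) from its defining integral, for comparison."""
    integrand = lambda t: math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)
    val, _ = integrate.quad(integrand, x, math.inf)
    return val

for x in (0.0, 0.5, 1.0, 3.0):
    assert abs(gaussian_q(x) - q_by_quadrature(x)) < 1e-9
```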
Whereas the book will only provide results for the noncentral
moments of a RV, e.g., $E\{X^k\}$, $k$ integer, the central moments, most
notably the variance, can be obtained from the relation
$$E\{(X-\bar{X})^k\} = \sum_{i=0}^{k}\binom{k}{i}(-\bar{X})^{k-i}E\{X^i\}$$
where $\binom{k}{i} = \frac{k!}{i!(k-i)!}$ denotes a combinatorial coefficient.
A log-normal RV is one whose logarithm has a Gaussian
distribution. That is, $y = 10^{X/10}$ is a log-normal RV when $X$ is
characterized by the PDF of (1.1).
FUNDAMENTAL ONE-DIMENSIONAL VARIABLES

A. Gaussian

B. Rayleigh
For $n = 1$:
$$E\{R^k\} = (2\sigma^2)^{k/2}\,\frac{\Gamma\!\left(\frac{k+1}{2}\right)}{\sqrt{\pi}},\quad k \text{ integer}$$
For $n = 2$:
$$E\{R^k\} = (2\sigma^2)^{k/2}\,\Gamma\!\left(1+\frac{k}{2}\right),\quad k \text{ integer}$$
For $n = 2m$:
$$E\{R^k\} = (2\sigma^2)^{k/2}\,\frac{\Gamma\!\left(m+\frac{k}{2}\right)}{(m-1)!},\quad k \text{ integer}$$
For $n = 2m+1$:
$$E\{R^k\} = (2\sigma^2)^{k/2}\,\frac{\Gamma\!\left(m+\frac{k+1}{2}\right)}{\Gamma(m+1/2)},\quad k \text{ integer} \tag{2.13}$$
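The moment formulas above are easy to sanity-check by simulation. A minimal Monte Carlo sketch of the $n = 2$ case, with $\sigma$ and $k$ chosen arbitrarily for illustration:

```python
# Monte Carlo spot-check of the n = 2 Rayleigh moment formula.
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)
sigma, k = 1.3, 3
# R = sqrt(X1^2 + X2^2) with X1, X2 ~ N(0, sigma^2) is Rayleigh (n = 2).
x = rng.normal(0.0, sigma, size=(2, 1_000_000))
r = np.sqrt((x ** 2).sum(axis=0))

analytic = (2 * sigma ** 2) ** (k / 2) * gamma(1 + k / 2)
empirical = (r ** k).mean()
print(analytic, empirical)   # the two should agree to roughly 0.1%
```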
C. Rician
where $_1F_1(\alpha;\beta;\gamma)$ is the confluent hypergeometric function [2] and
$a = |\bar{X}|$.
where
$$Q_1(\alpha,\beta) = \int_\beta^\infty x\exp\!\left(-\frac{x^2+\alpha^2}{2}\right)I_0(\alpha x)\,dx \tag{2.20}$$
More often than not in the literature, the subscript "1" identifying
the order of the first-order Marcum Q-function is dropped from the
notation. We shall maintain its identity in this text to avoid possible
ambiguity with the two-dimensional Gaussian Q-function defined in
Eq. (A.37) of Appendix A.
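Where a numerical value of (2.20) is needed, the definition can be checked against the standard identity linking the Marcum Q-function to the noncentral chi-square survival function: $Q_m(a,b) = P(Y > b^2)$ for $Y$ noncentral chi-square with $2m$ degrees of freedom and noncentrality $a^2$. A minimal sketch (function names are illustrative):

```python
# Two independent ways to evaluate the first-order Marcum Q-function.
import numpy as np
from scipy import integrate
from scipy.special import i0
from scipy.stats import ncx2

def marcum_q1_quad(a: float, b: float) -> float:
    """Direct quadrature of Q_1(a,b) = int_b^inf x exp(-(x^2+a^2)/2) I0(ax) dx."""
    f = lambda x: x * np.exp(-(x * x + a * a) / 2.0) * i0(a * x)
    val, _ = integrate.quad(f, b, np.inf)
    return val

def marcum_q1_ncx2(a: float, b: float) -> float:
    """Q_1 via the noncentral chi-square survival function (2 DOF, nc = a^2)."""
    return ncx2.sf(b ** 2, df=2, nc=a ** 2)

print(marcum_q1_quad(1.5, 2.0), marcum_q1_ncx2(1.5, 2.0))  # should match
```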
$$E\{R^k\} = (2\sigma^2)^{k/2}\,\Gamma\!\left(1+\frac{k}{2}\right)\exp\!\left(-\frac{a^2}{2\sigma^2}\right)\,_1F_1\!\left(1+\frac{k}{2};1;\frac{a^2}{2\sigma^2}\right),\quad k \text{ integer}$$
D. Central Chi-Square

1. n = 1

For $n = 2m$ the moments are
$$E\{Y^k\} = (2\sigma^2)^k\,\frac{(m+k-1)!}{(m-1)!},\quad k \text{ integer}$$
and for $n = 2m+1$
$$E\{Y^k\} = (2\sigma^2)^k\,\frac{\Gamma\!\left(m+k+\frac{1}{2}\right)}{\Gamma\!\left(m+\frac{1}{2}\right)},\quad k \text{ integer} \tag{2.39}$$
E. Noncentral Chi-Square

$$\psi_Y(\omega) = \frac{1}{(1-2j\omega\sigma^2)^{n/2}}\exp\!\left(\frac{j\omega a^2}{1-2j\omega\sigma^2}\right)$$

$$E\{Y^k\} = (2\sigma^2)^k\,\frac{(m+k-1)!}{(m-1)!}\exp\!\left(-\frac{a^2}{2\sigma^2}\right)\,_1F_1\!\left(m+k;m;\frac{a^2}{2\sigma^2}\right),\quad k \text{ integer} \tag{2.47}$$
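Since (2.47) involves the confluent hypergeometric function, it can be evaluated directly with standard packages. A minimal sketch comparing (2.47) against a Monte Carlo estimate, with $m$, $k$, $\sigma$, and $a$ chosen for illustration:

```python
# Spot-check of the noncentral chi-square moment formula (2.47).
import numpy as np
from math import exp, factorial
from scipy.special import hyp1f1

rng = np.random.default_rng(1)
m, k, sigma, a = 2, 2, 1.0, 1.5

# Y = sum of squares of 2m Gaussians, one of which has mean a.
mean = np.zeros(2 * m)
mean[0] = a
y = ((rng.normal(0.0, sigma, size=(1_000_000, 2 * m)) + mean) ** 2).sum(axis=1)

analytic = ((2 * sigma ** 2) ** k * factorial(m + k - 1) / factorial(m - 1)
            * exp(-a ** 2 / (2 * sigma ** 2))
            * hyp1f1(m + k, m, a ** 2 / (2 * sigma ** 2)))
print(analytic, (y ** k).mean())   # should agree to roughly 0.1%
```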
F. Log-Normal

where $x_n$ are the zeros and $H_{x_n}$ are the weight factors of the $N_p$-order
Hermite polynomial and can be found in Table 25.10 of [2]. In
addition, the moments of $y$ are given by
$$E\{y^k\} = \exp\!\left(\frac{k\ln 10}{10}\bar{X} + \frac{1}{2}\left(\frac{k\ln 10}{10}\right)^2\sigma^2\right),\quad k \text{ integer}$$
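The Gauss-Hermite rule referred to above is available in numerical libraries. A minimal sketch evaluating the $k$th log-normal moment by quadrature and comparing it with the closed form (parameter values are illustrative):

```python
# Gauss-Hermite evaluation of a log-normal moment, y = 10^(X/10) = exp(lam*X).
import numpy as np
from numpy.polynomial.hermite import hermgauss

xbar, sigma, k, Np = 3.0, 2.0, 2, 40
lam = np.log(10.0) / 10.0

nodes, weights = hermgauss(Np)
# E{y^k} = (1/sqrt(pi)) * sum_n H_n * exp(k*lam*(sqrt(2)*sigma*x_n + xbar))
quadrature = (weights * np.exp(k * lam * (np.sqrt(2.0) * sigma * nodes + xbar))
              ).sum() / np.sqrt(np.pi)

closed_form = np.exp(k * lam * xbar + 0.5 * (k * lam * sigma) ** 2)
print(quadrature, closed_form)   # should agree closely
```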
FUNDAMENTAL MULTIDIMENSIONAL VARIABLES

A. Gaussian

is given as

For the special case of $n = 2$, $\bar{\mathbf{X}} = \mathbf{0}$, and the covariance matrix of (1.10),
the joint moments are given as
$$E\{X_1^{k_1}X_2^{k_2}\} = 0,\quad k_1+k_2 \text{ odd}$$
with a corresponding closed-form expression for $k_1+k_2$ even.
B. Rayleigh
C. Rician
E. Noncentral Chi-square
Consider the pair of chi-square RVs of order $n$,
$$Y_1 = \|\mathbf{x}^{(1)}\|^2,\qquad Y_2 = \|\mathbf{x}^{(2)}\|^2$$
defined from the underlying Gaussian vectors
$\mathbf{x}^{(1)} \in N_n(\bar{\mathbf{X}}^{(1)},\sigma_1^2)$ and $\mathbf{x}^{(2)} \in N_n(\bar{\mathbf{X}}^{(2)},\sigma_2^2)$. Then the joint PDF of $Y_1$ and
$Y_2$ is given (for $n > 2$) by
$$\times\exp\!\left[-\frac{1}{2(1-\rho^2)}\left(\frac{y_1}{\sigma_1^2}+\frac{y_2}{\sigma_2^2}-\frac{2\rho\sqrt{y_1 y_2}}{\sigma_1\sigma_2}\right)\right]$$
F. Log-Normal

Once again the joint CDF and also the joint CF are not available in
closed form. However, the joint moments can be determined from the
joint CF of the corresponding Gaussian RVs. In particular,
$$\Psi_{X_1,X_2}(\omega_1,\omega_2) = \exp\!\left(j\omega_1\bar{X}_1 + j\omega_2\bar{X}_2 - \frac{1}{2}\omega_1^2\sigma_1^2 - \frac{1}{2}\omega_2^2\sigma_2^2 - \rho\,\omega_1\omega_2\sigma_1\sigma_2\right) \tag{3.22}$$
then
DIFFERENCE OF CHI-SQUARE
RANDOM VARIABLES
Note that $K_\nu(x)$ is defined for $0 < x \le \infty$ whereas $I_\nu(x)$ is defined for
$|x| \le \infty$ and is an even function of $x$.
Before concluding this section we point out that for the case
$n_1 = n_2 = m$, $m$ odd, the PDF can be expressed in the form of an infinite
series in Whittaker functions [2], which themselves are expressed in
terms of the confluent hypergeometric function $_1F_1(\alpha;\beta;\gamma)$. Because
of the absence of these functions in standard mathematical software
manipulation packages such as Mathematica, and the complexity of
the resulting expressions, their use is somewhat limited in practical
applications, and thus the author has decided to omit these results for
this case. Nevertheless, the CF is still simple and given by
$$p_{Z_2}(z) = -\,p_{Y_2}(y)\big|_{y=z,\;\sigma_2^2\to-\sigma_2^2},\quad z \le 0 \tag{4.45}$$
that is, we use the expression for the PDF of $Y_2$ (which applies for
$y \ge 0$) but substitute $z$ for $y$, $-\sigma_2^2$ for $\sigma_2^2$, and then take its negative
and apply it for $z \le 0$. Similarly, for $Y_2$ a noncentral chi-square RV
with $2m_2$ degrees of freedom, the PDF of $Z_2$ is expressible as
A. Independent Central Chi-Square (+) Central Chi-Square

Define now the RV $Z = Y_1 + Y_2 = Y_1 - Z_2$. Also, for the results of Section
4A, define the notation

Note that since $Z$ only takes on positive (or zero) values, the PDF of
$Z$ is defined only for $z \ge 0$. Similarly, define the notation
Before proceeding, the reader is cautioned that care must be
exercised in applying (5.4) and (5.6) since in some instances the
substitution $\sigma_2^2 \to -\sigma_2^2$ in the generic form of the PDF of $Y$ might
result in functions with imaginary or undefined arguments. In these
instances, one is better off deriving the result for the chi-square sum
directly from a convolution of the individual chi-square RV PDFs
rather than from the result for the chi-square difference. In this same
regard, a closed-form result for the CDF might exist for the chi-square
sum RV even though it doesn't exist for the chi-square difference RV.
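A minimal sketch of the convolution route suggested above, for two independent central chi-square RVs with even degrees of freedom (which are gamma distributed, so library PDFs can be used); the grid limits and parameters are illustrative:

```python
# PDF of Z = Y1 + Y2 by numerical convolution of the individual PDFs.
import numpy as np
from scipy.stats import gamma

m1, s1, m2, s2 = 2, 1.0, 1, 1.5             # n_i = 2*m_i, scales sigma_i
dz = 0.01
z = np.arange(0.0, 80.0, dz)

p1 = gamma.pdf(z, a=m1, scale=2 * s1 ** 2)  # central chi-square, 2*m1 DOF
p2 = gamma.pdf(z, a=m2, scale=2 * s2 ** 2)  # central chi-square, 2*m2 DOF
p_sum = np.convolve(p1, p2)[: z.size] * dz  # PDF of the sum on the same grid

print(p_sum.sum() * dz)                     # ~= 1, normalization sanity check
```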
For $\sigma_2^2 > \sigma_1^2$, applying (5.4) and (5.6) to (4.35) and (4.36) gives
For the limiting case of $\sigma_1^2 = \sigma_2^2 = \sigma^2$, one can use the series expansion
of the generalized Marcum Q-function, namely,
$$1 - Q_m(\alpha,\beta) = \exp\!\left(-\frac{\alpha^2+\beta^2}{2}\right)\sum_{i=m}^{\infty}\left(\frac{\beta}{\alpha}\right)^i I_i(\alpha\beta)$$
and
$$\psi_Z(\omega) = \frac{\exp\!\left(\dfrac{j\omega a^2}{1-2j\omega\sigma_1^2}\right)}{(1-2j\omega\sigma_1^2)^{n_1/2}\,(1-2j\omega\sigma_2^2)^{n_2/2}}$$
PRODUCTS OF RANDOM VARIABLES⁴

1. n = 1

⁴ A large number of the PDF and statistical moment results in this
section come from Ref. 5.
$$E\{Z^k\} = \begin{cases}\dots, & k \text{ even}\\ 0, & k \text{ odd}\end{cases} \tag{6.8}$$
B. Dependent Gaussian (x) Gaussian (Both Have
Zero Mean)
$$E\{R^k\} = (2\sigma_1\sigma_2)^k\left[\Gamma\!\left(1+\frac{k}{2}\right)\right]^2,\quad k \text{ integer}$$
4. $n_1$, $n_2$ arbitrary

$$E\{R^k\} = (2\sigma_1\sigma_2)^k\,\frac{\Gamma\!\left(\frac{n_1+k}{2}\right)\Gamma\!\left(\frac{n_2+k}{2}\right)}{\Gamma\!\left(\frac{n_1}{2}\right)\Gamma\!\left(\frac{n_2}{2}\right)},\quad k \text{ integer}$$
$$E\{R^k\} = (2\sigma_1\sigma_2)^k\left[\Gamma\!\left(\frac{k}{2}+1\right)\right]^2\exp\!\left(-\frac{a^2}{2\sigma_1^2}\right)\,_1F_1\!\left(\frac{k}{2}+1;1;\frac{a^2}{2\sigma_1^2}\right),\quad k \text{ integer} \tag{6.62}$$
$$\psi_Z(\omega) = \exp\!\left(-\frac{\sigma_1}{\sigma_2}|\omega|\right)$$
B. Independent Gaussian (÷) Gaussian (One Has Zero Mean)

A large number of the CDF results in this section come from Ref. 11,
where they appear in their normalized form (the RVs that form the
ratio have unit variance). Once again, a large number of the PDF and
statistical moment results come from Ref. 5.
$$\psi_Z(\omega) = \exp\!\left(j\omega\rho\frac{\sigma_1}{\sigma_2} - \sqrt{1-\rho^2}\,\frac{\sigma_1}{\sigma_2}\,|\omega|\right)$$
Let $X \in N_1(0,\sigma_1^2)$ and $R = \|\mathbf{x}^{(2)}\|$ with $\mathbf{x}^{(2)} \in N_n(\mathbf{0},\sigma_2^2)$ be independent
Gaussian and Rayleigh RVs. Then, the ratio $Z = X/R$ has the
following statistical properties.
Let $X \in N_1(0,\sigma_1^2)$ and $R = \|\mathbf{x}^{(2)}\|$ with $\mathbf{x}^{(2)} \in N_n(\bar{\mathbf{X}}^{(2)},\sigma_2^2)$ be independent
Gaussian and Rice RVs. Then, the ratio $Z = X/R$ has the following
statistical properties.
1. n = 1
I. Independent Rayleigh (÷) Rayleigh
K. Independent Rice (÷) Rayleigh

Let $R_1 = \|\mathbf{x}^{(1)}\|$ and $R_2 = \|\mathbf{x}^{(2)}\|$ be independent Rice and Rayleigh RVs
corresponding to Gaussian vectors $\mathbf{x}^{(1)} \in N_n(\bar{\mathbf{X}}^{(1)},\sigma_1^2)$ and
$\mathbf{x}^{(2)} \in N_n(\mathbf{0},\sigma_2^2)$. Then, the ratio $R = R_1/R_2$ has the following
statistical properties.
MAXIMUM AND MINIMUM OF PAIRS OF RANDOM VARIABLES

Let $X_1 \in N_1(\bar{X}_1,\sigma_1^2)$ and $X_2 \in N_1(\bar{X}_2,\sigma_2^2)$ be independent Gaussian RVs.
Then,
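As an independent cross-check of the tabulated results for this case, the generic max/min construction for two independent RVs can be evaluated numerically. A minimal sketch of the standard textbook construction (not the handbook's own equations):

```python
# PDFs of the maximum and minimum of two independent Gaussians.
import numpy as np
from scipy.stats import norm

def pdf_max(x, m1, s1, m2, s2):
    # d/dx [F1(x) F2(x)] = f1 F2 + f2 F1
    return (norm.pdf(x, m1, s1) * norm.cdf(x, m2, s2)
            + norm.pdf(x, m2, s2) * norm.cdf(x, m1, s1))

def pdf_min(x, m1, s1, m2, s2):
    # d/dx [1 - (1 - F1)(1 - F2)] = f1 (1 - F2) + f2 (1 - F1)
    return (norm.pdf(x, m1, s1) * norm.sf(x, m2, s2)
            + norm.pdf(x, m2, s2) * norm.sf(x, m1, s1))

dx = 0.01
x = np.arange(-10.0, 12.0, dx)
print((pdf_max(x, 1.0, 1.0, 0.0, 2.0) * dx).sum())  # ~= 1, sanity check
print((pdf_min(x, 1.0, 1.0, 0.0, 2.0) * dx).sum())  # ~= 1, sanity check
```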
B. Dependent Gaussian
C. Independent Rayleigh
D. Dependent Rayleigh
E. Independent Log-Normal
Let $y_1 = 10^{X_1/10}$ and $y_2 = 10^{X_2/10}$ be independent log-normal RVs
corresponding to Gaussian RVs $X_1 \in N_1(\bar{X}_1,\sigma^2)$ and $X_2 \in N_1(\bar{X}_2,\sigma^2)$.
Then, the maximum $y_{\max} = \max(y_1,y_2)$ and minimum $y_{\min} = \min(y_1,y_2)$
have the following statistical properties.
F. Dependent Log-Normal
The CDF (8.43) involves Gaussian Q-functions of the standardized log
variables, with arguments of the form
$$\frac{10\log_{10}y-\bar{X}_1}{\sigma_1}\quad\text{and}\quad
\frac{\sigma_1(10\log_{10}y-\bar{X}_2)-\rho\sigma_2(10\log_{10}y-\bar{X}_1)}{\sigma_1\sigma_2\sqrt{1-\rho^2}}.$$
QUADRATIC FORMS
Note that the form of $P_D(0)$ in (9.22) does not depend on the ratio $a/b$,
and thus for the case where both vectors have zero mean, whereupon
$a = b = 0$, using the fact that $Q_1(0,0) = 1$, we immediately obtain the
result in (9.18).
Note that since $\gamma_{xx}\gamma_{yy} > 0$, then for either $A < 0$ or $B < 0$ the
arguments of the square roots in (9.30) are always positive.
D. General Hermitian Quadratic Forms

As an alternative to describing each component of the quadratic form
of (9.5) in a matrix form as in (9.7), the quadratic form itself can be
expressed in a matrix form as follows. Let
$$\mathbf{Q} = \begin{bmatrix} \mathbf{H} & \mathbf{0} & \cdots & \mathbf{0}\\ \mathbf{0} & \mathbf{H} & \cdots & \mathbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{H} \end{bmatrix}$$
Note from (9.31) together with (9.7) that the matrix $\mathbf{Q}$ is Hermitian,
i.e., $\mathbf{Q} = \mathbf{Q}^{*T}$.
Consider now a general quadratic form of the type in (9.32)
where $\mathbf{Q}$ is again Hermitian but not restricted to the diagonal form in
(9.31). Furthermore, denoting the complex elements of $\mathbf{V}$ by
$X_n = X_{n,R} + jX_{n,I}$ and those of $\mathbf{Q}$ by $q_{nm} = q_{nm,R} + jq_{nm,I}$, assume that their
real and imaginary parts satisfy the relations
and
Then, denoting the complex covariance matrix of $\mathbf{V}$ by
$\mathbf{L} = E\{(\mathbf{V}-\bar{\mathbf{V}})(\mathbf{V}-\bar{\mathbf{V}})^{*T}\}$, assumed to be nonsingular, the CF of $D$ is
given by [17]
where the $f_k$'s are the elements of $\mathbf{F}$ and the $\lambda_k$'s are the eigenvalues
of $\mathbf{LQ}$.

A special case of (9.36) corresponds to $\mathbf{X}$ and $\mathbf{Y}$ having zero
mean, in which case $\bar{\mathbf{V}} = \mathbf{0}$. For this case, the $f_k$'s are all equal to zero
and thus (9.36) simplifies to
Let $X \in N_1(0,\sigma_1^2)$ and $R = \|\mathbf{x}^{(2)}\|$ be independent Gaussian and
Rayleigh RVs where $\mathbf{x}^{(2)} \in N_2(\mathbf{0},\sigma_2^2)$. Then, the product RV $Z = XR$
has the PDF
Let $X_1 \in N_1(0,\sigma_1^2)$, $R = \|\mathbf{x}^{(2)}\|$, and $X_2 \in N_1(0,\sigma_3^2)$ be independent
Gaussian, Rayleigh, and Gaussian RVs with $\mathbf{x}^{(2)} \in N_2(\mathbf{0},\sigma_2^2)$. Then, the
RV $Z = X_1 R + X_2$ has the PDF
The relation given there is actually for the complementary error
function, which is related to the Gaussian Q-function by (1.4). Other
early traces of this alternative form can be found in the work of
Weinstein [18] and Pawula [19].
first-order Marcum Q-function can be looked upon as a special case of
the incomplete Toronto function [22, pp. 227-228], which finds its
roots in the radar literature and is defined by
$$T_B(m,n,r) = 2r^{n-m+1}e^{-r^2}\int_0^B t^{m-n}e^{-t^2}I_n(2rt)\,dt$$
In particular, we have
$$\times\exp\!\left[-\frac{\beta^2}{2}\left(1+2\left(\frac{\alpha}{\beta}\right)\sin\theta+\left(\frac{\alpha}{\beta}\right)^2\right)\right]d\theta,\quad \alpha > \beta \ge 0$$
The results in (A.6) and (A.7) can be put in a form with a more
reduced integration interval. In particular, using the symmetry
properties of the trigonometric functions over the intervals $(-\pi,0)$
and $(0,\pi)$, we obtain the alternative forms
A comparison of (2.24) with (1.3) and (2.20) reveals that the canonical
form of the generalized Marcum Q-function suffers from the same
two disadvantages as previously discussed for the Gaussian Q-
function and the first-order Marcum Q-function. Once again from the
standpoint of application, it would be desirable to have an integral
form for the generalized Marcum Q-function in which the limits are
finite and the integrand possesses a Gaussian nature. The discovery
of such a form was made independently in Refs. 24 and 25 with the
following results:
$$\times\exp\!\left[-\frac{\beta^2}{2}\left(1+2\left(\frac{\alpha}{\beta}\right)\sin\theta+\left(\frac{\alpha}{\beta}\right)^2\right)\right]d\theta,\quad 0 \le \alpha/\beta < 1$$
and
$$\times\exp\!\left[-\frac{\alpha^2}{2}\left(1+2\left(\frac{\beta}{\alpha}\right)\sin\theta+\left(\frac{\beta}{\alpha}\right)^2\right)\right]d\theta,\quad 0 < \beta/\alpha < 1$$
that can be further simplified and separated into m odd and m even as
and
$$\times\exp\!\left[-\frac{\alpha^2}{2}\left(1+2\left(\frac{\beta}{\alpha}\right)\sin\theta+\left(\frac{\beta}{\alpha}\right)^2\right)\right]d\theta,\quad 0 \le \beta \le \alpha,\; m \text{ even}$$
We observe from (A.16) through (A.21) that $\alpha$ and $\beta$ are restricted to
be unequal. The special case of $\alpha = \beta$ has the closed-form result [24]
where
with
or equivalently, letting $t = y^2/2$,
$$\Gamma(a,\,x^2/2) = \frac{1}{2^{a-1}}\int_x^{\infty} y^{2a-1}\exp\!\left(-\frac{y^2}{2}\right)dy$$
or equivalently
then equating (A.35) with (A.32) and using the form in (A.34) gives
yet another alternative form for the Gaussian Q-function, namely,
$$Q(x) = \frac{\Gamma(1/2,\,x^2/2)}{2\Gamma(1/2)} = \frac{1}{\pi}\int_0^{\pi/2}\exp\!\left(-\frac{x^2}{2\sin^2\theta}\right)d\theta \tag{A.36}$$
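A minimal numerical sketch comparing the equivalent forms of $Q(x)$ above: the erfc form, the finite-limit Craig-type integral, and the normalized incomplete gamma function (scipy's gammaincc is the regularized $\Gamma(a,x)/\Gamma(a)$):

```python
# Three-way numerical check of the alternative forms of Q(x).
import numpy as np
from scipy import integrate
from scipy.special import erfc, gammaincc

def q_erfc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def q_craig(x):
    # (1/pi) * int_0^{pi/2} exp(-x^2 / (2 sin^2 theta)) dtheta, x >= 0
    f = lambda th: np.exp(-x * x / (2.0 * np.sin(th) ** 2))
    val, _ = integrate.quad(f, 0.0, np.pi / 2.0)
    return val / np.pi

def q_gamma(x):
    # Gamma(1/2, x^2/2) / (2 Gamma(1/2)), via the regularized incomplete gamma
    return 0.5 * gammaincc(0.5, x * x / 2.0)

for x in (0.5, 1.0, 2.5):
    print(q_erfc(x), q_craig(x), q_gamma(x))   # all three should agree
```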
$$Q(x,y;\rho) = \frac{1}{2\pi\sqrt{1-\rho^2}}\int_x^\infty\int_y^\infty \exp\!\left(-\frac{u^2+v^2-2\rho uv}{2(1-\rho^2)}\right)dv\,du$$
More recently, this author derived [28] another alternative form for
$Q(x,y;\rho)$ in this same region that dispensed with the trigonometric
factor that precedes the exponentials in the integrands of (A.38) and
furthermore resulted in an exponential argument that is precisely in
the same simple form as that in the Craig representation of $Q(x)$.
Specifically:
$$Q(x,y;\rho) = \frac{1}{2\pi}\int_0^{\tan^{-1}\!\left(\frac{\sqrt{1-\rho^2}\,x/y}{1-\rho\,x/y}\right)}\exp\!\left(-\frac{y^2}{2\sin^2\theta}\right)d\theta
+ \frac{1}{2\pi}\int_0^{\tan^{-1}\!\left(\frac{\sqrt{1-\rho^2}\,y/x}{1-\rho\,y/x}\right)}\exp\!\left(-\frac{x^2}{2\sin^2\theta}\right)d\theta \tag{A.40}$$
Also, for $\rho = 0$, we immediately get from (A.40) the Craig form for
$Q^2(x)$, namely [25, Eq. (80)]
or equivalently
⁴ Quite often one is interested in integrals of this type where $g(y)$
takes the form of a PDF.
The advantage of (B.l) or (B.2) is that the integration now has finite
limits and thus even if the Laplace transform is such that the integral
cannot be obtained in closed form, it can still be evaluated quite
simply by numerical integration. In this section of the appendix, we
focus on functions g ( x ) whose Laplace transform is known in closed
form and as such the integral can be expressed either in the finite
integral form of (B.l) or (B.2) or in closed form.
A. Q-Function and x
and $L_k(x)$ is the Laguerre polynomial, which is the special case of the
generalized Laguerre polynomial defined in (A.24) corresponding to
$n = 0$, i.e.,
Note that in checking the consistency between (B.35) and (B.36), use
has been made of the relation
$$Q_1(a,a) = \frac{1}{2}\left[1+\exp(-a^2)I_0(a^2)\right] \tag{B.37}$$
which holds even when $a$ is imaginary, and also the fact that
$I_0(x) = I_0(-x)$.
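The identity (B.37) is convenient for validating Marcum Q-function implementations. A minimal sketch, using the exponentially scaled Bessel function to avoid overflow and the noncentral chi-square form of $Q_1$ (function names are illustrative):

```python
# Numerical check of the special-case identity (B.37).
import numpy as np
from scipy.special import i0e           # i0e(x) = exp(-x) * I0(x)
from scipy.stats import ncx2

def marcum_q1(a, b):
    # Q_1(a, b) = P(Y > b^2), Y noncentral chi-square, 2 DOF, noncentrality a^2
    return ncx2.sf(b ** 2, df=2, nc=a ** 2)

for a in (0.5, 2.0, 10.0):
    lhs = marcum_q1(a, a)
    rhs = 0.5 * (1.0 + i0e(a ** 2))     # exp(-a^2) * I0(a^2) without overflow
    print(lhs, rhs)                     # should match to numerical precision
```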
where $0 = \theta_0 < \theta_1 < \theta_2 < \cdots < \theta_{N-1} < \theta_N = \pi/2$. Note that (C.3) is nothing
other than a particular form of Riemann sum that is ordinarily used to
approximate the integral of a function, and thus as $N$ increases, the
tightness of the bound improves. Clearly, for the special case $N = 1$,
whereby $\theta_0 = 0$ and $\theta_1 = \pi/2$, (C.3) reduces to (C.1).
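A minimal sketch of this Riemann-sum bound for the one-dimensional case. Because the Craig integrand increases with $\theta$, right-endpoint evaluation over-estimates the integral, and $N = 1$ recovers the familiar bound $Q(x) \le \frac{1}{2}\exp(-x^2/2)$:

```python
# Riemann-sum upper bound on Q(x) built from the Craig representation.
import numpy as np
from scipy.special import erfc

def q_upper_bound(x, N):
    theta = np.linspace(0.0, np.pi / 2.0, N + 1)  # 0 = th_0 < ... < th_N = pi/2
    widths = np.diff(theta)
    right = theta[1:]                              # right endpoints
    return (widths * np.exp(-x * x / (2.0 * np.sin(right) ** 2))).sum() / np.pi

x = 1.5
exact = 0.5 * erfc(x / np.sqrt(2.0))
for N in (1, 2, 8, 64):
    print(N, q_upper_bound(x, N), exact)  # bound tightens toward Q(x) as N grows
```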
The two-dimensional Gaussian Q-function of (A.39) can
similarly be upper bounded. Consider first the case where $1-\rho y/x$
(or $1-\rho x/y$) is positive. Then, as for the one-dimensional Gaussian
Q-function, the integrands of (A.39) are monotonically increasing
functions of $\theta$ in their respective integration intervals, in which case
the integral $Q(x,y;\rho)$ is upper bounded by
where the first term corresponds to the leading term in the asymptotic
expansion of Q(x). Thus, the second term can be regarded as an
integral representation of the deviation of Q(x) from its
approximation by the first term of its asymptotic series. Furthermore,
since the second term is always negative for x > 0, then the first term
is also an upper bound on Q(x), i.e.,
A lower bound on Q(x) can be obtained by first rewriting the
classical representation of Q(x) in (1.3) as
resulting in
$$v = -\exp\!\left(-\frac{y^2}{2}\right),\qquad dv = y\exp\!\left(-\frac{y^2}{2}\right)dy$$
we obtain
Clearly, since $Q_1(\alpha,\beta)$ can never be negative, the lower bound is only
useful for values of the arguments that result in a non-negative value.
Making a similar recognition in (A.11), then for $\alpha > \beta \ge 0$ we obtain
the lower bound
$$1 - \frac{1}{2}\left[\exp\!\left(-\frac{(\alpha-\beta)^2}{2}\right) - \exp\!\left(-\frac{(\alpha+\beta)^2}{2}\right)\right] \le Q_1(\alpha,\beta) \tag{C.24}$$
For $\beta > \alpha$, the bounds in (C.25) are tighter than those in (C.23).
However, for $\alpha > \beta$ the lower bound in (C.24) is very tight and better
than that in (C.25).
Upper and lower bounds on the mth-order Marcum Q-function
are not readily obtainable from the alternative forms in
(A.16) and (A.17). Nevertheless, it is possible [34] to obtain such
bounds by using the upper and lower bounds on the first-order
Marcum Q-function given in (C.23) together with the recursive
relation
$$Q_{m+1}(\alpha,\beta) = Q_m(\alpha,\beta) + \left(\frac{\beta}{\alpha}\right)^m \exp\!\left(-\frac{\alpha^2+\beta^2}{2}\right) I_m(\alpha\beta)$$
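The recursion is easy to verify numerically. A minimal sketch, again using the noncentral chi-square form of $Q_m$ and the exponentially scaled Bessel function (arguments are illustrative):

```python
# Numerical check of the Marcum Q-function recursion quoted above.
import numpy as np
from scipy.special import ive           # ive(m, x) = exp(-x) * I_m(x)
from scipy.stats import ncx2

def marcum_q(m, a, b):
    # Q_m(a, b) = P(Y > b^2), Y noncentral chi-square, 2m DOF, noncentrality a^2
    return ncx2.sf(b ** 2, df=2 * m, nc=a ** 2)

a, b = 1.2, 2.3
for m in (1, 2, 5):
    lhs = marcum_q(m + 1, a, b)
    # (b/a)^m * exp(-(a^2+b^2)/2) * I_m(ab) = (b/a)^m * exp(-(a-b)^2/2) * ive(m, ab)
    rhs = marcum_q(m, a, b) + (b / a) ** m * np.exp(-(a - b) ** 2 / 2.0) * ive(m, a * b)
    print(lhs, rhs)   # should match
```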
REFERENCES
Fig. 1. Rayleigh PDFs: (a) n=1, Eq. (2.2); (b) n=2, Eq. (2.5).
Fig. 2. Rayleigh CDFs: (a) n=1, Eq. (2.3); (b) n=2, Eq. (2.6).
Fig. 3. Rayleigh Moments: (a) n=1, Eq. (2.4); (b) n=2, Eq. (2.7).
Fig. 4. Rician PDFs: (a) n=1, Eq. (2.14); (b) n=2, Eq. (2.17).
Fig. 5. Rician CDFs: (a) n=1, Eq. (2.15); (b) n=2, Eq. (2.18).
Fig. 6. Rician Moments: (a) n=1, Eq. (2.16); (b) n=2, Eq. (2.19).
Fig. 7. Central Chi-square PDFs: (a) n=2, Eq. (2.32); (b) n=3, Eq. (2.36);
(c) n=4, Eq. (2.32); (d) n=5, Eq. (2.36).
Fig. 7. cont'd.
Fig. 8. Central Chi-square CDFs: (a) n=2, Eq. (2.33); (b) n=4, Eq. (2.33).
Fig. 9. Central Chi-square Moments: (a) n=3, Eq. (2.39); (b) n=4,
Eq. (2.34).
Fig. 10. Noncentral Chi-square PDFs: (a) n=2, Eq. (2.44); (b) n=3,
Eq. (2.48); (c) n=4, Eq. (2.44); (d) n=5, Eq. (2.48).
Fig. 10. cont'd.
Fig. 11. Noncentral Chi-square CDFs: (a) n=2, Eq. (2.45); (b) n=4,
Eq. (2.45).
Fig. 12. Noncentral Chi-square Moments: (a) n=1, Eq. (2.43); (b) n=2,
Eq. (2.47).
Fig. 13. Independent Central Chi-square (-) Central Chi-square PDFs:
(a) n1=n2=1, equal variance, Eq. (4.1); (b) n1=n2=1, unequal variance,
Eq. (4.1); (c) n1=n2=2, equal variance, Eq. (4.4); (d) n1=n2=2, unequal
variance, Eq. (4.4).
Fig. 13. cont'd.
Fig. 14. Independent Central Chi-square (-) Central Chi-square CDFs:
(a) n1=n2=2, equal variance, Eq. (4.5); (b) n1=n2=2, unequal variance,
Eq. (4.5).
Fig. 15. Independent Central Chi-square (-) Central Chi-square PDFs:
(a) n1=4, n2=2, equal variance, Eq. (4.13); (b) n1=4, n2=2, unequal
variance, Eq. (4.13).
Fig. 16. Independent Central Chi-square (-) Central Chi-square CDFs:
(a) n1=4, n2=2, equal variance, Eq. (4.14); (b) n1=4, n2=2, unequal
variance, Eq. (4.14).
Fig. 17. Dependent Central Chi-square (-) Central Chi-square PDFs:
(a) n1=n2=1, equal variance, Eq. (4.20); (b) n1=n2=1, unequal variance,
Eq. (4.20); (c) n1=n2=2, equal variance, Eq. (4.23); (d) n1=n2=2, unequal
variance, Eq. (4.23).
Fig. 17. cont'd.
Fig. 18. Dependent Central Chi-square (-) Central Chi-square CDFs:
(a) n1=n2=2, equal variance, Eq. (4.24); (b) n1=n2=2, unequal variance,
Eq. (4.24).
Fig. 19. Independent Noncentral Chi-square (-) Central Chi-square
PDFs: (a) n1=n2=2, equal variance, Eq. (4.32); (b) n1=n2=2, unequal
variance, Eq. (4.32).
Fig. 20. Independent Noncentral Chi-square (-) Central Chi-square
CDFs: (a) n1=n2=2, equal variance, Eq. (4.33); (b) n1=n2=2, unequal
variance, Eq. (4.33).
Fig. 21. Independent Central Chi-square (+) Central Chi-square PDFs:
(a) n1=n2=1, unequal variance, Eq. (5.7); (b) n1=n2=2, unequal variance,
Eq. (5.11).
Fig. 22. Independent Central Chi-square (+) Central Chi-square
CDFs: (a) n1=n2=1, unequal variance, Eq. (5.8); (b) n1=n2=2, unequal
variance, Eq. (5.12).
Fig. 23. Independent Central Chi-square (+) Central Chi-square
Moments: (a) n1=n2=1, unequal variance, Eq. (5.10); (b) n1=n2=2,
unequal variance, Eq. (5.14).
Fig. 24. Dependent Central Chi-square (+) Central Chi-square PDFs:
(a) n1=n2=1, equal variance, Eq. (5.30); (b) n1=n2=1, unequal variance,
Eq. (5.30); (c) n1=n2=2, equal variance, Eq. (5.34); (d) n1=n2=2, unequal
variance, Eq. (5.34).
Fig. 24. cont'd.
Fig. 25. Dependent Central Chi-square (+) Central Chi-square CDFs:
(a) n1=n2=1, equal variance, Eq. (5.31); (b) n1=n2=1, unequal variance,
Eq. (5.31); (c) n1=n2=2, equal variance, Eq. (5.35); (d) n1=n2=2, unequal
variance, Eq. (5.35).
Fig. 25. cont'd.
Fig. 26. Dependent Central Chi-square (+) Central Chi-square
Moments: (a) n1=n2=1, Eq. (5.33); (b) n1=n2=2, Eq. (5.37).
Fig. 27. Independent Noncentral Chi-square (+) Central Chi-square
PDFs: (a) n1=n2=2, Eq. (5.45); (b) n1=4, n2=2, Eq. (5.48).
Fig. 28. Independent Noncentral Chi-square (+) Central Chi-square
CDFs: (a) n1=n2=2, Eq. (5.46); (b) n1=4, n2=2, Eq. (5.49).
Fig. 29. Independent Zero Mean Gaussian (x) Gaussian PDFs:
(a) n=1, Eq. (6.2); (b) n=2, Eq. (6.5).
Fig. 30. Dependent Zero Mean Gaussian (x) Gaussian PDFs:
(a) n=1, Eq. (6.15); (b) n=2, Eq. (6.18).
Fig. 31. Zero Mean Gaussian (x) Gaussian CDFs: (a) Independent,
n=2, Eq. (6.6); (b) Dependent, n=2, Eq. (6.19).
Fig. 32. Independent Rayleigh (x) Rayleigh PDFs: (a) n1=n2=2,
Eq. (6.45); (b) n1=n2=4, Eq. (6.48).
Fig. 33. Dependent Rayleigh (x) Rayleigh PDFs: (a) n=2, Eq. (6.54);
(b) n=4, Eq. (6.54).
Fig. 34. Independent Rayleigh (x) Rayleigh Moments: (a) n1=n2=2,
Eq. (6.47); (b) n1=n2=4, Eq. (6.50).
Fig. 35. Dependent Rayleigh (x) Rayleigh Moments: (a) n=2, Eq. (6.56);
(b) n=4, Eq. (6.56).
Fig. 36. Independent Rice (x) Rayleigh PDFs: (a) n1=n2=2, equal
variance, Eq. (6.59); (b) n1=n2=2, unequal variance, Eq. (6.59).
Fig. 37. Independent Rice (x) Rayleigh CDFs: (a) n1=n2=2, equal
variance, Eq. (6.60); (b) n1=n2=2, unequal variance, Eq. (6.60).
Fig. 38. Independent Rice (x) Rayleigh Moments: (a) n1=n2=2,
Eq. (6.61); (b) n1=n2=4, Eq. (6.64).
Fig. 39. Zero Mean Gaussian (÷) Gaussian PDFs: (a) Independent,
Eq. (7.1); (b) Dependent, Eq. (7.9).
Fig. 40. Zero Mean Gaussian (÷) Gaussian CDFs: (a) Independent,
Eq. (7.2); (b) Dependent, Eq. (7.10).
Fig. 43. Independent Zero Mean Gaussian (÷) Rice PDFs:
(a) n=1, Eq. (7.28); (b) n=2, Eq. (7.30).
Fig. 44. Independent Zero Mean Gaussian (÷) Rice CDFs: (a) n=1,
Eq. (7.29); (b) n=2, Eq. (7.31).
Fig. 45. Independent Rayleigh (÷) Rayleigh PDFs: (a) n1=n2=1,
Eq. (7.37); (b) n1=n2=2, Eq. (7.44).
Fig. 46. Dependent Rayleigh (÷) Rayleigh PDFs: (a) n=1, Eq. (7.56);
(b) n=2, Eq. (7.58).
Fig. 47. Independent Rayleigh (÷) Rayleigh CDFs: (a) n1=n2=1,
Eq. (7.38); (b) n1=n2=2, Eq. (7.45).
Fig. 48. Dependent Rayleigh (÷) Rayleigh CDFs: (a) n=1, Eq. (7.57);
(b) n=2, Eq. (7.59).
Fig. 49. Independent Rice (÷) Rayleigh PDFs: (a) n1=n2=1, Eq. (7.67);
(b) n1=n2=2, Eq. (7.74).
Fig. 50. Independent Rice (÷) Rayleigh CDFs: (a) n1=n2=1, Eq. (7.68);
(b) n1=n2=2, Eq. (7.75).
Fig. 51. Independent Rice (÷) Rice PDFs: (a) n1=n2=2, equal noncen-
trality parameters, Eq. (7.82); (b) n1=n2=2, unequal noncentrality
parameters, Eq. (7.82).
Fig. 52. Independent Rice (÷) Rice CDFs: (a) n1=n2=2, equal noncen-
trality parameters, Eq. (7.83); (b) n1=n2=2, unequal noncentrality
parameters, Eq. (7.83).