Random Processes
Don H. Johnson
Rice University
2009
Contents
1 Probability 1
1.1 Foundations of Probability Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Mathematical Structure of Events . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 The Probability of an Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Random Variables and Probability Density Functions . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Function of a Random Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Expected Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.3 Jointly Distributed Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.4 Notions of Statistical Dependence . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.5 Random Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.6 Single function of a random vector . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.7 Several functions of a random vector . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Sequences of Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Special Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.1 The Gaussian Random Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.2 The Central Limit Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.3 The Exponential Random Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.4 The Bernoulli Random Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.5 Stable Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2 Stochastic Processes 19
2.1 Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.1 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.2 The Gaussian Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.3 Sampling and Random Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 Structural Aspects of Waveform Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.1 Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.2 Time-Reversibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.3 Statistical Dependence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2.4 Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.3 Simple Waveform Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4 Structure of Point Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4.2 The Poisson Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.4.3 Non-Poisson Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.5 Linear Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.5.1 Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.5.2 Inner Product Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3 Estimation Theory 69
3.1 Terminology in Estimation Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2 Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.2.1 Minimum Mean-Squared Error Estimators . . . . . . . . . . . . . . . . . . . . . . 71
3.2.2 Maximum a Posteriori Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.2.3 Linear Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.2.4 Maximum Likelihood Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.3 Signal Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.3.1 Linear Minimum Mean-Squared Error Estimator . . . . . . . . . . . . . . . . . . . 82
3.3.2 Maximum Likelihood Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.4 Linear Signal Waveform Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.4.1 General Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.4.2 Wiener Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.5 Probability Density Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.5.1 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.5.2 Histogram Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.5.3 Density Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Bibliography 149
Chapter 1
Probability
[Figure: Venn diagrams illustrating the union $A \cup B$ ("A or B"), the intersection $A \cap B$ ("A and B"), and the complement $\overline{A}$ ("not A").]
The null set $\emptyset$ is the complement of $\Omega$, the universal set containing all events. Indecomposable or elementary events have nothing in common: $\omega_i \cap \omega_j = \emptyset$, $i \neq j$. Events, on the other hand, may share elements. Events are said to be mutually exclusive if there is no element common to both events: $A \cap B = \emptyset$.
For a collection $\mathcal{A}$ of events to be an algebra,
1. $\emptyset \in \mathcal{A}$ and $\Omega \in \mathcal{A}$;
2. if the events $A \in \mathcal{A}$ and $B \in \mathcal{A}$, then both the union and intersection of these events are in $\mathcal{A}$: $A \cup B \in \mathcal{A}$ and $A \cap B \in \mathcal{A}$.
This property implies that all finite unions and intersections of events are also contained in the algebra: if $A_1, \dots, A_N \in \mathcal{A}$, then
$$\bigcup_{i=1}^{N} A_i \in \mathcal{A} \quad\text{and}\quad \bigcap_{i=1}^{N} A_i \in \mathcal{A}.$$
We say that $\mathcal{A}$ is a $\sigma$-algebra if the algebra is closed under all countable intersections and unions: if $A_1, \dots, A_n, \dots \in \mathcal{A}$, then
$$\bigcup_{i=1}^{\infty} A_i \in \mathcal{A} \quad\text{and}\quad \bigcap_{i=1}^{\infty} A_i \in \mathcal{A}.$$
In probability theory, a sample space $\Omega$ is the set of all possible elementary outcomes $\omega_i$ of an experiment, which can be collected into event sets.
Figure 1.1: A random variable $X$ is a function having a domain on the $\sigma$-algebra of events and a range lying somewhere on the real line. Random variables need not be one-to-one or onto.
The probability distribution function, or cumulative distribution function, can be defined for continuous, discrete (only if an ordering exists), and mixed random variables:
$$P_X(x) = \Pr[X \le x].$$
Note that $X$ denotes the random variable and $x$ denotes the argument of the distribution function. Probability distribution functions are increasing functions: if $A = \{\omega : X(\omega) \le x_1\}$ and $B = \{\omega : x_1 < X(\omega) \le x_2\}$, then $\Pr[A \cup B] = \Pr[A] + \Pr[B]$, so that $P_X(x_2) = P_X(x_1) + \Pr[x_1 < X \le x_2]$, which means that $P_X(x_2) \ge P_X(x_1)$ for $x_1 \le x_2$.
The probability density function $p_X(x)$ is defined to be that function which, when integrated, yields the distribution function:
$$P_X(x) = \int_{-\infty}^{x} p_X(\alpha)\, d\alpha.$$
As distribution functions may be discontinuous when the random variable is discrete or mixed, we allow density functions to contain impulses. Furthermore, density functions must be non-negative since their integrals are increasing.
1.2.1 Function of a Random Variable
When random variables are real-valued, we can consider applying a real-valued function. Let $Y = f(X)$; in essence, we have the sequence of maps $\Omega \to \mathbb{R} \to \mathbb{R}$, which is equivalent to a simple mapping from sample space to the real line. Mappings of this sort constitute the definition of a random variable, leading us to conclude that $Y$ is a random variable. Now the question becomes "What are $Y$'s probabilistic properties?" The key to determining the probability density function, which would allow calculation of the mean and variance, for example, is to use the probability distribution function.
For the moment, assume that $f$ is a monotonically increasing function. The probability distribution of $Y$ we seek is
$$P_Y(y) = \Pr[Y \le y] = \Pr[f(X) \le y] = \Pr[X \le f^{-1}(y)] = P_X\big(f^{-1}(y)\big). \tag{*}$$
The third equality in (*) is the key step; here, $f^{-1}(y)$ is the inverse function. Because $f$ is a strictly increasing function, the underlying portion of sample space corresponding to $Y \le y$ must be the same as that corresponding to $X \le f^{-1}(y)$. We can find $Y$'s density by evaluating the derivative:
$$p_Y(y) = p_X\big(f^{-1}(y)\big)\, \frac{d f^{-1}(y)}{dy}.$$
What property do the sets A and B have that makes this expression correct?
Example
Suppose $X$ has an exponential probability density: $p_X(x) = e^{-x} u(x)$, where $u(x)$ is the unit-step function. Let $Y = X^2$. Because the square function is monotonic over the positive real line, our formula applies. We find that
$$p_Y(y) = \frac{1}{2\sqrt{y}}\, e^{-\sqrt{y}}, \quad y \ge 0.$$
Although difficult to show, this density indeed integrates to one.
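As a numerical aside (our addition, not part of the original text), the transformation in this example can be checked by simulation. The sketch below, assuming only the Python standard library, draws exponential samples, squares them, and compares the empirical probability $\Pr[Y \le 1]$ against the closed-form distribution value $P_Y(1) = 1 - e^{-1}$.

```python
import math
import random

random.seed(1)

# Draw exponential(1) samples via inverse transform: X = -ln(1 - U),
# then square them to obtain samples of Y = X^2.
N = 200_000
y = [(-math.log(1.0 - random.random())) ** 2 for _ in range(N)]

# Empirical Pr[Y <= 1] versus the analytic value
# P_Y(1) = P_X(sqrt(1)) = 1 - e^{-1}.
empirical = sum(yi <= 1.0 for yi in y) / N
analytic = 1.0 - math.exp(-1.0)
print(empirical, analytic)  # the two agree to a couple of decimal places
```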
1.2.2 Expected Values
Several important quantities are expected values, with specific forms for the function $f$.

$f(X) = X$.
The expected value or mean of a random variable is the center of mass of the probability density function. We shall often denote the expected value by $m_X$, or just $m$ when the meaning is clear. Note that the expected value can be a number never assumed by the random variable ($p_X(m)$ can be zero). An important property of the expected value of a random variable is linearity: $E[aX] = a E[X]$, $a$ being a scalar.

$f(X) = X^2$.
$E[X^2]$ is known as the mean-squared value of $X$ and represents the power in the random variable.

$f(X) = (X - m_X)^2$.
The so-called second central moment of a random variable is its variance, usually denoted by $\sigma_X^2$. This expression for the variance simplifies to $\sigma_X^2 = E[X^2] - E[X]^2$, which expresses the variance operator $V[\cdot]$. The square root of the variance, $\sigma_X$, is the standard deviation and measures the spread of the distribution of $X$. Among all possible second moments $E[(X - c)^2]$, the minimum value occurs when $c = m_X$ (simply evaluate the derivative with respect to $c$ and equate it to zero).

$f(X) = X^n$.
$E[X^n]$ is the $n$th moment of the random variable and $E[(X - m_X)^n]$ the $n$th central moment.

$f(X) = e^{j\nu X}$.
The characteristic function of a random variable is essentially the Fourier Transform of the probability density function:
$$\Phi_X(j\nu) \equiv E\big[e^{j\nu X}\big] = \int_{-\infty}^{\infty} p_X(x)\, e^{j\nu x}\, dx.$$
The moments of a random variable can be calculated from the derivatives of the characteristic function evaluated at the origin:
$$E[X^n] = \frac{1}{j^n} \left.\frac{d^n \Phi_X(j\nu)}{d\nu^n}\right|_{\nu=0}.$$
1.2.3 Jointly Distributed Random Variables
The joint probability density function $p_{X,Y}(x,y)$ is related to the distribution function via double integration:
$$P_{X,Y}(x,y) = \int_{-\infty}^{x}\int_{-\infty}^{y} p_{X,Y}(\alpha,\beta)\, d\beta\, d\alpha \quad\text{or}\quad p_{X,Y}(x,y) = \frac{\partial^2 P_{X,Y}(x,y)}{\partial x\, \partial y}.$$
Since $\lim_{y\to\infty} P_{X,Y}(x,y) = P_X(x)$, the so-called marginal density functions can be related to the joint density function:
$$p_X(x) = \int_{-\infty}^{\infty} p_{X,Y}(x,\beta)\, d\beta \quad\text{and}\quad p_Y(y) = \int_{-\infty}^{\infty} p_{X,Y}(\alpha,y)\, d\alpha.$$
Extending the ideas of conditional probabilities, the conditional probability density function $p_{X|Y}(x|Y=y)$ is defined (when $p_Y(y) \neq 0$) as
$$p_{X|Y}(x|Y=y) = \frac{p_{X,Y}(x,y)}{p_Y(y)}.$$
For jointly defined random variables, expected values are defined similarly as with single random variables. Probably the most important joint moment is the covariance:
$$\operatorname{cov}[X,Y] \equiv E[XY] - E[X]\,E[Y], \quad\text{where } E[XY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, p_{X,Y}(x,y)\, dx\, dy.$$
Related to the covariance is the (confusingly named) correlation coefficient: the covariance normalized by the standard deviations of the component random variables,
$$\rho_{X,Y} = \frac{\operatorname{cov}[X,Y]}{\sigma_X \sigma_Y}.$$
Because of the Cauchy-Schwarz inequality, the correlation coefficient's value ranges between $-1$ and $1$.
A conditional expected value is the mean of the conditional density:
$$E[X|Y] = \int_{-\infty}^{\infty} x\, p_{X|Y}(x|Y=y)\, dx.$$
Note that the conditional expected value is now a function of $Y$ and is therefore a random variable. Consequently, it too has an expected value, which is easily evaluated to be the expected value of $X$:
$$E\big[E[X|Y]\big] = \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} x\, p_{X|Y}(x|Y=y)\, dx\right] p_Y(y)\, dy = E[X].$$
More generally, the expected value of a function of two random variables can be shown to be the expected value of a conditional expected value: $E[f(X,Y)] = E\big[E[f(X,Y)|Y]\big]$. This kind of calculation is frequently simpler to evaluate than trying to find the expected value of $f(X,Y)$ all at once. A particularly interesting example of this simplicity is the random sum of random variables. Let $L$ be a random variable and $\{X_l\}$ a sequence of random variables. We will find occasion to consider the quantity $S_L = \sum_{l=1}^{L} X_l$. Assuming that each component of the sequence has the same expected value $E[X]$, the expected value of the sum is found to be
$$E[S_L] = E\Big[E\Big[\textstyle\sum_{l=1}^{L} X_l \,\Big|\, L\Big]\Big] = E\big[L\, E[X]\big] = E[L]\, E[X].$$
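As an illustrative aside (our addition, a sketch in Python), the random-sum identity $E[S_L] = E[L]\,E[X]$ can be checked by Monte Carlo. Here $L$ is taken to be a fair die roll (mean 3.5) and each $X_l$ an exponential with unit mean; these particular choices are ours, not the text's.

```python
import math
import random

random.seed(2)

# E[S_L] = E[L] E[X]: L is a fair-die roll (mean 3.5),
# each X_l is exponential(1) (mean 1), independent of L.
trials = 100_000
total = 0.0
for _ in range(trials):
    L = random.randint(1, 6)
    total += sum(-math.log(1.0 - random.random()) for _ in range(L))

mc_mean = total / trials
print(mc_mean)  # close to E[L] * E[X] = 3.5 * 1 = 3.5
```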
To find the density of a linearly transformed random vector $\mathbf{Y} = \mathbf{A}\mathbf{X}$, we need to evaluate the $N$th-order mixed derivative ($N$ is the dimension of the random vectors). The Jacobian appears, and in this case the Jacobian is the determinant of the matrix $\mathbf{A}$:
$$p_{\mathbf{Y}}(\mathbf{y}) = \frac{1}{|\det \mathbf{A}|}\; p_{\mathbf{X}}\big(\mathbf{A}^{-1}\mathbf{y}\big).$$
1.3 Sequences of Random Variables
A sequence of random variables $X_1, X_2, \dots$ denotes a sequence of functions defined on the probability space. We care how this sequence of random variables behaves; in particular, does the sequence converge to some well-defined random variable?
$$\lim_{n\to\infty} X_n \stackrel{?}{=} X$$
One could simply extend the definition of a convergent sequence of real-valued functions: does $f_n(x) \to f(x)$? Here, convergence means that the sequence of real numbers $f_n(x_0)$ converges to $f(x_0)$ for all choices of $x_0$. Were it so simple. This kind of convergence is known as point-wise convergence. It is well known that Fourier series do not converge point-wise (points of discontinuity cause problems). Consequently, we need weaker forms of convergence, which amounts to defining what the limit means. For random variables, it is even more complicated because we also need to include the definition of probability for all the random variables involved. Consequently, many forms of convergence have been defined.
Sure convergence. The random sequence $\{X_n\}$ converges surely to the random variable $X$ if the sequence $X_n(\omega)$ converges to the function $X(\omega)$ as $n \to \infty$ for all $\omega$. Sure convergence amounts to point-wise convergence of nonrandom functions. This most restrictive form of convergence requires that the sequence converges even on sets that have probability zero of occurring. Consequently, this form of convergence is usually too demanding.
Almost-sure convergence. The sequence $\{X_n\}$ converges a.s. to $X$ on all sets that have non-zero probability:
$$\Pr\Big[\lim_{n\to\infty} X_n = X\Big] = 1.$$
Mean-square convergence. The sequence converges in mean square when
$$\lim_{n\to\infty} E\big[(X_n - X)^2\big] = 0.$$
This kind of convergence depends only on the second-order properties of the random variables (all second moments must be finite, of course) and thus is a weak form of convergence.
Convergence in probability. This even weaker form of convergence than in mean-square demands that the probability that the sequence deviates from the limit be zero:
$$\lim_{n\to\infty} \Pr\big[|X_n - X| > \varepsilon\big] = 0, \quad \varepsilon > 0.$$
"Weaker" means that we can show that all sequences converging in mean-square also converge in probability, but not vice versa. The proof relies on the Chebyshev inequality:
$$\Pr\big[|Y| \ge \varepsilon\big] \le \frac{E[Y^2]}{\varepsilon^2}.$$
Showing this result is easy:
$$E[Y^2] = \int_{-\infty}^{\infty} y^2 p_Y(y)\, dy \ge \int_{|y|\ge\varepsilon} y^2 p_Y(y)\, dy \ge \varepsilon^2 \int_{|y|\ge\varepsilon} p_Y(y)\, dy = \varepsilon^2 \Pr\big[|Y| \ge \varepsilon\big].$$
Applying this inequality to $Y = X_n - X$ shows that forcing the mean-squared error to zero forces the deviation probability to zero as well.
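As a small numerical aside (our addition), the Chebyshev inequality $\Pr[|Y| \ge \varepsilon] \le E[Y^2]/\varepsilon^2$ can be checked empirically; the sketch below, assuming plain Python, uses $Y$ uniform on $[-1,1]$, for which $E[Y^2] = 1/3$.

```python
import random

random.seed(3)

# Empirical check of Chebyshev: Pr[|Y| >= eps] <= E[Y^2]/eps^2
# for Y uniform on [-1, 1], where E[Y^2] = 1/3.
N = 100_000
samples = [2.0 * random.random() - 1.0 for _ in range(N)]
second_moment = sum(v * v for v in samples) / N

results = {}
for eps in (0.25, 0.5, 0.9):
    tail = sum(abs(v) >= eps for v in samples) / N
    bound = second_moment / (eps * eps)
    results[eps] = (tail, bound)
    print(eps, tail, bound)  # the tail probability never exceeds the bound
```

The bound is loose (often much larger than the actual tail probability), which is exactly why convergence in probability is weaker than mean-square convergence.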
Convergence in distribution. The sequence converges in distribution when
$$\lim_{n\to\infty} P_{X_n}(x) = P_X(x)$$
for all points of continuity of $P_X(x)$. This is the weakest form of convergence of those described here since it only concerns the probability assignments, not the inherent properties of the random variables.
The hierarchy of convergence modes of random sequences is shown in Figure 1.2.
[Figure: sure convergence implies almost-sure convergence, which implies convergence in probability and then in distribution; mean-square convergence also implies convergence in probability.]
Figure 1.2: The implication hierarchy of the notions of convergence is depicted. The weakest form (in distribution) encompasses more situations, while the most restrictive (surely) applies to the fewest.
Perhaps the most common application of these notions of convergence is the Law of Large Numbers: the sample average of statistically independent, identically distributed random variables converges to the mean.
$$\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} X_i = E[X]$$
Here, the sequence of random variables is the sample average, and the limiting random variable is degenerate, assuming only the value of the mean. The Strong Law of Large Numbers uses the notion of almost-sure convergence and the Weak Law of Large Numbers uses convergence in probability.
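The tightening of the sample average around the mean can be seen directly in simulation; the following sketch (our addition, assuming plain Python) averages uniform $[0,1)$ draws, whose mean is $1/2$.

```python
import random

random.seed(4)

# Law of Large Numbers: the sample average of iid uniform(0,1) draws
# (true mean 1/2) tightens around the mean as n grows.
def sample_average(n):
    return sum(random.random() for _ in range(n)) / n

averages = {n: sample_average(n) for n in (10, 1_000, 100_000)}
for n, avg in averages.items():
    print(n, avg)  # deviations from 0.5 shrink roughly like 1/sqrt(n)
```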
1.4 Special Random Variables
1.4.1 The Gaussian Random Variable
$$p_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x-m)^2/2\sigma^2}$$
The mean of such a Gaussian random variable is $m$ and its variance $\sigma^2$. As a shorthand notation, this information is denoted by $X \sim \mathcal{N}(m, \sigma^2)$. The characteristic function $\Phi_X(\cdot)$ of a Gaussian random variable is given by
$$\Phi_X(j\nu) = e^{jm\nu}\, e^{-\nu^2\sigma^2/2}.$$
No closed-form expression exists for the probability distribution function of a Gaussian random variable. For a zero-mean, unit-variance Gaussian random variable ($\mathcal{N}(0,1)$), the probability that it exceeds the value $x$ is denoted by $Q(x)$:
$$\Pr[X > x] = 1 - P_X(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-\alpha^2/2}\, d\alpha \equiv Q(x).$$
A plot of $Q(\cdot)$ is shown in Fig. 1.3. When the Gaussian random variable has non-zero mean and/or non-unit variance, the probability that it exceeds $x$ equals $Q\big((x-m)/\sigma\big)$.
Gaussian random variables are also known as normal random variables.
Figure 1.3: The function $Q(\cdot)$ is plotted on logarithmic coordinates. Beyond values of about two, this function decreases quite rapidly. Two approximations are also shown that correspond to the upper and lower bounds given by Eq. (1.1).
$$\frac{x}{1+x^2}\, \frac{e^{-x^2/2}}{\sqrt{2\pi}} \;\le\; Q(x) \;\le\; \frac{e^{-x^2/2}}{\sqrt{2\pi}\, x} \tag{1.1}$$
As $x$ becomes large, these bounds approach each other and either can serve as an approximation to $Q(\cdot)$; the upper bound is usually chosen because of its relative simplicity. The lower bound can be improved; noting that the term $x/(1+x^2)$ decreases as $x$ decreases below one and that $Q(x)$ increases as $x$ decreases, the term can be replaced by its value at $x = 1$ without affecting the sense of the bound for $x \le 1$:
$$Q(x) \ge \frac{e^{-x^2/2}}{2\sqrt{2\pi}}, \quad x \le 1. \tag{1.2}$$
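The bounds of Eq. (1.1) are easy to evaluate numerically; the following sketch (our addition) computes $Q(x)$ through the standard-library complementary error function, $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$, and compares it with both bounds.

```python
import math

def Q(x):
    """Upper-tail probability of a standard Gaussian."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_upper(x):
    # Upper bound of Eq. (1.1)
    return math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * x)

def q_lower(x):
    # Lower bound of Eq. (1.1)
    return (x / (1.0 + x * x)) * math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

for v in (0.5, 1.0, 2.0, 4.0):
    print(v, q_lower(v), Q(v), q_upper(v))  # lower <= Q <= upper at every x
```

Running this shows the two bounds squeezing together as $x$ grows, consistent with using the upper bound as an approximation for large $x$.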
We will have occasion to evaluate the expected value of $\exp\{aX + bX^2\}$ where $X \sim \mathcal{N}(m, \sigma^2)$ and $a$, $b$ are constants. By definition,
$$E\big[e^{aX+bX^2}\big] = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\Big\{ax + bx^2 - \frac{(x-m)^2}{2\sigma^2}\Big\}\, dx.$$
The argument of the exponential requires manipulation (i.e., completing the square) before the integral can be evaluated. This argument can be written as
$$-\frac{1}{2\sigma^2}\Big[(1-2b\sigma^2)x^2 - 2(m+a\sigma^2)x + m^2\Big] = -\frac{1-2b\sigma^2}{2\sigma^2}\Big(x - \frac{m+a\sigma^2}{1-2b\sigma^2}\Big)^2 + \frac{1}{2\sigma^2}\Big[\frac{(m+a\sigma^2)^2}{1-2b\sigma^2} - m^2\Big].$$
The expected value then becomes
$$E\big[e^{aX+bX^2}\big] = \exp\Big\{\frac{1}{2\sigma^2}\Big[\frac{(m+a\sigma^2)^2}{1-2b\sigma^2} - m^2\Big]\Big\} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\Big\{-\frac{1-2b\sigma^2}{2\sigma^2}\Big(x - \frac{m+a\sigma^2}{1-2b\sigma^2}\Big)^2\Big\}\, dx.$$
Let
$$\alpha = \frac{x - \dfrac{m+a\sigma^2}{1-2b\sigma^2}}{\sigma\big/\sqrt{1-2b\sigma^2}},$$
which implies that we must require $1-2b\sigma^2 > 0$, or $b < 1/2\sigma^2$. We then obtain
$$E\big[e^{aX+bX^2}\big] = \frac{1}{\sqrt{1-2b\sigma^2}} \exp\Big\{\frac{1}{2\sigma^2}\Big[\frac{(m+a\sigma^2)^2}{1-2b\sigma^2} - m^2\Big]\Big\} \cdot \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-\alpha^2/2}\, d\alpha,$$
and the remaining integral equals one, leaving
$$E\big[e^{aX+bX^2}\big] = \frac{1}{\sqrt{1-2b\sigma^2}} \exp\Big\{\frac{1}{2\sigma^2}\Big[\frac{(m+a\sigma^2)^2}{1-2b\sigma^2} - m^2\Big]\Big\}.$$
Several special cases follow.
1. $a = 0$, $X \sim \mathcal{N}(m, \sigma^2)$:
$$E\big[e^{bX^2}\big] = \frac{\exp\Big\{\dfrac{bm^2}{1-2b\sigma^2}\Big\}}{\sqrt{1-2b\sigma^2}}.$$
2. $a = 0$, $X \sim \mathcal{N}(0, \sigma^2)$:
$$E\big[e^{bX^2}\big] = \frac{1}{\sqrt{1-2b\sigma^2}}.$$
3. $X \sim \mathcal{N}(0, \sigma^2)$:
$$E\big[e^{aX+bX^2}\big] = \frac{\exp\Big\{\dfrac{a^2\sigma^2}{2(1-2b\sigma^2)}\Big\}}{\sqrt{1-2b\sigma^2}}.$$
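These closed forms are easy to spot-check by Monte Carlo; the sketch below (our addition, with arbitrarily chosen constants $a$, $b$, $\sigma$) verifies special case 3.

```python
import math
import random

random.seed(5)

# Monte Carlo check of E[exp(aX + bX^2)] for X ~ N(0, sigma^2),
# against exp(a^2 sigma^2 / (2(1 - 2 b sigma^2))) / sqrt(1 - 2 b sigma^2).
a, b, sigma = 0.5, 0.2, 1.0
assert b < 1.0 / (2.0 * sigma**2)  # required for the expectation to exist

N = 400_000
acc = 0.0
for _ in range(N):
    x = random.gauss(0.0, sigma)
    acc += math.exp(a * x + b * x * x)
mc = acc / N

closed_form = math.exp(a**2 * sigma**2 / (2.0 * (1.0 - 2.0 * b * sigma**2))) \
              / math.sqrt(1.0 - 2.0 * b * sigma**2)
print(mc, closed_form)  # the two agree closely
```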
The real-valued random vector $\mathbf{X}$ is said to be a Gaussian random vector if its joint density function has the form
$$p_{\mathbf{X}}(\mathbf{x}) = \frac{1}{\sqrt{\det(2\pi \mathbf{K})}} \exp\Big\{-\frac{1}{2}(\mathbf{x}-\mathbf{m})^t \mathbf{K}^{-1} (\mathbf{x}-\mathbf{m})\Big\}.$$
The vector $\mathbf{m}_X$ denotes the expected value of the Gaussian random vector and $\mathbf{K}_X$ its covariance matrix:
$$\mathbf{m}_X = E[\mathbf{X}], \qquad \mathbf{K}_X = E[\mathbf{X}\mathbf{X}^t] - \mathbf{m}_X \mathbf{m}_X^t.$$
As in the univariate case, the Gaussian distribution of a random vector is denoted by $\mathbf{X} \sim \mathcal{N}(\mathbf{m}_X, \mathbf{K}_X)$. Note that if the covariance matrix is diagonal, which would occur if the components of the random vector were pairwise uncorrelated, the joint probability density factors into the product of the marginal densities. Thus, for Gaussian random vectors, if all components are pairwise uncorrelated, the random variables are statistically independent: the weakest form of statistical independence implies the strongest.
After applying a linear transformation to a Gaussian random vector, such as $\mathbf{Y} = \mathbf{A}\mathbf{X}$, the result is also a Gaussian random vector (a random variable if the matrix is a row vector): $\mathbf{Y} \sim \mathcal{N}(\mathbf{A}\mathbf{m}_X, \mathbf{A}\mathbf{K}_X\mathbf{A}^t)$.
The characteristic function of a Gaussian random vector is given by
$$\Phi_{\mathbf{X}}(j\boldsymbol{\nu}) = \exp\Big\{j\boldsymbol{\nu}^t \mathbf{m}_X - \frac{1}{2}\boldsymbol{\nu}^t \mathbf{K}_X \boldsymbol{\nu}\Big\}.$$
From this formula, the $N$th-order moment formula for jointly distributed Gaussian random variables is easily derived:
$$E[X_1 \cdots X_N] = \begin{cases} \displaystyle\sum_{\text{all } \pi_N} E\big[X_{\pi_N(1)} X_{\pi_N(2)}\big] \cdots E\big[X_{\pi_N(N-1)} X_{\pi_N(N)}\big], & N \text{ even} \\[2ex] \displaystyle\sum_{\text{all } \pi_N} E\big[X_{\pi_N(1)}\big]\, E\big[X_{\pi_N(2)} X_{\pi_N(3)}\big] \cdots E\big[X_{\pi_N(N-1)} X_{\pi_N(N)}\big], & N \text{ odd} \end{cases}$$
where $\pi_N$ denotes a permutation of the first $N$ integers and $\pi_N(i)$ the $i$th element of the permutation, and the sums range over the distinct groupings so produced. For example, $E[X_1 X_2 X_3 X_4] = E[X_1X_2]E[X_3X_4] + E[X_1X_3]E[X_2X_4] + E[X_1X_4]E[X_2X_3]$.
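As a quick sanity check (our addition), the simplest instance of this moment formula takes $X_1 = X_2 = X_3 = X_4 = Z$ with $Z$ zero-mean Gaussian, reducing the fourth-moment identity to $E[Z^4] = 3\,(E[Z^2])^2$; the sketch below verifies it by simulation.

```python
import random

random.seed(6)

# For a single zero-mean Gaussian Z (X1 = X2 = X3 = X4 = Z), the moment
# formula's three pairings coincide, giving E[Z^4] = 3 (E[Z^2])^2.
N = 500_000
m2 = 0.0
m4 = 0.0
for _ in range(N):
    z = random.gauss(0.0, 1.0)
    m2 += z * z
    m4 += z ** 4
m2 /= N
m4 /= N
print(m4, 3.0 * m2 * m2)  # both near 3 for unit variance
```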
1.4.2 The Central Limit Theorem
Let $\{X_l\}$ denote a sequence of independent, identically distributed random variables. Assuming they have zero means and finite variances (equaling $\sigma^2$), the Central Limit Theorem states that the sum $\sum_{l=1}^{L} X_l / \sqrt{L}$ converges in distribution to a Gaussian random variable:
$$\frac{1}{\sqrt{L}} \sum_{l=1}^{L} X_l \;\longrightarrow\; \mathcal{N}(0, \sigma^2).$$
Because of its generality, this theorem is often used to simplify calculations involving finite sums of non-Gaussian random variables. However, attention is seldom paid to the convergence rate of the Central Limit Theorem. Kolmogorov, the famous twentieth-century mathematician, is reputed to have said "The Central Limit Theorem is a dangerous tool in the hands of amateurs." Let's see what he meant.
Taking $\sigma^2 = 1$, the key result is that the magnitude of the difference between $P(x)$, defined to be the probability that the sum given above exceeds $x$, and $Q(x)$, the probability that a unit-variance Gaussian random variable exceeds $x$, is bounded by a quantity inversely related to the square root of $L$ [7: Theorem 24]:
$$\big|P(x) - Q(x)\big| \le \frac{c\,\beta_X}{\sqrt{L}}, \qquad \beta_X = \frac{E\big[|X_l|^3\big]}{\sigma^3}.$$
The constant of proportionality $c$ is a number known to be about 0.8 [11: p. 6]. The ratio of the absolute third moment of $X_l$ to the cube of its standard deviation, known as the skew and denoted by $\beta_X$, depends only on the distribution of $X_l$ and is independent of scale. This bound on the absolute error has been shown to be tight [7: pp. 79ff]. Using our lower bound for $Q(\cdot)$ (Eq. 1.1), we find that the relative error in the Central Limit Theorem approximation to the distribution of finite sums is bounded for $x > 0$ as
$$\frac{\big|P(x) - Q(x)\big|}{Q(x)} \le \frac{c\,\beta_X}{\sqrt{L}}\, \sqrt{2\pi}\; e^{x^2/2}\, \frac{1+x^2}{x}.$$
(The $N$th-order moment formula follows from evaluating the derivatives of the characteristic function: $E[X_1 \cdots X_N] = \frac{1}{j^N}\, \frac{\partial^N \Phi_{\mathbf{X}}(j\boldsymbol{\nu})}{\partial \nu_1 \cdots \partial \nu_N}\Big|_{\boldsymbol{\nu}=\mathbf{0}}$.)
Figure 1.4: The quantity that governs the limits of validity for numerically applying the Central Limit Theorem to finite amounts of data is shown over a portion of its range. To judge these limits, we must compute the quantity $L\varepsilon^2/(2\pi c^2 \beta_X^2)$, where $\varepsilon$ denotes the desired percentage error in the Central Limit Theorem approximation and $L$ the number of observations. Selecting this value on the vertical axis and determining the value of $x$ yielding it, we find the normalized ($x = 1$ implies unit variance) upper limit on an $L$-term sum to which the Central Limit Theorem is guaranteed to apply. Note how rapidly the curve increases, suggesting that large amounts of data are needed for accurate approximation.
Suppose we require that the relative error not exceed some specified value $\varepsilon$. The normalized (by the standard deviation) boundary $x$ at which the approximation is evaluated must not violate
$$\frac{L\,\varepsilon^2}{2\pi c^2 \beta_X^2} \;\ge\; e^{x^2}\, \Big(\frac{1+x^2}{x}\Big)^2.$$
As shown in Fig. 1.4, the right side of this equation is a monotonically increasing function.
Example
If $\varepsilon = 0.1$ and taking $c\beta_X$ arbitrarily to be unity (a reasonable value), the left side of the preceding equation becomes $1.6\times 10^{-3}\, L$. Examining Fig. 1.4, we find that for $L = 10{,}000$, $x$ must not exceed 1.17. Because we have normalized to unit variance, this example suggests that the Gaussian approximates the distribution of a ten-thousand-term sum only over a range corresponding to a 76% area about the mean. Consequently, the Central Limit Theorem, as a finite-sample distributional approximation, is only guaranteed to hold near the mode of the Gaussian, with huge numbers of observations needed to specify the tail behavior. Realizing this fact will keep us from being ignorant amateurs.
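A small simulation (our addition, with arbitrarily chosen uniform summands) illustrates the flip side of this warning: near the mode, the Gaussian tail approximation is already good for modest $L$, even though far in the tails it is not guaranteed.

```python
import math
import random

random.seed(7)

# Normalized sum of L centered uniforms (variance 1/12 each) compared
# with the Gaussian tail Q(1), i.e., accuracy near the mode.
L = 50
trials = 20_000
scale = math.sqrt(L / 12.0)  # standard deviation of the raw sum

exceed = 0
for _ in range(trials):
    s = sum(random.random() - 0.5 for _ in range(L)) / scale
    exceed += s > 1.0
empirical = exceed / trials
Q1 = 0.5 * math.erfc(1.0 / math.sqrt(2.0))  # Q(1), about 0.1587
print(empirical, Q1)
```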
1.4.3 The Exponential Random Variable
$$p_X(x) = \lambda e^{-\lambda x}\, u(x)$$
The expected value of the exponential random variable is $1/\lambda$ and the variance is $1/\lambda^2$. This makes the exponential random variable's coefficient of variation equal to one.
1.4.4 The Bernoulli Random Variable
When Bernoulli random variables are statistically dependent, the correlation among random-variable pairs is no longer sufficient to describe the joint probability function. A more detailed statistical structure than that imposed by pairwise correlation occurs with most non-Gaussian random variables. The only cases in which correlation determines the dependence structure occur when we can write the joint distribution as
$$p_{\mathbf{X}}(\mathbf{x}) = f\big((\mathbf{x}-\mathbf{m})^t \mathbf{K}^{-1} (\mathbf{x}-\mathbf{m})\big),$$
where $f(\cdot)$ is some function of a scalar that can yield a joint density function. One such example is $f(x) = 1/(1+x)$. More generally, all joint density functions can be expanded in terms of a set of orthogonal functions $\{\phi_i(\cdot)\}$, with $\phi_0(x) \equiv 1$, as
$$p_{\mathbf{X}}(\mathbf{x}) = \prod_{n=1}^{N} p_{X_n}(x_n) \Bigg[1 + \sum_{k=2}^{N} \sum_{j} \sum_{\{i_1,\dots,i_N\} \in \mathcal{I}_j^{N,k}} a_{i_1 \cdots i_N} \prod_{n=1}^{N} \phi_{i_n}(x_n)\Bigg].$$
This expression reflects just how complicated dependence can be. First of all, the set $\mathcal{I}_j^{N,k}$ denotes the integer indices $i_1, \dots, i_N$ that constitute the $j$th subset of arrangements of the integers $1, \dots, N$ of order $k$ (that is, with $k$ of the indices non-zero). Thus, this representation of joint distributions shows that pairs, triples, etc. of random variables can be individually dependent.
1.4.5 Stable Random Variables
Stable random variables play an interesting niche role in probability theory. $X$ is a stable random variable if the weighted sum of two statistically independent instances has the same (within a scaling and shift) probability distribution. For example, Gaussian random variables are stable. What is interesting about non-Gaussian stable random variables is that they disobey the Central Limit Theorem. For example, the sum of two Cauchy random variables is also Cauchy, which has a probability density function of the form
$$p_X(x) = \frac{1}{\pi}\, \frac{1}{1 + x^2}.$$
Clearly, the sum of any number of Cauchy random variables will never converge to the Gaussian. The Central Limit Theorem requirement violated by stable random variables, save for the Gaussian, is that they have infinite variance. All stable random variables have a characteristic function of the form
$$\Phi_X(j\nu) = e^{-a|\nu|^{\alpha}}, \quad 0 < \alpha \le 2,\ a \text{ a constant}.$$
The Gaussian case occurs when $\alpha = 2$.
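The failure of Cauchy sums to obey the Central Limit Theorem shows up vividly in simulation: the sample mean of $N$ iid Cauchy variables is itself Cauchy with the same scale, so its spread (interquartile range 2, with quartiles at $\pm 1$) never shrinks as $N$ grows. The sketch below (our addition, using the inverse-transform $\tan$ generator) estimates that interquartile range for two values of $N$.

```python
import math
import random

random.seed(8)

# The sample mean of N iid Cauchy variables is Cauchy with the same
# scale: its interquartile range stays near 2 no matter how large N is,
# unlike a finite-variance sample mean.
def cauchy():
    return math.tan(math.pi * (random.random() - 0.5))

def iqr_of_sample_means(N, experiments=2000):
    means = sorted(sum(cauchy() for _ in range(N)) / N
                   for _ in range(experiments))
    return means[3 * experiments // 4] - means[experiments // 4]

i1 = iqr_of_sample_means(1)
i50 = iqr_of_sample_means(50)
print(i1, i50)  # both near 2; averaging 50 draws bought nothing
```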
Problems
1.1 Space Exploration and MTV
Joe is an astronaut for project Pluto. The mission's success or failure depends only on the behavior of three major systems. Joe feels that the following assumptions are valid and apply to the performance of the entire mission:
The mission is a failure only if two or more major systems fail.
System I, the Gronk system, fails with probability 0.1.
System II, the Frab system, fails with probability 0.5 if at least one other system fails. If no other
system fails, the probability the Frab system fails is 0.1.
System III, the beer cooler (obviously, the most important), fails with probability 0.5 if the Gronk
system fails. Otherwise the beer cooler cannot fail.
(a) What is the probability that the mission succeeds but that the beer cooler fails?
(b) What is the probability that all three systems fail?
(c) Given that more than one system failed, determine the probability that:
(i) The Gronk did not fail.
(ii) The beer cooler failed.
(iii) Both the Gronk and the Frab failed.
(d) About the time Joe was due back on Earth, you overhear a radio broadcast about Joe while watching MTV. You are not positive what the radio announcer said, but you decide that it is twice as likely that you heard "mission a success" as opposed to "mission a failure." What is the probability that the Gronk failed?
[Figure: a communication network; terminal I is connected to terminal IV through intermediate terminals II and III by the links a, b, c, and d.]
However, all the links may not be available. Let $p$ denote the probability that any link is available and assume that the availability of a link is statistically independent of any other link's state. Two terminals can communicate only if they are connected by at least one chain of links.
(a) Let $A = \{$I and IV can communicate$\}$. Calculate $\Pr[A]$.
(b) Let $B = \{$II and III can communicate$\}$. Calculate $\Pr[B]$.
(c) Calculate $\Pr[AB]$. Are the events $A$ and $B$ statistically independent?
(d) Prove that $\Pr[A]$ would be increased if link $c$ were connected between I and III as opposed to II and III.
1.4 Communication Channels
A noisy discrete communication channel is available. Once each microsecond, one letter from the three-letter alphabet $\{a, b, c\}$ is transmitted and one letter from the three-letter alphabet $\{A, B, C\}$ is received. The conditional probability of each received letter given the transmitted letter is provided by the following transition diagram.
[Transition probabilities: from $a$: 0.6 to $A$, 0.3 to $B$, 0.1 to $C$; from $b$: 0.1 to $A$, 0.5 to $B$, 0.4 to $C$; from $c$: 0.1 to $A$, 0.1 to $B$, 0.8 to $C$.]
The a priori probability of each letter being transmitted is $\Pr[a] = 0.3$, $\Pr[b] = 0.5$, $\Pr[c] = 0.2$.
(a) What decision rule (an algorithm for relating a received letter to a transmitted letter) has the largest probability of being correct?
(b) What is the probability of error for this decision rule?
(c) What is the minimum probability of error that could be obtained without the use of the channel? In other words, the receiver must decide what was transmitted without receiving anything!
1.5 Probability Density Functions?
Which of the following are probability density functions? Indicate your reasoning. For those that are valid, what is the mean and variance of the random variable?
(a) $p_X(x) = e^{-x}$
(b) $p_X(x) = \dfrac{\sin 2\pi x}{2\pi x}$
(c) $p_X(x) = \begin{cases} 1 - |x|, & -1 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$
(d) $p_X(x) = \begin{cases} 1, & -1 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$
(e) $p_X(x) = e^{-(x-1)},\ x \ge 1$
(f) $p_X(x) = \frac{1}{4}\delta(x+1) + \frac{1}{2}\delta(x) + \frac{1}{4}\delta(x-1)$
1.6 Generating Random Variables
Pseudo-random number generators produce uniformly distributed values; frequently, we want to change the probability distribution to one required by the problem at hand. One technique is known as the distribution method.
(a) If $P_X(x)$ is the desired distribution, show that $U = P_X(X)$ (applying the distribution function to a random variable having that distribution) is uniformly distributed over $[0, 1)$. This result means that $X = P_X^{-1}(U)$ has the desired distribution. Consequently, to generate a random variable having any distribution we want, we only need the inverse function of the distribution function.
(b) Why is the Gaussian not in the class of "nice" probability distribution functions?
(c) How would you generate random variables having the hyperbolic secant $p_X(x) = \frac{1}{2}\operatorname{sech}(\pi x/2)$, the Laplacian, and the Cauchy densities?
(d) Write MATLAB functions that generate these random variables. Again use hist to plot the probability function. What do you notice about these random variables?
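As a sketch of the distribution method of part (a) — in Python rather than the MATLAB the problem asks for, with function names of our choosing — the generators below invert the exponential distribution and the hyperbolic-secant distribution $P_X(x) = \frac{2}{\pi}\arctan\!\big(e^{\pi x/2}\big)$, then check the resulting sample medians.

```python
import math
import random

random.seed(9)

# Distribution method: X = P_X^{-1}(U), with U uniform.
def exponential(lam=1.0):
    # P_X(x) = 1 - e^{-lam x}  =>  P_X^{-1}(u) = -ln(1 - u)/lam
    return -math.log(1.0 - random.random()) / lam

def hyperbolic_secant():
    # For p_X(x) = (1/2) sech(pi x/2), P_X(x) = (2/pi) arctan(e^{pi x/2}),
    # so P_X^{-1}(u) = (2/pi) ln(tan(pi u/2)).
    u = 1.0 - random.random()  # in (0, 1]; avoids tan(0) = 0 in the log
    return (2.0 / math.pi) * math.log(math.tan(math.pi * u / 2.0))

N = 100_000
exp_median = sorted(exponential() for _ in range(N))[N // 2]
sech_median = sorted(hyperbolic_secant() for _ in range(N))[N // 2]
print(exp_median, math.log(2.0))  # exponential median is ln 2
print(sech_median)                # hyperbolic-secant median is 0
```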
1.7 Cauchy Random Variables
The random variables $X_1$ and $X_2$ have the joint pdf
$$p_{X_1,X_2}(x_1, x_2) = \frac{1}{\pi^2}\, \frac{b_1 b_2}{(b_1^2 + x_1^2)(b_2^2 + x_2^2)}, \quad b_1, b_2 > 0.$$
(a) Show that $X_1$ and $X_2$ are statistically independent random variables, each with a Cauchy density function.
(b) Show that $\Phi_{X_1}(j\nu) = e^{-b_1 |\nu|}$.
(c) Define $Y = X_1 + X_2$. Determine $p_Y(y)$.
(d) Let $\{Z_i\}$ be a set of $N$ statistically independent Cauchy random variables with $b_i = b$, $i = 1, \dots, N$. Define
$$Z = \frac{1}{N} \sum_{i=1}^{N} Z_i.$$
Determine $p_Z(z)$. Is $Z$ (the sample mean) a good estimate of the expected value $E[Z_i]$?
1.8 Correlation Coefficients
The random variables $X$, $Y$ have the joint probability density $p_{X,Y}(x,y)$. The correlation coefficient $\rho_{X,Y}$ is defined to be
$$\rho_{X,Y} \equiv \frac{E\big[(X - m_X)(Y - m_Y)\big]}{\sigma_X \sigma_Y}.$$
(a) Using the Cauchy-Schwarz inequality, show that correlation coefficients always have a magnitude less than or equal to one.
(b) We would like to find an affine estimate of one random variable's value from the other. So, if we wanted to estimate $X$ from $Y$, our estimate $\hat{X}$ has the form $\hat{X} = aY + b$, where $a$, $b$ are constants to be found. Our criterion is the mean-squared estimation error: $\varepsilon^2 = E[(\hat{X} - X)^2]$. First of all, let $a = 0$: we want to estimate $X$ without using $Y$ at all. Find the optimal value of $b$.
(c) Find the optimal values for both constants. Express your result using the correlation coefficient.
(d) What is the expected value of your estimate?
(e) What is the smallest possible mean-squared error? What influence does the correlation coefficient have on the estimate's accuracy?
1.9 Probabilistic Football
A football team, which shall remain nameless, likes to mix passing and running plays. The yardage gained on any running play is a random variable uniformly distributed between zero and ten yards, regardless of the yardage gained on any other play. The team's quarterback, Bob Linguini, has a strange quirk: the yardage gained on a passing play depends on the previous play. If the previous play was a running play, the yardage gained passing is a random variable uniformly distributed between zero and twenty yards. If the previous play was a passing play that gained $Y$ yards, the yardage gained is a random variable uniformly distributed between $-Y$ and $20 - Y$ yards. On any play, the team is equally likely to run or pass.
(a) What is the probability density function of the random variable defined to be the total yardage gained on a running play followed by a passing play?
(b) A running play is executed, followed by two passing plays. Find the probability density function of the yardage gained on the second passing play.
(c) What is the probability that a total of at least ten yards is gained in the two passing plays mentioned in part (b)?
1.10 Order Statistics
Let $X_1, \dots, X_N$ be independent, identically distributed random variables. The density of each random variable is $p_X(x)$. The order statistics $X_{(1)}, \dots, X_{(N)}$ of this set of random variables is the set that results when the original one is ordered (sorted):
$$X_{(1)} \le X_{(2)} \le \cdots \le X_{(N)}.$$
(a) What is the joint density of the original set of random variables?
(b) What is the density of $X_{(N)}$, the largest of the set?
(c) Show that the joint density of the ordered random variables is
$$p_{X_{(1)},\dots,X_{(N)}}(x_1, \dots, x_N) = N!\, \prod_{n=1}^{N} p_X(x_n), \quad x_1 \le x_2 \le \cdots \le x_N.$$
(d) Consider a Poisson process having constant intensity $\lambda_0$. $N$ events are observed to occur in the interval $[0, T)$. Show that the joint density of the times of occurrence $W_1, \dots, W_N$ is the same as the order statistics of a set of random variables. Find the common density of these random variables.
1.11 Estimating Characteristic Functions
Suppose you have a sequence of statistically independent, identically distributed random variables
$X_1, \ldots, X_N$. From these, we want to estimate the characteristic function of the underlying random variable. One way to estimate it is to compute
$$\hat{\Phi}_X(j\nu) = \frac{1}{N} \sum_{n=1}^{N} e^{j\nu X_n}$$
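A quick numerical sketch of this estimator (a hypothetical check, assuming standard Gaussian samples so the exact characteristic function $e^{-\nu^2/2}$ is available for comparison; the seed and sample count are arbitrary):

```python
import cmath
import random

def estimate_cf(samples, nu):
    """Empirical characteristic function: (1/N) * sum_n exp(j * nu * X_n)."""
    return sum(cmath.exp(1j * nu * x) for x in samples) / len(samples)

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# For standard Gaussian samples the exact value at nu is exp(-nu^2 / 2),
# so the estimate should land within a few multiples of 1/sqrt(N) of it.
est = estimate_cf(samples, 1.0)
print(abs(est - cmath.exp(-0.5)))
```

The estimate is unbiased and its error shrinks as $1/\sqrt{N}$; at $\nu = 0$ it equals one exactly, as any characteristic function must.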
Stochastic Processes
The expected value or mean of a process is the expected value of the amplitude at each t.
$$E[X(t)] \triangleq m_X(t) = \int_{-\infty}^{\infty} x\, p_{X(t)}(x)\, dx$$
For the most part, we take the mean to be zero. The correlation function is the first-order joint moment
between the process's amplitudes at two times.
$$R_X(t_1, t_2) = \int\!\!\int x_1 x_2\, p_{X(t_1), X(t_2)}(x_1, x_2)\, dx_1\, dx_2$$
Since the joint distribution for stationary processes depends only on the time difference, correlation functions of stationary processes depend only on $t_1 - t_2$. In this case, correlation functions are really functions of a single variable (the time difference) and are usually written as $R_X(\tau)$, where $\tau = t_1 - t_2$. Related to the correlation function is the covariance function $K_X(\tau)$, which equals the correlation function minus the square
Figure 2.1: A stochastic process is defined much like a random variable (Figure 1.1) but with a time function assigned to each element of event space. The collection of time functions is known as the ensemble.
20 Stochastic Processes Chap. 2
of the mean.
$$K_X(\tau) = R_X(\tau) - m_X^2$$
The variance of the process equals the covariance function evaluated at the origin. The power spectrum of a stationary process is the Fourier Transform of the correlation function.
$$\mathcal{S}_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\, e^{-j2\pi f\tau}\, d\tau$$
A particularly important example of a random process is white noise. The process $X(t)$ is said to be white if it has zero mean and a correlation function proportional to an impulse.
$$E[X(t)] = 0 \qquad R_X(\tau) = \frac{N_0}{2}\,\delta(\tau)$$
The power spectrum of white noise is constant for all frequencies, equaling $N_0/2$, which is known as the spectral height.
When a stationary process $X(t)$ is passed through a stable linear, time-invariant filter, the resulting output $Y(t)$ is also a stationary process having power density spectrum
$$\mathcal{S}_Y(f) = |H(f)|^2\, \mathcal{S}_X(f)$$
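This input-output spectral relation can be checked numerically without any simulation: for a hypothetical FIR filter $h$, the output correlation of filtered white noise is $R_Y(\tau) = (N_0/2)\sum_n h_n h_{n+\tau}$, whose DFT must match $|H(f)|^2\, N_0/2$ (a sketch; the three-tap filter is an arbitrary choice):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])   # arbitrary FIR unit-sample response
N0_over_2 = 1.0                  # spectral height of the white input
L = 64                           # DFT length (>= 2*len(h) - 1)
M = len(h)

# S_Y(f) = |H(f)|^2 * N0/2, evaluated on the DFT frequency grid
H = np.fft.fft(h, L)
S_Y = np.abs(H) ** 2 * N0_over_2

# Output correlation R_Y[tau] = (N0/2) * sum_n h[n] h[n+tau], lags -(M-1)..M-1
R = N0_over_2 * np.correlate(h, h, mode="full")

# Place the lags circularly (negative lags wrap to the end) and transform
r_circ = np.zeros(L)
r_circ[:M] = R[M - 1:]           # lags 0 .. M-1
r_circ[-(M - 1):] = R[:M - 1]    # lags -(M-1) .. -1
S_from_R = np.fft.fft(r_circ).real

print(np.max(np.abs(S_Y - S_from_R)))  # agreement to machine precision
```

The correlation route and the $|H(f)|^2$ route agree exactly because the zero-padded circular autocorrelation coincides with the linear one once $L \ge 2M - 1$.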
We note especially that for distinct samples of a random process to be uncorrelated, the correlation function $R_X(kT)$ must equal zero for all non-zero $k$. This requirement places severe restrictions on the correlation function (hence the power spectrum) of the original process. One correlation function satisfying this property is derived from the random process which has a bandlimited, constant-valued power spectrum over precisely the frequency region needed to satisfy the sampling criterion. No other power spectrum satisfying the sampling criterion has this property. Hence, sampling does not normally yield uncorrelated amplitudes, meaning that discrete-time white noise is a rarity. White noise has a correlation function given by $R_X(k) = \sigma^2\,\delta(k)$, where $\delta(k)$ is the unit sample. The power spectrum of discrete-time white noise is a constant: $\mathcal{S}_X(f) = \sigma^2$.
The curious reader can track down why the spectral height of white noise has the fraction one-half in it. This definition is the convention.
Sec. 2.2 Structural Aspects of Waveform Processes 21
Many results, arising from the extensive literature on Markov chains, exist for this model and not for the
more general one given previously. Some special-case results are known for the situation in which the output
depends on several past input values.
In applications, the input frequently appears additively, yielding the additive-input Markovian model. Here, the system model $G(\cdot\,;\cdot)$ equals the sum of a state-dependent part $G_s(\mathbf{X}_{l-1})$ and an input. When the system is linear, for example, this relation expresses the well-known autoregressive model:
$$G(\mathbf{X}_{l-1}; W_l) = \sum_{k=1}^{p} a_k X_{l-k} + W_l$$
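As a concrete sketch of this additive-input model, the autoregressive recursion is easy to run directly (the coefficients below are arbitrary, chosen inside the stability region):

```python
import random

def ar_step(state, a, w):
    """One update of X_l = sum_k a_k * X_{l-k} + W_l; state holds the p most
    recent outputs, newest first."""
    x_new = sum(ak * xk for ak, xk in zip(a, state)) + w
    return [x_new] + state[:-1]

random.seed(1)
a = [0.5, -0.25]                # arbitrary stable AR(2) coefficients
state = [0.0, 0.0]              # initial condition: zero state
xs = []
for _ in range(5000):
    state = ar_step(state, a, random.gauss(0.0, 1.0))
    xs.append(state[0])

print(sum(xs) / len(xs))        # sample mean, near zero for a zero-mean input
```

The state vector here plays exactly the role of $\mathbf{X}_{l-1}$ in the text: the whole memory of the system lives in the $p$ most recent outputs.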
For Markovian systems (additive-input or not), a type of dynamic system model can be found for the $p$th-order multivariate density of the output process $X_l$. We begin by noting that the conditional density of the output at time $l$ given the values of the previous $p$ states is easily expressed in terms of the input's amplitude distribution. For a given value $\mathbf{X}$ of $\mathbf{X}_{l-1}$, an output equaling $X_0$ could have arisen from one of several input values, depending on the nonlinear nature of $G(\mathbf{X};\cdot)$. Notation demands that we represent the $i$th possibility as an index on the system's transformation rather than on the input: $x_0 = G_i(\mathbf{X}; w)$, $i = 1, \ldots$. The density associated with the output conditioned on state thus equals
$$p_{X_l \mid \mathbf{X}_{l-1}}(X_0 \mid \mathbf{X}) = \sum_i \left|\frac{\partial G_i^{-1}(\mathbf{X}; X_0)}{\partial X_0}\right| p_W\!\big(G_i^{-1}(\mathbf{X}; X_0)\big)$$
This equation might suggest that no memoryless transformations can be applied to the input. Such transformations would modify the input's amplitude distribution, but not its whiteness. To keep the notation under control, we use $W_l$ to represent the transformed input. Do note, however, that subsequent results place requirements on this distribution.
$G_i^{-1}(\mathbf{X};\cdot)$ denotes the inverse function of $G_i(\mathbf{X};\cdot)$. Multiplying this conditional density by the $p$th-order density of the state gives the $(p+1)$th-order density, which, when integrated over the most distant state, yields the output's multivariate density. The resulting integral equation describes the structural evolution of the system's state.
$$p_{\mathbf{X}_l}(\mathbf{X}_0) = \int p_{X_l \mid \mathbf{X}_{l-1}}(X_0 \mid \mathbf{X}_1)\, p_{\mathbf{X}_{l-1}}(\mathbf{X}_1)\, dX_p \qquad (2.3)$$
Here, $\mathbf{X}_k = \mathrm{col}[X_k, \ldots, X_{k-p+1}]$. When the process is stationary, the $p$th-order joint densities appearing on each side of this equation are equal. To determine when such a stationary output exists, we seek conditions under which this equation has a unique solution. Such conditions revolve around the properties of $p_{X_l \mid \mathbf{X}_{l-1}}(X_0 \mid \mathbf{X}_1)$, which depends on both the system's characteristics and the input's amplitude distribution.
For nonlinear systems, stability tests are also equivalent to stationarity tests. Stability of nonlinear difference equations would lead us too far afield; we concentrate on stationarity results here. The Markov system model of Eq. (2.1) can be tested for stability using Lyapunov functions.
Theorem  A stationary distribution exists for a Markovian system if the output is weakly continuous (the conditional expected value $E[g(\mathbf{X}_l) \mid \mathbf{X}_{l-1} = \mathbf{X}]$ is continuous in $\mathbf{X}$ for all bounded, continuous functions $g(\cdot)$) and if there exists a continuous, non-negative function $L(\cdot)$, a Lyapunov function, that satisfies $L(\mathbf{X}) \to \infty$ as $\|\mathbf{X}\| \to \infty$ and, for some bounded positive constant $K$,
$$E[L(\mathbf{X}_{l+1}) - L(\mathbf{X}_l) \mid \mathbf{X}_l = \mathbf{X}] \le K, \qquad \mathbf{X} \in \mathcal{C}$$
$$E[L(\mathbf{X}_{l+1}) - L(\mathbf{X}_l) \mid \mathbf{X}_l = \mathbf{X}] \le 0, \qquad \mathbf{X} \notin \mathcal{C}$$
where $\mathcal{C}$ denotes a compact set in $\mathbb{R}^p$. Here, $\mathbf{X}_l$ denotes the state of the Markov system at time $l$: $\mathbf{X}_l = \mathrm{col}[X_l, \ldots, X_{l-p+1}]$.
The quantity $\Delta L(\mathbf{X})$ denotes the expected change $E[L(\mathbf{X}_{l+1}) - L(\mathbf{X}_l) \mid \mathbf{X}_l = \mathbf{X}]$ in the system's energy as time goes on. The continuity condition is satisfied when the system $G(\cdot\,;\cdot)$ is continuous in each state variable and in the input. In some cases, a smooth amplitude distribution for the input $W_l$ suffices.
Example
Consider a first-order linear Markovian system expressed by
$$X_l = aX_{l-1} + W_l$$
where $W_l$ is a zero-mean, white input. Let the Lyapunov function be $L(X) = X^2$. We calculate the change $\Delta L(X)$ in system energy from sample to sample to be
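Filling in the calculation (a sketch, writing $\sigma_W^2 = E[W_l^2]$ for the input variance):

```latex
\Delta L(X) = E\big[L(X_{l+1}) - L(X_l) \mid X_l = X\big]
            = E\big[(aX + W_{l+1})^2\big] - X^2
            = (a^2 - 1)\,X^2 + \sigma_W^2 .
```

For $|a| < 1$ this change is bounded above by $\sigma_W^2$ and is negative whenever $X^2 > \sigma_W^2/(1 - a^2)$, so both conditions of the theorem hold with the compact set $\{X : X^2 \le \sigma_W^2/(1 - a^2)\}$.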
As this example shows, the theorem requires the existence of only one Lyapunov function to demonstrate
stationarity. To show that stationarity does not obtain can be much more difficult: We would need to show that
no Lyapunov function can exist. When we do demonstrate stationarity, some restriction on system parameters
must usually be enforced. For a given Lyapunov function choice, such restrictions are sufficient, but may not
be necessary.
Example
Consider the bilinear Markov system described by
$$X_l = (a + bW_l)X_{l-1} + W_l$$
where the input has zero mean and finite variance $\sigma_W^2$. When we choose $L(X) = X^2$ as before, the parameters must satisfy $a^2 + b^2\sigma_W^2 < 1$. If we use $L(X) = |X|$ instead, the condition $E[|a + bW_l|] < 1$ results. What parameter ranges satisfy both conditions depends on the input's amplitude distribution. For example, when the input has a Laplacian distribution, the latter condition becomes
$$|a| + \frac{b\,\sigma_W}{\sqrt{2}}\, \exp\!\left\{-\frac{\sqrt{2}\,|a|}{b\,\sigma_W}\right\} < 1$$
The set over which the energy change equals zero defines the output's stationarity. For instance, in the first example, $\Delta L(X) = 0$ occurs when $X^2 = E[W_l^2]/(1 - a^2)$, which precisely equals the output variance under stationary conditions. Because of this observation, we can distill an intuitive feel for what the theorem means. Only over a restricted range of state does the system expand; over a much larger set the system contracts, tending toward no energy change as stationary behavior dominates. This lack of expansion is equivalent to stability. When the distribution of the system's initial condition equals the stationary one, the output's distribution is unchanging.
For a stationary process to be produced by passing white noise through some system, the production must have started in the distant, unremembered past or at some finite time with a particular choice of initial condition distribution. Here a quandary arises: How does one distinguish a specific initial condition as being bad or good? Assuming the stationary distribution is never zero, any initial condition could have arisen from a given distribution, the stationary one in particular, thereby resulting in stationary behavior. In other words, transients never occur! While this argument may hold theoretically, the authors prefer starting systems at the Big Bang and concentrate on observing and processing signals long afterwards.
Assuming the theorem's conditions are satisfied, we obtain a fundamental relation that a stationary process's joint amplitude distribution must satisfy when we generate it by passing white noise through a single-input system.
$$p_{\mathbf{X}_l}(X_0, \ldots, X_{p-1}) = \int p_{X_l \mid \mathbf{X}_{l-1}}(X_0 \mid X_1, \ldots, X_p)\, p_{\mathbf{X}_l}(X_1, \ldots, X_p)\, dX_p \qquad (2.4)$$
From one viewpoint, the $p$th-order amplitude distribution is an eigenfunction having eigenvalue one of the kernel $p_{X_l \mid \mathbf{X}_{l-1}}(X_0 \mid \mathbf{X}_1)$. Finding this eigenfunction, either analytically or numerically, seems feasible only for low-order (small $p$) systems [31],[39: §4.2.4].
Work is simpler in the additive-input case, in which this integral equation becomes
$$p_{\mathbf{X}_l}(X_0, \ldots, X_{p-1}) = \int p_W\big(X_0 - G_s(X_1, \ldots, X_p)\big)\, p_{\mathbf{X}_l}(X_1, \ldots, X_p)\, dX_p$$
Here, the kernel's dependence on both the input's amplitude distribution and the system's characteristics becomes explicit. From this relationship, we can easily see that if the input has an even probability density, so
too will the output if and only if the system's input-output relation is odd: $G_s(-\mathbf{X}) = -G_s(\mathbf{X})$. We can also find more explicit relationships using this equation. Take the first-order ($p = 1$) case for example. Evaluating the Fourier transform yields
$$\Phi_X(\nu) = \Phi_W(\nu) \int e^{j\nu G_s(X_1)}\, p_X(X_1)\, dX_1$$
We see that what remains is a kind of Fourier transform in which the system's strictly nonlinear part of $G_s(X)$ plays a central role: The more complicated this term is, the more difficult this equation is to use.
When the system is linear, $X_l = aX_{l-1} + W_l$, we obtain the simplest possible result.
$$\Phi_W(\nu) = \frac{\Phi_X(\nu)}{\Phi_X(a\nu)}$$
Given the input distribution, only with difficulty can the output distribution be found from this formula. Curiously, if we have the output's amplitude distribution, we can use this result to calculate the input distribution. For example, assuming the output has a stable distribution, which is defined as having a characteristic function of the form $\exp\{-|\nu|^r\}$, $0 < r \le 2$, we find that the input must also possess a stable distribution of the same degree $r$. Note, however, that in general there is no guarantee that the ratio is a characteristic function.
We note that, because characteristic functions are positive-definite, they obey $|\Phi(\nu)| \le \Phi(0) = 1$. Because of the denominator, arbitrary substitution of valid characteristic functions into this ratio may well produce a quantity that exceeds one. In such cases, we have just proven that certain distributions cannot describe first-order Markov, linear processes.
Example
The most famous example of this phenomenon is due to Rosenblatt [34: p. 52]. Let's try to force the output of a first-order linear system to have a uniform amplitude distribution. The corresponding characteristic function has the functional form of $\sin(\nu)/\nu$. When we calculate the ratio $a\sin(\nu)/\sin(a\nu)$, we find that the zeros of this function occurring in the denominator cause difficulty: Unless they are cancelled by zeros in the numerator, the potential characteristic function will be infinite, a property characteristic functions do not possess. Cancellation only occurs when the parameter $a$ equals the reciprocal of an integer. Thus, we conclude that a process having a uniform amplitude distribution and a linear, first-order dependence structure can only occur when the correlation coefficient of successive values equals $\frac{1}{2}, \frac{1}{3}, \ldots$. Fig. 2.2 portrays an example of this process.
2.2.2 Time-Reversibility
A process's time-reversibility characteristics comprise a special form of dependence structure. A stationary process $X_l$ is time-reversible if the multivariate density of the amplitudes at the ordered times $l_1, \ldots, l_N$ equals that of the amplitudes at the times $-l_1, \ldots, -l_N$.
$$p_{X_{l_1}, \ldots, X_{l_N}}(X_1, \ldots, X_N) = p_{X_{-l_1}, \ldots, X_{-l_N}}(X_1, \ldots, X_N)$$
Assuming the stationarity equation (2.4) has a solution, the result can be examined to determine its time-reversibility. Examining this evolution equation does not reveal any general structure that the input or the
system must possess to produce a time-reversible output. In fact, cases exist wherein the inputs amplitude
distribution solely determines the outputs time-reversibility: Non-Gaussian, time-reversible processes cannot
be produced by causal linear filters excited by white noise [42]. Only in the Gaussian case is the output
time-reversible.
Figure 2.2: A portion of the sample function taken from a first-order linear Markov process is shown in the bottom panel. A histogram estimate of this process's amplitude distribution (10,000 samples) is shown in the top panel. Here, the process is generated according to $X_l = \frac{1}{2}X_{l-1} + W_l$, where $W_l$ is an IID sequence with each element equaling $\pm\frac{1}{2}$ with probability $\frac{1}{2}$.
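This construction is easy to reproduce (a sketch of the simulation behind the figure; the sample count and seed are arbitrary):

```python
import random

random.seed(2)
x = 0.0
samples = []
for _ in range(20_000):
    w = 0.5 if random.random() < 0.5 else -0.5   # W_l = +/- 1/2, equally likely
    x = 0.5 * x + w                              # X_l = (1/2) X_{l-1} + W_l
    samples.append(x)

# The stationary amplitude distribution is uniform on (-1, 1): X_l is the
# binary-expansion sum of (1/2)^k W_{l-k}. Check the four quarter-interval bins.
bins = [sum(1 for s in samples if lo <= s < lo + 0.5) / len(samples)
        for lo in (-1.0, -0.5, 0.0, 0.5)]
print(bins)   # each near 0.25
```

Because $|X_l| \le \frac{1}{2}|X_{l-1}| + \frac{1}{2}$, every sample stays strictly inside $(-1, 1)$, and the bin fractions flatten out as the sample count grows.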
where $W_l$ represents the white noise input. The output is time-reversible if and only if the input is Gaussian or if $a_i = 0$ and the coefficients $b_j$ obey the symmetry property $b_j = b_{q-j}$, $j = 0, \ldots, q/2$.
Only when the system's transfer function has linear phase can a non-Gaussian input produce a time-reversible, non-Gaussian linear process. In all other cases, the filter's output conveys its causality, making measurements of the filter's phase characteristics much easier. Note that a process's time-reversibility depends on both the generation system's characteristics and on the amplitude distribution of the white input.
Multivariate distributions that correspond to the joint distribution of non-Gaussian, time-reversible processes are easily found. For example, the so-called elliptically symmetric distributions fall into this class [25]. On the other hand, even conjuring examples of densities for time-irreversible processes can be quite difficult:
The bivariate amplitude densities of time-irreversible processes are not symmetric functions, $p_{X_{l_1}, X_{l_2}}(X_1, X_2) \ne p_{X_{l_1}, X_{l_2}}(X_2, X_1)$, but have equal marginals: $p_{X_{l_1}}(X) = p_{X_{l_2}}(X)$. In either case, if we can specify the output's multivariate distribution, a system generation model for it can be found from the conditional density of the stationarity equation's kernel $p_{X_l \mid \mathbf{X}_{l-1}}(X_0 \mid \mathbf{X})$.
The importance of time-irreversible stationary processes rests on their physical existence. Thermodynamic arguments demonstrate that only in very carefully controlled circumstances can time-reversible processes describe physical measurements. Thus, physically plausible models must produce time-irreversible processes. What models produce time-irreversible outputs can only be assessed by calculating the multivariate distribution; to emphasize what was mentioned previously, changing the input's amplitude distribution can change the output from a time-reversible to a time-irreversible one.
2.2.3 Statistical Dependence
The dependence structure of a non-Gaussian process both illuminates what system describes its generation and what form the multivariate distribution must take. In fact, modeling physical situations demands that a dependence structure be imposed. The authors would be remiss not to point out that if the process was obtained by periodically sampling a continuous-time one, the resulting measurement cannot be white. To show this fact, consider the correlation function $R_X(\tau) = E[X_l X_{l+\tau}]$ of the sequence. It equals the sampled values of the continuous-time process's correlation function: $R_X(\tau) = R_{\tilde{X}}(\tau T_s)$, where $\tilde{X}(t)$ denotes the continuous-time process and $T_s$ the sampling interval. For $X_l$ to be white, we at least need the correlation function to correspond to a unit sample. This condition means that the analog signal's correlation function must have zero-crossings uniformly separated by $T_s$. Assuming that we wish to obey the Sampling Theorem, this situation only occurs when the analog signal is ideally bandlimited to precisely the Nyquist frequency. Interestingly, oversampling only increases correlation; to achieve maximal decorrelation, we must in general undersample to try to effect a white sampling sequence.
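A small numerical illustration of these claims (a sketch; the bandwidth and sampling rates are arbitrary):

```python
import math

def R_bandlimited(tau, W=1.0):
    """Correlation function of a flat, ideally bandlimited (to W Hz) process:
    R(tau) = sin(2*pi*W*tau) / (2*pi*W*tau), normalized so R(0) = 1."""
    if tau == 0.0:
        return 1.0
    x = 2.0 * math.pi * W * tau
    return math.sin(x) / x

W = 1.0
Ts = 1.0 / (2.0 * W)            # sampling exactly at the Nyquist rate
# At the Nyquist rate, nonzero-lag samples hit the zero-crossings of R ...
r1 = R_bandlimited(1 * Ts, W)   # essentially zero
# ... while oversampling by two lands between them, leaving correlation behind
r_over = R_bandlimited(Ts / 2.0, W)   # = 2/pi, about 0.64
print(r1, r_over)
```

Only this ideally bandlimited spectrum puts the zero-crossings exactly on the sampling grid; any oversampled grid picks up the nonzero values in between.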
Broad categories for dependence have been defined based on analytic and theoretical considerations. Unfortunately, testing data against these can be virtually impossible, which makes assuming they apply somewhat tenuous.
Definition  Two (or more) amplitudes are independent if they are statistically independent: $X_{l_1}$ is independent of $X_{l_2}$ if $p_{X_{l_1} \mid X_{l_2}}(X_1 \mid X_2) = p_{X_{l_1}}(X_1)$.
Note this notion's symmetry: If $X_{l_1}$ is independent of $X_{l_2}$, then so is $X_{l_2}$ of $X_{l_1}$. This symmetry contrasts with the asymmetry of the next dependence category.
Definition  The amplitude $X_{l_1}$ is mean-square independent of $X_{l_2}$ if the conditional expected value of the first with respect to the second does not depend on the conditioning value.
$$E[X_{l_1} \mid X_{l_2}] = E[X_{l_1}]$$
Weakly uncorrelated (uncorrelated) amplitudes occur when we consider the expected values of the amplitudes directly.
$$E[X_{l_1} X_{l_2}] = E[X_{l_1}]\, E[X_{l_2}]$$
The notion of correlation is symmetric, meaning that we can say two amplitudes are uncorrelated without ambiguity.
These dependence categories form a clear progression, wherein each implies the ones that follow: independence implies mean-square independence, which in turn implies uncorrelatedness.
Independence is simultaneously the simplest, the most powerful, and the most difficult to verify empirically. Because the minimum mean-squared error predictor of a random variable $X$ given another, say $Y$, is the conditional expected value $E[X \mid Y = y]$, mean-square independence means that providing an amplitude value does not reduce the mean-squared error in predicting another. The notion of strongly uncorrelated is not used frequently; uncorrelated and correlated amplitudes correspond to dependence categories prevalent in the second-order random process theory.
Spanning these categories, succinct dependence structures have been defined that ease the transition to theoretical development and incorporating model-based notions.
Markov dependence. When the conditional distribution of a process's amplitude at time $l$ given the process's entire past $X_{l-1}, X_{l-2}, \ldots$ functionally depends only on the $p$ most recent values, the process is Markovian of order $p$.
$$p_{X_l \mid X_{l-1}, \ldots}(X_0 \mid X_1, \ldots) = p_{X_l \mid X_{l-1}, \ldots, X_{l-p}}(X_0 \mid X_1, \ldots, X_p)$$
From this definition, it easily follows that Markov dependence extends to any set of past adjacent $p$ values.
$$p_{X_l \mid X_{l-\lambda}, X_{l-\lambda-1}, \ldots}(X_0 \mid X_\lambda, X_{\lambda+1}, \ldots) = p_{X_l \mid X_{l-\lambda}, \ldots, X_{l-\lambda-p+1}}(X_0 \mid X_\lambda, \ldots, X_{\lambda+p-1}), \qquad \lambda > 0$$
Be that as it may, this dependence structure does not mean that the process depends only on a subset of adjacent $p$ values. For example, if $p = 1$, the conditional density $p_{X_l \mid X_{l-\lambda}}(X_0 \mid X)$ usually depends on $X$ for all $\lambda$, even if $\lambda > p$. Thus, a Markov process's memory (the time-span over which a value depends on a previous one) is usually infinite.
Systems that produce Markov dependence structures have the form expressed by Eq. (2.1). One important special case is the linear autoregressive process, described by the input-output relation $X_l = \sum_{k=1}^{p} a_k X_{l-k} + W_l$. Including more than one input amplitude value in the system's input-output relation usually means that Markovian dependence does not result.
The Markov structure for an additive-input, stationary process (Eq. 2.2) allows the explicit evaluation of the process's multivariate amplitude distribution for any order. Let $\mathbf{X}_\lambda^k$ denote $\mathrm{col}[X_\lambda, \ldots, X_{\lambda+k-1}]$, the point at which we evaluate the $k$th-order density. The index $\lambda$ indicates the lag relative to the time $l$ of the temporal origin of the associated amplitude vector. Assume the process has Markovian order $p$ and that we want the $k$th-order amplitude distribution. For the moment, $k > p$. The joint distribution of the amplitude vector equals
$$p^{(k)}_{\mathbf{X}_l}(\mathbf{X}_0^k) = p_{X_l \mid \mathbf{X}_{l-1}}(X_0 \mid \mathbf{X}_1^{p})\, p^{(k-1)}_{\mathbf{X}_{l-1}}(\mathbf{X}_1^{k-1}) = p_W\big(X_0 - G_s(\mathbf{X}_1^{p})\big)\, p^{(k-1)}_{\mathbf{X}_{l-1}}(\mathbf{X}_1^{k-1})$$
This reduction of the multivariate density into the product of a known conditional density and a lower-order multivariate density can be repeated until a $p$th-order multivariate density results. Assuming we can calculate this density, the final expression for the desired multivariate density is
$$p^{(k)}_{\mathbf{X}_l}(\mathbf{X}^k) = p^{(p)}_{\mathbf{X}_{l-k+p}}(\mathbf{X}_{k-p}^{p}) \prod_{i=1}^{k-p} p_W\big(X_{i-1} - G_s(\mathbf{X}_i^{p})\big)$$
When the order of the multivariate density is less than the Markovian order $p$, we can simply integrate the $p$th-order density over the unwanted components. Thus, regardless of the number and selection of process values for which we want the joint probability distribution, we can calculate the multivariate density if we know the $p$th-order joint distribution and the amplitude distribution of the white noise input. Note how this expression exemplifies the infinite-duration dependence structure of the Markov process: No matter how remote the lag $\lambda$, the pair $X_l, X_{l-\lambda}$ are dependent through the telescope-like product terms.
Example
Consider the first-order, linear autoregressive case: $X_l = aX_{l-1} + W_l$. The $k$th-order multivariate density has the expression
$$p^{(k)}_{\mathbf{X}_l}(\mathbf{X}^k) = p_X(X_{k-1}) \prod_{i=1}^{k-1} p_W(X_{i-1} - aX_i)$$
When the white input has a stable amplitude distribution, the output has the same distribution with a different variance: When a stationary distribution exists, the variance of the output equals that of the input divided by $1 - a^2$. Note that among stable distributions, only the Gaussian has finite variance. Be that as it may, we can derive the multivariate distribution for a linear, first-order Markov Cauchy process to be
$$p^{(k)}_{\mathbf{X}_l}(\mathbf{X}^k) = \frac{\gamma_X/\pi}{\gamma_X^2 + X_{k-1}^2} \prod_{i=1}^{k-1} \frac{\gamma/\pi}{\gamma^2 + (X_{i-1} - aX_i)^2}, \qquad \gamma_X = \frac{\gamma}{1 - |a|}$$
where $\gamma$ denotes the half-width (scale) of the input's Cauchy density. Note the asymmetry in this joint distribution, which indicates the process's time-irreversibility.
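The finite-variance version of the variance claim (a Gaussian input) is easy to confirm by simulation (a sketch; the coefficient, input variance, and seed are arbitrary):

```python
import random

random.seed(3)
a, sigma_w = 0.8, 1.0
x, xs = 0.0, []
for _ in range(200_000):
    x = a * x + random.gauss(0.0, sigma_w)   # X_l = a X_{l-1} + W_l
    xs.append(x)

mean = sum(xs) / len(xs)
var = sum((s - mean) ** 2 for s in xs) / len(xs)
print(var, sigma_w ** 2 / (1 - a ** 2))   # sample variance vs. 1/(1 - 0.64)
```

The sample variance settles near $\sigma_W^2/(1 - a^2)$, matching the stationary-variance formula in the text.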
q-dependence. This property applies to processes generated by systems that depend only on the most recent $q$ input values.
$$X_l = G(W_l, \ldots, W_{l-q+1})$$
Thus, amplitude values separated by $q$ or more samples are independent. For statisticians, q-dependence refines the definition of moving average processes. In the signal processing terminology of linear systems, such finite-memory systems are said to be FIR (have Finite-duration Impulse Responses). Correlation matrices of q-dependent processes are banded, with nonzero correlation extending only over $q - 1$ diagonals above and below the main one.
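For a linear q-dependent (moving average) process $X_l = \sum_k b_k W_{l-k}$ with unit-variance white input, the correlation function follows directly from the coefficients, making the banded structure explicit (a sketch; the coefficients are an arbitrary choice):

```python
def ma_correlation(b, tau):
    """R_X[tau] for X_l = sum_k b[k] W_{l-k} with unit-variance white W:
    R_X[tau] = sum_k b[k] * b[k + |tau|], which is zero once |tau| >= len(b)."""
    tau = abs(tau)
    return sum(b[k] * b[k + tau] for k in range(len(b) - tau))

b = [1.0, 0.4, 0.2]   # arbitrary q = 3 moving-average coefficients
print([ma_correlation(b, t) for t in range(5)])   # zero from lag 3 onward
```

A correlation matrix built from these values has nonzero entries only on the main diagonal and the $q - 1 = 2$ diagonals on either side of it.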
Mixing. Mixing structures rival Markovian structures in theoretical importance; in those situations where these structures both occur, very powerful results obtain. For a formal definition of mixing, define $\mathcal{M}_a^b$ to be the $\sigma$-field generated by the amplitudes $X_l$, $a \le l \le b$. With these $\sigma$-fields, we can (conceptually) define individual, conditional, and joint probabilities of the occurrence of particular sets defined over time-restricted portions of the process. The fundamental notion of mixing is that the joint probability of two sets of process amplitudes asymptotically factors (the two sets become independent) as the temporal separation between the sets increases. Formally, let $A \in \mathcal{M}_a^b$ and $A' \in \mathcal{M}_{a'}^{b'}$ be sets that are members of $\sigma$-fields defined over time intervals. Let $\tau$ denote the shift operator that operates on a set so that $\tau A \in \mathcal{M}_{a+1}^{b+1}$.
A process is said to be uniformly strongly mixing, uniformly mixing, or $\phi$-mixing if the two sets are asymptotically independent in a slightly different way than in strong mixing.
$$\sup_{A, A'} \big|\Pr[\tau^{-l}A' \mid A] - \Pr[A']\big| \longrightarrow 0 \quad \text{as } l \to \infty$$
A process is said to be $\rho$-mixing if the maximal correlation coefficient between the two $\sigma$-fields is asymptotically zero.
$$\sup_{U, V} \frac{\big|\mathrm{cov}[U, V]\big|}{\big(\mathrm{var}[U]\big)^{1/2}\big(\mathrm{var}[V]\big)^{1/2}} \longrightarrow 0$$
Here, $U \in L_2(\mathcal{M}_a^b)$ and $V \in L_2(\mathcal{M}_{a'+l}^{b'+l})$, where $L_2(\mathcal{M}_a^b)$ denotes the collection of all second-order (finite variance) random variables measurable with respect to $\mathcal{M}_a^b$.
$$4\,\alpha(l) \le \rho(l) \le 2\,\phi^{1/2}(l)$$
For stationary Gaussian processes, $\rho(l) \le 2\pi\,\alpha(l)$, which means that Gaussian processes are strongly mixing if and only if they are $\rho$-mixing [18]. From these inequalities, we glean the maximal (no other implications exist) relations among the various types of mixing conditions [3]. Strong mixing was the first defined condition and claims the high ground; the more recent definitions yield more stringent criteria, leaving the adverb in "strongly mixing" somewhat inappropriately chosen.
If stationary processes can be shown to be strongly mixing, many interesting properties can result. Among the process classes that have been shown to be strongly mixing are white noise, Markov processes (if they are purely nondeterministic) [33: p. 195], and Gaussian processes having continuous and positive power spectra [18]. A Markov process cannot be $\phi$-mixing without its mixing coefficients decreasing exponentially [8]: $\phi(l) \le c\,a^l$, $0 < a < 1$ and $c$ a positive constant. If $X_l$ is strongly mixing, then the process $Y_l = G(X_l, \ldots, X_{l-q+1})$ is strongly mixing [34: p. 79]. This result means that all q-dependent processes are strongly mixing.
We defer discussing the interaction of mixing and ergodic theory to §2.2.4. For now, we comment on an issue that should worry the signal processor: When can I produce estimates from dependent data and how does the dependence affect my estimates' accuracy? Suffice it to say that if a stationary sequence is strongly mixing, then estimation procedures, such as kernel estimates for densities and regression functions and distributional parameters, converge, but at a slower rate. Even the Central Limit Theorem survives!
Theorem [15: p. 316]  Let $S_k = \sum_{l=0}^{k-1} X_l$ be the cumulative sum of a strongly mixing process having mixing coefficient $\alpha(k)$. Let $P_k$ denote the distribution function of the cumulative sum normalized as $N_k^{-1}(S_k - M_k)$, where $\lim_{k\to\infty} N_k = \infty$. If $P_k$ converges to a non-degenerate distribution function $P$, then $P$ is stable. If this stable distribution's parameter equals $r$, then $N_k = k^{1/r} g(k)$, where $g(k)$ is slowly varying as $k \to \infty$.
Theorem [15: pp. 346–7]  Let $X_l$ be a zero-mean, strongly mixing stationary process with mixing coefficient $\alpha(\cdot)$ and variance $\sigma^2$ that has the property $E[|X_l|^{2+\delta}] < \infty$ for some $\delta > 0$. If $\sum_k \alpha^{\delta/(2+\delta)}(k) < \infty$, the distribution of the normalized cumulative sum converges to a Gaussian.
Putting these results in context, we found that distributions of sums of random variables are never far
away from densities of infinitely divisible random variables. For those situations for which these asymptotic
densities approach a limiting distribution, amplitude averages taken from observing strongly mixing processes
also obey a Central Limit Theorem.
2.2.4 Ergodicity
Perhaps the most subtle structural component of a random process is its ergodicity. Intuitively, an ergodic
process is one in which estimates of its characteristicsexpected value, covariance, amplitude distribution,
etc.have meaning. Nonergodic processes have the somewhat counterintuitive property that estimates cannot
converge; their dependence structure is such that convergence, no matter how many observations are available,
never occurs. Presumably, nature is not so capricious to not allow us to learn from experiment its underlying
structure: We believe that physically relevant models should produce ergodic processes. On the other hand,
we can produce nonergodic models without difficulty. For example, all spherically symmetric joint amplitude
distributions (save the Gaussian) correspond to nonergodic processes [41]. Thus, testing models for ergodic
behavior should be high on the checklist for reasonableness and applicability.
Definition  The process $X_l$ is said to be ergodic if the temporal average of a function $g(\cdot)$ of some finite collection $C(X)$ of observations converges to its expected value.
$$\lim_{L \to \infty} \frac{1}{L} \sum_{l=0}^{L-1} g\big(\tau^l C(X)\big) = E[g(C(X))] \quad \text{a.s.}$$
This collection consists of a selection of process amplitudes $X_{l_1}, \ldots, X_{l_N}$. We clearly need to require that $E[|g(C(X))|] < \infty$.
A process is purely nondeterministic if a set derived from currently occurring amplitudes is asymptotically independent of a set derived from amplitudes occurring in the distant past. Formally, if $A \in \mathcal{M}_a^b$, $\Pr[A \mid \mathcal{M}_{-\infty}^{-l}] \to \Pr[A]$ as $l \to \infty$ almost everywhere.
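A toy illustration of nonergodicity (a sketch, not the spherically symmetric family mentioned above but a simpler stand-in with the same flavor: each realization carries a random overall amplitude $A$, so time averages converge to realization-dependent values rather than to the ensemble average):

```python
import random

def time_average_of_square(seed, n=50_000):
    """Time average of X_l^2 for X_l = A * Z_l, where the amplitude A is drawn
    once per realization and Z_l is IID standard Gaussian."""
    rng = random.Random(seed)
    A = rng.choice([1.0, 3.0])          # fixed for the whole realization
    return sum((A * rng.gauss(0.0, 1.0)) ** 2 for _ in range(n)) / n

# The ensemble average E[X^2] = (1 + 9)/2 = 5, but every time average
# converges to A^2, i.e. to 1 or to 9 -- never to 5.
print([round(time_average_of_square(s), 2) for s in range(4)])
```

No matter how long we observe a single realization, the measurement never reveals the ensemble value, which is exactly the failure mode the definition rules out.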
This definition formalizes our notion of meaningful measurements, but does not directly relate ergodicity to a process's structural properties. To obtain results, two important theorems relate ergodicity to joint-amplitude-distribution and system descriptions of random processes.
Theorem  Let $A$ and $A'$ be sets contained in the $\sigma$-algebras $\mathcal{M}_a^b$ and $\mathcal{M}_{a'}^{b'}$ respectively, which are generated from the stationary random process $X_l$ over the interval expressed by the subscripts and superscripts. Let $\tau$ denote the shift operator that can be applied to these sets and $\tau^{-1}$ be the operator's inverse image: $\tau^{-1}A' = \{\omega : \tau\omega \in A'\}$. $X_l$ is ergodic if and only if
$$\lim_{L \to \infty} \frac{1}{L} \sum_{l=0}^{L-1} \Pr[A \cap \tau^{-l}A'] = \Pr[A]\,\Pr[A']$$
Expressed in terms of joint densities, this condition reads
$$\lim_{L \to \infty} \frac{1}{L} \sum_{l=0}^{L-1} p_{X_a, \ldots, X_b,\, X_{a'+l}, \ldots, X_{b'+l}}(\mathbf{X}, \mathbf{X}') = p_{X_a, \ldots, X_b}(\mathbf{X})\; p_{X_{a'}, \ldots, X_{b'}}(\mathbf{X}')$$
In the simplest (bivariate) case,
$$\lim_{L \to \infty} \frac{1}{L} \sum_{l=0}^{L-1} p_{X_0, X_l}(X, X') = p_X(X)\, p_X(X')$$
whereas the process is mixing when the stronger condition
$$\lim_{l \to \infty} p_{X_0, X_l}(X, X') = p_X(X)\, p_X(X')$$
holds.
Note that if an amplitude distribution passes these tests, that does not mean that the corresponding process is ergodic or mixing. Because the theorem demands that all sets selected from the $\sigma$-algebras satisfy the condition, a passing distribution is only consistent with ergodicity. On the other hand, a failing distribution is not ergodic, presumably dismissing the corresponding process from consideration as a viable model for reality.
Example
Consider the multivariate Gaussian density $\mathcal{N}(\mathbf{0}, \mathbf{K}_X)$. When the covariance function $K_X(\tau)$, which corresponds to entries in the covariance matrix $\mathbf{K}_X$, approaches zero for large lags, the multivariate density factors: $\lim_{\tau \to \infty} \mathcal{N}(\mathbf{0}, \mathbf{K}_X) = \mathcal{N}(0, \sigma_X^2)\, \mathcal{N}(0, \sigma_X^2)$. When we consider two groups of amplitudes separated by lag $\tau$, increasing the separation creates a covariance matrix that asymptotically has two square matrices on the diagonal and zero-valued entries elsewhere. Again, the multivariate density factors and the ergodicity test is passed for all choices of groups. Thus, stationary Gaussian processes are mixing, hence ergodic.
The seemingly odd appearance of $\tau^{-1}A'$ allows us to focus on a fixed set $A$ paired with a more and more remotely shifted set.
Example
Now consider the elliptically symmetric bivariate distribution having Laplacian marginals [25].
$$p_{X_0, X_\tau}(X_0, X_\tau) = \frac{1}{\pi\sigma^2(1-\rho^2)^{1/2}}\; K_0\!\left(\left[\frac{2\big(X_0^2 - 2\rho X_0 X_\tau + X_\tau^2\big)}{\sigma^2(1-\rho^2)}\right]^{1/2}\right)$$
Here, $K_0(\cdot)$ denotes the modified Bessel function of the second kind. All densities in this class are parameterized by the correlation coefficient $\rho$ between $X_0$ and $X_\tau$. Letting this coefficient approach zero (as in the Gaussian example just described), the density does not factor, meaning that the process is not mixing. Applying the more direct, but harder to use, sum-of-densities test, this density still does not pass. Thus, as expected (the only ergodic elliptically symmetric process is the Gaussian), the process corresponding to this density is not ergodic.
Theorem  Assume the conditions apply for a white-noise driven Markov system to produce a stationary output (Thm. 2.2.1). Furthermore, assume that
1. zero is an equilibrium point of the zero-input system ($0 = G(\mathbf{0}; 0)$);
2. the amplitude density of the input $W_l$ is non-zero in some open interval that includes the origin;
3. the transformation $G(\cdot\,;\cdot)$ is continuous everywhere and continuously differentiable in a neighborhood of the origin;
4. for some $\epsilon > 0$, $E\big[|G(\mathbf{X}; W_l)|^{1+\epsilon}\big] < \infty$ for all values of $\mathbf{X}$ lying in the system's state space.
Under these conditions, the output $X_l$ produced by $G(\mathbf{X}_{l-1}; W_l)$ has exponentially decreasing mixing coefficients $\alpha(\tau)$, and hence is ergodic.
Thus, stable systems that, like linear systems, produce zero output with no input and satisfy continuity properties yield an ergodic output. All stable linear systems fall into this category; chaotic systems do not (their output to zero input never dies). Because the theorem does not provide necessary and sufficient conditions, one wonders how nonlinear systems fit. In the case of additive-input Markov systems, if $G_s(X_{l-1})$ is bounded and the input's amplitude distribution function is continuous, then the output is mixing [9]. Thus, if the output is stationary, it is ergodic.
It is unfortunate that these two theorems' conditions differ. The theorem that applies to the joint amplitude distribution is more direct and exhaustive. The system-based one is not necessarily comprehensive: other systems and inputs may exist that produce ergodic outputs. More work is needed in this area to produce a comprehensive ergodic theorem.
$X_l - \widehat{X}_l(X_a, \ldots, X_b)$. The estimation error forms a mean-squared independent sequence (p. 26), which of course is not a white sequence. This next-of-kin to white noise that is produced by least-squares estimation errors is known as an innovations sequence.
Linear processes. From a systems viewpoint, the simplest nontrivial dependence structure is expressed by passing white noise through a linear system. When the system is strictly stable (all poles lie inside the unit circle), the resultant is a stationary linear process. For a finite-variance input to yield a finite-variance output, we must impose a somewhat more restrictive condition: $\sum_l h_l^2 < \infty$, where $h_l$ denotes the system's unit-sample response. In this case, a linear system generates a linear process from white noise according to the convolution sum
$$X_l = \sum_k h_{l-k}\,W_k$$
Note that the filter need not be causal to produce a well-defined linear process.
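As a concrete sketch of this convolution sum (the geometric filter taps, the truncation length, and the Gaussian white-noise input below are illustrative assumptions, not choices made in the text):

```python
import random

def linear_process(h, n, rng):
    """Generate n samples of X_l = sum_k h_{l-k} W_k by convolving
    white Gaussian noise with a (truncated) unit-sample response h."""
    m = len(h)
    # White-noise input: independent, identically distributed samples.
    w = [rng.gauss(0.0, 1.0) for _ in range(n + m)]
    # Convolution sum over the filter's finite support (a causal truncation).
    return [sum(h[k] * w[l + m - 1 - k] for k in range(m)) for l in range(n)]

rng = random.Random(0)
h = [0.5 ** k for k in range(20)]      # square-summable: sum of h_l^2 is finite
x = linear_process(h, 1000, rng)
var = sum(v * v for v in x) / len(x)   # near sum of h_l^2 = 4/3 for unit-variance input
```

A causal truncation is used here only for convenience; as noted above, causality is not required for the linear process to be well defined.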
The most frequently used linear process is the ARMA (autoregressive-moving average) process. Here, the system's input-output relation is expressed by the difference equation
$$X_l = \sum_{i=1}^{p} a_i X_{l-i} + \sum_{j=0}^{q} b_j W_{l-j}$$
The dependence structure of such processes is represented by the notation ARMA(p, q). Here, p equals the number of poles in the system's transfer function, q the number of zeros. A more specialized, but frequently used, case results when no zeros occur (q = 0). In this case, we have a pure AR process, which is symbolized by AR(p). In both ARMA and AR processes, when all poles lie inside the unit circle, the process is strongly mixing, which means that the output is ergodic. When no poles occur, the ARMA model becomes an MA(q) model, which produces a linear q-dependent output. This process is clearly ergodic so long as its order q is finite (p. 30). As described previously, the only time-reversible linear processes are linear Gaussian ones, produced when the white-noise input is Gaussian and when the system has a linear-phase transfer function (no poles and either even- or odd-symmetric moving-average coefficients $b_j$) (Thm. 2.2.2, p. 25). Typically,
linear non-Gaussian processes are time-irreversible.
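A minimal simulation of the ARMA difference equation above (the coefficient values, the AR(1) special case, and the Gaussian input are arbitrary illustrative choices):

```python
import random

def arma(a, b, n, rng):
    """Simulate X_l = sum_i a_i X_{l-i} + sum_j b_j W_{l-j}
    (an ARMA(p, q) process with p = len(a) and q = len(b) - 1)
    driven by Gaussian white noise."""
    p, q = len(a), len(b) - 1
    x, w = [], [0.0] * q                     # w holds the q most recent inputs
    for _ in range(n):
        w.insert(0, rng.gauss(0.0, 1.0))     # newest input first
        ar = sum(a[i] * x[-1 - i] for i in range(min(p, len(x))))
        ma = sum(b[j] * w[j] for j in range(q + 1))
        x.append(ar + ma)
        w.pop()                              # discard the oldest input
    return x

rng = random.Random(1)
x = arma(a=[0.9], b=[1.0], n=500, rng=rng)   # AR(1): single pole at z = 0.9
```

With its pole inside the unit circle, this AR(1) output settles (after a start-up transient) to a stationary, ergodic process, consistent with the discussion above.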
Stable processes. Having more theoretical than practical interest are stable processes. Here, the white-noise input to a linear system has an amplitude distribution drawn from the collection of stable probability densities. A linear combination of stable random variables produces a stable random variable having the same form: weighted sums of Gaussians are Gaussian, and weighted sums of Cauchy random variables have a Cauchy distribution. Thus, a linear system's output, when driven by stable white noise, is also stable and has a marginal amplitude distribution of the same form (the parameters can differ) as the input's. All stable processes are linear, and these processes can be used to explore the dependence structures of linear processes. However, the only stable density having a finite variance is the Gaussian, which means that non-Gaussian stable processes all have the somewhat unrealistic property of infinite power.
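A sketch of stable white noise driving a short moving-average filter; generating standard Cauchy variates through the inverse distribution function is a standard technique, and the filter coefficients below are arbitrary:

```python
import math
import random

def cauchy_white(n, rng):
    """Standard Cauchy (alpha = 1 stable) white noise via the inverse
    CDF: X = tan(pi * (U - 1/2)) for U uniform on [0, 1)."""
    return [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]

def moving_average(w, b):
    """Filter the noise; weighted sums of Cauchy variables remain Cauchy."""
    q = len(b)
    return [sum(b[j] * w[l - j] for j in range(q)) for l in range(q - 1, len(w))]

rng = random.Random(2)
w = cauchy_white(2000, rng)
x = moving_average(w, b=[0.5, 0.3, 0.2])
median = sorted(x)[len(x) // 2]   # the median exists even though the variance does not
```

Sample medians stay near zero, but sample variances of such data never settle down, reflecting the infinite-power property mentioned above.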
Chaotic processes. Using the word process alongside chaotic may seem to clash. However, chaotic signals, which are produced by zero-input, nonlinear systems, do share some common properties with random processes.
Consider the zero-input input-output relation expressed by
$$X_l = G_s(X_{l-1}, \ldots, X_{l-p})$$
Assume this system's initial condition, the values for the states $X_{l-1}, \ldots, X_{l-p}$, has some joint distribution. What is the output's asymptotic distribution, if it exists? In other words, we ask when repeated solutions of the state-evolution equation (2.3) (p. 22) converge. In some cases, no limit exists, and no such stationary density can be
It is somewhat curious that the dependence structure expressed by elliptically symmetric distributions contains all least-squares
predictors that turn out to be linear, and yet, because the corresponding processes are time-reversible, they cannot be produced by linear
systems having a white-noise input [2]!
defined. If the limit exists, then the evolution equation defines the limiting distribution.
Recall the notation $\mathbf{X}_p = \mathrm{col}[X_l, \ldots, X_{l-p+1}]$. The conditional density $p_{X_l|\mathbf{X}_p}$ expresses the input-output relation $G_s(\cdot)$. Because stable linear systems settle to zero from any initial condition, linear processes have quite boring limiting distributions: $p_{\mathbf{X}_p}(\mathbf{x}) = \delta(\mathbf{x})$. Nontrivial results can emerge in the nonlinear case.
Example
Consider the over-used example of the input-output relation expressed by the logistic map
$$X_l = 4X_{l-1}(1 - X_{l-1})$$
If the initial condition lies outside the interval $[0, 1]$, the system's output is unbounded: The system acts as if it is unstable. If, however, the system is started within this interval, the output's amplitude remains in $[0, 1]$ forever. To investigate solutions to the evolution equation, the conditional distribution specifying the system has the simple form
$$p_{X_l|X_{l-1}}(x_0|x_1) = \delta\bigl(x_0 - 4x_1(1 - x_1)\bigr)$$
In such situations, when a nontrivial amplitude distribution satisfies the evolution equation corresponding to
a zero-input, hence deterministic, system, the signal thus produced is said to be chaotic.
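The claim that amplitudes started in $[0, 1]$ remain there forever is easy to check by direct iteration of the quadratic map in the example above (the initial condition and run length are arbitrary choices):

```python
def orbit(x0, n):
    """Iterate the zero-input nonlinear system X_l = 4 X_{l-1} (1 - X_{l-1})."""
    x = [x0]
    for _ in range(n):
        x.append(4.0 * x[-1] * (1.0 - x[-1]))
    return x

xs = orbit(0.3, 1000)
lo, hi = min(xs), max(xs)   # the chaotic orbit fills out the unit interval
```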
Chaotic signals have been segregated from stochastic ones. A relation between chaotic signals, which are generated deterministically, and stochastic ones, generated by passing white noise through some system, would seem only remotely possible. When we consider the time-reversed system, the one that generates signals in the opposite temporal direction, a random-process characterization must result in at least some cases. Take the just-presented logistic-map example; the systems that generate the same signal values forward and backward in time are
$$X_l = 4X_{l-1}(1 - X_{l-1})$$
$$X_{l-1} = \frac{1 + W_l\sqrt{1 - X_l}}{2},\qquad \Pr[W_l = \pm 1] = \frac{1}{2}$$
Sec. 2.3 Simple Waveform Processes 35
Figure 2.3: Using the initial condition $X_0 = 1/2$ in the logistic-map difference equation produces the depicted signal. Visually, this signal seems odd somehow, having some structure that does not correspond to intuitive notions of what random signals should appear to be.
The second equation results from the sign ambiguity of the quadratic formula applied to the first equation to find $X_{l-1}$ in terms of $X_l$. The equally likely choice for the probability assignment creates an amplitude distribution for the signal that agrees with the chaotic system's stationary distribution. Because of this kind of constraint, a nonlinear Markov model for the signal must be used to describe how to generate the chaotic signal in the opposite temporal direction. We chose the temporal direction arbitrarily in this example; nature is not so arbitrary. If we impose causality as a constraint to help define the right model for a given set of observations, one of these models (either the stochastic or the deterministic choice) becomes the preferred model. Thus, the observations' time-reversibility structure determines which model describes natural phenomena!
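The forward/backward pair above can be checked numerically: one of the two sign choices in the time-reversed system exactly recovers the preceding amplitude (the starting value is arbitrary):

```python
import math

def forward(x_prev):
    """Deterministic (chaotic) direction: X_l = 4 X_{l-1} (1 - X_{l-1})."""
    return 4.0 * x_prev * (1.0 - x_prev)

def backward(x_next, w):
    """Stochastic direction: X_{l-1} = (1 + w * sqrt(1 - X_l)) / 2, w = +/- 1."""
    return (1.0 + w * math.sqrt(1.0 - x_next)) / 2.0

x_prev = 0.3
x_next = forward(x_prev)                             # 4 * 0.3 * 0.7 = 0.84
roots = [backward(x_next, +1), backward(x_next, -1)]  # the two preimages: 0.7 and 0.3
```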
Example
Another, more interesting, example of this duality is the process discussed by Rosenblatt [34: p. 52]. Here, with K some nonzero integer (positive or negative), two systems produce identical signal values:
$$X_l = \frac{1}{K}X_{l-1} + W_l,\qquad \Pr\!\left[W_l = \frac{k}{K}\right] = \frac{1}{K},\quad k = 0, \ldots, K-1$$
$$X_{l-1} = K X_l \bmod 1$$
The first difference equation describes a linear autoregressive (hence Markov) process. Using the techniques of a previous example (p. 24) applied to this process, we find that the generated signal's amplitude distribution is uniform over $[0, 1)$. The (deterministic) equation describing how to generate this signal in the opposite temporal direction is also known as a congruential uniform random number generator. Apparently, chaotic signals have been used for decades to model stochastic phenomena! The stochastic counterpart indicates that the successive outputs of this random number generator are correlated (correlation coefficient $1/K$). This correlation is one reason why K is usually quite large in applications.
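The duality in Rosenblatt's example can be verified exactly with rational arithmetic (the modulus K, the seed, and the run length are arbitrary choices):

```python
import random
from fractions import Fraction

K = 16
rng = random.Random(4)

# Stochastic direction: X_l = X_{l-1}/K + W_l, W_l uniform on {0, 1/K, ..., (K-1)/K}.
x = [Fraction(1, 3)]
for _ in range(20):
    x.append(x[-1] / K + Fraction(rng.randrange(K), K))

# Deterministic direction: X_{l-1} = (K * X_l) mod 1 -- a congruential map.
recovered = [(K * x[l]) % 1 for l in range(1, len(x))]
```

Exact rationals avoid the floating-point round-off that would otherwise obscure the identity between the two directions.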
We emphasize that these examples demonstrate that randomness and determinism are not dichotomous concepts. Either model can describe the same set of observations in these cases. Tests that attempt to distinguish chaotic signals from random ones cannot succeed unless they take into account the direction of time. Signal processing researchers are intensely investigating the full picture of how chaos and randomness are related.
interval.
$$\Pr[N_{t+\Delta t} - N_t = 1 \mid N_t = n, \mathbf{W} = \mathbf{w}] = \lambda(t; n; \mathbf{w})\,\Delta t + o(\Delta t)$$
Here, $o(\cdot)$ denotes a quantity that approaches zero faster than its argument: $\lim_{x\to 0} o(x)/x = 0$. The quantity $\lambda(t; n; \mathbf{w})$ denotes the point process's intensity, an expression of the instantaneous rate at which events occur and how this rate depends on process history.
In waveform processes, the stochastic process is defined by the multivariate distribution of arbitrarily selected amplitudes. For a regular point process, the intensity defines it: We distinguish point processes solely by their intensity definitions. Save for the Poisson process, the intensity depends on history, and expresses the process's dependence structure. Note that because the intensity is proportional to probability, all intensities are non-negative and bounded, and have units of events/s. When an intensity equals zero, no events occur. Because the intensity defines how the event occurrence rate varies with previous event occurrences, which are governed by the point process's probability law, the intensity itself is a random process when viewed as a waveform. When the point process is ergodic, the intensity can be estimated from observations, meaning that, as opposed to waveform processes, we have a chance of deriving a model for an observed point process. Often, we want the rate of event occurrence to depend on more than just the process's history. To model seasonal variations of rain storm occurrences, for example, the intensity should depend on some periodic
The notion of the Big Bang comes to mind here.
We define the occurrence time w0 to equal t1 , the beginning of the observation interval.
Sec. 2.4 Structure of Point Processes 37
waveform. In such cases, the intensity depends on time through more than its history. The waveform controlling event occurrence in addition to history can be deterministic, or it can be a sample function of a waveform process. When deterministic, we have a nonstationary point process; when stochastic, we have what is known as a doubly stochastic point process. The intensity of such processes is indicated by $\lambda(s_t; t; N_t; \mathbf{W})$, where $s_t$ represents a signal that somehow modulates the event occurrence rate.
In applications, we may want to associate with each event a value or sequence of values. For example,
Californians not only care when earthquakes occur, but also how strong they are.
Definition A marked point process is regular and has associated with each event a mark $\mathbf{U}$: a vector of random variables $\mathrm{col}[U_1, \ldots, U_m]$ having a joint probability distribution dependent on process history (which now includes previous marks).
The intensity of a marked point process has the complicated general form $\lambda(t; N_t; \mathbf{W}; \mathbf{U}_1, \ldots, \mathbf{U}_{N_t})$. In marked processes, the rate at which events occur as well as the mark values can depend on previous marks and when they occurred. For example, the earthquake rate increases (temporarily) after a big one occurs, and these aftershocks thankfully have decreased mark values.
The definition of stationarity is somewhat tricky because of the explicit inclusion of the process's birth at time t = 0. Frequently, point processes have a nontrivial dependence structure: The rate of event occurrence does depend on process history. Because no history exists prior to the process's birthday, how the intensity evolves from having no history to having one becomes an important mathematical detail that we would like to ignore in applications: We would like to assume that the start-up transient has dissipated, leaving the process in a kind of steady state. The only point processes immune from this transient (Poisson processes) have intensities that do not depend on process history.
Definition The sample function density $p_{N_{t_1,t_2},\mathbf{W}}(n; \mathbf{w})$ equals the joint probability density of the number and occurrence times of events that occur in a given interval $(t_1, t_2]$.
Note that this density completely characterizes a point process during the stated time interval. A reasonable definition of stationarity must involve this density's properties as the observation interval becomes more distant from process initiation.
Definition A point process is said to be stationary if the sample function density asymptotically depends only on the interval $t - w_{N_t}$ and on the interevent interval vector $\boldsymbol{\tau}$ equivalent to $\mathbf{W}$.
This definition recalls the existence of a stationary distribution for a waveform process produced by a system driven by a white input (p. 23). The point-process equivalent of a generation model is the intensity; as described subsequently (p. 42), the sample function density completely depends on the intensity and, given the intensity, we can generate the point process.
2.4.2 The Poisson Process
Some signals have no waveform. Consider the measurement of when lightning strikes occur within some region; the random process is the sequence of event times, which has no intrinsic waveform. Such processes are termed point processes, and have been shown [37] to have a simple mathematical structure. Define some quantities first. Let $N_t$ be the number of events that have occurred up to time t (observations are by convention assumed to start at t = 0). This quantity is termed the counting process, and has the shape of a staircase function: The counting function consists of a series of plateaus always equal to an integer, with jumps between plateaus occurring when events occur. $N_{t_1,t_2} = N_{t_2} - N_{t_1}$ corresponds to the number of events in the interval $(t_1, t_2]$. Consequently, $N_t = N_{0,t}$. The event times comprise the random vector $\mathbf{W}$; the dimension of this vector is $N_t$, the number of events that have occurred. The occurrence of events is governed by a quantity known as the intensity $\lambda(t; N_t; \mathbf{W})$ of the point process through the probability law
$$\Pr[N_{t,t+\Delta t} = 1 \mid N_t; \mathbf{W}] = \lambda(t; N_t; \mathbf{W})\,\Delta t$$
for sufficiently small $\Delta t$. Note that this probability is a conditional probability; it can depend on how many events occurred previously and when they occurred. The intensity can also vary with time to describe nonstationary point processes. The intensity has units of events/s, and it can be viewed as the instantaneous rate at which events occur.
The simplest point process from a structural viewpoint, the Poisson process, has no dependence on process history. A stationary Poisson process results when the intensity equals a constant: $\lambda(t; N_t; \mathbf{W}) = \lambda_0$. Thus, in a Poisson process, a coin is flipped every $\Delta t$ seconds, with a constant probability of heads (an event) occurring that equals $\lambda_0\,\Delta t$ and is independent of the occurrence of past (and future) events. When this probability varies with time, the intensity equals $\lambda(t)$, a non-negative signal, and a nonstationary Poisson process results.
From the Poisson process's definition, we can derive the probability laws that govern event occurrence. These fall into two categories: the count statistics $\Pr[N_{t_1,t_2} = n]$, the probability of obtaining n events in an interval $(t_1, t_2]$, and the time-of-occurrence statistics $p_{\mathbf{W}_n}(\mathbf{w})$, the joint distribution of the first n event times in the observation interval. These times form the vector $\mathbf{W}_n$, the occurrence-time vector of dimension n. From these two probability distributions, we can derive the sample function density.
Count statistics. We derive a differentio-difference equation that $\Pr[N_{t_1,t_2} = n]$, $t_1 < t_2$, must satisfy for event occurrence in an interval to be regular and independent of event occurrences in disjoint intervals. Let $t_1$ be fixed and consider event occurrence in the intervals $(t_1, t_2]$ and $(t_2, t_2 + \delta]$, and how these contribute to the occurrence of $n$ events in the union of the two intervals. If $k$ events occur in $(t_1, t_2]$, then $n - k$ must occur in $(t_2, t_2 + \delta]$. Furthermore, the scenarios for different values of $k$ are mutually exclusive. Consequently,
$$\Pr[N_{t_1,t_2+\delta} = n] = \sum_{k=0}^{n}\Pr[N_{t_1,t_2} = k,\ N_{t_2,t_2+\delta} = n - k]$$
$$= \Pr[N_{t_2,t_2+\delta} = 0 \mid N_{t_1,t_2} = n]\,\Pr[N_{t_1,t_2} = n] + \Pr[N_{t_2,t_2+\delta} = 1 \mid N_{t_1,t_2} = n - 1]\,\Pr[N_{t_1,t_2} = n - 1] + \sum_{k=2}^{n}\Pr[N_{t_2,t_2+\delta} = k \mid N_{t_1,t_2} = n - k]\,\Pr[N_{t_1,t_2} = n - k]$$
Because of the independence of event occurrence in disjoint intervals, the conditional probabilities in this expression equal the unconditional ones. When $\delta$ is small, only the first two terms will be significant to first order in $\delta$. Rearranging and taking the obvious limit, we have the equation defining the count statistics:
$$\frac{d\,\Pr[N_{t_1,t_2} = n]}{dt_2} = -\lambda(t_2)\,\Pr[N_{t_1,t_2} = n] + \lambda(t_2)\,\Pr[N_{t_1,t_2} = n - 1]$$
To solve this equation, we apply a z-transform to both sides. Defining the transform of $\Pr[N_{t_1,t_2} = n]$ to be $P_{t_2}(z)$, we have
$$\frac{\partial P_{t_2}(z)}{\partial t_2} = -\lambda(t_2)\bigl(1 - z^{-1}\bigr)P_{t_2}(z)$$
Applying the boundary condition that $P_{t_1}(z) = 1$, this simple first-order differential equation has the solution
$$P_{t_2}(z) = \exp\left\{-\bigl(1 - z^{-1}\bigr)\int_{t_1}^{t_2}\lambda(\alpha)\,d\alpha\right\}$$
To evaluate the inverse z-transform, we simply exploit the Taylor series expansion of the exponential, and we find that a Poisson probability mass function governs the count statistics for a Poisson process:
$$\Pr[N_{t_1,t_2} = n] = \frac{\left(\int_{t_1}^{t_2}\lambda(\alpha)\,d\alpha\right)^{n}}{n!}\exp\left\{-\int_{t_1}^{t_2}\lambda(\alpha)\,d\alpha\right\} \quad (2.5)$$
In the literature, stationary Poisson processes are sometimes termed homogeneous, nonstationary ones inhomogeneous.
Remember, $t_1$ is fixed and can be suppressed notationally.
The integral of the intensity occurs frequently, and we succinctly denote it by $\Lambda_{t_1}^{t_2} = \int_{t_1}^{t_2}\lambda(\alpha)\,d\alpha$. When the Poisson process is stationary, the intensity equals a constant, and the count statistics depend only on the difference $t_2 - t_1$.
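For a constant intensity, Eq. 2.5 reduces to the familiar Poisson mass function with $\Lambda = \lambda_0(t_2 - t_1)$; a direct evaluation (the parameter values below are arbitrary) confirms that the mass function sums to one and that its mean and variance coincide:

```python
import math

def poisson_pmf(lam0, t1, t2, nmax):
    """Count statistics of a stationary Poisson process (Eq. 2.5 with
    constant intensity): Pr[N = n] = Lambda^n e^{-Lambda} / n!,
    where Lambda = lam0 * (t2 - t1)."""
    lam = lam0 * (t2 - t1)
    return [lam ** n * math.exp(-lam) / math.factorial(n) for n in range(nmax + 1)]

pmf = poisson_pmf(lam0=2.0, t1=0.0, t2=3.0, nmax=60)   # Lambda = 6
mean = sum(n * p for n, p in enumerate(pmf))
var = sum(n * n * p for n, p in enumerate(pmf)) - mean ** 2
```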
Time-of-occurrence statistics. To derive the multivariate distribution of $\mathbf{W}$, we use the count statistics and the independence properties of the Poisson process. The density we seek satisfies
$$p_{\mathbf{W}_n}(\mathbf{w})\,\delta_1\cdots\delta_n \approx \Pr[w_1 < W_1 \le w_1 + \delta_1, \ldots, w_n < W_n \le w_n + \delta_n]$$
The expression on the right equals the probability that no events occur in $(t_1, w_1]$, one event in $(w_1, w_1 + \delta_1]$, no event in $(w_1 + \delta_1, w_2]$, etc. Because of the independence of event occurrence in these disjoint intervals, we can multiply together the probabilities of these event occurrences, each of which is given by the count statistics:
$$\Pr[w_1 < W_1 \le w_1 + \delta_1, \ldots, w_n < W_n \le w_n + \delta_n] = e^{-\Lambda_{t_1}^{w_1}}\,\Lambda_{w_1}^{w_1+\delta_1}e^{-\Lambda_{w_1}^{w_1+\delta_1}}\,e^{-\Lambda_{w_1+\delta_1}^{w_2}}\cdots \approx e^{-\Lambda_{t_1}^{w_n}}\prod_{k=1}^{n}\lambda(w_k)\,\delta_k \quad \text{for small } \delta_k$$
From this approximation, we find that the joint distribution of the first $n$ event times equals
$$p_{\mathbf{W}_n}(\mathbf{w}) = \begin{cases}\displaystyle\prod_{k=1}^{n}\lambda(w_k)\,\exp\left\{-\int_{t_1}^{w_n}\lambda(\alpha)\,d\alpha\right\} & t_1 \le w_1 \le w_2 \le \cdots \le w_n\\ 0 & \text{otherwise}\end{cases}$$
Sample function density. For Poisson processes, the sample function density describes the joint distribution of counts and event times within a specified time interval. Thus, it can be written as
$$p_{N_{t_1,t_2},\mathbf{W}}(n; \mathbf{w}) = \Pr[N_{t_1,t_2} = n \mid W_1 = w_1, \ldots, W_n = w_n]\;p_{\mathbf{W}_n}(\mathbf{w})$$
The second term in the product equals the distribution derived previously for the time-of-occurrence statistics. The conditional probability equals the probability that no events occur between $w_n$ and $t_2$; from the Poisson process's count statistics, this probability equals $\exp\{-\Lambda_{w_n}^{t_2}\}$. Consequently, the sample function density for the Poisson process, be it stationary or not, equals
$$p_{N_{t_1,t_2},\mathbf{W}}(n; \mathbf{w}) = \prod_{k=1}^{n}\lambda(w_k)\,\exp\left\{-\int_{t_1}^{t_2}\lambda(\alpha)\,d\alpha\right\} \quad (2.6)$$
Properties. From the probability distributions derived on the previous pages, we can discern many structural properties of the Poisson process. These properties set the stage for delineating other point processes from the Poisson. They, as described subsequently, have much more structure and are much more difficult to handle analytically.
The counting process $N_t$ is an independent-increment process. For a Poisson process, the numbers of events in disjoint intervals are statistically independent of each other, meaning that we have an independent-increment process. When the Poisson process is stationary, increments taken over equi-duration intervals are identically distributed as well as being statistically independent. Two important results obtain from this property. First, the counting process's covariance function equals $K_N(t, u) = \sigma^2\min(t, u)$. This close relation to the Wiener waveform process indicates the fundamental nature of the Poisson process in the world of point processes. Note, however, that the Poisson counting process is not continuous almost surely. Second,
the sequence of counts forms an ergodic process, meaning we can estimate the intensity parameter from
observations.
The mean and variance of the number of events in an interval can be easily calculated from the Poisson distribution. Alternatively, we can calculate the characteristic function and evaluate its derivatives. The characteristic function of an increment equals
$$\Phi_{N_{t_1,t_2}}(\nu) = \exp\left\{\bigl(e^{j\nu} - 1\bigr)\Lambda_{t_1}^{t_2}\right\}$$
The first two moments and variance of an increment of the Poisson process, be it stationary or not, equal
$$\mathcal{E}\bigl[N_{t_1,t_2}\bigr] = \Lambda_{t_1}^{t_2}$$
$$\mathcal{E}\bigl[N_{t_1,t_2}^2\bigr] = \Lambda_{t_1}^{t_2} + \bigl(\Lambda_{t_1}^{t_2}\bigr)^2$$
$$\mathcal{V}\bigl[N_{t_1,t_2}\bigr] = \Lambda_{t_1}^{t_2}$$
Note that the mean equals the variance here, a trademark of the Poisson process.
Poisson process event times form a Markov process. Consider the conditional density $p_{W_n|W_{n-1},\ldots,W_1}(w_n|w_{n-1},\ldots,w_1)$. This density equals the ratio of the event-time densities for the $n$- and $(n-1)$-dimensional event-time vectors. Simple substitution yields
$$p_{W_n|W_{n-1},\ldots,W_1}(w_n|w_{n-1},\ldots,w_1) = \lambda(w_n)\exp\left\{-\int_{w_{n-1}}^{w_n}\lambda(\alpha)\,d\alpha\right\},\qquad w_n \ge w_{n-1}$$
Thus, the $n$th event time depends only on the $(n-1)$st: the event times form a Markov process. For the stationary Poisson process, this conditional density depends only on the interevent interval $\tau_n = w_n - w_{n-1}$, so that the intervals form a white sequence with exponential density
$$p_{\tau_n}(\tau) = \lambda_0\,e^{-\lambda_0\tau},\qquad \tau \ge 0$$
To show that the exponential density for a white sequence corresponds to the most random distribution,
Parzen [30] proved that the ordered times of n events sprinkled independently and uniformly over a given in-
terval form a stationary Poisson process. If the density of event sprinkling is not uniform, the resulting ordered
times constitute a nonstationary Poisson process with an intensity proportional to the sprinkling density.
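Parzen's construction is easy to sketch: sprinkle points independently and uniformly, then sort them (the point count and interval below are arbitrary choices):

```python
import random

def sprinkle(n, t_end, rng):
    """Ordered times of n events dropped independently and uniformly on
    [0, t_end]; these form a segment of a stationary Poisson process."""
    return sorted(rng.uniform(0.0, t_end) for _ in range(n))

rng = random.Random(5)
times = sprinkle(2000, 100.0, rng)
gaps = [b - a for a, b in zip(times, times[1:])]
mean_gap = sum(gaps) / len(gaps)   # near 1/intensity = t_end / n = 0.05
```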
Doubly stochastic Poisson processes. Here, the intensity $\lambda(t)$ equals a sample function drawn from
some waveform process. In waveform processes, the analogous concept does not have nearly the impact it
does here. Because intensity waveforms must be non-negative, the intensity process must be nonzero mean
and non-Gaussian. Assume throughout that the intensity process is stationary for simplicity. This model
arises in those situations in which the event occurrence rate clearly varies unpredictably with time. Such
processes have the property that the variance-to-mean ratio of the number of events in any interval exceeds
one. In the process of deriving this last property, we illustrate the typical way of analyzing doubly stochastic
processes: Condition on the intensity equaling a particular sample function, use the statistical characteristics
of nonstationary Poisson processes, then average with respect to the intensity process. To calculate the expected number $\mathcal{E}[N_{t_1,t_2}]$ of events in an interval, we use conditional expected values:
$$\mathcal{E}\bigl[N_{t_1,t_2}\bigr] = \mathcal{E}\Bigl[\,\mathcal{E}\bigl[N_{t_1,t_2} \mid \lambda(t),\ t_1 \le t \le t_2\bigr]\,\Bigr] = \mathcal{E}\left[\int_{t_1}^{t_2}\lambda(t)\,dt\right]$$
This result can also be written as the expected value of the integrated intensity: $\mathcal{E}[N_{t_1,t_2}] = \mathcal{E}\bigl[\Lambda_{t_1}^{t_2}\bigr]$. Similar calculations yield the increment's second moment and variance:
$$\mathcal{E}\bigl[N_{t_1,t_2}^2\bigr] = \mathcal{E}\bigl[\Lambda_{t_1}^{t_2}\bigr] + \mathcal{E}\Bigl[\bigl(\Lambda_{t_1}^{t_2}\bigr)^2\Bigr]$$
$$\mathcal{V}\bigl[N_{t_1,t_2}\bigr] = \mathcal{E}\bigl[\Lambda_{t_1}^{t_2}\bigr] + \mathcal{V}\bigl[\Lambda_{t_1}^{t_2}\bigr]$$
Using the last result, we find that the variance-to-mean ratio in a doubly stochastic process always exceeds unity, equaling one plus the variance-to-mean ratio of the intensity process.
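The variance-to-mean claim can be checked by simulation: draw a random integrated intensity $\Lambda$, then a Poisson count with that mean. The two-level $\Lambda$ distribution, the inverse-CDF Poisson sampler, and the sample size are all illustrative assumptions:

```python
import math
import random

def poisson_draw(lam, rng):
    """Sample a Poisson variate with mean lam by inverting its CDF."""
    u, n, p = rng.random(), 0, math.exp(-lam)
    cdf = p
    while cdf < u and n < 1000:   # cap guards against round-off in the far tail
        n += 1
        p *= lam / n
        cdf += p
    return n

rng = random.Random(6)
counts = []
for _ in range(20000):
    big_lambda = rng.choice([2.0, 10.0])     # random integrated intensity
    counts.append(poisson_draw(big_lambda, rng))

mean = sum(counts) / len(counts)             # E[Lambda] = 6
var = sum(c * c for c in counts) / len(counts) - mean ** 2
fano = var / mean   # theory: 1 + V[Lambda]/E[Lambda] = 1 + 16/6
```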
The approach of sample-function conditioning can also be used to derive the density of the number of events occurring in an interval for a doubly stochastic Poisson process. Conditioned on the occurrence of a sample function, the probability of $n$ events occurring in the interval $(t_1, t_2]$ equals (Eq. 2.5, p. 38)
$$\Pr\bigl[N_{t_1,t_2} = n \mid \lambda(t),\ t_1 \le t \le t_2\bigr] = \frac{\bigl(\Lambda_{t_1}^{t_2}\bigr)^n}{n!}\exp\bigl\{-\Lambda_{t_1}^{t_2}\bigr\}$$
Because $\Lambda_{t_1}^{t_2}$ is a random variable, the unconditional distribution equals this conditional probability averaged with respect to this random variable's density. This average is known as the Poisson transform of the random variable's density:
$$\Pr\bigl[N_{t_1,t_2} = n\bigr] = \int_0^{\infty}\frac{\alpha^n}{n!}e^{-\alpha}\,p_{\Lambda_{t_1}^{t_2}}(\alpha)\,d\alpha$$
These expressions encompass both stationary and nonstationary cases. When stationary, the intensity depends only on interevent intervals, and this dependence is frequently expressed by rewriting the intensity in terms of intervals instead of time: $\lambda(t; n; \mathbf{w}) = \tilde\lambda(\tau_{n+1}; n; \boldsymbol{\tau})$. Using this re-expression, the last equation becomes
$$p_{\tau_{n+1}|N_t,\boldsymbol{\tau}}(\tau_{n+1}|n; \boldsymbol{\tau}) = \tilde\lambda(\tau_{n+1}; n; \boldsymbol{\tau})\exp\left\{-\int_0^{\tau_{n+1}}\tilde\lambda(\alpha; n; \boldsymbol{\tau})\,d\alpha\right\} \quad (2.7)$$
Here, $\tau_{n+1}$ is defined to be the time until the next event: $\tau_{n+1} = t - w_{N_t}$.
From these relations, we can derive a systems model for generating regular point processes [17, 28]. We exploit here a property of the conditional distribution function. For any random variable $X$, its distribution function maps its range uniformly into the interval $[0, 1]$. Furthermore, the inverse distribution function $P_X^{-1}$ maps the unit interval into the random variable's domain. Thus, applying the inverse distribution function to a uniformly distributed random variable $U \in [0, 1]$ results in the random variable $X$: $P_X^{-1}(U) = X$. This property underlies many a random-variable generation technique. Here, we apply it to the conditional interval distribution, which equals $\exp\bigl\{-\int_0^{\tau_{n+1}}\tilde\lambda(\alpha; n; \boldsymbol{\tau})\,d\alpha\bigr\}$:
$$-\ln U_{n+1} = \int_0^{\tau_{n+1}}\tilde\lambda(\alpha; n; \boldsymbol{\tau})\,d\alpha$$
The negative logarithm of a uniform random variable equals a unit-parameter exponential random variable, which has density $e^{-x}u(x)$. Denoting such a random variable as $E$, we find that
$$E_{n+1} = \int_0^{\tau_{n+1}}\tilde\lambda(\alpha; n; \boldsymbol{\tau})\,d\alpha$$
When applied to the sequence of (dependent) interevent intervals, this mapping generates a sequence of independent, identically distributed random variables: white noise. Thus, the intervals in a regular point process can be generated by passing white noise (distributed exponentially) through the inverse function of the above integral.
$$\tau_{n+1} = G(\boldsymbol{\tau}; E_{n+1}) \quad (2.8)$$
where
$$G^{-1}(\boldsymbol{\tau}; \tau_{n+1}) = \int_0^{\tau_{n+1}}\tilde\lambda(\alpha; n; \boldsymbol{\tau})\,d\alpha$$
When the intensity depends only on a finite number of events that occurred prior to time t, the sequence of interevent intervals constitutes a Markov process. Thus, the Markov-process structural characterizations developed in previous sections for waveform processes apply as well to the sequence of interevent intervals in a stationary point process. When the point process is nonstationary, the generating system varies with time, which must be expressed as the sum of interval durations: $t = \sum_n \tau_n$.
Example
The simplest possible example is the stationary Poisson process. Assume that it has intensity equaling $\lambda_0$. The integral can be calculated explicitly as $\lambda_0\tau_{n+1}$, and its inverse is, of course, also linear. We find that
$$\tau_n = \frac{1}{\lambda_0}E_n$$
This system yields a sequence of independent, exponentially distributed random variables having parameter $\lambda_0$.
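This one-line generator is easily sketched ($\lambda_0$ and the sample count below are arbitrary; using $1 - U$ keeps the logarithm's argument away from zero):

```python
import math
import random

def poisson_intervals(lam0, n, rng):
    """tau_n = E_n / lam0, with E_n = -ln(U_n) unit-parameter
    exponential white noise."""
    return [-math.log(1.0 - rng.random()) / lam0 for _ in range(n)]

rng = random.Random(7)
taus = poisson_intervals(lam0=5.0, n=20000, rng=rng)
mean_tau = sum(taus) / len(taus)   # near 1 / lam0 = 0.2
```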
The next several examples illustrate frequently encountered intensities. These are all stationary; they can
be made nonstationary by including temporal variations into the intensity and doubly stochastic by making
these variations dependent on a random process.
Renewal processes. Second only to the Poisson process in simplicity, stationary renewal processes are characterized by independent interevent intervals that are not exponentially distributed. Here, the probability of an event depends on the time since the last event occurrence:
$$\tilde\lambda(\tau_{n+1}; n; \boldsymbol{\tau}) = \lambda_0\,r(\tau_{n+1})$$
Note that this inverse function is always well-defined. When the intensity is positive, the integral is strictly increasing. When zero, the integral is constant over some contiguous range of interevent intervals. In this case, the inverse function is taken to be the range's rightmost edge.
Here, $r(\tau)$ denotes the recovery function, which is normalized so that $\lim_{\tau\to\infty} r(\tau) = 1$. The Poisson process is a renewal process with a recovery function equaling one for all intervals. The normalization isolates the dependence of event occurrence on the interval from the implicit occurrence rate $\lambda_0$. Note that this rate does not equal the average occurrence rate except in the Poisson case.
In a renewal process, the interval distribution can be directly calculated from the intensity and vice versa. Using Eq. 2.7 (p. 41),
$$p_\tau(\tau) = \lambda_0\,r(\tau)\exp\left\{-\int_0^{\tau}\lambda_0\,r(\alpha)\,d\alpha\right\}$$
$$\lambda_0\,r(\tau) = \frac{p_\tau(\tau)}{1 - P_\tau(\tau)}$$
The ratio in the last equation is known in the point-process literature as the hazard function and the age-specific failure rate. This terminology comes from considering events as component failures of some sort and noting that the ratio can be interpreted as the probability of a failure occurring in an instant at $\tau$ given that the failure interval exceeds $\tau$.
Example
One example recovery function is a delayed step: $r(\tau) = u(\tau - \Delta)$. Here, $\Delta$ is known as the deadtime: Because the intensity equals zero for $\Delta$ seconds after each event, events cannot occur during this time. This model has been used to approximate latching of a photomultiplier tube after a recorded incident photon and to describe discharge patterns of single auditory neurons [16]. The average rate at which events occur in this process equals $\lambda_0/(1 + \lambda_0\Delta)$. This result can be easily derived by considering how to generate it: Using the generation equation (2.8), we find that $\tau_n = \Delta + \frac{1}{\lambda_0}E_n$.
Passing from a stationary renewal process model to a nonstationary one can be done in several
ways. One approach used in modeling neural discharges [16] is to express the intensity as the product
of a recovery function and a time-varying rate.
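Both the deadtime generation rule and the average-rate formula of the example above can be checked at once ($\lambda_0$, $\Delta$, and the sample size are arbitrary choices):

```python
import math
import random

def deadtime_intervals(lam0, delta, n, rng):
    """tau_n = delta + E_n / lam0 for the delayed-step recovery function."""
    return [delta - math.log(1.0 - rng.random()) / lam0 for _ in range(n)]

rng = random.Random(8)
lam0, delta = 4.0, 0.5
taus = deadtime_intervals(lam0, delta, 20000, rng)
avg_rate = len(taus) / sum(taus)   # theory: lam0 / (1 + lam0 * delta) = 4/3
```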
Example
One renewal process that exhibits positive aging (the probability of an event increases with time since the last event) has a linear recovery function: $r(\tau) = a\tau$. Here, the interevent intervals have a Rayleigh density:
$$p_\tau(\tau) = \lambda_0\,a\tau\,\exp\bigl\{-\lambda_0\,a\tau^2/2\bigr\}$$
First-order Markovian point processes. More complicated point-process dependence structures have been found in recordings from single neurons [17]. Here, the probability of an event depends not only on the time since the last event, but also on the time since the penultimate one. Thus, the sequence of intervals forms a first-order Markov process in which the intensity has the form
$$\tilde\lambda(\tau_{n+1}; n; \boldsymbol{\tau}) = \lambda_0\,r\bigl(\tau_{n+1} - s(\tau_n)\bigr)$$
where $s(\cdot)$ is a positive-valued shifting function that essentially delays the recovery function to longer interval durations in a way that depends on the previous interval's duration. When the shifting function is a decreasing function, the delay is less for longer preceding intervals than for shorter ones. Thus, in this process, long intervals tend to be followed by short ones and vice versa. Simple calculations from the generation equation (Eq. 2.8, p. 42) show that the conditional expected value, equivalent to the least-squares predictor, equals the shifting function plus a constant:
$$\mathcal{E}[\tau_{n+1}|\tau_n] = s(\tau_n) + C$$
The constant depends in a complicated way on both the recovery function and the shifting function.
Hawkes process [12]. This process demonstrates that regular point processes need not be Markovian.
Here, the intensity depends on the output of a linear filter whose input consists of impulses occurring at the
event times.
\[ \mu(t; N_t, \mathbf{w}) = \lambda_0 + \int_0^t h(t - \sigma)\, dN_\sigma \]
The Stieltjes integral in this expression simply equals the summed impulse responses delayed by all event
times occurring prior to time t:
\[ \mu(t; N_t, \mathbf{w}) = \lambda_0 + \sum_{i=1}^{N_t} h(t - w_i) \]
Thus, the intensity depends on all past event times. Note that not all impulse responses can occur in this
expression. For instance, intensities are always positive quantities, meaning that the summed impulse responses
cannot be more negative than \(-\lambda_0\). Further restrictions on the impulse response result if we demand that the
Hawkes process be stationary. Defining \(\bar{\lambda}\) as the average occurrence rate, the intensity must satisfy
\[ \bar{\lambda} = \lambda_0 + \bar{\lambda} \int_0^\infty h(\sigma)\, d\sigma \]
From this constraint we find that the average occurrence rate equals \(\lambda_0 \big/ \big(1 - \int_0^\infty h(\sigma)\, d\sigma\big)\), which means that the
filter's impulse response must satisfy \(\int_0^\infty h(\sigma)\, d\sigma < 1\). This constraint means that the filter's gain at zero
frequency must be less than unity.
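As a hedged illustration, a Hawkes process can be simulated by Ogata's thinning method (a standard technique, not described in the text). The sketch below assumes an exponential impulse response h(t) = alpha e^{-beta t}, whose zero-frequency gain alpha/beta must be less than unity for stationarity; the simulated average rate can then be compared against \(\lambda_0 / (1 - \alpha/\beta)\).

```python
import math
import random

def simulate_hawkes(mu0, alpha, beta, t_max, seed=0):
    """Simulate a Hawkes process with h(t) = alpha*exp(-beta*t) (zero-frequency
    gain alpha/beta < 1 required) using Ogata's thinning method. 'excitation'
    tracks sum_i h(t - w_i) recursively for the exponential kernel."""
    rng = random.Random(seed)
    events, t, excitation = [], 0.0, 0.0
    while True:
        lam_bar = mu0 + excitation            # bound: excitation only decays
        dt = rng.expovariate(lam_bar)
        excitation *= math.exp(-beta * dt)    # decay the sum to candidate time
        t += dt
        if t >= t_max:
            return events
        if rng.random() <= (mu0 + excitation) / lam_bar:
            events.append(t)                  # accept: intensity jumps by h(0)
            excitation += alpha

events = simulate_hawkes(mu0=1.0, alpha=0.5, beta=1.0, t_max=2000.0)
rate = len(events) / 2000.0   # should approach mu0/(1 - alpha/beta) = 2
print(rate)
```

With mu0 = 1, alpha = 0.5, beta = 1, the predicted stationary rate is 2 events per unit time; the empirical rate lands nearby.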
Definition A linear vector space S is a collection of elements called vectors having the following properties:
(b) a(b\,x) = (ab)\,x.
(c) If 1 and 0 denote the multiplicative and additive identity elements, respectively, of the field of
scalars, then 1 \cdot x = x and 0 \cdot x = 0.
(d) a(x + y) = ax + ay and (a + b)x = ax + bx.
There are many examples of linear vector spaces. A familiar example is the set of column vectors of length
N. In this case, we define the sum of two vectors to be
\[
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix} +
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} =
\begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_N + y_N \end{pmatrix}
\]
and scalar multiplication to be \(a\,\mathrm{col}[x_1\ x_2\ \cdots\ x_N] = \mathrm{col}[a x_1\ a x_2\ \cdots\ a x_N]\). All of the properties listed above are
satisfied.
A more interesting (and useful) example is the collection of square-integrable functions. A square-integrable function x(t) satisfies
\[ \int_{T_i}^{T_f} |x(t)|^2\, dt < \infty \]
One can verify that this collection constitutes a linear vector space. In fact, this space is so important that it
has a special name: \(L^2(T_i, T_f)\) (read this as el-two); the arguments denote the range of integration.
Definition Let S be a linear vector space. A subspace M of S is a subset of S which is closed. In other
words, if \(x, y \in M\), then \(x, y \in S\) (all elements of M are elements of S, but some elements of S need not be
elements of M) and, furthermore, the linear combination \(ax + by \in M\) for all scalars a, b. A subspace is sometimes
referred to as a closed linear manifold.
As an example, an inner product for the space consisting of column matrices can be defined as
\[ \langle x, y \rangle = x^t y = \sum_{i=1}^{N} x_i y_i \]
The reader should verify that this is indeed a valid inner product (i.e., it satisfies all of the properties given
above). It should be noted that this definition of an inner product is not unique: there are other inner product
definitions which also satisfy all of these properties. For example, another valid inner product is
\[ \langle x, y \rangle = x^t K y \]
where K is an N N positive-definite matrix. Choices of the matrix K which are not positive definite do not
yield valid inner products (property 4 is not satisfied). The matrix K is termed the kernel of the inner product.
When this matrix is something other than an identity matrix, the inner product is sometimes written as \(\langle x, y \rangle_K\)
to denote explicitly the presence of the kernel in the inner product.
Definition The norm of a vector x is denoted by \(\|x\|\) and is defined by
\[ \|x\| = \langle x, x \rangle^{1/2} \tag{2.9} \]
Because of the properties of an inner product, the norm of a vector is always greater than zero unless the
vector is identically zero. The norm of a vector is related to the notion of the length of a vector. For example,
if the vector x is multiplied by a constant scalar a, the norm of the vector is multiplied by |a|.
In other words, lengthened vectors (|a| > 1) have larger norms. A norm can also be defined when the inner
product contains a kernel. In this case, the norm is written \(\|x\|_K\) for clarity.
Definition An inner product space is a linear vector space in which an inner product can be defined for all
elements of the space and a norm is given by equation 2.9. Note in particular that the inner product must
satisfy the axioms of a valid inner product for every element of the space.
For the space consisting of column matrices, the norm of a vector is given by (consistent with the first
choice of an inner product)
\[ \|x\| = \left( \sum_{i=1}^{N} x_i^2 \right)^{1/2} \]
This choice of a norm corresponds to the Cartesian definition of the length of a vector.
One of the fundamental properties of inner product spaces is the Schwarz inequality.
\[ |\langle x, y \rangle| \le \|x\|\,\|y\| \tag{2.10} \]
This is one of the most important inequalities we shall encounter. To demonstrate this inequality, consider the
norm squared of \(x - ay\). Let \(a = \langle x, y \rangle / \|y\|^2\). In this case,
\[
\|x - ay\|^2 = \|x\|^2 - 2\,\frac{\langle x, y \rangle^2}{\|y\|^2} + \frac{\langle x, y \rangle^2}{\|y\|^4}\,\|y\|^2
= \|x\|^2 - \frac{\langle x, y \rangle^2}{\|y\|^2}
\]
As the left-hand side of this result is non-negative, the right-hand side is lower-bounded by zero. The Schwarz
inequality of Eq. 2.10 is thus obtained. Note that equality occurs only when \(x = ay\), or equivalently when
\(x = cy\), where c is any constant.
Definition Two vectors are said to be orthogonal if the inner product of the vectors is zero: \(\langle x, y \rangle = 0\).
Consistent with these results is the concept of the angle between two vectors. The cosine of this angle is
defined by
\[ \cos(x, y) = \frac{\langle x, y \rangle}{\|x\|\,\|y\|} \]
Because of the Schwarz inequality, \(|\cos(x, y)| \le 1\). The angle between orthogonal vectors is \(\pi/2\) and the
angle between vectors satisfying Eq. 2.10 with equality \((x \propto y)\) is zero (the vectors are parallel to each other).
Sec. 2.5 Linear Vector Spaces 47
Definition The distance d between two vectors is taken to be the norm of the difference of the vectors:
\[ d(x, y) = \|x - y\| \]
In our example of the normed space of column matrices, the distance between x and y would be
\[ \|x - y\| = \left( \sum_{i=1}^{N} (x_i - y_i)^2 \right)^{1/2} \]
which agrees with the Cartesian notion of distance. Because of the properties of the inner product, this
distance measure (or metric) has the following properties:
\(d(x, y) = d(y, x)\) (Distance does not depend on how it is measured.)
\(d(x, y) = 0 \iff x = y\) (Zero distance means equality.)
\(d(x, z) \le d(x, y) + d(y, z)\) (Triangle inequality.)
We use this distance measure to define what we mean by convergence. When we say the sequence of vectors
\(\{x_n\}\) converges to x \((x_n \to x)\), we mean
\[ \lim_{n \to \infty} \|x_n - x\| = 0 \]
Definition A Hilbert space H is a closed, normed linear vector space which contains all of its limit points:
if \(\{x_n\}\) is any sequence of elements in H that converges to x, then x is also contained in H. x is termed the
limit point of the sequence.
Example
Let the space consist of all rational numbers. Let the inner product be simple multiplication: \(\langle x, y \rangle = xy\).
However, the limit point of the sequence \(x_n = 1 + 1 + 1/2! + \cdots + 1/n!\) (which converges to e) is not a rational number.
Consequently, this space is not a Hilbert space. However, if we define the space to consist of all finite
numbers, we have a Hilbert space.
Definition If M is a subspace of H, the vector x is orthogonal to the subspace M if, for every \(y \in M\),
\(\langle x, y \rangle = 0\).
We now arrive at a fundamental theorem.
Theorem Let H be a Hilbert space and M a subspace of it. Any element \(x \in H\) has the unique decomposition
\(x = y + z\), where \(y \in M\) and z is orthogonal to M. Furthermore, \(\|x - y\| = \min_{v \in M} \|x - v\|\): the distance between
x and all elements of M is minimized by the vector y. This element y is termed the projection of x onto M.
Geometrically, M is a line or a plane passing through the origin. Any vector x can be expressed as the
linear combination of a vector lying in M and a vector orthogonal to it. This theorem is of extreme importance
in linear estimation theory and plays a fundamental role in detection theory.
2.5.4 Separable Vector Spaces
Definition A Hilbert space H is said to be separable if there exists a set of vectors \(\{\phi_i\}\), \(i = 1, \ldots\), elements
of H, that express every element x as
\[ x = \sum_{i=1}^{\infty} x_i \phi_i \tag{2.11} \]
where \(x_i\) are scalar constants associated with \(\phi_i\) and x, and where equality is taken to mean that the distance
between each side becomes zero as more terms are taken in the right:
\[ \lim_{m \to \infty} \Big\| x - \sum_{i=1}^{m} x_i \phi_i \Big\| = 0 \]
The set of vectors \(\{\phi_i\}\) is said to form a complete set if the above relationship is valid. A complete set is
said to form a basis for the space H. Usually the elements of the basis for a space are taken to be linearly
independent. Linear independence implies that the expression of the zero vector by a basis can only be made
with zero coefficients:
\[ \sum_{i=1}^{\infty} x_i \phi_i = 0 \iff x_i = 0, \quad i = 1, \ldots \]
The representation theorem states simply that separable vector spaces exist. The representation of the vector
x is the sequence of coefficients \(\{x_i\}\).
Example
The space consisting of column matrices of length N is easily shown to be separable. Let the
vector \(\phi_i\) be given by a column matrix having a one in the i-th row and zeros in the remaining rows:
\(\phi_i = \mathrm{col}[0, \ldots, 0, 1, 0, \ldots, 0]\). This set of vectors \(\{\phi_i\}\), \(i = 1, \ldots, N\), constitutes a basis for the space.
Obviously, if the vector x is given by \(x = \mathrm{col}[x_1, x_2, \ldots, x_N]\), it may be expressed as
\[ x = \sum_{i=1}^{N} x_i \phi_i \]
In general, the upper limit on the sum in Eq. 2.11 is infinite. For the previous example, the upper limit is
finite. The number of basis vectors that is required to express every element of a separable space in terms of
Eq. 2.11 is said to be the dimension of the space. In this example, the dimension of the space is N. There exist
separable vector spaces for which the dimension is infinite.
Definition The basis for a separable vector space is said to be an orthonormal basis if the elements of the
basis satisfy the following two properties:
The inner product between distinct elements of the basis is zero (i.e., the elements of the basis are
mutually orthogonal):
\[ \langle \phi_i, \phi_j \rangle = 0, \quad i \ne j \]
The norm of each element of the basis is one (normality):
\[ \|\phi_i\| = 1, \quad i = 1, \ldots \]
For example, the basis given above for the space of N-dimensional column matrices is orthonormal. For
clarity, two facts must be explicitly stated. First, not every basis is orthonormal. If the vector space is
separable, a complete set of vectors can be found; however, this set does not have to be orthonormal to be
a basis. Secondly, not every set of orthonormal vectors can constitute a basis. When the vector space L2 is
discussed in detail, this point will be illustrated.
Despite these qualifications, an orthonormal basis exists for every separable vector space. There is an explicit
algorithm, the Gram-Schmidt procedure, for deriving an orthonormal set of functions from a complete
set. Let \(\{\psi_i\}\) denote a basis; the orthonormal basis \(\{\phi_i\}\) is sought. The Gram-Schmidt procedure is:
1. \(\phi_1 = \psi_1 / \|\psi_1\|\).
This step makes \(\phi_1\) have unit length.
2. \(\tilde{\phi}_2 = \psi_2 - \langle \psi_2, \phi_1 \rangle \phi_1\).
Consequently, the inner product between \(\tilde{\phi}_2\) and \(\phi_1\) is zero. We obtain \(\phi_2\) from \(\tilde{\phi}_2\) by forcing the vector
to have unit length.
2'. \(\phi_2 = \tilde{\phi}_2 / \|\tilde{\phi}_2\|\).
The algorithm now generalizes.
k. \(\tilde{\phi}_k = \psi_k - \sum_{i=1}^{k-1} \langle \psi_k, \phi_i \rangle \phi_i\)
k'. \(\phi_k = \tilde{\phi}_k / \|\tilde{\phi}_k\|\)
By construction, this new set of vectors is an orthonormal set. As the original set of vectors \(\{\psi_i\}\) is a complete
set, and as each \(\phi_k\) is just a linear combination of \(\psi_i\), \(i = 1, \ldots, k\), the derived set \(\{\phi_i\}\) is also complete.
Because of the existence of this algorithm, a basis for a vector space is usually assumed to be orthonormal.
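As a minimal sketch, the procedure above can be carried out for column vectors with the Euclidean inner product; the three input vectors below are arbitrary choices.

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent set, following the steps above:
    subtract the projections onto the phi_i found so far, then normalize."""
    basis = []
    for psi in vectors:
        phi_tilde = list(psi)
        for phi in basis:
            c = dot(psi, phi)                               # <psi_k, phi_i>
            phi_tilde = [a - c * b for a, b in zip(phi_tilde, phi)]
        norm = math.sqrt(dot(phi_tilde, phi_tilde))
        basis.append([a / norm for a in phi_tilde])
    return basis

phi = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```

Each returned vector has unit norm, and every pair of distinct vectors has zero inner product.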
A vector's representation with respect to an orthonormal basis \(\{\phi_i\}\) is easily computed. The vector x may
be expressed by
\[ x = \sum_{i=1}^{\infty} x_i \phi_i \tag{2.12} \]
\[ x_i = \langle x, \phi_i \rangle \tag{2.13} \]
This formula is easily confirmed by substituting Eq. 2.12 into Eq. 2.13 and using the properties of an inner
product. Note that the exact element values of a given vector's representation depend upon both the vector
and the choice of basis. Consequently, a meaningful specification of the representation of a vector must
include the definition of the basis.
The mathematical representation of a vector (expressed by equations 2.12 and 2.13) can be expressed
geometrically. This expression is a generalization of the Cartesian representation of numbers. Perpendicular
axes are drawn; these axes correspond to the orthonormal basis vectors used in the representation. A given
vector is represented as a point in the plane with the value of the component along the \(\phi_i\) axis being \(x_i\).
An important relationship follows from this mathematical representation of vectors. Let x and y be any
two vectors in a separable space. These vectors are represented with respect to an orthonormal basis by \(\{x_i\}\)
and \(\{y_i\}\), respectively. The inner product \(\langle x, y \rangle\) is related to these representations by
\[ \langle x, y \rangle = \sum_{i=1}^{\infty} x_i y_i \]
This result is termed Parseval's Theorem. Consequently, the inner product between any two vectors can be
computed from their representations. A special case of this result corresponds to the Cartesian notion of the
length of a vector; when \(x = y\), Parseval's relationship becomes
\[ \|x\| = \left( \sum_{i=1}^{\infty} x_i^2 \right)^{1/2} \]
These two relationships are key results of the representation theorem. The implication is that any inner product
computed from vectors can also be computed from their representations. There are circumstances in which the
latter computation is more manageable than the former and, furthermore, of greater theoretical significance.
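Parseval's theorem is easy to check numerically. The sketch below is an illustration with an arbitrarily chosen orthonormal basis in the plane (the standard basis rotated by 30 degrees): it computes the representations of two vectors and compares the two ways of evaluating the inner product.

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# An orthonormal basis for R^2 other than the standard one:
# the standard basis rotated by 30 degrees (an arbitrary choice).
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
basis = [[c, s], [-s, c]]

x, y = [3.0, -1.0], [2.0, 5.0]

# Representations: x_i = <x, phi_i>  (Eq. 2.13)
xr = [dot(x, phi) for phi in basis]
yr = [dot(y, phi) for phi in basis]

# Parseval: the inner product computed from the representations agrees
# with the one computed directly from the vectors (both are 1 here,
# up to rounding).
print(dot(x, y), dot(xr, yr))
```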
2.5.5 The Vector Space L^2
Special attention needs to be paid to the vector space \(L^2(T_i, T_f)\): the collection of functions x(t) which are
square-integrable over the interval \((T_i, T_f)\):
\[ \int_{T_i}^{T_f} |x(t)|^2\, dt < \infty \]
Consistent with this definition, the inner product of two vectors x(t) and y(t) in this space is
\[ \langle x, y \rangle = \int_{T_i}^{T_f} x(t) y(t)\, dt \tag{2.14} \]
and the length of the vector x(t) is given by
\[ \|x\| = \left( \int_{T_i}^{T_f} |x(t)|^2\, dt \right)^{1/2} \]
Physically, \(\|x\|^2\) can be related to the energy contained in the signal over \((T_i, T_f)\). This space is a Hilbert space.
If \(T_i\) and \(T_f\) are both finite, an orthonormal basis is easily found which spans it. For simplicity of notation, let
\(T_i = 0\) and \(T_f = T\). The set of functions defined by
\[
\phi_1(t) = \left( \frac{1}{T} \right)^{1/2}, \quad
\phi_{2i+1}(t) = \left( \frac{2}{T} \right)^{1/2} \cos\frac{2\pi i t}{T}, \quad
\phi_{2i}(t) = \left( \frac{2}{T} \right)^{1/2} \sin\frac{2\pi i t}{T}, \quad i = 1, 2, \ldots \tag{2.15}
\]
is complete over the interval (0, T) and therefore constitutes a basis for \(L^2(0, T)\). By demonstrating a basis,
we conclude that \(L^2(0, T)\) is a separable vector space. The representation of functions with respect to this
basis corresponds to the well-known Fourier series expansion of a function. As most functions require an
infinite number of terms in their Fourier series representation, this space is infinite dimensional.
There also exist orthonormal sets of functions that do not constitute a basis. For example, the set \(\{\phi_i(t)\}\)
defined by
\[
\phi_i(t) = \begin{cases} \dfrac{1}{\sqrt{T}} & iT \le t < (i+1)T \\[4pt] 0 & \text{otherwise} \end{cases} \qquad i = 0, 1, \ldots
\]
over \(L^2(0, \infty)\). The members of this set are normal (unit norm) and are mutually orthogonal (no member
overlaps with any other). Consequently, this set is an orthonormal set. However, it does not constitute a basis
for \(L^2(0, \infty)\). Functions piecewise constant over intervals of length T are the only members of \(L^2(0, \infty)\) which
can be represented by this set. Other functions, such as \(e^{-t} u(t)\), cannot be represented by the \(\phi_i(t)\) defined
above. Consequently, orthonormality of a set of functions does not guarantee completeness.
While \(L^2(0, T)\) is a separable space, examples can be given in which the representation of a vector in this
space is not precisely equal to the vector. More precisely, let \(x(t) \in L^2(0, T)\) and the set \(\{\phi_i(t)\}\) be defined by
Eq. (2.15). The fact that \(\{\phi_i(t)\}\) constitutes a basis for the space implies
\[ \Big\| x(t) - \sum_{i=1}^{\infty} x_i \phi_i(t) \Big\| = 0 \]
where
\[ x_i = \int_0^T x(t) \phi_i(t)\, dt \]
In particular, let x(t) be
\[ x(t) = \begin{cases} 1 & 0 \le t < T/2 \\ 0 & T/2 \le t \le T \end{cases} \]
Obviously, this function is an element of \(L^2(0, T)\). However, the representation of this function is not equal
to 1 at \(t = T/2\). In fact, the peak error never decreases as more terms are taken in the representation. In the
special case of the Fourier series, the existence of this error is termed the Gibbs phenomenon. However, this
error has zero norm in \(L^2(0, T)\); consequently, the Fourier series expansion of this function is equal to the
function in the sense that the function and its expansion have zero distance between them. However, one of
the axioms of a valid inner product is that if \(\|e\| = 0\), then \(e = 0\). The condition is satisfied, but the conclusion
does not seem to be valid. Apparently, valid elements of \(L^2(0, T)\) can be defined which are nonzero but have
zero norm. An example is
\[ e(t) = \begin{cases} 1 & t = T/2 \\ 0 & \text{otherwise} \end{cases} \]
So as not to destroy the theory, the most common method of resolving the conflict is to weaken the definition
of equality. The essence of the problem is that while two vectors x and y can differ from each other and be
zero distance apart, the difference between them is trivial. This difference has zero norm, which, in \(L^2\),
implies that the magnitude of \(x - y\) integrates to zero. Consequently, the vectors are essentially equal. This
notion of equality is usually written as \(x = y\) a.e. (x equals y almost everywhere). With this convention, we
have
\[ \|e\| = 0 \implies e = 0 \text{ a.e.} \]
Consequently, the error between a vector and its representation is zero almost everywhere.
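The two statements, vanishing \(L^2\) distance but non-vanishing peak error, can be observed numerically. The sketch below assumes T = 1 and uses the Fourier coefficients of the step worked out analytically: only the constant term and the odd sine terms of the basis in Eq. (2.15) survive, each contributing \((2/\pi i)\sin 2\pi i t\).

```python
import math

def partial_sum(t, m):
    """Partial Fourier-series sum for the step x(t) = 1 on [0, 1/2),
    0 on [1/2, 1], using the basis of Eq. (2.15) with T = 1."""
    s = 0.5                                   # <x, phi_1> * phi_1
    for i in range(1, m + 1, 2):              # odd sine terms only
        s += (2.0 / (math.pi * i)) * math.sin(2 * math.pi * i * t)
    return s

def errors(m, n=4000):
    """Approximate L2 error and peak error on a midpoint grid."""
    l2_sq, peak = 0.0, 0.0
    for k in range(n):
        t = (k + 0.5) / n
        e = abs(partial_sum(t, m) - (1.0 if t < 0.5 else 0.0))
        l2_sq += e * e / n
        peak = max(peak, e)
    return math.sqrt(l2_sq), peak

for m in (8, 32, 128):
    print(m, errors(m))
```

As m grows, the first number (the norm of the error) shrinks toward zero while the second (the peak error near t = 1/2) does not.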
Weakening the notion of equality in this fashion might seem to compromise the utility of the theory. However,
if one suspects that two vectors in an inner product space are equal (e.g., a vector and its representation),
it is quite difficult to prove that they are strictly equal (and as has been seen, this conclusion may not be valid).
Usually, proving they are equal almost everywhere is much easier. While this weaker notion of equality does
not imply strict equality, one can be assured that any difference between them is insignificant. The measure
of significance for a vector space is expressed by the definition of the norm for the space.
2.5.6 A Hilbert Space for Stochastic Processes
The result of primary concern here is the construction of a Hilbert space for stochastic processes. The space
consisting of random variables X having a finite mean-square value \(E[X^2] < \infty\) is (almost) a Hilbert space with inner
product \(E[XY]\). Consequently, the distance between two random variables X and Y is
\[ d(X, Y) = \left( E\big[ (X - Y)^2 \big] \right)^{1/2} \]
Now \(d(X, Y) = 0 \implies E[(X - Y)^2] = 0\). However, this does not imply that \(X = Y\). Those sets with probability
zero appear again. Consequently, we do not have a Hilbert space unless we agree that \(X = Y\) means
\(\Pr[X = Y] = 1\).
Let X(t) be a process with \(E[X^2(t)] < \infty\). For each t, X(t) is an element of the Hilbert space just defined.
Parametrically, X(t) is therefore regarded as a curve in a Hilbert space. This curve is continuous if
\[ \lim_{t \to u} E\big[ (X(t) - X(u))^2 \big] = 0 \]
Processes satisfying this condition are said to be continuous in the quadratic mean. The vector space of
greatest importance is analogous to \(L^2(T_i, T_f)\) previously defined. Consider the collection of real-valued
stochastic processes X(t) for which
\[ \int_{T_i}^{T_f} E[X^2(t)]\, dt < \infty \]
Stochastic processes in this collection are easily verified to constitute a linear vector space. Define an inner
product for this space as
\[ E[\langle X(t), Y(t) \rangle] = \int_{T_i}^{T_f} E[X(t) Y(t)]\, dt \]
While this equation is a valid inner product, the left-hand side will be used to denote the inner product
instead of the notation previously defined. We take \(\langle X(t), Y(t) \rangle\) to be the time-domain inner product as in
Eq. (2.14). In this way, the deterministic portion of the inner product and the expected value portion are
explicitly indicated. This convention allows certain theoretical manipulations to be performed more easily.
One of the more interesting results of the theory of stochastic processes is that the normed vector space
for processes previously defined is separable. Consequently, there exists a complete (and, by assumption,
orthonormal) set \(\{\phi_i(t)\}\), \(i = 1, \ldots\), of deterministic (nonrandom) functions which constitutes a basis. A process
in the space of stochastic processes can be represented as
\[ X(t) = \sum_{i=1}^{\infty} X_i \phi_i(t), \quad T_i \le t \le T_f \]
Strict equality between a process and its representation cannot be assured. Not only does the analogous
issue in \(L^2(0, T)\) occur with respect to representing individual sample functions, but also sample functions
assigned a zero probability of occurrence can be troublesome. In fact, the ensemble of any stochastic process
can be augmented by a set of sample functions that are not well-behaved (e.g., a sequence of impulses) but
have probability zero. In a practical sense, this augmentation is trivial: such members of the process cannot
occur. Therefore, one says that two processes X(t) and Y(t) are equal almost everywhere if the distance
between X(t) and Y(t) is zero. The implication is that any lack of strict equality between the processes (strict
equality means the processes match on a sample-function-by-sample-function basis) is trivial.
2.5.7 Karhunen-Loève Expansion
The representation of the process X(t) is the sequence of random variables \(\{X_i\}\). The choice of the basis \(\{\phi_i(t)\}\)
is unrestricted. Of particular interest is to restrict the basis functions to those which make the \(X_i\) uncorrelated
random variables. When this requirement is satisfied, the resulting representation of X(t) is termed the
Karhunen-Loève expansion. Mathematically, we require \(E[X_i X_j] = E[X_i] E[X_j]\), \(i \ne j\). This requirement can
be expressed in terms of the correlation function of X(t):
\[
E[X_i X_j] = E\left[ \int_0^T X(\alpha) \phi_i(\alpha)\, d\alpha \int_0^T X(\beta) \phi_j(\beta)\, d\beta \right]
= \int_0^T \!\! \int_0^T \phi_i(\alpha) \phi_j(\beta)\, R_X(\alpha, \beta)\, d\alpha\, d\beta
\]
As \(E[X_i]\) is given by
\[ E[X_i] = \int_0^T m_X(\alpha) \phi_i(\alpha)\, d\alpha \]
our requirement becomes
\[ \int_0^T \phi_i(\alpha) g_j(\alpha)\, d\alpha = 0, \quad i \ne j \]
where
\[ g_j(\alpha) = \int_0^T K_X(\alpha, \beta) \phi_j(\beta)\, d\beta \]
Furthermore, this requirement must hold for each j which differs from the choice of i. A choice of a function
\(g_j\) satisfying this requirement is a function which is proportional to \(\phi_j\): \(g_j(\alpha) = \lambda_j \phi_j(\alpha)\). Therefore,
\[ \int_0^T K_X(\alpha, \beta) \phi_j(\beta)\, d\beta = \lambda_j \phi_j(\alpha) \]
The \(\{\phi_i\}\) which allow the representation of X(t) to be a sequence of uncorrelated random variables must
satisfy this integral equation. This type of equation occurs often in applied mathematics; it is termed the
eigenequation. The sequences \(\{\phi_i\}\) and \(\{\lambda_i\}\) are the eigenfunctions and eigenvalues of \(K_X(\alpha, \beta)\), the covariance
function of X(t). It is easily verified that
\[ K_X(t, u) = \sum_{i=1}^{\infty} \lambda_i \phi_i(t) \phi_i(u) \]
Example
\(K_X(t, u) = \sigma^2 \min(t, u)\). The eigenequation can be written in this case as
\[ \sigma^2 \left[ \int_0^t u\, \phi(u)\, du + t \int_t^T \phi(u)\, du \right] = \lambda \phi(t) \]
Evaluating the first derivative of this expression,
\[ \sigma^2 t \phi(t) + \sigma^2 \int_t^T \phi(u)\, du - \sigma^2 t \phi(t) = \lambda \frac{d\phi(t)}{dt} \]
or
\[ \sigma^2 \int_t^T \phi(u)\, du = \lambda \frac{d\phi(t)}{dt} \]
Evaluating the derivative of the last expression yields the simple equation
\[ -\sigma^2 \phi(t) = \lambda \frac{d^2 \phi(t)}{dt^2} \]
This equation has a general solution of the form \(\phi(t) = A \sin\frac{\sigma t}{\sqrt{\lambda}} + B \cos\frac{\sigma t}{\sqrt{\lambda}}\). It is easily seen that
B must be zero. The amplitude A is found by requiring \(\|\phi\| = 1\). To find \(\lambda\), one must return to the
original integral equation. Substituting, we have
\[ \sigma^2 \int_0^t u\, A \sin\frac{\sigma u}{\sqrt{\lambda}}\, du + \sigma^2 t \int_t^T A \sin\frac{\sigma u}{\sqrt{\lambda}}\, du = \lambda A \sin\frac{\sigma t}{\sqrt{\lambda}} \]
After some manipulation, we find that
\[ \lambda A \sin\frac{\sigma t}{\sqrt{\lambda}} - A \sigma \sqrt{\lambda}\, t \cos\frac{\sigma T}{\sqrt{\lambda}} = \lambda A \sin\frac{\sigma t}{\sqrt{\lambda}}, \quad t \in (0, T) \]
or
\[ A \sigma \sqrt{\lambda}\, t \cos\frac{\sigma T}{\sqrt{\lambda}} = 0, \quad t \in (0, T) \]
Therefore,
\[ \frac{\sigma T}{\sqrt{\lambda_n}} = \left( n - \frac{1}{2} \right) \pi, \quad n = 1, 2, \ldots \]
and we have
\[ \lambda_n = \frac{\sigma^2 T^2}{\left( n - \frac{1}{2} \right)^2 \pi^2}, \qquad
\phi_n(t) = \left( \frac{2}{T} \right)^{1/2} \sin\frac{\left( n - \frac{1}{2} \right) \pi t}{T} \]
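The analytic eigenvalues can be checked by discretizing the eigenequation: replacing the integral by a Riemann sum turns the covariance kernel into a symmetric matrix whose eigenvalues approximate the \(\lambda_n\). The sketch below assumes numpy is available and takes \(\sigma = T = 1\), both arbitrary choices.

```python
import numpy as np

sigma, T, n_grid = 1.0, 1.0, 400
dt = T / n_grid
t = (np.arange(n_grid) + 0.5) * dt   # midpoint grid on (0, T)

# Discretize the eigenequation: the integral over u becomes (K @ phi) * dt,
# so the eigenvalues of (K * dt) approximate the lambda_n.
K = sigma**2 * np.minimum.outer(t, t)
eigvals = np.linalg.eigvalsh(K * dt)[::-1]   # sort largest first

# Analytic eigenvalues: lambda_n = sigma^2 T^2 / ((n - 1/2)^2 pi^2)
n = np.arange(1, 6)
analytic = sigma**2 * T**2 / ((n - 0.5)**2 * np.pi**2)

print(eigvals[:5])
print(analytic)
```

The leading numerical eigenvalues agree with the analytic formula to within the discretization error.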
The eigenfunctions of a positive-definite covariance function constitute a complete set. One can easily
show that these eigenfunctions are also mutually orthogonal with respect to both the usual inner product
and the inner product derived from the covariance function.
If X(t) is Gaussian, the \(X_i\) are Gaussian random variables. As the random variables \(X_i\) are uncorrelated and
Gaussian, the \(X_i\) comprise a sequence of statistically independent random variables.
Assume \(K_X(t, u) = \frac{N_0}{2} \delta(t - u)\): the stochastic process X(t) is white. Then
\[ \int_0^T \frac{N_0}{2}\, \delta(t - u)\, \phi(u)\, du = \frac{N_0}{2}\, \phi(t) \]
for all t. Consequently, if \(\lambda_i = N_0/2\) for all i, this constraint equation is satisfied no matter what choice is
made for the orthonormal set \(\{\phi_i(t)\}\). Therefore, the representation of white, Gaussian processes consists
of a sequence of statistically independent, identically-distributed (mean zero and variance \(N_0/2\))
Gaussian random variables. This example constitutes the simplest case of the Karhunen-Loève expansion.
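A discrete-time analogue of this example is easy to verify numerically: projecting i.i.d. (discrete "white") Gaussian samples onto any orthonormal set yields uncorrelated coefficients of equal variance. The orthonormal set from a QR factorization and the variance \(N_0/2 = 2\) below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 16, 20000

# An arbitrary orthonormal set: Q from the QR factorization of a random matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Discrete "white" process: i.i.d. zero-mean Gaussian samples with variance 2
# (standing in for N0/2 = 2)
X = rng.standard_normal((trials, n)) * np.sqrt(2.0)
coeffs = X @ Q                       # X_i = <X, phi_i> for every trial

C = np.cov(coeffs, rowvar=False)     # should be close to 2 * identity
print(np.round(C[:3, :3], 2))
```

The sample covariance of the coefficients is close to \((N_0/2) I\) regardless of which orthonormal set is used.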
Problems
2.1 Simple Processes
Determine the mean, correlation function and first-order amplitude distribution of each of the processes
defined below.
(a) \(X_t\) is defined by the following equally likely sample functions:
\[ X_t(\omega_1) = 1 \qquad X_t(\omega_2) = -2 \qquad X_t(\omega_3) = \sin \pi t \qquad X_t(\omega_4) = \cos \pi t \]
(b) \(X_t\) is defined by \(X_t = \cos(A t + \Theta)\), where A and \(\Theta\) are statistically independent random variables.
\(\Theta\) is uniformly distributed over \([0, 2\pi)\) and A has the density function
\[ p_A(A) = \frac{1}{\pi (1 + A^2)} \]
This joint density is found to be a valid joint density for \(X_t\) and \(X_u\) when \(t - u = t_2 - t_1\).
Chap. 2 Problems 55
\[ Y_l = \frac{1}{l} \sum_{k=1}^{l} X_k, \quad l = 1, 2, \ldots \]
(a) Find the mean and correlation function of the stochastic sequence Yl .
(b) Is Yl stationary? Indicate your reasoning.
(c) How large must l be to ensure that the probability of the relative error \(|Y_l - E[Y_l]| / E[Y_l]\) being less
than 0.1 is 0.95?
2.4 The Morning After the Night Before
Sammy has had a raucous evening and is trying to return home after being dropped at the RMC. As
he has had a little too much, he walks in a random fashion. His friends observe that at each second
after he leaves the RMC, he has moved a distance since the previous observation. The distance is a
Gaussian random variable with zero mean and standard deviation one meter. In addition, this distance
can be reasonably assumed to be statistically independent of his movements at all other times (he is
really out of it!). We wish to predict, to some degree, Sammy's position as time passes. For simplicity, we
assume his movements are in one dimension.
(a) Define a stochastic process \(X_t\) that describes Sammy's position relative to the RMC at each observation time.
(b) What is the mean and variance of this process?
(c) What is the probability that Sammy is more than ten meters from the RMC after two minutes of
observations? First put your answer in terms of \(Q(\cdot)\), then find a numeric answer.
(d) After ten minutes of wandering, Sammy bumps into a tree 150 meters from the RMC. What is the
probability that Sammy comes within one meter of the same tree ten seconds after his collision?
Again, express your answer first in terms of \(Q(\cdot)\) and then find a numeric answer.
2.5 Not-So-Random Sample Functions
Consider the process defined by \(X_t = A \cos 2\pi f_0 t\), where A is a random variable and \(f_0\) is a constant.
(a) Find the first-order density \(p_{X_t}(x)\).
(b) Find the mean and correlation function of \(X_t\).
(c) Can \(\mathcal{S}_X(f)\) be calculated? If so, calculate it; if not, why not?
(d) Now let \(X_t = A \cos 2\pi f_0 t + B \sin 2\pi f_0 t\), where A and B are random variables and \(f_0\) a constant.
(e) Find necessary and sufficient conditions for \(X_t\) to be wide-sense stationary.
(f) Show that necessary and sufficient conditions for the stochastic process \(X_t\) defined by \(X_t = \cos(2\pi f_0 t + \Theta)\),
with \(f_0\) a constant, to be wide-sense stationary are that the characteristic function \(\Phi_\Theta(j\nu)\) satisfy
\[ \Phi_\Theta(j1) = 0 = \Phi_\Theta(j2) \]
\[ X_t = \cos(2\pi F t + \Theta) \]
where F and \(\Theta\) are statistically independent random variables. The quantity \(\Theta\) is uniformly distributed
over \([-\pi, \pi)\) and F can assume one of the values 1, 2, or 3 with equal probability.
(a) Compute the mean and correlation function of \(X_t\).
This process serves as the input to the following system.
[Block diagram: \(X_t\) is multiplied by a version of itself delayed by 1 to form \(Z_t\), which is then integrated over the most recent unit interval to produce \(Y_t\).]
Note: The signals are multiplied, not summed, at the node located just before the integrator. \(Y_t\) and \(Z_t\)
are related by
\[ Y_t = \int_{t-1}^{t} Z_\alpha\, d\alpha \]
\[ X_t = A \cos(2\pi F t + \Theta) \]
where A, F, and \(\Theta\) are statistically independent random variables. The random variable \(\Theta\) is uniformly
distributed over the interval \([-\pi, \pi)\). The densities of the other random variables are to be determined.
(a) Show that Xt is a wide-sense stationary process.
(b) Is Xt strict-sense stationary? Why or why not?
(c) The inventor of this process claims that Xt can have any correlation function one desires by ma-
nipulating the densities of A and F. Demonstrate the validity of this result and the requirements
these densities must satisfy.
(d) The inventor also claims that Xt can have any first-order density one desires so long as the desired
density is bounded. Show that this claim is also valid. Furthermore, show that the requirements
placed on the densities of A and F are consistent with those found in the previous part.
(e) Could this process be fruitfully used in simulations to emulate a process having a specific cor-
relation function and first-order density? In other words, would statistics computed from the
simulation results be meaningful? Why or why not?
2.8 Quadrature Representation of Stochastic Signals
Let \(X_t\) be a zero-mean process having the quadrature representation
\[ X_t = X_c(t) \cos 2\pi f_o t - X_s(t) \sin 2\pi f_o t \]
(b) In terms of the correlation functions \(R_{X_c}(\tau)\) and \(R_{X_s}(\tau)\) and the cross-correlation function
\(R_{X_c X_s}(\tau)\), determine sufficient conditions for \(X_t\) to be wide-sense stationary.
(c) Under these conditions, what are the joint statistics of the random variables \(X_c(t)\) and \(X_s(u)\)? In
particular, show that \(R_{X_c X_s}(\tau)\) is an odd function of \(\tau\), so that \(R_{X_c X_s}(\tau) = 0\) for \(\tau = 0\).
(d) Show that if the power spectrum of \(X_t\) is bandpass and symmetric about \(f = f_o\), then \(R_{X_c X_s}(\tau) = 0\)
for all \(\tau\). What does this result say about the joint statistics of the processes \(X_c(t)\) and \(X_s(t)\)?
2.9 Random Telegraph Wave
One form of the random telegraph wave \(X_t\) is derived from a stationary Poisson process \(N_{0,t}\) having
constant event rate \(\lambda\):
\[ X_t = \begin{cases} +1 & N_{0,t} \text{ even} \\ -1 & N_{0,t} \text{ odd} \end{cases} \]
For \(t < 0\), the process is undefined. \(N_{0,t}\) denotes the number of events that have occurred in the interval
\([0, t)\) and has a probability distribution given by
\[ \Pr[N_{0,t} = n] = \frac{(\lambda t)^n e^{-\lambda t}}{n!}, \quad t \ge 0 \]
Note that \(\Pr[N_{0,0} = 0] = 1\).
(a) What is the probability distribution of Xt ?
(b) Find the mean and correlation function of Xt .
(c) Is this process wide-sense stationary? stationary in the stricter sense? Provide your reasoning.
2.10 Independent Increment Processes
A stochastic process \(X_t\) is said to have stationary, independent increments if, for \(t_1 < t_2 < t_3 < t_4\):
The random variable \(X_{t_2} - X_{t_1}\) is statistically independent of the random variable \(X_{t_4} - X_{t_3}\).
The pdf of \(X_{t_2} - X_{t_1}\) is equal to the pdf of \(X_{t_2+T} - X_{t_1+T}\) for all \(t_1\), \(t_2\), T.
(b) Using the result of part (a), find expressions for \(E[X_t]\) and \(\operatorname{var}[X_t]\).
(c) Define \(\Phi_{X_t}(j\nu)\) to be the characteristic function of the first-order density of the process \(X_t\). Show
that this characteristic function must be of the form
\[ \Phi_{X_t}(j\nu) = e^{t f(\nu)} \]
for some function \(f(\nu)\).
\[ E[X_t \mid X_\alpha, \alpha \le u] = X_u, \quad u \le t \]
In words, the expected value of a martingale at time t, given all values of the process that have been
observed up to time u \((u \le t)\), is equal to the most recently observed value of the process \((X_u)\).
\[ p_{X_{t_n} \mid X_{t_1}, \ldots, X_{t_{n-1}}}(X_n \mid X_1, \ldots, X_{n-1}) = p_{X_{t_n} \mid X_{t_{n-1}}}(X_n \mid X_{n-1}) \]
(a) Show that Xt is mean-square continuous if and only if the correlation function RX t u is contin-
uous at t u.
(b) Show that if RX t u is continuous at t u, it is continuous for all t and u.
(c) Show that a zero-mean, independent-increment process with stationary increments is mean-square
continuous.
(d) Show that a stationary Poisson process is mean-square continuous. Note that this process has no
continuous sample functions, but is continuous in the mean-square sense.
2.14 Properties of Correlation Functions
(a) Show that correlation and covariance functions have the following properties:
1. \(R_X(t, u) = R_X(u, t)\)
2. \(R_X(\tau) = R_X(-\tau)\)
3. \(K_X^2(t, u) \le K_X(t, t)\, K_X(u, u)\)
4. \(|K_X(t, u)| \le \frac{1}{2} \left[ K_X(t, t) + K_X(u, u) \right]\)
5. \(|R_X(\tau)| \le R_X(0)\)
(b) Let \(X_t\) be a wide-sense stationary random process. If s(t) is a deterministic function and we define
\(Y_t = X_t + s(t)\), what is the expected value and correlation function of \(Y_t\)?
2.15 Correlation Functions and Power Spectra
(a) Which of the following are valid correlation functions? Indicate your reasoning.
1. \(R_X(t, u) = e^{-(t-u)^2}\)
2. \(R_X(t, u) = \sigma^2 \max(t, u)\)
3. \(R_X(t, u) = e^{-|t-u|}\)
4. \(R_X(t, u) = \cos t \cos u\)
5. \(R_X(\tau) = e^{-|\tau|} - e^{-2|\tau|}\)
6. \(R_X(\tau) = 5 \sin 1000\tau\)
7. \(R_X(\tau) = \begin{cases} 1 - \frac{|\tau|}{T} & |\tau| \le T \\ 0 & \text{otherwise} \end{cases}\)
8. \(R_X(\tau) = \begin{cases} 1 & |\tau| \le T \\ 0 & \text{otherwise} \end{cases}\)
9. \(R_X(\tau) = 25\)
10. \(R_X(\tau) = \frac{1}{1 + \tau^2}\)
(b) Which of the following are valid power density spectra? Indicate your reasoning.
1. \(\mathcal{S}_X(f) = \frac{\sin \pi f}{\pi f}\)
2. \(\mathcal{S}_X(f) = \left( \frac{\sin \pi f}{\pi f} \right)^2\)
3. \(\mathcal{S}_X(f) = \exp\left\{ -\frac{(f - f_0)^2}{4} \right\}\)
4. \(\mathcal{S}_X(f) = e^{-|f|} - e^{-2|f|}\)
5. \(\mathcal{S}_X(f) = 1 + 0.25\, e^{-j 2\pi f}\)
6. \(\mathcal{S}_X(f) = \begin{cases} 1 & |f| \le 1/T \\ 0 & \text{otherwise} \end{cases}\)
(a) Show that if \(X_t\) is strict-sense stationary, then the third-order correlation function \(E[X_{t_1} X_{t_2} X_{t_3}]\)
depends only on the time differences \(t_2 - t_1\) and \(t_3 - t_1\).
(b) Find the third-order correlation function of \(X_t = A \cos(2\pi f_0 t + \Theta)\), where \(\Theta \sim U[-\pi, \pi)\) and A,
\(f_0\) are constants.
(c) Let Zt Xt Yt , where Xt is Gaussian, Yt is non-Gaussian, and Xt , Yt are statistically independent,
zero-mean processes. Find the third-order correlation function of Zt .
2.17 Joint Statistics of a Process and its Derivative
Let \(X_t\) be a wide-sense stationary stochastic process. Let \(\dot{X}_t\) denote the derivative of \(X_t\).
(a) Compute the expected value and correlation function of \(\dot{X}_t\) in terms of the expected value and
correlation function of \(X_t\).
(b) Under what conditions are \(X_t\) and \(\dot{X}_t\) orthogonal? In other words, when does \(\langle X, \dot{X} \rangle = 0\), where
\(\langle X, \dot{X} \rangle = E[X_t \dot{X}_t]\)?
(c) Compute the mean and correlation function of \(Y_t = X_t - \dot{X}_t\).
(d) The bandwidth of the process \(X_t\) can be defined by
\[ B_X^2 = \frac{\displaystyle\int_{-\infty}^{\infty} f^2\, \mathcal{S}_X(f)\, df}{\displaystyle\int_{-\infty}^{\infty} \mathcal{S}_X(f)\, df} \]
Express this definition in terms of the mean and correlation functions of \(X_t\) and \(\dot{X}_t\).
(e) The statistic U is used to count the average number of excursions of the stochastic process \(X_t\)
across the level \(X_t = A\) in the interval \([0, T]\). One form of this statistic is
\[ U = \frac{1}{T} \int_0^T \left| \frac{d}{dt}\, u(X_t - A) \right| dt \]
where \(u(\cdot)\) denotes the unit step function. Find the expected value of U, using in your final
expression the formula for \(B_X\). Assume that the conditions found in part (b) are met and that \(X_t\) is a
Gaussian process.
(a) If \(X_l \sim \mathcal{N}(0, \sigma^2)\), find the probability density function of each element of the output sequence \(Y_l\).
(b) Show that \(\Phi_{Y_l}(j\nu) = \Phi_{Y_{l_0}}(j\nu)\) for all choices of l and \(l_0\), no matter what the amplitude distribution of
\(X_l\) may be.
(c) If Xl is non-Gaussian, the computation of the probability density of Yl can be difficult. On the
other hand, if the density of Yl is known, the density of Xl can be found. How is the characteristic
function of Xl related to the characteristic function of Yl ?
(d) Show that if \(Y_l\) is uniformly distributed over \([-1, 1]\), the only allowed values of the parameter a
are those equalling 1/m, m = 2, 3, 4, \ldots
In other words, the output equals the positive-valued amplitudes of the input and is zero otherwise.
[Figure: a half-wave rectifier (HWR) with input \(X_t\) and output \(Y_t\), along with sketches of a sample function of \(X_t\) and the corresponding output \(Y_t\).]
(a) What is the mean and variance of Yt ? Express your answer in terms of the correlation function of
Xt .
(b) Is the output Yt a stationary process? Indicate why or why not.
(c) What is the cross-correlation function between input and output? Express your answer in terms
of the correlation function of Xt .
[Figure: two independent white-noise sources \(W_{1,t}\) and \(W_{2,t}\) drive identical bandpass filters having magnitude response \(|H(f)|\) nonzero for \(f_o \le |f| \le f_o + W\), producing \(X_t\) and \(Y_t\); these combine to form \(Z_t\).]
The frequency \(f_o\) is much larger than the bandwidth W.
(a) Find the correlation function of Xt .
Zt consists of the sum of Xt , Yt and an intermodulation distortion component aXt Yt , where a is an
unknown constant.
(b) What is the cross-correlation function of Yt and Zt ?
(c) Find the correlation function of Zt .
(d) You want to remove the intermodulation distortion component from Zt . Can this removal be
accomplished by operating only on Zt ? If so, how; if not, why not.
2.22 Predicting the Stock Market
The price of a certain stock can fluctuate during the day while the true value is rising or falling. To
facilitate financial decisions, a Wall Street broker decides to use stochastic process theory. The price Pt
of a stock is described by
$$P_t = Kt + N_t, \qquad 0 \le t \le 1$$
where K is the constant our knowledgeable broker is seeking and Nt is a stochastic process describing
the random fluctuations. $N_t$ is a white, Gaussian process having spectral height $N_0/2$. The broker
decides to estimate K according to:
$$\hat K = \int_0^1 P_t\, g_t\, dt$$
(a) Find the probability density function of the estimate $\hat K$ for any $g_t$ the broker might choose.
(b) A simple-minded estimate of $K$ is to use simple averaging (i.e., set $g_t$ = constant). Find the value of this constant which results in $\mathscr{E}[\hat K] = K$. What is the resulting percentage error, as expressed by $\sqrt{\operatorname{var}[\hat K]}\big/\mathscr{E}[\hat K]$?
(c) Find $g_t$ which minimizes the percentage error and yields $\mathscr{E}[\hat K] = K$. How much better is this optimum choice than simple averaging?
2.23 Constant or no Constant?
To determine the presence or absence of a constant voltage measured in the presence of additive, white Gaussian noise (spectral height $N_0/2$), an engineer decides to compute the average $\overline V$ of the measured voltage $V_t$.
$$\overline V = \frac{1}{T}\int_0^T V_t\, dt$$
The value of the constant voltage, if present, is V0 . The presence and absence of the voltage are equally
likely to occur.
(a) Derive a good method by which the engineer can use the average to determine the presence or
absence of the constant voltage.
(b) Determine the probability that the voltage is present when the engineer's method announces it is.
(c) The engineer decides to improve the performance of his technique by computing the more complicated quantity $V'$ given by
$$V' = \int_0^T f(t)\, V_t\, dt$$
What function $f(t)$ maximizes the probability found in part (b)?
2.24 Estimating the Mean
Suppose you have the stochastic process $Y_n$ produced by the depicted system. The input $W_n$ is discrete-time white noise (not necessarily Gaussian) having zero mean and correlation function $R_W(l) = \sigma_W^2\,\delta(l)$. The system relating $X_n$ to the white-noise input is governed by the difference equation
$$X_n = aX_{n-1} + W_n, \qquad |a| < 1$$
The quantity m is an unknown constant.
(Figure: the depicted system producing $Y_n$ from the white-noise input $W_n$.)
2.25 Process Generation
The goal is to generate a stochastic process $X_t$ having the correlation function $R_X(\tau) = e^{-|\tau|}$.
Two methods are proposed.
1. Let $X_t = A\cos(2\pi Ft + \Theta)$, where $A$, $F$, and $\Theta$ are statistically independent random variables.
2. Define $X_t$ by
$$X_t = \int_0^\infty h(\alpha)\, N_{t-\alpha}\, d\alpha$$
where $N_t$ is white and $h(t)$ is the impulse response of the appropriate filter.
(a) Find at least two impulse responses $h(t)$ that will work in method 2.
(b) Specify the densities for $A$, $F$, and $\Theta$ in method 1 that yield the desired results.
(c) Sketch sample functions generated by each method. Interpret your result. What are the technical
differences between these processes?
2.26 Multipath Channels
Let $X_t$ be a Gaussian random process with mean $m_X(t)$ and covariance function $K_X(t, u)$. The process is passed through the depicted system.
(Figure: $X_t$ enters two paths, one direct and one through a gain and a delay in cascade; the path outputs are summed to produce $Y_t$.)
(Figure: the measurement system $X_t \to H_1(f) \to Y_t \to (\,\cdot\,)^2 \to Y_t^2 \to H_2(f) \to Z_t$. $|H_1(f)|$ equals one over bands of width $\Delta f$ centered at $\pm f_0$ and is zero elsewhere; $H_2(f)$ is a lowpass filter.)
where $H_1(f)$ is the transfer function of an ideal bandpass filter and $H_2(f)$ is an ideal lowpass filter. Assume that $\Delta f$ is small compared to the range of frequencies over which $\mathscr{S}_X(f)$ varies.
(a) Find the mean and correlation function of Yt2 in terms of the second-order statistics of Xt .
(b) Compute the power density spectrum of the process Zt .
(c) Compute the expected value of Zt .
(d) By considering the variance of Zt , comment on the accuracy of this measurement of the power
density of the process Xt .
2.31 Three Filters
Let Xt be a stationary, zero-mean random process that serves as the input to three linear, time-invariant
filters. The power density spectrum of $X_t$ is $\mathscr{S}_X(f) = N_0/2$. The impulse responses of the filters are
$$h_1(t) = \begin{cases} 1 & 0 \le t \le 1 \\ 0 & \text{otherwise} \end{cases} \qquad
h_2(t) = \begin{cases} 2e^{-t} & t \ge 0 \\ 0 & \text{otherwise} \end{cases} \qquad
h_3(t) = \begin{cases} 2\sin 2\pi t & 0 \le t \le 1/2 \\ 0 & \text{otherwise} \end{cases}$$
(c) Is there any pair of processes for which $\mathscr{E}[Y_{i,t}\,Y_{j,t}] = 0$ for all $t$?
(d) Is there any pair of processes for which $\mathscr{E}[Y_{i,t}\,Y_{j,u}] = 0$ for all $t$ and $u$?
2.32 Generation of Processes
Let $X_t$ be a wide-sense stationary process having correlation function $R_X(\tau)$. $X_t$ serves as the input to a linear, time-invariant system having impulse response $h(t)$.
(a) Determine an $h(t)$ so that the linear system is stable and causal, and yields an output $Y_t$ having the correlation function
$$R_Y(\tau) = e^{-|\tau|} + e^{-2|\tau|}$$
(b) Show that your answer is not unique by finding at least three other alternatives.
2.33 Time-Bandwidth Product
It is frequently claimed that the relation between noise bandwidth and reciprocal duration of the ob-
servation interval play a key role in determining whether DFT values are approximately uncorrelated.
While the statements sound plausible, their veracity should be checked. Let the covariance function of
the observation noise be $K_N(l) = a^{|l|}$.
(a) How is the bandwidth (defined by the half-power point) of this noise's power spectrum related to the parameter $a$? How is the duration (defined to be two time constants) of the covariance function related to $a$?
(b) Find the variance of the length-L DFT of this noise process as a function of the frequency index
$k$. This result should be compared with the power spectrum calculated in part (a); they should resemble each other when the memory of the noise (the duration of the covariance function) is much less than $L$, while demonstrating differences as the memory becomes comparable to or exceeds $L$.
(c) Calculate the covariance between adjacent frequency indices. Under what conditions will they be
approximately uncorrelated? Relate your answer to the relations of a to L found in the previous
part.
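Before working the analysis, the claim can be probed numerically. The sketch below is an illustration appended to the problem, not part of it; the AR(1) recursion, bin index, and trial count are choices of ours. It generates Gaussian noise with covariance $K_N(l) = a^{|l|}$ and estimates the correlation-coefficient magnitude between adjacent length-$L$ DFT bins:

```python
import numpy as np

# Sketch: Gaussian noise with covariance K_N(l) = a**abs(l) is generated by the
# AR(1) recursion x[n] = a*x[n-1] + w[n] with innovation variance 1 - a**2.
# We then estimate the correlation coefficient between adjacent DFT bins.
rng = np.random.default_rng(0)

def dft_bin_correlation(a, L=64, trials=4000, k=5):
    """Correlation-coefficient magnitude between length-L DFT bins k and k+1."""
    w = rng.normal(scale=np.sqrt(1 - a**2), size=(trials, L))
    x = np.empty((trials, L))
    x[:, 0] = rng.normal(size=trials)          # stationary start: unit variance
    for n in range(1, L):
        x[:, n] = a * x[:, n - 1] + w[:, n]
    X = np.fft.fft(x, axis=1)
    num = np.mean(X[:, k] * np.conj(X[:, k + 1]))
    den = np.sqrt(np.mean(np.abs(X[:, k])**2) * np.mean(np.abs(X[:, k + 1])**2))
    return abs(num) / den

# Short memory (small a): adjacent bins are nearly uncorrelated.
# Memory comparable to L (a near 1): substantial correlation remains.
print(dft_bin_correlation(0.1), dft_bin_correlation(0.95))
```

The contrast between the two printed values previews the answer to part (c): decorrelation requires the covariance duration set by $a$ to be much shorter than $L$.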
2.34 More Time-Bandwidth Product
The results derived in Problem 2.33 assumed that a length-L Fourier Transform was computed from
a length-L segment of the noise process. What will happen if the transform has length 2L with the
observation interval remaining unchanged?
(a) Find the variance of DFT values at index k.
(b) Assuming the conditions in Problem 2.33 for uncorrelated adjacent samples, now what is the
correlation between adjacent DFT values?
2.35 Sampling Stochastic Processes
(a) Let $X_t$ be a wide-sense stationary process bandlimited to $W$ Hz. The sampling interval $T_s$ satisfies $T_s \le 1/2W$. What is the covariance of successive samples?
(b) Now let Xt be Gaussian. What conditions on Ts will insure that successive samples will be statis-
tically independent?
(c) Now assume the process is not strictly bandlimited to $W$ Hz. This process serves as the input to an ideal lowpass filter having cutoff frequency $W$ to produce the process $Y_t$. This output is sampled every $1/2W$ seconds to yield an approximate representation $Z_t$ of the original signal $X_t$. Show that the mean-squared value of the sampling error, defined to be $\epsilon^2 = \mathscr{E}\left[(X_t - Z_t)^2\right]$, is given by
$$\epsilon^2 = 2\int_W^\infty \mathscr{S}_X(f)\, df$$
2.36 Properties of the Poisson Process
Let $N_t$ be a Poisson process with intensity $\lambda(t)$.
(a) What is the expected value and variance of the number of events occurring in the time interval
$(t, u]$?
5. $\mathscr{E}[XY]$, where $X$ and $Y$ are real-valued random variables having finite mean-square values.
6. $\operatorname{cov}(X, Y)$, the covariance of the real-valued random variables $X$ and $Y$. Assume that the random variables have finite mean-square values.
(b) Under what conditions is
$$\int_0^T\!\!\int_0^T Q(t, u)\, x(t)\, y(u)\, dt\, du$$
a valid inner product for the set of finite-energy functions defined over $[0, T)$?
2.43 Inner Products with Kernels
Let an inner product be defined with respect to the positive-definite, symmetric kernel $Q$:
$$\langle x, y\rangle_Q = xQy$$
where $xQy$ is the abstract notation for the mapping of the two vectors to a scalar. For example, if $x$ and $y$ are column matrices, $Q$ is a positive-definite square matrix and
$$\langle x, y\rangle_Q = x^t Q y$$
(a) Show that the eigenvectors of the kernel are orthogonal with respect to the usual inner product:
$$\langle v_i, v_j\rangle = 0, \qquad i \ne j$$
(b) Show that these eigenvectors are orthogonal with respect to the inner product generated by Q.
Consequently, the eigenvectors are orthogonal with respect to two different inner products.
(c) Let $\tilde Q$ be the inverse kernel associated with $Q$. If $Q$ is a matrix, then $Q\tilde Q = I$. If $Q$ is a continuous-time kernel, then
$$\int Q(t, u)\,\tilde Q(u, v)\, du = \delta(t - v)$$
Show that the eigenvectors of the inverse kernel are equal to those of the kernel. How are the
associated eigenvalues of these kernels related to each other?
2.44 A Karhunen-Loeve Expansion
Let the covariance function of a wide-sense stationary process be
$$K_X(\tau) = \begin{cases} 1 - |\tau| & |\tau| \le 1 \\ 0 & \text{otherwise} \end{cases}$$
Find the eigenfunctions and eigenvalues associated with the Karhunen-Loeve expansion of $X_t$ over $[0, T)$ with $T < 1$.
(c) Let $X_t$ be a continuous parameter process so that
$$\langle X, Y\rangle = \int_0^T X_t\, Y_t\, dt$$
Show that this inner product implies
$$\int_0^T K_X(t, u)\,\phi(u)\, du = \lambda\,\phi(t)$$
(d) Again let $X_t$ be a continuous parameter process. However, define the inner product to be
$$\langle X, Y\rangle = \int_0^T\!\!\int_0^T Q(t, u)\, X_t\, Y_u\, dt\, du$$
where $Q(t, u)$ is a non-negative definite function. Find the equivalent relationship implied by the requirements of the Karhunen-Loeve expansion. Under what conditions will the $\phi$'s satisfying this relationship not depend on the covariance function of $X_t$?
Chapter 3
Estimation Theory
In searching for methods of extracting information from noisy observations, this chapter describes estima-
tion theory, which has the goal of extracting from noise-corrupted observations the values of disturbance
parameters (noise variance, for example), signal parameters (amplitude or propagation direction), or signal
waveforms. Estimation theory assumes that the observations contain an information-bearing quantity, thereby
tacitly assuming that detection-based preprocessing has been performed (in other words, do I have something
in the observations worth estimating?). Conversely, detection theory often requires estimation of unknown
parameters: Signal presence is assumed, parameter estimates are incorporated into the detection statistic, and
consistency of observations and assumptions tested. Consequently, detection and estimation theory form a
symbiotic relationship, each requiring the other to yield high-quality signal processing algorithms.
Despite a wide variety of error criteria and problem frameworks, the optimal detector is characterized
by a single result: the likelihood ratio test. Surprisingly, optimal detectors thus derived are usually easy
to implement, not often requiring simplification to obtain a feasible realization in hardware or software. In
contrast to detection theory, no fundamental result in estimation theory exists to be summoned to attack
the problem at hand. The choice of error criterion and its optimization heavily influences the form of the
estimation procedure. Because of the variety of criterion-dependent estimators, arguments frequently rage about which of several "optimal" estimators is "better." Each procedure is optimum for its assumed error criterion; thus, the argument becomes which error criterion best describes some intuitive notion of quality.
When more ad hoc, noncriterion-based procedures are used, we cannot assess the quality of the resulting
estimator relative to the best achievable. As shown later, bounds on the estimation error do exist, but their
tightness and applicability to a given situation are always issues in assessing estimator quality. At best,
estimation theory is less structured than detection theory. Detection is science, estimation art. Inventiveness
coupled with an understanding of the problem (what types of errors are critically important, for example) are
key elements to deciding which estimation procedure fits a given problem well.
69
The estimation error $\boldsymbol{\epsilon}$ equals the estimate minus the actual parameter value: $\boldsymbol{\epsilon} = \hat{\boldsymbol{\theta}}(X) - \boldsymbol{\theta}$. It too is a random quantity and is often used in the criterion function. For example, the mean-squared error is given by $\mathscr{E}[\boldsymbol{\epsilon}^t\boldsymbol{\epsilon}]$; the minimum mean-squared error estimate would minimize this quantity. The mean-squared error matrix is $\mathscr{E}[\boldsymbol{\epsilon}\boldsymbol{\epsilon}^t]$; on the main diagonal, its entries are the mean-squared estimation errors for each component of the parameter vector, whereas the off-diagonal terms express the correlation between the errors. The mean-squared estimation error $\mathscr{E}[\boldsymbol{\epsilon}^t\boldsymbol{\epsilon}]$ equals the trace of the mean-squared error matrix: $\mathscr{E}[\boldsymbol{\epsilon}^t\boldsymbol{\epsilon}] = \operatorname{tr} \mathscr{E}[\boldsymbol{\epsilon}\boldsymbol{\epsilon}^t]$.
Bias. An estimate is said to be unbiased if the expected value of the estimate equals the true value of the parameter: $\mathscr{E}[\hat{\boldsymbol{\theta}}] = \boldsymbol{\theta}$. Otherwise, the estimate is said to be biased: $\mathscr{E}[\hat{\boldsymbol{\theta}}] \ne \boldsymbol{\theta}$. The bias $\mathbf{b}(\boldsymbol{\theta})$ is usually considered to be additive, so that $\mathbf{b}(\boldsymbol{\theta}) = \mathscr{E}[\hat{\boldsymbol{\theta}}] - \boldsymbol{\theta}$. When we have a biased estimate, the bias usually depends on the number of observations $L$. An estimate is said to be asymptotically unbiased if the bias tends to zero for large $L$: $\lim_{L\to\infty} \mathbf{b} = 0$. An estimate's variance equals the mean-squared estimation error only if
the estimate is unbiased.
An unbiased estimate has a probability distribution where the mean equals the actual value of the param-
eter. Should the lack of bias be considered a desirable property? If many unbiased estimates are computed
from statistically independent sets of observations having the same parameter value, the average of these es-
timates will be close to this value. This property does not mean that the estimate has less error than a biased
one; there exist biased estimates whose mean-squared errors are smaller than unbiased ones. In such cases,
the biased estimate is usually asymptotically unbiased. Lack of bias is good, but that is just one aspect of how
we evaluate estimators.
Consistency. We term an estimate consistent if the mean-squared estimation error tends to zero as the number of observations becomes large: $\lim_{L\to\infty} \mathscr{E}[\boldsymbol{\epsilon}^t\boldsymbol{\epsilon}] = 0$. Thus, a consistent estimate must be at least
asymptotically unbiased. Unbiased estimates do exist whose errors never diminish as more data are collected:
Their variances remain nonzero no matter how much data are available. Inconsistent estimates may provide
reasonable estimates when the amount of data is limited, but have the counterintuitive property that the quality
of the estimate does not improve as the number of observations increases. Although appropriate in the proper
circumstances (smaller mean-squared error than a consistent estimate over a pertinent range of values of L),
consistent estimates are usually favored in practice.
Efficiency. As estimators can be derived in a variety of ways, their error characteristics must always be
analyzed and compared. In practice, many problems and the estimators derived for them are sufficiently
complicated to render analytic studies of the errors difficult, if not impossible. Instead, numerical simulation
and comparison with lower bounds on the estimation error are frequently used instead to assess the estimator
performance. An efficient estimate has a mean-squared error that equals a particular lower bound: the Cramer-
Rao bound. If an efficient estimate exists (the Cramer-Rao bound is the greatest lower bound), it is optimum
in the mean-squared sense: No other estimate has a smaller mean-squared error (see §3.2.4 for details).
For many problems no efficient estimate exists. In such cases, the Cramer-Rao bound remains a lower
bound, but its value is smaller than that achievable by any estimator. How much smaller is usually not
known. However, practitioners frequently use the Cramer-Rao bound in comparisons with numerical error
calculations. Another issue is the choice of mean-squared error as the estimation criterion; it may not suffice
to pointedly assess estimator performance in a particular problem. Nevertheless, every problem is usually
subjected to a Cramer-Rao bound computation and the existence of an efficient estimate considered.
the prior density (one that applies before the data become available). Choosing the prior, as we have said so
often, narrows the problem considerably, suggesting that measurement of the parameter's density would yield
something like what was assumed! Said another way, if a prior is not chosen from fundamental considerations
(such as the physics of the problem) but from ad hoc assumptions, the results could tend to resemble the as-
sumptions you placed on the problem. On the other hand, if the density is not known, the parameter is termed
nonrandom, and its values range unrestricted over some interval. The resulting nonrandom-parameter es-
timation problem differs greatly from the random-parameter problem. We consider first the latter problem,
letting be a scalar parameter having the prior density p . The impact of the a priori density becomes
evident as various error criteria are established, and an optimum estimator is derived.
where $p_{X,\theta}(X, \theta)$ is the joint density of the observations and the parameter. To minimize this integral with respect to $\hat\theta$, we rewrite it using the laws of conditional probability as
$$\mathscr{E}[\epsilon^2] = \int \left\{\int \left[\hat\theta(x) - \theta\right]^2 p_{\theta|X}(\theta|x)\, d\theta\right\} p_X(x)\, dx$$
The density $p_X(x)$ is nonnegative. To minimize the mean-squared error, we must minimize the inner integral
for each value of X because the integral is weighted by a positive quantity. We focus attention on the inner
integral, which is the conditional expected value of the squared estimation error. The condition, a fixed value of $X$, implies that we seek that constant $\hat\theta(X)$ derived from $X$ that minimizes the second moment of the random parameter $\theta$. A well-known result from probability theory states that the minimum of $\mathscr{E}[(x - c)^2]$ occurs when the constant $c$ equals the expected value of the random variable $x$ (see §1.2.2). The inner integral, and thereby the mean-squared error, is minimized by choosing the estimator to be the conditional expected value of the parameter given the observations.
$$\hat\theta_{\text{MMSE}}(X) = \mathscr{E}[\theta|X]$$
Thus, a parameter's minimum mean-squared error (MMSE) estimate is the parameter's a posteriori (after the
observations have been obtained) expected value.
The associated conditional probability density $p_{\theta|X}(\theta|X)$ is not often directly stated in a problem definition and must somehow be derived. In many applications, the likelihood function $p_{X|\theta}(X|\theta)$ and the a priori density of the parameter are a direct consequence of the problem statement. These densities can be used to find the joint density of the observations and the parameter, enabling us to use Bayes's Rule to find the a posteriori density if we knew the unconditional probability density of the observations.
$$p_{\theta|X}(\theta|X) = \frac{p_{X|\theta}(X|\theta)\, p_\theta(\theta)}{p_X(X)}$$
This density $p_X(X)$ is often difficult to determine. Be that as it may, to find the a posteriori conditional expected value, it need not be known. The numerator entirely expresses the a posteriori density's dependence on $\theta$; the denominator serves only as the scaling factor that yields a unit-area quantity. The expected value is the center-of-mass of the probability density and does not depend directly on the weight of the density, bypassing calculation of the scaling factor. Without this property, the MMSE estimate can be exceedingly difficult to compute.
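The point that the scaling factor never needs to be computed can be made concrete numerically. In the sketch below, the prior, the noise level, and the data are illustrative assumptions of ours; the posterior mean is computed from the unnormalized numerator alone, since any scale cancels in the center-of-mass ratio.

```python
import numpy as np

# Sketch: compute the MMSE estimate as the center of mass of the *unnormalized*
# posterior p(X|theta) p(theta); the scale factor p(X) cancels in the ratio.
# The prior, noise variance, and data below are illustrative assumptions.
rng = np.random.default_rng(1)
sN2, s2, m = 0.25, 1.0, 0.0       # noise variance, prior variance, prior mean
X = 0.7 + rng.normal(scale=np.sqrt(sN2), size=20)

theta = np.linspace(-3.0, 3.0, 2001)              # dense grid over the parameter
log_numer = (-(theta - m)**2 / (2 * s2)                               # log prior
             - ((X[:, None] - theta)**2).sum(axis=0) / (2 * sN2))     # log likelihood
numer = np.exp(log_numer - log_numer.max())       # unnormalized posterior

mmse = (theta * numer).sum() / numer.sum()        # center of mass: no p(X) needed

# Closed form for this Gaussian case (derived in the example that follows):
closed = (m / s2 + X.sum() / sN2) / (1 / s2 + len(X) / sN2)
print(mmse, closed)
```

The grid-based center of mass and the closed-form Gaussian answer agree to within the grid resolution, illustrating that only the shape of the numerator matters.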
Example
Let $L$ statistically independent observations be obtained, each of which is expressed by $X(l) = \theta + N(l)$. Each $N(l)$ is a Gaussian random variable having zero mean and variance $\sigma_N^2$. Thus, the unknown parameter in this problem is the mean of the observations. Assume it to be a Gaussian random variable a priori (mean $m_\theta$ and variance $\sigma_\theta^2$). The likelihood function is easily found to be
$$p_{X|\theta}(X|\theta) = \prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\sigma_N^2}} \exp\left\{-\frac{1}{2}\left(\frac{X(l) - \theta}{\sigma_N}\right)^2\right\}$$
so that the a posteriori density is
$$p_{\theta|X}(\theta|X) = \frac{\dfrac{1}{\sqrt{2\pi\sigma_\theta^2}}\exp\left\{-\dfrac{(\theta - m_\theta)^2}{2\sigma_\theta^2}\right\} \displaystyle\prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\sigma_N^2}} \exp\left\{-\frac{1}{2}\left(\frac{X(l) - \theta}{\sigma_N}\right)^2\right\}}{p_X(X)}$$
In an attempt to find the expected value of this distribution, lump all terms that do not depend explicitly on the quantity $\theta$ into a proportionality term.
$$p_{\theta|X}(\theta|X) \propto \exp\left\{-\frac{1}{2\tilde\sigma^2}\left[\theta - \tilde\sigma^2\left(\frac{m_\theta}{\sigma_\theta^2} + \frac{\sum_l X(l)}{\sigma_N^2}\right)\right]^2\right\}$$
where $\tilde\sigma^2$ is a quantity that succinctly expresses the ratio $\sigma_N^2\sigma_\theta^2/(\sigma_N^2 + L\sigma_\theta^2)$. The form of the a posteriori density suggests that it too is Gaussian; its mean, and therefore the MMSE estimate of $\theta$, is given by
$$\hat\theta_{\text{MMSE}}(X) = \tilde\sigma^2\left(\frac{m_\theta}{\sigma_\theta^2} + \frac{\sum_l X(l)}{\sigma_N^2}\right)$$
More insight into the nature of this estimate is gained by rewriting it as
$$\hat\theta_{\text{MMSE}}(X) = \frac{\sigma_N^2/L}{\sigma_N^2/L + \sigma_\theta^2}\, m_\theta + \frac{\sigma_\theta^2}{\sigma_N^2/L + \sigma_\theta^2}\cdot\frac{1}{L}\sum_{l=0}^{L-1} X(l)$$
The term $\sigma_N^2/L$ is the variance of the averaged observations for a given value of $\theta$; it expresses the squared error encountered in estimating the mean by simple averaging. If this error is much greater than the a priori variance of $\theta$ ($\sigma_N^2/L \gg \sigma_\theta^2$), implying that the observations are noisier than the variation of the parameter, the MMSE estimate ignores the observations and tends to yield the a priori mean $m_\theta$ as its value. If the averaged observations are less variable than the parameter, the second term dominates, and the average of the observations is the estimate's value. This estimate behavior between these extremes is very intuitive. The detailed form of the estimate indicates how the squared error can be minimized by a linear combination of these extreme estimates.
The conditional expected value of the estimate equals
$$\mathscr{E}[\hat\theta_{\text{MMSE}}|\theta] = \frac{\sigma_\theta^2}{\sigma_N^2/L + \sigma_\theta^2}\,\theta + \frac{\sigma_N^2/L}{\sigma_N^2/L + \sigma_\theta^2}\, m_\theta$$
This estimate is biased because its expected value does not equal the value of the sought-after parameter. It is asymptotically unbiased because the squared measurement error $\sigma_N^2/L$ tends to zero as $L$ becomes large. The consistency of the estimator is determined by investigating the expected value of the squared error. Note that the variance of the a posteriori density is the quantity $\tilde\sigma^2$; as this quantity does not depend on $X$, it also equals the unconditional variance. As the number of observations increases, this variance tends to zero. In concert with the estimate being asymptotically unbiased, the expected value of the squared estimation error thus tends to zero, implying that we have a consistent estimate.
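The two limiting regimes of this estimate can be checked with a few lines of code. This is a sketch of ours, with illustrative parameter values; it evaluates the weighted-combination form of the estimate in the noisy-observation and clean-observation extremes.

```python
import numpy as np

# Sketch of the two regimes discussed above. The estimate is the weighted
# combination  theta_hat = w*m + (1-w)*mean(X)  with  w = (sN2/L)/(sN2/L + s2).
# All parameter values below are illustrative assumptions.
def mmse_mean(X, m, s2, sN2):
    w = (sN2 / len(X)) / (sN2 / len(X) + s2)   # weight given to the prior mean
    return w * m + (1 - w) * X.mean()

rng = np.random.default_rng(2)
X = 1.0 + rng.normal(scale=1.0, size=5)

trust_prior = mmse_mean(X, m=0.0, s2=0.01, sN2=100.0)  # sN2/L >> s2
trust_data  = mmse_mean(X, m=0.0, s2=100.0, sN2=0.01)  # sN2/L << s2
print(trust_prior, trust_data, X.mean())
```

In the first call the estimate collapses toward the prior mean; in the second it essentially reproduces the sample average, exactly the behavior described in the example.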
Any scaling of the density by a positive quantity that depends on $X$ does not change the location of the maximum. Symbolically, $p_{\theta|X} = p_{X|\theta}\, p_\theta / p_X$; the derivative with respect to $\theta$ does not involve the denominator, and this term can be ignored. Thus, the only quantities required to compute $\hat\theta_{\text{MAP}}$ are the likelihood function and the parameter's a priori density.
Although not apparent in its definition, the MAP estimate does satisfy an error criterion. Define a criterion that is zero over a small range of values about $\epsilon = 0$ and a positive constant outside that range. Minimization
of the expected value of this criterion with respect to ! is accomplished by centering the criterion function
at the maximum of the density. The region having the largest area is thus notched out, and the criterion is
minimized. Whenever the a posteriori density is symmetric and unimodal, the MAP and MMSE estimates
coincide. In Gaussian problems, such as the last example, this equivalence is always valid. In more general
circumstances, they differ.
Example
Let the observations have the same form as the previous example, but with the modification that the parameter is now uniformly distributed over the interval $[\theta_1, \theta_2]$. The a posteriori mean cannot be computed in closed form. To obtain the MAP estimate, we need to find the location of the maximum of
$$p_{X|\theta}(X|\theta)\, p_\theta(\theta) = \frac{1}{\theta_2 - \theta_1} \prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\sigma_N^2}} \exp\left\{-\frac{1}{2}\left(\frac{X(l) - \theta}{\sigma_N}\right)^2\right\}, \qquad \theta_1 \le \theta \le \theta_2$$
Evaluating the logarithm of this quantity does not change the location of the maximum and simplifies the manipulations in many problems. Here, the logarithm is
$$\ln p_{X|\theta}(X|\theta)\, p_\theta(\theta) = -\ln(\theta_2 - \theta_1) + \ln C - \frac{1}{2}\sum_{l=0}^{L-1}\left(\frac{X(l) - \theta}{\sigma_N}\right)^2, \qquad \theta_1 \le \theta \le \theta_2$$
where $C$ is a constant with respect to $\theta$. Assuming that the maximum is interior to the domain of the parameter, the MAP estimate is found to be the sample average $\sum_l X(l)/L$. If the average lies outside
this interval, the corresponding endpoint of the interval is the location of the maximum. To summarize,
$$\hat\theta_{\text{MAP}}(X) = \begin{cases} \theta_1 & \sum_l X(l)/L < \theta_1 \\ \sum_l X(l)/L & \theta_1 \le \sum_l X(l)/L \le \theta_2 \\ \theta_2 & \sum_l X(l)/L > \theta_2 \end{cases}$$
The a posteriori density is not symmetric because of the finite domain of . Thus, the MAP estimate
is not equivalent to the MMSE estimate, and the accompanying increase in the mean-squared error is
difficult to compute. When the sample average is the estimate, the estimate is unbiased; otherwise it
is biased. Asymptotically, the variance of the average tends to zero, with the consequences that the
estimate is unbiased and consistent.
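The case analysis in this example reduces to clipping the sample average to the prior's interval. A minimal sketch, with illustrative interval endpoints and data:

```python
import numpy as np

# Sketch: the MAP estimate under a uniform prior on [theta1, theta2] is the
# sample average clipped to that interval. Data and intervals are illustrative.
def map_uniform_prior(X, theta1, theta2):
    return float(np.clip(np.mean(X), theta1, theta2))

X = [0.4, 0.9, 1.3, 0.8]                   # sample average = 0.85
print(map_uniform_prior(X, 0.0, 1.0))      # average interior to [0, 1]
print(map_uniform_prior(X, 0.0, 0.5))      # average exceeds theta2 = 0.5
```

The first call returns the sample average itself; the second returns the endpoint $\theta_2$, the clipped (and hence biased) case discussed above.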
$$\|x\|^2 = \langle x, x\rangle$$
For example, if $x$ and $y$ are each column matrices having only one column, their inner product might be defined as $\langle x, y\rangle = x^t y$. Thus, the linear estimator as defined by the Orthogonality Principle must satisfy
$$\left\langle \hat\theta_{\text{LIN}}(X) - \theta,\ \mathcal{L}(X)\right\rangle = 0 \quad \text{for all linear transformations } \mathcal{L} \qquad (3.1)$$
To see that this principle produces the MMSE linear estimator, we express the mean-squared estimation error $\|\theta - \hat\theta(X)\|^2$ for any choice of linear estimator $\hat\theta$ as
$$\|\theta - \hat\theta\|^2 = \|\theta - \hat\theta_{\text{LIN}} + \hat\theta_{\text{LIN}} - \hat\theta\|^2 = \|\theta - \hat\theta_{\text{LIN}}\|^2 + \|\hat\theta_{\text{LIN}} - \hat\theta\|^2 + 2\left\langle \theta - \hat\theta_{\text{LIN}},\ \hat\theta_{\text{LIN}} - \hat\theta\right\rangle$$
As $\hat\theta_{\text{LIN}} - \hat\theta$ is the difference of two linear transformations, it too is linear and is orthogonal to the estimation error resulting from $\hat\theta_{\text{LIN}}$. As a result, the last term is zero and the mean-squared estimation error is the sum
There is a confusion as to what a vector is. Matrices having one column are colloquially termed vectors as are the field quantities
such as electric and magnetic fields. Vectors and their associated inner products are taken to be much more general mathematical
objects than these. Hence the prose in this section is rather contorted.
of two squared norms, each of which is, of course, nonnegative. Only the second norm varies with estimator
choice; we minimize the mean-squared estimation error by choosing the estimator ! to be the estimator ! LIN ,
which sets the second term to zero.
The estimation error for the minimum mean-squared linear estimator can be calculated to some degree without knowledge of the form of the estimator. The mean-squared estimation error is given by
$$\|\theta - \hat\theta_{\text{LIN}}\|^2 = \left\langle \theta - \hat\theta_{\text{LIN}},\ \theta - \hat\theta_{\text{LIN}}\right\rangle = -\left\langle \theta - \hat\theta_{\text{LIN}},\ \hat\theta_{\text{LIN}}\right\rangle + \left\langle \theta - \hat\theta_{\text{LIN}},\ \theta\right\rangle$$
The first term is zero because of the Orthogonality Principle. Rewriting the second term yields a general expression for the MMSE linear estimator's mean-squared error.
$$\|\theta - \hat\theta_{\text{LIN}}\|^2 = \|\theta\|^2 - \left\langle \hat\theta_{\text{LIN}},\ \theta\right\rangle$$
This error is the difference of two terms. The first, the mean-squared value of the parameter, represents the
largest value that the estimation error can be for any reasonable estimator. That error can be obtained by the
estimator that ignores the data and has a value of zero. The second term reduces this maximum error and
represents the degree to which the estimate and the parameter agree on the average.
Note that the definition of the minimum mean-squared error linear estimator makes no explicit assump-
tions about the parameter estimation problem being solved. This property makes this kind of estimator at-
tractive in many applications where neither the a priori density of the parameter vector nor the density of the
observations is known precisely. Linear transformations, however, are homogeneous: A zero-valued input
yields a zero output. Thus, the linear estimator is especially pertinent to those problems where the expected
value of the parameter is zero. If the expected value is nonzero, the linear estimator would not necessarily
yield the best result (see Problem 3.14).
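The homogeneity point can be illustrated numerically: when the parameter has a nonzero mean, a strictly linear estimate is outperformed by an affine one that handles the mean separately. The sketch below uses illustrative numbers of our choosing and compares the empirical mean-squared errors of the two estimators.

```python
import numpy as np

# Sketch: strictly linear (homogeneous) vs affine estimation of a parameter
# with nonzero mean m, observed Lobs times in white Gaussian noise.
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(5)
m, s2, sN2, Lobs, trials = 5.0, 1.0, 1.0, 4, 50_000

theta = rng.normal(m, np.sqrt(s2), size=trials)
X = theta[:, None] + rng.normal(scale=np.sqrt(sN2), size=(trials, Lobs))
xbar = X.mean(axis=1)

# Best strictly linear estimate theta_hat = c * xbar:
#   c = E[theta*xbar] / E[xbar^2] = (s2 + m^2) / (s2 + m^2 + sN2/Lobs)
c = (s2 + m**2) / (s2 + m**2 + sN2 / Lobs)
mse_linear = np.mean((c * xbar - theta)**2)

# Affine estimate: shrink the deviation of xbar from the known mean m.
w = s2 / (s2 + sN2 / Lobs)
mse_affine = np.mean((m + w * (xbar - m) - theta)**2)
print(mse_linear, mse_affine)
```

The affine estimator's error is visibly smaller, consistent with the caution that a zero-mean parameter is the natural setting for the strictly linear estimator.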
Example
Express the first example in vector notation so that the observation vector is written as
$$X = A\theta + N$$
where the matrix $A$ has the form $A = \operatorname{col}[1, \ldots, 1]$. The expected value of the parameter is zero. The linear estimator has the form $\hat\theta_{\text{LIN}} = LX$, where $L$ is a $1 \times L$ matrix. The Orthogonality Principle states that the linear estimator satisfies
$$\mathscr{E}\left[(LX - \theta)^t (MX)\right] = 0 \quad \text{for all } 1 \times L \text{ matrices } M$$
To use the Orthogonality Principle to derive an equation implicitly specifying the linear estimator, the "for all linear transformations" phrase must be interpreted. Usually, the quantity specifying the linear transformation must be removed from the constraining inner product by imposing a very stringent
but equivalent condition. In this example, this phrase becomes one about matrices. The elements of
the matrix M can be such that each element of the observation vector multiplies each element of the
estimation error. Thus, in this problem the Orthogonality Principle means that the expected value of
the matrix consisting of all pairwise products of these elements must be zero.
$$\mathscr{E}\left[(LX - \theta)\, X^t\right] = 0$$
Thus, two terms must equal each other: $\mathscr{E}[LXX^t] = \mathscr{E}[\theta X^t]$. The second term equals $\sigma_\theta^2 A^t$ as the additive noise and the parameter are assumed to be statistically independent quantities. The quantity $\mathscr{E}[XX^t]$ in the first term is the correlation matrix of the observations, which is given by $\sigma_\theta^2 AA^t + K_N$. Here, $K_N$ is the noise covariance matrix, and $\sigma_\theta^2$ is the parameter's variance. The quantity
$AA^t$ is an $L \times L$ matrix with each element equaling 1. The noise vector has independent components; the covariance matrix thus equals $\sigma_N^2 I$. The equation that $L$ must satisfy is therefore given by
$$\begin{bmatrix} L_1 & \cdots & L_L \end{bmatrix} \begin{bmatrix} \sigma_N^2 + \sigma_\theta^2 & \sigma_\theta^2 & \cdots & \sigma_\theta^2 \\ \sigma_\theta^2 & \sigma_N^2 + \sigma_\theta^2 & & \vdots \\ \vdots & & \ddots & \sigma_\theta^2 \\ \sigma_\theta^2 & \cdots & \sigma_\theta^2 & \sigma_N^2 + \sigma_\theta^2 \end{bmatrix} = \begin{bmatrix} \sigma_\theta^2 & \cdots & \sigma_\theta^2 \end{bmatrix}$$
The components of $L$ are equal and are given by $L_i = \sigma_\theta^2/(\sigma_N^2 + L\sigma_\theta^2)$. Thus, the minimum mean-squared error linear estimator has the form
$$\hat\theta_{\text{LIN}}(X) = \frac{\sigma_\theta^2}{\sigma_N^2/L + \sigma_\theta^2} \cdot \frac{1}{L}\sum_l X(l)$$
Note that this result equals the minimum mean-squared error estimate derived earlier under the condition that $m_\theta = 0$. Mean-squared error, linear estimators, and Gaussian problems are intimately
related to each other. The linear minimum mean-squared error solution to a problem is optimal if the
underlying distributions are Gaussian.
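The matrix equation in this example can also be solved numerically and compared against the closed form. This is a sketch with illustrative parameter values, not part of the text's derivation:

```python
import numpy as np

# Sketch: solve  Lmat (s2*A A^t + sN2*I) = s2*A^t  numerically and compare
# with the closed form L_i = s2/(sN2 + Lobs*s2). Values are illustrative.
Lobs, s2, sN2 = 8, 2.0, 3.0
A = np.ones((Lobs, 1))                        # A = col[1, ..., 1]
R = s2 * (A @ A.T) + sN2 * np.eye(Lobs)       # E[X X^t]
Lmat = np.linalg.solve(R.T, s2 * A).T         # Lmat = s2 * A^t R^{-1}
closed = s2 / (sN2 + Lobs * s2)
print(Lmat.ravel())
print(closed)
```

All components of the numerically solved $L$ come out equal to the closed-form value, as the symmetry argument in the example predicts.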
The logarithm of the likelihood function may also be used in this maximization.
Example
Let X l be a sequence of independent, identically distributed Gaussian random variables having an
unknown mean but a known variance N2 . Often, we cannot assign a probability density to a param-
eter of a random variables density; we simply do not know what the parameters value is. Maximum
likelihood estimates are often used in such problems. In the specific case here, the derivative of the
logarithm of the likelihood function equals
$$\frac{\partial\, \ln p_{X|\theta}(X|\theta)}{\partial\theta} = \frac{1}{\sigma_N^2}\sum_{l=0}^{L-1}\left(X(l) - \theta\right)$$
The solution of this equation is the maximum likelihood estimate, which equals the sample average.
$$\hat\theta_{\text{ML}} = \frac{1}{L}\sum_{l=0}^{L-1} X(l)$$
The expected value of this estimate, $\mathscr{E}[\hat\theta_{\text{ML}}]$, equals the actual value $\theta$, showing that the maximum likelihood estimate is unbiased. The mean-squared error equals $\sigma_N^2/L$, and we infer that this estimate is consistent.
Parameter Vectors
The maximum likelihood procedure (as well as the others being discussed) can be easily generalized to situa-
tions where more than one parameter must be estimated. Letting denote the parameter vector, the likelihood
function is now expressed as pX X . The maximum likelihood estimate ! ML of the parameter vector is
given by the location of the maximum of the likelihood function (or equivalently of its logarithm). Using
derivatives, the calculation of the maximum likelihood estimate becomes
$$\left.\nabla_{\boldsymbol{\theta}}\, \ln p_{X|\boldsymbol{\theta}}(X|\boldsymbol{\theta})\right|_{\boldsymbol{\theta} = \hat{\boldsymbol{\theta}}_{\text{ML}}} = 0$$
where $\nabla_{\boldsymbol{\theta}}$ denotes the gradient with respect to the parameter vector. This equation means that we must
estimate all of the parameters simultaneously by setting the partial of the likelihood function with respect to
each parameter to zero. Given P parameters, we must solve in most cases a set of P nonlinear, simultaneous
equations to find the maximum likelihood estimates.
Example
Let's extend the previous example to the situation where neither the mean nor the variance of a sequence of independent Gaussian random variables is known. The likelihood function is, in this case,
$$p_{X|\boldsymbol{\theta}}(X|\boldsymbol{\theta}) = \prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\theta_2}} \exp\left\{-\frac{1}{2\theta_2}\left(X(l) - \theta_1\right)^2\right\}$$
Evaluating the partial derivatives of the logarithm of this quantity, we find the following set of two equations to solve for $\theta_1$, representing the mean, and $\theta_2$, representing the variance.
$$\frac{1}{\theta_2}\sum_{l=0}^{L-1}\left(X(l) - \theta_1\right) = 0$$
$$-\frac{L}{2\theta_2} + \frac{1}{2\theta_2^2}\sum_{l=0}^{L-1}\left(X(l) - \theta_1\right)^2 = 0$$
The solutions are
$$\hat\theta_{1,\text{ML}} = \frac{1}{L}\sum_{l=0}^{L-1} X(l) \qquad\qquad \hat\theta_{2,\text{ML}} = \frac{1}{L}\sum_{l=0}^{L-1}\left(X(l) - \hat\theta_{1,\text{ML}}\right)^2$$
The expected value of $\hat\theta_{1,\text{ML}}$ equals the actual value of $\theta_1$; thus, this estimate is unbiased. However, the expected value of the estimate of the variance equals $\theta_2 (L-1)/L$. The estimate of the variance is biased, but asymptotically unbiased. This bias can be removed by replacing the normalization of $L$ in the averaging computation for $\hat\theta_{2,\text{ML}}$ by $L - 1$.
The variance rather than the standard deviation is represented by $\theta_2$. The mathematics is messier and the estimator has less attractive properties in the latter case. Problem 3.8 illustrates this point.
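A quick Monte Carlo run makes the $(L-1)/L$ bias factor of the variance estimate visible. This is a sketch of ours; the distribution parameters and trial count are illustrative assumptions.

```python
import numpy as np

# Sketch: Monte Carlo check that E[theta2_ML] = var*(Lobs-1)/Lobs for
# Gaussian data. Mean, variance, Lobs, and trial count are illustrative.
rng = np.random.default_rng(3)
Lobs, mu, var, trials = 4, 1.0, 2.0, 200_000

X = rng.normal(mu, np.sqrt(var), size=(trials, Lobs))
theta1 = X.mean(axis=1)                             # ML estimate of the mean
theta2 = ((X - theta1[:, None])**2).mean(axis=1)    # ML estimate of the variance

print(theta2.mean())                                # near var*(Lobs-1)/Lobs
print(theta2.mean() * Lobs / (Lobs - 1))            # bias-corrected: near var
```

With $L = 4$ the average of the ML variance estimates clusters around $3/4$ of the true variance, and rescaling by $L/(L-1)$ removes the bias, as the text states.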
Cramer-Rao Bound
The mean-square estimation error for any estimate of a nonrandom parameter has a lower bound, the Cramer-
Rao bound [6: pp. 474–477], which defines the ultimate accuracy of any estimation procedure. This lower
bound, as shown later, is intimately related to the maximum likelihood estimator.
We seek a bound on the mean-squared error matrix $\mathbf{M}$ defined to be
$$\mathbf{M} = E\left[(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})^t\right]$$
A matrix is lower bounded by a second matrix if the difference between the two is a non-negative definite
matrix. Define the column matrix $\mathbf{x}$ to be
$$\mathbf{x} = \begin{bmatrix} \hat{\boldsymbol{\theta}}-\boldsymbol{\theta}-\mathbf{b}(\boldsymbol{\theta}) \\ \nabla_{\boldsymbol{\theta}} \ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta}) \end{bmatrix}$$
where $\mathbf{b}(\boldsymbol{\theta})$ denotes the column matrix of estimator biases. To derive the Cramer-Rao bound, evaluate $E[\mathbf{x}\mathbf{x}^t]$.
$$E[\mathbf{x}\mathbf{x}^t] = \begin{bmatrix} \mathbf{M}-\mathbf{b}\mathbf{b}^t & \mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b} \\ \left(\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right)^t & \mathbf{F} \end{bmatrix}$$
where $\nabla_{\boldsymbol{\theta}}\mathbf{b}$ represents the matrix of partial derivatives of the bias $[\partial b_i/\partial\theta_j]$ and the matrix $\mathbf{F}$ is the Fisher
information matrix
$$\mathbf{F} = E\left[\nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})\left(\nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})\right)^t\right] = -E\left[\nabla_{\boldsymbol{\theta}}\nabla_{\boldsymbol{\theta}}^t\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})\right] \quad (3.2)$$
The notation $\nabla\nabla^t$ means the matrix of all second partials of the quantity it operates on (the gradient of the
gradient). This matrix is known as the Hessian. Demonstrating the equivalence of these two forms for the
Fisher information is quite easy. Because $\int p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})\,d\mathbf{X} = 1$ for all choices of the parameter vector, the gradient of this expression equals zero. Furthermore, $\nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta}) = \nabla_{\boldsymbol{\theta}} p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})/p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})$. Combining
these results yields
$$\int \nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{x};\boldsymbol{\theta})\,p_{\mathbf{X}}(\mathbf{x};\boldsymbol{\theta})\,d\mathbf{x} = 0$$
Evaluating the gradient of this quantity (using the chain rule) also yields zero.
$$\int\left[\nabla_{\boldsymbol{\theta}}\nabla_{\boldsymbol{\theta}}^t\ln p_{\mathbf{X}}(\mathbf{x};\boldsymbol{\theta})\,p_{\mathbf{X}}(\mathbf{x};\boldsymbol{\theta}) + \nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{x};\boldsymbol{\theta})\left(\nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{x};\boldsymbol{\theta})\right)^t p_{\mathbf{X}}(\mathbf{x};\boldsymbol{\theta})\right]d\mathbf{x} = 0$$
or
$$E\left[\nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})\left(\nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})\right)^t\right] = -E\left[\nabla_{\boldsymbol{\theta}}\nabla_{\boldsymbol{\theta}}^t\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})\right]$$
Calculating the expected value for the Hessian form is sometimes easier than finding the expected value of
the outer product of the gradient with itself. In the scalar case, we have
$$E\left[\left(\frac{\partial}{\partial\theta}\ln p_{X}(X;\theta)\right)^2\right] = -E\left[\frac{\partial^2}{\partial\theta^2}\ln p_{X}(X;\theta)\right]$$
Returning to the derivation, the matrix $E[\mathbf{x}\mathbf{x}^t]$ is non-negative definite because it is a correlation matrix.
Thus, for any column matrix $\boldsymbol{\delta}$, the quadratic form $\boldsymbol{\delta}^t E[\mathbf{x}\mathbf{x}^t]\boldsymbol{\delta}$ is non-negative. Choose a form for $\boldsymbol{\delta}$ that
simplifies the quadratic form. A convenient choice is
$$\boldsymbol{\delta} = \begin{bmatrix} \boldsymbol{\alpha} \\ -\mathbf{F}^{-1}\left(\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right)^t\boldsymbol{\alpha} \end{bmatrix}$$
Sec. 3.2 Parameter Estimation 79
where $\boldsymbol{\alpha}$ is an arbitrary column matrix. The quadratic form becomes in this case
$$\boldsymbol{\delta}^t E[\mathbf{x}\mathbf{x}^t]\boldsymbol{\delta} = \boldsymbol{\alpha}^t\left[\mathbf{M}-\mathbf{b}\mathbf{b}^t-\left(\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right)\mathbf{F}^{-1}\left(\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right)^t\right]\boldsymbol{\alpha}$$
As this quadratic form must be non-negative, the matrix expression enclosed in brackets must be non-negative
definite. We thus obtain the well-known Cramer-Rao bound on the mean-square error matrix.
$$E\left[(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})^t\right] \geq \mathbf{b}\mathbf{b}^t + \left(\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right)\mathbf{F}^{-1}\left(\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right)^t$$
This form for the Cramer-Rao bound does not mean that each term in the matrix of squared errors is
greater than the corresponding term in the bounding matrix. As stated earlier, this expression means that the
difference between these matrices is non-negative definite. For a matrix to be non-negative definite, each term
on the main diagonal must be non-negative. The elements of the main diagonal of $E[(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})^t]$ are the squared
errors of the estimates of the individual parameters. Thus, for each parameter, the mean-squared estimation
error can be no smaller than
$$E\left[(\hat{\theta}_i-\theta_i)^2\right] \geq b_i^2(\boldsymbol{\theta}) + \left[\left(\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right)\mathbf{F}^{-1}\left(\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right)^t\right]_{ii}$$
This bound simplifies greatly if the estimator is unbiased ($\mathbf{b} = \mathbf{0}$). In this case, the Cramer-Rao bound
becomes
$$E\left[(\hat{\theta}_i-\theta_i)^2\right] \geq \left[\mathbf{F}^{-1}\right]_{ii}$$
Thus, the mean-squared error for each parameter in a multiple-parameter, unbiased-estimator problem can
be no smaller than the corresponding diagonal term in the inverse of the Fisher information matrix. In such
problems, the error characteristics of the estimate of any parameter become intertwined with those of the other parameters
in a complicated way. Any estimator satisfying the Cramer-Rao bound with equality is said to be efficient.
Example
Let's evaluate the Cramer-Rao bound for the example we have been discussing: the estimation of the
mean and variance of a length-$L$ sequence of statistically independent Gaussian random variables. Let
the estimate of the mean $\theta_1$ be the sample average $\hat{\theta}_1 = \sum_l X(l)/L$; as shown in the last example, this
estimate is unbiased. Let the estimate of the variance $\theta_2$ be the unbiased estimate $\hat{\theta}_2 = \sum_l\bigl(X(l)-\hat{\theta}_1\bigr)^2/(L-1)$. Each term in the Fisher information matrix $\mathbf{F}$ is given by the expected value of the
paired products of derivatives of the logarithm of the likelihood function.
$$F_{ij} = E\left[\frac{\partial\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})}{\partial\theta_i}\cdot\frac{\partial\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})}{\partial\theta_j}\right]$$
The logarithm of the likelihood function is
$$\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta}) = -\frac{L}{2}\ln 2\pi\theta_2 - \frac{1}{2\theta_2}\sum_{l=0}^{L-1}\left(X(l)-\theta_1\right)^2$$
and its partial derivatives are
$$\frac{\partial\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})}{\partial\theta_1} = \frac{1}{\theta_2}\sum_{l=0}^{L-1}\left(X(l)-\theta_1\right) \quad (3.3)$$
$$\frac{\partial\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})}{\partial\theta_2} = -\frac{L}{2\theta_2} + \frac{1}{2\theta_2^2}\sum_{l=0}^{L-1}\left(X(l)-\theta_1\right)^2 \quad (3.4)$$
The second partial derivatives are
$$\frac{\partial^2\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})}{\partial\theta_1^2} = -\frac{L}{\theta_2} \qquad
\frac{\partial^2\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})}{\partial\theta_1\,\partial\theta_2} = -\frac{1}{\theta_2^2}\sum_{l=0}^{L-1}\left(X(l)-\theta_1\right)$$
$$\frac{\partial^2\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})}{\partial\theta_2^2} = \frac{L}{2\theta_2^2} - \frac{1}{\theta_2^3}\sum_{l=0}^{L-1}\left(X(l)-\theta_1\right)^2$$
Taking expected values of the negated Hessian, the Fisher information matrix is diagonal:
$$\mathbf{F} = \begin{bmatrix} L/\theta_2 & 0 \\ 0 & L/2\theta_2^2 \end{bmatrix}$$
Because this matrix is diagonal, its inverse is also a diagonal matrix with the elements on the main diagonal equalling the reciprocal of
those in the original matrix. Because of the zero-valued off-diagonal entries in the Fisher information
matrix, the errors between the corresponding estimates are not inter-dependent. In this problem, the
mean-square estimation errors can be no smaller than
$$E\left[(\hat{\theta}_1-\theta_1)^2\right] \geq \frac{\theta_2}{L} \qquad E\left[(\hat{\theta}_2-\theta_2)^2\right] \geq \frac{2\theta_2^2}{L}$$
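These two bounds are easy to probe with a Monte Carlo sketch (the parameter values and trial count below are arbitrary): the sample mean meets its bound $\theta_2/L$ exactly, while the unbiased variance estimate has mean-squared error $2\theta_2^2/(L-1)$, slightly above the bound $2\theta_2^2/L$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta1, theta2, L, trials = 0.0, 2.0, 50, 20000

X = rng.normal(theta1, np.sqrt(theta2), (trials, L))
m = X.mean(axis=1)                      # sample-mean estimates of theta1
v = X.var(axis=1, ddof=1)               # unbiased estimates of theta2

mse_mean = np.mean((m - theta1) ** 2)   # ~ theta2/L: achieves the bound
mse_var = np.mean((v - theta2) ** 2)    # ~ 2*theta2^2/(L-1): above the bound 2*theta2^2/L
print(mse_mean, theta2 / L)
print(mse_var, 2 * theta2 ** 2 / (L - 1), 2 * theta2 ** 2 / L)
```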
Note that nowhere in the preceding example did the form of the estimator enter into the computation
of the bound. The only quantity used in the computation of the Cramer-Rao bound is the logarithm of the
likelihood function, which is a consequence of the problem statement, not how it is solved. Only in the
case of unbiased estimators is the bound independent of the estimators used. Because of this property, the
Cramer-Rao bound is frequently used to assess the performance limits that can be obtained with an unbiased
estimator in a particular problem. When bias is present, the exact form of the estimator's bias explicitly enters
the computation of the bound. All too frequently, the unbiased form is used in situations where the existence
of an unbiased estimator can be questioned. As we shall see, one such problem is time delay estimation,
presumably of some importance to the reader. This misapplication of the unbiased Cramer-Rao bound arises from
desperation: the estimator is so complicated and nonlinear that computing the bias is nearly impossible. As
shown in Problem 3.9, biased estimators can yield mean-squared errors smaller as well as larger than the
unbiased version of the Cramer-Rao bound. Consequently, desperation can yield misinterpretation when a
general result is misapplied.
In the single-parameter estimation problem, the Cramer-Rao bound incorporating bias has the well-known
form
$$\epsilon^2 \geq b^2(\theta) + \frac{\left(1+\dfrac{db}{d\theta}\right)^2}{E\left[\left(\dfrac{\partial}{\partial\theta}\ln p_{X}(X;\theta)\right)^2\right]}$$
Note that the sign of the bias's derivative determines whether this bound is larger or potentially smaller than
the unbiased version, which is obtained by setting the bias term to zero.
That's why we assumed in the example that we used an unbiased estimator for the variance.
Note that this bound differs somewhat from that originally given by Cramer [6: p. 480]; his derivation ignores the additive bias term
$\mathbf{b}\mathbf{b}^t$.
Sec. 3.2 Parameter Estimation 81
Efficiency
An interesting question arises: when, if ever, is the bound satisfied with equality? Recalling the details of
the derivation of the bound, equality results when the quantity $E[(\boldsymbol{\delta}^t\mathbf{x})^2]$ equals zero. As this quantity is the
expected value of the square of $\boldsymbol{\delta}^t\mathbf{x}$, it can only equal zero if $\boldsymbol{\delta}^t\mathbf{x} = 0$. Substituting in the form of the column
matrices $\boldsymbol{\delta}$ and $\mathbf{x}$, equality in the Cramer-Rao bound results whenever
$$\nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta}) = \mathbf{F}\left[\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right]^{-1}\left[\hat{\boldsymbol{\theta}}(\mathbf{X})-\boldsymbol{\theta}-\mathbf{b}\right] \quad (3.5)$$
This complicated expression means that only if estimation problems (as expressed by the a priori density)
have the form of the right side of this equation can the mean-square estimation error equal the Cramer-Rao
bound. In particular, the gradient of the log likelihood function can only depend on the observations through
the estimator. In all other problems, the Cramer-Rao bound is a lower bound but not a tight one; no estimator
can have error characteristics that equal it. In such cases, we have limited insight into ultimate limitations on
estimation error size with the Cramer-Rao bound. However, consider the case where the estimator is unbiased
($\mathbf{b} = \mathbf{0}$). In addition, note the maximum likelihood estimate occurs when the gradient of the logarithm of the
likelihood function equals zero: $\nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta}) = 0$ when $\boldsymbol{\theta} = \hat{\boldsymbol{\theta}}_{ML}$. Evaluating the equality condition at this point, the condition for equality
in the Cramer-Rao bound becomes
$$\mathbf{F}\left[\hat{\boldsymbol{\theta}}-\hat{\boldsymbol{\theta}}_{ML}\right] = 0$$
As the Fisher information matrix is positive-definite, we conclude that if the estimator equals the maximum
likelihood estimator, equality in the Cramer-Rao bound can be achieved. To summarize, if the Cramer-Rao
bound can be satisfied with equality, only the maximum likelihood estimate will achieve it. To use estimation-theoretic terminology, if an efficient estimate exists, it is the maximum likelihood estimate. This result stresses
the importance of maximum likelihood estimates, despite the seemingly ad hoc manner by which they are
defined.
Example
Consider the Gaussian example being examined so frequently in this section. The components of the
gradient of the logarithm of the likelihood function were given earlier by equations (3.3) and (3.4) {79}. These
expressions can be rearranged to reveal
$$\nabla_{\boldsymbol{\theta}}\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta}) = \begin{bmatrix} \dfrac{L}{\theta_2}\left(\dfrac{1}{L}\displaystyle\sum_l X(l) - \theta_1\right) \\[2ex] \dfrac{L}{2\theta_2^2}\left(\dfrac{1}{L}\displaystyle\sum_l\left(X(l)-\theta_1\right)^2 - \theta_2\right) \end{bmatrix}$$
The first component, which corresponds to the estimate of the mean, is expressed in the form required
for the existence of an efficient estimate. The second component, the partial with respect to the variance $\theta_2$, cannot be rewritten in a similar fashion. No unbiased, efficient estimate of the variance
exists in this problem. The mean-squared error of the variance's unbiased estimate, but not the maximum likelihood estimate, is lower-bounded by $2\theta_2^2/(L-1)$. This error is strictly greater than the
Cramer-Rao bound of $2\theta_2^2/L$. As no unbiased estimate of the variance can have a mean-squared error
equal to the Cramer-Rao bound (no efficient estimate exists for the variance in the Gaussian problem),
one presumes that the closeness of the error of our unbiased estimator to the bound implies that it
possesses the smallest squared-error of any estimate. This presumption may, of course, be incorrect.
Asymptotically, the maximum likelihood estimate has several important properties.
1. The maximum likelihood estimate is at least asymptotically unbiased. It may be unbiased for any number of observations (as in the estimation of the mean of a sequence of independent random variables)
for some problems.
2. The maximum likelihood estimate is consistent.
3. The maximum likelihood estimate is asymptotically efficient. As more and more data are incorporated
into an estimate, the Cramer-Rao bound accurately projects the best attainable error and the maximum
likelihood estimate has those optimal characteristics.
4. Asymptotically, the maximum likelihood estimate is distributed as a Gaussian random variable. Because of the previous properties, the mean asymptotically equals the parameter and the covariance
matrix equals the inverse of the Fisher information matrix.
Most would agree that a good estimator should have these properties. What these results do not provide
is an assessment of how many observations are needed for the asymptotic results to apply to some specified
degree of precision. Consequently, they should be used with caution; for instance, some other estimator may
have a smaller mean-square error than the maximum likelihood for a modest number of observations.
Manipulating this equation to make the universality constraint more transparent results in
$$E\left[\sum_{k=0}^{L-1} h(k)\left(\sum_{l=0}^{L-1} h_{LIN}(l)X(l) - \theta\right)X(k)\right] = 0 \quad \text{for all } h(\cdot)$$
Written in this way, the expected value must be 0 for each value of $k$ to satisfy the constraint. Thus, the
unit-sample response $h_{LIN}(\cdot)$ of the estimator of the signal's amplitude must satisfy
$$\sum_{l=0}^{L-1} h_{LIN}(l)\,E\left[X(l)X(k)\right] = E\left[\theta X(k)\right] \quad \text{for all } k$$
Sec. 3.3 Signal Parameter Estimation 83
Assuming that the signal's amplitude has zero mean and is statistically independent of the zero-mean noise,
the expected values in this equation are given by
$$E\left[X(l)X(k)\right] = \sigma_\theta^2\,s(l)s(k) + K_N(k,l) \qquad E\left[\theta X(k)\right] = \sigma_\theta^2\,s(k)$$
where $K_N(k,l)$ is the covariance function of the noise. The equation that must be solved for the unit-sample
response $h_{LIN}(\cdot)$ of the optimal linear MMSE estimator of signal amplitude becomes
$$\sum_{l=0}^{L-1} h_{LIN}(l)K_N(k,l) = \sigma_\theta^2\,s(k)\left[1 - \sum_{l=0}^{L-1} h_{LIN}(l)s(l)\right] \quad \text{for all } k$$
This equation is easily solved once phrased in matrix notation. Letting $\mathbf{K}_N$ denote the covariance matrix of
the noise, $\mathbf{s}$ the signal vector, and $\mathbf{h}_{LIN}$ the vector of coefficients, this equation becomes
$$\mathbf{K}_N\mathbf{h}_{LIN} = \sigma_\theta^2\left(1 - \mathbf{s}^t\mathbf{h}_{LIN}\right)\mathbf{s}$$
The matched filter for colored-noise problems consisted of the dot product between the vector of observations
and $\mathbf{K}_N^{-1}\mathbf{s}$ (see the detector result {127}). Assume that the solution to the linear estimation problem is proportional to the matched filter's weights: $\mathbf{h}_{LIN} = c\,\mathbf{K}_N^{-1}\mathbf{s}$.
Substituting the vector expression for $\mathbf{h}_{LIN}$ yields the result that the mean-squared estimation error equals the
proportionality constant $c$.
$$\epsilon^2 = c = \frac{\sigma_\theta^2}{1 + \sigma_\theta^2\,\mathbf{s}^t\mathbf{K}_N^{-1}\mathbf{s}}$$
Thus, the linear filter that produces the optimal estimate of signal amplitude is equivalent to the matched
filter used to detect the signal's presence. We have found this situation to occur when estimates of unknown
parameters are needed to solve the detection problem. If we had not assumed the noise to be Gaussian,
however, this detection-theoretic result would be different, but the estimator would be unchanged. To repeat,
this invariance occurs because the linear MMSE estimator requires no assumptions on the noise's amplitude
characteristics.
Example
Let the noise be white so that its covariance matrix is proportional to the identity matrix ($\mathbf{K}_N = \sigma_N^2\mathbf{I}$).
The weighting factor in the minimum mean-squared error linear estimator is proportional to the signal
waveform. Taking the signal to have unit energy ($\sum_l s^2(l) = 1$),
$$h_{LIN}(l) = \frac{\sigma_\theta^2}{\sigma_N^2+\sigma_\theta^2}\,s(l) \qquad \hat{\theta}_{LIN} = \frac{\sigma_\theta^2}{\sigma_N^2+\sigma_\theta^2}\sum_{l=0}^{L-1} s(l)X(l)$$
This proportionality constant depends only on the relative variances of the noise and the parameter. If
the noise variance can be considered to be much smaller than the a priori variance of the amplitude,
then this constant does not depend on these variances and equals unity. Otherwise, the variances must
be known.
We find the mean-squared estimation error to be
$$\epsilon^2 = \frac{\sigma_\theta^2}{1+\sigma_\theta^2/\sigma_N^2}$$
This error is significantly reduced from its nominal value $\sigma_\theta^2$ only when the variance of the noise
is small compared with the a priori variance of the amplitude. Otherwise, this admittedly optimum
amplitude estimate performs poorly, and we might as well have ignored the data and guessed that
the amplitude was zero.
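This white-noise estimator and its error formula are simple to simulate. A sketch (the unit-energy waveform, variances, and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
L, var_theta, var_N, trials = 32, 4.0, 1.0, 20000

s = np.ones(L) / np.sqrt(L)                  # known waveform with unit energy
theta = rng.normal(0.0, np.sqrt(var_theta), trials)
X = theta[:, None] * s + rng.normal(0.0, np.sqrt(var_N), (trials, L))

gain = var_theta / (var_N + var_theta)       # the scalar weighting factor
theta_lin = gain * (X @ s)                   # scaled sum of s(l) X(l)

mse = np.mean((theta_lin - theta) ** 2)
predicted = var_theta / (1.0 + var_theta / var_N)
print(mse, predicted)
```

With these variances the predicted error is $4/5$, noticeably below the nominal value $\sigma_\theta^2 = 4$ only because the noise variance is comparatively small.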
$$X(l) = s(l;\boldsymbol{\theta}) + N(l), \quad l = 0,\ldots,L-1$$
The vector of observations $\mathbf{X}$ is formed from the data in the obvious way. Evaluating the logarithm of the
observation vector's joint density,
$$\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta}) = -\frac{1}{2}\ln\det\left(2\pi\mathbf{K}_N\right) - \frac{1}{2}\left(\mathbf{X}-\mathbf{s}(\boldsymbol{\theta})\right)^t\mathbf{K}_N^{-1}\left(\mathbf{X}-\mathbf{s}(\boldsymbol{\theta})\right)$$
where $\mathbf{s}(\boldsymbol{\theta})$ is the signal vector having $P$ unknown parameters, and $\mathbf{K}_N$ is the covariance matrix of the noise.
The partial derivative of this likelihood function with respect to the $i$th parameter $\theta_i$ is, for real-valued signals,
$$\frac{\partial\ln p_{\mathbf{X}}(\mathbf{X};\boldsymbol{\theta})}{\partial\theta_i} = \left(\mathbf{X}-\mathbf{s}(\boldsymbol{\theta})\right)^t\mathbf{K}_N^{-1}\frac{\partial\mathbf{s}(\boldsymbol{\theta})}{\partial\theta_i}$$
and the maximum likelihood estimate solves the set of equations
$$\left.\left(\mathbf{X}-\mathbf{s}(\boldsymbol{\theta})\right)^t\mathbf{K}_N^{-1}\frac{\partial\mathbf{s}(\boldsymbol{\theta})}{\partial\theta_i}\right|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}_{ML}} = 0, \quad i = 1,\ldots,P$$
The Cramer-Rao bound depends on the evaluation of the Fisher information matrix $\mathbf{F}$. The elements of
this matrix are found to be
$$F_{ij} = \frac{\partial\mathbf{s}^t(\boldsymbol{\theta})}{\partial\theta_i}\,\mathbf{K}_N^{-1}\,\frac{\partial\mathbf{s}(\boldsymbol{\theta})}{\partial\theta_j}, \quad i,j = 1,\ldots,P \quad (3.6)$$
In other words, the problem is difficult in this case.
Further computation of the Cramer-Rao bound's components is problem dependent if more than one parameter
is involved and the off-diagonal terms of $\mathbf{F}$ are nonzero. If only one parameter is unknown, the Cramer-Rao
bound is given by
$$\epsilon^2 \geq b^2(\theta) + \frac{\left(1+\dfrac{db}{d\theta}\right)^2}{\dfrac{\partial\mathbf{s}^t}{\partial\theta}\,\mathbf{K}_N^{-1}\,\dfrac{\partial\mathbf{s}}{\partial\theta}}$$
When the signal depends on the parameter nonlinearly (which constitutes the interesting case), the maximum
likelihood estimate is usually biased. Thus, the numerator of the expression for the bound cannot be ignored.
One interesting special case occurs when the noise is white. The Cramer-Rao bound becomes
$$\epsilon^2 \geq b^2(\theta) + \frac{\sigma_N^2\left(1+\dfrac{db}{d\theta}\right)^2}{\displaystyle\sum_{l=0}^{L-1}\left(\frac{\partial s(l;\theta)}{\partial\theta}\right)^2}$$
The derivative of the signal with respect to the parameter can be interpreted as the sensitivity of the signal
to the parameter. The mean-squared estimation error depends on the integrated squared sensitivity: The
greater this sensitivity, the smaller the bound.
For an efficient estimate of a signal parameter to exist, the estimate must satisfy the condition we derived
earlier (Eq. 3.5 {81}).
$$\left[\nabla_{\boldsymbol{\theta}}\mathbf{s}\right]^t\mathbf{K}_N^{-1}\left(\mathbf{X}-\mathbf{s}(\boldsymbol{\theta})\right) \stackrel{?}{=} \left[\nabla_{\boldsymbol{\theta}}\mathbf{s}\right]^t\mathbf{K}_N^{-1}\left[\nabla_{\boldsymbol{\theta}}\mathbf{s}\right]\left[\mathbf{I}+\nabla_{\boldsymbol{\theta}}\mathbf{b}\right]^{-1}\left[\hat{\boldsymbol{\theta}}(\mathbf{X})-\boldsymbol{\theta}-\mathbf{b}\right]$$
Because of the complexity of this requirement, we quite rightly question the existence of any efficient estimator, especially when the signal depends nonlinearly on the parameter (see Problem 3.15).
Example
Let the unknown parameter be the signal's amplitude; the signal is expressed as $\theta s(l)$ and is observed
in the presence of additive noise. The maximum likelihood estimate of the amplitude is the solution
of the equation
$$\left(\mathbf{X}-\hat{\theta}_{ML}\mathbf{s}\right)^t\mathbf{K}_N^{-1}\mathbf{s} = 0$$
The form of this equation suggests that the maximum likelihood estimate is efficient. The amplitude
estimate is given by
$$\hat{\theta}_{ML} = \frac{\mathbf{X}^t\mathbf{K}_N^{-1}\mathbf{s}}{\mathbf{s}^t\mathbf{K}_N^{-1}\mathbf{s}}$$
The form of this estimator is precisely that of the matched filter derived in the colored-noise situation
(see Eq. 4.9 {127}). The expected value of the estimate equals the actual amplitude. Thus the bias is
zero and the Cramer-Rao bound is given by
$$\epsilon^2 \geq \frac{1}{\mathbf{s}^t\mathbf{K}_N^{-1}\mathbf{s}}$$
Substituting into the efficiency condition, we find it satisfied:
$$\mathbf{s}^t\mathbf{K}_N^{-1}\left(\mathbf{X}-\theta\mathbf{s}\right) = \mathbf{s}^t\mathbf{K}_N^{-1}\mathbf{s}\left(\hat{\theta}_{ML}-\theta\right)$$
[Figure 3.1 shows three panels, Interpolation ($l_e < 0$), Filtering ($l_e = 0$), and Prediction ($l_e > 0$), each indicating the observation interval $[L_i, L_f]$ relative to the estimation time.]
Figure 3.1: The three classical categories of linear signal waveform estimation are defined by the observation
interval's relation to the time at which we want to estimate the signal value. As time evolves, so does the
observation interval so that $l_e$, the interval between the last observation and the estimation time, is fixed.
The maximum likelihood estimate of the amplitude thus has fixed error characteristics that do not depend on the actual signal amplitude. A signal-to-noise ratio for the estimate, defined to be $\theta^2/\epsilon^2$, equals the signal-to-noise ratio
of the observed signal.
When the amplitude is well described as a random variable, its linear minimum mean-squared
error estimator has the form
$$\hat{\theta}_{LIN} = \frac{\sigma_\theta^2\,\mathbf{X}^t\mathbf{K}_N^{-1}\mathbf{s}}{1+\sigma_\theta^2\,\mathbf{s}^t\mathbf{K}_N^{-1}\mathbf{s}}$$
which we found in the white-noise case becomes a weighted version of the maximum likelihood
estimate (see the example {83}).
$$\hat{\theta}_{LIN} = \frac{\sigma_\theta^2}{\sigma_N^2+\sigma_\theta^2}\,\mathbf{X}^t\mathbf{s}$$
Seemingly, these two estimators are being used to solve the same problem: estimating the amplitude of
a signal whose waveform is known. They make very different assumptions, however, about the nature
of the unknown parameter; in one it is a random variable (and thus it has a variance), whereas in the
other it is not (and variance makes no sense). Despite this fundamental difference, the computations
for each estimator are equivalent. It is reassuring that different approaches to solving similar problems
yield similar procedures.
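Both amplitude estimators reduce to the same dot product $\mathbf{X}^t\mathbf{K}_N^{-1}\mathbf{s}$, differently scaled, which the following sketch makes concrete (the exponential noise covariance, cosine waveform, and amplitude are arbitrary, hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(3)
L, var_theta, true_amp = 16, 2.0, 1.5
idx = np.arange(L)

KN = 0.8 ** np.abs(idx[:, None] - idx[None, :])   # hypothetical colored-noise covariance
s = np.cos(2 * np.pi * 0.1 * idx)                 # known signal waveform

X = true_amp * s + np.linalg.cholesky(KN) @ rng.normal(size=L)

KNinv_s = np.linalg.solve(KN, s)                  # matched-filter weights K_N^{-1} s
stat = X @ KNinv_s                                # common sufficient statistic

theta_ml = stat / (s @ KNinv_s)
theta_lin = var_theta * stat / (1.0 + var_theta * (s @ KNinv_s))
print(theta_ml, theta_lin)
```

The two estimates differ only by a deterministic scale factor that shrinks the random-amplitude estimate toward zero.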
Interpolation. The interpolation or smoothing problem is to estimate the signal at some moment within the
observation interval ($l_e < 0$). Observations are thus considered before and after the time at which the
signal needs to be estimated. In practice, applying interpolation filtering means that the estimated signal
waveform is produced some time after it occurred.
Sec. 3.4 Linear Signal Waveform Estimation 87
Filtering. We estimate the signal at the end of the observation interval ($l_e = 0$). Thus, a waveform estimate is
produced as soon as the signal is observed. The filtering problem arises when we want to remove noise
(as much as possible) from noise-corrupted signal observations as they are obtained.
Prediction. Here, we attempt to predict the signal's value at some future time ($l_e > 0$). The signal's structure
must be well known to enable us to predict what values the signal obtains. Prediction filters have
obvious applications in sonar/radar tracking and stock market analysis. Of all the waveform estimation
problems, this one produces the largest errors.
Waveform estimation algorithms are not defined by this categorization; each technique can be applied
to each type of problem (in most cases). Instead, the algorithms are defined according to the signal model.
Correctness of the signal model governs the utility of a given technique. Because the signal usually appears
linearly in the expression for the observations (the noise is usually additive), linear waveform estimation
methods, i.e., filters, are frequently employed.
The observations consist of a signal-related component and noise,
$$X(l) = \tilde{S}(l) + N(l)$$
and the linear estimate of the signal's value at time $L_f + l_e$ is formed from the observations according to
$$\hat{S}(L_f+l_e) = \sum_{k=L_i}^{L_f} h(L_f,k)X(k)$$
The estimate of the signal's value at $L_f + l_e$ is thus produced at time $L_f$ in the filter's output. The duration of
the filter's unit-sample response extends over the entire observation interval $[L_i, L_f]$.
The Orthogonality Principle that proved so useful in linear parameter estimation can be applied here.
It states that the estimation error must be orthogonal to all linear transformations of the observations (see
Eq. 3.1 {74}). For the waveform estimation problem, this requirement implies that
$$E\left[\left(S(L_f+l_e)-\hat{S}(L_f+l_e)\right)\sum_{k=L_i}^{L_f} h(L_f,k)X(k)\right] = 0 \quad \text{for all } h(\cdot,\cdot)$$
This expression implies that each observed value must be orthogonal to the estimation error at time $L_f + l_e$.
$$E\left[\left(S(L_f+l_e)-\sum_{j=L_i}^{L_f} h(L_f,j)X(j)\right)X(k)\right] = 0 \quad \text{for all } k \text{ in } [L_i, L_f]$$
Simplifying this expression, the fundamental equation that determines the unit-sample response of the linear
minimum mean-squared error filter is
$$\sum_{j=L_i}^{L_f} h(L_f,j)K_X(j,k) = K_{S\tilde{S}}(L_f+l_e,k) \quad \text{for all } k \text{ in } [L_i, L_f]$$
where $K_X(k,l)$ is the covariance function of the observations, equaling $E[X(k)X(l)]$, and $K_{S\tilde{S}}(L_f+l_e,k)$ is the
cross-covariance between the signal at $L_f+l_e$ and the signal-related component of the observation at $k$. When
the signal and noise are uncorrelated, $K_X(k,l) = K_{\tilde{S}}(k,l)+K_N(k,l)$. Given these quantities, the preceding
equation must then be solved for the unit-sample response of the optimum filter. This equation is known as
the generalized Wiener-Hopf equation.
From the general theory of linear estimators, the mean-squared estimation error at index $l$ equals the
variance of the quantity being estimated minus the estimate's projection onto the signal.
$$\epsilon^2(l) = K_S(l,l) - E\left[\hat{S}(l)S(l)\right]$$
Expressing the signal estimate as a linear filtering operation on the observations, this expression becomes
$$\epsilon^2(l) = K_S(l,l) - \sum_{k=L_i}^{L_f} h(L_f,k)K_{S\tilde{S}}(l,k)$$
Further reduction of this expression is usually problem dependent, as succeeding sections illustrate.
3.4.2 Wiener Filters
Wiener filters are the solutions of the linear minimum mean-squared waveform estimation problem for the
special case in which the noise and the signal are stationary random sequences [13: 100–118];[40: 481–515];[43]. The covariance functions appearing in the generalized Wiener-Hopf equation thus depend on the
difference of their arguments. Considering the form of this equation, one would expect the unit-sample
response of the optimum filter to depend on its arguments in a similar fashion. This presumption is in fact
valid, and Wiener filters are always time invariant.
$$\hat{S}(L_f+l_e) = \sum_{k=L_i}^{L_f} h(L_f-k)X(k)$$
We consider first the case in which the initial observation time $L_i$ equals $-\infty$. The resulting filter uses all
of the observations available at any moment. The errors that result from using this filter are smaller than
those obtained when the filter is constrained to use a finite number of observations (such as some number of
recent samples). The choice of $L_i = -\infty$ corresponds to an infinite-duration impulse response (IIR) Wiener
filter; in a succeeding section, $L_i$ is finite and a finite-duration impulse response (FIR) Wiener filter results.
The error characteristics of the IIR Wiener filter generally bound those of FIR Wiener filters because more
observations are used. We write the generalized Wiener-Hopf equation for the IIR case as
$$\sum_{j=-\infty}^{L_f} h(L_f-j)K_X(j-k) = K_{S\tilde{S}}(L_f+l_e-k) \quad \text{for all } k \text{ in } (-\infty, L_f]$$
Changing summation variables results in the somewhat simpler expression known as the Wiener-Hopf equation. It and the expression for the mean-squared estimation error are given by
$$\sum_{k=0}^{\infty} h(k)K_X(l-k) = K_{S\tilde{S}}(l+l_e) \quad \text{for all } l \text{ in } [0,\infty) \quad (3.7)$$
$$\epsilon^2 = K_S(0) - \sum_{k=0}^{\infty} h(k)K_{S\tilde{S}}(l_e+k)$$
Presumably, observations have been continuously available since the beginning of the universe.
The first term in the error expression is the signal variance. The mean-squared error of the signal estimate
cannot be greater than this quantity; this error results when the estimate always equals 0.
In many circumstances, we want to estimate the signal directly contained in the observations: $X = S + N$.
This situation leads to a somewhat simpler form for the Wiener-Hopf equation.
$$\sum_{k=0}^{\infty} h(k)\left[K_S(l-k)+K_N(l-k)\right] = K_S(l+l_e) \quad \text{for all } l \text{ in } [0,\infty)$$
It is this form we solve, but the previous one is required in its solution.
Solving the Wiener-Hopf equation. The Wiener-Hopf equation at first glance appears to be a convolution sum, implying that the optimum filter's frequency response could be easily found. The constraining
condition (the equation applies only for the variable $l$ in the interval $[0,\infty)$) means, however, that Fourier
techniques cannot be used for the general case. If the Fourier Transform of the left side of the Wiener-Hopf
equation were evaluated only over the constraining interval, the covariance function on the left would be
implicitly assumed 0 outside the interval, which is usually not the case. Simply stated but mathematically
complicated, the covariance function of the signal outside this interval is not to be considered in the solution
of the equation.
One set of circumstances does allow Fourier techniques. Let the Wiener filter be noncausal with $L_f = \infty$.
In this case, the Wiener-Hopf equation becomes
$$K_S(l) = \sum_{k=-\infty}^{\infty} h(k)K_X(l-k) \quad \text{for all } l$$
As this equation must be valid for all values of $l$, a convolution sum emerges. The frequency response $H(f)$
of the optimum filter is thus given by
$$H(f) = \frac{\mathcal{S}(f)}{\mathcal{S}(f)+\mathcal{N}(f)}$$
where $\mathcal{S}(f)$ and $\mathcal{N}(f)$ are, respectively, the signal and the noise power spectra. Because this expression
is real and even, the unit-sample response of the optimum filter is also real and even. The filter is therefore
noncausal and usually has an infinite-duration unit-sample response. This result is not often used in temporal
signal processing but may find applications in spatial problems. Be that as it may, because this filter can use
the entire set of observations to estimate the signal's value at any moment, it yields the smallest estimation
error of any linear filter. Computing this error thus establishes a bound on how well any causal or FIR Wiener
filter performs. The mean-squared estimation error of the noncausal Wiener filter can be expressed in the time
domain or frequency domain.
$$\epsilon^2 = K_S(0) - \sum_{l=-\infty}^{\infty} h(l)K_S(l) = \int_{-1/2}^{1/2}\frac{\mathcal{S}(f)\mathcal{N}(f)}{\mathcal{S}(f)+\mathcal{N}(f)}\,df$$
The causal solution to the Wiener-Hopf equation, the frequency response of the causal Wiener filter, is the
product of two terms: the frequency response of a whitening filter and the frequency response of the signal
estimation filter based on whitened observations [40: 482–493].
$$H(f) = \frac{1}{\left[\mathcal{S}+\mathcal{N}\right]^+(f)}\left[\frac{e^{j2\pi f l_e}\,\mathcal{S}(f)}{\left[\mathcal{S}+\mathcal{N}\right]^-(f)}\right]_+$$
$[\cdot]_+$ means the Fourier Transform of a covariance function's causal part, which corresponds to its values
at nonnegative indices, and $[\mathcal{S}+\mathcal{N}]^+(f)$ the stable, causal, and minimum-phase square root of $\mathcal{S}(f)+\mathcal{N}(f)$, with $[\mathcal{S}+\mathcal{N}]^-(f)$ its anticausal counterpart. Evaluation
of this expression therefore involves both forms of causal-part extraction. This solution is clearly much more
complicated than anticipated when we first gave the Wiener-Hopf equation. How to solve it is best seen by
example, which we provide once we determine an expression for the mean-squared estimation error.
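The noncausal solution, by contrast, is immediate to evaluate numerically. A sketch using the signal spectrum and white-noise variance of the example later in this section (the frequency-grid resolution is an arbitrary choice):

```python
import numpy as np

f = np.linspace(-0.5, 0.5, 40001)
df = f[1] - f[0]

S = 1.0 / (1.25 - np.cos(2 * np.pi * f))   # signal power spectrum (pole at 0.5)
N = np.full_like(f, 8.0 / 7.0)             # white-noise power spectrum, variance 8/7

H = S / (S + N)                            # noncausal Wiener filter frequency response
KS0 = np.sum(S) * df                       # signal variance: near 4/3
mse = np.sum(S * N / (S + N)) * df         # error integral: near 8/15
print(KS0, mse)
```

The computed error matches the closed-form value $8/15$ obtained analytically in the example.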
Error characteristics of Wiener filter output. Assuming that $\tilde{S}$ equals $S$, the expression for the mean-squared estimation error given in Eq. 3.7 {88} can be simplified with the result
$$\epsilon^2 = K_S(0) - \sum_{k=0}^{\infty} h(k)K_S(l_e+k) \quad (3.8)$$
Applying Parseval's Theorem to the summation, this expression can also be written in the frequency domain; the
terms that arise combine so that the mean-squared error becomes
$$\epsilon^2 = K_S(0) - \int_{-1/2}^{1/2}\left|\left[\frac{e^{j2\pi f l_e}\,\mathcal{S}(f)}{\left[\mathcal{S}+\mathcal{N}\right]^-(f)}\right]_+\right|^2 df$$
The expression within the magnitude bars equals the frequency response of the second component of the
Wiener filter's frequency response. Again using Parseval's Theorem to return to the time domain, the mean-squared error can be expressed directly either in terms of the filter's unit-sample response or in terms of signal
and noise quantities by
$$\epsilon^2 = K_S(0) - \sum_{k=0}^{\infty} K_{SS_w}^2(l_e+k)$$
where the latter quantity $K_{SS_w}(\cdot)$ is the cross-covariance function between the signal and the signal after passage
through the whitening filter.
Example
Let's estimate the value of $S(L_f+l_e)$ with a Wiener filter using the observations obtained up to and
including time $L_f$. The additive noise in the observations is white, having variance $8/7$. The power
spectrum of the signal is given by
$$\mathcal{S}(f) = \frac{1}{5/4-\cos 2\pi f} = \frac{1}{\left(1-0.5e^{-j2\pi f}\right)\left(1-0.5e^{+j2\pi f}\right)}$$
The variance of the signal equals the value of the covariance function (found by the inverse Fourier
Transform of this expression) at the origin. In this case, the variance equals $4/3$; the signal-to-noise
ratio of the observations, taken to be the ratio of their variances, equals $7/6$.
The power spectrum of the observations is the sum of the signal and noise power spectra.
$$\mathcal{S}(f)+\mathcal{N}(f) = \frac{1}{\left(1-0.5e^{-j2\pi f}\right)\left(1-0.5e^{+j2\pi f}\right)} + \frac{8}{7} = \frac{16}{7}\cdot\frac{\left(1-0.25e^{-j2\pi f}\right)\left(1-0.25e^{+j2\pi f}\right)}{\left(1-0.5e^{-j2\pi f}\right)\left(1-0.5e^{+j2\pi f}\right)}$$
The frequency response of the noncausal filter is
$$H(f) = \frac{\mathcal{S}(f)}{\mathcal{S}(f)+\mathcal{N}(f)} = \frac{7}{16}\cdot\frac{1}{\left(1-0.25e^{-j2\pi f}\right)\left(1-0.25e^{+j2\pi f}\right)}$$
The unit-sample response corresponding to this frequency response and the covariance function of the
signal are found to be
$$h(l) = \frac{7}{15}\left(\frac{1}{4}\right)^{|l|} \qquad K_S(l) = \frac{4}{3}\left(\frac{1}{2}\right)^{|l|}$$
Using Eq. 3.8 {90}, we find that the mean-squared estimation error for the noncausal estimator equals
$4/3 - 4/5 = 8/15$.
The convolutionally causal part of the signal-plus-noise power spectrum consists of the first terms in
the numerator and denominator of the signal-plus-noise power spectrum.
$$\left[\mathcal{S}+\mathcal{N}\right]^+(f) = \frac{4}{\sqrt{7}}\cdot\frac{1-0.25e^{-j2\pi f}}{1-0.5e^{-j2\pi f}}$$
The second term in the expression for the frequency response of the optimum filter is given by
$$\frac{e^{j2\pi f l_e}\,\mathcal{S}(f)}{\left[\mathcal{S}+\mathcal{N}\right]^-(f)} = \frac{\sqrt{7}}{4}\cdot\frac{e^{j2\pi f l_e}}{\left(1-0.5e^{-j2\pi f}\right)\left(1-0.25e^{+j2\pi f}\right)}$$
The additively causal part of this Fourier Transform is found by evaluating its partial fraction expansion.
$$\frac{\sqrt{7}}{4}\cdot\frac{e^{j2\pi f l_e}}{\left(1-0.5e^{-j2\pi f}\right)\left(1-0.25e^{+j2\pi f}\right)} = \frac{2}{\sqrt{7}}\cdot\frac{e^{j2\pi f l_e}}{1-0.5e^{-j2\pi f}} + \frac{2}{\sqrt{7}}\cdot\frac{0.25\,e^{j2\pi f l_e}\,e^{+j2\pi f}}{1-0.25e^{+j2\pi f}}$$
The simplest solution occurs when $l_e$ equals zero: estimate the signal value at the moment of the
most recent observation. The first term on the right side of the preceding expression corresponds to
the additively causal portion.
$$\left[\frac{\mathcal{S}(f)}{\left[\mathcal{S}+\mathcal{N}\right]^-(f)}\right]_+ = \frac{2}{\sqrt{7}}\cdot\frac{1}{1-0.5e^{-j2\pi f}}$$
Multiplying by the whitening filter $1/[\mathcal{S}+\mathcal{N}]^+(f)$, the causal Wiener filter's frequency response is
$$H(f) = \frac{1/2}{1-0.25e^{-j2\pi f}}$$
which corresponds to the difference equation
$$\hat{S}(l) = \frac{1}{4}\hat{S}(l-1) + \frac{1}{2}X(l)$$
The waveforms that result in this example are exemplified in Fig. 3.2.
Figure 3.2: The upper panel displays observations having statistical characteristics corresponding to those
given in the accompanying example. The output of the causal Wiener filter is shown in the bottom panel
along with the actual signal, which is shown as a dashed line.
To find the mean-squared estimation error, the cross-covariance between the signal and its
whitened counterpart is required. This quantity equals the inverse transform of the Wiener filter's
second component and thus equals $K_{SS_w}(l) = \frac{2}{\sqrt{7}}\left(\frac{1}{2}\right)^l$, $l \geq 0$. The mean-squared estimation error is numerically equal to
$$\epsilon^2 = \frac{4}{3} - \sum_{l=0}^{\infty}\left[\frac{2}{\sqrt{7}}\left(\frac{1}{2}\right)^l\right]^2 = \frac{4}{3} - \frac{4}{7}\cdot\frac{4}{3} = \frac{4}{7} \approx 0.57$$
which compares with the smallest possible value of $8/15 \approx 0.53$ provided by the noncausal Wiener filter.
Thus, little is lost by using the causal filter. The signal-to-noise ratio of the estimated signal is equal to
$K_S(0)/\epsilon^2$. The causal filter yields a signal-to-noise ratio of $7/3 \approx 2.33$, which should be compared with
the ratio of $7/6 \approx 1.17$ in the observations. The ratio of the signal-to-noise ratios at the output and input of a
signal processing operation is usually referred to as the processing gain. The best possible processing
gain is $15/7 \approx 2.14$ and equals $2.0$ in the causal case. These rather modest gains are because of the close
similarity between the power spectra of the signal and the noise. As the parameter of this signal's
power spectrum is increased, the two become less similar, and the processing gain increases.
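The causal filter of this example is a one-line recursion whose error is easy to verify by simulation. The sketch below generates the signal as a first-order autoregression with pole $0.5$ driven by unit-variance white noise, which has the power spectrum given above (the seed and record length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
Ltot = 200000

S = np.zeros(Ltot)
w = rng.normal(0.0, 1.0, Ltot)
for l in range(1, Ltot):                   # S(l) = 0.5 S(l-1) + w(l)
    S[l] = 0.5 * S[l - 1] + w[l]

X = S + rng.normal(0.0, np.sqrt(8.0 / 7.0), Ltot)   # white noise, variance 8/7

Shat = np.zeros(Ltot)
for l in range(1, Ltot):                   # causal Wiener filter recursion
    Shat[l] = 0.25 * Shat[l - 1] + 0.5 * X[l]

mse = np.mean((Shat[1000:] - S[1000:]) ** 2)
print(mse)                                  # should settle near 4/7
```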
Now consider the case in which $l_e > 0$: We want to predict the signal's future value. The whitening-filter portion of the solution does not depend on the value of $l_e$ and is therefore identical to that just
given. The second component of the Wiener filter's frequency response does depend on $l_e$; its relevant partial-fraction term is given
for positive values of $l_e$ by
$$\frac{2}{\sqrt{7}}\cdot\frac{e^{j2\pi f l_e}}{1-0.5e^{-j2\pi f}}$$
The causal portion of this frequency response is found by shifting the unit-sample response to the
left and retaining the positive-time portion. Because this frequency response has only one pole, this
manipulation is expressed simply as a scaling.
$$\left[\frac{e^{j2\pi f l_e}\,\mathcal{S}(f)}{\left[\mathcal{S}+\mathcal{N}\right]^-(f)}\right]_+ = \frac{2}{\sqrt{7}}\left(\frac{1}{2}\right)^{l_e}\frac{1}{1-0.5e^{-j2\pi f}}$$
$$H(f) = \left(\frac{1}{2}\right)^{l_e}\cdot\frac{1/2}{1-0.25e^{-j2\pi f}}$$
The optimum linear predictor is a scaled version of the signal estimator. The mean-squared error increases as the desired time of the predicted value exceeds the time of the last observation. In particular,
the signal-to-noise ratio of the predicted value is given by
$$\frac{K_S(0)}{\epsilon^2} = \frac{1}{1-\dfrac{4}{7}\left(\dfrac{1}{2}\right)^{2l_e}}$$
The signal-to-noise ratio decreases rapidly as the prediction time extends into the future. This decrease
is directly related to the reduced correlation between the signal and its future values in this example.
This correlation is described by the absolute value of the signal's covariance function relative to its
maximum value at the origin. As a covariance function broadens (corresponding to a lower frequency
signal), the prediction error decreases. If a covariance function oscillates, the mean-squared prediction
error varies in a similar fashion.
Finite-duration Wiener filters. Another useful formulation of Wiener filter theory is to constrain the
filter's unit-sample response to have finite duration. To find this solution to the Wiener-Hopf equation, the
values of L_f and L_i are chosen to be finite. Letting L represent the duration of the filter's unit-sample response
(L = L_f - L_i + 1), the Wiener-Hopf equation becomes
\[
K_{\tilde{S}S}(l+l_e) = \sum_{k=0}^{L-1} K_X(k-l)\,h(k) \qquad \text{for all } l \in \{0,\dots,L-1\}
\]
When the signal component of the observations equals the signal being estimated (\tilde{S} = S), the Wiener-Hopf
equation becomes k_S(l_e) = K_X h. The L \times L matrix K_X is the covariance matrix of the sequence of L observations.
In the simple case of uncorrelated signal and noise components, this covariance matrix is the sum of
those of the signal and the noise (K_X = K_S + K_N). This matrix has an inverse in all but unusual circumstances,
with the result that the unit-sample response of the FIR Wiener filter is given by
\[
h = K_X^{-1}\,k_S(l_e)
\]
Because this covariance matrix is Toeplitz and Hermitian, its inverse can be efficiently computed using a
variety of algorithms [23: 80-90]. The mean-squared error of the estimate is given by
\[
\mathcal{E}^2 = K_S(0) - \sum_{k=0}^{L-1} h(k)\,K_S(l_e+k)
= K_S(0) - k_S^t(l_e)\,K_X^{-1}\,k_S(l_e)
\]
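As a concrete illustration, the sketch below (assuming NumPy; the covariance values are hypothetical, not the text's example) builds the L-by-L observation covariance matrix for a first-order signal in white noise, solves h = K_X^{-1} k_S(l_e), and evaluates the mean-squared error formula above.

```python
import numpy as np

# Illustrative first-order covariances: K_S(l) = var_s * a**|l| for the
# signal, plus white noise of variance var_n (all values hypothetical).
a, var_s, var_n = 0.5, 1.0, 0.5
L, le = 8, 0                      # filter duration and estimation delay

lags = np.arange(L)
K_X = var_s * a ** np.abs(lags[:, None] - lags[None, :]) + var_n * np.eye(L)
k_S = var_s * a ** (lags + le)    # cross-covariance vector k_S(le)

# FIR Wiener filter: h = K_X^{-1} k_S(le).
h = np.linalg.solve(K_X, k_S)

# Mean-squared error: K_S(0) - k_S^t(le) K_X^{-1} k_S(le).
mse = var_s - k_S @ h
print(h[:3], mse)
```

For long filters the Toeplitz structure can be exploited; for example, SciPy's `solve_toeplitz` implements a Levinson-type recursion.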
Linear prediction. One especially important variation of the FIR Wiener filter occurs in the unique situation
in which no observation noise is present and the signal generation model contains only poles [21, 22].
Thus, the signal S(l) is generated by passing white noise w(l) through a linear system given by the difference
equation
\[
S(l) = a_1 S(l-1) + a_2 S(l-2) + \cdots + a_p S(l-p) + w(l)
\]
The coefficients a_1,\dots,a_p are unknown. This signal modeling approach is frequently used to estimate the
signal's spectrum.
As no noise is present in the observations, the filtered estimate of the signal (l_e = 0) equals S(l) and the
estimation error is exactly zero. The concern of linear prediction is not this trivial problem, but the so-called
one-step prediction problem (l_e = 1): predict the value of the signal at index l given values of S(l-1), S(l-2), \dots.
Thus, we seek an FIR Wiener filter predictor, which has the form
\[
\hat{S}(l) = h(0)S(l-1) + h(1)S(l-2) + \cdots + h(p-1)S(l-p)
\]
Comparing the signal model equation to that for the Wiener filter predictor, we see that the model parameters
a_1,\dots,a_p equal the Wiener filter's unit-sample response h(\cdot) because the input w(l) is uncorrelated from
sample to sample. In linear prediction, the signal model parameters are used notationally to express the filter
coefficients.
The Orthogonality Principle can be used to find the minimum mean-squared error predictor of the next
signal value. By requiring orthogonality of the prediction error to each of the observations used in the estimate,
the following set of equations results.
\[
\begin{aligned}
a_1 K_S(0) + a_2 K_S(1) + \cdots + a_p K_S(p-1) &= K_S(1)\\
a_1 K_S(1) + a_2 K_S(0) + \cdots + a_p K_S(p-2) &= K_S(2)\\
&\;\;\vdots\\
a_1 K_S(p-1) + a_2 K_S(p-2) + \cdots + a_p K_S(0) &= K_S(p)
\end{aligned}
\]
In linear prediction, these are known as the Yule-Walker equations. Expressing them concisely in matrix form
as K_S a = k_S, the solution is a = K_S^{-1} k_S.
From the signal model equation, we see that the mean-squared prediction error E[(S(l) - \hat{S}(l))^2] equals
the variance \sigma_w^2 of the white-noise input to the model. Computing the mean-squared estimation error according
to Eq. 3.7 {88}, this variance is expressed by
\[
\sigma_w^2 = K_S(0) - a_1 K_S(1) - \cdots - a_p K_S(p)
\]
Sec. 3.5 Probability Density Estimation 95
This result can be combined with the previous set of equations to yield a unified set of equations for the
unknown parameters and the mean-squared error of the optimal linear predictive filter.
\[
\begin{bmatrix}
K_S(0) & K_S(1) & \cdots & K_S(p)\\
K_S(1) & K_S(0) & \cdots & K_S(p-1)\\
\vdots & \vdots & \ddots & \vdots\\
K_S(p) & K_S(p-1) & \cdots & K_S(0)
\end{bmatrix}
\begin{bmatrix} 1\\ -a_1\\ \vdots\\ -a_p \end{bmatrix}
=
\begin{bmatrix} \sigma_w^2\\ 0\\ \vdots\\ 0 \end{bmatrix}
\tag{3.9}
\]
To solve this set of equations for the model coefficients and the input-noise variance conceptually, we compute
the preliminary result K_S^{-1}\boldsymbol{\delta}, where \boldsymbol{\delta} = [1\;0\;\cdots\;0]^t. The first element of this vector equals the reciprocal of \sigma_w^2; normalizing the vector
so that its leading term is unity yields the coefficient vector [1\;{-a_1}\;\cdots\;{-a_p}]^t. Levinson's algorithm can be used to solve these
equations efficiently and simultaneously obtain the noise variance [23: 211-16].
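A small numerical check of the Yule-Walker route (assuming NumPy; the AR(2) coefficients and input variance are illustrative choices): estimate the covariances from a simulated realization, solve K_S a = k_S, and recover the input-noise variance from the relation above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AR(2) model S(l) = a1*S(l-1) + a2*S(l-2) + w(l).
a_true = np.array([0.6, -0.2])
var_w, p, L = 1.0, 2, 200_000

# Simulate the process, then estimate K_S(0),...,K_S(p) by sample averages.
s = np.zeros(L)
w = rng.normal(0.0, np.sqrt(var_w), L)
for l in range(p, L):
    s[l] = a_true @ s[l - p:l][::-1] + w[l]
K = np.array([np.mean(s[m:] * s[: L - m]) for m in range(p + 1)])

# Yule-Walker equations K_S a = k_S, solved for the model coefficients.
KS = np.array([[K[abs(i - j)] for j in range(p)] for i in range(p)])
kS = K[1 : p + 1]
a_hat = np.linalg.solve(KS, kS)

# Input-noise variance: sigma_w^2 = K_S(0) - a1 K_S(1) - ... - ap K_S(p).
var_w_hat = K[0] - a_hat @ kS
print(a_hat, var_w_hat)
```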
\[
\hat{P}_X(a_n) = \frac{1}{L}\sum_{l=0}^{L-1} I\{X(l) = a_n\}
\]
Note that linear filtering optimizes the mean-squared error whether the signals involved are Gaussian or not. Other error criteria might
better capture unexpected changes in signal characteristics, and non-Gaussian processes contain internal statistical structure beyond that
described by the covariance function.
96 Estimation Theory Chap. 3
where I is the indicator function, equaling one if its argument is true and zero otherwise. This kind of
estimate is known in information theory as a type [5: Chap. 12], and types have remarkable properties. For
example, if the observations are statistically independent, the probability that a given sequence occurs equals
\[
\Pr[\mathbf{X}] = \Pr[X(0),\dots,X(L-1)] = \prod_{l=0}^{L-1} P_X(X(l))
\]
Note that the number of times each letter occurs equals L\hat{P}_X(a_n). Using this fact, we can convert this sum to
a sum over letters.
\[
\begin{aligned}
\log \Pr[\mathbf{X}] &= \sum_{n=0}^{N-1} L\hat{P}_X(a_n)\log P_X(a_n)\\
&= L\sum_{n=0}^{N-1} \hat{P}_X(a_n)\log \hat{P}_X(a_n)
 - L\sum_{n=0}^{N-1}\hat{P}_X(a_n)\log\frac{\hat{P}_X(a_n)}{P_X(a_n)}\\
&= -L\left[\mathcal{H}(\hat{P}_X) + \mathcal{D}(\hat{P}_X\|P_X)\right]
\end{aligned}
\]
where \mathcal{H}(\cdot) denotes the entropy and \mathcal{D}(\cdot\|\cdot) the Kullback-Leibler distance, which yields
\[
\Pr[\mathbf{X}] = e^{-L\left[\mathcal{H}(\hat{P}_X) + \mathcal{D}(\hat{P}_X\|P_X)\right]} \tag{3.10}
\]
Because the Kullback-Leibler distance is non-negative, equaling zero only when the two probability distributions
equal each other, we maximize Eq. (3.10) with respect to P by choosing P = \hat{P}: the type estimator is
the maximum likelihood estimator of P_X.
The number of length-L observation sequences having a given type \hat{P} approximately equals e^{L\mathcal{H}(\hat{P})}.
The probability that a given sequence has a given type approximately equals e^{-L\mathcal{D}(\hat{P}\|P)}, which means that
the probability a given sequence has a type not equal to the true distribution decays exponentially with the
number of observations. Thus, while the coin flip sequences H,H,H,H,H and T,T,H,H,T are equally
likely (assuming a fair coin), the second is more typical because its type is closer to the true distribution.
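Both the exactness of Eq. (3.10) for statistically independent observations and the typicality comparison can be verified with a few lines of standard-library Python; the fair-coin model and the two sequences follow the text.

```python
import math
from collections import Counter

P = {"H": 0.5, "T": 0.5}          # true (fair-coin) distribution

results = {}
for x in ("HHHHH", "TTHHT"):
    L = len(x)
    counts = Counter(x)
    P_hat = {a: counts[a] / L for a in P}        # the type of the sequence

    # Entropy H(P_hat) and Kullback-Leibler distance D(P_hat || P), in nats.
    H = -sum(q * math.log(q) for q in P_hat.values() if q > 0)
    D = sum(q * math.log(q / P[a]) for a, q in P_hat.items() if q > 0)

    # Eq. (3.10) is exact for i.i.d. observations.
    prob_exact = math.prod(P[a] for a in x)
    prob_type = math.exp(-L * (H + D))
    results[x] = (D, prob_exact, prob_type)
    print(x, round(D, 4), prob_exact, round(prob_type, 6))
```

Both sequences have probability 1/32, but the all-heads type lies at a larger Kullback-Leibler distance from the fair-coin distribution, which is why T,T,H,H,T is the more typical sequence.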
3.5.2 Histogram Estimators
By far the most used technique for estimating the probability distribution of a continuous-valued random
variable is the histogram; more sophisticated techniques are discussed in [36]. For real-valued data, subdivide
the real line into N intervals [X_i, X_{i+1}) having widths \Delta_i = X_{i+1} - X_i, i = 1,\dots,N. These regions are called
bins and they should encompass the range of values assumed by the data. For large values, the edge bins
can extend to infinity to catch the overflows. Given L observations of a stationary random sequence X(l),
l = 0,\dots,L-1, the histogram estimate h(i) is formed by simply forming a type from the number L_i of these
observations that fall into the ith bin and dividing by the binwidth \Delta_i.
\[
\hat{p}_X(X) = \begin{cases}
h(1) = \dfrac{L_1}{L\Delta_1}, & X_1 \le X < X_2\\[4pt]
h(2) = \dfrac{L_2}{L\Delta_2}, & X_2 \le X < X_3\\[4pt]
\quad\vdots & \quad\vdots\\[4pt]
h(N) = \dfrac{L_N}{L\Delta_N}, & X_N \le X < X_{N+1}
\end{cases}
\]
The histogram estimate resembles a rectangular approximation to the density. Unless the underlying
density has the same form (a rare event), the histogram estimate does not converge to the true density as
the number L of observations grows. Presumably, the value of the histogram at each bin converges to the
probability that the observations lie in that bin.
\[
\lim_{L\to\infty} \frac{L_i}{L} = \int_{X_i}^{X_{i+1}} p_X(X)\,dX
\]
To demonstrate this intuitive feeling, we compactly denote the histogram estimate by using indicator functions.
An indicator function I_i(X(l)) for the ith bin equals one if the observation X(l) lies in the bin and is zero
otherwise. The estimate is simply the average of the indicator functions across the observations.
\[
h(i) = \frac{1}{L\Delta_i}\sum_{l=0}^{L-1} I_i(X(l))
\]
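This indicator-function view is easy to confirm numerically. The sketch below (assuming NumPy, with an illustrative 40-bin grid) computes each h(i) as an average of bin indicators and matches the result against NumPy's density-normalized histogram.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, 2000)
L = len(X)

# Bin edges chosen to cover the observed range (illustrative: 40 bins).
edges = np.linspace(X.min() - 1e-9, X.max() + 1e-9, 41)
widths = np.diff(edges)

# h(i) = (1/(L*Delta_i)) * sum_l I_i(X(l)): an average of bin indicators.
h = np.array([
    np.mean((edges[i] <= X) & (X < edges[i + 1])) / widths[i]
    for i in range(len(widths))
])

# The same estimate via numpy's density-normalized histogram.
h_np, _ = np.histogram(X, bins=edges, density=True)
print(float(np.abs(h - h_np).max()))
```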
The expected value of I_i(X(l)) is simply the probability P_i that the observation lies in the ith bin. Thus, the
expected value of each histogram value equals the integral of the actual density over the bin, showing that the
histogram is an unbiased estimate of this integral. Convergence can be tested by computing the variance of
the estimate. The variance of one bin in the histogram is given by
\[
\mathcal{V}[h(i)] = \frac{P_i - P_i^2}{L\Delta_i^2}
+ \frac{1}{L^2\Delta_i^2}\sum_{k\neq l}\left\{E\left[I_i(X(k))\,I_i(X(l))\right] - P_i^2\right\}
\]
To simplify this expression, the correlation between the observations must be specified. If the values are
statistically independent (we have white noise), each term in the sum becomes zero and the variance is given
by \mathcal{V}[h(i)] = (P_i - P_i^2)/(L\Delta_i^2). Thus, the variance tends to zero as L \to \infty and the histogram estimate is
consistent, converging to P_i/\Delta_i. If the observations are not white, convergence becomes problematical. Assume,
for example, that I_i(X(k)) and I_i(X(l)) are correlated in a first-order, geometric fashion.
This mean-squared error becomes zero only if L \to \infty, L_i \to \infty, and \Delta_i \to 0. Thus, the binwidth must decrease
more slowly than the rate of increase of the number of observations. We find the optimum compromise
between the decreasing binwidth and the increasing amount of data to be
\[
\Delta_i = \left\{\frac{9\,p_X(X)}{2\left[\dfrac{d^2 p_X(X)}{dX^2}\right]^2}\right\}^{1/5} L^{-1/5}
\]
Using this binwidth, we find the mean-squared error to be proportional to L^{-4/5}. We have thus discovered
the famous 4/5 rule of density estimation; this is one of the few cases where the variance of a convergent
statistic decreases more slowly than the reciprocal of the number of observations. In practice, this optimal binwidth
cannot be used because the proportionality constant depends on the unknown density being estimated.
Roughly speaking, wider bins should be employed where the density is changing slowly. How the optimal
binwidth varies with L can be used to adjust the histogram estimate as more data become available.
3.5.3 Density Verification
Once a density estimate is produced, the class of density that best coincides with the estimate remains an
issue: Is the density just estimated statistically similar to a Gaussian? The histogram estimate can be used
directly in a hypothesis test to determine similarity with any proposed density. Assume that the observations
are obtained from a white, stationary, stochastic sequence. Let \mathcal{M}_0 denote the hypothesis that the data have
an amplitude distribution equal to the presumed density and \mathcal{M}_1 the dissimilarity hypothesis. If \mathcal{M}_0 is true,
the estimate for each bin should not deviate greatly from the probability of a randomly chosen datum lying
in the bin. We determine this probability from the presumed density by integrating over the bin. Summing
these deviations over the entire estimate, the result should not exceed a threshold. The theory of standard
hypothesis testing requires us to produce a specific density for the alternative hypothesis \mathcal{M}_1. We cannot
rationally assign such a density; consistency is being tested, not whether either of two densities provides the
best fit. However, taking inspiration from the Neyman-Pearson approach to hypothesis testing (4.1.2 {113}),
we can develop a test statistic and require its statistical characteristics only under \mathcal{M}_0. The typically used, but
ad hoc, test statistic S(L, N) is related to the histogram estimate's mean-squared error [6: 416-41].
\[
S(L, N) = \sum_{i=1}^{N}\frac{(L_i - LP_i)^2}{LP_i}
= \sum_{i=1}^{N}\frac{L_i^2}{LP_i} - L
\]
This statistic sums over the various bins the squared error of the number of observations relative to the
expected number. For large L, S(L, N) has a \chi^2 probability distribution with N - 1 degrees of freedom [6: 417].
Thus, for a given number of observations L we establish a threshold \tau by picking a false-alarm probability
P_F and using tables to solve \Pr[\chi^2_{N-1} > \tau] = P_F. To enhance the validity of this approximation, statisticians
recommend selecting the binwidth so that each bin contains at least ten observations. In practice, we fulfill
this criterion by merging adjacent bins until a sufficient number of observations occur in the new bin and
defining its binwidth as the sum of the merged bins' widths. Thus, the number of bins is reduced to some
number N', which determines the degrees of freedom in the hypothesis test. The similarity test between the
histogram estimate of a probability density function and an assumed ideal form becomes
\[
S(L, N') \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \tau
\]
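A minimal numerical sketch of the statistic (assuming NumPy; uniform data and a uniform presumed density are illustrative choices) confirms that the two algebraically identical forms of S(L, N) given above agree, since the bin counts sum to L and the bin probabilities sum to one.

```python
import numpy as np

rng = np.random.default_rng(3)
L, N = 2000, 10

# Observations and the presumed model: uniform on [0, 1).
X = rng.random(L)
counts, _ = np.histogram(X, bins=np.linspace(0.0, 1.0, N + 1))
P = np.full(N, 1.0 / N)           # P_i: model probability of each bin

# Both forms of the test statistic.
S1 = np.sum((counts - L * P) ** 2 / (L * P))
S2 = np.sum(counts**2 / (L * P)) - L
print(S1, S2)
```

Under the hypothesis that the model is correct, this statistic is approximately chi-squared with N - 1 = 9 degrees of freedom, so a P_F = 0.1 test would compare it against the corresponding tabulated threshold (about 14.7).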
In many circumstances, the formula for the density is known but not some of its parameters. In the
Gaussian case, for example, the mean or variance are usually unknown. These parameters must be determined
This result assumes that the second derivative of the density is nonzero. If it is not, either the Taylor series expansion brings higher
order terms into play or, if all the derivatives are zero, no optimum binwidth can be defined for minimizing the mean-squared error.
Chap. 3 Problems 99
from the same data used in the consistency test before the test can be used. Doesn't the fact that we use
estimates rather than actual values affect the similarity test? The answer is yes, but in an interesting way:
the similarity test changes only in that the number of degrees of freedom of the \chi^2 random variable used to
establish the threshold is reduced by one for each estimated parameter. If a Gaussian density is being tested,
for example, the mean and variance usually need to be found. The threshold should then be determined
according to the distribution of a \chi^2_{N'-3} random variable.
Example
Three sets of observations are considered: Two are drawn from a Gaussian distribution and the other
not. The first Gaussian example is white noise, a signal whose characteristics match the assumptions
of this section. The second is non-Gaussian, which should not pass the test. Finally, the last test
consists of colored Gaussian noise that, because of dependent samples, does not have as many degrees
of freedom as would be expected. The number of data available in each case is 2000. The histogram
estimator uses fixed-width bins and the \chi^2 test demands at least ten observations per merged bin. The
mean and variance estimates are used in constructing the nominal Gaussian density. The histogram
estimates and their approximation by the nominal density whose mean and variance were computed
from the data are shown in Fig. 3.3. The chi-squared test (P_F = 0.1) yielded the following results.
Density            N'    \chi^2_{N'-3} threshold    S(2000, N')
White Gaussian     70    82.2                       78.4
White sech         65    76.6                       232.6
Colored Gaussian   65    76.6                       77.8
The white Gaussian noise example clearly passes the \chi^2 test. The test correctly evaluated the non-Gaussian
example, but declared the colored Gaussian data to be non-Gaussian, yielding a value near
the threshold. Failing in the latter case to correctly determine the data's Gaussianity, we see that the
\chi^2 test is sensitive to the statistical independence of the observations.
Problems
3.1 Estimates of identical parameters are heavily dependent on the assumed underlying probability densities.
To understand this sensitivity better, consider the following variety of problems, each of which
asks for estimates of quantities related to variance. Determine the bias and consistency in each case.
(a) Compute the maximum a posteriori and maximum likelihood estimates of \theta based on L statistically
independent observations of a Maxwellian random variable X.
\[
p_{X|\theta}(X|\theta) = \left(\frac{2}{\pi}\right)^{1/2}\theta^{-3/2} X^2\, e^{-\frac{1}{2}X^2/\theta},
\qquad X > 0,\ \theta > 0
\]
\[
p_\theta(\theta) = \lambda e^{-\lambda\theta},\qquad \theta > 0
\]
(b) Find the maximum a posteriori estimate of the variance \sigma^2 from L statistically independent observations
having the exponential density
\[
p_X(X) = \frac{1}{\sigma^2}\,e^{-X/\sigma^2},\qquad X \ge 0
\]
where the variance is uniformly distributed over the interval [0, \sigma^2_{\max}].
(c) Find the maximum likelihood estimate of the variance \sigma^2 of L identically distributed, but dependent,
Gaussian random variables. Here, the covariance matrix is written K_X = \sigma^2 \tilde{K}_X, where the
normalized covariance matrix has trace \operatorname{tr}[\tilde{K}_X] = L. Assume the random variables have zero mean.
Figure 3.3: Three histogram density estimates are shown and compared with Gaussian densities having the
same mean and variance. The histogram on the top is obtained from Gaussian data that are presumed to
be white. The middle one is obtained from a non-Gaussian distribution related to the hyperbolic secant
[p_X(X) = \frac{1}{2}\operatorname{sech}^2(X)]. This density resembles a Gaussian about the origin but decreases exponentially
in the tails. The bottom histogram is taken from a first-order autoregressive Gaussian signal. Thus, these data
are correlated, but yield a histogram resembling the true amplitude distribution. In each case, 2000 data points
were used and the histogram contained 100 bins.
3.2 Imagine yourself idly standing on the corner in a large city when you note the serial number of a passing
beer truck. Because you are idle, you wish to estimate (guess may be more accurate here) how many
beer trucks the city has from this single observation.
(a) Making the appropriate assumption that the beer truck's number is drawn from a uniform probability
density ranging between zero and some unknown upper limit, find the maximum likelihood estimate
of the upper limit.
(b) Show that this estimate is biased.
(c) In one of your extraordinarily idle moments, you observe throughout the city L beer trucks. As-
suming them to be independent observations, now what is the maximum likelihood estimate of
the total?
(d) Is this estimate biased? Asymptotically biased? Consistent?
3.3 Estimating a Bit
To send a bit, a discrete-time communications system transmits either +1 or -1 for L successive indices.
The channel adds white Gaussian noise and the receiver must determine which bit was sent from
these noise-corrupted observations. The bit's values are equally likely.
(a) What is the MAP estimate of the bit's value?
(b) Determine the bias, if any, of the MAP estimate.
(c) Is the MAP estimate in this case consistent?
(d) Find the minimum mean-squared error estimator of the bit.
3.4 We make L observations X_1,\dots,X_L of a parameter \theta corrupted by additive noise (X_l = \theta + N_l). The
parameter \theta is a Gaussian random variable [\theta \sim \mathcal{N}(0, \sigma_\theta^2)] and the N_l are statistically independent Gaussian
random variables [N_l \sim \mathcal{N}(0, \sigma_N^2)].
(a) Find the MMSE estimate of \theta.
(b) Find the maximum a posteriori estimate of \theta.
(c) Compute the resulting mean-squared error for each estimate.
(d) Consider an alternate procedure based on the same observations X_l. Using the MMSE criterion,
we estimate \theta immediately after each observation. This procedure yields the sequence of estimates
\hat\theta_1(X_1), \hat\theta_2(X_1, X_2), \dots, \hat\theta_L(X_1,\dots,X_L). Express \hat\theta_l as a function of \hat\theta_{l-1}, \sigma_{l-1}^2, and X_l. Here,
\sigma_l^2 denotes the variance of the estimation error of the lth estimate. Show that
\[
\frac{1}{\sigma_l^2} = \frac{1}{\sigma_{l-1}^2} + \frac{1}{\sigma_N^2}
\]
3.6 Although the maximum likelihood estimation procedure was not clearly defined until early in the 20th
century, Gauss showed in 1805 that the Gaussian density was the sole density for which the maximum
likelihood estimate of the mean equaled the sample average. Let \{X_0,\dots,X_{L-1}\} be a sequence of
statistically independent, identically distributed random variables.
(a) What equation defines the maximum likelihood estimate \hat{m}_{ML} of the mean m when the common
probability density function of the data has the form p_X(X - m)?
(b) The sample average is, of course, \sum_l X_l / L. Show that it minimizes the mean-squared error
\sum_l (X_l - m)^2.
(c) Equating the sample average to \hat{m}_{ML}, combine this equation with the maximum likelihood equation
to show that the Gaussian density uniquely satisfies the equations.
Note: Because both equations equal 0, they can be equated. Use the fact that they must hold for all L
to derive the result. Gauss thus showed that mean-squared error and the Gaussian density were closely
linked, presaging ideas from modern robust estimation theory.
3.7 What's In-Between the Samples?
We sample a stationary random process Xt every T seconds, ignoring whether the process is bandlimited
or not. To reconstruct the signal from the samples, we use linear interpolation.
\[
R(l) = A_1 c_1(l) + A_2 c_2(l) + N(l),\qquad l = 0,\dots,L-1
\]
where N(l) is ubiquitous additive (not necessarily white) Gaussian noise. The carrier signals c_1(l) and
c_2(l) have unit energy; their detailed waveforms need to be selected to provide the best possible system
design.
(a) What is the maximum likelihood estimate of the amplitudes?
(b) Is the maximum likelihood estimate biased or not? If it is biased, what are the most general
conditions on the carrier signals and the noise that would make it unbiased?
(c) Under what conditions are the amplitude estimation errors uncorrelated and as small as possible?
3.11 MIMO Channels
Two parameters \theta_1, \theta_2 are transmitted over a MIMO (Multiple-Input, Multiple-Output) channel. The
two parameters constitute the channel's two-dimensional vector input \boldsymbol{\theta}, and the channel output is H\boldsymbol{\theta}.
H is the non-square transfer function matrix that represents the set of linear combinations of the
parameters found in the output. The observations consist of
\[
\mathbf{R} = H\boldsymbol{\theta} + \mathbf{N}
\]
where the noise vector \mathbf{N} is Gaussian, having zero mean and covariance matrix K.
(a) What is the maximum likelihood estimate of \boldsymbol{\theta}?
(b) Find this estimate's total mean-squared error.
(c) Is this estimate biased? Is it efficient?
3.12 Prediction
A signal s(l) can be described as a stochastic process that has zero mean and covariance function
K_s(l) = \sigma_s^2 a^{|l|}. This signal is observed in additive white Gaussian noise having variance \sigma^2. The signal
and noise are statistically independent of each other.
(a) Find the optimal predictor \hat{s}(l+1) that is based on observations that end at time l and begin at
time l - L + 1.
(b) How does this predictor change if we want to estimate s(l+k) based on observations made over
l - L + 1, \dots, l?
(c) How does the predictor's mean-squared error vary with k?
3.13 Let the observations be of the form \mathbf{X} = H\boldsymbol{\theta} + \mathbf{n} where \boldsymbol{\theta} and \mathbf{n} are statistically independent Gaussian
random vectors.
\[
\boldsymbol{\theta} \sim \mathcal{N}(\mathbf{0}, K_\theta)\qquad \mathbf{n} \sim \mathcal{N}(\mathbf{0}, K_n)
\]
The vector \boldsymbol{\theta} has dimension M; the vectors \mathbf{X} and \mathbf{n} have dimension N.
(a) Derive the minimum mean-squared error estimate of \boldsymbol{\theta}, \hat{\boldsymbol{\theta}}_{MMSE}, from the relationship
\hat{\boldsymbol{\theta}}_{MMSE} = E[\boldsymbol{\theta} \mid \mathbf{X}].
(b) Show that this estimate and the optimum linear estimate \hat{\boldsymbol{\theta}}_{LIN} derived by the Orthogonality
Principle are equal.
(c) Find an expression for the mean-squared error when these estimates are used.
3.14 Suppose we consider an estimate of the parameter \theta having the form \hat\theta = \mathcal{L}[\mathbf{X}] + C, where \mathbf{X} denotes
the vector of the observables and \mathcal{L} is a linear operator. The quantity C is a constant. This estimate
is not a linear function of the observables unless C = 0. We are interested in finding applications for
which it is advantageous to allow C \neq 0. Estimates of this form we term quasi-linear.
(a) Show that the optimum (minimum mean-squared error) quasi-linear estimate satisfies
\[
E\left[\left(\theta - \mathcal{L}[\mathbf{X}] - C\right)\left(\tilde{\mathcal{L}}[\mathbf{X}] + \tilde{C}\right)\right] = 0
\quad\text{for all } \tilde{\mathcal{L}} \text{ and } \tilde{C}
\]
where \hat\theta_{QLIN} = \mathcal{L}[\mathbf{X}] + C.
(b) Find a general expression for the mean-squared error incurred by the optimum quasi-linear estimate.
(c) Such estimates yield a smaller mean-squared error when the parameter \theta has a nonzero mean. Let
\theta be a scalar parameter with mean m. The observables comprise a vector \mathbf{X} having components
given by X_l = \theta + N_l, l = 1,\dots,L, where the N_l are statistically independent Gaussian random
variables [N_l \sim \mathcal{N}(0, \sigma_N^2)] independent of \theta. Compute expressions for \hat\theta_{QLIN} and \hat\theta_{LIN}. Verify
that \hat\theta_{QLIN} yields a smaller mean-squared error when m \neq 0.
3.15 On Page 85, we questioned the existence of an efficient estimator for signal parameters. We found
in the succeeding example that an unbiased efficient estimator exists for the signal amplitude. Can a
nonlinearly represented parameter, such as time delay, have an efficient estimator?
(a) Simplify the condition for the existence of an efficient estimator by assuming it to be unbiased.
Note carefully the dimensions of the matrices involved.
(b) Show that the only solution in this case occurs when the signal depends linearly on the parameter
vector.
3.16 Cramer-Rao Bound for Signal Parameters
In many problems, the signal as well as the noise are sometimes modeled as Gaussian processes. Let's
explore what differences arise in the Cramer-Rao bounds for the stochastic and deterministic signal
cases. Assume that the signal contains unknown parameters \boldsymbol{\theta}, that it is statistically independent of the
noise, and that the noise covariance matrix is known.
(a) What forms do the conditional densities of the observations take under the two assumptions?
What are the two covariance matrices?
(b) As a preliminary, show that
\[
\frac{\partial A^{-1}}{\partial\theta} = -A^{-1}\,\frac{\partial A}{\partial\theta}\,A^{-1}
\]
(c) Assuming the stochastic signal model, show that each element of the Fisher information matrix
has the form
\[
F_{ij} = \frac{1}{2}\operatorname{tr}\left[K^{-1}\frac{\partial K}{\partial\theta_i}\,K^{-1}\frac{\partial K}{\partial\theta_j}\right]
\]
where K denotes the covariance matrix of the observations. Specialize this expression by assuming
the noise component has no unknown parameters.
3.17 Estimating the Amplitude of a Sinusoid
Suppose you observe a discrete-time sinusoid in additive Laplacian white noise having variance per
sample of \sigma^2.
\[
X(l) = A\sin(2\pi f_0 l) + N(l),\qquad l = 0,\dots,L-1
\]
The frequency is known and is harmonic with the observation interval (f_0 = n/L for some integer n).
where the frequency f_0 is a harmonic of 1/T. What are the maximum likelihood estimates of \theta_0
and a in this case?
Note: The following facts will prove useful.
\[
I_0(a) = \frac{1}{2\pi}\int_0^{2\pi} e^{a\cos\phi}\,d\phi
\quad\text{is the modified Bessel function of the first kind, order 0}
\]
\[
I_0'(a) = I_1(a),\quad\text{the modified Bessel function of the first kind, order 1}
\]
\[
I_1'(a) = \frac{I_2(a) + I_0(a)}{2}
\]
(d) Find the Cramer-Rao bounds for the mean-squared estimation errors for \hat\theta_0 and \hat{a}, assuming
unbiased estimators.
3.19 In the classic radar problem, not only is the time of arrival of the radar pulse unknown but also
the amplitude. In this problem, we seek methods of simultaneously estimating these parameters. The
received signal X(l) is of the form
\[
X(l) = \theta_1\, s(l - \theta_2) + N(l)
\]
where \theta_1 is Gaussian with zero mean and variance \sigma_1^2 and \theta_2 is uniformly distributed over the observation
interval. Find the receiver that computes the maximum a posteriori estimates of \theta_1 and \theta_2 jointly.
Draw a block diagram of this receiver and interpret its structure.
3.20 We can derive the Cramer-Rao bound for estimating a signal's delay.
(a) The parameter \tau is the delay of the signal s(\cdot) observed in additive, white Gaussian noise:
X(l) = s(l - \tau) + N(l), l = 0,\dots,L-1. Derive the Cramer-Rao bound for this problem.
(b) This bound is claimed to be given by \sigma_N^2/(E\beta^2), where \beta^2 is the mean-squared bandwidth. Derive
this result from your general formula. Does the bound make sense for all values of the signal-to-noise
ratio E/\sigma_N^2?
(c) Using optimal detection theory, derive the expression for the probability of error incurred when
trying to distinguish between a delay of \tau_1 and a delay of \tau_2. Consistent with the problem posed
for the Cramer-Rao bound, assume the delayed signals are observed in additive, white Gaussian
noise.
3.21 Estimating Model Probabilities
We want to estimate the a priori probability \pi_0 based on data obtained over N statistically independent
observation intervals. During each length-L observation interval, the observations consist of white
Gaussian noise having variance of either \sigma_0^2 or \sigma_1^2 (\sigma_1^2 > \sigma_0^2). \pi_0 is the probability that the observations
have variance \sigma_0^2, and we do not know which model applies for any observation interval.
(a) One approach is to classify each observation interval according to its variance, count the number
of times the variance equals \sigma_0^2, and divide this count by N. What is the classification algorithm
that lies at the heart of this estimator?
(b) What classifier threshold yields an unbiased estimate of \pi_0? Comment on the feasibility of this
approach.
(c) Rather than use this ad hoc approach, let's use a systematic estimation approach: what is the
maximum likelihood estimate of \pi_0 based on the N observation intervals?
3.22 The signal has a power spectrum given by
\[
\mathcal{S}_s(f) = \frac{17}{20}\cdot\frac{1 + \frac{8}{17}\cos 2\pi f}{1 + \frac{4}{5}\cos 2\pi f}
\]
\[
X(l) = s(l) + N(l),\qquad l = 0,\dots,L-1
\]
Treat finding the optimal estimate of the signal's spectrum as an optimal FIR filtering problem, where
the quantity to be estimated is \sum_l s(l)e^{-j2\pi f l}.
(a) Find the spectral estimate that minimizes the mean-squared estimation error.
(b) Find this estimate's mean-squared error.
\[
\hat{K}_X(m) = \frac{1}{D-m}\sum_{n=0}^{D-m-1} X(n)\,X(n+m),\qquad 0 \le m \le D-1
\]
(a) Find the expected value of this revised estimator, and show that it is indeed unbiased.
(b) To derive the variance of this estimate, we need the fourth moment of the observations, which
is conveniently given in Chapter 1 (1.4.1 {12}). Derive the covariance estimate's variance and
determine whether it is consistent or not.
(c) Evaluate the expected value and variance of the spectral estimate corresponding to this covariance
estimate.
(d) Does spectral estimate consistency become a reality with this new estimation procedure?
3.26 Let's see how spectral estimators work on a real dataset. The file spectral.mat contains a signal
comprised of a sinusoid and additive noise.
(a) Evaluate the periodogram for this signal. What is your best guess for the sine wave's frequency
and amplitude based on the periodogram? Are your estimates good in any sense?
(b) Use the Bartlett spectral estimation procedure instead of the periodogram. Use a section length of
500, a Hanning window, and half-section overlap. Now what are your estimates of the sinusoid's
frequency and amplitude?
(c) The default transform size in MATLAB's fft function is the data's length. Instead of using
the section length in your Bartlett estimate, use a longer transform to determine the frequency
resolution of the spectral peak presumably corresponding to the sinusoid. Compare the resolutions
that section lengths of 500 and 1000 provide. Also compare the resolutions the Hanning and
rectangular windows provide for these section lengths.
3.27 Optimal Spectral Estimation
While many spectral estimation procedures are found in the literature, few take into account the presence
of additive noise. Assume that the observations consist of a signal s and statistically independent,
additive, zero-mean noise N.
\[
X(l) = s(l) + N(l),\qquad l = 0,\dots,L-1
\]
Treat finding the optimal estimate of the signal's spectrum as an optimal FIR filtering problem, where
the quantity to be estimated is \sum_l s(l)e^{-j2\pi f l}.
(a) Find the spectral estimate that minimizes the mean-squared estimation error.
(b) Find this estimate's mean-squared error.
(c) Under what conditions is this estimate unbiased?
3.28 Filter Coefficient Estimation
White Gaussian noise W(l) serves as the input to a simple digital filter governed by the difference
equation
\[
X(l) = aX(l-1) + W(l)
\]
We want to estimate the filter's coefficient a by processing the output observed over l = 0,\dots,L-1.
Prior to l = 0, the filter's input is zero.
(a) Find an estimate of a.
(b) What is the Cramer-Rao bound for your estimate?
3.29 The histogram probability density estimator is a special case of a more general class of estimators
known as kernel estimators.
\[
\hat{p}_X(x) = \frac{1}{L}\sum_{l=0}^{L-1} k\left(x - X(l)\right)
\]
Here, the kernel k(\cdot) is usually taken to be a density itself.
(a) What is the kernel for the histogram estimator?
(b) Interpret the kernel estimator in signal processing terminology. Predict what the most time-consuming
computation of this estimate might be. Why?
(c) Show that the sample average equals the expected value of a random variable having the density
\hat{p}_X(x) regardless of the choice of kernel.
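A brief sketch of a kernel estimator (assuming NumPy; the Gaussian kernel and the 0.5 bandwidth are illustrative choices) exhibits the two properties the problem points at: the estimate is itself a valid density, and with a zero-mean kernel its mean reproduces the sample average.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(1.0, 2.0, 500)

bw = 0.5   # kernel bandwidth, an illustrative choice

def kernel(u):
    # Gaussian kernel: itself a zero-mean probability density.
    return np.exp(-(u**2) / (2 * bw**2)) / (bw * np.sqrt(2 * np.pi))

def p_hat(x):
    # Kernel estimate: average of kernels centered on the observations.
    return np.mean(kernel(x - X[:, None]), axis=0)

# Numerical checks on a fine grid: the estimate integrates to one, and
# its mean equals the sample average, as Problem 3.29(c) asserts.
grid = np.linspace(X.min() - 5.0, X.max() + 5.0, 4001)
dx = grid[1] - grid[0]
density = p_hat(grid)
print(np.sum(density) * dx, np.sum(grid * density) * dx, X.mean())
```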
Chapter 4
Detection Theory
\[
\bar{C} = \sum_{ij} C_{ij}\,\pi_j\,\Pr[\text{say } \mathcal{M}_i \mid \mathcal{M}_j \text{ true}]
= \sum_{ij} C_{ij}\,\pi_j\,\Pr[\mathbf{X} \in \Re_i \mid \mathcal{M}_j \text{ true}]
= \sum_{ij} C_{ij}\,\pi_j \int_{\Re_i} p_{\mathbf{X}|\mathcal{M}_j}(\mathbf{X}|\mathcal{M}_j)\,d\mathbf{X}
\]
110 Detection Theory Chap. 4
p_{\mathbf{X}|\mathcal{M}_i}(\mathbf{X}|\mathcal{M}_i) is the conditional probability density function of the observed data \mathbf{X} given that model \mathcal{M}_i was
true. To minimize this expression with respect to the decision regions \Re_0 and \Re_1, ponder which integral
would yield the smallest value if its integration domain included a specific observation vector. This selection
process defines the decision regions; for example, we choose \Re_0 for those values of \mathbf{X} which yield a smaller
value for the first integral.
We choose \mathcal{M}_1 when the inequality is reversed. This expression is easily manipulated to obtain the decision
rule known as the likelihood ratio test.
\[
\frac{p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)}{p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0)}
\;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\;
\frac{\pi_0(C_{10}-C_{00})}{\pi_1(C_{01}-C_{11})} \tag{4.1}
\]
The comparison relation means selecting model \mathcal{M}_1 if the left-hand ratio exceeds the value on the right;
otherwise, \mathcal{M}_0 is selected. Thus, the likelihood ratio p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)/p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0), symbolically represented
by \Lambda(\mathbf{X}), is computed from the observed value of \mathbf{X} and then compared with a threshold \eta equaling
\pi_0(C_{10}-C_{00})/[\pi_1(C_{01}-C_{11})]. Thus, when two models are hypothesized, the likelihood ratio test can be
succinctly expressed as the comparison of the likelihood ratio with a threshold.
\[
\Lambda(\mathbf{X}) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \eta \tag{4.2}
\]
The data processing operations are captured entirely by the likelihood ratio p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)/p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0).
Furthermore, note that only the value of the likelihood ratio relative to the threshold matters; to simplify the
computation of the likelihood ratio, we can perform any positively monotonic operation simultaneously on
the likelihood ratio and the threshold without affecting the comparison. We can multiply the ratio by a positive
constant, add any constant, or apply a monotonically increasing function which simplifies the expressions. We
single out one such function, the logarithm, because it simplifies likelihood ratios that commonly occur in signal
processing applications. Known as the log-likelihood, we explicitly express the likelihood ratio test with it as
\[
\ln\Lambda(\mathbf{X}) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \ln\eta \tag{4.3}
\]
Useful simplifying transformations are problem-dependent; by laying bare that aspect of the observations essential to the model testing problem, we reveal the sufficient statistic $\Upsilon(\mathbf{X})$: the scalar quantity which best summarizes the data [19: pp. 18-22]. The likelihood ratio test is best expressed in terms of the sufficient statistic.
$$
\Upsilon(\mathbf{X}) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \gamma
\tag{4.4}
$$
We will denote the threshold value by $\gamma$ when the sufficient statistic is used or by $\eta$ when the likelihood ratio appears prior to its reduction to a sufficient statistic.
As we shall see, if we use a criterion other than the Bayes criterion, the decision rule often still involves the likelihood ratio. The likelihood ratio is comprised of the quantities $p_{\mathbf{X}|\mathcal{M}_i}(\mathbf{X}|\mathcal{M}_i)$, termed the likelihood function, which is also important in estimation theory. It is this conditional density that portrays the probabilistic model describing data generation. The likelihood function completely characterizes the kind
Figure 4.1: Conditional densities for the grade distributions assuming that a student did not study ($\mathcal{M}_0$) or did ($\mathcal{M}_1$) are shown in the top row. The lower portion depicts the likelihood ratio formed from these densities.
of world assumed by each model; for each model, we must specify the likelihood function so that we can
solve the hypothesis testing problem.
A complication, which arises in some cases, is that the sufficient statistic may not be monotonic. If monotonic, the decision regions $\Re_0$ and $\Re_1$ are simply connected (all portions of a region can be reached without crossing into the other region). If not, the regions are not simply connected and decision region islands are
created (see Problem 4.2). Such regions usually complicate calculations of decision performance. Monotonic
or not, the decision rule proceeds as described: the sufficient statistic is computed for each observation vector
and compared to a threshold.
Example
An instructor in a course in detection theory wants to determine if a particular student studied for his last test. The observed quantity is the student's grade, which we denote by $X$. Failure may not indicate studiousness: conscientious students may fail the test. Define the models as
$$\mathcal{M}_0: \text{did not study}$$
$$\mathcal{M}_1: \text{studied}$$
The conditional densities of the grade are shown in Fig. 4.1. Based on knowledge of student behavior, the instructor assigns a priori probabilities of $\pi_0 = 1/4$ and $\pi_1 = 3/4$. The costs $C_{ij}$ are chosen to reflect the instructor's sensitivity to student feelings: $C_{01} = 1 = C_{10}$ (an erroneous decision either way is given the same cost) and $C_{00} = 0 = C_{11}$. The likelihood ratio is plotted in Fig. 4.1 and the threshold value $\eta$, which is computed from the a priori probabilities and the costs to be $1/3$, is indicated. The calculations of this comparison can be simplified in an obvious way:
$$
\Lambda(X) = \frac{X}{50} \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \frac{1}{3}
\quad\text{or}\quad
X \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \frac{50}{3} \approx 16.7
$$
The multiplication by the factor of 50 is a simple illustration of the reduction of the likelihood ratio to a sufficient statistic. Based on the assigned costs and a priori probabilities, the optimum decision rule says the instructor must assume that the student did not study if the student's grade is less than 16.7; if greater, the student is assumed to have studied despite receiving an abysmally low grade such as 20. Note that the densities given by each model overlap entirely: the possibility of making the wrong interpretation always haunts the instructor. However, no other procedure will be better!
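The example's arithmetic can be checked numerically; the reduction $\Lambda(X) = X/50$ is taken from the example above, while the function names are ours:

```python
# Numeric check of the example's threshold computation. Lambda(X) = X/50 is
# the reduction stated in the example; everything else follows from the
# stated priors and costs.
pi0, pi1 = 1 / 4, 3 / 4
C00, C01, C10, C11 = 0.0, 1.0, 1.0, 0.0

eta = (pi0 * (C10 - C00)) / (pi1 * (C01 - C11))  # Bayes threshold: 1/3
grade_threshold = 50 * eta                        # since Lambda(X) = X/50

def decide(grade):
    """Apply the optimum rule: grades above 50/3 ~ 16.7 imply 'studied'."""
    return "studied" if grade > grade_threshold else "did not study"
```

A grade of 20 lands just above the threshold, reproducing the perhaps surprising conclusion of the example.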
An alternative to the Bayes criterion is to maximize the probability of a correct decision,
$$
P_c = \pi_0 \int_{\Re_0} p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0)\,d\mathbf{X} + \pi_1 \int_{\Re_1} p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)\,d\mathbf{X}
$$
We want to maximize $P_c$ by selecting the decision regions $\Re_0$ and $\Re_1$. The probability correct is maximized by associating each value of $\mathbf{X}$ with the largest term in the expression for $P_c$. Decision region $\Re_0$, for example, is defined by the collection of values of $\mathbf{X}$ for which the first term is largest. As all of the quantities involved are non-negative, the decision rule maximizing the probability of a correct decision is
$$
\frac{p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)}{p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0)}
\;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\;
\frac{\pi_0}{\pi_1}
$$
Note that if the Bayes costs were chosen so that $C_{ii} = 0$ and $C_{ij} = C$ ($i \neq j$), we would have the same threshold as in the previous section.
To evaluate the quality of the decision rule, we usually compute the probability of error $P_e$ rather than the probability of being correct. This quantity can be expressed in terms of the observations, the likelihood ratio, and the sufficient statistic.
$$
\begin{aligned}
P_e &= \pi_0 \int_{\Re_1} p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0)\,d\mathbf{X} + \pi_1 \int_{\Re_0} p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)\,d\mathbf{X} \\
&= \pi_0 \int_{\eta}^{\infty} p_{\Lambda|\mathcal{M}_0}(\Lambda|\mathcal{M}_0)\,d\Lambda + \pi_1 \int_{-\infty}^{\eta} p_{\Lambda|\mathcal{M}_1}(\Lambda|\mathcal{M}_1)\,d\Lambda \\
&= \pi_0 \int_{\gamma}^{\infty} p_{\Upsilon|\mathcal{M}_0}(\Upsilon|\mathcal{M}_0)\,d\Upsilon + \pi_1 \int_{-\infty}^{\gamma} p_{\Upsilon|\mathcal{M}_1}(\Upsilon|\mathcal{M}_1)\,d\Upsilon
\end{aligned}
\tag{4.5}
$$
When the likelihood ratio is non-monotonic, the first expression is most difficult to evaluate. When monotonic,
the middle expression proves the most difficult. Furthermore, these expressions point out that the likelihood
ratio and the sufficient statistic can be considered a function of the observations X; hence, they are random
variables and have probability densities for each model. Another aspect of the resulting probability of error is
that no other decision rule can yield a lower probability of error. This statement is obvious as we minimized
the probability of error in deriving the likelihood ratio test. The point is that these expressions represent a
lower bound on performance (as assessed by the probability of error). This probability will be non-zero if the
conditional densities overlap over some range of values of X, such as occurred in the previous example. In
this region of overlap, the observed values are ambiguous: either model is consistent with the observations.
Our optimum decision rule operates in such regions by selecting that model which is most likely (has the highest probability density) to have generated any particular value.
Neyman-Pearson Criterion
Situations occur frequently where assigning or measuring the a priori probabilities Pi is unreasonable. For
example, just what is the a priori probability of a supernova occurring in any particular region of the sky?
We clearly need a model evaluation procedure which can function without a priori probabilities. This kind of
test results when the so-called Neyman-Pearson criterion is used to derive the decision rule. The ideas behind
and decision rules derived with the Neyman-Pearson criterion [27] will serve us well in the sequel; their result is important!
Using nomenclature from radar, where model $\mathcal{M}_1$ represents the presence of a target and $\mathcal{M}_0$ its absence, the various types of correct and incorrect decisions have the following names [44: pp. 1279]:

Detection: we say the target is present when it is; the probability is $P_D = \Pr[\text{say } \mathcal{M}_1 \mid \mathcal{M}_1 \text{ true}]$.
False alarm: we say the target is present when it is not; the probability is $P_F = \Pr[\text{say } \mathcal{M}_1 \mid \mathcal{M}_0 \text{ true}]$.
Miss: we say the target is absent when it is present; the probability is $P_M = \Pr[\text{say } \mathcal{M}_0 \mid \mathcal{M}_1 \text{ true}]$.

The remaining probability $\Pr[\text{say } \mathcal{M}_0 \mid \mathcal{M}_0 \text{ true}]$ has historically been left nameless and equals $1 - P_F$. We should also note that the detection and miss probabilities are related by $P_M = 1 - P_D$. As these are conditional probabilities, they do not depend on the a priori probabilities and the two probabilities $P_F$ and $P_D$ characterize the errors when any decision rule is used.
These two probabilities are related to each other in an interesting way. Expressing these quantities in
terms of the decision regions and the likelihood functions, we have
$$
P_F = \int_{\Re_1} p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0)\,d\mathbf{X}, \qquad
P_D = \int_{\Re_1} p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)\,d\mathbf{X}
$$
As the region $\Re_1$ shrinks, both of these probabilities tend toward zero; as $\Re_1$ expands to engulf the entire range of observation values, they both tend toward unity. This rather direct relationship between $P_D$ and $P_F$ does not mean that they equal each other; in most cases, as $\Re_1$ expands, $P_D$ increases more rapidly than $P_F$ (we had better be right more often than we are wrong!). However, the ultimate situation where a rule is always right and never wrong ($P_D = 1$, $P_F = 0$) cannot occur when the conditional distributions overlap. Thus,
to increase the detection probability we must also allow the false-alarm probability to increase. This behavior
represents the fundamental tradeoff in hypothesis testing and detection theory.
One can attempt to impose a performance criterion that depends only on these probabilities with the consequent decision rule not depending on the a priori probabilities. The Neyman-Pearson criterion assumes that the false-alarm probability is constrained to be less than or equal to a specified value $\alpha$ while we attempt to maximize the detection probability $P_D$:
$$
\max_{\Re_1} P_D \quad \text{subject to} \quad P_F \leq \alpha
$$
In hypothesis testing, a false alarm is known as a type I error and a miss a type II error.
A subtlety of the succeeding solution is that the underlying probability distribution functions may not be continuous, with the result that $P_F$ can never equal the constraining value $\alpha$. Furthermore, an (unlikely) possibility is that the optimum value for the false-alarm probability is somewhat less than the criterion value. Assume, therefore, that we rephrase the optimization problem by requiring that the false-alarm probability equal a value $\alpha'$ that is less than or equal to $\alpha$.
This optimization problem can be solved using Lagrange multipliers; we seek to find the decision rule that maximizes
$$
F = P_D + \lambda (P_F - \alpha')
$$
where $\lambda$ is the Lagrange multiplier. This optimization technique amounts to finding the decision rule that maximizes $F$, then finding the value of the multiplier that allows the criterion to be satisfied. As is usual in the derivation of optimum decision rules, we maximize these quantities with respect to the decision regions. Expressing $P_D$ and $P_F$ in terms of them, we have
$$
\begin{aligned}
F &= \int_{\Re_1} p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)\,d\mathbf{X} + \lambda \left( \int_{\Re_1} p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0)\,d\mathbf{X} - \alpha' \right) \\
&= -\lambda \alpha' + \int_{\Re_1} \left[ p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1) + \lambda\, p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0) \right] d\mathbf{X}
\end{aligned}
$$
To maximize this quantity with respect to $\Re_1$, we need only integrate over those regions of $\mathbf{X}$ where the integrand is positive. The region $\Re_1$ thus corresponds to those values of $\mathbf{X}$ where
$$
p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1) > -\lambda\, p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0)
$$
and the resulting decision rule is
$$
\frac{p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)}{p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0)}
\;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; -\lambda
$$
The ubiquitous likelihood ratio test again appears; it is indeed the fundamental quantity in hypothesis testing. Using the logarithm of the likelihood ratio or the sufficient statistic, this result can be expressed as either
$$
\ln \Lambda(\mathbf{X}) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \ln(-\lambda)
\quad\text{or}\quad
\Upsilon(\mathbf{X}) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \gamma
$$
We have not as yet found a value for the threshold. The false-alarm probability can be expressed in terms of the Neyman-Pearson threshold in two (useful) ways:
$$
P_F = \int_{-\lambda}^{\infty} p_{\Lambda|\mathcal{M}_0}(\Lambda|\mathcal{M}_0)\,d\Lambda
    = \int_{\gamma}^{\infty} p_{\Upsilon|\mathcal{M}_0}(\Upsilon|\mathcal{M}_0)\,d\Upsilon
\tag{4.6}
$$
One of these implicit equations must be solved for the threshold by setting $P_F$ equal to $\alpha'$. The selection of which to use is usually based on pragmatic considerations: the easiest to compute. From the previous discussion of the relationship between the detection and false-alarm probabilities, we find that to maximize $P_D$ we must allow $\alpha'$ to be as large as possible while remaining less than $\alpha$. Thus, we want to find the smallest value of $-\lambda$ (note the minus sign) consistent with the constraint. Computation of the threshold is problem-dependent, but a solution always exists.
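A minimal sketch of this threshold computation, under a toy assumption that the sufficient statistic under $\mathcal{M}_0$ is exponentially distributed (chosen only because $P_F(\gamma) = e^{-\gamma}$ then has a closed form to check against):

```python
import math

# Solve the implicit equation P_F(gamma) = alpha' by bisection. The
# exponential model for the statistic under M0 is a toy assumption, not from
# the text; it makes P_F(gamma) = exp(-gamma) available in closed form.
def false_alarm(gamma):
    return math.exp(-gamma)

def neyman_pearson_threshold(alpha, lo=0.0, hi=50.0, iters=200):
    """Bisection on the monotonically decreasing P_F(gamma)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if false_alarm(mid) > alpha:
            lo = mid  # false-alarm rate still too large: raise the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma = neyman_pearson_threshold(1e-2)  # closed form: -ln(0.01)
```

Because $P_F(\gamma)$ is monotone in the threshold, bisection always converges; only the form of $P_F$ changes from problem to problem.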
Example
An important application of the likelihood ratio test occurs when $\mathbf{X}$ is a Gaussian random vector for each model. Suppose the models correspond to Gaussian random vectors having different mean values but sharing the same covariance $\sigma^2 \mathbf{I}$:
$$\mathcal{M}_0: \mathbf{X} \sim \mathcal{N}(\mathbf{0},\, \sigma^2 \mathbf{I})$$
$$\mathcal{M}_1: \mathbf{X} \sim \mathcal{N}(\mathbf{m},\, \sigma^2 \mathbf{I})$$
Thus, $\mathbf{X}$ is of dimension $L$ and has statistically independent, equal-variance components. The vector of means $\mathbf{m} = \mathrm{col}[m_0, \ldots, m_{L-1}]$ distinguishes the two models. The likelihood functions associated with this problem are
$$
p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0) = \prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2} \left( \frac{X_l}{\sigma} \right)^2 \right\}
$$
$$
p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1) = \prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2} \left( \frac{X_l - m_l}{\sigma} \right)^2 \right\}
$$
This expression for the likelihood ratio is complicated. In the Gaussian case (and many others), we use the logarithm to reduce the complexity of the likelihood ratio and form a sufficient statistic:
$$
\ln \Lambda(\mathbf{X}) = \sum_{l=0}^{L-1} \left[ -\frac{1}{2} \left( \frac{X_l - m_l}{\sigma} \right)^2 + \frac{1}{2} \left( \frac{X_l}{\sigma} \right)^2 \right]
= \frac{1}{\sigma^2} \sum_{l=0}^{L-1} m_l X_l - \frac{1}{2\sigma^2} \sum_{l=0}^{L-1} m_l^2
$$
The likelihood ratio test then has the much simpler, but equivalent form
$$
\sum_{l=0}^{L-1} m_l X_l \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \sigma^2 \ln \eta + \frac{1}{2} \sum_{l=0}^{L-1} m_l^2
$$
To focus on the model evaluation aspects of this problem, let's assume the means to be equal to a positive constant: $m_l = m > 0$.*
$$
\sum_{l=0}^{L-1} X_l \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \frac{\sigma^2}{m} \ln \eta + \frac{Lm}{2}
$$
Note that all that need be known about the observations $X_l$ is their sum. This quantity is the sufficient statistic for the Gaussian problem: $\Upsilon(\mathbf{X}) = \sum X_l$ and $\gamma = (\sigma^2/m) \ln \eta + Lm/2$.

When trying to compute the probability of error or the threshold in the Neyman-Pearson criterion, we must find the conditional probability density of one of the decision statistics: the likelihood ratio, the log-likelihood, or the sufficient statistic. The log-likelihood and the sufficient statistic are quite

*Why did the author assume that the mean was positive? What would happen if it were negative?
x          Q^{-1}(x)
10^{-1}    1.281
10^{-2}    2.326
10^{-3}    3.090
10^{-4}    3.719
10^{-5}    4.265
10^{-6}    4.754

Table 4.1: The table displays interesting values for $Q^{-1}(\cdot)$ that can be used to determine thresholds in the Neyman-Pearson variant of the likelihood ratio test. Note how little the inverse function changes for decade changes in its argument; $Q(\cdot)$ is indeed very nonlinear.
similar in this problem, but clearly we should use the latter. One practical property of the sufficient
statistic is that it usually simplifies computations. For this Gaussian example, the sufficient statistic is
a Gaussian random variable under each model.
$$\mathcal{M}_0: \Upsilon(\mathbf{X}) \sim \mathcal{N}(0,\, L\sigma^2)$$
$$\mathcal{M}_1: \Upsilon(\mathbf{X}) \sim \mathcal{N}(Lm,\, L\sigma^2)$$
To find the probability of error from the expressions in (4.5), we must evaluate the area under a Gaussian probability density function. These integrals are succinctly expressed in terms of $Q(x)$, which denotes the probability that a unit-variance, zero-mean Gaussian random variable exceeds $x$ (see Chapter 1). As $1 - Q(x) = Q(-x)$, the probability of error can be written as
$$
P_e = \pi_1 Q\!\left( \frac{Lm - \gamma}{\sqrt{L}\,\sigma} \right) + \pi_0 Q\!\left( \frac{\gamma}{\sqrt{L}\,\sigma} \right)
$$
An interesting special case occurs when $\pi_0 = \frac{1}{2} = \pi_1$. In this case, $\gamma = Lm/2$ and the probability of error becomes
$$
P_e = Q\!\left( \frac{\sqrt{L}\,m}{2\sigma} \right)
$$
As $Q(\cdot)$ is a monotonically decreasing function, the probability of error decreases with increasing values of the ratio $\sqrt{L}\,m/2\sigma$. However, as shown in Fig. 1.3, $Q(\cdot)$ decreases in a nonlinear fashion. Thus, increasing $m$ by a factor of two may decrease the probability of error by a larger or a smaller factor; the amount of change depends on the initial value of the ratio.
To find the threshold for the Neyman-Pearson test from the expressions in (4.6), we need the area under a Gaussian density:
$$
P_F = Q\!\left( \frac{\gamma}{\sqrt{L}\,\sigma} \right)
\tag{4.7}
$$
As $Q(\cdot)$ is a monotonic and continuous function, we can now set $\alpha'$ equal to the criterion value $\alpha$ with the result
$$
\gamma = \sqrt{L}\,\sigma\, Q^{-1}(\alpha)
$$
where $Q^{-1}(\cdot)$ denotes the inverse function of $Q(\cdot)$. The solution of this equation cannot be performed analytically as no closed form expression exists for $Q(\cdot)$ (much less its inverse function); the value must be found from tables or numerical routines. Because Gaussian problems arise frequently, the accompanying table provides numeric values for this quantity at the decade points. The detection probability is given by
$$
P_D = Q\!\left( Q^{-1}(\alpha) - \frac{\sqrt{L}\,m}{\sigma} \right)
$$
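These formulas are easy to evaluate numerically. The sketch below builds $Q(\cdot)$ from the standard library's `erfc` and inverts it by bisection; the values of $L$, $m$, $\sigma$, and $\alpha$ are illustrative choices, not from the text:

```python
import math

def Q(x):
    """Probability that a zero-mean, unit-variance Gaussian exceeds x."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Qinv(p):
    """Invert the monotonically decreasing Q by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative problem: L = 25 samples, mean m = 0.5, noise sigma = 1.
L, m, sigma, alpha = 25, 0.5, 1.0, 1e-2
gamma = math.sqrt(L) * sigma * Qinv(alpha)      # threshold from (4.7)
PD = Q(Qinv(alpha) - math.sqrt(L) * m / sigma)  # detection probability
```

The computed $Q^{-1}(10^{-2}) \approx 2.326$ reproduces the corresponding entry of Table 4.1.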
Figure 4.2: The densities of the sufficient statistic $\Upsilon(\mathbf{X})$ conditioned on the two hypotheses are shown for the Gaussian example. The threshold $\gamma$ used to distinguish between the two models is indicated. The false-alarm probability is the area under the density corresponding to $\mathcal{M}_0$ to the right of the threshold; the detection probability is the area under the density corresponding to $\mathcal{M}_1$ to the right of the threshold.
These densities and their relationship to the threshold are shown in Fig. 4.2. We see that the detection probability is greater than or equal to the false-alarm probability. Since these probabilities must decrease monotonically as the threshold is increased, the plot of $P_D$ as a function of $P_F$ as the threshold varies, known as the receiver operating characteristic (ROC), must be concave-down and must always exceed the equality line (Fig. 4.3). The degree to which the ROC departs from the equality line $P_D = P_F$ measures the relative distinctiveness between the two hypothesized models for generating the observations. In the limit, the two models can be distinguished perfectly if the ROC is discontinuous and consists of the point where $P_D = 1$ and $P_F = 0$. The two are totally confused if the ROC lies on the equality line (this would mean, of course, that the two models are identical); distinguishing the two in this case would be somewhat difficult.
Example
Consider the Gaussian example we have been discussing where the two models differ only in the means of the conditional distributions. In this case, the two model-testing probabilities are given by
$$
P_F = Q\!\left( \frac{\gamma}{\sqrt{L}\,\sigma} \right) \quad\text{and}\quad P_D = Q\!\left( \frac{\gamma - Lm}{\sqrt{L}\,\sigma} \right)
$$
By re-expressing $\gamma$ as $\frac{\sigma^2}{m}\left( \ln \eta + \frac{Lm^2}{2\sigma^2} \right)$, we discover that these probabilities depend only on the ratio $\sqrt{L}\,m/\sigma$:
$$
P_F = Q\!\left( \frac{\ln \eta}{\sqrt{L}\,m/\sigma} + \frac{1}{2}\,\frac{\sqrt{L}\,m}{\sigma} \right), \qquad
P_D = Q\!\left( \frac{\ln \eta}{\sqrt{L}\,m/\sigma} - \frac{1}{2}\,\frac{\sqrt{L}\,m}{\sigma} \right)
$$
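An empirical check of this behavior: simulate the sufficient statistic under both models, sweep the threshold, and trace out estimates of the $(P_F, P_D)$ pairs. All parameters here are illustrative choices:

```python
import random

# Monte Carlo sketch of an ROC for the Gaussian mean-shift problem: simulate
# the sufficient statistic sum(X_l) under each model, then sweep the
# threshold and count empirical false-alarm and detection rates.
random.seed(0)
L, m, sigma, trials = 10, 0.5, 1.0, 2000

def statistic(signal_present):
    mean = m if signal_present else 0.0
    return sum(random.gauss(mean, sigma) for _ in range(L))

noise_only = [statistic(False) for _ in range(trials)]
with_signal = [statistic(True) for _ in range(trials)]

roc = []
for gamma in range(-10, 21):  # coarse sweep of the threshold
    PF = sum(s > gamma for s in noise_only) / trials
    PD = sum(s > gamma for s in with_signal) / trials
    roc.append((PF, PD))
```

As the threshold rises, both probabilities fall together, but $P_D$ stays above $P_F$, tracing the concave-down curve described in the text.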
Figure 4.3: A plot of the receiver operating characteristic for the densities shown in the previous figure. Three ROC curves are shown corresponding to different values for the parameter $\sqrt{L}\,m/\sigma$ (1, 2, and 7.44).
As this signal-to-noise ratio increases, the ROC curve approaches its ideal form: the northwest corner of a square, as illustrated in Fig. 4.3 by the curve for $\sqrt{L}\,m/\sigma = 7.44$, which corresponds to a signal-to-noise ratio of $7.44^2 \approx 17$ dB. If a small false-alarm probability (say $10^{-4}$) is specified, a large detection probability ($0.9999$) can result. Such values of signal-to-noise ratios can thus be considered large and the corresponding model evaluation problem relatively easy. If, however, the signal-to-noise ratio equals 4 (6 dB), the figure illustrates the worsened performance: a $10^{-4}$ specification on the false-alarm probability would result in a detection probability of essentially zero. Thus, in a fairly small signal-to-noise ratio range, the likelihood ratio test's performance capabilities can vary dramatically. However, no other decision rule can yield better performance.
Specification of the false-alarm probability for a new problem requires experience. Choosing a reasonable value for the false-alarm probability in the Neyman-Pearson criterion depends strongly on the problem difficulty. Too small a number will result in small detection probabilities; too large and the detection probability will be close to unity, suggesting that fewer false alarms could have been tolerated. Problem difficulty is assessed by the degree to which the conditional densities $p_{\mathbf{X}|\mathcal{M}_0}(\mathbf{X}|\mathcal{M}_0)$ and $p_{\mathbf{X}|\mathcal{M}_1}(\mathbf{X}|\mathcal{M}_1)$ overlap, a problem-dependent measurement. If we are testing whether a distribution has one of two possible mean values, as in our Gaussian example, a quantity like a signal-to-noise ratio will probably emerge as determining performance. The performance in this case can vary drastically depending on whether the signal-to-noise ratio is large or small. In other kinds of problems, the best possible performance provided by the likelihood ratio test can be poor. For example, consider the problem of determining which of two zero-mean probability densities describes a given set of data consisting of statistically independent observations (Problem 4.2). Presumably, the variances of these two densities are equal as we are trying to determine which density is most appropriate. In this case, the performance probabilities can be quite low, especially when the general shapes of the densities are similar. Thus a single quantity, like the signal-to-noise ratio, does not emerge to characterize problem difficulty in all hypothesis testing problems. In the sequel, we will analyze each model evaluation and detection problem in a standard way. After the sufficient statistic has been found, we will seek a value for the threshold that attains a specified false-alarm probability. The detection probability will then be determined as a function of problem difficulty, the measure of which is problem-dependent. We can control the choice of false-alarm probability; we cannot control problem difficulty. Confusingly, the detection probability will vary with both the specified false-alarm probability and the problem difficulty.
We are implicitly assuming that we have a rational method for choosing the false-alarm probability cri-
terion value. In signal processing applications, we usually make a sequence of decisions and pass them to
systems making more global determinations. For example, in digital communications problems the model
evaluation formalism could be used to receive each bit. Each bit is received in sequence and then passed
to the decoder which invokes error-correction algorithms. The important notions here are that the decision-
making process occurs at a given rate and that the decisions are presented to other signal processing systems.
The rate at which errors occur in system input(s) greatly influences system design. Thus, the selection of a
false-alarm probability is usually governed by the error rate that can be tolerated by succeeding systems. If
the decision rate is one per day, then a moderately large (say 0.1) false-alarm probability might be appropriate. If the decision rate is a million per second, as in a one-megabit communication channel, the false-alarm probability should be much lower: $10^{-12}$ would suffice for the one-tenth-per-day error rate.
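The closing arithmetic can be verified directly:

```python
# A million decisions per second with P_F = 1e-12 yields roughly one-tenth
# of a false alarm per day, as claimed.
decisions_per_second = 1e6
P_F = 1e-12
false_alarms_per_day = decisions_per_second * P_F * 86_400  # seconds per day
```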
4.1.4 Beyond Two Models
Frequently, more than two viable models for data generation can be defined for a given situation. The classification problem is to determine which of several models best fits a set of measurements. For example, determining the type of airplane from its radar returns forms a classification problem. The model evaluation framework has the right structure if we can allow more than two models. We happily note that in deriving the likelihood ratio test we did not need to assume that only two possible descriptions exist. Go back and examine the expression for the maximum probability correct decision rule. If $K$ models seem appropriate for a specific problem, the decision rule maximizing the probability of making a correct choice is
$$
\pi_i\, p_{\mathbf{X}|\mathcal{M}_i}(\mathbf{X}|\mathcal{M}_i) \;\text{largest over } i \;\Longrightarrow\; \text{say } \mathcal{M}_i
$$
Taking logarithms and discarding terms that do not depend on the observations, this rule can be written as
$$
\Upsilon_i(\mathbf{X}) + C_i \;\text{largest over } i \;\Longrightarrow\; \text{say } \mathcal{M}_i
$$
where $C_i$ summarizes all additive terms that do not depend on the observation vector $\mathbf{X}$. The quantity $\Upsilon_i(\mathbf{X})$ is termed the sufficient statistic associated with model $\mathcal{M}_i$. In many cases, the functional form of the sufficient statistic varies little from one model to another and expresses the necessary operations that summarize the observations. The constants $C_i$ are usually lumped together to yield the threshold against which we compare the sufficient statistic. For example, in the binary model situation, the decision rule becomes
$$
\Upsilon_1(\mathbf{X}) + C_1 \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \Upsilon_0(\mathbf{X}) + C_0
\quad\text{or}\quad
\Upsilon_1(\mathbf{X}) - \Upsilon_0(\mathbf{X}) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; C_0 - C_1
$$
Example
In the Gaussian problem just discussed, the logarithm of the likelihood function is
$$
\ln p_{\mathbf{X}|\mathcal{M}_i}(\mathbf{X}|\mathcal{M}_i) = -\frac{L}{2} \ln 2\pi\sigma^2 - \frac{1}{2\sigma^2} \sum_{l=0}^{L-1} (X_l - m_i)^2
$$
where $m_i$ is the mean under model $\mathcal{M}_i$. After appropriate simplification that retains the ordering, we have
$$
\Upsilon_i(\mathbf{X}) + C_i = \frac{m_i}{\sigma^2} \sum_{l=0}^{L-1} X_l + c_i - \frac{L m_i^2}{2\sigma^2}
$$
The term $c_i$ is a constant defined by the error criterion; for the maximum probability correct criterion, this constant is $\ln \pi_i$.
When employing the Neyman-Pearson test, we need to specify the various error probabilities $\Pr[\text{say } \mathcal{M}_i \mid \mathcal{M}_j \text{ true}]$. These specifications amount to determining the constants $c_i$ when the sufficient statistic is used. Since $K - 1$ comparisons will be used to home in on the optimal decision, only $K - 1$ error probabilities need be specified. Typically, the quantities $\Pr[\text{say } \mathcal{M}_i \mid \mathcal{M}_0 \text{ true}]$, $i = 1, \ldots, K-1$, are used, particularly when the model $\mathcal{M}_0$ represents the situation when no signal is present (see Problem 4.7).
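A sketch of this $K$-model rule for the Gaussian mean-classification example, using the maximum probability correct constants $c_i = \ln \pi_i$; the means, priors, and noise level below are illustrative choices:

```python
import math

# Choose the model maximizing Upsilon_i(X) + C_i, where Upsilon_i(X) is the
# weighted sum of the data and C_i = ln(pi_i) - L*m_i^2 / (2*sigma^2), as in
# the Gaussian example above. Parameters are illustrative.
def classify(X, means, priors, sigma):
    L = len(X)
    total = sum(X)
    scores = [
        (m_i / sigma**2) * total + math.log(pi_i) - L * m_i**2 / (2 * sigma**2)
        for m_i, pi_i in zip(means, priors)
    ]
    return max(range(len(scores)), key=scores.__getitem__)

means = [0.0, 1.0, 2.0]
priors = [1 / 3, 1 / 3, 1 / 3]
```

With equal priors the rule reduces to picking the mean closest (in the scaled sense above) to the sample sum.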
4.2 Detection of Signals in Gaussian Noise
Two approaches are commonly used to convert continuous-time observations into the discrete-time ones analyzed here (Fig. 4.4).
Integrate-and-Dump
In this procedure, no attention is paid to the bandwidth of the noise in selecting the sampling rate. Instead, the sampling interval $\Delta$ is selected according to the characteristics of the signal set. Because of the finite duration of the integrator, successive samples are statistically independent when the noise bandwidth exceeds $1/\Delta$. Consequently, the sampling rate can be varied to some extent while retaining this desirable analytic property.
Sampling
Traditional engineering considerations governed the selection of the sampling filter and the sampling
rate. As in the integrate-and-dump procedure, the sampling rate is chosen according to signal properties.
Presumably, changes in sampling rate would force changes in the filter. As we shall see, this linkage
has dramatic implications on performance.
With either method, the continuous-time detection problem of selecting between models (a binary selection is used here for simplicity)
$$\mathcal{M}_0: X(t) = s_0(t) + N(t), \quad 0 \leq t < T$$
$$\mathcal{M}_1: X(t) = s_1(t) + N(t), \quad 0 \leq t < T$$
Figure 4.4: The two most common methods of converting continuous-time observations into discrete-time ones are shown. In the upper panel, the integrate-and-dump method is shown: the input is integrated over an interval of duration $\Delta$ and the result sampled. In the lower panel, the sampling method merely samples the input every $\Delta$ seconds.
where $s_i(t)$ denotes the known signal set and $N(t)$ denotes additive noise modeled as a stationary stochastic process, is converted into the discrete-time detection problem
$$\mathcal{M}_0: X_l = s_0(l) + N_l, \quad 0 \leq l < L$$
$$\mathcal{M}_1: X_l = s_1(l) + N_l, \quad 0 \leq l < L$$
where the sampling interval $\Delta$ is always taken to divide the observation interval $T$: $L = T/\Delta$. We form the discrete-time observations into a vector: $\mathbf{X} = \mathrm{col}[X(0), \ldots, X(L-1)]$. The binary detection problem is to distinguish between two possible signals present in the noisy output waveform.
$$\mathcal{M}_0: \mathbf{X} = \mathbf{s}_0 + \mathbf{N}$$
$$\mathcal{M}_1: \mathbf{X} = \mathbf{s}_1 + \mathbf{N}$$
To apply our model evaluation results, we need the probability density of $\mathbf{X}$ under each model. As the only probabilistic component of the observations is the noise, the required density for the detection problem is given by
$$
p_{\mathbf{X}|\mathcal{M}_i}(\mathbf{X}|\mathcal{M}_i) = p_{\mathbf{N}}(\mathbf{X} - \mathbf{s}_i)
$$
and the corresponding likelihood ratio is
$$
\Lambda(\mathbf{X}) = \frac{p_{\mathbf{N}}(\mathbf{X} - \mathbf{s}_1)}{p_{\mathbf{N}}(\mathbf{X} - \mathbf{s}_0)}
$$
Much of detection theory revolves about interpreting this likelihood ratio and deriving the detection threshold (either $\eta$ or $\gamma$).
4.2.1 White Gaussian Noise
By far the easiest detection problem to solve occurs when the noise vector consists of statistically independent, identically distributed, Gaussian random variables. In this book, a white sequence consists of statistically independent random variables.* The white sequence's mean is usually taken to be zero† and each component's variance is $\sigma^2$. The equal-variance assumption implies the noise characteristics are unchanging throughout the entire set of observations. The probability density of the zero-mean noise vector evaluated at $\mathbf{X} - \mathbf{s}_i$ equals that of a Gaussian random vector having independent components ($\mathbf{K} = \sigma^2 \mathbf{I}$) with mean $\mathbf{s}_i$:
$$
p_{\mathbf{N}}(\mathbf{X} - \mathbf{s}_i) = \left( \frac{1}{2\pi\sigma^2} \right)^{L/2} \exp\left\{ -\frac{1}{2\sigma^2} (\mathbf{X} - \mathbf{s}_i)^t (\mathbf{X} - \mathbf{s}_i) \right\}
$$

*We are not assuming the amplitude distribution of the noise to be Gaussian.
†The zero-mean assumption is realistic for the detection problem. If the mean were non-zero, simply subtracting it from the observed sequence results in a zero-mean noise component.
The resulting detection problem is similar to the Gaussian example examined so frequently in the hypothesis testing sections, with the distinction here being a non-zero mean under both models. The logarithm of the likelihood ratio becomes
$$
\frac{1}{2\sigma^2} \left[ (\mathbf{X} - \mathbf{s}_0)^t (\mathbf{X} - \mathbf{s}_0) - (\mathbf{X} - \mathbf{s}_1)^t (\mathbf{X} - \mathbf{s}_1) \right]
\;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \ln \eta
$$
which, after simplification, becomes
$$
\left( \mathbf{X}^t \mathbf{s}_1 - \frac{\mathbf{s}_1^t \mathbf{s}_1}{2} \right) - \left( \mathbf{X}^t \mathbf{s}_0 - \frac{\mathbf{s}_0^t \mathbf{s}_0}{2} \right)
\;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \sigma^2 \ln \eta
$$
The quantities in parentheses express the signal processing operations for each model. If more than two
signals were assumed possible, quantities such as these would need to be computed for each signal and the
largest selected. This decision rule is optimum for the additive, white Gaussian noise problem.
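A sketch of this decision computation; the signal set and observation are illustrative, and equal priors with unity error costs are assumed so that the $\sigma^2 \ln \eta$ term vanishes:

```python
# For each candidate signal, form X^t s_i - E_i/2 and choose the largest.
# With equal priors and unity error costs, eta = 1 and ln(eta) = 0, so no
# threshold term is needed. Signals and the observation are illustrative.
def awgn_detect(X, signals):
    def score(s):
        dot = sum(x * v for x, v in zip(X, s))  # X^t s_i
        energy = sum(v * v for v in s)          # E_i
        return dot - energy / 2
    scores = [score(s) for s in signals]
    return max(range(len(signals)), key=scores.__getitem__)

s0 = [1.0, 1.0, 1.0, 1.0]
s1 = [1.0, -1.0, 1.0, -1.0]
```

Subtracting half the energy is what lets signals of unequal energy be compared fairly.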
Each term in the computations for the optimum detector has a signal processing interpretation. When expanded, the term $\mathbf{s}_i^t \mathbf{s}_i$ equals $\sum_{l=0}^{L-1} s_i^2(l)$, which is the signal energy $E_i$. The remaining term $\mathbf{X}^t \mathbf{s}_i$ is the only one involving the observations and hence constitutes the sufficient statistic $\Upsilon_i(\mathbf{X})$ for the additive white Gaussian noise detection problem:
$$
\Upsilon_i(\mathbf{X}) = \mathbf{X}^t \mathbf{s}_i
$$
An abstract, but physically relevant, interpretation of this important quantity comes from the theory of linear
vector spaces. There, the quantity Xt si would be termed the dot product between X and si or the projection of
X onto si . By employing the Schwarz inequality, the largest value of this quantity occurs when these vectors
are proportional to each other. Thus, a dot product computation measures how much alike two vectors are:
they are completely alike when they are parallel (proportional) and completely dissimilar when orthogonal
(the dot product is zero). More precisely, the dot product removes those components from the observations
which are orthogonal to the signal. The dot product thereby generalizes the familiar notion of filtering a
signal contaminated by broadband noise. In filtering, the signal-to-noise ratio of a bandlimited signal can be
drastically improved by lowpass filtering; the output would consist only of the signal and in-band noise.
The dot product serves a similar role, ideally removing those out-of-band components (the orthogonal ones)
and retaining the in-band ones (those parallel to the signal).
Expanding the dot product, $\mathbf{X}^t \mathbf{s}_i = \sum_{l=0}^{L-1} X(l)\, s_i(l)$, another signal processing interpretation emerges. The dot product now describes a finite impulse response (FIR) filtering operation evaluated at a specific index. To demonstrate this interpretation, let $h(l)$ be the unit-sample response of a linear, shift-invariant filter where $h(l) = 0$ for $l < 0$ and $l \geq L$. Letting $X(l)$ be the filter's input sequence, the convolution sum expresses the output:
$$
X(k) \star h(k) = \sum_{l=k-(L-1)}^{k} X(l)\, h(k - l)
$$
Letting $k = L - 1$, the index at which the unit-sample response's last value overlaps the input's value at the origin, we have
$$
\left. X(k) \star h(k) \right|_{k=L-1} = \sum_{l=0}^{L-1} X(l)\, h(L - 1 - l)
$$
Figure 4.5: The detector for signals contained in additive, white Gaussian noise consists of a matched filter, whose output is sampled at the duration of the signal, and half of the signal energy is subtracted from it. The optimum detector incorporates a matched filter for each signal and compares their outputs to determine the largest.
If we set the unit-sample response equal to the index-reversed, then delayed signal, $h(l) = s_i(L - 1 - l)$, we have
$$
\left. X(k) \star s_i(L - 1 - k) \right|_{k=L-1} = \sum_{l=0}^{L-1} X(l)\, s_i(l)
$$
which equals the observation-dependent component of the optimal detector's sufficient statistic. Fig. 4.5 depicts these computations graphically. The sufficient statistic for the $i$th signal is thus expressed in signal processing notation as $\left. X(k) \star s_i(L-1-k) \right|_{k=L-1} - E_i/2$. The filtering term is called a matched filter because the observations are passed through a filter whose unit-sample response matches that of the signal being sought. We sample the matched filter's output at the precise moment when all of the observations fall within the filter's memory and then adjust this value by half the signal energy. The adjusted values for the two assumed signals are subtracted and compared to a threshold.
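The matched-filter identity is easy to verify numerically: convolving the observations with the index-reversed signal and sampling at $k = L-1$ reproduces the dot product. The signal and observations below are illustrative:

```python
# Convolving X with h(l) = s(L-1-l) and sampling the output at k = L-1
# reproduces the dot product sum_l X(l) s(l), as derived above.
def fir_output(x, h, k):
    """Convolution sum y(k) = sum_l x(l) h(k - l)."""
    return sum(x[l] * h[k - l] for l in range(len(x)) if 0 <= k - l < len(h))

s = [3.0, -1.0, 2.0, 0.5]  # illustrative signal
x = [0.4, 1.0, -0.3, 2.0]  # illustrative observations
L = len(s)
h = s[::-1]                # matched filter: h(l) = s(L-1-l)

matched = fir_output(x, h, L - 1)           # sampled at k = L-1
dot = sum(xi * si for xi, si in zip(x, s))  # X^t s
```

Sampling at any other index would sum only a partial overlap of the observations with the reversed signal.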
To compute the performance probabilities, the expressions should be simplified in the ways discussed in
the hypothesis testing sections. As the energy terms are known a priori, they can be incorporated into the
threshold with the result
$$
\sum_{l=0}^{L-1} X(l) \left[ s_1(l) - s_0(l) \right]
\;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \sigma^2 \ln \eta + \frac{E_1 - E_0}{2}
$$
The left term constitutes the sufficient statistic for the binary detection problem. Because the additive noise is
presumed Gaussian, the sufficient statistic is a Gaussian random variable no matter which model is assumed.
Under $\mathcal{M}_i$, the specifics of this probability distribution are
$$
\sum_{l=0}^{L-1} X(l) \left[ s_1(l) - s_0(l) \right]
\sim \mathcal{N}\!\left( \sum_{l=0}^{L-1} s_i(l) \left[ s_1(l) - s_0(l) \right],\; \sigma^2 \sum_{l=0}^{L-1} \left[ s_1(l) - s_0(l) \right]^2 \right)
$$
Note that the only signal-related quantity affecting this performance probability (and all of the others) is the ratio of the energy in the difference signal to the noise variance. The larger this ratio, the better (smaller) the performance probabilities become. Note that the details of the signal waveforms do not greatly affect the energy of the difference signal. For example, consider the case where the two signal energies are equal ($E_0 = E_1 = E$); the energy of the difference signal is given by $2E - 2\sum s_0(l)\, s_1(l)$. The largest value of this energy occurs when the signals are negatives of each other, with the difference-signal energy equaling $4E$. Thus, equal-energy but opposite-signed signals such as sine waves, square waves, Bessel functions, etc., all yield exactly the same performance levels. The essential signal properties that do yield good performance values are elucidated by an alternate interpretation. The term $\sum \left[ s_1(l) - s_0(l) \right]^2$ equals $\|\mathbf{s}_1 - \mathbf{s}_0\|^2$, the square of the $L_2$ norm of the difference signal. Geometrically, the difference-signal energy is the same quantity as the square of the Euclidean distance between the two signals. In these terms, a larger distance between the two signals will mean better performance.
Example
A common detection problem in array processing is to determine whether a signal is present ($\mathcal{M}_1$)
or not ($\mathcal{M}_0$) in the array output. In this case, $s_0(l) = 0$. The optimal detector relies on filtering the
array output with a matched filter having an impulse response based on the assumed signal. Letting
the signal under $\mathcal{M}_1$ be denoted simply by $s(l)$, the optimal detector consists of
$$\sum_{l=0}^{L-1} X(l)\,s(l) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \sigma^2\ln\eta + \frac{E}{2} \qquad\text{or}\qquad \sum_{l=0}^{L-1} X(l)\,s(l) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \gamma$$
Fig. 4.6 displays the probability of detection as a function of the signal-to-noise ratio $E/\sigma^2$ for several
values of false-alarm probability. Given an estimate of the expected signal-to-noise ratio, these curves
can be used to assess the trade-off between the false-alarm and detection probabilities.
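For this white-noise case, the curves of Fig. 4.6 can be tabulated from the relation $P_D = Q\bigl(Q^{-1}(P_F) - \sqrt{E/\sigma^2}\bigr)$, which follows from the Gaussian statistics of the sufficient statistic. A minimal numerical sketch (not part of the original text; assumes NumPy and SciPy are available):

```python
# Sketch: reproducing the trade-off behind Fig. 4.6 numerically.
# Q(x) is the Gaussian tail probability, implemented via erfc.
import numpy as np
from scipy.special import erfc, erfcinv

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def Qinv(p):
    return np.sqrt(2.0) * erfcinv(2.0 * p)

def detection_probability(snr, pf):
    """P_D for the matched-filter detector of a known signal in white
    Gaussian noise: P_D = Q(Q^{-1}(P_F) - sqrt(E/sigma^2))."""
    return Q(Qinv(pf) - np.sqrt(snr))

for pf in (1e-1, 1e-2, 1e-6):
    print(f"PF = {pf:.0e}: PD = {detection_probability(10.0, pf):.4f}")
```

At zero signal-to-noise ratio the detector reduces to guessing, and $P_D = P_F$; increasing the ratio moves $P_D$ toward one, more slowly for smaller $P_F$, as the figure shows.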
The important parameter determining detector performance derived in this example is the signal-to-noise
ratio $E/\sigma^2$: the larger it is, the smaller the false-alarm probability is (generally speaking). Signal-to-noise
ratios can be measured in many different ways. For example, one measure might be the ratio of the rms
signal amplitude to the rms noise amplitude. Note that the important one for the detection problem is much
different. The signal portion is the sum of the squared signal values over the entire set of observed values,
the signal energy; the noise portion is the variance of each noise component, the noise power. Thus, energy
can be increased in two ways that increase the signal-to-noise ratio: the signal can be made larger or the
observations can be extended to encompass a larger number of values.
To illustrate this point, two signals having the same energy are shown in Fig. 4.7. When these signals are
shown in the presence of additive noise, the signal is visible on the left because its amplitude is larger; the
one on the right is much more difficult to discern. The instantaneous signal-to-noise ratio (the ratio of signal
amplitude to average noise amplitude) is the important visual cue. However, the kind of signal-to-noise
ratio that determines detection performance belies the eye. The matched filter outputs have similar maximal
values, indicating that total signal energy rather than amplitude determines the performance of a matched
filter detector.
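This point is easy to check numerically. The sketch below (in the spirit of Fig. 4.7, with hypothetical sample counts; not from the original text) builds two equal-energy sinusoids, one short and large, one ten times longer and $\sqrt{10}$ smaller, and shows that the noise-free matched-filter output at the sampling instant, which equals the signal energy, is the same for both:

```python
# Sketch: matched-filter peak value is set by signal energy, not amplitude.
import numpy as np

f0 = 0.1
short = np.sin(2 * np.pi * f0 * np.arange(10))                 # one cycle, large amplitude
long_ = np.sin(2 * np.pi * f0 * np.arange(100)) / np.sqrt(10)  # ten cycles, smaller amplitude

E_short = np.sum(short**2)   # matched-filter output at the sampling instant
E_long = np.sum(long_**2)    # (noise-free case) equals the signal energy

print(E_short, E_long)       # nearly equal by construction
```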
Sec. 4.2 Detection of Signals in Gaussian Noise 125
Figure 4.6: The probability of detection is plotted versus signal-to-noise ratio for various values of the false-
alarm probability $P_F$. False-alarm probabilities range from $10^{-1}$ down to $10^{-6}$ by decades. The matched
filter receiver was used since the noise is white and Gaussian. Note how the range of signal-to-noise ratios
over which the detection probability changes shrinks as the false-alarm probability decreases. This effect is a
consequence of the nonlinear nature of the function $Q(\cdot)$.
[Figure 4.7 shows three panels for each of two signals: the signal itself, the signal plus noise, and the matched filter output.]
Figure 4.7: Two signals having the same energy are shown at the top of the figure. The one on the left
equals one cycle of a sinusoid having ten samples/period ($\sin 2\pi f_0 l$ with $f_0 = 0.1$). On the right, ten cycles
of a similar signal are shown, with an amplitude a factor of $\sqrt{10}$ smaller. The middle portion of the figure
shows these signals with the same noise signal added; the duration of this signal is 200 samples. The lower
portion depicts the outputs of matched filters for each signal. The detection threshold was set by specifying a
false-alarm probability of $10^{-2}$.
$$(X - s_0)^t K^{-1}(X - s_0) - (X - s_1)^t K^{-1}(X - s_1) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; 2\ln\eta$$
The sufficient statistic for the colored Gaussian noise detection problem is
$$\Upsilon_i(X) = X^t K^{-1} s_i \tag{4.9}$$
The quantities computed for each signal have a similar, but more complicated, interpretation than in the
white noise case. $X^t K^{-1} s_i$ is a dot product, but with respect to the so-called kernel $K^{-1}$. The effect of the
kernel is to weight certain components more heavily than others. A positive-definite symmetric matrix (the
covariance matrix is one such example) can be expressed in terms of its eigenvectors and eigenvalues.
$$K^{-1} = \sum_{k=1}^{L} \frac{1}{\lambda_k}\, v_k v_k^t$$
where $\lambda_k$ and $v_k$ denote the $k$th eigenvalue and eigenvector of the covariance matrix $K$. Each of the constituent
dot products is largest when the signal and the observation vectors have strong components parallel to vk .
However, the product of these dot products is weighted by the reciprocal of the associated eigenvalue. Thus,
components in the observation vector parallel to the signal will tend to be accentuated; those components
parallel to the eigenvectors having the smaller eigenvalues will receive greater accentuation than others. The
usual notions of parallelism and orthogonality become skewed because of the presence of the kernel. A
covariance matrix's eigenvalues have units of variance; the accentuated directions thus correspond to small
noise variances. We can therefore view the weighted dot product as a computation that is simultaneously
trying to select components in the observations similar to the signal, but concentrating on those where the
noise variance is small.
The second term in the expressions constituting the optimal detector is of the form $s_i^t K^{-1} s_i$. This quantity
is a special case of the dot product just discussed. The two vectors involved in this dot product are identical;
they are parallel by definition. The weighting of the signal components by the reciprocal eigenvalues remains.
Recalling the units of the eigenvalues of $K$, $s_i^t K^{-1} s_i$ has the units of a signal-to-noise ratio, which is computed
in a way that enhances the contribution of those signal components parallel to the low noise directions.
To compute the performance probabilities, we express the detection rule in terms of the sufficient statistic.
$$X^t K^{-1}(s_1 - s_0) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \ln\eta + \frac{1}{2}\left(s_1^t K^{-1} s_1 - s_0^t K^{-1} s_0\right)$$
The distribution of the sufficient statistic on the left side of this equation is Gaussian because it is a
linear transformation of the Gaussian random vector $X$. Assuming the $i$th model to be true,
$$X^t K^{-1}(s_1 - s_0) \;\sim\; \mathcal{N}\!\left(s_i^t K^{-1}(s_1 - s_0),\; (s_1 - s_0)^t K^{-1}(s_1 - s_0)\right)$$
The false-alarm probability for the optimal Gaussian colored noise detector is given by
$$P_F = Q\!\left(\frac{\ln\eta + \frac{1}{2}(s_1 - s_0)^t K^{-1}(s_1 - s_0)}{\left[(s_1 - s_0)^t K^{-1}(s_1 - s_0)\right]^{1/2}}\right) \tag{4.10}$$
128 Detection Theory Chap. 4
[Figure 4.8 block diagrams: (top) a matched filter with unit-sample response determined by $K^{-1}s_i$, followed by subtraction of $\frac{1}{2}s_i^t K^{-1} s_i$; (bottom) a whitening filter followed by a matched filter for the whitened signal, with the same bias subtraction.]
Figure 4.8: These diagrams depict the signal processing operations involved in the optimum detector when the
additive noise is not white. The upper diagram shows a matched filter whose unit-sample response depends
both on the signal and the noise characteristics. The lower diagram is often termed the whitening filter
structure, where the noise components of the observed data are first whitened, then passed through a matched
filter whose unit-sample response is related to the whitened signal.
As in the white noise case, the important signal-related quantity in this expression is the signal-to-noise ratio
of the difference signal. The distance interpretation of this quantity remains, but the distance is now warped
by the kernel's presence in the dot product.
The sufficient statistic computed for each signal can be given two signal processing interpretations in the
colored noise case. Both of these rest on considering the quantity $X^t K^{-1} s_i$ as a simple dot product, but with
different ideas on grouping terms. The simplest is to group the kernel with the signal so that the sufficient
statistic is the dot product between the observations and a modified version of the signal, $\tilde{s}_i = K^{-1} s_i$. This
modified signal thus becomes the equivalent of the unit-sample response of the matched filter. In this form,
the observed data are unaltered and passed through a matched filter whose unit-sample response depends on
both the signal and the noise characteristics. The size of the noise covariance matrix, equal to the number of
observations used by the detector, is usually large: hundreds if not thousands of samples are possible. Thus,
computation of the inverse of the noise covariance matrix becomes an issue. This problem needs to be solved
only once if the noise characteristics are static; the inverse can be precomputed on a general purpose computer
using well-established numerical algorithms. The signal-to-noise ratio term of the sufficient statistic is the dot
product of the signal with the modified signal $\tilde{s}_i$. This view of the receiver structure is shown in Fig. 4.8.
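A brief sketch of this precomputation (hypothetical numbers; the first-order covariance and the signal are assumptions chosen for illustration, not from the text):

```python
# Sketch: computing the colored-noise sufficient statistic X^t K^{-1} s_i
# by precomputing the modified signal K^{-1} s_i once, rather than
# inverting K for every new observation vector.
import numpy as np

L = 4
a = 0.5
# Hypothetical first-order (Toeplitz) noise covariance: K[i,j] = a^|i-j|.
K = a ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
s = np.array([1.0, -1.0, 1.0, -1.0])       # hypothetical signal

s_mod = np.linalg.solve(K, s)              # modified signal K^{-1} s, precomputed once
X = np.array([0.9, -1.1, 1.2, -0.8])       # hypothetical observation

statistic = X @ s_mod                      # X^t K^{-1} s
snr_term = s @ s_mod                       # s^t K^{-1} s, the signal-to-noise term
print(statistic, snr_term)
```

Using `solve` rather than forming the explicit inverse is the standard numerical practice; the result is the same dot product described in the text.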
A second and more theoretically powerful view of the computations involved in the colored noise detector
emerges when we factor the covariance matrix. The Cholesky factorization of a positive-definite, symmetric
matrix (such as a covariance matrix or its inverse) has the form $K = LDL^t$. With this factorization, the
sufficient statistic can be written as
$$X^t K^{-1} s_i = \left(D^{-1/2} L^{-1} X\right)^t \left(D^{-1/2} L^{-1} s_i\right)$$
The components of the dot product are multiplied by the same matrix ($D^{-1/2}L^{-1}$), which is lower-triangular.
If this matrix were also Toeplitz, the product of this kind between a Toeplitz matrix and a vector would be
equivalent to the convolution of the components of the vector with the first column of the matrix. If the matrix
is not Toeplitz (which, inconveniently, is the typical case), a convolution also results, but with a unit-sample
response that varies with the index of the output: a time-varying, linear filtering operation. The variation of
the unit-sample response corresponds to the different rows of the matrix $D^{-1/2}L^{-1}$ running backwards from
the main-diagonal entry. What is the physical interpretation of the action of this filter? The covariance of
the random vector $Y = AX$ is given by $K_Y = A K_X A^t$. Applying this result to the current situation, we set
$A = D^{-1/2}L^{-1}$ and $K_X = K = LDL^t$ with the result that the covariance matrix $K_Y$ is the identity matrix!
Thus, the matrix $D^{-1/2}L^{-1}$ corresponds to a (possibly time-varying) whitening filter: we have converted the
colored-noise component of the observed data to white noise! As the filter is always linear, the Gaussian
observation noise remains Gaussian at the output. Thus, the colored noise problem is converted into a simpler
one with the whitening filter: the whitened observations are first match-filtered with the whitened signal
$\tilde{s}_i = D^{-1/2}L^{-1}s_i$ (whitened with respect to noise characteristics only), then half the energy of the whitened
signal is subtracted (Fig. 4.8).
Example
To demonstrate the interpretation of the Cholesky factorization of the covariance matrix as a time-
varying whitening filter, consider the covariance matrix
$$K = \begin{pmatrix} 1 & a & a^2 & a^3 \\ a & 1 & a & a^2 \\ a^2 & a & 1 & a \\ a^3 & a^2 & a & 1 \end{pmatrix}$$
This covariance matrix indicates that the noise was produced by passing white Gaussian noise through
a first-order filter having coefficient $a$: $N(l) = aN(l-1) + \sqrt{1-a^2}\,w(l)$, where $w(l)$ is unit-variance
white noise. Thus, we would expect that if a whitening filter emerged from the matrix ma-
nipulations (derived just below), it would be a first-order FIR filter having a unit-sample response
proportional to
$$h(l) = \begin{cases} 1 & l = 0 \\ -a & l = 1 \\ 0 & \text{otherwise} \end{cases}$$
Simple arithmetic calculations of the Cholesky decomposition suffice to show that the matrices L and
D are given by
$$L = \begin{pmatrix} 1 & 0 & 0 & 0 \\ a & 1 & 0 & 0 \\ a^2 & a & 1 & 0 \\ a^3 & a^2 & a & 1 \end{pmatrix} \qquad D = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1-a^2 & 0 & 0 \\ 0 & 0 & 1-a^2 & 0 \\ 0 & 0 & 0 & 1-a^2 \end{pmatrix}$$
and that their inverses are
$$L^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -a & 1 & 0 & 0 \\ 0 & -a & 1 & 0 \\ 0 & 0 & -a & 1 \end{pmatrix} \qquad D^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \frac{1}{1-a^2} & 0 & 0 \\ 0 & 0 & \frac{1}{1-a^2} & 0 \\ 0 & 0 & 0 & \frac{1}{1-a^2} \end{pmatrix}$$
Because $D$ is diagonal, the matrix $D^{-1/2}$ equals the term-by-term square root of the inverse of $D$. The
product of interest here is therefore given by
$$D^{-1/2}L^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ \frac{-a}{\sqrt{1-a^2}} & \frac{1}{\sqrt{1-a^2}} & 0 & 0 \\ 0 & \frac{-a}{\sqrt{1-a^2}} & \frac{1}{\sqrt{1-a^2}} & 0 \\ 0 & 0 & \frac{-a}{\sqrt{1-a^2}} & \frac{1}{\sqrt{1-a^2}} \end{pmatrix}$$
Let $\tilde{X}$ express the product $D^{-1/2}L^{-1}X$. This vector's elements are given by
$$\tilde{X}_0 = X_0 \qquad \tilde{X}_1 = \frac{X_1 - aX_0}{\sqrt{1-a^2}} \qquad \text{etc.}$$
Thus, the expected FIR whitening filter emerges after the first term. The first term could not be of
this form as no observations were assumed to precede $X_0$. This edge effect is the source of the time-
varying aspect of the whitening filter. If the system modeling the noise generation process has only
poles, this whitening filter will always stabilize (not vary with time) once sufficient data are present
within the memory of the FIR inverse filter. In contrast, the presence of zeros in the generation system
would imply an IIR whitening filter. With finite data, the unit-sample response would then change on
each output sample.
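The whitening claim in this example can be checked numerically. The sketch below (assuming NumPy; a hypothetical value of $a$) obtains $A = D^{-1/2}L^{-1}$ from a Cholesky factorization and verifies both $AKA^t = I$ and the first-order FIR structure of the later rows:

```python
# Sketch: verifying that A = D^{-1/2} L^{-1} from K = L D L^t whitens the
# first-order covariance of the example: A K A^t = I.
import numpy as np

a = 0.6
n = 4
K = a ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

C = np.linalg.cholesky(K)   # K = C C^t, C lower triangular
# C = L D^{1/2}, so dividing each column by its diagonal entry recovers L.
Lmat = C / np.diag(C)

A = np.linalg.inv(C)        # A = D^{-1/2} L^{-1}
KY = A @ K @ A.T            # covariance after whitening
print(np.round(KY, 10))     # identity matrix
```

The second row of `A` is $(-a/\sqrt{1-a^2},\; 1/\sqrt{1-a^2},\; 0,\; 0)$, matching the matrix derived above.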
$$X(t) = s_i(t) + N(t), \quad i = 0, \ldots, K-1,\; 0 \le t \le T$$
where the $s_i(t)$ comprise the signal set. $N(t)$ is usually assumed to be statistically independent of the
transmitted signal and a white, Gaussian process having spectral height $N_0/2$. We represent the received
signal with a Karhunen-Loève expansion.
$$X(t) = \sum_{j} X_j \phi_j(t) = \sum_{j} \left(s_{ij} + N_j\right)\phi_j(t)$$
where $s_{ij}$ and $N_j$ are the representations of the signal $s_i(t)$ and the noise $N(t)$, respectively. To have a
Karhunen-Loève expansion, it suffices to choose $\phi_j(t)$ so that the $N_j$ are pairwise uncorrelated. As $N(t)$
is white, we may choose any $\phi_j(t)$ we want! In particular, choose $\phi_j(t)$ to be the set of functions which
yield a finite-dimensional representation for the signals $s_i(t)$. A complete, but not necessarily orthonormal,
set of functions that does this is
$$\{s_0(t), \ldots, s_{K-1}(t), \psi_0(t), \psi_1(t), \ldots\}$$
where the $\psi_j(t)$ denote any complete set of functions. We form the set $\phi_j(t)$ by applying the Gram-Schmidt
procedure to this set. With this basis, $s_{ij} = 0$ for $j \ge K$. In this case, the representation of $X(t)$ becomes
$$X_j = \begin{cases} s_{ij} + N_j & j = 0, \ldots, K-1 \\ N_j & j \ge K \end{cases}$$
We can thus consider the model evaluation problem that operates on the representation of the received signal
rather than the signal itself; recall that using the representation is equivalent to using the original
process. We have thus created an equivalent model evaluation problem. For the binary signal set case,
$$\mathcal{M}_0: X = s_0 + N \qquad \mathcal{M}_1: X = s_1 + N$$
where $N$ contains statistically independent Gaussian components, each of which has variance $N_0/2$.
Note that the components are statistically independent of each other and that, for $j \ge K$, the representation
contains no signal-related information. Because these components are extraneous and will not con-
tribute to improved performance, we can reduce the dimension of the problem to no more than K by
ignoring these components. By rejecting these noise-only components, we are effectively filtering out
out-of-band noise, retaining those components related to the signals. Using eigenfunctions related to
the signals defines signal space, allowing us to ideally reject pure-noise components.
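The Gram-Schmidt construction of signal space can be sketched with finely sampled signals (a discrete-time approximation, not part of the original text; the constant and split-phase pulse pair is a hypothetical example):

```python
# Sketch: forming an orthonormal basis for signal space by Gram-Schmidt,
# approximating the integrals <f, g> = int_0^T f(t) g(t) dt by Riemann sums.
import numpy as np

T, n = 1.0, 1000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

s0 = np.ones(n)                         # constant pulse
s1 = np.where(t < T / 2, 1.0, -1.0)     # split-phase pulse

def gram_schmidt(signals, dt):
    basis = []
    for s in signals:
        v = s.astype(float).copy()
        for phi in basis:
            v -= np.sum(v * phi) * dt * phi    # subtract projection onto phi
        norm = np.sqrt(np.sum(v**2) * dt)
        if norm > 1e-8:                         # drop linearly dependent signals
            basis.append(v / norm)
    return basis

basis = gram_schmidt([s0, s1], dt)
print(len(basis))    # dimensionality of signal space for this set
```

For these two signals the procedure returns a two-dimensional basis; adding a third signal that is a linear combination of the first two would not increase the dimensionality.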
As a consequence of these observations, we have a model evaluation problem of the form
$$\begin{pmatrix} X_0 \\ \vdots \\ X_{K-1} \end{pmatrix} = \begin{pmatrix} s_{i0} \\ \vdots \\ s_{i,K-1} \end{pmatrix} + \begin{pmatrix} N_0 \\ \vdots \\ N_{K-1} \end{pmatrix}$$
The optimum receiver computes
$$\Upsilon_i(X) = \frac{N_0}{2}\ln\pi_i + \langle s_i, X\rangle - \frac{\|s_i\|^2}{2}, \quad i = 0, \ldots, K-1$$
and chooses the largest. The components of the signal and received vectors are given by
$$s_{ij} = \int_0^T s_i(t)\phi_j(t)\,dt \qquad X_j = \int_0^T X(t)\phi_j(t)\,dt$$
Because of Parseval's Theorem, the inner product between representations equals the time-domain inner
product between the represented signals.
$$\langle s_i, X\rangle = \int_0^T s_i(t)X(t)\,dt \qquad \|s_i\|^2 = \int_0^T s_i^2(t)\,dt$$
Furthermore, $\|s_i\|^2 = E_i$, the energy in the $i$th signal. Thus, the sufficient statistic for the optimal
detector has a closed form time-domain expression.
$$\Upsilon_i(X) = \frac{N_0}{2}\ln\pi_i + \int_0^T s_i(t)X(t)\,dt - \frac{E_i}{2}$$
This form of the minimum probability of error receiver is termed a correlation receiver; see Fig. 4.9. Each
transmitted signal and the received signal are correlated to obtain the sufficient statistic. These operations
project the received signal onto signal space.
An alternate structure which computes the same quantities can be derived by noting that if $f(t)$ and $g(t)$
are nonzero only over $[0, T]$, the inner product (correlation) operation can be written as a convolution followed
by a sampler.
$$\int_0^T f(\tau)g(\tau)\,d\tau = \Bigl.f(t) \ast g(T - t)\Bigr|_{t=T}$$
Consequently, we can restructure the correlation operation as a filtering-and-sampling operation. The im-
pulse responses of the linear filters are time-reversed, delayed versions of the signals in the signal set. This
[Figure 4.9 block diagram: the received signal $r(t)$ drives a bank of correlators, $\int_0^T \cdot\,dt$ against each of $s_0(t), s_1(t), \ldots, s_{K-1}(t)$; the decision chooses the largest output.]
Figure 4.9: Correlation receiver structure for the optimum detector. When unequally likely and/or unequal-
energy signals are used, the correction term $(N_0/2)\ln\pi_i - E_i/2$ must be added to each integrator's output.
[Figure 4.10 block diagram: the received signal drives a bank of filters with impulse responses $s_0(T-t), \ldots, s_{K-1}(T-t)$, each output sampled at $t = T$.]
Figure 4.10: Matched filter receiver structure for the optimum detector. When unequally likely, unequal-
energy signals are used, the correction term $(N_0/2)\ln\pi_i - E_i/2$ must be added to each sampler's output.
structure for the minimum probability of error receiver is known as the matched-filter receiver; see Fig. 4.10.
Each type of receiver has the same performance; however, the matched filter receiver is usually easier to
construct because the correlation receiver requires an analog multiplier.
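The equivalence of the two receiver structures is easy to verify in discrete time (a sketch, not from the original text; the sinusoidal signal and noise seed are hypothetical choices):

```python
# Sketch: the correlation <X, s> over the observation interval equals the
# output of a filter with impulse response s(T - t), sampled at t = T.
import numpy as np

rng = np.random.default_rng(1)
n = 64
s = np.sin(2 * np.pi * 4 * np.arange(n) / n)   # known signal, length n plays the role of T
X = s + rng.standard_normal(n)                 # received waveform (signal plus noise)

correlation = np.sum(X * s)                    # correlation receiver branch

h = s[::-1]                                    # matched filter: time-reversed signal
filtered = np.convolve(X, h)                   # full convolution
sampled = filtered[n - 1]                      # value at the "t = T" sample instant

print(correlation, sampled)                    # identical up to rounding
```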
As we know, receiver performance is judged by the probability of error, which, for equally likely signals
in a binary signal set, is given by
$$P_e = Q\!\left(\frac{\|s_0 - s_1\|}{2\sqrt{N_0/2}}\right)$$
The computation of the probability of error and the dimensionality of the problem can be assessed by
considering signal space: the representation of the signals with respect to a basis. The number of basis elements
required to represent the signal set defines dimensionality. The geometric configuration of the signals in this
space is known as the signal constellation. Once this constellation is found, computing intersignal distances
is easy.
4.3.2 Binary Signaling Schemes
The following series of examples are important as they constitute the most popular signaling schemes in
binary digital communication. For all of these examples, the elements of each signal set are assumed to be
Sec. 4.3 Continuous-time detection 133
equally likely. Under this assumption, the $(N_0/2)\ln\pi_i$ term in the expression for $\Upsilon_i(X)$ cancels with the result
that the computations simplify to
$$\Upsilon_i(X) = \langle X, s_i\rangle - \frac{\|s_i\|^2}{2} \quad \text{for all } i$$
Note especially that under these conditions, the optimum receiver does not require knowledge of the spectral
height $N_0/2$ of the channel noise, an important simplification in practice.
Example
Let the binary signal set be
$$s_0(t) = 0 \qquad s_1(t) = \sqrt{\frac{E}{T}} \qquad 0 \le t \le T$$
The receiver is a single correlator, with the output compared to the threshold $E/2$. The distance
between the signals is easily seen to be $\|s_0 - s_1\| = \sqrt{E}$. Consequently, the probability of error which
results from employing this signal set equals $P_e = Q\bigl(\sqrt{E/2N_0}\bigr)$. This signaling scheme is termed
amplitude-shift keying (ASK) or on-off keying (OOK).
Example
Let the binary signal set be
0 t T 2
s0 t s1 t
E T
E T 0 t T
E T T 2 t T
When these signals are equally likely to be sent, the sufficient statistic for this problem becomes
$\Upsilon_i(X) = \langle X, s_i\rangle$. Note that the energy term $\|s_i\|^2/2$ does not occur: for any signal set containing
equal-energy components, this term is common and need not be computed. Consequently, the receiver
for signal sets having this property need not know the energy of the received signals. In practical
applications, the energy of the signal portion of the received waveform may not be known precisely;
for example, the physical distance between the transmitter and the receiver, which determines how
much the signal is attenuated, may be unknown. A signal set which does require knowledge of the
received signal energy is shown in the first example (ASK).
From the signal constellation, the distance between the signals is $\|s_0 - s_1\| = \sqrt{2E}$, resulting in a
probability of error equal to
$$P_e = Q\!\left(\sqrt{\frac{E}{N_0}}\right)$$
This particular example has no specific name; however, note that $\langle s_0, s_1\rangle = 0$, meaning that the signals
are orthogonal to each other. Such signal sets are said to be orthogonal signal sets.
Example
Let the signal set be defined as
$$s_0(t) = \sqrt{\frac{E}{T}} \qquad s_1(t) = -\sqrt{\frac{E}{T}} \qquad 0 \le t \le T$$
Note that this signal set is another example of one having equal-energy components; therefore, the
receiver need not contain information concerning the energy of the received signals. The distance
between the signals is $\|s_0 - s_1\| = 2\sqrt{E}$ so that
$$P_e = Q\!\left(\sqrt{\frac{2E}{N_0}}\right)$$
This signal set is termed an antipodal (opposite-signed) signal set. If the energy of each component
of a signal set is constrained to be less than a given value, the signal set having the largest distance
between its components is the antipodal signal set.
A greater distance between the components of the signal set implies a better performance (i.e., smaller
Pe ) for the same signal energy. Note that these probabilities of error are monotonic functions of the ratio of
signal energy to channel-noise spectral height. In designing a digital communications system on the basis of
performance only, maximum performance is obtained by increasing signal energy and choosing the best
signal set: the antipodal signal set. Furthermore, note that performance does not depend on the detailed
waveforms of the signals. Signal sets having the same signal constellation have the same performance.
The previous examples are in the class of baseband signal sets: the spectra of the signals are concentrated
at low frequencies. Modulated signal sets, those having their spectra concentrated at high frequencies, can be
analyzed in a similar fashion. Note that since the following examples have constellations identical with their
baseband counterparts, their performances are also the same. The signal set consisting of
$$s_0(t) = 0 \qquad s_1(t) = \sqrt{\frac{2E}{T}}\sin 2\pi f_0 t \qquad 0 \le t \le T$$
(where $f_0 T$ is an integer) is an example of a modulated ASK signal set. An orthogonal signal set is exemplified
by frequency-shift keying (FSK):
$$s_0(t) = \sqrt{\frac{2E}{T}}\sin 2\pi f_0 t \qquad s_1(t) = \sqrt{\frac{2E}{T}}\sin 2\pi f_1 t \qquad 0 \le t \le T$$
where $f_0 T$ and $f_1 T$ are distinct integers. Finally, phase-shift keying (PSK) corresponds to an antipodal signal
set.
$$s_0(t) = \sqrt{\frac{2E}{T}}\sin 2\pi f_0 t \qquad s_1(t) = -\sqrt{\frac{2E}{T}}\sin 2\pi f_0 t \qquad 0 \le t \le T$$
4.3.3 K-ary Signal Sets
To generalize these results to $K$-ary signal sets is straightforward. The optimum receiver computes
$$\Upsilon_i(X) = \frac{N_0}{2}\ln\pi_i + \langle s_i, X\rangle - \frac{\|s_i\|^2}{2}, \quad i = 0, \ldots, K-1$$
for each i and chooses the largest. Conceptually, these are no more complicated than binary signal sets. The
minimum probability of error receiver remains a matched filter and has a similar structure to those shown
previously. However, the computation of the probability of error may not be simple.
Chap. 4 Problems 135
Problems
4.1 Consider the following two-model evaluation problem [40: Prob. 2.2.1].
$$\mathcal{M}_0: X = N \qquad \mathcal{M}_1: X = s + N$$
where $s$ and $N$ are statistically independent, positively valued, random variables having the densities
$$p_s(s) = ae^{-as} \qquad p_N(N) = be^{-bN}$$
(a) Prove that the likelihood ratio test reduces to
$$X \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \gamma$$
(b) Find $\gamma$ for the minimum probability of error test as a function of the a priori probabilities.
(c) Now assume that we need a Neyman-Pearson test. Find $\gamma$ as a function of $P_F$, the false-alarm prob-
ability.
4.2 Two different equi-variance statistical models describe the observations [40: Prob. 2.2.11].
$$\mathcal{M}_0: p_X(X) = \frac{1}{\sqrt{2}}\,e^{-\sqrt{2}|X|} \qquad \mathcal{M}_1: p_X(X) = \frac{1}{\sqrt{2\pi}}\,e^{-X^2/2}$$
(a) Find the likelihood ratio.
(b) Compute the decision regions for various values of the threshold in the likelihood ratio test.
(c) Assuming these two densities are equally likely, find the probability of making an error in distin-
guishing between them.
4.3 Cautious Decision Making
Wanting to be very cautious before making a decision, an ELEC 531 student wants to explicitly allow
no decision to be made if the data don't warrant a firm decision. The Bayes cost criterion is used
to derive the cautious detector. Let the cost of a wrong decision be $C_{wd} > 0$, the cost of making no
decision be $C_? > 0$, and the cost of making a correct decision be zero. Two signal models are possible
and they have a priori probabilities $\pi_0$ and $\pi_1$.
(a) Derive the detector that minimizes the average Bayes cost, showing that it is a likelihood ratio
detector.
(b) For what choices of Cwd and C? is the decision rule well-defined?
(c) Let the observations consist of L samples, with model 0 corresponding to white Gaussian noise
and model 1 corresponding to a known signal in additive white Gaussian noise. Find the decision
rule in terms of the sufficient statistic.
4.4 A hypothesis testing criterion radically different from those discussed in §4.1.1 and §4.1.2 is minimum
equivocation. In this information theoretic approach, the two-model testing problem is modeled as a
digital channel (Fig. 4.11). The channel's inputs, generically represented by $x$, are the models and
the channel's outputs, denoted by $y$, are the decisions.
The quality of such information theoretic channels is quantified by the mutual information $I(x;y)$, de-
fined to be the difference between the entropy of the inputs and the equivocation [5: §2.3, §2.4].
$$I(x;y) = H(x) - H(x|y)$$
$$H(x) = -\sum_i P(x_i)\log P(x_i)$$
$$H(x|y) = -\sum_{i,j} P(x_i, y_j)\log\frac{P(x_i, y_j)}{P(y_j)}$$
Figure 4.11: The two-model testing problem can be abstractly described as a communication channel where the inputs are the models and the outputs are the decisions. The transition probabilities are related to the false-alarm ($P_F$) and detection ($P_D$) probabilities.
Here, $P(x_i)$ denotes the a priori probabilities, $P(y_j)$ the output probabilities, and $P(x_i, y_j)$ the
joint probability of input $x_i$ resulting in output $y_j$. For example, $P(x_0, y_0) = P(x_0)(1 - P_F)$ and
$P(y_0) = P(x_0)(1 - P_F) + P(x_1)(1 - P_D)$. For a fixed set of a priori probabilities, show that the
decision rule that maximizes the mutual information is the likelihood ratio test. What is the threshold when
this criterion is employed?
Note: This problem is relatively difficult. The key to its solution is to exploit the concavity of the
entropy function.
4.5 Detection and Estimation Working Together
Detectors are frequently used to determine if a signal is even present before applying estimators to tease
out the signal. The same, but unknown, signal having duration $L$ may or may not be present in additive
white Gaussian noise during the $i$th observation interval.
$$\mathcal{M}_0: R_i = N_i \qquad \mathcal{M}_1: R_i = s + N_i$$
$\pi_0$ and $\pi_1$ denote the a priori probabilities. Once $M$ intervals have been determined by the front-end
detector to contain a signal, we apply the maximum likelihood estimator to measure the signal.
(a) What is the maximum likelihood signal estimate?
(b) What is the front-end detector's algorithm?
(c) Even if we use an optimal front-end detector, it can make errors, saying a signal is present when it
isn't. What is the mean-squared error of the combined detector-estimator in terms of the detector's
detection and false-alarm probabilities?
4.6 Non-Gaussian statistical models sometimes yield surprising results in comparison to Gaussian ones.
Consider the following hypothesis testing problem where the observations have a Laplacian probability
distribution.
$$\mathcal{M}_0: p_X(X) = \tfrac{1}{2}e^{-|X+m|} \qquad \mathcal{M}_1: p_X(X) = \tfrac{1}{2}e^{-|X-m|}$$
(a) Find the sufficient statistic for the optimal decision rule.
(b) What decision rule guarantees that the miss probability will be less than 0.1?
4.7 Developing a Neyman-Pearson decision rule for more than two models has not been detailed because
a mathematical quandary arises. The issue is that we have several performance probabilities we want
to optimize. In essence, we are optimizing a vector of performance probabilities, which requires us to
specify a norm. Many norms can be chosen; we select one in this problem.
Assume K distinct models are required to account for the observations. We seek to maximize the sum
of the probabilities of correctly announcing $\mathcal{M}_i$, $i = 1, \ldots, K-1$. This choice amounts to maximizing the
$L_1$ norm of the detection probabilities. We constrain the probability of announcing $\mathcal{M}_i$ when model $\mathcal{M}_0$
was indeed true to not exceed a specified value.
(a) Formulate the optimization problem that simultaneously maximizes $\sum_i \Pr[\text{say } \mathcal{M}_i \mid \mathcal{M}_i]$ under the
constraint $\Pr[\text{say } \mathcal{M}_i \mid \mathcal{M}_0] \le \alpha_i$. Find the solution using Lagrange multipliers.
(b) Can you find the Lagrange multipliers?
(c) Can your solution be expressed as choosing the largest of the sufficient statistics $\Upsilon_i(X) + C_i$?
4.8 Pattern recognition relies heavily on ideas derived from the principles of statistical model testing.
Measurements are made of a test object and these are compared with those of standard objects to
determine which the test object most closely resembles. Assume that the measurement vector X is
jointly Gaussian with mean $m_i$, $i = 1, \ldots, K$, and covariance matrix $\sigma^2 I$ (i.e., statistically independent
components). Thus, there are $K$ possible objects, each having an ideal measurement vector $m_i$ and
probability $\pi_i$ of being present.
(a) How is the minimum probability of error choice of object determined from the observation of X?
(b) Assuming that only two equally likely objects are possible ($K = 2$), what is the probability of error
of your decision rule?
(c) The expense of making measurements is always a practical consideration. Assuming each mea-
surement costs the same to perform, how would you determine the effectiveness of a measurement
vector's components?
4.9 Define y to be
$$y = \sum_{k=0}^{L} x_k$$
where the $x_k$ are statistically independent random variables, each having a Gaussian density $\mathcal{N}(0, \sigma^2)$.
The number $L$ of variables in the sum is a random variable with a Poisson distribution.
$$\Pr[L = l] = \frac{\lambda^l}{l!}e^{-\lambda}, \quad l = 0, 1, \ldots$$
Based upon the observation of $y$, we want to decide whether $L = 1$ or $L > 1$. Write an expression for
the minimum Pe likelihood ratio test.
4.10 One observation of the random variable $X$ is obtained. This random variable is either uniformly
distributed between $-1$ and $1$ or expressed as the sum of statistically independent random variables, each
of which is also uniformly distributed between $-1$ and $1$.
(a) Suppose there are two terms in the aforementioned sum. Assuming that the two models are equally
likely, find the minimum probability of error decision rule.
(b) Compute the resulting probability of error of your decision rule.
(c) Show that the decision rule found in part (a) applies no matter how many terms are assumed
present in the sum.
4.11 The observed random variable X has a Gaussian density on each of five models.
$$p_{X|\mathcal{M}_i}(X|\mathcal{M}_i) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(X - m_i)^2}{2\sigma^2}\right\}, \quad i = 1, \ldots, 5$$
where $m_1 = -2m$, $m_2 = -m$, $m_3 = 0$, $m_4 = m$, and $m_5 = 2m$. The models are equally likely and the
criterion of the test is to minimize Pe .
(a) Draw the decision regions on the X-axis.
(b) Compute the probability of error.
(c) Let $\sigma = 1$. Sketch accurately $P_e$ as a function of $m$.
4.12 The goal is to choose which of the following four models is true upon the reception of the three-
dimensional vector X [40: Prob. 2.6.6].
$$\mathcal{M}_0: X = m_0 + N \qquad \mathcal{M}_1: X = m_1 + N \qquad \mathcal{M}_2: X = m_2 + N \qquad \mathcal{M}_3: X = m_3 + N$$
where
$$m_0 = \begin{pmatrix} a \\ 0 \\ b \end{pmatrix} \quad m_1 = \begin{pmatrix} 0 \\ a \\ b \end{pmatrix} \quad m_2 = \begin{pmatrix} -a \\ 0 \\ b \end{pmatrix} \quad m_3 = \begin{pmatrix} 0 \\ -a \\ b \end{pmatrix}$$
The noise vector N is a Gaussian random vector having statistically independent, identically distributed
components, each of which has zero mean and variance $\sigma^2$. We have $L$ independent observations of the
received vector X.
(a) Assuming equally likely models, find the minimum Pe decision rule.
(b) Calculate the resulting error probability.
(c) Show that neither the decision rule nor the probability of error depends on $b$. Intuitively,
why is this fact true?
4.13 Discrete Estimation
Estimation theory focuses on deriving effective (minimum error) techniques for determining the value
of continuous-valued quantities. When the quantity is discrete-valued (integer-valued, for example),
the usual approaches don't work well since they usually produce estimates not in the set of known
values. This problem explores applying decision-theoretic approaches to yield a framework for discrete
estimation.
Let's explore a specific example. Let a sequence of statistically independent, identically distributed
observations be Gaussian having mean $m$ and variance $\sigma^2$. The mean $m$ can only assume the values
$-1$, $0$, and $1$, and these are equally likely. The mean, whatever its value, is constant throughout the $L$
observations.
(a) What is the minimum probability of error decision rule? What is the resulting probability of error?
(b) What is the MAP estimate of the mean?
(c) The problem with using the detection approach of part (a) is that the probability of error is not a
standard error metric in estimation theory. Suppose we want to find the minimum mean-squared
error, discrete-valued estimate. Show that by defining an appropriate Bayes cost function, you
can create a detection problem that minimizes the mean-squared error. What is the Bayes cost
function that works?
(d) Find the minimum mean-squared error estimate using the minimum Bayes cost detector.
(e) What is the resulting mean-squared error?
4.14 Diversity Communication
In diversity signaling, one of two equally likely signal groups is transmitted, with each member s_m of
the group {s_1, s_2, ..., s_M} sent through one of M parallel channels simultaneously. The receiver has
access to all channels and can use them to make a decision as to which signal group was transmitted.
The received vector (dimension L) emerging from the m-th channel has the form
X_m = s_m^(i) + N_m,  i = 0, 1
The noise is colored, Gaussian, and independent from channel to channel. The statistical properties of
the noise are known and they are the same in each channel.
Chap. 4 Problems 139
[Figure: block diagram of the M diversity channels. Each transmitted signal s_m^(i) passes through its own channel, where noise N_m is added to produce the received vector R_m; the receiver observes R_1, ..., R_M.]
M_0 : X(l) = N(l),  l = 0, ..., L-1
M_1 : X(l) = A sin(2 pi l / L) + N(l),  l = 0, ..., L-1
Pr[N = n] = (1 - a) a^n,  n = 0, 1, ...
You perform a drug trial over a very large population (large enough so that the approximation of the
geometric probability distribution remains valid). Either the drug is ineffective, in which case the
distribution's parameter equals a_0, or is effective and the parameter equals a, a != a_0. The a priori
probability that the drug will be effective is pi.
(a) Construct the minimum probability of error test that decides drug effectiveness.
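For intuition on part (a): the likelihood ratio for iid geometric observations depends on the data only through the sum of the counts, so the test is a threshold on that sufficient statistic. A hedged sketch (the function names and the log-threshold parameterization are ours):

```python
import numpy as np

def geometric_llr(samples, a0, a1):
    """Log-likelihood ratio for iid data with Pr[N = n] = (1 - a) a^n.
    It is an affine function of the sufficient statistic sum(samples)."""
    n = np.asarray(samples)
    L = n.size
    return L * np.log((1 - a1) / (1 - a0)) + n.sum() * np.log(a1 / a0)

def decide(samples, a0, a1, log_eta=0.0):
    """Threshold test; log_eta = log(pi_0/pi_1) gives the minimum-Pe rule."""
    return int(geometric_llr(samples, a0, a1) > log_eta)
```

With a1 > a0 the coefficient of sum(samples) is positive, so the rule reduces to "declare the larger parameter when the total count is big enough."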
Figure 4.12: A binary symmetric digital communications channel.
4.20 When a patient is screened for the presence of a disease in an organ, a section of tissue is viewed under
a microscope and a count of abnormal cells made. Even under healthy conditions, a small number
of abnormal cells will be present. Presumably a much larger number will be present if the organ is
diseased. Assume that the number L of abnormal cells in a section is geometrically distributed.
Pr[L = l] = (1 - alpha) alpha^l,  l = 0, 1, ...
The parameter alpha of a diseased organ will be larger than that of a healthy one. The probability of a
randomly selected organ being diseased is p.
(a) Assuming that the value of the parameter alpha is known in each situation, find the best method of
deciding whether an organ is diseased.
(b) Using your method, a patient was said to have a diseased organ. In this case, what is the probability
that the organ is diseased?
(c) Assume that alpha is known only for healthy organs. Find the disease screening method that mini-
mizes the maximum possible value of the probability that the screening method will be in error.
4.21 Interference
Wireless communication seldom occurs without the presence of interference that originates from
other communications signals. Suppose a binary communication system uses the antipodal signal set
s^(i)(l) = (-1)^i A,  l = 0, ..., L-1,  with i = 0, 1. An interfering communication system also uses an
antipodal signal set having the depicted basic signal. Its amplitude A_I is unknown. The received signal
consists of the sum of our signal, the interfering signal, and white Gaussian noise.
[Figure: the interferer's basic signal, which has amplitude A_I over the first half of the interval and -A_I over the second half, with the transition at l = L/2.]
(a) What receiver would be used if the bit intervals of the two communication systems were aligned?
(b) How would this receiver change if the bit intervals did not align, with the time shift not known?
Assume that the receiver knows the time origin of our communications.
4.22 Assume we have N sensors each determining whether a signal is present in white Gaussian noise or
not. The identical signal and noise models apply at each sensor, with the signal having energy E.
(a) What is each sensor's receiver?
(b) Assuming the signal is as likely as not, what is the optimal fusion rule?
(c) Does this distributed detection system yield the same error probabilities as the optimal detector
that assimilates all the observations directly?
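Part (c) can be probed numerically. The sketch below uses hypothetical parameter values throughout: each sensor's matched-filter output is modeled as N(0, E sigma^2) or N(E, E sigma^2), the three local decisions are fused by majority vote, and the result is compared with a centralized detector that sums all the outputs.

```python
import numpy as np

# Hypothetical setup: N sensors, signal energy E, noise variance sigma^2.
rng = np.random.default_rng(1)
N, E, sigma, trials = 3, 1.0, 1.0, 20000

h = rng.integers(2, size=trials)                 # signal absent (0) / present (1)
# matched-filter output at each sensor: N(h*E, E*sigma^2)
y = h[:, None] * E + np.sqrt(E) * sigma * rng.standard_normal((trials, N))

local = y > E / 2                                # each sensor's min-Pe rule
fused = local.sum(axis=1) >= 2                   # majority fusion for N = 3
central = y.sum(axis=1) > N * E / 2              # assimilates all observations

pe_dist = float(np.mean(fused != h.astype(bool)))
pe_cent = float(np.mean(central != h.astype(bool)))
```

In this simulation the centralized detector's error probability comes out smaller, which is consistent with the fact that quantizing each sensor's statistic to one bit before fusion discards information.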
4.23 Data are often processed in the field, with the results from several systems sent to a central place for
final analysis. Consider a detection system wherein each of N field radar systems detects the presence
or absence of an airplane. The detection results are collected together so that a final judgment about the
airplane's presence can be made. Assume each field system has false-alarm and detection probabilities
P_F and P_D respectively.
(a) Find the optimal detection strategy for making a final determination that maximizes the probability
of making a correct decision. Assume that the a priori probabilities pi_0, pi_1 of the airplane's
absence or presence, respectively, are known.
(b) How does the airplane detection system change when the a priori probabilities are not known?
Require that the central judgment have a false-alarm probability no bigger than P_F.
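For part (a), the central rule is a likelihood ratio test on the vector of binary reports: each "detect" vote contributes log(P_D/P_F) and each "no detect" vote log((1-P_D)/(1-P_F)) to the log-likelihood ratio. A sketch (the function name and argument conventions are ours):

```python
import numpy as np

def fuse(decisions, PF, PD, pi0=0.5, pi1=0.5):
    """Optimal central rule for N field systems reporting binary decisions.
    Declare the airplane present when the summed log-likelihood ratio of the
    reports exceeds log(pi0/pi1)."""
    u = np.asarray(decisions)
    llr = (u * np.log(PD / PF) + (1 - u) * np.log((1 - PD) / (1 - PF))).sum()
    return int(llr > np.log(pi0 / pi1))
```

When every field system has identical P_F and P_D, the weights are equal and the rule collapses to a count-of-votes threshold.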
4.24 Mathematically, a preconception is a model for the world that you believe applies over a broad class
of circumstances. Clearly, you should be vigilant and continually judge your assumptions' correctness.
Let {X_l} denote a sequence of random variables that you believe to be independent and identically
distributed with a Gaussian distribution having zero mean and variance sigma^2. Elements of this sequence
arrive one after the other, and you decide to use the sample average M_l as a test statistic:
M_l = (1/l) sum_{i=1}^{l} X_i
(a) Based on the sample average, develop a procedure that tests for each l whether the preconceived
model is correct. This test should be designed so that it continually monitors the validity of the
assumptions, and indicates at each l whether the preconception is valid or not. Establish this test
so that it yields a constant probability of judging the model incorrect when, in fact, it is actually
valid.
(b) To judge the efficacy of this test, assume the elements of the actual sequence have the assumed
distribution, but that they are correlated with correlation coefficient rho. Determine the probability
(as a function of l) that your test correctly invalidates the preconception.
(c) Is the test based on the sample average optimal? If so, prove it so; if not, find the optimal one.
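A sketch of the part (a) idea: since M_l ~ N(0, sigma^2/l) under the preconceived model, comparing |M_l| against a threshold proportional to 1/sqrt(l) keeps the per-step false-alarm probability constant in l. The constant c below is an assumed example value (c = 1.96 gives roughly a 5% two-sided level), not something fixed by the problem.

```python
import numpy as np

def monitor(x, sigma=1.0, c=1.96):
    """For each l, flag the preconception invalid when |M_l| > c*sigma/sqrt(l).
    Under the iid N(0, sigma^2) model, M_l ~ N(0, sigma^2/l), so the
    per-step false-alarm probability is the same for every l."""
    x = np.asarray(x, dtype=float)
    l = np.arange(1, x.size + 1)
    M = np.cumsum(x) / l                 # running sample average
    return np.abs(M) > c * sigma / np.sqrt(l)
```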
4.25 Assume that observations of a sinusoidal signal s(l) = A sin(2 pi f l), l = 0, ..., L-1, are contaminated
by first-order colored noise as described in the example on page 129.
(a) Find the unit-sample response of the whitening filter.
(b) Assuming that the alternative model is the sole presence of the colored Gaussian noise, what is
the probability of detection?
(c) How does this probability vary with signal frequency f when the first-order coefficient is positive?
Does your result make sense? Why?
4.26 In space-time coding systems, a common bit stream is transmitted over several channels simultane-
ously but using different signals. X_k denotes the signal received from the k-th channel, k = 1, ..., K,
and the received signal equals s_k(i) + N_k. Here, i equals 0 or 1, corresponding to the bit being trans-
mitted. Each signal has length L. N_k denotes a Gaussian random vector with statistically independent
components having mean zero and variance sigma_k^2 (the variance depends on the channel).
(a) Assuming equally likely bit transmissions, find the minimum probability of error decision rule.
(b) What is the probability that your decision rule makes an error?
(c) Suppose each channel has its own decision rule, which is designed to yield the same miss proba-
bility as the others. Now what is the minimum probability of error decision rule of the system that
combines the individual decisions into one?
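For part (a), with unequal channel variances the log-likelihood ratio weights each channel's matched-filter output by 1/sigma_k^2 before summing. A sketch (the function name and array-shape conventions are ours):

```python
import numpy as np

def decide_bit(X, S0, S1, var):
    """Minimum-Pe rule for K parallel channels with per-channel noise
    variances var[k]: a variance-weighted sum of matched-filter outputs.
    X, S0, S1 have shape (K, L); var has length K."""
    X, S0, S1 = map(np.asarray, (X, S0, S1))
    var = np.asarray(var, dtype=float)
    corr = (X * (S1 - S0)).sum(axis=1)                      # per-channel correlation
    bias = 0.5 * ((S1**2).sum(axis=1) - (S0**2).sum(axis=1))  # energy correction
    llr = ((corr - bias) / var).sum()
    return int(llr > 0)                  # equally likely bits: zero threshold
```

Noisier channels (large sigma_k^2) are automatically de-emphasized, which is the essential benefit of combining before deciding.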
4.27 The performance for the optimal detector in white Gaussian noise problems depends only on the dis-
tance between the signals. Let's confirm this result experimentally. Define the signal under one hy-
pothesis to be a unit-amplitude sinusoid having one cycle within the 50-sample observation interval.
Observations of this signal are contaminated by additive white Gaussian noise having variance equal to
1.5. The hypotheses are equally likely.
(a) Let the second hypothesis be a cosine of the same frequency. Calculate and estimate the detector's
false-alarm probability.
(b) Now let the signals correspond to square waves constructed from the sinusoids used in the pre-
vious part. Normalize them so that they have the same energy as the sinusoids. Calculate and
estimate the detector's false-alarm probability.
(c) Now let the noise be Laplacian with variance 1.5. Although no analytic expression for the detector
performance can be found, do the simulated performances for the sinusoid and the square-wave
signals change significantly?
(d) Finally, let the second signal be the negative of the sinusoid. Repeat the calculations and the
simulation for Gaussian noise.
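Part (a) can be simulated directly. The sketch below (seed, trial count, and variable names are ours) estimates the error probability of the correlation detector for the sine/cosine pair and compares it with Q(d/2 sigma), where d is the distance between the signals; the agreement illustrates the distance-only dependence the problem asks you to confirm.

```python
import numpy as np
from math import erfc, sqrt, pi

# Setup from the problem statement: one cycle in 50 samples, noise variance 1.5,
# equally likely hypotheses.
rng = np.random.default_rng(2)
L, var, trials = 50, 1.5, 20000
l = np.arange(L)
s0 = np.sin(2 * pi * l / L)
s1 = np.cos(2 * pi * l / L)

i = rng.integers(2, size=trials)                     # transmitted hypothesis
S = np.where(i[:, None] == 1, s1, s0)                # trials x L signal matrix
X = S + sqrt(var) * rng.standard_normal((trials, L))

# Equal-energy signals: the minimum-Pe detector picks the larger correlation.
ihat = (X @ s1 > X @ s0).astype(int)
pe_sim = float(np.mean(ihat != i))

# Theory: Pe = Q(d / (2 sigma)) with d = ||s1 - s0||, Q(x) = erfc(x/sqrt(2))/2.
d = np.linalg.norm(s1 - s0)
pe_theory = 0.5 * erfc(d / (2 * sqrt(var)) / sqrt(2))
```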
4.28 Physical constraints imposed on signals can change what signal set choices result in the best detection
performance. Let one of two equally likely discrete-time signals be observed in the presence of white
Gaussian noise (variance/sample equals sigma^2):
M_0 : X(l) = s_0(l) + N(l)
M_1 : X(l) = s_1(l) + N(l),  l = 0, ..., L-1
We are free to choose any signals we like, but there are constraints. Average signal power equals
(1/L) sum_l s^2(l), and peak power equals max_l s^2(l).
(a) Assuming the average signal power must be less than Pave , what are the optimal signal choices?
Is your answer unique?
(b) When the peak power Ppeak is constrained, what are the optimal signal choices?
(c) If Pave = Ppeak, which constraint yields the best detection performance?
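A small numeric check of the distance reasoning behind these parts (the values of L, Pave, and Ppeak below are arbitrary examples): detection performance improves monotonically with ||s_1 - s_0||, and antipodal, constant-amplitude signals meet either power constraint with equality.

```python
import numpy as np

L, Pave, Ppeak = 10, 1.0, 1.0

# Antipodal, constant-amplitude signal pairs meeting each constraint exactly
s0_ave = np.full(L, np.sqrt(Pave)); s1_ave = -s0_ave   # average power = Pave
s0_pk = np.full(L, np.sqrt(Ppeak)); s1_pk = -s0_pk     # peak power = Ppeak

d2_ave = float(np.sum((s1_ave - s0_ave) ** 2))         # = 4 * L * Pave
d2_pk = float(np.sum((s1_pk - s0_pk) ** 2))            # = 4 * L * Ppeak
```

For a constant-amplitude signal the average and peak powers coincide, which is why the Pave = Ppeak comparison in part (c) is the interesting case.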
4.29 One of the more interesting problems in detection theory is determining when the probability distri-
bution of the observations differs from that in other portions of the observation interval. The most
common form of the problem is that over the interval [0, C), the observations have one form, and that in
the remainder of the observation interval, C, ..., L-1, they have a different probability distribution. The
change detection problem is to determine whether in fact a change has occurred and, if so, estimate when
that change occurs.
To explore the change detection problem, let's consider the simple situation where the mean of white
Gaussian noise changes at the C-th sample:
M_0 : X(l) ~ N(0, sigma^2),  l = 0, ..., L-1
M_1 : X(l) ~ N(0, sigma^2),  l = 0, ..., C-1
      X(l) ~ N(m, sigma^2),  l = C, ..., L-1
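When m is known, the log-likelihood ratio for a change at a candidate time C sums (m x(l) - m^2/2)/sigma^2 over l >= C; maximizing over C jointly detects the change and estimates its location (a generalized-likelihood-ratio approach). A sketch (the function name and threshold are ours):

```python
import numpy as np

def detect_change(x, m, sigma, threshold):
    """GLRT sketch for a mean change in white Gaussian noise: for each
    candidate change time C, llr[C] = sum_{l >= C} (m*x[l] - m^2/2)/sigma^2;
    maximize over C and compare to a threshold."""
    x = np.asarray(x, dtype=float)
    terms = (m * x - m**2 / 2) / sigma**2
    llr = np.cumsum(terms[::-1])[::-1]       # suffix sums: llr[C] = sum over l >= C
    Chat = int(np.argmax(llr))               # most likely change time
    return llr[Chat] > threshold, Chat
```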
(a) What is the optimal amplitude choice for the binary and quaternary (four-signal) signal sets when
the noise is white and the signal energy is constrained (sum_l s_i^2(l) = E)? Comment on the uniqueness
of your answers.
(b) Describe the optimal binary QAM signal set when the noise is colored.
(c) Now suppose the peak amplitude is constrained (max_l |s_i(l)| <= A_max). What are the optimal signal
sets (both binary and quaternary) for the white noise case? Again, comment on uniqueness.
4.32 Checking for Repetitions
Consider the following detection problem.
M_0 : X_1 = s + N_1
      X_2 = s + N_2
M_1 : X_1 = s_1 + N_1
      X_2 = s_2 + N_2
Here, the two observations either contain the same signal or they contain different ones. The noise
vectors N_1 and N_2 are statistically independent of each other and identically distributed, with each
being Gaussian with zero mean and covariance matrix K = sigma^2 I.
(a) Find the decision rule that minimizes the false-alarm probability when the miss probability is
required to be less than 1 - beta.
(b) Now suppose none of the signals is known. All that is known is that under M_0 the signals
are the same and that under M_1 they are different. What is the optimal decision rule under these
conditions?
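For part (b), only the difference of the two observations is informative: under M_0 (same signal) the difference equals N_1 - N_2 ~ N(0, 2 sigma^2 I), so its normalized energy is chi-squared distributed regardless of the unknown signal. A sketch (the function name and threshold parameterization are ours):

```python
import numpy as np

def repetition_test(x1, x2, sigma, threshold):
    """Declare 'different signals' when ||x1 - x2||^2 / (2 sigma^2) is large.
    Under M_0, x1 - x2 = N1 - N2 ~ N(0, 2 sigma^2 I), so the statistic is
    chi-squared with dim degrees of freedom and the threshold sets P_F."""
    d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    stat = d @ d / (2 * sigma**2)
    return int(stat > threshold)
```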
4.33 A sampled signal is suspected of consisting of a periodic component and additive Gaussian noise. The
signal, if present, has a known period N. The number of samples equals L, a multiple of N. The noise
is white and has known variance sigma^2. A consultant (you!) has been asked to determine the signal's
presence.
(a) Assuming the signal is a sinusoid with unknown phase and amplitude, what should be done to
determine the presence of the sinusoid so that a false-alarm probability criterion of 0.1 is met?
(b) Other than its periodic nature, now assume that the signal's waveform is unknown. What compu-
tations must the optimum detector perform?
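One natural statistic for part (b): fold the record into its L/N periods and average them sample-by-sample; the energy of the folded average retains the periodic component while the white noise averages down. A sketch (the function name is ours; this illustrates the statistic, not the full threshold design):

```python
import numpy as np

def periodic_energy(x, N):
    """Folded-average statistic for an unknown periodic signal of period N:
    average the L/N periods, then measure the energy of that average."""
    x = np.asarray(x, dtype=float)
    periods = x.reshape(-1, N)          # requires L to be a multiple of N
    avg = periods.mean(axis=0)          # sample-by-sample average of periods
    return float(np.sum(avg ** 2))
```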
4.34 Delegating Responsibility
Modern management styles tend to want decisions to be made locally (by people at the scene) rather
than by the boss. While this approach might be considered more democratic, we should understand
how to make decisions under such organizational constraints and what the performance might be.
Let three local systems separately make observations. Each local system's observations are identi-
cally distributed and statistically independent of the others, and based on the observations, each system
decides which of two models applies best. The judgments are relayed to the central manager who must
make the final decision. Assume the local observations consist either of white Gaussian noise or of a
signal having energy E to which the same white Gaussian noise has been added. The signal energy is
the same at each local system. Each local decision system must meet a performance standard on the
probability it declares the presence of a signal when none is present.
(a) What decision rule should each local system use?
(b) Assuming the observation models are equally likely, how should the central management make
its decision so as to minimize the probability of error?
(c) Is this decentralized decision system optimal (i.e., the probability of error for the final decision is
minimized)? If so, demonstrate optimality; if not, find the optimal system.
Appendix
Probability Distributions
Logarithmic: Pr[N = n] = -p^n / (n log(1-p)), n = 1, 2, ...; mean: -p / [(1-p) log(1-p)]; variance: -p [p + log(1-p)] / [(1-p)^2 log^2(1-p)]

Conditional Gaussian: p_{X|Y}(x|y) = [2 pi sigma_x^2 (1-rho^2)]^{-1/2} exp{ -[x - m_x - rho (sigma_x/sigma_y)(y - m_y)]^2 / [2 sigma_x^2 (1-rho^2)] }, where rho = E[(X-m_x)(Y-m_y)] / (sigma_x sigma_y) is the correlation coefficient; conditional mean: m_x + rho (sigma_x/sigma_y)(y - m_y); conditional variance: sigma_x^2 (1-rho^2)

Multivariate Gaussian: p_X(x) = [det(2 pi K)]^{-1/2} exp{ -(1/2)(x-m)^t K^{-1} (x-m) }; mean: m; covariance: K

Generalized Gaussian: p_X(x) = r / [2 A(r) Gamma(1/r)] exp{ -|(x-m)/A(r)|^r }, with A(r) = [sigma^2 Gamma(1/r) / Gamma(3/r)]^{1/2}; mean: m; variance: sigma^2

Chi-Squared (chi_nu^2 = sum_{i=1}^{nu} X_i^2, X_i IID N(0,1)): p(x) = x^{nu/2-1} e^{-x/2} / [2^{nu/2} Gamma(nu/2)], 0 <= x; mean: nu; variance: 2 nu

Noncentral Chi-Squared (chi_nu^2(lambda) = sum_{i=1}^{nu} X_i^2, X_i IID N(m_i, 1), lambda = sum_i m_i^2): p(x) = (1/2)(x/lambda)^{(nu-2)/4} e^{-(x+lambda)/2} I_{(nu-2)/2}(sqrt(lambda x)), 0 <= x; mean: nu + lambda; variance: 2(nu + 2 lambda)

Chi (chi_nu = sqrt(chi_nu^2)): p(x) = x^{nu-1} e^{-x^2/2} / [2^{nu/2-1} Gamma(nu/2)], 0 <= x; mean: sqrt(2) Gamma((nu+1)/2) / Gamma(nu/2); variance: nu - 2 [Gamma((nu+1)/2) / Gamma(nu/2)]^2

Student's t: p(x) = Gamma((nu+1)/2) / [sqrt(pi nu) Gamma(nu/2)] (1 + x^2/nu)^{-(nu+1)/2}; mean: 0 (nu > 1); variance: nu/(nu-2) (nu > 2)

Beta: p(x) = Gamma((m+n)/2) / [Gamma(m/2) Gamma(n/2)] x^{m/2-1} (1-x)^{n/2-1}, 0 < x < 1; mean: m/(m+n); variance: 2 m n / [(m+n)^2 (m+n+2)]

F Distribution (F_{m,n} = (chi_m^2/m)/(chi_n^2/n)): p(x) = Gamma((m+n)/2) / [Gamma(m/2) Gamma(n/2)] (m/n)^{m/2} x^{m/2-1} (1 + m x/n)^{-(m+n)/2}, 0 <= x; mean: n/(n-2), n > 2; variance: 2 n^2 (m+n-2) / [m (n-2)^2 (n-4)], n > 4

Noncentral F (F_{m,n}(lambda) = (chi_m^2(lambda)/m)/(chi_n^2/n)): a Poisson-weighted (weights e^{-lambda/2} (lambda/2)^k / k!) mixture of central F densities; mean: n(m+lambda) / [m(n-2)], n > 2; variance: 2 (n/m)^2 [(m+lambda)^2 + (m+2 lambda)(n-2)] / [(n-2)^2 (n-4)], n > 4

Wishart (W = sum_{i=1}^{N} X_i X_i^t, X_i IID N(0, K), dimension M; Gamma_M: multivariate gamma function): p_W(w) = det(w)^{(N-M-1)/2} exp{ -(1/2) tr(K^{-1} w) } / [2^{NM/2} det(K)^{N/2} Gamma_M(N/2)]; mean: N K

Uniform: p(x) = 1/(b-a), a <= x <= b; mean: (a+b)/2; variance: (b-a)^2/12

Exponential: p(x) = lambda e^{-lambda x}, 0 <= x; mean: 1/lambda; variance: 1/lambda^2

Lognormal: p(x) = [x sqrt(2 pi sigma^2)]^{-1} exp{ -(log x - m)^2 / (2 sigma^2) }, 0 < x; mean: e^{m + sigma^2/2}; variance: e^{2m + sigma^2} (e^{sigma^2} - 1)

Maxwell: p(x) = sqrt(2/pi) a^{3/2} x^2 e^{-a x^2/2}, 0 <= x; mean: sqrt(8/(pi a)); variance: (3 - 8/pi)/a

Laplacian: p(x) = [2 sigma^2]^{-1/2} exp{ -sqrt(2) |x-m| / sigma }; mean: m; variance: sigma^2

Gamma: p(x) = b^a x^{a-1} e^{-b x} / Gamma(a), 0 <= x, 0 < a, b; mean: a/b; variance: a/b^2

Rayleigh: p(x) = 2 a x e^{-a x^2}, 0 <= x; mean: sqrt(pi/(4a)); variance: (1 - pi/4)/a

Weibull: p(x) = a b x^{b-1} e^{-a x^b}, 0 <= x, 0 < a, b; mean: a^{-1/b} Gamma(1 + 1/b); variance: a^{-2/b} [Gamma(1 + 2/b) - Gamma^2(1 + 1/b)]

Arc-Sine: p(x) = 1 / [pi sqrt(x(1-x))], 0 < x < 1; mean: 1/2; variance: 1/8

Sine Amplitude: p(x) = 1 / [pi sqrt(1-x^2)], |x| < 1; mean: 0; variance: 1/2

Circular Normal: p(x) = e^{a cos(x-m)} / [2 pi I_0(a)], |x-m| <= pi; mean: m

Cauchy: p(x) = a / {pi [(x-m)^2 + a^2]}; mean: m (from symmetry arguments); variance: does not exist

Logistic: p(x) = e^{-(x-m)/a} / {a [1 + e^{-(x-m)/a}]^2}, 0 < a; mean: m; variance: a^2 pi^2 / 3

Gumbel: p(x) = (1/a) e^{-(x-m)/a} exp{ -e^{-(x-m)/a} }, 0 < a; mean: m + gamma a (gamma: Euler's constant); variance: pi^2 a^2 / 6

Pareto: p(x) = a b^a / x^{a+1}, b <= x, 0 < a, b; mean: a b/(a-1), a > 1; variance: a b^2 / [(a-1)^2 (a-2)], a > 2

Table 4: Non-Gaussian distributions.