OPTIMUM FILTERS
7.1 INTRODUCTION
The estimation of one signal from another is one of the most important problems in signal processing, and it embraces a wide range of interesting applications. In many of these applications the desired signal, whether it is speech, a radar signal, an EEG, or an image, is not available or observed directly. Instead, for a variety of reasons, the desired signal may be noisy and distorted. For example, the equipment used to measure the signal may be of limited resolution, the signal may be observed in the presence of noise or other interfering signals, or it may be distorted due to the propagation of the signal from the source to the receiver, as in a digital communication system. In very simple and idealized environments, it may be possible to design a classical filter, such as a lowpass, highpass, or bandpass filter, to restore the desired signal from the measured data. Rarely, however, will these filters be optimum in the sense of producing the best estimate of the signal. Therefore, in this chapter we consider the design of optimum digital filters, which include the digital Wiener filter and the discrete Kalman filter [2].
In the 1940s, driven by important applications in communication theory, Norbert Wiener pioneered research in the problem of designing a filter that would produce the optimum estimate of a signal from a noisy measurement or observation. The discrete form of the Wiener filtering problem, shown in Fig. 7.1, is to design a filter to recover a signal d(n) from noisy observations

    x(n) = d(n) + v(n)

Assuming that both d(n) and v(n) are wide-sense stationary random processes, Wiener considered the problem of designing the filter that would produce the minimum mean-square error estimate of d(n). Thus, with

    ξ = E{|e(n)|²}

where

    e(n) = d(n) - d̂(n)
[Figure: block diagram in which d(n) plus noise v(n) forms x(n), which is filtered by W(z) to produce d̂(n).]

Figure 7.1 Illustration of the general Wiener filtering problem. Given two wide-sense stationary processes, x(n) and d(n), that are statistically related to each other, the filter W(z) is to produce the minimum mean-square error estimate, d̂(n), of d(n).
the problem is to find the filter that minimizes ξ. We begin this chapter by considering the general problem of Wiener filtering in which a linear shift-invariant filter, W(z), is to be designed that will filter a given signal, x(n), to produce the minimum mean-square estimate, d̂(n), of another signal, d(n). Depending upon how the signals x(n) and d(n) are related to each other, a number of different and important problems may be cast into a Wiener filtering framework. Some of the problems that will be considered in this chapter include:
1. Filtering. This is the classic problem considered by Wiener, in which we are given x(n) = d(n) + v(n) and the goal is to estimate d(n) using a causal filter, i.e., to estimate d(n) from the current and past values of x(n).

2. Smoothing. This is the same as the filtering problem except that the filter is allowed to be noncausal. A Wiener smoothing filter, for example, may be designed to estimate d(n) from x(n) = d(n) + v(n) using all of the available data.

3. Prediction. If d(n) = x(n + 1) and W(z) is a causal filter, then the Wiener filter becomes a linear predictor. In this case, the filter is to produce a prediction (estimate) of x(n + 1) in terms of a linear combination of previous values of x(n).

4. Deconvolution. When x(n) = d(n) * g(n) + v(n), with g(n) being the unit sample response of a linear shift-invariant filter, the Wiener filter becomes a deconvolution filter.
First, we consider the design of FIR Wiener filters in Section 7.2. The main result here will be the derivation of the discrete form of the Wiener-Hopf equations, which specify the filter coefficients of the optimum (minimum mean-square error) filter. Solutions to the Wiener-Hopf equations are then given for the cases of filtering, smoothing, prediction, and noise cancellation. In Section 7.3 we then consider the design of IIR Wiener filters. First, in Section 7.3.1, we solve the noncausal Wiener filtering problem. Then, in Section 7.3.2, the problem of designing a causal Wiener filter is considered. Unlike the noncausal Wiener filter, solving for the optimum causal Wiener filter is a nonlinear problem that requires a spectral factorization of the power spectrum of the input process, x(n). The design of a causal Wiener filter is illustrated with examples of Wiener filtering, Wiener prediction, and Wiener deconvolution. Finally, in Section 7.4 we consider recursive approaches to signal estimation and derive what is known as the discrete Kalman filter. Unlike the Wiener filter, which is a linear shift-invariant filter for estimating stationary processes, the Kalman filter is shift-varying and applicable to nonstationary as well as stationary processes.
7.2 THE FIR WIENER FILTER
In this section we consider the design of an FIR Wiener filter that produces the minimum mean-square estimate of a given process d(n) by filtering a set of observations of a statistically related process x(n). It is assumed that x(n) and d(n) are jointly wide-sense stationary with known autocorrelations, r_x(k) and r_d(k), and known cross-correlation r_dx(k). Denoting the unit sample response of the Wiener filter by w(n), and assuming a (p - 1)st-order filter, the system function is

    W(z) = Σ_{n=0}^{p-1} w(n) z^{-n}

With x(n) the input to the filter, the output, which we denote by d̂(n), is the convolution of w(n) with x(n),

    d̂(n) = Σ_{l=0}^{p-1} w(l) x(n - l)                             (7.1)
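As a concrete illustration of Eq. (7.1), the filter output can be computed directly from the coefficient and data sequences. The Python sketch below is not from the text; the function name is made up, and it assumes samples before n = 0 are zero. It evaluates the sum term by term and checks the result against NumPy's built-in convolution.

```python
import numpy as np

def fir_wiener_output(w, x):
    """Evaluate d_hat(n) = sum_{l=0}^{p-1} w(l) x(n-l) (Eq. 7.1).

    w : length-p array of filter coefficients
    x : observed sequence; samples before n = 0 are taken as zero
    """
    p = len(w)
    d_hat = np.zeros(len(x))
    for n in range(len(x)):
        for l in range(p):
            if n - l >= 0:                 # zero initial conditions
                d_hat[n] += w[l] * x[n - l]
    return d_hat

# Illustrative coefficients and data (hypothetical values)
w = np.array([0.5, 0.25, 0.125])
x = np.array([1.0, 0.0, 0.0, 2.0, 1.0])
print(fir_wiener_output(w, x))        # direct evaluation of Eq. (7.1)
print(np.convolve(w, x)[:len(x)])     # same values via np.convolve
```

Since Eq. (7.1) is an ordinary convolution, the first len(x) samples of `np.convolve(w, x)` agree with the direct sum.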
The Wiener filter design problem requires that we find the filter coefficients, w(k), that minimize the mean-square error¹

    ξ = E{|e(n)|²} = E{|d(n) - d̂(n)|²}                              (7.2)

As discussed in Section 2.3.10 of Chapter 2, in order for a set of filter coefficients to minimize ξ it is necessary and sufficient that the derivative of ξ with respect to w*(k) be equal to zero,

    ∂ξ/∂w*(k) = ∂/∂w*(k) E{e(n) e*(n)} = E{e(n) ∂e*(n)/∂w*(k)} = 0   (7.3)

With

    e(n) = d(n) - Σ_{l=0}^{p-1} w(l) x(n - l)                       (7.4)

it follows that

    ∂e*(n)/∂w*(k) = -x*(n - k)

and Eq. (7.3) becomes

    E{e(n) x*(n - k)} = 0 ;  k = 0, 1, ..., p - 1                   (7.5)

which is known as the orthogonality principle or the projection theorem.² Substituting Eq. (7.4) into Eq. (7.5) we have

    E{d(n) x*(n - k)} - Σ_{l=0}^{p-1} w(l) E{x(n - l) x*(n - k)} = 0   (7.6)
¹ Note that our wide-sense stationarity assumption implies that the mean-square error does not depend upon n.
² Compare this with the orthogonality principle in Chapter 4, p. 146.
Finally, since x(n) and d(n) are jointly WSS, then E{x(n - l) x*(n - k)} = r_x(k - l) and E{d(n) x*(n - k)} = r_dx(k), and Eq. (7.6) becomes

    Σ_{l=0}^{p-1} w(l) r_x(k - l) = r_dx(k) ;  k = 0, 1, ..., p - 1   (7.7)

which is a set of p linear equations in the p unknowns w(k), k = 0, 1, ..., p - 1. In matrix form, using the fact that the autocorrelation sequence is conjugate symmetric, r_x(k) = r_x*(-k), Eq. (7.7) becomes

    [ r_x(0)     r_x*(1)    ...  r_x*(p-1) ] [ w(0)   ]   [ r_dx(0)   ]
    [ r_x(1)     r_x(0)     ...  r_x*(p-2) ] [ w(1)   ] = [ r_dx(1)   ]   (7.8)
    [   ...        ...      ...     ...    ] [  ...   ]   [    ...    ]
    [ r_x(p-1)   r_x(p-2)   ...  r_x(0)    ] [ w(p-1) ]   [ r_dx(p-1) ]
which is the matrix form of the Wiener-Hopf equations. Equation (7.8) may be written more concisely as

    R_x w = r_dx                                                    (7.9)

where R_x is a p x p Hermitian Toeplitz matrix of autocorrelations, w is the vector of filter coefficients, and r_dx is the vector of cross-correlations between the desired signal d(n) and the observed signal x(n).
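As an illustration of Eq. (7.9), the Wiener-Hopf equations can be solved numerically by forming the Hermitian Toeplitz matrix R_x from r_x(0), ..., r_x(p - 1) and applying a general linear solver. The sketch below is not from the text; the function name and sample correlation values are hypothetical.

```python
import numpy as np

def solve_wiener_hopf(rx, rdx):
    """Solve R_x w = r_dx (Eq. 7.9) for the FIR Wiener filter coefficients.

    rx  : autocorrelation values r_x(0), ..., r_x(p-1)
    rdx : cross-correlation values r_dx(0), ..., r_dx(p-1)
    """
    rx, rdx = np.asarray(rx), np.asarray(rdx)
    p = len(rx)
    # Build the Hermitian Toeplitz matrix R_x[k, l] = r_x(k - l),
    # using conjugate symmetry r_x(-m) = r_x*(m).
    rx_full = np.concatenate((np.conj(rx[:0:-1]), rx))  # r_x(-(p-1)), ..., r_x(p-1)
    idx = np.subtract.outer(np.arange(p), np.arange(p)) + (p - 1)
    Rx = rx_full[idx]
    return np.linalg.solve(Rx, rdx)

# Hypothetical numbers: the 2x2 system [[1, 0.5], [0.5, 1]] w = [1, 0.5]
w = solve_wiener_hopf([1.0, 0.5], [1.0, 0.5])
print(w)  # solution is w = [1, 0]
```

For large p, a Toeplitz-aware solver (e.g. the Levinson-Durbin recursion discussed elsewhere in the book) is more efficient than a general solve, but the general solve makes the structure of Eq. (7.9) explicit.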
The minimum mean-square error in the estimate of d(n) may be evaluated from Eq. (7.2) as follows. With

    ξ_min = E{|e(n)|²} = E{ e(n) [ d(n) - Σ_{l=0}^{p-1} w(l) x(n - l) ]* }
          = E{e(n) d*(n)} - Σ_{l=0}^{p-1} w*(l) E{e(n) x*(n - l)}   (7.10)

recall that if w(k) is the solution to the Wiener-Hopf equations, then it follows from Eq. (7.5) that E{e(n) x*(n - k)} = 0. Therefore, the second term in Eq. (7.10) is equal to zero and

    ξ_min = E{e(n) d*(n)} = E{ [ d(n) - Σ_{l=0}^{p-1} w(l) x(n - l) ] d*(n) }

Finally, taking expected values we have

    ξ_min = r_d(0) - Σ_{l=0}^{p-1} w(l) r_dx*(l)                    (7.11)

or, using vector notation,

    ξ_min = r_d(0) - r_dx^H w                                       (7.12)
Alternatively, since w = R_x^{-1} r_dx from Eq. (7.9),
Table 7.1 The Wiener-Hopf Equations for the FIR Wiener Filter
and the Minimum Mean-Square Error

Wiener-Hopf equations:   Σ_{l=0}^{p-1} w(l) r_x(k - l) = r_dx(k) ;  k = 0, 1, ..., p - 1

Correlations:            r_x(k)  = E{x(n + k) x*(n)}
                         r_dx(k) = E{d(n) x*(n - k)}

Minimum error:           ξ_min = r_d(0) - Σ_{l=0}^{p-1} w(l) r_dx*(l)
the minimum error may also be written explicitly in terms of the autocorrelation matrix R_x and the cross-correlation vector r_dx as follows:

    ξ_min = r_d(0) - r_dx^H R_x^{-1} r_dx                           (7.13)
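As a numerical sanity check, Eqs. (7.12) and (7.13) give the same minimum error once w is obtained from Eq. (7.9). The correlation values below are made up for illustration and are not from the text.

```python
import numpy as np

# Hypothetical second-order (p = 2) example
Rx  = np.array([[1.0, 0.5],
                [0.5, 1.0]])   # autocorrelation matrix R_x
rdx = np.array([0.8, 0.4])     # cross-correlation vector r_dx
rd0 = 1.0                      # r_d(0)

w = np.linalg.solve(Rx, rdx)                 # Wiener-Hopf solution (Eq. 7.9)
xi_712 = rd0 - np.vdot(rdx, w).real          # Eq. (7.12): r_d(0) - r_dx^H w
xi_713 = rd0 - np.vdot(rdx, np.linalg.solve(Rx, rdx)).real  # Eq. (7.13)
print(xi_712, xi_713)   # identical (0.36 for these numbers, up to roundoff)
```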
The FIR Wiener filtering equations are summarized in Table 7.1.

We now look at some Wiener filtering applications that illustrate how to formulate the Wiener filtering problem, set up the Wiener-Hopf equations (7.9), and solve for the filter coefficients.
7.2.1 Filtering
In the filtering problem, a signal d(n) is to be estimated from a noise-corrupted observation

    x(n) = d(n) + v(n)
Filtering, or noise reduction, is an extremely important and pervasive problem that is found
in many applications such as the transmission of speech in a noisy environment and the
reception of data across a noisy channel. It is also important in the detection and location
of targets using sensor arrays, the restoration of old recordings, and the enhancement of
images.
Using the results in the previous section, the optimum FIR Wiener filter may be easily derived. It will be assumed that the noise has zero mean and that it is uncorrelated with d(n). Therefore, E{d(n) v*(n - k)} = 0 and the cross-correlation between d(n) and x(n) becomes

    r_dx(k) = E{d(n) x*(n - k)} = E{ d(n) [d(n - k) + v(n - k)]* }
            = E{d(n) d*(n - k)} + E{d(n) v*(n - k)}
            = r_d(k)                                                (7.14)

Next, since

    r_x(k) = E{x(n + k) x*(n)} = E{ [d(n + k) + v(n + k)] [d(n) + v(n)]* }   (7.15)
with v(n) and d(n) uncorrelated processes, it follows that

    r_x(k) = r_d(k) + r_v(k)

Therefore, with R_d the autocorrelation matrix for d(n), R_v the autocorrelation matrix for v(n), and r_dx = r_d = [r_d(0), ..., r_d(p - 1)]^T, the Wiener-Hopf equations become

    (R_d + R_v) w = r_d                                             (7.16)

In order to simplify these equations any further, however, specific information about the statistics of the signal and noise is required.
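To make Eq. (7.16) concrete, the following sketch assumes, additionally, that v(n) is white with variance σ_v², so that R_v = σ_v² I; that assumption, and the values of α, σ_v², and p, are illustrative choices, not from the text.

```python
import numpy as np

# Illustrative parameters (hypothetical): signal autocorrelation decays as
# alpha^|k|, noise is white with variance sigma_v2, filter order p - 1.
alpha, sigma_v2, p = 0.8, 1.0, 3

k = np.arange(p)
rd = alpha ** k                                  # r_d(0), ..., r_d(p-1)
Rd = alpha ** np.abs(np.subtract.outer(k, k))    # Toeplitz R_d, entries r_d(|k-l|)
Rv = sigma_v2 * np.eye(p)                        # white noise: R_v = sigma_v2 * I

w = np.linalg.solve(Rd + Rv, rd)                 # (R_d + R_v) w = r_d  (Eq. 7.16)
xi_min = rd[0] - rd @ w                          # Eq. (7.11), with r_dx = r_d real
print(w, xi_min)
```

Note that xi_min is strictly less than r_d(0), the error incurred by estimating d(n) as zero, and it shrinks as σ_v² decreases.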
Example 7.2.1 Filtering

Let d(n) be an AR(1) process with an autocorrelation sequence

    r_d(k) = α^{|k|}

with 0 < α < 1.