It therefore follows that the general form for the power spectrum of a WSS
process is
$$P_x(e^{j\omega}) = P_{x_r}(e^{j\omega}) + \sum_{k=1}^{N} \alpha_k\, u_0(\omega - \omega_k)$$
Assuming that the filter is stable, the output process x[n] will be wide-sense stationary and, with P_w(z) = σ_w^2, the power spectrum of x[n] will be

$$P_x(z) = \sigma_w^2\, \frac{B_q(z)\,B_q(z^{-1})}{A_p(z)\,A_p(z^{-1})}$$
Recall that "(·)*" in analogue frequency corresponds to "z^{-1}" in digital frequency.

$$P_x(e^{j\theta}) = \sigma_w^2\, \frac{\left|B_q(e^{j\theta})\right|^2}{\left|A_p(e^{j\theta})\right|^2}$$
Solution: The system function is (poles and zeros – resonance & sink)
$$H(z) = \frac{1 + 0.9025\,z^{-2}}{1 - 0.5562\,z^{-1} + 0.81\,z^{-2}}$$
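As a quick check (a sketch, not part of the original solution), the power spectrum σ_w^2 |H(e^{jω})|^2 can be evaluated on the unit circle with freqz; a unit driving-noise variance is assumed here.

b = [1 0 0.9025];              % numerator  B(z) = 1 + 0.9025 z^-2
a = [1 -0.5562 0.81];          % denominator A(z) = 1 - 0.5562 z^-1 + 0.81 z^-2
sigma_w2 = 1;                  % assumed white-noise variance
[H,w] = freqz(b,a,512);        % frequency response on [0, pi)
Px = sigma_w2*abs(H).^2;       % power spectrum sigma_w^2 |H(e^{jw})|^2
plot(w,10*log10(Px)); xlabel('Frequency (rad/sample)'); ylabel('Power (dB)');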
[Figure: power spectrum of H(z), plotted against frequency from 0 to π rad/sample.]
$$x[n] - \sum_{l=1}^{p} a_p(l)\,x[n-l] = \sum_{l=0}^{q} b_q(l)\,w[n-l]$$

$$r_{xx}(k) - \sum_{l=1}^{p} a_p(l)\,r_{xx}(k-l) = \sum_{l=0}^{q} b_q(l)\,r_{xw}(k-l)$$
Since x[n] is WSS, it follows that x[n] and w[n] are jointly WSS.
The model implies that, under suitable conditions, x[n] is also a weighted sum of its past values plus an added shock w[n], that is (for the AR(p) case)

$$x[n] = \sum_{l=1}^{p} a_p(l)\,x[n-l] + w[n]$$
• Unit step: $u[n] = \begin{cases} 0, & n < 0 \\ 1, & n \ge 0 \end{cases}$
– If w[n] = δ[n], then the output x[n] is the impulse response h[n] of the model (see the sketch below).
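A minimal sketch of this point (the AR(1) coefficient 0.9 is chosen purely for illustration): driving the model with a unit impulse returns its impulse response.

n = 0:29;
d = [1; zeros(29,1)];          % unit impulse delta[n]
h = filter(1,[1 -0.9],d);      % x[n] = 0.9 x[n-1] + w[n], driven by delta[n]
stem(n,h); xlabel('n'); ylabel('h[n]');   % here h[n] = 0.9^n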
The all-pole nature of the AR model reflects the duality between IIR and FIR filters.
Pxx = pyulear(x,p)
[Pxx,w] = pyulear(x,p,nfft)
[Pxx,f] = pyulear(x,p,nfft,fs)
[Pxx,f] = pyulear(x,p,nfft,fs,'range')
[Pxx,w] = pyulear(x,p,nfft,'range')
Description:
Pxx = pyulear(x,p)
Solution:
a = [1 -2.2137 2.9403 -2.1697 0.9606]; % AR(4) coefficients (assumed here so the example is self-contained)
randn('state',1);
x = filter(1,a,randn(256,1)); % AR system output
pyulear(x,4) % Fourth-order estimate
[Figure: AR(2) signal x = filter([1],[1, −1.2, 0.8], w) and its ACF: signal values vs. sample number (0–400), the ACF vs. correlation lag (±400), and a zoom of the ACF for lags ±20.]
For an AR(1) process, the ACF is

$$\rho_k = a_1^{k}, \qquad k > 0$$

Notice the difference in the behaviour of the ACF for a1 positive and negative.
The variance is

$$\sigma_x^2 = \frac{\sigma_w^2}{1 - \rho_1 a_1} = \frac{\sigma_w^2}{1 - a_1^2}$$

and the power spectrum is

$$P_{xx}(f) = \frac{2\sigma_w^2}{\left|1 - a_1 e^{-j2\pi f}\right|^2} = \frac{2\sigma_w^2}{1 + a_1^2 - 2a_1\cos(2\pi f)}$$
[Figure: two AR(1) realisations, one with a1 positive and one with a1 negative: signal values vs. sample number (0–100), ACF vs. correlation lag (0–20), and Burg power spectral density estimates (dB/rad/sample) vs. normalized frequency (×π rad/sample).]
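Plots of this kind can be reproduced with a short script (a sketch; the values a1 = ±0.8 and the 1000-sample length are assumed for illustration):

rng(0); w = randn(1000,1);
for a1 = [0.8 -0.8]
    x = filter(1,[1 -a1],w);               % AR(1): x[n] = a1 x[n-1] + w[n]
    [r,lags] = xcorr(x,20,'biased');       % ACF estimate up to lag 20
    figure;
    subplot(3,1,1); plot(x(1:100)); title(sprintf('AR(1), a_1 = %.1f',a1));
    subplot(3,1,2); stem(lags,r/r(21));    % normalised ACF (lag 0 is at index 21)
    subplot(3,1,3); pburg(x,1);            % first-order Burg PSD estimate
end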
For an AR(2) process to be stationary, the coefficients must satisfy

$$a_1 + a_2 < 1, \qquad a_2 - a_1 < 1, \qquad -1 < a_2 < 1$$
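These conditions can be checked either directly or through the pole locations; a small sketch (the coefficient values are illustrative):

a1 = 0.75; a2 = -0.5;                     % illustrative AR(2) coefficients
inTriangle = (a1 + a2 < 1) && (a2 - a1 < 1) && (abs(a2) < 1);
p = roots([1 -a1 -a2]);                   % poles of 1/(1 - a1 z^-1 - a2 z^-2)
isStationary = all(abs(p) < 1);           % stationary iff all poles lie inside the unit circle
fprintf('triangle test: %d, pole test: %d\n', inTriangle, isStationary);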
[Figure: an example signal (signal values vs. sample number, 0–300) and its ACF (correlation vs. lag, 0–50).]
[Figure: the admissible (a1, a2) region for an AR(2) process (−2 < a1 < 2, −1 < a2 < 1), divided into quadrants I–IV, with the real-root and complex-root regions marked and a sketch of the typical ACF shape in each quadrant.]
For an AR(2) process, the Yule–Walker equations give

$$\rho_1 = \frac{a_1}{1 - a_2}, \qquad \rho_2 = a_2 + \frac{a_1^2}{1 - a_2}$$

Spectrum

$$P_{xx}(f) = \frac{2\sigma_w^2}{\left|1 - a_1 e^{-j2\pi f} - a_2 e^{-j4\pi f}\right|^2}
= \frac{2\sigma_w^2}{1 + a_1^2 + a_2^2 - 2a_1(1 - a_2)\cos(2\pi f) - 2a_2\cos(4\pi f)}, \qquad 0 \le f \le 1/2$$
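A sketch comparing this theoretical spectrum with an estimate from simulated data (the values a1 = 0.75, a2 = −0.5 and σ_w^2 = 1 are assumed; the one-sided scale conventions of the two curves may differ by a constant factor):

a1 = 0.75; a2 = -0.5; sw2 = 1;
f = linspace(0,0.5,256);
Pxx = 2*sw2 ./ abs(1 - a1*exp(-1j*2*pi*f) - a2*exp(-1j*4*pi*f)).^2;   % theoretical spectrum
rng(0); x = filter(1,[1 -a1 -a2],randn(2048,1));                      % simulated AR(2) data
[Pb,wb] = pburg(x,2);                                                 % Burg estimate, wb in rad/sample
plot(f,10*log10(Pxx),wb/(2*pi),10*log10(Pb));
xlabel('Frequency (cycles/sample)'); legend('theoretical','Burg estimate');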
[Figure: an AR(2) realisation (signal values vs. sample number, 0–500), its ACF (lags 0–50), and its power spectral density (dB/rad/sample) vs. normalized frequency.]
The damping factor is $D = \sqrt{0.5} = 0.71$ and the frequency is $f_0 = \cos^{-1}(0.5303)/(2\pi) = 1/6.2$.
The fundamental period of the autocorrelation function is 6.2 samples.
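The same numbers follow directly from the coefficients (a sketch; a1 = 0.75 and a2 = −0.5 are assumed, since they reproduce the values quoted above):

a1 = 0.75; a2 = -0.5;
D  = sqrt(-a2);                           % damping factor, sqrt(0.5) = 0.71
f0 = acos(a1/(2*sqrt(-a2)))/(2*pi);       % acos(0.5303)/(2*pi) = 0.16 cycles/sample
fprintf('D = %.2f, f0 = %.3f, period = %.1f samples\n', D, f0, 1/f0);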
[Figure (repeated from above): the AR(2) signal vs. sample number and its ACF vs. correlation lag.]
$$a_{11} = \rho_1, \qquad
a_{22} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2}, \qquad
a_{33} = \frac{\begin{vmatrix} 1 & \rho_1 & \rho_1 \\ \rho_1 & 1 & \rho_2 \\ \rho_2 & \rho_1 & \rho_3 \end{vmatrix}}
              {\begin{vmatrix} 1 & \rho_1 & \rho_2 \\ \rho_1 & 1 & \rho_1 \\ \rho_2 & \rho_1 & 1 \end{vmatrix}}, \qquad \text{etc.}$$
• For an AR(p) process, the partial autocorrelation coefficient a_kk will be nonzero for k ≤ p and zero for k > p ⇒ this tells us the order of an AR(p) process.
(apply the E{·} operator to the general AR(p) model expression, and
recall that E{w[n]} = 0)
(Hint: E{x[n]} = x̂[n] = E{a_{k−1,1} x[n−1] + · · · + a_{k−1,k−1} x[n−k+1] + w[n]} = a_{k−1,1} x[n−1] + · · · + a_{k−1,k−1} x[n−k+1].)
The partial ACF can therefore be used to check whether the process is an AR process or not; a sketch of how it can be estimated is given below.
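A sketch of estimating the PACF in MATLAB (the AR(2) coefficients are illustrative; the sign convention assumed here is that the partial autocorrelation at lag m equals minus the m-th reflection coefficient):

rng(0); x = filter(1,[1 -1.2 0.8],randn(2048,1));   % AR(2) test signal
[~,~,k] = aryule(x,20);                             % reflection coefficients up to order 20
pacf = -k;                                          % partial autocorrelations a_mm, m = 1..20
stem(1:20,pacf); xlabel('lag m'); ylabel('a_{mm}');
% For an AR(2) signal, only lags 1 and 2 should stand clearly above the noise level.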
[Figure: sunspot series: signal values vs. sample number (0–300), ACF (lags 0–50), partial ACF for the sunspot series (lags 0–50), and the Burg power spectral density estimate (dB/rad/sample) vs. normalized frequency (×π rad/sample).]
[Figure: AR(2) signal: signal values vs. sample number (0–500), ACF (lags 0–50), partial ACF for the AR(2) signal, and the Burg power spectral density estimate.]
[Figure: AR(3) signal: signal values vs. sample number (0–500), ACF (lags 0–50), partial ACF for the AR(3) signal, and the Burg power spectral density estimate.]
[Figure: Nasdaq composite index, June 2003 – February 2007: index value vs. day number (two panels) and the corresponding ACF (values ×10^7) vs. correlation lag.]
a = 1.0000 -0.9994
$$\mathrm{MDL} = \log(E) + \frac{p\,\log(N)}{N}, \qquad \mathrm{AIC} = \log(E) + \frac{2p}{N}$$
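A sketch of order selection with these criteria (here E is taken to be the prediction-error variance returned by aryule, and the AR(2) test signal is illustrative):

rng(0); N = 512;
x = filter(1,[1 -1.2 0.8],randn(N,1));    % AR(2) test signal
maxP = 10; mdl = zeros(maxP,1); aic = zeros(maxP,1);
for p = 1:maxP
    [~,E] = aryule(x,p);                  % E = modelling (prediction) error variance
    mdl(p) = log(E) + p*log(N)/N;
    aic(p) = log(E) + 2*p/N;
end
plot(1:maxP,mdl,'-o',1:maxP,aic,'-s'); xlabel('Model order p'); legend('MDL','AIC');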
[Figure: MDL and AIC plotted against model order (1–10) for the AR(2) signal.]
[Figure: MA(3) signal: signal values vs. sample number (0–500), ACF (lags 0–50), partial ACF for the MA(3) signal, and the Burg power spectral density estimate.]
[Figure: signal values vs. sample number (2000–10000), partial ACFs for W1, W2 and W3 (lags 0–50), and MDL as a function of model order (0–50) for W1, W2 and W3.]
ii) The finite MA(q) process has an ACF that is zero beyond lag q. For an AR process, the ACF is infinite in extent and consists of a mixture of damped exponentials and/or damped sine waves.
As we will see, there is an infinite number of time functions with any given spectrum.
Spectral factorization is a method of finding the one time function which is also
minimum phase. The minimum-phase function has many uses. It, and it alone,
may be used for feedback filtering. It will arise frequently in wave propagation
problems of later chapters. It arises in the theory of prediction and regulation for
the given spectrum. We will further see that it has its energy squeezed up as close
as possible to t = 0. It determines the minimum amount of dispersion in viscous
wave propagation which is implied by causality. It finds application in two-dimensional potential theory where a vector field magnitude is observed and the components are to be inferred.
This chapter contains four computationally distinct methods of computing
the minimum-phase wavelet from a given spectrum. Being distinct, they offer
separate insights into the meaning of spectral factorization and minimum phase.
It would seem that the time reverse of any function would have the same autocorrelation as the function. Actually, certain applications will involve complex time series; therefore we should make the more precise statement that any wavelet and its complex-conjugate time-reverse share the same autocorrelation and spectrum. Let us verify this for simple two-point time functions. The spectrum of (b0, b1) is

$$R(Z) = \left(\bar b_0 + \bar b_1/Z\right)\left(b_0 + b_1 Z\right) = \bar b_1 b_0/Z + \left(b_0\bar b_0 + b_1\bar b_1\right) + \bar b_0 b_1 Z \qquad (3\text{-}1\text{-}1)$$

The conjugate-reversed time function (b̄1, b̄0), with Z transform B_r(Z) = b̄1 + b̄0 Z, has a spectrum

$$R_r(Z) = \left(b_1 + b_0/Z\right)\left(\bar b_1 + \bar b_0 Z\right) = \bar b_1 b_0/Z + \left(b_0\bar b_0 + b_1\bar b_1\right) + \bar b_0 b_1 Z \qquad (3\text{-}1\text{-}2)$$
We see that the spectrum (3-1-1) is indeed identical to (3-1-2). Now we wish to
extend the idea to time functions with three and more points. Full generality may
be observed for three-point time functions, say B(Z) = b0 + b1 Z + b2 Z^2. First,
we call upon the fundamental theorem of algebra (which states that a polynomial
of degree n has exactly n roots) to write B(Z) in factored form.
Its spectrum is
Now, what can we do to change the wavelet (3-1-3) which will leave its spectrum (3-1-4) unchanged? Clearly, b2 may be multiplied by any complex number of unit magnitude. What is left of (3-1-4) can be broken up into a product of factors of form (Zi − 1/Z)(Zi − Z). But such a factor is just like (3-1-1). The time function of (Zi − Z) is (Zi, −1), and its complex-conjugate time-reverse is (−1, Zi). Thus, any factor (Zi − Z) in (3-1-3) may be replaced by a factor (−1 + Zi Z). In a generalization of (3-1-3) there could be N factors [(Zi − Z), i = 1, 2, ..., N]. Any combination of them could be reversed. Hence there are 2^N different wavelets which may be formed by reversals, and all of the wavelets have the same spectrum.
Let us look off the unit circle in the complex plane. The factor (Zi − Z) means that Zi is a root of both B(Z) and R(Z). If we replace (Zi − Z) by (−1 + Zi Z) in B(Z), we have removed a root at Zi from B(Z) and replaced it by another at Z = 1/Zi. The roots of R(Z) have not changed a bit because there were originally roots at both Zi and 1/Zi and the reversal has merely switched them around. Summarizing the situation in the complex plane, B(Z) has roots Zi which occur anywhere; R(Z) must have all the roots Zi and, in addition, the roots 1/Zi. Replacing some particular root Zi by 1/Zi changes B(Z) but not R(Z). The operation of replacing a root at Zi by one at 1/Zi may be written as

$$B'(Z) = B(Z)\,\frac{-1 + Z_i Z}{Z_i - Z}$$

[FIGURE 3-1: Roots of B(1/Z)B(Z).]
The multiplying factor is none other than the all-pass filter considered in an earlier
chapter. With that in mind, it is obvious that B'(Z) has the same spectrum as B(Z).
In fact, there is really no reason for Zi to be a root of B(Z). If Zi is a root of B(Z),
then B'(Z) will be a polynomial; otherwise it will be an infinite series.
Now let us discuss the calculation of B(Z) from a given R(Z). First, the roots of R(Z) are by definition the solutions to R(Z) = 0. If we multiply R(Z) by Z^N (where R(Z) has been given up to degree N), then Z^N R(Z) is a polynomial and the solutions Zi to Z^N R(Z) = 0 will be the same as the solutions of R(Z) = 0. Finding all roots of a polynomial is a standard though difficult task. Assuming this to have been done, we may then check to see if the roots come in the pairs Zi and 1/Zi. If they do not, then R(Z) was not really a spectrum. If they do, then for every zero inside the unit circle, we must have one outside. Refer to Fig. 3-1. Thus, if we decide to make B(Z) be a minimum-phase wavelet with the spectrum R(Z), we collect all of the roots outside the unit circle. Then we create B(Z) with

$$B(Z) = b_N\,(Z_1 - Z)(Z_2 - Z)\cdots(Z_N - Z) \qquad (3\text{-}1\text{-}6)$$

where the Zi are the roots outside the unit circle.
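A short numerical sketch of this root method (not the book's program; the starting wavelet is assumed only so that a spectrum is available to factor):

b_true = [2 3 -2];                        % assumed wavelet, used only to manufacture R(Z)
r  = conv(b_true,fliplr(b_true));         % autocorrelation r(-N)..r(N), i.e. coefficients of Z^N R(Z)
zr = roots(r);                            % roots of Z^N R(Z); for a real wavelet r is symmetric, so ordering does not matter
zo = zr(abs(zr) > 1);                     % collect the roots outside the unit circle
bmin = fliplr(poly(zo));                  % coefficients of prod(Z - Zi), ascending powers of Z
r0 = r((length(r)+1)/2);                  % lag-zero value of the given autocorrelation
bmin = sqrt(r0/sum(abs(bmin).^2))*bmin;   % fix the scale factor (the phase remains arbitrary)

For this starting wavelet the result is (−4, 0, 1) up to an overall sign; its roots ±2 lie outside the unit circle, whereas (2, 3, −2) itself has a root at −1/2 inside.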
EXERCISES
1 How can you find the scale factor b_N in (3-1-6)?
2 Compute the autocorrelation of each of the four wavelets (4, 0, −1), (2, 3, −2), (−2, 3, 2), (1, 0, −4).
3 A power spectrum is observed to fit the form P(ω) = 38 + 10 cos ω − 12 cos 2ω. What are some wavelets with this spectrum? Which is minimum phase? [HINT: cos 2ω = 2 cos²ω − 1; 2 cos ω = Z + 1/Z; use the quadratic formula.]
4 Show that if a wavelet b_t = (b_0, b_1, ..., b_N) is real, the roots of the spectrum R come in the quadruplets Z_0, 1/Z_0, Z̄_0, and 1/Z̄_0. Look into the case of roots exactly on the unit circle and on the real axis. What is the minimum multiplicity of such roots?
[FIGURE 3-2: Percent of total energy in a filter between time 0 and time t.]
[Table: for each time t, the partial energy sums of the two wavelets, P_out and P_in, and the running difference Σ_{k=0}^{t} (P_k^out − P_k^in).]
The difference, which is given in the right-hand column, is clearly always positive.
To prove that the minimum-phase wavelet delays energy the least, the preceding argument is repeated with each of the roots until they are all outside the unit circle.
EXERCISE
1 Do the foregoing minimum-energy-delay proof for complex-valued b, s, and P. [CAUTION: Does P_in = (s + bZ)P or P_in = (s̄ + b̄Z)P?]
seismic exploration data typically solves about 100,000 sets of Toeplitz simultaneous
equations in a day.
Another important application of the algebra associated with Toeplitz
matrices is in high-resolution spectral analysis. This is where a power spectrum is
to be estimated from a sample of data which is short (in time or space). The conventional statistical and engineering knowledge in this subject is based on assumptions which are frequently inappropriate in geophysics. The situation was fully
recognized by John P. Burg who utilized some of the special properties of Toeplitz
matrices to develop his maximum-entropy spectral estimation procedure described
in a later chapter.
Another place where Toeplitz matrices play a key role is in the mathematical
physics which describes layered materials. Geophysicists often model the earth by
a stack of plane layers or by concentric spherical shells where each shell or layer
is homogeneous. Surprisingly enough, many mathematical physics books do not
mention Toeplitz matrices. This is because they are preoccupied with forward
problems; that is, they wish to calculate the waves (or potentials) observed in a
known configuration of materials. In geophysics, we are interested in both forward
problems and in inverse problems where we observe waves on the surface of the
earth and we wish to deduce material configurations inside the earth. A later
chapter contains a description of how Toeplitz matrices play a central role in such
inverse problems.
We start with a time function x_t, which may or may not be minimum phase. Its spectrum is computed by R(Z) = X(1/Z)X(Z). As we saw in the preceding sections, given R(Z) alone there is no way of knowing whether it was computed from a minimum-phase function or a nonminimum-phase function. We may suppose that there exists a minimum-phase B(Z) with the given spectrum, that is, R(Z) = B(1/Z)B(Z). Since B(Z) is by hypothesis minimum phase, it has an inverse A(Z) = 1/B(Z). We can solve for the inverse A(Z) in the following way:
To solve for A(Z), we identify coefficients of powers of Z. For the case where, for example, A(Z) is the quadratic a0 + a1 Z + a2 Z^2, the coefficient of Z^0 in (3-3-2) is

This gives three equations for the three unknowns a1, a2, and v. To put (3-3-5) in a form where standard simultaneous-equations programs could be used, one would divide the vectors on both sides by v. After solving the equations, we get a0 by noting that it has magnitude 1/√v and its phase is arbitrary, as with the root method of spectral factorization.
At this point, a pessimist might interject that the polynomial A(Z) = a0 + a1 Z + a2 Z^2 determined from solving the set of simultaneous equations might not turn out to be minimum phase, so that we could not necessarily compute B(Z) by B(Z) = 1/A(Z). The pessimist might argue that the difficulty would be especially
likely to occur if the size of the set (3-3-5) was not taken to be large enough.
Actually experimentalists have known for a long time that the pessimists were
wrong. A proof can now be performed rather easily, along with a description of
a computer algorithm which may be used to solve (3-3-5).
The standard computer algorithms for solving simultaneous equations require time proportional to n^3 and computer memory proportional to n^2. The Levinson computer algorithm [Ref. 13] for Toeplitz matrices requires time proportional to n^2 and memory proportional to n. First notice that the Toeplitz matrix contains
many identical elements. Levinson utilized this special Toeplitz symmetry to
develop his fast method.
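A sketch of how this is used for factorization in practice (illustrative; it applies MATLAB's levinson to a sample autocorrelation and assumes unit-variance driving noise, so that B(Z) ≈ √v / A(Z)):

rng(0);
b_true = [2 3 -2];                        % assumed wavelet, used only to manufacture a spectrum
x = filter(b_true,1,randn(4096,1));       % long realisation with that spectrum
r = xcorr(x,20,'biased');                 % sample autocorrelation, lags -20..20
[a,v] = levinson(r(21:end),10);           % solve the order-10 Toeplitz (Yule-Walker) system, lag 0 first
bmin = impz(sqrt(v),a,10);                % first samples of B(Z) = sqrt(v)/A(Z), approximately minimum phase

Because 1/A(Z) is an all-pole approximation, a modest order (here 10) is usually enough when the underlying wavelet is short.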
The method proceeds by the approach called recursion. That is, given the
solution to the k × k set of equations, we show how to calculate the solution to the (k + 1) × (k + 1) set. One must first get the solution for k = 1; then one repeatedly (recursively) applies a set of formulas, increasing k by one at each stage. We will show how the recursion works for real time functions (r_k = r_{−k}), going from the 3 × 3 set of equations to the 4 × 4 set, and leave it to the reader to work out the
general case.
Given the 3 × 3 simultaneous equations and their solution a_i,

$$\begin{bmatrix} r_0 & r_1 & r_2 \\ r_1 & r_0 & r_1 \\ r_2 & r_1 & r_0 \end{bmatrix}
\begin{bmatrix} 1 \\ a_1 \\ a_2 \end{bmatrix} =
\begin{bmatrix} v \\ 0 \\ 0 \end{bmatrix} \qquad (3\text{-}3\text{-}6)$$

then the following construction defines a quantity e given r_3 (or r_3 given e)

$$\begin{bmatrix} r_0 & r_1 & r_2 & r_3 \\ r_1 & r_0 & r_1 & r_2 \\ r_2 & r_1 & r_0 & r_1 \\ r_3 & r_2 & r_1 & r_0 \end{bmatrix}
\begin{bmatrix} 1 \\ a_1 \\ a_2 \\ 0 \end{bmatrix} =
\begin{bmatrix} v \\ 0 \\ 0 \\ e \end{bmatrix} \qquad (3\text{-}3\text{-}7)$$

The first three rows in (3-3-7) are the same as (3-3-6); the last row is the new definition of e. The Levinson recursion shows how to calculate the solution a′ to the 4 × 4 simultaneous equations which is like (3-3-6) but larger in size.
The important trick is that from (3-3-7) one can write a "reversed" system of equations. (If you have trouble with the matrix manipulation, merely write out (3-3-8) as simultaneous equations, then reverse the order of the unknowns, and then reverse the order of the equations.)
To make the right-hand side of (3-3-10) look like the right-hand side of (3-3-8), we have to get the bottom element to vanish, so we must choose c3 = e/v. This implies that v′ = v − c3 e = v − e²/v = v[1 − (e/v)²]. Thus, the solution to the 4 × 4 system is derived from the 3 × 3 by
Now let us check that the recursion preserves minimum phase. First, we notice that v = 1/(ā0 a0) and v′ = 1/(ā′0 a′0) are always positive. Then from (3-3-13) we see that −1 < e/v < +1. (The fact that c = e/v is bounded by unity will later be shown to correspond to the fact that reflection coefficients for waves are so bounded.) Next, (3-3-12) may be written in polynomial form as

$$A'(Z) = A(Z) - \frac{e}{v}\,Z^3 A(1/Z) \qquad (3\text{-}3\text{-}14)$$

We know that Z^3 has unit magnitude on the unit circle. Likewise (for real time series), the spectrum of A(Z) equals that of A(1/Z). Thus (by the theorem of adding garbage to a minimum-phase wavelet) if A(Z) is minimum phase, then A′(Z) will also be minimum phase. In summary, the following three statements are equivalent:
EXERCISES
1 The top row of a 4 × 4 Toeplitz set of simultaneous equations like (3-3-8) is (1, a, ;Ik, a). What is the solution a_k?
2 How must the Levinson recursion be altered if time functions are complex? Specifically, where do complex conjugates occur in (3-3-11), (3-3-12), and (3-3-13)?
3 Let A_m(Z) denote a polynomial whose coefficients are the solution to an m × m set of Toeplitz equations. Show that if B_k(Z) = Z^k A_k(Z^{-1}), then

$$v_n\,\delta_{nm} = \frac{1}{2\pi}\int_0^{2\pi} R(Z)\,B_m(Z)\,Z^{-n}\,d\omega, \qquad n \le m$$

which means that the polynomial B_m(Z) is orthogonal to the polynomial Z^n over the unit circle under the positive weighting function R. Utilizing this result, state why B_m is orthogonal to B_n, that is,

$$v_n\,\delta_{nm} = \frac{1}{2\pi}\int_0^{2\pi} R(Z)\,B_m(Z)\,\bar B_n(Z)\,d\omega \qquad \text{(i)}$$

(HINT: First consider n ≤ m, then all n.)
Toeplitz matrices are found in the mathematical literature under the topic of polynomials orthogonal on the unit circle. The author especially recommends Atkinson's book (Ref. 14).
If |R| > 2 on the unit circle, then a scale factor should be divided out. Insert this power series into the power series for logarithms:

$$U(Z) = \ln R(Z)$$
Of course, in practice this would be a lot of effort, but it could be done in a systematic fashion with a computer program. Now define U_+ by dropping the negative powers of Z from U(Z) (and keeping half of u_0):

$$U_+(Z) = \frac{u_0}{2} + u_1 Z + u_2 Z^2 + \cdots$$

The desired minimum-phase wavelet is B(Z) = exp[U_+(Z)]; its spectrum is R(Z). To see why this is so, consider the following identities:
$$\begin{aligned}
R(Z) &= \exp[U(Z)] \\
     &= \exp\Big(\sum_{k=-\infty}^{-1} u_k Z^k + u_0 + \sum_{k=1}^{\infty} u_k Z^k\Big) \\
     &= \exp\Big(\frac{u_0}{2} + \sum_{k=-\infty}^{-1} u_k Z^k\Big)\,\exp\Big(\frac{u_0}{2} + \sum_{k=1}^{\infty} u_k Z^k\Big) \\
     &= \exp\big[U_+(1/Z)\big]\,\exp\big[U_+(Z)\big] = B(1/Z)\,B(Z)
\end{aligned}$$
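A numerical sketch of this construction using FFTs on a sampled spectrum (an assumed discretisation, not the book's procedure; the starting wavelet only provides an R(Z) to factor):

b_true = [2 3 -2];                            % assumed wavelet, used only to build R on the unit circle
nfft = 512;
R = abs(fft(b_true,nfft)).^2;                 % R(Z) sampled on the unit circle (real and positive)
u = ifft(log(R));                             % Fourier coefficients u_k of U = log R
uplus = [u(1)/2, u(2:nfft/2), u(nfft/2+1)/2, zeros(1,nfft/2-1)];   % u0/2 plus the positive powers only
B = real(ifft(exp(fft(uplus))));              % coefficients b_k of the minimum-phase wavelet B(Z)
disp(B(1:4));                                 % leading coefficients of B(Z)

For this example the leading coefficients come out close to (4, 0, −1, 0), a minimum-phase wavelet with the same spectrum as (2, 3, −2).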