
Notational nightmares: the wide variety of notations and definitions in System Identification and Spectral Analysis

Roy Smith

October 28, 2016

1 Why?

There are a number of different formulations for the most common concepts in
spectral analysis and system identification. Some of these may have come from
the differing backgrounds of the early participants; others may arise from the need
to improve or standardise the approaches.

Without wishing to pass judgment on the authors’ motivations, I will simply at-
tempt to provide a summary of the various formulations relevant to a system
identification course. Although the reader may wish to select a favourite, reading
the literature forces her or him to at least recognise the others.

The focus here is on estimating the properties of discrete-time signals. These
may come from underlying continuous-time signals or may exist solely in the
discrete domain.

2 The Sources

This is not an exhaustive listing, nor is it intended as a recommended reading
list. These are the texts on my bookshelf. For simplicity the texts and papers are
abbreviated as:

2.1 Texts
O&W Alan V. Oppenheim & Alan S. Willsky with S. Hamid Nawab, Signals &
Systems, Prentice-Hall, 2nd Ed., 1996.
O&S Alan V. Oppenheim & Ronald W. Schafer, Digital Signal Processing, Prentice-
Hall, 1975.
LL Lennart Ljung, System Identification: Theory for the User, Prentice-Hall, 2nd
Ed., 1999.
S&M Petre Stoica & Randolph Moses, Introduction to Spectral Analysis, Simon
& Schuster, 1997.

2.2 Papers
PW Peter Welch, “The use of the fast Fourier transform for the estimation of
power spectra: A method based on time averaging over short, modified peri-
odograms,” IEEE Trans. Audio and Electroacoustics, vol. 15(2), pp. 70–73,
1967.

3 Fourier and Transform Concepts

3.1 Local notation

For at least this document we denote discrete-time signals by x(k), where

    k = 0, . . . , K − 1                 for finite length signals, or
    k = 0, . . . , ∞                     for infinite signals, or
    k = −∞, . . . , 0, . . . , ∞         for doubly infinite signals.

When considering a finite length signal, we will use K to denote its length (the
number of sampled time points in the signal). When making a calculation (for
example: a discrete Fourier transform) we must choose a “calculation length”
which we will denote by N . In many instances we choose to do the calculation
over all of our available data (i.e. N = K), but keep in mind that there are cases
when it is better to choose N < K. [1]

[1] One example is when we have a periodic excitation. It is almost always better to choose N
to be an integer number of periods, even if that involves throwing away some of our data.
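Footnote 1 can be sketched numerically (Python/NumPy; the sinusoid, its period, and the lengths are purely illustrative choices):

```python
import numpy as np

# Illustrative signal: a sinusoid with a period of 20 samples, K = 130 samples.
K = 130
period = 20
x = np.sin(2 * np.pi * np.arange(K) / period)

# Calculation length N = 120 (an integer number of periods) versus N = K = 130.
X_trim = np.fft.fft(x[:120])   # 6 full periods
X_full = np.fft.fft(x)         # 6.5 periods: spectral leakage

# With N an integer number of periods the energy sits in exactly two bins
# (the positive and negative frequency of the sinusoid); otherwise it leaks.
bins_trim = int(np.sum(np.abs(X_trim) > 1e-8))
bins_full = int(np.sum(np.abs(X_full) > 1e-8))
print(bins_trim, bins_full)
```

Here bins_trim is 2, while bins_full is far larger, which is why it can pay to throw away data to make N an integer number of periods.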

In the following X(n) denotes the frequency domain representation, written here
as a function of its discrete index, n. As the index is associated with a particular
frequency it can also be written as
    X(e^{jω_n}),   with ω_n = 2πn/N.
Although it is simpler to write this as X(ω), the exponent form will be kept to
emphasise the periodic nature of the functions we are interested in.

3.2 Z-transform

    X(z) = Σ_{k=−∞}^{∞} x(k) z^{−k}

The only significant variation in the Z-transform is in the way it is pronounced.

3.3 Discrete Time Fourier Transform

The discrete-time Fourier Transform and its inverse are defined by,

    X(e^{jω}) = Σ_{k=−∞}^{∞} x(k) e^{−jωk}

    x(k) = (1/2π) ∫_{2π} X(e^{jω}) e^{jωk} dω

Note that X(e^{jω}) is a periodic function of a continuous variable, ω. Usually only
one period is presented: it could be between 0 and 2π, between −π and π, or
simply the non-negative frequencies, 0 to π. Linear frequency scales are common in
signal processing; logarithmic frequency scales are more common in control systems.

This definition is used by O&S, O&W, S&M.
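A quick numerical check of this definition (a Python/NumPy sketch with an arbitrary test signal): for a finite-length signal, evaluating the DTFT sum on the grid ω_n = 2πn/N reproduces the unscaled forward sum computed by np.fft.fft.

```python
import numpy as np

# Arbitrary finite-length test signal.
N = 8
x = np.arange(N, dtype=float)

# DTFT X(e^{jω}) = sum_k x(k) e^{-jωk}, evaluated at ω_n = 2πn/N.
k = np.arange(N)
omega = 2 * np.pi * np.arange(N) / N
dtft_on_grid = np.array([np.sum(x * np.exp(-1j * w * k)) for w in omega])

# np.fft.fft uses the unscaled forward sum, so the two agree.
assert np.allclose(dtft_on_grid, np.fft.fft(x))
```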

3.4 Discrete Fourier Transform

This applies to finite length signals, and here we take the calculation length to
be N. The usual approach is to define the Discrete Fourier Transform (DFT) as
the Fourier Series of the signal's periodic continuation. Variations in scale factors
arise at this point.

In O&W [2] and PW we find,

    X(n) = (1/N) Σ_{k=0}^{N−1} x(k) e^{−j(2π/N)kn},   n = 0, . . . , N − 1,

    x(k) = Σ_{n=0}^{N−1} X(n) e^{j(2π/N)kn},   k = 0, . . . , N − 1.

This leads to a scale factor of 1/N between the DFT and the Fourier Series.

In LL we find,

    X(n) = (1/√N) Σ_{k=0}^{N−1} x(k) e^{−j(2π/N)kn},   n = 0, . . . , N − 1,

    x(k) = (1/√N) Σ_{n=0}^{N−1} X(n) e^{j(2π/N)kn},   k = 0, . . . , N − 1.

In O&S [3] and S&M we find,

    X(n) = Σ_{k=0}^{N−1} x(k) e^{−j(2π/N)kn},   n = 0, . . . , N − 1,

    x(k) = (1/N) Σ_{n=0}^{N−1} X(n) e^{j(2π/N)kn},   k = 0, . . . , N − 1.

In this case the DFT matches the Fourier Series of the periodic continuation of
the signal.

This last convention matches the fft and ifft commands in Matlab. Note,
however, that as Matlab does not support zero as an array index, the calculation
indices are shifted by one.

All of the major concepts involving the DFT work with any of these scalings.
However we must be careful when proving theorems and deriving related results
such as Parseval’s theorem.
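The scale-factor bookkeeping can be sketched numerically (Python/NumPy; np.fft.fft, like Matlab's fft, uses the unscaled forward sum, and the test signal is arbitrary). Each convention carries a different constant in Parseval's theorem:

```python
import numpy as np

# A random test signal of length N.
rng = np.random.default_rng(0)
N = 16
x = rng.standard_normal(N)

X_os = np.fft.fft(x)          # O&S, S&M: X(n) = sum_k x(k) e^{-j(2π/N)kn}
X_ow = X_os / N               # O&W, PW:  1/N on the forward transform
X_ll = X_os / np.sqrt(N)      # LL:       1/sqrt(N) on both transforms

# Parseval's theorem picks up a different scale factor in each convention.
e_time = np.sum(np.abs(x) ** 2)
assert np.isclose(e_time, np.sum(np.abs(X_os) ** 2) / N)  # O&S / S&M
assert np.isclose(e_time, np.sum(np.abs(X_ow) ** 2) * N)  # O&W / PW
assert np.isclose(e_time, np.sum(np.abs(X_ll) ** 2))      # LL (unitary)
```

Only the LL (unitary) scaling lets Parseval's theorem hold without an extra factor of N.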
[2] To be fair, this only appears in an exercise.
[3] It is interesting that one author has used different definitions for different textbooks.

4 Spectral analysis concepts

4.1 Autocorrelation and autocovariance

In the statistics literature the autocorrelation is frequently defined (for a stationary
signal) as,

    R(τ) = E{(x(k) − µ_x)(x(k + τ) − µ_x)} / λ_x,

where the mean and variance of the distribution from which x(k) is drawn are µ_x
and λ_x respectively.

In the signal processing literature the definition of the autocorrelation does not
subtract the mean or scale by the inverse of the variance. In our text examples we have:

In both O&S [4] and O&W we have,

    R(τ) = Σ_{k=−∞}^{∞} x(k) x(k + τ).

This definition is only applicable to finite energy signals. In particular, it is not
applicable to random signals or periodic signals.

In S&M we have,

    R(τ) = Σ_{k=−∞}^{∞} x(k) x(k − τ).

Note that there is a difference in the direction of the signal shift. For the auto-
correlation this makes no difference—it does affect the cross-correlation definitions
though.
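This can be verified directly with a small numerical sketch (Python/NumPy; the signals are arbitrary, and truncated sums over the overlapping samples stand in for the infinite sums):

```python
import numpy as np

def fwd(x, y, tau):
    """sum_k x(k) y(k + tau) over the overlapping samples, tau >= 0."""
    return float(np.sum(x[: len(x) - tau] * y[tau:]))

def bwd(x, y, tau):
    """sum_k x(k) y(k - tau) over the overlapping samples, tau >= 0."""
    return float(np.sum(x[tau:] * y[: len(y) - tau]))

x = np.array([1.0, 2.0, 3.0, 0.0])
y = np.array([0.0, 1.0, 0.0, 2.0])

# Autocorrelation: the direction of the shift makes no difference.
assert fwd(x, x, 2) == bwd(x, x, 2)

# Cross-correlation: the two shift conventions give different functions.
print(fwd(x, y, 1), bwd(x, y, 1))  # 7.0 versus 3.0
```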

For random signals most authors use the term autocovariance and define it (for
stationary signals) as follows:

In LL we have (for stationary zero mean random signals),

    R(τ) = E{x(k) x(k − τ)}.


[4] In O&S this is called an aperiodic autocorrelation.

In fact LL does not define or use a correlation function in this context at all; it
only appears as the title of the correlation method for analysing single sinusoidal
excitation.

Again O&S changes the sign of τ and also subtracts off the mean, to give the
autocovariance definition as,

    R(τ) = E{(x(k) − µ_x)(x(k + τ) − µ_x)}.
O&S make the observation that for zero mean signals the autocorrelation and
autocovariance are the same. Note that when defining covariance or autocovariance
functions there is no scaling by the inverse of the variance.

4.2 Periodograms

The periodogram, denoted here by I(e^{jω}), can also be defined as a periodic function
of a continuous frequency variable.

In O&S and S&M [5] we find,

    I(e^{jω}) = (1/N) |Σ_{k=0}^{N−1} x(k) e^{−jωk}|².

In LL [6] we find,

    I(e^{jω}) = |(1/√N) Σ_{k=0}^{N−1} x(k) e^{−jωk}|².

In PW the periodogram is given for averages, but the base definition can be
deduced as,

    I(e^{jω}) = N |(1/N) Σ_{k=0}^{N−1} x(k) e^{−jωk}|².

These texts are internally consistent: in each case the scaling on the periodogram
is the one required for the periodogram to converge to the spectrum without
introducing a further scaling term.
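This consistency can be checked numerically (a Python/NumPy sketch; the signal is arbitrary, and each text's own transform scaling is written out explicitly inside the modulus):

```python
import numpy as np

# An arbitrary random test signal of length N.
rng = np.random.default_rng(1)
N = 64
x = rng.standard_normal(N)

S = np.fft.fft(x)                    # the raw sum: sum_k x(k) e^{-j(2π/N)kn}

I_os = np.abs(S) ** 2 / N            # O&S, S&M: (1/N) |sum|^2
I_ll = np.abs(S / np.sqrt(N)) ** 2   # LL: |(1/sqrt(N)) sum|^2
I_pw = N * np.abs(S / N) ** 2        # PW: N |(1/N) sum|^2

# All three conventions describe the same quantity on the DFT grid.
assert np.allclose(I_os, I_ll)
assert np.allclose(I_os, I_pw)
```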
[5] In S&M the periodogram shares the same symbol as the spectrum.
[6] In LL the periodogram is not given its own symbol.

4.3 Window functions

Windowing is used to smooth frequency domain estimates of spectra or frequency
response functions. Windows can be applied in the time-domain to autocorrelation
estimates or to signals before performing DFT calculations. Window functions
therefore generally have both a frequency domain and a time-domain representation.

We will write time-domain windows as a function of a width parameter, γ, and
the lag variable, τ, in the form w_γ(τ). Many (but not all) time-domain window
definitions can be written in terms of the ratio of these parameters,

    w_γ(τ) = f(τ/γ).

There are several conflicting ways in which these definitions are used. The use of
window functions in the time-domain appears to be more consistent than in the
frequency domain. Again the variation is primarily in the scaling, with a factor of
2π appearing differently in different texts.

In LL the time-domain window function is used to calculate a smoothed spectral
estimate via,

    φ(e^{jω}) = Σ_{τ=−∞}^{∞} w_γ(τ) R(τ) e^{−jτω},

where R(τ) is the autocorrelation (or an estimate of the autocorrelation). Note
that this equation simultaneously performs the smoothing and the Fourier transform
to return a result in the frequency domain.

In the frequency domain the window is used to smooth the data via,

    φ(e^{jω}) = ∫_{−π}^{π} W_γ(e^{j(ξ−ω)}) φ̂(e^{jξ}) dξ,

where φ̂(e^{jω}) is the unsmoothed frequency domain function. The shift in W_γ is
due to the fact that W_γ(e^{jω}) is centred around ω = 0.

The two types of smoothing are equivalent (at least as N → ∞) if we define the
relationship between the windows as,

    w_γ(τ) = ∫_{−π}^{π} W_γ(e^{jξ}) e^{jξτ} dξ.

This differs from the usual inverse Fourier Transform by a factor of 1/2π.

In S&M (and briefly in O&S) the time-domain windowed spectral estimate is
defined identically,

    φ(e^{jω}) = Σ_{τ=−∞}^{∞} w_γ(τ) R(τ) e^{−jτω}.

However, the frequency domain weighting is given by,

    φ(e^{jω}) = (1/2π) ∫_{−π}^{π} W_γ(e^{j(ω−ξ)}) φ̂(e^{jξ}) dξ,

making the relationship between the time- and frequency-domain windows the
usual inverse Fourier Transform,

    w_γ(τ) = (1/2π) ∫_{−π}^{π} W_γ(e^{jξ}) e^{jξτ} dξ.

Note also that the frequency domain smoothing is presented as a convolution
in S&M and as a shift in LL. As frequency domain windows are almost always
symmetric, this makes no mathematical difference.

These notational discrepancies mean that for the same time-domain window, the
frequency domain versions in S&M are 2π times larger than those in LL.

The texts themselves are internally consistent, but applying a window definition
from one text to a periodogram definition from another may lead to a scaling error.
Care is required in translating these concepts.
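As a concrete check of the time/frequency window equivalence on the DFT grid (a Python/NumPy sketch; the "autocorrelation" sequence is an arbitrary stand-in, and the 1/N factor plays the role of the S&M 1/2π):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
R = rng.standard_normal(N)      # stand-in for an autocorrelation estimate
w = np.hanning(N)               # a time-domain lag window

phi_hat = np.fft.fft(R)         # unsmoothed "spectrum" on the DFT grid
W = np.fft.fft(w)               # frequency-domain window

# Route 1: window in the time domain, then transform.
phi_time = np.fft.fft(w * R)

# Route 2: circular convolution in the frequency domain, with the 1/N
# (discrete analogue of the 1/2π) factor of the S&M convention.
phi_freq = np.array(
    [np.sum(W * phi_hat[(n - np.arange(N)) % N]) for n in range(N)]
) / N

assert np.allclose(phi_time, phi_freq)
```

Dropping the 1/N here reproduces the 2π-style scaling mismatch described above: the two routes then differ by exactly that factor.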
