Spectral Density
The power spectrum of a time series describes the distribution of power into frequency components
composing that signal.[1] According to Fourier analysis, any physical signal can be decomposed into a number of
discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of a certain signal
or sort of signal (including noise), as analyzed in terms of its frequency content, is called its spectrum.
When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one
may compute the energy spectral density. More commonly used is the power spectral density (or simply power
spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation
to the duration of a measurement) that it could as well have been over an infinite time interval. The power spectral
density (PSD) then refers to the spectral energy distribution that would be found per unit time, since the total energy
of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields
the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by
integrating over the time domain, as dictated by Parseval's theorem.[1]

[Figure: The spectral density of a fluorescent light as a function of optical wavelength shows peaks at atomic transitions, indicated by the numbered arrows.]
The spectrum of a physical process often contains essential information about the nature of that process. For instance, the
pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light
source is determined by the spectrum of the electromagnetic wave's electric field as it fluctuates at an extremely
high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and
generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice,
such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived
through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.

[Figure: The voice waveform over time (left) has a broad audio power spectrum (right).]
However, this article concentrates on situations in which the time series is known (at least in a statistical sense) or
directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical
signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics and engineering. Typically the process is a
function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency.[1]
Units
In physics, the signal might be a wave, such as an electromagnetic wave, an acoustic wave, or the vibration of a mechanism. The power spectral density
(PSD) of the signal describes the power present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in
watts per hertz (W/Hz).[2]
When a signal is defined in terms only of a voltage, for instance, there is no unique power associated with the stated amplitude. In this case "power" is
simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given
impedance. So one might use units of V² Hz⁻¹ for the PSD. Energy spectral density (ESD) would have units of V² s Hz⁻¹, since energy has units of
power multiplied by time (e.g., watt-hour).[3]
In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters)
over time (in seconds) will have PSD in units of meters squared per hertz, m2 /Hz. In the analysis of random vibrations, units of g2 Hz−1 are frequently used
for the PSD of acceleration, where g denotes the g-force.[4]
Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of x(t)
will remain unspecified, but the independent variable will be assumed to be that of time.
Definition
Energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term energy is used in the generalized
sense of signal processing;[5] that is, the energy of a signal $x(t)$ is:

$$E \triangleq \int_{-\infty}^{\infty} |x(t)|^2 \, dt$$

The energy spectral density is most suitable for transients (that is, pulse-like signals) having a finite total energy. Finite or not, Parseval's theorem[6] (or
Plancherel's theorem) gives us an alternate expression for the energy of the signal:

$$\int_{-\infty}^{\infty} |x(t)|^2 \, dt = \int_{-\infty}^{\infty} |\hat{x}(f)|^2 \, df$$

where:

$$\hat{x}(f) \triangleq \int_{-\infty}^{\infty} e^{-2\pi i f t} x(t) \, dt$$

is the value of the Fourier transform of $x(t)$ at frequency $f$ (in Hz). The theorem also holds true in the discrete-time cases. Since the integral on the left-hand
side is the energy of the signal, the value of $|\hat{x}(f)|^2 \, df$ can be interpreted as a density function multiplied by an infinitesimally small frequency interval,
describing the energy contained in the signal at frequency $f$ in the frequency interval $f + df$. Therefore, the energy spectral density of $x(t)$ is defined as:

$$\bar{S}_{xx}(f) \triangleq |\hat{x}(f)|^2$$    (Eq.1)

The function $\bar{S}_{xx}(f)$ and the autocorrelation of $x(t)$ form a Fourier transform pair, a result also known as the Wiener–Khinchin theorem (see also
Periodogram).
As a physical example of how one might measure the energy spectral density of a signal, suppose $V(t)$ represents the potential (in volts) of an electrical
pulse propagating along a transmission line of impedance $Z$, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is
delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time $t$ is equal to $V(t)^2 / Z$, so the total energy is
found by integrating $V(t)^2 / Z$ with respect to time over the duration of the pulse. To find the value of the energy spectral density at frequency $f$,
one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies ($\Delta f$, say) near the
frequency of interest and then measure the total energy $E(f)$ dissipated across the resistor. The value of the energy spectral density at $f$ is then estimated to
be $E(f) / \Delta f$. In this example, since the power $V(t)^2 / Z$ has units of V² Ω⁻¹, the energy $E(f)$ has units of V² s Ω⁻¹ = J, and hence the estimate
$E(f) / \Delta f$ of the energy spectral density has units of J Hz⁻¹, as required. In many situations, it is common to forget the step of dividing by $Z$ so that the
energy spectral density instead has units of V² Hz⁻².
This definition generalizes in a straightforward manner to a discrete signal with a countably infinite number of values $x_n$ such as a signal sampled at discrete
times $t_n = t_0 + n\,\Delta t$:

$$\bar{S}_{xx}(f) = \lim_{N\to\infty} (\Delta t)^2 \left| \sum_{n=-N}^{N} x_n \, e^{-2\pi i f n \Delta t} \right|^2 \triangleq |\hat{x}_d(f)|^2$$

where $\hat{x}_d(f)$ is the discrete-time Fourier transform of $x_n$. The sampling interval $\Delta t$ is needed to keep the correct physical units and to ensure that we
recover the continuous case in the limit $\Delta t \to 0$. But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of
generality (see also normalized frequency).
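As a numerical check of the time/frequency energy identity above, the following sketch (an illustrative numpy example, approximating the continuous Fourier transform by $\Delta t$ times the DFT; the signal and sampling interval are arbitrary) computes the energy of a sampled signal both ways:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3                        # sampling interval in seconds
x = rng.standard_normal(1000)    # a finite-duration (hence finite-energy) signal

# Energy in the time domain: the integral of |x(t)|^2 dt as a Riemann sum
energy_time = np.sum(np.abs(x) ** 2) * dt

# Continuous Fourier transform approximated by dt * DFT
X = dt * np.fft.fft(x)
esd = np.abs(X) ** 2             # energy spectral density samples
df = 1.0 / (len(x) * dt)         # frequency-bin spacing in Hz

# Parseval's theorem: integrating the ESD over frequency recovers the energy
energy_freq = np.sum(esd) * df
print(np.isclose(energy_time, energy_freq))  # True
```

For a DFT the identity is exact, which is why the two sums agree to machine precision rather than merely approximately.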
The above definition of energy spectral density is suitable for transients (pulse-like signals) whose
energy is concentrated around one time window; then the Fourier transforms of the signals generally
exist. For continuous signals over all time, one must rather define the power spectral density (PSD)
which exists for stationary processes; this describes how the power of a signal or time series is
distributed over frequency, as in the simple example given previously. Here, power can be the actual
physical power, or more often, for convenience with abstract signals, is simply identified with the
squared value of the signal. For example, statisticians study the variance of a function over time (or
over another independent variable), and using an analogy with electrical signals (among other physical
processes), it is customary to refer to it as the power spectrum even when there is no physical power
involved. If one were to create a physical voltage source which followed $x(t)$ and applied it to the
terminals of a one ohm resistor, then indeed the instantaneous power dissipated in that resistor would be
given by $x(t)^2$ watts.

[Figure: The power spectrum of the measured cosmic microwave background radiation temperature anisotropy in terms of the angular scale. The solid line is a theoretical model, for comparison.]

The average power $P$ of a signal $x(t)$ over all time is therefore given by the following time average,
where the period $[t_0 - T/2,\; t_0 + T/2]$ is centered about some arbitrary time $t = t_0$:

$$P = \lim_{T\to\infty} \frac{1}{T} \int_{t_0 - T/2}^{t_0 + T/2} |x(t)|^2 \, dt$$

However, for the sake of dealing with the math that follows, it is more convenient to deal with time limits in the signal itself rather than time limits in the
bounds of the integral. As such, we have an alternative representation of the average power, where $x_T(t) = x(t)\, w_T(t)$ and $w_T(t)$ is unity within the
arbitrary period and zero elsewhere:

$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} |x_T(t)|^2 \, dt$$
Clearly in cases where the above expression for P is non-zero (even as T grows without bound) the integral itself must also grow without bound. That is the
reason that we cannot use the energy spectral density itself, which is that diverging integral, in such cases.
In analyzing the frequency content of the signal $x(t)$, one might like to compute the ordinary Fourier transform $\hat{x}(f)$; however, for many signals of interest
the Fourier transform does not formally exist.[N 1] Regardless, Parseval's theorem tells us that we can re-write the average power as follows:

$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} |\hat{x}_T(f)|^2 \, df$$

Then the power spectral density is simply defined as the integrand above:[8][9]

$$S_{xx}(f) \triangleq \lim_{T\to\infty} \frac{1}{T} |\hat{x}_T(f)|^2$$    (Eq.2)

From here, we can also view $|\hat{x}_T(f)|^2$ as the Fourier transform of the time convolution of $x_T^*(-t)$ and $x_T(t)$:

$$|\hat{x}_T(f)|^2 = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} x_T^*(t)\, x_T(t+\tau) \, dt \right] e^{-2\pi i f \tau} \, d\tau$$

Now, if we divide the time convolution above by the period $T$ and take the limit as $T \to \infty$, it becomes the autocorrelation function of the non-windowed
signal $x(t)$, which is denoted as $R_{xx}(\tau)$, provided that $x(t)$ is ergodic, which is true in most, but not all, practical cases.[10]

From here we see, again assuming the ergodicity of $x(t)$, that the power spectral density can be found as the Fourier transform of the autocorrelation
function (Wiener–Khinchin theorem):

$$S_{xx}(f) = \int_{-\infty}^{\infty} R_{xx}(\tau)\, e^{-2\pi i f \tau} \, d\tau$$    (Eq.3)

Many authors use this equality to actually define the power spectral density.[11]
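In the discrete, circular setting this Fourier-pair relationship is exact and easy to verify numerically. A small numpy sketch (illustrative only; the sequence is arbitrary white noise):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
x = rng.standard_normal(N)

# Periodogram: squared DFT magnitude scaled by 1/N (dimensionless units, dt = 1)
periodogram = np.abs(np.fft.fft(x)) ** 2 / N

# Circular (biased) sample autocorrelation: R[k] = (1/N) sum_n x[n] x[n+k mod N]
r = np.array([np.dot(x, np.roll(x, -k)) for k in range(N)]) / N

# Wiener-Khinchin: the DFT of the autocorrelation equals the periodogram
psd_from_r = np.fft.fft(r).real
print(np.allclose(periodogram, psd_from_r))  # True
```

The `.real` is safe here because the DFT of a circular autocorrelation is exactly $|X_k|^2/N$, which is real and non-negative.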
The power of the signal in a given frequency band $[f_1, f_2]$, where $0 < f_1 < f_2$, can be calculated by integrating over frequency. Since
$S_{xx}(-f) = S_{xx}(f)$, an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the
following form (such trivial factors depend on the conventions used):

$$P_{\text{band}} = 2 \int_{f_1}^{f_2} S_{xx}(f) \, df$$
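As an illustrative sketch of band-power calculation (using `scipy.signal.welch`, which returns a one-sided PSD with the factor of 2 already folded in; the sine frequency and band edges are arbitrary choices), the power of a unit-power sine is recovered by integrating the estimated PSD over a band containing it:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
x = np.sqrt(2) * np.sin(2 * np.pi * 50 * t)   # amplitude sqrt(2) -> power 1 W into 1 ohm

# One-sided PSD estimate in units of V^2/Hz
f, psd = welch(x, fs=fs, nperseg=1024)

# Power in the 45-55 Hz band: integrate the PSD over the band
df = f[1] - f[0]
band = (f >= 45) & (f <= 55)
band_power = np.sum(psd[band]) * df           # close to 1.0, the power of the sine
```

Spectral leakage spreads the tone over a few neighboring bins, which is why one integrates over a small band rather than reading off a single bin.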
More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval $T$ is finite rather than approaching
infinity. This results in decreased spectral coverage and resolution, since frequencies of less than $1/T$ are not sampled, and results at frequencies which are
not an integer multiple of $1/T$ are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however this can
be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to
statistical ensembles of realizations of $x(t)$ evaluated over the specified time window.
Just as with the energy spectral density, the definition of the power spectral density can be generalized to discrete time variables $x_n$. As before, we can
consider a window of $-N \le n \le N$ with the signal sampled at discrete times $t_n = t_0 + n\,\Delta t$ for a total measurement period $T = (2N+1)\,\Delta t$:

$$S_{xx}(f) = \lim_{N\to\infty} \frac{(\Delta t)^2}{T} \left| \sum_{n=-N}^{N} x_n \, e^{-2\pi i f n \Delta t} \right|^2$$
Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when $N$ (and thus $T$)
approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over
many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is
sometimes called a periodogram. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval approach
infinity (Brown & Hwang).[12]
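The variance reduction from averaging short-term spectra can be shown in a few lines. This is a minimal sketch of Bartlett's method (non-overlapping segments, no windowing; white noise is used so the true PSD is flat and the scatter is purely estimation noise):

```python
import numpy as np

rng = np.random.default_rng(2)
nseg, nperseg = 64, 128
# Unit-variance white noise: the true two-sided PSD is flat (value 1, with dt = 1)
x = rng.standard_normal(nseg * nperseg)

# Bartlett's method: average the periodograms of non-overlapping segments
segs = x.reshape(nseg, nperseg)
periodograms = np.abs(np.fft.fft(segs, axis=1)) ** 2 / nperseg
avg_psd = periodograms.mean(axis=0)

# A single periodogram is very noisy; averaging shrinks the scatter ~ 1/sqrt(nseg)
print(periodograms[0].std() > 2 * avg_psd.std())  # True
```

This illustrates why the raw periodogram is an inconsistent estimator: its variance does not shrink with record length, only with the number of averaged segments.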
If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so
is the cross-spectral density related to the cross-correlation.
The power spectrum is always real and non-negative, and the spectrum of a real valued process is also an even function of frequency:
$S_{xx}(-f) = S_{xx}(f)$.

For a continuous stochastic process $x(t)$, the autocorrelation function $R_{xx}(\tau)$ can be reconstructed from its power spectrum $S_{xx}(f)$ by using
the inverse Fourier transform:

$$R_{xx}(\tau) = \int_{-\infty}^{\infty} S_{xx}(f)\, e^{2\pi i f \tau} \, df$$

Using Parseval's theorem, one can compute the variance (average power) of a process by integrating the power spectrum over all
frequency:

$$P = \operatorname{Var}(x(t)) = \int_{-\infty}^{\infty} S_{xx}(f) \, df$$
For a real process $x(t)$ with power spectral density $S_{xx}(f)$, one can compute the integrated spectrum or power spectral distribution $F(f)$,
which specifies the average bandlimited power contained in frequencies from DC to $f$ using:[14]

$$F(f) = 2 \int_0^f S_{xx}(f') \, df'$$

Note that the previous expression for total power (signal variance) is a special case where $f \to \infty$.
Given two signals $x(t)$ and $y(t)$, each of which possess power spectral densities $S_{xx}(f)$ and $S_{yy}(f)$, it is possible to define a cross power spectral density
(CPSD) or cross spectral density (CSD). To begin, let us consider the average power of such a combined signal:

$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} \left[ x_T(t) + y_T(t) \right]^* \left[ x_T(t) + y_T(t) \right] dt$$

Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain

$$S_{xy}(f) \triangleq \lim_{T\to\infty} \frac{1}{T} \hat{x}_T^*(f)\, \hat{y}_T(f), \qquad S_{yx}(f) \triangleq \lim_{T\to\infty} \frac{1}{T} \hat{y}_T^*(f)\, \hat{x}_T(f)$$

where, again, the contributions of $S_{xx}(f)$ and $S_{yy}(f)$ are already understood. Note that $S_{yx}(f) = S_{xy}^*(f)$, so the full contribution to the cross power is,
generally, from twice the real part of either individual CPSD. Just as before, from here we recast these products as the Fourier transform of a time
convolution, which when divided by the period and taken to the limit $T \to \infty$ becomes the Fourier transform of a cross-correlation function:[15]

$$S_{xy}(f) = \int_{-\infty}^{\infty} R_{xy}(\tau)\, e^{-2\pi i f \tau} \, d\tau, \qquad S_{yx}(f) = \int_{-\infty}^{\infty} R_{yx}(\tau)\, e^{-2\pi i f \tau} \, d\tau$$

where $R_{xy}(\tau)$ is the cross-correlation of $x(t)$ with $y(t)$ and $R_{yx}(\tau)$ is the cross-correlation of $y(t)$ with $x(t)$. In light of this, the PSD is seen to be a special
case of the CSD for $x(t) = y(t)$. If $x(t)$ and $y(t)$ are real signals (e.g. voltage or current), their Fourier transforms $\hat{x}(f)$ and $\hat{y}(f)$ are usually restricted to
positive frequencies by convention. Therefore, in typical signal processing, the full CPSD is just one of the CPSDs scaled by a factor of two.
For discrete signals $x_n$ and $y_n$, sampled at interval $\Delta t$, the relationship between the cross-spectral density and the cross-covariance $R_{xy}(\tau_n)$ is

$$S_{xy}(f) = \sum_{n=-\infty}^{\infty} R_{xy}(\tau_n)\, e^{-2\pi i f \tau_n}\, \Delta t$$
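A small sketch using `scipy.signal.csd` (the tone frequency, noise level, and segment length are arbitrary choices): two noisy signals sharing a 100 Hz tone have a cross-spectral density that peaks at the shared frequency, while their independent noise components average out:

```python
import numpy as np
from scipy.signal import csd

fs = 1000.0
rng = np.random.default_rng(3)
t = np.arange(0, 5, 1 / fs)
shared = np.sin(2 * np.pi * 100 * t)              # component common to both signals
x = shared + 0.5 * rng.standard_normal(len(t))
y = shared + 0.5 * rng.standard_normal(len(t))

# Cross spectral density: large only where the two signals are correlated
f, Pxy = csd(x, y, fs=fs, nperseg=500)

peak_freq = f[np.argmax(np.abs(Pxy))]
print(peak_freq)  # 100.0 -- the shared tone dominates the cross-spectrum
```

With `nperseg=500` the bin spacing is exactly 2 Hz, so the shared 100 Hz tone lands on a bin and the peak frequency is recovered exactly.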
Estimation
The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known
about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain
analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is
the periodogram.
The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy
method can also be used.
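As a sketch of the parametric route mentioned above, one can fit an AR(2) model by solving the Yule-Walker equations and then evaluate the model's PSD analytically (the simulation coefficients and record length here are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulate an AR(2) process: x[n] = a1*x[n-1] + a2*x[n-2] + e[n]
a1, a2 = 0.75, -0.5
n = 20000
e = rng.standard_normal(n)
x = np.zeros(n)
for i in range(2, n):
    x[i] = a1 * x[i - 1] + a2 * x[i - 2] + e[i]

# Sample autocovariance at lag k
def acov(x, k):
    return np.mean((x[: len(x) - k] - x.mean()) * (x[k:] - x.mean()))

# Yule-Walker: solve R a = r for the AR coefficients
r = np.array([acov(x, k) for k in range(3)])
R = np.array([[r[0], r[1]], [r[1], r[0]]])
a_hat = np.linalg.solve(R, r[1:])            # close to [0.75, -0.5]
sigma2 = r[0] - a_hat @ r[1:]                # innovation variance estimate

# Parametric PSD: sigma^2 / |1 - a1 e^{-i 2 pi f} - a2 e^{-i 4 pi f}|^2
f = np.linspace(0, 0.5, 256)                 # normalized frequency (dt = 1)
denom = np.abs(1 - a_hat[0] * np.exp(-2j * np.pi * f)
                 - a_hat[1] * np.exp(-4j * np.pi * f)) ** 2
psd = sigma2 / denom
```

Unlike the periodogram, this estimate is smooth by construction: all spectral detail is carried by the handful of fitted model parameters.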
Related concepts
The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two
equal parts.
The spectral edge frequency (SEF), usually expressed as "SEF x", represents the frequency below which x percent of the total power
of a given signal is located; typically, x is in the range 75 to 95. It is a particularly popular measure in EEG monitoring, in
which case SEF has variously been used to estimate the depth of anesthesia and stages of sleep.[16][17][18]
A spectral envelope is the envelope curve of the spectral density. It describes one point in time (one window, to be precise). For
example, in remote sensing using a spectrometer, the spectral envelope of a feature is the boundary of its spectral properties, as defined
by the range of brightness levels in each of the spectral bands of interest.[19]
The spectral density is a function of frequency, not a function of time. However, the spectral density of a small window of a longer signal
may be calculated, and plotted versus time associated with the window. Such a graph is called a spectrogram. This is the basis of a
number of spectral analysis techniques such as the short-time Fourier transform and wavelets.
A "spectrum" generally means the power spectral density, as discussed above, which depicts the distribution of signal content over
frequency. For transfer functions (e.g., Bode plot, chirp) the complete frequency response may be graphed in two parts: power versus
frequency and phase versus frequency—the phase spectral density, phase spectrum, or spectral phase. Less commonly, the two
parts may be the real and imaginary parts of the transfer function. This is not to be confused with the frequency response of a transfer
function, which also includes a phase (or equivalently, a real and imaginary part) as a function of frequency. The time-domain impulse
response cannot generally be uniquely recovered from the power spectral density alone without the phase part. Although these are
also Fourier transform pairs, there is no symmetry (as there is for the autocorrelation) forcing the Fourier transform to be real-valued. See
Ultrashort pulse#Spectral phase, phase noise, group delay.
Sometimes one encounters an amplitude spectral density (ASD), which is the square root of the PSD; the ASD of a voltage signal has
units of V Hz−1/2.[20] This is useful when the shape of the spectrum is rather constant, since variations in the ASD will then be
proportional to variations in the signal's voltage level itself. But it is mathematically preferred to use the PSD, since only in that case is
the area under the curve meaningful in terms of actual power over all frequency or over a specified bandwidth.
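The spectrogram described above can be sketched with `scipy.signal.spectrogram` applied to a linear chirp (the sweep rate and window length are arbitrary choices), whose time-varying spectral peak tracks the rising instantaneous frequency:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000.0
t = np.arange(0, 2, 1 / fs)
# Linear chirp: phase 2*pi*500*t^2 gives instantaneous frequency 1000*t Hz,
# sweeping from 0 Hz to about 2000 Hz over the 2 s record
x = np.sin(2 * np.pi * 500 * t ** 2)

# PSD of successive short windows, indexed by the window's center time
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256)

# The ridge of the spectrogram tracks the rising instantaneous frequency
ridge = f[np.argmax(Sxx, axis=0)]
print(ridge[-1] > ridge[0])  # True: later windows peak at higher frequencies
```

Each column of `Sxx` is the spectral density of one short window; plotting `Sxx` against `tt` and `f` gives the usual time-frequency picture.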
Applications
Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such as visible
light (perceived as color), musical notes (perceived as pitch), radio/TV (specified by their frequency, or sometimes wavelength) and even the regular rotation
of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes
producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. And additionally
there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal which is not simply sinusoidal. Or a continuous spectrum
may show narrow frequency intervals which are strongly enhanced corresponding to resonances, or frequency intervals containing almost zero power as
would be produced by a notch filter.
Electrical engineering
The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in
electronic communication systems, including radio communications, radars, and related systems, plus passive remote
sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power
spectra of signals.
The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the
signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power
spectral density.

[Figure: Spectrogram of an FM radio signal with frequency on the horizontal axis and time increasing upwards on the vertical axis.]

Cosmology

Primordial fluctuations, density variations in the early universe, are quantified by a power spectrum which gives the
power of the variations as a function of spatial scale.
Climate science

Power spectral analysis has been used to examine the spatial structure of climate variability.[21] These results suggest that atmospheric turbulence links climate
change to more local, regional volatility in weather conditions.[22]
See also
Bispectrum
Brightness temperature
Colors of noise
Least-squares spectral analysis
Noise spectral density
Spectral density estimation
Spectral efficiency
Spectral leakage
Spectral power distribution
Whittle likelihood
Window function
Notes
1. Some authors (e.g. Risken[7]) still use the non-normalized Fourier transform in a formal way to formulate a definition of the power
spectral density

$$\langle \hat{x}^*(f)\, \hat{x}(f') \rangle = S_{xx}(f)\, \delta(f - f')$$

where $\delta(f - f')$ is the Dirac delta function. Such formal statements may sometimes be useful to guide the intuition, but should always be
used with utmost care.
References
1. Stoica, P.; Moses, R. (2005). "Spectral Analysis of Signals" (PDF). http://user.it.uu.se/~ps/SAS-new.pdf
2. Maral, Gérard (2003). VSAT Networks. John Wiley and Sons. ISBN 978-0-470-86684-9.
3. Norton, Michael Peter; Karczub, Denis G. (2003). Fundamentals of Noise and Vibration Analysis for Engineers. Cambridge University Press. ISBN 978-0-521-49913-2.
4. Birolini, Alessandro (2007). Reliability Engineering. Springer. p. 83. ISBN 978-3-540-49388-4.
5. Oppenheim; Verghese. Signals, Systems, and Inference. pp. 32–4.
6. Stein, Jonathan Y. (2000). Digital Signal Processing: A Computer Science Perspective. Wiley. p. 115.
7. Risken, Hannes (1996). The Fokker–Planck Equation: Methods of Solution and Applications (2nd ed.). Springer. p. 30. ISBN 9783540615309.
8. Rieke, Fred; Bialek, William; Warland, David (1999). Spikes: Exploring the Neural Code (Computational Neuroscience). MIT Press. ISBN 978-0262681087.
9. Millers, Scott; Childers, Donald (2012). Probability and Random Processes. Academic Press. pp. 370–5.
10. The Wiener–Khinchin theorem makes sense of this formula for any wide-sense stationary process under weaker hypotheses: the autocorrelation does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving distributions (in the sense of Laurent Schwartz, not in the sense of a statistical cumulative distribution function) instead of functions. If the autocorrelation is continuous, Bochner's theorem can be used to prove that its Fourier transform exists as a positive measure, whose distribution function is F (but not necessarily as a function and not necessarily possessing a probability density).
11. Ricker, Dennis Ward (2003). Echo Signal Processing. Springer. ISBN 978-1-4020-7395-3.
12. Brown, Robert Grover; Hwang, Patrick Y.C. (1997). Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons. ISBN 978-0-471-12839-7.
13. Von Storch, H.; Zwiers, F. W. (2001). Statistical Analysis in Climate Research. Cambridge University Press. ISBN 978-0-521-01230-0.
14. Davenport, Wilbur B.; Root, William L. (1987). An Introduction to the Theory of Random Signals and Noise. New York: IEEE Press. ISBN 0-87942-235-1.
15. Penny, William D. (2009). "Signal Processing Course, chapter 7". http://www.fil.ion.ucl.ac.uk/~wpenny/course/course.html
16. Iranmanesh, Saam; Rodriguez-Villegas, Esther (2017). "An Ultralow-Power Sleep Spindle Detection System on Chip". IEEE Transactions on Biomedical Circuits and Systems. 11 (4): 858–866. doi:10.1109/TBCAS.2017.2690908. PMID 28541914.
17. Imtiaz, Syed Anas; Rodriguez-Villegas, Esther (2014). "A Low Computational Cost Algorithm for REM Sleep Detection Using Single Channel EEG". Annals of Biomedical Engineering. 42 (11): 2344–59. doi:10.1007/s10439-014-1085-6. PMC 4204008. PMID 25113231.
18. Drummond, J. C.; Brann, C. A.; Perkins, D. E.; Wolfe, D. E. (1991). "A comparison of median frequency, spectral edge frequency, a frequency band power ratio, total power, and dominance shift in the determination of depth of anesthesia". Acta Anaesthesiologica Scandinavica. 35 (8): 693–9.
19. Swartz, Diemo (1998). "Spectral Envelopes". http://recherche.ircam.fr/anasyn/schwarz/da/specenv/3_3Spectral_Envelopes.html
20. Cerna, Michael; Harvey, Audrey F. (2000). "The Fundamentals of FFT-Based Signal Analysis and Measurement" (PDF).
21. "Danish astrophysics student discovers link between global warming and locally unstable weather" (2022-05-23). nbi.ku.dk. Retrieved 2022-07-23.
22. Sneppen, Albert (2022). "The power spectrum of climate change". The European Physical Journal Plus. 137 (5): 555. arXiv:2205.07908. doi:10.1140/epjp/s13360-022-02773-w. ISSN 2190-5444.
External links
Power Spectral Density Matlab scripts (http://vibrationdata.wordpress.com/category/power-spectral-density/)