Measurements of Mechanical Systems
A guide for engineers and technicians
Marco Tarabini
1 Chapter 1 - Sampling
1.1 Introduction
1.1.1 Quantization
1.1.2 Asynchronous Sampling
1.1.3 Synchronous Sampling
1.3 References
2.1 Introduction
2.5 Examples
2.5.1 Monitoring of rotating machines
2.5.2 Electric power generator vibration measurements
2.5.3 Long-term monitoring of vehicle vibration
2.5.4 Noise monitoring
2.5.5 Use of auto- and cross-correlation in automotive applications
2.6 Conclusions
2.7 References
3.1 Introduction
3.1.1 The Fourier transform
3.1.2 Intuitive Fourier Transform approach
3.1.3 Properties of the Fourier Transform
1 Chapter 1 - Sampling
1.1 Introduction
The majority of the signals describing mechanical phenomena are, at least in principle, continuous. A signal with continuous amplitude and time is commonly referred to as an analog signal and has a value at any time instant. In order to be analysed, signals are reduced from continuous to discrete form, as in the conversion of a sound wave into the sequence of samples on a CD or the conversion of continuous time into the discrete value displayed by an LCD watch.
Analog to Digital Conversion is the process that converts a continuous signal into a digital one and consists of two distinct processes, sampling and quantization [1, 2]. Both processes reduce the amount of information that can be gathered from the signal:
- Sampling converts time from a continuous variable to a discrete one. In other terms, the value of a function is observed only at discrete time intervals (each second, each millisecond, and so on). This process affects the independent variable (time).
- Quantization converts the measured value from a continuous variable to a discrete one. The Analog to Digital converter describes all the values between a maximum and a minimum with a finite number of values. This introduces errors, given that similar values can be converted into the same digital value. This process affects the dependent variable (measure).
1.1.1 Quantization
Quantization is the process of mapping a continuous variable to a discrete one, as
for instance the approximation of real numbers by rounding them to the nearest
integer. Because quantization is a many-to-few mapping, it is an inherently non-
linear and irreversible process (i.e., because the same output value is shared by
multiple input values, it is impossible in general to recover the exact input value
when given only the output value). This operation is performed by analog to digital
converters, which are primarily described by their resolution. The resolution
indicates the smallest variation of the signal that is detected by the converter and
is the ratio between the converter range (CR) and the number of discrete values
that the converter can produce (N). The resolution is often referred to as least
significant bit (LSB):
LSB = CR / N    (1)
Normally, the number of values that the converter can produce is expressed as a
power of two, because values are commonly stored in binary form. The CR is
expressed as the difference between the maximum and the minimum voltage that
the converter can measure. The LSB is therefore:
LSB = (Vmax − Vmin) / 2^B    (2)
where B is the number of bits of the converter (N = 2^B).
Figure 1 Effect of the number of bits and converter range on the quantization of a signal with 0-Pk
amplitude of 5 mV. The importance of choosing a proper converter range is evident.
Results show that, in the presence of A/D converters with a low number of bits, the choice of the full-scale range is crucial. The choice of a small converter range, though, may lead to signal clipping, which occurs whenever the signal amplitude exceeds the converter range.
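As a minimal numerical sketch of equations (1) and (2) and of the clipping mechanism (the converter settings and the test signal below are illustrative assumptions, not the ones used for the figures), the resolution and the clipped, quantized samples can be computed as follows:

```python
import numpy as np

# Illustrative assumptions: 16-bit converter, +/-0.1 V full scale
v_min, v_max, bits = -0.1, 0.1, 16
lsb = (v_max - v_min) / 2**bits                # equation (2): converter resolution

t = np.arange(0.0, 1.0, 1e-4)                  # 1 s of signal sampled at 10 kHz
x = 0.2 * np.sin(2 * np.pi * 5 * t)            # 0-Pk amplitude larger than the full scale

x_clipped = np.clip(x, v_min, v_max)           # clipping: values outside the range are coerced
x_quantized = np.round(x_clipped / lsb) * lsb  # quantization: rounding to the nearest LSB

err = np.max(np.abs(x_clipped - x_quantized))
print(f"LSB = {lsb * 1e6:.2f} uV, maximum quantization error = {err * 1e6:.2f} uV")
```

As expected, the maximum quantization error does not exceed one half of the LSB, while the clipping error can be arbitrarily large.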
[Plot: signal acquired by a 16-bit A/D converter with full scale ±0.1 V; amplitude (V) versus time (s)]
Figure 2 Clipping error as a consequence of the small converter range in comparison to the signal
amplitude.
The plots show the effect of the converter range on sine waves with amplitudes of 0.05, 0.1 and 0.2 V. The clipping is evident in the lower plot, where amplitudes outside the converter range are coerced to the nearest representable values.
1.1.2 Asynchronous Sampling
As already stated, sampling a signal means observing its value only at well-defined time instants. If these instants are not synchronized with the observed phenomenon, the sampling is called asynchronous; in this case, samples are acquired every dT seconds. In other words, dT is the time difference between two consecutive signal observations. Given that all the variations of the signal between two consecutive observations are completely neglected, sampling too may alter the analogue signal. A signal is correctly sampled if the time difference between two consecutive samples is appreciably smaller than the time over which the signal varies.
The sampling frequency fs (i.e. the number of samples collected each second) is
fs = 1 / dT   [samples/s]    (3)
Figure 3 Analogue to Digital conversion of an 8 Hz sine wave with sampling rates from 100 to 7 Hz. Aliasing in the last plot is evident.
As clearly explained in ref. [2], the key point to remember is that a digital signal cannot contain frequencies above one-half the sampling rate (i.e., the Nyquist frequency). When the frequency of the continuous wave is below the Nyquist frequency, the frequency of the sampled data matches it. However, when the continuous signal's frequency is above the Nyquist frequency, aliasing folds the frequency of the sampled signal below the Nyquist frequency: frequencies above the Nyquist frequency have a corresponding digital frequency between zero and one-half the sampling rate, as shown in Figure 4.
Figure 4. Aliasing as a function of the ratio between the signal and the sampling frequency
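A quick numerical check of this folding rule is sketched below; the 8 Hz signal and the 100 Hz and 7 Hz sampling rates mirror the example of Figure 3, while the folding formula is simply a compact way of expressing the aliasing mechanism described above.

```python
import numpy as np

f_signal = 8.0                          # Hz, frequency of the continuous sine wave

for fs in (100.0, 7.0):                 # adequate and inadequate sampling rates
    # frequency observed in the sampled data (folding about integer multiples of fs)
    f_apparent = abs(f_signal - fs * round(f_signal / fs))
    print(f"fs = {fs:5.1f} Hz, Nyquist = {fs / 2:4.1f} Hz -> apparent frequency = {f_apparent:.1f} Hz")
```

Sampling the 8 Hz wave at 7 Hz therefore produces an apparent 1 Hz component, as visible in the last plot of Figure 3.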
1.1.3 Synchronous Sampling
Figure 6 Harmonic signal with increasing frequency sampled synchronously (10 samples/rev)
observed in time domain (upper plot) and angle domain (lower plot)
[Scheme of Figure 9: the measured angle-versus-time curve is interpolated to identify the sampling instants (times) corresponding to a constant angle difference]
The tachometer is used to identify the period of each revolution, as shown in the
scheme of Figure 9.
When the tachometer detects the 1xRevolution signal, the shaft has completed an integer revolution, i.e. the angle has increased by 2π radians. In the simplest implementation of this method, the angle can be assumed to depend linearly on time (although more complex interpolation algorithms are obviously possible). The array of the resampling times can be identified by dividing each single revolution (2π) into the desired number of samples (for instance 8 samples/revolution in Figure 10) and computing the times t1, t2, …, t8 at which the angle is equal to 0, π/4, π/2, 3π/4 and so on. If the interpolation is linear, the time between two samples belonging to the same revolution is constant.
Figure 10 Identification of the resampling times t1…t8 in each revolution using the interpolated angle (red dotted curve) instead of the actual curve (blue curve).
The array of times t1, t2, …, tn is then used to identify which samples xi of the (asynchronously sampled) signal have to be used for further analyses; see the sketch below. All the points sampled at times different from t1, t2, …, tn are discarded.
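A minimal Python sketch of this procedure follows; the function names, the test signal and the shaft speed profile are illustrative assumptions, while the logic reproduces the steps of Figures 9-11 (linear interpolation between tachometer pulses and selection of the nearest asynchronous samples).

```python
import numpy as np

def resampling_times(tacho_times, samples_per_rev=8):
    """Times at which the (linearly interpolated) angle grows by 2*pi/samples_per_rev."""
    times = []
    for t0, t1 in zip(tacho_times[:-1], tacho_times[1:]):      # one revolution per pulse pair
        times.append(t0 + (t1 - t0) * np.arange(samples_per_rev) / samples_per_rev)
    return np.concatenate(times)

def pick_nearest_samples(t_async, x_async, t_resample):
    """For each resampling time, keep the asynchronously acquired sample closest in time."""
    idx = np.clip(np.searchsorted(t_async, t_resample), 1, len(t_async) - 1)
    idx -= (t_resample - t_async[idx - 1]) < (t_async[idx] - t_resample)
    return x_async[idx]

# Illustrative data: shaft accelerating from 5 to 10 rev/s, vibration sampled at 1 kHz
t = np.arange(0.0, 1.0, 1e-3)
angle = 2 * np.pi * (5 * t + 2.5 * t**2)                       # shaft angle [rad]
x = np.sin(angle)                                              # 1xRev vibration component
pulses = t[np.searchsorted(angle, 2 * np.pi * np.arange(8))]   # 1xRev tachometer times
x_sync = pick_nearest_samples(t, x, resampling_times(pulses, samples_per_rev=8))
```

Instead of picking the nearest sample, the signal could also be interpolated at the resampling times; the principle is unchanged.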
Figure 11 Selection of the samples using the times identified in the previous step.
As shown in Figure 12, the linear interpolation (and, in general, any kind of interpolation) may lead to measurement errors, given that the discrepancies between the interpolating curve and the actual curve lead to resampling the signal at the time t1 instead of the correct time t1*.
Figure 12 Difference between the times identified starting from the actual angle (t1*) and the one
identified from the interpolated curve (t1).
The use of angular encoders makes the off-line data interpolation unnecessary, since the sampling is carried out directly at equal angular intervals. In this case, the signal is sampled only when the encoder detects that the shaft has rotated by a certain angle.
Samples are acquired in the angle domain, as shown in the plot of the next figure.
As explained later, spectral analysis also benefits from the use of synchronous sampling.
1.3 References
[2] S. W. Smith, "The scientist and engineer's guide to digital signal processing,"
1997.
[6] J. S. Bendat, Nonlinear System Analysis and Identification from Random Data.
Wiley New York etc., 1990.
[13] G. Moschioni, B. Saggin and M. Tarabini, "The use of sound intensity for the
determination of the acoustic trajectory of aircrafts," in International Congress on
Acoustics, Madrid, 2007, pp. 2-7.
[14] F. J. Harris, "On the use of windows for harmonic analysis with the discrete
Fourier transform," Proc IEEE, vol. 66, pp. 51-83, 1978.
[16] M. Vetterli and D. Le Gall, "Perfect reconstruction FIR filter banks: Some
properties and factorizations," Acoustics, Speech and Signal Processing, IEEE
Transactions On, vol. 37, pp. 1057-1071, 1989.
[21] T. H. Patel and A. K. Darpe, "Application of full spectrum analysis for rotor
fault diagnosis," in IUTAM Symposium on Emerging Trends in Rotor Dynamics,
2011, pp. 535-545.
2.1 Introduction
The analysis of time-varying signals (the most typical case in mechanics being vibration) is a complex task, given that phenomena are often the superposition of deterministic and random components. In the case of vibration, the signal characteristics mainly depend on the mass, stiffness and damping characteristics of the system. Let us consider for instance the vibration of an undamped one-degree-of-freedom mechanical system: the vibration is (theoretically) a sine wave described by:
- amplitude, in terms of zero-to-peak, peak-to-peak or RMS level;
- sine wave period or frequency;
- initial phase.
These three groups of parameters define the sine pattern in a unique way, as shown in Figure 15.
[Figure 15: sine wave of period T, with the 0-Pk, Pk-Pk and RMS amplitudes indicated]
Figure 16 Dynamic signal characterized by the sum of two asynchronous harmonics and noise
The plot shows that the signal is not fully described by its amplitude, frequency (which is not unique) and phase. However, descriptive parameters such as the amplitude, the RMS or the period can be used to provide a generic snapshot of the signal characteristics, and are therefore the first parameters usually observed in any mechanical system analysis.
This chapter introduces the most common parameters used to describe time-varying signals. These parameters allow describing a signal (independently of the complexity of its time history) using one or a few numerical values. The use of a single parameter obviously eases the data processing and allows performing machine monitoring in a very simple manner. Typical examples of parameter-based monitoring are:
- if the maximum vibration of a machine exceeds 1 mm/s, the machine may be broken;
- if the temperature of the cooling liquid of a car engine exceeds 90 °C, there might be a problem;
- if the body temperature exceeds 37 °C, you are probably sick.
Figure 17 Analog to digital conversion
The samples xi are the basis for the computation of several indices:
- mean
- RMS
- peak value
- peak to peak amplitude
- standard deviation
- variance
- maximum and minimum value
Details for their computation are reported in the following sections and in the short sketch below. Additional details on the computation and the use of the above indices can be found in references [1-6].
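The following minimal Python sketch shows how these indices can be computed with numpy; the sample values are arbitrary illustration data, not measurements from the figures.

```python
import numpy as np

x = np.array([2.08, 1.97, 1.89, 1.45, 1.25, 1.36, 1.48, 2.02, 2.16, 1.77])  # arbitrary samples

mean = x.mean()                    # arithmetic mean (DC value)
rms = np.sqrt(np.mean(x**2))       # root mean square
std = x.std()                      # standard deviation (1/n form)
var = x.var()                      # variance
pk_pk = x.max() - x.min()          # peak to peak amplitude

print(f"mean={mean:.3f} RMS={rms:.3f} std={std:.3f} var={var:.3f} "
      f"max={x.max():.2f} min={x.min():.2f} Pk-Pk={pk_pk:.3f}")
```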
2.2.1 Mean
The arithmetic mean (hereinafter μ) of a sample is a measure of the central tendency of the signal. There are different kinds of mean (harmonic, geometric, arithmetic, generalized), whose expressions can be found in any statistics textbook. The most used parameter in experimental mechanics is the arithmetic mean (or simply, mean or average), defined as the ratio between the sum of all the signal samples and the number of samples:
μ = (1/n) Σ_{i=1}^{n} x_i    (5)
The mean of a signal is often referred to as the DC value, by analogy with the analysis of electric circuits.
If x is a continuous, time dependent variable, the finite sum is substituted by the
integral. The mean value of x(t) is computed as:
μ = (1/T) ∫_0^T x(t) dt    (6)
This value is often referred to as Expected value, and is indicated with E(x).
2.2.2 RMS, Standard Deviation and Variance
The two main limitations of the mean value in the description of time-varying
signals are:
i. it does not provide any information on the dispersion of the signal around the mean;
ii. it is often meaningless if the measurement is performed using AC-coupled sensors, as for instance in vibration measurements, given that the average is always zero. The same consideration applies when the signal is high-pass filtered before the acquisition.
Information about the data dispersion around the mean is provided by the Standard Deviation. A low standard deviation indicates that the measurement points are concentrated around the mean; a high standard deviation indicates that the measurements are scattered over a large range of values.
The Standard Deviation (hereinafter σ) is mathematically defined as:
σ = √[ (1/n) Σ_{i=1}^{n} (x_i − μ)² ]    (7)
σ is the square root of the ratio between the sum of the squared deviations from the mean (positive) and the number of samples (positive as well), and is consequently always a positive quantity. The measurement unit of σ is the same as that of x, i.e. if x is a velocity measured in mm/s, the standard deviation of x is a number expressing the variability of the velocity in mm/s.
The quantity σ² is the variance; consequently, the standard deviation is the square root of the variance.
Another quantity used to express the signal dispersion is the so-called Root Mean Square (RMS), defined as the square root of the mean of the squared samples:
RMS = √[ (1/n) Σ_{i=1}^{n} x_i² ]    (8)
There is a relationship between the variance, the RMS and the mean:
σ² = RMS² − μ² , i.e. RMS² = σ² + μ²    (9)
The RMS is a measure of the signal average power: an AC voltage with a given RMS value has the same heating (power) effect as a DC voltage of that same value.
Figure 18 Signals with the same RMS value (1 V) but different time histories
For a sine wave, the RMS can be computed analytically and is equal to the peak
value divided by the square root of 2.
RMS(sine) = (0-Pk amplitude) / √2 = (Pk-Pk amplitude) / (2√2)    (10)
Four examples of signals with the same time length (100 s) but different means (2 and 10 units) and standard deviations (0.3 and 3 units) are shown in Figure 19.
[Figure 19 plots: the annotated RMS values are 2.02 for the signal with mean 2 and standard deviation 0.3, and 3.60 for the signal with mean 2 and standard deviation 3]
Figure 19 RMS of signals with different mean values and standard deviations.
The RMS of the above signals was derived from the mean and the standard deviation using relationship (9) introduced above.
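As a quick worked check of relationship (9): a signal with mean 2 and standard deviation 0.3 has RMS = √(2² + 0.3²) = √4.09 ≈ 2.02, while a signal with mean 2 and standard deviation 3 has RMS = √(4 + 9) ≈ 3.6, consistent with the values annotated in Figure 19.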
If we consider x(t) as a continuous random variable, the previously described parameters can be expressed with integral formulations based on the probability density function. Their expressions will be presented in the next paragraphs.
2.2.3 Central Moments, Skewness and Kurtosis
The kth central moment is defined as the expected value of (x − μ)^k:
M_k = E[ (x − μ)^k ]    (11)
M3 and M4 are used to derive the Skewness and Kurtosis. The Skewness provides
information about the distribution symmetry.
S = M_3 / σ³    (13)
A negative Skewness indicates that the left tail is longer, while a positive Skewness
indicates that the right tail is longer.
Kurtosis, similarly to Skewness, is a descriptor of the shape of a probability
distribution (it describes its flatness) and can be computed as:
K = M_4 / σ⁴ − 3    (14)
The "minus 3" at the end of the formula can be seen as a correction factor to make
the kurtosis of the normal distribution equal to zero.
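As an illustrative sketch (not the author's code), the central moments, the skewness and the excess kurtosis of equations (11), (13) and (14) can be estimated from a sampled signal as follows:

```python
import numpy as np

def central_moment(x, k):
    """k-th central moment M_k = E[(x - mu)^k], as in equation (11)."""
    return np.mean((x - x.mean())**k)

def skewness(x):
    return central_moment(x, 3) / x.std()**3        # equation (13)

def excess_kurtosis(x):
    return central_moment(x, 4) / x.std()**4 - 3    # equation (14)

# For a Gaussian sample both indices should be close to zero
x = np.random.default_rng(0).normal(size=100_000)
print(f"skewness = {skewness(x):.3f}, kurtosis = {excess_kurtosis(x):.3f}")
```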
2.2.4 Maximum, minimum, Peak to Peak amplitude, Zero to Peak amplitude
Other quantities commonly used to describe the signal's behaviour are the upper and the lower signal boundaries.
The signal maximum and minimum do not require any explanation, given that they are simply the largest and the smallest values that the measured quantity assumed during the observation period. Hereinafter, they will be referred to as xmax and xmin; xmax is often referred to as the Peak value.
The actual peak to peak signal amplitude (Pk-Pk) is defined as the difference
between the maximum and the minimum value of the signal:
Pk-Pk = xmax − xmin    (15)
The Pk-Pk value is meaningful both for zero-mean signals (e.g. the vibration of a shaft in terms of acceleration) and for non-zero-mean signals (e.g. the pressure fluctuations inside the lubrication circuit of a turbine).
Owing to the difficulty of implementing a Pk-Pk calculator with analog circuits, up to 10 years ago the Pk-Pk amplitude of AC signals was derived by multiplying the RMS by 2√2. This approximation is unbiased in the case of purely harmonic signals and biased if the signal is noisy or is the superposition of different harmonics.
On many instruments for vibration analysis, the Pk-Pk defined above is referred to as True Pk-Pk, while the Pk-Pk is derived by multiplying the RMS by 2√2. The difference in the analysis of harmonic vibration signals with a null average is usually negligible, while in the presence of signal spikes the Pk-Pk amplitude derived from the RMS is a biased indicator.
The zero to peak amplitude (0-Pk) is commonly used in the analysis of AC signals and is mathematically defined as half of the Pk-Pk amplitude:
0-Pk = (Pk-Pk) / 2    (16)
The 0-Pk index is commonly used only in the analysis of AC signals, i.e. when the signal average is theoretically null. For non-zero-mean signals, this index is of little use.
Finally, the crest factor is a measure of the ratio between the peak value and the RMS value, i.e. it shows how extreme the peaks of a waveform are. A crest factor of 1 indicates no peaks, as in direct current. Higher crest factors indicate peaks, for example those characterizing impulsive events on a generally smooth waveform. The Crest Factor (CF) is defined as:
CF = max(|x|) / RMS    (17)
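The following sketch, based on an assumed test signal, illustrates both the crest factor and the bias of the RMS-based Pk-Pk estimate in the presence of a spike:

```python
import numpy as np

t = np.arange(0.0, 1.0, 1e-3)
sine = np.sin(2 * np.pi * 10 * t)          # pure harmonic signal
spiky = sine.copy()
spiky[500] += 3.0                          # a single spike added to the waveform

for name, x in (("sine", sine), ("sine + spike", spiky)):
    rms = np.sqrt(np.mean(x**2))
    cf = np.max(np.abs(x)) / rms                    # equation (17)
    true_pkpk = x.max() - x.min()                   # equation (15)
    pkpk_from_rms = 2 * np.sqrt(2) * rms            # approximation discussed above
    print(f"{name:13s} CF = {cf:.2f}  true Pk-Pk = {true_pkpk:.2f}  Pk-Pk from RMS = {pkpk_from_rms:.2f}")
```

For the pure sine the two Pk-Pk estimates coincide; the spike raises the crest factor and the true Pk-Pk, while the RMS-based estimate barely changes.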
Table 1: comparison between the true Pk-Pk and the Pk-Pk computed from RMS for different
signals.
(the three columns refer to the three example signals shown as small plots in the original table)

               Signal 1   Signal 2   Signal 3
Max               1         1.45       1.7
Min              -1        -1.41      -1.75
True Pk-Pk        2         2.86       3.45
Pk-Pk (RMS)       2         2.07       2.03
Sk                0         0.02      -0.1
The crest factor is mainly used to point out impulsive events or spikes in a generally uniform waveform; an alternative to the CF for the identification of spikes is the kurtosis, which, thanks to the fourth power, emphasizes values far from the mean. The modulus in the CF definition prevents distinguishing between positive and negative spikes; the spike asymmetry can be identified by analysing the skewness index.
The PDF is equal to the ratio between the number of times that the variable assumed the value x* and the total number of samples n.
Similarly, the cumulative probability density function (hereinafter CPDF) is the
probability expressing the chance that a signal assumes a value lower than x*:
P(x*) = n(xi < x*) / n    (19)
It is also possible to define the chance that a signal falls within a certain interval between x*L and x*H:
P(x*H, x*L) = [ n(xi < x*H) − n(xi < x*L) ] / n    (20)
i.e. the chance of falling within an interval is the difference between the chance of being lower than the upper limit and the chance of being lower than the lower limit. If x*H and x*L differ by a quantity that tends to 0, P(x*H, x*L) tends to p(x*) dx, i.e. to the probability density times the interval width.
An example of the use of p(x*) is shown in the next plot: the numbers indicated in the plot areas indicate the number of times that the signal assumes values between 1 and 2, 2 and 3, 3 and 4, 4 and 5 and so on.
The PDF can be obtained by dividing the number of events belonging to a certain
class by the total number of events. This number is usually expressed in
percentage and plotted versus the variable x as shown in the histogram of the next
figure.
[Histogram: PDF p(x), expressed as a percentage, versus x]
The CPDF is obtained by progressively adding the PDF, obtaining a number that
expresses how many times the signal was smaller than a given threshold.
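A minimal sketch of the estimation of the PDF and CPDF from a sampled signal is reported below; the data are randomly generated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=4.0, scale=1.0, size=5000)     # illustrative measurement values

counts, edges = np.histogram(x, bins=10)          # number of events in each class
pdf = counts / counts.sum() * 100                 # PDF of each class, in percent
cpdf = np.cumsum(pdf)                             # CPDF: progressive sum of the PDF

for lo, hi, p, P in zip(edges[:-1], edges[1:], pdf, cpdf):
    print(f"{lo:5.2f} - {hi:5.2f}:  p = {p:5.1f} %   P = {P:5.1f} %")
```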
As previously done for the PDF, the CPDF plot is obtained by dividing the cumulative number of events below each threshold by the total number of events.
Figure 23 CPDF plot summarizing the chance that the signal has a value smaller than a given level
The above plots were deliberately limited to a small number of samples to make them easier to read. Real cases often involve signals with thousands of samples, and the number of histogram bars (a variable chosen by the experimenter) usually ranges between 10 and 50. A more common situation is the one represented in Figure 24, which shows the CPDF of a vibration velocity ranging between 32 and 35.8 μm/s.
As later pointed out, PDF and CPDF can be useful in the identification of alert and
alarm levels in plant monitoring.
The probability distribution can be used to compute the expected values of continuous random variables. The mean value of x(k) can be identified using the probability density function p(x) by multiplying each value x by its probability density and integrating:
E[ x(k) ] = ∫_{−∞}^{+∞} x p(x) dx = μ_x    (21)
Finally, the variance of x(k) is defined as the mean square value of x(k) about its mean:
E[ (x(k) − μ_x)² ] = ∫_{−∞}^{+∞} (x − μ_x)² p(x) dx = E[ x(k)² ] − μ_x² = σ_x²    (23)
2.3.1.1 Percentiles
The CPDF plot can be used to identify the distribution percentiles. The α-th percentile is a value cα dividing the population in two parts:
- α % of the values are smaller than cα;
- (100 − α) % of the values are larger than cα.
The percentile cα can be found from the CPDF plot by intersecting the curve at the desired probability level and reading the corresponding measured value.
[Figure 25: CPDF (%) versus the measured vibration value (32 to 35.8)]
In addition:
- a percentile can be unique or not (it is not unique where the CPDF is horizontal);
- the 0th percentile is the minimum of a population;
- the 100th percentile is the maximum of a population;
- the median is the 50th percentile;
- the most commonly used percentiles are the 1st, 5th, 95th and 99th.
The confidence interval of the CPDF shown in Figure 24 and in Figure 25, derived as described at the end of section 2.3.1, is the interval between 32.8 and 35 μm/s.
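In practice, percentiles are usually computed directly from the samples rather than read graphically from the CPDF; a minimal sketch (with randomly generated data) is:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=33.9, scale=0.6, size=5000)    # illustrative vibration readings

p1, p5, p50, p95, p99 = np.percentile(x, [1, 5, 50, 95, 99])
print(f"median = {p50:.2f}")
print(f"90% interval = [{p5:.2f}, {p95:.2f}]   98% interval = [{p1:.2f}, {p99:.2f}]")
```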
2.3.2 Boxplots
Boxplots are a convenient way to summarize a population using a box and whiskers. The box is limited by the 25th and the 75th percentiles (Q1 and Q3) and divided by the median (50th percentile, Q2). The whiskers start from the box and end at Q2 − 3(Q3 − Q1) and Q2 + 3(Q3 − Q1); asterisks are used to denote the outliers.
Figure 27 PDF (left) and CPDF (right) of data sampled from an approximately uniform distribution
μ is the mean of the distribution and also its median and mode; σ is its standard deviation. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate. If μ = 0 and σ = 1, the distribution is called the standard normal distribution or the unit normal distribution, and a random variable with that distribution is a standard normal deviate.
Normal distributions are quite common in experimental mechanics, given that they are used to model random vibration generated by turbulent layers, earthquakes and a great variety of random phenomena. In addition, the mean of a large number of random variables independently drawn from the same distribution is normally distributed, independently of the original distribution (because of the central limit theorem). Thus, physical quantities that are expected to be the sum of many independent processes (such as measurement errors) often have a distribution very close to normal. Another reason is that a large number of results and methods (such as propagation of uncertainty and least squares parameter fitting) can be derived analytically, in explicit form, when the relevant variables are normally distributed.
Figure 28 Effects of the mean and standard deviation on the PDF of the Gaussian distribution. The left plot shows PDFs with the same SD (σ = 1) and different means; the right plot shows PDFs with the same mean (μ = 0) and different standard deviations.
The Gaussian PDF is symmetrical with respect to the mean value, and indicates that there is a large probability of finding the measurand close to the mean. The probability decreases when the measure is far from the mean, and the decrease rate is governed by the standard deviation. The PDF increases between −∞ and μ and decreases between μ and +∞.
The CPDF (Figure 29) rapidly increases close to the mean value and very slowly close to the tails. Since the PDF is symmetrical, the CPDF is equal to 50% at the mean.
Figure 29 CPDF of three Gaussian distributions with standard deviation 3 and means of 70, 73 and
76
The Student's t-distribution is used where the sample size is small and the population standard deviation is unknown. This reflects many cases in experimental mechanics, where measurements are performed to assess the confidence intervals (and the uncertainty) of the measurand. A derivation of the t-distribution was published in 1908 by William Sealy Gosset while he worked at the Guinness Brewery in Dublin, Ireland. The scientific paper was published under the pseudonym Student, given that Guinness did not want to share the results of internal research with competitors.
The t-distribution is symmetric and bell-shaped, like the normal distribution, but has heavier tails. This reflects the necessity of giving more relevance to the possibility of having data far from the mean when the distribution has been estimated from a reduced number of samples. For our purposes, the key point is that the Student's t PDF depends on the number of measurements from which the standard deviation has been estimated.
Student's t-distribution has the probability density function given by
f(x) = Γ((ν+1)/2) / [ √(νπ) Γ(ν/2) ] · (1 + x²/ν)^(−(ν+1)/2)    (25)
where ν is the number of degrees of freedom and Γ is the gamma function. The Student's t PDF is similar to that of the Gaussian distribution with mean 0 and variance 1, except that it is lower and wider when the number of degrees of freedom is small. As the number of degrees of freedom grows, the t-distribution approaches the normal distribution. The next images show a comparison of the Student's t PDF (green and red) with the Gaussian PDF (blue) for increasing values of ν. If the number of degrees of freedom is infinite, the Student's t PDF coincides with the Gaussian PDF.
Figure 30 Effect of the number of degrees of freedom on the t-student distribution and comparison
with the Gaussian PDF
The main outcome of the above consideration is that the confidence intervals of
a variable estimated from a reduced number of samples are wider than the ones
estimated with a large number of samples.
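As a small numerical illustration of this effect (scipy's implementation of the t-distribution is used here purely as a convenience; it is not a tool referenced by the text), the 95% two-sided coverage factor can be compared with the Gaussian value of 1.96:

```python
from scipy import stats

# 95% two-sided coverage factor as a function of the degrees of freedom
for dof in (2, 4, 9, 29, 100):
    print(f"dof = {dof:3d}: t = {stats.t.ppf(0.975, dof):.2f}   (Gaussian limit: 1.96)")
```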
In general, the mean values are different at different times and must be calculated
separately for every time of interest.
μ_x(t1) ≠ μ_x(t2) if t1 ≠ t2
μ_y(t1) ≠ μ_y(t2) if t1 ≠ t2    (28)
The cross-covariance at times t and t + τ is defined as
C_xy(t, t + τ) = E[ (x_k(t) − μ_x(t)) (y_k(t + τ) − μ_y(t + τ)) ]
If τ = 0:
C_xx(t, t) = E[ (x_k(t) − μ_x(t))² ] = σ_x²(t)
C_yy(t, t) = E[ (y_k(t) − μ_y(t))² ] = σ_y²(t)    (30)
C_xy(t, t) = E[ (x_k(t) − μ_x(t)) (y_k(t) − μ_y(t)) ] = C_xy(t)
where the expected values are computed over the ensembles of sample functions {x_k(t)} and {y_k(t)}.
R_xx(τ) = E[ x_k(t) x_k(t + τ) ] = lim_{T→∞} (1/T) ∫_0^T x(t) x(t + τ) dt
R_yy(τ) = E[ y_k(t) y_k(t + τ) ] = lim_{T→∞} (1/T) ∫_0^T y(t) y(t + τ) dt    (32)
R_xy(τ) = E[ x_k(t) y_k(t + τ) ] = lim_{T→∞} (1/T) ∫_0^T x(t) y(t + τ) dt
Calling p(x1, x2) the joint probability density function of the values assumed by {x_k(t)} at times t and t + τ (and similarly for {y_k(t)}), the correlation functions become:
R_xx(τ) = ∫∫ x1 x2 p(x1, x2) dx1 dx2
R_yy(τ) = ∫∫ y1 y2 p(y1, y2) dy1 dy2    (33)
R_xy(τ) = ∫∫ x1 y2 p(x1, y2) dx1 dy2
For arbitrary values of μ_x and μ_y, the covariance is related to the correlation by the equations
C_xx(τ) = R_xx(τ) − μ_x²
C_yy(τ) = R_yy(τ) − μ_y²    (34)
C_xy(τ) = R_xy(τ) − μ_x μ_y
The second property is that, since a time lag equal to 0 means that the two signals being compared are exactly the same, the autocorrelation function attains its maximum value at τ = 0, where it equals the mean square value of x:
E[x²] = R_xx(0) ≥ R_xx(k) for every k    (36)
So, in the case we are dealing with the autocorrelation function (i.e. when y = x) we get:
R_xx(−k) = R_xx(k)    (40)
i.e. the autocorrelation is an even function of the time lag.
Figure 31 Autocorrelation of a square wave. The left plot shows the original signal, the right plot
shows the signal and the autocorrelation overlapped.
Figure 32 Autocorrelation of a noisy square wave. The left plot shows the original signal, the right plot shows the signal and the autocorrelation overlapped.
Cross-correlation is mainly used to identify the time delay between two signals related to the same phenomenon. A typical example is that of radar systems. A radar transmits a pulse of energy that is reflected by the object being examined. The received waveform is a noisy version of the transmitted one, and the peak of the cross-correlation between the transmitted and the received signals occurs at a time equal to twice the time the pulse needs to cover the distance between the radar and the object.
2.5 Examples
In this section we describe how the previously described tools can be conveniently used for the monitoring and diagnosis of mechanical systems, together with some practical hints on the choice of the parameters that can be arbitrarily chosen in the analysis, such as the averaging time, the parameters to be monitored and so on.
The main application of time-domain signal analysis is probably the monitoring of mechanical components, of environmental conditions and of civil structures. Monitoring is always performed by comparing the measurement with a reference value: consequently, each monitoring process is the result of three steps:
i. Planning: choice of the parameters to be measured;
ii. Training: identification of the normal (typical) parameter values;
iii. Operative: actual measurement campaign and comparison between measurements and normal values.
These criteria, however, are often derived from subjective opinions, with little regard to the dynamic behaviour of the systems to which they are applied.
The basic level of vibration monitoring is described by the ISO 10816 standard, which provides the guidelines for machine monitoring using the RMS: it establishes general conditions and procedures for the measurement and evaluation of vibration using measurements made on non-rotating parts of complete machines. The general evaluation criteria, which are presented in terms of both vibration magnitude and change of vibration, relate to both operational monitoring and acceptance testing. They have been provided primarily with regard to securing reliable, safe, long-term operation of the machine, while minimizing adverse effects on associated equipment. Guidelines are also presented for setting operational limits.
The standard specifies that the reference quantity for assessing the health state of a machine is the vibration magnitude (the RMS of the vibration velocity in frequency ranges which depend on the rotation speed). The different parts of ISO 10816 describe the measurement procedures for different machine types:
- land-based steam turbines and generators in excess of 50 MW (part 2);
- industrial machines with nominal power above 15 kW (part 3);
- gas turbine sets with fluid-film bearings (part 4);
- machine sets in hydraulic power generating and pumping plants (part 5);
- reciprocating machines with power ratings above 100 kW (part 6);
- pumps for industrial applications, including measurements on rotating shafts (part 7);
- reciprocating compressor systems (part 8).
Typical monitored quantities and the corresponding indices include:
- fluid pressures
  - mean value (DC)
- static and dynamic forces on the supports
  - mean value (DC)
  - standard deviation (AC)
  - RMS overall
  - peak values (Max, 0-Pk, Pk-Pk)
  - amplitude and phase of the spectral components 1X, 2X, etc.
- rotation speed
  - mean value (DC)
  - standard deviation (AC)
The plot shows the effect of computing the average over a time of 0.25 s (red line) or 2.5 s (green line). Three aspects are evident:
- the red points (0.25 s average) follow the actual displacement line without being affected by the short-time oscillations;
- the green points (2.5 s average) cannot be used to represent the signal fluctuations: the peak at 12.5 s is still evident, but its amplitude underestimates the actual one;
- the amount of memory needed to store the green curve is lower than that needed for the red curve.
In general, a long averaging window cancels the short-time oscillations but may not be able to follow rapid signal variations. On the contrary, a short averaging period follows minor signal oscillations but requires a large amount of memory if the measures have to be saved for further analyses. A good trade-off between these two conditions is the choice of a short averaging period together with conditional storage (a minimal sketch is given below):
- measures are computed every fraction of a second, similarly to what happens for the red curve in Figure 34;
- measures are stored on the disk only if there is a variation with respect to the previously stored measure (in this case, when the measure is constant the datum is not saved).
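The sketch below is an illustrative implementation of this idea; the signal, the threshold and the function names are assumptions chosen to mimic the example of Figure 34 (a peak at 12.5 s, averaging times of 0.25 and 2.5 s).

```python
import numpy as np

def running_rms(x, fs, window_s):
    """RMS over consecutive, non-overlapping windows of length window_s seconds."""
    n = int(window_s * fs)
    n_win = len(x) // n
    return np.sqrt(np.mean(x[:n_win * n].reshape(n_win, n)**2, axis=1))

def conditional_store(values, threshold):
    """Keep a value only if it differs from the last stored one by more than threshold."""
    stored = [values[0]]
    for v in values[1:]:
        if abs(v - stored[-1]) > threshold:
            stored.append(v)
    return np.array(stored)

fs = 1000.0
t = np.arange(0.0, 30.0, 1 / fs)
x = np.sin(2 * np.pi * 25 * t) * (1 + 0.5 * np.exp(-((t - 12.5) / 0.5)**2))   # peak at 12.5 s

rms_fast = running_rms(x, fs, 0.25)                 # follows the peak (red curve)
rms_slow = running_rms(x, fs, 2.5)                  # smooths the peak (green curve)
print(len(rms_fast), len(conditional_store(rms_fast, threshold=0.02)))
```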
Training
The training consists in the identification of the typical measured values and requires that the measurement system and the machine are working correctly. Measurements are performed in normal conditions and the data are then analysed using boxplots, PDF, CPDF or descriptive statistics.
The most reliable approach consists in the identification of the confidence intervals of the variable using the CPDF. Warning and alarm levels can be chosen using the 95 and 99.7 % confidence intervals, although more selective criteria can be used if the consequences of a fault involve danger for people or for machinery.
Figure 35 Identification of the 80% confidence interval from preliminary measurements. The interval, in this case, is 14-37 μm.
Operative
The operative measurements are performed by checking whether the measured parameter falls within the thresholds identified as previously explained. Data collected in the operative phase can then be used to define new thresholds or to perform statistical analyses for diagnostic purposes.
Let us consider some experimental data describing the vibration measured on the supports of a rotating machine. The vibration likely depends on the shaft rotation speed, and the parameters that can be extracted from the vibration time history are all the ones listed at the beginning of this section. For instance, one can choose to measure the RMS of the vibration and the amplitude of the vibration component synchronous with the rotor rotation (described in detail in the Fourier analysis chapter and usually referred to as the 1X amplitude).
Each of these two quantities is measured at specific time intervals (for instance, every 10 s). The measures (plotted versus time) are shown in Figure 36. The plots show that the vibration magnitude (RMS and 1X, lower plot) increases when the rotation speed of the shaft increases. The comparison with the ISO 10816 values is useful to identify whether the machinery is working correctly. This analysis (often referred to as trend analysis) is very simple and is useful to point out any machinery defect. As already evidenced, however, the diagnostic capability of this analysis is limited.
Figure 36 Vibration RMS measured on a small rotor with variable rotation frequency. Red line:
rotation frequency, blue line: vibration RMS
Another similar example is shown in Figure 37, which shows the peak to peak amplitude measured on the turbine and compressor sides of a large power plant during the plant start-up. In this case too, the vibration depends to some extent on the rotation speed, but without any prior knowledge of the machine vibration values it is not possible to state whether the plant is working correctly or not. The comparison with the ISO 10816 values for this type of machine suggests that the plant is working correctly.
Figure 37 red line: rotation speed of a large compressor/turbine; blue line: 1xRPM amplitude of the
vibration displacement.
Figure 38 Vibration measured on the supports of the compressor of a large power plant. Red line:
rotation frequency; Blue line: vibration amplitude.
More powerful tools for the identification of the causes of anomalous behaviour
will be introduced in the next chapters.
2.5.2 Electric power generator vibration measurements
In this section we describe the vibration measured on a large electric power generator in accordance with the ISO 8528-9 standard. Part 9 refers to the monitoring of combustion engines with vibration measurements performed on non-rotating parts. According to the standard, the vibration has to be measured along three mutually perpendicular axes in the positions indicated in Figure 39. The standard provides the vibration limits in terms of RMS of the vibration velocity; given that the machine has a nominal power of approximately 2 MW and a rotation speed of 1000 RPM, the vibration limits are 45 mm/s on the motor and 22 mm/s on the alternator.
The vibration has been measured using an accelerometer with a nominal sensitivity of 10 mV/(m/s²), and the velocity signal has been computed by integrating the acceleration signal in the time domain. The RMS of the vibration is shown in Table 2.
The data show that the basic criterion of the vibration level can be very useful to obtain a snapshot of complex mechanical phenomena. Table 2 indicates that, regardless of the measurement location, the vibration has the same order of magnitude on the three axes. The vibration on the generator is on average larger than the vibration on the alternator, with magnitudes lower than the limits imposed by the ISO standard (45 mm/s on the generator and 22 mm/s on the alternator).
Table 2 Vibration measured on the generator and on the alternator at the positions indicated by ISO 8528.
Measurement position                              Vertical   Horizontal   Axial   (vibration in mm/s RMS)
1r. Alternator basement 8.73 6.83 3.58
2r. Alternator Head 5.42 3.3 9.98
3r. Alternator support free side 5.08 3.57 3.31
4r. Alternator support generator side 3.71 4.06 3.07
5r. Alternator head generator side 4.65 4.85 5.35
6r. Generator head alternator side 11.5 37.1 33.4
7r. Generator support alternator side 6.07 6.03 3.02
8r. Generator support free side 4.56 5.3 2.01
9r. Accessory rack support 4.53 3.98 2.99
10r. Accessory rack frame 5.21 5.86 7.23
11r. Generator basement 9.84 7.66 3.07
12r. Generator head free side 15.5 8.51 5.51
1l. Basement alternator side 11.9 6.87 3.4
2l. Alternator head free side 7.53 4.69 10.8
3l. Alternator support free side 5.9 3.59 3.71
4l. Alternator support generator side 4.62 4.13 3.4
5l. Alternator head generator side 5.13 4.74 6.2
6l. Generator head alternator side 15.5 8.35 23.9
7l. Generator support alternator side 5.48 6.37 2.91
8l. Generator support free side 4.95 5.27 2.67
9l. Accessory rack base 6.19 4.01 3.97
10l. Accessory rack Frame 7.75 4.77 17.6
11l. External basement generator side 12.2 7.98 4.02
12l. Motor head 5.71 6.35 3.74
13l. Tank 42.3 6.65 11.2
14l. Accessory rack frame 6.91 9.8 27.3
15l. Oil filter 9.45 10.1 13
2.5.3 Long-term monitoring of vehicle vibration
Figure 41 Boxplots of the weighted accelerations and of the vector sum measured in ten
repetitions of the same 2 km path with the same car.
Additional analyses were performed using the probability density functions of the
acceleration vector sum. The PDF describes the probability of the value falling
within a particular interval and, consequently, provides a detailed overview of the
values that the acceleration assumed during our tests. Results are shown in Figure
42.
Figure 42 Probability density functions of the seat vibration for four car speed ranges: (a) 0-30 km/h, (b) 30-50 km/h, (c) 50-80 km/h, (d) 80-130 km/h. The different curves identify different cars.
Results show that at low speeds there are several differences between the cars' behaviour. The most frequent values range between 0.15 m/s² and 0.45 m/s². Such a spread can be ascribed to the different mechanical behaviour of the cars, to the characteristics of the roads where tests were performed or to different interactions between the seat and the driver. At medium speeds (30 to 80 km/h, plots b and c) all the cars have a similar behaviour, while at high speed (80 to 130 km/h) the differences between the cars are important.
2.5.4 Noise monitoring
The basic parameter for any noise monitoring procedure is the A-weighted sound pressure level, which is computed as follows:
- the sound pressure p(t) is measured by a microphone;
- p(t) is frequency weighted (A-weighting curve) to obtain pA(t);
- the RMS of pA(t) is computed over the whole time history to obtain RMSA;
- the A-weighted sound pressure level is computed as SPL(A) = 20 log10(RMSA / (2·10⁻⁵ Pa)).
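A minimal sketch of the last two steps is given below (the A-weighting filter itself is not implemented, and the 1 kHz tone used as input is an arbitrary assumption):

```python
import numpy as np

def spl_a(p_a_weighted, p_ref=2e-5):
    """SPL(A) in dB from an already A-weighted sound pressure signal [Pa]."""
    rms_a = np.sqrt(np.mean(p_a_weighted**2))
    return 20 * np.log10(rms_a / p_ref)

# A 1 kHz tone with an RMS of 0.2 Pa corresponds to 80 dB(A)
fs = 48000
t = np.arange(0.0, 1.0, 1 / fs)
p_a = 0.2 * np.sqrt(2) * np.sin(2 * np.pi * 1000 * t)
print(f"SPL(A) = {spl_a(p_a):.1f} dB(A)")
```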
Figure 43 Scheme of the experimental site (1) and its surroundings: airport terminal (2), runway (3)
and helicopter school (4). Dotted lines indicate major and minor roads.
The measured sound pressure level is shown in Figure 44. The trend analysis can be used to highlight the periodicity of the sound events (low level during the night, high level during the hours of major airport activity). However, this analysis cannot be used to identify the causes of the noise (are aircraft taking off or landing? Is the sound coming from the terminal or from the airplanes that are leaving the runway?). Once again, the trend analysis is the groundwork for any further study, but can only be used to outline unusual conditions without providing information on the causes.
Figure 44 Sound pressure level measured close to Milan Malpensa airport (week summary and
daily detail).
2.5.5 Use of auto- and cross-correlation in automotive applications
The radial acceleration measured by the MEMS accelerometer placed on the tyre liner is shown in Figure 46: the acceleration is null when the sensor is in contact with the asphalt, while when it is rotating the acceleration is centripetal.
The car speed can be identified by detecting the period of the above signal; the period can be measured by detecting the distance between the signal maxima or by detecting the time at which the signal crosses a specific threshold (triggering operation). Both approaches, however, are biased in the presence of measurement noise. The period detection is more robust if it is based on the autocorrelation of the signal shown in Figure 46: as previously stated, the autocorrelation of a periodic signal is another signal (generally different from the original one) periodic with the same period as the original one. This is better shown in Figure 47, where the original signal and its autocorrelation are shown over a time span of 0.1 s.
Figure 47 Comparison between the radial acceleration signal (upper plot) and the autocorrelation
signal (lower plot)
The plots in Figure 47 clearly show that the identification of the local maxima is much easier on the autocorrelation signal (rather than on the original one), given that the measurement noise is strongly attenuated. Once all the local maxima have been identified, it is possible to compute the (average) wheel speed on each revolution by dividing the wheel circumference by the time between two local maxima. In this case, the autocorrelation has been computed using equation (32) without the 1/T normalization factor.
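A minimal sketch of the period estimation through the unnormalised autocorrelation is reported below; the signal, the noise level and the revolution period are illustrative assumptions, not the measured data of Figure 46, and the approximate period is assumed to be roughly known (as it would be from a first triggering pass).

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 2.0, 1 / fs)
period = 0.065                                       # assumed wheel revolution period [s]
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * t / period) + 0.8 * rng.standard_normal(len(t))   # noisy periodic signal

x0 = x - x.mean()
r = np.correlate(x0, x0, mode="full")[len(x0) - 1:]  # unnormalised autocorrelation, lags >= 0

# look for the local maximum in a window around the expected period
lo, hi = int(0.5 * period * fs), int(1.5 * period * fs)
lag = lo + np.argmax(r[lo:hi])
print(f"estimated period = {1000 * lag / fs:.1f} ms (true value {1000 * period:.1f} ms)")
```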
Figure 48 Autocorrelation signal in the first 0.5 s and the average speed plotted versus time.
The identification of the car speed from the wheel rotation speed fails whenever the wheel slips on the ground. The incipient slipping condition can be detected by comparing the wheel rotation speed and the absolute car speed; the latter can be identified in different ways, for instance from GPS signals or with inertial platforms placed on the car. The most robust approach, as explained below, relies on the cross-correlation.
The tyre adherence on the asphalt is related to the asphalt roughness: as shown in Figure 49, asphalt may exhibit different roughness values and the vehicle can consequently experience different adherence conditions.
Figure 50 Experimental setup for the identification of the asphalt roughness using laser sensors.
When the car moves, the two lasers measure similar asphalt profiles, delayed by the time the car needs to cover the distance between the two sensors (Figure 51). Even if the two sensors are not perfectly aligned with the direction of motion, or in the presence of measurement noise, the time delay between the two signals can be identified using the cross-correlation function.
Figure 51 Examples of the two signals measured by two laser transducers shown in Figure 50
The cross-correlation between the two signals is shown in Figure 52; the peak of the cross-correlation function occurs at a time t*, and the car speed can be obtained by dividing the distance between the two laser sensors by t*. Also in this case, the cross-correlation has been computed using equation (32) without the 1/T normalization factor.
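A minimal sketch of the speed estimation from the cross-correlation peak is reported below; the sensor distance, the car speed, the noise level and the synthetic asphalt profile are illustrative assumptions.

```python
import numpy as np

fs = 10000.0                     # sampling rate of the two laser signals [Hz]
d = 0.20                         # assumed distance between the two lasers [m]
v_true = 8.0                     # assumed car speed [m/s] -> true delay d/v = 25 ms

rng = np.random.default_rng(0)
profile = rng.standard_normal(20000)                          # random asphalt profile
delay = int(d / v_true * fs)
front = profile[delay:delay + 5000] + 0.3 * rng.standard_normal(5000)
rear = profile[:5000] + 0.3 * rng.standard_normal(5000)       # rear laser sees the profile later

r = np.correlate(rear - rear.mean(), front - front.mean(), mode="full")
t_star = (np.argmax(r) - (len(front) - 1)) / fs               # lag of the cross-correlation peak
print(f"estimated speed = {d / abs(t_star):.1f} m/s (true value {v_true} m/s)")
```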
2.6 Conclusions
This chapter introduced the basic criteria for the analysis of mechanical systems in the time domain. We initially introduced a series of parameters (such as the mean, the standard deviation, the peak-to-peak amplitude and so on) that can be successfully used to describe a whole waveform with a single number. We then introduced the concept of running statistics, i.e. the analysis of the evolution of these parameters versus time (for instance, the variation of the average daily temperature or the variation of the vibration RMS measured every 10 s). This kind of analysis (referred to as trend analysis) is the most basic criterion for the analysis of signals and is a powerful tool for the identification of machinery faults or of anomalous working conditions. However, the diagnostic capabilities of the trend analysis are limited: it is very easy to identify that a machine is not working properly, but it is almost impossible to understand why.
In this chapter, we have also presented some tools that can be used for the
statistical analysis (the probability density function and other derived quantities),
and we have seen how to use these tools for the identification of typical values
assumed by a signal. In the next chapters we will introduce diagnostic tools that
can be used to identify the mechanical parameters of systems starting from the
measured data.
2.7 References
[1] A. V. Oppenheim and R. W. Schafer, "Discrete-time signal processing," Prentice
Hall Signal Processing, pp. 1120, 2009.
[2] G. D'Antona and A. Ferrero, Digital Signal Processing for Measurement
Systems: Theory and Applications. Springer Science Business Media, 2006.
[3] S. W. Smith, "The scientist and engineer's guide to digital signal
processing," 1997.
[4] J. S. Bendat and A. G. Piersol, Engineering Applications of Correlation and Spectral Analysis. New York: Wiley-Interscience, 1980.
[5] J. S. Bendat and A. G. Piersol, Random Data: Analysis and Measurement
Procedures. John Wiley & Sons, Inc. New York, NY, USA, 1990.
[6] J. S. Bendat, Nonlinear System Analysis and Identification from Random Data.
Wiley New York etc., 1990.
[7] D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for
Engineers. John Wiley & Sons, Inc., 2003.
[8] P. P. L. Regtien, M. J. Korsten, W. Otthius and F. v. d. Heijden, Measurement
Science for Engineers. Elsevier Science & Technology Books, 2004.
[9] G. Moschioni, B. Saggin and M. Tarabini, "Long Term WBV Measurements on
Vehicles Travelling on Urban Paths," Industrial Health, vol. 48, pp. 606-614, 2010.
[10] C. Asensio, G. Moschioni, M. Ruiz, M. Tarabini and M. Recuero,
"Implementation of a thrust reverse noise detection system for
airports," Transportation Research Part D: Transport and Environment, vol. 19,
pp. 42-47, 3, 2013.
[11] Anonymous "ISO 20906. Acoustic-Unattended monitoring of aircraft sound in
the vicinity of airports," 2009.
[12] G. Moschioni, B. Saggin and M. Tarabini, "Contribution of airports to noise in surrounding environment; identification and measurement of noise sources," in Proceedings of Internoise 2007, Istanbul, 2007.
[13] G. Moschioni, B. Saggin and M. Tarabini, "The use of sound intensity for the
determination of the acoustic trajectory of aircrafts," in International Congress on
Acoustics, Madrid, 2007, pp. 2-7.
3.1 Introduction
Time domain analysis (TDA) evaluates how a signal changes over time. Several mechanical phenomena cannot be easily understood using time domain analysis, given that the signal evolution is the superposition of multiple harmonics, whose amplitude is often time-dependent. Frequency domain analysis (FDA) provides additional information allowing a better understanding of the phenomenon. FDA investigates the dependence of a signal on frequency, i.e. on the rate of signal variation versus time. Put simply, FDA decomposes the signal into the different frequencies present in it and helps identifying the periodicities. The result of the FDA is the spectrum, i.e. a plot of the amplitude of the different periodic components versus their frequencies.
3.1.1 The Fourier transform
The French mathematician Joseph Fourier showed that it is possible to decompose any periodic signal into the superposition of a potentially infinite number of sine waves with different frequencies, amplitudes and phases. The decomposition process is called Fourier transform (FT): calling f(t) the time domain function, its FT is
F(f) = ∫_{−∞}^{+∞} f(t) e^{−i2πft} dt    (41)
A_n = (2/T) √{ [ ∫_0^T f(t) cos(2π f_n t) dt ]² + [ ∫_0^T f(t) sin(2π f_n t) dt ]² }
                                                                                      (43)
φ_n = tan⁻¹ [ ∫_0^T f(t) sin(2π f_n t) dt / ∫_0^T f(t) cos(2π f_n t) dt ]
The above expression states that any periodic signal can be decomposed into the sum of cosine waves with amplitudes and phases that can be computed with the above mathematical definitions.
An example of this decomposition is shown in the next figures: the left plot in Figure 53 is the sum of different harmonic components, whose frequencies, amplitudes and phases are shown in the right plot.
The right plot in Figure 53 can be summarized with the amplitude and the frequency of each harmonic component, as shown in Figure 54.
Figure 54 Signal spectra: amplitude and phase (An and φn) as a function of the frequency fn.
Given that the Fourier transform is a linear operator, the spectrum of the original
signal is the sum of the spectra in Figure 54. The spectrum of the signal shown in
Figure 53 is shown in Figure 55.
The three figures can be summarized in a 3D plot, which shows the time and frequency domain representations of the above signals.
When signals are sampled, calling N the number of samples composing the acquisition buffer, the FT can be replaced by the expression
G(k) = (1/N) Σ_{n=0}^{N−1} f(n) e^{−i2πkn/N}    (45)
This expression is referred to as the Discrete Fourier Transform (DFT) and is the basis of all the computer-based computations of the FT. Given that the amount of information in the time domain is finite (N samples), the amount of information in the frequency domain is also limited.
Let us suppose that the signal is sampled with a sampling frequency fs; the time between the acquisition of two consecutive samples is 1/fs and the time length of the buffer is N/fs. The signal spectrum is discrete, with a frequency resolution equal to fs/N (the inverse of the buffer time length). As a consequence, the DFT plot is composed of N/2 lines, describing the signal spectrum between 0 and fs/2. The finite amount of information available in the frequency domain may prevent a correct description of the signal spectrum because of the leakage effect, as explained in the next paragraphs.
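A minimal sketch of these relationships, using numpy's FFT (an assumption of this example; any DFT implementation behaves in the same way), is reported below; the test signal is arbitrary and its frequencies are integer multiples of fs/N, so no leakage occurs.

```python
import numpy as np

fs = 1000.0                                # sampling frequency [Hz]
N = 2000                                   # samples in the acquisition buffer (2 s)
t = np.arange(N) / fs
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

G = np.fft.rfft(x) / N                     # DFT scaled by 1/N as in equation (45)
freqs = np.fft.rfftfreq(N, d=1 / fs)       # frequency axis: 0 to fs/2 in steps of fs/N

print(f"frequency resolution fs/N = {fs / N} Hz, last line at fs/2 = {freqs[-1]} Hz")
for f0 in (50, 120):
    k = int(round(f0 / (fs / N)))
    print(f"{f0} Hz component: 0-Pk amplitude ~ {2 * abs(G[k]):.2f}")
```

Note that the 0-Pk amplitude of each component is recovered by doubling the amplitude of the corresponding positive-frequency line, consistently with the double-sided spectrum discussed in the next section.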
3.1.2 Intuitive Fourier Transform approach
The Fast Fourier Transform (commonly called an FFT) is a mathematical algorithm developed by J.W. Cooley and J.W. Tukey in 1965 which allows a computer to efficiently compute the Discrete Fourier Transform. The Fourier transform produces the frequency domain signal from the time domain signal. The FFT is a class of algorithms that computes the magnitude of energy versus frequency for a given signal.
given signal. An FFT does this by assuming the time domain signal is composed of
a sum of sinusoids of various frequencies. The algorithm computes the amplitude
of each of these sinusoids and the result is plotted as magnitude versus frequency.
Each sinusoid can be seen as the sum of two vectors having the same amplitude, rotating with equal and opposite angular speed and symmetrical with respect to the real axis. The initial phase φ is the position of the vectors with respect to the reference axis at the starting time (Figure 57).
We can state that each sine wave can be seen as the sum of two counter-rotating
vectors (with amplitude A/2) in the complex plane. The phase of the Fourier
transform is the angle between the two vectors and the real axis at time t=0. The
sum of the two vectors is always a real vector, with amplitude that varies in the
interval A.
Following this approach, the result of the numerically available Fourier transform
algorithm is the so-called Double Sided Spectrum, i.e. a vector containing both the
positive and the negative frequency bins. The amplitude of each harmonic
component of the signal is twice the amplitude of the positive or negative
harmonic. Since the two vectors are complex conjugates (a condition necessary
to have a real sum), the phases of the two counter-rotating vectors are opposite.
Figure 58 Double sided spectrum
Following this approach, the result of the FFT algorithm (implemented in software
such as Matlab, LabVIEW or Excel) is a symmetrical vector, which includes
components both at positive and at negative frequencies.
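The double sided spectrum can be inspected numerically. The sketch below (assumed amplitude, frequency and initial phase) shows that the FFT of a real cosine produces two complex conjugate bins of amplitude A/2 with opposite phases.

import numpy as np

# Minimal sketch: the FFT of a real cosine of amplitude A produces two complex
# conjugate lines of amplitude A/2 at +f0 and -f0 (double sided spectrum).
fs, N, A, f0 = 64.0, 64, 2.0, 5.0          # assumed values; f0 is a multiple of fs/N
t = np.arange(N) / fs
x = A * np.cos(2 * np.pi * f0 * t + 0.3)   # 0.3 rad initial phase (assumed)

G = np.fft.fft(x) / N
k = int(f0 / (fs / N))                     # index of the +f0 bin

print(abs(G[k]), abs(G[-k]))               # both ~1.0 = A/2
print(np.angle(G[k]), np.angle(G[-k]))     # ~+0.3 and ~-0.3: opposite phases
print(np.allclose(G[-k], np.conj(G[k])))   # True: the two vectors are complex conjugates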
3.1.3 Properties of the Fourier Transform
The Fourier transform has some properties that are not useful by themselves, but
are fundamental for understanding the properties of spectral estimators. Let us
recall two very simple concepts, shown in Figure 60.
1. A general function h(t), defined in the time domain, is said to be even if h(-t) = h(t)
2. A function h(t), defined in the time domain, is said to be odd if h(-t) = -h(t)
Figure 60 Examples of even (a) and odd (b) functions in time domain
It can be demonstrated that every real valued function h(t) may be decomposed
into an even component E(t) and an odd component O(t).
h(t) = E(t) + O(t)
h(-t) = E(-t) + O(-t) = E(t) - O(t)    (46)

E(t) and O(t) can be derived by inverting the two above equations:

O(t) = \frac{h(t) - h(-t)}{2}    (47)

E(t) = \frac{h(t) + h(-t)}{2}    (48)
But what happens to the Fourier transform of even and odd functions? To answer
this question, we apply the decomposition into even and odd parts to a function
that is being Fourier transformed. Recalling the Fourier transform definition:

FT\{h(t)\} = H(f) = \int_{-\infty}^{+\infty} h(t)\, e^{-j\omega t}\, dt = \int_{-\infty}^{+\infty} h(t)\, e^{-j 2\pi f t}\, dt    (49)
We can express the exponential term using the Euler identity as:

e^{-j\omega t} = \cos(\omega t) - j\sin(\omega t)    (50)
If we write the function h(t) as the sum of an even and an odd component, the
Fourier transform of h(t) becomes:

H(j\omega) = \int_{-\infty}^{+\infty} \left[E(t) + O(t)\right]\left[\cos(\omega t) - j\sin(\omega t)\right] dt    (51)
Given that the integral of the product of an even function times an odd function
is null, i.e. that
\int_{-\infty}^{+\infty} O(t)\cos(\omega t)\, dt = 0
\int_{-\infty}^{+\infty} E(t)\sin(\omega t)\, dt = 0    (52)
i.e. the Fourier transform is the sum of an even and an odd component; if we call
the first term H_E and the second H_O:

H_E(j\omega) = \int_{-\infty}^{+\infty} E(t)\cos(\omega t)\, dt    (54)

H_O(j\omega) = -j\int_{-\infty}^{+\infty} O(t)\sin(\omega t)\, dt    (55)
we can see what happens to the negative frequencies of the double sided spectra
for the even and odd spectral components.
H_E(-j\omega) = \int_{-\infty}^{+\infty} E(t)\cos(-\omega t)\, dt = \int_{-\infty}^{+\infty} E(t)\cos(\omega t)\, dt = H_E(j\omega)    (56)

H_O(-j\omega) = -j\int_{-\infty}^{+\infty} O(t)\sin(-\omega t)\, dt = j\int_{-\infty}^{+\infty} O(t)\sin(\omega t)\, dt = -H_O(j\omega)    (57)
In practice, the Fourier Transform of a real even function is a real even function
and the inverse Fourier transform of a real even function is a real even function.
Similarly, the Fourier Transform of a real odd function is an imaginary odd function
and the inverse Fourier transform of an imaginary odd function is a real odd
function.
Figure 61 Fourier transform symmetries: real and imaginary parts of the transforms of even and odd time functions
Generic functions (i.e. neither even nor odd), being the sum of an even and an odd
part, have spectra that are the sum of real and imaginary spectra, i.e. are complex.
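These symmetry properties can be checked numerically. The following sketch builds an even and an odd test sequence (a Gaussian and its product with t, both assumed examples) and verifies that their DFTs are respectively real and imaginary.

import numpy as np

# Minimal sketch: the DFT of a real even sequence is (almost) purely real,
# the DFT of a real odd sequence is (almost) purely imaginary.
N = 256
t = np.arange(N) - N // 2                 # symmetric time axis

even = np.exp(-(t / 20.0) ** 2)           # Gaussian: even function (assumed example)
odd = t * np.exp(-(t / 20.0) ** 2)        # t * Gaussian: odd function

# ifftshift places the t=0 sample first, which is what the DFT assumes
E = np.fft.fft(np.fft.ifftshift(even))
O = np.fft.fft(np.fft.ifftshift(odd))

print(np.max(np.abs(E.imag)))             # ~0: real spectrum for the even function
print(np.max(np.abs(O.real)))             # ~0: imaginary spectrum for the odd function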
Figure 62 Leakage origin: if the observation time T is not an integer multiple of the signal period, the circular reconstruction creates a signal that is generally different from the original one (lower plot)
The amplitude spectra of the signals shown in Figure 62 are plotted in Figure 63.
The spectrum of the first signal coincides with the expected one (a single line at
the frequency of 2 Hz), while the spectrum of the second signal is composed of
several harmonics at frequencies between 1 and 20 Hz. This phenomenon, called
leakage, prevents the actual signal frequency and amplitude from being detected,
with an error magnitude that depends both on the signal and on the observation time.
Figure 63 Amplitude spectra of the two signals of Figure 62: without leakage (top) and with leakage (bottom)
The uselessness of the phase diagram and the necessity of describing the
energetic content of random components lead to the adoption of different
quantities for spectral analysis.
Figure 64 Amplitude and phase spectrum of a real signal
The power spectrum indicates the energy of each frequency component. The
amplitude of a deterministic harmonic component does not depend on the
frequency resolution, while the amplitude of a random component depends on the
acquisition duration. This can be explained by considering that, with a low
frequency resolution, each spectral bin represents several adjacent spectral
components. Conversely, a long acquisition time implies a high frequency
resolution, so that the energy of the random components is spread over a larger
number of bins and the amplitude of each bin decreases.
Figure 65 Power spectrum of the same phenomenon observed for a long time (black) and for a
short time (pink). The same consideration applies to the amplitude spectra.
The PSD is defined as the ratio between the power spectrum and the frequency
resolution:
PSD(k) = \frac{|G(k)|^2}{\Delta f} = \frac{N}{f_s}\left|\frac{1}{N}\sum_{n=0}^{N-1} f(n)\, e^{-i 2\pi k n / N}\right|^2    (59)
The measurement units are squared units over Hertz; in this case, the PSD of
random components does not depend on the frequency resolution, i.e. observing
the signal for a longer time does not, on average, lower the PSD. On the contrary,
in the presence of harmonic components not affected by leakage, the PSD
increases with the observation time, since a constant power is divided by smaller
and smaller values of Δf.
Figure 66 Power spectral density of the same phenomenon observed for different times (pink: short time, black: long time)
Since the PS and the PSD are expressed in squared units, it is quite common to
compute their square roots, so that they are expressed in units or units/√Hz respectively.
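The different behaviour of the power spectrum and of the PSD with respect to the observation time can be verified with a short numerical sketch; the sampling frequency, tone frequency and noise level below are assumed values.

import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                                   # sampling frequency [Hz] (assumed)

def ps_and_psd(duration):
    """Power spectrum |G(k)|^2 and PSD = |G(k)|^2 / df for a sine + white noise."""
    N = int(duration * fs)
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 100.0 * t) + rng.normal(0, 1, N)  # 100 Hz tone + noise
    G = np.fft.rfft(x) / N                                   # scaled DFT, eq. (45)
    ps = np.abs(G) ** 2
    df = fs / N
    return ps, ps / df, df

ps1, psd1, df1 = ps_and_psd(1.0)      # short observation
ps2, psd2, df2 = ps_and_psd(10.0)     # ten times longer observation

k1, k2 = int(100 / df1), int(100 / df2)       # bins of the 100 Hz tone
print(ps1[k1], ps2[k2])                 # tone power spectrum: ~0.25 in both cases
print(np.median(ps1), np.median(ps2))   # noise power per bin drops with longer T
print(np.median(psd1), np.median(psd2)) # noise PSD stays roughly constant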
3.3.3 Auto- and cross- spectra
We now introduce two quantities related to the Fourier transform of signals,
namely the autospectrum and the cross-spectrum, that will be used in the next
section (spectral estimation) to understand how the Fourier transform of a system
observation xk(t) is related to the Fourier transform of the process x(t).
The easiest way of defining the spectral density functions is the calculation of the
Fourier transform of a previously calculated correlation function. Let us consider
two signals whose correlation functions exist and whose integrals are finite.
The Fourier transform of correlation functions are:
S_{xx}(f) = \int_{-\infty}^{+\infty} R_{xx}(\tau)\, e^{-i 2\pi f \tau}\, d\tau

S_{yy}(f) = \int_{-\infty}^{+\infty} R_{yy}(\tau)\, e^{-i 2\pi f \tau}\, d\tau    (60)

S_{xy}(f) = \int_{-\infty}^{+\infty} R_{xy}(\tau)\, e^{-i 2\pi f \tau}\, d\tau
The quantities Sxx and Syy are called autospectral density functions, while Sxy is
called cross-spectral density function. Given that the result of the Fourier
transform is a double-sided spectrum Sxx, Syy and Sxy are double-sided spectra, i.e.
include both the positive and negative frequencies. If we consider the Fourier
transform properties for even and odd functions and the symmetry properties of
the stationary correlation function:
S_{xx}(-f) = S^{*}_{xx}(f) = S_{xx}(f)

S_{yy}(-f) = S^{*}_{yy}(f) = S_{yy}(f)    (61)

S_{xy}(-f) = S^{*}_{xy}(f) = S_{yx}(f)
Since the autospectrum is mainly used for energetic purposes, it is quite common
to use, instead of S_{xx} and S_{yy}, the quantities G_{xx} and G_{yy} defined by

G_{xx}(f) = 2 S_{xx}(f) \quad 0 < f < \infty, \; 0 \; elsewhere    (63)

G_{yy}(f) = 2 S_{yy}(f) \quad 0 < f < \infty, \; 0 \; elsewhere
Figure 67 Difference between double-sided autospectrum (Sxx) and single-sided autospectrum (Gxx)
Depending on the spectral quantity on which the averages are computed (real
quantities such as auto-spectra or complex quantities as cross-spectra) there
might be different situations.
Let us consider the following signal, which is the sum of a deterministic (sine with
frequency of 3 Hz) component and of random noise.
Figure 68 Signal to be averaged
Let us consider a single frequency, for instance the one at 3 Hz: the spectral bin
amplitude is the sum of the deterministic component (whose modulus and phase
are constant for 100 s) and of a random component (whose amplitude and phase
continuously vary). The difference between deterministic and random
components is mainly ascribable to the phase, as shown in Figure 69. The sum of
components with a random phase tends to zero, while the sum of components
with a deterministic phase is a vector with modulus equal to the sum of the moduli
and phase equal to that of the single vectors. Hence, the average of vectors with
a deterministic phase tends to the average of the moduli, while the average of
vectors with a random phase tends to zero.
Figure 69 Difference between vectors with comparable amplitudes and random phase (left) and deterministic phase (right)
In the case (a) the harmonic component is acquired without leakage and the phase
of the harmonic is the same in each of the 100 subsets. If we average the
amplitude and phase spectra, the average of the deterministic component tends
to the actual harmonic amplitude, while the average of random components tends
to zero. Conversely, if we average the power spectrum there will not be any
reduction of random noise, given that all the components behave as deterministic
components with random amplitude and zero phase (the power spectrum is a real
quantity, i.e. a complex number with a zero phase). The two different situations
are shown in Figure 70.
Figure 70 Effect of different types of averages on a noisy signal with deterministic phase: red line, average of moduli; grey line, average of complex values
In the case (b), conversely, the harmonic at 3 Hz generates leakage and the phase
in each of the 40 subsets is different. Consequently, if we compute the averages
of complex quantities, both the harmonic component and the noise behave as
random vectors and the average tends to zero. On the contrary, the average of
power spectra is similar to the case shown in Figure 70 and Figure 71 (except for
the larger leakage). The spectra averaged with the two different techniques are
shown in Figure 71.
Figure 71 Effect of different types of averages on a noisy signal with random phase: red line, average of moduli; grey line, average of complex values
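The effect of averaging complex spectra rather than power spectra can be reproduced with the following sketch, in which a 3 Hz tone plus white noise (assumed values) is split into segments whose length is an integer number of tone periods, as in case (a); using a non-integer segment length reproduces case (b).

import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                                    # sampling frequency [Hz] (assumed)

def averaged_spectra(seg_len_s, n_seg=100):
    """Average of complex spectra and of power spectra over n_seg segments."""
    N = int(seg_len_s * fs)
    complex_avg = np.zeros(N // 2 + 1, dtype=complex)
    power_avg = np.zeros(N // 2 + 1)
    for i in range(n_seg):
        t = (np.arange(N) + i * N) / fs                      # contiguous segments
        x = np.sin(2 * np.pi * 3.0 * t) + rng.normal(0, 1, N)
        G = np.fft.rfft(x) / N
        complex_avg += G / n_seg
        power_avg += np.abs(G) ** 2 / n_seg
    return complex_avg, power_avg

# case (a): 1 s segments -> 3 Hz is a multiple of df = 1 Hz, phase identical in all segments
Ca, Pa = averaged_spectra(1.0)
print(abs(Ca[3]), np.sqrt(Pa[3]))                  # both ~0.5: the tone survives both averages
print(np.median(abs(Ca)), np.median(np.sqrt(Pa)))  # noise floor: much lower for the complex average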
The vibration on one of the two supports (for instance at the point B of the above
figure) depends on:
The rotational speed of the rotor and its imbalance
The vibration generated by the ball bearings
The vibration generated by the gears (meshing vibration at each tooth
contact)
The rotational speed of the motor and its unbalance
Figure 73 Vibration generated by a rotating shaft
The interpretation of the data in the above figure is almost impossible. The signal
seems periodic, but it is not clear which its harmonic components are and what
their relative importance is. The spectrum of the signal in Figure 73 is shown in
Figure 74.
Figure 74 Spectrum of the signal shown in the previous figure.
The spectrum clearly shows that there are several harmonic components that
are larger than the average (1.5 Hz, amplitude 0.4 m/s²; 10 Hz, amplitude 0.52
m/s²; 11 Hz, amplitude 0.17 m/s²). The mechanical origin of the different
spectral components cannot be identified a priori, but the knowledge of the
rotational speed of the two shafts can provide useful information about the
vibration causes. Let us call f the frequency observed in the spectrum and n the
rotational speed of the shaft: vibration causes can be identified according to the
next table.
Table 3 Causes of vibration if the observed frequency is equal to the shaft rotation frequency (f = n)
Rotor imbalance: magnitude proportional to the imbalance; the vibration is almost radial and increases with the rotation speed. The mounting of pulleys, gears and any other rotating object has to be verified as well.
Rotor flexure: axial vibration generally large.
Rotor resonance: natural frequency of the system generally close to n, very large vibration magnitude.
Ball bearings mounted misaligned: the rotor balancing has to be repeated in its actual working conditions.
Supports misalignment: large axial vibration (about 50% in magnitude with respect to the radial one) and presence of harmonics with frequencies equal to 2n and 3n.
Non-uniform motor magnetic field: the vibration disappears when the motor is turned off.
Belt whose length is an integer multiple of the pulley diameter: the use of a stroboscopic light stops both the belt and the pulley.
Gear with a damaged tooth: a visual inspection of the gears is usually enough to identify the damaged tooth. The effect of the periodic force due to the impact of the damaged tooth can be superimposed on the imbalanced gear effect.
Force synchronous with the rotation speed: presence of harmonics with frequencies equal to 2n and 3n.
Superposition of mechanical defects with magnetic field irregularities: f is almost coincident with n; the cause is often the presence of an asynchronous motor with speed close to the synchronous one.
Table 4 Causes of vibration if the observed frequency is lower than the shaft rotation frequency
Non-correct bearing lubrication: f ≈ (0.40–0.45) n, usually with large n; the shaft is affected by the oil whirl phenomenon.
Damaged ball bearing cage: f ≈ (0.40–0.45) n, presence of harmonics with frequencies of 2n and 3n.
Rotor loosely constrained: f = n; the bearings may not be properly locked, possible rotor cracks.
Table 5 Causes of vibration if the observed frequency is twice the shaft rotation frequency (f = 2n)
Misalignment: strong axial vibration.
Loosely supported rotor: bolts not correctly fastened; large clearance between the rotating parts and the bearings.
Table 6 Causes of vibration if the observed frequency is an integer multiple of the shaft rotation frequency (f = k·n)
Bearings not correctly aligned or mounted with large interference in their support: the frequency depends on the number of rolling elements of the bearing; a stroboscopic analysis is useful to verify the defect.
Damaged gear: the frequency bin is equal to the number of damaged teeth times n; if the number of damaged teeth equals the number of teeth, the gear is characterized by excessive wear.
Presence of fans: the frequency bin is equal to the number of fan blades times n.
Misalignment and large axial clearance: usually due to loose supports.
Table 7 Causes of vibration if the observed frequency is much larger than the rotation frequency
Excessive belt preload: relevant high-frequency acoustic noise.
Multiple belts with different preloads: the different belts slide on the pulleys.
Unloaded gears: shocks between the gear teeth when the torque is close to zero.
Fluid action on rotor blades: amplitude and frequency not constant; presence of axial vibration.
The above tables can be used to identify the cause of the observed vibration: for
instance, in the spectrum of Figure 74, given that the rotation frequency is
approximately 12 Hz, the causes of the different harmonics are:
12 Hz: imbalanced rotor
18 Hz: resonance of a mechanical component
50 Hz and 100 Hz: electrical motor and electric noise.
This diagnosis is obviously qualitative: from the above table, for instance, one
cannot derive the causes of harmonics at frequencies between 35 and 45 Hz. In
these cases, more complex analyses based on the simultaneous use of multiple
transducers and on coherence analysis are needed.
3.4.1.2 Examples
We propose here the spectra measured in real working conditions, with brief
comments on the causes generating the different features.
Figure 75 Scheme of a non-balanced rotor (left) and example of the possible spectrum (right) with peaks occurring at 1X and 2X shaft frequency
Figure 76 Example of a spectrum with peaks at 1X to 5X the shaft frequency, with a dominant 2X component
Figure 77 Loosely supported rotor: the energy of the harmonics 1X, 2X, 3X leaks to adjacent frequencies; the peaks seem affected by leakage.
Other example spectra show harmonics at twice the belt frequency; the tooth mesh frequency surrounded by sidebands; a component at 0.42X together with 1X, 2X and 3X; peaks at the characteristic bearing frequencies; and a 1X RPM component accompanied by a component at twice the net frequency (100 Hz).
The vibration measured on the generator head (Figure 84) exhibits components
at 1000 RPM (1X, evidenced in red), while all the other components are minor
(green).
These examples show that spectral analysis is useful for the identification of
the dominant spectral components and for a preliminary analysis of the vibration
causes. In one case, the 1X RPM vibration component is not the most severe one,
thus evidencing that the problem is not related to rotor imbalance or shaft
misalignment.
Preliminary tests were performed by hitting the machine head with an impact
hammer (while the machine was not working); the acceleration spectra along the
X and Y axes are shown in Figure 87. Plots show that all the accelerometers are
characterized by a dominant component at 23 Hz and that the vibration amplitude
is lower for the accelerometer number 0 (close to the constraint) and is larger for
the accelerometer number 5 (located on the tool head).
Figure 87 Spectra of the vibration measured by the different accelerometers along the X axis.
Similar tests were performed along the Y direction; the vibration spectra are
shown in Figure 89 and Figure 90.
Figure 89 Spectra of the vibration measured by the different accelerometers along the Y axis.
In these tests the vibration is due to the mechanical compliance of the structure,
which behaves as a beam constrained at the upper extremity of the arm.
The bridge vibration was measured for approximately 4 hours; the vibration was
generated both by the passage of vehicles and by natural elements (water flow,
wind). The measured acceleration spectrum is shown in Figure 92.
Figure 92 Spectrum of the acceleration measured by an accelerometer (time history lasting 1200 s)
The time history can also be divided into shorter buffers. Instead of computing a
single spectrum with a very fine frequency resolution Δf, it is possible to compute
N spectra, each with a coarser frequency resolution N·Δf. The next plot shows the
N different spectra obtained in the different time frames: the situation is similar
to the plot of Figure 92, i.e. each single plot shows amplifications close to the
resonance region, but the single spectra are still noisy.
Figure 93 Spectra obtained with the Welch approach, i.e. by dividing the time history into different buffers
At least in principle it is possible to compute both the complex and the power
spectral averages: since the phase of the above spectra is random, better results
are expected from the power spectral averages.
Figure 94 Averages of the spectra plotted in Figure 93: autospectral average (top) and complex moduli average (bottom). A1, A2 and A3 are three accelerometers located at different positions.
The upper plot in Figure 94 (power spectral averages) clearly outlines the vibration
amplification at the bridge vibration modes. Conversely, the lower plot shows
that, since the phase is not deterministic, both the deterministic and the random
components are attenuated by the averaging procedure.
3.6 References
[1] J. S. Bendat, Nonlinear System Analysis and Identification from Random Data.
Wiley New York etc., 1990.
[2] J. S. Bendat and A. G. Piersol, Random Data: Analysis and Measurement
Procedures. John Wiley & Sons, Inc. New York, NY, USA, 1990.
[3] J. S. Bendat and A. G. Piersol, "Engineering applications of correlation and
spectral analysis," New York, Wiley-Interscience, 1980.315 P., vol. 1, 1980.
[4] T. Momono and B. Noda, "Sound and vibration in rolling bearings," Motion &
Control, vol. 6, pp. 29-37, 1999.
Figure 95 Mass, spring and damper mechanical system
The equations governing the mechanical system dynamics are second-order linear
differential equations. In the following, we will refer to harmonic forces given that,
thanks to the Fourier series, they can be used to describe any periodic function.
For any linear mechanical system, thanks to the additivity and homogeneity
properties, it is possible to identify the response to any periodic function by:
i. decomposing the periodic, non-harmonic function into its harmonic components using the Fourier series;
ii. identifying the response to each single harmonic component of (i);
iii. summing the responses identified at (ii) to obtain the general solution.
4.2 Forces
4.2.1 Inertial forces
The equations of motion can be derived from Newton's law by applying the
d'Alembert principle¹:

F = m\, a    (64)
F is the sum of all the external forces acting on a body, m is the body mass and a
its acceleration with respect to a fixed reference. The previous equation is a vector
equation, given that both F and a are vectors (with 3 elements in case of Cartesian
forces or 6 elements if the torques are included). In case of planar motions, the
equation has just two components and can be written as:
F_x = m\, a_x
F_y = m\, a_y    (65)

¹ Which states that it is possible to use the static equilibrium equations also in dynamics by adding the inertial forces.
In the presence of linear motion, the above equations revert to the well-known
scalar formulation of Newton's law. Given that the acceleration is the second
derivative of the displacement:

F = m\ddot{x}    (66)
F is the sum of all the external forces: given that we are describing linear
mechanical systems, F can include:
Any time dependent force F(t)
Elastic forces
Damping forces
If the static equilibrium position is coincident with the axis reference, the above
equation becomes
F_{el} = -k\, x    (68)

where the time dependence has been omitted for clarity. In this case the variable
x is the displacement with respect to the static equilibrium position. The most
common elastic force is the one generated by springs, which is proportional to the
spring elongation through the elastic constant k (N/m).
The elastic force is conservative, i.e. the energy that is necessary to shorten the
spring is returned when the spring returns to its original length.
4.2.3 Damping forces
Damping forces are the forces that dissipate energy, and are the ones preventing
the existence of perpetual motion. In the following, we will consider viscous
forces (structural damping forces will not be analysed). The conventional symbol
for the damper is:
Figure 97 Damper
The viscous force opposes the damper shortening or elongation, with a modulus
proportional to the velocity through a constant c:

F_{visc} = -c\,\dot{s} = -c\,(v_B - v_A) = -c\,(\dot{x}_B - \dot{x}_A)    (69)

Let us suppose that the displacement is harmonic, i.e. x = x_0 e^{i\omega t}; the velocity is
therefore

\dot{x} = i\omega\, x_0 e^{i\omega t}    (71)
The viscous force can therefore be written as F_{visc} = -i c\omega\, x_0 e^{i\omega t}, i.e. as a force
proportional to the displacement through the term cω. This parameter seems
identical to the spring elastic constant k, but the damping force is in quadrature
with the displacement.
The equation of motion can be used to describe both the free and the forced
motion (F(t) = 0 and F(t) ≠ 0, respectively).
4.3.1 1 DOF mechanical system with viscous damping
The 1 DOF mechanical system with a viscous damper is modelled according to the
scheme in Figure 98.
Figure 98 Scheme of the 1 DOF mechanical system: a mass m connected to the ground through a spring of stiffness k and a viscous damper of constant c
A body with mass m is connected to a fixed base through a spring with elastic
constant k and a viscous damper with constant c. The body can only translate
along the vertical direction and its displacement with respect to the static
equilibrium position is called x.
Given that there are no external forces, the dynamic equilibrium is described by
the following equation
m\ddot{x} = -kx - c\dot{x}    (74)

i.e.,

m\ddot{x} + c\dot{x} + kx = 0    (75)
This equation is obviously true if the displacement and its derivatives are null, but
this is not the only possible solution.
Any function x(t) that verifies the above equation is a possible solution of the
system. The theory of homogeneous linear differential equations states that
the possible solutions are a linear combination (with constants named A and B) of
the two functions

x_1 = e^{\lambda_1 t}    (77)

and

x_2 = e^{\lambda_2 t}    (78)

Consequently,

x(t) = A e^{\lambda_1 t} + B e^{\lambda_2 t}    (79)

where \lambda_1 and \lambda_2 are the two roots of the characteristic equation m\lambda^2 + c\lambda + k = 0.
4.3.1.1.1 c² - 4mk = 0
In this case, the characteristic equation has two coincident negative solutions:

\lambda_1 = \lambda_2 = \lambda = -\frac{c}{2m}    (81)

In this particular case of a double solution, the two independent solutions are
x_1 = e^{\lambda t} and x_2 = t\, e^{\lambda t}, so that

x(t) = A e^{\lambda t} + B t\, e^{\lambda t}    (82)

with A and B constant values that can be determined by imposing the boundary
conditions (for instance, the initial displacement and velocity). Given that \lambda is
negative, the solution is a monotonically decreasing function. The free motion of
a system whose damping satisfies c² = 4mk, consequently, is non-periodic and
transient: after a certain time, the system returns to its static equilibrium position
without oscillations. The viscous damping constant c that generates this motion is
called critical damping:

c_c = 2\sqrt{km}    (83)
4.3.1.1.2 c2-4mk>0
If the viscous damping is larger than the critical damping, the characteristic
equation has two real (and negative) solutions:

\lambda_{1,2} = \frac{-c \pm \sqrt{c^2 - 4mk}}{2m}

so that x(t) = A e^{\lambda_1 t} + B e^{\lambda_2 t}, with A and B constant values that can be identified
with the boundary conditions. Given that both functions are monotonically
decreasing, the solution is a monotonically decreasing function. Also in this case,
the free motion of a system with viscous damping c² > 4mk is transient (it tends
to zero after a given time) and aperiodic, i.e. the system recovers its equilibrium
position without oscillations.
4.3.1.1.3 c² - 4mk < 0
Whenever the system damping is limited (lower than the critical damping), the
characteristic equation does not have real solutions. By introducing the imaginary
unit i:

x(t) = A e^{\lambda_1 t} + B e^{\lambda_2 t} = A e^{\left(-\frac{c}{2m} + \frac{i}{2m}\sqrt{4mk - c^2}\right)t} + B e^{\left(-\frac{c}{2m} - \frac{i}{2m}\sqrt{4mk - c^2}\right)t}    (88)

Using the Euler notation, the final part of the above equation can be written as

A e^{\frac{i}{2m}\sqrt{4mk - c^2}\,t} + B e^{-\frac{i}{2m}\sqrt{4mk - c^2}\,t} = X_0 \sin\left(\frac{1}{2m}\sqrt{4mk - c^2}\; t + \varphi\right)    (90)

where the constants X_0 and \varphi only depend on A and B, and can be identified using
the boundary conditions. The general solution of the differential equation is:

x(t) = X_0\, e^{-\frac{c}{2m}t}\, \sin\left(\frac{1}{2m}\sqrt{4mk - c^2}\; t + \varphi\right)    (91)
Figure 99 Free response of the underdamped system: damped oscillation of the displacement as a function of time
The oscillation amplitude is X_0 e^{-\frac{c}{2m}t}, i.e. it is time-dependent through a
coefficient that depends on the system mass and damping. For infinite time the
oscillation amplitude tends to zero independently of the physical parameters of
the system.
The phase and the amplitude of the oscillations depend on the boundary
conditions, while the frequency of the oscillations only depends on the physical
characteristics of the system. The natural pulsation of the system is:

\omega_p = \frac{1}{2m}\sqrt{4mk - c^2} = \sqrt{\frac{4mk - c^2}{4m^2}} = \sqrt{\frac{k}{m} - \frac{c^2}{4m^2}} = \sqrt{\frac{k}{m}}\sqrt{1 - \frac{c^2}{4km}}    (92)

The undamped natural pulsation is the square root of the ratio between the
stiffness and the mass of the 1 DOF mechanical system and, for small damping, is
close to the resonance pulsation described in the next sections.
The viscous damping factor ζ is a non-dimensional number indicating the ratio
between the damping and the critical damping, ζ = c/c_c. Consequently, only
systems with ζ lower than 1 are characterized by harmonic (oscillatory) transient
responses.
With these parameters, the free motion can be written as follows³:

x(t) = X_0\, e^{-\zeta\omega_n t}\, \sin\left(\omega_n\sqrt{1 - \zeta^2}\; t + \varphi\right)    (93)
³ The natural pulsation ω_n is not non-dimensional (it is measured in s⁻¹).
The damped natural pulsation is:

\omega_d = \frac{1}{2m}\sqrt{4mk - c^2} = \sqrt{\frac{k}{m}}\sqrt{1 - \frac{c^2}{4km}} = \omega_n\sqrt{1 - \zeta^2}    (94)

This equation shows that the natural frequency of the damped system ω_d equals
the natural frequency of the undamped system only if c = 0, i.e. ζ = 0. If the system
is undamped, the motion becomes:

x(t) = X_0 \sin(\omega_n t + \varphi)    (95)

i.e. the undamped system vibrates around its equilibrium position with constant
amplitude for an infinite time.
Figure 100 Effect of the damping ratio ζ on the time response (displacement and velocity): ζ = 0.1 (left) and ζ = 0.25 (right)
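Equation (93) can be evaluated directly. The sketch below (assumed mass, stiffness, initial amplitude and phase) computes the free response for the two damping ratios of Figure 100 and prints the amplitude reduction over one damped period and the residual motion after 5 s.

import numpy as np

# Minimal sketch of eq. (93): free response of an underdamped 1 DOF system.
m, k = 1.0, 100.0                      # assumed mass [kg] and stiffness [N/m]
wn = np.sqrt(k / m)                    # natural pulsation [rad/s]
X0, phi = 1.0, np.pi / 2               # assumed initial amplitude and phase

t = np.linspace(0, 10, 2001)
for zeta in (0.1, 0.25):
    wd = wn * np.sqrt(1 - zeta ** 2)                        # damped pulsation, eq. (94)
    x = X0 * np.exp(-zeta * wn * t) * np.sin(wd * t + phi)  # free response, eq. (93)
    Td = 2 * np.pi / wd
    # amplitude reduction over one damped period and residual motion after 5 s
    print(zeta, round(np.exp(-zeta * wn * Td), 3), round(float(np.abs(x[t >= 5]).max()), 4))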
The only difference with respect to the scheme of Figure 98 is the presence of the
time-dependent force F(t). In the following, we will consider a harmonic force with
pulsation ω and amplitude F_0:

F(t) = F_0 \cos(\omega t)

or, in complex notation,

F(t) = F_0 e^{i\omega t}
Provided that the base motion is null, the equation describing the system motion
is:

m\ddot{x} + c\dot{x} + kx = F(t)    (96)

It is well known that the solution of such a differential equation is the sum of the
solution of the homogeneous equation and of a particular solution of the
complete equation. The homogeneous equation

m\ddot{x} + c\dot{x} + kx = 0    (97)

has been studied at length in the previous section. Results outlined that the free
response is harmonic and damped; given that it tends to zero after a certain time,
it is often referred to as the transient response. Consequently, the response of a
system to a given force is the sum of the transient response and of the steady-state
response.
Or, equivalently,

X_0 = \frac{F_0}{(k - m\omega^2) + i c\omega}    (102)

\arg(X_0) = \arg\left(\frac{F_0}{(k - m\omega^2) + i c\omega}\right) = \arg(F_0) + \arg(1) - \arg\left((k - m\omega^2) + i c\omega\right) = \arg(F_0) - \arctan\left(\frac{c\omega}{k - m\omega^2}\right)    (104)

If the phase of the force is null (i.e. F_0 is a real positive number), |F_0| = F_0 and
\arg(F_0) = 0.
The ratio between the displacement and the force is a complex function that
depends on ω and is largely used in the study of mechanical systems. This function,
commonly referred to as receptance α(ω), is:

\alpha(\omega) = \frac{X_0}{F_0} = \frac{1}{(k - m\omega^2) + i c\omega}    (106)
As explained in detail in the next section, the time response of the system can be
recovered using the inverse Fourier transform. The receptance can be written
using the non-dimensional parameters described in section 4.3. Dividing the
receptance numerator and denominator by the stiffness, the receptance
becomes:
\alpha(\omega) = \frac{X_0}{F_0} = \frac{1/k}{1 - \frac{m\omega^2}{k} + i\frac{c\omega}{k}} = \frac{1/k}{1 - \frac{\omega^2}{\omega_n^2} + i\,2\zeta\frac{\omega}{\omega_n}}    (108)

The receptance depends on the stiffness, on the viscous damping ratio and on the
ratio ω/ω_n, and can be conveniently represented using the modulus and phase
plots versus the frequency (Bode diagrams) or plotting the real versus the
imaginary part as a function of the parameter ω (Nyquist plots).
The real/imaginary representation is quite infrequent; receptance and the other
FRFs are commonly represented through the modulus and phase representation
in logarithmic scales as in the next plot.
Figure 103 Bode diagram of the receptance (natural frequency 100 Hz, ζ = 0.015)
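The receptance of equation (106) can be computed numerically. In the sketch below the natural frequency and damping ratio of Figure 103 are used, while the mass is an assumed value (it only fixes the stiffness scale).

import numpy as np

m = 1.0                                    # assumed mass [kg]
fn, zeta = 100.0, 0.015                    # natural frequency and damping of Figure 103
wn = 2 * np.pi * fn
k = m * wn ** 2
c = 2 * zeta * np.sqrt(k * m)

f = np.logspace(0, 3, 1000)                # 1 Hz .. 1 kHz
w = 2 * np.pi * f
alpha = 1.0 / ((k - m * w ** 2) + 1j * c * w)   # receptance, eq. (106)

print(20 * np.log10(abs(alpha[0])), -20 * np.log10(k))  # low-frequency asymptote -20 log10(k)
print(f[np.argmax(abs(alpha))])            # peak close to fn*sqrt(1 - 2*zeta^2) ~ 100 Hz
print(np.degrees(np.angle(alpha[-1])))     # phase tends to -180 deg at high frequency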
The low-frequency behaviour can be studied by computing the limit for ω → 0:

\arg\left(\frac{X_0}{F_0}\right) = -\arctan\left(\frac{c\omega}{k - m\omega^2}\right)    (110)

\lim_{\omega \to 0} \arg\left(\frac{X_0}{F_0}\right) = 0    (112)
Given that the Bode diagrams are in logarithmic scale, the value ω = 0 cannot be
displayed, but when ω is close to 0 both the modulus and the phase diagrams have
horizontal asymptotes:
The phase has a horizontal asymptote at 0;
The modulus has a horizontal asymptote at 1/k if the axis is linear; given that the
value is usually shown in dB, the asymptote is
20\log_{10}(1/k) = 20\log_{10}(1) - 20\log_{10}(k) = -20\log_{10}(k).
In this region, the system behaviour is mainly static, i.e. the forces generated by
the velocity and the acceleration are minor.
The high-frequency behaviour can be studied by computing the limit for ω → ∞:

\lim_{\omega \to \infty} \left|\frac{X_0}{F_0}\right| = \lim_{\omega \to \infty} \frac{1}{\sqrt{(k - m\omega^2)^2 + (c\omega)^2}} = \lim_{\omega \to \infty} \frac{1}{m\omega^2} = 0    (113)

\lim_{\omega \to \infty} \arg\left(\frac{X_0}{F_0}\right) = -\pi    (114)

\lim_{\omega \to \infty} 20\log_{10}\left|\frac{X_0}{F_0}\right| = \lim_{\omega \to \infty} 20\log_{10}\frac{1}{\sqrt{(k - m\omega^2)^2 + (c\omega)^2}} = \lim_{\omega \to \infty} 20\log_{10}\frac{1}{m\omega^2} = \lim_{\omega \to \infty}\left(-20\log_{10}(m) - 40\log_{10}(\omega)\right)    (115)

Given that \log_{10}(\omega) is the independent variable (plotted on the x axis) and that
-20\log_{10}(m) is a constant, the previous limit is an asymptote with an angular
coefficient of -40 dB per decade.
The maxima of the modulus can be studied by analysing the first derivative of the
function; the analytic expression evidences the presence of a reciprocal function
(1/x), a square root (\sqrt{x}) and the polynomial (k - m\omega^2)^2 + (c\omega)^2.
Given that the first two functions are monotonic, the maxima and the minima
depend on the polynomial function (k - m\omega^2)^2 + (c\omega)^2. Hence

\frac{d}{d\omega}\left[(k - m\omega^2)^2 + (c\omega)^2\right] = 0    (117)

The derivative is

\frac{d}{d\omega}\left[(k - m\omega^2)^2 + (c\omega)^2\right] = \frac{d}{d\omega}\left[k^2 + m^2\omega^4 - 2mk\omega^2 + c^2\omega^2\right] = 4m^2\omega^3 - 2\omega(2mk - c^2) = 2\omega\left[2m^2\omega^2 - (2mk - c^2)\right]    (119)
2 3 2 2 2
This equation has three solutions. One is for =0 (relative minimum), and the
other two are the solutions of the equation m22-(2mk-c2)=0. Only the positive
one has a physical meaning and its value is:
2mk c 2 k c2 c 2 2mk c 2 2k
r = = = 2
n = 2
n =
2m2 m 2m2 2m2 2mk 4m mk
(120)
c2 k
= 2 2
n =n2 2 2n2 =n 1 2 2
4mk m
129
becomes complex, and consequently the receptance does not have a maximum.
This occurs when 1 2 2 < 0 i.e. 1 < 2 2 , or > 1 / 2 =
0.707
100
10
FRF modulus [x/y]
0.1
0.01
1 10 100 1000
Frequency [Hz]
0
-0.785
FRF Phase [x/y]
-1.57
-2.355
-3.14
1 10 100 1000
Frequency[Hz]
Figure 104 Bode diagram of the receptance as a function of the damping ratio n=100, =0.015,
0.15 and 0.3)
Figure 105 Receptance modulus and phase of the previous plot in linear scale
Other frequency response functions are commonly used together with the receptance:

Mobility: Y(\omega) = \frac{\dot{X}_0}{F_0} = \frac{i\omega}{(k - m\omega^2) + i c\omega}

Accelerance: A(\omega) = \frac{\ddot{X}_0}{F_0} = \frac{-\omega^2}{(k - m\omega^2) + i c\omega}

Impedance: Y^{-1}(\omega) = \frac{F_0}{\dot{X}_0} = \frac{(k - m\omega^2) + i c\omega}{i\omega}

Apparent mass: A^{-1}(\omega) = \frac{F_0}{\ddot{X}_0} = \frac{(k - m\omega^2) + i c\omega}{-\omega^2}
This method gives better estimates of the FRF magnitude peaks in the presence of
closely spaced resonance peaks or highly damped systems.
where fu and fl are respectively the upper and the lower frequencies of the signal.
If we consider the FRF of a mechanical system, we can determine the values of fu
and fl for which
|H(f_l)|^2 = |H(f_u)|^2 = \frac{1}{2}\,|H(f_n)|^2    (125)
where f_n is the natural frequency of the system. In this way, the upper and the
lower frequencies are defined so that the power (amplitude squared) of the FRF
modulus is half of that of the resonance peak. If the ratio between the squared
amplitudes is 0.5, the attenuation is 3 dB⁴.
It can be shown that the system damping is:

\zeta = \frac{f_u^2 - f_l^2}{4 f_n^2}    (126)

If the damping is lower than 0.05, the above equation can be approximated as

\zeta \approx \frac{f_u - f_l}{2 f_n}    (127)

or, equivalently,

\zeta \approx \frac{B}{2 f_n}    (128)

where B = f_u - f_l is the half-power bandwidth.
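The half-power bandwidth method can be tested on a synthetic receptance with known damping; all numerical values in the sketch below are assumed.

import numpy as np

# Minimal sketch of the half-power (3 dB) method, eq. (125)-(127).
m, fn, zeta_true = 1.0, 50.0, 0.02
wn = 2 * np.pi * fn
k = m * wn ** 2
c = 2 * zeta_true * np.sqrt(k * m)

f = np.linspace(40, 60, 200001)
w = 2 * np.pi * f
H = np.abs(1.0 / ((k - m * w ** 2) + 1j * c * w))      # receptance modulus

i_pk = np.argmax(H)
half_power = H[i_pk] / np.sqrt(2)                      # |H| at the half-power points
above = np.where(H >= half_power)[0]                   # contiguous region around the peak
fl, fu = f[above[0]], f[above[-1]]                     # lower and upper half-power frequencies

zeta_est = (fu - fl) / (2 * f[i_pk])                   # eq. (127)
print(zeta_true, round(zeta_est, 4))                   # ~0.02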
A second approach is the logarithmic decrement, based on the decay of the free
response, where T is the period of the free oscillation. The amplitude of the
damped harmonic motion at two instants separated by one period (t_2 = t_1 + T) is:

x(t_1) = X_0\, e^{-\zeta\omega_n t_1}    (130)

x(t_2) = X_0\, e^{-\zeta\omega_n t_2} = X_0\, e^{-\zeta\omega_n (t_1 + T)} = X_0\, e^{-\zeta\omega_n t_1}\, e^{-\zeta\omega_n T}    (131)

The natural logarithm of the ratio between these two amplitudes is:

\ln\frac{x(t_1)}{x(t_2)} = \ln\frac{X_0\, e^{-\zeta\omega_n t_1}}{X_0\, e^{-\zeta\omega_n t_1}\, e^{-\zeta\omega_n T}} = \ln\left(e^{\zeta\omega_n T}\right) = \zeta\omega_n T = \zeta\omega_n \frac{2\pi}{\omega_n\sqrt{1-\zeta^2}} = \frac{2\pi\zeta}{\sqrt{1-\zeta^2}}    (132)

⁴ This can be easily shown keeping in mind that att(dB) = 20 log₁₀(a/a₀) and that (a/a₀)² = 0.5 corresponds to approximately -3 dB.
Considering two amplitudes separated by n periods (t_n = t_1 + nT), the same
procedure gives:

\ln\frac{x(t_1)}{x(t_n)} = \ln\frac{X_0\, e^{-\zeta\omega_n t_1}}{X_0\, e^{-\zeta\omega_n t_1}\, e^{-\zeta\omega_n nT}} = \ln\left(e^{\zeta\omega_n nT}\right) = \zeta\omega_n nT = \frac{2\pi n\,\zeta}{\sqrt{1-\zeta^2}}    (135)

Also in this case, if the damping is low, the solution can be approximated as:

\ln\frac{x(t_1)}{x(t_n)} = \frac{2\pi n\,\zeta}{\sqrt{1-\zeta^2}} \approx 2\pi n\,\zeta \quad\Rightarrow\quad \zeta \approx \frac{1}{2\pi n}\ln\frac{x(t_1)}{x(t_n)}    (136)

c \approx \frac{\sqrt{km}}{\pi n}\ln\frac{x(t_1)}{x(t_n)}
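The logarithmic decrement of equation (136) can be verified on a simulated free decay; the natural frequency, damping ratio and initial amplitude below are assumed values.

import numpy as np

# Minimal sketch of the logarithmic decrement, eq. (132) and (136).
fn, zeta_true, X0 = 4.0, 0.03, 1.0
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta_true ** 2)        # damped pulsation
Td = 2 * np.pi / wd                          # period of the free oscillation

# amplitudes of the free decay at successive positive peaks
t_peaks = (np.pi / 2) / wd + Td * np.arange(6)
peaks = X0 * np.exp(-zeta_true * wn * t_peaks)

n = 5
delta_n = np.log(peaks[0] / peaks[n])        # decrement over n periods
zeta_est = delta_n / (2 * np.pi * n)         # eq. (136)
print(zeta_true, round(zeta_est, 5))         # ~0.03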
Figure 106 Free response of a damped 1 DOF system: the damping can be estimated from the ratio between the amplitudes of successive peaks
Real signals often show a behaviour that is far from the theoretical one as, for
instance, in the case of Figure 107.
Figure 107 Damped response of a real mechanical system
In this case, the choice of different peaks may lead to different results;
consequently, the damping is better estimated through the envelope signal, which
is based on the Hilbert transform. The Hilbert transform is an operator that
transforms a signal x(t) into another signal y(t) in the time domain:

y(t) = \frac{1}{\pi}\int_{-\infty}^{+\infty} \frac{x(\tau)}{t - \tau}\, d\tau    (137)

In other words, the Hilbert transform is the convolution between the signal and
\frac{1}{\pi t}:

y(t) = x(t) * \frac{1}{\pi t}    (138)
Using the Euler notation, the complex analytic signal AS(t) = x(t) + i\,y(t) can be
expressed in terms of modulus and phase:

AS(t) = A(t)\, e^{j\varphi(t)}    (140)

where A(t) is the envelope (or amplitude) of the analytic signal and φ(t) is its phase.
The derivative of φ(t) is the instantaneous frequency.
The envelope signal can be used to identify the system damping, given that for a
single DOF mechanical system the envelope should be an exponential function. If
we apply the Hilbert approach to the signal shown in Figure 107, we obtain the
signal in Figure 108.
Figure 108 Envelope of the signal of Figure 107, obtained through the Hilbert transform
Once the envelope signal is known, it is possible to approximate A(t) with an
exponential curve.
Figure 109 Exponential regression of the envelope signal: y = 0.091 e^(-0.243x)
In the case presented above, the regression curve is y = 0.091 e^(-0.243x). It follows
that the product between the damping ratio and the natural pulsation is 0.243.
Given that the natural frequency of the system was 4.2 Hz (natural pulsation
26.4 rad/s), the damping ratio was approximately 0.9 %.
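The envelope-based identification can be reproduced with scipy's Hilbert transform; the sketch below uses assumed signal parameters chosen to mimic the example above and fits a straight line to the logarithm of the envelope.

import numpy as np
from scipy.signal import hilbert

# Minimal sketch: damping identification from the Hilbert envelope (assumed values).
fs, fn, zeta_true = 200.0, 4.2, 0.009
wn = 2 * np.pi * fn
t = np.arange(0, 6, 1 / fs)
x = 0.09 * np.exp(-zeta_true * wn * t) * np.sin(wn * np.sqrt(1 - zeta_true ** 2) * t)

envelope = np.abs(hilbert(x))                    # A(t), eq. (140)

# linear fit of ln A(t) = ln A0 - (zeta*wn)*t, discarding the edges of the record
sel = slice(int(0.5 * fs), int(5.0 * fs))
slope, intercept = np.polyfit(t[sel], np.log(envelope[sel]), 1)
zeta_est = -slope / wn
print(zeta_true, round(zeta_est, 4))             # ~0.009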
5.1 Introduction
Measurements are often performed to create mathematical models of
mechanical systems: this is the case, for instance, of the modal analysis, where the
experimenter aims to identify the response of a mechanical device to a dynamic
stimulus, e.g. a time-varying force. The main difference with the time domain and
frequency domain analyses presented in the previous section is that the
measurements are performed to characterize input/output systems, i.e. to
identify a mathematical relationship between the stimulus and the response.
Consequently, in the previous chapters we described the techniques that can be
used to identify the characteristics of a single signal; in this chapter we introduce
the analysis of single input, single output (SISO) linear systems: we will initially
describe the linear systems theory and then apply the theory to the case of
mechanical systems.
A linear system converts an input function x(t) into an output (or response)
function y(t). In case of mechanical systems, t is an independent variable
(generally the time), x and y can be the two signals measured by two transducers
(e.g. the force input on structure and the deriving acceleration). We hereby
assume that x is a continuous variable, but the results that will be derived are
equally applicable to discrete variables.
5.1.1.1 Homogeneity
A system is said to be homogeneous if

H\{a\, x(t)\} = a\, H\{x(t)\}    (142)

i.e. if scaling the input by a constant a scales the output by the same constant.
5.1.1.2 Additivity
A system is said to be additive if
H {x1 ( t ) + x2 ( t )}= H {x1 ( t )} + H {x2 ( t )} (143)
The additivity property implies that if the input is the sum of two functions, the
output is the sum of the responses to each function. This property is also referred to
as superposition of effects. If we consider an additive mechanical system (a
system whose equation is additive), its response to the sum of two inputs will be
equal to the sum of responses to each single input.
Additivity is obviously valid also for more than two terms, i.e.

H\left\{\sum_{i=1}^{N} x_i(t)\right\} = \sum_{i=1}^{N} H\{x_i(t)\}    (144)
5.1.1.4 Causality
A system is said to be causal if there is no output before there is an input. In other
words
x ( t ) 0 for t < t0
if=
then (147)
y ( t ) 0 for t < t0
=
5.1.1.5 Stability
A system is said to be stable if its response to any bounded input is bounded. In
other words
if x ( t ) < K
then (148)
y ( t ) < cK
Where both c and K are constant. In the case of mechanical systems, if there input
is a limited force, the output is also limited.
5.2.1 Convolution
The clearest definition of convolution [4, 5] is provided by Wikipedia: "the
convolution is a mathematical operation on two functions f and g, producing a
third function that is typically viewed as a modified version of one of the original
functions, giving the area of overlap between the two functions as a function of
the amount that one of the original functions is translated".
The convolution of x and h is written x*h, using an asterisk or star. It is defined as
the integral of the product of the two functions after one is reversed and shifted
y(t) = x(t) * h(t) = \int_{-\infty}^{+\infty} x(\tau)\, h(t - \tau)\, d\tau    (149)

If h and x are discrete signals, the convolution assumes the following expression:

y(i) = x(i) * h(i) = \sum_{k=-\infty}^{+\infty} x(k)\, h(i - k)    (150)
It can be easily shown [1] that the convolution sum satisfies the commutative and
associative properties, i.e.
x(i) * h(i) = h(i) * x(i)    (151)

\left[x(i) * y(i)\right] * w(i) = x(i) * \left[y(i) * w(i)\right]    (152)
The convolution plays an extremely important role for linear systems and
represents an important tool to understand some fundamental concepts about
signal sampling. Consequently, the way the convolution sum of two sequences is
computed is described into details, following the approach used in reference [1].
Let us consider the two sequences:

x(i) = 2 for 0 ≤ i ≤ 4, 0 elsewhere

and

y(i) = 1 + i for 0 ≤ i ≤ 2, 5 - i for 3 ≤ i ≤ 4, 0 elsewhere
Figure 111 Sequences x(k) and y(k), respectively equal to x(i) and y(i)
The first point is the generation of the sequence y(-k), obtained by reversing the
sequence y
Figure 112 Reversed sequence y(-k)
Figure 113 Sequences x(k) and y(-k)
The next step is the computation of the product x(k)·y(-k) for each k. In our case,
the product is different from 0 only for a limited number of values: for any k lower
than 0 the product is 0 because x = 0, while for values of k larger than 0 the product
is null because y(-k) is 0. The sum of all the values is
0*0+1*0+2*0+3*0+2*0+1*2+0*2+0*2+0*2+0*2+0*0+0*0+0*0+0*0 = 2
This means that the value of the convolution sum for i = 0 is 2, so w(0) = 2.
negative delays (i=-1, i=-2 and so on). The two curves for a delay of y equal to -1,
are shown in the figure below.
Figure 114 Sequences x(k) and y(-1-k)
In the case in Figure 114, the convolution w(-1)=4+2=6. If i=1 the plots become:
Figure 115 Sequences x(k) and y(1-k)
In this case, all the products are null, so the convolution w(1) is equal to 0.
The procedure can be repeated for any value of i, obtaining the values shown in Table 9.

Table 9 Convolution values computed between i = -10 and i = 3
i:     -10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1   2   3
w(i):    0   0   2   6  12  16  18  16  12   6   2   0   0   0
One can notice that the convolution sequence is longer than the two sequences
used to generate it; in general, the convolution of two sequences with lengths M
and N generates a finite length sequence with length equal to M+N-1 samples.
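The convolution sum of this example can be reproduced with numpy; the output sequence has the M+N-1 values listed in the table (numpy indexes the result from 0).

import numpy as np

# Minimal sketch of the convolution sum of the example above.
x = np.array([2, 2, 2, 2, 2])          # x(i) = 2 for 0 <= i <= 4
y = np.array([1, 2, 3, 2, 1])          # y(i) = 1+i for i <= 2, 5-i for 3 <= i <= 4

w = np.convolve(x, y)                  # full convolution sum
print(w)                               # [ 2  6 12 16 18 16 12  6  2]
print(len(w), len(x) + len(y) - 1)     # 9 = M + N - 1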
5.2.2 Dirac Function
Let us consider a square-shaped function f(t) defined as:

f(t) = 1/b for -b/2 < t < b/2, 0 for |t| > b/2    (154)

The area below this function can be computed by integrating f(t), and is unitary
independently of the value of b:

A = \int_{-\infty}^{+\infty} f(t)\, dt = \int_{-b/2}^{+b/2} \frac{1}{b}\, dt = \frac{1}{b}\, b = 1    (155)
If the value of b tends to zero the width of the square function decreases and the
height increases, as shown in Figure 118.
Figure 118 Square-shaped function f(t) for decreasing values of b
Also in this case, the area below the function is unitary, however small b becomes.
The Dirac function, or delta function δ(t), is the limit of f(t) for b tending to zero,
and is a function that is null everywhere except at zero (where it is infinite):

\delta(t) = \infty \; for \; t = 0, \quad 0 \; for \; t \ne 0    (156)
The Dirac function has two properties that are particularly useful in the analysis of
linear systems, namely the translation and the sampling properties.
The scaling property states that scaling the time axis by a factor a modifies the
integral by a factor 1/|a|.
By using the symmetry of the Dirac function the above equation becomes:

x(t) * \delta(t - \tau) = \int_{-\infty}^{+\infty} x(T)\, \delta\big(T - (t - \tau)\big)\, dT    (163)

Since the area below the Dirac function is unitary, the integral is:

\int_{t-\tau-b/2}^{t-\tau+b/2} x(t - \tau)\, \frac{1}{b}\, dT = x(t - \tau)    (165)
Thanks to the sampling property of the Dirac function, the integral is equal to the
exponential function e^{-j 2\pi f t} evaluated at t = 0 (where the Dirac function is
centred):

X(f) = \int_{-\infty}^{+\infty} \delta(t)\, e^{-j 2\pi f t}\, dt = e^{-j 2\pi f \cdot 0} = 1 \quad \forall f    (167)
As we will see in the following, the Fourier transform of the rectangle function is
very important. By applying the FT definition:

X(f) = \int_{-\infty}^{+\infty} w(t)\, e^{-j 2\pi f t}\, dt = \int_{-T/2}^{+T/2} 1 \cdot e^{-j 2\pi f t}\, dt = \left[\frac{e^{-j 2\pi f t}}{-j 2\pi f}\right]_{-T/2}^{+T/2} = \frac{e^{+j 2\pi f T/2} - e^{-j 2\pi f T/2}}{j 2\pi f} = T\; \frac{e^{+j\pi f T} - e^{-j\pi f T}}{2j\,\pi f T}    (169)

Remembering that \sin(a) = \frac{e^{ja} - e^{-ja}}{2j}:

X(f) = T\, \frac{\sin(\pi f T)}{\pi f T}    (171)

that is, the sync function multiplied by the time length of the window T.
Figure 119 The sync function (Fourier transform of the rectangular window)
y(t) = H\{x(t)\} = H\left\{\int_{-\infty}^{+\infty} x(\tau)\,\delta(t - \tau)\, d\tau\right\}    (173)

Thanks to the additivity property,

H\left\{\int_{-\infty}^{+\infty} x(\tau)\,\delta(t - \tau)\, d\tau\right\} = \int_{-\infty}^{+\infty} H\{x(\tau)\,\delta(t - \tau)\}\, d\tau    (174)

and, thanks to the homogeneity property,

\int_{-\infty}^{+\infty} H\{x(\tau)\,\delta(t - \tau)\}\, d\tau = \int_{-\infty}^{+\infty} x(\tau)\, H\{\delta(t - \tau)\}\, d\tau = \int_{-\infty}^{+\infty} x(\tau)\, h(t - \tau)\, d\tau    (175)

where h(t) = H\{\delta(t)\} is the impulse response of the system.
- -
The above expression is called superposition integral and is the core of linear
system theory. It states that if the response of H to an impulse is known, then
response to any input f can be computed using the preceding integral. In other
words, the response of a linear system is characterized completely by its impulse
response. The integral expresses the convolution between the input and the
impulse response. Consequently
y(t) = x(t) * h(t)    (178)
The practical meaning of the above equation is evident. Thanks to the sampling
property of the Dirac function, the input x(t) can be seen as the superposition of
weighted and delayed Dirac functions
Figure 120 Decomposition of a ramp signal as the sum of scaled and delayed Dirac functions
The output is the convolution between the input and the impulse response: each
elementary input x(τ)δ(t-τ) produces an output equal to the impulse response
weighted by x(τ) and delayed by τ; the total output y(t) is the sum of all these
weighted and delayed impulse responses.
Figure 121 Time domain system response. The green plot is the sum of all pink curves of the last graph.
Figure 122 Response of the system to a parabolic input. The response is shown in green (sum of the pink curves).
Let us define the Fourier transforms of the output and of the impulse response:

Y(f) = \int_{-\infty}^{+\infty} y(t)\, e^{-j 2\pi f t}\, dt    (180)

H(f) = \int_{-\infty}^{+\infty} h(t)\, e^{-j 2\pi f t}\, dt    (181)

Substituting the convolution integral into the definition of Y(f):

Y(f) = \int_{-\infty}^{+\infty} \left[\int_{-\infty}^{+\infty} x(\tau)\, h(t - \tau)\, d\tau\right] e^{-j 2\pi f t}\, dt    (183)

Y(f) = \int_{-\infty}^{+\infty} x(\tau) \left[\int_{-\infty}^{+\infty} h(t - \tau)\, e^{-j 2\pi f t}\, dt\right] d\tau    (184)

With the substitution u = t - \tau:

Y(f) = \int_{-\infty}^{+\infty} x(\tau) \left[\int_{-\infty}^{+\infty} h(u)\, e^{-j 2\pi f (u + \tau)}\, du\right] d\tau    (185)

The term between the square brackets can be decomposed into two components:

\int_{-\infty}^{+\infty} h(u)\, e^{-j 2\pi f (u + \tau)}\, du = e^{-j 2\pi f \tau}\int_{-\infty}^{+\infty} h(u)\, e^{-j 2\pi f u}\, du    (186)

Y(f) = \int_{-\infty}^{+\infty} x(\tau)\, e^{-j 2\pi f \tau}\, d\tau \int_{-\infty}^{+\infty} h(u)\, e^{-j 2\pi f u}\, du    (188)

The two factors of the above equation are respectively the Fourier transform of
the input and the Fourier transform of the impulse response. It follows that:

y(t) = x(t) * h(t) \;\Leftrightarrow\; Y(f) = X(f)\, H(f)    (189)

i.e. the convolution in the time domain corresponds to a multiplication in the
frequency domain. The convolution theorem has several implications and can be
used, for instance, to explain the leakage and aliasing phenomena described in the
previous chapters. Details are shown in the next chapter.
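The convolution theorem can be checked numerically: zero padding both sequences to M+N-1 samples makes the circular convolution implied by the DFT coincide with the linear one. The two short sequences below are assumed test values.

import numpy as np

# Minimal sketch of the convolution theorem: FFT of a convolution = product of FFTs.
x = np.array([1.0, 2.0, 3.0, 0.0, -1.0])
h = np.array([0.5, 0.25, 0.25])

L = len(x) + len(h) - 1
y_time = np.convolve(x, h)                       # convolution in the time domain
Y_freq = np.fft.fft(x, L) * np.fft.fft(h, L)     # product of the spectra
y_from_freq = np.fft.ifft(Y_freq).real

print(np.allclose(y_time, y_from_freq))          # True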
The second part of the last equation provides the description of H(f)⁵:

H(f) = \frac{Y(f)}{X(f)}, \quad X(f) \ne 0 \;\; \forall f    (190)

⁵ H(f) is called Frequency Response Function (FRF) or transfer function (TF) and is mathematically defined as the Fourier transform of the impulse response.
Figure 123 Continuous signal and its spectrum. For compactness, the spectrum phase has been neglected.
Sampling this signal means multiplying it by a sampling function sf(t) (also known
as comb function), that is a train of delta functions spaced by uniform time
intervals Δt. The spectrum of the comb function is a discrete and periodic function,
composed of a series of Dirac functions with scaling factor 1/T and period
fs = 1/Δt.
Figure 124 Comb (sampling) function and its spectrum
The result of sampling is a (discrete) sum of scaled Dirac impulses, which can be
represented by a discrete set of weights, i.e. the sampled signal. The convolution
theorem implies that the spectrum of the sampled signal is the convolution
between the spectrum of the signal and the spectrum of the comb function.
Figure 125 Spectrum of the sampled signal
The spectrum of the sampled signal, being the convolution of the signal and the
sampling function spectra, is also periodic. Intuitively, it can be obtained by
pasting the spectrum in Figure 123 over each peak of the spectrum of the sampling
function in Figure 124.
Depending on the numerical values of fmax and fs there are three different
situations, shown in Figure 126. The sampling is correct when the positive part of
the spectrum centred at the null frequency does not overlap with the negative
part of the spectrum centred at fs. Given that both the positive and the negative
parts of the spectrum extend up to the frequency fmax, the sampling is correct if

2 f_{max} \le f_s

or, equivalently,

f_s \ge 2 f_{max}

This equation was formerly presented as the basic condition for avoiding
aliasing. Let us define the Nyquist frequency as:

f_{Ny} = \frac{f_s}{2}
A signal is correctly sampled if fmax < fNy. Depending on the numerical values of fmax
and fNy, three situations are possible:
fmax < fNy: the signal is oversampled; the spectral lines of the sampling function
are wide apart and the spectrum replica centred at f = 0 does not overlap with the
adjacent ones.
fmax = fNy: limit condition; the signal is correctly sampled (no overlap, no useless
samples).
fmax > fNy: the spectrum replicas overlap and the signal is affected by aliasing.
Figure 126 Aliasing explained through the convolution theorem
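Aliasing can be reproduced numerically: in the sketch below (assumed values), a 30 Hz sine sampled at 50 Hz (Nyquist frequency 25 Hz) appears in the measured spectrum at fs - f0 = 20 Hz.

import numpy as np

fs, N, f0 = 50.0, 500, 30.0                # assumed sampling frequency, samples, tone
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f0 * t)

G = np.abs(np.fft.rfft(x)) / N
freqs = np.fft.rfftfreq(N, d=1 / fs)
print(freqs[np.argmax(G)])                 # 20.0 Hz: the aliased frequency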
5.3.2 Leakage
Let us consider the function x(t) and its spectrum X(f)
Figure 127 Signal x(t) and its spectrum X(f)
Figure 128 Rectangular window w(t) of length T and its spectrum W(f), whose zero crossings are spaced Δf = 1/T
The multiplication of the signal by the rectangular window implies that the signal
spectrum has to be convoluted with the window spectrum. Consequently, the
spectrum of the windowed signal will be the convolution between the spectra in
Figure 127 and Figure 128.
Figure 129 Windowed signal (left) and its spectrum (right). The red dotted line indicates the spectrum of the non-windowed signal.
Figure 130 Continuous signal observed for a finite time T
The Fourier transform has been defined on periodic signals and, consequently, the
time domain signal in Figure 130 must become periodic. Intuitively, this operation
can be performed by copying the observed portion of the signal after its end and
before its beginning. This operation is called circular convolution and is obtained
by convoluting the signal with a train of unitary Dirac functions with period T.
Figure 131 shows the periodic train of pulses and its spectrum.
Figure 131 Periodic train of pulses and its spectrum
The spectrum of the periodic signal is therefore the multiplication of the spectra
shown in Figure 130 and Figure 131.
Figure 132 Result of the circular convolution: the observed signal is made periodic and the spectrum becomes discrete.
When the original signal is sampled, it has to be multiplied in time domain by the
sampling function (Figure 124) and consequently its spectrum becomes periodic.
Figure 133 Spectrum of the DFT
In addition, the observation of the signal for a limited time period introduces the
leakage problem by convoluting the signal spectrum with the sync function.
Summarizing, the spectrum of an analog signal that has been sampled for a finite
time is different from the theoretical signal spectrum for three reasons:
the observation of the signal for a finite time requires that the signal is
made periodic using the circular convolution: consequently, the spectrum
is discrete
sampling the signal means multiplying the signal by a sampling function;
the signal spectrum is the convolution between the theoretical spectrum
and a periodic function; the spectrum is also periodic;
observing the signal for a finite time amount means convoluting the
spectrum with the sync function; consequently, the original spectrum may
differ from the theoretical one (leakage).
5.3.4 Examples
Suppose that we have to acquire a sinusoidal signal with a frequency of 5 Hz and
an amplitude of 1 V with a data acquisition board whose sampling rate is
50 samples/s, and that we have to choose the number of samples to acquire (50 or
75 samples) in order to minimize the spectral error.
First of all, we verify that the signal frequency does not exceed the Nyquist one.
Given that the sampling rate is 50 Hz, the Nyquist frequency is 25 Hz, i.e. larger
than the signal frequency (5 Hz). Consequently, we do not have any aliasing
problem.
The second step is to verify whether the signal frequency is an integer multiple of
the frequency resolution, in order to check whether the spectrum is affected by
spectral leakage. Let us initially consider the case of N = 50. The observation time
T is equal to 1 s and the frequency resolution Δf is 1/1 s = 1 Hz. The convolution
between the actual signal spectrum (a line at 5 Hz) and the sync function is shown
in Figure 134.
Figure 134 Spectrum of a sine wave (f = 5 Hz); 50 samples are acquired with a sampling frequency of 50 Hz (no leakage)
If we consider the case of N = 75, the observation time T is equal to 1.5 s and the
frequency resolution Δf is 1/1.5 s ≈ 0.667 Hz. The convolution between the
actual signal spectrum (a line at 5 Hz) and the sync function is shown in Figure 135.
Figure 135 Spectrum of a sine wave (f = 5 Hz); 75 samples are acquired with a sampling frequency of 50 Hz
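The two cases of this example can be reproduced with a few lines of code; the sketch below computes the single-sided amplitude spectrum for N = 50 and N = 75 samples.

import numpy as np

fs, f0 = 50.0, 5.0                            # sampling frequency and signal frequency

for N in (50, 75):
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * f0 * t)
    G = 2 * np.abs(np.fft.rfft(x)) / N        # single-sided amplitude spectrum
    df = fs / N
    print(N, round(df, 3), round(G.max(), 3))
# N=50: df = 1.0 Hz   -> 5 Hz falls exactly on a spectral line, peak amplitude ~1.0
# N=75: df = 0.667 Hz -> 5 Hz falls between two lines, the peak is lower and the
#                        energy leaks into the adjacent bins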
The result of the convolution process can be predicted in a very simple manner:
1. Draw the spectral axes, evidencing the spectral resolution and the Nyquist frequency.
2. Sketch the amplitude of the theoretical spectral bin at its exact frequency (let us initially consider the case in which the spectral line does not match one of the lines plotted above).
3. Sketch the sync function modulus by aligning its top value with the signal amplitude; remembering that the zero crossings of the sync function have a spacing of 1/T, the result is similar to the one in Figure 138.
4. Read the value of the sync function at each spectral line (integer multiples of the frequency resolution): these values are the amplitudes of the spectral bins.
Figure 139 Estimation of leakage: step 4
5. Erase both the original spectral line and the sync function and you will obtain the spectrum modulus.
Figure 140 Estimation of leakage: step 5
The procedure for plotting the signal phase is very similar: the only difference is that the phase of the sync function has to be added to the phase of the signal (while the amplitudes are multiplied).
6. If the spectral bin is an integer multiple of the frequency resolution, Figure 138 becomes the one shown in Figure 141.
Figure 141 Result of the convolution process if the signal matches the frequency resolution
Figure 142 Result of the convolution process in case of no leakage
5.4 References
[1] J. S. Bendat and A. G. Piersol, "Engineering applications of correlation and
spectral analysis," New York, Wiley-Interscience, 1980.315 P., vol. 1, 1980.
[2] J. S. Bendat and A. G. Piersol, Random Data: Analysis and Measurement
Procedures. John Wiley & Sons, Inc. New York, NY, USA, 1990.
[3] G. D'Antona and A. Ferrero, Digital Signal Processing for Measurement
Systems: Theory and Applications. Springer Science Business Media, 2006.
6.1 Introduction
As already outlined in the previous section, leakage occurs in the DFT as a
consequence of the convolution between the signal spectrum and the window
spectrum [1-3]. When truncating the continuous signal, it is possible that the
circular convolution (needed to create a periodic signal) creates jumps at the
junctions between the repeated signal portions. As a consequence, the
observation of a purely harmonic signal such as cos(ω₀t) causes its Fourier
transform to be different from zero also at frequencies other than ω₀. If the signal
is made of two sinusoids of different frequencies, leakage can prevent them from
being distinguished in the spectrum; similarly, if the two components have very
different amplitudes, the leakage from the larger component can cover the
smaller one.
These problems can be overcome using weighting functions (as the one shown,
for instance, in Figure 143) that somehow smooth these jumps, with a technique
commonly referred to as signal windowing [1]. The convolution theorem indicates
that, if the sampled signal is purely harmonic, the spectrum of the observed signal
depends on the spectrum of the window.
Figure 143 Example of a weighting function (window) plotted in the time domain
In the next sections we will present a few windows and their typical usage in the
analysis of signals but, independently of the window, it is important to introduce
the concept of normalization. The observation of a signal with a window that is
zero at the beginning and at the end of the observation period cancels a portion
of the signal. Consequently, the energy content of the windowed signal is
different from (in principle, lower than) that of the original signal. It is therefore a
common procedure to introduce a normalization factor in the windows (a
coefficient that causes the window to be larger than 1 in its central part) so that
the area below the window is equal to 1·T. In this way, the window should yield
the same energy content as a rectangular window observing the same signal
portion.
Given that windows different from the rectangular one are always zero at the
beginning and at the end of the observation, the use of windows may lead to
errors if the signal is non-stationary. If we consider, for instance, an impulse
occurring at the beginning of the observation, the above window would multiply
such an impulse by (approximately) 0, thus cancelling the signal itself. It is
therefore important, before windowing a signal, to check whether the latter is
stationary or not.
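The normalization concept can be sketched in Python as follows (the test values are assumptions: a 1 V, 5 Hz sine observed for 4 s at 100 samples/s, so that no leakage occurs). The sketch compares the rectangular window, the plain Hanning window and the Hanning window normalized so that its area equals T:

import numpy as np

fs, T = 100.0, 4.0
N = int(fs * T)
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 5.0 * t)          # 1 V peak sine; 5 Hz is a multiple of df = 0.25 Hz

w = np.hanning(N)
w_norm = w * N / w.sum()                 # normalization: area below the window equals 1*T

def amp_spectrum(sig):
    X = np.abs(np.fft.rfft(sig)) / len(sig)
    X[1:] *= 2                           # single-sided scaling (Nyquist bin negligible here)
    return X

print(amp_spectrum(x).max())             # rectangular window: 1.00 V
print(amp_spectrum(x * w).max())         # plain Hanning window: about 0.50 V
print(amp_spectrum(x * w_norm).max())    # normalized Hanning: 1.00 V, plus two 0.5 V adjacent bins

Without normalization the peak amplitude is halved; with the normalized window the peak is recovered, at the price of two additional spectral lines adjacent to the actual one.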
The Hanning window has been introduced by the Austrian meteorologist Julius
von Hann and has the mathematical expression:
Hw(n) = α·[1 − cos(2πn / (N − 1))]
where α is the normalization factor (1 if the window is normalized, 0.5 if we can
accept an energetic reduction of the signal).
The main differences between the Hanning window and the rectangular one are:
the width of the main lobe is 4Δf (instead of 2Δf);
the width of the side lobes is Δf;
the first side lobe is approximately 30 times lower than the main lobe;
the other lobes are (asymptotically) attenuated by 40 dB per decade.
Consequently, under the main lobe of the Hanning window there will always be at
least three spectral lines and, because of the convolution theorem, the spectrum
of a sine wave not generating leakage will be biased, as per Figure 146. Since the
side lobes of both the Hanning and the rectangular window have zero crossings
spaced at multiples of Δf, the long term leakage will not be an issue.
Figure 146 Spectrum of a sine wave with a frequency of 1 Hz sampled for 4 seconds: comparison
between the rectangular and the Hanning window behaviour. If the signal frequency is an integer
multiple of the frequency resolution, the Hanning window introduces components at the
frequencies adjacent to the actual one.
Figure 147 Spectrum of a sine wave with a frequency of 1.11 Hz sampled for 4 seconds: comparison
between the rectangular and the Hanning window behaviour. If the signal frequency is not an
integer multiple of the frequency resolution, the Hanning window reduces the leakage after a few
spectral lines and allows estimating the amplitude better than the rectangular (no window)
approach.
6.3 Examples
Since the spectral analysis is often computed offline, it is usually possible to
observe the same spectrum with different windows in order to highlight different
aspects of the signal.
Let us consider, for instance, a signal that is the sum of two harmonics, one with
frequency 17.9 Hz (amplitude 10 units) and the other with frequency of 30 Hz
(amplitude 0.1 units). The signal is sampled for 1 s and consequently the
component at 17.9 Hz generates leakage. The signal spectrum computed using the
rectangular window is shown in Figure 152.
Figure 152 Spectrum of a signal composed by two harmonics observed with the rectangular
window (20-bit A/D converter, full scale ±10 V, sampling frequency 100 Hz)
The plot shows that the long term leakage generated by the low frequency
component prevents the high frequency component from being clearly detected.
The observation of the same signal using the Hanning window (Figure 153) allows
identifying the two frequency components, although the short-term leakage of
the Hanning window (main lobe width 2Δf) creates three large spectral bins for
each frequency component.
Figure 153 Spectrum of a signal composed by two harmonics observed with the Hanning window
(20-bit A/D converter, full scale ±10 V, sampling frequency 100 Hz)
Another drawback of the Hanning window is that, when the signal frequency does
not match a spectral line, it is not possible to recover the original signal amplitude
exactly. The best choice for the identification of the harmonics amplitude is the
flat top window, which has very low long term leakage and a flat main lobe.
Figure 154 Spectrum of a signal composed by two harmonics observed with the flat top window
(20-bit A/D converter, full scale ±10 V, sampling frequency 100 Hz)
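The example above can be reproduced with the short Python sketch below (assumed parameters: sampling frequency 100 Hz, 1 s of data, amplitude-corrected windows). It prints the amplitude read near 17.9 Hz and at 30 Hz for the rectangular, Hanning and flat top windows:

import numpy as np
from scipy.signal import windows

fs, T = 100.0, 1.0
N = int(fs * T)
t = np.arange(N) / fs
x = 10.0 * np.sin(2 * np.pi * 17.9 * t) + 0.1 * np.sin(2 * np.pi * 30.0 * t)

f = np.fft.rfftfreq(N, 1 / fs)
for name, w in (("rectangular", np.ones(N)),
                ("Hanning", windows.hann(N)),
                ("flat top", windows.flattop(N))):
    acf = N / w.sum()                           # amplitude correction factor of the window
    X = np.abs(np.fft.rfft(x * w)) * acf / N
    X[1:] *= 2                                  # single-sided scaling
    k30 = np.argmin(np.abs(f - 30.0))
    print(f"{name:12s}: reading near 17.9 Hz = {X[16:20].max():5.2f}, at 30 Hz = {X[k30]:.3f}")

With the rectangular window the leakage floor around 30 Hz is of the same order as the 0.1 peak, while with the Hanning and flat top windows the small component should emerge clearly.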
Let us now consider, on the contrary, a signal that is the sum of two closely spaced
harmonics, one with frequency 10.2 Hz (amplitude 1 unit) and the other with
frequency of 11.9 Hz (amplitude 1 unit). The signal is sampled for 1 s and
consequently both the components generate leakage. The signal spectrum
computed using the rectangular window is shown in Figure 155.
Figure 155 Spectrum of a signal composed by two harmonics observed with the rectangular
window
The plot shows that the long term leakage generated by both components is
relevant, but the two spectral peaks are quite evident: it is therefore possible to
understand that the signal is the superposition of two phenomena. The
observation of the same signal using the Hanning window (Figure 156) prevents
us from clearly distinguishing between the two components, given that the
frequency difference is comparable to the main lobe width of 2Δf.
Figure 156 Spectrum of a signal composed by two harmonics observed with the Hanning window
(14-bit A/D converter, full scale ±10 V, sampling frequency 100 Hz)
The situation is even worse using the flat top window, which merges the two
spectral components into a single group of harmonics.
Figure 157 Spectrum of a signal composed by two harmonics observed with the flat top window
7.1 Introduction
As already pointed out, the most convenient way to describe a linear time-
invariant system is represented by the frequency response function (FRF) or,
equivalently, by the impulse response. If a system is deterministic the FRF is the
ratio between the spectra of the output and of the input. In the presence of
stochastic phenomena, the I/O spectra are usually estimated using spectral
averages and the equation FRF(f)=Y(f)/X(f) is therefore scarcely used.
The aim of this chapter is to describe the tools that allow estimating the
frequency response function of stochastic processes.
The Fourier transforms of Ryy(τ) and Rxy(τ) are respectively called auto-spectral
density, Syy(f), and cross-spectral density, Sxy(f).
Note that Syy(f) is a real-valued function, being the Fourier transform of a real and
even function. Conversely, Sxy(f) is a complex-valued function, being the product
of the Fourier transform of the impulse response, H(f), and the (real) autospectrum.
These results apply only to ideal situations, where both the input and the output
are not affected by measurement noise. Let us consider the direct Fourier
transform of the linear system response convolution integral:
y(t) = ∫₀^∞ h(τ)·x(t − τ) dτ   →(FT)→   Y(f) = H(f)·X(f)   (198)
From the frequency domain expression we can derive two equations: the first
is obtained by taking the complex conjugate of both sides:
Y*(f) = H*(f)·X*(f)   (199)
Since a squared modulus is the product of a complex quantity and its complex
conjugate, multiplying both sides of (198) by X*(f) gives:
X*(f)·Y(f) = H(f)·|X(f)|²   (203)
Hence we can consider the expectations of this last equation and we obtain
Sxy(f) = H(f)·Sxx(f) and its complex conjugate Sxy*(f) = Syx(f) = H*(f)·Sxx(f).
The expectation of the squared moduli equation is
Syy(f) = |H(f)|²·Sxx(f) = H(f)·H*(f)·Sxx(f)   (204)
Dividing this expression by Syx(f) = H*(f)·Sxx(f) leads to
H(f) = Syy(f) / Syx(f)   (207)
which also implies that, in the noise-free case, the squared modulus of the
cross-spectrum equals the product of the two auto-spectra, |Sxy(f)|² = Sxx(f)·Syy(f).
All the previously listed equations for the identification of the frequency response
function are, in principle, equivalent. In practice, the estimate of the frequency
response depends on the chosen formula, given that there are errors due to the
spectral estimation and to the measurement noise on both the input and output
signals.
Figure 158 Linear system whose output measurement is contaminated by random noise.
If we multiply both sides of the above equation by the complex conjugate of X(f)
we obtain
X*(f)·Y(f) = H(f)·X*(f)·X(f) + X*(f)·N(f)   (211)
The averaged cross-spectrum between the noise and the input is zero, since the
input and the noise are generally not correlated. We call H1 estimator the quantity
H1(f) = Sxy(f) / Sxx(f)   (213)
which is the best FRF estimator when the measurement noise on the input is
negligible. If no averages are performed, the H1 estimator coincides with the FRF
definition.
Figure 159 Linear system whose input measurement is contaminated by random noise.
If we multiply the above equation by the complex conjugate of Y(f) we obtain
Y(f)·Y*(f) = H(f)·X(f)·Y*(f) + N(f)·Y*(f)   (215)
The expectation of the cross-spectrum between the noise and the output is zero,
given that the noise and the output are uncorrelated. The H2 estimator of the
frequency response function is therefore:
H2(f) = Syy(f) / Syx(f)
This estimator has to be used if the measurement noise on the input is dominant.
Since the output is not affected by measurement noise, the H2 numerator is
correctly measured. Averaging the complex cross-spectrum sends the
non-deterministic components to zero, i.e. the averaged cross-spectrum between
the measured input and the output tends to Syx(f). Consequently, the H2
estimator is not affected by the input measurement noise.
7.3 Coherence
We define the coherence function as the ratio between H1 and H2:
γ²(f) = H1(f) / H2(f) = |Sxy(f)|² / [Sxx(f)·Syy(f)]   (218)
Being the ratio between the squared modulus of the cross-spectrum and the
product of two autospectra, the coherence is a real function of frequency. Given
that |H1| ≤ |H| ≤ |H2|, the coherence is limited between 0 and 1:
0 ≤ γ²(f) ≤ 1   (219)
If no averages are performed, the coherence is 1 independently of the system
linearity and of the presence of noise. The computation of averages, given their
different effects on autospectral and cross-spectral estimates, lowers the
coherence from unitary values.
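The estimators can be sketched in Python with scipy.signal; in the sketch below the system (a band-pass filter around 100 Hz) and the noise level are arbitrary assumptions used only to generate test data:

import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1024.0
n = 1 << 17
x = rng.standard_normal(n)                            # white-noise input
b, a = signal.butter(2, [90.0, 110.0], btype="bandpass", fs=fs)
y = signal.lfilter(b, a, x) + 0.05 * rng.standard_normal(n)   # output with measurement noise

nperseg = 1024
f, Sxx = signal.welch(x, fs, nperseg=nperseg)
f, Syy = signal.welch(y, fs, nperseg=nperseg)
f, Sxy = signal.csd(x, y, fs, nperseg=nperseg)        # averaged cross-spectrum

H1 = Sxy / Sxx                                        # preferable when the noise is on the output
H2 = Syy / np.conj(Sxy)                               # preferable when the noise is on the input
f, coh = signal.coherence(x, y, fs, nperseg=nperseg)  # gamma^2 = |Sxy|^2 / (Sxx * Syy)

k = np.argmin(np.abs(f - 100.0))
print(abs(H1[k]), abs(H2[k]), coh[k])                 # |H1| <= |H| <= |H2|; coherence close to 1 in band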
7.3.1 Coherent Output Power
Coherent Output Power (afterwards referred to as COP) is an application of the
coherence function that can be used in a multiple input/output system to
discriminate a source of sound or vibration and to separate the contribution of
each input on the unique output channel. The basic idea behind the COP method
is to use the coherence between x(t) and y(t) as a weighting function for the
output signal spectrum, Syy, so as to assign higher relevance, in output, to input-
coherent frequencies. COP is calculated as follows:
COP(f) = γxy²(f)·Syy(f)   (221)
The COP method weights the output proportionally to the coherence value: if, for
a certain frequency, coherence is unitary, the entire energy in output signal is
attributed to input. Conversely, if the coherence has low values, the weight of
input in output signal is proportionally lowered. COP methods work efficiently if
the following main hypotheses are satisfied:
the different input signals are uncorrelated;
measured source reference signal x(t) depends only on the source whose
contribution is to be singled-out in receiver signal (no cross-talk in the
measurement system).
The COP method is widely used in acoustics in order to single-out the contribution
of a noise source in a noisy environment. In this case, the coherence is usually
computed between a vibration signal (input) and the measured sound pressure
(or sound intensity). The COP indicates the amount of acoustic energy that
depends on the vibration of a specific element.
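A minimal sketch of the COP computation is given below; the signals are simulated (x plays the role of the source reference, y of the receiver, and the transmission path is an assumed low-pass filter):

import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, n, nperseg = 2048.0, 1 << 16, 2048
x = rng.standard_normal(n)                       # vibration of the suspected source
other = rng.standard_normal(n)                   # second, uncorrelated source
b, a = signal.butter(2, 200.0, fs=fs)            # path from the source to the receiver (assumed)
y = signal.lfilter(b, a, x) + 0.5 * other        # measured sound pressure at the receiver

f, Syy = signal.welch(y, fs, nperseg=nperseg)
f, coh = signal.coherence(x, y, fs, nperseg=nperseg)
COP = coh * Syy                                  # output power attributable to the source x
print(float(COP.sum() / Syy.sum()))              # fraction of the output power coherent with x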
The harmonic excitation is to be preferred if we need to highlight the system
non-linearities. In fact, if the system is linear, its response to a harmonic input is a
harmonic output with the same frequency. Furthermore, if the amplitude of the
input signal is changed, the output should change proportionally. The frequency
can be changed in two ways:
with discrete steps;
continuously, by using a constant or variable sweep velocity.
With the stepped sine technique, tests are performed at each frequency, thus the
test can be very time-consuming. The whole excitation energy is concentrated in
one frequency and consequently the signal to noise ratio is definitely favourable.
The stepped-sine excitation implies that a sinusoidal excitation has a steady
frequency and amplitude for a finite time (generally from 1 s to some minutes,
depending on the tests). At each frequency, steady-state conditions are reached,
after which the input and output signals are measured and the amplitude and
phase relationships between the stimulus and the response are computed by
extracting the FFT magnitude and phase at the specific frequency.
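For a single step of a stepped-sine test, the amplitude and phase relationship can be extracted from the spectral line at the excitation frequency, as in the following sketch (the signals and their values are hypothetical):

import numpy as np

fs, f0, T = 1000.0, 40.0, 2.0            # excitation frequency and acquisition time (f0*T integer)
N = int(fs * T)
t = np.arange(N) / fs
rng = np.random.default_rng(2)
x = 2.0 * np.sin(2 * np.pi * f0 * t)                                   # measured stimulus
y = 0.8 * np.sin(2 * np.pi * f0 * t - np.pi / 3) + 0.01 * rng.standard_normal(N)  # response

k = int(round(f0 * T))                   # index of the spectral line at f0
X = np.fft.rfft(x)[k]
Y = np.fft.rfft(y)[k]
H = Y / X                                # one point of the FRF, measured at f0
print(abs(H), np.degrees(np.angle(H)))   # about 0.4 and -60 degrees

Repeating the procedure at each excitation frequency builds the FRF one spectral line at a time.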
Another signal that is conventionally used is the frequency sweep, i.e. a harmonic
signal with (usually) constant amplitude and a frequency that increases or
decreases with time. The rate at which the frequency changes has to be chosen
depending on the system damping, so as to obtain a quasi-stationary response at
each frequency. Consequently, in lightly damped systems we will need to vary the
frequency slowly; in these cases, the time required for the tests can be
considerable. The problem is basically to remain at each frequency for a time that
is long enough to perform our measurements. From this perspective, we need to
store only the spectral component of our interest. This task is performed by
tracking filters, which follow the frequency given by the oscillator and discard
from the transfer function all the points except the one at the excited frequency.
The value at a given frequency is updated only when the excitation signal passes
again through the investigated frequency.
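A sweep excitation can be generated, for instance, with scipy.signal.chirp; the duration and frequency limits below are arbitrary assumptions, and the sweep rate should be reduced for lightly damped structures:

import numpy as np
from scipy.signal import chirp

fs, T = 2048.0, 60.0
t = np.arange(int(fs * T)) / fs
x_lin = chirp(t, f0=5.0, t1=T, f1=500.0, method="linear")        # constant sweep velocity
x_log = chirp(t, f0=5.0, t1=T, f1=500.0, method="logarithmic")   # slower at low frequencies
print(x_lin.shape, x_log.shape)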
Figure 162 Example of white noise excitation: time history (upper plot) and spectrum (lower plot)
(Spectrum of the white noise excitation computed from a single record, upper plot, and as the
average of 128 spectra, lower plot.)
The main advantage of the ideal impulse is that it allows a quick evaluation of the
frequency response function over a wide frequency range. Furthermore, since the
input energy is equally partitioned (at least in theory) among all frequencies, one
can estimate the FRF from the system response only. However, the energy
content at each frequency is low and, in order to reduce the random noise effects,
the test result should be the average of several impulses. Another drawback is
that the force signal shows a relevant crest factor that, in the presence of system
non-linearities, may prevent the use of this technique. When analysing large
structures with a noticeable damping, the energy given by a conventional hammer
can be inadequate: in these situations the energy content would be too low to
sufficiently excite all the components in the frequency range of interest. The
impossibility of controlling the stimulus signal and its bandwidth is a relevant limit
of this technique.
When analysing the sampling and windowing problem, one must notice that
sampling has to be performed with a pre-trigger, so as to include all the
meaningful parts of the signal in the analysis. Furthermore, the zeroes before the
beginning of the impulse and after the end of the transient avoid the leakage due
to discontinuities between the beginning and the end of the record.
7.4.4 Stimuli comparison
The comparison between advantages and drawbacks of the described stimuli is
presented in the next table
Table 10 Comparison between the different stimuli characteristics (columns: leakage; nonlinear
behaviour averaging; crest factor; signal to noise ratio; control of the stimulus bandwidth; test
duration).
From a mechanical point of view, there are two main problems to be faced, i.e.
the way of constraining the structure and the way of providing the mechanical
stimulus. In the following we analyse these two problems, presenting practical
cases.
7.5.1 Excitation sources
The first step in the measurement process involves selecting an excitation system
(i.e. the actuator to be used for the creation of vibration) and the excitation
time history (e.g. white noise, sweep sine).
In general, the available excitation systems dictate the choice of the time history.
For instance, the choice of an impact hammer dictates the use of impulsive forces,
while the availability of controllable sources allows the experimenter to
choose between white noise, sweep or bump tests.
7.5.1.1 Shaker
There are four main categories of shakers:
Piezoelectric
Electromagnetic
Electrohydraulic
Hydraulic
The most common ones are probably the electromagnetic shakers, in which the
force is generated by an alternating current that drives a magnetic coil. The
maximum frequency limit varies from approximately 2 kHz to 20 kHz depending
on the size, the smaller shakers having the higher operating range. The maximum
force rating is also a function of the shaker size and varies between 100 N and 10
kN; the smaller the shaker, the lower the force rating. Hydraulic shakers usually
provide higher force levels, although their operative frequency range is lower
(maximum 1 kHz). Conversely, piezoelectric shakers provide very low
displacements (i.e. low acceleration levels at small frequencies) and their
operative range is usually limited to frequencies above 500 Hz.
The vibration modes of interest must be excited and consequently all the node
points and the node lines have to be avoided. For this purpose, pre-testing
with hammers may help identify the best excitation points.
Figure 167 Impact hammer with the different tips and the additional mass
The hammer size has to be chosen in order to provide a sufficient energy for the
modal excitation. Large structures often exhibit low natural frequencies and
therefore require big hammers with soft tips. Conversely, small structures are
usually characterized by high natural frequencies and their modal analysis is
performed using small hammers and hard tips.
Examples of the spectra obtained with the same impact hammer but with
different tips are shown in Figure 169 and Figure 170. Figure 169 shows the
spectra obtained with a plastic tip hammering a hard surface (black) and a soft
surface (red). The plots show that when the contact surface is soft the signal
bandwidth is lower, but the energetic content at low frequency is larger.
Figure 169 Impulsive force spectra obtained with a plastic tip used on an aluminium component
and on a rubber component.
Figure 170 shows the spectra obtained with a rubber tip hammering a hard surface
(left plot) and a soft surface (right plot). In this case, given that the signal
bandwidth is governed by the softer component (in this case the tip), the signal
bandwidth is not affected by the specimen.
7.5.1.3 Vibrodynes
Vibrodynes generate harmonic forces by means of the inertia of two imbalanced
counter-rotating rotors, as shown in Figure 171. To understand the working
principle, let us consider two masses rotating in opposite directions at a distance
r from the rotation centre. The inertial force that each mass generates is a vector
with modulus m·ω²·r (ω being the angular velocity). The two forces produce a
sinusoidal resultant force with constant direction, pulsation ω and modulus
2·m·ω²·r.
One of the main uses of experimental vibration analysis is the matching of the
dynamic characteristics with model or finite element predictions, which are often
based on the hypothesis of ideal constraints, i.e. with null or infinite impedance.
In practice, however, these conditions are never fulfilled. An ideal clamped joint,
for instance, should prevent any displacement independently of the force at the
interface; in free conditions the structure should float in space without any
connection to the ground. Both these conditions are not physically realizable and
structures are constrained in conditions approximating the locked or free ones. For
instance, the free system approximation is achieved by supporting elements with
soft elastic belts or placing the structure over soft cushions. With such a constraint
the rigid body frequencies are close to zero and sufficiently different from those
of the flexible modes.
In our experience it is always difficult to model bodies suspended with elastic belts
and excited by a shaker, which constrains at least one point of the structure in the
direction of excitation. The best results in terms of agreement between the finite
element predictions and the experimental results are obtained:
By mounting the object on the shaker head and comparing the results
with the constraint displacement model.
By suspending the object over soft belts and providing the excitation with
the impact hammer.
There is not a unique method for supporting a structure for FRF experimental
estimation, since each situation has its own dynamic and mounting
characteristics. For instance, if we need to identify the vibration modes of a mill
basement it would not be possible to support it with soft elastic belts and use an
impact hammer for vibration modes excitation, while it is easier to lay the
structure over a bed of springs. On the other hand, it might be very difficult to use
the bed of springs to suspend a lightweight mechanical component. The next
figures show practical examples and the ways in which the constraint problems
have been solved.
Figure 173 Vibration testing of an infrared spectrometer along three axes. The instrument is
mounted directly on the shaker head to simulate the locked condition with constraint
displacement. The vibration is imposed along the vertical axis and a prism is used to rotate the
instrument.
Figure 174. Vibration testing of a cooler for space applications. Left image shows the suspended
tests: vibration is measured by accelerometer 1. The cooler is supported by three elastic belts (2).
Right image shows the rigid mounting, with load cells for inertial force measurements
Figure 175 Identification of the vibration modes of a valve. The valve was suspended over two
belts, clearly visible in the front and the rear of the figure.
Figure 176 Identification of the free vibration modes of a drill tip. The left figure shows the
suspended drill. The point on the right of the first figure is the spot of a laser Doppler vibrometer
used for non-contact measurements.
The typical experimental setup for the identification of the response to vibration
of a structure is shown in Figure 177: the shaker is usually connected to the
structure by stingers, i.e. mechanical elements with very large stiffness in the
direction of excitation and minor stiffness in the transverse directions. The aim of
these objects is to reduce as much as possible the effect of the shaker on the
specimen, to ease the alignment between the shaker and the structure, and to
avoid side loads on the force transducer and on the shaker. The easiest way to
create a stinger is to use a long threaded metal rod connected to both the shaker
and the structure.
The shaft is simply supported at the two extremities by rolling bearings and the
vibration is measured by two accelerometers. Denoting by L the distance between
the supports, the accelerometers are located at L/2 and L/3 from one support. The
mechanical stimulus is provided by a large impact hammer, shown in Figure 179;
the tip was chosen to obtain a bandwidth of 200 Hz, given that the analysis was
limited to the first two vibration modes.
Figure 179 Accelerometers (a and b) and impact hammer (c) used for the modal analysis of the
shaft
The modulus, phase and coherence of the frequency response function between
the hammer (input) and the two accelerometers (output) are shown in Figure 180.
Figure 180 Modulus, phase and coherence between the force generated by the impact hammer
(input) and the vibration measured by the two accelerometers (output)
The plot of the modulus clearly evidences the two amplifications due to the first
two resonances at 43 and 152 Hz; a minor vibration mode at 163 Hz also exists,
but will not be analyzed here. The coherence is close to unity for frequencies
larger than 25 Hz and the phases of the two signals are comparable from 0 to 70
Hz. The coherence evidences that the behavior of the system below 25 Hz is not
ideal (in this case it is affected by the supporting frame and by the unilateral
constraint).
Data show that at the first vibration mode (natural frequency of 43 Hz) the two
accelerometers move in phase, while at the second vibration mode the two
accelerometers move in counter-phase. Additionally, at the first vibration mode
the vibration amplitude at the centre (accelerometer 1) is larger than that
measured at L/3 (accelerometer 2). On the contrary, the second vibration mode is
characterized by a vibration amplitude larger at L/3 than at L/2. The first two
vibration modes are shown in Figure 181.
The modal analysis can be performed with techniques much more complex than
the ones described here; nevertheless, hammering tests are usually the starting
point for the study of mechanical system dynamics.
7.6.2 Tuned mass dampers
This case-study describes the tests performed to identify the natural frequencies
of a tuned mass damper (TMD) with leaf-springs. The TMD has to be mounted on
pipes with diameters ranging between 100 and 220 mm. The experimental setup
is shown in Figure 182: the tuned mass damper was mounted on the head of an
electro-mechanic shaker. The resonances were identified from the peaks of the
vibration transmissibility, measured between the shaker head (stimulus) and the
seismic masses (responses).
Figure 182 Experimental setup for the identification of the natural frequencies of the TMD
In each test (duration 300 s) the vibration transmissibility was measured with two
or three vibration levels. The transfer function was computed using the H1
estimator, with 6 averages, in order to obtain a frequency resolution of 0.02
Hz. The actual peak location was obtained by interpolating the 5 points adjacent
to the peak with a quadratic function and identifying the maximum of that
function. Two different TMDs (shown in Figure 183) were mounted on the shaker.
The first natural frequency of TMD1 was close to 3.25 Hz (Figure 184). The
variation of the first natural frequency deriving from the different excitation levels
is lower than 0.15 Hz. The first natural frequency of TMD2 ranged between 2.40
and 2.54 Hz depending on the excitation level.
Figure 184 Transmissibility of TMD 1 (upper plot) and TMD 2 (lower plot)
The next step was the identification of the system damping and natural frequency.
We show here an example in a particular case extracted from the lower plot of
Figure 184: the natural frequency can be estimated by analysing both the FRF peak
and the phase, but in the presence of limited frequency resolution or measurement
noise the identification may not be straightforward. Figure 185 shows the FRF
modulus and phase used for the identification of the half-power bandwidth and of
the 90° phase shift.
Figure 185 Identification of the system natural frequency and damping using the peak of the FRF
and the phase diagram.
In order to identify the FRF modulus peak and the FRF phase shift in the presence
of measurement noise it is possible to interpolate the local FRF behavior. In our
case, the 5 points adjacent to the peak were fitted with quadratic and linear
equations, as in Figure 186.
(Fitting curves: modulus y = −186.15x² + 905.13x − 1091.5; phase y = −477.21x + 1086.2)
Figure 186 Identification of the FRF maximum by interpolation of the modulus (upper plot) and
phase (lower plot)
The peak of the parabolic equation was identified from the fitting curve and is
equal to 2.43 Hz; similarly, the phase equals −90° at 2.46 Hz. The natural frequency
can be assumed to be equal to the average between the two values, i.e. 2.44 Hz.
The damping was computed using the half power bandwidth method described
in the previous sections:
η = (fu² − fl²) / (2·fn²)
The upper, lower and natural frequencies were identified from the plot of
Figure 185:
fl = 2.30 Hz
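The interpolation described above can be reproduced with numpy.polyfit; the five points in the sketch below are hypothetical readings, not the measured data of Figure 186:

import numpy as np

f = np.array([2.35, 2.40, 2.45, 2.50, 2.55])          # frequencies of the 5 points near the peak [Hz]
A = np.array([6.1, 7.8, 8.6, 8.2, 6.9])               # transmissibility moduli (hypothetical)
ph = np.array([-60.0, -72.0, -88.0, -105.0, -118.0])  # phases (hypothetical) [deg]

a2, a1, a0 = np.polyfit(f, A, 2)                      # quadratic fit of the modulus
f_peak = -a1 / (2 * a2)                               # abscissa of the parabola vertex

m, q = np.polyfit(f, ph, 1)                           # linear fit of the phase
f_90 = (-90.0 - q) / m                                # frequency of the -90 deg crossing

print(f_peak, f_90, 0.5 * (f_peak + f_90))            # natural frequency as the average of the two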
In this way, the lateral forces transmitted between the two shakers should be
limited. The right part of Figure 187 clearly shows the force platform structure:
the shaker imposes the vibration to the lower plate. An intermediate plate is
connected to the lower plate by two blades of harmonic steel, which are rigid
along the vertical direction and compliant along the horizontal axis. The horizontal
shaker stinger is connected to the intermediate plate. The 6-DOF force
measurement system connects the lower plate to the upper plate, where the test
specimen is fixed.
Spectra of the acceleration along the vertical and horizontal axes are shown in
Figure 188.
Figure 188 Spectrum of the acceleration along the horizontal (H) and vertical axes (V1 and V2)
Figure 189 shows the frequency response function measured between the vertical
stimulus and the vertical force. Plots show that the coherence is close to one
between 2 and 20 Hz, i.e. where the acceleration spectrum was different from
zero. The coherence is lower than one below 2 Hz and above 30 Hz, where the
signal to noise ratio is low because of the absence of the stimulus.
Figure 189 Transfer function and coherence between the vertical acceleration and the vertical
forces
8.1 Introduction
As preliminarily evidenced in chapter 4, Bode and Nyquist plots are the most
common ways for the visualization of the frequency response functions of linear,
time invariant systems. In experimental mechanics, it is quite common to use the
Bode diagrams for the visualization of different frequency dependent quantities.
In the most general case, a Bode plot is the combination of magnitude (expressed
in dB) and phase plots versus the frequency logarithm. Conversely, Nyquist plots
are parametric plots showing the real and the imaginary part of the FRF (when the
frequency varies). The difference between linear and logarithmic scale plots is
shown in Figure 190.
Figure 190 FRF of a 1 DOF system plotted with linear and logarithmic scales.
The level of a quantity x is commonly expressed in decibel as L = 20·log10(x/x0),
where x0 is a reference value. The factor 20 (rather than 10) is used because in
mechanics and acoustics the power is proportional to the squared amplitude (for
instance, kinetic energy and squared velocity) and it is desirable for the two dB
formulations to give the same result.
The reference values for mechanical quantities are:
acceleration: 10⁻⁶ m/s²
velocity: 10⁻⁹ m/s
force: 10⁻⁶ N
sound pressure level: 2 × 10⁻⁵ Pa
Since the FRF modulus expresses the ratio between two quantities it can be
expressed in dB by considering the quantity 20 log(|FRF|). Bode plots are often
used in mechanics for the visualization of the vibration response of mechanical
systems and of transducers.
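As an illustration (with assumed parameters, not referring to a specific transducer), the receptance of a 1 DOF system can be computed and expressed in dB as follows:

import numpy as np

m, fn, zeta = 1.0, 100.0, 0.05                 # mass [kg], natural frequency [Hz], damping ratio
wn = 2 * np.pi * fn
f = np.logspace(0, 3, 500)                     # 1 Hz to 1 kHz on a logarithmic axis
w = 2 * np.pi * f
H = 1.0 / (m * (wn**2 - w**2 + 2j * zeta * wn * w))   # receptance x/F

mag_dB = 20 * np.log10(np.abs(H))              # modulus in dB (a ratio: no reference value needed)
phase = np.degrees(np.angle(H))
k = np.argmin(np.abs(f - fn))
print(mag_dB[k], phase[k])                     # amplified modulus and a phase close to -90 degrees

Plotting mag_dB and phase versus the logarithmic frequency axis gives the Bode diagram; plotting the real part of H against its imaginary part gives the corresponding Nyquist plot.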
Figure 191 Bode diagrams showing the frequency response function of a transducer.
In this field, it is quite common to plot the amplitude and the phase of the
component synchronous with the rotation frequency (referred to as 1X) in order
Figure 194 Bode plot of the 1X component of a large gas turbine during the start-up. Upper plot:
modulus, lower plot: phase
Figure 195 Nyquist plot of the 1X component of the above described system during the start-up
(from 180 to 2000 RPM). The initial and final rotation speeds are indicated by arrows.
Plots show that there is a general vibration increase when the rotation frequency
increases; the absence of resonances (that would result in modulus amplifications
by factors of 10 to 50) is evident.
8.4.2 Industrial Fan Vibration
A fan rotates at a speed variable between 1000 and 2500 RPM, with the vibration
strongly increasing at 1700 RPM. The Bode diagram is shown in Figure 196.
Figure 196 Bode plot of the 1X component of the fan vibration: modulus (upper plot) and phase
(lower plot)
The plot clearly shows the typical resonance pattern, with an increasing vibration
amplitude and a phase shift of 180°. An impact test was also performed, in order
to understand which elements were resonating. The time history of the impact
test is shown in Figure 197.
Figure 197 Time history of the acceleration measured during the impact test
The spectrum of the above shown signal is shown in Figure 198: the spectral peak
at 1700 RPM is smaller than that at 2700 RPM but clearly visible.
Figure 198 Spectrum of the acceleration measured during the impact test
9.1 Introduction
The necessity of time-frequency analysis (TFA) arises from an inherent limitation
of the Fourier analysis, which assumes that signals are infinite in time or periodic.
This is in contrast with the majority of mechanical phenomena, whose duration is
finite and whose characteristics may significantly vary over time. If a mechanical
phenomenon is stationary, its spectral properties are not time-dependent and,
consequently, the Fourier analysis performed on data acquired at different times
is meaningful. If a phenomenon is non-stationary, its spectral properties vary with
time and any spectral description must be time-dependent. This is the case, for
instance, of the vibration measured on the bearings of a rotating machine:
during the start-up or coast-down the rotation speed varies, and the Fourier
analysis on consecutive time buffers leads to different results. The spectral peaks'
amplitude and frequency are therefore time dependent and the previously
described spectral analyses are not sufficient to completely describe the system.
A convenient way to represent the time-varying spectral characteristics is the
time-frequency analysis (TFA), with which the evolution of the spectral components
is plotted versus time. TFA results are commonly visualized as two-dimensional
maps obtained from time-frequency transforms. The most common form of
time-frequency analysis is the short-time Fourier transform (STFT), but other
techniques such as wavelets, Gabor spectrograms and Wigner distributions are
available.
There are two ways of graphically representing the results of the STFT, i.e. the
colormap and the waterfall plots. Let us consider the case of the vibration signal
shown in Figure 199.
The colormap is a 2D plot in which the spectral amplitudes are coded with
different colours and plotted versus time (x axis) and frequency (y axis). The
waterfall is a 3D plot in which the spectral amplitude (z axis) is plotted versus time
and frequency. The colormap and waterfall plots are shown in Figure 200.
Figure 200 Colormap (upper plot) and Waterfall (lower plot) representation of the STFT
The spectral peaks are evidenced by the red lines in the colormap (the colour
obviously may change if a different colour code is used) and by the large z
amplitudes in the waterfall graph. In the plots of Figure 200, for instance, the
colormap shows that the dominant spectral components have frequencies between
50 and 100 Hz (although between 40 and 60 s frequencies between 200 and 300 Hz
are important). The waterfall plot shows that the dominant harmonic components
vary between 200 and 400 RPM. With reference to the colormap, the presence
of horizontal red lines indicates the presence of harmonic signals with a fixed
frequency, while vertical patterns indicate impulsive events (i.e. events
characterized by an almost flat spectrum).
The number of points of each DFT affects both the time selectivity and the
frequency resolution. Since points are acquired with a given sampling rate, the
number of points determines the time length of the buffer (T) and the frequency
resolution (1/T). In particular, choosing a small number of samples worsens the
frequency resolution but improves the time localization, as shown in the STFT of
Figure 201.
Figure 201 Effect of the number of samples on the STFT time and frequency resolution. Small
number of samples (a), large number of samples (b) and large number of samples with overlap (c).
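The STFT can be computed, for instance, with scipy.signal.stft; the test signal below (a sweep plus an impulse) and the buffer lengths are assumptions chosen only to show the role of the number of samples per segment:

import numpy as np
from scipy import signal

fs = 5000.0
t = np.arange(int(10 * fs)) / fs
x = signal.chirp(t, f0=20.0, t1=10.0, f1=200.0, method="linear")   # run-up-like component
x[int(6 * fs)] += 50.0                                             # impulsive event at t = 6 s

for nperseg in (256, 4096):                     # short buffers: good time, poor frequency resolution
    f, tt, Zxx = signal.stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    print(nperseg, f[1] - f[0], tt[1] - tt[0])  # frequency resolution [Hz] and time step [s]
# the colormap of Figure 200 corresponds to plotting abs(Zxx) versus tt and f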
9.3 Wavelets
The fixed link between time and frequency resolution of the STFT analysis can be
overcome by decomposing the signal as a sum of short-lasting functions (with
finite duration, contrary to the FT that decomposes a signal as a sum of cosine
waves).
There are three kinds of wavelet transforms, i.e. the continuous wavelet transform
(CWT), the discrete wavelet transform (DWT) and the wavelet packet transform
(WPT).
J. Morlet along with A. Grossmann formulated the CWT and defined it as the sum
over all time of the signal multiplied by scaled and shifted versions of the mother
wavelet function. In this way, each time-dependent signal can be decomposed
using basic functions that are located at a specific time instant and properly scaled.
The basic functions for the decomposition are called mother wavelets, and are
wave-like oscillations with amplitude that begins and ends at zero. The most
common wavelets are the Mexican Hat, the Morlet, the Haar and the Daubechies
wavelets, shown in Figure 202.
The translation and dilatation process scales and delays the mother wavelet ψs,t(x)
according to the following expression:
ψs,t(x) = (1/√s)·ψ((x − t)/s)   (225)
where s and t are respectively the scaling factor and the translation parameter. In
this way, any arbitrary function f(x) is decomposed as a sum of scaled and delayed
wavelets: the continuous wavelet transform is given by
Wf(s, t) = (1/√s)·∫ f(x)·ψ((x − t)/s) dx   (226)
Digital signals are analysed using the DWT, theorized in 1976 by Croisier et al. [15]
and refined in 1989 by Vetterli and Le Gall [16].
Coifman, Meyer and Wickerhauser [17] developed the WPT, whose bases are
formed by taking linear combinations of the usual wavelet functions. The WPT
continuously decomposes both the approximate and detail components of the
signal so as to respectively utilize low and high-frequency components of the
signal at various scales. This feature allows the extraction of signal features that
combine both non-stationary and stationary characteristics.
The WPT is the latest technique in the family of wavelet transforms and is found
to be one of the best tools for the analysis of vibration signals and for fault
detection. It is a generalization of the DWT and, as such, gives the user a much
richer characterization of the signal being analysed. The WPT is faster than the
CWT as it uses orthogonal and bi-orthogonal bases with a better resolution in the
high-frequency region. The detection method is similar to that of spectral analysis,
where each fault has a characteristic frequency to be detected: faults can be
monitored by observing the packets whose central frequencies are equal or close
to the fault frequency of interest. The power of the WPT lies in its frequency
resolution for signals with a small number of samples, as compared to the FFT
approach.
9.3.1 Wavelets applications
The literature survey revealed that wavelets have several applications, such as
data compression and signal analysis, and that applying the WT as a filtering
operation can be useful in vibration signal analysis, especially for non-stationary
signals. In many applications, wavelet transforms can replace the Fourier
transform. In the analysis of mechanical systems, for instance, plotting the wavelet
coefficients versus time and scale provides the same information as the STFT, with
the main advantage of the capability of identifying the high frequency content of
impulsive signals and, in particular, the time location of these events, as shown in
Figure 203.
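A continuous wavelet transform can be computed, for instance, with the PyWavelets package (assumed to be available); the signal below is an arbitrary test case with a harmonic component and an impulsive event:

import numpy as np
import pywt                                    # PyWavelets

fs = 1000.0
t = np.arange(int(2 * fs)) / fs
x = np.sin(2 * np.pi * 50.0 * t)
x[int(1.2 * fs)] += 5.0                        # impulsive event at t = 1.2 s

scales = np.arange(1, 128)
coef, freqs = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)
print(coef.shape, freqs.min(), freqs.max())
# plotting abs(coef) versus time and scale gives a scalogram: the 50 Hz component appears as a
# continuous horizontal ridge, while the impulse appears as a narrow ridge well localized in time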
9.4 Examples
The STFT and the time-frequency analyses are useful in the analysis of transient
signals, and can be used to identify the evolution of the spectral components over
time. Let us consider three modal tests performed on an electrodynamic
shaker using frequency sweep, shock and white noise excitation. The resulting
STFT colour maps are shown in Figure 204.
The first plot shows the STFT of a sweep sine signal: the graph shows that there is
a single dominant component (1X) with variable amplitude (the colour of the
dominant component changes from red to yellow). The 2X component is generally
small, except at 80 and 90 seconds, where its amplitude is comparable with that
of the 1X. In addition to the 1X and 2X components there is also a tonal component
at approximately 300 Hz with constant amplitude. Finally, after 90 seconds there
is a general increase of the vibration magnitude, given that the plot colour changes
from green to yellow. The second plot shows the STFT of an impulsive event. One
can notice that almost all the frequency components between 0 and 2500 Hz are
relevant between 0.9 and 1 s. Some of them are more pronounced, as evidenced,
for instance, by the red horizontal areas at 300 Hz. Other frequencies (100, 2100
Hz) are absent, given that there is a green horizontal area in the STFT plot. The last
plot shows the STFT of a stationary signal. In this case, the STFT is of little use,
given that the plot provides the same indications as a spectrum plot. The colormap
shows that the dominant frequency components between 20 and 400 Hz are
constant in time between 0 and 150 s. Minor frequency components are present
between 700 and 800 Hz and at 1350 Hz. All the components are negligible for
times larger than 150 s (when the test ended).
Figure 204: Short-time Fourier transform of acceleration signal measured during a sweep test
(top), a shock test (middle) and a white noise test (bottom).
Figure 205 Response of a mechanical system to three sine sweeps at increasing, decreasing and
then increasing frequencies.
9.5 References
[1] A. Croisier, D. Esteban and C. Galand, "Perfect channel splitting by use of
interpolation/decimation/tree decomposition techniques," in International
Conference on Information Sciences and Systems, 1976, pp. 443-446.
[2] M. Vetterli and D. Le Gall, "Perfect reconstruction FIR filter banks: Some
properties and factorizations," Acoustics, Speech and Signal Processing, IEEE
Transactions On, vol. 37, pp. 1057-1071, 1989.
[3] Z. Peng and F. Chu, "Application of the wavelet transform in machine condition
monitoring and fault diagnostics: a review with bibliography," Mechanical
Systems and Signal Processing, vol. 18, pp. 199-221, 2004.
10.1 Introduction
Very often, the diagnosis of rotating machines starts from the observation of
the shaft vibration in a plane perpendicular to its axis. Tools such as the orbit
plots, the shaft centreline plots and the full spectrum plots are the basis for the
monitoring of machines with fluid film bearings. As for vibration spectra, some
mechanical failures result in characteristic plot shapes and it is therefore possible
to compare, for instance, the measured orbit plots with those associated with
characteristic defects to perform advanced diagnoses. A very good summary of
this chapter can be found online on the internet site of the Mobius Institute
(http://www.mobiusinstitute.com).
As the shaft speed increases from zero to the regime value, the shaft centre moves
from the bottom of the bearing clearance to the operational centre, following a
path referred to as the shaft centreline. During normal operation, the shaft
centre oscillates around the final point of the shaft centreline; such an oscillation
is known as the orbit. The shaft centreline is derived from the average value
(DC component) of the waveform measured by the two proximity sensors as a
function of the rotating speed. Conversely, the orbit plot is derived from the
AC component of the waveform.
The analysis of the attitude and eccentricity is the basis for the journal bearing
analysis. A comprehensive description of the rotors analysis is provided in the
referenced books [18].
In general, the machine is working properly if the shaft centreline final point is in
the lower quadrant (lower-left if the shaft is rotating clockwise, lower-right if the
shaft is rotating counter-clockwise).
Figure 208 Example of raw orbit plot (left) and filtered orbit plot (right)
The shaft can whirl in the same direction as the rotation (forward precession) or
in the opposite direction (reverse precession). In normal conditions, the shaft
moves in forward precession, but many common machine faults increase the
forward precession of the shaft in the bearing, while other malfunctions produce
reverse precession. For instance, there are two common machinery problems
producing 0.5X vibration. The fluid induced instability produces a vibration at 0.4
to 0.5X with a forward precession. Rotor rubs also produce a 0.5X vibration, but
the precession is usually reverse. Consequently, determining the precession
direction can help identify the fault origin.
The relative amplitudes of the major and minor axes of the orbit should be
comparable. If one axis of the filtered orbit is much larger than the other, there
might be a misalignment between the supports. The left plot in Figure 209 shows
an orbit in which the major and the minor axis are comparable. The right plot
shows an orbit in which the dominant axis is much larger than the minor one.
Typically, these orbits are generated by shaft misalignments.
The presence of unusual orbit shapes when adding 2X, 3X and high order
components (as in Figure 210) is usually indicative of system malfunctions.
Figure 210 Orbit plots when the 2X, 3X and higher order harmonics are larger than the 1X.
In this case, the shaft rotates in both clockwise and counter-clockwise directions,
i.e. with heavy preload the shaft can move into reverse precession.
10.3.3 Rotor Rub
The orbit analysis is particularly useful for the identification of rotor rub, i.e. when
the shaft hits the supports or other fixed parts. The rub usually modifies the orbit,
which assumes different shapes (8-shaped, full circles or other orbits such as in
Figure 213).
The forward (Rf) and reverse (R−f) components of the full spectrum at each
frequency f are obtained from the signals of two orthogonal transducers as
Rf  = ½·√( Xf² + Yf² + 2·Xf·Yf·sin(φx,f − φy,f) )
R−f = ½·√( Xf² + Yf² − 2·Xf·Yf·sin(φx,f − φy,f) )   (227)
where Xf, Yf, φx,f and φy,f are respectively the moduli and the phases of the
signals measured by the two transducers x and y.
If the orbit is a circle, the two components Xf and Yf are two harmonic functions
with the same amplitude and a phase difference of 90°. In these conditions, the
full spectrum would include only Rf, since R−f would be null. Conversely, if the
orbit is a line aligned with one of the two measurement axes, the full spectrum
would result in two bins at +f and −f with identical amplitudes. Given that the
orbits are usually elliptical, the typical situation is intermediate between these
two: an example of the full spectrum of a shaft rotating at 1000 RPM is shown in
Figure 216. The +1X component is dominant, while the +2X and −2X components
are comparable.
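A minimal sketch of the full spectrum computation from two orthogonal probes is given below (with a hypothetical elliptical 1X orbit, amplitudes in micrometres, and a rotation speed of 1500 RPM chosen so that the rotation frequency falls exactly on a spectral line):

import numpy as np

fs, T = 1000.0, 2.0
N = int(fs * T)
t = np.arange(N) / fs
fr = 25.0                                      # 1500 RPM

x = 80.0 * np.cos(2 * np.pi * fr * t)          # probe x [um]
y = 30.0 * np.sin(2 * np.pi * fr * t)          # probe y [um]

Z = np.fft.fft(x + 1j * y) / N                 # DFT of the complex orbit signal x + jy
f = np.fft.fftfreq(N, 1 / fs)
fwd = abs(Z[np.argmin(np.abs(f - fr))])        # forward (+1X) component
rev = abs(Z[np.argmin(np.abs(f + fr))])        # reverse (-1X) component
print(fwd, rev)                                # (80+30)/2 = 55 and (80-30)/2 = 25

The same values are obtained from equation (227) with Xf = 80, Yf = 30 and a phase difference of 90°.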
Figure 216 Full spectrum of the vibration of a shaft rotating at 1000 RPM
It must be noticed that the full spectrum is not linked in any way to the
negative-frequency components of the DFT of a single real signal. Negative
frequencies here conventionally refer to the components generating a
counter-rotation of the shaft orbit and can only be derived from the presence of
two transducers. Also in this case, it is
Full rub (self-excited response) | Rotor natural frequency | N | Y | In other cases, there might be a
self-excited reverse response.
Fluid induced whirl | 0.3 to 0.6X | Y | N | Mainly forward orbit with internal loops.
Fluid induced whip | Rotor natural frequency | Y | Y | Mainly forward orbit with internal loops.
Reverse 1X and sub-synchronous components are present.
Rotating stall | 0.1 to 0.8X | Y | N | Very similar to fluid induced whirl, but it disappears when the
fluid flow changes.
Figure 217 Orbit plots on support number 1, rotation speed 3000 RPM
The comparison between the orbits on supports number 1 and number 2 is shown
in Figure 218: both the larger vibration amplitude and the difference between the
vibration along the two coordinate axes are evident.
Figure 218 Comparison between the orbit plot on supports 1 (red) and 2 (blue)
The difference is also evidenced by the vibration time histories (Figure 219) and
the full spectrum plots (Figure 220). The time history shows that the vibration
along channel A is approximately twice as large as the vibration along channel B.
Figure 219 Time histories of the vibration measured by the two perpendicular proximity sensors
placed on support number 1
The full spectrum plot (Figure 220) shows that the forward 1X harmonic (50 Hz) is
much larger than the reverse 1X harmonic, due to the presence of radial loads and
of rotor imbalance.
Figure 220 Full spectrum of the vibration measured on the support number 1.
Figure 222 Trajectory of the tool when the head is moving along the X axis in negative direction (
0.01 mm).
239
Figure 223 Trajectory of the tool when the head is moving along the Y axis in positive direction (
0.07 mm).
Figure 224 Trajectory of the tool when the head is moving along the Y axis in negative direction (
0.9 mm).