Operational Modal Analysis of Civil Engineering Structures
An Introduction and Guide for Applications

Carlo Rainieri • Giovanni Fabbrocino

Department of Biosciences and Territory, Structural and Geotechnical Dynamics Lab
University of Molise
Termoli, Italy
Contents

1 Introduction
  1.1 Operational Modal Analysis: A New Discipline?
  1.2 Preliminary Concepts
  1.3 Fundamental Principle and Applications of OMA
  1.4 Organization of the Book
  1.5 A Platform for Measurement Execution and Data Processing
    1.5.1 Generalities
    1.5.2 VIs and Toolkits for Data Processing and System Identification
    1.5.3 Recurrent Structures for Software Development
  References
2 Mathematical Tools for Random Data Analysis
  2.1 Complex Numbers, Euler's Identities, and Fourier Transform
  2.2 Stationary Random Data and Processes
    2.2.1 Basic Concepts
    2.2.2 Fundamental Notions of Probability Theory
    2.2.3 Correlation Functions
    2.2.4 Spectral Density Functions
    2.2.5 Errors in Spectral Density Estimates and Requirements for Total Record Length in OMA
  2.3 Matrix Algebra and Inverse Problems
    2.3.1 Fundamentals of Matrix Algebra
    2.3.2 Inverse Problems: Error Norms and Least Squares Solutions
  2.4 Applications
    2.4.1 Operations with Complex Numbers
    2.4.2 Fourier Transform
    2.4.3 Statistics
    2.4.4 Probability Density Functions
    2.4.5 Auto- and Cross-Correlation Functions
    2.4.6 Auto-Correlation of Gaussian Noise
Index
List of Abbreviations
AC Alternating current
ADC Analog-to-digital converter
ADO ActiveX Data Object
AFDD-T Automated frequency domain decomposition-tracking
AMUSE Algorithm for multiple unknown signal extraction
AR Auto regressive
ARES Automated modal parameter extraction system
ARMA Auto-regressive moving average
ARMAV Auto-regressive moving average vector
BFD Basic frequency domain
BMID Blind modal identification
BR Balanced realization
BSS Blind source separation
CMRR Common mode rejection ratio
COMAC Coordinate modal assurance criterion
Cov-SSI Covariance-driven stochastic subspace identification
CPU Central processing unit
CVA Canonical variate analysis
DC Direct current
DD-SSI Data-driven stochastic subspace identification
DDT Dynamic data type
DFT Discrete Fourier transform
DOF Degree of freedom
DR Dynamic range
DSN Data source name
ECOMAC Enhanced coordinate modal assurance criterion
EFDD Enhanced frequency domain decomposition
EMA Experimental modal analysis
ERA Eigensystem realization algorithm
EVD Eigenvalue decomposition
FDD Frequency domain decomposition
FE Finite element
FEM Finite element model
FFT Fast Fourier transform
RD Random decrement
RMFD Right matrix fraction description
rms Root mean square
SDOF Single degree of freedom
SERP Stationary and ergodic random process
SHM Structural health monitoring
SIMO Single input multiple output
SISO Single input single output
SNR Signal-to-noise ratio
SOBI Second-order blind identification
SRP Stationary random process
SSI Stochastic subspace identification
SVD Singular value decomposition
UMPA Unified matrix polynomial approach
UPC Unweighted principal component
UR Upper residual (term)
VI Virtual instrument
ZOH Zero order hold
List of Symbols
dB Decibel
h(τ) Impulse response function (IRF)
H(ω) Frequency response function (FRF)
[ ] Matrix
⟨ ⟩ Row vector
{ } Column vector
i Imaginary unit/number of block rows
fs Sampling frequency
T Period/duration
N Number of samples
( )* Complex conjugate
( )^H Hermitian adjoint
( )^T Transpose
( )^+ Pseudoinverse
NDOF Number of DOFs
Nm Number of modes
λ Continuous-time pole/eigenvalue
μ Discrete-time pole/mean
ε Error
n Order of state space model or polynomial model
p Order of AR/ARMA model/number of time lags in SOBI
l Number of output time series
ˆ Estimated quantity
γ²xy(ω) Coherence function
Δf Frequency resolution
Δfn Relative scatter between natural frequency estimates
Δt Sampling interval
tk Discrete time instant
Re() Real part of a complex number
Im() Imaginary part of a complex number
‖ ‖ L2-norm
|[ ]| Determinant
adj([]) Adjoint matrix
fd,r Damped frequency
Preface

The use of experimental tests to gain knowledge about the dynamic response
of civil structures is a well-established practice. In particular, the experimental
identification of the modal parameters can be dated back to the middle of the
Twentieth Century (Ewins 2000). Assuming that the dynamic behavior of the
structure can be expressed as a combination of modes, each one characterized by
a set of parameters (natural frequency, damping ratio, mode shape) whose values
depend on geometry, material properties, and boundary conditions, Experimental
Modal Analysis (EMA) identifies those parameters from measurements of the
applied force and the vibration response.
In the last decades the principles of system identification and the experimental
estimation of the modal parameters have provided innovative tools for the under-
standing and control of vibrations, the optimization of design, and the assessment of
performance and health state of structures. In fact, even if the Finite Element
(FE) method and the fast progress in computing technologies have made excellent
analysis tools available to the technical community, the development of new high-
performance materials and the increasing complexity of structures have required
powerful tools to support and validate the numerical analyses. In this context the
experimental identification of the modal properties helps engineers gain physical insight into the dynamic behavior of the structure and discriminate between errors due to discretization and those due to simplified or even incorrect modeling assumptions. Moreover, since the vibration response
originates from the modes, which are inherent properties of the structure, forces
exciting the structure at resonant frequencies yield large vibration responses that
can result in discomfort or even damage. Regular identification of modal para-
meters and analysis of their variation can support the assessment of structural
performance and integrity.
Since the origin of EMA, testing equipment and data processing algorithms have
significantly evolved. EMA is currently a well-established field, based on a sound
where As and An denote the signal amplitude and the noise amplitude (expressed in
the same units), respectively. When the SNR is low, the signal of interest can become indistinguishable from the background noise. Thus, appropriate data acquisition strategies must be adopted to
minimize the level of noise that inevitably affects measurements (Chap. 3).
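A minimal numerical sketch (Python), assuming the common decibel form SNR = 20 log10(As/An) of the amplitude ratio, can make the effect of the noise level tangible; the function and the example amplitudes are illustrative only, not taken from the book:

import numpy as np

def snr_db(signal_amplitude, noise_amplitude):
    # Signal-to-noise ratio in decibels, assuming SNR = 20*log10(As/An)
    return 20.0 * np.log10(signal_amplitude / noise_amplitude)

print(snr_db(1.0, 0.1))   # 20 dB: signal amplitude ten times larger than the noise
print(snr_db(1.0, 0.9))   # about 0.9 dB: the signal barely emerges from the noise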
Complex signals may be decomposed into elemental signals. Examples of
elemental signals are the impulse and the sinusoid. When the signal is decomposed
into scaled and shifted impulses, a time domain analysis takes place; if, instead, the
signal is decomposed into scaled sinusoids of different frequency, analysis is carried
out in the frequency domain. It is always possible to convert a signal from one
domain to the other, so the final choice is usually dictated by considerations about
computational efficiency, ease of data interpretation, and noise reduction techniques.
The dynamic behavior of physical systems is often described by defining an
ideal constant-parameter linear system (also known as linear time-invariant—
LTI—system, Fig. 1.1). A system is characterized by constant parameters if all its
fundamental properties are invariant with respect to time. Moreover, it shows a
linear mapping between input and output if the response characteristics are additive
and homogeneous. As a result, the response of the system to a linear combination of
given inputs equals the same linear combination of the system responses to the
individual, separately analyzed inputs. The constant-parameter assumption is
reasonably valid for several physical systems encountered in practice. However,
its validity depends on the extension of the considered time interval. For large
observation periods it may not be realistic. This is the case, for instance, of structures subjected to continual vibrations, where fatigue damage can cause a change in the stiffness of the structure. However, the time intervals of practical interest for output-only modal testing are short enough that the structure can be considered time invariant. The validity of the linearity assumption for real structures
depends not only on the characteristics of the structure, but also on the magnitude
of the input. Physical systems typically show nonlinear response characteristics
when the magnitude of the applied load is large. Moreover, nonlinearities are often not associated with abrupt changes in the response; the gradual nature of the transition makes the problem even more complex. However, for the applications of our interest, the
response of many structures can be reasonably assumed to be linear, since ambient
excitation yields small amplitude vibrations.
From a general point of view, the dynamics of a civil structure, like any
mechanical system, can be described in terms of its mass, stiffness, and damping
properties, or in terms of its vibration properties (natural frequencies, damping
ratios, and mode shapes) or in terms of its response to a standard excitation.
The first approach is usually adopted when a FE model of the structure is set.
The dynamic properties of structures describe their free vibration response (when
no force or acceleration is applied) or, in other words, the ways in which they
“naturally” vibrate. In fact, under certain assumptions, the dynamic response of the
structure can be decomposed into a set of vibration modes, each one with a charac-
teristic deflected shape of its own: the mode shape. The corresponding natural
frequency and damping ratio govern the motion of the structure according to one of
these shapes. Natural frequencies and mode shapes can be obtained from the mass
and stiffness properties of the structure through the solution of an eigenproblem.
Under the assumption of proportional viscous damping, the modes of the damped
structure coincide with those of the undamped structure. Thus, it is possible to
compute the real eigenvalues and eigenvectors (associated with the natural
frequencies and the mode shapes) of the system without damping and, in a second
stage, apply a correction to the modal responses to account for the effect of damping.
It is worth pointing out that, while the eigenfrequencies of undamped or
proportionally damped systems are univocally determined, the eigenvectors are
not. In fact, the eigenproblem leaves the scaling factor undetermined. However,
it affects only the amplitude, while the shape (that is to say, the relative values of
the components of the mode shape vector) remains unchanged. For this reason,
conventional scaling procedures are usually adopted to normalize the mode
shape vectors. A frequently adopted scaling scheme is based on the orthogonality
of natural modes with respect to the mass and stiffness matrices of the structure.
The pre- and post-multiplication of those matrices by the modal matrix [Φ],
which collects the mode shape vectors in columns, yield the following diagonal
matrices:
[\Phi]^T [M] [\Phi] = \begin{bmatrix} \ddots & & \\ & m_r & \\ & & \ddots \end{bmatrix}    (1.2)

[\Phi]^T [K] [\Phi] = \begin{bmatrix} \ddots & & \\ & k_r & \\ & & \ddots \end{bmatrix}    (1.3)
Thus, it is possible to normalize the mode shape vectors in a way that the matrix
of (1.2) equals the identity matrix. According to this scaling scheme, the elements
on the main diagonal of the matrix of (1.3) correspond to the eigenvalues. The
relationship between the mass normalized mode shape {ψ r} and the unscaled one
{ϕr} for the r-th mode is:
\{\psi_r\} = \frac{1}{\sqrt{m_r}} \{\phi_r\}    (1.4)
where m_r is the modal mass of the r-th mode.
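A compact numerical illustration of these steps (Python/NumPy/SciPy; the two-degree-of-freedom mass and stiffness values are arbitrary and not taken from the book) is the following sketch:

import numpy as np
from scipy.linalg import eigh

# Illustrative two-DOF system (arbitrary values): masses in kg, stiffnesses in N/m
M = np.diag([2.0, 1.0])
K = np.array([[3000.0, -1000.0],
              [-1000.0, 1000.0]])

# Undamped eigenproblem K*phi = omega^2 * M * phi
eigvals, V = eigh(K, M)                  # eigvals = omega_r^2
f_n = np.sqrt(eigvals) / (2 * np.pi)     # natural frequencies in Hz

# Arbitrary scaling (unit largest component) to mimic the unscaled shapes {phi_r}
Phi = V / np.abs(V).max(axis=0)

# Modal masses m_r = phi_r^T M phi_r (diagonal of Eq. 1.2) and mass normalization, Eq. (1.4)
m_r = np.diag(Phi.T @ M @ Phi)
Psi = Phi / np.sqrt(m_r)

print(np.round(f_n, 2))                  # natural frequencies
print(np.round(Psi.T @ M @ Psi, 6))      # identity matrix
print(np.round(Psi.T @ K @ Psi, 3))      # omega_r^2 on the diagonal, cf. Eq. (1.3)

The check in the last lines reproduces the scaling scheme of (1.2)-(1.4): with mass-normalized shapes, [Ψ]^T[M][Ψ] equals the identity matrix and [Ψ]^T[K][Ψ] collects the eigenvalues on its main diagonal.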
The lower limit of integration is zero since the LTI system has been assumed
to be physically realizable (causal), that is to say, it responds only to past inputs.
The assumption of causality, in fact, implies that the IRF vanishes for negative time lags, h(τ) = 0 for τ < 0.
An LTI system is also stable if every bounded input function f(t) produces a bounded output y(t).
Alternatively, a physically realizable and stable LTI system can be described by
the FRF H(ω). The convolution integral of (1.6) reduces to a simple multiplication when it is expressed in terms of the FRF and the Fourier transforms of the input—F(ω)—and the output—Y(ω)—(see also Chap. 2): Y(ω) = H(ω) F(ω).
An LTI system cannot cause any frequency translation; it can only modify the amplitude and phase of the applied input. Its FRF is a function of frequency only and depends neither on time nor on the excitation applied to the system.
From the experimental point of view, FRFs are estimated by forced vibration
tests. Depending on the number of applied inputs and the number of measured
outputs, four types of testing scheme can be identified: Single Input Single Output
(SISO), Single Input Multiple Output (SIMO), Multiple Input Single Output
(MISO), Multiple Input Multiple Output (MIMO). The output-only modal tests,
which represent the subject of this book, are always of the MIMO type because of
the assumptions about the input (see also Chap. 4).
OMA aims at estimating the dynamic properties of LTI systems from records of
their dynamic response only. Thus, the unknown environmental and operational
loads play a fundamental role in testing and in the subsequent modal analysis.
If, on one hand, the environmental excitation is advantageous when large civil
structures are tested, on the other hand data acquisition and, above all, data
processing require supplemental attention to carry out successful output-only
modal tests. In fact, since the input is unmeasured, some characteristics of the
excitation can be erroneously confused with dynamic properties of the structure
under test. Moreover, since the test engineer has no control on the applied
excitation, the identification of closely spaced modes can be a more difficult
task with respect to EMA. Specific actions and functional checks are needed to
ensure that good quality information about closely spaced modes can be extracted
from the measured data. For instance, this objective can be achieved by ensuring a
sufficient amount of independent information in the data. In Chapter 4 it is shown
that the output power spectral density (PSD) matrix can be expressed in terms of
the FRF matrix of the structure and the input PSD matrix (4.12). As a conse-
quence, its rank (defining the number of independent rows or columns in the
matrix) cannot be larger than the rank of the individual matrices appearing in the
product. This implies that closely spaced modes cannot be estimated if the rank of
the input PSD matrix is equal to one. This happens when only one input is present,
or the inputs are fully correlated. It can be also the case when free decay data are
used for output-only modal identification. In fact, in this case multiple sets of
initial conditions are needed to handle closely spaced modes, since multi-output
measurements with respect to a single set of initial conditions are equivalent to a
SIMO scheme (Zhang et al. 2005).
Rank deficiency over a limited frequency band in the proximity of the consi-
dered closely spaced modes can partially hide the actual physical properties of the
structure (for instance, by revealing only one of the modes, or a combination of the
two modes). Thus, a proper design of sensor layout and a preliminary evaluation of
the sources of excitation acting on the structure in its operational conditions play a
primary role in ensuring the possibility to obtain high quality information from
modal testing. As a general rule, a large number of sensors helps maximize the rank of the FRF matrix, while several uncorrelated inputs maximize the rank of the input PSD matrix. On the contrary, correlated inputs or inputs applied at a single point limit the rank of the input PSD matrix; sensors placed at nodes of the mode shapes or multiple sensors measuring the same DOF
(thus, adding no new independent information) limit the rank of the FRF matrix.
Some recommendations for the definition of sensor layout for recurrent typologies
of civil structures are reported in Chap. 3. As far as the input is concerned, moving loads acting on the structure or environmental loads, such as wind and traffic, push the rank of the input PSD matrix to values larger than one.
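The rank argument can be checked numerically; a minimal sketch (Python/NumPy, illustrative values only) builds an output PSD matrix at a single frequency as G_yy = H G_uu H^H and shows that its rank cannot exceed the rank of the input PSD matrix:

import numpy as np

rng = np.random.default_rng(0)
n_out, n_in = 5, 3                      # 5 measured outputs, 3 input locations (illustrative)
H = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))  # FRF matrix at one frequency

# Rank-1 input PSD (a single source, or fully correlated inputs)
u = rng.normal(size=(n_in, 1)) + 1j * rng.normal(size=(n_in, 1))
Guu_rank1 = u @ u.conj().T

# Full-rank input PSD (several uncorrelated broadband sources)
Guu_full = np.diag(rng.uniform(1.0, 2.0, size=n_in)).astype(complex)

for Guu in (Guu_rank1, Guu_full):
    Gyy = H @ Guu @ H.conj().T          # output PSD matrix, cf. Eq. (4.12) referenced above
    print(np.linalg.matrix_rank(Gyy))   # prints 1, then 3: never larger than rank(Guu)

With a single (or fully correlated) source the output PSD matrix has rank one, whatever the number of sensors, which is why closely spaced modes cannot be resolved in that situation.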
damage, but also to localize and quantify it. An extensive review about these
techniques is available in the literature (Doebling et al. 1996, Sohn et al. 2003,
Farrar and Worden 2013). The main drawback of damage detection techniques
based on the analysis of the changes in the estimated modal properties is related to
the influence of boundary conditions and operational and environmental factors on
the estimates. Such an influence can produce changes in the modal parameter
estimates that are of the same order of magnitude as those due to damage. However,
in recent years a number of techniques able to remove the influence of environ-
mental factors on modal parameter estimates have been developed (see, for
instance, Deraemaeker et al. 2008, Magalhaes et al. 2012), thus raising a renewed
interest towards vibration-based damage detection. Another relevant limitation to
the extensive application of these damage detection techniques was the lack of fully
automated procedures for the estimation of the modal parameters of the monitored
structure. This issue has motivated large research efforts in the last few years to
develop reliable and robust automated OMA techniques. This topic is extensively
discussed in Chap. 6.
An application of modal parameter estimates somehow related to vibration-
based SHM and to inherent limitations of OMA techniques is represented by the
estimation of the modal masses or, equivalently, the scaling factors of mode shapes.
In fact, since the input is not measured, only unscaled mode shapes can be obtained
from operational modal testing. For this reason, specific techniques for the estima-
tion of the scaling factors, based on the application of known structural modifi-
cations, have been developed. This topic is discussed in Chap. 5. The estimation of
the scaling factors makes possible, among other things, the reconstruction of the FRF
matrix from the experimental results and the application of a specific class of
damage detection techniques (see, for instance, Doebling et al. 1998, Pandey and
Biswas 1994).
Additional applications concern load identification. In this case, the known
modal parameters are used to solve an inverse problem for the identification of
the unknown forces that produced a given measured response (Parloo et al. 2003).
The wide range of applications and the increasing demand for output-only modal
testing in the current civil engineering practice justify the increasing attention and the
large research efforts in the field of OMA observed in the last decade. The present
book has been conceived and prepared in order to transfer the OMA concepts from academia to practice and to attract young students and researchers to this
discipline. For this reason, it is organized in clearly defined learning steps covering
the required interdisciplinary notions to train a thorough and effective modal analyst.
The topics discussed in the book encompass the fundamental theoretical notions, the
criteria for proper measurement execution, and the methods and criteria for an
appropriate and detailed data analysis. The integration of those three components is
the key for a full exploitation of the potentialities of OMA. The importance of mixing
all the necessary theoretical and practical skills for a successful modal testing can be
easily realized by recognizing, for instance, that a well-trained test engineer must
be able to quickly identify anomalies in the measurements and the corresponding
possible technical solutions (they sometimes simply consist in the replacement of a
faulty connector or in the relocation of one or more sensors). This is possible only if
the modal analyst has a sound theoretical basis.
The balance between theoretical and practical aspects of OMA is highlighted
by the content of the different chapters. In Chapter 2 the basic notions of random
processes and inverse problems are reported. Chapter 3 is focused on the issues
related to measurement execution. Chapter 4 discusses the main models of the
dynamic behavior of structures, how these models are applied in the context of
OMA, the similarities and differences about different OMA methods, and the
procedures for post-processing and validation of experimental results.
In the illustration of the fundamental analysis tools (Chap. 2) and the theoretical
basis of OMA methods (Chap. 4), heavy mathematical derivations are avoided to
focus the attention on the concepts and their practical implications. The attention to the applicative aspects is underlined by the detailed presentation of the algorithms and implementation details of popular OMA methods and by the applications
proposed at the end of the next three chapters. The latter, in particular, are aimed
at the prompt practical verification of the previously discussed theoretical notions,
thus providing a valuable motivation to learning. In most of the proposed applica-
tions the reader has two options: he can develop his own systems and software
according to the reported algorithms and tutorials, or he can use the basic software
accompanying the book. The two options are conceived to fulfill the needs of
both academicians and technicians.
Guidelines for the application in the field of the concepts and notions reported in
the first chapters can be obtained from the analysis of the case studies discussed
in Chap. 5. Finally, Chapter 6 extensively analyzes the latest developments in the
field of OMA concerning the automated identification of the modal parameters.
At the end of the last two chapters no applications are proposed. The reason is that
Chap. 5 already has a practical character, while the topic discussed in Chap. 6
(automated OMA) is very recent. Thus, Chapter 6 basically reports the authors' viewpoint on the matter, since a wide consensus on the "best methods" for automated output-only modal identification has not yet been reached.
At the end of these preliminary notes and before the introduction of the
recommended platform for the implementation of the systems and procedures
discussed in this book, it is worth highlighting two additional aspects concerning
the illustration of the topics. The adopted approach consists in making the various
topics “as simple as possible but not simpler”. For this reason, heavy mathematical
derivations are avoided, providing all the necessary references for the reader inter-
ested in more details; however, the mathematical description of concepts and
algorithms has been retained to ensure an autonomous learning basis and simplify
their practical implementation. The mathematical description of models and methods also helps in understanding the common features shared by different OMA methods. Starting from a number of investigations reported in the
1.5.1 Generalities
In this book, the LabVIEW environment has been adopted as the common platform for system and software implementation. The choice of the
platform for measurement execution and data processing can obviously be different
but, for practical reasons, it is impossible to provide a tutorial for every program-
ming language. Nevertheless, since the algorithms are general, any other choice is
possible for their implementation, provided that the reader carries out the appro-
priate language translations.
LabVIEW programs are called Virtual Instruments, or VIs, because their appear-
ance and operation imitate physical instruments, such as oscilloscopes and
multimeters. LabVIEW contains a wide set of tools to acquire, analyze, display,
and store data, as well as for code troubleshooting (National Instruments 2005a).
Programming in LabVIEW requires the design and implementation of a user
interface, or Front Panel (Fig. 1.2), with controls and indicators, which are the
interactive input and output terminals of the VI, respectively. Figure 1.2 shows a
simple VI to analyze the variations of a signal and its spectrum for different
amplitudes of its components, a sinusoid and a random signal. The user can
interactively set the parameters of the signals by means of the controls.
Several types of controls and indicators are available. Examples of controls are
knobs, push buttons, dials, and other input mechanisms. Controls simulate instru-
ment input mechanisms and supply data to the block diagram (Fig. 1.3) of the VI.
Examples of indicators are graphs, LEDs, and other output displays. Indicators
simulate instrument output mechanisms and display the data acquired or generated
by the block diagram. Controls and indicators can handle different types of
variables; it is possible to distinguish:
• Numeric controls and indicators, such as slides and knobs, graphs, charts.
• Boolean controls and indicators, such as buttons and switches.
• String, path, array, cluster, and enumerated type controls, each one associated with a certain type of data.
The management of the user interface is based on the related code developed in
the block diagram by using VIs and structures to get the control of the Front Panel
objects. Objects in the block diagram include terminals and nodes. Block diagrams
are built by connecting the objects with wires. The color and symbol of each
terminal indicate the data type of the corresponding control or indicator. Constants
are terminals that supply given data values to the block diagram.
An example of code is reported in Fig. 1.3, which shows the block diagram
associated to the Front Panel of Fig. 1.2. In this simple example, the management of
the user interface is based on a While Loop. The user sets the values of the controls
in the Front Panel and they are continuously acquired and sent to the VIs for signal
generation. The outputs of these VIs are then combined (added) and used to
compute the spectrum of the signal.
LabVIEW can also be used to communicate with hardware such as data acquisi-
tion, vision, and motion control devices, as well as GPIB, PXI, VXI, RS232, and
RS485 equipment. It can be both National Instruments and third party hardware.
Hardware configuration is carried out through a specific interface that is illustrated
in Chap. 3 in the context of the proposed application for the development of a
dynamic data acquisition system based on programmable hardware.
LabVIEW adopts a dataflow model for the running VIs. A block diagram node
executes when it receives all the required inputs. When a node executes, it produces
output data that are passed to the next node in the dataflow path. The movement of
data through the nodes determines the execution order of the VIs and structures on
the block diagram. The main difference between LabVIEW and other programming
languages is indeed in the dataflow model. Text-based programming languages
typically respond to a control flow model of program execution. In control flow,
the sequential order of program elements determines the execution order of the program. In LabVIEW the flow of data rather than the sequential order of commands determines the execution order of block diagram elements. Therefore, it is possible to create block diagrams that have simultaneous operations.
[Fig. 1.4 VIs for output-only modal identification (included in the Time Series Analysis Tools)]
Dataflow execution makes memory management easier than the control flow
model of execution. In LabVIEW the user typically does not allocate memory for
variables or assign values to them. Instead, the user defines a block diagram with wires that represent the flow of data. When a VI generates new data, it automa-
tically allocates the memory for those data. When the VI no longer uses the data,
LabVIEW de-allocates the associated memory. When new data are added to an array
or a string, LabVIEW allocates sufficient additional memory to manage the new data.
model has more degrees of freedom. However, an overestimated order can introduce
spurious artifacts not related to the physics of the observed system. As a result, a
number of criteria for selection of the order have been developed over the years.
They take into account the model-fitting error associated to a certain choice of the
order but they also incorporate a penalty when the order increases. Many of those
criteria (Akaike’s Information Criterion, Bayesian Information Criterion, Final
Prediction Error Criterion, Minimal Description Length Criterion, Phi Criterion),
differing only for the way the penalty is evaluated, are available in the Time Series
Analysis Tools of the Advanced Signal Processing Toolkit. More details about the
VIs in the Toolkit and their applications can be found in the related user manual
(National Instruments 2005b).
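As a rough illustration of how such criteria trade model fit against complexity, the sketch below (Python/NumPy, independent of the LabVIEW toolkit) fits least-squares AR models of increasing order to a signal and evaluates Akaike's Information Criterion in the common form AIC(p) = N ln(sigma_p^2) + 2p, where sigma_p^2 is the residual variance; the test signal and the exact AIC form are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)
N = 2000
# Illustrative "measured" signal: a stable AR(2) process driven by white noise
y = np.zeros(N)
for t in range(2, N):
    y[t] = 1.6 * y[t - 1] - 0.9 * y[t - 2] + rng.normal()

def ar_aic(y, p):
    # Least-squares AR(p) fit; returns Akaike's Information Criterion
    Y = y[p:]
    X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    coeffs, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    sigma2 = np.mean((Y - X @ coeffs) ** 2)       # model-fitting error (residual variance)
    return len(Y) * np.log(sigma2) + 2 * p        # fit term plus penalty on the order

orders = range(1, 11)
aic = [ar_aic(y, p) for p in orders]
print(list(orders)[int(np.argmin(aic))])          # typically 2: higher orders are penalized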
The additional toolkits can simplify the implementation of the algorithms
discussed in this book, but they are not critical and their use can be avoided with
no large penalties. In fact, all the necessary tools for the implementation of OMA
algorithms are already present in LabVIEW. They include, among the others, linear
algebra tools, tools for probability and statistics, curve fitting tools, tools for
polynomial analysis, tools for filtering and signal processing. In the application of
these tools, the analysis of the related documentation, such as the LabVIEW
Analysis Concepts (National Instruments 2004), is recommended to avoid possible
mistakes in the interpretation of input settings, outputs, and mode of operation.
Moreover, the analysis of the above mentioned documentation might disclose
functionalities that cannot be immediately recognized by the user.
The State Machine (Fig. 1.6) is one of these fundamental architectures. It allows
the execution of different sections of code (states) in an order that can be deter-
mined in several different ways. Thus, it can be used to implement complex
decision-making algorithms represented by flowcharts.
State Machines are used in applications where distinguishable states exist.
An initialization phase is sometimes needed. State Machines perform a specific
action for each state in the diagram. Each state can lead to one or multiple states, or
end the process. The user input or results of computations in the running state
determine the next state. This is the reason why they are commonly adopted to
manage the user interactions. Different user inputs lead to different processing
segments. Each of these segments represents one of the states in the State Machine.
Since each state in a State Machine carries out a specific action and calls the next
state, this architecture requires the definition of some conditions. Thus, the common
elements in a State Machine architecture are: the While Loop, which continually
executes the various states, the Case Structure, containing the code associated to a
certain state, the shift register and the transition code, which determines the next
state in the sequence (Fig. 1.6).
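The same pattern can be expressed in any language; a minimal text-based sketch (Python, with hypothetical state and command names) mirrors the LabVIEW ingredients: the While Loop, the Case Structure selecting on the current state, and a variable playing the role of the shift register that carries the next state:

# Minimal state machine sketch (Python); states and commands are illustrative only.
commands = iter(["acquire", "acquire", "quit"])     # stands in for user input on the Front Panel

def initialize():
    print("initializing hardware...")
    return "wait_for_user"                          # transition code: decide the next state

def wait_for_user():
    cmd = next(commands)
    return "acquire" if cmd == "acquire" else "stop"

def acquire():
    print("acquiring a block of data...")
    return "wait_for_user"

states = {"init": initialize, "wait_for_user": wait_for_user, "acquire": acquire}

state = "init"                  # plays the role of the shift register holding the next state
while state != "stop":          # the While Loop that keeps the machine running
    state = states[state]()     # the Case Structure: execute the code of the current state
print("done")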
Another design pattern commonly used to interact with user input via the front
panel is the Event structure.
Events are caused by actions of the user (for instance, a click of the mouse).
In this case the execution of the code is governed by the occurring events. In an
event-driven program, the program first waits for events to occur, responds to those
events, then returns to waiting for the next event. How the program responds
depends on the code implemented for that specific event. The order in which an
event-driven program executes depends on which events occur and on the order in
which those events occur. While the program waits for the next event, it frees up
CPU resources that might be used to perform other processing tasks.
In LabVIEW, the Event structure (Fig. 1.7) allows handling events in an
application. Multiple cases can be added to the Event structure and configured to
handle one or more events. Configuration of the events is easy. It is sufficient to
right-click the Event structure border and select Edit Events Handled by This Case
from the shortcut menu. The Edit Events dialog box appears (Fig. 1.8) and it is
possible to configure the event.
The Event structure waits until an event happens, then it executes the VI in
the case associated to that event. Using the Event structure minimizes the CPU
usage because the VI does not continually poll the Front Panel for changes, as in the
case of the While Loop structure. In contrast to polling, the Event structure does not
lose user events. In fact, events are stored in a queue and processed in the order in
which they occur.
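Outside LabVIEW, the same event-driven idea is usually implemented with a blocking queue: the program sleeps until an event arrives, handles it, and goes back to waiting, so no user action is lost. A minimal sketch (Python, with hypothetical event names) is the following:

import queue

events = queue.Queue()              # events are stored here in the order they occur

def on_start_pressed():
    print("measurement started")

def on_stop_pressed():
    print("measurement stopped")

handlers = {"start": on_start_pressed, "stop": on_stop_pressed}

# Somewhere else (e.g., a user-interface callback) events would be produced:
for name in ("start", "stop", "quit"):
    events.put(name)

while True:
    event = events.get()            # blocks (using no CPU) until an event is available
    if event == "quit":
        break
    handlers[event]()               # run the code configured for this specific event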
The execution of parallel operations is slightly more complex and requires
specific structures. The Producer/Consumer architecture (Fig. 1.9) is a commonly
adopted design pattern for the development of data acquisition and data logging
systems. It consists of two parallel loops, where the first loop produces data and the
other consumes those data. Data queues are used to communicate data between the
loops. The queues ensure data buffering between producer and consumer loops.
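A text-based counterpart of this design pattern (Python threads and a queue standing in for the two parallel loops and the LabVIEW data queue; buffer size and data are illustrative) could look as follows:

import queue
import threading

buffer = queue.Queue(maxsize=100)          # the data queue between the two loops

def producer():
    # Producer loop: e.g., reads blocks of samples from the acquisition hardware
    for block in range(5):
        buffer.put(f"data block {block}")  # blocks if the consumer falls too far behind
    buffer.put(None)                       # sentinel: no more data

def consumer():
    # Consumer loop: e.g., writes the acquired blocks to disk or processes them
    while True:
        item = buffer.get()
        if item is None:
            break
        print("logged:", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()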
References
Bendat JS, Piersol AG (2000) Random data: analysis and measurement procedures, 3rd edn.
Wiley, New York
Chopra AK (2000) Dynamics of structures—theory and applications to earthquake engineering,
2nd edn. Prentice Hall, Upper Saddle River, NJ
Deraemaeker A, Reynders E, De Roeck G, Kullaa J (2008) Vibration-based structural health
monitoring using output-only measurements under changing environment. Mech Syst Signal
Process 22:34–56
Doebling SW, Farrar CR, Prime MB, Shevitz DW (1996) Damage identification and health
monitoring of structural and mechanical systems from changes in their vibration
characteristics: a literature review, Technical Report LA-13070-MS, UC-900. Los Alamos
National Laboratory, New Mexico
Doebling SW, Farrar CR, Prime MB (1998) A summary review of vibration-based damage
identification methods. Shock Vib Dig 30(2):91–105
Ewins DJ (2000) Modal testing: theory, practice and application, 2nd edn. Research Studies Press
Ltd., Baldock
Farrar CR, Worden K (2013) Structural health monitoring: a machine learning perspective.
Wiley, Chichester
Friswell MI, Mottershead JE (1995) Finite element model updating in structural dynamics.
Kluwer, Dordrecht
Heylen W, Lammens S, Sas P (1998) Modal analysis theory and testing. Katholieke Universiteit
Leuven, Leuven
Magalhaes F, Cunha A, Caetano E (2012) Vibration based structural health monitoring of an arch
bridge: from automated OMA to damage detection. Mech Syst Signal Process 28:212–228
Maia NMM, Silva JMM, He J, Lieven NAJ, Lin RM, Skingle GW, To W-M, Urgueira APV (1997)
Theoretical and experimental modal analysis. Research Studies Press, Taunton
National Instruments (2004) LabVIEW analysis concepts. National Instruments Corporation,
Austin
National Instruments (2005a) LabVIEW fundamentals. National Instruments Corporation, Austin
National Instruments (2005b) LabVIEW advanced signal processing toolkit—time series analysis
tools user manual. National Instruments Corporation, Austin
Pandey AK, Biswas M (1994) Damage detection in structures using changes in flexibility. J Sound
Vib 169:3–17
Parloo E, Verboven P, Guillaume P, Van Overmeire M (2003) Force identification by means of
in-operation modal models. J Sound Vib 262:161–173
Sohn H, Farrar CR, Hemez FM, Shunk DD, Stinemates DW, Nadler BR (2003) A review of
structural health monitoring literature: 1996–2001. Technical Report LA-13976-MS, UC-900,
Los Alamos National Laboratory, New Mexico
Teughels A, De Roeck G (2004) Structural damage identification of the highway bridge Z24 by FE
model updating. J Sound Vib 278:589–610
Zhang L, Brincker R, Andersen P (2005) An overview of operational modal analysis: major
development and issues. In: Proc 1st International Operational Modal Analysis Conference,
Copenhagen
2 Mathematical Tools for Random Data Analysis
Sinusoidal signals are frequently used in signal processing and system analysis.
In fact, sines and cosines are orthogonal functions and form a base for the analysis
of signals. Moreover, they are eigenfunctions for LTI systems. However, in order to
simplify operations and mathematical manipulations, sinusoidal signals are often
expressed by complex numbers and exponential functions.
Consider a sinusoidal function characterized by amplitude M > 0, frequency ω
and phase angle φ:
e^{-iX} = \cos(X) - i\,\sin(X)    (2.3)

\cos(X) = \frac{e^{iX} + e^{-iX}}{2}    (2.4)

\sin(X) = \frac{e^{iX} - e^{-iX}}{2i}    (2.5)
where i is the imaginary unit (i² = −1), y(t) can be rewritten in terms of complex exponentials:

y(t) = \frac{M}{2} e^{i(\omega t - \varphi)} + \frac{M}{2} e^{-i(\omega t - \varphi)}    (2.6)
[Fig. 2.1 The complex number X = c + id and its conjugate X* = c − id in the complex plane (amplitude |X|, phase −θ)]
e^{a}\, e^{b} = e^{a+b}    (2.7)

\frac{d(e^{a})}{db} = \frac{da}{db}\, e^{a}    (2.9)
Moreover, the graphical representation of complex numbers c + id in rectangular
(2.10) and polar coordinates (2.11) gives different opportunities to analyze the data
and recover the information they hold. In Fig. 2.1 the complex number c + id and
its conjugate c-id are represented in the complex plane. Taking into account the
relation ((2.10) and (2.11)) between the amplitude (r) and phase (θ) in polar coordi-
nates on one hand and the real (c) and imaginary (d) part in rectangular coordinates on
the other, it can be noted that the complex conjugate has the same amplitude as the original complex number but opposite phase.
(r, \theta) = \left( \sqrt{c^2 + d^2},\ \arctan\frac{d}{c} \right)    (2.11)
(c + id)(e + if) = (ce - df) + i(cf + de)    (2.13)

\frac{c + id}{e + if} = \frac{(c + id)(e - if)}{(e + if)(e - if)} = \frac{(c + id)(e - if)}{e^2 + f^2}    (2.14)
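These rules are easy to verify numerically; a short sketch (Python, with arbitrarily chosen values) exercises the rectangular/polar conversion of (2.10)-(2.11) and the product and ratio of (2.13)-(2.14):

import cmath

X = 3 + 4j                           # c + id with c = 3, d = 4
r, theta = abs(X), cmath.phase(X)    # polar form: r = sqrt(c^2 + d^2), theta = arctan(d/c)
print(r, theta)                      # 5.0, 0.927...
print(X.conjugate(), cmath.phase(X.conjugate()))   # same amplitude, opposite phase

Y = 1 - 2j                           # e + if with e = 1, f = -2
print(X * Y)                         # (ce - df) + i(cf + de) = 11 - 2j
print(X / Y)                         # (c + id)(e - if) / (e^2 + f^2) = -1 + 2j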
where fu and fv are complex valued functions and the superscript * means complex
conjugate. In particular, this type of decomposition, originally developed for
periodic functions, can be extended to nonperiodic functions, such as transients
and random signals, by assuming that they are periodic functions with period equal
to the duration T of the signal. For a nonperiodic signal x(t), the (forward) Fourier
transform (analysis equation: (2.17)) and the inverse Fourier transform (synthesis
equation: (2.18)) are given by:
X(f) = \int_{-\infty}^{+\infty} x(t)\, e^{-i 2\pi f t}\, dt    (2.17)

x(t) = \int_{-\infty}^{+\infty} X(f)\, e^{i 2\pi f t}\, df    (2.18)
Thus, (2.17) shows that any signal x(t) can be decomposed into a sum (represented by the integral) of sinusoidal functions (recall Euler's formulas relating complex exponentials and sinusoidal functions, (2.2) and (2.3)). In practical applications,
when the signal x(t) is recorded and analyzed by means of digital equipment, it is
represented by a sequence of values at discrete equidistant time instants. As a
consequence, only discrete time and frequency representations are considered,
and the expression of the Fourier transform has to be changed accordingly.
First of all, when dealing with discrete signals it is worth noting that the
sampling interval Δt is the inverse of the sampling frequency fs (representing the
rate by which the analog signal is sampled and digitized):
\Delta t = \frac{1}{f_s}    (2.19)
In order to properly resolve the signal, fs has to be selected so that it is at least twice
the highest frequency fmax in the time signal:
f_s \geq 2 f_{max}    (2.20)
\Delta f = \frac{1}{N \Delta t}    (2.21)
In the presence of a finite number N of samples, the frequency resolution Δf can
only be improved at the expense of the resolution in time Δt, and vice versa. As a
consequence, for a given sampling frequency, a small frequency spacing Δf is
always the result of a long measuring time T (large number of samples N):
T = N \Delta t    (2.22)
Assuming that the signal x(t) has been sampled at N equally spaced time instants and that the time spacing Δt has been properly selected (it satisfies Shannon's sampling theorem (2.20)), the obtained discrete signal is given by:

x_n = x(n \Delta t) \quad n = 0, 1, 2, \ldots, N-1    (2.23)
Taking into account the uncertainty principle expressed by (2.21), the discrete
frequency values for the computation of X(f) are given by:
f_k = \frac{k}{T} = \frac{k}{N \Delta t} \quad k = 0, 1, 2, \ldots, N-1    (2.24)
and the Fourier coefficients at these discrete frequencies are given by:
X_k = \sum_{n=0}^{N-1} x_n\, e^{-i 2\pi k n / N} \quad k = 0, 1, 2, \ldots, N-1    (2.25)
The Xk coefficients are complex numbers and the function defined in (2.25) is
often referred to as the Discrete Fourier Transform (DFT). Its evaluation requires
N² operations. As a consequence, in an attempt to reduce the number of operations,
the Fast Fourier Transform (FFT) algorithm has been developed (Cooley and
Tukey 1965). Provided that the number of data points equals a power of 2, the
number of operations is reduced to N · log₂N. The inverse DFT is given by:
x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k\, e^{i 2\pi k n / N} \quad n = 0, 1, 2, \ldots, N-1    (2.26)
The coefficient X0 captures the static component of the signal (DC offset).
The magnitude of the Fourier coefficient Xk relates to the magnitude of the sinusoid
of frequency fk that is contained in the signal with phase θk:
|X_k| = \sqrt{[\mathrm{Re}(X_k)]^2 + [\mathrm{Im}(X_k)]^2}    (2.27)

\theta_k = \arctan\frac{\mathrm{Im}(X_k)}{\mathrm{Re}(X_k)}    (2.28)
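A short numerical check (Python/NumPy; the 8 Hz, amplitude-2 test sinusoid and the sampling parameters are arbitrary choices) shows how the FFT coefficients recover the frequency content and amplitude of a sampled signal:

import numpy as np

fs = 64.0                       # sampling frequency in Hz (fs >= 2*fmax, Eq. 2.20)
N = 256                         # number of samples (a power of 2, so the FFT is efficient)
dt = 1.0 / fs                   # sampling interval, Eq. (2.19)
t = np.arange(N) * dt
x = 2.0 * np.sin(2 * np.pi * 8.0 * t)      # sinusoid: amplitude 2, frequency 8 Hz

X = np.fft.fft(x)               # DFT coefficients X_k, Eq. (2.25)
f = np.arange(N) / (N * dt)     # discrete frequencies f_k = k/(N*dt), Eq. (2.24)

k = np.argmax(np.abs(X[: N // 2]))
print(f[k])                     # 8.0 Hz: the frequency of the sinusoid
print(2.0 * np.abs(X[k]) / N)   # 2.0: amplitude recovered from |X_k| (factor 2/N for a real signal)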
The Fourier transform is a fundamental tool in signal analysis, and it has the
following important properties:
• Linearity: given two discrete signals x(t) and y(t), the Fourier transform of any
linear combination of the signals is given by the same linear combination of the
transformed signals X(f) and Y(f);
• Time shift: if X(f) is the Fourier transform of x(t), then X(f) e^{-i 2\pi f t_0} is the Fourier transform of x(t − t_0);
• Integration and differentiation: integrating in the time domain corresponds to
dividing by i2πf in the frequency domain, differentiating in the time domain to
multiplying by i2πf in the frequency domain;
• Convolution: convolution in time domain corresponds to a multiplication in the
frequency domain, and vice versa; for instance, the Fourier transform of the
following convolution integral:
a(t) = \int_{-\infty}^{+\infty} b(\tau)\, c(t - \tau)\, d\tau = b(t) * c(t)    (2.29)

is given by:

A(f) = B(f)\, C(f)    (2.30)
These last two properties are some of the major reasons for the extensive use of
the Fourier transform in signal processing, since complex calculations are
transformed into simple multiplications.
\mu_x(t) = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^{K} x_k(t)    (2.31)

R_{xx}(t, t+\tau) = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^{K} x_k(t)\, x_k(t+\tau)    (2.32)
Whenever the quantities expressed by (2.31) and (2.32) do not vary when the
considered time instant t varies, the random process is said to be (weakly)
stationary. For weakly stationary random processes, the mean value is independent
of the time t and the autocorrelation depends only on the time lag τ:
\mu_x(t) = \mu_x    (2.33)
In the following sections the basic descriptive properties for stationary random
records (probability density functions, auto- and cross-correlation functions, auto-
and cross-spectral density functions, coherence functions) are briefly discussed,
p(x) \geq 0    (2.37)

\int_{-\infty}^{+\infty} p(x)\, dx = 1    (2.38)

P(x) = \int_{-\infty}^{x} p(\zeta)\, d\zeta    (2.39)

P(a) \leq P(b) \quad \text{if } a \leq b    (2.40)

P(-\infty) = 0, \quad P(+\infty) = 1    (2.41)
the two random variables are statistically independent; for statistically independent
variables it also follows that:
When a random variable assumes values in the range (−∞, +∞), its mean value
(or expected value) can be computed from the product of each value with its
probability of occurrence as follows:
E[x_k] = \int_{-\infty}^{+\infty} x\, p(x)\, dx = \mu_x    (2.48)
E\left[(x_k - \mu_x)^2\right] = \int_{-\infty}^{+\infty} (x - \mu_x)^2\, p(x)\, dx = \psi_x^2 - \mu_x^2 = \sigma_x^2    (2.50)
C_{xy} = E\left[(x_k - \mu_x)(y_k - \mu_y)\right] = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} (x - \mu_x)(y - \mu_y)\, p(x, y)\, dx\, dy    (2.51)
Taking into account that the following relation exists between the covariance of
the two random variables and their respective standard deviations:
\left|C_{xy}\right| \leq \sigma_x \sigma_y    (2.52)

the correlation coefficient can be defined as:

\rho_{xy} = \frac{C_{xy}}{\sigma_x \sigma_y}    (2.53)
It assumes values in the range [−1, +1]. When it is zero, the two random variables
are uncorrelated. It is worth noting that, while independent random variables are also
uncorrelated, uncorrelated variables are not necessarily independent. However, it is
possible to show that, for physically important situations involving two or more
normally distributed random variables, being mutually uncorrelated does imply
independence (Bendat and Piersol 2000).
Relevant distributions for the analysis of data in view of modal identification are
the sine wave distribution and the Gaussian (or normal) distribution.
When a random variable follows a Gaussian distribution, its probability density
function is given by:
p(x) = \frac{1}{\sigma_x \sqrt{2\pi}}\, e^{-\frac{(x - \mu_x)^2}{2\sigma_x^2}}    (2.54)
while its probability distribution function is:
P(x) = \frac{1}{\sigma_x \sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{(\zeta - \mu_x)^2}{2\sigma_x^2}}\, d\zeta    (2.55)
with μx and σx denoting the mean value and the standard deviation of the random
variable, respectively. The Gaussian probability density and distribution functions
are often expressed in terms of the standardized variable z, characterized by zero
mean and unit variance:
z = \frac{x - \mu_x}{\sigma_x}    (2.56)
for convenience of plotting and applications. The Gaussian probability density and
distribution functions in standardized form are given by:
p(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}    (2.57)
P(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-\frac{\zeta^2}{2}}\, d\zeta    (2.58)
[Fig. 2.2 Standardized Gaussian probability density function p(z)]
[Fig. 2.3 Standardized Gaussian probability distribution function P(z)]
Figures 2.2 and 2.3 show the plots of (2.57) and (2.58).
The importance of the Gaussian distribution in physical problems can be partially attributed to the central limit theorem (Papoulis 1991). It states that, given K mutually independent random variables, whatever their (possibly different) distributions, their sum tends to a normally distributed random variable as K → ∞.
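The theorem is easy to visualize numerically; a small sketch (Python/NumPy, with an arbitrary choice of K and of the underlying uniform distribution) compares the standardized sum of K independent variables with the Gaussian density of (2.57):

import numpy as np

rng = np.random.default_rng(0)
K = 30                                   # number of independent variables to sum
n = 100_000                              # number of realizations of the sum

# Sum of K independent uniform variables, standardized to zero mean and unit variance
s = rng.uniform(-1.0, 1.0, size=(n, K)).sum(axis=1)
z = (s - s.mean()) / s.std()

# Empirical density at a few points versus the standardized Gaussian density, Eq. (2.57)
pts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
hist, edges = np.histogram(z, bins=100, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.round(np.interp(pts, centers, hist), 3))
print(np.round(np.exp(-pts**2 / 2) / np.sqrt(2 * np.pi), 3))   # the two rows nearly coincide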
A sine wave characterized by given amplitude A and frequency f0 can be
considered a random variable when its initial phase angle is a random variable.
[Fig. 2.4 Probability density function of a sine wave]
p(x) = \frac{1}{\pi \sqrt{A^2 - x^2}}, \quad |x| < A    (2.59)
while its mean value and variance are:
\mu_x = 0, \quad \sigma_x^2 = \frac{A^2}{2}    (2.60)
The plot of the probability density function of the sine wave is shown in Fig. 2.4.
If a Gaussian noise n(t) (with zero mean and variance σ_n²) is added to the above-mentioned sine wave, the signal-to-noise ratio R is given by:
R = \frac{\sigma_s^2}{\sigma_n^2} = \frac{A^2}{2\sigma_n^2}    (2.63)
The estimation of probability density functions from recorded data and the
analysis of their shape provide an effective means for the identification of spurious
harmonics superimposed to the stochastic response of the structure under test, as
shown in Chap. 5.
It is worth noting that, when dealing with finite records of the structural
response, an exact knowledge of parameters, such as mean and variance, and,
therefore, of probability density functions is generally not available. Only estimates
based on finite datasets can be obtained. Thus, it is desirable to get high quality
estimates from the available data. They can be obtained through an appropriate
choice of the estimator. Since different estimators exist for the same quantity, the
choice should be oriented towards estimators that are:
• Unbiased: the expected value of the estimator is equal to the parameter being estimated;
• Efficient: the mean square error of the estimator is smaller than for other possible
estimators;
• Consistent: the estimator approaches the parameter under estimation with a
probability approaching unity as the sample size increases.
Thus, even if a different choice is possible, the unbiased estimators for the mean
and variance given by:
\hat{\mu}_x = \frac{1}{N} \sum_{i=1}^{N} x_i    (2.64)

\hat{\sigma}_x^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \hat{\mu}_x)^2    (2.65)
are adopted in the following; the hat (^) indicates that the quantities in (2.64)
and (2.65) are estimates of the true mean and variance based on a finite number
of samples.
The probability density function of a record can be estimated by dividing the full
range of data into a number of intervals characterized by the same (narrow) width.
For instance, assuming that [a, b] is the full range of data values, it can be divided
into K intervals characterized by the equal width W:
W = \frac{b - a}{K}    (2.66)
Then, the number N_k of data values falling into the k-th interval [d_{k-1}, d_k], with

d_{k-1} = a + (k-1)W, \quad d_k = a + kW    (2.67)

is counted, and the probability density estimate over that interval is given by:

\hat{p}(x) = \frac{N_k}{NW}    (2.68)
Note that the first interval includes all values not larger than a, while the last
interval includes all values strictly larger than b. Moreover:
\sum_{k=0}^{K+1} N_k = N    (2.69)
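The estimators (2.64)-(2.68) can be put together in a few lines; the sketch below (Python/NumPy, with an illustrative record made of a sine wave plus Gaussian noise, the case discussed above) estimates mean, variance, and the probability density, whose shape reveals the harmonic component:

import numpy as np

rng = np.random.default_rng(0)
N = 50_000
t = np.arange(N) * 0.01
x = 2.0 * np.sin(2 * np.pi * 1.5 * t) + rng.normal(0.0, 0.5, N)   # sine (A = 2) + Gaussian noise

# Unbiased estimates of mean and variance, Eqs. (2.64)-(2.65)
mu_hat = x.sum() / N
var_hat = ((x - mu_hat) ** 2).sum() / (N - 1)

# Probability density estimate by intervals of equal width W, Eqs. (2.66)-(2.68)
K = 50
a, b = x.min(), x.max()
W = (b - a) / K
counts, _ = np.histogram(x, bins=K, range=(a, b))
p_hat = counts / (N * W)
centers = a + (np.arange(K) + 0.5) * W

print(round(mu_hat, 3), round(var_hat, 3))              # close to 0 and A^2/2 + sigma_n^2 = 2.25
print(round(float(p_hat.sum() * W), 3))                 # about 1: the estimate integrates to one
print(round(float(p_hat[np.abs(centers).argmin()]), 3))         # density near x = 0
print(round(float(p_hat[np.abs(centers - 2.0).argmin()]), 3))   # density near x = +A: noticeably larger (bimodal shape of a harmonic)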
If the mean values are both equal to zero, the covariance functions coincide with
the correlation functions:
R_{xx}(\tau) = E\left[x_k(t)\, x_k(t+\tau)\right]    (2.76)
R_{yy}(\tau) = E\left[y_k(t)\, y_k(t+\tau)\right]    (2.77)

R_{xy}(\tau) = E\left[x_k(t)\, y_k(t+\tau)\right]    (2.78)
The quantities Rxx and Ryy are called auto-correlation functions of xk(t) and
yk(t), respectively; Rxy is called cross-correlation function between xk(t) and yk(t).
When the mean values are not zero, covariance functions and correlation
functions are related by the following equations:
Taking into account that two stationary random processes are uncorrelated if C_xy(τ) = 0 for all τ (2.53) and that this implies R_xy(τ) = μ_x μ_y for all τ (2.81), if μ_x or μ_y equals zero the two processes are uncorrelated when R_xy(τ) = 0 for all τ.
Taking into account that the cross-correlation function and the cross-covariance
function are bounded by the following inequalities:
\left|C_{xy}(\tau)\right|^2 \leq C_{xx}(0)\, C_{yy}(0)    (2.82)

\left|R_{xy}(\tau)\right|^2 \leq R_{xx}(0)\, R_{yy}(0)    (2.83)
When the mean values and covariance (correlation) functions of the considered
stationary random processes can be directly computed by means of time averages
on an arbitrary pair of sample records instead of computing ensemble averages, the
two stationary random processes are said to be weakly ergodic. In other words, in
the presence of two ergodic processes, the statistical properties of weakly stationary
random processes can be determined from the analysis of a pair of sample records
only, without the need of collecting a large amount of data. As a consequence, in the
presence of two ergodic processes, the mean values of the individual sample
functions can be computed by a time average as follows:
\mu_x(k) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} x_k(t)\, dt = \mu_x    (2.87)

\mu_y(k) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} y_k(t)\, dt = \mu_y    (2.88)
The index k denotes that the k-th sample function has been chosen for the
computation of the mean value: since the processes are ergodic, the results are
independent of this choice (μ_x(k) = μ_x, μ_y(k) = μ_y). It is worth pointing out that
the mean values are also independent of the time t.
In a similar way, auto- and cross-covariance functions can be computed directly
from the k-th sample function as follows:
C_{xx}(\tau, k) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \left[x_k(t) - \mu_x\right]\left[x_k(t+\tau) - \mu_x\right] dt = R_{xx}(\tau, k) - \mu_x^2    (2.89)

C_{yy}(\tau, k) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \left[y_k(t) - \mu_y\right]\left[y_k(t+\tau) - \mu_y\right] dt = R_{yy}(\tau, k) - \mu_y^2    (2.90)

C_{xy}(\tau, k) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \left[x_k(t) - \mu_x\right]\left[y_k(t+\tau) - \mu_y\right] dt = R_{xy}(\tau, k) - \mu_x \mu_y    (2.91)
and, the processes being ergodic, the results are independent of the chosen function (C_xx(τ, k) = C_xx(τ), C_yy(τ, k) = C_yy(τ), C_xy(τ, k) = C_xy(τ)).
It is worth pointing out that only stationary random processes can be ergodic.
When a stationary process is also ergodic, the generic sample function is represen-
tative of all others so that the first- and second-order properties of the process can
be computed from an individual sample function by means of time averages.
With stationary and ergodic processes, the auto- and cross-correlation functions
are given by the following expressions:
R_{xx}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} x(t)\, x(t+\tau)\, dt    (2.92)

R_{yy}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} y(t)\, y(t+\tau)\, dt    (2.93)

R_{xy}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} x(t)\, y(t+\tau)\, dt    (2.94)
\hat{R}_{xx}(r \Delta t) = \frac{1}{N - r} \sum_{n=1}^{N-r} x_n\, x_{n+r} \quad r = 0, 1, 2, \ldots, m    (2.96)
for a stationary record with zero mean (μ = 0) and uniformly sampled data at Δt.
Thus, (2.96) provides an unbiased estimate of the auto-correlation function at
the time delay rΔt, where r is also called the lag number and m denotes the
maximum lag.
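A direct implementation of the unbiased estimator (2.96) takes only a few lines; the sketch below (Python/NumPy, with an illustrative zero-mean record made of a sine wave plus noise) also checks that the estimate at lag zero approaches the variance of the record:

import numpy as np

def autocorr_unbiased(x, m):
    # Unbiased auto-correlation estimate at lags r = 0..m (Eq. 2.96); zero-mean record assumed
    N = len(x)
    return np.array([np.sum(x[: N - r] * x[r:]) / (N - r) for r in range(m + 1)])

rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(20_000) * dt
x = np.sin(2 * np.pi * 2.0 * t) + rng.normal(0.0, 0.3, t.size)   # zero-mean test record

R = autocorr_unbiased(x, m=200)
print(round(float(R[0]), 3))     # lag 0: close to the variance 0.5 + 0.3^2 = 0.59
print(round(float(R[25]), 3))    # lag 0.25 s (half the sine period): close to -0.5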
Given a pair of sample records xk(t) and yk(t) of finite duration T from stationary
random processes, their Fourier transforms (which exist as a consequence of the
finite duration of the signals) are:
X_k(f, T) = \int_{0}^{T} x_k(t)\, e^{-i 2\pi f t}\, dt    (2.97)
Y_k(f, T) = \int_{0}^{T} y_k(t)\, e^{-i 2\pi f t}\, dt    (2.98)
and the two-sided auto- and cross-spectral density functions are defined as follows:
S_{xx}(f) = \lim_{T \to \infty} \frac{1}{T} E\left[X_k^*(f, T)\, X_k(f, T)\right]    (2.99)

S_{yy}(f) = \lim_{T \to \infty} \frac{1}{T} E\left[Y_k^*(f, T)\, Y_k(f, T)\right]    (2.100)

S_{xy}(f) = \lim_{T \to \infty} \frac{1}{T} E\left[X_k^*(f, T)\, Y_k(f, T)\right]    (2.101)
where * denotes complex conjugate. Two-sided means that S(f) is defined for f in the range (−∞, +∞); the expected value operation is working over the ensemble index k. The one-sided auto- and cross-spectral density functions, with f varying in the range (0, +∞), are given by:
G_{xx}(f) = 2 S_{xx}(f) = 2 \lim_{T \to \infty} \frac{1}{T} E\left[\left|X_k(f, T)\right|^2\right] \quad 0 < f < +\infty    (2.102)

G_{yy}(f) = 2 S_{yy}(f) = 2 \lim_{T \to \infty} \frac{1}{T} E\left[\left|Y_k(f, T)\right|^2\right] \quad 0 < f < +\infty    (2.103)

G_{xy}(f) = 2 S_{xy}(f) = 2 \lim_{T \to \infty} \frac{1}{T} E\left[X_k^*(f, T)\, Y_k(f, T)\right] \quad 0 < f < +\infty    (2.104)
The two-sided spectral density functions are more commonly adopted in
theoretical derivations and mathematical calculations, while the one-sided spectral
density functions are typically used in the applications. In particular, in practical
applications the one-sided spectral density functions are always the result of Fourier
transforms of records of finite length (T < ∞) and of averaging of a finite number of
ensemble elements.
Before analyzing the computation of PSDs in practice, it is interesting to note
that PSDs and correlation functions are Fourier transform pairs. Assuming that
mean values have been removed from the sample records and that the integrals of
the absolute values of the correlation functions are finite (this is always true for
finite record lengths), that is to say:
\int_{-\infty}^{+\infty} \left|R(\tau)\right| d\tau < \infty    (2.105)
the two-sided spectral density functions are the Fourier transforms of the
correlation functions:
S_{xx}(f) = \int_{-\infty}^{+\infty} R_{xx}(\tau)\, e^{-i 2\pi f \tau}\, d\tau    (2.106)

S_{yy}(f) = \int_{-\infty}^{+\infty} R_{yy}(\tau)\, e^{-i 2\pi f \tau}\, d\tau    (2.107)

S_{xy}(f) = \int_{-\infty}^{+\infty} R_{xy}(\tau)\, e^{-i 2\pi f \tau}\, d\tau    (2.108)
The one-sided cross-spectral density function is complex valued and can be written in terms of its real and imaginary parts as Gxy(f) = Cxy(f) − iQxy(f), where Cxy(f) is called the coincident spectral density function (co-spectrum) and Qxy(f) is the quadrature spectral density function (quad-spectrum). The one-sided cross-spectral density function can be also expressed in complex polar notation as follows:

$$G_{xy}(f) = \left| G_{xy}(f) \right| e^{-i \theta_{xy}(f)} \qquad 0 < f < \infty \qquad (2.112)$$

where:

$$\left| G_{xy}(f) \right| = \sqrt{C_{xy}^2(f) + Q_{xy}^2(f)} \qquad (2.113)$$

$$\theta_{xy}(f) = \arctan \frac{Q_{xy}(f)}{C_{xy}(f)}. \qquad (2.114)$$
Taking into account that the cross-spectral density function is bounded by the cross-spectrum inequality:

$$\left| G_{xy}(f) \right|^2 \le G_{xx}(f)\, G_{yy}(f) \qquad (2.115)$$

the coherence function can be defined as:

$$\gamma_{xy}^2(f) = \frac{\left| G_{xy}(f) \right|^2}{G_{xx}(f)\, G_{yy}(f)} = \frac{\left| S_{xy}(f) \right|^2}{S_{xx}(f)\, S_{yy}(f)} \qquad (2.116)$$

where:

$$0 \le \gamma_{xy}^2(f) \le 1 \qquad \forall f. \qquad (2.117)$$
Note that the conversion from two-sided to one-sided spectral density functions doubles the amplitude (|Sxy(f)| = |Gxy(f)|/2) while preserving the phase.
It is worth pointing out two important properties of Gaussian random processes
for practical applications. First, it can be shown (Bendat and Piersol 2000) that if a
Gaussian process undergoes a linear transformation, the output is still a Gaussian
process. Moreover, given a sample record of an ergodic Gaussian random process
with zero mean, it can be shown (Bendat and Piersol 2000) that the Gaussian
probability density function p(x):
$$p(x) = \frac{1}{\sigma_x \sqrt{2\pi}}\, e^{-\frac{x^2}{2\sigma_x^2}} \qquad (2.118)$$
is completely determined by the knowledge of the auto-spectral density function.
In fact, taking into account that (Bendat and Piersol 2000):
$$\sigma_x^2 = \int_{-\infty}^{+\infty} x^2\, p(x)\, dx = \int_0^{+\infty} G_{xx}(f)\, df, \qquad (2.119)$$
Gxx(f) alone determines σx. As a consequence, spectral density functions (and their
Fourier transform pairs, the correlation functions) play a fundamental role in the
analysis of random data, since they contain the information of interest.
In practical applications, PSDs can be obtained by computing the correlation
functions first and then Fourier transforming them. This approach is known as the
Blackman-Tukey procedure. Another approach, known as the Welch procedure, is,
instead, based on the direct computation of the FFT of the records and the estima-
tion of the PSDs in agreement with (2.102)–(2.104). The Welch procedure is less computationally demanding than the Blackman-Tukey method, but it requires some operations on the signal in order to improve the quality of the estimates.
According to (2.102)–(2.104), the one-sided auto-spectral density function can be estimated by dividing a record into nd contiguous segments, each of length T = NΔt, Fourier transforming each segment, and then averaging the auto-spectral density estimates of the individual segments:

$$\hat{G}_{xx}(f) = \frac{2}{n_d N \Delta t} \sum_{i=1}^{n_d} \left| X_i(f) \right|^2. \qquad (2.120)$$

[Figure: amplitude |U(f)| of the spectral window associated with the rectangular time window, plotted against frequency in multiples of 1/T.]
The number of data values N in each segment is often called the block size for the
computation of each FFT; it determines the frequency resolution of the resulting
estimates. The number of averages nd, instead, determines the random error of the
estimates, as discussed in Sect. 2.2.5.
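A minimal Python sketch of the segment-averaging estimate (2.120) is reported below. It assumes a zero-mean record, contiguous (non-overlapping) segments and no tapering window, so it only illustrates the roles of the block size N and of the number of averages nd; the test signal is an assumption used for illustration.

import numpy as np

def averaged_psd(x, fs, nd):
    """One-sided auto-spectral density per (2.120), averaged over nd contiguous segments."""
    x = np.asarray(x, dtype=float)
    N = x.size // nd                          # block size of each segment
    dt = 1.0 / fs
    segments = x[:nd * N].reshape(nd, N)
    X = np.fft.rfft(segments, axis=1) * dt    # approximate finite Fourier transform X_i(f, T)
    Gxx = 2.0 / (nd * N * dt) * np.sum(np.abs(X) ** 2, axis=0)
    f = np.fft.rfftfreq(N, dt)                # frequency resolution: df = 1/(N*dt)
    return f, Gxx

# usage example with a synthetic record: a 1.5 Hz sine in noise, sampled at 10 Hz
fs = 10.0
t = np.arange(0, 600, 1 / fs)
x = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
f, Gxx = averaged_psd(x, fs, nd=20)

Increasing nd reduces the random error of the estimate but, for a fixed total record length, shortens each segment and therefore coarsens the frequency resolution, as discussed above.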
Even if the direct computation via FFT of the spectral density function is
advantageous from a computational point of view, specific strategies are required
to eliminate the errors originating from the fact that the estimates are based on
records of finite length. A sample record x(t) can be interpreted as an unlimited
record v(t) multiplied by a rectangular time window u(t):
$$x(t) = u(t)\, v(t), \qquad u(t) = \begin{cases} 1 & 0 \le t \le T \\ 0 & \text{elsewhere} \end{cases} \qquad (2.121)$$
The multiplication in the time domain corresponds to a convolution of the true spectrum with the spectral window U(f) in the frequency domain, which distorts the estimates (leakage) unless the data are periodic with a period equal to the record length. In such a case, in fact, the discrete frequency values, equally spaced at Δf = 1/T, coincide with zeros of the spectral window in the frequency domain, with the only exception of the frequency line in the main lobe. The result is an exact reproduction of the correct spectrum. Thus, in order to suppress the leakage problem, data are made periodic by tapering them with an appropriate time window, which
eliminates the discontinuities at the beginning and end of the analyzed record. There
are different options for the choice of the window (Heylen et al. 1998). Here, the most
commonly employed window is introduced. It is the full cosine tapering window, also
known as Hanning window, which is given by:
$$u_{Hanning}(t) = \begin{cases} 1 - \cos^2\left( \dfrac{\pi t}{T} \right) & 0 \le t \le T \\ 0 & \text{elsewhere} \end{cases} \qquad (2.122)$$
The highest side lobe level of the Hanning window is 32 dB below the main
lobe. Thus, leakage is minimized. However, the use of the Hanning window to
compute spectral density estimates by Fourier transform techniques implies a loss
factor of 3/8:
$$\frac{\displaystyle \int_0^T u_{Hanning}^2(t)\, dt}{\displaystyle \int_0^T u^2(t)\, dt} = \frac{3}{8}. \qquad (2.123)$$
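The loss factor in (2.123) can be checked numerically. The short sketch below (in Python, not part of the book's LabVIEW material) builds the Hanning window of (2.122) on an assumed unit-length record and verifies that the ratio of the integrals is 3/8; on a common uniform time grid, the ratio of the mean squared values gives the same result as the ratio of the integrals.

import numpy as np

# full cosine taper (Hanning) window of (2.122), sampled on 0 <= t <= T
T = 1.0
t = np.linspace(0.0, T, 10_001)
u_hanning = 1.0 - np.cos(np.pi * t / T) ** 2   # equals sin^2(pi*t/T)
u_rect = np.ones_like(t)

# ratio of the integrals in (2.123)
loss_factor = np.mean(u_hanning ** 2) / np.mean(u_rect ** 2)
print(loss_factor)   # approximately 0.375 = 3/8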
In Sect. 2.2.2 the definition of unbiased estimator has been reported. Attention is
herein focused on the errors affecting the estimates. In fact, a recommended length
of the records for OMA applications can be obtained from the analysis of errors in
spectral density estimates.
From a general point of view, the repetition of a certain experiment leads to a number of estimates x̂ of the quantity of interest x. When the expected value of the estimates over the K experiments is equal to the true value x, the estimate x̂ is unbiased. On the contrary, when there is a scatter between the expected value of the estimates and the true value, it is possible to define the bias b[x̂] of the estimate:

$$b[\hat{x}] = E[\hat{x}] - x. \qquad (2.124)$$
The bias error is a systematic error occurring with the same magnitude and in
the same direction when measurements are repeated under the same conditions.
The variance of the estimate:
$$\mathrm{Var}[\hat{x}] = E\left[ \left( \hat{x} - E[\hat{x}] \right)^2 \right] \qquad (2.125)$$
describes the random error, namely the nonsystematic error occurring in different directions and with different magnitudes when measurements are repeated under the same conditions.
The mean square error:
$$\mathrm{mse}[\hat{x}] = E\left[ \left( \hat{x} - x \right)^2 \right] = \mathrm{Var}[\hat{x}] + \left( b[\hat{x}] \right)^2 \qquad (2.126)$$

accounts for both the random and the bias error. For spectral density estimates obtained by segment averaging, the normalized rms (random) error of an estimate is a function only of the total record length Tr and of the frequency resolution Δf:

$$\varepsilon_r^2 = \frac{1}{T_r\, \Delta f}. \qquad (2.128)$$

Since Tr = nd T and Δf = 1/T, the number of averages is:

$$n_d = \frac{1}{\varepsilon_r^2}. \qquad (2.129)$$

A random error not larger than about 10 % therefore requires:

$$\varepsilon_r = \frac{1}{\sqrt{n_d}} \le 0.10 \qquad (2.130)$$

that is to say:

$$n_d \ge 100. \qquad (2.131)$$
On the other hand, a negligible bias error, in the order of 2 %, can be obtained
by choosing (Bendat and Piersol 2000):
$$\Delta f = \frac{1}{T} = \frac{B_r}{4} = \frac{2\, \xi_n\, \omega_n}{4} \qquad (2.132)$$
where Br is the half power bandwidth at the natural frequency ωn, ξn is the
associated damping ratio, while T is the length of the i-th data segment. The relation
between the total record length Tr and the length T of the i-th data segment is:
$$T_r = n_d\, T \;\Rightarrow\; T = \frac{T_r}{n_d}. \qquad (2.133)$$
Taking into account the relation between natural circular frequency and natural
period of a mode and that the fundamental mode is characterized by the longest
natural period, the substitution of (2.133) into (2.132) yields the following
expression:
$$T_r = \frac{n_d}{\pi\, \xi_1}\, T_1 \qquad (2.134)$$
relating the total record length to the fundamental period of the structure under investigation. Assuming nd = 100 and a typical value for the damping ratio—for instance, about 1.5 % for reinforced concrete (r.c.) structures in operational conditions—a suggested value for the total record length is about 1,000–2,000 times the natural period of the fundamental mode of the structure, in agreement with similar suggestions reported in the literature (Cantieni 2004).
Taking into account the assumptions behind the formula given in (2.134), the suggested value for the total record length minimizes the random error εr and suppresses the leakage (since a very small value for the bias error εb has been set).
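The following short sketch (in Python) simply turns (2.134) into numbers; the fundamental period and damping ratio used here are assumed values chosen only to reproduce the order of magnitude quoted above.

import numpy as np

def recommended_record_length(T1, xi1, nd=100):
    """Total record length Tr [s] from (2.134): Tr = nd * T1 / (pi * xi1)."""
    return nd * T1 / (np.pi * xi1)

T1 = 0.8          # fundamental period of the structure [s] (assumed value)
xi1 = 0.015       # damping ratio of the fundamental mode (about 1.5 %)
Tr = recommended_record_length(T1, xi1)
print(Tr, Tr / T1)   # about 1,700 s, i.e. roughly 2,100 fundamental periods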
2.3 Matrix Algebra and Inverse Problems

Most OMA methods are based on fitting of an assumed mathematical model to the
measured data. In such a case, the ultimate task is to determine the unknown modal
parameters of the system from the measured response of the structure under certain
assumptions about the input. This is an example of inverse problem. The solution of
inverse problems is based on matrix algebra, including methods for matrix
decomposition.
Matrix algebra plays a relevant role also in the case of those OMA methods that
extract the modal parameters without assumptions about the system that produced
the measured data. Thus, a review of basics of matrix algebra and of some methods
for the solution of inverse problems is helpful to understand the mathematical
background of the OMA methods described in Chap. 4.
Consider the generic matrix:

$$[A] = \begin{bmatrix} a_{1,1} & \cdots & a_{1,M} \\ \cdots & a_{i,j} & \cdots \\ a_{L,1} & \cdots & a_{L,M} \end{bmatrix} \qquad (2.135)$$

of dimensions L × M, whose generic element a_{i,j} lies in the i-th row and j-th column. The product of two matrices [A] and [B] is defined when the number of columns of [A] equals the number of rows of [B]; the following expression gives the generic element of the product matrix in terms of the elements in the i-th row and j-th column of the two matrices in the product:

$$[C] = [A][B], \qquad c_{i,j} = \sum_k a_{i,k}\, b_{k,j} \qquad (2.136)$$
Switching of columns and rows of a certain matrix [A] provides the transpose
matrix [A]T. When a square matrix coincides with its transpose, it is said to be
symmetric. If the matrix [A] is complex-valued, its Hermitian adjoint [A]H is
obtained by transposing the matrix [A]* whose elements are the complex conjugates
of the individual elements of the original matrix [A]. A square matrix identical to its
Hermitian adjoint is said to be Hermitian; real-valued symmetric matrices are a
special case of Hermitian matrices.
The inverse [A]^{-1} of the matrix [A] is such that [A]^{-1}[A] = [I]. A determinant equal to zero characterizes a singular (noninvertible) matrix. On the contrary, if the determinant is nonzero, the matrix is invertible (or nonsingular). The matrix [A] is orthogonal if its inverse and its transpose coincide: [A]^{-1} = [A]^T (its rows and columns are, therefore, orthonormal vectors, that is to say, orthogonal unit vectors). If [A] is complex-valued, it is a unitary matrix if its Hermitian adjoint and its inverse coincide: [A]^{-1} = [A]^H. The following relation holds:

$$\left( [A][B] \right)^{-1} = [B]^{-1}[A]^{-1} \qquad (2.137)$$
The rank r([A]) of a matrix [A] is given by the number of independent rows or columns in [A]. By definition, a row (column) in a matrix is linearly independent if it cannot be computed as a linear combination of the other rows (columns). If the rank of an L × L matrix [A] is r([A]) = S, with S < L, then there exists a submatrix of [A] with dimensions S × S and nonzero determinant.
When the matrix [A] acts as a linear operator transforming a certain vector {x} into a new vector {y}:

$$\{y\} = [A]\{x\} \qquad (2.141)$$

and [A] is rank deficient, some nonzero vectors {x} are mapped into the null vector {0}. Those vectors define a subspace of {x} called the null space. As a consequence, not all the space of {y} can be reached from {x}; the range of [A] defines the subset of the space of {y} which can be reached through the transformation defined in (2.141). The dimension of the range coincides with the rank of [A], and the sum of the dimensions of the null space and the range equals the dimension L of the matrix. If the matrix [A] is invertible:

$$\{y\} = [A]\{x\} = \{0\} \;\Leftrightarrow\; \{x\} = \{0\}. \qquad (2.143)$$

When [A] is also symmetric and positive definite, the eigenvalues are real and positive and the matrix [A] is invertible.
When dealing with systems of equations, the matrix inversion can be more
effectively implemented by decomposing the matrix into factors. There are differ-
ent types of matrix decomposition methods. The eigenvalue decomposition (EVD)
provides an expression for the invertible square matrix [A] as a product of three
matrices:
$$[A] = [X][\Lambda][X]^{-1} \qquad (2.147)$$
where the columns of [X] are the eigenvectors of [A] while [Λ] is a diagonal matrix
containing the corresponding eigenvalues of [A]. Taking advantage of the eigen-
value decomposition of [A] and of (2.137), the inverse of [A] can be obtained as:
$$[A]^{-1} = \left( [X][\Lambda][X]^{-1} \right)^{-1} = [X][\Lambda]^{-1}[X]^{-1}. \qquad (2.148)$$
The elements in [Λ]^{-1} are the inverses of the eigenvalues of [A]. Note that, whenever the matrix [A] is also symmetric, the matrix [X] is orthogonal.
The Singular Value Decomposition (SVD) can be considered an extension of the EVD to rectangular matrices. The SVD of a real-valued matrix [A] of dimensions L × M, with L ≥ M and r([A]) ≤ M, is given by:

$$[A] = [U][\Sigma][V]^T \qquad (2.149)$$

where [U] and [V] are orthogonal matrices and the nonzero elements of [Σ], placed on its main diagonal, are the singular values of [A].
The approach to fitting depends on the selected model. For the sake of clarity, in this section the main concepts are illustrated with reference to a very simple and general polynomial function:
$$y(x) = c_0 + c_1 x + c_2 x^2 + \ldots + c_{L-1} x^{L-1} \qquad (2.151)$$

Evaluating (2.151) at the M measurement points leads to the following set of equations:

$$\begin{aligned} y_1 &= c_0 + c_1 x_1 + c_2 x_1^2 + \ldots + c_{L-1} x_1^{L-1} \\ &\;\;\vdots \\ y_i &= c_0 + c_1 x_i + c_2 x_i^2 + \ldots + c_{L-1} x_i^{L-1} \\ &\;\;\vdots \\ y_M &= c_0 + c_1 x_M + c_2 x_M^2 + \ldots + c_{L-1} x_M^{L-1} \end{aligned} \qquad (2.152)$$

which can be written in matrix form as:

$$\{y\} = [M]\{c\} \qquad (2.153)$$

where [M] collects the powers of the measurement points x_i and {c} is the vector of the unknown coefficients.
Note that the setting of the problem in matrix form does not require a linear
functional relation between y and x. A linear combination of basis functions of x is
also appropriate. In general, there are more measurements than unknowns (M > L),
so that an overdetermined set of equations is defined, and measurements are noisy.
It is worth pointing out that the problem is definitely underdetermined when the
number of unknowns L exceeds the number of equations M. In this case the inverse
problem cannot lead to a unique solution and additional information has to be
provided or the number of unknowns has to be reduced. On the contrary, when M > L the problem may actually be overdetermined, but it can also be even-determined or underdetermined, depending on the possible presence of interrelated measurements which do not provide additional information.
measurements which do not provide additional information. Thus, the rank of the
matrix in (2.153) actually determines if the problem is overdetermined or
underdetermined. However, in practical applications, the sole determination of the
rank of a matrix can be misleading due to the presence of measurement noise. For
instance, the rank of the following matrix:
$$\begin{bmatrix} 1 & 0 \\ 1 & 10^{-8} \end{bmatrix} \qquad (2.154)$$
is 2 but the second row can be considered linearly dependent on the first row from
the practical point of view, since it does not provide a significant contribution of
information to the solution of the inverse problem. In similar conditions the SVD of
the matrix can provide more valuable information about the type of inverse
problem. In fact, the condition number κ, defined as the ratio between the maximum and minimum absolute values of the singular values, can be computed to assess if the matrix is noninvertible (κ = ∞), ill-conditioned (κ very large) or invertible (small κ). Since the small singular values in ill-conditioned problems magnify the errors, considering only the subset of the largest singular values can reduce their effect.
The selection of the number of singular values to be retained is usually based on
sorting of the singular values and identification of jumps; in the absence of jumps, a
selection ensuring numerical stability is carried out.
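As an illustration, the following Python sketch computes the SVD and the condition number of the matrix in (2.154) and retains only the singular values above a relative tolerance; the tolerance value is an assumption chosen for illustration, not a prescription of the book.

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1e-8]])

U, s, Vt = np.linalg.svd(A)
kappa = s.max() / s.min()          # condition number
print(s, kappa)                    # kappa is very large: the matrix is ill-conditioned

# keep only the singular values above a relative tolerance
tol = 1e-6 * s.max()
r_eff = np.sum(s > tol)            # effective (numerical) rank
print(r_eff)                       # 1 for this matrix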
Assuming that a curve fitting the measured data has been found and the functional relation between y and x in (2.151) has been established, there will be an error (or residual) associated to the i-th measurement. It can be computed as the difference between the predicted (y_{i,pred}) and the measured (y_{i,meas}) value of y:

$$\varepsilon_i = y_{i,pred} - y_{i,meas}. \qquad (2.155)$$
Thus, the objective of the analysis is the estimation of the unknown coefficients
(c0, c1, . . ., cL-1) from the measured data in a way able to minimize the sum of the
residuals when all measurements are taken into account.
Different definitions for the residuals can be adopted, taking into account that the
selected error definition has an influence on the estimation of the unknown
parameters. For instance, when the data are characterized by the presence of very
large and very small values in the same set, the computation of the residuals
according to (2.155) biases the inversion towards the largest values. In such cases alternative definitions of the residual can be considered; additional error definitions can be found in the literature (see, for instance, Santamarina and Fratta 2005).
A global evaluation of the quality of the fit can be obtained from the computation
of the norm of the vector of residuals {ε}:
$$\{\varepsilon\} = \begin{Bmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_i \\ \vdots \\ \varepsilon_M \end{Bmatrix} \qquad (2.158)$$
The order of the norm is related to the weight placed on the larger errors:
the higher the order of the norm, the higher the weight of the larger errors.
Three notable norms are:
$$L_1 = \sum_i \left| \varepsilon_i \right| \qquad (2.160)$$

$$L_2 = \sqrt{\sum_i \left| \varepsilon_i \right|^2} = \sqrt{\{\varepsilon\}^T \{\varepsilon\}} \qquad (2.161)$$

$$L_\infty = \max_i \left| \varepsilon_i \right| \qquad (2.162)$$

The L1 norm provides a robust solution, since it is not sensitive to a few large errors in the data; the L2 norm is compatible with additive Gaussian noise present in the data; the L∞ norm considers only the largest error and, as a consequence, is the most sensitive to errors in the data.
Based on the previous definitions, the least squares solution of the inverse
problem can be defined as the set of values of the coefficients (c0, c1, . . ., cL-1)
that minimizes the L2 norm of the vector of residuals. Thus, setting the derivative of
this L2 norm with respect to {c} equal to zero, under the assumption that [M]T[M] is
invertible the least squares solution provides the following estimate of the model
parameters:
$$\{c\} = \left( [M]^T [M] \right)^{-1} [M]^T \{y_{meas}\} = [M]^+ \{y_{meas}\}. \qquad (2.163)$$
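A minimal Python sketch of the least squares solution (2.163) for the polynomial model (2.151) is reported below; the measurement points, the "true" coefficients, and the noise level are assumed values used only to generate synthetic data for the example.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)                      # measurement points (assumed)
c_true = np.array([1.0, -2.0, 0.5])                # c0, c1, c2 (assumed values)
y_meas = np.polynomial.polynomial.polyval(x, c_true) + 0.05 * rng.standard_normal(x.size)

L = 3
M = np.vander(x, L, increasing=True)               # columns: 1, x, x^2
c_hat = np.linalg.pinv(M) @ y_meas                 # [M]^+ {y_meas}, as in (2.163)
# equivalent normal-equation form: np.linalg.solve(M.T @ M, M.T @ y_meas)
print(c_hat)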
2.4 Applications
Task. Create a calculator to carry out the following operations with complex
numbers:
• Get real part and imaginary part of a complex number;
• Get amplitude and phase of a complex number;
• Compute complex conjugate of a complex number;
• Compute sum, product, and division between two complex numbers;
• Compute amplitude and phase of 1 + 0i and 0 + 1i;
• Compute: (1 + 0i) + (0 + 1i), (1 + 0i) * (0 + 1i), (1 + 0i)/(0 + 1i); (1 + 1i) + (1-1i),
(1 + 1i) * (1-1i), (1 + 1i)/(1-1i).
Suggestions. This is a very simple example to get confidence with the LabVIEW
environment and with basic operations with complex numbers. In the design of the
user interface on the Front Panel, place controls and indicators and set in their
properties a complex representation of the data (right click on the control/indicator,
then select “Representation” and “Complex Double CDB”). Then, write the code
in the Block Diagram (CTRL + E to open it from the Front Panel). Algebraic
operators are in the Functions Palette under “Programming – Numeric”; the
operations on complex numbers are under “Programming – Numeric – Complex.”
Appropriately connect controls and indicators. Use the While Loop structure under
“Programming – Structures” to develop a simple, interactive user interface. It is
possible to define the timing in the execution of the cycles by means of the “Wait
Until Next ms Multiple.vi” in “Programming – Timing.”
Suggestions. In the design of the user interface on the Front Panel, place a control
to input the number of data points N; place three “Waveform Graph” controls (it is in
the Controls Palette under “Modern – Graph”) to show the signal in time domain and
the amplitude and phase spectra in frequency domain. In the Block Diagram place the
“Sine.vi” (it can be found in the Functions Palette under “Mathematics – Elementary
– Trigonometric”); use a “For Loop” structure (it can be found in the Functions
Palette under “Programming – Structures”) to generate signals of the desired length.
Transform the obtained arrays of Double into Waveform type of data by wiring the
sampling interval and the array of values of the signal to the “Build Waveform.vi”
(it can be found in the Functions Palette under “Programming – Waveform”).
Compute the Fourier Transform of the signals by means of “SVT FFT Spectrum
(Mag-Phase).vi” (“Addons – Sound & Vibration – Frequency Analysis – Baseband
FFT”). Appropriately connect controls and indicators. Use the While Loop structure
under “Programming – Structures” to develop a simple, interactive user interface. It is
possible to define the timing in the execution of the cycles by means of the “Wait
Until Next ms Multiple.vi” in “Programming – Timing.”
Sample code. Refer to “FFT mag-phase.vi” in the folder “Chapter 2” of the disk
accompanying the book.
2.4.3 Statistics
Task. Compute mode, mean, maximum and minimum value, standard deviation,
and variance of the data in “Data for statistics and histogram.txt” in the folder
“Chapter 2\Statistics” of the disk accompanying the book. Use the data to plot a
histogram. Pay attention to the obtained results for different values of the number of
intervals.
Suggestions. Use the “Read from Spreadsheet File.vi” to load the data from file.
Maximum and minimum value in the data can be identified by means of the “Array
Max & Min.vi” under “Programming – Array.” Mean, standard deviation, and
variance can be computed by means of the “Std Deviation and Variance.vi” under
“Mathematics – Probability and Statistics.” In the same palette there are “Mode.vi”
and “Histogram.vi,” which can be used to compute the mode and plot the histogram.
Place a “XY Graph” on the Front Panel to plot the resulting histogram.
Sample code. Refer to “Statistics and histogram.vi” in the folder “Chapter 2\Statistics”
of the disk accompanying the book.
Task. Plot the standardized probability density function of a sine wave in noise
(2.62) for different values of the variance ratio defined in (2.63).
Suggestions. Use the “For Loop” to set the vector of values of the standardized
variable z where the probability density function will be computed. Compute
the variance of the Gaussian noise from the values of the variance ratio. Use a
“For Loop” to compute the values of the function in the integral at the selected values
of z. Use the “Numeric Integration.vi” under “Mathematics – Integration & Differ-
entiation” to compute the integral. Put the arrays of the values of z and p(z) into a
cluster (use the “Bundle.vi” under “Programming – Cluster & Variant”), create an
array of the plots of p(z) and show them in a “XY graph” on the Front Panel.
Sample code. Refer to “Sine wave in Gaussian noise.vi” in the folder “Chapter 2” of
the disk accompanying the book.
Task. Compute and plot all the possible auto- and cross-correlation functions from
the data in “Sample record 12 channels – sampling frequency 10 Hz.txt” in the
folder “Chapter 2\Correlation” of the disk accompanying the book. Data in the file
are organized in columns: the first column gives the time; the next 12 columns
report the data for each of the 12 time histories.
Suggestions. Use the “Read from Spreadsheet File.vi” to load the data from file.
Compute the auto-correlation functions associated to the 12 time series and the
cross-correlation between couples of records. Use the formula (2.96) for the direct
estimation of correlation functions, possibly organizing the data into matrices and taking advantage of the matrix product. Organize the resulting data into a 3D matrix so that one of its dimensions is associated to the time lag, and at a certain time lag a square matrix of dimensions 12 × 12 is obtained. Use the “While Loop”
to create a user-interface for the selection of one of the auto- or cross-correlation
functions from the matrix. Plot the data into a “Waveform Graph” placed on the
Front Panel.
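For readers working outside the LabVIEW environment, a minimal Python sketch of the same computation is reported below; it assumes that the file has already been loaded into a two-dimensional array with the time column removed, and simulated zero-mean data are used here in its place.

import numpy as np

def correlation_matrix(data, max_lag):
    """data: (N, n_ch) zero-mean records; returns an array of shape (max_lag+1, n_ch, n_ch)."""
    data = np.asarray(data, dtype=float)
    N, n_ch = data.shape
    R = np.empty((max_lag + 1, n_ch, n_ch))
    for r in range(max_lag + 1):
        # unbiased estimate per (2.96), channel by channel, using the matrix product
        R[r] = data[:N - r].T @ data[r:] / (N - r)
    return R

# example with simulated data in place of the file on the disk
data = np.random.randn(6000, 12)
R = correlation_matrix(data - data.mean(axis=0), max_lag=200)
Rxy_3_7 = R[:, 2, 6]    # cross-correlation between channels 3 and 7 versus time lag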
Task. Generate a Gaussian white noise, compute its mean, variance, and standard
deviation, and plot its autocorrelation function. Use the “AutoCorrelation.vi” under
“Signal Processing – Signal Operation.”
Suggestions. Use the “Gaussian white noise.vi” under “Signal Processing – Signal
Generation” to generate the data. Use the “While Loop” structure to create a user-
interface for the selection of the parameters (number of samples and standard
deviation of simulated data) for data generation. Place the appropriate controls
for such parameters on the Front Panel. Use the “AutoCorrelation.vi” under “Signal Processing – Signal Operation” to compute and plot the auto-correlation function of the generated record.
Task. Compute (according to the Welch procedure) and plot the auto-spectral density
functions of the data in “Sample record 12 channels – sampling frequency 10 Hz.txt”
(see Sect. 2.4.5). Divide the time series into a user-selectable number of segments;
consider a 50 % overlap of the data segments. Analyze the effects of windowing and
number of segments on the resulting spectra and the frequency resolution.
Suggestions. Create a SubVI that, given a time history, the number of segments and the
sampling frequency, provides an array of waveform data, where each waveform
consists of a (partially overlapping) data segment, and the number of data segments
nd. Take advantage of the “For Loop” structure to divide the total record into nd
segments. In the main VI, use the “Read from Spreadsheet File.vi” to load the data
from file. Compute the sampling frequency from the sampling interval. Select one
time history in the dataset and send it to the previously mentioned SubVI to divide it
into partially overlapping segments. Use the “For Loop” structure and the “SVT
Power Spectral Density.vi” under “Addons – Sound & Vibration – Frequency Analy-
sis – Baseband FFT” to compute the averaged PSDs. Plot the results into a “Waveform
Graph” placed on the Front Panel. Use the “While Loop” structure to create a user-
interface for the selection of the time histories and the analysis parameters.
Sample code. Refer to “Averaged PSD.vi” and “Overlap 0.5.vi” in the folder
“Chapter 2\PSD and overlap” of the disk accompanying the book.
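A minimal Python sketch of the segmentation step with 50 % overlap (mirroring the SubVI described above) is reported below; deriving the segment length from the requested number of segments is an implementation choice made here for illustration.

import numpy as np

def overlapping_segments(x, nd):
    """Return an (nd, N) array of segments with 50 % overlap."""
    x = np.asarray(x, dtype=float)
    # with 50 % overlap, nd segments of length N cover about (nd + 1) * N / 2 samples
    N = int(2 * x.size // (nd + 1))
    step = N // 2
    return np.stack([x[i * step : i * step + N] for i in range(nd)])

x = np.random.randn(12_000)
segs = overlapping_segments(x, nd=10)
print(segs.shape)    # (10, segment length)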
Task. Create a matrix [A] of dimensions 8 × 8 and whose entries are random numbers. Create a constant matrix [B] of dimensions 8 × 8 and rank r([B]) = 6.
Compute the matrices [A] + [B] and [A][B]. For each of the previously men-
tioned matrices compute the SVD, plot the obtained singular values after they
have been normalized with respect to the largest one, compute the condition
number.
Suggestions. The SVD can be carried out by the “SVD Decomposition.vi” under “Mathematics –
Linear Algebra.”
Sample code. Refer to “SVD and rank of a matrix.vi” in the folder “Chapter 2” of
the disk accompanying the book.
References
Bendat JS, Piersol AG (2000) Random data: analysis and measurement procedures, 3rd edn.
Wiley, New York, NY
Cantieni R (2004) Experimental methods used in system identification of civil engineering
structures. In: Proc 2nd Workshop Problemi di vibrazioni nelle strutture civili e nelle
costruzioni meccaniche, Perugia
Cooley JW, Tukey JW (1965) An algorithm for the machine calculation of complex Fourier series.
Math Comp 19:297–301
Golub GH, Van Loan CF (1989) Matrix computations. Johns Hopkins University Press, Baltimore, MD
Heylen W, Lammens S, Sas P (1998) Modal analysis theory and testing. Katholieke Universiteit
Leuven, Leuven
Papoulis A (1991) Probability, random variables, and stochastic processes, 3rd edn. McGraw-Hill,
New York, NY
Santamarina JC, Fratta D (2005) Discrete signals and inverse problems: an introduction for
engineers and scientists. Wiley, Chichester
3 Data Acquisition

3.2 Transducers
When an input acceleration is applied at the base of the accelerometer, the inertia
force associated to the mass causes a deformation of the crystal. The piezoelectric
material generates an electric charge proportional to its deformation.
The electrical charge on piezoelectric crystals can be induced by compression,
shear, or flexural deformation. Each method offers advantages and drawbacks.
Thus, piezoelectric accelerometers are alternatively built according to one of
these methods by choosing the most appropriate for the considered application. In
compression mode the seismic mass applies a compressive force on the piezoelec-
tric crystal mounted on a rigid base. This method leads to a high-frequency range
but it is susceptible to thermal transient effects because the crystal is in contact with
the base of the housing. Moreover, any deformation of the base is transmitted to the
crystal, which provides erratic readings not associated to the acceleration. For these
reasons, compressive design of piezoelectric accelerometers is being more and
more replaced by shear and flexural design. Accelerometers based on the shear
method show the best overall performance. In this case the crystal is clamped
between a center post and the outer mass: the larger the attached mass, the larger
the shear force applied to the crystal for a given acceleration. These accelerometers
work well over a high-frequency range and do not suffer strain or thermal transient
effects, since the crystal is not in direct contact with the base. Flexural design leads
to very high output signals because the crystal is subjected to high stress levels.
The bending of the crystal can be the result of inertia forces associated to the sole
mass of the crystal, but additional weight can be added to the crystal to enhance
bending. This type of accelerometer shows a limited frequency range in comparison with the previous ones, and it is more prone to damage if exposed to excessive shock or vibration. Accelerometers based on flexural design are typically
well suited for low frequency, low vibration amplitude applications such as modal
testing of structures.
The charge collected by electrodes on the piezoelectric crystal is transmitted
to a signal conditioner, which converts the electric charge into voltage. A remote
signal conditioner characterizes charge mode sensors. On the contrary, a built-in signal
conditioner characterizes the so-called Integrated Electronics Piezo-Electric (IEPE)
accelerometers (also known as voltage mode accelerometers). In the presence of a
built-in signal conditioner, the signal cable carries also the required power supply.
As a consequence, the signal is high-pass filtered to remove the frequencies close to
DC. The (built-in or remote) signal conditioner output is a voltage signal available for
display, recording, and analysis. Nowadays IEPE accelerometers (Fig. 3.1) are
replacing charge mode sensors since they offer a number of advantages, such as
simplified operation, lower cost, resolution virtually unaffected by cable type and
length (long cables can be used without increase in noise, loss of resolution, or signal
attenuation).
From a mechanical point of view, a piezoelectric accelerometer is equivalent to a
SDOF system characterized by an oscillating mass subjected to an input ground
motion; the spring and the dashpot of the SDOF system are represented by the
piezoelectric material. As a consequence, some characteristics of piezoelectric
Fig. 3.2 Sample FRFs of piezoelectric accelerometers for different damping values (ξ = 0, 0.2, 0.707), plotted versus the normalized frequency f/fn: amplitude (a) and phase (b)
accelerometers can be obtained from the analysis of the FRF of the system, relating
the input acceleration and the output voltage (Fig. 3.2).
In terms of amplitude, the widest frequency range with uniform gain is
associated to a damping ratio of 0.707. For this reason, some accelerometers are
designed with added damping in order to maximize their frequency range. This is
mainly the case of large accelerometers, characterized by low values of the first
natural frequency. From a general point of view, independently of the damping
ratio, the gain factor is basically uniform for frequencies up to 20 % of the
undamped natural frequency of the sensor. Thus, it may happen that no specific
design in terms of damping is carried out by the manufacturer, and the useful
frequency range of the sensor is simply up to 20 % of its natural frequency.
Fig. 3.4 Gain (a) and phase (b) of the typical frequency response of force balance accelerometers, plotted versus frequency in the range 0–200 Hz
The sensor sensitivity should be selected so that the maximum sensor output has a level fitting the recorder maximum input, so that the sensor dynamic range is optimally used.
The dynamic range DRs of a sensor (often expressed in dB) is the ratio between
the largest and the smallest signal it can measure:
$$DR_s = 10 \log \left( \frac{V_{max,s}}{V_{n,s}} \right)^2 \qquad (3.1)$$
In (3.1) Vmax,s and Vn,s represent the maximum voltage signal and the noise floor of the sensor, respectively. The noise floor typically depends on frequency and it is expressed in V/√Hz. Thus, it has to be squared and integrated over the
considered frequency range in order to compute the value of DRs. Nowadays, the
best accelerometers have a dynamic range higher than 150 dB. However,
the dynamic range of the total measurement chain depends also on the dynamic
range of the digitizer. As a consequence, a dynamic range in the order of
120–140 dB is also suitable since it fits well the dynamic range of the average
24 bit digitizers.
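A minimal sketch of the computation of DRs according to (3.1) is reported below (in Python); the maximum output voltage, the flat noise floor, and the frequency band are assumed values used only for illustration.

import numpy as np

V_max = 5.0                                  # maximum sensor output [V] (assumed)
f = np.arange(0.1, 100.0, 0.01)              # frequency band of interest [Hz]
noise_density = 1e-7 * np.ones_like(f)       # assumed flat noise floor [V/sqrt(Hz)]

df = f[1] - f[0]
V_n = np.sqrt(np.sum(noise_density ** 2) * df)   # rms noise over the band [V]
DR_s = 10.0 * np.log10((V_max / V_n) ** 2)       # dynamic range per (3.1) [dB]
print(V_n, DR_s)                             # about 1e-6 V and 134 dB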
Sensor resolution represents the smallest incremental change of physical quan-
tity that leads to a detectable change in the sensor output. It is usually expressed in
absolute terms or as a percentage of the full-scale range. This provides the mini-
mum and maximum values of the physical quantity that can be measured by the
sensor. Some accelerometers are characterized by user-selectable full-scale range.
An ideal sensor should behave linearly, but a certain deviation from linearity is
always present. Such deviation should be as limited as possible and it is expressed
by the percentage of nonlinearity. Accelerometers with good performance typically
show a nonlinearity lower than 1 %.
The cross axis sensitivity of a sensor quantifies its sensitivity to motion perpen-
dicular to the main axis. Accelerometers for modal testing typically show a low
transverse sensitivity, in the order of 2 % or less.
Readings from accelerometers are often characterized by a certain offset. The
offset error is defined as the output of the sensor at zero input. Some accelerometers
permit the minimization of the offset error through a mechanical intervention.
Settling time of piezoelectric accelerometers is another parameter to be taken
into account during measurements. It represents the time needed by the sensor to
reach a stable output once power is supplied to the sensor. In other words, after
power is supplied, it is the amount of time to wait before starting recording in order
to get stable measurements.
Sensor self noise plays a primary role in determining the capability of the sensor
to properly resolve the structural response. In fact, if the signal to be recorded is
very small, it may drown in the electronic noise of the sensor. Reliable information
about or quantification of sensor self noise is, therefore, fundamental in the pres-
ence of very weak ambient excitation or in the case of very massive low-rise
structures.
Fig. 3.5 Peterson’s noise models: New Low Noise Model (NLNM), New High Noise Model
(NHNM)
Methods to estimate the noise floor of sensors are described in Brincker and
Larsen (2007). An approach is based on the isolation of the sensor from any
vibration (for instance, by a seismic isolation chamber). When two or more sensors
are available, an alternative approach is based on measurements carried out in the
same location. In this case only one physical signal is present (the ambient vibration
signal), and the SVD of the PSD matrix makes possible the simultaneous estimation
of signal spectrum and noise floor of sensors.
Comparing the noise floor of the sensors with reference models of the seismic
background noise can be useful to check their effectiveness for ambient vibration
measurements. Even if such models were developed to deal with seismic back-
ground noise, Peterson’s noise curves (Peterson 1993) represent a valuable tool to
assess the performance of a sensor in the presence of very low levels of vibration. In
fact, seismic micro-tremors represent low-amplitude vibrations of the ground due to
artificial and natural noise sources. Starting from ground acceleration PSDs deter-
mined for noisy and quiet periods at 75 worldwide distributed digital stations and
averaging the spectra of the three quietest and the three noisiest independent records,
Peterson has derived two curves (Fig. 3.5) which represent upper and lower bounds
of the typical PSDs of the observed ambient noise. In very quiet conditions, the
sensor is able to properly resolve the seismic background noise if its noise floor is
below the Peterson’s low noise model. Otherwise, the sensor output signal is just
electronic noise. Thus, the Peterson’s curves are extremely useful as a reference for
assessing the quality of seismic stations, for predicting the detectability of small
signals, and for the design of seismic sensors. By definition, the low noise model
represents a conservative estimate of the expected level of micro-tremor excitation.
Man-made activities, site amplification, and other factors influence the actual micro-
tremor input so that almost all sites have a noise level above the Peterson’s low-noise
model, often by a large factor. The Peterson’s low-noise model summarizes the
lowest observed vertical seismic noise levels throughout the seismic frequency
band. Peterson’s investigations have also shown that the minimum level of horizon-
tal acceleration is similar or slightly higher. As a consequence, the Peterson’s
low-noise model can be considered representative of both the vertical and horizontal
minimum vibration levels.
The expected level of vibration of a structure in operational conditions is
definitely higher than the seismic background noise, at least as an effect of site
and structural amplifications. As a consequence, a sensor whose noise floor complies with the Peterson's low-noise model is also effective for OMA tests. Taking into account the structural amplification at resonance and the Peterson's low-noise model, a sensor for OMA tests in very quiet environments has to be characterized by a noise floor in the order of −140 dB or better in the frequency range 0.1–100 Hz.
Both piezoelectric and force balance seismic accelerometers are very effective in
measuring the response of civil structures in operational conditions. Each type of
sensor has advantages and limitations. For instance, piezoelectric accelerometers
are easy to install and are characterized by a good frequency response (large
bandwidth), but they are also characterized by fragile sensing elements and by
some limitations in measuring low-frequency components; moreover, the coaxial
cable typically adopted to link the sensors with the data acquisition hardware is
more prone to pick up noise from the environment with respect to the cables (made
by individually shielded twisted pairs) adopted for force balance accelerometers
(see also Sect. 3.4). On the other hand, force balance sensors have a low-noise floor
and they are able to measure very low-frequency signals and even the DC compo-
nent. However, the upper bound of their bandwidth is much lower than the upper
frequency limit of piezoelectric sensors. Moreover, they suffer DC drifts and are
characterized by expensive and heavy cabling. Electromagnetic sensors can some-
times be used for OMA tests taking advantage of their low-noise floor and of the
cheap and noise robust cabling made by a single shielded twisted pair. However, the
bad response at low frequencies limits their use in many civil engineering
applications.
It is worth noting that the final choice of the sensors is always the result of a
number of factors. In this book the use of seismic (piezoelectric or force balance)
accelerometers is recommended because of their high performance in measuring
low-amplitude ambient vibrations. However, the test engineer has to be aware that
the market offers a large variety of sensors, characterized by a range of
specifications and prices: thus, the final choice must take into account different
factors, such as the final objective of measurements, the expected amplitude of the
motion to be measured, the characteristics of the sensors in relation to those of the
data acquisition hardware and of the tested structure, and, last but not least, the available budget.
Data acquisition systems perform the conversion of the analog signals coming from
the sensors into digital signals, which can be stored into a digital medium and
analyzed by software. A large variety of systems for data acquisition are available
on the market. They show different characteristics, configurations, and price, and
the choice is made even more difficult when the available budget is not the sole
constraint. In fact, additional factors influencing the choice are portability, the
experience of the analyst, and the degree of compatibility with third-party hardware
and software. A rough distinction can be made between dedicated solutions offered
by specialized vendors (Fig. 3.6a) and customizable solutions based on program-
mable hardware (Fig. 3.6b). The first class of equipment is usually characterized by
a short learning curve, so that basic knowledge about signal acquisition and limited time to get confidence with the system are sufficient to carry out the first tests; however, these systems are accompanied by proprietary software for configuration and control of the measurement process and typically suffer from limited versatility.
On the other hand, the solutions based on programmable hardware require more
time, deeper theoretical knowledge, and larger efforts by the user to begin field
tests. In fact, the user is in charge of selecting the hardware components
and developing the data acquisition software. However, the main advantages with
these solutions are the lower price and the higher versatility. In both cases, the
analysis of the main characteristics of analog-to-digital converters (ADCs) is
helpful in order to properly select, among different solutions, the system that better
fits the user’s needs.
An ADC is a device that converts the continuous signal coming from the sensors
into a sequence of digital numbers representing the amplitude of the signal. The
conversion is based on the discretization of time (sampling) and signal amplitude
(quantization). Quantization necessarily introduces a small amount of error, since
the continuous amplitude of the signal is approximated by a finite number of
discrete values. However, the quantization error is usually negligible with respect,
for instance, to measurement noise. It can become significant only in the case of
poor resolution.
The resolution of an ADC can be defined as the smallest step that can be
detected. This is related to one change of the Least Significant Bit (LSB). For
high dynamic range digitizers, the order of magnitude of resolution is about 1 μV.
The number of bits of an ADC is sometimes also referred to as resolution. However,
most ADCs have an internal noise higher than one count: in this case, the number of
noise free bits, rather than the total bit number, yields the effective resolution.
The noise level is related to the number of bits occupied by noise when the input
is zero. Only the last two bits are typically corrupted by noise in good quality 24-bit
digitizers.
The dynamic range is defined as the ratio between the largest and the smallest
value the ADC can acquire without significant distortion. It is usually expressed in
dB. Taking into account that the lowest bits often contain only noise, the dynamic
range is also defined as the ratio between the largest input voltage and the noise
level of the digitizer. This number may be dependent on the sampling frequency.
Good digitizers currently have a dynamic range higher than 100 dB.
An example is helpful to illustrate the previous concepts. Consider an ADC
characterized by a number of bits equal to Nbit. Assume that the maximum and
minimum input voltage have the same absolute value; in other words, the input
voltage varies between ±Vmax,ADC (for instance, ±5 V). The available number of discrete amplitude values is N = 2^Nbit. As a consequence, the nominal value of the ADC resolution is ΔV_ADC = 2Vmax,ADC/N, and the number of bits (counts) occupied by the noise level Vn,ADC follows from:

$$2^{N_{bit,n}} = \frac{V_{n,ADC}}{\Delta V_{ADC}} \qquad (3.3)$$

so that the number of effective bits is Nbit,eff = Nbit − Nbit,n.
The dynamic range of the ADC can be computed from the number of effective bits as:

$$DR_{ADC} = 20 \log \frac{V_{max,ADC}}{V_{n,ADC}} = 20 \log \left( 2^{N_{bit,eff} - 1} \right) \approx 6 \left( N_{bit,eff} - 1 \right). \qquad (3.5)$$
Thus, a dynamic range of about 120 dB corresponds to 21 effective bits and vice
versa. It is worth noting that the dynamic range of the complete measurement
system depends also on the dynamic range of the sensors. For this reason, maximum
input and noise floor of data acquisition system and sensors have to match.
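The example can be reproduced with a few lines of code; the following Python sketch uses assumed values (24 bits, ±5 V input range, 2 bits corrupted by noise) to compute the nominal resolution and the dynamic range according to (3.5).

import numpy as np

N_bit = 24
V_max = 5.0                                   # input range: +/- 5 V (assumed)
dV = 2 * V_max / 2 ** N_bit                   # nominal resolution [V], about 0.6 uV
N_bit_noise = 2                               # bits corrupted by noise (assumption)
N_bit_eff = N_bit - N_bit_noise
DR_adc = 20 * np.log10(2 ** (N_bit_eff - 1))  # about 6 * (N_bit_eff - 1) dB
print(dV, DR_adc)                             # ~0.6e-6 V, ~126 dB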
The sampling rate is the number of samples acquired per second. The maximum
sampling rate of the ADC defines the largest frequency range that can be
investigated. For the majority of applications of OMA in civil engineering, a
maximum sampling rate of 100 Hz or 200 Hz is satisfactory. Moreover, when
large structures characterized by a high content of low-frequency modes are tested,
the data acquisition system, like the sensors, has to be able to measure low-frequency
components. In these conditions a frequency range starting from DC is required.
The absolute accuracy is a measure of all error sources. It is defined as the
difference between the input voltage and the voltage representing the output.
Ideally, this error should be ±LSB/2 (the quantization error, that is to say the error due only to the digitization steps).
Conversion time, that is to say the minimum time required for a complete
conversion, is defined only for converters based on a sample-and-hold architecture.
However, the sigma-delta architecture is currently preferred for 24-bit ADCs
because of its higher performance; because of the different architecture, based on
a continuous signal tracking, the conversion interval is not important for sigma-
delta ADCs. Without describing sigma-delta converters in detail, it is worth noting
that their high performance is basically obtained by sampling the input signal at a
frequency much higher than the desired data rate. The samples then pass through a
filter which expands the data to 24 bits, rejects signal components higher than the
Nyquist frequency associated to the desired sampling frequency, and digitally
resamples the data at the chosen data rate. In general, the built-in anti-aliasing
filters automatically adjust themselves (see Sect. 3.6 for more details about
aliasing). The combination of analog and digital filtering provides a very accurate
representation of the signal.
If several channels are available in the same digitizer, a signal recorded in one
channel may be seen in another channel. This phenomenon is referred to as cross
talk. The amount of cross talk is expressed in dB and indicates how much lower the level of a signal recorded in one channel appears in the neighboring channels. A good quality 24-bit digitizer has
120 dB of damping or better. Cheap multichannel digitizers, instead, usually use a
single ADC and an analog multiplexer, which connects different inputs sequentially
to the ADC input. This limits the cross talk separation since analog multiplexers
have limited performance. For high-quality measurements, solutions with one
digitizer per channel are therefore recommended.
Nonlinearity is related to how two different signals at the input are
intermodulated (so that the amplitude of a signal depends on the other) at the
output. It is expressed as a percentage of the full scale. This is usually not a problem
with modern sigma-delta converters.
The offset represents the DC level of the output when the input is zero. Some
offset is always present, due to the ADC or the connected devices, but it can often
be minimized. It is worth noting that any offset limits the dynamic range, since the
ADC will reach its maximum value (positive or negative) for smaller input values
than its nominal full scale.
Additional characteristics influencing the choice of the solutions for data acqui-
sition are versatility and robustness. For instance, systems allowing the selection of
different measurement schemes (single ended and differential) or acquisition of
signals from sensors of different type are definitely more attractive. Portability, low
power consumption and possibility of battery operation, robustness in varying
environmental conditions also play a primary role in the choice of the equipment
for field testing. Finally, distributed measurements by modular instruments are
becoming more and more attractive thanks to the possibility to use the same device
to carry out tests with either a few or a large number of measurement channels. This
characteristic is particularly attractive in testing large or complex structures, when
pretests or local tests with a reduced number of sensors and global measurements
with a large number of channels have to be carried out on the same structure in a
limited amount of time.
3.4 Wired vs. Wireless

The last decade has been characterized by large efforts in the development of
wireless sensor networks for structural testing and health monitoring. The first
commercially available wireless sensor platform, originally developed at the Uni-
versity of California-Berkeley, appeared in 1999. Since then, this technology has
experienced a rapid development and an increasing interest of the scientific and
professional community in this field. The commercial success is mainly due to the
low cost and the possibility to combine different sensors in the same wireless node.
More details can be found elsewhere (Lynch and Loh 2006).
Even if a number of wireless sensing solutions are currently available, offering
attractive advantages such as the reduction of costs and installation time associated
to the use of cables, they have not fully replaced wired systems. The main
advantage of wired systems over wireless sensor network is in the time synchroni-
zation of the channels. Simultaneous sampling, in fact, ensures the phase coherence
among all the measurement channels, preventing errors in the computation of cross-
correlation and cross-spectral density functions. Time synchronization in wireless
sensor networks requires specific solutions while it represents an ordinary task
when a single data acquisition system and wired sensors are adopted. In wireless
sensor networks, each node has an ADC of its own. As a consequence, time
synchronization of the different ADCs requires an external time base providing a
time reference. A possible solution for time synchronization of distributed ADCs is
represented by GPS. Whenever a GPS signal is available, wireless sensor networks
can be adopted.
The choice of the type of cable and the connection with the terminals also
influence the quality of measurements. The following types of cables are typically
used to carry the analog signals from the sensors to the data acquisition system:
• Coaxial cables (Fig. 3.7a)
• Cables consisting of a single shielded twisted pair
• Cables consisting of multiple twisted pairs with common shield
• Cables consisting of multiple individually shielded twisted pairs (Fig. 3.7b)
Coaxial cables (typically used for piezoelectric accelerometers) are very cheap
but they are the most vulnerable to electrical noise exposure; cables consisting of
multiple individually shielded twisted pairs (typically used for force balance
accelerometers) are heavy and expensive but they are the least prone to pick up
noise from the environment.
The connection of the wires with the terminals represents another critical aspect,
since a poor connection fosters the introduction of electrical noise. In this perspec-
tive, particular attention has to be focused on proper shield termination, avoiding
ground current flows through the shield. This is possible by connecting the shield to
the ground only at one end. Instructions for cable assembly provided by sensor and
data acquisition system manufacturers have to be carefully fulfilled.
Even when ground loops are avoided, there is always a certain amount of noise
picked up from the environment. Some rules of thumb can help to minimize these
effects. For instance, mobile phones must be switched off during tests, and cables
and data acquisition systems must be placed as far as possible from sources of
electromagnetic noise, including the computer screen. Moreover, dangling wires
must be avoided. Cables must be clamped during tests because their motion can
cause triboelectric effects. In simple words, the charge generated on the dielectric
within the cable can lead to changes in the magnetic field if the dielectric does
not maintain contact with the cable conductors, thus generating errors in the
measurements.
If the measurement scheme has been appropriately defined, and installation and
measurements are properly carried out, the collected data will be of good quality
and ready for the analysis. During data processing, techniques for noise reduction
can be adopted to further improve the quality of the obtained modal identification
results. However, it is worth emphasizing that there is no substitute for good
measurements. The acquired signal must carry the physical information together
with a certain amount of noise in order to successfully apply OMA techniques.
Signal processing techniques and advanced modal analysis techniques have no
effects if the recorded signal consists of noise only.
3.5 Sensor Installation

Sensor layout and attachment method influence the identifiability of the modal
properties of the structure under test. In particular, while the attachment method
might have an influence on the extension of the investigated frequency range, the
possibility to observe different and sometimes closely spaced modes depends on
sensor layout (as mentioned also in Chap. 1).
Accelerometers can be mounted by a variety of methods, including magnet,
adhesive, and stud or screwed bolt. The main drawback when using stud or screwed
bolt is the higher difficulty in moving the sensors in different positions of the
structure. However, a stiff connection ensures that the useful frequency range of
the accelerometer is not limited by the mounting method. In fact, the connection
between sensor and structure acts like a spring, leading to a spring-mass system
with the sensor itself. As a consequence, a reduction in the stiffness of the
connection narrows the useful frequency range of the accelerometer by a reduction
of its upper limit. The connection by stud or screwed bolt usually ensures a very
stiff connection. However, a smooth flat surface of the structure at the desired
sensor locations is also required to avoid a reduction of the upper frequency limit.
In fact, the prepared surface allows the sensor to be installed normally to it and with
a very stiff contact between sensor and structure. On the contrary, a rough surface
can lead to misalignments or poor contact, causing a loss of stiffness of the
connection and the corresponding reduction of the frequency range. From a
general point of view, the choice of the attachment method depends on different
parameters, such as accessibility of the selected sensor positions, structural surface,
expected amplitude of vibration and frequency range, portability, and surface type
and conditions.
Sensor location mainly influences the observability of the structural modes. One of the reasons which may require moving the sensors from one location to another is their installation at, or very close to, nodes of the structural mode
shapes. In similar cases the data collected by those sensors do not provide effective
information for the identification of some of the structural modes (specifically,
those modes whose nodes correspond to the position of the sensors). In fact, as
already mentioned in Chap. 1, sensors placed at nodal points limit the rank of the
FRF matrix. Another reason to move the sensors could be the identification of
distinct resonances associated to very similar mode shape estimates as a result of
poor spatial resolution of measurements and unfortunate choice of sensor layout.
The choice of the sensor layout depends on the number of available sensors, the
needed information about the mode shapes, which may lead to different
requirements in terms of spatial density of the sensors, and the objectives of the
modal identification test. In the literature several studies aimed at the optimization
of sensor location are available (see, for instance, Papadimitriou and Lombaert
2012, Cyrille 2012, Marano et al. 2011). However, the results obtained from the
application of those methods depend on the adopted criteria and optimization
techniques, leading to different possible layouts. Thus, those techniques can support
the definition of the test layout but a careful planning by the test engineer and a
certain amount of physical insight still play a relevant role in the definition of
layouts able to maximize the observability of the modes and the amount of
information provided by the sensors. Nevertheless, it is possible to define the
minimum number of sensors that have to be installed to identify at least the
fundamental modes of a structure. In fact, assuming that at least a couple of closely
spaced modes exists and that the measurement noise limits the identifiability of
some modes, in particular in the case of weakly excited structures, the fundamental
modes can be identified by appropriate installation of at least 6–8 sensors. “Appro-
priate installation” means that the adopted sensor layout ensures the observability
of modes of different type (for instance, translational and torsional modes) and does
not limit the rank of the FRF matrix (for instance, installation of sensors in very
close points has to be avoided since the information they provide is basically the
same). When some a priori information about the mode shapes is available, an
effective sensor layout can be obtained by installing the sensors in a set of points
where all the modes of interest are well represented (that is to say, those modes
show fairly large modal displacements at the selected locations). Even if the sensor
layout varies from structure to structure and it can also be refined during the test if
the measurements do not appear satisfactory at a first preliminary analysis, some
recurrent schemes for sensor placement can be identified. They are shown in
Fig. 3.8 for buildings and tower-like structures, and in Fig. 3.9 for bridges.
Under the assumption of rigid floors in building-like structures, at least three
sensors have to be installed on a floor. In order to ensure the observability of both
translational and torsional modes, the sensors have to be installed in two orthogonal
directions and in opposite corners of each instrumented floor. Installing the sensors
on, at least, two distinct floors allows, in principle, the identification of some higher
modes, too. When the tested structure is a bridge, both vertical and horizontal
components of acceleration have to be measured in order to ensure the identifica-
tion of both horizontal and vertical bending modes. Moreover, installation of
couples of sensors measuring the vertical component of acceleration at the same
abscissa along the main axis of the bridge but in symmetrical points with respect to
it ensures the observability of torsional modes.
The previously suggested layouts represent only crude guidelines for the instal-
lation of the sensors. The actual test layout has to be defined taking into account all
the previously discussed issues and the characteristics of the tested structure.
In some cases the objective of the tests can be the investigation of the dynamic
response of selected portions of a structure, such as the cables in cable-stayed
bridges or steel rods in ancient vaults. When similar lightweight structural
components are tested, it is important to ensure that the attached accelerometers
have a minimum influence on the vibratory behavior of the tested system. In
particular, the added mass represented by wires and sensors has to be as limited as
possible. Taking into account that the weight of the sensor typically increases with
its sensitivity, the influence of the added mass on the dynamic response has to be
carefully evaluated if high sensitivity accelerometers are going to be used. If a
significant influence of the added mass on the dynamic response is expected,
alternative solutions have to be adopted. They can range from the replacement of
the heavy high-sensitivity accelerometers with lighter accelerometers at the
expense of a reduction in sensitivity, up to the adoption of wireless sensor networks.
In some cases the number of available sensors is not sufficient to obtain the
desired spatial resolution for the mode shape estimates. For instance, assume that
the mode shapes have to be estimated in M points and the number of available
sensors is Ns < M. The estimation of the mode shapes in all the M points, therefore,
requires the execution of multiple tests in order to cover all the points of interest.
Assuming that T tests have to be carried out, T datasets are obtained. However, the
mode shape estimates obtained from each dataset cannot simply be glued together
to obtain the mode shape estimate in the desired M locations. In fact, as mentioned
in Chap. 1, the excitation is not measured in OMA and, as a consequence, the
obtained mode shape estimates cannot be scaled in order to fulfill the property of
orthogonality with respect to the mass matrix. Only un-scaled mode shape estimates
can be obtained, and the scaling factor between mass-normalized mode shapes and
un-scaled mode shapes can vary from test to test. In these conditions the only
possibility to merge the different partial mode shape estimates is to carry out
multiple tests where a number of sensors Nref (called reference sensors) remain in
the same position during all tests and the other Nrov sensors (called roving sensors)
are moved until the structural response is measured at all the desired M locations.
The mode shape estimates obtained from each dataset can then be divided into two
parts: Nref components are defined over the set of points common to all tests, while
the remaining Nrov components are defined over non-overlapping sets of points. The
Nref components of the estimated mode shape obtained from the i-th test are related
to the Nref components obtained from the first dataset in the same locations through
a scaling constant:
{ϕ_k^{ref,1}} = α_{1,i} {ϕ_k^{ref,i}}     (3.6)

where {ϕ_k^{ref,1}} and {ϕ_k^{ref,i}} are the portions of the k-th mode shape estimated from
the first and i-th dataset at the Nref reference points, respectively. Assuming Nref ≥ 3, the
scaling constant α_{1,i} can be estimated, for instance in a least squares sense, from the
reference components of the mode shape obtained in the two setups (Fig. 3.10).
Fig. 3.10 Determination of the scaling factor of mode shapes in multi-setup measurements
It is worth noting that, if the k-th mode shape vector is characterized by small
components in the reference points, noise can heavily affect the resulting value of
the scaling factor. Thus, the use of a fairly large number of reference sensors is
recommended in order to ensure the availability of a number of reasonably large
mode shape components for all the modes. Additional details and more advanced
procedures to merge mode shape estimates from different setups can be found in the
literature (Reynders et al. 2009, Dohler et al. 2010a, Dohler et al. 2010b).
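A minimal sketch of the gluing procedure is given below: the scaling constant of (3.6) is estimated in a least squares sense from the reference components shared by the two setups and then applied to the roving components. The function name, the ordering convention (reference components first in each partial estimate), and the assumption of real-valued mode shapes are illustrative choices, not the book's implementation.

```python
import numpy as np

def merge_mode_shape(setups, ref_idx, rov_idx):
    """Merge partial (unscaled, real-valued) estimates of one mode shape from
    multiple setups sharing the same reference sensors.

    setups : list of 1-D arrays; setups[i] holds the components measured in setup i,
             with the reference components first, followed by the roving ones.
    ref_idx: global DOF indices of the reference sensors (common to all setups).
    rov_idx: list of global DOF index arrays, one per setup, for the roving sensors.
    """
    n_ref = len(ref_idx)
    n_dof = n_ref + sum(len(r) for r in rov_idx)
    merged = np.zeros(n_dof)
    phi_ref_1 = np.asarray(setups[0][:n_ref])          # reference part of setup 1
    merged[np.asarray(ref_idx)] = phi_ref_1
    merged[np.asarray(rov_idx[0])] = setups[0][n_ref:]
    for i in range(1, len(setups)):
        phi_ref_i = np.asarray(setups[i][:n_ref])
        # Least squares estimate of the scaling constant of Eq. (3.6):
        # {phi_ref,1} = alpha_1i {phi_ref,i}
        alpha_1i = np.dot(phi_ref_i, phi_ref_1) / np.dot(phi_ref_i, phi_ref_i)
        merged[np.asarray(rov_idx[i])] = alpha_1i * np.asarray(setups[i][n_ref:])
    return merged
```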
The previous discussions highlight the importance of an accurate design of the
test. A final suggestion can be given for those situations where the number of
sensors is not a limiting factor but there is the need for an improvement of the SNR
(this could be the case of very weak excitation, so that, even if the sensors are able
to properly resolve the small amplitude vibrations, the peaks of mode resonances
are not so far from the noise level). In these conditions, sensors can be installed in
couples, so that the two sensors in the same couple measure the same physical
signal s(t) (that is, the two sensors are located in the same point and measure the
same component of acceleration):

y1(t) = s(t) + n1(t)     (3.8)
y2(t) = s(t) + n2(t)

If signal and noise for each sensor, as well as the noise components of the two
sensors in the couple, are uncorrelated:

E[s(t) n1(t − τ)] = 0     (3.11)
E[s(t) n2(t − τ)] = 0     (3.12)
E[n1(t) n2(t − τ)] = 0

the auto-correlation of the sum of the signals, y1(t) + y2(t), amplifies the signal
with respect to the noise:

R_{y1+y2, y1+y2}(τ) = 4 R_{s,s}(τ) + R_{n1,n1}(τ) + R_{n2,n2}(τ).     (3.16)
The installation of couples of sensors in the same position obviously does not
overcome the limits of the sensors. If the single sensor is not able to measure the
structural response, no physical information is included in the data but they are
just noise. However, if the sensors are properly selected, so that they are able to
resolve the structural response, this sensor layout can improve the SNR of the
collected data.
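The SNR gain can be checked numerically with the sketch below, which assumes a weak sinusoidal "structural" signal and independent Gaussian sensor noises; all signal and noise levels, as well as the record length, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 100.0, 600.0                       # sampling frequency [Hz], record length [s]
t = np.arange(0.0, T, 1.0 / fs)
s = 0.05 * np.sin(2 * np.pi * 1.5 * t)     # weak "structural" signal at 1.5 Hz
n1 = 0.05 * rng.standard_normal(t.size)    # independent sensor noises
n2 = 0.05 * rng.standard_normal(t.size)
y1 = s + n1                                # Eq. (3.8), first sensor of the couple
y2 = s + n2                                # second, collocated sensor
ysum = y1 + y2                             # = 2 s + n1 + n2

snr_single = 10 * np.log10(np.var(s) / np.var(y1 - s))
snr_sum = 10 * np.log10(np.var(2 * s) / np.var(ysum - 2 * s))
print(f"single sensor SNR: {snr_single:.1f} dB, summed couple SNR: {snr_sum:.1f} dB")
# Consistently with Eq. (3.16), the signal power is amplified by a factor of 4 while the
# uncorrelated noise power only doubles, giving a SNR improvement of about 3 dB.
```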
3.6 Sampling, Filtering, and Decimation

The definition of the value of the sampling frequency represents one of the most
important settings in dynamic tests. It is always a compromise between the need for
an accurate representation of the signal in digital form and the available (eventually
large, but always limited) memory and hard disk space for data storage and
analysis. Thus, very long observation periods may require lower values of the
sampling frequency to avoid an excessive amount of data. However, the sampling frequency
cannot be set too low, because it determines the frequency range that can be
investigated. As a consequence, it has to be set depending on the maximum
frequency of the structure under test.
Assuming that the expected value of the maximum frequency is 20 Hz, the
sampling frequency has to be set in a way able to observe this maximum frequency.
Shannon's sampling theorem (2.20) states that the sampling frequency has to be at least
twice the highest frequency contained in the time signal. In the present example, a
sampling frequency larger than 40 Hz is needed, so that the upper bound of the
observable frequency range (the so-called Nyquist frequency fN = fs/2) is higher
than 20 Hz.
Figure 3.11 shows the effect of an inadequate selection of the sampling rate
that causes an erroneous reconstruction of the waveform after digitization of the
continuous signal with a high-frequency signal appearing as a low-frequency one.
Thus, a typical problem of digital signal analysis caused by the discrete sampling of
continuous signals is the so-called aliasing. Aliasing originates from the
discretization of the continuous signal when the signal is sampled too slowly and,
as a consequence of this under-sampling, higher frequencies than the Nyquist
frequency are reflected in the observed frequency range causing amplitude and
frequency errors in the spectrum of the signal.
For instance, assume that the analog signal includes the power line frequency
fPL = 50 Hz > fN; in the absence of any countermeasure, when the analog signal is
sampled at fs = 40 Hz, the frequency fPL will be aliased and it will appear as a frequency
component of the signal at 10 Hz. In fact, the alias frequency is given by the absolute
value of the difference between the closest integer multiple of the sampling frequency
and the input frequency. In the present case: |fs − fPL| = |40 − 50| Hz = 10 Hz.
This alias frequency cannot be distinguished from actual low-frequency
components in the signal. Effective countermeasures are, therefore, needed to
prevent distortions in the computation of the spectrum of the digital signal.
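The folding rule described above can be coded as a one-line helper; the function name and the test values are illustrative.

```python
def alias_frequency(f_in, fs):
    """Apparent frequency of an undersampled tone: absolute difference between the
    input frequency and the closest integer multiple of the sampling frequency."""
    return abs(f_in - fs * round(f_in / fs))

print(alias_frequency(50.0, 40.0))   # 10.0 Hz: the 50 Hz mains tone folds to 10 Hz
print(alias_frequency(15.0, 40.0))   # 15.0 Hz: below the Nyquist frequency, unchanged
```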
The only method to avoid aliasing is the removal of all frequency components
in the analog signal that are above the Nyquist frequency before the analog-to-
digital conversion. An analog low-pass filter applied before this conversion
allows restricting the frequency range of the original analog signal. Such an
analog filter with sharp cut-off is usually referred to as the anti-aliasing filter.
The presence of an analog anti-aliasing filter before the ADC is a critical
requirement in the selection of the data acquisition system, since only filtering applied
before sampling can minimize aliasing. It is worth pointing out that an ideal anti-aliasing filter passes
all the frequencies below and removes all the frequencies above the cut-off
frequency of the filter, but it is not physically realizable. Thus, real anti-aliasing
filters always have a transition band, which causes a gradual attenuation of the
input frequencies (Fig. 3.12). Thus, the frequencies in the transition band can still
cause aliasing.
Taking into account that no low-pass filter has an infinitely sharp roll-off, the
filter is often set at 80 % of the Nyquist frequency (40 % of the sampling frequency)
or even less, depending on the roll-off rate of the filter itself. Thus, the portion of the
spectrum close to the Nyquist frequency is inevitably distorted and should be
disregarded. In other words, due to the existence of this finite transition band
in the analog anti-aliasing filter, the sampling frequency has to be set higher than
twice the maximum frequency in the signal. In the previous sample case, assuming
that the filter is set at 80 % of the Nyquist frequency so that it cuts off all frequencies
above fN and the transition band is [0.8fN, fN], the frequency component at 20 Hz can
be properly measured by setting fs > 50 Hz (so that 20 Hz ≤ 0.8 fN, that is, fN ≥ 25 Hz).
Unwanted frequency components can still be present in the acquired data.
Digital filters can reject them. In Chap. 2 the modification of the signal in time
domain by windowing has been discussed. Like the windows, the filters modify the
signal but they act in frequency domain. They are usually classified as low-pass,
high-pass, band-pass, and band-stop, depending on the frequency range of the
signal that the filter is intended to pass from the input to the output with no
attenuation. In particular, a low-pass filter excludes all frequencies above the
cut-off frequency of the filter, while a high-pass filter excludes the frequencies
below the cut-off frequency; a band-pass filter excludes all frequencies outside its
frequency band, while a band-stop filter excludes the frequencies inside the filter
band. Since the real filters have certain roll-off features outside the design fre-
quency limits, the extension of the transition band and some effects on magnitude
and phase of the signal have to be taken into account in the applications of digital
filters.
An ideal low-pass filter removes all frequency components above the cut-off
frequency. Moreover, it is characterized by a linear phase shift with frequency. The
linear phase implies that the signal components at all frequencies are delayed by a
constant time. As a consequence, the overall shape of the signal is preserved.
However, the transfer functions of real filters only approximate the characteristics
of ideal filters. A real filter can show a ripple (that is to say, an uneven variation of
the gain with frequency) in the pass-band, a transition band outside the filter limits,
and finite attenuation and ripple in the stop-band. Real filters can also show some
nonlinearity in their phase response, causing a distortion of the shape of the signal.
Thus, cut-off frequency (the frequency where the filter causes a 3 dB attenuation)
and roll-off rate of the filter play a primary role in the choice of the filter, together
with the presence of ripple in the pass-band (often present in high order filters) and
phase nonlinearity.
Two types of digital filters exist, the so-called Finite Impulse Response (FIR)
filters, characterized by a finite impulse response, and the Infinite Impulse Response
(IIR) filters, whose impulse response exists indefinitely. The main difference is that
the output of FIR filters depends only on the current and past input values, while the
output of IIR filters depends also on the past output values.
IIR filters have an ARMA structure (see also Chap. 4). They provide a fairly
sharp roll-off with a limited number of coefficients. Thus, the main advantage in
using IIR filters is that they require fewer coefficients than FIR filters to carry out
similar filtering operations. As a consequence, they are less computationally
demanding. However, they often show nonlinear phase and ripples.
FIR filters are also referred to as moving average filters. Their main advantage
consists in a linear phase response, which makes them ideal for applications where
the information about the phase has to be preserved. Unlike IIR filters, FIR filters
are also always stable.
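The two filter families can be compared with the short sketch below, assuming SciPy is available; cut-off frequency, filter order, and number of taps are arbitrary illustrative values.

```python
import numpy as np
from scipy import signal

fs = 100.0                                   # sampling frequency [Hz]
fc = 10.0                                    # cut-off frequency [Hz]
t = np.arange(0.0, 60.0, 1.0 / fs)
x = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 25.0 * t)  # 2 Hz + 25 Hz

# IIR low-pass (Butterworth): few coefficients, sharp roll-off, nonlinear phase;
# filtfilt applies it forward and backward so that the phase distortion cancels out.
b_iir, a_iir = signal.butter(4, fc / (fs / 2.0), btype="low")
x_iir = signal.filtfilt(b_iir, a_iir, x)

# FIR low-pass (windowed sinc): more coefficients, but linear phase by design
# (constant group delay of (numtaps - 1) / 2 samples).
b_fir = signal.firwin(numtaps=101, cutoff=fc, fs=fs)
x_fir = signal.lfilter(b_fir, 1.0, x)
```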
It is worth pointing out once again that only analog filters can definitely ensure
that aliasing has not occurred. Alias frequencies cannot be recognized after the
signal has been sampled, and this makes digital filters ineffective for their removal.
When records of the structural response are filtered to remove unwanted fre-
quency components before modal parameter extraction, it is important to take into
account the errors on amplitude and phase induced by filters. However, if the same
filtering operation is applied to all measurement channels in the dataset, it does not
negatively affect the estimation of the mode shapes.
In the context of modal identification, high-pass filters are often used to remove
frequencies close to DC, while low-pass filters are used to exclude high-frequency
components in view of decimation. The acquired signals are often sampled at a
higher frequency than needed for the analysis. Decimation (or down-sampling) is
therefore used to resample the acquired signals to a lower sampling frequency. This
task cannot be accomplished simply by removing a certain number of samples
depending on the adopted decimation factor (for instance, one in every two samples
if the decimation factor is equal to 2). In fact, aliasing may occur due to the presence
of frequency components between the original Nyquist frequency and the new
value after decimation. These frequency components above the new Nyquist
frequency have to be removed before decimation by applying a low-pass filter.
Low-pass filtering ensures that the decimated data are not affected by aliasing.
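A short sketch of decimation with and without the anti-aliasing low-pass filter, assuming SciPy is available; sampling frequency, decimation factor, and signal content are illustrative.

```python
import numpy as np
from scipy import signal

fs = 100.0                        # original sampling frequency [Hz]
q = 4                             # decimation factor -> new fs = 25 Hz, new Nyquist = 12.5 Hz
t = np.arange(0.0, 600.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1.5 * t) + 0.2 * np.sin(2 * np.pi * 20.0 * t)

# Wrong: plain subsampling; the 20 Hz component aliases below the new Nyquist frequency.
x_bad = x[::q]

# Correct: low-pass filter below the new Nyquist frequency, then subsample.
# scipy.signal.decimate performs both steps (here with a zero-phase FIR anti-aliasing filter).
x_dec = signal.decimate(x, q, ftype="fir", zero_phase=True)
fs_new = fs / q
```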
3.7 Data Validation and Pretreatment

Validation of the collected data is recommended before they are analyzed and the
modal parameters estimated. The validation step is definitely feasible in the case of
offline data processing, while it is usually omitted in automated OMA. It consists in
the careful inspection of the time histories and, in particular, of their distribution to
identify common anomalies in the data. A usual assumption in data processing is
that data are stationary, with Gaussian distribution and no periodicities. Simple
considerations about the physical mechanisms that produced the data are often
sufficient to assess if the data can be considered stationary. For instance, this is the
case when data are generated by time-invariant phenomena (both the structure and
the loading). However, OMA methods are robust even in the presence of slightly
nonstationary data; on the contrary, application of OMA to large transients, such as
those associated to earthquake loading, leads to unpredictable and often erroneous
results. If data are collected under conditions that do not permit an assumption of
stationarity based on physical considerations, specific investigations based, for
instance, on the reverse arrangements test (Bendat and Piersol 2000) can be
carried out.
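A minimal sketch of the reverse arrangements test is given below, following the formulation in Bendat and Piersol (2000); the number of segments, the use of the mean square value as the segment statistic, and the 5 % significance threshold are illustrative choices.

```python
import numpy as np

def reverse_arrangements_test(x, n_segments=20):
    """Reverse arrangements test for stationarity (after Bendat and Piersol 2000).
    Returns the number of reverse arrangements and its standardized value;
    |z| > 1.96 suggests rejecting stationarity at the ~5 % significance level."""
    segments = np.array_split(np.asarray(x, dtype=float), n_segments)
    ms = np.array([np.mean(seg ** 2) for seg in segments])   # mean square value per segment
    n = len(ms)
    # A reverse arrangement is a pair (i < j) with ms[i] > ms[j].
    A = sum(int(np.sum(ms[i] > ms[i + 1:])) for i in range(n - 1))
    mean_A = n * (n - 1) / 4.0
    var_A = n * (2 * n + 5) * (n - 1) / 72.0
    z = (A - mean_A) / np.sqrt(var_A)
    return A, z
```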
Periodic signals in otherwise random data lead to sharp peaks in the auto-
spectrum that can be confused with narrow-band resonances. Some techniques for
the identification of periodic components in random data will be illustrated in
Chap. 5.
Even if specific tests, such as the chi-square goodness-of-fit test, can be carried
out (Bendat and Piersol 2000), an easy check for normality consists in the
comparison between the estimated probability density function of the data and
the theoretical normal distribution. This is especially true in the case of suffi-
ciently long records, such as those typically adopted in OMA tests. Deviations
from normality can also provide information about the presence of certain
anomalies in the data.
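One possible implementation of this check is sketched below: the empirical probability density is compared with the Gaussian density having the same mean and standard deviation, and skewness and excess kurtosis are printed as summary indicators. The number of bins and the reported deviation measure are arbitrary choices.

```python
import numpy as np
from scipy import stats

def normality_check(x, n_bins=101):
    """Compare the empirical probability density of a record with the Gaussian
    density having the same mean and standard deviation."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    density, edges = np.histogram(x, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = stats.norm.pdf(centers, loc=mu, scale=sigma)
    max_dev = np.max(np.abs(density - gauss))
    print(f"skewness = {stats.skew(x):+.3f}, excess kurtosis = {stats.kurtosis(x):+.3f}, "
          f"max PDF deviation = {max_dev:.3f}")
    return centers, density, gauss
```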
Different anomalies can affect the raw data. One of the most common is signal
clipping, which occurs when the signal saturates the ADC. Clipping may be
two-sided, thus revealing improper matching between sensor output and ADC
input, or one-sided, usually caused by excessive offset. It can be easily
recognized by a visual inspection of the time series (Fig. 3.13) or by the analysis
of the probability density function of the data (Fig. 3.14). It is worth noting that
low-pass filtering typically obscures clipping, making its identification impossible.
Thus, digital filtering has to be applied after data validation.
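A simple heuristic for flagging clipping is sketched below: it measures the fraction of samples lying at (or very close to) the record extremes, which becomes anomalously large when the ADC saturates. The tolerance and the threshold are hypothetical values to be tuned to the specific measurement chain.

```python
import numpy as np

def detect_clipping(x, tol=1e-6, max_fraction=0.001):
    """Flag possible clipping: an anomalously large fraction of samples lying within
    `tol` of the record extremes suggests saturation of the ADC (or of the sensor)."""
    x = np.asarray(x, dtype=float)
    frac_at_max = np.mean(x >= x.max() - tol)
    frac_at_min = np.mean(x <= x.min() + tol)
    clipped = (frac_at_max > max_fraction) or (frac_at_min > max_fraction)
    return clipped, frac_at_max, frac_at_min
```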
Improper matching between sensor output and ADC input can also lead to
excessive instrumentation noise. This happens if the input signal from the sensor
has a very low voltage with respect to the maximum allowed input voltage of the
ADC. As a consequence, it occupies only a limited number of the available bits of
the ADC. Since the least significant bits are occupied by noise, it is clear that the
capabilities of the ADC are poorly exploited, and the resulting data are
characterized by a low SNR. An excessive instrumentation noise can be detected
through the inspection of the auto-spectrum of the signal, since the resonances
appear nearly buried in noise (Fig. 3.15). In extreme cases, the signal is so small that
it is totally obscured by the digital noise of the ADC. Excessive digital noise can be
easily identified by visual inspection of the time series. In fact, the signal appears as
a sequence of well-defined steps (Fig. 3.16). In these cases the measurement chain
is definitely inadequate and it has to be replaced.
Intermittent noise spikes may sometimes appear in the measured time histories
(Fig. 3.17). They can be the result of actual physical events, but malfunctioning of
the measurement chain (for instance, a defective connector or shielded cable when
subjected to vibration) might also cause them.
[Figs. 3.14–3.17: probability density function of the data, auto-spectrum (amplitude vs. frequency), and sample time histories illustrating the anomalies discussed in the text]
Fig. 3.18 Probability density function of a random signal with noise spikes
demonstrated by values equal to 1 along the main diagonal and close to 0 elsewhere.
Another validation of the obtained mode shapes has been carried out through the
complexity plots. As shown in Fig. 5.11, all modes are normal or nearly normal.
Slight imaginary components can be observed only in the mode shape of the fifth
mode. They can be probably related to noise effects and weak excitation.
The numerical characterization of the dynamic response of the Tower was based
on the implementation of a number of FE models. Modal analyses have been
carried out through the SAP2000® software (Computers and Structures 2006).
The model of the building represented in detail, under different assumptions,
the geometric and mechanical characteristics of the structural elements and the
mass distribution in plan and along the height. Position and geometry of the structural
and nonstructural elements at each floor have been defined according to the results
of in-situ investigations and original drawings. One-dimensional structural
elements (columns, beams, braces) have been modeled by “beam” elements while
two-dimensional structural elements (r.c. walls, stairs) have been modeled by
“shell” elements. Nonstructural walls made of tuff masonry have been modeled
by shell elements as well. This assumption leads to overall stiffness characteristics
that are similar to the uncracked shear stiffness evaluated according to relevant
behavioral models (Fardis 1996). Finally, at each floor, 0.05 m thick shell elements
have been used to model the floors. The structure is assumed fixed at the base,
so no soil-structure interaction has been considered.
The masses have been directly associated to the structural elements according to
the specific mass of the material (concrete) and the geometric dimensions of the
cross section. In a similar way, a uniform area mass has been assigned to floors
and stairs. This mass has been evaluated according to the section geometry. No live
loads have been applied, in accordance with the state of the structure at time of
testing. As regards tuff masonry walls, a linear mass has been externally applied to
the beams they stand on.
Correlation with the experimental results has been assessed by defining a
number of model classes through the combination of the following modeling
assumptions:
• Absence vs. presence of tuff masonry walls;
• Absence vs. presence of the basement structure;
• Shell elements vs. rigid diaphragms to model floors.
The relevant uncertainties in the numerical modeling of the structure required a
rational assessment of the effectiveness of these different modeling assumptions.
In particular, the evaluation of the correlation with the results of the dynamic test
was aimed at the assessment of the influence of curtain walls on the dynamic
behavior of the structure, the characterization of the level of interaction between
the Tower and the surrounding basement, and the assessment of the sensitivity to
different assumptions about the in-plane stiffness of the floors.
Fig. 5.11 Complexity plots: mode I (a), II (b), III (c), IV (d), V (e), VI (f) (© Elsevier Ltd. 2013, reprinted with permission)
Fig. 5.12 Numerical models of the Tower under simplified (a) and complete (b) modeling of the basement (© Elsevier Ltd. 2013, reprinted with permission)
As far as the role of tuff masonry walls is concerned, the correlation with the model
characterized by absence of curtain walls has been evaluated mainly because this
represents a traditional assumption in structural design. However, as expected,
under low levels of excitation the dynamic response of the structure is definitely
influenced by the presence of the masonry infills, as pointed out also by the results
of the modal identification tests in operational conditions. This circumstance has
been confirmed by the poor correlation between experimental and numerical results
obtained for the model without walls. As a consequence, this case is not
discussed further hereinafter.
As for the basement structure, the main source of uncertainty was related to the
specific link (short deep beams) with the inner Tower and the related level of
restraint offered to the Tower itself. A simplified approach has been considered first. It was
based on the assumption that the basement can be considered as a translational
restraint along the perimeter of the first level of the Tower (Fig. 5.12a). This
assumption takes into account the high transverse stiffness of the basement, due
to the presence of perimeter r.c. walls and r.c. stairs for the access to the Tower,
and its reduced height with respect to the Tower, resulting in a low contribution
in terms of participating mass. On the other hand, because of the low level of
vibrations in operational conditions, an additional model based on the assumption
of full interaction between the Tower and the basement has been considered
(Fig. 5.12b). This condition mainly affects the values of natural frequencies,
in particular at higher modes.
The validation of the modeling assumptions and the tuning of the model parameters
have taken advantage of the sensitivity analyses carried out for each class of models
and of the evaluation of correlations with the experimental results. In particular,
Fig. 5.13 Sensitivity of natural frequencies to changes in the elastic modulus of materials (Floor = Shell, With basement): mode I (a), II (b), III (c), IV (d), V (e), VI (f) (© Elsevier Ltd. 2013, reprinted with permission)
correlations have been evaluated as discussed in Chap. 4, while the systematic tuning
of the parameters of the FE model has required the minimization of a user-defined
objective function accounting for the correlation between numerical and experimen-
tal estimates of the modal properties. In the literature it is possible to find several
objective functions accounting for deviations between numerical and experimental
results. Widely used objective functions are defined in terms of the cumulative scatter
between analytical and experimental values of natural frequencies only:
J_f = Σ_{i=1}^{N_m} |Δf_i|     (5.2)
where the scatter between the numerical and the experimental estimate of the
natural frequency of the i-th mode is computed according to (4.298); alternatively,
they can take into account also the mode shape correlation expressed by the MAC
or the NMD (see also Chap. 4):
J_{f,{ϕ}} = Σ_{i=1}^{N_m} [ w_1 |Δf_i| + w_2 MAC({ϕ_i^e}, {ϕ_i^a}) ]     (5.3)

J_{f,{ϕ}} = Σ_{i=1}^{N_m} [ w_1 |Δf_i| + w_2 NMD({ϕ_i^e}, {ϕ_i^a}) ]     (5.4)
where w1 and w2 are optional weighting factors, which can be adopted when the
dynamic properties are measured with different accuracy (Jaishi and Ren 2005).
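A sketch of an objective function of the form of (5.2)-(5.3) is given below. The percent definition of the frequency scatter and the way the MAC term enters the sum are assumptions made for illustration; the book computes the scatter according to (4.298).

```python
import numpy as np

def mac(phi_e, phi_a):
    """Modal Assurance Criterion between an experimental and a numerical mode shape."""
    num = np.abs(np.vdot(phi_e, phi_a)) ** 2
    return num / (np.vdot(phi_e, phi_e).real * np.vdot(phi_a, phi_a).real)

def objective(f_exp, f_num, phi_exp, phi_num, w1=1.0, w2=1.0, use_mac=True):
    """Objective of the form of (5.2)-(5.3): cumulative frequency scatter, optionally
    combined, mode by mode, with a mode shape correlation term."""
    J = 0.0
    for fe, fa, pe, pa in zip(f_exp, f_num, phi_exp, phi_num):
        df = abs(fa - fe) / fe * 100.0       # frequency scatter, here assumed in percent
        corr = mac(pe, pa) if use_mac else 0.0
        J += w1 * df + w2 * corr
    return J
```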
In the present case, the model parameter tuning was aimed at setting a refined model
able to reproduce as closely as possible the actual dynamic behavior of the structure
and, at the same time, enhance the accuracy of response spectrum and seismic time-
history linear analyses. Thus, checks not only of the mode shapes but also of the
values of the PMRs (in particular those associated to the less correlated modes)
throughout the model refinement process were necessary to properly fit the
requirements of seismic analyses.
The PMR of a mode provides a measure of how important that mode is for
computing the response of the modeled structure to the ground acceleration loads in
each of the three global directions defined in the model. Thus it is useful for
determining the accuracy of response spectrum analyses and seismic time-history
analyses. International seismic codes (CEN 2003) require that response spectrum
analyses are carried out combining, according to appropriate rules, all the modes
characterized by a PMR larger than 5 % or, alternatively, a number of modes
characterized by a total PMR larger than 85 %. This rule provides also a guide to
select the number of modes to be taken into account in model refinement. The
values of the PMRs obviously depend on the modeling assumptions and can vary
from one model to another. Nevertheless, they represent global parameters, which
can be estimated with adequate accuracy if the model to be optimized already
shows a good correlation with the experimental data at the beginning of the tuning.
The PMR depends on the mode shapes of the structure and its mass distribution.
The latter can usually be estimated with high accuracy. Thus, if the model also
shows a good correlation with the experimental mode shapes, the values of the
PMRs provided by the model can be considered fairly reliable. This is also the case
of the herein considered four model classes. Thus, in the presence of a fairly
accurate model, it is possible to take advantage of the information about the
PMRs to drive the refinement process towards a solution characterized by maxi-
mum accuracy in terms of prediction capability of the structural response to an
input ground motion.
Whenever the experimentally identified modes are not enough in number to
fulfill code provisions, the experimental results represent a constraint in the model
refinement. In the present case the values of the PMRs (Table 5.2) have pointed out
Table 5.2 PMRs of sample models for each class (© Elsevier Ltd. 2013, reprinted with permission)

Mode  PMR X dir.  PMR Y dir.  Total PMR X dir.  Total PMR Y dir.  PMR Z rot.  Total PMR Z rot.

Floor = Shell, With basement
I     6.02E−01    8.53E−07    6.02E−01          8.53E−07          2.54E−01    2.54E−01
II    7.36E−07    6.38E−01    6.02E−01          6.38E−01          2.47E−01    5.01E−01
III   1.49E−08    1.18E−03    6.02E−01          6.39E−01          1.10E−01    6.11E−01
IV    1.73E−01    3.35E−06    7.75E−01          6.39E−01          7.35E−02    6.84E−01
V     3.16E−06    1.59E−01    7.75E−01          7.98E−01          6.54E−02    7.50E−01
VI    4.53E−08    1.30E−05    7.75E−01          7.98E−01          2.32E−02    7.73E−01

Floor = Diaphragm, With basement
I     5.94E−01    2.95E−06    5.94E−01          2.95E−06          2.51E−01    2.51E−01
II    2.00E−06    6.29E−01    5.94E−01          6.29E−01          2.40E−01    4.91E−01
III   3.25E−08    1.75E−03    5.94E−01          6.31E−01          1.12E−01    6.03E−01
IV    1.77E−01    3.68E−06    7.71E−01          6.31E−01          7.53E−02    6.78E−01
V     5.24E−06    1.60E−01    7.71E−01          7.91E−01          6.56E−02    7.44E−01
VI    2.21E−08    3.57E−05    7.71E−01          7.91E−01          2.41E−02    7.68E−01

Floor = Shell, Without basement
I     6.52E−01    8.12E−07    6.52E−01          8.12E−07          2.81E−01    2.81E−01
II    7.30E−07    7.07E−01    6.52E−01          7.07E−01          2.80E−01    5.60E−01
III   3.40E−08    1.31E−03    6.52E−01          7.09E−01          1.25E−01    6.86E−01
IV    1.69E−01    3.37E−06    8.20E−01          7.09E−01          7.32E−02    7.59E−01
V     3.00E−06    1.43E−01    8.20E−01          8.52E−01          5.99E−02    8.19E−01
VI    1.30E−10    1.25E−05    8.20E−01          8.52E−01          2.12E−02    8.40E−01

Floor = Diaphragm, Without basement
I     6.81E−01    4.32E−06    6.81E−01          4.3E−06           2.93E−01    2.93E−01
II    2.34E−06    7.47E−01    6.81E−01          7.47E−01          2.90E−01    5.83E−01
III   4.26E−08    2.15E−03    6.81E−01          7.50E−01          1.36E−01    7.19E−01
Table 5.3 Frequency scatter after parameter tuning according to (5.2) and (5.5) (© Elsevier Ltd. 2013, reprinted with permission)

J               Solution                                                              Δf1    Δf2   Δf3    Δf4    Δf5   Δf6    Total scatter
Equation (5.2)  Floor = Diaphragm, Without basement (Ec = 22,500 MPa; Gt = 310 MPa)   3.08   0.01  2.45   11.56  0.08  1.99   19.17
                Floor = Shell, Without basement (Ec = 19,500 MPa; Gt = 390 MPa)       10.66  0.68  1.50   0.91   2.98  0.59   17.31
                Floor = Diaphragm, With basement (Ec = 24,250 MPa; Gt = 300 MPa)      6.05   0.03  2.25   4.07   3.43  0.008  15.83
                Floor = Shell, With basement (Ec = 24,000 MPa; Gt = 360 MPa)          8.03   2.11  0.008  0.65   4.86  0.12   15.78
Equation (5.5)  Floor = Diaphragm, Without basement (Ec = 24,000 MPa; Gt = 300 MPa)   0.04   1.08  1.40   14.96  0.61  2.75   20.84
                Floor = Shell, Without basement (Ec = 24,750 MPa; Gt = 300 MPa)       0.24   0.75  1.35   12.04  3.41  0.46   18.27
                Floor = Diaphragm, With basement (Ec = 27,500 MPa; Gt = 300 MPa)      0.13   3.64  1.33   10.51  0.25  3.29   19.15
                Floor = Shell, With basement (Ec = 28,500 MPa; Gt = 300 MPa)          0.30   2.90  0.79   8.77   4.33  0.40   17.49
that, in almost all cases, six modes are not sufficient to get a total PMR larger than
85 %; however, higher modes after the fifth were generally characterized by PMRs
lower than 5 %. This confirmed that the response of the Tower to input ground
motions mostly depends on the first five or six modes. Thus, the experimentally
identified modes were sufficient for the objectives of the analysis.
For each class of models the solution provided by the minimization of the above
mentioned objective functions was locally unique but, in almost all cases, the
maximum error affected the fundamental mode of the structure (Table 5.3). This
was also characterized by a high PMR and, therefore, contributed significantly to
the structural response to an input ground motion. In order to maximize the overall
correlation with the experimental estimates of the modal parameters and minimize
the error on the fundamental modes, characterized by the largest PMRs, an alterna-
tive objective function has been formulated. The information about the PMRs has
Table 5.4 Mode shape correlation (NMD) of models characterized by different assumptions for the basement and model parameters tuned by minimization of the function in (5.5) (© Elsevier Ltd. 2013, reprinted with permission)

Solution                                                          Mode I  Mode II  Mode III  Mode IV  Mode V  Mode VI
Floor = Shell, Without basement (Ec = 24,750 MPa; Gt = 300 MPa)   0.242   0.076    0.319     0.306    0.140   0.352
Floor = Shell, With basement (Ec = 28,500 MPa; Gt = 300 MPa)      0.207   0.060    0.309     0.285    0.126   0.347
been directly included into the objective function as a weighting factor for the
frequency scatters, thus obtaining the following expression:
J_{f,PMR} = Σ_{i=1}^{N_m} ( |Δf_i| · PMR_i )     (5.5)
where:
PMR_i = ( {ϕ_i^a}^T [M] {I} )² / ( {ϕ_i^a}^T [M] {ϕ_i^a} )     (5.6)
is the PMR of the i-th mode and {I} the influence vector (Chopra 2000). By this
choice of the objective function, a larger error on the less contributing modes does
not bias the overall calibration. The model parameters minimizing the objective
function in (5.5) for the four model classes are reported in Table 5.3, together with
the frequency scatters.
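The following sketch evaluates the PMR of a mode and the PMR-weighted objective of (5.5). The normalization of the effective modal mass by the total mass (last line of pmr) and the percent definition of the frequency scatter are assumptions made for illustration.

```python
import numpy as np

def pmr(phi, M, influence):
    """Participating mass ratio of a mode, following (5.6): effective modal mass,
    here normalized by the total mass in the considered direction (assumed)."""
    phi = np.asarray(phi, dtype=float)
    M = np.asarray(M, dtype=float)
    influence = np.asarray(influence, dtype=float)
    L = phi @ M @ influence                      # modal participation term
    m_eff = L ** 2 / (phi @ M @ phi)             # effective modal mass
    return m_eff / (influence @ M @ influence)   # normalization by total mass

def objective_pmr(f_exp, f_num, pmrs):
    """PMR-weighted cumulative frequency scatter, Eq. (5.5)."""
    df = np.abs(np.asarray(f_num) - np.asarray(f_exp)) / np.asarray(f_exp) * 100.0
    return float(np.sum(df * np.asarray(pmrs)))
```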
The objective function of (5.5) implicitly takes into account the information
about the mode shapes through the PMRs. It provides a solution that is not simply
the one minimizing the cumulative scatter with the experimental results. In fact, at
the same time it gives the best results in terms of response spectrum and seismic
time-history analyses, maximizing the correlation for the modes characterized by
large PMRs and allowing larger errors in the case of modes providing minor
contributions to the overall seismic response of the structure.
The best correlation with the experimental data has been obtained by modeling
the floors by shell elements, taking into account the full interaction with the
basement, and considering the following values of the updating parameters:
Ec ¼ 28,500 MPa, Gt ¼ 300 MPa. Taking into account the state of conservation of
the structure, realistic values of the elastic properties of materials have been
obtained. On the other hand, further checks of the solutions provided by (5.5)
in terms of mode shape correlation have shown that the mode shapes obtained
when the basement is explicitly modeled are also better correlated to the experi-
mental data than those obtained when the effect of the basement is taken into
account in a simplified way. The differences are highlighted by the NMD values
shown in Table 5.4.
5.4 Mass Normalized Mode Shapes

A major limitation of OMA pertains to the estimation of mode shapes. Since the
input is unmeasured, the mode shapes cannot be mass normalized. Only unscaled
mode shapes can be estimated because the scaling factors depend on the unknown
excitation (Parloo et al. 2005). The missing information about the scaling factors
can restrict the use of the modal parameter estimates in those application domains
requiring mass normalized mode shapes, such as structural response simulation,
load estimation (Parloo et al. 2003), some damage detection techniques (Pandey
and Biswas 1994, Parloo et al. 2004). From a general point of view, the knowl-
edge of the scaling factors of the mode shapes is critical whenever an FRF matrix
has to be assembled from the estimated modal parameters (see also Chap. 4).
For this reason, methods for the experimental estimation of mass normalized
mode shapes by OMA procedures have recently been developed. They are based
on the modification of the dynamic behavior of the structure under test by
changing its mass or stiffness. The results of output-only modal identification
of both the original and the modified structure are finally used to estimate the
scaling factors.
The most popular methods for the estimation of scaling factors and mass
normalized mode shapes belong to the class of the mass-change methods. These
methods have been extensively validated by tests in the laboratory and on real
structures (see, for instance, Parloo et al. 2005, Lopez-Aenlle et al. 2010). They
consist in attaching masses to the points of the structure where the mode shapes of
the unmodified structure have been estimated in a previous stage.
The number, magnitude and location of the masses is defined by the modal
analyst and, as discussed below, this choice, together with the accuracy of modal
parameter estimates, is responsible for the accuracy of the estimated scaling factors.
Moreover, since the mass-change methods use the modal parameters of both the
original and the modified structure, they require extensive tests and a careful design
of the experimental program. Lumped masses are typically used, so that the mass-
change matrix [ΔM] is diagonal.
Stiffness-change methods are a possible alternative to mass-change methods.
As the mass-change methods, they use the modal parameters of both the original
and the modified structure, but in this case the dynamic response of the structure is
modified by changing its stiffness. Stiffness changes are obtained by connecting
devices, such as cables or bars, at points of the structure where the mode shapes of
the unmodified structure have been estimated in a previous stage. Combined mass-
stiffness-change methods have also been developed (Khatibi et al. 2009), obtaining
promising results. However, changing the stiffness of real civil structures is gener-
ally expensive and often impractical. This is the reason why attention is herein
focused on the mass-change methods and, in particular, on best practice criteria to
obtain accurate estimates of the scaling factors.
Equation (5.7) shows the relation between scaled and unscaled mode shape
vectors:
{ψ_j} = α_j {ϕ_j}     (5.7)
where αj is the scaling factor for the j-th mode shape. Equation (5.7) highlights that
the value of the scaling factor depends on the normalization used for the unscaled
mode shape. Mode shapes normalized in a way that the largest element in each
vector is equal to 1 are herein considered.
Omitting the theory behind each formulation (references are provided for the
interested reader), well-established expressions for the estimation of the scaling
factors by the mass-change approach are reported below.
Those fulfilling the structural-dynamic-modification theory are usually referred
to as exact equations, to mark the difference from the so-called approximated
equations, which do not properly take into account the change of the mode shapes
when the structure is modified (Lopez-Aenlle et al. 2012). All the proposed
formulations to estimate the scaling factor of a certain mode depend only on the
modal parameters of that mode. Thus, it is eventually possible to optimize the
estimation of the scaling factor of individual modes at the expense of an increase in
number and time of testing.
The original idea of the mass-change method can be traced back to Parloo
et al. (2002). Using a first-order approximation for the sensitivity of the natural
frequencies of lightly damped structures to mass changes, they derived a closed-
form expression for the estimation of the scaling factors:
α_j = √[ 2 (ω_{j,0} − ω_{j,m}) / ( ω_{j,0} {ϕ_{j,0}}^T [ΔM] {ϕ_{j,0}} ) ]     (5.8)
where ωj,0 and ωj,m represent the natural frequency of the j-th mode of the original
and the modified structure, respectively, and {ϕj,0} is the j-th (unscaled) mode
shape of the original structure. The validity of this formulation is restricted to small
changes of the modal properties as a result of the mass change.
Alternative formulations able to take into account the changes in mode shapes
and permitting relatively large modifications have been proposed by Bernal (2004)
and Lopez-Aenlle et al. (2010). If mode shape estimates of both the original and the
modified structure are available, the following matrix can be computed:
[B̂] = [Φ̂_0]^+ [Φ̂_m]     (5.9)
where [Φ̂_0]^+ denotes the pseudo-inverse of [Φ̂_0], and [Φ̂_0] and [Φ̂_m] are the matrices
collecting (in columns) the estimates of the mode shapes of the original and the modified
structure, respectively. The terms on the main diagonal of [B̂] are close to one while the
off-diagonal terms are close to zero, except for closely spaced modes (Lopez-Aenlle
et al. 2012). The terms on the main diagonal are usually fairly accurate even in the
presence of modal truncation effects, while poor accuracy characterizes the off-diagonal
terms (Lopez-Aenlle et al. 2012). If only the diagonal terms B̂_jj of [B̂] are considered,
the scaling factor of the j-th mode can be computed as:
α_j = √[ (ω²_{j,0} − ω²_{j,m}) B̂_jj / ( ω²_{j,m} {ϕ_{j,0}}^T [ΔM] {ϕ_{j,m}} ) ]     (5.10)
where {ϕj,m} is the j-th (unscaled) mode shape of the modified structure. This
expression was originally obtained by Bernal (2004) as a result of the projection of
the mode shapes of the modified system on the basis of the original structure. The
expression proposed by Lopez-Aenlle et al. (2010) is a special case of the formula-
tion given by (5.10) where B̂_jj is assumed to be equal to one:

α_j = √[ (ω²_{j,0} − ω²_{j,m}) / ( ω²_{j,m} {ϕ_{j,0}}^T [ΔM] {ϕ_{j,m}} ) ].     (5.11)
Equations (5.8), (5.10) and (5.11) point out the primary role of output-only
modal parameter identification in the accurate estimation of scaling factors. The
uncertainties associated to natural frequency and mode shape estimates must be as
low as possible (best practices to carry out a good modal identification have been
discussed throughout this book).
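The three formulations can be coded directly, as sketched below. Natural frequencies must be supplied in consistent units (Hz or rad/s, since only their ratios enter the formulas), mode shapes as real-valued arrays, and the mass-change matrix as a (typically diagonal) array; function names are illustrative.

```python
import numpy as np

def alpha_parloo(w0, wm, phi0, dM):
    """Eq. (5.8): first-order sensitivity formulation (Parloo et al. 2002)."""
    return np.sqrt(2.0 * (w0 - wm) / (w0 * (phi0 @ dM @ phi0)))

def alpha_bernal(w0, wm, phi0, phim, dM, Bjj=1.0):
    """Eq. (5.10), with the projection term Bjj; with Bjj = 1 it reduces to the
    expression of Lopez-Aenlle et al. (2010), Eq. (5.11)."""
    return np.sqrt((w0 ** 2 - wm ** 2) * Bjj / (wm ** 2 * (phi0 @ dM @ phim)))

# Bjj can be taken from the diagonal of the matrix of Eq. (5.9):
# B = np.linalg.pinv(Phi0) @ Phim, with mode shapes collected column-wise.
```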
However, the uncertainties in the estimation of scaling factors depend not only on
the accuracy of modal parameter estimates but also on the adopted mass-change
strategy. Different choices for the number, magnitude and location of the masses lead
to different changes in the dynamic behavior of the structure. Thus, the mass-change
strategy has to be carefully designed and analyzed before applying the mass-change
method for the estimation of the scaling factors. Previous experiences reported in
the literature provide the following guidelines for a successful estimation of scaling
Assume that only five masses are available. The objective of this example is the
analysis of the variations of the scaling factor estimates for two different
configurations of the attached masses, one of which has been optimized for the
estimation of the scaling factors of the three fundamental modes of the structure.
For this explanatory application, assume that all modes of the structure have
been identified and the modal displacements have been initially measured at all the
ten DOFs. The optimum location of the masses can be defined from the analysis of
modal displacements. The contribution of a unit mass located at the j-th DOF to
shift the natural frequency of the k-th mode is expressed by the square of the related
component of the unscaled mode shape vector of the k-th mode of the original
structure (Lopez-Aenlle et al. 2010). The squared modal displacements are reported
in Table 5.5 for all the DOFs and modes of the original structure. They have also
been normalized making the largest element equal to 100 for each mode. DOFs are
numbered from 1 to 10 from bottom up.
The analysis of the squared modal displacements in Table 5.5 shows that the
largest frequency shifts for the three fundamental modes of the structure can be
obtained by placing the masses at DOFs #10, #9, #7, #3, #2. The results obtained by
this configuration of the added masses are compared with those obtained by placing
the masses at DOFs #10, #8, #5, #2, #1.
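The computation behind Table 5.5 is sketched below; note that the actual selection of the five DOFs in the text was made by inspecting the table, and any automatic ranking rule would be only one possible criterion.

```python
import numpy as np

def squared_modal_displacements(Phi):
    """Normalized squared modal displacements (as in Table 5.5): the frequency shift of
    mode k produced by a unit mass at DOF j is proportional to Phi[j, k]**2
    (Lopez-Aenlle et al. 2010). Rows are DOFs, columns are modes; each column is
    normalized so that its largest entry equals 100."""
    sq = np.asarray(Phi, dtype=float) ** 2
    return 100.0 * sq / sq.max(axis=0)

# Example (Phi is a hypothetical n_dof x n_modes matrix of unscaled mode shapes):
# table = squared_modal_displacements(Phi)
# for k in range(3):   # rank the DOFs by their effect on each of the first three modes
#     print(f"mode {k + 1}: most effective DOFs {np.argsort(table[:, k])[::-1][:5]}")
```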
Denote the optimum configuration of the masses as CONFIG#1 and the other as
CONFIG#2. In both cases, the added mass is equal to 3 % of the total mass of the
structure. In other words, masses of 450 kg are attached to the five DOFs considered
in the respective analysis cases.
Table 5.5 Contribution of unit mass at DOF j to the frequency shift of mode k
Mode  DOF#10  DOF#9  DOF#8  DOF#7  DOF#6  DOF#5  DOF#4  DOF#3  DOF#2  DOF#1
I 100.0 95.6 87.1 75.4 61.5 46.5 31.9 18.9 8.7 2.2
II 100.0 64.3 19.8 0.0 19.8 64.3 100.0 100.0 64.3 19.8
III 87.1 18.9 8.7 75.4 95.6 31.9 2.2 61.5 100.0 46.5
IV 100.0 0.0 100.0 100.0 0.0 100.0 100.0 0.0 100.0 100.0
V 64.3 19.8 100.0 0.0 100.0 19.8 64.3 64.3 19.8 100.0
VI 46.5 61.5 31.9 75.4 18.9 87.1 8.7 95.6 2.2 100.0
VII 31.9 95.6 2.2 75.4 61.5 8.7 100.0 18.9 46.5 87.1
VIII 19.8 100.0 64.3 0.0 64.3 100.0 19.8 19.8 100.0 64.3
IX 8.7 61.5 100.0 75.4 18.9 2.2 46.5 95.6 87.1 31.9
X 2.2 18.9 46.5 75.4 95.6 100.0 87.1 61.5 31.9 8.7
Table 5.6 Natural frequency estimates for the original and the modified structures
Mode # Original structure (Hz) CONFIG#1 (Hz) CONFIG#2 (Hz) CONFIG#3 (Hz)
I 0.363 0.357 0.358 0.358
II 1.082 1.063 1.066 1.066
III 1.777 1.743 1.750 1.750
A third case is also considered, where the masses are attached at the DOFs of
CONFIG#1 but the magnitude of the mass change is lower than 3 %. This configu-
ration of the masses is denoted as CONFIG#3 and the added masses, characterized
by the same magnitude (367 kg) for the various DOFs, produce the same frequency
shift obtained by the mass change considered in CONFIG#2.
The analysis of the simulated response to a Gaussian white noise of the original
structure and of the modified structures has given the results shown in Table 5.6 in
terms of natural frequency estimates. Cov-SSI has been used to analyze the data.
CONFIG#1 yields frequency shifts in the range 1.7–1.9 %, while the frequency
shifts corresponding to CONFIG#2 and CONFIG#3 are in the range 1.4–1.5 %.
Please, note that the simulated response of the original structure has been analyzed
twice in order to obtain the mode shape estimates at the different sets of DOFs
considered in CONFIG#1 and CONFIG#3 on one hand, and CONFIG#2 on the
other hand. The scaling factors estimated according to (5.8), (5.10) and (5.11) for
the three configurations of added masses are reported in Tables 5.7, 5.8 and 5.9.
The obtained results point out that, for a given mass change (3 % of the total
mass of the structure), the scaling factors estimated by CONFIG#1 are by far more
accurate than those provided by CONFIG#2. Moreover, comparing the results
provided by CONFIG#2 and CONFIG#3 shows that the estimates obtained by
CONFIG#3 are characterized by similar (or sometimes better) accuracy compared with
those provided by CONFIG#2, despite the lower mass change (2.4 % instead
of 3 %) and the equal frequency shifts for the considered modes.
The above discussed example points out the importance of carefully choosing
the location of the added masses. Whenever the optimization of the mass location
for the simultaneous estimation of the scaling factors of multiple modes is not
feasible, repeated tests focused on subsets of modes or on individual modes are
recommended to obtain accurate scaling factor estimates.
For Gaussian distributions the kurtosis is equal to 3, while the kurtosis of a sine
wave is 1.5. If the excess kurtosis

κ_e = κ − 3     (5.13)

is used instead of the kurtosis κ defined by (5.12), harmonic signals are identified by
values of κ_e equal to −1.5, while null values characterize Gaussian distributions.
As an alternative, if the probability density function of the random process has been
estimated (Chap. 2), harmonic signals in a stochastic process can be identified from
local minima of the entropy (sometimes referred to as Shannon entropy) in the
frequency range under investigation:
X
K
S¼ p^ k logðp^ k Þ: ð5:14Þ
k¼1
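One possible way of evaluating the two indicators around a candidate frequency is sketched below: the response is band-pass filtered around the frequency of interest, then the excess kurtosis and the Shannon entropy of the filtered samples are computed. The band-pass step, the filter order and bandwidth, and the number of histogram bins are illustrative assumptions.

```python
import numpy as np
from scipy import signal, stats

def harmonic_indicators(x, fs, f_center, half_band=0.25):
    """Excess kurtosis and Shannon entropy of the response band-pass filtered around
    f_center: excess kurtosis close to -1.5 and a local minimum of the entropy suggest
    a harmonic (deterministic) component rather than a structural mode."""
    low = (f_center - half_band) / (fs / 2.0)
    high = (f_center + half_band) / (fs / 2.0)
    b, a = signal.butter(4, [low, high], btype="bandpass")
    xf = signal.filtfilt(b, a, np.asarray(x, dtype=float))
    kurt_excess = stats.kurtosis(xf)     # cf. Eq. (5.13): 0 for Gaussian, -1.5 for a sine
    counts, _ = np.histogram(xf, bins=51)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log(p))     # cf. Eq. (5.14)
    return kurt_excess, entropy
```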
reduce their influence on the estimated natural frequencies and damping ratios
(Jacobsen et al. 2007). Whenever the harmonic frequencies close to the eigenfre-
quencies are known, modified NExT-type methods can also be adopted to explic-
itly take into account the harmonic excitation components in the identification
process and enhance the accuracy of modal parameter estimates. If the presence of
the harmonic component is not known in advance, accurate modal parameter
estimates can be obtained from the transmissibility-based method under the
assumption of fully correlated inputs (such as those coming from a dominant
discrete source); no filtering or interpolation is needed, but the assumptions of the
method require changing loading conditions. They imply more involved test
procedures to ensure changes in the location, number or amplitude of the forces.
These methods are beyond the scope of the present book because spurious
harmonics rarely lead to major problems in output-only modal identification of
civil engineering structures; the interested reader can refer to the literature for
more details (Mohanty and Rixen 2004, Mohanty and Rixen 2006, Devriendt
et al. 2009). Nevertheless, a complete structural and functional assessment of the
structure under test is recommended in order to identify possible sources of
harmonic excitation and take into account their possible effects on dynamic
measurements and test results.
Few studies are reported in the literature about the output-only identification of
structural modes in the presence of spurious dominating frequencies due to
dynamic interaction with adjacent structures. Oliveira and Navarro (2009) reported
this issue, pointing out the critical role of a careful examination of the peaks in the
spectra to successfully sort structural and spurious modes.
An integrated approach to the identification of spurious dominating frequencies
due to dynamic interaction with adjacent structures has recently been proposed
(Rainieri et al. 2012). It is herein described in its main aspects, because it provides
an interesting example of how the integrated use of output-only modal analysis
techniques can rationally support the identification of spurious frequencies due to
interaction with adjacent buildings. These frequencies do not appear as sharp peaks
in response spectra. This makes the discrimination between structural resonances
and spurious dominant frequencies even more difficult.
The case study analyzed in this section is particularly interesting in this perspec-
tive, because it deals with close and interacting structures characterized by similar
natural frequencies. Narrowband excitation due to rotating components is also
present. The effect of interactions on response spectra is represented by the pres-
ence of several peaks in a narrow frequency range; however, only a few of them are
actually related to structural modes.
The herein discussed case study refers to an experimental campaign carried out
on four buildings belonging to the B1 Block in the area of the Guardia di Finanza
Non-Commissioned Officers’ School in Coppito (L’Aquila, Italy) under the coor-
dination of the Italian Department of Civil Protection right after the L’Aquila
earthquake of April 6th, 2009. In particular, the results of dynamic tests are
presented.
Fig. 5.15 The B1 Building Block: East side (a), courtyard (b), South side (c), outdoor stairs (d)
Fig. 5.16 Sample singular value plots: filled circles denote structural modes, empty circles denote
dominant frequencies due to interactions, the empty square denotes a harmonic due to rotating
parts (© Elsevier Ltd. 2012, reprinted with permission)
frequencies and, as shown in Fig. 5.16, such frequencies were located in a narrow
frequency range between 2 and 4 Hz.
Interaction effects, represented by dominant frequency components in the
response spectrum of a building associated to structural resonances of the
surrounding structures, were probably related to the solutions adopted for
the foundations, which were common to all buildings and stairs in the block.
In addition, as a consequence of ground shaking and of the technological
solutions adopted for seismic joints, pounding phenomena between buildings and
stairs were not inhibited. The result was a certain degree of interaction among the
different buildings and among buildings and stairs (both interior and exterior) so
that the natural frequencies of a certain building or stair appeared in the dynamic
response in operational conditions of other close buildings in the block. A plane
view of the block is reported in Fig. 5.17. In the same picture, numbers 2, 3, 5 and
6 identify the investigated buildings. Their relevant dimensions are 48.7 m
by 15.40 m.
Measurements of the dynamic responses of the buildings were carried out to
monitor the evolution of the dynamic parameters during aftershocks, as a support
to visual inspections for structural damage detection, to refine numerical analyses
for capacity assessment, and in view of the installation of a vibration-based
SHM system. Thus, the adopted sensor layout was aimed at monitoring the largest
number of buildings at the same time with a limited number of available sensors.
Fig. 5.17 Plan view of the B1 Block: nine buildings jointed to one another and to the stairs, (© Elsevier Ltd. 2012), reprinted with permission
The sensor layout had to consider also some operational constraints (limited room
accessibility). The final configuration ensured the identification of at least the
fundamental bending and torsional modes of the tested buildings. The sensor layout
is reported in Fig. 5.18. A set of three sensors oriented along three orthogonal directions has also been installed at the basement in order to measure any input at the base of the structure due to ground motion. Even if the whole measurement setup is herein
described, only records of the dynamic response of the structures in operational
conditions have been considered in the following analyses. Buildings #2 and #3
have been monitored at the upper level and at the intermediate level. Four sensors
have been installed in two opposite corners at each level and along two orthogonal
directions in order to ensure observability of both translational and torsional modes.
Testing of buildings #5 and #6, instead, involved only two couples of sensors placed
at the upper level and measuring in two orthogonal directions parallel to the main
axes of each building. In the data processing phase it has also been recognized that
sensor #3 in the x direction on building #5 was out of order. It has not been used in
the analyses. Thus, only three sensors were available for the characterization of the
dynamic behavior of building #5. This circumstance increased the complexity of
modal identification.
The dynamic responses of buildings #2 and #3 have been measured by means of
uniaxial force balance accelerometers, while piezoelectric accelerometers have
been installed on buildings #5 and #6. The characteristics of the installed sensors
are summarized in Table 5.10. The selected sensors ensured the possibility to
properly resolve the dynamic response of the buildings to both the operational
loads and possible input ground motions.
Vibration data have been acquired through a customized 16-bit data acquisition
recorder, able to acquire dynamic data from a number of different sensors including
force balance and piezoelectric accelerometers. Sensors were installed on steel
Table 5.10 Characteristics of the installed sensors, (© Elsevier Ltd. 2012), reprinted with permission
Sensor | Measurement directions | Sensitivity (V/g) | Full-scale range (g) | Resolution (g) | Dynamic range (dB) | Bandwidth (Hz) | Transverse sensitivity (%) | Nonlinearity
Force balance accelerometer | 1 | 20 | 0.5 | – | 140 | 0–200 | 1 | <1,000 μg/g²
IEPE accelerometer | 1 | 10 | 0.5 | 0.000001 | – | 0.1–200 | 5 | 1 %
plates screwed on the floor. Cables were clamped to the structure along the whole
path to the data acquisition recorder in order to avoid introduction of measurement
noise by triboelectric effects due to cable motion.
Data have simultaneously been acquired from the 27 accelerometers installed
over the block of buildings; a sampling frequency equal to 100 Hz was adopted. The
dynamic response of the buildings has continuously been monitored for more than
two consecutive days. Each stored record was 3,600 s long.
Before processing, the records have been accurately inspected in order to assess
data quality. Offsets and spurious trends have been removed. Data have been
filtered and decimated to obtain a final sampling rate of 20 Hz. Hanning window
and 66 % overlap have been adopted for spectrum computation in view of fre-
quency domain modal parameter estimation. OMA has been carried out according
to the following methods: Cov-SSI, EFDD and SOBI. An automated output-only
modal identification procedure, named LEONIDA (Rainieri and Fabbrocino 2010),
has also been applied. It is a frequency domain algorithm for automated OMA
that takes advantage of the information obtained from the SVD of the PSD
matrices computed from multiple records to identify the bandwidth of the structural
modes before automatically extracting the modal parameters. More details are
given in Chap. 6.
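For the sake of illustration, the pre-processing and spectral estimation chain described above can be sketched in a few lines of Python code. This is a minimal sketch, not the software actually used for the tests; the array name data and the segment length nperseg are assumptions.

    import numpy as np
    from scipy import signal

    fs = 100.0          # original sampling frequency (Hz)
    decimation = 5      # 100 Hz -> 20 Hz final sampling rate
    nperseg = 1024      # segment length for spectrum computation (assumed value)

    def preprocess(data, fs, decimation):
        """Remove offsets and linear trends, then decimate (anti-aliasing filter included)."""
        detrended = signal.detrend(data, axis=0, type='linear')
        return signal.decimate(detrended, decimation, axis=0, zero_phase=True), fs / decimation

    def psd_matrix(data, fs, nperseg):
        """Output PSD matrix G(f), shape (n_freq, n_ch, n_ch), via Welch averaging
        with a Hanning window and 66 % overlap, as adopted in the case study."""
        n_ch = data.shape[1]
        noverlap = int(0.66 * nperseg)
        f, _ = signal.csd(data[:, 0], data[:, 0], fs=fs, window='hann',
                          nperseg=nperseg, noverlap=noverlap)
        G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
        for i in range(n_ch):
            for j in range(n_ch):
                _, G[:, i, j] = signal.csd(data[:, i], data[:, j], fs=fs, window='hann',
                                           nperseg=nperseg, noverlap=noverlap)
        return f, G

    # FDD-style singular value plots for visual inspection of candidate modes:
    # y, fs_dec = preprocess(data, fs, decimation)
    # f, G = psd_matrix(y, fs_dec, nperseg)
    # s = np.linalg.svd(G, compute_uv=False)   # s[k]: singular values at the k-th frequency line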
The dynamic responses of the buildings have been analyzed from the local and
global point of view. The global analysis highlighted the similarities among the responses of the different buildings. Then,
local analyses and consistency checks have made the discrimination between
structural resonances and spurious dominating frequencies possible. The selection
of candidate modes was based on the analyst’s experience following best practice
criteria for the various OMA methods. The only exception was represented by the
results provided by LEONIDA, which is a fully automated OMA procedure and, as
such, it does not require any user intervention. Ranking of the candidate modes
resulted from the inspection of a number of features; scores have been assigned
according to the list reported in Table 5.11.
A score equal to 1 (−1) has been adopted for values of the parameters usually
associated with the presence (absence) of a structural mode; a score equal to 0.5 was,
instead, adopted for values of the parameters associated with more uncertain
conditions. The maximum achievable total score was 7. A candidate mode has
been labeled as structural if the total score was at least equal to 5 out of 7. The minimum
value of the total score defining a structural mode and the adopted scores for SOBI
take into account that the responses of building #5 and building #6 have been
measured by few sensors (only three working sensors for building #5). As already
mentioned in Chap. 4, this has consequences on the number of identifiable modes
by SOBI. Moreover, this circumstance has limited the possibility to analyze the
aspect of mode shapes.
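The classification rule can be summarized by the following minimal sketch (illustrative only; the feature values in the example are hypothetical and do not reproduce the scores in Tables 5.12, 5.13, 5.14, and 5.15).

    THRESHOLD = 5.0   # minimum total score (out of 7) to label a candidate mode as structural

    def classify(candidate_scores):
        """candidate_scores: dict mapping frequency (Hz) -> list of the 7 feature scores
        (identification by EFDD, Cov-SSI, SOBI, LEONIDA, coherence, mode shape complexity,
        mode shape aspect), each equal to 1, -1, 0.5, or 0 as described above."""
        results = {}
        for freq, scores in candidate_scores.items():
            total = sum(scores)
            results[freq] = ('structural' if total >= THRESHOLD else 'spurious', total)
        return results

    example = {
        3.52: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],     # total 7.0 -> structural
        2.20: [1.0, 1.0, 0.5, 1.0, -1.0, -1.0, -1.0],  # hypothetical values -> spurious
    }
    print(classify(example))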
Scoring of candidate modes is detailed in Tables 5.12, 5.13, 5.14, and 5.15.
Ranking and identified structural modes (dark bars above threshold) are shown in
Fig. 5.19. Realistic damping values were associated to nearly all candidate modes
with few exceptions, so their inspection has not been useful to sort the structural
modes. For this reason damping has not been selected as a feature for classification.
Nevertheless, damping values are reported for the sake of completeness.
Table 5.12 Building #2: candidate modes and score report, (© Elsevier Ltd. 2012), reprinted with permission
Columns: f (Hz), EFDD, Cov-SSI, SOBI, LEONIDA, coherence, mode shape complexity, mode shape aspect, ξ (%), total score
2.18 1.0 1.0 0 1.0 0.5 1.0 1.0 5.9 1.5
2.58 1.0 1.0 0.5 1.0 1.0 1.0 1.0 2.2 6.5
2.64 1.0 1.0 0 1.0 1.0 1.0 1.0 0.2 2.0
2.88 1.0 1.0 1.0 1.0 0.5 1.0 1.0 2.4 6.5
2.98 1.0 1.0 0 1.0 1.0 1.0 1.0 1.3 4.0
3.52 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.1 7.0
5.31 1.0 1.0 0.5 1.0 1.0 1.0 1.0 0.9 6.5
7.09 1.0 1.0 0 1.0 1.0 1.0 1.0 0.7 6.0
Table 5.13 Building #3: candidate modes and score report, (© Elsevier Ltd. 2012), reprinted with permission
Columns: f (Hz), EFDD, Cov-SSI, SOBI, LEONIDA, coherence, mode shape complexity, mode shape aspect, ξ (%), total score
2.18 1.0 1.0 0 1.0 1.0 1.0 1.0 2.8 4.0
2.58 1.0 1.0 1.0 1.0 1.0 1.0 1.0 2.5 7.0
2.88 1.0 0.5 0.5 1.0 1.0 1.0 1.0 1.1 6.0
2.99 1.0 1.0 0 1.0 0.5 1.0 1.0 2.6 3.5
3.19 1.0 1.0 0 1.0 1.0 1.0 1.0 3.7 4.0
3.52 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.2 7.0
5.26 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.1 1.0
Table 5.14 Building #5: candidate modes and score report, (© Elsevier Ltd. 2012), reprinted with permission
Columns: f (Hz), EFDD, Cov-SSI, SOBI, LEONIDA, coherence, mode shape complexity, mode shape aspect, ξ (%), total score
2.20 1.0 1.0 0.5 1.0 1.0 1.0 1.0 2.7 1.5
2.41 1.0 1.0 0 1.0 1.0 1.0 1.0 3.2 0.0
2.67 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.2 7.0
3.15 1.0 1.0 0 1.0 1.0 0.5 1.0 2.5 5.5
3.96 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.1 7.0
Table 5.15 Building #6: candidate modes and score report, (© Elsevier Ltd. 2012), reprinted with permission
Columns: f (Hz), EFDD, Cov-SSI, SOBI, LEONIDA, coherence, mode shape complexity, mode shape aspect, ξ (%), total score
2.24 1.0 1.0 0.5 1.0 1.0 1.0 1.0 2.9 6.5
2.41 1.0 1.0 0 1.0 1.0 1.0 1.0 3.3 4.0
2.67 1.0 1.0 0.5 1.0 1.0 1.0 1.0 1.8 6.5
2.81 1.0 1.0 0.5 1.0 1.0 0.5 1.0 1.4 2.0
3.20 1.0 1.0 0 1.0 1.0 1.0 1.0 3.5 2.0
3.47 1.0 1.0 0 1.0 1.0 1.0 1.0 5.3 2.0
3.96 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 7.0
5.80 1.0 1.0 0.5 1.0 1.0 1.0 1.0 1.5 6.5
7.54 1.0 1.0 0 1.0 1.0 1.0 1.0 0.7 6.0
Fig. 5.19 Score reports and selection of structural modes: building #2 (a), #3 (b), #5 (c), #6 (d),
(© Elsevier Ltd. 2012), reprinted with permission
Table 5.16 Output-only modal identification results, (© Elsevier Ltd. 2012), reprinted with permission
Columns: Building #, Mode #, Natural frequency (Hz), ξ (%), Type
2 I 2.58 2.2 Translation x
II 2.88 2.4 Translation y
III 3.52 1.1 Torsion
IV 5.31 0.9 Translation x
V 7.09 0.7 Torsion
3 I 2.58 2.5 Translation x
II 2.88 1.1 Translation y
III 3.52 1.2 Torsion
5 I 2.67 1.2 Translation y
II 3.15 2.5 Mainly translation y
(slightly complex)
III 3.96 1.1 Torsion
6 I 2.24 2.9 Translation x
II 2.67 1.8 Translation y
III 3.96 1.0 Torsion
IV 5.80 1.5 Translation x
V 7.54 0.7 Torsion
Mode shape complexity and coherence sometimes have also given inconsistent
indications. In fact, modes labeled as structural can show slight imaginary
components in their shapes as a result of noise effects; on the contrary, real shapes
could be associated to spurious frequencies. In a similar way, dominant frequencies
not related to structural modes can exhibit large values of coherence between
couples of channels. Thus, the use of different OMA procedures and the integrated
analysis of different features play a critical role in successfully discriminating
structural modes from dominant frequencies associated with dynamic interaction effects.
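Mode shape complexity can be quantified in different ways; a common indicator is the modal phase collinearity (MPC). The following minimal sketch implements a simplified form of this indicator (without the mean-removal adopted in some formulations) and is reported for illustration only; it is not necessarily the exact complexity measure used in the reported case study.

    import numpy as np

    def mpc(phi):
        """Simplified modal phase collinearity of a (possibly complex) mode shape.
        Values close to 1 indicate an essentially real (monophase) shape."""
        phi = np.asarray(phi).ravel()
        re, im = np.real(phi), np.imag(phi)
        sxx, syy, sxy = re @ re, im @ im, re @ im
        delta = np.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
        lam1 = (sxx + syy) / 2.0 + delta
        lam2 = (sxx + syy) / 2.0 - delta
        return ((lam1 - lam2) / (lam1 + lam2)) ** 2

    print(mpc([1.0, -0.7, 0.4]))                  # real shape -> 1.0
    print(mpc([1.0 + 0.9j, -0.7 + 0.6j, 0.4j]))   # markedly complex shape -> well below 1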
Complete modal identification results are reported in Table 5.16. Their inspection
shows that the sequence of modes is slightly different for building #5 with respect to
the others. In fact, translational modes in the x direction are missing, while there is
an additional bending mode along y. This is probably the result of the very limited
number of installed sensors on this building and of the presence of only one
working sensor in the x direction. Even if a peak has been identified at 2.20 Hz
(Table 5.14), there was a lack of sufficiently coherent information to label it as a
structural mode. Another possible reason for the observed differences could be
related to the pounding phenomena and damage of partition walls caused by the
earthquake, which affected that building to a greater extent.
Apart from some uncertainties still affecting the results for building #5, the
proposed approach has rationally supported the identification of the fundamental
modes of the structures under test, successfully discriminating them from
frequencies related to dynamic interaction effects. The importance of the integrated
approach adopted for output-only modal identification is confirmed by the fact that a
single OMA method can lead to wrong results because of the limited possibility of consistency
Fig. 5.20 Spurious harmonic at 6.2 Hz due to rotating parts: source autocorrelation (a) and
spectrum (b) obtained from application of the SOBI algorithm, (© Elsevier Ltd. 2012), reprinted
with permission
checks. In fact, those based on coherence values and mode shape complexity have
been proved to be insufficient in similar conditions. On the contrary, a detailed
investigation of the recorded responses through several complementary tools is
definitely more effective in the presence of dominant frequencies induced by dynamic
interaction effects. Such peak frequencies, similar in appearance to actual structural
resonances, make modal identification by a single method particularly complex.
It is also interesting to note the presence of a spurious harmonic at 6.2 Hz in the
response spectra (Fig. 5.16). It was due to rotating equipment (the motor of a plumbing system) located in the basement of the building. Such a harmonic can be clearly
identified in Fig. 5.16 as a sharp-pointed resonance. It is far enough from relevant
structural modes, so their identification has not been influenced by its presence.
Its spurious harmonic character is confirmed by the very low damping ratio
(ξ = 0.005 %) and by the shape of the identified source correlation provided by
SOBI (Fig. 5.20).
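The persistence of the source autocorrelation shown in Fig. 5.20 suggests a simple, illustrative check for harmonics: the normalized autocorrelation of a pure harmonic does not decay with the time lag, whereas that of a randomly excited structural mode does. A minimal sketch follows; the lag range and the envelope threshold are assumptions.

    import numpy as np

    def normalized_autocorrelation(x, max_lag):
        x = np.asarray(x, dtype=float) - np.mean(x)
        r = np.array([np.sum(x[:len(x) - k] * x[k:]) for k in range(max_lag + 1)])
        return r / r[0]

    def looks_harmonic(source, max_lag=200, envelope_threshold=0.9):
        """True if the autocorrelation envelope of the source does not decay within
        the considered lag range (behavior typical of a harmonic component)."""
        r = normalized_autocorrelation(source, max_lag)
        tail = np.abs(r[int(0.8 * max_lag):])     # last 20 % of the lag range
        return float(tail.max()) > envelope_threshold

    # Example with synthetic signals sampled at 20 Hz, as in the case study:
    t = np.arange(0, 600, 1 / 20)
    harmonic = np.sin(2 * np.pi * 6.2 * t)        # harmonic component at 6.2 Hz
    broadband = np.random.randn(len(t))           # broadband (noise-like) response
    print(looks_harmonic(harmonic), looks_harmonic(broadband))   # expected: True, False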
5.6 Development of Predictive Correlations
OMA is definitely a valuable and effective tool to enhance the knowledge about
the dynamic behavior of structures. Even if model updating is intuitively a natural
(and probably the most popular) approach to nondestructive structural investigation
and assessment of the dynamic behavior of structures, modal data can also be used
to develop empirical models and correlations for the prediction of the dynamic
properties of selected classes of homogeneous structures.
An example of such models has already been introduced in Sect. 5.2 to describe
the dependence of damping on the amplitude of vibrations (Jeary 1986). This
section illustrates how modal parameter estimates obtained from response measure-
ments in operational conditions for homogeneous structures can be processed to
develop empirical correlations for the prediction of natural periods as a function of
relevant geometric parameters.
The fundamental period of vibration plays a primary role in determining the
seismic demand. Recent destructive earthquakes have shown a correlation between
the level of seismic damage experienced by a given structure and the closeness
between its fundamental vibration period and the predominant period of the soil
motion. Thus, seismic risk management in urban areas and the development of
damage scenarios can benefit from accurate evaluations of the dynamic characteristics
of structures.
A reliable evaluation of the dynamic properties of a structure is relevant for both
structural design and seismic performance assessment. The latter is of particular
interest in the case of historical structures, where interventions have to be carefully
calibrated and very accurate structural models and analyses are necessary.
The estimation of the modal properties and the prediction of the seismic
response of a structure usually take advantage of numerical modeling. However,
when prescribed regularity conditions are fulfilled so that linear static analyses can
be carried out, a complete modal analysis can be avoided. In fact, only the value of
the fundamental period is required.
It can be estimated as a function of geometric parameters (usually, the height) by
appropriate experimental correlations. In fact, for a given structural typology, the
dependence of the fundamental period on mass and stiffness can ultimately be
referred to geometric parameters, such as the height and the plan dimensions.
In this section attention is focused on the development of empirical correlations
for the prediction of the fundamental periods of masonry towers belonging to the
Italian architectural heritage. The development of the predictive correlations is
based on the results of OMA tests carried out on several historical masonry towers
in Italy.
The effects of recent earthquakes (Umbria-Marche 1997, Molise 2002, L’Aquila
2009, Emilia 2012) on this type of structures have drawn attention to their
seismic performance. The large research efforts are also witnessed by the increasing
number of dynamic tests carried out on historical masonry towers in Italy and by the
development of specific guidelines for their seismic performance assessment
(Ministero dei Beni e delle Attività Culturali 2010). Such guidelines recognize
the role of OMA in enhancing the knowledge about the dynamic behavior of
historical structures.
The actual seismic response of a structure depends on the evolution of its
dynamic properties along the transient base excitation. However, their accurate
evaluation at low levels of vibration is also of interest, because the structural
behavior in the linear regime and the initial seismic response depend on this set
of dynamic properties. The lengthening of the natural period is only a consequence
of the input ground motion and its intensity. Thus, an accurate linear model of the
structure, representative of its actual behavior, is the fundamental requirement also
for nonlinear analyses. Seismic vulnerability assessment (Sepe et al. 2008) and
damage detection can also take advantage of accurate estimates of the modal
properties in operational conditions. In particular, these are relevant in the context
of seismic vulnerability assessment at the large scale, when the knowledge of the
dynamic parameters of structures and the underlying soil allows the identification
of resonance phenomena (such an approach has been recently adopted for
microzonation studies in large cities; see, for instance, Navarro et al. 2004, Panou
et al. 2005, Mucciarelli et al. 2008).
All the mentioned applications of modal data explain the large research efforts
for the development of empirical correlations for the prediction of the fundamental
periods of structures.
In the case of historical towers and similar high-rise structures, their seismic
behavior depends on factors such as slenderness, degree of connection between the
walls, presence of lower nearby structures acting as horizontal restraints. Since
masonry towers are usually characterized by a lower geometric and structural
complexity with respect, for instance, to churches, they can be analyzed according
to classical structural schemes through accurate and detailed models. Even linear
models can provide useful and reliable information about the seismic performance
of towers, because they are basically isostatic structures and, therefore, the stress
redistribution is very low. The lack of significant energy dissipation is also one
of the main reasons why masonry towers are very sensitive to seismic actions. As a
consequence, linear dynamic analyses play a primary role in the investigation of
the amplification of motion along the structure. This is a critical aspect
above all when there are bells in the upper part of the tower. The belfry, in
fact, causes a loss of regularity in elevation. The wide openings for the bells turn
the walls into slender and poorly compressed columns. These are usually very
vulnerable also in consideration of the amplification of motion from the base to
the top of the structure.
A simplified model of masonry towers under seismic loading usually
represents the structure as a cantilever beam subjected to the horizontal forces
associated to the earthquake. When a linear static analysis is adopted, the total
horizontal force depends on the value of the elastic response spectrum. This is
evaluated at a period equal to the fundamental natural period of the structure in
the considered direction.
The dependence of the seismic action on the fundamental period of the tower
requires effective tools for its prediction. The Italian guidelines for intervention
Fig. 5.21 The investigated bell towers in the Molise region: Santa Maria delle Rose bell tower in
Bonefro (a), S. Pardo bell tower in Larino (b), Santa Maria della Pietà bell tower in Larino (c),
Santa Maria Maggiore bell tower in Morrone del Sannio (d), Sant’Alfonso dei Liguori bell tower
in Colletorto (e), San Giacomo bell tower in Santa Croce di Magliano (f), Santa Maria delle Rose
bell tower in Montorio nei Frentani (g), Santa Maria Assunta bell tower in Provvidenti (h), Santa
Maria Assunta bell tower in Ripabottoni (i)
T_1 = C H    (5.15)
C = 0.02    (5.16)
Fig. 5.22 Comparison between experimental and predicted values of the fundamental period of
vibration of Italian masonry towers (the black line represents the bisector of the plane)
that implies:
T_1 = 0.02 H    (5.17)
In Fig. 5.22 the values of the fundamental period predicted according to (5.17)
are reported as a function of the corresponding experimental estimates. The analysis
of the distribution of the points with respect to the bisector (represented by the black
line) points out the good agreement between predicted and experimentally
estimated values of the period.
If other formulations for the prediction of the same parameters are available, the
L2 norm of the error between predicted and measured values (Chap. 2) can be
evaluated to compare the relative predictive performance.
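A minimal sketch of this comparison is reported below; it applies (5.17) and evaluates the L2 norm of the error between predicted and measured periods. The height and period values in the example are placeholders, not the data of the tested towers.

    import numpy as np

    def predict_T1(height_m, C=0.02):
        """Fundamental period prediction according to (5.17): T1 = C * H."""
        return C * np.asarray(height_m, dtype=float)

    def l2_error(predicted, measured):
        """L2 norm of the error between predicted and measured periods."""
        return float(np.linalg.norm(np.asarray(predicted, dtype=float) - np.asarray(measured, dtype=float)))

    heights = [20.0, 25.0, 30.0, 36.0]        # hypothetical tower heights (m)
    T1_measured = [0.42, 0.48, 0.63, 0.70]    # hypothetical experimental periods (s)
    T1_predicted = predict_T1(heights)

    print(T1_predicted)
    print('L2 error norm:', l2_error(T1_predicted, T1_measured))
    # The same norm, computed for alternative formulations, allows ranking of their
    # relative predictive performance.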
Fig. 5.23 Correlation between the natural periods of the first two modes for the Italian masonry
towers
In the present case the collected data have also made possible the development
of correlations for the prediction of the natural period of higher modes as a function
of the period of the fundamental mode. In fact, the natural period T2 of the second
mode of Italian masonry towers was found to be related to the natural period T1 of
the first mode as follows:
T_2 = 0.86 T_1    (5.18)
In fact, as shown in Fig. 5.23, the collected data show a linear correlation
between the two values of the natural period.
Finally, the period of the third (torsional) mode can be estimated from the
fundamental period as follows:
T_3 = 0.4 T_1    (5.19)
while the periods of the higher bending modes can be estimated from the periods of
the corresponding fundamental modes as follows:
T_4 = 0.3 T_1    (5.20)
T_5 = 0.3 T_2    (5.21)
The correlations expressed by (5.19), (5.20) and (5.21) are based on a more
limited number of experimental values (about ten) and they have to be validated
against a larger population of samples. Nevertheless, they are in good agreement
with similar correlations available in the literature for masonry buildings
(Ministerio de Fomento 2002).
In summary, the present and the previous sections in the chapter have presented
a number of possible applications of the techniques discussed in this book. Most
of the analyzed case studies refer to real experimental tests. They are aimed at
giving the reader an overview of the potentialities and limitations of OMA for real
life applications. From the analysis of these and of other similar experiences
reported in the literature about OMA the reader can get useful hints for design of
experiments and data processing.
References
Agneni A, Coppotelli G, Grappasonni C (2012) A method for the harmonic removal in operational
modal analysis of rotating blades. Mech Syst Signal Process 27:604–618
Bendat JS, Piersol AG (2000) Random data: analysis and measurement procedures, 3rd edn. John
Wiley & Sons, New York
Bernal D (2004) Modal scaling from known mass perturbations. ASCE J Eng Mech
130(9):1083–1088
Brincker R, Andersen P, Møller N (2000) An indicator for separation of structural and harmonic
modes in output-only modal testing. In: Proc XVIII international modal analysis conference,
San Antonio, TX
Brincker R, Ventura CE, Andersen P (2003) Why output-only modal testing is a desirable tool for
a wide range of practical applications. In: Proc XXI international modal analysis conference,
Kissimmee, FL
CEN, European Committee For Standardization (2003) Eurocode 8: design provisions for earth-
quake resistance of structures, part 1.1: general rules, seismic actions and rules for buildings,
Pren 1998-1, Brussels
CEN, European Committee for Standardization (2005) Eurocode 8: design provisions for earth-
quake resistance of structures. Part 3: assessment and retrofitting of buildings. Brussels
Chopra AK (2000) Dynamics of structures – theory and applications to earthquake engineering,
2nd edn. Prentice Hall, Upper Saddle River, NJ
Computers and Structures (2006) SAP2000® v.11, manual. Computers and Structures Inc,
Berkeley, CA
Consiglio Superiore dei Lavori Pubblici (2008) Nuove Norme Tecniche per le Costruzioni,
D.M. Infrastrutture 14/01/2008, published on S.O. n. 30 at the G.U. 04/02/2008 n. 29
(in Italian)
Conte C, Rainieri C, Aiello MA, Fabbrocino G (2011) On-site assessment of masonry vaults:
dynamic tests and numerical analysis. Geofiz 28:127–143
Devriendt C, De Sitter G, Vanlanduit S, Guillaume P (2009) Operational modal analysis in the
presence of harmonic excitations by the use of transmissibility measurements. Mech Syst
Signal Process 23:621–635
Doebling SW, Farrar CR, Prime MB, Shevitz DW (1996) Damage identification and health
monitoring of structural and mechanical systems from changes in their vibration
characteristics: a literature review, technical report LA-13070-MS, UC-900. Los Alamos
National Laboratory, New Mexico
Ewins DJ (2000) Modal testing: theory, practice and application, 2nd edn. Research Studies Press
Ltd., Baldock
Fardis MN (1996) Experimental and numerical investigations on the seismic response of RC
infilled frames and recommendations for code provisions. ECOEST/PREC8 report
no. 6. LNEC, Lisbon
Friswell MI, Mottershead JE (1995) Finite element model updating in structural dynamics.
Kluwer, Dordrecht
Gentile C, Saisi A (2007) Ambient vibration testing of historic masonry towers for structural
identification and damage assessment. Const Build Mat 21(6):1311–1321
Gentile C, Saisi A (2013) Operational modal testing of historic structures at different levels of
excitation. Const Build Mat. doi: 10.1016/j.conbuildmat.2013.01.013 (in press)
Herlufsen H, Andersen P, Gade S, Møller N (2005) Identification techniques for operational modal
analysis – an overview and practical experiences. In: Proc 1st international operational modal
analysis conference, Copenhagen
Hu W-H, Moutinho C, Caetano E, Magalhães F, Cunha A (2012) Continuous dynamic monitoring
of a lively footbridge for serviceability assessment and damage detection. Mech Syst Signal
Process 33:38–55
Jacobsen N-J, Andersen P, Brincker R (2007) Eliminating the influence of harmonic components
in operational modal analysis. In: Proc XXV international modal analysis conference, Orlando
Jaishi B, Ren W-X (2005) Structural finite element model updating using ambient vibration test
results. ASCE J Struct Eng 131(4):617–628
Jeary AP (1986) Damping in tall buildings – a mechanism and a predictor. Earthq Eng Struct Dyn
14:733–750
Jeary AP (1997) Damping in structures. J Wind Eng Ind Aerodyn 72:345–355
Khatibi MM, Ashory MR, Malekjafarian A (2009) Scaling of mode shapes using mass-stiffness
change method. In: Proc 3rd international operational modal analysis conference, Portonovo
Lagomarsino S (1993) Forecast models for damping and vibration periods of buildings. J Wind
Eng Ind Aerodyn 48:221–239
Lopez-Aenlle M, Fernandez P, Brincker R, Fernandez-Canteli A (2010) Scaling-factor estimation
using an optimized mass-change strategy. Mech Syst Signal Process 24:3061–3074
Lopez-Aenlle M, Brincker R, Pelayo F, Fernandez-Canteli A (2012) On exact and approximated
formulations for scaling-mode shapes in operational modal analysis by mass and stiffness
change. J Sound Vib 331:622–637
Marseglia PS (2013) Comportamento sismico di volte in muratura, Ph.D. thesis, University of
Salento, Lecce (in Italian)
Ministero dei Beni e delle Attività Culturali (2010) Linee Guida per la valutazione e riduzione del
rischio sismico del patrimonio culturale allineate alle nuove Norme tecniche per le costruzioni
(D.M. 14 gennaio 2008), Circolare 26/2010. http://www.pabaac.beniculturali.it (in Italian)
Ministerio de Fomento (2002) Norma de Construcciòn Sismorresistente. Parte General y
Edificaciòn (Spanish Standard, in Spanish)
Modak SV, Rawal C, Kundra TK (2010) Harmonics elimination algorithm for operational modal
analysis using random decrement technique. Mech Syst Signal Process 24:922–944
Mohanty P, Rixen DJ (2004) A modified Ibrahim time domain algorithm for operational modal
analysis including harmonic excitation. J Sound Vib 275:375–390
Mohanty P, Rixen DJ (2006) Modified ERA method for operational modal analysis in the presence
of harmonic excitations. Mech Syst Signal Process 20:114–130
Mottershead JE, Link M, Friswell MI (2011) The sensitivity method in finite element model
updating: a tutorial. Mech Syst Signal Process 25(7):2275–2296
Mucciarelli M, Milutinovic Z, Gosar A, Herak M, Albarello D (2008) Assessment of seismic site
amplification and seismic building vulnerability in the Republic of Macedonia, Croatia and
Slovenia. In: Proc 14th world conference on earthquake engineering, Beijing
Navarro M, Vidal F, Feriche M, Enomoto T, Sanchez FJ, Matsuda I (2004) Expected ground-RC
building structures resonance phenomena in Granada city (Southern Spain). In: Proc 13th
world conference on earthquake engineering, Vancouver, BC
Oliveira CS, Navarro M (2009) Fundamental periods of vibration of RC buildings in Portugal from
in situ experimental and numerical techniques. Bull Earthq Eng 8(3):609–642
Pandey AK, Biswas M (1994) Damage detection in structures using changes in flexibility. J Sound
Vib 169(1):3–17
Panou AA, Theodulidis N, Hatzidimitriou P, Stylianidis K, Papazachos CB (2005) Ambient noise
horizontal-to-vertical spectral ratio in site effects estimation and correlation with seismic
damage distribution in urban environment: the case of the city of Thessaloniki (Northern
Greece). Soil Dyn Earthq Eng 25:261–274
Parloo E, Verboven P, Guillaume P, Van Overmeire M (2002) Sensitivity-based operational mode
shape normalization. Mech Syst Signal Process 16(5):757–767
Parloo E, Verboven P, Guillaume P, Van Overmeire M (2003) Force identification by means of
in-operation modal models. J Sound Vib 262(1):161–173
Parloo E, Vanlanduit S, Guillaume P, Verboven P (2004) Increased reliability of reference-based
damage identification techniques by using output-only data. J Sound Vib 270:813–832
Parloo E, Cauberghe B, Benedettini F, Alaggio R, Guillaume P (2005) Sensitivity-based opera-
tional mode shape normalisation: application to a bridge. Mech Syst Signal Process 19:43–55
Peeters B, Cornelis B, Janssens K, Van der Auweraer H (2007) Removing disturbing harmonics in
operational modal analysis. In: Proc 2nd international operational modal analysis conference,
Copenhagen
Pridham BA, Wilson JC (2003) A study on errors in correlation-driven stochastic realization using
short data sets. Prob Eng Mech 18:61–77
Qian S, Chen D (1996) Joint time-frequency analysis: methods and applications. PTR Prentice
Hall, Upper Saddle River, NJ
Rainieri C, Fabbrocino G (2010) Automated output-only dynamic identification of civil engineer-
ing structures. Mech Syst Signal Process 24(3):678–695
Rainieri C, Fabbrocino G, Cosenza E (2010a) Some remarks on experimental estimation of
damping for seismic design of civil constructions. Shock Vib 17:383–395
Rainieri C, Fabbrocino G, Cosenza E (2010b) On damping experimental estimation. In: Proc 10th
international conference on computational structures technology, Valencia
Rainieri C, Fabbrocino G, Cosenza E (2011) Integrated seismic early warning and structural health
monitoring of critical civil infrastructures in seismically prone areas. Struct Health Monit
10:291–308
Rainieri C, Fabbrocino G, Manfredi G, Dolce M (2012) Robust output-only modal identification
and monitoring of buildings in the presence of dynamic interactions for rapid post-earthquake
emergency management. Eng Struct 34:436–446
Rainieri C, Fabbrocino G, Verderame GM (2013) Non-destructive characterization and dynamic
identification of a modern heritage building for serviceability seismic analyses. NDT E Int
60:17–31
Reynders E, De Roeck G (2008) Reference-based combined deterministic-stochastic subspace
identification for experimental and operational modal analysis. Mech Syst Signal Process
22:617–637
Reynders E, Pintelon R, De Roeck G (2008) Uncertainty bounds on modal parameters obtained
from stochastic subspace identification. Mech Syst Signal Process 22:948–969
Rosenow SE, Uhlenbrock S, Schlottmann G (2007) Parameter extraction of ship structures in
presence of stochastic and harmonic excitations. In: Proc 2nd international operational modal
analysis conference, Copenhagen
Sepe V, Speranza E, Viskovic A (2008) A method for large-scale vulnerability assessment of
historic towers. Struct Cont Health Monit 15:389–415
Tamura Y, Yoshida A, Zhang L, Ito T, Nakata S, Sato K (2005) Examples of modal identification
of structures in Japan by FDD and MRD techniques. In: Proc 1st international operational
modal analysis conference, Copenhagen
Woodhouse J (1998) Linear damping models for structural vibration. J Sound Vib 215(3):547–569
6 Automated OMA
In spite of their lower degree of autonomy with respect to fully automated modal parameter
identification methods, modal tracking procedures are advantageous for applications
requiring short response times and low computational effort.
In the following sections, after a brief literature review about automated OMA
methods, two different automated modal parameter identification methods and an
automated modal tracking method are described and applied to a number of case
studies. The reported applications show that a reliable and accurate automated
identification of the modal parameters is possible. They point out the potential of
automated OMA in solving a number of monitoring applications, but also the
drawbacks related to the influence of environmental and operational factors on
the modal parameter estimates. It is worth noting that the herein described
automated OMA methods are the authors’ research outcome in the field. Thus,
this chapter basically reports a specific viewpoint about the matter; the reason is that
a wide consensus in the definition of the “best methods” for automated output-only
modal identification has not been reached, yet. Nevertheless, relevant references are
reported at the end of the chapter for the reader interested in more details about
automated OMA methods proposed by different research groups.
6.2.1 Objectives
the estimator and the experience of the user. Since the extensive interaction
between tools and user is inappropriate for monitoring purposes, the first attempts
to automate OMA introduced selection criteria and clustering tools to separate the
physical poles from the others without user interactions. For instance, one of the
early approaches to automated modal identification, based on the LSCF method
(Vanlanduit et al. 2003), performed the selection of physical poles from a high-
order model by means of a number of deterministic and stochastic criteria and a
fuzzy clustering approach. However, the algorithm for pole selection was quite
complex and demanding from a computational point of view.
A very simple criterion for automated modal parameter identification via SSI
was proposed in the same period (Peeters and De Roeck 2001a) and applied to
monitor the environmental and damage effects on the dynamic behavior of the Z24
Bridge in Switzerland. It was based on the selection, in the stabilization diagram,
of the poles that were at least five times stable. This basic criterion has also been
applied to track the effects of changing environmental conditions on the modal
parameters of Tamar bridge (Brownjohn and Carden 2007). However, it cannot
ensure that the identified poles are physical (Chap. 5).
A more refined automated OMA procedure based on the SSI technique was
proposed some years later (Deraemaeker et al. 2008). It was basically a tracking
method because an initial set of modal parameters, using stochastic subspace
identification and stabilization diagram, had to be identified before launching the
procedure.
A fully automated method for extraction of modal parameters by SSI was
proposed in the same period (Andersen et al. 2007). It was based on the clear
stabilization diagram obtained according to a multipatch subspace approach. Pole
extraction was carried out by the graph theory. This algorithm was very fast, so that
it could be used as a monitoring routine, but further improvements were needed to
increase its reliability and robustness.
A refinement of automated Cov-SSI was achieved some years later (Magalhaes
et al. 2009). It was able to ensure an effective identification of closely spaced
modes, but it showed poor performance in the identification of weakly excited
modes. It was based on the application of an advanced clustering algorithm
allowing a reliable identification of structural modes. However, a number of
parameters had to be set at startup by time-consuming calibrations. Among the
parameters to be set there was the number of block rows of the Toeplitz matrix of
correlations, whose value has an influence on the quality of the stabilization
diagram and is somehow dependent on the signal-to-noise ratio (Chap. 5). Thus,
the quality of signals had to be evaluated in advance and the number of block rows
chosen accordingly in order to obtain high-quality stabilization. Other parameters
requiring calibration were related to the clustering algorithm (the interested reader
can refer to Tan et al. 2006 for a detailed analysis of clustering techniques). The
rules to build the hierarchical tree and, in particular, the tree cut level had to be set
according to the noise level of signals. A preliminary phase of analysis and manual
initialization of the system was, therefore, necessary before each application. As
previously mentioned, this approach yields static acceptance criteria that cannot
always ensure an effective tracking of dynamic parameters over time. The need to
avoid tuned and statically set parameters had not yet been fully recognized.
Among the methods based on conventional signal processing, the so-called time
domain filtering method (Guan et al. 2005) was a tracking procedure based on the
application of a band-pass filter to the system response in order to separate the
individual modes in the spectrum. The frequency limits of the filter were static and
user specified according to the PSD plots of response signals. However, when the
excitation is unknown, it might be difficult to identify the regions where certain
modes are located according only to power spectrum plots. Moreover, in the case of
close modes, it is very difficult, or even impossible, to correctly define such limits
so that natural changes in modal frequencies can be tracked. Thus, this method only
ensured a rough modal tracking.
The automation of the FDD method (Brincker et al. 2007) marked a fundamental
step towards the elimination of any user interaction and the use of automated OMA
as modal information engine in SHM systems. It was based on the identification of
the modal domain around each identified peak in the singular value plots according
to predefined limits for the so-called modal coherence function and modal domain
function. The recommended limit value was 0.8 for both the indicators. However, if
the limit value for the modal coherence indicator was somehow justified (Brincker
et al. 2007) on the basis of the standard deviation of correlation between random
vectors and on the number of measurement channels, few comments were reported
about the modal domain indicator. Moreover, threshold setting for peak detection
still affected the results.
A slightly modified version of this approach was applied to the permanent
monitoring of the “Infante D. Henrique” bridge (Magalhaes et al. 2008). A reduced
effectiveness of the procedure was observed whenever noise level in spectra
increased. Among the parameters to be set there was also the MAC rejection
level (Chap. 4). It was set by means of a number of sensitivity tests and time-
consuming tuning for the monitored structure. The selection of a very small value
(0.4) was recommended. However, it was recognized that this value could be
ineffective in the case of small number of sensors and close natural frequencies
associated to similar mode shape estimates. Moreover, the need for a time-
consuming initialization phase and the related static settings of analysis parameters
did not guarantee that they were adequate over the whole monitoring period, in
particular in the case of remarkable changes in the structure such as those induced
by extreme events.
Automated versions of SOBI and of transmissibility-based OMA were also
available. About SOBI, the identification of structural modes was based on the
rejection of all modes out of the frequency range of interest and of sources
characterized by a fitting error higher than 10 %; the selection of the structural
modes from the remaining sources was based on the computation of a confidence
factor (Poncelet et al. 2008). However, the effectiveness of the method in the
analysis of real measurements was not proved.
The automated transmissibility-based OMA method started from the SVD of a
two-column matrix consisting of transmissibility functions evaluated at different
loading conditions. Since all transmissibility functions converge to the same value
at the natural frequency of the system, rank-one matrices marked the presence of
structural modes. They were identified by analyzing the plot of the inverse of the
second singular value, which was characterized by a peak at the natural frequency
of a mode. However, peak selection was still carried out through the static definition
of a threshold. Moreover, in the presence of measurement noise the approach was
not reliable. The use of a smoothing function was proposed to overcome this
drawback (Devriendt et al. 2008), but it had to be carefully applied to avoid
distortions.
In summary, the available automated OMA methods were affected by some
common drawbacks:
• most of them moved from a threshold-based peak detection; as a consequence, a
first calibration phase was needed for its appropriate definition; only some of
the identified peaks corresponded to actual modes, thus validation criteria were
needed; moreover, performance of peak detection algorithms got worse in the
presence of measurement noise;
• identification of actual modes was based on a number of statically set
parameters; a time-consuming calibration process for each monitored structure
was required; the static identification of thresholds and parameters was often
inadequate to follow natural changes in modal properties of structures due to
damage or environmental effects;
• most of the algorithms were somehow sensitive to noise, and higher or poorly
excited modes were not always identified.
Thus, an alternative strategy was needed. In the case of LEONIDA, it was based
on the definition of some objective criteria for the identification of mode bandwidth
before modal parameter extraction.
It is interesting to note that the distinction between automated modal parameter
identification and automated modal tracking and the importance of avoiding the
tuning of analysis parameters at startup have recently been recognized and accepted
also by other authors (see, for instance, Reynders et al. 2012).
computed between the first singular vectors obtained from the analysis of the first
and the n-th block of data, with n = 1, ..., n_max:

\mathrm{MAC}\left(u_1^{t_0}(\omega),\, u_1^{t_0+n\Delta T}(\omega)\right) = \frac{\left| \left(u_1^{t_0}(\omega)\right)^H u_1^{t_0+n\Delta T}(\omega) \right|^2}{\left( \left(u_1^{t_0}(\omega)\right)^H u_1^{t_0}(\omega) \right) \left( \left(u_1^{t_0+n\Delta T}(\omega)\right)^H u_1^{t_0+n\Delta T}(\omega) \right)}    (6.1)
should be constant and equal to 1 for a stationary and ergodic system. This is only
approximately true for real datasets due to the unknown input and the presence of
noise; thus, specific selection criteria and tolerances must be set. In (6.1) the
superscripts denote the starting time (t0 and t0 + nΔT ) of the first and the n-th
record, respectively. The records have the same duration ΔT, yielding spectra
characterized by the same number of averages and frequency resolution. Note
that, even if the SVD of the output PSD matrix at resonance gives un-scaled
mode shapes (Chap. 4) and the scaling factor can vary from record to record, the
algorithm is not sensitive to changes in scaling. In fact, it is based on the analysis of
the sequence of MAC values that are not affected by a constant multiplier.
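For reference, the MAC in (6.1) between two complex first singular vectors can be computed as in the following minimal sketch; the invariance to a constant (even complex) scaling follows directly from the normalization.

    import numpy as np

    def mac(u1, u2):
        """MAC between two (possibly complex) vectors, as in (6.1)."""
        u1 = np.asarray(u1).ravel()
        u2 = np.asarray(u2).ravel()
        num = np.abs(np.vdot(u1, u2)) ** 2                     # |u1^H u2|^2
        den = np.real(np.vdot(u1, u1)) * np.real(np.vdot(u2, u2))
        return float(num / den)

    # Invariance to a constant (even complex) scaling of either vector:
    u = np.array([1 + 0.2j, -0.5 + 0.1j, 0.8j])
    print(mac(u, 3.7j * u))   # -> 1.0 (up to round-off)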
Taking into account the basic principle for the identification of the bandwidth of
modes, the algorithm can easily be implemented as follows. As in the classical FDD
method, the fundamental data processing tool is represented by the SVD of the
output PSD matrix. After the decomposition, the first singular vector at each
frequency line is considered. This step is repeated for a number of subsequent
records. Afterwards, the MAC between the first singular vectors at the same
frequency line (6.1) is computed. Since the MAC index is quite sensitive to
noise, appropriate countermeasures for noise reduction have to be considered, as
discussed below.
The averaged MAC vs. frequency plot (Fig. 6.1) looks like a coherence function.
In the frequency range of a given mode, density of points increases, MAC value is
high (close to 1) and a kind of bell can be observed. The identification of the
bandwidth of the modes is carried out through the evaluation of some statistical
parameters related to the MAC value sequence and its first derivative at each
frequency line. Since the error in mode shape estimation is basically related to
the error in spectrum estimation and, as such, to the number of averages (Chap. 2;
the interested reader can also refer to Bendat and Piersol 2000 for more details), it is
maintained at a given level by defining the record duration for each step. Thus, the
sequences of MAC values indicating the presence of a mode can be identified by the
analysis of their statistical parameters, which are mostly influenced by the presence
of a mode rather than by the error in spectrum estimation. Mean and standard
deviation are the statistical parameters assumed for mode bandwidth identification.
In order to have a good estimation of such parameters, at least ten steps have
generally to be taken into account.
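A minimal sketch of this bandwidth-identification step is reported below; it reuses psd_matrix() and mac() from the previous sketches, and the limits on the mean and standard deviation of the MAC sequences are illustrative assumptions, not the calibrated limits adopted in LEONIDA.

    import numpy as np

    def first_singular_vectors(G):
        """First (left) singular vector of the PSD matrix at each frequency line."""
        U, _, _ = np.linalg.svd(G)
        return U[:, :, 0]                          # shape (n_freq, n_channels)

    def candidate_mode_lines(blocks, fs, nperseg, mean_min=0.95, std_max=0.02):
        """blocks: list of equal-duration response records (n_samples x n_channels).
        Returns the frequency vector and a boolean mask of the lines whose MAC
        sequence has a high mean and a low standard deviation."""
        f, G0 = psd_matrix(blocks[0], fs, nperseg)     # from the earlier sketch
        u_ref = first_singular_vectors(G0)
        mac_seq = []
        for block in blocks[1:]:
            _, Gn = psd_matrix(block, fs, nperseg)
            u_n = first_singular_vectors(Gn)
            mac_seq.append([mac(u_ref[k], u_n[k]) for k in range(len(f))])
        mac_seq = np.array(mac_seq)                    # shape (n_blocks - 1, n_freq)
        in_band = (mac_seq.mean(axis=0) > mean_min) & (mac_seq.std(axis=0) < std_max)
        return f, in_band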
Figure 6.2 shows that the MAC function is nearly horizontal only at the frequency lines located within a mode bandwidth. It is therefore possible to assume that a frequency line belongs to the bandwidth of a mode when the mean of the corresponding MAC sequence is high and its variability, in terms of standard deviation of the sequence and of its first derivative, is low.
Fig. 6.2 MAC sequences at different frequency lines: noise domain (a) and mode bandwidth
(b), (© Elsevier Ltd. 2010), reprinted with permission
From the analysis of MAC sequences, the bandwidth of a number of modes can
be identified. Within each bandwidth, the use of peak detection algorithms over the
corresponding portion of the first singular value plot leads to the identification of
the natural frequency for that mode. The corresponding singular vector at that
frequency line is a good estimate of the mode shape of the structure.
A flowchart of the algorithm is shown in Fig. 6.3. It can be easily implemented
in LabVIEW environment by adopting the state machine architecture: in fact,
Fig. 6.3 LEONIDA: Algorithm for automated modal parameter identification, (© Elsevier Ltd. 2010), reprinted with permission
well-defined stages can be identified (Fig. 6.4). The data source can be selected at
start-up. For instance, data can be retrieved from file, but also from a remote
MySQL database or directly from measurement hardware, thus allowing integration
of the software within fully automated SHM systems. If data are retrieved from file,
the number of steps cannot be controlled but it depends on the record length.
As shown in Section 6.2.4, only 1-h long (or longer) records can assure a sufficiently
large number of steps and, therefore, results characterized by high reliability.
In the first stage, the MAC sequences at all discrete frequency lines are computed.
Computational time can be optimized adopting parallel recording and data processing
procedures. Moreover, a partial overlap between subsequent records can be consid-
ered in order to increase the number of steps in the case of data retrieved from file, or
to keep the global estimation time low when another data source is adopted.
In the second stage, mode bandwidths are identified according to the previously
mentioned limits. At the end of this state, a number of bandwidths are identified by
their limit values of frequency.
In the third stage, modal parameters are extracted in a fully automated way by
analyzing only the portion of the singular value plots in the given mode bandwidth.
FDD is an efficient and easy to manage OMA technique, but it requires user
interaction in its classical implementation (Chap. 4). Its simplicity has made it
particularly attractive for the implementation of automated OMA procedures. The
first approach to automated output-only modal parameter identification via FDD
reported in the literature (Brincker et al. 2007) starts from the identification
(through an automated peak picking algorithm) of all peaks in the first singular
value plot provided by the SVD of the output PSD matrix. After this preliminary
phase, it is necessary to assess the nature of the peaks and associate them to
structural modes, noise, or spurious harmonics. The last case can be tackled
according to the methods described in Chap. 5.
In order to distinguish structural modes from noise peaks, and different close
physical modes, two indicators are defined. First of all, the modal coherence
(basically, the MAC index), calculated between the first singular vector at the
peak and the first singular vector at adjacent points, is used to distinguish physical
modes from noise peaks. If the modal coherence is close to unity, then the first
singular value at the neighboring point belongs to the same dominating mode
(Brincker et al. 2007). An adjacent point is assumed to hold physical information
if the modal coherence exceeds a threshold level. The suggested threshold value for
the modal coherence indicator is 0.8.
Once the selection of peaks holding modal information is completed, different
closely spaced modes are separated through the so-called modal domain indicator.
It consists in the MAC computed between the first singular vectors corresponding to
neighboring points in a certain frequency range around a given peak. If its value is high
over the whole considered frequency range, only one mode is dominating. All points
characterized by a modal domain indicator higher than a user-defined threshold
(the recommended value is 0.8) identify the modal domain around the considered peak.
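For illustration, the expansion of a modal domain around a detected peak can be sketched as follows. The sketch reuses mac() from the earlier example, merges the modal coherence and modal domain indicators into a single expansion criterion for brevity, and assumes that peak detection has already been carried out; 0.8 is the threshold value suggested by Brincker et al. (2007), while the maximum domain width is an assumption.

    def modal_domain(u_first, peak_index, threshold=0.8, max_width=50):
        """u_first: array (n_freq, n_channels) of first singular vectors; returns the
        indices of the frequency lines assigned to the mode detected at peak_index.
        Uses mac() from the earlier sketch."""
        domain = [peak_index]
        for step in (-1, 1):                       # expand to the left and to the right
            k = peak_index + step
            while 0 <= k < len(u_first) and abs(k - peak_index) <= max_width:
                if mac(u_first[peak_index], u_first[k]) < threshold:
                    break                          # indicator dropped below the limit
                domain.append(k)
                k += step
        return sorted(domain)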
The main steps of this automated FDD method can be, therefore, summarized as
follows:
• peak identification;
• check if peak is physical (modal coherence and harmonic indicator);
• if peak is physical, then the corresponding modal domain is defined around it;
otherwise, the noise domain is set;
• the identified modal and noise domains are excluded from the search set (initial
search set goes from DC to the Nyquist frequency);
• search is stopped when the search set is empty, the peak is below a predefined
excitation level or a specified number of modes has been estimated.
Another automated FDD method has been developed afterwards (Magalhaes
et al. 2008). It is characterized by a similar approach but it is oriented to simplify
some aspects of the previous process. In fact, it is demonstrated that a reliable
modal identification can be obtained adopting a suitably low limit value for the
modal coherence. However, this threshold must be defined for each monitored
structure by means of time-consuming sensitivity tests. Moreover, it has been
recognized that the recommended value (0.4) can be ineffective if the number of
sensors is small and similar mode shape vectors for close natural frequencies are
obtained from the identification process.
These circumstances clearly show that those methods are characterized by
some relevant drawbacks. First of all, the threshold-based peak detection is very
sensitive to noise and spurious harmonics, as remarked also by Brincker et al. (2007).
If the automated FDD is used as a monitoring routine, the threshold for peak detection
has to be defined by means of a time-consuming calibration process based on a
number of sensitivity tests carried out on the monitored structure, as suggested by
Magalhaes et al. (2008). No specific suggestions exist for a-priori threshold definition
in the case of single tests.
It is worth noting that an initialization phase may be acceptable for SHM
applications, when the time spent to carry out sensitivity analyses for threshold and
parameter definition is negligible with respect to the whole observation period.
In the case of single tests, instead, there is no other way than a-priori threshold and
parameter definition. Criteria for objective identification of physical modes are,
therefore, of primary importance to avoid any preliminary initialization and to ensure
the versatility of the algorithm for different applications (single modal identification
tests, continuous SHM) independently of the structural typology. An attempt to tackle
this issue is present in the first method (Brincker et al. 2007). However, if the limit
value for the modal coherence indicator is somehow justified on the basis of the
standard deviation of the correlation between random vectors and of the number of
measurement channels, few comments are reported about the modal domain
indicator.
It is also worth noting that the number and position of sensors can influence the
correlation between singular vectors in a certain frequency range, where more than
one mode could be present. In the presence of a few not optimally located sensors,
criteria based on correlation between singular vectors in a certain frequency range
may fail. Finally, parameters and thresholds that have been set for a given structure
through an initialization phase are not guaranteed to be always adequate even for
the structure itself; this is the case, for example, of structures that experience
remarkable changes (damages) due to extreme events.
LEONIDA overcomes most of these drawbacks since there is no preliminary
threshold-based peak detection. In LEONIDA the logical process is reversed,
exploiting the preliminary identification of the bandwidth of the modes. Within
each bandwidth, modal parameters are then estimated. In addition, since it is based
only on the correlation between singular vectors at the same frequency line, it is
virtually insensitive to number and position of sensors. Some tests, in fact, have
shown that it is effective provided that the observability of a given mode is ensured
by at least two sensors (see, for instance, Rainieri et al. 2012). The definition of
criteria for mode separation and the calibration of limits on a number of different
datasets, case studies, and measurement chains allow avoiding any initialization
phase and ensure the applicability of LEONIDA in different conditions (single
tests, continuous monitoring).
Fig. 6.5 Innovative features of LEONIDA with respect to previous automated FDD procedures
(the dashed boxes denote analysis steps that are avoided or implicitly carried out in LEONIDA)
The identification performance of LEONIDA has been assessed through its appli-
cation to case studies characterized by different degrees of complexity. Different
record lengths and measurement hardware have been considered. Structure under test,
Table 6.1 Summary of records and measurement hardware, (© Elsevier Ltd. 2010), reprinted with permission
Structure | Sensors | Data acquisition hardware | Record | Duration (s)
Tower of the Nations (Naples), r.c. | Epi-sensor FBA ES-U2 (Kinemetrics Inc.) | K2 (Kinemetrics Inc.), 24-bit ADC | TdN1 | 1,500
Tower of the Nations (Naples), r.c. | Epi-sensor FBA ES-U2 (Kinemetrics Inc.) | K2 (Kinemetrics Inc.), 24-bit ADC | TdN2 | 2,400
Bell tower (Montorio), masonry | Epi-sensor FBA ES-U2 (Kinemetrics Inc.) | TrioGuard 32, 16-bit ADC | Single record | 3,600
Bell tower (Montelongo), r.c. | Epi-sensor FBA ES-U2 (Kinemetrics Inc.) | TrioGuard 32, 16-bit ADC | Single record | 3,600
S. Maria del Carmine Bell Tower (Naples), masonry | Epi-sensor FBA ES-U2 (Kinemetrics Inc.) | PXI-4472 (NI), 24-bit ADC | Single record | 1,800
School of Engineering Main Building (Naples), r.c. | Epi-sensor FBA ES-U2 (Kinemetrics Inc.) | K2 (Kinemetrics Inc.), 24-bit ADC | RC0 | 1,200
School of Engineering Main Building (Naples), r.c. | Epi-sensor FBA ES-U2 (Kinemetrics Inc.) | K2 (Kinemetrics Inc.), 24-bit ADC | RC1 | 1,200
School of Engineering Main Building (Naples), r.c. | Epi-sensor FBA ES-U2 (Kinemetrics Inc.) | K2 (Kinemetrics Inc.), 24-bit ADC | RC2 | 3,300
School of Engineering Main Building (Naples), r.c. | Epi-sensor FBA ES-U2 (Kinemetrics Inc.) | K2 (Kinemetrics Inc.), 24-bit ADC | RC3 | 3,800
The extension of the mode bandwidths is also quite stable in all cases, independently
of the record length (Table 6.3). However, the record length has an effect on the
identification performance at higher modes. In fact, some tests have shown that
short record lengths yield, as a result of noise, a less clear identification of the
bandwidth of higher, poorly excited modes. Thus, tools for rejection of false
positives are needed in this case. If higher modes are of interest, longer record
durations must be considered. In fact, when the number of steps increases, a number
of wrongly identified frequency ranges disappear, while regions where modes are
actually located remain stable.
Some specific investigations of the performance of the algorithm in the
identification of higher modes and poorly excited modes have been carried out.
Selected records have been decimated to a final sampling frequency of 20 Hz, and
the modal identification performance has been assessed in the range 0–10 Hz.
Fig. 6.6 LEONIDA: Example of identified bandwidth of a mode (normalized first singular value vs. frequency), (© Elsevier Ltd. 2010), reprinted with permission
This frequency range includes the higher modes of the considered test cases.
Results are summarized in Table 6.4. They point out that LEONIDA can also identify
the higher modes. The only exception is the fifth mode of the Tower of the Nations,
which shows that LEONIDA may fail in the presence of poorly excited modes.
However, the identification of poorly excited modes is difficult even
when an expert user carries out a classical output-only modal analysis.
LEONIDA has also been tested against heavily nonstationary signals, such as
those collected by the SHM system of the School of Engineering Main Building on
April 6, 2009, and related to the L'Aquila earthquake. As expected, LEONIDA has
not been able to identify any structural mode in this case. However, in view of its
Table 6.4 LEONIDA: performance in the identification of higher modes, (© Elsevier Ltd. 2010), reprinted with permission

Structure | Mode | fLEONIDA (Hz) | fFDD (Hz) | fCov-SSI (Hz) | fSOBI (Hz)
Tower of the Nations (Naples)—record TdN2 | I | 0.81 | 0.81 | 0.81 | 0.81
 | II | 1.36 | 1.36 | 1.36 | 1.38
 | III | 1.62 | 1.63 | 1.63 | 1.63
 | IV | 3.02 | 3.02 | 3.02 | 3.02
 | V | Failure | 4.36 | 4.32 | 4.36 (not well separated)
 | VI | 5.17 | 5.16 | 5.10 | 5.22
Bell tower (Montelongo)—single record | I | 3.41 | 3.40 | 3.41 | 3.43
 | II | 4.13 | 4.13 | 4.11 | 4.13
 | III | 5.06 | 5.06 | 5.06 | 5.06
 | IV | 5.62 | 5.62 | 5.63 | 5.60
 | V | 6.35 | 6.35 | 6.35 | 6.33 (not well separated)
 | VI | 7.42 | 7.42 | 7.40 | 7.40
Fig. 6.7 LEONIDA: MAC vs. frequency plot for the School of Engineering Main Building in operational conditions (a) and in the presence of input ground motion (b), (© Elsevier Ltd. 2010), reprinted with permission
Fig. 6.8 Effect of input ground motion on the MAC sequence in the bandwidth of a mode, (© Elsevier Ltd. 2010), reprinted with permission
6.3 Automated OMA by Hybrid Method: ARES

6.3.1 Algorithm
Another more refined automated modal parameter identification method has been
recently developed. Even if its performance assessment is still in progress, the
description of the algorithm in its most relevant parts can be of interest because of
the innovative characteristics of the method. As in the case of LEONIDA, the new
algorithm, called ARES (Automated modal paRameter Extraction System), aims at
overcoming the typical drawbacks of automated OMA procedures:
• threshold-based peak and physical pole detection;
• need for a preliminary calibration phase at each new application;
• static settings of thresholds and parameters which may be unsuitable to track the
natural changes in modal properties of structures due to damage or environmental
effects;
• sensitivity to noise, problems of false or missed identification.
Moreover, it aims at providing accurate and reliable estimates of modal damping
ratios, an aspect often neglected by automated OMA algorithms. In fact, they either do
not provide damping estimates at all or, when they do, they usually yield very
scattered results. The fairly large scatter
associated to damping estimates, in comparison with that of natural frequency
and mode shape estimates, is well documented in the literature. Even if the scatter
In ARES, SSI is not directly applied to the multivariate time series of the
structural response but, after a preprocessing step, it processes the individual source
correlations obtained from the JAD of a number of time-shifted correlation matrices.
The problem of avoiding user interaction in setting the analysis parameters is
solved by taking into account that ARES is basically insensitive to the settings of p and t
in JAD (Sect. 4.5.4) and that, for a given maximum model order, the quality of
stabilization first improves and then gets worse for increasing values of the number
of block rows. Thus, a sensitivity analysis for different values of the number of
block rows makes its automated setting possible. In fact, based on the results of the
sensitivity analysis, the number of block rows is set in a way able to minimize the
variance of the modal parameter estimates at different model orders (see also
Chap. 5).
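As an illustration of the first preprocessing step named above, the following sketch computes the p time-shifted output correlation matrices that feed the JAD; it assumes zero-mean response data of shape (channels, samples), while the JAD routine itself and the subsequent SSI stage are not reproduced here.

```python
# Sketch of the time-shifted output correlation matrices that feed the joint
# approximate diagonalization (JAD). Assumes zero-mean data y of shape
# (n_ch, n_samples); JAD and SSI are not reproduced.
import numpy as np

def shifted_correlation_matrices(y, p):
    """Return R[l-1] ~ E[y(t) y(t+l)^T] for lags l = 1..p, shape (p, n_ch, n_ch)."""
    n_ch, n_samp = y.shape
    R = np.zeros((p, n_ch, n_ch))
    for l in range(1, p + 1):
        R[l - 1] = y[:, :-l] @ y[:, l:].T / (n_samp - l)
    return R
```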
As in the case of other automated OMA methods based on SSI available in the
literature, clustering techniques make possible the automatic selection of physical
poles in the stabilization diagram. However, the preliminary JAD makes the
automatic analysis of the stabilization diagram and extraction of the physical
poles easier. In fact, as a result of the JAD, the raw data associated to the measured
structural response are transformed into source correlations (Sect. 4.5.4). They can
be well-separated (showing the contribution of a single mode to the structural
response), not well-separated (showing noise or minor contributions from other
modes that are superimposed to the contribution of the main mode) or just noise.
Thus, the automated interpretation of the stabilization diagram is simplified because
the modal parameters are extracted by applying the SSI method to the individual
source correlations instead of the multivariate time series of data. Once the source
correlations have been individually analyzed by the SSI method, the physical poles
are separated from the spurious ones by means of clustering techniques and mode
validation criteria. Their effectiveness is enhanced by the fact that the analyzed
stabilization diagrams report information about one mode only at the time. The
flowchart of the algorithm is shown in Fig. 6.9.
A state-machine architecture has been adopted for the implementation of ARES,
since the following well-established data processing steps can be identified:
• Correlation matrices are computed from the raw data;
• The JAD of p time-shifted covariance matrices provides the correlations of the
sources (both modal and noise sources); a number of analyses involving both
real and simulated datasets have shown that the setting of p has no influence on
the estimates, unless it is set much too low (p < 100); in this case slight decreases
in accuracy can be observed;
• The source correlations are individually analyzed by SSI for the estimation of
natural frequencies and damping ratios;
• The obtained poles are grouped into clusters. For each source, the corresponding
poles are grouped into clusters according to the hierarchical clustering method.
The cluster characterized by the largest number of elements is selected as
representative of the mode. This step is repeated for different values of the
number of block rows i ranging in the interval [20, 80] with step Δi equal to
2 (these parameters are theoretically user-selectable but the above mentioned
values are recommended, since they have provided the best results in the
majority of applications);
• The obtained natural frequency and damping ratio estimates are subjected to a
selection and validation phase (a sketch of these checks is given after this list).
Clusters that do not fulfil the validation checks are removed from the dataset.
In particular, the average damping ratio in each cluster has to be in the range
0–5 % and the corresponding coefficient of variation not larger than 10 %. The
first limitation is based on an empirical observation about the behavior of civil
structures in operational conditions: they are usually weakly damped. The second
limitation recognizes that physical poles are characterized by small standard
deviations, while spurious poles show much larger values of this parameter
(Reynders et al. 2012). Checks of the physical significance of the estimates are
also carried out (for instance, checks on the sign of damping). It is worth pointing
out that the validation criteria have to be applied after the hierarchical clustering
stage: if they were applied earlier, the removal of most of the spurious poles could
cause a number of physical poles to be separated and lost in the clustering stage.
• The natural frequency and damping ratio estimates in each cluster are
normalized in the range [0, 1] and a k-means clustering algorithm with k = 2
clusters is applied, allowing the presence of empty clusters. This last step
removes any spurious poles that are still present, thus further refining the
precision and accuracy of the estimates;

Fig. 6.10 Simulated 4-DOF system: chain of four masses (5, 5, 5, and 10 kg) connected by springs of 200 N/m (four springs) and 400 N/m (one spring), with dashpots C1–C5
• The final values of natural frequency and damping ratio of the identified modes
are selected according to the results of the sensitivity analyses with respect to the
number of block rows, for a fixed value of the maximum model order (equal to
16) in the stabilization diagram. The cluster characterized by the minimum
variance of the estimates is selected as the one providing the best estimate of
the modal parameters for the considered mode;
• Mode shape estimates are finally obtained from SVD of the output PSD matrix at
the previously estimated frequency of the mode.
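A minimal sketch of the pole selection logic outlined in the list above is reported below. The 0–5 % damping range and the 10 % coefficient-of-variation limit are those stated in the text, whereas the distance threshold, the linkage criterion, and the function names are illustrative assumptions and do not reproduce the actual ARES implementation.

```python
# Illustrative sketch (not the ARES source code) of the pole selection logic
# described above: hierarchical clustering of the poles obtained from one source
# correlation, validation of the retained cluster (0-5 % mean damping, coefficient
# of variation <= 10 %, positive damping), and a final k-means step with k = 2.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.cluster.vq import kmeans2

def select_pole_cluster(freqs, dampings, dist_threshold=0.02):
    """freqs (Hz) and dampings (%) are pole estimates from one stabilization diagram."""
    if freqs.size < 2:
        return None
    pts = np.column_stack([freqs / freqs.max(), dampings / 100.0])
    labels = fcluster(linkage(pts, method='single'), t=dist_threshold,
                      criterion='distance')
    best = np.bincount(labels).argmax()              # most populated cluster
    f_c, x_c = freqs[labels == best], dampings[labels == best]
    # validation checks, applied after the clustering stage
    if not (0.0 < x_c.mean() <= 5.0) or x_c.std() / x_c.mean() > 0.10 or (x_c <= 0).any():
        return None
    if f_c.size < 2:
        return f_c.mean(), x_c.mean()
    # k-means (k = 2) on [0, 1]-normalized estimates removes residual outliers
    z = np.column_stack([(f_c - f_c.min()) / (np.ptp(f_c) + 1e-12),
                         (x_c - x_c.min()) / (np.ptp(x_c) + 1e-12)])
    _, lab = kmeans2(z, 2, minit='++')
    keep = lab == np.bincount(lab).argmax()
    return f_c[keep].mean(), x_c[keep].mean()
```

The final selection over the different numbers of block rows (minimum-variance cluster) is not reproduced in the sketch.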
The previous discussion about the sequence of analysis steps in ARES highlights
how the source separation makes the discrimination of physical and noise modes
easier and more reliable. The sensitivity analysis with respect to the number of
block rows and the grouping of the poles in clusters leads to a robust identification
of the modal parameters and to a quantification of the precision of the estimates.
The performance of ARES in terms of accuracy and reliability of estimates has been
investigated through a statistical analysis of the results obtained from data continuously
generated by a simulated 4-DOF system excited by Gaussian white noise.
The mass and stiffness properties of the system are reported in Fig. 6.10.
Rayleigh damping is adopted. The modal properties of the system are reported in
Table 6.5. The four case studies differ in the assumed values of damping or in the
SNR.
The system matrices and, therefore, the associated modal parameters have been
kept constant in all runs in order to focus the attention only on the uncertainties
associated to inherent limitations of the estimator.
The system response to Gaussian white noise has been simulated 10,000 times.
The input has been applied at DOF #1. Each simulated dataset consisted of four
measurement channels; the total record length was 3,600 s and the sampling
frequency was 10 Hz. Gaussian white noise has been added to the system response
in order to simulate the effect of measurement noise; the SNRs are reported in
Table 6.5.
Table 6.6 ARES: success rate of automated modal identification in 10,000 runs

Mode | Case study #1—success rate (%) | Case study #2—success rate (%) | Case study #3—success rate (%) | Case study #4—success rate (%)
I | 99.79 | 100.0 | 99.71 | 99.87
II | 99.96 | 100.0 | 99.97 | 100.0
III | 99.95 | 100.0 | 99.98 | 100.0
IV | 100.0 | 100.0 | 100.0 | 100.0
Each dataset has then been processed using the described algorithm in order to automatically extract the modal parameters of the system.
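The following sketch illustrates how one such simulation run could be set up; the masses and stiffnesses are those read from Fig. 6.10, while the Rayleigh damping coefficients, the boundary conditions (chain fixed at both ends), and the measurement noise level are assumptions made only for the illustration.

```python
# Sketch of one run of the setup described above: 4-DOF chain (masses and
# stiffnesses as read from Fig. 6.10, assumed fixed at both ends), assumed
# Rayleigh damping, white-noise force at DOF #1, 3,600 s at 10 Hz, plus
# added measurement noise (assumed level).
import numpy as np
from scipy.signal import StateSpace, lsim

M = np.diag([5.0, 5.0, 5.0, 10.0])                    # kg
k = [200.0, 200.0, 200.0, 200.0, 400.0]               # N/m
K = np.zeros((4, 4))
for i in range(4):
    K[i, i] = k[i] + k[i + 1]
    if i < 3:
        K[i, i + 1] = K[i + 1, i] = -k[i + 1]
C = 0.1 * M + 0.001 * K                               # assumed Rayleigh damping

Minv = np.linalg.inv(M)
sel = np.array([[1.0], [0.0], [0.0], [0.0]])          # force applied at DOF #1
A = np.block([[np.zeros((4, 4)), np.eye(4)], [-Minv @ K, -Minv @ C]])
B = np.vstack([np.zeros((4, 1)), Minv @ sel])
Co = np.hstack([-Minv @ K, -Minv @ C])                # measured outputs: accelerations
D = Minv @ sel

fs, T = 10.0, 3600.0
t = np.arange(0.0, T, 1.0 / fs)
u = np.random.randn(t.size)                           # Gaussian white-noise input
_, acc, _ = lsim(StateSpace(A, B, Co, D), u, t)       # acc: (n_samples, 4)
acc += 0.05 * acc.std(axis=0) * np.random.randn(*acc.shape)  # measurement noise
```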
The analysis of the obtained results has highlighted the reliability of ARES.
In fact, a success rate larger than 99 % has been obtained for all modes (Table 6.6).
Missed identification of the dynamic properties of one of the modes occurred only
in a few runs. This was probably due to a combined effect of weak excitation and
low SNR, which affected the quality of separation and stabilization.
The results in terms of natural frequency and damping estimates are summarized
in Tables 6.7 and 6.8. Very accurate natural frequency estimates, characterized by
low standard deviation σ, have been obtained. The error in natural frequency
estimates is much lower than 1 % in 95 % of the runs for all case studies.
The accuracy of the estimates slightly improves when the SNR increases. The
estimates are also very precise. In fact, the coefficient of variation γf,cluster of the
natural frequency estimates in a cluster is much lower than 0.1 % in 95 % of
the runs when the cluster selected by the sensitivity analysis with respect to the
number of block rows is considered.
Damping estimates are fairly accurate and characterized by moderate uncertainty
(σ lower than 0.2 %). In particular, the variability of the estimates slightly
increases when the nominal damping values increase. Larger errors are associated
with damping estimates than with natural frequency estimates. However, the scatter
with respect to the nominal values is lower than 10 % and 20 % in 50 % and 95 % of
the runs, respectively. The errors slightly decrease when the SNR increases.
Damping estimates are also fairly precise (the coefficient of variation γξ,cluster of
the damping ratio estimates in a cluster is much lower than 10 % in 95 % of the
runs when the cluster selected by the sensitivity analysis with respect to the
number of block rows is considered).
Fig. 6.11 ARES: Distribution of the identified damping ratios after 10,000 runs (case study #3)—mode I (a), mode II (b), mode III (c), mode IV (d)
6.4 Automated Modal Tracking: AFDD-T

6.4.1 Objectives
The requirements of advanced seismic protection systems for critical structures focus
attention on reducing the computational effort and response time of modal
extraction engines. LEONIDA is an effective solution for automated modal parameter
identification and can also be used as the modal information engine of
vibration-based monitoring systems, but the required record length, computational
burden, and response time make it not ideal for a specific class of
applications: the continuous monitoring of structures in seismically prone areas.
Thus, the availability of an optimized strategy for automated modal tracking for
rapid structural health assessment in seismically prone areas is crucial. In the case
of structures exposed to seismic risk, the procedures for automated modal tracking
have to be accurate and reliable but they also have to be robust and able to provide
frequent estimates of dynamic properties.
From a general point of view, both automated modal tracking and automated
modal parameter identification algorithms are suitable for modal-based SHM.
However, most of the automated OMA procedures are characterized by high
response time. On the contrary, an automated modal tracking procedure can
be considered effective for vibration-based SHM in seismic areas if it is
characterized by low computational burden and it is robust even in the case of
short records. In fact, these characteristics allow increasing the number of modal
parameter estimates per hour. Moreover, application-dependent data processing
parameters have to be avoided in order to effectively follow the natural changes of
the modal properties of structures due to damage or environmental effects.
Refinement of these aspects is relevant for the fast evaluation of health conditions
of a structure after an earthquake or during a seismic sequence. Statistical treatment
of modal parameter estimates and elimination of environmental effects
(Hu et al. 2012, Sohn et al. 2003, Worden et al. 2007) can be effective for accurate
health assessment purposes. Even in the case of structures subjected to seismic
events, a reliable evaluation of the variation of the modal parameters induced by a
ground motion requires the analysis of a sufficiently large population of
samples collected in the first few hours after the event. The collection of a
sufficient amount of data allows an appropriate consideration of the effect of
random errors and slight nonstationarities on the estimates. However, the need for
near real-time estimation of the modal parameters after seismic events is above all
related to the issues of emergency management. As also demonstrated by recent
seismic events that occurred in Italy—see, for instance, the 2009 L'Aquila earthquake
data—a sequence of earthquakes closely spaced in time (much less than 1 h
between two consecutive ground motions) is usual, in particular in the first days
after the mainshock. Thus, a near real-time estimation of the modal properties
allows the collection of a sufficient amount of information about the structure in
between two events, when the effects of the input ground motions that negatively
affect the output-only estimation of the modal parameters are null (Rainieri and
Fabbrocino 2010, Rainieri et al. 2011b). The collection of a fairly large amount of
data in a short time after the seismic event is not a trivial task. Most of the automated
robust modal tracking procedures able to provide frequent modal parameter
estimates after the event play a fundamental role for effective vibration-based
SHM in seismically prone areas. In this context it is worth mentioning that the
prompt post-earthquake health assessment of strategic structures is also crucial to
support rescue operations.
Fig. 6.12 AFDD-T: Algorithm for automated modal tracking (flowchart: check for available reference modes; data loading from database, file, or measurement hardware; reference mode acquisition; computation of the PSD matrix; SVD of the PSD matrix; mode bandwidth identification; data visualization and storage; loop over the next dataset), (© Elsevier Ltd. 2011), reprinted with permission
Fig. 6.13 AFDD-T: MAC vs. frequency plot, (© Elsevier Ltd. 2011), reprinted with permission

Fig. 6.14 AFDD-T: Automatic selection of the bandwidth of the mode, (© Elsevier Ltd. 2011), reprinted with permission

Fig. 6.15 AFDD-T: Effect of the number of sensors (simulated data): poor spatial definition (a), improved spatial definition (b), (© Elsevier Ltd. 2011), reprinted with permission
Another relevant issue concerns the definition of the number of sensors that
ensures an effective spatial filtering. Figure 6.15 shows that the effectiveness of
spatial filtering improves as the number of sensors increases. Results of simulations
and real applications have shown that, in the presence of a few sensors, local
maxima appear in the MAC vs. frequency plot; however, the absolute maximum
is reached in the bandwidth of the considered mode defined by the reference mode
shape. Thus, detection of the absolute maximum ensures a reliable automatic setting
of the filter and an effective extraction of modal parameters. Only a poor sensor
placement, unable to highlight the differences in the shapes of the structural
modes, can seriously affect the reliability of the automated modal tracking (Rainieri
et al. 2012).

Fig. 6.16 SHM system of the School of Engineering Main Building: User interface of AFDD-T, (© Elsevier Ltd. 2011), reprinted with permission

In this case it is possible to observe a decrease in the success rate of
identification of the modes characterized by very similar mode shape estimates. On
the contrary, if the sensor layout is properly set (off-diagonal terms of the AutoMAC
matrix very close to 0), the filtering procedure is effective even in the case of a few
sensors and short records. Note that the short duration of the time series yields noisy
spectra as a result of the small number of averages. The adoption of an appropriate
architecture for data processing allows getting a new modal parameter estimate every
3–5 min. This time interval fits the requirements of safety and emergency management
in seismically prone areas, allowing the observation of the evolution of the
structure in between consecutive events of the seismic sequence.
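A minimal sketch of the spatial-filtering idea described above is reported below: the MAC between the reference mode shape and the first singular vector of the output PSD matrix is computed at each frequency line, and the band around the absolute maximum where the MAC stays above an assumed limit is taken as the mode bandwidth. The limit value and the function signature are illustrative assumptions, not the actual AFDD-T settings.

```python
# Sketch of the spatial-filtering idea (not the AFDD-T code): MAC between a
# reference mode shape and the first singular vector of the PSD matrix at each
# frequency line; the band around the absolute maximum where the MAC stays above
# an assumed limit is returned as the mode bandwidth.
import numpy as np

def mode_bandwidth(f, G, phi_ref, mac_limit=0.8):
    """f: frequency lines; G: PSD matrices (n_f, n_ch, n_ch); phi_ref: reference shape."""
    mac_f = np.zeros(f.size)
    for q in range(f.size):
        u1 = np.linalg.svd(G[q])[0][:, 0]
        mac_f[q] = np.abs(np.vdot(phi_ref, u1)) ** 2 / (
            np.vdot(phi_ref, phi_ref).real * np.vdot(u1, u1).real)
    q0 = mac_f.argmax()                    # absolute maximum (cf. Fig. 6.14)
    lo = hi = q0
    while lo > 0 and mac_f[lo - 1] >= mac_limit:
        lo -= 1
    while hi < f.size - 1 and mac_f[hi + 1] >= mac_limit:
        hi += 1
    return f[lo], f[hi], mac_f
```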
AFDD-T has been implemented into software running on the local server of the
School of Engineering Main Building SHM system and interfaced with its remote
MySQL database. The software, which works on an on-line basis, has been developed in
the LabVIEW environment and is characterized by a graphic interface showing the
location of sensors on the structure, the plot of the first singular value (obtained by
SVD of the output PSD matrix) vs. frequency, the acceleration waveforms and the
results of identification in the form of plots of the natural frequency estimates
vs. time (Fig. 6.16).
The software is characterized by a Producer/Consumer architecture (Fig. 6.17).
The Producer cycle is used to get data from the database while the Consumer cycle
processes these data and shows the output on screen. Parallel cycles extract the
dynamic properties (natural frequency and mode shape) for each monitored mode.
The adopted software architecture and a partial overlap among subsequent time
series allow collecting a new estimate of the fundamental modal parameters every
3 min.

Fig. 6.17 AFDD-T: Software architecture, (© Elsevier Ltd. 2011), reprinted with permission

Several error checks ensure software robustness against possible hardware
malfunctions. Thus, AFDD-T makes possible an effective tracking of modal
parameters and, indirectly, of the health state of the structure (Sohn et al. 2003)
in seismically prone areas.
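Although the actual software is implemented in LabVIEW, the Producer/Consumer pattern itself is language independent; the following sketch illustrates the same idea with a Python queue and two threads, where fetch_record() and extract_modes() are hypothetical placeholders for the database query and the modal extraction.

```python
# Language-agnostic sketch of the Producer/Consumer pattern described above (the
# actual implementation is in LabVIEW): fetch_record() and extract_modes() are
# hypothetical placeholders for the database query and the modal extraction.
import queue
import threading

q = queue.Queue(maxsize=10)

def producer(fetch_record, n_records):
    for _ in range(n_records):
        q.put(fetch_record())          # e.g. read one 10-min record from the database
    q.put(None)                        # sentinel: no more data

def consumer(extract_modes):
    while True:
        record = q.get()
        if record is None:
            break
        print(extract_modes(record))   # e.g. natural frequencies of the tracked modes

# threading.Thread(target=producer, args=(fetch_record, 100)).start()
# threading.Thread(target=consumer, args=(extract_modes,)).start()
```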
AFDD-T has been validated against simulated data before its integration in the
monitoring system of the School of Engineering Main Building. Simulated data have been
obtained by applying Gaussian white noise as input to a shear-type, 15-story, 1-bay
r.c. frame, characterized by well-separated modes, and to a 3D, two-story r.c. frame,
characterized by closely spaced modes. The effect of the number of sensors has been
assessed first. The obtained results confirm that a reliable modal tracking is possible
even in the presence of a few sensors, provided that mode observability is assured. The
only effect is the presence of other relative maxima in the MAC vs. frequency plot
together with the absolute maximum, as mentioned in the previous section.
Attention has then been focused on the robustness of the procedure. Local
stiffness changes have been applied to the structural models in order to simulate
damage. The capability of AFDD-T of identifying the modal parameters without
updating the reference mode shapes has been evaluated. The obtained results have
shown that AFDD-T is able to follow the variation of the natural frequencies with
reduced errors (lower than 1 %) even in the case of a few sensors and for quite large
stiffness (and, therefore, natural frequency) variations. The robustness with respect to
moderate changes in the structural mode shapes is crucial for applications of modal
tracking for emergency management purposes, when a sequence of earthquakes
closely spaced in time can hit the structure and make the automatic update of the
reference mode shapes through LEONIDA (or ARES) unpredictable. In the case of
closely spaced modes and a few sensors, the use of mode shape estimates obtained by
FDD-based algorithms improves the effectiveness of spatial filtering in the presence
of noise. More details about the results of the simulations for the validation of AFDD-T
can be found elsewhere (Rainieri et al. 2011a).

Fig. 6.18 The School of Engineering Main Building: Singular value plots, (© Elsevier Ltd. 2011), reprinted with permission
AFDD-T has also been tested against real datasets collected by the SHM system
of the School of Engineering Main Building in view of its integration into the
continuous vibration-based monitoring system. These datasets have first been analyzed
offline. Test results have highlighted the superior performance of spatial filtering over
traditional filtering. Application of a bandpass filter may suffer from some limitations. In
fact, the identification of the bandwidth of a mode according only to PSD plots or
singular value plots may be difficult. Moreover, static filter limits allow the
observation only of limited variations of the natural frequencies, in particular in
the presence of closely spaced modes. The School of Engineering Main Building,
for instance, is characterized by two closely spaced modes with natural
frequencies equal to 0.92 and 0.99 Hz, respectively. However, such values can
decrease to 0.89–0.90 Hz for the first mode and 0.95–0.97 Hz for the second mode
in summer, while they increase in winter up to 0.94–0.95 Hz and 1.01–1.02 Hz for the
first and second mode, respectively. If a limit value equal to 0.95 Hz (Fig. 6.18) is
adopted to separate the modes, in agreement with the sample singular value plots
obtained from a single output-only modal identification by FDD, the classical
bandpass filter does not provide satisfactory results in terms of effectiveness and
reliability of modal parameter tracking. In fact, in the presence of closely spaced
modes, damage or environmental effects can easily move the natural frequencies
outside the limits of the filter, causing an error in the estimates. On the contrary,
AFDD-T sets the limits of the filter at each iteration so that changes in modal
parameters can effectively and automatically be tracked.
The performance of AFDD-T in estimating higher modes has been assessed by
offline analysis of a number of records of the dynamic response to ambient
vibrations of the School of Engineering Main Building collected by its monitoring
system. Each record consisted of six measurement channels and was 10-min long.
The first record has been used to carry out a manual identification of the
structural modes by FDD. Then, the other records have been used for the automated
modal tracking, carried out by adopting as reference mode shapes those provided by
the manual modal parameter identification. Some of the higher modes of the
structure were not identifiable from the considered dataset, and they have been
excluded also from the performance assessment of AFDD-T. This is consistent with
the fact that the automated identification of higher, poorly excited modes can fail.
Results have shown that AFDD-T is able to carry out an effective modal tracking
also at higher modes. Only in the case of poorly excited modes and significant noise
levels has AFDD-T provided more scattered data (refer to Rainieri et al. 2011a for
more details). However, even a manual identification of the modal properties may be
difficult or unfeasible in similar conditions, when the resonance is almost
buried in noise. As far as the role of the number of sensors is concerned, the results of
modal tracking confirm that only six sensors, among those installed on the School of
Engineering Main Building, are enough to provide a robust and reliable identification
of the modes if observability is assured. This is relevant for large-scale monitoring
of strategic structures, because the number of sensors and, therefore, the costs of the
monitoring system can be reduced without detrimental influence on the reliability
of the results. This circumstance permits an optimization of the economic resources
assigned to the implementation of structural monitoring systems when a large
number of structures has to be monitored.
After validation, AFDD-T has been optimized to continuously monitor the
fundamental modes of the School of Engineering Main Building, because they
are the most relevant from the seismic response standpoint.
An effective automated modal parameter identification and tracking has been
obtained by the combination of LEONIDA with AFDD-T (Fig. 6.19) and their
integration within the SHM system installed on the structure. LEONIDA provides
the reference mode shapes to AFDD-T. Periodically or on demand (i.e. after an
extreme event), a new reference can be obtained by LEONIDA and used to confirm
the previous one or to replace it.
Fig. 6.19 Integration of LEONIDA and AFDD-T for fully automated vibration-based monitoring

The monitoring results reported here clearly point out the potentialities and
limitations of automated OMA procedures for vibration-based monitoring. The
sequences of natural frequency estimates collected in summer 2008 and in winter
2009 are shown in Fig. 6.20. Their analysis has highlighted some environmental
effects on the modal parameter estimates.

Fig. 6.20 The School of Engineering Main Building: Monitoring results in summer 2008 (left) and winter 2009 (right); occurrence of the L'Aquila earthquake is marked by the circles, (© Elsevier Ltd. 2011), reprinted with permission
The sequence related to winter 2009 also includes the effects of the ground
motion induced by the L'Aquila earthquake mainshock, which occurred on April 6,
2009. It is represented by the sudden drop at the end of the plots.
Statistics are computed on 7,132 samples in the first case (summer 2008) and on
13,265 samples in the second (winter 2009).
In order to better visualize the variations of the natural frequencies over different
days, natural frequency estimates related to the same day are also grouped together
and plotted over time in Fig. 6.21.
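A possible way to carry out this per-day aggregation is sketched below with pandas; the file and column names are hypothetical, since the actual processing is part of the monitoring software.

```python
# Sketch of the per-day aggregation used for plots like Fig. 6.21 (hypothetical
# file and column names; the actual processing is part of the monitoring software).
import pandas as pd

df = pd.read_csv("frequency_estimates.csv", parse_dates=["timestamp"])
daily = df.groupby(df["timestamp"].dt.date)["f_mode1"].agg(["mean", "std"])
print(daily.head())
```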
Histograms of data collected in summer 2008 and winter 2009 are shown in
Fig. 6.22. In Table 6.10 a synthesis of the results of automated modal parameter
tracking in operational conditions is reported. Analysis of the data and of the
histograms confirms that the estimates are normally distributed.
Fig. 6.21 The School of Engineering Main Building: Monitoring results in summer 2008 (left) and winter 2009 (right)—aggregated data, (© Elsevier Ltd. 2011), reprinted with permission
The collected data can also be useful for a detailed characterization of the
structure in its healthy state. In particular, the effect of environmental factors can
be studied and the estimates can be cleansed of these effects before the
application of damage detection procedures. This aspect is out of the scope of
the book. The interested reader can refer to the wide literature on this topic for more
details (see, for instance, Deraemaeker et al. 2008, Magalhaes et al. 2012, Hu
et al. 2012). In the context of the present discussion, it is sufficient to observe a
moderate influence of temperature on the dynamic characteristics of the monitored
structure. In fact, during hot summer time, lower values of natural frequency for the
first three modes have been observed (in particular for the first and the third mode)
with respect to those observed in winter.
The low values of standard deviation of natural frequencies reported in
Table 6.10 confirm that the influence of environmental parameters on this structure
is relatively small (standard deviations lower than 0.01 Hz) and relatively uniform
for all modes.
Fig. 6.22 The School of Engineering Main Building: Distributions of natural frequencies observed in summer 2008 (left) and winter 2009 (right), (© Elsevier Ltd. 2011), reprinted with permission
Table 6.10 The School of Engineering Main Building: summary of monitoring results, (© Elsevier Ltd. 2011), reprinted with permission

Monitoring period | Mode | fn (Hz) modal value | fn (Hz) mean value | fn (Hz) standard deviation
Summer 2008 (7,130 samples)—average temperature 25.2 °C | I | 0.92 | 0.92 | 0.0098
 | II | 0.99 | 0.99 | 0.0081
 | III | 1.29 | 1.29 | 0.0084
Winter 2009 (13,265 samples)—average temperature 11.5 °C | I | 0.94 | 0.94 | 0.0101
 | II | 0.99 | 0.99 | 0.0074
 | III | 1.31 | 1.31 | 0.0075
6.5 Automated OMA and Vibration-Based Monitoring

This long (and exciting, we hope!) journey through OMA and its applications ends
with an outlook on the opportunities of (automated) OMA in the field of vibration-
based monitoring. In fact, as mentioned in Chap. 5 and remarked in this chapter, the
identification of the modal parameters is often an intermediate step in view of other
analyses. Model updating is just one of the possible applications. Others concern
damage detection and tensile load estimation. A detailed discussion about these last
two topics strictly related to continuous vibration-based SHM is beyond the scope
of the book. This section is intended to provide only an overview about them to
highlight the wide applicative perspectives of the previously described procedures.
Relevant references are reported at the end of the chapter for the interested reader.
Vibration-based SHM is a very active research field. Extensive surveys and
dedicated books are available in the literature (Doebling et al. 1996, Sohn
et al. 2003, Farrar and Worden 2013). The monitoring process consists in the
observation of the structure over long periods of time. Records of the structural
response are continuously acquired by appropriate sensors and measurement
systems, and damage sensitive features are extracted from the collected data and
analyzed to assess the health conditions of the structure.
From a general point of view, damage is defined as any change of the structure
that adversely affects its performance (Sohn et al. 2003). This change can be in the
form of stiffness change (for instance, cracking), mass change, connectivity change
(for instance, looseness in a bolted joint) or boundary condition change (for
instance, bridge scour). An effective SHM system should be able to automatically
detect damage at an early stage (Doebling et al. 1996).

Fig. 6.23 The School of Engineering Main Building: Monitoring results before and after the L'Aquila earthquake (occurring at time 3.32) for mode I (a), mode II (b), and mode III (c), (© Elsevier Ltd. 2011), reprinted with permission

Five damage detection levels have been defined (Sohn et al. 2003):
• Level 1: identification of damage existence;
• Level 2: localization of damage;
• Level 3: identification of the type of damage;
• Level 4: quantification of damage severity;
• Level 5: prediction of the remaining service life of the structure (prognosis).
Modal-based damage detection starts by recognizing that the modal parameters
are functions of the physical parameters (mass, stiffness, and damping). Assuming
that damage yields a change in the physical properties of the structure, this is reflected
by a change in the modal properties. Thus, it is theoretically possible to identify
damage from the analysis of the variations of the modal parameters. A number of
damage sensitive features have been, therefore, defined in terms of modal parameters.
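The premise can be illustrated numerically: in the following sketch, a stiffness reduction (simulated damage) in a simple 2-DOF system shifts its undamped natural frequencies, computed by solving the generalized eigenvalue problem; the mass and stiffness values are arbitrary and serve only the illustration.

```python
# Numerical illustration of the premise above: a stiffness reduction (simulated
# damage) shifts the undamped natural frequencies of a simple 2-DOF system.
# The mass and stiffness values are arbitrary and serve only the illustration.
import numpy as np
from scipy.linalg import eigh

M = np.diag([1000.0, 1000.0])                       # kg

def natural_frequencies(k1, k2):
    K = np.array([[k1 + k2, -k2], [-k2, k2]])        # N/m
    lam = eigh(K, M, eigvals_only=True)              # generalized eigenvalues, (rad/s)^2
    return np.sqrt(lam) / (2.0 * np.pi)              # natural frequencies in Hz

f_healthy = natural_frequencies(2.0e6, 1.5e6)
f_damaged = natural_frequencies(1.8e6, 1.5e6)        # 10 % reduction of k1
print((f_damaged - f_healthy) / f_healthy * 100.0)   # percentage frequency shifts
```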
One of the main drawbacks of modal-based damage detection is related to the
sensitivity to environmental and operational conditions that can cause changes in
the modal parameters of the same order of magnitude as those induced by damage. As
a consequence, the modal parameter estimates have to be cleansed of the effects
of environmental factors in order to effectively detect damage. Possible approaches to
remove environmental effects are those based on regression analysis (a sample
application can be found in Magalhaes et al. 2012), when measurements of the
environmental parameters are available, and those based on factor analysis (see, for
instance, Deraemaeker et al. 2008) to get rid of the variability due to the environment
without information about the environmental factors.
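The regression-based approach can be sketched in its simplest form as follows: a linear model of natural frequency versus temperature is fitted on data collected in the healthy state, and the residuals are used as environment-corrected features. The cited works adopt considerably more refined models; the data below are synthetic and the simple linear fit is only meant to convey the idea.

```python
# Minimal sketch of the regression-based correction: a linear model of natural
# frequency vs. temperature is fitted on healthy-state data and the residuals are
# used as environment-corrected features. Synthetic data; the cited works use
# considerably more refined models.
import numpy as np

def temperature_corrected_feature(f_est, temp):
    slope, intercept = np.polyfit(temp, f_est, deg=1)    # fit on healthy-state data
    return f_est - (slope * temp + intercept)             # residual frequency

rng = np.random.default_rng(0)
temp = rng.uniform(5.0, 30.0, 500)                                 # deg C
f_est = 0.92 - 0.0005 * temp + rng.normal(0.0, 0.003, 500)         # synthetic estimates
print(temperature_corrected_feature(f_est, temp).std())
```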
Damage sensitive features can be defined in terms of natural frequencies and
mode shapes. Natural frequency variations provide the easiest way to detect the
presence of damage, because they can be accurately estimated even in the presence
of a few sensors. However, the information they provide is limited to Level 1
damage detection. Moreover, their sensitivity to damage is relatively low while
they are quite sensitive to environmental effects; thus, early stage damage detection
is often difficult. Other features are defined in terms of mode shapes and mode
shape curvatures. Mode shapes are less sensitive to environmental conditions, and
they provide relevant information for damage location. However, they are typically
estimated with lower accuracy than natural frequencies.
Once the damage sensitive features have been purified from the environmental
effects, a number of tools can be applied for feature discrimination. They can be
broadly classified as supervised and unsupervised learning approaches (Farrar and
Worden 2013). The former are applied when data are available for both the
undamaged and damaged structure, while the latter are applied when reference
data are available only for the structure in healthy state.
This brief review about modal-based damage detection confirms the fundamental
role played by automated OMA procedures in the field of SHM and the importance of
getting very accurate modal parameter estimates in a fully automated way.
Another interesting application of automated OMA procedures concerns the
indirect estimation and monitoring of tensile loads in cables and tie rods. A
vibration-based system for tensile load monitoring can easily be developed based
on the concepts and procedures illustrated throughout this book.
The tensile load can be obtained from estimates of the dynamic parameters of
bending modes of the cable or tie-rod under investigation through the solution of an
inverse problem. A number of approaches for tensile load estimation are available
in the literature (see, for instance, Tullini and Laudiero 2008, Rebecchi et al. 2013,
Wenzel and Pichler 2005, Maes et al. 2013). Their review is out of the scope of this
section. However, the fundamental concepts behind some of them are herein
reported to show how a tensile load monitoring system can be developed based
on automated OMA procedures. Other, possibly more refined, approaches for
tensile load estimation can obviously replace those reported here with no loss of
generality for the present discussion.
The approach proposed by Rebecchi et al. (2013) can be adopted for the
estimation of the tensile load in tie-rods. In the most general case of uncertain
boundary conditions, the tensile load can be estimated from measurements of one
mode shape of the member in at least five positions. The method starts from the
equation governing the dynamic behavior of a beam with uniform cross section
subjected to a constant axial tensile force N (with v the transverse displacement,
EI the bending stiffness, and ρ the mass per unit length):

EI \frac{\partial^4 v}{\partial x^4} - N \frac{\partial^2 v}{\partial x^2} + \rho \frac{\partial^2 v}{\partial t^2} = 0

The mode shapes are linear combinations, with coefficients C1, C2, C3, and C4, of the
functions \sin(\alpha x), \cos(\alpha x), \sinh(\beta x), and \cosh(\beta x), where:

\alpha = \sqrt{\frac{N}{2EI}\left(\sqrt{1 + \frac{4\omega_r^2 \rho EI}{N^2}} - 1\right)}, \qquad \beta = \sqrt{\frac{N}{2EI}\left(\sqrt{1 + \frac{4\omega_r^2 \rho EI}{N^2}} + 1\right)} \qquad (6.4)
with ωr the natural frequency of the considered mode. The coefficients C1, C2, C3,
and C4 are determined from the measured modal displacements at four points of
the beam; the last modal displacement is used to compute the tensile load N.
The number of unknowns and, therefore, of measurement points can be reduced
if the boundary conditions are known. More details can be found elsewhere (Tullini
and Laudiero 2008, Rebecchi et al. 2013).
When the influence of bending stiffness and end constraints is negligible with
respect to the tensile load, the problem is reduced to that of the wire. In this case the
natural frequencies are given by the following closed-form expression
(Lagomarsino and Calderini 2005):
f_n = \frac{n}{2L}\sqrt{\frac{N}{\rho}}, \qquad n = 1, 2, \ldots, N_m \qquad (6.5)
where L is the length of the cable. Thus, the experimental estimation of a single
natural frequency is sufficient to estimate the tensile load. A correction factor can
possibly be applied to take into account the small but not null influence of
bending stiffness and support conditions in real cases (Wenzel and Pichler 2005).
When only the rotational stiffness of the supports is negligible with respect to the
bending stiffness, the problem becomes that of the prestressed pinned beam. The
analytical expression for the natural frequency of the n-th mode is:
f_n = \frac{n}{2}\sqrt{\frac{n^2 \pi^2 EI}{\rho L^4} + \frac{N}{\rho L^2}}, \qquad n = 1, 2, \ldots, N_m \qquad (6.6)
Two natural frequency estimates are required to solve the inverse problem in this
case, because there are two unknowns (N and EI). These can be obtained from the
following closed form expressions (Lagomarsino and Calderini 2005):
N = 4 \rho L^2 \left( \frac{j^2 f_i^2}{i^2 \left(j^2 - i^2\right)} - \frac{i^2 f_j^2}{j^2 \left(j^2 - i^2\right)} \right) \qquad (6.7)

EI = \frac{4 \rho L^4}{\pi^2} \left( \frac{f_j^2}{j^2 \left(j^2 - i^2\right)} - \frac{f_i^2}{i^2 \left(j^2 - i^2\right)} \right) \qquad (6.8)
where fi and fj are the natural frequency estimates for the i-th and j-th modes
respectively, with i < j.
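The expressions above translate directly into code; the following sketch implements the inversion of (6.5) for the wire model and (6.7)–(6.8) for the pinned prestressed beam, with illustrative numerical values in the final example.

```python
# Direct implementation of the closed-form expressions above: tensile load from a
# single frequency for the wire model (inverse of Eq. 6.5), and tensile load and
# bending stiffness from two frequencies for the pinned prestressed beam
# (Eqs. 6.7 and 6.8). rho is the mass per unit length and L the free length.
import numpy as np

def tension_wire(f_n, n, L, rho):
    """Invert Eq. (6.5): N = 4 rho L^2 f_n^2 / n^2."""
    return 4.0 * rho * L**2 * f_n**2 / n**2

def tension_and_EI_pinned(f_i, f_j, i, j, L, rho):
    """Eqs. (6.7) and (6.8) for measured frequencies f_i, f_j of modes i < j."""
    d = j**2 - i**2
    N = 4.0 * rho * L**2 * (j**2 * f_i**2 / (i**2 * d) - i**2 * f_j**2 / (j**2 * d))
    EI = 4.0 * rho * L**4 / np.pi**2 * (f_j**2 / (j**2 * d) - f_i**2 / (i**2 * d))
    return N, EI

# illustrative values only: a 10 m member with 30 kg/m, using modes 1 and 2
print(tension_and_EI_pinned(4.2, 9.1, 1, 2, L=10.0, rho=30.0))
```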
Taking advantage of programmable hardware and integrating an automated
OMA procedure and the appropriate method for tensile load estimation, a continu-
ous vibration-based monitoring of tensile loads can be carried out. A monitoring
system for members in traction is herein described. It is based on distributed
wireless modules for acceleration measurements and on a centralized data
processing system. A small number of piezoelectric accelerometers are installed
on each cable. The vibration data related to one or more cables are acquired by the
wireless modules and stored into a MySQL database. Each monitored element has
an individual table of its own. ARES continuously analyzes the dynamic response
of the cables to ambient vibrations, providing the modal parameter estimates for
tensile load estimation. The architecture of the system is illustrated in Fig. 6.24.
When the number of monitored elements is very large, the data acquisition task can
be fragmented among a number of local servers.
The master server acquires time histories of predefined duration from the local
servers and carries out the automated identification of the modal parameters.
Tensile loads are then estimated according to the most appropriate procedure
(depending on the characteristics of the member) among those previously outlined.
A prototype system has recently been installed in Italy (Rainieri and Fabbrocino
2013). It is monitoring the tensile load in one of the cables of a sample steel arch
(Fig. 6.25). Interesting preliminary results have been obtained so far, highlighting
the influence of environmental factors on the dynamic response of structural systems.
Fig. 6.25 Installation of the prototype of monitoring system on one of the cables of a sample
steel arch
References
Allemang RJ, Brown DL, Phillips AW (2010) Survey of modal techniques applicable to
autonomous/semi-autonomous parameter identification. In: Proc international conference on
noise and vibration engineering ISMA 2010, Leuven
Andersen P, Brincker R, Goursat M, Mevel L (2007) Automated modal parameter estimation for
operational modal analysis of large systems. In: Proc 2nd international operational modal
analysis conference, Copenhagen
Bendat JS, Piersol AG (2000) Random data: analysis and measurement procedures, 3rd edn. John
Wiley & Sons, New York
Brincker R, Andersen P, Jacobsen NJ (2007) Automated frequency domain decomposition for
operational modal analysis. In: Proc 25th SEM international modal analysis conference,
Orlando
Brownjohn JMW, Carden EP (2007) Tracking the effects of changing environmental conditions on
the modal parameters of Tamar bridge. In: Proc 3rd international conference on structural
health monitoring of intelligent infrastructure, Vancouver
Carden EP, Brownjohn JMW (2008) Fuzzy clustering of stability diagrams for vibration-based
structural health monitoring. Comp Aid Civil Infrastr Eng 23:360–372
Chauhan S, Tcherniak D (2009) Clustering approaches to automatic modal parameter estimation.
In: Proc XVII International Modal Analysis Conference, Orlando
Clough RW, Penzien J (1993) Dynamics of structures, 2nd edn. McGraw-Hill Inc., New York
Deraemaeker A, Reynders E, De Roeck G, Kullaa J (2008) Vibration-based structural health
monitoring using output-only measurements under changing environment. Mech Syst Signal
Process 22:34–56
Devriendt C, De Troyer T, De Sitter G, Guillaume P (2008) Automated operational modal analysis
using transmissibility functions. In: Proc international conference on noise and vibration
engineering ISMA 2008, Leuven
Doebling SW, Farrar CR, Prime MB, Shevitz DW (1996) Damage identification and health
monitoring of structural and mechanical systems from changes in their vibration
characteristics: a literature review. Technical report LA-13070-MS, UC-900. Los Alamos
National Laboratory, New Mexico
Farrar CR, Worden K (2013) Structural health monitoring: a machine learning perspective. Wiley,
Chichester
Goethals I, Vanluyten B, De Moor B (2004) Reliable spurious mode rejection using self learning
algorithms. In: Proc international conference on noise and vibration engineering ISMA 2004,
Leuven
Guan H, Karbhari VM, Sikorski CS (2005) Time-domain output only modal parameter
extraction and its application. In: Proc 1st international operational modal analysis conference,
Copenhagen
Hu W-H, Moutinho C, Caetano E, Magalhaes F, Cunha A (2012) Continuous dynamic monitoring
of a lively footbridge for serviceability assessment and damage detection. Mech Syst Signal
Process 33:38–55
Lagomarsino S, Calderini C (2005) The dynamical identification of the tensile force in ancient
tie-rods. Eng Struct 27:846–856
Maes K, Peeters J, Reynders E, Lombaert G, De Roeck G (2013) Identification of axial forces in
beam members by local vibration measurements. J Sound Vib 332:5417–5432
Magalhaes F, Cunha A, Caetano E (2008) Dynamic monitoring of a long span arch bridge. Eng
Struct 30:3034–3044
Magalhaes F, Cunha A, Caetano E (2009) Online automatic identification of the modal parameters
of a long span arch bridge. Mech Syst Signal Process 23:316–329
Magalhaes F, Cunha A, Caetano E (2012) Vibration based structural health monitoring of an
arch bridge: from automated OMA to damage detection. Mech Syst Signal Process
28:212–228
Pappa RS, James III GH, Zimmerman DC (1997) Autonomous modal identification of the Space
Shuttle tail rudder. Report NASA TM-112866, National Aeronautics and Space Administration
Peeters B, De Roeck G (2001a) One-year monitoring of the Z24-bridge: environmental effects
versus damage events. Earthq Eng Struct Dyn 30:149–171
Peeters B, De Roeck G (2001b) Stochastic system identification for operational modal analysis:
a review. ASME J Dyn Syst Meas Contr 123(4):659–667
Poncelet F, Kerschen G, Golinval JC (2008) In-orbit vibration testing of spacecraft structures.
In: Proc international conference on noise and vibration engineering ISMA 2008, Leuven
Rainieri C, Fabbrocino G (2010) Automated output-only dynamic identification of civil engineer-
ing structures. Mech Syst Signal Process 24:678–695
Rainieri C, Fabbrocino G (2013) Vibration data as a tool for continuous monitoring of cable tensile
loads. In: Proc 9th international workshop on structural health monitoring, Stanford
Rainieri C, Fabbrocino G, Cosenza E (2010) Some remarks on experimental estimation of
damping for seismic design of civil constructions. Shock Vib 17:383–395
Rainieri C, Fabbrocino G, Cosenza E (2011a) Near real-time tracking of dynamic properties for
standalone structural health monitoring systems. Mech Syst Signal Process 25:3010–3026
Rainieri C, Fabbrocino G, Cosenza E (2011b) Integrated seismic early warning and structural
health monitoring of critical civil infrastructures in seismically prone areas. Struct Health
Monit 10:291–308
Rainieri C, Fabbrocino G, Manfredi G, Dolce M (2012) Robust output-only modal identification
and monitoring of buildings in the presence of dynamic interactions for rapid post-earthquake
emergency management. Eng Struct 34:436–446
Rebecchi G, Tullini N, Laudiero F (2013) Estimate of the axial force in slender beams with
unknown boundary conditions using one flexural mode shape. J Sound Vib 332:4122–4135
Reynders E, Houbrechts J, De Roeck G (2012) Fully automated (operational) modal analysis.
Mech Syst Signal Process 29:228–250
Sohn H, Farrar CR, Hemez FM, Shunk DD, Stinemates DW, Nadler BR (2003) A review of
structural health monitoring literature: 1996–2001. Technical report LA-13976-MS, UC-900,
Los Alamos National Laboratory
Tan P-N, Steinbach M, Kumar V (2006) Introduction to data mining. Pearson Addison-Wesley,
Reading
Tullini N, Laudiero F (2008) Dynamic identification of beam axial loads using one flexural mode
shape. J Sound Vib 318:131–147
Vanlanduit S, Verboven P, Guillaume P, Schoukens J (2003) An automatic frequency domain
modal parameter estimation algorithm. J Sound Vib 265:647–661
Wenzel H, Pichler D (2005) Ambient vibration monitoring. Wiley, Chichester
Worden K, Manson G, Surace C (2007) Aspects of novelty detection. Key Eng Mater 347:3–16