Advanced Mechanical Vibrations - Physics, Mathematics and Applications
Physics, Mathematics and Applications
The right of Paolo Luciano Gatti to be identified as author of this work has been asserted by him
in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form
or by any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying and recording, or in any information storage or retrieval system, without
permission in writing from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact
[email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and
are used only for identification and explanation without intent to infringe.
Typeset in Sabon
by codeMantra
To my wife Simonetta and my daughter
Greta J., for all the future ahead.
And in loving memory of my parents Paolina and Remo
and my grandmother Maria Margherita, a person
certainly endowed with the ‘wisdom of life’.
Contents

Preface
Acknowledgements
Frequently used acronyms

Preface
In writing this book, the author’s main intention was to write a concise
exposition of the fundamental concepts and ideas that, directly or indi-
rectly, underlie and pervade most of the many specialised disciplines where
linear engineering vibrations play a part.
The style of presentation and approach to the subject matter places
emphasis on the inextricable – and at times subtle – interrelations and inter-
play between physics and mathematics, on the one hand, and between the-
ory and applications, on the other hand. In this light, the reader is somehow
guided on a tour of the main aspects of the subject matter, the starting point
being (in Chapter 2; Chapter 1 is, for the most part, an introductory chapter
on some basics) the formulation of the equations of motion by means of
analytical methods such as Lagrange’s equations and Hamilton’s principle.
Having formulated the equations of motion, the next step consists in deter-
mining their solution, either in the free vibration or in the forced vibration
conditions. This is done by considering both the time- and frequency-
domain solutions – and their strict relation – for different types of systems
in order of increasing complexity, from discrete finite degrees-of-freedom
systems (Chapters 3 and 4) to continuous systems with an infinite number
of degrees-of-freedom (Chapter 5).
Having obtained the response of these systems to deterministic excita-
tions, a further step is taken in Chapter 6 by considering their response to
random excitations – a subject in which, necessarily, notions of probability
theory and statistics play an important role.
This book is aimed at intermediate and advanced students of engineering,
physics and mathematics, and at professionals working in – or simply
interested in – the field of mechanical and structural vibrations. On his/her part,
the reader is assumed to have had some previous exposure to the subject
and to have some familiarity with matrix analysis, differential equations,
Fourier and Laplace transforms, and with basic notions of probability and
statistics. For easy reference, however, a number of important points on
some of these mathematical topics are the subject of two detailed appen-
dixes or, in the case of short digressions that do not interrupt the main flow
of ideas, directly included in the main text.
Milan (Italy) – May 2020
Paolo Luciano Gatti
Acknowledgements
The author wishes to thank the staff at Taylor & Francis, and in particular
Mr. Tony Moore, for their help, competence and highly professional work.
A special thanks goes to my wife and daughter for their patience and
understanding, but most of all for their support and encouragement
in the course of a writing process that, at times, must have seemed like
never-ending.
Last but not the least, a professional thanks goes to many engineers with
whom I had the privilege to collaborate during my years of consulting work.
I surely learned a lot from them.
Frequently used acronyms
BC boundary condition
BVP boundary value problem
C clamped (type of boundary condition)
F free (type of boundary condition)
FRF frequency response function
GEP generalised eigenvalue problem
HE Hamilton equation
IRF impulse response function
LE Lagrange equation
MDOF multiple degrees of freedom
MIMO multiple inputs–multiple outputs
n-DOF n-degrees of freedom
pdf probability density function
PDF probability distribution function
PSD power spectral density
QEP quadratic eigenvalue problem
r.v. random variable
SEP standard eigenvalue problem
SDOF (also 1-DOF) single degree of freedom
SISO single input–single output
SL Sturm–Liouville
SS simply supported (type of boundary condition)
TF transfer function
WS weakly stationary (of random process)
Chapter 1

A few preliminary fundamentals

1.1 INTRODUCTION
In the framework of classical physics – that is, the physics before the two
‘revolutions’ of relativity and quantum mechanics in the first 20–30 years
of the twentieth century – a major role is played by Newton’s laws. In par-
ticular, the fact that force and motion are strictly related is expressed by
Newton's second law \(F = dp/dt\), where \(p = mv\), and we get the familiar
\(F = ma\) if the mass is constant. This equation is definitely a pillar of (classical) dynamics, and one of the branches of dynamics consists in the study,
analysis and prediction of vibratory motion, where by this term one typi-
cally refers to the oscillation of a physical system about a stable equilibrium
position as a consequence of some initial disturbance that sets it in motion
or some external excitation that makes it vibrate.
Remark 1.1
of time for random ones. Then, in light of the fact that with random
data each observation/record is in some sense ‘unique’, their description
can only be given in statistical terms.
The time interval between two identical conditions of motion is the period \(T\),
which is the inverse of the (ordinary) frequency \(\nu = \omega/2\pi\) (expressed in Hertz;
symbol Hz, with dimensions of s\(^{-1}\)), which, in turn, represents the number
of cycles per unit time. The basic relations between these quantities are

\[ \omega = 2\pi\nu, \qquad T = 1/\nu = 2\pi/\omega \tag{1.2} \]

\[ e^{\pm iz} = \cos z \pm i\sin z, \qquad \cos z = \frac{e^{iz} + e^{-iz}}{2}, \qquad \sin z = \frac{e^{iz} - e^{-iz}}{2i} \tag{1.3} \]
where only the real part of the complex expressions is assigned a physical
meaning.
Phasors are often very convenient, but some care must be exercised when
considering the energy associated with the oscillatory motion because the
various forms of energy (energy, energy density, power, etc.) depend on the
square of vibration amplitudes. And since \(\mathrm{Re}\left(x^2\right) \neq \left(\mathrm{Re}(x)\right)^2\), we need to take
the real part first and then square to find the energy. Complex quantities,
moreover, are also very convenient in calculations. For example, suppose
that we have two physical quantities of the same frequency but different
phases, expressed as \(x_1(t) = X_1\cos(\omega t - \theta_1)\) and \(x_2(t) = X_2\cos(\omega t - \theta_2)\). Then,
the average value \(\langle x_1 x_2\rangle\) of the product \(x_1 x_2\) over one cycle is

\[ \langle x_1 x_2 \rangle \equiv \frac{1}{T}\int_0^T x_1(t)\,x_2(t)\,dt = \frac{1}{2}\,X_1 X_2 \cos(\theta_1 - \theta_2) \tag{1.6} \]

where the calculation of the integral, although not difficult, is tedious. With
complex notation the calculation is simpler: we express the two harmonic
quantities as \(x_1(t) = X_1 e^{i(\omega t - \theta_1)}\) and \(x_2(t) = X_2 e^{i(\omega t - \theta_2)}\) and obtain the result by
simply determining the quantity \(\mathrm{Re}\left(x_1^* x_2\right)/2\).
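As a quick numerical sanity check of Equation 1.6 and of the phasor shortcut \(\mathrm{Re}(x_1^* x_2)/2\), the sketch below compares a brute-force time average with both closed forms; the amplitudes, phases and frequency are arbitrary illustrative values, not from the text.

```python
import cmath
import math

# illustrative (arbitrary) amplitudes, phases and angular frequency
X1, X2 = 2.0, 3.0
theta1, theta2 = 0.4, 1.1
omega = 5.0
T = 2 * math.pi / omega          # one period

# brute-force time average of x1(t)*x2(t) over one cycle (rectangle rule)
N = 100000
dt = T / N
avg = sum(
    X1 * math.cos(omega * (n * dt) - theta1)
    * X2 * math.cos(omega * (n * dt) - theta2)
    for n in range(N)
) * dt / T

# closed form of Equation 1.6
closed_form = 0.5 * X1 * X2 * math.cos(theta1 - theta2)

# phasor shortcut: the e^{i*omega*t} factors cancel in x1* x2
x1 = X1 * cmath.exp(-1j * theta1)
x2 = X2 * cmath.exp(-1j * theta2)
phasor = 0.5 * (x1.conjugate() * x2).real
```

All three numbers agree, which is the point of the complex-notation shortcut: no integral needs to be evaluated at all.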
Remark 1.2
With complex notation, some authors use \(j\) instead of \(i\) and write \(e^{j\omega t}\), while
some other authors use the negative exponential notation and write \(e^{-i\omega t}\) or
\(e^{-j\omega t}\). However, since we mean to take the real part of the result, the choice
is but a convention and any expression is fine as long as we are consistent. In
any case, it should be observed that in the complex plane, the positive expo-
nential represents a counter-clockwise-rotating phasor, while the negative
exponential represents a clockwise-rotating phasor.
Figure 1.1 Beats (\(\omega_2 - \omega_1 = 0.6\)).
the crests of the first wave correspond to the troughs of the other and they
practically cancel out. This pattern repeats on and on, and the result is
the so-called phenomenon of beats shown in the figure. The maximum
amplitude occurs when \(\omega t = n\pi\) \((n = 0, 1, 2, \ldots)\), that is, every \(\pi/\omega\) seconds,
and consequently, the frequency of the beats is \(\omega/\pi = \nu_2 - \nu_1\), equal to the
(ordinary) frequency difference between the two original signals. For signals
with unequal amplitudes (say, \(A\) and \(B\)), the total amplitude does not
become zero and varies between \(A + B\) and \(|A - B|\), but in general, the
typical pattern can still be easily identified.
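The beat pattern described above can be checked against the product (envelope) form of the two-cosine sum. The sketch below uses \(\omega_2 - \omega_1 = 0.6\) to mimic Figure 1.1; the value of \(\omega_1\) itself is an arbitrary choice.

```python
import math

# two unit-amplitude cosines with close frequencies (omega2 - omega1 = 0.6,
# as in Figure 1.1; omega1 is an arbitrary illustrative value)
omega1, omega2 = 5.0, 5.6

def superposition(t):
    # direct sum of the two harmonic signals
    return math.cos(omega1 * t) + math.cos(omega2 * t)

def envelope_form(t):
    # identical product form: slow envelope times fast oscillation
    return (2 * math.cos(0.5 * (omega2 - omega1) * t)
              * math.cos(0.5 * (omega2 + omega1) * t))

# with omega = (omega2 - omega1)/2, the envelope repeats every pi/omega
# seconds, i.e. the beat frequency is the ordinary difference nu2 - nu1
beat_freq = (omega2 - omega1) / (2 * math.pi)
```

The two functions agree at every instant, and `beat_freq` is exactly \(\nu_2 - \nu_1\), the ordinary-frequency difference quoted in the text.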
1.3.2 Displacement, velocity,
acceleration and decibels
If the oscillating quantity x(t) of Equation 1.4 is a displacement, we can
recall the familiar definitions of velocity and acceleration (frequently
denoted with overdots as \(\dot{x}(t)\) and \(\ddot{x}(t)\), respectively)

\[ v(t) \equiv \dot{x}(t) = \frac{dx(t)}{dt}, \qquad a(t) \equiv \ddot{x}(t) = \frac{d^2 x(t)}{dt^2} \]

and calculate the derivatives to get

\[ v(t) = i\omega C e^{i\omega t} = \omega C e^{i(\omega t + \pi/2)} = V e^{i(\omega t + \pi/2)}, \qquad a(t) = -\omega^2 C e^{i\omega t} = \omega^2 C e^{i(\omega t + \pi)} = A e^{i(\omega t + \pi)} \tag{1.8} \]

where the second equality in Equation 1.8\(_1\) follows from Euler's relation
(Equation 1.3) by observing that \(e^{i\pi/2} = \cos(\pi/2) + i\sin(\pi/2) = i\). Similarly,
for Equation 1.8\(_2\), the same argument shows that \(e^{i\pi} = -1\).
Physically, Equations 1.8 tell us that velocity leads displacement by 90°
and that acceleration leads displacement by 180° (hence, acceleration leads
velocity by 90°). In regard to amplitudes, moreover, they show that the
maximum velocity amplitude V and maximum acceleration amplitude A are
\(V = \omega C\) and \(A = \omega^2 C = \omega V\). Clearly, these conclusions of physical nature
must not – and in fact do not – depend on whether we choose to represent
the quantities involved by means of a negative or positive exponential term.
In principle, therefore, it should not matter which one of these quantities –
displacement, velocity or acceleration – is considered, because all three pro-
vide the necessary information on amplitude and frequency content of the
vibration signal. In practice, however, it is generally not so, and some physi-
cal considerations on the nature of the vibrations to be measured and/or on
the available measuring instrumentation often make one parameter prefer-
able with respect to the others.
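As a small numerical illustration of the amplitude relations \(V = \omega C\) and \(A = \omega^2 C\) of Equation 1.8, one can differentiate a sampled displacement signal by finite differences; the values of \(C\) and \(\omega\) below are arbitrary illustrative choices.

```python
import math

C, omega = 0.002, 120.0            # e.g. a 2 mm amplitude at 120 rad/s
N = 200000
T = 2 * math.pi / omega
dt = T / N

# sampled displacement over one full cycle
x = [C * math.cos(omega * n * dt) for n in range(N + 1)]

# central-difference velocity and acceleration
v = [(x[n + 1] - x[n - 1]) / (2 * dt) for n in range(1, N)]
a = [(x[n + 1] - 2 * x[n] + x[n - 1]) / dt**2 for n in range(1, N)]

V = max(abs(s) for s in v)         # should approach omega * C
A = max(abs(s) for s in a)         # should approach omega**2 * C
```

The measured maxima reproduce \(V = \omega C\) and \(A = \omega^2 C\) to within the finite-difference error, and also make visible why acceleration emphasises high-frequency components: the amplitude is scaled by \(\omega^2\).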
The amplitudes relations above, in fact, show that the displacement tends to
give more weight to low frequency components while, conversely, acceleration
Most physical systems possessing elasticity and mass can vibrate. The
simplest models of such systems are set up by considering three types of basic
(discrete) elements: springs, viscous dampers and masses, which relate
applied forces to displacement, velocity and acceleration, respectively. Let
us consider them briefly in this order.
The restoring force that acts when a system is slightly displaced from
equilibrium is due to internal elastic forces that tend to bring the system
back to the original position. Although these forces are the manifestation
of short-range microscopic forces at the atomic/molecular level, the simplest
way to macroscopically model this behaviour is by means of a linear mass-
less spring (Figure 1.2). The assumption of zero mass assures that a force
F acting on one end is balanced by a force − F on the other end, so that the
spring undergoes an elongation equal to the difference between the dis-
placements x2 and x1 of its endpoints. For small elongations, it is generally
correct to assume a linear relation of the form
\[ F = k\left(x_2 - x_1\right) \tag{1.10} \]
where k is a constant (the spring stiffness, with units N/m) that represents
the force required to produce a unit displacement in the specified direction.
If, as it sometimes happens, one end of the spring is fixed, the displace-
ment of the other end is simply labelled x and Equation 1.10 becomes
F = −kx , where the minus sign indicates that the force is a restoring force
opposing the displacement. The reciprocal of the stiffness, \(1/k\), is also used,
and it is called flexibility or compliance.
In real-world systems, energy – where, typically, the energies of inter-
est in vibrations are the kinetic energy of motion and the potential strain
energy due to elasticity – is always dissipated (ultimately into heat) by some
means. This 'damping effect' can often be neglected, at least on a first
approach, without sacrificing much in terms of physical insight into the
problem at hand; when damping is retained, however, its simplest model is
the massless viscous damper. This is a device that relates force to velocity,
a practical example of which is a piston fitting loosely in a cylinder
filled with oil so that the oil can flow around the piston as it moves inside
the cylinder. The graphical symbol usually adopted is the dashpot shown in
Figure 1.3 for which we have a linear relation of the form
\[ F = c\left(\dot{x}_2 - \dot{x}_1\right) \tag{1.11} \]
\[ F = m\ddot{x} \tag{1.12} \]
Figure 1.4 (a) Springs connected in series. (b) Springs connected in parallel.

Figure 1.5 A few examples of local stiffness for continuous elastic elements.
\[ \frac{1}{k_{eq}} = \frac{1}{k_1} + \frac{1}{k_2} = \frac{k_1 + k_2}{k_1 k_2}, \qquad k_{eq} = k_1 + k_2 \tag{1.13} \]
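The two rules of Equation 1.13 can be wrapped in a pair of one-line helpers; the stiffness values used below are arbitrary illustrative numbers in N/m.

```python
def series(k1, k2):
    # springs in series: flexibilities (1/k) add
    return (k1 * k2) / (k1 + k2)

def parallel(k1, k2):
    # springs in parallel: stiffnesses add
    return k1 + k2

k1, k2 = 2000.0, 3000.0
k_series = series(k1, k2)        # softer than the softest spring
k_parallel = parallel(k1, k2)    # stiffer than the stiffest spring
```

Note the sanity check built into the physics: a series arrangement is always softer than either spring alone, a parallel arrangement always stiffer.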
Chapter 2

Formulating the equations of motion
2.1 INTRODUCTION
Remark 2.1
\[ \sum_{k=1}^{N} \dot{\mathbf{p}}_k = \sum_{k=1}^{N} \mathbf{F}_k^{(ext)}, \qquad \sum_{k=1}^{N} \left(\mathbf{r}_k \times \dot{\mathbf{p}}_k\right) = \sum_{k=1}^{N} \left(\mathbf{r}_k \times \mathbf{F}_k^{(ext)}\right) \tag{2.1} \]
Remark 2.2
The weak form of the third law states that the mutual forces of the two
particles are equal and opposite. In addition to this, the strong form – which is
necessary to arrive at Equation 2.1\(_2\) – states that the internal forces between
the two particles lie along the line joining them.
\[ f_i\left(\mathbf{r}_1, \ldots, \mathbf{r}_N, t\right) = 0 \qquad (i = 1, \ldots, m) \tag{2.2} \]

which mathematically represent kinematical conditions that limit the particles' motion. Examples are not hard to find: any two particles of a rigid body
must satisfy the condition \(\left(\mathbf{r}_k - \mathbf{r}_j\right)^2 - d_{kj}^2 = 0\) for all \(t\) because their mutual
distance is constant.

\[ \mathbf{r}_1 = \mathbf{r}_1\left(q_1, \ldots, q_n, t\right), \quad \ldots, \quad \mathbf{r}_N = \mathbf{r}_N\left(q_1, \ldots, q_n, t\right) \tag{2.3} \]
Remark 2.3
Let rk be the position vector of the kth particle of a system; we call virtual
displacement δ rk an imaginary infinitesimal displacement consistent with
the forces and constraints imposed on the system at the time t; this meaning
that we assume any time-dependent force or moving constraint to be ‘fro-
zen’ at time t (this justifies the term ‘virtual’ because in general δ rk does not
coincide with a real displacement drk occurring in the time dt). If the system
is in static equilibrium and \(\mathbf{F}_k\) is the total force acting on the particle,
equilibrium implies \(\mathbf{F}_k = 0\), and consequently, \(\sum_k \mathbf{F}_k\cdot\delta\mathbf{r}_k = 0\) – where we
recognise the l.h.s. as the system's total virtual work. Since, however, each force
can be written as the sum of the applied force \(\mathbf{F}_k^{(a)}\) and the constraint force
\(\mathbf{f}_k\), we have \(\sum_k \left(\mathbf{F}_k^{(a)} + \mathbf{f}_k\right)\cdot\delta\mathbf{r}_k = 0\). If, at this point, we limit ourselves to
workless constraints, that is, constraints such that \(\sum_k \mathbf{f}_k\cdot\delta\mathbf{r}_k = 0\), we arrive
at the principle of virtual work
\[ \delta W^{(a)} \equiv \sum_{k=1}^{N} \mathbf{F}_k^{(a)}\cdot\delta\mathbf{r}_k = 0 \tag{2.4} \]
stating that the condition for static equilibrium of a system with workless
constraints is that the total virtual work of the applied forces be zero. Note,
however, that Equation 2.4 does not imply \(\mathbf{F}_k^{(a)} = 0\) because the \(\delta\mathbf{r}_k\), owing
to the presence of constraints, are not all independent.
Remark 2.4
\[ \sum_{k=1}^{N}\left(\mathbf{F}_k^{(a)} - m_k\ddot{\mathbf{r}}_k\right)\cdot\delta\mathbf{r}_k = \sum_{k=1}^{N}\mathbf{F}_k^{(a)}\cdot\delta\mathbf{r}_k - \sum_{k=1}^{N} m_k\,\ddot{\mathbf{r}}_k\cdot\delta\mathbf{r}_k = 0 \tag{2.5} \]
\[ \dot{\mathbf{r}}_k = \sum_{i=1}^{n} \frac{\partial \mathbf{r}_k}{\partial q_i}\,\dot{q}_i + \frac{\partial \mathbf{r}_k}{\partial t}, \qquad \delta\mathbf{r}_k = \sum_{i=1}^{n} \frac{\partial \mathbf{r}_k}{\partial q_i}\,\delta q_i, \qquad \frac{\partial \dot{\mathbf{r}}_k}{\partial \dot{q}_i} = \frac{\partial \mathbf{r}_k}{\partial q_i}, \qquad \frac{d}{dt}\frac{\partial \mathbf{r}_k}{\partial q_i} = \frac{\partial \dot{\mathbf{r}}_k}{\partial q_i} \tag{2.6} \]
\[ \sum_{k=1}^{N} \mathbf{F}_k^{(a)}\cdot\delta\mathbf{r}_k = \sum_{i=1}^{n}\left(\sum_{k=1}^{N} \mathbf{F}_k^{(a)}\cdot\frac{\partial\mathbf{r}_k}{\partial q_i}\right)\delta q_i = \sum_{i=1}^{n} Q_i\,\delta q_i \tag{2.7} \]
where \(Q_i\) \((i = 1, \ldots, n)\) is called the ith generalised force and is defined by the
term in parentheses, while, with some lengthier manipulations, the second
term on the l.h.s. of 2.5 becomes
\[ \sum_{k=1}^{N} m_k\,\ddot{\mathbf{r}}_k\cdot\delta\mathbf{r}_k = \sum_{i=1}^{n}\left(\frac{d}{dt}\frac{\partial T}{\partial \dot{q}_i} - \frac{\partial T}{\partial q_i}\right)\delta q_i \tag{2.8} \]
\[ T = \frac{1}{2}\sum_{k=1}^{N} m_k v_k^2 = \frac{1}{2}\sum_{k=1}^{N} m_k\,\dot{\mathbf{r}}_k\cdot\dot{\mathbf{r}}_k \tag{2.9} \]
Together, Equations 2.7 and 2.8 give d’Alembert’s principle in the form
\[ \sum_{i=1}^{n}\left(\frac{d}{dt}\frac{\partial T}{\partial \dot{q}_i} - \frac{\partial T}{\partial q_i} - Q_i\right)\delta q_i = 0 \tag{2.10} \]
\[ \frac{d}{dt}\frac{\partial T}{\partial \dot{q}_i} - \frac{\partial T}{\partial q_i} = Q_i \qquad (i = 1, \ldots, n) \tag{2.11} \]
\[ Q_i = -\sum_{k=1}^{N} \nabla_k V \cdot \frac{\partial \mathbf{r}_k}{\partial q_i} = -\frac{\partial V}{\partial q_i} \tag{2.12} \]
and we can move the term \(-\partial V/\partial q_i\) to the l.h.s. of Equation 2.11. Then,
observing that \(\partial V/\partial \dot{q}_i = 0\) because in terms of generalised co-ordinates \(V\)
is a function of the form \(V = V\left(q_1, \ldots, q_n, t\right)\), we can define the Lagrangian
function (or simply Lagrangian) L of the system as
L = T − V (2.13)
and write LEs 2.11 in what we can call their standard form for holonomic
systems, that is
\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0 \qquad (i = 1, \ldots, n) \tag{2.14} \]
Remark 2.5
Finally, if some of the applied forces \(\mathbf{F}_k^{(a)}\) are conservative while some
others are not, then the generalised forces \(Q_i\) are written as \(Q_i = \tilde{Q}_i - \partial V/\partial q_i\),
where the \(\tilde{Q}_i\) are those generalised forces not derivable from a potential
function. In this case, LEs take the form
\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = \tilde{Q}_i \qquad (i = 1, \ldots, n) \tag{2.15} \]
\[ p_i = \frac{\partial L}{\partial \dot{q}_i} \qquad (i = 1, \ldots, n) \tag{2.16} \]
\[ \dot{p}_i = \frac{\partial L}{\partial q_i} \qquad (i = 1, \ldots, n) \tag{2.17} \]
\[ H\left(q, p, t\right) \equiv \sum_i p_i\,\dot{q}_i - L \tag{2.18} \]
with the understanding that all the \(\dot{q}_i\) on the r.h.s. are expressed as
functions of the variables \(q, p, t\) (see Remark 2.6 below). Then, since the
functional form of the Hamiltonian is \(H = H\left(q, p, t\right)\), its differential is
\[ dH = \sum_i\left(\frac{\partial H}{\partial q_i}\,dq_i + \frac{\partial H}{\partial p_i}\,dp_i\right) + \frac{\partial H}{\partial t}\,dt \tag{2.19a} \]
which can be compared with the differential of the r.h.s. of Equation 2.18,
that is, with
\[ d\left(\sum_i p_i\,\dot{q}_i - L\right) = \sum_i \dot{q}_i\,dp_i + \sum_i p_i\,d\dot{q}_i - \sum_i\left(\frac{\partial L}{\partial q_i}\,dq_i + \frac{\partial L}{\partial \dot{q}_i}\,d\dot{q}_i\right) - \frac{\partial L}{\partial t}\,dt \]
\[ = \sum_i \dot{q}_i\,dp_i - \sum_i \frac{\partial L}{\partial q_i}\,dq_i - \frac{\partial L}{\partial t}\,dt \tag{2.19b} \]
where the second equality is due to the fact that, owing to Equation 2.16,
the term \(\sum_i p_i\,d\dot{q}_i\) cancels out with the term \(\sum_i \left(\partial L/\partial \dot{q}_i\right) d\dot{q}_i\). The comparison
\[ \dot{p}_i = -\frac{\partial H}{\partial q_i}, \qquad \dot{q}_i = \frac{\partial H}{\partial p_i} \qquad (i = 1, \ldots, n) \tag{2.20a} \]
Remark 2.6
\[ \dot{p}_i = -\frac{\partial H}{\partial q_i} + Q_i, \qquad \dot{q}_i = \frac{\partial H}{\partial p_i} \qquad (i = 1, \ldots, n) \tag{2.20b} \]
Example 2.1
A paradigmatic example of an oscillating system is the simple pendulum
of fixed length \(l\) (Figure 2.1).
The position of the mass \(m\) is identified by the two Cartesian
co-ordinates \(x, y\), but the (scleronomic) constraint \(x^2 + y^2 - l^2 = 0\) tells us
that this is a 1-DOF system. Then, since a convenient choice for the
generalised co-ordinate is the angle \(\theta\), we have
\[ x = l\sin\theta, \quad y = -l\cos\theta \;\Rightarrow\; \dot{x} = l\dot\theta\cos\theta, \quad \dot{y} = l\dot\theta\sin\theta \;\Rightarrow\; \ddot{x} = -l\dot\theta^2\sin\theta + l\ddot\theta\cos\theta, \quad \ddot{y} = l\dot\theta^2\cos\theta + l\ddot\theta\sin\theta \tag{2.21} \]
\[ T = \frac{m\left(\dot{x}^2 + \dot{y}^2\right)}{2} = \frac{ml^2\dot\theta^2}{2}, \quad V = mgy = -mgl\cos\theta \;\Rightarrow\; L\left(\theta, \dot\theta\right) \equiv T - V = \frac{ml^2\dot\theta^2}{2} + mgl\cos\theta \tag{2.22} \]
\[ \frac{d}{dt}\frac{\partial L}{\partial \dot\theta} = ml^2\ddot\theta, \qquad \frac{\partial L}{\partial \theta} = -mgl\sin\theta \]
\[ \left(\mathbf{F}^{(a)} - m\ddot{\mathbf{r}}\right)\cdot\delta\mathbf{r} = \left(F_x - m\ddot{x}\right)\delta x + \left(F_y - m\ddot{y}\right)\delta y = 0 \tag{2.24} \]
where in this case \(F_x = 0\), \(F_y = -mg\), the accelerations \(\ddot{x}, \ddot{y}\) are given
by Equations 2.21\(_3\) and the virtual displacements, using Equations
2.21\(_1\), are \(\delta x = l\cos\theta\,\delta\theta\), \(\delta y = l\sin\theta\,\delta\theta\). Also, we can now determine
the generalised force \(Q\); since \(\mathbf{F}\cdot\delta\mathbf{r} = F_x\,\delta x + F_y\,\delta y = -mgl\sin\theta\,\delta\theta\), then
\(Q = -mgl\sin\theta\), which, being associated with the angular virtual
displacement \(\delta\theta\), is a torque.
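As a minimal numerical sketch of Example 2.1: the two derivatives above give the Lagrange equation \(ml^2\ddot\theta + mgl\sin\theta = 0\), i.e. \(\ddot\theta = -(g/l)\sin\theta\); integrating it for a small release angle should reproduce the small-angle period \(2\pi\sqrt{l/g}\). All numerical values (and the hand-rolled RK4 step) are illustrative choices, not from the text.

```python
import math

g, l = 9.81, 1.0                     # illustrative values

def deriv(theta, omega):
    # state derivatives of the pendulum equation theta'' = -(g/l) sin(theta)
    return omega, -(g / l) * math.sin(theta)

def rk4_step(theta, omega, dt):
    k1t, k1w = deriv(theta, omega)
    k2t, k2w = deriv(theta + dt / 2 * k1t, omega + dt / 2 * k1w)
    k3t, k3w = deriv(theta + dt / 2 * k2t, omega + dt / 2 * k2w)
    k4t, k4w = deriv(theta + dt * k3t, omega + dt * k3w)
    return (theta + dt / 6 * (k1t + 2 * k2t + 2 * k3t + k4t),
            omega + dt / 6 * (k1w + 2 * k2w + 2 * k3w + k4w))

# release from rest at a small angle; the angular velocity vanishes every
# half period, so its second sign change marks one full period
theta, omega, t, dt = 0.02, 0.0, 0.0, 1e-4
crossings = []
while len(crossings) < 2 and t < 10.0:
    theta_new, omega_new = rk4_step(theta, omega, dt)
    t += dt
    if (omega < 0.0 <= omega_new) or (omega > 0.0 >= omega_new):
        crossings.append(t)
    theta, omega = theta_new, omega_new

period = crossings[1]
small_angle_period = 2 * math.pi * math.sqrt(l / g)
```

For a 0.02 rad release the measured period matches the small-angle value to well within a millisecond, while larger release angles would show the familiar lengthening of the period.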
\[ T = \frac{m_1 + m_2}{2}\,l_1^2\dot\theta^2 + \frac{m_2 l_2^2\dot\phi^2}{2} + m_2 l_1 l_2\,\dot\theta\dot\phi\cos(\theta - \phi), \qquad V = -\left(m_1 + m_2\right)gl_1\cos\theta - m_2 gl_2\cos\phi \tag{2.25} \]
\[ \frac{d}{dt}\frac{\partial L'}{\partial \dot{u}_i} - \frac{\partial L'}{\partial u_i} = 0 \qquad (i = 1, \ldots, n) \tag{2.27} \]
with \(L'(u, \dot{u}, t) = L\left(q(u, t), \dot{q}(u, \dot{u}, t), t\right)\), where by this equality, we mean that
the 'new' Lagrangian \(L'\) is obtained from the 'old' by substituting for \(q_i, \dot{q}_i\)
the functions which express them in terms of the new variables \(u_i, \dot{u}_i\).
In light of the fact that the ‘invariance property’ of LEs is the reason
why the form 2.14 is particularly desirable, it could be asked if this form
also applies to cases that are more general than the one in which the forces
are conservative – such as, for instance, forces that may also depend on
time and/or on the velocities of the particles. As it turns out, the answer is
affirmative, and LEs have the standard form whenever there exists a scalar
function \(V = V\left(q, \dot{q}, t\right)\), such that the generalised forces are given by
\[ Q_i = -\frac{\partial V}{\partial q_i} + \frac{d}{dt}\frac{\partial V}{\partial \dot{q}_i} \tag{2.28} \]
\[ T = \frac{1}{2}\sum_{k=1}^{N} m_k\left(\sum_{i=1}^{n}\frac{\partial\mathbf{r}_k}{\partial q_i}\,\dot{q}_i + \frac{\partial\mathbf{r}_k}{\partial t}\right)\cdot\left(\sum_{j=1}^{n}\frac{\partial\mathbf{r}_k}{\partial q_j}\,\dot{q}_j + \frac{\partial\mathbf{r}_k}{\partial t}\right) \]
\[ = \frac{1}{2}\sum_{i,j=1}^{n}\left(\sum_{k=1}^{N} m_k\,\frac{\partial\mathbf{r}_k}{\partial q_i}\cdot\frac{\partial\mathbf{r}_k}{\partial q_j}\right)\dot{q}_i\dot{q}_j + \sum_{i=1}^{n}\left(\sum_{k=1}^{N} m_k\,\frac{\partial\mathbf{r}_k}{\partial q_i}\cdot\frac{\partial\mathbf{r}_k}{\partial t}\right)\dot{q}_i + \frac{1}{2}\sum_{k=1}^{N} m_k\left(\frac{\partial\mathbf{r}_k}{\partial t}\right)^2 \]
\[ = \frac{1}{2}\sum_{i,j=1}^{n} M_{ij}(q, t)\,\dot{q}_i\dot{q}_j + \sum_{i=1}^{n} b_i(q, t)\,\dot{q}_i + T_0 = T_2 + T_1 + T_0 \tag{2.29} \]
Passing now to LEs, we focus our attention on the lth equation and observe
that in order to determine its structure more explicitly, we need to manipu-
late the two terms on the l.h.s., that is, the terms
\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_l} = \frac{dp_l}{dt} = \frac{d}{dt}\frac{\partial T}{\partial \dot{q}_l} = \frac{d}{dt}\left(\frac{\partial T_2}{\partial \dot{q}_l} + \frac{\partial T_1}{\partial \dot{q}_l}\right), \qquad \frac{\partial L}{\partial q_l} = \frac{\partial\left(T - V\right)}{\partial q_l} = \frac{\partial T_2}{\partial q_l} + \frac{\partial T_1}{\partial q_l} + \frac{\partial T_0}{\partial q_l} - \frac{\partial V}{\partial q_l} \]
where in writing these two relations, we took into account (a) the definition 2.16 of \(p_l\), (b) the fact that, in most cases, \(V\) does not depend on the \(\dot{q}_i\)
(and therefore \(\partial V/\partial \dot{q}_l = 0\)), and (c) the fact that, in the general (non-natural)
case, \(T = T_2 + T_1 + T_0\).
\[ \frac{dp_l}{dt} = \sum_r \frac{\partial p_l}{\partial q_r}\,\dot{q}_r + \sum_r \frac{\partial p_l}{\partial \dot{q}_r}\,\ddot{q}_r + \frac{\partial p_l}{\partial t} = \sum_{r,j}\frac{\partial M_{lj}}{\partial q_r}\,\dot{q}_j\dot{q}_r + \sum_r \frac{\partial b_l}{\partial q_r}\,\dot{q}_r + \sum_j M_{lj}\,\ddot{q}_j + \sum_j \frac{\partial M_{lj}}{\partial t}\,\dot{q}_j + \frac{\partial b_l}{\partial t} \tag{2.31a} \]
\[ \frac{dp_l}{dt} = \frac{1}{2}\sum_{j,r}\left(\frac{\partial M_{lj}}{\partial q_r} + \frac{\partial M_{lr}}{\partial q_j}\right)\dot{q}_j\dot{q}_r + \sum_j M_{lj}\,\ddot{q}_j + \sum_j \frac{\partial M_{lj}}{\partial t}\,\dot{q}_j + \sum_j \frac{\partial b_l}{\partial q_j}\,\dot{q}_j + \frac{\partial b_l}{\partial t} \tag{2.31b} \]
\[ \frac{\partial T_2}{\partial q_l} + \frac{\partial T_1}{\partial q_l} + \frac{\partial T_0}{\partial q_l} - \frac{\partial V}{\partial q_l} = \frac{1}{2}\sum_{i,j}\frac{\partial M_{ij}}{\partial q_l}\,\dot{q}_i\dot{q}_j + \sum_i \frac{\partial b_i}{\partial q_l}\,\dot{q}_i + \frac{\partial T_0}{\partial q_l} - \frac{\partial V}{\partial q_l} \tag{2.32} \]
Then, putting Equations 2.31 and 2.32 together and renaming dummy
indexes as appropriate, we get the desired result, that is, the explicit struc-
ture of the lth Lagrange equation. This is
\[ \sum_j M_{lj}\,\ddot{q}_j + \frac{1}{2}\sum_{j,r}\left(\frac{\partial M_{lj}}{\partial q_r} + \frac{\partial M_{lr}}{\partial q_j} - \frac{\partial M_{rj}}{\partial q_l}\right)\dot{q}_j\dot{q}_r \]
\[ + \sum_j\left(\frac{\partial M_{lj}}{\partial t} + \frac{\partial b_l}{\partial q_j} - \frac{\partial b_j}{\partial q_l}\right)\dot{q}_j + \frac{\partial b_l}{\partial t} + \frac{\partial V}{\partial q_l} - \frac{\partial T_0}{\partial q_l} = 0 \qquad (l = 1, \ldots, n) \tag{2.33a} \]
\[ \sum_j M_{lj}\,\ddot{q}_j + \sum_{j,r}[jr, l]\,\dot{q}_j\dot{q}_r + \sum_j \frac{\partial M_{lj}}{\partial t}\,\dot{q}_j + \sum_j g_{lj}\,\dot{q}_j + \frac{\partial b_l}{\partial t} + \frac{\partial\left(V - T_0\right)}{\partial q_l} = 0 \tag{2.33b} \]
when one introduces the Christoffel symbol of the first kind \([jr, l]\) and the
skew-symmetric coefficients \(g_{lj}\) (i.e. such that \(g_{lj} = -g_{jl}\)) defined, respectively, as
\[ [jr, l] = \frac{1}{2}\left(\frac{\partial M_{lj}}{\partial q_r} + \frac{\partial M_{lr}}{\partial q_j} - \frac{\partial M_{rj}}{\partial q_l}\right), \qquad g_{lj} = \frac{\partial b_l}{\partial q_j} - \frac{\partial b_j}{\partial q_l} \tag{2.34} \]
\[ g_{lj} = \frac{\partial^2 T_1}{\partial q_j\,\partial\dot{q}_l} - \frac{\partial^2 T_1}{\partial q_l\,\partial\dot{q}_j}, \qquad \frac{\partial b_l}{\partial t} = \frac{\partial^2 T_1}{\partial t\,\partial\dot{q}_l} \tag{2.35} \]
\[ \frac{dL}{dt} = \sum_i\left(\frac{\partial L}{\partial q_i}\,\dot{q}_i + \frac{\partial L}{\partial \dot{q}_i}\,\ddot{q}_i\right) = \sum_i\left[\left(\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \tilde{Q}_i\right)\dot{q}_i + \frac{\partial L}{\partial \dot{q}_i}\,\ddot{q}_i\right] = \sum_i\left[\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\,\dot{q}_i\right) - \tilde{Q}_i\,\dot{q}_i\right] \tag{2.37} \]
\[ \frac{d}{dt}\left(\sum_i \frac{\partial L}{\partial \dot{q}_i}\,\dot{q}_i - L\right) = \sum_i \tilde{Q}_i\,\dot{q}_i \tag{2.38} \]
h = T2 − T0 + V = T2 + U (2.39)
Finally, note that the term T1 does not appear on the r.h.s. of Equation 2.39
because – as pointed out at the end of the preceding subsection 2.4.2 – the
gyroscopic forces do no work on the system.
Remark 2.7
ii. Clearly, the definition of h applies even in the case in which L depends
explicitly on \(t\). In this more general case, we have \(h = h(q, \dot{q}, t)\) and
Equation 2.38 reads
\[ \frac{dh(q, \dot{q}, t)}{dt} = \sum_i \tilde{Q}_i\,\dot{q}_i - \frac{\partial L}{\partial t} \tag{2.40} \]
\[ \frac{dH}{dt} = \sum_i\left(\frac{\partial H}{\partial q_i}\,\dot{q}_i + \frac{\partial H}{\partial p_i}\,\dot{p}_i\right) = \sum_i\left(-\dot{p}_i\dot{q}_i + \dot{q}_i\dot{p}_i\right) = 0 \tag{2.41} \]
forces acting on the particle can be written as (Chapter 1) \(\mathbf{F}_k^{(E)} = -k_k\mathbf{r}_k\) and
\(\mathbf{F}_k^{(V)} = -c_k\dot{\mathbf{r}}_k\), where \(k_k, c_k\) are two non-negative constants called the (kth)
stiffness and viscous coefficients, respectively. Let us now consider elastic
forces first. If we introduce the scalar function
\[ V^{(E)}\left(\mathbf{r}_1, \ldots, \mathbf{r}_N\right) = \frac{1}{2}\sum_{k=1}^{N} k_k\,\mathbf{r}_k\cdot\mathbf{r}_k \tag{2.42} \]
Example 2.3
The elastic potential energy is also called strain energy. In the sim-
plest example of a spring that is stretched or compressed (within its
linear range), the force–displacement relation is linear and we have
F = −kx, where we assume x = 0 to be the undeformed position.
Dispensing with the minus sign, which is inessential for our present
purposes because x may be a compression or an elongation, the work
done by this force from x = 0 to x equals the strain energy and we have
\(V^{(E)} = \int_0^x kr\,dr = kx^2/2\). Consequently, we can write \(V^{(E)} = Fx/2\), a
formula known as Clapeyron's law, which states that the strain energy
is one-half the product Fx. In this light, the example we consider here
is the calculation of the strain energy of a rod of length L and cross-
sectional area A under the action of a longitudinal force F(x, t). Calling
u(x, t) and ε (x, t), respectively, the displacement and strain at point x
and time \(t\), the infinitesimal element \(dx\) of the rod undergoes a deformation \(\left(\partial u/\partial x\right)dx = \varepsilon(x, t)\,dx\) and the strain energy of the volume element
\(A\,dx\), by Clapeyron's law, is \(dV^{(E)} = \varepsilon F\,dx/2\). Then, from the definition
\(\sigma(x, t) = F/A\) of axial stress and the assumption to remain within the
elastic range (so that \(\sigma = E\varepsilon\), where \(E\) is Young's modulus), we are led
to \(F = EA\varepsilon\), and consequently, \(dV^{(E)} = \varepsilon^2 EA\,dx/2\). Integrating over the
rod length, we obtain the rod strain energy
\[ V^{(E)} = \frac{1}{2}\int_0^L EA\,\varepsilon^2\,dx = \frac{1}{2}\int_0^L EA\left(\frac{\partial u}{\partial x}\right)^2 dx \tag{2.43} \]
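Equation 2.43 and Clapeyron's law can be cross-checked on the simplest case: a uniform rod carrying a constant axial end force, for which \(u(x) = Fx/EA\). The material and load values below are arbitrary illustrative numbers (SI units).

```python
E = 210e9     # Young's modulus, Pa (steel-like, illustrative)
A = 1e-4      # cross-sectional area, m^2
L = 2.0       # rod length, m
F = 1e4       # constant axial end force, N

# static solution: u(x) = F*x/(E*A), so the strain du/dx is uniform
def strain(x):
    return F / (E * A)

# V_E = (1/2) * integral of E*A*(du/dx)^2 dx, by the trapezoidal rule
N = 1000
dx = L / N
integrand = [E * A * strain(i * dx) ** 2 for i in range(N + 1)]
V_E = 0.5 * dx * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))

# Clapeyron's law: one-half of force times end displacement
u_end = F * L / (E * A)
V_clapeyron = 0.5 * F * u_end
```

Both routes give \(F^2 L / 2EA\), as they must for a linear force-displacement relation.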
\[ D\left(\dot{\mathbf{r}}_1, \ldots, \dot{\mathbf{r}}_N\right) = \frac{1}{2}\sum_{k=1}^{N} c_k\,\dot{\mathbf{r}}_k\cdot\dot{\mathbf{r}}_k \tag{2.44} \]
which, denoting by \(\nabla_{\dot{k}}\) the gradient with respect to the kth velocity
variables \(\dot{x}_k, \dot{y}_k, \dot{z}_k\), is such that \(\mathbf{F}_k^{(V)} = -\nabla_{\dot{k}} D\). The dissipative nature of these
forces is rather evident; since \(D\) is non-negative and Equation 2.44 gives
\(2D = -\sum_k \mathbf{F}_k^{(V)}\cdot\dot{\mathbf{r}}_k\), then \(2D\) is the rate of energy dissipation due to these
forces.
With respect to LEs, on the other hand, we can recall the relation
\(\partial\dot{\mathbf{r}}_k/\partial\dot{q}_i = \partial\mathbf{r}_k/\partial q_i\) given in Section 2.3 and determine that the ith
generalised viscous force is
\[ Q_i^{(V)} = -\sum_{k=1}^{N} \nabla_{\dot{k}} D\cdot\frac{\partial\mathbf{r}_k}{\partial q_i} = -\sum_{k=1}^{N} \nabla_{\dot{k}} D\cdot\frac{\partial\dot{\mathbf{r}}_k}{\partial\dot{q}_i} = -\frac{\partial D}{\partial\dot{q}_i} \tag{2.45} \]
\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} + \frac{\partial D}{\partial \dot{q}_i} = 0 \tag{2.46} \]
A final point worthy of mention is that whenever the transformation 2.3
does not involve time explicitly, the function \(D\left(q, \dot{q}\right)\) has the form
\[ D(q, \dot{q}) = \frac{1}{2}\sum_{i,j=1}^{n} C_{ij}\,\dot{q}_i\dot{q}_j, \qquad C_{ij}(q) = \sum_{k=1}^{N} c_k\,\frac{\partial\mathbf{r}_k}{\partial q_i}\cdot\frac{\partial\mathbf{r}_k}{\partial q_j} \tag{2.47} \]
(with Cij = C ji) and, just like the kinetic energy, is a homogeneous function
of order two in the generalised velocities to which Euler’s theorem (Remark
2.8(i)) applies. Then, using this theorem together with Equation 2.38, we
are led to
\[ \frac{dh}{dt} = \sum_i Q_i^{(V)}\,\dot{q}_i = -\sum_i \frac{\partial D}{\partial \dot{q}_i}\,\dot{q}_i = -2D \tag{2.48} \]
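The energy balance \(dh/dt = -2D\) of Equation 2.48 can be verified numerically on a damped SDOF oscillator \(m\ddot{x} + c\dot{x} + kx = 0\), for which \(h\) is the total mechanical energy and \(D = c\dot{x}^2/2\); the parameter values and integration step below are illustrative.

```python
m, c, k = 1.0, 0.3, 25.0          # illustrative mass, damping, stiffness

def deriv(x, v):
    return v, -(c * v + k * x) / m

def rk4_step(x, v, dt):
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + dt / 2 * k1x, v + dt / 2 * k1v)
    k3x, k3v = deriv(x + dt / 2 * k2x, v + dt / 2 * k2v)
    k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def energy(x, v):
    # h = T + V for this natural system
    return 0.5 * m * v**2 + 0.5 * k * x**2

x, v = 0.1, 0.0                   # released from rest
dt, steps = 1e-4, 50000           # 5 s of motion
E0 = energy(x, v)
dissipated = 0.0
for _ in range(steps):
    x_new, v_new = rk4_step(x, v, dt)
    v_mid = 0.5 * (v + v_new)
    dissipated += c * v_mid**2 * dt   # accumulate integral of 2D dt
    x, v = x_new, v_new
E1 = energy(x, v)
```

The drop in mechanical energy `E0 - E1` matches the accumulated `dissipated` work to within the quadrature error, which is the content of Equation 2.48.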
Remark 2.8
i. In the more general case in which the system is acted upon by mono-
genic, viscous and non-monogenic forces, LEs are written as
\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} + \frac{\partial D}{\partial \dot{q}_i} = Q_i \tag{2.49a} \]
\[ \dot{p}_i = -\frac{\partial H}{\partial q_i} - \frac{\partial D}{\partial \dot{q}_i} + Q_i, \qquad \dot{q}_i = \frac{\partial H}{\partial p_i} \qquad (i = 1, \ldots, n) \tag{2.49b} \]
\[ \sum_{j=1}^{n} a_{lj}\,dq_j + a_l\,dt = 0 \;\Rightarrow\; \sum_{j=1}^{n} a_{lj}\,\delta q_j = 0 \qquad (l = 1, \ldots, m) \tag{2.50} \]
where the first relation is the differential form of the constraints, while the
second is the relation that, on account of 2.50\(_1\), must hold for the virtual
displacements \(\delta q_j\). Now, since the constraints 2.50\(_1\) are non-integrable and
cannot be used to eliminate \(m\) co-ordinates/variables in favour of a remaining set of \(n - m\) co-ordinates, we must tackle the problem by retaining all
the variables. By proceeding as in Section 2.3, we arrive at Equation 2.10,
but now the \(\delta q_j\) are not independent. If, however, we multiply each one of
Equations 2.50\(_2\) by an arbitrary factor \(\lambda_l\) (called Lagrange multiplier) and
form the sum
\[ \sum_{l=1}^{m} \lambda_l \sum_{j=1}^{n} a_{lj}\,\delta q_j = \sum_{j=1}^{n}\left(\sum_{l=1}^{m} \lambda_l a_{lj}\right)\delta q_j = 0 \]
\[ \sum_{j=1}^{n}\left(\frac{d}{dt}\frac{\partial T}{\partial \dot{q}_j} - \frac{\partial T}{\partial q_j} - Q_j - \sum_{l=1}^{m}\lambda_l a_{lj}\right)\delta q_j = 0 \tag{2.51} \]
and Equation 2.51 reduces to the sum of the first n − m terms, where now
the remaining δ q j are independent. Consequently, we obtain
\[ \frac{d}{dt}\frac{\partial T}{\partial \dot{q}_j} - \frac{\partial T}{\partial q_j} = Q_j + \sum_{l=1}^{m}\lambda_l a_{lj} \qquad (j = 1, \ldots, n - m) \tag{2.53} \]
Together, Equations 2.52 and 2.53 form the complete set of \(n\) LEs for
non-holonomic systems. But this is not all: these equations and the \(m\)
equations of constraints (expressed in the form of the first-order differential
equations \(\sum_j a_{lj}\dot{q}_j + a_l = 0\)) provide \(n + m\) equations to be solved for the \(n + m\)
unknowns \(q_1, \ldots, q_n, \lambda_1, \ldots, \lambda_m\).
In particular, if the generalised forces \(Q_j\) – which, we recall, correspond
to applied forces – are conservative (or monogenic), then the potential \(V\)
(or the velocity-dependent potential \(V(q, \dot{q}, t)\) of Equation 2.28) is part of
the Lagrangian, and we obtain the standard non-holonomic
form of Lagrange equations
\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_j} - \frac{\partial L}{\partial q_j} = \sum_{l=1}^{m}\lambda_l a_{lj} \qquad (j = 1, \ldots, n) \tag{2.54} \]
Equations 2.53 or 2.54 also suggest the physical meaning of the \(\lambda\)-multipliers:
the r.h.s. terms (of LEs) in which they appear are the generalised forces
of constraints. The method, therefore, does not eliminate the unknown con-
straint forces from the problem but provides them as part of the solution.
Remark 2.9
2.5 HAMILTON'S PRINCIPLE
\[ \delta S = \delta\int_{t_0}^{t_1} L\left(q, \dot{q}, t\right) dt = 0 \tag{2.56} \]

where \(S \equiv \int_{t_0}^{t_1} L\,dt\) is called the action – or action integral or action
functional – \(L\) is the system's Lagrangian \(L = T - V\) and \(t_0, t_1\) are two fixed
instants of time. In words, the principle may be stated by saying that the
actual motion of the system from time t0 to t1 is such as to render the
action \(S\) stationary – in general, a minimum – with respect to the
functions \(q_i(t)\) \((i = 1, \ldots, n)\) for which the initial and final configurations \(q_i(t_0)\)
and \(q_i(t_1)\) are prescribed.
Remark 2.10
quite similar for functionals, and the analogy extends also to the fact that
the character of the extremum can be judged on the basis of the sign of the
second variation \(\delta^2 S\) (although in most applications of mechanics, the
stationary path turns out to be a minimum of \(S\)). As for terminology, it is quite
common (with a slight abuse of language) to refer to δ S = 0 as a principle
of least action.
\[ \delta S = \int_{t_0}^{t_1}\left[L\left(q + \delta q, \dot{q} + \delta\dot{q}, t\right) - L\left(q, \dot{q}, t\right)\right] dt = \int_{t_0}^{t_1}\left(\frac{\partial L}{\partial q}\,\delta q + \frac{\partial L}{\partial \dot{q}}\,\delta\dot{q}\right) dt \]
\[ = \left[\frac{\partial L}{\partial \dot{q}}\,\delta q\right]_{t_0}^{t_1} + \int_{t_0}^{t_1}\left(\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}}\right)\delta q\,dt \tag{2.57} \]
\[ \frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = 0 \tag{2.58} \]
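Hamilton's principle lends itself to a direct numerical illustration: discretise the action of a unit-mass, unit-stiffness oscillator (\(L = \dot{x}^2/2 - x^2/2\)) and compare the true path with endpoint-preserving perturbations of it. On an interval shorter than half a period the stationary path is a minimum. The discretisation and all values below are illustrative choices, not from the text.

```python
import math

t0, t1, N = 0.0, 1.0, 2000           # interval shorter than half a period
dt = (t1 - t0) / N
times = [t0 + n * dt for n in range(N + 1)]

def action(path):
    # discretised action: sum of (T - V) dt over the time steps
    S = 0.0
    for n in range(N):
        v = (path[n + 1] - path[n]) / dt      # finite-difference velocity
        x = 0.5 * (path[n] + path[n + 1])     # midpoint position
        S += (0.5 * v**2 - 0.5 * x**2) * dt
    return S

# true motion x(t) = sin(t) satisfies xdd = -x with the given endpoints
true_path = [math.sin(t) for t in times]

def perturbed(eps):
    # add eps*sin(pi*(t - t0)/(t1 - t0)), which vanishes at both endpoints
    return [x + eps * math.sin(math.pi * (t - t0) / (t1 - t0))
            for x, t in zip(true_path, times)]

S_true = action(true_path)
S_up = action(perturbed(0.05))
S_down = action(perturbed(-0.05))
```

Both perturbed paths yield a larger action than the true one, and the near-equality of `S_up` and `S_down` reflects the vanishing of the first variation at the stationary path.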
\[ \sum_k \int_{t_0}^{t_1} \frac{d}{dt}\left(m_k\dot{\mathbf{r}}_k\right)\cdot\delta\mathbf{r}_k\,dt = \left[\sum_k m_k\dot{\mathbf{r}}_k\cdot\delta\mathbf{r}_k\right]_{t_0}^{t_1} - \int_{t_0}^{t_1}\sum_k m_k\,\dot{\mathbf{r}}_k\cdot\delta\dot{\mathbf{r}}_k\,dt \]
\[ = [\ldots]_{t_0}^{t_1} - \int_{t_0}^{t_1}\delta\left(\sum_k \frac{m_k\,\dot{\mathbf{r}}_k\cdot\dot{\mathbf{r}}_k}{2}\right) dt = [\ldots]_{t_0}^{t_1} - \int_{t_0}^{t_1}\delta T\,dt \tag{2.59} \]
where we first integrated by parts, took into account the fact that the
\(\delta\)-operator commutes with the time derivative, and then used the relation
\(\delta\left(m_k\dot{\mathbf{r}}_k\cdot\dot{\mathbf{r}}_k/2\right) = m_k\dot{\mathbf{r}}_k\cdot\delta\dot{\mathbf{r}}_k\), observing that \(\delta\left(m_k\dot{\mathbf{r}}_k\cdot\dot{\mathbf{r}}_k/2\right) = \delta T_k\) (thereby
implying that \(\sum_k \delta T_k\) is the variation \(\delta T\) of the system's total kinetic energy).
If we now further assume that at the instants \(t_0, t_1\), the position of the
system is given, then \(\delta\mathbf{r}_k\left(t_0\right) = \delta\mathbf{r}_k\left(t_1\right) = 0\) and the boundary term in square
brackets vanishes. Finally, putting the pieces together, we arrive at the
expressions
\[ \int_{t_0}^{t_1}\left(\delta W^{(a)} + \delta T\right) dt = 0, \qquad \int_{t_0}^{t_1}\delta L\,dt = 0 \tag{2.60} \]
where the first expression is often called the extended Hamilton’s principle,
while the second follows from the first when all the applied forces are
derivable from a potential function \(V(q, t)\), so that \(\delta W^{(a)} = -\delta V\). Also, we
notice that since \(\delta W^{(a)}\) and \(\delta T\) (and \(\delta L\)) are independent of the choice of
co-ordinates, we can just as well see the principles in terms of generalised
\[ \delta\int_{t_0}^{t_1} L\,dt = 0 \tag{2.61} \]
which is, as mentioned at the beginning of this section, the classical form
of Hamilton’s principle. In this respect, however, it is important to point
out that the classical form 2.61 does not apply to non-holonomic systems, because in this case the shift from 2.60₂ to 2.61 cannot be made. For non-holonomic systems, in fact, it can be shown that the varied path is not in general a geometrically possible path, meaning that the system cannot travel along the varied path without violating the constraints. Consequently,
the correct equations of motion for non-holonomic systems are obtained by
means of Equations 2.60, which, it should be noticed, are not variational
principles in the strict sense of the calculus of variations, but merely inte-
grated forms of d’Alembert’s principle. Detailed discussions of these aspects
can be found in Greenwood (2003), Lurie (2002), Rosenberg (1977) and
in the classical books by Lanczos (1970) and Pars (1965). In any case, the
great advantage of Hamilton’s principle (in the appropriate form, depend-
ing on the system) is that it can be used to derive the equations of motion
of a very large class of systems, either discrete or continuous. In the latter case, moreover, we will see in the next section that the principle also automatically provides the appropriate spatial boundary conditions.
Remark 2.11
\[
S = \int_{t_0}^{t_1} L\left( x_1, x_2, t, u, \partial_t u, \partial_1 u, \partial_2 u \right) dt
= \int_{t_0}^{t_1} \!\! \int_R \Lambda\left( x_1, x_2, t, u, \partial_t u, \partial_1 u, \partial_2 u \right) dx \, dt
\tag{2.63}
\]
\[
\int_{t_0}^{t_1} \!\! \int_0^l \left( \frac{\partial \Lambda}{\partial u}\,\delta u
+ \frac{\partial \Lambda}{\partial \dot u}\,\delta \dot u
+ \frac{\partial \Lambda}{\partial u'}\,\delta u' \right) dx \, dt = 0
\tag{2.64a}
\]
where the variation δ u(x, t) is required to be zero at the initial and final
times t0 , t1, i.e.
δ u ( x, t0 ) = δ u ( x, t1 ) = 0 (2.64b)
under the assumption that the δ -operator commutes with both the time-
and spatial derivatives. For the second term, the integration by parts is with
respect to time, and we get
\[
\int_0^l \left( \int_{t_0}^{t_1} \frac{\partial \Lambda}{\partial \dot u}\,\delta \dot u \, dt \right) dx
= \int_0^l \left( \left[ \frac{\partial \Lambda}{\partial \dot u}\,\delta u \right]_{t_0}^{t_1}
- \int_{t_0}^{t_1} \frac{\partial}{\partial t}\frac{\partial \Lambda}{\partial \dot u}\,\delta u \, dt \right) dx
= - \int_0^l \int_{t_0}^{t_1} \frac{\partial}{\partial t}\frac{\partial \Lambda}{\partial \dot u}\,\delta u \, dt \, dx
\tag{2.65a}
\]
where in writing the last equality, we took the conditions 2.64b into
account. For the third term of 2.64a, on the other hand, the integration by
parts is with respect to x and we get
\[
\int_{t_0}^{t_1} \left( \int_0^l \frac{\partial \Lambda}{\partial u'}\,\delta u' \, dx \right) dt
= \int_{t_0}^{t_1} \left( \left[ \frac{\partial \Lambda}{\partial u'}\,\delta u \right]_0^l
- \int_0^l \frac{\partial}{\partial x}\frac{\partial \Lambda}{\partial u'}\,\delta u \, dx \right) dt
\tag{2.65b}
\]
Using Equations 2.65a and 2.65b in 2.64a, the result is that we have trans-
formed the l.h.s. of Equation 2.64a into
\[
\int_{t_0}^{t_1} \!\! \int_0^l \left( \frac{\partial \Lambda}{\partial u}
- \frac{\partial}{\partial t}\frac{\partial \Lambda}{\partial \dot u}
- \frac{\partial}{\partial x}\frac{\partial \Lambda}{\partial u'} \right) \delta u \, dx \, dt
+ \int_{t_0}^{t_1} \left[ \frac{\partial \Lambda}{\partial u'}\,\delta u \right]_0^l dt
\tag{2.66}
\]

If we now restrict attention to variations that vanish at the two boundary points, that is, such that
δ u (0, t ) = δ u ( l , t ) = 0 (2.67)
(note that if δ S vanishes for all admissible δ u(x, t), it certainly vanishes for
all admissible δ u(x, t) satisfying the extra condition 2.67), then the second
integral in 2.66 is zero and only the double integral remains. But then,
owing to the arbitrariness of δ u, the double integral is zero only if
\[
\frac{\partial \Lambda}{\partial u}
- \frac{\partial}{\partial t}\frac{\partial \Lambda}{\partial \dot u}
- \frac{\partial}{\partial x}\frac{\partial \Lambda}{\partial u'} = 0
\tag{2.68a}
\]
which must hold for 0 ≤ x ≤ l and all t. This is the Lagrange equation of
motion of the system.
Now we relax the condition 2.67; since the actual u(x, t) must satisfy Equation 2.68a, the double integral in 2.66 vanishes and Hamilton's principle tells us that we must have
\[
\left. \frac{\partial \Lambda}{\partial u'}\,\delta u \right|_{x=0} = 0, \qquad
\left. \frac{\partial \Lambda}{\partial u'}\,\delta u \right|_{x=l} = 0
\tag{2.68b}
\]
which provide the possible boundary conditions of the problem. So, for
example, if u is given at x = 0, then δu(0, t) = 0 and the first boundary condition of 2.68b is automatically satisfied. If, however, u is not pre-assigned at x = 0, then Equation 2.68b₁ tells us that we must have ∂Λ/∂u′|_{x=0} = 0. Clearly, the same applies to the other end point x = l. As for terminology, we note the
following: since the boundary condition δ u(0, t) = 0 is imposed by the geom-
etry of the problem, it is common to call it a geometric (or imposed) bound-
ary condition. On the other hand, the boundary condition ∂ Λ ∂ u′ x = 0 = 0
depends on Λ – that is, on the nature of the system’s kinetic and potential
energies, and consequently on inertial effects and internal forces – and for
this reason, it is referred to as a natural (or force) boundary condition.
Remark 2.12
For a system with two spatial dimensions x₁, x₂, the Lagrange equation corresponding to 2.68a is

\[
\frac{\partial \Lambda}{\partial u}
- \frac{\partial}{\partial t}\frac{\partial \Lambda}{\partial (\partial_t u)}
- \sum_{j=1}^{2} \frac{\partial}{\partial x_j}\frac{\partial \Lambda}{\partial (\partial_j u)} = 0
\tag{2.69}
\]

If Λ depends also on second-order spatial derivatives of u, further terms of the form 2.70a must be added to the l.h.s.; for a 1-dimensional system (e.g. beams), this is the single term

\[
\frac{\partial^2}{\partial x^2}\frac{\partial \Lambda}{\partial\left( \partial_{xx}^2 u \right)}
\tag{2.70b}
\]
On the other hand, for a 2-dimensional system (e.g. plates),
Equation 2.70a gives the four terms
\[
\frac{\partial^2}{\partial x^2}\frac{\partial \Lambda}{\partial\left( \partial_{xx}^2 u \right)}
+ \frac{\partial^2}{\partial y^2}\frac{\partial \Lambda}{\partial\left( \partial_{yy}^2 u \right)}
+ \frac{\partial^2}{\partial x\, \partial y}\frac{\partial \Lambda}{\partial\left( \partial_{xy}^2 u \right)}
+ \frac{\partial^2}{\partial y\, \partial x}\frac{\partial \Lambda}{\partial\left( \partial_{yx}^2 u \right)}
\tag{2.70c}
\]
Example 2.4
As an application of the considerations above, we obtain here the equa-
tion of motion for the longitudinal (or axial) vibrations of a bar with
length l, mass per unit length m̂(x), cross-sectional area A(x) and axial
stiffness EA(x) – a typical continuous 1-dimensional system. If we let
the function u(x, t) represent the bar’s axial displacement at point
x (0 < x < l ) at time t, it is not difficult to show (see, for example, Petyt
(1990)) that the Lagrangian density for this system is
\[
\Lambda\left( x, \partial_t u, \partial_x u \right)
= \frac{1}{2}\, \hat m \left( \frac{\partial u}{\partial t} \right)^2
- \frac{1}{2}\, EA \left( \frac{\partial u}{\partial x} \right)^2
\tag{2.71}
\]
where, respectively, the two terms on the r.h.s. are the kinetic and
potential energy densities. Then, noting that the various terms of
Equation 2.68a are in this case explicitly given by
\[
\frac{\partial \Lambda}{\partial u} = 0, \qquad
\frac{\partial}{\partial t}\frac{\partial \Lambda}{\partial (\partial_t u)} = \hat m\, \frac{\partial^2 u}{\partial t^2}, \qquad
\frac{\partial}{\partial x}\frac{\partial \Lambda}{\partial (\partial_x u)}
= -\frac{\partial}{\partial x}\left( EA\, \frac{\partial u}{\partial x} \right)
\tag{2.72}
\]
we obtain the equation of motion

\[
\frac{\partial}{\partial x}\left( EA\, \frac{\partial u}{\partial x} \right)
- \hat m\, \frac{\partial^2 u}{\partial t^2} = 0
\tag{2.73a}
\]

together with the boundary conditions

\[
\left. EA\, \frac{\partial u}{\partial x}\, \delta u \right|_{x=0} = 0, \qquad
\left. EA\, \frac{\partial u}{\partial x}\, \delta u \right|_{x=l} = 0
\tag{2.73b}
\]
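As a numerical check of Equations 2.73a and b, the sketch below (all values hypothetical: a uniform fixed-free bar with EA = m̂ = l = 1, not data from the text) discretises u_tt = (EA/m̂)u_xx by finite differences and compares the lowest frequencies with the classical fixed-free values ω_n = (2n − 1)πc/2l:

```python
import numpy as np

# Hypothetical uniform fixed-free bar: EA, m_hat, l are illustrative values
EA, m_hat, l, N = 1.0, 1.0, 1.0, 400
c = np.sqrt(EA / m_hat)          # longitudinal wave speed
h = l / N

# Finite-difference matrix for u'' acting on the unknowns u_1..u_N
# (u_0 = 0 at the fixed end; ghost node u_{N+1} = u_{N-1} enforces u'(l) = 0)
D2 = np.zeros((N, N))
for i in range(N):
    D2[i, i] = -2.0
    if i > 0:
        D2[i, i - 1] = 1.0
    if i < N - 1:
        D2[i, i + 1] = 1.0
D2[N - 1, N - 2] = 2.0           # free-end (natural) boundary condition

# u_tt = c^2 u_xx  ->  omega^2 = -(c/h)^2 * eigenvalues of D2
lam_fd = np.linalg.eigvals(D2)
omega = np.sort(np.sqrt(-(c / h)**2 * lam_fd.real))
print(omega[:3])                 # analytic values: pi/2, 3*pi/2, 5*pi/2
```

The agreement improves as O(h²) when N is increased, consistent with the second-order differences used.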
Example 2.5
Consider now the transverse (or flexural) vibration of the same bar
of Example 2.4, where here u(x, t) represents the transverse displace-
ment of the bar whose flexural stiffness is EI (x), where E is the Young’s
modulus and I (x) is the cross-sectional moment of inertia. Since the
Lagrangian density is now
\[
\Lambda\left( x, \partial_t u, \partial_{xx} u \right)
= \frac{1}{2}\, \hat m \left( \frac{\partial u}{\partial t} \right)^2
- \frac{1}{2}\, EI \left( \frac{\partial^2 u}{\partial x^2} \right)^2
\tag{2.74}
\]
the only new term to be evaluated is

\[
\frac{\partial^2}{\partial x^2}\frac{\partial \Lambda}{\partial\left( \partial_{xx}^2 u \right)}
= -\frac{\partial^2}{\partial x^2}\left( EI\, \frac{\partial^2 u}{\partial x^2} \right)
\]
and it is now left to the reader to calculate the other terms, put the
pieces together and show that the result is the fourth-order differential
equation of motion
\[
\frac{\partial^2}{\partial x^2}\left( EI\, \frac{\partial^2 u}{\partial x^2} \right)
+ \hat m\, \frac{\partial^2 u}{\partial t^2} = 0
\tag{2.75}
\]
\[
\frac{1}{2} \int_0^l \!\! \int_{t_0}^{t_1} \hat m\, \delta\!\left( \dot u^2 \right) dt \, dx
= \int_0^l \!\! \int_{t_0}^{t_1} \hat m\, \dot u\, \partial_t (\delta u) \, dt \, dx
= \int_0^l \left[ \hat m\, \dot u\, \delta u \right]_{t_0}^{t_1} dx
- \int_0^l \!\! \int_{t_0}^{t_1} \hat m\, \ddot u\, \delta u \, dt \, dx
= - \int_{t_0}^{t_1} \!\! \int_0^l \hat m\, \ddot u\, \delta u \, dx \, dt
\tag{2.76}
\]
\[
\int_{t_0}^{t_1} \left[ EI\, u'' \, \delta(u') - \left( EI\, u'' \right)' \delta u \right]_0^l dt
+ \int_{t_0}^{t_1} \!\! \int_0^l \left( EI\, u'' \right)'' \delta u \, dx \, dt
\tag{2.77}
\]
where the primes denote the spatial derivatives. Putting Equations 2.76
and 2.77 together and reverting to the standard notation for deriva-
tives, we arrive at Hamilton’s principle in the form
\[
\int_{t_0}^{t_1} \left\{ \left[ \frac{\partial}{\partial x}\left( EI\, \frac{\partial^2 u}{\partial x^2} \right) \delta u \right]_0^l
- \left[ EI\, \frac{\partial^2 u}{\partial x^2}\, \delta\!\left( \frac{\partial u}{\partial x} \right) \right]_0^l \right\} dt
- \int_{t_0}^{t_1} \!\! \int_0^l \left[ \frac{\partial^2}{\partial x^2}\left( EI\, \frac{\partial^2 u}{\partial x^2} \right)
+ \hat m\, \frac{\partial^2 u}{\partial t^2} \right] \delta u \, dx \, dt = 0
\]
which gives the equation of motion 2.75 and the four boundary
conditions
\[
\left[ \frac{\partial}{\partial x}\left( EI\, \frac{\partial^2 u}{\partial x^2} \right) \delta u \right]_{x=0}^{x=l} = 0, \qquad
\left[ EI\, \frac{\partial^2 u}{\partial x^2}\, \delta\!\left( \frac{\partial u}{\partial x} \right) \right]_{x=0}^{x=l} = 0
\tag{2.78}
\]
where it should be noticed that four is now the correct number because
Equation 2.75 is a fourth-order differential equation.
So, for example, if the bar is clamped at both ends, then the geom-
etry of the system imposes the four geometric boundary conditions
\[
u(0, t) = u(l, t) = 0, \qquad
\left. \frac{\partial u}{\partial x} \right|_{x=0} = \left. \frac{\partial u}{\partial x} \right|_{x=l} = 0
\tag{2.79}
\]
If, on the other hand, the bar is simply supported (pinned) at both ends, the natural boundary conditions are

\[
\left. EI\, \frac{\partial^2 u}{\partial x^2} \right|_{x=0} = 0, \qquad
\left. EI\, \frac{\partial^2 u}{\partial x^2} \right|_{x=l} = 0
\tag{2.80}
\]
which physically mean that the bending moment must be zero at both
ends. This is also the case at a free end, where together with zero bend-
ing moment, we must have the (natural) boundary condition of zero
transverse shear force. Denoting by primes the derivatives with respect to x, this condition on the shear force is expressed by (EI u″)′ = 0.
2.6 SMALL-AMPLITUDE OSCILLATIONS
Remark 2.13
We also recall here from basic physics that the compound (or physical)
pendulum is a rigid body of mass m pivoted at a point O distant d from its
centre of mass G. Since the body is free to rotate under the action of gravity, the equation of motion is θ̈ + (Wd/J_O) sin θ = 0, where W = mg is the weight of the body and J_O is its moment of inertia about a horizontal axis passing through the centre of rotation O. Then, for small oscillations, we get T_CP = 2π√(J_O/Wd) and ω_CP = √(Wd/J_O), thus implying that, in terms of period and frequency of oscillation, our compound pendulum is equivalent to a simple pendulum of length L = J_O/md (sometimes called the reduced length of the compound pendulum).
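The equivalence can be verified numerically; in this sketch the rod data (mass, length, g) are hypothetical illustrative values, not from the text:

```python
import math

# Hypothetical uniform rod of mass m and length b, pivoted at one end
m, b, g = 1.3, 0.9, 9.81
J_O = m * b**2 / 3               # moment of inertia about the pivot O
d = b / 2                        # distance from pivot to centre of mass G
W = m * g

T_cp = 2 * math.pi * math.sqrt(J_O / (W * d))   # compound-pendulum period
L_red = J_O / (m * d)                           # reduced length (= 2b/3 here)
T_eq = 2 * math.pi * math.sqrt(L_red / g)       # equivalent simple pendulum
print(T_cp, T_eq, L_red)
```

By construction the two periods coincide, which is exactly the statement that L = J_O/md is the reduced length.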
neighbourhood of that point. If, as is often the case, the reference point
is the equilibrium position, this circumstance carries with it three implica-
tions: (a) that the equilibrium point must be a solution of the (non-linear) equations of motion such that q_i = q_{i0} = const (i = 1,…,n) and, consequently, q̇_{i0} = 0; (b) that the reference state must be a stable equilibrium point (otherwise a small departure from an unstable equilibrium would lead to a
growing state of motion that does not remain in the neighbourhood of the
equilibrium point), and (c) that non-linear terms can be approximated by
the linear terms in their Taylor series expansion.
Given these preliminary considerations, we can now return to the pendu-
lum examples above and note that we first obtained the exact expressions
for T and V, derived the equations of motion and then linearised them. This
is perfectly legitimate, but since it is generally more convenient to approxi-
mate the expressions of T and V in the first place and then use the resulting
Lagrangian (as, for example, the Lagrangian of Equation 2.82) to obtain
the linearised equations of motion, this is what we do now by first consider-
ing an n-DOF natural system – that is, we recall, a holonomic system for
which the T1 and T0 parts of the kinetic energy are zero.
In this case, the equilibrium positions are the stationary points of the potential energy – i.e. the points such that ∂V/∂q_i = 0 (i = 1,…,n) – with the additional requirement that a stable equilibrium point corresponds to a relative minimum of V. Observing that in many practical cases the stable equilibrium point can be identified by inspection, let us denote by q_{10},…,q_{n0} (q₀ for brevity) the generalised co-ordinates of this equilibrium position and let u_i = q_i − q_{i0} be small variations from this position. We can then expand
the potential energy as (all sums are from 1 to n)
\[
V = V(q_0) + \sum_i \left. \frac{\partial V}{\partial q_i} \right|_{q=q_0} u_i
+ \frac{1}{2} \sum_{i,j} \left. \frac{\partial^2 V}{\partial q_i\, \partial q_j} \right|_{q=q_0} u_i u_j + \cdots
\tag{2.83}
\]
and note that the term V ( q0 ) can be ignored (because an additive constant
is irrelevant in the potential energy) and that the first sum is zero because
of the equilibrium condition. The expansion, therefore, starts with second-
order terms, thus implying that if, under the small-amplitude assumption,
we neglect higher-order terms, we are left with the homogeneous quadratic
function of the ui
\[
V \cong \frac{1}{2} \sum_{i,j} \left. \frac{\partial^2 V}{\partial q_i\, \partial q_j} \right|_{q=q_0} u_i u_j
= \frac{1}{2} \sum_{i,j} k_{ij}\, u_i u_j
\tag{2.84}
\]
where the second expression defines the constants kij , called stiffness coef-
ficients, that are symmetric in the two subscripts (i.e. kij = kj i ).
For the kinetic energy, on the other hand, we already have a homoge-
neous quadratic function of the generalised velocities, because T = T2 for
natural systems. Therefore, we can write
\[
T = \frac{1}{2} \sum_{i,j} M_{ij}(q)\, \dot q_i \dot q_j
= \frac{1}{2} \sum_{i,j} M_{ij}(q)\, \dot u_i \dot u_j
\cong \frac{1}{2} \sum_{i,j} M_{ij}(q_0)\, \dot u_i \dot u_j
\tag{2.85}
\]
where in the last expression we directly assigned to the functions M_ij(q) their equilibrium values M_ij(q₀). Then, denoting by m_ij the mass coefficients M_ij(q₀) in Equation 2.85 and observing that m_ij = m_ji because they inherit this property from the original coefficients M_ij(q), the ‘small-amplitude’
Lagrangian of the system is
\[
L(u, \dot u) = \frac{1}{2} \sum_{i,j} \left( m_{ij}\, \dot u_i \dot u_j - k_{ij}\, u_i u_j \right)
\tag{2.86}
\]
Then, since

\[
\frac{d}{dt}\frac{\partial L}{\partial \dot u_r} = \sum_j m_{rj}\, \ddot u_j, \qquad
\frac{\partial L}{\partial u_r} = - \sum_j k_{rj}\, u_j
\]

Lagrange's equations yield

\[
\sum_j \left( m_{rj}\, \ddot u_j + k_{rj}\, u_j \right) = 0, \qquad
M \ddot u + K u = 0
\tag{2.87}
\]
where the first relation is a set of n equations (r = 1,…,n), while the second is the more compact matrix form obtained by introducing the n × n matrices M, K of the m- and k-coefficients, respectively, and the n × 1 column vectors ü, u. Clearly, the matrix versions of the properties m_ij = m_ji and k_ij = k_ji are M = Mᵀ and K = Kᵀ, meaning that both the mass and stiffness matrices are symmetric.
Remark 2.14
i. In matrix notation, the Lagrangian 2.86 can be written compactly as

\[
L = \frac{1}{2} \left( \dot u^T M \dot u - u^T K u \right)
\tag{2.88}
\]
ii. If, as a specific example, we go back to the double pendulum at the
beginning of this section, the matrix form of the Lagrangian 2.82 is
\[
2L = \begin{pmatrix} \dot\theta & \dot\phi \end{pmatrix}
\begin{pmatrix} (m_1 + m_2)\, l_1^2 & m_2\, l_1 l_2 \\ m_2\, l_1 l_2 & m_2\, l_2^2 \end{pmatrix}
\begin{pmatrix} \dot\theta \\ \dot\phi \end{pmatrix}
- \begin{pmatrix} \theta & \phi \end{pmatrix}
\begin{pmatrix} (m_1 + m_2)\, g\, l_1 & 0 \\ 0 & m_2\, g\, l_2 \end{pmatrix}
\begin{pmatrix} \theta \\ \phi \end{pmatrix}
\]
from which it is evident that the mass- and stiffness matrix are both
symmetric. And since – as shown in Appendix A and as we will see
in future chapters – symmetric matrices have a number of desirable
properties, the fact of producing symmetric matrices is an impor-
tant feature of the Lagrangian method.
iii. In the special case of a simple 1-DOF system, Equations 2.87 reduce to the single equation mü + ku = 0 or, defining ω² = k/m, to ü + ω²u = 0. If u is a Cartesian co-ordinate (say, the familiar x), then m
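The symmetric M and K of the double pendulum can be fed directly to a generalised eigensolver; the numerical values below (m₁ = m₂ = 1, l₁ = l₂ = 1, g = 9.81) are hypothetical illustrative choices, not from the text:

```python
import numpy as np
from scipy.linalg import eigh

# Double pendulum, small oscillations, with hypothetical equal masses/lengths
m1, m2, l1, l2, g = 1.0, 1.0, 1.0, 1.0, 9.81

M = np.array([[(m1 + m2)*l1**2, m2*l1*l2],
              [m2*l1*l2,        m2*l2**2]])
K = np.array([[(m1 + m2)*g*l1, 0.0],
              [0.0,            m2*g*l2]])

# Symmetry, as guaranteed by the Lagrangian method
assert np.allclose(M, M.T) and np.allclose(K, K.T)

# Generalised eigenproblem K z = omega^2 M z
lam, Z = eigh(K, M)
omega = np.sqrt(lam)
print(omega)   # natural frequencies of the two modes
```

For this symmetric case the eigenvalues are ω² = (g/l)(2 ∓ √2), a classical closed-form check of the solver output.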
If our system is also acted upon by dissipative viscous forces, we can recall
the developments of Section 2.4.4 and notice the formal analogy between
T2 and the Rayleigh dissipation function D. This means that – as we did for
the Mij (q) – we can expand the coefficients Cij (q) and retain only the first
term Cij ( q0 ) ≡ cij to obtain
\[
D \cong \frac{1}{2} \sum_{i,j} c_{ij}\, \dot u_i \dot u_j
\tag{2.89}
\]
Then, owing to Lagrange’s equations in the form 2.46, we are led to the
equations of motion
\[
\sum_j \left( m_{rj}\, \ddot u_j + c_{rj}\, \dot u_j + k_{rj}\, u_j \right) = 0, \qquad
M \ddot u + C \dot u + K u = 0
\tag{2.90}
\]
where, as for Equations 2.87, the first relation holds for r = 1,…,n, while the second is its matrix form (and the damping matrix is symmetric, i.e. C = Cᵀ, because c_ij = c_ji).
For non-natural systems, on the other hand, we may have two cases:
(i) T1 = 0 and T0 ≠ 0 or (ii) both T1 , T0 non-zero. In the first case, the sys-
tem can be treated as an otherwise natural system with kinetic energy
T2 and potential energy U = V − T0 (i.e. the dynamic potential of Section
2.4.2), thus implying that the stiffness coefficients are given by the sec-
ond derivatives of U instead of the second derivatives of V. In the second
case, we can recall from Section 2.4.2 that T₁ = Σ_i b_i(q) q̇_i. Expanding in Taylor series the coefficients b_i, the small-amplitude approximation gives

\[
M \ddot u + C \dot u + K u = f
\tag{2.91}
\]
for a system with dissipative viscous forces. The components of f are the
generalised forces Q₁,…,Q_n, but we do not denote this vector by q to avoid
possible confusion with a vector of generalised co-ordinates.
As for terminology, the equations with a zero r.h.s. define the so-called
free-vibration problem, while the equations with a non-zero r.h.s. define
the forced vibration problem; we will consider the solutions of both prob-
lems in the following chapters.
Remark 2.15
The linearised equations of motion in the form 2.90 or 2.91 are sufficiently general for many cases of interest. However, it is worth observing that, in the most general case, the coefficients of the u̇ and u terms are, respectively, (C + G) and (K + H), where G is the gyroscopic matrix mentioned above, while the
skew-symmetric matrix H (denoted by N by some authors) is called circula-
tory. For these aspects, we refer the interested reader to Meirovitch (1997),
Pfeiffer and Schindler (2015) or Ziegler (1977).
\[
L = \frac{m V^2}{2} + \frac{m v'^2}{2} + \frac{m}{2} \left( w \times r' \right)^2
+ m V \cdot \frac{d r'}{dt} + m v' \cdot \left( w \times r' \right) - U
\tag{2.92}
\]

\[
m V \cdot \frac{d r'}{dt} = \frac{d}{dt}\left( m V \cdot r' \right) - m \frac{d V}{dt} \cdot r'
\tag{2.93}
\]

\[
L' = \frac{m v'^2}{2} + \frac{m}{2} \left( w \times r' \right)^2
- m A \cdot r' + m v' \cdot \left( w \times r' \right) - U(r')
\tag{2.94}
\]
where, in writing Equation 2.94, we used Equation 2.93, discarded the total time derivative d(mV·r′)/dt – which has no effect on the equations of motion – and defined A ≡ dV/dt.
Passing to the equation of motion in the frame K ′, the first relation we get
from Lagrangian 2.94 is
\[
\frac{\partial L'}{\partial v'} = m v' + m \left( w \times r' \right)
\;\;\Rightarrow\;\;
\frac{d}{dt}\frac{\partial L'}{\partial v'} = m a' + m \left( \dot w \times r' \right) + m \left( w \times v' \right)
\tag{2.95}
\]
Remark 2.16
\[
\frac{\partial L'}{\partial r'}
= m \left\{ w^2 r' - \left( w \cdot r' \right) w \right\} - m A + m \left( v' \times w \right) - \frac{\partial U}{\partial r'}
= m\, w \times \left( r' \times w \right) - m A + m \left( v' \times w \right) - \frac{\partial U}{\partial r'}
\tag{2.96}
\]
where, owing to the property of Remark 2.16(iii), the term within curly
brackets in the first line of Equation 2.96 is equal to w × (r ′ × w). Finally,
by further recalling the property a × b = − b × a, Equations 2.95 and 2.96
together lead to the equation of motion in the frame K ′; this is
\[
m a' = - \frac{\partial U}{\partial r'} - m A - m \left( \dot w \times r' \right)
- 2m \left( w \times v' \right) - m\, w \times \left( w \times r' \right)
\tag{2.97}
\]
Remark 2.17
i. Since |w × v′| = w v′ sin α, where α is the angle between the two vectors,
the Coriolis force – which is linear in the velocity and is a typical
example of gyroscopic term mentioned at the end of Section 2.4.2 – is
zero if v′ is parallel to w. Also, the Coriolis force is always perpen-
dicular to the velocity and therefore it does no work.
ii. The centrifugal force lies in the plane through r ′ and w, is perpen-
dicular to the axis of rotation (i.e. to w) and is directed away from the
axis. Its magnitude is mw 2d, where d is the distance of the particle
from the axis of rotation.
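These geometric statements about the centrifugal force are easy to verify numerically; the angular velocity, position and mass below are hypothetical values chosen for illustration:

```python
import numpy as np

# Hypothetical data: rotation about the z-axis, particle off the axis
m = 2.0
w = np.array([0.0, 0.0, 3.0])      # angular velocity vector
r = np.array([0.4, 0.0, 0.5])      # particle position in the rotating frame

F_cf = -m * np.cross(w, np.cross(w, r))   # centrifugal force -m w x (w x r')
d = np.linalg.norm(r[:2])                 # distance from the rotation axis

print(F_cf)   # lies in the plane through r' and w, points away from the axis
print(m * np.dot(w, w) * d)               # magnitude m*w^2*d, as stated
```

The computed force has no z-component (it is perpendicular to w) and its norm equals m w² d, in agreement with the remark.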
\[
m a' = - \frac{\partial U}{\partial r'} - 2m \left( w \times v' \right) - m\, w \times \left( w \times r' \right)
\tag{2.99}
\]
where, as above, the last two terms on the r.h.s. are the Coriolis and the
centrifugal force. In this respect, note that the Coriolis term comes from the T₁-part of the kinetic energy, while the centrifugal term comes from the T₀-part.
In this special case, the particle generalised momentum in frame K′ is p′ = ∂L′/∂v′ = m(v′ + w × r′), but since the term within parentheses is the particle velocity v relative to K, p′ coincides with the momentum p in
frame K. If, in addition, the origins of the two reference systems coincide,
then r = r ′ and it also follows that the angular momentum M′ = r ′ × p′ in
frame K ′ is the same as the angular momentum M = r × p in frame K. As
for the energy function, in frame K ′, we have
\[
h' = \sum_i \frac{\partial L'}{\partial \dot x_i'}\, \dot x_i' - L' = p' \cdot v' - L'
\tag{2.100}
\]
\[
h' = \frac{m v'^2}{2} + U - \frac{m}{2}\left( w \times r' \right)^2
= \frac{m v'^2}{2} + U + U_{Ctf}
\tag{2.101}
\]
where in the rightmost expression the subscript ‘Ctf’ stands for ‘centrifugal’, thus showing that the rotation of K′ manifests itself in the appearance of the additional potential energy term U_Ctf = −(m/2)(w × r′)², which is independent of the particle velocity v′. In this regard, note that the energy contains no term linear in the velocity.
At this point, recalling that with no translational motion we have
v′ = v − w × r ′ , we can use this in Equation 2.101 to get
\[
h' = \frac{m v^2}{2} + U - m v \cdot \left( w \times r' \right) = h - w \cdot M
\tag{2.102}
\]
Equation 2.102 is the relation between the particle energies in the two
frames under the assumptions that K ′ is uniformly rotating relative to K and
that the origins of the two frames coincide. Also, recalling that under these
assumptions we have M′ = M, we can equivalently write h′ = h − w ⋅ M′ ,
thus showing that the particle energy in frame K ′ is less than the energy in
frame K by the amount w ⋅ M′ = w ⋅ M.
Example 2.6
Relative to an inertial frame of reference K with axes x, y, z , a particle
of mass m moves in the horizontal xy-plane under the action of a force
field with potential U(x, y). We wish to write the particle equation of
motion from the point of view of a non-inertial frame K′ that is rotat-
ing with angular velocity w(t) with respect to K. Assuming that the
axis of rotation is directed along the vertical direction z (i.e. w = w k,
where k is the vertical unit vector), let us call q1 , q2 , q3 the co-ordinates
of frame K′ by also assuming that q3 is parallel to z and that the origins
of the two frames coincide. Then, expressing the Lagrangian 2.94 (with A = 0) in terms of the rotating co-ordinates, we get

\[
L' = \frac{m}{2}\left( \dot q_1^2 + \dot q_2^2 \right)
+ m w \left( q_1 \dot q_2 - q_2 \dot q_1 \right)
+ \frac{m w^2}{2}\left( q_1^2 + q_2^2 \right) - U
\tag{2.105}
\]
\[
\begin{aligned}
& m \ddot q_1 - m \dot w\, q_2 - 2 m w\, \dot q_2 - m w^2 q_1 + \frac{\partial U}{\partial q_1} = 0 \\
& m \ddot q_2 + m \dot w\, q_1 + 2 m w\, \dot q_1 - m w^2 q_2 + \frac{\partial U}{\partial q_2} = 0
\end{aligned}
\tag{2.106}
\]
which, in addition to the expected q̈_i-terms and to the ‘real’ field forces −∂U/∂q_i, show – as mentioned earlier – the appearance of the fictitious forces due to the non-inertial state of motion of the frame K′. Clearly, the terms with ẇ do not appear if the rotation is uniform. Also, in the Lagrangian 2.105, it is immediate to identify the three parts T₂, T₁, T₀ of the kinetic energy – respectively, quadratic, linear and independent of the velocities q̇.
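Equations 2.106 can be re-derived symbolically. The sketch below assumes, for simplicity, a uniform rotation (constant w, so the ẇ terms drop out) and a hypothetical harmonic potential U = k(q₁² + q₂²)/2 — both assumptions are illustrative choices, not from the text:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, w, k = sp.symbols('m w k', positive=True)   # uniform rotation: w = const
q1, q2 = sp.Function('q1'), sp.Function('q2')

# Hypothetical harmonic potential (illustrative assumption)
U = k*(q1(t)**2 + q2(t)**2)/2

# Lagrangian 2.105 with constant w
L = (m/2)*(q1(t).diff(t)**2 + q2(t).diff(t)**2) \
    + m*w*(q1(t)*q2(t).diff(t) - q2(t)*q1(t).diff(t)) \
    + (m*w**2/2)*(q1(t)**2 + q2(t)**2) - U

e1, e2 = euler_equations(L, [q1(t), q2(t)], t)

# Equations 2.106 with w_dot = 0 and dU/dq_i = k*q_i
ref1 = m*q1(t).diff(t, 2) - 2*m*w*q2(t).diff(t) - m*w**2*q1(t) + k*q1(t)
ref2 = m*q2(t).diff(t, 2) + 2*m*w*q1(t).diff(t) - m*w**2*q2(t) + k*q2(t)
print(sp.simplify(e1.lhs + ref1), sp.simplify(e2.lhs + ref2))
```

Both residuals vanish, confirming that the Coriolis (2mwq̇) and centrifugal (mw²q) terms come out of the Lagrangian automatically.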
\[
\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} = 0
\;\;\Rightarrow\;\;
\frac{\partial L}{\partial \dot q_i} = \mathrm{const} \equiv \beta_i \qquad (i = 1, \ldots, k)
\tag{2.108}
\]
with the constants β_i being determined from the initial conditions. However, although the ignorable co-ordinates do not appear in L, all the velocities q̇_i do, and one would like to eliminate the q̇s corresponding to the ignorable qs in order to reduce – as in the Hamiltonian formulation – the number of DOFs. This can be done by solving the k equations 2.108 for the q̇_i (as functions of q_{k+1},…,q_n, q̇_{k+1},…,q̇_n, β₁,…,β_k, t) and then using them in the
so-called Routh function (or Routhian), defined as
\[
R = L - \sum_{i=1}^{k} \beta_i\, \dot q_i
\tag{2.109}
\]
\[
dR = \sum_{i=k+1}^{n} \frac{\partial R}{\partial q_i}\, dq_i
+ \sum_{i=k+1}^{n} \frac{\partial R}{\partial \dot q_i}\, d\dot q_i
+ \sum_{i=1}^{k} \frac{\partial R}{\partial \beta_i}\, d\beta_i
+ \frac{\partial R}{\partial t}\, dt
\tag{2.110}
\]
But we can also use the definition on the r.h.s. of Equation 2.109 to write
dR as
\[
d\!\left( L - \sum_{i=1}^{k} \beta_i\, \dot q_i \right)
= \sum_{i=k+1}^{n} \frac{\partial L}{\partial q_i}\, dq_i
+ \sum_{i=1}^{n} \frac{\partial L}{\partial \dot q_i}\, d\dot q_i
+ \frac{\partial L}{\partial t}\, dt
- \sum_{i=1}^{k} \dot q_i\, d\beta_i
- \sum_{i=1}^{k} \beta_i\, d\dot q_i
\tag{2.111}
\]
and then notice that the first sum on the r.h.s. with the dq̇_i can be split into two parts as Σ_{i=1}^{k}(⋅) + Σ_{i=k+1}^{n}(⋅) and that – since β_i = ∂L/∂q̇_i – the first part cancels out with the rightmost sum. At this point, we can compare the various differential terms of Equations 2.110 and 2.111 to obtain (besides the
evident relation ∂ R ∂ t = ∂ L ∂ t )
\[
\frac{\partial R}{\partial q_i} = \frac{\partial L}{\partial q_i}, \quad
\frac{\partial R}{\partial \dot q_i} = \frac{\partial L}{\partial \dot q_i} \quad (i = k+1, \ldots, n); \qquad
- \frac{\partial R}{\partial \beta_i} = \dot q_i \quad (i = 1, \ldots, k)
\tag{2.112}
\]
where Equations 2.112₁ can now be substituted in LEs to give the ‘reduced’ system of n − k equations of motion
\[
\frac{d}{dt}\frac{\partial R}{\partial \dot q_i} - \frac{\partial R}{\partial q_i} = 0
\qquad (i = k+1, \ldots, n)
\tag{2.113}
\]
with exactly the same form as Lagrange’s equations, but with R in place
of L. As for the ignorable co-ordinates, in most cases, there is no need to
solve for them but, if necessary, they can be obtained by direct integration
of Equations 2.112₃.
Example 2.7
In order to illustrate the procedure, a typical example is the so-called
Kepler problem, where here we consider a particle of unit mass
attracted by a gravitational force (i.e. with a r −2 force law) to a fixed
point. Using the usual polar co-ordinates, we obtain the Lagrangian
( )
L = 2−1 r2 + r 2θ 2 + µ r, where µ is a positive (gravitational) constant.
( )
Since L = L r , r, θ , the ignorable co-ordinate is θ and Equation 2.108
for this case is r 2θ = β , from which it follows θ = β r 2 . Using this last
relation to form the Routhian, we get
\[
R = L - \beta \dot\theta = \frac{\dot r^2}{2} - \frac{\beta^2}{2 r^2} + \frac{\mu}{r}
\tag{2.114}
\]
Remark 2.18
i. In the example above, the careful reader has probably noticed that the Routhian procedure has led to the appearance of an additional potential energy term (the term proportional to r⁻² in the example).
ii. In the case of systems with more DOFs, the procedure may also lead to the appearance of gyroscopic terms (linear in the q̇s), even if there were no such terms in the original Lagrangian (see, for example, Greenwood (1977) or Lanczos (1970)).
For a simple pendulum of length l released from rest at the amplitude angle α, conservation of energy leads to the quarter-period integral

\[
T_\alpha = 4 \sqrt{\frac{l}{2g}} \int_0^\alpha \frac{d\theta}{\sqrt{\cos\theta - \cos\alpha}}
\tag{2.115}
\]

At this point, defining the new variable x = sin(θ/2)/sin(α/2), differentiation gives

\[
d\theta = \frac{2 \sin(\alpha/2)\, dx}{\sqrt{1 - x^2 \sin^2(\alpha/2)}}
\tag{2.116}
\]

where the denominator in the last expression is obtained by using the relation sin²(θ/2) = x² sin²(α/2), which follows from the definition of x. Finally, substituting Equation 2.116 into Equation 2.115 and using the relation cos θ − cos α = 2 sin²(α/2)(1 − x²), we are led to
\[
\int_0^1 \frac{dx}{\sqrt{\left( 1 - x^2 \right)\left( 1 - k^2 x^2 \right)}}
= \frac{T_\alpha}{4} \sqrt{\frac{g}{l}}
\;\;\Rightarrow\;\;
T_\alpha = 4 \sqrt{\frac{l}{g}}\, K
\tag{2.118}
\]
where we defined k = sin(α 2) and K is the so-called elliptic integral (in the
Jacobi form). Elliptic integrals are extensively tabulated, but for our present
purposes, it suffices to consider the expansion
\[
K = \frac{\pi}{2}\left[ 1 + \left( \frac{1}{2} \right)^2 k^2
+ \left( \frac{1 \cdot 3}{2 \cdot 4} \right)^2 k^4
+ \left( \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6} \right)^2 k^6 + \cdots \right]
= \frac{\pi}{2}\left[ 1 + 0.25 \sin^2(\alpha/2) + 0.1406 \sin^4(\alpha/2) + 0.0977 \sin^6(\alpha/2) + \cdots \right]
\tag{2.119}
\]
which shows that the period (and, clearly, the frequency) of oscillation
depends on the amplitude. This is one of the typical phenomena of non-
linear vibrations.
Finally, note that for small angles (sin(α/2) ≈ α/2 ≪ 1), we can write

\[
T_\alpha = 2\pi \sqrt{\frac{l}{g}} \left( 1 + \frac{\alpha^2}{16} + \cdots \right)
\tag{2.120}
\]
3.1 INTRODUCTION
\[
m \ddot u + k u = 0, \qquad m \ddot u + c \dot u + k u = 0
\tag{3.1}
\]
where the system’s physical parameters on the l.h.s., that is, mass, stiffness
and damping characteristics, are not represented by matrices but by simple
scalar quantities.
\[
\ddot u + \omega_n^2 u = 0, \qquad \omega_n^2 \equiv \frac{k}{m}
\tag{3.2}
\]
where the second relation defines the quantity ω n (whose physical meaning
will be clear shortly). Assuming a solution of the form u = e^{αt}, we obtain the characteristic equation α² + ω_n² = 0; then α = ±iω_n, and the solution of Equation 3.2₁ can be written as u(t) = C₁e^{iω_n t} + C₂e^{−iω_n t}, where the complex constants C₁, C₂ are determined from the initial conditions and must be such that C₁ = C₂* because the displacement u(t) is a real quantity. So, if the initial conditions of displacement and velocity at t = 0 are given by
\[
u(0) = u_0, \qquad \dot u(0) = v_0
\tag{3.3}
\]

we obtain
\[
u(t) = \frac{1}{2}\left( u_0 - i \frac{v_0}{\omega_n} \right) e^{i \omega_n t}
+ \frac{1}{2}\left( u_0 + i \frac{v_0}{\omega_n} \right) e^{-i \omega_n t}
\tag{3.4}
\]
where the two constants (A, B in the first expression and C, θ in the second)
are determined from the initial conditions (3.3). We leave to the reader the
easy task of checking the relations
Remark 3.1
\[
E_T = \frac{k C^2}{2} = \frac{m \omega_n^2 C^2}{2} = \frac{m V^2}{2}
\tag{3.7}
\]
which shows that the energy is proportional to the amplitude squared (and
where, in writing the last relation, we recalled that the velocity amplitude is
V = ω nC ). Also, observing that the total energy equals the potential energy
at maximum displacement and the kinetic energy at maximum velocity, the
energy equality E_k(max) = E_p(max) leads immediately to the relation ω_n² = k/m. Lastly, we leave it to the reader to show that the average kinetic and potential energies over a period T = 2π/ω_n are equal and that their common value is E_T/2.
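The energy bookkeeping above can be confirmed numerically; the parameter values (m, k and the amplitude C) are hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical undamped 1-DOF system and amplitude
m, k, C = 1.0, 4.0, 0.3
wn = np.sqrt(k/m)

t = np.linspace(0.0, 2*np.pi/wn, 200001)   # one full period
u = C*np.cos(wn*t)
v = -C*wn*np.sin(wn*t)

Ek, Ep = 0.5*m*v**2, 0.5*k*u**2
E_T = 0.5*k*C**2

assert np.allclose(Ek + Ep, E_T)           # total energy constant in time
print(Ek.mean(), Ep.mean(), E_T/2)         # both averages approach E_T/2
```

The instantaneous sum is constant (conservation), while the period averages of the two energies coincide at E_T/2, as stated.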
If now we turn our attention to a viscously damped 1-DOF system, it is
convenient to introduce the two quantities ccr , ζ called the critical damping
and the damping ratio, respectively, and defined as
\[
c_{cr} = 2\sqrt{km} = 2 m \omega_n, \qquad \zeta = \frac{c}{c_{cr}}
\tag{3.8}
\]

in terms of which the equation of motion becomes

\[
\ddot u + 2 \omega_n \zeta\, \dot u + \omega_n^2 u = 0
\tag{3.9}
\]
\[
\alpha_{1,2} = \left( -\zeta \pm \sqrt{\zeta^2 - 1} \right) \omega_n
\tag{3.10}
\]
c > c_cr, c = c_cr and c < c_cr, and one speaks of over-damped, critically damped and under-damped systems – the last case being the most important in vibration study because only then does the system actually ‘vibrate’.
\[
u(t) = C_1\, e^{\left( -\zeta + \sqrt{\zeta^2 - 1} \right) \omega_n t}
+ C_2\, e^{\left( -\zeta - \sqrt{\zeta^2 - 1} \right) \omega_n t}
\tag{3.12a}
\]
\[
C_1 = \frac{v_0 + \omega_n u_0 \left( \zeta + \sqrt{\zeta^2 - 1} \right)}{2 \omega_n \sqrt{\zeta^2 - 1}}, \qquad
C_2 = \frac{-v_0 - \omega_n u_0 \left( \zeta - \sqrt{\zeta^2 - 1} \right)}{2 \omega_n \sqrt{\zeta^2 - 1}}
\tag{3.12b}
\]
Also, in this case, therefore, the system does not vibrate but returns
to its rest position, although now it takes longer with respect to the
critically damped case. Moreover, it is worth observing that if, for brevity, we define ω = ω_n√(ζ² − 1), the solution of Equations 3.12a and b can equivalently be expressed as

\[
u(t) = e^{-\zeta \omega_n t}\left( u_0 \cosh \omega t
+ \frac{v_0 + \zeta \omega_n u_0}{\omega} \sinh \omega t \right)
\tag{3.12c}
\]
Under-damped case: As mentioned above, in this case (ζ < 1), the sys-
tem actually vibrates. In fact, by substituting the two roots (3.10) –
which are now complex conjugates with negative real part – in the
general solution, we get
\[
u(t) = e^{-\zeta \omega_n t}\left( C_1\, e^{i \omega_n \sqrt{1-\zeta^2}\, t}
+ C_2\, e^{-i \omega_n \sqrt{1-\zeta^2}\, t} \right)
\tag{3.13a}
\]
\[
C_1 = \frac{u_0 \omega_d - i \left( v_0 + \zeta u_0 \omega_n \right)}{2 \omega_d}, \qquad
C_2 = \frac{u_0 \omega_d + i \left( v_0 + \zeta u_0 \omega_n \right)}{2 \omega_d}
\tag{3.13b}
\]
where the relations among the constants and the initial conditions are
\[
C = \sqrt{A^2 + B^2} = \sqrt{u_0^2 + \left( \frac{v_0 + \zeta u_0 \omega_n}{\omega_d} \right)^2}, \qquad
\tan\theta = \frac{B}{A} = \frac{v_0 + \zeta u_0 \omega_n}{u_0 \omega_d}
\tag{3.15}
\]
Remark 3.2
where in the last equality we used the free-vibration Equation 3.1₂. Since the Rayleigh dissipation function in this case is D = cu̇²/2, Equation 3.16 confirms the result of Section 2.4.4, namely that the rate of energy loss is −2D.
Remark 3.3
Example 3.1
At this point, it could be asked how a negative stiffness or a nega-
tive damping can arise in practice. Two examples are given
3.2.1 Logarithmic decrement
Since, in general, the damping characteristic of a vibrating system is the
most difficult parameter to estimate satisfactorily, a record of the actual
system’s free oscillation response can be used to obtain such an estimate.
Assuming that our system behaves as an under-damped 1-DOF system, let t₁, t₂ be the times at which we have two successive peaks with amplitudes u₁, u₂, respectively. Then t₂ − t₁ = T_d = 2π/ω_d, and we can use the complex form of the solution u(t) given in Remark 3.2-iv to write
\[
\frac{u_1}{u_2} = \frac{X e^{-\zeta \omega_n t_1}\, e^{-i \omega_d t_1}}{X e^{-\zeta \omega_n t_2}\, e^{-i \omega_d t_2}}
= e^{i \omega_d T_d}\, e^{\zeta \omega_n T_d}
= e^{i 2\pi}\, e^{2\pi \zeta\, \omega_n / \omega_d}
= e^{2\pi \zeta \left( \omega_n / \omega_d \right)}
\tag{3.18}
\]
\[
\delta \equiv \ln \frac{u_1}{u_2} = 2\pi \zeta\, \frac{\omega_n}{\omega_d}
= \frac{2\pi \zeta}{\sqrt{1 - \zeta^2}}
\;\;\Rightarrow\;\;
\zeta = \frac{\delta}{\sqrt{4\pi^2 + \delta^2}}
\tag{3.19}
\]
\[
\zeta \cong \frac{u_1 - u_2}{2\pi u_2}
\tag{3.20}
\]
\[
\zeta \cong \frac{u_i - u_{i+m}}{2 m \pi\, u_{i+m}}
\tag{3.21}
\]
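The identification procedure can be rehearsed on synthetic data: the sketch below generates the peak amplitudes of a decay with a known damping ratio (the values ζ = 0.05 and ω_n = 2π are hypothetical), then recovers ζ from peaks m cycles apart via the logarithmic decrement and Equation 3.19:

```python
import numpy as np

# Simulated free decay of a hypothetical under-damped system
zeta_true, wn = 0.05, 2*np.pi
wd = wn*np.sqrt(1 - zeta_true**2)
Td = 2*np.pi/wd

i_peaks = np.arange(20)
peaks = np.exp(-zeta_true*wn*i_peaks*Td)      # successive peak amplitudes u_i

# Log decrement per cycle from peaks m cycles apart, then Equation 3.19
mcyc = 5
delta = np.log(peaks[0]/peaks[mcyc])/mcyc
zeta_est = delta/np.sqrt(4*np.pi**2 + delta**2)
print(zeta_est)
```

Using peaks several cycles apart, as in Equation 3.21, is the practical choice for lightly damped systems, where two adjacent peaks differ by very little.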
Remark 3.4
\[
M \ddot u + K u = 0, \qquad M \ddot u + C \dot u + K u = 0
\tag{3.23}
\]
for the undamped and viscously damped case, respectively. Also, it was
shown that M, K, C are n × n symmetric matrices (but in general they are not
diagonal, and the non-diagonal elements provide the coupling between the
n equations) while u is an n × 1 time-dependent displacement vector. Starting
with the undamped case and assuming a solution of the form u = z e iωt in
which all the co-ordinates execute a synchronous motion and where z is a
time-independent ‘shape’ vector, we are led to
\[
\left( K - \omega^2 M \right) z = 0
\;\;\Rightarrow\;\;
K z = \omega^2 M z
\tag{3.24}
\]

which has a non-zero solution if and only if det(K − ω²M) = 0. This, in
turn, is an algebraic equation of order n in ω² known as the frequency (or characteristic) equation. Its roots ω₁², ω₂²,…,ω_n² are called the eigenvalues of the undamped free-vibration problem, where, physically, the positive square roots ω₁,…,ω_n represent the system's (undamped) natural frequencies (with the usual understanding that ω₁ is the lowest value, the so-called fundamental frequency, and that the subscript increases as the value of frequency increases). When the natural frequencies ω_j (j = 1,…,n) have been determined, we can go back to Equation 3.24₁ and solve it for z for each eigenvalue. This gives a set of vectors z₁,…,z_n, which are
known by various names: eigenvectors in mathematical terminology,
natural modes of vibration, mode shapes or modal vectors in engineering
terminology. Whatever the name, the homogeneous nature of the math-
ematical problem implies that the amplitude of these vectors can only
be determined to within an arbitrary multiplicative constant – that is, a
scaling factor. In other words, if, for some fixed index k, z k is a solution,
then a z k (a constant) is also a solution, so, in order to completely deter-
mine the eigenvector, we must fix the value of a by some convention. This
process, known as normalisation, can be achieved in various ways and
one possibility (but we will see others shortly) is to enforce the condition of unit length z_kᵀz_k = 1.
Then, assuming the eigenvectors to have been normalised by some
method, the fact that our problem is linear tells us that the general solution
is given by a linear superposition of the oscillations at the various natural
frequencies and can be expressed in sinusoidal forms as
\[
u(t) = \sum_{j=1}^{n} C_j\, z_j \cos\left( \omega_j t - \theta_j \right)
= \sum_{j=1}^{n} \left( D_j \cos \omega_j t + E_j \sin \omega_j t \right) z_j
\tag{3.25}
\]
where the 2n constants – C_j, θ_j in the first case and D_j, E_j in the second case – are obtained from the initial conditions (more on this in Section 3.3.2)
\[
u(t=0) = u_0, \qquad \dot u(t=0) = \dot u_0
\tag{3.26}
\]
Remark 3.5
3.3.1 Orthogonality of eigenvectors
and normalisation
Starting with the eigenvalue problem Kz = λMz, let z_i, z_j be two eigenvectors corresponding to the eigenvalues λ_i, λ_j, with λ_i ≠ λ_j. Then, the two relations

\[
K z_i = \lambda_i M z_i, \qquad K z_j = \lambda_j M z_j
\tag{3.27}
\]
we can now subtract one equation from the other to obtain (λ_i − λ_j) z_iᵀM z_j = 0. But since we assumed λ_i ≠ λ_j, this implies

\[
z_i^T M z_j = 0 \qquad (i \neq j)
\tag{3.28}
\]

and consequently, using Equations 3.27, also

\[
z_i^T K z_j = 0 \qquad (i \neq j)
\tag{3.29}
\]
where the value of the scalars Mi , Ki – called the modal mass and modal
stiffness of the ith mode, respectively – will depend on the normalisation of
the eigenvectors. However, note that no such indetermination occurs in the
ratio of the two quantities; in fact, we have
\[
\frac{K_i}{M_i} = \frac{z_i^T K z_i}{z_i^T M z_i}
= \lambda_i\, \frac{z_i^T M z_i}{z_i^T M z_i} = \lambda_i = \omega_i^2
\qquad (i = 1, \ldots, n)
\tag{3.31}
\]
\[
P^T M P = I, \qquad P^T K P = \mathrm{diag}\left( \lambda_1, \ldots, \lambda_n \right) \equiv L
\tag{3.33}
\]
Remark 3.6
i. Two other normalisation conventions are as follows: (a) set the largest
component of each eigenvector equal to 1 and determine the remain-
ing n − 1 components accordingly, and (b) scale the eigenvectors so
that all the modal masses have the same value M, where M is some
convenient parameter (for example, the total mass of the system).
ii. It is not difficult to see that the relationship between $\mathbf{p}_i$ and its non-normalised counterpart is $\mathbf{p}_i = \mathbf{z}_i / \sqrt{M_i}$. Also, it is easy to see that the non-normalised versions of Equations 3.33 are $\mathbf{Z}^T \mathbf{M} \mathbf{Z} = \operatorname{diag}\left( M_1, \ldots, M_n \right)$ and $\mathbf{Z}^T \mathbf{K} \mathbf{Z} = \operatorname{diag}\left( K_1, \ldots, K_n \right)$, where $\mathbf{Z} = \left[ \mathbf{z}_1\ \mathbf{z}_2\ \cdots\ \mathbf{z}_n \right]$ is the modal matrix of the non-normalised eigenvectors.
iii. If we need to calculate the inverse of the modal matrix, note that from Equation 3.33₁, it follows $\mathbf{P}^{-1} = \mathbf{P}^T \mathbf{M}$.
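These relations lend themselves to a quick numerical sanity check. The sketch below (not an example from the text; the matrices are randomly generated for illustration) verifies Equations 3.33 and the inverse formula $\mathbf{P}^{-1} = \mathbf{P}^T \mathbf{M}$ with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric positive-definite mass and stiffness matrices
n = 4
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)          # SPD mass matrix (made up)
B = rng.standard_normal((n, n))
K = B @ B.T + n * np.eye(n)          # SPD stiffness matrix (made up)

# Solve the GEP Kz = lambda Mz via the (generally non-symmetric) matrix M^-1 K
lam, Z = np.linalg.eig(np.linalg.solve(M, K))
lam, Z = lam.real, Z.real            # SPD pair -> real eigenpairs

# Mass-normalise each eigenvector: p_i = z_i / sqrt(M_i), with M_i = z_i^T M z_i
modal_mass = np.einsum('ij,jk,ki->i', Z.T, M, Z)
P = Z / np.sqrt(modal_mass)

I_check = P.T @ M @ P                # should equal the identity (Eq. 3.33, first)
L_check = P.T @ K @ P                # should equal diag(lambda_1, ..., lambda_n)
P_inv = P.T @ M                      # candidate inverse of the modal matrix

ok_identity = np.allclose(I_check, np.eye(n), atol=1e-8)
ok_diagonal = np.allclose(L_check, np.diag(lam), atol=1e-8)
ok_inverse = np.allclose(P_inv @ P, np.eye(n), atol=1e-8)
```

The same check works for any symmetric pair with positive-definite $\mathbf{M}$; only the tolerance may need adjusting for badly conditioned matrices.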
Using the initial conditions 3.26 in Equations 3.25 – let us say, in the second expression – it is readily seen that at time $t = 0$, we have $\mathbf{u}_0 = \sum_j D_j \mathbf{p}_j$ and $\dot{\mathbf{u}}_0 = \sum_j \omega_j E_j \mathbf{p}_j$. Then, pre-multiplying both expressions by $\mathbf{p}_i^T \mathbf{M}$ and taking Equations 3.32₁ into account, we get

$$\mathbf{p}_i^T \mathbf{M} \mathbf{u}_0 = \sum_j D_j\, \mathbf{p}_i^T \mathbf{M} \mathbf{p}_j = D_i, \qquad \mathbf{p}_i^T \mathbf{M} \dot{\mathbf{u}}_0 = \sum_j \omega_j E_j\, \mathbf{p}_i^T \mathbf{M} \mathbf{p}_j = \omega_i E_i \tag{3.34}$$
thus implying that we can write the general solution of the undamped
problem as
$$\mathbf{u}(t) = \sum_{i=1}^{n} \left[ \left( \mathbf{p}_i^T \mathbf{M} \mathbf{u}_0 \right) \cos\omega_i t + \frac{\mathbf{p}_i^T \mathbf{M} \dot{\mathbf{u}}_0}{\omega_i} \sin\omega_i t \right] \mathbf{p}_i \tag{3.35a}$$

or, equivalently, as

$$\mathbf{u}(t) = \sum_{i=1}^{n} \mathbf{p}_i \mathbf{p}_i^T \mathbf{M} \left( \mathbf{u}_0 \cos\omega_i t + \frac{\dot{\mathbf{u}}_0}{\omega_i} \sin\omega_i t \right) \tag{3.35b}$$
Remark 3.7
The fact that in Equation 3.35a the vector u is a linear combination of the
mode shapes pi tells us that the vectors pi form a basis of the n-dimensional
(linear) space of the system’s vibration shapes. This fact – which can be
mathematically proved on account of the fact that the matrices K, M are
symmetric and that in most cases at least M is positive-definite (see, for
example, Hildebrand (1992), Laub (2005) and Appendix A) – means that
any vector $\mathbf{w}$ of this space can be expressed as a linear combination $\mathbf{w} = \sum_j \alpha_j \mathbf{p}_j$ of the eigenvectors. Pre-multiplying this expansion by $\mathbf{p}_i^T \mathbf{M}$ gives

$$\mathbf{p}_i^T \mathbf{M} \mathbf{w} = \sum_j \alpha_j\, \mathbf{p}_i^T \mathbf{M} \mathbf{p}_j = \sum_j \alpha_j \delta_{ij} \quad \Rightarrow \quad \alpha_i = \mathbf{p}_i^T \mathbf{M} \mathbf{w} \tag{3.36}$$

so that we arrive at the expansions

$$\mathbf{w} = \sum_j \left( \mathbf{p}_j^T \mathbf{M} \mathbf{w} \right) \mathbf{p}_j = \sum_j \left( \mathbf{p}_j \mathbf{p}_j^T \mathbf{M} \right) \mathbf{w}, \qquad \mathbf{I} = \sum_j \mathbf{p}_j \mathbf{p}_j^T \mathbf{M} \tag{3.37}$$
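The expansion theorem admits an immediate numerical illustration (a sketch only; the 3-DOF matrices below are made up for the purpose):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-DOF system (values invented for this illustration)
M = np.diag([2.0, 1.0, 3.0])
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])

lam, Z = np.linalg.eig(np.linalg.solve(M, K))
lam, Z = lam.real, Z.real
P = Z / np.sqrt(np.diag(Z.T @ M @ Z))    # mass-normalised eigenvectors

w = rng.standard_normal(3)               # arbitrary vector to be expanded
alpha = P.T @ M @ w                      # expansion coefficients (Eq. 3.36)
w_rebuilt = P @ alpha                    # w = sum_j alpha_j p_j (Eq. 3.37)

ok = np.allclose(w, w_rebuilt, atol=1e-10)
```

In other words, the matrix $\sum_j \mathbf{p}_j \mathbf{p}_j^T \mathbf{M}$ acts as the identity on any vector, which is exactly the last of Equations 3.37.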
Remark 3.8
3.3.3 Normal co-ordinates
Having introduced in Section 3.3.1 the modal matrix, we can now consider the new set of co-ordinates $\mathbf{y}$ related to the original ones by the transformation $\mathbf{u} = \mathbf{P}\mathbf{y} = \sum_i \mathbf{p}_i y_i$. Then, Equation 3.23₁ becomes $\mathbf{M}\mathbf{P}\ddot{\mathbf{y}} + \mathbf{K}\mathbf{P}\mathbf{y} = \mathbf{0}$, and we can pre-multiply it by $\mathbf{P}^T$ to obtain $\mathbf{P}^T\mathbf{M}\mathbf{P}\ddot{\mathbf{y}} + \mathbf{P}^T\mathbf{K}\mathbf{P}\mathbf{y} = \mathbf{0}$, or, owing to Equations 3.33,

$$\ddot{\mathbf{y}} + \mathbf{L}\mathbf{y} = \mathbf{0} \quad \Rightarrow \quad \ddot{y}_i + \omega_i^2 y_i = 0 \quad (i = 1, \ldots, n) \tag{3.38}$$
where the second expression is just the first written in terms of components.
The point of the co-ordinate transformation is now evident: the modal
matrix has uncoupled the equations of motion, which are here expressed
in terms of the normal (or modal) co-ordinates $y_i$. In these co-ordinates, moreover, the kinetic and potential energies – and hence the Lagrangian – take the uncoupled forms

$$T = \frac{\dot{\mathbf{y}}^T \mathbf{I}\, \dot{\mathbf{y}}}{2} = \sum_{i=1}^{n} \frac{\dot{y}_i^2}{2}, \qquad V = \frac{\mathbf{y}^T \mathbf{L}\, \mathbf{y}}{2} = \sum_{i=1}^{n} \frac{\omega_i^2 y_i^2}{2} \quad \Rightarrow \quad L = \frac{1}{2} \sum_i \left( \dot{y}_i^2 - \omega_i^2 y_i^2 \right) \tag{3.40}$$
Example 3.2
In order to illustrate the developments above, consider the simple
2-DOF system of Figure 3.2. Taking the position of static equilibrium
as a reference and calling u1 , u2 the vertical displacements of the two
masses, it is not difficult to obtain the (coupled) equations of motion
$$\begin{bmatrix} 3m & 0 \\ 0 & m \end{bmatrix} \begin{bmatrix} \ddot{u}_1 \\ \ddot{u}_2 \end{bmatrix} + \begin{bmatrix} 5k & -k \\ -k & k \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \tag{3.42}$$

so that the eigenvalue problem to be solved is

$$\begin{bmatrix} 5k & -k \\ -k & k \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \omega^2 \begin{bmatrix} 3m & 0 \\ 0 & m \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} \tag{3.43}$$

The condition $\det\left( \mathbf{K} - \omega^2 \mathbf{M} \right) = 0$ then gives the two natural frequencies

$$\omega_1 = \sqrt{\frac{2k}{3m}}, \qquad \omega_2 = \sqrt{\frac{2k}{m}} \tag{3.44}$$
and we can use these values in Equation 3.43 to determine that for the first eigenvector $\mathbf{z}_1 = \left[ z_{11}\ z_{21} \right]^T$ we get the amplitude ratio $z_{11}/z_{21} = 1/3$, while for the second eigenvector $\mathbf{z}_2 = \left[ z_{12}\ z_{22} \right]^T$ we get $z_{12}/z_{22} = -1$. This means that in the first mode at frequency $\omega_1$, both masses are, at every instant of time, below or above their equilibrium position (i.e. they move in phase), with the displacement of the smaller mass $m$ being three times the displacement of the larger mass $3m$. In the second mode at frequency $\omega_2$, on the other hand, at every instant of time, the masses have the same absolute displacement with respect to their equilibrium position but on opposite sides (i.e. they move in opposition of phase).
Given the amplitude ratios above, we can now normalise the eigen-
vectors: if we choose mass-normalisation, we get
$$\mathbf{p}_1 = \frac{1}{\sqrt{12m}} \begin{bmatrix} 1 \\ 3 \end{bmatrix}, \quad \mathbf{p}_2 = \frac{1}{2\sqrt{m}} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \quad \Rightarrow \quad \mathbf{P} = \begin{bmatrix} \dfrac{1}{\sqrt{12m}} & \dfrac{1}{2\sqrt{m}} \\[2mm] \dfrac{3}{\sqrt{12m}} & -\dfrac{1}{2\sqrt{m}} \end{bmatrix} \tag{3.45}$$

so that the transformation $\mathbf{u} = \mathbf{P}\mathbf{y}$ and its inverse $\mathbf{y} = \mathbf{P}^T \mathbf{M} \mathbf{u}$ read, respectively,

$$u_1 = \frac{y_1}{\sqrt{12m}} + \frac{y_2}{2\sqrt{m}}, \qquad u_2 = \frac{3 y_1}{\sqrt{12m}} - \frac{y_2}{2\sqrt{m}} \tag{3.46a}$$

$$y_1 = \frac{\sqrt{3m}}{2} \left( u_1 + u_2 \right), \quad y_2 = \frac{\sqrt{m}}{2} \left( 3u_1 - u_2 \right) \quad \Rightarrow \quad \mathbf{y} = \frac{\sqrt{m}}{2} \begin{bmatrix} \sqrt{3} \left( u_1 + u_2 \right) \\ 3u_1 - u_2 \end{bmatrix} \tag{3.46b}$$
so that using Equations 3.46b, it can be shown that the energy expressions in the original co-ordinates – that is, $T = \left( 3m\dot{u}_1^2 + m\dot{u}_2^2 \right)/2$ and $V = \left( 5k u_1^2 + k u_2^2 - 2k u_1 u_2 \right)/2$ – become $T = \left( \dot{y}_1^2 + \dot{y}_2^2 \right)/2$ and $V = \left( k/3m \right) y_1^2 + \left( k/m \right) y_2^2$, with no cross-product term in the potential energy.
Finally, if we now assume that the system is started into motion with the initial conditions $\mathbf{u}_0 = \left[ 1\ \ 1 \right]^T$, $\dot{\mathbf{u}}_0 = \left[ 0\ \ 0 \right]^T$, we can write the general solution in the form given by Equations 3.35 to get

$$\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 \\ 3 \end{bmatrix} \cos\omega_1 t + \frac{1}{2} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \cos\omega_2 t \tag{3.47}$$
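Since the whole example is explicit, it can be reproduced numerically. The sketch below (with the hypothetical values $m = k = 1$, chosen only for the check) verifies the frequencies of Equation 3.44, the amplitude ratios and the initial conditions of the solution 3.47:

```python
import numpy as np

# Example 3.2 with hypothetical numerical values m = k = 1
m, k = 1.0, 1.0
M = np.array([[3 * m, 0.0], [0.0, m]])
K = np.array([[5 * k, -k], [-k, k]])

lam, Z = np.linalg.eig(np.linalg.solve(M, K))
order = np.argsort(lam.real)
lam, Z = lam.real[order], Z.real[:, order]

w1, w2 = np.sqrt(lam)                      # natural frequencies
ratio1 = Z[0, 0] / Z[1, 0]                 # first-mode amplitude ratio z11/z21
ratio2 = Z[0, 1] / Z[1, 1]                 # second-mode amplitude ratio z12/z22

ok_w1 = np.isclose(w1, np.sqrt(2 * k / (3 * m)))
ok_w2 = np.isclose(w2, np.sqrt(2 * k / m))
ok_r1 = np.isclose(ratio1, 1.0 / 3.0)
ok_r2 = np.isclose(ratio2, -1.0)

# Closed-form response (Eq. 3.47) must satisfy u(0) = [1, 1]
t = 0.0
u = (0.5 * np.array([1.0, 3.0]) * np.cos(w1 * t)
     + 0.5 * np.array([1.0, -1.0]) * np.cos(w2 * t))
ok_ic = np.allclose(u, [1.0, 1.0])
```

Any other consistent choice of $m$ and $k$ works equally well, since the mode shapes depend only on the mass and stiffness ratios.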
Example 3.3
For this second example, which we leave for the most part to the reader,
we consider the coupled pendulum of Figure 3.3. The small-amplitude
Lagrangian is
$$L = \frac{1}{2} m l^2 \left( \dot{\theta}_1^2 + \dot{\theta}_2^2 \right) - \frac{1}{2} m g l \left( \theta_1^2 + \theta_2^2 \right) - \frac{1}{2} k l^2 \left( \theta_1 - \theta_2 \right)^2 \tag{3.48}$$

from which we obtain the matrix equation of motion

$$\begin{bmatrix} ml & 0 \\ 0 & ml \end{bmatrix} \begin{bmatrix} \ddot{\theta}_1 \\ \ddot{\theta}_2 \end{bmatrix} + \begin{bmatrix} kl + mg & -kl \\ -kl & kl + mg \end{bmatrix} \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \tag{3.49}$$
Then, the condition $\det\left( \mathbf{K} - \omega^2 \mathbf{M} \right) = 0$ gives the frequency equation

$$\omega^4 - 2\omega^2 \left( \frac{g}{l} + \frac{k}{m} \right) + \frac{g}{l} \left( \frac{g}{l} + \frac{2k}{m} \right) = 0 \quad \Rightarrow \quad \left( \omega^2 - \frac{g}{l} \right) \left( \omega^2 - \frac{g}{l} - \frac{2k}{m} \right) = 0 \tag{3.50}$$

whose solutions are the two natural frequencies

$$\omega_1 = \sqrt{\frac{g}{l}}, \qquad \omega_2 = \sqrt{\frac{g}{l} + \frac{2k}{m}} \tag{3.51}$$
which in turn lead to the amplitude ratios $z_{11}/z_{21} = 1$ for the first mode at frequency $\omega_1$ and $z_{12}/z_{22} = -1$ for the second mode at frequency $\omega_2$.
In the first mode, therefore, the spring remains unstretched and each
mass separately acts as a simple pendulum of length l ; on the other hand,
in the second mode, the two masses move in opposition of phase. Note
that if the term 2k m in the second frequency is small compared with
g l, the two frequencies are nearly equal and this system provides a nice
example of beats (recall Section 1.3.1). In fact, if one mass is displaced
Figure 3.3 Coupled pendulum.
a small distance while the other is kept in its equilibrium position and
then both masses are released from rest, the disturbed mass vibrates for
a number of cycles without apparently disturbing the other, but then the
motion of this second mass slowly builds up while that of the first one
slowly dies away (and the pattern repeats on and on).
The mass-normalised eigenvectors are now obtained from the condition $\mathbf{z}_i^T \mathbf{M} \mathbf{z}_i = 1$ $(i = 1, 2)$, and we get

$$\mathbf{p}_1 = \frac{1}{\sqrt{2ml}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \mathbf{p}_2 = \frac{1}{\sqrt{2ml}} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \quad \Rightarrow \quad \mathbf{P} = \frac{1}{\sqrt{2ml}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \tag{3.52}$$

while, on the other hand, we get $\mathbf{b}_1 = \left[ 1/\sqrt{2}\ \ 1/\sqrt{2} \right]^T$ and $\mathbf{b}_2 = \left[ 1/\sqrt{2}\ \ -1/\sqrt{2} \right]^T$ if we prefer to work with unit-length eigenvectors. At this point, it is now easy to determine that the relation $\mathbf{y} = \mathbf{P}^T \mathbf{M} \mathbf{u}$ gives the normal co-ordinates

$$y_1 = \sqrt{\frac{ml}{2}} \left( u_1 + u_2 \right), \quad y_2 = \sqrt{\frac{ml}{2}} \left( u_1 - u_2 \right) \quad \Rightarrow \quad \mathbf{y} = \sqrt{\frac{ml}{2}} \begin{bmatrix} u_1 + u_2 \\ u_1 - u_2 \end{bmatrix} \tag{3.53}$$
One point worthy of mention is that in both examples above, the equa-
tions of motions (expressed in terms of the original co-ordinates) are cou-
pled because the stiffness matrix K is non-diagonal – a case which is often
referred to as static or stiffness coupling. When, on the other hand, the
mass matrix is non-diagonal one speaks of dynamic coupling, and a case
in point in this respect is the double pendulum encountered in Chapter 2.
From the small-amplitude Lagrangian of Equation 2.82, in fact, we can
readily obtain the linearised equations of motion; in matrix form, we have
$$\begin{bmatrix} \left( m_1 + m_2 \right) l_1^2 & m_2 l_1 l_2 \\ m_2 l_1 l_2 & m_2 l_2^2 \end{bmatrix} \begin{bmatrix} \ddot{\theta} \\ \ddot{\phi} \end{bmatrix} + \begin{bmatrix} \left( m_1 + m_2 \right) g l_1 & 0 \\ 0 & m_2 g l_2 \end{bmatrix} \begin{bmatrix} \theta \\ \phi \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \tag{3.54}$$

where the non-diagonal mass matrix is responsible for the coupling. Then, with the shorthand notation $M = m_1 + m_2$ and $L = l_1 + l_2$, the condition $\det\left( \mathbf{K} - \omega^2 \mathbf{M} \right) = 0$ leads to the two natural frequencies

$$\omega_{1,2}^2 = \frac{g}{2 m_1 l_1 l_2} \left\{ M L \pm \sqrt{M^2 L^2 - 4 M m_1 l_1 l_2} \right\} \tag{3.55}$$
If the system's mass and stiffness matrices undergo small changes $\partial\mathbf{M}, \partial\mathbf{K}$, the corresponding first-order changes $\partial\lambda_i, \partial\mathbf{p}_i$ of the (mass-normalised) eigenpairs satisfy

$$\left\{ \mathbf{K} - \lambda_i \mathbf{M} \right\} \partial\mathbf{p}_i = \left\{ \mathbf{M}\, \partial\lambda_i + \lambda_i\, \partial\mathbf{M} - \partial\mathbf{K} \right\} \mathbf{p}_i \tag{3.56}$$
which we now pre-multiply on both sides by pTi . By so doing, and by taking
into account that (a) pTi Mpi = 1 and (b) pTi ( K − λi M ) = 0 (which is just the
transpose equation of the eigenproblem rewritten as ( K − λi M ) pi = 0 ), we
obtain the first-order correction for the ith eigenvalue, that is
$$\partial\lambda_i = \mathbf{p}_i^T \left( \partial\mathbf{K} - \lambda_i\, \partial\mathbf{M} \right) \mathbf{p}_i \tag{3.57}$$
Remark 3.9
For the eigenvector correction, on the other hand, expanding it on the basis of the unperturbed eigenvectors as $\partial\mathbf{p}_i = \sum_k c_{ik} \mathbf{p}_k$, one finds the coefficients

$$c_{ik} = \frac{\mathbf{p}_k^T \left( \partial\mathbf{K} - \lambda_i\, \partial\mathbf{M} \right) \mathbf{p}_i}{\lambda_i - \lambda_k} \quad (i \neq k) \tag{3.58}$$
At this point, the only missing piece is the $c$ coefficient for $i = k$; enforcing the normalisation condition $\left( \mathbf{p}_k + \partial\mathbf{p}_k \right)^T \left( \mathbf{M} + \partial\mathbf{M} \right) \left( \mathbf{p}_k + \partial\mathbf{p}_k \right) = 1$ for the perturbed eigenvectors and retaining only first-order terms, we get

$$c_{kk} = -\frac{\mathbf{p}_k^T \left( \partial\mathbf{M} \right) \mathbf{p}_k}{2} \tag{3.59}$$
Finally, putting the pieces back together and denoting by λˆ i , p̂i the per-
turbed eigenpair, we have
$$\hat{\lambda}_i = \lambda_i + \mathbf{p}_i^T \left( \partial\mathbf{K} - \lambda_i\, \partial\mathbf{M} \right) \mathbf{p}_i, \qquad \hat{\mathbf{p}}_i = \mathbf{p}_i - \frac{\mathbf{p}_i^T \left( \partial\mathbf{M} \right) \mathbf{p}_i}{2}\, \mathbf{p}_i + \sum_{r\, (r \neq i)} \frac{\mathbf{p}_r^T \left( \partial\mathbf{K} - \lambda_i\, \partial\mathbf{M} \right) \mathbf{p}_i}{\lambda_i - \lambda_r}\, \mathbf{p}_r \tag{3.60}$$
where these expressions show that only the ith unperturbed eigenpair is
needed for the calculation of λ̂i , while the complete unperturbed set is
required to obtain p̂i . Also, note that the greater contributions to ∂ pi come
from the closer modes, for which the denominator λi − λr is smaller.
Example 3.4
As a simple example, let us go back to the system of Example 3.2 in
Section 3.3.3. If we now consider the following modifications: (a) increase the first mass by $0.25m$, (b) decrease the second mass by $0.1m$ and (c) increase the stiffness of the first spring by $0.1k$, the 'perturbing' mass and stiffness terms $\partial\mathbf{M}, \partial\mathbf{K}$ are

$$\partial\mathbf{M} = \begin{bmatrix} 0.25m & 0 \\ 0 & -0.1m \end{bmatrix}, \qquad \partial\mathbf{K} = \begin{bmatrix} 0.1k & 0 \\ 0 & 0 \end{bmatrix}$$
Recalling from Example 3.2 that the mass-normalised eigenvectors are

$$\mathbf{p}_1 = \frac{1}{\sqrt{12m}} \left( 1\ \ 3 \right)^T, \qquad \mathbf{p}_2 = \frac{1}{2\sqrt{m}} \left( 1\ \ -1 \right)^T$$

Equation 3.57 gives the first-order corrections

$$\partial\lambda_1 = \mathbf{p}_1^T \left( \partial\mathbf{K} - \lambda_1\, \partial\mathbf{M} \right) \mathbf{p}_1 = 0.0444\,(k/m), \qquad \partial\lambda_2 = \mathbf{p}_2^T \left( \partial\mathbf{K} - \lambda_2\, \partial\mathbf{M} \right) \mathbf{p}_2 = -0.0500\,(k/m)$$

and therefore the perturbed eigenvalues $\hat{\lambda}_1 = \lambda_1 + \partial\lambda_1 = 0.711\,(k/m)$ and $\hat{\lambda}_2 = \lambda_2 + \partial\lambda_2 = 1.950\,(k/m)$.
For the first eigenvector, on the other hand, we obtain the expansion coefficients (Equations 3.58 and 3.59)

$$c_{12} = \frac{\mathbf{p}_2^T \left( \partial\mathbf{K} - \lambda_1\, \partial\mathbf{M} \right) \mathbf{p}_1}{\lambda_1 - \lambda_2} = 0.0289, \qquad c_{11} = -\frac{\mathbf{p}_1^T \left( \partial\mathbf{M} \right) \mathbf{p}_1}{2} = 0.0271$$

and consequently

$$\hat{\mathbf{p}}_1 = \mathbf{p}_1 + c_{11} \mathbf{p}_1 + c_{12} \mathbf{p}_2 = \frac{1}{\sqrt{m}} \left( 0.311\ \ 0.875 \right)^T \tag{3.62a}$$

while a similar calculation for the second eigenvector gives

$$\hat{\mathbf{p}}_2 = \frac{1}{\sqrt{m}} \left( 0.459\ \ -0.584 \right)^T \tag{3.62b}$$
These values compare well with the eigenvectors obtained by solving the modified eigenproblem exactly, that is,

$$\mathbf{p}_1^{(\mathrm{exact})} = \frac{1}{\sqrt{m}} \left( 0.313\ \ 0.871 \right)^T, \qquad \mathbf{p}_2^{(\mathrm{exact})} = \frac{1}{\sqrt{m}} \left( 0.458\ \ -0.594 \right)^T$$
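The agreement between the first-order estimates and the exact results can be checked numerically. The following sketch (with the hypothetical values $m = k = 1$) compares the perturbed eigenvalues predicted by Equation 3.60 with those of the exactly solved modified eigenproblem:

```python
import numpy as np

# Example 3.2 system with the modifications of Example 3.4 (hypothetical m = k = 1)
m, k = 1.0, 1.0
M = np.diag([3 * m, m])
K = np.array([[5 * k, -k], [-k, k]])
dM = np.diag([0.25 * m, -0.1 * m])
dK = np.diag([0.1 * k, 0.0])

def eigen(Mm, Kk):
    # Sorted eigenvalues and mass-normalised eigenvectors of the GEP Kz = lam Mz
    lam, Z = np.linalg.eig(np.linalg.solve(Mm, Kk))
    order = np.argsort(lam.real)
    lam, Z = lam.real[order], Z.real[:, order]
    return lam, Z / np.sqrt(np.diag(Z.T @ Mm @ Z))

lam, P = eigen(M, K)

# First-order eigenvalue corrections (Eq. 3.57)
d_lam = np.array([P[:, i] @ (dK - lam[i] * dM) @ P[:, i] for i in range(2)])
lam_hat = lam + d_lam

lam_exact, _ = eigen(M + dM, K + dK)        # exact perturbed eigenvalues

ok_vals = np.allclose(lam_hat, [0.711, 1.950], atol=1e-2)   # book's figures
ok = np.allclose(lam_hat, lam_exact, atol=0.05)             # first order: close, not exact
```

The residual discrepancy (of order $\|\partial\mathbf{M}\|^2, \|\partial\mathbf{K}\|^2$) is exactly what one expects from a first-order perturbation.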
A similar perturbation strategy applies to a lightly damped system, whose eigenpairs satisfy the quadratic eigenvalue problem

$$\left( \lambda_j^2 \mathbf{M} + \lambda_j \mathbf{C} + \mathbf{K} \right) \mathbf{z}_j = \mathbf{0} \quad (j = 1, \ldots, n) \tag{3.63}$$

where we call $\lambda_j, \mathbf{z}_j$ the eigenpairs of the perturbed – that is, lightly damped – system. By assuming that they differ only slightly from the undamped eigenpairs $\omega_j, \mathbf{p}_j$, we can write the first-order approximations

$$\lambda_j = i\omega_j + \partial\lambda_j, \qquad \mathbf{z}_j = \mathbf{p}_j + \partial\mathbf{p}_j \tag{3.64}$$
and substitute them in Equation 3.63. Then, if in the (rather lengthy) resulting
expression, we take into account the undamped relation Kp j − ω 2j Mp j = 0
and neglect second-order terms (including the terms containing $\left( \partial\lambda_j \right)\mathbf{C}$ and $\mathbf{C}\,\partial\mathbf{p}_j$, since $\mathbf{C}$ itself is small for light damping), we obtain

$$\left\{ \mathbf{K} - \omega_j^2 \mathbf{M} \right\} \partial\mathbf{p}_j + i\omega_j \left\{ \mathbf{C} + 2\left( \partial\lambda_j \right) \mathbf{M} \right\} \mathbf{p}_j = \mathbf{0} \tag{3.65}$$
At this point, by pre-multiplying by $\mathbf{p}_j^T$ and observing that $\mathbf{p}_j^T \left( \mathbf{K} - \omega_j^2 \mathbf{M} \right) = \mathbf{0}$, we obtain $\mathbf{p}_j^T \left\{ \mathbf{C} + 2\left( \partial\lambda_j \right) \mathbf{M} \right\} \mathbf{p}_j = 0$, which, recalling that $\mathbf{p}_j^T \mathbf{M}\, \mathbf{p}_j = 1$, gives

$$\partial\lambda_j = -\frac{\mathbf{p}_j^T \mathbf{C}\, \mathbf{p}_j}{2} \tag{3.66}$$
For the eigenvector correction, expanding $\partial\mathbf{p}_j = \sum_{k \neq j} a_{jk} \mathbf{p}_k$ on the basis of the undamped eigenvectors, a similar line of reasoning gives the coefficients

$$a_{jk} = -\frac{i\omega_j\, \mathbf{p}_k^T \mathbf{C}\, \mathbf{p}_j}{\omega_k^2 - \omega_j^2} \quad (k \neq j) \tag{3.67}$$

so that the first-order approximation to the damped eigenvector is

$$\mathbf{z}_j = \mathbf{p}_j + \sum_{k\, (k \neq j)} \frac{i\omega_j\, \mathbf{p}_k^T \mathbf{C}\, \mathbf{p}_j}{\omega_j^2 - \omega_k^2}\, \mathbf{p}_k \tag{3.68}$$
Moreover, combining the orthogonality relations with the eigenvalue problem itself, one obtains the additional relations

$$\mathbf{p}_j^T \left( \mathbf{K} \mathbf{M}^{-1} \right)^a \mathbf{K}\, \mathbf{p}_i = \lambda_i^{a+1} \delta_{ij} \quad (a = 1, 2, \ldots) \tag{3.70}$$

which, by inserting before the parenthesis in the l.h.s. the term $\mathbf{M}\mathbf{M}^{-1}$, can be expressed in the equivalent form $\mathbf{p}_j^T \mathbf{M} \left( \mathbf{M}^{-1} \mathbf{K} \right)^{a+1} \mathbf{p}_i = \lambda_i^{a+1} \delta_{ij}$. Analogous relations hold for negative powers; in particular,

$$\mathbf{p}_j^T \mathbf{M} \mathbf{K}^{-1} \mathbf{M}\, \mathbf{p}_i = \mathbf{p}_j^T \mathbf{M} \left( \mathbf{M}^{-1} \mathbf{K} \right)^{-1} \mathbf{p}_i = \lambda_i^{-1} \delta_{ij} \tag{3.72}$$
3.3.7 Eigenvalue degeneracy
So far, we have assumed that the system’s n eigenvalues are all distinct and have
postponed (recall Remark 3.5 (iii)) the complication of degenerate eigenvalues –
that is, the case in which one or more roots of the frequency equation has an
algebraic multiplicity greater than one, or in more physical terms, when two or
more modes of vibration occur at the same natural frequency. We do it here.
However, observing that in most cases of interest the system’s matrices
are symmetric, the developments of Appendix A (Sections A.4.1 and A.4.2)
show that degenerate eigenvalues are not a complication at all but only a
minor inconvenience. In this respect, in fact, the main result is Proposition
A.7, which tells us that for symmetric matrices we can always find an ortho-
normal set of n eigenvectors. This is because symmetric matrices (which are
special cases of normal matrices) are non-defective, meaning that the alge-
braic multiplicity of an eigenvalue λi always coincides with the dimension of
the eigenspace e ( λi ) associated with λi , i.e. with its geometric multiplicity.
So, if the algebraic multiplicity of λi is, say, m (1 < m ≤ n ), this implies
that we always have the possibility to find m linearly independent vectors in
e ( λi ) and – if they are not already so – make them mutually orthonormal by
means of the Gram–Schmidt procedure described in Section A.2.2. Then,
since the eigenspace e ( λi ) is orthogonal to the eigenspaces e ( λk ) for k ≠ i ,
these resulting m eigenvectors will automatically be orthogonal to the other
eigenvectors associated with λk for all k ≠ i .
Remark 3.10
Similarly, the presence of rigid-body modes does not substantially change the arguments leading to the solution of the free-vibration problem. In fact, if we consider an $n$-DOF system with $m$ rigid-body modes $\mathbf{r}_1, \ldots, \mathbf{r}_m$, we can express the transformation to normal co-ordinates as

$$\mathbf{u} = \sum_{j=1}^{m} \mathbf{r}_j w_j(t) + \sum_{k=1}^{n-m} \mathbf{p}_k y_k(t) = \mathbf{R}\mathbf{w} + \mathbf{P}\mathbf{y} \tag{3.75}$$

so that the equations of motion uncouple into

$$\ddot{\mathbf{w}} = \mathbf{0}, \qquad \ddot{\mathbf{y}} + \mathbf{L}\mathbf{y} = \mathbf{0} \tag{3.76}$$
while the initial conditions can be expanded as

$$\mathbf{u}_0 = \sum_{j=1}^{m} b_j \mathbf{r}_j + \sum_{k=1}^{n-m} D_k \mathbf{p}_k, \qquad \dot{\mathbf{u}}_0 = \sum_{j=1}^{m} a_j \mathbf{r}_j + \sum_{k=1}^{n-m} \omega_k E_k \mathbf{p}_k \tag{3.77}$$
By using once again the orthogonality conditions, it is then left to the reader the easy task of showing that we arrive at the expression
$$\mathbf{u} = \sum_{j=1}^{m} \left[ \left( \mathbf{r}_j^T \mathbf{M} \dot{\mathbf{u}}_0 \right) t + \mathbf{r}_j^T \mathbf{M} \mathbf{u}_0 \right] \mathbf{r}_j + \sum_{k=1}^{n-m} \left[ \left( \mathbf{p}_k^T \mathbf{M} \mathbf{u}_0 \right) \cos\omega_k t + \frac{\mathbf{p}_k^T \mathbf{M} \dot{\mathbf{u}}_0}{\omega_k} \sin\omega_k t \right] \mathbf{p}_k \tag{3.78a}$$

or, equivalently,

$$\mathbf{u} = \sum_{j=1}^{m} \mathbf{r}_j \mathbf{r}_j^T \mathbf{M} \left( \mathbf{u}_0 + \dot{\mathbf{u}}_0 t \right) + \sum_{k=1}^{n-m} \mathbf{p}_k \mathbf{p}_k^T \mathbf{M} \left( \mathbf{u}_0 \cos\omega_k t + \frac{\dot{\mathbf{u}}_0}{\omega_k} \sin\omega_k t \right) \tag{3.78b}$$
which are the counterparts of Equations 3.35a and 3.35b for a system with
m rigid-body modes. Using Equation 3.75, it is also left to the reader to
show that the kinetic and potential energies become now
$$T = \frac{1}{2} \sum_{j=1}^{m} \dot{w}_j^2 + \frac{1}{2} \sum_{k=1}^{n-m} \dot{y}_k^2, \qquad V = \frac{\mathbf{y}^T \mathbf{L}\, \mathbf{y}}{2} = \frac{1}{2} \sum_{k=1}^{n-m} \omega_k^2 y_k^2 \tag{3.79}$$
Remark 3.11
Consider, for example, an unrestrained system of two masses $m_1, m_2$ connected by a spring of stiffness $k$; its two natural frequencies are

$$\omega_1 = 0, \qquad \omega_2 = \sqrt{\frac{k \left( m_1 + m_2 \right)}{m_1 m_2}}$$
If now, on the other hand, we consider the relative displacement $z = x_1 - x_2$ of $m_1$ with respect to $m_2$, the equation of motion is $m_1 m_2 \ddot{z} + k \left( m_1 + m_2 \right) z = 0$, from which we obtain only the non-zero natural frequency $\omega_2$ above.
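A minimal numerical check of this remark (the values of $m_1$, $m_2$ and $k$ below are hypothetical):

```python
import numpy as np

# Unrestrained two-mass system: masses connected by a single spring
m1, m2, k = 2.0, 3.0, 5.0
M = np.diag([m1, m2])
K = k * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Eigenvalues of M^-1 K are the squared natural frequencies
lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)

ok_rigid = np.isclose(lam[0], 0.0, atol=1e-9)          # rigid-body mode
ok_elastic = np.isclose(np.sqrt(lam[1]),
                        np.sqrt(k * (m1 + m2) / (m1 * m2)))
```

The zero eigenvalue is the rigid-body translation; the second one reproduces the closed-form frequency quoted above.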
The equation of motion for the free vibration of a viscously damped system
is $\mathbf{M}\ddot{\mathbf{u}} + \mathbf{C}\dot{\mathbf{u}} + \mathbf{K}\mathbf{u} = \mathbf{0}$. As anticipated in Section 3.3.5, assuming a solution of the form $\mathbf{u} = \mathbf{z}\, e^{\lambda t}$ gives the quadratic eigenvalue problem (QEP for short; but it is also called complex eigenvalue problem by some authors) of Equation 3.63, i.e. $\left( \lambda^2 \mathbf{M} + \lambda \mathbf{C} + \mathbf{K} \right) \mathbf{z} = \mathbf{0}$. If now we determine the solution of the undamped problem, form the modal matrix $\mathbf{P}$ and – just like we did in Section 3.3.3 – pass to the normal co-ordinates $\mathbf{y}$ by means of the transformation $\mathbf{u} = \mathbf{P}\mathbf{y}$, we are led to

$$\mathbf{I}\ddot{\mathbf{y}} + \mathbf{P}^T \mathbf{C} \mathbf{P}\, \dot{\mathbf{y}} + \mathbf{L}\mathbf{y} = \mathbf{0} \tag{3.80}$$
The matrix $\mathbf{P}^T \mathbf{C} \mathbf{P}$, however, is not diagonal in general; the equations uncouple – the case of so-called classical (or proportional) damping – if and only if

$$\mathbf{C}\mathbf{M}^{-1}\mathbf{K} = \mathbf{K}\mathbf{M}^{-1}\mathbf{C} \tag{3.81}$$
3.4.1 Rayleigh damping
In light of the facts that the decoupling of the equations of motion is a note-
worthy simplification and that the various energy dissipation mechanisms of a
physical system are in most cases poorly known, a frequently adopted model-
ling assumption called proportional or Rayleigh damping consists in express-
ing C as a linear combination of the mass and stiffness matrices, that is
$$\mathbf{C} = a\,\mathbf{M} + b\,\mathbf{K} \tag{3.83a}$$

where $a, b$ are two scalars. A damping matrix of this form, in fact, leads to a diagonal matrix $\hat{\mathbf{C}} = \mathbf{P}^T \mathbf{C} \mathbf{P}$, and the modal damping ratios are given by

$$2\zeta_j \omega_j = \mathbf{p}_j^T \left( a\mathbf{M} + b\mathbf{K} \right) \mathbf{p}_j = a + b\,\omega_j^2 \quad \Rightarrow \quad \zeta_j = \frac{1}{2} \left( \frac{a}{\omega_j} + b\,\omega_j \right) \tag{3.83b}$$
Remark 3.12
Example 3.5
By considering the simple 2-DOF system with matrices

$$\mathbf{M} = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix} 3/2 & -1/2 \\ -1/2 & 1/2 \end{bmatrix}, \qquad \mathbf{K} = \begin{bmatrix} 3 & -1 \\ -1 & 1 \end{bmatrix}$$

the reader is invited to show that the damping is classical and that the transformation to normal co-ordinates leads to the diagonal matrix

$$\hat{\mathbf{C}} = \mathbf{P}^T \mathbf{C} \mathbf{P} = \begin{bmatrix} 1/4 & 0 \\ 0 & 1 \end{bmatrix}$$
the reader is also invited to show that the damping ratios and damped
frequencies are (to three decimal places) ζ 1 = 0.177, ζ 2 = 0.354 and
ω d 1 = 0.696, ω d 2 = 1.323, respectively. As a final check, one can show
that the calculation of $\det\left( \lambda^2 \mathbf{M} + \lambda \mathbf{C} + \mathbf{K} \right)$ leads to the characteristic equation $2\lambda^4 + 2.5\lambda^3 + 5.5\lambda^2 + 2\lambda + 2 = 0$, whose solutions are the two complex conjugate pairs $\lambda_{1,2} = -0.125 \pm 0.696\,i$ and $\lambda_{3,4} = -0.500 \pm 1.323\,i$.
Then, recalling from Section 3.2 that these solutions are represented in the form $-\zeta_j \omega_j \pm i\omega_j \sqrt{1 - \zeta_j^2}$ (where $\omega_j$, for $j = 1, 2$, are the undamped natural frequencies), we readily see that the imaginary parts are the damped frequencies given above, while the real parts correspond to the damping ratios $\zeta_1 = 0.125/\omega_1 = 0.177$ and $\zeta_2 = 0.5/\omega_2 = 0.354$.
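The characteristic equation quoted in the example can be verified directly by computing the roots of the quartic polynomial:

```python
import numpy as np

# Characteristic equation of Example 3.5: det(l^2 M + l C + K) = 0 expands to
# 2 l^4 + 2.5 l^3 + 5.5 l^2 + 2 l + 2 = 0
roots = np.roots([2.0, 2.5, 5.5, 2.0, 2.0])

# Two complex-conjugate pairs: -0.125 +/- 0.696i and -0.500 +/- 1.323i
ok_real = np.allclose(np.sort(roots.real),
                      [-0.5, -0.5, -0.125, -0.125], atol=1e-3)
ok_imag = np.allclose(np.sort(np.abs(roots.imag)),
                      [0.696, 0.696, 1.323, 1.323], atol=1e-3)
```

The real parts are the decay rates $-\zeta_j \omega_j$ and the imaginary parts the damped frequencies $\omega_{dj}$, exactly as stated in the text.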
More generally, the modal equations of motion also uncouple when the damping matrix is expressed as the so-called Caughey series

$$\mathbf{C} = \mathbf{M} \sum_{k=0}^{r-1} a_k \left( \mathbf{M}^{-1} \mathbf{K} \right)^k \tag{3.84a}$$
where the r coefficients ak can be determined from the damping ratios speci-
fied for any r modes – say, the first r – by solving the r algebraic equations
$$2\zeta_j = \sum_{k=0}^{r-1} a_k\, \omega_j^{2k-1} \tag{3.84b}$$
Then, once the coefficients $a_k$ have been determined, the same Equation 3.84b can be used to obtain the damping ratios for modes $r+1, \ldots, n$. By so doing, however, attention should be paid to the possibility that some of these ratios may turn out to be negative – a physically unacceptable result that must be checked for. Alternatively, a damping matrix with specified modal damping ratios $\zeta_j$ can be assembled directly as

$$\mathbf{C} = \mathbf{M}\mathbf{P}\hat{\mathbf{C}}\mathbf{P}^T\mathbf{M} = \mathbf{M} \left( \sum_{j=1}^{n} 2\zeta_j \omega_j\, \mathbf{p}_j \mathbf{p}_j^T \right) \mathbf{M} \tag{3.85}$$
where the second expression shows clearly that the contribution of each
mode to the damping matrix is proportional to its damping ratio. Obviously,
setting ζ = 0 for some modes means that we consider these modes as
undamped.
3.4.2 Non-classical damping
Although certainly convenient, the assumption of proportional or classical damping is not always justified, and one must also consider the more general case in which the equations of motion 3.23₂ cannot be uncoupled (at least by the 'standard' method of passing to normal co-ordinates). Assuming a solution of the form $\mathbf{u} = \mathbf{z}\, e^{\lambda t}$, substitution in the equations of motion leads
to the QEP $\left( \lambda^2 \mathbf{M} + \lambda \mathbf{C} + \mathbf{K} \right) \mathbf{z} = \mathbf{0}$, which has non-trivial solutions when

$$\det\left( \lambda^2 \mathbf{M} + \lambda \mathbf{C} + \mathbf{K} \right) = 0 \tag{3.86}$$
holds. This is a characteristic equation of order 2n with real coefficients,
thus implying that the 2n roots are either real or occur in complex conju-
gate pairs. The case of most interest in vibrations is when all the roots are
in complex conjugate pairs, a case in which the corresponding eigenvectors
are also in complex conjugates pairs. In addition, since the free-vibration
of stable systems dies out with time because of inevitable energy loss, these
complex eigenvalues must have a negative real part.
Given these considerations, linearity implies that the general solution to
the free-vibration problem is given by the superposition
$$\mathbf{u} = \sum_{j=1}^{2n} c_j\, \mathbf{z}_j\, e^{\lambda_j t} \tag{3.87}$$
where the $2n$ constants $c_j$ are determined from the initial conditions. Also, since $\mathbf{u}$ is real, we must have $c_{j+1} = c_j^*$ if, without loss of generality, we assign the index $(j+1)$ to the complex conjugate of the $j$th eigenpair.
Alternatively, we can assign the same index j to both eigensolutions,
write the eigenvalue as λ j = µ j + iω j (with the corresponding eigenvector z j)
and put together this eigenpair with its complex conjugate λ j∗ , z ∗j to form a
damped mode s j defined as
$$\mathbf{s}_j = c_j \mathbf{z}_j\, e^{\left( \mu_j + i\omega_j \right) t} + c_j^* \mathbf{z}_j^*\, e^{\left( \mu_j - i\omega_j \right) t} = 2 e^{\mu_j t}\, \mathrm{Re}\left\{ c_j \mathbf{z}_j\, e^{i\omega_j t} \right\} = C_j\, e^{\mu_j t}\, \mathrm{Re}\left\{ \mathbf{z}_j\, e^{i\left( \omega_j t - \theta_j \right)} \right\} \tag{3.88a}$$
where Re{•} denotes the real part of the term within parenthesis, and in
the last relation, we expressed c j in polar form as 2c j = C j e − iθ j . Clearly,
by so doing, Equation 3.87 becomes a superposition of the n damped
modes of vibration $\mathbf{s}_j$. Moreover, writing the complex vector $\mathbf{z}_j$ as $\mathbf{z}_j = \left[ r_{1j}\, e^{-i\phi_{1j}}\ \cdots\ r_{nj}\, e^{-i\phi_{nj}} \right]^T$, we have

$$\mathbf{s}_j = C_j\, e^{\mu_j t} \begin{bmatrix} r_{1j} \cos\left( \omega_j t - \theta_j - \phi_{1j} \right) \\ \vdots \\ r_{nj} \cos\left( \omega_j t - \theta_j - \phi_{nj} \right) \end{bmatrix} \tag{3.88b}$$
Remark 3.13
$$2\zeta_j \omega_j = \frac{\mathbf{z}_j^H \mathbf{C}\, \mathbf{z}_j}{\mathbf{z}_j^H \mathbf{M}\, \mathbf{z}_j}, \qquad \omega_j^2 = \frac{\mathbf{z}_j^H \mathbf{K}\, \mathbf{z}_j}{\mathbf{z}_j^H \mathbf{M}\, \mathbf{z}_j} \tag{3.91}$$

where, following a common notation (also used in Appendix A), $\mathbf{z}_j^H = \left( \mathbf{z}_j^* \right)^T$.
The preceding sections have shown that for both conservative (undamped)
and non-conservative (damped) systems, we must solve an eigenproblem: a
GEP in the first case and a QEP in the second case. Since, however, there exist well-established and efficient numerical methods for the standard eigenvalue problem (SEP), it is often convenient to recast these problems in standard form.

3.5.1 Undamped Systems
Let us consider the conservative case first. Provided that M is non-singular,
both sides of the GEP Kz = λ Mz can be pre-multiplied by M −1 to give
M −1 Kz = λ z , which is an SEP for the matrix A = M −1 K (often referred to as
the dynamic matrix). Similarly, if K is non-singular, we can pre-multiply the
generalised problem by K −1 to get Az = γ z, where now the dynamic matrix
is $\mathbf{A} = \mathbf{K}^{-1}\mathbf{M}$ and $\gamma = 1/\lambda$. The main drawback of these methods is that the
dynamic matrix, in general, is not symmetric.
In order to obtain a more convenient symmetric problem, we recall that
one possibility (using the Cholesky factorisation) was considered in Remark
3.10 of Section 3.3.7. A second possibility consists in solving the SEP for the
(symmetric and positive-definite) matrix M and consider its spectral decom-
position M = RD2 RT , where R is orthogonal (i.e. RRT = I ) and D2 is the
diagonal matrix of the positive – this is why we write D2 – eigenvalues of M.
Then, substitution of the spectral decomposition into the original GEP gives
Kz = λ RD2 RT z and consequently, since DRT = (RD)T , Kz = λ NNT z where
N = RD. If now we pre-multiply both sides of the eigenproblem by N −1, insert
the identity matrix N −T NT between K and z on the l.h.s. and define the vec-
tor x = NT z, we obtain an SEP for the symmetric matrix S = N −1 KN −T .
A different strategy consists in converting the set of n second-order ordi-
nary differential equations into an equivalent set of 2n first-order ordinary
differential equations by introducing velocities as an auxiliary set of vari-
ables. This is called a state-space formulation of the equations of motions
because, in mathematical terms, the set of 2n variables u1 ,… , un , u1 ,… , u n
defines the so-called state space of the system. In essence, this approach
is similar to what we did in Section 2.3.1, where we discussed Hamilton’s
canonical equations. In that case, we recall, the generalised momenta p j
played the role of auxiliary variables (and the set of 2n variables q j , p j defines
the so-called phase space of the system. By contrast, the n-dimensional space
defined by the variables u1 , , un is known as configuration space).
Starting from the set of $n$ second-order ordinary differential equations $\mathbf{M}\ddot{\mathbf{u}} + \mathbf{K}\mathbf{u} = \mathbf{0}$, pre-multiplication by $\mathbf{M}^{-1}$ leads to $\ddot{\mathbf{u}} = -\mathbf{M}^{-1}\mathbf{K}\mathbf{u}$. Now, introducing the $2n \times 1$ vector $\mathbf{x} = \left[ \mathbf{u}\ \ \dot{\mathbf{u}} \right]^T$ (which clearly implies $\dot{\mathbf{x}} = \left[ \dot{\mathbf{u}}\ \ \ddot{\mathbf{u}} \right]^T$), we can put together the equation $\ddot{\mathbf{u}} = -\mathbf{M}^{-1}\mathbf{K}\mathbf{u}$ with the trivial identity $\dot{\mathbf{u}} = \dot{\mathbf{u}}$ in the single matrix equation

$$\dot{\mathbf{x}} = \mathbf{A}\mathbf{x}, \qquad \mathbf{A} = \begin{bmatrix} \mathbf{0} & \mathbf{I} \\ -\mathbf{M}^{-1}\mathbf{K} & \mathbf{0} \end{bmatrix} \tag{3.92}$$
For the viscously damped case $\mathbf{M}\ddot{\mathbf{u}} + \mathbf{C}\dot{\mathbf{u}} + \mathbf{K}\mathbf{u} = \mathbf{0}$, on the other hand, a symmetric state-space form is obtained by appending the trivial identity $\mathbf{M}\dot{\mathbf{u}} - \mathbf{M}\dot{\mathbf{u}} = \mathbf{0}$; this gives

$$\begin{bmatrix} \mathbf{C} & \mathbf{M} \\ \mathbf{M} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \dot{\mathbf{u}} \\ \ddot{\mathbf{u}} \end{bmatrix} + \begin{bmatrix} \mathbf{K} & \mathbf{0} \\ \mathbf{0} & -\mathbf{M} \end{bmatrix} \begin{bmatrix} \mathbf{u} \\ \dot{\mathbf{u}} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \end{bmatrix} \tag{3.93a}$$

which, defining the $2n \times 1$ state vector $\mathbf{x} = \left[ \mathbf{u}\ \ \dot{\mathbf{u}} \right]^T$ and the $2n \times 2n$ real symmetric (but not, in general, positive-definite) matrices

$$\hat{\mathbf{M}} = \begin{bmatrix} \mathbf{C} & \mathbf{M} \\ \mathbf{M} & \mathbf{0} \end{bmatrix}, \qquad \hat{\mathbf{K}} = \begin{bmatrix} \mathbf{K} & \mathbf{0} \\ \mathbf{0} & -\mathbf{M} \end{bmatrix} \tag{3.93b}$$

can be written as $\hat{\mathbf{M}}\dot{\mathbf{x}} + \hat{\mathbf{K}}\mathbf{x} = \mathbf{0}$. Then, assuming a solution of the form $\mathbf{x} = \mathbf{v}\, e^{\lambda t}$ leads to the symmetric GEP

$$\hat{\mathbf{K}}\mathbf{v} + \lambda \hat{\mathbf{M}}\mathbf{v} = \mathbf{0} \tag{3.94}$$
whose characteristic equation and eigenvalues are the same as the ones of
the original QEP. The $2n$-dimensional eigenvectors of Equation 3.94 have the form $\mathbf{v}_j = \left[ \mathbf{z}_j\ \ \lambda_j \mathbf{z}_j \right]^T$, where $\mathbf{z}_j$ are the $n$-dimensional eigenvectors of
$$\mathbf{v}_i^T \hat{\mathbf{M}} \mathbf{v}_j = \hat{M}_j\, \delta_{ij}, \qquad \mathbf{v}_i^T \hat{\mathbf{K}} \mathbf{v}_j = \hat{K}_j\, \delta_{ij}, \qquad \lambda_j = -\frac{\hat{K}_j}{\hat{M}_j} = -\frac{\mathbf{v}_j^T \hat{\mathbf{K}} \mathbf{v}_j}{\mathbf{v}_j^T \hat{\mathbf{M}} \mathbf{v}_j} \tag{3.95}$$
Remark 3.14
i. The fact that the symmetric GEP 3.94 leads, in the under-damped case,
to complex eigenpairs may seem to contradict the ‘educated guess’
(based on the developments of preceding sections and of Appendix A)
that a symmetric eigenvalue problem should produce real eigenvalues.
However, there is no contradiction because, as pointed out above, the
matrices $\hat{\mathbf{M}}, \hat{\mathbf{K}}$ are not, in general, positive-definite.
ii. Note that some authors denote the matrices $\hat{\mathbf{M}}, \hat{\mathbf{K}}$ by the symbols $\mathbf{A}, \mathbf{B}$, respectively.
Also, the GEP 3.94 can be converted into the standard form by pre-multiplying both sides by $\hat{\mathbf{M}}^{-1}$ or $\hat{\mathbf{K}}^{-1}$ (when they exist), in analogy with the procedure considered at the beginning of the preceding section. In this case, it is not difficult to show that, for example, the matrix $\hat{\mathbf{M}}^{-1}$ can be obtained in terms of the original mass and damping matrices as

$$\hat{\mathbf{M}}^{-1} = \begin{bmatrix} \mathbf{0} & \mathbf{M}^{-1} \\ \mathbf{M}^{-1} & -\mathbf{M}^{-1}\mathbf{C}\mathbf{M}^{-1} \end{bmatrix}$$
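The block formula for $\hat{\mathbf{M}}^{-1}$ is easy to confirm numerically; the sketch below uses made-up matrices (only invertibility of $\mathbf{M}$ is required):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Hypothetical mass and damping matrices (values invented for the check)
A0 = rng.standard_normal((n, n))
M = A0 @ A0.T + n * np.eye(n)                 # SPD, hence invertible
C = rng.standard_normal((n, n))
C = 0.5 * (C + C.T)                           # symmetric damping matrix

M_hat = np.block([[C, M],
                  [M, np.zeros((n, n))]])

Mi = np.linalg.inv(M)
M_hat_inv = np.block([[np.zeros((n, n)), Mi],
                      [Mi, -Mi @ C @ Mi]])    # claimed block inverse

ok = np.allclose(M_hat @ M_hat_inv, np.eye(2 * n), atol=1e-8)
```

The off-diagonal blocks of the product cancel exactly because $\mathbf{C}\mathbf{M}^{-1} - \mathbf{M}\mathbf{M}^{-1}\mathbf{C}\mathbf{M}^{-1} = \mathbf{0}$.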
The last possibility we consider here parallels the method described at the
end of the preceding Section 3.5.1 for the undamped case. Starting from
$\mathbf{M}\ddot{\mathbf{u}} + \mathbf{C}\dot{\mathbf{u}} + \mathbf{K}\mathbf{u} = \mathbf{0}$, pre-multiplication by $\mathbf{M}^{-1}$ gives $\ddot{\mathbf{u}} = -\mathbf{M}^{-1}\mathbf{K}\mathbf{u} - \mathbf{M}^{-1}\mathbf{C}\dot{\mathbf{u}}$. This equation together with the trivial identity $\dot{\mathbf{u}} = \dot{\mathbf{u}}$ can be combined into the single matrix equation $\dot{\mathbf{x}} = \mathbf{A}\mathbf{x}$, where $\mathbf{x} = \left[ \mathbf{u}\ \ \dot{\mathbf{u}} \right]^T$ and $\mathbf{A}$ is now the $2n \times 2n$ matrix

$$\mathbf{A} = \begin{bmatrix} \mathbf{0} & \mathbf{I} \\ -\mathbf{M}^{-1}\mathbf{K} & -\mathbf{M}^{-1}\mathbf{C} \end{bmatrix} \tag{3.96}$$
3.6 EIGENVALUES SENSITIVITY OF VISCOUSLY DAMPED SYSTEMS
Proceeding along the lines of Sections 3.3.4 and 3.3.5, we consider now
the sensitivity of the eigenvalues of a non-proportionally damped system.
Assuming that the system does not possess repeated eigenvalues, our start-
ing point here is the fact that the eigenpairs of a damped system satisfy
the QEP $\left( \lambda_j^2 \mathbf{M} + \lambda_j \mathbf{C} + \mathbf{K} \right) \mathbf{z}_j = \mathbf{0}$, which, defining for present convenience the matrix $\mathbf{F}_j = \lambda_j^2 \mathbf{M} + \lambda_j \mathbf{C} + \mathbf{K}$, can be rewritten as $\mathbf{F}_j \mathbf{z}_j = \mathbf{0}$. Then, pre-multiplying by $\mathbf{z}_j^T$ and taking the first-order variation of the resulting scalar relation $\mathbf{z}_j^T \mathbf{F}_j \mathbf{z}_j = 0$, we get

$$\left( \partial\mathbf{z}_j \right)^T \mathbf{F}_j \mathbf{z}_j + \mathbf{z}_j^T \left( \partial\mathbf{F}_j \right) \mathbf{z}_j + \mathbf{z}_j^T \mathbf{F}_j \left( \partial\mathbf{z}_j \right) = 0 \tag{3.97}$$
The first and third terms vanish on account of $\mathbf{F}_j \mathbf{z}_j = \mathbf{0}$ and of the symmetry of $\mathbf{F}_j$, while the variation $\partial\mathbf{F}_j$ in the second term gives

$$\mathbf{z}_j^T \left[ \partial\lambda_j \left( 2\lambda_j \mathbf{M} + \mathbf{C} \right) + \lambda_j^2\, \partial\mathbf{M} + \lambda_j\, \partial\mathbf{C} + \partial\mathbf{K} \right] \mathbf{z}_j = 0 \;\Rightarrow\; \partial\lambda_j\, \mathbf{z}_j^T \left( 2\lambda_j \mathbf{M} + \mathbf{C} \right) \mathbf{z}_j = -\mathbf{z}_j^T \left( \lambda_j^2\, \partial\mathbf{M} + \lambda_j\, \partial\mathbf{C} + \partial\mathbf{K} \right) \mathbf{z}_j$$

and therefore

$$\partial\lambda_j = -\frac{\mathbf{z}_j^T \left( \lambda_j^2\, \partial\mathbf{M} + \lambda_j\, \partial\mathbf{C} + \partial\mathbf{K} \right) \mathbf{z}_j}{\mathbf{z}_j^T \left( 2\lambda_j \mathbf{M} + \mathbf{C} \right) \mathbf{z}_j} \tag{3.98}$$
This is, as a matter of fact, a generalisation to the damped case of the
‘undamped equation’ 3.57 because it is not difficult to show that Equation
3.98 reduces to Equation 3.57 if the system is undamped. When this is the
case, in fact, C = 0, the eigenvalues are λ j = iω j and the complex eigenvec-
tors z j (with the appropriate normalisation, see Remark 3.15 below) become
$$i\left( \partial\omega_j \right) = -\frac{\mathbf{p}_j^T \left( \partial\mathbf{K} - \omega_j^2\, \partial\mathbf{M} \right) \mathbf{p}_j}{2 i\omega_j} \quad \Rightarrow \quad \partial\left( \omega_j^2 \right) = \mathbf{p}_j^T \left( \partial\mathbf{K} - \omega_j^2\, \partial\mathbf{M} \right) \mathbf{p}_j$$
which is exactly the undamped equation when one recalls that in Equation
3.57, we have λ j = ω 2j .
The sensitivity of the eigenvectors of a damped system is definitely more
involved, and for a detailed account, we refer the interested reader to
Chapter 1 of Adhikari (2014b).
Remark 3.15
Finite-DOFs systems
Response to external excitation

4.1 INTRODUCTION
$$h(t) = \frac{1}{m\omega_n} \sin\omega_n t, \qquad h(t) = \frac{e^{-\zeta\omega_n t}}{m\omega_d} \sin\omega_d t \tag{4.2a}$$

$$h(t - \tau) = \frac{1}{m\omega_n} \sin\omega_n (t - \tau), \qquad h(t - \tau) = \frac{e^{-\zeta\omega_n (t - \tau)}}{m\omega_d} \sin\omega_d (t - \tau) \tag{4.2b}$$
The linearity of the system implies that its response at time $t$ is the superposition of the responses to all the elementary impulses $f(\tau)\, d\tau$ that have occurred between the onset of the input at $t = 0$ and time $t$. Therefore, we can write
$$u(t) = \int_0^t f(\tau)\, h(t - \tau)\, d\tau \tag{4.3}$$
For non-zero initial conditions, superposing the free response gives

$$u(t) = e^{-\zeta\omega_n t} \left[ u_0 \cos\omega_d t + \frac{v_0 + \zeta\omega_n u_0}{\omega_d} \sin\omega_d t \right] + \frac{1}{m\omega_d} \int_0^t f(\tau)\, e^{-\zeta\omega_n (t - \tau)} \sin\left[ \omega_d (t - \tau) \right] d\tau \tag{4.4}$$
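The Duhamel integral is also straightforward to evaluate numerically as a discrete convolution. The sketch below (with hypothetical parameter values) convolves a unit-step force with the under-damped IRF of Equation 4.2a and compares the result with the closed-form indicial response of Equation 4.12:

```python
import numpy as np

# Hypothetical under-damped SDOF and a unit-step force applied at t = 0
m, wn, zeta = 1.0, 2.0, 0.1
wd = wn * np.sqrt(1 - zeta**2)
k = m * wn**2

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)   # IRF of Eq. 4.2a
f = np.ones_like(t)                                      # unit step input

# Duhamel integral of Eq. 4.3 approximated as a discrete convolution
u = np.convolve(f, h)[:t.size] * dt

# Closed-form indicial (unit step) response of Eq. 4.12
r = (1.0 - np.exp(-zeta * wn * t)
     * (np.cos(wd * t) + (zeta * wn / wd) * np.sin(wd * t))) / k

ok = np.max(np.abs(u - r)) < 2e-3
```

The residual error is the quadrature error of the rectangle rule and shrinks with the time step $\Delta t$.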
Remark 4.1
i. Substitution of the initial conditions $u\left( 0^+ \right) = 0$ and $\dot{u}\left( 0^+ \right) = \hat{f}/m$ in Equations 3.11 and 3.12c gives the (non-oscillatory) IRFs of a critically damped and over-damped system, respectively. However, as already mentioned before, the undamped and under-damped cases considered earlier are the ones of most interest in vibrations.
ii. Considering the integration limits of Equation 4.3, it can be observed
that the lower limit is zero because we assumed the input to start at
t = 0. But since this is not necessarily the case, we can just as well
extend this limit to −∞. For the upper limit, on the other hand, we
know that h(t − τ ) is zero for t < τ . But this is the same as τ > t , and we
can therefore extend the limit to +∞ without affecting the result. The
conclusion is that the Duhamel integral of Equation 4.3 is, as a matter
of fact, a convolution product in agreement with definition B.29.
iii. If now, with the lower and upper limits at infinity, we make the change of variable $\alpha = t - \tau$, it is almost immediate to obtain

$$u(t) = \int_{-\infty}^{\infty} f(t - \alpha)\, h(\alpha)\, d\alpha \tag{4.6}$$
thus showing that it does not matter which one of the two functions,
f (t) or h(t), is shifted (note that, with a different notation, this is prop-
erty (a) of Remark B.5(ii) in Appendix B). In calculations, therefore,
convenience suggests the most appropriate shifting choice.
iv. A system is stable if a bounded input produces a bounded output. By a well-known property of integrals, we have $\left| u(t) \right| \le \int_{-\infty}^{\infty} \left| f(t - \alpha) \right| \left| h(\alpha) \right| d\alpha$, but since a bounded input means that there exists a finite constant $K$ such that $\left| f(t) \right| \le K$, it follows that a system is stable whenever its IRF is absolutely integrable, that is, whenever $\int_{-\infty}^{\infty} \left| h(\alpha) \right| d\alpha < \infty$.
The preceding considerations show that, in essence, the IRF of a linear system
characterises its input–output relationships in the time domain and that for this
reason it is an intrinsic property of the system. Consequently, if we Fourier and
Laplace transform the functions $f(t), h(t)$, call $F(\omega), F(s)$ the two transforms of $f(t)$ and observe that Equations B.30 and B.52 of Appendix B show that the transform of a convolution is the product of the transforms, we are led to introduce the functions

$$H(\omega) = \mathcal{F}\left[ h(t) \right], \qquad H(s) = \mathcal{L}\left[ h(t) \right] \tag{4.7}$$

known, respectively, as the frequency response function (FRF) and the transfer function of the system. For the 1-DOF system, explicit calculation gives

$$H(\omega) = \frac{1}{k\left( 1 - \beta^2 + 2i\zeta\beta \right)}, \qquad H(s) = \frac{1}{m\left( s^2 + 2\zeta\omega_n s + \omega_n^2 \right)} \tag{4.8}$$
where, in the first relation, we introduced the frequency ratio β = ω ω n . So,
Equations 4.7 and 4.8 show that the system’s response in the frequency- and
in the $s$-domain – that is, the functions $U(\omega) = \mathcal{F}\left[ u(t) \right]$ and $U(s) = \mathcal{L}\left[ u(t) \right]$, respectively – is given by

$$U(\omega) = H(\omega)\, F(\omega), \qquad U(s) = H(s)\, F(s)$$
Remark 4.2
$$U(s) = \frac{F(s)}{m\left( s^2 + 2\zeta\omega_n s + \omega_n^2 \right)} = H(s)\, F(s) \tag{4.10a}$$

while for non-zero initial conditions $u_0, v_0$ we get

$$U(s) = \frac{F(s)/m}{s^2 + 2\zeta\omega_n s + \omega_n^2} + \frac{u_0\left( s + 2\zeta\omega_n \right)}{s^2 + 2\zeta\omega_n s + \omega_n^2} + \frac{v_0}{s^2 + 2\zeta\omega_n s + \omega_n^2} \tag{4.10b}$$
whose inverse Laplace transform gives, as it should be expected,
Equation 4.4.
Then, making use of the tabulated integral

$$\int e^{at} \sin(bt)\, dt = \frac{e^{at}}{a^2 + b^2} \left( a \sin bt - b \cos bt \right) \tag{4.11}$$

we obtain the indicial (unit step) response

$$r(t) = \frac{1}{k} - \frac{e^{-\zeta\omega_n t}}{k} \left[ \frac{\zeta\omega_n}{\omega_d} \sin\omega_d t + \cos\omega_d t \right] \quad (t > 0) \tag{4.12}$$
$$u_2(t) = \left\{ \frac{f_0}{k} \left( 1 - \cos\omega_n t_1 \right) \right\} \cos\omega_n \left( t - t_1 \right) + \left\{ \frac{f_0}{k} \sin\omega_n t_1 \right\} \sin\omega_n \left( t - t_1 \right) = \frac{f_0}{k} \left[ \cos\omega_n \left( t - t_1 \right) - \cos\omega_n t \right] \tag{4.13}$$
where, for clarity, in the first expression we put the initial conditions
within curly brackets (the second expression then follows from a well-
known trigonometric relation).
It is instructive to note that this same problem can be tackled by
using Laplace transforms. In fact, by first observing that the rectangu-
lar pulse of duration t1 (of unit amplitude for simplicity) can be written
in terms of unit step functions, so that its Laplace transform is $\left( 1 - e^{-s t_1} \right)/s$, we get

$$U(s) = \frac{1/m}{s\left( s^2 + \omega_n^2 \right)} - \frac{e^{-s t_1}/m}{s\left( s^2 + \omega_n^2 \right)}$$
whose inverse Laplace transform $\mathcal{L}^{-1}\left[ U(s) \right]$ leads to the same result as earlier, i.e. $u_1(t) = k^{-1} \left( 1 - \cos\omega_n t \right)$ for $0 < t < t_1$ and $u_2(t) = k^{-1} \left[ \cos\omega_n \left( t - t_1 \right) - \cos\omega_n t \right]$ for $t > t_1$ (by using a table of Laplace transforms, the reader is invited to check this result).
where, for present convenience, we put into evidence the term $f/m$ because this is the r.h.s. of the equation of motion when it is rewritten in the form

$$\ddot{u} + \omega_n^2 u = f/m \tag{4.15}$$

so that, for base-displacement excitation $x(t)$ of an undamped system initially at rest (in which case $f/m = \omega_n^2 x$), the Duhamel integral gives

$$u(t) = \omega_n \int_0^t x(\tau)\, \sin\omega_n (t - \tau)\, d\tau \tag{4.16}$$
If, on the other hand, the excitation is in the form of base velocity $\dot{x}(t)$, we can differentiate the equation $\ddot{u} + \omega_n^2 u = \omega_n^2 x$ to get

$$\frac{d^2 \dot{u}}{dt^2} + \omega_n^2\, \dot{u} = \omega_n^2\, \dot{x} \tag{4.17}$$

which is a differential equation in $\dot{u}$ formally similar to Equation 4.15.
Then, paralleling the result of Equation 4.14, we get the velocity response
$$\dot{u}(t) = \omega_n \int_0^t \dot{x}(\tau)\, \sin\omega_n (t - \tau)\, d\tau \tag{4.18}$$
Finally, if the base motion is given in terms of acceleration $\ddot{x}(t)$, we can consider the relative coordinate $z = u - x$ of the mass with respect to the base. Since $\ddot{z} = \ddot{u} - \ddot{x}$ (and, consequently, $\ddot{u} = \ddot{z} + \ddot{x}$), we can write the equation of motion $\ddot{u} + \omega_n^2 (u - x) = 0$ as $\ddot{z} + \omega_n^2 z = -\ddot{x}$ and obtain the response in terms of relative displacement. We get

$$z(t) = -\frac{1}{\omega_n} \int_0^t \ddot{x}(\tau)\, \sin\omega_n (t - \tau)\, d\tau \tag{4.19}$$
while, since in this case $\ddot{u} = -\omega_n^2 z$, the response in terms of absolute acceleration is

$$\ddot{u}(t) = \omega_n \int_0^t \ddot{x}(\tau)\, \sin\omega_n (t - \tau)\, d\tau \tag{4.20}$$
Remark 4.3
i. Equations 4.19 and 4.20 are frequently used in practice because in most
applications the motion of the base is measured with accelerometers.
ii. The relative motion equation (4.19) is important for the evaluation of stress. For example, failure of a simple 1-DOF system generally corresponds to excessive dynamic load on the spring. This occurs when $\left| z(t) \right|_{\max}$ exceeds the maximum permissible deformation (hence stress) of the spring.
An important point for both theory and practice is that the FRF $H(\omega)$ of Equation 4.8₁ provides the system's steady-state response to a sinusoidal forcing function of unit amplitude at the frequency $\omega$. Using complex notation, in fact, the equation of motion in this case is $m\ddot{u} + c\dot{u} + ku = e^{i\omega t}$, which, by assuming a solution of the form $u(t) = U e^{i\omega t}$, leads to $\left( -\omega^2 m + i\omega c + k \right) U = 1$ and consequently

$$U = \frac{1}{k - m\omega^2 + ic\omega} = \frac{1}{k\left( 1 - \beta^2 + 2i\zeta\beta \right)} = H(\omega) \tag{4.21}$$
where, in writing the second expression, we used the relations $m/k = 1/\omega_n^2$, $c/k = 2\zeta/\omega_n$ and the definition of the frequency ratio $\beta = \omega/\omega_n$. Then, the fact that $H(\omega)$ is a complex function means that it has a magnitude and phase and therefore that the response may not be in phase with the excitation. More explicitly, we can write the polar form $H(\omega) = \left| H(\omega) \right| e^{-i\phi}$ and, after a few simple calculations, determine the magnitude and phase angle as
1 2ζβ
H (ω ) = , tan φ = (4.22a)
(
k 1− β ) + (2ζβ )
2 2 2 1− β2
where φ is the angle of lag of the displacement response relative to the har-
monic exciting force, and we have φ ≅ 0 for β << 1, φ = π 2 radians for β = 1
and φ that tends asymptotically to π for β >> 1. Also, it is not difficult to
show that the real and imaginary parts of H (ω ) are
1 1− β 2
1 2ζ β
Re [ H (ω )] = , Im [ H (ω )] = −
( )
k 1 − β 2 + ( 2ζβ )2
2
( )
k 1 − β 2 + ( 2ζβ )2
2
(4.22b)
Remark 4.4

maximum occurs at β = √(1 − 2ζ²) and that D_max = [2ζ√(1 − ζ²)]⁻¹, which is
often approximated by D_max ≅ 1/(2ζ) for small damping. This, in turn, implies
that for values of, say, ζ = 0.05 or ζ = 0.1 – which are not at all uncommon
in applications – the amplitude of the displacement response at resonance
is, respectively, ten times or five times the static response. For small damping,
such high values of the response are due to the fact that at resonance
the inertia force is balanced by the spring force and that, consequently, the
external force overcomes the (relatively small) damping force. Also, note
that damping plays an important role only in the resonance region; away
from resonance – that is, in the regions β << 1 and β >> 1 – damping is definitely
of minor importance. In this respect, it can be observed that in the
resonance region, the system's steady-state response can be approximated
by u(t) ≅ (f₀/cωₙ) e^{i(ωt − π/2)}, thus showing that c is the 'controlling parameter'
for ω close to ωₙ. By contrast, when β << 1, we are not far from the condition
of static excitation and we expect the stiffness k to be the 'controlling
parameter'. This is confirmed by the approximation u(t) ≅ (f₀/k) e^{iωt}, which
in fact holds for ω << ωₙ. At the other extreme – that is, when β >> 1 – the
approximation u(t) ≅ (f₀/mω²) e^{i(ωt − π)} indicates that mass is the 'controlling
parameter' in this region.
With this solution, the initial conditions u(0) = u₀ and u̇(0) = v₀ give the
constants

A = u₀ − f₀|H(ω)| cos φ,  B = (1/ω_d)[v₀ − ω f₀|H(ω)| sin φ + ζωₙA]   (4.24a)

Now, assuming the system to start from rest and considering that at
resonance (ω = ωₙ) we have φ = π/2 and f₀|H(ωₙ)| = f₀/2ζk, the two
constants reduce to

A = 0,  B = −ωₙf₀/(2ζkω_d) ≅ −f₀/(2ζk)   (4.24b)

so that the response can be approximated as

u(t) ≅ [f₀/(2ζk)] (1 − e^{−ζωₙt}) sin ωₙt   (4.25)
If the excitation f(t) is periodic with fundamental frequency ω₁, it can be expanded
in the Fourier series

f(t) = ∑_{r=−∞}^{∞} C_r e^{irω₁t}   (4.26)

and the corresponding steady-state response is

u(t) = ∑_{r=−∞}^{∞} C_r |H(rω₁)| e^{i(rω₁t − φ_r)}   (4.27)

and we can have a resonance condition whenever one of the exciting frequencies
ω₁, 2ω₁, 3ω₁, … is close or equal to the system's natural frequency ωₙ.
mü + c(u̇ − ẋ) + k(u − x) = 0  ⇒  mü + cu̇ + ku = cẋ + kx   (4.28)

Assuming harmonic base motion x(t) = Xe^{iωt} and a steady-state response
u(t) = Ue^{iωt}, Equation 4.28 gives

U/X = (k + icω)/(k − mω² + icω) = (1 + 2iζβ)/(1 − β² + 2iζβ)   (4.29)

which characterises the motion transmissibility between the base and the
mass. At this point, some easy calculations show that the magnitude and
phase of the complex function U/X are

|U/X| = √{[k² + (cω)²]/[(k − mω²)² + (cω)²]} = √{[1 + (2ζβ)²]/[(1 − β²)² + (2ζβ)²]},  tan φ = 2ζβ³/(1 − β² + 4ζ²β²)   (4.30)

where now φ is the angle of lag of the mass displacement with respect to the
base motion.
A different problem arises when the system's mass is subjected to a harmonic
force f(t) = F₀e^{iωt} and we are interested in limiting the force f_T(t)
transmitted to the base. In this case, one finds

F_T/F₀ = (k + icω)/(k − mω² + icω)   (4.31)

which is exactly the same function of Equation 4.29 – although with a different
physical meaning – and implies that the magnitude |F_T/F₀| is given by
the r.h.s. of Equation 4.30₁ (clearly, the phase of F_T relative to F₀ is the same
as the r.h.s. of Equation 4.30₂, but in these types of problems, the phase is
generally of minor importance).
A first conclusion, therefore, is that the problem of isolating a mass from
the motion of the base is the same as the problem of limiting the force transmitted
to the base of a vibrating system. And since the magnitude of U/X
or F_T/F₀ is smaller than one only in the region β > √2, it turns out that we
must have ω > √2 ωₙ (or ωₙ < ω/√2) in order to achieve the desired result. It
should also be noticed that for β = √2, all magnitude curves have the same
value of unity irrespective of the level of damping.
In addition to this, a rather counterintuitive result is that in the isolation
region β > √2, a higher damping corresponds to a lower isolation effect
(it is not so, however, in the region β < √2), so that, provided that we stay
in the 'safe' isolation region, it is advisable to have low values of damping.
When this is the case, it is common to approximate the transmissibility – we
denote it here by T – by T ≅ 1/(β² − 1) and refer to the quantity 1 − T as the
isolation effectiveness.
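The crossover at β = √2 and the damping reversal in the isolation region can be verified directly from Equation 4.30; a short sketch (Python/NumPy; the damping values are illustrative):

```python
import numpy as np

def transmissibility(beta, zeta):
    """|U/X| = |F_T/F_0| from Equation 4.30."""
    num = 1.0 + (2.0 * zeta * beta)**2
    den = (1.0 - beta**2)**2 + (2.0 * zeta * beta)**2
    return np.sqrt(num / den)

# all curves pass through T = 1 at beta = sqrt(2), whatever the damping
for z in (0.05, 0.1, 0.2):
    assert abs(transmissibility(np.sqrt(2.0), z) - 1.0) < 1e-12

# in the isolation region beta > sqrt(2), more damping means *less* isolation
beta = 3.0
T_low, T_high = transmissibility(beta, 0.05), transmissibility(beta, 0.2)

# low-damping approximation T ~ 1/(beta^2 - 1) and isolation effectiveness 1 - T
T_approx = 1.0 / (beta**2 - 1.0)
```

At β = 3 the lightly damped mount transmits about 13% of the motion, close to the approximate value 1/(β² − 1) = 0.125.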
Remark 4.5

It is important to point out that the force transmissibility 4.31 is the same
as the motion transmissibility 4.29 only if the base is 'infinitely large', that
is, if its mass is much larger than the system's mass. If it is not so, it can be
shown (Rivin, 2003) that we have

F_T/F₀ = [m_b/(m + m_b)] √{[k² + (cω)²] / [(k − (m m_b/(m + m_b)) ω²)² + (cω)²]}   (4.32)
112 Advanced Mechanical Vibrations
Similarly, for harmonic base motion x(t) = Xe^{iωt}, the relative displacement
z = u − x is harmonic, z(t) = Ze^{iωt}, with

Z/X = mω²/(k − mω² + icω) = β²/(1 − β² + 2iζβ)  ⇒  |Z/X| = β²/√[(1 − β²)² + (2ζβ)²]   (4.33)

This result is often useful in practice for the two reasons already mentioned
in Remark 4.3: first, because in some applications relative motion is more
important than absolute motion and, second, because the base acceleration
is in general easy to measure.
4.3.2 Eccentric excitation
Eccentric excitation is generally due to an unbalanced mass me with eccen-
tricity r that rotates with angular velocity ω (see Figure 4.1). A typical exam-
ple is an engine or any rotary motor mounted on a fixed base by means of
a flexible suspension.
The equation of motion for the system of Figure 4.1 is mü + cu̇ + ku = f_ecc(t),
where the magnitude of f_ecc is F_ecc = m_e rω². Along the lines of the preceding
section, assuming harmonic motions for the driving force and the displacement
response of the form f_ecc(t) = F_ecc e^{iωt} and u(t) = Ue^{iωt}, we get

U/F_ecc = 1/(k − mω² + icω)  ⇒  U/(rμ) = mω²/(k − mω² + icω)   (4.34)
|U/(rμ)| = β²/√[(1 − β²)² + (2ζβ)²],  tan φ_ecc = 2ζβ/(1 − β²)   (4.35)
(m_b/m_E)(Z/F_ecc) = 1/(k − m_E ω² + icω)  ⇒  Z/(rμ) = m_E ω²/(k − m_E ω² + icω)   (4.37)

ωₙ = √(k/m_E),  ζ = c/(2m_E ωₙ),  β = ω/ωₙ   (4.38)

|Z/(rμ)| = β²/√[(1 − β²)² + (2ζβ)²],  tan φ_ecc = 2ζβ/(1 − β²)   (4.39)
Displacement/Force = Receptance
Velocity/Force = Mobility
Acceleration/Force = Accelerance
M(ω) = iω/(k − mω² + icω),  A(ω) = −ω²/(k − mω² + icω)   (4.40a)

with respective magnitudes

|M(ω)| = ω/√[(k − mω²)² + (cω)²],  |A(ω)| = ω²/√[(k − mω²)² + (cω)²]   (4.40b)
In regard to the polar forms of the various FRFs, we adopt the convention
of writing R(ω) = |R(ω)| e^{−iφ_D} (see Section 4.3, where here we added
the subscript 'D' for 'displacement' to the phase angle of receptance),
M(ω) = |M(ω)| e^{−iφ_V} and A(ω) = |A(ω)| e^{−iφ_A}. By so doing, φ_D, φ_V, φ_A are understood
as angles of lag behind the driving excitation, meaning that a
positive value of φ corresponds to an angle of lag while a negative value
corresponds to an angle of lead. Then, in order for our convention to be in
agreement with the physical fact that velocity leads displacement by π/2
and acceleration leads displacement by π, we have the relations φ_V = φ_D − π/2
and φ_A = φ_D − π = φ_V − π/2.
Remark 4.6
Finally, it is worth pointing out that in some applications, one can find
the inverse relations of the FRFs mentioned above; these are the force/
motion ratios
Force/Displacement = Dynamic stiffness
Force/Velocity = Mechanical impedance
Force/Acceleration = Apparent mass
with the frequently used symbols K(ω ) and Z(ω ) for dynamic stiffness and
mechanical impedance (while, to the author’s knowledge, there is no stan-
dard symbol for apparent mass, so here we will use J(ω )). Then, from the
developments above, it is not difficult to see that mathematically we have
K = R−1 , Z = M −1 and J = A−1.
4.3.4 Damping evaluation
In Section 3.2.1, it was shown that damping can be obtained from a
graph of the decaying time-history of the system free-response. With the
FRFs at our disposal, we now have other methods to evaluate damping
and here we consider three of the most common. The first consists in
simply determining the maximum value of the receptance, which occurs
at resonance and is f0 2ζ k if the driving force has magnitude f0 . Then,
we have
ζ ≅ f₀/(2k |R(ω)|_max)   (4.41)
The second method is based on the half-power points, that is, the two frequencies
at which the magnitude of the response falls to 1/√2 of its resonance peak; this
condition reads

1/(2√2 ζ) = 1/√[(1 − β²)² + (2ζβ)²]   (4.42)
which, in turn, leads to the equation β⁴ − 2β²(1 − 2ζ²) + 1 − 8ζ² = 0, whose
roots are β₁,₂² = 1 − 2ζ² ± 2ζ√(1 + ζ²). For small damping, we can write
β₁,₂² ≅ 1 ± 2ζ and consequently β₁,₂ = √(1 ± 2ζ) ≅ 1 ± ζ, from which it follows:

ζ = (β₁ − β₂)/2 = (ω₁ − ω₂)/(2ωₙ)   (4.43)
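As a sketch of the half-power method, the code below (Python/NumPy; all numerical values are illustrative assumptions) synthesises a receptance magnitude curve with a known ζ, locates the points where the magnitude drops to 1/√2 of the peak, and recovers the damping ratio through Equation 4.43.

```python
import numpy as np

k, wn, zeta_true = 1.0e4, 20.0, 0.03              # illustrative system data
w = np.linspace(0.1, 40.0, 200001)
beta = w / wn
R = np.abs(1.0 / (k * (1.0 - beta**2 + 2j * zeta_true * beta)))

i_pk = np.argmax(R)                               # resonance peak
level = R[i_pk] / np.sqrt(2.0)                    # half-power (-3 dB) level
band = w[R >= level]                              # frequencies above that level
w2_, w1_ = band[0], band[-1]                      # lower/upper half-power points
zeta_est = (w1_ - w2_) / (2.0 * w[i_pk])          # Equation 4.43
```

For light damping the estimate matches the true value to within a fraction of a percent, which is why this is one of the most common experimental damping-evaluation techniques.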
The third method is based on the Nyquist plot of mobility. Recalling that
we mentioned Nyquist plots in Remark 4.4(iii), we add here that one noteworthy
feature of these graphs is that they enhance the resonance region
with an almost circular shape. For viscously damped systems, however, the
graph of mobility traces out an exact circle, and the radius of the circle can
be used to evaluate the damping constant c. More specifically, since the real
and imaginary parts of mobility are

Re[M(ω)] = cω²/[(k − mω²)² + (cω)²],  Im[M(ω)] = ω(k − mω²)/[(k − mω²)² + (cω)²]   (4.44)

we can define

U = Re[M(ω)] − 1/(2c),  V = Im[M(ω)]   (4.45)
4.3.5 Response spectrum
In Example 4.2, we have determined the response of an undamped system
to a rectangular impulse of amplitude f0 and duration t1. Here, however,
we are not so much interested in the entire time-history of the response
but we ask a different question: what is the maximum value umax of the
response and when does it occur? As we shall see shortly, the answer to this
question – which is often important for design purposes – leads to the very
useful concept of response spectrum.
Going back to the specific case of Example 4.2, the system’s response
‘normalised’ to the static response f0 k is
ku(t < t₁)/f₀ = 1 − cos ωₙt,  ku(t > t₁)/f₀ = cos ωₙ(t − t₁) − cos ωₙt   (4.46)

from which, denoting by T the system's natural period and defining η = t₁/T,
the maximum value turns out to be

ku_max/f₀ = 2 sin(πη)  (η ≤ 1/2);  ku_max/f₀ = 2  (η > 1/2)   (4.47)
so that a graph of ku_max/f₀ as a function of η gives the so-called response
spectrum to a finite-duration rectangular input. In just a few words, there-
fore, one can say that the response spectrum provides a ‘summary’ of the
largest response values (of a linear 1-DOF system) to a particular input
loading – a rectangular pulse in our example – as a function of the natural
period of the system. Also, note that by ignoring damping, we obtain a
conservative value for the maximum response.
In order to give another example, it can be shown (e.g. Gatti, 2014, ch. 5)
that the response of an undamped 1-DOF system to a half-sine excitation
force of the form
f(t) = f₀ sin ωt  (0 ≤ t ≤ t₁);  f(t) = 0  (t > t₁)   (4.48)

is

u(t; t ≤ t₁) = [f₀/(k(1 − β²))] (sin ωt − β sin ωₙt)

u(t; t > t₁) = [f₀β/(k(β² − 1))] [sin ωₙt + sin ωₙ(t − t₁)]   (4.49)

which in turn – and here the reader is invited to fill in the missing details
(see Remark 4.7(i) below for a hint) – lead to the maximum 'normalised'
displacements

(k/f₀) u_max(t ≤ t₁) = [1/(1 − β)] sin[2πβ/(1 + β)] = [2η/(2η − 1)] sin[2π/(1 + 2η)]

(k/f₀) u_max(t > t₁) = [2β/(β² − 1)] cos(π/2β) = [4η/(1 − 4η²)] cos(πη)   (4.50)

where the first expressions on the r.h.s. are given in terms of the frequency
ratio β = ω/ωₙ, while the second are in terms of η = t₁/T. In this respect,
moreover, two points worthy of notice are as follows:

a. Equation 4.50₁ holds for β < 1 (or equivalently η > 1/2), while Equation
4.50₂ holds for β > 1 (or equivalently η < 1/2),
b. At resonance (β = 1 or η = 1/2), both expressions (4.50) become indeterminate.
Then, the well-known l'Hôpital rule from basic calculus
gives (using, for example, the first expression of 4.50₁)

(k/f₀) u_max(β = 1) = π/2 ≅ 1.571
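The two branches of Equation 4.50 splice together into the response spectrum; the sketch below (Python/NumPy) implements them as printed, including the l'Hôpital value π/2 at η = 1/2, and the smooth junction of the two branches serves as a quick sanity check.

```python
import numpy as np

def half_sine_spectrum(eta):
    """k*u_max/f0 for a half-sine pulse versus eta = t1/T (Equations 4.50)."""
    eta = np.atleast_1d(np.asarray(eta, dtype=float))
    out = np.empty_like(eta)
    lo = eta < 0.5                       # beta > 1: Equation 4.50_2 (max after the pulse)
    out[lo] = (4.0 * eta[lo] / (1.0 - 4.0 * eta[lo]**2)) * np.cos(np.pi * eta[lo])
    hi = eta > 0.5                       # beta < 1: Equation 4.50_1 (max during the pulse)
    out[hi] = (2.0 * eta[hi] / (2.0 * eta[hi] - 1.0)) * np.sin(2.0 * np.pi / (1.0 + 2.0 * eta[hi]))
    out[eta == 0.5] = np.pi / 2.0        # resonant value from l'Hopital's rule
    return out

s = half_sine_spectrum(np.array([0.25, 0.5, 1.0]))
```

Both branches tend to π/2 ≅ 1.571 as η → 1/2 from either side, confirming point (b) above.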
Remark 4.7
Iÿ + PᵀCPẏ + Ly = Pᵀf   (4.51)

yⱼ(t) = e^{−ζⱼωⱼt} [y₀ⱼ cos ω_dj t + ((ẏ₀ⱼ + ζⱼωⱼy₀ⱼ)/ω_dj) sin ω_dj t] + (1/ω_dj) ∫₀ᵗ φⱼ(τ) e^{−ζⱼωⱼ(t−τ)} sin ω_dj(t − τ) dτ   (4.53)
where y₀ⱼ, ẏ₀ⱼ are the initial displacement and velocity of the jth modal coordinate
and ω_dj = ωⱼ√(1 − ζⱼ²) is the jth damped frequency. If, as is often the
case, the initial conditions are zero, we can compactly write in matrix form

y(t) = ∫₀ᵗ diag[ĥ₁(t − τ), …, ĥₙ(t − τ)] Pᵀ f(τ) dτ   (4.54)

where ĥⱼ(t) = (e^{−ζⱼωⱼt}/ω_dj) sin ω_dj t is the jth modal IRF. Then, with the transformation
u = Py, we can go back to the original physical coordinates, and
Equation 4.54 gives
u(t) = ∫₀ᵗ P diag[ĥ₁(t − τ), …, ĥₙ(t − τ)] Pᵀ f(τ) dτ = ∫₀ᵗ h(t − τ) f(τ) dτ   (4.55)
Remark 4.8
If, with zero initial conditions, the external loading f is orthogonal to one of
the system's modes, say the kth mode p_k, then φ_k = p_kᵀf = 0 and consequently
y_k = 0, meaning that this mode will not contribute to the response. By the
same token, we can say that if we do not want the kth mode to contribute
to the response, we must choose φ_k = 0.
(ωⱼ² − ω² + 2iζⱼωⱼω) Yⱼ = pⱼᵀF  (j = 1, 2, …, n)

and these n equations can be compactly expressed in the matrix form as

Y = diag[1/(ωⱼ² − ω² + 2iζⱼωⱼω)] PᵀF = diag[Ĥⱼ(ω)] PᵀF   (4.57)

where Ĥⱼ(ω) = (ωⱼ² − ω² + 2iζⱼωⱼω)⁻¹ is the jth modal FRF (j = 1, 2, …, n).
Then, the response in terms of physical coordinates is given by

U = P diag[Ĥⱼ(ω)] PᵀF   (4.58)
so that, in agreement with the developments and nomenclature of the pre-
ceding sections, the ratio of the displacement response U and the forcing
excitation F is the system’s receptance. This is now an n × n matrix, and
Equation 4.58 rewritten as U = R(ω ) F shows that we have
R(ω) = P diag[Ĥⱼ(ω)] Pᵀ = ∑_{m=1}^{n} Ĥₘ(ω) pₘpₘᵀ   (4.59a)

from which it is easy to see that R(ω) is a symmetric matrix. Its j,kth
element is

R_{jk}(ω) = ∑_{m=1}^{n} Ĥₘ(ω) p_{jm} p_{km} = ∑_{m=1}^{n} p_{jm}p_{km}/(ωₘ² − ω² + 2iζₘωₘω)   (4.59b)
Remark 4.9
i. In light of the developments of Section 4.2, the fact that the modal
IRF hˆ j (t) and the modal (receptance) FRF Hˆ j (ω ) form a Fourier trans-
form pair is no surprise. More precisely, owing to our definition of
Fourier transform (see Appendix B and Remark 4.2(i)), we have
Ĥⱼ(ω) = 2π F[ĥⱼ(t)],  ĥⱼ(t) = (1/2π) F⁻¹[Ĥⱼ(ω)]  (j = 1, …, n)   (4.60a)
from which it follows
ii. The fact that the IRF and FRF matrices h(t), R(ω ) are symmetric is a
consequence of the reciprocity theorem (or reciprocity law) for linear
systems, stating that the response – displacement in this case – of the
jth DOF due to an excitation applied at the kth DOF is equal to the dis-
placement response of the kth DOF when the same excitation is applied
at the jth DOF. Clearly, reciprocity holds even if the system’s response
is expressed in terms of velocity or acceleration, thus implying that the
mobility and accelerance FRF matrices M(ω ), A(ω ) are also symmetric.
a mode superposition u(t) = ∑ⱼ pⱼyⱼ(t), where yⱼ(t) = ∫₀ᵗ φⱼ(τ) ĥⱼ(t − τ) dτ is
the solution for the jth modal coordinate and hˆ j (t) is the jth modal IRF.
However, observing that in many cases of interest only a limited number
of lower-order modes (say the first r, with r < n and sometimes even
r << n) gives a significant contribution to u(t), it is reasonable to expect
that we can obtain a satisfactory approximate solution by simply
'truncating' the modes higher than the rth and by considering only the
sum

u^(r) = ∑_{j=1}^{r} pⱼyⱼ   (4.61)
This is certainly possible, but since the response depends on the system
under investigation and on the type of loading (its frequency content,
spatial distribution, etc.), the question arises of how many modes should
be retained: too many implies a larger and unnecessary computational
effort, and too few may mean a poor and inaccurate solution – especially
so if the excitation has some frequency components that are close
to one of the truncated modes (in this respect, it is eminently reasonable
that, at a minimum, one should retain all the modes that fall within
the frequency band of the excitation). So, provided that in any case the
truncation is made with some 'educated engineering judgement', the
idea of the so-called mode-acceleration method is to add a complementing
term that takes into account the contribution of the truncated n − r
modes. With respect to simple truncation, experience has shown that
this method often provides a significant improvement in the quality of
the solution.
Considering an undamped system for simplicity, the equations of motion
can be rewritten as Ku = f − Mü, and consequently, if K is non-singular,

u = K⁻¹f − K⁻¹Mü   (4.62)
Then, observing that Equation 4.61 implies ü^(r) = ∑_{j=1}^{r} pⱼÿⱼ, we can use this
in the r.h.s. of Equation 4.62 to obtain a truncated mode-acceleration solution
u^(r) as

u^(r) = K⁻¹f − K⁻¹ ∑_{j=1}^{r} ÿⱼ Mpⱼ = K⁻¹f − ∑_{j=1}^{r} (ÿⱼ/ωⱼ²) pⱼ   (4.63)

where in writing the last expression we took the relation Mpⱼ = ωⱼ⁻²Kpⱼ into
account. As for terminology, the term K⁻¹f on the r.h.s. is called the pseudo-static
response, while the name of the method is due to the accelerations ÿⱼ
in the other term.
Now, since each yⱼ is given by yⱼ(t) = (1/ωⱼ) ∫₀ᵗ pⱼᵀf(τ) sin ωⱼ(t − τ) dτ, we
can insert this expression into the jth modal equation of motion rewritten
as ÿⱼ = pⱼᵀf − ωⱼ²yⱼ and substitute the result in Equation 4.63 to obtain

u^(r) = ∑_{j=1}^{r} (pⱼpⱼᵀ/ωⱼ) ∫₀ᵗ f(τ) sin ωⱼ(t − τ) dτ + [K⁻¹ − ∑_{j=1}^{r} pⱼpⱼᵀ/ωⱼ²] f   (4.64a)

u^(r) = ∑_{j=1}^{r} (pⱼpⱼᵀ/ωⱼ) ∫₀ᵗ f(τ) sin ωⱼ(t − τ) dτ + ∑_{j=r+1}^{n} (pⱼpⱼᵀ/ωⱼ²) f   (4.64b)

where the last term represents the contribution of the n − r truncated modes.
Remark 4.10

where in the last expression the first term represents the dynamic response
of the lower-order modes in the frequency bandwidth of the excitation,
while the second term is a pseudo-static correction due to the fact that for
the modes with indexes j = r + 1, …, n it is legitimate to make the 'static'
approximation (1 − βⱼ²)⁻¹ ≅ 1. If now, as mentioned earlier, we take
into account the modal expansion of the matrix K⁻¹ and observe that
Ĥⱼ(ω) = (ωⱼ² − ω²)⁻¹ is the jth modal FRF (in the form of receptance) of our
undamped system, we can write

R(ω) ≅ ∑_{j=1}^{r} Ĥⱼ(ω) pⱼpⱼᵀ + K⁻¹ − ∑_{j=1}^{r} pⱼpⱼᵀ/ωⱼ²   (4.65b)
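A small numerical experiment makes the benefit of the correction concrete. The 3-DOF system below is a hypothetical example (with M = I, so that the eigenvectors of K are already mass-orthonormal); keeping only the first mode, the corrected solution of Equation 4.65b is compared with plain truncation at a frequency well below the first resonance.

```python
import numpy as np

M = np.eye(3)                                   # hypothetical 3-DOF chain, M = I
K = 100.0 * np.array([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  2.0]])
f = np.array([1.0, 0.0, 0.0])

w2, P = np.linalg.eigh(K)                       # w2 = omega_j^2, P mass-orthonormal
w = 2.0                                         # well below omega_1 = 7.65 rad/s
U_exact = np.linalg.solve(K - w**2 * M, f)

r = 1                                           # keep only the first mode
U_trunc = sum(np.outer(P[:, j], P[:, j]) @ f / (w2[j] - w**2) for j in range(r))

# pseudo-static correction of Equation 4.65b for the truncated modes
corr = np.linalg.inv(K) - sum(np.outer(P[:, j], P[:, j]) / w2[j] for j in range(r))
U_ma = U_trunc + corr @ f

err_trunc = np.linalg.norm(U_exact - U_trunc)
err_ma = np.linalg.norm(U_exact - U_ma)
```

For these (illustrative) numbers the mode-acceleration error is smaller than the truncation error by a factor of roughly fifty.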
(−ω²M + iωC + K) U = F   (4.66)

where the vector U can be expanded in terms of the system's modes and
expressed as U = ∑_{j=1}^{n} bⱼpⱼ. For a classically damped system with no rigid-body
modes, in fact, Equation 4.58 is such an expansion because, in light
of Equation 4.59a, it is not difficult to show that the expansion coefficients
are bⱼ = Ĥⱼ(ω) pⱼᵀF.
On the other hand, if the system has m rigid-body modes rⱼ, the expansion
becomes

U = ∑_{j=1}^{m} aⱼrⱼ + ∑_{j=1}^{n−m} bⱼpⱼ   (4.67)

a_k = −r_kᵀF/ω²,  b_k = p_kᵀF/(ω_k² − ω² + 2iζ_kω_kω) = Ĥ_k(ω) p_kᵀF   (4.68)

and show that the coefficients of the elastic modes are the same as in the
absence of rigid-body modes. Then, Equation 4.67 becomes

U = −∑_{j=1}^{m} (rⱼᵀF/ω²) rⱼ + ∑_{j=1}^{n−m} Ĥⱼ(ω) (pⱼᵀF) pⱼ = [−∑_{j=1}^{m} rⱼrⱼᵀ/ω² + ∑_{j=1}^{n−m} Ĥⱼ(ω) pⱼpⱼᵀ] F   (4.69)

so that the receptance matrix is

R(ω) = −∑_{j=1}^{m} rⱼrⱼᵀ/ω² + ∑_{j=1}^{n−m} Ĥⱼ(ω) pⱼpⱼᵀ   (4.70)
[C M; M 0][u̇; ü] + [K 0; 0 −M][u; u̇] = [f; 0]  ⇒  M̂ẋ + K̂x = q   (4.71)
Remark 4.11

Recall that since the coefficients of the matrices involved are real, for
underdamped systems the eigensolutions occur in complex conjugate
pairs. For calculations, therefore, it is useful to arrange them in some convenient
way; for example, for j = 1, …, n, as (λ₂ⱼ, z₂ⱼ) = (λ₂ⱼ₋₁*, z₂ⱼ₋₁*) or as
(λⱼ₊ₙ, zⱼ₊ₙ) = (λⱼ*, zⱼ*).
Under the assumption that the eigenvectors form a complete set – which
is true if the eigenvalues are all distinct and, more generally, if the matrices
x(t) = ∑_{j=1}^{2n} ŷⱼ(t) vⱼ   (4.72)

so that, substituting Equation 4.72 into 4.71, pre-multiplying by vⱼᵀ and taking
the orthogonality conditions 3.95 into account (and here, without
loss of generality, we also adopt the normalisation M̂ⱼ = 1 for all j), we get the
2n independent first-order equations

dŷⱼ/dt − λⱼŷⱼ = φⱼ(t)  ⇒  (d/dt)[ŷⱼ(t) e^{−λⱼt}] = φⱼ(t) e^{−λⱼt}   (4.73)
where φⱼ = vⱼᵀq and where the second expression follows from the first by
multiplying both sides by e^{−λⱼt} and rearranging the result. From Equation
4.73₂, we readily obtain the solution ŷⱼ(t); assuming for simplicity zero initial
conditions (i.e. ŷⱼ(0) = 0 for all j), we have

ŷⱼ(t) = ∫₀ᵗ φⱼ(τ) e^{λⱼ(t−τ)} dτ  (j = 1, …, 2n)   (4.74)

or, in matrix form,

ŷ(t) = ∫₀ᵗ diag[e^{λⱼ(t−τ)}] Vᵀ q(τ) dτ   (4.75)
from which we get, observing that the matrix version of Equation 4.72 is
x(t) = Vŷ(t),

x(t) = ∫₀ᵗ V diag[e^{λⱼ(t−τ)}] Vᵀ q(τ) dτ   (4.76)
Finally, recalling that vⱼ = [zⱼᵀ  λⱼzⱼᵀ]ᵀ, the 2n × 2n matrices V, Vᵀ can be
partitioned as

V = [Z; Z diag(λⱼ)],  Vᵀ = [Zᵀ  diag(λⱼ)Zᵀ]   (4.77a)
where the sizes of Z, Zᵀ and diag(λⱼ) are n × 2n, 2n × n and 2n × 2n, respectively.
Also, by further observing that

Vᵀq = [Zᵀ  diag(λⱼ)Zᵀ][f; 0] = Zᵀf   (4.77b)

we can use these last three relations together with x = [uᵀ  u̇ᵀ]ᵀ in
Equation 4.76 to obtain the solution in terms of the original n-dimensional
displacement vector u(t) as

u(t) = ∫₀ᵗ Z diag[e^{λⱼ(t−τ)}] Zᵀ f(τ) dτ   (4.78)
0
( )
T T
e( ) dτ = v j Q e i ω t − e λ j t = v j Q e (4.79)
∫
T λ jt i ω −λ j τ
yˆ j (t) = v Q e
j
iω − λ j iω − λ j
0
1 T iω t Z 1 T iω t
x = V diag V Qe = diag Z Fe
iω − λ j Zdiag ( λ j ) iω − λ j
(4.80)
where in writing the last expression we took Equations 4.77a and b into
account. Then, the steady-state solution for the original displacement
vector is
2n
1 T iω t z m zTm i ω t
u(t) = Z diag
iω − λ j
Z Fe = ∑
m=1
iω − λ F e (4.81)
m
R_{jk}(ω) = ∑_{m=1}^{2n} z_{jm}z_{km}/(iω − λₘ)   (4.82b)

= ∑_{m=1}^{n} { z_{jm}z_{km}/[ζₘωₘ + i(ω − ωₘ√(1 − ζₘ²))] + z_{jm}*z_{km}*/[ζₘωₘ + i(ω + ωₘ√(1 − ζₘ²))] }   (4.82c)
Remark 4.12

Especially in the discipline of control theory, many authors use the term
poles for the eigenvalues λₘ and call the term z_{jm}z_{km} the residue for mode m.
Also, in the modal analysis literature, this residue is often given a symbol of its
own such as, for example, ₘA_{jk} or r_{jk,m}.
QEP (λ²M + λC + K) z = 0 by vⱼ = [zⱼᵀ  λⱼzⱼᵀ]ᵀ. Together, the eigenvectors
vⱼ form the state-space modal matrix V and we have (see Equation A.41 of
Appendix A)

dŷ(t)/dt = V⁻¹AVŷ + V⁻¹q(t) = diag(λⱼ) ŷ + V⁻¹q(t)   (4.84)
Remark 4.13

i. Since in matrix form Equations 4.73₁ read dŷ(t)/dt = diag(λⱼ) ŷ + Vᵀq(t),
the difference with Equation 4.84 is that now, in the last term on the
r.h.s., we have V⁻¹ instead of Vᵀ. In addition to this, it should be
noticed that here the modal matrix V is the matrix of eigenvectors of
the non-symmetric SEP Av = λv, while in the preceding Section 4.5,
V is the matrix of eigenvectors of the symmetric GEP 3.94. So, even
if we are using the same symbol V, the fact itself that here and in
Section 4.5 we are considering two different state-space formulations
should make it sufficiently clear that we are not dealing with the same
matrix.
ii. From Appendix A, Section A.4, we recall that for non-symmetric
matrices, we have right and left eigenvectors which satisfy the bi-
orthogonality conditions of Equations A.41. If, with the symbols of
the present section, we denote by V, W the matrices of right and left
eigenvectors of A, respectively, Equations A.41 are written as WT V = I
and WT AV = diag ( λ j ). Moreover, since the first equation implies
WT = V −1, the second equation becomes the relation V −1AV = diag ( λ j )
mentioned earlier and used in Equation 4.84.
Partitioning V and V⁻¹ into n-row and n-column blocks as

V = [Vupper; Vlower] = [Vupper; Vupper diag(λⱼ)],  V⁻¹ = [Vleft⁻¹  Vright⁻¹]   (4.86a)

V⁻¹q = [Vleft⁻¹  Vright⁻¹][0; M⁻¹f] = Vright⁻¹ M⁻¹f   (4.86b)
we can use these relations together with x = [uᵀ  u̇ᵀ]ᵀ in Equation 4.85₂
to obtain the system's displacement response u(t) to an arbitrary excitation
f(t); that is,

u(t) = ∫₀ᵗ Vupper diag[e^{λⱼ(t−τ)}] Vright⁻¹ M⁻¹ f(τ) dτ   (4.87)

while for a harmonic excitation f(t) = F e^{iωt} the steady-state response is

u(t) = Vupper diag[1/(iω − λⱼ)] Vright⁻¹ M⁻¹ F e^{iωt}   (4.88)
Example 4.4
Applying the state-space formulation of this section to a damped
1-DOF system, Equation 4.83 reads

ẋ = Ax + q  ⇒  [u̇; ü] = [0 1; −k/m −c/m][u; u̇] + [0; f(t)/m]   (4.89)

so that we obtain

λ₁,₂ = −ζωₙ ± ωₙ√(ζ² − 1),  V = [1 1; λ₁ λ₂]  ⇒  V⁻¹ = [1/(λ₂ − λ₁)] [λ₂ −1; −λ₁ 1]   (4.90)

Vupper = [1 1],  Vright⁻¹ = [1/(λ₂ − λ₁)] [−1; 1],  Vright⁻¹M⁻¹f = [1/(λ₂ − λ₁)] [−f/m; f/m]

and Equation 4.87 gives

u(t) = [1/(m(λ₂ − λ₁))] ∫₀ᵗ f(τ) {e^{λ₂(t−τ)} − e^{λ₁(t−τ)}} dτ   (4.91a)

For an underdamped system, with λ₁,₂ = −ζωₙ ± iω_d, this becomes

u(t) = [1/(2imω_d)] ∫₀ᵗ f(τ) e^{−ζωₙ(t−τ)} {e^{iω_d(t−τ)} − e^{−iω_d(t−τ)}} dτ = (1/(mω_d)) ∫₀ᵗ f(τ) e^{−ζωₙ(t−τ)} sin[ω_d(t−τ)] dτ   (4.91b)
On the other hand, for a harmonic excitation of the form f(t) = Fe^{iωt}, the
1-DOF version of Equation 4.88 is

u(t) = [Fe^{iωt}/(m(λ₂ − λ₁))] [1/(iω − λ₂) − 1/(iω − λ₁)] = Fe^{iωt}/[m(iω − λ₂)(iω − λ₁)]   (4.92a)

so that some easy calculations (in which we take into account the relations
λ₁λ₂ = ωₙ² and λ₁ + λ₂ = −2ζωₙ) lead to

u(t) = Fe^{iωt}/[m(ωₙ² − ω² + 2iζωωₙ)] = {1/[k(1 − β² + 2iζβ)]} Fe^{iωt}   (4.92b)

where the term multiplying Fe^{iωt} in the last expression is, as expected,
the receptance FRF H(ω) of Equation 4.8₁. It is now left to the reader as
an exercise to work out the state-space formulation of Section 4.5 with
the two symmetric matrices
M̂ = [c m; m 0],  K̂ = [k 0; 0 −m]
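The equivalence asserted in Equation 4.92b is easy to confirm numerically: the eigenvalues of the non-symmetric state matrix A of Equation 4.89 reproduce the receptance computed directly from the equation of motion. The parameter values below are illustrative assumptions.

```python
import numpy as np

m, k, zeta = 2.0, 800.0, 0.05                   # illustrative 1-DOF data
wn = np.sqrt(k / m)
c = 2.0 * zeta * m * wn

A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
lam = np.linalg.eigvals(A)                      # lambda_1,2 = -zeta*wn +/- i*wd

w = 15.0                                        # an arbitrary excitation frequency
H_state = 1.0 / (m * (1j * w - lam[0]) * (1j * w - lam[1]))   # Eq. 4.92a/b
H_direct = 1.0 / (k - m * w**2 + 1j * c * w)                  # receptance
```

The two numbers agree to machine precision because m(iω − λ₁)(iω − λ₂) = m[(iω)² − (λ₁ + λ₂)iω + λ₁λ₂] = k − mω² + icω.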
M = [m₁ 0; 0 m₂] = [1000 0; 0 500],  K = [k₁+k₂ −k₂; −k₂ k₂] = 10⁵ × [10 −5; −5 5]
that the undamped modes uncouple the equations of motion. Solving the
undamped free-vibration problem, we are then led to the following eigenvalues
and mass-orthonormal eigenvectors (already arranged in matrix form)

diag(λⱼ) = [λ₁ 0; 0 λ₂] = [292.9 0; 0 1707.1],  P = [0.0224 −0.0224; 0.0316 0.0316]

thus implying that the system's natural frequencies are ω₁ = 17.11 and
ω₂ = 41.32 rad/s. For the modal damping ratios, on the other hand, we use
the relations pⱼᵀCpⱼ = 2ζⱼωⱼ (j = 1, 2) to obtain ζ₁ = 0.0086 and ζ₂ = 0.0207.
With these data of frequency and damping, we can now readily write the
two modal (receptance) FRFs as

Ĥ₁(ω) = 1/(292.9 − ω² + 0.293iω),  Ĥ₂(ω) = 1/(1707.1 − ω² + 1.707iω)   (4.93)

and use Equation 4.59a to obtain the receptance matrix in physical coordinates.
This gives

R(ω) = 10⁻⁴ × [5.02(Ĥ₁ + Ĥ₂)  7.08(Ĥ₁ − Ĥ₂);  7.08(Ĥ₁ − Ĥ₂)  9.99(Ĥ₁ + Ĥ₂)]   (4.94)

where, for brevity, Ĥⱼ stands for Ĥⱼ(ω).
Figures 4.4 and 4.5, respectively, show in graphic form the magnitude of
the receptances R11(ω ) and R12 (ω ) (where R12 (ω ) = R21(ω ) because the matrix
R(ω ) is symmetric).
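The numbers quoted in this example can be reproduced in a few lines. Note that the damping matrix is not stated explicitly in the text; the choice C = 10⁻³K below is an assumption of this sketch, inferred from the state matrix A given further on.

```python
import numpy as np

M = np.diag([1000.0, 500.0])
K = 1.0e5 * np.array([[10.0, -5.0],
                      [-5.0,  5.0]])
C = 1.0e-3 * K                                  # assumed stiffness-proportional damping

# generalized symmetric eigenproblem K p = lam M p via M^(-1/2) K M^(-1/2)
Mi = np.diag(1.0 / np.sqrt(np.diag(M)))
lam, Q = np.linalg.eigh(Mi @ K @ Mi)
P = Mi @ Q                                      # mass-orthonormal mode shapes
wn = np.sqrt(lam)                               # 17.11 and 41.32 rad/s
zeta = np.diag(P.T @ C @ P) / (2.0 * wn)        # 0.0086 and 0.0207

def R(w):
    """Receptance matrix from Equation 4.59a (classically damped system)."""
    Hhat = 1.0 / (lam - w**2 + 2j * zeta * wn * w)
    return P @ np.diag(Hhat) @ P.T

w = 25.0
R_direct = np.linalg.inv(K - w**2 * M + 1j * w * C)
```

For this proportionally damped system, R(25.0) coincides with the direct inverse R_direct, and the eigenvalues 292.9 and 1707.1 give back the natural frequencies and damping ratios quoted in the text.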
At this point, although we know that a state-space formulation is not necessary
for a proportionally damped system, for the sake of the example it may be
nonetheless instructive to adopt this type of approach for the system at hand.
Using, for instance, the formulation of Section 4.5.1, we now have the matrix

A = [0 I; −M⁻¹K −M⁻¹C] = [0 0 1 0; 0 0 0 1; −1000 500 −1 0.5; 1000 −1000 −1 −1]
Figure 4.4 Receptance R11 – magnitude (m/N, dB values) versus frequency (rad/s).
Figure 4.5 Receptance R12 = R21 – magnitude (m/N, dB values) versus frequency (rad/s).
diag(λⱼ) = diag(−0.854 + 41.308i,  −0.854 − 41.308i,  −0.146 + 17.114i,  −0.146 − 17.114i)
V = [−0.0948 + 0.5693i  −0.0948 − 0.5693i  0.4140 + 0.4010i  0.4140 − 0.4010i;
0.1340 − 0.8052i  0.1340 + 0.8052i  0.5855 + 0.5671i  0.5855 − 0.5671i; …]
Then, following the developments of Section 4.5.1, we (a) form the matrix
Vupper with the first two rows of V, (b) invert V, (c) form the matrix Vright⁻¹
with the last two columns of V⁻¹ and (d) calculate the product Vright⁻¹M⁻¹.
The receptance matrix is then

R(ω) = Vupper diag[1/(iω − λⱼ)] Vright⁻¹ M⁻¹   (4.95)
where, just to give an example, the calculations show that the element R11(ω )
of R(ω ) thus obtained is
with the eigenvalues ordered as in the matrix diag ( λ j ) above. Then, since
λ2 = λ1∗ and λ4 = λ3∗ , it is left to the reader to show that the substitution of their
numerical values into Equation 4.96 gives the (1,1)-element of the matrix
in Equation 4.94. Clearly, the same applies to the FRFs R12 (ω ) = R21(ω ) and
R22 (ω ).
R(ω) = (K − ω²M + iωC)⁻¹   (4.97)
follows that if all excitation forces but the kth are zero, we have Uⱼ = Rⱼₖ(ω)Fₖ
and, consequently,

Rⱼₖ(ω) = Uⱼ/Fₖ  (with Fᵣ = 0 for r ≠ k)   (4.98)
which in turn shows that Rj k (ω ) gives the displacement response of the jth
DOF when the excitation force is applied at the kth DOF only. From an
experimental point of view, this condition is easy to achieve because we
must only apply the excitation force at the kth DOF and measure the sys-
tem’s response at the jth DOF (clearly, the same applies to the mobility
M j k (ω ) or the accelerance Aj k (ω ) when the motion is expressed or measured
in terms of velocity or acceleration, respectively).
If now, on the other hand, we consider the FRFs in the form of force/motion
ratios – say, for example, the dynamic stiffness matrix K(ω) = F/U – then it
is easy to see that the equations of motion lead to K(ω) = −ω²M + iωC + K,
which implies K = R⁻¹. However, besides the general fact that, given a non-singular
matrix A = [aⱼₖ], we cannot expect the j,kth element of A⁻¹ to be
given simply by 1/aⱼₖ (unless A is diagonal), a more important reason why
in general we have Kⱼₖ(ω) ≠ [Rⱼₖ(ω)]⁻¹ lies in the physical interpretation of
the elements of the dynamic stiffness matrix. Following the same line of
reasoning as earlier, in fact, we have

Kⱼₖ(ω) = Fⱼ/Uₖ  (with Uᵣ = 0 for r ≠ k)   (4.99)
which tells us that we must measure the force at the jth DOF while imposing
a displacement at the kth DOF and keeping all the other DOFs fixed at zero
displacement. In just a few words, this means that we are basically dealing
with a different system; no longer – as for Rⱼₖ(ω) – a force-free system except
for a single input force, but a system clamped at all its DOFs except the one
where the displacement is imposed. Needless to say, this condition is experimentally
very difficult (if not even impossible in most cases) to achieve.
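The point is quickly illustrated with a hypothetical 2-DOF system: inverting the dynamic stiffness matrix gives the receptance matrix, yet the j,kth entries of the two matrices are not reciprocals of each other, even though both matrices are symmetric (reciprocity).

```python
import numpy as np

# hypothetical 2-DOF system used only to illustrate the point
M = np.diag([2.0, 1.0])
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])
C = 0.01 * K
w = 5.0

Kdyn = K - w**2 * M + 1j * w * C     # dynamic stiffness matrix K(w)
R = np.linalg.inv(Kdyn)              # receptance matrix, R = K^{-1}
```

Here Kdyn[0, 1] and 1/R[0, 1] differ by almost three orders of magnitude, while R remains symmetric as reciprocity requires.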
Finally, another point worthy of notice is that for systems with reasonably
well-separated modes, only one row or one column of the FRF matrix
(typically accelerance, because accelerometers are probably the most used
motion transducers in experiments) is necessary to extract the system's
natural frequencies, damping ratios and mode shapes. This is particularly
important in the applied field of Experimental Modal Analysis because,
out of the n² elements of the FRF matrix – of which only n(n + 1)/2 are
independent because the matrix is symmetric for linear systems – only n are
necessary to determine the system's modal parameters. For more details on
this aspect, we refer the interested reader to specialised texts such as Ewins
(2000), Maia and Silva (1997) or Brandt (2011).
Chapter 5
Vibrations of continuous
systems
5.1 INTRODUCTION
A taut flexible string with a uniform mass per unit length µ and under the
stretching action of a uniform tension T0 is the ‘classical’ starting point in
the study of continuous systems. If the string undisturbed position coin-
cides with the x-axis and its (small, we will see shortly what this means)
transverse motion occurs in the y-direction, then its deflected shape at point
x and time t is mathematically described by a field function y(x, t), so that
at a given instant of time, say t = t0, y ( x, t0 ) is a snapshot of the string shape
at that instant, while at a fixed point x = x0, the function y ( x0 , t ) represents
the time-history of the string particle located at x0 .
In order to obtain the equation of motion, we observe that the small-
amplitude Lagrangian density (given by the difference between the kinetic
and potential energy densities) for the string has the functional form
Λ(∂ t y, ∂ x y ), and we have
Λ(∂t y, ∂x y) = (µ/2)(∂y/∂t)² − (T0/2)(∂y/∂x)²   (5.1a)
where the expressions of the energy densities on the r.h.s. can be found in
every book on waves (for example, Billingham and King (2000) or Elmore
and Heald (1985)). Then, by calculating the prescribed derivatives of
Equation 2.68a, we get
∂Λ/∂y = 0,   ∂/∂t [∂Λ/∂(∂t y)] = µ ∂²y/∂t²,   ∂/∂x [∂Λ/∂(∂x y)] = −T0 ∂²y/∂x²   (5.1b)
value problem (IVP) given by Equation 5.2 supplemented with the initial
(i.e. at t = 0) conditions y(x,0) = u(x) and ∂ t y(x,0) = w(x) is the so-called
d’Alembert solution of Equation B.67. Also, it can be shown that this same
solution can be obtained without the aid of Laplace and Fourier transforms
(which we used in Example B.6) by imposing the initial conditions (ICs) at
t = 0 to a general solution of the form
y ( x, t ) = f ( x − ct ) + g ( x + ct ) (5.3)
where f and g are two unrelated arbitrary functions that represent, respec-
tively, a waveform (or wave profile or progressive wave) propagating without
the change of shape in the positive x-direction and a waveform propagat-
ing without the change of shape in the negative x-direction. Moreover, if
the two wave profiles have a finite spatial extension, the fact that the wave
Equation 5.2 is linear also tells us that when the two waves come together,
they simply ‘pass through’ one another and reappear without distortion.
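The non-interaction of counter-propagating waveforms is easy to check numerically. The sketch below uses the d’Alembert form of Equation 5.3 with two identical Gaussian pulses; the pulse shape and the value c = 1 are arbitrary illustrative choices, not taken from the text.

```python
import math

# A sketch of the d'Alembert solution 5.3 with two identical Gaussian
# pulses (an assumed illustrative shape) travelling in opposite directions.
def pulse(s):
    return math.exp(-s * s)

def y(x, t, c=1.0):
    # rightward-propagating f(x - ct) plus leftward-propagating g(x + ct)
    return pulse(x - c * t) + pulse(x + c * t)

# Long after crossing at x = 0, the profile near x = ct is the rightward
# pulse alone, undistorted by the encounter.
t = 10.0
print(abs(y(t, t) - pulse(0.0)) < 1e-12)  # → True
```

At t = 0 the two pulses overlap and add (constructive interference); at large t each reappears with its original shape, exactly as the linearity argument predicts.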
Remark 5.1
y(x, t) = A sin[(2π/λ)(x − ct)] = A sin(kx − ωt)   (5.4)

ω = 2πν = 2π/T,   c = λν = λ/T = ω/k   (5.5)

where k = 2π/λ is the wavenumber, ν the frequency and T the period.
Remark 5.2
If now we consider the special case of two sinusoidal waves with equal
amplitude and velocity travelling on the string in opposite directions,
we have the solution of the wave equation given by the superposition
y(x, t) = A sin(kx − ωt) + A sin(kx + ωt). Using well-known trigonometric
relations, however, this can be transformed into

y(x, t) = 2A sin(kx) cos(ωt)   (5.6)
which is a solution in its own right but no longer with the space and time
variables associated in the ‘propagating form’ x ± ct . Equation 5.6, in fact,
represents a stationary or standing wave, that is, a wave profile that does
not advance along the string but in which all the elements of the string
oscillate in phase (with frequency ω ) and where the particle at x = x0 sim-
ply vibrates back and forth with amplitude 2A sin kx0. Moreover, the par-
ticles at the positions such that sin kx = 0 do not move at all and are called
nodes; a few easy calculations show that they are spaced half-wavelengths
apart, with antinodes – i.e. particles of maximum oscillation amplitude –
spaced halfway between the nodes. Physically, therefore, it can be said that
standing waves are the result of constructive and destructive interference
between rightward- and leftward-propagating waves.
x > 0) localised disturbance is reflected back into the string by the fixed
boundary in the form of an outgoing (i.e. rightward-propagating) distur-
bance which is the exact replica of the original wave except for being upside
down. This ‘wave-reversing’ effect without change of shape applies to (and
is characteristic of) the fixed boundary, but clearly other BCs – such as, for
example, ky(0, t) = T0 ∂x y(0, t), which corresponds to an elastic boundary
with stiffness k – will give different results, with, in some cases, even con-
siderable distortion of the reflected pulse with respect to the incoming one.
Given these preliminary considerations, our interest now lies in the
free vibration of a uniform string of length L fixed at both ends (x = 0
and x = L). Mathematically, the problem is expressed by the homogeneous
wave equation
∂²y/∂x² − (1/c²) ∂²y/∂t² = 0   (5.7a)

supplemented by the boundary and initial conditions

y(0, t) = 0,   y(L, t) = 0
y(x, 0) = y0(x),   ∂t y(x, 0) = ẏ0(x)   (5.7b)
respectively. In order to tackle the problem, we can use the standard and
time-honoured method of separation of variables, which consists in looking
for a solution of the form y(x, t) = u(x)g(t) and substituting it in Equation 5.7a
to obtain c²u″/u = g̈/g. But since a function of x alone (the l.h.s.) can be
equal to a function of t alone (the r.h.s) only if both functions are equal to
a constant, we denote this constant by −ω 2 and obtain the two ordinary
differential equations
d²u/dx² + k²u = 0,   d²g/dt² + ω²g = 0   (5.8)
(with, we recall, k = ω/c) where the BCs are ‘transferred’ to the spatial part
u(x) and become u(0) = u(L) = 0. Then, recalling that the solution of the
spatial Equation 5.8₁ is u(x) = C sin kx + D cos kx, enforcing the BCs leads
to D = 0 and to the frequency (or characteristic) equation sin kL = 0, which
in turn implies kL = nπ (n = 1, 2, …). So, it turns out that the only allowed
wavenumbers are kn = nπ/L, meaning that the string can only vibrate har-
monically at the frequencies
ωn = nπc/L = (nπ/L)√(T0/µ)   (n = 1, 2, …)   (5.9a)
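In numbers, Equation 5.9a is a one-liner. The parameter values below are assumptions chosen only for illustration, not data from the text.

```python
import math

# Natural frequencies of a fixed-fixed string (Equation 5.9a); L, T0 and mu
# are assumed illustrative values.
L, T0, mu = 1.0, 100.0, 0.01      # length (m), tension (N), mass/length (kg/m)
c = math.sqrt(T0 / mu)            # transverse wave speed

def omega_n(n):
    return n * math.pi * c / L    # Equation 5.9a

print(omega_n(1) / (2 * math.pi))   # fundamental frequency in Hz
print(omega_n(3) / omega_n(1))      # harmonics are integer multiples (≈ 3)
```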
y0(x) = Σ_{n=1}^∞ An sin kn x,   ẏ0(x) = Σ_{n=1}^∞ ωn Bn sin kn x   (5.11)
and then by multiplying both sides of the ICs by sin km x and integrating over
the string length. By so doing, the use of Equation 5.12 in Equations 5.11₁
and 5.11₂, respectively, leads to (renaming the index as appropriate)

An = (2/L) ∫₀^L y0(x) sin kn x dx,   Bn = (2/(Lωn)) ∫₀^L ẏ0(x) sin kn x dx   (5.13)
Together, Equations 5.10 and 5.13 provide the desired solution, from which
we can clearly see the roles played by BCs and ICs: the BCs determine the
natural frequencies and the mode shapes – that is, the eigenvalues and eigen-
functions in more mathematical terminology – while the ICs determine the
contribution of each mode to the string vibration. The reason for the terms
‘eigenvalues’ and ‘eigenfunctions’ becomes clearer if we define k² = λ (here
λ is not a wavelength) and rewrite the spatial part of the problem 5.7 – i.e.
Equation 5.8₁ and the corresponding BCs – as
− u′′ = λ u
(5.14)
u(0) = 0, u(L) = 0
where it is now evident that Equation 5.14₁ is an eigenvalue problem for the
differential operator −d²/dx².
Together, Equations 5.14 define the so-called BVP belonging to a par-
ticular class that mathematicians call Sturm-Liouville problems (SLps).
We will have more to say about this in Section 5.4, but for the time being, a
first observation is that the class of SLps is quite broad and covers many
important problems of mathematical physics. A second observation con-
cerns the term ‘orthogonality’ used in connection with Equation 5.12. In
fact, since it can be shown that for any two functions f, g belonging to the
(real) linear space of twice-differentiable functions defined on the interval
0 ≤ x ≤ L, the expression ∫₀^L f(x)g(x) dx defines a legitimate inner product
(where ‘legitimate’ means that it satisfies all the properties given in Section
A.2.2 of Appendix A), we can write
⟨f, g⟩ ≡ ∫₀^L f(x)g(x) dx   (5.15)
and say that the two functions are orthogonal if ⟨f, g⟩ = 0. This is indeed
the case for the eigenfunctions (of the BVP 5.14) sin kn x and sin km x (n ≠ m).
In this light, moreover, note that Equations 5.13 can be expressed as the
inner products
An = (2/L) ⟨sin kn x, y0(x)⟩,   Bn = (2/(Lωn)) ⟨sin kn x, ẏ0(x)⟩   (5.16)
Remark 5.3
ii. The reader is invited to show that the total energy ‘stored’ in each
mode is
En = (µ ωn² L/4)(An² + Bn²)   (5.17)

and that, in addition, by virtue of the modes’ orthogonality
(Equation 5.12), the total energy of the string is given by E = Σ_{n=1}^∞ En,
meaning that each mode vibrates with its own amount of energy
and there is no energy exchange between modes.
iii. The homogeneous nature of the BVP 5.14 determines the eigenfunc-
tions un (x) to within an arbitrary scaling (or normalisation) factor.
Fixing this factor by some convention gives the normalised eigenfunc-
tions (which we will denote by φn (x) in order to distinguish them
from their ‘unnormalised’ counterparts un (x)). So, for example, in the
case of the string, one possibility is to normalise the eigenfunctions
so that they satisfy the orthonormality relations ⟨φn, φm⟩ = δnm. With
this convention, it is then not difficult to see that, in light of Equation
5.12, the normalised eigenfunctions are φn(x) = √(2/L) sin kn x.
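Equation 5.17 of point (ii) above can be verified by direct quadrature of the kinetic and potential energy densities. In the sketch below, the string parameters and the mode amplitudes are assumed values chosen only for illustration.

```python
import math

# Numerical check of Equation 5.17 for a single string mode, with assumed
# illustrative parameters.
L, T0, mu = 1.0, 40.0, 0.1
c = math.sqrt(T0 / mu)
n, A, B = 3, 0.02, 0.01
k = n * math.pi / L
w = k * c                          # omega_n

def simpson(f, a, b, m=2000):
    # composite Simpson rule
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

# At t = 0 the mode is y = A sin(kx) with velocity w*B sin(kx)
kinetic = simpson(lambda x: 0.5 * mu * (w * B * math.sin(k * x)) ** 2, 0, L)
potential = simpson(lambda x: 0.5 * T0 * (k * A * math.cos(k * x)) ** 2, 0, L)
E_expected = mu * w ** 2 * L * (A ** 2 + B ** 2) / 4   # Equation 5.17
print(abs(kinetic + potential - E_expected) < 1e-9)    # → True
```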
Example 5.1
If the string is set into motion by pulling it aside by an amount a (a << L)
at the midpoint and then releasing it at t = 0, the initial velocity is zero
(thus implying that all the Bn coefficients of Equation 5.13₂ are zero),
while the initial shape is
y0(x) = 2ax/L for 0 ≤ x ≤ L/2,   y0(x) = 2a(L − x)/L for L/2 ≤ x ≤ L   (5.18)
Then, using Equation 5.13₁, the coefficients An are given by
An = (4a/L²) [ ∫₀^{L/2} x sin kn x dx + ∫_{L/2}^L (L − x) sin kn x dx ]   (5.19)
from which, carrying out the integrations, we get the string motion

y(x, t) = (8a/π²) Σ_{n=0}^∞ [(−1)ⁿ/(2n + 1)²] sin(k_{2n+1} x) cos(ω_{2n+1} t)
        = (8a/π²) [ sin(πx/L) cos ω1t − (1/3²) sin(3πx/L) cos 3ω1t + ⋯ ]   (5.20)
where in writing the second expression we took into account the rela-
tions kn = nπ/L and ωn = nω1. Note that the absence of even-order
modes is not at all unexpected; these modes, in fact, have a node at
x = L 2, which is precisely the point where we displaced the string
when we applied the IC 5.18.
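The coefficients of Example 5.1 can be checked by numerical quadrature of Equation 5.13₁: even modes should vanish and the odd ones should have magnitude 8a/(nπ)². The values of a and L below are assumptions for illustration.

```python
import math

# Quadrature check of the plucked-string coefficients of Example 5.1
# (a and L are assumed illustrative values).
a, L = 0.05, 1.0

def y0(x):
    # triangular initial shape, Equation 5.18
    return 2 * a * x / L if x <= L / 2 else 2 * a * (L - x) / L

def A(n, steps=20000):
    # midpoint rule for Equation 5.13_1
    h = L / steps
    k = n * math.pi / L
    return (2 / L) * sum(y0((i + 0.5) * h) * math.sin(k * (i + 0.5) * h)
                         for i in range(steps)) * h

print(abs(A(2)) < 1e-6)                                    # even mode absent
print(abs(abs(A(3)) - 8 * a / (math.pi * 3) ** 2) < 1e-6)  # |A3| = 8a/(3*pi)^2
```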
∂²y/∂x² = (ρ/E) ∂²y/∂t²,   ∂²θ/∂x² = (ρ/G) ∂²θ/∂t²   (5.21)
where E is Young’s modulus and G is the shear modulus for the material,
and a comparison of Equations 5.21 with the string Equation 5.2 shows
that cl = √(E/ρ) and cs = √(G/ρ) are the propagation velocities of longitudinal/
shear waves along the rod/shaft. In light of this close mathematical analogy,
our discussion here will be limited to the rod longitudinal vibration because
it is evident that by simply using the appropriate symbols and physical
parameters for the case at hand, all the considerations and results obtained
for the string apply equally well to the rod and the shaft.
So, if now we proceed by separating the variables and looking for a solu-
tion of the form y(x, t) = u(x)g(t), for the spatial part u(x) of Equation 5.21₁,
we obtain the ordinary differential equation
d²u(x)/dx² + γ²u(x) = 0   (5.22)
whose general solution is u(x) = C sin γx + D cos γx. For a rod fixed at both
ends, the BCs u(0) = u(L) = 0 lead, exactly as for the string, to the eigenpairs

ωn = (nπ/L)√(E/ρ),   un(x) = Cn sin(nπx/L)   (n = 1, 2, …)   (5.23)
If, on the other hand, the rod is clamped at the end point x = 0 and free
at x = L, the BCs are u(0) = 0 (a geometric BC if we recall the discussion
in Section 2.5.1 of Chapter 2) and u′(L) = 0 (a natural BC). With these
BCs, the sinusoidal solution for u(x) gives D = 0 and the frequency equa-
tion cos γL = 0, from which it follows γnL = (2n − 1)π/2. Consequently, the
eigenpairs of the clamped-free rod are

ωn = [(2n − 1)π/(2L)]√(E/ρ),   un(x) = Cn sin[(2n − 1)πx/(2L)]   (n = 1, 2, …)   (5.24)

For a free-free rod, on the other hand, the BCs u′(0) = u′(L) = 0 give C = 0
and the frequency equation sin γL = 0, and consequently the eigenpairs

ωn = (nπ/L)√(E/ρ),   un(x) = Cn cos(nπx/L)   (n = 0, 1, 2, …)   (5.25)
where in this case, however, the eigenvalue zero is acceptable because its
corresponding eigenfunction is not identically zero. In fact, inserting γ = 0
in Equation 5.22 gives u′′ = 0, and consequently, by integrating twice,
u(x) = c1x + c2 , where c1 , c2 are two constants of integration. Enforcing the
BCs now leads to u0 (x) = c2 and not, as in the previous cases, u0 (x) = 0 (which
means no motion at all, and this is why zero is not an acceptable eigenvalue
in those cases). The constant eigenfunction u0 = c2 of this free-free case is
a rigid-body mode in which the rod moves as a whole, without any inter-
nal stress or strain. Just like for finite-DOFs systems, therefore, rigid-body
modes are characteristic of unrestrained systems.
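The clamped-free result γnL = (2n − 1)π/2 translates directly into code; the steel-like values of E and ρ below are assumptions for illustration only.

```python
import math

# Clamped-free rod: gamma_n * L = (2n - 1) * pi / 2 and omega_n = gamma_n * c_l
# with c_l = sqrt(E/rho); E, rho and L are assumed steel-like values.
E, rho, L = 210e9, 7850.0, 1.0
c_l = math.sqrt(E / rho)             # longitudinal wave speed

def omega_n(n):
    gamma = (2 * n - 1) * math.pi / (2 * L)
    return gamma * c_l

print(omega_n(1) / (2 * math.pi))    # fundamental frequency in Hz
print(omega_n(2) / omega_n(1))       # overtones are odd multiples (≈ 3)
```

Note how the odd-multiple overtone spacing distinguishes the clamped-free rod from the fixed-fixed rod, whose overtones are consecutive integer multiples of the fundamental.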
Remark 5.4
i. For a non-uniform rod with cross-sectional area A(x), the equation
of motion and its spatial counterpart read

∂/∂x [EA(x) ∂y/∂x] = ρA(x) ∂²y/∂t²   ⇒   −d/dx [EA(x) du/dx] = ω² µ(x) u   (5.26)

where Equation 5.26₂ is relative to the spatial part u(x) of the
function y(x, t) after having expressed it in the separated-variables
form y(x, t) = u(x)w(t) and having called −ω² the separation con-
stant (or, equivalently for our purposes, after having assumed a
solution of the form y(x, t) = u(x) e^{iωt} and having substituted it in
Equation 5.26₁).
ii. For the shaft, the torsional stiffness GJ(x) (where J is the area polar
moment of inertia) replaces the longitudinal stiffness EA(x) appear-
ing on the l.h.s., while on the r.h.s., we have the mass moment of
inertia per unit length I (x) = ρ J(x) instead of the mass per unit length
µ(x) = ρ A(x). Clearly, in this case, u(x) is understood as the spatial
part of the function θ (x, t).
Let [a, b] be a finite interval of the real line and let p(x), p′(x) ≡ dp/dx, q(x)
and w(x) be given real-valued functions that are continuous on [a, b], with,
in addition, the requirements p(x) > 0 and w(x) > 0 on [a, b]. Then, a regular
Sturm-Liouville problem (SL problem for short) is a one-dimensional BVP
of the form

−(pu′)′ + qu = λwu
Ba u ≡ α1u(a) + β1u′(a) = 0,   Bb u ≡ α2u(b) + β2u′(b) = 0   (5.27)
L = −(d/dx)[p(x) (d/dx)] + q(x)   (5.28)
u(Lv) − (Lu)v = (d/dx)[p(u′v − uv′)]   (5.29)
which follows from an easy-to-check chain of relations. Integrating
Equation 5.29 over the interval [a, b] then gives

⟨u, Lv⟩ − ⟨Lu, v⟩ = [p(u′v − uv′)]_a^b   (5.30)

which is known as Green’s formula. Equation 5.30 implies that when the
boundary terms on the r.h.s. are zero – as is the case for a regular SL problem
– the operator L is self-adjoint, that is, ⟨u, Lv⟩ = ⟨Lu, v⟩. Two important
consequences are that the eigenvalues of a regular SL problem are real and
that eigenfunctions corresponding to distinct eigenvalues are w-orthogonal,
meaning that

⟨φk, φj⟩_w ≡ ∫_a^b φk(x) φj(x) w(x) dx = 0   (5.31)
In order to show that the eigenvalues are real, let λ, φ(x) be a possibly complex
eigenpair and let us write λ = r + is and φ(x) = u(x) + iv(x). Then, the prop-
erty of self-adjointness ⟨φ, Lφ⟩ = ⟨Lφ, φ⟩ implies ⟨φ, λwφ⟩ = ⟨λwφ, φ⟩ because
Lφ = λwφ. If now we generalise the inner product 5.15 to the complex case
as ∫_a^b f*g dx and recall that w(x) is real, the equation ⟨φ, λwφ⟩ = ⟨λwφ, φ⟩ in
explicit form reads
λ ∫_a^b |φ|² w dx = λ* ∫_a^b |φ|² w dx   ⇒   (λ − λ*) ∫_a^b (u² + v²) w dx = 0
which, since the last integral is certainly positive, gives λ − λ ∗ = 2is = 0 and
tells us that λ is real. In this respect, moreover, it can also be shown that the
eigenfunctions can always be chosen to be real.
Passing to w-orthogonality, let φk, φj be two eigenfunctions corresponding
to the eigenvalues λk, λj with λk ≠ λj, so that Lφk = λk wφk and Lφj = λj wφj.
Multiplying the first relation by φj, the second by φk, integrating over the
interval and subtracting the two results leads to (λk − λj) ∫_a^b φk φj w dx = 0,
which in turn implies Equation 5.31 because λk ≠ λj.
which in turn implies Equation 5.31 because λk ≠ λ j.
Remark 5.5
i. The fact that – as pointed out above – for any two functions f, g continu-
ous on [a, b], the expressions ⟨f, g⟩ ≡ ∫_a^b f*g dx and ⟨f, g⟩_w ≡ ∫_a^b f*g w dx
(note that here we consider the possibility of complex functions; for
real functions, the asterisk of complex conjugation can obviously be
ignored) define two legitimate inner products implies that we can
define the norms induced by these inner products as

‖f‖ = √⟨f, f⟩,   ‖f‖_w = √⟨f, f⟩_w

respectively.
ii. In order to call an operator self-adjoint, one should first introduce its
adjoint and see if the two operators are equal (hence the term self-
adjoint). For our purposes, however, this is not strictly necessary and
the property u Lv = Lu v suffices. In this respect, note the evident
analogy with the relation u Av = Au v that holds in the finite-
dimensional case for a symmetric matrix A (also see the final part of
Section A.3 of Appendix A).
Another property of the eigenvalues is that they are positive. More precisely,
we have the following proposition: if q ≥ 0 and α1β1 ≤ 0, α2β2 ≥ 0 then all
the eigenvalues are positive. The only exception is when α1 = 0, α2 = 0 and
q = 0, a case in which the regular SL problem becomes
−(pu′)′ = λwu
u′(a) = 0,   u′(b) = 0   (5.32)
In fact, if λ, φ is an eigenpair, multiplying the SL equation by φ and
integrating by parts over [a, b] leads to

λ = { −[pφφ′]_a^b + ∫_a^b (pφ′² + qφ²) dx } / ∫_a^b φ² w dx   (5.33)

−p(b)φ(b)φ′(b) + p(a)φ(a)φ′(a) = (α2/β2) p(b)φ²(b) − (α1/β1) p(a)φ²(a)
and the second expression – which follows directly from the BCs – shows
that, under our assumptions on the α , β -coefficients, the boundary term is
non-negative (note that if β1 and β2 are zero, the BCs give φ (a) = φ (b) = 0 and
the boundary term is zero). Then, since the two integrals in Equation 5.33
are positive, it follows that the eigenvalue λ is also positive.
f(x) = Σ_{n=1}^∞ cn φn(x)   where   cn = ⟨φn, f⟩_w = ∫_a^b φn(x) f(x) w(x) dx   (5.34)
u(x) = Σ_{n=1}^∞ ⟨φn, u⟩_w φn(x)   (5.35)
R[u] ≡ ⟨u, Lu⟩/⟨u, u⟩_w = { −[puu′]_a^b + ∫_a^b (pu′² + qu²) dx } / ∫_a^b u² w dx   (5.36)
where the second expression follows from the first by writing in explicit
form the two inner products and performing an integration by parts in the
numerator. Also, note that the denominator can equivalently be written as
‖u‖²_w.
The property of interest is that the lowest eigenvalue λ1 of the regu-
lar SL problem is the minimum value of R for all continuous functions
u ≠ 0 that satisfy the BCs and are such that (pu′)′ is continuous on [ a, b].
Mathematically, therefore, we can write
λ1 = min R[u] = min ( ⟨u, Lu⟩ / ⟨u, u⟩_w )   (5.37)
to which it must be added that the minimum value is achieved if and only
if u(x) is the eigenfunction corresponding to λ1.
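For the string problem 5.14 (p = w = 1, q = 0 on [0, L]) this minimum property is easy to probe numerically: any admissible trial function gives an upper bound on λ1 = (π/L)². The polynomial trial function below is an arbitrary assumed choice satisfying the BCs.

```python
import math

# Rayleigh quotient R[u] = int(u'^2) / int(u^2) for -u'' = lambda*u with
# u(0) = u(L) = 0; the trial function u = x(L - x) is an assumed choice.
L = 1.0

def simpson(f, a, b, m=2000):
    # composite Simpson rule
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

u = lambda x: x * (L - x)
du = lambda x: L - 2 * x
R = simpson(lambda x: du(x) ** 2, 0, L) / simpson(lambda x: u(x) ** 2, 0, L)
lam1 = (math.pi / L) ** 2            # exact lowest eigenvalue

print(lam1 < R < 1.014 * lam1)       # upper bound, within about 1.3% (→ True)
```

The quotient comes out as R = 10/L² against the exact π²/L² ≈ 9.87/L², illustrating why Rayleigh-type estimates of the lowest eigenvalue are so effective even with crude trial functions.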
For a proof of this property, we can follow this line of reasoning: by
preliminarily observing that for any continuous function u(x) satisfying the
BCs, the eigenfunction expansion 5.35 implies the two results:
‖u‖²_w = ⟨u, u⟩_w = Σ_n ⟨φn, u⟩²_w ,   Lu = Σ_n λn ⟨φn, u⟩_w w φn   (5.38)
we can write a chain of relations that leads to the conclusion stated by the
theorem, namely, that λ1 ≤ R [ u] and λ1 = R [ u] if and only if u = φ1. The chain
of relations is
⟨u, Lu⟩ = Σ_n ⟨φn, u⟩_w ⟨φn, Lu⟩ = Σ_n ⟨φn, u⟩_w ⟨Lφn, u⟩ = Σ_{n,m} λm ⟨φn, u⟩_w ⟨φm, u⟩_w ⟨φm, φn⟩_w

= Σ_n λn ⟨φn, u⟩_w ⟨φn, u⟩_w = Σ_n λn ⟨φn, u⟩²_w ≥ λ1 Σ_n ⟨φn, u⟩²_w = λ1 ⟨u, u⟩_w
where we used the expansion 5.35 in writing the first equality, the self-
adjointness of L in the second, the expansion 5.38₂ in the third and the
orthogonality of the eigenfunctions in the fourth. Then, the inequality fol-
lows because we know from previous results that λn ≥ λ1 (and the last equal-
ity follows from Equation 5.38₁).
Remark 5.6
K[u] = ∫_a^b u² w dx
and where the theory tells us (see, for example, Collins (2006))
that the constrained variational problem 5.39 is equivalent to the
unconstrained problem of finding the extremals of the functional
I = J − λ K, with λ playing the role of a Lagrange multiplier.
ii. In light of Equation 5.37, it is natural to ask if also the other eigenvalues
satisfy some kind of minimisation property. The answer is affirmative
and it can be shown that the nth eigenvalue λn is the minimum of
R[ u] over all twice-differentiable functions that are orthogonal to the
first n − 1 eigenfunctions. The minimising function in this case is φn.
In this respect, it should also be pointed out that these minimisation
properties play an important role in approximate methods used to
numerically evaluate the lowest-order eigenvalues of vibrating systems
for which an analytical solution is very difficult or even impracticable.
constant under the small deflections that displace the membrane from its
equilibrium position.
Denoting by w(x, y, t) the field function that describes the membrane
motion at the point x, y and time t, the small-amplitude Lagrangian density
is Λ = (σ/2)(∂t w)² − (T/2){(∂x w)² + (∂y w)²}, where σ is the mass per unit
area and T the (uniform) tension per unit length.
Example 5.2
Leaving the details to the reader as an exercise, we consider here
the case of a rectangular membrane of size a along x, b along y and
fixed along all edges. Choosing a system of Cartesian coordinates
and assuming a solution of Equation 5.41 in the separated-variables
form u(x, y) = f(x) g(y), we get f″/f = −(g″/g) − k², where the primes
denote derivatives with respect to the appropriate argument. Then,
calling −kx² the separation constant and defining ky² = k² − kx², we are led
to the two ordinary differential equations f″ + kx² f = 0 and g″ + ky² g = 0,
whose solutions are, respectively, f(x) = A1 sin kx x + A2 cos kx x and
g(y) = B1 sin ky y + B2 cos ky y.
At this point, enforcing the fixed BCs f (0) = f (a) = 0 and
g(0) = g(b) = 0, it follows that we must have kx = nπ/a (n = 1, 2, …) and
ky = mπ/b (m = 1, 2, …), thus implying that the eigenfrequencies are
ω_nm = c √(kx² + ky²) = π √(T/σ) √(n²/a² + m²/b²)   (n, m = 1, 2, …)   (5.42a)
w(x, y, t) = Σ_{n,m=1}^∞ ( A_nm cos ω_nm t + B_nm sin ω_nm t ) u_nm(x, y)   (5.43)
where the constants are determined from the ICs w(x, y, 0) = w0(x, y)
and ∂t w(x, y, 0) = ẇ0(x, y) by a direct extension of the procedure fol-
lowed in Section 5.2.2. Here, we get

A_nm = (4/ab) ∫₀^a ∫₀^b w0 sin(nπx/a) sin(mπy/b) dx dy = (4/ab) ⟨u_nm, w0⟩

B_nm = (4/(ab ω_nm)) ∫₀^a ∫₀^b ẇ0 sin(nπx/a) sin(mπy/b) dx dy = (4/(ab ω_nm)) ⟨u_nm, ẇ0⟩   (5.44)
d²f/dr² + (1/r) df/dr + (k² − α²/r²) f = 0,   d²g/dθ² + α² g = 0   (5.46)
where the zero at x = 0, which is also a root for all the Jn functions with
n ≥ 1, is excluded because it leads to no motion at all. So, it turns out that for
each value of n, we have a countably infinite number of solutions. Labelling
them with the index m (m = 1, 2, …) and recalling that k = ω/c, the natural
frequencies of the membrane are ω_nm = c k_nm, with the corresponding
eigenfunctions u_nm(r, θ) = Jn(k_nm r) cos nθ and û_nm(r, θ) = Jn(k_nm r) sin nθ,
which have the same shape but differ from one another by an angular rota-
tion of 90°.
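Since the wavenumbers k_nm are fixed by the zeros of the Bessel functions Jn, a short self-contained sketch (series evaluation of J0 plus bisection; no external libraries assumed) reproduces the classical value j_{0,1} ≈ 2.405 that determines the fundamental axisymmetric mode of a membrane of radius R, via ω_01 = c j_{0,1}/R.

```python
import math

# Locate the first zero of J_0 by bisection; the power series converges
# rapidly for the small arguments involved here.
def J0(x):
    term, s = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x / 4) / (k * k)   # ratio of consecutive series terms
        s += term
    return s

def bisect(f, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

j01 = bisect(J0, 2.0, 3.0)     # first root of J_0, j_{0,1} ≈ 2.4048
print(j01)
# Fundamental of a membrane of radius R: omega_01 = c * j01 / R
```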
Putting together the eigenfunctions un m , uˆ n m with the time part of the
solution, we can write the general solution as
w(r, θ, t) = Σ_{m=1}^∞ Σ_{n=0}^∞ u_nm ( A_nm cos ω_nm t + B_nm sin ω_nm t )
           + Σ_{m=1}^∞ Σ_{n=1}^∞ û_nm ( Â_nm cos ω_nm t + B̂_nm sin ω_nm t )   (5.50)
Remark 5.7
Using the relation ∫₀^R Jn²(k_nm r) r dr = (R²/2) J²_{n+1}(k_nm R), it is left
to the reader to determine
df/dr = (k/ω) dy/dx,   d²f/dr² = (k²/ω²) d²y/dx²

d²y/dx² + (1/x) dy/dx + (ω² − n²/x²) y = 0   ⇒   −(d/dx)(x dy/dx) + (n²/x) y = ω² x y   (5.51)
Remark 5.8
Although it is not the case for the circular membrane considered here, it is
worth mentioning the fact that a singular SL problem may have a continu-
ous spectrum, where this term means that the problem may have non-trivial
solutions for every value of λ or for every value λ in some interval. This is
the most striking difference between regular problems (whose eigenvalues
are discrete) and singular ones. Some singular problems, moreover, may
have both a discrete and a continuous spectrum. However, when the prob-
lem has no continuous spectrum but a countably infinite number of discrete
eigenvalues, its eigenpairs, as pointed out above, have properties that are
similar to those of a regular SL problem.
where in the second equation – which applies when the beam is uniform with
constant stiffness EI and constant cross-sectional area A – we defined the
parameter a = √(EI/µ) = √(EI/ρA). The first thing we notice is that Equation
5.52₂ is not a ‘standard’ wave equation: first, because there is a fourth-
order derivative with respect to x and, second, because a does not have the
dimensions of velocity. As can be easily checked, moreover, waves of the
functional form 5.3 do not satisfy Equation 5.52₂, thus indicating that a
(flexural) wave of arbitrary shape cannot retain its shape as it travels along
the beam. In fact, if we consider a travelling sinusoidal wave of the form
y(x, t) = A cos(kx − ωt), substitution in Equation 5.52₂ leads to ω = ak² and
consequently to the phase velocity cp ≡ ω/k = ak = 2πa/λ, which in turn
tells us that cp , unlike the case of transverse waves on a string, is not the
same for all wavenumbers (or wavelengths). As known from basic physics,
this phenomenon is called dispersion and the rate at which the energy of a
flexural pulse – that is, a non-sinusoidal wave comprising waves with differ-
ent wavenumbers – propagates along the beam is not given by cp but by the
group velocity cg ≡ dω/dk. And since in our case the dispersion relation is
ω = ak², we get cg = 2ak = 2cp, that is, the energy of a flexural pulse travels
at twice the phase velocity. In general, the two velocities are related by

cg = cp + k (dcp/dk),   cg = cp − λ (dcp/dλ)   (5.53)
Remark 5.9
The above result on the beam propagation velocities leads to the definitely
unphysical conclusion that both cp and cg tend to increase without limit
as k → ∞ or, equivalently, as λ → 0. This ‘anomaly’ is due to the fact that
the equation of motion 5.52 is obtained on the basis of the simplest theory
of beams, known as Euler-Bernoulli theory, in which the most important
assumption is that plane cross sections initially perpendicular to the axis
of the beam remain plane and perpendicular to the deformed neutral axis
during bending. The assumption, however, turns out to be satisfactory for
wavelengths that are large compared with the lateral dimensions of the
beam; when it is not so the theory breaks down and one must take into
account – as Rayleigh did in his classic book of 1894 The Theory of Sound –
the effect of rotary (or rotatory) inertia. This effect alone is sufficient to
prevent the divergence of the velocities at very short wavelengths, but it is still not
in good agreement with the experimental data. Much better results can be
obtained by means of the Timoshenko theory of beams, in which the effect
of shear deformation is also included in deriving the governing equations.
We will consider these aspects in Section 5.7.3.
d⁴u(x)/dx⁴ − γ⁴ u(x) = 0   where   γ⁴ = ω²/a² = ω²µ/EI   (5.54)
whose general solution can be written as

u(x) = C1 cosh γx + C2 sinh γx + C3 cos γx + C4 sin γx   (5.55)

where the constants depend on the type of BCs. Here we will consider a few
typical and ‘classical’ cases.
Case 1: Both ends simply supported (SS-SS configuration).
The BCs of zero displacement and zero bending moment at both
ends, that is, u(0) = u(L) = 0 and u″(0) = u″(L) = 0 (Equation 5.56),
lead to C1 = C2 = C3 = 0 and to the frequency equation sin γL = 0,
hence γnL = nπ and

ωn = (n²π²/L²)√(EI/µ),   un(x) = C4 sin γn x = C4 sin(nπx/L)   (5.57)
Note that the eigenfunctions are the same as those of the fixed-
fixed string.
Case 2: One end clamped and one end free (C-F or cantilever
configuration).
If the end at x = 0 is rigidly fixed (clamped) and the end at x = L is
free, the geometric BCs require the displacement u and slope u′ to
vanish at the clamped end, whereas at the free end, we have the two
(natural) BCs of zero bending moment (EIu″ = 0) and zero shear force (EIu‴ = 0).
Then, using the BCs, the conditions at x = 0 give

C1 + C3 = 0,   C2 + C4 = 0

and, together with the conditions at x = L, the full system reads

[ 1         0         1         0       ] [C1]
[ 0         1         0         1       ] [C2]
[ cosh γL   sinh γL   −cos γL   −sin γL ] [C3] = 0   (5.59)
[ sinh γL   cosh γL   sin γL    −cos γL ] [C4]

where the condition of non-trivial solutions (zero determinant)
leads to the frequency equation cos γL cosh γL = −1, whose roots
γnL must be obtained numerically.
ωn = (γnL)² √(EI/(µL⁴)) ≅ [(2n − 1)π/2]² √(EI/(µL⁴))   (n = 1, 2, …)   (5.60)
C2 = −C1 (cosh γnL + cos γnL)/(sinh γnL + sin γnL) ≡ −κn C1
un(x) = C1 { (cosh γn x − cos γn x) − κn (sinh γn x − sin γn x) }   (5.61)
and it can be shown that the second mode has a node at x = 0.783L
and the third has two nodes at x = 0.504L and x = 0.868L, etc.
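The cantilever roots quoted above are easy to reproduce: the determinant of Equation 5.59 vanishes when cos(γL)cosh(γL) = −1 (the standard C-F frequency equation), which the sketch below solves by bisection and compares against the (2n − 1)π/2 approximation of Equation 5.60.

```python
import math

# Roots of the cantilever frequency equation cos(x)*cosh(x) = -1, x = gamma*L.
def f(x):
    return math.cos(x) * math.cosh(x) + 1.0

def bisect(lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

roots = [bisect(1.0, 3.0), bisect(4.0, 5.0), bisect(7.0, 8.0)]
approx = [(2 * n - 1) * math.pi / 2 for n in (1, 2, 3)]
print([round(r, 4) for r in roots])        # ≈ [1.8751, 4.6941, 7.8548]
print(abs(roots[2] - approx[2]) < 0.01)    # the approximation improves with n
```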
Case 3: Both ends clamped (C-C configuration).
All four BCs are geometric in this case, and we must have

[ 1         0         1        0      ] [C1]
[ 0         1         0        1      ] [C2]
[ cosh γL   sinh γL   cos γL   sin γL ] [C3] = 0
[ sinh γL   cosh γL   −sin γL  cos γL ] [C4]

where the condition of non-trivial solutions leads to the frequency
equation cos γL cosh γL = 1. The corresponding eigenfunctions are
un(x) = C1 { (cosh γn x − cos γn x) − κn (sinh γn x − sin γn x) }   (5.63a)

κn = (cosh γnL − cos γnL)/(sinh γnL − sin γnL)   (5.63b)
Finally, we leave to the reader the task of filling in the details for
the two remaining classical configurations outlined below: the free-free
beam and the beam clamped at one end and simply supported at the other.
Case 4: Both ends free (F-F configuration).
The BCs are now all natural and require the bending moment
and shear force to vanish at both ends, that is

u″(0) = u‴(0) = 0,   u″(L) = u‴(L) = 0   (5.64)

Proceeding as in the previous cases, the elastic modes turn out to be

un(x) = C1 { (cosh γn x + cos γn x) − κn (sinh γn x + sin γn x) }   (5.65)
where κn is the same as in the C-C case (i.e. Equation 5.63b); sub-
stitution of γ = 0 in Equation 5.54 gives u0(x) = Ax³ + Bx² + Cx + D,
which with the BCs 5.64 does not lead – as in the other cases – to
the trivial zero solution, but to u0(x) = Cx + D, that is, a linear com-
bination of the two functions u0⁽¹⁾ = 1 and u0⁽²⁾ = x. And since here we
are considering the lateral motions of the beam, u0⁽¹⁾, u0⁽²⁾ are physi-
cally interpreted as a transverse rigid translation of the beam as a
whole and a rigid rotation about its centre of mass, respectively.
Also, note that the first elastic mode has two nodes at x = 0.224L
and x = 0.776L.
Case 5: C-SS configuration.
For a beam clamped at the end x = 0 and simply supported at the
other, the frequency equation turns out to be tan γ L = tanh γ L. The
first four roots are
∫₀^L (um un′′′′ − un um′′′′) dx = (γn⁴ − γm⁴) ∫₀^L um un dx   (5.66)
and we can integrate by parts four times the term um un′′′′ to get

∫₀^L um un′′′′ dx = [ um un′′′ − um′ un′′ + um′′ un′ − um′′′ un ]₀^L + ∫₀^L um′′′′ un dx   (5.67)
from which we readily see that any combination of the various ‘classical’
BCs (e.g. SS-SS, C-C, C-F, etc.) causes the left-hand side of Equation 5.66
to vanish. More than that, the same occurs for any set of homogeneous BCs of the form
(with a, b, c, d constants), where this type of condition can arise, for exam-
ple, from a combination of linear and torsional springs. So, having assumed
γn ≠ γm from the beginning, we conclude that for BCs of the form 5.69, we
have the orthogonality relation ∫₀^L um un dx = 0 or, in inner product nota-
tion, ⟨um, un⟩ = 0.
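The orthogonality relation can be confirmed numerically for a non-trivial case; the sketch below uses the first two cantilever eigenfunctions of Equation 5.61 with the standard roots γ1L ≈ 1.8751 and γ2L ≈ 4.6941 (L = 1 is an assumed value).

```python
import math

# Quadrature check of <u_m, u_n> = 0 for the first two cantilever (C-F)
# eigenfunctions, Equation 5.61, with L = 1 (assumed).
L = 1.0
gammas = [1.875104068711961, 4.694091132974175]   # standard roots of cos*cosh = -1

def kappa(gL):
    return (math.cosh(gL) + math.cos(gL)) / (math.sinh(gL) + math.sin(gL))

def u(g, x):
    k = kappa(g * L)
    return (math.cosh(g * x) - math.cos(g * x)) - k * (math.sinh(g * x) - math.sin(g * x))

def inner(g1, g2, steps=4000):
    # midpoint-rule approximation of the inner product on [0, L]
    h = L / steps
    return sum(u(g1, (i + 0.5) * h) * u(g2, (i + 0.5) * h) for i in range(steps)) * h

cross = abs(inner(gammas[0], gammas[1]))
norm = inner(gammas[0], gammas[0])
print(cross / norm < 1e-4)     # cross term negligible relative to the norm
```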
Remark 5.10
The essence of the argument does not change if the beam is not uniform
and has a flexural stiffness s(x) ≡ EI (x) and mass per unit length µ (x) that
[ um (s un′′)′ − un (s um′′)′ + s (um′′ un′ − um′ un′′) ]₀^L = (ωn² − ωm²) ∫₀^L µ um un dx   (5.68b)
so that, whenever the BCs make the boundary term on the l.h.s. vanish, we have
∫₀^L µ um un dx = 0, or ⟨um, un⟩_µ = 0 in inner product notation (note that here
µ(x) plays the role of the weight function w(x) introduced in Section 5.4 on
SLps).
EI ∂⁴y/∂x⁴ − T0 ∂²y/∂x² + µ ∂²y/∂t² = 0   ⇒   d⁴u/dx⁴ − (T0/EI) d²u/dx² − (µω²/EI) u = 0   (5.70)
where the second equation follows from the first when we assume a solution
of the form y(x, t) = u(x) e^{iωt}. Looking for solutions of Equation 5.70₂ in the
form u(x) = A e^{αx} leads to

α1², α2² = T0/(2EI) ± √[ (T0/(2EI))² + µω²/EI ]

with α1² > 0 and α2² < 0. Consequently, we have the four roots ±η and ±iξ,
where
η = { √[(T0/(2EI))² + µω²/EI] + T0/(2EI) }^{1/2},   ξ = { √[(T0/(2EI))² + µω²/EI] − T0/(2EI) }^{1/2}   (5.71)
so that the general solution of Equation 5.70₂ can be written as

u(x) = C1 cosh ηx + C2 sinh ηx + C3 cos ξx + C4 sin ξx   (5.72)

where the constants depend on the type of BCs (note that Equation 5.72
is only formally similar to Equation 5.55 because here the hyperbolic and
trigonometric functions have different arguments). As it turns out, the
simplest case is the SS-SS configuration, in which the BCs are given by
Equation 5.56. Enforcing these BCs on the solution 5.72 – and the reader is
invited to do the calculations – yields C1 = C3 = 0 and the frequency equa-
tion sinh η L sin ξ L = 0. And since sinh η L ≠ 0 for η L ≠ 0, we are left with
sin ξ L = 0, which in turn implies ξ n L = nπ. Thus, the allowed frequencies
are
ωn = (n²π²/L²) √[ EI/µ + T0L²/(n²π²µ) ]   (n = 1, 2, …)   (5.73a)

or, equivalently,

ωn = (nπ/L) √(T0/µ) √(1 + n²π²R) = (n²π²/L²) √(EI/µ) √(1 + 1/(n²π²R))   (5.73b)
where these two expressions show more clearly the two extreme cases in
terms of the non-dimensional ratio R = EI/(T0L²): for small values of R, the
tension is the most important restoring force and the beam behaves like a
string; conversely, for large values of R, the bending stiffness EI is the most
important restoring force and we recover the case of the beam with no axial
force.
Associated with the frequencies 5.73, we have the eigenfunctions
un(x) = C4 sin ξn x, because enforcing the BCs leads – together with the previ-
ously stated result C1 = C3 = 0 – also to C2 = 0.
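The two limits governed by R are easy to exhibit numerically; all parameter values in the sketch below are assumptions chosen only to make one restoring force dominate.

```python
import math

# SS-SS beam under tension, Equation 5.73: omega_n^2 = (EI*xi^4 + T0*xi^2)/mu
# with xi_n = n*pi/L; parameter values are assumed for illustration.
L, mu = 1.0, 0.5

def omega(n, EI, T0):
    xi = n * math.pi / L
    return math.sqrt((EI * xi ** 4 + T0 * xi ** 2) / mu)

# Small R = EI/(T0*L^2): tension dominates, string-like frequencies n*pi*c/L.
w_string = (math.pi / L) * math.sqrt(100.0 / mu)
print(abs(omega(1, 1e-4, 100.0) - w_string) / w_string < 0.01)   # → True

# Large R: bending dominates, frequencies of the tension-free beam.
w_beam = (math.pi / L) ** 2 * math.sqrt(10.0 / mu)
print(abs(omega(1, 10.0, 1e-3) - w_beam) / w_beam < 0.01)        # → True
```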
Remark 5.11
ω1 = (π²/L²) √(EI/µ) √(1 − T0L²/(π²EI)) = (π²/L²) √(EI/µ) √(1 − T0/TE)   (5.74)

where TE = π²EI/L² is the Euler critical load.
For BCs other than the SS-SS configuration, the calculations are in general
more involved. If, for example, we consider the C-C configuration, it is
convenient to place the origin x = 0 halfway between the supports. By so
doing, the eigenfunctions are divided into even (i.e. such that u(− x) = u(x))
and odd (i.e. such that u(− x) = − u(x)), where the even functions come from
the combination C1 cosh η x + C3 cos ξ x while the odd ones come from the
combination C2 sinh ηx + C4 sin ξx. In both cases, if we fit the BCs at x = L/2,
they will also fit at x = −L/2. For the even and odd functions, respectively,
the C-C BCs u(L/2) = u′(L/2) = 0 lead to the frequency equations
which must be solved by numerical methods. The first equation gives the
frequencies ω 1 , ω 3 , ω 5 , associated with the even eigenfunctions, while
the second gives the frequencies ω 2 , ω 4 , ω 6 , associated with the odd
eigenfunctions.
Vshear = (κGA/2)(∂x y − ψ)²,   Trot = (µ rg²/2)(∂t ψ)²   (5.76)
where ψ = ψ (x, t) is the angle of rotation of the beam (at point x and time
t) due to bending alone, and κ is a numerical factor known as Timoshenko
shear coefficient that depends on the shape of the cross-section (typical val-
ues are, for example, κ = 0.83 for a rectangular cross-section and κ = 0.85
for a circular cross-section).
With the terms 5.76, the Lagrangian density becomes
Λ = (μ/2){(∂t y)² + rg²(∂t ψ)²} − (1/2){EI(∂x ψ)² + κGA(∂x y − ψ)²}  (5.77)
where here we have two independent fields, namely, the bending rotation ψ
and the transverse displacement y (so that the difference ∂x y − ψ describes
the effect of shear).
Remark 5.12
−μ ∂tt²y + κGA(∂xx²y − ∂x ψ) = 0
κGA(∂x y − ψ) − μrg² ∂tt²ψ + EI ∂xx²ψ = 0  (5.78)
which govern the free vibration of the uniform Timoshenko beam and show
that physically we have two coupled ‘modes of deformation’. In general, the
two equations cannot be ‘uncoupled’, but for a uniform beam, it is possible
and the final result is a single equation for y. From Equations 5.78, in fact,
we obtain the relations
∂x ψ = ∂xx²y − (μ/(κGA)) ∂tt²y
EI ∂xxx³ψ + κGA(∂xx²y − ∂x ψ) − μrg² ∂xtt³ψ = 0  (5.79)
where the first follows directly from Equation 5.78₁ while the second is
obtained by differentiating Equation 5.78₂ with respect to x. Then, using
Equation 5.79₁ (and its derivatives, as appropriate) in Equation 5.79₂ gives
the single equation
the single equation
EI ∂⁴y/∂x⁴ + μ ∂²y/∂t² − (μEI/(κGA) + μrg²) ∂⁴y/(∂x²∂t²) + (μ²rg²/(κGA)) ∂⁴y/∂t⁴ = 0  (5.80)
which, as expected, reduces to Equation 5.52₂ when shear and rotary iner-
tia are neglected. With respect to the Euler-Bernoulli beam, the three addi-
tional terms are
−(μEI/(κGA)) ∂⁴y/(∂x²∂t²),  −μrg² ∂⁴y/(∂x²∂t²),  (μ²rg²/(κGA)) ∂⁴y/∂t⁴  (5.81)
where the first is due to shear, the second to rotary inertia and the third is a
‘coupling’ term due to both effects. Note that this last term vanishes when
either of the two effects is negligible. Also, note that, in mathematical par-
lance, the shear effect goes to zero if we let G → ∞ while the rotary inertia
term goes to zero if we let rg → 0.
If now we look for a solution of Equation 5.80 in the usual form
y(x, t) = u(x) e^{iωt}, we arrive at the ordinary differential equation
d⁴u/dx⁴ + (μω²/(κGA)) d²u/dx² − (μω²/EI) u = 0  (5.83)
and we can parallel the solution procedure used in Section 5.7.2. As we did
there, we obtain the four roots ±η and ±iξ , where now however we have
η = [ √( (μω²/(2κGA))² + μω²/EI ) − μω²/(2κGA) ]^(1/2)
ξ = [ √( (μω²/(2κGA))² + μω²/EI ) + μω²/(2κGA) ]^(1/2)  (5.84)
ωn^(shear)/ωn^(0) = √( κGAL²/(κGAL² + n²π²EI) ) = 1/√( 1 + n²π²(rg/L)²(E/(κG)) )  (5.85)

where in the second expression the effect of the ratio rg/L is put into evi-
dence. As rg/L → 0 (that is, very slender beams), we get ωn^(shear)/ωn^(0) → 1.
Rotary inertia alone (Rayleigh beam)
If now we neglect the effect of shear (G → ∞ ), Equation 5.82 gives
d⁴u/dx⁴ + (μω²rg²/EI) d²u/dx² − (μω²/EI) u = 0  (5.86)
and we can proceed exactly as above to arrive at
ωn^(rot)/ωn^(0) = 1/√( 1 + n²π²(rg/L)² )  (5.87)
Remark 5.13
Equation 5.85 for a shear beam (i.e. no rotary inertia) and to the
frequencies of the Euler-Bernoulli beam if we neglect both shear
and rotary inertia effects.
iii. Equation 5.88 is quadratic in ωn², meaning that it gives two values of
ωn² for each n. The smaller value corresponds to a flexural vibration
mode, while the larger corresponds to a shear vibration mode.
iv. If we consider the effect of rotary inertia alone, Equation 5.87 can
be used to evaluate the minimum slenderness ratio needed in order
to have, for example, ωn^(rot)/ωn^(0) ≥ 0.9. For n = 1, 2, 3 (first, second and
third mode), respectively, we get L/rg ≥ 6.5, L/rg ≥ 13.0 and L/rg ≥ 19.5.
Using Equation 5.85 and assuming a typical value of 3 for the ratio
E/(κG), the same can be done for the effect of shear alone. In order to
obtain the result ωn^(shear)/ωn^(0) ≥ 0.9, for the first, second and third mode,
respectively, we must have slenderness ratios such that L/rg ≥ 11.2,
L/rg ≥ 22.5 and L/rg ≥ 33.7.
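The slenderness thresholds quoted in point (iv) can be reproduced directly from Equations 5.85 and 5.87. A minimal sketch (Python; `min_slenderness` is an illustrative helper name, not a function from the text):

```python
import math

def min_slenderness(n, target=0.9, shear_factor=1.0):
    """Minimum L/r_g so that omega_n(effect)/omega_n(0) >= target, using the
    common form 1/sqrt(1 + n^2 pi^2 (r_g/L)^2 * shear_factor); shear_factor
    is 1 for rotary inertia alone (Eq. 5.87) and E/(kappa G) for shear
    alone (Eq. 5.85)."""
    # target = 1/sqrt(1 + n^2 pi^2 f (r_g/L)^2)
    #   =>  L/r_g = n pi sqrt(f) / sqrt(1/target^2 - 1)
    return n * math.pi * math.sqrt(shear_factor) / math.sqrt(1.0 / target**2 - 1.0)

for n in (1, 2, 3):
    print(n, round(min_slenderness(n), 1),
          round(min_slenderness(n, shear_factor=3.0), 1))
# rotary inertia: 6.5, 13.0, 19.5 ; shear with E/(kappa G) = 3: 11.2, 22.5, 33.7
```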
D = Eh³/(12(1 − ν²))  (5.90)
is the plate flexural stiffness, where E is Young’s modulus and ν is Poisson’s
ratio.
Remark 5.14
(D/2){ (∂²w/∂x² + ∂²w/∂y²)² − 2(1 − ν)[ (∂²w/∂x²)(∂²w/∂y²) − (∂²w/∂x∂y)² ] }  (5.91)

where we recognise the first term within curly brackets as the rect-
angular coordinates expression of (∇²w)².
ii. Forming the (small-amplitude) Lagrangian density Λ with the
kinetic and potential energy densities of point (i) and observing that
Λ = Λ(∂t w, ∂xx²w, ∂yy²w, ∂xy²w, ∂yx²w), it is left to the reader to check
that the calculations of the appropriate derivatives prescribed by
Equations 2.69 and 2.70 lead to the equation of motion 5.89.
iii. In light of the aforementioned analogy between plates and beams,
it is quite evident that a plate is a dispersive medium. As for Euler-
Bernoulli beams, the Kirchhoff theory of plates predicts unbounded
wave velocity at very short wavelengths, and this ‘unphysical’ diver-
gence is removed when one takes into account the effects of shear and
rotary inertia. Also, for finite plates, both effects – like in beams –
are found to decrease the natural frequencies (with respect to the
Kirchhoff theory), becoming more significant as the relative thickness
of the plate increases and for higher frequency vibration modes.
(∇⁴ − γ⁴)u = 0  ⇒  (∇² + γ²)(∇² − γ²)u = 0  (5.92)
Remark 5.15
(∇² + γ²)(∇²u1 − γ²u1 + ∇²u2 − γ²u2) = (∇² + γ²)(∇²u1 − γ²u1)
= (∇² + γ²)(∇²u1 + γ²u1 − 2γ²u1) = −2γ²(∇² + γ²)u1 = 0
thus confirming that the complete solution can be expressed as the sum
u = u1 + u2.
5.8.1 Rectangular plates
Assuming complete support per edge, for rectangular plates, there are many
distinct cases involving all possible combinations of classical BCs. Among
these, it turns out that the more tractable ones are those in which two
opposite edges are simply supported and that the simplest case is when all
edges are simply supported. Here, therefore, we use rectangular coordinates
and consider a uniform rectangular plate of length a along the x direction,
length b along y and simply supported on two opposite edges.
If the simply supported edges are x = 0 and x = a, the BCs at these edges are
u(0, y) = u(a, y) = 0 and ∂xx²u(0, y) = ∂xx²u(a, y) = 0, and they are all satisfied if
we choose a solution of the form u(x, y) = Y(y) sin αx = [Y1(y) + Y2(y)] sin αx
with α = nπ/a (n = 1, 2, …), where in writing the last expression, we took
into account the fact that, as pointed out in the preceding section, u is given
by the sum u1(x, y) + u2(x, y).
Now, substituting u1 = Y1 sin αx into the rectangular coordinates expres-
sion of equation (∇² + γ²)u1 = 0 and u2 = Y2 sin αx into the rectangular
coordinates expression of (∇² − γ²)u2 = 0 leads to the two equations

Y1″ + (γ² − α²)Y1 = 0,  Y2″ − (γ² + α²)Y2 = 0  (5.93)
Y1(y) = A1 cos(√(γ² − α²) y) + A2 sin(√(γ² − α²) y)
Y2(y) = A3 cosh(√(γ² + α²) y) + A4 sinh(√(γ² + α²) y)  (5.94)
thus implying that the complete solution u = u1 + u2 is

u(x, y) = { A1 cos ry + A2 sin ry + A3 cosh sy + A4 sinh sy } sin αx  (5.95a)

where we defined

r = √(γ² − α²),  s = √(γ² + α²)  (5.95b)
At this point, we must enforce the BCs at the edges y = 0 and y = b on the
solution 5.95. If also these two edges are simply supported, the appropriate
BCs are u(x,0) = u(x, b) = 0 and ∂2yy u(x,0) = ∂2yy u(x, b) = 0. Then, since the
BCs at y = 0 give (as the reader is invited to check) A1 = A3 = 0, the solution
5.95 becomes u = { A2 sin ry + A4 sinh sy} sin α x, while for the other two BCs
at y = b, we must have
det | sin rb      sinh sb    |
    | −r² sin rb   s² sinh sb | = 0  ⇒  (s² + r²) sin rb sinh sb = 0

and, since s² + r² = 2γ² ≠ 0 and sinh sb ≠ 0 for sb > 0, this reduces to the
condition sin rb = 0, that is, rb = mπ (m = 1, 2, …), from which we obtain
ωnm = π² √(D/(ρh)) (n²/a² + m²/b²)  ⇒  ωnm a² √(ρh/D) = π²( n² + m²(a/b)² )  (5.96)
unm(x, y) = Anm sin(nπx/a) sin(mπy/b)  (5.97)
which are the same mode shapes as those of the rectangular membrane
clamped on all edges (Equation 5.42b).
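Equation 5.96 is easy to evaluate numerically; the sketch below (Python, with unit plate data as an illustrative stand-in) reproduces the non-dimensional value π²(1 + 1) ≈ 19.74 quoted later in Remark 5.17 for the fundamental mode of the square SS-SS-SS-SS plate:

```python
import math

def ss_plate_freq(n, m, a, b, D, rho, h):
    """Eq. 5.96: natural frequencies of an all-round simply supported
    rectangular plate (Kirchhoff theory)."""
    return math.pi**2 * math.sqrt(D / (rho * h)) * ((n / a)**2 + (m / b)**2)

# With a = b = D = rho = h = 1, the result IS the non-dimensional
# parameter omega * a^2 * sqrt(rho h / D)
a = b = D = rho = h = 1.0
print(ss_plate_freq(1, 1, a, b, D, rho, h))  # 2 pi^2 = 19.74 for n = m = 1
```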
Remark 5.16
The solution 5.95 has been obtained assuming γ² > α². If γ² < α², Equation
5.93₁ is rewritten as Y1″ − (α² − γ²)Y1 = 0 and Equation 5.94₁ becomes
Y1(y) = A1 cosh r̂y + A2 sinh r̂y, with r̂ = √(α² − γ²). In solving a free-vibration
problem, one should consider both possibilities and obtain sets of eigenval-
ues (i.e. natural frequencies) for each case, by checking which inequality
applies for an eigenvalue to be valid. However, in Leissa (1973), it is shown
that while for the case γ 2 > α 2 proper eigenvalues exist for all six rectan-
gular problems in which the plate has two opposite sides simply supported;
the case γ 2 < α 2 is only valid for the three problems having one or more free
sides. For example, for a SS-C-SS-F plate (with a typical value ν = 0.3 for
Poisson’s ratio), it is found that the case γ 2 < α 2 applies for nb a > 7.353.
For the three cases in which the two opposite non-SS edges have the same
type of BC – that is, the cases SS-C-SS-C and SS-F-SS-F (the case SS-SS-
SS-SS has already been considered above) – the free vibration modes are
either symmetric or antisymmetric with respect not only to the axis x = a/2
but also with respect to the axis y = b/2. In this light, it turns out to be con-
venient to place the origin so that the two non-SS edges are at y = ±b/2 and
use the even part of solution 5.95 to determine the symmetric frequencies
and mode shapes and the odd part for the antisymmetric ones.
So, for example, for the SS-C-SS-C case in which we assume γ² > α²,
the even and odd parts of Y are, respectively, Yev(y) = A1 cos ry + A3 cosh sy
and Yodd(y) = A2 sin ry + A4 sinh sy, where r, s are as in Equation 5.95b.
which is more complicated than either of the two frequency equations 5.98
although it gives exactly the same eigenvalues. Note that with these BCs,
there is no need to consider the case γ 2 < α 2 for the reason explained in
Remark 5.16 above.
Remark 5.17
i. Since a plate with SS-C-SS-C BCs is certainly stiffer than a plate with
all edges simply supported, we expect its natural frequencies to be
higher. It is in fact so, and just to give a general idea, for an aspect ratio
a/b = 1 (i.e. a square plate) the non-dimensional frequency parameter
ωa²√(ρh/D) is 19.74 for the SS-SS-SS-SS case and 28.95 for the SS-C-
SS-C case.
ii. If, on the other hand, at least one side is free, we have a value of
12.69 for a SS-C-SS-F square plate (with ν = 0.3), a value of 11.68 for
a SS-SS-SS-F plate (ν = 0.3) and 9.63 for a SS-F-SS-F plate (ν = 0.3).
Note that for these last three cases – which also refer to the aspect
ratio a/b = 1 – we specified within parenthesis the value of Poisson's
ratio; this is because it turns out that the non-dimensional frequency
parameter ωa²√(ρh/D) does not directly depend on ν unless at least
one of the edges is free. It should be pointed out, however, that the
frequency does depend on Poisson's ratio because the flexural stiffness
D contains ν.
iii. In general, the eigenfunctions of plates with BCs other than SS-SS-
SS-SS have more complicated expressions than the simple eigen-
functions of Equation 5.97. For example, the eigenfunctions of the
SS-C-SS-C plate mentioned above are
5.8.2 Circular plates
For circular plates, it is convenient to place the origin at the centre of the
plate and express the two equations (∇² + γ²)u1 = 0 and (∇² − γ²)u2 = 0
in terms of polar co-ordinates r, θ, where x = r cos θ, y = r sin θ. Then,
since the equation for u1 is the Helmholtz equation of Section 5.5, we can
proceed as we did there, that is, write u1 in the separated-variables form
u1(r, θ) = f1(r)g1(θ) and observe that (a) the periodicity of the angular part
of the solution implies g1(θ) = cos nθ + sin nθ (n = 0, 1, 2, …) and that (b)
f1(r) = A1Jn(γr) + B1Yn(γr) because f1(r) is the solution of Bessel's equation of
order n (we recall that Jn, Yn are Bessel's functions of order n of the first and
second kind, respectively).
Passing now to the equation for u2, we can rewrite it as (∇² + (iγ)²)u2 = 0
and proceed similarly. By so doing, a solution in the 'separated form'
u2(r, θ) = f2(r)g2(θ) leads, on the one hand, to g2(θ) = cos nθ + sin nθ and, on
the other, to the equation for f2(r)

d²f2/dr² + (1/r) df2/dr − (γ² + n²/r²) f2 = 0  (5.101)
where, for a plate that extends continuously across the origin, we must
require B1 = B2 = 0 because the functions of the second kind Yn , Kn become
unbounded at r = 0. So, for a full plate, we have the solution
det | Jn(γR)   In(γR)  |
    | Jn′(γR)  In′(γR) | = 0  ⇒  Jn(γR) In′(γR) − In(γR) Jn′(γR) = 0  (5.105)
ωnm = (λnm²/R²) √(D/(ρh))  (5.106)
λ01² = 10.22,  λ11² = 21.26,  λ21² = 34.88,
λ02² = 39.77,  λ31² = 51.04,  λ12² = 60.82
unm(r, θ) = Anm [ Jn(γnm r) − (Jn(γnm R)/In(γnm R)) In(γnm r) ] (cos nθ + sin nθ)  (5.107)
where the constants Anm depend on the choice we make for the normalisa-
tion of the eigenfunctions. From the results above, it is found that just like
for the circular membrane of Section 5.5.1, (a) the (n, m)th mode has n dia-
metrical nodes and m nodal circles (boundary included), and (b) except for
n = 0, each mode is degenerate.
BCs other than the clamped case lead to more complicated calculations,
and for this, we refer the interested reader to Leissa (1969).
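The tabulated values λnm² follow from solving the frequency equation 5.105 numerically. A self-contained sketch for the axisymmetric case n = 0 (Python; the ascending-series Bessel implementations and the bracketing interval are illustrative choices, not from the text):

```python
import math

def bessel_j(n, x, terms=40):
    # Ascending series J_n(x) = sum_k (-1)^k (x/2)^(n+2k) / (k! (n+k)!)
    return sum((-1)**k * (x / 2)**(n + 2 * k)
               / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

def bessel_i(n, x, terms=40):
    # Modified Bessel I_n(x): the same series without the alternating sign
    return sum((x / 2)**(n + 2 * k)
               / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

def clamped_plate_char(lam):
    # Eq. 5.105 with gamma R = lam and n = 0; since J0' = -J1 and I0' = I1,
    # J0 I0' - I0 J0' = J0(lam) I1(lam) + I0(lam) J1(lam)
    return (bessel_j(0, lam) * bessel_i(1, lam)
            + bessel_i(0, lam) * bessel_j(1, lam))

def bisect(f, a, b):
    fa = f(a)
    for _ in range(100):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

lam01 = bisect(clamped_plate_char, 3.0, 3.4)
print(lam01**2)  # approx 10.22, the tabulated lambda_01^2
```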
∇⁴unm = γnm⁴ unm,  ∇⁴ulk = γlk⁴ ulk  (5.108)
are identically satisfied, and we can therefore interpret the first and sec-
ond of Equations 5.108, respectively, by seeing unm as the plate deflection
under the load q1 = Dγnm⁴ unm and ulk as the plate deflection under the load
q2 = Dγlk⁴ ulk. At this point, we invoke Betti's reciprocal theorem (or Betti's
law) for linearly elastic structures (see, for example, Bisplinghoff, Mar and
Pian (1965)): The work done by a system of forces Q1 under a distortion
u(2) caused by a system Q2 equals the work done by the system Q2 under
a distortion u(1) caused by the system Q1. In mathematical form, this state-
ment reads Q1u(2) = Q2u(1).
So, returning to our problem, we observe that (a) the two loads are q1 , q2
as given above, (b) the deflection ulk is analogous to u(2), while the deflection
un m is analogous to u(1), and (c) we obtain a work expression by integrating
over the area of the plate S. Consequently, the equality of the two work
expressions gives
Dγnm⁴ ∫_S unm ulk dS = Dγlk⁴ ∫_S unm ulk dS  ⇒  (γnm⁴ − γlk⁴) ∫_S unm ulk dS = 0

so that, for γnm ≠ γlk,

∫_S unm ulk dS = 0  (5.109)
Lb = (d²/dx²)[ EI (d²/dx²) ]  (5.110)
∫₀ᴸ v (EIu″)″ dx = [ v(EIu″)′ − u(EIv″)′ + EI(v″u′ − u″v′) ]₀ᴸ + ∫₀ᴸ (EIv″)″ u dx
(5.111)
then it can be easily verified that for the classical BCs (SS, C and F), the
boundary term within square brackets on the r.h.s. vanishes, thus implying
that Equation 5.111 reduces to the self-adjointness relation v Lb u = Lb v u .
Passing to positive-definiteness – which, we recall, is mathematically
expressed by the inequality u Lb u ≥ 0 for every sufficiently regular func-
tion u – we note that two integrations by parts lead to
⟨u, Lb u⟩ = ∫₀ᴸ u (EIu″)″ dx = [ u(EIu″)′ − u′EIu″ ]₀ᴸ + ∫₀ᴸ EI(u″)² dx = ∫₀ᴸ EI(u″)² dx ≥ 0
where the third equality is due to the fact that the boundary term within
square brackets vanishes for the classical BCs and where the final inequality
holds because EI > 0 and (u′′)2 ≥ 0. Then, imposing the equality u′′ = 0 gives
u = C1x + C2, and since it is immediate to show that for the SS and C BCs, we
get C1 = C2 = 0, it follows that u Lb u = 0 if and only if u = 0, thus implying
that Lb is positive-definite. It is not so, however, for an F-F beam because in
this case the system is unrestrained and we know that there are rigid-body
modes (recall Case 4 in Section 5.7). In this case, therefore, the operator Lb
is only positive-semidefinite.
Things are a bit more involved for the plate operator Lp = ∇4 , where here
for simplicity (but without loss of generality for our purposes here), we con-
sider a uniform plate with constant flexural stiffness. In order to show that
Lp satisfies the relation u Lpv = Lp u v for many physically meaningful
types of BCs, let us start by observing that
u(Lp v) = u∇⁴v = u∇²(∇²v) = u∇·(∇∇²v)  (5.113)
where in writing the last equality we used the fact that ∇² = ∇·∇ (or
∇² = div(grad) if we use a different notation). Then, recalling the vector
calculus relations
calculus relations
∇·(uA) = ∇u·A + u∇·A,  ∇·(f∇u) = f∇²u + ∇f·∇u  (5.114)
∇·(u∇∇²v) = ∇u·(∇∇²v) + u∇·(∇∇²v)  ⇒  u∇·(∇∇²v) = ∇·(u∇∇²v) − ∇u·(∇∇²v)

u(Lp v) = ∇·(u∇∇²v) − ∇u·(∇∇²v)  (5.115)
Now we set f = ∇²v in 5.114₂ to obtain

∇·(∇²v ∇u) = ∇²v ∇²u + ∇(∇²v)·∇u  ⇒  ∇u·(∇∇²v) = ∇·(∇²v ∇u) − ∇²v ∇²u
and substitute it in Equation 5.115. By so doing, integration over the two-
dimensional region/domain S occupied by the plate gives
∫_S u(Lp v) dS = ∫_S ∇·(u∇∇²v) dS − ∫_S ∇·(∇²v ∇u) dS + ∫_S ∇²v ∇²u dS  (5.116)
which, using the divergence theorem (see the following Remark 5.18) for
the first and second term on the r.h.s., becomes
∫_S u(Lp v) dS = ∫_C u ∂n(∇²v) dC − ∫_C ∇²v (∂n u) dC + ∫_S ∇²v ∇²u dS  (5.117)
Remark 5.18
∫_S ∇·A dS = ∫_C A·n dC
where n is the outward normal from the boundary C. Explicitly, using this
theorem for the first and second term on the r.h.s. of Equation 5.116 gives,
respectively
∫_S ∇·(u∇∇²v) dS = ∫_C u(∇∇²v)·n dC = ∫_C u ∂n(∇²v) dC

∫_S ∇·(∇²v ∇u) dS = ∫_C ∇²v ∇u·n dC = ∫_C ∇²v (∂n u) dC
where the rightmost equalities are due to the fact that ∇(·)·n = ∂n(·),
where ∂n denotes the derivative in the direction of n.
∫_S { u(Lp v) − (Lp u)v } dS
= ∫_C { u ∂n(∇²v) − v ∂n(∇²u) − ∇²v ∂n u + ∇²u ∂n v } dC  (5.118)
showing that the plate operator is self-adjoint if for any two functions u, v
that are four times differentiable the boundary integral on the r.h.s. of
Equation 5.118 vanishes – as it can be shown to be the case for the classical
BCs in which the plate is clamped, simply supported or free.
For positive-definiteness, we can consider Equation 5.117 with v replaced
by u, that is
⟨u, Lp u⟩ = ∫_S u(Lp u) dS = ∫_C { u ∂n(∇²u) − ∇²u (∂n u) } dC + ∫_S (∇²u)² dS
Ku = λ Mu (5.119)
K = −(d/dx)[ EA(x) (d/dx) ],  M = μ(x)  (5.120)
and where, as we know from Section 5.4, K is self-adjoint for many physi-
cally meaningful BCs. Equation 5.119 is the so-called differential eigen-
value problem, and the analogy with the finite-DOFs eigenproblem is
evident, although it should be just as evident that now – unlike the finite-
DOFs case in which the BCs are, let us say, ‘incorporated’ in the system’s
matrices – Equation 5.119 must be supplemented by the appropriate BCs for
the problem at hand.
Moreover, if we consider membranes, beams and plates, it is not difficult
to show that the differential eigenvalue form 5.119 applies to these systems
as well. In fact, for a non-uniform (Euler-Bernoulli) beam, we have
K = (d²/dx²)[ EI(x) (d²/dx²) ],  M = μ(x)  (5.121)
f = ∑_{n=1}^∞ cn φn  where  cn = ⟨φn, Mf⟩  (5.123)
Remark 5.19
Example 5.1
Pursuing the analogy with finite-DOFs systems, let us go back, for
example, to Section 3.3.4 and notice that Equation 3.57 rewritten
by using the 'angular bracket notation' for the inner product reads
∂λi = ⟨pi, (∂K − λi ∂M)pi⟩, thereby suggesting the 'continuous system
counterpart'

∂λi = ⟨φi, (∂K − λi ∂M)φi⟩  (5.124)
EI d⁴u(x)/dx⁴ = λ [ μ − (μrg²E/(κG)) d²/dx² ] u(x)  (5.125)

∂M = −(μrg²E/(κG)) d²/dx²,  ∂K = 0  (5.126)
∂λi = −λi ⟨φi, (∂M)φi⟩ = λi (μrg²E/(κG)) ⟨φi, d²φi/dx²⟩
= −λi (2i²π²rg²E/(κGL³)) ∫₀ᴸ sin²(iπx/L) dx = −λi i²π²(E/(κG))(rg/L)²  (5.127)
0
λ̂i/λi = 1 − i²π²(E/(κG))(rg/L)²  (5.128)
which must be compared with the exact result of Equation 5.85. And
since this equation, when appropriately modified for our present pur-
poses, reads
λ̂i^(shear)/λi = [ 1 + i²π²(E/(κG))(rg/L)² ]⁻¹  (5.129)
Kw(x, t) + M ∂²w(x, t)/∂t² = 0  (5.130)
substitute it into Equation 5.130 and then take the inner product of the
resulting equation with φ j (x). This gives
∑_i yi ⟨φj, Kφi⟩ + ∑_i ÿi ⟨φj, Mφi⟩ = 0
and, consequently, owing to the orthogonality of the eigenfunctions
(Equations 5.122),
yj(t) = yj(0) cos ωj t + (ẏj(0)/ωj) sin ωj t  (j = 1, 2, …)  (5.133)
the only thing left to do at this point is to evaluate the initial (i.e. at t = 0)
conditions yj(0), ẏj(0) in terms of the ICs w0 = w(x, 0), ẇ0 = ∂t w(x, 0) for the
function w(x, t). The result is

w0 = ∑_i yi(0)φi(x)  ⇒  Mw0 = ∑_i yi(0)Mφi(x)  ⇒  ⟨φj, Mw0⟩ = yj(0)

ẇ0 = ∑_i ẏi(0)φi(x)  ⇒  Mẇ0 = ∑_i ẏi(0)Mφi(x)  ⇒  ⟨φj, Mẇ0⟩ = ẏj(0)
in analogy with the finite-DOFs Equation 3.35a. Note that the function
w(x, t) automatically satisfies the BCs because so do the eigenfunctions φ j .
Remark 5.20
Since for a uniform string fixed at both ends we have K = −T0 ∂xx², M = μ and
the mass-normalised eigenfunctions φj(x) = √(2/(μL)) sin(jπx/L), the reader
is invited to re-obtain the solution of Equations 5.10 and 5.13 (Section
5.2.2) by using Equation 5.135. Although it is sufficiently clear from the
context, note that in that section the field variable is denoted by y(x, t) (it is
not a modal coordinate), while here we used the symbol w(x, t).
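As a numerical illustration of the free-vibration recipe of Equations 5.133-5.135, the sketch below (Python) computes the modal initial conditions yj(0) = ⟨φj, Mw0⟩ by quadrature for a triangular 'plucked' initial shape of the fixed-fixed string and reconstructs the motion; the pluck height, grid size and number of modes are arbitrary illustrative choices:

```python
import math

L, mu, T0 = 1.0, 1.0, 1.0
N_modes, N_grid = 60, 4001
h = L / (N_grid - 1)
xs = [i * h for i in range(N_grid)]

def phi(j, x):
    # Mass-normalised eigenfunctions of the fixed-fixed uniform string
    return math.sqrt(2.0 / (mu * L)) * math.sin(j * math.pi * x / L)

def w_init(x):
    # Initial shape: triangular pluck of height 0.01 at mid-span
    return 0.02 * x if x <= L / 2 else 0.02 * (L - x)

def trapz(vals, step):
    return step * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Modal initial conditions y_j(0) = <phi_j, M w0> (zero initial velocity)
y0 = [trapz([mu * phi(j, x) * w_init(x) for x in xs], h)
      for j in range(1, N_modes + 1)]

def w(x, t):
    # Modal superposition: w(x,t) = sum_j y_j(0) cos(omega_j t) phi_j(x),
    # with omega_j = (j pi / L) sqrt(T0 / mu) for the string
    return sum(y0[j - 1]
               * math.cos((j * math.pi / L) * math.sqrt(T0 / mu) * t)
               * phi(j, x)
               for j in range(1, N_modes + 1))

print(w(0.25, 0.0), w_init(0.25))  # the series reproduces the initial shape
```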
Kw(x, t) + M ∂²w(x, t)/∂t² = f(x, t)  (5.136)
w(x, t) = ∑_{j=1}^∞ φj(x) ∫₀ᵗ ⟨φj, f⟩ ĥj(t − τ) dτ  (5.139)
where ⟨φj, f⟩ = ∫_R φj(x)f(x, t) dx and R is the spatial region occupied by the
system. In this regard, note that the jth mode does not contribute to the
response if ⟨φj, f⟩ = 0.
If now we let the excitation be of the form f(x, τ) = δ(x − xk)δ(τ) – that
is, a unit-amplitude delta impulse applied at point x = xk and at time τ = 0 –
then ⟨φj, f⟩ = φj(xk)δ(τ) and it follows from Equation 5.139 that the output
at point x = xm is
w(xm, t) = ∑_{j=1}^∞ φj(xm)φj(xk) ĥj(t)  (5.140)

so that the corresponding impulse response function is

h(xm, xk, t) = ∑_{j=1}^∞ φj(xm)φj(xk) ĥj(t)  (5.141)
where ⟨φj, F⟩ = ∫_R F(r)φj(r) dr and R is the spatial region occupied by the
vibrating system, and in the last relation, we recalled from previous chap-
ters that Ĥj(ω) = (ωj² − ω²)⁻¹ is the (undamped) modal FRF.
In particular, if a unit amplitude excitation is applied at the point x = xk,
then f(x, t) = δ(x − xk) e^{iωt} and ⟨φj, F⟩ = φj(xk), and consequently
w(x, t) = ∑_{j=1}^∞ φj(x)φj(xk) Ĥj(ω) e^{iωt}  (5.142b)
H(x, xk, ω) = ∑_{j=1}^∞ φj(x)φj(xk) Ĥj(ω) = ∑_{j=1}^∞ φj(x)φj(xk)/(ωj² − ω²)  (5.143)
Example 5.2
Consider a vertical clamped-free rod of length L, uniform mass per
unit length µ and uniform stiffness EA subjected to an excitation in
the form of a vertical base displacement g(t). In order to determine
the system’s response, we first write the longitudinal rod displacement
w(x, t) as
−EA ∂²u/∂x² + μ ∂²u/∂t² = −μ d²g/dt²  (5.145)
where the r.h.s. is an 'effective force' that provides the external excita-
tion to the system, and we can write feff = −μg̈. Assuming the system
to be initially at rest, the jth modal response is given by Equation 5.138
⟨φj, feff⟩ = −μg̈(t) ∫₀ᴸ φj(x) dx = −(2g̈(t)/((2j − 1)π)) √(2μL)

and the superposition of the modal responses gives

u(x, t) = −2√(2μL) ∑_{j=1}^∞ [ φj(x)/((2j − 1)πωj) ] ∫₀ᵗ g̈(τ) sin ωj(t − τ) dτ  (5.146)
where the natural frequencies are given by Equation 5.24₁ and the time
integral must be evaluated numerically if the base motion g(t) and its
corresponding acceleration g̈(t) are not relatively simple functions of
time. Then, the total response w(x, t) is obtained from Equation 5.144.
As an incidental remark to this example, it should be noticed that
here we used a 'standard' method in order to transform a homogeneous
problem with non-homogeneous BCs into a non-homogeneous prob-
lem with homogeneous BCs. In fact, suppose that we are given a linear
operator B (B = −EA ∂xx² + μ ∂tt² in the example), the homogeneous equa-
tion Bw = 0 to be satisfied in some spatial region R and the non-homo-
geneous BC w = y on the boundary S of R. By introducing the function
u = w − v, where v satisfies the given BC, the original problem becomes
the following problem for the function u: the non-homogeneous equa-
tion Bu = −Bv in R with the homogeneous BC u = 0 on S.
Example 5.3
Let us now consider a uniform clamped-free rod of length L and mass
per unit length µ excited by a load of the form f (x, t) = p(t) δ (x − L) at
the free end. Then, we have
⟨φj, f⟩ = p(t)φj(L) = p(t) √(2/(μL)) sin((2j − 1)π/2) = (−1)^(j−1) p(t) √(2/(μL))  (5.147)
where in writing the last expression, we used the relation
sin[(2j − 1)π/2] = (−1)^(j−1). If we assume the system to be initially at rest,
Equation 5.138 gives
yj(t) = ((−1)^(j−1)/ωj) √(2/(μL)) ∫₀ᵗ p(τ) sin ωj(t − τ) dτ  (5.148)
and if now we further assume that p(t) = θ (t), where θ (t) is a unit ampli-
tude Heaviside step function (i.e. θ (t) = 0 for t < 0 and θ (t) = 1 for t ≥ 0),
yj(t) = ((−1)^(j−1)/ωj²) √(2/(μL)) (1 − cos ωj t)
thus implying that the displacement in physical coordinates is given by
the superposition
w(x, t) = √(2/(μL)) ∑_{j=1}^∞ ((−1)^(j−1)/ωj²)(1 − cos ωj t) φj(x)

= (8L/(π²EA)) ∑_{j=1}^∞ ((−1)^(j−1)/(2j − 1)²)(1 − cos ωj t) sin((2j − 1)πx/(2L))  (5.149)
Example 5.4
In the preceding Example 5.3, the solution 5.149 has been obtained by
using the modal approach ‘directly’, that is, by considering the rod as
a ‘forced’ or non-homogeneous system (that is, with the non-zero term
f = p(t) δ (x − L) on the r.h.s. of the equation of motion) subjected to the
‘standard’ BCs of a rod clamped at x = 0 and free at x = L. However,
since the external excitation is applied at the boundary point x = L, the
same rod problem is here analysed with a different strategy that is in
general better suited for a system excited on its boundary. In order to
do so, we start by considering the rod as an excitation-free (i.e. homo-
geneous) system subject to non-homogeneous BCs, so that we have the
equation of motion and BCs
−EA ∂²w/∂x² + μ ∂²w/∂t² = 0;  w(0, t) = 0,  EA ∂x w(L, t) = p(t)  (5.150)
where the term upst(x, t) = r(x)p(t), often called 'pseudo-static' displace-
ment (hence the subscript 'pst'), satisfies the equation EA ∂xx²upst = 0
and is chosen with the intention of making the BCs for u(x, t) homo-
geneous. Considering first the BCs, we note that with a solution of the
form 5.151, the BCs of Equations 5.150 imply
which in turn means that if we want the BCs for u(x, t) to be homoge-
neous, we must have r(0) = 0 and r′(L) = 1/EA. Under these conditions,
in fact, we have
−EA ∂²u/∂x² + μ ∂²u/∂t² = feff(x, t)  where

feff(x, t) = −μ r(x) d²p/dt² + EA p(t) d²r/dx² = −(μx/EA) d²p/dt²  (5.153)

and where in writing the last expression we took into account that
r(x) = x/EA and that, consequently, d²r/dx² = 0.
We have at this point transformed the problem 5.150 into the non-
homogeneous problem 5.153 with the homogeneous BC 5.152, where
the forcing term feff accounts for the boundary non-homogeneity of
Equation 5.150₃. Now we apply the modal approach to this new problem
by expanding u(x, t) in terms of eigenfunctions as u(x, t) = ∑_j yj(t)φj(x)
yj(t) = ∫₀ᵗ ⟨φj, feff⟩ ĥj(t − τ) dτ = −∫₀ᵗ (d²p/dτ²) [ (μ/EA) ∫₀ᴸ x φj(x) dx ] ĥj(t − τ) dτ
(5.154a)
and since

(μ/EA) √(2/(μL)) ∫₀ᴸ x sin((2j − 1)πx/(2L)) dx = (μ/EA) √(2/(μL)) · ((−1)^(j−1) 4L²/((2j − 1)²π²))

we obtain

yj(t) = −(μ/EA) √(2/(μL)) ((−1)^(j−1) 4L²/((2j − 1)²π²)) ∫₀ᵗ (d²p(τ)/dτ²) ĥj(t − τ) dτ  (5.154b)
Now, assuming p(t) = θ(t) (as in the preceding Example 5.3) and
recalling the properties of the Dirac delta of Equations B.41₁ and B.42
in Appendix B, the time integral gives
∫₀ᵗ (d²θ(τ)/dτ²) ĥj(t − τ) dτ = ∫₀ᵗ (dδ(τ)/dτ) ĥj(t − τ) dτ = −(d/dτ) ĥj(t − τ)|_{τ=0} = cos ωj t
and therefore
u(x, t) = −(μ/EA) √(2/(μL)) ∑_{j=1}^∞ ((−1)^(j−1) 4L²/((2j − 1)²π²)) φj(x) cos ωj t
so that the total response w = upst + u is

w(x, t) = x/EA − (8L/(π²EA)) ∑_{j=1}^∞ ((−1)^(j−1)/(2j − 1)²) sin((2j − 1)πx/(2L)) cos ωj t  (5.155)
In order to compare this result with the solution 5.149 of Example 5.3,
we first rewrite 5.149 as

w(x, t) = (8L/(π²EA)) ∑_{j=1}^∞ ((−1)^(j−1)/(2j − 1)²) sin((2j − 1)πx/(2L))

− (8L/(π²EA)) ∑_{j=1}^∞ ((−1)^(j−1)/(2j − 1)²) sin((2j − 1)πx/(2L)) cos ωj t  (5.156)
and then consider the fact that the Fourier expansion of the function
π²x/(8L) is (we leave the proof to the reader)

π²x/(8L) = ∑_{j=1}^∞ ((−1)^(j−1)/(2j − 1)²) sin((2j − 1)πx/(2L))
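This expansion is easy to verify numerically; the sketch below (Python) compares a partial sum against π²x/(8L) at an arbitrary interior point:

```python
import math

def fourier_x(x, L=1.0, n_terms=5000):
    """Partial sum of the expansion
    pi^2 x/(8L) = sum_j (-1)^(j-1)/(2j-1)^2 * sin((2j-1) pi x/(2L))."""
    return sum((-1)**(j - 1) / (2 * j - 1)**2
               * math.sin((2 * j - 1) * math.pi * x / (2 * L))
               for j in range(1, n_terms + 1))

x, L = 0.5, 1.0
print(fourier_x(x, L), math.pi**2 * x / (8 * L))  # the two values agree
```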
which, when used in Equation 5.156, shows that the first term of this
equation is exactly the term x EA of Equation 5.155, i.e. the function
r(x) of the pseudo-static displacement. So, the two solutions are indeed
equal, but it is now evident that the inclusion of the pseudo-static term
from the outset makes the series 5.155 much more advantageous from
a computational point of view because of its faster rate of convergence.
In actual calculations, therefore, fewer terms will be required to achieve
a satisfactory degree of accuracy.
Example 5.5
In modal testing, the experimenter is often interested in the response
of a system to an impulse loading at certain specified points. So, if
w(L, t) = √(2/(μL)) ∑_{j=1}^∞ ((−1)^(j−1) sin ωj t/ωj) φj(L) = (2/(μL)) ∑_{j=1}^∞ sin ωj t/ωj  (5.157)
Equation 5.141 gives h(L, L, t) = ∑_{j=1}^∞ φj²(L) ĥj(t), which is in fact the
same as Equation 5.157 when one considers the explicit expressions of
φj(L) and ĥj(t).
If, on the other hand, we are interested in the receptance FRF at
x = L , Equation 5.143 gives
H(L, L, ω) = ∑_{j=1}^∞ φj²(L)/(ωj² − ω²) = (2/(μL)) ∑_{j=1}^∞ 1/(ωj² − ω²)  (5.158)
Example 5.6
As a simplified model of a vehicle travelling across a bridge deck, con-
sider the response of an SS-SS Euler-Bernoulli beam to a load of con-
stant magnitude P moving along the beam at a constant velocity V.
Under the reasonable assumption that the mass of the vehicle is small
in comparison with the mass of the bridge deck (so that the beam eigen-
values and eigenfunctions are not appreciably altered by the presence of
the vehicle), we write the moving load as
f(x, t) = { P δ(x − Vt)  for 0 ≤ t ≤ L/V ;  0  otherwise }  (5.159)
⟨φj, f⟩ = P ∫₀ᴸ φj(x) δ(x − Vt) dx = P √(2/(μL)) sin(jπVt/L)
yj(t) = (P/ωj) √(2/(μL)) ∫₀ᵗ sin(jπVτ/L) sin ωj(t − τ) dτ  (5.160a)
which gives, after two integrations by parts (see the following Remark
5.21 for a hint),
yj(t) = P √(2/(μL)) ( L²/(j²π²V² − ωj²L²) ) ( (jπV/(ωjL)) sin ωj t − sin(jπVt/L) )  (5.160b)
w(x, t) = (2P/(μL)) ∑_{j=1}^∞ ( L² sin(jπx/L)/(j²π²V² − ωj²L²) ) ( (jπV/(ωjL)) sin ωj t − sin(jπVt/L) )  (5.161)
which in turn shows that resonance may occur at the ‘critical’ values
of velocity
Vj^(crit) = Lωj/(jπ) = (jπ/L) √(EI/μ)  (j = 1, 2, …)  (5.162)

where the last expression follows from the fact that the beam natu-
ral frequencies – we recall from Section 5.7 – are ωj = (jπ/L)² √(EI/μ).
Remark 5.21
If, for brevity, we call A the integral in Equation 5.160a and define a j = jπV L ,
( )
two integrations by parts lead to A = (1 a j ) sin ω j t − ω j a2j sin a j t + ω 2j A a2j ,
from which Equation 5.160b follows easily.
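Carrying the algebra to completion gives the closed form A = (aj sin ωj t − ωj sin aj t)/(aj² − ωj²) for aj ≠ ωj, which the sketch below (Python; the values of aj, ωj and t are arbitrary illustrative choices) checks against direct numerical quadrature:

```python
import math

def duhamel_quad(a, w, t, n=20000):
    # Trapezoidal evaluation of A = int_0^t sin(a tau) sin(w (t - tau)) d tau
    h = t / n
    s = 0.0
    for i in range(n + 1):
        tau = i * h
        weight = 0.5 if i in (0, n) else 1.0
        s += weight * math.sin(a * tau) * math.sin(w * (t - tau))
    return s * h

def duhamel_closed(a, w, t):
    # Closed form implied by the two integrations by parts of Remark 5.21
    # (valid for a != w)
    return (a * math.sin(w * t) - w * math.sin(a * t)) / (a**2 - w**2)

a, w, t = 2.0, 5.0, 1.3
print(duhamel_quad(a, w, t), duhamel_closed(a, w, t))  # the two agree
```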
H ( x, xk , ω ) = W (x, ω ) (5.163)
Example 5.7
As an example, let us reconsider the uniform clamped-free rod of
Example 5.3 by assuming a harmonic excitation of unit amplitude at
the free end x = L, so that we have the equation of motion
−EA ∂xx²w + μ ∂tt²w = 0 for x ∈ (0, L), plus the BC w(0, t) = 0 at x = 0 and
the force condition EA ∂x w(L, t) = δ(x − L) e^{iωt} at x = L. Proceeding as
explained above and defining γ² = μω²/EA, we are led to
where the second BC (Equation 5.164₃) follows from the fact that the
force excitation has unit amplitude. At this point, since the solution
of Equation 5.164₁ is W(x, ω) = C1 cos γx + C2 sin γx, enforcing the BCs
gives C1 = 0 and C2 = (γEA cos γL)⁻¹, and consequently

W(x, ω) = H(x, L, ω) = sin γx/(γEA cos γL)  (5.165)
H(x, L, ω) = (2/(μL)) ∑_{j=1}^∞ ((−1)^(j−1)/(ωj² − ω²)) sin((2j − 1)πx/(2L))  (5.166)
⟨φj, W⟩ = √(2/(μL)) (1/(γEA cos γL)) ∫₀ᴸ sin((2j − 1)πx/(2L)) sin γx dx
= √(2/(μL)) (1/(γEA cos γL)) ∫₀ᴸ sin(ωj √(μ/EA) x) sin γx dx

and evaluation of the integral leads to

⟨φj, W⟩ = √(2/(μL)) ((−1)^(j−1)/(μ(ωj² − ω²)))  (5.167)
This in turn implies
W(x, ω) = ∑_{j=1}^∞ ⟨φj, MW⟩ φj(x) = μ ∑_{j=1}^∞ ⟨φj, W⟩ φj(x)  (5.168)
and tells us that the r.h.s. of Equation 5.166 is the series expansion of
the function W (x, ω ) of Equation 5.165.
If now, for example, we are interested in the response at x = L,
Equation 5.165 gives
W(L, ω) = H(L, L, ω) = tan γL/(γEA) = (1/(ω√(μEA))) tan(ωL √(μ/EA))  (5.169a)
which must be compared with the FRF that we obtain from Equation
5.143, that is, with
H(L, L, ω) = (2/(µL)) Σ_{j=1}^{∞} 1/(ωj² − ω²)  (5.169b)
In order to show that the two FRFs of Equations 5.169a and b are the
same, the reader is invited to do so by setting θ = 2γL/π in the standard result found in mathematical tables

(π/2) tan(πθ/2) = 2θ/(1² − θ²) + 2θ/(3² − θ²) + 2θ/(5² − θ²) + ⋯ = Σ_{j=1}^{∞} 2θ/[(2j − 1)² − θ²]
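The equality of the two FRFs can also be checked numerically by truncating the modal series; a minimal sketch (assuming unit rod properties EA = µ = L = 1, purely for illustration):

```python
import numpy as np

# Assumed unit rod properties, purely for illustration
EA, mu, L = 1.0, 1.0, 1.0
omega = 2.0                              # excitation frequency, away from resonance
gamma = omega * np.sqrt(mu / EA)

# Closed form, Equation 5.169a
H_closed = np.tan(gamma * L) / (gamma * EA)

# Modal series, Equation 5.169b, with omega_j = (2j-1)*pi/(2L)*sqrt(EA/mu)
j = np.arange(1, 200_001)
omega_j = (2 * j - 1) * np.pi / (2 * L) * np.sqrt(EA / mu)
H_series = (2.0 / (mu * L)) * np.sum(1.0 / (omega_j**2 - omega**2))

print(H_closed, H_series)   # the two agree to several significant digits
```

Since the terms decay like 1/j², the truncation error after N terms is of order 1/N, which is why a fairly large N is used here.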
Example 5.8
As a second example of the method, consider the longitudinal vibra-
tions of a vertical rod subjected to a support harmonic motion of unit
amplitude. For this case, the equation of motion Kw + µ ∂²ₜₜw = 0 and the appropriate BCs are

−EA ∂²w/∂x² + µ ∂²w/∂t² = 0;   w(0, t) = e^{iωt},   ∂ₓw(L, t) = 0  (5.170)
where, typically, we have seen that the operator L is self-adjoint for a large
class of systems of our interest.
Now, leaving mathematical rigour aside for the moment, let us make some
formal manipulations and suppose that we can find an operator L−1 such
that L−1 L = LL−1 = I , where I is the identity operator. Then L−1 Lu = L−1 F
and the solution of problem 5.173 is u = L−1 F . Moreover, since L is a dif-
ferential operator, it is eminently reasonable (and correct) to expect that L−1
is an integral operator and that the solution u will be expressed in the form
u = L⁻¹F = ∫ G(x, ξ)F(ξ) dξ  (5.174)
where G(x, ξ ) is a function to be determined that depends on the problem
at hand. As for terminology, in the mathematical literature, G(x, ξ ) is called
the Green’s function of the differential operator L and is – using a com-
mon term from the theory of integral equations – the kernel of the integral
operator L−1.
In light of these considerations, let us proceed with our ‘free’ manipula-
tions. If now we write
F(x) = LL⁻¹F = L ∫ G(x, ξ)F(ξ) dξ = ∫ {LG(x, ξ)} F(ξ) dξ  (5.175)
and recall the defining property of the Dirac delta function (see Section B.3 of Appendix B), then the last expression in Equation 5.175 shows that G must be such that

LG(x, ξ) = δ(x − ξ)  (5.176)
Remark 5.22
We have pointed out in Appendix B that although the Dirac delta is not a
function in the ordinary sense but a so-called distribution (or generalised
function), our interest lies in the many ways in which it is used in applica-
tions and not in the rigorous theory. The rather ‘free’ manipulations above,
therefore, are in this same spirit, and a more mathematically oriented reader
can find their full justification in the theory of distributions (see References
given in Remark B.6 of Appendix B).
The problem at this point is how to determine the Green’s function, and
here we illustrate two methods. For the first method, we can go back to
Section 5.10 and recall that Equation 5.143 gives the system’s response at
the point x to a local delta excitation applied at xk. Therefore, with a simple
change of notation in which we set xk = ξ , it readily follows that that same
equation provides the expression of the Green’s function in the form of a
series expansion in terms of the system’s eigenfunctions, that is,
G(x, ξ) = Σ_{j=1}^{∞} φj(x)φj(ξ)/(λj − λ)  (5.177)
In this light, in fact, the system’s response of Equation 5.142a can be rewrit-
ten as
w(x, t) = u(x) e^{iωt} = e^{iωt} Σ_{j=1}^{∞} [φj(x)/(λj − λ)] ∫_R φj(ξ)F(ξ) dξ
= e^{iωt} ∫_R Σ_{j=1}^{∞} [φj(x)φj(ξ)/(λj − λ)] F(ξ) dξ = e^{iωt} ∫_R G(x, ξ)F(ξ) dξ
in agreement with Equation 5.174, and where the Green’s function is given
by Equation 5.177.
For the second method, we refer to a further development – in addition to
the ones considered in Section 5.4 – on SLps. For the non-homogeneous SLp, in fact, we have the following result (see, for example, Debnath and Mikusinski (1999) or Vladimirov (1987)):

Provided that λ = 0 is not an eigenvalue of Lu = λu subject to the BCs 5.178₂ and 5.178₃, the Green's function of the problem 5.178 is given by
G(x, ξ) = −[1/(p(x)W(x))] × { u₁(x)u₂(ξ)   (a ≤ x < ξ)
                              u₂(x)u₁(ξ)   (ξ < x ≤ b)  (5.179)

where

W(x) ≡ det [ u₁(x)  u₂(x) ; u₁′(x)  u₂′(x) ] = u₁(x)u₂′(x) − u₂(x)u₁′(x)
Given the Green’s function 5.179, the solution of the problem 5.178 is then
u(x) = ∫_a^b G(x, ξ)F(ξ) dξ  (5.180)
Example 5.9
For a uniform finite string fixed at both ends under the action of a
distributed external load f (x, t), we know that we have the equation of
motion Kw + M ∂²ₜₜw = f(x, t) with K = −T₀ ∂²ₓₓ and M = µ. Then, assuming an excitation and a corresponding response of the harmonic forms given above, we get T₀u″(x) + µω²u(x) = −F(x), from which it follows
which become
once we enforce the fixed BCs u1 (0) = 0 and u2 (L) = 0 at the two
end points. In addition to this, two more conditions are required in
order to match the solutions at the point x = ξ . The first, obviously,
is u1 (ξ ) = u2 (ξ ) because the string displacement must be continuous at
x = ξ . For the second condition, we integrate Equation 5.181 across the
point of application of the load to get
∫_{ξ−ε}^{ξ+ε} u″ dx + k² ∫_{ξ−ε}^{ξ+ε} u dx = ∫_{ξ−ε}^{ξ+ε} δ(x − ξ) dx = 1

and, since the second integral on the l.h.s. vanishes as ε → 0 (u is continuous at x = ξ), we are left with

∫_{ξ−ε}^{ξ+ε} u″ dx = u′|_{ξ−ε}^{ξ+ε} = u₂′(ξ) − u₁′(ξ) = 1  (5.183)
w(x, t) = e^{iωt} ∫₀^L G(x, ξ)F(ξ) dξ  (5.185)
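As a numerical sketch of how the two routes compare, the following (assuming unit string data T₀ = µ = L = 1, and the homogeneous solutions u₁ = sin kx, u₂ = sin k(L − x) that satisfy the fixed BCs) evaluates the closed-form Green's function built from Equation 5.179 against the eigenfunction series of Equation 5.177:

```python
import numpy as np

# Assumed unit string data, purely for illustration
T0, mu, L = 1.0, 1.0, 1.0
omega = 2.0                                # not an eigenfrequency
k = omega * np.sqrt(mu / T0)

def G_closed(x, xi):
    """Green's function from Equation 5.179 with u1 = sin(kx), u2 = sin(k(L-x))."""
    if x > xi:
        x, xi = xi, x                      # G is symmetric in its arguments
    return np.sin(k * x) * np.sin(k * (L - xi)) / (T0 * k * np.sin(k * L))

def G_series(x, xi, n_terms=20_000):
    """Eigenfunction expansion of Equation 5.177, phi_j = sqrt(2/(mu*L)) sin(j*pi*x/L)."""
    j = np.arange(1, n_terms + 1)
    lam_j = (j * np.pi / L) ** 2 * T0 / mu     # lambda_j = omega_j^2
    return np.sum((2.0 / (mu * L)) * np.sin(j * np.pi * x / L)
                  * np.sin(j * np.pi * xi / L) / (lam_j - omega**2))

print(G_closed(0.3, 0.7), G_series(0.3, 0.7))   # the two agree to several digits
```

The closed form is exact and cheap to evaluate, while the series converges only like 1/N; this is one practical reason for preferring the second method when it is available.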
Remark 5.23
Random vibrations
6.1 INTRODUCTION
Assuming the reader to have had some previous exposure to the basic defi-
nitions and ideas of probability theory and statistics, we start by observing
that the concepts of ‘event’ and ‘random variable’ (r.v. for short) can be con-
sidered as two levels of a hierarchy. In fact, while a single number P(A) – its
probability – suffices for an event A, the information on a random variable
requires the knowledge of the probability of many, even infinitely many,
events. A step up in this hypothetical hierarchy, we find the concept of ran-
dom (or stochastic) process: a family X(z) of random variables indexed by
a parameter z that varies within an index set Z, where z can be discrete or
continuous.
For the most part, in the field of vibrations, the interest is focused on con-
tinuous random processes of the form X(t) with t ∈T , where t is time and
T is some appropriate time interval (the range of t for which X(t) is defined,
observed or measured).
Remark 6.1
i. Although the meaning will always be clear from the context, the time
interval T must not be confused with the period of a periodic function
(also denoted by T in previous chapters);
ii. A process can be random in both time and space. A typical example
can be the vibrations of a tall and slender building during a wind-
storm; here, in fact, the effects of wind and turbulence are random
in time and also with respect to the vertical coordinate y along the
structure.
Since X(t) is a random variable for each value of t , we can use the familiar
definitions of probability distribution function (often abbreviated as PDF)
and probability density function (pdf), and write

FX(x; t) = P(X(t) ≤ x)  (6.1a)

where

fX(x; t) = ∂FX(x; t)/∂x,  (6.1b)
and where the notations FX (x; t) and fX (x; t) for the PDF and pdf, respec-
tively, show that for a random process, these functions are in general time
dependent. The functions above are said to be of first order because more
detailed information on X(t) can be obtained by considering its behaviour
at two instants of time t1 , t2 – and, in increasing levels of detail, at any finite
number of instants t1 ,..., t n . So, for n = 2, we have the (second-order) joint-
PDF and joint-pdf
FXX(x₁, x₂; t₁, t₂) = P( ∩_{i=1}^{2} {X(tᵢ) ≤ xᵢ} )  (6.2a)

fXX(x₁, x₂; t₁, t₂) dx₁dx₂ = P( ∩_{i=1}^{2} {xᵢ < X(tᵢ) ≤ xᵢ + dxᵢ} )
with
fXX(x₁, x₂; t₁, t₂) = ∂²FXX(x₁, x₂; t₁, t₂)/(∂x₁∂x₂),  (6.2b)
and where the extension to n > 2 is straightforward. Note that the second-
order functions contain information on the first-order ones because we have
FX(x₁; t₁) = FXX(x₁, ∞; t₁, t₂),   FX(x₂; t₂) = FXX(∞, x₂; t₁, t₂)
fX(x₁; t₁) = ∫_{−∞}^{∞} fXX(x₁, x₂; t₁, t₂) dx₂,   fX(x₂; t₂) = ∫_{−∞}^{∞} fXX(x₁, x₂; t₁, t₂) dx₁  (6.3)
(this is a general rule and nth-order functions contain information on all kth-
order for k < n). Clearly, by a similar line of reasoning, we can consider more
than one stochastic process – say, for example, two processes X(t) and Y (t′) –
and introduce their joint-PDFs for various possible sets of instants of time.
As known from basic probability theory (see also the following Remark
6.2(i)), we can use the PDFs or pdfs to calculate expectations or expected values. So, in particular, we have the mth-order (m = 1, 2, …) moment E(X^m(t)) – with the mean µX(t) = E(X(t)) as a special case – and the mth-order central moment E{[X(t) − µX(t)]^m}, with the variance σX²(t) as the special case for m = 2. For any two instants of time t₁, t₂ ∈ T, we have, respectively, the autocorrelation and autocovariance functions, defined as
RXX(t₁, t₂) ≡ E[X(t₁)X(t₂)]
KXX(t₁, t₂) ≡ E{[X(t₁) − µX(t₁)][X(t₂) − µX(t₂)]}  (6.4a)
and related by the equation

KXX(t₁, t₂) = RXX(t₁, t₂) − µX(t₁)µX(t₂)  (6.4b)
Also, to every process X(t) with finite mean µX (t), it is often convenient to
associate the centred process Xˆ (t) = X(t) − µX (t), which is a process with
zero mean whose moments are the central moments of X(t). In particular,
note that the autocorrelation and autocovariance functions of Xˆ (t) coincide.
Similarly, for two processes X(t) and Y(t), we have the cross-correlation and cross-covariance functions

RXY(t₁, t₂) ≡ E[X(t₁)Y(t₂)]
KXY(t₁, t₂) ≡ E{[X(t₁) − µX(t₁)][Y(t₂) − µY(t₂)]}  (6.6a)
while the ‘two-processes counterpart’ of Equation 6.4b is

KXY(t₁, t₂) = RXY(t₁, t₂) − µX(t₁)µY(t₂)  (6.6b)

Remark 6.2

i. We recall the general formula E[(X − µX)^m] = Σ_{k=0}^{m} (−1)^k [m!/(k!(m − k)!)] µX^k E(X^{m−k}) (with the convention E(X⁰) = 1) that gives the central moments of X in terms of its ordinary (i.e. non-central) moments. Also, the square root of the variance, denoted by σX, is called standard deviation, while √E(X²) is called root mean square (rms) value;
ii. In the light of point (i) of the Remark, it is understood that the autocorrelation of Equation 6.4a₁ is obtained as RXX(t₁, t₂) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x₁x₂ fXX(x₁, x₂; t₁, t₂) dx₁dx₂ and that the autocovariance, cross-correlation and cross-covariance are determined accordingly;
iii. A key concept in probability theory is the notion of independent
random variables, where in general terms a r.v. Y is independent of
another r.v. X if knowledge of the value of X gives no information
at all on the possible values of Y , or on the probability that Y will
take on any of its possible values. In this respect, important consequences of independence are the relations FXY(x, y) = FX(x)FY(y) and fXY(x, y) = fX(x)fY(y), meaning that the joint-PDF and joint-pdf factorise into the product of the individual PDFs and pdfs. Also, independence implies E(XY) = E(X)E(Y) and Cov(X, Y) = 0, where the covariance is defined as Cov(X, Y) = E[(X − µX)(Y − µY)]. However, note that the first two relations are ‘if and only if’ statements, whereas the last two are not; they are implied by independence but, when they hold, in general do not imply independence;
iv. We recall that two random variables satisfying Cov(X, Y) = 0 are called uncorrelated. So, the final part of point (iii) of the remark tells us that independence implies uncorrelation, but uncorrelation does not, in general, imply independence.
Example 6.1
As a simple example, consider the vibrations of a car that travels every
day over a certain rough road at a given speed and takes approxi-
mately 10 minutes from beginning to end. So, with t varying in the
time interval T = [0,600] seconds, we will measure a vibration time
history x1(t) on the first day, x2 (t) on the second day, etc. But since
the (hypothetical) ‘population’ associated with this ensemble is the set
of sample functions that, in principle, could be recorded by repeating
the ‘experiment’ an infinite number of times, our representation of the
process will be a finite set of time records. For purposes of illustration, Figure 6.1 shows an ensemble of size n = 4.
6.2.1 Stationary processes
The term ‘stationary’ refers to the fact that some characteristics of a ran-
dom process – moments and/or probability laws – remain unchanged under
an arbitrary shift of the time axis, meaning that the process is, broadly
speaking, in some kind of ‘statistical steady state’.
In general, moments’ stationarity is more frequently used in applica-
tions and we call a process mean-value (or first-moment) stationary if
µX (t + r) = µX (t) for all values of t and r, a condition which clearly implies
hold for any value of r and any two times t1 , t2. When this is the case, it
follows that the autocorrelation (or cross-correlation) cannot depend on
the specific values of t1 , t2 but only on their difference τ = t2 − t1 and we can
write RXX (τ ) or RXY (τ ). A slight variation of second-moment stationarity is
obtained if the equalities 6.7 hold for the covariance or cross-covariance
function. In the two cases, respectively, we will then speak of covariant sta-
tionary process or jointly covariant stationary processes and, as above, we
will have the simpler functional dependence KXX (τ ) or KXY (τ ). Note that if
a process X(t) is both mean-value and second-moment stationary, then (by
virtue of Equation (6.4b)) it is also covariant stationary. In particular, the
two notions coincide for the centred process Xˆ (t) – which, being defined as
Xˆ (t) = X(t) − µX (t), is always mean-value stationary.
By extension, a process is called mth moment stationary if
E[X(t₁ + r) ⋯ X(tm + r)] = E[X(t₁) ⋯ X(tm)]  (6.8)
for all values of the shift r and times t₁, …, tm. So, in particular, if t₁ = t₂ = ⋯ = tm ≡ t, then E[X^m(t + r)] = E[X^m(t)] and the mth moment E(X^m) does not depend on t. At the other extreme, if the times t₁, …, tm are all different, then the mth moment function will not depend on their specific values but only on the m − 1 time increments τ₁ = t₂ − t₁, …, τ_{m−1} = tm − t_{m−1}.
Remark 6.3
The developments above show that stationarity reduces the number of neces-
sary time arguments by one. This is a general rule that applies also to the other
types of stationarity – that is first order, second order, etc. – introduced below.
for all values of x, t and r. This implies that the process's PDF and pdf do not change with time and can be written as FX(x) and fX(x). The process is second-order stationary if
for all values of x₁, x₂, t₁, t₂ and r, and so on, up to the most restrictive type of stationarity – called strict – which occurs when X(t) is mth-order stationary for all m = 1, 2, ….
Given these definitions, the question arises if the various types of sta-
tionarity are somehow related. The answer is ‘partly so’, and here, we only
limit ourselves to two results: (a) mth-order stationarity implies all sta-
tionarities of lower order, while the same does not apply to mth moment
stationarity (so, for example, a second-moment stationary process may
not be mean-value stationary) and (b) mth-order stationarity implies mth
moment stationarity. In this respect, note that from points (a) and (b), it
follows that an mth-order stationary process is also stationary up to the
mth moment.
Remark 6.4
thus implying that in practice one can consider only the positive values of τ .
Under the assumption of weak stationarity, a second property is that the
two functions are bounded by their values at τ = 0 because we have
|RXX(τ)| ≤ RXX(0) = E(X²),   |KXX(τ)| ≤ KXX(0) = σX²,  (6.12)

which hold for all τ and follow from using the r.v.s X(t) and X(t + τ) in the well-known result |E(XY)| ≤ √(E(X²)E(Y²)) called Schwarz's inequality.
Remark 6.5
where the second equation is simply the first for τ = 0 and the first, in turn, is the stationary counterpart of Equation 6.4b. Also, since this is a frequently encountered case in practice, it is worth noting that for µX = 0, Equations 6.13 become KXX(τ) = RXX(τ) and RXX(0) = KXX(0) = σX².
Another property of autocovariance functions concerns their behaviour
for large values of τ because in most cases of practical interest (if the process
does not contain any periodic component), it is found that KXX (τ ) → 0 as
τ → ∞. Rather than a strict mathematical property, however, this is some-
how a consequence of the randomness of the process and indicates that,
in general, there is an increasing loss of correlation as X(t) and X(t + τ ) get
further and further apart. In other words, this means that the process pro-
gressively loses memory of its past, the loss of memory being quick when
KXX (τ ) drops rapidly to zero (as is the case for extremely irregular time
records) or slower when the time records are relatively smooth.
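A minimal numerical illustration of this memory-loss behaviour can be given by using a discrete first-order autoregressive sequence as a stand-in for X(t) (an assumption made here only for illustration; its normalised autocovariance at lag n is a^n, so small a means an irregular, quickly forgetting record and a close to 1 a smooth one):

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_record(a, n):
    """One sample record of the AR(1) sequence x_t = a*x_{t-1} + eps_t."""
    x = np.empty(n)
    x[0] = 0.0
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + eps[t]
    return x

def norm_autocov(x, lag):
    """Sample autocovariance at 'lag', normalised by the sample variance."""
    x = x - x.mean()
    return float(np.mean(x[:-lag] * x[lag:]) / np.mean(x * x))

n = 200_000
rho = {a: [norm_autocov(ar1_record(a, n), lag) for lag in (1, 10)]
       for a in (0.2, 0.9)}
# Theoretical normalised autocovariance is a**lag: for a = 0.2 the correlation
# is essentially gone after a few lags, for a = 0.9 it decays much more slowly
print(rho)
```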
If two WS processes are cross-covariant stationary, the cross-correlation
functions RXY (τ ), RYX (τ ) are neither odd nor even, and in general, we have
RXY (τ ) ≠ RYX (τ ). However, the property of invariance under a time shift
leads to
RXY²(τ) ≤ RXX(0)RYY(0),   KXY²(τ) ≤ KXX(0)KYY(0) = σX²σY²,  (6.16)
6.2.3 Ergodic processes
By definition, a process is strictly ergodic if one sufficiently long sample
function x(t) is representative of the whole process. In other words, if the
length T of the time record is large and it can reasonably be assumed that
x(t) passes through all the values accessible to it, then we have good rea-
sons to believe that the process is ergodic. The rationale behind this lies
essentially in two considerations of statistical nature. The first is that in
any sample other than x(t), we can expect to find not only the same values
(although, clearly, in a different time order) taken on by the process in the
sample x(t), but also the same frequency of appearance of these values. The
second is that if we imagine dividing our sample function x(t) into a number, say p, of sections and we can assume the behaviour in each section to be independent of the behaviour of the other sections, then, for all practical
purposes, we can consider the p sections as a satisfactory and representative
ensemble of the process.
The consequence is that we can replace ensemble averages by time aver-
ages, that is averages calculated along the sample x(t). So, for example, we
say that a WS process is weakly ergodic if it is both mean-value ergodic and
second-moment ergodic, that is if the two equalities
hold, where the temporal mean value ⟨x⟩ (but we will also denote it by x̄) and the temporal correlation CXX(τ) are defined as
⟨x⟩ = lim_{T→∞} (1/T) ∫₀^T x(t) dt
CXX(τ) = ⟨x(t)x(t + τ)⟩ = lim_{T→∞} (1/T) ∫₀^T x(t)x(t + τ) dt.  (6.18)
Remark 6.6
sense and some insight on the physical mechanism generating the process
under study.
The second consideration refers to the generic term ‘sufficiently long
sample’, used more than once in the preceding discussion. A practical
answer in this respect is that the duration of the time record must be
at least longer than the period of its lowest spectral components (the
meaning of ‘spectral components’ for a random process is considered in Section 6.4).
Remark 6.7
Example 6.2
Consider the random process X(t) = U sin(ω t + V ), where ω is a constant
and the two r.v.s U ,V are such that (a) they are independent, (b) the
amplitude U has mean and variance µ U , σ U2 , respectively, and (c) the
phase angle V is a r.v. uniformly distributed in the interval [0,2π ] (this
meaning that its pdf is fV (v) = 1 2π for v ∈[0,2π ] and zero otherwise).
In order to see if the process is WS and/or ergodic, we must calculate
and compare the appropriate averages (the reader is invited to fill in the
details of the calculations). For ensemble averages, on the one hand, we
take independence into account to obtain the mean as
E(X(t)) = E(U) E[sin(ωt + V)] = µU ∫₀^{2π} fV(v) sin(ωt + v) dv = (µU/2π) ∫₀^{2π} sin(ωt + v) dv = 0  (6.19)
RXX(r, s) = E(U²) E[sin(ωr + V) sin(ωs + V)] = [E(U²)/2π] ∫₀^{2π} sin(ωr + v) sin(ωs + v) dv
= [E(U²)/4π] ∫₀^{2π} {cos ω(s − r) − cos[ω(r + s) + 2v]} dv
= [(σU² + µU²)/2] cos ω(s − r) = [(σU² + µU²)/2] cos ωτ,  (6.20)

where τ = s − r and we took the relation E(U²) = σU² + µU² into account.
Since the mean does not depend on time and the correlation depends
only on τ , the process is WS (also note that RXX (τ ) = KXX (τ ) because
E(X) = 0). On the other hand, the time averages calculated (between
zero and T = 2π ω ) for a specific sample in which the two r.v.s have the
values u, v give
⟨x⟩_T = (1/T) ∫₀^T u sin(ωt + v) dt = 0
CXX(T)(τ) = ⟨u² sin(ωt + v) sin[ω(t + τ) + v]⟩_T = (u²/2) cos ωτ  (6.21)
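The averages of this example can be mirrored by a short Monte Carlo sketch (the normal distribution assumed below for U is arbitrary — only its moments µU and σU matter, and the numerical values are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
omega, tau, t = 3.0, 0.4, 0.7
mu_U, sigma_U = 1.0, 0.5           # assumed moments of the amplitude U
n = 200_000                        # ensemble size

# Ensemble averages over many realisations of X(t) = U sin(omega*t + V)
U = rng.normal(mu_U, sigma_U, n)   # any U with these two moments will do
V = rng.uniform(0.0, 2.0 * np.pi, n)
X1 = U * np.sin(omega * t + V)
X2 = U * np.sin(omega * (t + tau) + V)

mean_est = X1.mean()               # ~ 0, as in Equation 6.19
R_est = np.mean(X1 * X2)           # ~ Equation 6.20
R_theory = 0.5 * (sigma_U**2 + mu_U**2) * np.cos(omega * tau)

# Time average along ONE sample with fixed values u, v (Equation 6.21)
u, v = 2.0, 1.0
ts = np.linspace(0.0, 2.0 * np.pi / omega, 10_000, endpoint=False)
C_time = np.mean(u * np.sin(omega * ts + v)
                 * u * np.sin(omega * (ts + tau) + v))   # = (u**2/2)*cos(omega*tau)

print(mean_est, R_est, R_theory, C_time)
# C_time equals R_theory only if u**2 happens to equal E(U^2): the process is
# WS but, in general, not second-moment ergodic.
```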
Since the main physical quantities involved in the study and analysis of vibra-
tions are displacement, velocity and acceleration, it is important to consider
how these quantities are related in the case of random processes. Just like
ordinary calculus, the calculus of random processes revolves around the
notion of limit and hence convergence. We call stochastic derivative of X(t) the process Ẋ(t) defined as

Ẋ(t) = lim_{∆t→0} [X(t + ∆t) − X(t)]/∆t,  (6.22)

the limit being understood in the mean-square sense.
I = ∫_a^b X(t) dt ≡ lim_{n→∞} In = lim_{n→∞} Σ_{k=1}^{n} X(tk(n)) ∆tk(n),  (6.23)

where Pn = {a = t₀(n) ≤ t₁(n) ≤ ⋯ ≤ tn(n) = b} is a partition of the interval [a, b], ∆tk(n) = tk(n) − t_{k−1}(n), and the sequence of partitions P₁, P₂, … is such that |∆t(n)| = max_k ∆tk(n) → 0 as n → ∞. Note that for fixed a, b, the quantity I is a random variable.
Remark 6.8
µ_Ẋ(t) ≡ E(Ẋ(t)) = (d/dt) E[X(t)] = dµX(t)/dt.  (6.24)
Things are a bit more involved for the cross-correlations between Ẋ(t) and X(t), but if we let r, s be two instants of time with r ≤ s, it is not difficult to obtain the relations

R_ẊX(r, s) = ∂RXX(r, s)/∂r,   R_XẊ(r, s) = ∂RXX(r, s)/∂s,
R_ẊẊ(r, s) = ∂²RXX(r, s)/(∂r∂s).  (6.25)
If, in particular, X(t) is a WS process, then its mean value is time independent and Equation 6.24 implies µ_Ẋ ≡ E(Ẋ(t)) = 0. Moreover, since RXX depends only on the difference τ = s − r (so that dτ/dr = −1 and dτ/ds = 1), the same functional dependence applies to the correlation functions above and we have

R_ẊX(τ) = −dRXX(τ)/dτ,   R_XẊ(τ) = dRXX(τ)/dτ,   R_ẊẊ(τ) = −d²RXX(τ)/dτ²  (6.26)
Remark 6.9
i. E(Ẋ(t)) = 0 implies, owing to Equation 6.15, R_ẊX(τ) = K_ẊX(τ), R_XẊ(τ) = K_XẊ(τ) and R_ẊẊ(τ) = K_ẊẊ(τ). From this last relation, it follows

R_ẌX(τ) = −R_ẊẊ(τ) = RXX″(τ),   R_ẊẌ(τ) = −R_ẌẊ(τ) = −RXX^(3)(τ)
R_ẌẌ(τ) = RXX^(4)(τ),   σ_Ẍ² = RXX^(4)(0),  (6.27)
E(I) = ∫_a^b µX(t) dt,   E(I²) = ∫_a^b ∫_a^b RXX(r, s) dr ds,  (6.28)
thus implying that Var(I) is given by the double integral of the covariance
function KXX (r , s). A slight generalisation of Equation 6.23 is given by the
integral

Q(z) = ∫_a^b X(t) k(t, z) dt,  (6.29)

whose mean and correlation functions are

µQ(z) = ∫_a^b µX(t) k(t, z) dt,   RQQ(z₁, z₂) = ∫_a^b ∫_a^b RXX(r, s) k(r, z₁) k(s, z₂) dr ds,  (6.30)
which follow from the fact that the kernel function k is non-random.
A different process – we call it J(t) – is obtained if X(t) is integrable on [a, b] and, for t ∈ [a, b], we consider the integral J(t) = ∫_a^t X(r) dr. Then, we have the relations
µJ(t) = E(J(t)) = ∫_a^t µX(r) dr,   RJJ(r, s) = ∫_a^r ∫_a^s RXX(u, v) du dv
RXJ(r, s) = ∫_a^s RXX(r, v) dv,  (6.31)
which lend themselves to two main considerations. The first is that, when
compared with Equations 6.24 and 6.25, they show that the process J(t)
can be considered as the ‘stochastic antiderivative’ of X(t). This is a satis-
factory parallel with ordinary calculus and conforms with our idea that
even with random vibrations we can think of velocity as the derivative of
displacement and of displacement as the integral of velocity. The second
consideration is less satisfactory: When X(t) is stationary, it turns out that,
in general, J(t) is not. Just by looking at the first of Equations 6.31, in fact, we see that the integral process J(t) of a mean-value stationary process X(t)
is not mean-value stationary unless µX = 0. Similarly, it is not difficult to
find examples of second-moment stationary processes whose integral J(t)
is neither second-moment stationary nor jointly second-moment stationary
with X(t).
6.4 SPECTRAL REPRESENTATION OF
STATIONARY RANDOM PROCESSES
A stationary random process X(t) is such that the integral ∫_{−∞}^{∞} X(t) dt does not converge, and therefore, it does not have a Fourier transform in the
not converge, and therefore, it does not have a Fourier transform in the
classical sense. Since, however – we recall from Section 6.2.2 – randomness
results in a progressive loss of correlation as τ increases, the covariance function of most real-world processes is, as a matter of fact, integrable on the real line R. Then, its Fourier transform exists, and, by definition, we call it power spectral density (PSD) and denote it by the symbol
SXX (ω ). Moreover, if SXX (ω ) is itself integrable on R, then the two functions
KXX (τ ), SXX (ω ) form a Fourier transform pair, and we have
SXX(ω) = F{KXX(τ)} = (1/2π) ∫_{−∞}^{∞} KXX(τ) e^{−iωτ} dτ,
KXX(τ) = F⁻¹{SXX(ω)} = ∫_{−∞}^{∞} SXX(ω) e^{iωτ} dω  (6.32)
Remark 6.10
i. Other common names for SXX (ω ) are autospectral density or, simply,
spectral density;
ii. Note that some authors define SXX (ω ) as F {RXX (τ )}. They generally
assume, however, that either X(t) is a process with zero-mean or its
nonzero-mean value has been removed (otherwise, the PSD has a
Dirac-δ ‘spike’ at ω = 0);
iii. The name ‘PSD’ comes from an analogy with electrical systems. If, in fact, we think of X(t) as a voltage signal across a unit resistor, then X²(t) is the instantaneous rate of energy dissipation.
Example 6.3
Given the correlation function RXX(τ) = R₀ e^{−c|τ|} (where c is a positive constant), the reader is invited to show that the corresponding spectral density is

SXX(ω) = cR₀/[π(c² + ω²)]
and to draw a graph for at least two different values of c, showing that
increasing values of c imply a faster decrease of RXX to zero (meaning
more irregular time histories of X(t)) and a broader spectrum of frequencies in SXX. Conversely, given the PSD SXX(ω) = S₀ e^{−c|ω|}, the reader is invited to show that the corresponding correlation function is

RXX(τ) = 2cS₀/(c² + τ²).
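The first of these transform pairs can be checked by direct numerical quadrature of Equation 6.32 (a sketch; the grid limits below are an assumption, chosen so that e^{−c|τ|} is negligible at the ends):

```python
import numpy as np

c, R0 = 2.0, 1.0
tau = np.linspace(-30.0, 30.0, 1_200_001)   # e^{-c|tau|} ~ 1e-26 at the ends
R = R0 * np.exp(-c * np.abs(tau))
dtau = tau[1] - tau[0]

results = []
for w in (0.0, 1.0, 3.0):
    # S_XX(w) = (1/2pi) * integral of R(tau) e^{-i*w*tau} dtau (Equation 6.32);
    # R is even, so only the cosine part of the exponential survives
    S_num = np.sum(R * np.cos(w * tau)) * dtau / (2.0 * np.pi)
    S_exact = c * R0 / (np.pi * (c**2 + w**2))
    results.append((w, S_num, S_exact))
    print(w, S_num, S_exact)
```

The same scheme, with the roles of τ and ω exchanged (and no 1/2π factor), verifies the second pair as well.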
Example 6.4
Leaving the details of the calculations to the reader (see the
hint below), the PSD corresponding to the correlation function
RXX(τ) = e^{−c|τ|} cos bτ is

SXX(ω) = (1/2π) { c/[c² + (ω + b)²] + c/[c² + (ω − b)²] }.
In this case, the shape of the curve depends on the ratio between the
two parameters c and b . If c < b, the oscillatory part prevails and the
spectrum shows two peaks at the frequencies ω = ±b. On the other
hand, if c > b, the decreasing exponential prevails and the spectrum is
‘quite flat’ over a range of frequencies.
Hint for the calculation: Use Euler's formula 2 cos bτ = e^{ibτ} + e^{−ibτ} and calculate the Fourier transform of RXX(τ) as

(1/4π) [ ∫_{−∞}^{0} e^{cτ} (e^{ibτ} + e^{−ibτ}) e^{−iωτ} dτ + ∫_{0}^{∞} e^{−cτ} (e^{ibτ} + e^{−ibτ}) e^{−iωτ} dτ ].
E[CXX(τ; k, T)] ≡ E[ (1/2T) ∫_{−T}^{T} xk(t) xk(t + τ) dt ]
= (1/2T) ∫_{−T}^{T} E[xk(t) xk(t + τ)] dt = RXX(τ) (1/2T) ∫_{−T}^{T} dt = RXX(τ),  (6.34)
CXX(τ; k, T) is related to the modulus squared of Xk(T)(ω) ≡ F{xk(T)(t)} by the equation

SXX(ω; k, T) ≡ F{CXX(τ; k, T)} = (π/T) |Xk(T)(ω)|².  (6.35)

Taking expectations of Equation 6.35, we then get

SXX(ω; T) = F{RXX(τ)} = (π/T) E[ |Xk(T)(ω)|² ],  (6.36)
where the first equality is due to Equation 6.34 when one takes into account
that the expectation and Fourier operators commute, that is E ( F{•} ) = F {E(•)}.
Finally, since in the limit as T → ∞ the function SXX (ω ; T ) tends to the PSD
SXX (ω ), Equation 6.36 gives
SXX(ω) = lim_{T→∞} (π/T) E[ |Xk(T)(ω)|² ],  (6.37)
which, by showing that the PSD SXX (ω ) does indeed ‘capture’ the frequency
information of the process X(t), provides the ‘yes’ answer to our question.
Remark 6.11
Remark 6.12
This additional remark is made because Equation 6.35 may suggest the following (incorrect) line of thought: Since the index k is superfluous if X(t) is ergodic, we can in this case skip the passage of taking expectations (of Equation 6.35) and obtain Equation 6.37 in the simpler form SXX(ω) = lim_{T→∞} (π/T)|X(T)(ω)|². This, in practice, means that the (unknown) PSD of an ergodic process could be estimated by simply squaring the modulus of F{x(T)(t)} and then multiplying it by π/T, where x(T)(t) is a single and sufficiently long time record. However tempting, this argument is not correct because it turns out that (π/T)|X(T)(ω)|², as an estimator of SXX(ω), is not consistent, meaning that the variance of this estimator is not small and does not go to zero as T increases. Consequently, we can indeed write the approximation SXX(ω) ≅ (π/T)|X(T)(ω)|², but we cannot expect it to be a reliable and accurate approximation of SXX(ω), no matter how large T is. The conclusion is that we need to take the expectation of Equation 6.35 even if the process is ergodic. In practice, this means that more reliable approximations are obtained only at the price of a somewhat lower frequency resolution (this is because taking expectations provides a ‘smoothed’ estimate of the PSD). More on this aspect can be found, for example, in Lutes and Sarkani (1997) or Papoulis (1981).
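The point of the remark can be illustrated numerically with a discrete FFT-based stand-in for Equation 6.35 (a sketch, not the book's procedure): the relative scatter of the raw periodogram stays of order one however long the record, while averaging over segments — a crude surrogate for taking expectations — reduces it by roughly 1/√(number of segments):

```python
import numpy as np

rng = np.random.default_rng(2)

def periodogram(x, dt):
    """(pi/T)|X(omega)|^2 via the FFT: a discrete stand-in for Equation 6.35."""
    T = len(x) * dt
    X = np.fft.rfft(x) * dt                  # approximates the Fourier integral
    return (np.pi / T) * np.abs(X) ** 2

dt, n_seg, seg_len = 0.01, 64, 1024
record = rng.standard_normal(n_seg * seg_len)    # white-noise-like sample record

raw = periodogram(record, dt)                    # one long record, no averaging
avg = np.mean([periodogram(s, dt)                # average over 64 segments
               for s in record.reshape(n_seg, seg_len)], axis=0)

# Relative scatter across frequency bins (DC and Nyquist excluded): the raw
# estimate fluctuates by ~100% whatever T is; averaging cuts this by ~1/sqrt(64)
cv_raw = np.std(raw[1:-1]) / np.mean(raw[1:-1])
cv_avg = np.std(avg[1:-1]) / np.mean(avg[1:-1])
print(cv_raw, cv_avg)
```

This is essentially the idea behind the averaged-periodogram (Welch-type) estimators used in practice.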
SXX(ω) = SXX*(ω) = SXX(−ω),   SXY(ω) = SYX*(ω) = SXY*(−ω),  (6.38)
where the first equation shows that auto-PSDs are real, even functions of ω .
Since this implies that there is no loss of information in considering only
the range of positive frequencies, one frequently encounters the so-called
one-sided spectral densities GXX (ω ), GXX (ν ), where ν = ω 2π is the ordinary
frequency in hertz and where the relations with SXX(ω) are

GXX(ω) = 2SXX(ω)  (ω ≥ 0),   GXX(ν) = 2πGXX(ω) = 4πSXX(ω)  (6.39)

Also, setting τ = 0 in Equation 6.32₂ gives

KXX(0) = σX² = ∫_{−∞}^{∞} SXX(ω) dω,  (6.40)
which can be used to obtain the variance of the (stationary) process X(t) by
calculating the area under its PSD curve. For cross-PSDs, Equation 6.38₂
shows that, in general, they are complex functions with a real part
Re {SXY (ω )} and an imaginary part Im {SXY (ω )}. In applications, these two
functions are often called the co-spectrum and quad-spectrum, respec-
tively, and sometimes are also denoted by special symbols like, for example,
CoXY (ω ) and QuXY (ω ).
If now we turn our attention to the first two derivatives of KXX (τ ), the
properties of Fourier transforms (Appendix B, Section B.2.1) give
F{KXX′(τ)} = iω F{KXX(τ)},   F{KXX″(τ)} = −ω² F{KXX(τ)},  (6.41)

where on the r.h.s. of both relations we recognise the PSD SXX(ω) = F{KXX(τ)}, while (owing to Equation 6.26 and Remark 6.9) the l.h.s. are F{K_XẊ(τ)} and −F{K_ẊẊ(τ)}, respectively. Since, by definition, these two transforms are the cross-PSD S_XẊ(ω) and the PSD S_ẊẊ(ω), we conclude that

S_XẊ(ω) = iω SXX(ω),   S_ẊẊ(ω) = ω² SXX(ω)  (6.42)
The same line of reasoning applies to the third- and fourth-order derivatives of KXX(τ), and we get

S_ẊẌ(ω) = iω³ SXX(ω),   S_ẌẌ(ω) = ω⁴ SXX(ω)  (6.43)

which are special cases of the general formula S_{X^(j)X^(k)}(ω) = (−1)^j (iω)^{j+k} SXX(ω), where we write X^(j) to denote the process d^j X(t)/dt^j.
Finally, it is worth pointing out that knowledge of SXX(ω) allows us to obtain the variances of Ẋ(t) and Ẍ(t) as

σ_Ẋ² = ∫_{−∞}^{∞} S_ẊẊ(ω) dω = ∫_{−∞}^{∞} ω² SXX(ω) dω
σ_Ẍ² = ∫_{−∞}^{∞} S_ẌẌ(ω) dω = ∫_{−∞}^{∞} ω⁴ SXX(ω) dω,  (6.44)
KXX(τ) = ∫_{−∞}^{∞} SXX(ω) e^{iωτ} dω = S₀ ∫_{−ω₂}^{−ω₁} e^{iωτ} dω + S₀ ∫_{ω₁}^{ω₂} e^{iωτ} dω
= (2S₀/τ)(sin ω₂τ − sin ω₁τ) = (4S₀/τ) sin(τ∆ω/2) cos(ω₀τ),  (6.45)
whose graph is shown in Figure 6.3 for the values ω₀ = 50 rad/s, ∆ω = 4 rad/s (i.e. ∆ω/ω₀ = 0.08) and S₀ = 1 (Figure 6.3b is a detail of Figure 6.3a in the neighbourhood of τ = 0). From Figure 6.2, it is immediate to see that the area under SXX(ω) is 2S₀∆ω; this is in agreement with Equation 6.40 because setting τ = 0 in Equation 6.45 and observing that τ⁻¹ sin(τ∆ω/2) → ∆ω/2 as τ → 0 gives exactly KXX(0) = σX² = 2S₀∆ω.
So, since a typical narrowband process is such that ∆ω/ω₀ ≪ 1, its autocovariance function is practically a cosine oscillation at the frequency ω₀ enveloped by the slowly varying term (4S₀/τ) sin(τ∆ω/2) that decays to zero for increasing values of τ. Moreover, the fact that the frequency interval ∆ω
∞
is small means that we can rewrite Equation 6.441 as σ X2 ≅ ω 02
∫ −∞
SXX (ω )dω
and use it with Equation 6.40 to approximate the ‘characteristic frequency’
ω 0 of the process as the ratio of standard deviations
ω 0 ≅ σ X σ X . (6.46)
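As a numerical sanity check of Equations 6.44₁ and 6.46, the sketch below integrates an ideal narrowband PSD directly (the values are those quoted for Figure 6.3, assumed here only for illustration):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids depending on np.trapz/np.trapezoid naming)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# Ideal narrowband PSD: S_XX(w) = S0 for w1 <= |w| <= w2, zero elsewhere.
S0, w0, dw = 1.0, 50.0, 4.0
w1, w2 = w0 - dw / 2.0, w0 + dw / 2.0

w = np.linspace(-80.0, 80.0, 400001)
S = np.where((np.abs(w) >= w1) & (np.abs(w) <= w2), S0, 0.0)

var_x = trapz(S, w)                  # Eq. 6.40: sigma_X^2 = area under PSD = 2*S0*dw
var_xdot = trapz(w**2 * S, w)        # Eq. 6.44_1: sigma_Xdot^2
w0_est = np.sqrt(var_xdot / var_x)   # Eq. 6.46: w0 ~ sigma_Xdot / sigma_X

print(var_x, w0_est)                 # ~8.0 and ~50.0
```

For a narrow band, the ratio of standard deviations recovers the centre frequency almost exactly, as the text states.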
K_XX(τ) = 2π S0 δ(τ)   (6.48)

yields the desired spectral density S_XX(ω) = S0. A more realistic process,
called band-limited white noise, has a constant PSD (with value S0) only up
to a cut-off frequency ω = ω_C. In this case, we get

K_XX(τ) = F⁻¹{S_XX(ω)} = 2S0 sin(ω_C τ)/τ,   (6.49)

whose graph is shown in Figure 6.5 for the values ω_C = 150 rad/s and S0 = 1.
Also, note that the area under the PSD is now σ_X² = 2S0ω_C, in agreement
with the fact that the function 6.49 is such that K_XX(τ) → 2S0ω_C as τ → 0.
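Both the closed form 6.49 and the limit K_XX(τ) → 2S0ω_C can be checked numerically; this is only a sketch using the values quoted above:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule; works for complex integrands too
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

S0, wC = 1.0, 150.0          # values quoted for Figure 6.5

def K_closed(tau):
    # Eq. 6.49: K_XX(tau) = 2*S0*sin(wC*tau)/tau
    return 2.0 * S0 * np.sin(wC * tau) / tau

# K_XX(tau) as the (numerical) inverse transform of the rectangular PSD
w = np.linspace(-wC, wC, 200001)
tau = 0.013                  # an arbitrary test value
K_num = trapz(S0 * np.exp(1j * w * tau), w).real

print(K_num, K_closed(tau))  # the two agree
print(K_closed(1e-9))        # ~ 2*S0*wC = 300 as tau -> 0
```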
Remark 6.13
As intuitively expected, the covariance 6.49 tends to the white-noise
covariance of Equation 6.48 as ω_C → ∞.
X(t) = ∫_{−∞}^{∞} F(t − α) h(α) dα.   (6.50)

Then, calling µ_F = E(F(t)) the mean input level, a first quantity of interest
is the mean output E(X(t)). Recalling that expectations and integrals com-
mute, we take expectations on both sides of Equation 6.50 to get

E(X(t)) = µ_F ∫_{−∞}^{∞} h(α) dα = µ_F H(0),   (6.51)
BIBO, an acronym for bounded input–bounded output). Also, note that the
rightmost equality in Equation 6.51 is obtained by simply setting ω = 0 in
the relation H(ω) = ∫_{−∞}^{∞} h(t) e^{−iωt} dt.
Example 6.5
Consider a damped SDOF system. Since its IRF is given by Equation 4.2a₂,
the integral in Equation 6.51 (whose calculation we leave to the reader)
leads to

µ_X = (µ_F/(mω_d)) ∫_0^∞ e^{−ζω_n α} sin(ω_d α) dα = µ_F/(mω_n²) = µ_F/k.   (6.52)

This result coincides with the rightmost term in 6.51 because for a
damped SDOF system, we have H(0) = 1/k (see Equation 4.81).
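A quick numerical check of Equation 6.52 (the SDOF parameters below are hypothetical, chosen only for illustration):

```python
import numpy as np

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# Hypothetical SDOF data (not from the text):
m, k, zeta, muF = 2.0, 8000.0, 0.05, 10.0
wn = np.sqrt(k / m)
wd = wn * np.sqrt(1.0 - zeta**2)

# IRF of the damped SDOF system: h(t) = e^{-zeta*wn*t} sin(wd*t)/(m*wd), t >= 0
t = np.linspace(0.0, 20.0, 800001)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)

muX = muF * trapz(h, t)      # Eq. 6.51: mu_X = mu_F * H(0)
print(muX, muF / k)          # both ~ 0.00125
```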
Assuming now, without loss of generality, that the input process has zero
mean (so that, by Equation 6.51, µ_X = 0), from Equation 6.50 we get
234 Advanced Mechanical Vibrations
X(t)X(t + τ) = ∫_{−∞}^{∞} F(t − α) h(α) dα ∫_{−∞}^{∞} F(t + τ − γ) h(γ) dγ
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(α) h(γ) F(t − α) F(t + τ − γ) dα dγ,

and taking expectations on both sides gives

R_XX(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(α) h(γ) R_FF(τ + α − γ) dα dγ,   (6.53)
σ_X² = R_XX(0) = ∫_{−∞}^{∞} h²(γ) dγ.   (6.54)
In the frequency domain, on the other hand, the double integral above turns
into a simpler relation. In fact, with the understanding that all integrals extend
from −∞ to ∞, we can Fourier transform both sides of Equation 6.53 to get

S_XX(ω) = (1/2π) ∫ {∫∫ h(α) h(γ) R_FF(τ + α − γ) dα dγ} e^{−iωτ} dτ
= ∫∫ h(α) h(γ) {(1/2π) ∫ R_FF(τ + α − γ) e^{−iωτ} dτ} dα dγ,

which, by introducing the variable y = τ + α − γ in the integral within curly
brackets, leads to

S_XX(ω) = ∫∫ h(α) e^{iωα} h(γ) e^{−iωγ} {(1/2π) ∫ R_FF(y) e^{−iωy} dy} dα dγ
= ∫ h(α) e^{iωα} dα ∫ h(γ) e^{−iωγ} dγ S_FF(ω) = H*(ω) H(ω) S_FF(ω),
Random vibrations 235
from which we obtain the relation between the response PSD and the exci-
tation PSD

S_XX(ω) = |H(ω)|² S_FF(ω).   (6.55)

At this point, observing that R_XX(τ) = F⁻¹{S_XX(ω)}, we can use Equation
6.55 to obtain the response variance as

σ_X² = R_XX(0) = ∫_{−∞}^{∞} S_XX(ω) dω = ∫_{−∞}^{∞} |H(ω)|² S_FF(ω) dω,   (6.56)

thus implying that if X(t) is a displacement response, Equations 6.44 give the
variances of the velocity Ẋ(t) and acceleration Ẍ(t).
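Equation 6.56 can be verified numerically for a white-noise input, for which the closed-form variance πS0/(kc) is quoted later in Equation 6.66; the parameter values below are illustrative assumptions:

```python
import numpy as np

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# SDOF response to white noise S_FF(w) = S0 (illustrative parameters):
m, k, zeta, S0 = 1.0, 400.0, 0.02, 1.0
wn = np.sqrt(k / m)
c = 2.0 * m * zeta * wn

w = np.linspace(-400.0, 400.0, 2000001)
H2 = 1.0 / ((k - m * w**2)**2 + (c * w)**2)   # |H(w)|^2, Eq. 6.60
var_num = S0 * trapz(H2, w)                   # Eq. 6.56 with S_FF = S0

print(var_num, np.pi * S0 / (k * c))          # in close agreement
```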
Other quantities of interest are the cross-relations between input and output.
Starting once again from Equation 6.50, we now have
F(t)X(t + τ) = ∫_{−∞}^{∞} F(t) F(t + τ − α) h(α) dα, so that taking expectations, we get

R_FX(τ) = ∫_{−∞}^{∞} h(α) R_FF(τ − α) dα,   (6.57)

and Fourier transformation of this relation (together with the analogous
calculation for R_XF(τ)) gives

S_FX(ω) = H(ω) S_FF(ω),   S_XF(ω) = H*(ω) S_FF(ω),   (6.58)

where the second relation follows from Equation 6.38₂ and from the fact
that S_FF(ω) is real. Note that Equations 6.58 – unlike Equation 6.55, which is a
real equation with no phase information – are complex equations with both
magnitude and phase information. In this respect, it can be observed that in
magnitude and phase information. In this respect, it can be observed that in
applications and measurements, they provide two methods to evaluate the
system’s FRF H (ω ); these are known as the H1 estimate and H 2 estimate of
the FRF and are given by
H₁(ω) = S_FX(ω)/S_FF(ω),   H₂(ω) = S_XX(ω)/S_XF(ω),   (6.59)
where the first relation follows directly from Equation 6.58₁, while the sec-
ond is obtained by first rewriting Equation 6.55 as S_XX(ω) = H(ω)H*(ω)S_FF(ω)
and then using Equation 6.58₂.
Remark 6.14
Ideally, Equations 6.59 should give the same result. Since, however, this is
not generally the case in actual measurements (typically because of extra-
neous 'noise' that contaminates the input or output signals, or both), one
can use the H₁/H₂ ratio as an indicator of the quality of the measurement
by defining the so-called coherence function

γ²(ω) = H₁(ω)/H₂(ω) = |S_FX(ω)|²/(S_FF(ω) S_XX(ω)),
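The behaviour of the coherence function can be illustrated with a small sketch in which uncorrelated noise of (assumed) PSD S_nn contaminates the measured output of an SDOF system; in that idealised case the definition above reduces to γ² = S_XX/(S_XX + S_nn), since the cross-PSD S_FX is unaffected by output noise:

```python
import numpy as np

# All values below are illustrative assumptions, not from the text.
m, k, zeta, S0, Snn = 1.0, 400.0, 0.05, 1.0, 1e-7
wn = np.sqrt(k / m)
c = 2.0 * m * zeta * wn

w = np.linspace(0.1, 60.0, 600)
H = 1.0 / (k - m * w**2 + 1j * c * w)
Sxx = np.abs(H)**2 * S0        # true output PSD (Eq. 6.55)
coh = Sxx / (Sxx + Snn)        # coherence with a noisy output measurement

print(coh.max())               # close to 1 near resonance (w ~ wn = 20 rad/s)
print(coh.min())               # drops where the response, and hence Sxx, is weak
```

The dips in γ² flag the frequency ranges where the measured FRF estimates are least trustworthy, which is exactly how the coherence is used in practice.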
6.5.2 SDOF-system response to
broadband excitation
From preceding chapters, we know that the FRF of an SDOF system with
parameters m, k, c is given by H(ω) = (k − mω² + iωc)⁻¹; consequently,

|H(ω)|² = 1/[(k − mω²)² + (ωc)²] = 1/{m²[(ω_n² − ω²)² + (2ζω_nω)²]},   (6.60)

and for a broadband excitation whose PSD varies slowly in the neighbourhood
of ω_n,

S_XX(ω) ≅ |H(ω)|² S_FF(ω_n) = S_FF(ω_n)/{m²[(ω_n² − ω²)² + (2ζω_nω)²]},   (6.61)
where the rightmost result can be obtained from tables of integrals (see the
following Remark 6.15).
Remark 6.15

A list of integrals of the form ∫_{−∞}^{∞} |H_k(ω)|² dω for k = 1, 2, …, 5 can be found
in Appendix 1 of Newland (1993).
If now we observe that c = 2mζω_n, m = k/ω_n² and |H(ω_n)|² = (4ζ²k²)⁻¹,
Equation 6.62 can be rewritten as

σ_X² ≅ 2πζω_n |H(ω_n)|² S_FF(ω_n),   (6.63)

which, when compared with Equation 6.62, shows that we can evaluate the
area under the curve |H(ω)|² (the integral ∫_{−∞}^{∞} |H(ω)|² dω) by calculating
the area of the rectangle whose horizontal and vertical sides, respectively,
are 2πζω_n and the value |H(ω_n)|². Note that these are two quantities that
can be readily obtained from a measured FRF graph.
σ_X²(t) = (R0/(m²ω_d²)) ∫_0^t e^{−2ζω_n γ} sin²(ω_d γ) dγ
= (R0/(m²ω_d³)) ∫_0^{ω_d t} e^{−2ζ(ω_n/ω_d) y} sin² y dy,   (6.64)

where the second integral is a slightly modified version of the first obtained by
introducing the variable y = ω_d γ. We did this because, with a = −2ζω_n/ω_d,
the r.h.s. of 6.64 is of the same form as the standard tabulated integral

∫ e^{ax} sin² x dx = (1/(a² + 4)) [e^{ax} sin x (a sin x − 2 cos x) + 2 ∫ e^{ax} dx],

so that, after some straightforward manipulations, we arrive at

σ_X²(t) = (R0/(4m²ζω_n³)) {1 − e^{−2ζω_n t} [1 + (ζω_n/ω_d) sin(2ω_d t)
+ (2ζ²ω_n²/ω_d²) sin²(ω_d t)]},   (6.65)
thus implying

σ_X² = lim_{t→∞} σ_X²(t) = R0/(4m²ζω_n³) = R0/(2kc) = πS0/(2m²ζω_n³) = πS0/(kc),   (6.66)

where in writing the last two relations we observed that the PSD of the (white-
noise) input process is the constant S0 = F{R0 δ(τ)} = R0/2π.
Note that – as it must be – Equation 6.66 gives precisely the steady-state
value of Equation 6.62.
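Equations 6.64 and 6.65 can be cross-checked by direct numerical integration; the parameter values below are assumptions made only for this sketch:

```python
import numpy as np

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# Illustrative SDOF/white-noise parameters:
m, zeta, wn, R0 = 1.0, 0.05, 10.0, 1.0
wd = wn * np.sqrt(1.0 - zeta**2)

def var_numeric(t, n=200001):
    # first integral in Eq. 6.64, evaluated numerically
    g = np.linspace(0.0, t, n)
    f = np.exp(-2.0 * zeta * wn * g) * np.sin(wd * g)**2
    return R0 / (m**2 * wd**2) * trapz(f, g)

def var_closed(t):
    # closed form of Eq. 6.65
    e = np.exp(-2.0 * zeta * wn * t)
    brace = 1.0 - e * (1.0 + (zeta * wn / wd) * np.sin(2.0 * wd * t)
                       + (2.0 * zeta**2 * wn**2 / wd**2) * np.sin(wd * t)**2)
    return R0 / (4.0 * m**2 * zeta * wn**3) * brace

for t in (0.5, 2.0, 10.0):
    print(var_numeric(t), var_closed(t))   # the two columns agree
```

For large t both expressions settle on the steady-state value R0/(4m²ζω_n³) of Equation 6.66.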
If, as an illustrative example, we consider an SDOF system with a natural
frequency ω_n = 10 rad/s, Figure 6.6 shows a graph of the ratio W(t) = σ_X²(t)/σ_X²
(i.e. the term within braces in Equation 6.65) for the two values of damp-
ing ζ = 0.05 and ζ = 0.10. From the figure, the effect of damping is rather
evident: the lower the damping, the slower the convergence of σ_X²(t) to its
steady-state value.
Remark 6.16
µ_X(t) = (µ_F/k) {1 − e^{−ζω_n t} [cos(ω_d t) + (ζω_n/ω_d) sin(ω_d t)]},   (6.67)
Given these important properties, it may now be useful to briefly recall
some basic mathematical facts. A random variable X has a Gaussian (or
normal) distribution with mean µ_X and variance σ_X² if its pdf is

f_X(x) = (1/(σ_X √(2π))) exp[−(x − µ_X)²/(2σ_X²)].   (6.68)
The idea is extended to the bivariate case, and the pdf of two jointly Gaussian
r.v.s X, Y is

f_XY(x, y) = [1/(2π σ₁σ₂ √(1 − ρ²))] e^{−g(x,y)},   (6.69a)

where

g(x, y) = [1/(2(1 − ρ²))] [(x − m₁)²/σ₁² + (y − m₂)²/σ₂² − 2ρ(x − m₁)(y − m₂)/(σ₁σ₂)],   (6.69b)

while in the general n-variate case the pdf of a Gaussian random vector x
with mean vector m and covariance matrix K is

f_X(x) = [1/((2π)^{n/2} √(det K))] exp[−(1/2)(x − m)ᵀ K⁻¹(x − m)],   (6.70)
K = [ σ₁²     ρσ₁σ₂ ]   ⇒   K⁻¹ = (1/(σ₁σ₂(1 − ρ²))) [ σ₂/σ₁    −ρ    ]
    [ ρσ₁σ₂   σ₂²   ]                                 [ −ρ      σ₁/σ₂ ].
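A quick numerical check of the 2 × 2 inverse quoted above (the σ and ρ values are arbitrary):

```python
import numpy as np

s1, s2, rho = 1.5, 0.8, 0.6                      # arbitrary illustrative values
K = np.array([[s1**2, rho * s1 * s2],
              [rho * s1 * s2, s2**2]])

# the inverse in the form quoted in the text
Kinv = (1.0 / (s1 * s2 * (1.0 - rho**2))) * np.array([[s2 / s1, -rho],
                                                      [-rho, s1 / s2]])
print(np.allclose(np.linalg.inv(K), Kinv))       # True
```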
X(t) = ∫_{−∞}^{∞} h(α) F(t − α) dα,   (6.71a)

where h(t) is the q × p IRF matrix whose ijth element, the IRF h_ij(t),
represents the output/response at point i due to a unit Dirac delta excitation
F_j(t) = δ(t) applied at point j. Then

X_j(t) = Σ_{k=1}^p ∫_{−∞}^{∞} h_jk(α) F_k(t − α) dα   (j = 1, …, q)   (6.71b)
is the jth element of the vector X(t) and gives the response at the
jth point to the p inputs. Now, using Equation 6.71a together with
X(t + τ) = ∫_{−∞}^{∞} h(γ) F(t + τ − γ) dγ, we can form the product X(t)Xᵀ(t + τ) and

R_XX(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(α) R_FF(τ + α − γ) hᵀ(γ) dα dγ,   (6.72)

whose Fourier transform gives the frequency-domain counterpart

S_XX(ω) = H*(ω) S_FF(ω) Hᵀ(ω),   (6.73)

where the asterisk denotes complex conjugation and H(ω) = 2π F{h(t)} is the
q × p FRF matrix whose j,kth element is H_jk(ω) = 2π F{h_jk(t)}.
Along the same line, it is now not difficult to obtain the cross-quantities
between input and output; in the time and frequency domains, respectively,
we get

R_FX(τ) = ∫_{−∞}^{∞} R_FF(τ − α) hᵀ(α) dα,   S_FX(ω) = S_FF(ω) Hᵀ(ω)
R_XF(τ) = ∫_{−∞}^{∞} h(α) R_FF(τ + α) dα,   S_XF(ω) = H*(ω) S_FF(ω),   (6.74)

where the 'FX' matrices have dimensions p × q, while the 'XF' matrices
are q × p. Also, note that by virtue of the properties of Equations 6.11 and
6.14, the matrix R_FF(τ) is such that Rᵀ_FF(−τ) = R_FF(τ), and so is R_XX. On the
other hand, the matrices S_FF(ω) and S_XX(ω) are Hermitian (i.e. such that
S_FF(ω) = S_FF^H(ω), or in terms of components, S_FjFk(ω) = S*_FkFj(ω)), where
S_FF^H(ω) denotes the conjugate transpose of S_FF(ω).
Remark 6.17
R_XjXk(τ) = Σ_{l,m=1}^p ∫∫ h_jl(α) h_km(γ) R_FlFm(τ + α − γ) dα dγ
S_XjXk(ω) = Σ_{l,m=1}^p H*_jl(ω) H_km(ω) S_FlFm(ω);   (6.75)

S_FjXk(ω) = Σ_{l=1}^p S_FjFl(ω) H_kl(ω),   S_XjFk(ω) = Σ_{l=1}^p H*_jl(ω) S_FlFk(ω);   (6.76)
iv. Note that in the special case of multiple inputs and only one output
(i.e. q = 1), the matrices h(t), H(ω) are 1 × p row vectors whose elements
are generally labelled by a single (input) index. So, for example, for
two inputs and one output, the matrix H(ω) is the 1 × 2 row vector
[H₁(ω)  H₂(ω)], S_FF(ω) is a 2 × 2 matrix, and we have only one out-
put PSD, given by

S_XX(ω) = |H₁(ω)|² S_F1F1(ω) + |H₂(ω)|² S_F2F2(ω)
+ H₁*(ω) H₂(ω) S_F1F2(ω) + H₂*(ω) H₁(ω) S_F2F1(ω).
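The q = 1 expansion above can be verified at a single frequency with a small sketch (the numerical values are illustrative assumptions, chosen only so that S_FF is Hermitian):

```python
import numpy as np

H = np.array([[1.0 - 0.5j, 0.3 + 0.2j]])     # row vector [H1, H2] at one frequency
SFF = np.array([[2.0, 0.4 + 0.1j],
                [0.4 - 0.1j, 1.5]])          # Hermitian input PSD matrix

# Matrix form H* S_FF H^T specialised to one output
SXX = (H.conj() @ SFF @ H.T).item()

# The same thing written out term by term, as in the expansion above
SXX_terms = (abs(H[0, 0])**2 * SFF[0, 0] + abs(H[0, 1])**2 * SFF[1, 1]
             + H[0, 0].conj() * H[0, 1] * SFF[0, 1]
             + H[0, 1].conj() * H[0, 0] * SFF[1, 0])

print(SXX, SXX_terms)   # equal, with (numerically) zero imaginary part
```

Since the two cross terms are complex conjugates of each other, the output PSD comes out real, as it must.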
A final result concerns the mean value of the output, on which nothing has
been said because at the beginning of this section we assumed the inputs to
have zero mean. When this is not the case, the input mean values µ_F1, …, µ_Fp
can be arranged in a p × 1 column vector m_F, and we can use Equation 6.71a
to obtain the output mean as

m_X = (∫_{−∞}^{∞} h(α) dα) m_F = H(0) m_F,   (6.77a)

µ_Xj = Σ_{l=1}^p µ_Fl ∫_{−∞}^{∞} h_jl(α) dα = Σ_{l=1}^p µ_Fl H_jl(0)   (j = 1, …, q).   (6.77b)
S_XX(ω) = P Ĥ*(ω) Pᵀ S_FF(ω) P Ĥ(ω) Pᵀ,   (6.79a)

where we took into account that Ĥᵀ(ω) = Ĥ(ω) and that P* = P. Also, using
Equation 4.59a, the PSD matrix S_XX(ω) can be expressed as

S_XX(ω) = [Σ_{l=1}^n Ĥ*_l(ω) p_l p_lᵀ] S_FF(ω) [Σ_{m=1}^n Ĥ_m(ω) p_m p_mᵀ]
= Σ_{l,m=1}^n Ĥ*_l(ω) p_l p_lᵀ S_FF(ω) Ĥ_m(ω) p_m p_mᵀ,   (6.79b)

S_XjXk(ω) = Σ_l Σ_m Σ_r Σ_s p_jl p_ml S_FmFr(ω) p_rs p_ks Ĥ*_l(ω) Ĥ_s(ω),   (6.80)
where all sums are from 1 to n and Ĥ_l(ω), Ĥ_s(ω) are the lth and sth modal
FRFs. In particular, if (a) the components of the input vector are white-
noise processes, and (b) the modes of the system under investigation
are lightly damped and well separated, then the major contribution to
the sum on the r.h.s. of Equation 6.80 will come from the square terms
of the form |Ĥ_l(ω)|² (l = 1, …, n) because, in comparison, the contribution
of the cross-terms is small.
Remark 6.18
E(XXᵀ) = R_XX(0) = ∫_{−∞}^{∞} P Ĥ*(ω) S_QQ(ω) Ĥ(ω) Pᵀ dω,

whose diagonal elements are E(X_j²), while the off-diagonal elements
are the cross-values E(X_j X_k) with j ≠ k.
S_ww(x₁, x₂, ω) = Σ_{l,m=1}^p H*(x₁, r_l, ω) H(x₂, r_m, ω) S_FF(r_l, r_m, ω),   (6.81)

which gives the cross-PSD of the responses at two generic points x₁, x₂ when
p localised inputs are applied at the points r₁, …, r_p (the auto-PSD for the
response at x is then the special case x₁ = x₂ = x). Then, for a distributed
excitation, we must pass to the limit p → ∞; by so doing, the sums become
spatial integrals over the beam length and we get

S_ww(x₁, x₂, ω) = ∫_0^L ∫_0^L H*(x₁, r₁, ω) H(x₂, r₂, ω) S_FF(r₁, r₂, ω) dr₁ dr₂.   (6.82)
If now we assume that the system's eigenpairs are known and recall from
Chapter 5 that the physical-coordinate FRFs are expressed in terms of the
modal FRFs by means of Equation 5.143, we have

H(x₁, r₁, ω) = Σ_{j=1}^∞ φ_j(x₁) φ_j(r₁) Ĥ_j(ω)
H(x₂, r₂, ω) = Σ_{k=1}^∞ φ_k(x₂) φ_k(r₂) Ĥ_k(ω),   (6.83)

where

G(ω) = ∫_0^L ∫_0^L Ĥ*_j(ω) Ĥ_k(ω) φ_j(r₁) φ_k(r₂) S_FF(r₁, r₂, ω) dr₁ dr₂.   (6.84b)
IRF ĥ_j(t), a first result that can be easily obtained is the mean value of the
beam displacement, in whose expression one takes into account the explicit
form of q_j(t). If, moreover, the excitation is WS stationary, then µ_F is time
independent and we get

µ_w(x) = Σ_{j=1}^∞ φ_j(x) ∫_{−∞}^{∞} ĥ_j(τ) dτ ∫_0^L µ_F(ξ) φ_j(ξ) dξ
= Σ_{j=1}^∞ φ_j(x) Ĥ_j(0) ∫_0^L µ_F(ξ) φ_j(ξ) dξ,   (6.86)

thus showing that the mean response is also time independent. In particular,
if µ_F(x) = 0, then µ_w(x) = 0. Now, assuming this to be the case and by also
assuming the excitation to be WS stationary in time, the cross-correlation
function R_qjqk(τ) between q_j(t) and q_k(t + τ) is given by

R_qjqk(τ) = ∫_0^L ∫_0^L φ_j(r₁) φ_k(r₂) R_FF(r₁, r₂, τ) dr₁ dr₂,   (6.87)

while for the response correlation we have

R_ww(x₁, x₂, τ) = E[w(x₁, t) w(x₂, t + τ)]
= Σ_{j,k=1}^∞ E[y_j(t) y_k(t + τ)] φ_j(x₁) φ_k(x₂) = Σ_{j,k=1}^∞ R_yjyk(τ) φ_j(x₁) φ_k(x₂),   (6.88)
where, observing that the modal response is y_j(t) = ∫_{−∞}^{∞} ĥ_j(α) q_j(t − α) dα,
we have

R_yjyk(τ) = E[y_j(t) y_k(t + τ)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ĥ_j(α) ĥ_k(γ) R_qjqk(τ + α − γ) dα dγ.   (6.89)
Substituting Equation 6.89 into 6.88 then gives

R_ww(x₁, x₂, τ) = Σ_{j,k=1}^∞ φ_j(x₁) φ_k(x₂) ∫_{−∞}^{∞} ∫_{−∞}^{∞} ĥ_j(α) ĥ_k(γ) R_qjqk(τ + α − γ) dα dγ,   (6.90)

where

g(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_0^L ∫_0^L ĥ_j(α) ĥ_k(γ) φ_j(r₁) φ_k(r₂) R_FF(r₁, r₂, τ + α − γ) dr₁ dr₂ dα dγ,   (6.91b)
and it is now not difficult to show that the Fourier transform of the correla-
tion of Equation 6.91 leads exactly to the PSD of Equation 6.84.
Given this result, it follows that if, in a certain problem, we know the
PSD of the excitation, the response correlation is obtained as
R_ww(x₁, x₂, τ) = ∫_{−∞}^{∞} S_ww(x₁, x₂, ω) e^{iωτ} dω
= Σ_{j,k=1}^∞ φ_j(x₁) φ_k(x₂) ∫_{−∞}^{∞} G(ω) e^{iωτ} dω,   (6.92)

where G(ω) is given by Equation 6.84b. Also, note that this same correlation
can be obtained in terms of the PSDs of the modal excitations as

R_ww(x₁, x₂, τ) = Σ_{j,k=1}^∞ φ_j(x₁) φ_k(x₂) ∫_{−∞}^{∞} Ĥ*_j(ω) Ĥ_k(ω) S_qjqk(ω) e^{iωτ} dω,   (6.93)
which follows from the fact that Sww ( x1 , x2 , ω ) is the Fourier transform of
Equation 6.90.
N_a⁺(T) = ν_a⁺ T   (6.94)

∫_0^∞ ∫_{a−ẋdt}^{a} f_XẊ(x, ẋ) dx dẋ ≅ ∫_0^∞ f_XẊ(a, ẋ) ẋ dt dẋ = dt ∫_0^∞ f_XẊ(a, ẋ) ẋ dẋ,   (6.95)

ν_a⁺ = ∫_0^∞ f_XẊ(a, ẋ) ẋ dẋ.   (6.96)
Remark 6.19
Equation 6.96 is a general result that holds for any probability distribu-
tion. In the special case of a Gaussian process with joint pdf

f_XẊ(x, ẋ) = (1/(2π σ_X σ_Ẋ)) exp[−x²/(2σ_X²) − ẋ²/(2σ_Ẋ²)],   (6.97)

carrying out the integral of Equation 6.96 gives

ν_a⁺ = (1/2π)(σ_Ẋ/σ_X) exp[−a²/(2σ_X²)].   (6.98)
Remark 6.20
Using the results above, we can now go back to Section 6.5.2 and evaluate
ν_a⁺ in the special case in which X(t) is the output of an SDOF system sub-
jected to a Gaussian white-noise excitation with constant PSD S_FF(ω) = S0.
The desired result can be readily obtained if we recall the relations
σ_X² = πS0/(kc) and ω_n ≅ σ_Ẋ/σ_X; we get

ν_a⁺ = (ω_n/2π) exp[−a²kc/(2πS0)],   (6.99)
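As a sanity check, the general rate 6.96 with the Gaussian joint pdf 6.97 can be integrated numerically and compared with the closed form (σ_Ẋ/(2πσ_X)) exp(−a²/2σ_X²), of which Equation 6.99 is the SDOF special case; the σ values and level a below are assumptions:

```python
import numpy as np

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

sx, sv, a = 1.0, 12.0, 1.5                   # illustrative sigma_X, sigma_Xdot, level

v = np.linspace(0.0, 200.0, 400001)          # xdot >= 0 (up-crossings only)
f_av = (np.exp(-a**2 / (2.0 * sx**2) - v**2 / (2.0 * sv**2))
        / (2.0 * np.pi * sx * sv))           # f(a, xdot), Eq. 6.97
nu_num = trapz(f_av * v, v)                  # Eq. 6.96

nu_closed = (sv / (2.0 * np.pi * sx)) * np.exp(-a**2 / (2.0 * sx**2))
print(nu_num, nu_closed)                     # the two rates agree
```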
P(peak > a) = ∫_a^∞ f_P(α) dα.   (6.100)
At this point, recalling once again that the time histories of narrowband
processes are typically rather well behaved, we make the following emi-
nently reasonable assumptions: (a) each up-crossing of x = a results in a
peak with amplitude greater than a and (b) each up-crossing of x = 0 cor-
responds to one 'cycle' of our smoothly varying time history. Under these
assumptions, it follows that the ratio ν_a⁺/ν_0⁺ represents the favourable
fraction of peaks greater than a and that, consequently, we can write the
equality ∫_a^∞ f_P(α) dα = ν_a⁺/ν_0⁺. Then, differentiating both sides with respect
to a gives the first result

f_P(a) = −(1/ν_0⁺) dν_a⁺/da.   (6.101)
f_P(a) = (a/σ_X²) exp[−a²/(2σ_X²)],   (6.102)

Alternatively, we can use the equality ∫_a^∞ f_P(α) dα = ν_a⁺/ν_0⁺ and, owing to
Equation 6.100, obtain the probabilities

P(peak > a) = exp(−a²/(2σ_X²))
P(peak ≤ a) = F_P(a) = 1 − exp(−a²/(2σ_X²)),   (6.103)

respectively, where F_P(a) is the Rayleigh PDF corresponding to the pdf f_P(a).
If the probability distribution of the process X(t) is not Gaussian, it turns
out that the distribution of the peaks may differ significantly from the Rayleigh
distribution, but a possible generalisation to non-Gaussian cases can
be obtained as follows. If we call a₀ the median of the Rayleigh distribu-
tion, we can use Equation 6.103₂ together with the definition of median –
that is, the relation F_P(a₀) = 1/2 – to obtain σ_X² = a₀²/(2 ln 2). Substitution
of this result into F_P(a) leads to the alternative form of the Rayleigh PDF
F_P(a/a₀) = 1 − exp[−(a/a₀)² ln 2], which, in turn, can be seen as a special
case of the more general one-parameter distribution

F_P(a/a₀) = 1 − exp[−(a/a₀)^k ln 2],   (6.104a)
f_P(a/a₀) = k ln 2 (a/a₀)^{k−1} exp[−(a/a₀)^k ln 2],   (6.104b)

whose pdf is sketched in Figure 6.7 for three different values of k and where the
case k = 2 corresponds to the Rayleigh pdf.
Now, since in probability theory the distribution with PDF and pdf given,
respectively, by Equations 6.104a and 6.104b is known as the Weibull distri-
bution (see the following Remark 6.21), we can conclude that the Weibull
distribution provides the desired generalisation to the case of non-Gaussian
narrowband processes.
Remark 6.21
The Weibull distribution, in its general form, has two parameters, while
the Rayleigh distribution has only one. Calling A, B the two parameters, a
r.v. Y is said to have a Weibull distribution if its PDF and pdf (both defined
for y ≥ 0 and zero for y < 0) are

F_Y(y) = 1 − exp(−Ay^B),   f_Y(y) = ABy^{B−1} exp(−Ay^B),   (6.105)

while the Rayleigh distribution with parameter R has

F_Y(y) = 1 − exp(−y²/(2R²)),   f_Y(y) = (y/R²) exp(−y²/(2R²)),   (6.106)

where it is not difficult to show that the mean and variance of Y are
E(Y) = R√(π/2) ≅ 1.25R and σ_Y² = (R²/2)(4 − π) ≅ 0.43R². Also, from
Equations 6.105 and 6.106, it can be readily seen that the Rayleigh distri-
bution is the special case of the Weibull distribution obtained for the choice
of parameters A = 1/(2R²) and B = 2. Moreover, it should be noticed that for
B = 1, the Weibull distribution becomes the so-called exponential distribu-
tion f_Y(y) = A exp(−Ay).
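The quoted Rayleigh mean and variance are easy to check by direct numerical integration of the pdf in Equation 6.106 (the value of R below is an arbitrary assumption):

```python
import numpy as np

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

R = 2.0                                          # arbitrary assumed parameter
y = np.linspace(0.0, 12.0 * R, 240001)
f = (y / R**2) * np.exp(-y**2 / (2.0 * R**2))    # Rayleigh pdf, Eq. 6.106

mean = trapz(y * f, y)
var = trapz((y - mean)**2 * f, y)
print(mean / R, var / R**2)                      # ~1.25 and ~0.43, as stated
```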
At this point, we can obtain a final important result by observing that the
developments above provide a lower bound – which we will call M – on the
amplitude of the highest peak that may be expected to occur within a time
interval T. If, in fact, we call M the (as yet unknown) threshold level that,
on average, is exceeded only once in time T, then ν_M⁺ T = 1 and therefore
ν_M⁺ = 1/T. For a narrowband process, consequently, the favourable fraction
of peaks greater than M is ν_M⁺/ν_0⁺ = 1/(ν_0⁺T) and this, we recall, is the probabil-
ity P(peak > M). Since, however, under the assumption that the peaks have a
Weibull probability distribution we have P(peak > M) = exp[−(M/a₀)^k ln 2],
the equality 1/(ν_0⁺T) = exp[−(M/a₀)^k ln 2] leads to

M/a₀ = [ln(ν_0⁺T)/ln 2]^{1/k},   (6.107)
Remark 6.22
ln(−ln P) = k ln a − k ln a₀ + ln(ln 2),   (6.108)

so that k is the slope of the graph. On the other hand, the median peak
height a₀ can be obtained from the zero intercept of the graph, which will
occur at a point a₁ such that 0 = ln[(a₁/a₀)^k ln 2], from which it follows

1 = (a₁/a₀)^k ln 2   ⇒   a₀ = a₁ (ln 2)^{1/k}.   (6.109)
Appendix A
On matrices and linear spaces
A.1 MATRICES
c_ij = Σ_{k=1}^n a_ik b_kj   (1 ≤ i ≤ m, 1 ≤ j ≤ p)   (A.1)

AB = [ 2  −1  3 ] [ 1  −1 ]   [ 14  17 ]
     [ 0   1  4 ] [ 3   2 ] = [ 23  30 ] = C
                  [ 5   7 ]
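The product above can be reproduced entry by entry, exactly as prescribed by Equation A.1 (a sketch in Python, with numpy used only for storage and comparison):

```python
import numpy as np

A = np.array([[2, -1, 3],
              [0, 1, 4]])      # 2 x 3
B = np.array([[1, -1],
              [3, 2],
              [5, 7]])         # 3 x 2

# c_ij = sum_k a_ik b_kj (Eq. A.1), written out explicitly
C = np.array([[sum(A[i, k] * B[k, j] for k in range(3)) for j in range(2)]
              for i in range(2)])

print(C)                       # [[14 17], [23 30]], matching A @ B
```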
There is, however, no reversing for complex conjugation and (AB)* = A* B*.
Other important definitions for square matrices are as follows:
In definitions 3 and 4, I is the unit matrix, that is, the matrix whose only
nonzero elements are ones lying on the main diagonal; for example, the
2 × 2 and 3 × 3 unit matrices are

I₂ = [ 1  0 ]    I₃ = [ 1  0  0 ]
     [ 0  1 ],        [ 0  1  0 ]
                      [ 0  0  1 ],
which, for brevity, can be denoted by the symbols diag(1,1) and diag(1,1,1).
The matrix I is a special case of diagonal matrix, where this name indicates
all (square) matrices whose only nonzero elements are on the main diag-
onal. On the other hand, one speaks of upper-(lower)-triangular matrix
if aij = 0 for j < i ( j > i ) and strictly upper-(lower)-triangular if aij = 0 for
j ≤ i ( j ≥ i ). Clearly, the transpose of an upper-triangular matrix is lower-
triangular and vice versa.
A square matrix is called normal if it commutes with its Hermitian-
adjoint, that is if AA H = A H A, and the reader can easily check the following
properties:
where

A₁₁ = [ a₁₁  a₁₂ ]    A₁₂ = [ a₁₃  a₁₄  a₁₅ ]
      [ a₂₁  a₂₂ ],         [ a₂₃  a₂₄  a₂₅ ],

A₂₁ = [ a₃₁  a₃₂ ],   A₂₂ = [ a₃₃  a₃₄  a₃₅ ].
nal elements (n elements if the matrix is n × n), that is, trA = Σ_{i=1}^n a_ii. The
determinant, on the other hand, involves all the matrix elements and has a
more complicated expression. It is denoted by det A or |A| and can be calcu-
lated by the so-called Laplace expansion

det A = Σ_{j=1}^n (−1)^{i+j} a_ij det A_ij = Σ_{i=1}^n (−1)^{i+j} a_ij det A_ij,   (A.4)

det [ a₁₁  a₁₂ ] = a₁₁a₂₂ − a₁₂a₂₁
    [ a₂₁  a₂₂ ]
and so on. Note that three consequences of the expansions A.4 are that (a)
the determinant of a diagonal or triangular matrix is given by the product
of its diagonal elements, (b) det(Aᵀ) = det A and (c) det(A^H) = (det A)*. A
very useful property of determinants is the product rule det(AB) = (det A)(det B).
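The Laplace expansion A.4 translates directly into a recursive routine (deliberately naive, O(n!), fine for small matrices; the test matrix is an arbitrary example):

```python
import numpy as np

def det_laplace(A):
    # Laplace expansion along the first row (Eq. A.4 with i = 1)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # submatrix A_1j
        total += (-1)**j * A[0, j] * det_laplace(minor)         # sign (-1)^(1+j)
    return total

M = np.array([[2.0, -1.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 7.0, 1.0]])
print(det_laplace(M), np.linalg.det(M))   # both ~ -89
```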
Remark A.1
Proposition A.1
A1. x + y = y + x
A2. (x + y) + z = x + (y + z)
A3. There exists a unique vector 0 ∈V such that x + 0 = x
A4. There exists a unique vector − x ∈V such that x + (− x) = 0.
M1. a (x + y) = a x + a y
M2. (a + b) x = a x + b x
M3. (ab) x = a(b x)
M4. 1 x = x ,
Example A.1
For any positive integer n, the set of all n-tuples of real (or com-
plex) numbers (x₁, …, xₙ) forms a vector space on the real (com-
plex) field when we define addition and scalar multiplication term
by term, that is, (x₁, …, xₙ) + (y₁, …, yₙ) ≡ (x₁ + y₁, …, xₙ + yₙ) and
a(x₁, …, xₙ) ≡ (ax₁, …, axₙ), respectively. Depending on whether F = R
or F = C, the vector spaces thus obtained are the well-known spaces Rⁿ
or Cⁿ, respectively.
Example A.2
The reader is invited to verify that the set Mm × n (F) of all m × n matrices
with elements in F is a vector space when addition and multiplication
by a scalar are defined as in Section A.1.
x = Σ_{i=1}^n x_i u_i   (A.7)

for some set of scalars x₁, …, xₙ ∈ F. The set of vectors is called a basis of
V if (a) the set spans V and (b) the set is linearly independent. In general, a
vector space V has many bases, but the number of elements that form any
one of these bases is defined without ambiguity. In fact, it can be shown
(Halmos 2017) that all bases of V have the same number of elements; this
number is called the dimension of V and denoted by dim V. Equivalently,
but with different wording, V is called n-dimensional – that is, dim V = n – if
it is possible to find (in V) n linearly independent elements but any set of
n + 1 vectors of V is linearly dependent. When, on the other hand, we can
find n linearly independent elements for every n = 1, 2, …, V is said to be
infinite-dimensional.
Example A.3
The standard basis in the familiar (three-dimensional) space R3 is
given by the three vectors i = (1,0,0), j = (0,1,0), k = (0,0,1), and the vec-
tor x = 2 i + 3 k has components (2,0,3) relative to this basis. If now we
consider the set of vectors e1 = i + j, e2 = i + 2k , e3 = i + j + k , it is left to
the reader to show that this set is a basis and that the components of x
relative to it are (1, 2, −1).
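Example A.3 can be checked numerically: finding the components of x relative to the new basis amounts to solving a linear system whose coefficient matrix has the new basis vectors as its columns:

```python
import numpy as np

# Columns of E are the basis vectors e1 = i + j, e2 = i + 2k, e3 = i + j + k
E = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 2.0, 1.0]])

x = np.array([2.0, 0.0, 3.0])    # x = 2i + 3k in the standard basis
comps = np.linalg.solve(E, x)    # components of x relative to e1, e2, e3
print(comps)                     # [ 1.  2. -1.], as found in the example
```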
x = Σ_{i=1}^n x_i u_i,   x = Σ_{j=1}^n x̂_j v_j,   v_j = Σ_{k=1}^n c_kj u_k,   (A.8)

where the scalars x₁, …, xₙ and x̂₁, …, x̂ₙ are the components of x relative
to the u- and v-basis, respectively, while the n² scalars c_kj in Equation
A.8₃ (which, it should be noted, is a set of n equations, one for each j with
j = 1, 2, …, n) specify the change of basis. Then, it is not difficult to see that
substitution of Equation A.8₃ into A.8₂ leads to x = Σ_k (Σ_j c_kj x̂_j) u_k, which,
in turn, can be compared with Equation A.8₁ to give

x_k = Σ_{j=1}^n c_kj x̂_j,   [x]_u = C[x]_v,   (A.9)

where the first of Equations A.9 is understood to hold for k = 1, …, n (and is
therefore a set of n equations), while the second comprises all these n equa-
tions in a single matrix relation when one introduces the matrices
Remark A.2
x̂_j = Σ_i ĉ_ji x_i,   [x]_v = Ĉ[x]_u,   (A.11)

where, as in A.9, the second relation is the matrix version of the first. But
then Equations A.9 and A.11 together give ĈC = CĈ = I, which in turn
proves that Ĉ = C⁻¹ as soon as we recall from Section A.1.1 that the inverse
of a matrix – when it exists – is unique. It is now left to the reader to show
that in terms of components the relations ĈC = CĈ = I read

Σ_j ĉ_ij c_jk = Σ_j c_kj ĉ_ji = δ_ik,   (A.12)
A(x + y) = Ax + Ay,   A(ax) = a(Ax)   (A.13)
for all x, y ∈ V and all a ∈ F is called a linear operator (or linear transfor-
mation) from V to U, and it is not difficult to show that a linear operator is
completely specified by its action on a basis u₁, …, u_m of its domain V. The
set of all linear operators from V to U becomes a linear (vector) space itself –
and precisely an nm-dimensional linear space, often denoted by L(V, U) –
if one defines the operations of addition and multiplication by scalars as
(A + B)x = Ax + Bx and (aA)x = a(Ax), respectively.
In particular, if U = V , the linear space of operators from V to itself is
denoted by L(V ). For these operators, one of the most important and far-
reaching aspects of the theory concerns the solution of the so-called eigen-
value problem A x = λ x, where x is a vector of V and λ is a scalar. We have
encountered eigenvalue problems repeatedly in the course of the main text,
and in this appendix, we will discuss the subject in more detail in Section A.4.
Particularly important among the set of linear operators on a vector space
are isomorphisms. A linear operator is an isomorphism (from the Greek
meaning ‘same structure’) if it is both injective (or one-to-one) and surjec-
tive (or onto), and two isomorphic linear spaces – as far as their algebraic
structure is concerned – can be considered as the same space for all practi-
cal purposes, the only ‘formal’ difference being in the name and nature of
their elements. In this regard, a fundamental theorem (see, for example,
Halmos (2017)) states that two finite-dimensional vector spaces over the
same scalar field are isomorphic if and only if they have the same dimen-
sion, and a corollary to this theorem is that any n-dimensional real vector
space is isomorphic to Rn and any n-dimensional complex vector space is
isomorphic to C n .
More specifically, if V is an n-dimensional space over R (or C) and
u₁, …, uₙ is a basis of V, the mapping that to each vector x ∈ V associates
the coordinate vector [x]_u, that is, the n-tuple of components (x₁, …, xₙ), is
an isomorphism from V to Rn (or C n ). The immediate consequence of this
fact is that the expansion (A.7) in terms of basis vectors is unique and that
choosing a basis of V allows us to multiply vectors by scalars and/or sum
vectors by carrying out all the necessary calculations in terms of compo-
nents, that is by operating in Rn (or C n ) according to the operations defined
in Example A.1.
If A : V → U is an isomorphism, then it is invertible (or nonsingular),
which means that there exists a (unique) linear operator B : U → V such
that B (A x) = x for all x ∈V and A (By) = y for all y ∈U . The operator B is
denoted by A−1 and is an isomorphism itself.
the notion of inner product, where an inner (or scalar) product is a map-
ping ⟨•|•⟩ from V × V (the set of ordered pairs of elements of V) to the field
F satisfying the defining properties

IP1. ⟨x|y⟩ = ⟨y|x⟩*
IP2. ⟨x|ay₁ + by₂⟩ = a⟨x|y₁⟩ + b⟨x|y₂⟩
IP3. ⟨x|x⟩ ≥ 0, and ⟨x|x⟩ = 0 if and only if x = 0,

where the asterisk denotes complex conjugation and can be ignored for
vector spaces on the real field. It can be shown (see, for example, Horn
and Johnson 1993) that an inner product satisfies the Cauchy–Schwarz
inequality: for all x, y ∈ V

|⟨x|y⟩|² ≤ ⟨x|x⟩⟨y|y⟩   (A.14)
and the equal sign holds if and only if x, y are linearly dependent. Moreover,
using the inner product, we can define the length ‖x‖ of a vector (or norm,
in more mathematical terminology) and the angle θ between two nonzero
vectors as

‖x‖ = √⟨x|x⟩,   cos θ = ⟨x|y⟩/(‖x‖ ‖y‖),   (A.15)

where 0 ≤ θ ≤ π/2, and it can be shown that these quantities satisfy the usual
properties of lengths and angles. So, for example, in the case of the vector
spaces Rⁿ or Cⁿ of Example A.1, it is well known that the 'standard' inner
product of two elements x = (x₁, …, xₙ), y = (y₁, …, yₙ) is defined as

⟨x|y⟩_Rⁿ ≡ Σ_{i=1}^n x_i y_i,   ⟨x|y⟩_Cⁿ ≡ Σ_{i=1}^n x_i* y_i,   (A.16a)

‖x‖ = (Σ_i |x_i|²)^{1/2}.   (A.16b)
Remark A.3
in the first slot (so that conjugate linearity turns up in the second slot).
In real spaces, this is irrelevant – in this case, in fact, property IP1
reads ⟨x|y⟩ = ⟨y|x⟩, and we have linearity in both slots – but it is not so
in complex spaces. With linearity in the first slot, for example, instead
of Equation A.18a below, we have x_j = ⟨x|u_j⟩;
ii. A second point we want to make here is that, from a theoretical stand-
point, the concept of norm is more ‘primitive’ than the concept of
inner product. In fact, a vector norm can be axiomatically defined as
a mapping • : V → R satisfying the properties:
Two vectors are said to be orthogonal if their inner product is zero, that
is, ⟨x|y⟩ = 0, and in this case, one often writes x ⊥ y. In particular, a basis
u₁, …, uₙ is called orthonormal if the basis vectors have unit length and are
mutually orthogonal, that is, if ⟨u_i|u_j⟩ = δ_ij. Moreover, relative to an
orthonormal basis, the components of a vector x are given by x_j = ⟨u_j|x⟩
(Equation A.18a), and for the inner product of two vectors we have

⟨x|y⟩_V = Σ_{i=1}^n x_i* y_i = Σ_{i=1}^n ⟨x|u_i⟩⟨u_i|y⟩ = [x]_u^H [y]_u = (x₁* … xₙ*)(y₁, …, yₙ)ᵀ.   (A.19)

In fact, the first equality follows from

⟨x|y⟩_V = ⟨Σ_i x_i u_i | Σ_j y_j u_j⟩ = Σ_{i,j} x_i* y_j ⟨u_i|u_j⟩ = Σ_{i,j} x_i* y_j δ_ij = Σ_{i=1}^n x_i* y_i,

while we used Equation A.18a in writing the second equality (and clearly
[x]_u, [y]_u are the coordinate vectors formed with the components x_i, y_i).
Note that in writing Equation A.19, we assumed V to be a complex space;
however, since in a real space complex conjugation can be ignored and
[x]_u^H is replaced by [x]_u^T, it is immediate to see that in this case, we have
⟨x|y⟩_V = Σ_i x_i y_i = [x]_u^T [y]_u.
Finally, a third point concerns the change-of-basis matrix C, which we
know from Section A.2 to be nonsingular. Since, however, orthonormal
bases are special, we may ask if the change-of-basis matrix between ortho-
normal bases – besides being obviously nonsingular – is also special in some
way. The answer is affirmative, and it turns out that C is unitary in complex
spaces and orthogonal in real spaces. In fact, if u₁, …, uₙ and v₁, …, vₙ are
two orthonormal bases of the complex space V, then we have ⟨v_j|v_k⟩ = δ_jk
and we can use Equation A.8₃ together with the properties of the inner
product and the orthonormality of the u-basis to write

δ_jk = ⟨v_j|v_k⟩ = Σ_i Σ_m c_ij* c_mk ⟨u_i|u_m⟩ = Σ_{i,m} c_ij* c_mk δ_im = Σ_i c_ij* c_ik,
⟨x|y⟩ = Σ_{i,j} x_i* y_j ⟨u_i|u_j⟩ = Σ_{i,j} x_i* u_ij y_j = [x]_u^H U [y]_u,   (A.20)

where in the last expression U is the n × n matrix of the u_ij. Since, however, the
choice of another nonorthonormal basis v₁, …, vₙ leads to ⟨x|y⟩ = [x]_v^H V [y]_v,
it turns out that the result of the inner product seems to depend on the basis.
The fact that it is not so can be shown by using Equation A.8₃ and writing

v_ij = ⟨v_i|v_j⟩ = Σ_{k,m} c_ki* c_mj ⟨u_k|u_m⟩ = Σ_{k,m} c_ki* u_km c_mj   (i, j = 1, …, n),

⟨x|y⟩_V = [x]_v^H V [y]_v = [x]_v^H C^H U C [y]_v,   (A.21)

which, recalling that [x]_u = C[x]_v, gives

[x]_v^H V [y]_v = [x]_u^H U [y]_u,   (A.22)
thus fully justifying the fact that the inner product is used to define metric
quantities such as lengths and angles, that is quantities that cannot depend
on which basis – orthonormal or not – we may decide to choose.
Remark A.4
\[
[y]_u = [T]_u [x]_u , \qquad [y]_v = [T]_v [x]_v , \tag{A.24}
\]
where the second equation follows immediately from the first. But since, by
definition, two square matrices A, B are said to be similar if there exists a
nonsingular matrix S – called the similarity matrix – such that B = S−1AS
(which implies A = SBS−1), Equations A.25 show that (a) the matrices
[ T ]u , [ T ]v are similar and (b) the change-of-basis matrix C plays the role of
the similarity matrix.
At this point, the only piece missing is the explicit determination of the
elements of $[T]_u$ and $[T]_v$, that is the two sets of scalars $t_{ij}$ and $\hat t_{ij}$ ($n^2$ scalars
for each set). This is a necessary step because we can actually use Equations
A.25 only if we know – together with the elements of C and/or $\hat C = C^{-1}$ – at
least either one of the two sets. These scalars are given by the expansions of
the transformed vectors of a basis in terms of the same basis, that is
\[
T u_j = \sum_k t_{kj} u_k , \qquad T v_j = \sum_k \hat t_{kj} v_k \qquad (j = 1, \ldots, n). \tag{A.26}
\]
Remark A.5
If we recall that the elements of C, $\hat C$ are given by the expansions of the vectors of one basis in terms of the other basis, that is
\[
v_j = \sum_k c_{kj} u_k , \qquad u_j = \sum_k \hat c_{kj} v_k \qquad (j = 1, \ldots, n), \tag{A.28}
\]
we can use Equations A.26 and A.28 to write Equations A.25 in terms of
components; in fact, starting from Equation A.28₁, we have the chain of
equalities
\[
T v_j = \sum_k c_{kj} \left( T u_k \right)
= \sum_k c_{kj} \sum_i t_{ik} u_i
= \sum_{k,i} c_{kj} t_{ik} \sum_m \hat c_{mi} v_m
= \sum_m \Big( \sum_{i,k} \hat c_{mi}\, t_{ik}\, c_{kj} \Big) v_m ,
\]
so that, comparing with the second of Equations A.26, we get
\[
\hat t_{mj} = \sum_{i,k=1}^{n} \hat c_{mi}\, t_{ik}\, c_{kj} \qquad (m, j = 1, \ldots, n), \tag{A.29}
\]
Example A.4
As a simple example in $\mathbb{R}^2$, consider the two bases
\[
u_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} , \quad
u_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} ; \qquad
v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} , \quad
v_2 = \begin{pmatrix} -2 \\ 0 \end{pmatrix} ,
\]
and the operator
\[
T \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 \\ x_2 \end{pmatrix} ;
\]
the reader is invited to use Equations A.26 and check the easy calculations that give
\[
[T]_u = \begin{pmatrix} t_{11} & t_{12} \\ t_{21} & t_{22} \end{pmatrix}
= \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} , \qquad
[T]_v = \begin{pmatrix} \hat t_{11} & \hat t_{12} \\ \hat t_{21} & \hat t_{22} \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ -1/2 & 1 \end{pmatrix} .
\]
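The calculations of this example are easy to verify numerically; a minimal NumPy sketch (the variable names are ours) computes $[T]_v$ via the similarity transformation $C^{-1}[T]_u C$:

```python
import numpy as np

# [T]_u for T(x1, x2) = (x1 + x2, x2) relative to the standard basis
T_u = np.array([[1.0, 1.0],
                [0.0, 1.0]])

# Change-of-basis matrix C: its columns are v1, v2 expressed in the u-basis
C = np.array([[1.0, -2.0],
              [1.0,  0.0]])

# Similarity: [T]_v = C^{-1} [T]_u C
T_v = np.linalg.inv(C) @ T_u @ C
print(T_v)   # [[ 1.   0. ]
             #  [-0.5  1. ]]
```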
If, in addition to the linear structure, V has an inner product, we can con-
sider the special case in which the two bases are orthonormal. If V is a real
vector space, we know from Section A.2.2 that the change-of-basis matrix
is orthogonal, that is CT C = I. This implies that two orthonormal bases lead
to the special form of similarity
\[
[T]_v = C^T [T]_u C , \qquad [T]_u = C [T]_v C^T , \tag{A.30}
\]
while, if V is a complex space, the change-of-basis matrix between orthonormal bases is unitary ($C^H C = I$) and we have
\[
[T]_v = C^H [T]_u C , \qquad [T]_u = C [T]_v C^H . \tag{A.31}
\]
A particularly important class of linear operators consists of those satisfying
\[
\langle Tx|y\rangle = \langle x|Ty\rangle \tag{A.32}
\]
for all $x, y \in V$. In real vector spaces, these operators are called symmetric;
in complex spaces, Hermitian or self-adjoint.
Then, if u1 ,, u n is an orthonormal basis of the real space V and we call
xi , yi ( i = 1,, n ), respectively, the components of x , y relative to this basis,
we have
\[
Tx = \sum_{i,k} x_i t_{ki} u_k , \qquad Ty = \sum_{j,m} y_j t_{mj} u_m , \tag{A.33}
\]
where the n2 scalars t ij are the elements of the matrix [ T ]u that represents
T relative to the u-basis. Using Equations A.33, the two sides of Equation
A.32 are expressed in terms of components as
\[
\begin{aligned}
\langle Tx|y\rangle &= \sum_{i,k,j} x_i t_{ki} y_j \langle u_k|u_j\rangle
= \sum_{i,k,j} x_i t_{ki} y_j \delta_{kj}
= \sum_{i,j} x_i t_{ji} y_j , \\
\langle x|Ty\rangle &= \sum_{i,j,m} y_j t_{mj} x_i \langle u_i|u_m\rangle
= \sum_{i,j,m} y_j t_{mj} x_i \delta_{im}
= \sum_{i,j} x_i t_{ij} y_j ,
\end{aligned} \tag{A.34}
\]
respectively. Since these two terms must be equal, the relations A.34
imply t j i = t ij, meaning that the matrix [ T ]u is symmetric. Moreover,
if v1 ,, v n is another orthonormal basis and C is the change-of-
basis matrix from the u-basis to the v-basis, Equation A.30₁ gives
\[
[T]_v^T = \left( C^T [T]_u C \right)^T = C^T [T]_u^T C = C^T [T]_u C = [T]_v ,
\]
which means that $[T]_v$
is also symmetric. A similar line of reasoning applies if V is a complex
space; in this case – and the reader is invited to fill in the details – we get
$[T]_u^H = [T]_u$ and then, from Equation A.31₁, $[T]_v^H = [T]_v$. The conclusion is
that, relative to orthonormal bases, symmetric operators are represented by
symmetric matrices and Hermitian operators are represented by Hermitian
matrices.
Remark A.6
In the light of the strict relation between linear operators and matrices
examined in the preceding section, one of the most important parts of
the theory of operators and matrices is the so-called eigenvalue problem.
Our starting point here is the set Mn (F) of all n × n matrices on the field of
scalars $F = \mathbb{C}$. It will seldom make a substantial difference if the following material is interpreted in terms of real numbers instead
of complex numbers, but the main reason for the assumption is that C is
algebraically closed, while R is not. Consequently, it will be convenient to
think of real vectors and matrices as complex vectors and matrices with
‘restricted’ entries (i.e. with zero imaginary part).
Let A, x , λ be an n × n matrix, a column vector and a scalar, respectively.
One calls standard eigenvalue problem (or standard eigenproblem, SEP for
short) the equation
Ax = λ x ⇔ (A − λ I)x = 0, (A.35)
where the second expression is just the first rewritten in a different form.
A scalar λ and a nonzero vector x that satisfy Equation A.35 are called,
respectively, eigenvalue and eigenvector of A. Since an eigenvector is always
associated with a corresponding eigenvalue, λ and x together form a so-
called eigenpair. The set of all eigenvalues of A is called the spectrum of A
and is often denoted by the symbol σ (A), or, for some authors, Λ(A).
Three observations can be made immediately: Firstly, if x is an eigenvec-
tor associated with the eigenvalue λ , then any nonzero scalar multiple of
x is also an eigenvector. This means that eigenvectors are determined to
within a multiplicative constant – that is a scaling factor. Choosing this
scaling factor by some appropriate (or convenient) means, a process called
normalisation, fixes the length/norm of the eigenvectors and removes the
indeterminacy. Secondly, if x , y are two eigenvectors both associated with
λ , then any nonzero linear combination of x , y is an eigenvector associated
with λ . Third, recalling point 7 of Proposition A.1, A is singular if and only
if $\lambda = 0$ is one of its eigenvalues, that is if and only if $0 \in \sigma(A)$. If, on the
other hand, A is nonsingular and $\lambda, x$ is an eigenpair of A, then it is immediate to show that $\lambda^{-1}, x$ is an eigenpair of $A^{-1}$.
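These observations are easy to check numerically; a small NumPy sketch (the matrix is an arbitrary example of ours):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lams, X = np.linalg.eig(A)   # eigenpairs of the standard problem A x = lam x

for lam, x in zip(lams, X.T):
    assert np.allclose(A @ x, lam * x)              # (lam, x) is an eigenpair
    assert np.allclose(A @ (5 * x), lam * (5 * x))  # so is any nonzero multiple of x

# A is nonsingular (0 is not in sigma(A)), and (1/lam, x) is an eigenpair of A^{-1}
Ainv = np.linalg.inv(A)
for lam, x in zip(lams, X.T):
    assert np.allclose(Ainv @ x, x / lam)

print(lams)   # eigenvalues 2 and 3 (A is triangular)
```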
Remark A.7
The name standard used for the eigenproblem A.35 is used to distinguish
it from other types of eigenproblems. For example, the generalised eigen-
problem (GEP) – frequently encountered in the main text – involves two
matrices and has the (slightly more complicated) form Ax = λ Bx. However,
it is shown in the main text that a generalised problem can always be recast
in standard form.
Nontrivial solutions $x \neq 0$ of the second of Equations A.35 exist if and only if the matrix $A - \lambda I$ is singular, that is, if and only if
\[
\det \left( A - \lambda I \right) = 0 , \tag{A.36}
\]
which is known as the characteristic equation of A. Its l.h.s. is a polynomial of degree n in λ, and the n roots of Equation A.36 are the eigenvalues of A.
Remark A.8
The fact that the roots of the characteristic equation are n follows from the fun-
damental theorem of algebra: In the field of complex numbers, a polynomial
of degree n with complex coefficients has exactly n zeroes, counting (algebraic)
multiplicities. The algebraic multiplicity of a root – which is to be distinguished
from the geometric multiplicity, to be defined later on – is the number of times
this root appears as a solution of the characteristic equation.
Again, we point out that the considerations above depend on the fact that
the complex field is algebraically closed; for matrices on R, little can be said
about the number of eigenvalues in that field.
Now, since $\det(A - \lambda I) = \det(A - \lambda I)^T = \det(A^T - \lambda I)$ and
$\{\det(A - \lambda I)\}^* = \det\big((A - \lambda I)^H\big) = \det\big(A^H - \lambda^* I\big)$, it follows that (a) $A^T$ has the
same eigenvalues as A and (b) the eigenvalues of $A^H$ are the complex conjugates of the eigenvalues of A. Moreover, calling $y_j$ the eigenvectors of $A^T$ – the so-called left-eigenvectors of A – it can be shown that for two eigenpairs $\lambda_k, x_k$ of A and $\lambda_j, y_j$ of $A^T$ with $\lambda_j \neq \lambda_k$, we have
\[
y_j^T x_k = 0 , \tag{A.37}
\]
a 'biorthogonality' relation which, in turn, implies
\[
y_j^T A x_k = 0 . \tag{A.38}
\]
Remark A.9
\[
y_j^H x_k = \delta_{jk} , \qquad y_j^H A x_k = \lambda_j \delta_{jk} , \tag{A.40}
\]
where the l.h.s. of A.40₁ is an inner product of two complex vectors in the
usual sense. Note, however, that Equations A.39 and A.40 do not neces-
sarily hold for all j, k = 1,, n because the matrix A may be defective (a
concept to be introduced in the next section). If it is nondefective, then
there are n linearly independent eigenvectors, the two equations hold for all
j, k = 1,, n and we can write the matrix Equations A.41 and A.42 below.
As far as eigenvalues are concerned, it turns out that the theory is simpler
when the n eigenvalues λ1 , λ2 ,, λn are all distinct and λ j ≠ λk for j ≠ k .
When this is the case, A is surely nondefective, each eigenvalue is associated
with a unique (to within a scaling factor) eigenvector and the (right-)eigen-
vectors form a linearly independent set (for this, see, for example, Laub
(2005) or Wilkinson (1996)). The same, clearly, applies to the eigenvectors
of AT , that is the left-eigenvectors of A. If now we arrange the n left-(right-)
eigenvectors of A to form the n × n matrix Y (X) whose jth column is given
by the components of y j ( x j ), Equations A.39 can be compactly written as
\[
Y^T X = I , \qquad Y^T A X = \operatorname{diag} \left( \lambda_1, \ldots, \lambda_n \right) , \tag{A.41}
\]
where in A.41₂ we took into account that the first equation implies $Y^T = X^{-1}$.
If, as in Remark A.9, we consider $A^H$ and, again, let Y be the matrix of n
left-eigenvectors of A, we have
\[
Y^H X = I , \qquad Y^H A X = \operatorname{diag} \left( \lambda_1, \ldots, \lambda_n \right) , \tag{A.42}
\]
which is given as Theorem 9.15 in Chapter 9 of Laub (2005). Also, note that
Equations A.42 imply $A = X \operatorname{diag}(\lambda_1, \ldots, \lambda_n)\, X^{-1} = \sum_{i=1}^{n} \lambda_i\, x_i y_i^H$ (clearly, for
Equations A.41, the $y_i^H$ of the last relation is replaced by $y_i^T$).
Proposition A.2
Proposition A.3
Remark A.10
Proposition A.4
Proposition A.5
\[
A = \begin{pmatrix} a & 1 \\ 0 & a \end{pmatrix}
\]
Proposition A.6
\[
x_j^H x_k = \delta_{jk} , \qquad x_j^H A x_k = \lambda_j \delta_{jk} . \tag{A.43}
\]
Similarity via a unitary (or orthogonal) matrix is clearly simpler than ‘ordi-
nary’ similarity because X H (or XT ) is much easier to evaluate than X −1.
Moreover, when compared to ordinary similarity, unitary similarity is an
equivalence relation that partitions Mn into finer equivalence classes because,
as we saw in Section A.2.2, it corresponds to a change of basis between
orthonormal bases.
Example A.5
By explicitly carrying out the calculations, the reader is invited to check
the following results. Let A be the real symmetric matrix
\[
A = \begin{pmatrix} 1 & -4 & 2 \\ -4 & 3 & 5 \\ 2 & 5 & -1 \end{pmatrix} .
\]
Its eigenvalues, arranged (to four decimal places) in the diagonal matrix, are
\[
L = \operatorname{diag} \left( \lambda_1, \lambda_2, \lambda_3 \right)
= \begin{pmatrix} -6.5135 & 0 & 0 \\ 0 & 2.1761 & 0 \\ 0 & 0 & 7.3375 \end{pmatrix} .
\]
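A quick numerical check of this example (a sketch; `eigvalsh` is NumPy's routine for symmetric/Hermitian matrices and returns the eigenvalues in ascending order):

```python
import numpy as np

A = np.array([[ 1.0, -4.0,  2.0],
              [-4.0,  3.0,  5.0],
              [ 2.0,  5.0, -1.0]])

lams = np.linalg.eigvalsh(A)
print(lams)   # close to [-6.5135, 2.1761, 7.3375]

# Sanity checks: the sum of the eigenvalues equals the trace of A
# and their product equals its determinant
assert np.isclose(lams.sum(), np.trace(A))
assert np.isclose(lams.prod(), np.linalg.det(A))
```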
Proposition A.7
1. A is normal;
2. A is unitarily diagonalisable;
3. $\sum_{i,j=1}^{n} |a_{ij}|^2 = \sum_{i=1}^{n} |\lambda_i|^2$;
4. There is an orthonormal set of n eigenvectors of A.
Remark A.11
So, summarising the results of the preceding discussion, we can say that
a complex Hermitian (or real symmetric) matrix:
Proposition A.8
\[
e^{A} = \sum_{k=0}^{\infty} \frac{A^k}{k!} = I + A + \frac{A^2}{2!} + \cdots , \qquad
e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!} = I + tA + \frac{t^2 A^2}{2!} + \cdots ,
\tag{A.45}
\]
Moreover, if A is similar to B, that is $A = SBS^{-1}$, then
\[
e^{A} = e^{SBS^{-1}}
= I + SBS^{-1} + \frac{\left( SBS^{-1} \right)^2}{2!} + \cdots
= I + SBS^{-1} + \frac{S B^2 S^{-1}}{2!} + \cdots
= S \left( I + B + \frac{B^2}{2!} + \cdots \right) S^{-1}
= S\, e^{B} S^{-1} ,
\tag{A.46}
\]
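Relation A.46 can be checked numerically with a truncated version of the series A.45 (a sketch; the matrices and the truncation length are arbitrary choices of ours):

```python
import numpy as np

def expm_series(M, terms=30):
    """Truncated power series I + M + M^2/2! + ... of Equation A.45."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k          # running term M^k / k!
        out = out + term
    return out

B = np.diag([0.5, -1.0])             # diagonal, so e^B = diag(e^0.5, e^-1)
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # a nonsingular similarity matrix
A = S @ B @ np.linalg.inv(S)

lhs = expm_series(A)
rhs = S @ np.diag(np.exp([0.5, -1.0])) @ np.linalg.inv(S)
assert np.allclose(lhs, rhs)         # e^A = S e^B S^{-1}
```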
\[
U^H A U = T , \tag{A.48}
\]
Remark A.12
\[
A = U\, S\, V^H , \tag{A.49}
\]
where the elements of $S = [s_{ij}] \in M_{m \times n}$ are zero for $i \neq j$ and, calling p the
minimum between m and n, $s_{11} \ge s_{22} \ge \cdots \ge s_{kk} > s_{k+1,k+1} = \cdots = s_{pp} = 0$. In
other words, the only nonzero elements of S are real positive numbers on
the main diagonal and their number equals the rank of A. These elements
are denoted by $\sigma_1, \ldots, \sigma_k$ and called the singular values of A. Moreover, if A
is real, then both U and V may be taken to be real, U, V are orthogonal and
the superscript H is replaced by T.
So, for example, if A ∈ M3 × 4 and rank A = 2, we get a matrix of the form
\[
S = \begin{pmatrix} \sigma_1 & 0 & 0 & 0 \\ 0 & \sigma_2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} ,
\]
with only two nonzero singular values σ 1 = s11 , σ 2 = s22 and the other diago-
nal entry s33 equal to zero. If, on the other hand, rank A = 3, then we have
three singular values and the third is σ 3 = s33 > 0 .
The reason why the singular values are real and non-negative is because
the scalars $\sigma_i^2$ are the eigenvalues of the Hermitian and positive-semidefinite
matrices $AA^H$, $A^H A$. In fact, using Equation A.49 and its Hermitian conjugate $A^H = V S^H U^H$ together with the relations $U^H U = I_m$, $V^H V = I_n$, we get
\[
A A^H = U \left( S S^H \right) U^H , \qquad A^H A = V \left( S^H S \right) V^H , \tag{A.51}
\]
where both products $SS^H$ and $S^H S$ are square matrices whose only nonzero
elements $\sigma_1^2, \ldots, \sigma_k^2$ lie on the main diagonal. This, in turn, implies that we
can call $u_1, u_2, \ldots, u_m$ the first, second, …, mth column of U and $v_1, v_2, \ldots, v_n$
the first, second, …, nth column of V and rewrite Equation A.51 as
\[
A A^H u_i = \sigma_i^2\, u_i \quad (i = 1, \ldots, m) ; \qquad
A^H A v_i = \sigma_i^2\, v_i \quad (i = 1, \ldots, n) ,
\tag{A.52}
\]
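These properties of the singular values translate directly into NumPy (a sketch with a random full-rank matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))      # a 3 x 4 example, generically of rank 3

U, s, Vh = np.linalg.svd(A)          # A = U S V^H; s holds sigma_1 >= sigma_2 >= ...

# The sigma_i^2 are the eigenvalues of the positive-semidefinite matrix A A^H
eig = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]
assert np.allclose(s**2, eig)

# The number of nonzero singular values equals rank(A)
assert np.sum(s > 1e-12) == np.linalg.matrix_rank(A)
```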
A.7 MATRIX NORMS
set of non-negative real numbers satisfying the axioms N1–N3. The same
idea can be extended to matrices by defining a matrix norm as a mapping
• : Mn → R, which, for all A, B ∈ Mn (F) and all a ∈ F satisfies the axioms
where MN1–MN3 are the same as the defining axioms of a vector norm. A
mapping that satisfies these three axioms but not necessarily the fourth is
sometimes called a generalised matrix norm.
Two examples of matrix norms are the maximum column sum and the
maximum row sum norms, denoted, respectively, by the symbols • 1 , • ∞
and defined as
\[
\| A \|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}| , \qquad
\| A \|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}| , \tag{A.53}
\]
Two other important examples are the Euclidean (or Frobenius) norm $\| \cdot \|_E$ and the spectral norm $\| \cdot \|_2$, defined, respectively, as
\[
\| A \|_E = \sqrt{ \sum_{i,j=1}^{n} |a_{ij}|^2 } = \sqrt{ \operatorname{tr} \left( A^H A \right) } , \qquad
\| A \|_2 = \sigma_{\max}(A) , \tag{A.54}
\]
where σ max (A) – that is σ 1 if, as in the preceding section, we arrange the
singular values in non-increasing order – is the maximum singular value
of A or, equivalently, the positive square root of the maximum eigen-
value of A H A. Both norms A.54 have the noteworthy property of being
unitarily invariant, where by this term we mean that $\| UAV \|_E = \| A \|_E$ and
$\| UAV \|_2 = \| A \|_2$ for any two unitary matrices U, V.
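The four norms of Equations A.53 and A.54 can be computed from their definitions and compared with NumPy's built-in `norm` (a sketch; the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

norm_1   = np.abs(A).sum(axis=0).max()   # maximum column sum: |-2| + |4| = 6
norm_inf = np.abs(A).sum(axis=1).max()   # maximum row sum:    |3| + |4| = 7
norm_E   = np.sqrt(np.trace(A.T @ A))    # Euclidean (Frobenius) norm
norm_2   = np.linalg.svd(A, compute_uv=False).max()   # maximum singular value

assert norm_1 == np.linalg.norm(A, 1)
assert norm_inf == np.linalg.norm(A, np.inf)
assert np.isclose(norm_E, np.linalg.norm(A, 'fro'))
assert np.isclose(norm_2, np.linalg.norm(A, 2))
```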
Also, associated with any vector norm, it is possible to define a matrix
norm as
\[
\| A \| = \max_{x \neq 0} \frac{\| A x \|}{\| x \|} = \max_{\| x \| = 1} \| A x \| , \tag{A.55}
\]
which is called the matrix norm subordinate to (or induced by) the vector
norm. This norm is such that $\| I \| = 1$ and satisfies the inequality $\| A x \| \le \| A \| \, \| x \|$
for all x. More generally, a matrix and a vector norm satisfying an inequal-
ity of this form are said to be compatible (which clearly implies that a vector
norm and its subordinate matrix norm are always compatible). Note also
Proposition A.9
The condition number of a nonsingular matrix A (relative to the norm $\| \cdot \|$) is defined as
\[
\kappa(A) = \| A \| \, \| A^{-1} \| \tag{A.57}
\]
and, in particular, for the spectral norm,
\[
\kappa_2 (A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)} , \tag{A.58}
\]
where $\sigma_{\min}(A) = 1 / \sigma_{\max}\!\left( A^{-1} \right)$ is the minimum singular value of A.
With these definitions, the following two propositions may give a general
idea of the results that can be obtained by making use of matrix norms.
Proposition A.10
Moreover, for any given ε > 0, there exists a matrix norm such that
$\rho(A) \le \| A \| \le \rho(A) + \varepsilon$.
\[
| \mu - \lambda_i | \le \kappa(S) \, \| E \| , \tag{A.60}
\]
It is well known that the basic trigonometric functions sine and cosine are
periodic with period 2π and that, more generally, a function f (t) is called
periodic of (finite) period T if it repeats itself every T seconds, so that
f (t) = f (t + T ) for every t . If now we consider the fundamental (angular or
circular) frequency $\omega_1 = 2\pi/T$ and its harmonics $\omega_n = n\omega_1$ ($n = 1, 2, \ldots$), one
of the great achievements of J.B. Fourier (1768–1830) is to have shown that
almost any reasonably well-behaved periodic function of period T can be
expressed as the sum of a trigonometric series. In mathematical terms, this
means that we can write
\[
f(t) = \frac{A_0}{2} + \sum_{n=1}^{\infty} \left( A_n \cos \omega_n t + B_n \sin \omega_n t \right) , \qquad
f(t) = \sum_{n=-\infty}^{\infty} C_n \exp \left( i \omega_n t \right) , \tag{B.1}
\]
where the second expression is the complex form of the Fourier series and
is obtained from the first by using Euler’s formula e ± i x = cos x ± i sin x. The
constants $A_n, B_n$ or $C_n$ are called Fourier coefficients, and it is not difficult
to show that we have
\[
A_n = C_n + C_{-n} , \qquad B_n = i \left( C_n - C_{-n} \right) , \tag{B.2}
\]
thus implying that the C-coefficients are generally complex. However, if $f(t)$
is a real function, then $C_n = C_{-n}^{*}$.
Remark B.1
i. In Equation B.1₁, the 'static term' $A_0/2$ has been introduced for future
convenience in order to include the case in which f (t) oscillates about
some nonzero value;
ii. By introducing the amplitudes $D_n = \sqrt{A_n^2 + B_n^2}$ and the phase angles $\varphi_n$ (with $A_n = D_n \sin\varphi_n$, $B_n = D_n \cos\varphi_n$), the series B.1₁ can also be written in the 'harmonic' form
\[
f(t) = \frac{A_0}{2} + \sum_{n=1}^{\infty} D_n \sin \left( \omega_n t + \varphi_n \right) . \tag{B.3}
\]
The coefficients themselves are obtained from the orthogonality relation
\[
\int_0^T e^{\, i n \omega_1 t} \, e^{- i m \omega_1 t} \, dt = T \delta_{nm} , \tag{B.4}
\]
which, applied to the complex form B.1₂, gives
\[
C_n = \frac{1}{T} \int_0^T f(t) \, e^{- i \omega_n t} \, dt , \tag{B.5}
\]
which, in turn, can be substituted into Equations B.2 to give the original
‘trigonometric’ coefficients as
\[
A_n = \frac{2}{T} \int_0^T f(t) \cos \left( \omega_n t \right) dt , \qquad
B_n = \frac{2}{T} \int_0^T f(t) \sin \left( \omega_n t \right) dt \tag{B.6}
\]
for $n = 0, 1, 2, \ldots$, where, in particular – since $A_0 = (2/T) \int_0^T f(t) \, dt$ – the term
$A_0/2$ of Equation B.1₁ is the average value of $f(t)$. Also, note that Equations
B.5 and B.6 show that f (t) must be integrable on its periodicity interval for
the coefficients to exist.
If now we ask about the relation between the mean-square value of f (t)
and its Fourier coefficients, we must first recall that the mean-square value
is defined as
\[
\overline{f^2(t)} \equiv \frac{1}{T} \int_0^T f^2(t) \, dt = \frac{1}{T} \int_0^T \left| f(t) \right|^2 dt , \tag{B.7}
\]
when the integral on the r.h.s. exists. Then, we can substitute the series
expansion of Equation B.1₂ into the r.h.s. of Equation B.7, exchange the
integral and series signs, and use Equation B.4 again to get
\[
\begin{aligned}
\overline{f^2(t)} = \frac{1}{T} \int_0^T f^2(t) \, dt
&= \frac{1}{T} \int_0^T \Big( \sum_n C_n e^{i \omega_n t} \Big) \Big( \sum_m C_m^* e^{-i \omega_m t} \Big) dt \\
&= \sum_n \sum_m \frac{C_n C_m^*}{T} \int_0^T e^{i \omega_n t} e^{-i \omega_m t} \, dt
= \sum_n \sum_m C_n C_m^* \delta_{nm}
= \sum_{n=-\infty}^{\infty} \left| C_n \right|^2 ,
\end{aligned}
\tag{B.8}
\]
where the absence of cross-terms of the form CnCm∗ (with n ≠ m) in the final
result is worthy of notice because it means that each Fourier component
Cn makes its own contribution (to the mean-square value of f (t)), indepen-
dently of all the other components. When proved on a rigorous mathemati-
cal basis, this result is known as Parseval’s relation, and the reader is invited
to check that its ‘trigonometric version’ reads
\[
\overline{f^2(t)} = \frac{A_0^2}{4} + \frac{1}{2} \sum_{n=1}^{\infty} \left( A_n^2 + B_n^2 \right) . \tag{B.9}
\]
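Parseval's relation B.9 is easy to verify numerically for a finite trigonometric sum (a sketch; the test signal, with $A_0 = 2$, $A_1 = 1$ and $B_3 = 2$, is an arbitrary choice of ours):

```python
import numpy as np

T = 2 * np.pi
w1 = 2 * np.pi / T                                 # fundamental frequency (here 1)
t = np.linspace(0.0, T, 100000, endpoint=False)    # one full period

# f(t) = A0/2 + A1 cos(w1 t) + B3 sin(3 w1 t)
f = 2.0 / 2 + 1.0 * np.cos(w1 * t) + 2.0 * np.sin(3 * w1 * t)

mean_square = np.mean(f**2)                        # (1/T) * integral of f^2 over a period
parseval = 2.0**2 / 4 + 0.5 * (1.0**2 + 2.0**2)    # A0^2/4 + (1/2)(A1^2 + B3^2)

print(mean_square, parseval)   # both approximately 3.5
assert np.isclose(mean_square, parseval)
```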
On physical grounds, it should be noted that Equations B.8 and B.9 are
particularly important because the squared value of a function (or ‘signal’,
as it is sometimes called in applications-oriented literature) is related to
key physical quantities such as energy or power. In this respect, in fact, an
important mathematical result is that for every square-integrable function
on [0, T] – that is functions such that $\int_0^T \left| f(t) \right|^2 dt$ is finite – Parseval's relation holds and its Fourier series converges to $f(t)$ in the mean-square sense,
which means that we have
\[
\lim_{N \to \infty} \int_0^T \left| f(t) - S_N \right|^2 dt = 0 , \tag{B.10}
\]
where $S_N = \sum_{n=-N}^{N} C_n e^{i \omega_n t}$ is the sequence of partial sums of the series.
However, since mean-square convergence is a ‘global’ (on the interval [0, T ])
type of convergence that does not imply pointwise convergence – which, in
general, is the type of convergence of most interest in applications – a result
in this direction is as follows (Boas (1983)).
Remark B.2
i. More precisely, if $t_0 \in [0, T]$ is a point where $f(t)$ has a jump with left-hand and right-hand limits $f(t_0^-)$, $f(t_0^+)$, respectively, then the series
converges to the value $\left[ f(t_0^+) + f(t_0^-) \right] / 2$;
ii. Proposition B.1 is useful because in applications, one often encounters
periodic functions that are integrable on any finite interval and that
have a finite number of jumps and/or corners in that interval. In these
cases, the theorem tells us that we do not need to test the convergence
of the Fourier series, because once we have calculated its coefficients,
the series will converge as stated;
iii. An important property of Fourier coefficients of an integrable func-
tion is that they tend to zero as n → ∞ (Riemann–Lebesgue lemma). In
practice, this means that in approximate computations, we can trun-
cate the series and calculate only a limited number of coefficients.
Example B.1
Consider the square pulse of period $T = 2\pi$ defined on $[-\pi, \pi]$ as
\[
f(t) = \begin{cases} -1 & -\pi < t < 0 \\ +1 & 0 < t < \pi \end{cases} \tag{B.11}
\]
Since $f(t)$ is odd, all the cosine coefficients $A_n$ vanish, and the calculation of the $B_n$ by means of Equations B.6 leads to the series
\[
f(t) = \frac{4}{\pi} \left( \sin t + \frac{1}{3} \sin 3t + \frac{1}{5} \sin 5t + \cdots \right)
= \frac{4}{\pi} \sum_{n=0}^{\infty} \frac{\sin (2n+1) t}{2n+1} , \tag{B.12}
\]
which, as expected, converges to $\left[ f(0^+) + f(0^-) \right]/2 = 0$ at the point of
discontinuity $t = 0$. The same occurs also at the endpoints $t = \pm\pi$ if we
consider the function $f(t)$ to be extended by periodicity over the entire
real line $\mathbb{R}$. In this case, in fact, the extension leads to the left and
right limits $f(-\pi^-) = 1$, $f(-\pi^+) = -1$ at $t = -\pi$ and $f(\pi^-) = 1$, $f(\pi^+) = -1$
at $t = \pi$.
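A short numerical sketch of the convergence just described, using partial sums of the series B.12:

```python
import numpy as np

def S_N(t, N):
    """Partial sum (up to n = N) of the series B.12."""
    n = np.arange(N + 1)[:, None]
    return (4 / np.pi) * np.sum(np.sin((2 * n + 1) * t) / (2 * n + 1), axis=0)

t = np.array([-np.pi / 2, 0.0, np.pi / 2])
print(S_N(t, 2000))
# close to [-1, 0, 1]: the series approaches f(t) at points of continuity
# and is exactly 0 at the jump t = 0, the mid-value of the discontinuity
```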
Example B.2
On the interval $[-\pi, \pi]$, the function $f(t) = |t|$ is continuous, satisfies
$f(-\pi) = f(\pi)$ at its endpoints and is even (i.e. $f(-t) = f(t)$). Since this last
property suggests that also in this case, the trigonometric form is more
convenient for the calculation of the Fourier coefficients (because the
sine coefficients $B_n$ are all zero), the reader is invited to determine the
cosine coefficients and show that (a) $A_0 = \pi$ and (b) only the $A_n$ for n odd
are nonzero, with $A_1 = -4/\pi$, $A_3 = -4/9\pi$, $A_5 = -4/25\pi$, etc. The series
expansion, therefore, can be equivalently written in the two forms
\[
f(t) = \frac{\pi}{2} + \frac{2}{\pi} \sum_{n=1}^{\infty} \frac{\left[ (-1)^n - 1 \right] \cos nt}{n^2}
= \frac{\pi}{2} - \frac{4}{\pi} \sum_{n=1}^{\infty} \frac{\cos (2n-1) t}{(2n-1)^2} , \tag{B.13}
\]
Remark B.3
i. As a matter of fact, it turns out that the series B.13 converges uniformly because of the regularity properties of $|t|$. In this respect, moreover, it can be shown (see, e.g., Howell (2001) or Sagan (1989)) that
these same properties allow the termwise differentiation of the series
B.13. By so doing, in fact, we get the coefficients of the series B.12,
and the result makes sense because the function B.11 is the derivative
of $|t|$. On the other hand, if we differentiate term-by-term the coefficients of the series B.12 – which does not converge uniformly – we
get nonsense (and surely not the derivative of the pulse B.11). And this
is because piecewise continuity of $f(t)$ is not sufficient for termwise
differentiation;
ii. Having pointed out in the previous remark that term-by-term dif-
ferentiation of a Fourier series requires some care, it turns out that
termwise integration is less problematic and, in general, the piecewise
continuity of the function suffices. An example is precisely the pulse
B.11, whose integral is $|t|$. By integrating term-by-term its Fourier
coefficients, in fact, the result is that we obtain the coefficients of the
series B.13;
iii. Another aspect worthy of mention is the so-called Gibbs phenom-
enon, which consists in the fact that near a jump discontinuity the
Fourier series overshoots (or undershoots) the function by approxi-
mately 9% of the jump. So, for instance, if we drew a graph of the
partial sum SN of the series on the r.h.s. of B.12 for a few different
values of N (and the reader is invited to do so), we would observe the
presence of ‘ripples’ (or ‘wiggles’) in the vicinity of the discontinuity
points of the function f (t). These ‘ripples’ are due to the non-uniform
convergence of the Fourier series and persist no matter how many
terms of the series are employed (even though, for increasing N, they
get confined to a steadily narrower region near the discontinuity).
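The Gibbs overshoot can be observed numerically with the partial sums of B.12 (a sketch; the grid and the truncation orders are arbitrary choices of ours):

```python
import numpy as np

def S_N(t, N):
    """Partial sum (up to n = N) of the series B.12."""
    n = np.arange(N + 1)[:, None]
    return (4 / np.pi) * np.sum(np.sin((2 * n + 1) * t) / (2 * n + 1), axis=0)

t = np.linspace(1e-4, np.pi / 2, 20000)    # just to the right of the jump at t = 0
for N in (10, 50, 200):
    overshoot = S_N(t, N).max() - 1.0       # f(t) = +1 on this interval
    print(N, overshoot)                     # stays near 0.18, i.e. ~9% of the jump of 2
```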
Substituting the coefficients B.5 into the complex series B.1₂ and noting that the frequency spacing is $\Delta\omega = \omega_{n+1} - \omega_n = 2\pi/T$ (so that $1/T = \Delta\omega / 2\pi$), we can write
\[
f(t) = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} \left[ \int_{-T/2}^{T/2} f(s) \, e^{i \omega_n (t-s)} \, ds \right] \Delta\omega
= \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} g(\omega_n) \, \Delta\omega , \tag{B.14}
\]
Letting $T \to \infty$ (so that $\Delta\omega \to 0$), we have
\[
\sum_{n=-\infty}^{\infty} \frac{g(\omega_n)}{2\pi} \, \Delta\omega \;\to\; \int_{-\infty}^{\infty} \frac{g(\omega)}{2\pi} \, d\omega , \qquad
\int_{-T/2}^{T/2} f(s) \, e^{i \omega_n (t-s)} \, ds \;\to\; \int_{-\infty}^{\infty} f(s) \, e^{i \omega (t-s)} \, ds ,
\]
and Equation B.14 becomes
\[
f(t) = \frac{1}{2\pi} \int g(\omega) \, d\omega
= \frac{1}{2\pi} \iint f(s) \, e^{i \omega (t-s)} \, ds \, d\omega
= \int \left[ \frac{1}{2\pi} \int f(s) \, e^{-i \omega s} \, ds \right] e^{i \omega t} \, d\omega
= \int F(\omega) \, e^{i \omega t} \, d\omega ,
\tag{B.15}
\]
and we could return to the original variable t because s was just a dummy
variable of integration. But then, since Equation B.15 gives
\[
F(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(t) \, e^{-i \omega t} \, dt \tag{B.16}
\]
and
\[
f(t) = \int_{-\infty}^{\infty} F(\omega) \, e^{i \omega t} \, d\omega , \tag{B.17}
\]
these last two relations show that when the integrals exist, the functions
f (t) and F(ω ) form a pair. This is called a Fourier transform pair and usu-
ally one calls F(ω ) the (forward) Fourier transform of f (t), while f (t) is the
inverse Fourier transform of F(ω ). In accordance with these names, one can
formally introduce the forward and inverse Fourier transform ‘operators’
$\mathcal{F}, \mathcal{F}^{-1}$ and conveniently rewrite Equations B.16 and B.17 in the more concise
notation
\[
F(\omega) = \mathcal{F}[f(t)] , \qquad f(t) = \mathcal{F}^{-1}[F(\omega)] . \tag{B.18}
\]
Remark B.4
\[
F(\nu) = \int_{-\infty}^{\infty} f(t) \, e^{-i 2\pi\nu t} \, dt , \qquad
f(t) = \int_{-\infty}^{\infty} F(\nu) \, e^{i 2\pi\nu t} \, d\nu . \tag{B.19}
\]
Example B.3
Given the rectangular pulse of unit area (sometimes called the boxcar
function)
\[
f(t) = \begin{cases} 1/2 & -1 \le t \le 1 \\ 0 & \text{otherwise} \end{cases} \tag{B.20}
\]
its Fourier transform (Equation B.16) is
\[
F(\omega) = \frac{1}{4\pi} \int_{-1}^{1} e^{-i\omega t} \, dt
= \frac{1}{4 i \pi \omega} \left( e^{i\omega} - e^{-i\omega} \right)
= \frac{\sin \omega}{2\pi\omega} , \tag{B.21}
\]
where $F(\omega)$ is real because $f(t)$ is even. If, on the other hand, we consider the time-shifted version of the boxcar function defined as $f(t) = 1/2$
for $0 \le t \le 2$ (and zero otherwise), we get the complex transform
$F(\omega) = (2\pi\omega)^{-1} e^{-i\omega} \sin\omega$, which has the same magnitude of B.21 but a
phase factor $e^{-i\omega}$ due to the time shift. Needless to
say, this is the same result that we obtain by using the transform B.19.
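The result B.21 can be confirmed by brute-force numerical integration of the definition B.16 (a sketch; the quadrature is a simple Riemann sum over the support of the boxcar):

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 200001)    # support of the boxcar, where f(t) = 1/2
dt = t[1] - t[0]

def F_num(w):
    """(1/2pi) * integral of f(t) e^{-iwt} dt over [-1, 1], with f = 1/2 there."""
    return np.sum(0.5 * np.exp(-1j * w * t)) * dt / (2 * np.pi)

for w in (0.5, 1.0, 3.0):
    exact = np.sin(w) / (2 * np.pi * w)     # Equation B.21
    print(w, abs(F_num(w) - exact))         # small discretisation error
```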
Consider now the double integral $I = (2\pi)^{-1} \iint f(t)\, F^*(\omega) \, e^{-i\omega t} \, dt \, d\omega$, which can be evaluated in two ways. Performing the t-integration first gives
\[
I = \int F^*(\omega) \left[ \frac{1}{2\pi} \int f(t) \, e^{-i\omega t} \, dt \right] d\omega
= \int F^*(\omega) F(\omega) \, d\omega
= \int \left| F(\omega) \right|^2 d\omega ,
\tag{B.22a}
\]
while performing the ω-integration first gives
\[
I = \frac{1}{2\pi} \int f(t) \left[ \int F^*(\omega) \, e^{-i\omega t} \, d\omega \right] dt
= \frac{1}{2\pi} \int f(t) f^*(t) \, dt
= \frac{1}{2\pi} \int \left| f(t) \right|^2 dt .
\tag{B.22b}
\]
Equating the two results then leads to the transform counterpart of Parseval's relation
\[
\int \left| f(t) \right|^2 dt = 2\pi \int \left| F(\omega) \right|^2 d\omega . \tag{B.23}
\]
Another useful property concerns derivatives: integrating by parts (and assuming that $f(t) \to 0$ as $t \to \pm\infty$), one finds $\mathcal{F}[f'(t)] = i\omega F(\omega)$,
which, denoting by $f^{(k)}(t)$ the kth-order derivative of $f(t)$ and assuming it to
be Fourier-transformable in its own right, is just a special case of the more
general relation $\mathcal{F}\big[ f^{(k)}(t) \big] = (i\omega)^k F(\omega)$.
Next, if we consider the function $I(t) = \int_{t_0}^{t} f(s) \, ds$ – that is the 'anti-derivative'
of $f(t)$ – and assume it to be Fourier-transformable, Equation B.26a gives
$(i\omega)^{-1} \mathcal{F}[I'(t)] = \mathcal{F}[I(t)]$. But since $I'(t) = f(t)$, this means $(i\omega)^{-1} \mathcal{F}[f(t)] = \mathcal{F}[I(t)]$
and therefore
\[
\mathcal{F}[I(t)] = \frac{1}{i\omega} F(\omega) . \tag{B.27}
\]
On the other hand, for the derivatives $F^{(k)}(\omega) = d^k F / d\omega^k$ of $F(\omega)$, the result is
that if the function $t^k f(t)$ is transformable for $k = 1, 2, \ldots, n$, then $F(\omega)$ can be
differentiated n times and we have
\[
F^{(k)}(\omega) = \mathcal{F}\big[ (-it)^k f(t) \big] \qquad (k = 1, 2, \ldots, n).
\]
\[
w(t) = \int_{-\infty}^{\infty} f(t - \tau) \, g(\tau) \, d\tau = (f * g)(t) , \tag{B.29}
\]
\[
\begin{aligned}
\mathcal{F}[w(t)] &= \frac{1}{2\pi} \iint f(t-\tau)\, g(\tau) \, e^{-i\omega t} \, d\tau \, dt
= \frac{1}{2\pi} \int g(\tau) \left[ \int f(t-\tau) \, e^{-i\omega t} \, dt \right] d\tau \\
&= \frac{1}{2\pi} \int g(\tau) \, e^{-i\omega\tau} \left[ \int f(s) \, e^{-i\omega s} \, ds \right] d\tau
= F(\omega) \int g(\tau) \, e^{-i\omega\tau} \, d\tau
= 2\pi F(\omega) G(\omega) ,
\end{aligned}
\tag{B.30}
\]
thus showing that the transform of the convolution $f * g$ is $2\pi$
times the product of the transforms $F(\omega)$ and $G(\omega)$. By a 'symmetric' argument, the reader is invited to show that $\mathcal{F}^{-1}[(F * G)(\omega)] = f(t) g(t)$, which, in
turn, suggests that we also have
\[
\mathcal{F}[f(t) g(t)] = (F * G)(\omega) . \tag{B.31}
\]
Remark B.5
\[
\Delta t \, \Delta\nu \cong 1 , \tag{B.32}
\]
central peak), but the precise value of the number on the r.h.s. of Equation
B.32 is not really important; the point of the theorem is that two members
of a Fourier transform pair – each one in its appropriate domain – cannot
be both ‘narrow’. The implications of this fact pervade the whole subject
of signal analysis and have important consequences in both theory and
practice.
In applications, it is quite common to encounter situations that confirm
the principle. For example, when a lightly damped structure in free vibration
oscillates for a relatively long time (large ∆ t ) at its natural frequency ω n , this
implies that a graph of the Fourier transform of this vibration signal will be
strongly peaked at ω = ω n (i.e. with a small ∆ω ); by contrast, if we want to
excite many modes of vibration of a structure in a relatively large band of fre-
quencies (large ∆ω ), we can do so by a sudden blow with a short time duration
(small ∆ t ). So, the point is that, as far as we know, the uncertainty principle
represents an inescapable law of nature with which we must come to terms.
\[
\int_{-\infty}^{\infty} f(t) \, \delta(t) \, dt = f(0) , \qquad
\int_{-\infty}^{\infty} f(\tau) \, \delta(t - \tau) \, d\tau = f(t) , \tag{B.33}
\]
−∞
(where the second relation is a translated version of the first), which, in the
light of the fact that the l.h.s. of Equation B.33₂ is the convolution product
(f ∗ δ )(t), tells us that the Dirac delta is the unit element for the operation of
convolution, that is
\[
(f * \delta)(t) = f(t) . \tag{B.34}
\]
Remark B.6
The reason for the above quotation marks in ‘function’ and ‘definition’ is
that the Dirac delta is not an ordinary function and Equation B.33 is not a
proper definition. In fact, when one considers that what Equation B.33₁ is trying
to tell us is that the Dirac delta must simultaneously have the two properties (a) $\delta(t) = 0$ for all $t \neq 0$ and (b) $\int_{-\infty}^{\infty} \delta(t) \, dt = 1$, the weakness of the argument is evident; unless $\delta(t)$ is something other than an ordinary function,
the integral of a function that is zero everywhere except at one point is
necessarily zero no matter what definition of integral is used.
mathematical justification of these facts was given in the 1940s by Laurent
Schwartz with his theory of distributions, where he showed that the Dirac
delta is a so-called distribution (or generalised function). Since, however,
our main interest lies in the many ways in which the Dirac delta is used in
applications, for these rigorous aspects we refer the more mathematically
oriented reader to Chapters 7 and 8 of Appel (2007), Chapter 6 of Debnath
and Mikusinski (1999) or to the book of Vladimirov (1981).
Now consider the definitions of Fourier transform of a function f (t) and
its inverse (Equations B.16 and B.17). If we substitute one into the other, we
get (all integrals are from −∞ to ∞)
\[
f(t) = \int F(\omega) \, e^{i\omega t} \, d\omega
= \int \left[ \frac{1}{2\pi} \int f(r) \, e^{-i\omega r} \, dr \right] e^{i\omega t} \, d\omega
= \int f(r) \left[ \frac{1}{2\pi} \int e^{i\omega(t-r)} \, d\omega \right] dr ,
\]
which, owing to Equation B.33₂, shows that the last integral within brackets can be interpreted as an integral representation of the delta function and
that we can write
\[
\delta(t - r) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega(t-r)} \, d\omega , \qquad
\delta(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega t} \, d\omega , \tag{B.35}
\]
Owing to Equations B.35, it turns out that the Dirac delta can be conve-
niently used in many cases. As a first example, we can now directly obtain
Equation B.31. In fact, starting from the chain of relations (again, all inte-
grals are from −∞ to ∞)
\[
\begin{aligned}
f(t) g(t) &= f(t) \int G(r) \, e^{irt} \, dr
= \int G(r) \left[ \int F(s) \, e^{ist} \, ds \right] e^{irt} \, dr
= \int G(r) \int F(s) \, e^{i(s+r)t} \, ds \, dr \\
&= \int G(r) \int F(z - r) \, e^{izt} \, dz \, dr
= \int \left[ \int F(z - r)\, G(r) \, dr \right] e^{izt} \, dz ,
\end{aligned}
\]
where the change of variable $z = s + r$ was made in the second line and the integral within brackets is the convolution $(F * G)(z)$. So, if now
for present convenience we denote this convolution by W (z), the result thus
obtained is
\[
f(t) g(t) = \int W(z) \, e^{izt} \, dz \tag{B.37}
\]
and we can apply the Fourier transform operator on both sides of Equation
B.37 to get, as desired, Equation B.31, that is
\[
\begin{aligned}
\mathcal{F}[f(t) g(t)] &= \frac{1}{2\pi} \int \left[ \int W(z) \, e^{izt} \, dz \right] e^{-i\omega t} \, dt
= \int W(z) \left[ \frac{1}{2\pi} \int e^{i(z-\omega)t} \, dt \right] dz \\
&= \int W(z) \, \delta(z - \omega) \, dz = W(\omega) = (F * G)(\omega) ,
\end{aligned}
\tag{B.38}
\]
where we have used the integral representation B.351 in the third equality
and the defining property B.331 in the fourth.
A second example is Parseval’s relation B.23, which can be obtained by
writing the chain of relations
\[
\begin{aligned}
\int \left| F(\omega) \right|^2 d\omega
&= \int F(\omega) F^*(\omega) \, d\omega
= \frac{1}{(2\pi)^2} \int \left[ \int f(t) \, e^{-i\omega t} \, dt \right] \left[ \int f^*(r) \, e^{i\omega r} \, dr \right] d\omega \\
&= \frac{1}{2\pi} \iint f(t) f^*(r) \left[ \frac{1}{2\pi} \int e^{i\omega(r-t)} \, d\omega \right] dr \, dt \\
&= \frac{1}{2\pi} \iint f(t) f^*(r) \, \delta(r - t) \, dr \, dt
= \frac{1}{2\pi} \int f(t) f^*(t) \, dt
= \frac{1}{2\pi} \int \left| f(t) \right|^2 dt .
\end{aligned}
\tag{B.39}
\]
At this point, one may ask about the integral of the Dirac delta. So, by
introducing the so-called Heaviside (or unit step) function
\[
\theta(t) = \begin{cases} 0 & t < 0 \\ 1 & t \ge 0 \end{cases} \tag{B.40}
\]
it is not difficult to see that we have the two relations
\[
\delta(t) = \frac{d\theta(t)}{dt} , \qquad \theta(t) = \int_{-\infty}^{t} \delta(\tau) \, d\tau , \tag{B.41}
\]
which, in turn, lead to the question about the derivative δ ′(t) of δ (t). For
this, however, we must take an indirect approach because δ ′(t) cannot be
directly determined from the definition of δ (t); by letting f (t) be a well-
behaved function and assuming the usual rule of integration by parts to
hold, we get
\[
\int_{-\infty}^{\infty} f(t) \, \delta'(t) \, dt
= - \int_{-\infty}^{\infty} f'(t) \, \delta(t) \, dt = - f'(0) , \tag{B.42}
\]
thus showing that $\delta'(t)$, like $\delta(t)$, vanishes for all $t \neq 0$ but 'operates' (under
the integral sign) on the derivative $f'(t)$ rather than on $f(t)$. Then, generalising the relation B.42, the kth derivative $\delta^{(k)}(t)$ is such that
\[
\int_{-\infty}^{\infty} f(t) \, \delta^{(k)}(t) \, dt = (-1)^k f^{(k)}(0) . \tag{B.43}
\]
can write $\delta(t) = \lim_{\varepsilon \to 0} w_\varepsilon(t)$, and two illustrative examples of such sequences are
\[
w_\varepsilon(t) = \frac{1}{\varepsilon \sqrt{\pi}} \exp \left( - t^2 / \varepsilon^2 \right) , \qquad
w_\varepsilon(t) = \frac{\sin (t/\varepsilon)}{\pi t} , \tag{B.44}
\]
which, in the limit, satisfy the defining property B.331. With the Gaussian
functions B.441, in fact, for any well-behaved function f (t) that vanishes
fast enough at infinity to ensure the convergence of any integral in which it
occurs, we get
\[
\lim_{\varepsilon \to 0} \frac{1}{\varepsilon \sqrt{\pi}} \int f(t) \, e^{-t^2/\varepsilon^2} \, dt
= \lim_{\varepsilon \to 0} \frac{1}{\sqrt{\pi}} \int f(\varepsilon s) \, e^{-s^2} \, ds
= \frac{f(0)}{\sqrt{\pi}} \int e^{-s^2} \, ds = f(0) ,
\]
where we made the change of variable $s = t/\varepsilon$ in the first equality and the
last relation holds because $\int e^{-s^2} ds = \sqrt{\pi}$. Also, it may be worth noticing
that by defining $A = 1/\varepsilon$ in the functions B.44₂, we can once again obtain
the integral representation of Equation B.35₂; in fact, we have
\[
\delta(t) = \lim_{A \to \infty} \frac{\sin At}{\pi t}
= \lim_{A \to \infty} \frac{1}{2\pi} \int_{-A}^{A} e^{i\omega t} \, d\omega
= \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega t} \, d\omega ,
\]
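The limiting behaviour of the Gaussian sequence B.44₁ is easy to see numerically (a sketch; the smooth test function is an arbitrary choice of ours, with $f(0) = 1$):

```python
import numpy as np

def f(t):
    return np.cos(t) * np.exp(-t**2 / 10.0)   # smooth, rapidly decaying, f(0) = 1

t = np.linspace(-20.0, 20.0, 400001)
dt = t[1] - t[0]

for eps in (1.0, 0.3, 0.1):
    w = np.exp(-t**2 / eps**2) / (eps * np.sqrt(np.pi))   # Gaussian of B.44_1
    print(eps, np.sum(w * f(t)) * dt)   # tends to f(0) = 1 as eps -> 0
```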
\[
F(u) = T[f(t)] = \int_a^b K(t, u) \, f(t) \, dt , \tag{B.45}
\]
where the function K(t, u) is called the kernel of the transformation and F(u) – often symbolically denoted by T[f(t)] – is the transform of f(t) with respect to the kernel. The various transformations differ (and hence have different names) depending on the kernel and on the integration limits a, b. So, for example, we have seen that the choice K(t, u) = (1/2π) e^{−iut} together with a = −∞, b = ∞ gives the Fourier transform, but other frequently encountered types are called the Laplace, Hankel and Mellin transforms, just to name a few. Together with the Fourier transform, the Laplace transform is probably the most popular integral transformation and is defined as
F(s) ≡ L[f(t)] = ∫_0^∞ f(t) e^{−st} dt , (B.46)

where s is, in general, a complex variable. So, for example, for f(t) = e^{bt} we get

F(s) = L[e^{bt}] = 1/(s − b)  (c = Re(s) > b). (B.47)
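Definition B.46 and the result B.47 can be checked by direct numerical quadrature. In the sketch below (the values b = 0.5, s = 2 and the truncation parameters are arbitrary choices), the integral is truncated at a finite upper limit T beyond which the integrand is negligible.

```python
import math

def laplace(f, s, T=60.0, n=60000):
    # Simpson's rule for the truncated integral of f(t) e^{-st} on [0, T];
    # valid when Re(s) exceeds the growth rate of f, so the tail is negligible
    h = T / n
    total = f(0.0) + f(T) * math.exp(-s * T)
    for i in range(1, n):
        t = i * h
        total += (4 if i % 2 else 2) * f(t) * math.exp(-s * t)
    return total * h / 3.0

b, s = 0.5, 2.0
F_num = laplace(lambda t: math.exp(b * t), s)
print(F_num, 1.0 / (s - b))     # both ≈ 0.6667, as predicted by B.47
```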
Remark B.7
In the light of definition B.46, it is natural to ask for the inverse
transform of an image F(s). So, even if in most cases it is common to
make use of (widely available) tables of integral transforms, it can be
shown (see, for example, Boas (1983), Mathews and Walker (1970)
or Sidorov et al. (1985)) that the inversion formula (also known as
Bromwich integral) is
L⁻¹[F(s)] = (1/2πi) ∫_{c−i∞}^{c+i∞} F(s) e^{st} ds , (B.48)
where the notation in the term on the r.h.s. means that we integrate along the vertical straight line Re(s) = c (c > α) in the complex plane. Then, the integral converges to f(t), where f(t) is continuous, while at jumps, it converges to the mid-value of the jump; in particular, for t = 0, the integral converges to f(0⁺)/2. Also, it may be worth mentioning that the integral of Equation B.48 can be evaluated as a contour integral by means of the so-called theorem of residues, one of the key results in the theory of functions of a complex variable.
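Although in practice Equation B.48 is evaluated by residues or looked up in tables, it can also be approximated by brute-force quadrature along the vertical line Re(s) = c. The sketch below (the image F(s) = 1/(s − b) of B.47, the truncation limit Y and the step count are all illustrative choices) recovers f(t) = e^{bt} to about two decimal places.

```python
import cmath, math

def bromwich(F, t, c, Y=500.0, n=100000):
    # trapezoidal rule for (1/2πi)∫ F(s) e^{st} ds along s = c + iy, |y| <= Y
    h = 2.0 * Y / n
    total = 0.0 + 0.0j
    for i in range(n + 1):
        s = complex(c, -Y + i * h)
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * F(s) * cmath.exp(s * t)
    return (total * h / (2.0 * math.pi)).real

b = 0.5
val = bromwich(lambda s: 1.0 / (s - b), t=1.0, c=1.0)
print(val, math.exp(b))          # compare with the exact e^{0.5} ≈ 1.6487
```

The slow 1/Y decay of the truncated tail is what limits the accuracy here; the residue theorem avoids this entirely by closing the contour.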
Passing to the first and second derivatives f′(t), f″(t) – which we assume to be originals in the sense of Remark B.7 (ii) – a single and a double integration by parts, respectively, lead to

L[f′(t)] = s F(s) − f(0⁺) ,  L[f″(t)] = s² F(s) − s f(0⁺) − f′(0⁺) , (B.50a)

where f(0⁺), f′(0⁺) are the limits as t approaches zero from the positive side.
More generally, if we denote by f^(k)(t) the kth derivative of f(t), Equations B.50a are special cases of the relation

L[f^(k)(t)] = s^k F(s) − s^{k−1} f(0⁺) − s^{k−2} f′(0⁺) − ⋯ − f^(k−1)(0⁺) .
If f(t) is an original with image F(s) and if the function I(t) = ∫_0^t f(u) du is also an original, then we can apply Equation B.49₁ to I(t) and take into account that I(0⁺) = 0. Then

L[I(t)] = F(s)/s . (B.51)
A final property we consider here is the counterpart of Equation B.30. If, in fact, we let F(s), G(s) be the images of the two originals f(t), g(t) with convergence abscissas α, β, respectively, then the convolution theorem for Laplace transforms reads

L[(f ∗ g)(t)] = F(s) G(s)  (Re(s) > max(α, β)). (B.52)
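Both the integration rule B.51 and the convolution theorem L[(f ∗ g)(t)] = F(s)G(s) lend themselves to a quick numerical check. In the sketch below, the choices f(t) = g(t) = e^{−t} (so that I(t) = 1 − e^{−t} and (f ∗ g)(t) = t e^{−t}) and the sample value s = 2 are arbitrary.

```python
import math

def laplace(f, s, T=60.0, n=60000):
    # Simpson's rule for the truncated integral of f(t) e^{-st} on [0, T]
    h = T / n
    total = f(0.0) + f(T) * math.exp(-s * T)
    for i in range(1, n):
        t = i * h
        total += (4 if i % 2 else 2) * f(t) * math.exp(-s * t)
    return total * h / 3.0

s = 2.0
F = laplace(lambda t: math.exp(-t), s)              # exact value: 1/(s+1) = 1/3

# integration rule B.51 with I(t) = ∫_0^t e^{-u} du = 1 - e^{-t}
LI = laplace(lambda t: 1.0 - math.exp(-t), s)
print(LI, F / s)                                     # both ≈ 1/6

# convolution theorem with (f*f)(t) = t e^{-t}
Lconv = laplace(lambda t: t * math.exp(-t), s)
print(Lconv, F ** 2)                                 # both ≈ 1/9
```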
Example B.4
With reference to inverse Laplace transforms, in applications one often finds functions that can be written as the ratio of two polynomials in s, that is, F(s) = P(s)/Q(s), where Q(s) is of higher degree than P(s). Then, calling f(t) the inverse transform of F(s), an easy recipe in these cases is provided by the two following rules:

(a) each simple real root q of Q(s) contributes to f(t) the term

 (P(q)/Q′(q)) e^{qt} ; (B.53)

(b) each pair of simple complex conjugate roots q ± iω of Q(s) contributes to f(t) the term e^{qt}(A cos ωt − B sin ωt), where

 A + iB = 2 P(s)/Q′(s) |_{s = q + iω} . (B.54)

So, for instance, for a transform whose denominator has only two simple real roots q₁, q₂, rule (a) gives

f(t) = L⁻¹[F(s)] = (P(q₁)/Q′(q₁)) e^{q₁t} + (P(q₂)/Q′(q₂)) e^{q₂t} = 1/2 + (1/2) e^{−2t} ,

where the last equality holds in the case q₁ = 0, q₂ = −2 with both coefficients equal to 1/2.
As a second example, consider

F(s) = s / [(s + 3)(s² + 4s + 5)] ,

whose denominator has the simple root q₁ = −3 and the complex conjugate pair of zeros q ± iω = −2 ± i. For the simple root, rule (a) leads to the term −(3/2) e^{−3t}, while, on the other hand, rule (b) tells us that the term associated with the complex conjugate zeros is e^{−2t}(A cos t − B sin t). Then, using Equation B.54, we leave to the reader the calculations that lead to A = 3/2, B = 1/2. Finally, putting the pieces back together, we can write the desired result as

f(t) = L⁻¹[F(s)] = −(3/2) e^{−3t} + [(3/2) cos t − (1/2) sin t] e^{−2t} .
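The recipe is easy to reproduce with ordinary complex arithmetic. The sketch below evaluates rule (a) at q₁ = −3 and Equation B.54 at s = −2 + i for the transform of this example, and then cross-checks the resulting f(t) by Laplace-transforming it numerically (the sample point s = 1 and the quadrature settings are arbitrary choices).

```python
import math

P = lambda s: s                                        # numerator of F(s)
Qp = lambda s: (s**2 + 4*s + 5) + (s + 3) * (2*s + 4)  # Q'(s) by the product rule

c1 = P(-3.0) / Qp(-3.0)                  # rule (a) at the simple root q1 = -3
AB = 2 * P(complex(-2, 1)) / Qp(complex(-2, 1))        # Equation B.54
A, B = AB.real, AB.imag
print(c1, A, B)                          # -1.5, 1.5, 0.5

# cross-check: numerically Laplace-transform the claimed f(t) at s = 1
f = lambda t: c1 * math.exp(-3*t) + math.exp(-2*t) * (A*math.cos(t) - B*math.sin(t))
s, T, n = 1.0, 40.0, 40000
h = T / n
F_num = f(0.0) + f(T) * math.exp(-s * T)
for i in range(1, n):
    F_num += (4 if i % 2 else 2) * f(i*h) * math.exp(-s*i*h)
F_num *= h / 3.0
F_exact = s / ((s + 3) * (s**2 + 4*s + 5))
print(F_num, F_exact)                    # both ≈ 0.025
```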
Example B.5
Equations B.50 show that Laplace transforms 'automatically' provide for the initial conditions at t = 0. In this example, we exploit this property by considering the ordinary (homogeneous) differential equation

d²f(t)/dt² + a² f(t) = 0 , (B.55)

with initial conditions f(0⁺) = f₀, f′(0⁺) = f₀′. Transforming both sides and using Equations B.50a, we get s²F(s) − s f₀ − f₀′ + a²F(s) = 0, and therefore

F(s) = s f₀/(s² + a²) + f₀′/(s² + a²) .

Inverting this transform then gives the familiar solution

f(t) = f₀ cos at + (f₀′/a) sin at , (B.56)
If, on the other hand, we consider the non-homogeneous equation

d²f(t)/dt² + a² f(t) = g(t) , (B.57)

with, in particular, the harmonic forcing function g(t) = ĝ cos ωt, the same procedure leads to

F(s) = ĝ s / [(s² + a²)(s² + ω²)] + s f₀/(s² + a²) + f₀′/(s² + a²)

because L[cos ωt] = s/(s² + ω²). In order to return to the time domain, we already know the inverse transform of the last two terms, while for the first term, we can use the convolution theorem and write
L⁻¹[ s/((s² + ω²)(s² + a²)) ] = L⁻¹[ s/(s² + ω²) ] ∗ L⁻¹[ 1/(s² + a²) ]
 = (cos ωt) ∗ (a⁻¹ sin at) = (1/a) ∫_0^t cos(ωτ) sin a(t − τ) dτ , (B.58)

so that, finally, we obtain

f(t) = f₀ cos at + (f₀′/a) sin at + (ĝ/a) ∫_0^t cos(ωτ) sin a(t − τ) dτ , (B.59)
which, again, is the same result that we can obtain by the standard
methods.
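As a sanity check, the solution B.59 can be substituted directly into Equation B.57: the convolution integral is evaluated by Simpson's rule and the second derivative by central finite differences. All numerical values below (a, ω, ĝ, the initial conditions and the test point t) are arbitrary choices.

```python
import math

a, omega, ghat = 2.0, 3.0, 1.5       # system frequency, forcing frequency, amplitude
f0, f0p = 1.0, 0.5                   # initial conditions f(0+), f'(0+)

def conv(t, n=2000):
    # Simpson's rule for ∫_0^t cos(ωτ) sin a(t - τ) dτ
    if t <= 0.0:
        return 0.0
    h = t / n
    total = math.sin(a * t)          # τ = 0 endpoint; the τ = t endpoint vanishes
    for i in range(1, n):
        tau = i * h
        total += (4 if i % 2 else 2) * math.cos(omega * tau) * math.sin(a * (t - tau))
    return total * h / 3.0

def f(t):
    # Equation B.59
    return f0 * math.cos(a*t) + (f0p / a) * math.sin(a*t) + (ghat / a) * conv(t)

# finite-difference check that f'' + a² f = ĝ cos(ωt)
t, h = 1.3, 1e-3
lhs = (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2 + a * a * f(t)
print(lhs, ghat * math.cos(omega * t))   # the two values agree closely
```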
∂²y/∂x² = (1/c²) ∂²y/∂t² , (B.60)

where y = y(x, t), and here, we assume that the initial conditions are given by the two functions

y(x, 0) = u(x) ,  ∂y(x, t)/∂t |_{t=0} = w(x) . (B.61)
If now we call Y(x, s) the Laplace transform of y(x, t) relative to the time variable and transform both sides of Equation B.60, we get

∂²Y(x, s)/∂x² = (1/c²)[ s² Y(x, s) − s u(x) − w(x) ] , (B.62)

and a further Fourier transformation with respect to x – calling Ψ(k, s), U(k) and W(k) the transforms of Y(x, s), u(x) and w(x), respectively – gives

−k² Ψ(k, s) = (1/c²)[ s² Ψ(k, s) − s U(k) − W(k) ] , (B.63)

where k, the so-called wavenumber, is the (Fourier) conjugate variable of x. Solving Equation B.63 gives Ψ(k, s) = [s U(k) + W(k)]/(s² + k²c²), whose inverse Laplace transform we define as

χ(k, t) ≡ L⁻¹[Ψ(k, s)] = U(k) L⁻¹[ s/(s² + k²c²) ] + W(k) L⁻¹[ 1/(s² + k²c²) ]
 = U(k) cos(kct) + (W(k)/(kc)) sin(kct) , (B.65)
from which we can obtain the desired result by inverse Fourier transformation. For the first term on the r.h.s. of Equation B.65, we have

F⁻¹[U(k) cos kct] = (1/2) ∫_{−∞}^{∞} U(k) ( e^{ikct} + e^{−ikct} ) e^{ikx} dk
 = (1/2) ∫_{−∞}^{∞} U(k) e^{ik(x+ct)} dk + (1/2) ∫_{−∞}^{∞} U(k) e^{ik(x−ct)} dk
 = (1/2) [ u(x + ct) + u(x − ct) ] . (B.66a)
For the second term, on the other hand, we first note that

F⁻¹[ W(k) sin(kct)/(kc) ] = F⁻¹[ W(k) ∫_0^t cos(kcr) dr ] ,

and then, proceeding as above for each value of r and carrying out the integration, we obtain

F⁻¹[ W(k) sin(kct)/(kc) ] = (1/2c) ∫_{x−ct}^{x+ct} w(ξ) dξ . (B.66b)
Finally, putting Equations B.66a and B.66b back together yields the
solution y(x, t) of Equation B.60 as
y(x, t) = F⁻¹[χ(k, t)] = (1/2)[ u(x + ct) + u(x − ct) ] + (1/2c) ∫_{x−ct}^{x+ct} w(ξ) dξ , (B.67)

which is the so-called d'Alembert solution (of Equation B.60 with initial conditions B.61). As an incidental remark, we note that for a string of finite length L, we can proceed from Equation B.62 by expanding Y(x, s), u(x) and w(x) in terms of Fourier series rather than taking their Fourier transforms.
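The d'Alembert solution lends itself to an immediate numerical check: choosing any smooth initial shape u(x) and initial velocity w(x) with a known antiderivative W(x), one can verify by finite differences that B.67 satisfies both the wave equation B.60 and the initial conditions B.61. The Gaussian profiles and all parameter values below are arbitrary choices.

```python
import math

c = 2.0
u = lambda x: math.exp(-x * x)            # initial displacement
w = lambda x: x * math.exp(-x * x)        # initial velocity
W = lambda x: -0.5 * math.exp(-x * x)     # antiderivative of w

def y(x, t):
    # Equation B.67
    return 0.5 * (u(x + c*t) + u(x - c*t)) + (W(x + c*t) - W(x - c*t)) / (2.0 * c)

x0, t0, h = 0.4, 0.7, 1e-4

# wave equation: y_xx should equal y_tt / c²
yxx = (y(x0 + h, t0) - 2.0 * y(x0, t0) + y(x0 - h, t0)) / h**2
ytt = (y(x0, t0 + h) - 2.0 * y(x0, t0) + y(x0, t0 - h)) / h**2
print(yxx, ytt / c**2)                     # the two values agree closely

# initial conditions B.61
print(y(0.3, 0.0), u(0.3))                 # identical
vel = (y(0.3, h) - y(0.3, -h)) / (2.0 * h)
print(vel, w(0.3))                         # approximately identical
```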
References and further reading
Ivchenko, G.I. and Medvedev, Yu.I. (1990). Mathematical Statistics, Moscow: Mir
Publishers.
Jazar, R.N. (2013). Advanced Vibrations: A Modern Approach, New York:
Springer.
Junkins, J.L. and Kim, Y. (1993). Introduction to Dynamics and Control of Flexible
Structures, Washington DC: AIAA (American Institute of Aeronautics and
Astronautics).
Kelly, S.G. (2007). Advanced Vibration Analysis, Boca Raton, FL: CRC Press.
Komzsik, L. (2009). Applied Calculus of Variations for Engineers, Boca Raton,
FL: CRC Press.
Köylüoglu, H.U. (1995). Stochastic response and reliability analyses of struc-
tures with random properties subject to stationary random excitation, PhD
Dissertation, Princeton University, Princeton, NJ.
Lanczos, C. (1970). The Variational Principles in Mechanics, New York: Dover
Publications.
Landau, L.D. and Lifshitz, E.M. (1982). Meccanica, Roma: Editori Riuniti.
Laub, A.J. (2005). Matrix Analysis for Scientists and Engineers, Philadelphia, PA:
SIAM (Society for Industrial and Applied Mathematics).
Leissa, A.W. (1969). Vibration of Plates, NASA SP-160, Washington DC, US
Government Printing Office.
Leissa, A.W. (1973). The free vibration of rectangular plates, Journal of Sound and
Vibration, 31: 257–293.
Leissa, A.W. and Qatu, M.S. (2011). Vibrations of Continuous Systems, New York:
McGraw-Hill.
Lurie, A.I. (2002). Analytical Mechanics, Berlin Heidelberg: Springer.
Lutes, L.D. and Sarkani, S. (1997). Stochastic Analysis of Structural and Mechanical
Vibrations, Upper Saddle River, NJ: Prentice-Hall.
Ma, F. and Caughey, T.K. (1995). Analysis of linear nonconservative vibrations, ASME
Journal of Applied Mechanics, 62: 685–691.
Ma, F., Imam, A., Morzfeld, M. (2009). The decoupling of damped linear systems
in oscillatory free vibration, Journal of Sound and Vibration, 324: 408–428.
Ma, F., Morzfeld, M., Imam, A. (2010). The decoupling of damped linear systems
in free or forced vibration, Journal of Sound and Vibration, 329: 3182–3202.
Maia, N.M.M. and Silva, J.M.M. (eds.). (1997). Theoretical and Experimental
Modal Analysis, Taunton: Research Studies Press.
Mathews, J. and Walker, R.L. (1970). Mathematical Methods of Physics, 2nd ed.,
Redwood City, CA: Addison-Wesley.
McConnell, K.G. (1995). Vibration Testing, Theory and Practice, New York: John
Wiley & Sons.
Meirovitch, L. (1980). Computational Methods in Structural Dynamics, Alphen
aan den Rijn: Sijthoff & Noordhoff.
Meirovitch, L. (1997). Principles and Techniques of Vibrations, Upper Saddle
River, NJ: Prentice-Hall.
Moon, F.C. (2004). Chaotic Vibrations: An Introduction for Applied Scientists
and Engineers, Hoboken, NJ: John Wiley & Sons.
Naylor, A.W. and Sell, G.R. (1982). Linear Operator Theory in Engineering and
Science, New York: Springer-Verlag.