Week 9: PHY 305 Classical Mechanics Instructor: Sebastian W Uster, IISER Bhopal, 2020
These notes are provided for the students of the class above only. There is no guarantee of correctness; please contact me if you spot a mistake.
At the beginning of section 3.1.1 we had seen that you can imagine a rigid body as composed of many mass points with fixed distances between them, and motivated this also by the idea that after all the rigid body really is composed of bound molecules. However, the forces keeping the molecules together will not rigidly fix their distances, but instead allow them to vibrate or oscillate around equilibrium positions. Moreover, neighboring molecules will strongly affect one another. In reality this thus turns into a system of coupled oscillators, which we now study more closely.
Let’s first see what happens in a simple example, which is somewhat unrealistic but chosen for conceptual simplicity: two carts can move without friction on a rail, and are attached between two walls via springs as shown in the figure below.
In the absence of the middle spring with spring constant $k_2$, each of these carts just forms a simple harmonic oscillator with known frequency $\omega_n = \sqrt{k_n/m_n}$. However, due to the middle spring, you cannot shift cart 1 without exerting a pulling or pushing force onto cart 2, so the two oscillators are now coupled.
As expected since the oscillators are coupled, the equation for ẍ1 depends on x2 and vice versa.
We can solve (3.58) by adding or subtracting the two equations, but we will soon encounter more
complicated coupled systems (with more than two variables), so let us instead directly learn a more
powerful approach. Let us form a vector of positions x(t) = [x1 (t), x2 (t)]T and write (3.58) as
$$\ddot{\mathbf{x}}(t) = -\tilde{M}\,\mathbf{x}(t), \quad \text{with matrix} \quad \tilde{M} = \begin{bmatrix} \frac{k_1+k_2}{m_1} & -\frac{k_2}{m_1} \\[4pt] -\frac{k_2}{m_2} & \frac{k_3+k_2}{m_2} \end{bmatrix}. \tag{3.59}$$
Motivated by the need to solve matrix differential equations of this kind, we look at:

First order system of linear differential equations:
$$\dot{f}_k(t) = \sum_{n=1}^{N} m_{kn}\, f_n(t), \tag{3.60}$$
for functions $f_k(t)$. To warm up, we look at an equation of first order in time initially (unlike (3.59)). Importantly, the coefficients $m_{kn}$ depend neither on $t$ nor on the $f_k$.
We can write this in matrix form as
$$\frac{d}{dt}\mathbf{f}(t) = M\,\mathbf{f}(t), \tag{3.61}$$
where we have grouped all functions into an $N$-component vector $\mathbf{f}(t)^T = [f_1(t), f_2(t), \dots, f_N(t)]$ and all coefficients $m_{kn}$ into the $N \times N$ matrix $M$. Let us assume we have found all $N$ eigenvalues $\lambda_\ell$ and eigenvectors $\mathbf{v}_\ell$ of $M$ according to
$$M\,\mathbf{v}_\ell = \lambda_\ell\, \mathbf{v}_\ell. \tag{3.62}$$
We can then show that the general solution of this system is given by
$$\mathbf{f}(t) = \sum_\ell c_\ell\, \mathbf{v}_\ell\, e^{\lambda_\ell t}. \tag{3.63}$$
• That it is a solution you can prove yourself in 3 lines by inserting (3.63) into (3.61) and then using (3.62). For the statement that these are also all solutions, we refer to math courses.
• You may think of t as time, which is the case for which we will now use this result, but it can
of course be any variable.
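As a quick numerical illustration (with a hypothetical $2\times 2$ matrix $M$ and initial condition), the following Python sketch builds the solution (3.63) from the eigendecomposition and compares it with the exact matrix exponential $\mathbf{f}(t) = e^{Mt}\mathbf{f}(0)$:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example: any diagonalizable matrix M and initial condition work.
M = np.array([[-3.0, 1.0],
              [1.0, -3.0]])
f0 = np.array([1.0, 0.0])

lam, V = np.linalg.eig(M)      # eigenvalues lambda_l and eigenvectors v_l (columns)
c = np.linalg.solve(V, f0)     # coefficients c_l from f(0) = sum_l c_l v_l

def f(t):
    """General solution f(t) = sum_l c_l v_l exp(lambda_l t), Eq. (3.63)."""
    return (V * np.exp(lam * t)) @ c

# Compare with the exact propagator f(t) = expm(M t) f(0):
t = 0.7
print(np.allclose(f(t), expm(M * t) @ f0))   # True
```

The coefficients $c_\ell$ play the role of the initial conditions, exactly as in the notes.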
Second order system of linear differential equations: We can now use the same trick also for a system of second order in time such as (3.59):
$$\frac{d^2}{dt^2}\mathbf{f}(t) = M\,\mathbf{f}(t). \tag{3.64}$$
If all eigenvalues $\lambda_\ell$ of the matrix $M$ are non-zero, this has a solution
$$\mathbf{f}(t) = \sum_\ell \left[ c_\ell^+\, \mathbf{v}_\ell\, e^{+\sqrt{\lambda_\ell}\, t} + c_\ell^-\, \mathbf{v}_\ell\, e^{-\sqrt{\lambda_\ell}\, t} \right]. \tag{3.65}$$
• We shall require only the case $\lambda_\ell \neq 0$ here. For eigenvalues $\lambda_\ell = 0$ you instead add a contribution $\mathbf{v}_\ell (c_\ell^0 + c_\ell^1 t)$ to the sum.
• When reading the books, you will find that I have deviated in my choice of how to present the solutions for the coupled differential equation systems. I find the presented way more straightforward than the ones from the books, but shall comment on the latter near Eq. (3.85) later. If one of the readers can tell me a reason for introducing “generalized eigenvalue problems” to solve the coupled oscillator which is better than “all the books do it”, they get a cookie.
We can now apply Eq. (3.65) to Eq. (3.59). The general case, despite there being only two oscillators,
is still a bit messy, so we look at two instructive cases:
Link to cos and sin: We can invert Eq. (3.67) to find
$$\cos(\omega t) = \frac{1}{2}\left(e^{i\omega t} + e^{-i\omega t}\right), \qquad \sin(\omega t) = \frac{1}{2i}\left(e^{i\omega t} - e^{-i\omega t}\right). \tag{3.68}$$
Using these functions, we can write the solution for the two coupled carts in Eq. (3.59) explicitly as
$$\mathbf{x}(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\left(c_0^+ e^{i\omega_0 t} + c_0^- e^{-i\omega_0 t}\right) + \begin{bmatrix} 1 \\ -1 \end{bmatrix}\left(c_1^+ e^{i\omega_1 t} + c_1^- e^{-i\omega_1 t}\right), \tag{3.69}$$
where we defined $\omega_0 = \sqrt{k/m}$ and $\omega_1 = \sqrt{3k/m}$.
• As stated before, the (complex) coefficients $c_\ell^\pm$ are determined by the initial conditions $\mathbf{x}(t=0)$ and $\dot{\mathbf{x}}(t=0)$.
• It might at first sight be confusing that complex numbers enter the solution. However, this makes our calculations easier and represents no problem. To see the latter, convince yourself of the following important points: (i) If $\mathbf{x}(t=0)$ is real, then according to (3.59) it will remain real at all times, since the matrix $\tilde{M}$ is real. (ii) If a complex function $z(t) = x(t) + iy(t)$ with real $x, y$ satisfies Eq. (3.59), then the real part $x$ and the imaginary part $y$ also satisfy (3.59) separately. We thus know that inserting the right initial conditions will give us a real solution, or alternatively we can just take the real part of whatever solution we find. Take the viewpoint that you find more comfortable.
Taking the latter viewpoint (ii), we can also get rid of complex numbers at this point: Let us write the real part of each term in round brackets in (3.69):
$$2\,\mathrm{Re}\!\left[c_\ell^+ e^{i\omega_\ell t} + c_\ell^- e^{-i\omega_\ell t}\right] = c_\ell^+ e^{i\omega_\ell t} + c_\ell^- e^{-i\omega_\ell t} + c_\ell^{+*} e^{-i\omega_\ell t} + c_\ell^{-*} e^{i\omega_\ell t}$$
$$= \underbrace{(c_\ell^+ + c_\ell^{-*})}_{\equiv A_\ell e^{i\varphi_\ell}/2}\, e^{i\omega_\ell t} + \underbrace{(c_\ell^- + c_\ell^{+*})}_{\equiv A_\ell e^{-i\varphi_\ell}/2}\, e^{-i\omega_\ell t} = A_\ell \cos(\omega_\ell t + \varphi_\ell).$$
In the second line, we wrote the pre-factors of the complex exponentials in polar notation, using the real numbers $A_\ell = 2|c_\ell^+ + c_\ell^{-*}|$ and $\varphi_\ell = \arg[c_\ell^+ + c_\ell^{-*}]$, and in the last line we used Eq. (3.68).
Normal modes and normal frequencies for two coupled oscillators
$$\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = A_0 \begin{bmatrix} 1 \\ 1 \end{bmatrix} \cos(\omega_0 t + \varphi_0) + A_1 \begin{bmatrix} 1 \\ -1 \end{bmatrix} \cos(\omega_1 t + \varphi_1), \tag{3.71}$$
where the $A_k$ are normal mode amplitudes, the $\omega_k$ are normal mode frequencies $\omega_0 = \sqrt{k/m}$, $\omega_1 = \sqrt{3k/m}$, the $\varphi_k$ are normal mode phases and the vectors $[1, 1]^T$ and $[1, -1]^T$ describe the normal modes themselves.
• We still have unknown numbers Ak , 'k in the solution that need to be determined by the
initial conditions.
• If we preferred, we could have written sin instead of cos, since that just changes 'k .
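As a quick numerical sanity check, one can verify that (3.71) indeed solves the coupled equations of motion of the two carts ($m\ddot{x}_1 = -(k_1+k_2)x_1 + k_2 x_2$ and likewise for cart 2, with $k_1=k_2=k_3=k$, $m_1=m_2=m$). The values below are arbitrary, hypothetical choices; the second derivative is taken by finite differences:

```python
import numpy as np

# Hypothetical spring constant, mass, amplitudes and phases.
k, m = 2.0, 1.5
Mtil = np.array([[2*k/m, -k/m],
                 [-k/m, 2*k/m]])        # two-cart matrix for k1 = k2 = k3 = k
w0, w1 = np.sqrt(k/m), np.sqrt(3*k/m)   # normal mode frequencies
A0, A1, p0, p1 = 1.0, 0.4, 0.3, -1.1

def x(t):
    """Normal mode solution, Eq. (3.71)."""
    return (A0 * np.array([1.0, 1.0]) * np.cos(w0*t + p0)
            + A1 * np.array([1.0, -1.0]) * np.cos(w1*t + p1))

def xdd(t, h=1e-4):
    """Second time derivative via central finite differences."""
    return (x(t + h) - 2*x(t) + x(t - h)) / h**2

# Newton's equations, xdd = -Mtil x, hold at any time t
# (up to the small finite-difference error):
for t in (0.0, 0.37, 2.5):
    assert np.allclose(xdd(t), -Mtil @ x(t), atol=1e-5)
```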
Let us now look at what these normal modes mean. Firstly, we see from Eq. (3.71) that whenever only one $A_k$ is non-zero, the motion is simple harmonic oscillation (but pertaining to both carts at once).
We have a symmetric normal mode for $A_1 = 0$; then $x_1(t) = x_2(t) = A_0 \cos(\omega_0 t + \varphi_0)$, thus both carts move perfectly synchronized. This motion is visualized below, for zero phase offset $\varphi_0 = 0$. We see that in this motion the middle spring is actually never stretched, so it makes sense that the carts move at the same frequency as if they were individually attached to the wall only.
The second normal mode is anti-symmetric: $x_1(t) = -x_2(t) = A_1 \cos(\omega_1 t + \varphi_1)$, so the cart motion is always exactly opposite:
In this mode the middle spring is heavily involved, so the normal mode frequency differs from the single oscillator one.
The differential equation (3.59) is linear in $\mathbf{x}$, so it obeys the superposition principle: any linear combination of valid solutions will itself be a solution. We can thus express the most general solution (3.71) as a superposition of the two normal modes. Depending on the initial conditions, usually both modes will be involved, which can look rather irregular:
A second tractable case is if the coupling between the oscillating carts is weak, so for the middle spring $k_2 \ll k_1, k_3$, with $k_1 = k_3 = k$. We still keep $m_1 = m_2 = m$. We can proceed as before, but instead of (3.66) we now get
$$\tilde{M} = \begin{bmatrix} \frac{k+k_2}{m} & -\frac{k_2}{m} \\[4pt] -\frac{k_2}{m} & \frac{k+k_2}{m} \end{bmatrix}. \tag{3.72}$$
The eigenvectors of this matrix are actually the same as those of (3.66), but one of the eigenvalues changes, so we now have eigenmode frequencies
$$\omega_0 = \sqrt{\frac{k}{m}}, \qquad \omega_1 = \sqrt{\frac{k+2k_2}{m}}. \tag{3.73}$$
Since $k_2 \ll k$, these two frequencies are almost the same. Let us define $\bar{\omega} = (\omega_0 + \omega_1)/2$ and $\epsilon = (\omega_1 - \omega_0)/2$; then we know that $\bar{\omega} \approx \omega_0 \approx \omega_1$ and that $\epsilon \ll \bar{\omega}$ is small.
Inspection of the individual normal modes would give very similar results as in the three-equal-springs case, but for the weakly coupled case we can make sense of the combination of normal modes. Let us insert the frequencies into (3.71), set $A_0 = A_1 = A/2$ and $\varphi_0 = \varphi_1 = 0$, and return to the complex notation by replacing $\cos(\omega t)$ with $\exp(i\omega t)$. We then have
$$\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \mathrm{Re}\left\{ \frac{A}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{i(\bar{\omega}-\epsilon)t} + \frac{A}{2}\begin{bmatrix} 1 \\ -1 \end{bmatrix} e^{i(\bar{\omega}+\epsilon)t} \right\} \overset{\text{Eq. (3.68)}}{=} \mathrm{Re}\left\{ A \begin{bmatrix} \cos(\epsilon t) \\ -i \sin(\epsilon t) \end{bmatrix} e^{i\bar{\omega} t} \right\} = A \begin{bmatrix} \cos(\epsilon t)\cos(\bar{\omega} t) \\ \sin(\epsilon t)\sin(\bar{\omega} t) \end{bmatrix}. \tag{3.74}$$
Since we know that $\epsilon \ll \bar{\omega}$, these look as shown below. Each $x_k(t)$ rapidly oscillates with the mean frequency $\bar{\omega}$. However, the amplitude of these oscillations migrates between the two oscillators periodically, giving rise to a beat pattern.
left: Beating of two coupled harmonic oscillators. Fast oscillations have a period $T_\text{fast} = 2\pi/\bar{\omega}$ while the envelope changes on a much larger period $T_\text{slow} = 2\pi/\epsilon$ (for two beatings).
The same pattern is seen when, for example, superimposing two sound or light waves of nearby frequency or wavelength.
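The beat pattern is easy to reproduce numerically from Eq. (3.74); the values below are hypothetical, chosen only to satisfy $k_2 \ll k$:

```python
import numpy as np

# Weak coupling, k2 << k (hypothetical values), masses m = 1.
k, k2, m, A = 1.0, 0.02, 1.0, 1.0
w0, w1 = np.sqrt(k/m), np.sqrt((k + 2*k2)/m)   # Eq. (3.73)
wbar, eps = (w0 + w1)/2, (w1 - w0)/2           # mean frequency and half-splitting

t = np.linspace(0.0, 2*np.pi/eps, 4000)        # one full slow (envelope) period
x1 = A * np.cos(eps*t) * np.cos(wbar*t)        # Eq. (3.74)
x2 = A * np.sin(eps*t) * np.sin(wbar*t)

# Early on (t << pi/(2*eps)) cart 1 carries nearly all of the amplitude,
# while cart 2 is almost at rest -- the hallmark of the beat pattern.
early = slice(0, 50)
print(np.abs(x1[early]).max() > 0.9, np.abs(x2[early]).max() < 0.2)   # True True
```

Plotting `x1` and `x2` against `t` reproduces the beating figure above.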
So far we studied seemingly special scenarios. We will now show that these are actually highly representative of the most frequently encountered general dynamics. Consider a generic Lagrangian with $M$ generalized coordinates $q_n$, which we group into a single $M$-dimensional vector $\mathbf{q} = [q_1, \cdots, q_M]^T$. We assume the system is conservative, so that we can write a potential energy $V(\mathbf{q})$, which is now also a function in $M$ dimensions. Let the constraints be holonomic, $\mathbf{r}_\alpha = \mathbf{r}_\alpha(\mathbf{q})$.
We can then show that the Lagrangian takes the form
$$L = \frac{1}{2}\sum_{k\ell} A_{k\ell}(\mathbf{q})\,\dot{q}_k \dot{q}_\ell - V(\mathbf{q}). \tag{3.75}$$
For this, you can show that e.g. $A_{k\ell}(\mathbf{q}) = \sum_\alpha m_\alpha (\partial \mathbf{r}_\alpha/\partial q_k)\cdot(\partial \mathbf{r}_\alpha/\partial q_\ell)$, assuming that the unconstrained kinetic energy was $T = \frac{1}{2}\sum_\alpha m_\alpha \dot{\mathbf{r}}_\alpha^2$; however, the precise form of $A_{k\ell}(\mathbf{q})$ is not so important in the following. You can look at assignment 4 and quiz 1 for cases where the function $A_{k\ell}(\mathbf{q})$ was non-trivial and the kinetic energy contained cross-terms ($\sim \dot{q}_a \dot{q}_b$).
We can see from Eq. (2.29) that a special point in coordinate space is given by
$$\frac{\partial L}{\partial q_n} = -\frac{\partial V(\mathbf{q})}{\partial q_n} = 0. \tag{3.76}$$
Hence, if the system starts at such a point at rest, all generalized velocities $\dot{q}_n$ remain zero and it stays there: this is an equilibrium point.
The equilibrium is called stable if a small perturbation $\delta\mathbf{q}(t)$ around this point, $\mathbf{q}(t) = \mathbf{q}_0 + \delta\mathbf{q}(t)$, remains small (e.g. oscillates), and unstable if that is not the case.
We see in the figure below how this looks in some arbitrary high-dimensional energy landscape:
left: Sketch of some potential energy with two generalized coordinates $q_1, q_2$. All extremal points of this function satisfy $\partial V(\mathbf{q})/\partial q_n = 0$ and are hence equilibrium points; this includes maxima, minima and saddle points. We have drawn the cuts through this surface along the coordinate axes in brown (near a maximum) and pink (near a minimum).
• Of the extrema, only local minima will be stable; saddle points and maxima are unstable.
• We had seen some first examples of stable/unstable dynamics in assignment 3 Q4 (the hoop on a ring), or assignment 5 Q2, the stability of rotation around the different principal axes.
• I cannot draw more than two dimensions, but you can imagine that all the same concepts apply in any number of dimensions, i.e. for any number of coordinates.
It is more likely to find real physical systems in the vicinity of a stable equilibrium, because otherwise they would break, or, in the presence of dissipation, be driven from an unstable to a stable point. Since the system will typically not make more than small excursions away from $\mathbf{q}_0$, we can use a multi-variate Taylor expansion of the potential energy around the equilibrium point:
$$V(\mathbf{q}) \approx V(\mathbf{q}_0) + \sum_k \underbrace{\left.\frac{\partial V(\mathbf{q})}{\partial q_k}\right|_{\mathbf{q}=\mathbf{q}_0}}_{=0} (q_k - q_{0k}) + \frac{1}{2}\sum_{k\ell} \underbrace{\left.\frac{\partial^2 V(\mathbf{q})}{\partial q_k \partial q_\ell}\right|_{\mathbf{q}=\mathbf{q}_0}}_{\equiv K_{k\ell}} (q_k - q_{0k})(q_\ell - q_{0\ell}). \tag{3.77}$$
The linear term of the Taylor expansion is zero due to our definition (3.76) of an equilibrium point. For the remainder, let us assume we can change coordinates such that $\mathbf{q}_0 = 0$ and adjust our zero of energy such that $V(\mathbf{q}_0) = V(0) = 0$. Finally, we give the coefficients of the second order expansion terms the name “matrix $K$” as shown, and are done with simplifying the potential energy.
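If $V$ is only available numerically, the matrix $K_{k\ell}$ can be estimated by finite differences. A minimal sketch, using a hypothetical two-dimensional quadratic potential so that the exact Hessian is known for comparison:

```python
import numpy as np

# Hypothetical potential with its minimum at q0 = (0, 0):
# V(q) = q1^2 + q1*q2 + 2*q2^2, whose exact Hessian is [[2, 1], [1, 4]].
def V(q):
    return q[0]**2 + q[0]*q[1] + 2*q[1]**2

def hessian(V, q0, h=1e-4):
    """K_{kl} = d^2 V / (dq_k dq_l) at q0, via central finite differences."""
    n = len(q0)
    K = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            ek, el = np.zeros(n), np.zeros(n)
            ek[k], el[l] = h, h
            K[k, l] = (V(q0 + ek + el) - V(q0 + ek - el)
                       - V(q0 - ek + el) + V(q0 - ek - el)) / (4 * h * h)
    return K

K = hessian(V, np.zeros(2))
print(np.allclose(K, [[2.0, 1.0], [1.0, 4.0]]))   # True; K is also symmetric
```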
For the kinetic energy, we want to expand everything to second order in the $\dot{q}_k$. Since the term $\dot{q}_k \dot{q}_\ell$ in (3.75) is already of second order, we only need the constant part $M_{k\ell} = A_{k\ell}(0)$ of the prefactor. With that, we finally arrive at a much simplified
$$L = \frac{1}{2}\sum_{k\ell}\left(M_{k\ell}\,\dot{q}_k \dot{q}_\ell - K_{k\ell}\,q_k q_\ell\right). \tag{3.78}$$
• This is called a quadratic form. In vector notation you can write it as $L = \frac{1}{2}\left(\dot{\mathbf{q}}^T M \dot{\mathbf{q}} - \mathbf{q}^T K \mathbf{q}\right)$.
• We know that M and K are symmetric matrices, and shall use that shortly.
Example 38, Bead on a wire: Consider the bead of mass m constrained to sit on an
arbitrarily shaped wire shown in the figure below.
$$L = \frac{1}{2} m \dot{q}^2 - \frac{1}{2} m \underbrace{g\,\frac{\partial^2 f(q)}{\partial q^2}}_{=\omega^2}\, q^2. \tag{3.79}$$
We see directly that despite the more complicated starting point, we reached the Lagrangian
for a simple harmonic oscillator.
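As a quick numerical check of Eq. (3.79), take the hypothetical wire shape $f(q) = q^2/(2R)$, which approximates a circle of radius $R$ near its bottom; then $\omega^2 = g\,f''(0) = g/R$, the familiar frequency of a pendulum of length $R$:

```python
import numpy as np

# Hypothetical wire shape: f(q) = q^2/(2R), with its bottom at q = 0.
g, R = 9.81, 2.0
f = lambda q: q**2 / (2*R)

h = 1e-4
fpp0 = (f(h) - 2*f(0.0) + f(-h)) / h**2   # f''(0) by central finite difference
omega = np.sqrt(g * fpp0)                  # Eq. (3.79): omega^2 = g f''(0)

print(abs(omega - np.sqrt(g/R)) < 1e-6)    # True: same as a pendulum of length R
```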
We now immediately proceed to find Lagrange's equations from (3.78). For this we can use the Kronecker delta symbol $\delta_{nm}$ introduced near (2.98) to write e.g. $\partial q_n/\partial q_m = \delta_{nm}$. Then
$$\frac{\partial L}{\partial q_n} = -\frac{1}{2}\sum_{k\ell} K_{k\ell}\frac{\partial}{\partial q_n}(q_k q_\ell) = -\frac{1}{2}\sum_{k\ell} K_{k\ell}\left(\delta_{nk}\, q_\ell + q_k\, \delta_{n\ell}\right) = -\frac{1}{2}\left(\sum_\ell K_{n\ell}\, q_\ell + \sum_k K_{kn}\, q_k\right) = -\sum_\ell K_{n\ell}\, q_\ell, \tag{3.80}$$
where in the last equality we have used that the matrix $K$ is symmetric ($K_{k\ell} = K_{\ell k}$). In the same way, we find
$$\frac{\partial L}{\partial \dot{q}_n} = \sum_\ell M_{n\ell}\,\dot{q}_\ell, \tag{3.81}$$
so that Lagrange's equations $\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_n} = \frac{\partial L}{\partial q_n}$ become
$$M\ddot{\mathbf{q}} = -K\mathbf{q} \quad \Leftrightarrow \quad \ddot{\mathbf{q}} = -M^{-1}K\,\mathbf{q}. \tag{3.82}$$
• For the equivalence written, we of course require the matrix $M$ to be invertible. In the second form, you can then see that Eq. (3.59) was a special case of this form.
• Our general solution method (3.65) applies straightforwardly to Eq. (3.82).
Similar to Eq. (3.71), we find the oscillator solutions of Eq. (3.82) as:
$$\mathbf{q}(t) = \mathrm{Re}\sum_\ell a_\ell\,\mathbf{v}_\ell\, e^{i\omega_\ell t}. \tag{3.83}$$
As before, the $\omega_\ell$ are called normal mode frequencies, the $\mathbf{v}_\ell$ normal modes, and their respective (complex) coefficients $a_\ell$ are fixed by the initial conditions. The $\omega_\ell$ are found as $\omega_\ell = \sqrt{\lambda_\ell}$ from the non-zero eigenvalues $\lambda_\ell$ of $M^{-1}K$; the normal modes are the corresponding eigenvectors.
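The recipe in the box is easy to carry out numerically; the following sketch uses the two-cart system with $k_1 = k_2 = k_3 = k = 1$ and $m_1 = m_2 = 1$ as a hypothetical test case, so that $M$ is the identity and the answer is known from Eq. (3.71):

```python
import numpy as np

# Hypothetical test case: two carts with k1 = k2 = k3 = 1 and unit masses,
# so M = identity and K is the stiffness matrix of the coupled carts.
Mmat = np.eye(2)
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

lam, V = np.linalg.eig(np.linalg.solve(Mmat, K))  # eigen-decomposition of M^{-1} K
order = np.argsort(lam)
omegas = np.sqrt(lam[order])   # normal mode frequencies omega_l = sqrt(lambda_l)
modes = V[:, order]            # normal modes v_l as columns

print(omegas)                  # [1.0, sqrt(3)]: omega_0 and omega_1 from before
```

The eigenvectors come out proportional to $[1,1]^T$ and $[1,-1]^T$, the symmetric and anti-symmetric normal modes found earlier.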
• Since we started with a very general mechanical problem near equilibrium at the beginning
of section 3.6.2, we now see that a large portion of physics can be understood in terms of
(coupled) harmonic oscillators.
• If you prefer, you can do similar steps to reduce Eq. (3.83) to cosines with arbitrary phases, as we did for Eq. (3.71). However, it is typically easier to work with complex exponentials as far as possible.
• After finding eigenvalues and eigenvectors of $M^{-1}K$ we can again use a transformation matrix $O$ as in (3.29) to define new coordinates $\tilde{\mathbf{q}} = O\mathbf{q}$ that bring the Lagrange equation into diagonal form:
$$K\mathbf{a} = \omega^2 M\,\mathbf{a} \quad \Leftrightarrow \quad (K - \omega^2 M)\,\mathbf{a} = 0. \tag{3.85}$$
continued: Method 2: We can always reduce a system of second order differential equations to a (larger) system of first order differential equations with the following trick: Define $\mathbf{u}(t) = \dot{\mathbf{q}}(t)$, and combine $\mathbf{y}(t) = [\mathbf{q}, \mathbf{u}]^T$ into a large $2M$-dimensional vector. We can then write Eq. (3.82) as
$$\dot{\mathbf{y}}(t) = \begin{bmatrix} 0 & \mathbb{1} \\ -M^{-1}K & 0 \end{bmatrix} \mathbf{y}(t), \tag{3.86}$$
where all four items in the matrix above are themselves $M \times M$ matrix blocks. At this point we can use solution methods for first order systems, such as Eq. (3.63). See also the numerics part of assignments 1 or 4.
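A minimal sketch of this reduction, reusing the two-cart matrices with $k = m = 1$ as a hypothetical test case and handing the block matrix of Eq. (3.86) to SciPy's general-purpose first order integrator:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-cart system (k1 = k2 = k3 = 1, unit masses), so M = identity
# and M^{-1}K equals the stiffness matrix below.
Kred = np.array([[2.0, -1.0],
                 [-1.0, 2.0]])
Z, I = np.zeros((2, 2)), np.eye(2)
Amat = np.block([[Z, I],
                 [-Kred, Z]])           # block matrix of Eq. (3.86)

y0 = np.array([1.0, 0.0, 0.0, 0.0])     # q(0) = [1, 0], u(0) = [0, 0]
sol = solve_ivp(lambda t, y: Amat @ y, (0.0, 5.0), y0,
                rtol=1e-10, atol=1e-12, dense_output=True)

# Analytic solution from the normal modes [1,1] and [1,-1] for these
# initial conditions: q(t) = ([1,1] cos(t) + [1,-1] cos(sqrt(3) t)) / 2.
t = 3.3
q_exact = 0.5*(np.array([1, 1])*np.cos(t) + np.array([1, -1])*np.cos(np.sqrt(3)*t))
assert np.allclose(sol.sol(t)[:2], q_exact, atol=1e-6)
```

The first half of `sol.sol(t)` contains the coordinates $\mathbf{q}(t)$, the second half the velocities $\mathbf{u}(t)$.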