
Week 9

PHY 305 Classical Mechanics


Instructor: Sebastian Wüster, IISER Bhopal, 2020

These notes are provided for the students of the class above only. There is no guarantee of correctness; please contact me if you spot a mistake.

3.6 Coupled oscillators

At the beginning of section 3.1.1 we had seen that you can imagine a rigid body as composed of
many mass points with fixed distances between them, and motivated this also by the idea that,
after all, the rigid body really is composed of bound molecules. However, the forces keeping the
molecules together will not rigidly fix their distances, but instead allow them vibrations or
oscillations around equilibrium positions, and neighboring molecules will strongly affect one
another. In reality this thus turns into a system of coupled oscillators, which we now study
more closely.

3.6.1 Two coupled oscillators

Let us first see what happens in a simple example, which is somewhat unrealistic but chosen for
conceptual simplicity: two carts can move without friction on a rail, and are attached between
two walls via springs as shown in the figure below.

left: The masses of the carts are m1 and m2, and the spring constants of the three springs are
k1, k2, k3 as shown. We take coordinates where the displacement of cart n from its original
position is xn, and assume that for x1 = x2 = 0, which denotes the equilibrium (pink) position
of the carts, all three springs are at their natural length (neither stretched nor compressed).

In the absence of the middle spring with spring constant k2, each of these carts just forms a
simple harmonic oscillator with known frequency ω_n = √(k_n/m_n). However, due to the middle
spring, you cannot shift cart 1 without exerting a pulling or pushing force on cart 2, so the
two oscillators are now coupled.

We set up Newton’s (or equivalently Lagrange’s) equations, and find:


m_1 \ddot{x}_1 = -k_1 x_1 + k_2 (x_2 - x_1),
m_2 \ddot{x}_2 = -k_3 x_2 - k_2 (x_2 - x_1).    (3.58)

As expected since the oscillators are coupled, the equation for ẍ1 depends on x2 and vice versa.
We can solve (3.58) by adding or subtracting the two equations, but we will soon encounter more
complicated coupled systems (with more than two variables), so let us instead directly learn a more
powerful approach. Let us form a vector of positions x(t) = [x1 (t), x2 (t)]T and write (3.58) as
" #
k1 +k2 k2
ẍ(t) = M̃ x(t), with matrix M̃ = m1 m1 . (3.59)
k2 k3 +k2
m2 m2

Motivated by the need to solve matrix-differential equations of this kind, we look at:

Coupled system of linear differential equations: Consider the system of differential
equations with constant coefficients

\frac{d}{dt} f_1(t) = m_{11} f_1(t) + m_{12} f_2(t) + \cdots + m_{1N} f_N(t),
\frac{d}{dt} f_2(t) = m_{21} f_1(t) + m_{22} f_2(t) + \cdots + m_{2N} f_N(t),
\cdots
\frac{d}{dt} f_N(t) = m_{N1} f_1(t) + m_{N2} f_2(t) + \cdots + m_{NN} f_N(t).    (3.60)

for functions f_k(t). As a warmup we initially look at an equation of first order in time
(unlike (3.59)). Importantly, the coefficients m_{kn} depend neither on t nor on the f_k.
We can write this in matrix form as

\frac{d}{dt} \mathbf{f}(t) = M \mathbf{f}(t),    (3.61)

where we have grouped all functions into an N-component vector \mathbf{f}(t)^T =
[f_1(t), f_2(t), \dots, f_N(t)] and all coefficients m_{kn} into the N × N matrix M. Let us
assume we have found all N eigenvalues λ_ℓ and eigenvectors v_ℓ of M according to

M \mathbf{v}_\ell = \lambda_\ell \mathbf{v}_\ell.    (3.62)

We can then show that the general solution of this system is given by

\mathbf{f}(t) = \sum_\ell c_\ell \mathbf{v}_\ell e^{\lambda_\ell t},    (3.63)

where the c_ℓ are a set of coefficients determined by the initial conditions \mathbf{f}(0).

• That it is a solution you can prove yourself in 3 lines by inserting (3.63) into (3.61) and then
using (3.62). For the statement that these are also all solutions, we refer to math courses.

• You may think of t as time, which is the case for which we will now use this result, but it can
of course be any variable.
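As a quick numerical sanity check of Eq. (3.63) (not part of the original notes), the following Python/NumPy sketch builds the eigen-decomposition solution for a small, arbitrarily chosen matrix M and compares it against the exact matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2x2 coefficient matrix with distinct real eigenvalues (-1 and -2).
M = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
f0 = np.array([1.0, 0.0])            # initial condition f(0)

lam, V = np.linalg.eig(M)            # M v_l = lam_l v_l (columns of V)
c = np.linalg.solve(V, f0)           # f(0) = sum_l c_l v_l fixes the c_l

t = 0.7
f_eig = (V * np.exp(lam * t)) @ c    # Eq. (3.63): sum_l c_l v_l e^{lam_l t}
f_ref = expm(M * t) @ f0             # exact solution via the matrix exponential
assert np.allclose(f_eig, f_ref)
```

The column-wise scaling `V * np.exp(lam * t)` followed by the product with `c` is exactly the sum over eigenmodes in (3.63).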

Second order system of linear differential equations: We can now use the same trick
also for a system of second order in time such as (3.59):

\frac{d^2}{dt^2} \mathbf{f}(t) = M \mathbf{f}(t).    (3.64)

If all eigenvalues λ_ℓ of the matrix M are non-zero, this has the solution

\mathbf{f}(t) = \sum_\ell \left[ c_\ell^+ \mathbf{v}_\ell e^{+\sqrt{\lambda_\ell}\, t} + c_\ell^- \mathbf{v}_\ell e^{-\sqrt{\lambda_\ell}\, t} \right].    (3.65)

• We shall require only the case λ_ℓ ≠ 0 here. For eigenvalues λ_ℓ = 0 you instead add a
contribution v_ℓ (c_ℓ^0 + c_ℓ^1 t) to the sum.

• When you read the books, you will find that I have deviated in my choice of how to present
the solutions of coupled differential equation systems. I find the presented way more
straightforward than the ones from the books, but shall comment on the latter near
Eq. (3.85) later. If one of the readers can tell me a reason for introducing "generalized
eigenvalue problems" to solve the coupled oscillator which is better than "all the books do
it", they get a cookie.

We can now apply Eq. (3.65) to Eq. (3.59). Despite there being only two oscillators, the general
case is still a bit messy, so we look at two instructive cases:

Equal masses and springs:

First we look at carts of equal mass m1 = m2 = m coupled by three identical springs
k1 = k2 = k3 = k. Then M̃ in Eq. (3.59) becomes

\tilde{M} = \begin{pmatrix} -\frac{2k}{m} & \frac{k}{m} \\[4pt] \frac{k}{m} & -\frac{2k}{m} \end{pmatrix},    (3.66)

which has eigenvalues λ_0 = −k/m and λ_1 = −3k/m, with corresponding normalized eigenvectors
v_0 = [1, 1]^T/√2 and v_1 = [1, −1]^T/√2. Since k and m are positive, the λ_ℓ are negative,
so when taking the square root in (3.65) we find complex numbers, e.g. √λ_0 = i√(k/m) etc.
The solution thus contains the

Complex exponential function:

e^{i\omega t} = \cos(\omega t) + i \sin(\omega t).    (3.67)

left: For a number z = e^{iφ}, we have |z| = 1, so the complex number always sits on the unit
circle in the complex plane. The angle φ is called the "phase" of that number.

Link to cos and sin: We can invert Eq. (3.67) to find

\cos(\omega t) = \frac{1}{2}\left(e^{i\omega t} + e^{-i\omega t}\right),
\sin(\omega t) = \frac{1}{2i}\left(e^{i\omega t} - e^{-i\omega t}\right).    (3.68)

Using these functions, we can write the solution for the two coupled carts in Eq. (3.59)
explicitly as

\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
= \left( c_0^+ e^{i\omega_0 t} + c_0^- e^{-i\omega_0 t} \right) \begin{pmatrix} 1 \\ 1 \end{pmatrix}
+ \left( c_1^+ e^{i\omega_1 t} + c_1^- e^{-i\omega_1 t} \right) \begin{pmatrix} 1 \\ -1 \end{pmatrix},    (3.69)

where we defined ω_0 = √(k/m) and ω_1 = √(3k/m).

• As stated before, the (complex) coefficients c_ℓ^± are determined by the initial conditions
x(t = 0) and ẋ(t = 0).

• It might at first sight be confusing that complex numbers enter the solution. However, this
makes our calculations easier and represents no problem. To see the latter, convince yourself
of the following important points: (i) If x(t = 0) is real, then according to (3.59) it will remain
real at all times, since the matrix M̃ is real. (ii) If a complex function z(t) = x(t) + iy(t)
with real x, y satisfies Eq. (3.59), then the real part x and the imaginary part y also satisfy
(3.59) separately. We thus know that inserting the right initial conditions will give us a real
solution; alternatively, we can just take the real part of whatever solution we find. Take the
viewpoint that you find more comfortable.

Taking the latter viewpoint (ii), we can also get rid of complex numbers at this point: let us
write the real part of each term in round brackets in (3.69):

2\,\mathrm{Re}\!\left[ c_\ell^+ e^{i\omega_\ell t} + c_\ell^- e^{-i\omega_\ell t} \right]
= c_\ell^+ e^{i\omega_\ell t} + c_\ell^- e^{-i\omega_\ell t} + c_\ell^{+*} e^{-i\omega_\ell t} + c_\ell^{-*} e^{i\omega_\ell t}
= \underbrace{(c_\ell^+ + c_\ell^{-*})}_{\equiv A_\ell e^{i\varphi_\ell}/2} e^{i\omega_\ell t}
+ \underbrace{(c_\ell^- + c_\ell^{+*})}_{\equiv A_\ell e^{-i\varphi_\ell}/2} e^{-i\omega_\ell t}
= A_\ell \left( e^{i\omega_\ell t} e^{i\varphi_\ell} + e^{-i\omega_\ell t} e^{-i\varphi_\ell} \right)/2
= A_\ell \cos(\omega_\ell t + \varphi_\ell).    (3.70)

In the second line, we wrote the prefactors of the complex exponentials in polar notation, using
real numbers A_\ell = 2|c_\ell^+ + c_\ell^{-*}| and \varphi_\ell = \arg[c_\ell^+ + c_\ell^{-*}],
and in the last line we used Eq. (3.68).

Altogether, we can now write the real solution as follows:

Normal modes and normal frequencies for two coupled oscillators

\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
= A_0 \cos(\omega_0 t + \varphi_0) \begin{pmatrix} 1 \\ 1 \end{pmatrix}
+ A_1 \cos(\omega_1 t + \varphi_1) \begin{pmatrix} 1 \\ -1 \end{pmatrix},    (3.71)

where the A_k are normal mode amplitudes, the ω_k are normal mode frequencies ω_0 = √(k/m),
ω_1 = √(3k/m), the φ_k are normal mode phases, and the vectors [1, 1]^T and [1, −1]^T describe
the normal modes themselves.

• We still have unknown numbers A_k, φ_k in the solution that need to be determined by the
initial conditions.

• If we preferred, we could have written sin instead of cos, since that just changes φ_k.
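A short Python/NumPy check (with illustrative values k = m = 1, not from the notes) that the matrix (3.66) indeed has eigenvalues −k/m and −3k/m with eigenvectors proportional to [1, 1]^T and [1, −1]^T, and that the symmetric normal mode solves ẍ = M̃x:

```python
import numpy as np

k, m = 1.0, 1.0                     # illustrative values
Mt = np.array([[-2*k/m,  k/m],
               [ k/m,  -2*k/m]])    # matrix of Eq. (3.66)

lam, V = np.linalg.eig(Mt)
order = np.argsort(-lam)            # put lam_0 = -k/m first, lam_1 = -3k/m second
lam, V = lam[order], V[:, order]

assert np.allclose(lam, [-k/m, -3*k/m])
assert np.isclose(abs(V[0, 0]), abs(V[1, 0]))   # mode 0 proportional to [1, 1]
assert np.isclose(V[0, 1], -V[1, 1])            # mode 1 proportional to [1, -1]

# the symmetric normal mode x(t) = cos(w0 t) [1, 1]^T satisfies x'' = Mt x:
w0 = np.sqrt(k/m)
t = 0.3
x = np.cos(w0*t) * np.array([1.0, 1.0])
assert np.allclose(-w0**2 * x, Mt @ x)
```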

Let us now look at what these normal modes mean. Firstly, we see from Eq. (3.71) that whenever
only one A_k is non-zero, the motion will be simple harmonic oscillation (but pertaining to both
carts at once).

We have a symmetric normal mode for A_1 = 0; then x_1(t) = x_2(t) = A_0 cos(ω_0 t + φ_0), so both
carts move perfectly synchronized. This motion is visualized below, for zero phase offset φ_0 = 0.

We see that in this motion the middle spring is actually never stretched, so it makes sense that
the carts move at the same frequency as if they were individually attached to the wall only.

The second normal mode is anti-symmetric: x_1(t) = −x_2(t) = A_1 cos(ω_1 t + φ_1), so the cart
motion is always exactly opposite:

In this mode the middle spring is heavily involved, so the normal mode frequency differs from the
single oscillator one.

The differential equation (3.59) is linear in x, so it obeys the superposition principle: any
linear combination of valid solutions will itself be a solution. We can thus express the most
general solution (3.71) as a superposition of the two normal modes. Depending on the initial
conditions, usually both modes will be involved, which can look rather irregular:

To see everything we just learnt in an experiment, watch this video.

Weakly coupled oscillators:

A second tractable case is when the coupling between the oscillating carts is weak, i.e. for the
middle spring k_2 ≪ k_1, k_3 with k_1 = k_3 = k. We still keep m_1 = m_2 = m. We can proceed as
before, but instead of (3.66) we now get

\tilde{M} = \begin{pmatrix} -\frac{k + k_2}{m} & \frac{k_2}{m} \\[4pt] \frac{k_2}{m} & -\frac{k + k_2}{m} \end{pmatrix}.    (3.72)

The eigenvectors of this matrix are actually the same as those of (3.66), but one of the
eigenvalues changes, so we now have eigenmode frequencies

\omega_0 = \sqrt{\frac{k}{m}}, \qquad \omega_1 = \sqrt{\frac{k + 2k_2}{m}}.    (3.73)

Since k_2 ≪ k, these two frequencies are almost the same. Let us define ω̄ = (ω_0 + ω_1)/2 and
ε = (ω_1 − ω_0)/2; then we know that ω̄ ≈ ω_0 ≈ ω_1 and that ε ≪ ω̄ is small.

Inspection of the individual normal modes would give very similar results to the three-equal-springs
case, but for the weakly coupled case we can also make sense of the combination of normal modes.

Let us insert the frequencies into (3.71), set A_0 = A_1 = A/2 and φ_0 = φ_1 = 0, and return to
the complex notation by replacing cos(ωt) with exp(iωt). We then have

\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
= \mathrm{Re}\left\{ \frac{A}{2} \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{i(\bar\omega - \varepsilon)t}
+ \frac{A}{2} \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{i(\bar\omega + \varepsilon)t} \right\}
\overset{\text{Eq. (3.68)}}{=} \mathrm{Re}\left\{ A \begin{pmatrix} \cos(\varepsilon t) \\ -i \sin(\varepsilon t) \end{pmatrix} e^{i\bar\omega t} \right\}
= A \begin{pmatrix} \cos(\varepsilon t)\cos(\bar\omega t) \\ \sin(\varepsilon t)\sin(\bar\omega t) \end{pmatrix}.    (3.74)

Since we know that ε ≪ ω̄, these look as shown below. Each x_k(t) rapidly oscillates with the mean
frequency ω̄. However, the amplitude of these oscillations migrates between the two oscillators
periodically, giving rise to a beat pattern.

left: Beating of two coupled harmonic oscillators. Fast oscillations have a period
T_fast = 2π/ω̄, while the envelope changes on a much longer period T_slow = 2π/ε (spanning two
beats).

The same pattern is seen when, e.g., superimposing two sound or light waves of nearby frequency
or wavelength.
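The beat structure of Eq. (3.74) rests on the trigonometric identity cos(ω_0 t) + cos(ω_1 t) = 2 cos(εt) cos(ω̄t); a minimal Python check with illustrative frequencies:

```python
import numpy as np

# Illustrative nearby frequencies; wbar is the mean, eps the half-difference.
w0, w1, A = 1.0, 1.1, 2.0
wbar, eps = (w0 + w1)/2, (w1 - w0)/2

t = np.linspace(0.0, 200.0, 5001)
x1_sum  = A/2 * (np.cos(w0*t) + np.cos(w1*t))   # superposition of the two modes
x1_beat = A * np.cos(eps*t) * np.cos(wbar*t)    # slow envelope times fast carrier

assert np.allclose(x1_sum, x1_beat)
```

The slow factor cos(εt) is exactly the envelope that transfers the oscillation amplitude back and forth between the carts.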

3.6.2 Many coupled oscillators

So far we have studied seemingly special scenarios. We will now show that these are actually
highly representative of the most frequently encountered general dynamics. Consider a generic
Lagrangian with M generalized coordinates q_n, which we group into a single M-dimensional vector
q = [q_1, · · · , q_M]^T. We assume the system is conservative, so that we can write a potential
energy V(q), which is now also a function in M dimensions. Let the constraints be holonomic,
r = r(q). We can then show that the Lagrangian takes the form

L = \frac{1}{2} \sum_{k\ell} A_{k\ell}(\mathbf{q})\, \dot q_k \dot q_\ell - V(\mathbf{q}).    (3.75)

For this, you can show that e.g. A_{k\ell}(\mathbf{q}) = \sum_\alpha m_\alpha
(\partial \mathbf{r}_\alpha/\partial q_k) \cdot (\partial \mathbf{r}_\alpha/\partial q_\ell),
assuming that the unconstrained kinetic energy was T = \frac{1}{2} \sum_\alpha m_\alpha
\dot{\mathbf{r}}_\alpha^2; however, the precise form of A_{kℓ}(q) is not so important in the
following. You can look at assignment 4 and quiz 1 for cases where the function A_{kℓ}(q) was
non-trivial and the kinetic energy contained cross-terms (∼ q̇_a q̇_b).

We can see from Eq. (2.29) that a special point in coordinate space is given by

Equilibrium points: We say a mechanical system is in equilibrium at a point q_0 at which
all generalized forces vanish:

\frac{\partial L}{\partial q_n} = -\frac{\partial V(\mathbf{q})}{\partial q_n} = 0.    (3.76)

Hence all generalized velocities q̇_n remain zero if initially zero.
The equilibrium is called stable if a small perturbation δq(t) around this point,
q(t) = q_0 + δq(t), will remain small (e.g. oscillate), and unstable if that is not the case.

We see in the figure below, how this looks in some arbitrary high dimensional energy landscape:

left: Sketch of some potential energy with two generalized coordinates q_1, q_2. All extremal
points of this function satisfy ∂V(q)/∂q_n = 0 and are hence equilibrium points; this includes
maxima, minima and saddle points. We have drawn the cuts through this surface along the
coordinate axes in brown (near a maximum) and pink (near a minimum).

• Of the extrema, only local minima will be stable; saddle points and maxima are unstable.

• We had seen some first examples of stable/unstable dynamics in assignment 3 Q4 (the hoop
on a ring) and in assignment 5 Q2 (the stability of rotation around the different principal axes).

• I cannot draw more than two dimensions, but you can imagine that all the same concepts
apply in any number of dimensions/ for any number of coordinates.

It is more likely to find real physical systems in the vicinity of a stable equilibrium (because
otherwise they would break, or be driven from an unstable to a stable point in the presence of
dissipation). Since the system will typically not make more than small excursions away from q_0,
we can use a multi-variate Taylor expansion of the potential energy around the equilibrium point:

V(\mathbf{q}) \approx V(\mathbf{q}_0)
+ \sum_k \underbrace{\left. \frac{\partial V(\mathbf{q})}{\partial q_k} \right|_{\mathbf{q}=\mathbf{q}_0}}_{=0} (q_k - q_{0k})
+ \frac{1}{2} \sum_{k\ell} \underbrace{\left. \frac{\partial^2 V(\mathbf{q})}{\partial q_k \partial q_\ell} \right|_{\mathbf{q}=\mathbf{q}_0}}_{\equiv K_{k\ell}} (q_k - q_{0k})(q_\ell - q_{0\ell}).    (3.77)

The linear term of the Taylor expansion is zero due to our definition (3.76) of an equilibrium
point. For the remainder, let us assume we can change coordinates such that q_0 = 0, and adjust
our zero of energy such that V(q_0) = V(0) = 0. Finally, we give the coefficients of the second
order expansion terms the name "matrix K" as shown, and are done with simplifying the potential
energy.

For the kinetic energy, we want to expand everything to second order in the q_k and q̇_k. Since
the term q̇_k q̇_ℓ in (3.75) is already of second order, we only need the constant part
M_{kℓ} = A_{kℓ}(0) of the prefactor. With that, we finally arrive at a much simplified

Lagrangian near an equilibrium point

L = \frac{1}{2} \sum_{k\ell} \left( M_{k\ell}\, \dot q_k \dot q_\ell - K_{k\ell}\, q_k q_\ell \right).    (3.78)

• This is called a quadratic form. In vector notation you can write it as
L = \frac{1}{2} \left( \dot{\mathbf{q}}^T M \dot{\mathbf{q}} - \mathbf{q}^T K \mathbf{q} \right).

• We know that M and K are symmetric matrices, and shall use that shortly.
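The matrix K can also be obtained numerically as the Hessian of V(q) at the equilibrium. As an illustrative sketch (spring constants chosen arbitrarily, not from the notes), the following Python snippet applies a central finite-difference Hessian to the two-cart potential of section 3.6.1 and recovers the expected K:

```python
import numpy as np

# Illustrative spring constants for the two-cart potential of section 3.6.1.
k1, k2, k3 = 1.0, 2.0, 3.0

def V(q):
    x1, x2 = q
    return 0.5*k1*x1**2 + 0.5*k2*(x2 - x1)**2 + 0.5*k3*x2**2

def hessian(V, q0, h=1e-5):
    """Central finite-difference Hessian K_kl = d^2 V / dq_k dq_l at q0."""
    n = len(q0)
    K = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            ek, el = np.eye(n)[k]*h, np.eye(n)[l]*h
            K[k, l] = (V(q0+ek+el) - V(q0+ek-el)
                       - V(q0-ek+el) + V(q0-ek-el)) / (4*h*h)
    return K

K = hessian(V, np.zeros(2))
K_exact = np.array([[k1 + k2, -k2],
                    [-k2, k3 + k2]])
assert np.allclose(K, K_exact, atol=1e-6)
```

With M = diag(m1, m2), the combination −M⁻¹K then reproduces the matrix M̃ of Eq. (3.59).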

Example 38, Bead on a wire: Consider a bead of mass m constrained to sit on an
arbitrarily shaped wire shown in the figure below.

left: Bead (brown), stuck on a wire defined by coordinates z = f(x), subject to gravity
V_grav = mgz (pink).

We want to use the generalized coordinate q = x. As we had seen in quiz one, starting
from the kinetic energy T = \frac{1}{2} m (\dot x^2 + \dot z^2) and then using the constraint
z = f(x), we find T = \frac{1}{2} m \left( 1 + \left( \frac{\partial f(q)}{\partial q} \right)^2 \right) \dot q^2
for the kinetic part of the Lagrangian and V = mg f(q) for the potential one. When expanding
to second order in q̇ and q, we can drop the second piece of that kinetic energy and Taylor
expand the potential around the minimum (violet dashed line in the figure):

L = \frac{1}{2} m \dot q^2 - \frac{1}{2} m \underbrace{g \left. \frac{\partial^2 f(q)}{\partial q^2} \right|_{q=0}}_{= \omega^2} q^2.    (3.79)

We see directly that despite the more complicated starting point, we reached the Lagrangian
for a simple harmonic oscillator.
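As a concrete sanity check of (3.79), not part of the original notes: taking f(x) = x²/(2R), a parabola approximating a circular wire of radius R near its bottom, the prediction ω² = g f''(0) = g/R recovers the familiar pendulum result. A minimal numerical sketch with illustrative values:

```python
import numpy as np

# Parabola f(x) = x^2/(2R) approximates a circle of radius R near its bottom;
# Eq. (3.79) then predicts w^2 = g f''(0) = g/R (values are illustrative).
g, R = 9.81, 2.0
f = lambda x: x**2 / (2*R)

h = 1e-4
f2 = (f(h) - 2*f(0.0) + f(-h)) / h**2   # central-difference f''(0)
w2 = g * f2                              # small-oscillation frequency squared

assert np.isclose(w2, g/R)
```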

We now immediately proceed to find Lagrange's equations from (3.78). For this we can use the
Kronecker delta symbol δ_{nm} introduced near (2.98) to write e.g. ∂q_n/∂q_m = δ_{nm}. Then

\frac{\partial L}{\partial q_n} = -\frac{1}{2} \sum_{k\ell} K_{k\ell} \frac{\partial}{\partial q_n}(q_k q_\ell)
= -\frac{1}{2} \sum_{k\ell} K_{k\ell} \left( \delta_{nk} q_\ell + q_k \delta_{n\ell} \right)
= -\frac{1}{2} \left( \sum_\ell K_{n\ell} q_\ell + \sum_k K_{kn} q_k \right)
= -\sum_\ell K_{n\ell} q_\ell,    (3.80)

where in the last equality we have used that the matrix K is symmetric (K_{kℓ} = K_{ℓk}). In the
same way, we find

\frac{\partial L}{\partial \dot q_n} = \sum_\ell M_{n\ell} \dot q_\ell,    (3.81)

hence we reach the

Lagrange equations for dynamics near an equilibrium point

M \ddot{\mathbf{q}} = -K \mathbf{q} \quad \Leftrightarrow \quad \ddot{\mathbf{q}} = -M^{-1} K \mathbf{q}.    (3.82)

• We have used matrix-vector product notation again.

• For the equivalence written, we of course require that the matrix M is invertible. In the
second form you can then see that Eq. (3.59) was a special case of this form.

• Our general solution method (3.65) applies straightforwardly to Eq. (3.82).

Similar to Eq. (3.71), we find the oscillator solutions of Eq. (3.82) as:

Normal modes and normal frequencies for oscillations near equilibrium

\mathbf{q}(t) = \mathrm{Re}\left[ \sum_\ell a_\ell \mathbf{v}_\ell e^{i\omega_\ell t} \right].    (3.83)

As before, the ω_ℓ are called normal mode frequencies and the v_ℓ normal modes, and their
respective coefficients a_ℓ are fixed by the initial conditions. The ω_ℓ are found as
ω_ℓ = √λ_ℓ from the non-zero eigenvalues λ_ℓ of M^{-1}K; the normal modes are the corresponding
eigenvectors.

• Since we started with a very general mechanical problem near equilibrium at the beginning
of section 3.6.2, we now see that a large portion of physics can be understood in terms of
(coupled) harmonic oscillators.

• If you prefer, you can do similar steps to reduce Eq. (3.83) to cosines with arbitrary phase, as
we did for Eq. (3.71). However, it is actually typically easier to work with complex exponentials
as far as possible.

• After finding the eigenvalues and eigenvectors of M^{-1}K, we can again use a transformation
matrix O as in (3.29) to define new coordinates q̃ = Oq that bring the Lagrange equations into
diagonal form:

\ddot{\tilde q}_\ell = -\omega_\ell^2\, \tilde q_\ell.    (3.84)

Since these now describe a system of uncoupled harmonic oscillators, the functioning of the
normal modes becomes more apparent.
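A numerical sketch of this diagonalization for an arbitrarily chosen (symmetric K, invertible M) pair; as a simplifying assumption, we use the eigenvector matrix V of M⁻¹K directly in place of the transformation matrix O of (3.29):

```python
import numpy as np

# Illustrative symmetric K and diagonal (invertible) M.
M = np.diag([1.0, 2.0])
K = np.array([[3.0, -1.0],
              [-1.0, 2.0]])

MinvK = np.linalg.solve(M, K)
lam, V = np.linalg.eig(MinvK)       # eigenvalues lam_l, eigenvectors as columns of V

# In the eigenbasis the equations of motion decouple:
# V^{-1} (M^{-1} K) V is diagonal, with entries lam_l = w_l^2.
D = np.linalg.solve(V, MinvK @ V)
assert np.allclose(D, np.diag(lam))

w = np.sqrt(lam)                    # normal mode frequencies w_l
```

Each new coordinate then obeys its own uncoupled oscillator equation with frequency w_l, as in (3.84).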

Two alternative solution methods for the second order system:

Method 1: We can directly make the Ansatz^a q(t)_ℓ = a_ℓ e^{iω_ℓ t}. Insertion into (3.82) gives

K \mathbf{a} = \omega^2 M \mathbf{a} \quad \Leftrightarrow \quad (K - \omega^2 M)\, \mathbf{a} = 0.    (3.85)

This is called a generalized eigenvalue problem. We say a is the eigenvector of K
with respect to M. The solution proceeds similarly to the usual eigenvalue problem (and hence
we can also get away without knowing the name above). Using the expression to the right of
⇔ above, we note that to have a non-zero solution for a we require det(K − ω²M) = 0. The
resultant polynomial equation gives us the eigenvalues ω² in the usual way. Subsequently
we can add up all solutions q(t)_ℓ that we found, since Eq. (3.82) is linear and obeys the
superposition principle. We then reach Eq. (3.83) again.

^a As discussed before, we can work with the complex Ansatz and take the real part in the end.
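In practice, the generalized eigenvalue problem (3.85) can be handed directly to a library solver; SciPy's `eigh`, for instance, accepts a second matrix argument. A sketch with illustrative M and K, comparing against the eigenvalues of M⁻¹K used in the main text:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative symmetric K and positive-definite M.
M = np.diag([1.0, 2.0])
K = np.array([[3.0, -1.0],
              [-1.0, 2.0]])

w2, A = eigh(K, M)   # solves the generalized problem K a = w^2 M a
# columns of A are the eigenvectors, w2 the values of omega^2 (ascending)

# For comparison: the ordinary eigenvalues of M^{-1} K used in the main text.
w2_ref = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
assert np.allclose(np.sort(w2), w2_ref)
```

Both routes give the same normal frequencies; `eigh` additionally exploits the symmetry of K and M.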

continued: Method 2: We can always reduce a system of second order differential equations
to a (larger) system of first order differential equations with the following trick: define
u(t) = q̇(t), and combine y(t) = [q, u]^T into a large 2M-dimensional vector. We can then
write Eq. (3.82) as

\dot{\mathbf{y}}(t) = \begin{pmatrix} 0 & \mathbb{1} \\ -M^{-1}K & 0 \end{pmatrix} \mathbf{y}(t),    (3.86)

where all four items in the matrix above are themselves M × M matrix blocks (𝟙 denoting the
identity). At this point we can use solution methods for first order systems, such as Eq. (3.63).
See also the numerics part of assignments 1 or 4.
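A sketch of Method 2 in Python: build the block matrix of (3.86) for illustrative M and K, integrate the first order system numerically, and compare with the normal-mode solution (3.83):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative M, K; u = dq/dt, y = [q, u].
M = np.diag([1.0, 2.0])
K = np.array([[3.0, -1.0],
              [-1.0, 2.0]])
MinvK = np.linalg.solve(M, K)
n = 2

big = np.block([[np.zeros((n, n)), np.eye(n)],
                [-MinvK,           np.zeros((n, n))]])  # matrix of Eq. (3.86)

q0 = np.array([1.0, 0.0])
y0 = np.concatenate([q0, np.zeros(n)])                  # start at rest
sol = solve_ivp(lambda t, y: big @ y, (0.0, 5.0), y0,
                rtol=1e-10, atol=1e-12)

# Compare with the normal-mode solution (3.83): with u(0) = 0 the modes are
# pure cosines, q(t) = sum_l a_l v_l cos(w_l t), and q(0) = sum_l a_l v_l.
lam, V = np.linalg.eig(MinvK)
a = np.linalg.solve(V, q0)
w = np.sqrt(lam)
q_exact = V @ (a * np.cos(w * sol.t[-1]))
assert np.allclose(sol.y[:n, -1], q_exact, atol=1e-6)
```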

We shall apply the complete formalism to examples in assignments and tutorials.
