Thus applying a FT to terms involving derivatives replaces the differential equation with an algebraic equation for f̃, which may be easier to solve.
Let's remind ourselves of the origin of this fundamental result. The simplest approach is to write a function f(x) as a Fourier integral: f(x) = ∫ f̃(k) exp(ikx) dk/2π. Differentiation with respect to x can be taken inside the integral, so that df/dx = ∫ f̃(k) ik exp(ikx) dk/2π. From this we can immediately recognise ik f̃(k) as the FT of df/dx. The same argument can be made with a Fourier series.
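This result is easy to check numerically. Here is a minimal Python sketch (an illustration added here; the Gaussian test function and grid are arbitrary choices, and numpy's fftfreq returns frequencies in cycles, so we multiply by 2π to get k):

```python
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
f = np.exp(-x**2)                              # test function: a Gaussian

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
f_tilde = np.fft.fft(f)

# Differentiate in Fourier space: the FT of df/dx is ik * f_tilde(k)
df_spectral = np.fft.ifft(1j * k * f_tilde).real
df_exact = -2 * x * np.exp(-x**2)

print(np.max(np.abs(df_spectral - df_exact)))  # tiny: the identity holds
```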
We will illustrate the application of this result with the familiar example of the driven damped simple harmonic oscillator, in the form of the series LCR circuit of Fig. 12.14. Summing the voltages around the circuit gives

  V(t) = L dI/dt + RI + Q/C.  (12.137)
Figure 12.14: A simple series LCR circuit.
Now, since the rate of change of charge on the capacitor is simply the current, dQ/dt = I, we can differentiate this equation to get a second-order ODE for I:

  L d²I/dt² + R dI/dt + I/C = dV/dt.  (12.138)
This is the same differential equation as before, with

  (z, γ, ω₀², f) → (I, R/L, 1/LC, V̇/L).  (12.139)
The simplest case to consider is where the driving force oscillates at a single frequency, ω. Let's look at how this is often solved, without using any Fourier terminology. We can always choose the origin of time so that f(t) = A cos ωt, so we want to solve z̈ + γż + ω₀²z = A cos ωt. The normal approach is to guess that z must respond at the same frequency, so that z = a cos ωt + b sin ωt. Substituting this guess, we get

  (ω₀² − ω²)(a cos ωt + b sin ωt) + γω(−a sin ωt + b cos ωt) = A cos ωt.  (12.140)
In order for the lhs to be a pure cos, the sin coefficient must vanish: b(ω₀² − ω²) − aγω = 0. Equating cos coefficients then gives a(ω₀² − ω²) + bγω = A. These equations can be written in matrix form:

  ( ω₀² − ω²     γω      ) ( a )   ( A )
  (   −γω      ω₀² − ω²  ) ( b ) = ( 0 ) .  (12.141)
So inverting the matrix gives

  ( a )   [γ²ω² + (ω₀² − ω²)²]⁻¹ ( ω₀² − ω²    −γω      ) ( A )
  ( b ) =                        (   γω       ω₀² − ω²  ) ( 0 ) .  (12.142)
Hence the solution is

  z = A [(ω₀² − ω²)² + γ²ω²]⁻¹ [(ω₀² − ω²) cos ωt + γω sin ωt].  (12.143)
Because of the occurrence of sin and cos terms, there is a phase shift with respect to the driving term, so we must be able to write this as z = z₀ cos(ωt + φ), and expressions for the amplitude and phase could be obtained with a bit of trigonometric effort. But there is an easier way.
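Before taking that easier route, note that the claimed solution can at least be verified mechanically. A minimal sympy sketch (an aside, not part of the original derivation) confirms that (12.143) satisfies the driven oscillator equation:

```python
import sympy as sp

t, A, gamma, omega, omega0 = sp.symbols('t A gamma omega omega0', positive=True)
denom = (omega0**2 - omega**2)**2 + gamma**2 * omega**2
z = A / denom * ((omega0**2 - omega**2) * sp.cos(omega * t)
                 + gamma * omega * sp.sin(omega * t))     # eq. (12.143)

residual = (sp.diff(z, t, 2) + gamma * sp.diff(z, t) + omega0**2 * z
            - A * sp.cos(omega * t))
print(sp.simplify(residual))   # prints 0
```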
12.1.2 Complex solution
Write the driving term as f = A exp(iωt). The normal justification for this complex approach is that we will take the real part at the end. This is fair enough, but we will give a better justification later. Note that the amplitude A could be complex, A = |A| exp(iφ), so we can easily include a phase in the input signal; in the real formalism, we chose the origin of time so that this phase vanished, otherwise the algebra would have been even messier. If we now try a solution z = c exp(iωt), where again c can include a phase, we get

  −ω²c + iγωc + ω₀²c = A,  (12.144)
since the exp(iωt) factor on each side can be divided out. This gives us the solution for c immediately, with almost no work.
The result can be made a bit more intuitive by splitting the various factors into amplitudes and phases. Let A = |A| exp(iφ) and (−ω² + iγω + ω₀²) = |B| exp(iα), where

  |B| = √[(ω₀² − ω²)² + γ²ω²]  (12.145)

and

  tan α = γω/(ω₀² − ω²).  (12.146)

Then we have simply

  z(t) = (|A|/|B|) exp[i(ωt + φ − α)],  (12.147)
so the dynamical system returns the input oscillation, modified in amplitude by the factor 1/|B| and lagging in phase by α. For small frequencies, this phase lag is very small; it becomes π/2 when ω = ω₀; for larger ω, the phase lag tends to π.
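This behaviour is easy to tabulate. A short sketch (the values γ = 0.2, ω₀ = 1 are arbitrary illustrative choices) evaluates the gain 1/|B| and phase lag α of eqs (12.145)–(12.146) across the resonance:

```python
import numpy as np

gamma, omega0 = 0.2, 1.0
omega = np.linspace(0.01, 3.0, 7)

B = omega0**2 - omega**2 + 1j * gamma * omega   # the complex factor B
gain = 1 / np.abs(B)                            # amplitude per unit |A|
alpha = np.angle(B)                             # phase lag: 0 -> pi/2 -> pi

for w, g, a in zip(omega, gain, alpha):
    print(f"omega={w:5.2f}   1/|B|={g:6.3f}   lag={a:6.3f} rad")
```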
The same equations can be obtained using the real approach, but it takes a great deal longer. Once
again, we see the advantage of the complex formalism.
More generally, we can take the FT of the whole differential equation: each time derivative pulls down a factor iω, so that (−ω² + iγω + ω₀²) z̃(ω) = f̃(ω), and hence

  z̃(ω) = f̃(ω) / (ω₀² − ω² + iγω).  (12.149)
This solution in Fourier space is general and works for any time-dependent force. Once we have a
specific form for the force, we can in principle use the Fourier expression to obtain the exact solution
for z(t), assuming we can do the necessary integrals.
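In practice the necessary integrals can be delegated to the FFT. The sketch below (the parameter values and Gaussian force pulse are arbitrary choices) applies eq. (12.149) on a grid and then checks the ODE residual by finite differences:

```python
import numpy as np

gamma, omega0 = 0.5, 2.0
N, T = 4096, 200.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
f = np.exp(-(t - 5.0) ** 2)                    # a Gaussian force pulse near t = 5

omega = 2 * np.pi * np.fft.fftfreq(N, d=T / N)
z_tilde = np.fft.fft(f) / (omega0**2 - omega**2 + 1j * gamma * omega)  # eq. (12.149)
z = np.fft.ifft(z_tilde).real                  # the response z(t)

# Consistency check: the ODE residual should be small away from the grid edges
zdot = np.gradient(z, t)
zddot = np.gradient(zdot, t)
print(np.max(np.abs(zddot + gamma * zdot + omega0**2 * z - f)[100:-100]))
```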
As a simple example, consider a driving force that can be written as a single complex exponential: f(t) = A exp(iΩt), for which f̃(ω) = 2πA δ(ω − Ω). Unsurprisingly, the result is a δ-function spike at the driving frequency. Since we know that z̃(ω) = f̃(ω)/(ω₀² − ω² + iγω), we can now use the inverse FT to compute z(t):
  z(t) = (1/2π) ∫_{−∞}^{∞} f̃(ω)/(ω₀² − ω² + iγω) e^{iωt} dω
       = A ∫_{−∞}^{∞} δ(ω − Ω)/(ω₀² − ω² + iγω) e^{iωt} dω
       = A exp(iΩt)/(ω₀² − Ω² + iγΩ) = (|A|/|B|) exp[i(Ωt + φ − α)],  (12.152)
where A = |A| exp(iφ) and ω₀² − Ω² + iγΩ = |B| exp(iα). This is just the answer we obtained by taking the usual route of trying a solution proportional to exp(iΩt) – but the nice thing is that the inverse FT has produced this for us automatically, without needing to guess.
Finally, we can also clarify the common use of complex exponentials to represent real oscillations. The traditional argument is that (as long as we deal with linear equations) the real and imaginary parts proceed separately and so we can just take the real part at the end. But in Fourier analysis, we have noted that real functions require the Hermitian symmetry f̃(ω) = f̃*(−ω). If f(t) is to be real, it therefore makes no sense to consider purely a signal at a single ω: we must allow for the negative-frequency part simultaneously. If there is to be a spike in f̃ at ω = +Ω, we therefore need a corresponding spike at ω = −Ω:

  f̃(ω) = 2π[A δ(ω − Ω) + A* δ(ω + Ω)].  (12.153)
The inverse Fourier transform of this is just a real oscillation with arbitrary phase:

  f(t) = A exp(iΩt) + A* exp(−iΩt) = 2|A| cos(Ωt + φ),  (12.154)

where A = |A| exp(iφ). Notice that this f(t) is the sum of a complex exponential and its conjugate, so we have

  f(t) = 2 Re[A exp(iΩt)].  (12.155)
Thus the Hermitian symmetry between positive and negative frequencies ends up instructing us to adopt exactly the traditional approach: solve the problem with f ∝ exp(iΩt) and take the real part at the end.
Finally, then, the time-dependent solution when we insist on this real driving force of given frequency comes simply from adding the previous solution to its complex conjugate:

  z(t) = (|A|/|B|) exp[i(Ωt + φ − α)] + (|A|/|B|) exp[−i(Ωt + φ − α)] = 2(|A|/|B|) cos(Ωt + φ − α).  (12.156)

The factor 2 is unimportant; it also occurs in the definition of f(t) = 2|A| cos(Ωt + φ), so it can be absorbed in the definition of |A|.
FOURIER ANALYSIS: LECTURE 14
If the driving force is periodic, we can also solve the problem with a Fourier series. Expanding the solution as

  z(t) = Σₙ [aₙ cos(ωₙt) + bₙ sin(ωₙt)],  (12.157)

where ωₙ = 2πn/T, the derivatives are

  dz/dt = Σ_{n=1}^{∞} [−ωₙaₙ sin(ωₙt) + ωₙbₙ cos(ωₙt)]  (12.158)

  d²z/dt² = −Σ_{n=1}^{∞} ωₙ²aₙ cos(ωₙt) − Σ_{n=1}^{∞} ωₙ²bₙ sin(ωₙt).  (12.159)

Equivalently, in complex form, z(t) = Σₙ cₙ exp(iωₙt), and substituting into the equation of motion gives

  z̈ + γż + ω₀²z = Σₙ [(ω₀² − ωₙ²) + iγωₙ] cₙ exp(iωₙt).  (12.161)
This is the same relation we found between z̃ and f̃ when we took the direct Fourier transform of the differential equation.
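The same filtering of harmonics is easy to demonstrate numerically. In the sketch below (the square-wave force and parameter values are arbitrary choices, with numpy's FFT coefficients standing in for the series coefficients), each harmonic of the force is divided by (ω₀² − ωₙ²) + iγωₙ to give the corresponding harmonic of the response:

```python
import numpy as np

gamma, omega0, T = 0.3, 2.0, 10.0
N = 1024
t = np.linspace(0, T, N, endpoint=False)
f = np.sign(np.sin(2 * np.pi * t / T))            # square-wave force, period T

omega_n = 2 * np.pi * np.fft.fftfreq(N, d=T / N)  # the frequencies 2*pi*n/T
f_n = np.fft.fft(f)                               # harmonics of the force

# Filter each harmonic exactly as in eq. (12.161):
c_n = f_n / (omega0**2 - omega_n**2 + 1j * gamma * omega_n)
z = np.fft.ifft(c_n).real                         # steady-state periodic response
```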
12.4 Complex impedance
In the previous lecture, we looked at using Fourier transforms to solve the differential equation for the damped harmonic oscillator. It's illuminating to reconsider this analysis in the specific context of the LCR circuit, where L Ï + R İ + I/C = V̇. Let the voltage be a complex oscillation, V = Ṽ exp(iωt), so that the current is of the same form, I = Ĩ exp(iωt), obeying

  −ω²L Ĩ + iωR Ĩ + Ĩ/C = iω Ṽ.  (12.163)
Thus we see that the circuit obeys a form of Ohm's law, but involving a complex impedance, Z:

  Ṽ = Z(ω) Ĩ;   Z(ω) = R + iωL − i/(ωC).  (12.164)
This is a very useful concept, as it immediately allows more complex circuits to be analysed, using
the standard rules for adding resistances in series or in parallel.
The frequency dependence of the impedance means that different kinds of LCR circuit can function as filters of the time-dependent current passing through them: different Fourier components (i.e. different frequencies) can be enhanced or suppressed. For example, consider a resistor and inductor in series:
  Ĩ(ω) = Ṽ(ω) / (R + iωL).  (12.165)
For high frequencies, the current tends to zero; for ω ≪ R/L, the output of the circuit (current over voltage) tends to the constant value Ĩ(ω)/Ṽ(ω) = 1/R. So this would be called a low-pass filter: it only transmits low-frequency vibrations. Similarly, a resistor and capacitor in series gives
  Ĩ(ω) = Ṽ / [R + (iωC)⁻¹].  (12.166)
This acts as a high-pass filter, removing frequencies below about (RC)⁻¹. Note that the LR circuit can also act in this way if we measure the voltage across the inductor, V_L, rather than the current passing through it:

  Ṽ_L(ω) = iωL Ĩ(ω) = iωL Ṽ/(R + iωL) = Ṽ/[1 + R(iωL)⁻¹].  (12.167)

Finally, a full series LCR circuit is a band-pass filter, which removes frequencies below (RC)⁻¹ and above R/L from the current.
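These responses are quick to explore numerically; in the sketch below the component values are arbitrary illustrative choices:

```python
import numpy as np

R, L, C = 100.0, 1e-3, 1e-6                        # ohms, henries, farads (assumed)
omega = np.logspace(2, 8, 7)                       # angular frequencies

Z_RL = R + 1j * omega * L                          # low-pass for the current
Z_RC = R + 1 / (1j * omega * C)                    # high-pass for the current
Z_LCR = R + 1j * omega * L + 1 / (1j * omega * C)  # band-pass for the current

for name, Z in [("RL", Z_RL), ("RC", Z_RC), ("LCR", Z_LCR)]:
    print(name, np.abs(1 / Z))                     # |I/V| at each frequency
```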
12.5 Resonance
It is interesting to look at the solution to the damped harmonic oscillator in a bit more detail. If the forcing term has a very high frequency (ω ≫ ω₀) then |B| is large and the amplitude is suppressed – the system cannot respond to being driven much faster than its natural oscillation frequency. In fact the amplitude is greatest if ω is about ω₀ (if γ is small). We can differentiate to show that the maximum amplitude is reached at the resonant frequency:

  ω = ω_res = √(ω₀² − γ²/2).  (12.168)

When γ is small, this is close to the natural frequency of the oscillator: ω_res ≃ ω₀ − γ²/(4ω₀).
The amplitude of the oscillation falls rapidly as we move away from resonance. To see this, write |B|² = (ω₀² − ω²)² + γ²ω² in terms of ω² = ω_res² + x:

  |B|² = γ²ω_res² + γ⁴/4 + x².  (12.169)
If we write ω = ω_res + ε, then x ≃ 2ω_res ε to lowest order in ε. If γ ≪ ω₀, we can now neglect the difference between ω_res and ω₀, so that the amplitude of oscillation becomes approximated by the following expression:

  1/|B|² ≃ (γω₀)⁻² / (1 + 4ε²/γ²).  (12.170)
This is a Lorentzian dependence of the square of the amplitude on frequency deviation from resonance. The width of the resonance is set by the damping: moving a frequency ε = γ/2 away from resonance halves the squared amplitude.
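A quick comparison of the exact 1/|B|² with the Lorentzian approximation (12.170), for an assumed small damping γ = 0.05 with ω₀ = 1:

```python
import numpy as np

gamma, omega0 = 0.05, 1.0
omega_res = np.sqrt(omega0**2 - gamma**2 / 2)

for eps in np.linspace(-gamma, gamma, 5):
    omega = omega_res + eps
    exact = 1 / ((omega0**2 - omega**2) ** 2 + gamma**2 * omega**2)
    lorentz = (gamma * omega0) ** -2 / (1 + 4 * eps**2 / gamma**2)
    print(f"eps/gamma={eps / gamma:5.2f}   exact={exact:9.1f}   Lorentzian={lorentz:9.1f}")
```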
12.6 Transients
Finally, note that we can always add a solution to the homogeneous equation (i.e. where we set the right-hand side to zero). The final solution will be determined by the initial conditions (z and dz/dt). This is because the equation is linear and we can superimpose different solutions. For this additional solution (also called the complementary function), we try an oscillating solution z ∝ exp(iωt). Substituting in the damped oscillator differential equation gives the auxiliary equation:

  −ω² + iγω + ω₀² = 0,  (12.171)
a quadratic equation with the solution ω = iγ/2 ± √(ω₀² − γ²/4). There are two main cases to consider:

(1) Underdamped: γ/2 < ω₀. z = e^{−γt/2}(A e^{iΩt} + B e^{−iΩt}), where Ω = √(ω₀² − γ²/4).

(2) Overdamped: γ/2 > ω₀. z = e^{−γt/2}(A e^{Ω′t} + B e^{−Ω′t}), where Ω′ = √(γ²/4 − ω₀²).
So there is only oscillation if the damping is not too high. For very heavy damping, e^{−γt/2} A e^{Ω′t} yields a very nearly time-independent z, as is physically reasonable for an extremely viscous fluid. But in all cases, the solution damps to zero as t → ∞. Therefore, if the initial conditions require a component of the homogeneous solution, this only causes an initial transient, and the solution settles down to the steady-state response, which is what was calculated earlier.
One mathematical complication arises with critical damping: γ/2 = ω₀, so that the two roots coincide at Ω = 0. The simplest way of seeing how to deal with this is to imagine that Ω′ is non-zero but very small. Thus A e^{Ω′t} + B e^{−Ω′t} ≃ (A + B) + (A − B)Ω′t. So the critically damped solution is z = e^{−γt/2}(C + Dt) (you can check that this does solve the critically-damped equation exactly).
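That check is painless with a computer algebra system; a minimal sympy sketch (an aside) confirms the critically damped solution exactly:

```python
import sympy as sp

t, C, D, omega0 = sp.symbols('t C D omega0')
gamma = 2 * omega0                               # critical damping: gamma/2 = omega0
z = sp.exp(-gamma * t / 2) * (C + D * t)

residual = sp.diff(z, t, 2) + gamma * sp.diff(z, t) + omega0**2 * z
print(sp.simplify(residual))                     # prints 0
```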
One might also consider generalising the equation so that γ or ω₀² are negative. This changes the physical behaviour and interpretation (e.g. we will now get runaway solutions that increase with time), but no new algebraic issues arise.
Whatever the form of the complementary function, it presents a problem for the Fourier solution of
the differential equation, especially with a periodic driving force: in general the undriven motion of
the system will not share this periodicity, and hence it cannot be described by a Fourier series. As a
result, the Fourier solution is always zero if the Fourier components of the driving force vanish, even
though this is unphysical: an oscillator displaced from z = 0 will show motion even in the absence
of an applied force. For a proper treatment of this problem, we have to consider the boundary
conditions of the problem, which dictate the amount of the homogeneous solution to be added to
yield the complete solution.
When dealing with Fourier transforms, this step may seem inappropriate. The Fourier transform
describes non-periodic functions that stretch over an infinite range of time, so it may seem that
boundary conditions can only be set at t = −∞. Physically, we would normally lack any reason for
a displacement in this limit, so the homogeneous solution would tend to be ignored – even though
it should be included as a matter of mathematical principle. In fact, we will eventually see that the
situation is not this simple, and that boundary conditions have a subtle role in yielding the correct
solution even when using Fourier methods.
FOURIER ANALYSIS: LECTURE 15
13 Green’s functions
To repeat, we normally tend to think of this as involving a single spike located at q = x, which pulls out the value of f at the location of this spike. But we can flip the viewpoint and think of δ(x − q) as specifying a spike at x = q, where now the integral covers all values of q: spikes everywhere.
Alternatively, consider the analogy with the inverse Fourier transform:

  f(x) = ∫ [f̃(k)/2π] exp(ikx) dk.  (13.184)
Here, we have basis functions exp(ikx), which we think of as functions of x with k as a parameter, with expansion coefficients f̃(k)/2π. From this point of view, the sifting relation uses δ(x − q) as the basis function, with q as the parameter specifying where the spike is centred.
So if f(x) is a superposition of spikes, we only need to understand the response of a linear system to one spike and then superposition of responses will give the general solution. To show this explicitly, take L G(t, T) = δ(t − T) and multiply both sides by f(T) (which is a constant). But now integrate both sides over T, noting that L can be taken outside the integral because it doesn't depend on T:

  L ∫ G(t, T) f(T) dT = ∫ δ(t − T) f(T) dT = f(t).  (13.185)
The last step uses sifting to show that indeed adding up a set of impulses on the RHS, centred at differing values of T, has given us f(t). Therefore, the general solution is a superposition of the different Green's functions:

  y(t) = ∫ G(t, T) f(T) dT.  (13.186)
This says that we apply a force f(T) at time T, and the Green's function tells us how to propagate its effect to some other time t (so the Green's function is also known as a propagator).
When solving differential equations, the solution is not unique until we have applied some boundary conditions. This means that the Green's function that solves L G(t, T) = δ(t − T) also depends on the boundary conditions. This shows the importance of having boundary conditions that are homogeneous: in the form of some linear constraint(s) being zero, such as y(a) = y(b) = 0, or y(a) = ẏ(b) = 0. If such conditions apply to G(t, T), then a solution that superimposes G(t, T) for different values of T will still satisfy the boundary condition. This would not be so for y(a) = y(b) = 1, and the problem would have to be manipulated into one for which the boundary conditions were homogeneous – by writing a differential equation for z ≡ y − 1 in that case.
Now consider the general second-order linear differential equation

  d²y/dx² + a₁(x) dy/dx + a₀(x) y(x) = f(x).  (13.187)
Now, if we can solve for the complementary function (i.e. solve the equation for zero RHS), the Green's function can be obtained immediately. This is because a delta function vanishes almost everywhere. So if we now put f(x) → δ(x − z), then the solution we seek is a solution of the homogeneous equation everywhere except at x = z.
We split the range into two, x < z and x > z. In each part, the r.h.s. is zero, so we need to solve the homogeneous equation, subject to the boundary conditions at the edges. At x = z, we have to be careful to match the solutions together. The δ-function is infinite here, which tells us that the first derivative must be discontinuous, so that when we take the second derivative, it diverges. The first derivative must change discontinuously by 1. To see this, integrate the equation between z − ε and z + ε, and let ε → 0:

  ∫_{z−ε}^{z+ε} (d²y/dx²) dx + ∫_{z−ε}^{z+ε} a₁(x) (dy/dx) dx + ∫_{z−ε}^{z+ε} a₀(x) y dx = ∫_{z−ε}^{z+ε} δ(x − z) dx.  (13.188)
The second and third terms vanish as ε → 0, as the integrands are finite, and the r.h.s. integrates to 1, so

  dy/dx|_{z+ε} − dy/dx|_{z−ε} = 1.  (13.189)
13.3.1 Example

As an example, take the equation d²y/dx² + y = f(x) on 0 < x < π/2, with boundary conditions y = 0 at x = 0 and at x = π/2 (this setup is inferred from the solutions used below). Either side of the spike, the Green's function solves the homogeneous equation, so G(x, z) = A(z) sin x + B(z) cos x for x < z and G(x, z) = C(z) sin x + D(z) cos x for x > z. We now have to adjust the four unknowns A, B, C, D to match the boundary conditions.
The boundary condition y = 0 at x = 0 means that B(z) = 0, and y = 0 at x = π/2 implies that C(z) = 0. Hence

  G(x, z) = { A(z) sin x   (x < z)
            { D(z) cos x   (x > z).  (13.193)
Continuity of G implies that A(z) sin z = D(z) cos z, and a discontinuity of 1 in the derivative implies that −D(z) sin z − A(z) cos z = 1. We have 2 equations in two unknowns, so we can eliminate A or D:

  −A(z) sin²z/cos z − A(z) cos z = 1  ⇒  A(z) = −cos z/(sin²z + cos²z) = −cos z,  (13.194)
and consequently D(z) = −sin z. Hence the Green's function is

  G(x, z) = { −cos z sin x   (x < z)
            { −sin z cos x   (x > z).  (13.195)
The solution for a driving term x on the r.h.s. is therefore (be careful here with which solution for G to use: the first integral on the r.h.s. has x > z)

  y(x) = ∫₀^{π/2} z G(x, z) dz = −cos x ∫₀^{x} z sin z dz − sin x ∫_{x}^{π/2} z cos z dz.  (13.196)
Integrating by parts,

  y(x) = (x cos x − sin x) cos x − ½(π − 2 cos x − 2x sin x) sin x = x − (π/2) sin x.  (13.197)
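As a cross-check (an addition to the notes), the sketch below rebuilds y(x) by numerical quadrature of the Green's function (13.195) against the driving term and compares it with the closed form x − (π/2) sin x:

```python
import numpy as np
from scipy.integrate import quad

def G(x, z):
    # eq. (13.195): Green's function for y'' + y = f with y(0) = y(pi/2) = 0
    return -np.cos(z) * np.sin(x) if x < z else -np.sin(z) * np.cos(x)

def y(x):
    # eq. (13.196): integrate G against the driving term z
    val, _ = quad(lambda z: z * G(x, z), 0.0, np.pi / 2, points=[x])
    return val

for x in [0.3, 0.8, 1.2, np.pi / 2]:
    print(f"x={x:5.3f}   quadrature={y(x): .6f}   exact={x - np.pi / 2 * np.sin(x): .6f}")
```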
13.4 Summary
So to recap, the procedure is to find the Green's function by

• solving the homogeneous equation either side of the impulse, with the same boundary conditions, e.g. G = 0 at two boundaries, or G = ∂G/∂x = 0 at one boundary.

• matching the two solutions at the impulse: G is continuous at x = z, and ∂G/∂x jumps there by 1. Note the form of the solution will be the same for (e.g.) x < z and x > z, but the coefficients (strictly, they are not constant coefficients, but rather functions of z) will differ either side of x = z.

• integrating the Green's function with the actual driving term to get the full solution.
FOURIER ANALYSIS: LECTURE 16
where f(T) = e^{−T}. Hence

  z(t) = ∫₀^{t} G(t, T) f(T) dT + ∫_{t}^{∞} G(t, T) f(T) dT.  (13.206)
13.6 Causality
The above examples showed how the boundary conditions influence the Green's function. If we are thinking about differential equations in time, there will often be a different boundary condition, which is set by causality. For example, write the first equation we considered in a form that emphasises that it is a harmonic oscillator:

  d²y/dt² + ω₀²y = f(t).

Since the system clearly cannot respond before it is hit, the boundary condition for such applications would be expected on physical grounds to be

  G(t, T) = 0 for t < T.
Whether or not such behaviour is achieved depends on the boundary conditions. Our first example did not satisfy this criterion, because the boundary conditions were of the form y(a) = y(b) = 0. This clearly presents a problem if T is between the points a and b: it's as if the system knows when we will strike the bell, or how hard, in order that the response at some future time t = b will vanish. In contrast, our second example with boundary conditions at a single point ended up yielding causal behaviour automatically, without having to put it in by hand.
The causal Green's function is particularly easy to find, because we only need to think about the behaviour at t > T. Here, the solution of the homogeneous equation is A sin ω₀t + B cos ω₀t, which must vanish at t = T. Therefore it can be written as G(t, T) = A sin[ω₀(t − T)]. The derivative must be unity at t = T, so the causal Green's function for the undamped harmonic oscillator is

  G(t, T) = (1/ω₀) sin[ω₀(t − T)].  (13.211)
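This propagator is simple enough to test directly. The sketch below (the force f(t) = exp(−t²) and ω₀ = 2 are arbitrary choices) builds y(t) from the causal Green's function by quadrature, and checks it against scipy's ODE integrator started from rest in the distant past:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega0 = 2.0
f = lambda t: np.exp(-t**2)                      # force, negligible by t = -10

t = np.linspace(-10.0, 10.0, 2001)
dt = t[1] - t[0]

# y(t) = (1/omega0) * int_{-inf}^{t} sin[omega0 (t - T)] f(T) dT, from eq. (13.211)
y_green = np.array([np.sum(np.sin(omega0 * (ti - t[:i + 1])) * f(t[:i + 1])) * dt / omega0
                    for i, ti in enumerate(t)])

# Independent check: integrate y'' + omega0^2 y = f forwards from rest
sol = solve_ivp(lambda s, u: [u[1], f(s) - omega0**2 * u[0]],
                (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)
print(np.max(np.abs(y_green - sol.y[0])))        # small: the two solutions agree
```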
As a further example, we can revisit the differential equation with the opposite sign from the oscillator:

  d²z/dt² − ω₀²z = f(t).  (13.212)
We solved this above by taking the Fourier transform of each side, to obtain

  z(t) = −(1/2π) ∫_{−∞}^{∞} [f̃(ω)/(ω₀² + ω²)] e^{iωt} dω.  (13.213)
This looks rather similar to the solution in terms of the Green's function, so can we say that G(t, T) = −exp(−ω₀|t − T|)/2ω₀? Direct differentiation gives Ġ = ± exp(−ω₀|t − T|)/2, with the + sign for t > T and the − sign for t < T, so it has the correct jump in derivative and hence satisfies the equation for the Green's function.
But this is a rather strange expression, since it is symmetric in time: a response at t can precede T. The problem is that we have imposed no boundary conditions. If we insist on causality, then G = 0 for t < T and G = A exp[ω₀(t − T)] + B exp[−ω₀(t − T)] for t > T. Clearly A = −B (so that G is continuous at t = T), and hence G = 2A sinh[ω₀(t − T)]. This now looks similar to the harmonic oscillator, and a unit step in Ġ at t = T requires

  G(t, T) = (1/ω₀) sinh[ω₀(t − T)].  (13.215)
So the correct solution for this problem will be

  z(t) = (1/ω₀) ∫_{−∞}^{t} f(T) sinh[ω₀(t − T)] dT.  (13.216)
Note the changed upper limit in the integral: forces applied in the future cannot affect the solution at time t. We see that the response, z(t), will diverge as t increases, which is physically reasonable: the system has homogeneous modes that either grow or decline exponentially with time. Special care with boundary conditions would be needed if we wanted to excite only the decaying solution – in other words, this system is unstable.