Scattering Theory
Landelijke Master
Erik Koelink
[email protected]
Spring 2008
Contents
1 Introduction
  1.1 Scattering theory
  1.2 Inverse scattering method
  1.3 Overview
Bibliography
Index
Chapter 1
Introduction
According to Reed and Simon [9], scattering theory is the study of an interacting system on
a time and/or distance scale which is large compared to the scale of the actual interaction.
This is a natural phenomenon occurring in several branches of physics: optics (think of the blue
sky), acoustics, X-ray imaging, sonar, particle physics, and so on. In this course we focus on the mathematical
aspects of scattering theory, and on an important application in non-linear partial differential
equations.
is the Fourier transform of the wave function with respect to the spatial variable $x$. (Here we
have scaled Planck's constant $\hbar$ to 1.) Using the fact that the Fourier transform interchanges
differentiation and multiplication, the expected value of the momentum can be expressed in
terms of a differential operator. The kinetic energy, corresponding to $|p|^2/2m$, at time $t$ of
the particle can be expressed as $\frac{-1}{2m}\langle \Delta\psi, \psi\rangle$, where we take the inner product corresponding
to the Hilbert space $L^2(\mathbb{R}^3)$ and $\Delta$ is the three-dimensional Laplacian (i.e. with respect to
the spatial coordinate $x$).
The potential energy at time $t$ of the particle is described by $\langle V\psi, \psi\rangle$, so that the (total)
energy of the particle at time $t$ can be written as
$$E = \langle H\psi, \psi\rangle, \qquad H = \frac{-1}{2m}\Delta + V,$$
where we have suppressed the time-dependence. The operator H is known as the energy
operator, or as Hamiltonian, or as Schrödinger operator. In case the potential V is independent
of time and suitably localised in the spatial coordinate, we can view $H$ as a perturbation of
the corresponding free operator $H_0 = \frac{-1}{2m}\Delta$. This can be interpreted as saying that we have a `free'
particle scattered by the (time-independent) potential $V$. The free operator $H_0$ is a well-
known operator, and we can ask how its properties transfer to the perturbed operator H.
Of course, this will depend on the conditions imposed on the potential V . In Chapter 2 we
study this situation in greater detail for the case the spatial dimension is 1, but the general
perturbation techniques apply in more general situations as well. In general the potentials for
which these results apply are called short-range potentials.
In quantum mechanics the time evolution of the wave function is determined by the time-
dependent Schrödinger equation
$$i\frac{\partial}{\partial t}\psi(x, t) = H\psi(x, t). \tag{1.1.1}$$
We want to consider solutions that behave as solutions to the corresponding free time-
dependent Schrödinger equation, i.e. (1.1.1) with H replaced by H0 , for time to ∞ or −∞.
In case of the spatial dimension being 1 and for a time-independent potential V , we study
this situation more closely in Chapter 3. We do this for more general (possibly unbounded)
self-adjoint operators acting on a Hilbert space.
Let us assume that $m = \frac12$; then we can write the solution to the free time-dependent
Schrödinger equation as
$$\psi(x, t) = \int_{\mathbb{R}^3} F(p)\, e^{ip\cdot x}\, e^{-i|p|^2 t}\, dp,$$
where $F(p)$ denotes the distribution of the momenta at time $t = 0$ (up to a constant). This
follows easily since the exponentials $e^{ip\cdot x}$ are eigenfunctions of $H_0 = -\Delta$ for the eigenvalue
$|p|^2$. For the general case the solution is given by
$$\psi(x, t) = \int_{\mathbb{R}^3} F(p)\, \psi_p(x)\, e^{-i|p|^2 t}\, dp,$$
where $H\psi_p = |p|^2\psi_p$ and where we can expect $\psi_p(x) \sim e^{ip\cdot x}$ if the potential $V$ is sufficiently
‘nice’. We study this situation more closely in Chapter 4 in case the spatial dimension is one.
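As a quick numerical illustration of the free evolution formula (here in one spatial dimension), the following Python sketch propagates a wave packet by multiplying its discrete Fourier transform by $e^{-ip^2 t}$. The grid, box size and initial packet are ad hoc choices; the printed $L^2$ norms agree, as expected for a unitary evolution.
\begin{verbatim}
import numpy as np

N, Lbox = 1024, 40.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
p = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)      # momentum grid matching the FFT

psi0 = np.exp(-x**2) * np.exp(2j * x)              # Gaussian packet, mean momentum ~ 2
F = np.fft.fft(psi0)                               # momentum distribution (up to constants)

t = 1.0
psi_t = np.fft.ifft(F * np.exp(-1j * p**2 * t))    # psi(.,t): multiply by e^{-ip^2 t}, invert

dx = Lbox / N
print(np.sum(np.abs(psi0)**2) * dx, np.sum(np.abs(psi_t)**2) * dx)  # equal L^2 norms
\end{verbatim}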
1.3 Overview
The contents of these lecture notes are as follows. In Chapter 6 we collect some results from
functional analysis, especially on unbounded self-adjoint operators and the spectral theorem,
and from Fourier analysis, especially related to Sobolev and Hardy spaces. For these results not many
proofs are given, since they occur in other courses. In Chapter 6 we also discuss some results
on the spectrum and the essential spectrum, and for these results explicit proofs are given.
So Chapter 6 is to be considered as an appendix.
In Chapter 2 we first study the Schrödinger operator $-\frac{d^2}{dx^2} + q$, and we give conditions on
$q$ such that the Schrödinger operator is an unbounded self-adjoint operator with the Sobolev
space as its domain, which is also the domain of the unperturbed Schrödinger operator $-\frac{d^2}{dx^2}$.
For this we use a classical result known as Rellich’s perturbation theorem on perturbation of
unbounded self-adjoint operators. We discuss the spectrum and the essential spectrum in this
case. It should be noted that we introduce general results, and that the Schrödinger operator
is merely an elaborate example.
In Chapter 3 the time-dependent Schrödinger operator is discussed. We introduce the
notions of wave operators, scattering states and the scattering operator. Again this is a
general procedure for two self-adjoint operators acting on a Hilbert space, and the Schrödinger
operator is an important example.
In Chapter 4 the discussion is specific to the Schrödinger operator. We show how to
determine the Jost solutions, and from these we discuss the reflection and transmission coefficients.
The Gelfand-Levitan-Marchenko integral equation is derived, and the Gelfand-Levitan-
Marchenko equation is the key step in the inverse scattering method.
In Chapter 5 we study the Korteweg-de Vries equation, and we briefly discuss the original
approach of Gardner, Greene, Kruskal and Miura that triggered an enormous amount of
research. We then discuss the approach by Lax, and we show how to construct the $N$-soliton
solutions to the KdV equation. We carry out the calculations for $N = 1$ and $N = 2$. This
chapter is not completely rigorous.
The lecture notes end with a short list of references, and an index of important and/or
useful notions which hopefully increases the readability. The main source of information for
Chapter 2 is Schechter [10]. Schechter’s book [10] is also relevant for Chapter 3, but for Chapter
3 also Reed and Simon [9] and Lax [7] have been used. For Chapter 4 several sources have
been used; especially Eckhaus and van Harten [4], Reed and Simon [9] as well as an important
original paper [3] by Deift and Trubowitz. For Chapter 5 there is an enormous amount of
information available; introductory and readable texts are by Calogero and Degasperis [2],
de Jager [6], as well as the original paper by Gardner, Greene, Kruskal and Miura [5] that
has been so influential. As remarked before, Chapter 6 is to be considered as an appendix,
and most of the results that are not proved can be found in general text books on functional
analysis, such as Lax [7], Werner [11], or course notes for a course in Functional Analysis.
Chapter 2
Schrödinger operators and their spectrum
In Chapter 6 we recall certain terminology, notation and results that are being used in Chapter
2.
2.1 The operator $-\frac{d^2}{dx^2}$

Theorem 2.1.1. $-\frac{d^2}{dx^2}$ with domain the Sobolev space $W^2(\mathbb{R})$ is a self-adjoint operator on $L^2(\mathbb{R})$.
We occasionally denote $-\frac{d^2}{dx^2}$ by $L_0$, and then its domain by $D(L_0) = W^2(\mathbb{R})$. We first
consider the operator $i\frac{d}{dx}$ on its domain $W^1(\mathbb{R})$.

Lemma 2.1.2. $i\frac{d}{dx}$ with domain the Sobolev space $W^1(\mathbb{R})$ is self-adjoint.
Proof. The Fourier transform, see Section 6.3, intertwines $i\frac{d}{dx}$ with the multiplication operator
$M$ defined by $(Mf)(\lambda) = \lambda f(\lambda)$. The domain $W^1(\mathbb{R})$ under the Fourier transform is precisely
$D(M) = \{f \in L^2(\mathbb{R}) \mid \lambda \mapsto \lambda f(\lambda) \in L^2(\mathbb{R})\}$, see Section 6.3. So $(i\frac{d}{dx}, W^1(\mathbb{R}))$ is unitarily
equivalent to $(M, D(M))$.
Observe that $(M, D(M))$ is symmetric:
$$\langle Mf, g\rangle = \int_{\mathbb{R}} \lambda f(\lambda)\overline{g(\lambda)}\, d\lambda = \langle f, Mg\rangle, \qquad \forall f, g \in D(M).$$
Assume now $g \in D(M^*)$, or
$$D(M) \ni f \mapsto \langle Mf, g\rangle = \int_{\mathbb{R}} \lambda f(\lambda)\overline{g(\lambda)}\, d\lambda \in \mathbb{C}$$
is a continuous functional on $L^2(\mathbb{R})$. By taking complex conjugates, this implies the existence
of a constant $C$ such that
$$\Big|\int_{\mathbb{R}} \lambda\, g(\lambda)\,\overline{f(\lambda)}\, d\lambda\Big| \le C\|f\|, \qquad \forall f \in W^1(\mathbb{R}).$$
By the converse of Hölder's inequality, it follows that $\lambda \mapsto \lambda g(\lambda)$ is square integrable, and
$\|\lambda \mapsto \lambda g(\lambda)\| \le C$. Hence $D(M^*) \subset D(M)$, and since the reverse inclusion holds for any
densely defined symmetric operator, we find $D(M) = D(M^*)$, and so $(M, D(M))$ is self-adjoint. This gives the result.
Note that we actually have that the Fourier transform gives the spectral decomposition
of $-i\frac{d}{dx}$. The functions $x \mapsto e^{i\lambda x}$ are eigenfunctions of $-i\frac{d}{dx}$ for the eigenvalue $\lambda$, the
Fourier transform of $f$ is just $\lambda \mapsto \langle f, e^{i\lambda\cdot}\rangle$, and the inverse Fourier transform states that $f$ is
a continuous linear combination of the eigenvectors, $f = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \langle f, e^{i\lambda\cdot}\rangle\, e^{i\lambda\cdot}\, d\lambda$.
By the proof of Theorem 2.1.1 and Lemma 6.2.4 it follows that the spectrum is contained
in $[0, \infty)$. In order to show that the spectrum is $[0, \infty)$ we establish that any positive $\lambda$ is
contained in the spectrum. Since the spectrum is closed, the result then follows.
So pick $\lambda > 0$ arbitrary. We use the description of Theorem 6.5.1, so we need to construct
a sequence of functions, say $f_n \in W^2(\mathbb{R})$, of norm 1, such that $\|-f_n'' - \lambda f_n\| \to 0$. By the first
paragraph $\psi(x) = \exp(i\gamma x)$, with $\gamma^2 = \lambda$, satisfies $-\psi'' = \lambda\psi$, but this function is not in $L^2(\mathbb{R})$.
The idea is to approximate this function in $L^2(\mathbb{R})$ using an approximation of the delta-function.
The details are as follows. Put $\phi(x) = \sqrt[4]{2/\pi}\,\exp(-x^2)$, so that $\|\phi\| = 1$. Then define
$\phi_n(x) = (\sqrt{n})^{-1}\phi(x/n)$ and define $f_n(x) = \phi_n(x)\exp(i\gamma x)$. Then $\|f_n\| = \|\phi_n\| = \|\phi\| = 1$,
and since $f_n$ is infinitely differentiable and $f_n$ and all its derivatives tend to zero rapidly as
$x \to \pm\infty$ (i.e. $f_n \in \mathcal{S}(\mathbb{R})$, the Schwartz space), it is obviously contained in the domain $W^2(\mathbb{R})$.
Now
$$f_n'(x) = \phi_n'(x)e^{i\gamma x} + i\gamma\phi_n(x)e^{i\gamma x} = \frac{1}{n\sqrt{n}}\phi'\Big(\frac{x}{n}\Big)e^{i\gamma x} + i\gamma\phi_n(x)e^{i\gamma x},$$
$$f_n''(x) = \frac{1}{n^2\sqrt{n}}\phi''\Big(\frac{x}{n}\Big)e^{i\gamma x} + \frac{2i\gamma}{n\sqrt{n}}\phi'\Big(\frac{x}{n}\Big)e^{i\gamma x} - \gamma^2\phi_n(x)e^{i\gamma x},$$
so that
$$\|f_n'' + \lambda f_n\| \le \frac{2|\gamma|}{n\sqrt{n}}\Big\|\phi'\Big(\frac{\cdot}{n}\Big)\Big\| + \frac{1}{n^2\sqrt{n}}\Big\|\phi''\Big(\frac{\cdot}{n}\Big)\Big\| = \frac{2|\gamma|}{n}\|\phi'\| + \frac{1}{n^2}\|\phi''\| \to 0, \qquad n \to \infty.$$
So this sequence meets the requirements of Theorem 6.5.1, and we are done.
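The decay of $\|f_n'' + \lambda f_n\|$ can also be observed numerically. The following Python sketch (the grid and the value of $\lambda$ are arbitrary choices) evaluates the defect for the sequence constructed above and compares it with the leading term $2|\gamma|/n$ of the bound.
\begin{verbatim}
import numpy as np

lam = 2.0                          # any lambda > 0
gamma = np.sqrt(lam)
x = np.linspace(-400.0, 400.0, 400001)
dx = x[1] - x[0]

def phi(u):                        # (2/pi)^(1/4) exp(-u^2), unit L^2 norm
    return (2.0 / np.pi) ** 0.25 * np.exp(-u**2)

def defect_norm(n):
    u = x / n
    phi_n   = n**-0.5 * phi(u)                         # phi_n
    dphi_n  = n**-1.5 * (-2.0 * u) * phi(u)            # phi_n'
    ddphi_n = n**-2.5 * (4.0 * u**2 - 2.0) * phi(u)    # phi_n''
    f   = phi_n * np.exp(1j * gamma * x)
    ddf = (ddphi_n + 2j * gamma * dphi_n - gamma**2 * phi_n) * np.exp(1j * gamma * x)
    return np.sqrt(np.sum(np.abs(ddf + lam * f) ** 2) * dx)

for n in (1, 2, 4, 8, 16):
    print(n, defect_norm(n), 2 * gamma / n)            # decays essentially like 2|gamma|/n
\end{verbatim}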
Again we use the Fourier transform to describe the spectral decomposition of $-\frac{d^2}{dx^2}$. The
functions $x \mapsto \cos(\lambda x)$ and $x \mapsto \sin(\lambda x)$ are the eigenfunctions of $-\frac{d^2}{dx^2}$ for the eigenvalue
$\lambda^2 \in (0, \infty)$. Observe that the Fourier transform as in Section 6.3 preserves the space of even
functions, and that for even functions we can write the transform pair as
$$\hat f(\lambda) = \frac{\sqrt2}{\sqrt\pi}\int_0^\infty f(x)\cos(\lambda x)\, dx, \qquad f(x) = \frac{\sqrt2}{\sqrt\pi}\int_0^\infty \hat f(\lambda)\cos(\lambda x)\, d\lambda,$$
Exercise 2.1.4. Derive the (Fourier) sine transform using odd functions. By splitting an
arbitrary element $f \in L^2(\mathbb{R})$ into an odd and an even part, give the spectral decomposition of $-\frac{d^2}{dx^2}$.
Exercise 2.1.5. The purpose of this exercise is to describe the resolvent for $L_0 = -\frac{d^2}{dx^2}$. We
obtain that for $z \in \mathbb{C}\setminus\mathbb{R}$, $z = \gamma^2$, $\operatorname{Im}\gamma > 0$, we get
$$(L_0 - z)^{-1}f(x) = \frac{-1}{2\gamma i}\int_{\mathbb{R}} e^{i\gamma|x-y|}\, f(y)\, dy.$$
• Check that $u(x)$ defined by the right hand side satisfies $-u'' - zu = f$, for $f$ in a suitable
dense subspace, e.g. $C_c^\infty(\mathbb{R})$. Show that $(L_0 - z)^{-1}$ extends to a bounded operator on
$L^2(\mathbb{R})$ using Young's inequality for convolution products.
• Instead of checking the result for $f \in C_c^\infty(\mathbb{R})$, we derive it by factoring the second order
differential operator $-u'' - \gamma^2 u = f$ into two first order differential equations:
$$\Big(i\frac{d}{dx} + \gamma\Big)u = v, \qquad \Big(i\frac{d}{dx} - \gamma\Big)v = f.$$
  – Show that the second first order differential equation is solved by
  $$v(x) = -i\int_{-\infty}^x e^{-i\gamma(x-y)} f(y)\, dy.$$
  Argue that requiring $v \in L^2(\mathbb{R})$ gives $v(x) = i\int_x^\infty e^{-i\gamma(x-y)} f(y)\, dy$. Show that
  $\|v\| \le \frac{1}{|\operatorname{Im}\gamma|}\|f\|$.
  – Treat the other first order differential equation in a similar way to obtain the
  expression $u(x) = -i\int_{-\infty}^x e^{i\gamma(x-y)} v(y)\, dy$.
  – Finally, express $u$ in terms of $f$ to find the explicit expression for the resolvent
  operator.
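A hedged numerical check of the resolvent formula of Exercise 2.1.5 (the test function, the grid and the value of $z$ are illustrative choices): the Python sketch below builds $u = (L_0 - z)^{-1}f$ from the kernel and verifies $-u'' - zu = f$ by finite differences.
\begin{verbatim}
import numpy as np

z = 1.0 + 2.0j                       # any z in C \ R
gamma = np.sqrt(z)
if gamma.imag < 0:                   # choose the branch with Im(gamma) > 0
    gamma = -gamma

x = np.linspace(-20.0, 20.0, 1601)
h = x[1] - x[0]
f = np.exp(-x**2)                    # smooth, rapidly decaying test function

# u = (L0 - z)^{-1} f via the kernel  -(1/(2 i gamma)) e^{i gamma |x-y|}
K = np.exp(1j * gamma * np.abs(x[:, None] - x[None, :]))
u = (-1.0 / (2j * gamma)) * (K @ f) * h

# check -u'' - z u = f with a second order finite difference, away from the edges
upp = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
residual = -upp - z * u[1:-1] - f[1:-1]
print(np.max(np.abs(residual[100:-100])))   # small: O(h^2) differencing error
\end{verbatim}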
2.2 The Schrödinger operator $-\frac{d^2}{dx^2} + q$

setting we call $q$ the potential of the Schrödinger operator. The first question to be dealt with
is whether or not $-\frac{d^2}{dx^2} + q$ with this domain is self-adjoint. For this we use a 1939
perturbation theorem by Rellich², Theorem 2.2.4, and then we give precise conditions on the potential
function $q$ such that the conditions of Rellich's Perturbation Theorem 2.2.4 are met in this case.
Theorem 2.2.4 (Rellich's Perturbation Theorem). Let $H$ be a Hilbert space. Assume $T : H \supset D(T) \to H$ is a self-adjoint operator and $S : H \supset D(S) \to H$ is a symmetric operator, such
that $D(T) \subset D(S)$ and there exist $a < 1$, $b \in \mathbb{R}$ with
$$\|Sx\| \le a\,\|Tx\| + b\,\|x\|, \qquad \forall x \in D(T);$$
then $(T + S, D(T))$ is a self-adjoint operator.
¹ Erwin Rudolf Josef Alexander Schrödinger (12 August 1887 – 4 January 1961), Austrian physicist, who
played an important role in the development of quantum mechanics, and is renowned for the cat in the box.
² Franz Rellich (14 September 1906 – 25 September 1955), German mathematician, who made contributions
to mathematical physics and perturbation theory.
For $y \in H$ arbitrary, pick $x \in D(T)$ such that $(T - i\lambda)x = y$, so that we get $\|y\|^2 =
\|T(T - i\lambda)^{-1}y\|^2 + \lambda^2\|(T - i\lambda)^{-1}y\|^2$. This implies the basic estimates
$$\|T(T - i\lambda)^{-1}y\| \le \|y\|, \qquad \|(T - i\lambda)^{-1}y\| \le \frac{1}{|\lambda|}\|y\|,$$
so that $\|S(T - i\lambda)^{-1}y\| \le \big(a + b/|\lambda|\big)\|y\|$ and, since $a + b/|\lambda| < 1$ for $|\lambda|$ sufficiently large, it follows that $\|S(T - i\lambda)^{-1}\| < 1$. In
particular, $1 + S(T - i\lambda)^{-1}$ is invertible in $B(H)$, see Section 6.2. Now
$$S + T - i\lambda = \big(1 + S(T - i\lambda)^{-1}\big)(T - i\lambda)$$
(where the equality also involves the domains!) and we want to conclude that $\operatorname{Ran}(S + T - i\lambda) = H$. So pick $x \in H$ arbitrary; we need to show that there exists $y \in H$ such that
$(S + T - i\lambda)y = x$, and this can be rephrased as $(T - i\lambda)y = \big(1 + S(T - i\lambda)^{-1}\big)^{-1}x$ by the
invertibility for $|\lambda|$ sufficiently large. Since $\operatorname{Ran}(T - i\lambda) = H$ such an element $y \in H$ does
exist.
Finally, since $S + T$ with dense domain $D(T)$ is symmetric, it follows by Lemma 6.2.2 that
it is self-adjoint.
The next exercise considers what happens if a = 1.
Exercise 2.2.5. Prove Wüst's theorem, which states the following. Assume $T : H \supset D(T) \to H$ is a self-adjoint operator and $S : H \supset D(S) \to H$ is a symmetric operator, such that
$D(T) \subset D(S)$ and there exists $b \in \mathbb{R}$ such that
$$\|Sx\| \le \|Tx\| + b\,\|x\|, \qquad \forall x \in D(T);$$
then $(T + S, D(T))$ is essentially self-adjoint.
1. Show that we can use Rellich’s Perturbation Theorem 2.2.4 to see that T +tS, 0 < t < 1,
is self-adjoint.
2. Take x ∈ Ker((T + S)∗ − i), and show there exists yt ∈ D(T ) such that kyt k ≤ kxk and
(T + tS + i)yt = x. (Use Lemma 6.2.2.)
4. Show that (1 − t)kT yt k ≤ k(T + tS)yt k + tbkyt k and conclude that (1 − t)kT yt k is
bounded as t % 1. Conclude next that (1 − t)kSyt k is bounded as t % 1, and hence
kzt k is bounded.
5. Show that x is the weak limit of zt as t % 1, and conclude that kxk = 0. (Hint consider
hzt − x, ui for u ∈ D(T ).)
to Q. For this we need to rephrase the condition of Rellich’s Theorem 2.2.4, see Theorem
2.2.6(5), into conditions on the potential q.
1. $W^2(\mathbb{R}) \subset D(Q)$,

3. $\displaystyle\sup_{y\in\mathbb{R}} \int_y^{y+1} |q(x)|^2\, dx < \infty$,
Corollary 2.2.7. If the potential $q$ satisfies Theorem 2.2.6(3), then the Schrödinger operator
$-\frac{d^2}{dx^2} + q$ is self-adjoint on its domain $W^2(\mathbb{R})$. In particular, this is true for $q \in L^2(\mathbb{R})$.
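The criterion of Theorem 2.2.6(3) is easy to probe numerically on a finite window. The following Python sketch (the sample potentials and the scanned range of $y$ are our own choices, and a finite scan cannot of course prove finiteness of the supremum over all of $\mathbb{R}$) approximates $\sup_y \int_y^{y+1} |q(x)|^2\, dx$.
\begin{verbatim}
import numpy as np

def local_l2_sup(q, y_min=-50.0, y_max=50.0, n_windows=401, n_points=2000):
    # approximate sup_y int_y^{y+1} |q(x)|^2 dx over a finite range of y
    sups = []
    for y in np.linspace(y_min, y_max, n_windows):
        x = np.linspace(y, y + 1.0, n_points)
        dx = x[1] - x[0]
        sups.append(np.sum(np.abs(q(x))**2) * dx)
    return max(sups)

print(local_l2_sup(lambda x: 1.0 / (1.0 + x**2)))   # decaying potential: small supremum
print(local_l2_sup(lambda x: np.cos(x**2)))         # bounded, non-decaying: still finite
\end{verbatim}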
Proof of Theorem 2.2.6. (1) ⇒ (2): Equip $W^2(\mathbb{R})$ with the graph norm of $L = -\frac{d^2}{dx^2}$, denoted
by $\|\cdot\|_L$, see Section 6.2, which in particular means that $W^2(\mathbb{R})$ with this norm is complete.
We claim that $Q : (W^2(\mathbb{R}), \|\cdot\|_L) \to L^2(\mathbb{R})$ is a closed operator; then the Closed Graph
Theorem 6.2.1 implies that it is bounded, which is precisely (2). To prove the closedness, we
take a sequence $\{f_n\}_{n=1}^\infty$ such that $f_n \to f$ in $(W^2(\mathbb{R}), \|\cdot\|_L)$ and $qf_n \to g$ in $L^2(\mathbb{R})$. We have
to show that $f \in D(Q)$ and $qf = g$. First, since $f_n \to f$ and $(W^2(\mathbb{R}), \|\cdot\|_L)$ is complete,
we have $f \in W^2(\mathbb{R}) \subset D(Q)$ by assumption (1). This assumption and Lemma 2.2.1 also
give $\langle qf_n, h\rangle = \langle f_n, qh\rangle$ for all $h \in W^2(\mathbb{R})$. Taking limits, $\langle g, h\rangle = \langle f, qh\rangle = \langle qf, h\rangle$, since
convergence in $(W^2(\mathbb{R}), \|\cdot\|_L)$ implies convergence in $L^2(\mathbb{R})$ by $\|\cdot\| \le \|\cdot\|_L$. Since $W^2(\mathbb{R})$ is
dense in $L^2(\mathbb{R})$, we may conclude $g = Q(f)$.
(2) ⇒ (3): Put $\phi(x) = e^{1-x^2}$, so in particular $\phi \in W^2(\mathbb{R})$ and $\phi(x) \ge 1$ for $x \in [0, 1]$. Put
$\phi_y(x) = \phi(x - y)$ for the translated function; then
$$\int_y^{y+1} |q(x)|^2\, dx \le \int_{\mathbb{R}} |q(x)\phi_y(x)|^2\, dx \le C\big(\|\phi_y''\|^2 + \|\phi_y\|^2\big) = C\big(\|\phi''\|^2 + \|\phi\|^2\big) < \infty$$
Exercise 2.2.9. 1. Show that, by reducing to real and imaginary parts, we can restrict to
the case that $f$ is a real-valued function in $C^1(\mathbb{R})$.

2. Using $\frac{d(f^2)}{dx}(x) = 2f(x)f'(x)$ and $2ab \le \varepsilon a^2 + \varepsilon^{-1}b^2$, show that
$$f(x)^2 - f(s)^2 \le \varepsilon\int_I |f'(t)|^2\, dt + \frac{1}{\varepsilon}\int_I |f(t)|^2\, dt$$
for $x, s \in I$.
$$-\frac{d^2}{dx^2} - 2\frac{p'}{p}\frac{d}{dx} + q - \frac{p''}{p}$$
for some strictly positive $p \in C^2(\mathbb{R})$. Show that this differential operator, for real-valued $q$,
is symmetric for a suitable choice of domain on the Hilbert space $L^2(\mathbb{R}, p(x)^2 dx)$. Establish
that this operator is unitarily equivalent to the Schrödinger operator $-\frac{d^2}{dx^2} + q$ on $L^2(\mathbb{R})$.
We have gone through some trouble to establish a suitable criterion in Corollary 2.2.7 such
that the Schrödinger operator is a self-adjoint operator. It may happen that the potential is
such that the corresponding Schrödinger operator defined on W 2 (R) ∩ D(Q) is not self-adjoint
but a symmetric densely defined operator, and then one has to look for self-adjoint extensions.
It can also happen that the potential is so ‘bad’, that W 2 (R) ∩ D(Q) is not dense. We do not
go into this subject.
section we give a condition on the potential $q$ that ensures that the essential spectrum of
$-\frac{d^2}{dx^2} + q$ is $[0, \infty)$ as well. In this section we assume that $-\frac{d^2}{dx^2} + q$ is a self-adjoint operator
with domain the Sobolev space $W^2(\mathbb{R})$, which is the case if Theorem 2.2.6(3) holds.
We start with the general notion of T -compact operator, and next discuss the influence of
a perturbation of T by a T -compact operator S on the essential spectrum.
Definition 2.3.1. Let $(T, D(T))$ be a closed operator on a Hilbert space $H$. The operator
$(S, D(S))$ on the Hilbert space is compact relative to $T$, or $T$-compact, if $D(T) \subset D(S)$ and
for any sequence $\{x_n\}_{n=1}^\infty$ satisfying $\|x_n\| + \|Tx_n\| \le C$ the sequence $\{Sx_n\}_{n=1}^\infty$ has a convergent
subsequence.
Theorem 2.3.2. Let (T, D(T )) be a self-adjoint operator on a Hilbert space H, and (S, D(S))
a closed symmetric T -compact operator. Then (T + S, D(T )) is a self-adjoint operator, and
σess (T ) = σess (S + T ).
Proof. By Definition 2.3.1, $D(T) \subset D(S)$, so if we can check the condition in Rellich's Perturbation Theorem 2.2.4 we may conclude that $(T + S, D(T))$ is self-adjoint. We claim that
for all $\varepsilon > 0$ there exists $K > 0$ such that
$$\|Sx\| \le \varepsilon\|Tx\| + K\|x\|, \qquad \forall x \in D(T). \tag{2.3.1}$$
Then the first statement follows from Rellich's Theorem 2.2.4 by taking $\varepsilon < 1$.
To prove this claim we first prove a weaker claim, namely that there exists a constant $C$
such that
$$\|Sx\| \le C\big(\|Tx\| + \|x\|\big), \qquad \forall x \in D(T).$$
Indeed, if this estimate would not hold, we could find a sequence $\{x_n\}_{n=1}^\infty$ such that $\|Tx_n\| + \|x_n\| = 1$ and $\|Sx_n\| \to \infty$. But by $T$-compactness, $\{Sx_n\}_{n=1}^\infty$ has a convergent subsequence,
say $Sx_{n_k} \to y$, so in particular $\|Sx_{n_k}\| \to \|y\|$, contradicting $\|Sx_n\| \to \infty$.
Now to prove the more refined claim (2.3.1) we argue by contradiction. So assume there exists $\varepsilon > 0$
such that we cannot find a $K$ such that (2.3.1) holds. Then we can find a sequence $\{x_n\}_{n=1}^\infty$
in $D(T)$ such that $\|Sx_n\| > \varepsilon\|Tx_n\| + n\|x_n\|$. By changing $x_n$ to $x_n/(\|x_n\| + \|Tx_n\|)$ we
can assume that this sequence satisfies $\|x_n\| + \|Tx_n\| = 1$. By the weaker estimate we have
$\|Sx_n\| \le C\big(\|Tx_n\| + \|x_n\|\big) \le C$ and $C \ge \|Sx_n\| \ge \varepsilon\|Tx_n\| + n\|x_n\|$, so that $\|x_n\| \to 0$ and hence $\|Tx_n\| \to 1$ as $n \to \infty$. By $T$-compactness, the sequence $\{Sx_n\}_{n=1}^\infty$ has a
convergent subsequence, which we also denote by $\{Sx_n\}_{n=1}^\infty$, say $Sx_n \to y$. Since we assume
$(S, D(S))$ closed, we see that $x_n \to 0 \in D(S)$ and $y = S0 = 0$. On the other hand $\|y\| =
\lim_{n\to\infty}\|Sx_n\| \ge \varepsilon\lim_{n\to\infty}\|Tx_n\| = \varepsilon > 0$. This gives the required contradiction.
In order to prove the equality of the essential spectrum, take $\lambda \in \sigma_{\mathrm{ess}}(T)$, so we can take
a sequence $\{x_n\}_{n=1}^\infty$ as in Theorem 6.5.4(3). Recall that this means that the sequence satisfies
$\|x_n\| = 1$, $\langle x_n, y\rangle \to 0$ for all $y \in H$ and $\|(T - \lambda)x_n\| \to 0$. We will show that $\lambda \in \sigma_{\mathrm{ess}}(T + S)$.
In particular, $\|x_n\| + \|Tx_n\|$ is bounded, and by $T$-compactness it follows that $\{Sx_n\}_{n=1}^\infty$
has a convergent subsequence, say $Sx_{n_k} \to y$. We claim that $y = 0$. Indeed, for arbitrary
$z \in D(T) \subset D(S)$ we have
Indeed, arguing by contradiction, if this is not true there exists a sequence $\{x_n\}_{n=1}^\infty$ such
that $\|x_n\| + \|Tx_n\| = 1$ and $\|x_n\| + \|(S + T)x_n\| \to 0$, and by $T$-compactness of $S$ there exists a convergent subsequence $\{Sx_{n_k}\}_{k=1}^\infty$ of $\{Sx_n\}_{n=1}^\infty$ converging to $x$. Then
necessarily $Tx_{n_k} \to -x$ and $x_{n_k} \to 0$, and since $T$ is self-adjoint, hence closed, we have $x = 0$.
This contradicts the assumption $\|x_n\| + \|Tx_n\| = 1$.
Now to prove that $S$ is $(T + S)$-compact, take a sequence $\{x_n\}_{n=1}^\infty$ satisfying $\|x_n\| + \|(T + S)x_n\| \le C$; then by (2.3.2) we have $\|x_n\| + \|Tx_n\| \le CC_0$, so by $T$-compactness the sequence
$\{Sx_n\}_{n=1}^\infty$ has a convergent subsequence. So $S$ is $(T + S)$-compact.
Exercise 2.3.3. The purpose of this exercise is to combine Rellich’s Perturbation Theorem
2.2.4 with Theorem 2.3.2 in order to derive the following statement: Let (T, D(T )) be a
self-adjoint operator, and S1 , S2 symmetric operators such that (i) D(T ) ⊂ D(S1 ), (ii) S2 is
a closed T -compact operator, (iii) ∃a < 1, b ≥ 0 such that kS1 xk ≤ akT xk + bkxk for all
x ∈ D(T ). Then S2 is (T + S1 )-compact and σess (T + S1 + S2 ) = σess (T + S1 ).
Prove this result using the following steps.
• Show kT xk ≤ k(T +S1 )xk+akT xk+bkxk and conclude (1−a)kT xk ≤ k(T +S1 )xk+bkxk.
Corollary 2.3.5. If $q \in L^2(\mathbb{R})$, then the essential spectrum of $-\frac{d^2}{dx^2} + q$ is $[0, \infty)$.
Proof of Theorem 2.3.4. First assume that the condition on the potential $q$ is not valid;
then there exist an $\varepsilon > 0$ and a sequence of points $\{y_n\}_{n=1}^\infty$ such that $|y_n| \to \infty$ and
$\int_{y_n}^{y_n+1} |q(x)|^2\, dx \ge \varepsilon$. Pick a function $\phi \in C_c^\infty(\mathbb{R})$ with the properties $\phi(x) \ge 1$ for $x \in [0, 1]$
and $\operatorname{supp}(\phi) \subset [-1, 2]$. Define the translated function $\phi_n(x) = \phi(x - y_n)$; then obviously
$\phi_n \in W^2(\mathbb{R})$ and $\|\phi_n''\| + \|\phi_n\| = \|\phi''\| + \|\phi\|$ is independent of $n$. Consequently, the assumption that $Q$ is $-\frac{d^2}{dx^2}$-compact implies that $\{Q\phi_n\}_{n=1}^\infty$ has a convergent subsequence, again
denoted by $\{Q\phi_n\}_{n=1}^\infty$, say $Q\phi_n \to f$. Observe that
$$\int_{\mathbb{R}} |q(x)\phi_n(x)|^2\, dx \ge \int_{y_n}^{y_n+1} |q(x)|^2\, dx \ge \varepsilon,$$
so that $\|f\| \ge \sqrt{\varepsilon} > 0$. On the other hand, for any fixed bounded interval $I$ of length 1 we
have $\phi_n|_I = 0$ for $n$ sufficiently large since $|y_n| \to \infty$, so that
$$\Big(\int_I |f(x)|^2\, dx\Big)^{\frac12} \le \Big(\int_I |f(x) - q(x)\phi_n(x)|^2\, dx\Big)^{\frac12} + \Big(\int_I |q(x)\phi_n(x)|^2\, dx\Big)^{\frac12} = \Big(\int_I |f(x) - q(x)\phi_n(x)|^2\, dx\Big)^{\frac12} \to 0, \qquad n \to \infty.$$
This shows $\int_I |f(x)|^2\, dx = 0$; hence, by filling $\mathbb{R}$ with such intervals, $\|f\| = 0$, which
contradicts $\|f\| \ge \sqrt{\varepsilon} > 0$.
Now assume that the assumption on $q$ is valid. We start with some general remarks on
$W^2(\mathbb{R})$. For $f \in W^2(\mathbb{R}) \subset C^1(\mathbb{R})$ (using the Sobolev embedding Lemma 6.3.1) we have, using
the Fourier inversion formula and the Cauchy-Schwarz inequality,
$$|f(x)| \le \tfrac12\sqrt2\,\big\|\lambda \mapsto \sqrt{1+\lambda^2}\,(\mathcal{F}f)(\lambda)\big\|$$
and similarly
$$|f'(x)| \le \tfrac12\sqrt2\,\big\|\lambda \mapsto \sqrt{1+\lambda^2}\,\lambda(\mathcal{F}f)(\lambda)\big\|.$$
Now, reasoning as in the proof of (3) ⇒ (4) for Theorem 2.2.6, we see
$$\big\|\lambda \mapsto \sqrt{1+\lambda^2}\,(\mathcal{F}f)(\lambda)\big\|^2 = \int_{\mathbb{R}} |(\mathcal{F}f)(\lambda)|^2\, d\lambda + \int_{\mathbb{R}} \lambda^2|(\mathcal{F}f)(\lambda)|^2\, d\lambda \le \frac32\int_{\mathbb{R}} |(\mathcal{F}f)(\lambda)|^2\, d\lambda + \frac12\int_{\mathbb{R}} \lambda^4|(\mathcal{F}f)(\lambda)|^2\, d\lambda = \frac32\|f\|^2 + \frac12\|f''\|^2,$$
and similarly
$$\big\|\lambda \mapsto \sqrt{1+\lambda^2}\,\lambda(\mathcal{F}f)(\lambda)\big\|^2 \le \frac12\|f\|^2 + \frac32\|f''\|^2,$$
so, with $C = \sqrt3$,
$$|f(x)| + |f'(x)| \le C\,\big(\|f''\| + \|f\|\big).$$
(So we have actually proved that the Sobolev embedding $W^2(\mathbb{R}) \subset C^1(\mathbb{R})$ is continuous.)
If $\{f_n\}_{n=1}^\infty$ is now a sequence in $W^2(\mathbb{R})$ such that $\|f_n\| + \|f_n''\| \le C_1$, then it follows that
$f_n$ and $f_n'$ are uniformly bounded; this in particular implies that $M = \{f_n \mid n \in \mathbb{N}\}$ is
equicontinuous. Since $M$ is also bounded in the supremum norm, it follows by the
Arzelà-Ascoli theorem that the closure of $M$ is compact in $C(\mathbb{R})$. In particular, there is a
convergent subsequence, which is also denoted by $\{f_n\}_{n=1}^\infty$. This result, with the local square
integrability condition on $q$, now gives for each fixed $N > 0$
$$\int_{-N}^N |q(x)\big(f_n(x) - f_m(x)\big)|^2\, dx \le \|f_n - f_m\|_\infty^2\int_{-N}^N |q(x)|^2\, dx \to 0, \tag{2.3.3}$$
$$\|(q - q_N)f\|^2 \le \varepsilon\big(C_2\|f''\|^2 + C_3\|f\|^2\big), \qquad \forall f \in W^2(\mathbb{R}).$$
The claim follows from the reasoning after Lemma 2.2.8 by replacing $q$ and $C$ by $q - q_N$
and $\varepsilon$ (taking e.g. the $\varepsilon$ in the proof of Theorem 2.2.6 equal to 1, so we can take $C_2 = \frac12$,
$C_3 = \frac52$).
Exercise 2.3.6. Consider the Schrödinger operator with a potential of the form $q_1 + q_2$, with
$q_1(x) = a$, $q_2(x) = b\chi_I(x)$, where $\chi_I$ is the indicator function of the set $I$ (i.e. $\chi_I(x) = 1$ if
$x \in I$ and $\chi_I(x) = 0$ if $x \notin I$) for $I$ the interval $I = [c, d]$. Use Exercise 2.3.3 to describe the
essential spectrum, see also Section 2.5.1.
is [0, ∞). So this is the case if q satisfies the condition in Theorem 2.3.4, which we now assume
throughout this section. This in particular means that any λ < 0 in the spectrum is isolated
and contained in the point spectrum by Theorem 6.5.5. In this case there is an eigenfunction
$f \in W^2(\mathbb{R})$ such that
$$-f''(x) + q(x)f(x) = \lambda f(x), \qquad \lambda < 0.$$
We count eigenvalues according to their multiplicity, i.e. the dimension of $\operatorname{Ker}(-\frac{d^2}{dx^2} + q - \lambda)$. In
least a one-dimensional space of bound states. There are many more explicit criteria on the
potential that imply explicit results for the dimension of the space of bound states.
We start with a general statement.
Proposition 2.4.1. Let $(T, D(T))$ be a self-adjoint operator on a Hilbert space $H$ such that
$\sigma_{\mathrm{ess}}(T) \subset [0, \infty)$. Then $T$ has negative eigenvalues if and only if there exists a non-trivial
subspace $M \subset D(T) \subset H$ such that $\langle Tx, x\rangle < 0$ for all $x \in M\setminus\{0\}$.

Proof. If $T$ has negative eigenvalues, then put $M = \bigoplus \operatorname{Ker}(T - \lambda)$, summing over all $0 > \lambda \in \sigma_p(T)$ and where we take only finite linear combinations.
If $T$ has no negative eigenvalues, then it follows by Theorem 6.5.5(ii) that $\sigma(T) \subset [0, \infty)$.
By the Spectral Theorem 6.4.1, it follows that
$$\langle Tx, x\rangle = \int_{\sigma(T)} \lambda\, dE_{x,x}(\lambda), \qquad \forall x \in D(T),$$
and since $E_{x,x}$, $x \ne 0$, is a positive measure and $\sigma(T) \subset [0, \infty)$, the right hand side is obviously
non-negative. So $M$ is trivial.
This principle can be used to establish a criterion for the occurrence of bound states.
$$\|f'\|^2 = 4\alpha^4\int_{\mathbb{R}} (x-a)^2 e^{-2\alpha^2(x-a)^2}\, dx = \frac{4\alpha^4}{2^{3/2}\alpha^3}\int_{\mathbb{R}} y^2 e^{-y^2}\, dy = \frac{4\alpha^4}{2^{3/2}\alpha^3}\cdot\frac{\sqrt\pi}{2} = \alpha\sqrt{\frac{\pi}{2}},$$
eigenvalue.
The implicit assumption in Corollary 2.4.3 is that $q$ is integrable over the set $B$. Note
that the standing assumption in this section is the assumption of Theorem 2.3.4. If $B$ is
a bounded set, then the Hölder inequality implies $\int_B |q(x)|\, dx \le \sqrt{|B|}\,\big(\int_B |q(x)|^2\, dx\big)^{1/2}$, so
that the local square integrability already implies this assumption. Here $|B|$ denotes the
(Lebesgue) measure of the set $B$.
Proof. Note
$$\int_{\mathbb{R}} q(x)e^{-2\alpha^2x^2}\, dx \le \int_B q(x)e^{-2\alpha^2x^2}\, dx \to \int_B q(x)\, dx < 0, \qquad \text{as } \alpha \downarrow 0,$$
so that $\frac{1}{\alpha}\int_{\mathbb{R}} q(x)e^{-2\alpha^2x^2}\, dx$ can be made arbitrarily negative; hence we can apply Theorem
2.4.2. Note that interchanging limit and integration is justified by the Dominated Convergence Theorem 6.1.3.
In particular, the special case B = R of Corollary 2.4.3 gives the following result.
Corollary 2.4.4. For $q \in L^1(\mathbb{R})$ such that $\int_{\mathbb{R}} q(x)\, dx < 0$ the Schrödinger operator $-\frac{d^2}{dx^2} + q$ has at least one negative eigenvalue.
We also give without proof an estimate on the number of negative eigenvalues for the
Schrödinger operator.
Theorem 2.4.5. Assume that $q$ satisfies the condition in Theorem 2.3.4 and additionally that
$\int_{\mathbb{R}} |x|\,|q(x)|\, dx < \infty$; then the number $N$ of eigenvalues of the Schrödinger operator is bounded
by
$$N \le 2 + \int_{\mathbb{R}} |x|\,|q(x)|\, dx.$$
The argument used in the proof of Theorem 2.4.5 is an extension of a comparison argument
for a Schrödinger operator on a finite interval, see [4].
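As a hedged illustration of this bound (the choice of potentials is ours), the following Python snippet evaluates $2 + \int_{\mathbb{R}} |x|\,|q(x)|\,dx$ for the potentials $a/\cosh^2(x)$ treated in Section 2.5.2 and compares it with the exact number of negative eigenvalues found there; the bound holds but is not sharp.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# q(x) = a / cosh^2(x); its negative eigenvalues are counted in Section 2.5.2
# via beta_n = sqrt(1/4 - a) - 1/2 - n > 0
for a in (-2.0, -6.0, -12.0):
    bound = 2 + quad(lambda x: abs(x) * abs(a) / np.cosh(x)**2, -np.inf, np.inf)[0]
    exact = sum(1 for n in range(100) if np.sqrt(0.25 - a) - 0.5 - n > 0)
    print(a, exact, bound)
\end{verbatim}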
Exercise 2.5.1. Show that if the potential $q$ satisfies $q(x) \ge a$ almost everywhere, then
$(-\infty, a)$ is contained in the resolvent set $\rho(L)$. Rephrased, the spectrum $\sigma(L)$ is contained in
$[a, \infty)$. Prove this using the following steps.
• Show that $\langle (L - \lambda)f, f\rangle = \langle -f'' + qf - \lambda f, f\rangle = \|f'\|^2 + \int_{\mathbb{R}} (q(x) - \lambda)|f(x)|^2\, dx$ for all $f \in W^2(\mathbb{R})$.
• Conclude that for $\lambda < a$, $(a - \lambda)\|f\| \le \|(L - \lambda)f\|$ for all $f \in W^2(\mathbb{R})$. Use Theorem
6.5.1 to conclude that $\lambda \in \rho(L)$.
Exercise 2.5.2. Assume that the potential q ∈ L2 (R) satisfies q(x) ≥ a for a < 0. Show
that [a, 0] contains only finitely many points of the spectrum of the corresponding Schrödinger
operator. (Hint: Use Theorem 6.5.5 and Exercise 2.5.1.)
Now consider q(x) = a + bχI (x), b ∈ R, where we take for convenience I = [0, 1], and
denote the corresponding Schrödinger operator again by L. By Exercise 2.3.3 we know that
σess (L) = [a, ∞). In case b > 0, we have σess (L) = [a, ∞) by combining Exercise 2.3.3 and
Theorem 2.3.4. Then Exercise 2.5.1 implies σ(L) = [a, ∞).
We now consider the case a = 0, b < 0. By Corollary 2.4.3 or Corollary 2.4.4, there is
negative point spectrum. On the other hand, the essential spectrum is [0, ∞) and by Exercise
2.5.1 the spectrum is contained in [b, ∞). We look for negative spectrum, which, by Theorem
6.5.5, is contained in the point spectrum and consists of isolated points. We put $\lambda = -\gamma^2$,
$\sqrt{|b|} > \gamma > 0$, and we try to find solutions to $Lf = \lambda f$, or
$$f'' - \gamma^2 f = 0, \qquad x < 0,$$
$$f'' - (\gamma^2 + b)f = 0, \qquad 0 < x < 1,$$
$$f'' - \gamma^2 f = 0, \qquad x > 1.$$
The first and last equations imply $f(x) = A_-\exp(\gamma x) + B_-\exp(-\gamma x)$, $x < 0$, and $f(x) =
A_+\exp(\gamma x) + B_+\exp(-\gamma x)$, $x > 1$, for constants $A_\pm, B_\pm \in \mathbb{C}$. Since we need $f \in L^2(\mathbb{R})$,
we see that $B_- = 0$ and $A_+ = 0$, and because an eigenfunction (or bound state) can be
changed by multiplication by a constant, we take $A_- = 1$, or $f(x) = \exp(\gamma x)$, $x < 0$, and
$f(x) = B_+\exp(-\gamma x)$, $x > 1$. Now $b + \gamma^2 < 0$, and we put $-\omega^2 = b + \gamma^2$, $\omega > 0$. It is left as
an exercise to check that $\gamma^2 = -b$, i.e. $\omega = 0$, does not lead to an eigenfunction. So the second
equation gives $f(x) = A\cos(\omega x) + B\sin(\omega x)$, $0 < x < 1$. Since an eigenfunction $f \in W^2(\mathbb{R}) \subset
C^1(\mathbb{R})$, we need to choose $A$, $B$, $B_+$ such that $f$ is $C^1$ at 0 and at 1. At 0 we need $1 = A$
for continuity and $\gamma = \omega B$ for continuity of the derivative. So $f(x) = \cos(\omega x) + \frac{\gamma}{\omega}\sin(\omega x)$,
$0 < x < 1$, and we need $\cos(\omega) + \frac{\gamma}{\omega}\sin(\omega) = B_+e^{-\gamma}$ for continuity at 1 (this then fixes $B_+$)
and $-\omega\sin(\omega) + \gamma\cos(\omega) = -\gamma B_+e^{-\gamma}$ for a continuous derivative at 1. In order that this has
a solution we require
$$-\omega\sin(\omega) + \gamma\cos(\omega) = -\gamma\cos(\omega) - \frac{\gamma^2}{\omega}\sin(\omega) \quad\Longrightarrow\quad \tan(\omega) = \frac{2\gamma\omega}{\omega^2 - \gamma^2}.$$
We take the potential $q(x) = a\cosh^{-2}(px)$, $a, p \in \mathbb{R}$, see Figure 2.1 for the case $a = -2$, $p = 1$. Since $q \in L^2(\mathbb{R})$, it follows from Corollary 2.2.7 and Corollary 2.3.5 that the corresponding
Schrödinger operator $L = -\frac{d^2}{dx^2} + q$ is self-adjoint and $\sigma_{\mathrm{ess}}(L) = [0, \infty)$. Since for $a > 0$ we
have $\langle Lf, f\rangle = \|f'\|^2 + \int_{\mathbb{R}} q(x)|f(x)|^2\, dx \ge 0$ for all $f \in W^2(\mathbb{R})$, it follows that $\sigma(L) = [0, \infty)$ as
well. Since $\cosh^{-2} \in L^1(\mathbb{R})$, we can use Corollary 2.4.4 to see that for $a < 0$ there is at least
one negative eigenvalue.
We consider a first example in the following exercise.
Exercise 2.5.5. Show that $f(x) = (\cosh(px))^{-1}$ satisfies
$$-f''(x) - \frac{2p^2}{\cosh^2(px)}f(x) = -p^2 f(x),$$
or $-p^2 \in \sigma_p(L)$ for the case $a = -2p^2$.
In Exercise 2.5.8 you have to show that in this case $\sigma = \{-p^2\}\cup[0, \infty)$ and that $\operatorname{Ker}(L + p^2)$
is one-dimensional.
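A quick symbolic check of Exercise 2.5.5 (purely illustrative) can be done with SymPy:
\begin{verbatim}
import sympy as sp

x, p = sp.symbols('x p', real=True, positive=True)
f = 1 / sp.cosh(p * x)
lhs = -sp.diff(f, x, 2) - 2 * p**2 / sp.cosh(p * x)**2 * f
print(sp.simplify((lhs + p**2 * f).rewrite(sp.exp)))   # prints 0, i.e. -f'' - 2p^2 sech^2 f = -p^2 f
\end{verbatim}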
[Figure 2.1: the potential $q(x) = -2\cosh^{-2}(x)$.]
Exercise 2.5.6. Consider a general potential $q \in L^2(\mathbb{R})$ and the corresponding Schrödinger
operator $L_1$. Set $f_p(x) = f(px)$ for $p \ne 0$, and let $L_p$ denote the Schrödinger operator with
potential $q_p$. Show that $\lambda \in \sigma(L_1) \iff p^2\lambda \in \sigma(L_p)$ and similarly for the point spectrum and
the essential spectrum.
Because of Exercise 2.5.6 we can restrict ourselves to the case $p = 1$. We transform the
Schrödinger eigenvalue equation into the hypergeometric differential equation
$$z(1 - z)\, y_{zz} + \big(c - (1 + a + b)z\big)\, y_z - ab\, y = 0 \tag{2.5.1}$$
for complex parameters $a$, $b$, $c$. In Exercise 2.5.7 we describe solutions to (2.5.1). This exercise
is not essential, and is best understood within the context of differential equations on $\mathbb{C}$ with
regular singular points.
Put $\lambda = \gamma^2$ with $\gamma$ real or purely imaginary, and consider $-f''(x) + a\cosh^{-2}(x)f(x) = \gamma^2 f(x)$. We put $z = \frac12(1 - \tanh(x)) = (1 + \exp(2x))^{-1}$, where $\tanh(x) = \sinh(x)/\cosh(x)$. Note
that $-\infty$ is mapped to 1 and $\infty$ is mapped to 0, and $x \mapsto z$ is invertible with $\frac{dz}{dx} = \frac{-1}{2\cosh^2(x)}$.
So we have to deal with (2.5.1) on the interval $(0, 1)$.
Before transforming the differential equation, note that $\tanh(x) = 1 - 2z$ and $\cosh^{-2}(x) = 4z(1 - z)$.
4. Use Exercise 2.5.5 to obtain an expression for $\cosh^{-1}(x)$ in terms of a hypergeometric
function.
Since we are interested in negative eigenvalues we assume that $a < 0$ and $\lambda = \gamma^2 < 0$; we
put $\gamma = i\beta$ with $\beta > 0$. Then the eigenfunction $f$ satisfies
$$|f(x)|^2 = \frac{1}{\cosh^{2\beta}(x)}\Big|y\Big(\frac{1}{1 + e^{2x}}\Big)\Big|^2,$$
so that square integrability for $x \to \infty$ gives a condition on $y$ at 0. We call a function $f$ square
integrable at $\infty$ if $\int_a^\infty |f(x)|^2\, dx < \infty$ for some $a \in \mathbb{R}$. Note that any $f \in L^2(\mathbb{R})$ is square
integrable at $\infty$. Square integrability at $-\infty$ is defined analogously. Note that ${}_2F_1\big({a,\,b \atop c}; z\big)$
This can only happen if the $\Gamma$-functions in the denominator have poles. Since the $\Gamma$-functions
have poles at $\{\cdots, -2, -1, 0\}$ and $\beta > 0$ and $a < 0$, we see that this can only happen if
$\frac12 + \beta - \sqrt{\frac14 - a} = -n \le 0$ for $n$ a non-negative integer, or $\beta = \sqrt{\frac14 - a} - \frac12 - n$.
It remains to check that in this case the eigenfunctions are indeed elements of the Sobolev
space W 2 (R). We leave this to the reader.
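A small Python sketch (illustrative only) lists the negative eigenvalues $\lambda_n = -\beta_n^2$ obtained from $\beta_n = \sqrt{\frac14 - a} - \frac12 - n > 0$ for a given $a < 0$:
\begin{verbatim}
import numpy as np

def negative_eigenvalues(a):
    assert a < 0
    betas, n = [], 0
    while True:
        beta = np.sqrt(0.25 - a) - 0.5 - n
        if beta <= 0:
            break
        betas.append(beta)
        n += 1
    return [-beta**2 for beta in betas]

print(negative_eigenvalues(-2.0))   # a = -2: single eigenvalue -1, cf. Exercise 2.5.5 with p = 1
print(negative_eigenvalues(-6.0))   # a = -6 = -2*(2+1): eigenvalues -4 and -1
\end{verbatim}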
Exercise 2.5.8. Show that the number of (strictly) negative eigenvalues of the Schrödinger
operator for the potential $a/\cosh^2(x)$ is given by the integer $N$ satisfying $N(N-1) < -a < N(N+1)$. Show that for the special case $-a = m(m+1)$, $m \in \mathbb{N}$, one of the points $\beta$
corresponds to zero, and in this case $N = m$. Conclude that the Schrödinger operator in Exercise 2.5.5 has spectrum $\{-p^2\}\cup[0, \infty)$ and that there is a one-dimensional
space of eigenfunctions, or bound states, for the eigenvalue $-p^2$.
The eigenvalue equation $f''(x) + (\lambda - a\exp(-2|x|))f(x) = 0$ can be transformed into a well-known differential equation. For the transformation we consider the intervals $(-\infty, 0)$ and
$(0, \infty)$ separately. For $x < 0$ we put $z = \sqrt{-a}\exp(x)$, and for $x > 0$ we put $z = \sqrt{-a}\exp(-x)$,
and put $y(z) = f(x)$; then the eigenvalue equation is transformed into the Bessel differential
equation
$$z^2 y_{zz} + z\, y_z + (z^2 - \nu^2)\, y = 0,$$
where we have put $\lambda = -\nu^2$. Note that $(-\infty, 0)$ is transformed into $(0, \sqrt{-a})$ and $(0, \infty)$ into
$(\sqrt{-a}, 0)$, and so we need to glue solutions together at $x = 0$, or $z = \sqrt{-a}$, in a $C^1$-fashion.
Exercise 2.5.9. 1. Check the details of the above transformation.

2. Define the Bessel function
$$J_\nu(z) = \frac{(z/2)^\nu}{\Gamma(\nu+1)}\sum_{n=0}^\infty \frac{(-1)^n z^{2n}}{(\nu+1)_n\, 4^n\, n!} = \sum_{n=0}^\infty \frac{(-1)^n}{n!\,\Gamma(\nu+n+1)}\Big(\frac z2\Big)^{\nu+2n}.$$
Show that the power series in the middle defines an entire function. Show that $J_\nu(z)$ is
a solution to the Bessel differential equation. Conclude that $J_{-\nu}$ is also a solution.

3. Show that for $\nu \notin \mathbb{Z}$ the Bessel functions $J_\nu$ and $J_{-\nu}$ are linearly independent solutions
to the Bessel differential equation. (Hint: show that for any pair of solutions $y_1$, $y_2$ of the Bessel
differential equation the Wronskian $W(z) = y_1(z)y_2'(z) - y_1'(z)y_2(z)$ satisfies a first order
differential equation that leads to $W(z) = C/z$. Show that for $y_1(z) = J_\nu(z)$, $y_2(z) = J_{-\nu}(z)$ the constant $C = -2\sin(\nu\pi)/\pi$.)
[Figure: graph of the potential $x \mapsto a\exp(-2|x|)$.]
The reduction to the Bessel differential equation works for arbitrary $a$, but we now stick
to the case $a < 0$. In case $a < 0$ we want to discuss the discrete spectrum. We consider
eigenfunctions $f$ for the eigenvalue $\lambda = -\nu^2 < 0$, so we have $\nu \in \mathbb{R}$ and we assume $\nu > 0$.
We now put $c = \sqrt{-a}$. For negative $x$ it follows that $|f(x)|^2 = |y(ce^x)|^2$, and since $J_\nu(z) =
\frac{(z/2)^\nu}{\Gamma(\nu+1)}\big(1 + O(z)\big)$ as $z \to 0$ for $\nu \notin -\mathbb{N}$, it follows that $f$ is a multiple of $J_\nu(ce^x)$ for $x < 0$
in order to be square integrable at $-\infty$. In order to extend $f$ to a function for $x > 0$, we put
$f(x) = A\,J_\nu(ce^{-x}) + B\,J_{-\nu}(ce^{-x})$, so that it is a solution to the eigenvalue equation for $x > 0$.
Since $f$ has to be in $W^2(\mathbb{R}) \subset C^1(\mathbb{R})$ we need to impose a $C^1$-transition at $x = 0$; this gives
$$\begin{cases} A\,J_\nu(c) + B\,J_{-\nu}(c) = J_\nu(c),\\ -cA\,J_\nu'(c) - cB\,J_{-\nu}'(c) = cJ_\nu'(c) \end{cases} \;\Longrightarrow\; \begin{pmatrix} J_\nu(c) & J_{-\nu}(c)\\ J_\nu'(c) & J_{-\nu}'(c)\end{pmatrix}\begin{pmatrix} A\\ B\end{pmatrix} = \begin{pmatrix} J_\nu(c)\\ -J_\nu'(c)\end{pmatrix}$$
$$\;\Longrightarrow\; \begin{pmatrix} A\\ B\end{pmatrix} = -\frac{\pi}{2\sin(\nu\pi)}\begin{pmatrix} J_{-\nu}'(c) & -J_{-\nu}(c)\\ -J_\nu'(c) & J_\nu(c)\end{pmatrix}\begin{pmatrix} J_\nu(c)\\ -J_\nu'(c)\end{pmatrix},$$
since the determinant of the matrix is precisely the Wronskian of the Bessel functions $J_\nu$ and
$J_{-\nu}$, see Exercise 2.5.9(3), and where we now assume $\nu \notin \mathbb{Z}$.
Similarly, for $x > 0$ we have $|f(x)|^2 = |y(ce^{-x})|^2$, which is square integrable at $\infty$ if and only if $y$
equals $J_\nu$, so that $f$ is a multiple of $J_\nu(ce^{-x})$ for $x > 0$ in order to be square integrable. So
we can only have a square integrable eigenfunction (or bound state) in case $B$ in the previous
calculation vanishes, or $J_\nu(c)J_\nu'(c) = 0$. Given $a$, hence $c$, we have to solve this equation for $\nu$,
yielding the corresponding eigenvalues $\lambda = -\nu^2$ for the Schrödinger operator. This equation
cannot be solved easily in a direct fashion, and requires knowledge about Bessel functions,
their derivatives and their zeros. Since the negative spectrum has to be contained in $[-c^2, 0]$ we
conclude that the smallest positive zero of $\nu \mapsto J_\nu(c)$ and of $\nu \mapsto J_\nu'(c)$ is less than $c$. (This is a
classical result, due to Riemann.)
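Given $a$, the condition $J_\nu(c)J_\nu'(c) = 0$ can be solved numerically for $\nu$. The following Python sketch (the value of $c$ is an arbitrary choice) scans $\nu \in (0, c)$ for sign changes of $J_\nu(c)$ and of $J_\nu'(c)$ and returns the candidate eigenvalues $\lambda = -\nu^2$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv, jvp

c = 5.0                                    # i.e. a = -25

def find_roots(func, lo=1e-6, hi=None, samples=2000):
    hi = hi if hi is not None else c       # negative spectrum lies in [-c^2, 0], so nu <= c
    nus = np.linspace(lo, hi, samples)
    vals = np.array([func(nu) for nu in nus])
    return [brentq(func, n0, n1)
            for n0, n1, v0, v1 in zip(nus, nus[1:], vals, vals[1:]) if v0 * v1 < 0]

nus = sorted(find_roots(lambda nu: jv(nu, c)) + find_roots(lambda nu: jvp(nu, c)))
print([-nu**2 for nu in nus])              # candidate eigenvalues lambda = -nu^2
\end{verbatim}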
Chapter 3
Scattering and wave operators

has a unique solution $\psi(t)$, $t \in \mathbb{R}$, given by $\psi(t) = \exp(-itL)\psi_0$, which satisfies $\|\psi(t)\| = \|\psi_0\|$.
Proof. Let us first show existence. We use the Spectral Theorem 6.4.1 to establish the operator
U (t) = exp(−itL), which is a one-parameter group of unitary operators on the Hilbert space
L2 (R). Define ψ(t) = U (t)ψ0 . This is obviously defined for all t ∈ R and ψ(0) = U (0)ψ0 = ψ0 ,
since U (0) = 1 in B(H). Note that this can be defined for any ψ0 , i.e. for this construction
the requirement ψ0 ∈ D(L) is not needed.
In order to show that the differential equation is fulfilled we consider
$$\Big\|\frac1h\big(U(t+h)\psi_0 - U(t)\psi_0\big) + iLU(t)\psi_0\Big\|^2,$$
and to see that $U(t)\psi_0 \in D(L)$ we note that, by the Spectral Theorem 6.4.1, it suffices to
note that $\int_{\mathbb{R}} \lambda^2\, dE_{U(t)\psi_0, U(t)\psi_0}(\lambda) < \infty$, which follows from
since ψ0 ∈ D(L), U (t) commutes with L and U (t) is unitary by the Spectral Theorem 6.4.1.
Since all operators are functions of the self-adjoint operator L, it follows that this equals
$$\Big\|\frac1h\big(e^{-i(t+h)L} - e^{-itL}\big)\psi_0 + iLe^{-itL}\psi_0\Big\|^2 = \int_{\mathbb{R}} \Big|\frac1h\big(e^{-i(t+h)\lambda} - e^{-it\lambda}\big) + i\lambda e^{-it\lambda}\Big|^2\, dE_{\psi_0,\psi_0}(\lambda)$$
$$= \int_{\mathbb{R}} \Big|e^{-it\lambda}\Big(\frac1h\big(e^{-ih\lambda} - 1\big) + i\lambda\Big)\Big|^2\, dE_{\psi_0,\psi_0}(\lambda) = \int_{\mathbb{R}} \Big|\frac1h\big(e^{-ih\lambda} - 1\big) + i\lambda\Big|^2\, dE_{\psi_0,\psi_0}(\lambda).$$
Since $\frac1h\big(e^{-ih\lambda} - 1\big) + i\lambda \to 0$ as $h \to 0$, we only need to show that we can interchange the limit
with the integration. In order to do so we use $\big|\frac1h\big(e^{-i\lambda h} - 1\big)\big| \le |\lambda|$, so that the integrand can
be estimated by $4|\lambda|^2$, independent of $h$. So we require $\int_{\mathbb{R}} |\lambda|^2\, dE_{\psi_0,\psi_0}(\lambda) < \infty$, or $\psi_0 \in D(L)$,
see the Spectral Theorem 6.4.1.
To show uniqueness, assume that $\psi(t)$ and $\phi(t)$ are solutions to (3.1.1); then their difference
$\varphi(t) = \psi(t) - \phi(t)$ is a solution to $i\varphi'(t) = L\varphi(t)$, $\varphi(0) = 0$. Consider
$$\frac{d}{dt}\|\varphi(t)\|^2 = \frac{d}{dt}\langle\varphi(t), \varphi(t)\rangle = \langle\varphi'(t), \varphi(t)\rangle + \langle\varphi(t), \varphi'(t)\rangle = \langle -iL\varphi(t), \varphi(t)\rangle + \langle\varphi(t), -iL\varphi(t)\rangle = -i\big(\langle L\varphi(t), \varphi(t)\rangle - \langle\varphi(t), L\varphi(t)\rangle\big) = 0$$
since $L$ is self-adjoint. So $\|\varphi(t)\|$ is constant and $\|\varphi(0)\| = 0$; it follows that $\varphi(t) = 0$ for all $t$
and uniqueness follows.
Exercise 3.1.2. We consider the setting of Theorem 3.1.1. Show that ψ(t) = e−iλt u, u ∈
L2 (R), λ ∈ R fixed, is a solution to the time-dependent Schrödinger equation (3.1.1) if and
only if u ∈ D(L) is an eigenvector of L for the eigenvalue λ.
the potential q satisfies the conditions of Theorem 2.3.4. However, it should be noted that the
set-up in this section is much more general and works for any two operators L and L0 which
are (possibly unbounded) self-adjoint operators on a Hilbert space H, and in this situation
we now continue.
Definition 3.2.1. The solution $\psi$ to (3.1.1) for a general self-adjoint operator $(L, D(L))$ on a
Hilbert space $H$ has an incoming asymptotic state $\psi^-(t) = \exp(-itL_0)\psi^-(0)$ for a self-adjoint
operator $(L_0, D(L_0))$ on $H$ if
$$\lim_{t\to-\infty}\|\psi(t) - \psi^-(t)\| = 0.$$
The solution $\psi$ to (3.1.1) has an outgoing asymptotic state $\psi^+(t) = \exp(-itL_0)\psi^+(0)$ if
$$\lim_{t\to\infty}\|\psi(t) - \psi^+(t)\| = 0.$$
The solution $\psi$ to (3.1.1) is a scattering state if it has an incoming asymptotic state and an
outgoing asymptotic state.
Define the operator $W(t) = e^{itL}e^{-itL_0}$, which is an element of $B(H)$. Note that $t \mapsto e^{itL}$ and
$t \mapsto e^{-itL_0}$ are one-parameter groups of unitary operators. In particular, they are isometries,
i.e. they preserve norms. Also note that in general $e^{itL}e^{-itL_0} \ne e^{it(L-L_0)}$, unless $L$ and $L_0$ commute.
Assuming the solution $\psi$ to (3.1.1) has an incoming state, we have
$$\|W(t)\psi^-(0) - \psi(0)\| = \|e^{itL}e^{-itL_0}\psi^-(0) - \psi(0)\| = \|e^{-itL_0}\psi^-(0) - e^{-itL}\psi(0)\| = \|\psi^-(t) - \psi(t)\| \to 0, \qquad t \to -\infty.$$
Definition 3.2.2. The wave operators $W^\pm$ are defined as the strong limits of $W(t)$ as $t \to \pm\infty$. So their domains are
$$D(W^\pm) = \{f \in H \mid \lim_{t\to\pm\infty} W(t)f \text{ exists}\}.$$
Note that for a scattering state $\psi$, the wave operators completely determine the incoming
and outgoing asymptotic states. Indeed, assume $W^\pm\psi^\pm(0) = \psi(0)$; then we can define $\psi^\pm(t) =
\exp(-itL_0)\psi^\pm(0)$ and $\psi(t) = \exp(-itL)\psi(0)$. It then follows that
$$\|\psi(t) - \psi^\pm(t)\| = \|e^{-itL}\psi(0) - e^{-itL_0}\psi^\pm(0)\| = \|\psi(0) - W(t)\psi^\pm(0)\| \to 0, \qquad t \to \pm\infty,$$
so that $\psi^\pm(t)$ are the incoming and outgoing asymptotic states. In physical models it is
important to have as many scattering states as possible, preferably $\operatorname{Ran}(W^\pm)$ is the whole
Hilbert space $H$. Note that scattering states have initial values in $\operatorname{Ran} W^+ \cap \operatorname{Ran} W^-$.
Proposition 3.2.4. Assume that $L$ has an eigenvalue $\lambda$ with eigenvector $u \in D(L)$, and
consider the solution $\psi(t) = \exp(-i\lambda t)u$ to (3.1.1), see Exercise 3.1.2. Then $\psi$ has no (incoming or outgoing) asymptotic states unless $u$ is an eigenvector for $L_0$ for the same eigenvalue
$\lambda$.
Proof. Assume ψ(t) = exp(−iλt)u has an incoming asymptotic state, so that there exists a
v ∈ H such that W − v = u or limt→−∞ W (t)v = u = ψ(0). This means k exp(−itL0 )v −
exp(−itL)uk = k exp(−itL0 )v − exp(−itλ)uk → 0 as t → −∞, and using the isometry
property this gives k v − exp(−it(λ − L0 ))uk → 0 as t → −∞. For s ∈ R fixed we get
using that $W(t)$ and $W(s)$ are isometries. For $\varepsilon > 0$ arbitrary, we can find $N \in \mathbb{N}$ such
that for $n \ge N$ we have $\|f - f_n\| \le \frac\varepsilon2$, since $f_n \to f$ in $H$. And since $f_N \in D(W^+)$, the limit
$\lim_{t\to\infty} W(t)f_N$ exists, so there exists $T > 0$ such that for $s, t \ge T$ we have $\|\big(W(t) - W(s)\big)f_N\| \le \frac\varepsilon2$. So $\|W(t)f - W(s)f\| \le \varepsilon$; hence $\lim_{t\to\infty} W(t)f$ exists, and $f \in D(W^+)$.
Similarly, $D(W^-)$ is closed.
Since the wave operator $W^-$ is an isometry from its domain to its range, it is injective, and
we can write
$$\psi^+(0) = S\psi^-(0), \qquad S = (W^+)^{-1}W^-.$$
$S$ is the scattering operator; it relates the incoming asymptotic state $\psi^-$ to the outgoing
asymptotic state $\psi^+$ for a scattering state $\psi$. Note that $S = (W^+)^{-1}W^-$ with
Theorem 3.2.7. D(S) = D(W − ) if and only if Ran(W − ) ⊂ Ran(W + ) and Ran(S) = D(W + )
if and only if Ran(W + ) ⊂ Ran(W − ).
So we can rephrase Corollary 3.2.8 as S is unitary if and only if all solutions with an
incoming asymptotic state also have an outgoing asymptotic state and vice versa. Loosely
speaking, there are only scattering states.
Proof of Theorem 3.2.7. Consider the first statement. If Ran(W − ) ⊂ Ran(W + ), then D(S) =
{f ∈ D(W − ) | W − f ∈ Ran(W + )} = D(W − ) since the condition is true. Conversely, if
D(S) = D(W − ), we have by definition W − f ∈ Ran(W + ) for all f ∈ D(W − ), or Ran(W − ) ⊂
Ran(W + ).
For the second statement, note that Sf = g means f ∈ D(S) and W + g = W − f , so
Ran(S) = {g ∈ D(W + ) | W + g ∈ Ran(W − )}. So if Ran(W + ) ⊂ Ran(W − ), the condition is
void and Ran(S) = D(W + ). Conversely, if Ran(S) = D(W + ), we have W + g ∈ Ran(W − ) for
all g ∈ D(W + ) or Ran(W + ) ⊂ Ran(W − ).
The importance of the wave operators is that they can be used to describe possible unitary
equivalences of the operators L and L0 .
By Theorem 3.2.6 the subspaces D(W ± ), Ran(W ± ) are closed, and since orthocomple-
ments are closed as well, we can rephrase the first part of Theorem 3.2.11.
Corollary 3.2.12. D(W ± ) and D(W ± )⊥ reduce e−itL0 and Ran(W ± ) and Ran(W ± )⊥ reduce
e−itL .
Proof of Theorem 3.2.11. Take f ∈ D(W + ) and put g = W + f and consider
In particular, e−isL0 f ∈ D(W + ) and W + e−isL0 f = e−isL g = e−isL W + f . This proves that
e−isL0 preserves D(W + ) and that e−isL preserves Ran(W + ) and W + e−isL0 = e−isL W + .
The statement for W − is proved analogously.
In order to see that these spaces also reduce L and L0 we need to consider the orthogonal
projections on D(W ± ) and Ran(W ± ) in relation to the domains of L and L0 . We use the
characterisation of the domain of $L$, respectively $L_0$, as those $f \in H$ for which the limit
$\lim_{t\to0}\frac1t\big(\exp(-itL)f - f\big)$, respectively $\lim_{t\to0}\frac1t\big(\exp(-itL_0)f - f\big)$, exists, see Stone's Theorem
6.4.2.
Theorem 3.2.13. D(W + ) and D(W − ) reduce L0 and Ran(W + ) and Ran(W − ) reduce L.
In view of Theorems 3.2.11 and 3.2.13 we may expect that the wave operators intertwine
L and L0 . This is almost true, and this is the content of the following Theorem 3.2.15.
Note that this is a statement for generally unbounded operators, so that this statement
also involves the domains of the operators. In case L0 has trivial kernel, we see that W +
and W − intertwine the self-adjoint operators. In particular, in case the wave operators are
unitary, we see that L and L0 are unitarily equivalent, and so by Exercise 3.2.9 have the same
spectrum.
Proof. Observe first that Ker(L0 )⊥ = Ran(L0 ). Indeed, we have for arbitrary f ∈ Ker(L0 )
and for arbitrary g ∈ D(L0 ) the identity 0 = hL0 f, gi = hf, L0 gi, since L0 is self-adjoint.
Next we note that, with the notation P ± for the orthogonal projection on the domains
D(W ± ) for the wave operators as in Theorem 3.2.13, we have P P ± P = P ± P . Indeed, this is
obviously true on Ker(L0 ) since both sides are zero. For any f ∈ Ran(L0 ) put f = L0 g, so that
P P + P f = P P + f = P P + L0 g = P L0 P + g = L0 P + g = P + L0 g = P + f = P + P f by Theorem
3.2.13 (twice) and the observation P L0 = L0 . So the result follows for any f ∈ Ran(L0 ),
and since orthogonal projections are bounded operators the result follows for Ran(L0 ) by
continuity of the projections. The case for P − is analogous.
We first show that LW + P ⊃ W + L0 . So take f ∈ D(W + L0 ) = {f ∈ D(L0 ) | L0 f ∈
D(W + )}. Then f = P f +(1−P )f , so that f ∈ D(L0 ), (1−P )f ∈ Ker(L0 ) ⊂ D(L0 ) shows that
P f ∈ D(L0 ). Since with D(W + ), also D(W + )⊥ , reduces L0 , we see that (1 − P + )P f ∈ D(L0 )
and
L0 (1 − P + )P f = (1 − P + )L0 P f = (1 − P + )L0 f
since L0 P f = L0 f + L0 (1 − P )f = L0 f as 1 − P projects onto Ker(L0 ). Since f ∈ D(W + L0 ),
we have L0 f ∈ D(W + ), so that (1 − P + )L0 f = 0. We conclude that (1 − P + )P f ∈ Ker(L0 )
and P (1 − P + )P f = 0 or P f = P 2 f = P P + P f = P + P f , where the last equality follows from
the second observation in this proof. Now P f = P + P f says P f ∈ D(W + ) or f ∈ D(W + P ).
As a next step we show that W + P f ∈ D(L), and for this we use Stone’s Theorem 6.4.2.
So, using Theorem 3.2.11,
$$\frac1t\big(e^{-itL} - 1\big)W^+Pf = W^+\,\frac1t\big(e^{-itL_0} - 1\big)Pf. \tag{3.2.1}$$
Since $Pf \in D(L_0)$, $\lim_{t\to0}\frac1t\big(e^{-itL_0} - 1\big)Pf$ exists, and since the domain of $W^+$ is closed it
follows that the right hand side has a limit $-iW^+L_0Pf$ as $t \to 0$. So the left hand side has a
limit, and by Stone's Theorem 6.4.2, we have $W^+Pf \in D(L)$ and the limit is $-iLW^+Pf$. So
$f \in D(LW^+P)$ and $LW^+Pf = W^+L_0Pf$, and since $L_0Pf = L_0f$ we have $W^+L_0 \subset LW^+P$.
Conversely, to show $LW^+P \subset W^+L_0$, take $f \in D(LW^+P)$, or $Pf \in D(W^+)$ and $W^+Pf \in D(L)$. So, again by Stone's Theorem 6.4.2, $\lim_{t\to0}\frac1t(e^{-itL} - 1)W^+Pf$ exists, and since (3.2.1)
is valid, we see that $W^+\frac1t(e^{-itL_0} - 1)Pf$ converges to, say, $g = -iLW^+Pf$. Since $W^+$ is
continuous, $g \in \overline{\operatorname{Ran}(W^+)} = \operatorname{Ran}(W^+)$ by Theorem 3.2.6. So $g = W^+h$ for some $h \in H$, and
$$\Big\|\frac1t\big(e^{-itL_0} - 1\big)Pf - h\Big\| = \Big\|W^+\frac1t\big(e^{-itL_0} - 1\big)Pf - W^+h\Big\| = \Big\|W^+\frac1t\big(e^{-itL_0} - 1\big)Pf - g\Big\| \to 0,$$
as $t \to 0$. Again, by Stone's Theorem 6.4.2, $Pf \in D(L_0)$ and $-iL_0Pf = h$, and thus
$W^+L_0Pf = iW^+h = ig = LW^+Pf$. As before, with $Pf \in D(L_0)$ it follows that $f \in D(L_0)$
and $L_0Pf = L_0f$, so that we have $f \in D(W^+L_0)$ and $LW^+Pf = W^+L_0f$. This proves
$LW^+P \subset W^+L_0$.
The case W − is analogous.
Exercise 3.2.16. Consider the operator $L_0 = i\frac{d}{dx}$ with domain $W^1(\mathbb{R})$; then $L_0$ is an unbounded self-adjoint operator on $L^2(\mathbb{R})$, see Lemma 2.1.2. For $L$ we take $i\frac{d}{dx} + q$, for some
potential function $q$.
• Show that U (t) = eitL0 is a translation operator, i.e. U (t)f (x) = f (x − t).
• Define M to be the multiplication operator by a function m. Show that for im0 = qm
we have L0 M = M L. What conditions on q imply that M : L2 (R) → L2 (R) is a
unitary operator? Give a domain D(L) such that (L, D(L)) is an unbounded self-adjoint
operator on L2 (R).
• What conditions on q ensure that W ± exist? What are W ± in this case? Describe the
scattering operator S = (W + )−1 W − as well in this case. (Hint: S is a multiple of the
identity.)
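The first item of Exercise 3.2.16 can be checked numerically; in the sketch below (grid and test function are ad hoc choices) the group $e^{itL_0}$ is applied via the FFT, on whose basis functions $e^{ipx}$ the operator $i\frac{d}{dx}$ acts as multiplication by $-p$, and the result agrees with the translated function $f(x - t)$.
\begin{verbatim}
import numpy as np

N, Lbox = 1024, 40.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
p = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)

f = np.exp(-x**2)
t = 3.0
Utf = np.fft.ifft(np.exp(-1j * t * p) * np.fft.fft(f))   # e^{i t L0} f via the spectral calculus
print(np.max(np.abs(Utf - np.exp(-(x - t)**2))))         # close to machine precision
\end{verbatim}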
for s < t and s, t tend to ∞. Similarly, the limit limt→−∞ f (t) exists. Cook’s idea is also used
in Theorem 3.4.9. This gives rise to the following description.
Proposition 3.3.1. Assume that $\exp(-itL_0)f \in D(L_0)\cap D(L)$ for all $t \ge a$ for some $a \in \mathbb{R}$,
and that
$$\int_a^\infty \|(L - L_0)e^{-itL_0}f\|\, dt < \infty;$$
then $f \in D(W^+)$. Similarly, if $\exp(-itL_0)g \in D(L_0)\cap D(L)$ for all $t \le b$ for some $b \in \mathbb{R}$,
and
$$\int_{-\infty}^b \|(L - L_0)e^{-itL_0}g\|\, dt < \infty,$$
then $g \in D(W^-)$.
where we need $\exp(-itL_0)f \in D(L)$, $f \in D(L_0)$, cf. Theorem 3.1.1. Since this shows
$\exp(-itL_0)f \in D(L)\cap D(L_0)$, we have $W'(t)f = i\exp(itL)\big(L - L_0\big)\exp(-itL_0)f$ and hence
$\|W'(t)f\| = \|(L - L_0)\exp(-itL_0)f\|$.
Apply now the previous observation to get, for arbitrary $u \in L^2(\mathbb{R})$,
$$|\langle W(t)f, u\rangle - \langle W(s)f, u\rangle| \le \int_s^t |\langle W'(x)f, u\rangle|\, dx \le \|u\|\int_s^t \|W'(x)f\|\, dx.$$
By assumption, for $t > s \ge a$ the right hand side integrated over $[a, \infty)$ is finite. Hence
$\lim_{t\to\infty} W(t)f$ exists, or $f \in D(W^+)$.
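Cook's criterion can be illustrated numerically for the Schrödinger pair $L = L_0 + q$: for a rapidly decaying $q$ and a wave packet with positive mean momentum, the quantity $\|(L - L_0)e^{-itL_0}f\| = \|q\cdot(e^{-itL_0}f)\|$ decays quickly in $t$. The following Python sketch (potential, packet, box and times are ad hoc; periodic boundary effects are negligible for the chosen times) computes it with the FFT.
\begin{verbatim}
import numpy as np

N, Lbox = 4096, 400.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
p = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
dx = Lbox / N

q = 1.0 / (1.0 + x**2)**2                 # a rapidly decaying (illustrative) potential
f = np.exp(-x**2) * np.exp(3j * x)        # wave packet with positive mean momentum
Ff = np.fft.fft(f)

for t in (0.0, 2.0, 5.0, 10.0, 20.0, 30.0):
    ft = np.fft.ifft(np.exp(-1j * p**2 * t) * Ff)        # free evolution e^{-itL0} f
    print(t, np.sqrt(np.sum(np.abs(q * ft)**2) * dx))    # ||(L - L0) e^{-itL0} f||, decays fast
\end{verbatim}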
Theorem 3.3.2. Let $L$ be the Schrödinger operator, where the potential $q$ satisfies the conditions of Corollary 2.2.7, and $L_0$ is the unperturbed operator $-\frac{d^2}{dx^2}$ with domain $W^2(\mathbb{R})$. If
$x \mapsto (1 + |x|)^\alpha q(x)$ is an element of $L^2(\mathbb{R})$ for some $\alpha > \frac12$, then $D(W^+) = L^2(\mathbb{R}) = D(W^-)$.
The following gives a nice special case of the theorem.
Corollary 3.3.3. Theorem 3.3.2 holds true if the potential $q$ is locally square integrable and
for some $\beta > 1$ the function $x \mapsto |x|^\beta q(x)$ is bounded for $|x| \to \infty$.
Exercise 3.3.4. Prove Corollary 3.3.3 from Theorem 3.3.2.
Proof of Theorem 3.3.2. The idea is to show that we can use Proposition 3.3.1 for sufficiently
many functions. Consider the function $f_s(\lambda) = \lambda\exp(-\lambda^2 - is\lambda)$ for $s \in \mathbb{R}$. Then $f_s \in
L^1(\mathbb{R})\cap L^2(\mathbb{R})$. We want to exploit the fact that the Fourier transform diagonalises $L_0 = -\frac{d^2}{dx^2}$,
which we have to integrate with respect to $t$ in order to be able to apply Proposition 3.3.1.
Using the estimate $e^{-x} \le (1 + \frac xa)^{-a}$ for $x \ge 0$ and $a \ge 0$, we get
so that
$$\|(L - L_0)e^{-itL_0}u_s\|^2 \le C(1 + t^2)^{a - \frac32}\int_{\mathbb{R}} |q(x)|^2\,|x - s|^{2-2a}\, dx,$$
and noting $|x - s| \le (1 + |x|)(1 + |s|)$ and inserting this estimate gives, assuming $a \le 1$,
$$\|(L - L_0)e^{-itL_0}u_s\|^2 \le C(1 + t^2)^{a - \frac32}\int_{\mathbb{R}} |q(x)|^2\,(1 + |x|)^{2-2a}\, dx,$$
and the integral with respect to $t$ is finite for $\frac32 - a > 1$, or $a < \frac12$, and the integral with respect
to $x$ is the $L^2(\mathbb{R})$-norm of $x \mapsto q(x)(1 + |x|)^{1-a}$, and this is finite by assumption if $1 - a \le \alpha$.
Since we need $0 \le a < \frac12$, we see that the integral is finite for a suitable choice of $a$ if $\alpha > \frac12$.
By Proposition 3.3.1 we see that $u_s \in D(W^\pm)$ for arbitrary $s \in \mathbb{R}$.
Lemma 3.3.6. {us | s ∈ R} is dense in L2 (R).
Now Lemma 3.3.6 and Theorem 3.2.6 imply that D(W + ) = L2 (R) = D(W − ).
Exercise 3.3.7. Prove Lemma 3.3.5 and Lemma 3.3.6. For the proof of Lemma 3.3.5 use
that the lemma is true if C = R and Cauchy’s theorem from Complex Function Theory on
shifting contours for integrals of analytic functions. For the proof of Lemma 3.3.6 proceed as
follows: take $g \in L^2(\mathbb{R})$ orthogonal to every $u_s$; then
$$0 = \langle g, u_s\rangle = \langle \mathcal{F}g, \mathcal{F}u_s\rangle = \int_{\mathbb{R}} \mathcal{F}g(\lambda)\,\lambda e^{-\lambda^2} e^{is\lambda}\, d\lambda$$
for any $s$, i.e. the inverse Fourier transform of $\mathcal{F}g(\lambda)\,\lambda e^{-\lambda^2}$ equals zero. Conclude that $g$ has to be
zero in $L^2(\mathbb{R})$.
3.4 Completeness
As stated in Section 3.2 we want as many scattering states as possible. Moreover, by Proposition 3.2.4, the discrete spectrum is not expected to be related to scattering states, certainly
not for the Schrödinger operator $L$ and the unperturbed Schrödinger operator $-\frac{d^2}{dx^2}$, since for the
last one σp = ∅, whereas the discrete spectrum of the Schrödinger operator L may be non-
trivial. For this reason we need to exclude eigenvectors, and we go even one step further by
restricting to the subspace of absolute continuity, see Section 6.6. Again, we phrase the results
for self-adjoint operators L and L0 acting on a Hilbert space H, but the main example is the
case of the Schrödinger operators on L2 (R).
We assume that the domains of the wave operator W ± at least contain the absolute
continuous subspace Hac (L0 ) for the operator L0 .
Exercise 3.4.1. Show that for the unperturbed Schrödinger operator $-\frac{d^2}{dx^2}$ the subspace of
Definition 3.4.2. Let L and L0 be self-adjoint operators on the Hilbert space H. The gener-
alised wave operators Ω± = Ω± (L, L0 ) exist if Hac (L0 ) ⊂ D(W ± ), and Ω± = W ± Pac (L0 ).
We first translate some of the results on the wave operators obtained in Section 3.2 to the
generalised wave operators $\Omega^\pm$.
1. $\Omega^\pm$ are partial isometries with initial space $\operatorname{Ran}(P_{ac}(L_0))$ and final space $\operatorname{Ran}(\Omega^\pm)$,
Proof. The first statement follows from the discussion following Definition 3.2.2. The second statement follows from Theorem 3.2.13 and Theorem 3.2.15, the observation that
$\operatorname{Ker}(L_0) \subset \operatorname{Ran}(P_{pp}(L_0)) \subset \operatorname{Ran}(P_{ac}(L_0))^\perp$, and Theorem 6.6.2. By the second result we see
that the generalised wave operators intertwine $L|_{\operatorname{Ran}(\Omega^\pm)}$ with $L_0|_{\operatorname{Ran}(P_{ac}(L_0))}$, and by the first
statement this equivalence is unitary. So $L|_{\operatorname{Ran}(\Omega^\pm)}$ has only absolutely continuous spectrum,
or $\operatorname{Ran}(\Omega^\pm) \subset \operatorname{Ran}(P_{ac}(L))$, see Section 6.6.
The next proposition should be compared to Exercise 3.2.5, and because we take into
account the projection onto the absolutely continuous subspace some more care is needed.
Proposition 3.4.4. Let A, B, C be self-adjoint operators, and assume that Ω± (A, B) and
Ω± (B, C) exist, then Ω± (A, C) exist and Ω± (A, C) = Ω± (A, B)Ω± (B, C).
Proof. Since Ω+ (B, C) exists, it follows from the third statement of Proposition
3.4.3 that
Ran(Ω+ (B, C)) ⊂ Ran(Pac (B)), so for any ψ ∈ H we have 1 − Pac (B) Ω+ (B, C)ψ = 0, or
Now
eitA e−itC Pac (C)ψ = eitA e−itB eitB e−itC Pac (C)ψ
= eitA e−itB Pac (B)eitB e−itC Pac (C)ψ + eitA e−itB (1 − Pac (B))eitB e−itC Pac (C)ψ.
Put Ω+ (B, C)ψ = φ, and Ω+ (A, B)φ = ξ, then
keitA e−itC Pac (C)ψ − ξk ≤ keitA e−itB Pac (B)eitB e−itC Pac (C)ψ − ξk
+ keitA e−itB (1 − Pac (B))eitB e−itC Pac (C)ψk
≤ keitA e−itB Pac (B)eitB e−itC Pac (C)ψ − eitA e−itB Pac (B)φk + keitA e−itB Pac (B)φ − ξk
+ k(1 − Pac (B))eitB e−itC Pac (C)ψk
≤ keitB e−itC Pac (C)ψ − φk + keitA e−itB Pac (B)φ − ξk + k(1 − Pac (B))eitB e−itC Pac (C)ψk
and since each of these three terms tends to zero as t → ∞, the result follows.
Completeness is related to the fact that we only have scattering states, i.e. any solution
to the time-dependent Schrödinger equation with an incoming asymptotic state also has an
outgoing asymptotic state and vice versa. This is known as weak asymptotic completeness,
and for the generalised wave operators this means $\operatorname{Ran}(\Omega^+(L, L_0)) = \operatorname{Ran}(\Omega^-(L, L_0))$. We
have strong asymptotic completeness if $\operatorname{Ran}(\Omega^+(L, L_0)) = \operatorname{Ran}(\Omega^-(L, L_0)) = \big(P_{pp}(L)H\big)^\perp$.
Here $P_{pp}(L)$ is the projection onto the closure of the subspace consisting of all eigenvectors of
$L$, see Section 6.6. In this section we have an intermediate notion of completeness, and here
$P_{ac}(L)$ denotes the orthogonal projection onto the absolutely continuous subspace for $L$, see
Section 6.6.
Definition 3.4.5. Assume Ω± (L, L0 ) exist, then we say that the generalised wave operators
are complete if Ran(Ω+ (L, L0 )) = Ran(Ω− (L, L0 )) = Ran(Pac (L)).
Note that completeness plus empty singular continuous spectrum of $L$, see Section 6.6, is
equivalent to strong asymptotic completeness.
Theorem 3.4.6. Assume Ω± (L, L0 ) exist. Then the generalised wave operators Ω± (L, L0 )
are complete if and only if the generalised wave operators Ω± (L0 , L) exist.
Proof. Assume first that Ω± (L, L0 ) and Ω± (L0 , L) exist. By Proposition 3.4.4 we have
Pac (L) = Ω± (L, L) = Ω± (L, L0 )Ω± (L0 , L),
so that Ran(Pac (L)) ⊂ Ran(Ω± (L, L0 )). Since Proposition 3.4.3 gives the reverse inclusion,
we have Ran(Pac (L)) = Ran(Ω± (L, L0 )), and so the generalised wave operators are complete.
Conversely, assume completeness of the wave operators Ω± (L, L0 ). For φ ∈ Pac (L) =
Ran(Ω+ (L, L0 )) we have ψ such that φ = Ω+ (L, L0 )ψ, or
keitL e−itL0 Pac (L0 )ψ − φk = ke−itL0 Pac (L0 )ψ − e−itL φk = kPac (L0 )ψ − eitL0 e−itL φk
tends to zero as t → ∞. It follows that strong limit of eitL0 e−itL exists on the absolutely
continuous subspace for L, hence Ω+ (L0 , L) exists.
Theorem 3.4.6 gives a characterisation that is usually hard to establish. The reason is that
L0 is the unperturbed “simple” operator, and we can expect to have control on its absolutely
continuous subspace and its spectral decomposition, but in order to apply Theorem 3.4.6 one
also needs to know these properties of the perturbed operator, which is much harder. There
are many theorems (the so-called Kato-Birman theory) on the existence of the generalised
wave operators if L and L0 (or f (L) and f (L0 ) for a suitable function f ) differ by a trace-class
operator. This is less applicable for the Schrödinger operators.
In the remainder of this Chapter 3 we use the notion of T -smooth operators in order to
study wave operators for the Schrödinger operators. However, the proof is too technical to
consider in detail, and we only discuss the main ingredients.
Definition 3.4.7. Let (T, D(T )) be a self-adjoint operator on a Hilbert space H. A closed
operator (S, D(S)) on H is T -smooth if there exists a constant C such that for all x ∈ H the
element eitT x ∈ D(S) for almost all t ∈ R and
(1/(2π)) ∫_R ‖S e^{itT} x‖² dt ≤ C ‖x‖².
Note that the identity operator is never T -smooth for any operator T .
The notion of T-smoothness is stronger than the assumption occurring in the Rellich Perturbation Theorem 2.2.4.
Proposition 3.4.8. Let (T, D(T )) be a self-adjoint operator on a Hilbert space H, and the
closed operator (S, D(S)) on H be T -smooth. Then for all ε > 0 there exists a K such that
for all ε > 0, where R(z) = (T − z)−1 , z ∈ ρ(T ), is the resolvent for T , see Section 6.4. To
estimate this expression we use the Cauchy-Schwarz inequality (6.1.1) in L2 (0, ∞);
Z ∞
∗
|hR(λ + iε)x, S yi| ≤ kyk kSe−itT xke−εt dt
0
21 Z 21
∞ ∞
kyk √
Z
≤ kyk kSe−itT xk2 dt e−2εt dt ≤√ 2πC kxk,
0 0 2ε
where E is the spectral measure for the self-adjoint operator T . Now estimate |F (t)| ≤
kSe−itT xkkyk to see that F ∈ H. Since F is the Fourier transform of the measure Ex,x
we see fromR the Plancherel Theorem, see Section 6.3, that Ex,x is absolutely continuous by
Ex,x (B) = B F −1 F (λ) dλ, B Borel set. Hence, x is in the absolutely continuous subspace
for T .
Theorem 3.4.9. Assume L and L0 are self-adjoint operators such that L = L0 + A∗ B in the
following sense; D(L) ⊂ D(A), D(L0 ) ⊂ D(B) and for all x ∈ D(L), y ∈ D(L0 )
hLx, yi = hx, L0 yi + hAx, Byi.
Assume that A is L-smooth and B is L0 -smooth, then the wave operators W ± exist as unitary
operators.
In particular, Ran W + = Ran W − = H. Note that Theorem 3.4.9 does not deal yet
with generalised wave operators, so it cannot be applied to the Schrödinger operators once
the perturbed Schrödinger operator has discrete spectrum, cf. Proposition 3.2.4. Moreover,
it still has the problem that there is a smoothness condition with respect to the perturbed
operator L. The proof of Theorem 3.4.9 is based on the idea of Cook as for Proposition 3.3.1.
Proof. Take x ∈ D(L0 ) and consider w(t) = eitL e−itL0 x. Take y ∈ D(L) and consider
d d
hw(t), yi = he−itL0 x, e−itL yi = −ihL0 e−itL0 x, e−itL yi + ihe−itL0 x, L e−itL yi
dt dt
= ihB e−itL0 x, A e−itL yi
since e−itL0 x ∈ D(L), eitL y ∈ D(L0 ). Consequently, for t > s
Z t Z t
−iuL0 −iuL
|hw(t) − w(s), yi| ≤ |hB e x, A e yi| du ≤ kB e−iuL0 xk kA e−iuL yk du
s s
Z t 12 Z t 12
≤ kB e−iuL0 xk2 du kA e−iuL yk2 du
s s
Z t 12 Z 12 Z t 12
≤ kB e−iuL0 xk2 du kA e−iuL yk2 du ≤ Ckyk kB e−iuL0 xk2 du
s R s
and since the integrand is integrable over R we see that w(s) is Cauchy as s → ∞. So the
domain of W + contains the dense subset D(L0 ), and since D(W + ) is closed by Theorem 3.2.6
it follows that D(W + ) is the whole Hilbert space. Similarly for W − .
Since the problem is symmetric in L and L0 , i.e. L0 = L − B ∗ A it follows that the strong
limits of eitL0 e−itL also exist for t → ±∞. Since these are each others inverses, unitarity
follows.
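The convergence in Theorem 3.4.9 (and behind Cook's method) can be illustrated numerically. The sketch below is not part of the notes: it discretises L0 = −d²/dx² on a finite grid, adds a compactly supported potential, and checks that e^{itL}e^{−itL0}ψ stabilises once the freely evolved wave packet has passed the interaction region. The grid, potential, wave packet and times are all ad-hoc illustrative choices.

```python
import numpy as np

# Illustration of W^+ = s-lim e^{itL} e^{-itL0} on a finite grid (ad-hoc choices throughout).
N, h = 1500, 0.15
x = (np.arange(N) - N // 2) * h
lap = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
       - np.diag(np.ones(N - 1), -1)) / h**2            # discretised -d^2/dx^2
q = -2.0 / np.cosh(x) ** 2                               # compactly localised potential
w0, U0 = np.linalg.eigh(lap)                             # spectral data of L0
w1, U1 = np.linalg.eigh(lap + np.diag(q))                # spectral data of L

def exp_it(w, U, t, psi):                                # e^{-itH} psi via the spectral theorem
    return U @ (np.exp(-1j * t * w) * (U.conj().T @ psi))

psi = np.exp(-((x + 60.0) ** 2) / 8.0) * np.exp(2j * x)  # packet far left of the potential, moving right
psi /= np.linalg.norm(psi)

states = [exp_it(w1, U1, -t, exp_it(w0, U0, t, psi)) for t in (15.0, 20.0, 25.0, 30.0)]
for a, b in zip(states, states[1:]):
    print(np.linalg.norm(b - a))   # successive differences shrink: e^{itL}e^{-itL0}psi is (approximately) Cauchy
```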
Theorem 3.4.9 can be used to discuss completeness for the Schrödinger operator −d²/dx² + q,
but the details are complicated. We only discuss globally the proof of Theorem 3.4.13. For
this we first give a condition on the resolvent of T and S for S to be a T -smooth operators
using the Fourier transform. R
First recall, that for F : R → H a a Hilbert space valued function, such
R that R kF (t)k dt <
∞ (and t 7→R hF (t), yi is measurable for all y ∈ H) we can define R F (t) dt ∈ H. In-
deed,
yR 7→ R hy,
F (t)i
R dt is bounded and now use Riesz’s representation theorem. Note
that R F (t) dt R≤ R kF (t)k dt. For such a function F we define its Fourier transform
as FF (λ) = √12π R e−iλt F (t) dt as an element of H.
• First take S bounded, then for any x ∈ H we see that hSFF (λ), xi = hFF (λ), S ∗ xi is
the (ordinary) Fourier transform of hSF (t), xi = hF (t), S ∗ xi. Use Plancherel and next
take x elements
R of an orthonormal
R basis of the Hilbert space (we assume H separable)
Deduce R kSFF (λ)k2 dλ = R kSF (t)k2 dt.
• Next assume (S, D(S)) self-adjoint with spectral decomposition E. Use the previous
result for SE(−n, n) and use that if F (t) ∈ D(S) almost everywhere kSE(−n, n) F (t)k2
converges monotonically to kSF (t)k2 as n → ∞. Now use the monotone convergence
theorem.
• For arbitrary (S, D(S)) we take its polar decomposition S = U |S| with D(|S|) = D(S),
(|S|, D(|S|)) self-adjoint and k |S|xk = kSxk for all x ∈ D(S) = D(|S|).
Apply Lemma 3.4.10 (with the Fourier transform replaced by the inverse Fourier transform)
to F (t) = e−εt e−itT x for t ≥ 0, F (t) = 0 for t < 0. Using the functional calculus as in the
beginning of the proof of Proposition 3.4.8 we get, for ε > 0 and R(z) the resolvent for T ,
∫_R ‖S R(λ + iε)x‖² dλ = 2π ∫_0^∞ e^{−2εt} ‖S e^{−itT} x‖² dt.
Proposition 3.4.12. S is a T-smooth operator if and only if for all x ∈ H we have R(λ ± iε)x ∈ D(S) for almost all λ ∈ R and
sup_{‖x‖=1, ε>0} ∫_R ( ‖S R(λ + iε)x‖² + ‖S R(λ − iε)x‖² ) dλ < ∞.
Proof. For the first statement we use the two equations following Exercise 3.4.11 to obtain
Z Z
kSR(λ + iε)xk + kSR(λ − iε)xk dλ = 2π e−2ε|t| kSe−itT xk2 dt.
2 2
R R
In case S is a T -smooth operator the right hand side is bounded for ε ≥ 0, so the supremum
of the left hand side is finite. In case the supremum of the left hand side is finite, it follows
from the monotone convergence theorem that R kSe−itT xk2 dt < ∞.
R
(interchanging is valid since the integrand is positive), and the inner integral
Z
1
Z
1 1 λ − t ∞ π
dλ = dλ = arctan = .
2 2 2
R |t − λ − iε| R (t − λ) + ε ε ε ε
λ=−∞
So we get
Z Z Z
1
4εC dEx,x (t) dλ = 4πC dEx,x (t) = 4πCkxk2
R R |t − λ − iε|2 R
or S is T -smooth.
The machinery of T -smooth operators and in particular Theorem 3.4.9 can be applied to
the Schrödinger operator, but the proof is very technical and outside the scope of these lecture
notes.
Theorem 3.4.13. Assume q ∈ L¹(R) ∩ L²(R), and let L0 = −d²/dx², L = −d²/dx² + q. Then the wave operators Ω±(L, L0) are complete.
We put
a(x) = √|q(x)| if q(x) ≠ 0,  a(x) = e^{−x²} if q(x) = 0,  and  b(x) = q(x)/a(x).
Then we are in the situation of Theorem 3.4.9, but the space of absolute continuity has to be brought into play. This is done by employing the spectral decomposition and assuming that there exists an open set Z such that E(Z) corresponds to the orthogonal projection onto the absolutely continuous subspace. Next it turns out that even for R being the resolvent of L0 = −d²/dx² one cannot prove the sufficient condition of Proposition 3.4.12, since there
is no estimate uniformly in λ ∈ R. One first shows that the condition can be relaxed to
εkAR(λ ± iε)k2 ≤ CI independently of ε > 0, λ ∈ I for any interval I with compact closure
in Z, where E(Z) is Pac .
For R the resolvent of L0 and B multiplication by b this is easily checked. Note that in
the application indeed b ∈ L2 (R).
Lemma 3.4.14. With R the resolvent of L0 = −d²/dx² and B the multiplication operator by b ∈ L²(R) we have ε ‖B R(λ ± iε)‖² ≤ C_I for λ ∈ I, ε > 0, for all intervals I with compact closure in (0, ∞).
so that the sequence is bounded. Hence, B being L0 -compact we see that BR(z)fn has a
convergent subsequence, so that BR(z) is a compact operator.
Proof. For this we need the resolvent of L0, which we claim equals
R(z)f(x) = −(1/(2iγ)) ∫_R e^{iγ|x−y|} f(y) dy,   z = γ², ℑγ > 0.
|b(x)|2 |b(x)|2
Z Z Z
−2=γ |x−y|
2 2
e−2=γ |y| dy kf k2
|b(x) R(z)f (x)| ≤ 2
e dy |f (y)| dy = 2
4|γ| R R 4|γ| R
2 2 2
|b(x)| kf k kak
= kf k2 =⇒ kBR(z)f k2 ≤
4|z| =γ 4|z| =γ
where we denote the resolvent for L by R and the resolvent for L0 by R0 . If we now can show
∗
that 1 − A BR0 (z̄) is invertible and if we have sufficient control on its inverse we can use
Lemma 3.4.14 once more to find the result. In particular one needs the inverse to be uniformly
bounded in a neighbourhood of an interval I (with I as in Lemma 3.4.14). We will not go
into this, but refer to Schechter [10, Ch. 9] and Reed and Simon [9, Ch. XIII].
Chapter 4

Schrödinger operators and scattering data
so that it gives a solution to the Schrödinger eigenvalue equation −f 00 +qf = γ 2 f . The integral
equation can be derived by viewing f 00 +γ 2 f = u, u = q f , as an inhomogeneous linear second-
order differential equation and apply the method of variation of constants. Similarly, a solution
of the Schrödinger integral equation at −∞ gives rise to an eigenfunction of the Schrödinger
operator for the eigenvalue γ 2 . The Schrödinger integral equation can also be obtained using
the method of variation of constants for f 00 + γ 2 f = g, and then take g = qf .
The three-dimensional analogues of the Schrödinger integral equations in quantum me-
chanics are known as the Lippmann-Schwinger equation.
Theorem 4.1.2. Assume that the potential q is a real-valued integrable function. Let γ ∈ C,
=γ ≥ 0, γ 6= 0. Then the Schrödinger integral equation at ∞ has a unique solution fγ+ which
is continuously differentiable and satisfies the estimates
|f_γ^+(x) − e^{iγx}| ≤ e^{−xℑγ} ( exp( (1/|γ|) ∫_x^∞ |q(y)| dy ) − 1 ),
|df_γ^+/dx(x) − iγ e^{iγx}| ≤ |γ| e^{−xℑγ} ( exp( (1/|γ|) ∫_x^∞ |q(y)| dy ) − 1 ).
Moreover, the Schrödinger integral equation at −∞ has a unique solution f_γ^− which is continuously differentiable and satisfies the estimates
|f_γ^−(x) − e^{−iγx}| ≤ e^{xℑγ} ( exp( (1/|γ|) ∫_{−∞}^x |q(y)| dy ) − 1 ),
|df_γ^−/dx(x) + iγ e^{−iγx}| ≤ |γ| e^{xℑγ} ( exp( (1/|γ|) ∫_{−∞}^x |q(y)| dy ) − 1 ).
Using e^x − 1 ≤ x e^x we can estimate
|f_γ^+(x) − e^{iγx}| ≤ e^{−xℑγ} (1/|γ|) ∫_x^∞ |q(y)| dy · exp( (1/|γ|) ∫_x^∞ |q(y)| dy ),
|df_γ^+/dx(x) − iγ e^{iγx}| ≤ e^{−xℑγ} ∫_x^∞ |q(y)| dy · exp( (1/|γ|) ∫_x^∞ |q(y)| dy ),    (4.1.1)
and a similar estimate for f_γ^− and its derivative.
+ −
Since q is real-valued it follows that for x ∈ R we have fγ+ (x) = f−γ̄ (x) and fγ− (x) = f−γ̄ (x)
+
as solutions to the Schrödinger integral equations. Since q is an integrable function and fγ ∈
C 1 (R) is bounded for x → ∞, the differentiations in the calculations preceding Theorem 4.1.2
are justifiable using Lebesgue’s Theorems 6.1.3 and 6.1.4 on differentiation and dominated
convergence and hold almost everywhere. The solutions in Theorem 4.1.2 are known as Jost
solutions.
Proof. The Schrödinger integral equations are Volterra-type integral equations, and a standard way to find a solution is by successive iteration. Put f_0(x) = e^{iγx}, and define
f_{n+1}(x) = − ∫_x^∞ ( sin(γ(x − y))/γ ) q(y) f_n(y) dy,   n ≥ 0.
2
Vito Volterra (3 May 1860 — 11 October 1940), Italian mathematician.
Now
| sin(γ(x − y)) e^{iγ(y−x)} | = ½ |1 − e^{2iγ(y−x)}| ≤ ½ (1 + e^{−2(y−x)ℑγ}) ≤ 1
for y ≥ x and ℑγ ≥ 0. Hence, writing f_n(x) = e^{iγx} m_n(x) and assuming ℑγ ≥ 0, we find
|m_{n+1}(x)| ≤ (1/|γ|) ∫_x^∞ |q(y)| |m_n(y)| dy,
and then we have by induction on n the estimate |m_n(x)| ≤ R(x)^n/(|γ|^n n!), where
R(x) = ∫_x^∞ |q(y)| dy ≤ ∫_R |q(y)| dy = ‖q‖₁.
Note that R is a continuous decreasing bounded function, which is almost everywhere differ-
entiable by Lebesgue’s differentation Theorem 6.1.4. Indeed, for n = 0 this inequality is valid
since m0 (x) = 1, and
|m_{n+1}(x)| ≤ (1/|γ|) ∫_x^∞ |q(y)| ( R(y)^n/(|γ|^n n!) ) dy = (1/(|γ|^{n+1} n!)) ∫_0^{R(x)} s^n ds = R(x)^{n+1}/(|γ|^{n+1} (n+1)!)
by putting s = R(y), so ds = −|q(y)| dy, and the interval [x, ∞) is mapped to [0, R(x)], since R is decreasing.
So the series Σ_{n=0}^∞ m_n(x) converges uniformly, and so does the series Σ_{n=0}^∞ f_n(x), for ℑγ ≥ 0. It is straightforward to check that the series Σ_{n=0}^∞ f_n(x) gives a solution to the Schrödinger integral equation at ∞, and this solution we denote by f_γ^+. Then
|f_γ^+(x) − e^{iγx}| ≤ Σ_{n=1}^∞ |f_n(x)| ≤ e^{−xℑγ} Σ_{n=1}^∞ |m_n(x)| ≤ e^{−xℑγ} ( exp(R(x)/|γ|) − 1 ) → 0,  x → ∞,
|df_γ^+/dx(x) − iγ e^{iγx}| ≤ Σ_{n=1}^∞ |df_n/dx(x)| ≤ Σ_{n=1}^∞ e^{−xℑγ} R(x)^n/(|γ|^{n−1} n!) = |γ| e^{−xℑγ} ( exp(R(x)/|γ|) − 1 ),
for m = m_γ^+, and a similar estimate holds for m_γ^−.
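The successive-approximation scheme of this proof can also be carried out numerically. The sketch below is an illustration (not part of the notes): it iterates the integral equation for m_γ^+(x) = e^{−iγx} f_γ^+(x) on a truncated grid for the cosh⁻² potential of Section 2.5.2. Since that potential is reflectionless (see Section 4.5), |m_γ^+(x)| should approach |1/T(γ)| = 1 as x → −∞. The grid, truncation and number of iterations are ad-hoc choices.

```python
import numpy as np

def jost_m_plus(q, gamma, x, n_iter=25):
    """Successive approximation for m_gamma^+ on the (increasing) grid x,
    iterating m <- 1 + int_x^oo D(y-x) q(y) m(y) dy, D(u) = (e^{2i gamma u}-1)/(2i gamma),
    with the integral truncated at the right end of the grid."""
    qx = q(x)
    dx = x[1] - x[0]
    m = np.ones_like(x, dtype=complex)
    for _ in range(n_iter):
        new = np.empty_like(m)
        for i in range(len(x)):
            y = x[i:]
            D = (np.exp(2j * gamma * (y - x[i])) - 1.0) / (2j * gamma)
            vals = D * qx[i:] * m[i:]
            new[i] = 1.0 + np.sum((vals[1:] + vals[:-1]) / 2.0) * dx   # trapezoid rule
        m = new
    return m

x = np.linspace(-15.0, 15.0, 1501)
m = jost_m_plus(lambda s: -2.0 / np.cosh(s) ** 2, 1.0, x)
print(abs(m[-1]), abs(m[0]))   # ~1 at +oo by construction; ~1 = |1/T(1)| at -oo (reflectionless)
```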
The estimates in Theorem 4.1.2 can be improved by imposing more conditions on the
potential q. We discuss two possible extensions. The first additional assumption also allows
an extension to the case γ = 0.
Theorem 4.1.5. Assume the potential q is real-valued and ∫_R (1 + |x|) |q(x)| dx < ∞. Then for γ ∈ C with ℑγ ≥ 0,
|f_γ^+(x) − e^{iγx}| ≤ (C e^{−xℑγ}/(1 + |γ|)) (1 + max(−x, 0)) ∫_x^∞ (1 + |y|) |q(y)| dy,
|df_γ^+/dx(x) − iγ e^{iγx}| ≤ (C e^{−xℑγ}/(1 + |γ|)) ∫_x^∞ (1 + |y|) |q(y)| dy,
where C is a constant only depending on the potential q. Moreover, the Schrödinger integral equation at −∞ has a unique solution f_γ^− which is continuously differentiable and satisfies the estimates
|f_γ^−(x) − e^{−iγx}| ≤ (C e^{xℑγ}/(1 + |γ|)) (1 + max(x, 0)) ∫_{−∞}^x (1 + |y|) |q(y)| dy,
|df_γ^−/dx(x) + iγ e^{−iγx}| ≤ (C e^{xℑγ}/(1 + |γ|)) ∫_{−∞}^x (1 + |y|) |q(y)| dy.
Proof. In particular, the potential q satisfies the estimate of Theorem 4.1.2, and the estimates
given there hold.
Use the estimate
1
(e2iγ(y−x) − 1) ≤ (y − x)
2iγ
for γ ∈ R, y − x ≥ 0 and iterate to find
ZZ Z
|mn (x)| ≤ ··· (y1 − x)(y2 − y1 ) · · · (yn − yn−1 )|q(y1 )| · · · |q(yn )| dyn · · · dy1
x≤y1 ≤y2 ···≤yn
ZZ Z
≤ ··· (y1 − x)(y2 − x) · · · (yn − x)|q(y1 )| · · · |q(yn )| dyn · · · dy1 .
x≤y1 ≤y2 ···≤yn
Note that the integrand is invariant with respect to permutations of y1 up to yn , but not
the region. Taking all possible orderings gives, where Sn is the group of permutations on n
elements,
n! |mn (x)|
X ZZ Z
≤ ··· (y1 − x)(y2 − x) · · · (yn − x)|q(y1 )| · · · |q(yn )| dyn · · · dy1
w∈Sn x≤yw(1) ≤yw(2) ···≤yw(n)
Z ∞Z ∞ Z ∞
= ··· (y1 − x)(y2 − x) · · · (yn − x)|q(y1 )| · · · |q(yn )| dyn · · · dy1
x x x
Z ∞ n
= (y − x) |q(y)| dy ,
x
R∞
so that, with R(x) = x
(y − x) |q(y)| dy, we get
|m+
γ (x)| ≤ exp(R(x)) − 1 ≤ R(x) exp(R(x)).
and observe that this inequality holds for positive and negative x ∈ R. We estimate the first
term, which is independent of x, by
Z ∞ Z ∞
+
1+ y|q(y)| |mγ (y)| dy ≤ 1 + R(0) exp(R(0)) y|q(y)| dy
0 0
2
= 1 + R(0) exp(R(0)) = C1 < ∞
R∞
where C1 is a finite constant only depending on R(0) = 0 y|q(y)| dy.
γm+ (x)
Now define M (x) = C1 (1+|x|) , p(x) = (1 + |x|)|q(x)|, then we can rewrite the implicit
+
estimate on mγ (x) in terms of M (x) as follows;
Z ∞
C1 (1 + |x|)|M (x)| ≤ C1 + (−x) C1 (1 + |t|) |q(t)| |M (t)| dt
x
Z ∞
1 −x
=⇒ |M (x)| ≤ + p(t) |M (t)| dt (4.1.5)
1 + |x| 1 + |x| x
Z ∞
≤1+ p(t) |M (t)| dt,
x
so that as before, cf. proof of Theorem 4.1.2, or using Gronwall’s3 Lemma, see Exercise 4.1.6,
we get Z ∞ Z
|M (x)| ≤ exp p(y) dy ≤ exp (1 + |y|) |q(y)| dy = C2 < ∞.
x R
So |m+ ≤ C3 (1 + |x|) with C3 = C1 C2 only depending on properties of q. This estimate is
γ (x)|
now being used to estimate
Z ∞ Z ∞
+ +
|mγ (x) − 1| ≤ y|q(y)| |mγ (y)| dy + (−x) |q(y)| |m+
γ (y)| dy
0 x
Z ∞ Z ∞
≤ R(0) exp(R(0)) y|q(y)| dy + (−x)C3 (1 + |y|) |q(y)| dy
0 x
3
Thomas Hakon Grönwall (16 January 1877 — 9 May 1932), Swedish-American mathematician, engineer,
chemist.
which is an estimate uniform in γ ∈ R. Combining this estimate in case |γ| ≤ 1 with the
estimate of Theorem 4.1.2 in case |γ| ≥ 1 gives the result for γ ∈ R. By Corollary 4.1.4 and
Theorem 4.1.2 the result is extended to γ ∈ C, =γ ≥ 0.
Differentiating (4.1.4) gives
Z ∞
dm+γ
(x) = − e2iγ(y−x) q(y) m+
γ (y) dy,
dx x
so that
dm+ ∞
(1 + max(0, −y))
Z
γ
| (x)| ≤ C |q(y)| dy,
dx x 1 + |γ|
which gives the result. The statements for fγ− are proved analogously.
Exercise 4.1.6. The following is known as Gronwall's Lemma. Assume f, p are positive continuous functions on R such that there exists K ≥ 0 with f(x) ≤ K + ∫_x^∞ f(t) p(t) dt; then f(x) ≤ K exp( ∫_x^∞ p(t) dt ). Prove this according to the following steps.
• Observe that
f(x) p(x) / ( K + ∫_x^∞ f(t) p(t) dt ) ≤ p(x)
and integrate both sides.
• Exponentiate the resulting inequality to get K + ∫_x^∞ f(t) p(t) dt ≤ K exp( ∫_x^∞ p(t) dt ), and deduce the result.
At what cost can we remove the continuity assumption on f and p?
Theorem 4.1.8. Let q be a real-valued potential. Assume that there exists m > 0 such that ∫_R e^{2m|y|} |q(y)| dy < ∞. Then, for x ≥ x₀ and γ ∈ C with ℑγ > −m,
|f_γ^+(x) − e^{iγx}| ≤ e^{−xℑγ} ( exp( C(γ, m) e^{−2mx₀} ∫_x^∞ e^{2my} |q(y)| dy ) − 1 ),
|df_γ^+/dx(x) − iγ e^{iγx}| ≤ (e^{−xℑγ}/C(γ, m)) ( exp( C(γ, m) e^{−2mx₀} ∫_x^∞ e^{2my} |q(y)| dy ) − 1 ),
and similarly for f_γ^− and its derivative. This is similar to the estimate in Theorem 4.1.5, but note that the γ-region where this estimate is valid is much larger.
Proof. Since the assumption implies that q is integrable, Theorem 4.1.2 shows that there
is a unique solution fγ+ , and we only need to improve the estimates. Using the estimate
2|z|
| sin z| ≤ 1+|z| exp(|=z|) for z ∈ C (prove this estimate) we can now estimate, for y ≥ x ≥ x0 ,
1
sin(γ(x − y)) eiγ(y−x) ≤ 2|x − y| (y−x)|=γ| −(y−x)=γ −2m(y−x) 2my −2mx
γ 1 + |γ| |x − y| e e e e e
2|x − y|
(y−x) |=γ|−=γ−2m −2mx0 2my
≤ e e e ≤ C(γ, m) e−2mx0 e2my
1 + |γ| |x − y|
for =γ > −m. Here C(γ, m) is a finite constant independent of x and y, which we discuss
later in the proof.
Under this assumption we now have for x ≥ x0
Z ∞
−2mx0
|mn+1 (x)| ≤ C(γ, m)e e2my |q(y)| |mn (y)| dy,
x
dfγ−
for =γ > −m, x ≤ x0 with (Cγ, m) as in Theorem 4.1.8. Conclude γ 7→ fγ− (x) and γ 7→ dx
(x)
are analytic for γ ∈ C with =γ > −m for fixed x ≤ x0 .
We also need some properties of the Jost solutions when differentiated with respect to γ.
R ∂m+
Proposition 4.1.10. Assume that R |x|k |q(x)| dx < ∞ for k = 0, 1, 2, then ∂γγ (x) is a
continously differentiable function, and it is the unique solution to the integral equation
Z ∞ 2iγx
e −1
n(x) = M (x) + q(y) n(y) dy,
x 2iγ
Z ∞
y − x 2iγ(y−x) e2iγ(y−x) − 1
M (x) = e − 2
q(y) m+
γ (y) dy,
x γ 2iγ
with limx→∞ n(x) = 0.
∂m+γ
It follows from the proof below that we can also estimate ∂γ
(x) explicitly. Moreover, it
is a solution to
n00 + 2iγ n0 = q n − 2i m+
γ,
i.e. the differential equation (4.1.3) differentiated with respect to γ. Switching back to the
∂f + −iγx ∂mγ
+
Jost solution fγ+ , we see that we find a solution ∂γγ (x) = −ixe−iγx m+
γ (x) + e ∂γ
(x) to
−f 00 + q f = γ 2 f + 2γ fγ+ .
Proof. Rewrite (4.1.4) with m(x, γ) = m+ γ (x),
Z ∞ x
e2iγx − 1
Z
m(x, γ) = 1 + D(y − x, γ) q(y) m(y, γ) dy, D(x, γ) = = e2iγt dt.
x 2iγ 0
∂m
Denoting ṁ(x, γ) = ∂γ
(x, γ),we see that ṁ satisfies the integral equation
Z ∞
ṁ(x, γ) = M (x, γ) + D(y − x, γ) q(y) ṁ(y, γ) dy,
x
Z ∞
M (x, γ) = Ḋ(y − x, γ) q(y) m(y, γ) dy,
x
which we consider as an integral equation for ṁ(x, γ) with known function M and which want
to solve in the same way by an iteration argument;
Z ∞ ∞
X
h0 (x) = M (x, γ), hn+1 (x) = D(y − x, γ) q(y) hn (y) dy, ṁ(x, γ) = hn (x).
x n=0
The iteration scheme is the same as in the proof of Theorem 4.1.2 except for the initial
condition. So we investigate the initial function M (x, γ). Note that
Z x
x 1 1
|Ḋ(x, γ)| = | 2ite2iγt dt| = | e2iγx − D(x, γ)| ≤ 2 (1 + |γ|x)
0 γ γ |γ|
for x ≥ 0, =γ ≥ 0, so that
Z ∞
1
|M (x, γ)| ≤ (1 + |γ|(y − x))|q(y)| |m+
γ (y)| dy
x |γ|2
By the estimate |m+
γ (y)|≤ C(1 + |y|), cf. the proof of Theorem 4.1.5, we get
Z ∞ Z ∞
1 1
|M (x, γ)| ≤ 2 (1 + |y|) |q(y)| dy + (y − x)|q(y)| (1 + |y|) dy
|γ| x |γ| x
and the first integral is bounded and tends to zero for x → ∞, and for the second integral we
use
Z ∞ Z ∞ Z ∞
(y − x)|q(y)| (1 + |y|) dy = y|q(y)| (1 + |y|) dy + (−x) |q(y)| (1 + |y|) dy
x x x
Z ∞ Z ∞
2
≤ |q(y)| (|y| + |y| ) dy + (−x) |q(y)| (1 + |y|) dy = K(x),
0 x
cf.
R ∞ proof of Theorem 4.1.5. So RM (x) is bounded, and even tends to zero, for x → ∞ since
∞
x
(y − x)|q(y)|(1 + |y|) dy ≤ x |q(y)|(|y| + |y|2 ) dy → 0 and grows at most linearly for
x → −∞, Z ∞
C C
|M (x, γ)| ≤ 2 (1 + |y|) |q(y)| dy + K(x) = K1 (x)
|γ| x |γ|
Note that K1 (x) ≥ K1 (y) for x ≤ y, and then we find
Z ∞ n
K1 (x)
|hn (x)| ≤ |q(y)| dy
|γ|n n! x
and we find a solution to the integral equation. The remainder of the proof follows the lines
of proofs of Theorems 4.1.2 and 4.1.5 and is left as an exercise.
Exercise 4.1.11. Finish the proof of Proposition 4.1.10, and state and prove the correspond-
ing proposition for m−
γ.
Proposition 4.2.1. Assume q is a continuous real-valued integrable potential. For two solutions f₁, f₂ to −f'' + qf = λf the Wronskian W(f₁, f₂) is constant, and in particular W(f_γ^+, f_{−γ}^+) = −2iγ and W(f_γ^−, f_{−γ}^−) = 2iγ, γ ∈ R\{0}.
Proof. We have
(d/dx)[W(f, g)](x) = f(x)g''(x) − f''(x)g(x) = (q(x) − λ)f(x)g(x) − (q(x) − λ)f(x)g(x) = 0,
since f and g are solutions to the eigenvalue equation. In particular, in case f = f_γ^+, g = f_{−γ}^+ we can evaluate this constant by taking x → ∞ and using the asymptotics of Theorem 4.1.2. Similarly for the Wronskian of f_{±γ}^−.
Exercise 4.2.2. In case the function q is not continuous, the derivative of the Wronskian has
to be interpreted in the weak sense. Modify Proposition 4.2.1 and its proof accordingly.
Since we now have four solutions, two by two linearly independent for γ ≠ 0, to the Schrödinger eigenvalue equation, which has a two-dimensional solution space for the eigenvalue λ = γ², γ ∈ R\{0}, we obtain a_γ^±, b_γ^± for γ ∈ R\{0} such that for all x ∈ R,
f_γ^−(x) = a_γ^+ f_γ^+(x) + b_γ^+ f_{−γ}^+(x),
f_γ^+(x) = a_γ^− f_γ^−(x) + b_γ^− f_{−γ}^−(x).    (4.2.1)
Note that for γ ∈ R\{0} the relation \overline{f_γ^±(x)} = f_{−γ}^±(x) implies \overline{a_γ^+} = a_{−γ}^+, \overline{b_γ^+} = b_{−γ}^+, \overline{a_γ^−} = a_{−γ}^− and \overline{b_γ^−} = b_{−γ}^−. Now
for γ ∈ R\{0}.
Taking Wronskians in (4.2.1) and using W(f, f) = 0 and Proposition 4.2.1 we obtain
a_γ^+ W(f_γ^+, f_{−γ}^+) = W(f_γ^−, f_{−γ}^+)  ⟹  a_γ^+ = −(1/(2iγ)) W(f_γ^−, f_{−γ}^+),
b_γ^+ W(f_{−γ}^+, f_γ^+) = W(f_γ^−, f_γ^+)  ⟹  b_γ^+ = (1/(2iγ)) W(f_γ^−, f_γ^+).
Similarly, we find
a_γ^− = (1/(2iγ)) W(f_γ^+, f_{−γ}^−),   b_γ^− = −(1/(2iγ)) W(f_γ^+, f_γ^−),   1 = |b_γ^−|² − |a_γ^−|².
It follows that b_γ^+ = b_γ^−.
Using the Jost solutions we now define the solution ψγ (x), γ ∈ R\{0}, by the boundary
conditions at ±∞;
ψ_γ(x) ∼ T(γ) exp(−iγx),  x → −∞,
ψ_γ(x) ∼ exp(−iγx) + R(γ) exp(iγx),  x → ∞.
Note that the first condition determines ψγ up to a multiplicative constant, which is determined
by the requirement that the coefficient of exp(−iγx) is 1 as x → ∞. The function T (γ) is the
transmission coefficient and R(γ) is the reflection coefficient.
A potential q satisfying the assumptions above is a reflectionless potential if R(γ) = 0 for
γ ∈ R\{0}.
Using Theorem 4.1.2 it follows that
ψ_γ(x) = T(γ) f_γ^−(x) = f_{−γ}^+(x) + R(γ) f_γ^+(x).
It follows that
T(γ) = 1/b_γ^+ = 1/b_γ^−,   R(γ) = a_γ^+/b_γ^+ = a_γ^+/b_γ^−.
This implies for γ ∈ R\{0}
T(−γ) = \overline{T(γ)},  R(−γ) = \overline{R(γ)},  |T(γ)|² + |R(γ)|² = 1,
T(γ) = 2iγ / W(f_γ^−, f_γ^+),   R(γ) = − W(f_γ^−, f_{−γ}^+) / W(f_γ^−, f_γ^+).   (4.2.2)
Physically, |T (γ)|2 + |R(γ)|2 = 1 can be interpreted as conservation of energy for transmitted
and reflected waves.
Note that the reflection coefficient is related to waves travelling from left to right, and
we could also have (equivalently) studied the reflection coefficient R− (γ) = a− −
γ /bγ for waves
travelling from right to left. Note that the transmission coefficient does not change. Relabeling
the reflection coefficient R+ (γ) = R(γ), we define for γ ∈ R (or γ ∈ R\{0} depending on the
potential q) the scattering matrix
S(γ) = [ T(γ)  R⁻(γ) ;  R⁺(γ)  T(γ) ] ∈ U(2),    (4.2.3)
i.e. S(γ) is a 2 × 2-unitary matrix. To see this we note first that, as for R(γ),
R− (γ) 1
= a−
γ =
−
W (fγ+ , f−γ ), R− (−γ) = R− (γ), |T (γ)|2 + |R− (γ)|2 = 1.
T (γ) 2iγ
From the expressions in terms of Wronskians we find
R+ (γ) R− (−γ)
+ = 0 =⇒ T (γ)R− (−γ) + R+ (γ)T (−γ) = T (γ)R− (γ) + R+ (γ)T (γ) = 0
T (γ) T (−γ)
which shows that the columns of S(γ) are orthogonal vectors. Since we already established
that each column vector has length 1, we obtain S(γ) ∈ U (2).
Exercise 4.2.3. Work out the relation between the scattering matrix S(γ) and the scattering operator S as defined in Section 3.2 in the case L0 = −d²/dx², L = −d²/dx² + q (with appropriate
assumptions on the potential q). (Hint: use the Fourier transform to describe the spectral
decomposition of L0 in terms of a C2 -vector-valued measure, and describe the action of the
scattering operator in terms of this decomposition.)
Proposition 4.2.4. Assuming the conditions of Theorem 4.1.2 we have
lim_{γ→±∞} T(γ) = 1,   lim_{γ→±∞} R(γ) = 0.
dfγ−
Proof. Write fγ− (x) = e−iγx + R0− (x), fγ+ (x) = eiγx + R0+ (x), dx
(x) = −iγe−iγx + R1− (x)
dfγ+
dx
= iγeiγx + R1+ (x), where Ri± (x), i = 0, 1, also depend on γ, so that
(x)
W (fγ− , fγ+ ) = e−iγx + R0− (x) iγeiγx + R1+ (x) − −iγe−iγx + R1− (x) eiγx + R0+ (x)
= 2iγ + iγeiγx R0− (x) + e−iγx R1+ (x) + R0− (x)R1+ (x)
+ iγe−iγx R0+ (x) − eiγx R1− (x) − R0+ (x)R1− (x).
Theorem 4.1.2 shows that for real x
1
|R0± | ≤ G(γ), |R1± | ≤ |γ|G(γ), G(γ) = exp(kqk1 /|γ|) − 1 = O .
|γ|
Hence for γ ∈ R\{0}
W (fγ− , fγ+ )
≤ 2G(γ) + G(γ) 2 → 0,
− 1 |γ| → ∞.
2iγ
Using (4.2.2) this gives the limit for the transmission coefficient, and from this the limit for
the reflection coefficient follows.
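The transmission and reflection coefficients can also be computed numerically, by integrating the eigenvalue equation across the potential and matching the asymptotics used in the definition of ψ_γ. The following sketch is not part of the notes; the hand-rolled RK4 integrator, the interval [−X, X] and the test potential are ad-hoc choices. It verifies |T(γ)|² + |R(γ)|² = 1, illustrates Proposition 4.2.4, and shows that the cosh⁻² potential of Section 2.5.2 is (numerically) reflectionless, cf. Section 4.5.

```python
import numpy as np

def transmission_reflection(q, gamma, X=15.0, n=6000):
    """Compute T(gamma), R(gamma) for -f'' + q f = gamma^2 f by integrating from -X,
    where the solution is taken to be e^{-i gamma x}, to +X and matching against
    e^{-i gamma x} + R e^{i gamma x} there (so psi_gamma = T * computed solution)."""
    xs = np.linspace(-X, X, n + 1)
    h = xs[1] - xs[0]
    def rhs(x, y):                       # y = (u, u'), u'' = (q - gamma^2) u
        return np.array([y[1], (q(x) - gamma**2) * y[0]], dtype=complex)
    y = np.array([np.exp(1j*gamma*X), -1j*gamma*np.exp(1j*gamma*X)], dtype=complex)  # e^{-i gamma (-X)}
    for x in xs[:-1]:                    # classical RK4
        k1 = rhs(x, y); k2 = rhs(x + h/2, y + h*k1/2)
        k3 = rhs(x + h/2, y + h*k2/2); k4 = rhs(x + h, y + h*k3)
        y = y + h*(k1 + 2*k2 + 2*k3 + k4)/6
    u, du = y
    A = 0.5*(u - du/(1j*gamma)) * np.exp(1j*gamma*X)    # coefficient of e^{-i gamma x} at +X
    B = 0.5*(u + du/(1j*gamma)) * np.exp(-1j*gamma*X)   # coefficient of e^{+i gamma x} at +X
    return 1.0/A, B/A                                   # T = 1/A, R = B/A

q = lambda x: -2.0/np.cosh(x)**2
for g in (0.5, 1.0, 2.0):
    T, R = transmission_reflection(q, g)
    print(g, abs(T)**2 + abs(R)**2, abs(R))   # ~1, and |R| ~ 0 (reflectionless)
```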
The transmission and reflection can be written as integrals involving the potential function
q, the solution ψγ , and the Jost solution.
Proposition 4.2.5. Assume q is continuous and satisfies the conditions of Theorem 4.1.2,
then for γ ∈ R\{0},
Z
1
R(γ) = q(x) ψγ (x) e−iγx dx,
2iγ R
Z
1
T (γ) = 1 + q(x) ψγ (x) eiγx dx,
2iγ R
Z
1 1
= 1− e−iγx q(x)fγ+ (x) dx.
T (γ) 2iγ R
In the three-dimensional analogue of Proposition 4.2.5, the last type of integrals is precisely
what can be measured in experiments involving particle collission. Again, the continuity of q
is not essential, cf. Exercise 4.2.2.
by Theorem 4.1.2. The first term is o(1) as x → −∞, since q is integrable, and the second
term O(exβ ), x → −∞. So we obtain for γ, =γ > 0, the asymptotic behaviour
1
m+
γ (x) = + o(1), x → −∞.
T (γ)
eiγ0 x
fγ+0 (x) = eiγ0 x m+
γ0 (x) = + o(e−x=γ0 ), x → −∞ =⇒ fγ+0 ∈
/ L2 (R)
T (γ0 )
Similarly, fγ−0 6∈ L2 (R), and since fγ+0 and fγ−0 span the solution space, there is no square
integrable solution, hence γ02 is not an eigenvalue of the corresponding Schrödinger operator.
Assume next that T⁻¹ has a zero at γ₀ with ℑγ₀ > 0, so that f_{γ₀}^− is a multiple of f_{γ₀}^+. By Theorem 4.1.2 it follows that f_{γ₀}^+, respectively f_{γ₀}^−, is square integrable for x → ∞, respectively for x → −∞. Since f_{γ₀}^− is a multiple of f_{γ₀}^+, it follows that f_{γ₀}^+ ∈ L²(R). Theorem 4.1.2 then also gives df_{γ₀}^±/dx ∈ L²(R), and (f_{γ₀}^±)'' = −γ₀² f_{γ₀}^± + q f_{γ₀}^± ∈ L²(R) since we assume q ∈ L^∞(R). This gives that the eigenfunction is actually in the domain W²(R). Hence f_{γ₀}^+ is an eigenfunction of the corresponding Schrödinger operator for the eigenvalue γ₀². Since the Schrödinger operator is self-adjoint by Corollary 2.2.7, we have γ₀² ∈ R and so γ₀ ∈ iR_{>0}.
So the poles of T are on the positive imaginary axis, and such a point, say ip, p > 0,
corresponds to an eigenvalue −p2 of the corresponding Schrödinger operator. Since q ∈ L∞ (R)
we have that the spectrum is contained in [−kqk∞ , ∞), and by Theorems 2.3.4 and 2.3.2 its
essential spectrum is [0, ∞). By Theorem 6.5.5 it follows that the intersection of the spectrum
with [−kqk∞ , 0) can only have a finite number of points which all correspond to the point
spectrum.
So we can label the zeros of the Wronskian on the positive imaginary axis as ip_n, p_n > 0, n ∈ {1, 2, ..., N}, and then f_n(x) = f_{ip_n}^+(x) is a square integrable eigenfunction of the Schrödinger operator. We put
1/ρ_n = ∫_R |f_n(x)|² dx,   so that ‖f_n‖ = 1/√ρ_n.
It remains
−1 to show that the residue of T at ipn is expressible in terms of ρn . Start with
2iγ T (γ) = W (fγ+ , fγ− ), which is an equality for analytic functions in the open upper half
plane. Differentiating with respect to γ gives
2i dT −1 ∂fγ+ − +
∂fγ−
+ 2iγ (γ) = W ( , f )(x) + W (fγ , )(x),
T (γ) dγ ∂γ γ ∂γ
dT −1 ∂fγ+ ∂fγ−
2iγ0 (γ0 ) = W ( |γ=γ0 , fγ−0 )(x) + W (fγ+0 , |γ=γ0 )(x).
dγ ∂γ ∂γ
∂f −
Since fγ+ is a solution to −f 00 + q f = γ 2 f and ∂γγ is a solution to −f 00 + q f = γ 2 f + 2γ fγ− ,
cf. Proposition 4.1.10, we find, cf. the proof of Proposition 4.2.1,
∂fγ−
Z x
2ipn fn (y) fip−n (y) dy = W (fip+n , |γ=−ipn )(x).
−∞ ∂γ
so that we obtain
dT −1
Z
−2 pn (ipn ) = 2ipn fn (y) fip−n (y) dy.
dγ R
Since f_{ip_n}^± are real-valued, cf. the remark following Theorem 4.1.2, non-zero and multiples of each other, we see that f_{ip_n}^−(x) = C_n f_{ip_n}^+(x) = C_n f_n(x) for some real non-zero constant C_n, so that
(dT⁻¹/dγ)(ip_n) = (C_n/i) ∫_R |f_n(y)|² dy ≠ 0.
Note that C_n = lim_{x→∞} C_n e^{p_n x} f_{ip_n}^+(x) = lim_{x→∞} e^{p_n x} f_{ip_n}^−(x). It follows that the zero of 1/T at ip_n is simple, so that T has a simple pole at ip_n with residue (i/C_n) ( ∫_R |f_n(y)|² dy )^{−1}.
Exercise 4.2.8. The statement on the simplicity of the eigenvalues in Theorem 4.2.7 has not
been proved. Show this.
Definition 4.2.9. Assume the conditions as in Theorem 4.2.7, then the transmission coeffi-
cient T , the reflection coefficient R together with the poles on the positive imaginary axis with
the corresponding square norms; {(pn , ρn ) | pn > 0, 1 ≤ n ≤ N } constitute the scattering data.
Given the potential q, the direct scattering problem consists of determining T, R and
{(p_n, ρ_n)}.
Remark 4.2.10. It can be shown that it suffices to take the reflection coefficient R to-
gether with {(pn , ρn ) | pn > 0, 1 ≤ n ≤ N } as the scattering data, since the transmis-
sion coefficient
p can be completely recovered from this. Indeed, the norm of T follows from
|T (γ)| = 1 − |R(γ)|2 , see 4.2.2, and the transmission coefficient can be completely recon-
structed using complex function techniques and Hardy spaces.
Exercise 4.2.11. Work out the scattering data for the cosh−2 -potential using the results as
in Section 2.5.2. What can you say about the scattering data for the other two examples in
Sections 2.5.1, 2.5.3?
where we have switched to 2y instead of y in order to have nicer looking formulas in the sequel.
Note that for the Jost solution we have
Z ∞
+ iγx 1 1
fγ (x) = e + A(x, y) eiγy dy, A(x, y) = B(x, (y − x)).
x 2 2
So the Paley-Wiener Theorem 6.3.2 gives B(x, ·) ∈ L2 (0, ∞) for each x ∈ R, but the kernel B
satisfies many more properties. Let us consider the kernel B more closely. Recall the proof of
Theorem 4.1.2 and (4.1.4), and write
Z ∞ ∞
1 2iγ(y−x) X
m+
γ (x) −1= (e − 1)q(y) dy + mn (x; γ),
x 2iγ n=2
say 12 B1 (x, 21 y), is in C0 (R) by the Riemann-Lebesgue Lemma. So it remains to take the
Fourier transform of the integral. First recall, cf. proof of Theorem 4.1.5, that the integrand
is majorized by (y − x)|q(y)| which is assumed to be integrable on [x, ∞) and the integral is
1
of order O( |γ| ), so that it is in L2 (R) with respect to γ. Write
Z ∞ ∞ Z ∞
1 2iγ(y−x) 1 2iγ(y−x)
(e − 1)q(y) dy = − (e − 1)w(y) + w(y) e2iγ(y−x) dy
2iγ 2iγ y=x
Zx ∞ x
1 ∞
Z
2iγy 1 iγy
= w(x + y) e dy = w(x + y) e dy
0 2 0 2
R∞
where we put w(x) = x q(y) dy, so that limx→∞ w(x) = 0. So we have written the integral as
the inverse Fourier transform of 12 H(y)w(x + 21 y), where H is the Heaviside function H(x) = 1
for x ≥ 0 and H(x) = 0 for x < 0. So we obtain
B(x, y) = H(y)w(x + y) + B1 (x, y), B1 (x, ·) ∈ C0 (R).
R∞
In particular, B(x, 0) = w(x) = x q(t) dt.
00 0
Since m+ +
γ (x) is a solution to m + 2iγ m = qm, we see that mγ (x) − 1 is a solution to
dm+
m00 + 2iγ m0 = qm + q. Since limx→∞ m+ γ
γ (x) − 1 = 0 and limx→∞ dx (x) = 0, we can integrate
this over the interval [x, ∞) to find the integral equation for m+γ;
Z ∞ Z ∞
0
−m (x) − 2iγm(x) = q(y) m(y) dy + q(y) dy.
x x
Theorem 4.3.1. Assume q satisfies the conditions of Theorem 4.1.5. The integral equation
B(x, y) = ∫_{x+y}^∞ q(t) dt + ∫_0^y ∫_{x+y−z}^∞ q(t) B(t, z) dt dz,   y ≥ 0,
has a unique real-valued solution B(x, y). Moreover, B is a solution to
(∂/∂x) ( B_x(x, y) − B_y(x, y) ) = q(x) B(x, y)
with boundary conditions B(x, 0) = ∫_x^∞ q(t) dt and lim_{x→∞} ‖B(x, ·)‖_∞ = 0.
Moreover, defining m(x; γ) = 1 + ∫_0^∞ B(x, y) e^{2iγy} dy, the function e^{iγx} m(x; γ) is the Jost solution f_γ^+ for the corresponding Schrödinger operator.
The partial differential equation is a hyperbolic boundary value problem known as a Gour-
sat partial differential equation.
Proof. We solve the integral equation by an iteration as before. Put B(x, y) = Σ_{n=0}^∞ K_n(x, y) with K_n(x, y) defined recursively by
K_0(x, y) = ∫_{x+y}^∞ q(t) dt,   K_{n+1}(x, y) = ∫_0^y ∫_{x+y−z}^∞ q(t) K_n(t, z) dt dz.
It is then clear that B(x, y) solves the integral equation provided the series converges. We claim that
|K_n(x, y)| ≤ (R(x)^n/n!) S(x + y),   R(x) = ∫_x^∞ (t − x)|q(t)| dt,   S(x) = ∫_x^∞ |q(t)| dt.   (4.3.1)
Note that ∫_0^∞ S(x + y) dy = R(x). Let us first assume the claim is true. Then the series
converges uniformly on compact sets in R2 and the required estimate follows. It is then also
clear that Kn (x, y) ∈ R for all n since q is real-valued, so that B is real-valued. We leave
uniqueness as an exercise, cf. the uniqueness proof in the proof of Theorem 4.1.2.
Observe that S(x + y) ≤ S(x) for y ≥ 0, so that estimate on the L∞ (0, ∞)-norm of B(x, ·)
follows immediately, and this estimate also gives limx→∞ kB(x, ·)k∞ = 0, since S(x) → 0 as
x → ∞. For the L1 (0, ∞)-norm we calculate
Z ∞
∞
Z
|B(x, y)| dy ≤ exp R(x) S(x + y) dy = exp R(x) R(x)
0 0
Here we use Lebesgue’s Differentiation Theorem 6.1.4, so the resulting identities hold almost
everywhere. Unless we impose differentiability conditions on q, weR cannot state anything
∞
about the higher order partial derivatives, but Bx (x, y) − By (x, y) = x q(t) B(t, y) dt can be
differentiated with respect to x, and this gives the required partial differential equation. We
also obtain
Z y
∂B
(x, y) + q(x + y) ≤
|q(x + y − z) B(x + y − z, z)| dz,
∂x 0
Z y
≤ |q(x + y − z)| exp(R(x + y − z)) S(x + y) dz (4.3.2)
0
Z x+y
≤ S(x + y) exp(R(x)) |q(t)| dt ≤ S(x + y) exp(R(x)) S(x)
x
since R is decreasing. The one but last estimate can also be used to observe that
Z x+y
∂B
(x, y) + q(x + y) ≤ S(x) exp(R(x))
|q(t)| dt → 0, y ↓ 0,
∂x x
since S(x + y) ≤ S(x) and q is integrable. This gives the other boundary condition for the
partial differential equation. Similarly,
Z y Z ∞
∂B
(x, y) + q(x + y) ≤ |q(x + y − z) B(x + y − z, z)| dz + |q(t) B(t, y)| dt,
∂y 0 x
Z y Z ∞
≤ |q(x + y − z)| exp(R(x + y − z)) S(x + y) dz + |q(t)| exp(R(t)) S(t + y) dt
0 x
Z x+y Z ∞
≤ S(x + y) exp(R(x)) |q(t)| dt + S(x + y) exp(R(x)) |q(t)| dt
x x
≤ 2 S(x + y) exp(R(x)) S(x).
R∞
Define n(x; γ) = 0 B(x, y) e2iγy dy, then by (4.3.2) we find | ∂B ∂x
(x, y)| ≤ |q(x + y)| + S(x +
y)S(x) exp(R(x)) and the right hand side is integrable with respect to y ∈ [0, ∞). So we can
differentiate with respect to x in the integrand to get
Z ∞ Z ∞ Z ∞
0 2iγy 2iγy
n (x, γ) = Bx (x, y) e dy = Bx (x, y) − By (x, y) e dy + By (x, y) e2iγy dy
0 0
Z ∞Z ∞ ∞ Z ∞0
= − q(t) B(t, y) dt e2iγy dy + B(x, y)e2iγy − 2iγ B(x, y) e2iγy dy
y=0
Z0 ∞ Zx ∞ Z ∞ 0
since limy→∞R ∞B(x, y) = 0 by limy→∞ S(x + y) = 0, and where Fubini’s theorem is applied and
B(x, 0) = x q(t) dt. So n(x; γ) satisfies the integrated version of the differential equation
m00 + 2iγm0 = qm + q. Since |n(x; γ)| ≤ kB(x, ·)k1 → 0 as x → ∞, it follows that n(γ; x) =
m+γ (x) − 1 and the result follows.
It remains to prove the estimate in (4.3.1). This is proved by induction on n. The case
n = 0 is trivially satisfied by definition of S(x + y). For the induction step, we obtain by the
induction hypothesis and the monotonicity of S the estimate
Z yZ ∞
(R(t))n
|Kn+1 (x, y)| ≤ |q(t)| S(t + z) dt dz
0 x+y−z n!
Z yZ ∞
(R(t))n
≤ S(x + y) |q(t)| dt dz
0 x+y−z n!
and it remains to estimate the double integral. Interchanging the order of integration, the
integral equals
Z x+y Z ∞
(R(t))n y (R(t))n y
Z Z
|q(t)| dz dt + |q(t)| dz dt
x n! x+y−t x+y n! 0
Z x+y Z ∞
(R(t))n (R(t))n
= (t − x)|q(t)| dt + y|q(t)| dt
x n! x+y n!
Z ∞ Z ∞ Z ∞ n
(R(t))n 1
≤ (t − x)|q(t)| dt = (t − x)|q(t)| (u − t)|q(u)| du dt
x n! x n! t
Z ∞ Z ∞ n
1 (R(x))n+1
≤ (t − x)|q(t)| (u − x)|q(u)| du dt =
x n! t (n + 1)!
using y ≤ t − x in the first inequality and u − t ≤ u − x for t ≥ x in the second inequality.
This proves the induction step.
Exercise 4.3.2. Work out the kernel B for the cosh−2 -potential using the results as in Section
2.5.2, see also Section 4.5. What can you say about the kernel B for the other two examples
in Sections 2.5.1, 2.5.3?
Note that in the above considerations for the Jost solution γ ∈ R, but it is possible to
extend the results to γ in the closed upper half plane, =γ ≥ 0.
Proposition 4.3.3. Assume that q satisfies the assumptions of Theorem 4.1.5. The relations
Z ∞ Z ∞
+ 2iγy + iγx
mγ (x) = 1 + B(x, y) e dy, fγ (x) = e + A(x, y) eiγy dy
0 x
where ε(R) ∈ (0, 21 π) is choosen such that sin θ ≥ √1R for all θ ∈ (ε(R), π − ε(R)). We have
sin ε(R) = √1R and ε(R) ∼ √1R as R → ∞. Now the integral can be estimated by
Z ∞ Z ∞ √ π
2ε(R) |B(x, y)| dy + π e−2y R |B(x, y)| dy = 2ε(R)kB(x, ·)k1 + √ kB(x, ·)k2 → 0
0 0 24R
as R → ∞.
Exercise 4.3.4. Work out the proof of the second statement of Proposition 4.3.3.
.
+
Now as in Section 4.3 we have that m− γ (x) − 1 is in the Hardy class H2 , so we similarly get
∞
fγ− (x) = e−iγx + −x A− (x, y)eiγy dy, hence the right hand side equals
R
1 ≤ n ≤ N , with notation and residues as in Theorem 4.2.7. This idea shows that the above
approach can be adapted by adding a finite sum to the kernel r in (4.4.1).
In order to make the above approach rigorous we need to investigate the reflection co-
efficient more closely. Instead of dealing with the kernel A we deal with the kernel B as
considered in Theorem 4.3.1.
Proposition 4.4.1. Assume the potential q satisfies the conditions of Theorem 4.1.2. Then the reflection coefficient R ∈ L²(R), so that its inverse Fourier transform, up to scaling and a linear transformation of the argument, K_c : y ↦ (1/π) ∫_R R(γ) e^{2iγy} dγ, is defined as an element of L²(R).
So K_c(x) = (1/π) ∫_R R(γ) e^{2iγx} dγ is considered as an element of L²(R). Since R(γ) = \overline{R(−γ)}, the function K_c is real-valued.
R
Proof. First observe that the reflection coefficient is bounded on R by (4.2.2). From the proof of Proposition 4.2.4 it follows that 1/T(γ) − 1 = O(1/|γ|) as γ → ±∞. So T(γ) = 1 + O(1/|γ|), and so is |R(γ)| = √(1 − |T(γ)|²) = O(1/|γ|). So |R(γ)|² = O(1/|γ|²), and R ∈ L²(R).
Assuming the conditions of Theorem 4.2.7, we can consider the discrete spectrum of the
corresponding Schrödinger operator. With the notation of Theorem 4.2.7 we define
K_d(x) = 2 Σ_{n=1}^N ρ_n e^{−2p_n x}
and Kd (x) = 0 if the transmission coefficient has no poles, i.e. the Schrödinger operator has no
discrete eigenvalues. So we let Kc , respectively Kd , correspond to the continuous, respectively
discrete, spectrum of the Schrödinger operator. Now define, K = Kc + Kd as the sum of two
square integrable functions on the interval [a, ∞) for any fixed a ∈ R.
Theorem 4.4.2. Assume the potential q satisfies the conditions of Theorem 4.2.7. Then the kernel B satisfies the integral equation
K(x + y) + ∫_0^∞ B(x, z) K(x + y + z) dz + B(x, y) = 0
for almost all y ≥ 0, which for each fixed x ∈ R is an equation that holds for B(x, ·) ∈ L²(0, ∞). Moreover, the integral equation has a unique solution B(x, ·) ∈ L²(0, ∞).
The integral equation is the Gelfand-Levitan-Marchenko equation, and we see that the
kernel B, hence by Theorem 4.3.1 the potential q, is completely determined by the scattering
data. So in this way the inverse scattering problem is solved. In particular, the transmission
coefficient T is not needed, cf. Remark 4.2.10, and hence is determined by the remainder of
the scattering data.
+
Proof. So consider T (γ) fγ− (x) = f−γ (x) + R(γ) fγ+ (x) and rewrite this in terms of m±
±γ (x) to
find for γ ∈ R\{0}
T (γ) m− +
γ (x) = m−γ (x) + R(γ) e
2iγx
m+
γ (x)
Z ∞ Z ∞
−2iγy 2iγx
=1+ B(x, y) e dy + R(γ)e 1+ B(x, y) e2iγy dy
Z ∞ 0 0
Z ∞
− −2iγy 2iγx 2iγx
=⇒ T (γ) mγ (x) − 1 = B(x, y) e dy + R(γ)e + R(γ)e B(x, y) e2iγy dy
0 0
using Theorem 4.3.1. As functions of γ, the first term on the right hand side is an element of
L2 (R) by Theorem 4.3.1, and similarly for the last term on the right hand side using that R(γ)
is bounded by 1, see (4.2.2). Proposition 4.4.1 gives that the middle term on the right hand
side is an element of L2 (R). So we see that the function on the left hand side is an element of
L²(R), which we can also obtain directly from T(γ) m_γ^−(x) − 1 = T(γ)(m_γ^−(x) − 1) + (T(γ) − 1).
By Corollary 4.1.7 and the boundedness of T , the first term is even in the Hardy class H2+
and the second term is in L2 (R) by the estimates in the proof of Proposition 4.2.4.
So we take the inverse Fourier transform of this identity for elements of L2 (R). Since
we have switched to other arguments we use the Fourier transform in the following form, cf.
Section 6.3;
Z Z
−2iλy 1
g(λ) = f (y) e dy, f (y) = (Gg)(y) = g(λ) e2iλy dλ,
R π R
where G is a bounded invertible operator on L2 (R). So applying G we get an identity in L2 (R)
for each fixed x;
G γ 7→ T (γ) m− 2iγx
γ (x) − 1 = B(x, ·) + G γ 7→ R(γ)e
Z ∞
2iγx
B(x, z) e2iγz dz .
+ G γ 7→ R(γ)e
0
It remains to calculate the three G-transforms explicitly. Since the identity is not in L2 (R) we
have to employ approximations.
We first calculate G γ 7→ R(γ)e2iγx . Put Rε (γ) = exp(−εγ 2 ) R(γ), so that Rε ∈ L1 (R) ∩
L2 (R), and note that
Z Z
2
|Rε (γ)e2iγx 2iγx 2
− R(γ)e | dγ = |R(γ)|2 |1 − e−εγ |2 dγ → 0, ε ↓ 0,
R R
Now
Z
1 2
2iγx
R(γ) e−εγ e2iγ(x+y) dy = G γ 7→ Rε (γ) (x + y)
G γ 7→ Rε (γ)e (y) =
π R
where CM is the closed contour in the complex plane consisting of the interval [−M, M ] on
the real axis and the half circle z = M eiθ , 0 ≤ θ ≤ π. (Note that we first observe this for CM
δ
where the interval is taken [−M + iδ, M + iδ] for some δ > 0 and next let δ & 0.) This is a
consequence of
1
T (γ)m−
γ (x) − 1 = O( ), =γ ≥ 0,
|γ|
so that the integrand is O( |γ|1 2 ) and the integrand over the half circle tends to zero.
Now the contour integral can be evaluated, cf. proof of Proposition 4.3.3, by Cauchy’s
Theorem;
N
2πi X 1
Iε (x, y) = Resγ=ipn (T (γ)m−
γ (x) − 1)e
2iγy
π n=1 1 − iεγ
N
X 1
Resγ=ipn T (γ) m− −2pn y
= 2i ipn (x)e .
n=1
1 + εpn
Since fip−n (x) = Cn fip+n (x) = Cn fn (x), cf. proof of Theorem 4.2.7, we find m−
ipn (x) =
−2pn x +
e Cn mipn (x). With the value of the residue in Theorem 4.2.7, we find
N
X 1
lim Iε (x, y) = −2 m+ (x)e−2pn (x+y)
ε↓0
n=1
kfn k2 ipn
N N Z ∞
X 1 −2pn (x+y) X 1
= −2 2
e −2 2
B(x, z)e−2pn z dz e−2pn (x+y)
n=1
kfn k n=1
kf n k 0
Z ∞
= −Kd (x + y) − B(x, z) Kd (x + y + z) dz.
0
Define the operator K^{(x)} on L²(0, ∞) by
(K^{(x)} f)(y) = ∫_0^∞ K(x + y + z) f(z) dz;
then K^{(x)} : L²(0, ∞) → L²(0, ∞) is a bounded operator, as we show below. For uniqueness we need to show that K^{(x)} f + f = 0 only has the trivial solution f = 0 in L²(0, ∞).
First, to show that K (x) is bounded on L2 (0, ∞) we consider for f, g ∈ L2 (0, ∞)∩L1 (0, ∞),
Z ∞ Z ∞
(x)
hK f, gi = K(x + y + z) f (z) dz g(y) dy
0 0
N Z
X ∞ Z ∞
=2 ρn e−2pn (x+y+z) f (z) g(y) dz dy
n=1 0 0
Z ∞ Z ∞ Z
1
+ f (z)g(y)R(γ)e2iγ(x+y+z) dγ dy dz
π 0 0 R
N
X Z
F −1 f (2γ) F −1 ḡ (2γ) R(γ) e2iγx dγ
= hf, en ihen , gi + 2
n=1 R
N Z
X 1
F −1 f (γ) F −1 ḡ (γ) R( γ) eiγx dγ
= hf, en ihen , gi +
n=1 R 2
√
where en (y) = 2ρn e−2pn y−pn x is an element of L2 (0, ∞) and h·, ·i denotes the inner product of
the Hilbert space L2 (0, ∞), which we consider as a subspace of L2 (R) in the obvious way. Note
that interchanging of the integrals is justified because all integrals converge for f, g ∈ L1 (0, ∞).
Next we estimate the right hand side in terms of the L2 (0, ∞)-norm of g by
N
X
kgkL2 (0,∞) kf kL2 (0,∞) ken k2L2 (0,∞) + kF −1 f kL2 (R) kF −1 ḡkL2 (R)
n=1
N
X
≤ kgkL2 (0,∞) kf kL2 (0,∞) ken k2L2 (0,∞) + 1 ,
n=1
since |R(γ) e2iγx | ≤ 1 and, recall L2 (0, ∞) ⊂ L2 (R), the Fourier transform being an isometry.
So K (x) is a bounded operator, and the above expression for hK (x) f, gi remains valid for
arbitrary f, g ∈ L2 (0, ∞).
So for a real-valued f ∈ L2 (0, ∞)
N Z
X 1
(x)
kf k2L2 (0,∞) 2
| F −1 f (γ)|2 R( γ) eiγx dγ
hf + K f, f i = + |hf, en i| +
n=1 R 2
−1
R
using kf k2L2 (0,∞) = R
| F f (γ)|2 dγ. Since f is real-valued, x, γ ∈ R, we have
1 1
| F −1 f (γ)|2 (1 + R( γ) eiγx ) = | F −1 f (−γ)|2 (1 + R(− γ) e−iγx )
2 2
by (4.2.2), so
Z ∞ Z ∞
−1 1 1
2 iγx
| F −1 f (γ)|2 (1 + R( γ) eiγx ) dγ ≤ 0.
2 | F f (γ)| < 1 + R( γ) e dγ = 2<
0 2 0 2
Since |R(γ)| ≤ 1 by (4.2.2) it follows that < 1 + R( 12 γ) eiγx ≥ 0, so that the integral is
non-negative and hence it has to be zero and since the integrand is non-negative it has to be
zero almost everywhere. Note that a zero of < 1 + R( 21 γ) eiγx can only occur if |R( 12 γ)| = 1,
its maximal value. The transmission coefficient is zero, T (γ) = 0, by (4.2.2) and hence
+ + +
f−γ (x) + R(γ) fγ+ (x) = 0, or f−γ and fγ+ are linearly dependent solutions, i.e. W (fγ+ , f−γ ) = 0.
By Proposition 4.2.1 this implies γ = 0. We conclude that F −1 f is zero almost everywhere,
hence f is zero almost everywhere, or f = 0 ∈ L2 (0, ∞).
So any real-valued f ∈ L2 (0, ∞) satisfying f + K (x) f = 0 equals f = 0 ∈ L2 (0, ∞). Since
the kernel K is real-valued, for any f ∈ L2 (0, ∞) with f +K (x) f = 0 we have <f +K (x) <f = 0
and =f + K (x) =f = 0, so then f = 0 ∈ L2 (0, ∞). This proves uniqueness of the solution.
We do not address the characterisation problem, i.e. given a matrix S(γ) as in (4.2.3),
under which conditions on its matrix elements is S(γ) the scattering matrix corresponding to
a potential q? The characterisation problem depends on the class of potentials that is taken
into account. We refer to the paper [3] of Deift and Trubowitz for these results.
Remark 4.4.4. We briefly sketch another approach to recover the potential function from the
scattering data, which is known as the Fokas-Its Riemann-Hilbert approach. We assume that
the transmission coefficient T has no poles in the upper half plane, but the line of reasoning
can be easily adapted to this case. We write
R(λ)fλ+ (x)eiλx
Z
iγx + 1
e f−γ (x) = 1 + dλ.
2πi R λ−γ
R(λ)fλ+ (x)ei(λ+γ)x
Z
+ iγx 1
fγ (x) = e + dλ
2πi R λ+γ
and this determines the Jost solution from the scattering data.
Now the Jost solution fγ+ determines the potential q, as can be seen as follows. Write
fγ+ (x) = eiγx+u(x;γ) with u(x; γ) → 0 as x → ∞, =γ ≥ 0 and u(x; γ) → 0 as γ → ±∞, =γ ≥ 0.
Then the Schrödinger equation for fγ+ translates into the differential equation −2iγux =
uxx + u2x − q, which is a first order equation for ux . Because of its behaviour as γ → ±∞ we
expand ux (x; γ) = ∞ vn (x)
P
n=1 (2iγ)n and this gives v1 (x) = q(x) and recursively defined vn ’s. So in
1
particular, ux (x; γ) = 2iγ q(x) as γ → ±∞. Moreover, fγ+ (x)e−iγx = eu(x;γ) ∼ 1 + u(x; γ), so
that we formally obtain
d +
q(x) = lim 2iγ fγ (x)e−iγx − 1 .
γ→∞, =γ≥0 dx
So in this way, one can also recover the potential q from the scattering data.
We first consider the easiest non-trivial case, N = 1. Then K(x) = K_d(x) = 2ρ e^{−2px}, ρ = ρ₁, p = p₁, and the Gelfand-Levitan-Marchenko equation becomes
2ρ e^{−2p(x+y)} + ∫_0^∞ B(x, z) 2ρ e^{−2p(x+y+z)} dz + B(x, y) = 0,   y ≥ 0,
⟹ B(x, y) = −2ρ e^{−2p(x+y)} − 2ρ e^{−2p(x+y)} ∫_0^∞ B(x, z) e^{−2pz} dz,
so that the y-dependence of B(x, y) is via e^{−2py}. Substitute B(x, y) = −2ρ e^{−2py} w(x), so that
w(x) = e^{−2px} + e^{−2px} w(x) ∫_0^∞ (−2ρ) e^{−4pz} dz = e^{−2px} − (ρ/(2p)) w(x) e^{−2px},
so that w(x) = e^{−2px}/(1 + (ρ/(2p)) e^{−2px}) and we get
B(x, y) = −2ρ e^{−2p(x+y)}/(1 + (ρ/(2p)) e^{−2px})  ⟹  B(x, 0) = −2ρ e^{−2px}/(1 + (ρ/(2p)) e^{−2px}).
Differentiating this expression and multiplying by −1 gives the potential by Theorem 4.3.1;
q(x) = −4pρ e^{−2px}/(1 + (ρ/(2p)) e^{−2px})² = −2p²/cosh²( px + ½ ln(2p/ρ) ),    (4.5.1)
which is, up to an affine scaling of the variable, the potential considered in Section 2.5.2.
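The computation leading to (4.5.1) can be cross-checked by solving the Gelfand-Levitan-Marchenko equation of Theorem 4.4.2 numerically. The sketch below is illustrative only (the z-truncation, step sizes and x-grid are ad-hoc choices): it discretises the z-integral, solves the resulting linear system for B(x, ·), recovers q(x) = −(d/dx)B(x, 0), and compares with the closed form (4.5.1).

```python
import numpy as np

p, rho = 1.0, 2.0
K = lambda s: 2.0 * rho * np.exp(-2.0 * p * s)          # K = K_d for one bound state

z, dz = np.linspace(0.0, 20.0, 1001, retstep=True)
w = np.full_like(z, dz); w[0] = w[-1] = dz / 2.0        # trapezoid weights

def B_at_0(x):
    """Solve B(x,y) + int_0^oo B(x,z) K(x+y+z) dz = -K(x+y) on the z-grid; return B(x,0)."""
    A = np.eye(len(z)) + K(x + z[:, None] + z[None, :]) * w[None, :]
    return np.linalg.solve(A, -K(x + z))[0]

xs = np.linspace(-3.0, 3.0, 61)
q_num = -np.gradient([B_at_0(x) for x in xs], xs)        # q = -dB(x,0)/dx (Theorem 4.3.1)
q_exact = -2.0 * p**2 / np.cosh(p * xs + 0.5 * np.log(2.0 * p / rho)) ** 2
print(np.max(np.abs(q_num - q_exact)))                   # small (discretisation error only)
```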
So the potential −2p2 cosh−2 (px) is reflectionless, and we can ask whether there are more
values for a such that the potential a cosh−2 (px) is reflectionless. By Exercise 2.5.6 it suffices
to consider the case p = 1. We can elaborate on the discussion in Section 2.5.2 by observing
q q
1 1 1 1
iγ − iγ + 4 − a, 2 − iγ − 4 − a 1
fγ+ (x) = 2iγ cosh(x) 2 F1 2 ; ,
1 − iγ 1 + e2x
q q
1 1 1 1
iγ − iγ + 4 − a, 2 − iγ − 4 − a e2x
fγ− (x) = 2iγ cosh(x) 2 F1 2 ; ,
1 − iγ 1 + e2x
q q
1 1 1 1
+ − a, − − a 1
+
f−γ (x) = e−ixγ 2 F1 2 4 2 4
; ,
1 + iγ 1 + e2x
Proposition 4.5.1. The potential q(x) = −l(l + 1) cosh⁻²(x) is a reflectionless potential. The corresponding self-adjoint Schrödinger operator has essential spectrum [0, ∞) and discrete spectrum {−l², −(l − 1)², ..., −1}. The eigenfunction for the eigenvalue −k² is given by
f_k(x) = f_{ik}^+(x) = (2 cosh x)^{−k} 2F1( k − l, 1 + k + l ; 1 + k ; 1/(1 + e^{2x}) ).
Note that the 2F1-series in the eigenfunction is a terminating series, since k − l ∈ −N and (−n)_m = 0 for m > n.
Proof. The statements about the spectrum and the precise form of the eigenvalues and eigen-
functions follow from Theorem 4.2.7 and the previous considerations. It remains to calculate
the squared norm, for which we use Theorem 4.2.7. First we calculate
ekx e2x
kx − k − l, 1 + k + l
Ck = lim e fik (x) = lim x 2 F1 ;
x→∞ x→∞ (e + e−x )k 1+k 1 + e2x
k − l, 1 + k + l (−l)l−k
= 2 F1 ;1 = = (−1)l−k ,
1+k (1 + k)l−k
where we use 2 F1 (−n, b; c; 1) = (c − b)n /(b)n for n ∈ N, which is known as the Chu-Vander-
monde summation formula. In order to apply Theorem 4.2.7 we also need to calculate the
residue of the transmission coefficient at γ = ik;
(1 − iγ)l (1 − iγ)l
Resγ=ik T (γ) = Resγ=ik (−1)l = lim (γ − ik)(−1)l
(1 + iγ)l γ→ik (1 + iγ)l
l
−i (−1) (1 + k)l l−k 1 l
= k−1
= i (−1) ,
(−1) (k − 1)! (l − k)! (k − 1)! k
1 1
k − l, l + k + 1
Z
| 2 F1 ; z |2 z k−1 (1 − z)k−1 dz.
2 0 1+k
R1
Expand the terminating 2 F1 -series, and use the beta-integral 0 z α−1 (1−z)β−1 dz = Γ(α)Γ(β)
Γ(α+β)
, to
rewrite this as a double series. Rephrase the result of Proposition 4.5.1 as a double summation
formula.
Now that we have established the existence of reflectionless potentials, we can try to solve
the Gelfand-Levitan-Marchenko equation for a reflectionless potential with an arbitrary, but
finite,
PN number of discrete eigenvalues (or bound states). So we have now K(x) = Kc (x) =
−2pn x
2 n=1 ρn e and we have to solve B(x, y) from
N
X Z ∞ N
X
−2pn (x+y)
2 ρn e + B(x, z) 2 ρn e−2pn (x+y+z) dz + B(x, y) = 0,
n=1 0 n=1
and this shows that we can expand B(x, y) as a linear combination of e−2pn y when considered
as function of y. We put —the form choosen makes the matrix involved of the form I plus a
positive definite matrix—
N
X √
B(x, y) = ρn e−pn (x+2y) wn (x),
n=1
and we need to determine wn (x) from the Gelfand-Levitan-Marchenko equation in this case.
Plugging this expression into the integral equation we see that we get an identity, when
considered as function in y, is a linear combination of e−2pn y . Since the pn ’s are different, each
√
coefficient of e−2pn y has to be zero. This gives, after dividing by ρn e−pn x ,
N ∞
√ X √ Z
2 ρn e−pn x + 2 ρm ρn wm (x)e−(pn +pm )x e−2(pn +pm )z dz + wn (x) = 0
m=1 0
√ √
for n = 1, · · · , N . Put w(x) = (w1 (x), · · · , wN (x))t and v(x) = 2( ρ1 e−p1 x , · · · , ρn e−pn x )t
we can rewrite this as (I + S(x))w(x) + v(x) = 0, where I is the identity N × N -matrix and
√
S(x)nm = ρn ρm e−(pn +pm )x /(pn + pm ), which is also a symmetric matrix. In order to see that
S(x) is also positive definite we take an arbitrary vector ξ ∈ CN ,
N N Z ∞
X X √ √
hS(x)ξ, ξi = S(x)ij ξj ξi = ρj e−pj x ξj e−pj t ρi e−pi x ξi e−pi t dt
i,j=1 i,j=1 0
Z ∞
= |f (x, t)|2 dt ≥ 0,
0
PN √
with f (x, t) = i=1 ρi ξi e−pi (x+t) . It follows that I + S(x) is invertible, as predicted by
−1
Theorem 4.4.2, so that we can solve for w(x) = − I + S(x) v(x). So
N
X √ −1 1
B(x, y) = wn (x) ρn e−pn (x+2y) = −h I + S(x) v(x), v(x + 2y)i
n=1
2
N
1 −1 1 X −1
=⇒ B(x, 0) = − h I + S(x) v(x), v(x)i = − I + S(x) n,m vm (x)vn (x)
2 2 n,m=1
N
X −1 d −1 d
=2 I + S(x) n,m I + S(x) m,n = 2 tr I + S(x) I + S(x)
n,m=1
dx dx
observing that
d √ 1
I + S(x) n,m = − ρn ρm e−(pn +pm )x = − vn (x)vm (x).
dx 4
In order to rewrite B(x, 0) in an even more compact way, note that for a N × N -matrix
A(x) depending on a variable x, we can calculate
d
det(A(x))
dx
a011 (x) a12 (x) · · · a1N (x) a11 (x) a12 (x) · · · a01N (x)
a0 (x) a22 (x) . . . a2N (x) a21 (x) a22 (x) · · · a0 (x)
21 2N
= det .. + · · · + det
... .. .. ... ..
. . . .
0 0
aN 1 (x) aN 2 (x) · · · aN N (x) aN 1 (x) aN 2 (x) · · · aN N (x)
N
X M
X N
X
= a01j (x)A1j (x) + ··· + a0N j (x)AN j (x) = a0ij (x)Aij (x)
j=1 j=1 i,j=1
N
X dA
a0ij (x)A−1 (x)A−1 (x)
= det(A(x)) ji (x) = det(A(x)) tr
i,j=1
dx
by developing according to the columns that have been differentiated, denoting the signed
principal minors by Aij (x), so that A−1 (x)ij = det(A(x))−1 Aji (x).
This observation finally gives
B(x, 0) = 2 (d/dx) ln det(I + S(x)),   q(x) = −2 (d²/dx²) ln det(I + S(x)),
S(x)_{n,m} = √(ρ_n ρ_m) e^{−(p_n + p_m)x}/(p_n + p_m).
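This determinant formula is easy to evaluate numerically for any N. The sketch below is illustrative only (the finite-difference step and the sample data (p_n, ρ_n) are ad-hoc choices): it implements q(x) = −2 (d²/dx²) ln det(I + S(x)) and checks that for N = 1 it reproduces (4.5.1).

```python
import numpy as np

def reflectionless_q(ps, rhos, x, h=1e-3):
    """q(x) = -2 d^2/dx^2 log det(I + S(x)),
    S(x)_{nm} = sqrt(rho_n rho_m) e^{-(p_n+p_m)x}/(p_n+p_m); second derivative by central differences."""
    ps, rhos = np.asarray(ps, float), np.asarray(rhos, float)
    P = ps[:, None] + ps[None, :]
    A = np.sqrt(rhos[:, None] * rhos[None, :]) / P
    def logdet(t):
        return np.linalg.slogdet(np.eye(len(ps)) + A * np.exp(-P * t))[1]
    return np.array([-2.0 * (logdet(t + h) - 2.0 * logdet(t) + logdet(t - h)) / h**2
                     for t in np.atleast_1d(x)])

x = np.linspace(-5.0, 5.0, 201)
p, rho = 1.0, 2.0
err = reflectionless_q([p], [rho], x) + 2*p**2 / np.cosh(p*x + 0.5*np.log(2*p/rho))**2
print(np.abs(err).max())                       # small: N = 1 reproduces (4.5.1)
q2 = reflectionless_q([1.0, 2.0], [2.0, 12.0], x)   # an N = 2 profile with bound states -1, -4
print(q2.min())
```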
In Section 5.5 we consider the case N = 2 with an additional time t-dependence of Proposition 4.5.3. In case N = 2 we can make Proposition 4.5.3 somewhat more explicit;
√ !
ρ1 −2p1 x ρ1 ρ2 −(p1 +p2 )x
1 + 2p e p1 +p2
e
det(I + S(x)) = det √ρ1 ρ2 −(p 1
1 +p2 )x ρ2 −2p2 x
p1 +p2
e 1 + 2p 2
e
ρ1 −2p1 x ρ2 −2p2 x 1 1
= 1+ e + e + − ρ1 ρ2 e−2(p1 +p2 )x
2p1 2p2 4p1 p2 (p1 + p2 )2
ρ1 −2p1 x ρ2 −2p2 x ρ1 ρ2 (p2 − p1 )2 −2(p1 +p2 )x
= 1+ e + e + e
2p1 2p2 4p1 p2 (p1 + p2 )2
√ √ √
ρ1 ρ2 −(p1 +p2 )x n p2 − p1 p1 + p2 2 p1 p2 (p1 +p2 )x p2 − p1 ρ1 ρ2 −(p1 +p2 )x
= √ e √ e + √ e
2 p1 p2 p1 + p2 p2 − p1 ρ1 ρ2 p1 + p2 2 p1 p2
√ρ p √
p1 ρ2 (p1 −p2 )x o
1 2 (p2 −p1 )x
+ √ e +√ e
p1 ρ2 ρ1 p2
√ √
p2 − p1 ρ1 ρ2 −(p1 +p2 )x n p1 + p2 2 p1 p2
= √ e cosh (p1 + p2 )x + ln( √ )
p1 + p2 p1 p2 p2 − p1 ρ1 ρ2
√
p1 + p2 ρ1 p 2 o
+ cosh (p2 − p1 )x + ln( √ ) .
p2 − p1 p1 ρ2
Chapter 5

Inverse scattering method and the KdV equation

The Korteweg-de Vries equation (KdV equation) is
q_t = 6 q q_x − q_xxx,    (5.1.1)
where q is a function of two variables x (space) and t (time), with x, t ∈ R. Here q_t(x, t) = ∂q/∂t(x, t), etc. This equation can be viewed as a non-linear evolution equation by writing it as q_t = S(q), with S a non-linear map on a suitable function space defined by S(f) = 6ff_x − f_xxx.
We also consider the Korteweg-de Vries equation together with an initial condition;
q_t = 6 q q_x − q_xxx,   q(x, 0) = q_0(x).    (5.1.2)
We first discuss some straightforward properties of the KdV-equation (5.1.1), also in rela-
tion to some other types of (partial) differential equations in the following exercises.
Exercise 5.1.1. Show that with q a solution to the Korteweg-de Vries equation (5.1.1) also
q̃(x, t) = q(x − 6Ct, t) − C is a solution. This is known as Galilean invariance.
Exercise 5.1.2. Show that by a change of variables the Korteweg-de Vries equation can be
transformed into qt + aqqx + bqx + cqxxx = 0 for arbitrary real constants a, b, c.
Exercise 5.1.3. The Burgers3 equation qt = qxx + 2qqx looks similar to the KdV-equation.
Show that the Burgers equation can be transformed into a linear equation ut = uxx by the
Hopf-Cole transformation q = (ln u)x .
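As a quick symbolic illustration of Exercise 5.1.3 (a sketch, not a proof: it checks only one particular heat-equation solution, u = 1 + e^{ax + a²t}, chosen ad hoc), one can verify that q = (ln u)_x then solves the Burgers equation:

```python
import sympy as sp

x, t, a = sp.symbols('x t a')
u = 1 + sp.exp(a * x + a**2 * t)          # solves the heat equation u_t = u_xx
q = sp.diff(sp.log(u), x)                 # Hopf-Cole transformation
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)))                           # 0
print(sp.simplify(sp.diff(q, t) - sp.diff(q, x, 2) - 2 * q * sp.diff(q, x)))   # 0
```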
1
Diederik Johannes Korteweg (31 March 1848 — 10 May 1941), professor of mathematics at the University
of Amsterdam and supervisor of de Vries’s 1894 thesis “Bijdrage tot de kennis der lange golven”.
2
Gustav de Vries (22 January 1866 — 16 December 1934), mainly worked as a high school teacher.
3
Johannes Martinus Burgers (13 January 1895 — 7 June 1981), is one of the founding fathers of research
in fluid dynamics in the Netherlands.
83
84 Chapter 5: Inverse scattering method and the KdV equation
With writing down this initial value problem, the question arises on the existence and
uniqueness of solutions. This is an intricate questions and the answer depends, of course, on
smoothness and decay properties of q0 . We do not go into this, but formulate two results
which we will not prove. Our main concern is to establish a method in order to construct
solutions explicitly.
dk q0
Theorem 5.1.5. Assume q0 ∈ C 4 (R) and (x) = O(|x|−M ), M > 10, for k ∈ {0, 1, 2, 3, 4}.
dxk
Then the Korteweg-de Vries initial value problem (5.1.2) has a real-valued unique solution (in
the classical sense).
Exercise 5.1.6. In this exercise we sketch a proof of the unicity statement. Assume q and u
are solutions to (5.1.2), and put w = u − q.
• Use |qx (x,R t) − 21 ux (x, t)| ≤ C Rto conclude dtd R w2 (x, t) dx ≤ 12C R w2 (x, t) dx and
R R
Theorem 5.1.5 is proved using the Inverse Spectral Method, a method that we discuss later,
and it should be noted that even a C ∞ (R)-condition of the initial value q0 is not sufficient to
guarantee a continuous global solution to (5.1.2). Methods from evolution equations have led
to the following existence result.
Theorem 5.1.7. Assume the initial value q0 is an element of the Sobolev space W s (R) for
s ≥ 2. Then the Korteweg-de Vries initial value problem (5.1.2) has a solution q such that
t 7→ q(·, t) is a continuous map from R≥0 into W s (R) and a C 1 -map from R≥0 into L2 (R).
Chapter 5: Inverse scattering method and the KdV equation 85
The last statement means that limh→0 k h1 q(·, t + h) − q(·, t) − qt (·, t)k = 0 in L2 (R), and
Assuming that we can differentiate under the integral sign and that q and qxx decay sufficiently
fast we get
Z Z Z
d
q(x, t) dx = qt (x, t) dx = (3q 2 (x, t) − qxx (x, t))x dx = 0.
dt R R R
where a is some integration constant. Multiplying by f 0 shows that we can integrate again,
and
c 1
−cf f 0 − 3f 2 f 0 + f 00 f 0 = af 0 =⇒ − f 2 − f 3 + (f 0 )2 = af + b
2 2
for some integration constant b. This is a non-linear first order (ordinary) differential equation
after rescaling the integrating constants. Since the left hand side is a square, the regions where
the cubic polynomial F is positive are of importance. So the zeros of the F play a role in the
analysis.
Remark 5.1.9. In general the equation y 2 = F (x), with F a cubic polynomial, is an elliptic
curve, so that we can expect elliptic functions involved in the solution. The elliptic curve
is non-singular (i.e. no cusps or self-intersections) if and only if the polynomial F has three
distinct roots.
86 Chapter 5: Inverse scattering method and the KdV equation
1 y
0
-1.5 -1 -0.5 0 0.5 1
x
-1
-2
Figure 5.1: The cubic polynomial F in case 2 with double zero larger than the simple zero.
Case 1 can be completely solved in terms of elliptic functions. For our purposes case 2
is the most interesting, since the corresponding solution is related to the so-called soliton
solution of the KdV equation (5.1.1) The case 2 is of interest in case the simple root, say α, is
smaller than the double root, say β, so F (x) = 2(x − α)(x − β)2 as in the situation of Figure
5.1 with α = −1 and β = 0.
If one considers the differential equation (f 0 )2 = F (f ) in this case then we find that a
solution f with initial value α < f (x0 ) < β satisfies α < f (x) < β for all x. So we can expect
bounded solutions in this case.
Chapter 5: Inverse scattering method and the KdV equation 87
Taking square roots and noting that the differential equation is separable gives
Z f
dξ √
√ = ± 2x + C,
α (ξ − β) ξ − α
−2 φ π √
√ ln | tan( + )| = ± 2x + C,
β−α 2 4
and using the addition formula for tan or for sin and cos we can rewrite this as
1 + tan( φ ) r
β−α
2
ln =± (x − C).
φ 2
1 − tan( 2 )
1+tan( φ )
Denoting the right hand side by y, we get 2
1−tan( φ )
= ey which gives tan( φ2 ) = tanh( y2 ). Now
2
we can determine f as a function of x
φ φ
f (x) = α + (β − α) sin2 φ = α + 4(β − α) sin2 ( ) cos2 ( )
2 2
φ φ
= α + 4(β − α) tan2 ( ) cos4 ( )
2 2
tan2 ( φ2 )
= α + 4(β − α) 2
1 + tan2 ( φ2 )
tanh2 ( y2 )
= α + 4(β − α) 2
1 + tanh2 ( y2 )
α−β
= α + (β − α) tanh2 (y) = β +
cosh2 (y)
α−β
= β+ q
cosh2 ( β−α
2
(x − C))
using cos2 φ = 1/(1 + tan2 φ) and 12 1 + tanh2 ( y2 ) tanh y = tanh( y2 ) and using that cosh
Exercise 5.1.12. Give a qualitative analysis of possible growth/decay behaviour of the solu-
tions to (f 0 )2 = F (f ) for the other listed cases for the cubic polynomial F .
Exercise 5.1.13. Assume now that F has three distinct real roots,
R φ then one can proceed
in the same way, except that now we get the integral of the form 0 √ 1
dν instead of
1−k2 sin2 ν
Rφ 1
0 cos ν
dν. Use the Jacobian elliptic function cn defined by
Z φ
1
v = p dν cos φ = cn(v; k).
0 1 − k 2 sin2 ν
The corresponding solutions to the KdV-equation (5.1.1) are known as cnoidal waves, and
were already obtained by Korteweg and de Vries in their 1895 paper.
In the course of history the KdV-equation (5.1.1) has been linked to other well-known
ordinary differential equations by other suitable substitutions, and two of them are discussed
in the following exercises.
Exercise 5.1.14. Put q(x, t) = t + f (x + 3t2 ), and show that q satisfies the KdV-equation
(5.1.1) if and only if f satisfies 1 − 6f f 0 + f 000 = 0, or by integrating C + z − 3f 2 + f 00 = 0.
This equation is essentially the Painlevé I equation g 00 (x) = 6g 2 (x) + x, which is the sim-
plest equation in the Painlevé4 equations consisting of 6 equations. The Painlevé equations
are essentially the second order differential equations of the form y 00 (t) = R(t, y, y 0 ) with R
a polynomial in y, y 0 with meromorphic coefficients in t together with the Painlevé prop-
erty meaning that there are no movable branch points and no movable essential singularities
exluding linear and integrable differential equations.
Exercise 5.1.15. Put q(x, t) = −(3t)−2/3 f (3t)x1/3 , and show that q is a solution to the KdV-
equation (5.1.1) if and only if f satisfies f 000 + (6f − z)f 0 − 2f = 0. This is related to the
Painlevé II equation.
4
Paul Painlevé (5 December 1863 — 29 October 1933), French mathematician and politician. Painlevé was
Minister in several French Cabinets, as well as Prime-Minister of France.
Chapter 5: Inverse scattering method and the KdV equation 89
where the eigenfunctions and eigenvalues will also depend on t. Expressing q in terms of
f and λ and plugging this into the KdV-equation (5.1.1) gives a huge equation, that upon
multiplying by f 2 , can be written as
λt f 2 + f ux − fx u x = 0,
u(x, t) = ft (x, t) + fxxx (x, t) − 3 q(x, t) + λ(t) fx (x, t). (5.2.1)
This is a tedious verification and one of the main steps, assuming that all partial derivatives
exist and ftxx = fxxt . Note that the second term in (5.2.1) is a derivative of a Wronskian.
Later it has been pointed out that the KdV-equation can be interpreted as a compatibility
condition ftxx = fxxt for solutions of systems of partial differential equations. This approach
has also led to several other generalisations.
Integrating relation (5.2.1) over R, and assuming that f ∈ L2 (R), so that λ corresponds
to the discrete spectrum of the Schrödinger operator, we get
a
λt kf k2 = − lim f (x, t) ux (x, t) − fx (x, t)ux (x, t)
a→∞ −a
and the right hand side gives zero, so that the (discrete) spectrum is constant. So the im-
portant observation is that the KdV-equation is a description of an isospectral family of
Schrödinger operators. Hence, we can proceed as follows to find a solution to an initial value
problem (5.1.2) for the KdV-equation:
1. Describe the spectral data of the Schrödinger operator with potential q0 ; this is described
in Section 4.2.
2. Solve the evolution of the spectral data when the potential evolves according to the
KdV-equation; this is yet unclear.
5
Gardner, Greene, Kruskal and Miura have received the 2006 AMS Leroy P. Steele Prize for a Seminal
Contribution to Research for one of their follow-up papers.
90 Chapter 5: Inverse scattering method and the KdV equation
3. Determine the potential q(x, t) from the spectral data of the Schrödinger operator at time
t; this can be done essentially using the Gelfand-Levitan-Marchenko integral equation
of Theorem 4.4.2 and Theorem 4.3.1.
We shortly derive heuristically that the evolution in step 2 is linear, and since the Gelfand-
Levitan-Marchenko integral equation is also linear, this procedure gives a mainly linear method
to solve the KdV-inital value problem (5.1.2). Note however that in explicit cases it is hard
to solve the Gelfand-Levitan-Marchenko equation explicitly, except for the reflectionless po-
tentials as discussed in Section 4.5. This method is now known as the Inverse Scattering
Method (ISM), or inverse spectral method, or inverse scattering transformation. The inverse
scattering method can be considered as an analogue of the Fourier transform used to solve
linear differential equations.
Now that we have established λt = 0 for an eigenvalue λ in the discrete spectrum, we use
this in (5.2.1) and carrying out the differentiation gives
0 = (f ux − fx u)x = f uxx − fxx u = f uxx − (q − λ)u ,
or u is also a solution to the Schrödinger equation. Hence, u = C f + D g, with C, D ∈ C and
g a linearly independent solution, i.e.
ft + fxxx − 3(q + λ) fx = C f + D g. (5.2.2)
Assume λ = λn = −p2n a discrete eigenvalue using the notation as in Theorem 4.2.7, so that
f (x, t) = fn (x, t) ∼ e−pn x for x → ∞. So in the region where the potential vanishes, in
particular for x → ∞, we see that left hand side has exponential decay, and since g doesn’t
have exponential decay, it follows that D = 0.
Now fixing n, normalise ψ(x, t) = c(t)fn (x, t), so that kψ(·, t)k = 1 for all t, and since in
the above considerations f was an arbitrary solution to the Schrödinger equation we get
ψt + ψxxx − 3(q + λ)ψx = Cψ.
We claim that C = 0, and in order to see this we multiply this equation R by ψ and integrate
over R with respect to x. Then the left hand side equals C. Now recall R ψt (x, t)ψ(x, t) dx =
1 1 d
R2 hψt (·, t), ψ(·, t)i + hψ(·, t), ψ1 t (·, t)i 2 = 2 dt
kψ(·, t)k2 = 0, since its norm is constant 1. Also
λ ψ (x, t)ψ(x, t) dx = λn 2 ψ(x, t) |∞
R n x −∞ = 0, since the eigenfunction decays exponentially
2
fast. Similarly, R ψxxx (x, t)ψ(x, t) dx = − R ψxx (x, t)ψx (x, t) dxR = − 12 ψx (x, t) |∞
R R
−∞ = 0,
since
R the derivative of the R eigenfunction decays exponentially. R q(x, t)ψx (x, t)ψ(x, t) dx =
λ ψ (x, t)ψ(x, t) dx + R ψx (x, t)ψxx (x, t) dx = 0, as before. So for the normalised eigen-
R n x
function ψ(·, t) for the constant eigenvalue λn we find
ψt + ψxxx − 3(q + λ)ψx = 0.
Now use ψ(x, t) ∼ c(t)e−pn x as x → ∞, and since we assume that the potential decays
sufficiently fast we get a linear first order differential equation for c(t);
dc 3
(t) + (−pn )3 c(t) − 3(−p2n )(−pn )c(t) = 0 =⇒ c(t) = e4pn t c(0).
dt
Chapter 5: Inverse scattering method and the KdV equation 91
Now that we have determined the evolution of the discrete part of the spectral data,
we consider next the evolution of the continuous part of the spectrum. Let λ = γ 2 , and
consider (5.2.2) for ψ(x, t) = ψγ (x, t) with ψγ as in Section 4.2, where we now assume that
the transmission and reflection coefficient also depend on t; R(γ, t), T (γ, t). Again using that
the potential q vanishes for large enough x, we see that
ψt (x, t) + ψxxx (x, t) − 3(q(x, t) + λ)ψx (x, t) ∼ Tt (γ, t) + 4iγ 3 T (γ, t) e−iγx ,
x → −∞,
3 −iγx 3 iγx
ψt (x, t) + ψxxx (x, t) − 3(q(x, t) + λ)ψx (x, t) ∼ 4iγ e + Rt (γ, t) − 4iγ e , x → ∞.
We conclude that ψt + ψxxx − 3(q + λ)ψx is a constant multiple of ψ, and this constant is 4iγ 3
as follows by looking at the coefficient of e−iγx for x → ∞. Equating gives two linear first
order differential equations for the transmission and reflection coefficients;
Tt (γ, t) + 4iγ 3 T (γ, t) = 4iγ 3 T (γ, t) =⇒ T (γ, t) = T (γ, 0),
3
Rt (γ, t) − 4iγ 3 R(γ, t) = 4iγ 3 R(γ, t) =⇒ R(γ, t) = R(γ, 0)e8iγ t .
Note that this also implies that starting with a reflectionless potential q0 , we stay within the
class of reflectionless potentials.
Exercise 5.2.1. Assume that the √ solution scheme for the Korteweg-de Vries equation with
initial value q0 (x) = − 2c cosh−2 ( 12 cx) of this section is valid. Use the results of Section 4.5
to derive the solution q(x, t) of Proposition 5.1.10. in this way. This solution is known as the
pure 1-soliton solution, see Section 5.5.
1 00 q(·, t + h) − q(·, t)
−f + q(·, t + h)f + f 00 − q(·, t)f = f .
h h
Assuming that the partial derivative with respect to t of q exists we obtain
q(x, t + h) − q(x, t)
Z
1 2
k L(t + h)f − L(t)f − qt (·, t)f k2 = − qt (x, t) |f (x)|2 dx.
h h
R
If we assume moreover that qt (·, t) ∈ L∞ (R) with locally bounded L∞ -norm we can apply
Dominated Convergence Theorem 6.1.3 to see that the right hand tends to zero as h → 0 even
for arbitrary f ∈ L2 (R). So in this case we have established Lt (t) as multiplication operator
by qt (·, t) with domain D(L) = W 2 (R).
Let us assume that we also have a strongly continuous family of bounded operators
V (t) ∈ B(H). As an application we derive a product rule, which is the same as for functions
except that we have to keep track of domains. Note that D(V (t)L(t)) = D(L(t)) = D(L) is
independent of t, and write for x ∈ D(Lt (t))
1
V (t + h)L(t + h)x − V (t)L(t)x
h
1 1
= V (t + h)L(t + h)x − V (t + h)L(t)x + V (t + h)L(t)x − V (t)L(t)x
h h
1 1
= V (t + h) L(t + h)x − L(t)x + V (t + h)L(t)x − V (t)L(t)x
h h
Lemma 5.3.2. (i) Let L(t) be a strongly continuous family of operators on a Hilbert space H
with constant domain D(L) = D(L(t)) and let V (t) be a strongly continuous family of locally
bounded operators on H which is locally bounded in the operator norm. If x ∈ D(Lt (t)) and
L(t)x ∈ D(Vt (t)), then x ∈ D (V L)t (t) and
d(V L)
(t)x = Vt (t)L(t)x + V (t)Lt (t)x.
dt
∗ ∗
(ii) Let V (t) be as in (i), then dVdt (t) = Vt (t) .
Exercise 5.3.3. Prove the remaining statement of Lemma 5.3.2. What happens if we try to
differentiate L(t)V (t) with respect to t?
For any two (possibly) unbounded operators (A, D(A)), and (B, D(B)) the commutator
[A, B] is defined as the operator [A, B] = AB − BA with D([A, B]) = D(AB) ∩ D(BA) =
{x ∈ D(A) ∩ D(B) | Bx ∈ D(A) and Ax ∈ D(B)}. A (possibly unbounded) operator B is
anti-selfadjoint if iB is a self-adjoint operator. So B ∗ = −B, and in particular D(B) = D(B ∗ ).
Theorem 5.3.4 (Lax). Let L(t), t ≥ 0, be a family of self-adjoint operators on a Hilbert
space with D(L(t)) = D(L) independent of t which is strongly continuously differentiable.
Assume that there exists a family of anti-selfadjoint operators B(t), with constant domain
D(B(t)) = D(B), depending continuously on t such that
• Lt = [B, L], i.e. Lt (t) = [B(t), L(t)] for all t ≥ 0,
• there exists a strongly continuous family V (t) ∈ B(H) satisfying Vt (t) = B(t)V (t),
V (0) = 1 ∈ B(H),
• L(t)V (t) is differentiable with respect to t.
Then L(t) is unitarily equivalent to L(0) and in particular, the spectrum of L(t) is independent
of t.
Such pairs L, B are called Lax pairs, and this idea has been fruitful in e.g. integrable
systems, see Exercise 5.3.9 for an easy example.
Note that in case B = B(t) is independent of t, then we can take V (t) = exp(tB). In
particular, this is a family of unitary operators, since B is anti-selfadjoint and so the second
requirement is automatically fulfilled. This remains true in the general time dependent case
under suitable conditions on B(t), but there is not such an easy description of the solution.
This is outside the scope of these notes, but in the case of the KdV-equation as in the sequel
Vt (t) = B(t)V (t), V (0) = 1 has a unitary solution in case q(·, 0) ∈ W 3 (R).
Sketch of proof. We first claim that V (t) is actually unitary for each t. To see this, take
w(t) = V (t)w, v(t) = V (t)v and so dw
dt
(t) = B(t)w(t) and dv
dt
(t) = B(t)v(t), so that
d d d
hw(t), v(t)i = h w(t), v(t)i + hw(t), v(t)i = hB(t)w(t), v(t)i + hw(t), B(t)v(t)i
dt dt dt
= hB(t)w(t), v(t)i − hB(t)w(t), v(t)i = 0.
94 Chapter 5: Inverse scattering method and the KdV equation
Or hV (t)w, V (t)vi = hw, vi, and V (t) is an isometry, and since we also get from this hw, vi =
hV (t)∗ V (t)w, vi for arbitrary v it follows V (t)∗ V (t) = 1. Note that W (t) = V (t)V (t)∗ satisfies
Wt = BV V ∗ + V V ∗ B ∗ = BW − W B = [B, W ] with initial condition W (0) = 1. Since
W (t) = 1 is a solution, we find W (t) = V (t)V (t)∗ = 1, or V (t) is unitary, and the claim
follows.
By Lemma 5.3.2 we see that V (t)∗ is also differentiable, and so is L̃(t) = V (t)∗ L(t)V (t).
We want to show that L̃(t) is independent of t. Assuming this for the moment, it follows
that V (t)∗ L(t)V (t) = L(0), since V (0) = 1, and hence L(t) is unitarily equivalent to L(0) by
L(t) = V (t)L(0)V (t)∗ .
To prove the claim we differentiate with respect to t the relation L(t)V (t) = V (t)L̃(t);
dL dV dV dL̃
(t) V (t) + L(t) (t) = (t) L̃(t) + V (t) (t)
dt dt dt dt
dL dL̃
=⇒ (t) V (t) + L(t) B(t) V (t) = B(t) V (t) L̃(t) + V (t) (t)
dt dt
dL dL̃
=⇒ (t) V (t) + L(t) B(t) V (t) = B(t) V (t) V (t)∗ L(t) V (t) + V (t) (t)
dt dt
dL̃ dL
=⇒ V (t) (t) = (t) + L(t) B(t) − B(t) L(t) V (t) = 0
dt dt
using the product rule of Lemma 5.3.2, the differential equation for the unitary family V (t).
Since V (t) is unitary we get ddtL̃ (t) = 0.
Exercise 5.3.5. Show formally that a self-adjoint operator of the form L = L0 + Mq , where
L0 is a fixed self-adjoint operator and Mq is multiplication by a t-dependent function q, and
assuming there exists a anti-selfadjoint B such that BL − LB = MK(q) , then L is isospectral
if q satisfies the equation qt = K(q).
Lax’s Theorem 5.3.4 dates from 1968, shortly after the the discovery of Gardner, Greene,
Kruskal and Miura of the isospectral relation of the Schrödinger operator and the KdV-
equation. In order to see how the KdV-equation arises in the context of Lax pairs, we take
L(t) as before as the Schrödinger operator with time-dependent potential q(·, t). We look for
B(t) in the form of a differential operator, and since it has to be anti-self-adjoint we only
allow for odd-order derivatives, i.e. we try for integer m and yet undetermined functions bj ,
j = 0, · · · , m − 1,
m−1
d2m+1 X d2j+1 d2j+1
Bm (t) = 2m+1 + Mj (t) 2j+1 + 2j+1 Mj (t) , Mj (t)f (x) = bj (x, t)f (x),
dx j=0
dx dx
considered as operator on L2 (R) with domain W 2m+1 (R) for suitable functions bj .
d
In case m = 0 we have B0 (t) is dx independent of t, and [B, L] = qx , i.e. the multiplication
2
operator on L (R) by multiplying by qx . So then the condition Lt = [B, L] is related to the
partial differential equation qt = qx .
Chapter 5: Inverse scattering method and the KdV equation 95
Exercise 5.3.6. Solve qt = qx , and derive directly the isospectral property of the correspond-
ing Schrödinger operator L(t).
and using the general commutation [∂, f ] = fx repeatedly we see [∂ 3 , q] = 3∂qx ∂ + qxxx ,
[∂, ∂b∂] = ∂bx ∂, [b∂, q] = bqx , [∂b, q] = bqx , so that we obtain
1
− qxxx + 3qqx = [B1 (t), L(t)] = Lt (t) = qt ,
2
which is the KdV-equation up to a change of variables, see Exercise 5.1.2.
3
d d
Exercise 5.3.7. Check that taking B(t) = −4 dx 3 + 6q(x, t) dx + 3qx (x, t) gives [B(t), L(t)] =
Exercise 5.3.8. Consider the case m = 2 and derive the corresponding 5-th order partial
differential equation for q. Proceeding in this way, one obtains a family of Korteweg-de Vries
equations.
Exercise 5.3.9. Assume that we have a Lax pair as in Theorem 5.3.4 in the case of a
finite dimensional Hilbert space, e.g. H = C2 . Then we can view Lt = [B, L] as a system
n
of N = dim H first order coupled differential equations. Show that tr L(t) then gives
invariants for this system. Show also that (Lk )t = [B, Lk ] for k ∈ N. Work this out for the
2
example of the harmonic oscillator; q̈ + ω
q = 0, where dot denotes derivative with
respect to
0 − 12 ω
p ωq
time t, by checking that L = , B is the constant matrix 1 and q̇ = p
ωq −p 2
ω 0
gives a Lax pair.
The idea of Lax pairs has turned out to be very fruitful in many other occassions; similar
approaches can be used for other nonlinear partial differential equations, such as the sine-
Gordon equation, the nonlinear Schrödinger equations, see Exercise 5.3.5. These equations
have many properties in common, e.g. an infinite number of conserved quantities, so-called
Bäcklund transformations to combine solutions into new solutions –despite the nonlinearity.
We refer to [2], [6] for more information.
96 Chapter 5: Inverse scattering method and the KdV equation
d
w(t) = Vt (t)w = B(t)V (t)w = B(t)w(t).
dt
Exercise 5.4.1. Check directly for the case m = 0, cf. Exercise 5.3.6, that the eigenfunction
for a discrete eigenvalue µ satisfies the appropriate differential equation at arbitrary time t.
Assume now that q(·, t) satisfies the conditions of Theorem 4.2.7 for all t, and assume that
the Schrödinger operator at time t = 0 has bound states. Then it follows that itphas the same
eigenvalues at all times t. It follows that ψn (x, 0) = Nn (0)fn (x, 0), Nn (0) = ρn (0), is an
eigenfunction of the Schrödinger operator at t = 0 of length 1. Hence V (t)ψn (·, 0) = ψn (·, t) is
an eigenfunction of the Schrödinger operator at t of length 1, since V (t) is unitary. Moreover,
ψn (·, t) is a multiple of fn (x, t), the solution
p of the Schrödinger integral equation at time t.
So ψn (x, t) = Nn (t)fn (x, t), Nn (t) = ρn (t) with the notation as in Theorem 4.2.7 with
time dependency explicitly denoted up to a phase factor. Let us also assume that q(·, t) is
compactly supported for all (fixed) t, then we know by Theorem 4.1.2, with notation as in
Theorem 4.2.7 that fn (x, t) = e−pn x for x sufficiently large, i.e. outside the support of q(·, t).
d3
For such x we have B(t) = −4 dx 3 with the version as in Exercise 5.3.7, so that the phase
±
with A(0) = 1, E(0) = R(γ), C(0) = 0, D(0) = T (γ), where f±γ (x, t) denote the Jost
solutions of the corresponding Schrödinger operator at time t. Moreover, assume that q(·, t) is
compactly supported for all (fixed) t, so that Ψγ (x, t) = A(t) e−iγx + E(t) eiγx for x sufficiently
d3
large. Again for such x we have B(t) = −4 dx 3 and so
d
Ψγ (x, t) = B(t) Ψγ (x, t) = −4(−iγ)3 A(t) e−iγx − 4(iγ)3 E(t) eiγx
dt
d dA dE
Ψγ (x, t) = (t) e−iγx + (t) eiγx .
dt dt dt
This gives dt (t) = −4iγ A(t), A(0) = 1 and dE
dA 3
dt
(t) = 4iγ 3 E(t), E(0) = R(γ), hence A(t) =
exp(−4iγ 3 t), E(t) = exp(4iγ 3 t)R(γ).
For x sufficiently negative we find Ψγ (x, t) = C(t) eiγx + D(t) e−iγx , and
d
Ψγ (x, t) = B(t) Ψγ (x, t) = −4(iγ)3 C(t) eiγx − 4(−iγ)3 D(t) e−iγx
dt
d dC dD
Ψγ (x, t) = (t) eiγx + (t) e−iγx .
dt dt dt
This gives dC
dt
(t) = 4iγ 3 C(t), C(0) = 0 and dD dt
(t) = −4iγ 3 D(t), D(0) = T (γ). Hence
3
C(t) = 0, D(t) = exp(−4iγ t)T (γ).
Combining this then gives
3 3 3
Ψγ (x, t) = e−4iγ t T (γ) fγ− (x, t) = e−4iγ t f−γ
+
(x, t) + e4iγ t R(γ) fγ+ (x).
If we now denote the corresponding time-dependent Jost solutions, transmission and reflection
coefficients by
ψγ (x, t) = T (γ, t) fγ− (x, t) = f−γ
+
(x, t) + R(γ, t) fγ+ (x, t),
3
we obtain e4iγ t Ψγ (x, t) = ψγ (x, t). This then immediately gives T (γ, t) = T (γ), or the
3
transmission coefficient is independent of time, and R(γ, t) = e8iγ t R(γ).
Theorem 5.4.2. Assume q(x, t) is a solution to the KdV-equation (5.1.1) such that for each
∂k q
t ≥ 0 the function q(·, t), t ≥ 0, satisfies the conditions of Theorem 4.2.7 and such that ∂x k (x, t)
is bounded for k ∈ {0, 1, 2, 3} as |x| → ∞ and lim|x|→∞ q(x, t) = lim|x|→∞ qx (x, t) = 0. Then
d2
the scattering data of L(t) = − dx 2 + q(·, t) satisfies
3
T (γ, t) = T (γ), R(γ, t) = e8iγ t R(γ), pn (t) = pn (0), ρn (t) = exp(8p3n t)ρn (0).
Note that the theorem in particular shows that the poles of the transmission coefficient T ,
hence the eigenvalues of the Schrödinger operator, are time-independent.
In case q(·, t) is of compact support, then the above argumentation leads to the statements
of Theorem 5.4.2. For the N -soliton solutions this is not the case, and we need Theorem 5.4.2
in the more general form. The idea of the proof is the same.
Note that Theorem 5.4.2 provides a solution scheme for the KdV-equation with initial
condition (5.1.2) as long as the initial condition q(·, 0) satisfies the conditions. Determine the
scattering data at time t, and next use the Gelfand-Levitan-Marchenko integral equation as
in Theorem 4.4.2 to determine q(x, t).
98 Chapter 5: Inverse scattering method and the KdV equation
for some constant C independent of x and t. Take the logarithm and deriving the resulting
0 2 00 (x)f (x)
expression twice and multiplying by −2 to find the potential; q(x, t) = 2 (f (x))(f−f
(x))2
, with
f (x) = det(I + S(x, t)). This shows that the potential q(x, t) is indeed independent of C.
Using a computer algebra system is handy, and in case one uses Maple, we find
5 cosh (−x + 28 t) cosh (−3 x + 36 t) + 3 − 3 sinh (−x + 28 t) sinh (−3 x + 36 t)
q(x, t) = −12 .
(3 cosh (−x + 28 t) + cosh (−3 x + 36 t))2
Using the addition formula cosh(x ± y) = cosh(x) cosh(y) ± sinh(x) sinh(y) we obtain
3 + 4 cosh(2x − 8t) + cosh(4x − 64t)
q(x, t) = −12 2 . (5.5.1)
3 cosh(x − 28t) + cosh(3x − 36t)
Chapter 5: Inverse scattering method and the KdV equation 99
as can be done easily in e.g. Maple. Compare this with Proposition 4.5.3. The solution (5.5.1)
is known as the pure 2-soliton solution.
So we have proved from Theorem 5.4.2 and the inverse scattering method described in
Sections 4.4, 4.5 the following Proposition.
Proposition 5.5.1. Equation (5.5.1) is a solution to the KdV-equation (5.1.1) with initial
condition q(x, 0) = cosh−62 (x) .
Having the solution (5.5.1) at hand, one can check by a direct computation, e.g. using a
computer algebra system like Maple, that it solves the KdV-equation. However, it should be
clear that such a solution cannot be guessed without prior knowledge.
In Figure 5.2 we have plotted the solution q(x, t) for different times t. It seems that q(·, t)
exists of two solitary waves for t 0 and t 0, and that the ‘biggest’ solitary wave travels
faster than the ‘smallest’ solitary wave, and the ‘biggest’ solitary wave overtakes the ‘smallest’
at t = 0. For t ≈ 0 the waves interact, and then the shapes are not affected by this interaction
when t grows. This phenomenon is typical for soliton solutions.
In order to see this from the explicit form (5.5.1) we introduce new variables x1 = x−4p21 t =
x − 4t and x2 = x − 4p22 t = x − 16t, so Figure 5.2 suggest that the ‘top’ of the ‘biggest’ or
‘fastest travelling’ solitary wave occurs for x2 = 0, and the top for the ‘smallest’ or ‘slowest
travelling’ solitary wave occurs for x1 = 0. So we see the ‘biggest’ solitary wave travel at four
times the speed of the ‘smallest’ solitary wave.
We first consider the ‘fastest travelling’ solitary wave, so we rewrite
From these asymptotic considerations, we see that the ‘biggest’ solitray wave undergoes a
phase-shift, meaning that its top is shifted by ln(3) forward during the interaction with the
‘smallest’ solitary wave.
100 Chapter 5: Inverse scattering method and the KdV equation
so that
1 48t−4x1
2
e −2
−12 t → ∞,
3 24t−x1 = ,
(2e + 12 e24t−3x1 )2 2
cosh (x1 + 12 ln(3))
q1 (x1 , t) ∼ 1 −48t+4x1
2
e −2
−12 , t → −∞.
3 −24t+x1 =
+ 12 e−24t+3x1 )2 2
cosh (x1 − 12 ln(3))
(2e
So the ‘smallest’ wave undergoes the same phase-shift, but in the opposite direction.
So we can conclude that
−2 −8
q(x, t) ∼ 2 + 2 , t → ±∞.
cosh (x − 4t ± 2 ln(3)) cosh (x − 16t ∓ 12 ln(3))
1
This also suggests that one can build new solutions out of known solutions, even though
the Korteweg-de Vries equation (5.1.1) is non-linear. This is indeed the case, we refer to [2],
[6] for more information.
Remark 5.5.2. It is more generally true that a solution which exhibits a solitary wave in
its solution for t 0, then the speed of this solitary wave equals −4λ for some λ < 0 in the
discrete spectrum of the corresponding Schrödinger operator.
Remark 5.5.3. It is clear that for N ≥ 3 the calculations become more and more cumbersome.
There is a more unified treatment of soliton solutions possible, using the so-called τ -functions.
For an introduction of these aspects of the KdV-equation in relation also to vertex algebras
one can consult [8].
Chapter 5: Inverse scattering method and the KdV equation 101
x x x
-20 -10 0 10 20 -20 -10 0 10 20 -20 -10 0 10 20
0 0 0
-2 -2 -2
y -4 y -4 y -4
-6 -6 -6
-8 -8 -8
x x x
-20 -10 0 10 20 -20 -10 0 10 20 -20 -10 0 10 20
0 0 0
-2 -2 -2
y -4 y -4 y -4
-6 -6 -6
-8 -8 -8
x x x
-20 -10 0 10 20 -20 -10 0 10 20 -20 -10 0 10 20
0 0 0
-2 -2 -2
y -4 y -4 y -4
-6 -6 -6
-8 -8 -8
Figure 5.2: N = 2-soliton solution for t = −1, t = −0.5, t = −0.2 (first row), t = −0.05,
t = 0, t = 0.05 (second row) and t = 0.2, t = 0.5, t = 1 (third row).
102 Chapter 5: Inverse scattering method and the KdV equation
Chapter 6
In this Chapter we recall several notions from functional analysis in order to fix notations and
to recall standard results. Most of the unproved results can be found in Lax [7], Werner [11] or
in course notes for a course in Functional Analysis. Some of the statements are equipped with
a proof, notably in Section 6.5 for the description of the spectrum and the essential spectrum.
• kxk = 0 ⇐⇒ x = 0,
such that X is complete with respect to the metric topology induced by d(x, y) = kx − yk.
Completeness means that any Cauchy2 sequence, i.e. a sequence {xn }∞ n=1 in X such that
∀ε > 0 ∃N ∈ N ∀n, m ≥ N kxn − xm k < ε, converges in X to some element x ∈ X , i.e. ∀ε > 0
∃N ∈ N ∀n ≥ N kxn − xk < ε. If xn → x in X , then kxn k → kxk by the reversed triangle
inequality |kxk − kyk| ≤ kx − yk.
An inner product on a vector space H is a mapping h·, ·i : H × H → C such that
• hαx + βy, zi = αhx, zi + βhy, zi ∀α, β ∈ C, ∀x, y, z ∈ H,
103
104 Chapter 6: Preliminaries and results from functional analysis
• Find a subsequence, say {yn }∞ n=1 such that hyn , xm i converges for all m as n → ∞.
Conclude that hyn , ui converges for all u in the linear span U of the elements xn , n ∈ N.
• Show that hyn , ui converges for all u in the closure U of the linear span U
• Define φ : H → C as φ(w) = limn→∞ hyn , wi. Show that φ is a continuous linear func-
tional.
3
David Hilbert (23 January 1862 —14 February 1943), German mathematician. Hilbert is well-known for
his list of problems presented at the ICM 1900, some still unsolved.
4
Hermann Amandus Schwarz (25 January 1843 — 30 November 1921), German mathematician.
5
Frigyes Riesz (22 January 1880 — 28 February 1956), Hungarian mathematician, who made many contri-
butions to functional analysis.
6
Pythagoras of Samos (approximately 569 BC – 475 BC), Greek philosopher.
Chapter 6: Preliminaries and results from functional analysis 105
• Use the Riesz representation theorem to find x ∈ H such that the subsequence {yn }∞
n=1
converges weakly to x.
R 1/p
This is a Banach space with respect to the norm kf kp = R |f (x)|p dx . The case p = ∞
∞
is L (R) = {f : R → C measurable | ess supx∈R |f (x)| < ∞}, which is a Banach space for
the norm kf k∞ = ess supx∈R |f (x)|. Here we follow the standard convention of identifying two
functions that are equal almost everywhere.
The case p = 2 gives a Hilbert space L2 (R) with inner product given by
Z
hf, gi = f (x) g(x) dx
R
Since we work mainly with the Hilbert space L2 (R), we put kf k = kf k2 . In this case the
Cauchy-Schwarz inequality states
Z Z 21 Z 21
2 2
f (x)g(x) dx ≤ |f (x)| dx |g(x)| dx ,
R R R
which is also known as Hölder’s inequality. The converse Hölder’s inequality states for mea-
surable function f such that
Z
sup f (x)g(x) dx = C < ∞,
R
R
where the supremum is taken over all functions g such that kgk ≤ 1 and R
f (x)g(x)dx exists,
we have f ∈ L2 (R) and kf k = C.
We recall some basic facts from Lebesgue’s integration theory. We also recall that for a
convergent sequence {fn }∞ p
n=1 to f in L (R), 1 ≤ p ≤ ∞ there exists a convergent subsequence
{fnk }∞
k=1 that converges pointwise almost everywhere to f .
The Dominated Convergence Theorem 6.1.3 is needed to show that the Lp (R) spaces are
complete. Another useful result is the following.
RTheorem 6.1.4 (Lebesgue’s Differentiation Theorem). For f ∈ L1 (R) its indefinite integral
x
−∞
f (y) dy is differentiable with derivative f (x) almost everywhere.
A Banach space is separable if there exists a denumerable dense set. The examples Lp (R),
1 ≤ p < ∞, are separable, but L∞ (R) is not separable. In these lecture notes the Hilbert
spaces are assumed to be separable.
Another example of a Banach space is C(R), the space of continuous functions f : R → C
with respect to the supremum norm, kf k∞ = supx∈R |f (x)|. The Arzelà-Ascoli8 theorem
states that M ⊂ C(R) with the properties (i) M bounded, (ii) M closed, (iii) M is uniformly
continuous (i.e. ∀ε > 0 ∃δ > 0 ∀f ∈ M : |x − y| < δ ⇒ |f (x) − f (y)| < ε), then M is compact
in C(R).
6.2 Operators
An operator is a linear map T from its domain D(T ), a linear subspace of a Banach space X ,
to a Banach space Y, denoted by T : X ⊃ D(T ) → Y or by T : D(T ) → Y or by (T, D(T )) if
X and Y are clear from the context, or simply by T . We say that (T, D(T )) is an extension of
(S, D(S)), in notation S ⊂ T , if D(S) ⊂ D(T ) and Sx = T x for all x ∈ D(S). Two operators
S and T are equal if S ⊂ T and T ⊂ S, so that in particular the domains have to be equal.
By Ker(T ) we denote the kernel of T , Ker(T ) = {x ∈ D(T ) | T x = 0}, and by Ran(T ) we
denote its range, Ran(T ) = {y ∈ Y | ∃x ∈ D(T ) such that T x = y}. The graph norm on
D(T ) is given by kxkT = kxk + kT xk.
The operator (T, D(T )) is densely defined if the closure of its domain is the whole Banach
space, D(T ) = X . We define the sum of two operators S and T by (S + T )x = Sx + T x for
x ∈ D(T + S) = D(T ) ∩ D(S), and the composition is defined as (ST )x = S(T x) with domain
D(ST ) = {x ∈ D(T ) | T x ∈ D(S)}. Note that it might happen that the domains of the sum
or composition are trivial, even if S and T are densely defined.
A linear operator T : X → Y is continuous if and only if T is bounded, i.e. there exists a
constant C such that kT xk ≤ Ckxk for all x ∈ X . The operator norm is defined by
kT xk
kT k = sup
x∈X kxk
8
Cesare Arzelà (6 March 1847 — 15 March 1912), Italian mathematician. Ascoli Italian mathematician.
Chapter 6: Preliminaries and results from functional analysis 107
where 1 ∈ B(X ) is the identity operator x 7→ x. For Hilbert spaces we use apart from the
operator norm on B(H) also the strong operator topology, in which Tn →T if Tn x → T x
for all x ∈ H, and the weak operator topology, in which Tn → T if hTn x, yi → hT x, yi for
all x, y ∈ H. Note that convergence in operator norm implies convergence in strong operator
topology, which in turn implies convergence in weak operator topology.
For T ∈ B(H), H Hilbert space, the adjoint operator T ∗ ∈ B(H) is defined by hT x, yi =
hx, T ∗ yi for all x, y ∈ H. We call T ∈ B(H) a self-adjoint (bounded) operator if T ∗ = T . If a
self-adjoint operator T satisfies hT x, xi ≥ 0 for all x ∈ H, then T is a positive operator, T ≥ 0.
T ∈ B(H) a unitary operator if T ∗ T = 1 = T T ∗ . An isometry is an operator T ∈ B(H), which
preserves norms; kT xk = kxk for all x ∈ H. A surjective isometry is unitary. An orthogonal
projection is a self-adjoint operator P ∈ B(H) such that P 2 = P , so that P projects onto
Ran(P ), a closed subspace of H, or P |Ran(P ) is the identity and Ker(P ) = Ran(P )⊥ , the
orthogonal complement. In particular P is a positive operator. A partial isometry U ∈ B(H)
is an element such that U U ∗ and U ∗ U are orthogonal projections. The range of the projection
U ∗ U is the inital subspace, say D, and the range of the projection U U ∗ is the final subspace,
say R, and we can consider U as a unitary map from D to R.
An operator T : H ⊃ D(T ) → H is symmetric if hT x, yi = hx, T yi for all x, y ∈ D(T ). For
a densely defined operator (T, D(T )) we define
such that S extends T , T ⊂ S. The smallest (with respect to extensions) closed operator of a
closable operator is its closure, denoted (T̄ , D(T̄ )). For closed operator (T, D(T )) its domain
is complete with respect to the graph norm k · kT .
Theorem 6.2.1 (Closed Graph Theorem). A closed operator (T, D(T )) with D(T ) = X is
bounded, T ∈ B(X ).
A closed operator (T, D(T )) from one Hilbert space H1 to another Hilbert space H2 has
a polar decomposition, i.e. T = U |T |, where D(|T |) = D(T ) and (|T |, D(|T |)) is self-adjoint
operator on H1 and U : H1 → H2 is a partial isometry with initial space (Ker T )⊥ and final
space Ran T . The condition Ker T = Ker |T | determine U and |T | uniquely.
For an operator on a Hilbert space H we have that (T, D(T )) is closable if and only its
adjoint (T ∗ , D(T ∗ )) is densely defined and then its closure is (T ∗∗ , D(T ∗∗ )). In particular,
any densely defined symmetric operator is closable, its closure being T ∗∗ . In particular, if
(T, D(T )) is self-adjoint, then it is a closed operator. For a closed, densely defined operator
(T, D(T )) on a Hilbert space H, the linear space D(T ) is a Hilbert space with respect to
hx, yiT = hx, yi + hT x, T yi. The corresponding norm is the graph norm.
The following lemma gives necessary and sufficient conditions for a densely defined, closed,
symmetric operator (T, D(T )) to be a self-adjoint operator on the Hilbert space H.
Lemma 6.2.2. Let (T, D(T )) be densely defined, symmetric operator, then the following are
equivalent.
In case (T, D(T )) is closed, the spaces Ran(T ± i), Ran(T − z) are closed.
In fact for a closed, densely defined, symmetric operator T the dimension of Ker(T ∗ − z)
and Ran(T − z) is constant in the upper half plane =z > 0 and in the lower half plane =z < 0.
A densely defined, symmetric operator (T, D(T )) whose closure is self-adjoint, is known
as an essentially self-adjoint operator.
Sketch of proof. We prove Lemma 6.2.2 in case T is closed and for the equivalence of the first
three assumptions. Note that 5 ⇒ 3 and 4 ⇒ 2.
First, for y ∈ Ran(T + z)⊥ we have hT x, yi + zhx, yi = h(T + z)x, yi = 0 for all x ∈ D(T ),
so that in particular by definition y ∈ D(T ∗ ) and h(T ∗ + z̄)y, xi = 0 for all x ∈ D(T ). Since
D(T ) is dense, it follows that (T ∗ + z̄)y = 0 or y ∈ Ker(T ∗ + z̄). The reverse conclusion can
be obtained by reversing the argument. This shows that 2 ⇔ 3, and also 4 ⇔ 5.
Chapter 6: Preliminaries and results from functional analysis 109
k(T + i)xk2 = h(T + i)x, (T + i)xi = kT xk2 + kxk2 + 2<hT x, ixi = kT xk2 + kxk2 ≥ kxk2 .
• Take H = L2 (R) and D(T ) = Cc1 (R), the space of continuously differentiable functions
with compact support. You may assume that Cc1 (R) is dense in L2 (R). Show that
(T, D(T )) is essentially self-adjoint.
• Take H = L2 ([0, ∞]), and D(T ) = Cc1 (0, ∞), the space of continuously differentiable
functions with compact support in (0, ∞). You may assume that Cc1 (0, ∞) is dense in
L2 ([0, ∞]). Show that T is not essentially self-adjoint.
In this case one can show that there doesn’t exist a self-adjoint (S, D(S)) such that
T ⊂ S, or T has no self-adjoint extensions.
110 Chapter 6: Preliminaries and results from functional analysis
Lemma 6.2.4. Let (T, D(T )) be a closed, densely defined operator on H. Then T ∗ T with its
domain D(T ∗ T ) = {x ∈ D(T ) | T x ∈ D(T ∗ )} is a densely defined self-adjoint operator on H.
Moreover, this operator is positive and its spectrum σ(T ∗ T ) ⊂ [0, ∞).
See Section 6.4 for the definition of the spectrum. We only apply Lemma 6.2.4 in case T
is a self-adjoint operator, and in this case Lemma 6.2.4 follows from the Spectral Theorem
6.4.1.
If f 0 , f ∈ L1 (R) we have, by integration by parts, F(f 0 )(λ) = iλ(Ff )(λ), so the Fourier
transform intertwines differentiation and multiplication. The Riemann10 -Lebesgue lemma
states that F : L1 (R) → C0 (R), where C0 (R) is the space of continuous functions on R that
vanish at infinity. For f ∈ L1 (R) such that Ff ∈ L1 (R) we have the Fourier inversion formula
1
R
f (x) = √2π R (Ff )(λ) eiλx dλ. We put
Z
1
F f (λ) = fˇ(λ) = √
−1
f (x) eiλx dx.
2π R
Rn
For f ∈ L2 (R) ∩ L1 (R) one can show that −n f (x)e±iλx dx converges in L2 (R)-norm as
n → ∞. This defines F, F −1 : L2 (R) → L2 (R), and then Parseval’s11 identity holds;
We can also define the Sobolev space using weak derivatives . We say that f ∈ L2 (R) has
p
a weak derivative of order p ∈ N in case Cc∞ (R) 3 φ 7→ (−1)p hf, ddxφp i is continuous (as a
functional on the Hilbert space L2 (R)). Here Cc∞ (R) is the space of infinitely many times
differentiable functions having compact support. This space is dense in Lp (R), 1 ≤ p < ∞.
p
By the Riesz representation theorem there exists g ∈ L2 (R) such that hg, φi = (−1)p hf, ddxφp i,
and we define the p-th weak derivative of f as Dp f = g. Then one can show that
W m (R) = {f ∈ L2 (R) | Dp f ∈ L2 (R) exists, ∀p ∈ {0, 1, 2, · · · , m}},
m
X
hf, giW m (R) = hDp f, Dp giL2 (R)
p=0
i.e. the L2 -norm of F is the H2+ -norm of f , and limb↓0 f (· + ib) = F −1 F in L2 (R).
Proof. First, if f is the inverse Fourier transform of F supported on [0, ∞), then f is an
analytic function in the open upper half plane since, writing z = a + ib,
Z ∞
1
f (z) = f (a + ib) = √ F (λ)e−bλ eiaλ dλ
2π 0
R
and this integral converges absolutely in the open upper half plane. Then C f (z) dz = 0 for
any closed curve in the open upper half plane by interchanging integrations and eiλz being
analytic. So f is analytic by Morera’s theorem. By the Plancherel identity we have
and
Z ∞
−1 −1 −bλ
kf (· + ib) − F 2
F k = kF λ 7→ (1 − e 2
)F (λ) k = (1 − e−bλ )2 |F (λ)|2 dλ → 0,
0
ebλ
Z Z Z
−iλx 1 −iλ(x+ib) 1
bλ
e F(fb )(λ) = √ e fb (x) dx = √ e fb (x) dx = √ e−iλz f (z) dz
2π R 2π R 2π =z=b
We claim that this expression is independent of b > 0 as functions in L2 (R). Assuming this
claim is true, we define F (λ) = ebλ F(fb )(λ), then
Z
e−2bλ |F (λ)|2 dλ = ke−bλ F k2 = kFfb k2 = kfb k2 ≤ C 2
R
independent of b. From this we can observe the following; (i) F (λ) = 0 for λ < 0 (almost
everywhere) by considering b → ∞, (ii) F ∈ L2 (R) by taking the limit b ↓ 0, and (iii), as
identity in L2 (R),
Z ∞
−1
1
f (z) = f (a + ib) = fb (a) = F Ffb (a) = √ eiλa (Ffb )(λ) dλ
Z ∞ Z ∞ 2π 0
1 1
=√ eiλa e−bλ F (λ) dλ = √ eiλz F (λ) dλ.
2π 0 2π 0
Chapter 6: Preliminaries and results from functional analysis 113
The proof of theR claim is as follows. Let Cα , α > 0, be the rectangle with vertices at ±α + i
and ±α + ib, then Cα e−iλz f (z) dz = 0 by analyticity of the integrand in the open upper half
R α+ib
plane. Put I(α) = α+i e−iλz f (z) dz in case b > 1 (the other case is similar). Then
Z b Z b Z b
2 −iλ(α+it) 2 2
|I(α)| = |i e f (α + it) dt| ≤ |f (α + it)| dt e2tλ dt,
1 1 1
By the Plancherel theorem we have limn→∞ kF(fb ) − gn (b, ·)k = 0, so that we can find a
subsequence of {gn (b, ·)}∞n=1 that converges pointwise to F(fb ) almost everywhere, and by
restricting the previous limit to this subsequence we see that ebλ F(fb )(λ) is independent of b
as claimed.
In the last part of the proof we have used the fact that if {fn } is a Cauchy sequence in
L (R) converging to f ∈ L2 (R), then there exists a subsequence {fnk } such that fnk → f ,
2
The Borel17 sets on R are the smallest σ-algebra that contain all open intervals. A (positive)
measure is a map µ : M → [0, ∞] such that µ P is countably additive, i.e. for all sets Ai ∈ M
∞
with Ai ∩Aj = ∅ for i 6= j we have µ(∪∞ A
i=1 i ) = i=1 µ(Ai ). A complex measure is a countably
additive map µ : M → C.
For the Spectral Theorem 6.4.1 we need the notion of spectral measure, which is a orthog-
onal projection valued measure on the Borel sets of R. So, denoting the spectral measure by
E, this means that for any Borel set A ⊂ R, E(A) ∈ B(H) is an orthogonal projection such
that E(∅) = 0, E(R) = 1 (the identity element of B(H)) and for pairwise disjoint Borel sets
A1 , A2 , · · · we have
X∞
E(Ai ) x = E(∪∞ i=1 Ai ) x, ∀ x ∈ H.
i=1
and the support of the complex measure Ex,y is contained in the spectrum σ(T ). For any
bounded measurable function f there is a uniquely defined operator f (T ) defined by
Z
hf (T )x, yi = f (λ) dEx,y (λ),
R
such that the map B(R) 3 f 7→ f (T ) ∈ B(H) is a ∗-algebra homomorphism from the space
B(R) of bounded measurable functions to the space of bounded linear operators, i.e. (f g)(T ) =
f (T )g(T ), (af + bg)(T ) = af (T ) + bg(T ), f¯(T ) = f (T )∗ , for all f, g ∈ B(R), a, b ∈ C, and
where f¯(x) = f (x). Moreover, for a measurable real-valued function f : R → R define
Z
D = {x ∈ H | |f (λ)|2 dEx,x (λ) < ∞},
Z R
hf (T )x, yi = f (λ) dEx,y (λ),
R
The map f 7→ f (T ) is known as the functional calculus for bounded measurable functions.
This can be extended to unbounded measurable functions. All integrals can be restricted to
the spectrum σ(T ), i.e. the spectral measure is supported on the spectrum.
The Spectral Theorem 6.4.1 in particular characterises the domain in terms of the spectral
measure, since Z
D(T ) = {x ∈ H | λ2 dEx,x (λ) < ∞}.
R
2
R 2
Note also that kf (T )xk = R |f (λ)| dEx,x (λ) ≥ 0 since Ex,x is a positive measure.
In particular, it follows from the Spectral Theorem 6.4.1, that for a self-adjoint (T, D(T ))
on the Hilbert space H and the function exp(−itx), t ∈ R, we get a bounded operator U (t) =
exp(−itT ). From the functional calculus it follows immediately that U (t + s) = U (t)U (s),
U (0) = 1, U (t)∗ = U (−t), so that t 7→ U (t) is a 1-parameter group of unitary operators in
B(H). Stone’s18 Theorem 6.4.2 states that the converse is also valid.
U (t)x − x U (t)x − x
D(T ) = {x ∈ H | lim converges} T x = i lim , x ∈ D(T ).
t→0 t t→0 t
U (t) as in Stone’s Theorem 6.4.2 is called a strongly continuous 1-parameter group of
unitary operators.
Theorem 6.5.1. Let (T, D(T )) be a self-adjoint operator on a Hilbert space H, and λ ∈ R.
Then λ ∈ σ(T ) if and only if the following condition holds: there exists a sequence {xn }∞
n=1
in D(T ) such that
1. kxn k = 1,
2. lim (T − λ)xn = 0.
n→∞
18
Marshall Harvey Stone (8 April 1903 — 9 January 1989), American mathematician.
116 Chapter 6: Preliminaries and results from functional analysis
Proof. Let us first assume that such a sequence exists. We have to prove that λ ∈ σ(T ).
Suppose not, then λ is element of the resolvent, and so there exists R(λ; T ) ∈ B(H) such that
R(λ; T )(T − λ) ⊂ 1. So
and since kR(λ; T )k is independent of n, the right hand side can be made arbitrarily small
since (T − λ)xn → 0. This is a contradiction, so that λ ∈ / ρ(T ), or λ ∈ σ(T ). Note that this
implication is independent of (T, D(T )) being self-adjoint, so we have proved that in general
the approximate point spectrum is part of the spectrum.
We prove the converse statement by negating it. So we assume such a sequence does not
exist, and we have to show that λ ∈ ρ(T ). First we claim that there exists a C > 0 such that
To see why (6.5.1) is true, we assume that there exists a sequence {yn }∞
n=1 in D(T ) such that
kyn k
→ ∞, n → ∞.
k(T − λ)yn k
h(T − λ)z, xi = lim h(T − λ)z, xn i = lim hz, (T − λ)xn i = lim hz, yn i = hz, yi
n→∞ n→∞ n→∞
since xn ∈ D(T ∗ ) = D(T ) and λ ∈ R. This implies by definition that x ∈ D((T − λ)∗ ) =
D(T ∗ ) = D(T ) and that (T − λ)∗ x = y, or (T − λ)x = y, or y ∈ Ran(T − λ).
Summarising, T −λ is an injective map onto the closed subspace Ran(T −λ). We now show
that Ran(T − λ) is the whole Hilbert space H. Fix an element z ∈ H. We define a functional
Chapter 6: Preliminaries and results from functional analysis 117
|φ(y)| = |hx, zi| ≤ kxk kzk ≤ C k(T − λ)xk kzk = C kyk kzk
or φ is a continuous linear functional on Ran(T −λ) as a closed subspace of H, so that the Riesz
representation theorem gives φ(y) = hy, wi for some w ∈ H. This is h(T − λ)x, wi = hx, zi
and since this is true for all x ∈ D(T ) we see that w ∈ D((T − λ)∗ ) = D(T ∗ ) = D(T ) and
z = (T − λ)∗ w = (T − λ)w since T is self-adjoint and λ ∈ R. This shows that z ∈ Ran(T − λ),
and since z ∈ H is arbitrary, it follows that Ran(T − λ) = H as claimed.
So now we have T − λ as an injective map from D(T ) to H = Ran(T − λ), and we can
define its inverse (T − λ)−1 as a map from H to D(T ). Now (6.5.1) implies that (T − λ)−1 is
bounded, hence (T − λ)−1 ∈ B(H) and so λ ∈ ρ(T ).
The Spectral Theorem 6.4.1 for the case of a compact operator is well-known, and we recall
it.
Theorem 6.5.2 (Spectral theorem for compact operators). Let K ∈ K(H) be self-adjoint.
Then its spectrum is a denumerable set in R with 0 as the only possible point of accumulation.
Each non-zero point in the spectrum is an eigenvalue of K, and the corresponding eigenspace is
finite dimensional. Denoting the spectrum by |λ1 | ≥ |λ2 | ≥ |λ3 | · · · (each λ ∈ σ(K) occurring
as many times as dim(Ker(K − λ))) and {ei }i≥1 , kei k = 1, the corresponding eigenvectors,
then M X
H = Ker(K) ⊕ C ei , Kx = λi hx, ei i ei .
i≥1 i≥1
The essential spectrum of a self-adjoint operator is the part of the spectrum that is not
influenced by compact perturbations. Suppose e.g. that λ ∈ σp (T ) with corresponding eigen-
vector e such that the self-adjoint operator x 7→ T x − λhx, eie has λ in its resolvent, then
we see that this part of the spectrum is influenced by a compact (in this case even rank-one)
perturbation. The definition of the essential spectrum is as follows. The essential spectrum
is also known as the Weyl spectrum.
Definition 6.5.3. Let (T, D(T )) be a self-adjoint operator on a Hilbert space H, then the
essential spectrum is \
σess (T ) = σ(T + K).
K∈K(H), K ∗ =K
Obviously, σess (T ) ⊂ σ(T ) by taking K equal to the zero operator. Also note that the
essential spectrum only makes sense for infinite dimensional Hilbert spaces. For H finite
dimensional the essential spectrum of any self-adjoint operator is empty.
Theorem 6.5.4. Let (T, D(T )) be a self-adjoint operator on a Hilbert space H, and take
λ ∈ R. Then the following statements are equivalent:
1. λ ∈ σess (T );
118 Chapter 6: Preliminaries and results from functional analysis
• kxn k = 1,
• {xn }∞
n=1 has no convergent subsequence,
• (T − λ)xn → 0;
3. there exists a sequence {xn }∞
n=1 in the domain D(T ) such that
• kxn k = 1,
• xn → 0 weakly,
• (T − λ)xn → 0.
Proof. (3) ⇒ (2): Assume that {xn }∞ n=1 has a convergent subsequence, we relabel and we can
assume that the sequence converges to x. Since kxn k = 1, we have kxk = 1. Since xn → x
we also have xn → x weakly, and since also xn → 0 weakly we must have x = 0 contradicting
kxk = 1.
(2) ⇒ (3): Since kxn k = 1 is a bounded sequence, it has a weakly convergent subsequence,
which, by relabeling, may be assumed to be the original sequence, see Exercise 6.1.1. Denote
its weak limit by x. Since by assumption there is no convergent subsequence, there exists
δ > 0 so that kxn − xk ≥ δ for all n ∈ N (by switching to a subsequence if necessary). Since
the sequence is from D(T ), the self-adjointness implies
h(T − λ)y, xn i = hy, (T − λ)xn i
for all y ∈ D(T ). Since (T − λ)xn → 0 and xn → x weakly we find by taking n → ∞ that
h(T − λ)y, xi = 0 for all y ∈ D(T ). Since this is obviously continuous as a functional in y, we
see that x ∈ D((T − λ)∗ ) = D(T ), since T is self-adjoint, and that (T − λ)∗ x = (T − λ)x = 0.
We now define zn = (xn − x)/kxn − xk, which can be done since kxn − xk ≥ δ > 0. Then
by construction kzn k = 1 and zn → 0 weakly. Moreover, since x ∈ Ker(T − λ),
1 1
k(T − λ)zn k = k(T − λ)xn k ≤ k(T − λ)xn k → 0.
kxn − xk δ
So the sequence {zn }∞
n=1 satisfies the conditions of (3).
(2) ⇒ (1): We prove the negation of this statement. Assume λ ∈
/ σess (T ), and let K be a
self-adjoint compact operator such that λ ∈ ρ(T + K). So in particular R(λ; T + K) : H →
D(T + K) = D(T ) is bounded, or
kR(λ; T + K)xk ≤ kR(λ; T + K)k kxk
=⇒ kyk ≤ kR(λ; T + K)k k(T + K − λ)yk
by switching x to (T +K −λ)y, y ∈ D(T ) arbitrary. Now take any sequence {xn }∞n=1 satisfying
the first and last condition of (2). Then by the above observation, with C = kR(λ; T + K)k,
kxn − xm k ≤ C k(T + K − λ)(xn − xm )k
≤ C k(T − λ)xn k + C k(T − λ)xm k + C kK(xn − xm )k
Chapter 6: Preliminaries and results from functional analysis 119
and the first two terms on the right hand side tend to zero by assumption. Since K is compact,
and the sequence {x}∞ ∞
n=1 is bounded by 1 there is a convergent subsequence of {Kxn }n=1 . By
∞
relabeling we may assume that the sequence {Kxn }n=1 is convergent, and then the final term
on the right hand side tends to zero. This means that {xn }∞ n=1 is a Cauchy sequence, hence
convergent. So the three conditions in (2) cannot hold.
(1) ⇒ (2) We consider two possibilities; dim Ker(T − λ) < ∞ or dim Ker(T − λ) = ∞.
In the last case we pick an orthonormal sequence {xn }∞ n=1 in Ker(T − λ) ⊂ D(T ). Then
obviously, kxn k = 1 and (T − λ)x√n = 0 for all n, and this sequence cannot have a convergent
subsequence, since kxn − xm k = 2 by the Pythagorean theorem, cf. Section 6.1.
In case Ker(T − λ) is finite-dimensional, we claim that there exists a sequence {xn }∞ n=1
in D(T ) such that xn ⊥ Ker(T − λ), kxn k = 1 and (T − λ)xn → 0. In order to prove the
claim we first observe that D(T ) ∩ Ker(T − λ)⊥ is dense in Ker(T − λ)⊥ . Indeed, let P denote
the orthogonal projection on Ker(T − λ) and take arbitrary x ∈ Ker(T − λ)⊥ , ε > 0, so that
∃y ∈ D(T ) such that kx − yk < ε. Write y = P y + (1 − P )y, then P y ∈ Ker(T − λ) ⊂
D(T ) so that with y ∈ D(T ) and D(T ) being a linear space, also (1 − P )y ∈ D(T ). Now
kx − (1 − P )yk = k(1 − P )(x − y)k ≤ kx − yk < ε, so the density follows.
In order to see why the claim in the previous paragraph is true, note that P is self-adjoint
and, since this is a finite rank operator, P is compact. Note that T +P is a self-adjoint operator.
If such a sequence would not exist, then we can conclude, by restricting to Ker(T − λ)⊥ , as in
the proof of Theorem 6.5.1, cf. (6.5.1), that there exists a C such that for all x ⊥ Ker(T − λ),
x ∈ D(T ) we have kxk ≤ Ck(T − λ)xk. For x ∈ D(T ) ∩ Ker(T − λ)⊥ we have
On the other hand, (T −λ)xn → 0, and since T is self-adjoint, hence closed, we see that T −λ is
closed. This means that x ∈ D(T ) and x ∈ Ker(T −λ). So x ∈ Ker(T −λ)∩Ker(T −λ)⊥ = {0},
which contradicts kxk = 1.
The essential spectrum can be obtained from the spectrum by “throwing out the point
spectrum”. Below we mean by an isolated point λ of a subset σ of R that there does not
exists a sequence {λn }∞
n=1 of points in σ such that λn → λ in R.
Theorem 6.5.5. Let (T, D(T )) be a self-adjoint operator on a Hilbert space H. Then we have
Proof. To prove the first statement assume that λ is not isolated in σ(T ), so there exists a
sequence {λn }∞
n=1 , λn 6= λ, λn ∈ σ(T ) and λn → λ. By invoking Theorem 6.5.1 we can find
for each n ∈ N an element xn ∈ H such that kxn k = 1 and k(T − λn )xn k < n1 |λ − λn | for
n ∈ N. Then
1
k(T − λ)xn k ≤ k(T − λn )xn k + |λ − λn | kxk < (1 + ) |λ − λn | → 0, n → ∞,
n
In order to show that λ ∈ σess (T ), we can use Theorem 6.5.4. It suffices to show that the
constructed sequence {xn }∞
n=1 has no convergent subsequence. So suppose it does have a
convergent subsequence, again denoted by {xn }∞ n=1 , say xn → x. Then kxk = 1, and by
closedness of T − λ, x ∈ D(T ) and x ∈ Ker(T − λ). On the other hand,
|(λ − λn )| |hxn , xi| = |hxn , λxi − λn hxn , xi| = |hxn , T xi − λn hxn , xi|
1
= |h(T − λn )xn , xi| ≤ |λ − λn |
n
and since λ 6= λn we see that hxn , xi → 0. But by assumption hxn , xi → kxk2 = 1, so this
give the required contradiction.
For the second statement we use Theorem 6.5.1 to get a sequence {xn }∞ n=1 such that
kxn k = 1 and (T − λ)xn → 0. Since λ ∈ / σess (T ), Theorem 6.5.4 implies that this sequence
must have a convergent subsequence, again denoted {xn }∞ n=1 with xn → x. Since T is self-
adjoint, T − λ is closed and hence x ∈ D(T ) and (T − λ)x = 0. Now kxk = 1 shows that
Ker(T − λ) is non-trivial, and hence λ ∈ σp (T ).
Recall the definition of reducing spaces as in Definition 3.2.10 for the second statement.
Using Theorem 6.6.2(2) we can define the absolutely continuous spectrum of a self-adjoint
operator (T, D(T )) as σac (T ) = σ(T Hac ) and the singular spectrum σs (T ) = σ(T Hs ). Using
Theorem 6.6.2 it follows that σ(T ) = σac (T ) ∪ σs (T ).
Proof. First observe that, using hE(B)x, yi = hE(B)x, E(B)yi since E(B) is self-adjoint
projection, we have
|hE(B)x, yi|2 ≤ hE(B)x, xi hE(B)y, yi (6.6.1)
by the Cauchy-Schwarz inequality (6.1.1). Now (6.6.1) shows that for x ∈ Hac and any Borel
set B with λ(B) = 0 we also have hE(B)x, yi = 0, or Ex,y is absolutely continuous. If
x ∈ Hs we have a Borel set B such that λ(B) = 0 and Ex,x (B c ) = 0, and (6.6.1) implies that
Ex,y (B c ) = 0 as well, so Ex,y is singular as well.
Observe that Ecx,cx = |c|² Ex,x for any c ∈ C, so that with x also cx is in Hac, respectively Hs. Next let x, y ∈ Hac, then for any Borel set B

Ex+y,x+y(B) = Ex,x(B) + Ey,y(B) + Ex,y(B) + Ey,x(B),

and the first two measures are absolutely continuous by assumption, and the last two are absolutely continuous by the reasoning in the previous paragraph. So x + y ∈ Hac. Similarly, Hs is a linear space.
To see that Hac ⊥ Hs take x ∈ Hac , y ∈ Hs and consider the measure Ex,y . This measure is
absolutely continuous, since x ∈ Hac , and singular, since y ∈ Hs , so it has to be the measure
identically equal to zero. So hE(B)x, yi = 0 for all Borel sets B, and taking B = R and
recalling E(R) = 1 in B(H), we see that hx, yi = 0.
To finish the proof of the first statement we show that any element, say z ∈ H, can be
written as z = x + y with x ∈ Hac and y ∈ Hs . Use the Lebesgue decomposition theorem to
write Ez,z = µac + µs, and let Z be a Borel set such that λ(Z) = 0 and µs(Z^c) = 0. Put
x = E(Z c )z, y = E(Z)z, and it remains to show that these elements are the required ones.
First, x + y = E(Z)z + E(Z^c)z = E(R)z = z by additivity of the spectral measure. Next, for an arbitrary Borel set B,

Ex,x(B) = hE(B)E(Z^c)z, E(Z^c)zi = Ez,z(B ∩ Z^c) = µac(B ∩ Z^c),

which vanishes whenever λ(B) = 0, so Ex,x is absolutely continuous and x ∈ Hac. Similarly, Ey,y(B) = Ez,z(B ∩ Z) = µs(B ∩ Z), since µac(B ∩ Z) ≤ µac(Z) = 0, so Ey,y is concentrated on the Lebesgue null set Z and y ∈ Hs. This proves the first statement.

For the second statement, take z ∈ D(T) and decompose z = x + y as above. Then

∫_R λ² dEx,x(λ) = ∫_{Z^c} λ² dEz,z(λ) ≤ ∫_R λ² dEz,z(λ) < ∞,

implying that x ∈ D(T) using the Spectral Theorem 6.4.1. Similarly, y ∈ D(T). For x ∈ D(T) ∩ Hac, y ∈ H, we have, again by the Spectral Theorem 6.4.1,
ET x,y(B) = ∫_B λ dEx,y(λ), (6.6.2)
so that ET x,y is absolutely continuous with respect to Ex,y , in fact its Radon-Nikodym deriva-
tive is λ. Since Ex,y is absolutely continuous, it follows that ET x,T x –take y = T x– is absolutely
continuous, hence T x ∈ Hac . We can reason in a similar way for the singular subspace Hs , or
we can use the first statement and T being self-adjoint to see that T : D(T ) ∩ Hs → Hs .
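To see the subspaces Hac and Hs in a concrete case, one may consider, for instance, the multiplication operator on L²(R); the following is only a sketch under that choice of operator.

\[
  (Tf)(\lambda) = \lambda\, f(\lambda), \qquad
  D(T) = \{\, f \in L^2(\mathbb{R}) : \lambda f \in L^2(\mathbb{R}) \,\}.
\]
% The spectral measure is E(B)f = 1_B f, so that for f, g ∈ L²(R)
\[
  E_{f,g}(B) = \langle E(B) f, g \rangle = \int_B f(\lambda)\, \overline{g(\lambda)}\, d\lambda ,
\]
% which is absolutely continuous with respect to the Lebesgue measure for every f.
% Hence H_ac = L²(R), H_s = {0}, and σ_ac(T) = σ(T) = R, while σ_s(T) = ∅.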
If T has an eigenvector x for the eigenvalue λ, then Ex,x is a measure with Ex,x({λ}) > 0 and Ex,x(R\{λ}) = 0, hence singular. In particular, all eigenspaces are contained in Hs. Define Hpp = Hpp(T) as the closed linear span of all eigenvectors of T, so that Hpp ⊂ Hs. Again, Hpp reduces T. We denote by Hcs = Hcs(T) the orthogonal complement of Hpp in Hs; then Hcs also reduces T. So we get the decomposition
H = Hac ⊕ Hpp ⊕ Hcs.

Here ‘pp’ stands for ‘pure point’ and ‘cs’ for ‘continuous singular’. Since these spaces reduce T, we have corresponding spectra, σpp(T) = σ(T|Hpp) and σcs(T) = σ(T|Hcs). In general, σpp(T) is not equal to σp(T), but σpp(T) is the closure of σp(T).
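As an example of the difference between σp(T) and σpp(T), one may take, for instance, a diagonal operator whose eigenvalues enumerate the rationals in [0, 1]; the enumeration (q_k) below is an arbitrary illustrative choice and the details are only sketched.

% let (q_k)_{k≥1} be an enumeration of Q ∩ [0,1] and define on ℓ²(N)
\[
  (Tx)_k = q_k\, x_k , \qquad x = (x_k)_{k\ge 1} \in \ell^2(\mathbb{N}).
\]
% Every basis vector e_k is an eigenvector, so H_pp = ℓ²(N) and H_ac = H_cs = {0}, while
\[
  \sigma_p(T) = \mathbb{Q} \cap [0,1], \qquad
  \sigma_{pp}(T) = \overline{\sigma_p(T)} = [0,1] = \sigma(T).
\]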
Bibliography
[2] F. Calogero and A. Degasperis, Spectral Transform and Solitons I, North Holland, 1982.
[3] P. Deift and E. Trubowitz, Inverse scattering on the line, Commun. Pure and Applied
Math. 32 (1979), 121–251.
[4] W. Eckhaus and A. van Harten, The Inverse Scattering Transformation and the Theory
of Solitons, North Holland, 1981.
[5] C.S. Gardner, J.M. Greene, M.D. Kruskal, R.M. Miura, Method for solving the Korteweg-
de Vries equation, Physical Review Letters 19 (1967), 1095–1097.
[6] E.M. de Jager, Mathematical introduction to the theory of solitons, pp. 355–617 in Math-
ematical Structures in Continuous Dynamical Systems (E. van Groesen, E.M. de Jager
eds.), North Holland, 1994.
[8] T. Miwa, M. Jimbo and E. Date, Solitons: Differential Equations, Symmetries and Infi-
nite Dimensional Algebras, Cambridge Univ. Press, 2000.
[9] M. Reed and B. Simon, Methods of Modern Mathematical Physics III: Scattering Theory,
Academic Press, 1979.
[10] M. Schechter, Operator Methods in Quantum Mechanics, North Holland, 1981 (reprinted
by Dover).
Index
T-bound, 9
T-compact operator, 12
time-dependent Schrödinger equation, 27
transmission coefficient, 59
T-smooth operator, 40
wave operator, 29
wave operator, generalised, 38
weak asymptotic completeness, 39
weak compactness, 104
weak convergence, 104
weak derivative, 111
weak operator topology, 107
Weyl spectrum, 117
Wronskian, 24, 25, 57