Introduction to Partial Differential Equations
802635S Lecture Notes, 3rd Edition
Valeriy Serov
University of Oulu, 2011
0 Preliminaries

We consider Euclidean space $\mathbb{R}^n$, $n \ge 1$, with elements $x = (x_1, \ldots, x_n)$. The Euclidean length of $x$ is defined by
$$|x| = \sqrt{x_1^2 + \cdots + x_n^2}$$
and the standard inner product by
$$(x, y) = x_1 y_1 + \cdots + x_n y_n.$$
We say that $\Omega \subset \mathbb{R}^n$, $n \ge 2$, is an open set if for any $x \in \Omega$ there is $R > 0$ such that $B_R(x) \subset \Omega$.

(iii) $\alpha! = \alpha_1! \cdots \alpha_n!$ with $0! = 1$
2) Banach spaces ($L^p$, $1 \le p \le \infty$, $C^k$) and Hilbert spaces ($L^2$): If $1 \le p < \infty$ then we set
$$L^p(\Omega) := \Bigl\{f : \Omega \to \mathbb{C} \text{ measurable} : \|f\|_{L^p(\Omega)} := \Bigl(\int_\Omega |f(x)|^p\,dx\Bigr)^{1/p} < \infty\Bigr\},$$
while
$$L^\infty(\Omega) := \Bigl\{f : \Omega \to \mathbb{C} \text{ measurable} : \|f\|_{L^\infty(\Omega)} := \operatorname*{ess\,sup}_{x\in\Omega} |f(x)| < \infty\Bigr\}.$$
Moreover,
$$C^k(\Omega) := \Bigl\{f : \Omega \to \mathbb{C} : \|f\|_{C^k(\Omega)} := \sum_{|\alpha|\le k}\max_{x\in\Omega} |\partial^\alpha f(x)| < \infty\Bigr\}.$$

3) Hölder's inequality: Let $1 \le p \le \infty$, $u \in L^p$ and $v \in L^{p'}$ with
$$\frac{1}{p} + \frac{1}{p'} = 1.$$
Then $uv \in L^1$ and
$$\int_\Omega |u(x)v(x)|\,dx \le \Bigl(\int_\Omega |u(x)|^p\,dx\Bigr)^{1/p}\Bigl(\int_\Omega |v(x)|^{p'}\,dx\Bigr)^{1/p'}.$$
5) Fubini's theorem about the interchange of the order of integration:
$$\int_{X\times Y} |f(x,y)|\,dx\,dy = \int_X dx\int_Y |f(x,y)|\,dy = \int_Y dy\int_X |f(x,y)|\,dx,$$
if one of the three integrals exists.

Exercise 1. Prove the generalized Leibniz formula
$$\partial^\alpha(fg) = \sum_{\beta\le\alpha} C_\alpha^\beta\,\partial^\beta f\,\partial^{\alpha-\beta}g,$$
Hypersurface

A set $S \subset \mathbb{R}^n$ is called a hypersurface of class $C^k$, $k = 1, 2, \ldots, \infty$, if for any $x^0 \in S$ there is an open set $V \subset \mathbb{R}^n$ containing $x^0$ and a real-valued function $\varphi \in C^k(V)$ such that
$$\nabla\varphi \equiv (\partial_1\varphi, \ldots, \partial_n\varphi) \ne 0 \text{ on } S\cap V$$
and
$$S\cap V = \{x \in V : \varphi(x) = 0\}.$$
By the implicit function theorem we can solve the equation $\varphi(x) = 0$ near $x^0$ to obtain
$$x_n = \psi(x_1, \ldots, x_{n-1})$$
for some $C^k$ function $\psi$. A neighborhood of $x^0$ in $S$ can then be mapped to a piece of the hyperplane $\tilde{x}_n = 0$ by
$$x \mapsto (x', x_n - \psi(x')),$$
where $x' = (x_1, \ldots, x_{n-1})$. The vector $\nabla\varphi$ is perpendicular to $S$ at $x \in S\cap V$. The vector $\nu(x)$ defined by
$$\nu(x) := \pm\frac{\nabla\varphi}{|\nabla\varphi|}$$
is called the normal to $S$ at $x$. It can be proved that
$$\nu(x) = \pm\frac{(\nabla\psi, -1)}{\sqrt{|\nabla\psi|^2 + 1}}.$$
If $S$ is the boundary of a domain $\Omega \subset \mathbb{R}^n$, $n \ge 2$, we always choose the orientation so that $\nu(x)$ points out of $\Omega$, and we define the normal derivative of $u$ on $S$ by
$$\partial_\nu u := \nu\cdot\nabla u \equiv \nu_1\frac{\partial u}{\partial x_1} + \cdots + \nu_n\frac{\partial u}{\partial x_n}.$$
Thus $\nu$ and $\partial_\nu u$ are $C^{k-1}$ functions.

Example 0.1. Let $S_r(y) = \{x \in \mathbb{R}^n : |x-y| = r\}$. Then
$$\nu(x) = \frac{x-y}{r} \quad\text{and}\quad \partial_\nu = \frac{1}{r}\sum_{j=1}^n (x_j - y_j)\frac{\partial}{\partial x_j}.$$
The divergence theorem

Let $\Omega \subset \mathbb{R}^n$ be a bounded domain with $C^1$ boundary $S = \partial\Omega$ and let $F$ be a $C^1$ vector field on $\overline{\Omega}$. Then
$$\int_\Omega \nabla\cdot F\,dx = \int_S F\cdot\nu\,d\sigma(x),$$
provided that the integral in question exists. The basic theorem on the existence of convolutions is the following (Young's inequality for convolution):

Proposition 1 (Young's inequality). Let $f \in L^1(\mathbb{R}^n)$ and $g \in L^p(\mathbb{R}^n)$, $1 \le p \le \infty$. Then $f*g \in L^p(\mathbb{R}^n)$ and
$$\|f*g\|_{L^p} \le \|f\|_{L^1}\|g\|_{L^p}.$$
Proof. Let $p = \infty$. Then
$$|(f*g)(x)| \le \int_{\mathbb{R}^n}|f(x-y)||g(y)|\,dy \le \|g\|_{L^\infty}\int_{\mathbb{R}^n}|f(x-y)|\,dy = \|g\|_{L^\infty}\|f\|_{L^1}.$$
Let now $1 \le p < \infty$. Then it follows from Hölder's inequality and Fubini's theorem that
$$\int_{\mathbb{R}^n}|(f*g)(x)|^p\,dx \le \int_{\mathbb{R}^n}\Bigl(\int_{\mathbb{R}^n}|f(x-y)||g(y)|\,dy\Bigr)^p dx$$
$$\le \int_{\mathbb{R}^n}\Bigl(\int_{\mathbb{R}^n}|f(x-y)|\,dy\Bigr)^{p/p'}\int_{\mathbb{R}^n}|f(x-y)||g(y)|^p\,dy\,dx$$
$$\le \|f\|_{L^1}^{p/p'}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}|f(x-y)||g(y)|^p\,dy\,dx$$
$$\le \|f\|_{L^1}^{p/p'}\int_{\mathbb{R}^n}|g(y)|^p\,dy\int_{\mathbb{R}^n}|f(x-y)|\,dx = \|f\|_{L^1}^{p/p'}\|g\|_{L^p}^p\|f\|_{L^1} = \|f\|_{L^1}^{p/p'+1}\|g\|_{L^p}^p.$$
Exercise 2. Suppose $1 \le p, q, r \le \infty$ and $\frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1$. Prove that if $f \in L^p(\mathbb{R}^n)$ and $g \in L^q(\mathbb{R}^n)$ then $f*g \in L^r(\mathbb{R}^n)$ and
$$\|f*g\|_r \le \|f\|_p\|g\|_q.$$
In particular,
$$\|f*g\|_{L^\infty} \le \|f\|_{L^p}\|g\|_{L^{p'}}.$$
Definition. Let $u \in L^1(\mathbb{R}^n)$ with
$$\int_{\mathbb{R}^n}u(x)\,dx = 1.$$
and thus
$$\Bigl|\int_{\mathbb{R}^n}u_\varepsilon(x)\varphi(x)\,dx - \varphi(0)\Bigr| \le \int_{|x|\le\sqrt{\varepsilon}}|u_\varepsilon(x)||\varphi(x)-\varphi(0)|\,dx + \int_{|x|>\sqrt{\varepsilon}}|u_\varepsilon(x)||\varphi(x)-\varphi(0)|\,dx$$
$$\le \sup_{|x|\le\sqrt{\varepsilon}}|\varphi(x)-\varphi(0)|\int_{\mathbb{R}^n}|u_\varepsilon(x)|\,dx + 2\|\varphi\|_{L^\infty}\int_{|x|>\sqrt{\varepsilon}}|u_\varepsilon(x)|\,dx$$
$$\le \sup_{|x|\le\sqrt{\varepsilon}}|\varphi(x)-\varphi(0)|\cdot\|u\|_{L^1} + 2\|\varphi\|_{L^\infty}\int_{|y|>1/\sqrt{\varepsilon}}|u(y)|\,dy \to 0$$
as $\varepsilon\to0$.

Example 0.2. Let $u(x)$ be defined as
$$u(x) = \begin{cases}\dfrac{\sin x_1}{2}\cdots\dfrac{\sin x_n}{2}, & x\in[0,\pi]^n\\[4pt] 0, & x\notin[0,\pi]^n.\end{cases}$$
Fourier transform

If $f \in L^1(\mathbb{R}^n)$ its Fourier transform $\hat{f}$ or $\mathcal{F}(f)$ is the bounded function on $\mathbb{R}^n$ defined by
$$\hat{f}(\xi) = (2\pi)^{-n/2}\int_{\mathbb{R}^n}e^{-ix\cdot\xi}f(x)\,dx.$$
Clearly $\hat{f}(\xi)$ is well-defined for all $\xi$ and
$$\|\hat{f}\|_\infty \le (2\pi)^{-n/2}\|f\|_1.$$
Hence
$$2|\mathcal{F}f(\xi)| \le (2\pi)^{-n/2}\int_{\mathbb{R}^n}|f(x+\pi\xi/|\xi|^2) - f(x)|\,dx = (2\pi)^{-n/2}\|f(\cdot+\pi\xi/|\xi|^2) - f(\cdot)\|_{L^1} \to 0.$$
2. If $T : \mathbb{R}^n\to\mathbb{R}^n$ is linear and invertible then $\widehat{f\circ T}(\xi) = |\det T|^{-1}\hat{f}\bigl((T^{-1})'\xi\bigr)$, where $T'$ is the adjoint (transpose) matrix.

It is clear that
$$\mathcal{F}^{-1}f(x) = \mathcal{F}f(-x), \quad \mathcal{F}^{-1}f = \overline{\mathcal{F}(\overline{f})}$$
and for $f, g \in L^1(\mathbb{R}^n)$
$$(\mathcal{F}f, g)_{L^2} = (f, \mathcal{F}^{-1}g)_{L^2}.$$
The Schwartz space $S(\mathbb{R}^n)$ is defined as
$$S(\mathbb{R}^n) = \Bigl\{f \in C^\infty(\mathbb{R}^n) : \sup_{x\in\mathbb{R}^n}|x^\alpha\partial^\beta f(x)| < \infty \text{ for any multi-indices } \alpha \text{ and } \beta\Bigr\}.$$
Exercise 8. Prove that if $f \in L^1(\mathbb{R}^n)$ has compact support then $\hat{f}$ extends to an entire holomorphic function on $\mathbb{C}^n$.
Exercise 9. Prove that if $f \in C_0^\infty(\mathbb{R}^n)$, i.e. $f \in C^\infty(\mathbb{R}^n)$ with compact support, is supported in $\{x\in\mathbb{R}^n : |x|\le R\}$ then for any multi-index $\alpha$ we have
$$|(i\xi)^\alpha\hat{f}(\xi)| \le (2\pi)^{-n/2}e^{R|\operatorname{Im}\xi|}\|\partial^\alpha f\|_1,$$
that is, $\hat{f}(\xi)$ is rapidly decaying as $|\operatorname{Re}\xi|\to\infty$ when $|\operatorname{Im}\xi|$ remains bounded.

Distributions

We say that $\varphi_j\to\varphi$ in $C_0^\infty(\Omega)$, $\Omega\subset\mathbb{R}^n$ open, if the $\varphi_j$ are all supported in a common compact set $K\subset\Omega$ and
$$\sup_{x\in K}|\partial^\alpha\varphi_j(x) - \partial^\alpha\varphi(x)|\to0, \quad j\to\infty,$$
for all $\alpha$. A distribution on $\Omega$ is a linear functional $u$ on $C_0^\infty(\Omega)$ that is continuous, i.e.,

1. $u : C_0^\infty(\Omega)\to\mathbb{C}$. The action of $u$ on $\varphi\in C_0^\infty(\Omega)$ is denoted by $\langle u,\varphi\rangle$. The set of all distributions is denoted by $\mathcal{D}'(\Omega)$.
2. $\langle u, c_1\varphi_1 + c_2\varphi_2\rangle = c_1\langle u,\varphi_1\rangle + c_2\langle u,\varphi_2\rangle$.
3. If $\varphi_j\to\varphi$ in $C_0^\infty(\Omega)$ then $\langle u,\varphi_j\rangle\to\langle u,\varphi\rangle$ in $\mathbb{C}$ as $j\to\infty$. This is equivalent to the following condition: for any compact $K\subset\Omega$ there are a constant $C_K$ and an integer $N_K$ such that for all $\varphi\in C_0^\infty(K)$,
$$|\langle u,\varphi\rangle| \le C_K\sum_{|\alpha|\le N_K}\|\partial^\alpha\varphi\|_\infty.$$
In particular,
$$\lim_{\varepsilon\to0+}\widehat{u_\varepsilon}(\xi) = \lim_{\varepsilon\to0+}\hat{u}(\varepsilon\xi) = (2\pi)^{-n/2}.$$
1) $\lim_{\varepsilon\to0+}\langle u_\varepsilon,\varphi\rangle = \varphi(0)$, i.e. $\lim_{\varepsilon\to0+}u_\varepsilon = \delta$ in the sense of distributions, and
2) $\hat{\delta} = (2\pi)^{-n/2}\cdot1$.
$$\partial^\alpha(u*\psi) = u*\partial^\alpha\psi.$$
$$\widehat{u*\psi} = (2\pi)^{n/2}\hat{\psi}\,\hat{u}.$$
1. $\hat{\delta} = (2\pi)^{-n/2}\cdot1$, $\quad\hat{1} = (2\pi)^{n/2}\delta$
2. $\widehat{\partial^\alpha\delta} = (i\xi)^\alpha(2\pi)^{-n/2}$
3. $\widehat{x^\alpha} = i^{|\alpha|}\partial^\alpha(\hat{1}) = i^{|\alpha|}(2\pi)^{n/2}\partial^\alpha\delta.$
1 Local Existence Theory

A partial differential equation of order $k\in\mathbb{N}$ is an equation of the form
$$F\bigl(x, (\partial^\alpha u)_{|\alpha|\le k}\bigr) = 0, \tag{1.1}$$
pointwise for all $x\in\Omega$. The equation (1.1) is called linear if it can be written as
$$\sum_{|\alpha|\le k}a_\alpha(x)\partial^\alpha u(x) = f(x) \tag{1.2}$$
for some known functions $a_\alpha$ and $f$. In this case we speak about the (linear) differential operator
$$L(x,\partial) \equiv \sum_{|\alpha|\le k}a_\alpha(x)\partial^\alpha$$
and write (1.2) simply as $Lu = f$. If the coefficients $a_\alpha(x)$ belong to $C^\infty(\Omega)$ we can apply the operator $L$ to any distribution $u\in\mathcal{D}'(\Omega)$, and $u$ is called a distributional solution (or weak solution) of (1.2) if the equation (1.2) holds in the sense of distributions, i.e.
$$\sum_{|\alpha|\le k}(-1)^{|\alpha|}\langle u,\partial^\alpha(a_\alpha\varphi)\rangle = \langle f,\varphi\rangle,$$
where $\varphi\in C_0^\infty(\Omega)$. Let us list some examples. Here and throughout we denote $u_t = \frac{\partial u}{\partial t}$, $u_{tt} = \frac{\partial^2 u}{\partial t^2}$ and so forth.

a) Heat equation
$$u_t = k\Delta u$$
b) Wave equation
$$u_{tt} = c^2\Delta u$$
c) Poisson equation
$$\Delta u = f,$$
where $\Delta \equiv \nabla\cdot\nabla = \partial_1^2 + \cdots + \partial_n^2$ is the Laplacian (or the Laplace operator).
3. The telegrapher's equation
$$u_{tt} = c^2\Delta u - \alpha u_t - m^2u$$
4. The sine-Gordon equation
$$u_{tt} = c^2\Delta u - \sin u$$
5. The Korteweg-de Vries equation
$$u_t + cu\,u_x + u_{xxx} = 0.$$
A nonzero $\xi$ is called characteristic for $L$ at $x$ if $\chi_L(x,\xi) = 0$, and the set of all such $\xi$ is called the characteristic variety of $L$ at $x$, denoted by $\operatorname{char}_x(L)$. In other words,

3. $L = \Delta$ is elliptic in $\mathbb{R}^n$.
4. $L = \partial_1 - \sum_{j=2}^n\partial_j^2$, $\quad\operatorname{char}_x(L) = \{\xi\in\mathbb{R}^n\setminus\{0\} : \xi_j = 0,\ j = 2, 3, \ldots, n\}$.
5. $L = \partial_1^2 - \sum_{j=2}^n\partial_j^2$, $\quad\operatorname{char}_x(L) = \bigl\{\xi\in\mathbb{R}^n\setminus\{0\} : \xi_1^2 = \sum_{j=2}^n\xi_j^2\bigr\}$.

A hypersurface $S$ is called characteristic for $L$ at $x\in S$ if
$$\chi_L(x,\nu(x)) = 0,$$
and $S$ is called non-characteristic if it is not characteristic at any point, that is, for any $x\in S$
$$\chi_L(x,\nu(x)) \ne 0.$$
Let us consider the linear equation of first order
$$Lu \equiv \sum_{j=1}^n a_j(x)\partial_j u + b(x)u = f(x), \tag{1.3}$$
$$u_1 - u_2 = \varphi\cdot\gamma,$$
$$(\vec{A}\cdot\nabla)u_1 - (\vec{A}\cdot\nabla)u_2 = (\vec{A}\cdot\nabla)(\varphi\gamma) = \gamma(\vec{A}\cdot\nabla)\varphi + \varphi(\vec{A}\cdot\nabla)\gamma = 0,$$
since $S$ is characteristic for $L$ ($(\vec{A}\cdot\nabla)\varphi = 0 \Leftrightarrow \vec{A}\cdot\frac{\nabla\varphi}{|\nabla\varphi|} = 0 \Leftrightarrow \vec{A}\cdot\nu = 0$). That's why, to make the initial value problem well-defined, we must assume that $S$ is non-characteristic for this problem.

Let us assume that $S$ is non-characteristic for $L$ and $u = g$ on $S$. We define the integral curves for (1.3) as the parametrized curves $x(t)$ that satisfy the system
$$\dot{x} = \vec{A}(x), \quad x = x(t) = (x_1(t),\ldots,x_n(t)). \tag{1.4}$$
Along such a curve,
$$\frac{du}{dt} = \frac{d}{dt}\bigl(u(x(t))\bigr) = \sum_{j=1}^n\dot{x}_j\frac{\partial u}{\partial x_j} = (\vec{A}\cdot\nabla)u = f - bu \equiv f(x(t)) - b(x(t))\,u(x(t))$$
or
$$\frac{du}{dt} = f - bu. \tag{1.5}$$
By the existence and uniqueness theorem for ordinary differential equations there is a unique solution (a unique curve) of (1.4) with $x(0) = x^0$. Along this curve the solution $u(x)$ of (1.3) must be the solution of (1.5) with $u(0) = u(x(0)) = u(x^0) = g(x^0)$. Moreover, since $S$ is non-characteristic, $x(t)\notin S$ for $t\ne0$, at least for small $t$, and the curves $x(t)$ fill out a neighborhood of $S$. Thus we have proved the following theorem.

Remark. The method presented above is called the method of characteristics.

Let us consider some examples where we apply the method of characteristics.
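The characteristic system (1.4)-(1.5) can also be integrated numerically. The following minimal sketch (an addition, not part of the original notes) traces one characteristic of the model equation $x_2\partial_1u + \partial_2u = 0$ with data $u = g$ on $S = \{x_2 = 0\}$; this particular equation, the data $g(x_1) = e^{-x_1^2}$ and all names in the code are illustrative assumptions.

```python
# A minimal numerical sketch of the method of characteristics for
# a1*u_x1 + a2*u_x2 + b*u = f with data u = g on S = {x2 = 0}.
# Illustrative choice: a1 = x2, a2 = 1, b = 0, f = 0, g(x1) = exp(-x1**2).
import numpy as np
from scipy.integrate import solve_ivp

def characteristic(x1_0, t_span, t_eval):
    # ODE system (1.4)-(1.5): xdot = A(x), udot = f - b*u
    def rhs(t, y):
        x1, x2, u = y
        return [x2, 1.0, 0.0]                 # A = (x2, 1), f - b*u = 0
    y0 = [x1_0, 0.0, np.exp(-x1_0**2)]        # start on S with u = g(x1_0)
    return solve_ivp(rhs, t_span, y0, t_eval=t_eval, rtol=1e-9)

# Trace one characteristic and compare with the exact solution
# u(x1, x2) = g(x1 - x2**2/2) of x2*u_x1 + u_x2 = 0.
sol = characteristic(x1_0=0.3, t_span=(0.0, 2.0), t_eval=np.linspace(0, 2, 5))
for x1, x2, u in zip(*sol.y):
    exact = np.exp(-(x1 - x2**2 / 2.0)**2)
    print(f"x=({x1:.3f},{x2:.3f})  u={u:.6f}  exact={exact:.6f}")
```

The printed values agree because $u$ is constant along each characteristic curve, exactly as the derivation above predicts.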
on $S$. We obtain ... Therefore
$$u(x) = u(x_1,x_2,x_3) = g\bigl(x_1e^{-x_3},\,x_2e^{-2x_3}\bigr)e^{3x_3}.$$
$$\chi_L(x,\nu(x)) = 1\cdot0 + x_1\cdot0 - 1\cdot1 = -1 \ne 0$$
and $S$ is non-characteristic. The system (1.4)-(1.5) for this problem becomes ... with
$$(x_1,x_2,x_3)|_{t=0} = (x_1^0, x_2^0, 1), \quad u(0) = x_1^0 + x_2^0.$$
We obtain
$$x_1 = t + x_1^0, \quad x_2 = \frac{t^2}{2} + tx_1^0 + x_2^0, \quad x_3 = -t + 1, \quad u = (x_1^0 + x_2^0)e^t.$$
Then,
$$t = 1 - x_3, \quad x_1^0 = x_1 - t = x_1 + x_3 - 1,$$
$$x_2^0 = x_2 - \frac{(1-x_3)^2}{2} - (1-x_3)(x_1 + x_3 - 1) = \frac{1}{2} - x_1 + x_2 - x_3 + x_1x_3 + \frac{x_3^2}{2}$$
and, finally,
$$u = \Bigl(\frac{x_3^2}{2} + x_1x_3 + x_2 - \frac{1}{2}\Bigr)e^{1-x_3}.$$
Now let us generalize this technique to quasi-linear equations, i.e. to equations of the form
$$\sum_{j=1}^n a_j(x,u)\partial_j u = b(x,u). \tag{1.6}$$
The vector field
$$\vec{A}(x,y) := (a_1,\ldots,a_n,b) \in \mathbb{R}^{n+1}$$
is tangent to the graph $y = u(x)$ at any point. This suggests that we look at the integral curves of $\vec{A}$ in $\mathbb{R}^{n+1}$ given by solving the ordinary differential equations ... If we set
$$S^* := \{(x,g(x)) : x\in S\}$$
in $\mathbb{R}^{n+1}$, then the graph of the solution should be the hypersurface generated by the integral curves of $\vec{A}$ passing through $S^*$. Again, we need to assume that $S$ is non-characteristic in the sense that the vector

Remark. If $S$ is parametrized as
$$\phi(x_1,\ldots,x_n) = 0,$$
for $s > 0$, $s \ne 2$. The system (1.4)-(1.5) for this problem is
$$\dot{x}_1 = u, \quad \dot{x}_2 = 1, \quad \dot{u} = 1$$
with
$$(x_1,x_2,u)|_{t=0} = \Bigl(x_1^0, x_2^0, \frac{x_1^0}{2}\Bigr) = (s, s, s/2).$$
Then
$$u = t + s/2, \quad x_2 = t + s, \quad \dot{x}_1 = t + s/2,$$
so that $x_1 = \frac{t^2}{2} + \frac{st}{2} + s$. This implies
$$x_1 - x_2 = t^2/2 + t(s/2 - 1).$$
Hence
$$u = \frac{2(x_1-x_2)}{x_2-2} + 1 + \frac{x_1-x_2}{t} - \frac{t}{2} = \frac{2(x_1-x_2)}{x_2-2} + 1 + \frac{x_2-2}{2} - \frac{x_1-x_2}{x_2-2}$$
$$= \frac{x_1-x_2}{x_2-2} + 1 + \frac{x_2-2}{2} = \frac{x_1-x_2}{x_2-2} + \frac{x_2}{2} = \frac{2x_1 - 4x_2 + x_2^2}{2(x_2-2)}.$$
$$u\partial_1u + \partial_2u = 0$$
and $\nu(x) = (0,1)$. Now we have to solve the ordinary differential equations
$$\dot{x}_1 = u, \quad \dot{x}_2 = 1, \quad \dot{u} = 0$$
with
$$(x_1,x_2,u)|_{t=0} = \bigl(x_1^0, 0, h(x_1^0)\bigr).$$
We obtain
$$x_2 = t, \quad u \equiv h(x_1^0), \quad x_1 = h(x_1^0)t + x_1^0,$$
so that
$$x_1 - x_2h(x_1^0) - x_1^0 = 0.$$
Let us assume that
$$-x_2h'(x_1^0) - 1 \ne 0.$$
Under this condition the last equation defines an implicit function $x_1^0 = g(x_1,x_2)$. That's why the solution $u$ of the Burgers equation has the form
$$u(x_1,x_2) = \frac{ax_1 + b}{ax_2 + 1}, \quad x_2 \ne -\frac{1}{a}.$$
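As a quick sanity check, one can verify symbolically that this function solves the equation $u\partial_1u + \partial_2u = 0$. The snippet below is an added sketch, not part of the notes.

```python
# Symbolic check that u(x1, x2) = (a*x1 + b)/(a*x2 + 1) satisfies
# the Burgers-type equation u*u_x1 + u_x2 = 0 away from x2 = -1/a.
import sympy as sp

x1, x2, a, b = sp.symbols('x1 x2 a b')
u = (a*x1 + b)/(a*x2 + 1)
residual = sp.simplify(u*sp.diff(u, x1) + sp.diff(u, x2))
print(residual)   # prints 0
# The corresponding profile on S = {x2 = 0} is h(x1) = a*x1 + b.
```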
$$u,\ \partial_\nu u,\ \ldots,\ \partial_\nu^{k-1}u \tag{1.7}$$
on $S$ are called the Cauchy data of $u$ on $S$, and the Cauchy problem is to solve (1.2) with the Cauchy data (1.7). We shall consider $\mathbb{R}^n$, $n\ge2$, as $\mathbb{R}^{n-1}\times\mathbb{R}$ and denote the coordinates by $(x,t)$, where $x = (x_1,\ldots,x_{n-1})$. We can make a change of coordinates from $\mathbb{R}^n$ to $\mathbb{R}^{n-1}\times\mathbb{R}$ so that $x^0\in S$ is mapped to $(0,0)$ and a neighborhood of $x^0$ in $S$ is mapped into the hyperplane $t = 0$. In that case $\partial_\nu = \frac{\partial}{\partial t}$ on $S = \{(x,t) : t = 0\}$ and equation (1.2) can be written in the new coordinates as
$$\sum_{|\alpha|+j\le k}a_{\alpha,j}(x,t)\partial_x^\alpha\partial_t^j u = f(x,t) \tag{1.8}$$
and moreover
$$\partial_t^ku(x,t) = (a_{0,k})^{-1}\Bigl(f(x,t) - \sum_{|\alpha|+j\le k,\,j<k}a_{\alpha,j}(x,t)\partial_x^\alpha\partial_t^ju\Bigr).$$
Next, let us denote $y_{\alpha,j} = \partial_x^\alpha\partial_t^ju$ and by $Y = (y_{\alpha,j})$ the corresponding vector. Then equation (1.10) can be rewritten as
$$y_{0,k} = (a_{0,k})^{-1}\Bigl(f - \sum_{|\alpha|+j\le k,\,j<k}a_{\alpha,j}y_{\alpha,j}\Bigr)$$
or
$$\partial_t(y_{0,k-1}) = (a_{0,k})^{-1}\Bigl(f - \sum_{|\alpha|+j\le k,\,j<k}a_{\alpha,j}\partial_{x_j}y_{(\alpha-\vec{j}),j}\Bigr),$$
where $Y$, $B$ and $\Phi$ are analytic vector-valued functions and the $A_j$ are analytic matrix-valued functions. Without loss of generality we can assume that $\Phi\equiv0$. Let $Y = (y_1,\ldots,y_N)$, $B = (b_1,\ldots,b_N)$, $A_j = (a_{ml}^{(j)})_{m,l=1}^N$. We seek a solution $Y = (y_1,\ldots,y_N)$ in the form
$$y_m = \sum_{\alpha,j}C_{\alpha,j}^{(m)}x^\alpha t^j, \quad m = 1,2,\ldots,N.$$
The Cauchy data tell us that $C_{\alpha,0}^{(m)} = 0$ for all $\alpha$ and $m$, since we assumed $\Phi\equiv0$. To determine $C_{\alpha,j}^{(m)}$ for $j>0$, we substitute $y_m$ into (1.11) and get for $m = 1,2,\ldots,N$
$$\partial_ty_m = \sum_{j,l}a_{ml}^{(j)}\partial_{x_j}y_l + b_m(x,y)$$
or
$$\sum_{\alpha,j}C_{\alpha,j}^{(m)}\,j\,x^\alpha t^{j-1} = \sum_{j,l}\Bigl(\sum_{\beta,r}a_{ml,\beta r}^{(j)}x^\beta t^r\Bigr)\sum_{\alpha,j}C_{\alpha,j}^{(l)}\alpha_jx^{\alpha-\vec{j}}t^j + \sum_{\alpha,j}b_{\alpha j}^{(m)}x^\alpha t^j.$$
It can be proved that this equation determines the coefficients $C_{\alpha,j}^{(m)}$ uniquely and therefore also the solution $Y = (y_1,\ldots,y_N)$.
Remark. Consider the following example in $\mathbb{R}^2$, due to Hadamard, which sheds light on the Cauchy problem:
$$\Delta u = 0, \quad u(x_1,0) = 0, \quad \partial_2u(x_1,0) = ke^{-\sqrt{k}}\sin(kx_1), \quad k\in\mathbb{N}.$$
$$u_1''u_2 + u_2''u_1 = 0$$
at least for some $x_1$ and some subsequence of $k$. Hence $u(x_1,x_2)$ is not bounded. But the solution of the original problem corresponding to the limiting case $k = \infty$ is of course $u\equiv0$, since $u(x_1,0) = 0$ and $\partial_2u(x_1,0) = 0$ in the limiting case. Hence the solution of the Cauchy problem may not depend continuously on the Cauchy data. In Hadamard's terminology this means that the Cauchy problem for elliptic operators is "ill-posed", even in the case when this problem is non-characteristic.
Remark. This example of Hadamard shows that the solution of the Cauchy problem may not depend continuously on the Cauchy data. In Hadamard's terminology, "the Cauchy problem for the Laplacian is not well-posed, i.e. it is ill-posed". Following Hadamard and Tikhonov, a problem is called well-posed if the following are satisfied:

1. existence
2. uniqueness
3. continuous dependence of the solution on the data.
$$L \equiv \frac{\partial}{\partial x} + i\frac{\partial}{\partial y} - 2i(x+iy)\frac{\partial}{\partial t}. \tag{1.12}$$
This implies that
$$\frac{\partial V}{\partial r} = ir\int_0^{2\pi}\Bigl(\frac{\partial u}{\partial x}+i\frac{\partial u}{\partial y}\Bigr)(r,\theta,t)\,d\theta = 2r\int_{|z|=r}\Bigl(\frac{\partial u}{\partial x}+i\frac{\partial u}{\partial y}\Bigr)(x,y,t)\,\frac{d\sigma(z)}{2z}$$
$$= 2r\int_{|z|=r}\Bigl(i\frac{\partial u}{\partial t}+\frac{f(t)}{2z}\Bigr)d\sigma(z) = 2r\Bigl(i\frac{\partial V}{\partial t}+f(t)\int_{|z|=r}\frac{d\sigma(z)}{2z}\Bigr) = 2r\Bigl(i\frac{\partial V}{\partial t}+i\pi f(t)\Bigr).$$
Let us now introduce a new function $U(s,t) = V(s) + \pi F(t)$, where $s = r^2$ and $F' = f$. The function $F$ exists because $f$ is continuous. It follows from (1.13) that
$$\frac{1}{2r}\frac{\partial V}{\partial r} \equiv \frac{\partial V}{\partial s}, \quad \frac{\partial U}{\partial s} = \frac{\partial V}{\partial s}, \quad \frac{\partial U}{\partial s} = i\frac{\partial U}{\partial t}.$$
Hence
$$\frac{\partial U}{\partial t} + i\frac{\partial U}{\partial s} = 0. \tag{1.14}$$
Since (1.14) is the Cauchy-Riemann equation, $U$ is a holomorphic (analytic) function of the variable $w = t + is$ in the region $0 < s < R^2$, $|t| < R$, and $U$ is continuous up to $s = 0$. Next, since $U(0,t) = \pi F(t)$ ($V = 0$ when $s = 0$, i.e. $r = 0$) and $f(t)$ is real-valued, $U(0,t)$ is also real-valued. Therefore, by the Schwarz reflection principle (see complex analysis), the formula
$$U(-s,t) := \overline{U(s,t)}$$
2 Fourier Series

Definition. A function $f$ is said to be periodic with period $T > 0$ if the domain $D(f)$ of $f$ contains $x + T$ whenever it contains $x$, and if
$$f(x+T) = f(x), \quad x\in D(f). \tag{2.1}$$
It follows that if $T$ is a period of $f$ then $mT$ is also a period for any integer $m > 0$. The smallest value of $T > 0$ for which (2.1) holds is called the fundamental period of $f$.

For example, the functions $\sin\frac{m\pi x}{L}$ and $\cos\frac{m\pi x}{L}$, $m = 1,2,\ldots$, are periodic with fundamental period $T = \frac{2L}{m}$. Note also that they are periodic with the common period $2L$.

Definition. Let us assume that the domain of $f$ is symmetric with respect to $\{0\}$, i.e. if $x\in D(f)$ then $-x\in D(f)$. A function $f$ is called even if
$$f(-x) = f(x), \quad x\in D(f), \tag{2.2}$$
and odd if
$$f(-x) = -f(x), \quad x\in D(f). \tag{2.3}$$
Definition. The notation $f(c\pm0)$ is used to denote the limits
$$f(c\pm0) = \lim_{x\to c\pm0}f(x).$$
Proposition. The functions $1$, $\sin\frac{m\pi x}{L}$ and $\cos\frac{m\pi x}{L}$, $m = 1,2,\ldots$, form a mutually orthogonal set on the interval $-L\le x\le L$. In fact,
$$\int_{-L}^L\cos\frac{m\pi x}{L}\cos\frac{n\pi x}{L}\,dx = \begin{cases}0, & m\ne n\\ L, & m = n\end{cases} \tag{2.4}$$
$$\int_{-L}^L\cos\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx = 0 \tag{2.5}$$
$$\int_{-L}^L\sin\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx = \begin{cases}0, & m\ne n\\ L, & m = n\end{cases} \tag{2.6}$$
$$\int_{-L}^L\sin\frac{m\pi x}{L}\,dx = \int_{-L}^L\cos\frac{m\pi x}{L}\,dx = 0. \tag{2.7}$$
Proof. Let us derive (for example) (2.5). Since
$$\cos\alpha\sin\beta = \frac{1}{2}\bigl(\sin(\alpha+\beta) - \sin(\alpha-\beta)\bigr),$$
we have for $m\ne n$
$$\int_{-L}^L\cos\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx = \frac{1}{2}\int_{-L}^L\sin\frac{(m+n)\pi x}{L}\,dx - \frac{1}{2}\int_{-L}^L\sin\frac{(m-n)\pi x}{L}\,dx$$
$$= \frac{1}{2}\Biggl\{\frac{-\cos\frac{(m+n)\pi x}{L}}{\frac{(m+n)\pi}{L}}\Biggr\}_{-L}^{L} - \frac{1}{2}\Biggl\{\frac{-\cos\frac{(m-n)\pi x}{L}}{\frac{(m-n)\pi}{L}}\Biggr\}_{-L}^{L}$$
$$= \frac{1}{2}\Biggl\{\frac{-\cos(m+n)\pi}{\frac{(m+n)\pi}{L}} + \frac{\cos(m+n)\pi}{\frac{(m+n)\pi}{L}}\Biggr\} - \frac{1}{2}\Biggl\{\frac{-\cos(m-n)\pi}{\frac{(m-n)\pi}{L}} + \frac{\cos(m-n)\pi}{\frac{(m-n)\pi}{L}}\Biggr\} = 0.$$
If $m = n$ we have
$$\int_{-L}^L\cos\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx = \frac{1}{2}\int_{-L}^L\sin\frac{2m\pi x}{L}\,dx = 0,$$
since sine is odd. The other identities can be proved in a similar manner and are left to the reader.
Let us consider the infinite trigonometric series
$$\frac{a_0}{2} + \sum_{m=1}^\infty\Bigl(a_m\cos\frac{m\pi x}{L} + b_m\sin\frac{m\pi x}{L}\Bigr). \tag{2.8}$$
This series consists of $2L$-periodic functions. Thus, if the series (2.8) converges for all $x$, then the function to which it converges will be periodic with period $2L$. Let us denote the limiting function by $f(x)$, i.e.
$$f(x) = \frac{a_0}{2} + \sum_{m=1}^\infty\Bigl(a_m\cos\frac{m\pi x}{L} + b_m\sin\frac{m\pi x}{L}\Bigr) \tag{2.9}$$
for each fixed $n$. It follows from the orthogonality relations (2.4), (2.5) and (2.7) that the only nonzero term on the right-hand side is the one for which $m = n$ in the first summation. Hence,
$$\int_{-L}^Lf(x)\cos\frac{n\pi x}{L}\,dx = La_n$$
or
$$a_n = \frac{1}{L}\int_{-L}^Lf(x)\cos\frac{n\pi x}{L}\,dx. \tag{2.10}$$
A similar expression for $b_n$ may be obtained by multiplying (2.9) by $\sin\frac{n\pi x}{L}$ and integrating termwise from $-L$ to $L$. Thus,
$$b_n = \frac{1}{L}\int_{-L}^Lf(x)\sin\frac{n\pi x}{L}\,dx. \tag{2.11}$$
Hence
$$a_0 = \frac{1}{L}\int_{-L}^Lf(x)\,dx. \tag{2.12}$$
Definition. Let $f$ be a piecewise continuous function on the interval $[-L,L]$. The Fourier series of $f$ is the trigonometric series (2.9), where the coefficients $a_0$, $a_m$ and $b_m$ are given by (2.10), (2.11) and (2.12).
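For concrete functions the coefficients (2.10)-(2.12) can be computed by numerical quadrature. The following sketch is an addition to the notes (the helper names are ours); it uses SciPy and evaluates the partial sum of (2.9).

```python
# Compute Fourier coefficients (2.10)-(2.12) numerically and evaluate
# the partial sum of the series (2.9).
import numpy as np
from scipy.integrate import quad

def fourier_coefficients(f, L, M):
    """Return a_0 and the arrays (a_m, b_m), m = 1..M."""
    a0 = quad(f, -L, L)[0] / L
    a = [quad(lambda x: f(x)*np.cos(m*np.pi*x/L), -L, L)[0] / L for m in range(1, M+1)]
    b = [quad(lambda x: f(x)*np.sin(m*np.pi*x/L), -L, L)[0] / L for m in range(1, M+1)]
    return a0, np.array(a), np.array(b)

def partial_sum(x, a0, a, b, L):
    m = np.arange(1, len(a) + 1)
    return a0/2 + np.sum(a*np.cos(m*np.pi*x/L) + b*np.sin(m*np.pi*x/L))

# Example: f(x) = |x| on [-1, 1] (Example 2.2 below); the b_m vanish.
a0, a, b = fourier_coefficients(np.abs, 1.0, 10)
print(a0, a[:3])                        # a0 = 1, a_1 = -4/pi^2, a_2 = 0, ...
print(partial_sum(0.5, a0, a, b, 1.0))  # close to |0.5| = 0.5
```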
It follows from this definition and (2.2)-(2.3) that if $f$ is even on $[-L,L]$ then the Fourier series of $f$ has the form
$$f(x) = \frac{a_0}{2} + \sum_{m=1}^\infty a_m\cos\frac{m\pi x}{L} \tag{2.13}$$
and if $f$ is odd then
$$f(x) = \sum_{m=1}^\infty b_m\sin\frac{m\pi x}{L}. \tag{2.14}$$
The series (2.13) is called the Fourier cosine series and (2.14) is called the Fourier sine series.
Example 2.1. Find the Fourier series of
$$\operatorname{sgn}(x) = \begin{cases}-1, & -\pi\le x<0\\ 0, & x = 0\\ 1, & 0<x\le\pi\end{cases}$$
on the interval $[-\pi,\pi]$.

Since $L = \pi$ and $\operatorname{sgn}(x)$ is an odd function we have a Fourier sine series with
$$b_m = \frac{1}{\pi}\int_{-\pi}^\pi\operatorname{sgn}(x)\sin(mx)\,dx = \frac{2}{\pi}\int_0^\pi\sin(mx)\,dx = -\frac{2}{\pi}\Bigl.\frac{\cos(mx)}{m}\Bigr|_0^\pi$$
$$= -\frac{2\cos(m\pi)}{\pi m} + \frac{2}{\pi m} = \frac{2}{\pi}\frac{1-(-1)^m}{m} = \begin{cases}0, & m = 2k,\ k=1,2,\ldots\\ \dfrac{4}{\pi m}, & m = 2k-1,\ k=1,2,\ldots.\end{cases}$$
That's why
$$\operatorname{sgn}(x) = \sum_{k=1}^\infty\frac{4}{\pi(2k-1)}\sin((2k-1)x).$$
In particular,
$$\frac{\pi}{2} = \sum_{k=1}^\infty\frac{\sin((k-1/2)\pi)}{k-1/2} = \sum_{k=1}^\infty\frac{(-1)^{k+1}}{k-1/2}.$$
Example 2.2. Let us assume that $f(x) = |x|$, $-1\le x\le1$. In this case $L = 1$ and $f(x)$ is even. Hence we will have a Fourier cosine series (2.13), where
$$a_0 = \int_{-1}^1|x|\,dx = 2\int_0^1x\,dx = 1$$
and
$$a_m = 2\int_0^1x\cos(m\pi x)\,dx = 2\Bigl.x\frac{\sin(m\pi x)}{m\pi}\Bigr|_0^1 - 2\int_0^1\frac{\sin(m\pi x)}{m\pi}\,dx$$
$$= 2\Bigl.\frac{\cos(m\pi x)}{(m\pi)^2}\Bigr|_0^1 = \frac{2\cos(m\pi)}{(m\pi)^2} - \frac{2}{(m\pi)^2} = \frac{2((-1)^m-1)}{(m\pi)^2} = \begin{cases}0, & m = 2k,\ k=1,2,\ldots\\ -\dfrac{4}{(m\pi)^2}, & m = 2k-1,\ k=1,2,\ldots.\end{cases}$$
So we have
$$|x| = \frac{1}{2} - \frac{4}{\pi^2}\sum_{k=1}^\infty\frac{\cos((2k-1)\pi x)}{(2k-1)^2}.$$
In particular,
$$\frac{\pi^2}{8} = \sum_{k=1}^\infty\frac{1}{(2k-1)^2}.$$
$$S_N(x) = \frac{a_0}{2} + \sum_{m=1}^N\Bigl(a_m\cos\frac{m\pi x}{L} + b_m\sin\frac{m\pi x}{L}\Bigr).$$
We investigate the speed with which the series converges. This is equivalent to the question: how large must $N$ be chosen if we want $S_N(x)$ to approximate $f(x)$ with some accuracy $\varepsilon>0$? So we need to choose $N$ such that the residual $R_N(x) := f(x) - S_N(x)$ satisfies
$$|R_N(x)| < \varepsilon$$
for all $x$, say, on the interval $[-L,L]$. Consider the function $f(x)$ from Example 2.2. Then
$$R_N(x) = \frac{4}{\pi^2}\sum_{k=N+1}^\infty\frac{\cos((2k-1)\pi x)}{(2k-1)^2}$$
and
$$|R_N(x)| \le \frac{4}{\pi^2}\sum_{k=N+1}^\infty\frac{1}{(2k-1)^2} < \frac{4}{\pi^2}\Bigl(\frac{1}{(2N)(2N+1)} + \frac{1}{(2N+1)(2N+2)} + \cdots\Bigr)$$
$$= \frac{4}{\pi^2}\Bigl(\frac{1}{2N} - \frac{1}{2N+1} + \frac{1}{2N+1} - \frac{1}{2N+2} + \cdots\Bigr) = \frac{4}{\pi^2}\cdot\frac{1}{2N} = \frac{2}{N\pi^2} < \varepsilon$$
if and only if $N > \frac{2}{\varepsilon\pi^2}$. Since $\pi^2\approx10$, for $\varepsilon = 0.04$ it is enough to take $N = 6$, while for $\varepsilon = 0.01$ we have to take $N = 21$.

The function $f(x) = |x|$ is "good" enough with respect to smoothness, and the smoothness of $|x|$ guarantees a good approximation by the partial sums. We would like to formulate a general result.
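This estimate is easy to test numerically. The snippet below is an added illustration (not from the notes): it evaluates the partial sums $S_N$ for $f(x) = |x|$ and compares the observed maximal error with the bound $2/(N\pi^2)$.

```python
# Check the error bound |R_N(x)| <= 2/(N*pi^2) for the cosine series
# of f(x) = |x| from Example 2.2.
import numpy as np

def S_N(x, N):
    k = np.arange(1, N + 1)
    return 0.5 - (4/np.pi**2) * np.sum(np.cos((2*k - 1)*np.pi*x) / (2*k - 1)**2)

xs = np.linspace(-1.0, 1.0, 201)
for N in (6, 21):
    err = max(abs(abs(x) - S_N(x, N)) for x in xs)
    print(N, err, 2/(N*np.pi**2))   # observed error vs. the bound
```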
Corollary. When $f$ is a $2L$-periodic function that is continuous on $(-\infty,\infty)$ and has a piecewise continuous derivative, its Fourier series not only converges at each point but converges uniformly on $(-\infty,\infty)$, i.e. for every $\varepsilon>0$ there exists $N_0(\varepsilon)$ such that
$$|f(x) - S_N(x)| < \varepsilon, \quad N\ge N_0(\varepsilon),\ x\in(-\infty,\infty).$$
Example 2.3. For $\operatorname{sgn}(x)$ on $[-\pi,\pi)$ we had the Fourier series
$$\operatorname{sgn}(x) = \frac{4}{\pi}\sum_{k=1}^\infty\frac{\sin((2k-1)x)}{2k-1}.$$
Let us extend $\operatorname{sgn}(x)$ outside the interval $-\pi\le x<\pi$ so that it is $2\pi$-periodic. This function has jumps at $x_n = \pi n$, $n = 0,\pm1,\pm2,\ldots$, and
$$\frac{4}{\pi}\sum_{k=1}^\infty\frac{\sin((2k-1)\pi n)}{2k-1} = \frac{1}{2}\bigl(\operatorname{sgn}(\pi n+0) + \operatorname{sgn}(\pi n-0)\bigr) = 0.$$
and
$$b_m = \frac{1}{L}\int_{-L}^Lf(x)\sin\frac{m\pi x}{L}\,dx = \int_0^L\sin\frac{m\pi x}{L}\,dx = \Biggl.\frac{-\cos\frac{m\pi x}{L}}{\frac{m\pi}{L}}\Biggr|_0^L$$
$$= \frac{L}{m\pi}\bigl(1-\cos(m\pi)\bigr) = \frac{L}{m\pi}\bigl(1-(-1)^m\bigr) = \begin{cases}0, & m = 2k,\ k=1,2,\ldots\\ \dfrac{2L}{m\pi}, & m = 2k-1,\ k=1,2,\ldots.\end{cases}$$
Hence
$$f(x) = \frac{L}{2} + \frac{2L}{\pi}\sum_{k=1}^\infty\frac{\sin\frac{(2k-1)\pi x}{L}}{2k-1}.$$
It follows that for any $x\ne nL$, $n = 0,\pm1,\pm2,\ldots$,
$$S_N(x) = \frac{L}{2} + \frac{2L}{\pi}\sum_{k=1}^N\frac{\sin\frac{(2k-1)\pi x}{L}}{2k-1} \to f(x), \quad N\to\infty,$$
where $f(x) = 0$ or $L$. At any $x = nL$,
$$S_N(x) \equiv \frac{L}{2} \to \frac{L}{2}, \quad N\to\infty.$$
But nevertheless, the difference
where
$$c_m = \begin{cases}\dfrac{a_m - ib_m}{2}, & m = 1,2,\ldots\\[4pt] \dfrac{a_0}{2}, & m = 0\\[4pt] \dfrac{a_{-m} + ib_{-m}}{2}, & m = -1,-2,\ldots.\end{cases}$$
If $f$ is real-valued then $c_m = \overline{c_{-m}}$ and
$$c_m = \frac{1}{2L}\int_{-L}^Lf(x)e^{-i\frac{m\pi x}{L}}\,dx, \quad m = 0,\pm1,\pm2,\ldots.$$
($f(-L) = f(L)$ by periodicity). Thus, $g(x)$ is even and its Fourier (cosine) series represents $f$ on $[0,L]$.

2. Define a function $h$ of period $2L$ so that
$$h(x) = \begin{cases}f(x), & 0<x<L\\ 0, & x = 0, L\\ -f(-x), & -L<x<0.\end{cases}$$
Thus, $h$ is the odd periodic extension of $f$ and its Fourier (sine) series represents $f$ on $(0,L)$.

3. Define a function $K$ of period $2L$ so that
$$K(x) = f(x), \quad 0\le x\le L,$$
and let $K(x)$ be defined on $(-L,0)$ in any way consistent with Theorem 1. Then its Fourier series involves both sine and cosine terms, and represents $f$ on $[0,L]$.

Example 2.5. Suppose that
$$f(x) = \begin{cases}1-x, & 0<x\le1\\ 0, & 1<x\le2.\end{cases}$$
As indicated above, we can represent $f$ either by a cosine series or a sine series. For the cosine series we define the even extension of $f$ as follows:
$$g(x) = \begin{cases}1-x, & 0\le x\le1\\ 0, & 1<x\le2\\ 1+x, & -1\le x<0\\ 0, & -2\le x<-1,\end{cases}$$
see Figure 1.
and
$$a_m = \frac{1}{2}\int_{-2}^2g(x)\cos\frac{m\pi x}{2}\,dx = \int_0^1(1-x)\cos\frac{m\pi x}{2}\,dx = \Biggl.(1-x)\frac{\sin\frac{m\pi x}{2}}{\frac{m\pi}{2}}\Biggr|_0^1 + \frac{2}{m\pi}\int_0^1\sin\frac{m\pi x}{2}\,dx$$
$$= -\frac{2}{m\pi}\Biggl.\frac{\cos\frac{m\pi x}{2}}{\frac{m\pi}{2}}\Biggr|_0^1 = \frac{4}{m^2\pi^2}\Bigl(1-\cos\frac{m\pi}{2}\Bigr) = \begin{cases}\dfrac{4}{m^2\pi^2}, & m = 2k-1,\ k=1,2,\ldots\\[4pt] \dfrac{4}{m^2\pi^2}\bigl(1-(-1)^k\bigr), & m = 2k,\ k=1,2,\ldots,\end{cases}$$
or
$$\frac{1}{4} + \frac{4}{\pi^2}\sum_{k=1}^\infty\frac{\cos\frac{(2k-1)\pi x}{2}}{(2k-1)^2} + \frac{2}{\pi^2}\sum_{k=1}^\infty\frac{\cos((2k-1)\pi x)}{(2k-1)^2}.$$
This representation holds for all $x\in\mathbb{R}$. In particular, for all $x\in[1,3]$ we have
$$\frac{1}{4} + \frac{4}{\pi^2}\sum_{k=1}^\infty\frac{\cos\frac{(2k-1)\pi x}{2}}{(2k-1)^2} + \frac{2}{\pi^2}\sum_{k=1}^\infty\frac{\cos((2k-1)\pi x)}{(2k-1)^2} = 0.$$
3 One-dimensional Heat Equation

Let us consider a heat conduction problem for a straight bar of uniform cross section and homogeneous material. Let $x = 0$ and $x = L$ denote the ends of the bar (the $x$-axis is chosen to lie along the axis of the bar). Suppose that no heat passes through the sides of the bar. We also assume that the cross-sectional dimensions are so small that the temperature $u$ can be considered the same on any given cross section. Then $u$ is a function only of the coordinate $x$ and the time $t$. The variation of temperature in the bar is governed by the partial differential equation
$$\alpha^2u_{xx} = u_t, \quad 0<x<L,\ t>0, \tag{3.1}$$
where $\alpha^2$ is a constant known as the thermal diffusivity. This equation is called the heat conduction equation or heat equation.

In addition, we assume that the initial temperature distribution in the bar is given by
$$u(x,0) = f(x), \quad 0\le x\le L, \tag{3.2}$$
where $f$ is a given function. Finally, we assume that the temperature at each end of the bar is given by
$$u(0,t) = g_0(t), \quad u(L,t) = g_1(t), \quad t>0, \tag{3.3}$$
where $g_0$ and $g_1$ are given functions. The problem (3.1), (3.2), (3.3) is an initial value problem in the time variable $t$. With respect to the space variable $x$ it is a boundary value problem, and (3.3) are called the boundary conditions. Alternatively, this problem can be considered as a boundary value problem in the $xt$-plane:

[Figure: the problem in the $xt$-plane, with initial data $u(x,0) = f(x)$ on $0\le x\le L$ and boundary conditions on the lines $x = 0$ and $x = L$.]
We start by considering homogeneous boundary conditions, when the functions $g_0(t)$ and $g_1(t)$ in (3.3) are identically zero:
$$\begin{cases}\alpha^2u_{xx} = u_t, & 0<x<L,\ t>0\\ u(0,t) = u(L,t) = 0, & t>0\\ u(x,0) = f(x), & 0\le x\le L.\end{cases} \tag{3.4}$$
We look for a solution of the form
$$u(x,t) = X(x)T(t). \tag{3.5}$$
This method is called separation of variables. Substituting (3.5) into (3.1) yields
$$\alpha^2X''(x)T(t) = X(x)T'(t)$$
or
$$\frac{X''(x)}{X(x)} = \frac{1}{\alpha^2}\frac{T'(t)}{T(t)},$$
in which the variables are separated: the left-hand side depends only on $x$ and the right-hand side only on $t$. This is possible only when both sides are equal to the same constant:
$$\frac{X''}{X} = \frac{1}{\alpha^2}\frac{T'}{T} = -\lambda.$$
Hence, we obtain two ordinary differential equations for $X(x)$ and $T(t)$:
$$X'' + \lambda X = 0, \quad T' + \alpha^2\lambda T = 0. \tag{3.6}$$
The boundary condition for $u(x,t)$ at $x = 0$ leads to
$$X(0)T(t) = 0, \quad t>0.$$
It follows that
$$X(0) = 0$$
(since otherwise $T\equiv0$ and so $u\equiv0$, which we do not want). Similarly, the boundary condition at $x = L$ requires that
$$X(L) = 0.$$
So, for the function $X(x)$ we obtain the homogeneous boundary value problem
$$\begin{cases}X'' + \lambda X = 0, & 0<x<L\\ X(0) = X(L) = 0.\end{cases} \tag{3.7}$$
The values of $\lambda$ for which nontrivial solutions of (3.7) exist are called eigenvalues, and the corresponding nontrivial solutions are called eigenfunctions. The problem (3.7) is called an eigenvalue problem.
Lemma 1. The problem (3.7) has an infinite sequence of positive eigenvalues
$$\lambda_n = \frac{n^2\pi^2}{L^2}, \quad n = 1,2,\ldots,$$
with the corresponding eigenfunctions
$$X_n(x) = c\sin\frac{n\pi x}{L},$$
where $c$ is an arbitrary nonzero constant.

Proof. Suppose first that $\lambda>0$, i.e. $\lambda = \mu^2$. The characteristic equation for (3.7) is $r^2+\mu^2 = 0$ with roots $r = \pm i\mu$, so the general solution is
$$X(x) = c_1\cos\mu x + c_2\sin\mu x.$$
Note that $\mu$ is nonzero and there is no loss of generality if we assume that $\mu>0$. The first boundary condition in (3.7) implies
$$X(0) = c_1 = 0,$$
while the second gives $c_2\sin\mu L = 0$; for a nontrivial solution we must have $\sin\mu L = 0$, i.e.
$$\mu L = n\pi, \quad n = 1,2,\ldots$$
or
$$\lambda_n = \frac{n^2\pi^2}{L^2}, \quad n = 1,2,\ldots.$$
Hence the corresponding eigenfunctions are
$$X_n(x) = c\sin\frac{n\pi x}{L}.$$
If $\lambda = -\mu^2<0$, $\mu>0$, then the characteristic equation for (3.7) is $r^2-\mu^2 = 0$ with roots $r = \pm\mu$. Hence the general solution is
$$X(x) = c_1\cosh\mu x + c_2\sinh\mu x.$$
Since
$$\cosh\mu x = \frac{e^{\mu x}+e^{-\mu x}}{2} \quad\text{and}\quad \sinh\mu x = \frac{e^{\mu x}-e^{-\mu x}}{2},$$
this is equivalent to
$$X(x) = c_1'e^{\mu x} + c_2'e^{-\mu x}.$$
The first boundary condition requires again that $c_1 = 0$, while the second gives
$$c_2\sinh\mu L = 0.$$
Since $\mu\ne0$ ($\mu>0$), it follows that $\sinh\mu L\ne0$ and therefore we must have $c_2 = 0$. Consequently, $X\equiv0$, i.e. there are no nontrivial solutions for $\lambda<0$.

If $\lambda = 0$ the general solution is
$$X(x) = c_1x + c_2.$$
The boundary conditions can be satisfied only if $c_1 = c_2 = 0$, so there is only the trivial solution in this case as well.

Turning now to (3.6) for $T(t)$ and substituting $\frac{n^2\pi^2}{L^2}$ for $\lambda$ we have
$$T(t) = ce^{-\left(\frac{n\pi\alpha}{L}\right)^2t}.$$
Hence each function
$$u_n(x,t) = e^{-\left(\frac{n\pi\alpha}{L}\right)^2t}\sin\frac{n\pi x}{L}, \quad n = 1,2,\ldots, \tag{3.8}$$
satisfies the heat equation and the homogeneous boundary conditions, and any finite linear combination
$$\sum_{n=1}^Nc_ne^{-\left(\frac{n\pi\alpha}{L}\right)^2t}\sin\frac{n\pi x}{L}$$
is also a solution of the same problem. In order to take into account infinitely many functions (3.8) we assume that
$$u(x,t) = \sum_{n=1}^\infty c_ne^{-\left(\frac{n\pi\alpha}{L}\right)^2t}\sin\frac{n\pi x}{L}, \tag{3.9}$$
where the coefficients $c_n$ are yet undetermined and the series converges in some sense. To satisfy the initial condition from (3.4) we must have
$$u(x,0) = \sum_{n=1}^\infty c_n\sin\frac{n\pi x}{L} = f(x), \quad 0\le x\le L. \tag{3.10}$$
In other words, we need to choose the coefficients $c_n$ so that the series (3.10) converges to the initial temperature distribution $f(x)$.

It is not difficult to prove that for $t>0$, $0<x<L$, the series (3.9) converges (together with all its derivatives with respect to $x$ and $t$) and solves (3.1) with the boundary conditions (3.4). Only one question remains: can any function $f(x)$ be represented by a Fourier sine series (3.10)? Some sufficient conditions for such a representation are given in Theorem 1 of Chapter 2.
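For a concrete initial temperature the truncated series (3.9), with coefficients determined by the sine expansion (3.10), is straightforward to evaluate. The sketch below is an addition to the notes; the choices $\alpha = 1$, $L = 1$ and $f(x) = x(1-x)$ are illustrative assumptions.

```python
# Evaluate the truncated series solution (3.9) of the heat problem (3.4).
import numpy as np
from scipy.integrate import quad

L, alpha = 1.0, 1.0
f = lambda x: x*(1 - x)

def heat_solution(x, t, N=50):
    u = 0.0
    for n in range(1, N + 1):
        # Fourier sine coefficient of f, as required by (3.10)
        cn = (2/L) * quad(lambda s: f(s)*np.sin(n*np.pi*s/L), 0, L)[0]
        u += cn * np.exp(-(n*np.pi*alpha/L)**2 * t) * np.sin(n*np.pi*x/L)
    return u

print(heat_solution(0.5, 0.0))   # ~ f(0.5) = 0.25
print(heat_solution(0.5, 0.1))   # decays towards 0 as t grows
```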
Remark. We can consider the boundary value problem for any linear differential equation
$$y'' + p(x)y' + q(x)y = g(x) \tag{3.11}$$
of order two on the interval $(a,b)$ with the boundary conditions
$$y(a) = y_0, \quad y(b) = y_1, \tag{3.12}$$
where $y_0$ and $y_1$ are given constants. Let us assume that we have found a fundamental set of solutions $y_1(x)$ and $y_2(x)$ of the corresponding homogeneous equation
$$y'' + p(x)y' + q(x)y = 0.$$
Then the general solution of (3.11) is
$$y(x) = c_1y_1(x) + c_2y_2(x) + y_p(x),$$
where $y_p(x)$ is a particular solution of (3.11) and $c_1$ and $c_2$ are arbitrary constants.

To satisfy the boundary conditions (3.12) we have the linear nonhomogeneous algebraic system
$$\begin{cases}c_1y_1(a) + c_2y_2(a) = y_0 - y_p(a)\\ c_1y_1(b) + c_2y_2(b) = y_1 - y_p(b).\end{cases} \tag{3.13}$$
If the determinant
$$\begin{vmatrix}y_1(a) & y_2(a)\\ y_1(b) & y_2(b)\end{vmatrix}$$
is nonzero, then the constants $c_1$ and $c_2$ can be determined uniquely and therefore the boundary value problem (3.11)-(3.12) has a unique solution. If
$$\begin{vmatrix}y_1(a) & y_2(a)\\ y_1(b) & y_2(b)\end{vmatrix} = 0$$
If
$$\begin{vmatrix}0 & 1\\ \sin\mu & \cos\mu\end{vmatrix} \ne 0,$$
i.e. $\sin\mu\ne0$, then $c_1$ is uniquely determined and the boundary value problem in question has a unique solution. If $\sin\mu = 0$ then the problem has solutions (actually, infinitely many) if and only if
$$y_1 - \frac{1}{\mu^2} = \Bigl(y_0 - \frac{1}{\mu^2}\Bigr)\cos\mu.$$
If $\mu = 2\pi k$ then $\sin\mu = 0$ and $\cos\mu = 1$, and the following equation must hold:
$$y_1 - \frac{1}{\mu^2} = y_0 - \frac{1}{\mu^2},$$
i.e. $y_1 = y_0$. If $\mu = \pi + 2\pi k$ then $\sin\mu = 0$ and $\cos\mu = -1$, and we must have
$$y_1 + y_0 = \frac{2}{\mu^2}.$$
Suppose now that one end of the bar is held at a constant temperature $T_1$ and the other is maintained at a constant temperature $T_2$. The corresponding boundary value problem is then
$$\begin{cases}\alpha^2u_{xx} = u_t, & 0<x<L,\ t>0\\ u(0,t) = T_1,\quad u(L,t) = T_2, & t>0\\ u(x,0) = f(x).\end{cases} \tag{3.14}$$
Then $w(x,t)$ satisfies the homogeneous boundary value problem for the heat equation:
$$\begin{cases}\alpha^2w_{xx} = w_t, & 0<x<L,\ t>0\\ w(0,t) = w(L,t) = 0 &\\ w(x,0) = \tilde{f}(x).\end{cases} \tag{3.17}$$
The boundary conditions yield $c_2 = T_1$ and
$$c_1 = \frac{1}{L}\Bigl(T_2 - T_1 - \frac{1}{\alpha^2}\int_0^Ldy\int_0^yp(s)\,ds\Bigr).$$
A different problem occurs if the ends of the bar are insulated so that there is no passage of heat through them. Thus, in the case of no heat flow, the boundary value problem is
$$\begin{cases}\alpha^2u_{xx} = u_t, & 0<x<L,\ t>0\\ u_x(0,t) = u_x(L,t) = 0, & t>0\\ u(x,0) = f(x).\end{cases} \tag{3.22}$$
This problem can also be solved by the method of separation of variables. If we let $u(x,t) = X(x)T(t)$ it follows that
$$X'' + \lambda X = 0, \quad T' + \alpha^2\lambda T = 0. \tag{3.23}$$
If $\lambda = -\mu^2<0$, $\mu>0$, then (3.23) for $X(x)$ becomes $X''-\mu^2X = 0$ with general solution
$$X(x) = c_1\sinh\mu x + c_2\cosh\mu x.$$
Therefore, the conditions (3.24) give $c_1 = 0$ and $c_2 = 0$, which is unacceptable. Hence $\lambda$ cannot be negative.

If $\lambda = 0$ then
$$X(x) = c_1x + c_2.$$
Thus $X'(0) = c_1 = 0$ and $X'(L) = 0$ for any $c_2$, leaving $c_2$ undetermined. Therefore $\lambda = 0$ is an eigenvalue, corresponding to the eigenfunction $X_0(x) = 1$. It follows from (3.23) that $T(t)$ is then also a constant. Hence, for $\lambda = 0$ we obtain the constant solution $u_0(x,t) = c_2$.
If $\lambda = \mu^2>0$ then $X''+\mu^2X = 0$ and consequently

Thus the unknown coefficients in (3.25) must be the Fourier coefficients in the Fourier cosine series of period $2L$ for the even extension of $f$. Hence
$$c_n = \frac{2}{L}\int_0^Lf(x)\cos\frac{n\pi x}{L}\,dx, \quad n = 0,1,2,\ldots,$$
and the series (3.25) provides the solution to the heat conduction problem (3.22) for a rod with insulated ends. The physical interpretation of the term
$$\frac{c_0}{2} = \frac{1}{L}\int_0^Lf(x)\,dx$$
is that it is the mean value of the original temperature distribution.

Exercise 16. Let $v(x)$ be a solution of the problem
$$\begin{cases}v''(x) = 0, & 0<x<L\\ v'(0) = T_1,\quad v'(L) = T_2.\end{cases}$$
Show that the problem
$$\begin{cases}\alpha^2u_{xx} = u_t, & 0<x<L,\ t>0\\ u_x(0,t) = T_1,\quad u_x(L,t) = T_2, & t>0\\ u(x,0) = f(x)\end{cases}$$
Since
$$u(x,0) = \sum_{n=1}^\infty c_n\sin(n\pi x) = \sum_{n=1}^\infty\frac{1}{n^2}\sin(n\pi x),$$
we may conclude that necessarily $c_n = \frac{1}{n^2}$ (since the Fourier series is unique). Hence the solution is
$$u(x,t) = \sum_{n=1}^\infty\frac{1}{n^2}\sin(n\pi x)e^{-(n\pi)^2t}.$$
and
$$T' + \lambda T = 0, \quad t>0.$$
As above, one can show that (3.26) has nontrivial solutions only for $\lambda>0$, namely
$$\lambda_m = \frac{(2m-1)^2\pi^2}{4L^2}, \quad X_m(x) = \sin\frac{(2m-1)\pi x}{2L}, \quad m = 1,2,3,\ldots.$$
The solution to the mixed boundary value problem is
$$u(x,t) = \sum_{m=1}^\infty c_m\sin\frac{(2m-1)\pi x}{2L}e^{-\left(\frac{(2m-1)\pi\alpha}{2L}\right)^2t}$$
with arbitrary constants $c_m$. To satisfy the initial condition we need
$$f(x) = \sum_{m=1}^\infty c_m\sin\frac{(2m-1)\pi x}{2L}, \quad 0\le x\le L.$$
This is a Fourier sine series, but of a specific form. We show that such a representation is possible and that the coefficients $c_m$ can be calculated as
$$c_m = \frac{2}{L}\int_0^Lf(x)\sin\frac{(2m-1)\pi x}{2L}\,dx.$$
In order to prove this, let us first extend $f(x)$ to the interval $0\le x\le2L$ so that it is symmetric about $x = L$, i.e. $f(2L-x) = f(x)$ for $0\le x\le L$. Then extend the resulting function to the interval $(-2L,0)$ as an odd function and elsewhere as a periodic function $\tilde{f}$ of period $4L$. In this procedure we need to define
$$\tilde{f}(0) = \tilde{f}(2L) = \tilde{f}(-2L) = 0.$$
Then the Fourier series contains only sines:
$$\tilde{f}(x) = \sum_{n=1}^\infty c_n\sin\frac{n\pi x}{2L}$$
with the Fourier coefficients
$$c_n = \frac{2}{2L}\int_0^{2L}\tilde{f}(x)\sin\frac{n\pi x}{2L}\,dx.$$
Let us show that $c_n = 0$ for even $n = 2m$. Indeed,
$$c_{2m} = \frac{1}{L}\int_0^{2L}\tilde{f}(x)\sin\frac{m\pi x}{L}\,dx = \frac{1}{L}\int_0^Lf(x)\sin\frac{m\pi x}{L}\,dx + \frac{1}{L}\int_L^{2L}f(2L-x)\sin\frac{m\pi x}{L}\,dx$$
$$= \frac{1}{L}\int_0^Lf(x)\sin\frac{m\pi x}{L}\,dx - \frac{1}{L}\int_L^0f(y)\sin\frac{m\pi(2L-y)}{L}\,dy$$
$$= \frac{1}{L}\int_0^Lf(x)\sin\frac{m\pi x}{L}\,dx + \frac{1}{L}\int_L^0f(y)\sin\frac{m\pi y}{L}\,dy = 0.$$
That's why
$$\tilde{f}(x) = \sum_{m=1}^\infty c_{2m-1}\sin\frac{(2m-1)\pi x}{2L},$$
where
$$c_{2m-1} = \frac{1}{L}\int_0^{2L}\tilde{f}(x)\sin\frac{(2m-1)\pi x}{2L}\,dx$$
$$= \frac{1}{L}\int_0^Lf(x)\sin\frac{(2m-1)\pi x}{2L}\,dx + \frac{1}{L}\int_L^{2L}f(2L-x)\sin\frac{(2m-1)\pi x}{2L}\,dx$$
$$= \frac{2}{L}\int_0^Lf(x)\sin\frac{(2m-1)\pi x}{2L}\,dx,$$
as claimed. Let us remark that the series
$$\sum_{m=1}^\infty c_m\sin\frac{(2m-1)\pi x}{2L}$$
$$u_x(0,t) = u(L,t) = 0$$
4 One-dimensional Wave Equation
Another situation in which the separation of variables applies occurs in the study of a
vibrating string. Suppose that an elastic string of length L is tightly stretched between
two supports, so that the x-axis lies along the string. Let u(x, t) denote the vertical
displacement experienced by the string at the point x at time t. It turns out that if
damping effects are neglected, and if the amplitude of the motion is not too large, then
u(x, t) satisfies the partial differential equation
$$a^2u_{xx} = u_{tt}, \quad 0<x<L,\ t>0. \tag{4.1}$$
Equation (4.1) is known as the one-dimensional wave equation. The constant $a^2 = T/\rho$, where $T$ is the tension in the string and $\rho$ is the mass per unit length of the string material.

[Figure: an elastic string stretched along the $x$-axis between $x = 0$ and $x = L$, with vertical displacement $u(x,t)$.]
To describe the motion completely it is necessary also to specify suitable initial and
boundary conditions for the displacement u(x, t). The ends are assumed to remain
fixed:
u(0, t) = u(L, t) = 0, t ≥ 0. (4.2)
The initial conditions are (since (4.1) is of second order with respect to t):
u(x, 0) = f (x), ut (x, 0) = g(x), 0 ≤ x ≤ L, (4.3)
where f and g are given functions. In order for (4.2) and (4.3) to be consistent it is
also necessary to require that
f (0) = f (L) = g(0) = g(L) = 0. (4.4)
Equations (4.1)-(4.4) can be interpreted as the following boundary value problem for
the wave equation:
[Figure: the boundary value problem in the $xt$-plane, with initial data $u(x,0) = f(x)$, $u_t(x,0) = g(x)$ on $0\le x\le L$ and boundary conditions at $x = 0$ and $x = L$.]
Let us apply the method of separation of variables to this homogeneous boundary value problem. Assuming that $u(x,t) = X(x)T(t)$ we obtain
$$X'' + \lambda X = 0, \quad T'' + a^2\lambda T = 0.$$
This is the same boundary value problem for $X$ that we have considered before. Hence,
$$\lambda_n = \frac{n^2\pi^2}{L^2}, \quad X_n(x) = \sin\frac{n\pi x}{L}, \quad n = 1,2,\ldots.$$
Taking $\lambda = \lambda_n$ in the equation for $T(t)$ we have
$$T''(t) + \Bigl(\frac{n\pi a}{L}\Bigr)^2T(t) = 0.$$
The general solution of this equation is
$$T(t) = k_1\cos\frac{n\pi at}{L} + k_2\sin\frac{n\pi at}{L},$$
where $k_1$ and $k_2$ are arbitrary constants. Using the linear superposition principle we consider the infinite sum
$$u(x,t) = \sum_{n=1}^\infty\sin\frac{n\pi x}{L}\Bigl(a_n\cos\frac{n\pi at}{L} + b_n\sin\frac{n\pi at}{L}\Bigr), \tag{4.5}$$
where the coefficients $a_n$ and $b_n$ are to be determined. It is clear that $u(x,t)$ from (4.5) satisfies (4.1) and (4.2) (at least formally). The initial conditions (4.3) imply
$$f(x) = \sum_{n=1}^\infty a_n\sin\frac{n\pi x}{L}, \quad g(x) = \sum_{n=1}^\infty\frac{n\pi a}{L}b_n\sin\frac{n\pi x}{L}, \quad 0\le x\le L. \tag{4.6}$$
Since (4.4) is fulfilled, (4.6) are the Fourier sine series for $f$ and $g$, respectively. Therefore,
$$a_n = \frac{2}{L}\int_0^Lf(x)\sin\frac{n\pi x}{L}\,dx, \quad b_n = \frac{2}{n\pi a}\int_0^Lg(x)\sin\frac{n\pi x}{L}\,dx. \tag{4.7}$$
Finally, we may conclude that the series (4.5) with the coefficients (4.7) solves (at least formally) the boundary value problem (4.1)-(4.4).

Each displacement pattern
$$u_n(x,t) = \sin\frac{n\pi x}{L}\Bigl(a_n\cos\frac{n\pi at}{L} + b_n\sin\frac{n\pi at}{L}\Bigr)$$
is called a natural mode of vibration and is periodic in both the space variable $x$ and the time variable $t$. The spatial period $\frac{2L}{n}$ in $x$ is called the wavelength, while the numbers $\frac{n\pi a}{L}$ are called the natural frequencies.

Exercise 18. Find a solution of the problem
$$\begin{cases}u_{xx} = u_{tt}, & 0<x<1,\ t>0\\ u(0,t) = u(1,t) = 0, & t\ge0\\ u(x,0) = x(1-x),\quad u_t(x,0) = \sin(7\pi x).\end{cases}$$
Comparing the corresponding series for the wave and heat equations, we can see that the second series has an exponential factor that decays fast with $n$ for any $t>0$. This guarantees the convergence of the series as well as the smoothness of the sum. This is no longer true for the first series, because it contains only oscillatory terms that do not decay with increasing $n$.

The boundary value problem for the wave equation with free ends of the string can be formulated as follows:
$$\begin{cases}a^2u_{xx} = u_{tt}, & 0<x<L,\ t>0\\ u_x(0,t) = u_x(L,t) = 0, & t\ge0\\ u(x,0) = f(x),\quad u_t(x,0) = g(x), & 0\le x\le L.\end{cases}$$
Let us first note that the boundary conditions imply that $f(x)$ and $g(x)$ must satisfy
and the formal solution $u(x,t)$ is
$$u(x,t) = \frac{b_0t + a_0}{2} + \sum_{n=1}^\infty\cos\frac{n\pi x}{L}\Bigl(a_n\cos\frac{n\pi at}{L} + b_n\sin\frac{n\pi at}{L}\Bigr).$$
The initial conditions are satisfied when
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n\cos\frac{n\pi x}{L}$$
and
$$g(x) = \frac{b_0}{2} + \sum_{n=1}^\infty\frac{n\pi a}{L}b_n\cos\frac{n\pi x}{L},$$
where
$$a_n = \frac{2}{L}\int_0^Lf(x)\cos\frac{n\pi x}{L}\,dx, \quad n = 0,1,2,\ldots,$$
$$b_0 = \frac{2}{L}\int_0^Lg(x)\,dx$$
and
$$b_n = \frac{2}{n\pi a}\int_0^Lg(x)\cos\frac{n\pi x}{L}\,dx, \quad n = 1,2,\ldots.$$
Let us consider the wave equation on the whole line. It corresponds, so to speak, to an infinite string. In that case we no longer have boundary conditions, but we have the initial conditions:
$$\begin{cases}a^2u_{xx} = u_{tt}, & -\infty<x<\infty,\ t>0\\ u(x,0) = f(x),\quad u_t(x,0) = g(x).\end{cases} \tag{4.8}$$
Proposition. The solution $u(x,t)$ of the wave equation is of the form
$$u(x,t) = \varphi(x-at) + \psi(x+at),$$
where $\varphi$ and $\psi$ are two arbitrary $C^2$ functions of one variable.

Proof. By the chain rule
$$\partial_{tt}u - a^2\partial_{xx}u = 0$$
if and only if
$$\partial_\xi\partial_\eta u = 0,$$
where $\xi = x + at$ and $\eta = x - at$ (so that $\partial_x = \partial_\xi + \partial_\eta$, $\frac{1}{a}\partial_t = \partial_\xi - \partial_\eta$). It follows that
$$\partial_\xi u = \Psi(\xi)$$
or
$$u = \psi(\xi) + \varphi(\eta),$$
where $\psi' = \Psi$.
To satisfy the initial conditions we must have
$$\varphi(x) + \psi(x) = f(x), \quad -a\varphi'(x) + a\psi'(x) = g(x).$$
It follows that
$$\varphi'(x) = \frac{1}{2}f'(x) - \frac{1}{2a}g(x), \quad \psi'(x) = \frac{1}{2}f'(x) + \frac{1}{2a}g(x).$$
Integrating, we obtain
$$\varphi(x) = \frac{1}{2}f(x) - \frac{1}{2a}\int_0^xg(s)\,ds + c_1, \quad \psi(x) = \frac{1}{2}f(x) + \frac{1}{2a}\int_0^xg(s)\,ds + c_2,$$
where $c_1$ and $c_2$ are arbitrary constants. But $\varphi(x)+\psi(x) = f(x)$ implies $c_1+c_2 = 0$. Therefore the solution of the initial value problem is
$$u(x,t) = \frac{1}{2}\bigl(f(x-at) + f(x+at)\bigr) + \frac{1}{2a}\int_{x-at}^{x+at}g(s)\,ds. \tag{4.9}$$
Exercise 20. Prove that if $f$ and $g$ are merely locally integrable, then $u$ from (4.9) is a distributional solution of (4.8) and the initial conditions are satisfied pointwise.
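The d'Alembert formula (4.9) is easy to evaluate numerically for given data. The following sketch is an addition to the notes; the choices $a = 1$, $f(x) = e^{-x^2}$ and $g \equiv 0$ are illustrative.

```python
# Evaluate the d'Alembert formula (4.9) with numerical quadrature.
import numpy as np
from scipy.integrate import quad

a = 1.0
f = lambda x: np.exp(-x**2)
g = lambda x: 0.0

def dalembert(x, t):
    integral = quad(g, x - a*t, x + a*t)[0]
    return 0.5*(f(x - a*t) + f(x + a*t)) + integral/(2*a)

print(dalembert(0.0, 0.0))   # = f(0) = 1
print(dalembert(0.0, 2.0))   # two half-bumps centred at x = +/- 2
```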
where
$$f(x) = \begin{cases}1, & |x|\le1\\ 0, & |x|>1,\end{cases}$$
is given by the d'Alembert formula
$$u(x,t) = \frac{1}{2}\bigl(f(x-t) + f(x+t)\bigr).$$
Some solutions are graphed below.

[Figure: graphs of $u(x,t)$ at the times $t = 0$, $t = \tfrac{1}{2}$ and $t = 2$.]
We can apply the d'Alembert formula to the finite string as well. Consider again the boundary value problem with homogeneous boundary conditions and fixed ends of the string:
$$\begin{cases}a^2u_{xx} = u_{tt}, & 0<x<L,\ t>0\\ u(0,t) = u(L,t) = 0, & t\ge0\\ u(x,0) = f(x),\quad u_t(x,0) = g(x), & 0\le x\le L,\end{cases}$$
$$f(0) = f(L) = g(0) = g(L) = 0.$$
Let $h(x)$ be the function defined for all $x\in\mathbb{R}$ such that
$$h(x) = \begin{cases}f(x), & 0\le x\le L\\ -f(-x), & -L\le x\le0\end{cases}$$
and $2L$-periodic, and let $k(x)$ be the function defined for all $x\in\mathbb{R}$ such that
$$k(x) = \begin{cases}g(x), & 0\le x\le L\\ -g(-x), & -L\le x\le0\end{cases}$$
and $2L$-periodic. Let us also assume that $f$ and $g$ are $C^2$ functions on the interval $[0,L]$. Then the solution to the boundary value problem is given by the d'Alembert formula
$$u(x,t) = \frac{1}{2}\bigl(h(x-at) + h(x+at)\bigr) + \frac{1}{2a}\int_{x-at}^{x+at}k(s)\,ds.$$
Remark. It can be checked that this solution is equivalent to the solution given by the Fourier series.
5 Laplace Equation in Rectangle and in Disk

One of the most important of all partial differential equations in applied mathematics is the Laplace equation:
$$u_{xx} + u_{yy} = 0 \quad\text{(2D equation)}, \qquad u_{xx} + u_{yy} + u_{zz} = 0 \quad\text{(3D equation)}. \tag{5.1}$$
The Laplace equation appears quite naturally in many applications. For example, a steady state solution of the heat equation in two space dimensions
$$\alpha^2(u_{xx} + u_{yy}) = u_t$$
satisfies the 2D Laplace equation (5.1). When considering electrostatic fields, the electric potential function must satisfy either the 2D or the 3D equation (5.1).

A typical boundary value problem for the Laplace equation is (in dimension two):
$$\begin{cases}u_{xx} + u_{yy} = 0, & (x,y)\in\Omega\subset\mathbb{R}^2\\ u(x,y) = f(x,y), & (x,y)\in\partial\Omega,\end{cases} \tag{5.2}$$
where $f$ is a given function on the boundary $\partial\Omega$ of the domain $\Omega$. The problem (5.2) is called the Dirichlet problem (Dirichlet boundary conditions). The problem
$$\begin{cases}u_{xx} + u_{yy} = 0, & (x,y)\in\Omega\\ \dfrac{\partial u}{\partial\nu}(x,y) = g(x,y), & (x,y)\in\partial\Omega,\end{cases}$$
for fixed $a>0$ and $b>0$. The solution of this problem can be reduced to the solutions of
$$\begin{cases}u_{xx} + u_{yy} = 0, & 0<x<a,\ 0<y<b\\ u(x,0) = u(x,b) = 0, & 0<x<a\\ u(0,y) = g(y),\quad u(a,y) = f(y), & 0\le y\le b,\end{cases} \tag{5.3}$$
and
$$\begin{cases}u_{xx} + u_{yy} = 0, & 0<x<a,\ 0<y<b\\ u(x,0) = g_1(x),\quad u(x,b) = f_1(x), & 0<x<a\\ u(0,y) = 0,\quad u(a,y) = 0, & 0\le y\le b.\end{cases}$$
Due to the symmetry in $x$ and $y$ we consider (5.3) only.

[Figure: the rectangle $0<x<a$, $0<y<b$ with the homogeneous data $u(x,0) = u(x,b) = 0$ indicated.]
and
$$X'' - \lambda X = 0, \quad 0<x<a. \tag{5.5}$$
From (5.4) one obtains the eigenvalues and eigenfunctions
$$\lambda_n = \Bigl(\frac{n\pi}{b}\Bigr)^2, \quad Y_n(y) = \sin\frac{n\pi y}{b}, \quad n = 1,2,\ldots.$$
Substituting $\lambda_n$ into (5.5) we get the general solution
$$X(x) = c_1\cosh\frac{n\pi x}{b} + c_2\sinh\frac{n\pi x}{b}.$$
As above, we represent the solution to (5.3) in the form
$$u(x,y) = \sum_{n=1}^\infty\sin\frac{n\pi y}{b}\Bigl(a_n\cosh\frac{n\pi x}{b} + b_n\sinh\frac{n\pi x}{b}\Bigr). \tag{5.6}$$
The boundary condition at $x = 0$ gives
$$g(y) = \sum_{n=1}^\infty a_n\sin\frac{n\pi y}{b},$$
with
$$a_n = \frac{2}{b}\int_0^bg(y)\sin\frac{n\pi y}{b}\,dy.$$
At $x = a$ we obtain
$$f(y) = \sum_{n=1}^\infty\sin\frac{n\pi y}{b}\Bigl(a_n\cosh\frac{n\pi a}{b} + b_n\sinh\frac{n\pi a}{b}\Bigr).$$
Dirichlet problem for a disk

Consider the problem of solving the Laplace equation in the disk $\{x\in\mathbb{R}^2 : |x|<a\}$ subject to the boundary condition
$$u(a,\theta) = f(\theta), \tag{5.8}$$
where $f$ is a given function on $0\le\theta\le2\pi$. In polar coordinates $x = r\cos\theta$, $y = r\sin\theta$, the Laplace equation takes the form
$$u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0. \tag{5.9}$$
We apply again the method of separation of variables and assume that
$$u(r,\theta) = R(r)T(\theta). \tag{5.10}$$
It is possible to show that (5.11) requires $\lambda$ to be real. In what follows we consider the three possible cases.

If $\lambda = -\mu^2<0$, $\mu>0$, then the equation for $T$ becomes $T''-\mu^2T = 0$ and consequently
$$T(\theta) = c_1e^{\mu\theta} + c_2e^{-\mu\theta}.$$
It follows from (5.11) that
$$\begin{cases}c_1 + c_2 = c_1e^{2\pi\mu} + c_2e^{-2\pi\mu}\\ c_1 - c_2 = c_1e^{2\pi\mu} - c_2e^{-2\pi\mu},\end{cases}$$
so that $c_1 = c_2 = 0$.

If $\lambda = 0$ then $T'' = 0$ and $T(\theta) = c_1 + c_2\theta$. The first condition in (5.11) then implies that $c_2 = 0$ and therefore $T(\theta)\equiv$ constant.

If $\lambda = \mu^2>0$, $\mu>0$, then
Now the conditions (5.11) imply that
$$\begin{cases}c_1 = c_1\cos(2\pi\mu) + c_2\sin(2\pi\mu)\\ c_2 = -c_1\sin(2\pi\mu) + c_2\cos(2\pi\mu)\end{cases}$$
or
$$\begin{cases}c_1\sin^2(\mu\pi) = c_2\sin(\mu\pi)\cos(\mu\pi)\\ c_2\sin^2(\mu\pi) = -c_1\sin(\mu\pi)\cos(\mu\pi).\end{cases}$$
If $\sin(\mu\pi)\ne0$ then
$$\begin{cases}c_1 = c_2\cot(\mu\pi)\\ c_2 = -c_1\cot(\mu\pi).\end{cases}$$
Hence $c_1^2 + c_2^2 = 0$, i.e. $c_1 = c_2 = 0$. Thus we must have $\sin(\mu\pi) = 0$ and so
$$\lambda_n = n^2, \quad T_n(\theta) = c_1\cos(n\theta) + c_2\sin(n\theta), \quad n = 0,1,2,\ldots. \tag{5.12}$$
Turning now to $R$, for $\lambda = 0$ we have $r^2R'' + rR' = 0$, i.e. $R(r) = k_1 + k_2\log r$. Since $\log r\to-\infty$ as $r\to0$ we must choose $k_2 = 0$ in order for $R$ (and $u$) to be bounded. That's why
$$R_0(r) \equiv \text{constant}. \tag{5.13}$$
For $\lambda = \mu^2 = n^2$ the equation for $R$ becomes
$$r^2R'' + rR' - n^2R = 0.$$
Hence
$$R(r) = k_1r^n + k_2r^{-n}.$$
Again, we must choose $k_2 = 0$ and therefore
$$R_n(r) = k_1r^n, \quad n = 1,2,\ldots. \tag{5.14}$$
Combining (5.10), (5.12), (5.13) and (5.14) we obtain
$$u(r,\theta) = \frac{a_0}{2} + \sum_{n=1}^\infty r^n\bigl(a_n\cos(n\theta) + b_n\sin(n\theta)\bigr). \tag{5.15}$$
and
$$b_n = \frac{1}{\pi a^n}\int_0^{2\pi}f(\theta)\sin(n\theta)\,d\theta.$$
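For a given boundary function the series (5.15), with the coefficient formulas above, can be summed numerically. The sketch below is an addition to the notes; the boundary data $f(\theta) = \cos^3\theta$, the radius $a = 1$ and the truncation order are our choices.

```python
# Sum the truncated series (5.15) for the Dirichlet problem in the unit disk,
# with a_n, b_n computed from the boundary data f(theta).
import numpy as np
from scipy.integrate import quad

a_radius = 1.0
f = lambda th: np.cos(th)**3          # sample boundary data

def disk_solution(r, theta, N=20):
    u = quad(f, 0, 2*np.pi)[0] / (2*np.pi)          # a_0/2 term
    for n in range(1, N + 1):
        an = quad(lambda t: f(t)*np.cos(n*t), 0, 2*np.pi)[0] / (np.pi*a_radius**n)
        bn = quad(lambda t: f(t)*np.sin(n*t), 0, 2*np.pi)[0] / (np.pi*a_radius**n)
        u += r**n * (an*np.cos(n*theta) + bn*np.sin(n*theta))
    return u

print(disk_solution(0.0, 0.0))   # value at the centre = mean of f = 0
print(disk_solution(0.9, 0.0))   # ~ 0.75*0.9 + 0.25*0.9**3 = 0.857 (exact series)
```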
This procedure can also be used to study the Neumann problem, i.e. the problem in the disk with the boundary condition
$$\frac{\partial u}{\partial r}(a,\theta) = f(\theta). \tag{5.16}$$
Also in this case the solution $u(r,\theta)$ has the form (5.15). The boundary condition (5.16) implies that
$$\frac{\partial u}{\partial r}(r,\theta)\Bigr|_{r=a} = \sum_{n=1}^\infty na^{n-1}\bigl(a_n\cos(n\theta) + b_n\sin(n\theta)\bigr) = f(\theta).$$
Hence
$$a_n = \frac{1}{\pi na^{n-1}}\int_0^{2\pi}f(\theta)\cos(n\theta)\,d\theta$$
and
$$b_n = \frac{1}{\pi na^{n-1}}\int_0^{2\pi}f(\theta)\sin(n\theta)\,d\theta.$$
Remark. For the Neumann problem a solution is defined only up to an arbitrary constant $\frac{a_0}{2}$. Moreover, $f$ must satisfy the consistency condition
$$\int_0^{2\pi}f(\theta)\,d\theta = 0,$$
since integrating
$$f(\theta) = \sum_{n=1}^\infty na^{n-1}\bigl(a_n\cos(n\theta) + b_n\sin(n\theta)\bigr)$$
6 The Laplace Operator

We consider what is perhaps the most important of all partial differential operators, the Laplace operator (Laplacian) on $\mathbb{R}^n$, defined by
$$\Delta = \sum_{j=1}^n\partial_j^2 \equiv \nabla\cdot\nabla.$$
We will start with a quite general fact about partial differential operators.

Proof. Let
$$L(x,\partial) \equiv \sum_{|\alpha|\le k}a_\alpha(x)\partial^\alpha.$$
This implies that the $a_\alpha(x)$ must be constants (because $a_\alpha(x)\equiv a_\alpha(x+h)$ for all $h$), say $a_\alpha$. Next, since $L$ now has constant coefficients we have (see Exercise 5)
$$\widehat{Lu}(\xi) = P(\xi)\hat{u}(\xi),$$
$$\widehat{u\circ T}(\xi) = (\hat{u}\circ T)(\xi).$$
Therefore
$$\widehat{(Lu)(Tx)}(\xi) = \widehat{Lu}(T\xi)$$
or
$$P(\xi)\widehat{u(Tx)}(\xi) = P(T\xi)\hat{u}(T\xi).$$
This forces
$$P(\xi) = P(T\xi).$$
Write $\xi = |\xi|\theta$, where $\theta\in S^{n-1} = \{x\in\mathbb{R}^n : |x| = 1\}$ is the direction of $\xi$. Then $T\xi = |\xi|\theta'$ with some $\theta'\in S^{n-1}$. But
$$0 = P(\xi) - P(T\xi) = P(|\xi|\theta) - P(|\xi|\theta')$$
shows that $P(\xi)$ does not depend on the angle $\theta$ of $\xi$. Therefore $P(\xi)$ is radial, that is,
$$P(\xi) = P_1(|\xi|) = \sum_{|\alpha|\le k}a_\alpha'|\xi|^{|\alpha|}.$$
But since we know that $P(\xi)$ is a polynomial, $|\alpha|$ must be even:
$$P(\xi) = \sum_ja_j|\xi|^{2j}.$$
Conversely, let
$$Lu = \sum_ja_j\Delta^ju.$$
It is clear by the chain rule that the Laplacian commutes with a translation $T_h$ and a rotation $T$. By induction the same is true for any power of $\Delta$ and so for $L$ as well.
Lemma 1. If $f(x) = \varphi(r)$, $r = |x|$, that is, $f$ is radial, then $\Delta f = \varphi''(r) + \frac{n-1}{r}\varphi'(r)$.

Proof. Since $\frac{\partial r}{\partial x_j} = \frac{x_j}{r}$ we have
$$\Delta f = \sum_{j=1}^n\partial_j\bigl(\partial_j\varphi(r)\bigr) = \sum_{j=1}^n\partial_j\Bigl(\frac{x_j}{r}\varphi'(r)\Bigr) = \sum_{j=1}^n\varphi'(r)\partial_j\frac{x_j}{r} + \sum_{j=1}^n\frac{x_j^2}{r^2}\varphi''(r)$$
$$= \sum_{j=1}^n\Bigl(\frac{1}{r} - \frac{x_j^2}{r^3}\Bigr)\varphi'(r) + \sum_{j=1}^n\frac{x_j^2}{r^2}\varphi''(r) = \frac{n}{r}\varphi'(r) - \frac{1}{r^3}\sum_{j=1}^nx_j^2\,\varphi'(r) + \varphi''(r) = \varphi''(r) + \frac{n-1}{r}\varphi'(r).$$
Corollary. If $f(x) = \varphi(r)$ then $\Delta f = 0$ on $\mathbb{R}^n\setminus\{0\}$ if and only if
$$\varphi(r) = \begin{cases}a + br^{2-n}, & n\ne2\\ a + b\log r, & n = 2.\end{cases}$$
Exercise 22. For $u, v\in C^2(\Omega)\cap C^1(\overline{\Omega})$ and for $S = \partial\Omega$ a surface of class $C^1$, prove the following Green's identities:

a)
$$\int_\Omega(v\Delta u - u\Delta v)\,dx = \int_S(v\partial_\nu u - u\partial_\nu v)\,d\sigma$$
b)
$$\int_\Omega(v\Delta u + \nabla v\cdot\nabla u)\,dx = \int_Sv\partial_\nu u\,d\sigma.$$
Exercise 23. Prove that if $u$ is harmonic on $\Omega$ and $u\in C^1(\overline{\Omega})$ then
$$\int_S\partial_\nu u\,d\sigma = 0.$$
2. If $\partial_\nu u = 0$ on $S$, then $u\equiv$ constant.

Proof. By passing to real and imaginary parts it suffices to consider real-valued functions. If we let $u = v$ in part b) of Exercise 22 we obtain
$$\int_\Omega|\nabla u|^2\,dx = \int_Su\,\partial_\nu u\,d\sigma(x).$$
Proof. Let us apply Green's identity a) with $u$ and $v = |y|^{2-n}$ if $n\ne2$, and $v = \log|y|$ if $n = 2$, in the domain
for the sphere. Since $u$ is harmonic, due to Exercise 23 we obtain from (6.1) that for any $\varepsilon>0$, $\varepsilon<r$,
$$\varepsilon^{1-n}\int_{|x-y|=\varepsilon}u\,d\sigma(y) = r^{1-n}\int_{|x-y|=r}u\,d\sigma(y).$$
That's why
$$\lim_{\varepsilon\to0}\varepsilon^{1-n}\int_{|x-y|=\varepsilon}u(y)\,d\sigma(y) = \lim_{\varepsilon\to0}\int_{|\theta|=1}u(x+\varepsilon\theta)\,d\theta = \omega_nu(x) = r^{1-n}\int_{|x-y|=r}u(y)\,d\sigma(y).$$
This proves the theorem, because the latter steps hold for $n = 2$ also.
Corollary. If $u$ and $r$ are as in Theorem 2 then
$$u(x) = \frac{n}{r^n\omega_n}\int_{|x-y|\le r}u(y)\,dy \equiv \frac{n}{\omega_n}\int_{|y|\le1}u(x+ry)\,dy, \quad x\in\Omega. \tag{6.2}$$
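The mean value property (6.2) is easy to check numerically for a concrete harmonic function. The snippet below is an added illustration; the function $u = x_1^2 - x_2^2$, the centre, the radius and the grid sizes are our choices.

```python
# Numerical check of the mean value property (6.2) in R^2 for the
# harmonic function u(x) = x1^2 - x2^2.
import numpy as np

def mean_over_disk(u, centre, r, m=400):
    # average of u over the disk |x - centre| <= r via a polar grid
    rho = np.linspace(0, r, m)
    phi = np.linspace(0, 2*np.pi, m)
    R, P = np.meshgrid(rho, phi)
    X = centre[0] + R*np.cos(P)
    Y = centre[1] + R*np.sin(P)
    vals = u(X, Y)*R                    # area element rho drho dphi
    return np.trapz(np.trapz(vals, rho, axis=1), phi) / (np.pi*r**2)

u = lambda x, y: x**2 - y**2
print(mean_over_disk(u, (0.7, -0.3), 0.5))   # ~ u(0.7, -0.3) = 0.40
```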
Corollary 2. If $\{u_k\}_{k=1}^\infty$ is a sequence of harmonic functions on an open set $\Omega\subset\mathbb{R}^n$
Proof. Since $u$ is continuous on $\Omega$, the set $\{x\in\Omega : u(x) = A\}$ is closed in $\Omega$. On the other hand, due to Theorem 2 (see (6.2)) we may conclude that if $u(x) = A$ at some point $x\in\Omega$ then $u(y) = A$ for all $y$ in a ball about $x$. Indeed, if $y_0\in B_{\sigma'}(x)$ and $u(y_0)<A$ then $u(y)<A$ for all $y$ in a small neighborhood of $y_0$. Hence, by the Corollary of Theorem 2,
$$u(x) = \frac{n}{r^n\omega_n}\int_{|x-y|\le r}u(y)\,dy = \frac{n}{r^n\omega_n}\int_{|x-y|\le r,\,|y_0-y|>\varepsilon}u(y)\,dy + \frac{n}{r^n\omega_n}\int_{|y-y_0|\le\varepsilon}u(y)\,dy$$
$$< A\Bigl(\frac{n}{r^n\omega_n}\int_{|x-y|\le r,\,|y_0-y|>\varepsilon}dy + \frac{n}{r^n\omega_n}\int_{|y-y_0|\le\varepsilon}dy\Bigr) = A\frac{n}{r^n\omega_n}\int_{|x-y|\le r}dy = A,$$
that is, $A<A$. This contradiction proves our statement. This fact also means that the set $\{x\in\Omega : u(x) = A\}$ is open. Hence it is either all of $\Omega$ (in which case $u\equiv A$ in $\Omega$) or the empty set (in which case $u(x)<A$ in $\Omega$).
Corollary 1. Suppose $\Omega\subset\mathbb{R}^n$ is open and bounded. If $u$ is real-valued and harmonic on $\Omega$ and continuous on $\overline{\Omega}$, then the maximum and minimum of $u$ on $\overline{\Omega}$ are achieved only on $\partial\Omega$.

Corollary 2 (The uniqueness theorem). Suppose $\Omega$ is as in Corollary 1. If $u_1$ and $u_2$ are harmonic on $\Omega$ and continuous on $\overline{\Omega}$ (possibly complex-valued) and $u_1 = u_2$ on $\partial\Omega$, then $u_1 = u_2$ in $\Omega$.

Proof. The real and imaginary parts of $u_1 - u_2$ and $u_2 - u_1$ are harmonic in $\Omega$. Hence they must achieve their maxima on $\partial\Omega$. These maxima are therefore zero, so $u_1\equiv u_2$.
Theorem 4 (Liouville's theorem). If $u$ is bounded and harmonic on $\mathbb{R}^n$ then $u\equiv$ constant.

Proof. For any $x\in\mathbb{R}^n$ and $|x|\le R$, by the Corollary of Theorem 2 we have
$$|u(x) - u(0)| = \frac{n}{R^n\omega_n}\Bigl|\int_{B_R(x)}u(y)\,dy - \int_{B_R(0)}u(y)\,dy\Bigr| \le \frac{n}{R^n\omega_n}\int_D|u(y)|\,dy,$$
where
$$D = \bigl(B_R(x)\setminus B_R(0)\bigr)\cup\bigl(B_R(0)\setminus B_R(x)\bigr)$$
is the symmetric difference of the balls $B_R(x)$ and $B_R(0)$. That's why we obtain
$$|u(x) - u(0)| \le \frac{n\|u\|_\infty}{R^n\omega_n}\int_{R-|x|\le|y|\le R+|x|}dy \le \frac{n\|u\|_\infty}{R^n\omega_n}\int_{R-|x|}^{R+|x|}r^{n-1}\,dr\int_{|\theta|=1}d\theta$$
$$= \frac{(R+|x|)^n - (R-|x|)^n}{R^n}\|u\|_\infty = O\Bigl(\frac{1}{R}\Bigr)\|u\|_\infty.$$
Hence the difference $|u(x) - u(0)|$ vanishes as $R\to\infty$, that is, $u(x) = u(0)$.
Definition. A fundamental solution for a partial differential operator $L$ is a distribution $K\in\mathcal{D}'$ such that
$$LK = \delta.$$
Remark. Note that a fundamental solution is not unique. Any two fundamental solutions differ by a solution of the homogeneous equation $Lu = 0$.

Exercise 25. Show that the characteristic function of the set
$$\{(x_1,x_2)\in\mathbb{R}^2 : x_1>0,\ x_2>0\}$$
Exercise 28 allows us to write
$$\langle\Delta K_\varepsilon,\varphi\rangle = \int_{\mathbb{R}^n}\varphi(x)\varepsilon^{-n}\psi(\varepsilon^{-1}x)\,dx = \int_{\mathbb{R}^n}\varphi(\varepsilon z)\psi(z)\,dz \to \varphi(0)\int_{\mathbb{R}^n}\psi(z)\,dz$$
$$\Delta K_\varepsilon\to\delta.$$
1. $f\in L^1(\mathbb{R}^n)$ if $n\ge3$;
2. $\int_{\mathbb{R}^2}|f(y)|\bigl(|\log|y|| + 1\bigr)\,dy < \infty$ if $n = 2$;
3. $\int_{\mathbb{R}}|f(y)|(1+|y|)\,dy < \infty$ if $n = 1$.
Hence $f*K\in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ by addition and we may calculate, for $\varphi\in C_0^\infty(\mathbb{R}^n)$,
$$\langle\Delta(f*K),\varphi\rangle = \langle f*K,\Delta\varphi\rangle = \int_{\mathbb{R}^n}(f*K)(x)\Delta\varphi(x)\,dx = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}f(y)K(x-y)\,dy\,\Delta\varphi(x)\,dx$$
$$= \int_{\mathbb{R}^n}f(y)\int_{\mathbb{R}^n}K(x-y)\Delta\varphi(x)\,dx\,dy = \int_{\mathbb{R}^n}f(y)\langle K(x-y),\Delta\varphi(x)\rangle\,dy$$
$$= \int_{\mathbb{R}^n}f(y)\langle\Delta K(x-y),\varphi(x)\rangle\,dy = \int_{\mathbb{R}^n}f(y)\langle\delta(x-y),\varphi(x)\rangle\,dy = \int_{\mathbb{R}^n}f(y)\varphi(y)\,dy = \langle f,\varphi\rangle.$$
Hence $\Delta(f*K) = f$.

As $\varepsilon\to0$ the right-hand side of this equation tends to the right-hand side of (6.5) for each $x\in\Omega$. This is because for $x\in\Omega$ and $y\in S$ there are no singularities in $K$. On the other hand, the left-hand side is just $(u*\Delta K_\varepsilon)(x)$ if we set $u\equiv0$ outside $\Omega$. According to the proof of Theorem 5,
$$(u*\Delta K_\varepsilon)(x)\to u(x), \quad \varepsilon\to0,$$
completing the proof.

Remark. If we know that $u = f$ and $\partial_\nu u = g$ on $S$ then
$$u(x) = \int_S\bigl(f(y)\partial_{\nu_y}K(x-y) - K(x-y)g(y)\bigr)\,d\sigma(y)$$
The following theorem concerns the spaces $C^\alpha(\Omega)$ and $C^{k,\alpha}(\Omega)$, which are defined by
$$\frac{\partial}{\partial x_j}K_\varepsilon(x) = \omega_n^{-1}x_j(|x|^2+\varepsilon^2)^{-n/2},$$
$$\frac{\partial^2}{\partial x_i\partial x_j}K_\varepsilon(x) = \omega_n^{-1}\begin{cases}-nx_ix_j(|x|^2+\varepsilon^2)^{-n/2-1}, & i\ne j\\ (|x|^2+\varepsilon^2-nx_j^2)(|x|^2+\varepsilon^2)^{-n/2-1}, & i = j,\end{cases} \tag{6.6}$$
$$\frac{\partial}{\partial x_j}K(x) = \omega_n^{-1}x_j|x|^{-n},$$
$$\frac{\partial^2}{\partial x_i\partial x_j}K(x) = \begin{cases}-n\omega_n^{-1}x_ix_j|x|^{-n-2}, & i\ne j\\ \omega_n^{-1}(|x|^2-nx_j^2)|x|^{-n-2}, & i = j,\end{cases} \tag{6.7}$$
for $x\ne0$. The formulae (6.7) show that $\partial_jK(x)$ is a locally integrable function, and since $g$ is bounded with compact support, $g*\partial_jK$ is continuous. Next, $g*\partial_jK_\varepsilon\to g*\partial_jK$ uniformly as $\varepsilon\to+0$. This is equivalent to $\partial_jK_\varepsilon\to\partial_jK$ in the topology of distributions (see the definition). Hence $\partial_j(g*K) = g*\partial_jK$.

This argument does not work for the second derivatives because $\partial_i\partial_jK(x)$ is not integrable. But there is a different procedure for these terms.

Let $i\ne j$. Then $\partial_i\partial_jK_\varepsilon(x)$ and $\partial_i\partial_jK(x)$ are odd functions of $x_i$ (and $x_j$), see (6.6) and (6.7). Due to this fact their integrals over any annulus $0<a<|x|<b$ vanish. For $K_\varepsilon$ we can even take $a = 0$.

Exercise 32. Prove this fact.
That's why for any $b>0$ we have
$$g*\partial_i\partial_jK_\varepsilon(x) = \int_{\mathbb{R}^n}g(x-y)\partial_i\partial_jK_\varepsilon(y)\,dy - g(x)\int_{|y|<b}\partial_i\partial_jK_\varepsilon(y)\,dy$$
$$= \int_{|y|<b}\bigl(g(x-y)-g(x)\bigr)\partial_i\partial_jK_\varepsilon(y)\,dy + \int_{|y|\ge b}g(x-y)\partial_i\partial_jK_\varepsilon(y)\,dy.$$
If we let $\varepsilon\to0$ we obtain
$$\lim_{\varepsilon\to0}g*\partial_i\partial_jK_\varepsilon(x) = \int_{|y|<b}\bigl(g(x-y)-g(x)\bigr)\partial_i\partial_jK(y)\,dy + \int_{|y|\ge b}g(x-y)\partial_i\partial_jK(y)\,dy.$$
Since the convergence in (6.8) and (6.9) is uniform, at this point we have shown that $g*K\in C^2$. But we need to prove more.
Lemma 2 (Calderón-Zygmund). Let $N$ be a $C^1$ function on $\mathbb{R}^n\setminus\{0\}$ that is homogeneous of degree $-n$ and satisfies
$$\int_{a<|y|<b}N(y)\,dy = 0$$
for any $0<a<b<\infty$. If $g$ is a $C^\alpha$ function with compact support, $0<\alpha<1$, then
$$h(x) = \lim_{b\to\infty}\int_{|z|<b}\bigl(g(x-z)-g(x)\bigr)N(z)\,dz$$
belongs to $C^\alpha$.

Proof. Let us write $h = h_1 + h_2$, where
$$h_1(x) = \int_{|z|\le3|y|}\bigl(g(x-z)-g(x)\bigr)N(z)\,dz, \quad h_2(x) = \lim_{b\to\infty}\int_{3|y|<|z|<b}\bigl(g(x-z)-g(x)\bigr)N(z)\,dz,$$
and hence
$$|h_1(x+y)-h_1(x)| \le |h_1(x+y)| + |h_1(x)| \le 2c'|y|^\alpha.$$
On the other hand,
$$h_2(x+y) - h_2(x) = \lim_{b\to\infty}\int_{3|y|<|z+y|<b}\bigl(g(x-z)-g(x)\bigr)N(z+y)\,dz - \lim_{b\to\infty}\int_{3|y|<|z|<b}\bigl(g(x-z)-g(x)\bigr)N(z)\,dz$$
$$= \lim_{b\to\infty}\int_{3|y|<|z|<b}\bigl(g(x-z)-g(x)\bigr)\bigl(N(z+y)-N(z)\bigr)\,dz + \lim_{b\to\infty}\int_{\{3|y|<|z+y|<b\}\setminus\{3|y|<|z|<b\}}\bigl(g(x-z)-g(x)\bigr)N(z+y)\,dz$$
$$= I_1 + I_2.$$
It is clear that
$$\{3|y|<|z+y|\}\setminus\{3|y|<|z|\} \subset \{2|y|<|z|\}\setminus\{3|y|<|z|\} = \{2|y|<|z|\le3|y|\}.$$
That's why
$$|I_2| \le \int_{2|y|<|z|\le3|y|}|g(x-z)-g(x)||N(z+y)|\,dz \le c\int_{2|y|<|z|\le3|y|}|z|^\alpha|z+y|^{-n}\,dz \le c'\int_{2|y|<|z|\le3|y|}|z|^{\alpha-n}\,dz = c''|y|^\alpha.$$
Note that the condition $\alpha<1$ is needed here. Collecting the estimates for $I_1$ and $I_2$ we see that the lemma is proved.

In order to finish the proof of the Theorem it remains to note that $\partial_i\partial_jK(x)$ satisfies all the conditions of Lemma 2. Thus the Theorem is proved.
Exercise 33. Show that a function $K_1$ is a fundamental solution for $\Delta^2\equiv\Delta(\Delta)$ on $\mathbb{R}^n$ if and only if $K_1$ satisfies the equation
$$\Delta K_1 = K,$$
where $K$ is the fundamental solution for the Laplacian.

Exercise 34. Show that the following functions are fundamental solutions for $\Delta^2$ on $\mathbb{R}^n$:

1. $n = 4$: $\quad-\dfrac{\log|x|}{4\omega_4}$
2. $n = 2$: $\quad\dfrac{|x|^2\log|x|}{8\pi}$
3. $n\ne2,4$: $\quad\dfrac{|x|^{4-n}}{2(4-n)(2-n)\omega_n}$.

Exercise 35. Show that $(4\pi|x|)^{-1}e^{-c|x|}$ is a fundamental solution for $-\Delta+c^2$ on $\mathbb{R}^3$ for any constant $c\in\mathbb{C}$.
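Away from the origin this can be checked symbolically using the radial form of the Laplacian from Lemma 1 with $n = 3$, namely $\Delta\varphi = \varphi'' + \frac{2}{r}\varphi'$. The snippet below is an added sketch, not part of the notes.

```python
# Symbolic check that phi(r) = exp(-c*r)/(4*pi*r) satisfies
# (-Delta + c^2) phi = 0 for r > 0, using Delta phi = phi'' + (2/r) phi'.
import sympy as sp

r, c = sp.symbols('r c', positive=True)
phi = sp.exp(-c*r)/(4*sp.pi*r)
radial_laplacian = sp.diff(phi, r, 2) + (2/r)*sp.diff(phi, r)
print(sp.simplify(-radial_laplacian + c**2*phi))   # prints 0
```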
7 The Dirichlet and Neumann Problems

The Dirichlet problem

Given functions $f$ in $\Omega$ and $g$ on $S = \partial\Omega$, find a function $u$ on $\overline{\Omega} = \Omega\cup\partial\Omega$ satisfying
$$\begin{cases}\Delta u = f & \text{in }\Omega\\ u = g & \text{on }S.\end{cases} \tag{D}$$
We assume that $\Omega$ is bounded with $C^1$ boundary. We shall not, however, assume that $\Omega$ is connected. The uniqueness theorem (see the Corollary of Theorem 3 of Chapter 6) shows that the solution of (D) is unique (if it exists), at least if we require $u\in C(\overline{\Omega})$. For (N) uniqueness does not hold: we can add to $u(x)$ any function that is constant on each connected component of $\Omega$. Moreover, there is an obvious necessary condition for the solvability of (N). If $\Omega'$ is a connected component of $\Omega$ then
$$\int_{\Omega'}\Delta u\,dx = \int_{\partial\Omega'}\partial_\nu u\,d\sigma(x) = \int_{\partial\Omega'}g(x)\,d\sigma(x) = \int_{\Omega'}f\,dx,$$
that is,
$$\int_{\Omega'}f(x)\,dx = \int_{\partial\Omega'}g(x)\,d\sigma(x).$$
It is also clear (by linearity) that (D) can be reduced to the following homogeneous problems:
$$\begin{cases}\Delta v = f & \text{in }\Omega\\ v = 0 & \text{on }S\end{cases} \tag{D$_A$}$$
$$\begin{cases}\Delta w = 0 & \text{in }\Omega\\ w = g & \text{on }S\end{cases} \tag{D$_B$}$$
and $u := v + w$ solves (D). Similar remarks apply to (N), that is,
$$\begin{cases}\Delta v = f & \text{in }\Omega\\ \partial_\nu v = 0 & \text{on }S\end{cases}$$
$$\begin{cases}\Delta w = 0 & \text{in }\Omega\\ \partial_\nu w = g & \text{on }S\end{cases}$$
and $u = v + w$.
Definition. The Green's function for (D) in $\Omega$ is the solution $G(x,y)$ of the boundary value problem
$$\begin{cases}\Delta_xG(x,y) = \delta(x-y), & x,y\in\Omega\\ G(x,y) = 0, & x\in S,\ y\in\Omega.\end{cases} \tag{7.1}$$
Analogously, the Green's function for (N) in $\Omega$ is the solution $G(x,y)$ of the boundary value problem
$$\begin{cases}\Delta_xG(x,y) = \delta(x-y), & x,y\in\Omega\\ \partial_{\nu_x}G(x,y) = 0, & x\in S,\ y\in\Omega,\end{cases} \tag{7.2}$$
where $K$ is the fundamental solution of $\Delta$ in $\mathbb{R}^n$ and, for any $y\in\Omega$, the function $v_y(x)$ satisfies
$$\begin{cases}\Delta v_y(x) = 0 & \text{in }\Omega\\ v_y(x) = -K(x-y) & \text{on }S\end{cases} \tag{7.4}$$
in the case of (7.1), and
$$\begin{cases}\Delta v_y(x) = 0 & \text{in }\Omega\\ \partial_{\nu_x}v_y(x) = -\partial_{\nu_x}K(x-y) & \text{on }S\end{cases}$$
in the case of (7.2). Since (7.4) guarantees that $v_y$ is real, so is the $G$ corresponding to (7.1).

Lemma 2. For both (7.1) and (7.2) it is true that $G(x,y) = G(y,x)$ for all $x,y\in\Omega$.

Proof. Let $G(x,y)$ and $G(x,z)$ be the Green's functions for $\Omega$ corresponding to sources located at fixed $y$ and $z$, $y\ne z$, respectively. Let us consider the domain
see Figure 2.
[Figure 2: the domain with the two source points $y$ and $z$ marked.]
The same is true for the integral over $|x-z| = \varepsilon$, that is,
$$\int_{|x-z|=\varepsilon}\bigl(G(x,z)\partial_{\nu_x}G(x,y) - G(x,y)\partial_{\nu_x}G(x,z)\bigr)\,d\sigma(x) \approx -I_2, \quad \varepsilon\to0,$$
where
$$I_2 = \int_{|x-z|=\varepsilon}G(x,y)\partial_{\nu_x}G(x,z)\,d\sigma(x)$$
and
$$I_2 \approx (2-n)c_n\varepsilon^{1-n}\varepsilon^{n-1}\int_{|\theta|=1}G(\varepsilon\theta+z,y)\,d\theta \to (2-n)c_n\omega_nG(z,y), \quad \varepsilon\to0.$$
It follows that $G(y,z) = G(z,y)$ for all $z\ne y$. This proof holds for $n = 2$ (and even for $n = 1$) with some simple changes.
Proof. For each fixed $y$, the function $v_y(x) := G(x,y) - K(x-y)$ is harmonic in $\Omega$, see (7.4). Moreover, on $S = \partial\Omega$, $v_y(x)$ takes the positive value
$$-K(x-y) \equiv -\frac{|x-y|^{2-n}}{\omega_n(2-n)}.$$
By the minimum principle, it follows that $v_y(x)$ is strictly positive in $\Omega$. This proves the first inequality.

Exercise 36. Prove the second inequality in Lemma 3.

Exercise 37. Show that for $n = 2$ Lemma 3 has the following form:
$$\frac{1}{2\pi}\log\frac{|x-y|}{h} < G(x,y) < 0, \quad x,y\in\Omega,$$
where $h \equiv \max_{x,y\in\overline{\Omega}}|x-y|$.
Exercise 38. Obtain the analogue of Lemma 3 for $n = 1$. Hint: show that the Green's function for the operator $\frac{d^2}{dx^2}$ on $\Omega = (0,1)$ is
$$G(x,y) = \begin{cases}x(y-1), & x<y\\ y(x-1), & x>y.\end{cases}$$
Then the Laplacian of the first term is $f$ (see Theorem 6 of Chapter 6), and the second term is harmonic in $x$ (since $v_y(x)$ is harmonic). Also $v(x) = 0$ on $S$ because the same is true for $G$. Thus this $v(x)$ solves (D$_A$).
Consider now (D$_B$). We assume that $g$ is continuous on $S$ and we wish to find $w$ which is continuous on $\overline{\Omega}$. Applying Green's identity a) (together with the same limiting process as in the proof of Lemma 2) we obtain
$$w(x) = \int_\Omega\bigl(w(y)\Delta_yG(x,y) - G(x,y)\Delta w(y)\bigr)\,dy = \int_Sw(y)\partial_{\nu_y}G(x,y)\,d\sigma(y) = \int_Sg(y)\partial_{\nu_y}G(x,y)\,d\sigma(y).$$
Let us denote the last integral by (P). Since $\partial_{\nu_y}G(x,y)$ is harmonic in $x$ and continuous in $y$ for $x\in\Omega$ and $y\in S$, $w(x)$ is harmonic in $\Omega$. In order to prove that this $w(x)$ solves (D$_B$) it remains to prove that $w(x)$ is continuous on $\overline{\Omega}$ and that $w(x) = g(x)$ on $S$. We will prove this general fact later.

Definition. The function $\partial_{\nu_y}G(x,y)$ on $\Omega\times S$ is called the Poisson kernel for $\Omega$ and (P) is called the Poisson integral.
Now we are in a position to solve the Dirichlet problem in a half-space. Let
$$\Omega = \mathbb{R}^{n+1}_+ = \left\{(x', x_{n+1}) \in \mathbb{R}^{n+1} : x' \in \mathbb{R}^n,\ x_{n+1} > 0\right\}, \qquad \Delta_{n+1} = \Delta_n + \partial_t^2, \quad n = 1, 2, \ldots
$$
Let us prove then that the Green's function for $\mathbb{R}^{n+1}_+$ is
$$G(x, y; t, s) = K(x - y, t - s) - K(x - y, -t - s). \qquad (7.5)$$
It is clear (see (7.5)) that G(x, y; t, 0) = G(x, y; 0, s) = 0 and
$$\Delta_{n+1}G = \delta(x - y, t - s) - \delta(x - y, -t - s) = \delta(x - y)\delta(t - s),$$
because for t, s > 0 we have −t − s < 0 and, therefore, δ(−t − s) = 0. Thus G is the Dirichlet Green's function for $\mathbb{R}^{n+1}_+$. From this we immediately have the solution of $(D_A)$ in $\mathbb{R}^{n+1}_+$ as
$$u(x, t) = \int_{\mathbb{R}^n}\int_0^\infty G(x, y; t, s)f(y, s)\,ds\,dy.$$
To solve $(D_B)$ we compute the Poisson kernel for this case. Since the outward normal derivative on $\partial\mathbb{R}^{n+1}_+$ is $-\frac{\partial}{\partial t}$, the Poisson kernel becomes
$$-\frac{\partial}{\partial s}G(x, y; t, s)\Big|_{s=0} = -\frac{\partial}{\partial s}\bigl(K(x - y, t - s) - K(x - y, -t - s)\bigr)\Big|_{s=0} = \frac{2t}{\omega_{n+1}\left(|x - y|^2 + t^2\right)^{\frac{n+1}{2}}}. \qquad (7.6)$$
Exercise 39. Prove (7.6).
Note that (7.6) holds for any n ≥ 1. According to the formula for (P), the candidate for a solution to $(D_B)$ is
$$u(x, t) = \frac{2}{\omega_{n+1}}\int_{\mathbb{R}^n}\frac{t\,g(y)}{\left(|x - y|^2 + t^2\right)^{\frac{n+1}{2}}}\,dy. \qquad (7.7)$$
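As a quick sanity check of (7.7), the sketch below (an illustration under stated assumptions, not part of the argument) takes n = 1, so that $\omega_2 = 2\pi$ and the kernel reduces to the classical Poisson kernel of the upper half-plane; for g(y) = cos y the harmonic extension is known in closed form, $e^{-t}\cos x$.

```python
import numpy as np

# Formula (7.7) with n = 1:  u(x, t) = (1/pi) \int t g(y) / ((x - y)^2 + t^2) dy.
# For g(y) = cos(y) the bounded harmonic extension to the half-plane is exp(-t) cos(x).

def poisson_halfplane(g, x, t, y):
    kernel = t / np.pi / ((x - y) ** 2 + t ** 2)
    return np.trapz(kernel * g(y), y)

y = np.linspace(-200.0, 200.0, 400001)        # large truncated window, fine grid
for (x, t) in [(0.0, 0.5), (1.0, 0.1), (-2.0, 1.0)]:
    u = poisson_halfplane(np.cos, x, t, y)
    print(u, np.exp(-t) * np.cos(x))           # the two columns agree to several decimals
```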
Proof. It is clear that for any t > 0, $P_t(x) \in L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$, see (7.8). Hence $P_t(x) \in L^q(\mathbb{R}^n)$ for all q ∈ [1, ∞] with respect to x and for any fixed t > 0. Therefore, the integral in (7.9) is absolutely convergent and the same is true if $P_t$ is replaced by its derivatives $\Delta_x P_t$ or $\partial_t^2 P_t$ (due to Young's inequality for convolution).
Since G(x, y; t, s) is harmonic for (x, t) ≠ (y, s), $P_t(x)$ is harmonic as well, and
$$\Delta_x u + \partial_t^2 u = g * (\Delta_x + \partial_t^2)P_t = 0.$$
It remains to prove that if g is bounded and continuous then
$$\|u(\cdot, t) - g(\cdot)\|_\infty \to 0$$
for |x| > 2R. Hence u(x, t) → 0 as x → ∞ uniformly for t ∈ [0, T ]. This proves
that u(x, t) vanishes at infinity if g(x) has compact support. For general g, choose a
sequence {gk } of compactly supported functions that converges uniformly (in L∞ (Rn ))
to g and let
uk (x, t) = (gk ∗ Pt )(x).
Then
$$\|u_k - u\|_{L^\infty(\mathbb{R}^{n+1})} = \sup_{t,x}\left|\int_{\mathbb{R}^n}(g_k - g)(y)P_t(x - y)\,dy\right| \le \sup_t\|g_k - g\|_{L^\infty(\mathbb{R}^n)}\sup_x\int_{\mathbb{R}^n}|P_t(x - y)|\,dy = \|g_k - g\|_{L^\infty(\mathbb{R}^n)}\sup_{t>0}\int_{\mathbb{R}^n}|P_t(y)|\,dy = \|g_k - g\|_{L^\infty(\mathbb{R}^n)} \to 0$$
as k → ∞.
Hence u(x, t) vanishes at infinity. Now suppose v is another solution and let w :=
v − u. Then w vanishes at infinity and also at t = 0 (see Theorem 1). Thus |w| < ε on
the boundary of the region {(x, t) : |x| < R, 0 < t < R} for R large enough.
But since w is harmonic then by the maximum principle it follows that |w| < ε on this
region. Letting ε → 0 and R → ∞ we conclude that w ≡ 0.
Let us consider now the Dirichlet problem in a ball. We use here the following
notation:
B = B1 (0) = {x ∈ Rn : |x| < 1} , ∂B = S.
Now, assuming first that n > 2, define
$$G(x, y) := K(x - y) - |x|^{2-n}K\!\left(\frac{x}{|x|^2} - y\right) = \frac{1}{(2 - n)\omega_n}\left(|x - y|^{2-n} - \left|\frac{x}{|x|} - y|x|\right|^{2-n}\right), \quad x \ne 0. \qquad (7.10)$$
Exercise 42. Prove (7.11).
Theorem 3. If f ∈ L1 (S) then
$$u(x) = \int_S P(x, y)f(y)\,d\sigma(y), \quad x \in B,$$
with x = ry. This proves (7.12). We claim also that for any $y_0 \in S$ and for the neighborhood $B_\sigma(y_0) \subset S$,
$$\lim_{r\to 1-0}\int_{S\setminus B_\sigma(y_0)}P(ry_0, y)\,d\sigma(y) = 0. \qquad (7.13)$$
and therefore
$$|ry_0 - y|^{-n} < (r|y_0 - y|)^{-n} \le (r\sigma)^{-n}$$
if $y \in S\setminus B_\sigma(y_0)$, i.e., $|y - y_0| \ge \sigma$. Hence $|ry_0 - y|^{-n}$ is bounded uniformly for r → 1 − 0 and $y \in S\setminus B_\sigma(y_0)$. In addition, $1 - |ry_0|^2 \equiv 1 - r^2 \to 0$ as r → 1 − 0. This proves (7.13).
Now, suppose f ∈ C(S). Hence f is uniformly continuous since S is compact.
That’s why for any ε > 0 there exists δ > 0 such that
$$\|u(r\,\cdot) - f(\cdot)\|_p \to 0$$
as r → 1 − 0.
Exercise 43. Show that the Poisson kernel for the ball $B_R(x_0)$ is
$$P(x, y) = \frac{R^2 - |x - x_0|^2}{\omega_n R\,|x - y|^n}, \quad n \ge 2.$$
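A hedged numerical illustration of this kernel on the unit disk (R = 1, x₀ = 0, n = 2, quadrature choices are ad hoc): for boundary data f(y) = y₁² − y₂² the harmonic extension is x₁² − x₂², which the Poisson integral should reproduce at interior points.

```python
import numpy as np

# Poisson kernel of the unit disk (n = 2, R = 1, x0 = 0):
#   P(x, y) = (1 - |x|^2) / (omega_2 |x - y|^2),   omega_2 = 2*pi.
# For f(y) = y1^2 - y2^2 on |y| = 1 the harmonic extension is h(x) = x1^2 - x2^2.

theta = np.linspace(0.0, 2.0 * np.pi, 20001)[:-1]   # uniform nodes on S = {|y| = 1}
dtheta = 2.0 * np.pi / len(theta)
y = np.stack([np.cos(theta), np.sin(theta)], axis=1)
f = y[:, 0] ** 2 - y[:, 1] ** 2                      # boundary data = cos(2*theta)

def poisson_disk(x):
    dist2 = np.sum((x - y) ** 2, axis=1)
    P = (1.0 - np.dot(x, x)) / (2.0 * np.pi * dist2)
    return np.sum(P * f) * dtheta                    # Poisson integral over S

for x in [np.array([0.3, 0.4]), np.array([-0.5, 0.1]), np.array([0.0, 0.9])]:
    print(poisson_disk(x), x[0] ** 2 - x[1] ** 2)    # the two values agree closely
```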
Theorem 5. Suppose u is harmonic on Ω \ {x₀} and $u(x) = o(|x - x_0|^{2-n})$ for n > 2 and $u(x) = o(\log|x - x_0|)$ for n = 2 as x → x₀. Then u has a removable singularity at x₀.
We claim that u = v in B \ {0}, so that we can remove the singularity at 0 by setting u(0) := v(0). Indeed, given ε > 0 and 0 < δ < 1 consider the function
$$g_\varepsilon(x) = \begin{cases}u(x) - v(x) - \varepsilon\bigl(|x|^{2-n} - 1\bigr), & n > 2\\ u(x) - v(x) + \varepsilon\log|x|, & n = 2\end{cases}$$
in $B\setminus B_\delta(0)$. These functions are real (we may assume this without loss of generality), harmonic and continuous for δ ≤ |x| ≤ 1. Moreover, $g_\varepsilon(x) = 0$ on ∂B and $g_\varepsilon(x) < 0$ on $\partial B_\delta(0)$ for all δ small enough. By the maximum principle $g_\varepsilon \le 0$ in the annulus δ ≤ |x| ≤ 1, and since δ may be taken arbitrarily small, $g_\varepsilon \le 0$ in B \ {0}. Letting ε → 0 we see that u − v ≤ 0 in B \ {0}. By the same arguments we may conclude that also v − u ≤ 0 in B \ {0}. Hence u = v in B \ {0} and we can extend u to the whole ball by setting u(0) = v(0). This proves the theorem.
8 Layer Potentials
In this chapter we assume that Ω ⊂ Rn is bounded and open, and that S = ∂Ω is a
surface of class C 2 . We assume also that both Ω and Ω′ := Rn \ Ω are connected.
as x → 0.
2. The solutions of (IN) and (EN) are unique up to a constant on Ω and Ω′ , re-
spectively. When n > 2 this constant is zero on the unbounded component of
Ω′ .
Proof. If u solves (ID) with f = 0 then u ≡ 0, because this is just the uniqueness theorem for harmonic functions (see Corollary 2 of Theorem 3 of Chapter 6). If u solves (ED) with f = 0 we may assume that $0 \notin \Omega'$. Then $\tilde u(x) = |x|^{2-n}u\!\left(\frac{x}{|x|^2}\right)$ solves (ID) with f = 0 for the bounded domain $\tilde\Omega = \left\{x : \frac{x}{|x|^2} \in \Omega'\right\}$. Hence $\tilde u \equiv 0$, so that u ≡ 0 and part 1) is proved.
Exercise 45. Prove that if u is harmonic then $\tilde u = |x|^{2-n}u\!\left(\frac{x}{|x|^2}\right)$, x ≠ 0, is also harmonic.
Concerning part 2), by Green's identity we have
$$\int_\Omega|\nabla u|^2\,dx = -\int_\Omega u\,\Delta u\,dx + \int_S u\,\partial_{\nu_-}u\,d\sigma(x).$$
where $\partial_r u \equiv \frac{d}{dr}u$. Since for n > 2 and for large |x| we have
$$u(x) = O(|x|^{2-n}), \qquad \partial_r u(x) = O(|x|^{1-n}),$$
then
$$\left|\int_{\partial B_r(0)}u\,\partial_r u\,d\sigma(x)\right| \le c\,r^{2-n}r^{1-n}\int_{\partial B_r(0)}d\sigma(x) = c\,r^{3-2n}r^{n-1} = c\,r^{2-n} \to 0$$
as r → ∞. Hence
$$\int_{\Omega'}|\nabla u|^2\,dx = 0.$$
It implies that u is constant on Ω′ and u = 0 on the unbounded component of Ω′, because for large |x|,
$$u(x) = O(|x|^{2-n}), \quad n > 2.$$
If n = 2 then $\partial_r u(x) = O(r^{-2})$ for a function u(x) which is harmonic at infinity.
Exercise 46. Prove that if u is harmonic at infinity then u is bounded and $\partial_r u(x) = O(r^{-2})$ as r → ∞ if n = 2, and $\partial_r u(x) = O(|x|^{1-n})$, r → ∞, if n > 2.
Due to Exercise 46 we obtain
$$\left|\int_{\partial B_r(0)}u\,\partial_r u\,d\sigma(x)\right| \le c\,r^{-2}\,r = c\,r^{-1} \to 0, \quad r \to \infty.$$
We now turn to the problem of finding the solutions (existence problems). Let us try to solve (ID) by setting
$$\tilde u(x) := \int_S f(y)\,\partial_{\nu_y}K(x - y)\,d\sigma(y), \qquad (8.1)$$
Definition. The functions u(x) from (8.2) and (8.3) are called the double and single
layer potentials with moment (density) ϕ, respectively.
and
$$|I(x, y)| \le c_1 + c_2\,\bigl|\log|x - y|\bigr|, \quad \alpha = 0,$$
where c > 0 and $c_1, c_2 \ge 0$.
Remark. Note that a continuous kernel of order 0 is also a continuous kernel of order
α, 0 < α < n − 1.
We denote by $\hat I$ the integral operator
$$\hat I f(x) = \int_S I(x, y)f(y)\,d\sigma(y), \quad x \in S,$$
with kernel I.
Lemma 1. If I is a continuous kernel of order α, 0 ≤ α < n − 1, then
1. $\hat I$ is bounded on $L^p(S)$, 1 ≤ p ≤ ∞.
2. $\hat I$ is compact on $L^2(S)$.
Proof. It is enough to consider 0 < α < n − 1. Let us assume that $f \in L^1(S)$. Then
$$\|\hat I f\|_{L^1(S)} \le \int_S\int_S|I(x, y)||f(y)|\,d\sigma(y)\,d\sigma(x) \le c\int_S|f(y)|\,d\sigma(y)\int_S|x - y|^{-\alpha}\,d\sigma(x) \le c\,\|f\|_{L^1(S)}\int_0^d r^{n-2-\alpha}\,dr = c'\|f\|_{L^1(S)},$$
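The compactness claim of Lemma 1 can be made tangible by discretizing $\hat I$ for a weakly singular kernel. The sketch below takes the unit circle, the rectangle rule and a crude treatment of the integrable diagonal singularity (all of these choices are assumptions for the illustration); the singular values of the resulting matrix tend to zero, which is the finite-dimensional signature of compactness.

```python
import numpy as np

# Illustration of Lemma 1 for S = the unit circle (n = 2) and the weakly singular
# kernel I(x, y) = |x - y|^{-1/2} (order alpha = 1/2 < n - 1 = 1).

m = 400
theta = 2.0 * np.pi * np.arange(m) / m
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
h = 2.0 * np.pi / m                                    # arc-length weight per node

diff = pts[:, None, :] - pts[None, :, :]
dist = np.sqrt(np.sum(diff ** 2, axis=2))
np.fill_diagonal(dist, h / 2.0)                        # crude regularization of the diagonal
A = h * dist ** (-0.5)                                 # Nystrom matrix approximating I-hat

s = np.linalg.svd(A, compute_uv=False)
print(s[0], s[m // 10], s[m // 2])                     # largest value stays O(1), the tail decays
```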
Lemma 2.
1. If I is a continuous kernel of order α, 0 ≤ α < n − 1, then $\hat I$ transforms bounded functions into continuous functions.
2. If $\hat I$ is as in part 1, then $u + \hat Iu \in C(S)$ for $u \in L^2(S)$ implies $u \in C(S)$.
Since |z − y| ≤ |x − z| + |x − y|, then
$$I_1 \le c\,\|f\|_\infty\int_0^{3\delta}r^{n-2-\alpha}\,dr \to 0, \quad \delta \to 0.$$
Write $\hat Iu = \widehat{\varphi I}u + \widehat{(1 - \varphi)I}u := \hat I_0 u + \hat I_1 u$. By the Cauchy–Schwarz–Bunyakovsky inequality we have
$$|\hat I_1 u(x) - \hat I_1 u(y)| \le \|u\|_2\left(\int_S|I_1(x, z) - I_1(y, z)|^2\,d\sigma(z)\right)^{1/2} \to 0, \quad y \to x,$$
then g is continuous for $u \in L^2(S)$ by the conditions of this lemma. Since the operator norm of $\hat I_0$ on $L^2(S)$ and on $L^\infty(S)$ can be made less than 1 (by choosing ε > 0 small enough), then
$$u = \left(I + \hat I_0\right)^{-1}g,$$
where I is the identity operator. Since g is continuous and the operator norm is less than 1, then
$$u = \sum_{j=0}^\infty\left(-\hat I_0\right)^j g.$$
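The Neumann series argument above is easy to reproduce in finite dimensions. In the sketch below a random matrix with operator norm below 1 stands in for (a discretization of) $\hat I_0$; this is only an analogy for the functional-analytic statement, with arbitrarily chosen sizes.

```python
import numpy as np

# If the norm of I0-hat is below 1, then u = (I + I0-hat)^{-1} g = sum_j (-I0-hat)^j g.

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A *= 0.4 / np.linalg.norm(A, 2)          # enforce spectral norm 0.4 < 1
g = rng.standard_normal(n)

u_direct = np.linalg.solve(np.eye(n) + A, g)

u, term = np.zeros(n), g.copy()
for _ in range(60):                      # partial sums of sum_j (-A)^j g
    u += term
    term = -A @ term

print(np.linalg.norm(u - u_direct))      # essentially machine precision: geometric convergence
```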
First of all,
$$\partial_{\nu_y}K(x - y) = -\frac{(x - y, \nu(y))}{\omega_n|x - y|^n}. \qquad (8.4)$$
Exercise 48. Prove that (8.4) holds for any n ≥ 1.
It is clear also that (8.4) defines a harmonic function in $x \in \mathbb{R}^n\setminus S$, y ∈ S. Moreover, it is $O(|x|^{1-n})$ as x → ∞ (y ∈ S) so that u is also harmonic at infinity.
Exercise 49. Prove that (8.4) defines a harmonic function at infinity.
Lemma 3. There exists c > 0 such that $|(x - y, \nu(y))| \le c|x - y|^2$ for all x, y ∈ S.
Lemma 4. I is a continuous kernel of order n − 2, n ≥ 2.
Lemma 5.
$$\int_S I(x, y)\,d\sigma(y) = \begin{cases}1, & x \in \Omega\\ 0, & x \in \Omega'\\ \frac12, & x \in S.\end{cases} \qquad (8.5)$$
Proof. If $x \in \Omega'$ then K(x − y) is harmonic in y on all of Ω, so Green's identity gives $\int_S\partial_{\nu_y}K(x - y)\,d\sigma(y) = 0$, or
$$\int_S I(x, y)\,d\sigma(y) = 0, \quad x \in \Omega'.$$
If x ∈ Ω, let δ > 0 be such that $B_\delta(x) \subset \Omega$. Then K(x − y) is harmonic in y in $\Omega\setminus B_\delta(x)$ and therefore by Green's identity
$$0 = \int_{\Omega\setminus B_\delta(x)}\bigl(1\cdot\Delta_y K(x - y) - K(x - y)\Delta 1\bigr)\,dy = \int_S\partial_{\nu_y}K(x - y)\,d\sigma(y) - \int_{|x-y|=\delta}\partial_{\nu_y}K(x - y)\,d\sigma(y) = \int_S I(x, y)\,d\sigma(y) - \frac{\delta^{1-n}}{\omega_n}\int_{|x-y|=\delta}d\sigma(y) = \int_S I(x, y)\,d\sigma(y) - 1,$$
or
$$\int_S I(x, y)\,d\sigma(y) = 1.$$
[Figure: a point x ∈ S with the half-balls $B_\delta^\pm$, their boundary pieces $\partial B_\delta^\pm$, the surface piece $S_\delta$ and the region $\Omega_\delta$.]
$$= \lim_{\delta\to 0}\frac{\delta^{1-n}}{\omega_n}\left(\delta^{n-1}\frac{\omega_n}{2} + o(\delta^{n-1})\right) = \frac12.$$
It means that the limit in (8.6) exists and (8.5) is satisfied.
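Formula (8.5) is the classical Gauss integral, and it can be checked numerically. The sketch below (an assumption-laden illustration: n = 2, S the unit circle, so ν(y) = y and $\omega_2 = 2\pi$) evaluates the integral of (8.4) by the rectangle rule and recovers the three values 1, 0 and 1/2.

```python
import numpy as np

# Lemma 5 for the unit circle, using (8.4):
#   d/dnu_y K(x - y) = -((x - y), nu(y)) / (2*pi*|x - y|^2).

m = 4000
theta = 2.0 * np.pi * (np.arange(m) + 0.5) / m
y = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # nodes on S; nu(y) = y there
h = 2.0 * np.pi / m

def gauss_integral(x):
    d = x - y
    r2 = np.sum(d ** 2, axis=1)
    integrand = -np.sum(d * y, axis=1) / (2.0 * np.pi * r2)
    return np.sum(integrand) * h

print(gauss_integral(np.array([0.2, -0.3])))   # ~1   (x in Omega)
print(gauss_integral(np.array([1.7, 0.5])))    # ~0   (x in Omega')
print(gauss_integral(np.array([1.0, 0.0])))    # ~1/2 (x on S, chosen off the grid nodes)
```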
Lemma 6. There exists c > 0 such that
$$\int_S|\partial_{\nu_y}K(x - y)|\,d\sigma(y) \le c, \quad x \in \mathbb{R}^n.$$
Denote Bδ = {y ∈ S : |x0 − y| < δ}. We estimate the integral of |I(x, y)| over S\Bδ
and Bδ separately. If y ∈ S\Bδ then
$$|x - y| \ge |x_0 - y| - |x - x_0| > \delta - \delta/2 = \delta/2$$
and
$$|I(x, y)| \le c\,\delta^{1-n},$$
so that the integral over S\Bδ satisfies (8.7), where again c′ does not depend on δ.
To estimate the integral over $B_\delta$ we note that (see (8.4))
$$|I(x, y)| = \frac{|(x - y, \nu(y))|}{\omega_n|x - y|^n} = \frac{|(x - x_0, \nu(y)) + (x_0 - y, \nu(y))|}{\omega_n|x - y|^n} \le \frac{|x - x_0| + c|x_0 - y|^2}{\omega_n|x - y|^n}. \qquad (8.8)$$
The latter inequality follows from Lemma 3 since $x_0, y \in S$. Moreover, we have (due to Lemma 3)
$$\begin{aligned}
|x - y|^2 &= |x - x_0|^2 + |x_0 - y|^2 + 2(x - x_0, x_0 - y)\\
&= |x - x_0|^2 + |x_0 - y|^2 + 2|x - x_0|\left(x_0 - y, \frac{x - x_0}{|x - x_0|}\right)\\
&\ge |x - x_0|^2 + |x_0 - y|^2 - 2|x - x_0|\,|(x_0 - y, \nu(x_0))|\\
&\ge |x - x_0|^2 + |x_0 - y|^2 - 2c|x - x_0|\,|x_0 - y|^2\\
&\ge |x - x_0|^2 + |x_0 - y|^2 - |x - x_0|\,|x_0 - y|,
\end{aligned}$$
if we choose δ > 0 such that $|x_0 - y| \le \frac{1}{2c}$, where the constant c > 0 is from Lemma 3. Since $|x - x_0|\,|x_0 - y| \le \frac12\bigl(|x - x_0|^2 + |x_0 - y|^2\bigr)$, we finally obtain
$$|x - y|^2 \ge \frac12\bigl(|x - x_0|^2 + |x_0 - y|^2\bigr)$$
and (see (8.4) and (8.8))
$$|I(x, y)| \le c\,\frac{|x - x_0| + |x_0 - y|^2}{\bigl(|x - x_0|^2 + |x_0 - y|^2\bigr)^{n/2}} \le c\,\frac{|x - x_0|}{\bigl(|x - x_0|^2 + |x_0 - y|^2\bigr)^{n/2}} + \frac{c}{|x_0 - y|^{n-2}}.$$
This implies
$$\int_{B_\delta}|I(x, y)|\,d\sigma(y) \le c'\int_0^\delta\frac{|x - x_0|}{\bigl(|x - x_0|^2 + r^2\bigr)^{n/2}}\,r^{n-2}\,dr + c'\int_0^\delta\frac{r^{n-2}}{r^{n-2}}\,dr \le c'\delta + c'\int_0^\infty\frac{a\,r^{n-2}}{(a^2 + r^2)^{n/2}}\,dr,$$
where a := |x − x₀|. For the latter integral we have (t = r/a)
$$\int_0^\infty\frac{a\,r^{n-2}}{(a^2 + r^2)^{n/2}}\,dr = \int_0^\infty\frac{t^{n-2}}{(1 + t^2)^{n/2}}\,dt < \infty.$$
If we combine all estimates then we may conclude that there is $c_0 > 0$ such that
$$\int_S|\partial_{\nu_y}K(x - y)|\,d\sigma(y) \le c_0, \quad x \in \mathbb{R}^n,$$
The uniformity of convergence follows from the fact that S is compact and ϕ ∈ C(S).
Corollary. For x ∈ S,
$$\varphi(x) = u_-(x) - u_+(x),$$
where $u_\pm = \lim_{t\to\pm 0}u(x_t)$.
We state without a proof that the normal derivative of the double layer potential
is continuous across the boundary in the sense of the following theorem.
Theorem 3. Suppose ϕ ∈ C(S) and u is defined by the double layer potential (8.2)
with moment ϕ. Then for any x ∈ S,
Let us now consider the single layer potential
$$u(x) = \int_S\varphi(y)K(x - y)\,d\sigma(y)$$
as x → x0 and δ → 0.
and define the function f on the tubular neighborhood V of S by
$$f(x) = \begin{cases}v(x) + \partial_\nu u(x), & x \in V\setminus S\\ \hat I\varphi(x) + \hat I^*\varphi(x), & x \in S.\end{cases} \qquad (8.9)$$
Since
$$I(x, y) = -\frac{(x - y, \nu(y))}{\omega_n|x - y|^n}$$
and ν(x) = ν(x₀) for $x = x_0 + t\nu(x_0) \in V$, we have
$$I^*(x, y) = I(y, x) = \frac{(x - y, \nu(x))}{\omega_n|x - y|^n} \equiv \frac{(x - y, \nu(x_0))}{\omega_n|x - y|^n}. \qquad (8.10)$$
Hence
$$|I(x, y) + I^*(x, y)| = \frac{|(x - y, \nu(x_0) - \nu(y))|}{\omega_n|x - y|^n} \le \frac{|x - y|\,|\nu(x_0) - \nu(y)|}{\omega_n|x - y|^n} \le c\,\frac{|x - y|\,|x_0 - y|}{\omega_n|x - y|^n} \le c'\frac{|x_0 - y|}{|x_0 - y|^{n-1}} = c'|x_0 - y|^{2-n},$$
because $|x_0 - y| \le |x_0 - x| + |x - y| \le 2|x - y|$. Here we have also used the fact that $|\nu(x_0) - \nu(y)| \le c|x_0 - y|$ since ν is $C^1$.
This estimate allows us to conclude that the corresponding integral over $B_\delta$ can be dominated by
$$c\int_{|y-x_0|\le\delta}|x_0 - y|^{2-n}\,d\sigma(y) = c'\int_0^\delta r^{2-n}r^{n-2}\,dr = c'\delta.$$
$$\hat I\varphi(x) + \hat I^*\varphi(x) = v_-(x) + \partial_{\nu_-}u(x) = \frac12\varphi(x) + \hat I\varphi(x) + \partial_{\nu_-}u(x).$$
It follows that
$$\partial_{\nu_-}u(x) = -\frac{\varphi(x)}{2} + \hat I^*\varphi(x).$$
By similar arguments we obtain
$$\hat I\varphi(x) + \hat I^*\varphi(x) = v_+(x) + \partial_{\nu_+}u(x) = -\frac12\varphi(x) + \hat I\varphi(x) + \partial_{\nu_+}u(x)$$
and therefore
$$\partial_{\nu_+}u(x) = \frac{\varphi(x)}{2} + \hat I^*\varphi(x).$$
This finishes the proof.
Corollary.
ϕ(x) = ∂ν+ u(x) − ∂ν− u(x),
where u is defined by (8.3).
Lemma 8. If f ∈ C(S) and
$$\frac{\varphi}{2} + \hat I^*\varphi = f$$
then
$$\int_S\varphi\,d\sigma = \int_S f\,d\sigma.$$
Lemma 9. Let n = 2.
1. If ϕ ∈ C(S) then the single layer potential u with moment ϕ is harmonic at infinity if and only if
$$\int_S\varphi(x)\,d\sigma(x) = 0.$$
But log |x − y| − log |x| → 0 as x → ∞ uniformly for y ∈ S and therefore this term is harmonic at infinity (we have a removable singularity). Hence u is harmonic at infinity if and only if $\int_S\varphi(x)\,d\sigma(x) = 0$, and in this case u(x) vanishes at infinity. This proves part 1.
In part 2, u is harmonic at infinity. If u is constant on Ω then it solves (ED) with f ≡ constant on S. But a solution of such a problem must be constant and vanish at infinity. Therefore this constant is zero. Thus ϕ ≡ 0 and, hence, u ≡ 0.
We assume (for simplicity and without loss of generality) that Ω and Ω′ are simply-
connected, that is, ∂Ω has only one C 2 component. For f ∈ C(S) consider the integral
equations
$$\pm\frac12\varphi + \hat I\varphi = f, \qquad (1_\pm)$$
$$\pm\frac12\varphi + \hat I^*\varphi = f, \qquad (1^*_\pm)$$
where $I(x, y) = \partial_{\nu_y}K(x - y)$ and $I^*(x, y) = I(y, x)$. Theorems 2 and 4 show that the
where I(x, y) = ∂νy K(x − y) and I ∗ (x, y) = I(y, x). Theorems 2 and 4 show that the
double layer potential u with moment ϕ solves (ID) (respectively (ED)) if ϕ satisfies
(1+ ) (respectively (1− )) and the single layer potential u with moment ϕ solves (IN)
(respectively (EN)) if ϕ satisfies (1∗− ) (respectively (1∗+ )). For n = 2 we need the extra
necessary condition for (EN) given by Lemma 9.
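To see how $(1_+)$ is used in practice, here is a minimal Nyström-type sketch for the unit circle. The discretization choices (uniform nodes, the curvature value $1/(4\pi)$ on the diagonal) are assumptions tied to this particular geometry, not part of the text. The computed double layer potential is compared with the known harmonic extension of the boundary data.

```python
import numpy as np

# Interior Dirichlet problem on the unit disk via the equation (1_+):
#   phi/2 + I-hat(phi) = f,   I(x, y) = -((x - y), nu(y)) / (2*pi*|x - y|^2),
# followed by evaluation of the double layer potential with moment phi.

m = 800
theta = 2.0 * np.pi * np.arange(m) / m
ybd = np.stack([np.cos(theta), np.sin(theta)], axis=1)    # nodes on S; nu(y) = y there
h = 2.0 * np.pi / m
f = ybd[:, 0] ** 2 - ybd[:, 1] ** 2                       # boundary data (= cos 2*theta)

d = ybd[:, None, :] - ybd[None, :, :]
d2 = np.sum(d ** 2, axis=-1)
np.fill_diagonal(d2, 1.0)                                 # dummy value, diagonal reset below
A = -np.sum(d * ybd[None, :, :], axis=-1) / (2.0 * np.pi * d2) * h
np.fill_diagonal(A, h / (4.0 * np.pi))                    # y -> x limit of I on the unit circle

phi = np.linalg.solve(0.5 * np.eye(m) + A, f)             # Nystrom solution of (1_+)

def dlp(x):                                               # double layer potential with moment phi
    dx = x - ybd
    I = -np.sum(dx * ybd, axis=1) / (2.0 * np.pi * np.sum(dx ** 2, axis=1))
    return np.sum(I * phi) * h

for x in [np.array([0.3, 0.4]), np.array([-0.6, 0.2])]:
    print(dlp(x), x[0] ** 2 - x[1] ** 2)                  # agrees with the harmonic extension
```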
We proceed to study the solvability of (1± ) and (1∗± ). Let us introduce
$$V_\pm = \left\{\varphi : \hat I\varphi = \pm\frac12\varphi\right\}, \qquad W_\pm = \left\{\varphi : \hat I^*\varphi = \pm\frac12\varphi\right\}, \qquad (8.11)$$
where ϕ is allowed to range over either L2 (S) or C(S).
Fredholm’s Theorem
Let A be a compact operator on a Hilbert space H. For each λ ∈ C, let
Vλ = {x ∈ H : Ax = λx}
and
Wλ = {x ∈ H : A∗ x = λx} .
Then
1. The set $\{\lambda \in \mathbb{C} : V_\lambda \ne \{0\}\}$ is finite or countable with only one possible accumulation point at 0. Moreover, $\dim V_\lambda < \infty$ for λ ≠ 0.
2. $\dim V_\lambda = \dim W_\lambda$ if λ ≠ 0.
3. $R(A - \lambda I)$ and $R(A^* - \lambda I)$ are closed if λ ≠ 0.
Here and throughout the use of symbol I for the identity operator is to be distinguished
from a function I = I(x, y) by the context in which it appears.
Corollary 1. Suppose λ 6= 0. Then
1. (A − λI)x = y has a solution if and only if y⊥Wλ .
2. A − λI is surjective (onto) if and only if it is injective (invertible).
In other words, either $(A - \lambda I)\varphi = 0$ and $(A^* - \lambda I)\psi = 0$ have only the trivial solutions ϕ = ψ = 0 for λ ≠ 0, or they have the same number of linearly independent solutions $\varphi_1, \ldots, \varphi_m$ and $\psi_1, \ldots, \psi_m$, respectively. In the first case $(A - \lambda I)\varphi = g$ and $(A^* - \lambda I)\psi = f$ have unique solutions (A − λI and A* − λI are invertible) for every g, f ∈ H. In the second case $(A - \lambda I)\varphi = g$ and $(A^* - \lambda I)\psi = f$ have solutions if and only if $\psi_j \perp g$ and $\varphi_j \perp f$ for every j = 1, 2, . . . , m.
Proof. It is known and not so difficult to show that
$$\overline{R(A - \lambda I)} = \mathrm{Ker}\left(A^* - \lambda I\right)^\perp, \qquad (8.12)$$
where $M^\perp$ denotes the orthogonal complement of M ⊂ H defined by
$$M^\perp = \{y \in H : (y, x)_H = 0,\ x \in M\}.$$
Exercise 51. Prove (8.12).
But by part 3 of Fredholm's theorem (λ ≠ 0) we know that $R(A - \lambda I) = \overline{R(A - \lambda I)}$ and, therefore, $R(A - \lambda I) = \mathrm{Ker}(A^* - \lambda I)^\perp$. This is equivalent to the fact that
$$y \in R(A - \lambda I) \iff y \perp \mathrm{Ker}(A^* - \lambda I),$$
that is,
$$(A - \lambda I)x = y \text{ has a solution } x \in H \iff y \perp W_\lambda.$$
For the second part, A − λI is surjective if and only if R(A − λI) = H, that is, $\mathrm{Ker}(A^* - \lambda I) = \{0\}$. But this is equivalent to A* − λI being injective, and hence (by part 2 of Fredholm's theorem) to A − λI being injective.
Corollary 2. $L^2(S) = V_\pm^\perp \oplus W_\pm = W_\pm^\perp \oplus V_\pm$, where V± and W± are defined by (8.11) and the direct sums are not necessarily orthogonal.
Proof. By Lemma 5 we know that
$$\int_S I(x, y)\,d\sigma(y) = \frac12, \quad x \in S.$$
It can be interpreted as follows: ϕ(x) ≡ 1 belongs to $V_+$. Hence $\dim V_+ \ge 1$. But by part 2 of Fredholm's theorem $\dim V_+ = \dim W_+$. Since the single layer potential uniquely solves (EN) and (IN), we get $\dim W_+ \le 1$. Hence $\dim V_+ = \dim W_+ = 1$. Therefore, in order to prove the equality
$$L^2(S) = V_+^\perp \oplus W_+$$
it is enough to show that $V_+^\perp \cap W_+ = \{0\}$ (because $V_+^\perp$ is a closed subspace of codimension 1).
Suppose $\varphi \in V_+^\perp \cap W_+$. Then $\hat I^*\varphi = \frac12\varphi$ (since $\varphi \in W_+$) and there is $\psi \in L^2(S)$ such that $\varphi = -\frac{\psi}{2} + \hat I^*\psi$ (since $\varphi \in V_+^\perp$); see Corollary 1 for λ = 1/2 and $A = \hat I^*$. Next, since $\hat I^*\varphi - \frac12\varphi \equiv 0$ and $\varphi \in L^2(S)$, part 2 of Lemma 2 gives that ϕ is continuous and hence ψ is continuous too.
Let u and v be the single layer potentials with moments ϕ and ψ, respectively. Then by Theorem 4
$$\partial_{\nu_-}u = -\frac{\varphi}{2} + \hat I^*\varphi = 0,$$
$$\partial_{\nu_-}v = -\frac{\psi}{2} + \hat I^*\psi = \varphi = \frac{\varphi}{2} + \hat I^*\varphi = \partial_{\nu_+}u.$$
It follows that
$$0 = \int_\Omega(u\Delta v - v\Delta u)\,dx = \int_S\bigl(u\,\partial_{\nu_-}v - v\,\partial_{\nu_-}u\bigr)\,d\sigma(x) = \int_S u\,\partial_{\nu_+}u\,d\sigma(x).$$
Hence
$$\int_{\Omega'}|\nabla u|^2\,dx = 0$$
and so u is constant in Ω′. This finally gives $\varphi = \partial_{\nu_+}u = 0$. The other equalities can be proved in a similar manner.
Remark. Since we know that
$$W_\mp^\perp = \mathrm{Ker}\left(\hat I^* \pm \frac12 I\right)^\perp = R\left(\hat I \pm \frac12 I\right)$$
Proof. We have already proved uniqueness (see Theorem 1) and the necessity of the
conditions on f (see Exercise 23). So all that remains is to establish existence. It turns
out that in each case this question is reduced to the question of the solvability of an
integral equation.
Note first that
$$\int_S f\,d\sigma = 0$$
if and only if
$$(f, 1)_{L^2(S)} = 0,$$
i.e., $f \in V_+^\perp$, since $1 \in V_+$ and $\dim V_+ = 1$. But $f \in V_+^\perp$ is a necessary and sufficient condition (see Corollary 1) to solve the integral equation
$$-\frac{\varphi}{2} + \hat I^*\varphi = f.$$
If ϕ is a solution of this equation, ϕ is continuous (see part 2 of Lemma 2). Hence, by Theorem 4 the single layer potential with moment ϕ solves (IN).
Similarly, for (EN), we have that $\int_S f\,d\sigma = 0$ if and only if $f \in V_-^\perp$. In this case we can solve the equation
$$\frac{\varphi}{2} + \hat I^*\varphi = f$$
and then solve (EN) by the single layer potential with moment ϕ, see again Theorem 4.
Consider now (ID). By Corollary 2 of Fredholm's Theorem and the Remark after it we can write, for f ∈ C(S) ⊂ $L^2(S)$,
$$f = \frac{\varphi}{2} + \hat I\varphi + \psi, \qquad (8.13)$$
where $\psi \in V_- \subset C(S)$ and ϕ is continuous since f − ψ is continuous (part 2 of Lemma 2).
Since $\psi \in V_-$ then $\frac12\psi + \hat I\psi = 0$. Let us prove that this condition implies that ψ = 0. Consider the double layer potential
$$v(x) = \int_S\psi(y)I(x, y)\,d\sigma(y), \quad x \notin S.$$
9 The Heat Operator
We turn our attention now to the heat operator
L = ∂t − ∆x , (x, t) ∈ Rn × R.
The heat operator is a prototype of parabolic operators. These are operators of the form
$$\partial_t + \sum_{|\alpha|\le 2m}a_\alpha(x, t)\partial_x^\alpha,$$
where
$$K_t(x) = (2\pi)^{-n/2}F^{-1}\left(e^{-|\xi|^2 t}\right) \equiv (4\pi t)^{-n/2}e^{-\frac{|x|^2}{4t}}, \quad t > 0, \qquad (9.2)$$
Let us first prove that
$$\int_{\mathbb{R}^n}K_t(x)\,dx = 1.$$
Indeed, using polar coordinates,
$$\begin{aligned}
\int_{\mathbb{R}^n}K_t(x)\,dx &= (4\pi t)^{-n/2}\int_{\mathbb{R}^n}e^{-\frac{|x|^2}{4t}}\,dx = (4\pi t)^{-n/2}\int_0^\infty r^{n-1}e^{-\frac{r^2}{4t}}\,dr\int_{|\theta|=1}d\theta\\
&= \omega_n(4\pi t)^{-n/2}\int_0^\infty r^{n-1}e^{-\frac{r^2}{4t}}\,dr = \omega_n(4\pi t)^{-n/2}\int_0^\infty(4st)^{\frac{n-1}{2}}e^{-s}\,\frac12\sqrt{4t}\,\frac{ds}{\sqrt{s}}\\
&= \frac{\omega_n}{2}\pi^{-n/2}\int_0^\infty s^{n/2-1}e^{-s}\,ds = \frac{\omega_n}{2}\pi^{-n/2}\Gamma(n/2) = \frac12\,\frac{2\pi^{n/2}}{\Gamma(n/2)}\,\pi^{-n/2}\Gamma(n/2) = 1.
\end{aligned}$$
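The computation above can be confirmed numerically. The sketch below (n = 1, numpy quadrature; the closed form used for the comparison is an assumption verified separately by Gaussian convolution identities) checks the normalization of $K_t$ and compares $f * K_t$ with its exact value for a Gaussian initial datum.

```python
import numpy as np

# Gaussian kernel in one dimension and the convolution u = f * K_t from Theorem 1.
# For f(x) = exp(-x^2) the convolution equals exp(-x^2/(1 + 4t)) / sqrt(1 + 4t).

def K(x, t):
    return (4.0 * np.pi * t) ** -0.5 * np.exp(-x ** 2 / (4.0 * t))

x = np.linspace(-40.0, 40.0, 80001)
for t in [0.01, 0.5, 2.0]:
    print(np.trapz(K(x, t), x))                   # ~1 for every t > 0

f = np.exp(-x ** 2)
t = 0.5
pts = np.array([-1.0, 0.0, 2.0])
u = np.array([np.trapz(f * K(x0 - x, t), x) for x0 in pts])
exact = np.exp(-pts ** 2 / (1 + 4 * t)) / np.sqrt(1 + 4 * t)
print(u, exact)                                    # the two rows agree
```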
Theorem 1. Suppose that $f \in L^\infty(\mathbb{R}^n)$ is uniformly continuous. Then $u(x, t) := (f * K_t)(x)$ satisfies $\partial_t u - \Delta u = 0$ and
$$\|u(\cdot, t) - f(\cdot)\|_{L^\infty(\mathbb{R}^n)} \to 0$$
as t → +0.
Proof. For fixed t > 0,
$$\Delta_x K_t(x - y) = (4\pi t)^{-n/2}e^{-\frac{|x-y|^2}{4t}}\left(\frac{|x - y|^2}{4t^2} - \frac{n}{2t}\right),$$
and for fixed |x − y| ≠ 0,
$$\partial_t K_t(x - y) = (4\pi t)^{-n/2}e^{-\frac{|x-y|^2}{4t}}\left(\frac{|x - y|^2}{4t^2} - \frac{n}{2t}\right).$$
Therefore $(\partial_t - \Delta_x)K_t(x - y) = 0$.
But we can differentiate (with respect to x and t) under the integral sign since this
integral will be absolutely convergent for any t > 0. That’s why we may conclude that
∂t u(x, t) − ∆x u(x, t) = 0.
It remains only to verify the initial condition. We have
$$\begin{aligned}
u(x, t) - f(x) &= (f * K_t)(x) - f(x) = \int_{\mathbb{R}^n}f(y)K_t(x - y)\,dy - f(x)\\
&= \int_{\mathbb{R}^n}f(x - z)K_t(z)\,dz - \int_{\mathbb{R}^n}f(x)K_t(z)\,dz = \int_{\mathbb{R}^n}\bigl(f(x - z) - f(x)\bigr)K_t(z)\,dz\\
&= \int_{\mathbb{R}^n}\bigl(f(x - \eta\sqrt t) - f(x)\bigr)K_1(\eta)\,d\eta.
\end{aligned}$$
The assumptions on f imply that
$$|u(x, t) - f(x)| \le \sup_{x\in\mathbb{R}^n,\,|\eta|<R}\bigl|f(x - \eta\sqrt t) - f(x)\bigr|\int_{\mathbb{R}^n}K_1(\eta)\,d\eta + 2\|f\|_{L^\infty(\mathbb{R}^n)}\int_{|\eta|\ge R}K_1(\eta)\,d\eta < \varepsilon/2 + \varepsilon/2$$
for small t and for R large enough. So we can see that u(x, t) is continuous (even uniformly continuous and bounded) for $(x, t) \in \mathbb{R}^n\times[0, \infty)$ and u(x, 0) = f(x).
Proof. We can differentiate under the integral defining u as often as we please, because
the exponential function decreases at infinity faster than any polynomial. Thus, the
heat equation takes arbitrary initial data (bounded and uniformly continuous) and
smooths them out.
then u ≡ 0.
where $\vec F = (\psi\partial_1\varphi - \varphi\partial_1\psi, \ldots, \psi\partial_n\varphi - \varphi\partial_n\psi, \varphi\psi)$. Given $x_0 \in \mathbb{R}^n$ and $t_0 > 0$ let us take
$$\psi(x, t) = u(x, t), \qquad \varphi(x, t) = K_{t_0-t}(x - x_0).$$
Then
$$\partial_t\psi - \Delta\psi = 0, \quad t > 0, \qquad \partial_t\varphi + \Delta\varphi = 0, \quad t < t_0.$$
we obtain
$$\begin{aligned}
0 = \int_{\partial\Omega}\vec F\cdot\nu\,d\sigma &= \int_{|x|\le r}u(x, b)K_{t_0-b}(x - x_0)\,dx - \int_{|x|\le r}u(x, a)K_{t_0-a}(x - x_0)\,dx\\
&\quad + \int_a^b dt\int_{|x|=r}\sum_{j=1}^n\frac{x_j}{r}\bigl(u(x, t)\partial_j K_{t_0-t}(x - x_0) - K_{t_0-t}(x - x_0)\partial_j u(x, t)\bigr)\,d\sigma(x).
\end{aligned}$$
Letting r → ∞, the last sum vanishes by assumptions (9.3). That's why we have
$$0 = \int_{\mathbb{R}^n}u(x, a)K_{t_0-a}(x - x_0)\,dx - \int_{\mathbb{R}^n}u(x, b)K_{t_0-b}(x - x_0)\,dx.$$
As we know from the proof of Theorem 1, for b → t₀ − 0 the second term tends to u(x₀, t₀) and for a → +0 the first term tends to
$$\int_{\mathbb{R}^n}u(x, 0)K_{t_0}(x - x_0)\,dx = 0,$$
because u(x, 0) = 0. Hence we have finally that u(x₀, t₀) = 0 for all $x_0 \in \mathbb{R}^n$, t₀ > 0.
Theorem 3. The kernel Kt (x) is a fundamental solution for the heat operator.
That's why we can apply the dominated convergence theorem and obtain
$$\lim_{\varepsilon\to+0}\int_{\mathbb{R}^n}K_\varepsilon(x, t)\,dx = \int_{\mathbb{R}^n}K_t(x)\,dx,$$
or
$$\langle\partial_t K_\varepsilon - \Delta_x K_\varepsilon, \varphi\rangle \to \varphi(0), \quad \varphi \in C_0^\infty(\mathbb{R}^{n+1}).$$
Using integration by parts we obtain
$$\begin{aligned}
\langle\partial_t K_\varepsilon - \Delta_x K_\varepsilon, \varphi\rangle &= \langle K_\varepsilon, -\partial_t\varphi - \Delta\varphi\rangle = \int_\varepsilon^\infty dt\int_{\mathbb{R}^n}K_t(x)(-\partial_t - \Delta)\varphi(x, t)\,dx\\
&= -\int_{\mathbb{R}^n}dx\int_\varepsilon^\infty K_t(x)\partial_t\varphi(x, t)\,dt - \int_\varepsilon^\infty dt\int_{\mathbb{R}^n}K_t(x)\Delta_x\varphi(x, t)\,dx\\
&= \int_{\mathbb{R}^n}K_\varepsilon(x)\varphi(x, \varepsilon)\,dx + \int_\varepsilon^\infty dt\int_{\mathbb{R}^n}\partial_t K_t(x)\varphi(x, t)\,dx - \int_\varepsilon^\infty dt\int_{\mathbb{R}^n}\Delta_x K_t(x)\varphi(x, t)\,dx\\
&= \int_{\mathbb{R}^n}K_\varepsilon(x)\varphi(x, \varepsilon)\,dx + \int_\varepsilon^\infty dt\int_{\mathbb{R}^n}(\partial_t - \Delta)K_t(x)\varphi(x, t)\,dx\\
&= \int_{\mathbb{R}^n}K_\varepsilon(x)\varphi(x, \varepsilon)\,dx \to \varphi(0, 0), \quad \varepsilon \to 0.
\end{aligned}$$
Let us now consider the heat operator in a bounded domain Ω ⊂ Rn over a time
interval t ∈ [0, T ], 0 < T ≤ ∞. In this case it is necessary to specify the initial
temperature u(x, 0), x ∈ Ω, and also to prescribe a boundary condition on ∂Ω × [0, T ].
The first basic result concerning such problems is the maximum principle.
Proof. Given ε > 0, set $v(x, t) := u(x, t) + \varepsilon|x|^2$. Then $\partial_t v - \Delta v = -2n\varepsilon$. Suppose 0 < T′ < T. If the maximum of v in Ω × [0, T′] occurs at an interior point of Ω × (0, T′), then the first derivatives $\nabla_{x,t}v$ vanish there and the second derivative $\partial_j^2 v$ is nonpositive for every j = 1, 2, . . . , n (consider v(x, t) as a function of the single variable $x_j$). In particular, $\partial_t v = 0$ and Δv ≤ 0 there, which contradicts $\partial_t v - \Delta v = -2n\varepsilon < 0$, since the latter would force $\Delta v = 2n\varepsilon > 0$.
Likewise, if the maximum occurs in Ω × {T′}, then $\partial_t v(x, T') \ge 0$ and $\Delta v(x, T') \le 0$, which again contradicts $\partial_t v - \Delta v < 0$. Therefore,
$$\max_{\Omega\times[0,T]}u \le \max_{(\Omega\times\{0\})\cup(\partial\Omega\times[0,T])}u.$$
u(x, t) = F(x)G(t). Then
$$\partial_t u - \Delta u = FG' - G\Delta_x F = 0$$
if and only if
$$\frac{G'}{G} = \frac{\Delta F}{F} := -\lambda^2,$$
or
$$G' + \lambda^2 G = 0, \qquad \Delta F + \lambda^2 F = 0,$$
for some constant λ. The first equation has the general solution
$$G(t) = ce^{-\lambda^2 t},$$
where c is an arbitrary constant. Without loss of generality we assume that c = 1. It follows from (9.4) that
$$\begin{cases}\Delta F = -\lambda^2 F, & \text{in } \Omega\\ F = 0, & \text{on } \partial\Omega,\end{cases} \qquad (9.5)$$
because u(x, t) = F(x)G(t) and G(0) = 1.
It remains to solve (9.5), which is an eigenvalue (spectral) problem for the Laplacian with Dirichlet boundary condition. It is known that the problem (9.5) has infinitely many solutions $\{F_j(x)\}_{j=1}^\infty$ with corresponding $\{\lambda_j^2\}_{j=1}^\infty$. The numbers $-\lambda_j^2$ are called eigenvalues and $F_j(x)$ are called eigenfunctions of the Laplacian. It is also known that $\lambda_j > 0$, j = 1, 2, . . ., $\lambda_j^2 \to \infty$, and $\{F_j(x)\}_{j=1}^\infty$ can be chosen as a complete orthonormal set in $L^2(\Omega)$ (i.e., $\{F_j(x)\}_{j=1}^\infty$ forms an orthonormal basis of $L^2(\Omega)$). This fact allows us to expand
$$f(x) = \sum_{j=1}^\infty f_j F_j(x), \qquad (9.6)$$
where $f_j = (f, F_j)_{L^2(\Omega)}$ are called the Fourier coefficients of f with respect to $\{F_j\}_{j=1}^\infty$.
If we take now
$$u(x, t) = \sum_{j=1}^\infty f_j F_j(x)e^{-\lambda_j^2 t}, \qquad (9.7)$$
then, differentiating formally term by term, u(x, t) from (9.7) satisfies the heat equation and u(x, t) = 0 on ∂Ω × (0, ∞). It remains to prove that u(x, t) satisfies the initial condition and to determine for which functions f(x) the series (9.6) converges and in what sense. This is the main question in the Fourier method.
It is clear that the series (9.6) and (9.7) (for t ≥ 0) converge in the sense of $L^2(\Omega)$. It is also clear that if $f \in C^1(\Omega)$ vanishes at the boundary then u will vanish on ∂Ω × (0, ∞), and one easily verifies that u is a distributional solution of the heat equation (t > 0). Hence it is a classical solution since $u(x, t) \in C^\infty(\Omega\times(0, \infty))$ (see Corollary 2 of Theorem 1).
Similar considerations apply to the problem
$$\begin{cases}\partial_t u - \Delta u = 0, & \text{in } \Omega\times(0, \infty)\\ u(x, 0) = f(x), & \text{in } \Omega\\ \partial_\nu u(x, t) = 0, & \text{on } \partial\Omega\times(0, \infty).\end{cases}$$
This problem boils down to finding an orthonormal basis of eigenfunctions for the Laplacian with the Neumann boundary condition. Let us remark that for this problem 0 is always an eigenvalue and the constant function 1 is a corresponding eigenfunction.
Exercise 55. Prove that u(x, t) of the form (9.7) is a distributional solution of the
heat equation in Ω × (0, ∞).
Exercise 56. Show that $\int_0^\pi|u(x, t)|^2\,dx$ is a decreasing function of t > 0, where u(x, t) is the solution of
$$\begin{cases}u_t - u_{xx} = 0, & 0 < x < \pi,\ t > 0\\ u(0, t) = u(\pi, t) = 0, & t > 0.\end{cases}$$
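A small numerical sketch related to Exercise 56. The eigenfunctions $\sqrt{2/\pi}\sin(jx)$ and eigenvalues $j^2$ used below are the standard ones for (0, π), stated here as an assumption since they are not derived in this section; the script assembles the series (9.7) for f(x) = x(π − x) and prints the decreasing values of $\int_0^\pi|u(x, t)|^2\,dx$.

```python
import numpy as np

# Fourier method on (0, pi) with Dirichlet conditions: F_j(x) = sqrt(2/pi) sin(j x),
# lambda_j^2 = j^2, and (9.7) gives u(x, t) = sum_j f_j F_j(x) exp(-j^2 t).

x = np.linspace(0.0, np.pi, 4001)
f = x * (np.pi - x)
J = np.arange(1, 60)
F = np.sqrt(2.0 / np.pi) * np.sin(np.outer(J, x))        # F[j-1] = F_j on the grid
fj = np.trapz(F * f, x, axis=1)                           # Fourier coefficients (f, F_j)

def u(t):
    return fj @ (F * np.exp(-J ** 2 * t)[:, None])

for t in [0.0, 0.1, 0.5, 1.0, 2.0]:
    print(t, np.trapz(u(t) ** 2, x))                      # strictly decreasing in t
```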
10 The Wave Operator
The wave equation is defined as
The wave equation is satisfied exactly by the components of the classical electromag-
netic field in vacuum.
The characteristic variety of (10.1) is
$$\mathrm{char}_x(L) = \left\{(\xi, \tau) \in \mathbb{R}^{n+1} : (\xi, \tau) \ne 0,\ \tau^2 = |\xi|^2\right\},$$
and we call
$$\{(\xi, \tau) \in \mathrm{char}_x(L) : \tau > 0\} \quad\text{and}\quad \{(\xi, \tau) \in \mathrm{char}_x(L) : \tau < 0\}$$
the forward and backward light cone, respectively.
The wave operator is a prototype of hyperbolic operators. It means that the main symbol
$$\sum_{|\alpha|+j=k}a_\alpha(x, t)\xi^\alpha\tau^j$$
Theorem 1. Suppose u(x, t) is a $C^2$ function and that $\partial_t^2 u - \Delta u = 0$. Suppose also that u = 0 and $\partial_\nu u = 0$ on the ball B = {(x, 0) : |x − x₀| ≤ t₀} in the hyperplane t = 0. Then u = 0 in the region Ω = {(x, t) : 0 ≤ t ≤ t₀, |x − x₀| ≤ t₀ − t}.
Proof. By considering real and imaginary parts we may assume that u is real. Denote $B_t = \{x : |x - x_0| \le t_0 - t\}$ and consider the integral
$$E(t) = \frac12\int_{B_t}\left((u_t)^2 + |\nabla_x u|^2\right)dx$$
Straightforward calculations using the divergence theorem show us that
$$\begin{aligned}
I_1 &= \int_{B_t}\left(\sum_{j=1}^n\partial_j\bigl[(\partial_j u)u_t\bigr] - \sum_{j=1}^n(\partial_j^2 u)u_t + u_t u_{tt}\right)dx\\
&= \int_{B_t}u_t(u_{tt} - \Delta_x u)\,dx + \int_{\partial B_t}\sum_{j=1}^n(\partial_j u)\nu_j u_t\,d\sigma(x)\\
&\le \int_{\partial B_t}|u_t|\,|\nabla_x u|\,d\sigma(x) \le \frac12\int_{\partial B_t}\left(|u_t|^2 + |\nabla_x u|^2\right)d\sigma(x) \equiv -I_2.
\end{aligned}$$
Hence
$$\frac{dE}{dt} \le -I_2 + I_2 = 0.$$
But E(t) ≥ 0 and E(0) = 0 due to Cauchy data. Therefore E(t) ≡ 0 if 0 ≤ t ≤ t0 and
thus ∇x,t u = 0 in Ω. Since u(x, 0) = 0 then u(x, t) = 0 also in Ω.
Remark. This theorem shows that the value of u at (x0 , t0 ) depends only on the Cauchy
data of u on the ball {(x, 0) : |x − x0 | ≤ t0 }.
Conversely, the Cauchy data on a region R in the initial (t = 0) hyperplane influ-
ence only those points inside the forward light cones issuing from points of R. A similar result holds when the hyperplane t = 0 is replaced by a space-like hypersurface
S = {(x, t) : t = ϕ(x)}. A surface S is called space-like if its normal vector ν = (ν ′ , ν0 )
satisfies |ν0 | > |ν ′ | at every point of S, i.e., if ν lies inside the light cone. It means that
|∇ϕ| < 1.
Lemma 1. If ϕ is a $C^2$ function on $\mathbb{R}^n$, then $M_\varphi(x, 0) = \varphi(x)$ and
$$\Delta_x M_\varphi(x, r) = \left(\partial_r^2 + \frac{n - 1}{r}\partial_r\right)M_\varphi(x, r).$$
It implies that
Proof. We employ induction with respect to k. If k = 1 then
$$\partial_r^2\left(\frac1r\partial_r\right)^{k-1}\bigl(r^{2k-1}\varphi(r)\bigr) = \partial_r^2(r\varphi) = \partial_r(\varphi + r\varphi') = 2\varphi' + r\varphi''$$
and
$$\left(\frac{\partial_r}{r}\right)^k\bigl(r^{2k}\varphi'\bigr) = \frac{\partial_r}{r}\bigl(r^2\varphi'\bigr) = 2\varphi' + r\varphi''.$$
Assume that
$$\partial_r^2\left(\frac1r\partial_r\right)^{k-1}\bigl(r^{2k-1}\varphi(r)\bigr) = \left(\frac{\partial_r}{r}\right)^k\bigl(r^{2k}\varphi'\bigr).$$
Then
$$\begin{aligned}
\partial_r^2\left(\frac1r\partial_r\right)^k\bigl(r^{2k+1}\varphi(r)\bigr) &= \partial_r^2\left(\frac1r\partial_r\right)^{k-1}\left(\frac1r\partial_r\bigl(r^{2k+1}\varphi\bigr)\right)\\
&= \partial_r^2\left(\frac1r\partial_r\right)^{k-1}\bigl((2k + 1)r^{2k-1}\varphi + r^{2k}\varphi'\bigr)\\
&= (2k + 1)\partial_r^2\left(\frac1r\partial_r\right)^{k-1}\bigl(r^{2k-1}\varphi\bigr) + \partial_r^2\left(\frac1r\partial_r\right)^{k-1}\bigl(r^{2k}\varphi'\bigr)\\
&= (2k + 1)\left(\frac{\partial_r}{r}\right)^k\bigl(r^{2k}\varphi'\bigr) + \left(\frac{\partial_r}{r}\right)^k\bigl(r^{2k}(r\varphi')'\bigr)\\
&= \left(\frac{\partial_r}{r}\right)^k\bigl((2k + 1)r^{2k}\varphi' + r^{2k}(r\varphi')'\bigr)\\
&= \left(\frac{\partial_r}{r}\right)^k\bigl((2k + 1)r^{2k}\varphi' + r^{2k}\varphi' + r^{2k+1}\varphi''\bigr)\\
&= \left(\frac{\partial_r}{r}\right)^k\bigl((2k + 2)r^{2k}\varphi' + r^{2k+1}\varphi''\bigr)\\
&= \left(\frac{\partial_r}{r}\right)^{k+1}\bigl(r^{2k+2}\varphi'\bigr).
\end{aligned}$$
The Corollary of Lemma 1 gives that if u(x, t) is a solution of the wave equation (10.1) in $\mathbb{R}^n\times\mathbb{R}$ then $M_u(x, r, t)$ satisfies (10.3), i.e.,
$$\left(\partial_r^2 + \frac{n - 1}{r}\partial_r\right)M_u = \partial_t^2 M_u,$$
with initial conditions:
Let us set
$$\tilde u(x, r, t) := \left(\frac{\partial_r}{r}\right)^{\frac{n-3}{2}}\bigl(r^{n-2}M_u\bigr) \equiv TM_u, \qquad \tilde f(x, r) := TM_f, \quad \tilde g(x, r) := TM_g \qquad (10.5)$$
for n = 2k + 1, k = 1, 2, . . ..
Moreover, the initial conditions are satisfied due to (10.4) and (10.5). But now, since (10.6) is a one-dimensional problem, we may conclude that $\tilde u(x, r, t)$ from Lemma 3 is equal to
$$\tilde u(x, r, t) = \frac12\left(\tilde f(x, r + t) + \tilde f(x, r - t) + \int_{r-t}^{r+t}\tilde g(x, s)\,ds\right). \qquad (10.7)$$
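Formula (10.7) is d'Alembert's formula for the one-dimensional problem (10.6). As an illustration detached from the spherical-means context, the sketch below applies the same formula to plain one-dimensional data f(x) = sin x, g(x) = cos x, whose solution is sin(x + t); the quadrature and test points are arbitrary choices.

```python
import numpy as np

# d'Alembert's formula for the 1-d wave equation:
#   u(x, t) = (1/2) * ( f(x + t) + f(x - t) + \int_{x-t}^{x+t} g(s) ds ).

def dalembert(x, t, f=np.sin, g=np.cos):
    s = np.linspace(x - t, x + t, 2001)
    return 0.5 * (f(x + t) + f(x - t) + np.trapz(g(s), s))

for (x, t) in [(0.0, 0.3), (1.0, 2.0), (-2.5, 0.7)]:
    print(dalembert(x, t), np.sin(x + t))    # the two values coincide
```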
Lemma 4. If n = 2k + 1, k = 1, 2, . . ., then
$$M_u(x, 0, t) = \lim_{r\to 0}\frac{\tilde u(x, r, t)}{(n - 2)!!\,r},$$
Proof. By (10.5) we have
$$\tilde u(x, r, t) = \left(\frac{\partial_r}{r}\right)^{k-1}\bigl(r^{2k-1}M_u\bigr) = \left(\frac{\partial_r}{r}\right)^{k-2}\bigl((2k - 1)r^{2k-3}M_u + r^{2k-2}\partial_r M_u\bigr) = (2k - 1)(2k - 3)\cdots 1\cdot M_u\,r + O(r^2),$$
or
$$\frac{\tilde u(x, r, t)}{(n - 2)!!\,r} = M_u + O(r).$$
Hence
$$M_u(x, 0, t) = \lim_{r\to 0}\frac{\tilde u(x, r, t)}{(n - 2)!!\,r}.$$
But by definition of $M_u$ we have that $M_u(x, 0, t) = u(x, t)$, where u(x, t) is the solution of (10.2). The initial conditions in (10.2) are satisfied due to (10.5). Next, since $\tilde u(x, r, t)$ satisfies (10.7), then
$$\lim_{r\to 0}\frac{\tilde u(x, r, t)}{(n - 2)!!\,r} = \frac{1}{2(n - 2)!!}\left(\lim_{r\to 0}\frac{\tilde f(x, r + t) + \tilde f(x, r - t)}{r} + \lim_{r\to 0}\frac1r\int_{r-t}^{r+t}\tilde g(x, s)\,ds\right) = \frac{1}{2(n - 2)!!}\left(\partial_r\tilde f\big|_{r=t} + \partial_r\tilde f\big|_{r=-t} + \tilde g(x, t) - \tilde g(x, -t)\right),$$
because $\tilde f(x, t)$ and $\tilde g(x, t)$ are odd functions of t. That's why we finally obtain
$$\lim_{r\to 0}\frac{\tilde u(x, r, t)}{(n - 2)!!\,r} = \frac{1}{(n - 2)!!}\left(\partial_r\tilde f\big|_{r=t} + \tilde g(x, t)\right).$$
Now we are in a position to prove the main theorem for odd n ≥ 3.
Theorem 2. Suppose that n = 2k + 1, k = 1, 2, . . .. If $f \in C^{\frac{n+3}{2}}(\mathbb{R}^n)$ and $g \in C^{\frac{n+1}{2}}(\mathbb{R}^n)$ then
$$u(x, t) = \frac{1}{(n - 2)!!\,\omega_n}\left\{\partial_t\left(\frac{\partial_t}{t}\right)^{\frac{n-3}{2}}\left(t^{n-2}\int_{|y|=1}f(x + ty)\,d\sigma(y)\right) + \left(\frac{\partial_t}{t}\right)^{\frac{n-3}{2}}\left(t^{n-2}\int_{|y|=1}g(x + ty)\,d\sigma(y)\right)\right\} \qquad (10.9)$$
solves (10.2).
Proof. Due to Lemmata 3 and 4 u(x, t) given by (10.8) is the solution of the wave
equation. It remains only to check that this u satisfies the initial conditions. But
(10.9) gives us for small t that
It implies that
The last equality follows from the fact that Mf (x, t) is even in t and so its derivative
vanishes at t = 0.
Remark. If n = 3 then (10.9) becomes
$$u(x, t) = \frac{1}{4\pi}\left\{\partial_t\left(t\int_{|y|=1}f(x + ty)\,d\sigma(y)\right) + t\int_{|y|=1}g(x + ty)\,d\sigma(y)\right\} \equiv \frac{1}{4\pi}\left\{\int_{|y|=1}f(x + ty)\,d\sigma(y) + t\int_{|y|=1}\nabla f(x + ty)\cdot y\,d\sigma(y) + t\int_{|y|=1}g(x + ty)\,d\sigma(y)\right\}.$$
The solution of (10.2) for even n is readily derived from the solution for odd n by the “method of descent”. This is just the trivial observation: if u is a solution of the
wave equation in Rn+1 × R that does not depend on xn+1 then u satisfies the wave
equation in Rn × R. Thus to solve (10.2) in Rn × R with even n, we think of f and g
as functions on Rn+1 which are independent of xn+1 .
Theorem 3. Suppose that n is even. If $f \in C^{\frac{n+4}{2}}(\mathbb{R}^n)$ and $g \in C^{\frac{n+2}{2}}(\mathbb{R}^n)$ then the function
$$u(x, t) = \frac{2}{(n - 1)!!\,\omega_{n+1}}\left\{\partial_t\left(\frac{\partial_t}{t}\right)^{\frac{n-2}{2}}\left(t^{n-1}\int_{|y|\le 1}\frac{f(x + ty)}{\sqrt{1 - |y|^2}}\,dy\right) + \left(\frac{\partial_t}{t}\right)^{\frac{n-2}{2}}\left(t^{n-1}\int_{|y|\le 1}\frac{g(x + ty)}{\sqrt{1 - |y|^2}}\,dy\right)\right\} \qquad (10.10)$$
solves (10.2).
Proof. If n is even then n + 1 is odd and n + 1 ≥ 3. That's why we can apply (10.9) in $\mathbb{R}^{n+1}\times\mathbb{R}$ to get that
$$u(x, t) = \frac{1}{(n - 1)!!\,\omega_{n+1}}\left\{\partial_t\left(\frac{\partial_t}{t}\right)^{\frac{n-2}{2}}\left(t^{n-1}\int_{y_1^2+\cdots+y_n^2+y_{n+1}^2=1}f(x + ty + ty_{n+1})\,d\sigma(\tilde y)\right) + \left(\frac{\partial_t}{t}\right)^{\frac{n-2}{2}}\left(t^{n-1}\int_{y_1^2+\cdots+y_n^2+y_{n+1}^2=1}g(x + ty + ty_{n+1})\,d\sigma(\tilde y)\right)\right\}, \qquad (10.11)$$
where $\tilde y = (y, y_{n+1})$, solves (10.2) in $\mathbb{R}^{n+1}\times\mathbb{R}$ (formally). But if we assume now that f and g do not depend on $x_{n+1}$ then u(x, t) does not depend on $x_{n+1}$ either and
solves (10.2) in $\mathbb{R}^n\times\mathbb{R}$. It remains only to calculate the integrals in (10.11) under this assumption. We have
$$\int_{|y|^2+y_{n+1}^2=1}f(x + ty + ty_{n+1})\,d\sigma(\tilde y) = \int_{|y|^2+y_{n+1}^2=1}f(x + ty)\,d\sigma(\tilde y) = 2\int_{|y|\le 1}f(x + ty)\,\frac{dy}{\sqrt{1 - |y|^2}},$$
because we have the upper and lower hemispheres of the sphere $|y|^2 + y_{n+1}^2 = 1$. Similarly for the second integral in (10.11). This proves the theorem.
Remark. If n = 2 then (10.10) becomes
$$u(x, t) = \frac{1}{2\pi}\left\{\partial_t\left(t\int_{|y|\le 1}\frac{f(x + ty)}{\sqrt{1 - |y|^2}}\,dy\right) + t\int_{|y|\le 1}\frac{g(x + ty)}{\sqrt{1 - |y|^2}}\,dy\right\}.$$
Now we consider the Cauchy problem for the inhomogeneous wave equation
$$\begin{cases}\partial_t^2 u - \Delta_x u = w(x, t)\\ u(x, 0) = f(x), \quad \partial_t u(x, 0) = g(x).\end{cases} \qquad (10.12)$$
By linearity it splits into
$$\begin{cases}\partial_t^2 u_1 - \Delta u_1 = 0\\ u_1(x, 0) = f(x), \quad \partial_t u_1(x, 0) = g(x)\end{cases} \qquad (A)$$
and
$$\begin{cases}\partial_t^2 u_2 - \Delta u_2 = w\\ u_2(x, 0) = \partial_t u_2(x, 0) = 0,\end{cases} \qquad (B)$$
with u = u₁ + u₂.
For the problem (B) we will use a method known as Duhamel's principle.
Theorem 4. Suppose $w \in C^{\left[\frac n2\right]+1}(\mathbb{R}^n\times\mathbb{R})$. For s ∈ ℝ let v(x, t; s) be the solution of
$$\begin{cases}\partial_t^2 v(x, t; s) - \Delta_x v(x, t; s) = 0\\ v(x, 0; s) = 0, \quad \partial_t v(x, 0; s) = w(x, s).\end{cases}$$
Then
$$u(x, t) := \int_0^t v(x, t - s; s)\,ds$$
solves (B).
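A hedged numerical sketch of Duhamel's principle in one space dimension, where the auxiliary solution v is supplied by d'Alembert's formula, $v(x, t; s) = \frac12\int_{x-t}^{x+t}w(y, s)\,dy$; the manufactured solution used for the comparison is an assumption made only for this test.

```python
import numpy as np

# Duhamel's principle (Theorem 4) in 1-d:  u(x, t) = \int_0^t v(x, t - s; s) ds,
# v given by d'Alembert.  Test case: u(x, t) = t^3 sin x, so that
# w = u_tt - u_xx = (6t + t^3) sin x and the initial data of (B) vanish.

w = lambda y, s: (6.0 * s + s ** 3) * np.sin(y)

def duhamel(x, t, ns=400, ny=400):
    s = np.linspace(0.0, t, ns)
    vals = np.empty(ns)
    for i, si in enumerate(s):
        y = np.linspace(x - (t - si), x + (t - si), ny)
        vals[i] = 0.5 * np.trapz(w(y, si), y)            # v(x, t - si; si)
    return np.trapz(vals, s)

for (x, t) in [(0.5, 1.0), (2.0, 1.5)]:
    print(duhamel(x, t), t ** 3 * np.sin(x))             # agree to a few 1e-5
```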
Proof. By definition of u(x, t) it is clear that u(x, 0) = 0. We also have
$$\partial_t u(x, t) = v(x, 0; t) + \int_0^t\partial_t v(x, t - s; s)\,ds.$$
It implies that $\partial_t u(x, 0) = v(x, 0; 0) = 0$. Differentiating once more in t we can see that
$$\begin{aligned}
\partial_t^2 u(x, t) &= \partial_t\bigl(v(x, 0; t)\bigr) + \partial_t v(x, 0; t) + \int_0^t\partial_t^2 v(x, t - s; s)\,ds\\
&= w(x, t) + \int_0^t\partial_t^2 v(x, t - s; s)\,ds\\
&= w(x, t) + \int_0^t\Delta_x v(x, t - s; s)\,ds = w(x, t) + \Delta_x u.
\end{aligned}$$
But this ordinary differential equation with initial conditions can be easily solved to obtain
$$\hat u(\xi, t) = \hat f(\xi)\cos(|\xi|t) + \hat g(\xi)\frac{\sin(|\xi|t)}{|\xi|} \equiv \hat f(\xi)\,\partial_t\frac{\sin(|\xi|t)}{|\xi|} + \hat g(\xi)\frac{\sin(|\xi|t)}{|\xi|}.$$
It implies that
$$u(x, t) = F^{-1}\left(\hat f(\xi)\,\partial_t\frac{\sin(|\xi|t)}{|\xi|}\right) + F^{-1}\left(\hat g(\xi)\frac{\sin(|\xi|t)}{|\xi|}\right) = f * \partial_t\left((2\pi)^{-n/2}F^{-1}\frac{\sin(|\xi|t)}{|\xi|}\right) + g * \left((2\pi)^{-n/2}F^{-1}\frac{\sin(|\xi|t)}{|\xi|}\right) = f * \partial_t\Phi(x, t) + g * \Phi(x, t), \qquad (10.13)$$
where $\Phi(x, t) = (2\pi)^{-n/2}F^{-1}\frac{\sin(|\xi|t)}{|\xi|}$.
The next step is to try to solve the equation
That's why $\hat F$ must be a solution of $\partial_t^2 u + |\xi|^2 u = 0$ for t ≠ 0. Therefore
$$\hat F(\xi, t) = \begin{cases}a(\xi)\cos(|\xi|t) + b(\xi)\sin(|\xi|t), & t < 0\\ c(\xi)\cos(|\xi|t) + d(\xi)\sin(|\xi|t), & t > 0.\end{cases}$$
This gives two equations for the four unknown coefficients a, b, c and d. But it is reasonable to require F(x, t) ≡ 0 for t < 0. Hence, a = b = c = 0 and $d = (2\pi)^{-n/2}\frac{1}{|\xi|}$. That's why
$$\hat F(\xi, t) = \begin{cases}(2\pi)^{-n/2}\frac{\sin(|\xi|t)}{|\xi|}, & t > 0\\ 0, & t < 0.\end{cases} \qquad (10.14)$$
If we compare (10.13) and (10.14) we may conclude that
$$F(x, t) = (2\pi)^{-n/2}F_\xi^{-1}\left(\frac{\sin(|\xi|t)}{|\xi|}\right), \quad t > 0,$$
and
$$\Phi_+(x, t) = \begin{cases}\Phi(x, t), & t > 0\\ 0, & t < 0\end{cases}$$
is the fundamental solution of the wave equation, i.e., F(x, t) with t > 0.
There is one more observation. If we compare (10.9) and (10.10) with (10.13) then we may conclude that these three formulae are the same. Hence, we may calculate the inverse Fourier transform of
$$(2\pi)^{-n/2}\frac{\sin(|\xi|t)}{|\xi|}$$
in odd and even dimensions with (10.9) and (10.10), respectively. Actually, the result is presented in these two formulae.
When solving the wave equation in the region Ω × (0, ∞), where Ω is a bounded
domain in Rn , it is necessary to specify not only Cauchy data on Ω × {0} but also some
conditions on ∂Ω × (0, ∞) to tell the wave what to do when it hits the boundary. If the
boundary conditions on ∂Ω × (0, ∞) are independent of t, the method of separation of
variables can be used.
Let us (for example) consider the following problem:
$$\begin{cases}\partial_t^2 u - \Delta_x u = 0, & \text{in } \Omega\times(0, \infty)\\ u(x, 0) = f(x), \quad \partial_t u(x, 0) = g(x), & \text{in } \Omega\\ u(x, t) = 0, & \text{on } \partial\Omega\times(0, \infty).\end{cases} \qquad (10.15)$$
We can look for a solution u in the form u(x, t) = F(x)G(t) and get
$$\begin{cases}\Delta F(x) + \lambda^2 F(x) = 0, & \text{in } \Omega\\ F(x) = 0, & \text{on } \partial\Omega,\end{cases} \qquad (10.16)$$
and
$$G''(t) + \lambda^2 G(t) = 0, \quad 0 < t < \infty. \qquad (10.17)$$
The general solution of (10.17) is
At the same time f(x) and g(x) have the $L^2(\Omega)$ representations
$$f(x) = \sum_{j=1}^\infty f_j F_j(x), \qquad g(x) = \sum_{j=1}^\infty g_j F_j(x), \qquad (10.19)$$
where $f_j = (f, F_j)_{L^2}$ and $g_j = (g, F_j)_{L^2}$. It follows from (10.15) and (10.18) that
$$u(x, 0) = \sum_{j=1}^\infty a_j F_j(x), \qquad u_t(x, 0) = \sum_{j=1}^\infty\lambda_j b_j F_j(x). \qquad (10.20)$$
It is clear that all the series (10.18), (10.19) and (10.20) converge in $L^2(\Omega)$, because $\{F_j\}_{j=1}^\infty$ is an orthonormal basis in $L^2(\Omega)$. It remains only to investigate the convergence of these series in stronger norms (which depends on f and g, or more precisely, on their smoothness).
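As an illustration of the expansion (10.18)–(10.20), the sketch below takes Ω = (0, π), where $F_j(x) = \sqrt{2/\pi}\sin(jx)$ and $\lambda_j = j$ (these concrete eigenfunctions, and the mode form $a_j\cos\lambda_j t + b_j\sin\lambda_j t$, are assumptions of the example rather than results derived in this excerpt), computes $f_j$, $g_j$ by quadrature and compares the resulting series with a known exact solution.

```python
import numpy as np

# Eigenfunction expansion for the vibrating string on (0, pi):
#   u(x, t) = sum_j ( f_j cos(j t) + (g_j / j) sin(j t) ) F_j(x),
# with f_j = (f, F_j), g_j = (g, F_j).  For f = sin x, g = 2 sin 2x the exact solution
# is u(x, t) = sin x cos t + sin 2x sin 2t.

x = np.linspace(0.0, np.pi, 4001)
f = np.sin(x)
g = 2.0 * np.sin(2.0 * x)
J = np.arange(1, 40)
F = np.sqrt(2.0 / np.pi) * np.sin(np.outer(J, x))
fj = np.trapz(F * f, x, axis=1)
gj = np.trapz(F * g, x, axis=1)

def u(t):
    coeff = fj * np.cos(J * t) + (gj / J) * np.sin(J * t)
    return coeff @ F

t = 0.8
exact = np.sin(x) * np.cos(t) + np.sin(2.0 * x) * np.sin(2.0 * t)
print(np.max(np.abs(u(t) - exact)))        # small: the truncated series matches the solution
```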
The Neumann problem, with $\partial_\nu u(x, t) = 0$ for x ∈ ∂Ω, can be considered in a similar manner.
Index
δ-function, 8
a translation, 57
approximation to the identity, 5
biharmonic equation, 11
Burgers equation, 16
Cauchy data, 17
Cauchy problem, 17
Cauchy-Kowalevski theorem, 18
Cauchy-Riemann operator, 11, 63
characteristic, 11
characteristic form, 11
characteristic variety, 11
continuous kernel, 84
convolution, 4
d'Alembert formula, 48
differential operator, 10
Dirichlet problem, 51
distribution, 8
distributional solution, 10
divergence theorem, 4
double layer potential, 84
Duhamel's principle, 115
Gaussian kernel, 100
Gibbs phenomenon, 29
gradient, 10
Green's function, 71
Green's identities, 59
Hans Lewy example, 21
harmonic function, 59
Harnack's inequality, 80
heat equation, 10, 32
heat operator, 100
hyperplane, 3
hypersurface, 3
ill-posed problem, 21
integral curves, 12
interior Dirichlet problem, 82
interior Neumann problem, 82
Korteweg-de Vries equation, 11
Laplace equation, 51
Laplace operator, 10, 57
Laplacian, 10, 57
linear superposition principle, 35
Liouville's theorem, 62
Plancherel theorem, 7
Poisson equation, 10
Poisson integral, 74
Poisson kernel, 74
principal symbol, 11
quasi-linear equation, 14
Reflection Principle, 80
regular distribution, 8
removable singularity, 80
Riemann-Lebesgue lemma, 6
rotation, 57
Schwartz space, 7
separation of variables, 33
Sine-Gordon equation, 11
single layer potential, 84
spherical mean, 109
support, 7
telegrapher's equation, 11
tempered distribution, 9
tubular neighborhood, 93