
Math 222b (Partial Differential Equations 2) Lecture Notes
Izak Oltman
last updated: June 23, 2020

These are lecture notes for math 222b (second semester graduate partial differential
equations) at UC Berkeley instructed by Professor Daniel Tataru during the Spring of 2020.
These notes frequently reference Evans’ text Partial Differential Equations.

Contents

1 Elliptic PDEs
  1.1 Second order Elliptic equations
  1.2 Lax-Milgram
  1.3 Fredholm Alternative
  1.4 Elliptic Regularity
  1.5 Maximum Principle
  1.6 Eigenfunctions

2 Parabolic Equations
  2.1 Higher Regularity for solutions to parabolic equations
  2.2 Maximum Principle for Parabolic Equations

3 Hyperbolic Equations
  3.1 Energy Estimates
  3.2 Symmetries of Minkowski Space
  3.3 Finite speed of propagation
  3.4 Variable Coefficient Equations
  3.5 Higher Regularity Estimates
  3.6 Hyperbolic Systems
  3.7 Linear Semigroup
  3.8 Homework Problems

4 Nonlinear PDEs
  4.1 First Order Nonlinear PDEs
  4.2 Conservation Laws

Appendices

A Sobolev Space
  A.1 Approximation
  A.2 Extension
  A.3 Trace
  A.4 Inequalities
  A.5 Compactness

B 2nd order elliptic PDEs
  B.1 Eigenfunctions
  B.2 Unique Continuation

C Parabolic Equations
  C.1 Higher Regularity
  C.2 Maximum Principle

D Hyperbolic Equations

Index


Lecture 01 (1/21)

Last semester, the topics covered were:


1. theory of distributions

2. Fourier transform

3. fundamental solutions. This deals with P(D)¹, for operators −∆, ∂̄ (elliptic operators), ∂_t − ∆ (heat), ◻ (wave equation)

4. Sobolev spaces: H^s, W^{k,p}, C^k, C^{k,σ}


This semester we will study:
1. Linear PDEs:

(a) elliptic (Laplacian like)


(b) parabolic (heat like)
(c) hyperbolic (wave like)

2. nonlinear PDEs

(a) first order equations: method of characteristics


(b) Hamilton-Jacobi equations
(c) conservation laws

If time permits:
1. Schrödinger equation

2. nonlinear PDEs

1 Elliptic PDEs
1.1 Second order Elliptic equations
Recall that the model operator is ∆ = ∂_1² + ⋯ + ∂_n² in R^n with n ≥ 2. Last semester, we computed the fundamental solution for ∆, K(x), given by:

K(x) = { c_2 log|x|,     n = 2
       { c_n |x|^{2−n},  n ≥ 3

A fundamental solution satisfies:

−∆K = δ_0
¹ notation: we will use the notation D_j = (1/i) ∂_j; with this we get (D_j f)^∧ = ξ_j f̂
i


What do we do with this? Suppose we would like to solve ∆u = f in R^n; then one solution of this equation is given by u = K ∗ f (this convolution makes sense even for distributions f). This fundamental solution isn't unique, but it is the only one that decays at infinity (or close enough).
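As a quick numerical sanity check (a sketch, not from the notes; the test point and step size are arbitrary choices): away from the origin the kernel is harmonic, so a five-point discrete Laplacian of the 2D kernel log|x| should nearly vanish.

```python
import math

# Five-point discrete Laplacian of u(x, y) = log|x| at a point away from 0.
# Since -ΔK = δ₀, K is harmonic away from the origin, so this should be ~0.
def u(x, y):
    return math.log(math.hypot(x, y))

h = 1e-3
x0, y0 = 1.0, 0.5   # arbitrary point with |x| bounded away from 0
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / h**2
# lap is O(h²) small, consistent with ΔK = 0 away from the origin
```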

If we take the Fourier transform, we get

(∆u)^∧ = −|ξ|² û

so we call −|ξ|² the symbol of ∆.

Let's say we would like to solve −∆u = f; this is the same as |ξ|² û = f̂, therefore û = |ξ|^{−2} f̂.

Large |ξ| corresponds to high frequencies while small |ξ| corresponds to low frequencies.

Elliptic equations: Consider P(D) a constant coefficient differential operator. Given P(D)u = f, then P(ξ) û = f̂, so û = f̂ / P(ξ). We would like this division to make sense.

Definition 1.1. An elliptic operator is an operator such that P(ξ) ≠ 0 for large |ξ|.


Example 1.1. 1 − ∆ ↔ 1 + |ξ|² (elliptic), ∆² ↔ |ξ|⁴ (elliptic), ∂_1 ↔ iξ_1 (not elliptic for n ≥ 2), ∂̄ ↔ ξ_1 + iξ_2, ∂_t − ∆ ↔ iτ + |ξ|².
Definition 1.2 (Elliptic operator of order n). P(D) such that P(ξ) is a polynomial of order n, and |P(ξ)| ≈ |ξ|^n for large |ξ|. Sometimes this is called uniformly elliptic.
We will put the emphasis on second order elliptic operators. Our main model is −∆. More generally, we could take a positive matrix A ∈ M_{n×n} and look at the symbol P(ξ) = Aξ·ξ ≈ |ξ|²; then we would have P(D) = −a^{jk} ∂_j ∂_k (where we are using Einstein summation notation).

Also, we could look at complex elliptic operators.

Example 1.2. −∆ + i∂_1∂_2 ↔ |ξ|² − iξ_1ξ_2
We could also look at systems: u = (u_1, …, u_k); then our partial differential operator looks like:

P^{jl}(D) u_l = f_j

for j = 1, …, k, and the symbols P^{jl}(ξ) become a matrix. Taking the Fourier transform:

P^{jl}(ξ) û_l(ξ) = f̂_j(ξ)

so for each ξ we have a system to solve. Ellipticity says something about the determinant: |det P(ξ)| ≈ |ξ|^{2k}.

For this class, second order elliptic operators will be something of the form:

P = −a^{jk}(x) ∂_j ∂_k + b^j(x) ∂_j + c(x)

with x ∈ R^n, with the condition that c|ξ|² ≤ a^{jk}(x) ξ_j ξ_k ≤ C|ξ|² for all x ∈ R^n.


Remark 1.1. We require a^{jk} to be real, but b, c can be complex.


Things to consider:

1. regularity of the coefficients a, b, c

(a) at a minimum, a^{jk} ∈ L^∞

(b) relaxed setting: a^{jk} ∈ Lip² and b, c ∈ L^∞

2. form of the equation: we could also write the differential operator as (in divergence form):

P = −∂_j a^{jk}(x) ∂_k + b^j(x) ∂_j + c(x)

Suppose we were looking at:

∂_j (a^{jk}(x) ∂_k u) = a^{jk}(x) ∂_j ∂_k u + (∂_j a^{jk}(x)) ∂_k u
                        [2nd order]          [1st order]

The other form is known as non-divergence form.


Note that divergence form and non-divergence form are equivalent in the relaxed setting. But they are not equivalent if the a^{ij} are rougher than Lipschitz.
Lecture 02 (1/23)

Recall our discussion of elliptic operators. We can write an elliptic operator in divergence form:

P = −∂_i a^{ij}(x) ∂_j + b^j(x) ∂_j + c(x)

(the first term looks like a divergence)

and we have the non-divergence form:

P = −a^{ij}(x) ∂_i ∂_j + b^j(x) ∂_j + c(x)

The important thing here is the ellipticity condition, which says that (a^{ij}) is a real, positive definite matrix:

c|ξ|² ≤ a^{ij} ξ_i ξ_j ≤ C|ξ|²

If we have constant coefficients, once we neglect lower order terms and keep only the principal part, we have:

P = −a^{ij} ∂_i ∂_j

then in Fourier space, after Fourier transforming Pu = f, we get:

û = f̂ / p(ξ)

² Lipschitz continuous


where

p(ξ) = a^{ij} ξ_i ξ_j

Now if we have variable coefficients with the equation:

−a^{ij}(x) ∂_i ∂_j u = f

what happens if we take the Fourier transform of this? We get:

ã^{ij} ∗ (ξ_i ξ_j û) = f̂

with ã the inverse Fourier transform of a^{ij} (the transform acting in the x variable). We call a^{ij}(x) ξ_i ξ_j the symbol of P.

Consider the model problem: (I − ∆)u = f. We can solve this by taking the Fourier transform, to get:

(1 + |ξ|²) û = f̂,  so  û = (1 + |ξ|²)^{−1} f̂
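This division by the symbol can be carried out concretely with the discrete Fourier transform. A small periodic sketch (the grid size and the test data f = 10 sin(3x), chosen so that the exact solution is sin(3x), are made up for illustration):

```python
import numpy as np

# Solve (I - Δ)u = f on the torus [0, 2π) by dividing by the symbol 1 + k².
# Test data: with u(x) = sin(3x) we have f = (1 + 9) sin(3x).
n = 256
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)           # integer frequencies on the torus
f = 10.0 * np.sin(3 * x)

u_hat = np.fft.fft(f) / (1.0 + k ** 2)     # û = f̂ / (1 + |ξ|²)
u = np.fft.ifft(u_hat).real

error = np.max(np.abs(u - np.sin(3 * x)))  # agrees to machine precision
```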
But now, how can we use this in Sobolev spaces?

A quick reminder of Sobolev spaces:

W^{k,2} = H^k = {u : ∂^α u ∈ L², |α| ≤ k}

Recall, we have the norm:

‖u‖²_{H^k} = Σ_{|α|≤k} ‖∂^α u‖²_{L²}

which makes this a Hilbert space.

We can interpret H^k on the Fourier side. In Fourier space, we have the equivalent norm:

‖u‖²_{H^k} = ‖(1 + |ξ|²)^{k/2} û‖²_{L²}

We can also look at fractional Sobolev spaces by replacing k by s ∈ R to get H^s for any s ∈ R.

Now let's look at the mapping properties of (I − ∆). Suppose that u ∈ H^s; what can we say about (I − ∆)u? The answer is H^{s−2}.

We can see this on the Fourier side by applying Plancherel's formula:

‖(I − ∆)u‖²_{H^{s−2}} = ∫ (1 + |ξ|²)^{s−2} |((I − ∆)u)^∧|² dξ = ∫ (1 + |ξ|²)^s |û|² dξ = ‖u‖²_{H^s}


Therefore (I − ∆) : H^s → H^{s−2} is a bounded and invertible operator.

In particular, if we would like to solve (I − ∆)u = f with f ∈ H^{s−2}, then u ∈ H^s.

These computations remain valid if we consider:

(I − ∆) : W^{k,p} → W^{k−2,p}

and this will be bounded and invertible, for 1 < p < ∞.

Solvability for 2nd order elliptic equations: As a general principle, we would like to look at equations like:

Pu = f

and we will ask about

1. existence

2. uniqueness

3. regularity

For example, if f ∈ H^{s−2}, is there a solution u ∈ H^s? (existence). For uniqueness, the formulation of the question is obvious. For regularity: if f has more derivatives, do the solutions have more derivatives?

Where? The simplest place to work is in R^n. We could also work in a bounded domain of R^n that we usually call Ω, with a boundary condition. The simplest boundary condition is the Dirichlet condition: u|_{∂Ω} = 0.

Local solvability: Given the equation Pu = f, suppose we pick x_0 in our domain and ask to solve in the ball B(x_0, r). Even if we can do this at every point, the solutions may not match, so we don't get a global solution.

Let's look at the variable coefficient divergence form:

P = −∂_i a^{ij} ∂_j + b^j ∂_j + c

and let's assume that a^{ij}, b^j, c ∈ L^∞. Let's look at Pu = f with u ∈ H^s and f ∈ H^{s−2}. Looking at the principal part, we want:

∂_i a^{ij} ∂_j u

to make sense. Note that ∂_j u ∈ H^{s−1}; since we multiply by coefficients that are merely L^∞, these derivatives need to be defined everywhere, they cannot be distributions. Therefore we require that s − 1 ≥ 0. For regularity, we require that s − 1 = 0 (what?); this is because if you multiply an H^0 function by an L^∞ function, you stay in H^0. So we are looking at s = 1. So we must have f ∈ H^{−1} and u ∈ H^1.


Proposition 1.1. P : H^1 → H^{−1} is a linear bounded operator.

To solve the equation Pu = f we need (1) existence and (2) uniqueness.

Let's discuss the property of uniqueness: given f ∈ H^{−1}, if a solution u exists, then it is unique. Another way to think about this is that P : H^1 → H^{−1} is one-to-one. If P admits a bounded inverse P^{−1} : R(P) ⊂ H^{−1} → H^1, this corresponds to having an estimate:

‖u‖_{H^1} ≤ c‖Pu‖_{H^{−1}}    (1)

(here Pu = f); this is the same as saying that:

‖P^{−1} f‖_{H^1} ≤ c‖f‖_{H^{−1}}

Claim 1.1. (1) implies uniqueness.

Proposition 1.2. Suppose that Pu_1 = f and Pu_2 = f. Then P(u_1 − u_2) = 0, and by (1) we get:

‖u_1 − u_2‖_{H^1} ≤ c‖0‖_{H^{−1}} = 0

Existence is the same thing as saying that P is onto, i.e. R(P) = H^{−1}. Recall that given any differential operator, we can get the adjoint operator by the formula:

(Pu)(ϕ) = u(P^*ϕ)

for test functions ϕ. Consider our currently studied operator P; looking term by term:

(−∂_i a^{ij} ∂_j u)(ϕ) = (a^{ij} ∂_j u)(∂_i ϕ) = (∂_j u)(a^{ij} ∂_i ϕ) = −u(∂_j a^{ij} ∂_i ϕ)

We assumed that (a^{ij}) is symmetric, so (the principal part of) P is self-adjoint. Term by term:

P:                 P^*:
−∂_i a^{ij} ∂_j    −∂_j a^{ij} ∂_i
b^j ∂_j            −∂_j b^j
c                  c

Therefore P^* = −∂_i a^{ij} ∂_j − ∂_j b^j + c.

This changes in the complex setting. If f, g ∈ L², then we look at:

⟨f, g⟩_{L²} = ∫ f ḡ dx

Then the adjoint operator we would like satisfies:

∫ (Pu) ϕ̄ dx = ∫ u · \overline{P^* ϕ} dx

Let's compute this for the constant term of P:

∫ c u ϕ̄ = ∫ u c ϕ̄ = ∫ u · \overline{c̄ ϕ}

So in the general setting, we have:

P^* = −∂_i a^{ij} ∂_j − ∂_j b̄^j + c̄

Another way we can think about this is for X, Y Banach spaces. Given P : X → Y we can consider P^* : Y^* → X^* with the formula:

(P^* y^*)(x) = y^*(Px)

So we have our operator P : H^1 → H^{−1}. Then we have P^* : H^1 → H^{−1}, and this makes sense as (H^{−1})^* = H^1. And from functional analysis, we know that P is bounded if and only if P^* is bounded.

Then it will turn out that P is onto if and only if P^* is bounded below, i.e. if we have:

‖v‖_{H^1} ≤ C‖P^* v‖_{H^{−1}}

this gives us uniqueness for P^* and existence for P. Similarly:

‖u‖_{H^1} ≤ C‖Pu‖_{H^{−1}}

gives us existence for P^* and uniqueness for P.

Lecture 03 (1/28)

Recall we were looking at solvability for the following differential operator:

Pu = −∂_i(a^{ij} ∂_j u) + b^j ∂_j u + cu

with the condition that a^{ij}, b^j, c ∈ L^∞(U), with uniform ellipticity for a: a^{ij} = a^{ji} and there exists λ > 0 such that (a^{ij}) ≥ λI.

We are interested in trying to solve:

Pu = f

The question of solving this PDE is equivalent to the question of invertibility of a bounded operator P : X → Y for X, Y Banach spaces.

A digression in functional analysis: given Banach spaces X, Y and a bounded linear operator P : X → Y, we are interested in the following questions:

1. (existence or solvability) for all f ∈ Y does there exist u ∈ X such that Pu = f?

2. (uniqueness) if u ∈ X and Pu = 0, then is it necessarily true that u = 0?


We would like to convert these questions into the task of establishing some bounds involving P.

Proposition 1.3. Given a bounded linear operator between Banach spaces P : X → Y, suppose there exists C > 0 such that:

‖u‖_X ≤ C‖Pu‖_Y    (2)

for all u ∈ X (this is called a coercivity bound for P); then:

• (uniqueness for P) if Pu = 0, then u = 0

• (existence or solvability for P^* : Y^* → X^*) for all g ∈ X^*, there exists v^* ∈ Y^* such that P^* v^* = g and ‖v^*‖_{Y^*} ≤ C‖g‖_{X^*}

Proof. Uniqueness is obvious.

Existence for P^* is a consequence of the Hahn-Banach theorem. We are interested in finding v^* ∈ Y^* such that:

⟨P^* v^*, u⟩ = ⟨g, u⟩

for all u ∈ X. Note that the left side is ⟨v^*, Pu⟩. Let W = P(X) ⊂ Y; by definition, every element w ∈ P(X) is just w = P(u) for some u, and this u is unique (call it u(w)). Now we define ṽ : W → R such that:

⟨ṽ, w⟩ = ⟨g, u(w)⟩

Then ṽ is bounded, because:

|⟨ṽ, w⟩| = |⟨g, u(w)⟩| ≤ ‖g‖_{X^*} ‖u(w)‖_X ≤ C‖g‖_{X^*} ‖Pu(w)‖_Y = C‖g‖_{X^*} ‖w‖_Y

therefore ‖ṽ‖ ≤ C‖g‖_{X^*}, so by Hahn-Banach there exists v^* : Y → R such that v^* = ṽ on W and ‖v^*‖ ≤ C‖g‖_{X^*}.

To extend the existence statement to P, let us assume that X is reflexive, i.e. X = (X^*)^*, where the identification sends X ∋ u to the functional (X^* ∋ g) ↦ ⟨g, u⟩ in (X^*)^*. From this, we get:

‖u‖_X = sup_{‖g‖_{X^*} ≤ 1} |⟨g, u⟩|

this is an exercise that follows from the Hahn-Banach theorem.

Proposition 1.4. Assume that X is reflexive and there exists C > 0 such that ‖v^*‖_{Y^*} ≤ C‖P^* v^*‖_{X^*} for all v^* ∈ Y^*; then

• (uniqueness for P^*) if P^* v^* = 0, then v^* = 0

• (existence for P) for all f ∈ Y, there exists u ∈ X such that Pu = f and ‖u‖_X ≤ C‖f‖_Y

Remark 1.2 (not to be used in this course). Given P : X → Y (a bounded linear operator on Banach spaces), then

ker P^* = (Im P)^⊥

where if V ⊂ Y is a subspace, V^⊥ = {v^* ∈ Y^* : v^*(u) = 0 ∀ u ∈ V}, and

ker P = ^⊥(Im P^*)

where if W ⊂ X^* is a subspace, then

^⊥W = {u ∈ X : ⟨g, u⟩ = 0 ∀ g ∈ W}

The first equality tells us that if ker P^* = {0}, then (Im P)^⊥ = {0}; then by Hahn-Banach the closure of Im P is all of Y³, so P^* being one-to-one implies that P has dense range.

Lemma 1.1. If Im P = Y, then there exists C > 0 such that (2) holds.

Proof. Exercise: apply the open mapping theorem.


In Proposition 1.4, we used reflexivity to simplify the proof (this will be sufficient for further applications). However, if we give up on ‖u‖_X ≤ C‖f‖_Y, then the proposition still holds without reflexivity. See Rudin, Functional Analysis, Theorem 4.13.

1.2 Lax-Milgram
Let us prove the Lax-Milgram theorem (Evans 6.2.1).

Theorem 1.1 (Lax-Milgram). Given a Hilbert space H with a bilinear mapping:

B : H × H → R

with

1. (boundedness) there exists α > 0 such that:

|B[u, v]| ≤ α‖u‖_H ‖v‖_H

2. (coercivity) there exists C > 0 such that:

‖u‖²_H ≤ C B[u, u]

then for all f : H → R (f ∈ H^*), there exists a unique u ∈ H such that B[u, v] = ⟨f, v⟩ for all v ∈ H.
³ in finite-dimensional linear algebra, we always have that Im P equals its closure, but in the infinite-dimensional case the image of an operator may not be closed


Proof. For all u ∈ H, define Pu : H → R with the property:

⟨Pu, v⟩ = B[u, v]

for all v ∈ H; so Pu ∈ H^*, and we would like to show that Pu = f is solvable. We would like to verify the hypotheses of our two propositions.

Boundedness of P:

‖Pu‖_{H^*} = sup_{‖v‖_H ≤ 1} |⟨Pu, v⟩| = sup_{‖v‖_H ≤ 1} |B[u, v]| ≤ α‖u‖_H

by boundedness of B.

Coercivity for P:

‖u‖²_H ≤ C B[u, u] = C⟨Pu, u⟩ ≤ C‖Pu‖_{H^*} ‖u‖_H

By Proposition 1.3, we have uniqueness.

Next we would like to check coercivity for P^* (here we will use the fact that H is a Hilbert space). By the Riesz representation theorem, we know that H is reflexive; therefore P^* : H = H^{**} → H^*, and:

‖v‖²_H ≤ C B[v, v] = C⟨Pv, v⟩ = C⟨v, P^* v⟩ ≤ C‖v‖_H ‖P^* v‖_{H^*}

so by Proposition 1.4, we get existence.

Let's now apply these results to problems in PDEs (reference: 6.2.2 of Evans).

Example 1.3. Consider Pu = −∂_i(a^{ij} ∂_j u), uniformly elliptic. Let U be a C^1 bounded domain in R^d with d ≥ 2 and let H = H_0^1(U) (so X = H_0^1(U) and Y = H^{−1}(U) = (H_0^1(U))^*).

Theorem 1.2. Given f ∈ H^{−1}(U), there exists u ∈ H_0^1(U) such that Pu = f and ‖u‖_{H_0^1(U)} ≤ C‖f‖_{H^{−1}(U)}.
Proof. Define B[u, v] = ⟨Pu, v⟩; we would like to verify boundedness and coercivity.

Boundedness comes from the fact that P : H_0^1(U) → H^{−1}(U) is a bounded map.

Coercivity:

⟨Pu, u⟩ = ∫_U −∂_i(a^{ij} ∂_j u) u dx = ∫_U a^{ij} ∂_j u ∂_i u dx ≥ ∫_U λ|Du|² dx

Therefore:

∫_U |Du|² dx ≤ λ^{−1} B[u, u]

But now by Poincaré's inequality, there exists C_U such that:

‖u‖_{L²(U)} ≤ C_U ‖Du‖_{L²(U)}

provided that u ∈ H_0^1(U). With this, we arrive at:

‖u‖²_{H^1(U)} = ‖u‖²_{L²(U)} + ‖Du‖²_{L²(U)} ≤ C B[u, u]

then by Lax-Milgram, we are done.
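As a one-dimensional illustration of this coercivity argument (a sketch, not from the notes: the coefficient a(x) = 2 + sin(2πx), the right-hand side f ≡ 1, and the grid size are made-up data with λ = 1), one can discretize −(a u′)′ on (0, 1) with Dirichlet conditions and check B[u, u] ≥ λ‖u′‖²_{L²} directly:

```python
import numpy as np

# 1D sketch: solve -(a u')' = f on (0,1), u(0) = u(1) = 0, by finite differences,
# then verify the discrete coercivity bound B[u,u] >= lambda * ||u'||^2.
n = 200
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.ones(n - 1)                                 # right-hand side at interior nodes

# Coefficient a evaluated at cell midpoints (standard scheme for -(a u')').
am = 2.0 + np.sin(2 * np.pi * (x[:-1] + h / 2))    # uniformly elliptic: am >= 1
A = (np.diag(am[:-1] + am[1:])
     - np.diag(am[1:-1], 1) - np.diag(am[1:-1], -1)) / h**2
u = np.linalg.solve(A, f)

# Discrete bilinear form B[u,u] = ∫ a |u'|^2 dx, with boundary zeros included.
du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
B = h * np.sum(am * du**2)
lam = 1.0                                          # since a >= 1 everywhere
```

Since am ≥ 1 pointwise, the discrete form satisfies B ≥ λ‖u′‖² term by term, mirroring the ellipticity step in the proof.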


Example 1.4 (Homogeneous Sobolev space). Consider the problem above, but now with U = R^d (for d ≥ 3)⁴. And let:

H = Ḣ^1(R^d)

which is defined as the closure of C_0^∞(R^d) with respect to the norm ‖u‖_{Ḣ^1} = ‖Du‖_{L²}.

Then the theorem and the proof are exactly the same, with Ḣ^{−1}(R^d) = {Df : f ∈ L²} = (Ḣ^1(R^d))^*.

Example 1.5. Consider the general uniformly elliptic operator:

Pu = −∂_i(a^{ij} ∂_j u) + b^j ∂_j u + cu = f

Let's first take H = H_0^1(U).

Theorem 1.3. There exists γ = γ((2λ)^{−1}‖b‖_{L^∞}) > 0 such that if c(x) ≥ γ on U, then for all f ∈ H^{−1}(U) there exists u ∈ H_0^1(U) such that Pu = f and ‖u‖_{H_0^1(U)} ≤ C‖f‖_{H^{−1}(U)}.


Proof.

⟨Pu, u⟩ = ∫_U −∂_i(a^{ij} ∂_j u) u dx + ∫_U b^j (∂_j u) u dx + ∫_U c u² dx ≥ λ ∫ |Du|² dx + γ ∫_U u² dx − ‖b‖_{L^∞} ∫ |Du||u| dx

Apply Young's inequality to get:

‖b‖_{L^∞} ∫ |Du||u| dx ≤ (λ/2) ∫ |Du|² dx + (‖b‖²_{L^∞}/(2λ)) ∫ |u|² dx

Therefore:

⟨Pu, u⟩ ≥ (λ/2) ∫_U |Du|² dx + ∫_U (γ − ‖b‖²_{L^∞}/(2λ)) |u|² dx

so for γ ≥ ‖b‖²_{L^∞}/(2λ), together with Poincaré's inequality, the form B is coercive and Lax-Milgram applies.
Lecture 04 (2/11)

Recall we were looking at second order elliptic operators in divergence form:

P = −∂_j a^{jk} ∂_k + b^j ∂_j + c

⁴ because of the GNS (Gagliardo-Nirenberg-Sobolev) inequality


and the adjoint operator:

P^* = −∂_j a^{jk} ∂_k − ∂_j b^j + c

assuming that a, b, c ∈ L^∞, and a is uniformly positive definite and symmetric; in the special case when b = 0 and c ∈ R we have P = P^* and we call P symmetric.

Today we take P an elliptic operator on Ω ⊂ R^n (bounded). We ask about the solvability of Pu = f for f ∈ H^{−1}(Ω): does there exist u ∈ H_0^1(Ω) (which enforces the Dirichlet boundary condition)?

Last time we considered the bilinear form:

B(u, v) = ∫_Ω a^{ij} ∂_i u ∂_j v + b^j (∂_j u) v + cuv dx

which formally (if we integrate by parts) we can think of as:

∫ (Pu) v dx = ∫ u (P^* v) dx

Then it is easy to show that:

|B(u, v)| ≲ ‖u‖_{H_0^1} ‖v‖_{H_0^1}

Last time we showed that if B(u, u) ≥ c‖u‖²_{H_0^1} (coercivity), then we have solvability for P and therefore solvability for P^* as well (Lax-Milgram).

Also note that we have:

‖u‖_{H_0^1} ≤ c‖Pu‖_{H^{−1}}

and the same bound for P^*.

Today we want to extend this a little further, to the case in which B is not positive definite. Now if P is arbitrary (but still elliptic), we have:

B(u, u) ≥ c ∫ |∇u|² dx − C(b) ‖u‖_{L²} ‖∇u‖_{L²} − C(c) ‖u‖²_{L²}

the first inequality is because a > 0. Then we can apply Poincaré's inequality: ‖∇u‖_{L²} ≥ d‖u‖_{L²} for u ∈ H_0^1(Ω). This tells us that B is positive definite if the constants |b|, |c| ≪ 1.

By Young, we have:

‖u‖_{L²} ‖∇u‖_{L²} ≤ ε‖∇u‖²_{L²} + (1/ε)‖u‖²_{L²}

In general, we get:

B(u, u) ≥ c‖Du‖²_{L²} − C‖u‖²_{L²}


So we are unable, in general, to apply Lax-Milgram.

What if we replace the equation Pu = f by the equation Pu + λu = f? Can we solve this problem? Let B_λ be the new bilinear form:

B_λ(u, u) = B(u, u) + λ‖u‖²_{L²}

where we have replaced c by c + λ; then we have:

B_λ(u, u) ≥ c‖∇u‖²_{L²} + (λ − C)‖u‖²_{L²}

then if λ ≫ 1, P + λ is solvable. And by the same token, P^* + λ is solvable.

So now we would like to go from P + λ to P:

Pu = f
(P + λ)u = f + λu

Now (P + λ)^{−1} : H^{−1} → H_0^1, so:

u = (P + λ)^{−1}(f + λu)
u − λ(P + λ)^{−1} u = (P + λ)^{−1} f

where we write K = λ(P + λ)^{−1} and g = (P + λ)^{−1} f.

So we have a map K : H^{−1} → H_0^1; note that H_0^1 ⊂ L² ⊂ H^{−1}, with both inclusions compact embeddings. So here we would like to replace H_0^1 by L² and solve this in L². Now if K : H^{−1} → H_0^1, then K : L² → L². This is now a compact operator⁵.


So now we take our new equation

(I − K)u = g
and throw everything we know away, and just think about compact operators, and ask if
this is solvable.

Before that, let's look at examples of compact operators.

Example 1.6. Let Ku = ⟨u, v⟩w with v, w ∈ L².

Example 1.7. Finite rank operators:

Ku = Σ_{j=1}^N ⟨u, v_j⟩ w_j

with v_j, w_j ∈ L². The range R(K) is a finite dimensional space.


Theorem 1.4. K is compact if and only if K = lim K_m where the K_m have finite rank.

Now if ‖K‖ < 1, then I − K is invertible because:

(I − K)^{−1} = I + K + K² + ⋯
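The Neumann series above converges geometrically when ‖K‖ < 1; a small finite-dimensional sketch (the 5×5 random matrix and the rescaling to norm 0.5 are made-up illustration data):

```python
import numpy as np

# Neumann series sketch: for K with operator norm 1/2, partial sums of
# I + K + K^2 + ... converge to (I - K)^{-1} at rate 2^{-m}.
rng = np.random.default_rng(0)
K = rng.standard_normal((5, 5))
K *= 0.5 / np.linalg.norm(K, 2)          # rescale so the operator norm is 0.5

inv = np.linalg.inv(np.eye(5) - K)
partial = np.eye(5)
term = np.eye(5)
for _ in range(60):                       # sum the first 60 powers of K
    term = term @ K
    partial += term

err = np.linalg.norm(partial - inv, 2)    # bounded by 0.5**61 / (1 - 0.5)
```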
This takes us to the following
⁵ if u_n is a bounded sequence, then Ku_n has a convergent subsequence


1.3 Fredholm Alternative


Looking at the solvability for I − K and I − K^*, note the following simple theorem:

Theorem 1.5. If K is compact, then K^* is compact.

Theorem 1.6.

1. Ker(I − K) and Ker(I − K^*) are finite dimensional.

2. R(I − K) and R(I − K^*) are closed.

3. Ker(I − K^*) = R(I − K)^⊥ and Ker(I − K) = R(I − K^*)^⊥

4. dim Ker(I − K) = dim Ker(I − K^*)

One application to elliptic equations is the following theorem:

Theorem 1.7 (Fredholm Alternative). One of the following two holds:

1. P is solvable, and so is P^*. This is the case when Ker(I − K) = Ker(I − K^*) = {0}.

2. There exist finite dimensional subspaces U and U^*, where U = Ker(I − K) and U^* = Ker(I − K^*). Then Pu = f is solvable if and only if f ⊥ Ker P^*. Also, the solution is unique modulo Ker P.

Let's say that Ker P^* = span{w_1, …, w_n}; then we require, for solvability, that f ⊥ w_i for all i. And if Ker P = span{v_1, …, v_m}, then any solution is u + c_1 v_1 + ⋯ + c_m v_m.
Proof (of Theorem 1.6). Assume by contradiction that Ker(I − K) is infinite dimensional. Take u_1, …, u_n, … orthonormal vectors in Ker(I − K). Then we have:

‖u_j − u_k‖ = √2

therefore (u_k) has no Cauchy subsequence. Since u_i ∈ Ker(I − K) we know that u_i = Ku_i. But these elements are bounded, and so (Ku_i) must have a convergent subsequence, which is impossible.

For the second, let v be in the closure of R(I − K); then:

v = lim_{n→∞} (I − K)u_n

If (u_n) is bounded, then (Ku_n) has a convergent subsequence Ku_{n_j} → v_1; then u_{n_j} → u, and therefore v = (I − K)u. We can force (u_n) to be bounded by replacing u_n by u_n − ũ_n for ũ_n ∈ Ker(I − K). When we do this, we can assume u_n ⊥ Ker(I − K). If ‖u_n‖ → ∞, then we would get:

0 = lim_{n→∞} (I − K)(u_n / ‖u_n‖)


but this gives a contradiction.

For the third, note that:

Ker(I − K^*) = R(I − K)^⊥

which holds for any bounded operator. But the range is closed, so we are done.

Let's prove a special case of the last property. Suppose that I − K is one-to-one (Ker(I − K) = {0}); we claim that I − K is onto (R(I − K) = H iff Ker(I − K^*) = {0}). Assume not; then:

(I − K)H = H_1 ⊊ H

And:

(I − K)H_1 = H_2 ⊊ H_1

because (I − K) is one-to-one. We can keep doing this to get an infinite sequence of subspaces H ⊋ H_1 ⊋ H_2 ⊋ ⋯. So we select u_i ∈ H_i such that u_i ⊥ H_{i+1} with norm one. Now we would like to show that (Ku_k) has no convergent subsequence. Note that:

u_m − u_j = (I − K)u_m − (I − K)u_j + Ku_m − Ku_j

Lecture 05 (2/13)

Recall that we were looking at second order elliptic PDEs on a bounded domain with Dirichlet boundary conditions. We asked whether this equation is solvable or not:

Pu = f  in Ω
u = 0   on ∂Ω        (E)

and we asked about solvability of P; let's call the adjoint equation (E^*). We discussed the Fredholm alternative: either

1. both (E) and (E^*) are solvable (P, P^* : H_0^1 → H^{−1} are invertible),

2. or there are finitely many obstructions to solvability.

What we mean by the last point is that Ker(P) is finite dimensional, Ker(P^*) is finite dimensional, and they have the same dimension.

The first interesting observation is that this is a global result. The second is that this property only applies in a bounded domain (we used compact Sobolev embeddings). Also, this is very general, based only on ellipticity. Also, dim Ker P − dim Ker P^* is stable under small perturbations of our operator; this difference is known as the index of the operator P⁶.

Today we will look at local properties of elliptic equations.

⁶ see Fredholm operators


Theorem 1.8. (E) is locally solvable.

Proof. Pick x ∈ Ω, and let B be a ball around x. Recall that solvability for P is equivalent to a bound for P^*:

‖v‖_{H_0^1(B)} ≲ ‖P^* v‖_{H^{−1}(B)}

Using the same bilinear form, we have:

⟨P^* v, v⟩ = B(v, v)

Now:

⟨P^* v, v⟩ ≤ ‖v‖_{H_0^1} ‖P^* v‖_{H^{−1}}

so we would like to prove:

B(v, v) ≳ ‖v‖²_{H_0^1}

Last time we proved that:

B(v, v) ≥ c‖∇v‖²_{L²} − C‖v‖²_{L²}

with c and C not depending on the domain Ω. By the Poincaré inequality, we have:

‖∇v‖_{L²} ≥ C̃‖v‖_{L²}

If B = B(x_0, ε), how does the constant C̃ depend on ε? We use a scaling trick. Given v on B_ε, we define w(x) = v(x_0 + εx) for x ∈ B(0, 1). Then:

‖∇w‖_{L²} ≥ C′‖w‖_{L²}

so, undoing the change of variables, we get:

ε‖∇v‖_{L²} ≥ C′‖v‖_{L²}

Therefore C̃ = C′ε^{−1}, so we can make ε small enough to get our coercive bound.
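The ε^{−1} scaling of the Poincaré constant can be observed numerically; here is a sketch (the interval (0, ε) in place of a ball, the grid size, and the eigenvalue-based estimate are illustration choices, not from the notes):

```python
import numpy as np

# Scaling of the Poincaré constant: on (0, eps), the best constant in
# ||v|| <= C(eps) ||v'|| is 1/sqrt(lambda_min), with lambda_min the smallest
# Dirichlet eigenvalue of -d^2/dx^2; it scales like eps (so C-tilde ~ 1/eps).
def poincare_constant(eps, n=400):
    h = eps / n
    A = (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2
    lam_min = np.linalg.eigvalsh(A)[0]     # ≈ (pi/eps)^2
    return 1.0 / np.sqrt(lam_min)

c1 = poincare_constant(1.0)                # ≈ 1/pi
c_half = poincare_constant(0.5)
ratio = c1 / c_half                        # halving the domain halves the constant
```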

Note that ∆ : H^s → H^{s−2}. Also, if a, b, c ∈ L^∞, then P : H^1 → H^{−1}. Here a must be in L^∞; however, c can be in L^{n/2} and b in L^n. This allows us to study the problem Pu = f for f ∈ H^{−1}.

1.4 Elliptic Regularity


The question of elliptic regularity: suppose that f ∈ H^s with s > −1; can we conclude that u ∈ H^{s+2}?

Let's say that s = 0: this says that f ∈ L², and we ask if u ∈ H². For this we must improve the regularity of a, b and c:

Pu = −∂_j a^{jk} ∂_k u + b^j ∂_j u + cu

this works if a ∈ C^1 (Lip), and works if b, c ∈ L^∞ (could a ∈ H^1 work?)

Theorem 1.9. Suppose u ∈ H^1_loc solves Pu = f ∈ L²_loc; then u ∈ H²_loc (given the above regularity of the coefficients).

Note that this theorem is not related to solvability. Also note that this is a local property.

Proof. The first step in the proof is called localization. Let B = B(x_0, ε) for x_0 ∈ Ω. Then consider the cutoff function χ ∈ C_0^∞ defined as:

χ(x) = { 1 in B
       { 0 outside 2B

then replace u by χu. So let's compute (schematically):

P(χu) = χPu + (b·∇χ)u − a∇χ·∇u − a(∇²χ)u − (∇a·∇χ)u

note that all these terms are in L². So P(χu) = f̃ ∈ L². So we are looking at the same statement of the theorem but with the support of the function lying in 2B.

A quantitative bound we would like is something like:

‖u‖_{H²} ≲ ‖f‖_{L²} + ‖u‖_{H^1}    (3)

Proof (of (3)). We know already that:

‖v‖_{H^1} ≲ ‖Pv‖_{H^{−1}} + ‖v‖_{L²}

Now let's take v = ∇u⁷. Then we have:

Pv = P∇u = ∇Pu + [P, ∇]u

where brackets denote the commutator of two operators. Then let's do a little computation:

[c, ∇]u = c∇u − ∇(cu) = −(∇c)u

Therefore:

Pv = ∇Pu + ∂((∇a)∂u) − (∇b)∂u − (∇c)u

Now we must backtrack: since b∂u + cu ∈ L² given that u ∈ H^1, we can absorb these terms into f and discard them from our estimate (3), because ‖b∂u + cu‖_{L²} ≲ ‖u‖_{H^1}.

If P = −∂a∂ then:

P(∇u) = ∇Pu + ∂((∇a)∂u) = ∇f + ∂((∇a)∇u)

⁷ where we really want to add the first derivatives; otherwise we would get a vector


this tells us that:

‖∇u‖_{H^1} ≲ ‖∇f‖_{H^{−1}} + ‖∂((∇a)∇u)‖_{H^{−1}} ≲ ‖f‖_{L²} + ‖u‖_{H^1}

Now since:

‖u‖_{H²} ≈ ‖u‖_{L²} + ‖∇u‖_{H^1} ≲ ‖f‖_{L²} + ‖u‖_{H^1}

Our issue now is that in proving this inequality we have assumed that u ∈ H². Let's list the fixes:

1. regularization: replace u by u_ε = ϕ_ε ∗ u. Doing this, we must compute Pu_ε = P(ϕ_ε ∗ u) = ϕ_ε ∗ Pu + [P, ϕ_ε∗]u. The commutator term gives us:

∂[a, ϕ_ε∗]∂u

and we would like to show that the L² norm of this is bounded by ‖u‖_{H^1}.

2. difference quotients: we have:

∂_1 u(x) = lim_{h→0} D_1^h u

with D_1^h the difference quotient in the direction e_1. If u ∈ H^1, then D_1^h u ∈ H^1. Then we have, for P = −∂a∂:

P D^h u = D^h(Pu) + ∂[D^h, a]∂u

note that v = ∂u ∈ L²; then:

[D^h, a]v(x) = ((a(x + h) − a(x))/h) v(x + h)

where the difference quotient of a is O(1), since a is Lipschitz.
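Both facts used here, convergence of the difference quotient and boundedness of the Lipschitz quotient of a, are easy to see numerically (a sketch with made-up functions u = sin and a = 2 + |sin|, which is Lipschitz but not C^1):

```python
import numpy as np

# (i) D^h u -> ∂u as h -> 0;  (ii) (a(x+h) - a(x))/h stays <= Lip(a) = 1.
x = np.linspace(0.0, 2 * np.pi, 4001)

def Dh(f, x, h):
    return (f(x + h) - f(x)) / h           # difference quotient D^h

u = np.sin                                 # u' = cos
a = lambda t: 2.0 + np.abs(np.sin(t))      # Lipschitz (constant 1), not C^1

err_big = np.max(np.abs(Dh(u, x, 1e-1) - np.cos(x)))    # error O(h), h = 0.1
err_small = np.max(np.abs(Dh(u, x, 1e-3) - np.cos(x)))  # much smaller
bound = np.max(np.abs(Dh(a, x, 1e-3)))                  # stays <= 1
```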

Lecture 06 (2/18)

Last time we started to prove the theorem:

Theorem 1.10. In solving

Pu = f  in Ω
u = 0   on ∂Ω

if f ∈ L²_loc then u ∈ H²_loc, with P elliptic and a^{ij} Lipschitz.


Last time we talked about interior regularity. Now we want to talk about boundary regularity.
Proof. Last time we used the estimate:

‖v‖_{H_0^1} ≲ ‖Pv‖_{H^{−1}}

applied to v = ∇u (this is what we did for the interior problem).

Near the boundary, the same thing does not work, because ∇u ≠ 0 on the boundary.

In the simple case where we have a flat boundary, let x_n be the normal direction and x′ = (x_1, …, x_{n−1}) the tangential directions; then:

∇u = (∇′u, ∂_n u)

We know that v = ∇′u = 0 on ∂Ω, so we can apply the H_0^1 bound to this v to get:

‖∇(∇′u)‖_{L²} ≲ ‖Pu‖_{L²} + ‖u‖_{H^1}

We now want to bound ‖∇²u‖_{L²}; we are only missing ‖∂_n² u‖_{L²}. We get this by going back to our PDE:

−∂_j a^{jk} ∂_k u = f

We can rearrange to get:

a^{nn} ∂_n² u = −f − Σ_{(j,k)≠(n,n)} ∂_j a^{jk} ∂_k u − (∂_n a^{nn}) ∂_n u

all terms on the right side are in L². And a^{nn} ≥ c > 0 by ellipticity.

What if the boundary is not flat? Locally the boundary is the graph of x_n = φ(x′). Then we can flatten it by letting y′ = x′ and y_n = x_n − φ(x′). So what happens to a^{jk} when we straighten the boundary?

∂/∂x = (∂y/∂x) ∂/∂y

therefore the matrix a becomes the matrix b = (∂y/∂x) a (∂y/∂x)^T. We would like to conclude that b is Lipschitz, so we would like ∂y/∂x to be Lipschitz; let's just say C^1 to keep things simple. This means a C² change of coordinates; therefore φ must be C², so we would like a C² boundary.

Alternate view if the boundary is not flat: instead of ∇′u, use a vector field X = X^j ∂_j with the restriction that X(x) is tangent to ∂Ω if x ∈ ∂Ω, and let v = Xu.

Let's generalize this theorem.


Theorem 1.11 (Higher Elliptic Regularity). Take the same PDE and assume that f ∈ H^k_loc with a ∈ C^{k+1}, b ∈ C^k, and c ∈ C^{k−1}; then u ∈ H^{k+2}_loc.

We would prove this by letting v = ∇^{k+1} u.

Corollary 1.1. If a, b, c ∈ C^∞, then f ∈ C^∞_loc implies that u ∈ C^∞_loc.

This comes from the fact that:

C^∞_loc = ∩_k H^k_loc

via Sobolev embeddings.

The same result holds in the analytic case. The same result holds for Neumann boundary conditions.

1.5 Maximum Principle


Let Ω be an open subset of R^n and P an elliptic operator written in non-divergence form:

P = −a^{jk} ∂_j ∂_k + b^j ∂_j + c

and we suppose that a, b, c ∈ C(Ω). And let's look for solutions u ∈ C² to Pu = f (which implies that f ∈ C⁰).

Recall that for the Laplace equation $\Delta u = 0$ we have the maximum principle, which states that

$$\max_{\bar\Omega} u = \max_{\partial\Omega} u$$

and the strong maximum principle, which states that if $\max u$ is attained inside $\Omega$, then $u$ is constant.

Recall that the key part of the proof is the mean value property. This was proved using the fundamental solution for the Laplacian, which is

$$K(x) = c_n|x|^{2-n}$$

Remember that if we want to solve $-\Delta u = f$, then one solution is

$$u(x) = \int K(x-y)f(y)\,dy$$

and we have $-\Delta K = \delta_0$.

What happens if we look at variable coefficients? Then instead of $K(x-y)$ we will have $K(x,y)$, which we expect to look like $|x-y|^{2-n}$ (we lose the translation invariance of our operator). The fact that this is a fundamental solution says that

$$P_y K(x,y) = \delta_x(y)$$

Then we have

$$u(x) = \int K(x,y)f(y)\,dy$$

Operators of this type are called Calderón–Zygmund operators. (Why do fundamental solutions always exist?)

Proposition 1.5. Suppose $c = 0$. If $Pu < 0$ in $\Omega$, then $\max_{\bar\Omega} u = \max_{\partial\Omega} u$.

Proof. Suppose $x_0 \in \Omega$ is an interior maximum point for $u$. Then $\nabla u(x_0) = 0$ and $D^2u(x_0) \leq 0$ as a matrix. So we have

$$0 > Pu(x_0) = -a^{jk}\partial_j\partial_k u(x_0) + b^j\partial_j u(x_0)$$

The second term is zero, and the first term is $-\mathrm{Tr}(A\,D^2u)(x_0)$. After a change of coordinates we may take $A = I$, and then $-\mathrm{Tr}(D^2u)(x_0) \geq 0$ since $D^2u(x_0) \leq 0$. So the right-hand side is $\geq 0$, which is a contradiction. ∎

Theorem 1.12. Suppose $c = 0$. If $Pu \leq 0$, then $\max_{\bar\Omega} u = \max_{\partial\Omega} u$.

Proof. Replace $u$ by $u_\varepsilon = u + \varepsilon\varphi$ and try to apply the previous proposition, with $\varphi \in C^2$ chosen so that

$$Pu_\varepsilon < 0$$

For this we need $P\varphi < 0$ in $\Omega$.

Try 1: Let $\varphi$ be quadratic, $\varphi = x^2$. Then

$$P\varphi = -2\sum_j a^{jj} + 2b^jx_j$$

This doesn't quite work, unless we can shrink our domain very small.

Try 2: Let $\varphi = e^{\lambda x^2}$. Then $\partial_j\varphi = 2\lambda x_j\varphi$ and $\partial_j\partial_k\varphi = 2\lambda\delta_{jk}\varphi + 4\lambda^2x_jx_k\varphi$, therefore

$$P\varphi = -2\lambda a^{jj}e^{\lambda x^2} - 4\lambda^2a^{jk}x_jx_ke^{\lambda x^2} + 2\lambda b^jx_je^{\lambda x^2}$$

so we would like to control the sign of

$$-2\lambda e^{\lambda x^2}\left(a^{jj} + 2\lambda a^{jk}x_jx_k - b^jx_j\right)$$

The first term in the parentheses is positive, the next looks like $\lambda|x|^2$, and the last looks like $c|x|$ (because $b$ is bounded). So we have

$$P\varphi \leq -2\lambda e^{\lambda x^2}\left(c_1 + c_2\lambda|x|^2 - c_3|x|\right) < 0$$

by Cauchy–Schwarz, given $\lambda$ large enough. ∎
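A quick discrete illustration (my own numerical sketch, not from the lecture): for the model operator $P = -\partial_x^2$ with $c = 0$, a solution of $Pu = f$ with $f \leq 0$ should attain its maximum on the boundary, and the same holds for the standard finite-difference discretization.

```python
import numpy as np

# Discrete weak maximum principle for P = -u'' on (0,1), c = 0:
# solve the Dirichlet problem with Pu = f <= 0 and check that the
# maximum of u is attained on the boundary.
rng = np.random.default_rng(1)
N = 200
h = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N)) + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1)) / h**2

f = -rng.random(N)                    # interior data with Pu = f <= 0
ua, ub = rng.random(), rng.random()   # boundary values u(0), u(1)
rhs = f.copy()
rhs[0] += ua / h**2
rhs[-1] += ub / h**2
u = np.linalg.solve(A, rhs)

print(u.max() <= max(ua, ub) + 1e-10)  # True: no interior max above the boundary
```

Here $Pu = f \leq 0$ makes $u$ discretely convex, so the interior values cannot exceed the boundary values, matching the theorem.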


Let's now look at the strong maximum principle.


Theorem 1.13 (Strong Maximum Principle). Suppose $Pu \leq 0$. If the maximum of $u$ is attained inside $\Omega$, then $u$ must be constant.

Proof. Let $u_{max}$ be the maximum value and let $\Omega^+ = \{x \in \Omega : u(x) = u_{max}\}$. Assume that $\Omega^+ \neq \Omega$, and choose $x_1 \in \Omega \setminus \Omega^+$ with $d(x_1, \Omega^+) < d(x_1, \partial\Omega)$. Expand a ball $B$ around $x_1$ until it touches $\partial\Omega^+$.

Let $x_2$ be the point of $\Omega^+$ touching the ball. So we know that $Pu \leq 0$ in $B$ and $u(x_1) < u(x_2) = \max_{\bar B}u$. We would like to conclude that

$$\frac{\partial u}{\partial\nu}(x_2) > 0$$

(This contradicts $\nabla u(x_2) = 0$, which holds since $x_2$ is an interior maximum of $u$; the statement is known as Hopf's lemma.)

We know that $u \leq u(x_2)$ in $B$. We claim that $u + \varepsilon\varphi \leq u(x_2)$ in $B$ for $\varphi$ a smooth function, strictly positive inside $B$, such that $\partial_\nu\varphi < 0$ and $\varphi = 0$ on $\partial B$.

We want $P\varphi \leq 0$; the choice $\varphi = e^{\lambda d^2(x,\partial B)} - 1$ should work. ∎
Lecture 07 (2/20)

Let’s fill in some details of the previous lecture.

Proposition 1.6. If $Pu < 0$ then $\max_{\bar\Omega}u = \max_{\partial\Omega}u$.

We proved this by examining what would happen if we had an interior maximum.

Then we got the stronger theorem that says the same thing, under the weaker assumption $Pu \leq 0$.

To prove this, we let $\varphi$ be a smooth function with the property $P\varphi < 0$ (recall $\varphi = e^{\lambda x^2}$). If we have such a function, we can let $u_\varepsilon = u + \varepsilon\varphi$, which implies that $Pu_\varepsilon < 0$, and then we can apply our proposition to get that

$$\max_{\bar\Omega}u_\varepsilon = \max_{\partial\Omega}u_\varepsilon$$

Now how do we go from $u_\varepsilon$ to $u$? Note that $u_\varepsilon \to u$ uniformly as $\varepsilon \to 0$, which implies that in the limit

$$\max_{\bar\Omega}u = \max_{\partial\Omega}u$$


Lemma 1.2 (Hopf). Suppose $Pu \leq 0$ in $B(0,R)$, $x_0 \in \partial B$ is a maximal point, and $u(0) < u(x_0)$. Then $\dfrac{\partial u}{\partial\nu}(x_0) > 0$.

Picture a ball centered at $0$ with $x_0$ on the boundary, and make a smaller concentric ball, so we have two concentric spheres $C_r$ and $C_R$ of radii $r$ and $R$, with

$$\sup_{\partial B_r}u < u(x_0)$$

Let $A$ be the annulus between them. Again consider a smooth function $\varphi$ such that

1. $\varphi \geq 0$ in $A$, $\varphi = 0$ on $C_R$

2. $P\varphi \leq 0$

3. $\dfrac{\partial\varphi}{\partial\nu}(x_0) < 0$

Now let $u_\varepsilon = u + \varepsilon\varphi$, so we have

$$\max_{\bar A}u_\varepsilon = \max_{\partial A}u_\varepsilon = \max\Big(\max_{C_R}u_\varepsilon,\ \max_{C_r}u_\varepsilon\Big)$$

where $\max_{C_R}u_\varepsilon = u(x_0)$ and $\max_{C_r}u_\varepsilon = \max_{C_r}u + O(\varepsilon)$. Then note that

$$u(x_0) \geq \max_{C_r}u + O(\varepsilon)$$

for $\varepsilon$ small enough. So if $\varepsilon \ll 1$, then

$$\max_{\bar A}u_\varepsilon = u_\varepsilon(x_0)$$

This implies that $\dfrac{\partial u_\varepsilon}{\partial\nu}(x_0) \geq 0$; breaking up the left-hand side and rearranging gives our claim.

To construct $\varphi$, let

$$\varphi = e^{\lambda(R^2 - x^2)} - 1$$

We get

$$-P\varphi = e^{\lambda(R^2-x^2)}\left(4\lambda^2a^{ij}x_ix_j + O(\lambda) + O(1)\right) \geq 0 \quad\text{for } |x| \geq r$$

for $\lambda$ large enough. ∎

1.6 Eigenfunctions
Recall what we know from Lax–Milgram. If $P$ is in divergence form and we are looking at

$$\begin{cases} Pu = f & \text{in } \Omega \\ u = 0 & \text{on } \partial\Omega \end{cases}$$

then for solvability we have the Fredholm alternative. Now $Pu + cu = f$ is solvable if $\Re c$ is large enough ($\Re c > \lambda_0$ for some $\lambda_0$). Otherwise we may have $Pu = \lambda u$ for some nonzero function $u$; such a $u$ is called an eigenfunction and $\lambda$ an eigenvalue.

This notion of eigenfunctions and eigenvalues is discussed in Evans for operators $P = -\partial_i a^{ij}\partial_j$; these have a positive bilinear form:

$$B(u,u) = \int a^{ij}\partial_iu\,\partial_ju\,dx \geq c\|\nabla u\|_{L^2}^2 \geq c\|u\|_{H_0^1}^2$$

In this case $\lambda_0 > 0$.

Let's say we are trying to solve $Pu = \lambda u$. Let $c$ be large enough so that we can solve $(P+c)u = (\lambda+c)u$; then let $K = (P+c)^{-1}$, so that $u = (\lambda+c)Ku$. Now if $\lambda$ is an eigenvalue of $P$ then $1/(\lambda+c)$ is an eigenvalue of the operator $K$.

What can we say about the eigenvalues of the compact operator $K$? Say $Ku = \mu u$. We say $\mu \notin \sigma(K)$ if $K - \mu$ is invertible, and $\mu \in \sigma_p(K)$ if $\mu$ is an eigenvalue.

Proposition 1.7. If $K$ is compact and $\mu \neq 0$, then $\mu \in \sigma(K)$ if and only if $\mu \in \sigma_p(K)$ (the point spectrum).

Therefore $\sigma(K) = \{0\} \cup \{\text{eigenvalues}\}$.

Proposition 1.8. If $K$ is compact, then $\sigma(K)$ is at most countable, $\sigma(K) = \{\lambda_1, \lambda_2, \dots\}$, with $\lambda_n \to 0$ as $n \to \infty$.

Proof. Suppose there exist distinct eigenvalues $\lambda_1, \dots, \lambda_n, \dots$ such that $\lambda_m \to \lambda \neq 0$.

Take eigenfunctions $u_1, \dots, u_n, \dots$ corresponding to those eigenvalues.

Then we claim that the $u_j$ are linearly independent. If not, we would have

$$\sum_j c_ju_j = 0$$

with some $c_{j_0} \neq 0$. Applying $\prod_{j\neq j_0}(K - \lambda_j)$ to this sum kills every term except the $j_0$ one, leaving

$$\prod_{j\neq j_0}(\lambda_{j_0} - \lambda_j)\,c_{j_0}u_{j_0} = 0$$

and therefore $c_{j_0} = 0$, a contradiction.

Now let $V_k = \mathrm{span}(u_1, \dots, u_k)$, with $\dim V_k = k$, so that $V_1 \subsetneq V_2 \subsetneq \dots \subsetneq V_k \subsetneq \dots$.

Now select elements $v_j$ such that $v_j \perp V_{j-1}$ and $v_j \in V_j$, and normalize them in the $L^2$ norm. Since $K$ is compact, $Kv_j$ must have a convergent subsequence. But for $m > j$,

$$Kv_m - Kv_j = \lambda_mv_m + (K - \lambda_m)v_m - Kv_j$$

The third term is in $V_j$, the second term is in $V_{m-1}$, and the first has a component perpendicular to $V_{m-1}$, so we get

$$\|Kv_m - Kv_j\| \geq \lambda_m\|v_m\| \geq c > 0$$

contradicting the existence of a convergent subsequence. ∎


Proposition 1.9. The eigenvalues of $P$ are discrete, $\{\lambda_1, \dots, \lambda_m, \dots\}$, with the property that $\lambda_m \to \infty$ as $m \to \infty$.

Definition 1.3. $P$ is symmetric if $P = P^*$.

Example 1.8. $P = -\partial_ja^{jk}\partial_k + c$ with $c \in \mathbb{R}$.

Example 1.9. $P = -\partial_ja^{jk}\partial_k + ib^j\partial_j$ with the $b^j$ real and $\sum_j\partial_jb^j = 0$.

Note that if $K = (P+c)^{-1}$ then $K^* = (P^*+c)^{-1}$.

Proposition 1.10. If $K$ is symmetric, then $\lambda_j \in \mathbb{R}$.

Proof. If $Ku_j = \lambda_ju_j$, take the inner product with $u_j$; since $K$ is symmetric we get $\lambda_j = \bar\lambda_j$. ∎

Another question to ask is how quickly the eigenvalues go to infinity. Let $N_R$ be the number of eigenvalues in the ball of radius $R$. It turns out that $N_R \approx C_n\,\mathrm{Vol}(\Omega)\,R^{n/2}$, which is known as the Weyl law, where $\mathrm{Vol}(\Omega)$ depends on $a^{ij}$ (it is the volume with respect to the Riemannian metric induced by $a^{ij}$).
Lecture 08 (2/25)

Today we will work with operators of the form $P = -\partial_ja^{jk}\partial_k$. This operator is symmetric: $\langle Pu, v\rangle = \langle u, Pv\rangle$, and equivalently $B(u,v)$ is symmetric. Last time we saw that a symmetric operator has real eigenvalues, $\lambda_1 \leq \lambda_2 \leq \dots \to \infty$, and the eigenspaces are finite dimensional (via Fredholm).

Proposition 1.11. There exists an orthonormal basis of $L^2$ consisting of eigenfunctions.

Proof. If $\lambda_i \neq \lambda_j$ and $u_i, u_j$ are corresponding eigenfunctions, then the claim is that $\langle u_i, u_j\rangle_{L^2} = 0$.

This is because

$$\langle Pu_i, u_j\rangle = \langle u_i, Pu_j\rangle$$

and therefore $\lambda_i\langle u_i, u_j\rangle = \lambda_j\langle u_i, u_j\rangle$.

Furthermore, the bilinear form satisfies

$$B(u,v) = \langle Pu, v\rangle = \langle u, Pv\rangle$$

So if $u = u_i$ and $v = u_j$ are eigenfunctions with $\lambda_i \neq \lambda_j$, then

$$B(u_i, u_j) = \lambda_i\langle u_i, u_j\rangle = 0$$


Therefore

$$B(u,v) = \int_\Omega a^{kl}\partial_ku\,\partial_lv$$

is an inner product on $H_0^1$. It follows that $\{u_j\}$ forms an orthogonal basis of $H_0^1$, and $\{u_j/\sqrt{\lambda_j}\}$ is an orthonormal basis.

Now since $P$ is positive:

$$\langle Pu, u\rangle = B(u,u) \geq c\|u\|_{H_0^1}^2$$

so $\lambda_1 > 0$. If $u \in H_0^1$ then we can write

$$u = \sum_j c_ju_j$$

and then

$$\|u\|_{H_0^1}^2 \approx B(u,u) = \sum_j \lambda_jc_j^2$$

Now

$$\|u\|_{L^2}^2 = \sum_j c_j^2$$

Therefore

$$B(u,u) \geq \lambda_1\|u\|_{L^2}^2$$

with equality if and only if $c_j = 0$ whenever $\lambda_j > \lambda_1$, i.e. if and only if $Pu = \lambda_1u$.

Theorem 1.14 (Variational Interpretation of $\lambda_1$).

1. $\lambda_1 = \inf_{u\in H_0^1}\dfrac{B(u,u)}{\|u\|_{L^2}^2}$

2. The infimum is attained if and only if $u$ is a $\lambda_1$-eigenfunction.

3. $\lambda_2 = \inf_{u\in H_0^1,\ u\perp V_1}\dfrac{B(u,u)}{\|u\|_{L^2}^2}$, where $V_1$ is the $\lambda_1$-eigenspace.

Theorem 1.15. $\lambda_1$ is a simple eigenvalue (its eigenspace has dimension 1), and we can take $u_1 > 0$.

Proof. Suppose that $u_1$ is a $\lambda_1$-eigenfunction. First we show that $u_1$ does not change sign.

Suppose it does, and write $u_1 = u_1^+ - u_1^-$. After some thought,

$$\nabla u_1^+ = \nabla u_1\,1_{\{u_1>0\}}, \qquad \nabla u_1^- = -\nabla u_1\,1_{\{u_1<0\}}$$

and $\nabla u_1 = 0$ almost everywhere in $\{u_1 = 0\}$, so $\nabla u_1 = \nabla u_1^+ - \nabla u_1^-$ almost everywhere. Therefore

$$B(u_1, u_1) = B(u_1^+, u_1^+) + B(u_1^-, u_1^-)$$

We have $B(u,u) \geq \lambda_1\|u\|_{L^2}^2$, so

$$B(u_1^+, u_1^+) + B(u_1^-, u_1^-) = \lambda_1\big(\|u_1^+\|_{L^2}^2 + \|u_1^-\|_{L^2}^2\big)$$

The first term on the left is greater than or equal to the first on the right (and the same for the second pair), so we must have equality in each pair. We conclude that $u_1^+$ and $u_1^-$ are also $\lambda_1$-eigenfunctions.

Write $\Omega = \Omega^+ \cup \Omega^-$, where $\Omega^\pm$ is the support of $u_1^\pm$. We have $Pu_1^+ = \lambda_1u_1^+ \geq 0$. By the strong maximum principle, we get that $u_1^+ > 0$ in the interior of $\Omega^+$. Now by Hopf's lemma, we get that

$$\frac{\partial u_1^+}{\partial\nu}(x_0) < 0$$

for $x_0$ on the boundary of $\Omega^+$ (found by expanding an interior ball until it hits the boundary). Now $\Gamma = \{u_1 = 0\}$ is a smooth surface at points where $\nabla u_1 \neq 0$, so around $x_0$ we have a nice surface.

Look at $u_1$ near $x_0$: as an eigenfunction, $u_1^+$ is smooth, but it vanishes identically on one side of $\Gamma$ while $\partial_\nu u_1^+(x_0) \neq 0$ from the other side. Therefore $u_1^+$ is not smooth, which gives us a contradiction.

Therefore we can conclude that $u_1 \geq 0$. By the maximum principle and Hopf's lemma we get that $u_1 > 0$ in $\Omega$ and $\dfrac{\partial u_1}{\partial\nu} < 0$ on $\partial\Omega$.

Now suppose that we have two linearly independent eigenfunctions $u_1$ and $u_2$. By the above, every nontrivial combination $c_1u_1 + c_2u_2$ has a fixed sign. But we can choose $c_1, c_2$ and $x_0 \in \Omega$ such that $(c_1u_1 + c_2u_2)(x_0) = 0$, and a fixed-sign eigenfunction vanishing at an interior point must vanish identically, so $c_1u_1 + c_2u_2 \equiv 0$, a contradiction. ∎
Recall last time we stated the Weyl law: $N(\lambda) \approx \lambda^{n/2}V(\Omega)$, so $\lambda_j \approx j^{2/n}C(\Omega)$.

Example 1.10. $P = -\partial_x^2$ on $[0,\pi]$, so $-\partial_x^2u = \lambda u$, giving $\lambda_n = n^2$ and $u_n = \sin nx$.

This leads to Sturm–Liouville theory. Given an operator

$$P = -\partial_xa\partial_x + b\partial_x + c$$

on an interval, $u_n$ has exactly $n-1$ zeros and the $\lambda_n$ are asymptotically equidistributed. This won't be the case in the following examples.
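The interval case above is easy to check numerically (my own finite-difference sketch, not from the lecture): the discrete eigenvalues approximate $n^2$, the $n$-th eigenvector has $n-1$ interior sign changes, and the eigenvalue count up to $R$ grows like $\sqrt{R}$, as the one-dimensional Weyl law predicts.

```python
import numpy as np

# Finite-difference check of Example 1.10: P = -d^2/dx^2 on (0, pi) with
# Dirichlet conditions has eigenvalues n^2 and eigenfunctions sin(nx).
N = 400
h = np.pi / (N + 1)
A = (np.diag(2.0 * np.ones(N)) + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1)) / h**2

vals, vecs = np.linalg.eigh(A)
print(vals[:4])  # approximately [1, 4, 9, 16]

# Sturm-Liouville: the n-th eigenfunction has exactly n-1 interior sign changes.
zeros = [int(np.sum(np.diff(np.sign(vecs[:, k])) != 0)) for k in range(4)]
print(zeros)  # [0, 1, 2, 3]

# Weyl count in 1d: the number of eigenvalues up to R is about sqrt(R).
print(int(np.sum(vals <= 100.0)))  # 10
```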
Example 1.11. $\Omega = [0,\pi]^n$ and $P = -\Delta$. Then we have

$$u = \prod_{i=1}^n \sin k_ix_i$$

so

$$\lambda = \sum_{i=1}^n k_i^2$$

with the $k_i$ integers. These eigenvalues are no longer evenly spaced, and the spacing is more complicated.

Example 1.12. $\Omega = [0,\pi/\alpha_1]\times\dots\times[0,\pi/\alpha_n]$; then

$$u = \sin k_1\alpha_1x_1\cdots\sin k_n\alpha_nx_n$$

and

$$\lambda = \alpha_1^2k_1^2 + \dots + \alpha_n^2k_n^2$$

Example 1.13. $P = -\Delta$ in $B_1 \subset \mathbb{R}^2$. Change to polar coordinates $(x,y) \to (r,\theta) \in [0,1]\times[0,2\pi]$ (with periodic boundary conditions in $\theta$). We get

$$P = -\partial_r^2 - \frac{1}{r}\partial_r - \frac{1}{r^2}\partial_\theta^2$$

If we carry out separation of variables, $u = f(r)g(\theta)$, then $g_n = e^{in\theta}$, so $\mu_n = n^2$, and we must solve

$$\left(-\partial_r^2 - \frac{1}{r}\partial_r + \frac{n^2}{r^2}\right)f = \lambda f$$

whose solutions are Bessel functions.

Example 1.14. $P = -\Delta$ in $B(0,1) \subset \mathbb{R}^n$ with $n \geq 3$; then

$$P = -\partial_r^2 - \frac{n-1}{r}\partial_r - \frac{1}{r^2}\Delta_{S^{n-1}}$$

where $\Delta_{S^{n-1}}$ is what we call the spherical Laplacian.
Example 1.15. $P = -\Delta_{S^{n-1}}$. Suppose $\Delta u = 0$ in $\mathbb{R}^n$, and let's look for homogeneous solutions

$$u(r,\Theta) = r^\gamma g(\Theta)$$

If $u$ is smooth then $u$ is a tempered distribution; taking the Fourier transform, $\widehat{\Delta u} = 0$ tells us that $\mathrm{supp}\,\hat u \subset \{0\}$, therefore $\hat u$ is a combination of derivatives of $\delta_0$, and taking the inverse Fourier transform tells us that $u$ is a polynomial.

Suppose $u$ is a homogeneous polynomial of degree $k$, so $u = r^kg_k(\Theta)$. Then we get

$$0 = k(k-1)g_k + k(n-1)g_k + \Delta_{S^{n-1}}g_k$$

therefore $g_k$ is a spherical eigenfunction with eigenvalue $k(k+n-2)$, for $k \geq 0$.


Lecture 09 (2/27)
Example 1.16 (The Hermite Operator). $H = -\Delta + x^2$ in $\mathbb{R}^n$. The bilinear form is

$$B(u,v) = \int Hu\,v = \int \nabla u\cdot\nabla v + x^2uv$$

Then

$$B(u,u) = \int |\nabla u|^2 + x^2u^2$$

So we replace the space $\dot H^1$ with $X^1$, consisting of functions $u \in \dot H^1$ with $xu \in L^2$, and our dual becomes $X^{-1} = (X^1)' = \dot H^{-1} + xL^2$.

Before we had the embeddings $H_0^1 \subset L^2 \subset H^{-1}$; we will see that $X^1 \subset L^2$, and so $X^1 \subset L^2 \subset X^{-1}$. (I don't understand.)


How does this compare to the $L^2$ norm? We had

$$\lambda_1 = \inf_{u\in X^1}\frac{B(u,u)}{\|u\|_{L^2}^2}$$

Do we have an inequality

$$B(u,u) \geq c\|u\|_{L^2}^2\,?$$

We can decompose our operator as $H = \sum_i(-\partial_i^2 + x_i^2)$.

In one dimension we have $H = -\partial^2 + x^2$, so

$$B(u,u) = \underbrace{\int|\partial_xu|^2}_{A} + \underbrace{\int x^2u^2}_{B} \stackrel{?}{\geq} c\int u^2\,dx$$

$B$ tells us that $u$ is localized near $x = 0$, while $A$ tells us that $\hat u$ is localized near $\xi = 0$.

Crash course in quantum mechanics. We model particles as functions $u \in L^2$ with $\|u\|_{L^2} = 1$. The position is given as a probability $P(p \in E) = \int_E|u|^2\,dx$, while the velocity is given by $P(v(p) \in E) = \int_E|\hat u|^2\,d\xi$. We cannot know both the location and the velocity of the particle. Now the mean deviation of the position from $0$ is

$$\delta x = \left(\int x^2|u|^2\,dx\right)^{1/2}$$

while the mean deviation of the velocity from $0$ is

$$\delta\xi = \left(\int \xi^2|\hat u|^2\,d\xi\right)^{1/2}$$

Returning to our problem: our inequality is essentially the uncertainty principle. By Cauchy–Schwarz (really Young's inequality), we have

$$B(u,u) \geq 2\left(\int|\partial_xu|^2\,dx\right)^{1/2}\left(\int(xu)^2\,dx\right)^{1/2}$$

We will prove the stronger statement that the right-hand side is bounded below by $c\int u^2$. In quantum mechanics, this amounts to proving

$$\delta x\,\delta\xi \geq \frac{1}{2}$$


We have

$$\left(\int|\partial_xu|^2dx\right)^{1/2}\left(\int(xu)^2dx\right)^{1/2} \geq \int-\partial_xu\cdot xu\,dx = -\frac{1}{2}\int\partial_x(u^2)\,x\,dx = \frac{1}{2}\int u^2\,dx$$

This inequality is sharp when $\partial_xu = -xu$, so that $u = c\exp(-x^2/2)$. So we have found the lowest eigenvalue, $\lambda_1 = 1$, with eigenfunction $u_1 = \exp(-x^2/2)$.

How do we find more eigenfunctions? We know that $(\partial_x + x)u_1 = 0$. Let $a = \partial_x + x$; its adjoint is $b = a^* = -\partial_x + x$. We have

$$Ha - aH = (-\partial_x^2 + x^2)(\partial_x + x) - (\partial_x + x)(-\partial_x^2 + x^2) = -2\partial_x - 2x = -2a$$

so that $Ha = a(H-2)$. The same computation gives $Hb = b(H+2)$. Now suppose we have a function with $Hu = \lambda u$. Then

$$H(au) = a(H-2)u = (\lambda-2)au$$
$$H(bu) = (\lambda+2)bu$$

So we call $b$ a creation operator, while $a$ is called an annihilation operator.

So in one dimension we have $\lambda_k = 2k-1$ and $u_k = b^{k-1}(e^{-x^2/2})$ for $k \geq 1$ (consistent with $\lambda_1 = 1$ above).

Moreover, $u_k = p_k(x)e^{-x^2/2}$, where $p_k$ is a polynomial of degree $k-1$; these are the Hermite polynomials, which form an orthogonal basis of $L^2(e^{-x^2}dx)$.
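As a numerical sanity check (my own finite-difference sketch, not from the lecture), discretizing $H = -\partial_x^2 + x^2$ on a large interval reproduces the odd-integer spectrum $1, 3, 5, \dots$, which pins down the normalization (and the factor-of-2 bookkeeping) in the ladder computation above.

```python
import numpy as np

# Finite-difference discretization of H = -d^2/dx^2 + x^2 on [-L, L]
# with Dirichlet conditions; for L large the boundary has negligible
# effect on the low eigenfunctions, which decay like exp(-x^2/2).
L, N = 10.0, 1000
x = np.linspace(-L, L, N + 2)[1:-1]  # interior grid points
h = x[1] - x[0]
Hmat = (np.diag(2.0 / h**2 + x**2)
        + np.diag(-1.0 / h**2 * np.ones(N - 1), 1)
        + np.diag(-1.0 / h**2 * np.ones(N - 1), -1))

vals = np.linalg.eigvalsh(Hmat)
print(vals[:4])  # approximately [1, 3, 5, 7]
```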

Consider the operator

$$P = -\partial_ja^{jk}\partial_k + b^j\partial_j + c$$

with $b, c$ real, $c \geq 0$, and Dirichlet boundary conditions. This operator satisfies the strong maximum principle and Hopf's lemma.
Theorem 1.16. For such $P$, we have the following:

1. There exists an eigenpair $(\lambda_1, u_1)$ with $\lambda_1 > 0$ and $u_1 > 0$.

2. $\lambda_1$ is simple.

3. Any eigenvalue $\lambda$ satisfies $\Re\lambda \geq \lambda_1$.

Proof. Suppose that $Pv = f$ with $f \geq 0$ and $f \not\equiv 0$. The maximum principle plus Hopf's lemma tell us that $v > 0$ in $\Omega$ and $\dfrac{\partial v}{\partial\nu} < 0$ on the boundary. Let

$$C = \{f \in H^n \cap H_0^1 : f \geq 0\}$$

$C$ is a convex cone. Writing $v = Kf$, we get

$$KC \subset C$$

Now if $w$ is an eigenfunction, we are looking at the equation $Pw = \lambda w$, or equivalently $w = \lambda Kw$.

As a starting point, let $f > 0$ with $\partial f/\partial\nu < 0$; then $v = Kf$ has the same properties. Say $f \leq \mu v$. Consider the penalized equation

$$w = \eta K(w + \varepsilon v) \tag{4}$$

which we want to solve in our cone $C$. We have

$$w \geq \eta K(\varepsilon v) \geq \varepsilon\frac{\eta}{\mu}w + \dots$$

and applying $K$ repeatedly, $n$ times, gives

$$w \geq \varepsilon\left(\frac{\eta}{\mu}\right)^nv$$

For this to stay bounded we need $\dfrac{\eta}{\mu} \leq 1$, so $\eta \leq \mu$.

For $\eta \ll 1$ we can solve (4) by the contraction principle. So either we can solve for all $\eta$, or as $\eta \to \eta_0$ we have $\|w_\eta\| \to \infty$. The first case cannot happen because of the bound $\eta \leq \mu$.

So take $0 \leq \eta_j \leq \mu$ with $\|w_j\| \to \infty$, where $w_j = \eta_jK(w_j + \varepsilon v)$. Then

$$\frac{w_j}{\|w_j\|} = \eta_jK\left(\frac{w_j}{\|w_j\|} + \frac{\varepsilon v}{\|w_j\|}\right)$$

Passing to a subsequence, things converge, and we get $w = \eta Kw$. ∎


Lecture 10 (3/3)

Recall we are looking at non-self-adjoint operators $P = -\partial_ja^{jk}\partial_k + b^j\partial_j + c$ with $a, b, c$ real and $c \geq 0$. We want to prove:

Theorem 1.17.

1. The first eigenvalue $\lambda_1$ is real and simple.

2. The first eigenfunction is positive.

3. Any other eigenvalue $\lambda$ satisfies $\Re\lambda \geq \lambda_1$.

Recall the setup we had last time:

1. $P$ satisfies the strong maximum principle and the Hopf lemma.

2. $Pu = 0$ has only the zero solution.

3. $Pu = f$ is solvable; we denote the solution $u = Kf$.

4. If $f \geq 0$ then $u = Kf \geq 0$.

5. With $C = \{u \in H^n \cap H_0^1 : u \geq 0\}$, property 4 gives $K : C \to C$.

Now let $w \in C$, $w \not\equiv 0$, $v = Kw$. Then we know that $v > 0$ in $\Omega$ and $\dfrac{\partial v}{\partial\nu} < 0$ on $\partial\Omega$. Therefore $w \leq \mu v$ for some $\mu$.

We want to solve $u = \eta Ku$ with $\eta = \lambda_1$ and $u = u_1$ in the cone $C$.

We can't solve this directly, so we will consider the following penalized problem:

$$u = \eta K(u + \varepsilon v) \tag{5}$$

with $\eta \geq 0$.

Remark 1.3. Note that if $\eta \ll 1$, then a solution exists by the contraction principle.

Remark 1.4. If $\eta > \mu$, then this equation has no solution (apply $K$ repeatedly to (5)).

Fixed point theorems:

1. Contraction principle ($\eta \ll 1$).

2. Schauder's theorem: Given a compact operator $K : C \to C$, with $C$ a closed, convex, bounded subset of a Banach space $X$, $K$ has a fixed point. (Convexity is needed: consider a rotation of an annulus, a compact map with no fixed point.)

3. Schaefer's theorem: For a compact operator $K : C \to C$ acting on a convex set with $0 \in C$, can we solve $u = \lambda Ku$ for $\lambda = \lambda_0$? If the set of solutions $(\lambda, u) \in [0,\lambda_0] \times C$ is bounded, then $u = \lambda_0Ku$ has a solution in $C$.

Schaefer's theorem (applied to our case) implies that there exist a sequence $\eta_n \in (0,\mu)$ and $u_n \in C$ such that $u_n = \eta_nK(u_n + \varepsilon v)$ and $\|u_n\|_X \to \infty$.
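The contraction-principle step can be sketched numerically (a toy model of my own, with a matrix standing in for the compact operator $K$ and a vector for $v$): for $\eta\|K\| < 1$ the map $u \mapsto \eta K(u + v)$ is a contraction, and simple iteration converges to its unique fixed point.

```python
import numpy as np

# Banach contraction principle sketch: for eta*||K|| < 1, the map
# u -> eta*K(u + v0) has a unique fixed point, found by iteration.
rng = np.random.default_rng(2)
K = rng.normal(size=(5, 5))
K /= 2 * np.linalg.norm(K, 2)        # normalize so ||K|| = 1/2
eta, v0 = 0.5, rng.normal(size=5)    # Lipschitz constant eta*||K|| = 1/4 < 1

u = np.zeros(5)
for _ in range(200):
    u = eta * K @ (u + v0)

# The limit solves the fixed point equation (up to round-off).
residual = np.linalg.norm(u - eta * K @ (u + v0))
print(residual < 1e-10)  # True
```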

Now consider

$$\frac{u_n}{\|u_n\|_X} = \eta_nK\left(\frac{u_n}{\|u_n\|} + \frac{\varepsilon v}{\|u_n\|}\right)$$

$\eta_n$ converges on a subsequence to some $\eta$, the $\varepsilon$ term goes to zero, and $K$ is bounded so its image is precompact; therefore the left-hand side converges to some $u$ on a subsequence. So we get

$$u = \eta Ku$$


Now we have $u_1 = \lambda_1Ku_1$ with $\lambda_1 \in \mathbb{R}$, $u_1 \geq 0$. Observe that by the maximum principle and Hopf's lemma, we know that

$$u_1 > 0 \text{ in } \Omega, \qquad \frac{\partial u_1}{\partial\nu} < 0 \text{ on } \partial\Omega$$

We would like to show that there is no other eigenfunction, i.e. that $\lambda_1$ is simple. Suppose that $v_1$ is another eigenfunction associated to the same eigenvalue. Since $\lambda_1$ is real, $\Re v_1$ and $\Im v_1$ are eigenfunctions, so we may assume that $v_1$ is real and somewhere positive. We have

$$v_1 \leq \mu u_1$$

Take the smallest such $\mu$; then $\mu = \sup_\Omega\dfrac{v_1}{u_1}$, and this supremum is actually attained. Let $w = \mu u_1 - v_1 \geq 0$; then $w$ is an eigenfunction.

Now, since $\mu$ is attained: if it is attained at $x_0 \in \Omega$, then $w(x_0) = 0$; if at $x_0 \in \partial\Omega$, then $\dfrac{\partial w}{\partial\nu}(x_0) = 0$. So $w$ contradicts either the strong maximum principle or Hopf's lemma, unless $w \equiv 0$. The conclusion is $w \equiv 0$, therefore $v_1 = \mu u_1$.

Now we must prove the third part of the theorem. Suppose $Pu_1 = \lambda_1u_1$ with $u_1 > 0$ and $\lambda_1 \in \mathbb{R}$, and suppose $Pu = \lambda u$ with $\lambda \in \mathbb{C}$. We would like to look at the ratio $w = u/v$, where we think of $v = u_1$, but really take $v = u_1^{1-\varepsilon}$, so that $w = 0$ on $\partial\Omega$.

Let's change $P$ to non-divergence form, $P = -a^{jk}\partial_j\partial_k + b^j\partial_j + c$.

Now we have $P(vw) = \lambda(vw)$. The left side can be expanded as

$$P(vw) = (Pv)w - a^{jk}v\,\partial_j\partial_kw + b^jv\,\partial_jw - 2a^{jk}\partial_jw\,\partial_kv$$

Define the operator

$$Q = -a^{jk}\partial_j\partial_k + \tilde b^j\partial_j$$

so that

$$Qw + \frac{Pv}{v}w = \lambda w$$

Complex conjugating this, we get

$$Q\bar w + \frac{Pv}{v}\bar w = \bar\lambda\bar w$$

Now we look at

$$Q|w|^2 = Q(w\bar w) = (Qw)\bar w + w(Q\bar w) \underbrace{-\,2\Re\big(a^{ij}\partial_iw\,\partial_j\bar w\big)}_{\leq 0} \leq -2\frac{Pv}{v}|w|^2 + 2\Re\lambda\,|w|^2$$


Therefore

$$Q|w|^2 \leq \left(2\Re\lambda - 2\frac{Pv}{v}\right)|w|^2$$

If $v = u_1$, then $\dfrac{Pv}{v} = \lambda_1$, so we would get

$$Q|w|^2 \leq (2\Re\lambda - 2\lambda_1)|w|^2$$

If $\Re\lambda < \lambda_1$, then the right-hand side is $\leq 0$, so by the maximum principle we would have $|w|^2 \leq 0$, so that $w \equiv 0$, and we would be done.

What is $Pu_1^{1-\varepsilon}$? Note that

$$\partial u_1^{1-\varepsilon} = (1-\varepsilon)u_1^{-\varepsilon}\partial u_1$$

$$\partial^2u_1^{1-\varepsilon} = (1-\varepsilon)u_1^{-\varepsilon}\partial^2u_1 - \varepsilon(1-\varepsilon)u_1^{-\varepsilon-1}\partial u_1\,\partial u_1$$

Therefore

$$Pu_1^{1-\varepsilon} = (1-\varepsilon)\lambda_1u_1^{1-\varepsilon} + \varepsilon c\,u_1^{1-\varepsilon} + \varepsilon(1-\varepsilon)a^{jk}\partial_ju_1\partial_ku_1\,u_1^{-\varepsilon-1}$$

Using positive definiteness of $a^{jk}$ and $c \geq 0$, the conclusion is that

$$Pu_1^{1-\varepsilon} \geq \lambda_1(1-\varepsilon)u_1^{1-\varepsilon}$$

So we get

$$Q|w|^2 \leq \big(2\Re\lambda - 2(1-\varepsilon)\lambda_1\big)|w|^2$$

and, taking $\varepsilon$ small, the same argument as before applies. ∎


Lecture 11 (3/5)

The topic for today is unique continuation.

Recall our discussion: given $Pu = f$ in $\Omega$, we could impose $u = 0$ on $\partial\Omega$, or instead $\dfrac{\partial u}{\partial\nu} = 0$ on $\partial\Omega$.

Compare with a second order ODE: $u'' + au' + bu = 0$ given $u(0) = u_0$ and $u'(0) = u_1$.

Now suppose we want to solve $\Delta u = 0$ in the upper half plane. If $x_n$ is the normal direction, then we have $\partial_n^2u = -\Delta'u$, where $\Delta'$ is the tangential Laplacian. Can we solve

$$\begin{cases} \Delta u = 0 & \text{in } \{x_n > 0\} \\ u(x',0) = u_0 \\ \partial_{x_n}u(x',0) = u_1\,? \end{cases}$$

In what class should we solve this?


1. Analytic data $u_0$ and $u_1$. This gives us all the terms of the Taylor series for $u$ on the boundary. This is the Cauchy–Kovalevskaya theorem.

2. Let's try to solve in $H^k$, so $(u_0, u_1) \in H^k \times H^{k-1}$. If we take the Fourier transform in the tangential variables $x'$, then we get

$$\partial_n^2\hat u(x_n,\xi') = |\xi'|^2\hat u(x_n,\xi')$$

therefore $\hat u = ae^{|\xi'|x_n} + be^{-|\xi'|x_n}$, which gives us

$$\hat u(x_n,\xi') = \hat u_0(\xi')\cosh(|\xi'|x_n) + \frac{\hat u_1(\xi')}{|\xi'|}\sinh(|\xi'|x_n)$$

However, $\hat u$ leaves the class of Schwartz functions and tempered distributions as soon as $x_n > 0$; in particular, if $u_0 = 0$ and $u_1$ is merely in $L^2$ (rather than, say, $C^\infty$ with decaying Fourier transform), then we have a problem.

So the answer is that this cannot be solved in these classes of functions in general.
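A quick numerical illustration of this ill-posedness (my own sketch, not from the lecture): the explicit harmonic extension of the single-frequency data $u(x,0) = 0$, $\partial_y u(x,0) = \sin(kx)/k$ is $u(x,y) = \sin(kx)\sinh(ky)/k^2$, so data of size $1/k$ produces a solution of size $e^k/k^2$ at height $y = 1$ — there is no continuous dependence on the Cauchy data.

```python
import numpy as np

# Amplification of high-frequency Cauchy data by the harmonic extension:
# data size ~ 1/k, solution size at height y = 1 is sinh(k)/k^2 ~ e^k.
for k in [5, 10, 20, 40]:
    data_size = 1.0 / k                   # size of the Cauchy data u_1
    sol_size = np.sinh(k * 1.0) / k**2    # size of u at height y = 1
    print(k, data_size, sol_size)
```

For $k = 40$ the data has size $0.025$ while the solution at height $1$ exceeds $10^{13}$, which is exactly why the Dirichlet-plus-Neumann problem is ill-posed.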

So if we try to impose Neumann boundary conditions on top of Dirichlet, we get an ill-posed problem.

Now consider

$$\begin{cases} Pu = 0 & \text{in } \Omega \\ u = 0 & \text{on } \partial\Omega \end{cases}$$

The adjoint problem is

$$\begin{cases} P^*v = 0 & \text{in } \Omega \\ v = 0 & \text{on } \partial\Omega \end{cases}$$

If instead we consider

$$\begin{cases} Pu = 0 & \text{in } \Omega \\ u = 0 & \text{on } \partial\Omega \\ \dfrac{\partial u}{\partial\nu} = 0 & \text{on } \partial\Omega \end{cases}$$

the adjoint becomes

$$\begin{cases} P^*v = 0 \\ \text{no boundary conditions} \end{cases}$$

Let's rephrase our topic, unique continuation, as uniqueness for ill-posed problems.


Example 1.17. Consider

$$\begin{cases} Pu = 0 \\ u|_{\partial\Omega} = 0 \\ \dfrac{\partial u}{\partial\nu}\Big|_{\partial\Omega} = 0 \end{cases}$$

Is then $u = 0$? It will turn out that this is a local question. Take a function $u$ that solves this and extend it by $0$ across the boundary to get rid of the boundary (this works because the function and its first derivative vanish there). We therefore get the no-boundary formulation

$$Pu = 0, \qquad u = 0 \text{ in } O$$

for an open set $O$. Does this imply that $u \equiv 0$?

Theorem 1.18. If $P$ has $C^1$ coefficients, then unique continuation holds.

There is a stronger theorem:
Theorem 1.19. If $u$ vanishes to infinite order at $x_0$, then $u \equiv 0$.

Proof sketch. We have energy estimates: if $\mathrm{supp}\,u$ is small, then

$$\|u\|_{H^2} \lesssim \|Pu\|_{L^2}$$

but here we don't have smallness of the support. So replace $u$ by $v = \chi u$ with $\chi$ a cutoff function, so that $Pv = \chi Pu + [P,\chi]u$. Direct energy methods will not work.

The idea is called Carleman estimates:

$$\|e^{\tau\varphi}u\|_{H^2} \lesssim \|e^{\tau\varphi}Pu\|_{L^2}$$

with $\varphi \in C^\infty$ and $\tau$ a large parameter, $\tau > \tau_0$. We need to prove that this holds and that it implies unique continuation.

Choose $\varphi$ so that its zero level set goes into the region where $u \neq 0$. Then let

$$\chi = \begin{cases} 1 & \varphi > 0 \\ 0 & \varphi < -\varepsilon \end{cases}$$

so that $\mathrm{supp}\,\nabla\chi \subset \{-\varepsilon < \varphi < 0\}$. Let $v = \chi u$. Then $Pv = \chi Pu + [P,\chi]u = f$, which is only supported in $\mathrm{supp}\,\nabla\chi$. Then apply the Carleman estimate:

$$\|e^{\tau\varphi}v\|_{H^2} \lesssim \|e^{\tau\varphi}Pv\|_{L^2}$$

What happens as $\tau \to \infty$? The right-hand side goes to zero, because on $\mathrm{supp}\,Pv$ the exponent is negative. This tells us that $v = 0$ where $\varphi > 0$. But this is where $v = u$, and so $u = 0$ in this region. ∎

Let's now prove the Carleman estimate, in a very special case:

$$P = -\Delta + b\cdot\nabla + c$$

with $b, c \in L^\infty$. The claim is

$$\|e^{\tau\varphi}\partial^2u\|_{L^2} + \tau\|e^{\tau\varphi}\nabla u\|_{L^2} + \tau^2\|e^{\tau\varphi}u\|_{L^2} \lesssim \tau^{1/2}\|e^{\tau\varphi}Pu\|_{L^2}$$

The point is that the $b$ and $c$ terms are perturbative, so it is enough to prove the inequality for the Laplacian. Now let $v = e^{\tau\varphi}u$, so the estimate becomes

$$\|\partial^2v\|_{L^2} + \tau\|\partial v\|_{L^2} + \tau^2\|v\|_{L^2} \lesssim \tau^{1/2}\|P_\tau v\|_{L^2}, \qquad P_\tau = -e^{\tau\varphi}\Delta e^{-\tau\varphi}$$

Note that

$$e^{\tau\varphi}\partial_je^{-\tau\varphi} = \partial_j - \tau\varphi_j$$

so conjugation replaces $\partial_j$ by $\partial_j - \tau\varphi_j$. What we get is

$$P_\tau = -(\partial_j - \tau\varphi_j)^2$$

This operator is neither elliptic nor self-adjoint. The symbol of $P_\tau$ is

$$-(i\xi_j - \tau\varphi_j)^2 = \xi^2 - \tau^2|\nabla\varphi|^2 + 2i\tau\xi_j\varphi_j$$

We can decompose $P_\tau$ into its symmetric and antisymmetric parts, $P_\tau = P_\tau^s + P_\tau^a$, where

$$P_\tau^s = -\Delta - \tau^2|\nabla\varphi|^2, \qquad P_\tau^a = \tau(\partial_j\varphi_j + \varphi_j\partial_j)$$

So now

$$\|P_\tau v\|_{L^2}^2 = \|(P_\tau^s + P_\tau^a)v\|_{L^2}^2 = \|P_\tau^sv\|_{L^2}^2 + \|P_\tau^av\|_{L^2}^2 + 2\langle P_\tau^sv, P_\tau^av\rangle$$

The last term becomes

$$\langle P_\tau^sv, P_\tau^av\rangle + \langle P_\tau^av, P_\tau^sv\rangle = \langle[P_\tau^s, P_\tau^a]v, v\rangle$$

Then by computation, we have

$$[P_\tau^s, P_\tau^a] = \tau^3\varphi_j\partial_j|\nabla\varphi|^2 - 2\tau\partial_j\varphi_{jk}\partial_k = 2\tau^3\,\nabla\varphi\cdot D^2\varphi\,\nabla\varphi - 2\tau\,\nabla\cdot D^2\varphi\,\nabla =: C$$

We require that $C(x,\xi) > 0$ when $P_\tau^{s,a}(x,\xi) = 0$. This is called the pseudoconvexity condition.

Lecture 12 (3/10)


2 Parabolic Equations
I missed this lecture and will try to fill in the main results. Also, this was the first lecture that was moved online due to the coronavirus.

We are now interested in solving

$$\begin{cases} u_t + Lu = f & \text{in } U_T \\ u = 0 & \text{on } \partial U \times [0,T] \\ u = u_0 & \text{on } U \times \{t = 0\} \end{cases}$$

where $U_T = U \times (0,T]$, $U$ is an open bounded subset of $\mathbb{R}^n$, and $T > 0$ is fixed.

Here $L = -\partial_ja^{ij}(x,t)\partial_i + b^i(x,t)\partial_i + c(x,t)$ is such that

$$\sum a^{ij}(x,t)\xi_i\xi_j \geq \theta|\xi|^2$$

for a fixed $\theta > 0$ and all $(x,t) \in U_T$. This is a parabolic equation.

Rename $U$ by $\Omega$. Let's do some energy estimates, first with $f = 0$ (no source term). Let

$$E(u(t)) = \int_\Omega|u(x,t)|^2\,dx$$

Then

$$\frac{d}{dt}E = \int 2u(x,t)\partial_tu\,dx = -2\int_\Omega uLu = -2B[u,u;t]$$

with $B$ the bilinear form $B[u,v;t] = \int_\Omega (Lu)(t)\,v(t)\,dx$. If we had a source term, we would have

$$\frac{d}{dt}E = -2B(u,u) + 2\int uf\,dx$$

where everything is a function of time.

Proposition 2.1. $B(u,u) \geq c_1\|\nabla u\|_2^2 - c_2\|u\|_2^2$

Proof. This was proven in the elliptic PDE section. ∎

Therefore we have

$$\frac{d}{dt}E \leq -c_1\|\nabla u\|_2^2 + c_2\|u\|_2^2 + 2\int uf\,dx$$

The third term should really be thought of as a pairing of $H_0^1$ with its dual, so by definition of the norms we have

$$\int uf\,dx \leq \|u\|_{H_0^1}\|f\|_{H^{-1}}$$


Finally we get (using Young's inequality on the third term, with an $\varepsilon$ to absorb one factor into the first term):

$$\frac{d}{dt}E \leq -c_1\|\nabla u\|_2^2 + c_2\|u\|_2^2 + c\|f\|_{H^{-1}}^2$$

From this we rearrange and multiply by an exponential to get

$$\frac{d}{dt}\left(e^{-ct}\|u\|_2^2\right) \lesssim \|f\|_{H^{-1}}^2e^{-ct}$$

and then apply Gronwall's inequality to get (after rearranging)

$$\|u(t)\|_{L^2}^2 + \int_0^t\|\nabla u\|_{L^2}^2 \lesssim \|u_0\|_{L^2}^2 + c\int_0^t\|f\|_{H^{-1}}^2$$

If we take the supremum over $t$, we get

$$\sup_{t\in[0,T]}\|u(t)\|_{L^2}^2 + \|\nabla u\|_{L^2(0,T;L^2)}^2 \lesssim \|u_0\|_2^2 + \|f\|_{L^2(0,T;H^{-1})}^2$$

We can replace $\|\nabla u\|_{L^2(0,T;L^2)}^2$ by $\|u\|_{L^2(0,T;H^1)}^2$ by Poincaré. We can then write this compactly as

$$\|u\|_{L^\infty L^2}^2 + \|u\|_{L^2H_0^1}^2 \lesssim \|u_0\|_{L^2}^2 + \|f\|_{L^2H^{-1}}^2$$

The terms on the left can be replaced (I think) by $\|u\|_{L^\infty L^2\cap L^2H_0^1}^2$. We could also have estimated the $f$ term by $\|f\|_{L^1L^2}$, in which case we have the sharper inequality

$$\|u\|_{L^\infty L^2}^2 + \|u\|_{L^2H_0^1}^2 \lesssim \|u_0\|_{L^2}^2 + \|f\|_{L^2H^{-1}+L^1L^2}^2$$

where $f \in \{f = f_1 + f_2 : f_1 \in L^2H^{-1},\ f_2 \in L^1L^2\}$ and

$$\|f\|_{L^2H^{-1}+L^1L^2} = \min_{f = f_1+f_2}\ \|f_1\|_{L^2H^{-1}} + \|f_2\|_{L^1L^2}$$
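The energy estimate can be checked exactly in the simplest model case (my own sketch: $L = -\partial_x^2$ on $(0,\pi)$, $f = 0$, Dirichlet conditions), where expanding $u_0$ in the sine basis gives $u(t) = \sum c_ne^{-n^2t}\sin nx$, so the energy is non-increasing and the total dissipation $\int_0^\infty\|u_x\|^2\,dt$ equals $\tfrac12\|u_0\|_{L^2}^2$.

```python
import numpy as np

# For u_t - u_xx = 0 on (0, pi) with u = 0 at the endpoints:
# ||u(t)||^2 = (pi/2) * sum c_n^2 e^{-2 n^2 t}, and
# int_0^inf ||u_x||^2 dt = (pi/2) * sum c_n^2 n^2 / (2 n^2) = ||u0||^2 / 2.
rng = np.random.default_rng(0)
c = rng.normal(size=50)                 # random Fourier coefficients of u0
n = np.arange(1, 51)
E0 = (np.pi / 2) * np.sum(c**2)         # ||u0||_{L^2}^2

def energy(t):
    return (np.pi / 2) * np.sum(c**2 * np.exp(-2 * n**2 * t))

dissipated = (np.pi / 2) * np.sum(c**2 / 2)   # total int ||grad u||^2 dt

print(energy(0.0) <= E0 + 1e-9)          # True: energy never exceeds the data
print(energy(1.0) < energy(0.1))         # True: energy decays in time
print(abs(dissipated - E0 / 2) < 1e-9)   # True: dissipation identity
```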

Let's now turn to the adjoint problem. If $v$ is a test function, then

$$\iint(u_t + Lu)v\,dxdt = \iint u(-v_t + L^*v)\,dxdt + \text{boundary terms in } t$$

For a reason that will become clear, we impose a final time condition $v(T) = v_T$ on the adjoint $v$. So now, with $g = -\partial_tv + L^*v$, we have

$$\iint fv\,dxdt = \iint(\partial_tu + Lu)v\,dxdt = \int u(T)v_T - \int u_0v(0) + \iint gu\,dxdt$$

Rearranging, we get

$$\int u(T)v_T\,dx = \int u_0v(0)\,dx + \iint fv\,dxdt - \iint gu\,dxdt$$


Lecture 13 (3/12)

Recall our setup: we have a domain $\Omega \subset \mathbb{R}^n$ and we let $D = [0,T] \times \Omega$. We have $L$, a second order elliptic operator, and our heat equation is (H):

$$\begin{cases} (\partial_t + L)u = f \\ u(0) = u_0 \\ u|_{\partial\Omega} = 0 \end{cases}$$

We also looked at the adjoint problem (H*):

$$\begin{cases} (-\partial_t + L^*)v = g \\ v(T) = v_T \\ v|_{\partial\Omega} = 0 \end{cases}$$

We had the following duality relation:

$$\iint_D\underbrace{(\partial_t + L)u}_{f}\,v - u\,\underbrace{(-\partial_t + L^*)v}_{g}\,dxdt = \int uv\,dx\Big|_{t=0}^{t=T} \tag{6}$$

So now we have

$$\int_\Omega u(T)v_T\,dx = \int u_0v(0)\,dx + \iint_Dvf\,dxdt - \iint_Dug\,dxdt$$

We know the right-hand side (from the data, once we solve the adjoint problem), while the left-hand side is something we want. This can be read as a pairing of $(u(T), u)$ with $(v_T, g)$ in terms of $(u_0, f)$ paired with $(v(0), v)$.
Definition 2.1. Given two function spaces $X$ (in time) and $Y$ (in space), we write $f \in XY$ to mean $f(t,\cdot) \in Y$ and $t \mapsto f(t,\cdot)$ is in $X$.

Example 2.1. If $u \in L^\infty L^2$, then $u = u(t,x)$ where for each $t$, $u(t,\cdot) \in L^2$, and

$$\|u\|_{L^\infty L^2} = \mathrm{esssup}_t\|u(t,\cdot)\|_{L^2}$$

Theorem 2.1. Given $u_0 \in L^2$ and $f \in L^2H^{-1} + L^1L^2$, the heat equation (H) has a unique solution $u \in L^\infty L^2 \cap L^2H_0^1$, and

$$\|u\|_{CL^2\cap L^2H_0^1} \lesssim \|u_0\|_{L^2} + \|f\|_{L^2H^{-1}+L^1L^2}$$

We proved last time that this inequality is true by our energy estimates; therefore we have proved the uniqueness part of the theorem. We now need to prove existence. The key ingredient of existence is the duality relation.

We define a weak solution by requiring that the duality relation (6) holds for every function $v$ such that $v \in L^\infty L^2 \cap L^2H_0^1$ and $(-\partial_t + L^*)v \in L^2H^{-1}$. So now we want to consider the pairing

$$\langle(u(T), u), (v_T, g)\rangle = \int u_0v(0)\,dx + \iint vf\,dxdt$$


with $(u(T), u) \in L^2 \times (L^\infty L^2 \cap L^2H_0^1) = X^*$ and $(v_T, g) \in L^2 \times (L^1L^2 + L^2H^{-1}) = X$.

Now look at the map $v \mapsto (v_T, g) = (v(T), (-\partial_t + L^*)v)$; this is a linear map, and its range is a subspace $Y$ of $X$.

We can compute the following:

$$|\langle(u(T),u),(v_T,g)\rangle| \leq \|u_0\|_{L^2}\|v(0)\|_{L^2} + \|f\|_{L^2H^{-1}+L^1L^2}\|v\|_{L^2H_0^1\cap L^\infty L^2}$$
$$\leq \underbrace{\big(\|u_0\|_{L^2} + \|f\|_{L^2H^{-1}+L^1L^2}\big)}_{C}\big(\|v_T\|_{L^2} + \|g\|_{L^2H^{-1}+L^1L^2}\big)$$

The first inequality comes from Hölder and the definition of the dual pairing. For the second inequality, we use energy estimates to replace the norms of $v$ and $v(0)$ by norms of $v_T$ and $g$: form a new PDE with the change of variables $t \mapsto T - t$ and use our energy estimates on that (this is why we set up the adjoint problem as going backwards in time).

So now we have the map

$$T : (v_T, g) \mapsto \langle(u(T),u),(v_T,g)\rangle = \alpha$$

defined on $Y \subset X$, and $T : Y \to \mathbb{R}$ is a bounded linear operator.

Now we are going to use the Hahn–Banach theorem, which says that $T$ has a bounded extension $T : X \to \mathbb{R}$. Replace $T$ by its extension. For this bounded extension, there exists $(u(T), u) \in X^*$ such that $T(v_T, g) = \alpha$ is given by the pairing above.

Now we would like to show that this $u$ solves our equation. Let's first suppose that $v$ has compact support in $D$. In this case our duality relation becomes

$$\iint u(-\partial_t + L^*)v\,dxdt = \iint fv\,dxdt$$

Integrating by parts (as distributions), the left side becomes $\big((\partial_t + L)u\big)(v)$, and this implies that $(\partial_t + L)u = f$ in the sense of distributions.

So now the question is: why is $u \in C(L^2)$? Right now we know that $u \in L^2H_0^1 \cap L^\infty L^2$ and

$$u_t = -Lu + f \in L^2H^{-1}$$

Now

$$\frac{d}{dt}\|u\|_{L^2}^2 = 2\int uu_t\,dx \in L^1_t$$


So we have proved that

$$\frac{d}{dt}\|u\|_{L^2}^2 \in L^1_t, \qquad u \in L^\infty L^2$$

Therefore $\|u(t)\|_{L^2}$ is continuous in time. Does the limit $\lim_{t\to t_0}u(t)$ exist?

We know that

$$u(t) - u(t_0) = \int_{t_0}^tu_t(s)\,ds$$

and since $u_t \in L^2H^{-1}$, the right-hand side tends to zero in $H^{-1}$, so

$$\lim_{t\to t_0}u(t) = u(t_0)$$

in $H^{-1}$. Combined with boundedness in $L^2$, this gives weak convergence in $L^2$: $\langle u(t),\varphi\rangle \to \langle u(t_0),\varphi\rangle$ for $\varphi \in H_0^1$, and by density we can extend this to all $\varphi \in L^2$.

Therefore we have:

1. $\lim_{t\to t_0}u(t) = u(t_0)$ in $H^{-1}$ and weakly in $L^2$.

2. $\|u(t)\|_{L^2} \to \|u(t_0)\|_{L^2}$.

These two imply strong convergence.

Proof.

$$\|u(t) - u(t_0)\|_{L^2}^2 = \langle u(t)-u(t_0), u(t)-u(t_0)\rangle = \|u(t)\|^2 - 2\langle u(t), u(t_0)\rangle + \|u(t_0)\|^2$$

The first term converges by norm convergence and the middle term by weak convergence, so the whole expression tends to $0$. ∎

We still need to show that $u$ actually matches our initial data.

2.1 Higher Regularity for solutions to parabolic equations


We have

$$\begin{cases}(\partial_t + L)u = f \\ u(0) = u_0\end{cases}$$

and we know that if $u_0 \in L^2$ and $f \in L^2H^{-1}$ then $u \in C(L^2) \cap L^2H_0^1$.

Higher regularity in this context means: if now $u_0 \in H^k$ and $f \in L^2(H^{k-1})$, then we would like to conclude that $u \in C(H^k) \cap L^2(H^{k+1}\cap H_0^1)$. We must require that $L$ have better coefficients.


Theorem 2.2. Assume that $a \in C^k$, $b \in C^{k-1}$, and $c \in C^{k-2}$. Then higher regularity holds.

Note that this is a local result. Given our cylinder $D$ and some point $(t_0,x_0) \in D$, expand a ball around the point (only in the spatial coordinates), then extend it backwards in time a little, to get a small parabolic cylinder

$$C_\varepsilon = \{(x,t) : |x - x_0| < \varepsilon \text{ and } t_0 - \varepsilon^2 \leq t \leq t_0\}$$
Lecture 14 (3/17)

Recall we were looking at second order parabolic equations:

$$\begin{cases} (\partial_t + L)u = f \\ u(0) = u_0 \\ u|_{\partial\Omega} = 0 \end{cases}$$

with $D = \Omega \times [0, T]$, $\Omega \subset \mathbb{R}^n$, and $L$ a second order elliptic operator with bounded coefficients.

With $u_0 \in L^2$ and $f \in L^1_t L^2_x + L^2 H^{-1}$, there exists a unique solution $u \in C(L^2) \cap L^2(H_0^1)$.


We began talking about higher regularity. Given more regularity of $f$, we would like to get more regularity for our solution.

Suppose $u_0 \in H_0^1$ and $f \in L^2_t L^2_x + L^1 H_0^1$; then the higher regularity statement we (want to) get is $u \in C(H_0^1) \cap L^2(H^2 \cap H_0^1)$.

Remark 2.1. To get a result like this, we will add regularity to the coefficients. We will now assume $a \in C^1$, $b \in L^\infty$, $c \in L^\infty$.
ª ª

Remark 2.2 (Compatibility Conditions). In the previous setup, we required $u|_{\partial\Omega} = 0$. We now need more conditions. So now given $u_0 \in H^2 \cap H_0^1$ and $f \in L^2 H^1 + L^1 H^2$, we would like $u \in C(H^2 \cap H_0^1) \cap L^2(H^3 \cap H_0^1)$.

Let's see if this setup makes sense. It makes sense to restrict our equation $(\partial_t + L)u = f$ to the boundary; since $u_t = 0$ there, we end up also requiring $Lu = f$ on the boundary. Now at $t = 0$ we have $Lu_0 \in L^2$, but $f$ is only defined almost everywhere, so this has to be interpreted carefully.

Suppose we are looking at $u_0 \in H^3 \cap H_0^1$ and $f = 0$. Then restricting the relation $Lu = f = 0$ to $\partial\Omega$ gives the compatibility condition, which says that $Lu_0 = 0$ on $\partial\Omega$.

Now if we had smooth data, then we would have $C^\infty$ solutions only if we have infinitely many compatibility conditions (and we would need to assume $C^\infty$ coefficients).


Claim 2.1. Higher regularity is a local property. Given $u_0 \in H_0^1$ and $f \in L^2$, then $u \in C H_0^1 \cap L^2 H^2$.

We can replace this with a local statement, which says: if

$$u_0 \in H^1_{0,loc}, \qquad f \in L^2_{loc}$$

then $u \in C H^1_{0,loc} \cap L^2 H^2_{loc}$, where we are looking at parabolic cylinders: given $(t_0, x_0) \in D$, let $C_\varepsilon(t_0, x_0) = \{|x - x_0| < \varepsilon, \ t_0 - \varepsilon^2 < t \leq t_0\}$.

To do this we will talk about localization. Suppose we want to study the regularity of our function $u$ near $(t_0, x_0)$. To do this we replace $u$ by $v = \chi u$, where $\chi$ is a cutoff function adapted to a parabolic cylinder with $x$ side $\varepsilon$ and time side $\varepsilon^2$. Let's say $\chi = 1$ close to our point, and $\chi = 0$ outside the cylinder. Note that $\chi$ is only supported backwards in time.

Now our $v$ equation is

$$(\partial_t + L)v = (\partial_t + L)(\chi u) = \chi(\partial_t + L)u + [\partial_t + L, \chi]u =: g$$

Note that the commutator term is supported in a small region. Schematically, the commutator looks like:

$$[\partial_t + L, \chi] = \chi_t + \nabla\chi \cdot \nabla + (L\chi)$$

Now $u \in C(L^2_{loc}) \cap L^2 H^1_{0,loc}$, and we assume $f \in L^2 L^2_{loc}$; we would like to conclude that $u \in C(H^1_{0,loc}) \cap L^2 H^2_{loc}$.

Correspondingly, $v \in C(L^2) \cap L^2 H_0^1$ and $g \in L^2$, and we would like to get out of this that $v \in C(H_0^1) \cap L^2 H^2$.

Let’s look at higher regularity estimates:

YuYC ˆH 1 9L2 H 2
0
B Yu0 YH01  Yf YL2
(I don’t know what the norm on the left is)
Proof. Define E1 ˆu R
S©uS2 dx. But actually we would like to replace the integrand by
B ˆu, u ajk ∂j u∂k u, so now we have:
d
dt
E1 ˆu S ∂t ajk ∂j u∂k udx  2 S ajk ∂j ut ∂k udx

now if we ignore lower order terms, the second term becomes:

2 S  ut ∂j ajk ∂k udx  2 S S∂j a


jk
∂k uS2 dx


So we can control the elliptic operator, and hence the $H^2$ norm of $u$ (by elliptic theory). So at the end of the day we get:

$$\frac{d}{dt} E^1(u) \leq -c \|u\|_{H^2}^2 + c_2 \|u\|_{H_0^1}^2 + c_3 \|u\|_{H^2} \|f\|_{L^2}$$

where we have added back the lower order terms. Then by Gronwall's inequality, we get our energy estimate.
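For reference, the Gronwall step can be spelled out as follows (a standard lemma, not restated in lecture; the constants are schematic):

```latex
% Gronwall's inequality, in the form used throughout these notes:
% if  E'(t) \le C E(t) + h(t)  with  h \ge 0,  then
%
%   E(t) \le e^{Ct} E(0) + \int_0^t e^{C(t-s)} h(s) \, ds .
%
% Applied above: after absorbing  c_3 \|u\|_{H^2}\|f\|_{L^2}  into the good
% term  -c\|u\|_{H^2}^2  via Cauchy--Schwarz, one has
\frac{d}{dt} E^1(u) \le -\frac{c}{2}\|u\|_{H^2}^2 + c_2 E^1(u) + C\|f\|_{L^2}^2
% so Gronwall bounds  \sup_t E^1(u),  and integrating the remaining
% negative term in time recovers the  L^2 H^2  part of the estimate.
```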

Now our difficulty is that we have used the higher regularity in the proof of the theorem.

Note the following observation. We have $u \in C(L^2)$, and we would like to show that $u \in C(H_0^1)$. Suppose that we have some solution $v \in C(H_0^1)$; is this enough? Yes: because we have uniqueness in $L^2$, we get $u = v$, which then implies $u \in C(H_0^1)$.

Let’s start with ˆ∂t  Lu f with v ∂t u, then we have:


ˆ∂t  L v ∂t f  ∂t  L, ∂t u ∂t f  Lt u g
originally u > C ˆL2  9 L2 ˆH01 , now we are trying to study two derivatives of u, so we are
saying u0 > H 2 9 H01 , f > L2 ˆH 1 , ft > L2 ˆH 1 . So now note that:

∂t f  Lt u > L2 ˆ H  1


therefore v ˆ0 ∂t ˆuˆ0 Luˆ0  f ˆ0 > L2 . So we also have g > L2 H 1 therefore by 

solving this equation we get v > L2 H01 9 C ˆL2 . Now v Lu  ft . So if we think of this as
an elliptic equation for u then from this we can conclude that u has two derivatives more of
f (or something, I really don’t follow any of this).

Our difficulty here is to show that $v = u_t$. Can we verify this? We know this happens at the initial time. Let:

$$w(t) = u_0 + \int_0^t v(s) \, ds$$

We would like to say that $w = u$, because this would imply that $v = \partial_t u$ and conclude the proof of the regularity statement. We know that $(\partial_t + L)u = f$. Let's see which equation $w$ solves:

$$(\partial_t + L)w = v(t) + L(t)u_0 + L(t)\int_0^t v(s) \, ds = v(t) + L(t)u_0 + \int_0^t (L(t) - L(s))v(s) \, ds + \int_0^t L(s)v(s) \, ds$$

In the last integral we use the equation for $v$, namely $L(s)v(s) = -\partial_s v + f_s - L_s u$; in the middle integral we write $L(t) - L(s) = \int_s^t L_\sigma \, d\sigma$ and exchange the order of integration. After cancellation (using $v(0) = -L(0)u_0 + f(0)$), this collapses to:

$$(\partial_t + L)w = f + \int_0^t L_s (w - u)(s) \, ds$$


Rearranging, we get:

$$\begin{cases} (\partial_t + L)(w - u) = \int_0^t L_s(w - u)(s) \, ds \\ (w - u)(0) = 0 \end{cases}$$

Applying energy estimates here, we get:

$$\|w - u\|_{L^2 H_0^1} \lesssim \left\| \int_0^t L_s(w - u)(s) \, ds \right\|_{L^2 H^{-1}}$$

Then if we apply Hölder in time, we get:

$$\|w - u\|_{L^2 H_0^1 [0,T]} \lesssim T \|w - u\|_{L^2 H_0^1 [0,T]}$$

If $T \ll 1$, then $\|w - u\| = 0$, so $w = u$.
Lecture 15 (3/19)

We have a couple objectives for today:

1. conclude the discussion of higher regularity for parabolic equations

Recall we have:

$$\begin{cases} (\partial_t + L)u = f & \text{in } \Omega \times [0, T] \\ u(0) = u_0 \\ u = 0 & \text{on } \partial\Omega \times [0, T] \end{cases}$$

We have the result that if $u_0 \in L^2$ and $f \in L^2 H^{-1} + L^1 L^2$ then $u \in C(L^2) \cap L^2 H^1$, and further that if $u_0 \in H^k$ and $f \in L^2 H^{k-1} + L^1 H^k$ then we get $u \in C(H^k) \cap L^2 H^{k+1}$. Note that we must also include compatibility conditions (on the boundary) to get this final result${}^a$.

Last time we discussed that:

1. higher regularity is a local property (so we can talk about solutions in parabolic cylinders)

2. it is enough to have existence of a higher regularity solution (uniqueness comes from the weaker topology)

3. we can prove higher regularity energy estimates (this follows by integration by parts and Gronwall's inequality)

${}^a$ essentially we want the equation to be satisfied on the boundary, so that the boundary traces make sense


Last time we took $u_0 \in H^2 \cap H_0^1$, and we got $u \in C(H^2 \cap H_0^1) \cap L^2 H^3$.

To get this, we started with an equation for $u$ and rewrote it as a system for $(u, u_t)$.${}^a$ Recall that this argument requires additional regularity on the coefficients of the operator: $\partial_t a \in L^\infty$, $\partial_x^2 a \in L^\infty$.

Observe that if we assume more regularity on the coefficients, then we can repeat the same argument. If $u_0 \in H^{2n} \cap H_0^1$ (with compatibility conditions) then $u \in C(H^{2n} \cap H_0^1)$, $u \in L^2 H^{2n+1}$.

To prove this, we think of a system for $(u, \partial_t u, \ldots, \partial_t^n u)$.

Skipping the details of the proof, the conclusion is: if $a, b, c \in C^\infty$, $f \in C^\infty$, and $u_0 \in C^\infty$, then $u \in C^\infty$. We also need infinitely many compatibility conditions.

Example 2.2. $H^1$ result: if $u_0 \in H_0^1$ and $f \in L^2$, then $u \in C(H_0^1) \cap L^2 H^2$.

Proof. 1. First assume the coefficients are smooth; then:

$$C^\infty \times C^\infty \ni (u_0, f) \mapsto u \in C^\infty$$

while the solution map

$$H_0^1 \times L^2 \to C(H_0^1) \cap L^2 H^2$$

is bounded (which we proved via energy estimates last class). Now $C^\infty \times C^\infty$ is dense in $H_0^1 \times L^2$, therefore we can extend the data-to-solution map by density.

2. now let’s assume that a is Lipschitz and b, c > L . Now we are going to regularize a, b
ª

and c. Let an > C be such that (1) an are Lipschitz continuous uniformly in n and
ª

(2) an a in L ª

Now the corresponding problem is:


¢̈
¨ˆ∂t  Ln un f
¦
¨un ˆ0 u0
¤̈

so we get un are well-defined and

Yun YC ˆH 1 9L2 H 2
0
B Yu0 YH01  Yf YL2
note that this inequality holds uniformly in n (because an are uniformly Lipschitz (?))

To find a solution for the coefficients a, we want to try to take a limit:

u lim un
n ª

We know that
a
strictly speaking, this is not the most general system


(a) $u_n$ is uniformly bounded in $C(H_0^1) \cap L^2 H^2$

(b) $u_n \to u$ in $L^\infty L^2 \cap L^2 H_0^1$

By (b), $u$ exists, and by (a), $u \in L^\infty H_0^1 \cap L^2 H^2$.

To prove (b), subtract the equations of $u_n$ and $u_m$:

$$\begin{cases} (\partial_t + L_n)(u_n - u_m) = (L_m - L_n)u_m \\ (u_n - u_m)(0) = 0 \end{cases}$$

Then let's write an energy estimate for this in the weaker topology $CL^2 \cap L^2 H_0^1$, to get:

$$\|u_n - u_m\|_{CL^2 \cap L^2 H_0^1} \lesssim \|(L_n - L_m)u_m\|_{L^2 H^{-1}} \lesssim \|a_n - a_m\|_{L^\infty} \|u_m\|_{L^2 H_0^1}$$

The first factor goes to zero, while the second is uniformly bounded; therefore the left-hand side goes to zero as $n, m \to \infty$.

2.2 Maximum Principle for Parabolic Equations


Given the same equation

$$(\partial_t + L)u = 0$$

with $L = -a^{ij}\partial_i\partial_j + b^i\partial_i + c$, where $a, b, c$ are continuous and real-valued. We will not enforce a boundary condition, and instead look for solutions $u \in C^2(\Omega \times (0, T])$ with $u \in C(\bar\Omega \times [0, T])$. Let our domain be $D = \Omega \times (0, T]$, with $\partial D = \partial\Omega \times [0, T] \cup \Omega \times \{0\}$.${}^a$

Theorem 2.3. For $c = 0$,

$$\max_{\bar D} u \leq \max_{\partial D} u$$

This conclusion also holds for solutions of the inequality

$$(\partial_t + L)u \leq 0$$

which we call subsolutions.

Remark 2.3. Replacing $u$ by $-u$, we get the minimum principle for supersolutions (flip the above inequality).

${}^a$ note that the top is not included in the boundary of $D$


Proof (of the maximum principle).

As in the case of second order elliptic equations, we first assume that we have strict inequality in the subsolution condition. Suppose

$$u_t + Lu < 0$$

Then there cannot be an interior maximum. Assume by contradiction that $(t_0, x_0) \in D$ is a local maximum. Then we must have:

$$\partial_t u(t_0, x_0) = 0, \qquad \partial_x u(t_0, x_0) = 0$$

and the Hessian $D^2 u(t_0, x_0) \leq 0$. Plugging these into our equation, we get:

$$(u_t + Lu)(t_0, x_0) \geq 0$$

so we get a contradiction. However, we must also allow for the possibility that $t_0 = T$. There, we only have:

$$\partial_t u(t_0, x_0) \geq 0$$

but then we still get the same contradiction.

For the second part of the proof, we must allow $u_t + Lu \leq 0$. Here we replace $u$ by $u^\varepsilon(t, x) = u(t, x) - \varepsilon t$; then we get:

$$u^\varepsilon_t + Lu^\varepsilon < 0$$

so $\max_{\bar D} u^\varepsilon = \max_{\partial D} u^\varepsilon$, and letting $\varepsilon \to 0$ we get our result.
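As a concrete illustration (my own example, not from lecture), consider the heat equation on $\Omega = (0, 1)$:

```latex
% Take  L = -\partial_x^2  (so c = 0) and  u(t,x) = x^2 + 2t,  which solves
%   u_t + Lu = u_t - u_{xx} = 2 - 2 = 0   on  D = (0,1) \times (0,T].
% The maximum of u over \bar D = [0,1] \times [0,T] is 1 + 2T, attained at
% x = 1 (a lateral boundary point), so
\max_{\bar D} u \;=\; 1 + 2T \;=\; \max_{\partial D} u
% consistent with Theorem 2.3: neither the interior nor the top t = T
% contributes a larger value.
```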


What is the problem with $c$? Consider the equation:

$$\partial_t u - \Delta u + cu = 0$$

First assume $c$ is constant: if $c > 0$, this forces exponential decay; if $c < 0$, this leads to exponential growth.
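The spatially constant case already shows this (a one-line check):

```latex
% Ignore the x-dependence: u(t,x) = e^{-ct} u_0 solves
%   \partial_t u - \Delta u + c u = -c e^{-ct} u_0 - 0 + c e^{-ct} u_0 = 0 ,
% so for c > 0 the solution decays like e^{-ct}, while for c < 0 it grows
% like e^{|c|t} -- and for c < 0 this growth can push an interior value
% above the boundary values, breaking the naive maximum principle.
u(t, x) = e^{-ct} u_0
```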

So let’s try to prove a general result for c C 0

Theorem 2.4. If $u_t + Lu \leq 0$, then $\max_{\bar D} u \leq \max_{\partial D} u^+$, with $u^+$ the positive part of $u$.

Corollary 2.1. If $u_t + Lu = 0$, then

$$\max_{\bar D} |u| \leq \max_{\partial D} |u|$$

The proof is the same as before, but taking into account the sign of $cu$.

There are two observations:


1. The maximum principle implies uniqueness of solutions: if

$$\begin{cases} u_t + Lu = f \\ u(0) = u_0 \\ u = 0 \text{ on } \partial\Omega \end{cases}$$

take two solutions, subtract, and apply the maximum principle. We therefore get uniqueness in $C^2$, but this does not come with existence in $C^2$. We could (1) assume more regularity on the coefficients or (2) prove a maximum principle in a weaker topology. This has been done; see viscosity solutions (we will talk about this later).

Extra stuff: we also have a strong maximum principle for parabolic equations:

If the maximum is attained at some point $(t_0, x_0) \in D$, then $u(t, x) = u(t_0, x_0)$ for $t \leq t_0$.

For elliptic equations, Hopf's lemma implies the strong maximum principle (read this in Evans).

For parabolic equations, the Harnack inequality implies the strong maximum principle. This can be seen in Evans; please read the statement. Having the Harnack inequality is morally equivalent to having a positive fundamental solution.
Lecture 16 (3/31)

3 Hyperbolic Equations
Today we will talk about the wave equation. But first let's recall some things discussed in the first semester.

Recall the d'Alembertian operator: $\Box = \partial_t^2 - \Delta_x$, which is an operator on $\mathbb{R}^{n+1} = \mathbb{R}^n \times \mathbb{R}$, where $\mathbb{R}^n$ is space and $\mathbb{R}$ is time. This is sometimes called Minkowski space-time.

We can look at the inhomogeneous problem

$$\Box u = f$$

or the initial value problem

$$\begin{cases} \Box u = f \\ u(t = 0) = u_0 \\ u_t(t = 0) = u_1 \end{cases}$$

Recall the fundamental solution for the wave equation, $K(t, x)$. If we want to solve $\Box u = f$, one solution is $u = f * K$, i.e. $\Box K = \delta_0$. Note that this gives us one solution, but we can get more solutions by adding any solution of the homogeneous equation.


We then talked about the distinguished fundamental solution. This has to do with causality, and we call it the forward fundamental solution. What we mean is: if we start with $f$ being the Dirac mass centered at $(0, 0)$, then we are looking for a fundamental solution $K$ that is supported forward in time:

$$\operatorname{supp} K \subset \{t \geq 0\}$$

Let's remember what this forward fundamental solution is. In $d = 1$ this was:

$$K(t, x) = \frac{1}{2} 1_{\{t > |x|\}}$$

In $d = 2$, we get:

$$K(t, x) = c_2 \frac{1}{\sqrt{t^2 - x^2}} 1_{\{t \geq |x|\}}$$

In $d = 3$, we get (a distribution, not a function):

$$K = c_3 \frac{1}{t} \delta_{|x| = t}$$

(equivalently, a multiple of $\delta(t^2 - x^2)$ supported in $t \geq 0$).
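To see the $d = 1$ kernel in action (a quick check, with data $u_0 = 0$): convolving $K$ with $u_1 \delta_{t=0}$ gives the familiar d'Alembert-type formula, which one can verify directly:

```latex
% u(t,x) = (K * (u_1 \delta_{t=0}))(t,x) = \frac{1}{2}\int_{x-t}^{x+t} u_1(y)\,dy .
% Check:  u(0,x) = 0,  and
%   u_t    = \tfrac12\big(u_1(x+t) + u_1(x-t)\big),  so  u_t(0,x) = u_1(x);
%   u_{tt} = \tfrac12\big(u_1'(x+t) - u_1'(x-t)\big) = u_{xx},
% hence  \Box u = u_{tt} - u_{xx} = 0.
u(t,x) = \frac{1}{2}\int_{x-t}^{x+t} u_1(y)\,dy
```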
t
Two key properties:

1. finite speed of propagation: waves move with speed $\leq 1$

2. Huygens' principle: waves move with speed exactly $1$ in $n = 3, 5, 7, \ldots$

In higher dimensions, for $n$ even, we have:

$$K(t, x) = c_n \frac{1}{(t^2 - x^2)^{\frac{n-1}{2}}}$$

where we interpret this as a homogeneous distribution, so it is supported on the solid light cone. For $n$ odd, we have:

$$K(t, x) = c_n \frac{1}{t^{\frac{n-1}{2}}} \, \delta^{(\frac{n-3}{2})}_{t = |x|}$$

so it is supported on the light cone.

If we would like to solve the initial value problem using the forward fundamental solution, we have:

$$u = f * K + (u_0 \delta'_{t=0} + u_1 \delta_{t=0}) * K$$

and if we let

$$\tilde u = \begin{cases} u & t \geq 0 \\ 0 & t < 0 \end{cases}$$


then we get:

$$\Box \tilde u = f 1_{t \geq 0} + u_0 \delta'_{t=0} + u_1 \delta_{t=0}$$

We can also discuss the Fourier approach (for $f = 0$):

$$\begin{cases} (\partial_t^2 + |\xi|^2)\hat u(t, \xi) = 0 \\ \hat u(0, \xi) = \hat u_0 \\ \hat u_t(0, \xi) = \hat u_1(\xi) \end{cases}$$

The solution would be:

$$\hat u(t, \xi) = \hat u_0(\xi) \cos(t|\xi|) + \hat u_1(\xi) \frac{\sin(t|\xi|)}{|\xi|}$$
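One can verify this formula directly, mode by mode:

```latex
% For fixed \xi, write  \hat u(t) = \hat u_0 \cos(t|\xi|) + \hat u_1 \frac{\sin(t|\xi|)}{|\xi|}.  Then
%   \partial_t^2 \hat u = -|\xi|^2 \hat u_0 \cos(t|\xi|) - |\xi|^2 \hat u_1 \frac{\sin(t|\xi|)}{|\xi|}
%                       = -|\xi|^2 \hat u ,
% so  (\partial_t^2 + |\xi|^2)\hat u = 0,  with  \hat u(0) = \hat u_0  and
% \hat u_t(0) = \hat u_1.  Note also that the second term is harmless as
% \xi \to 0,  since  \frac{\sin(t|\xi|)}{|\xi|} \to t .
(\partial_t^2 + |\xi|^2)\,\hat u(t,\xi) = 0
```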

3.1 Energy Estimates


Given the equation $\Box u = 0$, define the energy functional:

$$E(u) = \frac{1}{2} \int_{\mathbb{R}^n} \underbrace{|\partial_t u|^2}_{\text{kinetic energy}} + \underbrace{|\nabla_x u|^2}_{\text{potential energy}} \, dx$$

Then we have:

$$\frac{d}{dt} E(u) = \int_{\mathbb{R}^n} u_{tt} u_t + \nabla_x u_t \cdot \nabla_x u \, dx = \int_{\mathbb{R}^n} (u_{tt} - \Delta u) u_t \, dx = 0$$

We could do this backwards:

$$0 = u_t(u_{tt} - \Delta u) = \partial_t \left(\frac{1}{2} u_t^2\right) - \nabla \cdot (u_t \nabla u) + \nabla u_t \cdot \nabla u = \partial_t \left[\frac{1}{2}(u_t^2 + |\nabla u|^2)\right] - \partial_j(u_t \partial_j u)$$

so

$$\partial_t \underbrace{\left[\frac{1}{2}(u_t^2 + |\nabla u|^2)\right]}_{e} - \partial_j \underbrace{(u_t \partial_j u)}_{f_j} = u_t \, \Box u$$

We call the term $e$ on the left the energy density, and $f_j$ the energy flux.
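A traveling wave makes the density/flux pairing concrete (my own sanity check, not from lecture):

```latex
% Take n = 1 and  u(t,x) = \cos(x - t),  so  \Box u = 0.  Then
%   u_t = \sin(x-t),   u_x = -\sin(x-t),
%   e   = \tfrac12(u_t^2 + u_x^2) = \sin^2(x-t),
%   f_1 = u_t \, \partial_x u    = -\sin^2(x-t),
% and indeed
%   \partial_t e - \partial_x f_1 = -\sin(2(x-t)) + \sin(2(x-t)) = 0 :
% the energy density is transported to the right with speed 1.
\partial_t e - \partial_x f_1 = u_t \,\Box u = 0
```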

We also get conservation of momentum:

$$P_j = \int_{\mathbb{R}^n} u_t \, \partial_j u \, dx$$

Then:

$$\partial_t(u_t \partial_j u) = u_{tt} \partial_j u + u_t \partial_j u_t = \Box u \, \partial_j u + \Delta u \, \partial_j u + \frac{1}{2}\partial_j u_t^2$$

$$= \Box u \, \partial_j u + \frac{1}{2}\partial_j u_t^2 + \partial_k(\partial_k u \, \partial_j u) - \frac{1}{2}\partial_j |\nabla u|^2$$

$$= \Box u \, \partial_j u + \frac{1}{2}\partial_j(u_t^2 - |\nabla u|^2) + \partial_k(\partial_k u \, \partial_j u)$$

The last two terms are the momentum fluxes.

So for the energy, we multiply by $\partial_t u$, and for the momentum, we multiply by $\partial_x u$.

Let’s put everything together into one thing: the energy-momentum tensor. Note
that we can write ∆ eij ∂i ∂j with E the identity matrix. This E is associated with Euclidean
distance, we can write: SxS2 eij xi xj . Now we could write: j mij ∂i ∂j , but instead we can
change the notation to write mαβ ∂α ∂β with α, β 0, 1, . . . , n and x0 t. Then our matrix is:

M diag ˆ1, 1, . . . , 1

this is the Minkowski metric, and allows us to compute lengths of vectors SxS2 mαβ xα xβ
2 2 2
x  x   x .
0 1 n

So we can now classify vectors.

Definition 3.1. If SxS2 0, we call x a null vector

Definition 3.2. If SxS2 A 0, we call x space-like

Definition 3.3. If SxS2 @ 0, we call x time like
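For instance (quick examples in $\mathbb{R}^{1+1}$, added for concreteness):

```latex
% With  m = diag(-1, 1):
%   x = (1, 0):  |x|^2 = -1 < 0       -- time-like (the direction \partial_t);
%   x = (0, 1):  |x|^2 = +1 > 0       -- space-like;
%   x = (1, 1):  |x|^2 = -1 + 1 = 0   -- null (tangent to the light cone t = x).
|(1,0)|^2 = -1, \qquad |(0,1)|^2 = 1, \qquad |(1,1)|^2 = 0
```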

Lecture 17 (4/2)

Recall we were talking about energy estimates for the wave equation

$$\Box u = f$$

We called $\mathbb{R}^{n+1}$ Minkowski space-time. We introduced the Minkowski metric:

$$|x|^2 = m_{\alpha\beta}x^\alpha x^\beta$$

where $m = \operatorname{diag}(-1, 1, \ldots, 1)$ and $\alpha, \beta = 0, \ldots, n$, with $0$ the time index and the others space.

Last time, we classified vectors as space-like, time-like, and light-like (null). We can invert $m_{\alpha\beta}$ to get $m^{\alpha\beta}$ (it's the same matrix though). Then we write $\Box = m^{\alpha\beta}\partial_\alpha\partial_\beta$.


We can also talk about covariant derivatives. We can write $\partial^\alpha = m^{\alpha\beta}\partial_\beta$, which gives us $\partial^0 = -\partial_0$ and $\partial^j = \partial_j$.

We can also write a vector in our tangent space as $x = x^\alpha \partial_\alpha$, and we can lower the index to write $x_\beta = m_{\alpha\beta}x^\alpha$. Then an element of our cotangent space can be written $\omega = x_\beta \, dx^\beta$.

Then we can write $\Box = \partial^\alpha \partial_\alpha$, and, up to sign, the symbol of $\Box$ is $m^{\alpha\beta}\xi_\alpha\xi_\beta = |\xi|^2$.

Now let's talk about the energy-momentum tensor. This is a matrix:

$$T^{\alpha\beta} = \partial^\alpha u \, \partial^\beta u - \frac{1}{2}m^{\alpha\beta}\partial^\gamma u \, \partial_\gamma u$$

This is a quadratic form in $\partial u$, and is also a symmetric matrix.
Lemma 3.1. If $\Box u = 0$, then $\partial_\alpha T^{\alpha\beta} = 0$ (the tensor is divergence free).

If $\Box u \neq 0$, then we instead get $\partial_\alpha T^{\alpha\beta} = \Box u \, \partial^\beta u$.

Proof.

$$\partial_\alpha T^{\alpha\beta} = \partial_\alpha\partial^\alpha u \, \partial^\beta u + \partial^\alpha u \, \partial^\beta \partial_\alpha u - m^{\alpha\beta}\partial^\gamma u \, \partial_\alpha\partial_\gamma u = \Box u \, \partial^\beta u + \partial^\alpha u \, \partial^\beta\partial_\alpha u - \partial^\gamma u \, \partial^\beta \partial_\gamma u = 0$$

since the second and third terms are the same sum (switch indices).
Let’s construct a energy estimate with a differential approach. We have
we rearrange, we get:
P∂T
α α
αβ , so if

∂0 T 0β ∂j T jβ

if we integrate both sides in a spacial direction, then the right side vanishes, and we get:

∂t S T 0β dx 0

for β 0, . . . , n which gives us n  1 conservation laws.

If β 0, then we have
1
T 00 ˆ∂
0
u2  2 2
ˆˆ∂0 u  ˆ∂x u 
2
1 2 2
ˆˆ∂0 u  ˆ∂x u  e
2
which gives us energy. For β j, we have:

T 0j ∂ 0u ∂ j u

which gives us momentum conservation.


Returning to $\partial_\alpha T^{\alpha\beta} = 0$, let's take linear combinations of this. Take a vector field $X = X^\alpha \partial_\alpha$ and use its components as weights. Then we get:

$$(\partial_\alpha T^{\alpha\beta})X_\beta = 0$$

or (as long as the $X^\alpha$ are constant):

$$\partial_\alpha(T^{\alpha\beta}X_\beta) = 0$$

This is a linear combination of energy and momentum. Recall that our energy density $e = \frac{1}{2}(|\partial_t u|^2 + |\partial_x u|^2)$ is positive definite. Observe that the conserved quantity that we get from the above is:

$$\int T^{0\beta}X_\beta \, dx$$

Is this a positive definite quantity or not? Note we have:

$$T^{0\beta}X_\beta = e \, X_0 + p^j X_j = \frac{1}{2}X_0\left(|\partial_t u|^2 + |\partial_x u|^2\right) + \partial_t u \, \partial^j u \, X_j$$

So we require $X_0 > 0$. To control the cross terms, we use the Cauchy-Schwarz inequality:

$$|\partial_t u \, \partial^j u \, X_j| \leq |\partial_t u| \, |\partial_x u| \, |\tilde X| \leq \frac{1}{2}\left(\partial_t u^2 + \partial_x u^2\right)|\tilde X|$$

where $\tilde X$ is the spatial part of $X$. Therefore:

Lemma 3.2. $T^{0\beta}X_\beta$ is positive definite if and only if $X_0 > 0$ and $X_0^2 > X_1^2 + \cdots + X_n^2$, i.e. if and only if $X_0 > 0$ and $|X|^2 < 0$ (so $X$ is forward time-like).
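With the notation above, the simplest admissible weight recovers the usual energy (a one-line check, added here):

```latex
% Take  X_0 = 1  and  X_j = 0  (the generator of time translations).  Then
%   T^{0\beta}X_\beta = e \cdot 1 + p^j \cdot 0
%                     = \tfrac12\big(|\partial_t u|^2 + |\partial_x u|^2\big),
% which is the standard energy density, so the conserved quantity
%   \int T^{0\beta}X_\beta \, dx = E(u)
% is the energy from Section 3.1.  For  |\tilde X| < X_0  the cross term is
% strictly dominated, which is exactly the content of Lemma 3.2.
T^{0\beta}X_\beta = e
```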

3.2 Symmetries of Minkowski Space


Now let’s ask: what are the symmetries of M n  1 Rn  R?
ˆ translations

ˆ linear transformations
a linear transformations is like x Ay, then
2
SxS xM x AM Ay yAt M Ay

so this is a symmetry if At M A M . In the Euclidean setting, M is just the identity, so


At A I means that A is orthogonal.



But now $M$ is our Minkowski metric. Suppose first that $A$ only affects the spatial coordinates. Then $A$ must be of the form

$$A = \begin{pmatrix} 1 & 0 \\ 0 & \theta \end{pmatrix}$$

with $\theta$ orthogonal.

Example 3.1. Suppose $n = 1$; then we have

$$M = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$$

Now if we let $u = \frac{x + t}{2}$ and $v = \frac{x - t}{2}$, then we get $x^2 - t^2 = 4uv$, so in the $(u, v)$ coordinates we get:

$$M = \begin{pmatrix} 0 & 2 \\ 2 & 0 \end{pmatrix}$$

Now let $u \to \lambda u$ and $v \to \lambda^{-1} v$; this preserves the metric. Together these generate the Lorentz group, which has two kinds of pieces: (1) the orthogonal subgroup and (2) Lorentz boosts.
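Translating the $(u, v)$ scaling back to $(t, x)$ coordinates gives the familiar boost (a short check, with $\lambda = e^\sigma$):

```latex
% From  u' = \lambda u,  v' = \lambda^{-1} v  with  u = \tfrac{x+t}{2},  v = \tfrac{x-t}{2}:
%   x' = u' + v' = x\cosh\sigma + t\sinh\sigma,
%   t' = u' - v' = x\sinh\sigma + t\cosh\sigma,
% and one checks directly that
%   (x')^2 - (t')^2 = 4u'v' = 4uv = x^2 - t^2 ,
% so the Minkowski length is preserved.
\begin{pmatrix} t' \\ x' \end{pmatrix} =
\begin{pmatrix} \cosh\sigma & \sinh\sigma \\ \sinh\sigma & \cosh\sigma \end{pmatrix}
\begin{pmatrix} t \\ x \end{pmatrix}
```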


Geometrically, we can integrate our conserved quantity over rotated surfaces (I didn't quite follow this) to get something like:

$$\int_{\Sigma_0} N_0 e + N_j p^j = \int_{\Sigma_1} N_0 e + N_j p^j$$

The integrand is positive definite if and only if $|N_0|^2 > \sum_j |N_j|^2$, which is like saying that $N$ is time-like.

We need to be careful about writing normal vectors. If our surface is $\Sigma = \{\varphi = 0\}$, then $d\varphi = \partial_\alpha \varphi \, dx^\alpha$, so $N_\alpha = \partial_\alpha \varphi$ is a covector. Then if we want $|N_0|^2 > N_1^2 + \cdots + N_n^2$, this reads $N^\alpha N_\alpha < 0$.

So we can classify surfaces by their conormal $N$:

1. space-like if $N^\alpha N_\alpha < 0$ ($N$ time-like)

2. null if $N^\alpha N_\alpha = 0$

3. time-like if $N^\alpha N_\alpha > 0$ ($N$ space-like)

Theorem 3.1. The energy density on $\Sigma$ is positive definite if and only if $\Sigma$ is space-like.
Lecture 18 (4/7)

Recall we were studying the wave equation on Minkowski space $\mathbb{R}^{n+1}$ with the metric $m_{\alpha\beta} = \operatorname{diag}(-1, 1, \ldots, 1)$. We had the energy-momentum tensor $T^{\alpha\beta} = \partial^\alpha u \, \partial^\beta u - \frac{1}{2}m^{\alpha\beta}\partial^\gamma u \, \partial_\gamma u$. We found that $\partial_\alpha T^{\alpha\beta} = 0$ (if $\Box u = 0$), and we got conservation laws from this.

Then we picked a vector field $X_\beta$, and from this we get $\partial_\alpha(T^{\alpha\beta}X_\beta) = 0$ (if $\Box u \neq 0$ we instead get that this equals $\Box u \, X_\beta \partial^\beta u$). This is like taking linear combinations of the energy and momentum relations.

To get energy identities, we integrate over a space-time region. Consider two times $t_0, t_1$ and let the region between these be $D_{[t_0, t_1]}$. Then since

$$\int_{D_{[t_0,t_1]}} \partial_\alpha(T^{\alpha\beta}X_\beta) = 0$$

we apply the divergence theorem to get:

$$\underbrace{\int_{t = t_1} T^{0\beta}X_\beta \, dx}_{\text{energy at } t_1} = \underbrace{\int_{t = t_0} T^{0\beta}X_\beta \, dx}_{\text{energy at } t_0}$$

We would like both of these to be positive.


Proposition 3.1. $T^{0\beta}X_\beta$ is positive definite if and only if $X$ is forward time-like: $X_0 > 0$ and $|X|^2 < 0$.

Now generalize this. Let $\Sigma_0$ and $\Sigma_1$ be two surfaces with $D$ the region between the two surfaces, and let $N$ denote the conormal vector on these surfaces.

Then we have:

$$\int_D \partial_\alpha(T^{\alpha\beta}X_\beta) \, dx = 0$$

and by the divergence theorem we get:

$$\int_{\Sigma_0} N_\alpha T^{\alpha\beta}X_\beta \, dx = \int_{\Sigma_1} N_\alpha T^{\alpha\beta}X_\beta \, dx$$

The first quantity is the energy on $\Sigma_0$ and the other is the energy on $\Sigma_1$. We would like these to be positive definite.

Proposition 3.2. The energy density is positive definite if and only if $N$ and $X$ are both forward time-like or both backward time-like.

In particular, this holds if $X$ is forward time-like and $\Sigma_{0,1}$ are space-like.


3.3 Finite speed of propagation


Consider the slice $t = 0$ with a disc centered at $x_0$ of radius $R$. Take the cone coming out of this with slope $1$:

$$C = \{|x - x_0| + t \leq R\}$$

Now take the initial and intermediate circular slices $C_0$ and $C_t$, and consider the lateral boundary $\Gamma_{[0,t]}$ of this frustum.

Let's then consider an energy identity in the region $C_{[0,t]}$. So we have $\Sigma_0 = C_0$ and $\Sigma_1 = C_t \cup \Gamma_{[0,t]}$, hence:

$$\int_{C_0} e \, dx = \int_{C_t} e \, dx + \int_{\Gamma_{[0,t]}} e_\Gamma \, d\Gamma$$

The first two are positive definite by the above, but what about the third? $\Gamma$ is not space-like; it is actually a null surface, so we no longer know whether this energy density is positive definite. However, it is a limit of space-like surfaces, which means that we get something that is non-negative definite.

Therefore if $\Box u = 0$ and $u = u_t = 0$ on $C_0$, then $u = u_t = 0$ on $C_t$, and therefore $u = 0$ in $C$.

So the data in $C_0$ uniquely determines the solution in this region; we call $C$ the cone of uniqueness for $C_0$.

Alternatively, if our initial data is supported in $D_0$, then our solution is supported in the upside-down frustum $D$, which can be shown easily from what we proved above. We call $D$ the domain of influence of $D_0$.


Question: what happens if $C_0$ is an arbitrary space-like region?

Let $\Sigma_0$ be some space-like surface, and let $\Sigma_1$ be a cap on this surface. If $\Sigma_1$ is space-like, then we have a good energy estimate: if $u = 0$ on $\Sigma_0$, then $u = 0$ on $\Sigma_1$. Now take $\Sigma_1$ and start expanding it outwards for as long as it remains space-like. What is the limiting surface?

Write $\Sigma_1 = \{\varphi = 0\}$ and suppose $\varphi < 0$ in our region; then $N_{\Sigma_1} = \nabla \varphi$.

The good case is when $N_{\Sigma_1}$ is time-like: $|N|^2 < 0$. The limiting case is when $N_{\Sigma_1}$ is a null vector: $|N|^2 = 0$. This is equivalent to:

$$m^{\alpha\beta}\partial_\alpha \varphi \, \partial_\beta \varphi = 0$$

This is known as the eikonal equation.
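The model solution is the light cone itself (a quick check, added for illustration):

```latex
% Take  \varphi(t, x) = t - |x|  (so \{\varphi = 0\} is the forward light cone).  Then
%   \partial_t \varphi = 1,   \nabla_x \varphi = -\frac{x}{|x|},
% and with  m^{\alpha\beta} = diag(-1, 1, \ldots, 1):
%   m^{\alpha\beta}\partial_\alpha\varphi \, \partial_\beta\varphi
%     = -(\partial_t\varphi)^2 + |\nabla_x\varphi|^2 = -1 + 1 = 0 ,
% so the light cone solves the eikonal equation, matching the fact that the
% lateral cone boundary \Gamma above is a null surface.
m^{\alpha\beta}\partial_\alpha \varphi \, \partial_\beta \varphi = -1 + 1 = 0
```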

3.4 Variable Coefficient Equations


Now we would like to study the PDE:

$$(\underbrace{g^{\alpha\beta}\partial_\alpha\partial_\beta}_{\Box_g} + b^\alpha \partial_\alpha + c)u = 0$$

We won't discuss much about the lower order terms; they play a perturbative role (as long as $b, c \in L^\infty$). We want $g$ to look very much like the Minkowski metric.

In the elliptic case, we had the coefficients $a^{ij}$ and asked for this matrix to be positive definite, which is equivalent to the identity matrix after a linear change of coordinates.

Correspondingly, here we require that $g^{\alpha\beta}$ has signature $(n, 1)$.


Now we want the regularity $g^{\alpha\beta} \in \mathrm{Lip}$.

Notation:

$$(g^{\alpha\beta})^{-1} = g_{\alpha\beta}$$

where the term on the right is a pseudo-Riemannian metric. So if $x = x^\alpha \partial_\alpha$, then:

$$|x|^2 = g_{\alpha\beta}x^\alpha x^\beta$$

and $\partial^\alpha = g^{\alpha\beta}(x)\partial_\beta$.

Now let's look at energy estimates. We still have the tensor:

$$T^{\alpha\beta} = \partial^\alpha u \, \partial^\beta u - \frac{1}{2}g^{\alpha\beta}\partial^\gamma u \, \partial_\gamma u$$

Then we can compute the divergence:

$$\partial_\alpha T^{\alpha\beta} = 0$$

if $g$ has constant coefficients (which is equivalent to Minkowski by a change of coordinates). Otherwise, we get (for $\Box_g u = 0$):

$$\partial_\alpha T^{\alpha\beta} = O(\nabla g \, |\nabla u|^2)$$

which tells us why we require $g$ to be Lipschitz. If $u$ did not solve the equation, we would add $\Box_g u \, \partial^\beta u$.

Differential geometry comment: we actually want:

$$\Box_g u = \frac{1}{\sqrt{g}}\partial_\alpha\left(\sqrt{g}\, g^{\alpha\beta}\partial_\beta u\right)$$

with $g = |\det g_{\alpha\beta}|$; then $\nabla_\alpha T^{\alpha\beta} = 0$, where the differentiation is Levi-Civita covariant differentiation.

Returning to reality. Take two time slices $t_0, t_1$ with the region between them $D_{[t_0, t_1]}$; then we get (where $X$ is now a variable vector field):

$$\partial_\alpha(T^{\alpha\beta}X_\beta) = O(|\nabla g| \, |\nabla u|^2)$$

So we get:

$$\int_{t = t_1} T^{0\beta}X_\beta - \int_{t = t_0} T^{0\beta}X_\beta = \int_{D_{[t_0,t_1]}} O(|\nabla g| \, |\nabla u|^2) \, dx \, dt$$

Both terms on the left-hand side are positive definite if $X$ is forward time-like and $\{t = \text{const}\}$ is a space-like surface (i.e. $\varphi = t$, which has normal $N = (1, 0, \ldots, 0)$); this means $g^{00} < 0$.


Let:

$$E(t) = \int T^{0\beta}X_\beta \, dx$$

Then we have:

$$E(t_1) - E(t_0) = \int_{D_{[t_0,t_1]}} O(|\nabla g| \, |\nabla u|^2) + \Box_g u \, Xu \, dx \, dt$$

Divide by $t_1 - t_0$, so we get:

$$\frac{d}{dt}E(t) = \int_{D_t} O(|\nabla g| \, |\nabla u|^2) + \Box_g u \, Xu \, dx \leq \|\nabla g\|_{L^\infty}E(t) + \|\Box_g u\|_{L^2}\|Xu\|_{L^2}$$

and $\|Xu\|_{L^2}$ is like $|E(t)|^{1/2}$, so this is the perfect setup for Gronwall's inequality:

$$E(t)^{1/2} \lesssim E(0)^{1/2}\exp\left(\int_0^t \|\nabla g\|_{L^\infty} \, ds\right) + \int_0^t \|\Box_g u\|_{L^2}\exp\left(\int_s^t \|\nabla g\|_{L^\infty} \, d\sigma\right) ds$$

If we are working on a bounded interval $[0, T]$, then the easiest way to write this is:

$$\sup_{t \in [0,T]} E(t)^{1/2} \lesssim E(0)^{1/2} + \|\Box_g u\|_{L^1_t L^2_x}$$

(and note that the left-hand side controls $\|u_t\|_{L^2} + \|\nabla u\|_{L^2}$).


Lecture 19 (4/9)
Recall our discussion of the wave equation. We have our metric $g^{\alpha\beta}$ on Minkowski space (or more generally a pseudo-Riemannian manifold). Then $g^{\alpha\beta}$ has signature $(n, 1)$ (invariant under changes of coordinates). Then we let $\Box_g = g^{\alpha\beta}\partial_\alpha\partial_\beta$.

Now we are looking at the Cauchy problem:

$$\begin{cases} \Box_g u = f \\ u(t = 0) = u_0 \\ u_t(t = 0) = u_1 \end{cases}$$

This initial data is called Cauchy data. The important thing is that the surfaces $\{t = \text{constant}\}$ are space-like, which is equivalent to $g^{00} < 0$.

Recall our energy estimates. Given our energy momentum tensor $T^{\alpha\beta}$, set

$$E(u)(t) = \int T^{0\beta}X_\beta \, dx$$

where $X_\beta$ is a forward time-like vector field. One example could be $\partial_t$.

Then our energy estimate for the wave equation looks like:

$$\frac{d}{dt}E(u)(t) \leq c\|\nabla g\|_{L^\infty}E(u)(t) + E(u)^{1/2}\|f\|_{L^2}$$


Note that $E(u)(t) \approx \int u_t^2 + |\nabla_x u|^2 \, dx$. Via Gronwall, we get:

$$\sup_{t \in [0,T]} E(u)(t) \lesssim E(u)(0) + \|f\|_{L^1_t L^2_x}^2$$

We could also write:

$$E(u)(t) \approx \|u_t\|_{L^2}^2 + \|u\|_{\dot H^1}^2$$

3.5 Higher Regularity Estimates


Fix $k > 0$, and suppose $u_0 \in H^{k+1}$, $u_1 \in H^k$, and $\partial^j_{x,t} f \in L^1 L^2$ for $|j| \leq k$. Define:

$$E^k(u) = \sum_{|j| \leq k} E(\partial^j u)$$

Proposition 3.3. Assume that $g \in C^{k+1}_{x,t}$. Then:

$$\sup_{t \in [0,T]} E^k(u)(t) \lesssim E^k(u)(0) + \|\partial^{\leq k} f\|_{L^1 L^2}^2$$

This will lead to a higher regularity statement: if the data is more regular, then so is the solution.

The proof is to apply the energy estimates to $\partial^j u$ for $|j| \leq k$.

Recall that finite speed of propagation says that the data on a given set at the initial time uniquely determines the solution in a corresponding region. In other words, given two surfaces $\Sigma_{0,1}$:

[figure]

Proposition 3.4. If $\Sigma_{0,1}$ are space-like, then $E(u)|_{\Sigma_1} \lesssim E(u)|_{\Sigma_0}$.

This proposition actually requires more: a global time-like vector field $X$, or a global foliation (continuum of slices) which is space-like.

Question: what is the maximal region where the energy estimates hold? In the limit, $\Sigma_1$ becomes null, in which case this gives us the domain of uniqueness of $\Sigma_0$. Now if we write $\Sigma_1 = \{\varphi = 0\}$, then we get the eikonal equation:

$$g^{\alpha\beta}\partial_\alpha \varphi \, \partial_\beta \varphi = 0$$

Also, Huygens' principle disappears for variable coefficients.

Theorem 3.2 (Well-Posedness of the Wave Equation). Assume $g \in C^1$ and that the surfaces $\{t = \text{const}\}$ are uniformly space-like. Then for each $(u_0, u_1) \in H^1 \times L^2$ and $f \in L^1_t L^2_x$, there exists a unique solution $u(x, t)$ such that $u \in C([0, T]; H^1)$ and $u_t \in C([0, T]; L^2)$.


We call $\mathcal{H} = H^1 \times L^2$ our state space.

Theorem 3.3 (Higher Regularity). Assume $g \in C^{k+1}$ and that $\{t = \text{const}\}$ is space-like. Then for each $(u_0, u_1) \in H^{k+1} \times H^k$ and $f \in L^1_t L^2_x$ together with $k$ derivatives, there exists a unique solution $u \in C([0, T]; H^{k+1})$ with $u_t \in C([0, T]; H^k)$.


The remaining question is the existence of solutions.

Approach 1: use duality. The idea is that existence for $P$ corresponds to estimates for $P^*$, and estimates for $P$ correspond to existence for $P^*$.

Our example for the heat equation was that we studied $P$ in $L^2$ (and therefore $P^*$ in $L^2$).

Now write $\Box_g$ in divergence form:

$$\Box_g = \partial_\alpha g^{\alpha\beta}\partial_\beta$$

(this is equivalent to the previous $\Box_g$ modulo lower order terms). Then we can compute the adjoint:

$$\int \Box_g u \, v \, dx \, dt = \int \partial_\alpha(g^{\alpha\beta}\partial_\beta u) \, v \, dx \, dt = \int u \, \partial_\beta(g^{\alpha\beta}\partial_\alpha v) \, dx \, dt$$

but $g^{\alpha\beta}$ is symmetric. Therefore the conclusion is that $\Box_g^* = \Box_g$.

Now let's integrate from $t_0$ to $t_1$ (with constant coefficients). As before, we had:

$$\int \Box u \, v \, dx \, dt = \int_D u \, \Box v \, dx \, dt + \text{boundary terms}$$

The boundary term comes from the time integration by parts:

$$\int_{t_1}^{t_2} u_{tt} v \, dt = u_t v \Big|_{t_1}^{t_2} - \int u_t v_t \, dt = (u_t v - u v_t)\Big|_{t_1}^{t_2} + \int u v_{tt} \, dt$$

therefore:

$$\text{boundary term} = \int u_t v - u v_t \, dx \Big|_{t = t_0}^{t = t_1}$$

Writing this again, we have $\int u_t v - u v_t \, dx$: the first product pairs $L^2$ against $L^2$, while the second pairs $\dot H^1$ against $\dot H^{-1}$. So studying $(u, u_t) \in H^1 \times L^2$ corresponds to studying $(v, v_t) \in L^2 \times H^{-1}$.

Let’s take this observation and put it into the duality argument. What we get at the end
of the day is the following:

– 65 –
3. Hyperbolic Equations

Proposition 3.5. Given energy estimates for v > H 1  L2 , this gives us existence for u in
L2  H 1


observe that once can also prove lower energy estimates. eg if we are looking at jg u f
in L2  H 1 , then we can relate this to a function v with u  jv, then we look at v > H 1  L2 ,


then:
Â
u
v
Â
ˆ1  Sξ S2 1~2

Observation 1: we do not need existence of solutions for all data $(u_0, u_1) \in H^1 \times L^2$; it is enough to have it for a dense set.

Observation 2: we don't need exact solutions. It is enough to be able to find approximate solutions with arbitrarily small error:

$$(u_0, u_1) = \lim_m (u_0^m, u_1^m), \qquad \Box_g u_m - f_m \to 0$$

Lecture 20 (4/14)

3.6 Hyperbolic Systems


Given a function $u = (u_1, \ldots, u_m)$ with $u_j : \mathbb{R}^{1+n} \to \mathbb{R}$ (or $\mathbb{C}$), our equation is

$$\begin{cases} \partial_t u_j = A^{kl}_j(x)\partial_l u_k \\ j = 1, \ldots, m \\ u_j(0) = u_{0j} \end{cases}$$

This is what we call a first order linear system, with coefficients $A^{kl}_j(x)$ real-valued Lipschitz functions.

A shorter formulation could be:

$$\partial_t u = A^l \partial_l u + \underbrace{A^0 u}_{\text{harmless}}$$

with $A^l \in M^{m \times m}$ (square matrices).

Let’s first think about this in the constant coefficient case: then we can use a Fourier
transform method.

uˆt, x ˆt, ξ , then:


u
l
 iA ξl u
∂t u Â

so we have a system ∂t v iBv. We would like to diagonalize this matrix and write:

v ˆt eiBt v ˆ0


Let’s make a linear change of coordinates to diagonalize B as diag ˆλ1 , . . . , λm , and so we


get eiλm t .

Note that since B Al xil , then replacing ξ by µξ, then λ becomes µλ.

Now suppose that for some m, Imλm @ 0, then we get:


eiλm t ecSξSt
which blows up (leaves Schwartz space). Therefore we would like the eigenvalues of B to
not have negative imaginary parts. But therefore, since we have complex conjugate pairs,
we must exclude positive parts, therefore:
Remark 3.1. We have ill-posedness if B has non-real eigenvalues.
Definition 3.4. Our system is hyperbolic if for each $\xi \in \mathbb{R}^n$, the matrix $B = A^l \xi_l$ has only real eigenvalues.

If $B \sim \operatorname{diag}(\lambda_1, \ldots, \lambda_m)$, notice that $\lambda_1, \ldots, \lambda_m$ are homogeneous functions of order $1$ in $\xi$.

Easy case: simple eigenvalues, $\lambda_i \neq \lambda_j$ for $i \neq j$. Then $\lambda_1, \ldots, \lambda_m$ are smooth functions of $\xi$, $e^{iBt} \sim \operatorname{diag}(e^{i\lambda_j t})$, and $|e^{iBt}| \leq C$ uniformly in time.

Definition 3.5. Strictly hyperbolic systems: $\lambda_1 \neq \lambda_2 \neq \cdots \neq \lambda_m$ (all distinct) for the real eigenvalues of $B = A^l \xi_l$.

The even easier case is when $n = 1$; then $B = A\xi$.
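A standard example (not worked out in lecture here): the 1-d wave equation written as a first order system is both strictly hyperbolic and symmetric.

```latex
% For  \Box u = u_{tt} - u_{xx} = 0,  set  w = (u_t, u_x).  Then
%   \partial_t w = A \partial_x w,   with   A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
% since  \partial_t u_t = \partial_x u_x  and  \partial_t u_x = \partial_x u_t.
% Here  B = A\xi  has eigenvalues  \lambda = \pm\xi :  real and distinct for
% \xi \ne 0,  so the system is strictly hyperbolic;  and A is symmetric, so
% it is also a symmetric hyperbolic system.  The eigenvectors  (1, \pm 1)
% correspond to the left/right-moving waves  u_t \mp u_x .
B = \begin{pmatrix} 0 & \xi \\ \xi & 0 \end{pmatrix}, \qquad \lambda_{1,2} = \pm\xi
```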

Now let’s suppose that B has multiple eigenvalues:


λ1  λj x 
eg
λ 1
B Œ ‘
0 1
then:
eiλt teiλt
eiBt Œ ‘
0 eiλt
Then:
Se
iBt
S B ˆ1  t
and so we have growth. Such problems may be well-posed but with growth in time and with
loss of derivatives (in the constant coefficient case). This becomes a big mess with variable
coefficients.

A bettercase would be symmetric matrices. If B Al ξl is symmetric, we would require


l
A to be symmetric


Definition 3.6. Symmetric hyperbolic systems: when B is symmetric.

Goal: well-posedness theory for symmetric hyperbolic systems.


$$\begin{cases} \partial_t u = A^l(x,t)\,\partial_l u \\ u(0) = u_0 \end{cases} \qquad (7)$$

Theorem 3.4. Assume the $A^l$ are real-valued symmetric matrices that are Lipschitz. Then the
system (7) is well-posed in $L^2$: existence, uniqueness, and:
$$\|u\|_{C(L^2)} \lesssim \|u_0\|_{L^2}$$

Proof. Step 1: energy estimates. Consider
$$\partial_t u = A^l(x,t)\,\partial_l u + f$$
We write:
$$E(u) = \frac12 \int |u|^2\,dx$$
then:
$$\frac{d}{dt}E(u) = \int u\cdot u_t\,dx = \int u\cdot A^l(x,t)\,\partial_l u + u\cdot f\,dx
= \int \frac12\,\partial_l(u\cdot A^l u) - \frac12\,u\,(\partial_l A^l)\,u + u\cdot f\,dx$$
this last step requires the symmetry of $A^l$. So we get:
$$\frac{d}{dt}E(u) = \int -\frac12\,u\,(\partial_l A^l)\,u + u\cdot f\,dx \le c\,E(u) + E^{1/2}(u)\,\|f\|_{L^2_x}$$
by the Lipschitz bound on $A$; then use Gronwall again, to get:
$$\sup_{t\in[0,T]} E(u(t)) \lesssim E(u(0)) + \|f\|^2_{L^1_t L^2_x}$$


Step 2: existence.

Take 1 (by duality): identify the adjoint system. Take our system and pair it with a test
function:
$$\langle u_t - A^l(x,t)\partial_l u,\, v\rangle = \langle f, v\rangle$$
this is the same as:
$$\langle u,\, -\partial_t v - \partial_l(A^l(x,t)v)\rangle = \langle f, v\rangle$$
therefore the adjoint equation will be:
$$-\partial_t v = \partial_l(A^l(x,t)v) + g = A^l(x,t)\partial_l v + (\partial_l A^l(x,t))\,v + g$$
this is still a symmetric hyperbolic system (up to a harmless zeroth order term), so we still have energy estimates.

So our duality relation is: the original problem
$$\partial_t u - A^l\partial_l u = f$$
solved forward in time on $[0,T]$, and the adjoint problem
$$-\partial_t v - \partial_l(A^l v) = g$$
solved backward in time from $T$ to $0$. Integrate them by parts against $v$ and $u$ respectively and add
them up to get:
$$\int_{\mathbb{R}^n} uv\,dx\,\Big|_0^T = \int_0^T\!\!\int_{\mathbb{R}^n} fv - gu\,\,dx\,dt$$

the same argument as for the heat equation now gives the same proof of existence.
Notice that this problem is well-posed both forward and backward in time (unlike the heat equation).

Existence, take 2: the idea is to find a good method to construct approximate solutions.
We will use something called parabolic regularization (this is also known as viscosity
approximation). Our original equation is:
$$u_t = A^l\partial_l u$$
our new equation is:
$$u^\varepsilon_t = A^l\partial_l u^\varepsilon + \varepsilon\Delta u^\varepsilon$$
this new equation (by parabolic theory) is $L^2$ well-posed locally in $t$ (on a time interval that
depends on $\varepsilon$). So we would like to look for energy estimates which do not depend on $\varepsilon$.


Let's compute the energy for our new equation:
$$\frac{d}{dt}\frac12\|u^\varepsilon\|^2_{L^2} = \int u^\varepsilon\cdot A^l\partial_l u^\varepsilon + \varepsilon\,u^\varepsilon\,\Delta u^\varepsilon\,dx
\le C\|u^\varepsilon\|^2_{L^2} - \varepsilon\|\nabla u^\varepsilon\|^2_{L^2}$$
now if we use Gronwall, we get:
$$\|u^\varepsilon\|_{L^\infty L^2} \lesssim \|u^\varepsilon(0)\|_{L^2}$$
keeping the dissipative term, we get:
$$\|u^\varepsilon\|_{L^\infty L^2} + \underbrace{\sqrt{\varepsilon}\,\|u^\varepsilon\|_{L^2 H^1}}_{\text{parabolic regularizing effect}} \lesssim \|u_0\|_{L^2}$$

Let's now assume our initial data has more regularity, $u_0 \in H^2$. Then we have $H^2$ estimates:
$$\|u^\varepsilon\|_{L^\infty H^2} \lesssim \|u_0\|_{H^2}$$
so we have:
$$\partial_t u^\varepsilon - A^l\partial_l u^\varepsilon = \varepsilon\Delta u^\varepsilon$$
where the last term goes to $0$ in $L^2$. Therefore, by using the energy estimates for the hyperbolic system
(applied to differences of solutions), we get:
$$\|u^{\varepsilon_1} - u^{\varepsilon_2}\|_{L^\infty L^2} \lesssim \varepsilon_1 + \varepsilon_2 \to 0$$
so $u^\varepsilon \to u$ in $L^\infty L^2$ (by completeness); therefore $u$ solves the hyperbolic system.

Also $u^\varepsilon$ is bounded in $H^2$ and $u^\varepsilon \to u$ in $L^2$; therefore $u \in H^2$.
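On the Fourier side the uniformity in $\varepsilon$ is transparent; here is a sketch for the constant-coefficient scalar model $u_t = a u_x + \varepsilon u_{xx}$ on a periodic grid (all parameters are illustrative choices, not from the notes):

```python
import numpy as np

# Each Fourier mode evolves by the factor exp((i a k - eps k^2) t), whose
# modulus is <= 1, so the L^2 norm is bounded uniformly in eps: the
# energy estimate does not depend on the regularization.
N, a, T = 128, 1.0, 2.0
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)         # integer frequencies
u0 = np.exp(np.sin(x))                    # arbitrary smooth periodic data
norm0 = np.linalg.norm(u0) / np.sqrt(N)

final_norms = []
for eps in (1.0, 0.1, 0.01, 0.0):
    u_hat = np.fft.fft(u0) * np.exp((1j * a * k - eps * k**2) * T)
    u = np.fft.ifft(u_hat).real
    final_norms.append(np.linalg.norm(u) / np.sqrt(N))

print(norm0, final_norms)  # every final norm is <= the initial norm
```

For $\varepsilon = 0$ (pure transport) the norm is exactly preserved, matching the hyperbolic energy identity.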

Lecture 21 (4/16)

Today we will return to the question of existence of solutions for the wave equation.

Proof 1 (parabolic regularization). We are given $\Box u = 0$. We can reinterpret the wave
equation as a system, with new variables $(u, v = u_t)$:
$$\begin{cases} u_t = v \\ v_t = \Delta u \end{cases}$$
a parabolic regularization would look like:
$$\begin{cases} u^\varepsilon_t = v^\varepsilon + \varepsilon\Delta u^\varepsilon \\ v^\varepsilon_t = \Delta u^\varepsilon + \varepsilon\Delta v^\varepsilon \end{cases}$$


(Note that if $u \in H^k_x$, then $v \in H^{k-1}_x$, so we are looking for solutions $(u,v) \in C(H^k \times H^{k-1})$.)

Now these parabolic equations are well-posed in $H^k \times H^{k-1}$, and we have uniform energy
estimates in $H^k \times H^{k-1}$ as $\varepsilon \to 0$.

Now take $(u_0, v_0) \in H^k \times H^{k-1}$; then $(u^\varepsilon, v^\varepsilon)$ are uniformly bounded in $H^k \times H^{k-1}$. So if
we pass to the limit, we get $(u^\varepsilon, v^\varepsilon) \to (u, v)$ in $H^{k-2} \times H^{k-3}$ (same argument as for first order
hyperbolic systems).

Proof 2. Question: can we write the wave equation as a hyperbolic system? With
$\partial_t^2 u - \Delta u = 0$, define the new variables:
$$v_0 = \partial_t u, \qquad v_j = \partial_j u, \quad j = 1,\dots,n$$
then
$$\begin{cases} \partial_t v_0 = \partial_j v_j \\ \partial_t v_j = \partial_j v_0 \end{cases}$$
so $\partial_t v = Av$ with:
$$A = \begin{pmatrix} 0 & \partial_1 & \cdots & \partial_n \\ \partial_1 & 0 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ \partial_n & 0 & \cdots & 0 \end{pmatrix}$$
this is a symmetric matrix.

A priori, for $v$ we impose no compatibility conditions. But to interpret $v$ as a wave
solution, we need $\partial_i v_j = \partial_j v_i$. We assume this holds initially and then we try to propagate
this:
$$\partial_t(\partial_i v_j - \partial_j v_i) = \partial_i\partial_j v_0 - \partial_j\partial_i v_0 = 0$$
If we had variable coefficients (though not in the most general case),
$$\partial_t^2 u - g^{ij}\partial_i\partial_j u = 0$$
assuming $g^{0j} = 0$, we get:
$$\begin{cases} \partial_t v_0 = g^{ij}\partial_i v_j \\ \partial_t v_j = \partial_j v_0 \end{cases}$$
(we require $g^{ij}$ to be positive definite). Then:
$$A = \begin{pmatrix} 0 & g^{1j}\partial_j & \cdots & g^{nj}\partial_j \\ \partial_1 & 0 & & \\ \vdots & & \ddots & \\ \partial_n & 0 & \cdots & 0 \end{pmatrix}$$
which is no longer symmetric;


we would be in trouble, but we can just redefine our norm as:
$$\|v\|_g^2 = v_0^2 + g^{ij} v_i v_j \approx \|v\|^2$$
Interesting remark: $A$ becomes symmetric relative to this new inner product:
$$\frac{d}{dt}\frac12\|v\|_g^2 = O(|\nabla g|)\,|v|^2 + \partial_i\big(g^{ij} v_0 v_j\big)$$
(this is like a density–flux relation). So now if we integrate in $x$, we get:
$$\frac{d}{dt}\frac12\int \|v\|_g^2\,dx = \int O(|\nabla g|)\,\|v\|^2\,dx$$
which is what we need for Gronwall's inequality to apply.

Proof 3. Can we think of the wave equation as an ODE? $u_{tt} = \Delta u$.

Two difficulties:

1. our space $X = H^1 \times L^2$ is infinite dimensional

2. the operator $\Delta$ is unbounded on $X$

Remark 3.2. If we had an equation like $u_{tt} = Lu$ with $L$ bounded, we could think of this as
an ODE in an infinite dimensional space and solve it.

Idea: consider some finite dimensional approximation of the wave equation.

Setup: let $\Omega$ be some bounded domain, nice boundary, Dirichlet boundary condition. Let
$(u_j)$ be an orthonormal basis of $L^2(\Omega)$ which is also an orthogonal basis of $H^1(\Omega)$. Then
expand:
$$u(t) = \sum_{j=1}^{\infty} c_j(t)\,u_j$$
and look for approximate solutions of the form:
$$u^{(m)}(t) = \sum_{j=1}^{m} c_j(t)\,u_j$$
then:
$$\partial_t^2 u^{(m)} = \sum_{j=1}^{m} \partial_t^2 c_j\,u_j, \qquad g^{kl}\partial_k\partial_l u^{(m)} = \sum_{j=1}^{m} c_j\,g^{kl}\partial_k\partial_l u_j$$
So the approximate equation is the wave equation projected onto $\mathrm{span}(u_1,\dots,u_m)$:
$$\partial_t^2 c_j = \sum_{i=1}^{m} \big\langle g^{kl}\partial_k\partial_l u_i,\, u_j\big\rangle\, c_i, \qquad j = 1,\dots,m$$
So this is an ODE system for the coefficients $c_j$, which we can solve
by ODE theory. We want to prove uniform estimates with respect to $m$. This is known as the
Galerkin approximation (this is the proof that Evans uses).
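A minimal instance of the Galerkin scheme: for $u_{tt} = u_{xx}$ on $(0,\pi)$ with Dirichlet conditions and the sine basis, the projected ODE system decouples and its energy is conserved uniformly in the truncation $m$ (a sketch; the basis choice and normalization are assumptions made for illustration):

```python
import numpy as np

# For the basis u_j = sin(jx) (an orthogonal eigenbasis) the projected
# system decouples into c_j'' = -j^2 c_j, solved exactly by
# c_j(t) = c_j(0) cos(jt) + c_j'(0) sin(jt)/j.
m = 8
rng = np.random.default_rng(1)
c0 = rng.standard_normal(m)      # initial coefficients c_j(0)
d0 = rng.standard_normal(m)      # initial velocities c_j'(0)
j = np.arange(1, m + 1)

def energy(c, d):
    # E = (1/2) sum_j (c_j'^2 + j^2 c_j^2), up to basis normalization
    return 0.5 * np.sum(d**2 + j**2 * c**2)

E0 = energy(c0, d0)
for t in np.linspace(0.0, 3.0, 7):
    c = c0 * np.cos(j * t) + d0 * np.sin(j * t) / j
    d = -c0 * j * np.sin(j * t) + d0 * np.cos(j * t)
    assert abs(energy(c, d) - E0) < 1e-10 * max(1.0, E0)
print("Galerkin energy conserved:", E0)
```

The uniform-in-$m$ bound is exactly this energy conservation, which is what passes to the limit.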

From a numerical perspective, we would like to discretize the domain.

Proof 4. We perform a time discretization: $t = 0,\ t = \varepsilon,\ t = 2\varepsilon,\ \dots$. The key point
is that the energy estimates hold.

Given an ODE:
$$\begin{cases} x' = F(x) \\ x(0) = x_0 \end{cases}$$
set $x(\varepsilon) \approx x(0) + \varepsilon F(x(0))$, $x(2\varepsilon) \approx x(\varepsilon) + \varepsilon F(x(\varepsilon))$, and so on. If $F$ is Lipschitz,
then this converges. This is the (explicit) Euler method. This doesn't work directly for PDEs. A variant of this
is the implicit Euler method, $x(\varepsilon) = x(0) + \varepsilon F(x(\varepsilon))$. The difficulty is the equation for
$x(\varepsilon)$. Next time we will apply this to the wave equation.
Lecture 22 (4/21)

Recall from last time: we were looking for solutions of the wave equation.

One method we talked about for approximating solutions was the Euler method for
the equation:
$$\begin{cases} u' = F(u) \\ u(0) = u_0 \end{cases}$$
we discretize in time, with time step $\varepsilon$. Then we set:
$$\tilde u(\varepsilon) = u(0) + \varepsilon F(u(0))$$
$$\tilde u(2\varepsilon) = \tilde u(\varepsilon) + \varepsilon F(\tilde u(\varepsilon))$$
By a Taylor expansion, we know that the error per step is $O(\varepsilon^2)$.

If we are solving from $t = 0$ to $t = 1$, then the number of steps is comparable to $\varepsilon^{-1}$;
therefore the total error is comparable to $\varepsilon^2\cdot\varepsilon^{-1} = \varepsilon$. Something important for this to work
is that $F$ must be Lipschitz continuous.

We can modify this method as:
$$u(\varepsilon) = u(0) + \varepsilon F(u(\varepsilon))$$


which is the implicit Euler method.

Let's implement this for the wave equation. Given $u_t = v$ and $v_t = \Delta u$, then:
$$\begin{pmatrix} u \\ v \end{pmatrix}_t = A\begin{pmatrix} u \\ v \end{pmatrix}, \qquad
A = \begin{pmatrix} 0 & 1 \\ \Delta & 0 \end{pmatrix}$$
Then let:
$$\begin{pmatrix} u \\ v \end{pmatrix}(\varepsilon) = \begin{pmatrix} u \\ v \end{pmatrix}(0) + \varepsilon A\begin{pmatrix} u \\ v \end{pmatrix}(\varepsilon)$$
i.e.
$$u(\varepsilon) = u(0) + \varepsilon v(\varepsilon)$$
$$v(\varepsilon) = v(0) + \varepsilon\Delta u(\varepsilon)$$
so we want to solve this system in the energy space: $(u(0), v(0)) \in \dot H^1 \times L^2$, and require that
$(u(\varepsilon), v(\varepsilon))$ belongs to the same space. Note that this is an elliptic equation (substitute $v(\varepsilon)$ into the first
equation):
$$u(\varepsilon) = u(0) + \varepsilon v(0) + \varepsilon^2\Delta u(\varepsilon)$$
$$(1 - \varepsilon^2\Delta)\,u(\varepsilon) = u(0) + \varepsilon v(0)$$
so this is a nondegenerate elliptic equation. Moreover:
$$E(u(0), v(0)) = \int |\nabla u(0)|^2 + |v(0)|^2\,dx$$
$$= \int |\nabla u(\varepsilon) - \varepsilon\nabla v(\varepsilon)|^2 + |v(\varepsilon) - \varepsilon\Delta u(\varepsilon)|^2\,dx$$
$$= \int |\nabla u(\varepsilon)|^2 + |v(\varepsilon)|^2\,dx + \varepsilon^2\int |\nabla v(\varepsilon)|^2 + |\Delta u(\varepsilon)|^2\,dx
- 2\varepsilon\int \nabla u(\varepsilon)\cdot\nabla v(\varepsilon) + v(\varepsilon)\,\Delta u(\varepsilon)\,dx$$
the last term vanishes after an integration by parts, so we get:
$$E(u(\varepsilon), v(\varepsilon)) \le E(u(0), v(0))$$
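The energy dissipation of the implicit Euler step can be checked mode by mode on the Fourier side (a sketch in a periodic setting; frequencies, step size, and data are illustrative assumptions):

```python
import numpy as np

# Per frequency k, the implicit Euler step for the wave system reads
#   (1 + eps^2 k^2) u1 = u0 + eps*v0,   v1 = v0 - eps*k^2*u1,
# and the energy sum_k k^2|u_k|^2 + |v_k|^2 never increases.
rng = np.random.default_rng(2)
k = np.arange(1, 65)                 # frequencies
u = rng.standard_normal(k.size)      # Fourier coefficients of u(0)
v = rng.standard_normal(k.size)      # Fourier coefficients of v(0)
eps = 0.1

def energy(u, v):
    return np.sum(k**2 * u**2 + v**2)

E_hist = [energy(u, v)]
for _ in range(20):                  # 20 implicit Euler steps
    u = (u + eps * v) / (1 + eps**2 * k**2)   # solve (1 - eps^2 Delta) u1 = u0 + eps v0
    v = v - eps * k**2 * u
    E_hist.append(energy(u, v))

print(E_hist[0], E_hist[-1])  # monotonically nonincreasing energy
```

The per-mode step is $(I - \varepsilon A_k)^{-1}$ with $A_k$ skew, which is a contraction — the same computation as the integration by parts above.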

3.7 Linear Semigroup


Given the equation:
$$\begin{cases} u_t = Au \\ u(0) = u_0 \in X \end{cases}$$
with $X$ a Banach space and $A : D(A) \to X$ with $D(A) \subset X$, we ask when we can solve this.

Write the solution map as $u(s) \mapsto u(t) = S(t,s)\,u(s)$, with $S(t,s) : X \to X$ bounded linear operators.


Definition 3.7. $\{S(t,s)\}_{t\ge s}$ is a $C^0$ semigroup if

1. $S(t,s) : X \to X$ is bounded

2. $S(t,t) = I$

3. $S(t_1, t_2)\,S(t_2, t_3) = S(t_1, t_3)$

4. $\lim_{t\to 0} S(t,0)x = x$ for all $x \in X$

Remark 3.3. If $A$ is independent of $t$, then $S(t,s) = S(t-s)$, in which case we have
$$S(t)S(s) = S(t+s)$$

Definition 3.8. $A$ is the infinitesimal generator of $S(t)$ if
$$Ax = \lim_{h\to 0}\frac{S(h)x - x}{h}$$
($u = S(t)x$ should solve $u' = Au$), with $D(A) = \{x \in X : \text{this limit exists}\}$.
Proposition 3.6. $A$ has the following properties:

1. $A$ is densely defined, $\overline{D(A)} = X$

2. $A$ is closed$^a$ (so the graph $\{(x, Ax)\}$ of $A$ is closed)

Proof. Given $x \in X$, we have:
$$x = \lim_{h\to 0}\frac1h\int_0^h S(t)x\,dt$$
then we claim that $x_h = \int_0^h S(t)x\,dt$ belongs to $D(A)$. To see this, we need to look at:
$$\lim_{k\to 0}\frac{S(k)\int_0^h S(t)x\,dt - \int_0^h S(t)x\,dt}{k}
= \lim_{k\to 0}\frac{\int_k^{h+k} S(t)x\,dt - \int_0^h S(t)x\,dt}{k}
= \lim_{k\to 0}\frac{\int_h^{h+k} S(t)x\,dt - \int_0^k S(t)x\,dt}{k}
= S(h)x - x$$
therefore $x_h = \int_0^h S(t)x\,dt \in D(A)$ and $Ax_h = S(h)x - x$; density follows since $h^{-1}x_h \to x$.

We omit the second part of the proposition.

$^a$ if $x_n \to x$ and $Ax_n \to y$, then $x \in D(A)$ and $y = Ax$


Question: when is an unbounded operator $A$ the generator of a $C^0$ semigroup?$^a$

Returning to the implicit Euler method, we have $u(\varepsilon) = u(0) + \varepsilon A u(\varepsilon)$. Rearranging
this, we have:
$$(I - \varepsilon A)\,u(\varepsilon) = u(0)$$
We would need $(I - \varepsilon A)$ to have a bounded inverse. So suppose that $A$ generates the $C^0$
semigroup $S(t) = e^{At}$, with $\|S(t)\| \le Ce^{Mt}$. If $\lambda > M$, then$^b$
$$\int_0^\infty e^{-\lambda t} S(t)\,dt = \int_0^\infty e^{(A-\lambda)t}\,dt = (\lambda - A)^{-1}$$
Conclusion: if we have this bound on $\|S(t)\|$, then $(\lambda - A)^{-1}$ exists as a bounded operator, with
$$\|(\lambda - A)^{-1}\| \le \frac{C}{\lambda - M}$$
for $\lambda > M$. We call $\sigma(A) = \{\lambda : A - \lambda \text{ is not invertible}\}$ the spectrum of $A$, and
$\rho(A) = \{\lambda : A - \lambda \text{ is invertible}\}$ the resolvent set; then $(M,\infty) \subset \rho(A)$.

Theorem 3.5 (Hille–Yosida). A closed, densely defined operator $A$ generates a $C^0$ semigroup
bounded by $Ce^{Mt}$ if and only if $(M,\infty) \subset \rho(A)$ and
$$\|(\lambda - A)^{-N}\| \le \frac{C}{(\lambda - M)^N}$$
for all $N$.

Remark 3.4. Necessarily $C \ge 1$ (since $S(0) = I$).

Remark 3.5. If $C = 1$, then it is enough to have $\|(\lambda - A)^{-1}\| \le \dfrac{1}{\lambda - M}$.

Definition 3.9. $S(t)$ is a $C^0$ contraction semigroup if $\|S(t)\| \le 1$ for all $t$ (this is the same
thing as saying $C = 1$, $M = 0$).

Theorem 3.6. $A$ is the generator of a $C^0$ contraction semigroup if and only if $(0,\infty) \subset \rho(A)$
and $\|(\lambda - A)^{-1}\| \le 1/\lambda$ for $\lambda > 0$.

Observe that we proved this for the wave equation.

I don’t see this, so I will just work through the proof given in Evans.
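In finite dimensions the contraction-semigroup resolvent bound is easy to test: a skew-symmetric matrix $A$ (a finite-dimensional analogue of the wave generator, with random illustrative data) satisfies $\|(\lambda - A)^{-1}\| \le 1/\lambda$:

```python
import numpy as np

# A skew-symmetric A generates a contraction semigroup (e^{At} is
# orthogonal); its eigenvalues are purely imaginary and A is normal, so
# ||(lambda - A)^{-1}|| = 1 / min_j sqrt(lambda^2 + mu_j^2) <= 1/lambda.
rng = np.random.default_rng(3)
S = rng.standard_normal((6, 6))
A = S - S.T                        # skew-symmetric generator

ok = True
for lam in (0.1, 1.0, 10.0):
    R = np.linalg.inv(lam * np.eye(6) - A)
    ok = ok and np.linalg.norm(R, 2) <= 1.0 / lam + 1e-12
print(ok)  # True: the Theorem 3.6 resolvent bound holds
```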
Theorem 3.7. For
$$\begin{cases} u_{tt} + Lu = 0 & \text{in } U_T \\ u|_{\partial U} = 0 \\ u(0) = g \\ u_t(0) = h \end{cases}$$
we have a solution given by a contraction semigroup on $H^1_0 \times L^2$.


$^a$ This would then provide a solution to the PDE, where the semigroup is time evolution.
$^b$ I don't think the following inequalities are straightforward; this is a several-line proof in Evans.


Proof. Let $v = u_t$; then we have:
$$\begin{cases} u_t = v \\ v_t + Lu = 0 \\ u|_{\partial U} = 0 \\ u(0) = g \\ v(0) = h \end{cases}$$
and we were given (I forgot it in the theorem statement): $Lu = -(a^{ij}u_{x_i})_{x_j} + cu$, with $c \ge 0$, $a^{ij} = a^{ji}$.

Then our generator $A : (u, v) \mapsto (v, -Lu)$ will be defined on $D(A) = (H^2(U) \cap H^1_0(U)) \times H^1_0(U)$.

We need to show that $A$ is closed and densely defined.

$D(A)$ is clearly dense in $H^1_0(U) \times L^2(U)$.

If $(u_k, v_k) \to (u, v)$ and $A(u_k, v_k) \to (f, g)$ (both in $H^1_0(U) \times L^2(U)$), then $(v_k, -Lu_k) \to (f, g)$,
therefore $f = v$ and $-Lu_k \to g$.

Lecture 23 (4/23)

3.8 Homework Problems


Today’s lecture will be a discussion on homework problems.

Example 3.2 (Homework 5, problem 1). Given a Lorentzian metric $g$ and the energy
momentum tensor $T^{\alpha\beta}$, and given vector fields $X$ and $Y$, figure out when $X_\alpha T^{\alpha\beta} Y_\beta$ is
positive definite. It is, if and only if, $X$ and $Y$ are both forward time-like or both backward
time-like.

Proof. We have $T^{\alpha\beta} = \partial^\alpha u\,\partial^\beta u - \dfrac12 g^{\alpha\beta}\,\partial^\gamma u\,\partial_\gamma u$.

First assume $X$ and $Y$ are forward time-like. One approach would be to simplify the
problem using symmetries: diagonalize $g^{\alpha\beta}$ to become the Minkowski metric $m^{\alpha\beta}$, then simplify our choice
of $X$ and $Y$ without changing $m$ using the Lorentz group. With this
simplification, the rest is easy.

Another, more geometric approach would be to let $N = \nabla u$; then we have:
$$X_\alpha T^{\alpha\beta} Y_\beta = X_\alpha N^\alpha N^\beta Y_\beta - \frac12 X_\alpha Y^\alpha\, N_\gamma N^\gamma
= (X\cdot N)(Y\cdot N) - \frac12 (X\cdot Y)(N\cdot N)$$
note that if $X, Y$ are forward time-like, then $X\cdot Y < 0$; also, the orthogonal complement of a time-like vector is
space-like.


Then we can consider the orthogonal decomposition $N = aX + bY + N_1$ with $N_1 \perp X, Y$; substituting and
expanding, we see that:
$$X_\alpha T^{\alpha\beta} Y_\beta = \frac12 a^2 X^2 (X\cdot Y) + \frac12 b^2 Y^2 (X\cdot Y) + ab\,X^2 Y^2 - \frac12 (X\cdot Y)\,N_1^2$$
The $N_1$ term is nonnegative since $X\cdot Y < 0$ and $N_1$ is space-like, and the quadratic form in $(a,b)$ is
nonnegative iff $(X^2 Y^2)^2 \le X^2 (X\cdot Y)\,Y^2 (X\cdot Y)$. So the conclusion is that we want $X^2 Y^2 \le (X\cdot Y)^2$,
the reverse Cauchy–Schwarz inequality for time-like vectors.

Example 3.3 (Homework 5, problem 3). Given $\Box_g u = 0$ on a region $\Omega = [0,T]\times D$, with $[0,T]\times\partial D$
the lateral boundary, we impose the Dirichlet boundary condition,
and we would like energy estimates.

Proof. We have $\partial_\alpha(T^{\alpha\beta}X_\beta) = O(|\nabla u|^2)$. If we choose $X$ to be forward time-like and integrate
over $\Omega$, then we get:
$$\int_\Omega \partial_\alpha(T^{\alpha\beta}X_\beta)\,dx = \int_\Omega O(|\nabla u|^2)\,dx$$


the term on the right goes into Gronwall's inequality, so we need to work on the other term. By the divergence theorem, this is:
$$\int_\Omega \partial_\alpha(T^{\alpha\beta}X_\beta)\,dx = \int_{\text{top}} T^{0\beta}X_\beta - \int_{\text{bot}} T^{0\beta}X_\beta
+ \int_{[0,T]\times\partial D} N_\alpha T^{\alpha\beta}X_\beta\,d\sigma$$
with $N$ the outer normal. The first two terms are positive definite, so we would like the
third term to be $\le 0$.

Question: can we choose $X$ forward time-like so that this expression is nonpositive?

Note that $N_\alpha T^{\alpha\beta}X_\beta$ is bilinear in $\nabla u$; if $u = 0$ on the boundary, then $\nabla u \parallel N$, so that:
$$N_\alpha T^{\alpha\beta}X_\beta = (N\cdot N)(X\cdot N) - \frac12 (N\cdot N)(X\cdot N) = \frac12 (N\cdot N)(X\cdot N)$$
so to get our third term nonpositive, we need $X\cdot N \le 0$: either $X$ points inward
or is tangent to the boundary.

If $X$ is tangent to the boundary, then we have no boundary terms and we get $\sup_{t\in[0,T]} E(u(t)) \lesssim E(u(0))$.

If $X$ is pointing inwards, then we get a good contribution from the boundary:
$$\sup_{t\in[0,T]} E(u(t)) + \int_{[0,T]\times\partial D} \Big|\frac{\partial u}{\partial\nu}\Big|^2\,d\sigma \lesssim E(u(0))$$
so if $(u_0, u_1) \in H^1\times L^2$, then $\dfrac{\partial u}{\partial\nu}\Big|_{\partial D} \in L^2$. It turns out that this is useful in control theory.
Example 3.4 (Homework 4, problem 2). Given
$$\begin{cases} u_t - \Delta_g u = 0 \\ u(0) = u_0 \end{cases}$$
with $u_0 \in L^2$, show that $\|u(t)\|_{H^1} \le \dfrac{c}{\sqrt t}$.

Proof. Given $t$, choose $t_0 < t$ such that $u(t_0) \in H^1$, then use well-posedness to get $u(t) \in H^1$
and $\|u(t)\|_{H^1} \le \|u(t_0)\|_{H^1}$. We also have the parabolic energy estimate:
$$\int_0^t \|u(s)\|^2_{H^1}\,ds \lesssim \|u_0\|^2_{L^2}$$
so we can choose $t_0$ such that
$$\|u(t_0)\|^2_{H^1} \le \frac1t\int_0^t \|u(s)\|^2_{H^1}\,ds \lesssim \frac1t\|u_0\|^2_{L^2}$$
(such a $t_0$ exists since the average bounds the minimum from above); then take the square root.


Example 3.5 (Homework 3, problem ?). Given $A_j$ real-valued, and $\Delta_{g,A} = (\partial_j - iA_j)\,g^{jk}\,(\partial_k - iA_k)$,
we want to know if it is self-adjoint, and to compute the spectrum.

Proof. The first part is easy; just don't break it up: use facts about products of skew-
and self-adjoint operators.

For the second, we can compute the quadratic form as:
$$B(u,u) = -\int \Delta_{g,A} u\,\bar u = \int g^{jk}(\partial_j - iA_j)u\,\overline{(\partial_k - iA_k)u}
\ge c\int \sum_j |(\partial_j - iA_j)u|^2 > 0 \quad (u \ne 0)$$

Lecture 24 (4/28)

4 Nonlinear PDEs
4.1 First Order Nonlinear PDEs
Let's first look at first order scalar equations:
$$u : \Omega \to \mathbb{R}$$
with $\Omega \subset \mathbb{R}^n$, and an equation $F(x, u, Du) = 0$. We will now use the notation $Du = \nabla u$.

Or we could look at first order systems:
$$u : \Omega \to \mathbb{R}^m$$
with $F(x, u, Du) = 0$ (this is a fully nonlinear PDE).

If $F$ depends linearly on $Du$:
$$\sum_k A^k(x, u)\,\partial_k u = 0$$
these are called quasilinear PDEs.

Let's begin with first order scalar equations, classified from simplest to hardest.

1. Linear equations
$$\sum_{j=1}^n a_j(x)\,\partial_j u + bu = f$$
we can think of $\sum_j a_j(x)\partial_j$ as a directional derivative, which can be written as a vector
field $X = (a_1, \dots, a_n)$, so we get:
$$Xu + bu = f$$


Then the integral curves of $X$ are
$$\begin{cases} \dot x = X(x) \\ x(0) = x_0 \end{cases} \qquad x = x(t, x_0)$$
which is an ODE. Then if I look at $u(x(t, x_0))$ and differentiate with respect to
$t$, we get:
$$\dot u + b\,u(x(t, x_0)) = f$$
therefore to solve our PDE, all we have to do is know how to solve ODEs. This is
called a transport equation.

An initial value problem would be something like: we are given a surface $\Sigma$ and
given $u|_\Sigma$, and asked to find $u$. Then we would need all integral curves to intersect $\Sigma$.
A bad situation would be if an integral curve were tangent to the surface $\Sigma$, so we need to impose conditions on $\Sigma$.

Definition 4.1. The surface $\Sigma$ is called noncharacteristic if $X$ is transversal to $\Sigma$ at
every point.

Theorem 4.1. Noncharacteristic first order linear scalar PDEs are well-posed.
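The recipe "solve an ODE along each integral curve" can be sketched numerically; here for the constant-coefficient transport equation $u_t + a u_x = 0$ (taking $b = 0$, $f = 0$, and illustrative parameters), tracing each characteristic back to the initial surface $t = 0$ reproduces $u(t,x) = u_0(x - at)$:

```python
import numpy as np

# Characteristics solve xdot = a, and u is constant along them.
a, T = 0.7, 1.5
u0 = lambda x: np.exp(-x**2)

def solve_by_characteristics(t, x, steps=200):
    """Trace the characteristic through (t, x) back to time 0 with Euler steps."""
    dt = t / steps
    y = x
    for _ in range(steps):
        y -= a * dt          # xdot = a, traced backwards in time
    return u0(y)             # u is constant along the characteristic

xs = np.linspace(-3, 3, 11)
numeric = np.array([solve_by_characteristics(T, x) for x in xs])
exact = u0(xs - a * T)
print(np.max(np.abs(numeric - exact)))  # tiny: the two agree
```

For variable coefficients $a_j(x)$ the same code applies with an ODE solver replacing the exactly-integrable Euler steps.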

2. Semi-linear equations
$$a_j(x)\,\partial_j u + b(x, u) = 0$$
so only the lower order terms are nonlinear. There is essentially no difference from the
linear case:
$$\begin{cases} \dot x = X(x) \\ \dot u + b(x, u) = 0 \end{cases}$$
we solve the first equation to get the integral curves as before; now
the second equation is nonlinear (this is the only difference).

We have the same notion of characteristic surfaces. We get the same theorem as before.

There is a standard normalization. Given our surface $\Sigma$, we would like to make a
change of coordinates to flatten our surface, and call the normal direction time;
then we get
$$u_t + \tilde a_j(t, x)\,\partial_j u + \tilde b(x, u) = 0$$

3. Quasilinear equations
$$a_j(x, u)\,\partial_j u + b(x, u) = 0$$
Then our vector field is $X = (a_j(x,u))$, $X = X(x, u)$: if $u$ is given, then we
have $\dot x = X(x, u)$ to solve, coupled with $\dot u + b(x, u) = 0$. So it is most natural to solve
this as a system.

Here, there are two new features:

(a) the initial value problem is $u|_\Sigma = u_0$, but now the noncharacteristic condition
depends on $u_0$;
(b) characteristic curves can intersect, and therefore well-posedness is only local
well-posedness. Can we continue solutions after characteristics intersect?

4. Fully nonlinear equations
$$H(x, u, \partial u) = 0$$
we can consider the linearized equation. We have a one parameter family of solutions:
$$u_h(x) = u(h, x)$$
with $h \in \mathbb{R}$ and $x \in \mathbb{R}^n$. Then our original solution

I missed the rest of this lecture.

Lecture 25 (4/30)

Recall we are studying scalar first order nonlinear PDEs:
$$H(x, u, Du) = 0$$


We can solve this locally using the method of characteristics. And we also discussed the
initial value problem with data on a surface (and imposed the condition that the surface
should be noncharacteristic). Unfortunately, characteristics may still intersect, which means
that we have no global well-posedness in $C^1$.

There are two scenarios: either $\nabla u$ has jumps ($u \in \mathrm{Lip}$), or $u$ itself has jumps (shocks).

There are two classes of $H$:


1. Hamilton–Jacobi equations.

The simplest case is $H = H(x, Du)$, or possibly $H = H(x, u, Du)$ with the dependence
on $u$ very mild (possibly Lipschitz); a term like $u\,\partial u$ is not allowed.

For these equations, we hope to have corners but no discontinuities.

2. Conservation laws
$$\partial_t u + \partial_k(F^k(u)) = 0$$
then the solutions $u$ we look for have jump discontinuities (so there is no chain rule).

For Hamilton–Jacobi equations we have the evolution type:
$$u_t + H(x, Du) = 0$$
in $\mathbb{R}\times\mathbb{R}^n$, and the stationary type
$$u + H(x, Du) = 0$$
in $\mathbb{R}^n$. We will look at evolution type problems with an initial condition:
$$\begin{cases} u_t + H(x, Du) = 0 \\ u(0) = u_0 \end{cases}$$
Example 4.1. $|Du| = 1$ in 1-d with $u(0) = u(1) = 1$. Note that there is no smooth solution:
we must have a corner where we jump from slope $1$ to slope $-1$ (sketches of some
solutions omitted here).


Returning to $u_t + H(x, Du) = 0$, let's consider a parabolic approximation:
$$u^\varepsilon_t + H(x, Du^\varepsilon) = \varepsilon\Delta u^\varepsilon$$
for this equation, we have long time solutions using the maximum principle (so that the solution
$u^\varepsilon$ stays bounded). Then we define the "good solution" as $u = \lim_{\varepsilon\to 0} u^\varepsilon$.

Problems: convergence (we cannot pass to the limit in the equation); also, does the limit
exist?

We should give a name to these solutions: we call the $u^\varepsilon$ method the viscous approximation,
and call the limit the viscosity solution.

We must find an intrinsic way to characterize solutions.

We want to define a notion of viscosity solution to $H(x, u, Du) = 0$. For this we use test
function inequalities. For smooth $\varphi$ touching $u$ from above
(i.e. $x_0$ is a local max for $u - \varphi$) we want $H(x_0, u(x_0), D\varphi(x_0)) \le 0$, and for the other side,
$\varphi$ touching from below (i.e. $x_0$ is a local min for $u - \varphi$), we want $H(x_0, u(x_0), D\varphi(x_0)) \ge 0$.
The first ones we call viscosity subsolutions, and the second we call viscosity supersolutions.


Let's see some theorems. Given the Cauchy problems
$$\begin{cases} u_t + H(x, Du) = 0 \\ u(0) = u_0 \end{cases} \qquad
\begin{cases} v_t + H(x, Dv) = 0 \\ v(0) = v_0 \end{cases}$$
Uniqueness (max principle): suppose $u, v$ are continuous viscosity solutions. Then $u_0 \le v_0$
implies that $u \le v$. (Max principle enhanced, or comparison principle: given $u$ a subsolution
and $v$ a supersolution, then $u_0 \le v_0$ implies that $u \le v$.)

Existence: viscosity solutions exist! Suppose there exist a subsolution $u^-$ and a supersolution
$u^+$ with $u^- \le u^+$. Then there exists a solution $u$ with $u^- \le u \le u^+$.

We can relax the requirement that solutions $u$ and $v$ be continuous, and instead let them be upper or lower
semicontinuous for sub- and supersolutions respectively. This helps with the existence proof, but makes
the comparison harder.

The proof of uniqueness (as well as the definitions of viscosity solutions) can be found in
Evans, chapter 10.

Here is the idea of the proof for the comparison theorem.

Proof (sketch). We would like to look at a max point $(t_0, x_0)$ for $u - v$ (where $u$ is a subsolution
and $v$ a supersolution). However, $u$ and $v$ are not differentiable!

Solution: double the variables! Look at a max point $(t_{\varepsilon\delta}, x_{\varepsilon\delta}), (s_{\varepsilon\delta}, y_{\varepsilon\delta})$ for:
$$u(t,x) - v(s,y) - \underbrace{\frac1\delta\big((x-y)^2 + (t-s)^2\big)}_{A} - \underbrace{\frac{\varepsilon}{T-t} - \frac{\varepsilon}{T-s}}_{B}$$
where $A$ is to push $x_\varepsilon, y_\varepsilon$ and $t_\varepsilon, s_\varepsilon$ together, and $B$ is to push away from the final time $T$.

Apply the definitions and pass to the limit: as $\delta \to 0$, $\varepsilon \to 0$, one gets
$$(t_{\varepsilon\delta}, x_{\varepsilon\delta}) \to (t_0, x_0), \qquad (s_{\varepsilon\delta}, y_{\varepsilon\delta}) \to (t_0, x_0)$$
and a contradiction, unless the max is at the initial time.

Here is the idea of the proof of existence:

Proof (sketch). Perron's method: given super- and subsolutions $u^+$ and $u^-$, we look for a solution $u$ in
between.

Then we construct $u$ as follows. Define:
$$u(x) = \sup_{\tilde u \text{ subsolution}} \tilde u(x)$$
(the largest subsolution), or
$$u(x) = \inf_{\tilde u \text{ supersolution}} \tilde u(x)$$
and show that $u$ is both a sub- and a supersolution, using directly the definitions.

Lecture 26 (5/5)

Recall that last time we were talking about Hamilton–Jacobi equations, scalar
equations in $n$ dimensions:
$$u_t + H(x, Du) = 0$$
and our goal is to meaningfully continue solutions after characteristics intersect. The motivation
was viscous approximation, which led to viscosity solutions. This is a maximum
principle based theory.

Example 4.2. Consider the Hamilton–Jacobi equation in 1-d on the interval $[0,1]$:
$$\begin{cases} |\partial_x u| = 1 \\ u(0) = u(1) = 0 \end{cases}$$
we saw this before, and know there must be at least one turning point. We could have a
corner that goes up then down, or down then up.


for the first we apply the subsolution test:
$$|\partial_x\varphi(x_0)| \le 1$$
(this is true), while for the second, we would apply the supersolution test:
$$|\partial_x\varphi(x_0)| \ge 1$$
but this is false. And so only corners of the first type are allowed, and so we get the unique
viscosity solution.

Example 4.3. Now consider
$$-|\partial_x u| = -1$$
for the first type of corner, we apply the subsolution test, $-|\partial_x\varphi(x_0)| \le -1$, which is false.
Therefore we are only allowed corners of the second type, so the unique solution is flipped
about the horizontal axis. Therefore we cannot just change the sign of a Hamilton–Jacobi equation
and keep our solutions.

4.2 Conservation Laws

$$\partial_t u + \partial_x F(u) = 0$$

Case 1: $u$ is scalar and we have one dimension.

Case 2: $u$ is scalar and we have $n$ dimensions, so we have:
$$\partial_t u + \sum_j \partial_{x_j} F_j(u) = 0$$

Case 3: $u = (u_1, \dots, u_m)$ in 1-d; we have:
$$\partial_t u_i + \partial_x F_i(u) = 0$$

Case 4: $u \in \mathbb{R}^m$ in $n$ dimensions, so we have:
$$\partial_t u_i + \sum_j \partial_j F_{ij}(u) = 0$$


In the first two cases, you can use the method of characteristics. In both of these, we can
use viscosity approximation to construct solutions. This leads to something called entropy
solutions.

In cases 3 and 4, for local solutions, we relate this to nonlinear hyperbolic systems. We
allow for shocks (jump singularities). We have a fairly complete theory in 1-d, but it is wide
open in higher dimensions.

Returning to case 1: with $u = \partial_x v$,
$$\partial_t u + \partial_x F(u) = 0 \implies \partial_t\partial_x v + \partial_x F(\partial_x v) = 0 \implies \partial_t v + F(\partial_x v) = 0$$
we began with our conservation law and ended up with a Hamilton–Jacobi equation.

If viscous approximations work (and they do), then solving the conservation law is equivalent
to solving the Hamilton–Jacobi equation.

Key observation: the equation has to be satisfied in the sense of distributions. From
$$u^\varepsilon_t + \partial_x F(u^\varepsilon) = \varepsilon\Delta u^\varepsilon$$
with $u^\varepsilon \to u$, we get $u_t + \partial_x F(u) = 0$ distributionally. We can use this to study shocks.

Suppose $u$ jumps across a curve $\Sigma$ separating regions $\Omega_L$ and $\Omega_R$, and let $n = (n_t, n_x)$ be the normal
of $\Sigma$. Test with a test function $\varphi \in C_0^\infty$:
$$\int u\,\varphi_t + F(u)\,\partial_x\varphi = 0$$

split this integral into $\Omega_L$ and $\Omega_R$. So when we integrate by parts:
$$\int_{\Omega_L} \varphi_t u_L + \varphi_x F(u_L)\,dx\,dt + \int_{\Omega_R} \varphi_t u_R + \varphi_x F(u_R)\,dx\,dt = 0$$
the first integral is:
$$-\int_{\Omega_L} \varphi\,\underbrace{(\partial_t u_L + \partial_x F(u_L))}_{0} + \int_\Sigma \varphi\,u_L\,n_t + \varphi\,F(u_L)\,n_x\,d\sigma$$
note that when we do this the other way, the normal points in the other direction, so the second integral is:
$$-\int_\Sigma \varphi\,u_R\,n_t + \varphi\,F(u_R)\,n_x\,d\sigma$$
this holds for every $\varphi$. Then we get:
$$(u_L - u_R)\,n_t + (F(u_L) - F(u_R))\,n_x = 0$$
let $T = (1, \sigma)$ be the tangent vector, with $\sigma$ the shock speed, so $n \sim (\sigma, -1)$; therefore
$$\sigma = \frac{[F(u)]}{[u]}$$
(brackets indicate the jump). This is known as the Rankine–Hugoniot condition.
Example 4.4 (Burgers' equation).
$$u_t + \partial_x\Big(\frac12 u^2\Big) = 0$$
the characteristics for this are:
$$\dot t = 1, \qquad \dot x = u, \qquad \dot u = 0$$
we could consider the initial condition $u_0 = 1_{x<0}$. We will get a shock, with shock
speed:
$$\sigma = \frac{1/2 - 0}{1 - 0} = \frac12$$
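The shock speed $\sigma = 1/2$ can be observed with a first-order upwind (Godunov, for nonnegative data) discretization — a sketch with illustrative grid parameters:

```python
import numpy as np

# Burgers' equation u_t + (u^2/2)_x = 0 with u0 = 1_{x<0}: the shock
# travels at the Rankine-Hugoniot speed sigma = 1/2, so it sits near
# x = 0.5 at time T = 1.
dx, dt, T = 0.01, 0.004, 1.0          # CFL = dt*max|u|/dx = 0.4
x = np.arange(-1.0, 2.0, dx)
u = (x < 0).astype(float)
flux = lambda w: 0.5 * w**2

for _ in range(int(T / dt)):
    F = flux(u)                        # upwind flux, valid since u >= 0
    u[1:] -= (dt / dx) * (F[1:] - F[:-1])

# locate the (slightly smeared) shock where u crosses 1/2
i = np.argmax(u < 0.5)
print(x[i])  # close to sigma * T = 0.5
```

The conservative form of the scheme is what forces the correct shock speed.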


Example 4.5. Consider the same equation but with $u_0 = 1_{x>0}$. We then get a region not covered
by characteristics. One solution is to put in a shock of speed $\sigma = 1/2$
(here characteristics emerge from the shock); however, there is another solution with no shock,
a distributional solution called a rarefaction wave.
Entropy–Flux Pairs

For
$$u_t + \partial_x F(u) = 0$$
take $g = g(u)$ and $h$ with $h' = g'F'$; then
$$\partial_t g(u) = g'(u)\,u_t = -g'(u)\,F'(u)\,u_x = -h'(u)\,u_x$$
so
$$\partial_t g(u) + \partial_x h(u) = 0$$
(as a formal computation, not compatible if $u$ has a jump).

Suppose that $u$ has a jump $u_L \to u_R$ with $\sigma = \dfrac{[F(u)]}{[u]}$. The entropy flux equation would be
satisfied across the jump iff $[h(u)]/[g(u)] = \sigma$. If $g$ is convex, then we get:
$$\partial_t g(u) + \partial_x h(u) \le 0$$
for "good shocks" and $\ge 0$ for "bad shocks".

Definition 4.2. $u \in L^\infty$ is called an entropy solution if

1. $u$ is a distributional solution

2. for every entropy flux pair$^a$ $(g, h)$ we have $\partial_t g(u) + \partial_x h(u) \le 0$

$^a$ entropy flux pair includes the requirement that $g$ is convex


Lecture 27 (5/7)

Recall our discussion of scalar conservation laws:
$$u_t + \partial_x F(u) = 0$$
Example 4.6 (Burgers' equation (1-d)).
$$u_t + u u_x = 0$$
then $F(u) = \frac12 u^2$ is convex and its derivative is monotone.

The two things we ask are: (1) is the equation satisfied in the sense of distributions, and
(2) are the solutions entropy solutions$^a$

Theorem 4.2 (works in all dimensions). Given any data $u_0 \in L^1 \cap BV$, there exists a unique
entropy solution $u$ of the conservation law, so that

1. $\|u(t)\|_{BV} \le \|u(0)\|_{BV}$

2. $\|(u - v)(t)\|_{L^1} \le \|u_0 - v_0\|_{L^1}$


Proof (idea).

Existence: use viscous approximation, $u^\varepsilon_t + \partial_x F(u^\varepsilon) = \varepsilon u^\varepsilon_{xx}$.

Estimates (Kruzkov '70): given $(S, \eta)$ with $\eta' = S'F'$ (in higher dimensions $\eta_j' = S'F_j'$),
with $S$ a convex function.

The simplest example of a convex function is $S(u) = |u - k|$. So $S'' = 2\delta_k$, $\eta' = \mathrm{sgn}(u-k)\,F'$,
then $\eta = \mathrm{sgn}(u-k)\,(F(u) - F(k))$.

If $(u, v)$ are differentiable:
$$\frac{d}{dt}\int |u - v|_\varepsilon\,dx \le 0$$
(subscript $\varepsilon$ denotes a regularization of the absolute value). Then the idea to do this is to
double the variables! Look at:
$$|u(t,x) - v(s,y)|$$
then
$$(\partial_t + \partial_s)\,|u(t,x) - v(s,y)| + (\partial_x + \partial_y)\big(\mathrm{sgn}(u-v)\,(F(u) - F(v))\big) \le 0$$

$^a$ i.e. for any entropy flux pair $(S, \eta)$ where $\eta' = S'F'$ and $S$ is convex, we have $\partial_t S(u) + \partial_x \eta(u) \le 0$


in the distributional sense, this would be:
$$\iint |u - v|\,\big((\partial_t + \partial_s)\varphi\big) + \mathrm{sgn}(u-v)\,(F(u) - F(v))\,\big((\partial_x + \partial_y)\varphi\big) \ge 0$$
with a test function $\varphi(t, x, s, y)$; let's choose specifically
$$\varphi = f\Big(\frac{t+s}{2}, \frac{x+y}{2}\Big)\,\psi_\varepsilon(t - s, x - y)$$
where $\psi_\varepsilon \to \delta_{0,0}$ as $\varepsilon \to 0$. Then sending $\varepsilon \to 0$, we get:
$$\partial_t|u - v| + \partial_x\big(\mathrm{sgn}(u-v)\,(F(u) - F(v))\big) \le 0$$
Then integrate, and we get:
$$\frac{d}{dt}\int_{\mathbb{R}^n} |u - v| \le 0$$
and we generate a nonlinear contraction semigroup in $L^1$.

Why $\|u\|_{BV}$? If $u(x)$ solves our conservation law, then $u(x+h)$ solves our conservation
law. Therefore:
$$\|u(x) - u(x+h)\|_{L^1} \le \|u_0(x) - u_0(x+h)\|_{L^1}$$
divide both sides by $h$, and we get (as $h$ gets small)
$$\|u\|_{BV} \le \|u_0\|_{BV}$$
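The $L^1$ contraction survives discretization by monotone schemes, which is easy to observe numerically (a sketch; the upwind scheme and the data below are illustrative choices, not from the notes):

```python
import numpy as np

# Two solutions of Burgers' equation evolved by the same monotone upwind
# scheme (valid for nonnegative data under CFL): the discrete L^1
# distance is nonincreasing, mirroring the Kruzkov contraction estimate.
dx, dt, steps = 0.02, 0.008, 100       # CFL = dt*max|u|/dx = 0.4
x = np.arange(-2.0, 2.0, dx)
u = (x < 0).astype(float)                        # step data
v = np.clip(0.5 - 0.5 * x, 0.0, 1.0)             # ramp data
flux = lambda w: 0.5 * w**2

def step(w):
    F = flux(w)                                  # upwind flux (w >= 0)
    w = w.copy()
    w[1:] -= (dt / dx) * (F[1:] - F[:-1])
    return w

d0 = np.sum(np.abs(u - v)) * dx
for _ in range(steps):
    u, v = step(u), step(v)
dT = np.sum(np.abs(u - v)) * dx
print(d0, dT)  # dT <= d0: L^1 contraction
```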

Now let's talk about hyperbolic systems:
$$u : \mathbb{R}\times\mathbb{R}^n \to \mathbb{R}^m, \qquad u = (u_1, \dots, u_m), \qquad \partial_t u + \partial_x F(u) = 0$$
with $F \in M_{m\times n}$. We will discuss mostly the 1-d case. Bressan introduced the viscous approximation;
Glimm discussed front tracking.

Here is some idea of the difficulties:

• there is no method of characteristics

• this is a nonlinear hyperbolic system

given $u_t + \partial_x F(u) = 0$, we can linearize:
$$v_t + \partial_x(F'(u)v) = 0 \iff v_t + F'(u)\,v_x + \underbrace{v\,\partial_x F'(u)}_{\text{lower order}} = 0$$
ignoring the lower order term, we look at the other part, where $F'(u) \in M_{m\times m}$. For this we
will use the theory of strictly hyperbolic systems: for all $u$, $F'(u)$ has distinct real eigenvalues
$(\lambda_k(u), r_k(u))$.

Theorem 4.3. Assume strict hyperbolicity; then $u_t + \partial_x F(u) = 0$ is locally well-posed in $H^k$ for
$k > \dfrac d2$.
Let's look at a special case of solutions: simple solutions (they look like scalar equations). Our
Ansatz is
$$u = v(\underbrace{w(t,x)}_{\text{scalar}})$$
so $v : \mathbb{R} \to \mathbb{R}^m$. Then we have
$$u_t + \partial_x F(u) = 0 \implies \dot v\,w_t + DF(v)\,\dot v\,w_x = 0$$
therefore $\dot v$ is an eigenvector of $DF$: $\dot v = r_k$, and $w_t + \lambda_k(v)\,w_x = 0$.

Let's call this special solution a $k$-wave; the above is a transport equation, and it
therefore moves with speed $\lambda_k$.

$\lambda_1 < \lambda_2 < \dots < \lambda_m$ are speeds of propagation for our PDE. So solutions are like nonlinear
interactions of $m$ transversal waves.

Let's consider another case of special solutions: the data has a jump discontinuity:
$$u_0(x) = \begin{cases} u_L & x < 0 \\ u_R & x > 0 \end{cases}$$
this is called the Riemann problem (because there is scaling invariance: $u(t,x) = u(x/t)$
for the Riemann problem).

Consider now the restricted Riemann problem: use only $k$-waves:
$$u = v(w), \qquad \dot v = r_k(v)$$
so that our picture doesn't get too messy, we impose $\nabla\lambda_k \cdot r_k \ne 0$ (genuine nonlinearity).

Given a point $u_L$, we get a curve of admissible states $u_R$, called the rarefaction curve.

Now the shock curves obey $\sigma[u] = [F(u)]$; therefore, for weak shocks, $\sigma$ is an approximate eigenvalue of
$F'(u)$: $\sigma \approx \lambda_k(u_{L,R})$, and $[u] \approx r_k(u_{L,R})$.

– 93 –
Appendices
A Sobolev Space
Definition A.1 (Sobolev Space). Given a domain U in Rd , k C 0 and 1 B p B ª define:
W k,p ˜u > Lp  Dα u > Lp ˆU  ¦ SαS B k
where derivatives are in the distributional sense.
We apply the norm:
YuYW k,p ˆU  Q
S α S Bk
YD
α
uYLp

We let W0k,p ˆU  be the completion of C0 ˆU  in the W k,p ˆU . We get that W k,p ˆU  is


ª

a Banach space, and W 2,p ˆU   H k is a Hilbert space. We also get that the norm is
equivalent to the following on the Fourier side:
YuYH k ˆRd  Yˆ1  Sξ S2 ˆk~2 u
Âˆξ YL2

A.1 Approximation
For all $u \in W^{k,p}(U)$ (for $U$ any domain) we get a sequence of $C^\infty(U)$ functions converging in
norm to $u$.

For all $u \in W^{k,p}(U)$, $U$ a $C^1$ domain, we get $u_n \in C^\infty(\bar U)$ converging to $u$ in norm.

$C_0^\infty(\mathbb{R}^d)$ is dense in $W^{k,p}(\mathbb{R}^d)$.

A.2 Extension
Given a $C^k$ domain $\Omega$ in $\mathbb{R}^d$ and an open set $V$ containing its closure, we can extend any $u$ in
$W^{k,p}(\Omega)$ to $W^{k,p}(\mathbb{R}^d)$ with support contained in $V$ and norm bounded by the original
norm (times a constant).

A.3 Trace
We have a way of restricting elements of Sobolev spaces to the boundary, even if pointwise restrictions don't
really make sense.
Theorem A.1. Given $\Omega$ a $C^1$ bounded domain, then we have a map:
$$Tr : W^{1,p}(\Omega) \to L^p(\partial\Omega)$$
such that $Tr(u) = u|_{\partial\Omega}$ for $u \in C^\infty(\bar\Omega)$, $\|Tr(u)\|_{L^p(\partial\Omega)} \le C\|u\|_{W^{1,p}(\Omega)}$, and $Tr(u) = 0$ if and
only if $u \in W_0^{1,p}(\Omega)$.

We have a stronger statement: $Tr(H^k(U)) = H^{k-1/2}(\partial U)$.

A.4 Inequalities
Theorem A.2 (Gagliardo–Nirenberg–Sobolev). For $1 \le p < d$ and $p^* = \frac{dp}{d-p}$,
$$\|u\|_{L^{p^*}} \le C\, \|Du\|_{L^p}$$
for $u \in W^{1,p}_0(U)$ ($U$ any domain). For $u \in W^{1,p}(U)$ with $U$ a $C^1$ bounded domain, one instead gets $\|u\|_{L^{p^*}(U)} \le C\, \|u\|_{W^{1,p}(U)}$.

Theorem A.3 (Poincaré inequality). For every bounded set $\Omega$ there exists a constant $C$ such that for all $u \in W^{1,p}_0(\Omega)$:
$$\|u\|_{L^p(\Omega)} \le C\, \|\nabla u\|_{L^p(\Omega)}.$$
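For $p = 2$ the sharp constant is $1/\sqrt{\lambda_1}$, with $\lambda_1$ the first Dirichlet eigenvalue of $-\Delta$ on $\Omega$; on $\Omega = (0,1)$ this gives $C = 1/\pi$. A finite-difference check (a sketch, not part of the notes):

```python
import numpy as np

# Dirichlet Laplacian on (0, 1), second-order finite differences
n = 200                    # interior grid points
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

lam1 = np.linalg.eigvalsh(A)[0]   # smallest eigenvalue, approx pi^2
C = 1.0 / np.sqrt(lam1)           # best Poincare constant, approx 1/pi
```

The discrete eigenvalue converges to $\pi^2$ at rate $O(h^2)$, so the computed constant matches $1/\pi$ to about four digits.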

A.5 Compactness
Definition A.2. Given Banach spaces $X, Y$, we say $X$ compactly embeds into $Y$ if

1. $X \subset Y$

2. $\|u\|_Y \le C\, \|u\|_X$ (the inclusion is continuous)

3. every bounded sequence in $X$ has a subsequence convergent in $Y$ (the inclusion is a compact operator).

Theorem A.4 (Rellich–Kondrachov). For $U$ a $C^1$ bounded domain, $1 \le p < d$, and $1 \le q < p^*$, we have $W^{1,p}(U) \Subset L^q(U)$.

The rest of the appendix is just a summary of some results of these notes for my own
benefit.

B 2nd order elliptic PDEs


In divergence form, an elliptic operator is
$$P = -\partial_i\, a^{ij}(x)\, \partial_j + b^j(x)\, \partial_j + c(x)$$
with $a^{ij}$ real, positive definite, and symmetric, so that
$$c\,|\xi|^2 \le a^{ij}\xi_i\xi_j \le C\,|\xi|^2.$$

Theorem B.1. If $P$ is a linear operator between Banach spaces with a coercivity bound
$$\|u\|_X \le C\, \|Pu\|_Y$$
then we have uniqueness for $P$ and solvability for $P^*$ (the adjoint of $P$).

Theorem B.2 (Lax–Milgram). Given a bounded and coercive bilinear mapping $B : H \times H \to \mathbb{R}$ on a Hilbert space $H$, for every $f \in H^*$ there exists a unique $u \in H$ such that $B(u,v) = \langle f, v \rangle$ for all $v \in H$.
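In finite dimensions the same solvability survives: a Galerkin discretization of a bounded, coercive form is again bounded and coercive, hence uniquely solvable. A sketch (problem data manufactured for the check, not from the notes) for $B(u,v) = \int_0^1 u'v' + uv$ on $H^1_0(0,1)$ with $f = (\pi^2+1)\sin\pi x$, whose exact solution is $u = \sin\pi x$:

```python
import numpy as np

n, h = 99, 1.0 / 100                     # uniform grid on (0, 1), interior nodes
x = np.linspace(h, 1 - h, n)

# Galerkin matrices for B(u, v) = int u'v' + uv over piecewise-linear hats:
K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h          # stiffness
M = h * (np.diag(4 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / 6      # mass

f = (np.pi**2 + 1) * np.sin(np.pi * x)           # manufactured forcing
u = np.linalg.solve(K + M, M @ f)                # B(u, v_i) = <f, v_i> for all hats

err = np.max(np.abs(u - np.sin(np.pi * x)))      # O(h^2) nodal error
```

Coercivity makes $K + M$ positive definite, so `solve` never fails, mirroring the existence/uniqueness statement.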


Remark B.1. By Lax–Milgram, we get solvability of $Pu + \lambda u = f$ for sufficiently large $\lambda$.

We can bootstrap this to get the following.

Theorem B.3 (Fredholm alternative (consequence)). Either $Pu = f$ is solvable for every $f$, or there are finitely many functions that $f$ must be orthogonal to, and the solution is then unique modulo finitely many functions.

Concretely, if $P + \lambda$ is invertible, let $K$ be its inverse, which can be taken to be a compact operator from $L^2 \to L^2$. Then let $U = \ker(I - \lambda K) = \mathrm{span}(w_1, \dots, w_n)$ and $U^* = \ker(I - \lambda K^*) = \mathrm{span}(v_1, \dots, v_n)$. If $n = 0$, then we have solvability. Otherwise we have solvability iff $f \perp v_i$ for all $i$, and the solution is unique modulo addition of multiples of the $w_i$.
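A finite-dimensional toy model of the dichotomy (a sketch; the graph Laplacian of a triangle stands in for the operator, with $n = 1$): $Au = f$ is solvable iff $f \perp \ker(A^T)$, and solutions are unique modulo $\ker(A)$.

```python
import numpy as np

# Symmetric singular model operator: graph Laplacian of a triangle,
# with ker(A) = ker(A^T) = span{(1, 1, 1)}.
A = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
w = np.ones(3) / np.sqrt(3.0)            # unit vector spanning the kernel

f = np.array([1., 0., 0.])
f_good = f - (f @ w) * w                 # projected orthogonal to w: solvable
u = np.linalg.lstsq(A, f_good, rcond=None)[0]

residual_good = np.linalg.norm(A @ u - f_good)     # essentially zero
residual_bad = np.linalg.norm(
    A @ np.linalg.lstsq(A, f, rcond=None)[0] - f)  # stuck at |<f, w>|
```

Any $u + tw$ solves the projected system as well, matching uniqueness modulo the span of the $w_i$.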

Theorem B.4 (regularity). If $u \in H^1_{loc}$ solves $Pu = f \in L^2_{loc}$, then $u \in H^2_{loc}$ (given suitable regularity of the coefficients).

This is proved by localization and by applying the energy estimates to derivatives (difference quotients) of the solution.

Theorem B.5 (Maximum principle). If $Pu \le 0$, then $\max_{\Omega} u = \max_{\partial\Omega} u$.

Theorem B.6 (Hopf lemma). Given $Pu \le 0$ on $B(0,R)$, if $x_0 \in \partial B$ is a maximum point and $u(0) < u(x_0)$, then $\frac{\partial u}{\partial \nu}(x_0) > 0$.

B.1 Eigenfunctions
For the operator $P = -\partial_j\, a^{jk}\, \partial_k$ we can use the Fredholm alternative and the theory of compact operators to study eigenvalues and eigenfunctions. We get discrete real eigenvalues going off to infinity, and we can get an orthonormal basis of eigenfunctions in $L^2$. There are things that can be said about computing eigenvalues, the simplicity of the smallest eigenvalue, and the distribution of eigenvalues.
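For the model case $-\Delta$ on $(0,1)$ with Dirichlet conditions, the eigenvalues are $(k\pi)^2$, $k = 1, 2, \dots$, with orthonormal eigenfunctions $\sqrt{2}\sin(k\pi x)$. A finite-difference sketch (not from the notes) exhibiting the discrete increasing spectrum and the orthonormal eigenbasis:

```python
import numpy as np

n = 400
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # Dirichlet Laplacian on (0, 1)

evals, evecs = np.linalg.eigh(A)             # real eigenvalues, increasing

approx = evals[:4]                           # should be close to (k*pi)^2
exact = (np.arange(1, 5) * np.pi) ** 2

# eigh returns an orthonormal eigenbasis: the discrete analogue of the
# orthonormal basis of eigenfunctions in L^2
gram = evecs.T @ evecs
```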

B.2 Unique Continuation


If $Pu = 0$ on a bounded domain and $u$ vanishes on the boundary together with its normal derivative, then under certain assumptions on $P$, $u$ vanishes identically (extend $u$ by zero across the boundary). This is proven with a Carleman estimate.

C Parabolic Equations
Given the parabolic equation:
$$\begin{cases} u_t + Lu = f & \text{in } U \times (0,T) \\ u|_{\partial U} = 0 \\ u|_{t=0} = u_0 \end{cases}$$


We have the energy estimate (used to prove uniqueness):
$$\frac{1}{2}\|u(t)\|_{L^2}^2 + \int_0^t \|\nabla u\|_{L^2}^2\, dt \lesssim \|u_0\|_{L^2}^2 + \int_0^t \|f\|_{H^{-1}}^2\, dt$$
which can be written:
$$\|u\|_{L^\infty L^2}^2 + \|u\|_{L^2 H^1_0}^2 \lesssim \|u_0\|_{L^2}^2 + \|f\|_{L^2 H^{-1}}^2.$$
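Where the estimate comes from, in one line (a sketch, assuming $L$ satisfies the Gårding-type bound $\langle Lu, u \rangle \ge c\|\nabla u\|_{L^2}^2 - C\|u\|_{L^2}^2$): pair the equation with $u$,

```latex
\frac{d}{dt}\,\frac{1}{2}\|u\|_{L^2}^2 + \langle Lu, u \rangle
  = \langle f, u \rangle
  \le \|f\|_{H^{-1}}\, \|u\|_{H^1}
  \le \frac{c}{2}\|u\|_{H^1}^2 + \frac{1}{2c}\|f\|_{H^{-1}}^2 ,
```

then absorb $\frac{c}{2}\|\nabla u\|_{L^2}^2$ into the left-hand side, integrate in $t$, and apply Grönwall to handle the remaining $\|u\|_{L^2}^2$ terms.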

The adjoint problem (used to prove existence) can be written as:
$$\begin{cases} (-\partial_t + L^*)\, v = g \\ v(T) = v_T \\ v|_{\partial\Omega} = 0 \end{cases}$$

which gives the duality relation:
$$\int_\Omega u(T)\, v_T\, dx + \iint_D u\, g\, dx\, dt = \int u_0\, v(0)\, dx + \iint_D v\, f\, dx\, dt$$

C.1 Higher Regularity

When solving a parabolic equation with initial data in $L^2$ and forcing in $L^2 H^{-1} + L^1 L^2$, the solution is in $C(L^2) \cap L^2 H^1$ (with the correct compatibility conditions and regularity of the coefficients). Adding more regularity to the initial data and forcing gives more regularity of the solution.

D Hyperbolic Equations

Index
adjoint operator, 8
Calderon–Zygmund operators, 23
Carleman estimates, 38
coercivity bound, 10
compact operator, 15
Dirichlet, 7
divergence form, 5
Eikonal equation, 61
elliptic equation
    local solvability, 18
elliptic operator, 4
    order, 4
energy-momentum tensor, 56
fixed point, 34
Fredholm Alternative, 16
Galerkin approximation, 73
Hahn-Banach, 10
Hermite operator, 30
Hopf's Lemma, 25
implicit Newton method, 74
infinitesimal generator, 75
Lax-Milgram, 11
lecture
    01 (1/21), 3
    02 (1/23), 5
    03 (1/28), 9
    04 (2/11), 13
    05 (2/13), 17
    06 (2/18), 20
    07 (2/20), 24
    08 (2/25), 27
    09 (2/27), 30
    10 (3/3), 33
    11 (3/5), 36
    12 (3/10), 39
    13 (3/12), 41
    14 (3/17), 45
    15 (3/19), 48
    16 (3/31), 52
    17 (4/2), 55
    18 (4/7), 58
    19 (4/9), 63
    20 (4/14), 66
    21 (4/16), 70
    22 (4/21), 73
    23 (4/23), 77
    24 (4/28), 80
    25 (4/30), 82
    26 (5/5), 86
    27 (5/7), 91
localization, 19
maximum principle, 22
Minkowski space-time, 52
noncharacteristic, 81
null vector, 55
parabolic approximation, 84
parabolic equation, 40
parabolic regularization, 70
quantum mechanics, 31
question, 7, 18, 23, 31, 39, 44, 46–49, 79
scaling trick, 18
semigroup, 75
Sobolev space, 6
space-like, 55
symbol, 4
time-like, 55
unique continuation, 36
viscosity solution, 84
viscous approximation, 84
Young's inequality, 13