LECTURE NOTES ON FUNCTIONAL ANALYSIS

DONGMENG XI AND JIN LI

Contents

1. Preliminary (vector space, norm, metric, and convergence)
2. Hilbert space, Projection, Riesz’s theorem, Lax-Milgram theorem
2.1. Hilbert space, Projection map and Riesz’s theorem
2.2. Orthogonality in Hilbert space
2.3. Lax-Milgram theorem
2.4. Sobolev spaces and weak solutions
2.5. The weak solution to the boundary value problem
3. Hilbert basis, Compact operators, Spectrum
3.1. Hilbert basis
3.2. Fredholm theory of compact operators in Hilbert space
3.3. Spectrum
4. The uniform boundedness principle, The closed graph theorem
4.1. The Baire category theorem
4.2. The uniform boundedness principle
4.3. The open mapping theorem and The closed graph theorem
5. Hahn-Banach theorem, Bidual space
5.1. Analytic form of Hahn-Banach theorem
5.2. Geometric form of Hahn-Banach theorem
5.3. Bidual space E∗∗ and Orthogonality relations
6. Lp spaces

Acknowledgement. We are grateful to Xinfa Meng for his assistance in typing the notes.

1. Preliminary (vector space, norm, metric, and convergence)

Definition 1. Vector space (v.s.). A vector space V over a field F(R or C) is a set V with two
operations:
(1) Addition + : V × V → V . V equipped with “+” is an Abelian group (Commutative
and associative laws), with identity element (zero vector) and inverse elements (additive
inverse).
(2) Scalar multiplication · : F × V → V . The scalar multiplication satisfies the associative
and distributive laws; 1 · x = x, ∀x ∈ V , where 1 denotes the multiplicative identity.

Examples of v.s. Set F = R,


1◦ X = {p(t) : polynomials with t ∈ R}
2◦ C([a, b]) = {f (t) : continuous functions with t ∈ [a, b]}
3◦ Rn
4◦ X = {(x1 , x2 , · · · ) : xi ∈ R, i ∈ N+ }

Definition 2. Normed vector space (n.v.s.). A vector space E over a field F is said to be a
normed vector space if there is a function ∥ · ∥ : E → R satisfying
(1) ∥x + y∥ ≤ ∥x∥ + ∥y∥ for all x, y ∈ E
(2) ∥αx∥ = |α| · ∥x∥ for every x ∈ E, α ∈ F
(3) ∥x∥ > 0 if x ̸= 0.
The function ∥ · ∥ is called a norm, and we also denote a normed vector space E by (E, ∥ · ∥). For
simplicity, we will write n.v.s..

Definition 3. Metric space. A Metric space X is a set X equipped with a binary function
d : X × X → R satisfying for all x, y, z ∈ X
(1) d(x, y) ≤ d(x, z) + d(z, y) (Triangle inequality)
(2) d(x, y) = d(y, x)
(3) d(x, y) ≥ 0 with equality iff(if and only if) x = y
d(·, ·) is called the distance function.
Let (xk )k≥1 be a sequence in (X, d). We also simply write “a sequence (xk )k≥1 in X”, or “a
sequence (xk ) in X”.
Limit in (X, d). We say (xk )k≥1 converges to x̄, denoted by lim xk = x̄, if lim d(xk , x̄) = 0.
k→∞ k→∞
Cauchy sequence. A sequence (xn )n≥1 in X is said to be a Cauchy sequence if lim d(xi , xj ) = 0.
i,j→∞
Completeness. We say a metric space (X, d) is complete if every Cauchy sequence (xn )n≥1 in X
is associated with an x0 ∈ X s.t.(such that) lim d(xn , x0 ) = 0.
n→∞
Induced metric. Consider a n.v.s. (E, ∥ · ∥). Define d(x, y) = ∥x − y∥ on E, then (E, d) is a
metric space. (Prove it!)


Banach space. A complete normed vector space is also called a Banach space.

Examples of n.v.s.
1◦ (Rn , ∥ · ∥K )
(1) If K is a convex, bounded and closed set with o ∈ intK and K = −K, then ∥x∥K =
inf {λ > 0 : x ∈ λK} is a norm on Rn .
(2) If ∥ · ∥ is a norm on Rn and let K = {x : ∥x∥ ≤ 1}, then K is a convex, bounded and
closed set with B2 (o, r) ⊂ K and K = −K. Here B2 (o, r) is a Euclidean ball with radius
r centered at the origin.

2◦ C([a, b])
(1) Define ∥f∥∞ := sup_{x∈[a,b]} |f(x)|; then (C([a, b]), ∥ · ∥∞) is a Banach space.
(2) Define ∥f∥1 := ∫_a^b |f(x)| dx; then (C([a, b]), ∥ · ∥1) is a n.v.s. but not complete.

3◦ C(Ω̄) and C k (Ω̄). Let Ω ⊂ Rn be an open bounded subset.

C(Ω̄) := {uniformly continuous functions f : Ω → R} .

One can confirm that (C(Ω̄), ∥ · ∥1) and (C(Ω̄), ∥ · ∥∞) are both normed vector spaces.

C^k(Ω̄) := {f ∈ C(Ω̄) : D^α f ∈ C(Ω̄) for every α = (α1, · · · , αn) satisfying |α| ≤ k},

where α = (α1, · · · , αn) is a multi-index with αi ≥ 0, |α| = Σ_{i=1}^{n} αi, and D^α f(x) = ∂^{|α|} f(x) / (∂x1^{α1} · · · ∂xn^{αn}).

Define ∥f∥_{1,1} = ∫_Ω |f| + Σ_{k=1}^{n} ∫_Ω |∂_k f| for every f ∈ C^1(Ω̄), where ∂_k f = ∂f/∂x_k. It is easily
seen that ∥ · ∥_{1,1} is a norm on the vector space C^1(Ω̄), and we leave it as an exercise.
We can also define ∥f∥_{1,∞} = sup_{x∈Ω} |f(x)| + Σ_{k=1}^{n} sup_{x∈Ω} |∂_k f(x)| for every f ∈ C^1(Ω̄) as a norm
on C^1(Ω̄).
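Returning to example 1◦, the gauge ∥x∥_K = inf{λ > 0 : x ∈ λK} can be evaluated numerically. The following is a minimal sketch (not part of the notes), assuming K is given only through a membership test and taking the hypothetical body K = {y ∈ R² : |y1| + 2|y2| ≤ 1}, which is convex, bounded, closed, symmetric and contains the origin in its interior; bisection recovers ∥x∥_K, and a few checks illustrate the norm properties.

    # a minimal sketch, assuming K is given by a membership oracle (illustrative example only)
    import numpy as np

    def in_K(y):
        # hypothetical body K = { y : |y1| + 2|y2| <= 1 }
        return abs(y[0]) + 2.0 * abs(y[1]) <= 1.0

    def gauge(x, tol=1e-10):
        x = np.asarray(x, dtype=float)
        if np.allclose(x, 0.0):
            return 0.0
        lo, hi = 0.0, 1.0
        while not in_K(x / hi):          # grow hi until x/hi lands in K
            lo, hi = hi, 2.0 * hi
        while hi - lo > tol:             # bisect: x in lam*K  iff  x/lam in K
            mid = 0.5 * (lo + hi)
            if in_K(x / mid):
                hi = mid
            else:
                lo = mid
        return hi

    x, y = np.array([1.0, 0.5]), np.array([-2.0, 1.0])
    print(gauge(x), abs(x[0]) + 2 * abs(x[1]))                     # the two values agree
    print(gauge(x + y) <= gauge(x) + gauge(y) + 1e-8)              # triangle inequality
    print(np.isclose(gauge(3.0 * x), 3.0 * gauge(x), atol=1e-6))   # positive homogeneity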

Definition 4. Open sets and Closed sets. Suppose (X, d) is a metric space.
Let x ∈ X and r > 0. Denote the open ball centered at x with radius r by Bx (r) = B(x, r) =
{z ∈ X : d(z, x) < r}, and the closed ball by B̄(x, r) = {z ∈ X : d(z, x) ≤ r}.
We say A ⊂ X is open if for every x ∈ A, there exists r > 0 s.t. B(x, r) ⊂ A. We also set the
empty set ∅ to be open. One can confirm that an open ball is open.
We say A ⊂ X is closed, if Ac = {x ∈ X : x ∈ / A} is open. One can confirm that a closed ball
is closed.
Ā = (int(Ac ))c is called the closure of A. One can also confirm that x ∈ Ā iff there exists
(xk )k≥1 ⊂ A s.t. xk → x as k → ∞.

Remark 1. We define intA := {x ∈ A : there exists r > 0 s.t. B(x, r) ⊂ A}, and x ∈ intA is
called an interior point of A.
Property 1. intA is open and A is open iff A = intA.
Property 2. For (X, d)
(1) Both X and ∅ are open.
(2) If Aα is open for every α ∈ I, then ∪_{α∈I} Aα is open.
(3) If A_k is open for every 1 ≤ k ≤ m, m ∈ N+, then ∩_{k=1}^{m} A_k is open.

Examples:
1◦ . Rn .
2◦ . A discrete metric space X is a space such that d(x, y) = 1 for all x, y ∈ X. Each subset of
a discrete metric space is both open and closed.
3◦ . Any finite-dimensional linear subspace of a n.v.s is closed. But infinite-dimensional linear
subspace may not be closed. For instance, Cc∞ (Rn ) in L1 (Rn ) and polynomials in C([0, 1]).

Definition 5. Compactness. Let (X, d) be a metric space.


(1) Precompact. We say K is precompact, if each sequence (xk )k≥1 in K has a convergent
subsequence converging to some point in X.
(2) Compact. We say K is compact, if K is precompact and closed.
(3) Totally bounded. We say K is totally bounded, if for any ϵ > 0, there are finitely many
balls B(xi, ϵ) with xi ∈ K such that K ⊂ ∪_{i=1}^{m} B(xi, ϵ).

Definition 6. Open cover. We say {Aα}α∈I is an open cover of K, if {Aα}α∈I is a class of open sets
s.t. K ⊂ ∪_{α∈I} Aα.

Theorem 1.1. K is compact iff each open cover of K has a finite subcover.

Proof. Necessary part. Suppose any sequence (xk )k≥1 in K has a convergent subsequence con-
verging to a point in K. Let {Oi}i∈I be an arbitrary open cover of K.
Step 1. We prove that there is an r > 0 such that for every x ∈ K, B(x, r) ⊆ Oi for some i ∈ I.
The proof is by contradiction. Assume that for any r > 0, there is an xr ∈ K, such that for each
i ∈ I, B(xr , r) ⊈ Oi . Now choose the sequence {xn }n≥1 in X so that

B (xn , 1/n) ⊈ Oi for all i ∈ I

By the assumption, {xn }n≥1 has a convergent subsequence {xnk }k∈N , and xnk → x as k → ∞,
where x ∈ K. Then, there must be some i0 ∈ I such that x ∈ Oi0 , and since Oi0 is open, so there
exists r0 > 0 such that B(x, r0) ⊆ Oi0. Choose N large enough such that d(x, xN) < r0/2 and
1/N < r0/2. Now if y ∈ B(xN, 1/N), then

d(x, y) ≤ d(x, xN) + d(xN, y) < r0/2 + r0/2 = r0,
and hence y ∈ B(x, r0 ) ⊆ Oi0 . It follows that B (xN , 1/N ) ⊆ B(x, r0 ) ⊆ Oi0 , a contradiction!
Step 2. Next we prove that K is totally bounded i.e. for any ϵ > 0, there are finitely many balls
B(xi, ϵ) with xi ∈ K, such that K ⊂ ∪_{i=1}^{m} B(xi, ϵ).
Otherwise, we can choose an arbitrary x1 ∈ K, then choose x2 ∈ K\B(x1, ϵ), and step by step
we have

xk+1 ∈ K \ ∪_{i=1}^{k} B(xi, ϵ),

for k = 1, 2, · · · . The sequence (xk)k≥1 satisfies d(xk, xj) ≥ ϵ for any k ≠ j, which contradicts
the assumption that (xk)k≥1 has a convergent subsequence.
Step 3. Let ϵ < r where the r is chosen as in Step 1. Then by Step 2 there are finitely many balls
{B(xi, ϵ)}_{i=1}^{m} which cover K. Since Step 1 provides an O_{k_i} containing B(xi, ϵ), we have found
a finite subcover. Therefore, the necessary part is proved.

Sufficient part. Suppose each open cover of K has a finite subcover. We assume on the contrary
that there is a sequence {zk }k≥1 in K, such that any subsequence of {zk }k≥1 does not converge
to some point in K.
We notice that {zk}k≥1 must have infinitely many distinct points, since otherwise {zk}k≥1
would have a convergent subsequence converging to a point that appears infinitely many times in
this sequence.
Step 1. For an arbitrary x ∈ K, there is an ϵx > 0 such that B(x, ϵx) ∩ {zk}k≥1 is a subset of the
singleton {x}. Otherwise, there is an x such that for any ϵ > 0, B(x, ϵ) ∩ {zk}k≥1 has a
point different from x; then x would be the limit of a subsequence, and this contradicts the
original assumption.
Step 2. For each x ∈ K, we select the corresponding ϵx as in Step 1. Since ∪_{x∈K} B(x, ϵx) covers K,
there are x1, · · · , xN such that ∪_{i=1}^{N} B(xi, ϵ_{xi}) covers K. However, ∪_{i=1}^{N} B(xi, ϵ_{xi}) contains at most
N elements of {zk}k≥1, which contradicts the fact that {zk}k≥1 has infinitely many distinct
points. Therefore, the sufficient part is proved. □

Definition 7. Continuous map. For two metric spaces (X, d1 ) and (Y, d2 ). Let f : X → Y be a
map.

Remark 2. We say f is continuous at x0 ∈ X, provided for all ϵ > 0, there exists δ > 0 s.t.
if x ∈ X satisfies d1(x, x0) < δ, then d2(f(x), f(x0)) < ϵ; or equivalently, lim_{x→x0} f(x) exists and
lim_{x→x0} f(x) = f(x0). In particular, we say f is continuous on X, if f is continuous at each point
of X.

Proposition 1.2. f : X → Y is continuous iff f −1 (A) is open in X, for every open set A in Y .

We leave it as an exercise.

Definition 8. Dual space. Let (E, ∥ · ∥) be a n.v.s.. The dual space of E is the space of all
continuous linear functionals on E and denote the dual space of E by E ∗ . The dual norm on E ∗
is defined by
∥f∥_{E∗} = sup_{x∈E, x≠0} |f(x)|/∥x∥ = sup_{x∈E, ∥x∥≤1} f(x) = sup_{x∈E, ∥x∥=1} f(x).

Remark. We shall write ∥f ∥ instead of ∥f ∥E ∗ if there is no confusion. Given f ∈ E ∗ and x ∈ E,


we shall write ⟨f, x⟩ instead of f (x). We call ⟨·, ·⟩ the scalar product for the duality E ∗ , E.

Proposition 1.3. E ∗ is Banach.

Prove it as an exercise.

Definition 9. Isometry. Let φ : (X1 , d1 ) → (X2 , d2 ). We say φ is an isometry, if d2 (φ(x), φ(y)) =


d1 (x, y) holds for every x, y ∈ X1 .

Proposition 1.4. An isometry must be injective.

Definition 10. Dense subset. A ⊂ X is said to be dense, if Ā = X. Equivalently, for every


x ∈ X there exists (xk )k≥1 in A s.t. d(xk , x) → 0 as k → ∞.

Definition 11. Separable space. A metric space is called separable if it has a countable dense
subset.

Definition 12. Completion. Let (X, d) be a metric space. We say (X̃, d̃) is a completion of X,
if there exists an isometry φ : X → X̃ such that φ(X) is dense in X̃ and X̃ is complete.

Theorem 1.5. Let (X, d) be a metric space. Denote C[X] to be the collection of all Cauchy
sequences in X. Define a relation ‘∼’ on C[X] by (xn) ∼ (yn) iff lim_{n→∞} d(xn, yn) = 0. Then ‘∼’ is
an equivalence relation on C[X]. Let X̃ := C[X]/∼ be the quotient space, equipped with the metric
d̃ defined by

d̃([(xn)], [(yn)]) = lim_{n→∞} d(xn, yn) for all [(xn)], [(yn)] ∈ X̃,

where [(xn)] represents the equivalence class of (xn). Then, X̃ is a completion of (X, d).
e is a completion of (X, d) .

In order to simplify the symbol, we sometimes denote by (xn ) an arbitrary sequence (xn )n≥1
in X.

Proof. In the following proof, we may use the notation x to denote a sequence (xn ) in C[X], and
use [x] to denote an element in X̃.
Step 1. It is easy to verify the following conditions,

(1) (xn ) ∼ (xn ).


(2) (xn ) ∼ (yn ) iff (yn ) ∼ (xn ).
(3) If (xn ) ∼ (yn ) and (yn ) ∼ (zn ), then (xn ) ∼ (zn ).

So ∼ is an equivalence relation.
Step 2. Next, we prove that d̃ is a well-defined distance function on X̃.
Let [x], [y] ∈ X̃, (xn), (x′n) ∈ [x] and (yn), (y′n) ∈ [y]. Since (xn), (yn) are Cauchy sequences,

|d(xn, yn) − d(xm, ym)| ≤ |d(xn, yn) − d(xm, yn)| + |d(xm, yn) − d(xm, ym)|
                        ≤ d(xn, xm) + d(yn, ym) → 0 as n, m → ∞.

Thus lim_{n→∞} d(xn, yn) exists. On the other hand, since d(xn, yn) ≤ d(xn, x′n) + d(x′n, y′n) + d(y′n, yn),
we have lim_{n→∞} d(xn, yn) = lim_{n→∞} d(x′n, y′n), which means d̃ is well-defined.
Let [z] ∈ X̃ and (zn) ∈ [z]. Then,

(1) By d(xn, zn) ≤ d(xn, yn) + d(yn, zn), taking n → ∞, we have d̃([x], [z]) ≤ d̃([x], [y]) + d̃([y], [z]).
(2) d̃([x], [y]) = lim_{n→∞} d(xn, yn) = lim_{n→∞} d(yn, xn) = d̃([y], [x]).
(3) d̃([x], [y]) ≥ 0 is clear, and if d̃([x], [y]) = lim_{n→∞} d(xn, yn) = 0, then (xn) ∼ (yn), i.e. [x] = [y].

Consequently, d̃ is a well-defined distance function on X̃.

Step 3. In this step, we prove that (X̃, d̃) is complete. Let ([x^k])_{k≥1} be a Cauchy sequence in
X̃, and let (x^k_n)_n ∈ [x^k]. Since d̃([x^k], [x^j]) = lim_{n→∞} d(x^k_n, x^j_n) and ([x^k])_{k≥1} is a Cauchy sequence,
there exists a subsequence ([x^{k_i}])_{i≥1} s.t. d̃([x^{k_i}], [x^{k_{i+1}}]) < 1/2^{i+1}. Then there are N1, N2, · · ·
satisfying N_{i+1} ≥ N_i, i ∈ N+, such that

d(x^{k_i}_n, x^{k_i}_m) < 1/2^i and d(x^{k_i}_n, x^{k_{i+1}}_n) < 1/2^i for any n, m ≥ N_i.    (EQ1)

Consider the sequence (x^{k_i}_{N_i})_{i≥1}. For each i, by (EQ1), we have

d(x^{k_i}_{N_i}, x^{k_{i+1}}_{N_{i+1}}) ≤ d(x^{k_i}_{N_i}, x^{k_i}_{N_{i+1}}) + d(x^{k_i}_{N_{i+1}}, x^{k_{i+1}}_{N_{i+1}}) < 1/2^{i−1}.

Thus, for j > i, we have

d(x^{k_i}_{N_i}, x^{k_j}_{N_j}) ≤ d(x^{k_i}_{N_i}, x^{k_{i+1}}_{N_{i+1}}) + · · · + d(x^{k_{j−1}}_{N_{j−1}}, x^{k_j}_{N_j})
                 ≤ 1/2^{i−1} + · · · + 1/2^{j−2}
                 ≤ 1/2^{i−2} → 0 as i → ∞.

It follows that

lim_{i,j→∞} d(x^{k_i}_{N_i}, x^{k_j}_{N_j}) = 0,    (EQ2)

and hence (x^{k_i}_{N_i})_{i≥1} is a Cauchy sequence.


Denote y_i = x^{k_i}_{N_i}, and [y] = [(y_i)]. We will show lim_{j→∞} d̃([x^j], [y]) = 0.
Since ([x^j]) is a Cauchy sequence, and ([x^{k_j}]) is its subsequence, we have

0 = lim_{j→∞} d̃([x^j], [x^{k_j}]) = lim_{j→∞} lim_{i→∞} d(x^j_i, x^{k_j}_i).    (EQ3)

Then, by the triangle inequality, (EQ2) and (EQ3), we obtain

lim_{j→∞} d̃([x^j], [y]) = lim_{j→∞} lim_{i→∞} d(x^j_i, x^{k_i}_{N_i})
                ≤ lim_{j→∞} lim_{i→∞} ( d(x^j_i, x^{k_j}_i) + d(x^{k_j}_i, x^{k_j}_{N_j}) + d(x^{k_j}_{N_j}, x^{k_i}_{N_i}) )
                = lim_{j→∞} lim_{i→∞} d(x^{k_j}_i, x^{k_j}_{N_j})
                = 0,

where the last equation follows from the first inequality in (EQ1). This shows the completeness
of (X̃, d̃).

Step 4. Define φ(z) = [(z, z, · · · )] for every z ∈ X. Then φ : X → X̃ is an isometry, since for
any x, y ∈ X, d(x, y) = lim_{n→∞} d(xn, yn) = d̃([(xn)], [(yn)]) where xn = x, yn = y. And for every
[(xn)] ∈ X̃, lim_{k→∞} d̃(φ(xk), [(xn)]) = lim_{k→∞} lim_{n→∞} d(xk, xn) = 0, thus φ(X) is dense in X̃. □

Theorem 1.6. If (X̃1, d̃1) with φ1 and (X̃2, d̃2) with φ2 are two completions of (X, d), then there
is a unique isometry f from X̃1 to X̃2 such that f ◦ φ1 = φ2.

Remark. In other words, the completion is unique up to isometry.

Proof. Step 1. We prove the following Claim first.


Claim. Let ỹ ∈ X̃1. If (xn) and (x′n) are two sequences in X s.t. lim_{n→∞} φ1(xn) = lim_{n→∞} φ1(x′n) = ỹ,
then lim_{n→∞} φ2(xn) = lim_{n→∞} φ2(x′n).
Proof of Claim. In fact, since φ1 is an isometry, d̃1(φ1(xn), φ1(x′n)) = d(xn, x′n). Since d̃1(φ1(xn), ỹ) +
d̃1(φ1(x′n), ỹ) → 0 as n → ∞, by the triangle inequality we have d(xn, x′n) → 0. Thus (xn) and
(x′n) are equivalent Cauchy sequences. Since X̃2 is complete and φ2 is an isometry, which implies
d̃2(φ2(xn), φ2(x′n)) = d(xn, x′n), we deduce that lim_{n→∞} φ2(xn) and lim_{n→∞} φ2(x′n) both exist and are the
same.
Step 2. By the Claim above, we can define f : X̃1 → X̃2 as follows. For every ỹ ∈ X̃1, define
f(ỹ) = lim_{n→∞} φ2(xn), where (xn) is an arbitrary sequence in X s.t. φ1(xn) → ỹ. It is clear that f
is well-defined.
Let ỹ1, ỹ2 ∈ X̃1 and (x1_n), (x2_n) be sequences in X s.t. lim_{n→∞} φ1(xk_n) = ỹk, k = 1, 2. Then
d̃2(f(ỹ1), f(ỹ2)) = lim_{n→∞} d̃2(φ2(x1_n), φ2(x2_n)) = lim_{n→∞} d(x1_n, x2_n) = d̃1(ỹ1, ỹ2). Consequently, f is an
isometry from X̃1 to X̃2.

Step 3. For every x ∈ X, denote ỹ = φ1(x); we have

f ◦ φ1(x) = f(ỹ)
          = lim_{n→∞} φ2(xn)    (where xn = x, n ∈ N+)
          = φ2(x).

This implies f ◦ φ1 = φ2.
Step 4. Suppose there is another isometry f′ : X̃1 → X̃2 satisfying f′ ◦ φ1 = φ2 on X. The
condition f′ ◦ φ1 = φ2 = f ◦ φ1 implies that f(ỹ) = f′(ỹ) for every ỹ ∈ φ1(X). Since an isometry must
be continuous (Prove it!), and φ1(X) is dense in X̃1, we have f(ỹ) = f′(ỹ) for every ỹ ∈ X̃1. □

2. Hilbert space, Projection, Riesz’s theorem, Lax-Milgram theorem

2.1. Hilbert space, Projection map and Riesz’s theorem.

Definition 13. Inner product space (i.p.s.). An inner product space is a vector space H together
with a symmetric positive definite bilinear function (·, ·) : H × H → R satisfying that for any
λi ∈ R and ui , u, v ∈ H, i = 1, 2

(1) (u, v) = (v, u) (Symmetric)


(2) (u, u) ≥ 0 and (u, u) = 0 iff u = 0 (Positive definite)
(3) (λ1 u1 + λ2 u2 , v) = λ1 (u1 , v) + λ2 (u2 , v) (Linear )

Such a function (·, ·) is said to be an inner product (scalar product).

Examples.
1◦ Rn := {(x1, · · · , xn)^t : xk ∈ R, 1 ≤ k ≤ n}


Let A ∈ Mn be symmetric and positive definite. Then

(x, y) = xt Ay for every x, y ∈ Rn

defines an inner product on Rn .


2◦ l2 := {x = (x1, . . . , xn, . . . ) : xi ∈ R, ∥x∥2 = (Σ_{k=1}^{∞} xk²)^{1/2} < ∞}
Define (x, y) = Σ_{k=1}^{∞} xk yk for every x = (x1, . . . , xn, . . . ), y = (y1, . . . , yn, . . . ) ∈ l2.
3◦ L2 (Ω)
Let Ω ⊂ Rn be a measurable set.
L2(Ω) := { f : Ω → R : f² is integrable on Ω, ∫_Ω f(x)² dx < ∞ }.

Rigorously, when referring to the L2(Ω) space, we are essentially discussing a quotient space
L2(Ω)/∼. Here, for u, v ∈ L2(Ω), we say u ∼ v iff u(x) = v(x) a.e. x ∈ Ω. Under this notation,
for [u], [v] ∈ L2(Ω)/∼, the inner product ([u], [v]) is still defined by

(u, v)_{L2} = ∫_Ω u(x)v(x) dx.

Note that it is independent of the choices of the representatives u and v. The benefit is that
it makes the “zero vector” of this space the unique one satisfying ([u], [u]) = 0. One can
easily confirm that (L2(Ω)/∼, (·, ·)) is an inner product space.
Usually, we abandon the above notation L2(Ω)/∼. Instead, we simply write L2(Ω), and we
say u and v are the “same point in L2(Ω)” if u(x) = v(x) a.e.
4◦ L2g (Ω)
Suppose g ∈ C(Ω), g ≥ 0 and ∫_Ω g > 0. Define (u, v)_g = ∫_Ω u(x)v(x)g(x) dx for measurable functions u
and v, and

L2_g(Ω) := {u : u is measurable and ∫_Ω u² g < ∞}.

5◦ (Product space) Let H1 and H2 be two inner product spaces. Denote H1 × H2 := {x1 ⊕ x2 :
xi ∈ Hi , i = 1, 2}. Define

(x1 ⊕ x2 , y1 ⊕ y2 )H1 ×H2 := (x1 , y1 )H1 + (x2 , y2 )H2 .

It is not hard to see that (·, ·)_{H1×H2} is an inner product on H1 × H2. We say (H1 ×
H2, (·, ·)_{H1×H2}) (briefly written as H1 × H2) is the product (inner product) space of H1 and H2.
If H1 and H2 are Hilbert spaces, so is H1 × H2.

Remark 3. Cauchy-Schwarz inequality. Suppose H is an i.p.s.; then (u, v) ≤ (u, u)^{1/2} (v, v)^{1/2} holds
for any u, v ∈ H.
Proof. ∀u, v ∈ H, by (u − v, u − v) ≥ 0 and (u + v, u + v) ≥ 0, we get (u, u) + (v, v) ≥ 2|(u, v)|.
If u = 0 or v = 0 the inequality is trivial. Otherwise, letting ū = u/(u, u)^{1/2} and v̄ = v/(v, v)^{1/2},
we have (ū, ū) = (v̄, v̄) = 1, and hence |(ū, v̄)| ≤ 1. Equivalently,
|(u, v)| ≤ (u, u)^{1/2} (v, v)^{1/2}.

Induced norm. Now let |u| = (u, u)^{1/2} for every u ∈ H. By the Cauchy-Schwarz inequality, | · | is a
norm. We call it the induced norm.

Remark 4. Parallelogram law. Suppose H is an i.p.s. Then

|(a + b)/2|² + |(a − b)/2|² = (1/2)(|a|² + |b|²),   a, b ∈ H.

It can be verified directly from the definition of | · |. In addition, one can confirm that the
parallelogram law coincides with the law of cosines.

Definition 14. Hilbert space. If the i.p.s. (H, | · |) is complete, we say H is a Hilbert space.
From now on, we assume H to be Hilbert space.

Remark 5.
(1) The examples 1◦ , 2◦ , 3◦ are all Hilbert spaces.

(2) If Ω ⊂ Rn is open and bounded, (C(Ω̄), (·, ·)_{L2}) is still an inner product space, but not
Hilbert.

Examples. Let Ω be as above. For u, v ∈ C 1 (Ω̄), define


(u, v)_{H′} = ∫_Ω u(x)v(x) dx + Σ_{k=1}^{n} ∫_Ω ∂_k u(x) ∂_k v(x) dx.

Then (·, ·)_{H′} is an inner product. (Prove it as an exercise!)

Theorem 2.1 (Projection onto a closed convex set). Let K ⊂ H be nonempty, closed and convex.
Then for any f ∈ H, there exists a unique element u ∈ K, s.t.

|f − u| = min_{v∈K} |f − v| = d(K, f)    (2.1)

Moreover, it is characterized by

u ∈ K and (f − u, v − u) ≤ 0 ∀v ∈ K. (2.2)

Proof. (1) Existence.


W.L.O.G. (without loss of generality), we assume f ∉ K. Assume (vn)n≥1 is a sequence in K
s.t. lim_{n→∞} |f − vn| = d(K, f) =: d = lim_{n→∞} dn, where dn = |f − vn|. We shall prove that (vn)n≥1 is a
Cauchy sequence.
In fact, by the parallelogram law, for any i, j ∈ N+,

|((f − vi) − (f − vj))/2|² + |((f − vi) + (f − vj))/2|² = (|f − vi|² + |f − vj|²)/2 = (di² + dj²)/2.

Because K is convex, (vi + vj)/2 ∈ K, hence |((f − vi) + (f − vj))/2| = |f − (vi + vj)/2| ≥ d, and then we have

|(vi − vj)/2|² ≤ (di² + dj²)/2 − d² → 0 as i, j → ∞.

Thus there exists u ∈ H s.t. vi → u. Since K is closed, u ∈ K. Finally, we get lim_{n→∞} |f − vn| =
|f − u|.
Thus there exists u ∈ H s.t. vi → u. Since K is closed, u ∈ K. Finally, we get lim |f − vn | =
n→∞
|f − u|.

(2) Characterization.
Necessary part. Suppose u ∈ K satisfies |f − u| = min_{v∈K} |f − v| = d(K, f). For an arbitrary
v ∈ K, let Φ(t) = (f − ((1 − t)u + tv), f − ((1 − t)u + tv)). Then Φ attains its minimum over [0, 1] at t = 0,
and it follows that

lim_{t→0+} (Φ(t) − Φ(0))/t ≥ 0,

which implies (f − u, u − v) ≥ 0, i.e. (f − u, v − u) ≤ 0, by directly computing the derivative. So (2.2) holds.

Sufficient part. Suppose there exists u ∈ K, s.t. (2.2) holds for any v ∈ K. It follows from a
direct computation that for every v ∈ K,

|f − u|2 − |f − v|2 = (f − u, f − u) − (f − v, f − v)
= (u, u) − (v, v) + 2(f, v − u)
= (u, u) − (v, v) + 2(u, v − u) + 2(f − u, v − u)
≤ −(u, u) − (v, v) + 2(u, v) = −(u − v, u − v) ≤ 0.

So (2.1) holds.

(3) Uniqueness.
If u1 , u2 are two minimizers, then (f − u1 , u2 − u1 ) ≤ 0 as well as (f − u2 , u1 − u2 ) ≤ 0. Adding
them together, we get (u1 − u2 , u1 − u2 ) ≤ 0 i.e. u1 − u2 = 0 and u1 = u2 .
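Here is a small finite-dimensional illustration of Theorem 2.1 (a sketch under stated assumptions, not part of the notes): take H = R^n with the Euclidean inner product and K the unit box [0,1]^n, whose metric projection is coordinatewise clipping; one can check (2.1) and the variational characterization (2.2) numerically.

    # a minimal sketch: H = R^n, K = [0,1]^n (illustrative choice)
    import numpy as np
    rng = np.random.default_rng(0)

    def P_K(f):
        return np.clip(f, 0.0, 1.0)   # projection onto the box K = [0,1]^n

    n = 5
    f = rng.normal(size=n) * 3.0
    u = P_K(f)

    # (2.1): u minimizes |f - v| over K; compare with random competitors v in K
    vs = rng.random((1000, n))
    print(np.linalg.norm(f - u) <= np.linalg.norm(f - vs, axis=1).min() + 1e-12)

    # (2.2): (f - u, v - u) <= 0 for every v in K
    print(np.max((vs - u) @ (f - u)) <= 1e-12)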

Definition 15. Metric projection. Theorem 2.1 defines a map PK by, for every f ∈ H,

PK f = u ∈ K s.t. |u − f| = d(K, f).

It is called the projection of f onto K, and the map PK is sometimes called the metric projection.
In addition, for f ∉ K, let n = f − u and V := {x ∈ H : (n, x) = (n, u)}; then V is a support hyperplane of
K, and V⁻ := {x : (n, x) ≤ (n, u)} ⊃ K.

Proposition 2.2. Let K ⊂ H be nonempty, convex and closed. Then |PK x − PK y| ≤ |x − y|


holds for any x, y ∈ H. This means the projection mapping PK is a contraction.

Proof. Denote Px := PK x and Py := PK y. By Thm 2.1, (x − Px , Py − Px ) ≤ 0 as well as (y −


Py , Px − Py ) ≤ 0. Adding them, we have |Py − Px |2 = (Py − Px , Py − Px ) ≤ (y − x, Py − Px ).
Together with (y − x, Py − Px) ≤ |y − x| |Py − Px|, we get |Py − Px| ≤ |y − x|. □

Corollary 2.3. Let f ∈ H. If K = M happens to be a closed linear subspace of H, then the


characterization in Thm. 2.1 becomes (f − PM f, v) = 0 for every v ∈ M .

Proof. By Thm 2.1, (f − PM f, v − PM f ) ≤ 0 for all v ∈ M , which implies (f − PM f, v) ≤ (f −


PM f, PM f ). While for arbitrary λ ∈ R, λv ∈ M , then we have (f − PM f, λv) ≤ λ(f − PM f, v) ≤
(f − PM f, PM f ) which implies (f − PM f, v) = 0 for every v ∈ M .
Conversely, if u ∈ M satisfies (f − u, v) = 0 for every v ∈ M , then by Thm 2.1, u = PM f . □

Remark 6. In the case that M is closed and linear, PM f is called the orthogonal projection of f
onto M , and the map PM is linear.

Theorem 2.4 (Riesz-Frechet representation theorem). Given any ϕ ∈ H∗ there exists a unique
u ∈ H, s.t. ϕ(v) = ⟨ϕ, v⟩_{H∗,H} = (u, v) for every v ∈ H. Moreover, ∥ϕ∥_{H∗} = |u|_H = (u, u)^{1/2}.
Proof. Assume ϕ ̸= 0, otherwise ϕ(v) = (0, v).
Let M = [ϕ = 0] ⊂ H, a closed linear subspace. Choose x ∈ H\M, i.e. ϕ(x) ≠ 0; then
(x − PM x, y) = 0 for every y ∈ M. Denote Px = PM x, and let u = ϕ(x) (x − Px)/|x − Px|².
We claim that ϕ(v) = (u, v) for every v ∈ H. In fact, by direct computation, (u, u) = ϕ(x)²/|x − Px|² =
ϕ( ϕ(x)(x − Px)/|x − Px|² ) = ϕ(u). Given v ∈ H, let v0 = v − (ϕ(v)/ϕ(u)) u; then ϕ(v0) = 0, i.e. v0 ∈ M. By Cor
2.3, (u, v0) = 0, thus (u, v) = (u, v0 + (ϕ(v)/ϕ(u)) u) = (ϕ(v)/ϕ(u)) (u, u) = ϕ(v).
In the end,

∥ϕ∥_{H∗} = sup_{∥v∥≤1} ϕ(v) = sup_{∥v∥≤1} (u, v) ≤ |u||v| ≤ |u|,

and on the other hand, for v′ = u/|u|,

(u, v′) = |u| ≤ sup_{∥v∥≤1} (u, v) = ∥ϕ∥_{H∗}.

Finally, if there exists u′ ∈ H s.t. (u, v) = (u′, v) for every v ∈ H, then (u − u′, u − u′) = 0,
which means u = u′. □
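A finite-dimensional sketch of the representation (illustrative assumptions, not from the notes): take H = R^n with the inner product (x, y) = x^t A y of example 1◦ for a symmetric positive definite A, and the functional ϕ(v) = c^t v; then the representer is u = A⁻¹c, and ∥ϕ∥_{H∗} ≈ |u|_H numerically.

    # a minimal sketch, assuming H = R^n with (x, y) = x^T A y and phi(v) = c^T v
    import numpy as np
    rng = np.random.default_rng(1)

    n = 4
    B = rng.normal(size=(n, n))
    A = B @ B.T + n * np.eye(n)        # symmetric positive definite
    c = rng.normal(size=n)

    u = np.linalg.solve(A, c)          # representer: (u, v) = u^T A v = c^T v for all v

    v = rng.normal(size=n)
    print(np.isclose(u @ A @ v, c @ v))                       # phi(v) = (u, v)

    # dual norm: sup over the H-unit sphere of phi(v), compared with |u|_H
    vs = rng.normal(size=(20000, n))
    vs /= np.sqrt(np.einsum('ij,jk,ik->i', vs, A, vs))[:, None]   # normalize in |.|_H
    print((vs @ c).max(), np.sqrt(u @ A @ u))                 # sampled sup is close to |u|_H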

Definition 16. Bounded linear operator. Let E, F be two normed vector spaces. If A : E → F
is linear, and if sup_{∥u∥_E ≤ 1} ∥Au∥_F < ∞, we say A is a bounded linear operator.
We denote the space of bounded linear operators by L(E, F).
Remark 7. Actually one can confirm that a linear operator A : E → F belongs to L(E, F) iff A is continuous.
Examples. (The following functionals are necessarily nonlinear, by Remark 7.)
1◦ Bounded but not continuous functional: many examples are known from previous study.
2◦ Continuous but not bounded functional: let en = (0, . . . , 0, 1, 0, . . . ) ∈ l2, where 1 only
appears at the n-th coordinate. Define fn(x) = max{0, 1/2 − ∥x − en∥} and

f(x) = Σ_{n=1}^{∞} 2n fn(x)

for any x ∈ l2. Clearly fn is supported on the disjoint closed balls B̄(en, 1/2). Hence f is a continuous
functional l2 → R, but f(en) = n tells us that f is unbounded on the bounded set {en : n ≥ 1}.
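A quick numerical sanity check of example 2◦ (a sketch, not from the notes; it represents elements of l2 by finitely supported sequences, which suffices for the points en used above):

    # a minimal sketch: evaluate f on finitely supported sequences only
    import numpy as np

    def f(x):
        # x is a finite array standing for (x_1, ..., x_N, 0, 0, ...) in l2
        total = 0.0
        for n in range(1, len(x) + 1):
            e_n = np.zeros(len(x)); e_n[n - 1] = 1.0
            total += 2 * n * max(0.0, 0.5 - np.linalg.norm(x - e_n))
        # for n > len(x), ||x - e_n||^2 = ||x||^2 + 1 >= 1 > 1/4, so those terms vanish
        return total

    for n in (1, 5, 50):
        e_n = np.zeros(n); e_n[-1] = 1.0
        print(n, np.linalg.norm(e_n), f(e_n))   # the norm stays 1 while f(e_n) = n grows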
Definition 17. Bidual space. Let E be a normed vector space. Recall the definition of its dual
space E ∗ and E ∗ is a Banach space. Then we call E ∗∗ = (E ∗ )∗ its bidual.
Proposition 2.5. Let x ∈ E. Define ξ : E ∗ → R by ⟨ξ, f ⟩E ∗∗ ,E ∗ = ⟨f, x⟩E ∗ ,E for every f ∈ E ∗ .
Then

(1) ξ ∈ E ∗∗ .
(2) The map J : E → E ∗∗ , x 7→ ξ for ∀x ∈ E is linear and bounded.
J is called the canonical injection.

Proof.
(1) Clearly, ξ is linear. Since ⟨f, x⟩E ∗ ,E ≤ ∥f ∥E ∗ ∥x∥, we have ⟨ξ, f ⟩E ∗∗ ,E ∗ ≤ ∥x∥∥f ∥E ∗ . That is
to say, sup_{∥f∥_{E∗}≤1} ⟨ξ, f⟩_{E∗∗,E∗} ≤ ∥x∥ < ∞.

(2) For any a, b ∈ R, x, y ∈ E, f ∈ E ∗ ,

⟨J(ax + by), f ⟩ = ⟨f, ax + by⟩ = a⟨f, x⟩ + b⟨f, y⟩


= a⟨Jx, f ⟩ + b⟨Jy, f ⟩ = ⟨aJx + bJy, f ⟩.

And the boundedness follows from (1). □

Remark. We say E is reflexive, if J(E) = E ∗∗ .

Theorem 2.6. A Hilbert space H must be reflexive.

Proof. Define L : H → H ∗ by (Lu)(v) = (u, v), ∀v ∈ H. By Thm 2.4, for every ϕ ∈ H ∗ , there
exists unique u ∈ H, s.t. ϕ = Lu, and ∥Lu∥H ∗ = |u|. Thus, L : H → H ∗ is linear, bijective and
an isometry.
Define A : H ∗∗ → H ∗ , by for every ξ ∈ H ∗∗

⟨Aξ, u⟩H ∗ ,H = ⟨ξ, Lu⟩H ∗∗ ,H ∗ for all u ∈ H

Since Aξ ∈ H∗ (Prove it!), the definition of A makes sense. Then for every ξ ∈ H∗∗, there exists
v ∈ H s.t. Aξ = Lv. Now for any Lu ∈ H ∗ ,

⟨Jv, Lu⟩H ∗∗ ,H ∗ = ⟨Lu, v⟩H ∗ ,H = (u, v)


= ⟨Lv, u⟩H ∗ ,H = ⟨Aξ, u⟩H ∗ ,H = ⟨ξ, Lu⟩H ∗∗ ,H ∗

which means Jv = ξ, so J is surjective. □

2.2. Orthogonality in Hilbert space.

Definition 18. Orthogonality. Let M ⊂ H be a linear subspace. Set

M ⊥ := {u ∈ H : (u, v) = 0 for all v ∈ M } .

We say that M ⊥ is the space orthogonal to M .



Theorem 2.7. (M ⊥ )⊥ = M̄ .

Proof. Step 1. Firstly, we show that for any linear subspace N ⊂ H, N ⊥ is closed. Actually,
from ui → u with ui ∈ N ⊥ , we know (ui , v) = 0, ∀v ∈ N , so (u, v) = 0, ∀v ∈ N and u ∈ N ⊥ .
Step 2. Next, we show that M̄ ⊂ (M ⊥ )⊥ . For u ∈ M , (u, v) = 0, ∀v ∈ M ⊥ , and u ∈ (M ⊥ )⊥ . So
M ⊂ (M ⊥ )⊥ , and by Step 1, (M ⊥ )⊥ is closed, hence M̄ ⊂ (M ⊥ )⊥ .
Step 3. Finally, we prove that (M ⊥ )⊥ ⊂ M̄ . Suppose by contradiction that f ∈ (M ⊥ )⊥ \M̄ .
Denote Pf = PM̄ f . By Cor 2.3, (f − Pf , v) = 0 for all v ∈ M̄ , then f − Pf ∈ M̄ ⊥ . Since
f ∈ (M ⊥ )⊥ and clearly (M̄ )⊥ ⊂ (M )⊥ , so (f − Pf , f ) = 0. Hence (f − Pf , f − Pf ) = 0, which
means f = Pf ∈ M̄ , a contradiction! □

Theorem 2.8 (Orthogonal decomposition). Let M ⊂ H be a closed linear subspace. For any
u ∈ H, u = PM u + PM ⊥ u is an orthogonal decomposition of u to M and M ⊥ . Moreover, the
orthogonal decomposition is unique in the sense that if u = uM + uM ⊥ for some uM ∈ M and
uM ⊥ ∈ M ⊥ , then uM = PM u and uM ⊥ = PM ⊥ u.

Proof. We write u = PM u + (u − PM u). By Corollary 2.3, u − PM u ∈ M ⊥ . In addition,


(u − (u − PM u), v) = (PM u, v) = 0 for any v ∈ M ⊥ . Corollary 2.3 also implies that PM ⊥ u =
u − PM u.
If u = uM +uM ⊥ for some uM ∈ M and uM ⊥ ∈ M ⊥ , then uM −PM u = PM ⊥ u−uM ⊥ ∈ M ∩M ⊥ .
But M ∩ M ⊥ = {o}. Hence uM = PM u and uM ⊥ = PM ⊥ u. □
Remark. We can use the Orthogonal decomposition to prove (M ⊥ )⊥ ⊂ M̄ . Let u ∈ (M ⊥ )⊥ .
Then u = PM̄ u + PM ⊥ u since M ⊥ = (M̄ )⊥ . But PM ⊥ u = o which follows from Corollary 2.3 and
(u − o, v) = 0 for any v ∈ M ⊥ .

2.3. Lax-Milgram theorem.

Definition 19. A bilinear form a : H × H → R is said to be


(1) continuous if there is a constant C > 0 such that

|a(u, v)| ≤ C|u||v| for all u, v ∈ H;

(2) coercive if there exists a constant α > 0 such that

a(v, v) ≥ α|v|2 for every v ∈ H.

Theorem 2.9 (Lax-Milgram). Assume a(·, ·) is a continuous, coercive, bilinear form on a Hilbert
space H. Then given any ϕ ∈ H ∗ , there exists a unique u ∈ H, s.t.

a(u, v) = ⟨ϕ, v⟩ for every v ∈ H.


Moreover, if a is symmetric, then u is characterized by

u ∈ H and (1/2) a(u, u) − ⟨ϕ, u⟩ = min_{v∈H} { (1/2) a(v, v) − ⟨ϕ, v⟩ }.
Proof. (1) Firstly, we show the characterization under the symmetric setting.
Clearly, a(u, v) is a new inner product now. Since a is continuous and coercive, there exist
constants c, c′ > 0 such that c′|u|² ≤ a(u, u) ≤ c|u|², which implies that H1 := (H, a(·, ·)) is
still Hilbert.
From ϕ ∈ H∗, it follows that ϕ(v) ≤ M|v|, ∀v ∈ H, for some M > 0; together with
|v| ≤ (1/√c′) a(v, v)^{1/2}, we have that ϕ is also a bounded linear functional on H1.
Now, by the Riesz-Frechet representation theorem, there exists a unique u ∈ H, s.t.

a(u, v) = ⟨ϕ, v⟩ for every v ∈ H.

Since a(v − u, v − u) ≥ 0, we have a(v, v) − 2a(u, v) ≥ a(u, u) − 2a(u, u) for every v ∈ H. It
follows that

(1/2) a(u, u) − ⟨ϕ, u⟩ = min_{v∈H} { (1/2) a(v, v) − ⟨ϕ, v⟩ }.

Conversely, if (1/2) a(u, u) − ⟨ϕ, u⟩ ≤ (1/2) a(v, v) − ⟨ϕ, v⟩ for every v ∈ H, then we must have
d/dt|_{t=0} [ (1/2) a(u + tw, u + tw) − ⟨ϕ, u + tw⟩ ] = 0 for every w ∈ H. By a direct computation,
a(u, w) = ⟨ϕ, w⟩ holds for every w ∈ H.

(2) Next, we handle the situation without symmetry. Our aim is to find u ∈ H, s.t. ⟨ϕ, v⟩ =
a(u, v) for every v ∈ H.
By Riesz-Frechet representation theorem, there exists an f ∈ H, s.t. ⟨ϕ, v⟩ = (f, v) for every v ∈
H. And if we fix an u ∈ H, the map v 7→ a(u, v) is continuous and linear on H. So again
by Riesz-Frechet representation theorem, there exists an element in H denoted by Au, s.t.
a(u, v) = (Au, v) for every v ∈ H. Then this naturally induces a map A : H → H, u 7→ Au.
A satisfies
1◦ (A(λu + λ′ u′ ), v) = a(λu + λ′ u′ , v) = λa(u, v) + λ′ a(u′ , v) = λ(Au, v) + λ′ (Au′ , v) =
(λAu + λ′ Au′ , v), so A is linear.
2◦ (Au, Au) = a(u, Au) ≤ c|Au||u|, so |Au| ≤ c|u| i.e. A is continuous.
3◦ |Au||u| ≥ (Au, u) = a(u, u) ≥ c′ |u|2 , so |Au| ≥ c′ |u|.
Claim 1. The condition 3◦ above implies that A is injective and the range R(A) = {Au : u ∈ H}
is closed. In fact, if Au1 = Au2, then |u1 − u2| ≤ (1/c′)|Au1 − Au2| and u1 = u2. If (fi := Aui)i≥1
converges to f̄ ∈ H, then |ui − uj| ≤ (1/c′)|Aui − Auj| → 0 as i, j → ∞, and there exists ū ∈ H s.t.
ui → ū. By the continuity of A, we have Aū = f̄. Thus R(A) is closed.
Claim 2. R(A)⊥ = {o}. Actually, if w ∈ R(A)⊥ ⊂ H, then (Au, w) = 0 for all u ∈ H. Taking
u = Aw, we have a(w, w) = (Aw, w) = 0 ≥ c′ |w|2 , so w = 0.
Therefore, by Thm 2.7, the closure of R(A) equals (R(A)⊥)⊥ = H; since R(A) is closed, we have R(A) = H.
As a result, A : H → H is surjective and injective, and hence there exists u ∈ H, s.t. Au = f .
That is to say, ⟨ϕ, v⟩ = (f, v) = (Au, v) = a(u, v) for every v ∈ H. □
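A finite-dimensional sketch of the theorem (illustrative assumptions, not from the notes): in H = R^n, a bilinear form a(u, v) = v^t M u whose symmetric part is positive definite is continuous and coercive but need not be symmetric, and solving a(u, v) = ⟨ϕ, v⟩ for all v amounts to one linear system.

    # a minimal sketch: H = R^n, a(u,v) = v^T M u with SPD symmetric part
    import numpy as np
    rng = np.random.default_rng(2)

    n = 6
    S = rng.normal(size=(n, n)); S = S @ S.T + n * np.eye(n)   # SPD part (coercivity)
    W = rng.normal(size=(n, n)); W = W - W.T                   # skew part (non-symmetry)
    M = S + W                                                  # a(v, v) = v^T S v > 0 for v != 0

    c = rng.normal(size=n)            # phi(v) = c^T v
    u = np.linalg.solve(M, c)         # the unique u with a(u, v) = phi(v) for all v

    v = rng.normal(size=n)
    print(np.isclose(v @ M @ u, c @ v))     # a(u, v) = <phi, v> for a random test vector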

2.4. Sobolev spaces and weak solutions.


Let Ω ⊂ Rn be an open subset, f : Ω → R be a measurable function, and we define the support
of function f by
supp f := the closure of {x ∈ Ω : f(x) ≠ 0}.
Now we recall the following function spaces,

Cc(Ω) := {f ∈ C(Ω̄) : supp f ⊂ Ω is compact}
Cc^k(Ω) := Cc(Ω) ∩ C^k(Ω̄)
Cc^∞(Ω) := Cc(Ω) ∩ C^∞(Ω̄)

Remark. Clearly, Cc∞ (Ω) ⊂ Cck (Ω) ⊂ Cc (Ω) ⊂ L2 (Ω).

Proposition 2.10. L2 (Ω) is complete.

We leave its proof as an exercise, and actually we can prove a more general result for Lp (Ω).

Definition 20. (1) For f, g ∈ Cc1(Ω), (f, g)_{H01} := ∫_Ω f(x)g(x) dx + Σ_{k=1}^{n} ∫_Ω ∂_k f(x) ∂_k g(x) dx
defines an inner product on the linear space Cc1(Ω).
(2) Define H01(Ω) to be the completion of (Cc1(Ω), | · |_{H01}), where | · |_{H01} denotes the induced norm
of (·, ·)_{H01}.

Exercise: Please verify the statement (1).


The Sobolev space H01 (Ω) can be described precisely as follows, thanks to the completeness of
L2 (Ω).
Theorem 2.11. Let X := Cc1(Ω) × Cc(Ω) × · · · × Cc(Ω) (with n copies of Cc(Ω)), and
Y := L2(Ω) × · · · × L2(Ω) (with n + 1 copies). Denote

X1 := {(f, f1, ..., fn) ∈ X : fk = ∂k f, k = 1, ..., n}

to be a linear subspace of X. Denote the closure of X1 in Y by X̄1. Then, X̄1 is a completion of
(Cc1(Ω), | · |_{H01}).

Proof. Define φ : Cc1 (Ω) → X by

φ(f ) = (f, ∂1 f, ..., ∂n f ).



Then, φ(Cc1(Ω)) = X1, and φ is an isometry from (Cc1(Ω), | · |_{H01}) into Y whose image X1 is dense in X̄1. Since Y is complete and X̄1 is closed in Y, X̄1 is complete too. □


We remark that each metric space has a unique completion up to isometries, as shown in the
first section.

Definition 21. For each “point” in H01 (Ω), we identify it with a point (u, u1 , . . . , un ) ∈ X̄1 .
Usually, we write u ∈ H01 (Ω), instead of (u, u1 , ..., un ) ∈ X̄1 . Here u means a function in
H01 (Ω) as well as a function in L2 (Ω). For each j, we denote ∂j u := uj , and ∂j u is said to be the
weak partial derivative of u.

Remark 8. It is reasonable to write u ∈ H01 . It is easily seen that, for f ∈ Cc1 (Ω) ⊂ H01 , ∂j f is
determined by f . In fact, for (u, u1 , ..., un ) ∈ X̄1 , by the knowledge of real analysis, uj is also
uniquely determined a.e. by u.

Theorem 2.12. For each u ∈ H01 (Ω), we have


Z Z
∂j u(x)v(x)dx = − u(x)∂j v(x)dx, ∀v ∈ Cc1 (Ω).
Ω Ω

Proof. Let (f^i)_{i≥1} be a Cauchy sequence in (Cc1(Ω), (·, ·)_{H01}). Then ((f^i, ∂1 f^i, . . . , ∂n f^i))_{i≥1} is a
Cauchy sequence in X̄1. Since Y is complete, there is a unique (u, u1, . . . , un) ∈ Y such that
|f^i − u|_{L2} → 0 and |∂j f^i − uj|_{L2} → 0 as i → ∞ for each j = 1, . . . , n. For any v ∈ Cc1(Ω), from
integration by parts, it follows that

∫_Ω ∂j f^i(x) v(x) dx = − ∫_Ω f^i(x) ∂j v(x) dx,

and then letting i → ∞ we conclude

∫_Ω uj(x) v(x) dx = − ∫_Ω u(x) ∂j v(x) dx.    □
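The identity can also be checked numerically in one dimension; the following is a rough sketch with purely illustrative choices (Ω = (0,1), u(x) = sin πx, and a smooth bump v supported in (0,1); none of these come from the notes).

    # a minimal numerical check of integration by parts with compact support
    import numpy as np

    x = np.linspace(0.0, 1.0, 20001)
    h = x[1] - x[0]
    with np.errstate(divide='ignore'):
        v = np.where((x > 0) & (x < 1), np.exp(-1.0 / (x * (1.0 - x))), 0.0)  # bump, zero at ends
    u = np.sin(np.pi * x)

    du = np.gradient(u, h)
    dv = np.gradient(v, h)
    lhs = np.sum(du * v) * h       # integral of u' v
    rhs = -np.sum(u * dv) * h      # - integral of u v'
    print(lhs, rhs)                # agree up to discretization error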

2.4.1. Weak Solution to the Dirichlet Problem. Now we consider the problem
−∆u(x) + u(x) = f(x),  x ∈ Ω;    u(x) = 0,  x ∈ ∂Ω,    (P1)

where f ∈ C(Ω̄).
In functional analysis, we consider this problem from a dual viewpoint. Define T : L2(Ω) → (L2(Ω))∗ by

(Tf)(v) = ∫_Ω f(x)v(x) dx,   ∀v ∈ L2(Ω).

Define A : C2(Ω̄) → (L2(Ω))∗ by

(Au)(v) = ∫_Ω (−∆u + u) v dx,   ∀v ∈ L2(Ω).

The solution of (P1) is now reduced to solve

Au = T f.

By integration by parts, we have


(Au)(v) = ∫_Ω u(x)v(x) dx + Σ_{k=1}^{n} ∫_Ω ∂_k u(x) ∂_k v(x) dx   for every v ∈ Cc1(Ω).

Define B : H01(Ω) → (H01(Ω))∗ by

(Bu)(v) = ∫_Ω u(x)v(x) dx + Σ_{k=1}^{n} ∫_Ω ∂_k u(x) ∂_k v(x) dx   for every v ∈ H01(Ω).

Now, instead of solving (P1), we aim to find u ∈ H01 , such that

Bu = T f.

Such a u ∈ H01 (Ω) is said to be the weak solution of (P1). One benefit now is the completeness!
To find u ∈ H01 (Ω) s.t. Bu = T f holds, recalling the definitions of (u, v)H01 and (u, v)L2 , it is
equivalent to solving
(u, v)H01 = (f, v)L2 for every v ∈ H01 (Ω).
Since (f, v)_{L2} ≤ |f|_{L2} |v|_{L2} ≤ |f|_{L2} |v|_{H01}, ϕ(v) = (f, v)_{L2} is a bounded linear functional on H01(Ω).
Thus, by the Riesz-Frechet representation theorem, there exists a unique u ∈ H01(Ω) s.t.

(u, v)H01 = (f, v)L2 for every v ∈ H01 (Ω).
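A Galerkin sketch of this weak problem (all concrete choices below are illustrative assumptions, not from the notes): on Ω = (0, π) use the trial functions v_k(x) = sin(kx) ∈ H01(0, π), assemble the finite system from (u, v)_{H01} = (f, v)_{L2}, and recover a manufactured exact solution.

    # a minimal sketch: Galerkin solution of (u, v)_{H01} = (f, v)_{L2} on (0, pi)
    import numpy as np

    N = 20
    x = np.linspace(0.0, np.pi, 4001)
    h = x[1] - x[0]
    V = np.array([np.sin(k * x) for k in range(1, N + 1)])        # basis values
    dV = np.array([k * np.cos(k * x) for k in range(1, N + 1)])   # their derivatives

    # manufactured data: u(x) = sin x + 0.3 sin 3x, so f = -u'' + u = 2 sin x + 3 sin 3x
    u_exact = np.sin(x) + 0.3 * np.sin(3 * x)
    f = 2.0 * np.sin(x) + 3.0 * np.sin(3 * x)

    w = np.full_like(x, h); w[0] = w[-1] = 0.5 * h                # trapezoid weights
    A = (V * w) @ V.T + (dV * w) @ dV.T                           # A_jk = (v_k, v_j)_{H01}
    b = (V * w) @ f                                               # b_j  = (f, v_j)_{L2}
    c = np.linalg.solve(A, b)
    u_h = c @ V

    print(np.max(np.abs(u_h - u_exact)))    # small: the weak solution is recovered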

2.5. The weak solution to the boundary value problem.


A second-order linear differential operator L : C 2 (Ω̄) → C(Ω̄) is of the form
Lu(x) = −div( Σ_{j=1}^{n} a_{ij}(x) ∂u(x)/∂x_j ) + Σ_{i=1}^{n} a_i(x) ∂u(x)/∂x_i + a_0(x) u(x)   for every u ∈ C2(Ω̄), x ∈ Ω̄.

Here a_{ij} ∈ C1(Ω̄), a_i ∈ C0(Ω̄), i = 0, 1, ..., n, and ( Σ_{j=1}^{n} a_{ij}(x) ∂u(x)/∂x_j ) denotes the vector

( Σ_{j=1}^{n} a_{1j}(x) ∂u(x)/∂x_j , Σ_{j=1}^{n} a_{2j}(x) ∂u(x)/∂x_j , ..., Σ_{j=1}^{n} a_{nj}(x) ∂u(x)/∂x_j ).

Definition 22. Elliptic operator. We say L is elliptic, if there exists θ > 0, s.t. Σ_{i,j=1}^{n} a_{ij}(x) ξ_i ξ_j ≥
θ|ξ|², for any x ∈ Ω and ξ = (ξ1, ..., ξn) ∈ Rn.

Suppose a second-order linear differential operator L is elliptic from C2(Ω̄) into C(Ω̄) with
a_i = 0, i.e. Lu(x) = −div( Σ_{j=1}^{n} a_{ij}(x) ∂u(x)/∂x_j ) + a_0(x)u(x), where a_{ij} ∈ C1(Ω̄) and a_0 ∈ C(Ω̄).
Assume there is θ0 > 0, s.t. a_0(x) ≥ θ0 for all x ∈ Ω.
Consider the Dirichlet problem: Find u ∈ C2(Ω̄) s.t.

Lu(x) = f(x),  x ∈ Ω;    u(x) = 0,  x ∈ ∂Ω.    (P2)

We say u ∈ H01(Ω) is a weak solution of (P2), if

Σ_{i,j=1}^{n} ∫_Ω a_{ij}(x) (∂u(x)/∂x_j)(∂v(x)/∂x_i) dx + ∫_Ω a_0(x)u(x)v(x) dx = ∫_Ω f(x)v(x) dx   for all v ∈ H01(Ω).

Theorem 2.13. (Existence of weak solution)


(1) The Dirichlet problem (P 2) always has a weak solution.
(2) If u ∈ C 2 (Ω̄) is a classical solution of (P 2), then u must be a weak solution.

Proof.
Step 1. Define a(·, ·) : H01(Ω) × H01(Ω) → R by

a(u, v) = Σ_{i,j=1}^{n} ∫_Ω a_{ij}(x) (∂u(x)/∂x_j)(∂v(x)/∂x_i) dx + ∫_Ω a_0(x)u(x)v(x) dx   for every u, v ∈ H01(Ω);

then u is a weak solution iff u solves

(P2′)   a(u, v) = ∫_Ω f(x)v(x) dx = (f, v)_{L2}   for all v ∈ H01(Ω).

Therefore, to prove (1), it suffices to verify that a(·, ·) satisfies the assumptions of the Lax-Milgram
theorem.
It is clear that a(·, ·) : H01(Ω) × H01(Ω) → R is bilinear. We will show that a(·, ·) is continuous
and coercive.
On one hand, since Ω̄ ⊂ Rn is compact, a_{ij} ∈ C1(Ω̄) and a_0 ∈ C(Ω̄), there exists an M > 0
s.t. |a_{ij}(x)| ≤ M, |a_0(x)| ≤ M for any x ∈ Ω. Hence,

|a(u, v)| ≤ M Σ_{i,j=1}^{n} |(∂u/∂x_j, ∂v/∂x_i)_{L2}| + M |(u, v)_{L2}|
          ≤ (n + 1) M |u|_{H01} |v|_{H01}.

Therefore, a(·, ·) is continuous.


On the other hand, since L is elliptic, there exists θ > 0, s.t.
Σ_{i,j=1}^{n} a_{ij}(x) (∂u(x)/∂x_j)(∂u(x)/∂x_i) ≥ θ|∇u(x)|²   for every x ∈ Ω.

From this and a_0(x) ≥ θ0 > 0, it follows that

a(u, u) ≥ θ ∫_Ω Σ_{i=1}^{n} (∂u(x)/∂x_i)² dx + θ0 ∫_Ω u(x)² dx
        ≥ min{θ, θ0} |u|²_{H01}.

Therefore, a(·, ·) is also coercive.


Step 2. We see that φ(v) := ∫_Ω f(x)v(x) dx, ∀v ∈ H01(Ω), is a bounded linear functional. By the Lax-Milgram
theorem, there exists a unique u ∈ H01(Ω) s.t.

a(u, v) = φ(v) = (f, v)_{L2} for every v ∈ H01(Ω).

By Step 1, this proves (1).


Step 3. If u ∈ C2(Ω̄), we have that for any v ∈ Cc1(Ω), 1 ≤ i, j ≤ n,

∫_Ω (∂u(x)/∂x_j)(∂v(x)/∂x_i) dx + ∫_Ω (∂²u(x)/∂x_i∂x_j) v(x) dx = 0.   (Prove it as an exercise.)

Then a(u, v) = ∫_Ω Lu(x)v(x) dx for every v ∈ Cc1(Ω).
Now, if u ∈ C2(Ω̄) solves (P2), we have a(u, v) = ∫_Ω f(x)v(x) dx = (f, v)_{L2} for every v ∈
Cc1(Ω). Since a(·, ·) is continuous, and Cc1(Ω) is dense in H01(Ω), we finally have

a(u, v) = (f, v)L2 for every v ∈ H01 (Ω).

This proves (2). □



3. Hilbert basis, Compact operators, Spectrum

3.1. Hilbert basis.


Suppose H is a Hilbert space. Recall that for a set A, span A denotes the set consisting of
finite linear combinations of the elements in A.

Definition 23. Algebraic basis. Let E be an n.v.s. and let {ei }i∈I be a family of vectors in E.
Notice that the index set I here may not be countable. We say {ei }i∈I is an algebraic basis, if
every x ∈ E can be uniquely written as
x = Σ_{i∈J} xi ei,  for some finite subset J ⊂ I and xi ∈ R.

Definition 24. Hilbert sum. Let {En }n≥1 be a sequence of closed subspaces of H. We say that
H is the Hilbert sum of the En’s and denote it by H = ⊕_{n≥1} En, if the following holds:

(1) {En}n≥1 are mutually orthogonal, i.e. (u, v) = 0 for any u ∈ En, v ∈ Em with n ≠ m.
(2) The linear space spanned by ∪_{n=1}^{∞} En is dense in H.

Theorem 3.1. Assume that H is the Hilbert sum of En ’s. Given u ∈ H, set un = PEn u and
sn = Σ_{k=1}^{n} uk. Then,

lim_{n→∞} sn = u.

Now it is reasonable to write u = Σ_{k=1}^{∞} uk, and we have the following Bessel-Parseval identity:

Σ_{k=1}^{∞} |uk|² = |u|².

Lemma 3.2. Let (vn)n≥1 be any sequence in H, s.t. (vm, vn) = 0 for any m ≠ n and Σ_{k=1}^{∞} |vk|² <
∞. Then s := lim_{n→∞} sn := lim_{n→∞} Σ_{k=1}^{n} vk exists and

|s|² = Σ_{k=1}^{∞} |vk|².

Proof of Lemma 3.2. Clearly, |sn − sm|² = Σ_{k=n+1}^{m} |vk|² → 0 as n, m → ∞, n < m. Thus
s = lim_{n→∞} sn ∈ H exists, by the completeness of H. By the definition of the induced norm and a
direct computation, it follows that |s|² = lim_{n→∞} |sn|² = Σ_{k=1}^{∞} |vk|². □

Proof of Theorem 3.1. Since un = PEn u, we have

(u − un, v) = 0 for all v ∈ En,

and hence (u, un) = |un|², n ∈ N. Adding them, we get (u, sn) = Σ_{k=1}^{n} |uk|². At the same time,
(sn, sn) = Σ_{k=1}^{n} |uk|², hence (u, sn) = |sn|². Therefore, |sn| ≤ |u|, which implies that Σ_{k=1}^{∞} |uk|² ≤
|u|². Now, by Lem 3.2, s = lim_{n→∞} sn = lim_{n→∞} Σ_{k=1}^{n} uk exists and |s|² = Σ_{k=1}^{∞} |uk|².

What follows aims to show lim_{n→∞} sn = u and Σ_{k=1}^{∞} |uk|² = |u|². Since uk = PEk u ∈ Ek, and
the Ek’s are mutually orthogonal, by the projection theorem we have

( u − Σ_{k=1}^{n} uk , v ) = 0 for any v ∈ Em, m ≤ n.

As n → ∞, we have (u − s, v) = 0 for all v ∈ Em. This implies that (u − s, v) = 0 for all
v ∈ span( ∪_{m=1}^{∞} Em ). Since the closure of span( ∪_{m=1}^{∞} Em ) is H and the inner product is continuous,
we have (u − s, v) = 0 for all v ∈ H. Since u − s ∈ H, we know u − s = 0, i.e. u = s, and hence
|u|² = |s|² = Σ_{k=1}^{∞} |uk|². □

Definition 25. Hilbert basis. A sequence (en )n≥1 in H is said to be a Hilbert basis of H, if it
satisfies

(1) (en, em) = δ_{nm}, where δ_{nm} is the Kronecker symbol defined by δ_{nm} := 1 if n = m and δ_{nm} := 0 if n ≠ m.
(2) span {en : n ∈ N} is dense in H.

In some textbooks, it is also called complete orthonormal system or orthonormal basis. For
example, the Fourier series.

Corollary 3.3. Let (en)n≥1 be an orthonormal basis. Then for every u ∈ H, we have u =
Σ_{k=1}^{∞} (u, ek) ek, i.e. u = lim_{n→∞} Σ_{k=1}^{n} (u, ek) ek, and |u|² = Σ_{k=1}^{∞} (u, ek)². Conversely, given any sequence
(αn)n≥1 ∈ l2, the series Σ_{k=1}^{∞} αk ek converges to some element u ∈ H s.t. (u, ek) = αk for all
k ∈ N and |u|² = Σ_{k=1}^{∞} αk².

Proof. It follows immediately from Thm 3.1 that u = Σ_{k=1}^{∞} PEk u, where Ek = span{ek} = R ek.
Indeed, PEk u = (u, ek) ek, which follows from the fact that

(u, ek) ek ∈ Ek and (u − (u, ek) ek, λek) = 0 for all λ ∈ R.

Moreover, |u|² = Σ_{k=1}^{∞} (u, ek)².
Conversely, for (αk)k≥1 ∈ l2 we have Σ_{k=1}^{∞} αk² < +∞, and hence lim_{n→∞} Σ_{k=1}^{n} |αk ek|² < +∞. Then by Lem
3.2, lim_{n→∞} Σ_{k=1}^{n} αk ek exists. Denote it by u; it is clear that (u, ek) = αk for all k ∈ N and
|u|² = Σ_{k=1}^{∞} αk². □
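A numerical illustration of Corollary 3.3 and the Bessel-Parseval identity (a sketch with illustrative choices: H = L2(0, π) and the standard orthonormal basis e_k(x) = √(2/π) sin(kx), which is not proved here):

    # a minimal sketch: Fourier sine coefficients and Parseval on L2(0, pi)
    import numpy as np

    x = np.linspace(0.0, np.pi, 20001)
    h = x[1] - x[0]
    u = x * (np.pi - x)

    E = np.sqrt(2.0 / np.pi) * np.sin(np.outer(np.arange(1, 200), x))   # rows: e_k
    coeffs = (E @ u) * h                                                # (u, e_k)

    print(np.sum(u**2) * h, np.sum(coeffs**2))   # |u|^2 vs sum of (u, e_k)^2: close
    s_n = coeffs @ E                              # partial sum of the series
    print(np.sqrt(np.sum((u - s_n)**2) * h))      # L2 distance |u - s_n| is small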

Theorem 3.4. Every separable Hilbert space has an orthonormal basis.

Proof. Let {vn : n ∈ N} be a countable dense subset of H. Let Fk denote span{vi : 1 ≤ i ≤ k}.
Clearly, ∪_{k=1}^{∞} Fk is dense in H. Now we construct an orthonormal basis as follows.
1◦ If F1 = {o}, reindex by letting Fk+1 = Fk, for all k ≥ 1. If F1 ≠ {o}, let e1 = v1/|v1|.
Then span{e1} = F1.
2◦ If F2 = F1, reindex by letting Fk+1 = Fk, for all k ≥ 2. If F2 ≠ F1, let

e2 = ( v2 − (v2, e1)e1 ) / | v2 − (v2, e1)e1 |.

Then (e1, e2) = 0 and span{e1, e2} = F2.
3◦ For step n: If Fn = Fn−1, reindex by letting Fk+1 = Fk, for all k ≥ n. If Fn ≠ Fn−1, let

en = ( vn − Σ_{i=1}^{n−1} (vn, ei)ei ) / | vn − Σ_{i=1}^{n−1} (vn, ei)ei |.

Then (ei, ej) = δij for any i, j ≤ n, and span{e1, ..., en} = Fn.

Through this Schmidt orthogonalization process, we get the {en}n≥1 above, which satisfy
all the conditions desired. □
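A small sketch of the orthogonalization step used in the proof (assuming H = R^n with the Euclidean inner product; vectors that add no new direction are simply skipped, mirroring the reindexing above):

    # a minimal Schmidt orthogonalization sketch in R^n
    import numpy as np

    def schmidt(vectors, tol=1e-12):
        basis = []
        for v in vectors:
            w = v - sum(np.dot(v, e) * e for e in basis)   # v_n - sum (v_n, e_i) e_i
            if np.linalg.norm(w) > tol:                    # F_n != F_{n-1}
                basis.append(w / np.linalg.norm(w))
        return np.array(basis)

    rng = np.random.default_rng(3)
    vs = rng.normal(size=(6, 4))
    vs[3] = vs[0] + vs[1]            # a dependent vector, to trigger the skip
    E = schmidt(vs)
    print(E.shape)                                  # at most 4 orthonormal vectors in R^4
    print(np.allclose(E @ E.T, np.eye(len(E))))     # (e_i, e_j) = delta_ij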

Remark 9. L2(Ω) is separable. Consider simple functions of the form

Σ_{k=1}^{m} qk χ_{Rk},

where Rk = Π_{i=1}^{n} (a_{ik}, b_{ik}) and qk, a_{ik}, b_{ik} ∈ Q. The set of these functions is countable and dense
in L2(Ω).

3.2. Fredholm theory of compact operators in Hilbert space.

Definition 26. Adjoint operator. Let L ∈ L(H1 , H2 ), where H1 , H2 are Hilbert spaces. Define
L∗ : H2 → H1 , by
(u, L∗ v)H1 = (Lu, v)H2 for every u ∈ H1 , v ∈ H2 .
And we call L∗ the adjoint operator of L.

Definition 27. Self-adjoint. If L ∈ L(H) and L = L∗ , i.e. (Lu, v) = (u, Lv) for all u, v ∈ H, we
say that L is self-adjoint.

Definition 28. Weak convergence. We say a sequence (xn )n≥1 in H converges weakly to x ∈ H,
written xn ⇀ x, if lim_{n→∞} (y, xn) = (y, x) for all y ∈ H.

Remark 10. We say xn → x strongly, if |xn − x| → 0. It is clear that xn → x strongly implies
xn ⇀ x. In fact, weak convergence is equivalent to strong convergence in a finite-dimensional n.v.s.
However, weak convergence need not imply strong convergence in an infinite-dimensional Hilbert
space. E.g. sin nx ∈ L2([0, 1]) converges weakly to 0, but ∥sin nx∥_{L2} → 1/√2 (as n → ∞).
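A numerical look at this example (an illustrative sketch, not from the notes): for a fixed g ∈ L2([0,1]) the inner products (sin nx, g) shrink toward 0, while the norms ∥sin nx∥ approach 1/√2.

    # a minimal sketch of weak-but-not-strong convergence of sin(nx) in L2([0,1])
    import numpy as np

    x = np.linspace(0.0, 1.0, 200001)
    h = x[1] - x[0]
    g = np.exp(-x) * (1 + x**2)          # an arbitrary fixed element of L2([0,1])

    for n in (1, 10, 100, 1000):
        fn = np.sin(n * x)
        print(n, np.sum(fn * g) * h, np.sqrt(np.sum(fn**2) * h))
    # the inner products tend to 0, the norms approach 1/sqrt(2) ~ 0.707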

Definition 29. Compact operator. In Banach setting, A ∈ L(E, F ) is said to be compact if


A(BE ) is precompact in F . Here BE := {x ∈ E : ∥x∥E ≤ 1} denotes the closed unit ball in E.
The set of all compact operators from E into F is denoted by K(E, F ). If E = F , we write
K(E) = K(E, E) for simplicity.

Remark 11. It is clear that A being a compact operator is equivalent to the closure of A(D) being compact in F
for any bounded set D ⊂ E.

Theorem 3.5. If K ∈ K(H), then K ∗ ∈ K(H).

Proof. Suppose (vn)n≥1 is a sequence in BH. We aim to show that there exists a subsequence
(vnk)k≥1 in BH such that (K∗(vnk))k≥1 converges to a point w̄ ∈ H.
Step 1. Since K(BH) is precompact, it must be separable. Let A0 = {Ku1, · · · , Kuj, · · · } be a
countable dense subset of K(BH), where uj ∈ BH. Since (vn, Ku1) ≤ ∥K∥|u1||vn| ≤ ∥K∥, there
is a subsequence (v_n^{(1)})_{n≥1} s.t. ((v_n^{(1)}, Ku1))_{n≥1} converges. Doing this step by step, we can extract a
subsequence (v_n^{(k+1)})_{n≥1} of (v_n^{(k)})_{n≥1} s.t. ((v_n^{(k+1)}, Ku_{k+1}))_{n≥1} converges. Taking the diagonal,
i.e. (v_k^{(k)})_{k≥1}, we have that ((v_k^{(k)}, Kuj))_{k≥1} converges for all j ∈ N.

Step 2. We claim that ((v_k^{(k)}, w))_{k≥1} converges uniformly in w ∈ K(BH).
Since A0 is countable and dense in K(BH), for any ϵ > 0 there are finitely many elements
Ku1, · · · , Ku_{n0} s.t. K(BH) ⊂ ∪_{i=1}^{n0} B(Kui, ϵ). For these elements, there exists N0 > 0 such that whenever
k, m ≥ N0,

|(v_k^{(k)}, Kuj) − (v_m^{(m)}, Kuj)| ≤ ϵ for all 1 ≤ j ≤ n0.

Then, for any w ∈ K(BH), there exists j0 ∈ {1, ..., n0} s.t. |w − Ku_{j0}| < ϵ, and hence

|(v_k^{(k)}, w) − (v_m^{(m)}, w)|
  ≤ |(v_k^{(k)}, w) − (v_k^{(k)}, Ku_{j0})| + |(v_k^{(k)}, Ku_{j0}) − (v_m^{(m)}, Ku_{j0})| + |(v_m^{(m)}, Ku_{j0}) − (v_m^{(m)}, w)|
  ≤ |v_k^{(k)}| |w − Ku_{j0}| + |(v_k^{(k)}, Ku_{j0}) − (v_m^{(m)}, Ku_{j0})| + |v_m^{(m)}| |w − Ku_{j0}|
  ≤ 3ϵ.

Thus it is proved that ((v_k^{(k)}, w))_{k≥1} converges uniformly in w ∈ K(BH).
Step 3. Since (K∗ v_k^{(k)}, u) = (v_k^{(k)}, Ku), it follows from Step 2 that

|K∗ v_k^{(k)} − K∗ v_m^{(m)}| = sup_{|u|≤1} (K∗ v_k^{(k)} − K∗ v_m^{(m)}, u) = sup_{|u|≤1} (v_k^{(k)} − v_m^{(m)}, Ku) → 0 as k, m → ∞.

Since H is complete, there exists w̄ ∈ H s.t. K∗ v_k^{(k)} → w̄ as k → ∞. □

Proposition 3.6. K(H) is a closed linear subspace of L(H).

Proof. Indeed, λ1 K1(BH) + λ2 K2(BH) is precompact, so K(H) is a linear subspace of L(H). Suppose (Kn)n≥1 in K(H) converges to
K ∈ L(H), and (un)n≥1 is an arbitrary sequence in BH. Then, by the proof of Thm 3.5, we
can extract a subsequence (u_k^{(k)})_{k≥1} of (un)n≥1 s.t. (Kn u_k^{(k)})_{k≥1} converges for all n ∈ N. Since
∥K u_k^{(k)} − Kn u_k^{(k)}∥ ≤ ∥K − Kn∥ → 0, we have

∥K u_k^{(k)} − K u_l^{(l)}∥ ≤ 2∥K − Kn∥ + ∥Kn u_k^{(k)} − Kn u_l^{(l)}∥
   → 2∥K − Kn∥ as k, l → ∞
   → 0 as n → ∞.

This implies that (K u_k^{(k)})_{k≥1} is a Cauchy sequence, and hence K(BH) is precompact. □

Theorem 3.7 (Fredholm alternative). Let K : H → H be a compact operator. Then


(i) N (I − K) is finite dimensional.
(ii) R(I − K) is closed and more precisely R(I − K) = N (I − K ∗ )⊥ .
(iii) N (I − K) = {o} iff R(I − K) = H.
(iv) dim N (I − K) = dim N (I − K ∗ ).

Remark 12.
(a) This theorem is also true in the Banach setting.
(b) When we were talking about the weak solution of an elliptic PDE, we defined a continuous
and coercive bilinear form a(u, v) over H01(Ω). By the Lax-Milgram theorem, for f ∈ H01(Ω)
there exists a unique u ∈ H01(Ω) s.t.

a(u, v) = (f, v) for all v ∈ H01 (Ω).

Define map K by Kf = u. It can be proved by the compact injection H01 (Ω) ⊂ L2 (Ω)
that K is compact.
(c) Fredholm alternative for I − K studies the eigenvalues of K, and hence the elliptic oper-
ators in (b).
(d) Examples. (Not closed range). By Proposition 3.6, it is easy to verify that the mapping
K : l2 → l2 defined by
K(x1, x2, . . . , xn, . . . ) = (x1, x2/2, . . . , xn/n, . . . )
is a compact operator. Moreover, R(K) is a dense subset of l2 since it contains the set
{(x1, x2, . . . , xn, . . . ) ∈ l2 : only finitely many xn are not zero}, which is dense in l2. But
R(K) is not closed since (1, 1/2, . . . , 1/n, . . . ) ∈ l2 \ R(K).
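A small sketch for example (d) (finite truncations only; the infinite sequence space is of course not represented exactly): the rank-M truncation K_M of K keeps the first M weighted coordinates, and ∥K − K_M∥ = 1/(M+1) → 0, so K is an operator-norm limit of finite-rank (hence compact) operators, in line with Proposition 3.6.

    # a minimal sketch of the diagonal operator K(x) = (x_1, x_2/2, ..., x_n/n, ...)
    import numpy as np

    N = 10**6
    weights = 1.0 / np.arange(1, N + 1)     # K acts diagonally with weights 1/n

    # operator-norm distance between K and its rank-M truncation: sup_{n>M} 1/n = 1/(M+1)
    for M in (10, 100, 1000):
        print(M, weights[M:].max(), 1.0 / (M + 1))

    # (1, 1/2, ..., 1/n, ...) lies in l2 (the partial sums of 1/n^2 converge) ...
    print(np.sum(weights**2))               # ~ pi^2 / 6
    # ... but its preimage under K would be (1, 1, 1, ...), which is not in l2.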


Proof. (1) If dim N (I − K) = +∞, one can select an orthonormal set {uk }k∈N ⊂ N (I − K).
Then uk − Kuk = 0 for all k ∈ N. It follows that |uk − uj |2 = 2 = |Kuk − Kuj |2 for all k ̸= j.
However, this contradicts the compactness of K, as (Kuk)k≥1 ⊂ K(BH) would not contain any
convergent subsequence. Thus (i) is proved.

(2) We will prove that there exists c0 > 0 s.t. |u − Ku| ≥ c0 |u| for all u ∈ N (I − K)⊥ . Otherwise,
there exists (uk )k≥1 ⊂ N (I − K)⊥ with |uk | = 1 and |uk − Kuk | → 0 as k → ∞. Since K is
compact, there exists v ∈ H and subsequence (ukj )j≥1 , s.t. Kukj → v. Thus ukj → v ∈ BH .
Then, since K is continuous, we have Kv = v, and hence v ∈ N (I − K). However this would
imply (v, ukj ) = 0 for all j ∈ N, so letting j → ∞, (v, v) = 0 which is a contradiction!

(3) We claim that for any A ∈ L(H), v ∈ R(A), there exists u ∈ N (A)⊥ s.t. v = Au. Actually,
since for v ∈ R(A) there exists u0 ∈ H s.t. v = Au0 , let u = PN (A)⊥ u0 . Then, (u0 − u, w) = 0 for
all w ∈ N (A)⊥ . This implies u0 − u ∈ (N (A)⊥ )⊥ = N (A), and it follows that v = Au0 = Au.
Next let (vk )k≥1 ⊂ R(I − K) satisfying vk → v. There is (uk )k≥1 ⊂ N (I − K)⊥ s.t. uk − Kuk =
vk . By (2), we have |vn − vm | ≥ c0 |un − um | which implies that uk → ū for some ū. From the
continuity of I − K it follows that ū − K ū = v, i.e. v ∈ R(I − K). This means R(I − K) is
closed.

(4) Let A ∈ L(H). v ∈ N (A∗ ) is equivalent to (Au, v) = (u, A∗ v) = 0 for all u ∈ H. This is
equivalent to v ∈ R(A)⊥ . So take A = I − K, and note that (I − K)∗ = I − K ∗ , so together
with R(A) = (R(A)⊥ )⊥ , we obtain (ii).

(5) If N (I − K) = {o}, we show R(I − K) = H. Otherwise, suppose (I − K)H =: H1 ̸= H, which


is closed according to (ii). Furthermore, setting H2 := (I − K)H1, we must have H2 ≠ H1. Indeed, since
we already know that I − K is injective, there exists u ∈ H\H1 with (I − K)u ∈ H1,
which cannot equal (I − K)v for any v ∈ H1. Therefore, letting Hk = (I − K)^k H, we have
Hk ⊊ Hk−1 . Choose uk ∈ Hk ∩ Hk+1 ⊥ with |uk | = 1, k ∈ N. The existence of uk for all k ∈ N
holds by the projection theorem. In fact, there exists α ∈ Hk \Hk+1 , then we can set

uk = ( α − P_{Hk+1} α ) / | α − P_{Hk+1} α |.

And one can easily confirm that such uk is the desired element. Then for any n > m, we have

Kun − Kum = −(un − Kun ) + (um − Kum ) + un − um .

Since −(un − Kun) + (um − Kum) + un is contained in Hm+1, together with um ∈ Hm+1⊥, we
get |Kun − Kum| ≥ |um| = 1 by orthogonality (the Pythagorean identity). This contradicts the compactness of K.
Thus we have proved that N (I − K) = {o} ⇒ R(I − K) = H.
Conversely, if R(I −K) = H, then by (ii) we have N (I −K ∗ ) = {o}, and hence R(I −K ∗ ) = H.
Thus finally, again by (ii), we get N (I − K) = {o}, and (iii) is proved.

(6) We claim that if A ∈ L(H) with dim R(A) < +∞, then A ∈ K(H). Actually, for all u ∈ BH ,
∥Au∥ ≤ ∥A∥, hence A(BH ) is bounded in R(A) which is finite-dimensional. Thus A(BH ) is
precompact.

(7) We will prove dim N (I −K) ≥ dim R(I −K)⊥ = dim N (I −K ∗ ). Otherwise, assume dim R(I −
K)⊥ > dim N (I − K). Then, we can define some map A : N (I − K) → R(I − K)⊥ to be a
one-to-one bounded linear map, and extend it to H by setting Au = 0 for all u ∈ N (I − K)⊥ .
Claim N (I − (A + K)) = {o}. Indeed, if u − (A + K)u = 0, then Au = (I − K)u ∈ R(I − K).

1◦ When u ∈ N (I − K)⊥ , then Au = 0 = (I − K)u, and hence u ∈ N (I − K). This implies


u = 0.
2 When u = u1 + u2 with u1 ∈ N (I − K), u2 ∈ N (I − K)⊥ , then Au = (I − K)(u1 + u2 ) =

(I − K)u2 ∈ R(I − K) as well as Au = Au1 + Au2 = Au1 ∈ R(I − K)⊥ . We get Au =


(I − K)u2 ∈ R(I − K) ∩ R(I − K)⊥ = {o}, and hence u2 ∈ N (I − K)⊥ ∩ N (I − K) = {o},
i.e. u2 = 0. Now we have Au = Au1 = (I − K)u1 = 0. Since A is injective on N (I − K),
so u1 = 0, and u = 0.

Thus we get N (I − (A + K)) = {o}.


By (5), we have R(I − (A + K)) = H. So let v ∈ R(I − K)⊥ be arbitrary, then there exists
u ∈ H, s.t. (I − K)u − Au = v. Since (I − K)u ∈ R(I − K) and Au ∈ R(I − K)⊥ , we have
A(−u) = v. This implies R(I − K)⊥ ⊂ R(A). Noting that A : H → R(I − K)⊥ can not be
surjective, a contradiction!
Finally, since R(I − K ∗ )⊥ = N (I − K), we also have dim N (I − K ∗ ) ≥ dim R(I − K ∗ )⊥ =
dim N (I − K). In consequence, (iv) is proved. □

3.3. Spectrum.
Let T ∈ L(E), where E is a Banach space.

Definition 30. Resolvent set. The resolvent set of T denoted by ρ(T ) is defined by

ρ(T ) := {λ ∈ R : (T − λI) is bijective from E to E} .

Definition 31. Spectrum. The complement of resolvent set in R is called the spectrum, denoted
by σ(T ), i.e. σ(T ) = R\ρ(T ).

Definition 32. Eigenvalue. λ is called an eigenvalue of T , if

N (T − λI) ̸= {o}.

The set of all eigenvalues is denoted by EV (T ). Clearly, EV (T ) ⊂ σ(T ).

Definition 33. Eigenvector. If λ ∈ EV (T ) and u ∈ N (T − λI)\{o}, then we say u is an


eigenvector of λ. And N (T − λI) is the corresponding eigenspace.

Remark 13.

(1) λ ∈ σ(T ) iff either T − λI is not injective or T − λI is not surjective.


(2) λ ∈ EV (T ) iff T − λI is not injective, or equivalently T x = λx has non-zero solution.

Before studying the properties of spectrum, we invoke a result in the next section here. The
Corollary 4.7 tells us that for Banach spaces E, F , if L ∈ L(E, F ) and L is bijective, then
L−1 ∈ L(E, F ).

Theorem 3.8. Assume dim H = ∞ and K ∈ K(H). Then


(1) 0 ∈ σ(K).
(2) σ(K)\{0} = EV (K)\{0}.
(3) σ(K) is either a finite set or a sequence converging to 0.

Lemma 3.9. If BH is compact, then H is finite-dimensional.

Proof of Lemma 3.9. Otherwise, there exists (en )n≥1 s.t. (ei , ej ) = δij , then |ei − ej |2 = 2 for any
i ≠ j, which implies (en)n≥1 does not have a convergent subsequence, a contradiction! □

Proof of Theorem 3.8. (1) Assume 0 ∈ / σ(K). Then K is bijective and so I = K ◦ K −1 ,


being the composition of a compact operator and a bounded linear operator, is compact. Then
BH = I(BH ) is compact, which contradicts to dim H = ∞. Thus (i) is proved.

(2) It is clear that σ(K)\{0} ⊃ EV (K)\{0}. Conversely, let λ ∈ σ(K) with λ ̸= 0. If λ ∈ /


K
EV (K), i.e. N (K − λI) = {o}, then because λ is also compact, it follows from Theorem 3.7(iii)
that R Kλ − I = H. Thus λ ∈ ρ(K) = R\σ(K), a contradiction! Therefore, if λ ∈ σ(K)\{0},


we must have λ ∈ EV (K).

(3) Suppose (λk )k≥1 is a sequence of distinct numbers in σ(K)\{0} s.t. λk → λ. We will show λ = 0. Indeed, since λk ∈ EV (K) by (2), there exists wk ≠ 0 s.t. Kwk = λk wk . Let Hk = span{w1 , · · · , wk }. Then Hk ⊂ Hk+1 and Hk ≠ Hk+1 for each k ≥ 1, since (wk )k≥1 are linearly independent (as can be shown by induction). Observe that (K − λk I)Hk ⊂ Hk−1 for k ≥ 2. Choose an element uk ∈ Hk for every k ≥ 2 with uk ∈ Hk−1⊥ , |uk | = 1, and choose u1 = w1 /|w1 |. Now, if k > l, then Hl−1 ⊊ Hl ⊂ Hk−1 ⊊ Hk . Thus,
   | Kuk /λk − Kul /λl | = | (K − λk I)uk /λk − (K − λl I)ul /λl − ul + uk | ≥ 1,
since uk ⊥ Hk−1 and (K − λk I)uk /λk , (K − λl I)ul /λl , ul ∈ Hk−1 . We get that if λk → λ ≠ 0, then (Kuk /λk )k≥1 does not have a convergent subsequence, a contradiction! □
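To illustrate Theorem 3.8, the following diagonal operator on ℓ² is a standard example (the operator and the space are our choices, added only as an illustration): let (ek )k≥1 be the canonical orthonormal basis and define K by Kx = (x1 , x2 /2, x3 /3, . . . ), i.e. K ek = ek /k. Then K is compact, being the operator-norm limit of its finite-rank truncations Kn x = (x1 , . . . , xn /n, 0, 0, . . . ), since ∥K − Kn ∥ = 1/(n + 1) → 0. One checks EV (K) = {1/k : k ≥ 1}; K is injective, so 0 ∉ EV (K), while the vector (1/k)k≥1 ∈ ℓ² has no preimage under K, so K is not surjective and 0 ∈ σ(K). Hence σ(K) = {0} ∪ {1/k : k ≥ 1}, a sequence converging to 0, in accordance with (1)–(3).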

Theorem 3.10. Let T ∈ L(H) be a self-adjoint operator. Set
   m = inf_{|u|=1} (T u, u) and M = sup_{|u|=1} (T u, u).
Then σ(T ) ⊂ [m, M ] and m, M ∈ σ(T ).

Proof. (1) Let λ > M . Then for any u ∈ H,

(λu − T u, u) ≥ (λ − M )(u, u).

By the Lax-Milgram Theorem, for every w ∈ H, there exists unique u ∈ H, s.t.

(λu − T u, v) = (w, v) ∀v ∈ H.

This implies λu − T u = w and hence λI − T : H → H is bijective. Therefore λ ∈ ρ(T ). Similarly,


any λ < m belongs to ρ(T ) too. Thus σ(T ) ⊂ [m, M ].

(2) We will prove M ∈ σ(T ). Define [u, v] := (M u − T u, v). Then [·, ·] is a symmetric bilinear
form and [u, u] ≥ 0, ∀u ∈ H. Then, by a similar proof of the Cauchy-Schwarz inequality for inner
product, we have
[u, v]2 ≤ [u, u][v, v],
which means |(M u − T u, v)| ≤ (M u − T u, u)^{1/2} (M v − T v, v)^{1/2} holds for any u, v ∈ H. Letting v = M u − T u, we get
   |M u − T u|^2 ≤ (M u − T u, u)^{1/2} (M v − T v, v)^{1/2}
              ≤ ∥M I − T ∥^{1/2} |M u − T u| (M u − T u, u)^{1/2}.
Then
   |M u − T u| ≤ c (M u − T u, u)^{1/2}, where c = ∥M I − T ∥^{1/2}.
If M ∈ ρ(T ), then by Cor 4.7, (M I − T )^{-1} ∈ L(H). Let (uk )k≥1 be a sequence in H satisfying |uk | = 1 and (T uk , uk ) → M as k → ∞. Then we get
   |M uk − T uk | ≤ c (M − (T uk , uk ))^{1/2} → 0.
Hence
   |uk | = |(M I − T )^{-1}(M uk − T uk )| ≤ ∥(M I − T )^{-1}∥ · |M uk − T uk | → 0,
which contradicts the assumption that |uk | = 1. Therefore, M ∈ σ(T ), and likewise m ∈ σ(T ). □

Theorem 3.11. Let H be a separable Hilbert space and T ∈ K(H) be self-adjoint. Then there
exists a countable orthonormal basis of H composed of eigenvectors of T .

Proof. By Thm 3.8, let (λk )k≥1 be the (finite or countable) sequence of distinct non-zero eigenvalues of T . Set λ0 = 0. Write H0 = N (T ) and Hk = N (T − λk I), k ≥ 1. Then, by Thm 3.7, 0 < dim Hk < ∞ for k ≥ 1, and 0 ≤ dim H0 ≤ ∞. If u ∈ Hk , v ∈ Hl for k ≠ l, then

λk (u, v) = (T u, v) = (T v, u) = λl (v, u),

which implies (u, v) = 0. Thus (Hk )k≥0 are mutually orthogonal.


Denote F = span(∪_{j=0}^∞ Hj ). We shall prove that F is dense in H.
Firstly, T (F ) ⊂ F . In fact, if uj ∈ Hj then T uj = λj uj ∈ Hj , so T (Σ_{j=0}^m uj ) ∈ F for any m ∈ N and any uj ∈ Hj . It follows that T (F ⊥ ) ⊂ F ⊥ . Indeed, since T (F ) ⊂ F , (v, u) = 0 ∀u ∈ F implies that (T v, u) = (v, T u) = 0 for all u ∈ F . Then T v ∈ F ⊥ .
Next, denote by T0 the restriction of T to F ⊥ . Note that F ⊥ is closed, and hence it is a Hilbert space. Now T0 is a self-adjoint compact operator on F ⊥ . By Thm 3.8, σ(T0 )\{0} = EV (T0 )\{0}. Since any non-zero eigenvalue of T0 is also an eigenvalue of T in H, we have σ(T0 ) ⊂ {0}. Indeed, if T0 u − λu = 0 for some λ ≠ 0 and u ∈ F ⊥ \{o}, then u is also an eigenvector of T corresponding to λ, and hence u ∈ Hj ⊂ F for some j ≥ 1, which means u ∈ F ∩ F ⊥ = {o}, a contradiction. By Thm 3.10 applied to T0 ,
   inf_{u∈F ⊥, |u|=1} (T u, u) = sup_{u∈F ⊥, |u|=1} (T u, u) = 0.

It follows that (T u, u) = 0 for all u ∈ F ⊥ . Then, since T is self-adjoint, for all u, v ∈ F ⊥ ,
   2(T u, v) = (T (u + v), u + v) − (T u, u) − (T v, v) = 0.
This implies T u ∈ (F ⊥ )⊥ ; together with T (F ⊥ ) ⊂ F ⊥ , we have T u = 0, i.e. T ≡ 0 on F ⊥ .


Consequently, F ⊥ ⊂ N (T ) ⊂ F , and so F ⊥ = {o}. Thus F̄ = (F ⊥ )⊥ = H.
Since H is separable and H0 = N (T ) is closed, H0 has an at most countable Hilbert basis. By Thm 3.7, for each λk with k ≥ 1, N (T − λk I) = N (T /λk − I) is finite-dimensional, so we can select a finite orthonormal basis of each Hk , k ≥ 1. Now the union of these bases is clearly a Hilbert basis of H consisting of eigenvectors of T . □
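We record, as an illustration (it is an immediate consequence of the theorem, not an additional result), the diagonal form of T in such a basis: if (ek )k≥1 is the orthonormal basis of eigenvectors with T ek = µk ek , then every u ∈ H satisfies
   u = Σ_k (u, ek ) ek and T u = Σ_k µk (u, ek ) ek ,
where the second series converges in H by the continuity of T .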

4. The uniform boundedness principle, The closed graph theorem

4.1. The Baire category theorem.

Theorem 4.1 (Baire category theorem). Let X be a complete metric space and let {Xn }n≥1 be
a sequence of closed subsets in X. Assume that

int Xn = ∅ for all n ≥ 1,

then
int (∪_{n=1}^∞ Xn ) = ∅.

Remark 14.
(1) A set A is called nowhere dense if int Ā = ∅.
(2) If On is open and dense (i.e. Ōn = X) for all n ≥ 1, then ∩_{n=1}^∞ On is dense in X, i.e. its closure is X.
(3) If the Xn are closed and ∪_{n=1}^∞ Xn = X, then there must exist an n0 s.t. int Xn0 ≠ ∅.

Proof. Assume int (∪_{n=1}^∞ Xn ) ≠ ∅. Then there exists B̄(x0 , r0 ) ⊂ ∪_{n=1}^∞ Xn , where x0 ∈ X, r0 > 0, and B̄(x0 , r0 ) = {x ∈ X : d(x, x0 ) ≤ r0 }.
Since int X1 = ∅, there exists x1 ∈ B(x0 , r0 )\X1 . Since X1 is closed, there exists r1 ∈ (0, 1) s.t. B(x1 , r1 ) ⊂ B(x0 , r0 ) and B̄(x1 , r1 ) ⊂ X1^c .
Proceeding step by step, there is a sequence of balls (B(xk , rk ))k≥1 s.t. B(xk , rk ) ⊂ B(xk−1 , rk−1 ), B̄(xk , rk ) ∩ Xk = ∅ and 0 < rk < 1/k for all k ∈ N.
As a result, d(xj , xk ) ≤ rk < 1/k for all j ≥ k, which implies (xk )k≥1 is Cauchy. Since X is complete, there exists x′ ∈ B̄(x0 , r0 ) s.t. xk → x′ . Thus x′ ∈ ∪_{n=1}^∞ Xn . However, x′ ∈ B̄(xk , rk ) for every k ∈ N, which implies x′ ∉ Xk for every k ∈ N. A contradiction! □

Remark. If X1 , X2 are closed and int X1 = int X2 = ∅, then int(X1 ∪ X2 ) = ∅.
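A typical application of the Baire category theorem, included here as an illustration (it is a standard fact, not used later): an infinite-dimensional Banach space E cannot have a countable Hamel (algebraic) basis. Indeed, suppose E = span{e1 , e2 , . . . } algebraically, and set Xn = span{e1 , . . . , en }. Each Xn is a finite-dimensional, hence closed, proper subspace of E, and a proper subspace contains no ball, so int Xn = ∅. Since ∪_{n=1}^∞ Xn = E, this contradicts Remark 14(3).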

4.2. The uniform boundedness principle.

Definition 34. Space of bounded linear operators. Let E, F be two n.v.s.. Denote by L(E, F ) the space of continuous (= bounded) linear operators from E into F with norm
   ∥T ∥_{L(E,F )} = sup_{∥x∥_E ≤ 1} ∥T x∥_F ,
and we will write L(E) instead of L(E, E).



Theorem 4.2 (Banach–Steinhaus, uniform boundedness principle). Let E, F be two Banach spaces and let (Ti )_{i∈I} be a family (not necessarily countable) of continuous linear operators from E to F . Assume
   sup_{i∈I} ∥Ti x∥ < ∞ for all x ∈ E,    (4.1)
then
   sup_{i∈I} ∥Ti ∥_{L(E,F )} < ∞.
Or equivalently, there exists a constant c s.t.
   ∥Ti x∥ ≤ c∥x∥ for any x ∈ E and i ∈ I.

Remark. A brief statement is that one can derive a global estimate from pointwise estimates.

Proof. For each n ≥ 1, let
   Xn = {x ∈ E : ∥Ti x∥ ≤ n for all i ∈ I}.
Then, by the continuity of the Ti , Xn = ∩_{i∈I} {x ∈ E : ∥Ti x∥ ≤ n} is closed. By (4.1), ∪_{n=1}^∞ Xn = E. Now, by the Baire category theorem, there exists n0 ≥ 1 s.t. int Xn0 ≠ ∅. Then there exist x0 ∈ int Xn0 and r > 0 s.t. B(x0 , r) ⊂ Xn0 . We have
   ∥Ti (x0 + rz)∥ ≤ n0 for all i ∈ I and z ∈ B(o, 1).
It follows that
   ∥Ti z∥ ≤ (1/r)(n0 + ∥Ti x0 ∥) for all z ∈ B(o, 1).
This, together with (4.1) applied at x0 , implies sup_{i∈I} ∥Ti ∥ < ∞. □

Corollary 4.3. Let E, F be Banach spaces. Let {Tn }_{n=1}^∞ ⊂ L(E, F ) be such that Tn x converges to T x for every x ∈ E. Then we have
(1) sup_{n∈N} ∥Tn ∥ < ∞;
(2) T ∈ L(E, F );
(3) ∥T ∥ ≤ lim inf_{n→∞} ∥Tn ∥.

Proof. (1) Tn x → T x ⇒ ∥Tn x − T x∥ → 0 ⇒ ∥Tn x∥ ≤ ∥T x∥ + c for a constant c (depending on x). Then by Thm 4.2, we are done.
(2) By (1), ∥Tn x∥ ≤ c∥x∥ for some universal constant c > 0. It follows from this and ∥Tn x − T x∥ → 0 that ∥T x∥ ≤ c∥x∥. Then T ∈ L(E, F ).
(3) Since ∥Tn x∥ ≤ ∥Tn ∥∥x∥ for all n ∈ N, we have ∥T x∥ = lim_{n→∞} ∥Tn x∥ ≤ lim inf_{n→∞} ∥Tn ∥ ∥x∥ for all x ∈ E. It follows that ∥T ∥ ≤ lim inf_{n→∞} ∥Tn ∥. □
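An example, added for illustration (the operators and the space are our choices), showing that the inequality in (3) can be strict and that pointwise convergence does not imply convergence in L(E, F ): on E = F = ℓ², let Tn x = (x_{n+1} , x_{n+2} , . . . ), the n-th power of the left shift. For every x ∈ ℓ², ∥Tn x∥² = Σ_{k>n} |xk |² → 0, so Tn x → 0 = T x with T = 0, while ∥Tn ∥ = 1 for every n (indeed ∥Tn x∥ ≤ ∥x∥ and Tn e_{n+1} = e1 ). Hence ∥T ∥ = 0 < 1 = lim inf_{n→∞} ∥Tn ∥, and ∥Tn − T ∥ = 1 does not tend to 0.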

Corollary 4.4. Let G be a Banach space and let B ∗ be a subset of G∗ . Assume that

the set ⟨B ∗ , x⟩ = {⟨f, x⟩ : f ∈ B ∗ } is bounded, for all x ∈ G.

Then B ∗ is bounded.

Proof. It is a direct corollary of Theorem 4.2 with E = G, F = R and the family {Ti : i ∈ I} = B ∗ . □

Corollary 4.5. Let G be a Banach space and B be a subset of G. Assume that

the set ⟨f, B⟩ = {⟨f, x⟩ : x ∈ B} is bounded for all f ∈ G∗ .

Then B is bounded.

Proof. Recall that each x ∈ G defines Jx ∈ G∗∗ s.t. ⟨Jx, f ⟩ = ⟨f, x⟩ for all f ∈ G∗ and ∥Jx∥ = ∥x∥ (this follows from Corollary 5.5). Notice that ⟨JB, f ⟩ = ⟨f, B⟩ is bounded for every f ∈ G∗ . By Corollary 4.4 (applied to the Banach space G∗ and the subset JB of its dual), JB is bounded, and hence B is bounded since J is an isometry. □

4.3. The open mapping theorem and The closed graph theorem.

Theorem 4.6 (Open mapping theorem). Let E, F be two Banach spaces and let T ∈ L(E, F ) be surjective. Then there exists a constant c > 0 such that

T (BE (o, 1)) ⊃ BF (o, c).

Remark 15.
(1) If we in addition assume that T is injective, then T −1 will also be continuous. Hence T
is a homeomorphism.
(2) Let U ⊂ E be open, and let y0 ∈ T (U ). Then there exist x0 ∈ U with T x0 = y0 and r > 0 such that B(x0 , r) ⊂ U . Then T (U ) ⊃ T x0 + T (B(o, r)) ⊃ y0 + B(o, rc) = B(y0 , rc), which means T (U )
is open. That is, T maps open sets to open sets. Reversely, if a linear map T : E → F
maps open sets to open sets, then T must be surjective.
(3) Clearly, a bijective T maps open sets to open sets if and only if it maps closed sets to
closed sets. However, if T ∈ L(E, F ) is only surjective but not injective, then T may not
map closed sets to closed sets. E.g. Let T : R2 → R such that T (x1 , x2 ) = x1 . Then
C = {(x1 , x2 ) | x1 > 0, x2 = 1/x1 } is closed but T (C) is not closed.

Proof. Step 1. Firstly, we prove a weak version: if T is linear and onto, then there exists a constant c > 0 s.t. the closure of T (B(o, 1)) contains B(o, 2c). (4.2)

Proof of Step 1. Let Xn be the closure of T (B(o, n)). Each Xn is closed, and since T is surjective, ∪_{n=1}^∞ Xn = F . It follows from the Baire category theorem that there exist n0 ≥ 1, x0 ∈ Xn0 and r > 0 s.t. B(x0 , r) ⊂ Xn0 .
Since Xn0 is o-symmetric, B(−x0 , r) ⊂ Xn0 too. Since Xn0 is also convex, it follows that
   B(x0 , r) + B(−x0 , r) ⊂ Xn0 + Xn0 = 2Xn0 ,
and hence
   x0 + rz1 + (−x0 ) + rz2 ∈ 2Xn0 for all z1 , z2 ∈ B(o, 1).
This implies that
   rz ∈ Xn0 for all z ∈ B(o, 1).
Let c = r/(2n0 ); then B(o, 2c) ⊂ (1/n0 )Xn0 , which is the closure of T (B(o, 1)). This proves (4.2).
Step 2. Next we show: if T ∈ L(E, F ) satisfies (4.2), then
   T (BE (o, 1)) ⊃ BF (o, c).

Proof of Step 2. (4.2) implies that the closure of T (BE (o, λ/2)) contains BF (o, λc) for every λ > 0.
Let y ∈ BF (o, c). Choose ϵ = c/2; then there exists x1 ∈ E s.t. ∥x1 ∥ < 1/2 and
   ∥y − T x1 ∥ < c/2,
which implies y − T x1 ∈ BF (o, c/2).
Next choose ϵk = c/2², c/2³, · · · step by step; then there exists (xk )k≥1 s.t. for every k ∈ N,
   ∥xk ∥ < 1/2^k and ∥y − T x1 − · · · − T xk ∥ < c/2^k .
The fact ∥xk ∥ < 1/2^k implies that (Σ_{k=1}^n xk )_{n≥1} is a Cauchy sequence, and thus it converges to a point x̄. Also Σ_{k=1}^n T xk → y, and by the continuity of T we have y = T x̄.
Now we show x̄ ∈ BE (o, 1), and by the arbitrariness of y ∈ BF (o, c) our proof is finished. Actually, denote d = 1/2 − ∥x1 ∥ > 0. Since ∥xk ∥ < 1/2^k for all k ∈ N,
   1 − ∥x̄∥ ≥ (1/2 − ∥x1 ∥) + (1/2 − ∥Σ_{k=2}^∞ xk ∥) ≥ d > 0.
Thus x̄ ∈ BE (o, 1). □
Corollary 4.7. If T ∈ L(E, F ) in Thm. 4.6 is additionally bijective, then T is a homeomorphism
between E and F .
Proof. Since T is bijective, its inverse T −1 is linear. By Thm 4.6,
   T (B(o, 1)) ⊃ B(o, c) ⇒ T −1 (B(o, c)) ⊂ B(o, 1)
                        ⇒ T −1 (B(y, cε)) ⊂ B(T −1 y, ε), ∀y ∈ F, ∀ε > 0.

This proves the continuity of T −1 . □



Corollary 4.8. Let E be a v.s. equipped with two norms ∥ · ∥1 and ∥ · ∥2 . If E is complete for both norms, and if there exists c > 0 s.t.
   ∥x∥2 ≤ c∥x∥1 for all x ∈ E,
then the two norms are equivalent, i.e. there exists c′ > 0 s.t.
   (1/c′)∥x∥2 ≤ ∥x∥1 ≤ c′∥x∥2 for all x ∈ E.
(Equivalence implies that the topologies generated by the two norms are the same.)

Proof. Let E = (E, ∥ · ∥1 ), F = (E, ∥ · ∥2 ) and T = I. By Thm 4.6, there exists a constant c̃ > 0 s.t. T (BE (o, 1)) = BE (o, 1) ⊃ BF (o, c̃), which means that for every x ≠ 0,
   ∥ c̃ x / ∥x∥2 ∥_1 ≤ 1.
Thus ∥x∥1 ≤ (1/c̃)∥x∥2 , and now we can set c′ = max{c, 1/c̃}. □
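The completeness assumption in Corollary 4.8 cannot be dropped; the following standard example is added for illustration (the space and the norms are our choices). On E = C([0, 1]) consider ∥f ∥∞ = max_{t∈[0,1]} |f (t)| and ∥f ∥1 = ∫_0^1 |f (t)| dt. Then ∥f ∥1 ≤ ∥f ∥∞ for all f , but the two norms are not equivalent: for fn (t) = t^n we have ∥fn ∥∞ = 1 while ∥fn ∥1 = 1/(n + 1) → 0. There is no contradiction with Corollary 4.8, because (C([0, 1]), ∥ · ∥1 ) is not complete.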


Theorem 4.9 (Closed graph theorem). Let E, F be two Banach spaces and let T be a linear
operator from E into F . Assume that the graph of T , G(T ) := {(x, T x) ∈ E × F : x ∈ E} is
closed in E × F . Then T is continuous.

Remark 16.
(1) The graph of any continuous map is closed.
(2) The norm in E × F is defined by

∥(x, y)∥E×F = ∥x∥E + ∥y∥F for every (x, y) ∈ E × F.

It is simple to verify that such space is a Banach space.

Proof. Denote ∥x∥1 = ∥x∥E and ∥x∥2 = ∥x∥E + ∥T x∥F .


Claim. ∥ · ∥2 is a norm and (E, ∥ · ∥2 ) is complete.
Proof of Claim. ∥ · ∥2 is a norm, since
1◦ From ∥x∥2 = 0, we get ∥x∥E = 0 and hence x = 0.
2◦ ∥x + y∥2 ≤ ∥x∥E + ∥y∥E + ∥T x∥F + ∥T y∥F .
Let (xn )n≥1 be an arbitrary sequence s.t. ∥xk − xj ∥2 → 0 as k, j → ∞. Then ∥xk − xj ∥E → 0 and ∥T xk − T xj ∥F → 0 as k, j → ∞. Thus there exist x̄, ȳ s.t. lim_{n→∞} xn = x̄ in E and lim_{n→∞} T xn = ȳ in F . Hence it is clear that (xn , T xn ) → (x̄, ȳ) in E × F , and since G(T ) is closed, we have (x̄, ȳ) ∈ G(T ), i.e. ȳ = T x̄. Consequently, xn → x̄ in (E, ∥ · ∥2 ) and hence (E, ∥ · ∥2 ) is Banach.

Now from ∥x∥1 ≤ ∥x∥2 and Corollary 4.8, there exists c > 0 s.t. ∥x∥2 ≤ c∥x∥1 , i.e. ∥x∥E + ∥T x∥F ≤ c∥x∥E (note that necessarily c ≥ 1). Thus ∥T x∥F ≤ (c − 1)∥x∥E , which means T ∈ L(E, F ). □
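A classical application of the closed graph theorem, added here as an illustration (it is usually called the Hellinger–Toeplitz theorem and is not used elsewhere in these notes): if H is a Hilbert space and T : H → H is linear, everywhere defined and symmetric, i.e. (T x, y) = (x, T y) for all x, y ∈ H, then T ∈ L(H). Indeed, suppose xn → x and T xn → y. Then for every z ∈ H,
   (y, z) = lim_{n→∞} (T xn , z) = lim_{n→∞} (xn , T z) = (x, T z) = (T x, z),
so y = T x. Hence G(T ) is closed, and Theorem 4.9 applies.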

Remark 17. Let E, F be Banach spaces, D ⊂ E be such that D̄ = E. Let A : D → F be linear and G(A) be closed in E × F . Recall that N (A), R(A) denote the kernel and range. Denote G = G(A), L = E × {o}. Then
   N (A) × {o} = G ∩ L,
   E × R(A) = G + L,
   {o} × N (A∗ ) = G⊥ ∩ L⊥ ,
   R(A∗ ) × F ∗ = G⊥ + L⊥ ,
where A∗ is the adjoint operator of A.



5. Hahn-Banach theorem, Bidual space

Suppose E is a vector space over R without other structures.

5.1. Analytic form of Hahn-Banach theorem.

Theorem 5.1 (Hahn-Banach theorem, analytic form). Let p : E → R be a functional satisfying

p(λx) = λp(x) for all x ∈ E, λ ≥ 0;


p(x + y) ≤ p(x) + p(y) for all x, y ∈ E.

Let G ⊂ E be a linear subspace and g : G → R be a linear functional such that

g(x) ≤ p(x) for all x ∈ G.

Under these assumptions, there exists a linear functional f defined on E that extends g, i.e.
g(x) = f (x) for all x ∈ G and f (x) ≤ p(x) for all x ∈ E.

Remark 18. We can first prove a finite dimensional version here.


Suppose E is finite dimensional. Let x0 ∈ E\G, and consider h(x + tx0 ) = g(x) + tα for all
x ∈ G. We aim to find an α satisfying g(x) + tα ≤ p(x + tx0 ).
Notice that g(x) ≤ p(x). Then, for any x, y ∈ G, since

g(x + y) = g(x) + g(y) ≤ p(x + y) ≤ p(x − x0 ) + p(y + x0 ),

we have
g(x) − p(x − x0 ) ≤ p(y + x0 ) − g(y).
 
Thus let sup_{x∈G} (g(x) − p(x − x0 )) ≤ α ≤ inf_{y∈G} (p(y + x0 ) − g(y)). Then for all x ∈ G,
   g(x) − α ≤ p(x − x0 ) and g(x) + α ≤ p(x + x0 ).
Hence, for t > 0,
   h(x + tx0 ) = g(x) + tα = t (g(x/t) + α) ≤ t p(x/t + x0 ) = p(x + tx0 ),
and for t = −s < 0,
   h(x + tx0 ) = g(x) − sα = s (g(x/s) − α) ≤ s p(x/s − x0 ) = p(x + tx0 );
the case t = 0 is just the assumption g ≤ p on G.
Thus we have extended g from G to span{G, x0 }. Since E is finite dimensional, through finitely
many such steps we can extend g to E.

Remark 19. One geometric meaning of Theorem 5.1 can be stated as follows. Suppose p(x) ≥ 0 for all x ∈ E. Let B = {x : p(x) < 1}. Define Hg ⊂ G by Hg = {x ∈ G : g(x) = 1}. It follows that Hg ∩ B = ∅ (if x ∈ Hg ∩ B, then 1 = g(x) ≤ p(x) < 1, a contradiction). Then the statement of Thm 5.1 means that one can extend Hg to Hf ⊂ E s.t. Hf ∩ B = ∅.

Before the proof of Theorem 5.1, we introduce some concepts.

Definition 35. Order relation.



(1) A relation “≤” is a partial order on a set P , if for any a, b, c ∈ P ,


1◦ a ≤ a.
2◦ a ≤ b and b ≤ a ⇒ a = b.
3◦ a ≤ b and b ≤ c ⇒ a ≤ c.
(2) We say Q is a totally ordered subset, in brief t.o.s., if Q ⊂ P satisfies that for any x, y ∈ Q, either x ≤ y or y ≤ x.
(3) We say c ∈ P is an upper bound for a subset Q ⊂ P , if x ≤ c for all x ∈ Q.
(4) We say m ∈ P is a maximal element of P , if there is no element x ∈ P such that m ≤ x except for x = m.
(5) We say that P is inductive if every totally ordered subset Q ⊂ P has an upper bound.

Lemma 5.2 (Zorn). Every nonempty ordered set that is inductive has a maximal element.

Example. Let X be a nonempty set and P(X) = {A : A ⊂ X}. Defining A ≤ B by A ⊂ B, "⊂" is a partial order on P(X).
The proof of Zorn's lemma is based on the axiom of choice. It is not required to know the proof at this moment, but one should understand the statement and how to use it!

Proof of Theorem 5.1. Let
   P = { h : D(h) ⊂ E → R | D(h) is a linear subspace of E, h is linear, h extends g, and h(x) ≤ p(x) for all x ∈ D(h) }.
On P we define a partial order "≤" by
   h1 ≤ h2 ⇔ D(h1 ) ⊂ D(h2 ) and h2 extends h1 .

Step 1. We prove that P is inductive.


Suppose Q ⊂ P is a t.o.s.. Let D̄ = ∪_{h∈Q} D(h) and h̄ : D̄ → R be defined by
   h̄(x) = h(x) if x ∈ D(h) for some h ∈ Q.
It is well-defined: if x ∈ D(h) ∩ D(h′ ) for h, h′ ∈ Q, we must have h(x) = h′ (x) because Q is totally ordered. Since Q is totally ordered, D̄ is a linear subspace, h̄ is linear, h̄ extends g and h̄ ≤ p, so h̄ ∈ P . Clearly h ≤ h̄ for all h ∈ Q, i.e. h̄ is an upper bound of Q.
Step 2. By Zorn's Lemma, there exists a maximal element in P , say f . We claim that D(f ) = E. Otherwise, there exists x0 ∈ E\D(f ). Let D′ = span{D(f ), x0 } and let F (x + tx0 ) = f (x) + tα for x ∈ D(f ), t ∈ R, where α is chosen s.t.
   f (x) + tα ≤ p(x + tx0 ) for all x ∈ D(f ), t ∈ R.
Such an α exists since we can choose
   sup_{x∈D(f )} (f (x) − p(x − x0 )) ≤ α ≤ inf_{y∈D(f )} (p(y + x0 ) − f (y))
as in Remark 18. As a result, F is linear on D′ , D(f ) ⊊ D′ , F = f on D(f ), and F ≤ p. Hence f ≤ F and f ≠ F , which contradicts the maximality of f !
In conclusion, such a maximal element f is just what we desired. □

Remark. Taking a maximal element is a useful tool for dealing with the infinite-dimensional case.

Recalling the concept of the dual space, we now give several related properties based on the Hahn–Banach theorem.

Corollary 5.3. Let E be a n.v.s. and let G ⊂ E be a linear subspace. If g : G → R is a continuous linear functional, then there exists f ∈ E ∗ that extends g and such that
   ∥f ∥_{E ∗} = sup_{x∈G, ∥x∥≤1} |g(x)| = ∥g∥_{G∗} .

Proof. Take p(x) = ∥g∥_{G∗} ∥x∥. Applying Thm 5.1, there exists a linear functional f on E s.t. f ≤ p and f = g on G. Then ±f (x) = f (±x) ≤ p(±x) = ∥g∥_{G∗} ∥x∥, so |f (x)| ≤ ∥g∥_{G∗} ∥x∥ and hence ∥f ∥_{E ∗} ≤ ∥g∥_{G∗} ; the reverse inequality is clear since f extends g. □

Corollary 5.4. For every x0 ∈ E, there exists f0 ∈ E ∗ s.t.

∥f0 ∥ = ∥x0 ∥ and ⟨f0 , x0 ⟩ = ∥x0 ∥2 .

Proof. Take G = Rx0 , then g(tx0 ) = t∥x0 ∥2 . Thus ∥g∥G∗ = ∥x0 ∥. Now by Cor 5.3, there exists
f0 ∈ E ∗ s.t. ∥f0 ∥ = ∥g∥G∗ = ∥x0 ∥ and ⟨f0 , x0 ⟩ = g(x0 ) = ∥x0 ∥2 . □
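In a Hilbert space the functional of Corollary 5.4 can be written down explicitly (we add this as an illustration; no Hahn–Banach argument is needed there): for x0 ∈ H, take f0 = (·, x0 ), i.e. ⟨f0 , x⟩ = (x, x0 ). Then ⟨f0 , x0 ⟩ = |x0 |², and by the Cauchy–Schwarz inequality |⟨f0 , x⟩| ≤ |x0 | |x| with equality at x = x0 , so ∥f0 ∥ = |x0 |.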

Corollary 5.5. For every x ∈ E, we have
   ∥x∥ = sup_{f ∈E ∗, ∥f ∥≤1} |⟨f, x⟩| = ∥Jx∥_{E ∗∗} = max_{f ∈E ∗, ∥f ∥≤1} |⟨f, x⟩|.

Proof. Assume x ≠ 0 (the case x = 0 is trivial). Then clearly |⟨f, x⟩| ≤ ∥f ∥∥x∥ implies that
   sup_{f ∈E ∗, ∥f ∥≤1} |⟨f, x⟩| ≤ ∥x∥.
By Corollary 5.4, there exists f0 ∈ E ∗ s.t. ∥f0 ∥ = ∥x∥ and ⟨f0 , x⟩ = ∥x∥². Let f1 = f0 /∥f0 ∥; then |⟨f1 , x⟩| = ∥x∥ and ∥f1 ∥ = 1, which means
   sup_{f ∈E ∗, ∥f ∥≤1} |⟨f, x⟩| ≥ |⟨f1 , x⟩| = ∥x∥,
and the supremum is attained at f1 . □



5.2. Geometric form of Hahn-Banach theorem.


Definition 36. Hyperplane and Half space.
A hyperplane is a subset of E with form

H = {x ∈ E : f (x) = α}

where f is some linear functional that does not vanish identically and α ∈ R. For simplicity,
write H = [f = α].
We call a set a half space if it has the form
   H + = {x ∈ E : f (x) ≥ α} or H − = {x ∈ E : f (x) ≤ α}.
For simplicity, H + = [f ≥ α] and H − = [f ≤ α].


Proposition 5.6. [f = α] is closed if and only if f is continuous.
Definition 37. Separation. Suppose A, B ⊂ E. We say H = [f = α] separates A and B if

f (x) ≤ α for all x ∈ A and f (x) ≥ α for all x ∈ B.

We say H strictly separates A and B if there exists ϵ > 0 s.t.

f (x) ≤ α − ϵ for all x ∈ A and f (x) ≥ α + ϵ for all x ∈ B.


Definition 38. Convex set. We say a subset A ⊂ E is convex if for any x, y ∈ A there always holds [x, y] ⊂ A, where [x, y] denotes the line segment {(1 − t)x + ty : t ∈ [0, 1]}.
Theorem 5.7 (Hahn-Banach theorem, geometric form). Let A, B ⊂ E be non-empty and convex
such that A ∩ B = ∅. Assume that one of them is open. Then there exists a closed hyperplane
that separates A and B.
Proof. Step 1. Suppose A is open with o ∈ A, and x0 ∉ A. We will prove that A and {x0 } can be separated. Define p(x) = inf{α > 0 : x ∈ αA} (the Minkowski gauge of A).
Claim 1. A = {x ∈ E : p(x) < 1}.
Proof of claim. First, suppose that x ∈ A. Since A is open, it follows that (1 + ϵ)x ∈ A for ϵ > 0 small enough, and therefore p(x) ≤ 1/(1 + ϵ) < 1. Conversely, if p(x) < 1 there exists α ∈ (0, 1) such that x ∈ αA, and thus x = α(α^{-1}x) + (1 − α)o ∈ A, since A is convex and o ∈ A.
Claim 2. p is sublinear.
Proof of claim. For an arbitrary λ > 0,
   p(λx) = inf{α > 0 : λx ∈ αA} = inf{λ (α/λ) : x ∈ (α/λ)A} = λ inf{β > 0 : x ∈ βA} = λ p(x).

For arbitrary x, y ∈ E and ϵ > 0, there exist α1 > 0, α2 > 0 s.t. x ∈ α1 A, y ∈ α2 A and
   α1 < p(x) + ϵ, α2 < p(y) + ϵ.
Since A is convex, we have that
   x + y ∈ α1 A + α2 A = (α1 + α2 )A.
Then p(x + y) ≤ α1 + α2 < p(x) + p(y) + 2ϵ. By the arbitrariness of ϵ, we obtain p(x + y) ≤ p(x) + p(y).
Now, since x0 ∉ A, we have p(x0 ) ≥ 1. Let g(tx0 ) = t. Then g is linear on G = Rx0 , and g(tx0 ) = t ≤ t p(x0 ) = p(tx0 ) for t > 0, while g(tx0 ) = t ≤ 0 ≤ p(tx0 ) for t ≤ 0. By Theorem 5.1, there exists f extending g s.t.
   f (x) ≤ p(x) for all x ∈ E and f (x0 ) = g(x0 ) = 1.
Since f (x) ≤ p(x) < 1 for x ∈ A (Claim 1), H = [f = 1] separates {x0 } and A.


Moreover, we claim that f is continuous, which will imply that H = [f = 1] is closed. Since
A is open and o ∈ A, we have B̄(o, r) = {x : ∥x∥ ≤ r} ⊂ A for some r > 0, thus ∥x∥ ≤ r means
p(x) < 1 and hence  
rx ∥x∥
p < 1 i.e. p(x) ≤ .
∥x∥ r
∥x∥ ∥x∥ ∥x∥
It follows that f (x) ≤ p(x) ≤ r
and f (−x) ≤ r
. In conclusion, |f (x)| ≤ r
and hence
continuous.
Step 2. Now we consider A and B. Suppose A is open; then
   A − B = {x − y : x ∈ A, y ∈ B} = ∪_{y∈B} (A − {y})
is open. Since A ∩ B = ∅, o ∉ A − B. Let z0 ∈ A − B. By Step 1 (applied to the open convex set A − B − {z0 }, which contains o, and the point −z0 , which does not belong to it), there exists a continuous f s.t. H = [f = 1] separates A − B − {z0 } and {−z0 }. Precisely, f (−z0 ) = 1, and for all x − y − z0 ∈ A − B − {z0 },
   f (x − y − z0 ) ≤ p(x − y − z0 ) < 1.
This implies f (x) < f (y) for all x ∈ A, y ∈ B. Find an α ∈ R s.t. sup_{x∈A} f (x) ≤ α ≤ inf_{y∈B} f (y); then we complete the proof with the hyperplane [f = α]. □

Theorem 5.8. Let A ⊂ E, B ⊂ E be convex, nonempty and such that A ∩ B = ∅. Assume A is


closed and B is compact. Then there exists a closed hyperplane that strictly separates A and B.

Remark. If A, B are only assumed to be closed, the distance between them can be zero, and thus we cannot in general strictly separate them.

Proof. Let C = A − B. By the closedness of A and the compactness of B, C is closed. Since A ∩ B = ∅, o ∈ C^c , the complement of C, which is open. Then there exists r > 0 s.t. B(o, r) ∩ C = ∅. By Theorem 5.7, there exists H = [f = α] separating B(o, r) and C. Say
   f (x − y) ≥ α for all x ∈ A, y ∈ B,
   f (rz) ≤ α for all z ∈ B(o, 1), and then for all ∥z∥ ≤ 1.
It follows from the linearity of f that
   inf_{x∈A} f (x) ≥ sup_{y∈B} f (y) + α,
   α ≥ r sup_{∥z∥=1} f (z) =: ϵ > 0,
where ϵ > 0 since f does not vanish identically. Then there exists β s.t. inf_{x∈A} f (x) − ϵ/2 ≥ β ≥ sup_{y∈B} f (y) + ϵ/2. This means that Hβ = [f = β] strictly separates A and B. □

Corollary 5.9. Let F ⊂ E be a linear subspace s.t. F̄ ̸= E, then there exists some f ∈ E ∗ , f ̸≡ 0,
s.t.
⟨f, x⟩ = 0 for all x ∈ F.

Remark. Corollary 5.9 says that F is contained in a closed proper subspace, namely the hyperplane H = [f = 0].

Remark 20. By Corollary 5.9, we find that if one can show that every continuous linear functional
on E that vanishes on F must vanish on E, then one can deduce that F is dense in E.

Proof. Let x0 ∈ E\F̄ . By Thm 5.8 with A = F̄ and B = {x0 }, there is a closed hyperplane [f = α] that strictly separates F̄ and {x0 }. Then for all x ∈ F ,

⟨f, x⟩ < α < ⟨f, x0 ⟩.

Since F is a linear subspace, this implies ⟨f, x⟩ = 0. □

5.3. Bidual space E ∗∗ and Orthogonality relations.


Let E be a n.v.s.. Recall that E ∗ denotes its dual space with norm

∥f ∥E ∗ = sup |⟨f, x⟩|.


x∈E,∥x∥≤1

And bidual E ∗∗ is the dual of E ∗ , with

∥ξ∥E ∗∗ = sup ⟨ξ, f ⟩| for all ξ ∈ E ∗∗ .


f ∈E ∗ ,∥f ∥≤1

Definition 39. Canonical injection. J : E → E ∗∗ is defined by
   ⟨Jx, f ⟩_{E ∗∗, E ∗} = ⟨f, x⟩_{E ∗, E} for all x ∈ E, f ∈ E ∗ .

Remark 21. As Proposition 2.5 shows, J is linear and bounded; furthermore, J is an isometry, i.e. ∥Jx∥_{E ∗∗} = ∥x∥_E . Actually, by Cor 5.5,
   ∥Jx∥_{E ∗∗} = sup_{∥f ∥≤1} |⟨Jx, f ⟩| = sup_{∥f ∥≤1} |⟨f, x⟩| = ∥x∥.

J may not be surjective from E onto E ∗∗ (see Chapter 3 and 4). We can identify E with a
subspace of E ∗∗ .
Definition 40. Reflexive. We say E is reflexive if J(E) = E ∗∗ . In this case, we also write
E = E ∗∗ .
Definition 41. Perpendicularity. If M ⊂ E is a subspace,

M ⊥ := {f ∈ E ∗ : ⟨f, x⟩ = 0 for all x ∈ M }.

If N ⊂ E ∗ is a subspace,

N ⊥ := {x ∈ E : ⟨f, x⟩ = 0 for all x ∈ N }.

Note that N ⊥ ⊂ E, rather than E ∗∗ .


Remark. M ⊥ must be closed. We say M ⊥ is the space orthogonal to M .
Proposition 5.10. Let M ⊂ E be a linear subspace. Then
   (M ⊥ )⊥ = M̄ .
Let N ⊂ E ∗ be a linear subspace. Then
   (N ⊥ )⊥ ⊃ N̄ . (5.1)
Proof. (1) Suppose x ∈ M . Then ⟨f, x⟩ = 0 for all f ∈ M ⊥ , which means x ∈ (M ⊥ )⊥ . Since
(M ⊥ )⊥ is closed, we have M̄ ⊂ (M ⊥ )⊥ .
Suppose x ∈ (M ⊥ )⊥ . Then

⟨f, x⟩ = 0 for all f ∈ M ⊥ . (5.2)

/ M̄ , there exists H = [f0 = a0 ] with f0 ∈ E ∗ that strictly separates {x} and M̄ . It follows
If x ∈
that
⟨f0 , x⟩ > α0 > ⟨f0 , y⟩ for all y ∈ M̄ . (5.3)
Since ⟨f0 , λy⟩ < α0 for all λ ∈ R, we get ⟨f0 , y⟩ = 0 for all y ∈ M . This means that f0 ∈ M ⊥ .
However, by(5.2), ⟨f0 , x⟩ must also be zero. This contradict to (5.3), hence x must be in M̄ .

(2) Since N ⊂ E ∗ , by the definition of (N ⊥ )⊥ , f ∈ N implies ⟨f, x⟩ = 0 for all x ∈ N ⊥ , and


hence f ∈ (N ⊥ )⊥ . Since (N ⊥ )⊥ must be closed, we have proved (5.1). □
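The inclusion (5.1) can be strict when E is not reflexive. The following standard example is included as an illustration; it uses the identification (ℓ¹)∗ = ℓ∞ , which is not proved in these notes. Take E = ℓ¹, E ∗ = ℓ∞ and N = c0 , the (closed) subspace of sequences converging to 0. For x ∈ ℓ¹, x ∈ N ⊥ means Σ_n yn xn = 0 for all y ∈ c0 ; testing with y = ek gives xk = 0 for every k, so N ⊥ = {o}. Hence (N ⊥ )⊥ = ℓ∞ ⊋ c0 = N̄ .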

6. Lp spaces

1◦ When we talk about these function spaces, we automatically assume that we are working on (Rn , M, µ), where µ is the Lebesgue measure on Rn and M is the σ-algebra consisting of Lebesgue measurable sets.
2◦ Sometimes we write dx instead of dµ. We use this notation because most results hold in a general measure space (X, Σ, µ).
Definition 42. Lp spaces. Let p ∈ [1, ∞), Ω ∈ M. Set
   Lp (Ω) = { f : Ω → R | f is measurable and ∥f ∥p := (∫_Ω |f |^p dµ)^{1/p} < ∞ }.
Here ∥ · ∥p is a norm; this follows from Minkowski's inequality, which will be shown below.
Set
   L∞ (Ω) = { f : Ω → R | f is measurable and there is a constant c ∈ R such that |f (x)| ≤ c a.e. on Ω },
and ∥f ∥_{L∞} = ∥f ∥∞ := inf{c : |f (x)| ≤ c a.e. on Ω}. For example, take Ω = R and define f : R → R by
   f (x) = 1 if x ∉ N+ , and f (x) = x if x ∈ N+ .
Then there is no C ∈ R s.t. |f (x)| ≤ C for all x ∈ R, but |f (x)| ≤ 1 a.e. on R.
Exercise : Assume µ(Ω) < ∞ and f ∈ Lp (Ω) for all p ≥ 1.
1◦ Prove that limp→∞ ∥f ∥p = ∥f ∥∞ .
2◦ Prove that ∥ · ∥∞ is a norm on L∞ (Ω).

Definition 43. Conjugate exponent. Let p ∈ [1, ∞]. The number p′ satisfying
   1/p + 1/p′ = 1
will be called the conjugate exponent; in particular, when p = 1, p′ = ∞, and when p = ∞, p′ = 1.

Theorem 6.1 (Hölder's inequality). Assume f ∈ Lp , g ∈ Lp′ with p ∈ [1, ∞]. Then f · g ∈ L1 and
   ∫ |f · g| dµ ≤ ∥f ∥p ∥g∥p′ .
Proof. (1) p ∈ (1, ∞).
By Young's inequality, we know that for all a, b ≥ 0,
   ab ≤ (1/p) a^p + (1/p′) b^{p′}.
This can be proved directly using the concavity of the log function.


Then, we have
   |f (x)g(x)| ≤ (1/p)|f (x)|^p + (1/p′)|g(x)|^{p′} for a.e. x ∈ Ω.
It follows that f · g ∈ L1 and
   ∫ |f · g| dµ ≤ (1/p)∫ |f (x)|^p dx + (1/p′)∫ |g(x)|^{p′} dx = (1/p)∥f ∥_p^p + (1/p′)∥g∥_{p′}^{p′}.
Applying this to the functions λf (λ > 0) and g, we get
   λ ∫ |f · g| dµ ≤ (λ^p /p)∥f ∥_p^p + (1/p′)∥g∥_{p′}^{p′}.
To make the right-hand side only contain ∥g∥_{p′}^{p′}, we take λ = ∥g∥_{p′}^{p′/p}/∥f ∥_p (we may assume ∥f ∥_p , ∥g∥_{p′} ≠ 0, otherwise the inequality is trivial); then
   ∫ |f · g| dµ ≤ (1/λ)∥g∥_{p′}^{p′} = ∥f ∥_p ∥g∥_{p′}.

(2) If p = 1 (and similarly if p = ∞), then p′ = ∞. Since |g(x)| ≤ ∥g∥∞ a.e., we have
   ∫ |f · g| dµ ≤ ∥g∥∞ ∫ |f | dµ = ∥f ∥1 ∥g∥∞ . □


Remark 22. When p ∈ (1, ∞), equality holds in Hölder's inequality iff |f |^p = c|g|^{p′} a.e. for some constant c ≥ 0 (or g = 0 a.e.).

Remark 23. By Hölder's inequality, one can check that if µ(Ω) < ∞, then Lp (Ω) ⊂ Lq (Ω) for any 1 ≤ q ≤ p.
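For completeness, here is the short computation behind Remark 23 (a direct application of Theorem 6.1, written out as an illustration). Let 1 ≤ q < p < ∞, µ(Ω) < ∞ and f ∈ Lp (Ω). Applying Hölder's inequality to |f |^q and 1 with the exponents p/q and (p/q)′ = p/(p − q),
   ∫_Ω |f |^q dµ ≤ (∫_Ω |f |^p dµ)^{q/p} µ(Ω)^{1−q/p},
i.e. ∥f ∥_q ≤ µ(Ω)^{1/q − 1/p} ∥f ∥_p ; in particular f ∈ Lq (Ω). (For p = ∞ one has directly ∥f ∥_q ≤ µ(Ω)^{1/q} ∥f ∥∞ .)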

Remark 24 (Jensen's inequality). Let J : R → R be convex and let f : Ω → R be in L1 (Ω). Suppose µ(Ω) < ∞. Then
(1) the negative part of J ◦ f is in L1 (Ω), so ∫_Ω J ◦ f dµ is well-defined (possibly +∞);
(2) (1/µ(Ω)) ∫_Ω J ◦ f (x) dµ(x) ≥ J( (1/µ(Ω)) ∫_Ω f (x) dµ(x) ).
We refer to Lieb–Loss [Analysis, AMS] for a proof.

Theorem 6.2 (Minkowski’s inequality). If f, g ∈ Lp , p ∈ [1, ∞], then

∥f + g∥p ≤ ∥f ∥p + ∥g∥p .

Proof. If p = 1, this inequality follows from |f + g| ≤ |f | + |g|.


If p = ∞ and |f | ≤ c1 a.e., |g| ≤ c2 a.e., then |f + g| ≤ c1 + c2 a.e., so ∥f + g∥∞ ≤ ∥f ∥∞ + ∥g∥∞ .

Suppose p ∈ (1, ∞). By Hölder's inequality,
   ∫ |f + g|^{p−1} |f + g| dµ ≤ ∫ |f + g|^{p−1} |f | dµ + ∫ |f + g|^{p−1} |g| dµ
      ≤ ∥ |f + g|^{p−1} ∥_{p′} ∥f ∥_p + ∥ |f + g|^{p−1} ∥_{p′} ∥g∥_p .    (6.1)
Here |f + g|^p ≤ (|f | + |g|)^p ≤ 2^{p−1}(|f |^p + |g|^p ), which means f + g ∈ Lp , so all norms above are finite.
Since (p − 1)p′ = p, we have ∥ |f + g|^{p−1} ∥_{p′} = (∫ |f + g|^p dµ)^{1/p′} = ∥f + g∥_p^{p−1}. Dividing (6.1) by ∥f + g∥_p^{p−1} (which we may assume is nonzero), we deduce that
   ∥f + g∥_p ≤ ∥f ∥_p + ∥g∥_p . □

Corollary 6.3. By Minkowski's inequality, we immediately get that
(1) Lp is an n.v.s. for any p ∈ [1, ∞].
Moreover,
(2) Lp is a Banach space (completeness is the Riesz–Fischer theorem, whose proof we omit).

Theorem 6.4 (Riesz representation theorem, Lp version). Suppose p ∈ (1, ∞). Let ϕ ∈ (Lp )∗ . Then there exists a unique function u ∈ Lp′ s.t.
   ⟨ϕ, v⟩ = ∫ u · v dµ for all v ∈ Lp .
Moreover, ∥u∥_{p′} = ∥ϕ∥_{(Lp )∗} .

We omit its proof here. By this theorem, when p ∈ (1, ∞), (Lp )∗ can be identified with Lp′ , and consequently Lp is reflexive.

Recalling Lusin's theorem from real analysis, we have the following proposition.

Proposition 6.5. Cc (Rn ) is dense in L1 .

Further, we prove a more general proposition below.

Proposition 6.6. Cc (Rn ) is dense in Lp (Rn ) for all p ∈ [1, ∞).

Proof. Consider p ∈ (1, ∞) (the case p = 1 is Proposition 6.5; the argument below works for p = 1 as well). Let f ∈ Lp (Rn ), and define the truncations
   fn (x) = f (x) if |f (x)| ≤ n, fn (x) = n if f (x) > n, fn (x) = −n if f (x) < −n.
Then |fn − f |^p → 0 a.e. on Rn and |fn − f |^p ≤ |2f |^p ∈ L1 , from which it follows by dominated convergence that ∫ |fn − f |^p dµ → 0. Thus, for any ϵ > 0, there exists fN s.t. ∥fN ∥∞ < ∞ and ∥fN − f ∥p < ϵ.

Letting hk (x) := χ_{B(o,k)}(x) · fN (x), we have |hk − fN |^p → 0 a.e. and |hk − fN |^p ≤ |2fN |^p ∈ L1 . Thus there exists h s.t.
   ∥f − h∥p < 2ϵ, ∥h∥∞ < ∞ and supp h is compact.
For such a function h, by Proposition 6.5 there exists g ∈ Cc (Rn ) s.t. ∥g − h∥1 ≤ ϵ and ∥g∥∞ ≤ ∥h∥∞ . Then
   ∫ |h − g|^p dµ = ∫ |h − g|^{p−1} |h − g| dµ ≤ (2∥h∥∞)^{p−1} ∫ |h − g| dµ,
and hence
   ∥h − g∥p ≤ ∥h − g∥_1^{1/p} (2∥h∥∞)^{1−1/p} ≤ c ϵ^{1/p}.
In the end, for any ϵ > 0, there exists g ∈ Cc (Rn ) s.t. ∥f − g∥p ≤ 2ϵ + c ϵ^{1/p}, which is exactly our desired result. □

Theorem 6.7. Suppose j ∈ Cc∞ (Rn ) with j ≥ 0 and ∫_{Rn} j dx = 1. For k ∈ N, define jk (x) = k^n j(kx), so that ∫_{Rn} jk dx = 1 and ∥jk ∥1 = ∥j∥1 . Let f ∈ Lp (Rn ) for some p ∈ [1, ∞), and define
   fk := jk ∗ f = ∫_{Rn} jk (x − y) f (y) dy.

Then

fk ∈ Lp and ∥fk ∥p ≤ ∥j∥1 ∥f ∥p , (6.2)


fk → f in Lp as k → ∞, (6.3)
fk ∈ C ∞ (Rn ) and Dα fk = (Dα jk ) ∗ f. (6.4)

If f ∈ Cc (Rn ), then fk → f uniformly.

Remark 25. If f is supported on a compact set K ⊂ Ω, and supp j = B, then supp jk = (1/k)B and supp f ∗ jk ⊂ K + (1/k)B.
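A concrete choice of j satisfying the hypotheses of Theorem 6.7 (the standard mollifier, recorded here only as an illustration) is
   j(x) = C exp(1/(|x|² − 1)) for |x| < 1, j(x) = 0 for |x| ≥ 1,
where C > 0 is chosen so that ∫_{Rn} j dx = 1. One checks that j ∈ Cc∞ (Rn ), j ≥ 0 and supp j = B̄(o, 1), so that supp jk = B̄(o, 1/k).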

Proof. Step 1. (Young's inequality.) By Hölder's inequality with respect to the measure jk (y) dy,
   ∥fk ∥_p^p = ∫ | ∫ f (x − y) jk (y) dy |^p dx
            ≤ ∫ ( ∫ |f (x − y)|^p jk (y) dy ) ( ∫ jk (y) dy )^{p/p′} dx
            = ( ∫ ( ∫ |f (x − y)|^p dx ) jk (y) dy ) ∥jk ∥_1^{p/p′}
            = ∥f ∥_p^p ∥jk ∥_1^p .
We get ∥fk ∥p ≤ ∥f ∥p ∥jk ∥1 , i.e. (6.2).



Step 2. If f ∈ Cc (Rn ), then there exists N > 0 s.t. supp f, supp j ⊂ B(o, N ). It follows that
   |f (x) − fk (x)| = | ∫_{B(o, N/k)} (f (x) jk (y) − f (x − y) jk (y)) dy |
                   ≤ ∫_{B(o, N/k)} |f (x) − f (x − y)| jk (y) dy.
Since f is uniformly continuous, for any ϵ > 0, when k is sufficiently large,
   |f (x) − f (x − y)| < ϵ for any y ∈ B(o, N/k),
and then
   |f (x) − fk (x)| ≤ ∫_{B(o, N/k)} ϵ jk (y) dy = ϵ.
This proves that fk → f uniformly.


Step 3. We prove the following claim: for j, h ∈ Cc (Rn ), supp(j ∗ h) ⊂ supp h + supp j. Indeed, if j ∗ h(x) ≠ 0 for some x ∈ Rn , i.e. ∫_{supp h} j(x − y)h(y) dy ≠ 0, then there exists y ∈ supp h s.t. j(x − y) ≠ 0. This means x ∈ supp h + supp j. Since supp h + supp j is compact, hence closed, we get supp(j ∗ h) ⊂ supp h + supp j.
Step 4. Now we consider the general case. For f ∈ Lp , by Prop 6.6, there exists h ∈ Cc (Rn ) such that ∥f − h∥p < ϵ. By Step 1 and the triangle inequality, we have
   ∥jk ∗ f − f ∥p ≤ ∥jk ∗ (f − h)∥p + ∥jk ∗ h − h∥p + ∥h − f ∥p
               ≤ 2∥f − h∥p + ∥jk ∗ h − h∥p
               ≤ ∥jk ∗ h − h∥p + 2ϵ.
Since jk ∗ h − h → 0 uniformly by Step 2, and by Step 3 these functions are all supported in the fixed compact set supp h + B(o, N ) (where supp j ⊂ B(o, N )), we have ∥jk ∗ h − h∥p → 0; in particular ∥jk ∗ h − h∥p ≤ ϵ for all k large enough.
In conclusion, jk ∗ f → f in Lp as k → ∞, i.e. (6.3) holds.


Step 5. Let {ei }_{i=1}^n be the standard orthonormal basis of Rn , and choose N s.t. supp jk ⊂ B(o, N ). Then for m ∈ N,
   (fk (x + ei /m) − fk (x)) / (1/m) = ∫_{B(x, N+1)} ((jk (x + ei /m − y) − jk (x − y)) / (1/m)) f (y) dy.
By the mean value theorem, for an arbitrary y ∈ Rn there exists θy ∈ (0, 1/m) s.t.
   (jk (x + ei /m − y) − jk (x − y)) / (1/m) = (∂/∂xi ) jk (x + θy ei − y).

Since jk ∈ Cc∞ (Rn ), there exists ck > 0 s.t.
   | (∂/∂xi ) jk (x + θy ei − y) · f (y) | ≤ ck · |f (y)| · χ_{B(x, N+1)}(y) ∈ L1 .
This, together with the pointwise convergence
   lim_{m→∞} ((jk (x + ei /m − y) − jk (x − y)) / (1/m)) f (y) = (∂/∂xi ) jk (x − y) f (y)
and the dominated convergence theorem, implies that ∂i fk = (∂i jk ) ∗ f . Iterating the argument gives Dα fk = (Dα jk ) ∗ f and fk ∈ C∞ (Rn ), which proves (6.4). □

Definition 44. Locally p-th integrable function spaces. We define the locally p-th integrable function spaces by
   Lp_loc (Ω) := { f : Ω → R measurable | f ∈ Lp (K) for every compact set K ⊂ Ω },
where p ∈ [1, ∞].

Corollary 6.8. Let Ω ⊂ Rn be open and let u ∈ L1loc (Ω) be such that
Z
u · f dµ = 0 for all f ∈ Cc∞ (Ω).

Then u = 0 a.e. on Ω.

Proof. Let K ⊂ Ω be compact, and consider the two sets
   K + = {x ∈ K : u(x) ≥ 0} and K − = {x ∈ K : u(x) ≤ 0}.
Let g1 = χ_{K +} and g2 = χ_{K −}. Clearly g1 , g2 ∈ Lp for all p ∈ [1, ∞].
Take a sequence {hk }k≥1 with hk = jk ∗ g1 as in Theorem 6.7; then ∥hk − g1 ∥1 → 0 and, by Remark 25, supp hk ⊂ K + (1/k) supp j, so hk ∈ Cc∞ (Ω) for k large. After passing to a subsequence, we may assume hk (x) → g1 (x) a.e. x ∈ Ω.
Since
   0 ≤ hk (x) = ∫ g1 (x − y) jk (y) dy ≤ ∥g1 ∥∞ ∫ jk (y) dy = 1 for all x,
we have ∥hk ∥∞ ≤ 1. Hence |u · hk | ≤ |u| χ_{K + supp j} ∈ L1 and u · hk → u · g1 a.e.. By the dominated convergence theorem and the hypothesis,
   0 = lim_{k→∞} ∫ u · hk dx = ∫ u · g1 dx = ∫_{K +} u(x) dx.
Since u ≥ 0 on K + , this implies u = 0 a.e. on K + . Similarly u = 0 a.e. on K − , and hence u = 0 a.e. on K.
Let Ki ⊂ Ω be a sequence of compact sets s.t. ∪_{i=1}^∞ Ki = Ω. Then we have u = 0 a.e. on Ω. □

Proposition 6.9. f ∈ Lploc (Ω) implies f ∈ Lqloc (Ω), if 1 ≤ q ≤ p.



This proposition follows immediately from Hölder’s inequality.

Definition 45. W^{1,p} spaces and weak derivatives.
   W^{1,p}(Ω) := { f ∈ Lp (Ω) | there exist g1 , · · · , gn ∈ Lp (Ω) s.t. ∫_Ω f · ∂ϕ/∂xi = −∫_Ω gi · ϕ for all ϕ ∈ Cc∞ (Ω), i = 1, · · · , n }.
The functions g1 , · · · , gn are called the weak derivatives of f . Denote ∂i f = gi , and ∇f := (∂1 f, · · · , ∂n f ) is called the weak gradient.
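A simple example of a weak derivative (added as an illustration; the function and the interval are our choices): let Ω = (−1, 1) and f (x) = |x|. For every ϕ ∈ Cc∞ ((−1, 1)), integrating by parts on (−1, 0) and (0, 1) separately,
   ∫_{−1}^{1} |x| ϕ′(x) dx = ∫_0^1 x ϕ′ dx − ∫_{−1}^0 x ϕ′ dx = −∫_0^1 ϕ dx + ∫_{−1}^0 ϕ dx = −∫_{−1}^{1} sgn(x) ϕ(x) dx,
so g(x) = sgn(x) is the weak derivative of f , and f ∈ W^{1,p}((−1, 1)) for every p ∈ [1, ∞].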
