6710 Notes
Contents

1 Metric spaces
  1.1 Preliminaries
  1.2 Open sets, closed sets, topology
  1.3 Completeness
  1.4 Banach Fixed-point Theorem

2 Normed Vector Spaces, Banach Spaces

3 Linear operators

4 Inner product spaces

5 Linear operators II
  5.1 Hilbert adjoint
  5.2 Boundedness
  5.3 Reflexive spaces
  5.4 Weak convergence and weak* convergence
  5.5 Compact operators

6 Spectral Theory
  6.1 Preliminaries
  6.2 Spectral properties of bounded linear operators
  6.3 Spectral properties of compact operators
  6.4 Bounded self-adjoint operators
  6.5 Return to Fredholm theory
  6.6 Sturm-Liouville problem

7 Distribution Theory
  7.1 Multi-index notation
  7.2 Definitions
  7.3 Distributions
  7.4 Operations on distributions

Index
Disclaimer: These notes are for my personal use. They come with no claim of correctness or usefulness. Errors or typos can be reported to me via email and will be corrected as quickly as possible.

Many proofs have been omitted for brevity but can be found in either:

E. Kreyszig, Introductory Functional Analysis with Applications, Wiley (1989).
F. G. Friedlander and M. Joshi, Introduction to the Theory of Distributions, Cambridge (1999).
1 Metric spaces

1.1 Preliminaries

Definition 1.1. A metric space is a set X, together with a mapping d : X × X → ℝ, with the following properties for all x, y, z ∈ X:

(M1) d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y;
(M2) d(x, y) = d(y, x);
(M3) d(x, z) ≤ d(x, y) + d(y, z).

Example. The discrete metric, d(x, y) := 0 if x = y and 1 otherwise, makes any set a metric space.

Example. The space ℓ^p (1 ≤ p < ∞) consists of the sequences x = (ξ₁, ξ₂, …) with ∑_{j=1}^∞ |ξⱼ|^p < ∞, with metric d(x, y) = (∑_{j=1}^∞ |ξⱼ − ηⱼ|^p)^{1/p}. This metric is induced by a norm and will be a noteworthy case. It is worth stating two noteworthy inequalities:
Theorem 1.1 (Hölder's inequality).

∑_{j=1}^∞ |ξⱼ ηⱼ| ≤ (∑_{j=1}^∞ |ξⱼ|^p)^{1/p} (∑_{j=1}^∞ |ηⱼ|^q)^{1/q},

where p, q are conjugate, i.e. 1/p + 1/q = 1. Note, the Cauchy–Schwarz inequality is the special case p = q = 2.
Theorem 1.2 (Minkowski's inequality).

(∑_{j=1}^∞ |ξⱼ + ηⱼ|^p)^{1/p} ≤ (∑_{j=1}^∞ |ξⱼ|^p)^{1/p} + (∑_{j=1}^∞ |ηⱼ|^p)^{1/p}.

1.2 Open sets, closed sets, topology
The collection J of all open sets on (X, d) is called a topology for (X, d) if it satisfies:

(T1) ∅ ∈ J and X ∈ J.
(T2) The union of any collection (infinite or finite) of elements of J is in J.
(T3) Any finite intersection of elements of J is in J.

Note, no metric is needed for a topology, but a metric always induces a topology.
Example 1.11. ℓ^∞ is not separable.

Proof. Choose x ∈ [0, 1] and write it in binary, x = ξ₁/2 + ξ₂/2² + ξ₃/2³ + ⋯, and let y = (ξᵢ) ∈ ℓ^∞. There are uncountably many of these, but the distance between any two of them is 1. Thus, consider a ball of radius 1/3 around each point: the balls are disjoint, so no countable dense subset exists. Thus, ℓ^∞ is not separable.

Example 1.12. ℓ^p for 1 ≤ p < ∞ is separable.

1.3 Completeness
Fact: every convergent sequence is a Cauchy sequence, but the reverse is not necessarily true.

Definition 1.12. A space is said to be complete if all Cauchy sequences converge.
Definition 1.13. A sequence is bounded if the set of its elements is bounded.
Theorem 1.3. Convergent sequences are bounded. Furthermore, convergent sequences have unique limits, and if xₙ → x, yₙ → y, then d(xₙ, yₙ) → d(x, y).

Theorem 1.4. Let M ⊂ X; then x ∈ M̄ iff there exists a sequence (xₙ) ⊂ M with xₙ → x. Also, M is closed iff every sequence in M which converges, converges to a point in M.

Theorem 1.5. A subspace M of X is complete iff M is closed as a subset of X.
Example 1.13. ℓ^∞ is complete.

Proof (idea). Consider a Cauchy sequence (xₙ) with xₙ = (ξ₁⁽ⁿ⁾, ξ₂⁽ⁿ⁾, …). For each fixed j, (ξⱼ⁽ⁿ⁾) forms a Cauchy sequence of real numbers, thus converges and defines our limit.

Example 1.14. Consider c, the subspace of ℓ^∞ consisting of the convergent sequences; it is complete, by noting that it is closed.

Example 1.15. ℓ^p for 1 ≤ p < ∞ is complete.

Example 1.16. C[a, b] is complete.

Proof. Let (xₙ) be a Cauchy sequence in C[a, b]. For fixed t₀, xₙ(t₀) is a Cauchy sequence of real numbers and therefore converges; call the limit x(t₀). Now, x(t) is continuous by noting that |x(t) − x(t + h)| ≤ |x(t) − xₙ(t)| + |xₙ(t) − xₙ(t + h)| + |xₙ(t + h) − x(t + h)| < ε for n large and h small; thus x(t) is continuous.

Example 1.17. ℚ, the polynomials on [a, b], and C[a, b] with the metric d(x, y) = ∫ₐᵇ |x(t) − y(t)| dt are all examples of incomplete spaces. Note, the last example comes from approximating a step function.
1.4 Banach Fixed-point Theorem

This theorem is also known as the contraction mapping principle. In general, if we have a mapping T : X → X and we want to solve T x = x, the solution is called a fixed point.

Definition 1.15. T is a contraction on a set S if there exists a < 1 such that d(T x, T y) ≤ a d(x, y) for all x, y ∈ S.

Theorem 1.6 (Banach fixed-point theorem). If (X, d) is a complete metric space and T is a contraction, then T has a unique fixed point x ∈ X.

We can define the iteration xₙ₊₁ = T xₙ for n = 0, 1, …. Using a geometric series argument, we see that d(xₙ, x) ≤ (aⁿ/(1 − a)) d(x₀, x₁), which gives us an a priori estimate of the error.
Corollary 1.1. Let T be a mapping of a complete metric space (X, d) into itself. Suppose T is a contraction on some closed ball Y with radius r around a point x₀, and also assume that d(x₀, T x₀) < (1 − a)r; then fixed-point iteration converges to a unique fixed point in Y.

Thus, if we have a contraction on a smaller space, as long as it maps that space into itself, we still have our contraction mapping.
Example 1.18. We can apply this to linear algebra. Consider X = ℝⁿ with d(x, y) = maxⱼ |ξⱼ − ηⱼ|. Consider trying to solve x = Cx + b, where C = (c_{jk}) and b = (b₁, b₂, …, bₙ); equivalently, (I − C)x = b. Thus, define T : X → X as T x := Cx + b; then we again hope to solve T x = x. Consider |T x − T y| = |C(x − y)|. Note, in particular, for the j-th component:

|(T x)ⱼ − (T y)ⱼ| = |∑_{k=1}^n c_{jk}(ξₖ − ηₖ)| ≤ ∑_{k=1}^n |c_{jk}| maxₖ |ξₖ − ηₖ| = (∑_{k=1}^n |c_{jk}|) d(x, y).

Thus T is a contraction, and the iteration converges, whenever maxⱼ ∑_{k=1}^n |c_{jk}| < 1 (a row-sum condition).

A similar result holds for Jacobi iteration, where A = (A − D) + D and D holds the diagonal entries of A. We now solve Ax = b via Dx = b − (A − D)x, i.e. x⁽ᵐ⁺¹⁾ = D⁻¹(b − (A − D)x⁽ᵐ⁾), which is a contraction provided ∑_{k≠j} |a_{jk}|/|a_{jj}| < 1 for all j (strict diagonal dominance).
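A minimal sketch of the Jacobi iteration under the strict-diagonal-dominance condition; the 3×3 system is made up for illustration:

```python
def jacobi(A, b, x0, iters):
    """Jacobi iteration x^{m+1} = D^{-1}(b - (A - D) x^m), convergent when
    A is strictly diagonally dominant (row-sum condition)."""
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        # all components updated from the previous iterate (not Gauss-Seidel)
        x = [(b[j] - sum(A[j][k] * x[k] for k in range(n) if k != j)) / A[j][j]
             for j in range(n)]
    return x

# strictly diagonally dominant: sum_{k != j} |a_jk| / |a_jj| < 1 in every row
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
x = jacobi(A, b, [0.0, 0.0, 0.0], 100)
residual = max(abs(sum(A[j][k] * x[k] for k in range(3)) - b[j]) for j in range(3))
assert residual < 1e-8
```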
Theorem (Picard). Consider the initial value problem

x′ = f(t, x),  x(t₀) = x₀.  (1.1)

Suppose f is continuous on the rectangle R = {(t, x) : |t − t₀| ≤ a, |x − x₀| ≤ b} with |f| ≤ c there, and Lipschitz in x: there exists k such that |f(x, t) − f(y, t)| ≤ k|x − y|. Then the IVP defined by (1.1) has a unique solution on the interval J = [t₀ − β, t₀ + β], where β < min{a, b/c, 1/k}.
Proof. Integrating the IVP yields:

x(t) = x₀ + ∫_{t₀}^t f(τ, x(τ)) dτ.

Define T x(t) := x₀ + ∫_{t₀}^t f(τ, x(τ)) dτ. Then

|T x(t) − x₀| = |∫_{t₀}^t f(τ, x(τ)) dτ| ≤ c|t − t₀| ≤ cβ ≤ b,

since t ∈ J, so T maps the relevant ball into itself. Also,

|T x(t) − T y(t)| ≤ ∫_{t₀}^t |f(τ, x(τ)) − f(τ, y(τ))| dτ
  ≤ |t − t₀| k max_{τ∈J} |x(τ) − y(τ)|
  = k|t − t₀| d(x, y)
  ≤ kβ d(x, y).

This shows that d(T x, T y) ≤ kβ d(x, y); thus if kβ < 1, we have a contraction, which is true by the definition of β. Thus, we have the Picard iteration:

x_{n+1}(t) = x₀ + ∫_{t₀}^t f(τ, xₙ(τ)) dτ.
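The Picard iteration can be carried out numerically on a grid. A sketch, assuming the model problem x′ = x, x(0) = 1 (exact solution eᵗ) and trapezoid-rule quadrature — all illustrative choices, not from the notes:

```python
import math

def picard_step(f, x0, ts, xs):
    """One Picard update x_{n+1}(t) = x0 + integral of f(tau, x_n(tau)) from
    t0 to t, evaluated with the trapezoid rule on the grid ts."""
    new = [x0]
    integral = 0.0
    for i in range(1, len(ts)):
        h = ts[i] - ts[i - 1]
        integral += 0.5 * h * (f(ts[i - 1], xs[i - 1]) + f(ts[i], xs[i]))
        new.append(x0 + integral)
    return new

N = 200
ts = [i / N for i in range(N + 1)]        # grid on [0, 1]
xs = [1.0] * (N + 1)                      # initial guess x_0(t) = x0
for _ in range(30):                       # Picard iterations
    xs = picard_step(lambda t, x: x, 1.0, ts, xs)

err = max(abs(xs[i] - math.exp(ts[i])) for i in range(N + 1))
assert err < 1e-3
```

The remaining error is dominated by the quadrature, not by the Picard iteration, which converges factorially fast for this problem.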
Example 1.19. We can also consider integral equations. Those of the first kind are of the form

∫ₐᵇ k(t, τ) x(τ) dτ = v(t),

which is the continuous extension of multiplying by a matrix. We assume k is continuous on the square [a, b]² and therefore bounded, say |k| ≤ c.

We can also define integral equations of the second kind, which take the form

x(t) = μ ∫ₐᵇ k(t, τ) x(τ) dτ + v(t).

Here, μ is a parameter and k is again assumed to be continuous. We typically hope to solve for x, which can be done by rearranging: (I − μK)x = v. Thus, define the integral operator T x(t) := μ ∫ₐᵇ k(t, τ) x(τ) dτ + v(t), meaning we hope to solve T x = x. Estimating as before,

d(T x, T y) ≤ |μ| c (b − a) d(x, y).

Thus, if |μ| c (b − a) < 1, we have a contraction. Also note that if the integral upper bound is changed to t (a Volterra equation), we can redefine our kernel to be 0 for τ > t.
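A sketch of solving a second-kind equation by fixed-point iteration on a grid, under the contraction condition |μ|c(b − a) < 1. The kernel k(t, s) = ts and v ≡ 1 on [0, 1] are illustrative choices; for these data the exact solution is x(t) = 1 + 0.3t, which the discrete fixed point can be checked against:

```python
N = 100
ts = [i / N for i in range(N + 1)]
w = [1.0 / N] * (N + 1)            # composite trapezoid weights on [0, 1]
w[0] = w[-1] = 0.5 / N

k = lambda t, s: t * s             # continuous kernel, |k| <= 1 on [0,1]^2
v = lambda t: 1.0
mu = 0.5                           # |mu| * max|k| * (b - a) = 0.5 < 1

x = [v(t) for t in ts]             # initial guess x_0 = v
for _ in range(200):               # T x = mu * K x + v, contraction factor 0.5
    x = [mu * sum(w[j] * k(ts[i], ts[j]) * x[j] for j in range(N + 1)) + v(ts[i])
         for i in range(N + 1)]

# residual of the discrete fixed-point equation
res = max(abs(x[i] - mu * sum(w[j] * k(ts[i], ts[j]) * x[j] for j in range(N + 1))
              - v(ts[i]))
          for i in range(N + 1))
assert res < 1e-10
```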
2 Normed Vector Spaces, Banach Spaces

2.1 Preliminaries
Definition 2.1. A vector space over a field K is a non-empty set of elements (vectors) together with two algebraic operations (+, ·) that satisfy the usual vector space axioms, not listed here.
Example 2.1. Rn , Cn are both vector spaces.
Example 2.2. C[a, b] is a vector space.
Example 2.3. `p are vector spaces for all p.
Definition 2.2. A subspace of a vector space X is a non-empty subset that is closed under
addition and multiplication with scalars and the subset is a vector space itself.
As usual, we can write elements in a vector space as (finite) linear combinations of other elements. A linearly independent set is one in which α₁x₁ + ⋯ + αₙxₙ = 0 implies α₁ = ⋯ = αₙ = 0; otherwise the set is linearly dependent.
Definition 2.3. A vector space is called finite-dimensional if there is a non-negative integer n
such that the space contains n linearly independent vectors but any subset of the space with n + 1
vectors is linearly dependent. In this case, we consider dim X = n.
Example 2.4. X = {0}, dim X = 0.
Example 2.5. dim Rn = dim Cn = n.
Example 2.6. dim ℓ^p = ∞.

If B is any linearly independent subset of X such that span B = X, then B is called a (Hamel) basis of X. Every x has a unique representation as a linear combination of elements in B.
Note, every vector space has a Hamel basis, but it can't always be found explicitly. Every Hamel basis has the same cardinality. All proper subspaces of a finite-dimensional space have lesser dimension.

2.2 Norms

The norm provides us with the notion of the size of an element. Equipping a space with a norm has a profound impact and provides rich structure.

Definition 2.5. A normed vector space (NVS) is a vector space with a norm defined on it. A norm on X automatically defines a metric, d(x, y) = ‖x − y‖, called the induced metric. Note, not every metric is induced by a norm (for instance, d(x, y) = |x − y|/(1 + |x − y|) is not).
Definition 2.6. A Banach space is a normed vector space which is also complete. Note, every normed vector space has a unique completion.

Example. ℓ^p with ‖x‖ = (∑_{j=1}^∞ |ξⱼ|^p)^{1/p} is a Banach space.

Example 2.9. C[a, b] with ‖x‖ = maxₜ |x(t)| is a Banach space, but equipping the same space with ‖x‖ = (∫ₐᵇ |x(t)|² dt)^{1/2} provides a normed vector space, but not a complete one. The completion of this space is L²[a, b], a very important space.
Theorem 2.1. A subspace Y ⊂ X is complete iff Y is closed in X. Note, Y needs to be a subspace.

Definition 2.7. A sequence {xₙ} is said to be convergent if there exists an x ∈ X such that limₙ→∞ ‖xₙ − x‖ = 0.

Definition 2.8. If ∑_{j=1}^∞ ‖xⱼ‖ < ∞, the sum ∑ⱼ xⱼ is called absolutely convergent. Note: {absolute convergence ⟹ convergence} ⟺ {X is complete}.

Definition 2.9. If a NVS X contains a sequence (e₁, e₂, …) with the property that every vector can be represented uniquely as x = ∑_{j=1}^∞ αⱼeⱼ, then (eₙ) is said to be a Schauder basis of X.

Example 2.10. In ℓ^p, the natural choice of basis vectors (1, 0, 0, …), (0, 1, 0, …), … forms a Schauder basis.
2.3
Lemma 2.1. Let {x₁, …, xₙ} constitute a linearly independent set in a NVS X. Then there exists a number c > 0 such that for every selection of scalars {α₁, …, αₙ}:

‖α₁x₁ + ⋯ + αₙxₙ‖ ≥ c(|α₁| + ⋯ + |αₙ|).
2.4
Definition 2.11. A metric space is (sequentially) compact if every sequence has a convergent subsequence. Note: a subset is compact if every sequence in the subset has a subsequence converging to an element of the subset.
Theorem 2.6. Every compact subset M of a metric space is closed and bounded. Note: the
converse is not always true.
Example 2.11. Consider (eₙ) ⊂ ℓ². The set is bounded and closed, but any two distinct elements are distance √2 apart, and therefore it has no convergent subsequence.

Theorem 2.7. If X is finite dimensional, then {closed and bounded} ⟺ compactness.
Theorem 2.8. If the closed unit ball B = {x : ‖x‖ ≤ 1} is compact, then X is finite dimensional.
3 Linear operators

3.1 Preliminaries

Definition 3.1. A linear operator T is a mapping such that:

(T1) The domain D(T) is a vector space, and the range R(T) lies in a vector space over the same field.
(T2) T(ax + by) = aT x + bT y for all x, y ∈ D(T) and scalars a, b.

Definition 3.2. Define the null space of an operator to be N(T) = {x ∈ D(T) : T x = 0}.
Example 3.1. I : X X where Ix := x is the identity operator.
Example 3.2. If A ∈ ℝ^{m×n}, then x ↦ Ax is a linear operator between finite dimensional spaces, defined by the usual matrix operation.
Example 3.3. Let X = {p : [0, 1] → ℝ : p is a polynomial} and define Dp(t) := dp/dt. Note this operator is unbounded.

Example 3.4. Q : L¹[0, 1] → L¹[0, 1] defined by Qf(t) := ∫₀ᵗ f(s) ds. Then:

‖Qf‖₁ = ∫₀¹ |Qf(t)| dt = ∫₀¹ |∫₀ᵗ f(s) ds| dt ≤ ∫₀¹ ∫₀ᵗ |f(s)| ds dt ≤ ∫₀¹ |f(s)| ds = ‖f‖₁.

Thus, from this, we can conclude that Q maps L¹[0, 1] into itself, with ‖Q‖ ≤ 1.
Example 3.5. Let k be a continuous function on [0, 1]²; then define K : C[0, 1] → C[0, 1] to be (Kf)(t) := ∫₀¹ k(s, t)f(s) ds for t ∈ [0, 1]. This is again the continuous extension of a matrix.

Theorem 3.1. The operator T is injective (or one-to-one) if T x₁ = T x₂ implies x₁ = x₂. If T : D(T) → R(T) is injective, then T⁻¹ : R(T) → D(T) is defined by T⁻¹y = x, where T x = y. For linear operators, note that T⁻¹ exists iff N(T) = {0}. Furthermore, if T⁻¹ exists and T is a linear operator, then T⁻¹ is also a linear operator.
3.2

A linear operator T is bounded if there exists c ≥ 0 such that ‖T x‖ ≤ c‖x‖ for all x ∈ D(T). For a bounded linear operator, the smallest such constant defines the norm of the operator:

‖T‖ := sup_{x∈D(T), x≠0} ‖T x‖/‖x‖ = sup_{x∈D(T), ‖x‖=1} ‖T x‖.
Example. For the operator K of Example 3.5:

‖Kf‖ = max_{t∈[0,1]} |∫₀¹ k(s, t)f(s) ds|
  ≤ maxₜ ∫₀¹ |k(s, t)| |f(s)| ds
  ≤ (maxₜ ∫₀¹ |k(s, t)| ds) maxₛ |f(s)|     [call the bracketed factor c]
  = c ‖f‖.

Thus, we can conclude ‖K‖ = sup ‖Kf‖/‖f‖ ≤ c, and therefore K is a BLO (bounded linear operator).
Theorem 3.2. If a normed space X is finite dimensional, then every linear operator on X is
bounded.
Theorem 3.3. Let T : D(T) ⊂ X → Y be a linear operator; then T is bounded iff T is continuous (continuity at even a single point suffices).

Note that if T is bounded, then N(T) is closed. Also, bounded T₁, T₂ satisfy the norm property ‖T₁T₂‖ ≤ ‖T₁‖ ‖T₂‖.
3.3 Linear functionals
Consider the special case of a linear operator f : D(f) ⊂ X → K, where K is the field associated with X, typically ℝ or ℂ; such an operator is called a linear functional. As with linear operators, we can define the notion of boundedness:

Definition 3.4. A bounded linear functional (BLF) is a linear functional for which there exists c such that |f(x)| ≤ c‖x‖. From this, we can define the norm of the functional to be:

‖f‖ := sup_{x∈D(f), x≠0} |f(x)|/‖x‖.
Example 3.10. Let y₀ ∈ ℝⁿ be fixed; then the dot product with y₀, that is f(x) = x · y₀, defines a linear functional with |f(x)| = |x · y₀| ≤ ‖x‖‖y₀‖ by Cauchy–Schwarz, which suggests ‖f‖ ≤ ‖y₀‖ (in fact equality, by taking x = y₀).

Example 3.11. Consider y₀ ∈ L^q and define f : L^p → ℝ by f(x) := ∫ x(t)y₀(t) dt. It's clear f is linear, and we can also note |f(x)| ≤ ∫ |x(t)||y₀(t)| dt ≤ ‖x‖_p ‖y₀‖_q by Hölder's inequality. Thus, we have a BLF. Note we can obtain equality in the integral by choosing x(t) := sgn y₀(t)|y₀(t)|^{q−1}.

Example 3.12. Consider x ∈ C[0, 1] and t₀ ∈ [0, 1]; then f(x) := x(t₀) is a BLF. This can be seen by noting |f(x)| = |x(t₀)| ≤ maxₜ |x(t)| = ‖x‖, thus ‖f‖ ≤ 1.
Example 3.13. Consider again the space of polynomials on [0, 1] and let f(x) := x′(1), the derivative evaluated at t = 1. Note that if we consider xₙ(t) = tⁿ, then f(xₙ) = n while ‖xₙ‖ = 1, so this functional is again unbounded.

3.4
Definition 3.5. The set of all linear functionals over a vector space X forms a vector space, which we will denote X* and call the algebraic dual.

Note, since X* is itself a vector space, it also has an algebraic dual, denoted X**. We can identify elements of this vector space: for x ∈ X, define g ∈ X** which maps g : X* → K by g(f) = f(x).

From this, we can define a mapping c : X → X** by c(x) = g, where g is defined above. Note that c is linear. This mapping is called the canonical embedding of X into X**.

Definition 3.6. If c is bijective, X and X** are isomorphic, in which case X is considered to be algebraically reflexive.
3.5 Dual spaces
Definition 3.7. The set of all bounded linear functionals on a normed vector space X is said to be the dual space X′, where the definition of bounded linear functional remains the same as in Definition 3.4.

Denote by B(X, Y) the space of all BLOs from X to Y. This space is itself a vector space; in fact, it is also a normed vector space by taking ‖B‖ = sup_{‖x‖=1} ‖Bx‖. From this, we can rewrite the dual space as X′ = B(X, K).

Theorem 3.5. Let X be a NVS and Y be a Banach space; then B(X, Y) is a Banach space.

Proof. We already know that we have a NVS. We need only show completeness, which follows from the completeness of Y: a Cauchy sequence of operators converges pointwise, and the pointwise limit is itself a bounded linear operator.
Note, we can think of this from a linear algebra perspective. Let {e₁, …, eₘ} be a basis for a finite dimensional space X. Each x ∈ X has a unique representation x = ξ₁e₁ + ⋯ + ξₘeₘ, but conversely every m-tuple of coefficients {ξ₁, …, ξₘ} defines a unique x under the same mapping. Thus, these spaces are isomorphic.

A linear operator is determined by T x := ξ₁T e₁ + ⋯ + ξₘT eₘ. Let B = (b₁, …, bₙ) be a basis for Y. For each k there is a unique representation T eₖ = ∑_{j=1}^n τ_{jk} bⱼ, which then suggests that

T x = ∑_{k=1}^m ξₖ T eₖ = ∑_{k=1}^m ξₖ ∑_{j=1}^n τ_{jk} bⱼ = ∑_{j=1}^n (∑_{k=1}^m τ_{jk} ξₖ) bⱼ.

Thus, n × m matrices are just the mappings from a space with dim X = m to one with dim Y = n. The elements of X can be thought of as m-dimensional column vectors (i.e. m × 1 matrices), and the elements of X* are effectively 1 × m matrices, or row vectors.

From this, we can conclude that X* ≅ X for finite dimensional X, and since every linear functional on a finite dimensional space is bounded, we know X* = X′.
Example 3.14. (ℝⁿ)′ = ℝⁿ. This follows from the above analysis: writing γₖ := f(eₖ),

|f(x)| = |∑ₖ ξₖγₖ| ≤ ∑ₖ |ξₖγₖ| ≤ (∑ₖ |ξₖ|²)^{1/2} (∑ₖ |γₖ|²)^{1/2} = ‖x‖ (∑ₖ |γₖ|²)^{1/2}.

Note we can obtain equality by choosing ξₖ = γₖ, thus ‖f‖ = ‖g‖, where g = (γ₁, …, γₙ).
Similarly, (ℓ¹)′ = ℓ^∞: writing x = ∑ₖ ξₖeₖ, we have f(x) = ∑ₖ ξₖ f(eₖ) = ∑ₖ ξₖγₖ with γₖ := f(eₖ). Note now that

|f(x)| ≤ ∑ₖ |ξₖγₖ| ≤ supₖ |γₖ| ∑ₖ |ξₖ| = supₖ |γₖ| ‖x‖₁,

so ‖f‖ ≤ ‖g‖_∞, where g = (γ₁, γ₂, …). We can again obtain equality in this case, telling us that ‖f‖ = ‖g‖_∞.
4 Inner product spaces

4.1 Preliminaries

Definition 4.1. An inner product space is a vector space X together with an inner product.

Definition 4.2. An inner product is a mapping ⟨·, ·⟩ : X × X → K, where K is the field. Note, the inner product satisfies the typical set of axioms.

Also note the inner product defines a norm, ‖x‖ = ⟨x, x⟩^{1/2}. This is the norm induced by an inner product; thus every inner product space is a normed vector space. One key inequality for inner product spaces is the following:
Theorem 4.1 (Schwarz inequality). For every x, y ∈ X:

|⟨x, y⟩| ≤ ‖x‖ ‖y‖.

Also worth noting are the polarization identities:

Re ⟨x, y⟩ = ¼ (‖x + y‖² − ‖x − y‖²),
Im ⟨x, y⟩ = ¼ (‖x + iy‖² − ‖x − iy‖²).
Example. In ℓ², the inner product is ⟨x, y⟩ = ∑ⱼ ξⱼ η̄ⱼ; in L²[a, b], it is ⟨x, y⟩ = ∫ₐᵇ x(t) ȳ(t) dt.
4.2
Theorem 4.4. Let X be an inner product space; then there is a Hilbert space H and an isomorphism (that is, an inner-product-preserving map) T : X → W, where W ⊂ H is dense. H is unique up to isomorphism. In other words, every inner product space has a completion.

Now, we can see that C[a, b], along with C¹[a, b], the subspace of C[a, b] whose elements have continuous derivatives, and C₀¹[a, b], the subspace of C¹[a, b] whose elements vanish at both endpoints, are all Banach spaces but not Hilbert spaces.
We know that L²[a, b] is the completion of C[a, b], but what are the completions of the other spaces? Define H¹[a, b] to be the completion of C¹[a, b]. It is called a Sobolev space. The inner product in this case is defined to be:

⟨x, y⟩_{H¹[a,b]} := ∫ₐᵇ x(t) ȳ(t) dt + ∫ₐᵇ x′(t) ȳ′(t) dt.

Similarly, the Sobolev space H₀¹[a, b] can be defined, corresponding to elements such that lim_{t→a⁺} x(t) = lim_{t→b⁻} x(t) = 0.
Example 4.8. Using this notion of Sobolev spaces, we can now begin to form alternate formulations of problems, which we will revisit later. Consider the following ODE:

x″ − cx = f,  t ∈ (a, b),  x(a) = x(b) = 0.

Note, we can now multiply both sides of this equation by some function y ∈ C₀¹[a, b] and integrate from a to b to obtain:

∫ₐᵇ x″(t)y(t) dt − c ∫ₐᵇ x(t)y(t) dt = ∫ₐᵇ f(t)y(t) dt,

and integrating the first term by parts (the boundary terms vanish since y(a) = y(b) = 0):

−∫ₐᵇ x′(t)y′(t) dt − c ∫ₐᵇ x(t)y(t) dt = ∫ₐᵇ f(t)y(t) dt.

This weak form only requires x ∈ H¹, not x ∈ C².
4.3 Orthogonal projections

Let Y be a closed subspace of a Hilbert space H and let x ∈ H. Then there is a unique y ∈ Y minimizing the distance to x,

‖x − y‖ = inf_{ỹ∈Y} ‖x − ỹ‖,

and H decomposes as

H = Y ⊕ Y^⊥.  (4.1)
In the above case, we can define P : H → Y to be the orthogonal projection such that P x = y, where x = y + z with y ∈ Y, z ∈ Y^⊥. Note that P² = P and N(P) = Y^⊥.

Definition 4.5. A set M is said to be orthogonal if ⟨x, y⟩ = 0 for all distinct x, y ∈ M. It is called orthonormal if, in addition, ‖x‖ = 1 for all x ∈ M.
Example 4.10. In Rn , the standard basis elements are orthonormal.
Example 4.11. In L²(0, 1), the set M = {sin(2πnx)} ∪ {cos(2πnx)} forms an orthogonal set. It can be rescaled to be orthonormal. Note, this is the Fourier basis.
Now let {e₁, …, eₙ} be an orthonormal set and Y = span{e₁, …, eₙ}. The sum ∑_{i=1}^n ⟨x, eᵢ⟩eᵢ is well defined for any x ∈ X, and since we know x = y + z for y ∈ Y, z ∈ Y^⊥, we can identify the projection of x onto Y as:

y = ∑_{j=1}^n ⟨x, eⱼ⟩eⱼ = ∑_{j=1}^n ⟨y + z, eⱼ⟩eⱼ = ∑_{j=1}^n ⟨y, eⱼ⟩eⱼ.
Note that we know ‖x‖² = ‖y‖² + ‖z‖², which suggests that ‖y‖² ≤ ‖x‖², which leads to the following inequality:

Theorem 4.7 (Bessel's inequality). Let (eₖ) be an orthonormal sequence in X; then for every x ∈ X:

∑_{j=1}^∞ |⟨x, eⱼ⟩|² ≤ ‖x‖².
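Bessel's inequality can be checked numerically. A sketch, assuming the orthonormal sines eₖ(t) = √2 sin(πkt) in L²(0, 1) and the sample function x(t) = t — both illustrative choices:

```python
import math

def inner(x, y, N=2000):
    """L^2(0,1) inner product <x, y> by the composite trapezoid rule."""
    h = 1.0 / N
    g = lambda t: x(t) * y(t)
    return h * (0.5 * (g(0.0) + g(1.0)) + sum(g(i * h) for i in range(1, N)))

x = lambda t: t                                    # function being expanded
e = lambda k: (lambda t: math.sqrt(2.0) * math.sin(math.pi * k * t))

coeffs = [inner(x, e(k)) for k in range(1, 21)]    # <x, e_k>, k = 1..20
partial = sum(c * c for c in coeffs)               # partial Bessel sum
norm_sq = inner(x, x)                              # ||x||^2 = 1/3
assert partial <= norm_sq + 1e-9                   # Bessel's inequality
```

Since this sine system is total, the partial sums in fact approach ‖x‖² (Parseval), and already capture over 90% of it with 20 terms here.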
4.4
Definition 4.6. A set M is considered total in a normed vector space X if span M is dense in X.
A few facts worth noting: every Hilbert space H contains a total orthonormal set. All total orthonormal sets for H have the same cardinality, called the Hilbert dimension. If H is separable, then it contains a countable total orthonormal set. Also, we have the following variation of Bessel's inequality:
Theorem 4.8 (Parseval's equality). An orthonormal set M = (eⱼ) is total if and only if for every x ∈ H:

∑_{j=1}^∞ |⟨x, eⱼ⟩|² = ‖x‖².

Note, the sum runs to ∞, but only countably many coefficients are non-zero.
Note that if two Hilbert spaces have the same Hilbert dimension, they are isomorphic; therefore, once you've seen one real Hilbert space of a given dimension, you've seen them all. The Besicovitch almost periodic functions form a non-separable Hilbert space, but such spaces rarely occur in applications.
Note, this analysis provides the justification for the Fourier series expansion f(t) = ∑_{k=1}^∞ aₖ sin(2πkt) + ∑_{j=1}^∞ bⱼ cos(2πjt) + c₀. As mentioned before, these functions (suitably normalized) form a total orthonormal set in L².

What about the Fourier transform, f(t) = ∫ f̂(ω)e^{iωt} dω? Do the exponential functions correspond to an uncountable orthonormal set? No. They aren't even in L².
We can also consider the Gram–Schmidt process, which produces an orthonormal set {y₁, …, yₙ} from a linearly independent set {x₁, …, xₙ}.

The construction of the first element is easy: y₁ = x₁/‖x₁‖. Now, the projection theorem says that x₂ = y + z, where y ∈ span{y₁} and z ⊥ y₁. Thus, x₂ = ⟨x₂, y₁⟩y₁ + v₂, which suggests that v₂ = x₂ − ⟨x₂, y₁⟩y₁ ≠ 0, since x₁, x₂ are linearly independent. Finally, set y₂ = v₂/‖v₂‖ to get normality.

Similarly, for x₃ we know that x₃ = ⟨x₃, y₁⟩y₁ + ⟨x₃, y₂⟩y₂ + v₃, and we can repeat the process. Note, this is guaranteed to work if the xⱼ are linearly independent, but this method is numerically unstable due to dividing by a potentially arbitrarily small number.
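The process above can be sketched directly (classical Gram–Schmidt in ℝ³; the input vectors are illustrative):

```python
def gram_schmidt(vectors):
    """Classical Gram-Schmidt on a linearly independent list of vectors
    (lists of floats); returns an orthonormal list with the same span."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    basis = []
    for x in vectors:
        v = x[:]
        for y in basis:
            c = dot(x, y)                       # coefficient <x, y>
            v = [vi - c * yi for vi, yi in zip(v, y)]
        norm = dot(v, v) ** 0.5                 # nonzero by independence
        basis.append([vi / norm for vi in v])
    return basis

ys = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
# verify orthonormality
for i in range(3):
    for j in range(3):
        d = sum(ys[i][k] * ys[j][k] for k in range(3))
        assert abs(d - (1.0 if i == j else 0.0)) < 1e-12
```

In floating point, the "modified" variant (projecting the running residual v instead of the original x) is the numerically preferred form of the same computation.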
4.5

Recall that X′ is the set of all bounded linear functionals on X. We know that if X is reflexive, then X ≅ X″. To check the reflexivity of a Hilbert space, we have the following theorem.
Theorem 4.9 (Riesz representation theorem). Let H be a Hilbert space. For every bounded linear functional f ∈ H′, there is a unique element z ∈ H such that for all x ∈ H:

f(x) = ⟨x, z⟩.

Furthermore, ‖f‖ = ‖z‖.
Proof. First note that N(f) is closed since f is bounded. If N(f) = H, then z = 0 and f = 0 and we are done, so assume N(f) ≠ H; then H = N(f) ⊕ N(f)^⊥, and we may choose z₀ ∈ N(f)^⊥ \ {0}.

We know immediately that f(z₀) ≠ 0, so note that

f(x − (f(x)/f(z₀)) z₀) = f(x) − (f(x)/f(z₀)) f(z₀) = 0,

i.e. x − (f(x)/f(z₀)) z₀ ∈ N(f). Since z₀ ⊥ N(f),

0 = ⟨x − (f(x)/f(z₀)) z₀, z₀⟩ = ⟨x, z₀⟩ − (f(x)/f(z₀)) ⟨z₀, z₀⟩,

and therefore:

f(x) = (f(z₀)/‖z₀‖²) ⟨x, z₀⟩.

So, if we choose z = (f(z₀)/‖z₀‖²) z₀ (conjugating f(z₀) in the complex case), we then have f(x) = ⟨x, z⟩ as desired. Uniqueness is obvious. To show the norm relation: by Schwarz, |f(x)| = |⟨x, z⟩| ≤ ‖x‖‖z‖, so ‖f‖ ≤ ‖z‖; taking x = z gives |f(z)| = ‖z‖², so ‖f‖ = ‖z‖.
4.6 Lax-Milgram theorem
Definition 4.7. Let X and Y be real vector spaces; then a function a : X × Y → ℝ is considered a bilinear form if it satisfies the typical linearity properties in each argument.

Note, if the field is the complex numbers and the form is conjugate-linear in the second argument, the form is considered sesquilinear.

We say the form is bounded if there exists a constant c such that |a(x, y)| ≤ c‖x‖_X ‖y‖_Y. As usual, the norm of the form is defined to be the smallest c that satisfies this property.
Theorem 4.10. Let a : H₁ × H₂ → ℝ be a bounded bilinear form over real Hilbert spaces; then there exists a uniquely determined linear operator S : H₁ → H₂ such that:

a(x, y) = ⟨Sx, y⟩.

Furthermore, ‖a‖ = ‖S‖.

Proof. For fixed x, the bilinear form becomes a BLF in y, and the result follows immediately from the Riesz representation theorem.
Definition 4.8. A bilinear form b(x, y) is called coercive (or uniformly elliptic) if there exists a constant c > 0 such that:

b(x, x) ≥ c‖x‖².

Theorem (Lax–Milgram). Let b be a bounded, coercive bilinear form on a Hilbert space H. Then for every g ∈ H′ there is a unique x ∈ H such that b(x, y) = g(y) for all y ∈ H.

Proof (sketch). Let S : H → H be the operator such that b(x, y) = ⟨Sx, y⟩; then we know from coercivity that ⟨Sx, x⟩ = b(x, x) ≥ c‖x‖², so S is bounded below, hence injective with closed range; one then checks R(S) = H, so x = S⁻¹z, where z is the Riesz representer of g.

In general, we can now solve problems of the form b(x, y) = g(y) for all y, and turn them into the form Bx = g in our function space.
Example 4.13. Consider the following boundary value problem:

−u″ + au = f,  u(0) = u(1) = 0.  (4.2)

Assume a(t) ∈ C[0, 1] and a(t) ≥ a₀ > 0 in this interval. We can also assume f ∈ L²(0, 1) to proceed. Consider a test function v ∈ C₀¹[0, 1]; multiply both sides and integrate:

−u″v + auv = fv,
−∫₀¹ u″v dt + ∫₀¹ auv dt = ∫₀¹ fv dt,
∫₀¹ u′v′ dt + ∫₀¹ auv dt = ∫₀¹ fv dt.  (4.3)
The left-hand side of (4.3) defines the bilinear form b(u, v) = ∫₀¹ u′v′ + auv dt, which is bounded: |b(u, v)| ≤ max{1, ‖a‖_∞} ‖u‖_{H¹} ‖v‖_{H¹}. We now need to show coercivity as well:

b(u, u) = ∫ (u′)² + a u² ≥ ∫ (u′)² + a₀ ∫ u² ≥ min{1, a₀} (∫ (u′)² + ∫ u²) = min{1, a₀} ‖u‖²_{H¹}.
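Though the notes treat the weak form abstractly, the boundary value problem (4.2) can also be approximated directly. A finite-difference sketch (a stand-in for a Galerkin discretization; the solver name and the manufactured solution are assumptions for illustration):

```python
import math

def solve_bvp(a, f, N):
    """Finite differences for  -u'' + a(t) u = f,  u(0) = u(1) = 0,
    on N interior points, via the Thomas (tridiagonal) algorithm."""
    h = 1.0 / (N + 1)
    t = [(i + 1) * h for i in range(N)]
    # scheme: (-u_{i-1} + 2 u_i - u_{i+1}) / h^2 + a_i u_i = f_i
    diag = [2.0 / h ** 2 + a(ti) for ti in t]
    off = -1.0 / h ** 2
    rhs = [f(ti) for ti in t]
    for i in range(1, N):                  # forward elimination
        m = off / diag[i - 1]
        diag[i] -= m * off
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * N                          # back substitution
    u[-1] = rhs[-1] / diag[-1]
    for i in range(N - 2, -1, -1):
        u[i] = (rhs[i] - off * u[i + 1]) / diag[i]
    return t, u

# manufactured solution u(t) = sin(pi t) with a(t) = 1, so that
# f = -u'' + u = (pi^2 + 1) sin(pi t)
t, u = solve_bvp(lambda t: 1.0,
                 lambda t: (math.pi ** 2 + 1) * math.sin(math.pi * t), 200)
err = max(abs(u[i] - math.sin(math.pi * t[i])) for i in range(len(t)))
assert err < 1e-3
```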
5 Linear operators II

5.1 Hilbert adjoint
The Hilbert adjoint T* of an operator T : H → H is defined by ⟨T x, y⟩ = ⟨x, T*y⟩ for all x, y ∈ H. Note, the Hilbert adjoint and the adjoint are equivalent when working in a Hilbert space. The adjoint always exists and is unique, with ‖T*‖ = ‖T‖. Any bounded sesquilinear form h(x, y) can be written this way: h(x, y) = ⟨T x, y⟩ for a unique bounded operator T.
Example 5.1. Consider a matrix A ∈ ℝ^{m×n} as a linear operator A : ℝⁿ → ℝᵐ. In this case:

⟨Ax, y⟩ = (Ax)ᵀy = xᵀAᵀy = ⟨x, Aᵀy⟩, so A* = Aᵀ.
Example 5.2. Consider K : L²[0, 1] → L²[0, 1] defined by the usual integral operator (Ku)(t) = ∫₀¹ k(t, s)u(s) ds. We can interchange the integrals as we did the sums above, and the resulting adjoint becomes (K*v)(s) = ∫₀¹ k(t, s)v(t) dt.
5.2 Boundedness

A useful consequence of the Hahn–Banach theorem: for every x in a normed vector space X,

‖x‖ = sup_{f∈X′, f≠0} |f(x)| / ‖f‖.
Theorem 5.4 (Uniform boundedness theorem). Let X be a Banach space and let {Tₙ} be a sequence of BLOs Tₙ : X → Y, where Y is a normed vector space. If for each fixed x ∈ X the set {‖Tₙx‖} is bounded, then there is a c such that ‖Tₙ‖ ≤ c for all n.
5.3 Reflexive spaces
Thus, we can conclude that if X is reflexive, then if either X or X′ is separable, the other must be separable as well. Also note that if X is separable and X′ is not, X cannot be reflexive. For instance, note that ℓ¹ is separable but ℓ^∞ = (ℓ¹)′ is not, thus ℓ¹ is not reflexive.
5.4 Weak convergence and weak* convergence
Definition 5.2. The sequence {xₙ} in a normed vector space X is said to converge strongly if ‖xₙ − x‖ → 0 as n → ∞.

Definition 5.3. The sequence {xₙ} in a normed vector space X is said to converge weakly if |f(xₙ) − f(x)| → 0 as n → ∞ for all f ∈ X′.

In this case, we denote the limit xₙ ⇀ x and say x is the weak limit of {xₙ}. It's clear that strong convergence implies weak convergence, but the converse is not necessarily true. Note, if we have a finite dimensional space, the converse is true.
Example 5.6. Consider X = ℓ^p (1 < p < ∞) and define (xₙ) := (eₙ). Since (ℓ^p)′ = ℓ^q, we can represent any f ∈ (ℓ^p)′ by some y = (ηⱼ) ∈ ℓ^q, so f(eₙ) = ηₙ → 0. Thus eₙ ⇀ 0, even though (eₙ) has no strong limit.
Note that the weak limit is always unique, and that if a sequence is weakly convergent, then every subsequence is as well (with the same limit). Also, {‖xₙ‖} must be bounded, but the proof is tricky.

Let {fₙ} be a sequence of bounded linear functionals in X′. Then recall that we say fₙ → f, that is, converges strongly, if ‖fₙ − f‖ → 0, and fₙ ⇀ f, that is, converges weakly, if g(fₙ − f) → 0 for all g ∈ X″. We can introduce yet another notion of convergence.
Definition 5.4. If the above sequence has the property that fₙ(x) − f(x) → 0 for all x ∈ X, then fₙ is said to converge weak* to f, written fₙ ⇀* f.

Weak convergence is stronger than weak* convergence; that is, weak implies weak*. To see this, note that if fₙ ⇀ f then g(fₙ) − g(f) → 0 for all g ∈ X″, and every x ∈ X maps to a g ∈ X″ by the canonical map g(f) = f(x). Thus g(fₙ) = fₙ(x) and g(f) = f(x), so fₙ(x) − f(x) → 0.

Also note that if X is reflexive, meaning X ≅ X″, then weak and weak* convergence are equivalent.
Suppose {uₙ} is a sequence in L^q, where 1 < q < ∞. L^q is reflexive, and its dual space is L^p, where 1/p + 1/q = 1. Then uₙ ⇀ u in L^q means that ∫ uₙv → ∫ uv for all v ∈ L^p, and in this case weak* convergence is the same.

Next consider {uₙ} as a sequence in L^∞. Here, uₙ ⇀ u means that f(uₙ) → f(u) for all f ∈ (L^∞)′, but we don't know what (L^∞)′ is. We can note that L^∞ = (L¹)′, which now means that uₙ ⇀* u makes sense and says that ∫ uₙv → ∫ uv for all v ∈ L¹.
Example. Consider

−u″ + aₙ(x)u = f,  u(0) = u(1) = 0,

where a(x) corresponds to some periodic material coefficient, say a periodic square wave taking values between 1 and 2, and aₙ(x) = a(nx) oscillates increasingly rapidly. We can now construct the weak form by multiplying by a v ∈ H₀¹(0, 1):

∫ u′v′ + ∫ aₙuv = ∫ fv.

Note that {aₙ} is a sequence in L^∞(0, 1), and that ∫ |uv| ≤ ‖u‖_{L²}‖v‖_{L²}, so uv ∈ L¹. This then means that if ∫ aₙ(uv) → ∫ a*(uv) for some a*, then the weak form converges to the one with coefficient a*.

Another possible choice is aₙ(x) = 2 + sin(2πnx). In this case ∫₀¹ aₙ(x) dx = 2, and also ∫₀¹ (1/aₙ(x)) dx = 1/ā, defining the harmonic mean ā, and finally sin(2πnx) ⇀* 0 in L^∞.
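The weak* convergence aₙ = 2 + sin(2πnx) ⇀* 2 can be observed numerically: ∫₀¹ aₙv dx approaches 2∫₀¹ v dx for a fixed v ∈ L¹. A sketch with the illustrative test function v(x) = x:

```python
import math

def integral(g, N=2000):
    """Composite trapezoid rule for the integral of g over [0, 1]."""
    h = 1.0 / N
    return h * (0.5 * (g(0.0) + g(1.0)) + sum(g(i * h) for i in range(1, N)))

v = lambda x: x                      # a fixed test function v in L^1(0, 1)
target = 2.0 * integral(v)           # pairing of v with the weak* limit 2

vals = [abs(integral(lambda x: (2.0 + math.sin(2.0 * math.pi * n * x)) * v(x))
            - target)
        for n in (1, 2, 4, 8, 16)]

# for this v the gap is exactly 1/(2 pi n), so it shrinks as n grows
assert all(vals[i + 1] < vals[i] for i in range(len(vals) - 1))
```

Note the contrast with strong convergence: ‖aₙ − 2‖_∞ = 1 for every n, so the convergence is genuinely only weak*.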
5.5 Compact operators
Recall that a set K is compact if every sequence {xₙ} in K has a subsequence {xₙₖ} converging to an element of K.

Definition 5.5. Let X, Y be normed vector spaces. An operator T : X → Y is said to be compact if T is linear and if for every bounded subset M of X, the closure of T(M) is compact.

Lemma 5.2. Every compact operator T : X → Y is bounded. If dim X = ∞, then I : X → X is not compact.

To see this, note that U = {x ∈ X : ‖x‖ = 1} is bounded, so the closure of T(U) is compact and hence bounded, meaning sup_{‖x‖=1} ‖T x‖ = ‖T‖ < ∞. To see the second statement, note that the closure of I(U) = U is not compact when dim X = ∞.
Theorem 5.7 (Compactness criterion I). An equivalent criterion for compactness can be summarized as: T is compact if and only if it maps every bounded sequence {xₙ} in X onto a sequence {T xₙ} in Y which has a convergent subsequence.

C(X, Y), the space of all compact operators T : X → Y, is a vector space, and C(X, Y) ⊂ B(X, Y), which is Banach if Y is Banach.
Theorem 5.8 (Compactness criterion II). Yet another criterion: if T is bounded and linear and either its domain or its range is finite dimensional, then T is compact.

Theorem 5.9. Let (Tₙ) be a sequence of compact operators from X into a Banach space Y. If ‖Tₙ − T‖ → 0, then T is compact.
Proof. Let (xₘ) be a bounded sequence in X, say ‖xₘ‖ ≤ c. We will construct a subsequence (yₘ) such that (T yₘ) converges.

We know T₁ is compact, so there is a subsequence (x_{1,m}) of (xₘ) such that (T₁x_{1,m}) is Cauchy. We can repeat this process for T₂, …, Tₙ, …. Thus we have sequences (x_{j,m}) such that (Tₖ x_{j,m})ₘ is Cauchy whenever k ≤ j. We then define the diagonal sequence yₘ = x_{m,m}. For any fixed k, (Tₖ yₘ) is Cauchy because eventually m ≥ k. Since (xₘ) is bounded, (yₘ) is also bounded.

We want to show that (T yₘ) is Cauchy. Given ε > 0, choose p and N large enough such that ‖T_p − T‖ < ε/3c and ‖T_p yₖ − T_p yⱼ‖ < ε/3 for k, j > N. Then

‖T yₖ − T yⱼ‖ ≤ ‖T yₖ − T_p yₖ‖ + ‖T_p yₖ − T_p yⱼ‖ + ‖T_p yⱼ − T yⱼ‖
  ≤ ‖T − T_p‖‖yₖ‖ + ε/3 + ‖T − T_p‖‖yⱼ‖ < ε.
Example 5.8. Consider T : ℓ² → ℓ² defined by T x := (ξ1/1, ξ2/2, ξ3/3, …), which can be thought of as multiplying by an infinite diagonal matrix. We hope to show that T is compact.
Define Tn to be T truncated to n terms. Note that since Tn has finite rank n, it is compact. We also note that
‖Tn x − T x‖² = Σ_{j=n+1}^∞ |ξj/j|² ≤ (1/(n+1)²) Σ_{j=n+1}^∞ |ξj|² ≤ (1/(n+1)²) ‖x‖²,
so ‖Tn x − T x‖ ≤ ‖x‖/(n+1), meaning ‖Tn − T‖ ≤ 1/(n+1) → 0. Therefore, by our previous theorem, T is compact.
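As a sanity check (not part of the notes), the truncation estimate ‖Tn − T‖ ≤ 1/(n+1) can be verified numerically on a large finite section of ℓ², which ignores the infinite tail:

```python
import numpy as np

# Finite section of T x = (xi_1/1, xi_2/2, ..., xi_N/N) on l^2.
N = 500
T = np.diag(1.0 / np.arange(1, N + 1))

for n in (10, 50, 100):
    Tn = T.copy()
    Tn[n:, :] = 0.0                   # truncate: keep only the first n entries
    # The operator (spectral) norm of T - Tn is its largest diagonal entry.
    err = np.linalg.norm(T - Tn, ord=2)
    assert abs(err - 1.0 / (n + 1)) < 1e-12
```

The spectral norm of a diagonal matrix is its largest diagonal entry, so the bound is attained exactly here.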
Theorem 5.10. Let X, Y be NVS and let T be compact; then xn ⇀ x implies T xn → T x.
6
Spectral Theory
6.1
Preliminaries
Ax = λx.  (6.1)
Recall that λ ∈ ℂ is an eigenvalue if (6.1) has a solution x ≠ 0, in which case x is the corresponding eigenvector. The nullspace of (A − λI) is the eigenspace corresponding to the eigenvalue λ. The collection of eigenvalues of A is called the spectrum of A, denoted σ(A).
Note that eigenvalues are the same for similar matrices, that is A2 = C⁻¹A1C, where C is non-singular; thus σ(A) is unambiguously defined. Also, det(A − λI) is a polynomial of degree n in λ. The Fundamental Theorem of Algebra says that there must be at least one root and no more than n including multiplicities.
This works for finite-dimensional operators, but defining the spectrum of an infinite-dimensional operator becomes less straightforward. Thus, let X ≠ {0} be a complex NVS and let T : D(T) ⊂ X → X be a linear operator.
For λ ∈ ℂ, define Tλ := T − λI and Rλ(T) := Tλ⁻¹ = (T − λI)⁻¹ when it exists. Rλ(T) is called the resolvent operator because it resolves the equation (T − λI)x = y.
Definition 6.1. The resolvent set ρ(T) is the set of all λ ∈ ℂ such that:
We can then define the spectrum σ(T) to be ℂ \ ρ(T). Thus, one or more of the conditions must fail for λ ∈ σ(T). We can then split σ(T) into three disjoint sets depending on which property is violated:
(1) σp(T), the point spectrum, consists of all λ such that Rλ(T) = (T − λI)⁻¹ does not exist. Note that in this case (T − λI)x = 0 has a non-zero solution, meaning that we can think of this as T x = λx, so λ is an eigenvalue. This x is the corresponding eigenvector.
(2) If Rλ(T) = (T − λI)⁻¹ exists with dense domain but is not bounded, then λ lies in the continuous spectrum, denoted σc(T).
(3) Again, if Rλ(T) = (T − λI)⁻¹ exists but its domain is not dense in X, then λ ∈ σr(T), the residual spectrum.
By definition, we know σ(T) = σp(T) ∪ σc(T) ∪ σr(T), thus we can characterize an operator by its spectrum.
Example 6.1. Consider the identity operator I. Then Rλ(I) = (I − λI)⁻¹ = [(1 − λ)I]⁻¹ = (1 − λ)⁻¹ I. Note that Rλ(I) is not defined for λ = 1, so 1 ∈ σp(I). If λ ≠ 1, then Rλ(I) exists, is bounded, and is defined on all of X, thus λ ∈ ρ(I).
Example 6.2. Consider T : ℓ² → ℓ² defined by T x := (ξ1, ξ2/2, ξ3/3, …). If λ = 1/n, then (T − λI)en = (T − (1/n)I)en = 0, thus Rλ(T) does not exist, meaning λ = 1/n is an eigenvalue with eigenvector en. Note that when λ = 0, (T − λI) = T and T⁻¹x = (ξ1, 2ξ2, 3ξ3, …), which exists and is defined on a dense subset of ℓ², namely all sequences with finitely many non-zero entries. Note that ‖T⁻¹en‖ = n, thus T⁻¹ is unbounded and therefore 0 ∈ σc(T).
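A finite section of this operator makes the eigenvalues 1/n and their accumulation at 0 concrete (a NumPy sketch, not from the notes; a finite matrix of course only exhibits finitely many of them):

```python
import numpy as np

# Finite section of T x = (xi_1, xi_2/2, xi_3/3, ...): eigenvalues are 1/n.
N = 20
T = np.diag(1.0 / np.arange(1, N + 1))
lam = np.sort(np.linalg.eigvalsh(T))[::-1]   # sort descending
assert np.allclose(lam, 1.0 / np.arange(1, N + 1))
# As N grows, the eigenvalues 1/n crowd toward 0 and nowhere else,
# matching the general theory of compact operators below.
```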
Example 6.3. Consider T : ℓ² → ℓ² defined by T x := (0, ξ1, ξ2, …), that is, the right shift operator. First consider λ = 0. Then (T − λI) = T and therefore T⁻¹x = (ξ2, ξ3, …) on the range of T. Note, R₀ exists and is bounded, but the range of T consists only of sequences with a 0 in the first term, which is not dense in ℓ², meaning 0 ∈ σr(T).
6.2 Spectral properties of bounded linear operators
The above theorem is known as the Neumann series for a bounded linear operator. The proof follows by considering a geometric series and the fact that, in a Banach space, absolute convergence implies convergence.
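A minimal numerical illustration of the Neumann series (I − T)⁻¹ = Σ_{k≥0} Tᵏ for ‖T‖ < 1 (a sketch using a random matrix, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))
T *= 0.5 / np.linalg.norm(T, 2)       # scale so that ||T|| = 0.5 < 1

# Partial sums of the Neumann series sum_{k>=0} T^k.
S = np.eye(4)
P = np.eye(4)
for _ in range(60):
    P = P @ T
    S = S + P

assert np.allclose(S, np.linalg.inv(np.eye(4) - T))
```

Sixty terms suffice here because the tail is bounded by the geometric series Σ (1/2)ᵏ.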
6.3 Spectral properties of compact operators
Theorem 6.6. Let X be a NVS and T : X → X be compact. The set of all eigenvalues σp(T) is countable, and the only possible accumulation point of σp(T) is 0.
From this, we know that the set of all eigenvalues of a compact linear operator can be arranged into a sequence whose only possible point of accumulation is 0.
Theorem 6.7. Let X be a NVS. Let T : X → X be compact. Then the eigenspace corresponding to every non-zero eigenvalue λ ∈ σp(T) is finite dimensional.
Theorem 6.8 (Fredholm alternative). Let X be a NVS and let T : X → X be a compact linear operator. Then either:
Both these lemmas combined can be used to prove the statement of the Fredholm alternative.
Theorem 6.9. Let T : X → X be compact and X be a NVS. Then every non-zero λ ∈ σ(T) is an eigenvalue.
It can also be shown that 0 ∈ σ(T) for every compact linear operator on an infinite-dimensional space. This follows from the fact that if 0 ∈ ρ(T), then T⁻¹ would exist and be bounded, meaning T⁻¹T = I would be compact; but I can only be a compact operator in finite dimensions, thus we have a contradiction.
6.4 Bounded self-adjoint operators
that T x = Σ_{j=1}^∞ ⟨x, uj⟩ T uj = Σ_{j=1}^∞ λj ⟨x, uj⟩ uj, which tells us everything about T in terms of {uj}.
So, to solve (I − T)x = y, we need only solve Σ_{j=1}^∞ (1 − λj)⟨x, uj⟩ uj = Σ_{j=1}^∞ ⟨y, uj⟩ uj, i.e. (1 − λj)⟨x, uj⟩ = ⟨y, uj⟩. In other words, we diagonalize the problem. We can see from this that ⟨x, uj⟩ = ⟨y, uj⟩/(1 − λj) and therefore x = Σ_{j=1}^∞ (⟨y, uj⟩/(1 − λj)) uj, which follows from x = Σ_{j=1}^∞ ⟨x, uj⟩ uj.
Thus, if we want to solve T x = y, we can simply solve x = Σ_{j=1}^∞ (⟨y, uj⟩/λj) uj, but we know λj → 0 since T is compact, thus T⁻¹ must be unbounded.
Overall, the big take-away from this theorem is: compact operators can be approximated with finite matrices.
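The diagonalized solve can be sketched in finite dimensions, where a symmetric matrix plays the role of the compact self-adjoint operator (a hypothetical NumPy example, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
T = A + A.T                           # symmetric: a finite model of a
T *= 0.4 / np.linalg.norm(T, 2)       # compact self-adjoint operator, ||T|| = 0.4

lam, U = np.linalg.eigh(T)            # T u_j = lam_j u_j, columns of U orthonormal
y = rng.standard_normal(5)

# Diagonalized solve of (I - T) x = y: <x, u_j> = <y, u_j> / (1 - lam_j).
x = U @ ((U.T @ y) / (1.0 - lam))
assert np.allclose((np.eye(5) - T) @ x, y)
```

Scaling so that ‖T‖ < 1 guarantees that 1 is not an eigenvalue, so every divisor 1 − λj is safely bounded away from zero.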
6.5 Return to Fredholm theory
We can now formulate a stronger version of the Fredholm alternative for Hilbert spaces.
We first need the lemma:
Lemma 6.3. If T : H → H is a compact linear operator, then T* is also compact.
Theorem 6.16 (Fredholm alternative). Let T : H → H be a BLO on the Hilbert space H; then the closure of R(T) equals N(T*)⊥. Furthermore, there are three exclusive alternatives:
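The orthogonality between the range of T and the null space of T* can be checked in finite dimensions, where the range is closed and R(T) = N(T*)⊥ holds exactly (a NumPy sketch with a hypothetical rank-deficient matrix):

```python
import numpy as np

# Finite-dimensional check that R(T) = N(T*)^perp: T x = y is solvable
# exactly when y is orthogonal to the null space of the adjoint T*.
T = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # rank 1 (hypothetical example matrix)
z = np.array([2.0, -1.0])             # spans N(T*) = N(T^T)
assert np.allclose(T.T @ z, 0.0)

y_good = np.array([1.0, 2.0])         # orthogonal to z, so y_good is in R(T)
x, *_ = np.linalg.lstsq(T, y_good, rcond=None)
assert np.allclose(T @ x, y_good)     # an exact solution exists

y_bad = np.array([1.0, 0.0])          # not orthogonal to z: no solution exists
x_bad, *_ = np.linalg.lstsq(T, y_bad, rcond=None)
assert not np.allclose(T @ x_bad, y_bad)
```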
6.6
Sturm-Liouville problem
We can apply most of the theory covered in this course by considering a single example, the
Sturm-Liouville problem.
((1/ρ) vx)x − κ vtt = 0,  (6.2)
where ρ(x) is the density of the medium and κ(x) is the compressibility, which is equivalent to 1/β(x), where β is the bulk modulus.
We'd like to separate variables v(x, t) = u(x)g(t), which yields:
g(t) ((1/ρ) u′(x))x − κ u(x) g″(t) = 0  ⇒  ((1/ρ) u′(x))x / (κ u(x)) = g″(t)/g(t) = −λ.
Note, as with any separation of variables problem, both sides of this equation are constant, so we obtain two ordinary differential equations:
((1/ρ) u′)′ = −λ κ u,   g″(t) = −λ g(t).
We can observe that g(t) = e^{iωt} with λ = ω², so λ ≥ 0. Other solutions for g(t) exist but this will suffice. We can generalize the u equation to the form:
(σu′)′ − qu + λκu = 0,  q(x) ≥ 0,  (6.3)
Note, q(x) corresponds to absorption, and here we will assume we have Dirichlet boundaries, that is u(0) = u(1) = 0. Let w ∈ C₀¹[0, 1] be a test function; then we have:
0 = ∫₀¹ [(σu′)′w − quw + λκuw] dx = −∫₀¹ [σu′w′ + quw] dx + λ ∫₀¹ κuw dx.
Thus, if we let a(u, w) := ∫₀¹ [σu′w′ + quw] dx and b(u, w) := ∫₀¹ κuw dx, we have a problem of the form a(u, w) = λ b(u, w), which is the weak form of our problem and true for any w ∈ H₀¹(0, 1). We hope to find λ ∈ ℂ and u ∈ H₀¹(0, 1).
If a, b are bounded, the Riesz representation theorem implies there are bounded operators A : H₀¹ → H₀¹ and B : H₀¹ → H₀¹ such that a(u, w) = ⟨Au, w⟩ and b(u, w) = ⟨Bu, w⟩, in which case our problem becomes ⟨Au − λBu, w⟩ = 0 for all w ∈ H₀¹, which suggests that Au − λBu = 0. Thus, we must check if a, b are bounded, that is if |a(u, w)| ≤ c‖u‖‖w‖.
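The weak eigenproblem can be sketched numerically with finite differences in the simplest case σ ≡ κ ≡ 1, q ≡ 0, where the problem reduces to −u″ = λu with u(0) = u(1) = 0 and the exact eigenvalues are λn = (nπ)² (a sketch, not from the notes):

```python
import numpy as np

# Finite-difference sketch of the weak eigenproblem a(u, w) = lambda b(u, w)
# in the special case sigma = kappa = 1, q = 0:  -u'' = lambda u, u(0) = u(1) = 0.
N = 200                               # number of interior grid points
h = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

lam = np.sort(np.linalg.eigvalsh(A))
for n in range(1, 4):                 # smallest eigenvalues approximate (n*pi)^2
    assert abs(lam[n - 1] - (n * np.pi) ** 2) < 1.0
```

The discretization error for the n-th eigenvalue scales like (nπ)⁴h², so only the low modes are accurate, consistent with the compactness picture above.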
Recall the H¹ inner product ⟨u, w⟩_{H¹} := ∫₀¹ (u′w′ + uw) dx. Thus:
|b(u, w)| = |∫₀¹ κuw dx| ≤ ∫₀¹ κ(x)|u(x)||w(x)| dx ≤ max_{x∈[0,1]} κ(x) ∫₀¹ |u(x)||w(x)| dx ≤ c‖u‖_{L²}‖w‖_{L²} ≤ c‖u‖_{H¹}‖w‖_{H¹}.
Note, a similar argument holds for a:
|a(u, w)| = |∫₀¹ (σu′w′ + quw) dx| ≤ c‖u‖_{H¹}‖w‖_{H¹}.
Next, using u(x) = ∫₀ˣ u′(t) dt,
‖u‖²_{L²} = ∫₀¹ u(x)² dx = ∫₀¹ (∫₀ˣ u′(t) dt)² dx ≤ ∫₀¹ x ∫₀ˣ u′(t)² dt dx  (by Jensen's inequality) ≤ ∫₀¹ ∫₀¹ u′(t)² dt dx = ∫₀¹ u′(t)² dt = ‖u′‖²_{L²},
which suggests that ‖u‖_{L²} ≤ ‖u′‖_{L²} for u ∈ H₀¹, which is the statement of Poincaré's inequality in one dimension. This then allows us to continue:
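The one-dimensional Poincaré inequality is easy to check numerically for functions vanishing at both endpoints (a NumPy sketch, using u(x) = sin(nπx) as test cases):

```python
import numpy as np

# Numerical check of ||u||_{L2} <= ||u'||_{L2} for u in H_0^1(0,1),
# using u(x) = sin(n*pi*x), which vanishes at both endpoints.
x = np.linspace(0.0, 1.0, 10001)

for n in range(1, 5):
    u = np.sin(n * np.pi * x)
    du = n * np.pi * np.cos(n * np.pi * x)
    # Approximate the L^2 norms on (0,1) by averaging over the uniform grid.
    norm_u = np.sqrt(np.mean(u**2))
    norm_du = np.sqrt(np.mean(du**2))
    assert norm_u <= norm_du          # the gap grows like a factor of n*pi
```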
a(u, u) = ∫₀¹ (σu′² + qu²) dx ≥ ∫₀¹ σu′² dx ≥ inf_x σ(x) ∫₀¹ u′(x)² dx = σ₀ ∫₀¹ u′(x)² dx = σ₀ ∫₀¹ (u′(x)²/2 + u′(x)²/2) dx ≥ (σ₀/2) ∫₀¹ (u′(x)² + u(x)²) dx = (σ₀/2) ‖u‖²_{H¹},
where the last inequality applies Poincaré's inequality to one of the two halves. Thus a is coercive.
Next we show that B is compact. Since ⟨Bu, w⟩ = ∫₀¹ κuw dx, it suffices to study the bilinear form (u, w) ↦ ∫₀¹ κuw dx on H₀¹.
Consider {sin(nπx)}_{n=1}^∞, a complete orthogonal set in H₀¹. Let φn := cn sin(nπx), where cn² := 2/((nπ)² + 1) for normalization. Note that now ⟨φn, φn⟩_{H¹} = cn²((nπ)²/2 + 1/2) = 1, meaning we have a complete orthonormal set (basis) for H₀¹. Therefore (with κ ≡ 1) B becomes:
⟨Bφn, φm⟩ = ∫₀¹ φn φm dx = 0 if n ≠ m, and ⟨Bφn, φn⟩ = ∫₀¹ cn² sin²(nπx) dx = 1/((nπ)² + 1).
If (un) is a sequence in H₀¹ with ‖un‖ ≤ 1, then, assuming κ is smooth, ‖Bun‖ is bounded by some c and a convergent subsequence of (Bun) can be extracted, so B is compact.
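The normalization of φn and the diagonal form of B (with κ ≡ 1) can be verified by quadrature; a NumPy sketch where the uniform-grid average approximates ∫₀¹:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)

def integrate(f):
    return np.mean(f)                 # uniform-grid approximation of int_0^1 f dx

for n in (1, 2, 3):
    c = np.sqrt(2.0 / ((n * np.pi) ** 2 + 1.0))
    u = c * np.sin(n * np.pi * x)
    du = c * n * np.pi * np.cos(n * np.pi * x)
    h1 = integrate(du * du + u * u)   # H^1 norm squared of phi_n
    l2 = integrate(u * u)             # <B phi_n, phi_n> with kappa = 1
    assert abs(h1 - 1.0) < 1e-3
    assert abs(l2 - 1.0 / ((n * np.pi) ** 2 + 1.0)) < 1e-3
```

The diagonal entries 1/((nπ)² + 1) → 0, which is exactly the eigenvalue decay a compact operator must exhibit.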
Also note that A, B are self-adjoint. It can be shown that if A⁻¹ is bounded and A is self-adjoint, then so is A⁻¹. We can define a new inner product (·, ·) by (u, w) := ⟨Bu, w⟩ = ∫₀¹ κuw dx. T := A⁻¹B is then self-adjoint with respect to this new inner product. To see this:
(T u, w) = ⟨T u, Bw⟩ = ⟨A⁻¹Bu, Bw⟩ = ⟨Bu, A⁻¹Bw⟩ = ⟨Bu, T w⟩ = (u, T w).
Thus, T is self-adjoint with respect to (·, ·), meaning the eigenvalues are real and the eigenvectors associated with distinct eigenvalues are orthogonal with respect to (·, ·).
Note, we can also show that λ is always positive by noting that:
7 Distribution Theory

7.1 Multi-index notation

∂^α u := ∂^{|α|} u / (∂x₁^{α₁} ⋯ ∂xₙ^{αₙ}).

Example 7.1. ∂³u/(∂x₁² ∂x₃) = ∂^α u with α = (2, 0, 1).
7.2 Definitions

C^∞(Ω) := ∩_{k=0}^∞ C^k(Ω).

The support of a function u is the complement of the largest open set on which u = 0. Denote this set by supp u. Note, for continuous functions u, supp u is the closure of {x : u(x) ≠ 0}.
Define yet another space: C₀^∞(Ω) := {u ∈ C^∞(Ω) : supp u ⊂ Ω and supp u is compact}. Historically C₀^∞(Ω) = D(Ω), the space of test functions.
Finally, the last space we need is D_K = {u ∈ C₀^∞(ℝⁿ) : supp u ⊂ K}, where K is compact.
Define the seminorm ‖φ‖_{K,p} := max_{|α|≤p} sup_{x∈K} |∂^α φ(x)|. Note that ‖φ‖_{K,p} = 0 does not imply φ = 0, since we are only evaluating it on the compact subset K.
Definition 7.1. Let {φn} be a sequence in C₀^∞(Ω). We say that φn → φ in C₀^∞(Ω) if there exists K such that supp φj ⊂ K for all j and ‖φn − φ‖_{K,p} → 0 for every p. Note again, K must be compact.
7.3 Distributions

Note the action of u ∈ D′(Ω) on φ ∈ C₀^∞(Ω) is written u(φ) = ⟨u, φ⟩, but this is not an inner product, just notationally the same.
The continuity means that φj → φ ⇒ ⟨u, φj⟩ → ⟨u, φ⟩, or equivalently φj → 0 ⇒ ⟨u, φj⟩ → 0.
Theorem 7.1. A linear functional u on C₀^∞(Ω) is continuous iff for every compact K, there exist c, p such that |⟨u, φ⟩| ≤ c‖φ‖_{K,p} for all φ ∈ D_K.
Definition 7.3. If there is an integer p such that for every compact K, there exists c with |⟨u, φ⟩| ≤ c‖φ‖_{K,p} for all φ ∈ D_K, then u is said to have order p.
Example 7.2. We can consider the distribution δ defined by ⟨δ, φ⟩ := φ(0). Note, in this case, |⟨δ, φ⟩| = |φ(0)| ≤ ‖φ‖_{K,0} for every K, so δ is a distribution of order zero. The classical notation is ∫ δ(x)φ(x) dx = ⟨δ, φ⟩.
Example 7.3. Every f ∈ L¹_loc(Ω) defines a distribution u by ⟨u, φ⟩ := ∫ f(x)φ(x) dx. Here, |⟨u, φ⟩| = |∫ f(x)φ(x) dx| ≤ max|φ| ∫_K |f(x)| dx = c‖φ‖_{K,0} for any K. We then just need f to be locally integrable for this to hold.
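A numerical illustration of how δ arises as a limit of locally integrable functions: take ηε a box of width 2ε and unit mass, so ⟨ηε, φ⟩ → φ(0) as ε → 0 (a sketch; the box kernel is one convenient choice of approximate identity):

```python
import numpy as np

# <eta_eps, phi> = int eta_eps(x) phi(x) dx -> phi(0) as eps -> 0.
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
phi = np.cos(x)                       # smooth test-like function with phi(0) = 1

for eps, tol in [(0.1, 1e-2), (0.01, 1e-3)]:
    eta = np.where(np.abs(x) < eps, 1.0 / (2.0 * eps), 0.0)   # unit-mass box
    val = np.sum(eta * phi) * dx      # quadrature for <eta_eps, phi>
    assert abs(val - 1.0) < tol       # approaches phi(0) = 1
```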
7.4 Operations on distributions
Define the derivative ⟨∂u/∂xj, φ⟩ := −⟨u, ∂φ/∂xj⟩. Note that the minus sign comes from integration by parts. For any multi-index α, ⟨∂^α u, φ⟩ := (−1)^{|α|} ⟨u, ∂^α φ⟩.
In general, if T : C₀^∞(Ω) → C₀^∞(Ω) is linear and continuous with T′ mapping the same spaces such that ∫(Tφ)ψ = ∫φ(T′ψ) for all φ, ψ ∈ C₀^∞(Ω), then given u ∈ D′(Ω), define Tu by ⟨Tu, φ⟩ := ⟨u, T′φ⟩. Note that T′ only has to be defined over smooth functions.
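The transpose identity behind the distributional derivative is just integration by parts: ∫ u′φ dx = −∫ uφ′ dx when φ vanishes at the boundary. A quick numerical check (a sketch with a hypothetical smooth u and test function φ):

```python
import numpy as np

# int u' phi dx = - int u phi' dx when phi vanishes at the boundary.
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]

u = np.exp(x)                          # hypothetical smooth u (here u' = u)
du = np.exp(x)
phi = (x * (1.0 - x)) ** 2             # test function vanishing at 0 and 1
dphi = 2.0 * x * (1.0 - x) ** 2 - 2.0 * x**2 * (1.0 - x)

lhs = np.sum(du * phi) * dx
rhs = -np.sum(u * dphi) * dx
assert abs(lhs - rhs) < 1e-6           # boundary terms vanish, so the sides agree
```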
In ⟨u, φ⟩, the distribution u is paired with the test function φ; this is a dual pairing. We force our test functions to be limited but our distributions can be very general; in our example, we pair [H₀¹(0, 1)]′ with H₀¹(0, 1).
We can define the linear partial differential operator L := Σ_{|α|≤k} a_α ∂^α, where a_α ∈ C^∞ and k is the order of the operator. Then, note: ∫(Lφ)ψ = Σ_{|α|≤k} ∫ φ (−1)^{|α|} ∂^α(a_α ψ), which suggests that ⟨Lu, φ⟩ = ⟨u, L′φ⟩ with L′ψ := Σ_{|α|≤k} (−1)^{|α|} ∂^α(a_α ψ), which is defined for every distribution.
We can also define the translation operator τx by (τxφ)(y) := φ(x + y). Note: ∫(τxφ)ψ = ∫φ(τ₋ₓψ) by a change of variables, so ⟨τxu, φ⟩ := ⟨u, τ₋ₓφ⟩.
Returning to δ, we will try to solve Lu = δ, where L is a partial differential operator. The solution u is called a fundamental solution of the PDE.
For test functions φ, ψ, define the convolution to be (φ ∗ ψ)(x) := ∫φ(x − y)ψ(y) dy = ∫φ(y)ψ(x − y) dy =: (ψ ∗ φ)(x). Also define the reflection operator (Rφ)(x) := φ(−x). Note: (φ ∗ ψ)(x) = ∫φ(y)(Rτxψ)(y) dy, by noting that Rτx = τ₋ₓR. Thus, we can define the convolution of a distribution with a test function by (u ∗ φ)(x) := ⟨u, Rτxφ⟩.
This provides us with a point-wise definition for u ∗ φ, or an alternate way to define distributions, by considering the limit as φ becomes localized. In this case, (δ ∗ φ)(x) = φ(x), that is, δ is the identity for convolution: δ ∗ φ = φ. Now, to solve Lv = f once we have solved Lu = δ, we can take v = u ∗ f, so Lv = (Lu) ∗ f = δ ∗ f = f.
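A discrete analogue of the identity δ ∗ φ = φ (a sketch using NumPy's convolve, with a unit impulse playing the role of the discrete delta):

```python
import numpy as np

phi = np.array([1.0, 2.0, 3.0, 4.0])
delta = np.array([1.0])               # unit impulse: discrete delta
# Convolving with the discrete delta returns the signal unchanged.
out = np.convolve(delta, phi)
assert np.allclose(out, phi)
```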
Index
B(A), 1
B(X, Y ), 15
C(X, Y ), 28
C[a, b], 1
C 1 [a, b], 18
C^k(Ω), 39
C₀¹[a, b], 18
C₀^∞(Ω), 39
D′(Ω), 40
D_K, 39
H1 [a, b], 18
L2 [a, b], 9
ℓ^∞, 1
ℓ^p, 2
absolutely convergent, 9
accumulation point, 3
adjoint, 25
algebraic dual, 14
algebraically reflexive, 15
Banach fixed-point theorem, 4
Banach space, 9
Bessel's inequality, 20
bilinear form, 22
bounded linear functional , 14
bounded linear operator (BLO), 13
bounded sequence, 3
canonical embedding, 15
Cauchy sequence, 3
Cauchy-Schwarz inequality, 2
closed set, 2
closure, 3
coercivity, 23
compact operators, 28
compactness, 10
complete (space), 3
completion, 4
continuous (map), 3
contraction mapping, 4
convergent sequence, 3
convex set, 19
convolution, 41
dense, 3
discrete metric, 1
distribution, 40
eigenvalue, 30
equivalent norm, 10
finite-dimension, 8
fixed point, 4
Fourier coefficients, 20
Fredholm alternative, 32, 34
fundamental solution, 41
Gauss-Seidel iteration, 5
Gram-Schmidt Process, 21
Hölder's inequality, 2
Hahn-Banach theorem, 25
Hilbert space, 18
dual space, 15
induced norm, 9
inner product, 17
inner product space, 17
integral equations, 6
interior, 2
interior point, 2
Jacobi iteration, 5
linear functional, 14
linear operator, 12
linear partial differential operator, 41
metric space, 1
Minkowskis inequality, 2
multi-index, 39
Neumann series, 31
norm, 9
normed vector space (NVS), 9
null space, 12
open set, 2
operator norm, 13
orthogonality, 19
orthonormal set, 20
Picard's theorem, 5
Poincaré's inequality, 36
reflection operator, 41
resolvent operator, 30
resolvent set, 30
Riesz representation theorem, 21
Schauder basis, 9
self-adjoint, 25
self-adjoint operator, 33
separable, 3
sesquilinear form, 23
Sobolev space, 18
spectral radius, 32
Spectral theorem, 33
spectrum, 31
Sturm-Liouville problem, 34
subspace, 8
support, 39
test functions, 39
topology, 2
total orthonormal set, 20
translation operator, 41
Uniform boundedness theorem, 26
vector space, 8
weak convergence, 27
weak* convergence, 27