
Notes on Analysis

Dr. Deepesh K. P.

September 12, 2024


Chapter 1

From Vector spaces to Normed linear spaces

The whole of mathematics is developed from sets. One can proceed to study mathematics from sets by forming collections of subsets of a given set and developing theories on them (Topology and the σ-algebras of measure theory). Another way is to introduce operations on a given set. If we study a set by introducing one suitable operation between its elements, we are led to Group theory. If more than one operation is considered between elements of the given set, we can think of developing Ring theory. If the second operation is performed between an element of the set and a suitable collection of scalars, then it leads to Linear algebra, which is built upon the vector space structure.

In functional analysis, the basic building materials are vector spaces. First let us recollect some of the basic ideas from Linear algebra. From then on we will develop all concepts and try to be self-contained as far as possible (except for some basic Real analysis and Topology concepts).

Definition 1.0.1. A non-empty set V together with two operations, namely the vector addition ‘+’ and the scalar multiplication ‘.’, is called a vector space if it satisfies the following properties (here we assume that u, v, w are arbitrary elements of V and α, β are arbitrary real numbers):

1. u + v ∈ V

2. u + v = v + u

3. u + (v + w) = (u + v) + w

4. there is an element 0 in V such that 0 + u = u

5. there is an element −u in V such that u + (−u) = 0


6. α.u ∈ V

7. (α + β).u = α.u + β.u

8. α.(u + v) = α.u + α.v

9. (α β).u = α (β.u) = β (α.u)

10. 1.u = u

The elements of a vector space V are called vectors and the numbers α, β, . . . are called scalars. For the purpose of discussing functional analysis, we consider the scalars to be either from the set of all real numbers R or from the set of all complex numbers C. The scalar field is denoted by K in this discussion (this means K represents either R or C). A set should satisfy all the 10 properties of the operations if it is to be called a vector space. If any one of these 10 properties is not satisfied, the set cannot be a vector space.

An immediate way to get more vector spaces is to look inside the vector space and search for suitable

subsets called subspaces.

Definition 1.0.2. Let V be a vector space. A subset W of V is said to be a subspace if W itself is a

vector space under the induced operations.

This means that a vector space sitting inside a bigger vector space is called a subspace. Of course,

the operations in the subspace should be the same as that of the bigger space.

It is easy to observe that a subset qualifies to be a subspace of the vector space if it satisfies properties (1) and (6) of the definition of a vector space. That is, a subset W ⊆ V is a subspace if and only if w1 + w2 ∈ W and αw ∈ W for every w, w1 , w2 ∈ W and α ∈ K.

Now we give an example of a very general and useful vector space.

Example 1.0.3. Let S be any set. Let F(S, K) denote the collection of all functions from S to K. That is,

F(S, K) = {f : S → K is a function}.

We introduce two operations here

(f + g)(s) = f (s) + g(s), (αf )(s) = αf (s)

for f, g ∈ F(S, K) and α ∈ K. Then, since the images of the functions lie in R or C, and these spaces have the associative, commutative, distributive and all other required properties, we can show that F(S, K) under the two operations forms a vector space. Note that when S = {1, 2, . . . , n}, we get the collection of all n-tuples; when S = N, we get the collection of all real/complex sequences; and when S is an interval like [a, b] in R, we get the standard function spaces, such as the polynomials P[a, b] and the continuous functions C[a, b], as subspaces sitting inside this vector space.

In a vector space, from a given set of elements, we can create new (dependent) elements by means of the two operations.

Definition 1.0.4. Let V be a vector space, u1 , u2 , . . . , un ∈ V and α1 , α2 , . . . , αn ∈ K. Then the vector

u = α1 u1 + α2 u2 + . . . + αn un

is called a linear combination of the vectors u1 , u2 , . . . , un . Note that u ∈ V . The collection of all possible

finite linear combinations of elements of a set S is called the linear span of the set, denoted as span(S).

We can also consider span of infinite sets.

Definition 1.0.5. If E ⊂ V is an infinite set then the span of E is the collection of all finite linear

combinations of elements of E. This means

Span(E) = { all linear combinations of finite subsets of E}.

We say that a subset of V is linearly dependent if at least one of the elements can be constructed

using the other elements as linear combinations.

Definition 1.0.6. A set {u1 , u2 , . . . , un } ⊆ V is said to be linearly dependent if

uj = α1 u1 + . . . + αj−1 uj−1 + αj+1 uj+1 + . . . + αn un

for at least one uj and some scalars αi .

The following is a simple characterization of linearly dependent sets.

Theorem 1.0.7. A collection of vectors {u1 , u2 , . . . , un } in V is linearly dependent if and only if there

exist scalars α1 , α2 , . . . , αn , not all zeros, such that

α1 u1 + α2 u2 + . . . + αn un = 0.

Sets which are not linearly dependent are called linearly independent. They can also be defined directly, as in the following.



Definition 1.0.8. A set of vectors u1 , u2 , . . . , un in a vector space V is said to be linearly independent

if and only if

α1 u1 + α2 u2 + . . . + αn un = 0 ⇒ α1 = 0, α2 = 0, . . . , αn = 0

An infinite set E ⊂ V is said to be linearly independent if each finite subset of E is linearly independent.

There are some subsets of V which can span the entire vector space (i.e., create the entire vector space by means of the two operations).

Definition 1.0.9. A subset S of a vector space V is said to be a spanning set if every element of V can

be written as a finite linear combination of elements of S. In other words, span(S) = V .

Note that this S can be very large, and any superset of a spanning set is again a spanning set. Note also that spanning sets may contain linearly dependent elements, which are actually redundant (as some of them can be created using the others). So, by removing the linearly dependent elements from a spanning set, one can get a "basis".

Definition 1.0.10. A linearly independent subset which spans the vector space is called a basis (or Hamel

basis) of the vector space. That is, a collection of vectors B is a basis of V if and only if it is linearly

independent and it is a spanning set.

Every vector of V can be written as a linear combination of elements of a spanning set. When the

spanning set is a basis, this expression turns out to be unique.

Theorem 1.0.11. Let B be a basis for a vector space V . Then for each v ∈ V , there exists a unique

collection of vectors v1 , v2 , . . . , vn from B and scalars c1 , c2 , . . . , cn such that

v = c1 v1 + c2 v2 + . . . + cn vn .

In many cases, we identify the vector v with the scalar tuple (c1 , c2 , . . . , cn ) in linear algebra. Note that infinite dimensional spaces can also have good bases. For example, {1, x, x2 , . . .} forms a countable basis for P, the collection of all polynomials on R, and the collection of vectors {e1 , e2 , . . .} with

en = (0, 0, . . . , 0, 1, 0, . . .),

where 1 appears in the nth coordinate, for each n ∈ N, forms a countable basis for c00 , the space of all

sequences which are vanishing after a finite number of terms. We will see details of this space later.

Even though there can be infinitely many different bases for a vector space, all of them have a unique

special number associated with them.



Theorem 1.0.12. The cardinality of each basis of a vector space will be the same. This number is called

the dimension of the vector space.

A basic question one may ask is why we need further structure on a vector space when it already has operations on it. Note that Linear algebra mostly deals with finite dimensional vector spaces (of course, infinite dimensional spaces can also be discussed with the two operations) and linear transformations between them, which can be represented by m × n matrices (which is not possible, in general, when you have an infinite dimensional vector space). Further, even though we can talk about linear combinations in a vector space, infinite linear combinations cannot be treated in a vector space structure if we do not have a concept of convergence of sequences. To talk about convergence of sequences you need a metric structure or, at least, a topological structure on the vector space.

Even though we can talk about linear transformations, which are particular functions between vector spaces, we cannot talk about the continuity of such functions in the setting of linear algebra. Further, to talk about closedness of sets or to discuss completeness properties, we need a metric to be defined on the vector space. Some of the other interesting concepts that one would like to discuss for vectors in a vector space are the length of vectors and the geometry of vectors. For dealing with such things, one can introduce the following structures:

i. Topological vector spaces

ii. Metric on vector spaces

iii. Normed vector spaces

iv. Inner product spaces

We will see that an inner product induces a norm, a norm induces a metric, and a metric induces a topology on a vector space. Hence the concept of topological vector spaces is the most general among these. But in our study of Functional analysis we start with the in-between concept, the concept of normed linear spaces, as the theory is very strong and considerably general in this setting, which allows us to move back and forth when necessary.

We begin with the definition of a norm on a vector space.

Definition 1.0.13. A norm on a vector space X is a function ∥ · ∥ : X → R such that for any x, y ∈ X

and for all λ ∈ K,

1. ∥x∥ ≥ 0 and ∥x∥ = 0 ⇔ x = 0

2. ∥λx∥ = |λ|∥x∥

3. ∥x + y∥ ≤ ∥x∥ + ∥y∥

If a linear space X is equipped with a norm ∥ · ∥, we call (X, ∥ · ∥) a normed linear space.

1.1 Some simple examples of normed linear spaces

We have seen that F(S, K) is a vector space and when S = {1, 2, . . . , n}, we get Kn = {(x1 , x2 , . . . , xn ) :

xi ∈ K for i = 1, 2, . . . , n}. It is easy to see that Kn with

∥(x1 , x2 , . . . , xn )∥1 = |x1 | + |x2 | + . . . + |xn |

is a normed linear space. The property |a + b| ≤ |a| + |b| of complex numbers helps to verify the norm

properties. The same property is enough to show that

∥(x1 , x2 , . . . , xn )∥∞ = sup{|x1 |, |x2 |, . . . , |xn |}

is also a norm on Kn .
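As a quick illustration (not part of the original notes), the following Python sketch computes ∥·∥1 and ∥·∥∞ on Kn and spot-checks the triangle inequality for random complex vectors; it assumes numpy, and the function names are our own.

import numpy as np

def norm1(x):
    # ||x||_1 = |x_1| + ... + |x_n|
    return np.sum(np.abs(x))

def norm_inf(x):
    # ||x||_inf = max{|x_1|, ..., |x_n|}
    return np.max(np.abs(x))

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=4) + 1j * rng.normal(size=4)
    y = rng.normal(size=4) + 1j * rng.normal(size=4)
    # triangle inequality for both norms
    assert norm1(x + y) <= norm1(x) + norm1(y) + 1e-12
    assert norm_inf(x + y) <= norm_inf(x) + norm_inf(y) + 1e-12
print("triangle inequality verified on random samples")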

When S = N, F(S, K) becomes the collection of all scalar sequences. It is not difficult to see that the previous norms, when extended, give norms on certain subspaces of F(S, K). That is, the space

ℓ1(N) = {(an) : an ∈ K and Σ_{n=1}^{∞} |an| < ∞}

is a vector subspace of F(N, K) and

∥(an)∥1 = Σ_{n=1}^{∞} |an|

forms a norm on ℓ1(N) (we write this space as ℓ1). Again, the collection

ℓ∞(N) = {(an) : an ∈ K and sup_{n∈N} |an| < ∞}

of all bounded sequences is a subspace of F(N, K) and

∥(an)∥∞ = sup_{n∈N} |an|

forms a norm on ℓ∞(N) (we write this space as ℓ∞).



1.2 Some properties of norm

We define certain relaxed versions of norms in the following.

Definition 1.2.1. If we modify the third condition in the Definition 1.0.13 by

∥x + y∥ ≤ C(∥x∥ + ∥y∥)

for some C > 0, for all x, y ∈ X, then the quantity ∥ · ∥ is called a quasinorm on X.

Definition 1.2.2. If the condition "∥x∥ = 0 ⇒ x = 0" is omitted from Definition 1.0.13, then the quantity ∥ · ∥ is called a seminorm on X.

Note that ∥(x1 , x2 )∥ = |x1 | for (x1 , x2 ) ∈ K2 is a seminorm on K2 .

We discuss some of the properties of norm here. These can be easily proved. First we give a refined

version of triangle inequality:

Theorem 1.2.3. For any x, y ∈ X, we have |∥x∥ − ∥y∥| ≤ ∥x − y∥

Proof. This follows from the triangle inequality applied to ∥x∥ = ∥(x − y) + y∥ and, symmetrically, to ∥y∥ = ∥(y − x) + x∥.

This inequality immediately gives the continuity of the norm function.

Theorem 1.2.4. The norm function ∥ · ∥ : X → R is a continuous function.

Proof. Use the fact |∥x∥ − ∥y∥| ≤ ∥x − y∥. So if we take f : X → R by f (x) = ∥x∥ for x ∈ X, for any

ϵ > 0 we can choose δ = ϵ such that |f (x) − f (y)| < ϵ whenever ∥x − y∥ < δ.

It is easy to see that every norm induces a metric on a normed linear space.

Theorem 1.2.5. On a normed linear space (X, ∥ · ∥), if we define

d(x, y) = ∥x − y∥,

then d is a metric on X.

Proof. Verify the metric properties.

Thus every normed linear space is a metric space, and therefore a topological space also. What are

the open sets in this topology?

Definition 1.2.6. Let X be a normed linear space and x0 ∈ X. Then the open ball with center at x0

and a positive radius r is denoted as B(x0 , r). That is

B(x0 , r) = {y ∈ X : d(x0 , y) < r} = {y ∈ X : ∥x0 − y∥ < r}.



We call B(0, 1) the open unit ball of X. The collection of all open balls forms a basis for the metric

topology, which is the topology on X. As in the case of a metric space, the convergence of a sequence is

defined as in the following:

Definition 1.2.7. We say that a sequence (xn ) in X converges to x in X if ∥xn − x∥ → 0 as n → ∞.

The convexity of the open/closed balls is very easy to observe.

Theorem 1.2.8. The open/closed balls in X are convex sets.

Proof. Take x, y ∈ B(x0 , r) and show that for any 0 < t < 1, tx + (1 − t)y ∈ B(x0 , r).

Subspaces of a normed linear space are themselves normed linear spaces.

Theorem 1.2.9. Suppose (X, ∥ · ∥) is a normed linear space and Y is a subspace of X. Then (Y, ∥ · ∥) is a normed linear space.

Proof. All elements of Y are in X; hence all the norm properties are inherited.

We give some more examples of normed linear spaces here.

Definition 1.2.10. Consider the following subsets of the sequence space F(N, K).

1. c00 = {(an ) : there exists some N0 such that an = 0 ∀ n > N0 }

2. c0 = {(an ) : an → 0 as n → ∞}

3. c = {(an ) : {an } is convergent}

Theorem 1.2.11. The spaces c, c0 , c00 with ∥ · ∥∞ are normed linear spaces.

Proof. In view of Theorem 1.2.9, it is enough to show that these are subspaces of ℓ∞ .

1.3 The standard normed linear spaces

There are some standard spaces used in functional analysis to develop the useful structures and theory

and also to provide examples for illustrating/disproving results. We will see these spaces in this section.

1.4 Kn and ℓp with ∥ · ∥p , 1 ≤ p ≤ ∞

We have already seen that (Kn , ∥ · ∥1 ) and (Kn , ∥ · ∥∞ ) are normed linear spaces. We can define certain other norms on this space which are useful and give more examples of normed linear spaces.

We define the pth norm on Kn , for 1 ≤ p < ∞, by

∥(x1 , x2 , . . . , xn )∥p = (|x1|^p + |x2|^p + . . . + |xn|^p)^{1/p} = (Σ_{i=1}^{n} |xi|^p)^{1/p}.

We will show that ∥ · ∥p is a norm on Kn . For this we need some famous inequalities.

Young’s inequality

This is a generalization of the AM-GM inequality. From (√a − √b)^2 ≥ 0, we get (a + b)/2 ≥ √(ab). We can re-write this as

a^2/2 + b^2/2 ≥ ab

by taking a^2 and b^2 in place of a and b. A generalized version of this is called Young's inequality.

Theorem 1.4.1. (Young's inequality) Let a, b ≥ 0, 1 < p < ∞ and 1 < q < ∞ be such that 1/p + 1/q = 1. Then

a^p/p + b^q/q ≥ ab.

Proof. Instead, we will show that a/b^{q−1} − a^p/(p b^q) − 1/q ≤ 0 (obtained just by dividing by b^q throughout), assuming b ≠ 0, since if b = 0 the inequality is trivial. Define a function f : [0, ∞) → R by f(t) = t^{1/p} − t/p − 1/q and take t = a^p/b^q. Now show that f is increasing to the left of 1 and decreasing to the right of 1, so that the maximum of f is at t = 1, where f(1) = 0.
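A hedged numerical sketch of the argument above (our own illustration, assuming numpy): we tabulate f(t) = t^{1/p} − t/p − 1/q on a grid, confirm that its maximum is approximately 0 and attained near t = 1, and check Young's inequality for random a, b.

import numpy as np

p = 1.7
q = p / (p - 1.0)             # conjugate exponent, 1/p + 1/q = 1

t = np.linspace(1e-6, 10.0, 100000)
f = t**(1.0/p) - t/p - 1.0/q  # the auxiliary function from the proof
print("max of f:", f.max(), "attained at t =", t[f.argmax()])  # approx 0 at t approx 1

rng = np.random.default_rng(1)
a, b = rng.uniform(0, 5, 1000), rng.uniform(0, 5, 1000)
assert np.all(a**p / p + b**q / q >= a * b - 1e-12)  # Young's inequality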

This inequality leads to the famous Hölder's inequality, which is very useful in proving the triangle inequality in many cases.

Theorem 1.4.2. (Hölder's inequality for n-tuples)
Let (an)_{1}^{n}, (bn)_{1}^{n} be n-tuples in Kn and let p, q be a Hölder pair (1/p + 1/q = 1). Then

Σ_{i=1}^{n} |ai bi| ≤ (Σ_{i=1}^{n} |ai|^p)^{1/p} (Σ_{i=1}^{n} |bi|^q)^{1/q}.

This result can be extended for sequences also.

Theorem 1.4.3. (Hölder's inequality for sequences)
Let (an)_{1}^{∞}, (bn)_{1}^{∞} be sequences in K and let p, q be a Hölder pair. Then

Σ_{i=1}^{∞} |ai bi| ≤ (Σ_{i=1}^{∞} |ai|^p)^{1/p} (Σ_{i=1}^{∞} |bi|^q)^{1/q}.

Proof: If the right-hand side is infinite, there is nothing to show. Otherwise, apply the finite case and note that the partial sums of the left-hand side are bounded by the right-hand side. Note that for p = 1, if we take q = ∞, the same result can be shown.

Once we have Hölder's inequality, we can get Minkowski's inequality, which is nothing but the triangle inequality for ∥ · ∥p .

Theorem 1.4.4. (Minkowski's inequality for n-tuples)
Let (an)_{1}^{n}, (bn)_{1}^{n} be n-tuples in Kn and 1 < p < ∞. Then

(Σ_{i=1}^{n} |ai + bi|^p)^{1/p} ≤ (Σ_{i=1}^{n} |ai|^p)^{1/p} + (Σ_{i=1}^{n} |bi|^p)^{1/p}.

This result can be extended for sequences also.

Theorem 1.4.5. (Minkowski's inequality for sequences)
Let (an), (bn) be sequences in K and 1 < p < ∞. Then

(Σ_{i=1}^{∞} |ai + bi|^p)^{1/p} ≤ (Σ_{i=1}^{∞} |ai|^p)^{1/p} + (Σ_{i=1}^{∞} |bi|^p)^{1/p}.
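The two inequalities can be sanity-checked numerically; a minimal sketch (our own, assuming numpy), using random complex n-tuples:

import numpy as np

rng = np.random.default_rng(2)
p = 3.0
q = p / (p - 1.0)                      # Hölder conjugate of p

def lp(x, p):
    # the p-norm (sum |x_i|^p)^(1/p)
    return np.sum(np.abs(x)**p)**(1.0/p)

for _ in range(1000):
    a = rng.normal(size=6) + 1j*rng.normal(size=6)
    b = rng.normal(size=6) + 1j*rng.normal(size=6)
    # Hölder: sum |a_i b_i| <= ||a||_p ||b||_q
    assert np.sum(np.abs(a*b)) <= lp(a, p)*lp(b, q) + 1e-10
    # Minkowski: ||a + b||_p <= ||a||_p + ||b||_p
    assert lp(a + b, p) <= lp(a, p) + lp(b, p) + 1e-10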

These results help us to prove the norm properties for some of the standard spaces.

Theorem 1.4.6. For 1 < p < ∞, ∥ · ∥p on Kn is a norm.

Proof. The triangle inequality is Minkowski's inequality for n-tuples; the remaining norm properties are immediate.

Theorem 1.4.7. For 1 < p < ∞, ℓp is a vector space and ∥ · ∥p is a norm on ℓp .

Proof. Closure under addition and the triangle inequality follow from Minkowski's inequality for sequences; the other properties are easy.

Note that c00 , being a subspace of ℓp , is a normed linear space with ∥ · ∥p ; whereas ∥ · ∥p cannot be defined on all of c0 (a sequence converging to 0 need not be p-summable). Also, it is easy to see that each ℓp sits inside ℓ∞ .

Theorem 1.4.8. Suppose 1 ≤ p < q ≤ ∞. Then ∥ · ∥q ≤ ∥ · ∥p .

Proof. Show that ∥x∥p ≤ 1 ⇒ ∥x∥q ≤ 1. Then take x = y/∥y∥p for nonzero y.

Theorem 1.4.9. Suppose 1 ≤ p < q ≤ ∞. Then ℓp ⊂ ℓq . Show that the inclusion is proper.

Proof. For q = ∞, it follows from the fact that the terms of a p-summable sequence converge to 0 and hence are bounded. For 1 ≤ p < q < ∞, it follows from the previous theorem.

1.5 The function spaces

We know that F(S, K) is a vector space for any set S. In general, we cannot define a norm on this vector space. But there are useful cases and subspaces of this vector space, other than Kn and the sequence spaces, which also allow us to define a norm on them.

1.6 ℓ∞ (S, K) with ∥ · ∥∞

Note that the space of all bounded functions

ℓ∞(S) := ℓ∞(S, K) = {f : S → K such that f is a bounded function}

is a vector space. We can define

∥f∥∞ = sup{|f(t)| : t ∈ S}.

Then this is a norm (easy!) and it makes ℓ∞(S) a normed linear space. For this we do not need any additional structure on S. This is totally different from the Lp(E) spaces which are going to come soon.

1.7 The space of all continuous functions C[a, b]

Consider

C[a, b] = {f : [a, b] → K, f is continuous}

Since C[a, b] is a subspace of ℓ∞([a, b]), it follows that C[a, b] is a normed linear space with ∥ · ∥∞ . The same is true for P[a, b] and for C′[a, b], the space of all continuously differentiable functions, also.

Note that one can replace [a, b] with any compact metric space Ω also. This is denoted as C(Ω), which

is a normed linear space with ∥ · ∥∞ . We define another easy norm on C[a, b] by

∥f∥1 = ∫_a^b |f(t)| dt, for f ∈ C[a, b],

where the integral is the usual Riemann integral. It is easy to verify the norm properties.

Like in the case of sequences, we can generalize the 1−integral to p−integrals for 1 < p < ∞ and derive

new norms on C[a, b]. We will show that

∥f∥p = (∫_a^b |f(t)|^p dt)^{1/p}, for f ∈ C[a, b]

is a norm on C[a, b] for 1 < p < ∞. To show this we need Hölder's and Minkowski's inequalities for functions. The proofs are analogous to the sequence case.



Theorem 1.7.1. Let 1 < p < ∞ and q be such that 1/p + 1/q = 1. Then for any f, g ∈ C[a, b],

∫_a^b |f(t) g(t)| dt ≤ (∫_a^b |f(t)|^p dt)^{1/p} (∫_a^b |g(t)|^q dt)^{1/q}.

Using this we get the Minkowski’s inequality:

Theorem 1.7.2. Let 1 < p < ∞ and f, g ∈ C[a, b]. Then

(∫_a^b |f(t) + g(t)|^p dt)^{1/p} ≤ (∫_a^b |f(t)|^p dt)^{1/p} + (∫_a^b |g(t)|^p dt)^{1/p}.

This essentially gives the triangle inequality which can be used to prove that ∥ · ∥p is a norm on C[a, b].

Theorem 1.7.3. C[a, b] with ∥ · ∥p , 1 < p < ∞ is a normed linear space.

Theorem 1.7.4. The spaces P[a, b] and C′[a, b] with ∥ · ∥p , 1 < p < ∞, are normed linear spaces.
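As an illustration only (not from the notes), the integral p-norm on C[a, b] can be approximated by Riemann sums, and Minkowski's inequality for functions can be observed numerically; the grid size and sample functions below are arbitrary choices of ours.

import numpy as np

a, b, n = 0.0, 1.0, 100000
t = np.linspace(a, b, n)
dt = (b - a) / n

def norm_p(f, p):
    # Riemann-sum approximation of (int_a^b |f(t)|^p dt)^(1/p)
    return (np.sum(np.abs(f(t))**p) * dt)**(1.0/p)

f = lambda s: np.sin(2*np.pi*s)
g = lambda s: s**2
for p in [1.5, 2.0, 4.0]:
    lhs = (np.sum(np.abs(f(t) + g(t))**p) * dt)**(1.0/p)
    assert lhs <= norm_p(f, p) + norm_p(g, p) + 1e-8   # Minkowski for functions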

1.8 The Lp spaces

We have seen that C[a, b] is a normed linear space with ∥ · ∥p , 1 ≤ p ≤ ∞. But there are a lot of useful functions which are not continuous. Further, since the above p-norms use integrability, and integrable functions are more general in the sense that they do not need stringent conditions like continuity or differentiability, we define certain subclasses of the function space which can be defined using integrability. We use a bit of measure theory to do this.

1.9 Exercise Problems

1. Show that on a non-zero vectorspace V , ∥x∥ = 0 ∀ x ∈ V is a semi-norm.

2. Can you show ∥x∥ ≥ 0 for all x ∈ X from triangle inequality?

3. Show that ∥(x1 , x2 , . . . , xn )∥ = |x1 | is a semi-norm on Kn .

4. Show that every semi-norm on a vectorspace induces a norm in a suitable quotient space.

5. Show that the sum of two norms on a vectorspace V is a norm on V .

6. Show that in a semi-normed space, ∥0∥ = 0.

7. Show that |∥x∥ − ∥y∥| ≤ ∥x − y∥ is true in semi normed space also.

8. Show that if p > 0 and ∥.∥ is a norm on a vectorspace X, ∥x∥∗ = p∥x∥ is also a norm on X.

9. Show that the distance induced by a norm on a nonzero normed space can never be bounded.

10. Show that the distance of R defined by

d(x, y) = max{1, |x − y|}, x, y ∈ R

is not induced by any norm.


11. Show that on C′[0, 1], ∥f∥c = ∫_0^1 |f′(t)| dt is not a norm; but ∥f∥ = ∥f∥∞ + ∥f′∥∞ is a norm.

12. (Young's inequality)
Consider the Hölder pair p and q (1 < p < ∞ and 1/p + 1/q = 1). Show that for a, b ≥ 0, a^p/p + b^q/q ≥ ab.

13. Show Hölder's inequality for sequences for p = 1 and q = ∞.

14. Show that (Kn , ∥.∥p ) is a normed linear space for 1 < p < ∞.

15. Show that (ℓp , ∥.∥p ) is a normed linear space for 1 < p < ∞.

16. Can ∥ · ∥p , 1 ≤ p < ∞ be defined on c0 ? Justify.

17. If 1 ≤ p < q < ∞, show that ∥an ∥q ≤ ∥an ∥p , for any sequence (an ) in ℓp .

18. For 1 ≤ p < q ≤ ∞, show that ℓp ⊂ ℓq . Show that the containment is strict.
19. If 0 < p < 1, then ℓp is a quasi-normed linear space with ∥(an)∥p = (Σ_{n=1}^{∞} |an|^p)^{1/p}.
Hint: Use (a + b)^p ≤ a^p + b^p for 0 < p < 1 and the fact that f(x) = x^{1/p} is a convex function for such p.

20. Show that Lq (X) ⊆ Lp (X) if 1 ≤ p < q ≤ ∞, provided µ(X) < ∞.

21. Show that Lp(R) and Lq(R) are not comparable for 1 ≤ p < q ≤ ∞ (note that µ(R) = ∞).

22. Is C(a, b) a normed linear space with the norm ∥ · ∥∞ ?

23. Show that c00 is not a closed subspace of ℓ∞ .

24. Show that c0 is a closed subspace of ℓ∞ .

25. Show that if (X1 , ∥ · ∥1 ) and (X2 , ∥ · ∥2 ) are normed spaces, then the cartesian product X1 × X2 is

also a normed linear space with the norm ∥(x, y)∥ = ∥x∥1 + ∥y∥2 for (x, y) ∈ X1 × X2 .

26. Let V be the vectorspace of all continuous complex valued functions on J = [a, b]. Let X1 = (V, ∥ · ∥∞ ), where ∥x∥∞ = max_{t∈J} |x(t)|, and let X2 = (V, ∥ · ∥2 ), where

∥x∥2 = ⟨x, x⟩^{1/2} and ⟨x, y⟩ = ∫_a^b x(t) \overline{y(t)} dt.

Show that the identity mapping x → x of X1 onto X2 is continuous.

27. Orthonormalize the first three terms of the sequence (x0 , x1 , x2 , . . .), where xi(t) = t^i, on the interval [−1, 1], where

⟨x, y⟩ = ∫_{−1}^{1} x(t)y(t) dt.
Chapter 2

Inner Product Spaces

Suppose V is a vector space over K, where K stands for the set of all real or complex numbers. In this section we mainly deal with the following vector spaces:

a. The space Cn , the space of all n-tuples of complex numbers.

b. The space ℓ2 , the space of all 2-summable sequences.

c. The space Pn [a, b], the space of all complex polynomials of degree at most n with domain [a, b].

d. The space P[a, b], the space of all complex polynomials with domain [a, b].

e. The space C[a, b], the space of all continuous complex valued functions with domain [a, b].

f. The space L2 [a, b], the space of all 2-integrable (square-integrable) complex functions with domain [a, b].

An inner product on a vector space helps to introduce certain geometry to the vector space (like orthogonality). It can also be thought of as an extension of the concept of the dot product of 3-dimensional vectors (elements of R3) to general vector spaces.

Definition 2.0.1. A vector space V is said to be an inner product space if there is defined a function

⟨·, ·⟩ : V × V → K satisfying the following properties:

1. ⟨v, v⟩ ≥ 0 for all v ∈ V

2. ⟨v, v⟩ = 0 if and only if v = 0

3. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ for all u, v, w ∈ V .

4. ⟨αu, v⟩ = α⟨u, v⟩ for all u, v ∈ V and α ∈ K.

5. ⟨u, v⟩ = \overline{⟨v, u⟩} (the complex conjugate of ⟨v, u⟩) for all u, v ∈ V .


The above defined function ⟨·, ·⟩ on V is called an inner product on V (note that on all vector spaces, we may not be able to define an inner product). Properties 1 and 2 together are called positive definiteness and properties 3 and 4 together are called the linearity properties. Property 5 is called conjugate symmetry. Note that property 5 becomes ⟨u, v⟩ = ⟨v, u⟩ if the inner product is real.

We use the notation (X, ⟨·, ·⟩) for an inner product space (instead of V) and the elements of the space

are denoted by x, y, z, . . . (instead of u, v, w, . . .).

2.1 Some Examples

Example 2.1.1. 1. Take X = R and define ⟨x, y⟩ = xy, the usual multiplication of real numbers.

2. Take X = R3 and define ⟨(a1 , b1 , c1 ), (a2 , b2 , c2 )⟩ = a1 a2 + b1 b2 + c1 c2 , the dot product.

3. Take X = Kn and define ⟨(x1 , x2 , . . . , xn ), (y1 , y2 , . . . , yn )⟩ = Σ_{j=1}^{n} xj \overline{yj}

4. Take X = ℓ2 and define ⟨(x1 , x2 , . . .), (y1 , y2 , . . .)⟩ = Σ_{j=1}^{∞} xj \overline{yj}

5. Take X = Pn [a, b] and define ⟨p, q⟩ = ∫_a^b p(x) \overline{q(x)} dx

6. Take X = C[a, b] and define ⟨f, g⟩ = ∫_a^b f(x) \overline{g(x)} dx

7. Take X = L2 [a, b] and define ⟨f, g⟩ = ∫_a^b f(x) \overline{g(x)} dµ(x)

Here, as an illustration, we verify the inner product properties for example 6 only. Here

⟨f, g⟩ = ∫_a^b f(x) \overline{g(x)} dx.

a. ⟨f, f⟩ = ∫_a^b |f(x)|^2 dx ≥ 0 since the integral of a non-negative function over any subset of R is non-negative.

b. ⟨f, f⟩ = ∫_a^b |f(x)|^2 dx = 0 if and only if f = 0, the zero function.

c. ⟨f + h, g⟩ = ∫_a^b (f(x) + h(x)) \overline{g(x)} dx = ∫_a^b f(x) \overline{g(x)} dx + ∫_a^b h(x) \overline{g(x)} dx = ⟨f, g⟩ + ⟨h, g⟩

d. ⟨f, g⟩ = ∫_a^b f(x) \overline{g(x)} dx = \overline{∫_a^b g(x) \overline{f(x)} dx} = \overline{⟨g, f⟩}

2.2 Creating norm from inner product

We can see that the inner product induces a norm on the inner product space by

∥x∥ = √⟨x, x⟩, ∀x ∈ X.

But in order to show that this non-negative quantity gives a norm on the inner product space we need a result called the Schwarz inequality (also known as the Cauchy-Schwarz inequality).

Theorem 2.2.1. Suppose X is an inner product space and x, y ∈ X. Then these vectors satisfy the so-called Schwarz inequality:

|⟨x, y⟩| ≤ ∥x∥ ∥y∥

Proof. Start with ∥x − αy∥^2 = ⟨x − αy, x − αy⟩, which is a non-negative quantity. Now expand it to get ∥x∥^2 − 2 Re(⟨x, αy⟩) + |α|^2 ∥y∥^2 ≥ 0. Since this is true for any α ∈ C, choosing α = ⟨x, y⟩/∥y∥^2 for y ≠ 0, we get the conclusion. If y = 0, the inequality is trivially true as both sides are zero.
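A small numerical sketch (ours, assuming numpy) of the Schwarz inequality in Cn with the standard inner product ⟨x, y⟩ = Σ_j xj \overline{yj}:

import numpy as np

rng = np.random.default_rng(3)
for _ in range(1000):
    x = rng.normal(size=5) + 1j*rng.normal(size=5)
    y = rng.normal(size=5) + 1j*rng.normal(size=5)
    ip = np.vdot(y, x)            # numpy's vdot conjugates its first argument,
                                  # so this is sum_j x_j * conj(y_j)
    assert abs(ip) <= np.linalg.norm(x)*np.linalg.norm(y) + 1e-10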

Note 2.2.2. Note that the Schwarz inequality becomes Hölder's inequality (with p = 2) in the case of n-tuples, sequences or L2 functions. Once we have the Schwarz inequality, we can show that ∥x∥ defined above is actually a norm on the inner product space.

Note 2.2.3. Recall that in the 2 dimensional case,

⟨x, y⟩ = ∥x∥∥y∥ cos θ ⇒ ⟨x, y⟩ / (√⟨x, x⟩ √⟨y, y⟩) = cos θ

and |cos θ| ≤ 1 for all θ, resulting in |⟨x, y⟩| ≤ ∥x∥ ∥y∥. The Schwarz inequality says that this is always true, in any inner product space.

Theorem 2.2.4. An inner product on a vector space X always induces a norm on X.

Proof. As mentioned, define

∥x∥ = √⟨x, x⟩, ∀x ∈ X.

Except for the triangle inequality, the other properties follow easily from the properties of the inner product.

To show triangle inequality, use

∥x + y∥2 =⟨x + y, x + y⟩ = ∥x∥2 + 2 Re(⟨x, y⟩) + ∥y∥2

≤∥x∥2 + 2|⟨x, y⟩| + ∥y∥2

≤∥x∥2 + 2∥x∥∥y∥ + ∥y∥2 ( Using Theorem 2.2.1)

from which we get ∥x + y∥2 ≤ (∥x∥ + ∥y∥)2 , proving the result.

As we have seen, the norm is a continuous function. We now see that inner products are also continuous in each variable. That is, for a fixed y ∈ X, if we define f(x) = ⟨x, y⟩, then f : X → K is continuous.

Theorem 2.2.5. The inner product is continuous in the first and second variable. This means that if xn → x as n → ∞, then for each fixed y ∈ X, ⟨xn , y⟩ → ⟨x, y⟩ (and also ⟨y, xn ⟩ → ⟨y, x⟩).

Proof. Use the idea: |⟨xn , y⟩ − ⟨x, y⟩| = |⟨xn − x, y⟩| ≤ ∥xn − x∥ ∥y∥ by the Schwarz inequality.

The above norm is called the norm induced by the inner product. This norm induces a metric or

distance on the inner product space by

d(x, y) = ∥x − y∥ ∀x, y ∈ X,

giving it a metric structure. Thus every inner product space is a normed space and every normed space

is a metric space indeed.

2.3 Some properties of inner products

Note that there are metric spaces which are not normed spaces (e.g., the metric of a bounded metric space with more than one point can never be induced by a norm) and there are normed spaces which are not inner product spaces. For example, we can show that ℓp is an inner product space if and only if p = 2.

To show that the norm of a particular normed space is not induced by any inner product, we should

know the properties of the norms created from inner products. Such a property is the Parallelogram law.

Theorem 2.3.1. Suppose X is an inner product space and x, y ∈ X. Then these vectors satisfy the so

called parallelogram identity:

∥x + y∥2 + ∥x − y∥2 = 2(∥x∥2 + ∥y∥2 )



Proof. Start with ∥x + y∥2 + ∥x − y∥2 = ⟨x + y, x + y⟩ + ⟨x − y, x − y⟩ and cancel out the terms ⟨x, y⟩

and ⟨y, x⟩ on expansion.

This is called the parallelogram identity because, in the case of 2-dimensional vectors, if two vectors are represented as the two adjacent sides of a parallelogram, then the diagonals represent the sum and difference of the vectors. In a parallelogram it is true that the sum of the squares of the diagonal lengths is twice the sum of the squares of the two side lengths.

Theorem 2.3.2. (ℓp , ∥∥p ) is an inner product space if and only if p = 2.

In other words, among the norms ∥∥p , only the norm ∥∥2 is induced by an inner product.

Proof. We already know the inner product on ℓ2 induces the ∥∥2 . Now show that parallelogram identity

does not hold if p ̸= 2 by taking e1 and e2 .

Note that the same is true for (Cn , ∥∥p ) using the same technique.
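The proof idea can be seen numerically: with x = e1 and y = e2, the parallelogram identity holds for ∥·∥2 but fails for other values of p. A hedged sketch of ours, with finitely supported vectors standing in for elements of ℓp:

import numpy as np

def lp(x, p):
    return np.sum(np.abs(x)**p)**(1.0/p)

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
for p in [1.0, 2.0, 3.0]:
    lhs = lp(e1 + e2, p)**2 + lp(e1 - e2, p)**2
    rhs = 2*(lp(e1, p)**2 + lp(e2, p)**2)
    print(p, lhs, rhs)   # lhs = 2*2^(2/p) and rhs = 4, equal only when p = 2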

We know that from an inner product, a norm arises and the expression for the norm in terms of the
p
inner product is ∥x∥ = ⟨x, x⟩. A natural question is, if we know that a norm is induced from an inner

product, can we express the inner product in terms of the norm?

Theorem 2.3.3. (Polarization identity) Suppose (X, ⟨·, ·⟩) is an inner product space and let ∥ · ∥ be the induced norm. Then

⟨x, y⟩ = (1/4) [ ∥x + y∥^2 − ∥x − y∥^2 + i(∥x + iy∥^2 − ∥x − iy∥^2) ]

and the imaginary part is not required if the space is a real inner product space.

Proof. Expand the RHS.
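A numerical sketch (our own illustration, assuming numpy) of the polarization identity in Cn, recovering ⟨x, y⟩ from the induced norm:

import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=4) + 1j*rng.normal(size=4)
y = rng.normal(size=4) + 1j*rng.normal(size=4)

ip = np.vdot(y, x)                     # <x, y> = sum_j x_j conj(y_j)
nrm = np.linalg.norm                   # the induced norm ||z|| = sqrt(<z, z>)
polar = 0.25*(nrm(x + y)**2 - nrm(x - y)**2
              + 1j*(nrm(x + 1j*y)**2 - nrm(x - 1j*y)**2))
assert abs(ip - polar) < 1e-10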

Note 2.3.4. An aside: suppose we have a normed linear space (X, ∥ · ∥). Is there any criterion to check whether this norm is induced by some inner product? The classical result of Jordan and von Neumann says that a norm is induced by an inner product if and only if the norm satisfies the parallelogram identity. In that case the polarization identity helps us to find the inner product from the norm.

2.4 Orthogonal and orthonormal sets

Definition 2.4.1. We say that two elements x, y ∈ X are orthogonal to each other if ⟨x, y⟩ = 0. We say that a set E of vectors in X is an orthogonal set if ⟨xα , xβ ⟩ = 0 for all xα , xβ in E with xα ≠ xβ .

Example 2.4.2. The vectors (1, 0, 1) and (1, 0, −1) are orthogonal in C3 , and sin x and cos x are orthogonal in C[−1, 1]. The set {(1, 0, 0, . . .), (0, 1, 0, . . .), (0, 0, 1, . . .), . . .} is an orthogonal set in ℓ2 .

Definition 2.4.3. We say that a set E in X is an orthonormal set if it is an orthogonal set and all vectors of E are of unit length. That is,

⟨xα , xβ ⟩ = 0 if xα ≠ xβ , and ⟨xα , xβ ⟩ = 1 if xα = xβ .


Theorem 2.4.4. Any two different elements of an orthonormal set are at a distance of √2 from each other.

Proof. Use ∥x − y∥^2 = ⟨x − y, x − y⟩ = 2, so ∥x − y∥ = √2.

Example 2.4.5. The vectors (1, 0, 1) and (1, 0, −1) are orthogonal in C3 but not orthonormal ; Same

is the case with sin x and cos x in C[−1, 1]. The set {(1, 0, 0, . . .), (0, 1, 0, . . .), (0, 0, 1, . . .), . . .} is an

orthonormal set in ℓ2 .

Note 2.4.6. Note that if E = {xα }α∈A is an orthogonal set (where A is some index set) which does not contain 0, then {xα /∥xα ∥ : α ∈ A} is an orthonormal set in X.

Another geometrical aspect of inner product spaces is that vectors which are orthogonal to each other behave like 3-dimensional vectors and satisfy the "Pythagoras theorem".

Theorem 2.4.7. Suppose X is an inner product space and x, y are orthogonal in X. Then these vectors satisfy the so-called Pythagoras theorem: ∥x + y∥^2 = ∥x∥^2 + ∥y∥^2 .

Proof. Start with ∥x + y∥^2 = ⟨x + y, x + y⟩ and use ⟨x, y⟩ = ⟨y, x⟩ = 0 on expansion.

Note 2.4.8. This result can be extended to n vectors. That is if x1 , x2 , . . . , xn are orthogonal in an inner

product space X, then

∥x1 + x2 + . . . + xn ∥2 = ∥x1 ∥2 + ∥x2 ∥2 + . . . + ∥xn ∥2 .

What is the connection between orthogonal sets and linearly independent sets?

Theorem 2.4.9. Every orthonormal set is linearly independent.

Proof. Use the Pythagoras theorem to show this!

But linearly independent sets need not be orthonormal (or even orthogonal)! However, we can show that from every sequence of linearly independent vectors, we can generate an orthonormal set.

Theorem 2.4.10. Suppose {y1 , y2 , . . .} is a linearly independent set in X. Then by the Gram-Schmidt orthonormalization process:

u1 = y1
u2 = y2 − (⟨y2 , u1 ⟩/∥u1 ∥^2) u1
u3 = y3 − (⟨y3 , u1 ⟩/∥u1 ∥^2) u1 − (⟨y3 , u2 ⟩/∥u2 ∥^2) u2
......
un = yn − Σ_{i=1}^{n−1} (⟨yn , ui ⟩/∥ui ∥^2) ui
......

one can create the set {u1 , u2 , u3 , . . .}, which will be an orthogonal set, and the set {u1 /∥u1 ∥, u2 /∥u2 ∥, u3 /∥u3 ∥, . . .} will be an orthonormal set in X. Further, the spans of {y1 , y2 , . . . , yn } and {u1 , u2 , . . . , un } are equal for each n ∈ N.

Proof. It is a verification. For the last part, observe that y1 , y2 , . . . , yn ∈ span{u1 , u2 , . . . , un } and

conversely u1 , u2 , . . . , un ∈ span {y1 , y2 , . . . , yn }
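A minimal Python sketch (ours, assuming numpy) of the Gram-Schmidt process above for vectors in Cn with the standard inner product; it follows the formulas of the theorem directly.

import numpy as np

def gram_schmidt(ys):
    """Return the orthogonal set us and the orthonormal set es built from ys."""
    us = []
    for y in ys:
        u = y.astype(complex)
        for ui in us:
            # subtract the component of y along ui: (<y, ui>/||ui||^2) ui
            u = u - (np.vdot(ui, y) / np.vdot(ui, ui)) * ui
        us.append(u)
    es = [u / np.linalg.norm(u) for u in us]
    return us, es

ys = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
us, es = gram_schmidt(ys)
G = np.array([[np.vdot(a, b) for b in es] for a in es])
print(np.allclose(G, np.eye(3)))   # True: es is an orthonormal set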

We have seen that two elements x, y are orthogonal (x ⊥ y) if ⟨x, y⟩ = 0. In the following we define

the orthogonal complement of a set.

Definition 2.4.11. For a subset A of an inner product space X, the orthogonal complement (or annihilator) of A is denoted by A⊥ and is defined as the set of all those elements which are orthogonal to every element of A. That is,

A⊥ = {x ∈ X : ⟨x, y⟩ = 0 for all y ∈ A}.

It can be seen that from each non empty subset A, we can create a closed subspace in the inner

product space.

Theorem 2.4.12. Let A be a subset of an inner product space X. Then A⊥ is a closed subspace of X.

Proof. Show that sums and scalar multiples stay inside A⊥ to see that it is a subspace. To show closedness, take a sequence (xn ) in A⊥ which converges to some x in X. To show that x ∈ A⊥ , use the fact that the inner product is a continuous function in one variable.



2.5 Problems

1. Show that ⟨x, 0⟩ = 0 for any x ∈ X.

2. Show that ⟨x, αy⟩ = \overline{α}⟨x, y⟩.

3. Show that ⟨x, y⟩ − ⟨z, y⟩ = ⟨x − z, y⟩.

4. Show that if ⟨x, u⟩ = ⟨x, v⟩ for all x ∈ X, then u = v.

Hint: Take ⟨x, u − v⟩ = 0 for all x, and make the choice x = u − v.

5. Verify the Schwarz inequality for x = (2, 1, 0) and y = (1, 3, −1) in R3 .

6. Show that equality holds in the Schwarz inequality iff x and y are dependent.

Hint: ∥x − αy∥^2 = ∥x∥^2 − |⟨x, y⟩|^2/∥y∥^2 for the choice α = ⟨x, y⟩/∥y∥^2 .

7. Show that in a real inner product space, ∥x + y∥2 = ∥x∥2 + ∥y∥2 if and only if x ⊥ y.

Hint: One side is true for every inner product space (Pythagoras theorem). For the other side,

expand ∥x + y∥2 = ∥x∥2 + ∥y∥2 using inner product and cancel out ⟨x, x⟩ and ⟨y, y⟩. Now use

⟨x, y⟩ = ⟨y, x⟩ being real numbers, resulting in ⟨x, y⟩ = 0.

8. Show that if X is a real inner product space, ∥x∥ = ∥y∥ implies ⟨x + y, x − y⟩ = 0.

Hint: Assume ∥x∥2 = ∥y∥2 . Expand ⟨x + y, x − y⟩ and cancel out ∥x∥2 and ∥y∥2 . In the remaining

terms use ⟨x, y⟩ = ⟨y, x⟩ to get 0.

9. Show that ∥x + αy∥ = ∥x − αy∥ for all α ∈ K if and only if ⟨x, y⟩ = 0.

Hint: Squaring gives ⟨x + αy, x + αy⟩ = ⟨x − αy, x − αy⟩. Expand and cancel the terms ⟨x, x⟩ and |α|^2 ⟨y, y⟩, resulting in 2{α⟨y, x⟩ + \overline{α}⟨x, y⟩} = 0, that is, Re(α⟨y, x⟩) = 0. Choose α = 1 and then α = i to get ⟨y, x⟩ = 0.

10. Show that {(1, 2), (1, 3)} is a linearly independent set in the inner product space R2 , whereas they

are not orthogonal.

11. Can you show that if A⊥ = B ⊥ , then span(A) = span(B)?

12. X ⊥ = {0} and {0}⊥ = X.

Hint: 0 is the one and only one element orthogonal to all elements of X.

13. Even if A is just a subset of X, A⊥ is always a subspace.

Hint: Take any two elements u, v ∈ A⊥ and α ∈ K and show that ⟨u + αv, x⟩ = 0 for all x ∈ A.

14. A⊥ is a closed subspace of X for every A ⊂ X.

Hint: A⊥ is a subspace by [2]. Take xn → x in X, where xn ∈ A⊥ . To show x ∈ A⊥ , take z ∈ A and use ⟨x, z⟩ = ⟨lim_{n→∞} xn , z⟩ = lim_{n→∞} ⟨xn , z⟩ = 0 (see Theorem 2.2.5).

15. The only possible element of A ∩ A⊥ is 0.

Hint: Suppose z ∈ A ∩ A⊥ . Then z ∈ A and z ∈ A⊥ ⇒ ⟨z, z⟩ = 0 ⇒ z = 0.

16. If A ⊂ B in X, then B ⊥ ⊂ A⊥ .

Hint: Let x ∈ B ⊥ . Then ⟨x, y⟩ = 0 for all y ∈ B. But then ⟨x, z⟩ = 0 for all z ∈ A, since A ⊂ B.

17. If A ⊂ X, then \overline{A⊥} = A⊥ .

Soln: A⊥ is closed.


18. If A ⊂ X, then A⊥ = (\overline{A})⊥ .

Hint: Since A ⊂ \overline{A}, by [5], (\overline{A})⊥ ⊂ A⊥ . For the converse, take x ∈ A⊥ . Then ⟨x, y⟩ = 0 for all y ∈ A. To show x ∈ (\overline{A})⊥ , let z ∈ \overline{A}. Then z = lim_{n→∞} zn for some zn ∈ A. Now use ⟨x, z⟩ = ⟨x, lim_{n→∞} zn ⟩ = 0.

19. If S is dense in X, then S⊥ = {0}.

Hint: S⊥ = (\overline{S})⊥ = X⊥ = {0}.

20. If A ⊂ X, then A⊥ = span(A)⊥ .

21. Verify the Pythagoras theorem for x = (2, 1, 0) and y = (3, −6, 5) in R3 .


Chapter 3

Banach spaces and Hilbert spaces

We first discuss about convergent sequences and Cauchy sequences in a metric space.

Show convergent sequences are Cauchy. Not the other way.

Closed sets and complete sets.

Closedness depends on the ambient superset; completeness does not.

Show that closed subset of a complete set is complete.

Define Banach and Hilbert spaces.

Give KN with ∥∥∞ as an example of complete metric space.

Show that (c00 , ∥∥∞ ) is not a Banach space.

Define equivalent norms on a vector space: ∥ · ∥1 and ∥ · ∥2 are equivalent if there exist α, β > 0 such that

α∥x∥1 ≤ ∥x∥2 ≤ β∥x∥1 ∀x ∈ X

Examples of equivalent norms on Cn , ∥ · ∥1 and ∥ · ∥∞ .
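For instance, on Cn one has ∥x∥∞ ≤ ∥x∥1 ≤ n∥x∥∞ for every x, so the two norms are equivalent with α = 1 and β = n. A quick numerical sanity check (our own sketch, assuming numpy):

import numpy as np

rng = np.random.default_rng(5)
n = 6
for _ in range(1000):
    x = rng.normal(size=n) + 1j*rng.normal(size=n)
    n1, ninf = np.sum(np.abs(x)), np.max(np.abs(x))
    # ||x||_inf <= ||x||_1 <= n ||x||_inf
    assert ninf <= n1 + 1e-12 and n1 <= n*ninf + 1e-12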

Theorem 3.0.1. Let ∥ · ∥1 and ∥ · ∥2 be two equivalent norms on a vector space X. Then (X, ∥ · ∥1 ) is complete if and only if (X, ∥ · ∥2 ) is complete.

So if KN with ∥∥∞ is a Banach space, then KN with ∥∥1 is also a Banach space.

3.1 Finite dimensional normed spaces

We have seen that KN with ∥∥∞ is a Banach space and therefore KN with ∥∥1 is also a Banach space. We

will see that any finite dimensional normed space is a Banach space. We also give some characterizations

of finite dimensional normed linear space.

First let us define some norms which can be defined on every finite dimensional spaces.


Definition 3.1.1. On any finite dimensional space X = span{u1 , u2 , . . . , uN } with basis {u1 , . . . , uN }, define ∥x∥∞ = max_{i=1,...,N} |αi | for x = Σ_{i=1}^{N} αi ui (see Problem 1 of Section 3.4).

Theorem 3.1.2. Every finite dimensional space is complete w. r. to ∥ · ∥∞ .

Proof. Same as the proof of KN . Take a Cauchy sequence, show that the coordinate sequences are Cauchy

and use their limits to define the limit of the Cauchy sequence.

Let us define the distance from a point to a set in a metric space.

Definition 3.1.3. Suppose (X, d) is a metric space and A ⊂ X. For a point x0 ∈ X, we define the

distance from x0 to A as

dist(x0 , A) = inf{d(x0 , a) : a ∈ A}.

We can see that if A is a closed set, then dist(x0 , A) > 0 if and only if x0 ∉ A.

Now we want to show that a finite dimensional normed linear space is complete w. r. to any norm

on it. For that purpose, we prove

Theorem 3.1.4. Any two norms are equivalent on a finite dimensional normed linear space .

Proof. Let X be a normed linear space with basis {u1 , u2 , . . . , uN } and ∥∥ be any norm on X. We show

that ∥ · ∥ is equivalent to ∥∥∞ . Now using the transitivity of equivalent norms, we get the conclusion.
It is easy to see that ∥x∥ = ∥Σ_{i=1}^{N} αi ui ∥ ≤ ∥x∥∞ Σ_{i=1}^{N} ∥ui ∥.
For the converse part, observe that

∥x∥ = ∥Σ_{i=1}^{N} αi ui ∥ ≥ |αi | dist(ui , Yi ) ≥ |αi | min_{i=1,2,...,N} dist(ui , Yi ),

where Yi = span{u1 , . . . , ui−1 , ui+1 , . . . , uN }. Note that min_{i=1,2,...,N} dist(ui , Yi ) > 0. Now taking the supremum over i on both sides we get

∥x∥ ≥ m0 ∥x∥∞ ,

where m0 = min_{i=1,2,...,N} dist(ui , Yi ), proving the result.

This, along with Theorem 3.1.2, immediately gives the following result.

Theorem 3.1.5. Every finite-dimensional normed linear space is complete w. r. to any norm.

Proof. Any two norms on finite dimensional normed linear space are equivalent. So if the space is complete

w. r. to one of the norms, it will be complete w.r.to any other norm also.

So finite dimensional spaces are very good examples of Banach spaces. But how do we identify if a

normed linear space is finite-dimensional, if we do not have information about the basis? Now we obtain

a characterization for finite-dimensional normed linear space in terms of the topological properties of

their closed unit balls. For this purpose, we first quote a famous result from the theory of metric spaces.

Theorem 3.1.6. Suppose (X, d) is any metric space and A ⊂ X. Then T.F.A.E:

i. A is compact

ii. Every sequence in A has a subsequence converging in A

iii. A is complete and totally bounded

If X itself is complete, we can have the following corollary.

Corollary 3.1.7. Suppose (X, d) is a complete metric space and A ⊂ X. Then A is compact if and only

if A is closed and totally bounded.

For the space K, we can see that all bounded sets are also totally bounded (a bounded set sits inside a compact closed ball, and subsets of totally bounded sets are totally bounded). So in K, subsets are bounded if and only if they are totally bounded. Hence it is immediate to see that

Theorem 3.1.8. A subset of K is compact if and only if it is closed and bounded.

Not only for K, the above result is true for any finite dimensional normed linear space, w.r.to any norm.

Theorem 3.1.9. (Heine-Borel theorem)

A subset of a finite dimensional normed linear space is compact if and only if it is closed and bounded.

Proof. Let A ⊂ X be compact. Then by Theorem 3.1.6, A is complete and totally bounded. Hence A is closed and bounded.

Conversely, suppose A is closed and bounded. Let {u1 , u2 , . . . , uN } be a basis for X. We will show that every sequence in A has a convergent subsequence. Take a sequence (xn ) in A, which is a bounded set. Then

∥xn ∥∞ ∼ ∥xn ∥ ≤ M.

If xn = Σ_{j=1}^{N} αn^{(j)} uj , this means that the coordinates |αn^{(j)}| are bounded for j ∈ {1, 2, . . . , N }. Now, passing to a subsequence, αnk^{(j)} → α^{(j)} for each j. Form x = Σ_{j=1}^{N} α^{(j)} uj and show that xnk → x as k → ∞. Then x ∈ A, since A is closed.

Thus the closed unit ball is compact in a finite dimensional normed linear space. But if the space is infinite dimensional this cannot happen. We first give an example and then show that this is actually a characterizing property of finite dimensional spaces, with the help of the Riesz lemma.

Example 3.1.10. Consider ℓ∞ . The closed unit ball contains en = (0, 0, . . . , 0, 1, 0, . . .), where 1 is at the nth place. Then ∥en − em ∥∞ = 1 whenever n ≠ m, so (en ) has no convergent subsequence. Hence the closed unit ball is not a compact set.

Next we state a fundamental result on normed linear spaces, called the Riesz lemma.

Theorem 3.1.11. (Riesz Lemma)

Let Y be a closed proper subspace of a normed linear space X and 0 < r < 1. Then there exists an

xr ∈ X such that ∥xr ∥ = 1 and r < dist(xr , Y ) ≤ 1.

Proof. Use

Now we are in a position to prove:

Theorem 3.1.12. A normed linear space X is finite dimensional iff its closed unit ball UX is compact.

Proof. If X is finite dimensional, then by the Heine-Borel theorem, we get the conclusion.

Suppose X is not finite dimensional. Then, applying the Riesz lemma repeatedly (with r = 1/2, say), one obtains a sequence (xn ) of unit vectors with ∥xn − xm ∥ > 1/2 for n ≠ m; such a sequence has no convergent subsequence, so UX is not compact.

3.2 Some standard examples of infinite dimensional Banach spaces

We will see that many of the standard spaces are Banach spaces. If a space is not a Banach space, then it is always possible to find a Banach space which contains this space as a dense subset (its completion); we are not discussing this construction here. Now we give some examples of standard infinite dimensional (all finite dimensional spaces are already covered) Banach spaces.

Theorem 3.2.1. For 1 ≤ p ≤ ∞, ℓp is a Banach space.

Proof. Write

Theorem 3.2.2. The spaces c0 , c are Banach spaces w.r.to ∥ · ∥∞ .

Proof. Write

Theorem 3.2.3. C[a, b] is a Banach space w.r.to ∥ · ∥∞ .

Proof. Write

Theorem 3.2.4. For 1 ≤ p ≤ ∞, Lp is a Banach space.

Proof. Write

Some of the standard non-Banach spaces are the following.

Theorem 3.2.5. C ′ [a, b] is not a Banach space w.r.to ∥ · ∥∞ .



Proof. Write

Theorem 3.2.6. C[a, b] is not a Banach space w.r.to ∥ · ∥p for 1 ≤ p < ∞.

Proof. Write

Theorem 3.2.7. P[a, b] is not a Banach space w.r.to ∥ · ∥∞ .

Proof. Write

We show another useful characterization of Banach spaces. For this, we introduce some concepts.

Definition 3.2.8. A series Σ xn in a normed linear space X is said to be summable if the sequence of its partial sums converges in X.

Definition 3.2.9. A series Σ xn in X is said to be absolutely summable if Σ ∥xn ∥ < ∞.

The following property helps us to show that some normed linear spaces are not Banach spaces.

Theorem 3.2.10. A normed linear space X is a Banach space iff every absolutely summable series in X is summable.

Proof. One side is obtained by triangle inequality.

Example: In c00 , the series Σ en /n^2 is absolutely summable (since Σ 1/n^2 < ∞) but not summable in c00 (its would-be sum (1, 1/4, 1/9, . . .) is not in c00 ). Hence c00 is not a Banach space.

3.3 Basis and Completeness

Now we study the relationship between completeness and bases. We have seen that if a normed linear space has a finite basis, then it must be complete w.r.to any norm (Theorem 3.1.5). Now, what if the basis is infinite? Can we conclude something? Generally, the answer is no, since there are infinite dimensional spaces like c00 , which is incomplete in ∥∥∞ , and ℓ1 , which is complete. But if the basis is denumerable (infinite, but countable), then we can conclude that the space can never be a Banach space.

To do this, we need a result on metric spaces, namely, the Baire category theorem. We first define some

concepts in this regard.

Definition 3.3.1. A subset A of a metric space X is said to be a nowhere-dense subset if (\overline{A})° = ∅, i.e., the closure of A has empty interior.

This means \overline{A} contains no interior points. It should be observed that, in such a case, not only A but \overline{A} itself does not contain any open ball inside it.

What is the connection between dense and nowhere-dense sets? They are strongly complementary in some sense. That is, if A is dense in X, then \overline{A} = X and so (\overline{A})° is the whole space X, whereas nowhere dense sets have the opposite property. Also, we know that if A is dense in X, its complement does not contain any open ball inside; whereas if A is nowhere dense, A and \overline{A} do not contain open balls.

Examples of nowhere-dense sets are finite sets in R, closed sets which do not contain any intervals (like N or any discrete closed set in R), etc. If a set (or its closure) contains an interval, then it can never be nowhere-dense. Note that dense sets cannot be nowhere-dense (so Q is not nowhere-dense in R) as their closure is the whole space.

The Baire category theorem helps to identify complete metric spaces.

Theorem 3.3.2. (Baire category theorem)

A complete metric space can not be written as a denumerable union of nowhere-dense subsets.

Proof. The proof is by contradiction. Assume, if possible, X = ∪_{n=1}^{∞} Vn , where Vn is a nowhere dense subset of X for each n ∈ N. We may assume that each Vn is closed and nowhere dense, since ∪_{n=1}^{∞} \overline{Vn} = X holds for such Vn as well. Now use the idea that V1 cannot contain any ball inside, so starting with any x1 , we can get x2 ∈ B(x1 , r1 ) ∩ V1^C and a ball B(x2 , r2 ) ⊂ B(x1 , r1 ) ∩ V1^C , the latter being an open set. Continuing in this way (with radii rn → 0), we obtain a Cauchy sequence (xn ) in X whose limit, which exists by completeness, cannot lie in any Vn , giving a contradiction.

Also, it is easy to see that the interior of a proper, closed subspace of any normed linear space is ϕ.

Theorem 3.3.3. Let X be any normed linear space and Y be a proper subspace of X. Then Y does not

contain any interior point.

Proof. Suppose Y contains an interior point, say y0 . Then we can show that Y contains all points of X, a contradiction to the assumption that Y is proper.

We have the following simple corollary

Corollary 3.3.4. Every finite-dimensional subspace of an infinite dimensional normed linear space is

nowhere dense.

Proof. Finite dimensional spaces are complete and hence closed. So if Y is a finite-dimensional subspace of an infinite dimensional space, then it is proper and closed, and hence, by the above theorem, (\overline{Y})° = Y° = ∅, i.e., Y is nowhere dense.

Now we establish the connection between the existence of denumerable basis and completeness.

Theorem 3.3.5. Let X be a normed linear space with a denumerable basis, say {un }_{n=1}^{∞} . Then X can not be a Banach space w. r. to any norm.

Proof. Since {un }_{n=1}^{∞} is a basis, every element of X is a finite linear combination of the un s. Hence X = span{u1 , u2 , . . .}. This implies

X = ∪_{n=1}^{∞} Xn ,

where Xn = span{u1 , u2 , . . . , un } is a finite dimensional space for each n ∈ N, and hence nowhere dense. Then X cannot be complete, due to the Baire category theorem.



Example 3.3.6. Consider the spaces P[a, b] and c00 . These spaces have denumerable bases. Hence these spaces are not Banach spaces w.r.to any norm. Further, from any infinite dimensional normed space, collect a sequence of linearly independent elements. Their span is a subspace of the normed linear space and has a denumerable basis. Hence such spaces can never be Banach spaces. Note that c00 is the span of (en ).

So in essence, if X has a finite basis, then it must be complete and if the basis is countable, then the

space can not be complete. Hence we need to tackle only the case when X has uncountable bases, to

check the completeness.

With this, we wind up the discussion of Banach spaces temporarily

3.4 Problems

1. Let X be a finite dimensional vectorspace with basis {u1 , u2 , . . . , un }. Show that for x ∈ X with x = Σ_{i=1}^{n} αi ui , αi ∈ K,

∥x∥∞ = ∥Σ_{i=1}^{n} αi ui ∥∞ = sup_{i=1,2,...,n} |αi |

is a norm on X.

2. Consider Kn with 2 norms,

∥(x1 , x2 , . . . , xn )∥2 = (Σ_{i=1}^{n} |xi |^2)^{1/2} where (x1 , x2 , . . . , xn ) ∈ Kn

and an arbitrary norm, say ∥ · ∥. Show directly (without using equivalence of norms on finite dimensional normed linear spaces) that ∃ b > 0 such that ∥x∥ ≤ b∥x∥2 ∀x ∈ Kn .

3. Show that on a normed linear space X, ‘∥∥1 ≃ ∥∥2 if ∥∥1 is equivalent to ∥∥2 ’ gives an equivalence

relation on the collection of all norms on X.

4. Show that dist(x0 , A) = 0 if and only if x0 ∈ \overline{A}.

5. Show that finite dimensional subspaces are closed in a normed linear space .

6. Every totally bounded sets in a metric space are bounded. Give example to show that bounded

sets may not be totally bounded. (Hint:- (R, d) with d(x, y) = min{|x − y|, 1})

7. Show that A ⊂ R with usual metric d on R is totally bounded iff it is bounded.

8. Show that A ⊂ C with usual metric d on C is totally bounded iff it is bounded.

9. Every compact sets in a metric space are totally bounded. Also show that converse is not true.

10. If A ⊂ X is totally bounded then all subsets of A are also totally bounded.

11. Show that for a subspace Y of X, dist(αx, Y ) = |α|dist(x, Y ), where α ∈ K and x ∈ X.

12. Suppose that there are two equivalent norms on a normed linear space. Then show that

• if a set is bounded w.r.to one norm, then it is bounded w.r.to the other norm also.

• if a set is closed w.r.to one norm, then it is closed w.r.to the other norm also.

• if a sequence is convergent w.r.to one norm, then it is convergent w.r.to the other norm also.

13. Let Y be a subspace of X and if y0 ∈ Y and x0 ∈ X, show that dist(x0 + y0 , Y ) = dist(x0 , Y ).

14. Show that the closed unit ball of ℓ2 is not compact, without using the theorem. (The same result holds for ℓp , 1 ≤ p ≤ ∞.)

15. Show that ℓ∞ is a Banach space with respect to ∥ · ∥∞ .

16. Show that c0 is a Banach space with respect to ∥ · ∥∞ .

17. Show that c is a Banach space with respect to ∥ · ∥∞ .

18. Show that Riesz lemma fails when Y is not closed.


Chapter 4

Basis concepts for a space

We have seen the connection between Banach spaces and the basis. Recall that a collection of vectors

β = {uα : α ∈ A} is a basis for a vector space (or a normed linear space) X if it is linearly independent

and it spans X. A basis always exists (by Zorn’s lemma) for any vector space and it may be finite,

denumerable or uncountable.

4.1 Hamel basis

In this chapter, we plan to introduce some other basis-like concepts for a normed linear space . To

distinguish, we call the usual basis (using which every element of the space can be written as a finite

linear combination) a Hamel basis. Remember that we did not assume any condition on the norm of

the basis elements while defining the Hamel basis (as it is defined for vector spaces where we need not

have a norm in general). In a normed linear space, we can take the basis elements to be of norm 1;

and further, every element will have a unique finite linear combination expression in terms of the basis

vectors.

4.2 Schauder basis and separability

Since we have a topology on a normed linear space , we can think of using infinite linear combinations

to define a basis concept. This gives rise to a new concept of a basis for normed linear space .

Definition 4.2.1. Let X be a normed linear space and (un ) be a countable (finite or denumerable)

collection of unit elements from X. Such a collection is called a Schauder basis for X if every element

x ∈ X has a unique expression

x = Σ_n αn un


where αn are scalars and the sum is finite or denumerable.


Note 4.2.2. Note that by x = Σ_n αn un , in the denumerable case, we mean that the partial sums of the series, sn = Σ_{i=1}^{n} αi ui , converge to x as n → ∞.

Theorem 4.2.3. If there is a countable Hamel basis for X, then that basis gives a Schauder basis (by means of normalised vectors) also.

Proof. Take the countable basis, say (xn ), and form un = xn /∥xn ∥, n ∈ N. Being a Hamel basis, it gives every element x in X a unique finite expression x = Σ_{n=1}^{N} αn un , proving that it is a Schauder basis also.

Following is a simple example for a Schauder basis, which is not a Hamel basis.

Example 4.2.4. It is easy to see that (en ) is not a Hamel basis for ℓ1 , as by using a finite linear combination we can create only finitely many non-zero coordinates.

But (en ) is a Schauder basis for ℓ1 . Taking any x ∈ ℓ1 , we can write x = (x1 , x2 , . . .) and it is easy to show that x = Σ_{i=1}^{∞} xi ei , since for any ϵ > 0,

∥x − sn ∥1 = ∥Σ_{i=1}^{∞} xi ei − Σ_{i=1}^{n} xi ei ∥1 = Σ_{i=n+1}^{∞} |xi | < ϵ

is possible for large n, this being the tail of a convergent series, and the expression is unique.
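A concrete illustration of this tail estimate (our own, assuming numpy): for x = (1/2, 1/4, 1/8, . . .) ∈ ℓ1, the ℓ1-distance to the n-th partial sum is Σ_{i>n} 2^{−i} = 2^{−n}, which tends to 0.

import numpy as np

N = 30
x = 0.5**np.arange(1, N + 1)          # finite truncation of (1/2^n), an element of l^1
for n in [1, 5, 10, 20]:
    tail = np.sum(x[n:])              # ||x - s_n||_1 = sum_{i > n} |x_i|
    print(n, tail, 2.0**(-n))         # tail equals 2^{-n} up to the truncation error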

Note that, for a finite-dimensional space, any Hamel basis consisting of unit vectors forms a Schauder basis. It is also obvious that if there is a denumerable Hamel basis for a normed linear space, then making the basis elements of norm 1 creates a Schauder basis. For example, the Hamel basis {1, x, x2 , . . .} of P[0, 1] is also a Schauder basis (w.r.to ∥ · ∥∞ ).

There is a close connection between Schauder basis and separability of a normed linear space . We

first define the separability of a topological space.

Definition 4.2.5. A space X is said to be separable if there is a countable dense subset A in X. That

is A ⊆ X, A is countable and A = X.

Note that R is separable since the rational numbers Q form a countable dense subset. The space C is also separable since Q + iQ is also countable and dense in C. Since a dense set intersects all non-empty open subsets, we can easily see the following:

Theorem 4.2.6. If there are uncountably many non-empty disjoint open sets in a topological space, then

the space can not be separable.



Proof. If a dense set exists, it intersects each of these open sets, and at different points since the open sets are disjoint; hence the dense set must contain uncountably many points.

The following corollary is immediate.

Corollary 4.2.7. If a metric space has uncountably many points {xα }α∈A , where A is an uncountable index set, which are at a distance of some ϵ0 > 0 or more from each other, then the space can not be separable.

Proof. The collection {B(xα , ϵ0 /2) : α ∈ A} forms an uncountable collection of disjoint open balls in the space.

These results help us to give an example of a non-separable space.

Theorem 4.2.8. The space L∞ [a, b] is not separable.

Proof. The collection {χ[a,a+ϵ) : 0 < ϵ < b − a} is an uncountable set of functions which are pairwise at distance 1 in the essential supremum norm; here χS denotes the characteristic function of the set S. Now apply the above corollary.
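As a rough grid-based sanity check (assuming NumPy; this samples the functions on a grid and does not replace the measure-theoretic statement), two such characteristic functions are indeed at sup-distance 1:

    # chi_[a, a+eps1) and chi_[a, a+eps2) differ by 1 on [a+eps1, a+eps2),
    # so their supremum distance is 1 whenever eps1 != eps2.
    import numpy as np

    a, b = 0.0, 1.0
    t = np.linspace(a, b, 100001)       # fine grid of [a, b]

    def chi(eps):                       # characteristic function of [a, a+eps), sampled on the grid
        return ((t >= a) & (t < a + eps)).astype(float)

    print(np.max(np.abs(chi(0.3) - chi(0.7))))   # prints 1.0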

What about finite-dimensional normed linear spaces?

Theorem 4.2.9. Every finite-dimensional normed linear space is separable.

Proof. Take a basis {u1 , u2 , . . . , un} for X and consider

D = { Σ_{i=1}^{n} ri ui : ri ∈ Q + i Q }.

Then D is countable, and the closure of D is X, since every scalar can be approximated by elements of Q + i Q (by Q when K = R).
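As a hedged illustration of the density used here (standard library only; the vector and the denominator bounds are arbitrary choices, and only the real-scalar case is shown), rational coordinates approximate real coordinates to any accuracy:

    # Approximating an arbitrary vector of R^3 by vectors with rational coordinates.
    from fractions import Fraction
    import math

    x = [math.pi, math.e, math.sqrt(2)]

    for d in (10, 1000, 100000):        # bound on the denominators allowed
        r = [Fraction(c).limit_denominator(d) for c in x]
        err = max(abs(c - float(q)) for c, q in zip(x, r))
        print(d, err)                   # the coordinatewise error shrinks as d grows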

We can observe that the elements of a Schauder basis are always linearly independent. However, they need not span X (if they did span X, they would form a Hamel basis). But the following is always true.

Proposition 4.2.10. If (un) is a Schauder basis for X, then span(un) is dense in X; that is, the closure of span(un) equals X.



Proof. Assume X to be of infinite dimension w.l.o.g. For each x ∈ X we have x = Σ_{i=1}^{∞} xi ui , and so the partial sums sn = Σ_{i=1}^{n} xi ui lie in span(un) and converge to x, giving x in the closure of span(un). Hence X is contained in the closure of span(un). The other inclusion is clear.

Now we establish the connection between Schauder basis and separable spaces.

Theorem 4.2.11. A normed linear space with a Schauder basis is separable.

Proof. Finite dimensional spaces are separable, so let (un) be a Schauder basis for an infinite dimensional normed linear space X. Form Dn = { Σ_{j=1}^{n} rj uj : rj ∈ Q + i Q }, a countable set, and let D = ∪_{n=1}^{∞} Dn , which is also countable. Now one shows that the closure of D is X.

An example was constructed by Enflo, in one of his renowned works, to show that a separable Banach space need not have a Schauder basis. We will see that a Hilbert space is separable if and only if it has a countable orthonormal basis (which is then also a Schauder basis).

The concept of separable spaces is very important in functional analysis as many results are proven

for these spaces. Also in Hilbert space theory, separable Hilbert spaces allow us to use a countable

orthonormal basis structure to deal with them.

4.3 Orthonormal basis

Note that this concept is valid only in an inner product setting, as it uses the inner product and the notion of orthogonality. Recall that an orthonormal set is an orthogonal set of unit vectors. That is, E = {uα}α∈A , where A is an index set, is orthonormal if ⟨uα , uβ⟩ = 0 when α ̸= β and ⟨uα , uα⟩ = 1 for every α.

Unlike the Hamel basis and the Schauder basis, the orthonormal basis is not defined through the way elements are represented in terms of the basis elements (later we will see that such a representation is indeed possible), but through the maximality of such sets.

Definition 4.3.1. A maximal orthonormal set in an inner product space X is called an orthonormal

basis (or a complete orthonormal system) for X. That is, E is an orthonormal basis if E is an orthonormal set in X and there is no orthonormal set properly containing E.

Does there exist an orthonormal basis for every inner product space? The answer is “yes”, if we assume Zorn's lemma.

Theorem 4.3.2. Let E0 be any orthonormal set in X. Then there exists an orthonormal basis for X

containing E0 .

Proof. Consider the set C of all orthonormal sets Eγ in X which contain E0 . This is a non-empty set with the partial order A ≺ B if A ⊆ B. Every chain in the collection C has an upper bound, namely its union. Hence there exists a maximal element, say E ∈ C. It is easy to show that this maximal element is an orthonormal basis for X.

Note that an orthonormal basis can be uncountable (a Schauder basis, by definition, is always countable).

Example 4.3.3. Consider ℓ2 and the collection E = {en }. Then it is easy to see that this collection

is an orthonormal set. It is maximal because if we consider any orthonormal set Ẽ containing E properly, there is an x ∈ Ẽ − E. But this x = (xn) is a sequence with xn = ⟨x, en⟩ = 0 for all n ∈ N, so x = 0, contradicting ∥x∥ = 1.

The same technique works for Kn with E = {e1 , e2 , . . . , en} (we can also use the fact that in an n-dimensional inner product space there cannot be n + 1 orthonormal elements). It is very easy to see that every element x of a finite dimensional inner product space with orthonormal basis {u1 , . . . , un} can be expressed as the linear combination

x = Σ_{j=1}^{n} ⟨x, uj⟩ uj .
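A small numerical check of this expansion (illustrative only, assuming NumPy; the orthonormal basis is produced by a QR factorisation of a random matrix, which is just a convenient stand-in):

    # Verify x = sum_j <x, u_j> u_j for an orthonormal basis u_1, ..., u_5 of R^5.
    import numpy as np

    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))    # columns are orthonormal
    x = rng.standard_normal(5)

    recon = sum((x @ Q[:, j]) * Q[:, j] for j in range(5))   # sum_j <x, u_j> u_j
    print(np.allclose(recon, x))                             # True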
It is easy to verify the orthonormality of a given collection. The following result helps to check the

maximality.

Theorem 4.3.4. Let E be an orthonormal set in an inner product space X. Then E is an orthonormal

basis if and only if E ⊥ = {0}.

Proof. If E is maximal and there were a non-zero x ∈ E⊥ , then E ∪ {x/∥x∥} would be a strictly larger orthonormal set, a contradiction. Conversely, if E⊥ = {0} and Ẽ is an orthonormal set properly containing E, then any x ∈ Ẽ − E lies in E⊥ , forcing x = 0, again a contradiction.

Using this result, we can obtain the following about Hamel bases.

Theorem 4.3.5. Suppose E is an orthonormal set and is also a basis for an inner product space X.

Then E is an orthonormal basis for X.

Proof. Use E ⊥ = (span(E))⊥ = X ⊥ = {0}.

The following shows that given any orthonormal set in an inner product space X, it is an orthonormal

basis for certain subspaces of X.

Theorem 4.3.6. Let E be an orthonormal set in X. Then

i. E is an orthonormal basis for span(E).

ii. E is an orthonormal basis for the closure of span(E).

Proof. Let W = span(E) and let V denote the closure of span(E). For the first part, E is a Hamel basis for W, and so by the above theorem E is an orthonormal basis for W.

For the second part, consider x ∈ V such that x ∈ E⊥ . Then there is a sequence (xn) in span(E) such that xn → x in V. Now x ∈ E⊥ implies ⟨xn , x⟩ = 0 for all n ∈ N, and hence 0 = ⟨xn , x⟩ → ⟨x, x⟩, giving x = 0. Thus E⊥ = {0} in V, and E is an orthonormal basis for V.

Remark 4.3.7. The above result helps us to conclude that {e1 , e2 , . . . , en} is an orthonormal basis for Kn . Also, (en) is an orthonormal basis for c00 . If we use the fact that c00 is dense in ℓ2 , the same set (en) is an orthonormal basis for ℓ2 also, by part (ii) of the above theorem.

Now, using the Gram-Schmidt process, we can create an orthonormal basis for (P[−1, 1], ∥ · ∥2 ) from the standard basis {1, x, x2 , . . .}. This set is (up to normalisation) the sequence of Legendre polynomials. If we use the fact that P[−1, 1] is dense in (C[−1, 1], ∥ · ∥2 ), the same set of Legendre polynomials is an orthonormal basis for (C[−1, 1], ∥ · ∥2 ) also, by part (ii) of the same result. Now, using the fact that (C[−1, 1], ∥ · ∥2 ) is dense in L2 [−1, 1], we see that the Legendre polynomials form an orthonormal basis for L2 [−1, 1] by the same result.
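The following is a minimal computational sketch of this (assuming NumPy's polynomial module; the helper name inner and the degree-3 cutoff are illustrative choices, and a finite check is of course not a proof): run Gram-Schmidt on 1, x, x2, x3 with respect to the L2[−1, 1] inner product and compare with the normalised Legendre polynomials.

    # Gram-Schmidt on the monomials w.r.t. <p, q> = integral of p*q over [-1, 1].
    import numpy as np
    from numpy.polynomial import Polynomial as P
    from numpy.polynomial.legendre import Legendre

    def inner(p, q):
        r = (p * q).integ()             # an antiderivative of p*q
        return r(1.0) - r(-1.0)

    basis = []
    for k in range(4):
        v = P([0] * k + [1])            # the monomial x^k
        for u in basis:
            v = v - inner(v, u) * u     # subtract the components along earlier vectors
        basis.append(v / np.sqrt(inner(v, v)))          # normalise

    for k, u in enumerate(basis):
        leg = Legendre.basis(k).convert(kind=P)         # k-th Legendre polynomial
        leg = leg / np.sqrt(inner(leg, leg))            # normalised
        print(k, np.allclose(u.coef, leg.coef))         # True for each k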

Now we give an example of an orthonormal basis for L2 [0, 2π].

Example 4.3.8. Consider un (t) = e^{int}/√(2π) for each n ∈ Z; these functions lie in C[0, 2π]. It is easy to see that this set

E = {un : n ∈ Z} is an orthonormal set. Hence it is an orthonormal basis for the span of E. The elements of the span of {e^{int} : n ∈ Z} are called trigonometric polynomials. By the Weierstrass approximation theorem (trigonometric version), trigonometric polynomials are dense with respect to ∥ · ∥∞ among continuous 2π-periodic functions, and consequently dense with respect to ∥ · ∥2 in C[0, 2π]. Hence E is an orthonormal basis for (C[0, 2π], ∥ · ∥2 ), and since C[0, 2π] is dense in L2 [0, 2π], E is a countable orthonormal basis for L2 [0, 2π] also.

Note that since e^{int} = cos nt + i sin nt, the collection { 1/√(2π), (sin nt)/√π, (cos nt)/√π : n ∈ N } also forms an orthonormal basis for the above three spaces, by similar arguments.
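As a hedged numerical check of these orthonormality claims (assuming NumPy; a Riemann sum over one period stands in for the L2[0, 2π] integral, and only frequencies n = 1, 2, 3 are tested):

    # Gram matrix of 1/sqrt(2*pi), sin(nt)/sqrt(pi), cos(nt)/sqrt(pi) for n = 1, 2, 3.
    import numpy as np

    M = 4096
    t = np.arange(M) * (2 * np.pi / M)  # uniform grid over one period
    dt = 2 * np.pi / M

    def ip(f, g):                       # Riemann-sum approximation of the inner product
        return np.sum(f(t) * g(t)) * dt

    fam = [lambda s: np.full_like(s, 1 / np.sqrt(2 * np.pi))]
    for n in (1, 2, 3):
        fam.append(lambda s, n=n: np.sin(n * s) / np.sqrt(np.pi))
        fam.append(lambda s, n=n: np.cos(n * s) / np.sqrt(np.pi))

    gram = np.array([[ip(f, g) for g in fam] for f in fam])
    print(np.allclose(gram, np.eye(len(fam))))          # True (up to rounding)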

Even though we called a maximal orthonormal set an orthonormal basis, it is not clear in what sense it is a basis. A basis's fundamental role is to express a given element as a linear combination of the basis elements. Our next aim is to show this for orthonormal bases, but we need some standard results to establish this fact. See problem 14, as we are expecting something like this in the general case also.

We first prove a result called Bessel’s inequality:

Theorem 4.3.9. (Bessel’s inequality)

Let (un ) be a countable orthonormal set in an inner product space X and x ∈ X. Then


Σ_{j} |⟨x, uj⟩|2 ≤ ∥x∥2 .

Proof. Define sn = Σ_{j=1}^{n} ⟨x, uj⟩uj . Then ⟨x, sn⟩ = ⟨sn , x⟩ = ⟨sn , sn⟩ = Σ_{j=1}^{n} |⟨x, uj⟩|2 . Now

0 ≤ ∥x − sn∥2 = ∥x∥2 − Σ_{j=1}^{n} |⟨x, uj⟩|2 ,

which implies that Σ_{j=1}^{n} |⟨x, uj⟩|2 ≤ ∥x∥2 for each n ∈ N, and letting n → ∞ we get the conclusion.

This immediately gives the following corollary.

Corollary 4.3.10. (Riemann-Lebesgue Lemma) If (un) is a countable orthonormal set in an inner product space X and x ∈ X, then ⟨x, un⟩ → 0 as n → ∞.
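A small numerical illustration of Bessel's inequality (assuming NumPy; the orthonormal set below is deliberately not maximal, so strict inequality is to be expected):

    # Three orthonormal vectors in R^6 (first three columns of an orthogonal Q).
    import numpy as np

    rng = np.random.default_rng(1)
    Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
    U = Q[:, :3]                        # u_1, u_2, u_3: orthonormal, but not a basis
    x = rng.standard_normal(6)

    coeffs = U.T @ x                    # the Fourier coefficients <x, u_j>
    print(np.sum(coeffs ** 2), np.dot(x, x))   # first value <= second (Bessel)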

Now our aim is to ask: using an orthonormal basis E, can we have the representation

x = Σ_{u∈E} ⟨x, u⟩u

for each x ∈ X?

Such a countable sum cannot be expected unless E is a countable set. But if all except countably many of the ⟨x, u⟩ are zero, then such an expression is still meaningful.

Theorem 4.3.11. Suppose E = {uα } is any orthonormal set in an inner product space X and x ∈ X.

Then the set

{uα ∈ E : ⟨x, uα⟩ ̸= 0}

is always a countable set.

Proof. It is an application of Bessel's inequality. First observe that if we take Dn = {uα : |⟨x, uα⟩| > 1/n} for n ∈ N, then D = ∪n Dn , where D denotes the set in the statement. Each Dn is a finite set: if some Dn were infinite, then choosing any m of its elements and applying Bessel's inequality would give ∥x∥2 ≥ m/n2 , and since this holds for every m ∈ N we get a contradiction, because ∥x∥ is a finite number.
Thus, for any x ∈ X and any orthonormal set E, the expression Σ_{u∈E} ⟨x, u⟩u has only countably many non-zero terms, and hence the sum is meaningful. We will see that, when E is an orthonormal basis of a Hilbert space, this expression is in fact equal to x. In the following theorem in this regard, we need to use the completeness of the space.

Theorem 4.3.12. (Fourier series expansion)

Let H be a Hilbert space and E an orthonormal set in H. Then E is an orthonormal basis if and only if, for each x ∈ H,

x = Σ_{uα∈E} ⟨x, uα⟩uα .

Proof. The converse part is simple. Assume x = Σ_{uα∈E} ⟨x, uα⟩uα for every x ∈ H. To show that E is an orthonormal basis, we need to show that E⊥ = {0}. So take x ∈ E⊥ . Then ⟨x, uα⟩ = 0 for all uα , and hence x = 0.

For the other part, one shows that the series Σ_{uα∈E} ⟨x, uα⟩uα actually converges (this uses the completeness of H), calls its limit y, and then shows that y = x.

Theorem 4.3.13. (Parseval’s identity)

Let H be a Hilbert space and E an orthonormal set in H. Then E is an orthonormal basis if and only if, for each x ∈ H,

Σ_{uα∈E} |⟨x, uα⟩|2 = ∥x∥2 .

Proof. It is easy to prove this result using the above result and by analysing the proof of Bessel's inequality. We will show that Σ_{uα∈E} |⟨x, uα⟩|2 = ∥x∥2 if and only if x = Σ_{uα∈E} ⟨x, uα⟩uα . Note that both summations are countable, so the index set can be taken to be N.

From the proof of Bessel's inequality, we have ∥x − sn∥2 = ∥x∥2 − Σ_{i=1}^{n} |⟨x, ui⟩|2 for each n ∈ N. Suppose that Σ_{i} |⟨x, ui⟩|2 = ∥x∥2 . Then the right-hand side above tends to 0 as n → ∞, so ∥x − sn∥ → 0; that is, x = lim sn = Σ_{i=1}^{∞} ⟨x, ui⟩ui . Similar arguments give the converse part.
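As a hedged numerical illustration of Parseval's identity (assuming NumPy; a midpoint quadrature on a grid replaces the exact integrals, and f(t) = t is just a convenient test function), the partial sums of |⟨f, un⟩|2 for un(t) = e^{int}/√(2π) creep up to ∥f∥2, numerically about 82.68:

    # Partial Parseval sums for f(t) = t on [0, 2*pi].
    import numpy as np

    M = 20000
    t = (np.arange(M) + 0.5) * (2 * np.pi / M)   # midpoint grid on [0, 2*pi]
    dt = 2 * np.pi / M
    f = t

    def coeff(n):                                # <f, u_n>, computed by a midpoint rule
        return np.sum(f * np.exp(-1j * n * t)) * dt / np.sqrt(2 * np.pi)

    norm_sq = np.sum(f ** 2) * dt                # ||f||^2, about 8*pi^3/3 = 82.68...
    for N in (1, 5, 50, 500):
        partial = sum(abs(coeff(n)) ** 2 for n in range(-N, N + 1))
        print(N, partial, norm_sq)               # partial sums increase towards ||f||^2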

Now it looks like an orthonormal basis acts like a Schauder basis, and indeed it does when it is countable (recall that, by definition, a Schauder basis must be countable).

Corollary 4.3.14. A countable orthonormal basis of a Hilbert space is a Schauder basis.

Proof. Note that in an orthonormal basis ∥ui∥ = 1, and by the Fourier series representation every element x can be written as x = Σ_{i} ⟨x, ui⟩ui . The coefficients are unique: if x = Σ_{i} αi ui , then taking the inner product with uj and using the continuity of the inner product gives αj = ⟨x, uj⟩.

Hence, by Theorem 4.2.11, every Hilbert space which has a countable orthonormal basis is a separable space. What about the converse? The converse is also true in the Hilbert space setting. To show this, we first prove that every Hilbert space has an orthonormal basis, countable or uncountable.

Theorem 4.3.15. Suppose X is an inner product space and E0 is any orthonormal set in X. Then there exists an orthonormal basis for X containing E0 .

In particular, every Hilbert space has an orthonormal basis.

Proof. Consider the collection E = {E : E is an orthonormal set in X containing E0 }. Then E is a partially ordered set under inclusion ⊆, and every chain in E has an upper bound (the union of the chain). Hence there is a maximal element, which is an orthonormal basis for X.

Now we prove the relation between separable Hilbert spaces and orthonormal bases.

Theorem 4.3.16. A Hilbert space is separable if and only if it has a countable orthonormal basis.

Proof. If there is a countable orthonormal basis, then it is a Schauder basis also, and hence by Theorem 4.2.11 the space must be separable.

Conversely, suppose the orthonormal basis is uncountable. Then ∥uα − uβ∥ = √2 for all α ̸= β. Hence, by Corollary 4.2.7, there are uncountably many pairwise disjoint non-empty open balls in H, and so H cannot be separable, proving the theorem.

4.4 Problems

1. Show that {e1 , e2 , . . .} forms a Schauder basis for ℓp , 1 ≤ p < ∞.



2. If (X, ∥ · ∥∞ ) is a finite-dimensional normed linear space, then every Hamel basis of unit vectors is

also a Schauder basis.

3. Show that elements of a Schauder basis are linearly independent.

4. Suppose {u1 , u2 , · · · } is a Schauder basis for X. Then closure of span{u1 , u2 , · · · } = X.

5. Let a = (1, 1, · · · ). Show that {a, e1 , e2 , · · · } is a Schauder basis for the subspace c of ℓ∞ .

6. Show that if X is separable, all subsets of X are also separable.

7. Show that if X is separable and X is dense in Y (that is, the closure of X equals Y ), then Y is also separable.

8. Show that P[a, b] with ∥ · ∥∞ is separable.

9. Show that C[a, b] with ∥ · ∥∞ is separable (use 7).

10. Show that every orthonormal basis is a linearly independent set.


11. Show that for each x ∈ X, x = Σ_{j=1}^{n} ⟨x, uj⟩uj , where {u1 , . . . , un} is an orthonormal basis of the finite dimensional inner product space X.

12. Prove that A⊥ = span(A)⊥ .


13. Prove that A⊥ = (Ā)⊥ , where Ā denotes the closure of A.

14. Show that every inner product space has an orthonormal basis.

15. Show that an orthonormal basis of a Hilbert space is a Hamel basis if and only if the space is finite dimensional. (Hint: One side is clear from problem 11.)


16. Show that if (xi) is an orthogonal sequence in a Hilbert space H, then Σ_{i} xi converges in H if and only if Σ_{i} ∥xi∥2 converges.
Chapter 5

Bounded Linear Transformations

We want to introduce the concept of continuity for linear transformations. This is possible since we take the domain and the codomain to be normed linear spaces, which are in particular metric (hence topological) spaces. Recall:

Definition 5.0.1. A function T : X → Y is said to be a linear transformation if

T (αx + y) = αT x + T y ∀x, y ∈ X, α ∈ K.

Note that in this course, by an operator we always mean a linear operator. The continuity of such functions is defined in the usual manner:

Definition 5.0.2. A linear transformation T : X → Y is said to be continuous at a point x0 ∈ X if for

any ϵ > 0, there exists a δ > 0 such that ∥T x − T x0 ∥ < ϵ for all x ∈ X with ∥x − x0 ∥ < δ.

Since X and Y are metric spaces, the above definition is equivalent to saying that whenever any

sequence xn → x0 in X, T xn → T x0 as n → ∞.

5.1 Bounded linear transformations

In normed spaces we are trying to interpret continuity in terms of boundedness.

Definition 5.1.1. A subset S of a normed linear space X is said to be bounded if there is an M > 0 such that ∥x∥ ≤ M for all x ∈ S. The closed unit ball of X is UX = {x ∈ X : ∥x∥ ≤ 1} (also denoted U ), the open unit ball is {x ∈ X : ∥x∥ < 1}, and the unit sphere is SX = {x ∈ X : ∥x∥ = 1}.

Definition 5.1.2. A linear transformation T : X → Y is said to be bounded if it maps bounded subsets of X to bounded subsets of Y.

Proposition 5.1.3. A linear transformation T : X → Y is bounded if and only if T (U ) is bounded in Y .

Proof. see


Note that the above proposition says that T is bounded if and only if {T x : ∥x∥ ≤ 1} is a bounded set. The following is a more explicit characterisation of bounded maps:

Theorem 5.1.4. T : X → Y is bounded if and only if there exists an α > 0 such that

∥T x∥ ≤ α∥x∥ ∀x ∈ X.

Proof. Write

It is time for some examples:

Example 5.1.5. Consider a bounded sequence (λn). Define a map T : ℓp → ℓp , 1 ≤ p < ∞, by

T (x1 , x2 , . . .) = (λ1 x1 , λ2 x2 , . . .), (x1 , x2 , . . .) ∈ ℓp .

Then T is a bounded linear operator and α can be chosen as any upper bound of |λn |.
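A quick numerical check of this bound (illustrative only, assuming NumPy; a long but finite vector stands in for an ℓp sequence, and the particular λn below is just one example of a bounded sequence):

    # Diagonal operator T(x) = (lambda_n x_n): ||Tx||_p <= sup|lambda_n| * ||x||_p.
    import numpy as np

    n = np.arange(1, 2001)
    lam = np.sin(n)                     # a bounded sequence with sup|lambda_n| <= 1
    x = np.random.default_rng(2).standard_normal(2000) / n

    for p in (1, 2):
        lhs = np.linalg.norm(lam * x, ord=p)
        rhs = np.max(np.abs(lam)) * np.linalg.norm(x, ord=p)
        print(p, lhs <= rhs + 1e-12)    # True for each p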

Example 5.1.6. Consider a function ϕ ∈ C[a, b]. Define a map T : (C[a, b], ∥ · ∥∞ ) → (C[a, b], ∥ · ∥∞ ),

defined by

T f (x) = ϕ(x)f (x), f ∈ C[a, b].

Then T is a well defined bounded linear operator and α can be chosen as ∥ϕ∥∞ .

Example 5.1.7. Define a map T : (C ′ [0, 1], ∥ · ∥∞ ) → (C[0, 1], ∥ · ∥∞ ), defined by

T f (x) = f ′ (x), f ∈ C ′ [0, 1].

Then T is a well defined linear operator; but T is unbounded, since the bounded set {x^n : n ∈ N} (each of sup norm 1) is mapped to the unbounded set {n x^{n−1} : n ∈ N} (of sup norms n).
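A quick numerical illustration of this unboundedness (assuming NumPy; a grid on [0, 1] stands in for the interval):

    # ||x^n||_inf = 1 on [0, 1], while the image n*x^(n-1) has sup norm n.
    import numpy as np

    s = np.linspace(0.0, 1.0, 10001)

    for n in (1, 5, 25, 125):
        f = s ** n                      # sup norm 1, attained at s = 1
        df = n * s ** (n - 1)           # sup norm n
        print(n, f.max(), df.max())     # the ratio ||Tf||/||f|| grows without bound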

5.2 Continuity and boundedness of Linear transformations

Proposition 5.2.1. For a linear transformation T : X → Y , the following are equivalent:

1. T is bounded

2. T is continuous at 0

3. T is uniformly continuous on X

Proof. See

Definition 5.2.2. norm of operator

Theorem 5.2.3. ∥T ∥ = sup{∥T x∥}

Proof. See

Theorem 5.2.4. ∥T∥ = inf{α > 0 : ∥T x∥ ≤ α∥x∥ ∀ x ∈ X}.

Proof. See

5.3 The operator space

Let B(X, Y ) denote the set of all bounded linear transformations from X to Y , with the pointwise operations (T + S)x = T x + Sx and (αT )x = α(T x), and with the operator norm defined above.

Theorem 5.3.1. B(X, Y ) is a normed linear space.

Proof. see

Theorem 5.3.2. B(X, Y ) is a Banach space if Y is complete.

Proof. See

Note: the converse also holds when X ̸= {0}; this uses the Hahn-Banach extension theorem (HBET).

Theorem 5.3.3. If X is a finite dimensional normed linear space, then every linear operator T : X → Y is bounded.

This means L(X, Y ) = B(X, Y ) if X is finite dimensional. In fact, for Y ̸= {0}, the equality L(X, Y ) = B(X, Y ) holds if and only if X is finite dimensional.

5.4 Problems

1. Show that T : X → Y is bounded if and only if T (UX ) is a bounded set.

2. Show that T : X → Y is bounded if and only if T (B(a, r)) is a bounded set for some a ∈ X and

r > 0.

3. Show that T : X → Y is bounded if and only if T (SX ) is a bounded set.


4. Consider the linear functional T : c00 → K given by

T (x1 , x2 , . . .) = x1 + x2 + · · · ∀ (x1 , x2 , . . .) ∈ c00 .



Show that T is a bounded operator with respect to ∥ · ∥1 and not a bounded operator with respect

to ∥ · ∥2 .

5. Consider the linear functional T : c00 → K given by

T (x1 , x2 , . . .) = Σ_{j=1}^{∞} xj /j ∀ (x1 , x2 , . . .) ∈ c00 .

Show that T is a bounded operator with respect to ∥ · ∥2 and not a bounded operator with respect

to ∥ · ∥∞ .
Contents

1 From Vector spaces to Normed linear spaces 1

1.1 Some simple examples of normed linear spaces . . . . . . . . . . . . . . . . . . . . . . . . 6

1.2 Some properties of norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.3 The standard normed linear spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

1.4 Kn and ℓp with ∥ · ∥p , 1 ≤ p ≤ ∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

1.5 The function spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1.6 ℓ∞ (S, K) with ∥ · ∥∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1.7 The space of all continuous functions C[a, b] . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1.8 The Lp spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

1.9 Exercise Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2 Inner Product Spaces 15

2.1 Some Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.2 Creating norm from inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.3 Some properties of inner products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.4 Orthogonal and orthonormal sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3 Banach spaces and Hilbert spaces 25

3.1 Finite dimensional normed spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.2 Some standard examples of infinite dimensional Banach spaces . . . . . . . . . . . . . . . 28

3.3 Basis and Completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

4 Basis concepts for a space 33

4.1 Hamel basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4.2 Schauder basis and separability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33


4.3 Orthonormal basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

5 Bounded Linear Transformations 43

5.1 Bounded linear transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

5.2 Continuity and boundedness of Linear transformations . . . . . . . . . . . . . . . . . . . . 44

5.3 The operator space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

5.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
