
Mathematical Toolkit Autumn 2021

Lecture 5: October 12, 2021


Lecturer: Madhur Tulsiani

1 Self-adjoint transformations

Definition 1.1 A linear transformation ϕ : V → V is called self-adjoint if ϕ = ϕ∗. Note that such a transformation necessarily maps V to itself, and is thus a linear operator.

Example 1.2 The transformation represented by a matrix A ∈ Cⁿˣⁿ is self-adjoint if A = Ā^T, i.e., A equals its own conjugate transpose. Such matrices are called Hermitian matrices.
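As a quick numerical illustration (not part of the notes; a sketch using numpy, with the matrix chosen arbitrarily):

```python
import numpy as np

# An example Hermitian matrix: real diagonal entries, off-diagonal
# entries that are complex conjugates of each other.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

# Hermitian means A equals its conjugate transpose.
print(np.allclose(A, A.conj().T))  # True
```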

Proposition 1.3 Let V be an inner product space and let ϕ : V → V be a self-adjoint linear operator. Then

- All eigenvalues of ϕ are real.

- If {w1, . . . , wn} are eigenvectors corresponding to distinct eigenvalues, then they are mutually orthogonal.

Proof: The first property can be observed by noting that if v ∈ V \ {0V} is an eigenvector with eigenvalue λ, then

λ · ⟨v, v⟩ = ⟨v, λ · v⟩ = ⟨v, ϕ(v)⟩ = ⟨ϕ∗(v), v⟩ = ⟨ϕ(v), v⟩ = λ̄ · ⟨v, v⟩ .

Since ⟨v, v⟩ ≠ 0, we must have λ = λ̄, which implies that λ ∈ R. For the second part, observe that if i ≠ j, then we have

λj · ⟨wi, wj⟩ = ⟨wi, ϕ(wj)⟩ = ⟨ϕ∗(wi), wj⟩ = ⟨ϕ(wi), wj⟩ = λ̄i · ⟨wi, wj⟩ .

Since eigenvalues are real, we get (λi − λj) · ⟨wi, wj⟩ = 0, which implies ⟨wi, wj⟩ = 0 using λi ≠ λj.
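Both properties in Proposition 1.3 can be checked numerically; a sketch using numpy (the random Hermitian matrix below is an arbitrary choice, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T  # B + B* is always Hermitian (self-adjoint)

# All eigenvalues are real (up to floating-point roundoff).
eigvals = np.linalg.eigvals(A)
assert np.allclose(eigvals.imag, 0.0)

# eigh is numpy's routine for Hermitian matrices; it returns an
# orthonormal set of eigenvectors (the columns of V).
w, V = np.linalg.eigh(A)
assert np.allclose(V.conj().T @ V, np.eye(4))
```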

2 The Real Spectral Theorem

In this lecture, we will prove the “real spectral theorem” for self-adjoint operators ϕ : V → V (so named because the eigenvalues of a self-adjoint operator are real, not because other spectral theorems are fake!). We will show that any such operator is not only diagonalizable (has a basis of eigenvectors) but is in fact orthogonally diagonalizable, i.e., has an orthonormal basis of eigenvectors. This gives a very convenient way of thinking about the action of such operators. In particular, let dim(V) = n and let {w1, . . . , wn} form an orthonormal basis of eigenvectors for ϕ, with corresponding eigenvalues λ1, . . . , λn. Then for any vector v expressible in this basis as (say) v = c1 · w1 + · · · + cn · wn, we can think of the action of ϕ as

ϕ(v) = ϕ(c1 · w1 + · · · + cn · wn) = c1 · λ1 · w1 + · · · + cn · λn · wn .

Of course, we can also think of the action of ϕ in this way as long as w1, . . . , wn form a basis (not necessarily orthonormal). However, this view is particularly useful when they form an orthonormal basis. As we will see later, this also provides the “right” basis for thinking about many matrices, such as the adjacency matrices of graphs (where such decompositions are the subject of spectral graph theory). To prove the spectral theorem, we will need the following statement (which we'll prove later).
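The eigenbasis picture above can be sketched numerically; assuming numpy, with an arbitrary symmetric matrix standing in for ϕ:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2            # a real symmetric (hence self-adjoint) operator
lam, W = np.linalg.eigh(A)   # columns of W: an orthonormal eigenbasis

v = rng.standard_normal(3)
c = W.T @ v                  # coefficients of v in the eigenbasis: ci = <wi, v>

# Acting by A just scales each coefficient by its eigenvalue:
# A v = c1*lam1*w1 + ... + cn*lamn*wn
assert np.allclose(A @ v, W @ (lam * c))
```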

Proposition 2.1 Let V be a finite-dimensional inner product space (over R or C) and let ϕ : V →
V be a self-adjoint linear operator. Then ϕ has at least one eigenvalue.

Using the above proposition, we will prove the spectral theorem below for finite-dimensional vector spaces. The proof below can also be made to work for Hilbert spaces (using the axiom of choice). The above proposition, which gives the existence of an eigenvalue, is often proved differently for finite and infinite-dimensional spaces, and the proof for infinite-dimensional Hilbert spaces requires additional conditions on the operator ϕ. We first prove the spectral theorem assuming the above proposition.

Proposition 2.2 (Real spectral theorem) Let V be a finite-dimensional inner product space and
let ϕ : V → V be a self-adjoint linear operator. Then ϕ is orthogonally diagonalizable.

Proof: By induction on the dimension of V. Let dim(V) = 1. Then by the previous proposition, ϕ has at least one eigenvalue, and hence at least one eigenvector, say w. Then w/‖w‖ is a unit vector which forms a basis for V.
Now let dim(V) = k + 1. Again, by the previous proposition, ϕ has at least one eigenvector, say w. Let W = Span({w}) and let W⊥ = {v ∈ V | ⟨v, w⟩ = 0}. Check the following:

- W⊥ is a subspace of V.

- dim(W⊥) = k.

- W⊥ is invariant under ϕ, i.e., ∀v ∈ W⊥, ϕ(v) ∈ W⊥.

Thus, we can consider the operator ϕ′ : W⊥ → W⊥ defined as

ϕ′(v) := ϕ(v) ∀v ∈ W⊥ .

Then, ϕ′ is a self-adjoint (check!) operator defined on the k-dimensional space W⊥. By the induction hypothesis, there exists an orthonormal basis {w1, . . . , wk} for W⊥ such that each wi is an eigenvector of ϕ. Thus {w1, . . . , wk, w/‖w‖} is an orthonormal basis for V, consisting of eigenvectors of ϕ.
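In matrix terms, orthogonal diagonalizability says A = W Λ W∗, where the columns of W are the orthonormal eigenbasis; a quick numpy sketch (the test matrix is an arbitrary choice, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2              # a self-adjoint operator on R^4
lam, W = np.linalg.eigh(A)     # orthonormal eigenbasis as columns of W

assert np.allclose(W.T @ W, np.eye(4))         # orthonormal basis
assert np.allclose(A, W @ np.diag(lam) @ W.T)  # A = W diag(lam) W^T
```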

3 Existence of eigenvalues

We now prove Proposition 2.1, which shows that a self-adjoint operator must have at least one eigenvalue. Let us assume for now that V is an inner product space over C. As was observed in class, in this case we do not need self-adjointness to guarantee an eigenvalue. We thus prove the following more general result.

Proposition 3.1 Let V be a finite-dimensional vector space over C and let ϕ : V → V be a linear operator. Then ϕ has at least one eigenvalue.

Proof: Let dim(V) = n. Let v ∈ V \ {0V} be any non-zero vector. Consider the set of n + 1 vectors {v, ϕ(v), . . . , ϕⁿ(v)}. Since the dimension of V is n, there must exist c0, . . . , cn ∈ C, not all zero, such that

c0 · v + c1 · ϕ(v) + · · · + cn · ϕⁿ(v) = 0V .

We assume below that cn ≠ 0; otherwise we can simply consider the sum up to the largest i such that ci ≠ 0. Let P(x) denote the polynomial c0 + c1 · x + · · · + cn · xⁿ. Then the above can be written as (P(ϕ))(v) = 0V, where P(ϕ) : V → V is the linear operator defined as

P(ϕ) := c0 · id + c1 · ϕ + · · · + cn · ϕⁿ ,

with id used to denote the identity operator. Since P is a degree-n polynomial over C, it can be factored into n linear factors, and we can write P(x) = cn · (x − λ1) · · · (x − λn) for λ1, . . . , λn ∈ C. This means that we can write

P(ϕ) = cn · (ϕ − λn · id) · · · (ϕ − λ1 · id) .

Let w0 = v and define wi = ϕ(wi−1) − λi · wi−1 for i ∈ [n]. Note that w0 = v ≠ 0V and wn = (1/cn) · P(ϕ)(v) = 0V. Let i∗ denote the largest index i such that wi ≠ 0V. Then, we have

0V = wi∗+1 = ϕ(wi∗) − λi∗+1 · wi∗ .

This implies that wi∗ is an eigenvector with eigenvalue λi∗+1.
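The proof above is constructive, and can be sketched in numpy: build the linearly dependent Krylov vectors, read off the coefficients of P from a null vector, factor P with np.roots, and peel off linear factors until the image vanishes. (The matrix, seed, and tolerances below are illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v = rng.standard_normal(n) + 0j    # any non-zero starting vector

# The n+1 vectors v, A v, ..., A^n v must be linearly dependent.
K = np.stack([np.linalg.matrix_power(A, k) @ v for k in range(n + 1)], axis=1)

# A null vector of K gives the coefficients c0, ..., cn of P.
c = np.linalg.svd(K)[2].conj()[-1]  # right-singular vector for the zero singular value
roots = np.roots(c[::-1])           # roots of P (np.roots wants cn first); assumes cn != 0

# Apply the factors (A - lam*I) to v one at a time (normalizing for
# numerical stability); the last non-zero iterate is an eigenvector.
w = v / np.linalg.norm(v)
for lam in roots:
    w_next = A @ w - lam * w
    if np.linalg.norm(w_next) < 1e-6:  # w_{i*+1} = 0, so w is an eigenvector for lam
        break
    w = w_next / np.linalg.norm(w_next)

assert np.linalg.norm(A @ w - lam * w) < 1e-5
```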

3
To prove Proposition 2.1 using this, we note that ϕ = ϕ∗ implies that the eigenvalue found by the above proposition must be real.

Exercise 3.2 Use the fact that the eigenvalues of a self-adjoint operator are real to prove Proposition
2.1 even when V is an inner product space over R.

Hint: Define a “complex extension” V′ = {u + iv | u, v ∈ V}, which is a vector space over C with the scalar multiplication rule

(a + ib) · (u + iv) = (a · u − b · v) + i · (a · v + b · u) .

Also, extend ϕ to ϕ′ : V′ → V′ defined by ϕ′(u + iv) = ϕ(u) + i · ϕ(v). Then, ϕ′ has at least one (possibly complex) eigenvalue by the previous result. Can you use it to deduce the existence of a real eigenvalue for ϕ?
