1 Self-Adjoint Transformations: Lecture 5: October 12, 2021
1 Self-adjoint transformations
Proposition 1.3 Let V be an inner product space and let ϕ : V → V be a self-adjoint linear
operator. Then
- All eigenvalues of ϕ are real.
- If {w₁, ..., wₙ} are eigenvectors corresponding to distinct eigenvalues, then they are mutually orthogonal.
Proof: The first property can be observed by noting that if v ∈ V \ {0V} is an eigenvector
with eigenvalue λ, then
λ·⟨v, v⟩ = ⟨ϕ(v), v⟩ = ⟨v, ϕ*(v)⟩ = ⟨v, ϕ(v)⟩ = λ̄·⟨v, v⟩.
Since ⟨v, v⟩ ≠ 0, we must have λ = λ̄, which implies that λ ∈ R. For the second part,
observe that if i ≠ j, then we have
λⱼ·⟨wᵢ, wⱼ⟩ = ⟨wᵢ, ϕ(wⱼ)⟩ = ⟨ϕ*(wᵢ), wⱼ⟩ = ⟨ϕ(wᵢ), wⱼ⟩ = λᵢ·⟨wᵢ, wⱼ⟩.
Since eigenvalues are real, we get (λᵢ − λⱼ)·⟨wᵢ, wⱼ⟩ = 0, which implies ⟨wᵢ, wⱼ⟩ = 0 using
λᵢ ≠ λⱼ.
2 The Real Spectral Theorem
In this lecture, we will prove the “real spectral theorem” for self-adjoint operators ϕ : V → V
(so named because the eigenvalues of a self-adjoint operator are real, not because other
spectral theorems are fake!) We will show that any such operator is not only diagonalizable
(has a basis of eigenvectors) but is in fact orthogonally diagonalizable, i.e., has an orthonormal
basis of eigenvectors. This gives a very convenient way of thinking about the action of
such operators. In particular, let dim(V) = n and let {w₁, ..., wₙ} form an orthonormal basis
of eigenvectors for ϕ, with corresponding eigenvalues λ₁, ..., λₙ. Then for any vector v
expressible in this basis as (say) v = ∑ᵢ₌₁ⁿ cᵢ·wᵢ, we can think of the action of ϕ as
ϕ(v) = ϕ(∑ᵢ₌₁ⁿ cᵢ·wᵢ) = ∑ᵢ₌₁ⁿ cᵢ·λᵢ·wᵢ.
Of course, we can also think of the action of ϕ in this way as long as w₁, ..., wₙ form a basis
(not necessarily orthonormal). However, this view is particularly useful when they form
an orthonormal basis. As we will later see, this also provides the “right” basis to think
about many matrices, such as the adjacency matrices of graphs (where such decompositions
are the subject of spectral graph theory). To prove the spectral theorem, we will need
the following statement (which we’ll prove later).
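The eigenbasis view of the action of ϕ can be checked numerically. The following sketch (added; assumes NumPy, and all variable names are illustrative) compares A @ v with the sum ∑ᵢ cᵢ·λᵢ·wᵢ:

```python
# Illustration (added; assumes NumPy): applying a self-adjoint operator via
# its orthonormal eigenbasis. With c_i = <v, w_i>, the action A @ v equals
# sum_i c_i * lambda_i * w_i.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = B + B.T  # a self-adjoint (symmetric) operator on R^3

lam, W = np.linalg.eigh(A)  # columns of W: an orthonormal eigenbasis
v = rng.standard_normal(3)

c = W.T @ v                     # coordinates c_i = <v, w_i> (by orthonormality)
via_eigenbasis = W @ (lam * c)  # sum_i c_i * lambda_i * w_i

print(np.allclose(A @ v, via_eigenbasis))  # True
```

Note that the coordinates cᵢ are obtained by a single matrix-vector product precisely because the basis is orthonormal; for a general basis one would have to solve a linear system instead.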
Proposition 2.1 Let V be a finite-dimensional inner product space (over R or C) and let ϕ : V →
V be a self-adjoint linear operator. Then ϕ has at least one eigenvalue.
Using the above proposition, we will prove the spectral theorem below for finite-dimensional
vector spaces. The proof below can also be made to work for Hilbert spaces (using
the axiom of choice). The above proposition, which gives the existence of an eigenvalue,
is often proved differently for finite- and infinite-dimensional spaces, and the proof for
infinite-dimensional Hilbert spaces requires additional conditions on the operator ϕ. We
first prove the spectral theorem assuming the above proposition.
Proposition 2.2 (Real spectral theorem) Let V be a finite-dimensional inner product space and
let ϕ : V → V be a self-adjoint linear operator. Then ϕ is orthogonally diagonalizable.
Proof: We proceed by induction on dim(V). The base case dim(V) = 1 is immediate, since
any non-zero vector spans V and is hence an eigenvector. For dim(V) = k + 1, Proposition 2.1
gives an eigenvalue λ₁ of ϕ; let w₁ be a corresponding eigenvector with ∥w₁∥ = 1, and let
W = Span({w₁}). One can verify that:
- W⊥ is a subspace of V.
- dim(W⊥) = k.
- W⊥ is invariant under ϕ, i.e., ∀v ∈ W⊥, ϕ(v) ∈ W⊥.
The restriction of ϕ to W⊥ is then a self-adjoint operator on a k-dimensional space, so by
the induction hypothesis it has an orthonormal basis of eigenvectors {w₂, ..., w_{k+1}}. Since
these all lie in W⊥, appending w₁ yields an orthonormal basis of V consisting of eigenvectors
of ϕ.
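In matrix form, the theorem says that a real symmetric matrix A factors as A = WΛWᵀ with W orthogonal and Λ diagonal. A minimal numerical check (added; assumes NumPy):

```python
# Numerical check (added; assumes NumPy): a real symmetric matrix A is
# orthogonally diagonalizable, i.e. A = W @ diag(lam) @ W.T with W orthogonal.
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B + B.T  # symmetric 5x5 matrix

lam, W = np.linalg.eigh(A)  # eigenvalues and an orthonormal eigenbasis

assert np.allclose(W.T @ W, np.eye(5))         # columns of W are orthonormal
assert np.allclose(W @ np.diag(lam) @ W.T, A)  # A = W Λ W^T
print("A = W Λ W^T with W orthogonal")
```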
3 Existence of eigenvalues
We now prove Proposition 2.1, which shows that a self-adjoint operator must have at least
one eigenvalue. Let us assume for now that V is an inner product space over C. As was
observed in class, in this case we don’t need self-adjointness to guarantee an eigenvalue.
We thus prove the following more general result.
Proposition 3.1 Let V be a finite-dimensional vector space over C and let ϕ : V → V be a linear
operator. Then ϕ has at least one eigenvalue.
Proof: Let dim(V) = n and let v ∈ V \ {0V} be any non-zero vector. Consider the set of n + 1
vectors {v, ϕ(v), ..., ϕⁿ(v)}. Since the dimension of V is n, these vectors are linearly
dependent, so there must exist c₀, ..., cₙ ∈ C, not all zero, such that
c₀·v + c₁·ϕ(v) + ··· + cₙ·ϕⁿ(v) = 0V.
We assume above that cₙ ≠ 0; otherwise, we can instead consider the sum only up to the
largest i such that cᵢ ≠ 0. Let P(x) denote the polynomial c₀ + c₁x + ··· + cₙxⁿ. Then the
above can be written as (P(ϕ))(v) = 0V, where P(ϕ) : V → V is the linear operator defined as
P(ϕ) := c₀·id + c₁·ϕ + ··· + cₙ·ϕⁿ,
with id used to denote the identity operator. Since P is a degree-n polynomial over C, it can
be factored into n linear factors, and we can write P(x) = cₙ·∏ᵢ₌₁ⁿ (x − λᵢ) for λ₁, ..., λₙ ∈ C.
This means that we can write
P(ϕ) = cₙ·(ϕ − λₙ·id) ··· (ϕ − λ₁·id).
Let w₀ = v and define wᵢ = ϕ(wᵢ₋₁) − λᵢ·wᵢ₋₁ for i ∈ [n]. Note that w₀ = v ≠ 0V and
wₙ = (1/cₙ)·(P(ϕ))(v) = 0V. Let i* denote the largest index i such that wᵢ ≠ 0V (such an
index exists since w₀ ≠ 0V and wₙ = 0V). Then, we have
0V = w_{i*+1} = ϕ(w_{i*}) − λ_{i*+1}·w_{i*}.
This implies that w_{i*} is an eigenvector with eigenvalue λ_{i*+1}.
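The proof above is effectively an algorithm. The following sketch (added, not from the notes; assumes NumPy, and the tolerances are ad hoc) carries it out for a random complex matrix:

```python
# Runnable sketch (added; assumes NumPy, ad hoc tolerances) of the eigenvalue-
# existence argument: the n+1 vectors v, Av, ..., A^n v are dependent; the
# dependence gives a polynomial P with P(A)v = 0, and peeling off the linear
# factors of P one at a time produces an eigenvector.
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Columns v, Av, ..., A^n v: n+1 vectors in C^n, hence linearly dependent.
K = np.column_stack([np.linalg.matrix_power(A, k) @ v for k in range(n + 1)])

# A non-zero c with K @ c ≈ 0: the coefficients c_0, ..., c_n of P.
c = np.linalg.svd(K)[2][-1].conj()

# Roots of P(x) = c_0 + c_1 x + ... + c_n x^n (np.roots wants the
# highest-degree coefficient first).
roots = np.roots(c[::-1])

# Apply the factors (A - lambda_i * I) one by one; when the next vector
# vanishes, the current (normalized) vector is an eigenvector.
w = v / np.linalg.norm(v)
for lam in roots:
    w_next = A @ w - lam * w
    if np.linalg.norm(w_next) < 1e-8:
        break
    w = w_next / np.linalg.norm(w_next)

assert np.linalg.norm(A @ w - lam * w) < 1e-6  # w is an eigenvector for lam
```

The normalization at each step is only for numerical hygiene; it does not affect which vector is (a scalar multiple of) an eigenvector.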
To prove Proposition 2.1 using this, we note that ϕ = ϕ* implies that the eigenvalue found
by the above proposition must be real.
Exercise 3.2 Use the fact that the eigenvalues of a self-adjoint operator are real to prove Proposition
2.1 even when V is an inner product space over R.
(For the complexification approach, one extends V to a complex vector space whose elements
are formal sums u + iv with u, v ∈ V, where a complex scalar a + ib acts as
(a + ib)·(u + iv) = (a·u − b·v) + i·(a·v + b·u).)
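The multiplication rule above is just ordinary complex multiplication carried out on the pair (u, v); a quick check (added for illustration) in the one-dimensional case, where u and v are plain numbers:

```python
# Quick check (added): the scalar-multiplication rule is ordinary complex
# multiplication applied to the pair (u, v), shown in the 1-dimensional case.
a, b = 2.0, 3.0  # scalar a + ib
u, v = 5.0, 7.0  # "vector" u + iv

z = complex(a, b) * complex(u, v)
real_part = a * u - b * v  # a·u − b·v
imag_part = a * v + b * u  # a·v + b·u

print(z == complex(real_part, imag_part))  # True
```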