
Roberto’s Notes on Linear Algebra

Chapter 10: Eigenvalues and diagonalization Section 6

Orthogonal diagonalization
What you need to know already:

 What it means to diagonalize a matrix.
 How to diagonalize a matrix.

What you can learn here:

 A special kind of diagonalization that is reserved for a special kind of matrix.

By using what we have learned about eigenvalues and similarity, we can prove a really intriguing fact.

Technical fact

If Q is an orthogonal matrix, its only possible eigenvalues are 1 and -1.

Proof

Assume that Q is an orthogonal matrix, $\lambda$ is one of its eigenvalues and $\mathbf{v}$ is one of its eigenvectors. Then:

$$Q\mathbf{v} = \lambda\mathbf{v} \;\Rightarrow\; \|Q\mathbf{v}\| = \|\lambda\mathbf{v}\|$$

But, being an orthogonal matrix, Q preserves lengths. Moreover, the scalar multiple $\lambda$ can be separated by using its absolute value. Therefore:

$$\|Q\mathbf{v}\| = \|\lambda\mathbf{v}\| \;\Rightarrow\; \|\mathbf{v}\| = |\lambda|\,\|\mathbf{v}\| \;\Rightarrow\; |\lambda| = 1 \;\Rightarrow\; \lambda = \pm 1$$
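If you want to see this fact in action, here is a minimal NumPy sketch (my addition; these notes do all computations by hand). A Householder reflection is a standard example of an orthogonal matrix, and its eigenvalues come out as 1 and -1, while lengths are preserved, exactly as the proof predicts:

```python
import numpy as np

# A Householder reflection H = I - 2*v*v^T/(v^T*v) is orthogonal:
# it reflects vectors across the plane perpendicular to v.
v = np.array([1.0, 2.0, 2.0])
H = np.eye(3) - 2.0 * np.outer(v, v) / v.dot(v)

# Orthogonality: H^T H = I, so H preserves lengths.
assert np.allclose(H.T @ H, np.eye(3))
x = np.array([3.0, -1.0, 4.0])
assert np.isclose(np.linalg.norm(H @ x), np.linalg.norm(x))

# Its eigenvalues are exactly 1 and -1, as the technical fact claims.
print(np.round(np.linalg.eigvals(H).real, 6))  # 1 and -1, in some order
```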
So, orthogonal matrices are also nice in the sense that their eigenvalues are simple.

Yes, and that implies that a matrix similar to an orthogonal matrix can also have only 1 and -1 as eigenvalues, a fact that may make other computations easy. But it turns out that another linkage between similarity and orthogonal matrices leads to interesting results.

Definition

A matrix A is orthogonally diagonalizable if it can be written as

$$A = QDQ^{-1}$$

where D is a diagonal matrix and Q is an orthogonal matrix.

So, we are looking at a matrix that is diagonalizable through an orthogonal matrix.
Yes: isn’t that fun to say? But there is an easier characterization of these matrices, one that may be a big surprise for you. In fact, the big surprise I have mentioned a few sections back!

Let me start by pointing out a key property of orthogonally diagonalizable matrices.

Technical fact

Every orthogonally diagonalizable matrix is symmetric.

Proof

We know that the inverse of an orthogonal matrix is its transpose and that a diagonal matrix is obviously symmetric. Therefore, if $A = QDQ^{-1}$:

$$A^T = \left(QDQ^{-1}\right)^T = \left(QDQ^T\right)^T = \left(Q^T\right)^T D^T Q^T = QDQ^{-1} = A$$

Therefore, the matrix is symmetric.
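Here is the same one-way fact checked numerically, in a short NumPy sketch of my own (the random Q and D below are illustrative choices, not from the notes): any product built as Q D Qᵀ from an orthogonal Q and a diagonal D comes out symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthogonal matrix Q (via QR factorization) and a diagonal D.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
D = np.diag(rng.standard_normal(4))

# Since Q is orthogonal, Q^{-1} = Q^T, so A = QDQ^{-1} = QDQ^T.
A = Q @ D @ Q.T

# As the proof shows, any orthogonally diagonalizable matrix is symmetric.
assert np.allclose(A, A.T)
```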
Too bad that this is another one of those one-way statements, or we could say that orthogonally diagonalizable matrices and symmetric matrices are the same thing.

Well, that is the surprise: they are!

Technical fact

The Spectral Theorem for symmetric matrices

Every symmetric matrix is orthogonally diagonalizable.

Wow! And how do we prove that?

That is the problem: unlike the other direction, whose proof is very easy, the proof of this statement is very technical and complicated. So, I will just give you an idea of how it works, but I will leave out the details. But before we do that, here is an example of how to orthogonally diagonalize a symmetric matrix.

Example:

$$\begin{bmatrix} 1 & -2 & 0 \\ -2 & 0 & 2 \\ 0 & 2 & -1 \end{bmatrix}$$

This matrix is symmetric, so we should be able to diagonalize it by using an orthogonal matrix. We use the usual method of finding eigenvalues and eigenvectors and then arranging them properly.
For eigenvalues, we get:

$$\begin{vmatrix} 1-\lambda & -2 & 0 \\ -2 & -\lambda & 2 \\ 0 & 2 & -1-\lambda \end{vmatrix} = -2\begin{vmatrix} 1-\lambda & 0 \\ -2 & 2 \end{vmatrix} - (1+\lambda)\begin{vmatrix} 1-\lambda & -2 \\ -2 & -\lambda \end{vmatrix}$$

$$= -4 + 4\lambda - (1+\lambda)\left(\lambda^2 - \lambda - 4\right) = -4 + 4\lambda - \lambda^2 + \lambda + 4 - \lambda^3 + \lambda^2 + 4\lambda = -\lambda^3 + 9\lambda = -\lambda\left(\lambda^2 - 9\right)$$
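As a quick cross-check of this factorization (my addition, using NumPy), the roots of $-\lambda^3 + 9\lambda$ can be computed numerically:

```python
import numpy as np

# Coefficients of -λ^3 + 0λ^2 + 9λ + 0, highest power first.
print(np.roots([-1.0, 0.0, 9.0, 0.0]))  # 3, -3 and 0, in some order
```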
Therefore, the eigenvalues are $\lambda = -3, 0, 3$, which allows us to get the diagonal matrix:

$$D = \begin{bmatrix} -3 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

Now we construct unit eigenvectors for each eigenvalue:

$$\lambda = -3: \quad \begin{bmatrix} 4 & -2 & 0 \\ -2 & 3 & 2 \\ 0 & 2 & 2 \end{bmatrix} \xrightarrow{\text{REF}} \begin{bmatrix} 2 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; \mathbf{v}_1 = s\begin{bmatrix} -1 \\ -2 \\ 2 \end{bmatrix}$$

$$\lambda = 0: \quad \begin{bmatrix} 1 & -2 & 0 \\ -2 & 0 & 2 \\ 0 & 2 & -1 \end{bmatrix} \xrightarrow{\text{REF}} \begin{bmatrix} 1 & 0 & -1 \\ 0 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; \mathbf{v}_2 = t\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}$$

$$\lambda = 3: \quad \begin{bmatrix} -2 & -2 & 0 \\ -2 & -3 & 2 \\ 0 & 2 & -4 \end{bmatrix} \xrightarrow{\text{REF}} \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; \mathbf{v}_3 = k\begin{bmatrix} -2 \\ 2 \\ 1 \end{bmatrix}$$
Now notice that the three basic vectors we have found are perpendicular to each other. By normalizing each of them, we can obtain an orthonormal set of vectors:

$$\mathbf{v}_1 = s\begin{bmatrix} -1 \\ -2 \\ 2 \end{bmatrix} \;\Rightarrow\; \mathbf{u}_1 = \frac{1}{3}\begin{bmatrix} -1 \\ -2 \\ 2 \end{bmatrix} \qquad \mathbf{v}_2 = t\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} \;\Rightarrow\; \mathbf{u}_2 = \frac{1}{3}\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} \qquad \mathbf{v}_3 = k\begin{bmatrix} -2 \\ 2 \\ 1 \end{bmatrix} \;\Rightarrow\; \mathbf{u}_3 = \frac{1}{3}\begin{bmatrix} -2 \\ 2 \\ 1 \end{bmatrix}$$

If we arrange these three vectors as columns of a matrix, we obtain an orthogonal one, as claimed. Therefore:

$$Q = \frac{1}{3}\begin{bmatrix} -1 & 2 & -2 \\ -2 & 1 & 2 \\ 2 & 2 & 1 \end{bmatrix}, \quad D = \begin{bmatrix} -3 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 3 \end{bmatrix} \;\Rightarrow\; A = QDQ^{-1}$$
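For readers who want to verify this example with software (my addition; the notes do everything by hand), NumPy's symmetric eigensolver reproduces the same eigenvalues and an orthogonal Q in one call:

```python
import numpy as np

A = np.array([[ 1., -2.,  0.],
              [-2.,  0.,  2.],
              [ 0.,  2., -1.]])

# eigh is the eigensolver for symmetric matrices: it returns eigenvalues
# in ascending order and orthonormal eigenvectors as the columns of Q.
eigvals, Q = np.linalg.eigh(A)
print(eigvals)                                  # [-3.  0.  3.]

# Q is orthogonal and orthogonally diagonalizes A: A = Q D Q^T.
assert np.allclose(Q.T @ Q, np.eye(3))
assert np.allclose(Q @ np.diag(eigvals) @ Q.T, A)
```

The columns of Q returned this way may differ from the hand-computed ones by a sign, since any unit eigenvector works equally well.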
Cool! But why is it called the Spectral Theorem?

Because its proof is based on finding certain characteristics of the spectrum of eigenvalues, that is, the entire set of eigenvalues of the matrix. It is also related to a larger set of theorems that are also called spectral for similar reasons (they relate to the entire spectrum of something).

Outline of the proof

The really difficult part of the proof is to show that a symmetric n×n matrix A only has real eigenvalues, that is, that all solutions of the characteristic equation are real numbers. This part of the proof requires the use of complex numbers and their theory.

Once this is done, we know that the sum of the algebraic multiplicities of the eigenvalues of A equals n, but what about their geometric multiplicities? It turns out that for a symmetric matrix, geometric and algebraic multiplicities are equal for each eigenvalue. This is less difficult, but still tricky.

Once we know this, we know that A has enough eigenvectors to make it diagonalizable, so one only needs to prove that the eigenspaces of different eigenvalues are not just independent, but orthogonal. This is another challenging step, but it turns out to be true. This, together with some steps of the modified Gram-Schmidt process, tells us that we can find a set of orthogonal eigenvectors to make up the matrix Q.

Although I have not given you the details of the proof, we can extract from the outline an interesting set of facts about symmetric matrices.
1 0 0   u1T 
0   
Technical facts 0  u 2T 
A = u1 u 2 un   2
  
Given a symmetric, nn matrix A:   
0 0 n  u nT 
 It is always possible to find an orthogonal set
consisting of n eigenvectors for A. And now we only have to compute this product:
 1u1T 
 Any two eigenvectors of A corresponding to  
 2u 2T 
different eigenvalues are orthogonal. A = u1 u 2 un  =  u u T + 2u 2u 2T + + nu nu nT
  1 1 1
 If u1, u2 , , un  is a set of orthonormal  T
nu n 
column eigenvectors, corresponding,
respectively, to the eigenvalues 1, 2 , , n  as claimed.

(with each eigenvalue repeated according to its


multiplicity), then:  1 −2 0 
A =  u u + 2u u + ... + nu u
T T T 
Example: −2 0 2
1 1 1 2 2 n n
 
The last statement is also called the Spectral  0 2 −1
Decomposition Theorem.
We have seen in the previous example that this matrix has eigenvalues
 = −3, 0, 3 with corresponding orthonormal eigenvectors:
 −1  2  −2 
1  1  1 
The first two facts follow immediately from the Spectral Theorem, while the u1 =  −2  u 2 = 1  u3 =  2 
last one, while not immediate, is fairly simple to prove, as follows: 3 3 3
 2   2   1 
Example:

$$\begin{bmatrix} 1 & -2 & 0 \\ -2 & 0 & 2 \\ 0 & 2 & -1 \end{bmatrix}$$

We have seen in the previous example that this matrix has eigenvalues $\lambda = -3, 0, 3$ with corresponding orthonormal eigenvectors:

$$\mathbf{u}_1 = \frac{1}{3}\begin{bmatrix} -1 \\ -2 \\ 2 \end{bmatrix} \qquad \mathbf{u}_2 = \frac{1}{3}\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} \qquad \mathbf{u}_3 = \frac{1}{3}\begin{bmatrix} -2 \\ 2 \\ 1 \end{bmatrix}$$

Therefore, the Spectral Decomposition Theorem allows us to state that:

$$A = \begin{bmatrix} 1 & -2 & 0 \\ -2 & 0 & 2 \\ 0 & 2 & -1 \end{bmatrix} = -3\left(\frac{1}{3}\begin{bmatrix} -1 \\ -2 \\ 2 \end{bmatrix}\right)\left(\frac{1}{3}\begin{bmatrix} -1 & -2 & 2 \end{bmatrix}\right) + 0\left(\frac{1}{3}\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}\right)\left(\frac{1}{3}\begin{bmatrix} 2 & 1 & 2 \end{bmatrix}\right) + 3\left(\frac{1}{3}\begin{bmatrix} -2 \\ 2 \\ 1 \end{bmatrix}\right)\left(\frac{1}{3}\begin{bmatrix} -2 & 2 & 1 \end{bmatrix}\right)$$
 2  Now you got me confused: how is this spectral decomposition useful? The
1  1 
+0  1     2 1 2  + calculations it requires are fairly long…
3
 2   3 
   Its value lies in two features. First, it allows us to write a matrix in terms of
orthogonal vectors, a fact that turns out to be useful in further development of matrix
  −2  theory and linear algebra in general. Second, while the calculations may be long,
1  1 
+3   2     −2 2 1  they are easy, since at every step we are just doing a single product or a single
3
  1   3  addition, without combining them through a dot product.
  
But I must admit that its true value only comes up in later applications.
By adding these three matrices we get: Doesn’t that make you want to take a second linear algebra course?

 −1  −2  I am not sure about that!


= −  −2   −1 − 2 2 +  2   −2 2 1
1 1
Well, in the worst-case scenario that this is the last fact about matrices you’ll
3 3
 2   1  ever learn, it is now part of your knowledge and you are in the company of a very
small group of people educated enough to know it!
1 2 −2   4 −4 −2   3 −6 0 
1  1
2  =  −6 0 6 
 1 Congratulations!
=− 2 4 −4  +  −4 4
3 3 3
 −2 −4 4   −2 2 1   0 6 −3

And isn’t that true?

Summary
 Symmetric matrices are the one and only matrices that can be diagonalized through an orthogonal matrix.
 This implies that symmetric matrices can be decomposed into a sum of products of orthogonal vectors.

Common errors to avoid


 This is a very theoretical section, but with a very simple and practical punch line! Enjoy the punch line, but do not ignore the theoretical part, even if it looks strange and complicated. Wrestling with it will do your intellectual health much good!

Learning questions for Section LA 10-6

Review questions:

1. Describe why the Spectral Theorem is so surprising and difficult to prove.

Memory questions:

1. What are the possible eigenvalues for an orthogonal matrix?

2. What does the Spectral theorem state?

3. Which matrices are orthogonally diagonalizable?

4. Are all symmetric matrices diagonalizable?

Computation questions:

1 2 2  3. Construct a symmetric matrix whose eigenvalues are 1, 2, 3 and whose


corresponding eigenvectors are 1 2 2 ,  2 −1 0 , 2 4 −5
1. Construct an orthogonal diagonalization of the matrix A =  2 1 2 .
  respectively.
2 2 1 
4. Determine one pair of values ( x, y ) , if any, for which
2. Construct a symmetric matrix whose eigenvalues are –1, 0 and 1 with  x 3y x 3y 
1   −1  0   x −5 y y y 
   1  , 0 
corresponding eigenvectors 1 , Q= is an orthogonal matrix, and one pair for
      3 y x / 3 y −5 y 
 0   0  1   
x y −5 y y 
which it is orthogonally diagonalizable. I expect you to determine such values
through an organized procedure, NOT by trial and error.

Theory questions:

1. If Q is an orthogonal matrix, what can the product of its eigenvalues be?

2. What is special about symmetric matrices with respect to diagonalization?

3. What feature of the matrix A tells us that it is not orthogonally diagonalizable?

4. Can an upper triangular matrix be orthogonally diagonalizable?

5. Does every orthogonal matrix have eigenvalues?

6. Assuming that a matrix A is orthogonally diagonalizable, when may the Gram-Schmidt process be needed to complete such diagonalization?

7. Which theorem relates symmetric and orthogonal matrices?

Proof questions:

1. Prove that if Q is an orthogonal matrix and $\lambda$ is one of its eigenvalues, then $\lambda^n Q$ is also orthogonal for any whole number n.

2. Prove that if A and B are two orthogonal matrices of the same size, then the square of the product of all eigenvalues of BA is 1.

Templated questions:

1. Construct a simple symmetric matrix and obtain its orthogonal diagonalization.

What questions do you have for your instructor?

