Introduction To Linear Algebra V: Eigenvalue and Eigenvector
Jack Xin (Lecture) and J. Ernie Esser (Lab)
Department of Mathematics, UCI, Irvine, CA 92617.
Abstract
Eigenvalue, eigenvector, Hermitian matrices, orthogonality, orthonormal basis,
singular value decomposition.
2 Hermitian Matrix
For any complex valued matrix A, define $A^H = \bar{A}^T$, where the bar denotes complex conjugation. A is Hermitian if $A^H = A$, for example:
$$A = \begin{pmatrix} 3 & 2-i \\ 2+i & 4 \end{pmatrix}.$$
A basic fact is that the eigenvalues of a Hermitian matrix A are real, and eigenvectors of distinct eigenvalues are orthogonal. Two complex column vectors x and y of the same dimension are orthogonal if $x^H y = 0$. The proof is short and given below. Consider the eigenvalue equation:
$$Ax = \lambda x, \quad x \neq 0,$$
and let $\alpha = x^H A x = \lambda\, x^H x$. Since A is Hermitian,
$$\bar{\alpha} = \alpha^H = (x^H A x)^H = x^H A^H x = x^H A x = \alpha,$$
so $\alpha$ is real; as $x^H x > 0$, the eigenvalue $\lambda = \alpha/(x^H x)$ is real.
Next let $A x_1 = \lambda_1 x_1$ and $A x_2 = \lambda_2 x_2$ with $\lambda_1 \neq \lambda_2$. On one hand, since $\lambda_1$ is real,
$$(A x_1)^H x_2 = (\lambda_1 x_1)^H x_2 = \lambda_1 x_1^H x_2;$$
on the other hand,
$$(A x_1)^H x_2 = x_1^H A^H x_2 = x_1^H A x_2 = \lambda_2 x_1^H x_2.$$
Hence $(\lambda_1 - \lambda_2)\, x_1^H x_2 = 0$, so $\lambda_1 \neq \lambda_2$ implies $x_1^H x_2 = 0$.
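A quick numerical check of both facts, as a minimal Matlab sketch using the example matrix above:

    % Verify real eigenvalues and orthogonal eigenvectors of a Hermitian matrix.
    A = [3, 2-1i; 2+1i, 4];     % the example above; A' is the conjugate transpose A^H
    disp(norm(A - A'))          % 0: A is Hermitian
    [X, D] = eig(A);            % columns of X are eigenvectors, D holds eigenvalues
    disp(diag(D))               % both eigenvalues are real
    disp(X(:,1)' * X(:,2))      % ~0: eigenvectors of distinct eigenvalues are orthogonal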
It follows that, by choosing an orthonormal basis for each eigenspace, a Hermitian matrix A has n orthonormal (orthogonal and of unit length) eigenvectors, which form an orthonormal basis for $C^n$. Putting the orthonormal eigenvectors as columns yields a matrix U such that $U^H U = I$, called a unitary matrix. If A is real, the unitary matrix becomes an orthogonal matrix, $U^T U = I$.
Clearly a Hermitian matrix can be diagonalized by a unitary matrix ($A = U D U^H$, with D the diagonal matrix of eigenvalues). The necessary and sufficient condition for unitary diagonalization of a matrix is that it is normal, i.e. it satisfies the equation:
$$A A^H = A^H A.$$
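Both properties can be checked in Matlab on the example matrix; a minimal sketch:

    A = [3, 2-1i; 2+1i, 4];     % the Hermitian example above
    [U, D] = eig(A);            % for Hermitian A, eig returns orthonormal eigenvectors
    disp(norm(U'*U - eye(2)))   % ~0: U is unitary, U^H U = I
    disp(norm(A - U*D*U'))      % ~0: A = U D U^H
    disp(norm(A*A' - A'*A))     % 0: A is normal, as every Hermitian matrix is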
3 Orthogonal Basis
In $R^n$, let $v_1, v_2, \ldots, v_n$ be n orthonormal column vectors, $v_i^T v_j = \delta_{ij}$ (= 1 if i = j, otherwise 0). Then each vector v has the representation:
$$v = \sum_{j=1}^{n} c_j v_j, \quad c_j = v^T v_j.$$
Here $c_j v_j$ is the projection of v onto $v_j$.
If $u = \sum_{j=1}^{n} b_j v_j$, then:
$$u^T v = \sum_{j=1}^{n} b_j c_j,$$
and
$$\|u\|_2^2 = \text{length of u squared} = u^T u = \sum_{j=1}^{n} b_j^2.$$
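A small Matlab sketch illustrating the expansion and these identities (the orthonormal basis here comes from a QR factorization of a random matrix, an assumption for illustration):

    n = 5;
    [V, ~] = qr(randn(n));          % columns of V form an orthonormal basis of R^n
    v = randn(n, 1);
    c = V' * v;                     % coefficients c_j = v_j^T v
    disp(norm(v - V*c))             % ~0: v = sum_j c_j v_j
    u = randn(n, 1);
    b = V' * u;                     % coefficients b_j of u
    disp(abs(u'*v - sum(b.*c)))     % ~0: u^T v = sum_j b_j c_j
    disp(abs(norm(u)^2 - sum(b.^2)))  % ~0: ||u||^2 = sum_j b_j^2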
A classical example is the discrete cosine transform (DCT): the rows of the n × n DCT matrix form an orthonormal basis of $R^n$, and the transform coefficients of a signal x are $X = DCT * x$, where DCT denotes the transform matrix.
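For instance, a brief Matlab check (dctmtx, from the Image Processing Toolbox, builds the n × n DCT matrix; its availability is an assumption here):

    n = 8;
    D = dctmtx(n);                  % n x n DCT matrix (Image Processing Toolbox)
    disp(norm(D*D' - eye(n)))       % ~0: rows of D are orthonormal
    x = randn(n, 1);
    X = D * x;                      % DCT coefficients X = DCT * x
    disp(abs(norm(x)^2 - sum(X.^2)))  % ~0: coefficients preserve length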
4 Singular Value Decomposition
Any real m × n matrix A (m ≥ n) has a singular value decomposition (SVD):
$$A = U \Sigma V^T, \qquad (4.3)$$
where U is an m × m orthogonal matrix, V is an n × n orthogonal matrix, and
$$\Sigma = [\mathrm{diag}([\sigma_1\ \sigma_2\ \cdots\ \sigma_n]); 0]$$
is an m × n matrix, where 0 denotes m − n zero rows of length n. The numbers $\sigma_i$ are called singular values.
It follows from (4.3) that $A^T A = V \Sigma^T \Sigma V^T$, where
$$\Sigma^T \Sigma = \mathrm{diag}([\sigma_1^2,\ \sigma_2^2,\ \cdots,\ \sigma_n^2]),$$
so $\sigma_j^2$ (j = 1:n) are the real eigenvalues of $A^T A$, while the columns of V are the corresponding orthonormal eigenvectors. From $A V = U \Sigma$, we see that each column of U is $u_j = A v_j / \sigma_j$, j = 1, 2, ..., r, where r is the rank of A (the number of nonzero singular values). Check that the $u_j$'s are orthonormal. Putting the $u_j$'s (j = 1:r) together gives part of the column vectors of U (the $U_1$ in $U = [U_1\ U_2]$); the other part, $U_2$, is the orthogonal complement. Since the $u_j$'s (j = 1:r) span the range of A (range(A)), $U_2$ consists of orthonormal column vectors in $\mathrm{range}(A)^{\perp} = N(A^T)$, the nullspace of $A^T$.
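These relations can be verified in Matlab; a minimal sketch on a random matrix (full rank with probability one, an assumption for illustration):

    m = 5; n = 3;
    A = randn(m, n);                % random A has rank r = n almost surely
    [U, S, V] = svd(A);
    sigma = diag(S);                % singular values, all nonzero here
    U1 = (A * V) ./ sigma';         % u_j = A v_j / sigma_j, j = 1:n
    disp(norm(U1 - U(:, 1:n)))      % ~0: recovers the first r columns of U
    disp(norm(U1' * U1 - eye(n)))   % ~0: the u_j's are orthonormal
    disp(norm(U(:, n+1:m)' * A))    % ~0: columns of U2 lie in N(A^T)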
In Matlab, [U,S,V] = svd(A) gives the result (S in lieu of $\Sigma$). Keeping k < r of the singular values gives the rank-k approximation of A, $A \approx U S_k V^T$, where $S_k$ is obtained from S by zeroing out $\sigma_j$ (j = k+1:r); this so-called low-rank approximation is useful in image compression among other applications. The approximation is optimal in the Frobenius sense (the Euclidean, $l^2$, norm of the matrix entries).
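A minimal Matlab sketch of the low-rank approximation on a random matrix (an illustration, not the image application itself); by the optimality property, the Frobenius error equals the square root of the energy in the discarded singular values:

    m = 6; n = 4; k = 2;
    A = randn(m, n);
    [U, S, V] = svd(A);
    Sk = S;
    Sk(k+1:end, k+1:end) = 0;       % zero out sigma_j, j = k+1:r
    Ak = U * Sk * V';               % the rank-k approximation A ~ U S_k V^T
    sigma = diag(S);
    disp(norm(A - Ak, 'fro') - norm(sigma(k+1:end)))  % ~0: error matches discarded sigma's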