
Introduction to Linear Algebra V


Jack Xin (Lecture) and J. Ernie Esser (Lab)
Department of Mathematics, UCI, Irvine, CA 92617.

Abstract
Eigenvalue, eigenvector, Hermitian matrices, orthogonality, orthonormal basis,
singular value decomposition.

1 Eigenvalue and Eigenvector


For an n × n matrix A, if

A x = λ x,   (1.1)

has a nonzero solution x for some complex number λ, then x is an eigenvector corresponding
to the eigenvalue λ. Equation (1.1) is the same as saying that x belongs to the null space of
A − λI, that A − λI is singular, or that the so-called characteristic equation holds:

det(A − λI) ≡ p(λ) = 0,   (1.2)

where p(λ) is a polynomial of degree n; hence there are n complex eigenvalues, counted with
multiplicity.


In Matlab, eigenvalues and eigenvectors are given by [V,D]=eig(A), where the columns of
V are eigenvectors and D is a diagonal matrix whose entries are the eigenvalues.
A matrix A is diagonalizable (A = V D V^{-1}, D diagonal) if and only if it has n linearly
independent eigenvectors. A sufficient condition is that all n eigenvalues are distinct.
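
As a quick numerical sketch (the 3 × 3 matrix below is an arbitrary example, not one from
the text), we can verify the factorization A = V D V^{-1}:

  % eigen-decomposition of an arbitrary 3-by-3 matrix with distinct eigenvalues
  A = [2 1 0; 1 3 1; 0 1 2];
  [V, D] = eig(A);        % columns of V: eigenvectors; diag(D): eigenvalues
  disp(diag(D))           % the eigenvalues 1, 2, 4
  disp(norm(A - V*D/V))   % ~ 1e-15, confirming A = V*D*inv(V)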

2 Hermitian Matrix
For any complex-valued matrix A, define A^H = Ā^T, where the bar denotes complex
conjugation. A is Hermitian if A^H = A; for example:

  A = [ 3       2 − i
        2 + i   4     ]


A basic fact is that the eigenvalues of a Hermitian matrix A are real, and that eigenvectors
corresponding to distinct eigenvalues are orthogonal. Two complex column vectors x and y
of the same dimension are orthogonal if x^H y = 0. The proof is short and given below.
Consider the eigenvalue equation:

A x = λ x,

and let α = x^H A x; then, since A^H = A:

ᾱ = α^H = (x^H A x)^H = x^H A^H x = x^H A x = α,

so α is real. On the other hand, α = λ x^H x and x^H x > 0, so λ is real.


Let x_i (i = 1, 2) be eigenvectors corresponding to distinct eigenvalues λ_i (i = 1, 2). We have
the identities:

(A x_1)^H x_2 = x_1^H A^H x_2 = x_1^H A x_2 = λ_2 x_1^H x_2,

(A x_1)^H x_2 = (x_2^H A x_1)^H = (λ_1 x_2^H x_1)^H = λ_1 x_1^H x_2,

using that λ_1 is real, so λ_1 ≠ λ_2 implies x_1^H x_2 = 0.

It follows that, by choosing an orthonormal basis for each eigenspace, a Hermitian matrix A
has n orthonormal (mutually orthogonal and of unit length) eigenvectors, which form an
orthonormal basis for C^n. Putting the orthonormal eigenvectors as columns yields a matrix
U with U^H U = I, called a unitary matrix. If A is real, the unitary matrix becomes an
orthogonal matrix, U^T U = I.
Clearly a Hermitian matrix can be diagonalized by a unitary matrix (A = U D U^H).
The necessary and sufficient condition for a matrix to be unitarily diagonalizable is that it
is normal, i.e. that it satisfies the equation:

A A^H = A^H A.

This includes any skew-Hermitian matrix (A^H = −A).
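
As a numerical sketch, these facts can be checked in Matlab on the Hermitian example
above (for Hermitian input, eig returns an orthonormal set of eigenvectors):

  % real eigenvalues and unitary diagonalization of the Hermitian example
  A = [3, 2-1i; 2+1i, 4];
  disp(isequal(A, A'))        % 1: A' is the conjugate transpose, so A is Hermitian
  [U, D] = eig(A);
  disp(diag(D))               % both eigenvalues are real: (7 ± sqrt(21))/2
  disp(norm(U'*U - eye(2)))   % ~ 0: U is unitary
  disp(norm(A - U*D*U'))      % ~ 0: A = U D U^H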

3 Orthogonal Basis
In R^n, let v_1, v_2, ..., v_n be n orthonormal column vectors, v_i^T v_j = δ_ij (= 1 if i = j,
0 otherwise). Then each vector v has the representation:

v = Σ_{j=1}^n c_j v_j,   c_j = v^T v_j.

Here c_j v_j is the projection of v onto v_j.
If u = Σ_{j=1}^n b_j v_j, then:

u^T v = Σ_{j=1}^n b_j c_j,

and

‖u‖_2^2 = (the length of u squared) = u^T u = Σ_{j=1}^n b_j^2,

which is called the Parseval formula (a generalization of the Pythagorean theorem).
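
A small numerical sketch of these identities, using an orthonormal basis taken (as an
arbitrary choice) from a QR factorization of a random matrix:

  % expansion coefficients and the Parseval formula in R^n
  n = 5;
  [Q, ~] = qr(randn(n));            % columns of Q: an orthonormal basis of R^n
  v = randn(n, 1);
  c = Q' * v;                       % all coefficients c_j = v_j^T v at once
  disp(norm(v - Q*c))               % ~ 0: v = sum_j c_j v_j
  disp(abs(norm(v)^2 - sum(c.^2)))  % ~ 0: the Parseval formula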


An example of an N-dimensional orthogonal basis is given by the discrete cosine transform
(DCT):

X = DCT ∗ x,

where DCT is an N × N orthogonal matrix with entries:

DCT = [ w(k) cos( π (n − 1/2)(k − 1)/N ) ]_{k=1:N, n=1:N},

w(1) = 1/√N;   w(k) = √(2/N), if k ≥ 2.

In Matlab, X = dct(x), DCT = dct(eye(N)), N ≥ 2.
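
A sketch checking the orthogonality of the DCT matrix (the dct function assumes Matlab's
Signal Processing Toolbox is available):

  % dct(eye(N)) returns the N-by-N DCT matrix, which is orthogonal
  N = 8;
  C = dct(eye(N));
  disp(norm(C'*C - eye(N)))   % ~ 0: orthogonality
  x = randn(N, 1);
  disp(norm(dct(x) - C*x))    % ~ 0: dct(x) is the same as C*x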

4 Singular Value Decomposition (SVD)


For a general real m × n matrix A, a factorization similar to the orthogonal diagonalization
of symmetric matrices (A^T = A) is the SVD. Suppose m ≥ n; then there exist an m × m
orthogonal matrix U, an n × n orthogonal matrix V, and non-negative numbers σ_1 ≥ σ_2 ≥ · · · ≥ σ_n ≥
0, such that

A = U Σ V^T,   (4.3)

and

Σ = [diag([σ_1 σ_2 · · · σ_n]); 0]

is an m × n matrix, where 0 denotes m − n zero rows of length n. The numbers σ_i are
called the singular values.
It follows from (4.3) that A^T A = V Σ^T Σ V^T, with

Σ^T Σ = diag([σ_1^2, σ_2^2, · · · , σ_n^2]),

so σ_j^2 (j = 1:n) are the real eigenvalues of A^T A, while the columns of V are the
corresponding orthonormal eigenvectors. From A V = U Σ, we see that the j-th column of U is
u_j = A v_j / σ_j, j = 1, 2, ..., r, where r is the rank of A (the number of nonzero singular
values). One can check that the u_j's are orthonormal. Putting the u_j's (j = 1:r) together
gives part of the column vectors of U (the U_1 in U = [U_1 U_2]); the other part, U_2, is an
orthonormal basis of the orthogonal complement. Since the u_j's (j = 1:r) span the range of
A (range(A)), U_2 consists of orthonormal column vectors in range(A)^⊥ = N(A^T), the
nullspace of A^T.
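
A sketch of this construction on an arbitrary small matrix (using the Matlab svd call
described next; the columnwise division relies on implicit expansion):

  % recovering the first r columns of U from u_j = A*v_j/sigma_j
  A = [1 2; 3 4; 5 6];           % an arbitrary 3-by-2 matrix of rank r = 2
  [U, S, V] = svd(A);
  sig = diag(S);                 % sigma_1 >= sigma_2 > 0
  U1 = (A * V) ./ sig';          % divide column j by sigma_j
  disp(norm(U1 - U(:, 1:2)))     % ~ 0: matches the first r columns of U
  disp(norm(U1'*U1 - eye(2)))    % ~ 0: the u_j's are orthonormal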
In Matlab, [U,S,V]=svd(A) gives the result (S in lieu of Σ). Keeping only k < r of the
singular values gives a rank-k approximation of A, A ≈ U S_k V^T, where S_k is obtained
from S by zeroing out σ_j (j = k+1:r). This so-called low-rank approximation is useful in
image compression, among other applications. The approximation is optimal in the Frobenius
sense (i.e. in the Euclidean, l2, norm applied to the matrix entries).
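
A minimal sketch of the rank-k approximation on random data: keeping the leading k
singular triples is the same as zeroing out σ_j for j > k, and the Frobenius error equals the
root sum of squares of the discarded singular values:

  % rank-k approximation of A by truncating the SVD
  A = randn(100, 60);
  [U, S, V] = svd(A);
  k = 10;
  Ak = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';     % rank-k approximation
  disp(norm(A - Ak, 'fro'))                      % Frobenius error
  disp(sqrt(sum(diag(S(k+1:end, k+1:end)).^2)))  % same value: discarded sigmas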
