Orthogonal Diagonalization of Symmetric Matrices: MATH10212 - Linear Algebra - Brief Lecture Notes
Since $A$ is symmetric, $(A\vec v_1)^T \vec v_2 = \vec v_1^T A^T \vec v_2 = \vec v_1^T (A\vec v_2) = \vec v_1 \cdot (\lambda_2 \vec v_2) = \lambda_2 (\vec v_1 \cdot \vec v_2)$; on the other hand, $(A\vec v_1)^T \vec v_2 = (\lambda_1 \vec v_1)^T \vec v_2 = \lambda_1 (\vec v_1 \cdot \vec v_2)$. Thus, $\lambda_1 (\vec v_1 \cdot \vec v_2) = \lambda_2 (\vec v_1 \cdot \vec v_2)$. Since $\lambda_1 \neq \lambda_2$ by hypothesis, we must have $\vec v_1 \cdot \vec v_2 = 0$.
Example. For
$$A = \begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & 2 \\ 2 & 2 & 1 \end{pmatrix}$$
the characteristic polynomial is
$$\det(A - \lambda I) = \begin{vmatrix} 1-\lambda & 2 & 2 \\ 2 & 1-\lambda & 2 \\ 2 & 2 & 1-\lambda \end{vmatrix} = (1-\lambda)^3 + 8 + 8 - 4(1-\lambda) - 4(1-\lambda) - 4(1-\lambda) = \cdots = -(\lambda - 5)(\lambda + 1)^2.$$
Thus, the eigenvalues are $5$ and $-1$.
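The simplification elided by the dots can be carried out, for instance, via the substitution $t = 1 - \lambda$ (this intermediate algebra is supplied here; it is not in the original notes):
$$(1-\lambda)^3 + 16 - 12(1-\lambda) \;=\; t^3 - 12t + 16 \;=\; (t-2)^2(t+4),$$
and substituting back $t = 1 - \lambda$ gives $(-1-\lambda)^2(5-\lambda) = -(\lambda - 5)(\lambda + 1)^2$.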
Eigenspace $E_{-1}$: solve $(A - (-1)I)\vec x = \vec 0$:
$$\begin{pmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix};$$
$x_1 = -x_2 - x_3$, where $x_2, x_3$ are free variables;
$$E_{-1} = \left\{ \begin{pmatrix} -s-t \\ s \\ t \end{pmatrix} \,\middle|\, s, t \in \mathbb{R} \right\};$$
a basis of $E_{-1}$: $\vec u_1 = \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}$, $\vec u_2 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$.
Apply Gram–Schmidt to orthogonalize: take $\vec v_1 = \vec u_1$ and seek $\vec v_2 = \vec u_2 + c\vec v_1$; for $\vec v_2 \cdot \vec v_1 = 0$ we obtain
$$c = -\frac{\vec u_2 \cdot \vec v_1}{\vec v_1 \cdot \vec v_1} = -\frac{1}{2}; \qquad \text{thus } \vec v_2 = \vec u_2 - \tfrac{1}{2}\vec v_1 = \begin{pmatrix} -1/2 \\ -1/2 \\ 1 \end{pmatrix}.$$
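As a quick check (added here) that the new vector is indeed orthogonal to $\vec v_1$:
$$\vec v_2 \cdot \vec v_1 = \left(-\tfrac{1}{2}\right)(-1) + \left(-\tfrac{1}{2}\right)(1) + (1)(0) = \tfrac{1}{2} - \tfrac{1}{2} = 0.$$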
Eigenspace $E_5$: solve $(A - 5I)\vec x = \vec 0$:
$$\begin{pmatrix} -4 & 2 & 2 \\ 2 & -4 & 2 \\ 2 & 2 & -4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix};$$
solving this system gives $x_1 = x_2 = x_3$, where $x_3$ is a free variable;
$$E_5 = \left\{ \begin{pmatrix} t \\ t \\ t \end{pmatrix} \,\middle|\, t \in \mathbb{R} \right\};$$
a basis of $E_5$: $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$.
(Note that $E_5$ is automatically orthogonal to $E_{-1}$: by the result proved above, eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal.)
Normalize:
$$\begin{pmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{pmatrix}, \qquad \begin{pmatrix} -1/\sqrt{6} \\ -1/\sqrt{6} \\ \sqrt{2/3} \end{pmatrix}, \qquad \begin{pmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{pmatrix}.$$
Let
$$Q = \begin{pmatrix} -1/\sqrt{2} & -1/\sqrt{6} & 1/\sqrt{3} \\ 1/\sqrt{2} & -1/\sqrt{6} & 1/\sqrt{3} \\ 0 & \sqrt{2/3} & 1/\sqrt{3} \end{pmatrix},$$
which is an orthogonal matrix; then
$$Q^T A Q = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 5 \end{pmatrix}.$$
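This example can also be verified numerically; the following is a minimal sketch (not part of the original notes), assuming NumPy is available:

import numpy as np

# The symmetric matrix from the example above.
A = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 2.0],
              [2.0, 2.0, 1.0]])

# Columns of Q: the orthonormal eigenvectors found by hand
# (note that 2/sqrt(6) equals sqrt(2/3)).
s2, s6, s3 = np.sqrt(2), np.sqrt(6), np.sqrt(3)
Q = np.array([[-1/s2, -1/s6, 1/s3],
              [ 1/s2, -1/s6, 1/s3],
              [  0.0,  2/s6, 1/s3]])

# Q is orthogonal: Q^T Q = I.
assert np.allclose(Q.T @ Q, np.eye(3))

# Q^T A Q is the diagonal matrix diag(-1, -1, 5).
assert np.allclose(Q.T @ A @ Q, np.diag([-1.0, -1.0, 5.0]))

# np.linalg.eigh confirms the eigenvalues (sorted ascending).
print(np.linalg.eigh(A)[0])  # approximately [-1. -1.  5.]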
Orthogonal Complements
Let $W$ be a subspace of $\mathbb{R}^n$; its orthogonal complement is
$$W^\perp = \{\vec v \in \mathbb{R}^n \mid \vec v \cdot \vec w = 0 \text{ for all } \vec w \in W\}.$$
Then:
a. $W^\perp$ is a subspace of $\mathbb{R}^n$.
b. $(W^\perp)^\perp = W$.
c. $W \cap W^\perp = \{\vec 0\}$.
d. If $W = \mathrm{span}(\vec w_1, \ldots, \vec w_k)$, then $\vec v$ is in $W^\perp$ if and only if $\vec v \cdot \vec w_i = 0$ for all $i = 1, \ldots, k$.
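As an illustration of property d (an example added here, not from the original notes): for $W = \mathrm{span}\left((1,1,0)^T, (0,1,1)^T\right)$ in $\mathbb{R}^3$, a vector $\vec v = (x_1, x_2, x_3)^T$ lies in $W^\perp$ exactly when $x_1 + x_2 = 0$ and $x_2 + x_3 = 0$, i.e. $W^\perp = \mathrm{span}\left((1,-1,1)^T\right)$; note that $\dim W + \dim W^\perp = 2 + 1 = 3$.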
Orthogonal Projections
Method for finding the orthogonal projection and the distance (given a subspace $V$ of $\mathbb{R}^n$ and some vector $\vec u \in \mathbb{R}^n$).
Choose an orthogonal basis $\vec v_1, \ldots, \vec v_r$ of $V$ (if $V$ is given as a span of some non-orthogonal vectors, apply Gram–Schmidt first to obtain an orthogonal basis of $V$); we know that there is an orthogonal basis $\vec v_{r+1}, \ldots, \vec v_n$ of $V^\perp$ such that
$$\vec v_1, \ldots, \vec v_r, \vec v_{r+1}, \ldots, \vec v_n$$
is an orthogonal basis of $\mathbb{R}^n$ (but we do not really need these $\vec v_{r+1}, \ldots, \vec v_n$!).
Then
$$\vec u = \sum_{i=1}^{r} a_i \vec v_i + \sum_{j=r+1}^{n} b_j \vec v_j.$$
We now find the coefficients $a_i$. For that, take the dot product with $\vec v_{i_0}$ for each $i_0 = 1, \ldots, r$: on the right only one term does not vanish, since the $\vec v_k$ are orthogonal to each other:
$$\vec u \cdot \vec v_{i_0} = a_{i_0} (\vec v_{i_0} \cdot \vec v_{i_0}),$$
whence
$$a_{i_0} = \frac{\vec u \cdot \vec v_{i_0}}{\vec v_{i_0} \cdot \vec v_{i_0}}.$$
Having found these $a_{i_0}$ for each $i_0 = 1, \ldots, r$, we now have the orthogonal projection of $\vec u$ onto $V$:
$$\vec v = \sum_{i=1}^{r} a_i \vec v_i.$$
The remaining component $\vec w = \vec u - \vec v$ lies in $V^\perp$, and the distance from $\vec u$ to $V$ is $\|\vec w\|$.
Recall also that $(W^\perp)^\perp = W$ and $\dim W + \dim W^\perp = n$.
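The projection method above is easy to run numerically; here is a minimal NumPy sketch (the function name and the example vectors are illustrative, not from the notes):

import numpy as np

def project_onto(u, basis):
    # Orthogonal projection of u onto span(basis); `basis` must consist
    # of mutually orthogonal (not necessarily unit) vectors v_1, ..., v_r.
    v = np.zeros_like(u, dtype=float)
    for vi in basis:
        v += (u @ vi) / (vi @ vi) * vi  # a_i = (u . v_i) / (v_i . v_i)
    return v

# Example: project u onto V = span((1,1,0), (0,0,1)) in R^3.
u = np.array([3.0, 1.0, 2.0])
basis = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
v = project_onto(u, basis)   # the projection; here (2, 2, 2)
w = u - v                    # the component in V^perp; here (1, -1, 0)
print(np.linalg.norm(w))     # distance from u to V: sqrt(2)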
Vector Spaces
4. There exists an element $\vec 0$ in $V$, called a zero vector, such that $\vec u + \vec 0 = \vec u$.
a. $0\vec u = \vec 0$
b. $c\vec 0 = \vec 0$
c. $(-1)\vec u = -\vec u$
d. If $c\vec u = \vec 0$, then $c = 0$ or $\vec u = \vec 0$.
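To see why property d holds (a one-line argument supplied here for completeness): if $c\vec u = \vec 0$ and $c \neq 0$, then $\vec u = 1\vec u = (c^{-1}c)\vec u = c^{-1}(c\vec u) = c^{-1}\vec 0 = \vec 0$ by property b.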
Subspaces
Definition A subset W of a vector space V is called a subspace of V if W
is itself a vector space with the same scalars, addition and scalar multipli-
cation as V .
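For instance (an illustrative example, not from the original notes): $W = \{(x, 2x)^T \mid x \in \mathbb{R}\}$, the line $y = 2x$, is a subspace of $\mathbb{R}^2$, whereas the line $y = 2x + 1$ is not a subspace, since it does not contain $\vec 0$.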
Spanning Sets
Definition If
$$S = \{\vec v_1, \vec v_2, \ldots, \vec v_k\}$$
is a set of vectors in a vector space $V$, then the set of all linear combinations of $\vec v_1, \vec v_2, \ldots, \vec v_k$ is called the span of $\vec v_1, \vec v_2, \ldots, \vec v_k$ and is denoted by $\mathrm{span}(\vec v_1, \vec v_2, \ldots, \vec v_k)$ or $\mathrm{span}(S)$. If $V = \mathrm{span}(S)$, then $S$ is called a spanning set for $V$ and $V$ is said to be spanned by $S$.
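For example (an added illustration): in $\mathbb{R}^2$ we have $\mathrm{span}\left((1,0)^T, (0,1)^T\right) = \mathbb{R}^2$, since any $(a,b)^T$ equals $a(1,0)^T + b(0,1)^T$; thus $\{(1,0)^T, (0,1)^T\}$ is a spanning set for $\mathbb{R}^2$.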
Linear Independence
Definition A set of vectors $\{\vec v_1, \ldots, \vec v_k\}$ in a vector space $V$ is linearly dependent if there are scalars $c_1, \ldots, c_k$, at least one of which is not zero, such that $c_1\vec v_1 + \cdots + c_k\vec v_k = \vec 0$; a set that is not linearly dependent is linearly independent.
Bases
Definition A subset $B$ of a vector space $V$ is a basis of $V$ if
1. $B$ spans $V$ and
2. $B$ is linearly independent.
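For example (an added illustration): the standard basis $\{\vec e_1, \vec e_2, \vec e_3\}$ of $\mathbb{R}^3$ satisfies both conditions: it spans $\mathbb{R}^3$, and it is linearly independent.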