Lecture 8
Feng Wei
[email protected]
or
Im f := { w ∈ W : w = f(v) for some v ∈ V }.
It is not difficult to see that Ker f is a subspace of V, while Im f is a
subspace of W.
N(f) ≅ { x ∈ R^n : Ax = 0 } =: N(A).
Example 2
The vectors
(1/√3, 1/√3, 1/√3)^T,  (1/√2, −1/√2, 0)^T,  (1/√6, 1/√6, −2/√6)^T
form an orthonormal basis of R^3.
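As a quick numerical check (a NumPy sketch, not part of the lecture), orthonormality of the three vectors is equivalent to Q^T Q = I for the matrix Q whose columns are the vectors:

```python
import numpy as np

# The three vectors from Example 2, as columns of Q.
Q = np.column_stack([
    np.array([1.0, 1.0, 1.0]) / np.sqrt(3),
    np.array([1.0, -1.0, 0.0]) / np.sqrt(2),
    np.array([1.0, 1.0, -2.0]) / np.sqrt(6),
])

# Orthonormal columns  <=>  Q^T Q is the identity matrix.
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```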
Definition 1
Let S ⊆ R^n. The orthogonal complement of S is defined as the set
S^⊥ = { v ∈ R^n : v^T s = 0 for all s ∈ S }.
Example 3
Let
S = Span{ (1, 0, 2, 1)^T, (−1, 2, 1, 3)^T } ⊆ R^4.
Find S^⊥.
Answer:
S^⊥ = Span{ (−4, −3, 2, 0)^T, (−1, −2, 0, 1)^T }.
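Since S is spanned by the rows of a matrix A, we have S^⊥ = N(A), so the answer can be checked numerically. This NumPy sketch (the tolerance 1e-10 is an illustrative choice) computes an orthonormal basis of N(A) from the SVD and verifies that the claimed spanning vectors lie in N(A):

```python
import numpy as np

# Rows of A are the spanning vectors of S; then S^perp = N(A).
A = np.array([[1, 0, 2, 1],
              [-1, 2, 1, 3]], dtype=float)

# Null space via SVD: right singular vectors beyond the rank span N(A).
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
basis = Vt[rank:]                  # orthonormal basis of S^perp

# Every basis vector of S^perp is orthogonal to both rows of A.
print(np.allclose(A @ basis.T, 0))  # True

# The claimed answer: each claimed vector also lies in N(A).
claimed = np.array([[-4, -3, 2, 0],
                    [-1, -2, 0, 1]], dtype=float)
print(np.allclose(A @ claimed.T, 0))  # True
```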
Theorem 1
Let R, S be subspaces of R^n. Then
1 S^⊥ is a subspace of R^n.
2 S ⊕ S^⊥ = R^n.
3 (S^⊥)^⊥ = S.
4 R ⊆ S if and only if S^⊥ ⊆ R^⊥.
5 (R + S)^⊥ = R^⊥ ∩ S^⊥.
6 (R ∩ S)^⊥ = R^⊥ + S^⊥.
We discuss and prove only the second item in class; the proofs of the other
results are left as exercises. Let {v1, · · · , vk} be an orthonormal basis for S
and let x ∈ R^n be an arbitrary vector. We construct the vector
x1 = ∑_{i=1}^{k} (x^T vi) vi
and set
x2 = x − x1.
Then x1 ∈ S. It suffices to show that x2 ∈ S^⊥. Indeed, for each j,
x2^T vj = x^T vj − ∑_{i=1}^{k} (x^T vi)(vi^T vj) = x^T vj − x^T vj = 0.
Thus we have S + S^⊥ = R^n. It remains to prove that
S ∩ S^⊥ = {0}:
if x ∈ S ∩ S^⊥, then x^T x = 0, hence x = 0.
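The construction in the proof is exactly orthogonal projection onto S. A NumPy sketch (the random subspace and seed are illustrative assumptions; any orthonormal basis works):

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal basis for a 2-dimensional subspace S of R^4, obtained by
# QR factorization of a random matrix (illustrative choice of S).
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))

x = rng.standard_normal(4)

# x1 = sum_i (x^T v_i) v_i is the component of x in S; x2 = x - x1.
x1 = Q @ (Q.T @ x)
x2 = x - x1

# x2 is orthogonal to every basis vector of S, so x2 lies in S^perp,
# and x = x1 + x2 recovers x: hence S + S^perp = R^4.
print(np.allclose(Q.T @ x2, 0))  # True
print(np.allclose(x1 + x2, x))   # True
```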
Hints for the remaining items: (2) =⇒ (3); the proofs of (4) and (5) are
straightforward; (5) =⇒ (6).
Theorem 2
Let f : R^n −→ R^m be a linear mapping whose matrix representation with
respect to the bases v1, · · · , vn and w1, · · · , wm is A, that is,
f(vj) = ∑_{i=1}^{m} a_{ij} wi. Then
1 N(A)^⊥ = R(A^T).
2 R(A)^⊥ = N(A^T).
v ∈ R(A)^⊥ ⇐⇒ v^T x = 0 for all x ∈ R(A)
⇐⇒ v^T Ay = 0 for all y ∈ R^n
⇐⇒ y^T A^T v = 0 for all y ∈ R^n
⇐⇒ A^T v = 0
⇐⇒ v ∈ N(A^T).
As an exercise, prove the first statement in an analogous manner.
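The chain of equivalences can be tested numerically. This NumPy sketch (the random matrix and the rank tolerance 1e-10 are illustrative) extracts a basis of N(A^T) from the SVD and checks that it is orthogonal to R(A), with matching dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))   # a generic 5x3 matrix (illustrative)

# Basis of N(A^T) from the SVD of A^T: rows of Vt beyond the rank.
_, s, Vt = np.linalg.svd(A.T)
rank = int(np.sum(s > 1e-10))
NAt = Vt[rank:]                   # orthonormal basis of N(A^T) in R^5

# v in N(A^T)  <=>  v^T A = 0, i.e. v is orthogonal to every column of A,
# so N(A^T) sits inside R(A)^perp; the dimensions also add up to m = 5.
print(np.allclose(NAt @ A, 0))            # True
print(NAt.shape[0] + rank == A.shape[0])  # True: dim N(A^T) + dim R(A) = m
```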
Theorem 3
Let f : R^n −→ R^m be a linear mapping whose matrix representation with
respect to the bases v1, · · · , vn and w1, · · · , wm is A, that is,
f(vj) = ∑_{i=1}^{m} a_{ij} wi. Then
1 R^n = N(A) ⊕ R(A^T).
2 R^m = R(A) ⊕ N(A^T).
The previous figure makes many key properties seem almost obvious, and
we return to it frequently, both in the context of linear mappings and in
illustrating concepts such as the singular value decomposition of matrices
and the controllability and observability of linear systems.
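Item 1 of Theorem 3 can itself be illustrated with the SVD: the right singular vectors split into an orthonormal basis of R(A^T) and one of N(A), and together they form an orthonormal basis of R^n. A NumPy sketch (the random matrix is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))   # rank 3 generically, so dim N(A) = 2

_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
row_space = Vt[:rank]             # orthonormal basis of R(A^T)
kernel = Vt[rank:]                # orthonormal basis of N(A)

# Stacked together, the two bases give an orthonormal basis of R^5,
# exhibiting R^5 = N(A) (+) R(A^T) as an orthogonal direct sum.
B = np.vstack([row_space, kernel])
print(np.allclose(B @ B.T, np.eye(5)))  # True
```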
Definition 2
Let V and W be linear spaces over a field F and let f : V −→ W be a linear
mapping.
1 f is called surjective if R(f) = W.
2 f is said to be injective if N(f) = {0}.
Furthermore, if f is both surjective and injective, then we say f is
bijective. In this case, we call f a linear isomorphism (a linear
automorphism when V = W).
Rank–nullity theorem: for a linear mapping f : R^n −→ R^m,
dim N(f) + dim R(f) = n.
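The dimension formula can also be checked numerically. In this NumPy sketch (random matrix and tolerance illustrative), dim R(f) is the rank of A and dim N(f) is the number of null directions found by the SVD:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))   # matrix of a linear map f : R^6 -> R^4

_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))     # dim R(f) = rank(A)
null_basis = Vt[rank:]            # Vt is 6x6; rows beyond the rank span N(A)

# Counting the null space basis vectors confirms dim N(f) + dim R(f) = n.
print(null_basis.shape[0] + rank == A.shape[1])  # True
```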