Lecture 19

Fundamental Theorem of Linear Algebra & Least-Square Approximation

Fundamental Subspaces

Let A ∈ M_{m×n}(R). Suppose N(A) is the null space of A, C(A) is the column space of A, C(A^T) is the column space of A^T, and N(A^T) is the null space of A^T. Then N(A) and C(A^T) are subspaces of R^n, and C(A) and N(A^T) are subspaces of R^m. These subspaces are called the fundamental subspaces associated with A.
Lemma 1. N(A) ⊥ C(A^T) and C(A) ⊥ N(A^T).
Proof: Let x ∈ N(A) and y ∈ C(A^T). Then Ax = 0 and A^T z = y for some z ∈ R^m. Then y^T x = z^T Ax = 0, that is, ⟨x, y⟩ = 0, so that N(A) ⊥ C(A^T). Similarly, C(A) ⊥ N(A^T). □
Theorem 2 (Fundamental Theorem of Linear Algebra). Let A ∈ M_{m×n}(R). Then
1. R^n = N(A) ⊕ C(A^T)
2. R^m = C(A) ⊕ N(A^T).
Proof: Since C(A^T) is a subspace of R^n, R^n = C(A^T) ⊕ (C(A^T))^⊥. We claim that (C(A^T))^⊥ = N(A). By Lemma 1, N(A) ⊆ (C(A^T))^⊥. Note that n = dim(C(A^T)) + dim((C(A^T))^⊥), while by the rank-nullity theorem n = rank(A) + nullity(A); since rank(A) = dim(C(A^T)), this implies dim(N(A)) = dim((C(A^T))^⊥). Hence N(A) = (C(A^T))^⊥. Similarly one can prove R^m = C(A) ⊕ N(A^T). □
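
The decomposition in Theorem 2 is easy to check numerically. Below is a minimal NumPy sketch (not part of the lecture; the matrix and vector are made up for illustration): it splits an arbitrary x ∈ R^n into a row-space component and a null-space component, using the fact that A⁺A (with A⁺ the pseudoinverse) is the orthogonal projector onto C(A^T).

import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # rank 1, so dim C(A^T) = 1 and dim N(A) = 2
x = np.array([1., -1., 2.])

P_row = np.linalg.pinv(A) @ A         # orthogonal projector onto C(A^T)
x_row = P_row @ x                     # component in the row space C(A^T)
x_null = x - x_row                    # component in the null space N(A)

print(np.allclose(A @ x_null, 0))     # True: x_null lies in N(A)
print(np.isclose(x_row @ x_null, 0))  # True: the two components are orthogonal
print(np.allclose(x_row + x_null, x)) # True: x = x_row + x_null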

Least-Square Approximation
Problem 3. Let A ∈ M_{m×n}(R) and b ∈ R^m be such that b ∉ C(A), where C(A) is the column space of A. In other words, the system Ax = b is inconsistent. The problem is then to find a "pseudo solution" or "approximate solution" subject to a suitable condition on the error term.
Definition 4 (Least-Square Method). A method to approximate a solution of an inconsistent system of linear equations such that the approximation minimizes the sum of the squares of the errors made in every equation.

Let AX = b be an inconsistent system of linear equations, where A ∈ M_{m×n}(R), X ∈ R^n and b ∈ R^m.


Suppose X_0 = (x_1, x_2, . . . , x_n) is an approximate solution of the system. Then AX_0 = b' and b' ≠ b. The error term for the i-th equation is |b_i − b'_i| = |Σ_{j=1}^{n} a_ij x_j − b_i|. For X_0 to be a least-square solution of the system, the sum of the squares of the errors made in each equation should be minimum, that is,

Σ_{i=1}^{m} ( Σ_{j=1}^{n} a_ij x_j − b_i )²  is minimum.

Theorem 5. Suppose X_0 is a least-square solution. Then AX_0 is the orthogonal projection of b onto the column space of A.

Proof. Let X_0 be a least-square solution of AX = b. Then Σ_{i=1}^{m} ( Σ_{j=1}^{n} a_ij x_j − b_i )² is minimum. For any Y = (y_1, y_2, . . . , y_n) ∈ R^n, Σ_{i=1}^{m} ( Σ_{j=1}^{n} a_ij y_j − b_i )² = ||AY − b||². Thus ||AX_0 − b|| ≤ ||AX − b|| for all X ∈ R^n, as ||AX_0 − b|| is minimum. Recall that w_v is the orthogonal projection of v onto W if and only if ||v − w_v|| ≤ ||v − w|| for all w ∈ W. Take V = R^m, W = {AX : X ∈ R^n} = C(A) and v = b. Then AX_0 is the orthogonal projection of b onto the column space of A. □
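
As a quick numerical illustration of Theorem 5 (a sketch, not part of the lecture; it assumes NumPy and borrows the data of Example 10 below): the vector AX_0 produced by a least-squares solver coincides with the orthogonal projection of b onto C(A), computed here with the projector AA⁺.

import numpy as np

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.],
              [1., 4.]])
b = np.array([0., 3., 4., 4.])

x0, *_ = np.linalg.lstsq(A, b, rcond=None)  # a least-square solution
proj_b = A @ np.linalg.pinv(A) @ b          # orthogonal projection of b onto C(A)

print(np.allclose(A @ x0, proj_b))          # True, as Theorem 5 asserts
print(np.allclose(A.T @ (b - A @ x0), 0))   # residual is orthogonal to C(A)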

Theorem 6. Let X_0 be a least-square solution of AX = b and N(A) be the null space of A. Suppose S is the set of all least-square solutions of AX = b. Then S = X_0 + N(A) = {X_0 + X_h : X_h ∈ N(A)}.

Proof. Let X ∈ X_0 + N(A). Then X = X_0 + X_h for some X_h ∈ N(A), so that AX − b = AX_0 − b. Thus X ∈ S. Now suppose X ∈ S. Then ||AX − b|| = ||AX_0 − b|| ⇒ ||(AX_0 − b) + A(X − X_0)||² = ||AX_0 − b||² ⇒ ||AX_0 − b||² + ||A(X − X_0)||² = ||AX_0 − b||², since A(X − X_0) ∈ C(A) and (AX_0 − b) ⊥ Y for all Y ∈ C(A) by Theorem 5. Therefore ||A(X − X_0)|| = 0 ⇒ A(X − X_0) = 0 ⇒ X − X_0 ∈ N(A), so that X = X_0 + (X − X_0) ∈ X_0 + N(A). □
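
Theorem 6 is easy to see numerically with a rank-deficient matrix (a sketch; the data are made up for illustration): shifting a least-square solution by any null-space vector leaves the residual unchanged.

import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])          # rank 2: N(A) is spanned by (1, -2, 1)
b = np.array([1., 0., 0.])

x0, *_ = np.linalg.lstsq(A, b, rcond=None)
xh = np.array([1., -2., 1.])          # a null-space vector: A @ xh = 0

r0 = np.linalg.norm(A @ x0 - b)
r1 = np.linalg.norm(A @ (x0 + xh) - b)
print(np.isclose(r0, r1))             # True: x0 + xh is also a least-square solution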

Application of the Fundamental Theorem of Linear Algebra

Lemma 7. Let A ∈ M_{m×n}(R). Then the system A^T AX = A^T b is consistent for every b ∈ R^m.

Proof. It is enough to show that A^T b is in the column space of A^T A. By the Fundamental Theorem of Linear Algebra, R^m = C(A) ⊕ N(A^T). Thus, there exist X ∈ R^n and Y ∈ N(A^T) such that b = AX + Y. Therefore, A^T b = A^T(AX) + A^T Y = A^T AX + 0. □
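
A numerical check of Lemma 7 (a sketch; the matrix is made up): even when A is rank-deficient, so that A^T A is singular and AX = b is inconsistent, the system A^T AX = A^T b has a solution, since any least-squares minimizer satisfies it.

import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])              # rank 1, so A^T A is singular
b = np.array([1., 0., 0.])            # b is not in C(A): AX = b is inconsistent

x0, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A.T @ A @ x0, A.T @ b))   # True: the normal equations hold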

Theorem 8. Let AX = b be an inconsistent system of linear equations and X_0 ∈ R^n. Then X_0 is a least-square solution of AX = b if and only if A^T AX_0 = A^T b.

Proof. Note that C(A)^⊥ = N(A^T). By Theorem 5, X_0 is a least-square solution if and only if (AX_0 − b) ∈ C(A)^⊥, that is, (AX_0 − b) ∈ N(A^T) ⇔ A^T(AX_0 − b) = 0 ⇔ A^T AX_0 = A^T b. □

Remark 9. To find a least-square solution, one can therefore solve the (always consistent) system A^T AX = A^T b, known as the normal equations.
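
A minimal NumPy sketch of Remark 9 (the data are made up for illustration): solve A^T AX = A^T b directly and compare with NumPy's built-in least-squares routine.

import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([1., 0., 2.])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)    # via the normal equations
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_normal, x_lstsq))           # True when A has full column rank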

Example 10. Find a straight line y = a + bx which best fits the given points (1, 0), (2, 3), (3, 4), (4, 4) by the least-square method.

Solution: We get the following system of equations

a + b = 0
a + 2b = 3
a + 3b = 4
a + 4b = 4

which is inconsistent. To find a least-square solution, we solve the system A^T AX = A^T b, where

A = [ 1  1 ]        b = [ 0 ]
    [ 1  2 ]            [ 3 ]
    [ 1  3 ]            [ 4 ]
    [ 1  4 ],           [ 4 ].

Thus

A^T A = [ 4  10 ]       A^T b = [ 11 ]
        [ 10 30 ],              [ 34 ].

Row reducing the augmented matrix,

(A^T A | A^T b) = [ 4  10 | 11 ] ∼ [ 1  3  | 34/10 ] ∼ [ 1  3  | 34/10 ] ∼ [ 1 0 | −1/2  ]
                  [ 10 30 | 34 ]   [ 4  10 | 11    ]   [ 0 −2 | −13/5 ]   [ 0 1 | 13/10 ].

Thus y = −1/2 + (13/10)x is a best fit.
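
As a cross-check of Example 10 (a sketch, not part of the lecture; it assumes NumPy), the same coefficients come out of np.linalg.lstsq and np.polyfit on the four given points.

import numpy as np

x = np.array([1., 2., 3., 4.])
y = np.array([0., 3., 4., 4.])

A = np.column_stack([np.ones_like(x), x])   # columns correspond to a and b in y = a + bx
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)                                 # [-0.5  1.3], i.e. y = -1/2 + (13/10)x
print(np.polyfit(x, y, 1))                  # [ 1.3 -0.5] (highest degree first)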

To apply the orthogonal projection method instead, take W = C(A) = {(x + y, x + 2y, x + 3y, x + 4y) | x, y ∈ R}. A basis of W is {(1, 1, 1, 1), (1, 2, 3, 4)}. An orthogonal basis of W is {(1, 1, 1, 1), (−3/2, −1/2, 1/2, 3/2)}, with ||(1, 1, 1, 1)||² = 4 and ||(−3/2, −1/2, 1/2, 3/2)||² = 5. Take v = b = (0, 3, 4, 4). Then P_W(v) = (11/4)(1, 1, 1, 1) + (13/10)(−3/2, −1/2, 1/2, 3/2) = (1/10)(8, 21, 34, 47). A least-square solution can then be obtained by solving AX = P_W(v):

[ 1 1 | 8/10  ]   [ 1 1 | 8/10  ]   [ 1 1 | 8/10  ]
[ 1 2 | 21/10 ] ∼ [ 0 1 | 13/10 ] ∼ [ 0 1 | 13/10 ]
[ 1 3 | 34/10 ]   [ 0 2 | 26/10 ]   [ 0 0 | 0     ]
[ 1 4 | 47/10 ]   [ 0 3 | 39/10 ]   [ 0 0 | 0     ].

Thus (a, b) = (−1/2, 13/10) is a least-square solution, so that y = −1/2 + (13/10)x is a best fit, as before.
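
The projection route can also be sketched in NumPy (again an illustration, not part of the lecture): orthogonalize the basis of W = C(A) by Gram-Schmidt, project b onto W, and then solve the now-consistent system AX = P_W(b).

import numpy as np

u1 = np.array([1., 1., 1., 1.])
v2 = np.array([1., 2., 3., 4.])
u2 = v2 - (v2 @ u1) / (u1 @ u1) * u1           # Gram-Schmidt: u2 = (-3/2, -1/2, 1/2, 3/2)
b = np.array([0., 3., 4., 4.])

proj = (b @ u1) / (u1 @ u1) * u1 + (b @ u2) / (u2 @ u2) * u2
print(proj)                                    # [0.8 2.1 3.4 4.7] = (1/10)(8, 21, 34, 47)

A = np.column_stack([u1, v2])
x0, *_ = np.linalg.lstsq(A, proj, rcond=None)  # AX = P_W(b) is consistent, solved exactly
print(x0)                                      # [-0.5  1.3]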
