MATH 4A - Linear Algebra With Applications: Lecture 27: Gram-Schmidt Orthogonalization

The document discusses Gram-Schmidt orthogonalization, a method for turning a basis of a subspace into an orthogonal basis of the same subspace. It first explains why every subspace of Rn has an orthogonal basis, which is needed for the orthogonal decomposition theorem to apply to an arbitrary subspace. It then introduces the Gram-Schmidt process, which takes a basis and iteratively constructs a new basis that is orthogonal; normalizing the resulting vectors produces an orthonormal basis. Examples apply Gram-Schmidt to find orthogonal and orthonormal bases of specific subspaces.



MATH 4A - Linear Algebra with Applications


Lecture 27: Gram-Schmidt orthogonalization

5 June 2019

Reading: §6.3-6.4
Recommended problems from §6.4:
Announcement: please fill out online evaluations!

Lecture plan

1 Understanding the orthogonal decomposition theorem

2 Gram-Schmidt orthogonalization

Recall: orthogonal decomposition theorem

Theorem
Let W be a subspace of Rn . Then each y in Rn can be written
uniquely in the form
y = projW y + z
where projW y is in W and z is in W ⊥ . In fact, if {u1 , . . . , up } is
any orthogonal basis of W , then
projW y = (y · u1)/(u1 · u1) u1 + · · · + (y · up)/(up · up) up

and z = y − projW y.

Geometric interpretation

I mentioned this on Monday, but it’s worth reiterating: given the orthogonal basis {u1, . . . , up} of W, the formula

projW y = (y · u1)/(u1 · u1) u1 + · · · + (y · up)/(up · up) up

says that the projection of y onto W is simply the (vector) sum of the projections of y onto each of the vectors u1, . . . , up.

Interestingly, the word “uniquely” in the first part of the theorem implies that even though this formula for projW y depends on a choice of orthogonal basis of W (if dim W > 1, then there are infinitely many!), the result of the formula (namely, projW y) only depends on W.

Example

 
What is the orthogonal projection of the vector (1, 3, 2) onto the 2-dimensional subspace of R3 that contains the x-axis and forms an angle π/4 with the y-axis?
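One way to set this up in numpy, under the assumption that the subspace is spanned by the x-axis together with the unit vector in the yz-plane making angle π/4 with the positive y-axis (the slide doesn’t pin down which of the two such tilted planes is meant):

import numpy as np

y = np.array([1.0, 3.0, 2.0])

# Assumed spanning vectors: the x-axis direction, and the unit vector in the
# yz-plane at angle pi/4 to the y-axis (the tilt (0, cos, -sin) is the other reading).
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, np.cos(np.pi / 4), np.sin(np.pi / 4)])

# u1 and u2 are already orthogonal, so the projection formula applies directly.
proj = (y @ u1) / (u1 @ u1) * u1 + (y @ u2) / (u2 @ u2) * u2
print(proj)   # [1.  2.5 2.5]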

Let’s prove the theorem


We’ll first prove the second statement:

“if {u1, . . . , up} is any orthogonal basis of W, then

projW y = (y · u1)/(u1 · u1) u1 + · · · + (y · up)/(up · up) up

and z = y − projW y.”

and then use it to prove the so-called “existence half” of the first statement:

“Let W be a subspace of Rn. Then each y in Rn can be written uniquely in the form

y = projW y + z

where projW y is in W and z is in W⊥.”

We’ll also need to be sure to prove the “uniqueness half” too.

A gap in our proof!

Since we used the second statement to prove the first statement, we needed to assume that W has an orthogonal basis. But the first part of the theorem is supposed to be true for any W, not just those W promised to have an orthogonal basis.

So, we need to show that every subspace of Rn has an orthogonal basis.

Gram-Schmidt orthogonalization: example

Let

x1 = (2, 1, 3),   x2 = (3, 1, 2),   x3 = (7, 2, 3).

Can we find an orthogonal basis for Span{x1, x2, x3}?
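The worked answer from lecture isn’t reproduced in these slides, but the computation can be sketched in numpy as follows; the update formulas below anticipate the Gram-Schmidt theorem stated two slides later.

import numpy as np

x1 = np.array([2.0, 1.0, 3.0])
x2 = np.array([3.0, 1.0, 2.0])
x3 = np.array([7.0, 2.0, 3.0])

v1 = x1
v2 = x2 - (x2 @ v1) / (v1 @ v1) * v1
v3 = x3 - (x3 @ v1) / (v1 @ v1) * v1 - (x3 @ v2) / (v2 @ v2) * v2

print(v1, v2, v3)
print(v1 @ v2, v1 @ v3, v2 @ v3)   # pairwise dot products are (numerically) zero

# If some v_k comes out as the zero vector, then x_k was already in the span of the
# earlier vectors; drop it, and the remaining nonzero v's are still an orthogonal
# basis of Span{x1, x2, x3}.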

iClicker

Using the same procedure we just did, find an orthogonal basis of

Span{(3, 6, 0), (1, 2, 2)}

that contains (3, 6, 0). What is the 2nd entry of the other vector?

(a) -8
(b) -2
(c) 0
(d) 2
(e) 5 + 1/3
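After voting, one quick way to check your answer in numpy (keeping (3, 6, 0) as the first vector and applying a single Gram-Schmidt step to the second):

import numpy as np

x1 = np.array([3.0, 6.0, 0.0])
x2 = np.array([1.0, 2.0, 2.0])

v2 = x2 - (x2 @ x1) / (x1 @ x1) * x1   # one Gram-Schmidt step, keeping x1 as is
print(v2)                              # its 2nd entry is the answer to vote for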

The Gram-Schmidt orthogonalization process

Theorem
Given a basis {x1 , . . . , xp } for a nonzero subspace W of Rn , define

v1 = x1
v2 = x2 − (x2 · v1)/(v1 · v1) v1
v3 = x3 − (x3 · v1)/(v1 · v1) v1 − (x3 · v2)/(v2 · v2) v2
...
vp = xp − (xp · v1)/(v1 · v1) v1 − (xp · v2)/(v2 · v2) v2 − · · · − (xp · vp−1)/(vp−1 · vp−1) vp−1

Then {v1 , . . . , vp } is an orthogonal basis of W . In fact,

Span{v1 , . . . , vk } = Span{x1 , . . . , xk } for each k = 1, 2, . . . , p.
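A minimal numpy implementation of this process (not course code); the tolerance check is an extra guard for inputs that are not actually a basis, a case the theorem itself assumes away.

import numpy as np

def gram_schmidt(xs, tol=1e-12):
    # Apply the process above to the list of vectors xs and return v1, ..., vp.
    # If xs is a basis, no v_k is (numerically) zero and the tolerance check never fires.
    vs = []
    for x in xs:
        v = x.astype(float)
        for u in vs:
            v = v - (x @ u) / (u @ u) * u   # subtract the projection of x onto each earlier v
        if np.linalg.norm(v) > tol:         # guard against inputs that are not independent
            vs.append(v)
    return vs

# Example: the vectors from the earlier example slide.
for v in gram_schmidt([np.array([2, 1, 3]), np.array([3, 1, 2]), np.array([7, 2, 3])]):
    print(v)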



Orthonormalizing

If we want an orthonormal basis, and not just an orthogonal basis, do the following:

1 Use the Gram-Schmidt orthogonalization on the previous slide to change the basis {x1, . . . , xp} of W to an orthogonal basis {v1, . . . , vp}.

2 Normalize each of the vectors in {v1, . . . , vp}. More precisely, let

uk = vk / ‖vk‖

for each k = 1, . . . , p. Then {u1, . . . , up} will be an orthonormal basis for W.
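A tiny numpy sketch of step 2; the two vectors below are an arbitrary orthogonal pair standing in for the output of Gram-Schmidt.

import numpy as np

# Normalize an orthogonal basis to get an orthonormal one: u_k = v_k / ||v_k||.
vs = [np.array([2.0, 1.0, 3.0]), np.array([16.0, 1.0, -11.0])]   # some orthogonal basis
us = [v / np.linalg.norm(v) for v in vs]
print([float(u @ u) for u in us])   # every u_k has length 1
print(us[0] @ us[1])                # and the u's stay orthogonal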

Example

 
Let W be the span of (1, 1, 1). Let’s find an orthonormal basis of W⊥.
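One way to carry this out in numpy (the slides stop here, so this is a sketch rather than the lecture’s worked answer); the two spanning vectors chosen for W⊥ are our own choice, and any two independent vectors orthogonal to (1, 1, 1) would do.

import numpy as np

w = np.array([1.0, 1.0, 1.0])

# Two vectors spanning W-perp (each has dot product 0 with w):
x1 = np.array([1.0, -1.0, 0.0])
x2 = np.array([1.0, 0.0, -1.0])

# Gram-Schmidt on {x1, x2}, then normalize:
v1 = x1
v2 = x2 - (x2 @ v1) / (v1 @ v1) * v1
u1, u2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)

print(u1, u2)
print(u1 @ w, u2 @ w, u1 @ u2)   # all (numerically) zero: an orthonormal basis of W-perp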
