
Daniel Suryakusuma

SID: 24756460
Math 110, Spring 2019.
Homework 10, due April 14.

Prob 1. Let $e_1, \dots, e_m$ be an orthonormal list of vectors. Prove that
\[
v \in \operatorname{span}\{e_1, \dots, e_m\} \iff \|v\|^2 = \sum_{j=1}^{m} |\langle v, e_j \rangle|^2.
\]

Solution. We first prove the forward direction ($\implies$). Given that $e_1, \dots, e_m$ is an orthonormal list of vectors, the list is linearly independent (Axler 6.26), so if $v \in \operatorname{span}\{e_1, \dots, e_m\}$ we can write, for unique scalars $c_j \in \mathbb{F}$,
\[
v = \sum_{j=1}^{m} c_j e_j = c_1 e_1 + \cdots + c_m e_m. \tag{1}
\]
Because the list $e_1, \dots, e_m$ is orthonormal, "taking the inner product of both sides of this equation with $e_j$ gives $\langle v, e_j \rangle = c_j$" (as in Axler's proof of 6.30). To see this explicitly, consider the following:
\[
\langle v, e_1 \rangle = \langle c_1 e_1 + \cdots + c_m e_m,\, e_1 \rangle = c_1 + 0 + \cdots + 0
\]
\[
\langle v, e_2 \rangle = \langle c_1 e_1 + \cdots + c_m e_m,\, e_2 \rangle = 0 + c_2 + \cdots + 0
\]
\[
\vdots
\]
\[
\langle v, e_m \rangle = \langle c_1 e_1 + \cdots + c_m e_m,\, e_m \rangle = 0 + \cdots + 0 + c_m
\]

Recall that Axler 6.25 gives: "If $e_1, \dots, e_m$ is an orthonormal list of vectors in $V$, then for all $a_1, \dots, a_m \in \mathbb{F}$, $\|a_1 e_1 + \cdots + a_m e_m\|^2 = |a_1|^2 + \cdots + |a_m|^2$." Then consider the following:
\[
\|v\|^2 = \|c_1 e_1 + \cdots + c_m e_m\|^2 = |c_1|^2 + \cdots + |c_m|^2 = |\langle v, e_1 \rangle|^2 + \cdots + |\langle v, e_m \rangle|^2 = \sum_{j=1}^{m} |\langle v, e_j \rangle|^2.
\]

Now we prove the backward ($\impliedby$) direction. Suppose $\|v\|^2 = \sum_{j=1}^{m} |\langle v, e_j \rangle|^2$. Let
\[
u = \sum_{j=1}^{m} \langle v, e_j \rangle e_j \in \operatorname{span}\{e_1, \dots, e_m\}, \qquad w = v - u.
\]
For each $k$ we have $\langle w, e_k \rangle = \langle v, e_k \rangle - \langle v, e_k \rangle = 0$, so $w \perp u$, and the Pythagorean theorem (Axler 6.13) together with 6.25 gives
\[
\|v\|^2 = \|u\|^2 + \|w\|^2 = \sum_{j=1}^{m} |\langle v, e_j \rangle|^2 + \|w\|^2.
\]
By hypothesis the left side equals the first term on the right, so $\|w\|^2 = 0$, hence $w = 0$ and $v = u \in \operatorname{span}\{e_1, \dots, e_m\}$.
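As a quick numeric sanity check (a hypothetical example, not part of the proof), the identity can be tested in $\mathbb{R}^3$ with the orthonormal list $e_1, e_2$ below: it holds for a vector in their span and fails for one with a component outside.

```python
import math

# Orthonormal list in R^3 (standard basis vectors, so orthonormality is immediate)
e1 = (1.0, 0.0, 0.0)
e2 = (0.0, 1.0, 0.0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def identity_holds(v):
    # ||v||^2 versus |<v, e1>|^2 + |<v, e2>|^2
    return math.isclose(dot(v, v), dot(v, e1) ** 2 + dot(v, e2) ** 2)

v_in = (3.0, -4.0, 0.0)   # lies in span{e1, e2}
v_out = (3.0, -4.0, 2.0)  # has a component along (0, 0, 1)

print(identity_holds(v_in))   # True
print(identity_holds(v_out))  # False
```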

Prob 2. Consider the space $\mathcal{P}_3(\mathbb{R})$ with the inner product
\[
\langle f, g \rangle = \int_{-1}^{1} f(x) g(x)\, dx.
\]
Use the Gram-Schmidt algorithm to orthonormalize the basis $1, x, x^2, x^3$.


Solution. Recall the Gram-Schmidt algorithm for "orthonormalizing" any given basis $v_1, \dots, v_m$, outlined recursively (Axler 6.31) as follows: let $e_1 = \frac{v_1}{\|v_1\|}$, and then recursively for $j = 2, \dots, m$, let
\[
e_j = \frac{v_j - \sum_{i=1}^{j-1} \langle v_j, e_i \rangle e_i}{\left\| v_j - \sum_{i=1}^{j-1} \langle v_j, e_i \rangle e_i \right\|}.
\]
With our given inner product space over $\mathcal{P}_3(\mathbb{R})$ and ordered basis $\{1, x, x^2, x^3\}$, we simply follow the procedure; our orthonormal basis is $e_1, e_2, e_3, e_4$ as follows.


\[
e_1 = \frac{1}{\|1\|} = \frac{1}{\sqrt{\langle 1, 1 \rangle}} = \frac{1}{\sqrt{\int_{-1}^{1} 1\, dx}} = \frac{1}{\sqrt{2}}
\]
\[
e_2 = \frac{x - \langle x, \tfrac{1}{\sqrt{2}} \rangle \tfrac{1}{\sqrt{2}}}{\left\| x - \langle x, \tfrac{1}{\sqrt{2}} \rangle \tfrac{1}{\sqrt{2}} \right\|} = \frac{x}{\|x\|} = \frac{x}{\sqrt{\langle x, x \rangle}} = \frac{x}{\sqrt{2/3}} = \sqrt{\tfrac{3}{2}}\, x
\]
\[
e_3 = \frac{x^2 - \langle x^2, e_1 \rangle e_1 - \langle x^2, e_2 \rangle e_2}{\left\| x^2 - \langle x^2, e_1 \rangle e_1 - \langle x^2, e_2 \rangle e_2 \right\|} = \frac{x^2 - \tfrac{1}{3}}{\sqrt{8/45}} = \frac{\sqrt{10}}{4}\, (3x^2 - 1)
\]
\[
e_4 = \frac{x^3 - 0\,e_1 - \langle x^3, e_2 \rangle e_2 - 0\,e_3}{\left\| x^3 - 0\,e_1 - \langle x^3, e_2 \rangle e_2 - 0\,e_3 \right\|} = \frac{x^3 - \tfrac{3x}{5}}{\sqrt{8/175}} = \sqrt{\tfrac{175}{8}} \left( x^3 - \tfrac{3x}{5} \right) = \frac{\sqrt{7}}{2\sqrt{2}}\, (5x^3 - 3x)
\]

Explicit calculations for the inner products required above are as follows:
\[
\langle x, \tfrac{1}{\sqrt{2}} \rangle = \frac{1}{\sqrt{2}} \int_{-1}^{1} x\, dx = \frac{1}{2\sqrt{2}}\, x^2 \Big|_{-1}^{1} = 0
\]
\[
\langle x, x \rangle = \int_{-1}^{1} x^2\, dx = \frac{1}{3}\, x^3 \Big|_{-1}^{1} = \frac{2}{3}
\]
\[
\langle x^2, e_1 \rangle = \frac{1}{\sqrt{2}} \int_{-1}^{1} x^2\, dx = \frac{1}{3\sqrt{2}}\, x^3 \Big|_{-1}^{1} = \frac{\sqrt{2}}{3}
\]
\[
\langle x^2, e_2 \rangle = \int_{-1}^{1} \sqrt{\tfrac{3}{2}}\, x \cdot x^2\, dx = \sqrt{\tfrac{3}{2}} \cdot \frac{1}{4}\, x^4 \Big|_{-1}^{1} = 0
\]
\[
\langle x^2 - \tfrac{1}{3},\, x^2 - \tfrac{1}{3} \rangle = \int_{-1}^{1} \left( x^2 - \tfrac{1}{3} \right)^2 dx = \left[ \frac{x^5}{5} - \frac{2x^3}{9} + \frac{x}{9} \right]_{-1}^{1} = \frac{8}{45}
\]
\[
\langle x^3, e_1 \rangle = \int_{-1}^{1} x^3 \cdot \frac{1}{\sqrt{2}}\, dx = 0
\]
\[
\langle x^3, e_2 \rangle = \int_{-1}^{1} x^3 \cdot \frac{\sqrt{6}\, x}{2}\, dx = \frac{\sqrt{6}\, x^5}{10} \Big|_{-1}^{1} = \frac{\sqrt{6}}{5}
\]
\[
\langle x^3, e_3 \rangle = \int_{-1}^{1} x^3 \cdot \frac{\sqrt{10}\,(3x^2 - 1)}{4}\, dx = \frac{\sqrt{10}}{4} \left[ \frac{x^6}{2} - \frac{x^4}{4} \right]_{-1}^{1} = 0
\]
\[
\langle x^3 - \tfrac{3x}{5},\, x^3 - \tfrac{3x}{5} \rangle = \int_{-1}^{1} \left( x^3 - \frac{3x}{5} \right)^2 dx = \int_{-1}^{1} \left[ x^6 - \frac{6x^4}{5} + \frac{9x^2}{25} \right] dx = \left[ \frac{x^7}{7} - \frac{6x^5}{25} + \frac{3x^3}{25} \right]_{-1}^{1} = \frac{2}{7} - \frac{12}{25} + \frac{6}{25} = \frac{8}{175}
\]

Prob 3. Find $p \in \mathcal{P}_3(\mathbb{R})$ such that $p(0) = 0$, $p'(0) = 0$, and
\[
\int_{0}^{1} |1 + 4x - p(x)|^2\, dx
\]
is as small as possible.
R1
Solution. Consider the inner product defined for $f, g \in \mathcal{P}_3(\mathbb{R})$ by $\langle f, g \rangle := \int_0^1 f(x) g(x)\, dx$. We first verify this satisfies the requirements of an inner product. By linearity of the integral, for $\lambda \in \mathbb{R}$ and $u, v, w \in \mathcal{P}_3(\mathbb{R})$ we have additivity ($\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$), homogeneity ($\langle \lambda u, v \rangle = \lambda \langle u, v \rangle$), and symmetry (conjugate symmetry is trivial here since $\mathbb{F} = \mathbb{R}$). Positivity holds because $\langle v, v \rangle = \int_0^1 v(x)^2\, dx \geq 0$, and a continuous nonnegative function with zero integral over $[0, 1]$ is identically zero there, so $\langle v, v \rangle = 0 \iff v = 0$. All the required properties are satisfied, so this is a well-defined inner product, and the quantity to be minimized is exactly $\|(1 + 4x) - p\|^2$.
Because $p \in \mathcal{P}_3(\mathbb{R})$, we can write $p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$ for some scalars $a_0, a_1, a_2, a_3 \in \mathbb{R}$. However, $p(0) = 0 \implies a_0 = 0$ and $p'(0) = 0 \implies a_1 = 0$. So we can write
\[
p(x) = a_2 x^2 + a_3 x^3.
\]

Let $U = \operatorname{span}\{x^2, x^3\}$, the subspace of $\mathcal{P}_3(\mathbb{R})$ consisting of exactly the polynomials of this form. Minimizing the given integral is then equivalent to finding the $u \in U$ that minimizes $\|(1 + 4x) - u\|$; in other words, the $u \in U$ closest in "distance" to $1 + 4x$. As proven in Axler, this occurs precisely for
\[
u = P_U(v) = \langle v, e_1 \rangle e_1 + \cdots + \langle v, e_m \rangle e_m,
\]
given that $e_1, \dots, e_m$ is an orthonormal basis of $U$.


We have written any $u \in U$ as $u = a_2 x^2 + a_3 x^3$. Take the canonical monomial basis $x^2, x^3$ for $U$. Performing Gram-Schmidt on this basis, we get:
\[
e_1 := \frac{x^2}{\|x^2\|} = \frac{x^2}{\sqrt{\langle x^2, x^2 \rangle}} = \frac{x^2}{\sqrt{\int_0^1 x^4\, dx}} = \frac{x^2}{\sqrt{1/5}} = \sqrt{5}\, x^2
\]
\[
e_2 := \frac{x^3 - \langle x^3, e_1 \rangle e_1}{\left\| x^3 - \langle x^3, e_1 \rangle e_1 \right\|} = \frac{x^3 - \tfrac{5x^2}{6}}{\sqrt{\left\langle x^3 - \tfrac{5x^2}{6},\, x^3 - \tfrac{5x^2}{6} \right\rangle}} = \frac{x^3 - \tfrac{5x^2}{6}}{\sqrt{\int_0^1 \left( x^3 - \tfrac{5x^2}{6} \right)^2 dx}} = \sqrt{7}\,(6x^3 - 5x^2)
\]

Then we have, for $v := 1 + 4x$,
\[
u = P_U(v) = \langle v, e_1 \rangle e_1 + \langle v, e_2 \rangle e_2
\]
\[
= \left( \int_0^1 (1 + 4x)\, \sqrt{5}\, x^2\, dx \right) \sqrt{5}\, x^2 + \left( \int_0^1 (1 + 4x)\, \sqrt{7}\,(6x^3 - 5x^2)\, dx \right) \sqrt{7}\,(6x^3 - 5x^2)
\]
\[
= \frac{20x^2}{3} + \frac{-77\,(6x^3 - 5x^2)}{30} = \frac{39x^2}{2} - \frac{77x^3}{5}
\]
So $p(x) = \frac{39x^2}{2} - \frac{77x^3}{5} \in \mathcal{P}_3(\mathbb{R})$ satisfies $p(0) = p'(0) = 0$ and minimizes our given integral expression.
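As an exact cross-check (an independent route, not the solution's method), the same minimizer can be obtained by solving the normal equations $\langle v - u, x^2 \rangle = \langle v - u, x^3 \rangle = 0$ directly with rational arithmetic, using $\int_0^1 x^k\, dx = \tfrac{1}{k+1}$:

```python
from fractions import Fraction as F

def inner(f, g):
    # <f, g> = integral over [0, 1] of f*g, for coefficient lists [c0, c1, ...]
    return sum(F(a) * F(b) * F(1, i + j + 1)
               for i, a in enumerate(f) for j, b in enumerate(g))

v = [F(1), F(4)]                        # 1 + 4x
x2, x3 = [0, 0, F(1)], [0, 0, 0, F(1)]  # basis of U = span{x^2, x^3}

# Gram matrix entries and right-hand sides of the normal equations
g22, g23, g33 = inner(x2, x2), inner(x2, x3), inner(x3, x3)
b2, b3 = inner(v, x2), inner(v, x3)

# Solve the 2x2 system for u = a2*x^2 + a3*x^3 by Cramer's rule
det = g22 * g33 - g23 * g23
a2 = (b2 * g33 - b3 * g23) / det
a3 = (g22 * b3 - g23 * b2) / det
print(a2, a3)  # 39/2 -77/5
```

This matches $p(x) = \tfrac{39}{2}x^2 - \tfrac{77}{5}x^3$ found above.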

Prob 4. Consider a complex vector space $V = \operatorname{span}(1, \cos x, \sin x, \cos 2x, \sin 2x)$ with an inner product
\[
\langle f, g \rangle := \int_{-\pi}^{\pi} f(t)\, \overline{g(t)}\, dt.
\]
Let $U$ be the subspace of odd functions in $V$. What is $U^\perp$? Find an orthonormal basis for both $U$ and $U^\perp$.
Solution. We are given $V = \operatorname{span}\{1, \cos x, \sin x, \cos 2x, \sin 2x\}$. First we verify these are linearly independent. A direct computation with the given inner product shows that these five functions are pairwise orthogonal (each pairwise product integrates to $0$ over $[-\pi, \pi]$) and nonzero, and an orthogonal list of nonzero vectors is linearly independent. So $\{1, \cos x, \sin x, \cos 2x, \sin 2x\}$ is a linearly independent list spanning $V$, and thus a basis of $V$. Then any $v \in V$ can be written as a linear combination of our basis vectors: for some $a_i \in \mathbb{C}$,
\[
v = a_0 + a_1 \cos x + a_2 \sin x + a_3 \cos 2x + a_4 \sin 2x.
\]

If $U$ is the subspace of odd functions in $V$, then from the definitions of even and odd functions, $1(-x) = 1 = 1(x)$, $\cos(-x) = \cos(x)$, and $\cos(-2x) = \cos(2x)$, so $1, \cos x, \cos 2x \notin U$. Likewise $\sin(-x) = -\sin(x)$ and $\sin(-2x) = -\sin(2x)$, so $\sin x, \sin 2x \in U$. Because our list is a basis for $V$ and $U \subseteq V$, we have
\[
U = \{ v \in V \mid a_0 = a_1 = a_3 = 0 \} = \operatorname{span}\{\sin x, \sin 2x\}.
\]
By linear independence, $\{\sin x, \sin 2x\}$ is a basis for $U$, so $\dim U = 2$. Any basis can be "orthonormalized" via Gram-Schmidt; we want an orthonormal basis of $U$, say $\{e_1, e_2\}$.

\[
e_1 := \frac{\sin x}{\|\sin x\|} = \frac{\sin x}{\sqrt{\langle \sin x, \sin x \rangle}} = \frac{\sin x}{\sqrt{\int_{-\pi}^{\pi} \sin^2 x\, dx}} = \frac{\sin x}{\sqrt{\pi}}
\]
\[
e_2 := \frac{\sin 2x - \langle \sin 2x, \tfrac{\sin x}{\sqrt{\pi}} \rangle \tfrac{\sin x}{\sqrt{\pi}}}{\left\| \sin 2x - \langle \sin 2x, \tfrac{\sin x}{\sqrt{\pi}} \rangle \tfrac{\sin x}{\sqrt{\pi}} \right\|}, \qquad \text{where } \left\langle \sin 2x, \tfrac{\sin x}{\sqrt{\pi}} \right\rangle = \int_{-\pi}^{\pi} \frac{2 \sin^2 x \cos x}{\sqrt{\pi}}\, dx = 0,
\]
\[
\text{so } e_2 = \frac{\sin 2x - 0}{\|\sin 2x - 0\|} = \frac{\sin 2x}{\sqrt{\langle \sin 2x, \sin 2x \rangle}} = \frac{\sin 2x}{\sqrt{\pi}}.
\]
Axler gives $V = U \oplus U^\perp$, so $\dim V = \dim U + \dim U^\perp$, and $\dim U^\perp = 5 - 2 = 3$. We established above that $1, \cos x, \cos 2x \notin U$; moreover each of these is even, and the product of an even and an odd function is odd, so its integral over $[-\pi, \pi]$ vanishes. Hence each of $1, \cos x, \cos 2x$ is orthogonal to $U$ and lies in $U^\perp$. Since this list is linearly independent and of the correct size, $3$, it is a basis of $U^\perp$.
Let $e_3, e_4, e_5$ be an orthonormal basis (given by Gram-Schmidt) for $U^\perp$:
\[
e_3 := \frac{1}{\|1\|} = \frac{1}{\sqrt{\langle 1, 1 \rangle}} = \frac{1}{\sqrt{\int_{-\pi}^{\pi} 1\, dt}} = \frac{1}{\sqrt{2\pi}}
\]
\[
e_4 := \frac{\cos x - \langle \cos x, e_3 \rangle e_3}{\left\| \cos x - \langle \cos x, e_3 \rangle e_3 \right\|} = \frac{\cos x}{\sqrt{\pi}} \qquad (\text{since } \langle \cos x, e_3 \rangle = 0)
\]
\[
e_5 := \frac{\cos 2x - \langle \cos 2x, e_3 \rangle e_3 - \langle \cos 2x, e_4 \rangle e_4}{\left\| \cos 2x - \langle \cos 2x, e_3 \rangle e_3 - \langle \cos 2x, e_4 \rangle e_4 \right\|} = \frac{\cos 2x}{\sqrt{\pi}} \qquad (\text{both inner products vanish})
\]

So $\left\{ \frac{\sin x}{\sqrt{\pi}}, \frac{\sin 2x}{\sqrt{\pi}} \right\}$ is an orthonormal basis of $U$, and $\left\{ \frac{1}{\sqrt{2\pi}}, \frac{\cos x}{\sqrt{\pi}}, \frac{\cos 2x}{\sqrt{\pi}} \right\}$ is an orthonormal basis of $U^\perp$, where $U^\perp$, the "orthogonal complement" of $U$, is the subspace of $V$ consisting of all vectors orthogonal to $U$.

Prob 5. Suppose $T \in \mathcal{L}(V)$ and $U$ is a finite-dimensional subspace of $V$. Prove that $U$ is invariant under $T$ if and only if
\[
P_U T P_U = T P_U.
\]
Solution. First we prove the forward ($\implies$) direction. Suppose $U$ is invariant under $T$; then by definition of $T$-invariance, $T(u) \in U$ for all $u \in U$. For any $v \in V$ we can uniquely write $v = u + w$ with $u \in U$ and $w \in U^\perp$ (given by Axler's definition of orthogonal projection), and $P_U(v) = u$. Then for all $v \in V$,
\[
[P_U T P_U](v) = P_U(T(u)) = T(u) = T(P_U(v)) = [T P_U](v),
\]
where the middle equality holds because $T(u) \in U$ and $P_U$ fixes every vector of $U$. Because this holds for all $v \in V$, we obtain the desired statement $P_U T P_U = T P_U$.

Now we prove the backward ($\impliedby$) direction. Suppose $P_U T P_U = T P_U$. If we show $T(u) \in U$ for every $u \in U$, then $U$ is $T$-invariant as desired. Let $u \in U$. Since $U$ is a finite-dimensional subspace of $V$, the projection $P_U$ is defined, and $P_U(u) = u$ by definition of orthogonal projection. Then
\[
P_U[T(u)] = P_U[T(P_U(u))] \qquad \text{because } P_U(u) = u
\]
\[
= T(P_U(u)) \qquad \text{by hypothesis, } P_U T P_U = T P_U
\]
\[
= T(u) \qquad \text{because } P_U(u) = u.
\]
So $P_U[T(u)] = T(u)$, and since the range of $P_U$ is $U$, this implies $T(u) \in U$ for all $u \in U$. This precisely gives that $U$ is $T$-invariant, as desired.
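A small matrix illustration of this equivalence (a hypothetical example, not part of the proof): in $\mathbb{R}^2$ with $U = \operatorname{span}\{(1, 0)\}$, the orthogonal projection is $P_U = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$. An upper-triangular $T$ leaves $U$ invariant and satisfies the identity; a $T$ with a nonzero lower-left entry sends $e_1$ outside $U$, and the identity fails.

```python
def matmul(A, B):
    # Plain list-of-lists matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[1, 0], [0, 0]]        # orthogonal projection onto U = span{(1, 0)}
T_inv = [[2, 3], [0, 5]]    # upper-triangular: T(U) is contained in U
T_not = [[2, 3], [4, 5]]    # T(e1) = (2, 4) has a component outside U

def identity_holds(T):
    # Checks P T P == T P as matrices
    return matmul(P, matmul(T, P)) == matmul(T, P)

print(identity_holds(T_inv))  # True
print(identity_holds(T_not))  # False
```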
