HW 10
SID: 24756460
Math 110, Spring 2019.
Homework 10, due April 14.
Prob 1. Suppose e1, …, em is an orthonormal list of vectors in V and v ∈ V. Prove that v ∈ span{e1, …, em} if and only if ‖v‖² = ∑_{j=1}^{m} |⟨v, ej⟩|².

Solution. We first prove the forward direction ( =⇒ ). Suppose v ∈ span{e1, …, em}. Because an orthonormal list of vectors is linearly independent (Axler 6.26), we can write v uniquely as

    v = ∑_{j=1}^{m} cj ej = c1 e1 + ⋯ + cm em    (1)

for some scalars cj ∈ 𝔽. Because the list e1, …, em is orthonormal, "taking the inner product of both sides of this equation with ej gives ⟨v, ej⟩ = cj" (as in Axler's proof of 6.30). To see this explicitly, use ⟨ej, ek⟩ = 0 for j ≠ k and ⟨ej, ej⟩ = 1:

    ⟨v, e1⟩ = ⟨c1 e1 + ⋯ + cm em, e1⟩ = c1 + 0 + ⋯ + 0
    ⟨v, e2⟩ = ⟨c1 e1 + ⋯ + cm em, e2⟩ = 0 + c2 + ⋯ + 0
    ⋮
    ⟨v, em⟩ = ⟨c1 e1 + ⋯ + cm em, em⟩ = 0 + ⋯ + 0 + cm
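The coefficient-recovery identity ⟨v, ej⟩ = cj can be illustrated numerically; the orthonormal list and scalars below are made-up examples in ℝ³ with the standard inner product, not part of the proof:

```python
# Hypothetical example: recover the coefficients c_j of a linear
# combination of orthonormal vectors in R^3 by taking inner products.

def dot(u, v):
    """Standard inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

# e1, e2 form an orthonormal list (made-up example).
e1 = [1.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0]

c1, c2 = 2.0, -3.0
v = [c1 * a + c2 * b for a, b in zip(e1, e2)]  # v = c1*e1 + c2*e2

# Taking the inner product with e_j picks out c_j, as in the display above.
assert dot(v, e1) == c1
assert dot(v, e2) == c2
```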
Recall that Axler 6.25 gives: "If e1, …, em is an orthonormal list of vectors in V, then for all a1, …, am ∈ 𝔽, ‖a1 e1 + ⋯ + am em‖² = |a1|² + ⋯ + |am|²." Applying this to (1) with aj = cj = ⟨v, ej⟩ gives

    ‖v‖² = |c1|² + ⋯ + |cm|² = ∑_{j=1}^{m} |⟨v, ej⟩|²,

as desired.
Now we prove the backwards ( ⇐= ) direction. Given ‖v‖² = ∑_{j=1}^{m} |⟨v, ej⟩|², we wish to show that v ∈ span{e1, …, em}. Let

    u = ∑_{j=1}^{m} ⟨v, ej⟩ ej ∈ span{e1, …, em}.

For each k we have ⟨v − u, ek⟩ = ⟨v, ek⟩ − ⟨v, ek⟩ = 0, so v − u is orthogonal to u. The Pythagorean theorem (Axler 6.13), together with Axler 6.25 applied to u, then gives

    ‖v‖² = ‖u‖² + ‖v − u‖² = ∑_{j=1}^{m} |⟨v, ej⟩|² + ‖v − u‖².

By hypothesis the sum on the right equals ‖v‖², so ‖v − u‖² = 0, hence v = u ∈ span{e1, …, em}.
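The equivalence just proved can be sanity-checked numerically; the vectors below are hypothetical examples in ℝ³ with the standard inner product (an illustration, not part of the proof):

```python
# Check: ||v||^2 equals sum_j |<v, e_j>|^2 exactly when v lies in
# span(e1, e2).  All vectors here are made-up examples in R^3.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e1 = [1.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0]

def parseval_gap(v):
    """||v||^2 - (|<v,e1>|^2 + |<v,e2>|^2); zero iff v is in the span."""
    return dot(v, v) - (dot(v, e1) ** 2 + dot(v, e2) ** 2)

v_in = [2.0, -3.0, 0.0]   # lies in span(e1, e2)
v_out = [2.0, -3.0, 1.0]  # has a nonzero component orthogonal to the span

assert parseval_gap(v_in) == 0.0   # equality holds on the span
assert parseval_gap(v_out) == 1.0  # gap = ||v - u||^2 = 1 off the span
```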
Prob 2. Consider the space P3(ℝ) with the inner product

    ⟨f, g⟩ = ∫_{−1}^{1} f(x) g(x) dx.
With this inner product on P3(ℝ) and the ordered basis {1, x, x², x³}, we simply follow the Gram–Schmidt procedure (Axler 6.31) to produce an orthonormal basis e1, e2, e3, e4. Explicit calculations for the required inner products are as follows:
    ⟨x, e1⟩ = ⟨x, 1/√2⟩ = (1/√2) ∫_{−1}^{1} x dx = (1/(2√2)) [x²]_{−1}^{1} = 0

    ⟨x, x⟩ = ∫_{−1}^{1} x² dx = (1/3)[x³]_{−1}^{1} = 2/3,  so e2 = x/‖x‖ = (√6/2) x

    ⟨x², e1⟩ = (1/√2) ∫_{−1}^{1} x² dx = (1/(3√2))[x³]_{−1}^{1} = √2/3

    ⟨x², e2⟩ = ∫_{−1}^{1} (√6/2) x · x² dx = (√6/8)[x⁴]_{−1}^{1} = 0

    ⟨x² − 1/3, x² − 1/3⟩ = ∫_{−1}^{1} (x² − 1/3)² dx = [x⁵/5 − 2x³/9 + x/9]_{−1}^{1} = 8/45,  so e3 = (x² − 1/3)/√(8/45) = (√10/4)(3x² − 1)

    ⟨x³, e1⟩ = ∫_{−1}^{1} x³ (1/√2) dx = 0

    ⟨x³, e2⟩ = ∫_{−1}^{1} x³ (√6/2) x dx = (√6/10)[x⁵]_{−1}^{1} = √6/5

    ⟨x³, e3⟩ = ∫_{−1}^{1} x³ (√10/4)(3x² − 1) dx = (√10/4)[x⁶/2 − x⁴/4]_{−1}^{1} = 0

    ⟨x³ − 3x/5, x³ − 3x/5⟩ = ∫_{−1}^{1} (x³ − 3x/5)² dx = ∫_{−1}^{1} [x⁶ − (6/5)x⁴ + (9/25)x²] dx = [x⁷/7 − 6x⁵/25 + 3x³/25]_{−1}^{1} = 2/7 − 12/25 + 6/25 = 8/175

Normalizing the last vector gives e4 = (x³ − 3x/5)/√(8/175) = (√14/4)(5x³ − 3x), so the resulting orthonormal basis of P3(ℝ) is {1/√2, (√6/2)x, (√10/4)(3x² − 1), (√14/4)(5x³ − 3x)}.
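The rational values above (2/3, 8/45, 8/175) can be double-checked with exact arithmetic; the sketch below (stdlib only, helper names my own) represents a polynomial as a list of coefficients indexed by power:

```python
from fractions import Fraction as F

def polymul(p, q):
    """Product of two coefficient lists (index = power of x)."""
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def inner(p, q):
    """<p, q> = integral of p*q over [-1, 1]; odd powers integrate to 0."""
    return sum(c * F(2, n + 1) for n, c in enumerate(polymul(p, q)) if n % 2 == 0)

x = [F(0), F(1)]                         # x
x2_shift = [F(-1, 3), F(0), F(1)]        # x^2 - 1/3
x3_shift = [F(0), F(-3, 5), F(0), F(1)]  # x^3 - 3x/5

assert inner(x, x) == F(2, 3)                  # matches 2/3 above
assert inner(x2_shift, x2_shift) == F(8, 45)   # matches 8/45 above
assert inner(x3_shift, x3_shift) == F(8, 175)  # matches 8/175 above
```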
Prob 3. Find p ∈ P3(ℝ) such that p(0) = 0, p′(0) = 0, and

    ∫_{0}^{1} |1 + 4x − p(x)|² dx

is as small as possible.
Solution. Consider the inner product defined for f, g ∈ P3(ℝ) by ⟨f, g⟩ := ∫_{0}^{1} f(x) g(x) dx. We first verify this satisfies the requirements of an inner product. By linearity of the integral we have, for λ ∈ ℝ and u, v, w ∈ P3(ℝ), additivity (⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩), homogeneity in the first slot (⟨λu, v⟩ = λ⟨u, v⟩), and conjugate symmetry (trivially, since here 𝔽 = ℝ). Positivity holds because ⟨v, v⟩ = ∫_{0}^{1} v(x)² dx ≥ 0, and a polynomial whose square integrates to 0 over [0, 1] must vanish on [0, 1] and hence be the zero polynomial, so ⟨v, v⟩ = 0 ⇐⇒ v = 0. As all the required properties of an inner product are satisfied, our inner product is well defined.
Because p ∈ P3(ℝ), we can write, for some scalars a0, a1, a2, a3 ∈ ℝ, p(x) = a0 + a1 x + a2 x² + a3 x³. However, p(0) = 0 =⇒ a0 = 0 and p′(0) = 0 =⇒ a1 = 0. So we can write

    p(x) = a2 x² + a3 x³.
Let U = span{x², x³}, the subspace of P3(ℝ) consisting of all such p. Since the given integral equals ‖(1 + 4x) − p‖² in this inner product, minimizing the integral is equivalent to finding the u ∈ U that minimizes ‖(1 + 4x) − u‖; in other words, the u ∈ U at minimal "distance" from 1 + 4x. As proven in Axler (6.56), this occurs precisely for u = P_U(1 + 4x), which we compute from an orthonormal basis of U obtained by applying Gram–Schmidt to x², x³:
    e1 := x²/‖x²‖ = x²/√(⟨x², x²⟩) = x²/√(∫_{0}^{1} x⁴ dx) = x²/√(1/5) = √5 x²

    e2 := (x³ − ⟨x³, e1⟩e1)/‖x³ − ⟨x³, e1⟩e1‖, where ⟨x³, e1⟩e1 = (√5 ∫_{0}^{1} x⁵ dx)(√5 x²) = 5x²/6, so

    e2 = (x³ − 5x²/6)/√(∫_{0}^{1} (x³ − 5x²/6)² dx) = (x³ − 5x²/6)/√(1/252) = √7(6x³ − 5x²)

Finally, since ⟨1 + 4x, e1⟩ = √5 ∫_{0}^{1} (x² + 4x³) dx = 4√5/3 and ⟨1 + 4x, e2⟩ = √7 ∫_{0}^{1} (24x⁴ − 14x³ − 5x²) dx = −11√7/30, we get

    p = P_U(1 + 4x) = ⟨1 + 4x, e1⟩e1 + ⟨1 + 4x, e2⟩e2 = (4√5/3)(√5 x²) − (11√7/30)√7(6x³ − 5x²) = (39/2)x² − (77/5)x³,

which is the minimizer.
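As a sanity check, e1 = √5 x² and e2 = √7(6x³ − 5x²) are indeed orthonormal under ⟨f, g⟩ = ∫₀¹ f g dx; working with squared norms keeps the arithmetic rational (stdlib sketch, helper names my own):

```python
from fractions import Fraction as F

def polymul(p, q):
    """Product of two coefficient lists (index = power of x)."""
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def integral01(p):
    """Integral of a coefficient list over [0, 1]."""
    return sum(c * F(1, n + 1) for n, c in enumerate(p))

p1 = [F(0), F(0), F(1)]         # x^2          (e1 = sqrt(5) * p1)
p2 = [F(0), F(0), F(-5), F(6)]  # 6x^3 - 5x^2  (e2 = sqrt(7) * p2)

assert 5 * integral01(polymul(p1, p1)) == 1  # ||e1||^2 = 5 * <p1, p1> = 1
assert 7 * integral01(polymul(p2, p2)) == 1  # ||e2||^2 = 7 * <p2, p2> = 1
assert integral01(polymul(p1, p2)) == 0      # <e1, e2> = 0
```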
Prob 4. Consider a complex vector space V = span{1, cos x, sin x, cos 2x, sin 2x} with an inner product

    ⟨f, g⟩ := ∫_{−π}^{π} f(t) g(t) dt.

Let U be the subspace of odd functions in V. What is U⊥? Find an orthonormal basis for both U and U⊥.
Solution. We are given V = span{1, cos x, sin x, cos 2x, sin 2x}. First we verify these are linearly independent. Suppose for some ai ∈ ℂ that a0 + a1 cos x + a2 sin x + a3 cos 2x + a4 sin 2x = 0 as a function of x. Under the given inner product, these five functions are pairwise orthogonal (the integral over [−π, π] of the product of any two distinct functions in the list vanishes) and nonzero, so taking the inner product of this relation with each function in turn forces every ai = 0. Hence {1, cos x, sin x, cos 2x, sin 2x} is a linearly independent list spanning V (and thus a basis of V). Then any v ∈ V can be written uniquely as v = a0 + a1 cos x + a2 sin x + a3 cos 2x + a4 sin 2x for some ai ∈ ℂ.
If U is the subspace of odd functions in V, then from the definitions of even and odd functions, 1(−x) = 1 = 1(x), cos(−x) = cos(x), and cos(−2x) = cos(2x), so 1, cos x, cos 2x ∉ U. Likewise sin(−x) = −sin(x) and sin(−2x) = −sin(2x), so sin x, sin 2x ∈ U. Because our list is a basis for V, and U ⊂ V, we have:
U = {v ∈ V | a0 = a1 = a3 = 0}
It is easily verified from linear independence that {sin x, sin 2x} is a basis for U, so dim U = 2. Of course, any basis can be "orthonormalized" via Gram–Schmidt; here ⟨sin x, sin 2x⟩ = ∫_{−π}^{π} sin x sin 2x dx = 0 already, so Gram–Schmidt merely normalizes. Since ‖sin x‖² = ∫_{−π}^{π} sin² x dx = π, and likewise ‖sin 2x‖² = π, an orthonormal basis of U is {e1, e2} = {sin x/√π, sin 2x/√π}.
Axler gives V = U ⊕ U⊥, so dim U⊥ = dim V − dim U = 5 − 2 = 3. We established above that 1, cos x, cos 2x ∉ U; moreover each of these is orthogonal to sin x and sin 2x (each such product is odd, so its integral over [−π, π] vanishes), so 1, cos x, cos 2x ∈ U⊥. Since this list is linearly independent and of the correct size, 3, it is a basis of U⊥.
Let e3, e4, e5 be an orthonormal basis (given by Gram–Schmidt) for U⊥. As before, the list 1, cos x, cos 2x is already orthogonal, so we only normalize. For example,

    e3 := 1/‖1‖ = 1/√(⟨1, 1⟩) = 1/√(∫_{−π}^{π} 1 dx) = 1/√(2π)
So we have that {sin x/√π, sin 2x/√π} is an orthonormal basis of U, and {1/√(2π), cos x/√π, cos 2x/√π} is an orthonormal basis of U⊥, where U⊥ is the "orthogonal complement" of U: the subspace of V consisting of all vectors orthogonal to every vector of U.
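The claimed orthonormality can also be checked numerically. The sketch below uses the composite trapezoid rule, which is very accurate for smooth periodic integrands over a full period; the sample count and tolerance are arbitrary choices of mine:

```python
import math

def inner(f, g, n=4096):
    """Approximate <f, g> = integral of f*g over [-pi, pi] via the
    trapezoid rule (endpoints coincide for 2*pi-periodic integrands)."""
    h = 2 * math.pi / n
    return h * sum(f(-math.pi + k * h) * g(-math.pi + k * h) for k in range(n))

basis_U = [lambda x: math.sin(x) / math.sqrt(math.pi),
           lambda x: math.sin(2 * x) / math.sqrt(math.pi)]
basis_Uperp = [lambda x: 1 / math.sqrt(2 * math.pi),
               lambda x: math.cos(x) / math.sqrt(math.pi),
               lambda x: math.cos(2 * x) / math.sqrt(math.pi)]

funcs = basis_U + basis_Uperp
for i, f in enumerate(funcs):
    for j, g in enumerate(funcs):
        target = 1.0 if i == j else 0.0
        assert abs(inner(f, g) - target) < 1e-9  # orthonormal, and U ⟂ U⊥
```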
Prob 5. Suppose T ∈ L(V ) and U is a finite-dimensional subspace of V . Prove that U is invariant under
T if and only if
PU T PU = T PU .
Solution. First we prove the forward ( =⇒ ) direction. By Axler's definition of the orthogonal projection P_U, every v ∈ V can be written uniquely as v = u + w with u ∈ U and w ∈ U⊥, and P_U(v) = u. Suppose U is invariant under T. Then by definition of T-invariance, T(u) ∈ U for all u ∈ U. Hence for all v ∈ V,

    (P_U T P_U)(v) = P_U(T(u)) = T(u) = (T P_U)(v),

where the middle equality holds because T(u) ∈ U and P_U fixes every vector of U. Because this holds for all v ∈ V, this gives our desired statement P_U T P_U = T P_U.

Now we prove the backward ( ⇐= ) direction. Suppose P_U T P_U = T P_U, and let u ∈ U. Since P_U(u) = u,

    T(u) = (T P_U)(u) = (P_U T P_U)(u) = P_U(T(u)).

The range of P_U is U, so T(u) = P_U(T(u)) ∈ U for all u ∈ U. This precisely gives that U is T-invariant, as desired.
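A small matrix sanity check of this equivalence (a hypothetical example with V = ℝ³, U the span of the first two standard basis vectors, and P_U the corresponding orthogonal projection):

```python
def matmul(A, B):
    """Product of two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]  # orthogonal projection onto U = span{(1,0,0), (0,1,0)}

# T_inv maps U into U (its first two columns have zero third entry);
# T_not sends (1,0,0) to a vector with a component outside U.
T_inv = [[1, 2, 7], [3, 4, 8], [0, 0, 5]]
T_not = [[1, 2, 0], [3, 4, 0], [1, 0, 5]]

assert matmul(P, matmul(T_inv, P)) == matmul(T_inv, P)  # invariant: equality
assert matmul(P, matmul(T_not, P)) != matmul(T_not, P)  # not invariant
```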