
Fall 2019 Math 104 - HW 2 Solutions

(If you find any errors, please let your instructor know ASAP)

Problem 1. Let \(x_1, \dots, x_k \in \mathbb{R}^n\) be non-zero mutually orthogonal vectors. Show that \(\{x_1, \dots, x_k\}\) must be a linearly independent set.

Solution. That the \(x_i\) are mutually orthogonal and non-zero means that
\[
\langle x_i, x_j \rangle = x_i^T x_j = 0 \text{ for } i \neq j, \qquad \langle x_i, x_i \rangle \neq 0.
\]
Suppose that we have a relationship
\[
a_1 x_1 + a_2 x_2 + \cdots + a_k x_k = 0.
\]
Taking the inner product of both sides with \(x_i\), we have
\[
\langle x_i, a_1 x_1 \rangle + \cdots + \langle x_i, a_i x_i \rangle + \cdots + \langle x_i, a_k x_k \rangle = \langle x_i, 0 \rangle = 0.
\]
By mutual orthogonality, every term except the \(i\)th vanishes, so \(0 = \langle x_i, a_i x_i \rangle = a_i \langle x_i, x_i \rangle\). Because \(\langle x_i, x_i \rangle \neq 0\), we conclude that \(a_i = 0\). As \(i\) was arbitrary, \(a_i = 0\) for each \(i\). Thus \(\{x_1, \dots, x_k\}\) is a linearly independent set.
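As a quick numerical sanity check of this result (a sketch using NumPy; the specific vectors are chosen for illustration), mutually orthogonal non-zero columns give a diagonal Gram matrix and full column rank:

```python
import numpy as np

# Three mutually orthogonal, non-zero vectors in R^3 (illustrative choice).
x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([1.0, -1.0, 0.0])
x3 = np.array([0.0, 0.0, 2.0])
X = np.column_stack([x1, x2, x3])

# The Gram matrix <x_i, x_j> has zero off-diagonal entries ...
G = X.T @ X
assert np.allclose(G - np.diag(np.diag(G)), 0.0)

# ... and linear independence shows up as full column rank.
assert np.linalg.matrix_rank(X) == 3
```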
Problem 2. Let \(v_1, \dots, v_n\) be orthonormal vectors in \(\mathbb{R}^n\). Show that \(Av_1, \dots, Av_n\) are also orthonormal if and only if \(A\) is an orthogonal matrix.

Solution. First, suppose that \(A\) is orthogonal. Then \(A^T A = A A^T = I\). We check that the \(Av_i\) are orthonormal:
\[
\langle Av_i, Av_j \rangle = (Av_i)^T Av_j = v_i^T A^T A v_j = v_i^T (I) v_j = v_i^T v_j = \langle v_i, v_j \rangle.
\]
Thus if \(i = j\), then \(\langle Av_i, Av_i \rangle = \langle v_i, v_i \rangle = 1\), and if \(i \neq j\), then \(\langle Av_i, Av_j \rangle = \langle v_i, v_j \rangle = 0\). Thus the \(Av_i\) are orthonormal.
Conversely, suppose that the vectors \(Av_i\) are orthonormal. By the previous problem, the \(v_i\) are linearly independent, and there are \(n\) of them in \(\mathbb{R}^n\), so they form a basis. Write \(A\) in the basis given by the \(v_i\). Then the \(i\)th column of \(A\) is the vector \(Av_i\) expressed in the basis \(\{v_i\}\). As the \(Av_i\) are orthonormal, the columns of \(A\) are orthonormal, so \(A\) is orthogonal.
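The forward direction can be checked numerically (a sketch; the orthogonal matrix here is generated via QR factorization of a random matrix, an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random orthogonal matrix A from the QR factorization of a
# random 4x4 matrix; Q satisfies Q^T Q = I.
A, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.allclose(A.T @ A, np.eye(4))

# Orthonormal vectors v_1, ..., v_n: take the standard basis columns.
V = np.eye(4)

# The images A v_i are again orthonormal: (AV)^T (AV) = I.
AV = A @ V
assert np.allclose(AV.T @ AV, np.eye(4))
```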
Problem 3. Let \(P_n\) denote the vector space of polynomials of degree less than or equal to \(n\), of the form \(p(x) = p_0 + p_1 x + \cdots + p_n x^n\), where the coefficients \(p_i\) are all real. Let \(P_E\) denote the subspace of all even polynomials in \(P_n\), i.e., those that satisfy \(p(-x) = p(x)\). Similarly, let \(P_O\) denote the subspace of all odd polynomials, i.e., those satisfying \(p(-x) = -p(x)\). Show that \(P_n = P_E \oplus P_O\).

Solution. By definition, \(P_n = P_E \oplus P_O\) if \(P_E \cap P_O = \{0\}\) and \(P_n = P_E + P_O\).
First, we check that \(P_E \cap P_O = \{0\}\). Let \(f \in P_E \cap P_O\). Then \(f\) is a polynomial that is both even and odd, i.e., \(f(-x) = f(x)\) and \(f(-x) = -f(x)\). Thus \(f(x) = -f(x)\), or \(2f(x) = 0\). Thus \(f = 0\).
Now, we check that \(P_n = P_E + P_O\). Let \(f \in P_n\), and set
\[
f_E(x) = \frac{f(x) + f(-x)}{2}, \qquad f_O(x) = \frac{f(x) - f(-x)}{2}.
\]
Clearly \(f_E(x) + f_O(x) = f(x)\). We claim that \(f_E \in P_E\) and \(f_O \in P_O\). Indeed,
\[
f_E(-x) = \frac{f(-x) + f(x)}{2} = f_E(x), \qquad f_O(-x) = \frac{f(-x) - f(x)}{2} = -f_O(x).
\]
Thus we have written each element of \(P_n\) as a sum of an element of \(P_E\) and an element of \(P_O\), so \(P_E + P_O = P_n\).
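The even/odd decomposition is easy to carry out on the coefficient vector: since \(f(-x)\) has coefficients \(p_i(-1)^i\), the formulas above act coefficientwise (a sketch; the polynomial \(1 + 2x + 3x^2 + 4x^3\) is an illustrative example):

```python
import numpy as np

# Coefficients p_0, ..., p_n of f(x) = 1 + 2x + 3x^2 + 4x^3, ascending order.
p = np.array([1.0, 2.0, 3.0, 4.0])

# f(-x) has coefficients p_i * (-1)^i, so f_E and f_O follow the formulas above.
signs = (-1.0) ** np.arange(len(p))
p_even = (p + signs * p) / 2   # keeps even-degree terms: 1 + 3x^2
p_odd = (p - signs * p) / 2    # keeps odd-degree terms:  2x + 4x^3

# The two pieces sum back to f, and f_E is genuinely even: f_E(x) = f_E(-x).
assert np.allclose(p_even + p_odd, p)
x = 1.7
fE_at_x = np.polyval(p_even[::-1], x)    # polyval wants descending coefficients
fE_at_minus_x = np.polyval(p_even[::-1], -x)
assert np.isclose(fE_at_x, fE_at_minus_x)
```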
Problem 4. This problem examines nilpotent matrices.

(a) Find a square matrix \(A\), whose entries are not all zeros, such that \(A^2 = 0\).

(b) Exhibit a nonzero vector that belongs to the nullspace of the matrix that you just constructed.

(c) In general, prove that if a matrix \(B\) satisfies \(B^2 = 0\), then it cannot be invertible.

Solution. (a) Let
\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
\]
Then we see that \(A^2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}\).

(b) Let \(e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\). Then \(Ae_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}\), so \(e_1\) is in the nullspace of \(A\).

(c) Let \(B\) be a matrix such that \(B^2 = 0\). Then we see that
\[
0 = \det(0) = \det(B \cdot B) = \det(B)\det(B).
\]
As the only number whose square is 0 is 0, we see that \(\det(B) = 0\) and \(B\) is not invertible.
Alternatively, suppose for contradiction that \(B\) is invertible with inverse \(C\), so that \(CB = I\). Then
\[
I = CB = C(I)B = C(CB)B = C^2 B^2 = C^2 \cdot 0 = 0,
\]
but the identity matrix is not 0, so \(B\) must not have been invertible.
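Parts (a)–(c) can all be verified numerically with the concrete matrix above (a sketch using NumPy):

```python
import numpy as np

# The matrix from part (a): A is nonzero but A^2 = 0.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(A @ A, 0.0)

# Part (b): e_1 lies in the nullspace of A.
e1 = np.array([1.0, 0.0])
assert np.allclose(A @ e1, 0.0)

# Part (c): det(A)^2 = det(A^2) = 0 forces det(A) = 0, so A is singular.
assert np.isclose(np.linalg.det(A), 0.0)
```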
Problem 5. (a) Let
\[
A = \begin{pmatrix} 2 & 3 & 4 \\ 8 & 5 & 1 \end{pmatrix}
\]
and consider \(A\) as a linear transformation mapping \(\mathbb{R}^3\) to \(\mathbb{R}^2\). Find the matrix representation of \(A\) with respect to the basis
\[
\left\{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\}
\]
of \(\mathbb{R}^3\) and the basis
\[
\left\{ \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 1 \end{pmatrix} \right\}
\]
of \(\mathbb{R}^2\).

(b) Programming question.

Solution. (a) We need to express \(Av\), for each \(v\) in the first basis, in terms of the second basis. We compute:
\[
A \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 5 \\ 13 \end{pmatrix} = (-21)\begin{pmatrix} 3 \\ 1 \end{pmatrix} + 34\begin{pmatrix} 2 \\ 1 \end{pmatrix},
\]
\[
A \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 7 \\ 6 \end{pmatrix} = (-5)\begin{pmatrix} 3 \\ 1 \end{pmatrix} + 11\begin{pmatrix} 2 \\ 1 \end{pmatrix},
\]
\[
A \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 6 \\ 9 \end{pmatrix} = (-12)\begin{pmatrix} 3 \\ 1 \end{pmatrix} + 21\begin{pmatrix} 2 \\ 1 \end{pmatrix}.
\]
Therefore, the matrix representation of \(A\) with respect to these bases is
\[
\begin{pmatrix} -21 & -5 & -12 \\ 34 & 11 & 21 \end{pmatrix}.
\]
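The same computation can be done in one line of linear algebra: if the basis vectors of \(\mathbb{R}^3\) form the columns of \(B_3\) and those of \(\mathbb{R}^2\) form the columns of \(B_2\), then the representation is \(B_2^{-1} A B_3\) (a sketch using NumPy):

```python
import numpy as np

A = np.array([[2.0, 3.0, 4.0],
              [8.0, 5.0, 1.0]])

# Columns of B3 are the R^3 basis vectors; columns of B2 the R^2 basis vectors.
B3 = np.array([[1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0],
               [0.0, 1.0, 1.0]])
B2 = np.array([[3.0, 2.0],
               [1.0, 1.0]])

# Column j of M solves B2 @ M[:, j] = A @ (j-th basis vector of R^3),
# i.e. M = B2^{-1} A B3.
M = np.linalg.solve(B2, A @ B3)

assert np.allclose(M, [[-21.0, -5.0, -12.0],
                       [34.0, 11.0, 21.0]])
```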

(b) The programming part can be found in another file.
