223 Practice Problems Part 2
16. Find all vectors normal to $\vec{v} = \begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix}$.
17. Find all vectors normal to $\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 2 \\ 0 \\ 2 \end{bmatrix}$.
18. Show that the set $X = \left\{ \begin{bmatrix} t \\ 3t \end{bmatrix} : \text{for some } t \in \mathbb{R} \right\}$ is a subspace of $\mathbb{R}^2$ using the definition of subspace.
There are three statements we need to show.
1. X is non-empty.
2. X is closed under vector addition.
3. X is closed under scalar multiplication.
1. $X$ is non-empty: taking $t = 0$ gives $\vec{0} = \begin{bmatrix} 0 \\ 3(0) \end{bmatrix} \in X$.
2. Let $\vec{u} = \begin{bmatrix} x_1 \\ 3x_1 \end{bmatrix}$ and $\vec{w} = \begin{bmatrix} x_2 \\ 3x_2 \end{bmatrix}$ be in $X$. Then $\vec{u} + \vec{w} = \begin{bmatrix} x_1 + x_2 \\ 3(x_1 + x_2) \end{bmatrix} \in X$.
3. Let $\alpha \in \mathbb{R}$ and $\vec{v} = \begin{bmatrix} x_1 \\ 3x_1 \end{bmatrix} \in X$. Then
$$\alpha\vec{v} = \alpha\begin{bmatrix} x_1 \\ 3x_1 \end{bmatrix} = \begin{bmatrix} \alpha x_1 \\ 3(\alpha x_1) \end{bmatrix} \in X.$$
19. Consider the plane $P \subseteq \mathbb{R}^3$ given in vector form by $\vec{x} = t\vec{d}_1 + s\vec{d}_2$, where
$$\vec{d}_1 = \begin{bmatrix} 2 \\ 1 \\ 9 \end{bmatrix} \quad\text{and}\quad \vec{d}_2 = \begin{bmatrix} 4 \\ -2 \\ 6 \end{bmatrix}.$$
(a) Express P in normal form.
Let $\vec{p} = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$ be a vector which is orthogonal to $\vec{d}_1$ and $\vec{d}_2$. By definition we have that $M\vec{p} = \vec{0}$, where the rows of $M$ are given by $\vec{d}_1$ and $\vec{d}_2$. Notice that
$$\operatorname{RREF}\left(\left[\begin{array}{ccc|c} 2 & 1 & 9 & 0 \\ 4 & -2 & 6 & 0 \end{array}\right]\right) = \left[\begin{array}{ccc|c} 1 & 0 & 3 & 0 \\ 0 & 1 & 3 & 0 \end{array}\right],$$
and so $\vec{n} = \begin{bmatrix} -3t \\ -3t \\ t \end{bmatrix}$ is the general solution to the desired system of equations. Finally, because $P$ passes through the origin, we know a normal form of $P$ is $\begin{bmatrix} -3 \\ -3 \\ 1 \end{bmatrix} \cdot \vec{x} = 0$.
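As an optional sanity check, the normal vector found above can be verified numerically; the numpy sketch below (using the entries of $\vec{d}_1$, $\vec{d}_2$, and $\vec{n}$ given above) simply confirms that both dot products vanish.

```python
import numpy as np

# Optional check: the normal vector from part (a) should be orthogonal
# to both direction vectors of the plane P.
d1 = np.array([2, 1, 9])
d2 = np.array([4, -2, 6])
n = np.array([-3, -3, 1])

print(n @ d1, n @ d2)  # prints: 0 0
```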
(b) Let $\vec{n}$ be the vector normal to $P$ you found in the previous part and define $\vec{v} = 2\vec{d}_1 + 3\vec{d}_2 + 5\vec{n}$. Determine $\operatorname{proj}_P(\vec{v})$.
Since $2\vec{d}_1 + 3\vec{d}_2$ lies in the plane $P$ and $5\vec{n}$ is orthogonal to $P$, we have $\operatorname{proj}_P(\vec{v}) = 2\vec{d}_1 + 3\vec{d}_2$.
This homogeneous system has a unique solution exactly when rref(M ) has no free variable columns, or
equivalently, rref(M ) has k pivot columns.
1. $T(\vec{v} + \vec{w}) = T(\vec{v}) + T(\vec{w})$ for all $\vec{v}, \vec{w} \in \mathbb{R}^2$,
2. $T(c\vec{v}) = cT(\vec{v})$ for all $\vec{v} \in \mathbb{R}^2$ and $c \in \mathbb{R}$.
Let us proceed:
1. Let $\vec{v} = \begin{bmatrix} x_1 \\ y_1 \end{bmatrix}$, $\vec{w} = \begin{bmatrix} x_2 \\ y_2 \end{bmatrix}$. Then
$$T(\vec{v} + \vec{w}) = T\left(\begin{bmatrix} x_1 + x_2 \\ y_1 + y_2 \end{bmatrix}\right) = \begin{bmatrix} 3x_1 + 3x_2 \\ y_1 + y_2 \end{bmatrix} = \begin{bmatrix} 3x_1 \\ y_1 \end{bmatrix} + \begin{bmatrix} 3x_2 \\ y_2 \end{bmatrix} = T(\vec{v}) + T(\vec{w}).$$
23. Let $R : \mathbb{R}^2 \to \mathbb{R}^2$ be the rotation by 90 degrees clockwise. Find the matrix representation of $R$.
$$M_R = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$
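As an optional check, the matrix above can be applied to the standard basis vectors; a 90-degree clockwise rotation should send $\vec{e}_1$ to $(0, -1)$ and $\vec{e}_2$ to $(1, 0)$. A short numpy sketch:

```python
import numpy as np

# Optional check: a 90-degree clockwise rotation sends
# e1 = (1, 0) to (0, -1) and e2 = (0, 1) to (1, 0).
MR = np.array([[0, 1],
               [-1, 0]])
print(MR @ np.array([1, 0]))  # [ 0 -1]
print(MR @ np.array([0, 1]))  # [1 0]
```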
24. Find $\operatorname{col}(A)$ for $A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 2 & 3 \end{bmatrix}$.
Since the first two columns of A are linearly independent and span R2 , we conclude that col(A) = R2 .
25. Find $\operatorname{null}(A)$ for $A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 2 & 3 \end{bmatrix}$.
We know that $\operatorname{null}(A) = \operatorname{null}(\operatorname{rref}(A))$. Row reduction gives us
$$\operatorname{rref}(A) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 3/2 \end{bmatrix}.$$
From here we conclude that $\operatorname{null}(A) = \operatorname{span}\left\{\begin{bmatrix} 0 \\ -3/2 \\ 1 \end{bmatrix}\right\}$.
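As an optional check, multiplying $A$ by the spanning vector of $\operatorname{null}(A)$ found above should give the zero vector; a short numpy sketch:

```python
import numpy as np

# Optional check: A times the spanning vector of null(A) should be the zero vector.
A = np.array([[1, 2, 3],
              [2, 2, 3]])
v = np.array([0, -3/2, 1])
print(A @ v)  # [0. 0.]
```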
To compute the rank, looking at either the row space or the column space is equally reasonable. For
example,
$$\operatorname{col}(M) = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ 2 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}\right\} = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}\right\},$$
as the third column is a linear combination of the first two. Therefore, $\operatorname{rank}(M) = 3$. The rank-nullity theorem quickly implies that $\operatorname{nullity}(M) = 4 - \operatorname{rank}(M) = 4 - 3 = 1$.
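As an optional check (assuming $M$ is the $4 \times 4$ matrix whose columns are the four vectors listed above), the rank and nullity can be computed numerically:

```python
import numpy as np

# Optional check, assuming M has the four columns listed above.
M = np.column_stack([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [3, 2, 0, 0],
                     [0, 0, 0, 1]])
rank = np.linalg.matrix_rank(M)
print(rank, M.shape[1] - rank)  # 3 1  (rank and nullity)
```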
27. Explain how row reduction can be used to find the rank and nullity of a matrix.
First of all, recall that the rank and the nullity are preserved by row operations. Therefore, $\operatorname{rank}(A) = \operatorname{rank}(\operatorname{rref}(A))$ and $\operatorname{nullity}(A) = \operatorname{nullity}(\operatorname{rref}(A))$. Moreover,
$$\operatorname{rank}(\operatorname{rref}(A)) = \text{the number of pivot columns in the row-reduced form}$$
and
$$\operatorname{nullity}(\operatorname{rref}(A)) = \text{the number of free variable columns in the row-reduced form}.$$
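A small sketch of this procedure using sympy (the example matrix below is just the $A$ from problem 25, chosen for illustration):

```python
import sympy as sp

# Row reduce, then count pivot columns (rank) and free-variable columns (nullity).
A = sp.Matrix([[1, 2, 3],
               [2, 2, 3]])
rref_A, pivot_cols = A.rref()   # pivot_cols lists the indices of the pivot columns
rank = len(pivot_cols)
nullity = A.cols - rank
print(rank, nullity)            # 2 1
```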
29. Let $B = \begin{bmatrix} 2 & 8 \\ 0 & 4 \end{bmatrix}$. Use row operations to reduce $B$ to the identity matrix. Find $B^{-1}$ in terms of elementary matrices.
There are many ways to reduce $B$ to the identity matrix; we propose one that uses three elementary row operations.
$$\left[\begin{array}{cc|cc} 2 & 8 & 1 & 0 \\ 0 & 4 & 0 & 1 \end{array}\right] \xrightarrow{R_1 \mapsto R_1 - 2R_2} \left[\begin{array}{cc|cc} 2 & 0 & 1 & -2 \\ 0 & 4 & 0 & 1 \end{array}\right] \xrightarrow{R_1 \mapsto \frac{1}{2}R_1} \left[\begin{array}{cc|cc} 1 & 0 & \frac{1}{2} & -1 \\ 0 & 4 & 0 & 1 \end{array}\right] \xrightarrow{R_2 \mapsto \frac{1}{4}R_2} \left[\begin{array}{cc|cc} 1 & 0 & \frac{1}{2} & -1 \\ 0 & 1 & 0 & \frac{1}{4} \end{array}\right].$$
If we denote
$$E_1 = \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix}, \quad E_2 = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & 1 \end{bmatrix}, \quad E_3 = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{4} \end{bmatrix},$$
then
$$B^{-1} = E_3 E_2 E_1.$$
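As an optional check, the product $E_3E_2E_1$ can be compared against $B^{-1}$ computed directly:

```python
import numpy as np

# Optional check: the product of the elementary matrices equals B^{-1}.
B = np.array([[2., 8.],
              [0., 4.]])
E1 = np.array([[1., -2.],
               [0.,  1.]])
E2 = np.array([[0.5, 0.],
               [0.,  1.]])
E3 = np.array([[1., 0.],
               [0., 0.25]])
print(E3 @ E2 @ E1)      # [[ 0.5  -1.  ] [ 0.    0.25]]
print(np.linalg.inv(B))  # same matrix
```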
30. Let $B = \{\vec{b}_1, \vec{b}_2\}$ be an ordered basis. Define $T : \mathbb{R}^2 \to \mathbb{R}^2$ by $T(\vec{b}_1) = \vec{b}_1 + \vec{b}_2$, $T(\vec{b}_2) = 5\vec{b}_2$. Find $[T]_B$.
Recall that $[T]_B$ is defined as follows: for any $\vec{x} \in \mathbb{R}^n$ we should have $[T\vec{x}]_B = [T]_B[\vec{x}]_B$. In particular,
$$[T\vec{b}_1]_B = [T]_B[\vec{b}_1]_B = [T]_B\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad [T\vec{b}_2]_B = [T]_B[\vec{b}_2]_B = [T]_B\begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
Therefore, [T ~b1 ]B and [T ~b2 ]B provide the first and the second columns of the matrix [T ]B , respectively. Now
we use the identities in the statement of the problem to derive
$$[T\vec{b}_1]_B = [\vec{b}_1 + \vec{b}_2]_B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad [T\vec{b}_2]_B = [5\vec{b}_2]_B = \begin{bmatrix} 0 \\ 5 \end{bmatrix}.$$
Therefore,
$$[T]_B = \begin{bmatrix} 1 & 0 \\ 1 & 5 \end{bmatrix}.$$
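As an optional numerical check, one can pick a concrete basis (the vectors $\vec{b}_1 = (2,1)$ and $\vec{b}_2 = (1,3)$ below are an arbitrary choice made only for this sketch), build the standard matrix of $T$ from $T(\vec{b}_1) = \vec{b}_1 + \vec{b}_2$ and $T(\vec{b}_2) = 5\vec{b}_2$, and confirm that changing to the basis $B$ recovers $[T]_B$:

```python
import numpy as np

# Optional check with an arbitrary concrete basis b1, b2 (chosen only for illustration).
b1 = np.array([2., 1.])
b2 = np.array([1., 3.])
P = np.column_stack([b1, b2])                              # takes B-coordinates to standard coordinates
A = np.column_stack([b1 + b2, 5 * b2]) @ np.linalg.inv(P)  # standard matrix of T
print(np.linalg.inv(P) @ A @ P)                            # [[1. 0.] [1. 5.]] up to rounding; this is [T]_B
```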
31. Let $E$ be the standard basis, and let $B = \{\vec{b}_1, \vec{b}_2\}$, where $\vec{b}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\vec{b}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.
(a) Find the change of basis matrix from B to E.
(b) Find the change of basis matrix from E to B.
Recall that
$$[E \leftarrow B][\vec{x}]_B = [\vec{x}]_E, \qquad [B \leftarrow E][\vec{x}]_E = [\vec{x}]_B.$$
This also implies that $[E \leftarrow B]^{-1} = [B \leftarrow E]$. Because we have already expressed $B$ in terms of $E$, we can immediately say
$$[E \leftarrow B] = [\vec{b}_1 \,|\, \vec{b}_2] = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$
Now it remains to find the inverse of $[E \leftarrow B]$, and we will have
$$[B \leftarrow E] = [E \leftarrow B]^{-1} = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$
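As an optional check, the two change of basis matrices above should multiply to the identity:

```python
import numpy as np

# Optional check: [B <- E] is the inverse of [E <- B].
E_from_B = np.array([[1., 1.],
                     [1., -1.]])
B_from_E = 0.5 * np.array([[1., 1.],
                           [1., -1.]])
print(B_from_E @ E_from_B)  # identity matrix
```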
32. Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $T(\vec{e}_1) = \vec{e}_1$ and $T(\vec{e}_2) = 2\vec{e}_1 + \vec{e}_2$. Find $\operatorname{Vol}(T(C_2))$.
This volume equals the determinant of the matrix $\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$, which is 1. Alternatively, notice that this transformation corresponds to the row operation $R_1 \mapsto R_1 + 2R_2$, which does not change volumes.
33. Explain in words what each elementary matrix does.
(a) $E_1 = \begin{bmatrix} 1 & \alpha \\ 0 & 1 \end{bmatrix}$
(b) $E_2 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$
(c) $E_3 = \begin{bmatrix} \alpha & 0 \\ 0 & 1 \end{bmatrix}$
(a) This is an elementary matrix which corresponds to the row operation that adds an $\alpha$-multiple of the second row to the first row.
(b) This elementary matrix corresponds to the row operation which swaps the two rows. Or, equivalently, it swaps the $x$- and $y$-coordinates.
(c) This elementary matrix stretches one side of the unit square by $\alpha$. Or, equivalently, it corresponds to the row operation which multiplies the first row by $\alpha$.
36. Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $T(\vec{e}_1) = 2\vec{e}_1 + \vec{e}_2$ and $T(\vec{e}_2) = \vec{e}_1 + 2\vec{e}_2$. Find the eigenvalues for $T$ and for each eigenvalue find a corresponding eigenvector.
$$\vec{v}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \ \lambda_1 = 3; \qquad \vec{v}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \ \lambda_2 = 1.$$
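As an optional check, the standard matrix of $T$ is $\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$, and the eigenpairs stated above can be verified numerically:

```python
import numpy as np

# Optional check of the eigenpairs stated above.
A = np.array([[2., 1.],
              [1., 2.]])
vals, _ = np.linalg.eig(A)
print(vals)                     # eigenvalues 3 and 1 (order may vary)
print(A @ np.array([1., 1.]))   # [3. 3.]   = 3 * (1, 1)
print(A @ np.array([1., -1.]))  # [ 1. -1.] = 1 * (1, -1)
```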
37. Find the eigenvalues and a corresponding eigenvector for each eigenvalue of the matrix
$$B = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}.$$
38. Find an example of a $2 \times 2$ matrix that is diagonalizable over the real numbers.
Many examples exist! Here is one: $\begin{bmatrix} 4 & 1 \\ 1 & 4 \end{bmatrix}$ can be diagonalized to $\begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix}$.
39. Find an example of a $2 \times 2$ matrix that is invertible and not diagonalizable over the real numbers.
Rotations serve as good examples of invertible but not diagonalizable matrices. For example, consider $A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$. Its characteristic polynomial is $p(\lambda) = \lambda^2 + 1$, whose roots are purely imaginary. There exist other examples: almost all rotations turn vectors by an angle that is not a multiple of 180 degrees, and in these cases the characteristic polynomial has no real roots, so the matrix cannot be diagonalized over $\mathbb{R}$.
40. Complete the following sentences with a mathematically correct definition.
(a) A linear combination of the vectors ~v1 , . . . , ~vn is
a vector $\vec{w} = \alpha_1\vec{v}_1 + \alpha_2\vec{v}_2 + \cdots + \alpha_n\vec{v}_n$ for some scalars $\alpha_1, \alpha_2, \ldots, \alpha_n \in \mathbb{R}$.
(d) The linear combination $\alpha_1\vec{v}_1 + \cdots + \alpha_n\vec{v}_n$ is called a convex linear combination if
$\alpha_1, \alpha_2, \ldots, \alpha_n \geq 0$ and $\alpha_1 + \alpha_2 + \cdots + \alpha_n = 1$.
(d) Find $X = P_1 \cap P_2 \cap P_3$.
A point $\vec{x} = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \in P_1 \cap P_2 \cap P_3$ satisfies the equations
$$\begin{cases} x + y + z = 2, \\ 2x - 2y - 3z = 4, \\ 2x + 4y - 5z = 0. \end{cases}$$
To solve the system of equations for $x, y, z$ we consider the augmented matrix $\left[\begin{array}{ccc|c} 1 & 1 & 1 & 2 \\ 2 & -2 & -3 & 4 \\ 2 & 4 & -5 & 0 \end{array}\right]$.
The row reduced form of this matrix is $\left[\begin{array}{ccc|c} 1 & 0 & 0 & 40/19 \\ 0 & 1 & 0 & -10/19 \\ 0 & 0 & 1 & 8/19 \end{array}\right]$, which means there is a unique solution $\vec{x} = \begin{bmatrix} 40/19 \\ -10/19 \\ 8/19 \end{bmatrix}$. Therefore, $X = \left\{\begin{bmatrix} 40/19 \\ -10/19 \\ 8/19 \end{bmatrix}\right\}$.
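As an optional check (using the coefficients of the system as written above), the solution can be confirmed numerically:

```python
import numpy as np

# Optional check of the unique intersection point.
A = np.array([[1.,  1.,  1.],
              [2., -2., -3.],
              [2.,  4., -5.]])
b = np.array([2., 4., 0.])
x = np.linalg.solve(A, b)
print(x * 19)  # approximately [ 40. -10.   8.], i.e. x = (40/19, -10/19, 8/19)
```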
(d) If ~u and ~v are two vectors in R2 , then the set of all non-negative linear combinations of ~u and ~v is a
line.
Sometimes True. Examples: take $\vec{u} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} -1 \\ -1 \end{bmatrix}$; then the set of all non-negative linear combinations of $\vec{u}$ and $\vec{v}$ is the line $x = y$. Take $\vec{u} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$; then the set of all non-negative linear combinations of $\vec{u}$ and $\vec{v}$ is the part of the line $x = y$ in the first quadrant.
(e) A set containing the zero vector is linearly dependent.
Always True. This is true as we have $7\,\vec{0} = \vec{0}$. So if $\vec{0} \in A$, then $\vec{0}$ can be written as a non-trivial linear combination of elements of $A$.
(f) If S is a linearly dependent set, then each element of S is a linear combination of other elements of S.
Sometimes True. Consider $S = \left\{\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right\}$. $S$ is linearly dependent, but $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ is not a linear combination of the other elements of $S$. The set $R = \left\{\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right\}$ is a linearly dependent set and any element is a linear combination of the other two elements.
(c) Let $M$ be a change of basis matrix. Let $S$ be the transformation induced by $M$. Then, $\operatorname{null}(S) = \{\vec{0}\}$.
(e) Let $\vec{u} \in \mathbb{R}^3$ be a non-zero vector and define the transformation $T : \mathbb{R}^3 \to \mathbb{R}^3$ by $T(\vec{x}) = \operatorname{vcomp}_{\vec{u}}(\vec{x})$. Then $\operatorname{null}(T) \neq \operatorname{range}(T)$.
(b) Let $\vec{x} \in \mathbb{R}^2$ and $\vec{y} \in \mathbb{R}^3$. Which of the following operations are mathematically defined?
$$\vec{a} \cdot \vec{x} \qquad \vec{a} \cdot \vec{y} \qquad \vec{a} \cdot \vec{b} \cdot \vec{c} \qquad (\vec{a} \cdot \vec{b})\vec{y} \qquad \vec{a}(\vec{b} \cdot \vec{y})$$
No justification needed.
Solution: $\vec{a} \cdot \vec{x}$ and $(\vec{a} \cdot \vec{b})\vec{y}$.
(c) Show that for any $\vec{a} \in \mathbb{R}^2$ we have $\vec{a} = (\vec{a} \cdot \vec{e}_1)\vec{e}_1 + (\vec{a} \cdot \vec{e}_2)\vec{e}_2$.
Assume $\vec{a} = \begin{bmatrix} x \\ y \end{bmatrix}$. We have $\vec{a} \cdot \vec{e}_1 = x$ and $\vec{a} \cdot \vec{e}_2 = y$, by applying the algebraic definition of the dot product. So
$$(\vec{a} \cdot \vec{e}_1)\vec{e}_1 + (\vec{a} \cdot \vec{e}_2)\vec{e}_2 = x\vec{e}_1 + y\vec{e}_2 = \begin{bmatrix} x \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix} = \vec{a}.$$
47. In this question, you will work with a new definition.
The set $S$ is a line segment with endpoints $\begin{bmatrix} 2 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} -2 \\ -2 \end{bmatrix}$.
(b) If possible, find distinct vectors ~a and ~b so that S = conv1 {~a, ~b}. Otherwise, explain why it is impossible.
Take $\vec{a} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$ and $\vec{b} = \begin{bmatrix} -2 \\ -2 \end{bmatrix}$.
(c) If possible, find distinct vectors $\vec{a}$ and $\vec{b}$ so that $U = \operatorname{conv}_0\{\vec{a}, \vec{b}\} \subseteq \operatorname{conv}_1\{\vec{a}, \vec{b}\}$. Otherwise, explain why it is impossible.
First, we point out that $\operatorname{conv}_0\{\vec{a}, \vec{b}\} = \{\vec{0}\}$ for any vectors $\vec{a}$ and $\vec{b}$ by the definition, because $\alpha_1 + \alpha_2 = 0$ for non-negative $\alpha_1, \alpha_2$ only if $\alpha_1 = \alpha_2 = 0$. We also point out that $\operatorname{conv}_1\{\vec{a}, \vec{b}\}$ is the set of all convex linear combinations of the vectors $\vec{a}$ and $\vec{b}$.
Therefore, it suffices to find distinct vectors $\vec{a}$ and $\vec{b}$ so that $\{\vec{0}\}$ is a subset of the set of all convex linear combinations of the vectors $\vec{a}$ and $\vec{b}$. This is possible by taking $\vec{a} = \vec{0}$ and $\vec{b} = \vec{e}_1$.
48. Complete the following sentences with a mathematically correct definition. No marks will be awarded for
a “close” but incorrect definition.
(a) The projection of ~u onto the set X is
(b) Let $\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$, $\vec{w} = \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix}$. The dot product $\vec{v} \cdot \vec{w}$ is
$$v_1w_1 + v_2w_2 + v_3w_3.$$
Alternative solution: $\|\vec{v}\|\|\vec{w}\|\cos\theta$, where $\theta$ is the angle between $\vec{v}$ and $\vec{w}$.
the $n \times n$ matrix whose diagonal entries are all one and the rest are zero.
An ordered basis $\{\vec{v}_1, \ldots, \vec{v}_n\}$ that can continuously be transformed into the standard ordered basis $\{\vec{e}_1, \ldots, \vec{e}_n\}$ while being linearly independent the whole time.
(e) The range of a linear transformation $S : \mathbb{R}^n \to \mathbb{R}^m$ is (in set notation):
$$\operatorname{range}(S) = \{\vec{y} \in \mathbb{R}^m : \vec{y} = S(\vec{x}) \text{ for some } \vec{x} \in \mathbb{R}^n\}.$$
49. For each of the following, give an example if possible. Otherwise, explain why it is not possible.
(a) A subspace $V \subseteq \mathbb{R}^3$ that contains exactly two different lines passing through the origin.
This is not possible. Assume $V$ contains two different lines. Then there exist two vectors $\vec{d}_1, \vec{d}_2$, each spanning a different line. In particular, $\{\vec{d}_1, \vec{d}_2\}$ is a linearly independent set. Because $V$ is a subspace, we have that $\operatorname{span}(\{\vec{d}_1, \vec{d}_2\}) \subseteq V$. In particular, the line spanned by $\vec{d}_1 + \vec{d}_2$ is contained in $V$, and so $V$ contains at least 3 different lines.
This is not possible. The associated matrix to $T$ is a $3 \times 2$ matrix. This matrix will have rank at most 2, since it has two columns. This implies $\operatorname{range}(T)$ has dimension at most 2.
(d) A linear transformation $T : \mathbb{R}^2 \to \mathbb{R}^2$ whose range is $\operatorname{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$.
There are infinitely many examples, but for the sake of specificity define $T$ by $T\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = \begin{bmatrix} x \\ 2x \end{bmatrix}$.
(e) A linear transformation $S : \mathbb{R}^2 \to \mathbb{R}^2$ whose null space is $\operatorname{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$.
There are infinitely many examples, but for the sake of specificity define $S$ by $S\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = \begin{bmatrix} 0 \\ y - 2x \end{bmatrix}$.
(f) A $2 \times 2$ matrix $A$ whose column space is $\operatorname{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$.
There are infinitely many examples, but for the sake of specificity: take $A = \begin{bmatrix} 1 & 0 \\ 2 & 0 \end{bmatrix}$.
(g) A matrix $M$ whose null space is $\operatorname{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$.
There are infinitely many examples, but for the sake of specificity: take $M = \begin{bmatrix} 0 & 0 \\ 2 & -1 \end{bmatrix}$.
(h) A basis $B$ for $\mathbb{R}^2$ such that $[\vec{e}_1 - \vec{e}_2]_B = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.
Take $B = \{\vec{e}_1, \vec{e}_2\}$.
50. Let $C, D \subseteq \mathbb{R}^2$ be circles of radius 1 centered at the points $(0, 0)$ and $\left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right)$, respectively.
(a) If possible find a subspace $V_1$ such that $V_1 \subseteq D$. If this is not possible, explain why.
This is possible. Notice that the distance between the origin and $\left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right)$ is 1, and so $\{\vec{0}\} \subseteq D$. Therefore $V_1 = \{\vec{0}\}$ satisfies the properties of a subspace.
(b) If possible find a subspace V2 such that D ✓ V2 . If this is not possible, explain why.
(c) If possible find a subspace V3 such that V3 ✓ C. If this is not possible, explain why.