223 Practice Problems Part 2

16. Find all vectors normal to $\vec v = \begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix}$.

17. Find all vectors normal to $\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 2 \\ 0 \\ 2 \end{bmatrix}$.


18. Show that the set $X = \left\{ \begin{bmatrix} t \\ 3t \end{bmatrix} : t \in \mathbb{R} \right\}$ is a subspace of $\mathbb{R}^2$ using the definition of subspace.
There are three statements we need to show.

1. X is non-empty.
2. X is closed under vector addition.
3. X is closed under scalar multiplication.

Let us check that these conditions hold.



1. The simplest way to check this is to observe that $\vec 0 \in X$, since $\vec 0 = \begin{bmatrix} 0 \\ 3 \cdot 0 \end{bmatrix}$.
 
2. Suppose that $\vec v, \vec w \in X$. Let us denote $\vec v = \begin{bmatrix} x_1 \\ y_1 \end{bmatrix}$ and $\vec w = \begin{bmatrix} x_2 \\ y_2 \end{bmatrix}$. From the definition of $X$ we know that $y_1 = 3x_1$ and $y_2 = 3x_2$. Therefore,
$$\vec v + \vec w = \begin{bmatrix} x_1 + x_2 \\ 3x_1 + 3x_2 \end{bmatrix} = \begin{bmatrix} x_1 + x_2 \\ 3(x_1 + x_2) \end{bmatrix} \in X.$$

3. Let $\alpha \in \mathbb{R}$. Then
$$\alpha \vec v = \begin{bmatrix} \alpha x_1 \\ \alpha \cdot 3x_1 \end{bmatrix} = \begin{bmatrix} \alpha x_1 \\ 3(\alpha x_1) \end{bmatrix} \in X.$$
19. Consider the plane $P \subseteq \mathbb{R}^3$ given in vector form by $\vec x = t\vec d_1 + s\vec d_2$, where
$$\vec d_1 = \begin{bmatrix} 2 \\ -1 \\ -9 \end{bmatrix} \quad\text{and}\quad \vec d_2 = \begin{bmatrix} 4 \\ 2 \\ -6 \end{bmatrix}.$$
(a) Express P in normal form.
Let $\vec p = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$ be a vector which is orthogonal to $\vec d_1$ and $\vec d_2$. By definition we have that $M\vec p = \vec 0$, where the rows of $M$ are given by $\vec d_1$ and $\vec d_2$. Notice that
$$\operatorname{RREF}\begin{bmatrix} 2 & -1 & -9 & 0 \\ 4 & 2 & -6 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & -3 & 0 \\ 0 & 1 & 3 & 0 \end{bmatrix},$$
and so $\vec n = \begin{bmatrix} 3t \\ -3t \\ t \end{bmatrix}$ is the general solution to the desired system of equations. Finally, because $P$ passes through the origin, we know a normal form of $P$ is $\begin{bmatrix} 3 \\ -3 \\ 1 \end{bmatrix} \cdot \vec x = 0$.

(b) Let $\vec n$ be the vector normal to $P$ you found in the previous part and define $\vec v = 2\vec d_1 + 3\vec d_2 + 5\vec n$. Determine $\operatorname{proj}_P(\vec v)$.

Since $2\vec d_1 + 3\vec d_2 \in P$ while $5\vec n$ is orthogonal to $P$, we have $\operatorname{proj}_P(\vec v) = 2\vec d_1 + 3\vec d_2$.

(c) Find a basis $B$ for $\mathbb{R}^3$ such that $\vec n \in B$.

$B = \{\vec n, \vec d_1, \vec d_2\}$ works, since these three vectors are linearly independent.


20. Explain how row reduction can be used to determine whether or not a set of vectors in $\mathbb{R}^n$ is linearly independent.
Let $\vec v_1, \vec v_2, \dots, \vec v_k$ be vectors in $\mathbb{R}^n$, and consider the matrix $M = [\vec v_1 | \vec v_2 | \dots | \vec v_k]$ whose columns are the vectors $\vec v_i$. By the algebraic definition of linear independence, these vectors are linearly independent if the equation
$$\alpha_1 \vec v_1 + \alpha_2 \vec v_2 + \cdots + \alpha_k \vec v_k = \vec 0$$
has only the trivial solution. This equation can be rewritten as a matrix equation:
$$M \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_k \end{bmatrix} = \vec 0.$$
This homogeneous system has a unique solution exactly when $\operatorname{rref}(M)$ has no free-variable columns, or equivalently, when $\operatorname{rref}(M)$ has $k$ pivot columns.
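For concreteness, the pivot-counting test above can be sketched in Python using exact rational arithmetic from the standard library (the helper names `rank` and `independent` are my own, not part of the course materials):

```python
# A minimal sketch of the linear-independence test: row-reduce and
# count pivots, working over exact rationals to avoid rounding error.
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (given as a list of rows) via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0                                        # next pivot row
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:                          # free-variable column
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def independent(vectors):
    # k vectors are independent exactly when the matrix they form
    # has k pivot columns, i.e. full rank k.
    return rank(vectors) == len(vectors)

print(independent([[1, 0, 3], [0, 1, 2]]))   # True: two pivots
print(independent([[1, 2], [2, 4]]))         # False: only one pivot
```

Since $\operatorname{rank}(M) = \operatorname{rank}(M^T)$, passing the vectors as rows gives the same answer as stacking them as columns.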

21. Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by
$$T\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 3x \\ y \end{bmatrix}.$$
Prove T is a linear transformation using the definition.
There are two statements we need to prove:

1. $T(\vec v + \vec w) = T(\vec v) + T(\vec w)$ for all $\vec v, \vec w \in \mathbb{R}^2$,
2. $T(c\vec v) = cT(\vec v)$ for all $\vec v \in \mathbb{R}^2$ and $c \in \mathbb{R}$.

Let us proceed:
 
1. Let $\vec v = \begin{bmatrix} x_1 \\ y_1 \end{bmatrix}$ and $\vec w = \begin{bmatrix} x_2 \\ y_2 \end{bmatrix}$. Then
$$T(\vec v + \vec w) = T\begin{bmatrix} x_1 + x_2 \\ y_1 + y_2 \end{bmatrix} = \begin{bmatrix} 3x_1 + 3x_2 \\ y_1 + y_2 \end{bmatrix} = \begin{bmatrix} 3x_1 \\ y_1 \end{bmatrix} + \begin{bmatrix} 3x_2 \\ y_2 \end{bmatrix} = T(\vec v) + T(\vec w).$$

2. Now let $c$ be an arbitrary scalar. Then
$$T(c\vec v) = T\begin{bmatrix} cx_1 \\ cy_1 \end{bmatrix} = \begin{bmatrix} 3cx_1 \\ cy_1 \end{bmatrix} = c\begin{bmatrix} 3x_1 \\ y_1 \end{bmatrix} = cT(\vec v).$$

22. Let $\mathcal{P} : \mathbb{R}^2 \to \mathbb{R}^2$ be the projection onto $\operatorname{span}\{\vec u\}$, where $\vec u = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$. Find a matrix $P$ (in the standard basis) so that $P[\vec x]_{\mathcal E} = \mathcal{P}(\vec x)$ for all $\vec x \in \mathbb{R}^2$.

Recall that the columns of $P$ can be recovered by considering the images $\mathcal{P}\vec e_1$ and $\mathcal{P}\vec e_2$. In other words, $P = [\mathcal{P}\vec e_1 \,|\, \mathcal{P}\vec e_2]$. Let us compute these images.
$$\mathcal{P}\vec e_1 = \operatorname{vcomp}_{\vec u}\vec e_1 = \frac{\vec u \cdot \vec e_1}{\vec u \cdot \vec u}\,\vec u = \frac{1}{5}\vec u = \begin{bmatrix} 1/5 \\ 2/5 \end{bmatrix}, \qquad \mathcal{P}\vec e_2 = \operatorname{vcomp}_{\vec u}\vec e_2 = \frac{\vec u \cdot \vec e_2}{\vec u \cdot \vec u}\,\vec u = \frac{2}{5}\vec u = \begin{bmatrix} 2/5 \\ 4/5 \end{bmatrix}.$$
Therefore,
$$P = \begin{bmatrix} 1/5 & 2/5 \\ 2/5 & 4/5 \end{bmatrix}.$$
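As a sanity check (an illustrative sketch, not part of the original solution): since $\mathcal{P}\vec e_j = \frac{u_j}{\vec u \cdot \vec u}\vec u$, the projection matrix is $\frac{1}{\vec u \cdot \vec u}\vec u\,\vec u^T$, which we can compute directly:

```python
# Building the projection matrix P = (u u^T)/(u.u) for u = (1, 2)
# with exact fractions; its columns should match P e1 and P e2 above.
from fractions import Fraction

u = [Fraction(1), Fraction(2)]
dot = u[0] * u[0] + u[1] * u[1]     # u.u = 5
P = [[u[i] * u[j] / dot for j in range(2)] for i in range(2)]

print(P)   # entries 1/5, 2/5 in row one and 2/5, 4/5 in row two
```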

23. Let $R : \mathbb{R}^2 \to \mathbb{R}^2$ be the rotation by 90 degrees clockwise. Find the matrix representation of $R$.

$$M_R = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$


24. Find $\operatorname{col}(A)$ for $A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 2 & 3 \end{bmatrix}$.

Since the first two columns of $A$ are linearly independent and span $\mathbb{R}^2$, we conclude that $\operatorname{col}(A) = \mathbb{R}^2$.

25. Find $\operatorname{null}(A)$ for $A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 2 & 3 \end{bmatrix}$.

We know that $\operatorname{null}(A) = \operatorname{null}(\operatorname{rref}(A))$. Row reduction gives us
$$\operatorname{rref}(A) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 3/2 \end{bmatrix}.$$
From here we conclude that $\operatorname{null}(A) = \operatorname{span}\left\{\begin{bmatrix} 0 \\ -3/2 \\ 1 \end{bmatrix}\right\}$.

26. Find the rank and nullity of the matrix
$$M = \begin{bmatrix} 1 & 0 & 3 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

To compute the rank, looking at either the row space or the column space is equally reasonable. For example,
$$\operatorname{col}(M) = \operatorname{span}\left\{\begin{bmatrix}1\\0\\0\\0\end{bmatrix}, \begin{bmatrix}0\\1\\0\\0\end{bmatrix}, \begin{bmatrix}3\\2\\0\\0\end{bmatrix}, \begin{bmatrix}0\\0\\0\\1\end{bmatrix}\right\} = \operatorname{span}\left\{\begin{bmatrix}1\\0\\0\\0\end{bmatrix}, \begin{bmatrix}0\\1\\0\\0\end{bmatrix}, \begin{bmatrix}0\\0\\0\\1\end{bmatrix}\right\},$$
as the third column is a linear combination of the first two. Therefore, $\operatorname{rank}(M) = 3$. The rank-nullity theorem then quickly implies that $\operatorname{nullity}(M) = 4 - \operatorname{rank}(M) = 4 - 3 = 1$.
27. Explain how row reduction can be used to find the rank and nullity of a matrix.
First of all, recall that the rank and the nullity are preserved by row operations. Therefore, $\operatorname{rank}(A) = \operatorname{rank}(\operatorname{rref}(A))$ and $\operatorname{nullity}(A) = \operatorname{nullity}(\operatorname{rref}(A))$. Moreover,
$$\operatorname{rank}(\operatorname{rref}(A)) = \dim(\operatorname{row}(\operatorname{rref}(A))) = \text{number of pivot columns in the row-reduced form},$$
and
$$\operatorname{nullity}(\operatorname{rref}(A)) = \text{number of free-variable columns in the row-reduced form}.$$
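The count-the-pivots procedure can be sketched as follows, applied to the matrix $M$ from Problem 26 (the code and variable names are illustrative, not part of the course materials):

```python
# Row-reduce M, count pivot columns (rank), and count the remaining
# free columns (nullity = number of columns minus rank).
from fractions import Fraction

M = [[1, 0, 3, 0],
     [0, 1, 2, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 1]]

A = [[Fraction(x) for x in row] for row in M]
pivots = 0
for c in range(4):
    piv = next((i for i in range(pivots, 4) if A[i][c]), None)
    if piv is None:
        continue                      # free-variable column
    A[pivots], A[piv] = A[piv], A[pivots]
    for i in range(4):
        if i != pivots and A[i][c]:
            f = A[i][c] / A[pivots][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[pivots])]
    pivots += 1

rank, nullity = pivots, 4 - pivots
print(rank, nullity)    # 3 1, agreeing with Problem 26
```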

28. Let $A$ be a $3 \times 4$ matrix. Describe a process to find the nullspace of $A$.

One way to find the nullspace is to set up the matrix equation
$$A \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
and find all of its solutions using row reduction.


29. Let $B = \begin{bmatrix} 2 & 8 \\ 0 & 4 \end{bmatrix}$. Use row operations to reduce $B$ to the identity matrix. Find $B^{-1}$ in terms of elementary matrices.

There are many ways to reduce $B$ to the identity matrix; we propose one using three elementary operations, applied to the augmented matrix $[B \,|\, I]$:
$$\begin{bmatrix} 2 & 8 & | & 1 & 0 \\ 0 & 4 & | & 0 & 1 \end{bmatrix} \xrightarrow{R_1 \mapsto R_1 - 2R_2} \begin{bmatrix} 2 & 0 & | & 1 & -2 \\ 0 & 4 & | & 0 & 1 \end{bmatrix} \xrightarrow{R_1 \mapsto \frac{1}{2}R_1} \begin{bmatrix} 1 & 0 & | & 1/2 & -1 \\ 0 & 4 & | & 0 & 1 \end{bmatrix} \xrightarrow{R_2 \mapsto \frac{1}{4}R_2} \begin{bmatrix} 1 & 0 & | & 1/2 & -1 \\ 0 & 1 & | & 0 & 1/4 \end{bmatrix}.$$
If we denote
$$E_1 = \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix}, \quad E_2 = \begin{bmatrix} 1/2 & 0 \\ 0 & 1 \end{bmatrix}, \quad E_3 = \begin{bmatrix} 1 & 0 \\ 0 & 1/4 \end{bmatrix},$$
then
$$B^{-1} = E_3 E_2 E_1.$$
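The factorization above can be verified numerically; the following sketch multiplies the elementary matrices with exact fractions (the helper `matmul` is my own):

```python
# Verifying that the product of the three elementary matrices is B's
# inverse: (E3 E2 E1) B should equal the identity matrix.
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B  = [[F(2), F(8)], [F(0), F(4)]]
E1 = [[F(1), F(-2)], [F(0), F(1)]]      # R1 -> R1 - 2*R2
E2 = [[F(1, 2), F(0)], [F(0), F(1)]]    # R1 -> (1/2)*R1
E3 = [[F(1), F(0)], [F(0), F(1, 4)]]    # R2 -> (1/4)*R2

Binv = matmul(E3, matmul(E2, E1))
print(matmul(Binv, B))                   # the identity matrix
```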
30. Let $B = \{\vec b_1, \vec b_2\}$ be an ordered basis. Define $T : \mathbb{R}^2 \to \mathbb{R}^2$ by $T(\vec b_1) = \vec b_1 + \vec b_2$, $T(\vec b_2) = 5\vec b_2$. Find $[T]_B$.

Recall that $[T]_B$ is defined as follows: for any $\vec x$ we should have $[T\vec x]_B = [T]_B[\vec x]_B$. In particular,
$$[T\vec b_1]_B = [T]_B[\vec b_1]_B = [T]_B\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad [T\vec b_2]_B = [T]_B[\vec b_2]_B = [T]_B\begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
Therefore, $[T\vec b_1]_B$ and $[T\vec b_2]_B$ provide the first and the second columns of the matrix $[T]_B$, respectively. Now we use the identities in the statement of the problem to derive
$$[T\vec b_1]_B = [\vec b_1 + \vec b_2]_B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad [T\vec b_2]_B = [5\vec b_2]_B = \begin{bmatrix} 0 \\ 5 \end{bmatrix}.$$
Therefore,
$$[T]_B = \begin{bmatrix} 1 & 0 \\ 1 & 5 \end{bmatrix}.$$

 
31. Let $\mathcal E$ be the standard basis, and let $B = \{\vec b_1, \vec b_2\}$, where $\vec b_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\vec b_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.
(a) Find the change of basis matrix from B to E.
(b) Find the change of basis matrix from E to B.
Recall that
$$[\mathcal E \leftarrow B][\vec x]_B = [\vec x]_{\mathcal E}, \qquad [B \leftarrow \mathcal E][\vec x]_{\mathcal E} = [\vec x]_B.$$
This also implies that $[\mathcal E \leftarrow B]^{-1} = [B \leftarrow \mathcal E]$. Because we have already expressed $B$ in terms of $\mathcal E$, we can immediately say
$$[\mathcal E \leftarrow B] = [\vec b_1 \,|\, \vec b_2] = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$
Now it remains to take the inverse, and we will have
$$[B \leftarrow \mathcal E] = [\mathcal E \leftarrow B]^{-1} = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$
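A quick check of these two matrices (an illustrative sketch; the sample vector is arbitrary, and the names are mine):

```python
# [E<-B][B<-E] should be the identity, and applying [B<-E] to a vector
# in standard coordinates should give its coordinates relative to B.
from fractions import Fraction as F

EB = [[F(1), F(1)], [F(1), F(-1)]]               # columns are b1, b2
BE = [[F(1, 2), F(1, 2)], [F(1, 2), F(-1, 2)]]   # its inverse

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

x_E = [F(3), F(1)]            # a sample vector in standard coordinates
x_B = apply(BE, x_E)          # equals [2, 1]: indeed 2*b1 + 1*b2 = (3, 1)
print(x_B)
print(apply(EB, x_B) == x_E)  # True: changing basis back recovers x_E
```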

32. Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $T(\vec e_1) = \vec e_1$ and $T(\vec e_2) = 2\vec e_1 + \vec e_2$. Find $\operatorname{Vol}(T(C_2))$.

This volume equals the determinant of the matrix $\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$, which is $1$. Alternatively, notice that this transformation corresponds to the row operation $R_1 \mapsto R_1 + 2R_2$, which does not change volumes.
33. Explain in words what each elementary matrix does.

(a) $E_1 = \begin{bmatrix} 1 & \alpha \\ 0 & 1 \end{bmatrix}$
(b) $E_2 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$
(c) $E_3 = \begin{bmatrix} \alpha & 0 \\ 0 & 1 \end{bmatrix}$

(a) This is an elementary matrix which corresponds to the row operation that adds an $\alpha$-multiple of the second row to the first row.
(b) This elementary matrix corresponds to the row operation which swaps the two rows. Or, equivalently, it swaps the $x$- and $y$-coordinates.
(c) This elementary matrix stretches one side of the unit square by $\alpha$. Or, equivalently, it corresponds to the row operation which multiplies the first row by $\alpha$.

34. Compute the following.

(a) $\det\begin{pmatrix} 1 & \alpha \\ 0 & 1 \end{pmatrix}$
(b) $\det\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
(c) $\det\begin{pmatrix} \alpha & 0 \\ 0 & 1 \end{pmatrix}$

(a) This is an elementary matrix which corresponds to a row operation that does not change volumes. Therefore, the determinant equals $1$.
(b) This elementary matrix swaps two rows; therefore, its determinant equals $-1$.
(c) This elementary matrix stretches one side of the unit square by $\alpha$; therefore, its determinant equals $\alpha$.
35. Let $\vec v$ be an eigenvector with eigenvalue $\lambda = -3$ for the transformation $T$. Describe in words the geometric relationship between $T$ and $\vec v$.

The transformation $T$ stretches $\vec v$ by a factor of 3 and then reflects the resulting vector through the origin.

36. Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $T(\vec e_1) = 2\vec e_1 + \vec e_2$ and $T(\vec e_2) = \vec e_1 + 2\vec e_2$. Find the eigenvalues of $T$, and for each eigenvalue find a corresponding eigenvector.

$$\vec v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \lambda_1 = 3; \qquad \vec v_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad \lambda_2 = 1.$$
37. Find the eigenvalues, and a corresponding eigenvector for each eigenvalue, of the matrix
$$B = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}.$$

First of all, we need to find the characteristic polynomial of $B$:
$$p(\lambda) = \det(\lambda I - B) = \lambda^2 - 3\lambda - 4.$$
The roots of this polynomial are $-1$ and $4$. These are precisely the eigenvalues of $B$. Now, let us find the eigenvectors. If $\vec v_{-1} = \begin{bmatrix} x \\ y \end{bmatrix}$ is an eigenvector for $\lambda = -1$, then
$$B\vec v_{-1} = -\vec v_{-1} \iff \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = -\begin{bmatrix} x \\ y \end{bmatrix} \iff \begin{cases} x + 2y = -x, \\ 3x + 2y = -y. \end{cases}$$
Solving this system of linear equations, we get $x + y = 0$; therefore, we can choose $\vec v_{-1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$. For $\vec v_4$ we get
$$\begin{cases} x + 2y = 4x, \\ 3x + 2y = 4y, \end{cases}$$
therefore $2y = 3x$, and $\vec v_4 = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$ will work.
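The eigen-computation above can be double-checked by direct substitution (an illustrative sketch; `char_poly` and `apply_B` are names I introduce here):

```python
# Check that -1 and 4 are roots of the characteristic polynomial of
# B = [[1, 2], [3, 2]], and that B maps each claimed eigenvector to
# the eigenvalue times that eigenvector.
def char_poly(lam):
    # det(lam*I - B) = (lam - 1)(lam - 2) - 2*3
    return (lam - 1) * (lam - 2) - 6

def apply_B(v):
    return [1 * v[0] + 2 * v[1], 3 * v[0] + 2 * v[1]]

print(char_poly(-1), char_poly(4))   # 0 0: both are eigenvalues
print(apply_B([1, -1]))              # [-1, 1]  = -1 * (1, -1)
print(apply_B([2, 3]))               # [8, 12]  =  4 * (2, 3)
```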
3

38. Find an example of a $2 \times 2$ matrix that is diagonalizable over the real numbers.

Many examples exist! Here is one: $\begin{bmatrix} 4 & 1 \\ 1 & 4 \end{bmatrix}$ can be diagonalized to $\begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix}$.

39. Find an example of a $2 \times 2$ matrix that is invertible and not diagonalizable over the real numbers.

Rotations serve as good examples of invertible but non-diagonalizable matrices. For example, consider $A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$. Its characteristic polynomial is $p(\lambda) = \lambda^2 + 1$, whose roots are purely imaginary. Other examples exist: almost any rotation turns vectors by an angle that is not a multiple of 180 degrees. In these cases, the characteristic polynomial has no real-valued roots, and hence the matrix cannot be diagonalized over $\mathbb{R}$.
40. Complete the following sentences with a mathematically correct definition.

(a) A linear combination of the vectors $\vec v_1, \dots, \vec v_n$ is
a vector $\vec w = \alpha_1\vec v_1 + \alpha_2\vec v_2 + \cdots + \alpha_n\vec v_n$ for some scalars $\alpha_1, \alpha_2, \dots, \alpha_n \in \mathbb{R}$.

(b) The sets $A$ and $B$ are equal if
$A \subseteq B$ and $B \subseteq A$.

(c) The set $A$ is the intersection of the sets $X$ and $Y$ if
$A = \{a : a \in X \text{ and } a \in Y\}$.

(d) The linear combination $\alpha_1\vec v_1 + \cdots + \alpha_n\vec v_n$ is called a convex linear combination if
$\alpha_1, \alpha_2, \dots, \alpha_n \geq 0$ and $\alpha_1 + \alpha_2 + \cdots + \alpha_n = 1$.

(e) The vector $\vec u$ points in the positive direction of the vector $\vec v$ if
$\vec u = k\vec v$ for some positive scalar $k$.
41. For each of the following, give an example if possible. Otherwise, explain why it is impossible.

(a) Vectors $\vec u, \vec v \in \mathbb{R}^2$ so that every vector in the set $\{\vec e_1, \vec e_2, \vec e_1 - \vec e_2\}$ is a convex linear combination of $\vec u$ and $\vec v$.

There are no such vectors. Since $\vec e_1$ and $\vec e_2$ are linearly independent, $\vec u$ and $\vec v$ would have to be linearly independent as well (all combinations of parallel vectors lie on a single line), so every vector has at most one representation $a\vec u + b\vec v$. If $\vec e_1 = a\vec u + b\vec v$ and $\vec e_2 = c\vec u + d\vec v$ are convex linear combinations of $\vec u$ and $\vec v$, then
$$\vec e_1 - \vec e_2 = (a - c)\vec u + (b - d)\vec v,$$
where the coefficients sum to $(a + b) - (c + d) = 1 - 1 = 0 \neq 1$. Hence $\vec e_1 - \vec e_2$ cannot be a convex linear combination of $\vec u$ and $\vec v$.


(b) Vectors $\vec d, \vec p \in \mathbb{R}^3$ so that $\vec x = t\vec d + s\vec d + \vec p$ is a vector form of a plane.

There are no such vectors, since any vector form of a plane must have two linearly independent direction vectors, and here both direction vectors equal $\vec d$.
(c) A vector $\vec d \in \mathbb{R}^3$ so that the lines $\ell_1 = \left\{\vec x : \vec x = t\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 3 \end{bmatrix} \text{ for some } t \in \mathbb{R}\right\}$ and $\ell_2 = \left\{\vec x : \vec x = t\vec d + \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix} \text{ for some } t \in \mathbb{R}\right\}$ intersect at exactly one point.

Such a $\vec d$ exists. In general, it would be sufficient to take points $\vec p \in \ell_1$ and $\vec q \in \ell_2$ and set $\vec d = \vec p - \vec q$. But in our case, $\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$ already belongs to both $\ell_1$ and $\ell_2$, since $\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 3 \end{bmatrix}$, and so it is a point of intersection. Since $\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$ must be the only intersection point, we must pick $\vec d$ to be linearly independent from $\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}$ so that the lines are not the same. For example, $\vec d = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ works.
(d) Non-parallel, nonzero vectors $\vec a, \vec b, \vec c \in \mathbb{R}^2$ so that $\vec a \cdot \vec b = \vec a \cdot \vec c = \vec b \cdot \vec c = 0$.

There are no such vectors. Any two non-zero, non-parallel vectors in $\mathbb{R}^2$ are linearly independent and therefore span $\mathbb{R}^2$. Suppose $\vec a, \vec b$ are non-zero and non-parallel. Then $\vec c = \alpha\vec a + \beta\vec b$, since $\operatorname{span}\{\vec a, \vec b\} = \mathbb{R}^2$. Since $\vec c \neq \vec 0$, we know at least one of $\alpha$ and $\beta$ is non-zero. Thus, at least one of $\vec a \cdot \vec c = \alpha\,\vec a \cdot \vec a + \beta\,\vec a \cdot \vec b = \alpha\|\vec a\|^2$ or $\vec b \cdot \vec c = \beta\|\vec b\|^2$ is non-zero, which contradicts the assumption that $\vec c$ is orthogonal to both $\vec a$ and $\vec b$.
(e) A vector $\vec a \in \mathbb{R}^2$ so that $\vec a \in \operatorname{span}\left\{\begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}1\\-1\end{bmatrix}\right\}$, but $\vec a \notin \operatorname{span}\left\{\begin{bmatrix}1\\1\end{bmatrix}\right\} \cup \operatorname{span}\left\{\begin{bmatrix}1\\-1\end{bmatrix}\right\}$.

There are many examples that work. For example, $\vec a = \begin{bmatrix}2\\0\end{bmatrix} = \begin{bmatrix}1\\1\end{bmatrix} + \begin{bmatrix}1\\-1\end{bmatrix} \in \operatorname{span}\left\{\begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}1\\-1\end{bmatrix}\right\}$. But $\vec a \notin \operatorname{span}\left\{\begin{bmatrix}1\\1\end{bmatrix}\right\}$ and $\vec a \notin \operatorname{span}\left\{\begin{bmatrix}1\\-1\end{bmatrix}\right\}$, so $\vec a$ is not in the union of the two spans.
42. Let $P_1 \subseteq \mathbb{R}^3$ be the plane with equation $x + y + z = 2$, let $P_2 \subseteq \mathbb{R}^3$ be the plane with equation $2x - 2y - 3z = 4$, and let $P_3 \subseteq \mathbb{R}^3$ be the plane with equation $2x + 4y - 5z = 0$. Further, let $X = P_1 \cap P_2 \cap P_3$.

(a) Is $P_1 \cap P_2$ a point, line, plane, or other? Justify your answer.

$P_1 \cap P_2$ is a line. Here we present the main steps and leave the computations to the reader. A general point $\vec x = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \in P_1 \cap P_2$ satisfies the equations
$$x + y + z = 2, \qquad 2x - 2y - 3z = 4.$$
To solve the system of equations for $x$, $y$, $z$ we consider the augmented matrix $\begin{bmatrix} 1 & 1 & 1 & | & 2 \\ 2 & -2 & -3 & | & 4 \end{bmatrix}$. The reduced row echelon form of this matrix is $\begin{bmatrix} 1 & 0 & -1/4 & | & 2 \\ 0 & 1 & 5/4 & | & 0 \end{bmatrix}$, which has no pivot in the third column, and so the third variable, $z$, may be treated as a free variable. Taking $z = t$, we obtain
$$\vec x = t\begin{bmatrix} 1/4 \\ -5/4 \\ 1 \end{bmatrix} + \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix},$$
which is a vector form of a line.

(b) If possible, express $P_1 \cap P_2$ in vector form. Otherwise, explain why it is impossible.

A vector form of $P_1 \cap P_2$ was obtained in the previous part.
(c) If possible, find a vector $\vec v \in P_3$ which is parallel to $P_1 \cap P_2$. Otherwise, explain why it is impossible.

A vector $\vec v$ is parallel to $P_1 \cap P_2$ only if $\vec v$ is a nonzero multiple of the direction vector $\begin{bmatrix} 1/4 \\ -5/4 \\ 1 \end{bmatrix}$ of the line $P_1 \cap P_2$. This means we are looking for some $s \neq 0$ such that $\vec v = s\begin{bmatrix} 1/4 \\ -5/4 \\ 1 \end{bmatrix} \in P_3$. We substitute the latter expression into the equation for $P_3$ and get
$$2\left(\frac{s}{4}\right) + 4\left(-\frac{5s}{4}\right) - 5s = 0,$$
which is only possible if $s = 0$. Therefore, there is no vector $\vec v \in P_3$ parallel to $P_1 \cap P_2$.

(d) Find $X = P_1 \cap P_2 \cap P_3$.

A point $\vec x = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \in P_1 \cap P_2 \cap P_3$ satisfies the equations
$$\begin{cases} x + y + z = 2, \\ 2x - 2y - 3z = 4, \\ 2x + 4y - 5z = 0. \end{cases}$$
To solve this system for $x$, $y$, $z$ we consider the augmented matrix $\begin{bmatrix} 1 & 1 & 1 & | & 2 \\ 2 & -2 & -3 & | & 4 \\ 2 & 4 & -5 & | & 0 \end{bmatrix}$. Its row-reduced form is
$$\begin{bmatrix} 1 & 0 & 0 & | & 40/19 \\ 0 & 1 & 0 & | & -10/19 \\ 0 & 0 & 1 & | & 8/19 \end{bmatrix},$$
which means there is a unique solution $\vec x = \begin{bmatrix} 40/19 \\ -10/19 \\ 8/19 \end{bmatrix}$. Therefore, $X = \left\{\begin{bmatrix} 40/19 \\ -10/19 \\ 8/19 \end{bmatrix}\right\}$.
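The intersection point $\vec x = (40/19,\, -10/19,\, 8/19)$ can be verified by substituting it into all three plane equations (an illustrative check with exact fractions):

```python
# Each of the three plane equations should be satisfied exactly.
from fractions import Fraction as F

x, y, z = F(40, 19), F(-10, 19), F(8, 19)
print(x + y + z)            # 2, so the point lies on P1
print(2*x - 2*y - 3*z)      # 4, so the point lies on P2
print(2*x + 4*y - 5*z)      # 0, so the point lies on P3
```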

(e) Is $\operatorname{span}(X)$ a point, line, plane, or other? Explain.

The set $X$ contains exactly one element, which is a nonzero vector, and so $\operatorname{span}(X)$ is a line.
43. For each of the following statements, indicate whether the statement is

• Always True, by providing an argument,


• Sometimes True, by providing an example where the statement is true and an example where the
statement is false, or
• Never True, by providing an argument.
(a) If $A$ and $B$ are sets in $\mathbb{R}^3$, then $\operatorname{span}(A) \cap \operatorname{span}(B)$ is nonempty.

Always True. $\vec 0 \in \mathbb{R}^3$ is in the span of any set in $\mathbb{R}^3$. So $\vec 0 \in \operatorname{span}(A) \cap \operatorname{span}(B)$.

(b) A direction vector, $\vec d$, of a plane $P \subseteq \mathbb{R}^3$ is also an element of $P$.

Sometimes True. Examples: let $P_1$ be the plane given in vector form by $\vec x = t\vec e_1 + s\vec e_2$ and let $P_2$ be the plane given in vector form by $\vec x = t\vec e_1 + s\vec e_2 + \vec e_3$. Then $\vec e_1$ is a direction vector for both planes, and $\vec e_1 \in P_1$ but $\vec e_1 \notin P_2$.

(c) The intersection of two planes in $\mathbb{R}^3$ is a line.

Sometimes True. Two planes can meet in a line. But they can also be parallel, in which case the intersection is the empty set, or they can be the same plane, in which case the intersection is a plane.

(d) If $\vec u$ and $\vec v$ are two vectors in $\mathbb{R}^2$, then the set of all non-negative linear combinations of $\vec u$ and $\vec v$ is a line.

Sometimes True. Examples: Take $\vec u = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\vec v = \begin{bmatrix} -1 \\ -1 \end{bmatrix}$; then the set of all non-negative linear combinations of $\vec u$ and $\vec v$ is the line $x = y$. Take $\vec u = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\vec v = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$; then the set of all non-negative linear combinations of $\vec u$ and $\vec v$ is only the part of the line $x = y$ in the first quadrant, not a full line.
(e) A set containing the zero vector is linearly dependent.

Always True. This is true since $7 \cdot \vec 0 = \vec 0$: if $\vec 0 \in A$, then $\vec 0$ can be written as a non-trivial linear combination of elements of $A$.

(f) If $S$ is a linearly dependent set, then each element of $S$ is a linear combination of other elements of $S$.

Sometimes True. Consider $S = \left\{\begin{bmatrix}1\\0\end{bmatrix}, \begin{bmatrix}2\\0\end{bmatrix}, \begin{bmatrix}0\\1\end{bmatrix}\right\}$. $S$ is linearly dependent, but $\begin{bmatrix}0\\1\end{bmatrix}$ is not a linear combination of the other elements of $S$. On the other hand, the set $R = \left\{\begin{bmatrix}1\\0\end{bmatrix}, \begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}0\\1\end{bmatrix}\right\}$ is a linearly dependent set in which any element is a linear combination of the other two elements.

(g) If $\vec x \cdot \vec u = 0$ for every vector $\vec x$, then $\vec u = \vec 0$.

Always True. For $\vec u = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}$, the assumption gives $\vec e_i \cdot \vec u = 0$ for $1 \leq i \leq n$, where the $\vec e_i$ are the elements of the standard basis. On the other hand, $\vec e_i \cdot \vec u = u_i$. Hence we have $u_i = 0$ for $1 \leq i \leq n$.

(Alternate solution) Taking $\vec x = \vec u$, you obtain $\vec u \cdot \vec u = \|\vec u\|^2 = 0$, which implies $\vec u$ must be the zero vector.
44. For each of the statements below, indicate ALWAYS TRUE if the statement is always satisfied, NEVER TRUE if the statement is never satisfied, and SOMETIMES TRUE if there are examples where the statement is true and there are examples where the statement is false.

(a) A linear transformation $T : \mathbb{R}^2 \to \mathbb{R}^2$ has the property that $\operatorname{null}(T) = \operatorname{range}(T)$.

ALWAYS TRUE   NEVER TRUE   SOMETIMES TRUE

(b) Let $V, W \subseteq \mathbb{R}^n$ be subspaces. $V \cap W = \{\}$, the empty set.

ALWAYS TRUE   NEVER TRUE   SOMETIMES TRUE

(c) Let $M$ be a change of basis matrix. Let $S$ be the transformation induced by $M$. Then $\operatorname{null}(S) = \{\vec 0\}$.

ALWAYS TRUE   NEVER TRUE   SOMETIMES TRUE

(d) Let $B$ be a basis for $\mathbb{R}^2$. Then $[\vec e_1 - \vec e_2]_B = [\vec e_1]_B - [\vec e_2]_B$.

ALWAYS TRUE   NEVER TRUE   SOMETIMES TRUE

(e) Let $\vec u \in \mathbb{R}^3$ be a non-zero vector and define the transformation $T : \mathbb{R}^3 \to \mathbb{R}^3$ by $T(\vec x) = \operatorname{vcomp}_{\vec u}(\vec x)$. Then $\operatorname{null}(T) \neq \operatorname{range}(T)$.

ALWAYS TRUE   NEVER TRUE   SOMETIMES TRUE

 
45. (a) Let $\vec u = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\vec v = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$. Express $\operatorname{span}\{\vec u\} \cup \operatorname{span}\{\vec v\}$ in set-builder notation.

$$\left\{\vec r \in \mathbb{R}^2 : \vec r = t\begin{bmatrix} 1 \\ 1 \end{bmatrix} \text{ or } \vec r = s\begin{bmatrix} 1 \\ -1 \end{bmatrix} \text{ for some } s, t \in \mathbb{R}\right\}$$

(b) Express span{~u} + span{~v } in set builder notation.


 
$$\operatorname{span}\{\vec u\} + \operatorname{span}\{\vec v\} = \left\{\vec r \in \mathbb{R}^2 : \vec r = t\begin{bmatrix} 1 \\ 1 \end{bmatrix} + s\begin{bmatrix} 1 \\ -1 \end{bmatrix} \text{ for some } s, t \in \mathbb{R}\right\}$$

(c) Does $\operatorname{span}\{\vec u\} + \operatorname{span}\{\vec v\} = \operatorname{span}\{\vec u\} \cup \operatorname{span}\{\vec v\}$? Justify your answer.

No. We have $\begin{bmatrix}1\\1\end{bmatrix} \in \operatorname{span}\{\vec u\}$ and $\begin{bmatrix}1\\-1\end{bmatrix} \in \operatorname{span}\{\vec v\}$, so $\begin{bmatrix}2\\0\end{bmatrix} = \begin{bmatrix}1\\1\end{bmatrix} + \begin{bmatrix}1\\-1\end{bmatrix} \in \operatorname{span}\{\vec u\} + \operatorname{span}\{\vec v\}$. But $\begin{bmatrix}2\\0\end{bmatrix} \notin \operatorname{span}\{\vec u\}$ and $\begin{bmatrix}2\\0\end{bmatrix} \notin \operatorname{span}\{\vec v\}$, and so $\begin{bmatrix}2\\0\end{bmatrix} \notin \operatorname{span}\{\vec u\} \cup \operatorname{span}\{\vec v\}$.
 
46. Let $\vec a = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$, $\vec b = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.

(a) If possible, find a vector $\vec c$ such that $\vec a \cdot \vec c = 1$ and $\vec b \cdot \vec c = 3$. Otherwise, explain why it is impossible.

Let $\vec c = \begin{bmatrix} x \\ y \end{bmatrix}$. We want $x + 2y = 1$ and $x - y = 3$. Subtracting the second equation from the first, we get $3y = -2$, which implies $y = -\frac{2}{3}$. Using this value of $y$, we get $x = \frac{7}{3}$. So we have found the desired $\vec c$, which is $\vec c = \begin{bmatrix} 7/3 \\ -2/3 \end{bmatrix}$.

(b) Let $\vec x \in \mathbb{R}^2$ and $\vec y \in \mathbb{R}^3$. Which of the following operations are mathematically defined?
$$\vec a \cdot \vec x \qquad \vec a \cdot \vec y \qquad \vec a \cdot \vec b \cdot \vec c \qquad (\vec a \cdot \vec b)\vec y \qquad \vec a(\vec b \cdot \vec y)$$
No justification needed.

Solution: $\vec a \cdot \vec x$ and $(\vec a \cdot \vec b)\vec y$.
(c) Show that for any $\vec a \in \mathbb{R}^2$ we have $\vec a = (\vec a \cdot \vec e_1)\vec e_1 + (\vec a \cdot \vec e_2)\vec e_2$.

Assume $\vec a = \begin{bmatrix} x \\ y \end{bmatrix}$. We have $\vec a \cdot \vec e_1 = x$ and $\vec a \cdot \vec e_2 = y$, by applying the algebraic definition of the dot product. So
$$(\vec a \cdot \vec e_1)\vec e_1 + (\vec a \cdot \vec e_2)\vec e_2 = x\vec e_1 + y\vec e_2 = \begin{bmatrix} x \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix} = \vec a.$$
47. In this question, you will work with a new definition.

A $k$-convex linear combination of the vectors $\vec v_1, \dots, \vec v_n$ is a vector $\vec w$ such that $\vec w = \alpha_1\vec v_1 + \cdots + \alpha_n\vec v_n$ for some $\alpha_1, \dots, \alpha_n$ satisfying $\alpha_1, \dots, \alpha_n \geq 0$ and $\alpha_1 + \cdots + \alpha_n = k$. For a set of vectors $X$, let $\operatorname{conv}_k(X)$ denote the set of $k$-convex linear combinations of vectors in $X$.

Let $S = \operatorname{conv}_2\left\{\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ -1 \end{bmatrix}\right\}$.
(a) Draw the set $S$.

The set $S$ is the line segment with endpoints $\begin{bmatrix} 2 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} 2 \\ -2 \end{bmatrix}$.
(b) If possible, find distinct vectors $\vec a$ and $\vec b$ so that $S = \operatorname{conv}_1\{\vec a, \vec b\}$. Otherwise, explain why it is impossible.

Take $\vec a = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$ and $\vec b = \begin{bmatrix} 2 \\ -2 \end{bmatrix}$.

(c) If possible, find distinct vectors $\vec a$ and $\vec b$ so that $U = \operatorname{conv}_0\{\vec a, \vec b\} \subseteq \operatorname{conv}_1\{\vec a, \vec b\}$. Otherwise, explain why it is impossible.

First, we point out that $\operatorname{conv}_0\{\vec a, \vec b\} = \{\vec 0\}$ for any vectors $\vec a$ and $\vec b$, by the definition, because $\alpha_1 + \alpha_2 = 0$ for non-negative $\alpha_1, \alpha_2$ only if $\alpha_1 = \alpha_2 = 0$. We also point out that $\operatorname{conv}_1\{\vec a, \vec b\}$ is the set of all convex linear combinations of $\vec a$ and $\vec b$. Therefore, it suffices to find distinct vectors $\vec a$ and $\vec b$ so that $\{\vec 0\}$ is a subset of the set of all convex linear combinations of $\vec a$ and $\vec b$. This is possible by taking $\vec a = \vec 0$ and $\vec b = \vec e_1$.
48. Complete the following sentences with a mathematically correct definition. No marks will be awarded for
a “close” but incorrect definition.
(a) The projection of $\vec u$ onto the set $X$ is

the point $\vec p \in X$ which is closest to $\vec u$.

(b) Let $\vec v = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$, $\vec w = \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix}$. The dot product $\vec v \cdot \vec w$ is

$$v_1w_1 + v_2w_2 + v_3w_3.$$
Alternative solution: $\|\vec v\|\|\vec w\|\cos\theta$, where $\theta$ is the angle between $\vec v$ and $\vec w$.
~

(c) Let $n$ be a positive integer. The identity matrix of size $n$ is

the $n \times n$ matrix whose diagonal entries are all one and whose remaining entries are zero.

(d) A positively oriented basis is

an ordered basis $\{\vec v_1, \dots, \vec v_n\}$ that can be continuously transformed into the standard ordered basis $\{\vec e_1, \dots, \vec e_n\}$ while remaining linearly independent the whole time.

(e) The range of a linear transformation $S : \mathbb{R}^n \to \mathbb{R}^m$ is (in set notation):

the set $\{\vec q \in \mathbb{R}^m : \vec q = S\vec p \text{ for some } \vec p \in \mathbb{R}^n\}$.

49. For each of the following, give an example if possible. Otherwise, explain why it is not possible.

(a) A subspace $V \subseteq \mathbb{R}^3$ that contains exactly two different lines passing through the origin.

This is not possible. Assume $V$ contains two different lines through the origin. Then there exist two vectors $\vec d_1, \vec d_2$, each spanning a different one of the lines. In particular, $\{\vec d_1, \vec d_2\}$ is a linearly independent set. Because $V$ is a subspace, we have $\operatorname{span}(\{\vec d_1, \vec d_2\}) \subseteq V$. In particular, the line spanned by $\vec d_1 + \vec d_2$ is contained in $V$, and so $V$ contains at least three different lines.

(b) A set $X \subseteq \mathbb{R}^2$ such that $\operatorname{proj}_X(\vec v) = \vec v$ for all $\vec v \in \mathbb{R}^2$.

This is possible. Take $X = \mathbb{R}^2$.

(c) A linear transformation $T : \mathbb{R}^2 \to \mathbb{R}^3$ such that the range of $T$ is $\mathbb{R}^3$.

This is not possible. The matrix associated to $T$ is a $3 \times 2$ matrix. This matrix has rank at most 2, since it has two columns. This implies $\operatorname{range}(T)$ has dimension at most 2.
(d) A linear transformation $T : \mathbb{R}^2 \to \mathbb{R}^2$ whose range is $\operatorname{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$.

There are infinitely many examples, but for the sake of specificity, define $T$ by $T\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ 2x \end{bmatrix}$.

(e) A linear transformation $S : \mathbb{R}^2 \to \mathbb{R}^2$ whose null space is $\operatorname{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$.

There are infinitely many examples, but for the sake of specificity, define $S$ by $S\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ y - 2x \end{bmatrix}$.

(f) A $2 \times 2$ matrix $A$ whose column space is $\operatorname{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$.

There are infinitely many examples, but for the sake of specificity: take $A = \begin{bmatrix} 1 & 0 \\ 2 & 0 \end{bmatrix}$.

(g) A matrix $M$ whose null space is $\operatorname{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$.

There are infinitely many examples, but for the sake of specificity: take $M = \begin{bmatrix} 0 & 0 \\ -2 & 1 \end{bmatrix}$.

(h) A basis $B$ for $\mathbb{R}^2$ such that $[\vec e_1 - \vec e_2]_B = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.

Take $B = \{\vec e_1, \vec e_2\}$.
50. Let $C, D \subseteq \mathbb{R}^2$ be circles of radius 1 centered at the points $(0, 0)$ and $\left(\frac{1}{\sqrt 2}, \frac{1}{\sqrt 2}\right)$, respectively.

(a) If possible, find a subspace $V_1$ such that $V_1 \subseteq D$. If this is not possible, explain why.

This is possible. Notice that the distance between the origin and $\left(\frac{1}{\sqrt 2}, \frac{1}{\sqrt 2}\right)$ is 1, so $\vec 0 \in D$ and hence $\{\vec 0\} \subseteq D$. Therefore $V_1 = \{\vec 0\}$ works, as it satisfies the properties of a subspace.

(b) If possible, find a subspace $V_2$ such that $D \subseteq V_2$. If this is not possible, explain why.

This is possible. Take $V_2 = \mathbb{R}^2$.

(c) If possible, find a subspace $V_3$ such that $V_3 \subseteq C$. If this is not possible, explain why.

This is not possible. Every subspace of $\mathbb{R}^2$ contains the origin, but $\vec 0 \notin C$.
