
MIT 18.06 Exam 1 Solutions, Spring 2022

Johnson
Problem 1 (26 points):


Suppose  
A = \begin{pmatrix} 1 & 2 & 1 & 2 \\ 2 & 4 & 2 & 5 \\ 1 & 2 & 1 & 1 \end{pmatrix}.
(a) Give a basis for N (A):

We proceed by elimination to reduce A to upper-triangular form:


A = \begin{pmatrix} 1 & 2 & 1 & 2 \\ 2 & 4 & 2 & 5 \\ 1 & 2 & 1 & 1 \end{pmatrix}
\;\xrightarrow{\substack{r_2 - 2r_1 \\ r_3 - r_1}}\;
\begin{pmatrix} 1 & 2 & 1 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & -1 \end{pmatrix}
\;\xrightarrow{r_3 + r_2}\;
\begin{pmatrix} 1 & 2 & 1 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} = U,
which immediately tells us that A is rank 2, and that the 1st and 4th
columns are the pivot columns. We will then solve equations by divid-
ing the variables into pivot and free variables, x = [p1 , f1 , f2 , p2 ]. The
nullspace will therefore be 4 − 2 = 2 dimensional, and our basis will need
2 vectors.

To find our usual basis for N (A), the special solutions, we will set the
free variables to [1, 0] and [0, 1] and solve for the pivot variables, which
leads to the upper-triangular systems:
      
\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \end{pmatrix} = \begin{pmatrix} -2 \\ 0 \end{pmatrix} \text{ or } \begin{pmatrix} -1 \\ 0 \end{pmatrix},
where the left-hand side is the upper-triangular matrix in the pivot columns
of U and the right-hand side is minus the free columns. By backsubstitution,
we get p_2 = 0 in both cases and p_1 = -2 or -1, respectively.
Plugging these into x = [p1 , f1 , f2 , p2 ], we get our “special” basis for N (A):
   
N(A) = \text{span of } \begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \; \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}.
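As a quick sanity check (a sketch only, assuming SymPy is available; it is not needed for the exam), the same rank and special solutions can be reproduced programmatically:

    # Check Problem 1(a): rank and nullspace basis of A.
    from sympy import Matrix

    A = Matrix([[1, 2, 1, 2],
                [2, 4, 2, 5],
                [1, 2, 1, 1]])

    print(A.rank())       # 2
    print(A.nullspace())  # special solutions [-2, 1, 0, 0] and [-1, 0, 1, 0]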

 
(b) For what value or values (if any) of α does Ax = \begin{pmatrix} 1 \\ 2\alpha \\ \alpha \end{pmatrix} have any solution x?

To check whether a solution exists, we apply the same elimination steps
from A → U to this right-hand side, and check whether it is zero in the 3rd
row (matching the row of zeros in U), which ensures that it is in C(A).
Hence:

\begin{pmatrix} 1 \\ 2\alpha \\ \alpha \end{pmatrix}
\;\xrightarrow{\substack{r_2 - 2r_1 \\ r_3 - r_1}}\;
\begin{pmatrix} 1 \\ 2\alpha - 2 \\ \alpha - 1 \end{pmatrix}
\;\xrightarrow{r_3 + r_2}\;
\begin{pmatrix} 1 \\ 2\alpha - 2 \\ 3\alpha - 3 \end{pmatrix},

giving the condition 3α − 3 = 0, i.e. α = 1 .
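The same condition can be recovered by applying these row operations symbolically (a small sketch, assuming SymPy):

    # Apply the elimination steps from A -> U to the right-hand side [1, 2a, a]
    # and read off the consistency condition from the third entry.
    from sympy import Matrix, symbols, solve

    alpha = symbols('alpha')
    b = Matrix([1, 2*alpha, alpha])

    b = Matrix([b[0], b[1] - 2*b[0], b[2] - b[0]])  # r2 - 2 r1,  r3 - r1
    b = Matrix([b[0], b[1], b[2] + b[1]])           # r3 + r2
    print(b[2])                 # 3*alpha - 3, which must vanish
    print(solve(b[2], alpha))   # [1]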

Problem 2 (24 points):
Give a basis for the nullspace N (A) and a basis for the column space C(A)
for each of the following matrices:
 
(a) The one-column matrix A = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}.
This matrix is obviously rank 1 (full column rank), so N(A) = \{\vec{0}\} and
the basis for N(A) is the empty set {}: the nullspace is zero-dimensional
so it needs no basis vectors. A basis for C(A) is just \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}, the first
column of A (which is also the pivot column).
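(A one-line check of this, as a sketch assuming SymPy:)

    # Part (a): the 4x1 column has rank 1, zero nullspace, and C(A) spanned by itself.
    from sympy import Matrix

    A = Matrix([1, 2, 3, 4])    # a 4x1 column matrix
    print(A.rank())             # 1
    print(A.nullspace())        # []  -- no basis vectors, N(A) = {0}
    print(A.columnspace())      # [Matrix([[1], [2], [3], [4]])]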

(b) The one-row matrix A = \begin{pmatrix} 1 & 2 & 3 & 4 \end{pmatrix}.

This matrix is also rank 1 (full row rank), with 1 pivot column and 3
free columns. We can read off the special solutions, so the 3-dimensional
nullspace N (A) has the basis
     
\begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \; \begin{pmatrix} -3 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \; \begin{pmatrix} -4 \\ 0 \\ 0 \\ 1 \end{pmatrix}.

More explicitly, the special solutions are of the form (p_1, f_1, f_2, f_3), where
we set the free variables to (1, 0, 0), (0, 1, 0), and (0, 0, 1) (the columns of
I) and solve for p_1, but since this is one equation in one variable we can
do it by inspection: p_1 is just equal to minus the corresponding free column.

Since it has full row rank, the column space C(A) is all of R^1, and is
spanned by the pivot column \begin{pmatrix} 1 \end{pmatrix}.

Note that in 18.06 we sometimes gloss over the distinction between R
(scalars) and R^1 (1-component column vectors) and R^{1×1} (1 × 1 matrices).
If you think of A here as a “row vector” or “covector” that takes dot
products with [1, 2, 3, 4], then the output is in R rather than R^1 and you
might say that a basis is the number 1. I will accept that answer as well.
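The special solutions for this part can also be read off programmatically (again only a sketch, assuming SymPy):

    # Part (b): one pivot column, three special solutions.
    from sympy import Matrix

    A = Matrix([[1, 2, 3, 4]])
    print(A.rank())         # 1
    print(A.nullspace())    # [-2,1,0,0], [-3,0,1,0], [-4,0,0,1]
    print(A.columnspace())  # [Matrix([[1]])]  -- C(A) is all of R^1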
 
(c) The 100-row matrix A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 1 & 2 & 3 & 4 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & 2 & 3 & 4 \end{pmatrix} in which every row is \begin{pmatrix} 1 & 2 & 3 & 4 \end{pmatrix}.

This also has rank 1—after elimination, all the rows after the first will
be zero. So N (A) will be 3-dimensional and C(A) will be 1-dimensional.

The first thing to realize is that we are doing the same operation as in
part (b), but we are repeating the output 100 times. This doesn’t change
the nullspace, since if the first row of the output is zero then all of the
rows are zero. So the nullspace basis is the same as in part (b), i.e. N (A)
is spanned by the special solutions
     
\begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \; \begin{pmatrix} -3 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \; \begin{pmatrix} -4 \\ 0 \\ 0 \\ 1 \end{pmatrix}.

The column space C(A) is spanned by the pivot column—the first column,
here—of A, which is simply
 
\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} \in R^{100},

i.e. 100 rows of 1’s.
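A quick numerical check of the dimensions (a sketch, assuming NumPy and SciPy are available):

    # Part (c): 100 identical rows still give rank 1 and a 3-dimensional nullspace.
    import numpy as np
    from scipy.linalg import null_space

    A = np.tile([1, 2, 3, 4], (100, 1))  # shape (100, 4), every row is [1 2 3 4]
    print(np.linalg.matrix_rank(A))      # 1
    print(null_space(A).shape)           # (4, 3): three basis vectors for N(A)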

Problem 3 (25 points):
 
Suppose that we are solving Ax = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}. In each of the parts below, a
complete solution x is proposed. For each possibility, say impossible if that
could not be a complete solution to such an equation, or give the size m × n
and the rank of the matrix A if x is possible.
 
(a) \vec{x} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}
Impossible. A would need to be a 3 × 4 matrix, but such a matrix
would have rank ≤ 3 and hence could not have unique solutions (could
not be full column rank).
     
(b) \vec{x} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix} + \alpha_1 \begin{pmatrix} 1 \\ -1 \\ 5 \\ 17 \end{pmatrix} + \alpha_2 \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} for all real numbers α_1, α_2 ∈ R
Possible. A would need to be a 3 × 4 matrix of rank 2, in order to have
a 2d nullspace.
   
(c) \vec{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \alpha \begin{pmatrix} 1 \\ 2 \end{pmatrix} for all real numbers α ∈ R
Impossible. For α = −1, this would give ~x = ~0, which could not be
a solution with a nonzero right-hand side.
   
(d) \vec{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \alpha \begin{pmatrix} 1 \\ -1 \end{pmatrix} for all real numbers α ∈ R
Possible. A would need to be a 3 × 2 matrix of rank 1, in order to
have a 1d nullspace.
     
(e) \vec{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \alpha_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} + \alpha_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} for all real numbers α_1, α_2 ∈ R
Possible: 3 × 2 of rank 1. Since the second two vectors are the same,
α2 is redundant with α1 and this is equivalent to the previous part with
α = α1 + α2 .
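To make parts (d) and (e) concrete, here is one explicit matrix realizing such a solution set (my own illustrative construction, not part of the exam): a 3 × 2 matrix of rank 1 whose nullspace is spanned by (1, −1) and for which (1, 2) is a particular solution.

    # A 3x2, rank-1 example: A @ (1, 2) = (1, 2, 3) and A @ (1, -1) = 0,
    # so x = (1, 2) + alpha*(1, -1) solves Ax = b for every alpha.
    import numpy as np

    A = np.array([[1/3, 1/3],
                  [2/3, 2/3],
                  [1.0, 1.0]])
    b = np.array([1.0, 2.0, 3.0])

    print(np.linalg.matrix_rank(A))                   # 1
    print(np.allclose(A @ np.array([1.0, 2.0]), b))   # True
    for alpha in (0.0, 1.0, -2.5):
        x = np.array([1.0, 2.0]) + alpha * np.array([1.0, -1.0])
        print(np.allclose(A @ x, b))                  # True for every alpha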

Note: There was a typographical error in this problem: I had meant
to make the two vectors linearly independent, i.e. to ask something like
\vec{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \alpha_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} + \alpha_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix}. In this case the solution would
have been impossible: A would need to be a 3 × 2 matrix, but to have a
2d nullspace it would need to have rank 0, which means that no non-zero

right-hand side could have a solution. Equivalently, the second two vectors
form a basis for R^2 if they are linearly independent, so there is some
value of α_1 and α_2 that cancels the (1, 2) vector and gives \vec{x} = \vec{0}, which
cannot be a solution with a nonzero right-hand side.

Problem 4 (25 points):
Let
     
B = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}, \quad C = \begin{pmatrix} 2 & -1 & -1 \\ 0 & 2 & -1 \\ 0 & 0 & 2 \end{pmatrix}, \quad b = \begin{pmatrix} 5 \\ -8 \\ -4 \end{pmatrix}.
Compute:
(CB)^{-1} b.
(Hint: Remember what I said in class about inverting matrices!)

Solution: As usual, we don’t want to compute matrix inverses explicitly; we
want to solve linear systems. In this case
(CB)^{-1} b = \underbrace{B^{-1} \underbrace{C^{-1} b}_{y}}_{x},

where y = C^{-1} b is computed by solving Cy = b using backsubstitution (since
C is upper-triangular), and then x = B^{-1} y is computed by solving Bx = y using
forward substitution (since B is lower-triangular). No Gaussian elimination
is required! Proceeding, we have
    
\underbrace{\begin{pmatrix} 2 & -1 & -1 \\ 0 & 2 & -1 \\ 0 & 0 & 2 \end{pmatrix}}_{C} \underbrace{\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}}_{y} = \underbrace{\begin{pmatrix} 5 \\ -8 \\ -4 \end{pmatrix}}_{b} \implies \begin{array}{l} 2y_1 = 5 + y_2 + y_3 = -2 \implies y_1 = -1 \\ 2y_2 = -8 + y_3 = -10 \implies y_2 = -5 \\ 2y_3 = -4 \implies y_3 = -2 \end{array},

and hence
    
\underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}}_{B} \underbrace{\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} -1 \\ -5 \\ -2 \end{pmatrix}}_{y} \implies \begin{array}{l} x_1 = -1 \\ x_2 = -5 - x_1 = -4 \\ x_3 = -2 - x_1 - x_2 = 3 \end{array},

giving us our solution


 
x = (CB)^{-1} b = \begin{pmatrix} -1 \\ -4 \\ 3 \end{pmatrix}.
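In code, the two triangular solves might look like the following sketch (assuming NumPy and SciPy; this just illustrates the method described above):

    # Compute (CB)^{-1} b as two triangular solves: y = C^{-1} b, then x = B^{-1} y.
    import numpy as np
    from scipy.linalg import solve_triangular

    B = np.array([[1., 0., 0.],
                  [1., 1., 0.],
                  [1., 1., 1.]])
    C = np.array([[2., -1., -1.],
                  [0.,  2., -1.],
                  [0.,  0.,  2.]])
    b = np.array([5., -8., -4.])

    y = solve_triangular(C, b, lower=False)  # backsubstitution (C is upper-triangular)
    x = solve_triangular(B, y, lower=True)   # forward substitution (B is lower-triangular)
    print(x)                                 # [-1. -4.  3.]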

Alternative solutions: You could, of course, solve this in other ways. You
could multiply CB together to obtain
CB = \begin{pmatrix} 0 & -2 & -1 \\ 1 & 1 & -1 \\ 2 & 2 & 2 \end{pmatrix},
and then laboriously invert this, e.g. with Gauss–Jordan, to obtain
(CB)^{-1} = \begin{pmatrix} \frac{1}{2} & \frac{1}{4} & \frac{3}{8} \\ -\frac{1}{2} & \frac{1}{4} & -\frac{1}{8} \\ 0 & -\frac{1}{2} & \frac{1}{4} \end{pmatrix},

and finally multiply this by b. But this is a lot more work than doing a single
backsolve followed by a single forward-solve, and is much more error-prone.
In class, I repeatedly emphasized that you almost never need to compute
matrix inverses explicitly, and if you do so then you are probably making
a mistake. Inverses are useful for algebraic manipulations, but when it comes
time to finally calculate something you should read them as “solving a linear
system.” Another way of viewing this is that, for n × n matrices, back/forward
solves take ∼ n^2 operations, but both multiplying CB and inverting the matrix
take ∼ n^3 operations, so if I had given you a larger matrix then the penalty of
doing it the slow way would have been even more dramatic.
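To illustrate the cost argument, here is a rough timing sketch (the exact numbers depend on your machine, so treat it only as a qualitative comparison of the ∼ n^2 solve against the ∼ n^3 explicit inverse):

    # Compare a triangular solve (~n^2 work) with forming an explicit inverse (~n^3 work).
    import time
    import numpy as np
    from scipy.linalg import solve_triangular

    n = 2000
    rng = np.random.default_rng(0)
    L = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)  # well-conditioned lower-triangular
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x_solve = solve_triangular(L, b, lower=True)
    t1 = time.perf_counter()
    x_inv = np.linalg.inv(L) @ b
    t2 = time.perf_counter()

    print(f"triangular solve: {t1 - t0:.4f} s, explicit inverse: {t2 - t1:.4f} s")
    print(np.allclose(x_solve, x_inv))  # same answer, very different cost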
