Exam 1 Solutions
To find our usual basis for N(A), the special solutions, we will set the
free variables $(f_1, f_2)$ to $(1, 0)$ and $(0, 1)$ and solve for the pivot variables, which
leads to the upper-triangular systems:
$$\begin{pmatrix} 1 & 2 \\ & 1 \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \end{pmatrix} = \begin{pmatrix} -2 \\ 0 \end{pmatrix} \quad\text{or}\quad \begin{pmatrix} -1 \\ 0 \end{pmatrix},$$
where the left-hand-side is the upper-triangular matrix in the pivot columns
of U and the right-hand-side is minus the free columns. By backsubstitution,
we get $p_2 = 0$ in both cases and $p_1 = -2$ or $-1$, respectively.
Plugging these into $x = [p_1, f_1, f_2, p_2]$, we get our "special" basis for N(A):
$$N(A) = \text{span of } \begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}.$$
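As a sanity check, SymPy's nullspace() follows exactly this special-solution recipe. The echelon form U below is an assumption reconstructed from the numbers above (pivots in columns 1 and 4, free columns equal to $(2,0,0)$ and $(1,0,0)$), so this is only an illustrative sketch:

```python
import sympy as sp

# Echelon form consistent with the numbers above (an assumption, since the
# original matrix A is not reproduced here): pivots in columns 1 and 4.
U = sp.Matrix([[1, 2, 1, 2],
               [0, 0, 0, 1],
               [0, 0, 0, 0]])

# nullspace() sets each free variable to 1 in turn (the special-solution
# recipe), so it reproduces the basis above.
for v in U.nullspace():
    print(v.T)    # Matrix([[-2, 1, 0, 0]]) and Matrix([[-1, 0, 1, 0]])
```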
(b) For what value or values (if any) of $\alpha$ does $Ax = \begin{pmatrix} 1 \\ 2\alpha \\ \alpha \end{pmatrix}$ have any solution $x$?
$$\begin{pmatrix} 1 \\ 2\alpha \\ \alpha \end{pmatrix} \;\xrightarrow{\; r_2 - 2r_1,\; r_3 - r_1 \;}\; \begin{pmatrix} 1 \\ 2\alpha - 2 \\ \alpha - 1 \end{pmatrix} \;\xrightarrow{\; r_3 + r_2 \;}\; \begin{pmatrix} 1 \\ 2\alpha - 2 \\ 3\alpha - 3 \end{pmatrix},$$
applying to the right-hand side the same row operations that eliminate A. Since the third row of U is zero, the system has a solution only if the third entry of the eliminated right-hand side is also zero: $3\alpha - 3 = 0$, i.e. $\alpha = 1$.
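This bookkeeping is easy to double-check symbolically; the sketch below just repeats the same row operations on the right-hand side (nothing about A itself is needed):

```python
import sympy as sp

alpha = sp.symbols('alpha')

# The right-hand side from part (b).
b = sp.Matrix([1, 2*alpha, alpha])

b[1] = b[1] - 2*b[0]    # r2 -> r2 - 2 r1
b[2] = b[2] - b[0]      # r3 -> r3 - r1
b[2] = b[2] + b[1]      # r3 -> r3 + r2
print(b.T)              # Matrix([[1, 2*alpha - 2, 3*alpha - 3]])

# The third row of U is zero, so Ax = b is solvable only when the last
# entry of the eliminated right-hand side vanishes:
print(sp.solve(b[2], alpha))   # [1]
```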
Problem 2 (24 points):
Give a basis for the nullspace N (A) and a basis for the column space C(A)
for each of the following matrices:
(a) The one-column matrix $A = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}$.
This matrix is obviously rank 1 (full column rank), so $N(A) = \{\vec{0}\}$ and
the basis for N(A) is the empty set $\{\}$: the nullspace is zero-dimensional
so it needs no basis vectors. A basis for C(A) is just $\begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}$, the first
column of A (which is also the pivot column).
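These facts are easy to confirm mechanically, e.g. with a quick SymPy check:

```python
import sympy as sp

A = sp.Matrix([1, 2, 3, 4])   # the 4x1 matrix of part (a)
print(A.rank())               # 1 (full column rank)
print(A.nullspace())          # []  -- the empty basis, N(A) = {0}
print(A.columnspace())        # [Matrix([[1], [2], [3], [4]])] -- the pivot column
```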
(b) The one-row matrix $A = \begin{pmatrix} 1 & 2 & 3 & 4 \end{pmatrix}$.
This matrix is also rank 1 (full row rank), with 1 pivot column and 3
free columns. We can read off the special solutions, so the 3-dimensional
nullspace N(A) has the basis
$$\begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} -3 \\ 0 \\ 1 \\ 0 \end{pmatrix},\ \begin{pmatrix} -4 \\ 0 \\ 0 \\ 1 \end{pmatrix}.$$
More explicitly, the special solutions are of the form (p1 , f1 , f2 , f3 ), where
we set the free variables to (1, 0, 0), (0, 1, 0), and (0, 0, 1) (the columns of
I) and solve for p1 , but since this is one equation in one variable we can
do it by inspection: $p_1$ is just equal to minus the free column.
Since it has full row rank, the column space C(A) is all of $\mathbb{R}^1$, and is
spanned by the pivot column $\begin{pmatrix} 1 \end{pmatrix}$.
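Again, a quick SymPy check reproduces the special solutions and the column space:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 4]])   # the 1x4 matrix of part (b)
for v in A.nullspace():         # the special solutions, free variables set to 1
    print(v.T)                  # (-2, 1, 0, 0), (-3, 0, 1, 0), (-4, 0, 0, 1)
print(A.columnspace())          # [Matrix([[1]])] -- C(A) is all of R^1
```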
(c) The $100 \times 4$ matrix whose 100 rows are each equal to the row $\begin{pmatrix} 1 & 2 & 3 & 4 \end{pmatrix}$ from part (b).
This also has rank 1: after elimination, all the rows after the first will
be zero. So N(A) will be 3-dimensional and C(A) will be 1-dimensional.
The first thing to realize is that we are doing the same operation as in
part (b), but we are repeating the output 100 times. This doesn’t change
the nullspace, since if the first row of the output is zero then all of the
rows are zero. So the nullspace basis is the same as in part (b), i.e. N (A)
is spanned by the special solutions
$$\begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} -3 \\ 0 \\ 1 \\ 0 \end{pmatrix},\ \begin{pmatrix} -4 \\ 0 \\ 0 \\ 1 \end{pmatrix}.$$
The column space C(A) is spanned by the pivot column—the first column,
here—of A, which is simply
$$\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} \in \mathbb{R}^{100}.$$
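A NumPy check of this part, assuming as above that A consists of 100 copies of the row $(1\ 2\ 3\ 4)$:

```python
import numpy as np

# 100 copies of the row (1, 2, 3, 4) stacked on top of one another (as assumed above).
A = np.tile([1, 2, 3, 4], (100, 1))

print(A.shape, np.linalg.matrix_rank(A))   # (100, 4) 1  =>  dim N(A) = 4 - 1 = 3

# The special solutions from part (b) are still a nullspace basis:
N = np.array([[-2, 1, 0, 0],
              [-3, 0, 1, 0],
              [-4, 0, 0, 1]]).T
print(np.allclose(A @ N, 0))               # True

# The pivot (first) column is the all-ones vector in R^100:
print(np.array_equal(A[:, 0], np.ones(100)))   # True
```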
Problem 3 (25 points):
Suppose that we are solving $Ax = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$. In each of the parts below, a
complete solution x is proposed. For each possibility, say impossible if that
could not be a complete solution to such an equation, or give the size $m \times n$
and the rank of the matrix A if x is possible.
(a) $\vec{x} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}$
Impossible. A would need to be a 3 × 4 matrix, but such a matrix
would have rank ≤ 3 and hence could not have unique solutions (could
not be full column rank).
(b) $\vec{x} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix} + \alpha_1 \begin{pmatrix} 1 \\ -1 \\ 5 \\ 17 \end{pmatrix} + \alpha_2 \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}$ for all real numbers $\alpha_1, \alpha_2 \in \mathbb{R}$
Possible. A would need to be a 3 × 4 matrix of rank 2, in order to have
a 2d nullspace.
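To see that such an A really exists, here is one concrete construction in SymPy. The particular matrix it produces is only an illustration (the problem does not specify A): its rows are chosen orthogonal to the two proposed nullspace vectors and scaled so that A times the particular solution is $(1, 2, 3)$.

```python
import sympy as sp

xp = sp.Matrix([1, 2, 3, 4])        # the proposed particular solution
n1 = sp.Matrix([1, -1, 5, 17])      # the proposed nullspace directions
n2 = sp.Matrix([1, 0, 0, 1])

# Rows of A must be orthogonal to the nullspace, i.e. lie in the 2d orthogonal
# complement of span{n1, n2}; get a basis for that complement:
w1, w2 = sp.Matrix.hstack(n1, n2).T.nullspace()

# Build three rows from w1, w2, scaled so that A*xp = (1, 2, 3); using w2 for
# the middle row makes the rank 2 rather than 1.
A = sp.Matrix.vstack((1 * w1 / w1.dot(xp)).T,
                     (2 * w2 / w2.dot(xp)).T,
                     (3 * w1 / w1.dot(xp)).T)

print(A.shape, A.rank())            # (3, 4) 2
print((A * xp).T)                   # Matrix([[1, 2, 3]])
print((A * n1).T, (A * n2).T)       # both zero: n1, n2 span the nullspace
```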
(c) $\vec{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \alpha \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ for all real numbers $\alpha \in \mathbb{R}$
Impossible. For α = −1, this would give ~x = ~0, which could not be
a solution with a nonzero right-hand side.
(d) $\vec{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \alpha \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ for all real numbers $\alpha \in \mathbb{R}$
Possible. A would need to be a 3 × 2 matrix of rank 1, in order to
have a 1d nullspace.
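Likewise, one concrete illustration for this part (again, not specified by the problem) is a 3 × 2 matrix whose rows are all multiples of $(1, 1)$, so that its nullspace is spanned by $(1, -1)$, with the rows scaled so that $A(1,2) = (1,2,3)$:

```python
import sympy as sp

# A 3x2 rank-1 example (an illustration, not from the problem statement):
# every row is a multiple of (1, 1), so N(A) = span{(1, -1)}, and the
# multiples 1/3, 2/3, 1 are chosen so that A*(1, 2) = (1, 2, 3).
A = sp.Matrix([[sp.Rational(1, 3), sp.Rational(1, 3)],
               [sp.Rational(2, 3), sp.Rational(2, 3)],
               [1, 1]])

print(A.shape, A.rank())           # (3, 2) 1
print((A * sp.Matrix([1, 2])).T)   # Matrix([[1, 2, 3]]) -- the particular solution
print((A * sp.Matrix([1, -1])).T)  # Matrix([[0, 0, 0]]) -- (1, -1) spans N(A)
```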
(e) $\vec{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \alpha_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} + \alpha_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ for all real numbers $\alpha_1, \alpha_2 \in \mathbb{R}$
Possible: 3 × 2 of rank 1. Since the second two vectors are the same,
α2 is redundant with α1 and this is equivalent to the previous part with
α = α1 + α2 .
right-hand-side could have a solution. Equivalently, the second two vectors
form a basis for $\mathbb{R}^2$ if they are linearly independent, so there is some
value of $\alpha_1$ and $\alpha_2$ that cancels the $(1, 2)$ vector and gives $\vec{x} = \vec{0}$, which
cannot be a solution with a nonzero right-hand side.
Problem 4 (25 points):
Let
$$B = \begin{pmatrix} 1 & & \\ 1 & 1 & \\ 1 & 1 & 1 \end{pmatrix}, \qquad C = \begin{pmatrix} 2 & -1 & -1 \\ & 2 & -1 \\ & & 2 \end{pmatrix}, \qquad b = \begin{pmatrix} 5 \\ -8 \\ -4 \end{pmatrix}.$$
Compute:
$(CB)^{-1} b.$
(Hint: Remember what I said in class about inverting matrices!)
The key point is not to compute the inverse explicitly: $(CB)^{-1}b = B^{-1}C^{-1}b$, so we first solve $Cy = b$ by backsubstitution (C is upper-triangular), which gives $y_3 = -4/2 = -2$, $y_2 = (-8 + y_3)/2 = -5$, and $y_1 = (5 + y_2 + y_3)/2 = -1$, and hence
$$\underbrace{\begin{pmatrix} 1 & & \\ 1 & 1 & \\ 1 & 1 & 1 \end{pmatrix}}_{B} \underbrace{\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} -1 \\ -5 \\ -2 \end{pmatrix}}_{y} \implies \begin{array}{l} x_1 = -1, \\ x_2 = -5 - x_1 = -4, \\ x_3 = -2 - x_1 - x_2 = 3, \end{array}$$
so $(CB)^{-1}b = x = (-1, -4, 3)$. The long way would be to multiply out CB explicitly, compute its inverse (e.g. by Gauss–Jordan elimination),
and finally multiply this by b. But this is a lot more work than doing a single
backsolve followed by a single forward-solve, and is much more error-prone.
In class, I repeatedly emphasized that you almost never need to compute
matrix inverses explicitly, and if you do so then you are probably making
a mistake. Inverses are useful for algebraic manipulations, but when it comes
time to finally calculate something you should read them as “solving a linear
system.” Another way of viewing this is that, for n × n matrices, back/forward
solves take $\sim n^2$ operations, but both multiplying CB and inverting the matrix
take $\sim n^3$ operations, so if I had given you a larger matrix then the penalty of
doing it the slow way would have been even more dramatic.
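To make the comparison concrete, here is a short SciPy sketch (purely illustrative) that computes $(CB)^{-1}b$ both ways with the B, C, and b above:

```python
import numpy as np
from scipy.linalg import solve_triangular

B = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.]])
C = np.array([[2., -1., -1.],
              [0.,  2., -1.],
              [0.,  0.,  2.]])
b = np.array([5., -8., -4.])

# Fast way: (CB)^{-1} b = B^{-1} (C^{-1} b), i.e. one backsolve with the
# upper-triangular C followed by one forward-solve with the lower-triangular B.
y = solve_triangular(C, b, lower=False)   # back-substitution:    y = (-1, -5, -2)
x = solve_triangular(B, y, lower=True)    # forward-substitution: x = (-1, -4,  3)
print(x)

# Slow way: form CB, invert it explicitly, then multiply by b (~n^3 work).
print(np.linalg.inv(C @ B) @ b)           # same answer, far more arithmetic
```

Here solve_triangular does exactly the back/forward substitution described above, without ever forming an inverse.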