Math 110
Review Solutions
Week 1.
1. Show that the set of differentiable real-valued functions f on the interval (−4, 4) such that f′(−1) = 3f(2) is a subspace of R^(−4,4).
1. Show it contains 0. Let f0 denote the zero function, where f0(x) = 0 for all x. Then f0′(−1) = 0 = 3f0(2), so f0 is in this set.
2. Show it's closed under addition and scalar multiplication. Let c ∈ R be a scalar, and let f, g be differentiable real-valued functions on (−4, 4) such that f′(−1) = 3f(2) and g′(−1) = 3g(2). Then (f + cg)′(−1) = f′(−1) + cg′(−1) = 3f(2) + 3cg(2) = 3(f + cg)(2) by linearity of differentiation. Thus f + cg is also in the set, so the set is closed under addition and scalar multiplication and contains zero, and is thus a subspace.
(If we check this closure property first, we don't need to check (1.) separately: the set is nonempty, and taking g = f and c = −1 in the closure check shows 0 = f + (−1)f is in the set.)
2. Let b ∈ R. Show that the set of continuous real-valued functions f on the interval [0, 1] such that ∫₀¹ f = b is a subspace of R^[0,1] if and only if b = 0.
Check that this set contains f0 (the zero function): ∫₀¹ f0 = 0, so if the set is a subspace, then necessarily b = 0.
Now we show that if b = 0, the set is a subspace. Let c ∈ R be a scalar, and let f, g be continuous real-valued functions on [0, 1] such that ∫₀¹ f = 0 and ∫₀¹ g = 0. Then ∫₀¹ (f + cg) = ∫₀¹ f + c ∫₀¹ g = 0 + c · 0 = 0 by linearity of integration. Thus f + cg is also in the set, so the set is closed under addition and scalar multiplication and contains zero, and is thus a subspace when b = 0.
4. Give an example of a nonempty subset U of R2 such that U is closed under scalar multiplication,
but U is not a subspace of R2 .
U is the union of the 1st and 3rd quadrants of R², i.e. the set {(x, y) : xy ≥ 0}. This U is closed under scalar multiplication but not under addition: for example, (1, 0) + (0, −1) = (1, −1) ∉ U. (In general, any cone will be closed under scalar multiplication, where a cone is a set bounded by subspaces of R^n.) Even simpler, the union of a set of subspaces is closed under scalar multiplication but is not necessarily a subspace. For instance, U can be the union of the lines y = 2x and y = x.
5. Give an example of a nonempty subset U of R such that U is closed under addition, but U is not a
subspace of R.
U = {1, 2, 3, . . .}, the set of positive integers. This set is closed under addition but not under scalar multiplication: (−1) · 1 = −1 ∉ U, so U is not a subspace.
Prove or give a counterexample: if U1, U2, and W are subspaces of V such that
U1 + W = U2 + W,
then U1 = U2.
Counterexample: U1 = {0} and U2 = W for any nonzero subspace W. Then U1 + W = W = U2 + W, but U1 ≠ U2.
Week 2.
v1 + w = a3(v3 + w) + · · · + am(vm + w)
v2 + w = b3(v3 + w) + · · · + bm(vm + w)
Isolate w to get:
w(1 − a3 − · · · − am) = −v1 + a3v3 + · · · + amvm
w(1 − b3 − · · · − bm) = −v2 + b3v3 + · · · + bmvm
We want (11/6)(t − 14) − 22 to equal 0 in order to eliminate the last row. Solving for t: (11/6)(t − 14) = 22 gives t − 14 = 12, so t = 26.
6. Explain why there does not exist a list of six polynomials that is linearly independent in P4 (R).
dim P4 (R) = 5, and by the Dimension Theorem, there cannot be 6 linearly independent vectors in
a vector space of dimension 5.
8. Suppose p0 , . . . , pm are polynomials in Pm (R) such that pj (2) = 0 for each j. Prove that p0 , . . . , pm
is not linearly independent in Pm (R).
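A sketch of one possible argument (the original does not include a solution): all of p0, . . . , pm lie in the subspace
U = {p ∈ Pm(R) : p(2) = 0},
which is a proper subspace of Pm(R) (it does not contain the constant polynomial 1), so dim U ≤ m. A list of m + 1 vectors in a space of dimension at most m cannot be linearly independent.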
Week 3.
1. Let
S = Span ((4, 2, −1, 1), (2, 6, −7, 1), (2, −4, 6, 0), (−1, 0, −2, 3), (1, 2, −3, 2))
be a subspace of R4 . Find a basis for S.
This is a problem to check that you can easily do row reduction on matrices and get all the infor-
mation you need from that result.
We set up a matrix using the vectors {v1 , . . . , v5 } in S as columns so that the rank of the matrix can
tell us how many of these columns are linearly independent, and so that the columns that contain
the pivots can tell us which of the vectors we can choose as a basis.
v1 v2 v3 v4 v5
[  4   2   2  −1   1 ]    [ 1   1   0    3    2 ]
[  2   6  −4   0   2 ] →  [ 0  −2   2  −13   −7 ]
[ −1  −7   6  −2  −3 ]    [ 0   4  −4   −6   −2 ]
[  1   1   0   3   2 ]    [ 0  −6   6    1   −1 ]

  [ 1   1   0    3    2 ]    [ 1   1   0   3   2 ]
→ [ 0   2  −2   13    7 ] →  [ 0   2  −2  13   7 ]
  [ 0   0   0  −32  −16 ]    [ 0   0   0   2   1 ]
  [ 0   0   0   40   20 ]    [ 0   0   0   0   0 ]
We take the vectors v1 , v2 , v4 corresponding to the columns containing the pivots as the basis for S.
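As a quick sanity check (not required by the problem), the free columns really are combinations of the pivot columns:
v3 = v1 − v2 : (4, 2, −1, 1) − (2, 6, −7, 1) = (2, −4, 6, 0) ✓
v5 = (1/4)v1 + (1/4)v2 + (1/2)v4 : (1, 1/2, −1/4, 1/4) + (1/2, 3/2, −7/4, 1/4) + (−1/2, 0, −1, 3/2) = (1, 2, −3, 2) ✓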
2. Let V be an infinite dimensional vector space. Show that V contains an infinite set of linearly
independent vectors.
We prove this by contradiction. (In general, to show something is infinite, the easiest way to prove
it is to suppose it is finite of maximal size n, and then show that we can add another element to it,
which contradicts the maximality of n.)
Suppose the largest set of linearly independent vectors in V is {v1, . . . , vn}, of size n. Since V is infinite-dimensional, {v1, . . . , vn} cannot span V. Thus there is some vector w ∉ Span(v1, . . . , vn), and so {v1, . . . , vn, w} is also linearly independent, which contradicts the maximality assumption. Thus V must contain an infinite set of linearly independent vectors.
3. Find an example of a linear transformation T : R3 → R2 such that (1, 1, 1) ∈ N (T ) and T is onto.
Let us choose a basis b1, b2, b3 for R³ and define T by T(b1) = 0, T(b2) = (1, 0), and T(b3) = (0, 1). The easiest choice is b1 = (1, 1, 1), and then b2 = (1, 0, 0) and b3 = (0, 1, 0) complete a valid basis. Then for v = a1b1 + a2b2 + a3b3, T(v) = (a2, a3). This is sufficient to define T. (Of course, you can also write T in terms of the standard basis for R³, but that is not necessary and more time consuming.)
We show these choices satisfy the conditions. The set {T(b2), T(b3)} = {(1, 0), (0, 1)} is linearly independent since these vectors are the standard basis for R². Thus dim R(T) = 2 so T is onto, and (1, 1, 1) = b1 ∈ N(T), as desired.
4. Find an example of a linear transformation T : R2 → R3 such that (1, 1, 1), (−1, 2, 0) ∈ R(T ). What
is N (T )?
Let T be a linear transformation such that T(1, 0) = (1, 1, 1) and T(0, 1) = (−1, 2, 0). Then T(x, y) = (x − y, x + 2y, x). Since (1, 1, 1) and (−1, 2, 0) are linearly independent, dim R(T) = 2, so by the dimension theorem dim N(T) = 2 − dim R(T) = 0, and N(T) = {0}.
5. Suppose T ∈ L(V, W ) and v1 , . . . , vm is a list of vectors in V such that {T v1 , . . . , T vm } is a linearly
independent list in W . Prove that v1 , . . . , vm is linearly independent.
Suppose they are linearly dependent, so there are some a1 , . . . , am that are not all 0 such that
a1 v1 + · · · + am vm = 0. Then T (a1 v1 + · · · + am vm ) = 0, but by linearity this means that a1 T (v1 ) +
· · ·+am T (vm ) = 0, which contradicts that T (v1 ), . . . , T (vm ) is linearly independent. Thus v1 , . . . , vm
must be linearly independent.
6. Give an example of a transformation that satisfies homogeneity (i.e. cf(v) = f(cv) for any scalar c and vector v) but is not linear, and an example that satisfies additivity (i.e. f(v + w) = f(v) + f(w) for vectors v, w) but is not linear.
Homogeneity: on V = R itself, homogeneity already forces linearity (T(x) = T(x · 1) = xT(1)), and T(x) = 1/x is not homogeneous, since T(cx) = 1/(cx) ≠ cT(x). A standard example is V = R² with T(x, y) = (x³ + y³)^(1/3): then T(c(x, y)) = (c³x³ + c³y³)^(1/3) = cT(x, y), but T(1, 0) + T(0, 1) = 2 while T(1, 1) = 2^(1/3), so T is homogeneous but not additive, hence not linear.
Additivity: take V = C as a complex vector space and T(z) = z̄, complex conjugation. Then T(z + w) = T(z) + T(w), but T(iz) = −iT(z) ≠ iT(z) for z ≠ 0, so T is additive but not homogeneous, hence not linear.
8. Compute the null space and range of the linear transformation T : P4(R) → P4(R), defined by:
T(1) = 4 + 2x + 2x² − x³ + x⁴
T(x) = 2 + 6x − 4x² + 2x⁴
T(x²) = −1 − 7x + 6x² − 2x³ − 3x⁴
T(x³) = 1 + x + 3x³ + 2x⁴
. . . to be continued!
Week 5.
1. Suppose D ∈ L(P(R)) is the differentiation map and T ∈ L(P(R)) is the multiplication by x2 map.
Show that T D 6= DT . Let Q ∈ L(P(R)) be the integration map. Is DQ = QD?
We show this using the standard basis β = {1, x, x², . . .} for P(R), with βi = x^i; restricting to Pn(R) for any integer n gives finite matrices. After looking at some examples, it is easy to see that D(βi) = iβi−1 for i ≥ 1 and D(β0) = D(1) = 0. Also, T(βi) = βi+2 for i ≥ 0.
Thus we can check that (when we restrict to dimension n) DT(βi) = D(βi+2) = (i + 2)βi+1 and TD(βi) = T(iβi−1) = iβi+1, so both matrices are zero except on the first subdiagonal:

[DT]ββ =
[ 0  0  0  ··· ]
[ 2  0  0  ··· ]
[ 0  3  0  ··· ]
[ 0  0  4  ··· ]
[ ⋮         ⋱ ]

[TD]ββ =
[ 0  0  0  ··· ]
[ 0  0  0  ··· ]
[ 0  1  0  ··· ]
[ 0  0  2  ··· ]
[ ⋮         ⋱ ]
Thus T D and DT are not equal. We could also show they are not equal by showing their action on
the {βi } elements is not equal.
Also, Q(βi) = βi+1/(i + 1) for i ≥ 0; in particular Q(β0) = Q(1) = x = β1. Thus

[Q]ββ =
[ 0   0    0   ··· ]
[ 1   0    0   ··· ]
[ 0  1/2   0   ··· ]
[ 0   0   1/3  ··· ]
[ ⋮             ⋱ ]

[QD]ββ =
[ 0  0  0  ··· ]
[ 0  1  0  ··· ]
[ 0  0  1  ··· ]
[ ⋮         ⋱ ]

[DQ]ββ =
[ 1  0  0  ··· ]
[ 0  1  0  ··· ]
[ 0  0  1  ··· ]
[ ⋮         ⋱ ]

So DQ is the identity, while QD kills constants (QD(β0) = 0). Thus DQ ≠ QD.
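A quick concrete check of both claims (not part of the original solution): TD(x) = T(1) = x² while DT(x) = D(x³) = 3x², so TD ≠ DT; and QD(1) = Q(0) = 0 while DQ(1) = D(x) = 1, so QD ≠ DQ.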
3. Let T, S ∈ L(V, W ). Suppose T + S is one to one. Prove or show a counterexample that T and S
must also be one to one.
Counterexample. Let V = R2 and W = R2 . Let T (x, y) = (x, 0), S(x, y) = (0, y).
4. Let T, S ∈ L(V, W ). Suppose T + S is onto. Prove or show a counterexample that T and S must
also be onto.
Counterexample. Let V, W = R, and let T be the 0 map and S the identity map.
5. Suppose V is finite-dimensional. Prove that every linear map on a subspace of V can be extended to a linear map on V. I.e. show that if U is a subspace of V and S ∈ L(U, W), then there exists T ∈ L(V, W) such that Tu = Su for all u ∈ U.
Let u1 , . . . , uk be a basis for U , so we can extend that to a basis for V by taking u1 , . . . , uk , v1 , . . . , vm .
Define T ∈ L(V, W ) by T (ui ) = S(ui ) and T (vi ) = 0. (This is sufficient for the definition of T since
its action is completely determined by its action on the basis elements. However, you do need to
say a couple sentences to prove T satisfies the conditions.)
6. Suppose V is finite-dimensional with dim V > 0 and suppose W is infinite-dimensional. Prove that
L(V, W ) is infinite-dimensional.
Suppose L(V, W) is finite-dimensional with basis {T1, . . . , Tn}. Each R(Ti) is finite-dimensional (dim R(Ti) ≤ dim V), so the subspace R(T1) + · · · + R(Tn) of W is finite-dimensional. Since W is infinite-dimensional, we can find some w ∈ W such that w ∉ R(T1) + · · · + R(Tn). Let {v1, . . . , vk} be a basis for V, and define T ∈ L(V, W) by T(vi) = w for each i. If T were in the span of T1, . . . , Tn, then w = T(v1) would lie in R(T1) + · · · + R(Tn), a contradiction. Thus T is not in the span of T1, . . . , Tn, which contradicts that they are a basis for L(V, W), so this vector space must be infinite-dimensional.
(There are several ways to solve this problem, this is one example.)
Week 4.
1. Show that for each q ∈ P(R) there exists p ∈ P(R) such that ((x² + 5x + 7)p)″ = q.
We show that for any integer n, the map T ∈ L(Pn(R)) defined by T(p(x)) = ((x² + 5x + 7)p)″ is onto. This is the composition of two maps: U(p(x)) = (x² + 5x + 7)p(x) composed with S(r(x)) = r″(x). It is easy to show S : Pn+2(R) → Pn(R) is onto and U : Pn(R) → Pn+2(R) is one-to-one (I won't show this here, but you should know how to do this). Thus T = SU is onto (you did that exercise in the homework). Since this is true for all n, the map on P(R) has the same property.
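For a concrete instance (a check, not part of the proof): take q = 1. Then p = 1/2 works, since
((x² + 5x + 7) · 1/2)″ = (1/2)(x² + 5x + 7)″ = (1/2) · 2 = 1.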
2. Suppose D ∈ L(P3 (R), P2 (R)) is the differentiation map. Find a basis of P3 (R) such that the matrix
of D with respect to these bases is
[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 0  0  1  0 ]
For this problem it is important to specify the basis of P2 (R)! Since the problem didn’t say, we can
choose that to be the standard basis β = {1, x, x2 }. Thus the above matrix is [D]βγ for some γ which
we compute below.
Let γ1, . . . , γ4 be a basis of P3(R), with γ1 = a0 + a1x + a2x² + a3x³ and coefficients {bi} for γ2, {ci} for γ3, and {di} for γ4. Then the columns of the matrix say that D(γ1) = 1, D(γ2) = x, D(γ3) = x², and D(γ4) = 0. So γ1 = x + const, γ2 = x²/2 + const, γ3 = x³/3 + const, and γ4 is a nonzero constant. Choosing the constants to be 0 for the first three and γ4 = 1, we may take γ = {x, x²/2, x³/3, 1}.
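To check this choice (a verification, not in the original): x, x²/2, x³/3, 1 have distinct degrees, so γ is indeed a basis of P3(R), and
D(x) = 1, D(x²/2) = x, D(x³/3) = x², D(1) = 0,
so the columns of [D]βγ are exactly the columns of the displayed matrix.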
3. Let T : R2 → R3 be defined by T (x, y) = (x + y, x − y, 2x + y), and let β be the basis {(−1, 2), (3, 4)}
on R2 and let R3 have the standard basis γ. Denote the standard basis on R2 by σ. Find the change
of basis matrix [ψ]γσ . Find [T ]γβ .
This problem has a typo, it should be: [ψ]βσ .
It is easiest to find:
[ψ⁻¹]σβ =
[ −1  3 ]
[  2  4 ]
(its columns are the β vectors written in standard coordinates). Thus
[ψ]βσ = ([ψ⁻¹]σβ)⁻¹ = −(1/10) ·
[  4  −3 ]
[ −2  −1 ]
Also,
[T]γσ =
[ 1   1 ]
[ 1  −1 ]
[ 2   1 ]
Then
[T]γβ = [T]γσ [ψ⁻¹]σβ =
[ 1   1 ]   [ −1  3 ]   [  1   7 ]
[ 1  −1 ] · [  2  4 ] = [ −3  −1 ]
[ 2   1 ]               [  0  10 ]
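As a sanity check (not in the original), the columns of [T]γβ should be T applied to the β vectors, written in standard coordinates:
T(−1, 2) = (−1 + 2, −1 − 2, −2 + 2) = (1, −3, 0)
T(3, 4) = (3 + 4, 3 − 4, 6 + 4) = (7, −1, 10)
which are exactly the two columns computed above.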
4. Suppose V is finite dimensional and dim V > 1. Prove that the set of noninvertible operators on V
is not a subspace of L(V ). Prove that the set of noninvertible operators on V is a subspace of L(V )
when dim V = 1.
Discussed in section. The set of noninvertible operators is not closed under addition. If dim V > 1,
choose a nonzero noninvertible operator T , say projection onto the first basis vector v1 , represented
by the matrix with a 1 in the upper left corner and zeros everywhere else. Now let S = I − T ,
represented by the identity matrix minus [T ]. S is noninvertible as well (the first basis vector v1 is
in N (S)), but T + S = I, which is invertible.
When dim V = 1, the only noninvertible operator on V is the 0 transformation, and the set {0} is a subspace. (See proof of Quiz problem 3.)
5. Prove that if T ∈ L(V, W) is injective and surjective, then T has an inverse in L(W, V).
Let S : W → V be such that if T (v) = w, then S(w) = v. Since T is surjective (onto), every w ∈ W
has some v such that T (v) = w, and since T is injective (one-to-one), every w ∈ W has a unique
v ∈ V such that T (v) = w. Thus S(w) is well-defined. We show that S is a linear transformation.
1. T (0) = 0, so S(0) = 0.
2. Let w, u ∈ W and let c be a scalar. Let v, t ∈ V be such that T (v) = w and T (t) = u. Then
T (v + ct) = T (v) + cT (t) = w + cu, so S(w + cu) = v + ct = S(w) + cS(t). Thus S is indeed a linear
transformation.
6. Let T ∈ L(Pn (R), R) be defined by T (f (x)) = f (1). Find the matrix representation of T with
respect to the standard basis β of Pn (R). Let S ∈ L(Rm , R) be such that [S]σγ = [T ]σβ where γ is the
standard basis of Rm and σ is the standard basis of R. What is m and what is S?
Let R ∈ L(Dm×m (R), R) where Dm×m (R) is the vector space of diagonal m × m matrices. Suppose
[R]σα = [T ]σβ where α is the standard basis of Dm×m (R). What is R?
First part: β = {1, x, x², . . . , x^n}. T(1) = 1, T(x) = 1, and in general T(x^i) = 1 for all i. Thus [T]σβ = [1, . . . , 1], a 1 × (n + 1) row. Naturally, m must equal n + 1 since γ must have the same size as β (this was supposed to be an “obvious” question). Since [S]σγ = [1, . . . , 1], we have that S(ei) = 1 where γ = {e1, . . . , en+1} is the standard basis of R^(n+1). Thus S(x1, . . . , xn+1) = x1 + · · · + xn+1.
Second part. Just like the above, if x1 , . . . , xn+1 are the entries on the diagonal of a matrix A ∈
Dm×m (R), then R(A) = x1 + . . . + xn+1 .
Week 6.
1. What is the dual basis of the standard basis e1 , . . . , en of F n ?
We find fi ∈ L(F^n, F) such that fi(ej) = 1 if i = j and fi(ej) = 0 if i ≠ j. To solve for fi, we have the system of n equations {fi(e1) = 0, . . . , fi(ei−1) = 0, fi(ei) = 1, fi(ei+1) = 0, . . . , fi(en) = 0}, which has the solution fi(x1, . . . , xn) = xi, the projection onto the i’th coordinate. (You should probably write this out in more detail, but I won’t here since the computations are simple.)
2. Let V = P3(R) have the standard ordered basis β = {1, x, . . . , x³}. What is the dual basis of β in V∗? Write the dual basis as operators on f(x) ∈ P3(R).
Just like in the previous problem, when an element p(x) = a0 + · · · + a3x³ ∈ P3 is written as a vector (a0, . . . , a3), we get that fi is the projection map onto the i’th coordinate ai. Thus each fi picks out the i’th coefficient of p(x). As an operator, this can be written as taking the i’th derivative, evaluating at 0, and scaling by 1/i!; that is, fi(p(x)) = p^(i)(0)/i!.
Note: we need p^(i)(0) so that we can isolate the coefficient that corresponds to x^i.
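For a concrete check: if p(x) = a0 + a1x + a2x² + a3x³, then p″(x) = 2a2 + 6a3x, so f2(p(x)) = p″(0)/2! = 2a2/2 = a2, as desired.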
3. Let V = P2(R) with some ordered basis γ. Suppose the basis of V∗ is
φ1(f(t)) = ∫₀¹ f(t) dt
φ2(f(t)) = f′(1)
φ3(f(t)) = f(0)
Find the ordered basis γ that corresponds to the ordered basis φ1, φ2, φ3.
Let γ1 = a0 + a1x + a2x², γ2 = b0 + b1x + b2x², γ3 = c0 + c1x + c2x² be the ordered basis γ. Since we know φi(γj) = δij, we get the following 9 equations:
φ1(γ1) = a0 + a1/2 + a2/3 = 1    φ2(γ1) = a1 + 2a2 = 0    φ3(γ1) = a0 = 0
φ1(γ2) = b0 + b1/2 + b2/3 = 0    φ2(γ2) = b1 + 2b2 = 1    φ3(γ2) = b0 = 0
φ1(γ3) = c0 + c1/2 + c2/3 = 0    φ2(γ3) = c1 + 2c2 = 0    φ3(γ3) = c0 = 1
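Carrying the computation to the end (the original stops at the nine equations): the third column gives a0 = 0, b0 = 0, c0 = 1; substituting into the remaining equations and solving gives a2 = −3/2, a1 = 3, then b2 = 3/4, b1 = −1/2, then c2 = 3/2, c1 = −3. Thus
γ1 = 3x − (3/2)x², γ2 = −(1/2)x + (3/4)x², γ3 = 1 − 3x + (3/2)x²,
and one can verify directly that φi(γj) = δij for all nine pairs.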
Prove or give a counterexample: if
V = U1 ⊕ W and V = U2 ⊕ W,
then U1 = U2.
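This is false. One counterexample (a sketch, since the original leaves this blank): V = R², W = Span((1, 0)), U1 = Span((0, 1)), U2 = Span((1, 1)). Then V = U1 ⊕ W = U2 ⊕ W, but U1 ≠ U2.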
v1 − v2 , v2 − v3 , v3 − v4 , v4
5. Let T1 , . . . , Tn ∈ L(P(R), P(R)), where Ti (f (x)) is the i’th derivative of f . Show that T1 , . . . , Tn is
linearly independent.
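One possible approach (a sketch, not the original author’s solution): suppose c1T1 + · · · + cnTn = 0. Applying this operator to f(x) = x^n gives
c1 · n x^(n−1) + c2 · n(n − 1) x^(n−2) + · · · + cn · n! = 0.
The powers x^(n−1), . . . , x^0 are distinct, so every coefficient ci · n!/(n − i)! must be 0, forcing c1 = · · · = cn = 0.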
6. Let T1 , . . . , Tn ∈ L(P(R), P(R)), where Ti (f (x)) = xi f (x). Show that T1 , . . . , Tn is linearly indepen-
dent.
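One possible approach (a sketch): suppose c1T1 + · · · + cnTn = 0. Applying it to the constant polynomial f(x) = 1 gives
c1x + c2x² + · · · + cnx^n = 0,
and a polynomial is the zero polynomial only if all its coefficients vanish, so c1 = · · · = cn = 0.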
7. Suppose S, T ∈ L(V ). Prove ST is invertible if and only if both S and T are invertible.
8. Suppose W is finite dimensional and T1 , T2 ∈ L(V, W ). Prove that N (T1 ) = N (T2 ) if and only if
there exists an invertible operator S ∈ L(W ) such that T1 = S(T2 ).
11. Let c, d be scalars, and let V be a vector space of dimension n > 0 with ordered basis β1 , . . . , βn .
Let f1 , . . . , fn be the corresponding ordered basis of V ∗ . Let u = cβ1 + β2 ∈ V , v = β1 + dβ2 ∈ V ,
and g = cf1 + f2 ∈ V ∗ , h = f1 + df2 ∈ V ∗ . Show g and h must be linearly independent if u and v
are.
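One way to see this (a sketch): with respect to the basis β1, . . . , βn, the coordinate vectors of u and v are (c, 1, 0, . . . , 0) and (1, d, 0, . . . , 0), and with respect to f1, . . . , fn the coordinate vectors of g and h are exactly the same. Two such vectors are linearly independent exactly when cd − 1 ≠ 0 (the 2 × 2 coefficient matrix is invertible). So if u and v are linearly independent, then cd − 1 ≠ 0, and hence g and h are linearly independent as well.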