Selected Solutions To Hoffman and Kunze's Linear Algebra Second Edition
Selected Solutions To Hoffman and Kunze's Linear Algebra Second Edition
Greg Kikola
https://github.com/gkikola/sol-hoffman-kunze
Contents
Preface v
1 Linear Equations 1
1.2 Systems of Linear Equations . . . . . . . . . . . . . . . . . . . . . 1
1.3 Matrices and Elementary Row Operations . . . . . . . . . . . . . 5
1.4 Row-Reduced Echelon Matrices . . . . . . . . . . . . . . . . . . . 11
1.5 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6 Invertible Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2 Vector Spaces 27
2.1 Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2 Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.3 Bases and Dimension . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4 Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.6 Computations Concerning Subspaces . . . . . . . . . . . . . . . . 50
3 Linear Transformations 55
3.1 Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . 55
3.2 The Algebra of Linear Transformations . . . . . . . . . . . . . . 63
3.3 Isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4 Representation of Transformations by Matrices . . . . . . . . . . 74
3.5 Linear Functionals . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.6 The Double Dual . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.7 The Transpose of a Linear Transformation . . . . . . . . . . . . . 101
iii
iv CONTENTS
Preface
This is an unofficial solution guide to the book Linear Algebra, Second Edition,
by Kenneth Hoffman and Ray Kunze. It is intended for students who are study-
ing linear algebra using Hoffman and Kunze’s text. I encourage students who
use this guide to first attempt each exercise on their own before looking up the
solution, as doing exercises is an essential part of learning mathematics.
In writing this guide, I have avoided using techniques or results before the
point at which they are introduced in the text. My solutions should therefore
be accessible to someone who is reading through Hoffman and Kunze for the
first time.
Given the large number of exercises, errors are unavoidable in a work such
as this. I have done my best to proofread each solution, but mistakes will get
through nonetheless. If you find one, please feel free to tell me about it via
email: [email protected]. I appreciate any corrections or feedback.
Please know that this guide is currently unfinished. I am slowly working on
adding the remaining chapters, but this will be done at my own pace. If you
need a solution to an exercise that I have not included, try typing the problem
statement into a web search engine such as Google; it is likely that someone else
has already posted a solution.
This guide is licensed under the Creative Commons Attribution-ShareAlike
4.0 International License. To view a copy of this license, visit
http://creativecommons.org/licenses/by-sa/4.0/
I am deeply grateful to the authors, Kenneth Hoffman and Ray Kunze, for
producing a well-organized and comprehensive book that is a pleasure to read.
Greg Kikola
www.gregkikola.com
[email protected]
v
vi PREFACE
Chapter 1
Linear Equations
1.2.2 Exercise 2
Let F be the field of complex numbers. Are the following two systems of linear
equations equivalent? If so, express each equation in each system as a linear
combination of the equations in the other system.
x1 − x2 = 0 3x1 + x2 = 0
2x1 + x2 = 0 x1 + x2 = 0
1
2 CHAPTER 1. LINEAR EQUATIONS
Solution. The systems are equivalent. For the first system, we can write
x1 − x2 = (3x1 + x2 ) − 2(x1 + x2 ) = 0,
2x1 + x2 = 21 (3x1 + x2 ) + 12 (x1 + x2 ) = 0.
1.2.3 Exercise 3
Test the following systems of equations as in Exercise 1.2.2.
−x1 + x2 + 4x3 = 0 x1 − x3 = 0
x1 + 3x2 + 8x3 = 0 x2 + 3x3 = 0
1 5
2 x1 + x2 + 2 x3 =0
1.2.4 Exercise 4
Test the following systems as in Exercise 1.2.2.
i
2x1 + (−1 + i)x2 + x4 = 0 1+ x1 + 8x2 − ix3 − x4 = 0
2
2
3x2 − 2ix3 + 5x4 = 0 3 x1 − 21 x2 + x3 + 7x4 = 0
Solution. Call the equations in the system on the left L1 and L2 , and the equa-
tions on the right R1 and R2 . If R1 = aL1 +bL2 then, by equating the coefficients
of x3 , we get
−i = −2ib,
which implies that b = 1/2. By equating the coefficients of x1 , we get
i
1+ = 2a,
2
so that
1 1
a= + i.
2 4
1.2. SYSTEMS OF LINEAR EQUATIONS 3
1 1 5 1
−1 = a + 5b = + i + = 3 + i,
2 4 2 4
which is clearly a contradiction. Therefore the two systems are not equivalent.
1.2.5 Exercise 5
Let F be a set which contains exactly two elements, 0 and 1. Define an addition
and multiplication by the tables:
+ 0 1 · 0 1
0 0 1 0 0 0
1 1 0 1 0 1
Verify that the set F , together with these two operations, is a field.
Solution. From the symmetry in the tables, we see that both operations are
commutative.
By considering all eight possibilities, one can see that (a+b)+c = a+(b+c).
And one may in a similar way verify that (ab)c = a(bc), so that associativity
holds for the two operations.
0 + 0 = 0 and 0 + 1 = 1 so F has an additive identity. Similarly, 1 · 0 = 0
and 1 · 1 = 1 so F has a multiplicative identity.
The additive inverse of 0 is 0 and the additive inverse of 1 is 1. The multi-
plicative inverse of 1 is 1. So F has inverses.
Lastly, by considering the eight cases, one may verify that a(b + c) = ab + ac.
Therefore distributivity of multiplication over addition holds and F is a field.
1.2.7 Exercise 7
Prove that each subfield of the field of complex numbers contains every rational
number.
Proof. Let F be a subfield of C and let r = m/n be any rational number, written
in lowest terms. F must contain 0 and 1, so if r = 0 then we are done. Now
assume r is nonzero.
Since 1 ∈ F , and F is closed under addition, we know that 1 + 1 = 2 ∈ F .
And, if the integer k is in F , then k + 1 is also in F . By induction, we see that
all positive integers belong to F . We also know that all negative integers are
in F because F is closed under additive inverses. So, in particular, m ∈ F and
n ∈ F.
Now F is closed under multiplicative inverses, so n ∈ F implies 1/n ∈ F .
Finally, closure under multiplication shows that m · (1/n) = m/n = r ∈ F .
Since r was arbitrary, we can conclude that all rational numbers are in F .
4 CHAPTER 1. LINEAR EQUATIONS
1.2.8 Exercise 8
Prove that each field of characteristic zero contains a copy of the rational number
field.
Proof. Let F be a field of characteristic zero. Define the map f : Q → F (where
Q denotes the rational numbers) as follows. Let f (0) = 0F and f (1) = 1F ,
where 0F and 1F are the additive and multiplicative identities, respectively, of
F . Given a positive integer n, define f (n) = f (n − 1) + 1F and f (−n) = −f (n).
−1
If a rational number r = m/n is not an integer, define f (r) = f (m) · (f (n)) .
First we show that the function f preserves addition and multiplication. A
simple induction argument will show that, in the case of integers m and n, we
have
f (m + n) = f (m) + f (n) and f (mn) = f (m)f (n).
Now let r1 = m1 /n1 and r2 = m2 /n2 be rational numbers in lowest terms.
Then, by the definition of f ,
Likewise,
or
f (m1 )f (n2 ) = f (m2 )f (n1 ).
This implies
f (m1 n2 ) = f (m2 n1 ).
Now if m1 n2 6= m2 n1 , then this would imply that F does not have characteristic
zero. So m1 n2 = m2 n1 and so r1 = r2 .
What we have shown is that every rational number corresponds to a distinct
element of F , and that the operations of addition and multiplication of rational
numbers is preserved by this correspondence. So F contains a copy of Q.
1.3. MATRICES AND ELEMENTARY ROW OPERATIONS 5
1.3.2 Exercise 2
If
3 −1 2
A = 2 1 1
1 −3 0
find all solutions of AX = 0 by row-reducing A.
Solution. We get
1 − 31 23 1 − 31 2
3 −1 2 3
(1) (2) 5 (1)
2 1 1 −−→ 2 1 1 −−→ 0 − 13 −−→
3
1 −3 0 1 −3 0 0 − 38 − 23
1 − 13 2 3 3
3 1 0 5 1 0 5
1 (2) 1 (1) (2)
0 1 − 5 −−→ 0 1 − 5 −−→ 0 1 − 15 −−→
0 − 83 − 23 0 0 − 65 0 0 1
1 0 0
0 1 0 .
0 0 1
Thus AX = 0 has only the trivial solution.
1.3.3 Exercise 3
If
6 −4 0
A= 4 −2 0
−1 0 3
find all solutions of AX = 2X and all solutions of AX = 3X. (The symbol cX
denotes the matrix each entry of which is c times the corresponding entry of
X.)
6 CHAPTER 1. LINEAR EQUATIONS
or, equivalently,
4x1 − 4x2 =0
4x1 − 4x2 =0
−1x1 + x3 = 0.
B can be row-reduced:
4 −4 0 1 0 −1
4 −4 0 → 0 1 −1 .
−1 0 1 0 0 0
where a is a scalar.
Similarly, the equation AX = 3X can be solved by row-reducing
3 −4 0 1 0 0
4 −5 0 → 0 1 0 .
−1 0 0 0 0 0
where b is a scalar.
1.3.4 Exercise 4
Find a row-reduced matrix which is row-equivalent to
i −(1 + i) 0
A = 1 −2 1 .
1 2i −1
1.3. MATRICES AND ELEMENTARY ROW OPERATIONS 7
0 1+i −1 0 0 0
1.3.5 Exercise 5
Prove that the following two matrices are not row-equivalent:
2 0 0 1 1 2
a −1 0 , −2 0 −1 .
b c 3 1 3 5
We see that this matrix is row-equivalent to the identity matrix. The corre-
sponding system of equations has only the trivial solution.
For the second matrix, we get
1 1 2
1 1 2
1 1 2
(2) (1) (2)
−2 0 −1 − −→ 0 2 3 −−→ 0 1 23 −−→
1 3 5 0 2 3 0 2 3
1
1 1 2 1 0 2
(2)
0 1 32 −−→ 0 1 32 .
0 0 0 0 0 0
1.3.6 Exercise 6
Let
a b
A=
c d
be a 2 × 2 matrix with complex entries. Suppose that A is row-reduced and also
that a + b + c + d = 0. Prove that there are exactly three such matrices.
8 CHAPTER 1. LINEAR EQUATIONS
If A is not the zero matrix, then it has at least one nonzero row. If it has
exactly one nonzero row, then in order to satisfy the given constraints, the
nonzero row will have a 1 in the first column and a −1 in the second. This gives
two possibilities,
1 −1 0 0
A= or A = .
0 0 1 −1
Finally, if A has two nonzero rows, then it must be the identity matrix or the
matrix [ 01 10 ], but neither of these are valid since the sum of the entries is nonzero
in each case. Thus there are only the three possibilities given above.
1.3.7 Exercise 7
Prove that the interchange of two rows of a matrix can be accomplished by a
finite sequence of elementary row operations of the other two types.
Proof. We can, without loss of generality, assume that the matrix has only two
rows, since any additional rows could just be ignored in the procedure that
follows. Let this matrix be given by
a a2 a3 · · · an
A0 = 1 .
b1 b2 b3 · · · bn
We can see that A4 has the same entries as A0 but with the rows interchanged.
And only a finite number of elementary row operations of the first two kinds
were performed.
1.3. MATRICES AND ELEMENTARY ROW OPERATIONS 9
1.3.8 Exercise 8
Consider the system of equations AX = 0 where
a b
A=
c d
Proof. This is clear, since the equation 0x1 + 0x2 = 0 is satisfied for any
(x1 , x2 ) ∈ F 2 (note that in any field, 0x = (1 − 1)x = x − x = 0).
Since ad − bc = 0, the second row of this final matrix is zero, and we see
that there are nontrivial solutions. If we let
then (x1 , x2 ) is a solution if and only if x1 = yx01 and x2 = yx02 for some
y ∈ F.
1.4. ROW-REDUCED ECHELON MATRICES 11
1
3 x1 + 2x2 − 6x3 = 0
−4x1 + 5x3 = 0
−3x1 + 6x2 − 13x3 = 0
− 37 x1 + 2x2 − 8
3 x3 =0
1.4.2 Exercise 2
Find a row-reduced echelon matrix which is row-equivalent to
1 −i
A = 2 2 .
i 1+i
and this last matrix is in row-reduced echelon form. Therefore the homogeneous
system AX = 0 has only the trivial solution x1 = x2 = 0.
12 CHAPTER 1. LINEAR EQUATIONS
1.4.3 Exercise 3
Describe explicitly all 2 × 2 row-reduced echelon matrices.
1.4.4 Exercise 4
Consider the system of equations
x1 − x2 + 2x3 = 1
2x1 + 2x3 = 1
x1 − 3x2 + 4x3 = 2.
Does this system have a solution? If so, describe explicitly all solutions.
From this we see that the original system of equations has solutions. All solu-
tions are of the form
1 1
x1 = −t + , x2 = t − , and x3 = t,
2 2
for some scalar t.
1.4. ROW-REDUCED ECHELON MATRICES 13
1.4.5 Exercise 5
Give an example of a system of two linear equations in two unknowns which
has no solution.
Solution. We can find such a system by ensuring that the coefficients in one
equation are a multiple of the other, while the constant term is not the same
multiple. For example, one such system is
x1 + 2x2 = 3
−3x1 − 6x2 = 5.
1.4.6 Exercise 6
Show that the system
x1 − 2x2 + x3 + 2x4 = 1
x1 + x2 − x3 + x4 = 2
x1 + 7x2 − 5x3 − x4 = 3
has no solution.
0 9 −6 −3 2 0 0 0 0 −1
Since the first nonzero entry in the bottom row of the last matrix is in the right-
most column, the corresponding system of equations has no solution. Therefore
the original system of equations also has no solution.
1.4.7 Exercise 7
Find all solutions of
1.4.8 Exercise 8
Let
3 −1 2
A = 2 1 1 .
1 −3 0
For which triples (y1 , y2 , y3 ) does the system AX = Y have a solution?
Solution. We will perform row-reduction on the augmented matrix:
1 − 31 2 1
3 y1
3 −1 2 y1 3
(1) (2)
2 1 1 y2 −−→ 2 1 1 y2 −−→
1 −3 0 y3 1 −3 0 y3
1 2 1
1 − 31 2 1
1 −3 3 3 y1 3 3 y1
5 (1) (2)
0 − 13 − 23 y1 + y2 −−→ 0 1 − 15 − 52 y1 + 53 y2 −−→
3
0 − 38 − 23 − 13 y1 + y3 0 − 38 − 23 − 31 y1 + y3
3 1 1 3 1 1
1 0 5 5 y1 + 5 y2 1 0 5 5 y1 + 5 y2
(1) (2)
0 1 − 15 − 52 y1 + 35 y2 −−→ 0 1 − 15 − 25 y1 + 35 y2 −−→
0 0 − 65 − 75 y1 + 85 y2 + y3 0 0 1 7 4 5
6 y1 − 3 y2 − 6 y3
1 0 0 − 12 y1 + y2 + 21 y3
0 1 0 − 16 y1 + 13 y2 − 16 y3 .
7 4 5
0 0 1 6 y1 − 3 y2 − 6 y3
Since every row contains a nonzero entry in the first three columns, the system
of equations AX = Y is consistent regardless of the values of y1 , y2 , and y3 .
Therefore AX = Y has a unique solution for any triple (y1 , y2 , y3 ).
1.4. ROW-REDUCED ECHELON MATRICES 15
1.4.9 Exercise 9
Let
3 −6 2 −1
−2 4 1 3
A=
0
.
0 1 1
1 −2 1 0
For which (y1 , y2 , y3 , y4 ) does the system of equations AX = Y have a solution?
Solution. Row-reduction on the augmented matrix gives
2
1 −2 − 13 1
3 y1
3 −6 2 −1 y1 3
−2 4 1 3 y2 (1) −2 4 1 3 y2
(2)
−−→ −−→
0 0 1 1 y3
0 0 1 1 y3
1 −2 1 0 y4 1 −2 1 0 y4
2 1 1 2
1 −2 3 − 3 3 y1 1 −2 − 31 1
3 y1
3
0 0 7 7 2 2 3
3 y1 + y2 (1) 0 0 1 1 7 y1 + 7 y2 (2)
3 3
−−→ −−→
0 0 1 1 y3 0 0 1 1 y3
1 1
0 0 3 3 − 13 y1 + y4 0 0 1
3
1
3 − 31 y1 + y4
1
1 −2 0 −1 7 y1 − 27 y2
2
0 0 1 1 7 y1 + 37 y2
.
− 27 y1 − 73 y2 + y3
0 0 0 0
0 0 0 0 − 37 y1 − 71 y2 + y4
− 72 y1 − 37 y2 + y3 =0
− 37 y1 − 1
7 y2 + y4 = 0.
Solution. We get
3
2 −1 1
ABC = 1 1 −1
1 2 1
−1
3 −3
2 −1 1 4 −4
= 1 −1 = ,
1 2 1 4 −4
−1 1
and
3
2 −1 1
CAB = 1 −1 1
1 2 1
−1
4
= 1 −1 = 0 .
4
1.5.2 Exercise 2
Let
1 −1 1 2 −2
A = 2 0 1 , B = 1 3 .
3 0 1 4 4
Solution. We have
1 −1 1 1 −1 1 2 −2
A(AB) = 2 0 1 2 0 1 1 3
3 0 1 3 0 1 4 4
1 −1 1 5 −1 7 −3
= 2 0 1 8 0 = 20 −4 ,
3 0 1 10 −2 25 −5
1.5. MATRIX MULTIPLICATION 17
and
2
1 −1 1 2 −2
A2 B = 2 0 1 1 3
3 0 1 4 4
2 −1 1 2 −2
= 5 −2 3 1 3
6 −3 4 4 4
7 −3
= 20 −4 .
25 −5
So A(AB) = A2 B as expected.
1.5.3 Exercise 3
Find two different 2 × 2 matrices A such that A2 = 0 but A 6= 0.
Solution. Two possibilities are
0 1 0 0
and .
0 0 1 0
Both of these are nonzero matrices that satisfy A2 = 0.
1.5.4 Exercise 4
For the matrix A of Exercise 1.5.2, find elementary matrices E1 , E2 , . . . , Ek such
that
Ek · · · E2 E1 A = I.
Solution. We want to reduce
1 −1 1
A = 2 0 1
3 0 1
to the identity matrix. To start, we can use two elementary row operations of
the second kind to get 0 in the bottom two entries of column 1. Performing the
same operations on the identity matrix gives
1 0 0 1 0 0
E1 = −2 1 0 and E2 = 0 1 0 .
0 0 1 −3 0 1
Then
1 −1 1
E2 E1 A = 0 2 −1 .
0 3 −2
Next, we can use a row operation of the first kind to make the central entry into
a 1:
1 0 0 1 −1 1
E3 = 0 21 0 , so that E3 E2 E1 A = 0 1 − 21 .
0 0 1 0 3 −2
18 CHAPTER 1. LINEAR EQUATIONS
so that
1
1 0 2
E5 E4 E3 E2 E1 A = 0 1 − 12 .
0 0 − 12
Then
1
1 0 0
1 0 2
E6 = 0 1 0 so that E6 E5 E4 E3 E2 E1 A = 0 1 − 12 .
0 0 −2 0 0 1
Finally,
− 21
1 0 0 1 0
1
E7 = 0 1 and E8 = 0 1 0 ,
2
0 0 1 0 0 1
which gives
1 0 0
E8 E7 E6 E5 E4 E3 E2 E1 = 0 1 0 = I.
0 0 1
Thus each of E1 , E2 , . . . , E8 are elementary matrices, and they are such that
E8 · · · E2 E1 A = I.
1.5.5 Exercise 5
Let
1 −1
3 1
A = 2 2 and B = .
−4 4
1 0
Is there a matrix C such that CA = B?
Then
1 −1
c1 c2 c3 3 1
2 2 = .
c4 c5 c6 −4 4
1 0
This leads to the following system of equations:
1.5.6 Exercise 6
Let A be an m × n matrix and B an n × k matrix. Show that the columns
of C = AB are linear combinations of the columns of A. If α1 , . . . , αn are the
columns of A and γ1 , . . . , γk are the columns of C, then
n
X
γj = Brj αr .
r=1
Therefore,
− 78
3
− 14 3
1 0 0 8 8
3 −2 3
1
− 14 ,
1
R = 0 1 0 P = 0 − 14 = 2 0 −2 ,
4
8
0 0 1 11 1 1 1 1 2 1
8 8 4 8
and R = P A.
1.6.2 Exercise 2
Do Exercise 1.6.1, but with
2 0 i
A = 1 −3 −i .
i 1 1
1.6. INVERTIBLE MATRICES 21
So
1 0 0 10 1 − 3i 3 − 9i
1
R = 0 1 0 = I, P = 0 −9 − 3i 3 − 9i ,
30
0 0 1 −10i 6 + 2i 18 + 6i
and R = P A.
1.6.3 Exercise 3
For each of the two matrices
2 5 −1 1 −1 2
4 −1 2 , 3 2 4
6 4 1 0 1 −2
1 25 − 21 5
− 21 5
− 12
1 1
2 5 −1 2 2
(1) (2) (2)
4 −1 2 −−→ 4 −1 2 −−→ 0 −11 4 −−→ 0
−11 4 ,
6 4 1 6 4 1 0 −11 4 0 0 0
22 CHAPTER 1. LINEAR EQUATIONS
and we see that the original matrix is not invertible since it is row-equivalent to
a matrix having a row of zeros.
For the second matrix, we get
1 −1 2 1 0 0
3 2 4 , 0 1 0
0 1 −2 0 0 1
1 −1 2 1 0 0
0 5 −2 , −3 1 0
0 1 −2 0 0 1
1 −1 2 1 0 0
0 1 −2 , 0 0 1
0 5 −2 −3 1 0
1 0 0 1 0 1
0 1 −2 , 0 0 1
0 0 8 −3 1 −5
1 0 0 1 0 1
0 1 −2 , 0 0 1
0 0 1 − 3 1 − 58
8 8
1 0 0 1 0 1
0 1 0 , − 34 41 − 14 .
0 0 1 − 38 81 − 58
From this we see that the original matrix is invertible and its inverse is the
matrix
8 0 8
1
−6 2 −2 .
8
−3 1 −5
1.6.4 Exercise 4
Let
5 0 0
A = 1 5 0 .
0 1 5
For which X does there exist a scalar c such that AX = cX?
Solution. Let
x1
X = x2 .
x3
Then AX = cX implies
5x1 = cx1
x1 + 5x2 = cx2
x2 + 5x3 = cx3 ,
1.6. INVERTIBLE MATRICES 23
1.6.5 Exercise 5
Discover whether
1 2 3 4
0 2 3 4
A=
0 0 3 4
0 0 0 4
is invertible, and find A−1 if it exists.
Solution. We proceed in the usual way:
1 2 3 4 1 0 0 0
0 2 3 4 0 1 0 0
0 0 3 4 , 0 0 1
0
0 0 0 4 0 0 0 1
1 2 3 0 1 0 0 −1
0 2 3 0 0 1 0 −1
,
−1
0 0 3 0 0 0 1
1
0 0 0 1 0 0 0 4
1 2 0 0 1 0 −1 0
0 2 0 0 0 1 −1 0
,
0 0 13 − 13
0 0 1 0
1
0 0 0 1 0 0 0 4
1 0 0 0 1 −1 0 0
0 1 0 0 0 1 − 1 0
2 2
, .
1
− 13
0 0 1 0 0 0 3
1
0 0 0 1 0 0 0 4
Thus A is invertible and
1 −1 0 0
1
0 − 21 0
A−1 = 2
.
1
0 0 3 − 13
1
0 0 0 4
24 CHAPTER 1. LINEAR EQUATIONS
1.6.6 Exercise 6
Suppose A is a 2 × 1 matrix and that B is a 1 × 2 matrix. Prove that C = AB
is not invertible.
Proof. Let
a
A= and B = c d
b
so that
ac ad
C = AB = .
bc bd
1.6.7 Exercise 7
Let A be an n × n (square) matrix. Prove the following two statements:
But the product on the right is clearly the n×n zero matrix, so B = 0.
(b) If A is not invertible, then there exists an n×n matrix B such that AB = 0
but B 6= 0.
1.6.8 Exercise 8
Let
a b
A= .
c d
Prove, using elementary row operations, that A is invertible if and only if
ad − bc 6= 0.
1.6.9 Exercise 9
An n × n matrix A is called upper-triangular if Aij = 0 for i > j, that is,
if every entry below the main diagonal is 0. Prove that an upper-triangular
(square) matrix is invertible if and only if every entry on its main diagonal is
different from 0.
First, suppose every entry on the main diagonal of A is nonzero, and consider
the homogeneous linear system AX = 0:
Since Ann is nonzero, the last equation implies that xn = 0. Then, since
An−1,n−1 is nonzero, the second-to-last equation implies that xn−1 = 0. Con-
tinuing in this way, we see that xi = 0 for each i = 1, 2, . . . , n. Therefore the
system AX = 0 has only the trivial solution, hence A is invertible.
Conversely, suppose A is invertible. Then A cannot contain any zero rows,
nor can A be row-equivalent to a matrix with a row of zeros. This implies that
Ann 6= 0. Consider An−1,n−1 . If An−1,n−1 is zero, then by dividing row n by
Ann , and then by adding −An−1,n times row n to row n−1, we see that A is row-
equivalent to a matrix whose (n − 1)st row is all zeros. This is a contradiction,
so An−1,n−1 6= 0. In the same manner, we can show that Aii 6= 0 for each
i = 1, 2, . . . , n. Thus all entries on the main diagonal of A are nonzero.
1.6.11 Exercise 11
Let A be an m×n matrix. Show that by means of a finite number of elementary
row and/or column operations one can pass from A to a matrix R which is both
‘row-reduced echelon’ and ‘column-reduced echelon,’ i.e., Rij = 0 if i 6= j,
Rii = 1, 1 ≤ i ≤ r, Rii = 0 if i > r. Show that R = P AQ, where P is an
invertible m × m matrix and Q is an invertible n × n matrix.
Proof. By Theorem 5, A is row-equivalent to a row-reduced echelon matrix R0 .
And, by the second corollary to Theorem 12, there is an invertible m×m matrix
P such that R0 = P A.
Results that are analogous to Theorems 5 and 12 (with similar proofs) hold
for column-reduced echelon matrices, so there is a matrix R which is column-
equivalent to R0 and an invertible n × n matrix Q such that R = R0 Q. Then
R = P AQ and we see that, through a finite number of elementary row and/or
column operations, A passes to a matrix R that is both row- and column-reduced
echelon.
Chapter 2
Vector Spaces
α = (α1 , α2 , . . . , αn ),
β = (β1 , β2 , . . . , βn ),
and
γ = (γ1 , γ2 , . . . , γn )
α + β = (α1 + β1 , . . . , αn + βn ) = (β1 + α1 , . . . , βn + αn ) = β + α,
0 = (0, 0, . . . , 0),
27
28 CHAPTER 2. VECTOR SPACES
and
2.1.2 Exercise 2
If V is a vector space over the field F , verify that
2.1.3 Exercise 3
If C is the field of complex numbers, which vectors in C 3 are linear combinations
of (1, 0, −1), (0, 1, 1), and (1, 1, 1)?
x1 + x3 = y1
x2 + x3 = y2
−x1 + x2 + x3 = y3 .
2.1. VECTOR SPACES 29
2.1.4 Exercise 4
Let V be the set of all pairs (x, y) of real numbers, and let F be the field of real
numbers. Define
(x, y) + (x1 , y1 ) = (x + x1 , y + y1 )
c(x, y) = (cx, y).
Is V , with these operations, a vector space over the field of real numbers?
Solution. No, V is not a vector space. Most of the conditions are satisfied, but
distributivity over scalar addition fails when y is nonzero:
but
c(x, y) + d(x, y) = (cx, y) + (dx, y) = (cx + dx, 2y).
2.1.5 Exercise 5
On Rn , define two operations
α⊕β =α−β
c · α = −cα.
The operations on the right are the usual ones. Which of the axioms for a vector
space are satisfied by (Rn , ⊕, ·)?
Solution. Commutativity of ⊕ fails, since α−β is not, in general, equal to β −α.
Associativity of ⊕ also fails since, for nonzero γ,
(α − β) − γ 6= α − β + γ = α − (β − γ).
c · (α ⊕ β) = c · (α − β)
= −c(α − β)
= −cα + cβ
and
c · α ⊕ c · β = −cα − (−cβ)
= −cα + cβ,
(c1 + c2 ) · α = −(c1 + c2 )α
= −c1 α − c2 α,
while
c1 · α ⊕ c2 · α = −c1 α − (−c2 α)
= −c1 α + c2 α,
2.1.6 Exercise 6
Let V be the set of all complex-valued functions f on the real line such that
(for all t in R)
f (−t) = f (t).
The bar denotes complex conjugation. Show that V , with the operations
is a vector space over the field of real numbers. Give an example of a function
in V which is not real-valued.
and
2.1.7 Exercise 7
Let V be the set of pairs (x, y) of real numbers and let F be the field of real
numbers. Define
(x, y) + (x1 , y1 ) = (x + x1 , 0)
c(x, y) = (cx, 0).
2.2 Subspaces
2.2.1 Exercise 1
Which of the following sets of vectors α = (a1 , . . . , an ) in Rn are subspaces of
Rn (n ≥ 3)?
(a) all α such that a1 ≥ 0
Solution. This is not a subspace since it is not closed under scalar multi-
plication (take any negative scalar).
Solution. This is not a subspace since it is not closed under vector addi-
tion. For example, (1, 1, 1, . . . ) is in the set, but the sum of this vector
with itself is not.
Solution. This is not a subspace since it is not closed under vector addi-
tion. For example (1, 0, . . . ) and (0, 1, . . . ) are each in the set, but their
sum is not.
2.2.2 Exercise 2
Let V be the (real) vector space of all functions f from R into R. Which of the
following sets of functions are subspaces of V ?
(a) all f such that f (x2 ) = f (x)2
each belong to this set, but their sum f + g does not. Therefore this is
not a subspace.
2.2. SUBSPACES 33
2.2.3 Exercise 3
Is the vector (3, −1, 0, −1) in the subspace of R5 spanned by the vectors
(2, −1, 3, 2), (−1, 1, 1, −3), and (1, 1, 9, −5)?
Solution. The subspace spanned by these three vectors consists of all linear
combinations
x1 (2, −1, 3, 2) + x2 (−1, 1, 1, −3) + x3 (1, 1, 9, −5).
Therefore (3, −1, 0, −1) is in this subspace if and only if the system of equations
2x1 − x2 + x3 = 3
−x1 + x2 + x3 = −1
3x1 + x2 + 9x3 = 0
2x1 − 3x2 − 5x3 = −1
has a solution. However, the augmented matrix can be row-reduced to
2 −1 1 3 1 0 2 0
−1 1 1 −1 0 1 3 0
→ .
3 1 9 0 0 0 0 1
2 −3 −5 −1 0 0 0 0
Therefore, this system of equations has no solution and the vector (3, −1, 0, −1)
is not in the subspace spanned by the other three given vectors.
34 CHAPTER 2. VECTOR SPACES
2.2.4 Exercise 4
Let W be the set of all (x1 , x2 , x3 , x4 , x5 ) in R5 which satisfy
2x1 − x2 + 34 x3 − x4 =0
2
x1 + 3 x3 − x5 = 0
9x1 − 3x2 + 6x3 − 3x4 − 3x5 = 0.
Solution. After performing the necessary elementary row operations, the coef-
ficient matrix becomes
2 −1 43 −1 0 1 0 32
0 −1
2
1 0 0 −1 → 0 1 0 1 −2 .
3
9 −3 6 −3 −3 0 0 0 0 0
So, letting x3 = 3t, x4 = u, and x5 = v, we see that the elements of W have the
form
(v − 2t, 2v − u, 3t, u, v).
2.2.5 Exercise 5
Let F be a field and let n be a positive integer (n ≥ 2). Let V be the vector
space of all n × n matrices over F . Which of the following sets of matrices A in
V are subspaces of V ?
Solution. This cannot be a subspace since the zero matrix is not invertible.
Solution. This is also not a subspace since it is possible for the sum of two
non-invertible matrices to be invertible. For example, in the 2 × 2 case,
the matrices
1 0 0 0
and
0 0 0 1
are not invertible, but their sum is the identity matrix, which is invertible.
Solution. We will assume that the field F has more than two elements.
In that case, this set cannot be a subspace since the identity matrix has
the property that I 2 = I, but the sum of the identity with itself does not
have this property.
2.2.6 Exercise 6
(a) Prove that the only subspaces of R1 are R1 and the zero subspace.
Also
b1
b1 = · a1 ,
a1
and we have a contradiction since β was assumed to not be a scalar mul-
tiple of α. Similarly a2 6= 0 also leads to a contradiction. This shows that
the system of equations above has a solution, so that W = R2 .
Solution. The subspaces of R3 are the zero subspace, the set of all scalar
multiples of a fixed nonzero vector (i.e., a line through the origin), the set
of all linear combinations of two linearly independent vectors (i.e., a plane
through the origin), and R3 itself.
2.2.7 Exercise 7
Let W1 and W2 be subspaces of a vector space V such that the set-theoretic
union of W1 and W2 is also a subspace. Prove that one of the spaces Wi is
contained in the other.
Proof. Let W1 and W2 be as stated, but assume that neither is contained in the
other. Then there is a vector u ∈ W1 such that u 6∈ W2 , and there is a vector
v ∈ W2 such that v 6∈ W1 . Since W1 ∪ W2 is a subspace, u + v ∈ W1 ∪ W2 . Now
either u + v ∈ W1 or u + v ∈ W2 . In the first case, since −u ∈ W1 we must have
(u + v) + (−u) = v ∈ W1 ,
which is a contradiction. But then u + v ∈ W2 leads to a similar contradiction.
Therefore one of the subspaces Wi must be contained in the other.
2.2.8 Exercise 8
Let V be the vector space of all functions from R into R; let Ve be the subset
of even functions,
f (−x) = f (x);
let Vo be the subset of odd functions,
f (−x) = −f (x).
(a) Prove that Ve and Vo are subspaces of V .
Proof. Suppose f and g are even functions. Then for any scalar c,
(cf + g)(−x) = cf (−x) + g(−x)
= cf (x) + g(x)
= (cf + g)(x),
so cf + g is also even and therefore Ve is a subspace of V . Similarly, if f
and g are both odd functions, then
(cf + g)(−x) = cf (−x) + g(−x)
= −cf (x) − g(x)
= −(cf + g)(x),
so Vo is also a subspace.
2.2. SUBSPACES 37
f (x) + f (−x)
g(x) =
2
and let h be the function given by
f (x) − f (−x)
h(x) = .
2
It is clear that g ∈ Ve and h ∈ Vo . Since f (x) = g(x) + h(x) for all x, we
see that V = Ve + Vo .
2.2.9 Exercise 9
Let W1 and W2 be subspaces of a vector space V such that W1 + W2 = V and
W1 ∩ W2 = {0}. Prove that for each vector α in V there are unique vectors α1
in W1 and α2 is W2 such that α = α1 + α2 .
Proof. Since W1 + W2 = V , we may find α1 in W1 and α2 in W2 such that
α = α1 +α2 . Now suppose there is also α3 in W1 and α4 in W2 with α = α3 +α4 .
Then
α1 + α2 = α3 + α4 .
Rearranging, we get
α1 − α3 = α4 − α2 .
But the vector on the left-hand side must belong to W1 , and the vector on the
right-hand side must belong to W2 . Therefore α1 −α3 belongs to the intersection
of W1 and W2 , which implies that α1 − α3 = 0 or α1 = α3 . And α4 = α2 also.
This shows that the vectors α1 and α2 are unique.
38 CHAPTER 2. VECTOR SPACES
c1 α1 + c2 α2 = 0.
2.3.2 Exercise 2
Are the vectors
linearly independent in R4 ?
c1 + 2c2 + c3 + 2c4 = 0
c1 − c2 − c3 + c4 = 0
2c1 − 5c2 − 4c3 + c4 = 0
4c1 + 2c2 + 6c4 = 0.
2.3.3 Exercise 3
Find a basis for the subspace of R4 spanned by the four vectors of Exercise 2.3.2.
2.3. BASES AND DIMENSION 39
2.3.4 Exercise 4
Show that the vectors
form a basis for R3 . Express each of the standard basis vectors as linear com-
binations of α1 , α2 , and α3 .
Solution. Since dim R3 = 3, we need only show that the three vectors are inde-
pendent. Let c1 , c2 , c3 be scalars such that
c1 α1 + c2 α2 + c3 α3 = 0.
7 3 1
(1, 0, 0) = α1 + α2 + α3
10 10 5
1 1 1
(0, 1, 0) = − α1 + α2 − α3
5 5 5
3 3 1
(0, 0, 1) = − α1 + α2 + α3 .
10 10 5
2.3.5 Exercise 5
Find three vectors in R3 which are linearly dependent, and are such that any
two of them are linearly independent.
2.3.6 Exercise 6
Let V be the vector space of all 2 × 2 matrices over the field F . Prove that V
has dimension 4 by exhibiting a basis for V which has four elements.
Proof. We may simply take the standard basis:
1 0 0 1 0 0 0 0
, , , .
0 0 0 0 1 0 0 1
Later, in Exercise 2.3.12, we will prove that this set is a basis for V in the more
general case where V is the space of m × n matrices.
Since V has a basis with four elements, it has dimension 4.
2.3.7 Exercise 7
Let V be the vector space of Exercise 6. Let W1 be the set of matrices of the
form
x −x
y z
and let W2 be the set of matrices of the form
a b
.
−a c
2.3.8 Exercise 8
Again let V be the space of 2 × 2 matrices over F . Find a basis {A1 , A2 , A3 , A4 }
for V such that A2j = Aj for each j.
Solution. Let
1 0 0 0 1 1 0 0
A1 = , A2 = , A3 = , and .
0 0 0 1 0 0 1 1
A simple check will show that A2j = Aj for each j. To show that {A1 , A2 , A3 , A4 }
is a basis for V , we need only show that it spans V (since any spanning set with
four vectors must be linearly independent).
Let
x y
B=
z w
be an arbitrary 2×2 matrix over F . Then we can write B as a linear combination
of A1 , A2 , A3 , A4 as follows:
B = (x − y)A1 + (w − z)A2 + yA3 + zA4 .
Therefore the set {A1 , A2 , A3 , A4 } is indeed a basis for V .
2.3.9 Exercise 9
Let V be a vector space over a subfield F of the complex numbers. Suppose α,
β, and γ are linearly independent vectors in V . Prove that (α + β), (β + γ),
and (γ + α) are linearly independent.
Proof. Let c1 , c2 , and c3 be scalars in F such that
c1 (α + β) + c2 (β + γ) + c3 (γ + α) = 0.
By rearranging, this becomes
(c1 + c3 )α + (c1 + c2 )β + (c2 + c3 )γ = 0.
Since α, β, and γ are linearly independent, we must have
c1 + c3 = 0, c1 + c2 = 0, and c2 + c3 = 0.
But this system of equations has the unique solution (c1 , c2 , c3 ) = (0, 0, 0).
Therefore (α + β), (β + γ), and (γ + α) are linearly independent.
42 CHAPTER 2. VECTOR SPACES
2.3.10 Exercise 10
Let V be a vector space over the field F . Suppose there are a finite number of
vectors α1 , . . . , αr in V which span V . Prove that V is finite-dimensional.
2.3.11 Exercise 11
Let V be the set of all 2 × 2 matrices A with complex entries which satisfy
A11 + A22 = 0.
(a) Show that V is a vector space over the field of real numbers, with the
usual operations of matrix addition and multiplication of a matrix by a
scalar.
This shows that V is closed under matrix addition and scalar multiplica-
tion.
Next, we already know that matrix addition is commutative and associa-
tive. The zero matrix belongs to V , and for any A in V , the matrix −A
is also in V .
The remaining vector space axioms follow from the properties of matrix
addition and scalar multiplication. Therefore V is a vector space.
(c) Let W be the set of all matrices A in V such that A21 = −A12 (the bar
denotes complex conjugation). Prove that W is a subspace of V and find
a basis for W .
2.3.12 Exercise 12
Prove that the space of all m × n matrices over the field F has dimension mn,
by exhibiting a basis for this space.
Proof. Let F m×n denote the space of m × n matrices over F .
For each i and j with 1 ≤ i ≤ m and 1 ≤ j ≤ n, let ij denote the m × n
matrix over F whose ijth entry is 1, with all other entries 0. Let B denote the
set of all ij . We will show that B is a basis for F m×n , so that the dimension of
this space is mn.
First, let
Xm X n
A= cij ij ,
i=1 j=1
where each cij is an arbitrary scalar in F . Then A is the matrix whose ijth entry
is cij . By choosing these scalars appropriately, we see that any m × n matrix
over F can be written as a linear combination of the matrices in B. Therefore
B spans F m×n .
Moreover, A = 0 if and only if each cij = 0, so B is linearly independent.
This shows that B is a basis for F m×n .
2.3.13 Exercise 13
Discuss Exercise 2.3.9, when V is a vector space over the field with two elements
described in Exercise 1.2.5.
Solution. In Exercise 2.3.9, it was stated that the field F should be a subfield
of the complex numbers (in particular, a field with characteristic 0). When this
restriction is taken away, the result does not necessarily hold, as we will now
demonstrate.
Let V be the vector space F 3 , where F is the field with 2 elements. Let
α = (1, 0, 0), β = (0, 1, 0), and γ = (0, 0, 1). We see that α, β, and γ are linearly
independent (in fact they form the standard basis of F 3 ).
Now consider the vectors
α + β = (1, 1, 0), β + γ = (0, 1, 1), and γ + α = (1, 0, 1).
These are not linearly independent, since
(1, 1, 0) + (0, 1, 1) + (1, 0, 1) = (0, 0, 0).
So the result from Exercise 2.3.9 does not hold in this more general setting.
44 CHAPTER 2. VECTOR SPACES
2.3.14 Exercise 14
Let V be the set of real numbers. Regard V as a vector space over the field of
rational numbers, with the usual operations. Prove that this vector space is not
finite-dimensional.
Proof. Assume the contrary, and let {x1 , x2 , . . . , xn } be a finite basis for V .
Then every real number can be expressed as a linear combination
c1 x1 + c2 x2 + · · · + cn xn ,
2.4 Coordinates
2.4.1 Exercise 1
Show that the vectors
form a basis for R4 . Find the coordinates of each of the standard basis vectors
in the ordered basis {α1 , α2 , α3 , α4 }.
Solution. Let
1 0 1 0
1 0 0 0
P =
0
.
1 0 0
0 1 4 2
P is invertible and has inverse
0 1 0 0
0 0 1 0
P −1 = .
1 −1 0 0
−2 2 − 12 1
2
2.4.2 Exercise 2
Find the coordinate matrix of the vector (1, 0, 1) in the basis of C 3 consisting
of the vectors (2i, 1, 0), (2, −1, 1), (0, 1 + i, 1 − i), in that order.
Solution. Let
2i 2 0
P = 1 −1 1 + i .
0 1 1−i
Then
1
− 21 i
2 −i −1
P −1 = − 12 i −1 i .
− 14 + 14 i 1
2 + 12 i 1
Since
1 − 1i −i
−1 1
1 1
−2 − 2i
1 2 2
P −1 0 = − 12 i
1
−1 i 0 = 2i ,
1 −1 + 1i 1 1 1 3 1
4 4 2 + 2i 1 4 + 4i
2.4.3 Exercise 3
Let B = {α1 , α2 , α3 } be the ordered basis for R3 consisting of
What are the coordinates of the vector (a, b, c) in the ordered basis B?
Solution. Let
1 1 1
P = 0 1 0 .
−1 1 0
Then
0 1 −1
P −1 = 0 1 0
1 −2 1
and
a 0 1 −1 a b−c
P −1 b = 0 1 0 b = b .
c 1 −2 1 c a − 2b + c
2.4.4 Exercise 4
Let W be the subspace of C 3 spanned by α1 = (1, 0, i) and α2 = (1 + i, 1, −1).
Solution. Since neither α1 nor α2 is a scalar multiple of the other, the set
{α1 , α2 } is linearly independent. Hence this set is a basis for W .
(b) Show that the vectors β1 = (1, 1, 0) and β2 = (1, i, 1 + i) are in W and
form another basis for W .
(c) What are the coordinates of α1 and α2 in the ordered basis {β1 , β2 } for
W?
2.4. COORDINATES 47
and
3 1 1 1
α2 = + i β1 + − + i β2 .
2 2 2 2
2.4.6 Exercise 6
Let V be the vector space over the complex numbers of all functions from R into
C, i.e., the space of all complex-valued functions on the real line. Let f1 (x) = 1,
f2 (x) = eix , f3 (x) = e−ix .
(a) Prove that f1 , f2 , and f3 are linearly independent.
(b) Let g1 (x) = 1, g2 (x) = cos x, g3 (x) = sin x. Find an invertible 3×3 matrix
P such that
X3
gj = Pij fi .
i=1
2.4.7 Exercise 7
Let V be the (real) vector space of all polynomial functions from R into R of
degree 2 or less, i.e., the space of all functions f of the form
f (x) = c0 + c1 x + c2 x2 .
f (x) = c0 + c1 x + c2 x2
Solution. Let
a + bt + ct2 = 0,
b + 2ct = 0,
c = 0.
Working backward through the equations, we see that a, b, and c must all be 0.
This shows that B is linearly independent.
2.4. COORDINATES 49
a + bt + ct2 = c0 ,
b + 2ct = c1 ,
c = c2 .
This shows that any polynomial of degree 2 or less can be written as a linear
combination of g1 , g2 , and g3 . B is therefore a basis for V .
Moreover, we have also shown that the polynomial f (x) = c0 + c1 x + c2 x2
has coordinates
(c0 − tc1 + t2 c2 , c1 − 2tc2 , c2 )
in the ordered basis {g1 , g2 , g3 }.
50 CHAPTER 2. VECTOR SPACES
c1 α1 + c2 α2 + · · · + cn αn = 0
then AX = 0 as required.
2.6.2 Exercise 2
Let
α1 = (1, 1, −2, 1), α2 = (3, 0, 4, −1), α3 = (−1, 2, 5, 2).
Let
α = (4, −5, 9, −7), β = (3, 1, −4, 4), γ = (−1, 1, 0, 1).
(a) Which of the vectors α, β, γ are in the subspace of R4 spanned by the αi ?
2.6.3 Exercise 3
Consider the vectors in R4 defined by
Find a system of homogeneous linear equations for which the space of solutions
is exactly the subspace of R4 spanned by the three given vectors.
Then a vector ρ in R4 is in the row space of A if and only if it has the form
1 11
ρ= r1 , r2 , r2 − r1 , r2 − 2r1 ,
4 4
ρ = (x1 , x2 , x3 , x4 ),
1
x3 = x2 − x1
4
11
x4 = x2 − 2x1 .
4
1
x1 − x2 + x3 = 0
4
11
2x1 − x2 + x4 = 0.
4
This system of equations is homogeneous and its solution set is precisely the
subspace spanned by α1 , α2 , and α3 .
52 CHAPTER 2. VECTOR SPACES
2.6.4 Exercise 4
In C 3 , let
α1 = (1, 0, −i), α2 = (1 + i, 1 − i, 1), α3 = (i, i, i).
Prove that these vectors form a basis for C 3 . What are the coordinates of the
vector (a, b, c) in this basis?
Solution. Let
1 0 −i
A = 1 + i 1 − i 1 .
i i i
By performing row-reduction, one can verify that A is row-equivalent to the
identity matrix. So A has rank 3 and α1 , α2 , and α3 are linearly independent
and span C 3 , as required to be a basis.
Let the coordinates of (a, b, c) in this basis be (x, y, z). This leads to the
following system of equations.
x + (1 + i)y + iz = a
(1 − i)y + iz = b
−ix + y + iz = c.
With a bit of effort, one may determine this system to have the solution
a+b−2c + 4c−2a−2b i
x 5 5
a+b−2c
y =
5 + 3b−2a−c
5 i.
z 3a−2b−c
− a+b+3c
i
5 5
2.6.5 Exercise 5
Give an explicit description for the vectors
β = (b1 , b2 , b3 , b4 , b5 )
5
in R which are linear combinations of the vectors
α1 = (1, 0, 2, 1, −1), α2 = (−1, 2, −4, 2, 0)
α3 = (2, −1, 5, 2, 1), α4 = (2, 1, 3, 5, 2).
Solution. Performing row-reduction on the augmented matrix
1 −1 2 2 b1
0
2 −1 1 b2
A= 2 −4 5 3 b3
1 2 2 5 b4
−1 0 1 2 b5
produces
2 1
− 61 b4 − 12 b5
1 0 0 0 3 b1 + 2 b2
7 5
0 1 0 0 6 b4 − − 23 b2 − 12 b5
3 b1
3
0 0 1 0
R= 2 b4 − 2b1 − 25 b2 − 12 b5
.
4 3 5 1
0 0 0 1 3 b1 + b − b + b
2 2 6 4 2 5
0 0 0 0 b3 + b2 − 2b1
2.6. COMPUTATIONS CONCERNING SUBSPACES 53
2.6.6 Exercise 6
Let V be the real vector space spanned by the rows of the matrix
3 21 0 9 0
1 7 −1 −2 −1
A= 2 14 0
.
6 1
6 42 −1 13 0
(a) Find a basis for V .
0 0 0 0 0
The three nonzero rows ρ1 , ρ2 , and ρ3 of R form a basis for V .
2.6.7 Exercise 7
Let A be an m × n matrix over the field F , and consider the system of equations
AX = Y . Prove that this system of equations has a solution if and only if the
row rank of A is equal to the row rank of the augmented matrix of the system.
Proof. Let R be the row-reduced echelon matrix that is row-equivalent to A.
Form the augmented matrix A0 and let R0 be the row-reduced echelon matrix
row-equivalent to A0 . Then the nonzero rows of R form a basis for the row space
of A, and the nonzero rows of R0 form a basis for the row space of A0 . We want
to show that these bases have the same number of elements.
By the nature of the process of row reduction, it must be that the first n
columns of R0 will be identical to the n columns of R. Consequently, R0 cannot
have fewer nonzero rows than R, as any nonzero row of R must correspond to a
nonzero row in R0 . However, it might be possible for R0 to have more nonzero
rows than R. Such nonzero rows would need to have zeros in every column
except the last. But then such a row would indicate that the system AX = Y
has no solutions, which we know to be false. Therefore A and A0 have the same
row rank.
Now let us consider the converse. Let R and R0 be as before, and suppose
that the row ranks of A and A0 are equal. If AX = Y has no solutions, then R0
would necessarily have a row consisting of zeros in every column but the last.
But then the corresponding row in R would have only zero entries, resulting in
R0 having a larger row rank than R. This is impossible, so AX = Y must have
a solution.
Chapter 3
Linear Transformations
(a) T (x1 , x2 ) = (1 + x1 , x2 );
but
T ((1, 0) + (2, 0)) = T (3, 0) = (9, 0).
55
56 CHAPTER 3. LINEAR TRANSFORMATIONS
3.1.2 Exercise 2
Find the range, rank, null space, and nullity for the zero transformation and
the identity transformation on a finite-dimensional space V .
3.1.3 Exercise 3
Describe the range and the null space for the differentiation transformation of
Example 2. Do the same for the integration transformation of Example 5.
Solution. Let V be the space of polynomial functions from F into F and let D
be the differentiation transformation. Given a polynomial
f (x) = c0 + c1 x + · · · + ck xk ,
we can always find another polynomial g(x) such that (Dg)(x) = f (x), namely
the polynomial
1 1
g(x) = c0 x + c1 x2 + · · · + ck xk+1 .
2 k+1
3.1. LINEAR TRANSFORMATIONS 57
f (x) = c1 x + c2 x2 + · · · + ck xk .
Then the function g(x) = (Df )(x) is such that (T g)(x) = f (x). So we see that
the range of T is the space of polynomials with zero constant term.
Lastly, if f is a polynomial such that (T f )(x) = 0, then f must be the zero
polynomial. The null space of T is therefore the trivial subspace {0}.
3.1.4 Exercise 4
Is there a linear transformation T from R3 into R2 such that T (1, −1, 1) = (1, 0)
and T (1, 1, 1) = (0, 1)?
1
− 12 y2
1 1 1 y1
1 0 0 2 y3
1
−1 1 0 y2 → 0 1 0 2 y2 + 12 y3 .
1 1 0 y3 0 0 1 y1 − y3
So we may write
1 1
ρ = (b1 , b2 , b3 ) = (b3 − b2 )α + (b2 + b3 )β + (b1 − b3 )γ.
2 2
Now suppose the transformation T is such that T (γ) = (x1 , x2 ), for some x1
58 CHAPTER 3. LINEAR TRANSFORMATIONS
1 1 1 1
T (b1 , b2 , b3 ) = b1 − b2 , b1 + b2 .
2 2 2 2
By picking different values for x1 and x2 , we see that there are infinitely many
possibilities for T .
3.1.5 Exercise 5
If
α1 = (1, −1), β1 = (1, 0)
α2 = (2, −1), β2 = (0, 1)
α3 = (−3, 2), β3 = (1, 1)
is there a linear transformation T from R2 into R2 such that T αi = βi for i = 1,
2, and 3?
Solution. No. To see why, observe that
(−3, 2) = −(1, −1) − (2, −1).
We have
T (−α1 − α2 ) = (1, 1)
but
−T (α1 ) − T (α2 ) = −(1, 0) − (0, 1) = (−1, −1).
So T cannot be a linear transformation.
3.1.6 Exercise 6
Describe explicitly the linear transformation T from F 2 into F 2 such that
T 1 = (a, b), T 2 = (c, d).
Solution. For any (x1 , x2 ) in F 2 , we have
T (x1 , x2 ) = T (x1 1 + x2 2 )
= x1 T (1 ) + x2 T (2 )
= x1 (a, b) + x2 (c, d)
= (x1 a + x2 c, x1 b + x2 d).
3.1. LINEAR TRANSFORMATIONS 59
3.1.7 Exercise 7
Let F be a subfield of the complex numbers and let T be the function from F 3
into F 3 defined by
and
x1 − x2 + 2x3 = a
2x1 + x2 =b
−x1 − 2x2 + 2x3 = c.
From this latter matrix, we see that this system of equations has a solution
if and only if
−a + b + c = 0.
60 CHAPTER 3. LINEAR TRANSFORMATIONS
has a row rank (and thus column rank) of 2. But the column space of A
is precisely the range of T , so we may conclude that T has rank 2.
(c) What are the conditions on a, b, and c that (a, b, c) be in the null space
of T ? What is the nullity of T ?
a − b + 2c = 0
2a + b =0
−a − 2b + 2c = 0.
3.1.8 Exercise 8
Describe explicitly a linear transformation from R3 into R3 which has as its
range the subspace spanned by (1, 0, −1) and (1, 2, 2).
Solution. Let {1 , 2 , 3 } denote the standard ordered basis for R3 . Theorem 1
allows us to find infinitely many linear transformations satisfying the given
criterion. For example, we may take some linear combination of the two given
vectors, say (2, 2, 1), and then look for a linear transformation T such that
does the job. The range of T is precisely the subspace of R3 spanned by (1, 0, −1)
and (1, 2, 2). Of course, as noted, there are infinitely many other transformations
that would work.
3.1. LINEAR TRANSFORMATIONS 61
3.1.9 Exercise 9
Let V be the vector space of all n × n matrices over the field F , and let B be a
fixed n × n matrix. If
T (A) = AB − BA
3.1.10 Exercise 10
Let V be the set of all complex numbers regarded as a vector space over the
field of real numbers (usual operations). Find a function from V into V which
is a linear transformation on the above vector space, but which is not a linear
transformation on C 1 , i.e., which is not complex linear.
Solution. Define T (a + bi) = a. Then for any real number c and any complex
numbers z = a1 + b1 i and w = a2 + b2 i, we have
iT (1) = i 6= 0 = T (i),
so T is not linear on C 1 .
3.1.11 Exercise 11
Let V be the space of n × 1 matrices over F and let W be the space of m × 1
matrices over F . Let A be a fixed m × n matrix over F and let T be the linear
transformation from V into W defined by T (X) = AX. Prove that T is the
zero transformation if and only if A is the zero matrix.
Proof. First suppose that T is the zero transformation. Then AX = 0 for all X
in V . In particular, let X = j , where j is the column vector whose jth entry
is 1 and all other entries zero. Then AX = 0 implies that the jth column of A
has only zero entries. Since this is true for all j with 1 ≤ j ≤ n, we see that A
is the m × n zero matrix.
Conversely, let A be the zero matrix. Then AX = 0 for all X so T is clearly
the zero transformation.
62 CHAPTER 3. LINEAR TRANSFORMATIONS
3.1.12 Exercise 12
Let V be an n-dimensional vector space over the field F and let T be a linear
transformation from V into V such that the range and null space of T are
identical. Prove that n is even. (Can you give an example of such a linear
transformation T ?)
Solution. This result follows directly from Theorem 2: if the rank of T is k,
then the nullity is also k and we have
k + k = n,
T is a linear transformation. And both the range and the null space of T is the
x-axis.
3.1.13 Exercise 13
Let V be a vector space and T a linear transformation from V into V . Prove
that the following two statements about T are equivalent.
(a) The intersection of the range of T and the null space of T is the zero
subspace of V .
(b) If T (T α) = 0, then T α = 0.
Proof. Assume that (a) is true. If T (T α) = 0, then T α belongs to the null space
of T . But T α is also in the range of T , so T α = 0 by assumption.
Conversely, assume that (b) holds. Let β belong to the intersection of the
range of T with the null space of T . Then T (β) = 0 and there is some α in V
such that T (α) = β. Then T (T α) = T (β) = 0, so that T α = 0 by assumption.
But T α = β, so β = 0. Therefore the specified intersection is the zero subspace
and (a) holds.
3.2. THE ALGEBRA OF LINEAR TRANSFORMATIONS 63
(b) Give rules like the ones defining T and U for each of the transformations
(U + T ), U T , T U , T 2 , U 2 .
Solution. We have
(U + T )(x1 , x2 ) = (x1 + x2 , x1 ),
U T (x1 , x2 ) = (x2 , 0),
T U (x1 , x2 ) = (0, x1 ),
T 2 (x1 , x2 ) = (x1 , x2 ),
and
3.2.2 Exercise 2
Let T be the (unique) linear operator on C 3 for which
Is T invertible?
or
z1 + z3 i = 0
z2 + z3 = 0
z1 i + z2 = 0.
This system of equations has infinitely many solutions, each of the form
Therefore the null space of T is not {0}, so T is singular and not invertible.
64 CHAPTER 3. LINEAR TRANSFORMATIONS
3.2.3 Exercise 3
Let T be the linear operator on R3 defined by
Is T invertible? If so, find a rule for T −1 like the one which defines T .
By Theorem 9, T is invertible.
Let α = (y1 , y2 , y3 ) in R3 be such that T α = (x1 , x2 , x3 ). Then
3y1 = x1
y1 − y2 = x2
2y1 + y2 + y3 = x3 .
So we have
−1 1 1
T (x1 , x2 , x3 ) = x1 , x1 − x2 , x3 − x1 + x2 .
3 3
3.2.4 Exercise 4
For the linear operator T of Exercise 3.2.3, prove that
(T 2 − I)(T − 3I) = 0.
Proof. We have
so
(T 2 − I)(x1 , x2 , x3 ) = (8x1 , x1 , 8x1 ).
Also,
(T − 3I)(x1 , x2 , x3 ) = (0, x1 − 4x2 , 2x1 + x2 − 2x3 ).
Consequently,
3.2.5 Exercise 5
Let C 2×2 be the complex vector space of 2 × 2 matrices with complex entries.
Let
1 −1
B=
−4 4
and let T be the linear operator on C 2×2 defined by T (A) = BA. What is the
rank of T ? Can you describe T 2 ?
Solution. Let
a b
A=
c d
3.2.6 Exercise 6
Let T be a linear transformation from R3 into R2 , and let U be a linear trans-
formation from R2 into R3 . Prove that the transformation U T is not invertible.
Generalize the theorem.
Solution. We will state and prove the more general result directly. Let V and
W be finite-dimensional vector spaces over the same field F , and suppose that
dim V > dim W . Let T be a linear transformation from V into W and let U
be a linear transformation from W into V . Then the transformation U T is not
invertible, as we will now show.
First, since the rank of T is at most dim W < dim V , it follows that the
nullity of T is greater than 0. Thus T is not one to one, and there are distinct
vectors α and β in V such that T α = T β. Then we have
3.2.7 Exercise 7
Find two linear operators T and U on R2 such that T U = 0 but U T 6= 0.
Solution. Let T and U be given by
T (x1 , x2 ) = (x1 , 0) and U (x1 , x2 ) = (0, x1 ).
Then T U (x1 , x2 ) = T (0, x1 ) = (0, 0) as required. We also have U T 6= 0 since
U T (1, 1) = U (1, 0) = (0, 1) 6= (0, 0).
3.2.8 Exercise 8
Let V be a vector space over the field F and T a linear operator on V . If T 2 = 0,
what can you say about the relation of the range of T to the null space of T ?
Give an example of a linear operator T on R2 such that T 2 = 0 but T 6= 0.
Solution. Let β be in the range of T . Then there is an α in V such that T α = β.
But then
T β = T (T α) = T 2 (α) = 0,
so β is in the null space of T . This shows that the range of T is contained in
the null space of T .
On R2 , define T by
T (x1 , x2 ) = (x2 , 0).
Then
T 2 (x1 , x2 ) = T (x2 , 0) = (0, 0),
so T 2 = 0 but T 6= 0.
3.2.9 Exercise 9
Let T be a linear operator on the finite-dimensional space V . Suppose there
is a linear operator U on V such that T U = I. Prove that T is invertible
and U = T −1 . Give an example which shows that this is false when V is not
finite-dimensional.
Solution. Let α be in the null space of U , i.e., let U α = 0. Then
T U (α) = T (U α) = T (0) = 0.
Thus α is in the null space of T U . But T U = I, so this implies that α = 0.
This shows that U is non-singular. By Theorem 9, U is invertible.
Since T U = I, we have by the associativity of function composition that
U −1 = (T U )U −1 = T (U U −1 ) = T.
But if T = U −1 , then by definition T is invertible and U = T −1 .
To show that the original statement is not true when we remove the require-
ment that V be finite-dimensional, let V be the space of polynomial functions
over F , where F has characteristic zero. Let T = D, the differentiation opera-
tor, and let U = E, the integration operator, as defined in Example 11. Then T
and U are linear operators on V such that T U = I, but T is not invertible since
the differentiation operator is singular (its null space consists of all constant
functions).
3.2. THE ALGEBRA OF LINEAR TRANSFORMATIONS 67
3.2.10 Exercise 10
Let A be an m×n matrix with entries in F and let T be the linear transformation
from F n×1 into F m×1 defined by T (X) = AX. Show that if m < n it may
happen that T is onto without being non-singular. Similarly, show that if m > n
we may have T non-singular but not onto.
Solution. Let B = {1 , . . . , n } be the standard ordered basis for F n×1 and let
B 0 = {01 , . . . , 0m } be the standard ordered basis for F m×1 .
Now suppose m < n. Let A be the m × n matrix whose jth column, for
1 ≤ j ≤ m, is 0j , and whose remaining columns are zero. Then the linear
transformation T (X) = AX is such that
and
T (j ) = 0 for m < j ≤ n.
3.2.11 Exercise 11
Let V be a finite-dimensional vector space and let T be a linear operator on V .
Suppose that rank(T 2 ) = rank(T ). Prove that the range and null space of T
are disjoint, i.e., have only the zero vector in common.
Proof. Note that the null space of T is contained in the null space of T 2 , since
if T α = 0 then T 2 (α) = T (0) = 0. But T and T 2 have the same rank, so by
Theorem 2 they must have the same nullity. So any basis for the null space of T
must also be a basis for the null space of T 2 . It follows that the two null spaces
are exactly equal.
Now let β be in the intersection of the range and null space of T . Then there
is an α in V with T α = β. This implies that
T 2 (α) = T β = 0,
so α is in the null space of T 2 . But T and T 2 have the same null space, so α is
in the null space of T . Hence
β = T α = 0.
This shows that the intersection of the range of T and the null space of T is
precisely the set {0}.
68 CHAPTER 3. LINEAR TRANSFORMATIONS
3.2.12 Exercise 12
Let p, m, and n be positive integers and F a field. Let V be the space of m × n
matrices over F and W the space of p × n matrices over F . Let B be a fixed
p × m matrix and let T be the linear transformation from V into W defined
by T (A) = BA. Prove that T is invertible if and only if p = m and B is an
invertible m × m matrix.
Proof. First assume that p = m (so that V = W ) and that B is invertible.
Define the linear transformation U from V into V by U (A) = B −1 A. Then for
any A in V , we have
and
U T (A) = U (BA) = B −1 (BA) = (B −1 B)A = A.
This shows that T U = U T = I, so by definition T is invertible and T −1 = U .
The first half of the proof is complete.
Next, for the converse, assume that T is invertible. Then T is non-singular,
so the nullity of T is 0. By Theorem 2, we have
3.3 Isomorphism
3.3.1 Exercise 1
Let V be the set of complex numbers and let F be the field of real numbers.
With the usual operations, V is a vector space over F . Describe explicitly an
isomorphism of this space onto R2 .
Solution. Define the map T from V into R2 by
and it is onto since (a, b) is evidently in the range of T for all a, b in R. Therefore
T is an isomorphism and V and R2 are isomorphic.
3.3.2 Exercise 2
Let V be a vector space over the field of complex numbers, and suppose there is
an isomorphism T of V onto C 3 . Let α1 , α2 , α3 , α4 be vectors in V such that
−2x1 − x2 = 1
(1 + i)x1 + x2 = 0
x2 = i.
1 0 − 21 − 12 i
−2 −1 1
1 + i 1 0 → 0 1 i ,
0 1 i 0 0 0
so
1 1
T α1 = − − i T α2 + iT α3 .
2 2
70 CHAPTER 3. LINEAR TRANSFORMATIONS
(b) Let W1 be the subspace spanned by α1 and α2 , and let W2 be the subspace
spanned by α3 and α4 . What is the intersection of W1 and W2 ?
(c) Find a basis for the subspace of V spanned by the four vectors αj .
3.3.3 Exercise 3
Let W be the set of all 2 × 2 complex Hermitian matrices, that is, the set
of 2 × 2 complex matrices A such that Aij = Aji (the bar denoting complex
conjugation). As we pointed out in Example 6 of Chapter 2, W is a vector space
over the field of real numbers, under the usual operations. Verify that
t + x y + iz
(x, y, z, t) →
y − iz t − x
is an isomorphism of R4 onto W .
Proof. Denote this mapping by T . Then for any
α = (x1 , y1 , z1 , t1 ) and β = (x2 , y2 , z2 , t2 )
in R4 and any c in R, we have
T (cα + β) = T (cx1 + x2 , cy1 + y2 , cz1 + z2 , ct1 + t2 )
(ct1 + t2 ) + (cx1 + x2 ) (cy1 + y2 ) + i(cz1 + z2 )
=
(cy1 + y2 ) − i(cz1 + z2 ) (ct1 + t2 ) − (cx1 + x2 )
t + x1 y1 + iz1 t + x2 y2 + iz2
=c 1 + 2
y1 − iz1 t1 − x1 y2 − iz2 t2 − x2
= cT α + T β.
This shows that T is a linear transformation.
Next, if T α = T β then t1 + x1 = t2 + x2 and t1 − x1 = t2 − x2 , which
together imply that t1 = t2 and x1 = x2 . Similarly, y1 + iz1 = y2 + iz2 implies
that y1 = y2 and z1 = z2 . Therefore T is one to one.
Finally, let
a b + ci
A=
b − ci d
be any 2 × 2 Hermitian matrix. Then we see that
1 1 1 1 a b + ci
T a + d, b, c, a − d = = A,
2 2 2 2 b − ci d
so T is onto. This shows that T is an isomorphism and R4 is isomorphic to
W.
3.3.4 Exercise 4
Show that F m×n is isomorphic to F mn .
Proof. An obvious isomorphism is the map T from F m×n onto F mn given by
A11 A12 · · · A1n
A21 A22 · · · A2n
T .
.. .. ..
.. . . .
Am1 Am2 · · · Amn
= (A11 , A12 , . . . , A1n , A21 , A22 , . . . , A2n , . . . , Am1 , Am2 , . . . , Amn ).
That is, the jth coordinate of T (A) is the jth entry of A when the entries are
ordered from left-to-right and then top-to-bottom. It should be evident that T
is a linear transformation that is both one to one and onto.
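For what it is worth, this is exactly the row-major flattening that numerical libraries perform; a small illustration in Python with NumPy (my own example, not part of the solution):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])     # a 2 x 3 matrix
    print(A.flatten())            # [1 2 3 4 5 6], its image in F^(mn) under T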
3.3.5 Exercise 5
Let V be the set of complex numbers regarded as a vector space over the field
of real numbers. We define a function T from V into the space of 2 × 2 real
matrices, as follows. If z = x + iy with x and y real numbers, then
T(z) = \begin{pmatrix} x + 7y & 5y \\ -10y & x - 7y \end{pmatrix}.
(a) Verify that T is a one-one (real) linear transformation of V into the space
of 2 × 2 real matrices.
3.3.6 Exercise 6
Let V and W be finite-dimensional vector spaces over the field F . Prove that
V and W are isomorphic if and only if dim V = dim W .
Proof. First suppose that V and W are isomorphic via the isomorphism T .
Suppose V has dimension n and let {α1 , . . . , αn } be a basis for V . Then by
Theorem 8, the set {T α1 , . . . , T αn } is linearly independent and thus forms a
basis for the range of T . But T is onto, so the range of T is W . Therefore
dim W = n as required.
Conversely, suppose dim V = dim W = n. By Theorem 10, V and W are
both isomorphic to F n . Thus V is isomorphic to W and the proof is complete.
3.3.7 Exercise 7
Let V and W be vector spaces over the field F and let U be an isomorphism of V
onto W . Prove that T → U T U −1 is an isomorphism of L(V, V ) onto L(W, W ).
Proof. Let S denote the stated map from L(V, V ) to L(W, W ). If c is in F , then
S(cT1 + T2 ) = U (cT1 + T2 )U −1
= (cU T1 + U T2 )U −1
= cU T1 U −1 + U T2 U −1
= cS(T1 ) + S(T2 ),
so S is a linear transformation.
Next, suppose S(T1 ) = S(T2 ). That is, U T1 U −1 = U T2 U −1 . Then
T1 = (U −1 U )T1 (U −1 U )
= U −1 (U T1 U −1 )U
= U −1 (U T2 U −1 )U
= (U −1 U )T2 (U −1 U )
= T2 ,
so S is one to one. Finally, let T be any linear operator on W . Then U −1 T U is a linear operator on V , and
S(U −1 T U ) = U (U −1 T U )U −1 = (U U −1 )T (U U −1 ) = T,
so S is onto. Therefore S is an isomorphism of L(V, V ) onto L(W, W ).
Solution. We have
Now let
P = \begin{pmatrix} 1 & -i \\ i & 2 \end{pmatrix}.
Then
P^{-1} = \begin{pmatrix} 2 & i \\ -i & 1 \end{pmatrix}
and we get
[(1, 0)]_{B'} = P^{-1} [(1, 0)]_B = \begin{pmatrix} 2 & i \\ -i & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ -i \end{pmatrix}.
Of course, the zero vector has the same coordinates in every basis, so we
see that the matrix of T relative to the pair B, B' is
\begin{pmatrix} 2 & 0 \\ -i & 0 \end{pmatrix}.
Solution. We have
[T]_{B'} = P^{-1} [T]_B P = \begin{pmatrix} 2 & i \\ -i & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & -i \\ i & 2 \end{pmatrix} = \begin{pmatrix} 2 & -2i \\ -i & -1 \end{pmatrix}.
The matrix which changes coordinates from the ordered basis {α2 , α1 } to B' is given by
P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
so
[T]_{\{α_2, α_1\}} = P^{-1} [T]_{B'} P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 2 & -2i \\ -i & -1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} -1 & -i \\ -2i & 2 \end{pmatrix}.
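These change-of-basis computations are easy to confirm numerically. A short Python check (my own sketch using NumPy; I write Q for the second change-of-coordinates matrix to avoid reusing the name P):

    import numpy as np

    T_std = np.array([[1, 0], [0, 0]], dtype=complex)    # [T] in the basis B
    P = np.array([[1, -1j], [1j, 2]], dtype=complex)     # columns are the coordinates of B'
    P_inv = np.linalg.inv(P)

    print(np.round(P_inv, 10))                            # expected [[2, i], [-i, 1]]
    print(np.round(P_inv @ np.array([[1], [0]]), 10))     # expected [[2], [-i]]
    T_Bp = P_inv @ T_std @ P
    print(np.round(T_Bp, 10))                             # expected [[2, -2i], [-i, -1]]

    Q = np.array([[0, 1], [1, 0]], dtype=complex)         # swaps the two basis vectors
    print(np.round(np.linalg.inv(Q) @ T_Bp @ Q, 10))      # expected [[-1, -i], [-2i, 2]]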
3.4.2 Exercise 2
Let T be the linear transformation from R3 into R2 defined by
T (x1 , x2 , x3 ) = (x1 + x2 , 2x3 − x1 ).
(a) If B is the standard ordered basis for R3 and B 0 is the standard ordered
basis for R2 , what is the matrix of T relative to the pair B, B 0 ?
Solution. Since
T (1, 0, 0) = (1, −1), T (0, 1, 0) = (1, 0), and T (0, 0, 1) = (0, 2),
the matrix of T relative to the pair B, B' is
\begin{pmatrix} 1 & 1 & 0 \\ -1 & 0 & 2 \end{pmatrix}.
Solution. We have
and
3.4.3 Exercise 3
Let T be a linear operator on F n , let A be the matrix of T in the standard
ordered basis for F n , and let W be the subspace of F n spanned by the column
vectors of A. What does W have to do with T ?
Solution. W is exactly the range of T . To see this, note that the jth column of A is T εj , where εj denotes the jth standard basis vector of F n . So any α in W can be written
α = x1 T ε1 + x2 T ε2 + · · · + xn T εn = T (x1 ε1 + x2 ε2 + · · · + xn εn ) = T (x1 , x2 , . . . , xn ),
which shows that α is in the range of T . Conversely, every vector in the range of T has this form, so W is precisely the range of T .
3.4.4 Exercise 4
Let V be a two-dimensional vector space over the field F , and let B be an
ordered basis for V . If T is a linear operator on V and
[T]_B = \begin{pmatrix} a & b \\ c & d \end{pmatrix},
prove that T 2 − (a + d)T + (ad − bc)I = 0.
Proof. By Theorem 12, the function which assigns a linear operator on V to its
matrix relative to B is an isomorphism between L(V, V ) and F 2×2 . Theorem 13
shows that this function preserves products also. Thus we can operate on T by
simply performing the corresponding operations on its matrix and vice versa.
So consider the following computation.
3.4.5 Exercise 5
Let T be the linear operator on R3 , the matrix of which in the standard ordered
basis is
A = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ -1 & 3 & 4 \end{pmatrix}.
Find a basis for the range of T and a basis for the null space of T .
Solution. By Exercise 3.4.3, we know that the column space of A is the range of
T . We can find a basis for the column space of A by row-reducing its transpose
A^T and taking the nonzero rows. We get
A^T = \begin{pmatrix} 1 & 0 & -1 \\ 2 & 1 & 3 \\ 1 & 1 & 4 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 5 \\ 0 & 0 & 0 \end{pmatrix},
so {(1, 0, −1), (0, 1, 5)} is a basis for the range of T . For the null space, we row-reduce A itself:
A = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ -1 & 3 & 4 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix},
so the null space consists of all (x1 , x2 , x3 ) with x1 = x3 and x2 = −x3 , and {(1, −1, 1)} is a basis for the null space of T .
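Both bases can be verified with exact arithmetic in SymPy (an independent check, not part of the original solution):

    from sympy import Matrix

    A = Matrix([[1, 2, 1],
                [0, 1, 1],
                [-1, 3, 4]])
    print(A.T.rref()[0])    # nonzero rows (1, 0, -1) and (0, 1, 5): basis for the range
    print(A.nullspace())    # expected: multiples of (1, -1, 1), a basis for the null space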
3.4.6 Exercise 6
Let T be the linear operator on R2 defined by
T (x1 , x2 ) = (−x2 , x1 ).
Solution. Since T ε1 = (0, 1) and T ε2 = (−1, 0), the matrix of T in the standard ordered basis is
[T]_{\{ε_1, ε_2\}} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.
For the ordered basis B = {α1 , α2 } with α1 = (1, 2) and α2 = (1, −1), the change-of-coordinates matrix is
P = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}.
So,
[T]_B = P^{-1} [T]_{\{ε_1, ε_2\}} P = \begin{pmatrix} \frac{1}{3} & \frac{1}{3} \\ \frac{2}{3} & -\frac{1}{3} \end{pmatrix} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} = \begin{pmatrix} -\frac{1}{3} & \frac{2}{3} \\ -\frac{5}{3} & \frac{1}{3} \end{pmatrix}.
(c) Prove that for every real number c the operator (T − cI) is invertible.
(d) Prove that if B is any ordered basis for R2 and [T ]B = A, then A12 A21 ≠ 0.
3.4.7 Exercise 7
Let T be the linear operator on R3 defined by
T (x1 , x2 , x3 ) = (3x1 + x3 , −2x1 + x2 , −x1 + 2x2 + 4x3 ).
(a) What is the matrix of T in the standard ordered basis for R3 ?
Solution. Since
T (1, 0, 0) = (3, −2, −1),
T (0, 1, 0) = (0, 1, 2),
T (0, 0, 1) = (1, 0, 4),
the matrix of T in the standard ordered basis is
[T]_{\{ε_1, ε_2, ε_3\}} = \begin{pmatrix} 3 & 0 & 1 \\ -2 & 1 & 0 \\ -1 & 2 & 4 \end{pmatrix}.
(b) What is the matrix of T in the ordered basis {α1 , α2 , α3 }, where α1 = (1, 0, 1), α2 = (−1, 2, 1), and α3 = (2, 1, 1)?
Solution. The change-of-coordinates matrix is
P = \begin{pmatrix} 1 & -1 & 2 \\ 0 & 2 & 1 \\ 1 & 1 & 1 \end{pmatrix}.
So
[T]_{\{α_1, α_2, α_3\}} = P^{-1} [T]_{\{ε_1, ε_2, ε_3\}} P
= \begin{pmatrix} -\frac{1}{4} & -\frac{3}{4} & \frac{5}{4} \\ -\frac{1}{4} & \frac{1}{4} & \frac{1}{4} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \end{pmatrix} \begin{pmatrix} 3 & 0 & 1 \\ -2 & 1 & 0 \\ -1 & 2 & 4 \end{pmatrix} \begin{pmatrix} 1 & -1 & 2 \\ 0 & 2 & 1 \\ 1 & 1 & 1 \end{pmatrix}
= \begin{pmatrix} \frac{17}{4} & \frac{35}{4} & \frac{11}{2} \\ -\frac{3}{4} & \frac{15}{4} & -\frac{3}{2} \\ -\frac{1}{2} & -\frac{7}{2} & 0 \end{pmatrix}.
(c) Prove that T is invertible and give a rule for T −1 like the one which defines
T.
T^{-1}(x_1, x_2, x_3) = \left( \frac{4}{9}x_1 + \frac{2}{9}x_2 - \frac{1}{9}x_3,\; \frac{8}{9}x_1 + \frac{13}{9}x_2 - \frac{2}{9}x_3,\; -\frac{1}{3}x_1 - \frac{2}{3}x_2 + \frac{1}{3}x_3 \right).
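The matrix algebra in parts (b) and (c) can be confirmed with SymPy's exact rational arithmetic (my own verification sketch):

    from sympy import Matrix, eye

    A = Matrix([[3, 0, 1], [-2, 1, 0], [-1, 2, 4]])   # [T] in the standard basis
    P = Matrix([[1, -1, 2], [0, 2, 1], [1, 1, 1]])    # columns are a1, a2, a3
    print(P.inv() * A * P)   # expected [[17/4, 35/4, 11/2], [-3/4, 15/4, -3/2], [-1/2, -7/2, 0]]
    print(A.inv())           # rows give the rule for T^{-1} stated above
    assert A * A.inv() == eye(3)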
3.4.8 Exercise 8
Let θ be a real number. Prove that the following two matrices are similar over
the field of complex numbers:
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad \begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix}.
and
Thus the second matrix represents T in the ordered basis B. By Theorem 14,
the two matrices are similar.
3.4.9 Exercise 9
Let V be a finite-dimensional vector space over the field F and let S and T be
linear operators on V . We ask: When do there exist ordered bases B and B 0 for
V such that [S]B = [T ]B0 ? Prove that such bases exist if and only if there is an
invertible linear operator U on V such that T = U SU −1 .
Proof. Assume such bases exist, so that [S]B = [T ]B0 . Let U be the operator
which carries B onto B 0 . Then by Theorem 14, we have
[T]_{B'} = [U]_B^{-1} [T]_B [U]_B = [U^{-1} T U]_B = [S]_B .
It follows that S = U^{-1} T U , and therefore T = U S U^{-1} with U invertible.
3.4.10 Exercise 10
We have seen that the linear operator T on R2 defined by T (x1 , x2 ) = (x1 , 0) is
represented in the standard ordered basis by the matrix
A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.
On the other hand, since S ≠ I, there exist distinct β2 and β3 such that
Sβ2 = β3 .
c1 Sα1 + c2 Sα2 = 0.
3.4.11 Exercise 11
Let W be the space of all n × 1 column matrices over a field F . If A is an
n × n matrix over F , then A defines a linear operator LA on W through left
multiplication: LA (X) = AX. Prove that every linear operator on W is left
multiplication by some n × n matrix, i.e., is LA for some A.
Now suppose V is an n-dimensional vector space over the field F , and let B
be an ordered basis for V . For each α in V , define U α = [α]B . Prove that U is
an isomorphism of V onto W . If T is a linear operator on V , then U T U −1 is a
linear operator on W . Accordingly, U T U −1 is left multiplication by some n × n
matrix A. What is A?
Solution. Let C be the standard ordered basis for W = F n×1 , i.e. the jth vector
in the basis has a 1 in its jth row and all other entries zero. Then if S is any
linear operator on W , we may take the n × n matrix A to be
A = [S]C .
Then LA and S have the same matrix relative to the ordered basis C, and
therefore LA = S. Therefore every linear operator on W is left multiplication
by some n × n matrix.
Now let V be an n-dimensional vector space over F and B = {α1 , . . . , αn }
an ordered basis for V . Define U α = [α]B , so that U is a map of V into W . We
will show that U is an isomorphism.
First, for any β1 , β2 in V and scalar c in F , we have
U (cβ1 + β2 ) = [cβ1 + β2 ]B
= c[β1 ]B + [β2 ]B
= cU β1 + U β2 ,
so U is linear. Moreover, U is one to one: if U β = [β]B = 0, then every coordinate of β relative to B is zero, so β = 0.
To see that U is onto, let Y = (y1 , . . . , yn ) be any column matrix in W . Define
β = y1 α1 + y2 α2 + · · · + yn αn .
Then
U β = [β]B = Y.
Thus U is onto, and therefore U is an isomorphism of V onto W .
Now let T be a linear operator on V and let X be any column matrix in W with entries x1 , . . . , xn . Then
U T U −1 (X) = U T (x1 α1 + x2 α2 + · · · + xn αn )
= U (x1 T α1 + x2 T α2 + · · · + xn T αn )
= x1 [T α1 ]B + x2 [T α2 ]B + · · · + xn [T αn ]B
= x1 [T ]B [α1 ]B + x2 [T ]B [α2 ]B + · · · + xn [T ]B [αn ]B
= [T ]B (x1 U α1 + x2 U α2 + · · · + xn U αn )
= [T ]B X.
Therefore U T U −1 = LA , where A = [T ]B .
3.4.12 Exercise 12
Let V be an n-dimensional vector space over the field F , and let
B = {α1 , . . . , αn }
(c) Let S be any linear operator on V such that S n = 0 but S n−1 ≠ 0. Prove
that there is an ordered basis B 0 for V such that the matrix of S in the
ordered basis B 0 is the matrix A of part (a).
Proof. Since S n−1 ≠ 0, there is a vector β1 in V such that S n−1 β1 ≠ 0. For each j with 1 ≤ j ≤ n, define
βj = S j−1 β1 .
Let B 0 = {β1 , . . . , βn }.
Now suppose c1 , c2 , . . . , cn are scalars in F such that
c1 β1 + c2 β2 + · · · + cn βn = 0.
Applying S n−1 to both sides and noting that S n−1 βj = S n+j−2 β1 = 0 for j ≥ 2, we get c1 S n−1 β1 = c1 βn = 0. Since βn = S n−1 β1 ≠ 0, this forces c1 = 0, so that
c2 β2 + c3 β3 + · · · + cn βn = 0,
Continuing in this way, suppose we have shown that c1 = · · · = ck = 0, so that
ck+1 βk+1 + · · · + cn βn = 0.
Taking S n−k−1 of both sides (in the case where k = n−1, we take S 0 = I)
then gives ck+1 βn = 0, so that ck+1 = 0. Therefore c1 = c2 = · · · = cn = 0
and B 0 is linearly independent. Since dim V = n and B 0 is a linearly
independent set of n vectors in V , it follows that B 0 is a basis for V .
Finally, we have defined β1 , . . . , βn so that Sβj = βj+1 for 1 ≤ j ≤ n − 1 and Sβn = S n β1 = 0. It follows that the matrix of S in the ordered basis B 0 is the matrix A of part (a).
[P ]C = A = [Q]C 0 .
N = [Q]_B = [U]_B [P]_B [U]_B^{-1} = [U]_B M [U]_B^{-1}
3.4.13 Exercise 13
Let V and W be finite-dimensional vector spaces over the field F and let T be
a linear transformation from V into W . If
B = {α1 , . . . , αn } and B 0 = {β1 , . . . , βm }
are ordered bases for V and W , respectively, define the linear transformations
E p,q as in the proof of Theorem 5: E p,q (αi ) = δiq βp . Then the E p,q , 1 ≤ p ≤ m,
1 ≤ q ≤ n, form a basis for L(V, W ), and so
T = \sum_{p=1}^{m} \sum_{q=1}^{n} A_{pq} E^{p,q}
for certain scalars Apq (the coordinates of T in this basis for L(V, W )). Show
that the matrix A with entries A(p, q) = Apq is precisely the matrix of T relative
to the pair B, B 0 .
Thus the ith coordinate of T αj is Aij and we see that the matrix of T relative
to B, B 0 is precisely the matrix A.
3.5.2 Exercise 2
Let B = {α1 , α2 , α3 } be the basis for C 3 defined by
α1 = (1, 0, −1), α2 = (1, 1, 1), α3 = (2, 2, 0).
Find the dual basis of B.
Solution. Let P be the matrix whose columns give the coordinates of α1 , α2 , α3 . Then
P^{-1} = \begin{pmatrix} 1 & -1 & 0 \\ 1 & -1 & 1 \\ -\frac{1}{2} & 1 & -\frac{1}{2} \end{pmatrix},
and the rows of P −1 give the dual basis functionals. Therefore
f_1(x_1, x_2, x_3) = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & -1 & 0 \\ 1 & -1 & 1 \\ -\frac{1}{2} & 1 & -\frac{1}{2} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = x_1 - x_2 .
Similarly, we get
f_2(x_1, x_2, x_3) = x_1 - x_2 + x_3 ,
and
f_3(x_1, x_2, x_3) = -\frac{1}{2} x_1 + x_2 - \frac{1}{2} x_3 .
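Assuming the basis vectors stated above, the dual basis can be recovered mechanically as the rows of P inverse; a short SymPy check (my own sketch):

    from sympy import Matrix, eye

    P = Matrix([[1, 1, 2],
                [0, 1, 2],
                [-1, 1, 0]])     # columns are a1, a2, a3
    Pinv = P.inv()
    print(Pinv)                  # rows are the coefficient vectors of f1, f2, f3
    assert Pinv * P == eye(3)    # f_i(a_j) is the Kronecker delta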
3.5.3 Exercise 3
If A and B are n×n matrices over the field F , show that trace(AB) = trace(BA).
Now show that similar matrices have the same trace.
Proof. Writing out the diagonal entries of the products, we have
tr(AB) = \sum_{i=1}^{n} (AB)_{ii} = \sum_{i=1}^{n} \sum_{j=1}^{n} A_{ij} B_{ji} = \sum_{j=1}^{n} \sum_{i=1}^{n} B_{ji} A_{ij} = \sum_{j=1}^{n} (BA)_{jj} = tr(BA).
So tr(AB) = tr(BA).
Next, suppose A and B are similar, and let P be an invertible n × n matrix
such that B = P −1 AP . Using the fact that was proven above, we get
tr(B) = tr(P −1 AP )
= tr((P −1 A)P )
= tr(P (P −1 A))
= tr((P P −1 )A)
= tr(A).
3.5.4 Exercise 4
Let V be the vector space of all polynomial functions p from R into R which
have degree 2 or less:
p(x) = c0 + c1 x + c2 x2 .
Define three linear functionals on V by
f_1(p) = \int_0^1 p(x)\,dx, \qquad f_2(p) = \int_0^2 p(x)\,dx, \qquad f_3(p) = \int_0^{-1} p(x)\,dx.
Show that {f1 , f2 , f3 } is a basis for V ∗ by exhibiting the basis for V of which it
is the dual.
Solution. First we evaluate,
f_1(p) = \left[c_0 x + \frac{1}{2}c_1 x^2 + \frac{1}{3}c_2 x^3\right]_0^1 = c_0 + \frac{1}{2}c_1 + \frac{1}{3}c_2,
f_2(p) = \left[c_0 x + \frac{1}{2}c_1 x^2 + \frac{1}{3}c_2 x^3\right]_0^2 = 2c_0 + 2c_1 + \frac{8}{3}c_2,
f_3(p) = \left[c_0 x + \frac{1}{2}c_1 x^2 + \frac{1}{3}c_2 x^3\right]_0^{-1} = -c_0 + \frac{1}{2}c_1 - \frac{1}{3}c_2.
Now, let {p1 , p2 , p3 } be the basis for V of which {f1 , f2 , f3 } is the dual. To
determine pi , we want to find values for the coefficients c0 , c1 , and c2 so that
fi (pi ) = 1 and fj (pi ) = 0 for j ≠ i. This gives three systems of linear equations,
having augmented matrices
1 1 1 1 1 1
1 2 3 1 1 2 3 0 1 2 3 0
8 8 8
2 2 0 , 2 2 1 , and 2 2 0 .
3 3 3
1
−1 2 − 13 0 −1 1
2 − 31 0 −1 1
2 − 13 1
We can combine these into one augmented matrix and perform row-reduction,
which gives
1 1
− 61 − 31
1 2 3 1 0 0 1 0 0 1
8
2 2 0 1 0 → 0 1 0 1 0 1 .
3
1 1
−1 2 − 3 0 0 1 0 0 1 − 32 1
2 − 21
So we see that
p_1(x) = 1 + x - \frac{3}{2}x^2,
p_2(x) = -\frac{1}{6} + \frac{1}{2}x^2,
p_3(x) = -\frac{1}{3} + x - \frac{1}{2}x^2,
and {f1 , f2 , f3 } is the dual basis of {p1 , p2 , p3 }.
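That {f1 , f2 , f3 } really is dual to {p1 , p2 , p3 } can be confirmed by direct integration; the following SymPy snippet (my own check) should print the identity matrix row by row:

    from sympy import symbols, integrate, Rational

    x = symbols('x')
    p = [1 + x - Rational(3, 2) * x**2,
         Rational(-1, 6) + Rational(1, 2) * x**2,
         Rational(-1, 3) + x - Rational(1, 2) * x**2]
    limits = [(0, 1), (0, 2), (0, -1)]      # integration intervals defining f1, f2, f3

    for a, b in limits:
        print([integrate(q, (x, a, b)) for q in p])   # each row should be a standard basis vector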
3.5.5 Exercise 5
If A and B are n × n complex matrices, show that AB − BA = I is impossible.
Proof. In Example 19, the trace function was shown to be a linear functional
on the space of n × n matrices. And in Exercise 3.5.3, we proved that, given
two matrices A and B, tr(AB) = tr(BA). It now follows that
tr(AB − BA) = tr(AB) − tr(BA) = 0,
whereas tr(I) = n ≠ 0. Since equal matrices have equal traces, AB − BA = I is impossible.
3.5.6 Exercise 6
Let m and n be positive integers and F a field. Let f1 , . . . , fm be linear func-
tionals on F n . For α in F n define
T α = (f1 (α), . . . , fm (α)).
Show that T is a linear transformation from F n into F m . Then show that every
linear transformation from F n into F m is of the above form, for some f1 , . . . , fm .
Proof. First, for any α1 , α2 in F n and any c in F , we have
3.5.7 Exercise 7
Let α1 = (1, 0, −1, 2) and α2 = (2, 3, 1, 1), and let W be the subspace of R4
spanned by α1 and α2 . Which linear functionals f ,
f (x1 , x2 , x3 , x4 ) = c1 x1 + c2 x2 + c3 x3 + c4 x4 ,
are in the annihilator of W ?
Solution. The functional f is in the annihilator of W if and only if f (α1 ) = f (α2 ) = 0, that is,
c1 − c3 + 2c4 = 0 and 2c1 + 3c2 + c3 + c4 = 0.
Row-reducing this system, we see that c3 and c4 can be arbitrary, with
c1 = c3 − 2c4 and c2 = c4 − c3 .
Thus, writing s = c3 and t = c4 , the functionals in the annihilator of W are exactly those of the form
f (x1 , x2 , x3 , x4 ) = (s − 2t)x1 + (t − s)x2 + sx3 + tx4 ,
where s and t are scalars in F . Note that we can also find a basis for W 0 by
first taking s = 1, t = 0 and then by taking s = 0, t = 1.
3.5.8 Exercise 8
Let W be the subspace of R5 which is spanned by the vectors
α1 = (1, 2, 1, 0, 0), α2 = (0, 1, 3, 3, 1), α3 = (1, 4, 6, 4, 1).
Find a basis for W 0 , the annihilator of W .
Solution. Any linear functional f on R5 has the form
f (x1 , x2 , x3 , x4 , x5 ) = c1 x1 + c2 x2 + c3 x3 + c4 x4 + c5 x5 .
Such a functional f is in W 0 if and only if f (α1 ) = f (α2 ) = f (α3 ) = 0, that is,
f (α1 ) = c1 + 2c2 + c3 = 0
f (α2 ) = c2 + 3c3 + 3c4 + c5 = 0
f (α3 ) = c1 + 4c2 + 6c3 + 4c4 + c5 = 0.
Solving this system, we find that c4 and c5 may be chosen arbitrarily, with
c1 = −4c4 − 3c5 ,
c2 = 3c4 + 2c5 ,
and
c3 = −2c4 − c5 .
Therefore the set {f1 , f2 }, where
f1 (x1 , x2 , x3 , x4 , x5 ) = −4x1 + 3x2 − 2x3 + x4
and
f2 (x1 , x2 , x3 , x4 , x5 ) = −3x1 + 2x2 − x3 + x5 ,
is a basis for W 0 . Note that dim W 0 = 2, which agrees with Theorem 16.
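Assuming the spanning vectors written above, one can check numerically that f1 and f2 annihilate W (my own sketch with NumPy):

    import numpy as np

    alphas = np.array([[1, 2, 1, 0, 0],
                       [0, 1, 3, 3, 1],
                       [1, 4, 6, 4, 1]])      # rows are a1, a2, a3
    f1 = np.array([-4, 3, -2, 1, 0])
    f2 = np.array([-3, 2, -1, 0, 1])
    print(alphas @ f1)   # expected [0 0 0]
    print(alphas @ f2)   # expected [0 0 0]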
3.5.9 Exercise 9
Let V be the vector space of all 2 × 2 matrices over the field of real numbers,
and let
B = \begin{pmatrix} 2 & -2 \\ -1 & 1 \end{pmatrix}.
Let W be the subspace of V consisting of all A such that AB = 0. Let f be a
linear functional on V which is in the annihilator of W . Suppose that f (I) = 0
and f (C) = 3, where I is the 2 × 2 identity matrix and
C = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
Find f (B).
Solution. Note that
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 2 & -2 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 2a - b & -2a + b \\ 2c - d & -2c + d \end{pmatrix} = 0
if and only if 2a = b and 2c = d. Therefore, a basis for W is
\left\{ \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 2 \end{pmatrix} \right\}.
Now, let
f (A) = c1 A11 + c2 A12 + c3 A21 + c4 A22 ,
where A is any 2 × 2 matrix. We know that f annihilates the two basis vectors
for W found above, and we also know that f (I) = 0 and f (C) = 3. This leads
to the following system of linear equations:
c1 + 2c2 = 0
c3 + 2c4 = 0
c1 + c4 = 0
c4 = 3.
Solving this system, we find
c_1 = -3, \quad c_2 = \frac{3}{2}, \quad c_3 = -6, \quad c_4 = 3,
so
f(A) = -3A_{11} + \frac{3}{2}A_{12} - 6A_{21} + 3A_{22} .
Therefore
f(B) = -3(2) + \frac{3}{2}(-2) - 6(-1) + 3(1) = 0.
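A quick numerical check of the functional found above (my own sketch; NumPy is used only for convenience):

    import numpy as np

    def f(A):
        return -3 * A[0, 0] + 1.5 * A[0, 1] - 6 * A[1, 0] + 3 * A[1, 1]

    B = np.array([[2, -2], [-1, 1]])
    C = np.array([[0, 0], [0, 1]])
    I2 = np.eye(2)
    W_basis = [np.array([[1, 2], [0, 0]]), np.array([[0, 0], [1, 2]])]

    print([f(M) for M in W_basis])   # expected [0.0, 0.0]  (f annihilates W)
    print(f(I2), f(C), f(B))         # expected 0.0 3.0 0.0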
3.5.10 Exercise 10
Let F be a subfield of the complex numbers. We define n linear functionals on
F n (n ≥ 2) by
f_k(x_1, \ldots, x_n) = \sum_{j=1}^{n} (k - j)x_j, \qquad 1 \le k \le n.
Solution. Call the subspace W . We first want to find dim W 0 . Fix some n ≥ 2
and consider the set B = {f1 , f2 }. Since f1 (α) = f2 (α) = 0 for any α in W , we
have
Observe that by multiplying the first row by −1 and interchanging the two
rows, we can put A in row-reduced echelon form. Since A is row-equivalent to
a row-reduced matrix having two nonzero rows, we see that the set B is linearly
independent.
3.5.11 Exercise 11
Let W1 and W2 be subspaces of a finite-dimensional vector space V .
B 0 = {α1 , . . . , αm , β1 , . . . , βn }.
B 00 = {α1 , . . . , αm , γ1 , . . . , γp }.
g(α1 ) = · · · = g(αm ) = 0,
g(β1 ) = · · · = g(βn ) = 0,
g(γ1 ) = f (γ1 ), ..., g(γp ) = f (γp ).
h(α1 ) = · · · = h(αm ) = 0,
h(γ1 ) = · · · = h(γp ) = 0,
h(β1 ) = f (β1 ), ..., h(βn ) = f (βn ).
3.5.12 Exercise 12
Let V be a finite-dimensional vector space over the field F and let W be a
subspace of V . If f is a linear functional on W , prove that there is a linear
functional g on V such that g(α) = f (α) for each α in the subspace W .
Proof. Let B = {α1 , . . . , αm } be a basis for W and extend this basis to a basis
B 0 = {α1 , . . . , αm , β1 , . . . , βn }
for V .
Let f be any linear functional on W . Define the linear functional g on V by setting
g(αi ) = f (αi ) for 1 ≤ i ≤ m and g(βj ) = 0 for 1 ≤ j ≤ n;
this determines g, since a linear functional is determined by its values on a basis. Then for any α = c1 α1 + · · · + cm αm in W , we have g(α) = c1 f (α1 ) + · · · + cm f (αm ) = f (α), so g agrees with f on the subspace W .
3.5.13 Exercise 13
Let F be a subfield of the field of complex numbers and let V be any vector
space over F . Suppose that f and g are linear functionals on V such that the
function h defined by h(α) = f (α)g(α) is also a linear functional on V . Prove
that either f = 0 or g = 0.
Proof. Let f , g, and h be the linear functionals on V with the properties de-
scribed above.
Choose any α, β in V . Then
h(α + β) = f (α + β)g(α + β)
= (f (α) + f (β))(g(α) + g(β))
= f (α)g(α) + f (α)g(β) + f (β)g(α) + f (β)g(β)
= h(α) + h(β) + f (α)g(β) + f (β)g(α)
= h(α + β) + f (α)g(β) + f (β)g(α).
Subtracting h(α + β) from both sides, we conclude that
f (α)g(β) + f (β)g(α) = 0 (3.3)
for all α and β in V . Now suppose that f is not the zero functional, and choose β in V with f (β) ≠ 0. Taking α = β in (3.3) gives 2f (β)g(β) = 0, so that
f (β)g(β) = 0,
and it follows that g(β) = 0 (since f (β) ≠ 0). Substituting zero for g(β) in
equation (3.3) then gives
f (β)g(α) = 0.
But, again, f (β) is nonzero, so we must have g(α) = 0. Since α was chosen
arbitrarily, it follows that g = 0 and the proof is complete.
3.5.14 Exercise 14
Let F be a field of characteristic zero and let V be a finite-dimensional vector
space over F . If α1 , . . . , αm are finitely many vectors in V , each different from
the zero vector, prove that there is a linear functional f on V such that
f (αi ) ≠ 0, i = 1, . . . , m.
Proof. We argue by induction on m. First let m = 1 and let α1 be a nonzero vector in V , and let {β1 , . . . , βn } be a basis for V . Write
α1 = c1 β1 + c2 β2 + · · · + cn βn .
Since α1 ≠ 0, we have ck ≠ 0 for some k. Let f be the linear functional on V determined by f (βk ) = c_k^{-1} and f (βi ) = 0 for i ≠ k. Then
f (α1 ) = ck c_k^{-1} = 1 ≠ 0.
This shows that the statement holds for the base case of m = 1.
Now assume that the statement is true when m = k for some k ≥ 1, and
let k + 1 nonzero vectors α1 , . . . , αk+1 be given. We may apply the inductive
hypothesis to find a linear functional f0 on V such that f0 (αi ) ≠ 0 for all i with
1 ≤ i ≤ k.
If f0 (αk+1 ) happens to be nonzero, then we are done. So we will assume
that f0 (αk+1 ) = 0. Again the inductive hypothesis, when applied to the single
vector αk+1 , allows us to find a linear functional f1 such that f1 (αk+1 ) ≠ 0.
For each i with 1 ≤ i ≤ k + 1, define the number Ni as follows. First, if
there is no positive integer n such that
3.5.15 Exercise 15
According to Exercise 3.5.3, similar matrices have the same trace. Thus we
can define the trace of a linear operator on a finite-dimensional space to be the
trace of any matrix which represents the operator in an ordered basis. This is
well-defined since all such representing matrices for one operator are similar.
Now let V be the space of all 2 × 2 matrices over the field F and let P be a
fixed 2 × 2 matrix. Let T be the linear operator on V defined by T (A) = P A.
Prove that trace(T ) = 2 trace(P ).
Proof. Let
P = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
and let
B = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\}
be an ordered basis for V . We calculate
T\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} a & 0 \\ c & 0 \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
T\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & a \\ 0 & c \end{pmatrix} = a\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},
T\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} b & 0 \\ d & 0 \end{pmatrix} = b\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + d\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
T\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & b \\ 0 & d \end{pmatrix} = b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
From this, we see that the matrix for T relative to B is the 4 × 4 matrix
[T]_B = \begin{pmatrix} a & 0 & b & 0 \\ 0 & a & 0 & b \\ c & 0 & d & 0 \\ 0 & c & 0 & d \end{pmatrix}.
The trace of this matrix is 2a + 2d = 2(a + d) = 2 trace(P ), so trace(T ) = 2 trace(P ).
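In the ordered basis B above, the matrix of A -> P A is the Kronecker product of P with the 2 x 2 identity, so the identity trace(T ) = 2 trace(P ) is easy to spot-check numerically (my own sketch with NumPy):

    import numpy as np

    rng = np.random.default_rng(0)
    P = rng.standard_normal((2, 2))     # a random 2 x 2 matrix
    T_matrix = np.kron(P, np.eye(2))    # reproduces the 4 x 4 matrix displayed above
    assert np.isclose(np.trace(T_matrix), 2 * np.trace(P))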
3.5.16 Exercise 16
Show that the trace functional on n × n matrices is unique in the following
sense. If W is the space of n × n matrices over the field F and if f is a linear
functional on W such that f (AB) = f (BA) for each A and B in W , then f is
a scalar multiple of the trace function. If, in addition, f (I) = n, then f is the
trace function.
Proof. Let
B = {E^{11} , E^{12} , . . . , E^{1n} , . . . , E^{n1} , E^{n2} , . . . , E^{nn} }
be the basis for W , where E^{ij} is the n × n matrix having a 1 in the i, jth entry
and all other entries 0. Since f is linear, we may write it as
f(A) = \sum_{i=1}^{n} \sum_{j=1}^{n} C_{ij} A_{ij}, (3.5)
where each Cij is a fixed constant and Aij is the i, jth entry of A.
Note that the p, qth entry of E^{ij} is δip δqj , where δij is the Kronecker delta.
So, if we fix some indices i, j, a, b, each in the range from 1 to n, then we have
f(E^{ij} E^{ab}) = \sum_{p=1}^{n} \sum_{q=1}^{n} C_{pq} \sum_{k=1}^{n} (\delta_{ip}\delta_{kj})(\delta_{ak}\delta_{qb}) = C_{ib}\,\delta_{aj}. (3.6)
Since these must be equal, we see that C11 = C22 = · · · = Cnn . On the other
hand, (3.6) also gives
3.5.17 Exercise 17
Let W be the space of n × n matrices over the field F , and let W0 be the
subspace spanned by the matrices C of the form C = AB − BA. Prove that W0
is exactly the subspace of matrices which have trace zero.
Proof. Let W1 be the subspace of matrices having trace zero. From Exer-
cise 3.5.3, we know that
(n − 1) + (n2 − n) = n2 − 1
f (x1 , . . . , xn ) = c1 x1 + · · · + cn xn .
f (1, −1, 0, 0, . . . , 0) = c1 − c2 = 0.
f (0, 1, −1, 0, 0, . . . , 0) = c2 − c3 = 0,
(b) Show that the dual space W ∗ of W can be ‘naturally’ identified with the
linear functionals
f (x1 , . . . , xn ) = c1 x1 + · · · + cn xn
on F n which satisfy c1 + · · · + cn = 0.
f (x1 , . . . , xn ) = c1 x1 + · · · + cn xn
{1 − i | 2 ≤ i ≤ n}
3.7 The Transpose of a Linear Transformation
3.7.1 Exercise 1
Let f be the linear functional on R2 defined by f (x1 , x2 ) = ax1 + bx2 .
For each of the following linear operators T , let g = T t f , and find g(x1 , x2 ).
(a) T (x1 , x2 ) = (x1 , 0)
Solution. We have
g(x1 , x2 ) = (T t f )(x1 , x2 )
= f (T (x1 , x2 ))
= f (x1 , 0)
= ax1 .
(b) T (x1 , x2 ) = (−x2 , x1 )
Solution. We have
g(x1 , x2 ) = f (T (x1 , x2 ))
= f (−x2 , x1 )
= −ax2 + bx1 .
(c) T (x1 , x2 ) = (x1 − x2 , x1 + x2 )
Solution. We have
g(x1 , x2 ) = f (x1 − x2 , x1 + x2 )
= (b + a)x1 + (b − a)x2 .
3.7.2 Exercise 2
Let V be the vector space of all polynomial functions over the field of real
numbers. Let a and b be fixed real numbers and let f be the linear functional
on V defined by
f(p) = \int_a^b p(x)\,dx.
3.7.3 Exercise 3
Let V be the space of all n × n matrices over a field F and let B be a fixed n × n
matrix. If T is the linear operator on V defined by T (A) = AB − BA, and if f
is the trace function, what is T t f ?
Solution. For any A in V , we have
(T t f )(A) = f (T A)
= f (AB − BA)
= f (AB) − f (BA)
= 0,
where the last equality follows from Exercise 3.5.3. Therefore T t f is the zero functional.
3.7.4 Exercise 4
Let V be a finite-dimensional vector space over the field F and let T be a linear
operator on V . Let c be a scalar and suppose there is a non-zero vector α in
V such that T α = cα. Prove that there is a non-zero linear functional f on V
such that T t f = cf .
Proof. Let n = dim V and let α be a nonzero vector in V with T α = cα. Define
the linear operator U on V by
U = T − cI.
Then U α = T α − cα = 0 with α ≠ 0, so U is singular and rank(U ) < n. By Theorem 22, rank(U t ) = rank(U ) < n = dim V ∗ ,
so the nullspace of U t has dimension greater than zero.
a nonzero linear functional f in V ∗ such that U t f = 0. Then for any β in V ,
0 = (U t f )(β)
= f (U β)
= f (T β − cβ)
= f (T β) − cf (β)
= (T t f )(β) − cf (β).
Hence (T t f )(β) = cf (β) for every β in V , that is, T t f = cf .
3.7.5 Exercise 5
Let A be an m × n matrix with real entries. Prove that A = 0 if and only if
trace(At A) = 0.
Proof. If A = 0, then clearly trace(At A) = 0. Conversely, suppose trace(At A) = 0. We have
trace(A^t A) = \sum_{j=1}^{n} (A^t A)_{jj} = \sum_{j=1}^{n} \sum_{i=1}^{m} (A^t)_{ji} A_{ij} = \sum_{j=1}^{n} \sum_{i=1}^{m} A_{ij}^2 = 0.
Since we have a sum of squares (of real numbers) equal to zero, it must be the
case that each squared number is zero. In particular Aij = 0 for each i, j, since
every such entry appears in the sum. Therefore A = 0.
3.7.6 Exercise 6
Let n be a positive integer and let V be the space of all polynomial functions
over the field of real numbers which have degree at most n, i.e., functions of the
form
f (x) = c0 + c1 x + · · · + cn xn .
Let D be the differentiation operator on V . Find a basis for the null space of
the transpose operator Dt .
Solution. Let W denote the range of D. By Theorem 22, the null space of Dt
is the annihilator of W . We know that W is the space of polynomials having
degree at most n − 1, so dim W = n. Since dim V = n + 1, Theorem 16 shows that the annihilator
W 0 has dimension (n + 1) − n = 1, so we may take any nonzero linear
functional in W 0 as a basis vector. Let g be the unique linear functional such
that
g(xk ) = δnk , 0 ≤ k ≤ n.
That is, g sends every polynomial to the coefficient for its xn term. In particular,
g annihilates W , so {g} is a basis for W 0 and therefore also a basis for the null
space of Dt .
3.7.7 Exercise 7
Let V be a finite-dimensional vector space over the field F . Show that T → T t
is an isomorphism of L(V, V ) onto L(V ∗ , V ∗ ).
Proof. Suppose V has dimension n and let U be the function from L(V, V )
into L(V ∗ , V ∗ ) given by U (T ) = T t . We could show that U is an isomorphism
by appealing to the definition of the transpose. Instead, we will write U as a
composition of three linear transformations U1 , U2 , and U3 , mapping
L(V, V) \xrightarrow{U_1} F^{n \times n} \xrightarrow{U_2} F^{n \times n} \xrightarrow{U_3} L(V^*, V^*),
3.7.8 Exercise 8
Let V be the vector space of n × n matrices over the field F .
(a) If B is a fixed n × n matrix, define a function fB on V by fB (A) =
trace(B t A). Show that fB is a linear functional on V .
Proof. For any matrices A1 , A2 in V and any scalar c in F , we have
fB (cA1 + A2 ) = trace(B t (cA1 + A2 )) = c trace(B t A1 ) + trace(B t A2 ) = c fB (A1 ) + fB (A2 ),
since the trace function is itself linear. Therefore fB is linear.
(b) Show that every linear functional on V is of the above form, i.e., is fB for
some B.
Proof. Let g be any linear functional on V . Since g is linear, it may be written in the form
g(A) = \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij} A_{ij},
where each cij is a fixed scalar in F . Now let B be the matrix whose i, j
entry is cij . Then for each matrix A in V ,
fB(A) = trace(B^t A) = \sum_{j=1}^{n} (B^t A)_{jj} = \sum_{j=1}^{n} \sum_{i=1}^{n} (B^t)_{ji} A_{ij} = \sum_{j=1}^{n} \sum_{i=1}^{n} c_{ij} A_{ij} = g(A).
Therefore g = fB .
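The identity trace(B t A) = Σ B_{ij} A_{ij} that drives this argument is easy to spot-check numerically (my own sketch with NumPy):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    assert np.isclose(np.trace(B.T @ A), np.sum(B * A))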