Linear Algebra Done Right Note
Contents
1. Some note before reading the book
2. Terminology
Toan Quang Pham page 2
5.5. Exercises 3C
5.6. 3D: Invertibility and Isomorphic Vector Spaces
5.7. Exercises 3D
5.8. 3E: Products and Quotients of Vector Spaces
5.9. Exercises 3E
5.10. 3F: Duality
5.11. Exercises 3F
6. Chapter 4: Polynomials
6.1. Exercise 4
13. Summary
2. Carefully check whether the theorem/definition holds for real vector spaces or complex vector spaces.
2. Terminology
1. "The matrix of T" with no basis mentioned means the matrix of T with respect to a chosen basis of V.
4. The empty set is not a vector space because it doesn't contain an additive identity 0 (it doesn't have any vectors at all).
5. Because the collection of objects satisfying the original condition is all vectors in V, the same as the collection of objects satisfying the new condition.
(b) Let S be the set of all continuous real-valued functions on the interval [0, 1]. S has an additive identity (Example 1.24). If f, g ∈ S then f + g and cf are continuous real-valued functions on the interval [0, 1] for any c ∈ F.
(c) The idea is similar: if f, g are two differentiable real-valued functions then so are f + g and cf.
(e) Let (an), (bn) be two sequences in S. Then lim_{n→∞} an = lim_{n→∞} bn = 0, so lim_{n→∞}(an + bn) = lim_{n→∞} c·an = 0, which shows that an + bn, c·an ∈ S.
Answer. {(0, 0)} and R^2 are obviously two of them. Are there any other subspaces? Let S be a subspace of R^2 containing an element (x1, y1) ≠ (0, 0). Note that λ(x1, y1) ∈ S for each λ ∈ F, so every point on the line y = (y1/x1)x belongs to S. If there does not exist a point (x2, y2) ∈ S off the line y = (y1/x1)x, then S, the set of points on y = (y1/x1)x, is a subspace of R^2. Hence, since the point (x1, y1) is arbitrary, we deduce that all lines in R^2 through the origin are subspaces of R^2.
Now, what if there exists such a point (x2, y2) ∈ S? Then the line y = (y2/x2)x also belongs to S. We claim that if two distinct lines through the origin belong to S then S = R^2. Indeed, since R^2 is the union of all lines in R^2 through the origin, it suffices to prove that every line y = ax with a ∈ R belongs to S, given that the two lines y = mx and y = nx already belong to S, where m = y1/x1 and n = y2/x2. We know that if (x1, y1), (x2, y2) ∈ S then (x1 + x2, y1 + y2) ∈ S. Pick x1, x2 so that (m − a)x1 = (a − n)x2; then mx1 + nx2 = a(x1 + x2). Picking y1 = mx1 and y2 = nx2, it follows that (x1 + x2, y1 + y2) = (x1 + x2, a(x1 + x2)) ∈ S, which yields that the line y = ax belongs to S. Since this is true for arbitrary a, we deduce S = R^2.
Thus, all subspaces of R^2 are {(0, 0)}, R^2, and all lines in R^2 through the origin.
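The two closure conditions used above can be sanity-checked numerically. The sketch below is not from the book (the helper name on_line is made up): a line through the origin is closed under addition and scalar multiplication, while a line missing the origin is not.

```python
def on_line(p, m, b):
    """Check whether the point p = (x, y) lies on the line y = m*x + b."""
    x, y = p
    return y == m * x + b

# Line through the origin y = 2x: closed under + and scalar multiplication.
p, q = (1, 2), (3, 6)
s = (p[0] + q[0], p[1] + q[1])
t = (5 * p[0], 5 * p[1])
assert on_line(s, 2, 0) and on_line(t, 2, 0)

# Shifted line y = 2x + 1 (not through the origin): closure fails.
p, q = (0, 1), (1, 3)
s = (p[0] + q[0], p[1] + q[1])
assert not on_line(s, 2, 1)   # (1, 4) is not on y = 2x + 1
```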
Answer. Similar idea to the previous problem. It's obvious that {(0, 0, 0)} and R^3 are two subspaces of R^3. Let S be a different subspace.
2. If there exists (x2, y2, z2) ∈ S that does not lie on the line ℓ through the origin and (x1, y1, z1), then the plane P through the origin, (x1, y1, z1), and (x2, y2, z2) belongs to S.
Thus, {0}, R^3, all lines in R^3 through the origin, and all planes in R^3 through the origin are all the subspaces of R^3.
Theorem 3.2.4 (Sum of subspaces, page 20) Suppose U1, ..., Um are subspaces of V. Then U1 + U2 + ... + Um is the smallest subspace of V containing U1, ..., Um.
v = 0 + ... + 0 + v + 0 + ... + 0 ∈ U1 + ... + Um, where v appears in the i-th position and the 0 in the j-th position is taken in Uj.
2. Already did.
5. Yes.
12. Assume the contrary: there exist two subspaces U1, U2 of V such that U1 ∪ U2 is also a subspace, and there exist u1 ∈ U1, u2 ∈ U2 with u1 ∉ U2, u2 ∉ U1. We have u1, u2 ∈ U1 ∪ U2, so u1 + u2 ∈ U1 ∪ U2. WLOG assume u1 + u2 ∈ U1; then (u1 + u2) − u1 ∈ U1, i.e. u2 ∈ U1, a contradiction. Thus either U1 ⊆ U2 or U2 ⊆ U1.
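A concrete instance of this exercise, sketched in Python (identifying R^2 with pairs of numbers; not from the text): the x-axis and y-axis are both subspaces, but their union fails closure under addition.

```python
def in_union(v):
    """Membership in the union of the x-axis and the y-axis of R^2."""
    return v[0] == 0 or v[1] == 0

u, w = (1, 0), (0, 1)
assert in_union(u) and in_union(w)      # both vectors lie in the union
total = (u[0] + w[0], u[1] + w[1])
assert not in_union(total)              # but their sum (1, 1) does not
```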
14. Done
15. U + U is exactly U. From Theorem 1.39, U + U contains U. We also note that U contains U + U, because every u ∈ U + U satisfies u ∈ U.
18. The operation of addition on the subspaces of V has an additive identity, namely U = {0}. Only {0} has an additive inverse.
(a) vj ∈ span(v1, ..., v_{j-1});
(b) if the j-th term is removed from v1, ..., vm, the span of the remaining list equals span(v1, ..., vm).
Theorem 4.1.2 (2.23) In a finite-dimensional vector space, the length of every linearly independent list of vectors is less than or equal to the length of every spanning list of vectors.
This system of equations has m variables and n equations with n < m. Here is an algorithm for finding a non-zero solution a1, ..., am. Number the equations in (1) from top to bottom as (1.1), (1.2), ..., (1.n).
1. For 1 ≤ i ≤ n − 1, get rid of am in (1.i) by subtracting a suitable constant multiple of (1.(i+1)). By doing this, we obtain a system of equations where the first n − 1 equations contain only m − 1 variables, while the last equation contains m variables.
2. For 1 ≤ i ≤ n − 2, get rid of a_{m-1} in (1.i) in the same way. We obtain an equivalent system of equations where the first n − 2 equations contain m − 2 variables, equation (1.(n−1)) contains m − 1 variables, and equation (1.n) contains m variables.
3. Keep doing this until we get a system of equations in which equation (1.i) contains m − n + i variables.
4. Starting from equation (1.1) with its m − n + 1 variables a1, ..., a_{m-n+1}, we can pick a1, ..., a_{m-n+1} satisfying equation (1.1) and not all 0, which is possible since m ≥ n + 1.
5. Move on to (1.2) and pick a_{m-n+2}, then to (1.3) and pick a_{m-n+3}, and so on until (1.n), where we pick am.
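The eliminate-then-back-substitute procedure above can be sketched as follows. This is a minimal exact-arithmetic implementation under the assumption that the system is given as a coefficient matrix; the function name nonzero_solution is made up.

```python
from fractions import Fraction

def nonzero_solution(A):
    """Find a nonzero solution of the homogeneous system A·a = 0, where A has
    n rows (equations) and m > n columns (variables), by elimination and
    back-substitution, using exact rational arithmetic."""
    n, m = len(A), len(A[0])
    assert n < m
    A = [[Fraction(x) for x in row] for row in A]
    pivots, r = [], 0
    for c in range(m):                      # forward elimination
        piv = next((i for i in range(r, n) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, n):
            f = A[i][c] / A[r][c]
            A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(m) if c not in pivots)
    a = [Fraction(0)] * m
    a[free] = Fraction(1)                   # one free variable set to 1
    for row, c in reversed(list(zip(A, pivots))):
        a[c] = -sum(row[j] * a[j] for j in range(c + 1, m)) / row[c]
    return a

sol = nonzero_solution([[1, 2, 3], [4, 5, 6]])   # 2 equations, 3 unknowns
assert any(x != 0 for x in sol)
assert all(sum(c * x for c, x in zip(row, sol)) == 0
           for row in [[1, 2, 3], [4, 5, 6]])
```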
Theorem 4.1.3 (2.26, Finite-dimensional subspaces) Every subspace of a finite-dimensional vector space is finite-dimensional.
a1·h1 + ... + a_{m-n+1}·h_{m-n+1} = k1
⟺
  a1·c_{1,1} + a2·c_{2,1} + ... + a_{m-n+1}·c_{m-n+1,1} = 1,
  a1·c_{1,2} + a2·c_{2,2} + ... + a_{m-n+1}·c_{m-n+1,2} = 0,
  ...
  a1·c_{1,m-n} + ... + a_{m-n+1}·c_{m-n+1,m-n} = 0.
Remark 4.1.4. Both my proofs of the two theorems use the result that a system of m equations with n variables over R, with m < n, has infinitely many solutions in R.
Corollary 4.1.5 (Example 2.20) If some vector in a list of vectors in V is a linear combination of the other vectors, then the list is linearly dependent.
Proof. Say the list v1, ..., vm has v1 = Σ_{i=2}^m ai·vi. Then v1 − Σ_{i=2}^m ai·vi = 0, so there exist x1, ..., xm not all 0 with Σ_{i=1}^m xi·vi = 0. Hence v1, ..., vm is linearly dependent.
This gives another way to check whether a list is linearly independent. See Exercise 10 (2A) for an application. The converse is also true.
Corollary 4.1.6 (Exercise 11, 2A) Suppose v1, ..., vm is linearly independent in V and w ∈ V. Then v1, v2, ..., vm, w is linearly independent if and only if w ∉ span(v1, v2, ..., vm).
This is a corollary following from the Linear Dependence Lemma (2.21). With this, we can construct arbitrarily long linearly independent lists in any infinite-dimensional vector space. Also with this, we can prove Exercise 14 (2A), which proves the existence of a sequence of vectors in an infinite-dimensional vector space such that for any positive integer m, the first m vectors in the sequence are linearly independent. See Exercises 15 and 16 (2A) for applications of Exercise 14.
2. Example 2.18: (a) True for v ≠ 0. If v = 0 then for any a ∈ F we always have av = 0, so the list consisting of the single vector 0 is not linearly independent.
(b) Consider u, v ∈ V. We have au + bv = 0 ⟺ au = −bv, so in order for (0, 0) to be the only choice of (a, b), u can't be a scalar multiple of v.
(c), (d) True.
4. Similarly to 3, suppose x·(2, 3, 1) + y·(1, −1, 2) + z·(7, 3, c) = 0 for x, y, z ∈ R. Then
  2x + y + 7z = 0,
  3x − y + 3z = 0,
  x + 2y + cz = 0.
From the first two equations we find x = −2z and y = −3z; substituting into the third equation gives (c − 8)z = 0. This has a nonzero solution for z if and only if c = 8. Done.
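As a quick check of the claimed dependence (taking z = 1, so x = −2 and y = −3):

```python
# With c = 8, the combination -2*v1 - 3*v2 + v3 should be the zero vector.
v1, v2, v3 = (2, 3, 1), (1, -1, 2), (7, 3, 8)
combo = tuple(-2 * a - 3 * b + c for a, b, c in zip(v1, v2, v3))
assert combo == (0, 0, 0)
```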
9. Not true. Let v1, v2, ..., vm be (1, 0, 0, ..., 0), (0, 1, ..., 0), ..., (0, ..., 0, 1), respectively, and let w3, ..., wm be (0, 0, 2, 0, ..., 0), ..., (0, ..., 0, 2), and w1 = (0, 1, 0, ..., 0) and
If a1 + ... + a_{j-1} − 1 = 0 then vj − a1·v1 − a2·v2 − ... − a_{j-1}·v_{j-1} = 0. If j = 1 then that means vj = 0, so v1, v2, ..., vm is linearly dependent, a contradiction. If j ≥ 2 then vj = a1·v1 + ... + a_{j-1}·v_{j-1}, which also shows that v1, ..., vm is linearly dependent, a contradiction.
Thus we must have a1 + ... + a_{j-1} − 1 ≠ 0. Therefore, since the RHS ∈ span(v1, ..., vm), we get w ∈ span(v1, ..., vm).
14. Prove that V is infinite-dimensional if and only if there is a sequence v1, v2, ... of vectors in V such that v1, ..., vm is linearly independent for every positive integer m.
If there exists such a sequence of vectors in V: assume the contrary, that V is finite-dimensional; then there exists a finite spanning list in V with length ℓ. However, from the existence of such a sequence, we can find in V a linearly independent list of arbitrarily large length, which contradicts Theorem 4.1.2, that the length of any linearly independent list is less than or equal to the length of any spanning list in a finite-dimensional vector space. We are done.
16. Consider the sequence of continuous real-valued functions defined on [0, 1]: 1, x, x^2, x^3, ... and we are done.
17. Nice problem. Since pj(2) = 0 for each j, x − 2 divides each pj. Let qi = pi/(x − 2); then q0, q1, ..., qm are polynomials in P_{m-1}(F). If q0, q1, ..., qm were linearly independent, it would be a linearly independent list of length m + 1, larger than the length m of a spanning list of P_{m-1}(F), a contradiction according to Theorem 4.1.2. Thus q0, q1, ..., qm is linearly dependent. That means there exist ai ∈ F, not all 0, with Σ_{i=0}^m ai·qi = 0. It follows that (x − 2)·Σ_{i=0}^m ai·qi = 0, i.e. Σ_{i=0}^m ai·pi = 0. Hence p0, ..., pm is linearly dependent.
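The step "x − 2 divides each pj" can be illustrated with synthetic division. This is a generic sketch, not tied to a specific pj from the exercise; the helper name divide_by_x_minus is made up.

```python
def divide_by_x_minus(p, r):
    """Synthetic division: divide the polynomial p (coefficients listed from
    the highest degree down) by (x - r); returns (quotient, remainder).
    The remainder equals p(r), so it is 0 exactly when p(r) = 0."""
    q, acc = [], 0
    for c in p:
        acc = acc * r + c
        q.append(acc)
    return q[:-1], q[-1]

# p(x) = x^2 - 4 vanishes at x = 2, so (x - 2) divides it exactly.
quotient, remainder = divide_by_x_minus([1, 0, -4], 2)
assert remainder == 0 and quotient == [1, 2]   # x^2 - 4 = (x - 2)(x + 2)
```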
Theorem 4.3.3 (2.33) Every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space.
Theorem 4.3.4 (2.34) Suppose V is finite-dimensional and U is a subspace of V. Then there is a subspace W of V such that V = U ⊕ W.
Proof. According to Theorem 4.1.3, U is also finite-dimensional. Hence, according to Theorem 4.3.2, U has a basis v1, ..., vm. Since v1, ..., vm is linearly independent, according to Theorem 4.3.3 there exist vectors w1, w2, ..., wn such that v1, ..., vm, w1, ..., wn is a basis of V. Let W = span(w1, ..., wn); then V = U ⊕ W.
Exercise 8 of Chapter 2B gives another rephrasing of Theorem 4.3.4: if there exist subspaces U, W of V with V = U ⊕ W and with bases {u1, ..., um} and {w1, ..., wn} respectively, then u1, ..., um, w1, ..., wn is a basis of V.
exercises. It basically says that spanning lists, bases, and linearly independent lists have addition and scalar multiplication properties.
Proof. It's not hard to see that if (v1, ..., vm) is a spanning list then so is (v1, ..., vi + w, v_{i+1}, ..., vm). If (v1, ..., vm) is linearly independent, we consider ai ∈ F with a1·v1 + a2·v2 + ... + ai·(vi + w) + ... + am·vm = 0, or
4.5. Exercises 2B
1. If (v1, ..., vm) is a basis of V then so is (v1, ..., v_{m-1}, vm + v1) according to Proposition 4.4.1. Hence, in order for V to have only one basis, we must have vm = vm + v1, i.e. v1 = 0. Since a linearly independent list cannot contain the vector 0, the only possibility is the empty list. Thus {0} is the only vector space that has only one basis.
3. (a) U = {(x1, x2, x3, x4, x5) ∈ R^5 : x1 = 3x2, x3 = 7x4}. A basis of U is (3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1).
(b) Adding two more vectors to the basis of U, namely (0, 1, 0, 0, 0) and (0, 0, 0, 1, 0), we get a basis of R^5.
(c) According to the proof of Theorem 4.3.4, pick W = span((0, 1, 0, 0, 0), (0, 0, 0, 1, 0)); then R^5 = U ⊕ W.
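A small sketch checking parts (a) and (b). The det helper below is a naive Leibniz expansion, written only for this 5×5 check; it is not from the text.

```python
from itertools import permutations

def det(M):
    """Determinant by Leibniz expansion (fine for a one-off 5x5 check)."""
    n, total = len(M), 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += (-1 if inv % 2 else 1) * prod
    return total

# (a) Each listed vector satisfies the defining conditions x1 = 3*x2, x3 = 7*x4.
basis_U = [(3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1)]
for x1, x2, x3, x4, x5 in basis_U:
    assert x1 == 3 * x2 and x3 == 7 * x4

# (b) Together with (0,1,0,0,0) and (0,0,0,1,0) they form a basis of R^5.
full = [list(v) for v in basis_U] + [[0, 1, 0, 0, 0], [0, 0, 0, 1, 0]]
assert det(full) != 0
```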
  4·Im(m) + 7·Im(n) = Im(z3),
  Im(m) + Im(n) = Im(z4).
Hence m, n can be uniquely determined. Thus the list is a basis.
(b) Proposition 4.4.1 can construct a basis of C^5 that contains the three vectors in (a):
(1, 6, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 4, 1, 2), (0, 0, 7, 1, 3), (0, 0, 0, 0, 1).
(c) According to the proof of Theorem 4.3.4, pick W = span((0, 1, 0, 0, 0), (0, 0, 0, 0, 1)) and we are done.
5. True. Note that 1, x, x^2, x^3 is a basis of P3(F), so according to Proposition 4.4.1, the list 1, x, x^2 + x^3, x^3 is also a basis of P3(F), and none of its vectors has degree 2.
6. True according to Proposition 4.4.1.
Proof. This is true according to Theorem 4.1.2. Consider a basis v1, ..., vm of a finite-dimensional vector space V; then v1, ..., vm is a list of length m which is both linearly independent and spanning. Any other basis is also a linearly independent list, so it must have length less than or equal to the length of any spanning list, in particular m. It is also a spanning list, so its length must be greater than or equal to the length of any linearly independent list, in particular m. Hence every basis has length m.
Definition 4.6.2. The dimension of a finite-dimensional vector space is the length of any basis of the vector space. The dimension of V (if V is finite-dimensional) is denoted by dim V.
Theorem 4.6.4 (2.39) Suppose V is finite-dimensional. Then every linearly independent list of vectors in V with length dim V is a basis of V.
Proof. According to Theorem 4.3.3, every linearly independent list can be extended to a basis. Since the list has length dim V, no vector needs to be added to the list to create a basis, i.e. the list is already a basis.
Theorem 4.6.5 (2.42) Suppose V is finite-dimensional. Then every spanning list of vectors in V with length dim V is a basis of V.
Theorem 4.6.6 (Dimension of a sum, 2.43) If U1 and U2 are subspaces of a finite-dimensional vector space, then
dim(U1 + U2) = dim U1 + dim U2 − dim(U1 ∩ U2).
it follows that the right hand side of the identity equals m + n + l. Hence it suffices to prove dim(U1 + U2) = m + n + l. Indeed, we will prove that

  v1, ..., vm, u1, ..., ul, w1, ..., wn    (2)

is a basis of U1 + U2. This list obviously spans U1 + U2. Before we prove that the list is linearly independent, notice that from Exercise 8 (2B), U1 = (U1 ∩ U2) ⊕ S where S = span(w1, ..., wn), so S ∩ (U1 ∩ U2) = {0} by Theorem 3.2.4 (direct sum of two subspaces).
Now, coming back to (2), assume the list is linearly dependent, and note that v1, ..., vm, u1, ..., ul is linearly independent. From the Linear Dependence Lemma 4.1.1 there exists 1 ≤ j ≤ n with wj ∈ span(v1, ..., vm, u1, ..., ul, w1, ..., w_{j-1}); hence there exists v ∈ S, v ≠ 0, with v ∈ U2. Note that S ⊂ U1, so v ∈ U1 ∩ U2. Hence v ∈ S ∩ (U1 ∩ U2) and v ≠ 0, a contradiction with the above argument. Thus the list is linearly independent. We obtain that list (2) is a basis of U1 + U2, so dim(U1 + U2) = m + n + l = dim U1 + dim U2 − dim(U1 ∩ U2).
4.7. Exercises 2C
1. Let v1, ..., vm be a basis of U; then this list is linearly independent and has length equal to dim V, so from Theorem 4.6.4 the list is a basis of V; therefore U = V.
2. We have dim R^2 = 2, and from Theorem 4.6.3, if U is a subspace of R^2 then dim U ≤ 2. If dim U = 0 then U = {0}; if dim U = 1 then U = span((x, y)) for some (x, y) ≠ 0, which is a line in R^2 through the origin; if dim U = 2 then U = R^2.
b) Adding x^2 to 1, x, (x − 6)^3, (x − 6)^4 we get a basis of P4(F), because this list is linearly independent and has length 5.
c) According to Theorem 4.3.4, we find W = span(x^2).
6. a) With U = {p ∈ P4(F) : p(2) = p(5)}, the list 1, (x − 2)(x − 5), (x − 2)(x − 5)x, (x − 2)(x − 5)x^2 is a basis of U. Indeed, this list is linearly independent and, similarly to the above argument, dim U ≤ 4, so we are done.
b) Adding x to the list, we obtain a linearly independent list of length 5, so it is a basis of P4(F).
c) According to Theorem 4.3.4, we find W = span(x).
7. a) With U = {p ∈ P4(F) : p(2) = p(5) = p(6)}, the list 1, (x − 2)(x − 5)(x − 6), (x − 2)(x − 5)(x − 6)x is a basis of U. Indeed, this list is linearly independent, and U is a subspace of U2 = {p ∈ P4(F) : p(2) = p(5)}. From Exercise 6 (2C) we know dim U2 = 4, so dim U ≤ 4. If dim U = 4 then from Theorem 4.6.4 we would have U = U2, but (x − 2)(x − 5) ∈ U2 while (x − 2)(x − 5) ∉ U, a contradiction. Thus dim U ≤ 3, so dim U = 3, which shows that 1, (x − 2)(x − 5)(x − 6), (x − 2)(x − 5)(x − 6)x is a basis of U.
b) Adding x, x^2 to the list, we get a linearly independent list of length 5, so the new list is a basis of P4(F).
c) According to Theorem 4.3.4 we find W = span(x, x^2).
8. a) With U = {p ∈ P4(F) : ∫_{-1}^{1} p = 0}, the list 2x, 3x^2 − 1, 4x^3, 5x^4 − 3x^2 is a basis of U. Indeed, this list is linearly independent, and dim U ≤ 4 because x^2 ∈ P4(F) but x^2 ∉ U; it follows that dim U = 4, which means the list is a basis of U according to Theorem 4.6.4.
An interesting remark: with V = {p ∈ P5(F) : p(−1) = p(1), p(0) = 0}, the list of antiderivatives (vanishing at 0) of the polynomials in a basis of U is a basis of V.
b) Adding 1 to the list we get a basis of P4(F).
c) Similarly, W = span(1).
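The claim in part a) that each listed polynomial integrates to 0 over [−1, 1] is easy to verify exactly, since ∫_{-1}^{1} x^k dx = 2/(k+1) for even k and 0 for odd k. A sketch (the helper name integrate_sym is made up):

```python
from fractions import Fraction

def integrate_sym(coeffs):
    """Exact value of the integral of p over [-1, 1], with coeffs listed from
    the constant term up; odd powers vanish by symmetry."""
    return sum(Fraction(2 * c, k + 1)
               for k, c in enumerate(coeffs) if k % 2 == 0)

# The four claimed basis vectors of U, coefficients from the constant term up:
candidates = [
    [0, 2],               # 2x
    [-1, 0, 3],           # 3x^2 - 1
    [0, 0, 0, 4],         # 4x^3
    [0, 0, -3, 0, 5],     # 5x^4 - 3x^2
]
assert all(integrate_sym(p) == 0 for p in candidates)
assert integrate_sym([0, 0, 1]) != 0   # x^2 is not in U, as used above
```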
9. If w ∉ span(v1, ..., vm) then w + v1, ..., w + vm is linearly independent in V, so dim span(w + v1, ..., w + vm) = m. If w ∈ span(v1, ..., vm) then w can be written uniquely as w = Σ_{i=1}^m ai·vi. Say the biggest i with ai ≠ 0 is j (1 ≤ j ≤ m). Consider U = span(w + v1, ..., w + v_{j-1}, w + v_{j+1}, ..., w + vm). The list w + v1, ..., w + v_{j-1}, w + v_{j+1}, ..., w + vm is linearly independent, because otherwise

b1(w + v1) + ... + b_{j-1}(w + v_{j-1}) + b_{j+1}(w + v_{j+1}) + ... + bm(w + vm) = 0

would hold for bi ∈ F not all 0. If Σ bi = 0, this contradicts the independence of v1, ..., vm; if Σ bi ≠ 0, it gives w = c1·v1 + ... + c_{j-1}·v_{j-1} + c_{j+1}·v_{j+1} + ... + cm·vm, a representation of w with no vj term, a contradiction since aj ≠ 0 in the unique representation of w. Thus w + v1, ..., w + v_{j-1}, w + v_{j+1}, ..., w + vm is a basis of U, so dim U = m − 1. On the other hand, U is a subspace of span(w + v1, ..., w + vm), so

dim span(w + v1, ..., w + vm) ≥ dim U = m − 1.
10. The list p0, p1, ..., pm is linearly independent. Indeed, consider P(x) = Σ_{i=0}^m ai·pi for ai ∈ F. We have [x^m]P = [x^m](am·pm), so if P(x) = 0 for all x ∈ R then [x^m]P = 0, i.e. [x^m](am·pm) = 0, which gives am = 0. This leads to [x^{m-1}]P = [x^{m-1}](a_{m-1}·p_{m-1}), and we keep going until ai = 0 for all 0 ≤ i ≤ m. Thus the list is linearly independent with length m + 1 = dim Pm(F), so from Theorem 4.6.4 it follows that p0, ..., pm is a basis of Pm(F).
11. From Theorem 4.6.6 it follows that dim(U ∩ W) = 0, i.e. U ∩ W = {0}. This gives R^8 = U ⊕ W.
15. Since dim V = n, there exists a basis v1, ..., vn of V. Pick Ui = span(vi); then of course dim Ui = 1 and V = U1 ⊕ U2 ⊕ ... ⊕ Un.
16. It’s not hard to prove U1 + . . . + Um is finite-dimensional and note that let vi,1 , vi,2 , . . . , vi,bi
be the basis of Ui then dim Ui = bi and
17. This is false: take three distinct lines in R^2 through the origin as U1, U2, U3; then the LHS = 2 and the RHS = 3.
Theorem 5.1.1 (3.5 Linear maps and basis of domain) Suppose v1, ..., vn is a basis of V and w1, ..., wn ∈ W. Then there exists a unique linear map T : V → W such that Tvj = wj for each j = 1, ..., n.
Proposition 5.1.2 (Linear maps take 0 to 0) Suppose T is a linear map from V to W. Then T(0) = 0.
5.2. Exercises 3A
1. We have T(0, 0, 0) = (0 + b, 0), but from Proposition 5.1.2, T(0, 0, 0) = (0, 0), so b = 0. On the other hand, we have T(1, 1, 1) = (1, 6 + c), so T(2, 2, 2) = 2T(1, 1, 1) = (2, 12 + 2c), but T(2, 2, 2) = (2, 12 + 8c). Thus c = 0. If b = c = 0 then it's obvious that T is linear.
6. Prove 3.9, page 56: Associativity: (T1T2)T3 = T1(T2T3). Let T3 ∈ L(U, V), T2 ∈ L(V, W), T1 ∈ L(W, Z). Then for any u ∈ U,

((T1T2)T3)(u) = (T1T2)(T3(u)) = T1(T2(T3(u))) = T1((T2T3)(u)) = (T1(T2T3))(u).
Note that w1, ..., w_{km} is linearly independent, so the above equation shows that λi·αj = 0 for all 1 ≤ i ≤ k, 1 ≤ j ≤ m. However, since v ≠ 0, there must exist some αl ≠ 0 (1 ≤ l ≤ m). This gives λi = 0 for all 1 ≤ i ≤ k, i.e. T1, ..., Tk is linearly independent. And this is true for any k ∈ N, so L(V, W) is infinite-dimensional.
13. Since v1, ..., vm is linearly dependent, there exists vi = Σ_{j≠i} αj·vj with αj ∈ F. Hence, pick arbitrary wj (1 ≤ j ≤ m, j ≠ i) and then pick wi so that wi ≠ Σ_{j≠i} αj·wj, the value that Tvi would be forced to equal. Done.
14. Let v1, ..., vm (m ≥ 2) be a basis of V. For i ≥ 3, let Svi = Tvi = vi, but Sv1 = Tv2 = −v1 and Sv2 = Tv1 = v2. Hence ST(v1) = Sv2 = v2 but TS(v1) = T(−v1) = −v2. Thus ST ≠ TS.
Proposition 5.3.2 (3.16) Let T ∈ L(V, W); then T is injective if and only if null T = {0}.
Proof. Suppose T is injective. Assume the contrary, that there exists v ≠ 0 with v ∈ null T. Then for any u ∈ V we have T(u + v) = Tu + Tv = Tu with u + v ≠ u, a contradiction since T is injective.
Conversely, suppose null T = {0}. For any u, v ∈ V with Tu = Tv we have T(u − v) = 0, so u − v ∈ null T = {0}, i.e. u = v. This shows T is injective.
Proposition 5.3.3 (3.19) If T ∈ L(V, W) then range T is a subspace of W.
Proof. We have T(0) = 0, so 0 ∈ range T. For any v1, v2 ∈ range T there exist u1, u2 ∈ V with Tu1 = v1 and Tu2 = v2, so T(u1 + u2) = v1 + v2; hence v1 + v2 ∈ range T. We also have λv1 = λTu1 = T(λu1), so λv1 ∈ range T for any λ ∈ F. Thus range T is a subspace of W.
Theorem 5.3.5 (3.22, Fundamental Theorem of Linear Maps) Suppose V is finite-dimensional and T ∈ L(V, W). Then range T is finite-dimensional and dim V = dim null T + dim range T.
According to Theorem 4.3.4, V = null T ⊕ W′ for some subspace W′ of V; let w1, ..., wk be a basis of W′. For any u ∈ V there uniquely exist u1 ∈ null T and w ∈ W′ with u = u1 + w. Hence Tu = Tu1 + Tw = Tw, so any v ∈ range T satisfies v = Tu = Tw = Σ_{i=1}^k αi·Twi. Thus Tw1, ..., Twk spans range T. Moreover Tw1, ..., Twk is linearly independent: if Σ_{i=1}^k αi·Twi = 0 then Σ_{i=1}^k αi·wi ∈ null T ∩ W′ = {0}, so all αi = 0. Hence dim range T = k, as desired.
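The identity dim V = dim null T + dim range T can be checked numerically for a concrete matrix map. This is a minimal sketch using exact rational row reduction; the helper name rank is made up and the sample matrix is arbitrary.

```python
from fractions import Fraction

def rank(A):
    """Rank of a matrix via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in A]
    n, m, r = len(A), len(A[0]), 0
    for c in range(m):
        piv = next((i for i in range(r, n) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, n):
            f = A[i][c] / A[r][c]
            A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

# T : F^4 -> F^3 given by a 3x4 matrix; here dim V = 4 (the column count).
A = [[1, 2, 3, 4],
     [2, 4, 6, 8],     # a dependent row, so the rank drops
     [0, 1, 1, 0]]
dim_range = rank(A)
dim_null = 4 - dim_range
assert dim_range == 2 and dim_null == 2
assert dim_null + dim_range == 4   # dim V = dim null T + dim range T
```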
Theorem 5.3.6 (3.23) Suppose V and W are finite-dimensional vector spaces such that dim V > dim W. Then no linear map from V to W is injective.
Proof. Assume the contrary: there is an injective linear map T ∈ L(V, W). Then null T = {0}, i.e. dim null T = 0. Thus, from Theorem 5.3.5, dim V = dim range T ≤ dim W since range T is a subspace of W, a contradiction. Thus there is no injective linear map from V to W.
Theorem 5.3.7 (3.24) Suppose V and W are finite-dimensional vector spaces such that dim V < dim W. Then no linear map from V to W is surjective.
Proof. For any T ∈ L(V, W), we have dim range T = dim V − dim null T ≤ dim V < dim W, which means range T ≠ W, so T is not surjective.
Theorem 5.3.8
(3.26) A homogeneous system of linear equations with more variables than equations has
nonzero solutions.
Proof. Using the notation from the textbook, this means n > m, i.e. dim F^n > dim F^m. It follows that no linear map from F^n to F^m is injective; in particular, for the map T ∈ L(F^n, F^m) defined by the system, dim null T > 0. This means the system of linear equations has nonzero solutions.
Theorem 5.3.9
(3.29) An inhomogeneous system of linear equations with more equations than variables
has no solution for some choice of constant terms.
Proof. We have dim F^n < dim F^m, so no linear map from F^n to F^m is surjective. Hence, for the map T ∈ L(F^n, F^m) defined by the system, range T ≠ F^m. It follows that there exists (c1, ..., cm) ∈ F^m with (c1, ..., cm) ∉ range T. Thus the system of linear equations with constant terms c1, ..., cm has no solution.
5.4. Exercises 3B
5.4.1. A way to construct (not) injective, surjective linear map
This is based on Theorem 5.1.1: T is injective if a basis of V is mapped one-to-one into a basis of W (which is also why we need dim V ≤ dim W for an injective linear map from V to W to exist).
5.4.2. Exercises
1. If T ∈ L(V, W) then from Theorem 5.3.5, dim V = 5. We can let V = F^5, and since range T is a subspace of W, 2 = dim range T ≤ dim W; hence we can let W = F^2. Then we can write T(x, y, z, t, w) = (x + y, t + w). With this, null T = {(x, y, z, t, w) ∈ F^5 : x + y = t + w = 0}. Hence a basis of null T is (1, −1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 0, 0, 1, −1). And range T = F^2.
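The chosen T and the basis of null T can be verified directly:

```python
def T(v):
    """The sample map T(x, y, z, t, w) = (x + y, t + w) from the exercise."""
    x, y, z, t, w = v
    return (x + y, t + w)

null_basis = [(1, -1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 0, 0, 1, -1)]
assert all(T(v) == (0, 0) for v in null_basis)
# T hits both standard basis vectors of F^2, so range T = F^2:
assert T((1, 0, 0, 0, 0)) == (1, 0) and T((0, 0, 0, 1, 0)) == (0, 1)
```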
6. Because that would mean dim null T = dim range T, so 5 = dim R^5 = 2·dim null T, a contradiction.
8. Notation as in Exercise 7, but this time m ≥ n ≥ 2. Construct T so that Tvi = wi for 1 ≤ i ≤ n − 2 and Tvj = j·w_{n-1} for n − 1 ≤ j ≤ m. Construct S so that Svi = wi for 1 ≤ i ≤ n − 2 and Svj = (m + 1 − j)·wn for n − 1 ≤ j ≤ m. It's obvious that T, S ∈ R = {T ∈ L(V, W) : T is not surjective}.
However, T + S is surjective, since (T + S)v1, (T + S)v2, ..., (T + S)vm is 2w1, ..., 2w_{n-2}, (n − 1)w_{n-1} + (m − n + 2)wn, n·w_{n-1} + (m − n + 1)wn, ..., which spans W. Thus R is not a subspace of L(V, W).
9. T is injective, so for any v ∈ span(v1, ..., vm) we have Tv = 0 if and only if v = 0; i.e., Σ_{i=1}^m αi·Tvi = 0 if and only if αi = 0 for all 1 ≤ i ≤ m; i.e., Tv1, ..., Tvm is linearly independent in W.
10. For any u ∈ range T there exists v ∈ V with v = Σ_{i=1}^m αi·vi and Tv = u, so u = Σ_{i=1}^m αi·Tvi. Thus any u ∈ range T can be expressed as a linear combination of Tv1, ..., Tvm, which shows that Tv1, ..., Tvm spans range T.
12. Since null T is a subspace of the finite-dimensional vector space V, according to Theorem 4.3.3 a basis v1, ..., vm of null T can be extended to a basis v1, ..., vm, u1, ..., un of V. Let U = span(u1, ..., un); then U ∩ null T = {0}. On the other hand, for any v ∈ V there exist u ∈ U and z ∈ null T with v = u + z, so Tv = T(u + z) = Tu. It follows that range T = {Tv : v ∈ V} = {Tu : u ∈ U}.
13. A basis for null T is (5, 1, 0, 0), (0, 0, 7, 1), so dim null T = 2. Combining with Theorem 5.3.5 we find dim range T = 2 = dim W, and since range T is a subspace of W, range T = W according to Theorem 4.6.4. Thus T is surjective.
14. Similarly, use Theorem 5.3.5 to find dim range T = 5 = dim R^5, so T is surjective.
15. Assume the contrary, that such a linear map T exists; then dim null T = 2. On the other hand, dim range T ≤ dim R^2 = 2, so dim null T + dim range T ≤ 4 < 5, a contradiction to Theorem 5.3.5.
16. Assume the contrary, that V is infinite-dimensional. Then according to Exercise 14 (2A) there exists a sequence v1, v2, ... such that v1, ..., vm is linearly independent for every positive integer m. WLOG say that v1, ..., vn is a basis of null T.
Since range T of a linear map on V is finite-dimensional, there exist u1, ..., uk ∈ V such that Tu1, ..., Tuk is a basis of range T. Note that there exists M > n such that all uj (1 ≤ j ≤ k) can be represented as linear combinations of the vi with i < M. Hence, since TvM ∈ range T,

TvM = Σ_{i=1}^k αi·Tui = Σ_{i=1}^{M-1} βi·Tvi.

It follows that vM − Σ_{i=1}^{M-1} βi·vi ∈ null T, so vM = Σ_{i=1}^{M-1} βi·vi + Σ_{j=1}^n γj·vj, a contradiction since v1, ..., vM is linearly independent. Thus V is finite-dimensional.
17. According to Theorem 5.3.6, if there exists an injective linear map from V to W then dim V ≤ dim W. Conversely, if dim V ≤ dim W, let v1, ..., vm be a basis of V and w1, ..., wn (n ≥ m) a basis of W; then according to Theorem 5.1.1 there exists a linear map T : V → W such that Tvj = wj for 1 ≤ j ≤ m. With this, we find null T = {0}, so T is injective.
18. According to Theorem 5.3.7, if there exists a surjective linear map from V onto W then dim V ≥ dim W. Conversely, if dim V ≥ dim W, let v1, ..., vm be a basis of V and w1, ..., wn (n ≤ m) a basis of W; then according to Theorem 5.1.1 there exists a linear map T : V → W such that Tvj = wj for 1 ≤ j ≤ n. With this, we find range T = W, so T is surjective.
19. If such a linear map T exists then according to Theorem 5.3.5, dim null T = dim U = dim V − dim range T ≥ dim V − dim W. Conversely, if dim U ≥ dim V − dim W, let u1, u2, ..., um, v1, ..., vn be a basis of V where u1, ..., um is a basis of U, and let w1, ..., wk be a basis of W; then k ≥ n. According to Theorem 5.1.1, there exists a linear map T ∈ L(V, W) with Tui = 0 and Tvj = wj for 1 ≤ j ≤ n, 1 ≤ i ≤ m. Since k ≥ n, we can show that null T = U.
20. If such a linear map S exists, then Tu = Tv implies STu = STv, i.e. u = v. Thus T is injective.
Conversely, if T is injective, let Tv1, ..., Tvm be a basis of range T. According to Theorem 5.1.1, there exists a linear map S ∈ L(W, V) with S(Tvi) = vi and S(wj) = 0, where Tv1, ..., Tvm, w1, ..., wn is a basis of W.
For any v ∈ V we have Tv = Σ_{i=1}^m αi·Tvi. Since T is injective, null T = {0}, so v = Σ_{i=1}^m αi·vi. Hence

STv = S(Σ_{i=1}^m αi·Tvi) = Σ_{i=1}^m αi·S(Tvi) = Σ_{i=1}^m αi·vi = v.
21. If such a linear map S exists, then for any w ∈ W we have T(Sw) = w, so w ∈ range T. Since range T is a subspace of W, it follows that range T = W, i.e. T is surjective.
Conversely, if T is surjective then range T = W. There exist v1, ..., vm ∈ V such that Tv1, ..., Tvm is a basis of W. Hence, according to Theorem 5.1.1, there exists a linear map S ∈ L(W, V) with S(Tvi) = vi for all 1 ≤ i ≤ m. It follows that for any w ∈ W with w = Σ_{i=1}^m αi·Tvi,

TSw = T(Σ_{i=1}^m αi·S(Tvi)) = T(Σ_{i=1}^m αi·vi) = Σ_{i=1}^m αi·Tvi = w.
22. For any v ∈ null ST we have Tv ∈ null S. Hence we define the linear map T′ ∈ L(null ST, null S) by T′v = Tv for v ∈ null ST. Note that null T′ is a subspace of null T, so dim null T′ ≤ dim null T; and range T′ is a subspace of null S, so dim range T′ ≤ dim null S. According to Theorem 5.3.5, we have

dim null ST = dim null T′ + dim range T′ ≤ dim null T + dim null S.
23. Note that range ST is a subspace of range S, so dim range ST ≤ dim range S. We can write range ST = {S(Tv) : v ∈ U} = {Sv : v ∈ range T}. Let u1, ..., um be a basis of range T; then Su1, ..., Sum spans range ST, so dim range ST ≤ dim range T. Thus,

dim range ST ≤ min{dim range S, dim range T}.
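The inequality can be checked on concrete matrices. This sketch uses made-up helpers matmul and rank (exact rational row reduction), and the sample matrices are arbitrary.

```python
from fractions import Fraction

def matmul(A, B):
    """Matrix product of A (p x q) and B (q x r)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(A):
    """Rank of a matrix via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in A]
    n, m, r = len(A), len(A[0]), 0
    for c in range(m):
        piv = next((i for i in range(r, n) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, n):
            f = A[i][c] / A[r][c]
            A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

S = [[1, 0, 0], [0, 1, 0]]        # rank 2
Tm = [[1, 2], [2, 4], [0, 0]]     # rank 1
ST = matmul(S, Tm)
assert rank(ST) <= min(rank(S), rank(Tm))
```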
24. (Note that V is not necessarily finite-dimensional, so we can't simply fix a basis for V.) If such an S exists, then for any v ∈ null T1 we have ST1v = S(0) = 0 = T2v, so v ∈ null T2. Thus null T1 ⊂ null T2.
Conversely, if null T1 ⊂ null T2, let T1v1, ..., T1vm, w1, ..., wn be a basis of W where T1v1, ..., T1vm is a basis of range T1. Let S be the linear map with S(T1vi) = T2vi and Swj = 0 for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
We prove that T2 = ST1. Indeed, for any v ∈ V we have T1v ∈ range T1, so T1v = Σ_{i=1}^m αi·T1vi, hence v − Σ_{i=1}^m αi·vi ∈ null T1 ⊂ null T2. Hence

T2(v) = T2(Σ_{i=1}^m αi·vi) = Σ_{i=1}^m αi·T2vi = Σ_{i=1}^m αi·ST1vi = S(Σ_{i=1}^m αi·T1vi) = S(T1v).
25. If such an S exists then for all v ∈ V we have T1v = T2Sv ∈ range T2, which gives range T1 ⊂ range T2. Conversely, with v1, ..., vm a basis of V, since range T1 ⊂ range T2 there exist ui with T1vi = T2ui for all 1 ≤ i ≤ m. According to Theorem 5.1.1, there exists a linear map S ∈ L(V, V) with Svi = ui for all 1 ≤ i ≤ m. This gives T2Svi = T2ui = T1vi for all 1 ≤ i ≤ m, which leads to T2Sv = T1v for all v ∈ V.
28. Let $\varphi_i(v)$ be the coefficient of $w_i$ in the representation of $Tv$. We prove that each $\varphi_i$ is linear. Indeed, since $T$ is linear, $T(u+v) = Tu + Tv$, i.e.
\[ \sum_{i=1}^m \varphi_i(u+v)w_i = \sum_{i=1}^m \big(\varphi_i(u) + \varphi_i(v)\big)w_i. \]
Since $w_1, \dots, w_m$ is linearly independent, $\varphi_i(u+v) = \varphi_i(u) + \varphi_i(v)$. Similarly, $\varphi_i(\lambda v) = \lambda\varphi_i(v)$ for any $v \in V$.
29. It's obvious that $\operatorname{null}\varphi + \{au : a \in \mathbf{F}\}$ is a subset of $V$. On the other hand, for any $v \in V$ let $a = \varphi(v)$. We know $\varphi(u) \ne 0$, so $x = v - \frac{a}{\varphi(u)}u \in V$. Hence we have
\[ a = \varphi(v) = \varphi\Big(x + \frac{a}{\varphi(u)}u\Big) = \varphi(x) + a, \]
so $\varphi(x) = 0$ and $v = x + \frac{a}{\varphi(u)}u \in \operatorname{null}\varphi + \{au : a \in \mathbf{F}\}$.
30. According to exercise 29, if there is $u \in V$ not in $\operatorname{null}\varphi_1 = \operatorname{null}\varphi_2$, let $c = \varphi_1(u)/\varphi_2(u)$. Any $v \in V$ can be represented as $v = z + au$ $(a \in \mathbf{F})$ where $z \in \operatorname{null}\varphi_1$. Thus $\varphi_1(v) = \varphi_1(au) = a\varphi_1(u) = c\,a\varphi_2(u) = c\varphi_2(v)$.
5.5. Exercises 3C
1. Let $v_1, \dots, v_n$ be a basis of $V$ and $w_1, \dots, w_m$ a basis of $W$, chosen so that $w_1, \dots, w_k$ (where $k = \dim \operatorname{range} T \le m$) is a basis of $\operatorname{range} T$; this is possible since $\operatorname{range} T$ is a subspace of $W$. We know that $Tv_1, \dots, Tv_n$ spans $\operatorname{range} T$, and for any $1 \le i \le k$ we have $w_i \in \operatorname{range} T$, hence $w_i$ can be written as a linear combination of the $Tv_j$ $(1 \le j \le n)$:
\[ w_i = \sum_{j=1}^n \alpha_j Tv_j = \sum_{j=1}^n \sum_{h=1}^m \alpha_j A_{h,j} w_h. \]
For the above to hold, the coefficient of $w_i$ on the right must be $1$, so there must exist some $j$ with $A_{i,j} \ne 0$, i.e. row $i$ of $\mathcal{M}(T)$ contains a nonzero entry. This is true for every $1 \le i \le k$, and these entries lie in distinct rows, so $\mathcal{M}(T)$ has at least $k = \dim \operatorname{range} T$ nonzero entries.
2. Let $v_1, \dots, v_4$ be a basis of $P_3(\mathbf{R})$ and $w_1, \dots, w_3$ a basis of $P_2(\mathbf{R})$. From the matrix, since $A_{i,j} = 0$ for $i \ne j$ and $A_{i,i} = 1$, we need $Dv_k = w_k$ for $1 \le k \le 3$ and $Dv_4 = 0$. So we can choose $v_i = x^i$ for $1 \le i \le 3$, $v_4 = 1$, and then $w_i = ix^{i-1}$ for $1 \le i \le 3$.
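The choice of bases in exercise 2 can be checked mechanically. A small sketch (assuming numpy is available) that represents polynomials by coefficient vectors in the monomial basis and verifies that the matrix of differentiation with respect to these bases has ones on the diagonal and zeros elsewhere:

```python
import numpy as np

# Check of exercise 2 under the solution's choice of bases:
# v_i = x^i (i = 1..3), v_4 = 1 for P_3(R), and w_i = i*x^(i-1) for P_2(R).
# Polynomials are stored as coefficient lists [c0, c1, ...] in the monomial basis.

def derive(coeffs):
    """Differentiate a polynomial given by its monomial coefficients."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

# Bases as coefficient vectors in the monomial basis.
v = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]  # x, x^2, x^3, 1
w = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]                          # 1, 2x, 3x^2

# Column j of M(D) holds the coordinates of D(v_j) in the basis w.
W = np.array(w, dtype=float).T
cols = [np.linalg.solve(W, np.array(derive(vj), dtype=float)) for vj in v]
M = np.column_stack(cols)

assert np.allclose(M, np.eye(3, 4))   # A[i][i] = 1, all other entries 0
```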
4. Since every entry of the first column of $\mathcal{M}(T)$ is $0$ except possibly $A_{1,1} = 1$, we can let $w_1 = Tv_1$. It doesn't matter what $w_2, \dots, w_n$ specifically are, as long as $w_1, \dots, w_n$ is a basis of $W$.
5. If there does not exist any $v = w_1 + \sum_{i=2}^n \alpha_i w_i$ in $\operatorname{range} T$, then since each $Tv_i \in \operatorname{range} T$, we must have $A_{1,i} = 0$ for all $1 \le i \le m$, i.e. the entries of the first row of $\mathcal{M}(T)$ are all $0$.
If there does exist such a $w = w_1 + \sum_{i=2}^n \alpha_i w_i$ in $\operatorname{range} T$, then there exists $v_1 \in V$ with $Tv_1 = w$. Extend $v_1$ to a basis $v_1, \dots, v_m$ of $V$ and write $Tv_i = u_i$ for $i \ge 2$. Note that if $v_1, \dots, v_m$ is a basis of $V$ then so is $v_1,\; v_2 - \lambda_2v_1,\; v_3 - \lambda_3v_1,\; \dots,\; v_m - \lambda_mv_1$, and $T(v_i - \lambda_iv_1) = u_i - \lambda_iw$. By choosing the right $\lambda_i$ to remove $w_1$ from $u_i$, we find that $v_1, v_2 - \lambda_2v_1, \dots, v_m - \lambda_mv_1$ is the desired basis of $V$, which gives $A_{1,k} = 0$ for all $2 \le k \le m$.
15. Already done in exercise 14.
Theorem 5.6.2 (3.59) Two finite-dimensional vector spaces over $\mathbf{F}$ are isomorphic if and only if they have the same dimension.
Theorem 5.6.3 (3.65) Suppose $T \in \mathcal{L}(V, W)$ and $v \in V$. Suppose $v_1, \dots, v_n$ is a basis of $V$ and $w_1, \dots, w_m$ is a basis of $W$. Then $\mathcal{M}(Tv) = \mathcal{M}(T)\mathcal{M}(v)$.
Theorem 5.6.4 (3.69) Suppose $V$ is finite-dimensional and $T \in \mathcal{L}(V)$. Then the following are equivalent:
(a) T is invertible.
(b) T is surjective.
(c) T is injective.
5.7. Exercises 3D
1. We have $(ST)(T^{-1}S^{-1}) = I$ and $(T^{-1}S^{-1})(ST) = I$, so $ST$ is invertible with $(ST)^{-1} = T^{-1}S^{-1}$.
3. If there exists such an invertible operator $T$ then $Su = Sv$ implies $Tu = Tv$, which gives $u = v$. Thus $S$ is injective.
Conversely, suppose $S$ is injective. Since $U$ is a subspace of $V$, a basis $v_1, \dots, v_m$ of $U$ can be extended to a basis $v_1, \dots, v_n$ ($m \le n$) of $V$. Since $S$ is injective, $Sv_1, \dots, Sv_m$ is linearly independent, so this list can be extended to a basis $Sv_1, \dots, Sv_m, u_{m+1}, \dots, u_n$ of $V$. Let $T$ be the linear map with $Tv_i = Sv_i$ for $1 \le i \le m$ and $Tv_i = u_i$ for $m < i \le n$. Then $T$ is surjective, hence invertible, and $Tu = Su$ for every $u \in U$.
4. If there exists such an invertible operator $S$ then for $u \in \operatorname{null} T_1$ we have $0 = T_1u = ST_2u$, so $T_2u = 0$. Similarly, if $u \in \operatorname{null} T_2$ then $u \in \operatorname{null} T_1$. Thus $\operatorname{null} T_1 = \operatorname{null} T_2$.
5. If there exists such an invertible operator $S$ then for any $u \in \operatorname{range} T_1$ we have $u = T_1v = T_2(Sv)$, so $u \in \operatorname{range} T_2$. Similarly, if $u \in \operatorname{range} T_2$ then $u \in \operatorname{range} T_1$. Thus $\operatorname{range} T_1 = \operatorname{range} T_2$.
Conversely, let $v_1, \dots, v_m$ be a basis of $V$; since $\operatorname{range} T_1 = \operatorname{range} T_2$, there exist $u_i$ with $T_1v_i = T_2u_i$ for all $1 \le i \le m$. According to theorem 5.1.1, there exists a linear map $S \in \mathcal{L}(V, V)$ with $Sv_i = u_i$ for all $1 \le i \le m$. It follows that $T_2Sv_i = T_2u_i = T_1v_i$ for all $1 \le i \le m$, which leads to $T_2Sv = T_1v$ for all $v \in V$. On the other hand, since $v_1, \dots, v_m$ is linearly independent, so is $u_1, \dots, u_m$, hence it is a basis of $V$. Therefore $S$ is surjective, hence invertible.
6. Suppose such $R, S$ exist. Let $v_1, \dots, v_m$ be a basis of $\operatorname{null} T_1$; then $ST_2Rv_i = T_1v_i = 0$. Since $S$ is invertible, $S$ is injective, which means $T_2Rv_i = 0$. Thus $Rv_i \in \operatorname{null} T_2$, and $Rv_1, \dots, Rv_m$ is linearly independent, so $\dim \operatorname{null} T_2 \ge \dim \operatorname{null} T_1$. Since $R$ is invertible, $R$ is surjective, so let $Ru_1, \dots, Ru_n$ be a basis of $\operatorname{null} T_2$. Then $T_1u_i = S(T_2Ru_i) = S(0) = 0$, so $u_i \in \operatorname{null} T_1$. Since $u_1, \dots, u_n$ is linearly independent, $\dim \operatorname{null} T_2 \le \dim \operatorname{null} T_1$. Thus $\dim \operatorname{null} T_1 = \dim \operatorname{null} T_2$.
Conversely, suppose $\dim \operatorname{null} T_1 = \dim \operatorname{null} T_2$. Let $v_1, \dots, v_m$ be a basis of $V$ where $v_{n+1}, \dots, v_m$ ($n \le m$) is a basis of $\operatorname{null} T_1$, and let $u_1, \dots, u_m$ be another basis of $V$ where $u_{n+1}, \dots, u_m$ is a basis of $\operatorname{null} T_2$. It follows that the two lists $T_1v_1, \dots, T_1v_n$ and $T_2u_1, \dots, T_2u_n$ are linearly independent, so they can be extended to two bases of $W$: $T_1v_1, \dots, T_1v_n, w_1, \dots, w_k$ and $T_2u_1, \dots, T_2u_n, z_1, \dots, z_k$.
According to theorem 5.1.1, there exists a linear map $R \in \mathcal{L}(V)$ with $Rv_i = u_i$ for $1 \le i \le m$. Since $u_1, \dots, u_m$ is a basis of $V$, $R$ is surjective, hence invertible. There also exists a linear map $S \in \mathcal{L}(W)$ with $S(T_2u_i) = T_1v_i$ for $1 \le i \le n$ and $Sz_j = w_j$ for $1 \le j \le k$. We can see that $S$ is also surjective, hence invertible.
Now we prove $T_1 = ST_2R$. Indeed, for any $v = \sum_{i=1}^m \alpha_i v_i$ in $V$, we have
\begin{align*}
ST_2Rv &= ST_2\Big(\sum_{i=1}^m \alpha_i u_i\Big) = ST_2\Big(\sum_{i=1}^n \alpha_i u_i\Big) && (\text{since } T_2u_j = 0 \text{ for } m \ge j > n)\\
&= S\Big(\sum_{i=1}^n \alpha_i T_2u_i\Big) = \sum_{i=1}^n \alpha_i T_1v_i\\
&= \sum_{i=1}^m \alpha_i T_1v_i = T_1v && (\text{since } T_1v_j = 0 \text{ for } m \ge j > n).
\end{align*}
13. Note that $RST \in \mathcal{L}(V)$ and $RST$ is surjective, so according to theorem 5.6.4, $RST$ is an invertible operator. According to exercise 9 (3D), since $RST$ is invertible, $R$ and $ST$ are also invertible, and hence $S, T, R$ are all invertible. Thus $S$ is injective.
14. $\mathcal{M}$ is linear. If $\mathcal{M}(u) = \mathcal{M}(v)$ then $u$ and $v$ have the same coordinates with respect to the chosen basis, so $u = v$; thus $\mathcal{M}$ is injective. For any $A \in \mathbf{F}^{n,1}$ there exists $v \in V$ with $\mathcal{M}(v) = A$, so $\mathcal{M}$ is surjective. Thus $\mathcal{M}$ is an isomorphism of $V$ onto $\mathbf{F}^{n,1}$.
15. Let $A = \mathcal{M}(T)$ with respect to the standard bases of $\mathbf{F}^{n,1}$ and $\mathbf{F}^{m,1}$. Then for any $v \in \mathbf{F}^{n,1}$ we have $\mathcal{M}(v) = v$ and $\mathcal{M}(Tv) = Tv$. Hence, according to theorem 5.6.3, we have $Tv = \mathcal{M}(Tv) = \mathcal{M}(T)\mathcal{M}(v) = Av$.
16. Let $v_1, \dots, v_n$ be a basis of $V$. First we prove that for each $1 \le i \le n$ there exists $C_i \in \mathbf{F}$ with $Tv_i = C_iv_i$. Indeed, we can choose the linear map $S$ with $Sv_i = v_i$ and $Sv_j = 2v_j$ for $1 \le j \le n$, $j \ne i$. This gives $STv_i = T(Sv_i) = Tv_i$. Hence, if $Tv_i = \sum_{k=1}^n \alpha_kv_k$, then
\[ \sum_{k=1}^n \alpha_kv_k = Tv_i = STv_i = S\Big(\sum_{k=1}^n \alpha_kv_k\Big) = \alpha_iv_i + 2\sum_{k \ne i} \alpha_kv_k. \]
It follows that $\sum_{k \ne i} \alpha_kv_k = 0$, which happens only when $\alpha_k = 0$ for $k \ne i$. Thus $Tv_i = \alpha_iv_i = C_iv_i$.
Next we prove that for any $1 \le i, j \le n$ with $j \ne i$ we have $C_i = C_j = C$. Indeed, pick an invertible operator $S$ with $Sv_i = v_j$; then $TSv_i = Tv_j = C_jv_j$ but $STv_i = S(C_iv_i) = C_iv_j$. Hence $C_iv_j = C_jv_j$, so $C_i = C_j = C$. Thus $Tv_i = Cv_i$ for all $1 \le i \le n$, i.e. $T$ is a scalar multiple of $I$.
17. (For this it's better to think of linear maps as matrices.) Let $v_1, \dots, v_n$ be a basis of $V$. Denote by $T_{i,j}$ the linear map with $T_{i,j}v_j = v_i$ and $T_{i,j}v_k = 0$ for all $k \ne j$, $1 \le k \le n$ (imagine this as the $n$-by-$n$ matrix with $0$ in all entries except for a $1$ in the $(i,j)$-th entry).
If every $T \in E$ is $0$ then $E = \{0\}$. If there exists $T \ne 0$, WLOG $Tv_1 = \sum_{i=1}^n \alpha_iv_i$ with $\alpha_1 \ne 0$. We have $T_{i,1}TT_{1,1} \in E$, and $T_{i,1}TT_{1,1} = \alpha_1T_{i,1}$, so $T_{i,1} \in E$ for all $1 \le i \le n$. Since $T_{i,1} \in E$, also $T_{i,j} = T_{i,1}T_{1,j} \in E$ for all $1 \le j \le n$. Thus $T_{i,j} \in E$ for all $1 \le i, j \le n$. Since the $T_{i,j}$ are linearly independent and $\dim \mathcal{L}(V) = n^2$, we get $E = \mathcal{L}(V)$.
18. Define a map $T$ from $V$ to $\mathcal{L}(\mathbf{F}, V)$ by $Tv = R_v \in \mathcal{L}(\mathbf{F}, V)$, where $R_v(1) = v$. It's not hard to verify that $T$ is linear. If $Tu = Tv$ then $R_u = R_v$, so $R_u(1) = R_v(1)$, i.e. $u = v$; thus $T$ is injective. For any $R \in \mathcal{L}(\mathbf{F}, V)$ we have $R(1) \in V$ and $T(R(1)) = R$; thus $T$ is surjective. In conclusion, $T$ is an isomorphism, so $V$ and $\mathcal{L}(\mathbf{F}, V)$ are isomorphic vector spaces.
19. (a) Take any $q \in \mathcal{P}(\mathbf{R})$; we may regard $q \in \mathcal{P}_m(\mathbf{R})$ for some $m$. Define $T' : \mathcal{P}_m(\mathbf{R}) \to \mathcal{P}_m(\mathbf{R})$ by $T'p = Tp$ for $p \in \mathcal{P}_m(\mathbf{R})$. Since $\deg Tp \le \deg p \le m$, $T'$ is indeed an operator on the finite-dimensional space $\mathcal{P}_m(\mathbf{R})$. Since $T$ is injective, $T'$ is also injective, hence invertible. Therefore for $q \in \mathcal{P}_m(\mathbf{R})$ there exists $p \in \mathcal{P}_m(\mathbf{R})$ with $T'p = Tp = q$. It follows that $T$ is surjective.
(b) Assume the contrary: there is a nonzero $p$ with $\deg Tp < \deg p$. Let $Tp = q$. From (a), there exists $r \in \mathcal{P}(\mathbf{R})$ with $\deg r \le \deg q$ and $Tr = q$. It follows that $Tp = Tr$, so $p = r$. However $\deg p > \deg Tp = \deg q \ge \deg r$, a contradiction. Thus we must have $\deg Tp = \deg p$ for every nonzero $p \in \mathcal{P}(\mathbf{R})$.
20. Denote by $A$ the $n$-by-$n$ matrix with $(i,j)$-th entry $A_{i,j}$. Define a linear map $T \in \mathcal{L}(\mathbf{F}^{n,1})$ by $Tv = Av$. Then (a) is equivalent to $T$ being injective and (b) is equivalent to $T$ being surjective. Thus (a) is equivalent to (b) according to theorem 5.6.4.
(a) $v - w \in U$.
(b) $v + U = w + U$.
(c) $(v + U) \cap (w + U) \ne \emptyset$.
(b) T̃ is injective,
Remark 5.8.4. Be careful with the vector space $V/U$: more than one $v \in V$ can give the same affine subset $v + U$ of $V$. So if we define a map $T$ from $V/U$ to some $W$, be sure that $T$ makes sense, i.e. if $v + U = u + U$ then $T(v + U) = T(u + U)$.
5.9. Exercises 3E
1. If $T$ is a linear map then for any $u, v \in V$ we have $(u, Tu) + (v, Tv) = (u + v, T(u + v)) \in \operatorname{graph} T$ and $\lambda(u, Tu) = (\lambda u, T(\lambda u)) \in \operatorname{graph} T$. We also have $(0, T0) = (0, 0) \in \operatorname{graph} T$, so the graph of $T$ is a subspace of $V \times W$.
Conversely, suppose the graph of $T$ is a subspace of $V \times W$. Then $(u, Tu) + (v, Tv) = (u + v, Tu + Tv)$ lies in the graph; since the only point of the graph with first coordinate $u + v$ is $(u + v, T(u + v))$ (because $T$ is a function), we get $T(u + v) = Tu + Tv$. Similarly, $\lambda(v, Tv) = (\lambda v, \lambda Tv)$ lies in the graph, so $T(\lambda v) = \lambda Tv$. Thus $T$ is linear.
2. Let $(v_{1,1}, v_{1,2}, \dots, v_{1,m}), (v_{2,1}, \dots, v_{2,m}), \dots, (v_{k,1}, \dots, v_{k,m})$ be a basis of $V_1 \times V_2 \times \dots \times V_m$. Assume the contrary, that $V_1$ (say) is infinite-dimensional. Then there exists $v \in V_1$ with $v \notin \operatorname{span}(v_{1,1}, v_{2,1}, \dots, v_{k,1})$. It follows that $(v, u_2, \dots, u_m)$, for any $u_i \in V_i$, is not a linear combination of the $(v_{i,1}, \dots, v_{i,m})$, $1 \le i \le k$, a contradiction. Thus each $V_i$ is finite-dimensional.
3. Let $V = \mathcal{P}(\mathbf{R})$, $U_1 = \operatorname{span}(1, x, x^2)$ and $U_2 = \{p \in \mathcal{P}(\mathbf{R}) : \deg p \ge 2\} \cup \{0\}$. We find $U_1 + U_2$ is not a direct sum, since $1 + x + 2x^2 + x^3 = (1 + x + x^2) + (x^2 + x^3) = (1 + x) + (2x^2 + x^3)$.
Define a linear map $T$ from $U_1 \times U_2$ onto $U_1 + U_2$ by $T(u_1, u_2) = u_1 + xu_2$. One checks that $T$ is surjective. If $T(u_1, u_2) = T(v_1, v_2)$ then $u_1 + xu_2 = v_1 + xv_2$. Note that $\deg u_1 < \deg xu_2$ for any $u_1 \in U_1$ and nonzero $u_2 \in U_2$, so this only happens when $u_1 = v_1$, $u_2 = v_2$. Thus $T$ is injective, hence an isomorphism from $U_1 \times U_2$ onto $U_1 + U_2$.
4. Define a map $S$ from $\mathcal{L}(V_1 \times \dots \times V_m, W)$ to $\mathcal{L}(V_1, W) \times \dots \times \mathcal{L}(V_m, W)$ as follows: for any $T \in \mathcal{L}(V_1 \times \dots \times V_m, W)$, set $S(T) = (R_1, \dots, R_m)$ where $R_i$ is the map from $V_i$ to $W$ defined by $R_iv_i = Tv$ for $v = (0, \dots, 0, v_i, 0, \dots, 0) \in V_1 \times \dots \times V_m$. It's not hard to check that each $R_i$ is a linear map.
Now we prove that $S$ is a linear map. Say $S(T_1) = (R_1, \dots, R_m)$ and $S(T_2) = (U_1, \dots, U_m)$; then $S(T_1) + S(T_2) = (R_1 + U_1, \dots, R_m + U_m)$. On the other hand, for any $1 \le i \le m$ and any $v_i \in V_i$ we have $(R_i + U_i)v_i = R_iv_i + U_iv_i = T_1v + T_2v = (T_1 + T_2)v$ where $v = (0, \dots, 0, v_i, 0, \dots, 0)$. It follows that $S(T_1 + T_2) = (R_1 + U_1, \dots, R_m + U_m) = S(T_1) + S(T_2)$; homogeneity is similar.
For any $(R_1, \dots, R_m) \in \mathcal{L}(V_1, W) \times \dots \times \mathcal{L}(V_m, W)$ we can define the corresponding $T \in \mathcal{L}(V_1 \times \dots \times V_m, W)$ by $T(v_1, \dots, v_m) = R_1v_1 + \dots + R_mv_m$; thus $S$ is surjective. If $S(T_1) = S(T_2)$ then $(R_1, \dots, R_m) = (U_1, \dots, U_m)$, so $R_i = U_i$, i.e. $T_1v = T_2v$ for every $v = (0, \dots, 0, v_i, 0, \dots, 0)$ and all $1 \le i \le m$, which leads to $T_1 = T_2$. Thus $S$ is injective, hence an isomorphism from $\mathcal{L}(V_1 \times \dots \times V_m, W)$ onto $\mathcal{L}(V_1, W) \times \dots \times \mathcal{L}(V_m, W)$, so these two vector spaces are isomorphic.
5. Similarly to the previous exercise 4, define a linear map $S$ from $\mathcal{L}(V, W_1 \times \dots \times W_m)$ to $\mathcal{L}(V, W_1) \times \dots \times \mathcal{L}(V, W_m)$ as follows: for $T \in \mathcal{L}(V, W_1 \times \dots \times W_m)$, set $S(T) = (R_1, \dots, R_m)$ where $R_i \in \mathcal{L}(V, W_i)$ is defined by $R_iv = w_i$ whenever $Tv = (w_1, \dots, w_m)$.
14. (a) If $(x_1, x_2, \dots) \in U$ then there exists $M_1 \in \mathbf{N}$ so that $x_i = 0$ for all $i \ge M_1$. Similarly, if $(y_1, y_2, \dots) \in U$ there exists such an $M_2$. Then for $(x_1 + y_1, x_2 + y_2, \dots)$ we have $x_i + y_i = 0$ for all $i \ge \max\{M_1, M_2\}$, which means $(x_1 + y_1, x_2 + y_2, \dots) \in U$. Similarly $(\lambda x_1, \lambda x_2, \dots) \in U$, and $(0, 0, \dots) \in U$, so $U$ is a subspace of $\mathbf{F}^\infty$.
(b) Denote by $p_i$ the $i$-th prime number and let $v_i \in \mathbf{F}^\infty$ be the sequence whose $kp_i$-th coordinate is $2$ for every $k \in \mathbf{N}$, $k \ge 1$, and whose remaining coordinates are all $1$. We will prove that $v_1 + U, \dots, v_m + U$ is linearly independent for any $m \ge 1$, which implies that $\mathbf{F}^\infty/U$ is infinite-dimensional according to exercise 14 (2A).
Indeed, $(\sum_{i=1}^m \alpha_iv_i) + U$ equals $U$ exactly when $v = \sum_{i=1}^m \alpha_iv_i \in U$ according to theorem 5.8.1, which means there exists $M \in \mathbf{N}$ so that the $x$-th coordinate of $v$ is $0$ for all $x \ge M$. By the definition of the $v_i$, the $x$-th coordinate of $v$ is
• $\sum_{i=1}^m \alpha_i$ if $x \equiv 1 \pmod{p_1 \cdots p_m}$;
• $2\alpha_j + \sum_{i \ne j} \alpha_i$ if $p_j \mid x$ and $x \equiv 1 \pmod{\prod_{i \ne j} p_i}$.
Both kinds of $x \ge M$ exist (by the Chinese remainder theorem), so $\sum_{i=1}^m \alpha_i = 2\alpha_j + \sum_{i \ne j} \alpha_i = 0$ for every $1 \le j \le m$. Subtracting gives $\alpha_j = 0$ for all $1 \le j \le m$. In other words, $v_1 + U, v_2 + U, \dots, v_m + U$ is linearly independent in $\mathbf{F}^\infty/U$, as desired.
15. According to theorem 5.8.3, $V/(\operatorname{null}\varphi)$ is isomorphic to $\operatorname{range}\varphi$. We find $\operatorname{range}\varphi = \mathbf{F}$: since $\varphi \ne 0$ there is $u$ with $\varphi(u) \ne 0$, and for any $c \in \mathbf{F}$ we have $\varphi\big((c/\varphi(u))u\big) = c$. Hence $\dim \operatorname{range}\varphi = \dim \mathbf{F} = 1$, which gives $\dim V/(\operatorname{null}\varphi) = 1$.
18. If there exists such a linear map $S$ then $0 = T(0) = S(\pi 0) = S(U)$, since $U = 0 + U$ is the additive identity of $V/U$. For any $u \in U$ we have $T(u) = S(\pi u) = S(U) = 0$, so $u \in \operatorname{null} T$. Thus $U \subset \operatorname{null} T$.
Theorem 5.10.4 (3.107) Suppose $V$ and $W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Then $\operatorname{null} T' = (\operatorname{range} T)^0$ and $\dim \operatorname{null} T' = \dim \operatorname{null} T + \dim W - \dim V$.
Theorem 5.10.6 (3.109) Suppose $V$ and $W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Then $\operatorname{range} T' = (\operatorname{null} T)^0$ and $\dim \operatorname{range} T' = \dim \operatorname{range} T$.
Theorem 5.10.7 (3.117) Suppose $V$ and $W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Then $\dim \operatorname{range} T$ equals the column rank of $\mathcal{M}(T)$.
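Theorem 5.10.7 (together with the fact that row rank equals column rank) can be illustrated numerically. A minimal sketch with numpy (assumed available); the example matrix is an arbitrary choice:

```python
import numpy as np

# Illustration of theorem 5.10.7 (3.117): for the map T(x) = Ax,
# dim range T equals the column rank of A, which equals the row rank.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # second row = 2 * first row
              [0., 1., 1.]])   # third column = first column + second column

col_rank = np.linalg.matrix_rank(A)      # dim of the column space = dim range T
row_rank = np.linalg.matrix_rank(A.T)    # dim of the row space

assert col_rank == row_rank == 2
```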
5.11. Exercises 3F
1. Let $\varphi \in \mathcal{L}(V, \mathbf{F})$. If $\varphi$ is not the zero map then there exists $v \in V$ with $\varphi v = c \ne 0$. Hence for any $\lambda \in \mathbf{F}$ we have $\varphi\big((\lambda/c)v\big) = \lambda$, so $\varphi$ is surjective.
2. Linear functionals on $\mathbf{R}^{[0,1]}$: $\varphi_1(f) = f(0.5)$, $\varphi_2(f) = f(0.1)$, $\varphi_3(f) = f(0.2)$.
3. Pick $\varphi \in V'$ with $\varphi(v) = 1$; the values of $\varphi$ off $\operatorname{span}(v)$ can be picked arbitrarily (extend $v$ to a basis and define $\varphi$ on that basis).
6. Write $\Gamma : V' \to \mathbf{F}^m$, $\Gamma(\varphi) = (\varphi(v_1), \dots, \varphi(v_m))$, for the map in question.
(a) Suppose $v_1, \dots, v_m$ spans $V$. Consider $\varphi \in V'$ with $\Gamma(\varphi) = 0$; then $\varphi(v_i) = 0$ for all $1 \le i \le m$, which leads to $\varphi(v) = 0$ for all $v \in V$ since $v_1, \dots, v_m$ spans $V$. Hence $\varphi = 0$, i.e. $\Gamma$ is injective.
Conversely, suppose $\Gamma$ is injective. If $v_1, \dots, v_m$ does not span $V$ then there exists $v_{m+1} \in V$ with $v_{m+1} \notin \operatorname{span}(v_1, \dots, v_m)$. Hence there exists $\varphi \in V'$ with $\varphi(v_{m+1}) = 1$ and $\varphi(v_i) = 0$ for $1 \le i \le m$. We find $\Gamma(\varphi) = 0$ but $\varphi \ne 0$, contradicting the injectivity of $\Gamma$. Thus $v_1, \dots, v_m$ spans $V$.
(b) If $v_1, \dots, v_m$ is linearly independent then for any $(x_1, \dots, x_m) \in \mathbf{F}^m$ we can construct $\varphi \in V'$ with $\varphi(v_i) = x_i$. Hence $\Gamma$ is surjective.
If $\Gamma$ is surjective but $v_1, \dots, v_m$ is linearly dependent, say WLOG $v_m = \sum_{i=1}^{m-1} \alpha_iv_i$, then
\[ \Gamma(\varphi) = \Big(\varphi(v_1), \dots, \varphi(v_{m-1}), \sum_{i=1}^{m-1} \alpha_i\varphi(v_i)\Big), \]
whose last coordinate is determined by the first $m-1$, so $\Gamma$ does not cover all of $\mathbf{F}^m$, a contradiction. Thus $v_1, \dots, v_m$ is linearly independent.
7. Because $\varphi_j(x^j) = \dfrac{(x^j)^{(j)}(0)}{j!} = 1$ and $\varphi_j(x^i) = 0$ for $i \ne j$.
10. We have $(S + T)'(\varphi) = \varphi \circ (S + T) = \varphi \circ S + \varphi \circ T = S'(\varphi) + T'(\varphi) = (S' + T')(\varphi)$. Thus $(S + T)' = S' + T'$. Similarly, $(\lambda T)' = \lambda T'$.
11. If there exist such $(c_1, \dots, c_m) \in \mathbf{F}^m$ and $(d_1, \dots, d_n) \in \mathbf{F}^n$ then $A_{\cdot,k} = d_kC$ where $C$ is the $m$-by-$1$ matrix with $C_{i,1} = c_i$. Hence the rank of $A$, i.e. the dimension of $\operatorname{span}(A_{\cdot,1}, A_{\cdot,2}, \dots, A_{\cdot,n})$, is $1$.
If the rank of $A$ is $1$ then the dimension of $\operatorname{span}(A_{\cdot,1}, A_{\cdot,2}, \dots, A_{\cdot,n})$ is $1$, which means $A_{\cdot,k} = d_kC$ for some fixed $C \in \mathbf{F}^{m,1}$ and scalars $d_k$; taking $c_i = C_{i,1}$ gives $A_{i,k} = c_id_k$.
exer:3F:12 12. Let I be the identity map on V then the dual map I 0 (') = ' I = ' so I 0 is the identity
map on V 0 .
13. (a) $(T'(\varphi_1))(x, y, z) = (\varphi_1 \circ T)(x, y, z) = \varphi_1(4x + 5y + 6z,\; 7x + 8y + 9z) = 4x + 5y + 6z$, and $(T'(\varphi_2))(x, y, z) = 7x + 8y + 9z$.
(b) According to exercise 9 (3F), since $T'(\varphi_1) \in (\mathbf{R}^3)'$ and $\psi_1, \psi_2, \psi_3$ is the dual basis of the standard basis of $\mathbf{R}^3$, part (a) gives $T'(\varphi_1) = 4\psi_1 + 5\psi_2 + 6\psi_3$ and $T'(\varphi_2) = 7\psi_1 + 8\psi_2 + 9\psi_3$.
14. (a) We have $(T'(\varphi))(p) = (\varphi \circ T)(p) = \varphi(Tp) = \varphi\big(x^2p(x) + p''(x)\big) = \big(x^2p(x) + p''(x)\big)'(4) = \big(2xp(x) + x^2p'(x) + p'''(x)\big)(4) = 8p(4) + 16p'(4) + p'''(4)$.
(b) $(T'(\varphi))(x^3) = \varphi(T(x^3)) = \varphi(x^5 + 6x) = \int_0^1 (x^5 + 6x)\,dx = \frac{1}{6} + 3 = \frac{19}{6}$.
15. If $T = 0$ then $T'(\varphi) = \varphi \circ T = 0$ for any $\varphi \in W'$. Hence $T' = 0$.
Conversely, suppose $T' = 0$ and let $w_1, \dots, w_m$ be a basis of $W$. For any $v$, write $T(v) = \sum_{i=1}^m \alpha_iw_i$. Choose $\varphi \in W'$ with $\varphi(w_i) = \alpha_i$. Then
\[ 0 = (T'(\varphi))(v) = \varphi(Tv) = \sum_{i=1}^m \alpha_i\varphi(w_i) = \sum_{i=1}^m \alpha_i^2. \]
Hence every $\alpha_i = 0$, so $Tv = 0$ for all $v \in V$, i.e. $T = 0$.
Now we will prove that such a linear map $T$ satisfies the required identity for $T'$ on the dual basis $\psi_1, \dots, \psi_m$. Indeed, we find that
\[ (T'(\psi_j))(v_k) = \psi_j(Tv_k) = \psi_j\Big(\sum_{i=1}^m A_{i,k}w_i\Big) = A_{j,k} \qquad (\text{because } \psi_j(w_i) = 0 \text{ for } i \ne j).
\]
25. Let $S = \{v \in V : \varphi(v) = 0 \text{ for every } \varphi \in U^0\}$. If $v \in U$ then obviously $v \in S$; thus $U \subset S$. If $v \in S$ but $v \notin U$, then there exists a basis $u_1, \dots, u_n, v, v_1, \dots, v_m$ of $V$ where $u_1, \dots, u_n$ is a basis of $U$. Hence there exists $\varphi \in V'$ with $\varphi(u_i) = 0$ for all $i$, $\varphi(v) = 1$, and $\varphi(v_j) = 0$ for all $j$. Then $\varphi \in U^0$ but $\varphi(v) \ne 0$, so $v \notin S$, a contradiction. Therefore $v \in S$ implies $v \in U$. In conclusion, $U = S = \{v \in V : \varphi(v) = 0 \text{ for every } \varphi \in U^0\}$.
26. Write $\Gamma$ for the given subspace of $V'$ and let $S = \{v \in V : \varphi(v) = 0 \text{ for every } \varphi \in \Gamma\}$. If $\varphi \in \Gamma$ then $\varphi(v) = 0$ for every $v \in S$, so $\varphi \in S^0$. Thus $\Gamma \subset S^0$.
Let $v_1, \dots, v_n$ be a basis of $S$, extended to a basis $v_1, \dots, v_n, u_1, \dots, u_m$ of $V$. Let $\varphi_1, \dots, \varphi_n, \psi_1, \dots, \psi_m$ be the corresponding dual basis of $V'$. For any $\varphi \in S^0 \subset V'$ we can write
\[ \varphi = \sum_{i=1}^n \alpha_i\varphi_i + \sum_{j=1}^m \beta_j\psi_j. \]
Since $v_k \in S$, we have
\[ \varphi(v_k) = \sum_{i=1}^n \alpha_i\varphi_i(v_k) + \sum_{j=1}^m \beta_j\psi_j(v_k) = \alpha_k. \]
On the other hand, $\varphi \in S^0$ and $v_k \in S$ give $\varphi(v_k) = 0$. Thus $\alpha_k = 0$ for every $1 \le k \le n$, so $\varphi \in \operatorname{span}(\psi_1, \dots, \psi_m)$. Hence $\psi_1, \dots, \psi_m$ spans $S^0$; as it is also linearly independent, it is a basis of $S^0$, so $\dim S^0 = \dim V - \dim S$. Since $S$ is the joint null space of $\Gamma$, exercise 30 (3F) gives $\dim S = \dim V - \dim \Gamma$, hence $\dim \Gamma = \dim S^0$; combined with $\Gamma \subset S^0$ this yields $\Gamma = S^0$.
27. According to theorem 5.10.4, $\operatorname{null} T' = (\operatorname{range} T)^0 = \{\lambda\varphi : \lambda \in \mathbf{R}\}$. According to exercise 25 (3F) then
30. Let $U = (\operatorname{null}\varphi_1) \cap (\operatorname{null}\varphi_2) \cap \dots \cap (\operatorname{null}\varphi_m)$; then $U$ is a subspace of $V$ and $U^0 = \{\varphi \in V' : \varphi(u) = 0 \text{ for any } u \in U\}$. We find that $\varphi_i \in U^0$ for every $1 \le i \le m$. Since $\varphi_1, \dots, \varphi_m$ is linearly independent, $\dim U^0 \ge m$. Thus $\dim U = \dim V - \dim U^0 \le \dim V - m$.
On the other hand, let $\dim V = n \ge m$. Each $\varphi_i$ is nonzero (it belongs to a linearly independent list), so $\dim \operatorname{range}\varphi_i = 1$ and $\dim \operatorname{null}\varphi_i = n - 1$. We prove by induction on $m$ that an intersection of $m$ subspaces of dimension $n - 1$ has dimension at least $n - m$: it's true for $m = 1$, and the inductive step follows from $\dim(A \cap B) \ge \dim A + \dim B - n$. Thus $\dim U \ge n - m$, and combining both bounds, $\dim U = \dim V - m$.
32. (a) $T$ is surjective since $\operatorname{range} T = \operatorname{span}(v_1, \dots, v_n) = V$. Thus the operator $T$ is invertible.
(b) Since $T$ is invertible, $Tu_1, \dots, Tu_n$ is linearly independent, hence $\mathcal{M}(Tu_1), \dots, \mathcal{M}(Tu_n)$ is linearly independent, i.e. the columns of $\mathcal{M}(T)$ are linearly independent.
(c) Since $\dim \operatorname{span}(\mathcal{M}(Tu_1), \dots, \mathcal{M}(Tu_n)) = n = \dim \mathbf{F}^{n,1}$, the columns of $\mathcal{M}(T)$ span $\mathbf{F}^{n,1}$.
(d) Let $\varphi_1, \dots, \varphi_n$ be the dual basis of $u_1, \dots, u_n$ and $\psi_1, \dots, \psi_n$ the dual basis of $v_1, \dots, v_n$. $T' \in \mathcal{L}(V')$ is the dual map of $T$ with respect to these two dual bases. Since $(\mathcal{M}(T'))^t = \mathcal{M}(T)$, the $i$-th row of $\mathcal{M}(T)$ is the $i$-th column of $\mathcal{M}(T')$. Since $T$ is invertible, $T'$ is also invertible. Hence $T'(\psi_1), \dots, T'(\psi_n)$ is linearly independent, so $\mathcal{M}(T'\psi_1), \mathcal{M}(T'\psi_2), \dots, \mathcal{M}(T'\psi_n)$ are linearly independent, i.e. the columns of $\mathcal{M}(T')$, and therefore the rows of $\mathcal{M}(T)$, are linearly independent.
(e) The row rank of $\mathcal{M}(T)$ is $n$, so the rows of $\mathcal{M}(T)$ span $\mathbf{F}^{1,n}$.
33. Let $S$ be the function taking $A \in \mathbf{F}^{m,n}$ to $A^t \in \mathbf{F}^{n,m}$. Then $S(A + B) = (A + B)^t = A^t + B^t = S(A) + S(B)$ and $S(\lambda A) = (\lambda A)^t = \lambda A^t = \lambda S(A)$. Thus $S$ is a linear map from $\mathbf{F}^{m,n}$ to $\mathbf{F}^{n,m}$.
$S$ is surjective: for any $A \in \mathbf{F}^{n,m}$, the matrix $B = A^t \in \mathbf{F}^{m,n}$ satisfies $S(B) = (A^t)^t = A$.
If $S(A) = 0$ then $A = 0$, so $S$ is injective. Hence $S$ is invertible.
34. (a) For any $\varphi \in V'$ we have $(\Lambda(v_1 + v_2))(\varphi) = \varphi(v_1 + v_2) = \varphi(v_1) + \varphi(v_2) = (\Lambda v_1)(\varphi) + (\Lambda v_2)(\varphi)$. Thus $\Lambda(v_1 + v_2) = \Lambda v_1 + \Lambda v_2$. Similarly, $\Lambda(\lambda v) = \lambda(\Lambda v)$. Thus $\Lambda$ is a linear map from $V$ to $V''$.
(b) For any $v \in V$ we will prove that $(\Lambda \circ T)(v) = (T'' \circ \Lambda)(v)$, i.e. $\Lambda(Tv) = T''(\Lambda v)$. To prove this, we show that $(\Lambda(Tv))(\varphi) = (T''(\Lambda v))(\varphi)$ for every $\varphi \in V'$. Indeed, $(\Lambda(Tv))(\varphi) = \varphi(Tv)$, and since $T'' = (T')' \in \mathcal{L}(V'')$,
\[ (T''(\Lambda v))(\varphi) = ((\Lambda v) \circ T')(\varphi) = (\Lambda v)(T'(\varphi)) = (T'(\varphi))(v) = (\varphi \circ T)(v) = \varphi(Tv). \]
Thus $(T''(\Lambda v))(\varphi) = (\Lambda(Tv))(\varphi)$ for every $\varphi \in V'$, which gives $\Lambda(Tv) = T''(\Lambda v)$ for every $v \in V$, i.e. $\Lambda \circ T = T'' \circ \Lambda$.
(c) If $\Lambda v = 0$ then $(\Lambda v)(\varphi) = \varphi(v) = 0$ for every $\varphi \in V'$, which forces $v = 0$. Thus $\Lambda$ is injective.
Let $v_1, \dots, v_n$ be a basis of $V$ and $\varphi_1, \dots, \varphi_n$ the dual basis of $V'$. For any $S \in V''$ we show that $\Lambda v = S$ where $v = \sum_{i=1}^n S(\varphi_i)v_i$. Indeed, $\varphi_i(v) = S(\varphi_i)$ for every $1 \le i \le n$; since both $\Lambda v$ and $S$ are linear on $V'$, it follows that $\varphi(v) = S(\varphi)$ for every $\varphi \in V'$. Thus $\Lambda v = S$, so $\Lambda$ is surjective.
In conclusion, $\Lambda$ is an isomorphism from $V$ onto $V''$.
35. Denote by $\varphi_i \in (\mathcal{P}(\mathbf{R}))'$ the functional with $\varphi_i(x^i) = 1$ and $\varphi_i(x^j) = 0$ for $j \ne i$. It follows that any $\varphi \in (\mathcal{P}(\mathbf{R}))'$ satisfies $\varphi = \sum_{i \ge 0} \varphi(x^i)\varphi_i$, in the sense that the two sides agree on every polynomial (each of which has only finitely many monomials).
Let $e_i \in \mathbf{R}^\infty$ be $e_i = (0, \dots, 0, 1, 0, \dots)$ with the $1$ in the $i$-th position. Then the map sending $\varphi$ to $(\varphi(x^0), \varphi(x^1), \dots)$ sends $\varphi_i$ to $e_i$ and identifies $(\mathcal{P}(\mathbf{R}))'$ with $\mathbf{R}^\infty$.
6. Chapter 4: Polynomials
Theorem 6.0.1 (4.17) Suppose $p \in \mathcal{P}(\mathbf{R})$ is a nonconstant polynomial. Then $p$ has a unique factorization (except for the order of the factors) of the form
\[ p(x) = c(x - \lambda_1)\cdots(x - \lambda_m)(x^2 + b_1x + c_1)\cdots(x^2 + b_Mx + c_M), \]
where $c, \lambda_1, \dots, \lambda_m, b_1, \dots, b_M, c_1, \dots, c_M \in \mathbf{R}$ and each $x^2 + b_jx + c_j$ has no real roots.
6.1. Exercise 4
1. Done.
2. No, since $(-x^m + x) + x^m = x$ does not have degree $m$.
3. Yes.
4. Pick $p = (x - 1)^{a_1}(x - 2)^{a_2}\cdots(x - m)^{a_m}$ with $a_i \in \mathbf{Z}$, $a_i \ge 1$ and $\sum_{i=1}^m a_i = n$.
5. Let $A$ be the $(m+1)$-by-$(m+1)$ matrix with $A_{i,j} = z_i^{\,j-1}$. Define a linear map $T \in \mathcal{L}(\mathbf{R}^{m+1,1})$ by $Tx = Ax$. First, we prove that $T$ is invertible. Indeed, if there existed $x \ne 0$, $x = (a_0, a_1, \dots, a_m)^t \in \mathbf{R}^{m+1,1}$, with $Tx = Ax = 0$, then the polynomial $a_0 + a_1x + a_2x^2 + \dots + a_mx^m$ would have $m + 1$ distinct zeros $z_1, \dots, z_m, z_{m+1}$, a contradiction. Thus $Tx = 0$ forces $x = 0$, so the operator $T$ is injective, hence invertible. Therefore, for any $w \in \mathbf{R}^{m+1,1}$ with $w_{i,1} = w_i$, there exists a unique $x \in \mathbf{R}^{m+1,1}$, $x_{i,1} = \alpha_{i-1}$, with $Tx = Ax = w$. Since $A_{i,\cdot}\,x = (Ax)_{i,1} = w_i$, the polynomial $p = \alpha_0 + \alpha_1x + \alpha_2x^2 + \dots + \alpha_mx^m$ satisfies the condition, and since $x$ is uniquely determined, so is $p$.
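The invertibility-of-the-Vandermonde-style-matrix argument in exercise 5 can be sketched numerically with numpy (assumed available); the interpolation points and values below are arbitrary choices:

```python
import numpy as np

# Sketch of exercise 5: the matrix A[i][j] = z_i ** j is invertible when the
# z_i are distinct, so a unique polynomial of degree <= m passes through any
# m+1 points (z_i, w_i).
z = np.array([0.0, 1.0, 2.0, 3.0])     # distinct z_1, ..., z_{m+1}
w = np.array([1.0, 3.0, 11.0, 31.0])   # prescribed values w_i
A = np.vander(z, increasing=True)      # A[i][j] = z_i ** j

assert np.linalg.matrix_rank(A) == len(z)   # invertible => unique solution
coeffs = np.linalg.solve(A, w)              # coefficients alpha_0, ..., alpha_m

# The recovered polynomial really does pass through the prescribed points.
assert np.allclose(np.polyval(coeffs[::-1], z), w)
```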
6. If $p(x)$ has a zero $\lambda$ then there exists $q \in \mathcal{P}(\mathbf{C})$ with $\deg q = m - 1$ and $p(x) = (x - \lambda)q(x)$. Hence $p'(x) = q(x) + (x - \lambda)q'(x)$.
If $p(x)$ and $p'(x)$ share a zero, say $\lambda$, then $p'(\lambda) = 0$, which gives $q(\lambda) = 0$, so $q(x) = (x - \lambda)r(x)$ and $p(x) = (x - \lambda)^2r(x)$ with $r \in \mathcal{P}(\mathbf{C})$, $\deg r = m - 2$. It follows that $p$ has at most $m - 1$ distinct zeros.
7. According to theorem 6.0.1, $p(x)$ has a unique factorization
\[ p(x) = c(x - \lambda_1)\cdots(x - \lambda_m)(x^2 + b_1x + c_1)\cdots(x^2 + b_Mx + c_M). \]
If $p(x)$ had no real zero, the factorization would contain no linear factors:
\[ p(x) = c(x^2 + b_1x + c_1)\cdots(x^2 + b_Mx + c_M), \]
so the degree of $p$ would be even, a contradiction. Thus a polynomial of odd degree must have a real zero.
8. There exists $q(x) \in \mathcal{P}(\mathbf{R})$ with $p(x) = (x - 3)q(x) + r$ for some $r \in \mathbf{R}$; evaluating at $x = 3$ shows right away that $r = p(3)$. We prove that $Tp = q \in \mathcal{P}(\mathbf{R})$. Indeed, $q \in \mathbf{R}^{\mathbf{R}}$ and for $x \ne 3$,
\[ q(x) = \frac{p(x) - r}{x - 3} = \frac{p(x) - p(3)}{x - 3}. \]
For $x = 3$, observe that $p'(x) = q(x) + (x - 3)q'(x)$, so $p'(3) = q(3)$. Thus $Tp = q$ is indeed a polynomial in $\mathcal{P}(\mathbf{R})$.
We have $(T(p + q))(3) = (p + q)'(3) = p'(3) + q'(3) = (T(p))(3) + (T(q))(3)$, and for $x \ne 3$,
\[ (T(p + q))(x) = \frac{(p + q)(x) - (p + q)(3)}{x - 3} = \frac{p(x) - p(3)}{x - 3} + \frac{q(x) - q(3)}{x - 3} = (T(p))(x) + (T(q))(x). \]
Thus $T(p + q) = T(p) + T(q)$. Similarly, $T(\lambda p) = \lambda T(p)$. We find that $T$ is a linear map.
9. If $p \in \mathcal{P}(\mathbf{C})$ then $p$ has a unique factorization of the form $p(z) = c(z - \lambda_1)\cdots(z - \lambda_m)$ where $c, \lambda_1, \dots, \lambda_m \in \mathbf{C}$. Hence
\[ q(z) = p(z)\,\overline{p(\bar z)} = c\prod_{i=1}^m (z - \lambda_i)\cdot \bar c\prod_{i=1}^m (z - \bar\lambda_i) = |c|^2\prod_{i=1}^m \big[(z - \lambda_i)(z - \bar\lambda_i)\big] = |c|^2\prod_{i=1}^m \big(z^2 - 2(\operatorname{Re}\lambda_i)z + |\lambda_i|^2\big). \]
11. If $\deg q \ge \deg p$ then there exist $r, s \in \mathcal{P}(\mathbf{F})$ with $q = pr + s$ and $\deg s < \deg p$. We find $q + U = pr + s + U = s + U$. Each such $s \in \mathcal{P}(\mathbf{F})$ with $\deg s < \deg p$ can be represented as a linear combination of $1, x, \dots, x^{\deg(p)-1}$. Thus $1 + U, x + U, \dots, x^{\deg(p)-1} + U$ spans $\mathcal{P}(\mathbf{F})/U$. This list is also linearly independent, hence a basis of $\mathcal{P}(\mathbf{F})/U$. Therefore $\dim \mathcal{P}(\mathbf{F})/U = \deg p$.
Example 7.1.1 Suppose $T \in \mathcal{L}(\mathbf{F}^2)$ is defined by $T(w, z) = (-z, w)$. Then $T$ has no eigenvalue if $\mathbf{F} = \mathbf{R}$. However, $T$ has eigenvalues $\pm i$ if $\mathbf{F} = \mathbf{C}$.
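The example can be confirmed numerically. A minimal sketch with numpy (assumed available):

```python
import numpy as np

# The operator T(w, z) = (-z, w) has matrix [[0, -1], [1, 0]] with respect to
# the standard basis. Its eigenvalues are +/- i: neither is real, so T has no
# eigenvalue over R but has two over C, matching example 7.1.1.
A = np.array([[0., -1.],
              [1., 0.]])
eigvals = np.linalg.eigvals(A)

assert np.allclose(sorted(eigvals, key=lambda t: t.imag), [-1j, 1j])
assert all(abs(ev.imag) > 0 for ev in eigvals)   # no real eigenvalue
```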
Theorem 7.1.2 (5.6) Suppose $V$ is finite-dimensional, $T \in \mathcal{L}(V)$ and $\lambda \in \mathbf{F}$. Then the following are equivalent:
(a) $\lambda$ is an eigenvalue of $T$;
(b) $T - \lambda I$ is not injective;
(c) $T - \lambda I$ is not surjective;
(d) $T - \lambda I$ is not invertible.
Theorem 7.1.4 (5.13) Suppose $V$ is finite-dimensional. Then each operator on $V$ has at most $\dim V$ distinct eigenvalues.
7.2. Exercises 5A
1. (a) For any $u \in U$ we have $Tu = 0 \in U$, so $U$ is invariant under $T$.
(b) For any $u \in U$ we have $Tu \in \operatorname{range} T \subset U$, so $U$ is invariant under $T$.
2. For every $u \in \operatorname{null} S$ we have $S(Tu) = T(Su) = T(0) = 0$, so $Tu \in \operatorname{null} S$. Thus $\operatorname{null} S$ is invariant under $T$.
3. For every $u \in \operatorname{range} S$ there exists $v \in V$ with $Sv = u$. Hence $Tu = T(Sv) = S(Tv) \in \operatorname{range} S$, so $\operatorname{range} S$ is invariant under $T$.
4. For any $u_1 \in U_1, u_2 \in U_2, \dots, u_m \in U_m$ we have $T(u_1 + \dots + u_m) = Tu_1 + \dots + Tu_m \in U_1 + \dots + U_m$. Thus $U_1 + \dots + U_m$ is invariant under $T$.
5. Let $U_1, \dots, U_m$ be invariant subspaces of $V$ under $T$. For any $v \in S = U_1 \cap \dots \cap U_m$ we have $Tv \in U_i$ for each $i$, so $Tv \in S$ and $S$ is invariant under $T$.
6. If $U \ne \{0\}$ then there exists a basis $u_1, \dots, u_m$ of $U$. If there exists $v \in V$, $v \notin U$, then we can choose an operator $T \in \mathcal{L}(V)$ with $Tu_1 = v$, and then $U$ is not invariant under $T$, a contradiction. Thus every $v \in V$ lies in $U$, i.e. $U = V$.
7. If $T(x, y) = (-3y, x) = \lambda(x, y)$ for $(x, y) \ne (0, 0)$ then $-3y = \lambda x$ and $x = \lambda y$, which gives $-3y = \lambda^2y$. If $y = 0$ then $x = 0$, a contradiction. Hence $y \ne 0$, which means $\lambda^2 = -3$. Thus $T$ does not have any (real) eigenvalue.
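The computation in exercise 7 can be double-checked numerically. A small sketch with numpy (assumed available):

```python
import numpy as np

# Exercise 7's operator T(x, y) = (-3y, x) has matrix [[0, -3], [1, 0]].
# Its characteristic polynomial is z^2 + 3, whose roots +/- i*sqrt(3) are not
# real, confirming that T has no real eigenvalue.
A = np.array([[0., -3.],
              [1., 0.]])
eigvals = np.linalg.eigvals(A)

assert all(abs(ev.imag) > 1e-9 for ev in eigvals)   # none of them are real
assert np.allclose(sorted(ev.imag for ev in eigvals), [-np.sqrt(3), np.sqrt(3)])
```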
10. (a) If $T(x_1, \dots, x_n) = \lambda(x_1, \dots, x_n) = (x_1, 2x_2, 3x_3, \dots, nx_n)$ for $(x_1, \dots, x_n) \ne (0, \dots, 0)$, then $(\lambda - i)x_i = 0$ for $i = 1, \dots, n$. It follows that $\lambda \in \{1, \dots, n\}$. If $\lambda = i$ then the eigenvectors are $\alpha e_i$ ($\alpha \ne 0$), where $e_i \in \mathbf{F}^n$ has all coordinates $0$ except the $i$-th, which is $1$.
(b) Let $U$ be an invariant subspace of $V$ under $T$. For any $u = (x_1, \dots, x_n) \in U$ we also have $v = Tu = (x_1, 2x_2, \dots, nx_n) \in U$. From these two vectors we can choose $\alpha, \beta \in \mathbf{F}$ so that the first coordinate of $w = \alpha u + \beta v$ is $0$ while the remaining nonzero coordinates of $u$ survive. We also have $Tw \in U$, and from $w, Tw$ we can obtain a new vector in $U$ whose first and second coordinates are both $0$. Continuing like this, we eventually obtain $e_i \in U$ whenever $x_i \ne 0$ for some $(x_1, \dots, x_n) \in U$. It follows that $U$ is spanned by those $e_i$; thus the invariant subspaces of $V$ under $T$ are exactly the spans of subsets of the standard basis of $\mathbf{F}^n$.
12. Suppose $(Tp)(x) = \lambda p(x) = xp'(x)$ for some $p \ne 0$. Let $p = a_0 + a_1x + \dots + a_4x^4$; then $xp'(x) = a_1x + 2a_2x^2 + 3a_3x^3 + 4a_4x^4$. It follows that $\lambda a_i = ia_i$ for $i = 0, \dots, 4$. Hence $\lambda \in \{0, 1, 2, 3, 4\}$, so that $(a_0, \dots, a_4) \ne (0, \dots, 0)$ is possible. If $\lambda = i$ then $p = ax^i$ is an eigenvector for any $a \in \mathbf{R}$, $a \ne 0$.
13. Since $V$ is finite-dimensional, according to theorem 7.1.4 there are at most $\dim V$ distinct eigenvalues of $T$; in other words, by theorem 7.1.2 there are at most $\dim V$ scalars $\alpha \in \mathbf{F}$ for which $T - \alpha I$ is not invertible. Thus we can clearly choose $\alpha$ with $|\alpha - \tfrac{1}{1000}| < \tfrac{1}{1000}$ such that $T - \alpha I$ is invertible.
(b) If $v$ is an eigenvector of $T$ corresponding to $\lambda$, then $S^{-1}v$ is an eigenvector of $S^{-1}TS$ corresponding to $\lambda$.
16. Let $v_1, \dots, v_n$ be a basis of $V$ and, for $1 \le k \le n$, write $Tv_k = \sum_{i=1}^n \alpha_{i,k}v_i$ with $\alpha_{i,k} \in \mathbf{R}$ by the problem's condition. Suppose $\lambda$ is an eigenvalue of $T$ with eigenvector $v = \sum_{i=1}^n \beta_iv_i$, $\beta_i \in \mathbf{C}$. We show that $\bar v = \sum_{i=1}^n \bar\beta_iv_i$ is an eigenvector corresponding to the eigenvalue $\bar\lambda$ of $T$, i.e. $T\bar v = \bar\lambda\cdot\bar v$.
Indeed, from $Tv = \lambda v$, where $Tv = \sum_{k=1}^n \beta_kTv_k = \sum_{i=1}^n \big(\sum_{k=1}^n \beta_k\alpha_{i,k}\big)v_i$, it follows that $\sum_{k=1}^n \beta_k\alpha_{i,k} = \lambda\beta_i$ for each $i = 1, \dots, n$; conjugating (the $\alpha_{i,k}$ are real) gives $\sum_{k=1}^n \bar\beta_k\alpha_{i,k} = \bar\lambda\bar\beta_i$. Thus
\[ T\bar v = \sum_{i=1}^n \Big(\sum_{k=1}^n \bar\beta_k\alpha_{i,k}\Big)v_i = \sum_{i=1}^n \bar\lambda\bar\beta_iv_i = \bar\lambda\cdot\bar v. \]
18. If $T(z_1, z_2, \dots) = (0, z_1, \dots) = \lambda(z_1, z_2, \dots)$ for some $(z_1, z_2, \dots) \ne (0, 0, \dots)$, then $\lambda z_1 = 0$ and $\lambda z_{i+1} = z_i$ for all $i \ge 1$. In both cases $\lambda = 0$ and $\lambda \ne 0$ we obtain $z_i = 0$ for all $i \ge 1$, a contradiction. Thus $T$ does not have an eigenvalue.
20. If $T(z_1, z_2, \dots) = \lambda(z_1, z_2, \dots) = (z_2, z_3, \dots)$ for some $(z_1, z_2, \dots) \ne (0, 0, \dots)$, then $\lambda z_i = z_{i+1}$ for all $i \ge 1$. For any $\lambda \in \mathbf{F}$ the corresponding eigenvectors are $\alpha(1, \lambda, \lambda^2, \dots)$ with $\alpha \ne 0$, so every $\lambda$ is an eigenvalue.
24. (a) Let $x = (1, 1, \dots, 1)^t \in \mathbf{F}^{n,1}$; then $Tx = Ax = x$, so $1$ is an eigenvalue of $T$.
(b) Note that the matrix $A$ is precisely $\mathcal{M}(T)$, and according to theorem 5.10.7, $\dim \operatorname{range}(T - I)$ equals the column rank of $A - I$. Since the sum of all entries in each column of $A$ is $1$, the sum of the entries in each column of $A - I$ is $0$. Let $e_i$ be the $i$-th row of $A - I$; then $e_n = -(e_1 + \dots + e_{n-1})$, so the row rank of $A - I$ is at most $n - 1$, which means the column rank of $A - I$ is at most $n - 1$, i.e. $\dim \operatorname{range}(T - I) \le n - 1$. So $T - I$ is not surjective, hence (being an operator) not injective. According to theorem 7.1.2, it follows that $1$ is an eigenvalue of $T$.
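The column-sum argument in exercise 24(b) is easy to check numerically. A sketch with numpy (assumed available); the size of the matrix and the random entries are arbitrary choices:

```python
import numpy as np

# Numerical check of exercise 24(b): if every column of A sums to 1,
# then 1 is an eigenvalue of A.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, (5, 5))
A = A / A.sum(axis=0)            # normalize so that each column sums to 1

assert np.allclose(A.sum(axis=0), 1.0)
eigvals = np.linalg.eigvals(A)
assert any(np.isclose(ev, 1) for ev in eigvals)
```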
30. Let $x_1, x_2, x_3$ be eigenvectors corresponding to $-4, 5, \sqrt{7}$; then $x_1, x_2, x_3$ is linearly independent, hence a basis of $\mathbf{R}^3$. Writing $x = \alpha_1x_1 + \alpha_2x_2 + \alpha_3x_3$, we get
\[ Tx - 9x = -13\alpha_1x_1 - 4\alpha_2x_2 + (\sqrt{7} - 9)\alpha_3x_3. \]
Write $(-4, 5, \sqrt{7})$ as a linear combination of $x_1, x_2, x_3$ and from that determine the $\alpha_i$ so that $Tx - 9x = (-4, 5, \sqrt{7})$; this is possible since the coefficients $-13$, $-4$, $\sqrt{7} - 9$ are all nonzero. Thus such $x \in \mathbf{R}^3$ with $Tx - 9x = (-4, 5, \sqrt{7})$ exists.
34. Suppose $T/(\operatorname{null} T)$ is injective but $(\operatorname{null} T) \cap (\operatorname{range} T) \ne \{0\}$, i.e. there exists $v \notin \operatorname{null} T$ with $Tv \in \operatorname{null} T$, that is, $Tv \in (\operatorname{null} T) \cap (\operatorname{range} T)$. Then $(T/(\operatorname{null} T))(v + \operatorname{null} T) = \operatorname{null} T$. Since $v \notin \operatorname{null} T$, we find that $T/(\operatorname{null} T)$ is not injective, a contradiction. Thus we must have $(\operatorname{null} T) \cap (\operatorname{range} T) = \{0\}$.
Conversely, suppose $(\operatorname{null} T) \cap (\operatorname{range} T) = \{0\}$. If $T/(\operatorname{null} T)$ is not injective then there exists $v \notin \operatorname{null} T$ with $Tv \in \operatorname{null} T$. It follows that $Tv \ne 0$ and $Tv \in (\operatorname{null} T) \cap (\operatorname{range} T)$, a contradiction. Thus $T/(\operatorname{null} T)$ is injective.
Theorem 7.3.1 (5.21) Every operator on a finite-dimensional, nonzero, complex vector space has an eigenvalue.
Theorem 7.3.2 (5.27) Suppose V is a finite-dimensional complex vector space and T ∈ L(V). Then T has an upper-triangular matrix with respect to some basis of V.
Theorem 7.3.3 (5.30) Suppose T ∈ L(V) has an upper-triangular matrix with respect to some basis of V. Then T is invertible if and only if all the entries on the diagonal of that upper-triangular matrix are nonzero.
The following theorems are not from the book; I found them in a blog post by tastymath75025 (AoPS).
Theorem 7.3.5 Every operator T on a finite-dimensional real vector space V has an invariant subspace of dimension 1 or 2.
Theorem 7.3.6 Every operator T on an odd-dimensional real vector space has an eigenvalue.
7.4. Exercises 5B
1. (a) We have (I − T)(I + T + T² + ⋯ + Tⁿ⁻¹) = I − Tⁿ = I and (I + T + ⋯ + Tⁿ⁻¹)(I − T) = I, so I − T is invertible and (I − T)⁻¹ = I + T + T² + ⋯ + Tⁿ⁻¹.
(b) It looks like 1 − xⁿ = (1 − x)(1 + x + ⋯ + xⁿ⁻¹).
6. Since U is invariant under T, for any u ∈ U we have Tⁿu ∈ U for all n ≥ 0. This gives p(T)u ∈ U for every u ∈ U, i.e. U is invariant under p(T) for any p ∈ P(F).
7. Since 9 is an eigenvalue of T², there exists v ∈ V, v ≠ 0, with ((z² − 9)(T))v = (T² − 9I)v = 0, i.e. (((z − 3)(z + 3))(T))v = (T − 3I)(T + 3I)v = 0. If (T + 3I)v = 0 then −3 is an eigenvalue of T. If (T + 3I)v = u ≠ 0 then (T − 3I)u = 0, so 3 is an eigenvalue of T.
8. Imagine T as the rotation by 45° counterclockwise in the plane R². This gives T⁴ = −I. Explicitly, T(x, y) = (x cos 45° − y sin 45°, x sin 45° + y cos 45°).
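A small numerical check (not from the book) that four applications of the 45° rotation equal −I:

```python
import math

def rot(theta):
    # matrix of counterclockwise rotation by theta
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = rot(math.pi / 4)      # rotation by 45 degrees
T4 = T
for _ in range(3):
    T4 = matmul(T4, T)    # T^4 = rotation by 180 degrees

print(all(abs(T4[i][j] - (-1 if i == j else 0)) < 1e-12
          for i in range(2) for j in range(2)))  # True: T^4 = -I
```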
9. If λ is a zero of p then p = (z − λ)q(z). Hence 0 = p(T)v = (T − λI)q(T)v. Since deg q < deg p, we have q(T)v = u ≠ 0, which gives (T − λI)u = 0, i.e. λ is an eigenvalue of T.
10. Let p = a₀ + a₁z + ⋯ + aₘzᵐ; then from Tv = λv we have

p(T)v = (a₀I + a₁T + ⋯ + aₘTᵐ)v = a₀v + a₁λv + ⋯ + aₘλᵐv = p(λ)v.
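A numeric sketch of exercise 10, with a hypothetical upper-triangular T on R² (the names `apply_poly` and `Tapply` are made up for the illustration): applying p(T) to an eigenvector with eigenvalue λ gives the same result as scaling by p(λ).

```python
def apply_poly(coeffs, mat_apply, v):
    # evaluate p(T)v where p(z) = coeffs[0] + coeffs[1] z + coeffs[2] z^2 + ...
    out = [0.0, 0.0]
    power = v[:]                          # T^0 v
    for a in coeffs:
        out = [out[i] + a * power[i] for i in range(2)]
        power = mat_apply(power)          # advance to the next power of T
    return out

lam = 3.0
Tapply = lambda w: [3.0 * w[0] + 1.0 * w[1], 2.0 * w[1]]  # upper-triangular T
v = [1.0, 0.0]                                            # eigenvector: Tv = 3v
p = [1.0, -2.0, 5.0]                                      # p(z) = 1 - 2z + 5z^2
pl = 1.0 - 2.0 * lam + 5.0 * lam ** 2                     # p(lambda)
print(apply_poly(p, Tapply, v), pl)
```

Both printed values agree: p(T)v = (40, 0) = p(3)·v.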
11. If α = p(λ) for some eigenvalue λ of T, then from exercise 10 (5B), α is an eigenvalue of p(T). Conversely, if α is an eigenvalue of p(T), then there exists v ∈ V, v ≠ 0, with p(T)v = αv, i.e. (p(T) − αI)v = 0. Since F = C we can factorise p − α = c(x − λ₁) ⋯ (x − λₘ), which gives 0 = (p(T) − αI)v = c(T − λ₁I) ⋯ (T − λₘI)v. Hence there exists 1 ≤ j ≤ m so that λⱼ is an eigenvalue of T. Hence p(T)v = p(λⱼ)v = αv, so α = p(λⱼ). (See MSE; the problem is still true if T is only triangularizable on a finite-dimensional vector space over a field F.)
12. Back to exercise 8 (5B): there exists T ∈ L(R²) with T⁴ = −I, i.e. −1 is an eigenvalue of (x⁴)(T). However, −1 ≠ p(λ) = λ⁴ for any λ ∈ R.
13. Suppose a subspace U ≠ {0} is finite-dimensional and invariant under T. Then T|_U ∈ L(U), so according to theorem 7.3.1 there exists an eigenvalue λ of T|_U, which means λ is an eigenvalue of T, a contradiction. Thus U is either {0} or infinite-dimensional.
14. Pick T so that M(T) has rows (0, 0, 1), (1, 0, 0), (0, 1, 0).
15. Pick T so that M(T) has rows (1, 1, 0), (1, 1, 0), (0, 0, 1).
16. If dim V = n, consider the linear map S : Pₙ(C) → V given by Sp = (p(T))v. Since dim Pₙ(C) = n + 1 > n = dim V, theorem 5.3.6 shows S is not injective. Hence there exists p ∈ Pₙ(C), p ≠ 0, with Sp = (p(T))v = 0, which leads to the existence of an eigenvalue of T.
17. If dim V = n, consider the linear map S : P_{n²}(C) → L(V) given by Sp = p(T). Since dim P_{n²}(C) = n² + 1 > n² = dim L(V), theorem 5.3.6 shows S is not injective, i.e. there exists p ∈ P_{n²}(C), p ≠ 0, with Sp = p(T) = 0. Pick v ≠ 0, v ∈ V; then p(T)v = 0, which gives the existence of an eigenvalue of T.
18. For each λ ∈ C, either T − λI is invertible or it is not. If λ is not an eigenvalue of T, then T − λI is invertible, so dim range(T − λI) = dim V. If λ is an eigenvalue of T, then T − λI is not invertible, so dim range(T − λI) ≤ dim V − 1. Since V is a finite-dimensional nonzero complex vector space, T has at least one eigenvalue; thus f(λ) is not a continuous function.
(a) T is diagonalizable;
(d) V = E(λ₁, T) ⊕ ⋯ ⊕ E(λₘ, T).
theo:5.44:5C Theorem 7.5.3 (5.44) If T 2 L(V ) has dim V distinct eigenvalues then T is diagonaliz-
able.
7.6. Exercises 5C
1. Since T is diagonalizable, V has a basis v₁, …, vₙ consisting of eigenvectors of T. Let Tvᵢ = λᵢvᵢ. The list of those vⱼ with λⱼ = 0 is a basis of null T. The vⱼ with λⱼ ≠ 0 satisfy Tvⱼ = λⱼvⱼ and span range T, hence form a basis of range T. Thus V = null T ⊕ range T.
2. The converse is not true. For example, if v₁, v₂, v₃ is a basis of V, let T ∈ L(V) with Tv₁ = 0, Tv₂ = v₃, Tv₃ = −v₂; then span(v₁) is the null space of T and span(v₂, v₃) is the range of T. Hence V = null T ⊕ range T, but T is not diagonalizable, as 0 is the only eigenvalue of T, with E(0, T) = span(v₁).
3. If (a) is true then (b) is also true. Let u₁, …, uₙ be a basis of null T, which can be extended to a basis u₁, …, uₙ, v₁, …, vₘ of V.
If (b) is true: since vᵢ ∈ V = null T + range T, we have vᵢ − wᵢ ∈ range T for some wᵢ ∈ null T = span(u₁, …, uₙ). Note that v₁ − w₁, …, vₘ − wₘ is linearly independent, as v₁, …, vₘ, u₁, …, uₙ is linearly independent; since dim range T = m, the list v₁ − w₁, …, vₘ − wₘ is a basis of range T. Hence, if there is v ∈ (null T) ∩ (range T), then v = Σ_{i=1}^n αᵢuᵢ = Σ_{j=1}^m βⱼ(vⱼ − wⱼ), which forces βⱼ = 0 for all j, since v₁, …, vₘ, u₁, …, uₙ is linearly independent. Thus v = 0, i.e. (null T) ∩ (range T) = {0}.
If (c) is true then from theorem 3.2.4 (1C), null T + range T is a direct sum. Hence, if v₁, …, vₘ is a basis of null T and u₁, …, uₙ is a basis of range T, then v₁, …, vₘ, u₁, …, uₙ is linearly independent. On the other hand, since V is finite-dimensional, dim V = dim range T + dim null T = m + n. Thus v₁, …, vₘ, u₁, …, uₙ is a basis of V, i.e. V = null T ⊕ range T.
4. Let V = P(R) and T ∈ L(V) with T(1) = 0, Tx = 0, and Txⁱ = x² for all i ≥ 2. Then null T = span(1, x) and range T = span(x²), so null T ∩ range T = {0} but V ≠ null T + range T.
We do the same thing for U₂. This keeps going as long as dim Uᵢ ≥ 1 (since an operator on a nonzero complex vector space always has an eigenvalue, theorem 7.3.1). At the end we obtain V = ⊕_{i=1}^m E(λᵢ, T) for eigenvalues λᵢ of T. From theorem 7.5.2 (5C), this shows T is diagonalizable.
6. Since T has dim V distinct eigenvalues, T is diagonalizable; hence, according to theorem 7.5.2 (5C), V has a basis v₁, …, vₙ of eigenvectors of T corresponding to eigenvalues λ₁, …, λₙ. By hypothesis, v₁, …, vₙ are also eigenvectors of S, corresponding to eigenvalues μ₁, …, μₙ. Hence STvᵢ = S(λᵢvᵢ) = λᵢμᵢvᵢ = TSvᵢ for all 1 ≤ i ≤ n. This gives ST = TS, as v₁, …, vₙ is a basis of V.
The inequality becomes an equality when Sₖ = dim E(λₖ, T), i.e. λₖ appears on the diagonal of A precisely dim E(λₖ, T) times.
theo:5.6:5A
8. If both T − 2I and T − 6I are not invertible, then according to theorem 7.1.2 (5A), 2 and 6 are eigenvalues of T, i.e. dim E(2, T) ≥ 1 and dim E(6, T) ≥ 1. However,
Hence E(λ, T) = E(λ⁻¹, T⁻¹) = {0}.
If λ is an eigenvalue of T then, according to exercise 21 (5A), v ∈ E(λ, T) implies v ∈ E(λ⁻¹, T⁻¹).
theo:5.38:5C
10. We have null T = null(T − 0I) = E(0, T). Hence, according to theorem 7.5.1,

Σ_{i=1}^m dim E(λᵢ, T) + dim E(0, T) ≤ dim V = dim range T + dim null T.

This gives Σ_{i=1}^m dim E(λᵢ, T) ≤ dim range T.
13. Let T, R ∈ L(F⁴) with Tv₁ = Rv₁ = 2v₁, Tv₂ = Rv₂ = 6v₂, Tv₃ = Rv₃ = 7v₃ and Tv₄ = v₁ + 7v₄, Rv₄ = v₂ + 6v₄, where v₁, …, v₄ is a basis of F⁴. It is not hard to check that T, R have only 2, 6, 7 as eigenvalues. We show that if S ∈ L(F⁴) satisfies SR = TS then S is not invertible, from which it follows that there does not exist an invertible operator S with R = S⁻¹TS.
Indeed, if Sv₁ = Σ_{i=1}^4 αᵢvᵢ then

TSv₁ = T(Σ_{i=1}^4 αᵢvᵢ) = 2α₁v₁ + 6α₂v₂ + 7α₃v₃ + α₄(v₁ + 7v₄).

And

SRv₁ = 2Sv₁ = 2Σ_{i=1}^4 αᵢvᵢ.

And

SRv₄ = S(6v₄ + v₂) = Sv₂ + 6Sv₄.
exer:5C:14 14. If T does not have a diagonal matrix for any basis of C3 then that means dim E(6, T ) =
dim E(7, T ) = 1 and T only has eigenvalues 6, 7. This makes the construction for T much
easier: For arbitrary basis of C3 then choose T 2 L(C3 ) so T v1 = 6v1 , T v2 = 7v2 , T v3 =
v1 + v2 + 6v3 .
Indeed, we show that T has only two eigenvalues 6 and 7. If there exists v = a1 v1 + a2 v2 +
a3 v3 6= 0 so
8.2. Exercises 6A
exer:6A:1 1. Definiteness is not true, i.e. |1 · 0| + |1 · 0| = 0 but (1, 0) 6= (0, 0).
2. For (x₁, y₁, z₁) ∈ R³ with x₁y₁ < 0 we get ⟨(x₁, y₁, z₁), (x₁, y₁, z₁)⟩ = 2x₁y₁ < 0, contradicting the positivity condition in the definition of an inner product.
4. (a) ⟨u + v, u − v⟩ = ‖u‖² − ⟨u, v⟩ + ⟨v, u⟩ − ‖v‖². Since we are in a real inner product space, ⟨u, v⟩ = ⟨v, u⟩, so this equals ‖u‖² − ‖v‖².
(b) This is deduced from (a).
(c) A rhombus is formed by two vectors u, v ∈ R² with the same norm, and its two diagonals are u + v and u − v. From (b), u + v is orthogonal to u − v, so the two diagonals of the rhombus are perpendicular to each other.
p ⌦p p ↵
5. If there exists v ∈ V, v ≠ 0, with Tv = √2·v, then ‖Tv‖² = ⟨√2 v, √2 v⟩ = 2‖v‖² > ‖v‖², a contradiction. Thus there does not exist v ∈ V, v ≠ 0, with Tv = √2 v, i.e. √2 is not an eigenvalue of T, so T − √2 I is invertible by theorem 7.1.2, under the assumption that V is finite-dimensional.
6. If ⟨u, v⟩ = 0 then, by the Pythagorean theorem, ‖u + av‖² = |a|²‖v‖² + ‖u‖² ≥ ‖u‖² for all a ∈ F.
If ‖u + av‖² ≥ ‖u‖² for all a ∈ F, this is equivalent to 0 ≤ |a|²‖v‖² + 2 Re(a⟨v, u⟩). If ⟨v, u⟩ ≠ 0 then we can choose a so that 0 > |a|²‖v‖² + 2 Re(a⟨v, u⟩), a contradiction. Thus ⟨v, u⟩ = 0.
7. Since a, b ∈ R, ‖au + bv‖² = |a|²‖u‖² + 2ab⟨u, v⟩ + |b|²‖v‖², and similarly ‖bu + av‖² = |b|²‖u‖² + 2ab⟨u, v⟩ + |a|²‖v‖². Hence ‖au + bv‖ = ‖bu + av‖ iff |a|²‖u‖² + |b|²‖v‖² = |b|²‖u‖² + |a|²‖v‖² iff (|a|² − |b|²)(‖u‖² − ‖v‖²) = 0 for all a, b ∈ R iff ‖u‖ = ‖v‖.
theo:6.15:6A
8. We have |⟨u, v⟩| = 1 = ‖u‖‖v‖, so according to theorem 8.1.2, u = av for some a ∈ F. Then 1 = ⟨u, v⟩ = a‖v‖² = a, so a = 1 and u = v.
theo:6.15:6A
9. Let ‖u‖ = a, ‖v‖ = b. According to theorem 8.1.2 (6A) we have |⟨u, v⟩| ≤ ab, so

(1 − |⟨u, v⟩|)² ≥ (1 − ab)² ≥ (1 − a²)(1 − b²).
13. Draw the triangle formed by u, v and u − v. Let θ be the angle between u and v; then apply the law of cosines.
15. Consider x = (a₁, √2·a₂, …, √n·aₙ) and y = (b₁, b₂/√2, …, bₙ/√n) in Rⁿ; then theorem 8.1.2 with respect to the Euclidean inner product gives the desired inequality.
p
exer:6A:16 16. 2kvk2 = ku + vk2 + ku vk2 2kuk2 = 34 so kvk = 17.
‖u + v‖² + ‖u − v‖² = x² + y² + a² + b² + ½|(x + a)² − (y + b)²| + ½|(x − a)² − (y − b)²|,
and
2(‖u‖² + ‖v‖²) = x² + y² + a² + b² + |x² − y²| + |a² − b²|.
Then we must have
‖u + v‖² − ‖u − v‖² = ⟨u + v, u + v⟩ − ⟨u − v, u − v⟩
= (‖u‖² + ⟨u, v⟩ + ⟨v, u⟩ + ‖v‖²) − (‖u‖² − ⟨u, v⟩ − ⟨v, u⟩ + ‖v‖²)
= 2⟨u, v⟩ + 2⟨v, u⟩.
For a real inner product, ⟨u, v⟩ = ⟨v, u⟩, which gives the desired identity.
20. From exercise 19, we know that ‖u + v‖² − ‖u − v‖² = 2⟨u, v⟩ + 2⟨v, u⟩. Next, we have
‖u + iv‖² − ‖u − iv‖² = ⟨u + iv, u + iv⟩ − ⟨u − iv, u − iv⟩
= (‖u‖² + ⟨u, iv⟩ + ⟨iv, u⟩ + ‖iv‖²) − (‖u‖² + ‖iv‖² − ⟨u, iv⟩ − ⟨iv, u⟩)
= 2[⟨u, iv⟩ + ⟨iv, u⟩]
= 2i[⟨v, u⟩ − ⟨u, v⟩].
Thus,
⟨u, v⟩ = (‖u + v‖² − ‖u − v‖² + ‖u + iv‖²·i − ‖u − iv‖²·i)/4.
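The complex polarization identity can be spot-checked numerically; a small sketch with random vectors in C⁴, using the Euclidean inner product that is conjugate-linear in the second slot (the book's convention):

```python
import random

def inner(u, v):
    # Euclidean inner product on C^n, conjugate-linear in the second slot
    return sum(a * b.conjugate() for a, b in zip(u, v))

def norm2(u):
    return inner(u, u).real

random.seed(0)
u = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
v = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]

add = lambda x, y, s: [a + s * b for a, b in zip(x, y)]  # x + s*y
rhs = (norm2(add(u, v, 1)) - norm2(add(u, v, -1))
       + norm2(add(u, v, 1j)) * 1j - norm2(add(u, v, -1j)) * 1j) / 4
print(abs(inner(u, v) - rhs) < 1e-12)  # True
```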
21. We will only prove the case F = R; the case F = C is similar (a proof for that case can be seen here). Define a function ⟨·, ·⟩ : U × U → R by ⟨u, v⟩ = ¼(‖u + v‖² − ‖u − v‖²). Then it's obvious that ‖u‖ = ⟨u, u⟩^{1/2}. It suffices to show that ⟨·, ·⟩ is an inner product.
Indeed, the positivity and definiteness conditions are satisfied since ⟨u, u⟩ = ‖u‖² is 0 iff ‖u‖ = 0 iff u = 0. Now we check the condition of additivity in the first slot. We have
⟨u, v⟩ + ⟨w, v⟩ = ¼(‖u + v‖² + ‖w + v‖² − ‖u − v‖² − ‖w − v‖²).
Since the norm satisfies the parallelogram equality,
‖u + v‖² + ‖w + v‖² = ½(‖w + u + 2v‖² + ‖u − w‖²)
= ½‖u − w‖² + ½[2(‖w + u + v‖² + ‖v‖²) − ‖w + u‖²]
= ‖w + u + v‖² + ½‖u − w‖² + ‖v‖² − ½‖w + u‖².
Similarly, we obtain
‖u − v‖² + ‖w − v‖² = ‖w + u − v‖² + ½‖u − w‖² + ‖v‖² − ½‖w + u‖².
Thus,
⟨u, v⟩ + ⟨w, v⟩ = ¼(‖w + u + v‖² − ‖w + u − v‖²) = ⟨w + u, v⟩.
Next we check the condition of homogeneity in the first slot. From additivity in the first slot it follows that ⟨nu, v⟩ = n⟨u, v⟩ for any n ∈ Z. For λ = a/b ∈ Q with a, b ∈ Z, b ≠ 0, and any u, v ∈ U, we have u/b ∈ U and b⟨u/b, v⟩ = ⟨u, v⟩. Hence,
λ⟨u, v⟩ = (a/b)⟨u, v⟩ = a⟨u/b, v⟩ = ⟨λu, v⟩.
So far, homogeneity in the first slot is true over Q. Since ‖u‖ + ‖v‖ ≥ ‖u + v‖, we get (‖u‖ + ‖v‖)² ≥ ‖u + v‖², so ‖u‖‖v‖ ≥ ⟨u, v⟩. From ‖u‖ + ‖v‖ ≥ ‖u + v‖ we also conclude that the norm is continuous.
Similarly, ⟨λu, v⟩₁ = λ⟨u, v⟩₁ and ⟨u, v⟩₁ = ⟨Su, Sv⟩ = ⟨Sv, Su⟩ = ⟨v, u⟩₁.
25. Since S is not injective, there exists v ∈ V, v ≠ 0, with Sv = 0. Hence ⟨v, v⟩₁ = 0 but v ≠ 0, which fails the definiteness condition of an inner product.
26. (a) This is about the Euclidean inner product space. So
⟨f(t), g(t)⟩′ = ⟨(f₁(t), …, fₙ(t)), (g₁(t), …, gₙ(t))⟩′
= [f₁(t)g₁(t) + ⋯ + fₙ(t)gₙ(t)]′
= Σ_{i=1}^n [fᵢ′(t)gᵢ(t) + fᵢ(t)gᵢ′(t)]
= ⟨f′(t), g(t)⟩ + ⟨f(t), g′(t)⟩.
(b) We have ⟨f(t), f(t)⟩ = ‖f(t)‖² = c², so ⟨f(t), f(t)⟩′ = 0. On the other hand, from (a), ⟨f(t), f(t)⟩′ = 2⟨f′(t), f(t)⟩, so we are done.
(c) (DON’T KNOW WITH CURRENT KNOWLEDGE)
27. Since ‖w − u‖² + ‖w − v‖² = ½(‖2w − u − v‖² + ‖u − v‖²), we are done.
28. Suppose two points u, v ∈ C with u ≠ v are both closest to w ∈ C; then ‖w − u‖ = ‖w − v‖ ≤ ‖w − r‖ for all r ∈ C. Since ½(u + v) ∈ C, we have ‖w − u‖² ≤ ‖w − ½(u + v)‖² = ‖w − u‖² − ¼‖u − v‖², which forces ‖u − v‖² = 0, i.e. u = v, a contradiction. Thus there is at most one point in C closest to w.
29. (a) d(u, v) ≥ 0 and d(u, v) = 0 iff ‖u − v‖ = 0 iff u = v; d(u, v) = d(v, u); and d(u, v) + d(v, w) = ‖u − v‖ + ‖v − w‖ ≥ ‖u − w‖ = d(u, w) by the triangle inequality for the norm of exercise 21. Thus d is a metric on V.
(b) (I couldn't solve this, so I did some research and found the proof; note that the norm is defined as in exercise 21 (6A).) Let v₁, …, vₙ be a basis of V. For x = Σ_{i=1}^n αᵢvᵢ ∈ V, define a norm ‖x‖₁ = Σ_{i=1}^n |αᵢ| (one can verify that it is indeed a norm). Consider a Cauchy sequence {xₖ}_{k≥1} in the metric space (V, d) with xₖ = Σ_{i=1}^n αᵢ,ₖvᵢ, i.e. for any ε > 0 there exists N > 0 so that for all m, n > N,
d(xₘ, xₙ) = ‖xₘ − xₙ‖ = ‖Σ_{i=1}^n (αᵢ,ₘ − αᵢ,ₙ)vᵢ‖ < ε.
Since any two norms on a finite-dimensional vector space V over F are equivalent, there exists a pair of real numbers 0 < C₁ ≤ C₂ so that C₁‖x‖₁ ≤ ‖x‖ ≤ C₂‖x‖₁ for all x ∈ V. Hence, for any j = 1, …, n,
‖Σ_{i=1}^n (αᵢ,ₘ − αᵢ,ₙ)vᵢ‖ ≥ C₁ Σ_{i=1}^n |αᵢ,ₘ − αᵢ,ₙ| ≥ C₁|αⱼ,ₘ − αⱼ,ₙ|.
It follows that for any ε > 0 there exists N > 0 so that |αⱼ,ₘ − αⱼ,ₙ| < ε for all m, n > N. Hence {αⱼ,ᵢ}_{i≥1} is a Cauchy sequence in F. Since F is complete (see page 2 here for a proof of the case F = R), for each j = 1, …, n the sequence {αⱼ,ᵢ}_{i≥1} converges to some βⱼ. Consider a = Σ_{i=1}^n βᵢvᵢ. We show the Cauchy sequence {xₖ}_{k≥1} converges to a, i.e. for any ε > 0 there exists N > 0 so that d(xₙ, a) = ‖xₙ − a‖ < ε for all n > N. This is true because
‖xₙ − a‖ = ‖Σ_{i=1}^n (αᵢ,ₙ − βᵢ)vᵢ‖ ≤ C₂ Σ_{i=1}^n |αᵢ,ₙ − βᵢ| < C₂ Σ_{i=1}^n εᵢ.
(c) It suffices to prove that any Cauchy sequence {xₖ}_{k≥1} in U converges to a limit in U. This is true if we choose a basis u₁, …, uₙ of U and argue as in (b), which gives a limit u ∈ span(u₁, …, uₙ) = U.
30. (Following the hint in the book.) Let p = q + (1 − ‖x‖²)r for some polynomial r on Rⁿ. It suffices to choose r so that p is a harmonic polynomial on Rⁿ, i.e. Δ((1 − ‖x‖²)r) = Δ(−q). Let V be the set of polynomials r on Rⁿ with deg r ≤ deg q. Then V is a finite-dimensional vector space with basis the monomials x₁^{m₁}x₂^{m₂}⋯xₙ^{mₙ} with Σ_{i=1}^n mᵢ ≤ deg q. Define an operator T on V by Tr = Δ((1 − ‖x‖²)r) for r ∈ V. Consider a polynomial r with Tr = 0, i.e. h = (1 − ‖x‖²)r is a harmonic polynomial. Since a harmonic polynomial
Gram-Schmidt Procedure helps us to find an orthonormal basis of a vector space. From this,
we follow:
theo:6.34:6B Theorem 8.3.3 (6.34) Every finite-dimensional inner product space has an orthonormal
basis.
theo:6.37:6B Theorem 8.3.4 (6.37) Suppose T 2 L(V ). If T has an upper-triangular matrix with
respect to some basis of V , then T has an upper-triangular matrix with respect to some
orthonormal basis of V .
Theorem 8.3.5 (6.42, Riesz Representation Theorem) Suppose V is finite-dimensional and φ is a linear functional on V. Then there is a unique vector u ∈ V such that φ(v) = ⟨v, u⟩ for every v ∈ V. In particular, u = conj(φ(e₁))e₁ + ⋯ + conj(φ(eₙ))eₙ for ANY orthonormal basis e₁, …, eₙ of V.
8.4. Exercises 6B
1. (a) We have ⟨(cos θ, sin θ), (−sin θ, cos θ)⟩ = 0 and ‖(cos θ, sin θ)‖ = ‖(−sin θ, cos θ)‖ = √(cos²θ + sin²θ) = 1. Similarly for the other orthonormal basis.
(b) If (x₁, y₁), (x₂, y₂) is an orthonormal basis of R², then x₁x₂ + y₁y₂ = 0 and x₁² + y₁² = x₂² + y₂² = 1. Since |x₁| ≤ 1, there exists θ ∈ R with x₁ = cos θ, which gives y₁² = sin²θ. WLOG say y₁ = sin θ and x₁ ≠ 0; then x₂/y₂ = −y₁/x₁ = −tan θ. Hence x₂² + y₂² = y₂²(tan²θ + 1) = 1, so y₂² = cos²θ. WLOG y₂ = cos θ; then x₂ = −sin θ. Thus (x₁, y₁), (x₂, y₂) is (cos θ, sin θ), (−sin θ, cos θ).
2. If v ∈ span(e₁, …, eₙ) then it's true according to theorem 8.3.1.
Conversely, suppose ‖v‖² = |⟨v, e₁⟩|² + ⋯ + |⟨v, eₙ⟩|². Let u = ⟨v, e₁⟩e₁ + ⋯ + ⟨v, eₙ⟩eₙ; then
⟨u, v⟩ = ⟨v, u⟩ = |⟨v, e₁⟩|² + ⋯ + |⟨v, eₙ⟩|² = ‖v‖².
Hence ⟨v, u⟩ = ‖v‖². On the other hand, we also have ‖u‖² = ‖v‖². This gives ‖u − v‖² = ‖u‖² + ‖v‖² − ⟨u, v⟩ − ⟨v, u⟩ = 0, hence v = u ∈ span(e₁, …, eₙ).
3. According to the proof of theorem 8.3.4 (6B), it suffices to apply the Gram-Schmidt Procedure 8.3.2 to (1, 0, 0), (1, 1, 1), (1, 1, 2). We have e₁ = (1, 0, 0), and
∫_{−π}^{π} (sin ix · sin jx)/π dx = 1/(2π) ∫_{−π}^{π} [cos((i − j)x) − cos((i + j)x)] dx
= 1/(2π) [sin((i − j)x)/(i − j) − sin((i + j)x)/(i + j)]_{−π}^{π} = 0.

∫_{−π}^{π} (sin ix · cos jx)/π dx = 1/(2π) ∫_{−π}^{π} [sin((i + j)x) + sin((i − j)x)] dx
= −1/(2π) [cos((i + j)x)/(i + j) + cos((i − j)x)/(i − j)]_{−π}^{π} = 0.

∫_{−π}^{π} (cos ix)/√π · 1/√(2π) dx = 1/(π√2) [sin(ix)/i]_{−π}^{π} = 0.

∫_{−π}^{π} (sin ix)/√π · 1/√(2π) dx = −1/(π√2) [cos(ix)/i]_{−π}^{π} = 0.

∫_{−π}^{π} (sin ix · cos ix)/π dx = 1/(2π) ∫_{−π}^{π} sin(2ix) dx = 0.

Similarly, we can find that
‖1/√(2π)‖ = ‖(cos ix)/√π‖ = ‖(sin ix)/√π‖ = 1.
5. We have ‖1‖² = ∫₀¹ 1 dx = 1, so e₁ = 1. We have
x − ⟨x, e₁⟩e₁ = x − ∫₀¹ x dx = x − ½.
And
‖x − ½‖² = ∫₀¹ (x − ½)² dx = ∫₀¹ (x² − x + ¼) dx = 1/3 − 1/2 + 1/4 = 1/12,
so e₂ = (2x − 1)√3. We have
x² − ⟨x², e₁⟩e₁ − ⟨x², e₂⟩e₂ = x² − ∫₀¹ x² dx − 3(2x − 1)∫₀¹ x²(2x − 1) dx
= x² − 1/3 − ½(2x − 1)
= x² − x + 1/6.
Since
‖x² − x + 1/6‖² = ∫₀¹ (x² − x + 1/6)² dx = ∫₀¹ (x⁴ − 2x³ + (4/3)x² − (1/3)x + 1/36) dx = 1/180,
we get e₃ = √180 (x² − x + 1/6) = √5 (6x² − 6x + 1).
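The Gram-Schmidt computation for exercise 5 can be verified exactly with rational arithmetic; a small sketch (polynomials as coefficient lists, the inner product is ∫₀¹ pq; the helper names are made up for the illustration). It confirms the unnormalised third vector is x² − x + 1/6 with squared norm 1/180, so e₃ = √180 (x² − x + 1/6) = √5 (6x² − 6x + 1).

```python
from fractions import Fraction as F

def integrate01(p):
    # integral over [0,1] of the polynomial sum(p[k] * x**k)
    return sum(c / F(k + 1) for k, c in enumerate(p))

def polymul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def ip(p, q):
    return integrate01(polymul(p, q))

one = [F(1)]
x = [F(0), F(1)]
x2 = [F(0), F(0), F(1)]

# subtract the projection of x onto 1:  f2 = x - 1/2
f2 = [a - b * ip(x, one) / ip(one, one)
      for a, b in zip(x + [F(0)], one + [F(0), F(0)])]
# subtract the projections of x^2 onto 1 and f2
proj1 = ip(x2, one) / ip(one, one)
proj2 = ip(x2, f2) / ip(f2, f2)
f3 = [x2[k] - proj1 * (one + [F(0)] * 2)[k] - proj2 * (f2 + [F(0)])[k]
      for k in range(3)]
print(f3, ip(f3, f3))  # [1/6, -1, 1] and 1/180
```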
6. The differentiation operator on P₂(R) has an upper-triangular matrix with respect to the basis 1, x, x², so according to the proof of theorem 8.3.4 (6B), the answer is the orthonormal basis of exercise 5.
7. Define a linear functional T on P₂(R) by Tp = p(½). Consider the inner product ⟨p, q⟩ = ∫₀¹ p(x)q(x) dx. According to theorem 8.3.5 (6B), if Tp = ⟨p, q⟩ then q(x) = e₁(½)e₁ + e₂(½)e₂ + e₃(½)e₃, where e₁, e₂, e₃ is the orthonormal basis of P₂(R) found in exercise 5 (6B).
8. Similarly, we find
q(x) = ∫₀¹ cos(πx) dx + 3(2x − 1)∫₀¹ (2x − 1)cos(πx) dx + 180(x² − x + 1/6)∫₀¹ (x² − x + 1/6)cos(πx) dx.
10. Since span(v₁) = span(e₁), we have v₁ = ⟨v₁, e₁⟩e₁, i.e. e₁ = v₁/⟨v₁, e₁⟩. Since ‖e₁‖ = 1, this means |⟨v₁, e₁⟩| = ‖v₁‖, which leads to ⟨v₁, e₁⟩ = ±‖v₁‖. Thus there are two choices for e₁. Similarly, from vⱼ = ⟨vⱼ, e₁⟩e₁ + ⋯ + ⟨vⱼ, eⱼ⟩eⱼ we find ⟨vⱼ, eⱼ⟩ = ±‖vⱼ − ⟨vⱼ, e₁⟩e₁ − ⋯ − ⟨vⱼ, eⱼ₋₁⟩eⱼ₋₁‖, so there are two ways to choose eⱼ. In total, there are 2ᵐ orthonormal lists e₁, …, eₘ of vectors in V.
On the other hand, we can also write u = kv + w with ⟨v, w⟩₁ = 0 = ⟨v, w⟩₂. We find ⟨v, u⟩₁ = k‖v‖₁² and ⟨v, u⟩₂ = k‖v‖₂². Combining with the above, we find ‖v‖₁² = c‖v‖₂² for any v ∈ V.
Therefore, for any v, w ∈ V: if one of them is 0 then ⟨v, w⟩₁ = c⟨v, w⟩₂ = 0. If v, w ≠ 0 then v = kw + u with ⟨w, u⟩₁ = ⟨w, u⟩₂ = 0. Hence,
12. Since V is finite-dimensional, there exists an orthonormal basis e₁, …, eₙ of V with respect to the inner product ⟨·,·⟩₂. We apply the Gram-Schmidt Procedure to e₁, …, eₙ with respect to the inner product ⟨·,·⟩₁ and obtain another orthonormal basis u₁, …, uₙ of V. With this, we have span(e₁, …, eⱼ) = span(u₁, …, uⱼ) for all 1 ≤ j ≤ n. According to theorem 8.3.1 (6B) we have ‖v‖₂² = Σ_{i=1}^n |⟨v, eᵢ⟩|² and
‖v‖₁² = Σ_{k=1}^n |⟨v, uₖ⟩|²
= Σ_{k=1}^n |⟨v, Σ_{i=1}^k ⟨uₖ, eᵢ⟩eᵢ⟩|²
= Σ_{k=1}^n |Σ_{i=1}^k conj(⟨uₖ, eᵢ⟩)⟨v, eᵢ⟩|²
≤ Σ_{k=1}^n (Σ_{i=1}^k |⟨uₖ, eᵢ⟩|·|⟨v, eᵢ⟩|)²
= Σ_{k=1}^n Aₖ|⟨v, eₖ⟩|² + Σ_{1≤i<j≤n} B_{i,j}|⟨v, eᵢ⟩|·|⟨v, eⱼ⟩|.
Here the Aₖ and B_{i,j} are constants obtained from the |⟨uᵢ, eⱼ⟩|, so they are the same regardless of which v ∈ V is chosen. Note that 2|⟨v, eᵢ⟩⟨v, eⱼ⟩| ≤ |⟨v, eᵢ⟩|² + |⟨v, eⱼ⟩|², so if we choose C = Σ_{k=1}^n Aₖ + Σ_{1≤i<j≤n} B_{i,j}, we obtain C‖v‖₂² ≥ ‖v‖₁² for all v ∈ V.
13. Apply the Gram-Schmidt Procedure 8.3.2 (6B) to the linearly independent list v₁, …, vₘ to find an orthonormal list e₁, …, eₘ with span(e₁, …, eⱼ) = span(v₁, …, vⱼ) for all 1 ≤ j ≤ m. Hence, according to theorem 8.3.1 (6B), for w ∈ span(v₁, …, vₘ) we have w = ⟨w, e₁⟩e₁ + ⋯ + ⟨w, eₘ⟩eₘ and vⱼ = ⟨vⱼ, e₁⟩e₁ + ⋯ + ⟨vⱼ, eⱼ⟩eⱼ, so
⟨w, vⱼ⟩ = Σ_{i=1}^j ⟨w, eᵢ⟩·conj(⟨vⱼ, eᵢ⟩) = Cⱼ + ⟨w, eⱼ⟩·conj(⟨vⱼ, eⱼ⟩).
This leads to
|bⱼ|² / Σ_{k=1}^n |bₖ|² < [1/n − (1 − |⟨vⱼ, eⱼ⟩|)²] / [|⟨vⱼ, eⱼ⟩|² + 1/n − (1 − |⟨vⱼ, eⱼ⟩|)²].
Next, we will show that the right-hand side is at most 1/n. Indeed, this is equivalent to (n|⟨vⱼ, eⱼ⟩| − n + 1)² ≥ 0, which is true. Therefore, for all 1 ≤ j ≤ n we have |bⱼ|²/Σ_{k=1}^n |bₖ|² < 1/n, which leads to a contradiction if we sum up all the inequalities. Thus v₁, …, vₙ is linearly independent, so v₁, …, vₙ is a basis of V.
15. We prove it for general C_R([a, b]) and φ(f) = f(c) for some c ∈ [a, b]. Assume the contrary, that such a function g exists. Pick f(x) = (x − c)²g(x); then f(c) = 0 and f ∈ C_R([a, b]). Hence ∫ₐᵇ f(x)g(x) dx = ∫ₐᵇ [(x − c)g(x)]² dx = f(c) = 0, which forces (x − c)g(x) = 0, so g(x) = 0 for all x ∈ [a, b], since g is continuous on [a, b]. Hence ⟨f, g⟩ = 0 for all f ∈ C_R([a, b]), which is a contradiction if we choose f with f(c) ≠ 0. Thus no such function g exists.
16. We induct on dim V. For dim V = 1, there exists v ∈ V with ‖v‖ = 1 and Tv = λv with |λ| < 1. Since 0 ≤ |λ| < 1, lim_{m→∞}|λ|ᵐ = 0, so for any ε > 0 there exists M so that for all m > M, ‖Tᵐv‖ = ‖λᵐv‖ = |λ|ᵐ < ε.
Suppose the problem is true for dim V = n − 1. For dim V = n: since F = C, T has an upper-triangular matrix with respect to some basis of V by theorem 7.3.2 (5B), hence an upper-triangular matrix with respect to some orthonormal basis e₁, …, eₙ of V by theorem 8.3.4 (6B). Then U = span(e₁, …, eₙ₋₁) is invariant under T, so by the inductive hypothesis, for 0 < ε < 1 there exists m ∈ Z, m ≥ 1, so that ‖Tᵐv‖ ≤ ε‖v‖ for all v ∈ U.
We can easily show that Tᵐeₙ = u + α_{n,n}ᵐeₙ where u ∈ U. Hence we can prove inductively that for any positive integer k,
T^{mk}eₙ = Σ_{i=0}^{k−2} α_{n,n}^{im} T^{(k−1−i)m}u + α_{n,n}^{m(k−1)} Tᵐeₙ.
Since max{ε, |α_{n,n}|} < 1, we have lim_{x→∞} x·max{ε, |α_{n,n}|}ˣ = 0. Hence there exists X so that for all x > X, x·max{ε, |α_{n,n}|}ˣ < ε″/‖u‖ for sufficiently small ε″ > 0. If we choose k large enough, then for any 0 ≤ i ≤ k − 2 we have
‖α_{n,n}^{im} T^{(k−1−i)m}u‖ = |α_{n,n}|^{im} ‖T^{(k−1−i)m}u‖,
and, writing v = λ₁e₁ + ⋯ + λₙeₙ = w + λₙeₙ with w ∈ U,
‖T^{mk}v‖ ≤ ‖T^{mk}w‖ + |λₙ|·‖T^{mk}eₙ‖
≤ ε^{mk}‖w‖ + |λₙ|ε″
= ε^{mk}√(|λ₁|² + ⋯ + |λₙ₋₁|²) + |λₙ|ε″
≤ √(|λ₁|² + ⋯ + |λₙ|²) = ‖v‖.
exer:6B:17 17. (a) We show that for u, v 2 V then u+ v = (u + v). Indeed, we have for any w 2 V
then
1. P_U ∈ L(V).
4. v − P_U v ∈ U⊥.
5. P_U² = P_U.
6. ‖P_U v‖ ≤ ‖v‖.
8.6. Exercises 6C
1. v ∈ {v₁, …, vₘ}⊥ iff ⟨v, vᵢ⟩ = 0 for all 1 ≤ i ≤ m iff ⟨v, w⟩ = 0 for all w ∈ span(v₁, …, vₘ) iff v ∈ span(v₁, …, vₘ)⊥. Thus {v₁, …, vₘ}⊥ = span(v₁, …, vₘ)⊥.
2. If U = V then obviously U⊥ = {0}. Conversely, suppose U⊥ = {0}. Let e₁, …, eₘ be an orthonormal basis of U. If U ≠ V then there exists v ∈ V with v ∉ U. Apply the Gram-Schmidt Procedure 8.3.2 (6B) to the linearly independent list e₁, …, eₘ, v to get an orthonormal list e₁, …, eₘ, eₘ₊₁, which gives eₘ₊₁ ∈ U⊥, a contradiction. Thus U = V.
3. According to the Gram-Schmidt Procedure, span(u₁, …, uₘ) = span(e₁, …, eₘ) = U, so e₁, …, eₘ is an orthonormal basis of U. Since each fᵢ is orthogonal to each eⱼ, we have fᵢ ∈ U⊥ for each 1 ≤ i ≤ n. Hence f₁, …, fₙ is an orthonormal list in U⊥. On the other hand, dim U⊥ = dim V − dim U = n, so f₁, …, fₙ is an orthonormal basis of U⊥.
4. According to exercise 3 (6C), apply the Gram-Schmidt Procedure to (1, 2, 3, 4), (−5, 4, 3, 2), (1, 0, 0, 0), (0, 1, 0, 0).
5. Since V is finite-dimensional, U and U⊥ are finite-dimensional. Since U = (U⊥)⊥ according to theorem 8.5.2 (6C), from theorem 8.5.1 (6C) we have V = U ⊕ U⊥ =
‖v‖² = ‖Pv‖² + Σ_{j=1}^n |βⱼ|² + 2Re Σ_{i=1}^m Σ_{j=1}^n αᵢ·conj(βⱼ)·⟨eᵢ, fⱼ⟩.
If there exists at least one ⟨eᵢ, fⱼ⟩ ≠ 0, then choose αₓ = βᵧ = 0 for all x ≠ i, y ≠ j, and choose αᵢ, βⱼ so that αᵢ·conj(βⱼ)·⟨eᵢ, fⱼ⟩ = K|⟨eᵢ, fⱼ⟩|² for some real number K < 0 with 2K|⟨eᵢ, fⱼ⟩|² + |βⱼ|² < 0. Hence
‖v‖² − ‖Pv‖² = |βⱼ|² + 2K|⟨eᵢ, fⱼ⟩|² < 0.
This contradicts the assumption that ‖v‖ ≥ ‖Pv‖. Thus we must have ⟨eᵢ, fⱼ⟩ = 0 for every 1 ≤ i ≤ m, 1 ≤ j ≤ n. Thus every vector in null P is orthogonal to every vector in range P = U, i.e. null P ⊂ U⊥. Combining this with v = Pv + w for any v ∈ V, we find P_U v = P_U(Pv + w) = Pv for all v ∈ V.
9. Since U is a finite-dimensional subspace of V, theorem 8.5.1 (6C) gives V = U ⊕ U⊥, i.e. each v can be written uniquely as v = u + w for u ∈ U, w ∈ U⊥.
If U is invariant under T, then Tu ∈ U, so for any v = u + w ∈ V we have P_U T P_U v = P_U Tu = Tu = T P_U v, which gives P_U T P_U = T P_U. Conversely, if P_U T P_U u = T P_U u, i.e. P_U Tu = Tu, then Tu ∈ U for all u ∈ U. Thus U is invariant under T.
10. Since V is finite-dimensional, theorem 8.5.1 (6C) gives V = U ⊕ U⊥, i.e. each v can be represented uniquely as v = u + w with u ∈ U, w ∈ U⊥.
If U and U⊥ are both invariant under T, then for any v ∈ V, Tv = Tu + Tw with Tu ∈ U, Tw ∈ U⊥. Hence P_U Tv = P_U(Tu + Tw) = Tu = T P_U v, so P_U T = T P_U. Conversely, if P_U T = T P_U, then for any u ∈ U we have P_U Tu = T P_U u = Tu, so Tu ∈ U. This shows U is invariant under T. For any w ∈ U⊥ we have P_U Tw = T P_U w = T(0) = 0, so Tw ∈ null P_U = U⊥. Thus U⊥ is invariant under T.
11. Such u is exactly P_U(1, 2, 3, 4), according to theorem 8.5.4 (6C). We do this step by step:
• Apply Gram-Schmidt Procedure to (1, 1, 0, 0), (1, 1, 1, 2) and obtain orthonormal basis
of U which is e1 , e2 .
• Calculate h(1, 2, 3, 4), e1 i , h(1, 2, 3, 4), e2 i (note that since V = R4 so we’re talking
about Euclidean inner product).
• PU (1, 2, 3, 4) = h(1, 2, 3, 4), e1 i e1 + h(1, 2, 3, 4), e2 i e2 is the answer.
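The three steps above can be carried out numerically; a minimal sketch in plain Python, assuming the Euclidean inner product on R⁴ (the helper names are made up for the illustration):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(a, s):
    return [s * x for x in a]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def gram_schmidt(vectors):
    # orthonormalise a linearly independent list
    basis = []
    for v in vectors:
        for e in basis:
            v = sub(v, scale(e, dot(v, e)))   # remove component along e
        basis.append(scale(v, 1 / math.sqrt(dot(v, v))))
    return basis

e1, e2 = gram_schmidt([[1, 1, 0, 0], [1, 1, 1, 2]])
v = [1, 2, 3, 4]
proj = [a + b for a, b in zip(scale(e1, dot(v, e1)), scale(e2, dot(v, e2)))]
print(proj)
```

This prints u = (3/2, 3/2, 11/5, 22/5), the point of U closest to (1, 2, 3, 4).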
12. Define the subspace U of P₃(R) by U = {p ∈ P₃(R) : p(0) = 0, p′(0) = 0}. It follows that U = {ax³ + bx² : a, b ∈ R} = span(x³, x²). Hence, similarly to the previous exercise, we find an orthonormal basis of U from the basis x³, x² and the inner product on P₃(R):
⟨p, q⟩ = ∫_{−π}^{π} p(x)q(x) dx.
14. (a) If U⊥ ≠ {0}, we show that for any g ∈ U⊥, g ≠ 0, we have xg(x) ∈ U⊥. Indeed, for any f ∈ U we have xf(x) ∈ U, so
⟨f(x), xg(x)⟩ = ∫_{−1}^{1} [xf(x)]g(x) dx = ⟨xf(x), g(x)⟩ = 0.
Definition 9.1.1 (adjoint). Suppose T ∈ L(V, W). The adjoint of T is the function T* : W → V such that ⟨Tv, w⟩ = ⟨v, T*w⟩.
Definition 9.1.2 (self-adjoint operator). An operator T ∈ L(V) is called self-adjoint or Hermitian if T = T*.
From theorem 9.1.3, we find that if T is self-adjoint, then null T = (range T)⊥. Adding the condition T² = T, we obtain that T is an orthogonal projection (definition 8.5.3).
From theorem 9.1.4 (7A), we find that for a self-adjoint operator A, i.e. A = A*, the matrix of A with respect to an orthonormal basis is equal to its conjugate transpose. Such a matrix is called a Hermitian matrix. In a real inner product space, the matrix of a self-adjoint operator A (w.r.t. an orthonormal basis) is equal to its own transpose; this kind of matrix is called a real symmetric matrix.
Theorem 9.1.6 Suppose T ∈ L(V) is normal and v ∈ V is an eigenvector of T with eigenvalue λ. Then v is also an eigenvector of T* with eigenvalue conj(λ).
theo:7.22:7A Theorem 9.1.7 (7.22) Suppose T 2 L(V ) is normal. Then eigenvectors of T corresponding
to distinct eigenvalues are orthogonal.
How to find the adjoint T* of T ∈ L(V, W)? There are two ways:
1. Directly: find T* from ⟨Tv, w⟩ = ⟨v, T*w⟩. See exercise 6 (7D) or exercise 1 (7A) as examples.
2. Indirectly: find orthonormal bases of V and W. Write M(T) with respect to these two bases; then M(T*) is the conjugate transpose of M(T), according to theorem 9.1.4 (7A).
9.2. Exercises 7A
exer:7A:1 1. For any (a1 , . . . , an ) 2 Fn then
surjective iff T ⇤ is injective.
6. (a) Write out an orthonormal basis of P₂(R) by applying the Gram-Schmidt Procedure to 1, x, x², then write out M(T) with respect to this orthonormal basis.
(b) This is not a contradiction, because that matrix of T was formed with respect to a nonorthonormal basis of P₂(R).
exer:7A:7 7. Since S, T are self-adjoint so for any v, w 2 V then hST v, wi = hT v, Swi = hv, T Swi.
Thus, ST is self-adjoint iff for any v, w 2 V then hST v, wi = hv, ST wi iff hv, T Swi =
hv, ST wi iff ST = T S.
14. Since T is normal, theorem 9.1.7 (7A) gives ⟨v, w⟩ = 0, hence ⟨Tv, Tw⟩ = 0. So ‖T(v + w)‖² = ‖Tv‖² + ‖Tw‖² = 2²(3² + 4²) = 100, and therefore ‖T(v + w)‖ = 10.
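A concrete instance matching the data of the exercise can be checked in two lines (a diagonal, hence normal, T with Tv = 3v, Tw = 4w, and v ⟂ w of norm 2):

```python
import math

# v and w are orthogonal eigenvectors of norm 2; T is diagonal, hence normal
v = (2.0, 0.0)
w = (0.0, 2.0)
T = lambda u: (3.0 * u[0], 4.0 * u[1])

s = T((v[0] + w[0], v[1] + w[1]))
print(math.hypot(*s))  # 10.0
```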
Thus T*w = ⟨w, x⟩u.
(a) With F = R: if T is self-adjoint then Tv = T*v for every v ∈ V, so ⟨v, u⟩x = ⟨v, x⟩u, hence u, x is linearly dependent.
If u, x is linearly dependent with u = λx, then for any v ∈ V we have ⟨v, u⟩x = ⟨v, λx⟩(u/λ) = ⟨v, x⟩u, so T = T*, i.e. T is self-adjoint.
(b) We have TT*v = ⟨v, x⟩Tu = ⟨v, x⟩‖u‖²x and T*Tv = ⟨v, u⟩T*x = ⟨v, u⟩‖x‖²u. If T is normal, i.e. TT* = T*T, then u, x is linearly dependent.
If u = λx, then for every v ∈ V we have
⟨v, x⟩‖u‖²x = ⟨v, u/λ⟩·|λ|²‖x‖²·(u/λ) = ⟨v, u⟩‖x‖²u.
16. First we show that null T = null T*. Indeed, v ∈ null T iff 0 = ‖Tv‖^2 = ‖T*v‖^2 iff T*v = 0, i.e. v ∈ null T*. Thus null T = null T*, so (null T)^⊥ = (null T*)^⊥, so range T = range T* by theorem 9.1.3 (7A).
17. We show that for any positive integer k, null T^k = null T^{k+1}. It is obvious that if v ∈ null T^k then v ∈ null T^{k+1}. If v ∈ null T^{k+1} then T^k v ∈ null T, and since null T = null T* by exercise 16 (7A), T^k v ∈ null T*. On the other hand, 0 = ⟨T* T^k v, T^{k−1} v⟩ = ⟨T^k v, T^k v⟩, so T^k v = 0, i.e. v ∈ null T^k.
20. Note that both Φ_V ∘ T* and T' ∘ Φ_W are linear maps from W to V'. Hence it suffices to show that they agree on each w ∈ W. Since these two images are linear functionals on V, it again suffices to prove that for every v ∈ V, (Φ_V(T*w))v = (T'(Φ_W w))v. Indeed, by the definition of Φ_V we have (Φ_V(T*w))v = ⟨v, T*w⟩. On the other hand, (T'(Φ_W w))v = ((Φ_W w) ∘ T)v = (Φ_W w)(Tv) = ⟨Tv, w⟩. Since ⟨v, T*w⟩ = ⟨Tv, w⟩, we obtain the desired equality.
exer:7A:21 21. From exercise 4 (6B), we find the list
(a) T is normal.
Theorem 9.3.2 (7.29, Real Spectral Theorem) Suppose F = R and T ∈ L(V). Then the following are equivalent:
(a) T is self-adjoint.
9.4. Exercises 7B
1. Pick T ∈ L(R^3) with T(1, 1, 0) = 2(1, 1, 0), T(0, 1, 0) = 3(0, 1, 0) and T(0, 0, 1) = 4(0, 0, 1). With respect to the standard basis (which is also an orthonormal basis with respect to the usual inner product) we have

M(T) = [ 2 0 0 ; −1 3 0 ; 0 0 4 ],   M(T*) = [ 2 −1 0 ; 0 3 0 ; 0 0 4 ]

(rows separated by semicolons). Thus M(T) ≠ M(T*), so T is not self-adjoint.
3. Pick T ∈ L(C^3) with T(1, 0, 0) = 3(1, 0, 0), T(0, 1, 0) = 2(0, 1, 0) and T(0, 0, 1) = 2(1, 1, 1). Then (T^2 − 5T + 6I)(0, 0, 1) = (0, −2, 0), so T^2 − 5T + 6I ≠ 0, while 2 and 3 are the only eigenvalues of T.
4. If T is normal then by theorem 9.1.7 (7A), every pair of eigenvectors corresponding to distinct eigenvalues of T is orthogonal. By the Complex Spectral Theorem 9.3.1, T has a diagonal matrix, so from theorem 7.5.2 (5C), V = E(λ_1, T) ⊕ · · · ⊕ E(λ_m, T).
Conversely, by theorem 8.3.3 (6B), each finite-dimensional vector space E(λ_i, T) has an orthonormal basis. Combining the orthonormal bases of the spaces E(λ_i, T) for 1 ≤ i ≤ m, we obtain a new orthonormal basis of V (since every vector in E(λ_i, T) is orthogonal to every vector in E(λ_j, T) for j ≠ i). This orthonormal basis has length dim E(λ_1, T) + · · · + dim E(λ_m, T) = dim V and consists of eigenvectors of T, so T is diagonalizable with respect to an orthonormal basis, hence normal.
8. Unsolved.
9. Every normal operator T has a diagonal matrix with respect to some orthonormal basis, by the Complex Spectral Theorem 9.3.1 (7B). Pick an operator S ∈ L(V) such that S has the diagonal matrix M(S) with M(S)_{i,i} = √(M(T)_{i,i}) for every 1 ≤ i ≤ n (possible because we are working over a complex vector space). Hence M(S^2) = (M(S))^2 = M(T), so S^2 = T.
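A numpy sketch of this construction (the normal matrix T below, a rotation, is my assumed example; for a normal matrix with distinct eigenvalues, np.linalg.eig returns orthonormal eigenvector columns):

```python
import numpy as np

# Diagonalize a normal T in an orthonormal eigenbasis, take square roots
# of the eigenvalues on the diagonal, and transform back.
T = np.array([[0.0 + 0j, -1.0], [1.0, 0.0]])          # normal: T T* = T* T
assert np.allclose(T @ T.conj().T, T.conj().T @ T)

eigvals, U = np.linalg.eig(T)     # columns of U: unit (here orthogonal) eigenvectors
S = U @ np.diag(np.sqrt(eigvals)) @ U.conj().T
assert np.allclose(S @ S, T)      # S is a square root of T
```

Over R this can fail (a negative eigenvalue has no real square root), which is why the solution needs a complex vector space.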
10. Theorem 7.3.5 (look at the proof) makes the construction easier. It suffices to choose an operator T that has no eigenvalues. Pick V = R^2 and T(x, y) = (−y, x); if T(x, y) = λ(x, y) we find −y = λx and x = λy, so y(λ^2 + 1) = 0, which happens only when y = 0 and hence x = 0. Thus T has no eigenvalues. Hence, since for any v ∈ R^2, v ≠ 0, the list T^2 v, Tv, v is linearly dependent, there exist real numbers b, c with b^2 < 4c and T^2 v + bTv + cv = 0. The condition b^2 < 4c is guaranteed because T has no eigenvalues; otherwise we could factorise T^2 v + bTv + cv and obtain an eigenvalue of T.
11. If T is self-adjoint then T is also normal, so by the Real/Complex Spectral Theorem T has a diagonal matrix M(T) with respect to some orthonormal basis of V. Pick S ∈ L(V) that also has a diagonal matrix, with M(S)_{i,i} = (M(T)_{i,i})^{1/3} for every i. We can conclude S^3 = T from here.
12. Since T is self-adjoint, by the Complex/Real Spectral Theorem V has an orthonormal basis consisting of eigenvectors of T; i.e. each v ∈ V with ‖v‖ = 1 can be represented as v = Σ_{i=1}^m a_i u_i where u_i ∈ E(λ_i, T), ⟨u_i, u_j⟩ = 0 for 1 ≤ i < j ≤ m, and a_i ∈ F with Σ_{i=1}^m |a_i|^2 = 1. Hence

‖Tv − λv‖^2 = ‖Σ_{i=1}^m (λ_i − λ)a_i u_i‖^2 = Σ_{i=1}^m |(λ_i − λ)a_i|^2.

If |λ_i − λ| ≥ ε for all 1 ≤ i ≤ m then ‖Tv − λv‖^2 ≥ ε^2 Σ_{i=1}^m |a_i|^2 = ε^2, so ‖Tv − λv‖ ≥ ε, a contradiction. Thus there must exist i with |λ_i − λ| < ε.
13. By exercise 3 (7A), if U is invariant under T then U^⊥ is invariant under T*. Hence if T is normal then T*|_{U^⊥} ∈ L(U^⊥) is normal.
We prove the Complex Spectral Theorem by induction on dim V. In a complex inner product space V there always exists an eigenvector u ≠ 0 corresponding to an eigenvalue λ of T. Let U = span(u); since dim U^⊥ < dim V, there exists an orthonormal basis e_1, . . . , e_{n−1} of U^⊥ consisting of eigenvectors of T*|_{U^⊥}. Since T is normal, by exercise 2 (7A), e_1, . . . , e_{n−1} is also an orthonormal basis consisting of eigenvectors of T. Note that the eigenvalues of T for these eigenvectors must be different from λ, otherwise u ∈ U^⊥, which would mean u = 0, a contradiction. Thus V has an orthonormal basis u, e_1, . . . , e_{n−1} consisting of eigenvectors of T.
14. One direction is true by the Real Spectral Theorem. If U has a basis u_1, . . . , u_n consisting of eigenvectors of T, consider the inner product on U for which ⟨u_i, u_j⟩ = 0 for all 1 ≤ i < j ≤ n; this inner product makes T into a self-adjoint operator, by the Real Spectral Theorem.
Another terminology for the above definition is positive semidefinite. A positive semidefinite operator that satisfies ⟨Tv, v⟩ > 0 for all nonzero v is called positive definite.
Theorem 9.5.2 (7.35, Positive Operators) Let T ∈ L(V). Then the following are equivalent:
(a) T is positive;
Theorem 9.5.3 (7.42, Isometries) Suppose S ∈ L(V). Then the following are equivalent:
(a) S is an isometry;
(d) there exists an orthonormal basis e_1, . . . , e_n of V such that Se_1, . . . , Se_n is orthonormal;
(e) S*S = I;
(f) SS* = I;
(g) S* is an isometry;
Theorem 9.5.4 (7.43) Suppose V is a complex inner product space and S ∈ L(V). Then the following are equivalent:
(a) S is an isometry.
9.6. Exercises 7C
1. Let e_1, e_2 be an orthonormal basis of V. Let Te_1 = xe_1 + ye_2 and Te_2 = ye_1 + ze_2 with x, y, z ∈ R, 2y + x + z < 0 and x, z ≥ 0. Then T is self-adjoint, ⟨Te_1, e_1⟩ = x ≥ 0 and ⟨Te_2, e_2⟩ = z ≥ 0. On the other hand, ⟨T(e_1 + e_2), e_1 + e_2⟩ = x + 2y + z < 0, so T is not positive.
6. If T is a positive operator on V then by theorem 9.5.2 (7C), T has a self-adjoint square root R ∈ L(V). Hence R^k = (R*)^k = (R^k)*, so R^k is self-adjoint for every positive integer k. Hence T^k = (R^2)^k = (R^k)^2, so again from theorem 9.5.2 (7C), T^k is a positive operator on V.
8. It suffices to check the definiteness condition of an inner product. We have ⟨v, v⟩_T = 0 iff ⟨Tv, v⟩ = 0. Applying the previous exercise 7 (7C), the condition that ⟨Tv, v⟩ = 0 only for v = 0 is equivalent to T being invertible.
9. Let S be a self-adjoint square root of the identity operator I. By the Real/Complex Spectral Theorem 9.3.1 (7B), V has an orthonormal basis e_1, e_2 consisting of eigenvectors of S corresponding to eigenvalues λ_1, λ_2. Thus S^2 e_1 = λ_1^2 e_1 = Ie_1 = e_1, so λ_1^2 = 1. Similarly λ_2^2 = 1. Hence the identity operator on F^2 has finitely many self-adjoint square roots.
10. If (a) holds, i.e. S is an isometry, then from theorem 9.5.3 (7C), S* is also an isometry, which gives (b). If any of (b), (c) or (d) holds, it follows that S* is an isometry, which leads back to the other conditions by theorem 9.5.3 (7C).
11. Similar to exercise 12 (5C). Since T_1 is normal, by the Complex Spectral Theorem 9.3.1 (7B) there exists an orthonormal basis v_1, v_2, v_3 consisting of eigenvectors of T_1 corresponding to the eigenvalues 2, 5, 7. Similarly, there exists an orthonormal basis w_1, w_2, w_3 consisting of eigenvectors of T_2 corresponding to 2, 5, 7. Let S ∈ L(F^3) with Sv_1 = w_1, Sv_2 = w_2 and Sv_3 = w_3; then S is invertible. Similarly to exercise 15 (5C), we can show that T_1 = S^{−1} T_2 S. Now it suffices to show that S is an isometry. Indeed, for any v ∈ V with v = Σ_{i=1}^3 a_i v_i we have ‖Sv‖^2 = ‖Σ_{i=1}^3 a_i w_i‖^2 = Σ_{i=1}^3 |a_i|^2 = ‖v‖^2. Thus S is an isometry, so by theorem 9.5.3 (7C), S^{−1} = S*, hence T_1 = S* T_2 S.
12. It is essentially exercise 13 (5C).
13. It is false. Pick S ∈ L(V) with Se_j = e_j for all j ≠ 2 and Se_2 = x_1 e_1 + x_2 e_2, where x_1, x_2 ∈ R, x_1 ≠ 0 and |x_1|^2 + |x_2|^2 = 1 to guarantee ‖Se_2‖ = 1. Then ‖S(e_1 + e_2)‖^2 = ‖(x_1 + 1)e_1 + x_2 e_2‖^2 = |x_1 + 1|^2 + |x_2|^2 = 2 + 2x_1 ≠ 2 = ‖e_1 + e_2‖^2. Thus S is not an isometry.
exer:7C:14 14. We’ve already shown that T is self-adjoint so T is also self-adjoint. We have
h( T ) sin ix, sin ixi = hsin ix, sin ixi > 0, h( T ) cos ix, cos ixi = hcos ix, cos ixi > 0.
exer:7A:21
And h( T )ei , ej i = 0 for i 6= j (see exercise 21 (7A) for more details). With this, we can
conclude that h( T )v, vi > 0 for all v 2 V , i.e. T is positive.
Theorem 9.7.2 (7.51, Singular Value Decomposition) Suppose T ∈ L(V) has singular values s_1, . . . , s_n. Then there exist orthonormal bases e_1, . . . , e_n and f_1, . . . , f_n of V such that

Tv = s_1 ⟨v, e_1⟩ f_1 + · · · + s_n ⟨v, e_n⟩ f_n

for every v ∈ V.
Theorem 9.7.3 (7.52) Suppose T ∈ L(V). Then the singular values of T are the nonnegative square roots of the eigenvalues of T*T, with each eigenvalue λ repeated dim E(λ, T*T) times.
Rv = R( Σ_{i=1}^n ⟨v, e_i⟩ e_i ) = Σ_{i=1}^n √λ_i ⟨v, e_i⟩ e_i.
• Find T* (see 7A), then find T*T, then find all eigenvalues of T*T. The singular values of T are the nonnegative square roots of the eigenvalues of T*T, by theorem 9.7.3 (7D).
• If T is self-adjoint: the singular values of T equal the absolute values of the eigenvalues of T, by exercise 10 (7D).
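Both routes can be cross-checked with numpy (the matrix T is my assumed example):

```python
import numpy as np

# Singular values of T = nonnegative square roots of the eigenvalues of T*T.
T = np.array([[1.0, 2.0], [0.0, 3.0]])

from_svd = np.sort(np.linalg.svd(T, compute_uv=False))
from_eig = np.sort(np.sqrt(np.linalg.eigvalsh(T.T @ T)))  # T*T is self-adjoint
assert np.allclose(from_svd, from_eig)
```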
9.8. Exercises 7D
1. By the solution of exercise 15 (7A), T*w = ⟨w, x⟩u, so T*Tv = ⟨v, u⟩T*x = ⟨v, u⟩⟨x, x⟩u. One can verify that √(T*T) v = (‖x‖/‖u‖) ⟨v, u⟩ u is indeed true.
2. Let T(a, b) = (5b, 0); then T*(x, y) = (0, 5x). Hence T*T(x, y) = (0, 25y), so √(T*T)(x, y) = (0, 5y), which has the two singular values 0, 5, while 0 is the only eigenvalue of T.
3. Applying the Polar Decomposition 9.7.1 (7D) to T*, there exists an isometry S ∈ L(V) with T* = S √((T*)* T*) = S √(TT*). Hence T = (T*)* = (S √(TT*))* = √(TT*)* S* = √(TT*) S*, because √(TT*) is positive, hence self-adjoint. Note that since S is an isometry, S* is also an isometry.
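The polar decomposition T = S √(T*T) can be built from an SVD; a numpy sketch (the matrix T is my assumed example): writing T = UΣV*, take S = UV* (an isometry) and √(T*T) = VΣV*.

```python
import numpy as np

T = np.array([[2.0, 1.0], [0.0, 1.0]])

U, sigma, Vh = np.linalg.svd(T)
S = U @ Vh                         # isometry
P = Vh.T @ np.diag(sigma) @ Vh     # sqrt(T*T), positive

assert np.allclose(S @ S.T, np.eye(2))   # S is an isometry
assert np.allclose(P @ P, T.T @ T)       # P^2 = T*T
assert np.allclose(S @ P, T)             # polar decomposition T = S P
```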
4. Since s is a singular value of T, there exists v ∈ V with ‖v‖ = 1 and √(T*T) v = sv. Hence ‖Tv‖ = ‖√(T*T) v‖ = |s| ‖v‖ = s.
5. We have
⟨(a, b), T*(x, y)⟩ = ⟨T(a, b), (x, y)⟩ = ⟨(−4b, a), (x, y)⟩ = −4bx + ay = ⟨(a, b), (y, −4x)⟩.
Hence T*(x, y) = (y, −4x), so T*T(x, y) = T*(−4y, x) = (x, 16y). Thus the singular values of T are 1 and 4.
6. For all a_2, a_1, a_0 ∈ R we have

⟨a_2 x^2 + a_1 x + a_0, T*(b_2 x^2 + b_1 x + b_0)⟩ = ⟨T(a_2 x^2 + a_1 x + a_0), b_2 x^2 + b_1 x + b_0⟩
  = ⟨2a_2 x + a_1, b_2 x^2 + b_1 x + b_0⟩
  = ∫_{−1}^{1} (2a_2 x + a_1)(b_2 x^2 + b_1 x + b_0) dx
  = (2/3)(2a_2 b_1 + a_1 b_2) + 2a_1 b_0
  = a_2 · (4/3)b_1 + a_1 ((2/3)b_2 + 2b_0).

On the other hand, writing T*(b_2 x^2 + b_1 x + b_0) = c_2 x^2 + c_1 x + c_0, we have

⟨a_2 x^2 + a_1 x + a_0, c_2 x^2 + c_1 x + c_0⟩ = 2a_0 c_0 + (2/3)(a_1 c_1 + a_2 c_0 + a_0 c_2) + (2/5) a_2 c_2
  = a_2 ((2/5)c_2 + (2/3)c_0) + a_1 · (2/3)c_1 + a_0 ((2/3)c_2 + 2c_0).

Matching the coefficients of a_2, a_1, a_0 gives (4/3)b_1 = (2/5)c_2 + (2/3)c_0, (2/3)b_2 + 2b_0 = (2/3)c_1 and (2/3)c_2 + 2c_0 = 0. It follows that c_2 = (15/2)b_1, c_0 = −(5/2)b_1 and c_1 = b_2 + 3b_0. We obtain

T*(b_2 x^2 + b_1 x + b_0) = (15/2)b_1 x^2 + (b_2 + 3b_0)x − (5/2)b_1.
8. Since T = SR, we get T* = (SR)* = R*S* = RS*, so T*T = RS*SR = R^2, hence R = √(T*T).
p p p
exer:7D:9 9. Since kT p vk = k T T vk so T is invertible iff T T is invertible
⇤ ⇤ iff null T ⇤ T = {0} iff
range T ⇤ T = V . Thus, it suffices p to show that range T ⇤ T = V iff there exists a unique
isometry S 2 L(V ) so T = S T T . ⇤
p
Indeed, if range T ⇤ T = p V then obviously isometry operator S is uniquely determined
since Spmust satisfy T =pS T ⇤ T . Conversely, if there exists p a unique isometry S 2 L(V ) so
T = S T ⇤ T but range T ⇤ T 6= V ,theo:7.45:7D
which means range T ⇤ T )? 6= {0}. Hence, by looking
back proof of Polar Decomposition 9.7.1 (7D) p from the book, for m ? 1, let e1 , . . . , em and
f1 , . . . , fm be orthonormal bases of (range T ⇤ T )? and (range p T ) , respectively. Hence,
we have at least two ways to choose linear map S2 : (range T ⇤ T )? ! (range T )? , one is
S2 (a1 e1 + . . . am em ) = a1 f1 + . . . + am fm or S2 (a1 e1 + . . . , am fm ) p
= a1 f1 . . . am fm .
This follows there are at least p two possible isometries S so T = S T ⇤ T , a contradiction.
Thus, we must have range T ⇤ T = V or T is invertible.
10. By theorem 9.7.3 (7D), the singular values of T are the nonnegative square roots of the eigenvalues of T*T. Since T is self-adjoint, T*T = T^2, and the eigenvalues of T^2 are the squares of the eigenvalues of T. Thus the singular values of T equal the absolute values of the eigenvalues of T, repeated appropriately.
11. It suffices to show that T*T and TT* have the same eigenvalues, repeated the same number of times. Since T*T is self-adjoint, by the Spectral Theorem there exists an orthonormal basis e_1, . . . , e_n consisting of eigenvectors of T*T. WLOG, among these vectors, say f_1, . . . , f_m are the eigenvectors of T*T corresponding to a nonzero eigenvalue λ. Note that m = dim E(λ, T*T).
On the other hand, TT*(Tf_i) = T(T*Tf_i) = λ Tf_i, so Tf_1, . . . , Tf_m are eigenvectors of TT* corresponding to the eigenvalue λ. For any 1 ≤ i < j ≤ m we have ⟨Tf_i, Tf_j⟩ = ⟨f_i, T*Tf_j⟩ = λ⟨f_i, f_j⟩ = 0, so Tf_1, . . . , Tf_m is an orthogonal list of nonzero vectors (‖Tf_i‖^2 = λ > 0), hence linearly independent. It follows that the eigenvalue λ of TT* is repeated at least dim E(λ, T*T) times, i.e. dim E(λ, TT*) ≥ dim E(λ, T*T). Arguing similarly in the other direction, dim E(λ, TT*) = dim E(λ, T*T) for every λ ≠ 0; since both operators act on the same n-dimensional space, the multiplicities of 0 then agree as well. We conclude that TT* and T*T have the same eigenvalues repeated the same number of times, so T and T* have the same singular values.
12. The statement is false. By theorem 9.7.3 (7D), the singular values of T^2 are the nonnegative square roots of the eigenvalues of (T^2)* T^2 = (T*)^2 T^2, and the singular values of T are the nonnegative square roots of the eigenvalues of T*T. Hence the singular values of T^2 equal the squares of the singular values of T exactly when the eigenvalues of (T*)^2 T^2 equal the squares of the eigenvalues of T*T. For that we would need (T*T)^2 = (T*)^2 T^2, which fails when T is not normal. Choose a nonnormal operator T on a 2-dimensional vector space V as a counterexample.
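A concrete counterexample in numpy (this specific nilpotent, nonnormal T is my choice, not from the note):

```python
import numpy as np

T = np.array([[0.0, 1.0], [0.0, 0.0]])    # not normal: T T* != T* T
assert not np.allclose(T @ T.T, T.T @ T)

sv_T = np.linalg.svd(T, compute_uv=False)        # singular values of T: 1, 0
sv_T2 = np.linalg.svd(T @ T, compute_uv=False)   # T^2 = 0, singular values 0, 0

# The squares of T's singular values (1, 0) differ from T^2's (0, 0):
assert not np.allclose(np.sort(sv_T2), np.sort(sv_T**2))
```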
13. Since ‖Tv‖ = ‖√(T*T) v‖ for all v ∈ V, T is invertible iff √(T*T) is invertible iff 0 is not an eigenvalue of √(T*T), i.e. iff 0 is not a singular value of T.
14. Again, from ‖Tv‖ = ‖√(T*T) v‖ we get dim range T = dim range √(T*T), and dim range √(T*T) = n − dim E(0, √(T*T)) from the fact that √(T*T) is positive, hence diagonalizable. Note that, by the definition of singular values, dim E(0, √(T*T)) is the number of repetitions of the singular value 0 of T. Thus dim range T = n − dim E(0, √(T*T)) equals the number of nonzero singular values of T.
15. If S has singular values s_1, . . . , s_n then by theorem 9.7.2 (7D) there exist orthonormal bases e_1, . . . , e_n and f_1, . . . , f_n of V such that Sv = s_1 ⟨v, e_1⟩ f_1 + · · · + s_n ⟨v, e_n⟩ f_n for every v ∈ V. It follows that for every v ∈ V, ‖Sv‖^2 = Σ_{i=1}^n |s_i ⟨v, e_i⟩|^2. Hence if all singular values of S equal 1 then ‖Sv‖ = ‖v‖, so S is an isometry. Conversely, if S is an isometry then ‖Se_i‖ = s_i = 1 for all 1 ≤ i ≤ n, so all singular values of S equal 1.
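A quick numpy check of this fact (the rotation S is my assumed example of an isometry):

```python
import numpy as np

theta = 0.7
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(S.T @ S, np.eye(2))                      # S*S = I: isometry
assert np.allclose(np.linalg.svd(S, compute_uv=False), 1.0) # all singular values 1
```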
16. Note that with T_1 = S_1 T_2 S_2 we get T_1* T_1 = S_2* (T_2* T_2) S_2. The idea for this problem is similar to exercise 11 (7C).
If T_1, T_2 have the same singular values then the two positive operators T_1* T_1 and T_2* T_2 have the same eigenvalues λ_1, . . . , λ_n. By the Spectral Theorem, let e_1, . . . , e_n be an orthonormal basis consisting of eigenvectors of T_1* T_1 and f_1, . . . , f_n an orthonormal basis consisting of eigenvectors of T_2* T_2. Let S_2 ∈ L(V) with S_2 e_i = f_i for all 1 ≤ i ≤ n; then, similarly to exercise 11 (7C), S_2 is an isometry and T_1* T_1 = S_2* (T_2* T_2) S_2. This leads to ‖T_1 v‖ = ‖T_2 S_2 v‖ for every v ∈ V. With this we can construct an isometry S_1, as in the proof of the Polar Decomposition 9.7.1 (7D) in the book, so that S_1 T_2 S_2 = T_1.
Conversely, if there exist isometries S_1, S_2 with S_1 T_2 S_2 = T_1 then T_1* T_1 = S_2* T_2* T_2 S_2. Let f_1, . . . , f_n be a basis of V consisting of eigenvectors of T_2* T_2 corresponding to eigenvalues λ_1, . . . , λ_n. Since S_2 is invertible, for each f_i there exists e_i ∈ V with S_2 e_i = f_i, and e_1, . . . , e_n is a basis of V. Hence T_1* T_1 e_i = S_2* (T_2* T_2) f_i = λ_i S_2* f_i = λ_i e_i.
It follows that T_1* T_1 and T_2* T_2 have the same eigenvalues, so T_1 and T_2 have the same singular values.
17. (a) We have
(c) We have √(T*T) e_i = s_i e_i, so

√(T*T) v = √(T*T) (⟨v, e_1⟩ e_1 + · · · + ⟨v, e_n⟩ e_n) = s_1 ⟨v, e_1⟩ e_1 + · · · + s_n ⟨v, e_n⟩ e_n.

(d) We have Te_i = s_i f_i, so T^{−1} f_i = e_i / s_i. Hence

T^{−1} v = T^{−1} (⟨v, f_1⟩ f_1 + · · · + ⟨v, f_n⟩ f_n) = Σ_{i=1}^n ⟨v, f_i⟩ T^{−1} f_i = ⟨v, f_1⟩ e_1 / s_1 + · · · + ⟨v, f_n⟩ e_n / s_n.
18. (a) If s_1 ≤ s_2 ≤ · · · ≤ s_n are all the singular values of T then ‖Tv‖^2 = Σ_{i=1}^n s_i^2 |⟨v, e_i⟩|^2 and ‖v‖^2 = Σ_{i=1}^n |⟨v, e_i⟩|^2, so it is easily seen that s_1 ‖v‖ ≤ ‖Tv‖ ≤ s_n ‖v‖.
(b) Applying (a) we obtain s_1 ≤ |λ| ≤ s_n.
19. It suffices to show that for every ε > 0 there exists δ > 0 such that ‖u − v‖ < δ implies ‖T(u − v)‖ < ε. Indeed, if s is the largest singular value of T then from the previous exercise 18 (7D), ‖T(u − v)‖ ≤ s ‖u − v‖. Hence if we choose δ = ε/s we are done (if s = 0 then T = 0 and any δ works). Thus T is uniformly continuous with respect to the metric d on V.
20. By exercise 18 (7D), for any v ∈ V with ‖v‖ = 1 we have ‖(S + T)v‖ ≤ ‖Sv‖ + ‖Tv‖ ≤ s + t. On the other hand, by exercise 4 (7D) there exists v ∈ V with ‖v‖ = 1 and ‖(S + T)v‖ = r. It follows that r ≤ s + t.
Theorem 10.1.2 (8.3) Suppose T ∈ L(V) and m is a nonnegative integer such that null T^m = null T^{m+1}. Then null T^m = null T^k for all k ≥ m.
Theorem 10.1.3 (8.4) Suppose T ∈ L(V) and let n = dim V. Then null T^k = null T^{k+1} for all positive integers k ≥ n.
Theorem 10.1.4 (8.5) Suppose T ∈ L(V) and let n = dim V. Then V = null T^n ⊕ range T^n.
Theorem 10.1.5 (8.13) Let T ∈ L(V). Suppose λ_1, . . . , λ_m are distinct eigenvalues of T and v_1, . . . , v_m are corresponding generalized eigenvectors. Then v_1, . . . , v_m is linearly independent.
Theorem 10.1.7 (8.19) Suppose N is a nilpotent operator on V. Then there exists a basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal.
How to determine all the generalized eigenspaces corresponding to distinct eigenvalues of an operator T ∈ L(V):
2. For each eigenvalue λ of T, find the null space of (T − λI)^{dim V}, which equals the generalized eigenspace G(λ, T) of T corresponding to λ.
10.2. Exercises 8A
1. 0 is the only eigenvalue of T. We have T^2(w, z) = T(z, 0) = 0 for all (w, z) ∈ C^2, so G(0, T) = null T^2 = C^2.
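The same computation can be verified in numpy, using the matrix of T(w, z) = (z, 0) in the standard basis:

```python
import numpy as np

T = np.array([[0.0, 1.0], [0.0, 0.0]])     # T(w, z) = (z, 0)
T2 = np.linalg.matrix_power(T, 2)

assert np.allclose(T2, 0)                  # null T^2 = C^2, so G(0, T) = C^2
assert np.linalg.matrix_rank(T) == 1       # but null T itself is only 1-dimensional
```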
2. If T(w, z) = λ(w, z) = (−z, w) then −z = λw and w = λz. It follows that λ^2 + 1 = 0, so the eigenvalues of T are λ = ±i. We have

(T + iI)^2 (w, z) = (T + iI)(−z + iw, w + iz)
                 = (−w − iz, −z + iw) + i(−z + iw, w + iz)
                 = (−2w − 2iz, 2iw − 2z).

Thus (T + iI)^2 (w, z) = 0 when w + iz = 0, so G(−i, T) = span((−i, 1)). We have

(T − iI)^2 (w, z) = (T − iI)(−z − iw, w − iz)
                 = (iz − w, −z − iw) − i(−z − iw, w − iz)
                 = (2iz − 2w, −2z − 2iw).

Thus (T − iI)^2 (w, z) = 0 when iz − w = 0, so G(i, T) = span((i, 1)).
3. Let dim V = n. If v ∈ G(λ, T) then (T − λI)^n v = 0. Hence (−λ^{−1} T^{−1})^n (T − λI)^n v = 0, so (λ^{−1} I − T^{−1})^n v = 0, i.e. v ∈ G(λ^{−1}, T^{−1}). Similarly, if v ∈ G(λ^{−1}, T^{−1}) then v ∈ G(λ, T). Thus G(λ, T) = G(λ^{−1}, T^{−1}).
4. The proof is completely similar to the proof of theorem 10.1.5 (8A). Indeed, assume to the contrary that there is v ∈ G(α, T) ∩ G(λ, T) with v ≠ 0 and α ≠ λ, and let k be the largest nonnegative integer with (T − αI)^k v ≠ 0. Then with w = (T − αI)^k v we have (T − αI)w = (T − αI)^{k+1} v = 0, so Tw = αw. It follows that (T − λI)w = (α − λ)w for any λ ∈ F, and hence (T − λI)^n w = (α − λ)^n w. On the other hand, since v ∈ G(λ, T), we get (T − λI)^n w = (T − αI)^k (T − λI)^n v = 0. It follows that (α − λ)^n w = 0, which gives w = 0, a contradiction. Thus G(α, T) ∩ G(λ, T) = {0}.
5. Let a_0, . . . , a_{m−1} ∈ F with a_0 v + a_1 Tv + · · · + a_{m−1} T^{m−1} v = 0. Applying the operator T^{m−1} to both sides, we obtain a_0 T^{m−1} v = 0, which gives a_0 = 0. Similarly, applying T^{m−2} gives a_1 = 0, . . . , applying T gives a_{m−2} = 0, and then a_{m−1} = 0 as well. Thus a_i = 0 for all i, so v, Tv, . . . , T^{m−1} v is linearly independent.
6. Since T(z_1, z_2, z_3) = (z_2, z_3, 0), we have T^3 = 0 but T^2 ≠ 0 (indeed T^2(z_1, z_2, z_3) = (z_3, 0, 0)). If T had a square root S, then S^6 = T^3 = 0, so S is nilpotent; by theorem 10.1.6 (8A), S^3 = 0, hence T^2 = S^4 = S^3 S = 0, a contradiction. Thus T has no square root.
7. If N is nilpotent then from theorem 10.1.6 (8A), N^{dim V} = 0, so G(0, N) = null N^{dim V} = V; hence there exists a basis of V consisting of generalized eigenvectors corresponding to 0. Combining this with exercise 4 (8A), we find that 0 is the only eigenvalue of N.
8. It is false. When V = C^2, consider the nilpotent operators T, S ∈ L(C^2) with T(x, y) = (0, x) and S(x, y) = (y, 0). Then (T + S)^2 (x, y) = (T + S)(y, x) = (x, y). Thus T + S is not a nilpotent operator, so the set of nilpotent operators on C^2 is not a subspace of L(C^2).
10. It suffices to show that null T^{n−1} ∩ range T^{n−1} = {0}. Assume the contrary: there exists v ∈ null T^{n−1} ∩ range T^{n−1} with v ≠ 0, so T^{n−1} v = 0 and there exists u ∈ V, u ≠ 0, with T^{n−1} u = v. It follows that T^{2n−2} u = 0, so T^n u = 0 by theorem 10.1.3 (8A). Note that v = T^{n−1} u ≠ 0 and T^n u = 0, so by exercise 5 (8A), u, Tu, . . . , T^{n−1} u is linearly independent, hence a basis of V. On the other hand, note that each T^i u ∈ null T^n, which would mean T^n = 0, a contradiction since T is not nilpotent. Thus we must have null T^{n−1} ∩ range T^{n−1} = {0}, which leads to V = null T^{n−1} ⊕ range T^{n−1}.
11. It is false. The idea is to pick T so that T^2 does not have enough eigenvectors to form a basis of V. Let T ∈ L(C^2) with T(x, y) = (x, x + y); then T^2(x, y) = T(x, x + y) = (x, 2x + y). Hence if λ(x, y) = (x, 2x + y) then (λ − 1)x = 0 and (λ − 1)y = 2x. From this, 1 is the only eigenvalue of T^2 and E(1, T^2) = span((0, 1)). It follows that T^2 is not diagonalizable.
12. Let v_1, . . . , v_n be such a basis of V. We prove by induction on i ≤ n that there exists a positive integer k_i with N^{k_i} v_i = 0. Indeed, since Nv_1 = 0, this is true for i = 1. Suppose it is true for all i ≤ m < n. Consider v_{m+1}: by the assumption, Nv_{m+1} ∈ span(v_1, . . . , v_m), so with k = max{k_1, . . . , k_m} we obtain N^{k+1} v_{m+1} = 0. Thus N is nilpotent.
13. Since N is nilpotent, there exists a basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal. Therefore, from theorem 8.3.4 (look at its proof; merely applying the theorem itself does not suffice), there exists an orthonormal basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal. With this, and from the proof of the Complex Spectral Theorem 9.3.1 given that N is normal, we can deduce that the matrix of N is the 0 matrix; in other words N = 0.
14. (Copied from part of exercise 13 (8A).) Since N is nilpotent, there exists a basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal. Therefore, from theorem 8.3.4 (look at its proof; merely applying the theorem itself does not suffice), there exists an orthonormal basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal.
16. For any nonnegative integer k, if v ∈ range T^{k+1} then v ∈ range T^k, so range T^{k+1} ⊂ range T^k.
17. From exercise 16 (8A) we already know range T^{m+k+1} ⊂ range T^{m+k}. Now if v ∈ range T^{m+k} then v = T^{m+k} u. Since T^m u ∈ range T^m = range T^{m+1}, there exists w ∈ V with T^m u = T^{m+1} w, so v = T^{m+k} u = T^{m+k+1} w, hence v ∈ range T^{m+k+1}. Thus range T^{m+k} ⊂ range T^{m+k+1}. With this, we conclude range T^{m+k} = range T^{m+k+1} for every nonnegative integer k; in other words range T^k = range T^m for all k > m.
18. By exercise 17 (8A), it suffices to prove range T^n = range T^{n+1}. Assume this is not true; then from exercises 16 and 17 (8A) we find

V ⊋ range T ⊋ range T^2 ⊋ · · · ⊋ range T^{n+1},

a chain of strictly decreasing subspaces. This would give dim range T^{n+1} ≤ n − (n + 1) < 0, a contradiction. Thus we must have range T^n = range T^{n+1}.
exer:8A:19 19. null T m = null T m+1 iff dim null T m = dim null T m+1 iff dim range T m = dim range T m+1
iff range T m = range T m+1 .
20. From exercise 19 (8A), range T^4 ≠ range T^5 implies null T^4 ≠ null T^5, so from exercise 15 (8A), T is nilpotent.
(a) V = G(λ_1, T) ⊕ · · · ⊕ G(λ_m, T);
The idea for the proof of (a) is similar to that of exercise 5 (5C).
Proof of (a). We induct on dim V. From theorem 10.1.4 (8A), V = null (T − λ_1 I)^n ⊕ range (T − λ_1 I)^n.
u ∈ G(λ_i, T) ∩ G(λ_j, T), so from exercise 4 (8A) we find u = 0. Thus v = (T − λ_1 I)^n w ∈ U.
Theorem 10.3.2 (8.26) Suppose V is a complex vector space and T ∈ L(V). Then the sum of the multiplicities of all eigenvalues of T equals dim V.
Theorem 10.3.3 (8.29) Suppose V is a complex vector space and T ∈ L(V). Let λ_1, . . . , λ_m be the distinct eigenvalues of T, with multiplicities d_1, . . . , d_m. Then there is a basis of V with respect to which T has a block diagonal matrix of the form diag(A_1, . . . , A_m), where each A_j is a d_j-by-d_j upper-triangular matrix with λ_j on the diagonal.
Theorem 10.3.4 (8.31) Suppose N ∈ L(V) is nilpotent. Then I + N has a kth root.
Theorem 10.3.5 (8.33) Suppose V is a complex vector space and T ∈ L(V) is invertible. Then T has a kth root.
10.4. Exercises 8B
1. Since 0 is the only eigenvalue of N and V is a complex vector space, from theorems 7.3.2 (5B) and 7.3.4 (5B) there exists a basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal. From exercise 12 (8A), it follows that N is nilpotent.
2. The idea is to choose an operator T that has 0 as its only eigenvalue and for which there does not exist any basis of V with respect to which T has an upper-triangular matrix. Hence:
• First pick null T = span(v_1).
• Choose U = range T so that T|_U ∈ L(U) has no eigenvalue. Let dim U = 2; then with Tv_2 = v_3 and Tv_3 = −v_2, we can check that T|_U has no eigenvalue.
• Confirm that T is not nilpotent. Now V = span(v_1, v_2, v_3), so it suffices to show T^3 ≠ 0. Indeed, we have
11. First we show that the number of times 0 appears on the diagonal of the matrix of T is at most dim null T^n. For a basis of V with respect to which T has an upper-triangular matrix, let v_{a_1}, . . . , v_{a_m} be the subset of that basis such that the i-th 0 on the diagonal (from top left to bottom right) is in the a_i-th column of the matrix of T.
We prove inductively on i that there exist u_i ∈ span(v_1, . . . , v_{a_i}) with Tu_1 = 0 and Tu_i ∈ span(u_1, . . . , u_{i−1}) for i ≥ 2. For i = 1, let u_1 = Σ_{j=1}^{a_1 − 1} α_j v_j + v_{a_1}. Since Tv_{a_1} ∈ span(v_1, . . . , v_{a_1 − 1}) and the first a_1 − 1 diagonal entries are nonzero, expanding

T(u_1) = Σ_{j=1}^{a_1 − 1} α_j Tv_j + Tv_{a_1}

in the basis v_1, . . . , v_{a_1 − 1} lets us choose α_1, . . . , α_{a_1 − 1} so that every coefficient vanishes, i.e. T(u_1) = 0.
Observe that on the diagonal there is no 0 strictly between the a_{i−1}-th entry and the a_i-th entry, so with an approach similar to the case i = 1 we can choose α_{a_{i−1}+1}, . . . , α_{a_i − 1} so that the coefficient of each v_j (a_{i−1} + 1 ≤ j ≤ a_i − 1) in the representation of Tu_i is 0. With this, and noting that u_{i−1} ∈ span(v_1, . . . , v_{a_{i−1}}), we obtain Tu_i ∈ span(u_1, . . . , u_{i−1}).
Theorem 10.5.2 (8.37, Cayley–Hamilton Theorem) Suppose V is a complex vector space and T ∈ L(V). Let q denote the characteristic polynomial of T. Then q(T) = 0.
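The Cayley–Hamilton Theorem is easy to test numerically; a numpy sketch (the matrix T is my assumed example; np.poly applied to a square matrix returns the coefficients of its characteristic polynomial):

```python
import numpy as np

T = np.array([[1.0, 2.0], [3.0, 4.0]])
coeffs = np.poly(T)              # monic characteristic polynomial coefficients

# Evaluate q at the matrix T (Horner's method on matrices):
q_of_T = np.zeros_like(T)
for c in coeffs:
    q_of_T = q_of_T @ T + c * np.eye(2)

assert np.allclose(q_of_T, 0)    # q(T) = 0
```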
Definition 10.5.3 (8.43, minimal polynomial). Suppose T ∈ L(V). Then the minimal polynomial of T is the unique monic polynomial p of smallest degree such that p(T) = 0.
Theorem 10.5.4 (8.46) Suppose T ∈ L(V) and q ∈ P(F). Then q(T) = 0 if and only if q is a polynomial multiple of the minimal polynomial of T.
With F = C, theorem 10.5.4 and the Cayley–Hamilton Theorem 10.5.2 imply that the characteristic polynomial of T is a polynomial multiple of the minimal polynomial of T.
Theorem 10.5.5 (8.49) Let T ∈ L(V). Then the zeros of the minimal polynomial of T are precisely the eigenvalues of T.
so null (T − λ_i I)^{d_i} = null (T − λ_i I)^n, or we can say that (z − λ_i)^{d_i} is the minimal polynomial of T|_{G(λ_i, T)}.
The idea for the proof is similar to the proof of exercise 12 (8C).
How to find the characteristic/minimal polynomial of an operator.
Question 10.5.7 (a good summary question). What can you tell about an operator from looking at its minimal/characteristic polynomial?
10.6. Exercises 8C
1. If 3, 5, 8 are the only eigenvalues of T then each of them has multiplicity at most 2. It follows that the characteristic polynomial of T is (z − 3)^{d_1} (z − 5)^{d_2} (z − 8)^{d_3} with d_i ≤ 2. Hence (T − 3I)^2 (T − 5I)^2 (T − 8I)^2 = 0.
2. If 5, 6 are the only eigenvalues of T then each of them has multiplicity at most n − 1. It follows that (T − 5I)^{n−1} (T − 6I)^{n−1} = 0.
3. Let T(1, 0, 0, 0) = 7(1, 0, 0, 0), T(0, 1, 0, 0) = 7(0, 1, 0, 0), T(0, 0, 1, 0) = 8(0, 0, 1, 0) and T(0, 0, 0, 1) = 8(0, 0, 0, 1).
4. If the minimal polynomial is (z − 1)(z − 5)^2, this could mean that G(5, T) = null (T − 5I)^2 ≠ null (T − 5I). Choose T(a, b, c, d) = (a, 5b, 5c, c + 5d); then T^2(a, b, c, d) = (a, 25b, 25c, 10c + 25d). We find G(1, T) = span((1, 0, 0, 0)) and G(5, T) = span((0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)); in particular (T − 5I)(0, 0, 1, 0) ≠ 0 but (T − 5I)^2 (0, 0, 1, 0) = 0, and (T − 5I)(0, 1, 0, 0) = (T − 5I)(0, 0, 0, 1) = 0, so indeed G(5, T) = null (T − 5I)^2. The characteristic polynomial equals (z − 1)(z − 5)^3.
Note that (z − 1)(z − 5) is not the minimal polynomial because (T − I)(T − 5I)(0, 0, 1, 0) ≠ 0, while (T − I)(T − 5I)^2 = 0 because G(5, T) = null (T − 5I)^2. Hence (z − 1)(z − 5)^2 is the minimal polynomial.
5. This means G(1, T) = null (T − I)^2 ≠ null (T − I). Hence choose T(a, b, c, d) = (0, 3b, c, c + d).
6. This means G(1, T) = null (T − I). Choose T(a, b, c, d) = (0, 3b, c, d).
7. From P^2 = P we deduce that 0, 1 are the only eigenvalues of P. Since P^2 = P, we have dim G(0, P) = dim null P, so the characteristic polynomial of P is z^{dim null P} (z − 1)^{dim V − dim null P}. From P^2 = P we also find that V = range P ⊕ null P by exercise 4 (5B). Hence dim null P + dim range P = dim V, so the characteristic polynomial of P is z^{dim null P} (z − 1)^{dim range P}.
8. T is invertible iff 0 is not an eigenvalue of T iff 0 is not a zero of the minimal polynomial of T iff the constant term of the minimal polynomial of T is nonzero.
10. Since $T$ is invertible, the constant term in the characteristic polynomial $p$ of $T$ is nonzero, i.e. $p(0) \ne 0$. Let $\lambda_1, \ldots, \lambda_m$ be the eigenvalues of $T$ with corresponding multiplicities $d_1, \ldots, d_m$. Hence, $p(z) = (z-\lambda_1)^{d_1} \cdots (z-\lambda_m)^{d_m}$. From exercise 3 (8A), we have $G(\lambda_i, T) = G(\tfrac{1}{\lambda_i}, T^{-1})$, so the characteristic polynomial of $T^{-1}$ is
\begin{align*}
q(z) &= \left(z - \tfrac{1}{\lambda_1}\right)^{d_1} \cdots \left(z - \tfrac{1}{\lambda_m}\right)^{d_m} \\
&= \frac{1}{\lambda_1^{d_1} \cdots \lambda_m^{d_m}} \cdot z^{\dim V} \cdot \left(\lambda_1 - z^{-1}\right)^{d_1} \cdots \left(\lambda_m - z^{-1}\right)^{d_m} \\
&= \frac{(-1)^{\dim V}}{p(0)} \cdot z^{\dim V} \left(\lambda_1 - z^{-1}\right)^{d_1} \cdots \left(\lambda_m - z^{-1}\right)^{d_m} \\
&= \frac{1}{p(0)}\, z^{\dim V} p\!\left(\frac{1}{z}\right).
\end{align*}
Next, we show that $p(z)$ has no repeated zero iff every generalized eigenvector of $T$ is an eigenvector of $T$. Indeed, if $p(z)$ has a repeated zero $\lambda_i$, i.e. $d_i \ge 2$, then there exists $v \in V$ so that $v \notin \mathrm{null}\,(T-\lambda_i I)$ but $v \in \mathrm{null}\,(T-\lambda_i I)^{d_i}$; otherwise $\prod_{j \ne i}(T-\lambda_j I)^{d_j} v \in \mathrm{null}\,(T-\lambda_i I)$ for all $v \in V$, which leads to $q(T) = (T-\lambda_i I)\prod_{j \ne i}(T-\lambda_j I)^{d_j} = 0$, where $q(z)$ is a polynomial with degree less than that of $p(z)$, a contradiction. Conversely, suppose there exists a generalized eigenvector $v$ of $T$ corresponding to an eigenvalue $\lambda_i$ that is not an eigenvector of $T$ corresponding to $\lambda_i$, i.e. $(T-\lambda_i I)v \ne 0$ but $v \in G(\lambda_i, T)$. Note that for any $1 \le j \le m$, $G(\lambda_i, T)$ is invariant under $T-\lambda_j I$. Combining this with the fact that $G(\alpha, T) \cap G(\beta, T) = \{0\}$ for $\alpha \ne \beta$ from exercise 4 (8A), we get that $\prod_{j \ne i}(T-\lambda_j I)^{d_j}(T-\lambda_i I)v \ne 0$ and $\prod_{j \ne i}(T-\lambda_j I)^{d_j}(T-\lambda_i I)v \in G(\lambda_i, T)$. It follows that $p(z)$ has a repeated zero $\lambda_i$.
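The identity $q(z) = \frac{1}{p(0)} z^{\dim V} p(1/z)$ derived above can be checked numerically: for monic characteristic polynomials stored highest-degree-first, $z^{\dim V} p(1/z)$ is just the reversed coefficient list. A sketch (the test matrix and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # a generic, hence invertible, test matrix
assert abs(np.linalg.det(A)) > 1e-8

p = np.poly(A)                 # char. polynomial of A, monic, highest degree first
q = np.poly(np.linalg.inv(A))  # char. polynomial of A^{-1}

# z^{dim V} p(1/z) reverses the coefficient list; dividing by p(0) = p[-1] makes it monic.
q_from_p = p[::-1] / p[-1]

print(np.allclose(q, q_from_p))  # True
```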
15. (a) We know that there exists a monic polynomial $p(z)$ (such as the minimal polynomial of $T$) so that $p(T)v = 0$. Hence, it suffices to show the uniqueness of such a polynomial of smallest degree. Assume the contrary: there exist two polynomials $p, q$ with the same smallest degree so that
minimal polynomial $h$ of $T^*$.
Now it suffices to show that $\deg q = \deg h$. Assume the contrary, $\deg h < \deg q$; then with a similar approach, we can construct a polynomial $g$ from $h$ so that $g(T) = 0$ and $\deg g = \deg h < \deg q = \deg p$, a contradiction. Thus, we must have $\deg q = \deg h$, so $q(z)$ is the minimal polynomial of $T^*$.
17. Let $q(z) = (z-\lambda_1)^{k_1} \cdots (z-\lambda_m)^{k_m}$ be the minimal polynomial of $T$ with $k_1 + \ldots + k_m = n$, and let $p(z) = (z-\lambda_1)^{d_1} \cdots (z-\lambda_m)^{d_m}$ be the characteristic polynomial of $T$. According to theorem 10.5.4 (8C), $d_i \ge k_i$ for all $1 \le i \le m$. On the other hand, $\sum_{i=1}^m k_i = \sum_{i=1}^m d_i = n$, so $d_i = k_i$ for all $1 \le i \le m$, i.e. the characteristic polynomial of $T$ equals the minimal polynomial of $T$.
18. Call such an operator $T$ and let $e_1, \ldots, e_n$ be the standard basis of $\mathbf{C}^n$, where $e_i$ has 1 in the $i$-th coordinate and 0's in the remaining coordinates. For $1 \le i \le n-1$ and $1 \le k \le n$ we easily find that $T^k e_i = e_{i+k}$ whenever $i+k \le n$.
We have $Te_n = (-a_0, -a_1, \ldots, -a_{n-1})$, so
$$T^2 e_n = T(-a_0, \ldots, -a_{n-2}, 0) - a_{n-1}Te_n = (0, -a_0, \ldots, -a_{n-2}) - a_{n-1}Te_n.$$
Hence,
$$T^3 e_n = T(0, -a_0, \ldots, -a_{n-3}, 0) - a_{n-2}Te_n - a_{n-1}T^2e_n = (0, 0, -a_0, \ldots, -a_{n-3}) - a_{n-2}Te_n - a_{n-1}T^2e_n.$$
Continuing in this way, we obtain
$$T^i e_n = (0, \ldots, 0, -a_0, \ldots, -a_{n-i}) - a_{n-i+1}Te_n - \ldots - a_{n-1}T^{i-1}e_n.$$
For $i = n$ this gives $T^n e_n = -a_0e_n - a_1Te_n - \ldots - a_{n-1}T^{n-1}e_n$, or $(T^n + a_{n-1}T^{n-1} + \ldots + a_1T + a_0I)e_n = p(T)e_n = 0$.
For $1 \le i < n$, notice that $T^k e_i = e_{i+k}$ for $0 \le k \le n-i$. Hence, we have
\begin{align*}
p(T)e_i &= (T^n + a_{n-1}T^{n-1} + \ldots + a_{n-i+1}T^{n-i+1})e_i + a_{n-i}e_n + \ldots + a_1e_{i+1} + a_0e_i \\
&= (T^i + a_{n-1}T^{i-1} + \ldots + a_{n-i+1}T)(T^{n-i}e_i) + (0, \ldots, 0, a_0, \ldots, a_{n-i}) \\
&= (T^i + a_{n-1}T^{i-1} + \ldots + a_{n-i+1}T)e_n + (0, \ldots, 0, a_0, \ldots, a_{n-i}) \\
&= 0,
\end{align*}
where the last step uses the displayed formula for $T^i e_n$, which says $(T^i + a_{n-1}T^{i-1} + \ldots + a_{n-i+1}T)e_n = (0, \ldots, 0, -a_0, \ldots, -a_{n-i})$.
Thus, $p(T)v = 0$ for all $v \in V$. Since $\deg p = n$, this means $p(z)$ is both the minimal polynomial and the characteristic polynomial of $T$.
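The operator constructed here is given by a companion matrix, and the claim $p(T) = 0$ is easy to confirm numerically for sample coefficients $a_0, \ldots, a_{n-1}$ (chosen arbitrarily below):

```python
import numpy as np

a = [2.0, -1.0, 3.0, 0.5]  # sample a_0, ..., a_{n-1}, so n = 4
n = len(a)

# T e_i = e_{i+1} for i < n and T e_n = (-a_0, ..., -a_{n-1}):
# subdiagonal of 1's, last column -a.
T = np.zeros((n, n))
T[1:, :-1] = np.eye(n - 1)
T[:, -1] = -np.array(a)

# p(T) = T^n + a_{n-1} T^{n-1} + ... + a_1 T + a_0 I
p_of_T = np.linalg.matrix_power(T, n) + sum(
    a[k] * np.linalg.matrix_power(T, k) for k in range(n))

print(np.allclose(p_of_T, 0))  # True
```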
19. According to exercise 11 (8B), the number of times that $\lambda$ appears on the diagonal of the matrix of $T$ equals the multiplicity of $\lambda$ as an eigenvalue of $T$. Combining this with the definition of the characteristic polynomial for complex vector spaces, we obtain that the characteristic polynomial of $T$ is $(z-\lambda_1) \cdots (z-\lambda_n)$.
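This statement is also easy to check numerically: for an upper-triangular matrix, the characteristic polynomial computed from the eigenvalues coincides with $\prod_i (z - A_{ii})$ built from the diagonal. A small sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 5.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 3.0]])  # upper-triangular, diagonal 2, 3, 3

char_poly = np.poly(A)           # characteristic polynomial of A
from_diag = np.poly(np.diag(A))  # (z - 2)(z - 3)(z - 3) from the diagonal entries

print(np.allclose(char_poly, from_diag))  # True
```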
20. Since we are talking about complex vector spaces, according to theorem 7.3.2 (5B), $T|_{V_j}$ has an upper-triangular matrix $\mathcal{M}(T|_{V_j})$ with respect to some basis of $V_j$.
Therefore, if we combine all the bases of the $V_j$ we obtain a basis of $V$ (since $V = V_1 \oplus \cdots \oplus V_m$), which gives a block diagonal matrix for $T$ as follows:
$$\mathcal{M}(T) = \begin{pmatrix} \mathcal{M}(T|_{V_1}) & & 0 \\ & \ddots & \\ 0 & & \mathcal{M}(T|_{V_m}) \end{pmatrix}.$$
Notice that $\mathcal{M}(T)$ is an upper-triangular matrix since the $\mathcal{M}(T|_{V_j})$ are upper-triangular matrices. Thus, by applying exercise 19 (8C) to $\mathcal{M}(T)$ and the $\mathcal{M}(T|_{V_j})$, we can conclude that $p_1 \cdots p_m$ is the characteristic polynomial of $T$.
(a) $N^{m_1}v_1, \ldots, Nv_1, v_1, \ldots, N^{m_n}v_n, \ldots, Nv_n, v_n$ is a basis of $V$.
(b) $N^{m_1+1}v_1 = \ldots = N^{m_n+1}v_n = 0$.
Furthermore, according to exercise 6 (8D), $n = \dim \mathrm{null}\, N$ and $N^{m_1}v_1, \ldots, N^{m_n}v_n$ is a basis of $\mathrm{null}\, N$.
Theorem 10.7.2 (8.60, Jordan Form) Suppose $V$ is a complex vector space. If $T \in \mathcal{L}(V)$, then there is a basis of $V$ that is a Jordan basis for $T$.
Proof. From theorem 10.5.6 (8C), the minimal polynomial of $T$ equals the product of the minimal polynomials of $T|_{G(\lambda_i, T)}$ over all eigenvalues $\lambda_i$ of $T$.
From exercise 3 (8D), the minimal polynomial of the nilpotent operator $(T-\lambda_i I)|_{G(\lambda_i, T)}$ is $z^{m_i+1}$, where $m_i$ is the length of the longest consecutive string of 1's that appears on the line directly above the diagonal in the Jordan matrix $\mathcal{M}(T|_{G(\lambda_i, T)})$. It follows that $(T-\lambda_i I)^{m_i+1} = 0$ but $(T-\lambda_i I)^{m_i} \ne 0$ on $G(\lambda_i, T)$, which leads to $(z-\lambda_i)^{m_i+1}$ being the minimal polynomial of $T|_{G(\lambda_i, T)}$.
Thus, the minimal polynomial of $T$ is $\prod_{i=1}^m (z-\lambda_i)^{m_i+1}$, where $m_i$ is the length of the longest consecutive string of 1's that appears on the line directly above the diagonal in the Jordan matrix $\mathcal{M}(T|_{G(\lambda_i, T)})$.
Example 10.7.4 We can find the minimal polynomial of $T$ from its Jordan matrix. With respect to the Jordan basis $N^2v_1, Nv_1, v_1, Nv_2, v_2, Nv_3, v_3$,
$$\mathcal{M}(T) = \begin{pmatrix}
\lambda_1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & \lambda_1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & \lambda_1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \lambda_1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & \lambda_1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \lambda_2 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & \lambda_2
\end{pmatrix},$$
and the minimal polynomial of $T$ is $(z-\lambda_1)^3(z-\lambda_2)^2$.
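A numerical check of this example, with the stand-in values $\lambda_1 = 4$, $\lambda_2 = 2$ (any distinct values work): $(z-\lambda_1)^3(z-\lambda_2)^2$ annihilates the Jordan matrix, while lowering either exponent fails.

```python
import numpy as np

def jordan_block(lam, size):
    # size x size block: lam on the diagonal, 1's directly above it
    return np.diag([lam] * size) + np.diag([1.0] * (size - 1), 1)

l1, l2 = 4.0, 2.0  # stand-ins for lambda_1, lambda_2
blocks = [jordan_block(l1, 3), jordan_block(l1, 2), jordan_block(l2, 2)]

J = np.zeros((7, 7))
i = 0
for b in blocks:                 # assemble the block diagonal Jordan matrix
    k = b.shape[0]
    J[i:i + k, i:i + k] = b
    i += k

I, mp = np.eye(7), np.linalg.matrix_power
full = mp(J - l1 * I, 3) @ mp(J - l2 * I, 2)  # minimal polynomial: annihilates J
low1 = mp(J - l1 * I, 2) @ mp(J - l2 * I, 2)  # exponent of (z - l1) too small
low2 = mp(J - l1 * I, 3) @ mp(J - l2 * I, 1)  # exponent of (z - l2) too small

print(np.allclose(full, 0), np.allclose(low1, 0), np.allclose(low2, 0))  # True False False
```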
10.8. Exercises 8D
1. Since $N$ is nilpotent, $N^{\dim V} = N^4 = 0$, and since $N^3 \ne 0$, we find that $z^4$ is both the minimal polynomial and the characteristic polynomial of $N$.
4. If the Jordan basis $v_1, \ldots, v_n$ is reversed, then each column $N^j v_i$ is reversed, i.e. if it was $(a_1, a_2, \ldots, a_n)^T$ then its reverse is $(a_n, \ldots, a_1)^T$, and all of the columns are then arranged in reversed order. Thus, we conclude that if we rotate the Jordan matrix of $T$ by $180^\circ$, we obtain the matrix of $T$ with respect to the reversed Jordan basis of $V$.
5. In the matrix of $T^2$ with respect to a Jordan basis of $V$ for $T$, all the eigenvalues $\lambda$ on the diagonal are squared, all the 1's that are in the same column as $\lambda$ change to $2\lambda$, and the 0's in row $N^k v_i$ and column $N^{k-2}v_i$ change to 1. Below is an example, for the Jordan matrix of Example 10.7.4 with respect to the basis $N^2v_1, Nv_1, v_1, Nv_2, v_2, Nv_3, v_3$:
$$\mathcal{M}(T^2) = \begin{pmatrix}
\lambda_1^2 & 2\lambda_1 & 1 & 0 & 0 & 0 & 0 \\
0 & \lambda_1^2 & 2\lambda_1 & 0 & 0 & 0 & 0 \\
0 & 0 & \lambda_1^2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \lambda_1^2 & 2\lambda_1 & 0 & 0 \\
0 & 0 & 0 & 0 & \lambda_1^2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \lambda_2^2 & 2\lambda_2 \\
0 & 0 & 0 & 0 & 0 & 0 & \lambda_2^2
\end{pmatrix}.$$
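Writing a Jordan block as $\lambda I + N$ gives $(\lambda I + N)^2 = \lambda^2 I + 2\lambda N + N^2$, which is exactly the pattern described above; a minimal check on a single 3-by-3 block:

```python
import numpy as np

lam = 5.0
N = np.diag([1.0, 1.0], 1)  # nilpotent part of a 3x3 Jordan block
J = lam * np.eye(3) + N     # the Jordan block itself

# expected pattern for J^2: lam^2 on the diagonal, 2*lam one place above, 1 two places above
expected = lam**2 * np.eye(3) + 2 * lam * N + N @ N

print(np.allclose(J @ J, expected))  # True
```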
8. If there does not exist a direct sum decomposition of $V$ into two proper subspaces invariant under $T$, then from theorem 10.3.1 (8B), $T$ must have only one eigenvalue $\lambda$ and $V = G(\lambda, T) = \mathrm{null}\,(T-\lambda I)^{\dim V}$. It follows that $T - \lambda I$ is a nilpotent operator on $V$. If $\dim \mathrm{null}\,(T-\lambda I) \ge 2$, then according to theorem 10.7.1 (8D), for $n = \dim \mathrm{null}\,(T-\lambda I) \ge 2$, $V$ has a Jordan basis $(T-\lambda I)^{m_1}v_1, \ldots, v_1, \ldots, (T-\lambda I)^{m_n}v_n, \ldots, v_n$ for $T$. Let $U = \mathrm{span}((T-\lambda I)^{m_1}v_1, \ldots, v_1)$ and $W = \mathrm{span}((T-\lambda I)^{m_2}v_2, \ldots, v_2, \ldots, (T-\lambda I)^{m_n}v_n, \ldots, v_n)$; then $V = U \oplus W$. If $u \in U$ then $Tu = (T-\lambda I)u + \lambda u \in U$, so $U$ is invariant under $T$. Similarly, $W$ is invariant under $T$. This contradicts the original assumption. Hence, we must have $\dim \mathrm{null}\,(T-\lambda I) = 1$, and again from theorem 10.7.1 (8D), $(T-\lambda I)^{\dim V - 1}v, \ldots, v$ is a basis of $V$. It follows that the smallest $d$ with $(T-\lambda I)^d = 0$ is $d = \dim V$.
Theorem 11.1.4 An operator $T$ on a real vector space $V$ has the same minimal/characteristic polynomial as its complexification $T_{\mathbf{C}}$.
11.2. Exercises 9A
1. Done.
8. It follows that 5 and 7 are eigenvalues of $T_{\mathbf{C}}$, and since the characteristic polynomial of $T_{\mathbf{C}}$ has degree 3, $T_{\mathbf{C}}$ has no nonreal eigenvalues (nonreal eigenvalues would come in a conjugate pair).
9. Since $T$ is an operator on an odd-dimensional real vector space, $T$ has a real eigenvalue. On the other hand, since $T^2 + T + I$ is nilpotent, the minimal polynomial $p$ of $T$ divides $(z^2 + z + 1)^7$. Since the minimal polynomial of $T$ has real coefficients, $p(z) = (z^2 + z + 1)^k$ for some $1 \le k \le 7$. However, this implies $p$ has no real roots, which means $T$ has no real eigenvalue according to 10.5.5, a contradiction to the previous claim. Thus, there does not exist $T \in \mathcal{L}(\mathbf{R}^7)$ such that $T^2 + T + I$ is nilpotent.
10. The difference between this exercise and the previous exercise 9 is that the minimal polynomial of $T$ need not have real coefficients. Since $T^2 + T + I$ is nilpotent, we aim to construct $T$ so that its minimal polynomial is of the form
$$p(z) = (z^2 + z + 1)^k = \left(z - \tfrac{-1 + i\sqrt{3}}{2}\right)^k \left(z - \tfrac{-1 - i\sqrt{3}}{2}\right)^k$$
where $2k \le 7$, because the minimal polynomial must have degree at most $\dim \mathbf{C}^7 = 7$. Say $k = 3$. From example 10.7.4, letting $\lambda_+ = \tfrac{-1+i\sqrt{3}}{2}$ and $\lambda_- = \tfrac{-1-i\sqrt{3}}{2}$, we can construct a Jordan matrix as follows:
$$\mathcal{M}(T) = \begin{pmatrix}
\lambda_+ & 1 & & & & & \\
 & \lambda_+ & 1 & & & & \\
 & & \lambda_+ & & & & \\
 & & & \lambda_+ & & & \\
 & & & & \lambda_- & 1 & \\
 & & & & & \lambda_- & 1 \\
 & & & & & & \lambda_-
\end{pmatrix}.$$
Taking the standard basis vectors $(0, \ldots, 0, 1, 0, \ldots, 0)$ as the Jordan basis of $T$, from the above matrix we can construct $T$.
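A numerical check of this construction (block sizes 3, 1, 3, read off the matrix above): with this Jordan matrix, $(T^2+T+I)^3 = 0$ but $(T^2+T+I)^2 \ne 0$, so $T^2 + T + I$ is indeed nilpotent.

```python
import numpy as np

lp = (-1 + 1j * np.sqrt(3)) / 2  # lambda_+, a root of z^2 + z + 1
lm = (-1 - 1j * np.sqrt(3)) / 2  # lambda_-

def jordan_block(lam, size):
    # size x size block: lam on the diagonal, 1's directly above it
    return np.diag([lam] * size) + np.diag([1.0 + 0j] * (size - 1), 1)

blocks = [jordan_block(lp, 3), jordan_block(lp, 1), jordan_block(lm, 3)]
T = np.zeros((7, 7), dtype=complex)
i = 0
for b in blocks:
    k = b.shape[0]
    T[i:i + k, i:i + k] = b
    i += k

N = T @ T + T + np.eye(7)  # T^2 + T + I
print(np.allclose(np.linalg.matrix_power(N, 3), 0),
      np.allclose(np.linalg.matrix_power(N, 2), 0))  # True False
```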
11. The minimal polynomial $p$ of $T$ must divide $x^2 + bx + c$, and since the eigenvalues of $T$ are roots of $p$, $T$ has a real eigenvalue iff $x^2 + bx + c$ has a real root, i.e. iff $b^2 \ge 4c$.
12. Similarly to exercise 11, the minimal polynomial $p$ of $T$ must divide $(z^2 + bz + c)^{\dim V}$, and since $(z^2 + bz + c)^{\dim V}$ has no real root, $p$ has no real root, i.e. $T$ has no eigenvalues.
14. $T^2 + T + I$ nilpotent implies that $T_{\mathbf{C}}$ has exactly two eigenvalues $\lambda$ and $\overline{\lambda}$ (the two nonreal roots of $z^2 + z + 1$) with the same multiplicity, according to theorem 11.1.4. Therefore, the characteristic polynomial of $T_{\mathbf{C}}$ is $p(z) = (z^2 + z + 1)^k$. Since $\dim V = 8$, $\deg p = 8$, so $k = 4$. Thus, the characteristic polynomial of $T$ is $(z^2 + z + 1)^4$, so $(T^2 + T + I)^4 = 0$.
17. (a) We already know that $V$ is a real vector space. For any $a, b, c, d \in \mathbf{R}$ and any $v, u \in V$, $(a + ib)v = av + bTv \in V$ and
Substituting $Ta_{i+1}$ into the above, we obtain $a_{i+1} \in \mathrm{span}(a_1, Ta_1, \ldots, a_i, Ta_i)$, a contradiction to our condition. Thus, $a_1, Ta_1, \ldots, a_n, Ta_n$ must be linearly independent and is therefore a basis of $V$ as a real vector space.
We prove that with this definition of complex scalar multiplication, $a_1, \ldots, a_n$ is a basis of $V$ as a complex vector space. Since $a_j \in V$, we have $ia_j = Ta_j \in V$. Therefore $a_1, Ta_1, \ldots, a_n, Ta_n \in V$, which implies that $a_1, \ldots, a_n$ spans $V$ (over $\mathbf{C}$). The list is also linearly independent because $\sum_{i=1}^n (\alpha_i + i\beta_i)a_i = \sum_{i=1}^n \alpha_i a_i + \beta_i Ta_i$.
Thus, the dimension of $V$ as a complex vector space is half the dimension of $V$ as a real vector space.
18. (b) $\Rightarrow$ (a): If there exists a basis of $V$ with respect to which $T$ has an upper-triangular matrix, then it is also a basis of $V_{\mathbf{C}}$ with respect to which $T_{\mathbf{C}}$ has an upper-triangular matrix. Therefore, from theorem 7.3.4 (5B), we find that all eigenvalues of $T_{\mathbf{C}}$ are real.
(a) $\Rightarrow$ (c): Suppose all eigenvalues of $T_{\mathbf{C}}$ are real. Consider $G(\lambda, T_{\mathbf{C}})$ where $\lambda \in \mathbf{R}$ is an eigenvalue of $T_{\mathbf{C}}$, and consider a basis $x_1 + iy_1, \ldots, x_n + iy_n$ of $G(\lambda, T_{\mathbf{C}})$ where $x_j, y_j \in V$. We have $T_{\mathbf{C}}(x_j + iy_j) = Tx_j + iTy_j = \lambda(x_j + iy_j)$, which implies $Tx_j = \lambda x_j$, $Ty_j = \lambda y_j$ for all $1 \le j \le n$. On the other hand, note that $x_j + iy_j \notin U = \mathrm{span}(x_1 + iy_1, \ldots, x_{j-1} + iy_{j-1})$, so at least one of $x_j, y_j$ must not be in $U$. From this, we can inductively choose either $x_j$ or $y_j$ to create a basis of $G(\lambda, T_{\mathbf{C}})$ consisting of vectors in $V$.
From theorem 10.3.1 (8B), since $V_{\mathbf{C}} = \bigoplus_{i=1}^m G(\lambda_i, T_{\mathbf{C}})$, we can construct a basis of $V_{\mathbf{C}}$ consisting of generalized eigenvectors that are in $V$. It follows that $V$ has a basis consisting of generalized eigenvectors of $T$.
eigenvalues of $T$. With this and from the proof of theorem 10.7.2 (8D), we find that there is a Jordan basis of $V$ for $T$. This proves (b).
19. Bring this back to working with a complex vector space. Since $\mathrm{null}\, T^{n-2} \ne \mathrm{null}\, T^{n-1}$, we have $\mathrm{null}\, T_{\mathbf{C}}^{n-2} \ne \mathrm{null}\, T_{\mathbf{C}}^{n-1}$. Hence, from exercise 4 (8B), we find that $T_{\mathbf{C}}$ has at most 2 eigenvalues, one of which must be 0. It follows that the other (if any) must be a real eigenvalue, since nonreal eigenvalues of $T_{\mathbf{C}}$ come in conjugate pairs. Therefore, $T$ has at most 2 eigenvalues and $T_{\mathbf{C}}$ has no nonreal eigenvalues.
1. $T$ is normal.
By looking at the characteristic polynomial of the normal operator $T$, one can count the number of 2-by-2 blocks in the block diagonal matrix.
Using theorem 11.3.1 and theorem 9.1.4 (7A), one can easily prove the Real Spectral Theorem 9.3.2.
Theorem 11.3.2 (Description of isometries when $\mathbf{F} = \mathbf{R}$) Suppose $V$ is a real inner product space and $S \in \mathcal{L}(V)$. Then the following are equivalent:
1. $S$ is an isometry.
We can also define an inner product on the complexification $V_{\mathbf{C}}$ as indicated in exercise 3 (9B).
11.4. Exercises 9B
1. From theorem 11.3.2, $S$ has a block diagonal matrix with 1-by-1 and 2-by-2 blocks; since the matrix is 3-by-3, at least one block on the diagonal is a 1-by-1 matrix. If it's the first block, pick $x = (1, 0, 0)$; if it's the last block, pick $x = (0, 0, 1)$.
2. Obviously true from theorem 11.3.2.
is the orthonormal basis of $V$, and the matrix of $D$ with respect to this basis will give the desired form in theorem 11.3.1. In particular, $U_i = \mathrm{span}\left(\tfrac{\cos ix}{\sqrt{\pi}}, \tfrac{\sin ix}{\sqrt{\pi}}\right)$ is invariant under $T$, where $\mathcal{M}(T|_{U_i}) = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$.
Theorem 12.1.3 (Trace of an operator equals the trace of its matrix) For $T \in \mathcal{L}(V)$, $\mathrm{trace}\, T = \mathrm{trace}\, \mathcal{M}(T)$ ($= \mathrm{trace}\, \mathcal{M}(T_{\mathbf{C}})$ if $V$ is a real vector space). In other words, for any basis of $V$, the sum of the diagonal entries of $\mathcal{M}(T)$ is constant and equals the sum of the eigenvalues of $T$ (of $T_{\mathbf{C}}$ if $V$ is a real vector space), with each eigenvalue repeated according to its multiplicity.
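Both claims of theorem 12.1.3 are easy to check numerically: the trace of the matrix of $T$ does not change under a change of basis, and equals the sum of the eigenvalues. A sketch (the seed and dimension are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))  # matrix of T in one basis
S = rng.standard_normal((4, 4))  # change-of-basis matrix (generically invertible)
assert abs(np.linalg.det(S)) > 1e-8

B = np.linalg.inv(S) @ A @ S     # matrix of the same operator in another basis

print(np.isclose(np.trace(A), np.trace(B)),                      # basis-independent
      np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real))  # = sum of eigenvalues
```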
2. Bring this back to working with operators. Define $S, T \in \mathcal{L}(V)$ so that $\mathcal{M}(S, (v_1, \ldots, v_n)) = A$ and $\mathcal{M}(T, (v_1, \ldots, v_n)) = B$; then $ST = I$ as $\mathcal{M}(ST) = I$. Applying exercise 10 (3D), we get $TS = I$, so $BA = I$.
3. Consider an arbitrary basis $v_1, \ldots, v_n$ of $V$ and let $Tv_1 = \sum_{i=1}^n \alpha_i v_i$. Since $\mathcal{M}(T, (v_1, \ldots, v_n)) = \mathcal{M}(T, (2v_1, \ldots, v_n))$, we get $\alpha_i = 2\alpha_i$ for all $i \ne 1$, which implies $\alpha_i = 0$ for all $i \ne 1$. Hence $Tv_1 = \alpha_1 v_1$. Similarly, we obtain $Tv_j = \alpha_j v_j$ for all $j$.
On the other hand, since $\mathcal{M}(T, (v_1, v_2, \ldots, v_n)) = \mathcal{M}(T, (v_2, v_1, \ldots, v_n))$, we get $\alpha_1 = \alpha_2$. As a result, we obtain $T = \alpha I$.
5. Bring this back to working with operators. Let $T \in \mathcal{L}(V)$ be an operator so that $\mathcal{M}(T, (v_1, \ldots, v_n)) = B$. Since $V$ is a complex vector space, according to theorem 10.3.3 (or 10.7.2), there exists a basis $u_1, \ldots, u_n$ so that $\mathcal{M}(T, (u_1, \ldots, u_n))$ is an upper-triangular matrix. Now apply corollary 12.1.2 and we are done.
6. Let $T \in \mathcal{L}(\mathbf{R}^2)$ with $\mathcal{M}(T) = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. Then $\mathrm{trace}\,(T^2) = -2 < 0$.
7. $\mathrm{trace}\,(T^2) = \sum_{i=1}^m \lambda_i^2 \ge 0$, where the eigenvalues $\lambda_i$ (repeated according to multiplicity) are real.
14. We have $\mathrm{trace}\,(cT) = \mathrm{trace}\, \mathcal{M}(cT) = \mathrm{trace}\, c\mathcal{M}(T) = c\, \mathrm{trace}\, T$.
16. Let $S = T = I$ with $\dim V = 2$; then $\mathrm{trace}\, S = \mathrm{trace}\, T = \mathrm{trace}\, ST = 2$, and therefore $\mathrm{trace}\, ST \ne (\mathrm{trace}\, S)(\mathrm{trace}\, T) = 4$.
17. Work with $\mathcal{M}(T)$ and $\mathcal{M}(S)$. Choose $S_{i,j} \in \mathcal{L}(V)$ so that the $(i,j)$ entry of $\mathcal{M}(S_{i,j})$ is 1 while the remaining entries are 0. Then $\mathrm{trace}\,(S_{i,j}T) = \mathcal{M}(T)_{j,i} = 0$. Since this holds for all $i, j$, we conclude $T = 0$.
18. For an orthonormal basis $e_1, \ldots, e_m$ of $V$, from theorem 9.1.4 (7A), $\mathcal{M}(T, (e_1, \ldots, e_m))$ is the conjugate transpose of $\mathcal{M}(T^*, (e_1, \ldots, e_m))$. This means the $i$-th row of $\mathcal{M}(T^*)$ is the conjugate of the $i$-th column of $\mathcal{M}(T)$. Hence, $\|Te_i\|^2$ equals the inner product of these two vectors, which is the $(i,i)$-th entry of $\mathcal{M}(T^*T, (e_1, \ldots, e_m))$. Thus, $\mathrm{trace}\,(T^*T) = \sum_{i=1}^m \|Te_i\|^2$, which implies that the sum is independent of the choice of orthonormal basis.
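The basis-independence of $\sum_i \|Te_i\|^2$ can be checked numerically by comparing the standard basis with the (orthonormal) columns of a unitary matrix from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# sum of ||T e_i||^2 over the standard orthonormal basis = sum of squared column norms
s_std = sum(np.linalg.norm(T[:, i]) ** 2 for i in range(4))

# another orthonormal basis: columns of the unitary factor of a QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
s_q = sum(np.linalg.norm(T @ Q[:, i]) ** 2 for i in range(4))

print(np.isclose(s_std, np.trace(T.conj().T @ T).real),  # equals trace(T* T)
      np.isclose(s_std, s_q))                            # independent of the basis
```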
19. From the previous exercise, we have $\langle T, T \rangle \ge 0$, with equality exactly when $\mathrm{trace}\,(TT^*) = 0$, which gives $\|T^*e_i\| = 0$ for an orthonormal basis $e_1, \ldots, e_m$ of $V$. This implies $T^* = 0$, so $T = 0$.
13. Summary
2. Let $V = \mathrm{span}(1, \cos x, \cos 2x, \ldots, \cos nx, \sin x, \sin 2x, \ldots, \sin nx)$ be a subspace of $C[-\pi, \pi]$.
a) Define $D \in \mathcal{L}(V)$ by $Df = f'$. Then $D^* = -D$, which implies $D$ is normal but not self-adjoint (exercise 21 (7A)). From the previous remark, we can also find an orthonormal basis of $V$ such that the matrix of $D$ has the form described in theorem 11.3.1 (see exercise 8 (9B)).
b) Define $T \in \mathcal{L}(V)$ by $Tf = f''$. Then $T$ is self-adjoint and $-T$ is a positive operator (exercise 14 (7C)).
norms to define the same open subsets of $V$. All norms on a finite-dimensional normed space are equivalent.
Proof. There are $(p^n - 1)(p^n - p) \cdots (p^n - p^{k-1})$ linearly independent lists of length $k$ in $V$, and since each $k$-dimensional subspace of $V$ contributes $(p^k - 1)(p^k - p) \cdots (p^k - p^{k-1})$ such lists, there are
$$\frac{(p^n - 1)(p^n - p) \cdots (p^n - p^{k-1})}{(p^k - 1)(p^k - p) \cdots (p^k - p^{k-1})}$$
$k$-dimensional subspaces of $V$.
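The counting formula can be brute-force checked for a small case; the sketch below enumerates all distinct spans of $k$-tuples in $\mathbf{F}_2^3$ and compares the count with the formula (here $p = 2$, $n = 3$, $k = 2$ gives 7 subspaces).

```python
from itertools import product
from math import prod

p, n, k = 2, 3, 2
vectors = list(product(range(p), repeat=n))  # all of F_p^n

def span(gens):
    # all F_p-linear combinations of the generating vectors
    return frozenset(
        tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % p for i in range(n))
        for coeffs in product(range(p), repeat=len(gens)))

# a k-tuple spans a k-dimensional subspace exactly when its span has p^k elements
subspaces = {s for gens in product(vectors, repeat=k)
             if len(s := span(gens)) == p ** k}

formula = (prod(p**n - p**j for j in range(k))
           // prod(p**k - p**j for j in range(k)))

print(len(subspaces), formula)  # 7 7
```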
2. (AoPS) Let $f \in \mathbf{Q}[X]$ be a polynomial of degree $n > 0$. Let $p_1, p_2, \ldots, p_{n+1}$ be distinct prime numbers. Show that there exists a non-zero polynomial $g \in \mathbf{Q}[X]$ such that $fg = \sum_{i=1}^{n+1} c_i X^{p_i}$ with $c_i \in \mathbf{Q}$.
Proof. For each $i$, let $X^{p_i} = fq_i + r_i$ where $r_i$ is the remainder of $X^{p_i}$ modulo $f$. Since $\deg f = n$, each $r_i$ has degree at most $n-1$, so $r_1, \ldots, r_{n+1}$ are $n+1$ vectors in the $n$-dimensional $\mathbf{Q}$-vector space of polynomials of degree at most $n-1$. Hence they are linearly dependent: there exist $c_1, \ldots, c_{n+1} \in \mathbf{Q}$, not all zero, such that $\sum_{i=1}^{n+1} c_i r_i = 0$.
Therefore, we have
$$\sum_{i=1}^{n+1} c_i X^{p_i} = \sum_{i=1}^{n+1} c_i (fq_i + r_i) = f \sum_{i=1}^{n+1} c_i q_i = fg.$$
Since the $p_i$ are distinct and the $c_i$ are not all zero, $\sum_{i=1}^{n+1} c_i X^{p_i} \ne 0$, so $g = \sum_{i=1}^{n+1} c_i q_i \ne 0$.
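The dependence argument can be replayed concretely with pure-Python polynomial division (a toy case, $f = X^2 - 2$ and primes 2, 3, 5): the remainders are $2$, $2X$, $4X$, the relation $2 \cdot (2X) - (4X) = 0$ gives $c = (0, 2, -1)$, and indeed $2X^3 - X^5 = f \cdot (-X^3)$.

```python
from fractions import Fraction

def poly_rem(num, den):
    # remainder of num modulo den; coefficient lists, lowest degree first
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    while len(num) >= len(den):
        factor = num[-1] / den[-1]
        for i in range(len(den)):
            num[-len(den) + i] -= factor * den[i]
        num.pop()
    return num + [Fraction(0)] * (len(den) - 1 - len(num))

f = [-2, 0, 1]      # f = X^2 - 2, so n = 2
primes = [2, 3, 5]  # n + 1 = 3 distinct primes

rems = [poly_rem([0] * q + [1], f) for q in primes]  # remainders of X^q mod f
c = [0, 2, -1]      # dependence relation among the remainders

assert all(sum(ci * r[i] for ci, r in zip(c, rems)) == 0 for i in range(len(f) - 1))

combo = [0, 0, 0, 2, 0, -1]  # sum_i c_i X^{p_i} = 2X^3 - X^5
print(poly_rem(combo, f))    # remainder of combo mod f is zero
```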
3. (MSE) Show that any $n \times n$ matrix $A$ can be written as a sum of two non-singular matrices.