Linear Alg II Chapter 3
3.1 Linear Functionals
The deepest results related to inner product spaces deal with operators on inner product spaces, the subject to which we now turn. By exploiting properties of the adjoint, we will develop a detailed description of several important classes of operators on inner product spaces.
This section treats linear functionals on inner product spaces and their relation to the inner product. We will show that any linear functional f on a finite-dimensional inner product space is given by taking the inner product with a fixed vector in the space.
Definition 3.1.1 Let V be a vector space over a field F. A linear functional (or linear form) on V is a function f : V → F satisfying the following conditions:
(i) f(u + v) = f(u) + f(v) for all u, v ∈ V;
(ii) f(cv) = c f(v) for all c ∈ F and v ∈ V.
Note: Thus a linear functional is a linear transformation from V into the field of scalars F.
Example 3.1.2 Let c1, c2, . . . , cn ∈ F. Then the function f : Fⁿ → F defined by
f(x1, x2, . . . , xn) = c1x1 + c2x2 + . . . + cnxn
is a linear functional on V = Fⁿ.
Example 3.1.3 Let V be the vector space of continuous complex- (or real-) valued functions on the interval [a, b]. Then the function f : V → ℂ (or ℝ) defined by
f(x) = ∫_a^b x(t) dt
is a linear functional on V.
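As a quick numerical illustration, linearity of the integral functional can be checked on sampled functions; the sketch below approximates f(x) = ∫_a^b x(t) dt with the trapezoid rule on [0, 1] (the interval and sample functions are illustrative choices).

```python
import numpy as np

a, b = 0.0, 1.0                 # the interval [a, b], chosen as [0, 1] here
t = np.linspace(a, b, 1001)
dt = t[1] - t[0]

def f(x):
    # f(x) = integral_a^b x(t) dt, approximated with the trapezoid rule
    return float(np.sum((x[1:] + x[:-1]) * dt / 2))

x1, x2 = np.sin(t), t**2        # two sample "vectors" in V
c = 3.0
```

Up to discretization error, f(x1 + x2) = f(x1) + f(x2) and f(c·x1) = c·f(x1), as linearity requires.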
Example 3.1.4 Let V be any inner product space over a field F, and let w be any vector in V. Then the map T : V → F defined by T(v) = ⟨v, w⟩ is a linear functional on V (this is because the inner product is linear in the first variable v). For instance, the linear functional T(x, y, z) = 3x + 4y + 5z in Example 3.1.1 is of this type, since T(x, y, z) = ⟨(x, y, z), (3, 4, 5)⟩.
If V is a vector space over a field F, then the collection of all linear functionals on V, with the usual operations of addition and scalar multiplication of functions, is a vector space over F. We denote this space by V* and call it the dual space of V.
Definition 3.1.2 Let δij = 1 if i = j and δij = 0 if i ≠ j. The symbol δij is called the Kronecker delta.
Theorem 3.1.1 Let V be a finite-dimensional vector space over a field F, and let B = {v1, v2, . . . , vn} be a basis for V. Then there is a unique basis B* = {f1, f2, . . . , fn} for V* such that fi(vj) = δij.
Proof.
For each i, define fi on the basis by fi(vj) = δij and extend linearly. Then for v = c1v1 + c2v2 + . . . + cnvn,
fi(v) = fi(c1v1 + c2v2 + . . . + cnvn) = c1fi(v1) + c2fi(v2) + . . . + cnfi(vn) = ci.
Now, for any f ∈ V*, the functional g = f(v1)f1 + f(v2)f2 + . . . + f(vn)fn satisfies g(vj) = f(vj) for each vj in B, so g(v) = f(v) for all v ∈ V. Hence B* spans V*. If c1f1 + c2f2 + . . . + cnfn = 0, then evaluating at each vj gives cj = 0, so B* is linearly independent. Then B* = {f1, f2, . . . , fn} is the unique basis of V* dual to B.
Example 3.1.5 Find the dual basis of B = {v1 = (1, 1, 1), v2 = (1, 2, 0), v3 = (2, −1, 4)} of ℝ³.
Solution. B* = {f1(x, y, z) = −8x + 4y + 5z, f2(x, y, z) = 5x − 2y − 3z, f3(x, y, z) = 2x − y − z}.
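This kind of computation can be checked numerically. If the basis vectors are placed as the columns of a matrix M, then the coefficient rows of the dual functionals are exactly the rows of M⁻¹, since the conditions fi(vj) = δij say R·M = I. A NumPy sketch (assuming the basis reading v1 = (1, 1, 1), v2 = (1, 2, 0), v3 = (2, −1, 4)):

```python
import numpy as np

# Basis vectors of R^3 placed as the COLUMNS of M
# (v3 = (2, -1, 4) is the reading assumed here).
M = np.array([[1, 1, 2],
              [1, 2, -1],
              [1, 0, 4]], dtype=float)

# If f_i has coefficient row r_i, then f_i(v_j) = r_i . v_j,
# so stacking the rows into R gives R @ M = I, i.e. R = M^{-1}.
R = np.linalg.inv(M)

# delta[i][j] = f_i(v_j) should be the identity matrix
delta = R @ M
```

The rows of R recover the coefficients of f1, f2, f3 above.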
Theorem 3.1.2 (Riesz Representation Theorem). Let V be a finite-dimensional inner product space, and f a linear functional on V. Then there exists a unique vector w in V such that f(v) = ⟨v, w⟩ for all v in V.
Proof.
Let {u1, u2, . . . , un} be an orthonormal basis for V. Put
w = \overline{f(u1)} u1 + \overline{f(u2)} u2 + . . . + \overline{f(un)} un,
and let fw be the linear functional defined by fw(v) = ⟨v, w⟩. Then for each ui,
fw(ui) = ⟨ui, w⟩ = \overline{\overline{f(ui)}} = f(ui).
Since this is true for each ui, it follows that f = fw. Now suppose w' is a vector in V such that ⟨v, w⟩ = ⟨v, w'⟩ for all v in V. Then ⟨v, w⟩ − ⟨v, w'⟩ = 0, or ⟨v, w − w'⟩ = 0. In particular ⟨w − w', w − w'⟩ = 0, and so w = w'. Thus there is exactly one vector w determining the linear functional f in the stated manner.
Example 3.1.6 Let f be the linear functional on ℂ³ (with the standard inner product) defined by f(x, y, z) = 3x − iy + 5z. Find the vector w such that f(v) = ⟨v, w⟩ for all v.
Solution. By the proof of the Riesz Representation Theorem,
w = \overline{f(u1)} u1 + \overline{f(u2)} u2 + \overline{f(u3)} u3,
where {u1, u2, u3} is an orthonormal basis for ℂ³. Thus, using the standard basis {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, we obtain
w = \overline{f(1, 0, 0)} (1, 0, 0) + \overline{f(0, 1, 0)} (0, 1, 0) + \overline{f(0, 0, 1)} (0, 0, 1) = (3, i, 5).
Thus f(v) = ⟨v, (3, i, 5)⟩, which, as one can easily check, is consistent with our definition of f.
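The recipe w = Σ_j \overline{f(uj)} uj can be verified numerically. A sketch (assuming the functional f(x, y, z) = 3x − iy + 5z, which matches w = (3, i, 5)):

```python
import numpy as np

# Functional assumed to match the computed w = (3, i, 5):
# f(x, y, z) = 3x - iy + 5z
def f(v):
    return 3 * v[0] - 1j * v[1] + 5 * v[2]

E = np.eye(3, dtype=complex)            # standard (orthonormal) basis of C^3

# w = sum_j conj(f(u_j)) u_j
w = sum(np.conj(f(E[j])) * E[j] for j in range(3))

def inner(u, v):                        # <u, v> = sum_i u_i conj(v_i)
    return np.dot(u, np.conj(v))
```

For any v, inner(v, w) reproduces f(v), as the theorem asserts.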
Example 3.1.7 Let P2(ℝ) be the space of all polynomials of degree at most 2, with the inner product
⟨f, g⟩ = ∫_{−1}^{1} f(x)g(x) dx,
and let E be the linear functional E(p) = p(0). Find the vector w in P2(ℝ) such that E(p) = ⟨p, w⟩ for all p.
Solution. An orthonormal basis of P2(ℝ) with respect to this inner product is
u1 = 1/√2,  u2 = √(3/2) x,  u3 = √(45/8) (x² − 1/3).
Since E is real-valued, no conjugates are needed, and
w = E(u1) u1 + E(u2) u2 + E(u3) u3
  = (1/√2)(1/√2) + 0·u2 + (−1/3)√(45/8) · √(45/8)(x² − 1/3)
  = 1/2 − (15/8)(x² − 1/3)
  = 9/8 − (15/8) x².
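The representing vector can be verified directly: ⟨p, w⟩ should equal p(0) for every quadratic p. A sketch using NumPy's polynomial class (the test polynomial is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def inner(p, q):
    # <p, q> = integral_{-1}^{1} p(x) q(x) dx, computed exactly via the antiderivative
    r = (p * q).integ()
    return r(1.0) - r(-1.0)

w = P([9/8, 0, -15/8])          # w(x) = 9/8 - (15/8) x^2

p = P([2.0, -3.0, 5.0])         # an arbitrary quadratic: 2 - 3x + 5x^2
```

Here inner(p, w) evaluates to p(0) = 2, matching E(p) = ⟨p, w⟩.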
Example 3.1.8 The Riesz Representation Theorem is not true without the assumption that V is finite dimensional. Let V be the vector space of all polynomials over the field of complex numbers, with the inner product
⟨f, g⟩ = ∫_0^1 f(t)\overline{g(t)} dt.
If f = Σ_i a_i xⁱ and g = Σ_j b_j xʲ, then
⟨f, g⟩ = Σ_i Σ_j a_i \overline{b_j} / (i + j + 1).
Let z be a fixed complex number, and let L be the linear functional on V given by
L(f) = f(z).
Is there a polynomial g such that ⟨f, g⟩ = L(f) for every f? The answer is no; for suppose we have
f(z) = ∫_0^1 f(t)\overline{g(t)} dt
for every f. Let h = x − z, so that for any f we have (hf)(z) = 0. Then
0 = ∫_0^1 h(t)f(t)\overline{g(t)} dt
for every f. Choosing f so that f(t) = \overline{h(t)g(t)} for all real t (take for f the polynomial whose coefficients are the conjugates of those of hg), we obtain
∫_0^1 |h(t)|² |g(t)|² dt = 0,
and so hg = 0. Since h ≠ 0, it must be that g = 0. But L is not the zero functional; hence, no such g exists.
Exercises
1. For each of the following inner product spaces V over K, determine whether the function f : V → K is a linear functional or not.
a. V = ℝ², f(x, y) = 4x + 3y
c. V = ℝ³, f(x, y, z) = x² + y² + z²
d. V = M22(ℝ), f(A) = tr(A)
2. Find the dual basis of each of the following bases B of the vector space V.
c. V = P2(ℝ), B = {x² + x, x² + 1, x + 1}
3. Let V = ℝ³ and define f1, f2, f3 ∈ V* as follows:
4. For each of the following inner product spaces V over K and linear functionals f : V → K, find a vector w such that f(v) = ⟨v, w⟩ for all v ∈ V.
c. V = P1(ℝ), with ⟨p, q⟩ = ∫_{−1}^{1} p(t)q(t) dt and T(p) = ∫_0^1 p(t) dt
3.2 The Adjoint of a Linear Operator
The Riesz Representation Theorem allows us to turn linear functionals f : V → F into vectors w ∈ V when V is a finite-dimensional inner product space. This leads us to a useful notion, that of the adjoint of a linear operator.
Theorem 3.2.1 For any linear operator T on a finite-dimensional inner product space V, there exists a unique linear operator T* on V such that
⟨T(v), u⟩ = ⟨v, T*(u)⟩ for all v, u in V.
Proof.
Let u be any vector in V. Define g : V → F by g(v) = ⟨T(v), u⟩ for all v ∈ V. We first show that g is linear. Let v1, v2 ∈ V. Then
g(v1 + v2) = ⟨T(v1 + v2), u⟩ = ⟨T(v1) + T(v2), u⟩ = ⟨T(v1), u⟩ + ⟨T(v2), u⟩ = g(v1) + g(v2).
Let v ∈ V and c ∈ F. Then g(cv) = ⟨T(cv), u⟩ = c⟨T(v), u⟩ = cg(v). Hence g is linear. Then by the Riesz Representation Theorem there is a unique vector u' ∈ V such that g(v) = ⟨v, u'⟩; i.e., ⟨T(v), u⟩ = ⟨v, u'⟩ for all v ∈ V. Defining T* : V → V by T*(u) = u', we have ⟨T(v), u⟩ = ⟨v, T*(u)⟩. Next let us show that T* is linear. Let u1, u2 ∈ V. Then for every v ∈ V,
⟨v, T*(u1 + u2)⟩ = ⟨T(v), u1 + u2⟩
                 = ⟨T(v), u1⟩ + ⟨T(v), u2⟩
                 = ⟨v, T*(u1)⟩ + ⟨v, T*(u2)⟩
                 = ⟨v, T*(u1) + T*(u2)⟩,
so T*(u1 + u2) = T*(u1) + T*(u2). Similarly, for c ∈ F,
⟨v, T*(cu)⟩ = ⟨T(v), cu⟩ = c̄⟨T(v), u⟩ = c̄⟨v, T*(u)⟩ = ⟨v, cT*(u)⟩.
Hence T*(cu) = cT*(u).
Definition 3.2.1 The linear operator T* described in Theorem 3.2.1 is called the adjoint of the operator T. The symbol T* is read as “T star”.
By Theorem 3.2.1 every linear operator on a finite-dimensional inner product space V has an adjoint on V. In the infinite-dimensional case this is not always true. But in any case there is at most one such operator T*; when it exists, we call it the adjoint of T.
Theorem 3.2.2 Let B = {u1, u2, . . . , un} be an ordered orthonormal basis of V. Let T be a linear operator on V and [T]B = A = [aij]n×n. Then
(i) aij = ⟨T(uj), ui⟩, i, j = 1, 2, . . . , n;
(ii) [T*]B = A*, the conjugate transpose of A.
Proof.
(i) Since B is orthonormal, every v ∈ V can be written as
v = ⟨v, u1⟩u1 + ⟨v, u2⟩u2 + . . . + ⟨v, un⟩un.
In particular,
T(uj) = ⟨T(uj), u1⟩u1 + ⟨T(uj), u2⟩u2 + . . . + ⟨T(uj), un⟩un, j = 1, 2, . . . , n.
On the other hand, by the definition of [T]B,
T(uj) = a1j u1 + a2j u2 + . . . + anj un, j = 1, 2, . . . , n.
Hence aij = ⟨T(uj), ui⟩, i, j = 1, 2, . . . , n.
(ii) Let [T*]B = [bij]. Applying (i) to T*,
bij = ⟨T*(uj), ui⟩ = \overline{⟨ui, T*(uj)⟩} = \overline{⟨T(ui), uj⟩} = \overline{aji}.
Hence [T*]B = A*.
Example 3.2.1 Find the adjoint of the linear operator T on ℝ³ (standard inner product) defined by
T(x, y, z) = (2x + 3y + z, 6x + y + 5z, 4x + 7y + 9z).
Solution. Consider the standard basis B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} of ℝ³, which is orthonormal. Then
[T]B = [[2, 3, 1], [6, 1, 5], [4, 7, 9]].
Let us denote this matrix by A. Then by Theorem 3.2.2, we have
[T*]B = A* = Aᵗ = [[2, 6, 4], [3, 1, 7], [1, 5, 9]]
(since A is real). If (x, y, z) ∈ ℝ³, then applying A* to the column vector (x, y, z)ᵗ gives (2x + 6y + 4z, 3x + y + 7z, x + 5y + 9z)ᵗ.
Hence T*(x, y, z) = (2x + 6y + 4z, 3x + y + 7z, x + 5y + 9z).
Example 3.2.2 Let T be the linear operator on ℂ² (standard inner product) defined by
T(z1, z2) = (z1 + (1 + i)z2, 2iz1 + (3 − 4i)z2).
Then
[T]B = [[1, 1 + i], [2i, 3 − 4i]].
If we denote this matrix by A, then
[T*]B = A* = [[1, −2i], [1 − i, 3 + 4i]].
Using A*, we obtain T*(z1, z2) = (z1 − 2iz2, (1 − i)z1 + (3 + 4i)z2).
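In an orthonormal basis, passing to the adjoint is just taking the conjugate transpose of the matrix, and the defining identity ⟨T(v), u⟩ = ⟨v, T*(u)⟩ can be spot-checked numerically. A sketch (using the matrix A = [[1, 1+i], [2i, 3−4i]] as read above; the test vectors are arbitrary):

```python
import numpy as np

A = np.array([[1, 1 + 1j],
              [2j, 3 - 4j]])          # matrix of T in an orthonormal basis
A_star = A.conj().T                   # matrix of T* (conjugate transpose)

def inner(u, v):
    # standard inner product on C^2, linear in the first variable
    return np.dot(u, np.conj(v))

rng = np.random.default_rng(0)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
```

For these (and any) v, u we have inner(A @ v, u) == inner(v, A_star @ u).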
We hope that these examples enhance the reader’s understanding of the adjoint of a linear operator. We
see that the adjoint operation, passing from T to T * , behaves somewhat like conjugation on complex
numbers. The following theorem strengthens the analogy.
Theorem 3.2.3 Let V be a finite-dimensional inner product space. If T and U are linear operators on V and c is a scalar, then
(i) (T*)* = T
(ii) (cT)* = c̄T*
(iii) (T + U)* = T* + U*
(iv) (TU)* = U*T*
(v) I* = I
Proof.
Let v, u ∈ V. Here we use the fact that the adjoint of a linear operator on V exists and is unique.
(i) ⟨T*(v), u⟩ = \overline{⟨u, T*(v)⟩} = \overline{⟨T(u), v⟩} = ⟨v, T(u)⟩.
Hence T is an adjoint of T*; i.e., (T*)* = T.
(ii) ⟨(cT)(v), u⟩ = c⟨T(v), u⟩ = c⟨v, T*(u)⟩ = ⟨v, c̄T*(u)⟩.
Hence (cT)* = c̄T*.
(iii) ⟨(T + U)(v), u⟩ = ⟨T(v) + U(v), u⟩
                      = ⟨T(v), u⟩ + ⟨U(v), u⟩
                      = ⟨v, T*(u)⟩ + ⟨v, U*(u)⟩
                      = ⟨v, T*(u) + U*(u)⟩
                      = ⟨v, (T* + U*)(u)⟩.
Therefore (T + U)* = T* + U*.
(iv) ⟨(TU)(v), u⟩ = ⟨T(U(v)), u⟩ = ⟨U(v), T*(u)⟩ = ⟨v, U*T*(u)⟩.
Hence (TU)* = U*T*.
(v) ⟨I(v), u⟩ = ⟨v, I*(u)⟩. On the other hand, ⟨I(v), u⟩ = ⟨v, u⟩. Therefore ⟨v, I*(u)⟩ = ⟨v, u⟩ for all v, so I*(u) = u. Since u is an arbitrary element of V, I* = I.
Definition 3.2.2 Let V be a vector space over a field F. Let T be a linear operator on V and W a subspace of V. We say W is invariant under T (or T-invariant) if for each w ∈ W the vector T(w) ∈ W, i.e., T(W) ⊆ W.
Example 3.2.3 Let T be the linear operator on ℝ² defined by T(x, y) = (2x + 3y, 4y). Then the subspace W = {(x, 0) : x ∈ ℝ} is T-invariant, since T(x, 0) = (2x, 0) ∈ W.
Theorem 3.2.4 Let V be a finite-dimensional inner product space and let T be a linear operator on V. Suppose W is a subspace of V which is invariant under T. Then W⊥ is invariant under T*.
Proof.
Let w' ∈ W⊥ and w ∈ W. Then T(w) ∈ W because W is invariant under T. Thus,
⟨w, T*(w')⟩ = ⟨T(w), w'⟩ = 0,
since w' ⊥ W. This implies that w ⊥ T*(w') for every w ∈ W. Hence T*(w') ∈ W⊥. Therefore W⊥ is invariant under T*.
Exercises
1. For each of the following linear operators T on an inner product space V, find the adjoint of T.
d. V = P2(ℝ) with ⟨p, q⟩ = ∫_{−1}^{1} p(t)q(t) dt, T(p(t)) = p'(t)
2. Let V be ℂ³ with the standard inner product. Let T be the linear operator on V whose matrix in the standard ordered basis is given by A = [alk], where alk = i^{l+k}, i² = −1. Find T and T*.
⟨f, g⟩ = ∫_0^1 f(x)g(x) dx
6. Let V be the space of 2 × 2 matrices over the complex numbers, with the inner product ⟨A, B⟩ = tr(AB*). Let P be a fixed invertible matrix in V, and let TP be the linear operator on V defined by TP(A) = P⁻¹AP. Find the adjoint of TP.
3.3 Normal Operators
Recall that in Chapter 1 we discussed the problem of whether a linear operator is diagonalizable, i.e., whether it has a basis of eigenvectors. In this section, we will see conditions under which V has an orthonormal basis consisting of eigenvectors of T.
Definition 3.3.1 Let V be a finite-dimensional inner product space and T be a linear operator on V. We say T is normal if TT* = T*T.
Example 3.3.1 Let T be the linear operator on ℝ² defined by T(x, y) = (−y, x). Then T*(x, y) = (y, −x), and
TT*(x, y) = T(y, −x) = (x, y)
and
T*T(x, y) = T*(−y, x) = (x, y).
Then TT*(x, y) and T*T(x, y) agree for all (x, y) ∈ ℝ², which implies that TT* = T*T. Then T is normal.
Example 3.3.2 Let T be the linear operator on ℝ² defined by T(x, y) = (0, x). Then T*(x, y) = (y, 0), and
TT*(x, y) = T(y, 0) = (0, y)
and
T*T(x, y) = T*(0, x) = (x, 0).
So in general TT*(x, y) and T*T(x, y) are not equal, and so TT* ≠ T*T. Thus T is not normal.
For instance, the matrix A = [[0, 1], [1, 1]] can easily be checked to be normal, while the matrix B = [[0, 0], [1, 0]] is not.
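Checking normality of a matrix is a one-line computation: compare MM* with M*M. A sketch for the two matrices just mentioned:

```python
import numpy as np

A = np.array([[0., 1.],
              [1., 1.]])      # symmetric, hence normal
B = np.array([[0., 0.],
              [1., 0.]])      # a nilpotent shift; not normal

def is_normal(M):
    # M is normal iff M M* == M* M
    M_star = M.conj().T
    return bool(np.allclose(M @ M_star, M_star @ M))
```

Here is_normal(A) is True and is_normal(B) is False.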
Theorem 3.3.1 Let V be a finite-dimensional inner product space, T : V → V a linear operator, and B an orthonormal basis for V. Then T is normal if and only if A = [T]B is normal.
Proof.
If T is normal, then TT* = T*T. Taking matrices with respect to B, we get
[T]B [T*]B = [T*]B [T]B.
By Theorem 3.2.2, [T*]B = A*. Hence AA* = A*A. Therefore A is normal. The converse of the theorem follows by reversing the above steps.
Example 3.3.3 Show that the linear operator T on ℝ³ defined by T(x, y, z) = (x + y, y + z, x + z) is normal.
Solution. Consider the standard basis B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} of ℝ³, which is orthonormal. Set
A = [T]B = [[1, 1, 0], [0, 1, 1], [1, 0, 1]], so that A* = Aᵗ = [[1, 0, 1], [1, 1, 0], [0, 1, 1]].
Then
AA* = [[2, 1, 1], [1, 2, 1], [1, 1, 2]] = A*A.
Hence A is normal, and therefore T is normal.
Theorem 3.3.2 Let V be a finite-dimensional inner product space and T a normal linear operator on V. Then
(i) ‖T(v)‖ = ‖T*(v)‖ for all v ∈ V.
(ii) T − cI is normal for every scalar c.
(iii) If λ is an eigenvalue of T with corresponding eigenvector v, then λ̄ is an eigenvalue of T* corresponding to the same eigenvector v.
(iv) If λ1 and λ2 are distinct eigenvalues of T with corresponding eigenvectors v1 and v2, then v1 and v2 are orthogonal.
Proof.
(i) For any v ∈ V, we have
‖T(v)‖² = ⟨T(v), T(v)⟩ = ⟨T*T(v), v⟩ = ⟨TT*(v), v⟩ = ⟨T*(v), T*(v)⟩ = ‖T*(v)‖².
Hence ‖T(v)‖ = ‖T*(v)‖.
(ii) (T − cI)(T − cI)* = (T − cI)(T* − c̄I) = TT* − c̄T − cT* + cc̄I
and
(T − cI)*(T − cI) = (T* − c̄I)(T − cI) = T*T − cT* − c̄T + cc̄I.
Since TT* = T*T, the two products are equal, so T − cI is normal.
(iii) By (ii), T − λI is normal, so (i) gives 0 = ‖(T − λI)(v)‖ = ‖(T − λI)*(v)‖ = ‖(T* − λ̄I)(v)‖. Thus T(v) = λv implies T*(v) = λ̄v.
(iv) λ1⟨v1, v2⟩ = ⟨λ1v1, v2⟩ = ⟨T(v1), v2⟩ = ⟨v1, T*(v2)⟩ = ⟨v1, λ̄2v2⟩ = λ2⟨v1, v2⟩, using (iii). Hence (λ1 − λ2)⟨v1, v2⟩ = 0, and since λ1 ≠ λ2, we get ⟨v1, v2⟩ = 0.
Theorem 3.3.3 Let V be a finite-dimensional complex inner product space and T a normal linear operator on V. Then V has an orthonormal basis consisting of eigenvectors of T; in particular, T is diagonalizable.
Proof.
Let the dimension of V be n. We shall prove this theorem by induction on n.
(i) First consider the base case n = 1. Pick any orthonormal basis B of V (which in this case is just a single unit vector); the single vector in this basis is automatically an eigenvector of T (because in a one-dimensional space every vector is a scalar multiple of this vector). So the theorem is true when n = 1.
(ii) Now suppose inductively that n > 1, and the theorem has already been proven for dimension n − 1. Let cT(λ) be the characteristic polynomial of T. From the Fundamental Theorem of Algebra, we know that cT(λ) splits over the complex numbers. Hence there must be at least one root of this polynomial, and hence T has at least one (complex) eigenvalue, and hence at least one eigenvector. So now let us pick an eigenvector v1 of T with eigenvalue λ1. Thus T(v1) = λ1v1 and, by Theorem 3.3.2, T*(v1) = λ̄1v1.
Let u1 = v1/‖v1‖. Then u1 is a unit eigenvector of T with eigenvalue λ1, i.e. T(u1) = λ1u1 (remember that if we multiply an eigenvector by a non-zero scalar, we still get an eigenvector, so it is safe to normalize eigenvectors). Let W = {cu1 : c ∈ ℂ} denote the span of this eigenvector. Let v ∈ W. Then v = cu1 for some c ∈ ℂ, and T(v) = T(cu1) = cT(u1) = cλ1u1 ∈ W. Hence W is T-invariant. Let W⊥ = {v ∈ V : v ⊥ u1} denote the orthogonal complement of W; this is an (n − 1)-dimensional space. Now let us see what T and T* do to W⊥.
Let w be any vector in W⊥. Thus w ⊥ u1, i.e. ⟨w, u1⟩ = 0. Then
⟨T(w), u1⟩ = ⟨w, T*(u1)⟩ = ⟨w, λ̄1u1⟩ = λ1⟨w, u1⟩ = 0. Hence T(w) ∈ W⊥.
Since W is T-invariant, W⊥ is T*-invariant by Theorem 3.2.4, so T*(w) ∈ W⊥.
Thus if w ∈ W⊥, then T(w) and T*(w) are also in W⊥. Thus T and T* are not only linear operators from V to V; they are also linear operators from W⊥ to W⊥. Moreover, we have
⟨T(w), w'⟩ = ⟨w, T*(w')⟩
for all w, w' ∈ W⊥, because every vector in W⊥ is a vector in V and we already have this property for vectors in V. Thus T and T* are still adjoints of each other even after we restrict the vector space from the n-dimensional space V to the (n − 1)-dimensional space W⊥; in particular, the restriction of T to W⊥ is still normal.
(iii) We now apply the induction hypothesis and find that W⊥ has an orthonormal basis of eigenvectors of T. There are n − 1 such eigenvectors, since W⊥ is (n − 1)-dimensional. Now u1 is normalized and is orthogonal to all vectors in this basis, since u1 lies in W and all the other vectors lie in W⊥. Thus if we add u1 to this basis we get a new collection of orthonormal vectors, which form a basis of V. Each of these vectors is an eigenvector of T. Since T has n linearly independent eigenvectors, T is diagonalizable.
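The conclusion can be observed numerically on a normal but non-symmetric matrix: its eigenvector matrix is unitary, so the matrix diagonalizes in an orthonormal (complex) eigenbasis. A sketch using the rotation matrix from Example 3.3.1:

```python
import numpy as np

# A normal, non-symmetric real matrix (rotation by 90 degrees):
J = np.array([[0., -1.],
              [1., 0.]])

# Its eigenvalues are +-i; work over C.
evals, U = np.linalg.eig(J.astype(complex))

# For a normal matrix with distinct eigenvalues, the unit eigenvectors
# returned in the columns of U are orthonormal, so U is unitary and
# J = U diag(evals) U*.
reconstructed = U @ np.diag(evals) @ U.conj().T
```

Here U*U is the identity (orthonormal eigenbasis) and the reconstruction recovers J.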
Exercises
1. For each of the linear operators below, determine whether it is normal or not.
c. T : P2(ℝ) → P2(ℝ) defined by T(f) = f', with ⟨f, g⟩ = ∫_0^1 f(x)g(x) dx
2. For each of the following linear operators T on V, show that T is normal and find an orthonormal basis for V which consists of eigenvectors of T.
a. V = ℝ³ with standard inner product, T(x, y, z) = (y + z, x + z, x + y)
3.4 Self-Adjoint Operators
In the previous section, we saw that in the world of complex inner product spaces, normal linear operators are the best kind of linear operators: they are not only diagonalizable, but they are diagonalizable using the best kind of basis, namely an orthonormal basis. However, there is a subclass of normal operators which is even better: the self-adjoint linear operators.
Definition 3.4.1 A linear operator T on a finite-dimensional inner product space V is said to be self-adjoint if T* = T, i.e. T is its own adjoint. A square matrix A is said to be self-adjoint if A* = A.
Example 3.4.3 Consider the inner product space M22(ℝ) with the standard inner product. The linear operator T : M22(ℝ) → M22(ℝ) defined by T(A) = Aᵗ is self-adjoint because its adjoint is given by T*(A) = Aᵗ (verify!), and this is the same as T.
A self-adjoint linear operator on a complex inner product space is sometimes known as a Hermitian linear operator. A self-adjoint linear operator on a real inner product space is known as a symmetric linear operator. Similarly, a complex self-adjoint matrix is known as a Hermitian matrix, while a real self-adjoint matrix is known as a symmetric matrix.
A matrix is symmetric if Aᵗ = A. When the matrix is real, the transpose Aᵗ is the same as the adjoint A*; thus self-adjoint and symmetric mean the same thing for real matrices, but not for complex matrices. Every real symmetric matrix is automatically Hermitian because every real matrix is also a complex matrix.
Example 3.4.5 The matrix A = [[0, 1], [1, 0]] is symmetric (self-adjoint) since Aᵗ = A; the matrix B = [[1, i], [−i, 1]] is Hermitian (self-adjoint) because B* = B; and the matrix C = [[i, 1], [1, i]] is symmetric but not self-adjoint (not Hermitian), since C* = [[−i, 1], [1, −i]] ≠ C.
It is clear that all self-adjoint linear operators are normal, since if T* = T, then TT* and T*T are both equal to T² and are hence equal to each other. Similarly, every self-adjoint matrix is normal. However, not every normal linear operator is self-adjoint, and not every normal matrix is self-adjoint.
Theorem 3.4.1 Let T be a linear operator on a finite-dimensional inner product space V. Then T is self-adjoint if and only if its matrix with respect to some orthonormal basis of V is self-adjoint.
Theorem 3.4.2 Let V be a finite-dimensional inner product space and T be a self-adjoint linear
operator on V . Then
(i) each eigenvalue of T is real.
(ii) eigenvectors of T associated with distinct eigenvalues are orthogonal.
Proof.
(i) Let λ be an eigenvalue of T with corresponding eigenvector v. Thus v is non-zero and T(v) = λv. Then
λ⟨v, v⟩ = ⟨λv, v⟩ = ⟨T(v), v⟩ = ⟨v, T*(v)⟩ = ⟨v, T(v)⟩ = ⟨v, λv⟩ = λ̄⟨v, v⟩.
This implies that (λ − λ̄)⟨v, v⟩ = 0. Since v is non-zero, we must have λ = λ̄. Thus λ is real.
(ii) Let λ1 and λ2 be distinct eigenvalues of T corresponding to eigenvectors v1 and v2. Then
λ1⟨v1, v2⟩ = ⟨λ1v1, v2⟩ = ⟨T(v1), v2⟩ = ⟨v1, T*(v2)⟩ = ⟨v1, T(v2)⟩ = ⟨v1, λ2v2⟩ = λ̄2⟨v1, v2⟩ = λ2⟨v1, v2⟩,
using that λ2 is real by (i). This implies that (λ1 − λ2)⟨v1, v2⟩ = 0. Since λ1 ≠ λ2, we must have ⟨v1, v2⟩ = 0, and hence v1 and v2 are orthogonal.
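Both conclusions can be observed numerically: for a Hermitian matrix the eigenvalues come out real and the eigenvectors orthonormal. A sketch using the Hermitian matrix B from Example 3.4.5:

```python
import numpy as np

H = np.array([[1, 1j],
              [-1j, 1]])          # Hermitian: H* == H

# eigh is NumPy's solver for Hermitian/symmetric matrices;
# it returns real eigenvalues in ascending order and orthonormal eigenvectors.
evals, evecs = np.linalg.eigh(H)
```

The eigenvalues here are 0 and 2 (both real), and the eigenvector matrix is unitary.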
Corollary 3.4.1 Let V be a finite-dimensional inner product space and T a self-adjoint linear operator on V. Then the characteristic polynomial of T splits over the reals.
Proof.
We know from the Fundamental Theorem of Algebra that the characteristic polynomial of T splits over the complex numbers. But since T is a self-adjoint linear operator, by Theorem 3.4.2 every eigenvalue of T, i.e. every root of the characteristic polynomial, must be real. Thus the polynomial must split over the reals.
Theorem 3.4.3 Let V be a finite-dimensional real inner product space and T a self-adjoint linear operator on V. Then V has an orthonormal basis consisting of eigenvectors of T.
Proof. We repeat the proof of Theorem 3.3.3, i.e. we do an induction on the dimension n of the vector space V. When n = 1 the claim is again trivial, and we use Theorem 3.4.2 to make sure that the eigenvalue is real. Now suppose inductively that n > 1 and the claim has already been proven for n − 1. From Corollary 3.4.1 we know that T has at least one real eigenvalue. Thus we can find a real λ1 and an eigenvector v1 such that T(v1) = λ1v1. We can normalize v1 to have unit length. We now repeat the proof of Theorem 3.3.3 for normal operators to obtain the same conclusion, except that the eigenvalues are now real.
Theorem 3.4.4 (Spectral Theorem). Let T be a linear operator on a finite-dimensional inner product space V over F. Assume T is normal if F = ℂ and self-adjoint if F = ℝ. Let λ1, λ2, . . . , λk be the distinct eigenvalues of T with corresponding eigenspaces W1, W2, . . . , Wk, respectively. Let T1, T2, . . . , Tk be the orthogonal projections of V onto W1, W2, . . . , Wk, respectively. Then
(1) Wi ⊥ Wj when i ≠ j.
(2) V = W1 ⊕ W2 ⊕ . . . ⊕ Wk.
(3) T = λ1T1 + λ2T2 + . . . + λkTk.
Proof.
(1) Let w ∈ Wi and w' ∈ Wj with i ≠ j, so that T(w) = λiw and T(w') = λjw' for the distinct eigenvalues λi, λj of T. Now,
λi⟨w, w'⟩ = ⟨λiw, w'⟩ = ⟨T(w), w'⟩ = ⟨w, T*(w')⟩ = ⟨w, λ̄jw'⟩ = λj⟨w, w'⟩,
using that T*(w') = λ̄jw' (Theorem 3.3.2). Hence (λi − λj)⟨w, w'⟩ = 0. Since λi ≠ λj, we must have ⟨w, w'⟩ = 0. This implies that w ⊥ w'. Since w and w' are arbitrary elements of Wi and Wj respectively, we conclude that Wi ⊥ Wj for i ≠ j.
(2) (i) Let v ∈ V. From Theorem 3.3.3 (or Theorem 3.4.3 in the real case), V has an orthonormal basis B = {u1, u2, . . . , un} such that each element of B is an eigenvector of T. Hence v = c1u1 + c2u2 + . . . + cnun for some scalars c1, c2, . . . , cn. Each ui belongs to exactly one eigenspace Wj (because the eigenvalues λ1, . . . , λk are distinct), so grouping the terms of this sum by eigenspace gives v = w1 + w2 + . . . + wk for some wi ∈ Wi, i = 1, 2, . . . , k. Thus V = W1 + W2 + . . . + Wk.
(ii) By (1) the subspaces Wi are pairwise orthogonal, so the sum in (i) is direct: if w1 + w2 + . . . + wk = 0 with wi ∈ Wi, then taking the inner product with each wj gives ⟨wj, wj⟩ = 0, i.e. wj = 0. Together, (i) and (ii) imply that V = W1 ⊕ W2 ⊕ . . . ⊕ Wk.
(3) Let v ∈ V. Since V = W1 ⊕ W2 ⊕ . . . ⊕ Wk, v can be expressed uniquely as v = w1 + w2 + . . . + wk with wi ∈ Wi, i = 1, 2, . . . , k. Since T1, T2, . . . , Tk are the orthogonal projections of V onto W1, W2, . . . , Wk respectively, we have T1(v) = w1, T2(v) = w2, . . . , Tk(v) = wk. Thus,
T(v) = T(w1 + w2 + . . . + wk)
     = T(w1) + T(w2) + . . . + T(wk)
     = λ1w1 + λ2w2 + . . . + λkwk
     = λ1T1(v) + λ2T2(v) + . . . + λkTk(v)
     = (λ1T1 + λ2T2 + . . . + λkTk)(v).
Hence T = λ1T1 + λ2T2 + . . . + λkTk.
In the Spectral Theorem above, the set {λ1, λ2, . . . , λk} of eigenvalues of T is called the spectrum of T, and the sum T = λ1T1 + λ2T2 + . . . + λkTk is called the spectral decomposition of T.
Example 3.4.6 Let T be the linear operator on ℝ³ (standard inner product) whose matrix in the standard basis is
[T]B = [[2, −1, 0], [−1, 3, −1], [0, −1, 2]].
Find (a) the eigenspaces of T and (b) the spectral decomposition of T.
Solution. Let us take the standard basis B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} of ℝ³, which is orthonormal. Since [T]B is symmetric, T is a symmetric (self-adjoint) linear operator. The characteristic polynomial factors as (2 − λ)(λ − 1)(λ − 4), so the eigenvalues are λ1 = 1, λ2 = 2, λ3 = 4, with orthonormal eigenvectors
u1 = (1, 1, 1)/√3, u2 = (1, 0, −1)/√2, u3 = (1, −2, 1)/√6.
For instance, the orthogonal projection onto the eigenspace W3 = span{u3} is
T3(x, y, z) = ⟨(x, y, z), (1/√6, −2/√6, 1/√6)⟩ (1/√6, −2/√6, 1/√6) = ((x − 2y + z)/6, (−2x + 4y − 2z)/6, (x − 2y + z)/6),
and the spectral decomposition is T = 1·T1 + 2·T2 + 4·T3.
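A spectral decomposition is easy to verify numerically: diagonalize, build the rank-one projections Pi = ui uiᵗ, and check that Σ λiPi rebuilds the matrix. A sketch (using the symmetric matrix [[2, −1, 0], [−1, 3, −1], [0, −1, 2]] as an assumed stand-in for T):

```python
import numpy as np

A = np.array([[2., -1., 0.],
              [-1., 3., -1.],
              [0., -1., 2.]])       # symmetric, hence self-adjoint

evals, evecs = np.linalg.eigh(A)    # eigenvalues 1, 2, 4; orthonormal eigenvectors

# spectral decomposition: A = sum_i lambda_i P_i with P_i = u_i u_i^T
P = [np.outer(evecs[:, i], evecs[:, i]) for i in range(3)]
A_rebuilt = sum(evals[i] * P[i] for i in range(3))
```

The projections satisfy Pi·Pj = 0 for i ≠ j, reflecting the orthogonality of the eigenspaces.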
Exercises
1. For each of the following linear operators, determine whether it is self-adjoint.
a. T : ℝ² → ℝ² defined by T(x, y) = (3x + y, x + 4y)
2. Let V be a complex inner product space and T a linear operator on V. Prove that T is self-adjoint if and only if ⟨T(v), v⟩ is real for all v ∈ V.
3. Let V be an inner product space and T a self-adjoint operator on V such that ⟨T(v), v⟩ = 0 for all v ∈ V. Prove that T = 0.
4. Let T and U be self-adjoint linear operators on an inner product space V. Prove that TU is self-adjoint if and only if TU = UT.
5. For each of the following linear operators T on V, show that T is normal and find an orthonormal basis for V which consists of eigenvectors of T.
6. Verify the Spectral Theorem for each of the following linear operators on the inner product space V. Take the standard inner products.
a. V = ℝ², T(x, y) = (x + 2y, 2x + y)
b. V = ℝ³, T(x, y, z) = (x + y + z, x + y + z, x + y + z)
3.5 Isometries
In this section we will continue our analogy between complex numbers and linear operators. Recall that the adjoint of a linear operator acts similarly to the conjugate of a complex number (see Theorem 3.2.3). A complex number z has length 1 if z̄z = 1. We will study those linear operators T on an inner product space V such that TT* = T*T = I, which means T* = T⁻¹. This condition guarantees, in fact, that T preserves length and the inner product.
Theorem 3.5.1 Let V be a finite-dimensional inner product space and T a linear operator on V. Then the following statements are equivalent.
(i) T* = T⁻¹, i.e. T*T = I.
(ii) ⟨T(v), T(u)⟩ = ⟨v, u⟩ for all v, u ∈ V.
(iii) ‖T(v)‖ = ‖v‖ for all v ∈ V.
Proof.
(i) ⇒ (ii): ⟨T(v), T(u)⟩ = ⟨v, T*T(u)⟩ = ⟨v, I(u)⟩ = ⟨v, u⟩.
(ii) ⇒ (iii): ‖T(v)‖² = ⟨T(v), T(v)⟩ = ⟨v, v⟩ = ‖v‖². Hence ‖T(v)‖ = ‖v‖.
(iii) ⇒ (i):
Ker T = {v ∈ V : T(v) = 0}
      = {v ∈ V : ‖T(v)‖ = 0}
      = {v ∈ V : ‖v‖ = 0}
      = {v ∈ V : v = 0}
      = {0}.
Hence T is one-to-one. Since V is finite dimensional, T is invertible. Let v be any element of V. Now,
⟨(T*T − I)(v), v⟩ = ⟨T*T(v), v⟩ − ⟨v, v⟩ = ⟨T(v), T(v)⟩ − ⟨v, v⟩ = 0.
This implies that ⟨(T*T − I)(v), v⟩ = 0 for every v ∈ V. Since T*T − I is self-adjoint, it follows that (T*T − I)(v) = 0 for every v ∈ V, i.e. T*T = I.
Definition 3.5.1 Let V be a finite-dimensional inner product space and T a linear operator on V. T is said to be an isometry if it satisfies any one (and hence all) of the three equivalent conditions stated in Theorem 3.5.1.
Definition 3.5.2 An isometry on a complex inner product space is called a unitary operator, and an isometry on a real inner product space is called an orthogonal operator. Let A be an n × n matrix that satisfies A* = A⁻¹. We call A a unitary matrix if A has complex entries, and an orthogonal matrix if A has real entries.
Example 3.5.1 The linear operator T on ℝ² defined by T(x, y) = (−y, x) is an isometry, since
⟨T(x, y), T(x, y)⟩ = ⟨(−y, x), (−y, x)⟩ = y² + x² = x² + y² = ⟨(x, y), (x, y)⟩.
Example 3.5.2 Let V = P3(ℝ) with inner product ⟨p, q⟩ = ∫_0^1 p(t)q(t) dt, and let T : V → V be the linear operator T(p(t)) = −p(t). Show that T is an isometry.
Solution.
‖T(p(t))‖² = ⟨T(p(t)), T(p(t))⟩ = ⟨−p(t), −p(t)⟩ = ∫_0^1 (−p(t))(−p(t)) dt = ∫_0^1 (p(t))² dt = ‖p(t)‖².
Hence ‖T(p(t))‖ = ‖p(t)‖ for all p, and T is an isometry.
Example 3.5.3 The matrix A = [[0, −1], [1, 0]] is orthogonal, since Aᵗ = [[0, 1], [−1, 0]] = A⁻¹, and the matrix
A = [[1/√2, i/√2], [i/√2, 1/√2]]
is unitary, since A* = [[1/√2, −i/√2], [−i/√2, 1/√2]] = A⁻¹.
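Both conditions reduce to a single matrix identity: QᵗQ = I in the real case and U*U = I in the complex case. A sketch checking the two matrices just given:

```python
import numpy as np

Q = np.array([[0., -1.],
              [1., 0.]])                      # real: orthogonal, Q^t = Q^{-1}

U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])    # complex: unitary, U* = U^{-1}

orthogonal_ok = np.allclose(Q.T @ Q, np.eye(2))
unitary_ok = np.allclose(U.conj().T @ U, np.eye(2))
```

Both checks pass, so the columns of each matrix form an orthonormal set.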
Theorem 3.5.2 Let V be a finite-dimensional inner product space, T a linear operator on V, B = {u1, u2, . . . , un} an orthonormal basis of V, and A = [T]B. Then T is an isometry if and only if A* = A⁻¹.
Proof.
T is an isometry ⟺ T* = T⁻¹
               ⟺ [T*]B = [T⁻¹]B
               ⟺ ([T]B)* = ([T]B)⁻¹
               ⟺ A* = A⁻¹.
Theorem 3.5.3 Let V be a finite-dimensional inner product space and T an isometry on V. If B = {u1, u2, . . . , un} is an orthonormal basis of V, then T(B) is an orthonormal basis of V.
Proof.
Since T preserves the norm and the inner product,
‖T(ui)‖ = ‖ui‖ = 1 for i = 1, 2, . . . , n, and
⟨T(ui), T(uj)⟩ = ⟨ui, uj⟩ = δij.
Thus T(B) is an orthonormal set. Since T is an isometry, and hence one-to-one, T(B) has n elements. An orthonormal set of n vectors in the n-dimensional space V is a basis, so T(B) is an orthonormal basis of V.
Exercises
1. For each of the following linear operators, determine whether it is an isometry or not. Take the standard inner product on each vector space.
a. V = ℝ², T(x, y) = ((3x + 4y)/5, (4x − 3y)/5)
b. V = ℝ², T(x, y) = (x + y, x − y)
c. V = ℝ³, T(x, y, z) = (−y, x, z)
d. V = ℂ³, T(z1, z2, z3) = (z3, (z1 + z2)/√2, (iz1 − iz2)/√2)
e. V = M22(ℝ), T(A) = Aᵗ
2. If T is an isometry on the inner product space ℝ³ with the standard inner product and v = (2, 6, 3), then find ‖T(v)‖.
3. Let θ be an angle such that 0 ≤ θ < 2π. Consider the inner product space ℝ² with the standard inner product. Show that the linear operator T : ℝ² → ℝ² defined by T(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ) (anticlockwise rotation about the origin) is an isometry.
4. Show that the matrix
A = (1/√3) [[1, 1, 1, 0], [1, −1, 0, 1], [1, 0, −1, −1], [0, 1, −1, 1]]
is orthogonal and find its inverse.
5. If T is an isometry on an inner product space V and B is an orthonormal basis for V, show that T(B) is an orthonormal basis for V.