
CHAPTER 3

LINEAR OPERATORS ON INNER PRODUCT SPACES


3.1 Linear Functionals and the Dual Spaces

3.2 Adjoint of Linear Operators

3.3 Normal Operators

3.4 Self-adjoint Operators

3.5 Isometries

The deepest results related to inner-product spaces deal with the subject to which we now turn—operators
on inner-product spaces. By exploiting properties of the adjoint, we will develop a detailed description
of several important classes of operators on inner-product spaces.

3.1 Linear Functionals and the Dual Spaces

This section treats linear functionals on an inner product space and their relation to the inner
product. We use this relation to show that any linear functional f on a finite-dimensional inner
product space is given by taking the inner product with a fixed vector in the space.

Definition 3.1.1 Let V be a vector space over a field F. A linear functional (or linear form) on V
is a function f : V → F satisfying the following conditions:

(i) For every v, v′ ∈ V, we have f(v + v′) = f(v) + f(v′)

(ii) For every v ∈ V and c ∈ F, f(cv) = c f(v)

Note: Thus a linear functional is a linear transformation from V into the field of scalars F.

Example 3.1.1 The linear transformation T : ℝ³ → ℝ defined by T(x, y, z) = 3x + 4y + 5z is a linear functional on ℝ³.


Example 3.1.2 Let F be a field and let c₁, c₂, ..., cₙ ∈ F. Define a function f : V = Fⁿ → F by

f(x₁, x₂, ..., xₙ) = c₁x₁ + c₂x₂ + ⋯ + cₙxₙ

Then f is a linear functional on V.

Example 3.1.3 Let V be the vector space of continuous complex- (or real-) valued functions on the
interval [a, b]. Then the function f : V → ℂ (or ℝ) defined by

f(x) = ∫ₐᵇ x(t) dt

is a linear functional on V.

Example 3.1.4 Let V be any inner product space over a field F, and let w be any vector in V. Then
the linear transformation T : V → F defined by T(v) = ⟨v, w⟩ is a linear functional on V (this is
because the inner product is linear in the first variable v). For instance, the linear functional
T(x, y, z) = 3x + 4y + 5z in Example 3.1.1 is of this type, since T(x, y, z) = ⟨(x, y, z), (3, 4, 5)⟩.

If V is a vector space over a field F, then the collection of all linear functionals on V, with the
operations of addition and scalar multiplication of functions, is a vector space over F. We denote this
space by V* and call it the dual space of V.

Definition 3.1.2 Let δᵢⱼ = 1 if i = j, and δᵢⱼ = 0 if i ≠ j. δᵢⱼ is called the Kronecker delta.

Theorem 3.1.1 Let V be a finite-dimensional vector space over a field F, and let B = {v₁, v₂, ..., vₙ}
be a basis for V. Then there is a unique basis B* = {f₁, f₂, ..., fₙ} for V* such that

fᵢ(vⱼ) = δᵢⱼ

We call B* the dual basis of B.


Proof.

(i) For each i = 1, 2, ..., n, define fᵢ : V → F by fᵢ(vⱼ) = δᵢⱼ, extended linearly to all of V; each fᵢ ∈ V*.

Let f ∈ V*. Let g = f(v₁)f₁ + f(v₂)f₂ + ⋯ + f(vₙ)fₙ. Then g ∈ V*. Let v ∈ V. Since B
is a basis for V, there exist c₁, c₂, ..., cₙ in F such that v = c₁v₁ + c₂v₂ + ⋯ + cₙvₙ.

For each i,

fᵢ(v) = fᵢ(c₁v₁ + c₂v₂ + ⋯ + cₙvₙ) = c₁fᵢ(v₁) + c₂fᵢ(v₂) + ⋯ + cₙfᵢ(vₙ) = cᵢ

Hence v = f₁(v)v₁ + f₂(v)v₂ + ⋯ + fₙ(v)vₙ

Now,

g(v) = (f(v₁)f₁ + f(v₂)f₂ + ⋯ + f(vₙ)fₙ)(v)

= f(v₁)f₁(v) + f(v₂)f₂(v) + ⋯ + f(vₙ)fₙ(v)

= f(f₁(v)v₁) + f(f₂(v)v₂) + ⋯ + f(fₙ(v)vₙ)

= f(f₁(v)v₁ + f₂(v)v₂ + ⋯ + fₙ(v)vₙ)

= f(v)

Thus f = g = f(v₁)f₁ + f(v₂)f₂ + ⋯ + f(vₙ)fₙ. Therefore B* = {f₁, f₂, ..., fₙ} spans V*.

(ii) Let c₁, c₂, ..., cₙ be scalars in F such that

c₁f₁ + c₂f₂ + ⋯ + cₙfₙ = 0 (the zero linear functional).

For each vᵢ in B,

(c₁f₁ + c₂f₂ + ⋯ + cₙfₙ)(vᵢ) = 0(vᵢ)

c₁f₁(vᵢ) + c₂f₂(vᵢ) + ⋯ + cₙfₙ(vᵢ) = 0

This implies that cᵢ = 0 for each i. Hence B* = {f₁, f₂, ..., fₙ} is linearly independent.


Therefore, (i) and (ii) imply that B* = {f₁, f₂, ..., fₙ} is a basis for V*.

Since B is a basis for V, any v in V is expressed uniquely as

v = f₁(v)v₁ + f₂(v)v₂ + ⋯ + fₙ(v)vₙ

Moreover, each fᵢ is completely determined by the prescribed values fᵢ(vⱼ) = δᵢⱼ on the basis B, so B* = {f₁, f₂, ..., fₙ} is uniquely determined by B.

From Theorem 3.1.1 we observe that dim V* = dim V.

Example 3.1.5 Find the dual basis of B = {v₁ = (1, 1, −1), v₂ = (1, 2, 0), v₃ = (2, −1, −4)} of ℝ³.

Solution. Let B* = {f₁, f₂, f₃} be the dual basis of B. Since fᵢ(vⱼ) = δᵢⱼ, we have

f₁(v₁) = 1, f₁(v₂) = 0, f₁(v₃) = 0

f₂(v₁) = 0, f₂(v₂) = 1, f₂(v₃) = 0

f₃(v₁) = 0, f₃(v₂) = 0, f₃(v₃) = 1

Let v = (x, y, z) ∈ ℝ³. Since B is a basis for ℝ³, v = (x, y, z) = c₁v₁ + c₂v₂ + c₃v₃ for some
c₁, c₂, c₃. Solving for c₁, c₂, c₃ we get c₁ = −8x + 4y − 5z, c₂ = 5x − 2y + 3z and
c₃ = 2x − y + z. Then,

f₁(x, y, z) = f₁(c₁v₁ + c₂v₂ + c₃v₃) = c₁f₁(v₁) + c₂f₁(v₂) + c₃f₁(v₃) = c₁ = −8x + 4y − 5z

f₂(x, y, z) = f₂(c₁v₁ + c₂v₂ + c₃v₃) = c₁f₂(v₁) + c₂f₂(v₂) + c₃f₂(v₃) = c₂ = 5x − 2y + 3z

f₃(x, y, z) = f₃(c₁v₁ + c₂v₂ + c₃v₃) = c₁f₃(v₁) + c₂f₃(v₂) + c₃f₃(v₃) = c₃ = 2x − y + z

Hence the dual basis of B is

B* = { f₁(x, y, z) = −8x + 4y − 5z, f₂(x, y, z) = 5x − 2y + 3z, f₃(x, y, z) = 2x − y + z }

Theorem 3.1.2 (Riesz Representation Theorem). Let V be a finite-dimensional inner product space,
and f a linear functional on V. Then there exists a unique vector w in V such that f(v) = ⟨v, w⟩ for
all v in V.

Proof.
Let be u1 , u 2 , . . . , u n  an orthonormal basis for V . Put
w  f (u1 ) u1  f (u 2 ) u 2  . . .  f (u n ) u n
4

BAHIR DAR UNIVERSITY, MATHEMATICS PROGRAM


CHAPTER 3: LINEAR OPERATORS ON INNER PRODUCT SPACES

and let f w be the linear functional defined by f w (v)  v, w .


Then
f w (ui )  ui , w
 ui , f (u1 ) u1  f (u 2 ) u 2  . . .  f (u n ) u n
 f (u1 ) ui , u1  f (u 2 ) ui , u 2  . . .  f (u n ) u n , u n
 f (ui )

Since this is true for each u i , it follows that f  f w . Now suppose w ' is a vector in V such that
v , w  v , w ' for all v in V . Then v , w  v , w '  0 or v , w  w '  0 . In particular
w  w ' , w  w '  0 and w  w ' . Thus there is exactly one vector w determining the linear functional
f in the stated manner.

Example 3.1.6 Consider the linear functional f : ℂ³ → ℂ defined by f(x, y, z) = 3x + iy + 5z. From the
Riesz Representation Theorem we know that there must be one vector w ∈ ℂ³ such that f(v) = ⟨v, w⟩
for all v ∈ ℂ³. In this case we can see what w is by inspection, but let us pretend that we are unable to
see this and use the formula in the proof of the Riesz Representation Theorem. Namely, we know that

$w = \overline{f(u_1)}\,u_1 + \overline{f(u_2)}\,u_2 + \cdots + \overline{f(u_n)}\,u_n$

where {u₁, u₂, ..., uₙ} is an orthonormal basis for ℂ³. Thus, using the standard basis
{(1, 0, 0), (0, 1, 0), (0, 0, 1)}, we obtain

$w = \overline{f(1,0,0)}\,(1, 0, 0) + \overline{f(0,1,0)}\,(0, 1, 0) + \overline{f(0,0,1)}\,(0, 0, 1) = 3(1, 0, 0) - i(0, 1, 0) + 5(0, 0, 1) = (3, -i, 5)$

Thus f(v) = ⟨v, (3, −i, 5)⟩, which, as one can easily check, is consistent with our definition of f.

Example 3.1.7 Let P₂(ℝ) be the set of all polynomials of degree at most 2, with the inner product

⟨f, g⟩ = ∫₋₁¹ f(x)g(x) dx

Let E : P₂(ℝ) → ℝ be the evaluation functional E(f) = f(0); for instance E(x² + 2x + 3) = 3. From the
Riesz Representation Theorem we know that E(f) = ⟨f, w⟩ for some w ∈ P₂(ℝ); we now find what
this w is. We first need an orthonormal basis for P₂(ℝ). We know that

$u_1 = \frac{1}{\sqrt{2}}, \qquad u_2 = \sqrt{\tfrac{3}{2}}\,x, \qquad u_3 = \sqrt{\tfrac{45}{8}}\left(x^2 - \tfrac{1}{3}\right)$

is an orthonormal basis for P₂(ℝ), which can be obtained from the standard basis {1, x, x²} of P₂(ℝ) by
the Gram-Schmidt orthogonalization process. Thus we can compute w as

$w = E(u_1)\,u_1 + E(u_2)\,u_2 + E(u_3)\,u_3 = \frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} + 0\cdot\sqrt{\tfrac{3}{2}}\,x + \left(-\frac{1}{3}\sqrt{\tfrac{45}{8}}\right)\sqrt{\tfrac{45}{8}}\left(x^2 - \tfrac{1}{3}\right) = \frac{9}{8} - \frac{15}{8}x^2$
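
The same Riesz computation can be checked by integrating exactly against coefficient lists; the sketch below (ours, assuming numpy; inner is a helper we introduce) confirms that ⟨p, w⟩ = p(0) for w(x) = 9/8 − (15/8)x².

```python
import numpy as np

def inner(p, q):
    # Exact inner product <p, q> = integral_{-1}^{1} p(x) q(x) dx for polynomials
    # given by coefficient lists (lowest degree first): int_{-1}^{1} x^k dx = 2/(k+1)
    # for even k and 0 for odd k.
    prod = np.polynomial.polynomial.polymul(p, q)
    return sum(c * (2.0 / (k + 1)) for k, c in enumerate(prod) if k % 2 == 0)

w = [9 / 8, 0.0, -15 / 8]        # w(x) = 9/8 - (15/8) x^2 from Example 3.1.7

for p in [[1.0], [0.0, 1.0], [0.0, 0.0, 1.0], [3.0, 2.0, 1.0]]:
    # E(p) = p(0) is the constant coefficient; it should equal <p, w>.
    assert np.isclose(inner(p, w), p[0])
```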

Example 3.1.8 The Riesz Representation Theorem is not true without the assumption that V is finite
dimensional. Let V be the vector space of polynomials over the field of complex numbers, with the
inner product

$\langle f, g\rangle = \int_0^1 f(t)\,\overline{g(t)}\,dt$

If $f = \sum_i a_i x^i$ and $g = \sum_j b_j x^j$, then

$\langle f, g\rangle = \sum_i \sum_j \frac{1}{i+j+1}\, a_i \overline{b_j}$

Let z be a fixed complex number, and let L be the linear functional on V given by

L(f) = f(z)

Is there a polynomial g such that ⟨f, g⟩ = L(f) for every f? The answer is no; for suppose we have

$f(z) = \int_0^1 f(t)\,\overline{g(t)}\,dt$

for every f. Let h = x − z, so that for any f we have (hf)(z) = 0. Then

$0 = \int_0^1 h(t)\,f(t)\,\overline{g(t)}\,dt$

for all f. In particular this holds when $f = \bar{h}g$, so that

$\int_0^1 |h(t)|^2\,|g(t)|^2\,dt = 0$

and so hg = 0. Since h ≠ 0, it must be that g = 0. But L is not the zero functional; hence no such g
exists.

Exercises

1. For each of the following inner product spaces V over K, determine whether the function
f : V → K is a linear functional or not.

a. V = ℝ², f(x, y) = 4x + 3y

b. V = P₂(ℝ), f(p) = 2p′(0) + p″(1)

c. V = ℝ³, f(x, y, z) = x² + y² + z²

d. V = M₂ₓ₂(ℝ), f(A) = tr(A)

2. Find the dual basis of each of the following bases B of the vector space V.

a. V = ℝ², B = {(1, 2), (−2, 1)}

b. V = ℝ³, B = {(1, 0, 1), (−1, 2, 1), (0, 0, 1)}

c. V = P₂(ℝ), B = {x² + x, x² + 1, x + 1}

3. Let V = ℝ³ and define f₁, f₂, f₃ ∈ V* as follows:

f₁(x, y, z) = x + 2y, f₂(x, y, z) = x + y + z and f₃(x, y, z) = y + 3z. Find the basis
B = {v₁, v₂, v₃} of V for which B* = {f₁, f₂, f₃} is the dual basis.

4. For each of the following inner product spaces V over K and linear functionals f : V → K, find a
vector w such that f(v) = ⟨v, w⟩ for all v ∈ V.

a. V = ℝ³, f(x, y, z) = x + 2y + 4z, with the standard inner product

b. V = ℂ², f(z₁, z₂) = iz₁ + 2z₂, with the standard inner product

c. V = P₁(ℝ), with ⟨p, q⟩ = ∫₋₁¹ p(t)q(t) dt, f(p) = ∫₀¹ p(t) dt


3.2 Adjoint of Linear Operators

The Riesz Representation Theorem allows us to turn linear functionals f : V → F into vectors w ∈ V when
V is a finite-dimensional inner product space. This leads us to a useful notion, that of the adjoint of a
linear operator.

Theorem 3.2.1 For any linear operator T on a finite-dimensional inner product space V, there exists a
unique linear operator T* on V such that
⟨T(v), u⟩ = ⟨v, T*(u)⟩ for all v, u in V
Proof.
Let u be any vector in V. Define g : V → F by g(v) = ⟨T(v), u⟩ for all v ∈ V. We first show that
g is linear. Let v₁, v₂ ∈ V. Then

g(v₁ + v₂) = ⟨T(v₁ + v₂), u⟩ = ⟨T(v₁) + T(v₂), u⟩ = ⟨T(v₁), u⟩ + ⟨T(v₂), u⟩ = g(v₁) + g(v₂)

Let v ∈ V and c ∈ F. Then g(cv) = ⟨T(cv), u⟩ = ⟨cT(v), u⟩ = c⟨T(v), u⟩ = cg(v). Hence g is linear. By the
Riesz Representation Theorem there is a unique vector u′ ∈ V such that g(v) = ⟨v, u′⟩; i.e.,
⟨T(v), u⟩ = ⟨v, u′⟩ for all v ∈ V. Defining T* : V → V by T*(u) = u′, we have
⟨T(v), u⟩ = ⟨v, T*(u)⟩. Next let us show that T* is linear. Let u₁, u₂ ∈ V. Then

⟨v, T*(u₁ + u₂)⟩ = ⟨T(v), u₁ + u₂⟩ = ⟨T(v), u₁⟩ + ⟨T(v), u₂⟩ = ⟨v, T*(u₁)⟩ + ⟨v, T*(u₂)⟩ = ⟨v, T*(u₁) + T*(u₂)⟩

Since v is arbitrary, T*(u₁ + u₂) = T*(u₁) + T*(u₂).

Let u ∈ V and c ∈ F. Then

⟨v, T*(cu)⟩ = ⟨T(v), cu⟩ = c̄⟨T(v), u⟩ = c̄⟨v, T*(u)⟩ = ⟨v, cT*(u)⟩

Hence T*(cu) = cT*(u)


Therefore T* is a linear operator.

Finally, we need only show that T* is unique. Suppose T** : V → V is linear and satisfies
⟨T(v), u⟩ = ⟨v, T**(u)⟩ for all v, u in V. Then ⟨v, T*(u)⟩ = ⟨v, T**(u)⟩ for all v, u in V, which implies
that T*(u) = T**(u) for all u in V. Hence T* = T**. Therefore T* is unique.

Definition 3.2.1 The linear operator T* described in Theorem 3.2.1 is called the adjoint of the operator T.
The symbol T* is read as "T star".

By Theorem 3.2.1 every linear operator on a finite-dimensional inner product space V has an adjoint on
V . In the infinite-dimensional case this is not always true. But in any case there is at most one such
operator T * ; when it exists, we call it the adjoint of T .

Theorem 3.2.2 Let B = {u₁, u₂, ..., uₙ} be an ordered orthonormal basis of V. Let T be a linear
operator on V and [T]_B = A = [aᵢⱼ]ₙₓₙ. Then
(i) aᵢⱼ = ⟨T(uⱼ), uᵢ⟩, i, j = 1, 2, ..., n

(ii) [T*]_B = A*, the conjugate transpose of A.

Proof.

(i) Since B is an orthonormal basis, for every v ∈ V we have

v = ⟨v, u₁⟩u₁ + ⟨v, u₂⟩u₂ + ⋯ + ⟨v, uₙ⟩uₙ

Analogously,

T(uⱼ) = ⟨T(uⱼ), u₁⟩u₁ + ⟨T(uⱼ), u₂⟩u₂ + ⋯ + ⟨T(uⱼ), uₙ⟩uₙ, j = 1, 2, ..., n

Using the matrix of T, we have

T(uⱼ) = a₁ⱼu₁ + a₂ⱼu₂ + ⋯ + aₙⱼuₙ, j = 1, 2, ..., n

Hence aᵢⱼ = ⟨T(uⱼ), uᵢ⟩, i, j = 1, 2, ..., n

(ii) Let B = [T*]_B. Then,

bᵢⱼ = ⟨T*(uⱼ), uᵢ⟩, i, j = 1, 2, ..., n

By the definition of T*, we then have


$b_{ij} = \langle T^*(u_j), u_i\rangle = \overline{\langle u_i, T^*(u_j)\rangle} = \overline{\langle T(u_i), u_j\rangle} = \overline{a_{ji}}$

Hence B = [T*]_B = A*.

Example 3.2.1 Find the adjoint of the linear operator T : ℝ³ → ℝ³ defined by

T(x, y, z) = (2x + 3y + z, 6x − y + 5z, 4x + 7y + 9z)

Solution. Consider the standard basis B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} of ℝ³, which is orthonormal.
Then,

T(1, 0, 0) = (2, 6, 4) = 2(1, 0, 0) + 6(0, 1, 0) + 4(0, 0, 1)

T(0, 1, 0) = (3, −1, 7) = 3(1, 0, 0) − 1(0, 1, 0) + 7(0, 0, 1)

T(0, 0, 1) = (1, 5, 9) = 1(1, 0, 0) + 5(0, 1, 0) + 9(0, 0, 1)

Hence
$[T]_B = \begin{pmatrix} 2 & 3 & 1 \\ 6 & -1 & 5 \\ 4 & 7 & 9 \end{pmatrix}$
Let us denote this matrix by A. Then by Theorem 3.2.2, we have
$[T^*]_B = A^* = \begin{pmatrix} 2 & 6 & 4 \\ 3 & -1 & 7 \\ 1 & 5 & 9 \end{pmatrix} = A^t$

If (x, y, z) ∈ ℝ³, then

(x, y, z) = x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1)

T*(x, y, z) = b₁(1, 0, 0) + b₂(0, 1, 0) + b₃(0, 0, 1), where

$\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = A^*\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2x + 6y + 4z \\ 3x - y + 7z \\ x + 5y + 9z \end{pmatrix}$

Hence T*(x, y, z) = (2x + 6y + 4z, 3x − y + 7z, x + 5y + 9z)

Example 3.2.2 Find the adjoint of the linear operator T : ℂ² → ℂ² defined by

T(z₁, z₂) = (z₁ + (1 + i)z₂, 2iz₁ + (3 − 4i)z₂)

Solution. Using the standard basis B = {(1, 0), (0, 1)} of ℂ²,

$[T]_B = \begin{pmatrix} 1 & 1+i \\ 2i & 3-4i \end{pmatrix}$. If we denote this matrix by A, then $[T^*]_B = A^* = \begin{pmatrix} 1 & -2i \\ 1-i & 3+4i \end{pmatrix}$. Using A*

we easily get that T*(z₁, z₂) = (z₁ − 2iz₂, (1 − i)z₁ + (3 + 4i)z₂)
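
A quick numerical sanity check of the defining identity ⟨T(v), u⟩ = ⟨v, T*(u)⟩ for this example (ours, not part of the text; it assumes numpy and the standard inner product on ℂ²):

```python
import numpy as np

# Matrix of T from Example 3.2.2 in the standard (orthonormal) basis of C^2.
A = np.array([[1, 1 + 1j],
              [2j, 3 - 4j]])
A_star = A.conj().T          # matrix of T*: the conjugate transpose (Theorem 3.2.2)

rng = np.random.default_rng(1)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Standard inner product on C^2: <a, b> = sum a_k conj(b_k) = np.vdot(b, a).
ip = lambda a, b: np.vdot(b, a)

# Defining property of the adjoint: <T v, u> = <v, T* u>.
assert np.isclose(ip(A @ v, u), ip(v, A_star @ u))
```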

We hope that these examples enhance the reader’s understanding of the adjoint of a linear operator. We
see that the adjoint operation, passing from T to T * , behaves somewhat like conjugation on complex
numbers. The following theorem strengthens the analogy.

Theorem 3.2.3 Let V be a finite-dimensional inner product space. If T and U are linear operators on
V and c is a scalar, then

(i) (T*)* = T
(ii) (cT)* = c̄T*
(iii) (T + U)* = T* + U*
(iv) (TU)* = U*T*
(v) I* = I

Proof.
Let v, u ∈ V. Here we use the fact that the adjoint of a linear operator on V exists and is unique.
(i) $\langle T^*(v), u\rangle = \overline{\langle u, T^*(v)\rangle} = \overline{\langle T(u), v\rangle} = \langle v, T(u)\rangle$
Hence T is the adjoint of T*; i.e., (T*)* = T.
(ii) $\langle (cT)(v), u\rangle = c\langle T(v), u\rangle = c\langle v, T^*(u)\rangle = \langle v, \overline{c}\,T^*(u)\rangle$
Hence (cT)* = c̄T*.
(iii) ⟨(T + U)(v), u⟩ = ⟨T(v) + U(v), u⟩
= ⟨T(v), u⟩ + ⟨U(v), u⟩
= ⟨v, T*(u)⟩ + ⟨v, U*(u)⟩
= ⟨v, T*(u) + U*(u)⟩
= ⟨v, (T* + U*)(u)⟩
Therefore (T + U)* = T* + U*.
(iv) ⟨(TU)(v), u⟩ = ⟨U(v), T*(u)⟩ = ⟨v, U*T*(u)⟩
Hence (TU)* = U*T*.
(v) ⟨I(v), u⟩ = ⟨v, I*(u)⟩. On the other hand ⟨I(v), u⟩ = ⟨v, u⟩. Therefore I*(u) = u. Since u is
an arbitrary element of V, I* = I.

Definition 3.2.2 Let V be a vector space over a field F. Let T be a linear operator on V. Let W be a
subspace of V. We say W is invariant under T (or T-invariant) if for each w ∈ W the vector
T(w) ∈ W, i.e., T(W) ⊆ W.

Example 3.2.3 Let W = {(x, 0) : x ∈ ℝ}, which is a subspace of ℝ². Let T : ℝ² → ℝ² be the linear
operator defined by

T(x, y) = (2x + 3y, 4y)

Then for any w = (x, 0) ∈ W, T(x, 0) = (2x, 0) ∈ W. Hence W is invariant under T.

Theorem 3.2.4 Let V be a finite-dimensional inner product space and let T be a linear operator on
V. Suppose W is a subspace of V which is invariant under T. Then W⊥ is invariant under T*.

Proof.
Let w′ ∈ W⊥ and w ∈ W. Then T(w) ∈ W because W is invariant under T. Thus,
⟨w, T*(w′)⟩ = ⟨T(w), w′⟩ = 0
since T(w) ∈ W and w′ ∈ W⊥. This implies that w ⊥ T*(w′). Hence T*(w′) ∈ W⊥. Therefore W⊥ is invariant under T*.


Exercises

1. For each of the following linear operators T on an inner product space V, find the adjoint of T.

a. V = ℝ² with the standard inner product, T(x, y) = (2x + 5y, 4x + 3y)

b. V = ℝ³ with the standard inner product, T(x, y, z) = (x + 2y + 3z, −x + z, x + 4y + 2z)

c. V = ℂ² with the standard inner product, T(z₁, z₂) = (2z₁ + iz₂, (1 + i)z₁)

d. V = P₂(ℝ) with ⟨p, q⟩ = ∫₋₁¹ p(t)q(t) dt, T(p(t)) = p′(t)

e. V = M₂ₓ₂(ℝ) with ⟨A, B⟩ = tr(ABᵗ), T(A) = A + Aᵗ

2. Let V be ℂ³ with the standard inner product. Let T be the linear operator on V whose matrix in the
standard ordered basis is given by A = [aₗₖ], where aₗₖ = i^(l+k), i² = −1. Find T and T*.

3. Let V = P₃(ℝ) and define an inner product on V by

⟨f, g⟩ = ∫₀¹ f(x)g(x) dx

For any t ∈ ℝ, find a polynomial hₜ ∈ V such that ⟨hₜ, f⟩ = f(t) for all f ∈ V.

4. Let V be a finite-dimensional inner product space and T a linear operator on V. Show that

Im T* = (Ker T)⊥

5. Let V be a finite-dimensional inner product space and T a linear operator on V. If T is invertible,
show that T* is invertible and (T*)⁻¹ = (T⁻¹)*.

6. Let V be the space of 2 × 2 matrices over the complex numbers, with the inner product ⟨A, B⟩ = tr(AB*).
Let P be a fixed invertible matrix in V, and let T_P be the linear operator on V defined by T_P(A) = P⁻¹AP.
Find the adjoint of T_P.

3.3 Normal Operators



Recall that in Chapter 1 we discussed the problem of whether a linear operator was
diagonalizable, i.e. whether it had a basis of eigenvectors. In this section, we will see the
condition under which V has an orthonormal basis consisting of eigenvectors of T .

Definition 3.3.1 Let V be a finite-dimensional inner product space and T a linear operator on V. We
say T is normal if TT* = T*T.

Example 3.3.1 Let T : ℝ² → ℝ² be the linear operator defined by T(x, y) = (y, −x). Then
T* : ℝ² → ℝ² can be computed to be the linear operator T*(x, y) = (−y, x), and so

TT*(x, y) = T(−y, x) = (x, y)

and

T*T(x, y) = T*(y, −x) = (x, y)

Thus TT*(x, y) and T*T(x, y) agree for all (x, y) ∈ ℝ², which implies that TT* = T*T. Hence T is
normal.

Example 3.3.2 Let T : ℝ² → ℝ² be the linear operator defined by T(x, y) = (0, x). Then
T*(x, y) = (y, 0), and so

TT*(x, y) = T(y, 0) = (0, y)

and

T*T(x, y) = T*(0, x) = (x, 0)

So in general TT*(x, y) and T*T(x, y) are not equal, and hence TT* ≠ T*T. Thus T is not normal.

In analogy with the above definition, we say that an n × n matrix A is normal if AA* = A*A.

For instance, the matrix $A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$ can easily be checked to be normal, while the matrix $B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$ is not.

Theorem 3.3.1 Let V be a finite-dimensional inner product space, T : V → V a linear operator and
B an orthonormal basis for V. Then T is normal if and only if A = [T]_B is normal.

Proof.

If T is normal then TT* = T*T. Taking matrices with respect to B, we get
[T]_B [T*]_B = [T*]_B [T]_B
By Theorem 3.2.2, [T*]_B = A*. Hence AA* = A*A. Therefore A is normal. The converse of the
theorem follows by reversing the above steps.

Example 3.3.3 Let T : ℝ³ → ℝ³ be the linear operator defined by T(x, y, z) = (x + y, y + z, x + z).
Determine whether T is normal or not.

Solution. Consider the standard basis B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} of ℝ³, which is orthonormal.
Set

$A = [T]_B = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}$ so that $A^* = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$.

$AA^* = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$

$A^*A = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$

Since AA* = A*A, A is normal. Thus, by Theorem 3.3.1, T is normal.

Theorem 3.3.2 Let V be a finite-dimensional inner product space and T a normal linear operator
on V. Then,
(i) ‖T(v)‖ = ‖T*(v)‖ for all v ∈ V.
(ii) T − cI is normal for every scalar c.
(iii) If λ is an eigenvalue of T with corresponding eigenvector v, then λ̄ is an eigenvalue of
T* corresponding to the same eigenvector v.
(iv) If λ₁ and λ₂ are distinct eigenvalues of T with corresponding eigenvectors v₁ and v₂, then v₁
and v₂ are orthogonal.

Proof.
(i) For any v ∈ V, we have
‖T(v)‖² = ⟨T(v), T(v)⟩ = ⟨T*T(v), v⟩ = ⟨TT*(v), v⟩ = ⟨T*(v), T*(v)⟩ = ‖T*(v)‖²
Hence ‖T(v)‖ = ‖T*(v)‖.
(ii) (T − cI)(T − cI)* = (T − cI)(T* − c̄I) = TT* − c̄T − cT* + cc̄I
(T − cI)*(T − cI) = (T* − c̄I)(T − cI) = T*T − cT* − c̄T + cc̄I

Since T is normal, it follows that (T − cI)(T − cI)* = (T − cI)*(T − cI).

Hence T − cI is normal.
(iii) Let λ be an eigenvalue of T with corresponding eigenvector v. Then,
T(v) = λv, i.e. (T − λI)v = 0
This implies that ‖(T − λI)v‖ = 0. From (i) and (ii) we have ‖(T − λI)*v‖ = 0, i.e. ‖(T* − λ̄I)v‖ = 0.

This implies that (T* − λ̄I)v = 0, i.e. T*(v) = λ̄v. Hence λ̄ is an eigenvalue of T* corresponding
to the eigenvector v.
(iv) Let λ₁ and λ₂ be distinct eigenvalues of T with corresponding eigenvectors v₁ and v₂. Then
T(v₁) = λ₁v₁ and T(v₂) = λ₂v₂. By (iii), we have T*(v₁) = λ̄₁v₁ and T*(v₂) = λ̄₂v₂. Now,
λ₁⟨v₁, v₂⟩ = ⟨λ₁v₁, v₂⟩ = ⟨T(v₁), v₂⟩ = ⟨v₁, T*(v₂)⟩ = ⟨v₁, λ̄₂v₂⟩ = λ₂⟨v₁, v₂⟩
This implies that (λ₁ − λ₂)⟨v₁, v₂⟩ = 0. Since λ₁ ≠ λ₂, we must have ⟨v₁, v₂⟩ = 0 and so v₁ and v₂
are orthogonal.

Theorem 3.3.3 (Spectral Theorem for Normal Operators)


Let V be a finite-dimensional complex inner product space and T be a normal linear operator on V .
Then V has an orthonormal basis consisting entirely of eigenvectors of T .

Proof.
Let the dimension of V be n. We shall prove this theorem by induction on n.
(i) First consider the base case n = 1. Pick any orthonormal basis B of V (which in this
case is just a single unit vector); the single vector in this basis is automatically an
eigenvector of T (because in a one-dimensional space every vector is a scalar multiple of this
vector). So the theorem is true when n = 1.
(ii) Now suppose inductively that n > 1 and the theorem has already been proven for dimension n − 1.
Let c_T(λ) be the characteristic polynomial of T. From the Fundamental Theorem of Algebra we
know that c_T(λ) splits over the complex numbers. Hence this polynomial has at least one root, so
T has at least one (complex) eigenvalue, and hence at least one eigenvector. So
now let us pick an eigenvector v₁ of T with eigenvalue λ₁. Thus T(v₁) = λ₁v₁ and, by Theorem 3.3.2, T*(v₁) = λ̄₁v₁.
Let u₁ = v₁/‖v₁‖. Then u₁ is a unit eigenvector of T with eigenvalue λ₁, i.e. T(u₁) = λ₁u₁
(remember that a non-zero scalar multiple of an eigenvector is still an eigenvector,
so it is safe to normalize eigenvectors). Let W = {cu₁ : c ∈ ℂ} denote the span of this eigenvector. Let
v ∈ W. Then v = cu₁ for some c ∈ ℂ, and T(v) = T(cu₁) = cT(u₁) = cλ₁u₁. This implies that
T(v) ∈ W and hence W is T-invariant. Let W⊥ = {v ∈ V : v ⊥ u₁} denote the orthogonal
complement of W; this is an (n − 1)-dimensional space. Now let us see what T and T* do to W⊥.
Let w be any vector in W⊥. Thus w ⊥ u₁, i.e. ⟨w, u₁⟩ = 0. Then
⟨T(w), u₁⟩ = ⟨w, T*(u₁)⟩ = ⟨w, λ̄₁u₁⟩ = λ₁⟨w, u₁⟩ = 0. Hence T(w) ∈ W⊥.
Since W is T-invariant, W⊥ is T*-invariant by Theorem 3.2.4, so T*(w) ∈ W⊥.

Thus if w ∈ W⊥, then T(w) and T*(w) are also in W⊥. Thus T and T* are not only linear
operators from V to V, they are also linear operators from W⊥ to W⊥. Also, we have
⟨T(w), w′⟩ = ⟨w, T*(w′)⟩
for all w, w′ ∈ W⊥, because every vector in W⊥ is a vector in V and we already have this property for
vectors in V. Thus T and T* are still adjoints of each other even after we restrict the vector space
from the n-dimensional space V to the (n − 1)-dimensional space W⊥; in particular, the restriction of T to W⊥ is still normal.

(iii) We now apply the induction hypothesis, and find that W⊥ has an orthonormal basis of eigenvectors
of T. There are n − 1 such eigenvectors since W⊥ is (n − 1)-dimensional. Now u₁ is a unit vector and is
orthogonal to all vectors in this basis, since u₁ lies in W and all the other vectors lie in W⊥. Thus if
we add u₁ to this basis we get a new collection of n orthonormal vectors, which form a basis of V. Each
of these vectors is an eigenvector of T. Since T has n linearly independent eigenvectors, T is
diagonalizable.

Example 3.3.4 The linear operator T : ℝ² → ℝ² defined by T(x, y) = (y, −x) that we have
discussed earlier is normal, but not diagonalizable (its characteristic polynomial is λ² + 1, which
does not split over the reals). This does not contradict Theorem 3.3.3, because that theorem only concerns complex
inner product spaces. If however we consider the operator T : ℂ² → ℂ² defined by T(z, w) = (w, −z),
then we can find an orthonormal basis of eigenvectors, namely

B = { u₁ = (1/√2)(1, i), u₂ = (1/√2)(1, −i) }

With respect to this orthonormal basis, T is represented by the diagonal matrix $\begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}$.
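
As a numerical companion to this example (ours, not part of the text; it assumes numpy), the sketch below conjugates the matrix of T(z, w) = (w, −z) by the unitary matrix whose columns are u₁, u₂ and recovers the diagonal matrix diag(i, −i).

```python
import numpy as np

# Matrix of T(z, w) = (w, -z) in the standard basis of C^2.
A = np.array([[0, 1],
              [-1, 0]], dtype=complex)

# Orthonormal eigenbasis from Example 3.3.4.
u1 = np.array([1, 1j]) / np.sqrt(2)      # eigenvalue  i
u2 = np.array([1, -1j]) / np.sqrt(2)     # eigenvalue -i

U = np.column_stack([u1, u2])            # unitary change-of-basis matrix
D = U.conj().T @ A @ U                   # should be diag(i, -i)
print(np.round(D, 10))

# U is unitary, so the eigenvectors are orthonormal.
assert np.allclose(U.conj().T @ U, np.eye(2))
```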


Exercises

1. For each of the linear operators below, determine whether it is normal or not.

a. T : ℝ² → ℝ² defined by T(x, y) = (y, −x + y), with the standard inner product

b. T : ℂ² → ℂ² defined by T(z₁, z₂) = (2z₁ + iz₂, z₁ + 2z₂), with the standard inner product

c. T : P₂(ℝ) → P₂(ℝ) defined by T(f) = f′, with ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx

2. For each of the following linear operators T on V, show that T is normal and find an orthonormal
basis for V which consists of eigenvectors of T.

a. V = ℝ³ with the standard inner product, T(x, y, z) = (y + z, x + z, x + y)

b. V = ℂ² with the standard inner product, T(z₁, z₂) = (z₁ + iz₂, iz₁ + z₂)

c. V = ℂ³ with the standard inner product, T(z₁, z₂, z₃) = (z₁, −z₃, z₂)

3. Let V be a finite-dimensional inner product space and T a normal linear operator on V. If U
is a subspace of V that is invariant under T, prove that
a. U⊥ is invariant under T
b. U is invariant under T*

3.4 Self-adjoint Operators

In the previous section we saw that, in the world of complex inner product spaces, normal linear
operators are the best kind of linear operators: they are not only diagonalizable, but they are diagonalizable
using the best kind of basis, namely an orthonormal basis. However, there is a subclass of normal operators
which is even better: the self-adjoint linear operators.

Definition 3.4.1 A linear operator T on a finite-dimensional inner product space V is said to be self-
adjoint if T* = T, i.e. T is its own adjoint. A square matrix A is said to be self-adjoint if A* = A.

Example 3.4.1 The linear operator T : ℝ² → ℝ² defined by T(x, y) = (y, x) is self-adjoint because
its adjoint is given by T*(x, y) = (y, x) (verify!), and this is the same as T.

Example 3.4.2 Consider the inner product space M₂ₓ₂(ℝ) with the standard inner product. The linear
operator T : M₂ₓ₂(ℝ) → M₂ₓ₂(ℝ) defined by T(A) = Aᵗ is self-adjoint because its adjoint is given by
T*(A) = Aᵗ (verify!), and this is the same as T.

Example 3.4.3 The linear operator T : ℝ² → ℝ² defined by T(x, y) = (y, −x) is not self-adjoint
because its adjoint is given by T*(x, y) = (−y, x) (verify!), and this is not the same as T.

A self-adjoint linear operator on a complex inner product space is sometimes known as a Hermitian
linear operator. A self-adjoint linear operator on a real inner product space is known as a symmetric
linear operator. Similarly, a complex self-adjoint matrix is known as a Hermitian matrix, while a real
self-adjoint matrix is known as a symmetric matrix.

A matrix is symmetric if Aᵗ = A. When the matrix is real, the transpose Aᵗ is the same as its adjoint,
so self-adjoint and symmetric mean the same thing for real matrices, but not for complex matrices.
Every real symmetric matrix is automatically Hermitian because every real matrix is also a complex
matrix.
0 1 
Example 3.4.5 The matrix A    is symmetric (self-adjoint) since At  A , the
1 0
 1 i i 1
matrix B    is Hermitian ( self-adjoint) because B*  B , and the matrix C    is
 i 1 1 i 
symmetric but not self-adjoint (not Hermitian).
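
A short check of the three matrices in Example 3.4.4 (ours; it assumes numpy): symmetry compares M with Mᵗ, self-adjointness compares M with its conjugate transpose.

```python
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[1, 1j], [-1j, 1]])
C = np.array([[1j, 1], [1, 1j]])

for name, M in [("A", A), ("B", B), ("C", C)]:
    symmetric = np.allclose(M, M.T)           # M^t = M
    hermitian = np.allclose(M, M.conj().T)    # M* = M (self-adjoint)
    print(name, "symmetric:", symmetric, "Hermitian:", hermitian)
# A: symmetric True,  Hermitian True
# B: symmetric False, Hermitian True
# C: symmetric True,  Hermitian False
```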

It is clear that all self-adjoint linear operators are normal, since if T* = T, then TT* and T*T are both
equal to T² and are hence equal to each other. Similarly, every self-adjoint matrix is normal. However,
not every normal linear operator is self-adjoint, and not every normal matrix is self-adjoint.

Example 3.4.5 The linear operator T : ℝ² → ℝ² defined by T(x, y) = (y, −x) is not self-adjoint
because its adjoint is given by T*(x, y) = (−y, x), which is not the same as T. The matrix
$A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}$ is normal since $AA^t = A^tA = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$, but it is not self-adjoint since Aᵗ ≠ A.

Theorem 3.4.1 Let T be a linear operator on a finite-dimensional inner product space V. Then T is self-adjoint if
and only if its matrix with respect to some orthonormal basis of V is self-adjoint.

The proof of this theorem is analogous to that of Theorem 3.3.1

Example 3.4.6 The linear operator T : ℝ³ → ℝ³ defined by

T(x, y, z) = (x + y + 3z, x + 5y + z, 3x + y + z)

is self-adjoint, since its matrix with respect to the standard basis of ℝ³ is $A = \begin{pmatrix} 1 & 1 & 3 \\ 1 & 5 & 1 \\ 3 & 1 & 1 \end{pmatrix}$, which is
self-adjoint (symmetric).

Theorem 3.4.2 Let V be a finite-dimensional inner product space and T be a self-adjoint linear
operator on V . Then
(i) each eigenvalue of T is real.
(ii) eigenvectors of T associated with distinct eigenvalues are orthogonal.


Proof.
(i) Let λ be an eigenvalue of T with corresponding eigenvector v. Thus v is non-zero and T(v) = λv.
Then,
λ⟨v, v⟩ = ⟨λv, v⟩ = ⟨T(v), v⟩ = ⟨v, T*(v)⟩ = ⟨v, T(v)⟩ = ⟨v, λv⟩ = λ̄⟨v, v⟩
This implies that (λ − λ̄)⟨v, v⟩ = 0. Since v is non-zero, we must have λ = λ̄. Thus λ is real.
(ii) Let λ₁ and λ₂ be distinct eigenvalues of T corresponding to eigenvectors v₁ and v₂. Then
λ₁⟨v₁, v₂⟩ = ⟨λ₁v₁, v₂⟩ = ⟨T(v₁), v₂⟩ = ⟨v₁, T*(v₂)⟩ = ⟨v₁, T(v₂)⟩ = ⟨v₁, λ₂v₂⟩ = λ̄₂⟨v₁, v₂⟩ = λ₂⟨v₁, v₂⟩
(the last equality holds since λ₂ is real by (i)).
This implies that (λ₁ − λ₂)⟨v₁, v₂⟩ = 0. Since λ₁ ≠ λ₂, we must have ⟨v₁, v₂⟩ = 0 and hence v₁
and v₂ are orthogonal.

Example 3.4.7 Let T : ℂ² → ℂ² be the linear operator defined by T(z, w) = (z + iw, −iz + w). T is a
self-adjoint (Hermitian) linear operator. Its distinct eigenvalues λ₁ = 0 and λ₂ = 2 are real, and
{(−i, 1)} and {(i, 1)} are bases for the eigenspaces of T corresponding to the eigenvalues λ₁ = 0
and λ₂ = 2 respectively. Moreover, ⟨(−i, 1), (i, 1)⟩ = (−i)(ī) + (1)(1̄) = (−i)(−i) + 1 = −1 + 1 = 0. Hence eigenvectors
corresponding to the distinct eigenvalues λ₁ = 0 and λ₂ = 2 are orthogonal.
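
The following sketch (ours, assuming numpy) feeds the matrix of this operator to numpy.linalg.eigh, which is designed for Hermitian matrices; it confirms the real eigenvalues 0 and 2 and the orthonormality of the eigenvectors, as Theorem 3.4.2 predicts.

```python
import numpy as np

# Matrix of T(z, w) = (z + i w, -i z + w) in the standard basis of C^2.
A = np.array([[1, 1j],
              [-1j, 1]])

# eigh returns real eigenvalues in ascending order and orthonormal
# eigenvectors as the columns of the second output.
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)                                     # [0., 2.]

# The eigenvector columns are orthonormal.
assert np.allclose(eigenvectors.conj().T @ eigenvectors, np.eye(2))
```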

Corollary 3.4.1 Let V be a finite-dimensional inner product space and T a self-adjoint linear
operator on V. Then the characteristic polynomial of T splits over the reals.

Proof.
We already know from the Fundamental Theorem of Algebra that the characteristic polynomial of T splits over
the complex numbers. But since T is a self-adjoint linear operator, by Theorem 3.4.2 every eigenvalue
of T, i.e. every root of the characteristic polynomial, must be real. Thus the polynomial must split over the
reals.

Theorem 3.4.3 (Spectral Theorem for Self-adjoint Operators)


Let T be a self-adjoint linear operator on a finite-dimensional inner product space V (which can be
either real or complex). Then there is an orthonormal basis of V consisting entirely of
eigenvectors of T, with real eigenvalues.

Proof. We repeat the proof of Theorem 3.3.3, i.e. we do an induction on the dimension n of the vector space V.
When n = 1 the claim is again trivial, and we use Theorem 3.4.2 to make sure that the eigenvalue is real.
Now suppose inductively that n > 1 and that the claim has already been proven for n − 1. From Corollary
3.4.1 we know that T has at least one real eigenvalue. Thus we can find a real λ₁ and an eigenvector v₁
such that T(v₁) = λ₁v₁. We can normalize v₁ to have unit length. We now repeat the proof of Theorem
3.3.3 for normal operators to obtain the same conclusion, except that the eigenvalues are now real.


Theorem 3.4.4 (The Spectral Theorem)

Let T be a linear operator on a finite-dimensional inner product space V over F. Assume that T is normal if
F = ℂ and self-adjoint if F = ℝ. Let λ₁, λ₂, ..., λₖ be the distinct eigenvalues of T with corresponding
eigenspaces W₁, W₂, ..., Wₖ respectively. Let T₁, T₂, ..., Tₖ be the orthogonal projections of V
onto W₁, W₂, ..., Wₖ respectively. Then,
(1) Wᵢ ⊥ Wⱼ when i ≠ j.
(2) V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ Wₖ
(3) T = λ₁T₁ + λ₂T₂ + ⋯ + λₖTₖ

Proof.
(1) Let w ∈ Wᵢ and w′ ∈ Wⱼ with i ≠ j. Then there are distinct eigenvalues λᵢ, λⱼ of T such that
T(w) = λᵢw and T(w′) = λⱼw′. Now,
λᵢ⟨w, w′⟩ = ⟨λᵢw, w′⟩ = ⟨T(w), w′⟩ = ⟨w, T*(w′)⟩ = ⟨w, λ̄ⱼw′⟩ = λⱼ⟨w, w′⟩
so (λᵢ − λⱼ)⟨w, w′⟩ = 0. Since λᵢ ≠ λⱼ, we must have ⟨w, w′⟩ = 0, i.e. w ⊥ w′. Since
w and w′ are arbitrary elements of Wᵢ and Wⱼ respectively, we conclude that Wᵢ ⊥ Wⱼ for i ≠ j.
(2) Let v ∈ V. By Theorem 3.3.3 (if F = ℂ) and Theorem 3.4.3 (if F = ℝ), V has an orthonormal basis
B = {u₁, u₂, ..., uₙ} each of whose elements is an eigenvector of T. Hence
v = c₁u₁ + c₂u₂ + ⋯ + cₙuₙ for some scalars c₁, c₂, ..., cₙ in F. Each uᵢ belongs to exactly one of
the eigenspaces W₁, W₂, ..., Wₖ, so grouping the terms of this sum by eigenspace gives
v = w₁ + w₂ + ⋯ + wₖ for some wᵢ ∈ Wᵢ, i = 1, 2, ..., k. Thus V = W₁ + W₂ + ⋯ + Wₖ.
Moreover, the sum is direct: if w₁ + w₂ + ⋯ + wₖ = 0 with wᵢ ∈ Wᵢ, then taking the inner product with
wⱼ and using (1) gives ⟨wⱼ, wⱼ⟩ = 0, so wⱼ = 0 for each j. Therefore V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ Wₖ.
(3) Let v ∈ V. Since V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ Wₖ, v can be expressed uniquely as
v = w₁ + w₂ + ⋯ + wₖ for some wᵢ ∈ Wᵢ, i = 1, 2, ..., k.
Since T₁, T₂, ..., Tₖ are the orthogonal projections of V onto W₁, W₂, ..., Wₖ respectively,
T₁(v) = w₁, T₂(v) = w₂, ..., Tₖ(v) = wₖ. Thus,
T(v) = T(w₁ + w₂ + ⋯ + wₖ)
= T(w₁) + T(w₂) + ⋯ + T(wₖ)
= λ₁w₁ + λ₂w₂ + ⋯ + λₖwₖ
= λ₁T₁(v) + λ₂T₂(v) + ⋯ + λₖTₖ(v)
= (λ₁T₁ + λ₂T₂ + ⋯ + λₖTₖ)(v)
Hence T = λ₁T₁ + λ₂T₂ + ⋯ + λₖTₖ

In Theorem 3.4.4 the set {λ₁, λ₂, ..., λₖ} of eigenvalues of T is called the spectrum of T and the
sum T = λ₁T₁ + λ₂T₂ + ⋯ + λₖTₖ is called the spectral decomposition of T.

Example 3.4.8 Let T : ℝ³ → ℝ³ be the linear operator defined by

T(x, y, z) = (2x + y, x + 3y + z, y + 2z). Then find

(a) the eigenspaces of T;

(b) the orthogonal projection of ℝ³ onto each eigenspace;

(c) the spectral decomposition of T.

Solution. Let us take the standard basis B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} of ℝ³, which is orthonormal.
Here $[T]_B = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix}$. Since [T]_B is symmetric, T is a symmetric (self-adjoint) linear operator.

(a) Using [T]_B one can compute the eigenvalues of T to be λ₁ = 1, λ₂ = 2 and λ₃ = 4. If W₁, W₂
and W₃ are the eigenspaces of T corresponding to the eigenvalues λ₁, λ₂ and λ₃ respectively, then
we get
W₁ = {t(1, −1, 1) : t ∈ ℝ}, W₂ = {t(1, 0, −1) : t ∈ ℝ} and W₃ = {t(1, 2, 1) : t ∈ ℝ}.
(b) Recall that if B = {u₁, u₂, ..., uₖ} is an orthonormal basis of a subspace W of V, then the
orthogonal projection operator P of V onto W has the formula
P(v) = ⟨v, u₁⟩u₁ + ⟨v, u₂⟩u₂ + ⋯ + ⟨v, uₖ⟩uₖ
In our case, B₁ = {(1, −1, 1)}, B₂ = {(1, 0, −1)} and B₃ = {(1, 2, 1)} are bases of the eigenspaces
W₁, W₂ and W₃ respectively. Applying the Gram-Schmidt orthogonalization process (here just normalization), we obtain
orthonormal bases B₁′ = {(1/√3)(1, −1, 1)}, B₂′ = {(1/√2)(1, 0, −1)} and B₃′ = {(1/√6)(1, 2, 1)}
of the eigenspaces W₁, W₂ and W₃ respectively.

The orthogonal projection T₁ of ℝ³ onto W₁ is

T₁(x, y, z) = ⟨(x, y, z), (1/√3)(1, −1, 1)⟩ (1/√3)(1, −1, 1) = ((x − y + z)/3, (−x + y − z)/3, (x − y + z)/3)

The orthogonal projection T₂ of ℝ³ onto W₂ is

T₂(x, y, z) = ⟨(x, y, z), (1/√2)(1, 0, −1)⟩ (1/√2)(1, 0, −1) = ((x − z)/2, 0, (z − x)/2)

The orthogonal projection T₃ of ℝ³ onto W₃ is

T₃(x, y, z) = ⟨(x, y, z), (1/√6)(1, 2, 1)⟩ (1/√6)(1, 2, 1) = ((x + 2y + z)/6, (x + 2y + z)/3, (x + 2y + z)/6)

(c) The spectral decomposition of T is

T = λ₁T₁ + λ₂T₂ + λ₃T₃ = T₁ + 2T₂ + 4T₃

where T₁, T₂, T₃ are as described above. One can easily check that T₁ + 2T₂ + 4T₃ gives T.
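
A compact numerical check of this spectral decomposition (ours, not part of the text; it assumes numpy, and projector is a helper we introduce):

```python
import numpy as np

# Matrix of T from Example 3.4.8 in the standard basis of R^3.
A = np.array([[2, 1, 0],
              [1, 3, 1],
              [0, 1, 2]], dtype=float)

def projector(v):
    # Orthogonal projection of R^3 onto the line spanned by v.
    v = np.asarray(v, dtype=float)
    return np.outer(v, v) / (v @ v)

T1 = projector([1, -1, 1])    # eigenspace of lambda_1 = 1
T2 = projector([1, 0, -1])    # eigenspace of lambda_2 = 2
T3 = projector([1, 2, 1])     # eigenspace of lambda_3 = 4

# Spectral decomposition T = 1*T1 + 2*T2 + 4*T3, and the projections
# resolve the identity: T1 + T2 + T3 = I.
assert np.allclose(A, 1 * T1 + 2 * T2 + 4 * T3)
assert np.allclose(T1 + T2 + T3, np.eye(3))
```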


Exercises

1. For each of the following linear operators, determine whether or not it is self-adjoint. Take the
standard inner product on each vector space.

a. T : ℝ² → ℝ² defined by T(x, y) = (3x − y, −x + 4y)

b. T : ℂ² → ℂ² defined by T(z₁, z₂) = (z₁ + iz₂, z₁ + z₂)

c. T : ℝ³ → ℝ³ defined by T(x, y, z) = (x + y + z, 2x, x − z)

d. T : ℝ³ → ℝ³ defined by T(x, y, z) = (6x + 2y + z, 2x + 4z, x + 4y + 5z)

e. T : M₂ₓ₂(ℝ) → M₂ₓ₂(ℝ) defined by T(A) = A + Aᵗ, with ⟨A, B⟩ = tr(ABᵗ)

2. Let V be a complex inner product space and T a linear operator on V. Prove that T is self-adjoint if and only if ⟨T(v), v⟩ is real for
all v ∈ V.

3. Let V be an inner product space and T a self-adjoint operator on V such that ⟨T(v), v⟩ = 0 for all
v ∈ V. Prove that T = 0.

4. Let T and U be self-adjoint linear operators on an inner product space V. Prove that TU is self-adjoint
if and only if TU = UT.

5. For each of the following linear operators T on V, show that T is normal and find an orthonormal
basis for V which consists of eigenvectors of T.

a. V = ℝ³, T(x, y, z) = (3x + 2y, 2x + 3y, 5z), with the standard inner product

b. V = P₁(ℝ), T(a₀ + a₁x) = (3a₀ + 4a₁) + (4a₀ + 3a₁)x, with ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx

6. Verify the Spectral Theorem for each of the following linear operators on the inner product space V.
Take the standard inner product.

a. V = ℝ², T(x, y) = (x + 2y, 2x + y)

b. V = ℝ³, T(x, y, z) = (x + y + z, x + y + z, x + y + z)

c. V = ℝ³, T(x, y, z) = (14x + 13y + 8z, 13x + 14y + 8z, 8x + 8y + 7z)

3.5 Isometries

In this section we will continue our analogy between complex numbers and linear operators. Recall that
the adjoint of a linear operator acts similarly to the conjugate of a complex number (see Theorem 3.2.3).
A complex number z has length 1 if zz̄ = 1. We will study those linear operators T on an inner product space
V such that TT* = T*T = I, which means T* = T⁻¹. This condition guarantees, in fact, that
T preserves both length and the inner product.

Theorem 3.5.1 Let V be a finite dimensional inner product space. Let T be a linear operator on V .
Then the following statements are equivalent.

(i) T* = T⁻¹

(ii) T preserves the inner product, i.e., ⟨T(v₁), T(v₂)⟩ = ⟨v₁, v₂⟩ for all v₁, v₂ ∈ V

(iii) T preserves length, i.e., ‖T(v)‖ = ‖v‖ for all v ∈ V

Proof.

(i) ⟹ (ii)

⟨T(v₁), T(v₂)⟩ = ⟨v₁, T*T(v₂)⟩ = ⟨v₁, T⁻¹T(v₂)⟩ = ⟨v₁, v₂⟩

(ii) ⟹ (iii)

‖T(v)‖² = ⟨T(v), T(v)⟩ = ⟨v, v⟩ = ‖v‖²

Hence ‖T(v)‖ = ‖v‖

(iii) ⟹ (i)

Ker T = {v ∈ V : T(v) = 0}

= {v ∈ V : ‖T(v)‖ = 0}

= {v ∈ V : ‖v‖ = 0}

= {v ∈ V : v = 0}

= {0}

Hence T is one to one. Since V is finite dimensional, T is invertible. Let v be any element of V.
Now,

⟨T*T(v), v⟩ = ⟨T(v), T(v)⟩ = ⟨v, v⟩ = ⟨I(v), v⟩

This implies that ⟨(T*T − I)(v), v⟩ = 0 for every v ∈ V. Since T*T − I is self-adjoint, this forces
(T*T − I)(v) = 0 for every v ∈ V, i.e. T*T = I. Since the inverse of T is unique, we have T* = T⁻¹.

Definition 3.5.1 Let V be a finite dimensional inner product space. Let T be a linear operator on V .
T is said to be an isometry if it satisfies any one ( and hence all) of the three equivalent conditions stated
in Theorem 3.5.1.

Definition 3.5.2 An isometry on a complex inner product space is called a unitary operator, and an
isometry on a real inner product space is called an orthogonal operator. Let A be an n × n matrix that
satisfies A* = A⁻¹. We call A a unitary matrix if A has complex entries, and an orthogonal matrix
if A has real entries.

Example 3.5.1 The linear operator T : ℝ² → ℝ² defined by T(x, y) = (y, x) is an isometry
because for every (x, y) ∈ ℝ²,

‖T(x, y)‖² = ⟨T(x, y), T(x, y)⟩ = ⟨(y, x), (y, x)⟩ = y² + x² = x² + y² = ‖(x, y)‖²


Example 3.5.2 Let V = P₃(ℝ) with inner product ⟨p, q⟩ = ∫₀¹ p(t)q(t) dt. Let T : V → V be the linear
operator defined by T(p(t)) = −p(t). Show that T is an isometry.

Solution.

‖T(p(t))‖² = ⟨T(p(t)), T(p(t))⟩ = ⟨−p(t), −p(t)⟩ = ∫₀¹ (−p(t))(−p(t)) dt = ∫₀¹ (p(t))² dt

‖p(t)‖² = ⟨p(t), p(t)⟩ = ∫₀¹ p(t)p(t) dt = ∫₀¹ (p(t))² dt

Hence ‖T(p(t))‖ = ‖p(t)‖. Thus T is an isometry.


0  1  0 1
Example 3.5.3 The matrix A    is orthogonal since At     A 1 , and the matrix
1 0    1 0
 1 i   1 i 
 2 
2  is unitary since A*   2 2   A 1
A
 i 1   i 1 
 2 2   2 2 

Theorem 3.5.2 Let V be a finite dimensional inner product space, T a linear operator on V,
B = {u₁, u₂, ..., uₙ} an orthonormal basis of V and A = [T]_B. Then T is an isometry if and only if
A* = A⁻¹.
Proof.

T is an isometry ⟺ T* = T⁻¹

⟺ [T*]_B = [T⁻¹]_B

⟺ A* = A⁻¹

since [T*]_B = A* by Theorem 3.2.2 and [T⁻¹]_B = ([T]_B)⁻¹ = A⁻¹.

Theorem 3.5.3 Let V be a finite dimensional inner product space and T an isometry on V. If B =
{u₁, u₂, ..., uₙ} is an orthonormal basis of V, then T(B) is an orthonormal basis of V.

Proof.

‖T(uᵢ)‖ = ‖uᵢ‖ = 1 for i = 1, 2, ..., n

⟨T(uᵢ), T(uⱼ)⟩ = ⟨uᵢ, uⱼ⟩ = δᵢⱼ

Thus T(B) is an orthonormal set. Since T is an isometry, and hence one to one, T(B) has n elements.
Since an orthonormal set is linearly independent and dim V = n, T(B) is an orthonormal basis of V.

Theorem 3.5.4 The eigenvalues of an isometry have modulus 1.

Proof.

Let T be an isometry and λ an eigenvalue of T corresponding to the eigenvector v. Then,

λλ̄⟨v, v⟩ = ⟨λv, λv⟩ = ⟨T(v), T(v)⟩ = ⟨v, v⟩. Since ⟨v, v⟩ ≠ 0, we get λλ̄ = 1. Hence |λ| = √(λλ̄) = 1.



Exercises

1. For each of the following linear operators, determine whether it is an isometry or not. Take the standard
inner product on each vector space.

a. V = ℝ², T(x, y) = ((3x + 4y)/5, (4x − 3y)/5)

b. V = ℝ², T(x, y) = (x + y, x − y)

c. V = ℝ³, T(x, y, z) = (−y, x, z)

d. V = ℂ³, T(z₁, z₂, z₃) = (z₃, (z₁ + z₂)/√2, (iz₁ − iz₂)/√2)

e. V = M₂ₓ₂(ℝ), T(A) = Aᵗ

2. If T is an isometry on the inner product space ℝ³ with the standard inner product and v = (2, 6, −3),
then find ‖T(v)‖.

3. Let θ be an angle such that 0 ≤ θ ≤ 2π. Consider the inner product space ℝ² with the standard inner product.
Show that the linear operator T : ℝ² → ℝ² defined by T(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ)
(anticlockwise rotation about the origin) is an isometry.

4. Show that the matrix
$A = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & -1 & 0 & 1 \\ 1 & 0 & -1 & -1 \\ 0 & 1 & -1 & 1 \end{pmatrix}$
is orthogonal and find its inverse.

5. If T is an isometry on an inner product space V and B is an orthonormal basis for V, show that T(B)
is an orthonormal basis for V.

6. If T is an isometry, prove that ‖T(v) − T(u)‖ = ‖v − u‖ for all v, u ∈ V.

7. Find an orthogonal operator T on ℝ³ such that T(1/√2, 0, 1/√2) = (0, 1, 0).

8. For z ∈ ℂ, define T_z : ℂ → ℂ by T_z(v) = zv for all v ∈ ℂ. Characterize the z for which T_z is normal,
self-adjoint, or an isometry.
