Dsmat31 (Em)
LESSON - 1
RINGS AND INTEGRAL DOMAINS
1.1 Objectives of the Lesson:
To learn the definitions of algebraic structures such as a ring, a field and an integral domain
and to study some examples of these structures and their basic properties.
1.2 Structure
1.3 Introduction
1.9 Summary
1.12 Exercises
1.3 Introduction:
In this lesson we define an algebraic structure called a ring. We also derive the concepts of
a field and an integral domain. We learn some basic properties of a ring. Using the basic
properties, we prove some theorems on rings and fields.
i) a, b ∈ R ⇒ a + b ∈ R
ii) (a + b) + c = a + (b + c), ∀ a, b, c ∈ R
vi) a, b ∈ R ⇒ a.b ∈ R
1.4.1 Note:
1) In a ring R the binary operation ‘+’ is called addition and ‘.’ is called multiplication.
5) Sometimes the ring R is denoted by (R, +, .). The additive inverse of an element ‘a’ is denoted
by ‘−a’.
1.4.2 Examples
1) Let R = {0} and let +, . be the operations defined by 0 + 0 = 0 and 0.0 = 0; then (R, +, .)
is a ring. This ring is called the zero ring or null ring.
2) Z is the set of all integers and + and . are usual addition and multiplication respectively.
Then (Z, +, .) is a ring.
3) Q is the set of all rational numbers and + and . are usual addition and multiplication
respectively. Then (Q, +, .) is a ring.
4) The set of all real numbers is a ring w.r.t usual addition and multiplication.
5) The set of all complex numbers C is a ring w.r.t usual addition and multiplication.
equivalence classes. Then Z_n = {[0], [1], [2], …, [n − 1]}. Define addition and multiplication by
[a] + [b] = [a + b] and [a].[b] = [ab]. Then Z_n together with these operations is a ring. Z_n is called the ring
of integers modulo n. It is customary to denote the elements of Z_n as 0, 1, …, n − 1 rather than
[0], [1], …, [n − 1]. This notation will be used whenever convenient. Addition and multiplication in Z_n are
sometimes written as +_n and ×_n respectively.
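The arithmetic of Z_n can be illustrated with a short Python sketch (an illustrative aid, not part of the lesson); for a small n the associative and distributive laws can even be checked exhaustively:

```python
# Illustrative sketch: the ring Z_n of residue classes, with operations
# that reduce the ordinary integer result modulo n.
def add_mod(a, b, n):
    return (a + b) % n

def mul_mod(a, b, n):
    return (a * b) % n

n = 6
Zn = range(n)
# Exhaustive check of associativity of + and the distributive law in Z_6.
assoc = all(add_mod(add_mod(a, b, n), c, n) == add_mod(a, add_mod(b, c, n), n)
            for a in Zn for b in Zn for c in Zn)
distrib = all(mul_mod(a, add_mod(b, c, n), n) ==
              add_mod(mul_mod(a, b, n), mul_mod(a, c, n), n)
              for a in Zn for b in Zn for c in Zn)
print(assoc, distrib)
```

Such a brute-force check is feasible only because Z_n is finite; it is a sanity check, not a proof.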
1.4.3 Ring with Unity: A ring (R, +, .) is said to be a ring with unity if there exists 1 ∈ R such that
a.1 = 1.a = a, ∀ a ∈ R.
1.4.4 The ring of integers (Z, +, .) is a ring with unity.
1.4.5 Note: If R is a ring with unity 1, then 1 is called unity or multiplicative identity or simply an
identity element.
1.4.6. Commutative Ring: A ring (R, +, .) is said to be a commutative ring if a.b = b.a, ∀ a, b ∈ R,
i.e. multiplication is commutative in R.
1.4.9 Example:
i. The set of all even integers is a commutative ring without unity under usual addition and
multiplication.
ii.
1.4.10 Note: If (R, +, .) is a ring then (R, +) is an abelian group. Therefore we have:
iii) For a ∈ R, −(−a) = a
iv) For 0 ∈ R, −0 = 0
Centre for Distance Education 1.4 Acharya Nagarjuna University
v) For a, b ∈ R, −(a + b) = −a − b
vi) For a, b, c ∈ R, a + b = a + c ⇒ b = c
Theorem: Let R be a ring. Then for all a, b, c ∈ R:
i) 0a = a0 = 0
ii) a(−b) = (−a)b = −(ab)
iii) (−a)(−b) = ab
iv) a(b − c) = ab − ac
Proof: i) 0a = (0 + 0)a = 0a + 0a by the right distributive law
⇒ 0a = 0.
Similarly a0 = a(0 + 0) = a0 + a0 by the left distributive law
⇒ a0 = 0.
∴ 0a = a0 = 0.
ii) To show that a(−b) = −(ab) = (−a)b:
a(−b) + ab = a(−b + b) = a0
= 0 by (i)
⇒ a(−b) = −(ab). Similarly (−a)b + ab = (−a + a)b = 0b
Rings and Linear Algebra 1.5 Rings and Integral Domains
= 0 by (i)
⇒ (−a)b = −(ab).
iii) (−a)(−b) = −(a(−b)) by (ii)
= −(−(ab))
= ab.
iv) a(b − c) = a(b + (−c)) = ab + a(−c)
= ab + (−(ac)) by (ii)
= ab − ac.
i) (−1)a = −a = a(−1)
1.4.14 Example :
1) In the ring of integers (Z, +, .), 0 and 1 are idempotent elements.
1.4.15 Boolean Ring: A ring R is said to be a Boolean ring if every element of R is an idempotent
element, i.e. a² = a, ∀ a ∈ R.
1.4.16 Theorem: If R is a Boolean ring then
i) a + a = 0, ∀ a ∈ R
ii) a + b = 0 ⇒ a = b
iii) R is a commutative ring.
Proof: i) (a + a)² = a + a since a² = a, ∀ a ∈ R
⇒ (a + a)(a + a) = a + a
⇒ (a² + a²) + (a² + a²) = a + a
⇒ (a + a) + (a + a) = a + a since a² = a, ∀ a ∈ R
⇒ a + a = 0.
ii) a + b = 0 = b + b by (i) ⇒ a + b = b + b ⇒ a = b.
iii) (a + b)² = a + b since R is a Boolean ring
⇒ (a + b)(a + b) = a + b
⇒ a + ba + ab + b = a + b since R is a Boolean ring
⇒ ba + ab = 0
⇒ ba = ab by (ii).
∴ R is a commutative ring.
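A standard concrete Boolean ring (not discussed in the lesson, offered here as an illustration) is the power set of a finite set, with symmetric difference as addition and intersection as multiplication. A quick Python check of idempotence, a + a = 0 and commutativity:

```python
from itertools import combinations

# Illustrative sketch: the power set of {0, 1, 2} as a Boolean ring,
# with '+' = symmetric difference (^) and '.' = intersection (&).
def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

P = powerset({0, 1, 2})
zero = frozenset()                                  # the empty set is the zero element
idempotent = all((a & a) == a for a in P)           # a.a = a
char_two = all((a ^ a) == zero for a in P)          # a + a = 0
commut = all((a & b) == (b & a) for a in P for b in P)
print(idempotent, char_two, commut)
```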
1.4.17 Definition : Nilpotent Element : Let R be a ring. An element a R is said to be
nilpotent element if there exists a positive integer ‘n’ such that aⁿ = 0.
1.5.1 Note: 1) A ring R is said to have zero divisors if there exist a, b ∈ R, a ≠ 0, b ≠ 0 but ab = 0.
1.5.2 Examples: i) The ring (Z₆, +₆, ×₆) has zero divisors 2, 3 and 4. For 2 ×₆ 3 = 0, 3 ×₆ 4 = 0.
ii) The set R of all 2 × 2 matrices with real numbers as entries is a ring with zero divisors w.r.t.
addition and multiplication of matrices.
For A = [ 1 0 ; 0 0 ] and B = [ 0 0 ; 1 0 ] (rows separated by ‘;’)
are two non-zero elements of R but AB = O.
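Both examples can be verified mechanically; the following Python sketch (illustrative only) finds the zero divisors of Z₆ by brute force and multiplies the two matrices:

```python
# Illustrative sketch: zero divisors in Z_6, and the matrix product AB = O.
n = 6
zero_divisors = sorted({a for a in range(1, n)
                        for b in range(1, n) if (a * b) % n == 0})

def matmul(A, B):
    # 2 x 2 matrix product over the integers.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 0]]
B = [[0, 0], [1, 0]]
print(zero_divisors)      # the zero divisors of Z_6
print(matmul(A, B))       # the zero matrix
```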
1.5.3 Cancellation Laws in a Ring: We say that cancellation laws hold in a ring R if
a ≠ 0, ab = ac ⇒ b = c and a ≠ 0, ba = ca ⇒ b = c for a, b, c ∈ R.
1.5.4 Theorem: A ring R is without zero divisors if and only if the cancellation laws hold in R.
Proof: Suppose R is a ring without zero divisors,
i.e. a, b ∈ R and ab = 0 ⇒ a = 0 or b = 0.
Let a ≠ 0, b, c ∈ R and ab = ac.
Now ab = ac ⇒ ab − ac = 0
⇒ a(b − c) = 0
⇒ b − c = 0 since a ≠ 0 and R has no zero divisors
⇒ b = c.
Similarly we can show that ba = ca ⇒ b = c.
Conversely suppose the cancellation laws hold in R.
Let a, b ∈ R and ab = 0.
If possible suppose a ≠ 0.
Now ab = 0 ⇒ ab = a0 ⇒ b = 0 by cancellation.
Similarly if b ≠ 0, then
ab = 0 ⇒ ab = 0b ⇒ a = 0.
∴ ab = 0 ⇒ either a = 0 or b = 0.
Hence R has no zero divisors.
1.6.3 Theorem: A commutative ring R with unity 1 ≠ 0 is an integral domain if and only if the
cancellation laws hold in R.
Proof of this theorem follows from the previous theorem.
1.6.4 Definition: Invertible element (or) Unit: Let R be a ring with unity 1. A non-zero element
a ∈ R is said to be invertible if there exists b ∈ R such that ab = ba = 1. Here b is called the
multiplicative inverse of a and is denoted by a⁻¹.
1.6.5 Division Ring: A ring (R, +, .) is said to be a division ring if i) R has unity 1 ≠ 0, ii) every
non-zero element of R has a multiplicative inverse.
Theorem: A division ring R has no zero divisors.
Proof: Let a, b ∈ R and ab = 0.
If possible let a ≠ 0.
Now ab = 0
⇒ a⁻¹(ab) = a⁻¹0
⇒ (a⁻¹a)b = 0
⇒ 1b = 0
⇒ b = 0.
Similarly if b ≠ 0 we can prove that a = 0.
∴ ab = 0 ⇒ either a = 0 or b = 0.
Hence R has no zero divisors.
1.6.8 Definition: Field: A ring R with at least two elements is called a field if i) R is commutative,
ii) R has unity, iii) every non-zero element of R has a multiplicative inverse.
1.6.9 Example: The rings of rational numbers, real numbers and complex numbers are fields.
Theorem: Every field F is an integral domain.
Proof: F is a commutative ring with unity.
To show that F is an integral domain, it is enough to show that F has no zero divisors.
Let a, b ∈ F and ab = 0.
If possible let a ≠ 0.
Now ab = 0 ⇒ a⁻¹(ab) = a⁻¹0
⇒ (a⁻¹a)b = 0
⇒ 1b = 0
⇒ b = 0.
Similarly if b ≠ 0 we can show that a = 0.
Hence ab = 0 ⇒ either a = 0 or b = 0.
Suppose aaᵢ = aaⱼ. Then aaᵢ − aaⱼ = 0
⇒ a(aᵢ − aⱼ) = 0
⇒ aᵢ = aⱼ since a ≠ 0.
Now baᵢ = (aaⱼ)aᵢ
= a(aⱼaᵢ)
= a(aᵢaⱼ) since R is commutative
= (aaᵢ)aⱼ
= aaⱼ since aaᵢ = a
= b
note that (−n)a = n(−a) = −(na)
1.7.2. Note: If m, n are integers and a,b are elements of a ring R then
i) (m + n)a = ma + na
ii) m(na) = (mn)a
iii) m(a + b) = ma + mb
iv) m(ab) = (ma)b = a(mb)
v) (ma)(nb) = mn(ab)
1.7.3 Integral Powers : If m, n are positive integers and a, b are elements of a ring R then
i) aᵐ = a.a.….a, m times
ii) aᵐ.aⁿ = aᵐ⁺ⁿ
iii) (aᵐ)ⁿ = aᵐⁿ
If there is no such positive integer ‘n’ then we say that the characteristic of the ring R is zero
or infinite.
1.7.6 Theorem: The characteristic of a ring with unity is zero or n > 0, according as the order of
the unity element, regarded as a member of the additive group of the ring, is zero or n > 0 respectively.
Proof: Suppose R is a ring with unity element 1. Suppose the order of 1, regarded as an element
of the group (R, +), is zero. Then there is no positive integer n with n.1 = 0, and hence no positive
integer n with na = 0 for all a ∈ R; so the characteristic of R is zero.
Suppose the order of 1 is n > 0, so that n.1 = 0 and m.1 ≠ 0 for 0 < m < n.
Now na = a + a + … + a (n times)
= a(1 + 1 + … + 1) (n times)
= a(n.1)
= a0 since n.1 = 0
= 0.
∴ na = 0, ∀ a ∈ R.
Further n is the least positive integer such that na = 0, ∀ a ∈ R, since if m < n then
m.1 ≠ 0. Hence the characteristic of the ring R is n.
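The theorem can be illustrated computationally (a sketch, not part of the lesson): in Z_n the additive order of the unity element 1 is n, so the characteristic of Z_n is n.

```python
# Illustrative sketch: the characteristic of Z_n, computed as the additive
# order of 1, i.e. the least k > 0 with 1 + 1 + ... + 1 (k times) = 0 mod n.
def characteristic(n):
    k, s = 1, 1 % n
    while s != 0:
        k += 1
        s = (s + 1) % n
    return k

print([characteristic(n) for n in (2, 5, 6, 12)])   # equals n in each case
```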
1.7.7. Theorem: The characteristic of an integral domain is either zero or prime.
Proof: Suppose R is an Integral domain
Let p be the characteristic of R.
If p = 0 there is nothing to prove.
Suppose p ≠ 0. If possible suppose p is not prime, say p = mn where 1 < m < p and 1 < n < p.
Let 0 ≠ a ∈ R.
pa = 0 since p is the characteristic of R
⇒ pa² = 0
⇒ (mn)a² = 0
⇒ (ma)(na) = 0
⇒ ma = 0 or na = 0 since R has no zero divisors.
Suppose ma = 0. Let b ∈ R.
Now ma = 0 ⇒ (ma)b = 0
⇒ a(mb) = 0
⇒ mb = 0 since a ≠ 0 and R has no zero divisors.
Thus mb = 0 for all b ∈ R with m < p, contradicting the minimality of p. Similarly na = 0
leads to a contradiction. Hence p is prime.
1.8.3 Note: If R is a ring then S = {0}, where 0 is the zero element of R, and S = R are subrings of
R. These subrings are called trivial or improper subrings of R. A subring other than the above two
is called a non-trivial or proper subring of R.
1.8.4 Theorem: A non-empty subset S of a ring R is a subring of R if and only if
(i) a, b ∈ S ⇒ a − b ∈ S and (ii) a, b ∈ S ⇒ ab ∈ S.
Proof: Suppose S is a subring of R.
Let a, b ∈ S.
Then b ∈ S ⇒ −b ∈ S, so a + (−b) = a − b ∈ S.
Also a, b ∈ S ⇒ ab ∈ S.
Hence a, b ∈ S ⇒ a − b ∈ S and ab ∈ S.
Conversely suppose conditions (i) and (ii) hold.
Existence of zero element: Since S is non-empty, S has at least one element, say a ∈ S.
a, a ∈ S ⇒ a − a = 0 ∈ S by condition (i).
Existence of additive inverse:
0 ∈ S and a ∈ S ⇒ 0 − a ∈ S
⇒ −a ∈ S.
Closure w.r.t. addition:
Let a, b ∈ S.
b ∈ S ⇒ −b ∈ S.
Now a, −b ∈ S ⇒ a − (−b) ∈ S by condition (i)
⇒ a + b ∈ S.
Since elements of S are elements of R, the associative laws for addition and multiplication,
the commutative law for addition and the distributive laws hold good in S.
Also by condition (ii) S is closed w.r.t. multiplication.
∴ S is a ring and hence a subring of R.
1.8.5 Theorem: The intersection of two subrings of a ring R is a subring of R.
Proof: Let S₁ and S₂ be subrings of R and S = S₁ ∩ S₂.
0 ∈ S₁ and 0 ∈ S₂ ⇒ 0 ∈ S₁ ∩ S₂ = S.
∴ S is a non-empty subset of R.
Let a, b ∈ S ⇒ a, b ∈ S₁ ∩ S₂
⇒ a, b ∈ S₁ and a, b ∈ S₂.
a, b ∈ S₁ ⇒ a − b ∈ S₁ and ab ∈ S₁ since S₁ is a subring; similarly a − b ∈ S₂ and ab ∈ S₂.
⇒ a − b ∈ S₁ ∩ S₂ and ab ∈ S₁ ∩ S₂,
i.e. a − b ∈ S and ab ∈ S.
Hence S is a subring of R.
Clearly 2, 3 ∈ S₁ ∪ S₂.
But 3 − 2 = 1 ∉ S₁ ∪ S₂, so S₁ ∪ S₂ is not a subring.
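This can be checked in a few lines of Python (an illustrative sketch; it assumes, as the numbers 2 and 3 suggest, that the subrings in question are S₁ = 2Z and S₂ = 3Z):

```python
# Illustrative sketch, assuming S1 = 2Z and S2 = 3Z (subrings of Z).
# Their union contains 2 and 3 but is not closed under subtraction.
def in_union(x):
    return x % 2 == 0 or x % 3 == 0

print(in_union(2), in_union(3), in_union(3 - 2))
```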
(OR)
Union of two subrings of a ring R is again a subring iff one is contained in the other.
Proof: Suppose S₁ ∪ S₂ is a subring of R.
If possible suppose S₁ ⊄ S₂ and S₂ ⊄ S₁.
Then there exist a ∈ S₁ with a ∉ S₂ ..... (1)
and b ∈ S₂ with b ∉ S₁ ..... (2)
a ∈ S₁ ⇒ a ∈ S₁ ∪ S₂
b ∈ S₂ ⇒ b ∈ S₁ ∪ S₂
⇒ a + b ∈ S₁ ∪ S₂ since S₁ ∪ S₂ is a subring
⇒ a + b ∈ S₁ or a + b ∈ S₂.
If a + b ∈ S₁ then b = (a + b) − a ∈ S₁,
a contradiction to (2).
If a + b ∈ S₂ then a = (a + b) − b ∈ S₂,
a contradiction to (1).
∴ a + b ∉ S₁ and a + b ∉ S₂
⇒ a + b ∉ S₁ ∪ S₂
⇒ S₁ ∪ S₂ is not a subring, a contradiction.
Hence either S₁ ⊆ S₂ or S₂ ⊆ S₁.
Conversely suppose S₁ ⊆ S₂ or S₂ ⊆ S₁.
If S₁ ⊆ S₂ then S₁ ∪ S₂ = S₂, which is a subring.
If S₂ ⊆ S₁ then S₁ ∪ S₂ = S₁, which is a subring.
1.8.9 Example: The set of all rational numbers (Q, +, .) is a subfield of (R, +, .).
i) a, b ∈ K ⇒ a − b ∈ K
ii) a ∈ K, 0 ≠ b ∈ K ⇒ ab⁻¹ ∈ K
iii) 1 ∈ K
Proof: Let F be a field and
K a non-empty subset of F.
Suppose K is a subfield of F.
Let a, b ∈ K. Then −b ∈ K, so a − b ∈ K.
Let a ∈ K and 0 ≠ b ∈ K. Then b⁻¹ ∈ K, so ab⁻¹ ∈ K.
Clearly 1 ∈ K.
Conversely suppose conditions (i), (ii) and (iii) hold.
From condition (i) it follows that (K, +) is a subgroup of (F, +); from (ii) and (iii) it follows
that (K − {0}, .) is a subgroup of (F − {0}, .).
Since K ⊆ F, the commutative laws for addition and multiplication and the distributive laws hold for
elements of K.
Hence K is a subfield of F.
1.9. Summary :
In this lesson we learnt the definitions of Ring, Integral domain, Field, Subring and Subfield.
We proved some theorems on Rings, Integral domains and fields.
(A + B) + C = A + (B + C), ∀ A, B, C ∈ M
A + B = B + A, ∀ A, B ∈ M
(AB)C = A(BC), ∀ A, B, C ∈ M
A(B + C) = AB + AC and
(B + C)A = BA + CA, ∀ A, B, C ∈ M
∴ M is a ring.
Solution: Given Z(i) = {x + iy / x, y ∈ Z}.
Let a = x₁ + iy₁ and b = x₂ + iy₂ ∈ Z(i), where x₁, y₁, x₂, y₂ ∈ Z.
Now a + b = (x₁ + iy₁) + (x₂ + iy₂)
= (x₁ + x₂) + i(y₁ + y₂)
⇒ a + b ∈ Z(i) since x₁ + x₂ and y₁ + y₂ ∈ Z.
ab = (x₁ + iy₁)(x₂ + iy₂)
= (x₁x₂ − y₁y₂) + i(x₁y₂ + x₂y₁)
⇒ ab ∈ Z(i),
i.e. Z(i) is closed under addition and multiplication. We know that addition of complex
numbers is associative and commutative.
The associative law and commutative law for addition hold good in Z(i) since Z(i) is a subset
of the set of complex numbers C.
Clearly 0 = 0 + i0 ∈ Z(i) and
a + 0 = (x₁ + iy₁) + (0 + i0) = x₁ + iy₁ = a.
∴ 0 is the zero element of Z(i).
a ∈ Z(i) ⇒ −a = −x₁ − iy₁ ∈ Z(i) and
a + (−a) = (x₁ + iy₁) + (−x₁ − iy₁) = 0.
a(b + c) = ab + ac, ∀ a, b, c ∈ Z(i) since Z(i) ⊆ C.
∴ Z(i) is a ring.
1.11.3 If Q(√2) = {a + b√2 : a, b ∈ Q}, then Q(√2) is a field with respect to addition and multiplication
of real numbers.
Solution: Q(√2) = {a + b√2 : a, b ∈ Q}.
Let x = a + b√2 and y = c + d√2 ∈ Q(√2),
where a, b, c, d ∈ Q.
Now x + y = (a + b√2) + (c + d√2)
= (a + c) + (b + d)√2
⇒ x + y ∈ Q(√2) since a + c, b + d ∈ Q.
xy = (a + b√2)(c + d√2)
= (ac + 2bd) + (ad + bc)√2
⇒ xy ∈ Q(√2) since ac + 2bd, ad + bc ∈ Q.
We have (x + y) + z = x + (y + z) and x + y = y + x, ∀ x, y, z ∈ Q(√2).
Clearly 0 = 0 + 0√2 ∈ Q(√2) and x + 0 = x, ∀ x ∈ Q(√2).
x ∈ Q(√2) ⇒ −x = −a − b√2 ∈ Q(√2) and x + (−x) = 0.
Let 0 ≠ x = a + b√2 ∈ Q(√2)
⇒ a ≠ 0 or b ≠ 0.
Now 1/x = 1/(a + b√2) = (a − b√2)/((a + b√2)(a − b√2)) = (a − b√2)/(a² − 2b²)
= a/(a² − 2b²) − (b/(a² − 2b²))√2 ∈ Q(√2),
since a, b ∈ Q and a² − 2b² ≠ 0.
(For a² − 2b² = 0 with b ≠ 0 would give (a/b)² = 2, which is impossible for a/b ∈ Q; so
a² − 2b² = 0 ⇒ a = 0 and b = 0.)
∴ every non-zero element of Q(√2) has a multiplicative inverse, and Q(√2) is a field.
1.11.4 Z_p = {0, 1, 2, 3, …, p − 1}, where p is a prime. Z_p is a field w.r.t. addition modulo p
and multiplication modulo p.
Let a, b ∈ Z_p.
Clearly a +_p b and a ×_p b ∈ Z_p.
Let a, b, c ∈ Z_p.
(a +_p b) +_p c = r₁ +_p c where a + b = q₁p + r₁, 0 ≤ r₁ < p
= r₂ where r₁ + c = q₂p + r₂, 0 ≤ r₂ < p.
Now a + b + c = q₁p + r₁ + c = q₁p + q₂p + r₂ = (q₁ + q₂)p + r₂.
a +_p (b +_p c) = a +_p s₁ where b + c = m₁p + s₁, 0 ≤ s₁ < p
= s₂ where a + s₁ = m₂p + s₂, 0 ≤ s₂ < p.
Now a + b + c = a + m₁p + s₁ = m₁p + a + s₁ = m₁p + m₂p + s₂
= (m₁ + m₂)p + s₂.
∴ r₂ = s₂ ⇒ (a +_p b) +_p c = a +_p (b +_p c).
Clearly 0 ∈ Z_p
and a +_p 0 = a.
Let a ∈ Z_p.
If a ≠ 0 then 0 < a < p ⇒ 0 < p − a < p
⇒ p − a ∈ Z_p.
a +_p (p − a) = 0 since a + (p − a) = p leaves remainder 0.
∴ p − a is the additive inverse of a ≠ 0.
a +_p b = b +_p a since a + b = b + a.
∴ addition modulo p is commutative.
(a ×_p b) ×_p c = r₁ ×_p c where ab = q₁p + r₁, 0 ≤ r₁ < p
= r₂ where r₁c = q₂p + r₂, 0 ≤ r₂ < p.
Now abc = (q₁p + r₁)c = q₁cp + r₁c = (q₁c + q₂)p + r₂.
a ×_p (b ×_p c) = a ×_p s₁ where bc = m₁p + s₁, 0 ≤ s₁ < p
= s₂ where as₁ = m₂p + s₂, 0 ≤ s₂ < p.
Now abc = a(m₁p + s₁) = am₁p + as₁ = (am₁ + m₂)p + s₂.
∴ r₂ = s₂ ⇒ (a ×_p b) ×_p c = a ×_p (b ×_p c).
Clearly 1 ∈ Z_p and a ×_p 1 = 1 ×_p a = a, ∀ a ∈ Z_p.
Clearly a ×_p b = b ×_p a, ∀ a, b ∈ Z_p, since ab = ba.
Suppose a ×_p b = 0 where a, b ∈ Z_p
⇒ p | ab
⇒ p | a or p | b since p is prime
⇒ a = 0 or b = 0 since 0 ≤ a < p and 0 ≤ b < p.
∴ Z_p is an integral domain.
Remark: the ring Z_p described above is the same as the ring Z_n (with n = p) defined in the examples
of 1.4.2 (upto isomorphism).
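A brute-force Python check (illustrative only) makes the contrast with composite moduli visible: every non-zero element of Z₇ has a multiplicative inverse, while in Z₆ the element 2 has none.

```python
# Illustrative sketch: multiplicative inverses in Z_n, found by brute force.
def inverses(n):
    return {a: b for a in range(1, n) for b in range(1, n) if (a * b) % n == 1}

inv7 = inverses(7)
print(sorted(inv7))          # all of 1..6 are invertible in Z_7
print(2 in inverses(6))      # 2 is not invertible in Z_6
```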
1.11.5 Give an example of a division ring which is not a field.
Solution: Let A = { [ a + ib  c + id ; −c + id  a − ib ] / a, b, c, d ∈ R } (rows separated by ‘;’).
We know that the set M of all 2 × 2 matrices with complex numbers as entries is a non-commutative
ring with unity.
Clearly A is a non-empty subset of M.
Let X, Y ∈ A, say X = [ a₁ + ib₁  c₁ + id₁ ; −c₁ + id₁  a₁ − ib₁ ] and
Y = [ a₂ + ib₂  c₂ + id₂ ; −c₂ + id₂  a₂ − ib₂ ].
Then X − Y ∈ A and XY ∈ A.
Hence X, Y ∈ A ⇒ X − Y and XY ∈ A.
∴ A is a subring of M and hence a ring.
Since M is a non-commutative ring with unity I = [ 1 0 ; 0 1 ], and I ∈ A, A is a ring with unity.
For 0 ≠ X ∈ A, where X = [ a + ib  c + id ; −c + id  a − ib ], the determinant of X is
a² + b² + c² + d² ≠ 0, so X has a multiplicative inverse.
∴ A is a division ring.
Since A is non-commutative it is not a field.
1.11.6 Show that the characteristic of a Boolean ring R is 2.
Solution: Let R be a Boolean ring
⇒ a² = a, ∀ a ∈ R.
Let a ∈ R ⇒ a + a ∈ R.
(a + a)(a + a) = (a + a)² = a + a
⇒ a² + a² + a² + a² = a + a
⇒ a + a + a + a = a + a
⇒ a + a = 0
⇒ 2a = 0.
∴ the characteristic of a Boolean ring is 2.
Now (a + b)² = (a + b)(a + b)
= a² + ab + ba + b²
= a² + ab + ab + b² when R is commutative
= a² + 2ab + b²
= a² + 0 + b² since ch. of R is 2, so 2a = 0, ∀ a ∈ R
⇒ (a + b)² = a² + b².
The centre of a ring R is C(R) = {x ∈ R / ax = xa, ∀ a ∈ R}.
Clearly 0 ∈ R and a0 = 0a = 0, ∀ a ∈ R
⇒ 0 ∈ C(R).
Let x, y ∈ C(R)
⇒ ax = xa and ay = ya, ∀ a ∈ R.
Now a(x − y) = ax − ay
= xa − ya
= (x − y)a, ∀ a ∈ R.
a(xy) = (ax)y = (xa)y = x(ay) = x(ya)
= (xy)a, ∀ a ∈ R.
⇒ x − y, xy ∈ C(R).
∴ C(R) is a subring of R.
1.11.9 The set of all those integers which are multiples of a given integer, say m, is a subring of
the ring of integers.
Solution: Let S = {mx / x ∈ Z}.
Clearly 0 ∈ Z and m0 = 0 ∈ S.
∴ S is a non-empty subset of Z.
Let a, b ∈ S
⇒ a = mx and b = my for some x, y ∈ Z.
Now a − b = mx − my
= m(x − y)
⇒ a − b ∈ S since x − y ∈ Z.
Also ab = mx.my
= m.(mxy)
⇒ ab ∈ S since mxy ∈ Z.
Hence S is a subring of Z by 1.8.4.
1.11.10 Show that 0 and 1 are the only idempotent elements of an integral domain.
Solution: Let R be an integral domain and let a ∈ R be an idempotent element
⇒ a² = a
⇒ a² − a = 0
⇒ a(a − 1) = 0
⇒ a = 0 or a − 1 = 0, i.e. a = 0 or a = 1, since R has no zero divisors.
∴ 0 and 1 are the only idempotent elements of an integral domain.
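The contrast with rings that are not integral domains can be seen computationally (an illustrative sketch, not part of the lesson): Z₇ has only the idempotents 0 and 1, while the composite modulus 6 admits extra idempotents.

```python
# Illustrative sketch: idempotents of Z_n, i.e. elements with a*a = a mod n.
def idempotents(n):
    return [a for a in range(n) if (a * a) % n == a]

print(idempotents(7), idempotents(6))
```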
1.12 Exercises:
1. Prove that the set of even integers is a ring with respect to usual addition and multiplication.
3. Is R = {a√2 / a ∈ Q} a ring under ordinary addition and multiplication of real numbers?
4. Is the set of all pure imaginary numbers {iy / y ∈ R} a ring with respect to addition and
multiplication of complex numbers?
X = a₀ + a₁i + a₂j + a₃k, Y = b₀ + b₁i + b₂j + b₃k,
X + Y = (a₀ + b₀) + (a₁ + b₁)i + (a₂ + b₂)j + (a₃ + b₃)k.
7. Prove that ±1, ±i are the only four units of the ring of Gaussian integers Z(i).
8. Show that the set of matrices [ a b ; 0 c ] where a, b, c ∈ Z is a subring of the ring of 2 × 2 matrices
whose elements are integers.
10. Is S = {1, 3, 5} a subring of the ring Z₆ of residue classes modulo 6 under addition and
multiplication of residue classes?
11. Find the centre of the ring of 3 × 3 matrices M₃(R), where R is the field of reals.
- Smt. K. Ruth
Rings and Linear Algebra 2.1 Ideals and Homomorphism of Rings
LESSON - 2
IDEALS AND HOMOMORPHISM OF RINGS
2.2 Structure
2.3 Introduction
2.11 Summary
2.14 Exercises
2.3 Introduction:
In this lesson we define Right Ideal, Left Ideal, Ideal, Principal Ideal, Prime Ideal, Maximal
Ideal and Quotient rings. We also define homomorphism of rings and the kernel of a homomorphism.
2.4.1 Right Ideal: Let R be a ring. A non-empty subset I of R is said to be a right ideal of R if
i) a, b ∈ I ⇒ a − b ∈ I
ii) a ∈ I, r ∈ R ⇒ ar ∈ I.
2.4.2 Left Ideal: Let R be a ring. A non-empty subset I of R is said to be a left ideal of R if
i) a, b ∈ I ⇒ a − b ∈ I
ii) a ∈ I, r ∈ R ⇒ ra ∈ I.
2.4.3 Example: 1. The set I = { [ 0 a ; 0 b ] : a, b ∈ Z } is a left ideal of the ring of 2 × 2 matrices with
integers as entries but not a right ideal.
2. The set I = { [ a b ; 0 0 ] : a, b ∈ Z } is a right ideal of the ring of 2 × 2 matrices with integers as entries
but not a left ideal.
i) a, b ∈ I ⇒ a − b ∈ I
ii) a ∈ I, r ∈ R ⇒ ar ∈ I and ra ∈ I
2.4.5 Example: 1. Let R be a ring. Then I = {0}, where 0 is the zero element of the ring R, is an
ideal of R.
b ∈ I ⇒ b ∈ R.
a ∈ I, b ∈ R and I is an ideal ⇒ ab ∈ I.
We have 2 ∈ Z and 1/4 ∈ Q.
But 2.(1/4) = 1/2 ∉ Z.
∴ Z is not an ideal of Q.
Let x ∈ R
⇒ 1.x ∈ I
⇒ x ∈ I.
∴ R ⊆ I ..... (2)
a ∈ F since I ⊆ F
⇒ a⁻¹ ∈ F since a ≠ 0.
a ∈ I, a⁻¹ ∈ F and I is an ideal ⇒ aa⁻¹ ∈ I
⇒ 1 ∈ I.
Now by 2.4.7, I = F.
∴ every non-zero ideal of F is equal to F itself.
Take I = I₁ ∩ I₂.
Clearly 0 ∈ I₁ and 0 ∈ I₂ ⇒ 0 ∈ I₁ ∩ I₂ = I.
∴ I is a non-empty subset of R.
Let a, b ∈ I and r ∈ R.
a, b ∈ I ⇒ a, b ∈ I₁ ∩ I₂
⇒ a, b ∈ I₁ and a, b ∈ I₂.
Now a − b ∈ I₁, a − b ∈ I₂, ra ∈ I₁, ra ∈ I₂, ar ∈ I₁, ar ∈ I₂
⇒ a − b ∈ I₁ ∩ I₂ = I, ra ∈ I₁ ∩ I₂ = I, ar ∈ I₁ ∩ I₂ = I.
Hence I is an ideal of R.
Suppose I₁ ∪ I₂ is an ideal of R.
To prove that I₁ ⊆ I₂ or I₂ ⊆ I₁:
If possible suppose I₁ ⊄ I₂ and I₂ ⊄ I₁.
Then there exist a ∈ I₁ with a ∉ I₂ and b ∈ I₂ with b ∉ I₁.
a ∈ I₁ ⇒ a ∈ I₁ ∪ I₂
b ∈ I₂ ⇒ b ∈ I₁ ∪ I₂
⇒ a + b ∈ I₁ ∪ I₂ ⇒ a + b ∈ I₁ or a + b ∈ I₂.
If a + b ∈ I₁ then b = (a + b) − a ∈ I₁, a contradiction.
If a + b ∈ I₂ then a = (a + b) − b ∈ I₂, a contradiction.
∴ I₁ ∪ I₂ is not an ideal.
This is a contradiction.
Hence our assumption is wrong.
∴ either I₁ ⊆ I₂ or I₂ ⊆ I₁.
Conversely suppose I₁ ⊆ I₂ or I₂ ⊆ I₁.
Then I₁ ∪ I₂ = I₂ or I₁, and in either case I₁ ∪ I₂ is an ideal.
Suppose I₁ + I₂ = {a + b : a ∈ I₁ and b ∈ I₂}.
Clearly 0 ∈ I₁ and 0 ∈ I₂
⇒ 0 = 0 + 0 ∈ I₁ + I₂.
∴ I₁ + I₂ is a non-empty subset of R.
Let x, y ∈ I₁ + I₂ and r ∈ R.
x = a₁ + b₁ where a₁ ∈ I₁ and b₁ ∈ I₂, and
y = a₂ + b₂ where a₂ ∈ I₁ and b₂ ∈ I₂.
x − y = (a₁ − a₂) + (b₁ − b₂).
We have a₁ − a₂ ∈ I₁ and b₁ − b₂ ∈ I₂
⇒ x − y ∈ I₁ + I₂ ........ (1)
Clearly x ∈ I₁ ⇒ x = x + 0 ∈ I₁ + I₂ since 0 ∈ I₂
⇒ x ∈ I₁ + I₂.
∴ I₁ ⊆ I₁ + I₂.
y ∈ I₂ ⇒ y = 0 + y ∈ I₁ + I₂ since 0 ∈ I₁
⇒ y ∈ I₁ + I₂.
∴ I₂ ⊆ I₁ + I₂.
I₁ ⊆ I₁ + I₂, I₂ ⊆ I₁ + I₂ ⇒ I₁ ∪ I₂ ⊆ I₁ + I₂.
2.4.16 Theorem: If I₁ and I₂ are two ideals of a ring R then I₁ + I₂ is the ideal generated by I₁ ∪ I₂,
i.e. I₁ + I₂ = ⟨I₁ ∪ I₂⟩.
By 2.4.12, I₁ + I₂ is an ideal of R.
To prove that I₁ + I₂ ⊆ ⟨I₁ ∪ I₂⟩:
Let x ∈ I₁ + I₂
⇒ x = a + b where a ∈ I₁ and b ∈ I₂.
a ∈ I₁ ⇒ a ∈ I₁ ∪ I₂ ⇒ a ∈ ⟨I₁ ∪ I₂⟩ since I₁ ∪ I₂ ⊆ ⟨I₁ ∪ I₂⟩.
b ∈ I₂ ⇒ b ∈ I₁ ∪ I₂ ⇒ b ∈ ⟨I₁ ∪ I₂⟩.
a, b ∈ ⟨I₁ ∪ I₂⟩ and ⟨I₁ ∪ I₂⟩ is an ideal
⇒ a + b ∈ ⟨I₁ ∪ I₂⟩
⇒ x ∈ ⟨I₁ ∪ I₂⟩.
2.5.1 Definition: Principal Ideal: Let R be a ring. An ideal I of R is said to be a principal ideal of
R if I is generated by a single element of R, i.e. I = ⟨a⟩ where a ∈ R.
2.5.2 Example: (1) The null ideal {0} of a ring R is a principal ideal.
Definition: A commutative ring R with identity 1 ≠ 0 and no zero divisors is called an integral
domain.
2.5.3 Principal Ideal Domain: An integral domain R with unity is said to be a principal ideal
domain if every ideal of R is a principal ideal.
2.5.4 Note: Every field is a principal ideal domain. We know that a field has only two ideals, {0} and
the field itself, by 2.4.8.
The ideals of a field are principal ideals and hence a field is a principal ideal domain.
2.5.5. Theorem: The ring of integers is a principal ideal domain.
Proof: Let I be an ideal of Z. If I = {0} then I = ⟨0⟩ is principal.
Suppose I ≠ {0}.
Then there is 0 ≠ a ∈ I
⇒ −a ∈ I since I is an ideal.
Since a, −a ∈ I, I contains at least one positive integer.
Let b be the least positive integer in I.
We claim that I = ⟨b⟩.
Let x ∈ I.
By the division algorithm, x = bq + r where q, r ∈ Z and 0 ≤ r < b.
b ∈ I ⇒ bq ∈ I
⇒ x − bq ∈ I since x ∈ I
⇒ r ∈ I.
Since b is the least positive integer in I and 0 ≤ r < b, r cannot be positive.
∴ r = 0
⇒ x = bq, q ∈ Z.
∴ I = {bq : q ∈ Z} = ⟨b⟩.
2.6.1 Definition: Prime ideal: A proper ideal I of a commutative ring R is said to be a prime ideal
if a, b ∈ R and a.b ∈ I ⇒ a ∈ I or b ∈ I.
Example: In an integral domain R the null ideal {0} is a prime ideal, since ab ∈ {0} ⇒ ab = 0
⇒ a = 0 or b = 0, i.e. a ∈ {0} or b ∈ {0}.
2.6.3 Definition: Maximal ideal: Let R be a ring and M an ideal of R such that M ≠ R. M is
said to be a maximal ideal of R if for any ideal I of R such that M ⊆ I ⊆ R, either I = M or I = R.
2.6.4 Theorem: An ideal M of the ring of integers Z is maximal iff M is generated by some prime
number.
Proof: Let Z be the ring of integers.
We know that Z is a principal ideal domain (by 2.5.5), so M = ⟨n⟩ for some n ∈ Z.
Suppose M = ⟨n⟩ is maximal. If possible suppose n is not prime, say n = pq where 1 < p < n and
1 < q < n. Let I = ⟨p⟩; then M ⊆ I ⊆ Z.
M is maximal ⇒ I = M or I = Z.
If I = M then p ∈ ⟨n⟩
⇒ p = nm for some m ∈ Z
⇒ p = pqm, since n = pq
⇒ qm = 1
⇒ m = ±1 and q = ±1.
This is a contradiction, since q > 1.
If I = Z then ⟨p⟩ = Z ⇒ p = ±1, again a contradiction since p > 1. Hence n is prime.
Conversely suppose M = ⟨n⟩ where n is prime. Let I be an ideal of Z with
M ⊆ I ⊆ Z; say I = ⟨m⟩, m ∈ Z.
n ∈ M ⊆ I = ⟨m⟩
⇒ n = mq, q ∈ Z
⇒ m = ±1 or q = ±1, since n is prime.
If m = ±1 then ⟨m⟩ = ⟨1⟩ = Z
⇒ I = Z.
If q = ±1 then n = ±m
⇒ ⟨n⟩ = ⟨m⟩
⇒ M = I.
∴ either I = M or I = Z.
Hence M is a maximal ideal of Z.
2.6.5 Note: In the ring of integers an ideal generated by a composite number is not maximal.
E.g.: let M = ⟨6⟩. Then M ⊆ ⟨2⟩ ⊆ Z with ⟨6⟩ ≠ ⟨2⟩ and ⟨2⟩ ≠ Z.
∴ M is not maximal.
defined by (a + I) + (b + I) = (a + b) + I and
(a + I)(b + I) = ab + I for a + I, b + I ∈ R/I, where
R/I = {r + I : r ∈ R}.
Addition of residue classes is well defined:
Suppose a₁ + I = a₂ + I and b₁ + I = b₂ + I
⇒ (a₁ − a₂) + (b₁ − b₂) ∈ I, since I is an ideal
⇒ (a₁ + b₁) − (a₂ + b₂) ∈ I
⇒ (a₁ + b₁) + I = (a₂ + b₂) + I.
Multiplication of residue classes is well defined:
Suppose a₁ + I = a₂ + I and b₁ + I = b₂ + I
⇒ a₁ − a₂ ∈ I and b₁ − b₂ ∈ I
⇒ a₁b₁ − a₂b₂ = (a₁ − a₂)b₁ + a₂(b₁ − b₂) ∈ I
⇒ a₁b₁ + I = a₂b₂ + I.
Closure w.r.t. addition:
Let a + I, b + I ∈ R/I ⇒ a, b ∈ R.
a, b ∈ R ⇒ a + b ∈ R
⇒ (a + b) + I ∈ R/I
⇒ (a + I) + (b + I) ∈ R/I.
Associativity of addition:
Now ((a + I) + (b + I)) + (c + I) = ((a + b) + I) + (c + I)
= ((a + b) + c) + I
= (a + (b + c)) + I
= (a + I) + ((b + c) + I)
= (a + I) + ((b + I) + (c + I)).
Existence of identity:
0 ∈ R ⇒ 0 + I ∈ R/I.
Now (0 + I) + (a + I) = (0 + a) + I = a + I, ∀ a + I ∈ R/I.
∴ 0 + I = I is the zero element of R/I.
Existence of additive inverse:
Let a + I ∈ R/I ⇒ a ∈ R
⇒ −a ∈ R
⇒ −a + I ∈ R/I.
Now (a + I) + (−a + I) = (a + (−a)) + I
= 0 + I.
Commutativity of addition:
(a + I) + (b + I) = (a + b) + I = (b + a) + I since a, b ∈ R
= (b + I) + (a + I).
Closure with respect to multiplication:
Let a + I, b + I ∈ R/I ⇒ a, b ∈ R
⇒ ab ∈ R since R is a ring
⇒ ab + I ∈ R/I
⇒ (a + I)(b + I) ∈ R/I.
Associativity of multiplication:
Let a + I, b + I, c + I ∈ R/I.
((a + I)(b + I))(c + I) = (ab + I)(c + I) = (ab)c + I
= a(bc) + I since a, b, c ∈ R
= (a + I)(bc + I)
= (a + I)((b + I)(c + I)).
Distributive Law:
Let a + I, b + I, c + I ∈ R/I.
(a + I)((b + I) + (c + I)) = (a + I)((b + c) + I)
= a(b + c) + I
= (ab + ac) + I since a, b, c ∈ R
= (ab + I) + (ac + I)
= (a + I)(b + I) + (a + I)(c + I).
∴ R/I is a ring.
2.7.3 Note: 1. If R is a commutative ring and I is an ideal of R then the quotient ring R/I is also
commutative.
Let a + I, b + I ∈ R/I.
(a + I)(b + I) = ab + I
= ba + I since R is commutative
= (b + I)(a + I).
∴ R/I is commutative.
2. If R is a ring with unity then the quotient ring R/I is also a ring with unity.
1 ∈ R ⇒ 1 + I ∈ R/I.
Now (1 + I)(a + I) = 1a + I = a + I and
(a + I)(1 + I) = a1 + I = a + I.
2.7.4 Theorem: An ideal I of a commutative ring R with unity is prime iff the quotient ring R/I
is an integral domain.
Proof: Suppose I is a prime ideal.
Let a + I, b + I ∈ R/I and
(a + I)(b + I) = 0 + I
⇒ ab + I = I
⇒ ab ∈ I
⇒ a ∈ I or b ∈ I since I is prime
⇒ a + I = I or b + I = I.
∴ R/I has no zero divisors and hence is an integral domain.
Conversely suppose R/I is an integral domain.
Let a, b ∈ R and ab ∈ I
⇒ ab + I = I
⇒ (a + I)(b + I) = I
⇒ a + I = I or b + I = I since R/I has no zero divisors
⇒ a ∈ I or b ∈ I.
∴ I is a prime ideal.
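For R = Z this theorem can be seen concretely (an illustrative Python sketch, not part of the lesson): the ideal ⟨n⟩ is prime exactly when the quotient Z/⟨n⟩ = Z_n has no zero divisors, i.e. exactly when n is a prime number.

```python
# Illustrative sketch: Z_n has zero divisors iff n is composite,
# so <n> is a prime ideal of Z iff n is prime.
def has_zero_divisors(n):
    return any((a * b) % n == 0 for a in range(1, n) for b in range(1, n))

domains = [n for n in range(2, 13) if not has_zero_divisors(n)]
print(domains)   # the moduli for which Z_n is an integral domain
```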
2.7.5 Theorem: An ideal I of a commutative ring R with unity is maximal iff the quotient ring R/I
is a field.
Proof: Suppose I is a maximal ideal of R.
Since R/I is a commutative ring with unity, it is enough to prove that every non-zero element of
R/I has a multiplicative inverse.
Let 0 + I ≠ a + I ∈ R/I
⇒ a + I ≠ I
⇒ a ∉ I, since a ∈ I ⇒ a + I = I.
Consider J = I + ⟨a⟩ = {x + ar : x ∈ I, r ∈ R}. Then J is an ideal with I ⊆ J ⊆ R and a ∈ J,
a ∉ I, so J ≠ I. Since I is maximal, J = R.
1 ∈ R ⇒ 1 = x + ar for some x ∈ I, r ∈ R
⇒ 1 − ar = x ∈ I
⇒ 1 + I = ar + I since a + H = b + H ⟺ a − b ∈ H
⇒ 1 + I = (a + I)(r + I).
∴ for 0 + I ≠ a + I ∈ R/I there exists r + I ∈ R/I such that (a + I)(r + I) = 1 + I.
∴ R/I is a field.
Conversely suppose R/I is a field. Let I₁ be an ideal of R with I ⊊ I₁ ⊆ R.
Then there is a ∈ I₁ with a ∉ I
⇒ a + I ≠ I
⇒ a + I has a multiplicative inverse, say b + I, with (a + I)(b + I) = 1 + I
⇒ ab + I = 1 + I
⇒ 1 − ab ∈ I
⇒ 1 − ab ∈ I₁ since I ⊆ I₁.
Also ab ∈ I₁ since a ∈ I₁
⇒ (1 − ab) + ab ∈ I₁
⇒ 1 ∈ I₁
⇒ I₁ = R by corollary 2.4.7.
∴ I is a maximal ideal of R.
2.7.6 Note: (1) If R is a commutative ring with unity, then every maximal ideal is a prime ideal.
Let M be a maximal ideal of R.
⇒ R/M is a field
⇒ R/M is an integral domain
⇒ M is a prime ideal.
2. The converse of the above need not be true, i.e. a prime ideal of a commutative ring with
unity need not be a maximal ideal.
For example, in the ring of integers the null ideal is prime but not maximal.
Solution: Let a, b ∈ R.
f(a + b) = 0₁ = 0₁ + 0₁ = f(a) + f(b) and f(ab) = 0₁ = 0₁0₁ = f(a)f(b).
∴ f is a homomorphism.
This homomorphism is called the zero homomorphism.
Solution : Let a, b R
I(a + b) = a + b = I(a) + I(b) and I(ab) = ab = I(a)I(b).
I is a homomorphism.
We know that identity mapping is a bijection.
I is an automorphism.
This is called identity homomorphism.
Proof: f : R → R₁ is a homomorphism.
i) f(0) = 0₁:
f(0) = f(0 + 0) since 0 + 0 = 0
⇒ f(0) = f(0) + f(0)
⇒ f(0) = 0₁.
ii) f(−a) = −f(a):
f(a + (−a)) = f(0)
⇒ f(a) + f(−a) = 0₁
⇒ f(−a) = −f(a).
iii) Let a, b ∈ R.
f(a − b) = f(a + (−b)) = f(a) + f(−b) = f(a) − f(b).
f(R) is a non-empty subset of R₁.
Let a, b ∈ f(R)
⇒ a = f(x) and b = f(y), where x, y ∈ R.
x, y ∈ R ⇒ x − y ∈ R and xy ∈ R
⇒ f(x − y) ∈ f(R) and f(xy) ∈ f(R)
⇒ f(x) − f(y) ∈ f(R) and f(x)f(y) ∈ f(R)
⇒ a − b ∈ f(R) and ab ∈ f(R).
∴ f(R) is a subring of R₁.
Hence f(R) is a ring.
Now suppose R is commutative.
Let a, b ∈ f(R) ⇒ a = f(x), b = f(y) where x, y ∈ R.
Now ab = f(x)f(y)
= f(xy) since f is a homomorphism
= f(yx) since R is commutative
= f(y)f(x)
= ba.
∴ f(R) is commutative.
0 ∈ ker f since f(0) = 0₁.
Let a, b ∈ ker f ⇒ f(a) = f(b) = 0₁.
f(a) − f(b) = 0₁ − 0₁ = 0₁
⇒ f(a − b) = 0₁
⇒ a − b ∈ ker f.
Let a ∈ ker f and r ∈ R.
f(ar) = f(a)f(r) = 0₁f(r) = 0₁
f(ra) = f(r)f(a) = f(r)0₁ = 0₁
⇒ ar, ra ∈ ker f.
∴ ker f is an ideal of R.
Suppose f is a monomorphism.
Let a ∈ ker f ⇒ f(a) = 0₁
⇒ f(a) = f(0)
⇒ a = 0 since f is one-one.
∴ ker f = {0}.
Conversely suppose ker f = {0}. Let a, b ∈ R and
f(a) = f(b)
⇒ f(a) − f(b) = 0₁
⇒ f(a − b) = 0₁
⇒ a − b ∈ ker f
⇒ a − b = 0
⇒ a = b.
∴ f is one-one.
Hence f is a monomorphism.
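A concrete instance (an illustrative Python sketch, not part of the lesson): the reduction map f : Z → Z₆, f(x) = x mod 6, is a ring homomorphism; its kernel is the ideal 6Z, and since the kernel is not {0}, f is not one-one, in line with the theorem.

```python
# Illustrative sketch: f(x) = x mod 6 is a ring homomorphism Z -> Z_6
# whose kernel is 6Z.
f = lambda x: x % 6
rng = range(-20, 20)
hom_add = all(f(x + y) == (f(x) + f(y)) % 6 for x in rng for y in rng)
hom_mul = all(f(x * y) == (f(x) * f(y)) % 6 for x in rng for y in rng)
kernel = [x for x in range(-18, 19) if f(x) == 0]
print(hom_add, hom_mul, kernel)
```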
2.9.4 Theorem: If I is an ideal of a ring R then there is an epimorphism f from R onto R/I such
that ker f = I.
Define f : R → R/I as f(a) = a + I, ∀ a ∈ R.
a = b ⇒ a + I = b + I
⇒ f(a) = f(b).
∴ f is well defined.
f(a + b) = (a + b) + I = (a + I) + (b + I) = f(a) + f(b) and
f(ab) = ab + I = (a + I)(b + I) = f(a)f(b).
∴ f is a homomorphism.
f is onto: Let x + I ∈ R/I
⇒ x ∈ R.
Now f(x) = x + I.
∴ for every x + I ∈ R/I there exists x ∈ R such that f(x) = x + I.
Hence f is onto.
∴ f : R → R/I is an epimorphism.
ker f = {a ∈ R : f(a) = I}
= {a ∈ R : a + I = I}
= {a ∈ R : a ∈ I}
= I.
2.10.1 Fundamental Theorem of Homomorphism:
(OR)
Every homomorphic image of a ring is isomorphic to some quotient ring.
Proof: Let f : R → R₁ be an epimorphism and let ker f = I.
Define φ : R/I → R₁ by
φ(a + I) = f(a), ∀ a + I ∈ R/I.
φ is well defined:
Let a + I, b + I ∈ R/I.
a + I = b + I
⇒ a − b ∈ I
⇒ f(a − b) = 0₁ since I = ker f
⇒ f(a) − f(b) = 0₁
⇒ f(a) = f(b)
⇒ φ(a + I) = φ(b + I).
φ is one-one: reversing the above steps, φ(a + I) = φ(b + I) ⇒ f(a) = f(b)
⇒ f(a − b) = 0₁ ⇒ a − b ∈ I ⇒ a + I = b + I.
φ is onto:
Let y ∈ R₁. Since f is onto there is x ∈ R with
f(x) = y.
x ∈ R ⇒ x + I ∈ R/I.
∴ for every y ∈ R₁ there is x + I ∈ R/I with φ(x + I) = f(x) = y.
∴ φ is onto.
φ is a homomorphism:
Let a + I, b + I ∈ R/I.
φ((a + I) + (b + I)) = φ((a + b) + I) = f(a + b)
= f(a) + f(b)
= φ(a + I) + φ(b + I).
φ((a + I)(b + I)) = φ(ab + I) = f(ab)
= f(a)f(b)
= φ(a + I)φ(b + I).
∴ R/I ≅ R₁.
2.10.2 Definition: A ring R is said to be embedded in a ring R1 if there exists a monomorphism
from R into R1.
2.10.3 Example: A subring S of a ring R can be embedded in ring R.
Let R = {(a, b) : a, b ∈ D, b ≠ 0}.
Define ~ on R as (a, b) ~ (c, d) iff ad = bc.
We claim that ~ is an equivalence relation on R.
Let (a, b) ∈ R ⇒ a, b ∈ D and b ≠ 0.
ab = ba since D is commutative
⇒ (a, b) ~ (a, b).
∴ ~ is reflexive.
Let (a, b), (c, d) ∈ R.
Suppose (a, b) ~ (c, d) ⇒ ad = bc
⇒ da = cb since D is commutative
⇒ cb = da
⇒ (c, d) ~ (a, b).
∴ ~ is symmetric.
Let (a, b), (c, d), (e, f) ∈ R.
Suppose (a, b) ~ (c, d) and (c, d) ~ (e, f)
⇒ ad = bc and cf = de
⇒ (ad)f = (bc)f
⇒ (ad)f = b(cf)
⇒ (ad)f = b(de) since cf = de
⇒ d(af) = d(be)
⇒ af = be since d ≠ 0 and D has no zero divisors
⇒ (a, b) ~ (e, f).
∴ ~ is transitive.
Hence ~ is an equivalence relation on R.
Let a/b be the equivalence class containing (a, b) with respect to the equivalence relation ~.
Let F = {a/b : a, b ∈ D, b ≠ 0}, the set of all equivalence classes under ~.
We define addition + and multiplication . on F as
a/b + c/d = (ad + bc)/bd and (a/b).(c/d) = ac/bd, ∀ a/b, c/d ∈ F.
To show that + and . are well defined:
Suppose a/b = a₁/b₁ and c/d = c₁/d₁.
(Recall x/y = x₁/y₁ iff (x, y) ~ (x₁, y₁).)
⇒ ab₁ = ba₁ and cd₁ = dc₁.
Now (ad + bc)b₁d₁ = ab₁dd₁ + bb₁cd₁
= ba₁dd₁ + bb₁dc₁
= (a₁d₁ + b₁c₁)bd
⇒ (ad + bc)/bd = (a₁d₁ + b₁c₁)/(b₁d₁)
⇒ a/b + c/d = a₁/b₁ + c₁/d₁.
∴ + is well defined.
Also (ac)b₁d₁ = ab₁cd₁ = ba₁dc₁ = bda₁c₁
⇒ ac/bd = a₁c₁/(b₁d₁)
⇒ (a/b).(c/d) = (a₁/b₁).(c₁/d₁).
∴ . is well defined.
∴ + and . are binary operations on F.
We prove that (F, +, .) is a field.
First we prove that if x ≠ 0 then a/b = ax/bx, and
if x ≠ 0, y ≠ 0 then 0/x = 0/y and x/x = y/y ............. (1)
a(bx) = b(ax) ⇒ (a, b) ~ (ax, bx)
⇒ a/b = ax/bx.
Also 0y = 0 = x0 ⇒ (0, x) ~ (0, y)
⇒ 0/x = 0/y.
Also xy = xy ⇒ (x, x) ~ (y, y)
⇒ x/x = y/y.
Addition is associative:
Let a/b, c/d, e/f ∈ F.
(a/b + c/d) + e/f = (ad + bc)/bd + e/f
= ((ad + bc)f + bde)/bdf
= (adf + b(cf + de))/bdf
= a/b + (cf + de)/df
= a/b + (c/d + e/f).
∴ addition is associative.
Existence of additive identity:
If x ≠ 0 then 0/x ∈ F.
Now a/b + 0/x = (ax + b0)/bx = ax/bx = a/b from (1)
0/x + a/b = (0b + xa)/xb = ax/bx = a/b, ∀ a/b ∈ F.
∴ 0/x is the additive identity of F.
Existence of additive inverse:
Let a/b ∈ F ⇒ a, b ∈ D and b ≠ 0
⇒ −a, b ∈ D and b ≠ 0
⇒ (−a)/b ∈ F.
Now a/b + (−a)/b = (ab + b(−a))/bb = (ab − ba)/bb = (ab − ab)/bb = 0/bb = 0/x from (1)
(−a)/b + a/b = ((−a)b + ba)/bb = (−ab + ab)/bb = 0/bb = 0/x.
∴ (−a)/b is the additive inverse of a/b.
Addition is commutative:
Let a/b, c/d ∈ F.
a/b + c/d = (ad + bc)/bd = (bc + ad)/db = (cb + da)/db = c/d + a/b.
∴ + is commutative.
∴ (F, +) is an abelian group.
Multiplication is associative:
Let a/b, c/d, e/f ∈ F.
((a/b).(c/d)).(e/f) = (ac/bd).(e/f) = ((ac)e)/((bd)f) = (a(ce))/(b(df)) = (a/b).((c/d).(e/f)).
Existence of multiplicative identity:
Let 0 ≠ x ∈ D; then x/x ∈ F.
For a/b ∈ F, (a/b).(x/x) = ax/bx = a/b and
(x/x).(a/b) = xa/xb = ax/bx = a/b from (1).
∴ x/x, where x ≠ 0, is the multiplicative identity of F.
Existence of multiplicative inverse:
Let a/b ∈ F with a/b ≠ 0/b
⇒ a ≠ 0
⇒ b/a ∈ F.
Now (a/b).(b/a) = ab/ba = ab/ab = x/x from (1)
(b/a).(a/b) = ba/ab = ab/ab = x/x.
∴ b/a is the multiplicative inverse of a/b in F.
Multiplication is commutative:
Let a/b, c/d ∈ F.
(a/b).(c/d) = ac/bd = ca/db = (c/d).(a/b).
∴ . is commutative.
∴ (F − {0/x}, .) is an abelian group.
Distributive Law:
Let a/b, c/d, e/f ∈ F.
(a/b).(c/d + e/f) = (a/b).(c/d) + (a/b).(e/f).
Similarly (c/d + e/f).(a/b) = (c/d).(a/b) + (e/f).(a/b).
∴ (F, +, .) is a field.
Define f : D → F by f(a) = ax/x, where 0 ≠ x ∈ D.
Clearly f is a mapping.
To show that f is a monomorphism:
f is a homomorphism:
Let a, b ∈ D.
Now f(a + b) = (a + b)x/x = (a + b)x²/x²
= (ax² + bx²)/x²
= ((ax)x + x(bx))/(xx)
= ax/x + bx/x
= f(a) + f(b).
Also f(ab) = abx/x = abx²/x² = ((ax)(bx))/(xx) = (ax/x).(bx/x) = f(a)f(b).
f is one-one:
Let a, b ∈ D and f(a) = f(b)
⇒ ax/x = bx/x
⇒ (ax, x) ~ (bx, x)
⇒ (ax)x = x(bx)
⇒ ax² = bx²
⇒ (a − b)x² = 0
⇒ a − b = 0 since x² ≠ 0
⇒ a = b.
∴ f is one-one and hence a monomorphism.
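For D = Z this construction yields the rationals Q, and Python's standard `fractions.Fraction` type behaves exactly like the classes a/b above (an illustrative sketch, not part of the lesson):

```python
from fractions import Fraction

# Illustrative sketch: Fraction identifies (a, b) with (ax, bx), mirroring
# a/b = ax/bx, and implements the sum (ad + bc)/bd and product ac/bd.
same_class = Fraction(1, 2) == Fraction(3, 6)   # (1,2) ~ (3,6) since 1*6 == 2*3
s = Fraction(1, 2) + Fraction(1, 3)             # (ad + bc)/bd
p = Fraction(1, 2) * Fraction(2, 3)             # ac/bd
print(same_class, s, p)
```

The embedding a → a/1 sends each integer to the fraction with denominator 1, matching the monomorphism f above.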
2.4.3 The set I = { [ 0 a ; 0 b ] : a, b ∈ Z } is a left ideal of the ring of 2 × 2 matrices with integers as
entries but not a right ideal.
Solution: Clearly I is a non-empty subset of the ring M of 2 × 2 matrices with integers as entries.
Let A = [ 0 a ; 0 b ], B = [ 0 c ; 0 d ] ∈ I and C = [ x y ; r s ] ∈ M,
where a, b, c, d, x, y, r, s ∈ Z.
A − B = [ 0 a − c ; 0 b − d ] ∈ I since a − c, b − d ∈ Z.
CA = [ x y ; r s ][ 0 a ; 0 b ] = [ 0 xa + yb ; 0 ra + sb ] ∈ I since xa + yb, ra + sb ∈ Z.
∴ I is a left ideal.
But AC = [ 0 a ; 0 b ][ x y ; r s ] = [ ar as ; br bs ] ∉ I in general.
Clearly 0 ∈ R and 0a = 0 ∈ Ra.
∴ Ra is a non-empty subset of R.
Let x, y ∈ Ra and r ∈ R
⇒ x = r₁a and y = r₂a for some r₁, r₂ ∈ R
⇒ x − y = r₁a − r₂a = (r₁ − r₂)a ∈ Ra and
rx = r(r₁a) = (rr₁)a ∈ Ra.
∴ x, y ∈ Ra and r ∈ R ⇒ x − y, rx, xr ∈ Ra (xr ∈ Ra since R is commutative).
∴ Ra is an ideal of R.
2.13.3 If R is a ring and a ∈ R then Ra is a left ideal and aR = {ar : r ∈ R} is a right ideal of
R.
3, 2 ∈ I₁ ∪ I₂ but 3 − 2 = 1 ∉ I₁ ∪ I₂.
∴ I₁ ∪ I₂ is not an ideal.
2.13.6 A commutative ring R with unity element 1 ≠ 0 is a field if R has no proper ideals.
Solution: Let R be a commutative ring with unity element 1.
Suppose R has no proper ideals.
To prove that R is a field.
Let 0 ≠ a ∈ R.
Then Ra is an ideal of R and Ra ≠ {0} since a = 1a ∈ Ra.
∴ Ra = R.
But 1 ∈ R ⇒ 1 ∈ Ra
⇒ 1 = ba for some b ∈ R.
∴ a has a multiplicative inverse, and R is a field.
then r(I) = {x ∈ R : xa = 0 for all a ∈ I} is an ideal of R.
0 ∈ r(I) since 0a = 0 for all a ∈ I.
∴ r(I) is a non-empty subset of R.
Let x, y ∈ r(I) and r ∈ R
⇒ xa = ya = 0 for all a ∈ I
⇒ (x − y)a = xa − ya = 0 for all a ∈ I
⇒ x − y ∈ r(I).
x ∈ r(I) ⇒ xa = 0, ∀ a ∈ I
⇒ r(xa) = 0, ∀ a ∈ I
⇒ (rx)a = 0, ∀ a ∈ I
⇒ rx ∈ r(I).
x ∈ r(I) ⇒ xa = 0 for all a ∈ I
⇒ (xr)a = x(ra) = 0 for all a ∈ I, since ra ∈ I
⇒ xr ∈ r(I).
∴ r(I) is an ideal of R.
For x, y ∈ Z,
f(x + y) = n(x + y)
= nx + ny
= f(x) + f(y).
But f(xy) = nxy, while
f(x).f(y) = nx.ny = n²xy
⇒ f(xy) ≠ f(x)f(y) in general.
∴ f is not a homomorphism.
2.13.9 If R is a ring with unity element 1 and f is an epimorphism from R onto a ring R₁ then f(1)
is the unity element of R₁.
Solution: Let R be a ring with unity element 1.
Suppose f : R → R₁ is an epimorphism.
1 ∈ R ⇒ f(1) ∈ R₁.
Let b ∈ R₁. Since f is onto, b = f(a) for some a ∈ R.
a ∈ R ⇒ a.1 = a = 1.a.
Now f(1).b = f(1)f(a)
= f(1.a)
= f(a)
= b.
Also b.f(1) = f(a).f(1)
= f(a.1)
= f(a)
= b.
∴ f(1) is the unity element of R₁.
f : C → C defined by f(x + iy) = x − iy.
Let a + ib, c + id ∈ C.
f((a + ib) + (c + id)) = f((a + c) + i(b + d))
= (a + c) − i(b + d)
= (a − ib) + (c − id)
= f(a + ib) + f(c + id).
f((a + ib)(c + id)) = f((ac − bd) + i(ad + bc))
= (ac − bd) − i(ad + bc)
= (a − ib)(c − id)
= f(a + ib)f(c + id).
∴ f is a homomorphism.
f is one-one:
Suppose f(a + ib) = f(c + id)
⇒ a − ib = c − id
⇒ a = c and b = d
⇒ a + ib = c + id.
∴ f is one-one.
f is onto:
Let x + iy ∈ C.
Then x − iy ∈ C and f(x − iy) = x + iy.
∴ f is onto.
Hence f is an isomorphism.
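The two homomorphism identities can be spot-checked numerically with Python's built-in complex type (an illustrative sketch, not part of the lesson):

```python
# Illustrative sketch: complex conjugation preserves sums and products,
# checked on a few sample values.
f = lambda z: z.conjugate()
samples = [1 + 2j, -3 + 4j, 0.5 - 1j]
add_ok = all(f(z + w) == f(z) + f(w) for z in samples for w in samples)
mul_ok = all(f(z * w) == f(z) * f(w) for z in samples for w in samples)
print(add_ok, mul_ok)
```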
∴ f : F → F is a monomorphism.
Hence f is an isomorphism.
Suppose ker f = F.
Then f is the zero homomorphism.
2.14 Exercises:
1. Show that the set I = { [ a b ; 0 0 ] : a, b ∈ Z } is a right ideal but not a left ideal of the ring of 2 × 2
matrices M over integers.
2. Show that if R is a commutative ring with unity element and a ∈ R then Ra = {ra : r ∈ R} is a
principal ideal generated by a.
4. If R is a commutative ring with unity and a, b ∈ R then show that {ax + by : x, y ∈ R} is the ideal
generated by a, b.
I R : I .
6. Let R₁ = { [ a 0 ; 0 0 ] : a ∈ R }, where R is the ring of real numbers. Prove that f : R₁ → R defined
by f([ a 0 ; 0 0 ]) = a for all [ a 0 ; 0 0 ] ∈ R₁ is an isomorphism.
7. Show that every homomorphic image of a commutative ring is commutative.
8. Give an example to show that a homomorphic image of an integral domain may not be an
integral domain.
9. Let R be a ring with unity. For each invertible element a ∈ R, show that the mapping f : R → R defined
by f(x) = axa⁻¹, x ∈ R, is an automorphism.
10. Let R be the ring of all real valued continuous functions defined on [0, 1] and
M = { f(x) ∈ R : f(1/3) = 0 }. Show that M is a maximal ideal of R.
- Smt. K. Ruth
LESSON - 3
RINGS OF POLYNOMIALS
3.1 Objective of the Lesson:
To learn the definition of a Polynomial over a ring, polynomial ring, degree of a polynomial
and evaluation homomorphism.
3.2 Structure
3.3 Introduction
3.9 Summary
3.12 Exercises
3.3 Introduction:
In this lesson we will introduce the notion of a polynomial ring over a ring, over an integral
domain and over a field. We also introduce evaluation homomorphism.
Let R be a ring and x an indeterminate. If a0, a1, a2, ... ∈ R and ai = 0 for all except a finite
number of i, then the formal sum a0 + a1x + a2x^2 + ... is called a polynomial over R.
2. f(x) = 1/2 + 2x^2 + (3/7)x^3 is a polynomial over Q.
3. f(x) = 1 + x^2 + 4x^3 is a polynomial over Z5.
3.4.3 Note: 1. The set of all polynomials over a ring R with indeterminate x is denoted by R[x].
2. We omit altogether from the formal sum any term of the form 0x^i.
3.4.4 Note: 1. Let R be a ring. A polynomial over R can also be defined as a sequence
(a0, a1, a2, ..., an, ...) of elements of R, where all but a finite number of the ai's are zero.
2. If f(x) = a0 + a1x + a2x^2 + ... + anx^n + ... is a polynomial over a ring R, then a0, a1, a2, ... are called
the coefficients of the terms of f(x).
Equality of Polynomials: Two polynomials f = (a0, a1, ..., an, ...) and g = (b0, b1, ..., bn, ...) over a
ring R are said to be equal if ai = bi for all i ≥ 0.
Addition of Polynomials: Let f = (a0, a1, a2, ..., am, ...) and g = (b0, b1, b2, ..., bn, ...) be two polynomials over a ring R. The
sum of f and g is denoted by f + g = (c0, c1, c2, ..., ck, ...) where ci = ai + bi for i = 0, 1, 2, ....
3.5.3 Example: If f = (2, 3, 4, 0, 0, ...) and g = (3, −2, 0, 3, 0, 0, ...) are two polynomials over the
ring of integers Z, then f + g = (5, 1, 4, 3, 0, 0, ...) = 5 + x + 4x^2 + 3x^3.
3.5.4 Multiplication of Polynomials:
Let f = (a0, a1, a2, ..., am, ...) and g = (b0, b1, b2, ..., bn, ...) be two polynomials over a ring R. The
product of f and g is denoted by fg, and fg = (c0, c1, c2, ..., cp, ...) where ci = a0·bi + a1·b(i−1) + ... + ai·b0, i.e.
ci = Σ_{j=0}^{i} aj·b(i−j) = Σ_{j+k=i} aj·bk.
3.5.5 Example: If f = (1, 3, 4, 0, 0, ...) and g = (2, 2, 0, 5, 0, 0, ...) over Z6, find fg.
Here f = 1 + 3x + 4x^2 and g = 2 + 2x + 0x^2 + 5x^3.
For instance, the coefficient of x^3 in fg is 1 ⊗6 5 ⊕6 3 ⊗6 0 ⊕6 4 ⊗6 2 = 1. Computing the remaining coefficients similarly,
fg = 2 + 2x + 2x^2 + x^3 + 3x^4 + 2x^5 in Z6[x].
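The coefficient formula ci = Σ_{j+k=i} aj·bk can be checked mechanically. A minimal sketch (the function name is ours) multiplying coefficient lists over Z_n:

```python
def poly_mul_mod(f, g, n):
    """Multiply two polynomials over Z_n.
    Coefficient lists are written lowest degree first."""
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] = (prod[i + j] + a * b) % n
    return prod

# f = 1 + 3x + 4x^2 and g = 2 + 2x + 5x^3 over Z6
print(poly_mul_mod([1, 3, 4], [2, 2, 0, 5], 6))  # [2, 2, 2, 1, 3, 2]
```

The output list encodes 2 + 2x + 2x^2 + x^3 + 3x^4 + 2x^5, agreeing with the example above.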
3.6.1 Degree of a Polynomial:
Let f = (a0, a1, a2, ...) be a non-zero polynomial over a ring R. The largest integer i for which ai ≠ 0 is called the degree of f.
3.6.2 Note: 1. The degree of a non-zero polynomial f = (a0, a1, a2, ...) is n if an ≠ 0 and
ai = 0 ∀ i > n.
2. The degree of a non zero constant polynomial is zero
3. The degree of the zero polynomial is not defined.
3.6.3 Definition: Leading Coefficient: If the degree of the polynomial f(x) = a0 + a1x + ... + anx^n
is n, then an ≠ 0 is called the leading coefficient of f(x).
3.6.4 Example: 1. The degree of the polynomial f(x) = 3 + 2x + x^4 + x^6 over the ring of integers
is 6.
2. The degree of the polynomial f(x) = 7/4 over the ring of rational numbers is zero.
3.6.5 Theorem: Let f(x), g(x) be two non-zero polynomials over a ring R. Then deg(f(x) + g(x)) ≤ max{deg f(x), deg g(x)} (when f(x) + g(x) ≠ 0) and deg(f(x)·g(x)) ≤ deg f(x) + deg g(x).
Proof: Let f(x) = a0 + a1x + a2x^2 + ... + amx^m and g(x) = b0 + b1x + b2x^2 + ... + bnx^n be two polynomials
over a ring R with deg f(x) = m and deg g(x) = n. Then
am ≠ 0 and ai = 0 ∀ i > m, and
bn ≠ 0 and bi = 0 ∀ i > n.
If n < m: f(x) + g(x) = (a0 + b0) + (a1 + b1)x + (a2 + b2)x^2 + ... + (an + bn)x^n + a(n+1)x^(n+1) + ... + amx^m
If m < n: f(x) + g(x) = (a0 + b0) + (a1 + b1)x + ... + (am + bm)x^m + b(m+1)x^(m+1) + ... + bnx^n
If m = n: f(x) + g(x) = (a0 + b0) + (a1 + b1)x + ... + (am + bm)x^m
In every case deg(f(x) + g(x)) ≤ max{m, n}.
Again, f(x)·g(x) = d0 + d1x + d2x^2 + ...
where dk = Σ_{i+j=k} ai·bj from the definition.
Suppose k > m + n. Then i + j = k > m + n
⇒ i > m or j > n.
But i > m ⇒ ai = 0 and j > n ⇒ bj = 0.
∴ ai·bj = 0 if i > m or j > n
⇒ ai·bj = 0 if i + j > m + n
⇒ dk = 0 if k > m + n. Hence deg(f(x)·g(x)) ≤ m + n.
3.6.6 Corollary: If f(x) and g(x) are two nonzero polynomials over an integral domain R, then
deg(f(x)·g(x)) = deg f(x) + deg g(x).
Proof: Let deg f(x) = m and deg g(x) = n, so that
am ≠ 0 and ai = 0 ∀ i > m,
bn ≠ 0 and bi = 0 ∀ i > n.
Since R is an integral domain and am ≠ 0, bn ≠ 0, the coefficient d(m+n) = am·bn ≠ 0.
By 3.6.5, deg(f(x)·g(x)) = m + n = deg f(x) + deg g(x).
3.6.7 Corollary: If f(x) and g(x) are non-zero polynomials over an integral domain or a field, then
deg f(x) ≤ deg(f(x)·g(x)).
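The integral-domain hypothesis in 3.6.6 matters: over a ring with zero divisors the degree of a product can drop. A small sketch (function names are ours) contrasting Z6 with the field Z5:

```python
def poly_mul_mod(f, g, n):
    """Multiply polynomials (coefficient lists, lowest degree first) over Z_n."""
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] = (prod[i + j] + a * b) % n
    return prod

def deg(f):
    """Degree of a nonzero coefficient list, ignoring trailing zero coefficients.
    (The zero polynomial has no degree; this helper is not meant for it.)"""
    d = len(f) - 1
    while d > 0 and f[d] == 0:
        d -= 1
    return d

# over Z6 (not an integral domain): (1 + 2x)(1 + 3x) = 1 + 5x + 6x^2 = 1 + 5x
print(deg(poly_mul_mod([1, 2], [1, 3], 6)))  # 1, not 1 + 1 = 2
# over Z5 (a field, hence an integral domain) the degrees add:
print(deg(poly_mul_mod([1, 2], [1, 3], 5)))  # 2
```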
3.7.1 Theorem: The set of all polynomials over a ring R is a ring with respect to addition and
multiplication of polynomials.
Proof: Let R x be the set of all polynomials over a ring R with indeterminate x.
Let f(x) = Σ_{i≥0} ai·x^i and g(x) = Σ_{i≥0} bi·x^i be in R[x]. Then
f(x) + g(x) = (a0 + b0) + (a1 + b1)x + ... + (am + bm)x^m + ...
= c0 + c1x + c2x^2 + ...
f(x)·g(x) = d0 + d1x + d2x^2 + ... where dk = Σ_{i+j=k} ai·bj.
ai, bj ∈ R ⇒ ai + bi and ai·bj ∈ R ∀ i, j
⇒ ci and dk ∈ R ∀ i, k
∴ f(x) + g(x) and f(x)·g(x) ∈ R[x].
f(x) + g(x) = Σ_{i≥0} ai·x^i + Σ_{i≥0} bi·x^i
= Σ_{i≥0} (ai + bi)x^i
= Σ_{i≥0} (bi + ai)x^i since ai + bi = bi + ai ∀ ai, bi ∈ R
= Σ_{i≥0} bi·x^i + Σ_{i≥0} ai·x^i
= g(x) + f(x)
∴ Addition is commutative.
Associative law w.r.t. +:
(f(x) + g(x)) + h(x) = (Σ_{i≥0} ai·x^i + Σ_{i≥0} bi·x^i) + Σ_{i≥0} ci·x^i
= Σ_{i≥0} (ai + bi)x^i + Σ_{i≥0} ci·x^i
= Σ_{i≥0} ((ai + bi) + ci)x^i
= Σ_{i≥0} (ai + (bi + ci))x^i since (ai + bi) + ci = ai + (bi + ci) in R
= Σ_{i≥0} ai·x^i + Σ_{i≥0} (bi + ci)x^i
= Σ_{i≥0} ai·x^i + (Σ_{i≥0} bi·x^i + Σ_{i≥0} ci·x^i)
= f(x) + (g(x) + h(x))
∴ Addition is associative.
Existence of zero element:
The zero polynomial 0(x) = 0 + 0x + 0x^2 + ... = Σ_{i≥0} 0·x^i ∈ R[x].
f(x) + 0(x) = Σ_{i≥0} ai·x^i + Σ_{i≥0} 0·x^i
= Σ_{i≥0} (ai + 0)x^i
= Σ_{i≥0} ai·x^i = f(x) since ai + 0 = ai ∀ ai ∈ R.
∴ 0(x) is the zero element of R[x].
Existence of additive inverse:
f(x) = Σ_{i≥0} ai·x^i ∈ R[x] ⇒ ai ∈ R ∀ i ⇒ −ai ∈ R ∀ i
⇒ Σ_{i≥0} (−ai)x^i ∈ R[x]
⇒ (−f)(x) = Σ_{i≥0} (−ai)x^i ∈ R[x]
Now f(x) + (−f)(x) = Σ_{i≥0} ai·x^i + Σ_{i≥0} (−ai)x^i
= Σ_{i≥0} (ai + (−ai))x^i
= Σ_{i≥0} 0·x^i
= 0(x)
∴ every element of R[x] has an additive inverse.
Associative law for multiplication:
(f(x)·g(x))·h(x) = (Σ_{i≥0} ai·x^i · Σ_{j≥0} bj·x^j) · Σ_{s≥0} cs·x^s
= (Σ_{k≥0} (Σ_{i+j=k} ai·bj)x^k) · Σ_{s≥0} cs·x^s
= Σ_{n≥0} (Σ_{k+s=n} (Σ_{i+j=k} ai·bj)·cs)x^n
= Σ_{n≥0} (Σ_{i+j+s=n} (ai·bj)·cs)x^n
f(x)·(g(x)·h(x)) = Σ_{i≥0} ai·x^i · (Σ_{j≥0} bj·x^j · Σ_{s≥0} cs·x^s)
= Σ_{i≥0} ai·x^i · Σ_{k≥0} (Σ_{j+s=k} bj·cs)x^k
= Σ_{n≥0} (Σ_{i+k=n} ai·(Σ_{j+s=k} bj·cs))x^n
= Σ_{n≥0} (Σ_{i+j+s=n} ai·(bj·cs))x^n
Since (ai·bj)·cs = ai·(bj·cs) in R, the two expressions agree.
∴ Multiplication is associative.
Distributive law:
f(x)·(g(x) + h(x)) = Σ_{i≥0} ai·x^i · (Σ_{j≥0} bj·x^j + Σ_{j≥0} cj·x^j)
= Σ_{i≥0} ai·x^i · Σ_{j≥0} (bj + cj)x^j
= Σ_{n≥0} (Σ_{i+j=n} ai·(bj + cj))x^n
= Σ_{n≥0} (Σ_{i+j=n} (ai·bj + ai·cj))x^n
= Σ_{n≥0} (Σ_{i+j=n} ai·bj)x^n + Σ_{n≥0} (Σ_{i+j=n} ai·cj)x^n
= f(x)·g(x) + f(x)·h(x)
Similarly we can prove the other distributive law.
∴ R[x] is a ring.
Definition: Let R be a ring. Then R[x] is called the ring of polynomials in the indeterminate x
with coefficients in R.
Proof: Let R be an integral domain and R[x] the ring of polynomials over R with indeterminate x. Let f(x) = a0 + a1x + a2x^2 + ... and
g(x) = b0 + b1x + b2x^2 + ... be two polynomials in R[x].
f(x)·g(x) = Σ_{i≥0} ai·x^i · Σ_{j≥0} bj·x^j
= Σ_{n≥0} (Σ_{i+j=n} ai·bj)x^n
= Σ_{n≥0} (Σ_{i+j=n} bj·ai)x^n since R is commutative
= Σ_{j≥0} bj·x^j · Σ_{i≥0} ai·x^i
= g(x)·f(x).
∴ R[x] is commutative.
If f(x) ≠ 0 and g(x) ≠ 0 with leading coefficients am and bn, then am ≠ 0 and bn ≠ 0 ⇒ am·bn ≠ 0 since R is an integral domain
⇒ f(x)·g(x) ≠ 0. Hence R[x] has no zero divisors.
3.7.3 Note: If R is a ring with unity, then the ring of polynomials R[x] over R is also a ring with unity.
The polynomial I(x) = 1 + 0x + 0x^2 + ... ∈ R[x],
i.e. I(x) = Σ_{j≥0} bj·x^j where b0 = 1 and bj = 0 ∀ j ≥ 1.
Let f(x) = a0 + a1x + a2x^2 + ... ∈ R[x]. Then
f(x)·I(x) = Σ_{i≥0} ai·x^i · Σ_{j≥0} bj·x^j
= Σ_{n≥0} (Σ_{i+j=n} ai·bj)x^n
= Σ_{n≥0} (an·1)x^n
= Σ_{n≥0} an·x^n = f(x).
Similarly I(x)·f(x) = f(x).
∴ I(x) is the unity element of R[x].
Note: If R is an integral domain, no polynomial of positive degree is a unit in R[x].
Let 0 ≠ f(x) ∈ R[x].
Suppose deg f(x) > 0 and f(x) is a unit.
Then f(x)·g(x) = I(x) for some g(x) ∈ R[x]
⇒ deg f(x) + deg g(x) = deg I(x) = 0
⇒ deg f(x) = 0,
a contradiction.
Theorem: Let F be a subfield of a field E and F[x] the ring of polynomials over the field F. If α ∈ E, then the mapping φ_α : F[x] → E defined by φ_α(a0 + a1x + ... + akx^k) = a0 + a1α + ... + akα^k is a homomorphism, called the evaluation homomorphism.
Proof: Let F be a subfield of a field E and F[x] the ring of polynomials over the field F. For α ∈ E,
φ_α : F[x] → E is defined as above. Let f(x) = a0 + a1x + ... + akx^k and g(x) = b0 + b1x + ... + bkx^k.
f(x) + g(x) = c0 + c1x + ... + ckx^k where ci = ai + bi, so
φ_α(f(x) + g(x)) = (a0 + b0) + (a1 + b1)α + (a2 + b2)α^2 + ... + (ak + bk)α^k
= (a0 + a1α + ... + akα^k) + (b0 + b1α + ... + bkα^k) = φ_α(f(x)) + φ_α(g(x)).
f(x)·g(x) = d0 + d1x + d2x^2 + ... + dpx^p where dn = Σ_{i+j=n} ai·bj, so
φ_α(f(x)·g(x)) = d0 + d1α + d2α^2 + ... + dpα^p
= φ_α(f(x))·φ_α(g(x)).
∴ φ_α is a homomorphism.
3.8.2 Note: Consider
Z5 = {0, 1, 2, 3, 4} and
f(x) = x^4 + 4 ∈ Z5[x].
Then f(x) = 0 for x = 1, 2, 3, 4, since x^4 = 1 for every nonzero x in Z5.
The set { f(x) ∈ F[x] :
φ_α(f(x)) = 0 }, where 0 is the zero element of E, is called the kernel of φ_α.
It is denoted by ker φ_α.
3.9 Summary:
In this lesson we defined a polynomial over a ring. We proved that the set of all polynomials over a
ring is a ring, and that the set of all polynomials over a field is an integral domain. We defined the evaluation
homomorphism, the zero of a polynomial, and the kernel of the evaluation homomorphism.
Example: Let f(x) = 2 + 3x + 5x^2 and g(x) = 1 + 2x + 3x^2 in Z6[x].
f(x) + g(x) = (2 ⊕6 1) + (3 ⊕6 2)x + (5 ⊕6 3)x^2
= 3 + 5x + 2x^2
f(x)·g(x) = (2 + 3x + 5x^2)(1 + 2x + 3x^2)
= (2 ⊗6 1) + (2 ⊗6 2 ⊕6 3 ⊗6 1)x + (2 ⊗6 3 ⊕6 3 ⊗6 2 ⊕6 5 ⊗6 1)x^2 + (3 ⊗6 3 ⊕6 5 ⊗6 2)x^3 + (5 ⊗6 3)x^4
= 2 + x + 5x^2 + x^3 + 3x^4
Example: Let f(x) = 5 + 3x + 2x^2 and g(x) = 1 + 3x + 4x^3 in Z6[x], so that deg f(x) = 2 and deg g(x) = 3.
Then deg(f(x) + g(x)) ≤ max{2, 3}
= 3.
f(x)·g(x) = (5 + 3x + 2x^2)(1 + 3x + 4x^3)
= 5 + (5 ⊗6 3 ⊕6 3 ⊗6 1)x + (3 ⊗6 3 ⊕6 2 ⊗6 1)x^2 + (5 ⊗6 4 ⊕6 2 ⊗6 3)x^3 + (3 ⊗6 4)x^4 + (2 ⊗6 4)x^5
= 5 + 5x^2 + 2x^3 + 2x^5
deg(f(x)·g(x)) = 5
Example: In Z7[x], compute φ_5((2 + x^3)(3 + 4x^2)).
φ_5(2 + x^3) = 2 + 5^3 = 2 + 6 = 1 in Z7
φ_5(3 + 4x^2) = 3 + 4·5^2 = 3 + 2 = 5 in Z7
∴ φ_5((2 + x^3)(3 + 4x^2)) = φ_5(2 + x^3)·φ_5(3 + 4x^2) = 1·5 = 5.
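The homomorphism property of φ_5 over Z7 can be confirmed numerically; a short sketch (names are ours), evaluating a coefficient list over Z_p:

```python
def ev(coeffs, x, p):
    """Evaluate a polynomial (coefficients lowest degree first) at x in Z_p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

f = [2, 0, 0, 1]   # 2 + x^3
g = [3, 0, 4]      # 3 + 4x^2
print(ev(f, 5, 7), ev(g, 5, 7))  # 1 5

# evaluating the product polynomial gives the product of the values:
fg = [0] * (len(f) + len(g) - 1)
for i, a in enumerate(f):
    for j, b in enumerate(g):
        fg[i + j] = (fg[i + j] + a * b) % 7
print(ev(fg, 5, 7))  # 5 = 1 * 5
```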
Example: Find the zeros of f(x) = 1 + x + x^2 in Z7.
f(0) = 1 + 0 + 0^2 = 1 ≠ 0
f(1) = 1 + 1 + 1^2 = 3 ≠ 0
f(2) = 1 + 2 + 2^2 = 0
f(3) = 1 + 3 + 3^2 = 6 ≠ 0
f(4) = 1 + 4 + 4^2 = 0
f(5) = 1 + 5 + 5^2 = 3 ≠ 0
f(6) = 1 + 6 + 6^2 = 1 ≠ 0. ∴ the zeros of f(x) in Z7 are 2 and 4.
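Such exhaustive root searches over Z_p are easy to automate; a minimal sketch (the function name is ours) covering both this example and the Z5 example in 3.8.2:

```python
def zeros_mod(coeffs, p):
    """Zeros in Z_p of the polynomial with the given coefficients (lowest degree first)."""
    return [x for x in range(p)
            if sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == 0]

print(zeros_mod([4, 0, 0, 0, 1], 5))  # x^4 + 4 over Z5 -> [1, 2, 3, 4]
print(zeros_mod([1, 1, 1], 7))        # 1 + x + x^2 over Z7 -> [2, 4]
```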
3.12 Exercises:
6. Let F be a field and let F x be the ring of polynomials over F. Let f ( x ) and g ( x ) be nonzero
- Smt. K. Ruth
LESSON - 4
FACTORIZATION OF POLYNOMIALS OVER A FIELD
4.2 Structure:
This lesson contains the following components:
4.3 Introduction
4.7 Summary
4.8 Technical Terms
4.9 Exercises
4.10 Model Examination Questions
4.11 Model Practical Problem with Solution
4.12 Problems for Practicals
4.3 Introduction:
Throughout the lesson, we assume that F is a field and F x is the ring of polynomials over
F. In this lesson, similar to the division algorithm of integers, we prove the division algorithm for
F x . Some important corollaries are proved. The concept of irreducibility of polynomials is
introduced, and some criteria for determining irreducibility of quadratic and cubic polynomials are
obtained. The famous Eisenstein's irreducibility criterion is discussed. Suitable examples are
given. We also prove that every ideal in F[x] is a principal ideal. Finally, we prove that every
nonconstant polynomial in F[x] can be factored into a product of irreducible polynomials in F[x].
Consider f1(x) = f(x) − an·bm^(−1)·x^(n−m)·g(x). Now
an·bm^(−1)·x^(n−m)·g(x) = an·bm^(−1)·x^(n−m)·(b0 + b1x + ... + bm·x^m)
= an·bm^(−1)·b0·x^(n−m) + an·bm^(−1)·b1·x^(n−m+1) + ... + an·x^n,
which has the same leading term an·x^n as f(x), so deg f1(x) < n. Hence
by the induction hypothesis, f1(x) = P(x)·g(x) + r(x) with r(x) = 0 or deg r(x) < deg g(x). Taking
q(x) = an·bm^(−1)·x^(n−m) + P(x), then f(x) = q(x)·g(x) + r(x).
deg(r2(x) − r1(x)) ≤ max{deg r2(x), deg r1(x)} < deg g(x). This is a contradiction. Therefore
r1(x) = r2(x) and so q1(x) = q2(x).
Example: Divide f(x) = x^6 − 3x^5 + 4x^2 + 3x + 2 by g(x) = x^2 − 2x − 3 in Q[x].

                 x^4 − x^3 + x^2 − x + 5
x^2 − 2x − 3 ) x^6 − 3x^5 + 4x^2 + 3x + 2
                 x^6 − 2x^5 − 3x^4
                       −x^5 + 3x^4
                       −x^5 + 2x^4 + 3x^3
                              x^4 − 3x^3 + 4x^2
                              x^4 − 2x^3 − 3x^2
                                    −x^3 + 7x^2 + 3x
                                    −x^3 + 2x^2 + 3x
                                           5x^2       + 2
                                           5x^2 − 10x − 15
                                                  10x + 17

Thus q(x) = x^4 − x^3 + x^2 − x + 5 and
r(x) = 10x + 17.
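The division algorithm in F[x] can be sketched in code. The following is a minimal implementation over Q using exact rational arithmetic (the function name is ours), verified on the example above:

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Divide f by g in Q[x] (coefficient lists, lowest degree first).
    Returns (q, r) with f = q*g + r and deg r < deg g."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    while len(f) >= len(g) and any(f):
        if f[-1] == 0:          # drop a zero leading coefficient
            f.pop()
            continue
        shift = len(f) - len(g)
        coef = f[-1] / g[-1]    # leading coefficient of the next quotient term
        q[shift] = coef
        for i, c in enumerate(g):
            f[i + shift] -= coef * c
        f.pop()                 # leading term is now zero
    return q, f

# f = x^6 - 3x^5 + 4x^2 + 3x + 2, g = x^2 - 2x - 3
q, r = poly_divmod([2, 3, 4, 0, 0, -3, 1], [-3, -2, 1])
print(q)  # encodes q(x) = x^4 - x^3 + x^2 - x + 5
print(r)  # encodes r(x) = 10x + 17
```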
Example: Divide f(x) = x^3 − x^2 + 4x − 1 by x − 1:

         x^2 + 4
x − 1 ) x^3 − x^2 + 4x − 1
         x^3 − x^2
               0  + 4x − 1
                    4x − 4
                         3

So q(x) = x^2 + 4 and the remainder is 3 = f(1), as the remainder theorem predicts.
Proof: If f ( x ) has no root in F then the corollary is true. So, suppose that f ( x ) has at least one
Example: In Z7[x], divide f(x) = x^3 + x^2 + 6x + 6 by x − 1. Note f(1) = 14 = 0 in Z7, so x − 1 is a factor.

         x^2 + 2x + 1
x − 1 ) x^3 + x^2 + 6x + 6
         x^3 − x^2
              2x^2 + 6x
              2x^2 − 2x
                      x + 6     (8x = x in Z7)
                      x − 1
                          0     (7 = 0 in Z7)

∴ x^3 + x^2 + 6x + 6 = (x − 1)(x + 1)^2 in Z7[x].
Solution: Working in Z11[x], divide f(x) = x^4 + x^3 + 6x^2 + 4 by g(x) = 5x^2 + x + 2.

                  9x^2 + 5x + 1
5x^2 + x + 2 ) x^4 +  x^3 + 6x^2      + 4
                 x^4 + 9x^3 + 7x^2
                       3x^3 + 10x^2
                       3x^3 +  5x^2 + 10x
                               5x^2 +   x + 4
                               5x^2 +   x + 2
                                            2

Thus f(x) = (9x^2 + 5x + 1)(5x^2 + x + 2) + 2, i.e. q(x) = 9x^2 + 5x + 1 and r(x) = 2.
Example: x^2 + 2 ∈ Q[x] is irreducible over Q.
Irreducible polynomials play an important role in the study of field theory. The problem of
determining whether a given f ( x) F x is irreducible over F is difficult. We now give some
criteria for determining irreducibility of quadratic and cubic polynomials.
x^2 − 2 is irreducible over Q.
Z5 .
We now state a theorem which is useful in proving some interesting theorems and whose
proof is beyond the scope of this book.
g(x) and h(x) in Q[x] such that deg g(x) = r < deg f(x) and deg h(x) = s < deg f(x), if and only if it has such a factorization with polynomials of the same degrees r and s in Z[x].
Proof: Since a ≠ 0, we can write a as α/β, where α, β ∈ Z and their gcd (α, β) = 1. Then a is a zero of the monic polynomial f(x) = a0 + a1x + ... + a(n−1)x^(n−1) + x^n, so
a0 + a1(α/β) + ... + a(n−1)(α/β)^(n−1) + (α/β)^n = 0.
Multiply the above equation by β^n to obtain
a0·β^n + a1·α·β^(n−1) + ... + a(n−1)·α^(n−1)·β + α^n = 0.
Hence α^n = −β(a0·β^(n−1) + a1·α·β^(n−2) + ... + a(n−1)·α^(n−1)), so β divides α^n. Because
α, β ∈ Z and (α^n, β) = 1 and β divides α^n, we have that β = ±1. Therefore
a = ±α ∈ Z. The last equation also shows that α divides a0, and hence a divides a0.
The question of deciding whether a given polynomial is irreducible or not can be a difficult
and laborious one. Few criteria exist which declare that a given polynomial is or is not irreducible.
One of these few is the following.
Proof: Assume that f(x) is reducible over Q. Then f(x) factors into a product of two polynomials
in Q[x] of lower degrees r and s. By theorem 4.5.5, f(x) has such a factorization with polynomials
of the same degrees r and s in Z[x]. Accordingly f(x) = (b0 + b1x + ... + br·x^r)(c0 + c1x + ... + cs·x^s).
Since p | a0 = b0·c0 and p^2 ∤ a0, either p | b0 and p ∤ c0, or
p | c0 and p ∤ b0. Consider the case p | c0 and
p ∤ b0.
Because p ∤ an = br·cs, it follows that p ∤ br and
p ∤ cs. Let cm be the first coefficient in c0 + c1x + ... + cs·x^s
such that p ∤ cm. Observe that am = b0·cm + b1·c(m−1) + ... + bm·c0; from this we get that p ∤ am, since p ∤ b0·cm
and p | (b1·c(m−1) + ... + bm·c0). By hypothesis p | ai for all i < n, so m ≥ n. Thus n ≤ m ≤ s < n, which is impossible. Similarly,
if p | b0 and p ∤ c0, we arrive at a contradiction. Therefore our assumption is wrong and f(x)
is irreducible over Q.
Example: Consider x^2 + 5x + 10 ∈ Q[x]. Take p = 5:
5 ∤ 1, 5 | 5,
5 | 10 and 5^2 ∤ 10. By theorem 4.5.9, we get that x^2 + 5x + 10 is irreducible over Q.
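Eisenstein's conditions are purely arithmetic, so they are easy to check by machine. A minimal sketch (the function name is ours):

```python
def eisenstein(coeffs, p):
    """Check Eisenstein's criterion at prime p for the polynomial whose
    coefficients are listed lowest degree first: p must divide every
    coefficient except the leading one, must not divide the leading
    coefficient, and p^2 must not divide the constant term."""
    *lower, lead = coeffs
    return (lead % p != 0
            and all(c % p == 0 for c in lower)
            and lower[0] % (p * p) != 0)

print(eisenstein([10, 5, 1], 5))  # True  -> x^2 + 5x + 10 is irreducible over Q
print(eisenstein([10, 5, 1], 2))  # False -> the criterion fails at p = 2
```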
Example: Show that the p-th cyclotomic polynomial Φp(x) = x^(p−1) + x^(p−2) + ... + x + 1 is irreducible over Q for every prime p.
Solution: Note that Φp(x) = (x^p − 1)/(x − 1). Let
g(x) = Φp(x + 1) = ((x + 1)^p − 1)/x
= x^(p−1) + C(p,1)x^(p−2) + ... + C(p,p−2)x + C(p,p−1).
Note that p | C(p,r) for r = 1, ..., p−1, while p ∤ 1 and p^2 ∤ C(p,p−1) = p. By theorem 4.5.9, g(x) is irreducible over
Q, and hence Φp(x) is irreducible over Q.
a in Q. By corollary 4.5.7, a ∈ Z and a | 8. Therefore a = ±1, ±2, ±4, ±8. But none of these is a zero
of f(x). This is a contradiction.
∴ f(x) is irreducible over Q.
We now introduce the notation ⟨a⟩ = {ra : r ∈ R} to represent the ideal of all multiples of a.
4.6.1 Definition: Let R be a commutative ring with unity. An ideal I of R is called a principal ideal
if I = ⟨a⟩ for some a ∈ R.
Solution: Let I be an ideal of Z. If I = {0}, then clearly I = ⟨0⟩. So, assume that I ≠ {0}.
Choose the least positive integer m in I. Clearly ⟨m⟩ = {km : k ∈ Z} ⊆ I. Conversely, if h ∈ I, then h = qm + r with 0 ≤ r < m; r = h − qm ∈ I, so r = 0 by the minimality of m, and h ∈ ⟨m⟩. Hence I = ⟨m⟩.
2. We know that F[x] is a commutative ring with unity. Clearly x ∈ F[x]. Then the principal
ideal ⟨x⟩ is the set { x·f(x) : f(x) ∈ F[x] }, i.e. the set of all polynomials in F[x] with zero constant term.
= r1(x)·r2(x)·...·rn(x), where ri(x) ∈ F[x] for i = 1, ..., n, then p(x) divides ri(x) for at least one i.
Proof: We prove the corollary by induction on n. By theorem 4.6.5, the result is true for n 2 .
Assume that the result is true for n − 1. Let r(x) = r1(x)·...·r(n−1)(x); then p(x) divides r(x)·rn(x).
Again by theorem 4.6.5, either p(x) | r(x) or p(x) | rn(x). If p(x) | r(x), then by the induction hypothesis
p(x) | ri(x) for some 1 ≤ i ≤ n − 1. Thus in either case we get that p(x) | ri(x) for some i.
that the result is true for all polynomials g(x) in F[x] such that deg g(x) < deg f(x). On the basis
of this assumption we aim to prove the result for f(x). If f(x) is irreducible, then there is nothing
to prove. If f(x) is not irreducible, then f(x) = g(x)·h(x), where deg g(x) < deg f(x) and
deg h(x) < deg f(x). Then by our induction hypothesis g(x) and h(x) can be written as products
of a finite number of irreducible polynomials in F[x]: g(x) = r1(x)·...·rn(x) and h(x) = s1(x)·...·sm(x), so f(x) is such a product too.
Uniqueness: suppose f(x) = p1(x)·p2(x)·...·pr(x) = q1(x)·q2(x)·...·qs(x), where the pi(x) and qj(x) are irreduc-
ible polynomials in F[x]. Since p1(x) | qi(x) for some i, we have that qi(x) = u1·p1(x), where u1 is a unit in F[x].
Thus p1(x)·p2(x)·...·pr(x) = u1·p1(x)·q1(x)·...·q(i−1)(x)·q(i+1)(x)·...·qs(x); cancel off p1(x) and we are left with
p2(x)·...·pr(x) = u1·q1(x)·...·q(i−1)(x)·q(i+1)(x)·...·qs(x). Repeat the argument on this relation with p2(x). After
r steps the left side becomes 1, and the right side is a product of a certain number of q(x)'s (the excess
of s over r). Since the q(x)'s are not units, this forces r ≥ s. Similarly s ≥ r, so that r = s.
In the process we have also shown that every pi(x) = ui·qj(x) for some j, where ui is a unit.
4.7 Summary:
In this lesson you have learnt the division algorithm of F[x], irreducible polynomials, Eisenstein's irreducibility criterion, and the unique factorization of polynomials over a field.
4.8 Technical Terms:
Division algorithm of F[x]
Remainder theorem
Factor Theorem
Eisenstein’s irreducibility criterion
7. Prove that every nonconstant polynomial in F[x] can be factored in F[x] uniquely (up to order and unit factors) as a product of irreducible polynomials in F[x].
4.10 Exercises:
1. Determine which of the following are irreducible over Q.
a) 2x^5 + 6x^3 + 9x^2 + 15
b) x^4 + 3x^2 + 9
c) 3x^5 + 7x^4 + 7
2. Prove that
a) x^2 + 1 is irreducible over Z7.
b) x^2 + x + 1 is irreducible over Z2.
a) x^2 + 12
b) 8x^3 + 6x^2 + 9x + 24
c) 4x^10 + 6x^3 + 24x + 18
d) 2x^10 + 25x^3 + 10x^2 + 30
Problem: The polynomial 2x^3 + 3x^2 − 7x − 5 can be factored into linear factors in Z11[x]. Find this
factorization.
Solution: By trial, x = 3 is a zero: 2·27 + 3·9 − 21 − 5 = 55 = 0 in Z11. Divide by x − 3:

         2x^2 + 9x − 2
x − 3 ) 2x^3 + 3x^2 − 7x − 5
         2x^3 − 6x^2
                9x^2 − 7x
                9x^2 − 5x       (−27x = −5x in Z11)
                      −2x − 5
                      −2x + 6
                            0   (−11 = 0 in Z11)

∴ 2x^3 + 3x^2 − 7x − 5 = (x − 3)(2x^2 + 9x − 2).
Next divide 2x^2 + 9x − 2 by x + 3:

         2x + 3
x + 3 ) 2x^2 + 9x − 2
         2x^2 + 6x
                3x − 2
                3x + 9          (−11 = 0 in Z11)
                     0

∴ 2x^3 + 3x^2 − 7x − 5 = (x − 3)(x + 3)(2x + 3) in Z11[x].
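Finding the zeros by brute force confirms the linear factors. A short sketch (the function name is ours):

```python
def roots_mod(coeffs, p):
    """All zeros in Z_p of the polynomial with the given coefficients (lowest first)."""
    def ev(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [x for x in range(p) if ev(x) == 0]

# f(x) = 2x^3 + 3x^2 - 7x - 5 over Z11
# zeros 3 and 8 (= -3) give the factors x - 3 and x + 3;
# 4 is the zero of the remaining factor 2x + 3.
print(roots_mod([-5, -7, 3, 2], 11))  # [3, 4, 8]
```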
x^8 + 5x^7 + 3x^6 + 2x^5 + x^4 + 8x^3 + 6x^2 + 2x + 1 is divided by x^4 + x^3 + 3x^2 + 1 over Q[x].
by 3x^4 + 2x^3 + 4x^2 + 1 over Q[x].
4. Show that the polynomial x^2 + 1 is irreducible over the field of real numbers and reducible over
the field of complex numbers.
9. The polynomial x^3 + 2x^2 + 2x + 1 can be factored into linear factors in Z7[x]. Find this factorization.
10. (a) Let F be a field and f(x), g(x) ∈ F[x]. Show that f(x) divides g(x) if and only if
g(x) ∈ ⟨f(x)⟩.
LESSON - 5
VECTOR SPACES
5.1 Objective of the Lesson:
To learn the definition of vector space, subspace, some examples of vector spaces,
algebra of subspaces and related theorems.
5.2 Structure
5.3 Introduction
5.8 Summary
5.11 Exercises
5.3 Introduction:
In this lesson we introduce vector space, subspace, linear sum of subspaces, linear span
of a set, linear combination of vectors, linearly dependent and independent vectors.
i) a(α + β) = aα + aβ
ii) (a + b)α = aα + bα
iii) a(bα) = (ab)α
5.4.4 Note: 1) If V is a vector space over F we write V(F) is a vector space. If the field is under-
stood we simply say V is a vector space.
2. Elements of V are called vectors and elements of F are called scalars.
3. The internal composition + in V is called addition and the external composition . is called scalar
multiplication.
5.4.5 Example: 1. Let F be a field and K be a subfield of F then F is a vector space over K.
Solution: Suppose F is a field and K a subfield of F.
a(α + β) = aα + aβ by the distributive law in F
5.4.7 Example: Let F be a field. Vn = { (a1, a2, ..., an) : ai ∈ F for 1 ≤ i ≤ n } is the set of n-tuples. Vn is a
vector space over F with respect to addition '+' and scalar multiplication '·' defined by
α + β = (a1 + b1, a2 + b2, ..., an + bn) and a·α = (aa1, aa2, ..., aan) for α = (a1, a2, ..., an), β = (b1, b2, ..., bn) ∈ Vn and a ∈ F.
(α + β) + γ = α + (β + γ) ∀ α, β, γ ∈ Vn
∴ Addition is associative.
α + β = (a1 + b1, a2 + b2, ..., an + bn)
= (b1 + a1, b2 + a2, ..., bn + an) = β + α
∴ Addition is commutative.
We have 0 ∈ F
⇒ 0 = (0, 0, ..., 0) ∈ Vn
α + 0 = (a1 + 0, a2 + 0, ..., an + 0)
= (a1, a2, ..., an)
= α
α = (a1, a2, ..., an) ∈ Vn ⇒ −a1, −a2, ..., −an ∈ F
⇒ −α = (−a1, −a2, ..., −an) ∈ Vn
α + (−α) = (a1 − a1, a2 − a2, ..., an − an) = (0, 0, ..., 0)
= 0
a(α + β) = a(a1 + b1, a2 + b2, ..., an + bn)
= (a(a1 + b1), a(a2 + b2), ..., a(an + bn))
= aα + aβ
(a + b)α = a(a1, a2, ..., an) + b(a1, a2, ..., an)
= aα + bα
a(bα) = a(ba1, ba2, ..., ban)
= (ab)(a1, a2, ..., an)
= (ab)α
1·α = 1(a1, a2, ..., an)
= (a1, a2, ..., an) = α. ∴ Vn is a vector space over F.
i) a·0 = 0
ii) 0·α = 0
iii) a(−α) = −(aα)
iv) (−a)α = −(aα)
v) (−a)(−α) = aα
vi) a(α − β) = aα − aβ
vii) (a − b)α = aα − bα
viii) aα = 0 ⇒ a = 0 or α = 0, ∀ α, β ∈ V and a, b ∈ F
i) a·0 = a(0 + 0) = a·0 + a·0
⇒ a·0 + 0 = a·0 + a·0
⇒ 0 = a·0, i.e. a·0 = 0.
iii) aα + a(−α) = a(α + (−α))
= a·0
= 0 by i)
∴ a(−α) = −(aα)
iv) (−a)α + aα = (−a + a)α
= 0·α
= 0 by ii)
∴ (−a)α = −(aα)
v) (−a)(−α) = −(a(−α)) by iv)
= −(−(aα)) = aα
vi) a(α − β) = a(α + (−β))
= aα + a(−β)
= aα + (−(aβ)) by iii)
= aα − aβ
vii) (a − b)α = (a + (−b))α = aα + (−b)α
= aα − bα by iv)
viii) Suppose aα = 0 and a ≠ 0.
Then a⁻¹ ∈ F and
a⁻¹(aα) = a⁻¹·0
⇒ (a⁻¹a)α = 0
⇒ 1·α = 0
⇒ α = 0
∴ aα = 0 ⇒ either a = 0 or α = 0.
Note: If aα = bα and α ≠ 0, then a = b.
i) α, β ∈ W ⇒ α + β ∈ W
ii) a ∈ F, α ∈ W ⇒ aα ∈ W
Proof: Suppose W is a subspace of V.
Then W itself is a vector space.
⇒ (W, +) is a group
⇒ α, β ∈ W ⇒ α + β ∈ W.
Clearly a ∈ F and α ∈ W ⇒ aα ∈ W.
∴ the conditions i) and ii) are necessary.
Now suppose i) α, β ∈ W ⇒ α + β ∈ W and
ii) a ∈ F and α ∈ W ⇒ aα ∈ W.
Hence W is a subspace of V ( F ) .
5.5.4 Theorem: A necessary and sufficient condition for a non-empty subset W of a vector
space V(F) to be a subspace of V(F) is: a, b ∈ F and α, β ∈ W ⇒ aα + bβ ∈ W.
Suppose W is a subspace of V(F).
⇒ a, b ∈ F and α, β ∈ W ⇒ aα ∈ W, bβ ∈ W
⇒ aα + bβ ∈ W.
Conversely, suppose a, b ∈ F and α, β ∈ W ⇒ aα + bβ ∈ W.
Let α, β ∈ W. Taking a = 1, b = −1 gives α − β ∈ W.
⇒ (W, +) is a group.
All elements of W are elements of V, so addition is commutative in W.
⇒ (W, +) is an abelian group.
Let a ∈ F and α ∈ W.
Then aα = aα + 0·α ∈ W.
The remaining axioms hold in W because they hold in V:
a(α + β) = aα + aβ,
(a + b)α = aα + bα,
a(bα) = (ab)α and 1·α = α.
Hence W is a subspace of V ( F ) .
Take W = W1 ∩ W2.
Clearly 0 ∈ W1 and 0 ∈ W2 ⇒ 0 ∈ W1 ∩ W2 = W, so W ≠ ∅.
Let a, b ∈ F and α, β ∈ W. Then α, β ∈ W1 and α, β ∈ W2, so aα + bβ ∈ W1 ........... (1) and aα + bβ ∈ W2 ........... (2)
∴ a, b ∈ F and α, β ∈ W ⇒ aα + bβ ∈ W1 ∩ W2 = W, so W is a subspace.
Example: In V3(F), let W1 = {(x, y, 0) : x, y ∈ F} and W2 = {(0, 0, y) : y ∈ F}. For α = (0, 0, y1), β = (0, 0, y2) ∈ W2, aα + bβ = a(0, 0, y1) + b(0, 0, y2) ∈ W2, so W2 is a subspace.
But W1 ∪ W2 is not a subspace: (0, 2, 0) + (0, 0, 3) = (0, 2, 3) ∉ W1 ∪ W2.
5.5.7 Theorem: Union of two subspaces of a vector space is a subspace iff one is contained in
the other.
Proof: Suppose W1 ∪ W2 is a subspace and, if possible, W1 ⊄ W2 and W2 ⊄ W1. Then there is an α ∈ W1 with α ∉ W2 ....... (1)
and a β ∈ W2 with β ∉ W1 ....... (2)
α ∈ W1 ⊆ W1 ∪ W2 and
β ∈ W2 ⊆ W1 ∪ W2
⇒ α + β ∈ W1 ∪ W2 since W1 ∪ W2 is a subspace
⇒ α + β ∈ W1 or α + β ∈ W2.
If α + β ∈ W1, then β = (α + β) − α ∈ W1,
a contradiction to (2).
If α + β ∈ W2, then α = (α + β) − β ∈ W2,
a contradiction to (1).
Hence either W1 ⊆ W2 or W2 ⊆ W1.
Conversely, if W1 ⊆ W2 then W1 ∪ W2 = W2,
and if W2 ⊆ W1 then W1 ∪ W2 = W1.
In either case W1 ∪ W2 is a subspace.
Definition: Let V(F) be a vector space and W1, W2 two subspaces of V(F). The set W1 + W2 = { α1 + α2 : α1 ∈ W1, α2 ∈ W2 } is called the linear sum of W1 and W2.
Clearly 0 ∈ W1 and 0 ∈ W2
⇒ 0 = 0 + 0 ∈ W1 + W2, so W1 + W2 ≠ ∅.
Let α = α1 + α2 and β = β1 + β2, where α1, β1 ∈ W1 and α2, β2 ∈ W2, and let a, b ∈ F.
Now aα + bβ = a(α1 + α2) + b(β1 + β2)
= (aα1 + aα2) + (bβ1 + bβ2)
= (aα1 + bβ1) + (aα2 + bβ2)
∴ aα + bβ ∈ W1 + W2, since aα1 + bβ1 ∈ W1 and aα2 + bβ2 ∈ W2; so W1 + W2 is a subspace of V(F).
Let α1 ∈ W1 ⇒ α1 = α1 + 0 ∈ W1 + W2
⇒ W1 ⊆ W1 + W2 ...........(1)
Similarly α2 ∈ W2 ⇒ α2 = 0 + α2 ∈ W1 + W2
⇒ W2 ⊆ W1 + W2 ...........(2)
5.6.4 Linear Span of a set: Let V(F) be a vector space and S a non-empty subset of V. The set
of all linear combinations of elements of all possible finite subsets of S is said to be linear span of
S.
It is denoted by L (S).
5.6.5 Note: 1. If S is a non-empty subset of a vector space V then
2. S is a subset of L (S).
5.6.6. Theorem: The linear span L (S) of any subset S of a vector space V is a subspace of V.
Proof: Let V(F) be a vector space and S a non-empty subset of V.
Let α, β ∈ L(S) and a, b ∈ F, say α = a1α1 + a2α2 + ... + amαm and β = b1β1 + b2β2 + ... + bnβn with αi, βj ∈ S. Then
aα + bβ = (aa1)α1 + (aa2)α2 + ... + (aam)αm + (bb1)β1 + (bb2)β2 + ... + (bbn)βn
⇒ aα + bβ ∈ L(S)
Hence L (S) is a subspace of V.
5.6.7 Theorem: If S is a non-empty sub-set of a vector space V(F) then linear span of S is the
intersection of all sub-spaces of V which contain S.
Let α ∈ L(S) ⇒ α = a1α1 + a2α2 + ... + anαn for some α1, α2, ..., αn ∈ S and ai ∈ F.
Let W be any subspace of V containing S.
α1, α2, ..., αn ∈ S ⇒ α1, α2, ..., αn ∈ W
⇒ a1α1 + a2α2 + ... + anαn ∈ W, since W is a subspace and is closed under addition and
scalar multiplication
⇒ α ∈ W,
i.e. L(S) ⊆ W.
∴ L(S) is contained in every subspace of V which contains S; since L(S) is itself such a subspace, L(S) is the intersection of all subspaces of V which contain S.
Theorem: i) S is a subspace of V ⇔ L(S) = S
ii) L(L(S)) = L(S)
Proof: i) Suppose S is a subspace of V.
Let α ∈ L(S). Then α is a linear combination of elements of S
⇒ α ∈ S, since S is closed under addition and scalar multiplication.
∴ L(S) ⊆ S .......... (1)
Let α ∈ S. Then α = 1·α ∈ L(S)
⇒ S ⊆ L(S) .......... (2)
From (1) and (2), L(S) = S.
Now suppose L(S) = S. By 5.6.6, L(S) is a subspace of V,
∴ S is a subspace of V.
ii) We know that L(S) is a subspace of V by 5.6.6.
By (i), L(L(S)) = L(S).
5.6.10 Theorem: If S and T are two subsets of a vector space V(F) then
(i) S ⊆ T ⇒ L(S) ⊆ L(T)
(ii) L(S ∪ T) = L(S) + L(T)
Proof: i) Suppose S ⊆ T.
Let α ∈ L(S). Then α is a linear combination of finitely many elements of S, hence of elements of T
⇒ α ∈ L(T).
∴ L(S) ⊆ L(T).
ii) Let α ∈ L(S ∪ T). Then α is a linear combination of elements of S together with elements of T, say α = β + γ with β ∈ L(S), γ ∈ L(T)
⇒ α ∈ L(S) + L(T). ∴ L(S ∪ T) ⊆ L(S) + L(T) .............. (1)
Now suppose α ∈ L(S) + L(T), say α = β + γ
where β ∈ L(S) and γ ∈ L(T). Since S, T ⊆ S ∪ T, β, γ ∈ L(S ∪ T), which is a subspace, so α ∈ L(S ∪ T). ∴ L(S) + L(T) ⊆ L(S ∪ T) .............. (2)
From (1) and (2), L(S ∪ T) = L(S) + L(T).
In particular, for subspaces W1 and W2 of V,
L(W1 ∪ W2) = L(W1) + L(W2) = W1 + W2.
5.7.1 Definition: Linearly dependent vectors: A finite subset {α1, α2, ..., αn} of vectors of a vector
space V(F) is said to be linearly dependent if there exist scalars a1, a2, ..., an ∈ F, not all zero, such that a1α1 + a2α2 + ... + anαn = 0.
5.7.2 Linearly Independent Vectors: A finite subset {α1, α2, ..., αn} of vectors of a vector space
V(F) is said to be linearly independent if it is not linearly dependent.
5.7.3 Note: A finite subset {α1, α2, ..., αn} of vectors of a vector space V(F) is linearly independent iff every relation of the form a1α1 + a2α2 + ... + anαn = 0, where the ai ∈ F, implies a1 = a2 = ... = an = 0.
5.7.4 Note: A set of vectors which contains the zero vector is linearly dependent (L.D.).
Suppose α1 = 0. Then 1·α1 + 0·α2 + ... + 0·αn = 0, where the scalars are not all zero (since α1 = 0 and
the first scalar is 1 ≠ 0).
∴ {α1, α2, ..., αn} is linearly dependent.
Proof: Let V(F) be a vector space and S = {α1, α2, ..., αn} a linearly dependent set.
There exist scalars a1, a2, ..., an ∈ F, not all zero, such that a1α1 + a2α2 + ... + anαn = 0 .....(1)
For any superset S1 = {α1, ..., αn, β1, ..., βm} of S, (1) gives a1α1 + a2α2 + ... + anαn + 0·β1 + ... + 0·βm = 0 ............. (2), with coefficients not all zero.
∴ S1 is L.D.
5.7.7 Theorem: Every non-empty subset of a linearly independent set is linearly independent.
Proof: Let V(F) be a vector space and S = {α1, α2, ..., αn} a linearly independent set.
Proof: Let V(F) be a vector space and S = {α1, α2, ..., αn} a finite subset of non-zero vectors of
V(F).
Suppose S is linearly dependent.
Then there exist scalars a1, a2, ..., an ∈ F, not all zero, such that a1α1 + a2α2 + ... + anαn = 0 ....... (1)
Let k be the largest index with ak ≠ 0. If k = 1, then a1α1 = 0
⇒ α1 = 0 since a1 ≠ 0, contradicting the hypothesis that the vectors are non-zero.
∴ 2 ≤ k ≤ n, and from (1), αk = −ak⁻¹(a1α1 + ... + a(k−1)α(k−1)) is a linear combination of its preceding vectors.
5.7.9 Theorem: Let V(F) be a vector space and S = {α1, α2, ..., αn} a subset of V. If αi ∈ S is
a linear combination of its preceding vectors, then L(S) = L(S − {αi}).
Proof: Let V(F) be a vector space and S = {α1, α2, ..., αn} a subset of V, with αi = a1α1 + a2α2 + ... + a(i−1)α(i−1).
To prove that L(S) = L(S − {αi}):
Let α ∈ L(S). Then
α = b1α1 + b2α2 + ... + b(i−1)α(i−1) + biαi + b(i+1)α(i+1) + ... + bnαn where b1, b2, ..., bn ∈ F
= b1α1 + b2α2 + ... + b(i−1)α(i−1) + bi(a1α1 + a2α2 + ... + a(i−1)α(i−1)) + b(i+1)α(i+1) + ... + bnαn
= (b1 + bia1)α1 + (b2 + bia2)α2 + ... + (b(i−1) + bia(i−1))α(i−1) + b(i+1)α(i+1) + ... + bnαn
= a linear combination of elements of S − {αi}
⇒ α ∈ L(S − {αi})
∴ L(S) ⊆ L(S − {αi}) ........... (1)
Conversely, S − {αi} ⊆ S ⇒ L(S − {αi}) ⊆ L(S) ........... (2)
From (1) and (2), L(S) = L(S − {αi}).
5.8 Summary:
In this lesson we learnt definitions of vector space, subspace, linear sum, linear span,
linearly dependent and linearly independent vectors. We proved theorems relating to Algebra of
subspaces, the linear sum of subspaces, and the linear span of a set. We discussed the concepts of linear
combination and linear dependence and independence of vectors.
A, B ∈ M ⇒ A + B ∈ M
∴ M is closed w.r.t. addition +.
ii) We know that addition of matrices is associative:
(A + B) + C = A + (B + C) ∀ A, B, C ∈ M
A + B = B + A ∀ A, B ∈ M
∴ (M, +) is an abelian group.
vi) Let a ∈ R and A ∈ M ⇒ aA ∈ M.
The remaining axioms follow from the properties of matrix operations, e.g. x) 1·A = A.
∴ M(R) is a vector space.
2. The set of all real valued continuous functions defined in the open interval (0, 1) is a vector space
over the field of real numbers with respect to addition and scalar multiplication defined by
( f g )( x) f ( x) g ( x ) and ( af )( x ) af ( x ) where a R and 0 x 1 .
f, g ∈ S ⇒ f + g ∈ S, since the sum of two continuous functions is continuous.
((f + g) + h)(x) = (f + g)(x) + h(x) = (f(x) + g(x)) + h(x)
= f(x) + (g(x) + h(x))
= f(x) + (g + h)(x)
= (f + (g + h))(x)
The zero function 0 ∈ S, and
(f + 0)(x) = f(x) + 0(x) = f(x) + 0 = f(x)
f is continuous ⇒ −f is continuous
⇒ −f ∈ S, and
f + (−f) = 0
v) For f, g ∈ S we have (f + g)(x) = f(x) + g(x)
= g(x) + f(x)
= (g + f)(x)
∴ Addition is commutative.
Hence (S, +) is an abelian group.
vi) f ∈ S and a ∈ R ⇒ af ∈ S
vii) (a(f + g))(x) = a(f + g)(x) = a(f(x) + g(x))
= af(x) + ag(x)
= (af)(x) + (ag)(x)
= (af + ag)(x)
∴ a(f + g) = af + ag
viii) ((a + b)f)(x) = (a + b)f(x) = af(x) + bf(x)
= (af)(x) + (bf)(x)
= (af + bf)(x)
∴ (a + b)f = af + bf ∀ a, b ∈ R and f ∈ S
For 1 ∈ R, (1·f)(x) = 1·f(x) = f(x)
∴ 1·f = f
Similarly (ab)f = a(bf).
∴ S is a vector space.
Solution: Let F be a field and V3(F) = { (a, b, c) : a, b, c ∈ F } the vector space of ordered triads. Let
W = { (x, y, 0) : x, y ∈ F }.
Let α, β ∈ W and a, b ∈ F.
α, β ∈ W ⇒ α = (x1, y1, 0) and β = (x2, y2, 0) where x1, y1, x2, y2 ∈ F
aα + bβ = a(x1, y1, 0) + b(x2, y2, 0) = (ax1 + bx2, ay1 + by2, 0) ∈ W
∴ W is a subspace of V3(F).
5. Express (1, −2, 5) as a linear combination of the vectors (1, 1, 1), (1, 2, 3) and (2, −1, 1).
Solution: Suppose (1, −2, 5) = a(1, 1, 1) + b(1, 2, 3) + c(2, −1, 1)
= (a + b + 2c, a + 2b − c, a + 3b + c)
⇒ a + b + 2c = 1 .......... (1)
a + 2b − c = −2 .......... (2)
a + 3b + c = 5 .......... (3)
(1) + 2 × (2): a + b + 2c = 1
2a + 4b − 2c = −4
⇒ 3a + 5b = −3 ......... (4)
(2) + (3): 2a + 5b = 3 ......... (5)
(4) − (5): a = −6
From (5): b = 3
From (1): c = 2
∴ (1, −2, 5) = −6(1, 1, 1) + 3(1, 2, 3) + 2(2, −1, 1).
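The elimination above amounts to solving a 3 × 3 linear system whose columns are the given vectors. A small exact-arithmetic sketch (function name ours):

```python
from fractions import Fraction

def solve3(A, b):
    """Solve a 3x3 linear system over Q by Gauss-Jordan elimination."""
    n = 3
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]  # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# columns are the vectors (1,1,1), (1,2,3), (2,-1,1); right side is (1,-2,5)
A = [[1, 1, 2],
     [1, 2, -1],
     [1, 3, 1]]
print(solve3(A, [1, -2, 5]))  # coefficients a = -6, b = 3, c = 2
```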
6. Show that the vectors (1, 2, 3), (1, 0, 0), (0, 1, 0) and (0, 0, 1) form a linearly dependent subset of
V3(R).
Solution: Suppose a(1, 2, 3) + b(1, 0, 0) + c(0, 1, 0) + d(0, 0, 1) = 0
⇒ (a + b, 2a + c, 3a + d) = (0, 0, 0)
⇒ b = −a, c = −2a, d = −3a; taking a = 1 gives b = −1, c = −2, d = −3.
There exist scalars, not all zero, such that the linear combination of the given vectors is
zero. Hence the given vectors are linearly dependent.
7. Show that the vectors (1, 0, −1), (1, 2, 1), (0, −3, 2) are linearly independent in V3(R).
Solution: Suppose a(1, 0, −1) + b(1, 2, 1) + c(0, −3, 2) = 0
⇒ (a + b, 2b − 3c, −a + b + 2c) = (0, 0, 0)
⇒ a + b = 0 ... (1), 2b − 3c = 0 ... (2), −a + b + 2c = 0 ... (3)
(1) + (3) gives 2b + 2c = 0 ... (4)
Solving (2) and (4): c = 0, b = 0, and then from (1), a = 0, i.e.
a = b = c = 0.
The given vectors are linearly independent.
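For three vectors in R^3, independence can also be tested by a determinant: it is nonzero exactly when the vectors are independent. A small sketch (function name ours):

```python
def det3(u, v, w):
    """Determinant of the 3x3 matrix with rows u, v, w; nonzero iff the rows
    are linearly independent in R^3."""
    a, b, c = u
    d, e, f = v
    g, h, i = w
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(det3((1, 0, -1), (1, 2, 1), (0, -3, 2)))  # 10, nonzero -> independent
# a dependent triple (our own example: the second row is twice the first):
print(det3((1, 2, 3), (2, 4, 6), (0, 1, 0)))    # 0 -> dependent
```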
8. If α, β, γ are linearly independent vectors in V(R), show that α + β, β + γ, γ + α are also linearly independent.
Solution: Suppose a(α + β) + b(β + γ) + c(γ + α) = 0
⇒ aα + aβ + bβ + bγ + cγ + cα = 0
⇒ (a + c)α + (a + b)β + (b + c)γ = 0
⇒ a + c = 0, a + b = 0, b + c = 0 since α, β, γ are linearly independent.
Solving, a = b = c = 0.
5.11 Exercises:
(1) Prove that the set of all polynomials in an indeterminate x over a field F is a vector space.
(4) Write the vector A = [3 1; 1 2] in the vector space of all 2 × 2 matrices as a linear combination
of the vectors
[1 1; 0 1], [1 1; 1 0] and [1 1; 0 0].
(5) Show that the subspaces spanned by S = {α, β} and T = {α, β, α + β} are the same in any vector space V(F).
(7) Show that (1, 3, 2), (1, 7, 8), (2, 1, 1) in R^3 are linearly dependent.
(8) Prove that the set {1, x, x(1 − x)} is a linearly independent set of vectors in the space of all polynomials over the real field.
- Smt. K. Ruth
LESSON - 6
BASES AND DIMENSION
6.1 Objective of the Lesson:
To learn the definitions of a basis and the dimension of a vector space, some examples,
and related theorems.
6.2 Structure
6.3 Introduction
6.7 Summary
6.10 Exercises
6.3 Introduction:
In this lesson we define basis and dimension of a vector space, Quotient space. We prove
some important theorems relating to dimension.
⇒ b1 = b2 = ... = bn = 0
∴ S is L.I.
Let α ∈ Vn(F), say
α = (a1, a2, ..., an) = a1(1, 0, ..., 0) + a2(0, 1, ..., 0) + ... + an(0, 0, ..., 1).
∴ S spans V, i.e. L(S) = Vn(F).
Hence S is a basis of Vn(F).
6.4.4 Definition: Finite Dimensional Vector Space: A vector space V(F) is said to be finite
dimensional vector space if there is a finite subset S of V which spans V. i.e. L ( S ) V .
L( S1 ) V since L ( S ) V
Continuing the above process, after finite no. of steps we get a subset which is linearly indepen-
dent and spans V.
We get a basis of V..
Hence every finite dimensional vector space has a basis.
6.4.6 Invariance Theorem: Let V(F) be a finite dimensional vector space. Then any two bases
have the same number of elements.
β1 ∈ V ⇒ β1 is a linear combination of the elements of S1.
∴ S3 = {β1, α1, α2, ..., αn} is linearly dependent; also L(S3) = V.
There exists a vector αi ∈ S3 which is a linear combination of the preceding vectors, and αi ≠ β1,
by 5.7.8.
Let S4 = S3 − {αi}. Now L(S4) = V by 5.7.9.
Similarly, taking β2, the set S5 = {β2, β1, α1, ..., αi−1, αi+1, ..., αn} is linearly dependent, and there exists a vector αj ∈ S5 which is a linear combination of the preceding vectors, with
αj ≠ β1 and αj ≠ β2.
Now S6 = {β2, β1, α1, α2, ..., αi−1, αi+1, ..., αj−1, αj+1, ..., αn} generates V, i.e. L(S6) = V.
If we continue this process, at each step one α is excluded and a β is included. The set S1
of α's cannot be exhausted before the set S2 of β's; otherwise V(F) would
be the linear span of a proper subset of S2, and thus S2 would become linearly dependent.
∴ the number of β's cannot exceed the number of α's; interchanging the roles of the two bases gives the reverse inequality.
∴ Any two bases of a finite dimensional vector space have the same number of elements.
6.5.1 Definition: The number of elements in any basis of a finite dimensional vector space V(F)
is called the dimension of the vector space V(F) and is denoted by dim V.
6.5.3 Theorem: Every linearly independent subset of a finite dimensional vector space V ( F ) is
either a basis of V or can be extended to form a basis of V.
Proof: Let V(F) be a finite dimensional vector space and S = {α1, α2, ..., αm} a linearly independent
subset of V.
Suppose dim V n
S1 is linearly dependent.
Clearly L( S 2 ) V .
If S 2 is linearly dependent we repeat the above process a finite no. of times and we get a linearly
independent set containing S and spanning V.
This set is a basis of V which is the extension of S.
∴ Every linearly independent subset of a finite dimensional vector space is either a basis or can
be extended to form a basis of V.
S is linearly dependent.
6.5.5 Corollary: Any set of n linearly independent vectors of an n-dimensional vector space
V(F) forms a basis of V.
Proof: Let V(F) be an n-dimensional vector space and S a linearly independent subset of V with
n vectors.
6.5.6 Corollary: Every set of n vectors of an n-dimensional vector space V(F) which generates V is a basis of V.
Proof: Let V(F) be an n-dimensional vector space and S a set of n vectors which generates V.
If S is linearly dependent, then we get a proper subset of S which is a basis of V.
Then we get a basis of V with fewer than n vectors, a contradiction to the fact that dim V = n.
∴ S is linearly independent.
Hence S is a basis of V.
6.5.7 Theorem: If S = {α₁, α₂, …, αₙ} is a basis of a finite dimensional vector space V(F) of dimension n, then every element α ∈ V can be uniquely expressed as α = a₁α₁ + a₂α₂ + … + aₙαₙ where a₁, a₂, …, aₙ ∈ F.
Proof: Since S spans V, such an expression exists. Suppose α = a₁α₁ + … + aₙαₙ = b₁α₁ + … + bₙαₙ. Then
(a₁ − b₁)α₁ + (a₂ − b₂)α₂ + … + (aₙ − bₙ)αₙ = 0
Since S is linearly independent, a₁ = b₁, a₂ = b₂, …, aₙ = bₙ.
6.5.8 Note: If B = {α₁, α₂, …, αₙ} is a basis of a finite dimensional vector space V(F), then every vector α ∈ V is uniquely expressed as α = a₁α₁ + a₂α₂ + … + aₙαₙ, where a₁, a₂, …, aₙ ∈ F.
The scalars a₁, a₂, …, aₙ are called the coordinates of α relative to the basis B.
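Coordinates relative to a basis can be computed by solving a linear system: stack the basis vectors as the columns of a matrix B and solve Bc = α. The following sketch (using NumPy, with an illustrative basis of ℝ² that is not from the text) shows the idea.

```python
import numpy as np

# Coordinates of a vector relative to a basis: solve B @ c = alpha,
# where the columns of B are the basis vectors (an illustrative basis).
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])        # basis {(1,1), (1,-1)} of R^2, as columns
alpha = np.array([3.0, 1.0])

c = np.linalg.solve(B, alpha)      # coordinate vector of alpha w.r.t. the basis
print(c)                           # [2. 1.]  since (3,1) = 2(1,1) + 1(1,-1)
```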
6.5.9 Theorem: Every subspace W of a finite dimensional vector space V(F) is finite dimensional, with dim W ≤ dim V; moreover dim W = dim V ⇔ W = V.
Proof: Let V(F) be a finite dimensional vector space with dim V = n and W be a subspace of V.
Let S = {α₁, α₂, …, αₘ} be a maximal linearly independent subset of W, where m ≤ n.
Now we prove that S is a basis of W.
Let α ∈ W.
Then S ∪ {α} is linearly dependent, so there exist scalars a, a₁, a₂, …, aₘ ∈ F, not all zero, such that aα + a₁α₁ + a₂α₂ + … + aₘαₘ = 0.
If a = 0 then a₁α₁ + a₂α₂ + … + aₘαₘ = 0 with the aᵢ not all zero ⇒ S is linearly dependent — a contradiction, since S is L.I.
∴ a ≠ 0 ⇒ a⁻¹ ∈ F exists such that aa⁻¹ = 1, and α = −a⁻¹(a₁α₁ + … + aₘαₘ).
∴ α is a L.C. of elements of S ⇒ α ∈ L(S).
∴ W ⊆ L(S); since S ⊆ W, also L(S) ⊆ W, so W = L(S). Hence S is a basis of W.
∴ dim W = m ≤ n = dim V.
Conversely suppose dim V = dim W = n.
Let S be a basis of W; then S ⊆ W ⊆ V and S has n elements.
Now S is a linearly independent subset of the finite dimensional vector space V(F) with dim V = n.
∴ S is a basis of V, by 6.5.5.
∴ L(S) = V; but L(S) = W.
∴ V = W.
6.5.10 Theorem: If W₁ and W₂ are two subspaces of a finite dimensional vector space V(F), then dim(W₁ + W₂) = dim W₁ + dim W₂ − dim(W₁ ∩ W₂).
Proof: W₁ ∩ W₂ and W₁ + W₂ are also subspaces of V(F) and hence they are finite dimensional by 6.5.9.
Let S = {γ₁, γ₂, …, γₖ} be a basis of W₁ ∩ W₂, so that W₁ ∩ W₂ = L(S) ………… (1)
Now S ⊆ W₁ and S ⊆ W₂. Extend S to a basis S₁ = {γ₁, …, γₖ, α₁, …, αₘ} of W₁ and to a basis S₂ = {γ₁, …, γₖ, β₁, …, βₙ} of W₂, and let B = {γ₁, …, γₖ, α₁, …, αₘ, β₁, …, βₙ}.
B is linearly independent: Suppose a₁α₁ + … + aₘαₘ + b₁β₁ + … + bₙβₙ + c₁γ₁ + … + cₖγₖ = 0. Then
a₁α₁ + a₂α₂ + … + aₘαₘ = −(b₁β₁ + b₂β₂ + … + bₙβₙ + c₁γ₁ + c₂γ₂ + … + cₖγₖ) ………… (4)
Now −(b₁β₁ + … + bₙβₙ + c₁γ₁ + … + cₖγₖ) ∈ W₂ and a₁α₁ + a₂α₂ + … + aₘαₘ ∈ W₁.
∴ a₁α₁ + … + aₘαₘ ∈ W₁ ∩ W₂ = L(S), so it is a L.C. of the γ's; substituting in (4) and using the linear independence of S₁ and S₂, all the coefficients aᵢ, bⱼ, cₗ vanish.
∴ B is linearly independent.
B spans W₁ + W₂:
Let α ∈ W₁ + W₂, say α = ξ + η with ξ ∈ W₁, η ∈ W₂.
Now α = (a₁α₁ + a₂α₂ + … + aₘαₘ + c₁γ₁ + c₂γ₂ + … + cₖγₖ) + (b₁β₁ + b₂β₂ + … + bₙβₙ + d₁γ₁ + d₂γ₂ + … + dₖγₖ)
= a L.C. of elements of B.
∴ α ∈ L(B) ⇒ B spans W₁ + W₂.
Hence B is a basis of W₁ + W₂.
∴ dim(W₁ + W₂) = m + n + k = (m + k) + (n + k) − k
= dim W₁ + dim W₂ − dim(W₁ ∩ W₂).
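The dimension formula can be checked numerically: the dimension of a span is the rank of the matrix whose rows are the generators. A sketch with illustrative subspaces of ℝ³ (not from the text), assuming NumPy:

```python
import numpy as np

# Numerical check of dim(W1 + W2) = dim W1 + dim W2 - dim(W1 ∩ W2)
# with illustrative subspaces of R^3.
W1 = np.array([[1, 0, 0], [0, 1, 0]])        # spans the xy-plane
W2 = np.array([[0, 1, 0], [0, 0, 1]])        # spans the yz-plane

d1 = np.linalg.matrix_rank(W1)               # dim W1 = 2
d2 = np.linalg.matrix_rank(W2)               # dim W2 = 2
d_sum = np.linalg.matrix_rank(np.vstack([W1, W2]))   # dim(W1 + W2) = 3

# The theorem then predicts the dimension of the intersection:
d_int = d1 + d2 - d_sum
print(d1, d2, d_sum, d_int)                  # 2 2 3 1  (intersection = y-axis)
```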
6.6.1 Coset: Let W be a subspace of a vector space V(F) and α ∈ V. Then the set W + α = {γ + α : γ ∈ W} is called the right coset of W in V generated by α. The set α + W = {α + γ : γ ∈ W} is called the left coset of W in V generated by α.
i) W + 0 = W
ii) α ∈ W ⇒ W + α = W
iii) W + α = W + β ⇔ α − β ∈ W
Proof: i) Since γ + 0 = γ for every γ ∈ W, we have W + 0 = {γ + 0 : γ ∈ W}
= {γ : γ ∈ W}
= W.
ii) Suppose α ∈ W.
To prove that W + α ⊆ W:
Let γ ∈ W. Then γ + α ∈ W + α.
Since γ, α ∈ W and W is a subspace, γ + α ∈ W.
∴ W + α ⊆ W ………… (1)
To prove that W ⊆ W + α:
Let δ ∈ W.
α ∈ W and W is a subspace ⇒ −α ∈ W.
δ, −α ∈ W ⇒ δ − α ∈ W, since W is a subspace.
∴ δ = (δ − α) + α ∈ W + α.
∴ W ⊆ W + α ………… (2)
From (1) and (2), W + α = W.
iii) Suppose W + α = W + β.
Then α = 0 + α ∈ W + α = W + β,
so α = γ + β for some γ ∈ W,
whence α − β = γ ∈ W.
∴ W + α = W + β ⇒ α − β ∈ W.
Conversely suppose α − β ∈ W.
Then W + (α − β) = W, from ii).
⇒ (W + (α − β)) + β = W + β
⇒ W + α = W + β.
6.6.4 Theorem: If W is a subspace of a vector space V(F), then the set V/W of all cosets of W in V is a vector space over F w.r.t. addition of cosets and scalar multiplication defined by
(W + α) + (W + β) = W + (α + β) and a(W + α) = W + aα, for all α, β ∈ V and a ∈ F.
Proof: Addition of cosets is well defined:
Suppose W + α = W + α′ and W + β = W + β′.
⇒ α − α′ ∈ W and β − β′ ∈ W, by 6.6.3.
⇒ (α − α′) + (β − β′) ∈ W, since W is a subspace.
⇒ (α + β) − (α′ + β′) ∈ W
⇒ W + (α + β) = W + (α′ + β′), by 6.6.3.
Scalar multiplication is well defined:
Suppose W + α = W + α′.
⇒ α − α′ ∈ W, by 6.6.3.
⇒ a(α − α′) ∈ W for a ∈ F
⇒ aα − aα′ ∈ W
⇒ W + aα = W + aα′, by 6.6.3
⇒ a(W + α) = a(W + α′).
Addition is associative: Let W + α, W + β, W + γ ∈ V/W.
[(W + α) + (W + β)] + (W + γ) = [W + (α + β)] + (W + γ)
= W + [(α + β) + γ]
= W + [α + (β + γ)]
= (W + α) + [W + (β + γ)]
= (W + α) + [(W + β) + (W + γ)].
Existence of identity:
Clearly W = W + 0 ∈ V/W, and (W + α) + (W + 0) = W + (α + 0) = W + α,
(W + 0) + (W + α) = W + (0 + α) = W + α, ∀ W + α ∈ V/W. ∴ W is the additive identity.
Existence of inverse: Let W + α ∈ V/W. Then −α ∈ V ⇒ W + (−α) ∈ V/W, and
(W + (−α)) + (W + α) = W + (−α + α) = W + 0 = W.
∴ W + (−α) is the additive inverse of W + α.
Addition is commutative:
Let W + α, W + β ∈ V/W.
(W + α) + (W + β) = W + (α + β) = W + (β + α)
= (W + β) + (W + α)
∴ V/W is an abelian group w.r.t. addition.
Let W + α, W + β ∈ V/W and a, b ∈ F.
i) a[(W + α) + (W + β)] = a[W + (α + β)] = W + a(α + β) = W + (aα + aβ)
= (W + aα) + (W + aβ) = a(W + α) + a(W + β)
ii) (a + b)(W + α) = W + (a + b)α = W + (aα + bα) = a(W + α) + b(W + α)
iii) (ab)(W + α) = W + (ab)α = a(W + bα) = a[b(W + α)]
iv) 1(W + α) = W + 1·α = W + α
∴ V/W is a vector space.
6.6.5 Quotient Space: If W is a subspace of a vector space V(F), then V/W, the set of all cosets of W in V, is a vector space over F, called the quotient space of V by W.
6.6.6 Theorem: If W is a subspace of a finite dimensional vector space V(F), then dim(V/W) = dim V − dim W.
Proof: Let W be a subspace of a finite dimensional vector space V(F).
Let {α₁, …, αⱼ} be a basis of W and extend it to a basis {α₁, …, αⱼ, β₁, …, βₖ} of V, so that dim W = j and dim V = j + k. Let B = {W + β₁, W + β₂, …, W + βₖ}.
B is linearly independent:
Suppose b₁(W + β₁) + b₂(W + β₂) + … + bₖ(W + βₖ) = W.
⇒ W + (b₁β₁ + b₂β₂ + … + bₖβₖ) = W
⇒ b₁β₁ + b₂β₂ + … + bₖβₖ ∈ W, so it is a L.C. of α₁, …, αⱼ; the linear independence of the basis of V then forces b₁ = b₂ = … = bₖ = 0.
∴ B is linearly independent.
B spans V/W:
Let W + α ∈ V/W, α ∈ V.
Write α = c₁α₁ + … + cⱼαⱼ + d₁β₁ + d₂β₂ + … + dₖβₖ.
Then W + α = W + (d₁β₁ + d₂β₂ + … + dₖβₖ), since c₁α₁ + … + cⱼαⱼ ∈ W
= d₁(W + β₁) + d₂(W + β₂) + … + dₖ(W + βₖ)
∴ B spans V/W.
Hence B is a basis of V/W, and dim(V/W) = k = (j + k) − j = dim V − dim W.
6.9.1: Suppose a linear combination of the given vectors vanishes, so that, comparing components,
(a + 2b + c, 2a + b − c, a + 2c) = (0, 0, 0)
⇒ a + 2b + c = 0; 2a + b − c = 0; a + 2c = 0
a + 2c = 0 ⇒ a = −2c
a + 2b + c = 0 ⇒ −2c + 2b + c = 0 ⇒ 2b − c = 0 ⇒ c = 2b
2a + b − c = 0 ⇒ −4c + b − c = 0 ⇒ b − 5c = 0 ⇒ b = 5c
⇒ b = 10b ⇒ 9b = 0 ⇒ b = 0, whence c = 2b = 0 and a = −2c = 0.
∴ The given vectors are linearly independent.
6.9.2 Find the coordinates of the vector (2, 1, −6) of ℝ³ relative to the basis {(1, 1, 2), (3, −1, 0), (2, 0, −1)}.
Solution: Write (2, 1, −6) = a(1, 1, 2) + b(3, −1, 0) + c(2, 0, −1). Then
a + 3b + 2c = 2 ………… (1)
a − b = 1 ………… (2)
2a − c = −6 ………… (3)
From (2), b = a − 1; from (3), c = 2a + 6. Substituting in (1):
a + 3(a − 1) + 2(2a + 6) = 2 ⇒ a + 3a − 3 + 4a + 12 = 2
⇒ 8a = −7 ⇒ a = −7/8
b = a − 1 = −7/8 − 1 = −15/8 and c = 2a + 6 = −14/8 + 6 = 34/8 = 17/4
∴ The coordinates of (2, 1, −6) are (−7/8, −15/8, 17/4).
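The coordinates found in 6.9.2 can be verified by solving the same linear system numerically (a check, assuming NumPy):

```python
import numpy as np

# Verify problem 6.9.2: coordinates of (2, 1, -6) w.r.t. {(1,1,2), (3,-1,0), (2,0,-1)}.
B = np.array([[1.0, 3.0, 2.0],
              [1.0, -1.0, 0.0],
              [2.0, 0.0, -1.0]])   # basis vectors as columns
v = np.array([2.0, 1.0, -6.0])

coords = np.linalg.solve(B, v)
print(coords)                      # [-0.875 -1.875  4.25] = (-7/8, -15/8, 17/4)
```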
6.9.3 If V is the vector space of ordered pairs of complex numbers over the real field ℝ, then show that the set S = {(1, 0), (i, 0), (0, 1), (0, i)} is a basis of V.
Solution: Suppose a(1, 0) + b(i, 0) + c(0, 1) + d(0, i) = (0, 0) with a, b, c, d ∈ ℝ. Then
(a + ib, c + id) = (0, 0)
⇒ a + ib = 0 and c + id = 0
⇒ a = 0, b = 0, c = 0, d = 0
∴ S is linearly independent.
Let (a + ib, c + id) ∈ V. Then (a + ib, c + id) = a(1, 0) + b(i, 0) + c(0, 1) + d(0, i).
∴ S spans V.
Hence S is a basis of V.
6.9.4 Let V be the vector space of 2 × 2 matrices over a field F. Show that V has dimension 4 by exhibiting a basis for V which has four elements.
Solution: Let S = {A, B, C, D}, where (writing matrices by rows)
A = [1 0; 0 0], B = [0 1; 0 0], C = [0 0; 1 0], D = [0 0; 0 1]
Clearly S ⊆ V.
Suppose aA + bB + cC + dD = O, i.e.
a[1 0; 0 0] + b[0 1; 0 0] + c[0 0; 1 0] + d[0 0; 0 1] = [0 0; 0 0]
⇒ [a b; c d] = [0 0; 0 0]
⇒ a = b = c = d = 0
∴ S is linearly independent.
Further, if [a b; c d] ∈ V, then
[a b; c d] = aA + bB + cC + dD
∴ S spans V.
Hence S is a basis of V.
∴ dim V = 4.
6.9.5 Do the vectors (1, 1, 0), (0, 1, 2) and (0, 0, 1) form a basis of V₃(ℝ)?
Solution: Suppose a(1, 1, 0) + b(0, 1, 2) + c(0, 0, 1) = (0, 0, 0), where a, b, c ∈ ℝ.
⇒ (a, a + b, 2b + c) = (0, 0, 0)
⇒ a = 0; a + b = 0; 2b + c = 0
⇒ a = 0; b = −a = 0; c = −2b = 0
∴ The vectors (1, 1, 0), (0, 1, 2) and (0, 0, 1) are linearly independent.
Since dim V₃(ℝ) = 3, the set {(1, 1, 0), (0, 1, 2), (0, 0, 1)} is a basis of V₃(ℝ), by 6.5.5.
6.9.6 Show that the set {(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0)} spans the vector space ℝ³ but is not a basis.
Solution: Let (a, b, c) ∈ ℝ³ and write (a, b, c) = x(1, 0, 0) + y(1, 1, 0) + z(1, 1, 1) + t(0, 1, 0). Then
(a, b, c) = (x + y + z, y + z + t, z)
⇒ x + y + z = a; y + z + t = b; z = c
⇒ x + y = a − c; y + t = b − c
If y = 0, then x = a − c and t = b − c.
∴ Every vector of ℝ³ is a L.C. of the given vectors, so the set spans ℝ³. But any four vectors in the 3-dimensional space ℝ³ are linearly dependent, so the set is not a basis.
6.9.7 Show that S′ = {(1, 1, 0), (0, 1, 1), (1, 0, 1)} is a basis of ℂ³(ℂ).
Solution: Suppose a(1, 1, 0) + b(0, 1, 1) + c(1, 0, 1) = (0, 0, 0). Then
(a + c, a + b, b + c) = (0, 0, 0)
⇒ a + c = 0; a + b = 0; b + c = 0
⇒ a = 0; b = 0; c = 0
∴ S′ is linearly independent, and since dim ℂ³(ℂ) = 3, S′ is a basis of ℂ³(ℂ).
6.9.8 Extend the set of linearly independent vectors {(1, 0, 1, 0), (0, 1, 1, 0)} to a basis of V₄(ℝ).
Solution: S = {(1, 0, 1, 0), (0, 1, 1, 0)} is a linearly independent subset of V₄.
Let S₁ = {(1, 0, 1, 0), (0, 1, 1, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}, obtained by adjoining the standard basis, so that L(S₁) = V₄.
But dim V₄ = 4 and S₁ has six vectors.
∴ S₁ cannot be a basis; S₁ is linearly dependent.
Since (0, 1, 0, 0) = −(1, 0, 1, 0) + (0, 1, 1, 0) + (1, 0, 0, 0), delete it:
S₂ = {(1, 0, 1, 0), (0, 1, 1, 0), (1, 0, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)} spans V₄.
S₂ is still linearly dependent, since (0, 0, 1, 0) = (1, 0, 1, 0) − (1, 0, 0, 0); deleting it,
S₃ = {(1, 0, 1, 0), (0, 1, 1, 0), (1, 0, 0, 0), (0, 0, 0, 1)} spans V₄, and having 4 = dim V₄ vectors, S₃ is the required basis containing S.
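The extension procedure of 6.9.8 — adjoin a spanning set and keep only vectors that increase the rank — can be sketched as follows (assuming NumPy; `matrix_rank` plays the role of the linear-independence test):

```python
import numpy as np

# Extend a linearly independent set to a basis of R^4 (the procedure of 6.9.8):
# adjoin the standard basis, then keep each vector only if it raises the rank.
S = [np.array([1, 0, 1, 0]), np.array([0, 1, 1, 0])]
candidates = list(np.eye(4, dtype=int))      # e1, e2, e3, e4

basis = list(S)
for v in candidates:
    trial = np.vstack(basis + [v])
    if np.linalg.matrix_rank(trial) > len(basis):   # v independent of current set
        basis.append(v)

print(len(basis))                            # 4 -> a basis of R^4 containing S
```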
6.9.9 Find a basis and the dimension of the subspace of ℝ³ spanned by the vectors (2, 7, 3), (1, −1, 0), (1, 2, 1) and (0, 3, 1).
Solution: Let W be the subspace of ℝ³ spanned by S = {(2, 7, 3), (1, −1, 0), (1, 2, 1), (0, 3, 1)}.
Since (2, 7, 3) = −(1, −1, 0) + 3(1, 2, 1), it may be deleted; put S₁ = {(1, −1, 0), (1, 2, 1), (0, 3, 1)}, so that L(S₁) = W.
Suppose a(1, −1, 0) + b(1, 2, 1) + c(0, 3, 1) = (0, 0, 0). Then
(a + b, −a + 2b + 3c, b + c) = (0, 0, 0)
⇒ a + b = 0 … (1); −a + 2b + 3c = 0 … (2); b + c = 0 … (3)
From (1), a = −b; from (2), b + 2b + 3c = 0 ⇒ 3b + 3c = 0 ⇒ b + c = 0
⇒ c = −b
So a = 1, b = −1, c = 1 is a non-trivial solution.
∴ S₁ is linearly dependent.
∴ (0, 3, 1) is a L.C. of the remaining vectors; put S₂ = {(1, −1, 0), (1, 2, 1)}.
Suppose a(1, −1, 0) + b(1, 2, 1) = (0, 0, 0), i.e. (a + b, −a + 2b, b) = (0, 0, 0)
⇒ a + b = 0, −a + 2b = 0, b = 0
⇒ a = 0, b = 0
∴ S₂ is L.I. and L(S₂) = W.
Hence S₂ is a basis of W and dim W = 2.
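The dimension found in 6.9.9 can be confirmed by a rank computation (assuming NumPy):

```python
import numpy as np

# Problem 6.9.9: dimension of the span of the four given vectors in R^3.
S = np.array([[2, 7, 3],
              [1, -1, 0],
              [1, 2, 1],
              [0, 3, 1]])

r = np.linalg.matrix_rank(S)
print(r)                          # 2, so {(1,-1,0), (1,2,1)} is a basis of W
```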
6.9.10 If W₁ and W₂ are subspaces of V₄(ℝ) generated by the sets {(1, 1, −1, 2), (2, 1, 3, 0), (3, 2, 2, 2)} and {(1, −1, 0, 1), (−1, 1, 0, −1)} respectively, find dim(W₁ + W₂).
Solution: Let S₁ = {(1, 1, −1, 2), (2, 1, 3, 0), (3, 2, 2, 2)} and S₂ = {(1, −1, 0, 1), (−1, 1, 0, −1)}.
Given L(S₁) = W₁ and L(S₂) = W₂, so that W₁ + W₂ = L(S₁ ∪ S₂).
Suppose a(1, 1, −1, 2) + b(2, 1, 3, 0) + c(3, 2, 2, 2) + d(1, −1, 0, 1) + e(−1, 1, 0, −1) = (0, 0, 0, 0).
Since (3, 2, 2, 2) = (1, 1, −1, 2) + (2, 1, 3, 0), there is a non-trivial solution.
∴ S₁ ∪ S₂ is L.D.; delete (3, 2, 2, 2) and put S′ = {(1, 1, −1, 2), (2, 1, 3, 0), (1, −1, 0, 1), (−1, 1, 0, −1)}.
Suppose a(1, 1, −1, 2) + b(2, 1, 3, 0) + c(1, −1, 0, 1) + d(−1, 1, 0, −1) = (0, 0, 0, 0).
Comparing third coordinates, −a + 3b = 0 ……… (3); comparing fourth coordinates, 2a + c − d = 0 ……… (4)
From (3), a = 3b, so (4) gives 6b + c − d = 0; comparing first coordinates, a + 2b + c − d = 0 ⇒ 5b + c − d = 0
⇒ b = 0, a = 0 and c = d
Taking c = d = 1 gives a non-trivial solution (indeed (−1, 1, 0, −1) = −(1, −1, 0, 1)).
∴ S′ is L.D.; delete (−1, 1, 0, −1) and put S″ = {(1, 1, −1, 2), (2, 1, 3, 0), (1, −1, 0, 1)}.
By the computation above with d = 0, the only solution is a = b = c = 0, so S″ is L.I.
Hence S″ is a basis of W₁ + W₂
∴ dim(W₁ + W₂) = 3.
6.10 Exercises:
1. Show that B = {(1, 0, 1), (1, 2, 1), (0, 3, 2)} forms a basis for ℝ³.
2. Find the coordinates of (2, 3, 4, −1) w.r.t. the basis {(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1), (1, 0, 0, 0)} of V₄(ℝ).
4. Find a basis for the subspace spanned by the vectors (1, 2, 0), (−1, 0, 1), (0, 2, 1) in V₃(ℝ).
- Smt. K. Ruth
LESSON - 7
LINEAR TRANSFORMATION
7.1 Objective of the Lesson:
In Chapter 5 and 6, we discussed vector spaces and some of the related topics. Now it is
natural to consider functions from a vector space into a vector space. Such a function with a
condition is called a linear transformation or a homomorphism.
In this chapter, we discuss the properties of these linear transformations and related problems.
7.3 Introduction
7.7 Summary
7.9 Exercises
7.3 Introduction:
In I B.Sc./B.A. homomorphisms from a group into a group are discussed. In chapter 2 of
this book, homomorphisms from a ring/ field into a ring/ field are discussed. In this chapter, we
discuss the homomorphisms (linear transformations) from a vector space into a vector space.
So aα + bβ = (aa₁ + bb₁, aa₂ + bb₂, aa₃ + bb₃).
Applying T and separating the terms in a and b,
T(aα + bβ) = aT(α) + bT(β), ∀ α, β ∈ ℝ³ and a, b ∈ ℝ.
Hence T is a linear transformation.
T(aα + bβ) = (aα₁ + bβ₁, 0) ………… (1)
But aT(α) + bT(β) = aT(α₁, α₂, α₃) + bT(β₁, β₂, β₃)
= a(α₁, 0) + b(β₁, 0) = (aα₁ + bβ₁, 0) ………… (2)
From (1) and (2), T(aα + bβ) = aT(α) + bT(β), so T is a linear transformation.
7.4.6 SAQ: Show that the mapping T : ℝ² → ℝ² defined by T(a₁, a₂) = (2a₁ + a₂, a₁), for all (a₁, a₂) ∈ ℝ², is a linear transformation.
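The linearity condition T(aα + bβ) = aT(α) + bT(β) of this SAQ can be spot-checked numerically on random inputs — a sanity check, not a proof (assuming NumPy):

```python
import numpy as np

# Spot-check linearity of T(a1, a2) = (2*a1 + a2, a1) on random inputs.
rng = np.random.default_rng(0)

def T(v):
    return np.array([2 * v[0] + v[1], v[0]])

for _ in range(100):
    a, b = rng.normal(size=2)
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert np.allclose(T(a * x + b * y), a * T(x) + b * T(y))
print("linearity holds on all sampled inputs")
```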
7.4.7 Theorem: Let T : U → V be a linear transformation from the vector space U(F) to the vector space V(F). Then
a) T(0) = 0′ (b) T(−α) = −T(α) ∀ α ∈ U
(c) T(α − β) = T(α) − T(β) ∀ α, β ∈ U
(d) T(a₁α₁ + a₂α₂ + … + aₙαₙ) = a₁T(α₁) + a₂T(α₂) + … + aₙT(αₙ) for all α₁, α₂, …, αₙ ∈ U and a₁, a₂, …, aₙ ∈ F.
Proof: a) We know that T(α) = T(α + 0) = T(α) + T(0). Also T(α) = T(α) + 0′. ∴ T(α) + T(0) = T(α) + 0′
⇒ T(0) = 0′, by the left cancellation law in the group (V, +).
Hence T(0) = 0′ = 0_V.
(b) Let α ∈ U. Then α + (−α) = 0 = 0_U.
Now T(α + (−α)) = T(0)
⇒ T(α) + T(−α) = T(0) = 0′
⇒ T(−α) = −T(α)
(c) T(α − β) = T(α + (−β)) = T(α) + T(−β) = T(α) − T(β).
d) We use induction on n. For n = 1 the statement is clear. Assuming it for n = k, T(a₁α₁ + … + aₖ₊₁αₖ₊₁) = T(a₁α₁ + … + aₖαₖ) + aₖ₊₁T(αₖ₊₁) = a₁T(α₁) + … + aₖ₊₁T(αₖ₊₁).
So, the result is true for n = k + 1. Now the result follows from Mathematical induction.
Proof: Let a, b ∈ F and α, β ∈ U ⇒ aα + bβ ∈ U.
Then T(aα + bβ) = 0
= a·0 + b·0 = aT(α) + bT(β), ∀ a, b ∈ F, α, β ∈ U.
This shows that the zero map T is a linear transformation.
T(aα + bβ) = aα + bβ
= aT(α) + bT(β), ∀ a, b ∈ F and α, β ∈ V.
So, the identity map T is a linear transformation.
7.4.10 Theorem: Let U(F), V(F) be vector spaces over a field F and let T : U → V be a linear transformation. Then the mapping −T : U → V defined by (−T)(α) = −T(α) ∀ α ∈ U is a linear transformation.
Proof: (−T)(aα + bβ) = −T(aα + bβ) = −(aT(α) + bT(β))
= a(−T(α)) + b(−T(β))
= a(−T)(α) + b(−T)(β), ∀ a, b ∈ F, α, β ∈ U.
So, −T is a linear transformation.
Note: - T is called the negative of the linear transformation T.
7.4.11 Theorem: Let T₁, T₂ be linear transformations from a vector space U(F) into a vector space V(F). Then the mapping T₁ + T₂ : U → V defined by (T₁ + T₂)(α) = T₁(α) + T₂(α) ∀ α ∈ U is a linear transformation.
Proof: Let a, b ∈ F, α, β ∈ U.
(T₁ + T₂)(aα + bβ) = T₁(aα + bβ) + T₂(aα + bβ) = aT₁(α) + bT₁(β) + aT₂(α) + bT₂(β)
= a[T₁(α) + T₂(α)] + b[T₁(β) + T₂(β)] = a(T₁ + T₂)(α) + b(T₁ + T₂)(β)
∴ T₁ + T₂ is a linear transformation.
7.4.12 Theorem: If T : U → V is a linear transformation and a ∈ F, then aT : U → V defined by (aT)(α) = aT(α) ∀ α ∈ U is a linear transformation.
Proof: α, β ∈ U and x, y ∈ F ⇒ xα + yβ ∈ U.
Now (aT)(xα + yβ) = a[T(xα + yβ)]
= a[xT(α) + yT(β)]
= a[xT(α)] + a[yT(β)]
= (ax)T(α) + (ay)T(β)
= (xa)T(α) + (ya)T(β)
= x[aT(α)] + y[aT(β)] = x(aT)(α) + y(aT)(β)
∴ aT is a linear transformation.
Proof: Let a, b ∈ F, X, Y ∈ U.
So, T(aX + bY) = A(aX + bY) = A(aX) + A(bY)
= a(AX) + b(AY) = aT(X) + bT(Y)
∴ T is a linear transformation.
T(α + β) = T(α) + T(β), ∀ α, β ∈ U
and T(aα) = aT(α), ∀ α ∈ U, a ∈ F.
Conversely, if these two conditions hold, then
T(aα + bβ) = T(aα) + T(bβ)
= aT(α) + bT(β)
so T is a linear transformation. The condition T(aα + β) = aT(α) + T(β), ∀ α, β ∈ U and a ∈ F, can be justified on similar lines.
7.4.18 Theorem: Let U(F), V(F) be vector spaces and let {α₁, α₂, …, αₙ} be a basis of U. Let β₁, β₂, …, βₙ be any n vectors of V. Then there exists a unique linear transformation T : U → V such that T(αᵢ) = βᵢ, i = 1, 2, …, n.
Proof (linearity): For α = a₁α₁ + … + aₙαₙ, β = b₁α₁ + … + bₙαₙ and a, b ∈ F,
Now T(aα + bβ) = T((aa₁ + bb₁)α₁ + (aa₂ + bb₂)α₂ + … + (aaₙ + bbₙ)αₙ)
= (aa₁ + bb₁)β₁ + … + (aaₙ + bbₙ)βₙ = aT(α) + bT(β).
7.4.19 Theorem: Let U, V, W be vector spaces over a field F. Let T : U → V, H : V → W be linear transformations. Then the composite function HT : U → W defined by
(HT)(α) = H(T(α)) ∀ α ∈ U is a linear transformation. (This HT is called the product of the linear transformations.)
Let a, b ∈ F and α, β ∈ U.
(HT)(aα + bβ) = H(T(aα + bβ)) = H(aT(α) + bT(β))
= aH(T(α)) + bH(T(β))
= a(HT)(α) + b(HT)(β), ∀ α, β ∈ U and a, b ∈ F.
Hence HT is a linear transformation.
7.4.21: Let T₁ be a linear operator on a vector space V(F), and let O be the zero operator and I the identity operator on V.
Then (a) T₁O = OT₁ = O (b) T₁I = IT₁ = T₁.
Proof: i) Let α ∈ V.
(T₁O)(α) = T₁(O(α)) = T₁(0) = 0
∴ T₁O = O. Similarly we can prove that OT₁ = O. [Here O is the zero operator on V].
ii) (T₁I)(α) = T₁(I(α)) = T₁(α) ∀ α ∈ V, so T₁I = T₁; similarly IT₁ = T₁.
7.4.22 Theorem: Let L(U, V) be the set of all linear transformations from a vector space U(F) into the vector space V(F). For T₁, T₂ ∈ L(U, V) and a ∈ F, define
(T₁ + T₂)(α) = T₁(α) + T₂(α) ∀ α ∈ U and (aT₁)(α) = aT₁(α) ∀ α ∈ U. Then L(U, V) is a vector space over F.
Proof: By theorems 7.4.11 and 7.4.12, we have T₁ + T₂, aT₁ ∈ L(U, V). This shows that L(U, V) is closed under vector addition and scalar multiplication.
[(T₁ + T₂) + T₃](α) = (T₁ + T₂)(α) + T₃(α) = [T₁(α) + T₂(α)] + T₃(α)
= T₁(α) + [T₂(α) + T₃(α)] = [T₁ + (T₂ + T₃)](α) ∀ α ∈ U
∴ + is associative in L(U, V).
(T + O)(α) = T(α) + O(α) = T(α) + 0 = T(α) ∀ α ∈ U
∴ O is the additive identity.
T ∈ L(U, V) ⇒ −T ∈ L(U, V), by theorem 7.4.10, and T + (−T) = O.
For a ∈ F, T₁, T₂ ∈ L(U, V) and α ∈ U, the remaining axioms are verified similarly, pointwise.
7.4.23 Theorem: Let dim U = n, dim V = m. Then dim L(U, V) = mn (U, V are vector spaces over the field F).
Proof: Let B₁ = {α₁, …, αₙ} be a basis of U and {β₁, …, βₘ} a basis of V. For 1 ≤ i ≤ n, 1 ≤ j ≤ m define Tᵢⱼ ∈ L(U, V) by
Tᵢⱼ(αₖ) = βⱼ if k = i, and Tᵢⱼ(αₖ) = 0 if k ≠ i.
Let S = {Tᵢⱼ : 1 ≤ i ≤ n, 1 ≤ j ≤ m}; S has mn elements.
a) S is L.I.: Suppose Σᵢ,ⱼ aᵢⱼTᵢⱼ = O.
For each k, 1 ≤ k ≤ n, we have (Σᵢ,ⱼ aᵢⱼTᵢⱼ)(αₖ) = O(αₖ) = 0
⇒ aₖ₁β₁ + aₖ₂β₂ + … + aₖₘβₘ = 0
⇒ aₖⱼ = 0 for all j, since the βⱼ are L.I.
S is L.I.
b) L(S) = L(U,V):
Clearly L(S) ⊆ L(U, V).
Let H ∈ L(U, V), and write H(αᵢ) = bᵢ₁β₁ + … + bᵢₘβₘ. Put H′ = Σᵢ,ⱼ bᵢⱼTᵢⱼ ∈ L(S). Since Tᵢⱼ(αₖ) = 0 if k ≠ i and Tᵢⱼ(αₖ) = βⱼ if k = i,
H′(αₖ) = bₖ₁β₁ + … + bₖₘβₘ = H(αₖ), ∀ αₖ ∈ B₁
⇒ H = H′ ∈ L(S). ∴ L(S) = L(U, V), so S is a basis of L(U, V) and dim L(U, V) = mn.
7.5.2 Theorem: Let T : U → V be a linear transformation. Then the null space N(T) = {α ∈ U : T(α) = 0} is a subspace of U and the range R(T) = {T(α) : α ∈ U} is a subspace of V.
Proof: (1) N(T) = {α ∈ U : T(α) = 0}.
We know that 0 ∈ U and T(0) = 0. ∴ 0 ∈ N(T)
∴ N(T) is a non-empty subset of U.
Let a, b ∈ F, α, β ∈ N(T).
Then T(α) = 0, T(β) = 0, so T(aα + bβ) = aT(α) + bT(β) = 0.
∴ aα + bβ ∈ N(T), ∀ α, β ∈ N(T) and a, b ∈ F.
∴ N(T) is a subspace of U.
(2) R(T) = {T(α) : α ∈ U}.
0 ∈ U and T(0) = 0 ⇒ 0 ∈ R(T); also R(T) ⊆ V.
Let β₁, β₂ ∈ R(T) and a, b ∈ F.
Then there exist α₁, α₂ ∈ U with T(α₁) = β₁, T(α₂) = β₂.
aβ₁ + bβ₂ = aT(α₁) + bT(α₂)
= T(aα₁ + bα₂)
∈ R(T), since aα₁ + bα₂ ∈ U.
∴ R(T) is a subspace of V.
7.5.3 Example: Let V(F) be a vector space and I : V → V be the identity operator. Then N(I) = {0}, R(I) = V.
Let {α₁, α₂, …, αₙ} be a basis of U and S′ = {T(α₁), T(α₂), …, T(αₙ)}.
Let β ∈ R(T). Then β = T(α) for some α ∈ U.
Writing α = a₁α₁ + … + aₙαₙ, β = T(α) = a₁T(α₁) + … + aₙT(αₙ) ∈ L(S′).
∴ R(T) ⊆ L(S′); clearly L(S′) ⊆ R(T).
Hence L(S′) = R(T).
∴ R(T) is finite dimensional.
7.5.4 Example: Let T : ℝ³ → ℝ² be defined by T(a₁, a₂, a₃) = (a₁ − a₂, 2a₃).
Then N(T) = {(a₁, a₂, a₃) : T(a₁, a₂, a₃) = (0, 0)}
= {(a₁, a₂, a₃) : (a₁ − a₂, 2a₃) = (0, 0)}
= {(a₁, a₂, a₃) : a₁ = a₂, a₃ = 0}
= {(a, a, 0) : a ∈ ℝ}
and R(T) = {T(a₁, a₂, a₃) : (a₁, a₂, a₃) ∈ ℝ³}
= {(a₁ − a₂, 2a₃) : a₁, a₂, a₃ ∈ ℝ}
By theorem 7.4.28, R(T) is spanned by {T(1, 0, 0), T(0, 1, 0), T(0, 0, 1)}, where {e₁, e₂, e₃} is the standard basis of ℝ³
= Span of {(1, 0), (−1, 0), (0, 2)} = ℝ²
The rank of T is ρ(T) = dim R(T),
and the nullity of T is ν(T) = dim N(T).
Theorem (Rank and Nullity): Let T : U → V be a linear transformation with U finite dimensional. Then ρ(T) + ν(T) = dim U.
Proof: Let dim U = n. Since N(T) is a subspace of U and U is finite dimensional, it follows that N(T) is finite dimensional. Let dim N(T) = k and let A = {α₁, α₂, …, αₖ} be a basis of N(T). Since A is a L.I. subset of U, A can be extended to form a basis of U. Let B = {α₁, α₂, …, αₖ, αₖ₊₁, …, αₙ} be such a basis, and put S = {T(αₖ₊₁), …, T(αₙ)}.
a) S is L.I.: Suppose aₖ₊₁T(αₖ₊₁) + … + aₙT(αₙ) = 0. Then T(aₖ₊₁αₖ₊₁ + … + aₙαₙ) = 0, so aₖ₊₁αₖ₊₁ + … + aₙαₙ ∈ N(T) and is a L.C. of α₁, …, αₖ; the linear independence of B forces
aₖ₊₁ = 0, …, aₙ = 0. ∴ S is L.I.
b) L(S) = R(T): Clearly L(S) ⊆ R(T).
Let β ∈ R(T); then β = T(α) for some α ∈ U.
Now α ∈ U ⇒ there exist a₁, a₂, …, aₙ ∈ F with α = a₁α₁ + … + aₖαₖ + aₖ₊₁αₖ₊₁ + … + aₙαₙ, so β = T(α) = aₖ₊₁T(αₖ₊₁) + … + aₙT(αₙ).
∴ R(T) ⊆ L(S) ⇒ L(S) = R(T) ⇒ S is a basis of R(T).
∴ ρ(T) = number of elements of S = n − k = dim U − ν(T), i.e. ρ(T) + ν(T) = dim U.
7.5.10 Problem: Verify the rank–nullity theorem for T : ℝ³ → ℝ² defined by T(a₁, a₂, a₃) = (a₁ − a₂, 2a₃).
Solution: N(T) = {(a₁, a₂, a₃) : (a₁ − a₂, 2a₃) = (0, 0)}
= {(a, a, 0) : a ∈ ℝ}
= Span of {(1, 1, 0)}
∴ ν(T) = dim N(T) = 1
R(T) = {T(a₁, a₂, a₃) : (a₁, a₂, a₃) ∈ ℝ³}
= {(a₁ − a₂, 2a₃) : a₁, a₂, a₃ ∈ ℝ}
R(T) = Span of {T(e₁), T(e₂), T(e₃)}, where {e₁, e₂, e₃} is the standard basis of ℝ³
= ℝ²
∴ ρ(T) = dim R(T) = 2
Now ρ(T) + ν(T) = 2 + 1 = 3 = dim ℝ³.
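Since T(a₁, a₂, a₃) = (a₁ − a₂, 2a₃) is multiplication by its standard matrix, the rank–nullity computation above can be reproduced numerically (assuming NumPy):

```python
import numpy as np

# Rank-nullity for T(a1, a2, a3) = (a1 - a2, 2*a3): its standard matrix is A.
A = np.array([[1, -1, 0],
              [0, 0, 2]])

rank = np.linalg.matrix_rank(A)          # dim R(T)
nullity = A.shape[1] - rank              # dim N(T), by the rank-nullity theorem
print(rank, nullity, rank + nullity)     # 2 1 3 = dim R^3
```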
7.5.11 Problem: Find the null space, range, rank and nullity of the linear transformation T : ℝ² → ℝ³ defined by T(x, y) = (x + y, x − y, y) for all (x, y) ∈ ℝ².
Solution: N(T) = {(x, y) : T(x, y) = 0} = {(x, y) : (x + y, x − y, y) = (0, 0, 0)}
= {(x, y) : x + y = 0, x − y = 0, y = 0} = {(0, 0)}
∴ Nullity of T is ν(T) = 0.
R(T) = {T(x, y) : (x, y) ∈ ℝ²} = {(x + y, x − y, y) : x, y ∈ ℝ}
Also we note that R(T) is spanned by {T(1, 0), T(0, 1)} = {(1, 1, 0), (1, −1, 1)}, so ρ(T) = 2.
7.5.14 SAQ: Give an example of two linear operators T and S on ℝ² such that TS = O but ST ≠ O.
7.6 Answers to SAQ’s:
7.4.6 SAQ: T : ℝ² → ℝ², T(a₁, a₂) = (2a₁ + a₂, a₁) ∀ (a₁, a₂) ∈ ℝ².
For α = (α₁, α₂), β = (β₁, β₂) ∈ ℝ² and a, b ∈ ℝ,
T(aα + bβ) = (2(aα₁ + bβ₁) + (aα₂ + bβ₂), aα₁ + bβ₁)
= a(2α₁ + α₂, α₁) + b(2β₁ + β₂, β₁)
= aT(α) + bT(β), ∀ α, β ∈ ℝ² and a, b ∈ ℝ.
So T is a linear transformation.
7.4.14 SAQ: T : ℝ² → ℝ², T(a₁, a₂) = (−a₁, −a₂) ∀ (a₁, a₂) ∈ ℝ².
T(aα + bβ) = (−(aα₁ + bβ₁), −(aα₂ + bβ₂))
= a(−α₁, −α₂) + b(−β₁, −β₂)
= aT(α) + bT(β), ∀ α, β ∈ ℝ² and a, b ∈ ℝ.
∴ T is a linear transformation.
7.4.15 SAQ: T : ℝ² → ℝ², T(a₁, a₂) = (a₁, 0) ∀ (a₁, a₂) ∈ ℝ².
T(aα + bβ) = (aα₁ + bβ₁, 0) = a(α₁, 0) + b(β₁, 0)
= aT(α) + bT(β), ∀ a, b ∈ ℝ and α, β ∈ ℝ².
∴ T is a linear transformation.
7.5.12 SAQ: T₁, T₂ : ℝ² → ℝ², T₁(x, y) = (y, x),
T₂(x, y) = (x, 0), ∀ (x, y) ∈ ℝ².
(T₂T₁)(x, y) = T₂(T₁(x, y)) = T₂(y, x) = (y, 0), while (T₁T₂)(x, y) = T₁(x, 0) = (0, x).
∴ (T₁T₂)(x, y) ≠ (T₂T₁)(x, y) if x ≠ 0 or y ≠ 0.
∴ T₁T₂ ≠ T₂T₁.
7.5.13 SAQ: T : ℝ² → ℝ², T(x, y) = (x + y, y)
T²(x, y) = T(T(x, y)) = T(x + y, y)
= (x + 2y, y), ∀ (x, y) ∈ ℝ².
Define T : ℝ² → ℝ² by T(x, y) = (x, 0), and
S : ℝ² → ℝ² by S(x, y) = (0, x), ∀ (x, y) ∈ ℝ².
(TS)(x, y) = T(S(x, y)) = T(0, x) = (0, 0) ∀ (x, y) ∈ ℝ² ⇒ TS = O
(ST)(x, y) = S(T(x, y)) = S(x, 0) = (0, x)
Here (ST)(x, y) ≠ (0, 0) when x ≠ 0 ⇒ ST ≠ O
∴ TS = O but ST ≠ O.
7.7 Summary: Linear transformation and properties, null space, range, rank, nullity are
discussed.
7.9.3 Let V = F²ˣ² and let P = [1 1; 2 2] ∈ V. Define T : V → V by T(A) = PA ∀ A ∈ V. Find the nullity of T.
- A. Satyanarayana Murty
LESSON - 8
MATRIX REPRESENTATION OF A LINEAR TRANSFORMATION
8.3 Introduction
8.8 Summary
8.10 Exercises
8.3 Introduction:
In this chapter we discuss matrix of a linear transformation from a vector space to a vector
space relative to ordered bases. We also study the matrix of a product of linear transformations
and the invertibility and isomorphism of linear transformations.
Also B¹ = {e₂, e₃, e₁} is an ordered basis of F³, but B¹ ≠ B as ordered bases. For Fⁿ, {e₁, e₂, …, eₙ} is called the standard ordered basis.
If α = a₁α₁ + a₂α₂ + … + aₙαₙ, we define the coordinate vector of α relative to the ordered basis B as the column vector
[α]_B = (a₁, a₂, …, aₙ)ᵗ
8.4.2 Example: For V₃(ℝ), B = {e₁, e₂, e₃} is the standard ordered basis, where
e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1)
If α = (4, 2, 3), then [α]_B = (4, 2, 3)ᵗ, since α = 4e₁ + 2e₂ + 3e₃.
8.4.3 Example: For V₃(ℝ), we know that B¹ = {α₁, α₂, α₃} is an ordered basis, where α₁ = (1, 0, 0), α₂ = (1, 1, 0), α₃ = (1, 1, 1). For α = (4, 3, 2), [α]_B¹ = (1, 1, 2)ᵗ, since (4, 3, 2) = 1·(1, 0, 0) + 1·(1, 1, 0) + 2·(1, 1, 1).
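The coordinate vector of example 8.4.3 is again obtained by solving a linear system with the basis vectors as columns (a numerical check, assuming NumPy):

```python
import numpy as np

# Example 8.4.3: coordinates of (4, 3, 2) w.r.t. the ordered basis
# {(1,0,0), (1,1,0), (1,1,1)} of R^3.
B = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)   # basis vectors as columns
alpha = np.array([4.0, 3.0, 2.0])

c = np.linalg.solve(B, alpha)
print(c)                                 # [1. 1. 2.]
```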
8.4.4 Definition: Let U(F) and V(F) be vector spaces with dim U = n, dim V = m. Let B₁ = {α₁, α₂, …, αₙ} be an ordered basis of U and B₂ = {β₁, β₂, …, βₘ} an ordered basis of V. If T : U → V is a linear transformation, write T(αⱼ) = a₁ⱼβ₁ + a₂ⱼβ₂ + … + aₘⱼβₘ for j = 1, 2, …, n.
The matrix A = [aᵢⱼ]ₘₓₙ is called the matrix of T relative to the ordered bases B₁, B₂, and we write [T; B₁, B₂] = A.
8.4.6 Example: If I and O are the identity and zero operators on an n-dimensional vector space V and B = {α₁, α₂, …, αₙ} is an ordered basis of V, then [I]_B = Iₙₓₙ and [O]_B = Oₙₓₙ.
8.4.7 Theorem: Let U(F), V(F) be vector spaces, dim U = n, dim V = m, and let B₁, B₂ be ordered bases of U and V respectively. If T₁, T₂ ∈ L(U, V), then [T₁ + T₂; B₁, B₂] = [T₁; B₁, B₂] + [T₂; B₁, B₂] and [aT₁; B₁, B₂] = a[T₁; B₁, B₂].
Proof: Write T₁(αⱼ) = a₁ⱼβ₁ + … + aₘⱼβₘ and T₂(αⱼ) = b₁ⱼβ₁ + … + bₘⱼβₘ, for j = 1, 2, …, n.
Then (T₁ + T₂)(αⱼ) = (a₁ⱼ + b₁ⱼ)β₁ + … + (aₘⱼ + bₘⱼ)βₘ, so [T₁ + T₂; B₁, B₂] = [aᵢⱼ + bᵢⱼ]ₘₓₙ = [T₁; B₁, B₂] + [T₂; B₁, B₂].
Also (aT₁)(αⱼ) = (aa₁ⱼ)β₁ + … + (aaₘⱼ)βₘ, so
[aT₁; B₁, B₂] = [aaᵢⱼ]ₘₓₙ = a[aᵢⱼ]ₘₓₙ = a[T₁; B₁, B₂].
8.4.8 Note: We defined the matrix of a linear transformation T : U → V w.r.t. the ordered bases B₁ = {α₁, α₂, …, αₙ}, B₂ = {β₁, β₂, …, βₘ} of U and V respectively as A = [aᵢⱼ]ₘₓₙ, where
T(αⱼ) = a₁ⱼβ₁ + a₂ⱼβ₂ + … + aₘⱼβₘ, for j = 1, 2, …, n.
So, to each linear transformation from an n-dimensional vector space to an m-dimensional vector space there corresponds an m × n matrix over F.
8.4.9 Theorem: Let A ∈ Fᵐˣⁿ. Define T : Fⁿˣ¹ → Fᵐˣ¹ by T(X) = AX, for all X ∈ Fⁿˣ¹. Then T is a linear transformation. If B₁, B₂ are the standard ordered bases of Fⁿˣ¹ and Fᵐˣ¹ respectively, then [T; B₁, B₂] = A.
Proof: We proved that T is a linear transformation (see theorem 7.4.13).
Let B₁ = {e₁, e₂, …, eₙ}, B₂ = {f₁, f₂, …, fₘ} be the standard ordered bases of Fⁿˣ¹, Fᵐˣ¹ respectively.
T(eⱼ) = Aeⱼ = (a₁ⱼ, a₂ⱼ, …, aₘⱼ)ᵗ, the jth column of A,
= a₁ⱼf₁ + a₂ⱼf₂ + … + aₘⱼfₘ.
∴ [T; B₁, B₂] = [aᵢⱼ]ₘₓₙ = A.
Moreover, for every α ∈ U the coordinate vectors satisfy [T(α)]_B₂ = [T; B₁, B₂][α]_B₁:
Let [α]_B₁ = (b₁, b₂, …, bₙ)ᵗ be the coordinate matrix of α w.r.t. the ordered basis B₁, i.e. α = b₁α₁ + … + bₙαₙ. Then
T(α) = T(b₁α₁ + … + bₙαₙ) = b₁T(α₁) + … + bₙT(αₙ)
= b₁(a₁₁β₁ + … + aₘ₁βₘ) + … + bₙ(a₁ₙβ₁ + … + aₘₙβₘ)
= c₁β₁ + … + cₘβₘ, where cᵢ = aᵢ₁b₁ + aᵢ₂b₂ + … + aᵢₙbₙ.
∴ [T(α)]_B₂ = (c₁, c₂, …, cₘ)ᵗ = (b₁a₁₁ + … + bₙa₁ₙ, …, b₁aₘ₁ + b₂aₘ₂ + … + bₙaₘₙ)ᵗ
= [T; B₁, B₂][α]_B₁.
8.5.1 Theorem: Let T : U → V and S : V → W be linear transformations, and let B₁ = {α₁, α₂, …, αₚ}, B₂ = {β₁, β₂, …, βₙ}, B₃ = {γ₁, γ₂, …, γₘ} be ordered bases of U, V, W respectively. Then [ST; B₁, B₃] = [S; B₂, B₃][T; B₁, B₂].
Proof: Let [T; B₁, B₂] = [bₖⱼ]ₙₓₚ and [S; B₂, B₃] = [aᵢₖ]ₘₓₙ, so that
T(αⱼ) = b₁ⱼβ₁ + … + bₙⱼβₙ, j = 1, 2, …, p
S(βₖ) = a₁ₖγ₁ + … + aₘₖγₘ, k = 1, 2, …, n
(ST)(αⱼ) = S(T(αⱼ))
= S(b₁ⱼβ₁ + … + bₙⱼβₙ) = b₁ⱼS(β₁) + … + bₙⱼS(βₙ)
= Σₖ bₖⱼ(Σᵢ aᵢₖγᵢ)
= Σᵢ (Σₖ aᵢₖbₖⱼ)γᵢ
= c₁ⱼγ₁ + … + cₘⱼγₘ, where cᵢⱼ = Σₖ aᵢₖbₖⱼ
∴ [ST; B₁, B₃] = [cᵢⱼ]ₘₓₚ = [aᵢₖ]ₘₓₙ[bₖⱼ]ₙₓₚ
= [S; B₂, B₃][T; B₁, B₂].
8.5.2 Corollary: If T, S are linear operators on an n-dimensional vector space V and B is an ordered basis of V, then [ST]_B = [S]_B[T]_B.
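The relation [ST] = [S][T] can be illustrated numerically with matrices of maps T : ℝ² → ℝ³ and S : ℝ³ → ℝ² chosen for the example (they are not from the text; assuming NumPy):

```python
import numpy as np

# Matrices of linear maps compose by matrix multiplication: [ST] = [S][T].
# Illustrative maps (standard bases): T: R^2 -> R^3, S: R^3 -> R^2.
T = np.array([[1, 0],
              [2, 1],
              [0, 3]])        # 3x2: matrix of T
S = np.array([[1, 1, 0],
              [0, 1, 1]])     # 2x3: matrix of S

ST = S @ T                    # matrix of the composite map x -> S(T(x))
x = np.array([5, -2])
assert np.array_equal(ST @ x, S @ (T @ x))   # the two computations agree
print(ST)
```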
Every linear transformation T : Fⁿˣ¹ → Fᵐˣ¹ is of the form L_A for some A ∈ Fᵐˣⁿ, where L_A(X) = AX:
Proof: Let B₁, B₂ be the standard ordered bases of Fⁿˣ¹ and Fᵐˣ¹ respectively, and let A = [T; B₁, B₂].
For every X ∈ Fⁿˣ¹, we have
[T(X)]_B₂ = [T; B₁, B₂][X]_B₁
= A[X]_B₁
⇒ T(X) = AX = L_A(X) ∀ X ∈ Fⁿˣ¹
∴ T = L_A.
We observe that:
1) L_{A+B} = L_A + L_B
2) L_{aA} = aL_A, a ∈ F
3) L_{AB} = L_A ∘ L_B
The correspondence T ↦ [T; B₁, B₂] is an isomorphism, so that L(U, V) ≅ Fᵐˣⁿ:
Proof: We know that L(U, V) and Fᵐˣⁿ are vector spaces over F. Let B₁ = {α₁, α₂, …, αₙ}, B₂ = {β₁, β₂, …, βₘ} be ordered bases of U and V, and define
φ(T) = [T; B₁, B₂] ∀ T ∈ L(U, V).
a) φ is linear: Let a, b ∈ F, T₁, T₂ ∈ L(U, V). Then
φ(aT₁ + bT₂) = [aT₁ + bT₂; B₁, B₂] = [aT₁; B₁, B₂] + [bT₂; B₁, B₂]
= a[T₁; B₁, B₂] + b[T₂; B₁, B₂] = aφ(T₁) + bφ(T₂)
∴ φ is a linear transformation.
b) φ is one-one: φ(T₁) = φ(T₂) ⇒ [T₁; B₁, B₂] = [T₂; B₁, B₂] ⇒ T₁(αⱼ) = T₂(αⱼ) for every j ⇒ T₁ = T₂.
c) φ is onto: Given A = [aᵢⱼ] ∈ Fᵐˣⁿ, by theorem 7.4.18 there is T ∈ L(U, V) with
T(αⱼ) = a₁ⱼβ₁ + … + aₘⱼβₘ, j = 1, 2, …, n
⇒ [T; B₁, B₂] = A, i.e. φ(T) = A.
Hence φ is an isomorphism and L(U, V) ≅ Fᵐˣⁿ.
8.6.2 Note: 1. T⁻¹(β) = α ⇔ T(α) = β, for α ∈ U, β ∈ V.
2. T is invertible iff T is a bijection.
3. If S, T are invertible, then ST is invertible with (ST)⁻¹ = T⁻¹S⁻¹, and (T⁻¹)⁻¹ = T.
8.6.3 Theorem: Let U(F), V(F) be vector spaces and T : U → V be a bijective linear transformation. Then T⁻¹ : V → U is also a linear transformation.
Proof: Let β₁, β₂ ∈ V. Since T is a bijection, there exist unique α₁, α₂ ∈ U with T(α₁) = β₁ ⇔ T⁻¹(β₁) = α₁ and T(α₂) = β₂ ⇔ T⁻¹(β₂) = α₂. Then for a, b ∈ F,
T⁻¹(aβ₁ + bβ₂) = T⁻¹(aT(α₁) + bT(α₂)) = T⁻¹(T(aα₁ + bα₂)) = aα₁ + bα₂ = aT⁻¹(β₁) + bT⁻¹(β₂).
8.6.5 Definition: If there is an isomorphism (i.e. a one-one onto linear transformation) from a vector space U(F) to a vector space V(F), then we say that U and V are isomorphic and we write U ≅ V.
8.6.6 Theorem: Let U(F), V(F) be finite dimensional vector spaces. Then U ≅ V ⇔ dim U = dim V.
Proof: 1. Suppose U ≅ V, and let T : U → V be an isomorphism. Let S = {α₁, α₂, …, αₙ} be a basis of U and S′ = {T(α₁), …, T(αₙ)}.
If a₁T(α₁) + … + aₙT(αₙ) = 0, then T(a₁α₁ + … + aₙαₙ) = 0, so a₁α₁ + … + aₙαₙ = 0 (T is one-one), whence a₁ = 0, a₂ = 0, …, aₙ = 0 (S is L.I.).
∴ S′ is L.I. Since T is onto, V = L(S′), so S′ is a basis of V and dim V = n = dim U.
2. Conversely suppose dim U = dim V = n, and let {α₁, α₂, …, αₙ}, {β₁, β₂, …, βₙ} be bases of U and V respectively.
Define T : U → V by T(a₁α₁ + … + aₙαₙ) = a₁β₁ + … + aₙβₙ.
Let a, b ∈ F and α = a₁α₁ + a₂α₂ + … + aₙαₙ, β = b₁α₁ + b₂α₂ + … + bₙαₙ ∈ U.
T(aα + bβ) = T((aa₁ + bb₁)α₁ + … + (aaₙ + bbₙ)αₙ) = (aa₁ + bb₁)β₁ + … + (aaₙ + bbₙ)βₙ
= a(a₁β₁ + … + aₙβₙ) + b(b₁β₁ + … + bₙβₙ) = aT(α) + bT(β), ∀ α, β ∈ U and a, b ∈ F.
∴ T is a linear transformation.
T is one-one: Let α, β ∈ U and T(α) = T(β).
α, β ∈ U ⇒ there exist aᵢ, bᵢ ∈ F with α = a₁α₁ + … + aₙαₙ, β = b₁α₁ + … + bₙαₙ.
Now T(α) = T(β) ⇒ a₁β₁ + … + aₙβₙ = b₁β₁ + … + bₙβₙ
⇒ (a₁ − b₁)β₁ + … + (aₙ − bₙ)βₙ = 0
⇒ aᵢ = bᵢ, i = 1, 2, …, n
⇒ α = β. ∴ T is one-one.
T is onto: Let β ∈ V; then there exist aᵢ ∈ F with β = a₁β₁ + … + aₙβₙ.
Write α = a₁α₁ + … + aₙαₙ ∈ U; then
T(α) = a₁β₁ + … + aₙβₙ = β. ∴ T is onto, and T is a bijection.
Hence T is an isomorphism.
∴ U ≅ V.
8.6.7 Corollary: If T : U → V is an isomorphism between finite dimensional vector spaces and B is a basis of U, then T(B) is a basis of V.
8.6.8 Theorem: Every n-dimensional vector space U(F) is isomorphic to Fⁿ.
Proof: Let {α₁, α₂, …, αₙ} be a basis of U. Define T : U → Fⁿ by T(a₁α₁ + … + aₙαₙ) = (a₁, a₂, …, aₙ).
For α = a₁α₁ + … + aₙαₙ, β = b₁α₁ + … + bₙαₙ ∈ U and a, b ∈ F,
T(aα + bβ) = T((aa₁ + bb₁)α₁ + … + (aaₙ + bbₙ)αₙ) = (aa₁ + bb₁, aa₂ + bb₂, …, aaₙ + bbₙ) = aT(α) + bT(β)
∴ T is a linear transformation.
T is one-one: T(α) = T(β) ⇒ (a₁, …, aₙ) = (b₁, …, bₙ) ⇒ aᵢ = bᵢ ∀ i ⇒ α = β.
T is onto: Given (a₁, …, aₙ) ∈ Fⁿ, the vector α = a₁α₁ + … + aₙαₙ ∈ U has T(α) = (a₁, …, aₙ). ∴ T is a bijection.
∴ T is an isomorphism.
Hence U ≅ Fⁿ.
8.6.9 Theorem (Fundamental theorem of homomorphisms): Let T : U → V be an onto linear transformation with null space N. Then U/N ≅ V.
Proof: Since N is the null space of T, N is a subspace of U and U/N is the quotient space with
(N + α) + (N + β) = N + (α + β) and a(N + α) = N + aα, ∀ N + α, N + β ∈ U/N and a ∈ F.
Define θ : U/N → V by
θ(N + α) = T(α), ∀ N + α ∈ U/N.
θ is well-defined: Let N + α, N + α₁ ∈ U/N and N + α = N + α₁.
⇒ α − α₁ ∈ N = ker T
⇒ T(α − α₁) = 0 ⇒ T(α) − T(α₁) = 0
⇒ T(α) = T(α₁) ⇒ θ(N + α) = θ(N + α₁)
∴ θ is well-defined.
θ is a linear transformation: Let a, b ∈ F and N + α, N + β ∈ U/N.
θ(a(N + α) + b(N + β)) = θ((N + aα) + (N + bβ)) = θ(N + (aα + bβ))
= T(aα + bβ) = aT(α) + bT(β)
= aθ(N + α) + bθ(N + β), ∀ N + α, N + β ∈ U/N and a, b ∈ F.
∴ θ is a linear transformation.
θ is one-one: Let N + α, N + β ∈ U/N and θ(N + α) = θ(N + β).
⇒ T(α) = T(β) ⇒ T(α − β) = 0
⇒ α − β ∈ N ⇒ N + α = N + β
∴ θ is one-one.
θ is onto: Let β ∈ V. Since T is onto, β = T(α) = θ(N + α) for some α ∈ U.
∴ β ∈ V ⇒ β = θ(N + α) for some N + α ∈ U/N.
∴ θ is onto.
∴ θ is an isomorphism, and U/N ≅ V.
8.6.10 Note: Even if T is not onto, the above theorem holds with V replaced by T(U).
Next, suppose T : U → V maps every linearly independent subset of U onto a linearly independent subset of V. Then T is non-singular:
Let α ∈ U with α ≠ 0. Then S = {α} is L.I.
⇒ T(S) = {T(α)} is L.I.
⇒ T(α) ≠ 0
∴ α ∈ U, α ≠ 0 ⇒ T(α) ≠ 0
∴ T is non-singular.
8.6.13 Theorem: Let U ( F ) , V ( F ) be vector spaces and T : U V be a linear transformation.
Then T is non-singular iff T is one-one.
Proof: Suppose T is non-singular, i.e. N(T) = {0}.
Let α, β ∈ U and T(α) = T(β) ⇒ T(α) − T(β) = 0
⇒ T(α − β) = 0
⇒ α − β ∈ N(T) = {0} ⇒ α = β
∴ T is one-one.
Conversely suppose that T is one-one.
Suppose T(α) = 0 ⇒ T(α) = T(0)
⇒ α = 0, since T is one-one.
∴ N(T) = {0} ⇒ T is non-singular.
Solved Problems
8.6.14 Problem: Let T : ℝ³ → ℝ² be the linear transformation with
T(1, 1, 1) = (2, 5)
T(1, 1, 0) = (3, 1)
T(1, 0, 0) = (2, 3)
Find [T; B₁, B₂], where B₁ = {(1, 1, 1), (1, 1, 0), (1, 0, 0)} and B₂ = {(1, 3), (1, 4)} are the ordered bases of ℝ³ and ℝ² respectively.
Solution: First express (a, b) ∈ ℝ² in terms of B₂: if (a, b) = x(1, 3) + y(1, 4), then
x + y = a ………… (1)
3x + 4y = b ………… (2)
(1) × 3: 3x + 3y = 3a
Subtracting, we get y = b − 3a, and x = a − y = 4a − b.
∴ T(1, 1, 1) = (2, 5) = (3)(1, 3) + (−1)(1, 4)
T(1, 1, 0) = (3, 1) = (11)(1, 3) + (−8)(1, 4)
T(1, 0, 0) = (2, 3) = (5)(1, 3) + (−3)(1, 4)
∴ [T; B₁, B₂] = [3 11 5; −1 −8 −3] (the columns are the B₂-coordinates of the images of the B₁ vectors).
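The matrix found in 8.6.14 can be recomputed at once: each column of [T; B₁, B₂] is the B₂-coordinate vector of the image of a B₁ vector, so all three columns are obtained by one matrix solve (assuming NumPy):

```python
import numpy as np

# Problem 8.6.14: columns of [T; B1, B2] are B2-coordinates of the images
# T(1,1,1)=(2,5), T(1,1,0)=(3,1), T(1,0,0)=(2,3).
B2 = np.array([[1.0, 1.0],
               [3.0, 4.0]])                # basis {(1,3), (1,4)} as columns
images = np.array([[2.0, 3.0, 2.0],
                   [5.0, 1.0, 3.0]])       # images of the B1 vectors as columns

M = np.linalg.solve(B2, images)            # solve B2 @ M = images, column by column
print(M)                                   # [[ 3. 11.  5.]  [-1. -8. -3.]]
```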
Solved Problem:
8.6.18 If A and B are subspaces of a vector space V over a field F, then show that (A + B)/B ≅ A/(A ∩ B).
Solution: A + B is a subspace of V and B is a subspace of A + B, so (A + B)/B is a vector space over F.
Also A ∩ B is a subspace of A, so A/(A ∩ B) is a vector space over F.
Any element of (A + B)/B is of the form B + (α + β), where α ∈ A and β ∈ B; and B + (α + β) = B + α,
since β ∈ B ⇒ B + β = B.
So any element of (A + B)/B is of the form B + α for some α ∈ A.
Define a mapping T : A → (A + B)/B by T(α) = B + α, ∀ α ∈ A.
Let α, β ∈ A, a, b ∈ F.
T(aα + bβ) = B + (aα + bβ) = (B + aα) + (B + bβ)
= a(B + α) + b(B + β)
= aT(α) + bT(β)
∀ α, β ∈ A and a, b ∈ F.
∴ T is a linear transformation.
Any element of (A + B)/B is of the form B + α for some α ∈ A,
and B + α = T(α).
∴ T is onto.
By the Fundamental Theorem of homomorphisms, we have
A/ker T ≅ (A + B)/B.
But ker T = {α ∈ A : T(α) = B + α = B} = {α ∈ A : α ∈ B} = A ∩ B.
∴ A/(A ∩ B) ≅ (A + B)/B.
8.6.16 SAQ: T : ℝ² → ℝ³, T(x, y) = (x + y, 2x − y, 7y).
Relative to the standard ordered bases B₁ = {α₁ = (1, 0), α₂ = (0, 1)} of ℝ² and B₂ of ℝ³, T(α₁) = (1, 2, 0) and T(α₂) = (1, −1, 7), so
[T; B₁, B₂] = [1 1; 2 −1; 0 7]
8.6.17 SAQ: Here B₂ = {β₁ = (1, 3), β₂ = (2, 5)} and
T(α₁) = (1, −1)
T(α₂) = (−5, 4)
T(α₃) = (3, 1)
To express (a, b) in terms of B₂: (a, b) = x(1, 3) + y(2, 5) gives x + 2y = a, 3x + 5y = b.
Multiplying the first equation by 3 and subtracting, y = 3a − b
3x + 5y = b with x = a − 2y gives x = a − 6a + 2b = 2b − 5a
∴ (a, b) = (2b − 5a)β₁ + (3a − b)β₂
T(α₁) = (1, −1) = −7β₁ + 4β₂; T(α₂) = (−5, 4) = 33β₁ − 19β₂; T(α₃) = (3, 1) = −13β₁ + 8β₂
∴ [T; B₁, B₂] = [−7 33 −13; 4 −19 8]
8.8 Summary:
In this chapter, the matrix of a linear transformation from a finite dimensional vector space to a finite dimensional vector space relative to ordered bases is discussed. The concepts of invertibility and isomorphism of vector spaces are discussed, along with some problems.
8.10 Exercises:
8.10.1 Let T : ℝ² → ℝ² be defined by T(x, y) = (4x + 2y, 2x + y). Find the matrix of T relative to the standard ordered basis.
8.10.3 If the matrix of T on ℝ² relative to the standard ordered basis is [2 3; 1 1], find the matrix of T relative to the basis {(1, 1), (1, −1)}.
8.10.4 Find a linear transformation T : ℝ³ → ℝ⁴ whose range is spanned by (1, −1, 2, 3) and (2, 3, −1, 0).
8.10.5 Let V = ℝ²ˣ², let P = [1 1; 2 2] be a fixed matrix in V, and let T : V → V be defined by
T(A) = PA, ∀ A ∈ V. Find the nullity of T.
8.11 Answers:
8.11.1 [3 2; 1 2]
8.11.2 [0 2 1; 1 4 0; 3 0 0]
8.11.3 [0 0 0; 1 0 1]
8.11.5 2
8.12 Model Examination Questions:
8.12.1 Explain the concept of a matrix of a linear transformation.
8.12.2 Define the invertibility of a linear operator.
8.12.3 State and prove the fundamental theorem of homomorphisms of vector spaces.
8.12.4 Find the matrix of the linear transformation
T : ℝ³ → ℝ² defined by T(x, y, z) = (x + y, 2z − x), ∀ (x, y, z) ∈ ℝ³,
relative to the ordered bases B₁ = {(1, 0, −1), (1, 1, 1), (1, 0, 0)} and B₂ = {(0, 1), (1, 0)}.
- A. Satyanarayana Murty
LESSON - 9
MATRICES AND DETERMINANTS
9.3 Introduction
9.5 Determinants
9.7 Summary
9.9 Exercises
9.3 Introduction:
In this chapter, we discuss the elementary operations and elementary matrices. We also
discuss determinants.
3. Rᵢⱼ(k): Multiplying every element of the jth row by k and adding it to the corresponding element
of the ith row.
Definition: A Matrix obtained from a unit matrix I n by subjecting the unit matrix to any one of the
elementary transformations is called an elementary matrix.
Eᵢⱼ : Elementary matrix obtained by interchanging the ith and jth rows (columns) in Iₙ.
Eᵢ(k) : Elementary matrix obtained by multiplying every element of the ith row (column) of Iₙ by k ≠ 0.
Eᵢⱼ(k) : Elementary matrix obtained by multiplying every element of the jth row of Iₙ by k and then
adding them to the corresponding elements of the ith row.
Similarly E′ᵢⱼ, E′ᵢ(k), E′ᵢⱼ(k) denote the elementary matrices obtained by applying the
corresponding elementary column operations on I.
1. |Eᵢⱼ| = −1
2. |Eᵢ(k)| = k, k ≠ 0
3. |Eᵢⱼ(k)| = 1
Every elementary row operation on A carries over to the product AB:
Let A be an m × n matrix with rows R₁, R₂, …, Rₘ (each of order 1 × n) and let B be an n × p matrix with columns C₁, C₂, …, C_p (each of order n × 1). We can write
A = (R₁; R₂; …; Rₘ) and B = (C₁ C₂ … C_p),
so that the ith row of AB is (RᵢC₁ RᵢC₂ … RᵢC_p).
This shows that if the rows R₁, R₂, …, Rₘ of A are subjected to an elementary row operation, then the rows of AB are subjected to the same elementary row operation.
Now to prove the theorem, let A be an m × n matrix and Iₘ be a unit matrix, so that A = IₘA; applying an elementary row operation to both sides gives e(A) = e(Iₘ)A = EA, where E is the corresponding elementary matrix.
9.4.5 Theorem: Eᵢⱼ⁻¹ = Eᵢⱼ; Eᵢ(k)⁻¹ = Eᵢ(1/k), k ≠ 0; Eᵢⱼ(k)⁻¹ = Eᵢⱼ(−k).
Proof: a) Eᵢⱼ is the E-matrix (elementary matrix) obtained from Iₙ by applying Rᵢⱼ. If we again apply Rᵢⱼ, we get Iₙ back:
EᵢⱼEᵢⱼ = I ⇒ Eᵢⱼ is invertible and Eᵢⱼ⁻¹ = Eᵢⱼ.
b) Eᵢ(1/k)Eᵢ(k) = I ⇒ Eᵢ(k) is invertible and Eᵢ(k)⁻¹ = Eᵢ(1/k).
c) Eᵢⱼ(k) is the E-matrix obtained from I by applying Rᵢⱼ(k). If we apply Rᵢⱼ(−k),
we get I:
Eᵢⱼ(−k)Eᵢⱼ(k) = I ⇒ Eᵢⱼ(k) is invertible and Eᵢⱼ(k)⁻¹ = Eᵢⱼ(−k).
Similarly, for the column operations,
(E′ᵢⱼ)⁻¹ = E′ᵢⱼ, (E′ᵢ(k))⁻¹ = E′ᵢ(1/k), k ≠ 0,
(E′ᵢⱼ(k))⁻¹ = E′ᵢⱼ(−k).
We note that every E-matrix is nonsingular, and the inverse of an E-matrix is also an E-matrix of the same type.
9.4.6 Definition: A matrix A is said to be equivalent to B if B can be obtained from A by a finite number
of E-operations.
We write A ~ B.
Since a) A ~ A, ~ is reflexive;
b) A ~ B ⇒ B ~ A, so ~ is symmetric;
c) A ~ B, B ~ C ⇒ A ~ C, so ~ is transitive;
∴ ~ is an equivalence relation.
9.4.7 Definition: A matrix A is said to be row (column) equivalent to B, denoted A ~R B (A ~C B),
if it is possible to obtain B from A by a finite number of E-row (column) operations.
Solved Problems:
9.4.8 Compute the matrices E₂₃, E₃₄(−1), E₂(−2), E₁₂ for I₄.
Solution: Applying R₂₃ to I₄:
E₂₃ =
1 0 0 0
0 0 1 0
0 1 0 0
0 0 0 1
Applying R₃₄(−1) to I₄:
E₃₄(−1) =
1 0 0 0
0 1 0 0
0 0 1 −1
0 0 0 1
Applying R₂(−2) to I₄:
E₂(−2) =
1 0 0 0
0 −2 0 0
0 0 1 0
0 0 0 1
Applying R₁₂ to I₄:
E₁₂ =
0 1 0 0
1 0 0 0
0 0 1 0
0 0 0 1
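Elementary matrices can be built by applying the corresponding row operation to I₄, and premultiplication by them performs that operation on any matrix — a sketch assuming NumPy (0-based row indices in code):

```python
import numpy as np

# Elementary matrices are obtained by applying the row operation to I4.
I4 = np.eye(4)

E23 = I4.copy(); E23[[1, 2]] = E23[[2, 1]]       # R23: swap rows 2 and 3
E34 = I4.copy(); E34[2] += -1 * E34[3]           # R34(-1): row3 += (-1) * row4
E2  = I4.copy(); E2[1] *= -2                     # R2(-2): scale row 2 by -2

# Premultiplying any A by an elementary matrix performs the same row operation on A.
A = np.arange(16.0).reshape(4, 4)
B = E23 @ A
assert np.array_equal(B[1], A[2]) and np.array_equal(B[2], A[1])
print(E34)
```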
9.4.9 Show that A = [1 2 1; −1 0 2; 2 1 −3] ~ I₃ (writing matrices by rows).
Solution:
R₂₁(1), R₃₁(−2): A ~ [1 2 1; 0 2 3; 0 −3 −5]
R₂(1/2): ~ [1 2 1; 0 1 3/2; 0 −3 −5]
R₁₂(−2), R₃₂(3): ~ [1 0 −2; 0 1 3/2; 0 0 −1/2]
R₃(−2): ~ [1 0 −2; 0 1 3/2; 0 0 1]
R₁₃(2), R₂₃(−3/2): ~ [1 0 0; 0 1 0; 0 0 1] = I₃
∴ A ~ I₃.
9.4.10 Express A = [1 2 3; 1 0 1; 1 1 1] as a product of E-matrices.
Solution:
R₂₁(−1), R₃₁(−1): A ~ [1 2 3; 0 −2 −2; 0 −1 −2]
R₂(−1/2): ~ [1 2 3; 0 1 1; 0 −1 −2]
R₁₂(−2), R₃₂(1): ~ [1 0 1; 0 1 1; 0 0 −1]
R₃(−1): ~ [1 0 1; 0 1 1; 0 0 1]
R₁₃(−1), R₂₃(−1): ~ I₃
∴ E₂₃(−1)E₁₃(−1)E₃(−1)E₃₂(1)E₁₂(−2)E₂(−1/2)E₃₁(−1)E₂₁(−1)A = I₃
⇒ A = E₂₁(1)E₃₁(1)E₂(−2)E₁₂(2)E₃₂(−1)E₃(−1)E₁₃(1)E₂₃(1).
9.4.11 Express A = [1 3 3; 1 4 3; 1 3 4] as a product of E-matrices.
Solution:
R₂₁(−1), R₃₁(−1): A ~ [1 3 3; 0 1 0; 0 0 1]
C₂₁(−3), C₃₁(−3): ~ [1 0 0; 0 1 0; 0 0 1] = I₃
∴ E₃₁(−1)E₂₁(−1) A E′₂₁(−3)E′₃₁(−3) = I₃
⇒ A = E₂₁(1)E₃₁(1) I₃ E′₃₁(3)E′₂₁(3) = E₂₁(1)E₃₁(1)E′₃₁(3)E′₂₁(3).
9.5.1 Definition: Let A = [a b; c d] be a 2 × 2 matrix over F. We define det A = ad − bc.
If A = [a b; c d], we define the adjoint of A as adj A = [d −b; −c a].
9.5.3 Example: Let A = [1 2; 3 4] and B = [3 2; 6 4]. Then A + B = [4 4; 9 8], so det(A + B) = 32 − 36 = −4, while det A + det B = (4 − 6) + (12 − 12) = −2. Thus det(A + B) ≠ det A + det B: the determinant is not additive.
9.5.4 Theorem: The function det : F²ˣ² → F is a linear function of each row of a 2 × 2 matrix
when the other row is fixed; i.e. if u, v, w ∈ F² and k ∈ F, then (writing a matrix by its rows)
det(u + kv; w) = det(u; w) + k det(v; w)
Proof: Let u = (a₁, b₁), v = (a₂, b₂), w = (a₃, b₃). Then
det(u; w) + k det(v; w) = det[a₁ b₁; a₃ b₃] + k det[a₂ b₂; a₃ b₃]
= (a₁b₃ − a₃b₁) + k(a₂b₃ − a₃b₂) = (a₁ + ka₂)b₃ − (b₁ + kb₂)a₃
= det[a₁ + ka₂ b₁ + kb₂; a₃ b₃] = det(u + kv; w)
Similarly we can show that det(w; u + kv) = det(w; u) + k det(w; v).
9.5.5 Theorem: A 2 × 2 matrix A is invertible if and only if det A ≠ 0, and then A⁻¹ = (1/det A) adj A.
Proof: Suppose det A ≠ 0. Let C = (1/det A)[a₂₂ −a₁₂; −a₂₁ a₁₁].
AC = [a₁₁ a₁₂; a₂₁ a₂₂] · (1/det A)[a₂₂ −a₁₂; −a₂₁ a₁₁]
= (1/det A)[det A 0; 0 det A] = [1 0; 0 1] = I
Similarly CA = I, so A is invertible and A⁻¹ = C = (1/det A) adj A.
Conversely suppose A is invertible. So the rank of A is 2. (For the definition of rank, see lesson 10;
the rank of an invertible n × n matrix is n.)
∴ a₁₁ ≠ 0 or a₂₁ ≠ 0. Suppose a₁₁ ≠ 0. Multiply R₁ by −a₂₁/a₁₁ and add to R₂, so that we get:
[a₁₁ a₁₂; 0 a₂₂ − a₁₂a₂₁/a₁₁]
The rank of this matrix is 2, so that a₂₂ − a₁₂a₂₁/a₁₁ ≠ 0
⇒ a₁₁a₂₂ − a₁₂a₂₁ ≠ 0, i.e. det A ≠ 0.
For A in F^(n x n),
det A = sum over j = 1, ..., n of (-1)^(1+j) a1j det A1j,
where Aij is the (n-1) x (n-1) matrix obtained from A by deleting the ith row and jth column of A.
If we write cij = (-1)^(i+j) det Aij (the cofactor of aij), then
9.5.7 Note: det A = sum of the products of each entry in R1 of A and the corresponding cofactor.
9.5.8 Example: Find det A using cofactor expansion along the first row of the matrix
A =
[ 1  3 -3]
[-3 -5  2]
[-4  4 -6]
Solution: det A = (-1)^(1+1) a11 det A11 + (-1)^(1+2) a12 det A12 + (-1)^(1+3) a13 det A13
= 1(30 - 8) - 3(18 + 8) + (-3)(-12 - 20) = 22 - 78 + 96 = 40.
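The cofactor expansion used above is easy to mechanize. The following Python/NumPy sketch (det_first_row is our own helper, not a library routine) expands recursively along the first row and reproduces det A = 40 for the matrix of Example 9.5.8.

```python
import numpy as np

def det_first_row(A):
    """Cofactor expansion of det A along the first row (recursive)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for j in range(n):
        # minor: delete the first row and the (j+1)th column
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_first_row(minor)
    return total

A = np.array([[1, 3, -3], [-3, -5, 2], [-4, 4, -6]])
print(det_first_row(A))   # 22 - 78 + 96 = 40
```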
9.5.9 Theorem: det In = 1.
Proof: For n = 1, det I1 = 1. Assume the theorem is true for n - 1. Expanding In along the first row, we get det In = 1 . det In-1 + 0 + ... + 0 = 1 . 1 = 1. So the theorem is true for n, and by mathematical induction the theorem is true for all positive integers n. Hence the theorem.
The determinant of a square matrix can be evaluated by cofactor expansion along any row, i.e. if A is in F^(n x n), then
det A = sum over j = 1, ..., n of (-1)^(i+j) aij det Aij,
where Aij is the (n-1) x (n-1) matrix obtained from A by deleting the ith row and jth column.
9.5.10 Theorem: If A is in F^(n x n) and B is a matrix obtained from A by interchanging two rows of A, then det B = -det A.
Proof: Let a1, a2, ..., an be the rows of A, and let B be the matrix obtained from A by interchanging the rth row and sth row, r < s. Now consider the matrix M in which both the rth and sth rows are replaced by ar + as. Since M has two equal rows, det M = 0. Expanding det M by linearity in the rth and sth rows (writing only those two rows),
0 = det M = det(..., ar, ..., ar, ...) + det(..., ar, ..., as, ...) + det(..., as, ..., ar, ...) + det(..., as, ..., as, ...)
= 0 + det A + det B + 0.
Hence det B = -det A.
Theorem: For A, B in F^(n x n), det(AB) = (det A)(det B); further, A is invertible if and only if det A is not 0, and in that case det A^(-1) = 1/det A.
Proof: First we prove the product rule when A is an elementary matrix. If A is a matrix obtained from I by interchanging two rows of I, then det A = -1 and, by 9.5.10, det(AB) = -det B = det A . det B. Similarly we can prove it when A is an elementary matrix of the other types.
If A is invertible, write A = Em Em-1 ... E2 E1 as a product of elementary matrices. Then
det(AB) = det Em ... det E1 . det B = det A . det B.
Further, if A is invertible, then det A . det A^(-1) = det(A A^(-1)) = det I = 1, so det A is not 0 and det A^(-1) = 1/det A.
9.5.13 Theorem: Let A F nn and let B be a matrix obtained by adding a multiple of one row to
another row of A. Then det B det A .
Proof: Suppose B is the n x n matrix obtained from A by adding k times the rth row to the sth row, where r and s are different. Let a1, a2, ..., an be the rows of A and b1, b2, ..., bn the rows of B; then bi = ai for i not equal to s, and bs = as + k ar. By linearity in the sth row, and since a determinant with two equal rows is zero, det B = det A + k . 0 = det A.
Corollary: Let B be the matrix obtained from A by adding ci times row i to row r for each i different from r; then det B = det A.
9.5.15 Note: If A is in F^(n x n), then det(kA) = k^n det A, det(-A) = (-1)^n det A, and det A = 0 if two rows are identical.
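The properties in Note 9.5.15 can be spot-checked numerically; a small Python/NumPy sketch (the random test matrix is our own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 4)).astype(float)
k, n = 3.0, 4

# det(kA) = k^n det A and det(-A) = (-1)^n det A for an n x n matrix
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))
assert np.isclose(np.linalg.det(-A), (-1)**n * np.linalg.det(A))

# two identical rows force det = 0
B = A.copy()
B[2] = B[0]
assert np.isclose(np.linalg.det(B), 0.0)
```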
9.5.16 Theorem: det A^T = det A.
Proof: If A is not invertible, then A^T is not invertible, so det A^T = 0 = det A.
Suppose A is invertible. Then A is a product of elementary matrices, say A = Em Em-1 ... E2 E1. Since det E^T = det E for every elementary matrix E,
det A^T = det(E1^T E2^T ... Em^T) = det E1 ... det Em = det(Em ... E1) = det A.
Hence the theorem.
Solved Problem:
9.5.17 Evaluate the determinant of the matrix A =
[2 0 0 1]
[0 1 3 3]
[2 3 5 2]
[4 4 4 6]
Solution: Type 3 row operations do not change the determinant, so we reduce A to an upper triangular matrix B:
R31(-1), R41(-2):
[2 0 0 1]
[0 1 3 3]
[0 3 5 1]
[0 4 4 4]
R32(-3), R42(-4):
[2 0  0  1]
[0 1  3  3]
[0 0 -4 -8]
[0 0 -8 -8]
R43(-2):
[2 0  0  1]
[0 1  3  3]
[0 0 -4 -8]
[0 0  0  8]
= B (say).
Hence det A = det B = 2 . 1 . (-4) . 8 = -64.
9.5.18 SAQ: If w is a complex cube root of unity, then show that det
[1   w   w^2]
[w   w^2 1  ]
[w^2 1   w  ]
= 0.
9.5.19 SAQ: Show that det
[1^2 2^2 3^2 4^2]
[2^2 3^2 4^2 5^2]
[3^2 4^2 5^2 6^2]
[4^2 5^2 6^2 7^2]
= 0.
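For the matrix of SAQ 9.5.19, the vanishing of the determinant (and the fact that its rank is only 3) can be confirmed numerically; a Python/NumPy check:

```python
import numpy as np

# M[i][j] = (i + j + 1)^2: the matrix of consecutive squares from SAQ 9.5.19
M = np.array([[(i + j + 1) ** 2 for j in range(4)] for i in range(4)], dtype=float)
print(np.linalg.det(M))            # ~0: the columns are linearly dependent
print(np.linalg.matrix_rank(M))    # 3
```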
9.5.20 SAQ: Show that det
[b-c c-a a-b]
[c-a a-b b-c]
[a-b b-c c-a]
= 0.
Solved Problems:
9.5.21 Show that det
[1+a 1   1  ]
[1   1+b 1  ]
[1   1   1+c]
= abc (1 + 1/a + 1/b + 1/c).
Solution: Taking a, b, c common from R1, R2, R3 respectively,
LHS = abc . det
[1/a + 1  1/a      1/a    ]
[1/b      1/b + 1  1/b    ]
[1/c      1/c      1/c + 1]
Applying R12(1), R13(1) (adding R2 and R3 to R1), every entry of R1 becomes 1 + 1/a + 1/b + 1/c:
= abc (1 + 1/a + 1/b + 1/c) . det
[1    1        1      ]
[1/b  1/b + 1  1/b    ]
[1/c  1/c      1/c + 1]
Applying C21(-1), C31(-1):
= abc (1 + 1/a + 1/b + 1/c) . det
[1    0  0]
[1/b  1  0]
[1/c  0  1]
Expanding with R1, the last determinant is 1, so the determinant of the given matrix is abc (1 + 1/a + 1/b + 1/c).
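The identity just proved can be sanity-checked for sample values of a, b, c; a small Python/NumPy sketch (lhs and rhs are our own names):

```python
import numpy as np

def lhs(a, b, c):
    return np.linalg.det(np.array([[1 + a, 1, 1],
                                   [1, 1 + b, 1],
                                   [1, 1, 1 + c]], dtype=float))

def rhs(a, b, c):
    return a * b * c * (1 + 1/a + 1/b + 1/c)

# e.g. a, b, c = 1, 2, 3 gives 17 on both sides
for a, b, c in [(1, 2, 3), (2.5, -1.5, 4.0)]:
    assert np.isclose(lhs(a, b, c), rhs(a, b, c))
```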
Show that det
[b+c c+a a+b]
[c+a a+b b+c]
[a+b b+c c+a]
= 2 det
[a b c]
[b c a]
[c a b]
Solution: Applying C12(1), C13(1) (adding C2 and C3 to C1), each entry of C1 becomes 2(a + b + c):
LHS = det
[2(a+b+c) c+a a+b]
[2(a+b+c) a+b b+c]
[2(a+b+c) b+c c+a]
= 2 det
[a+b+c c+a a+b]
[a+b+c a+b b+c]
[a+b+c b+c c+a]
C21(-1), C31(-1):
= 2 det
[a+b+c -b -c]
[a+b+c -c -a]
[a+b+c -a -b]
C12(1), C13(1):
= 2 det
[a -b -c]
[b -c -a]
[c -a -b]
C2(-1), C3(-1):
= 2 det
[a b c]
[b c a]
[c a b]
9.5.18 SAQ (Solution): Applying C12(1), C13(1), each entry of the first column becomes 1 + w + w^2 = 0:
det
[1+w+w^2 w   w^2]
[w+w^2+1 w^2 1  ]
[w^2+1+w 1   w  ]
= det
[0 w   w^2]
[0 w^2 1  ]
[0 1   w  ]
= 0.
9.5.19 SAQ (Solution): Applying C32(-1) and C41(-1), and using (k+1)^2 - k^2 = 2k + 1 and (k+3)^2 - k^2 = 3(2k + 3), the determinant becomes
det
[1^2 2^2 5.1  5.3 ]
[2^2 3^2 7.1  7.3 ]
[3^2 4^2 9.1  9.3 ]
[4^2 5^2 11.1 11.3]
Since the new C4 = 3 C3, the columns are linearly dependent and the determinant is 0.
9.5.20 SAQ (Solution): Applying C12(1), we get
det
[b-a c-a a-b]
[c-b a-b b-c]
[a-c b-c c-a]
and then applying C13(1), each entry of C1 becomes (b-c) + (c-a) + (a-b) = 0:
det
[0 c-a a-b]
[0 a-b b-c]
[0 b-c c-a]
= 0.
9.7 Summary:
In this lesson, we discussed elementary transformations and applied these techniques to the evaluation of determinants.
9.9 Exercises:
1. If det
[x x^2 1+x^3]
[y y^2 1+y^3]
[z z^2 1+z^3]
= 0 and x, y, z are all different, show that xyz = -1.
2. Show that det
[1 x x^2]
[1 y y^2]
[1 z z^2]
= (x - y)(y - z)(z - x).
3. Show that det
[b^2+c^2 ab      ac     ]
[ab      c^2+a^2 bc     ]
[ac      bc      a^2+b^2]
= 4 a^2 b^2 c^2.
(Hint: the given matrix is the square of
[0 c b]
[c 0 a]
[b a 0] .)
4. Show that det
[2bc-a^2 c^2     b^2    ]
[c^2     2ca-b^2 a^2    ]
[b^2     a^2     2ab-c^2]
= (a^3 + b^3 + c^3 - 3abc)^2.
5. Find the value of the determinant of
[1 2 3]
[4 5 6]
[7 8 9]
6. Evaluate det
[a+b+2c a      b     ]
[c      b+c+2a b     ]
[c      a      c+a+2b]
Answer: 6. 2(a + b + c)^3
3. Show that det
[1+a 1   1   1  ]
[1   1+b 1   1  ]
[1   1   1+c 1  ]
[1   1   1   1+d]
= abcd (1 + 1/a + 1/b + 1/c + 1/d).
4. Show that det
[a+x a   a   a  ]
[b   b+y b   b  ]
[c   c   c+z c  ]
[d   d   d   d+w]
= xyzw (1 + a/x + b/y + c/z + d/w).
- A. Satyanarayana Murty
LESSON - 10
RANK OF A MATRIX
10.1 Objective of the Lesson:
In this lesson, we define the rank of a matrix and use elementary operations to compute the
rank of a matrix. We also discuss the procedure for computing the inverse of an invertible matrix.
10.3 Introduction
10.7 Summary
10.9 Exercises
10.3 Introduction:
In this lesson, the concept of the rank of a matrix is introduced. We use elementary operations to compute the rank of a matrix and the rank of a linear transformation. We also introduce a procedure for computing the inverse of an invertible matrix using elementary transformations.
For A in F^(m x n), let LA : F^n -> F^m be the linear transformation defined by LA(X) = AX for all X in F^n.
We observe that many results about the rank of a matrix can be obtained from the corre-
sponding results about linear transformation.
We know that every matrix A is the matrix representation of the linear transformation LA w.r.t. appropriate standard ordered bases, i.e. there are ordered bases B1, B2 such that A = [LA; B1, B2].
For all k in F, A in F^(m x n), B in F^(n x p): L(AB) = LA composed with LB, and L(In) is the identity map on F^n.
We also know that if T is in L(U, V), where U(F), V(F) are finite dimensional vector spaces and B1, B2 are ordered bases of U and V respectively, then
Rank T = Rank [T; B1, B2].
So the problem of finding rank of a linear transformation is reduced to that of finding rank
of a matrix.
Now we prove
10.4.2 Theorem: Let A be an m x n matrix. If P and Q are invertible m x m and n x n matrices respectively, then
(a) Rank AQ = Rank A, (b) Rank PA = Rank A, (c) Rank PAQ = Rank A.
Proof: (a) We have LA : F^n -> F^m and LQ : F^n -> F^n. LQ is onto: for any y in F^n (the codomain), taking X = Q^(-1) y gives LQ(X) = Q Q^(-1) y = y. Hence
R(L(AQ)) = LA(LQ(F^n)) = LA(F^n) = R(LA),
so Rank AQ = dim R(L(AQ)) = dim R(LA) = Rank LA = Rank A. Parts (b) and (c) are proved similarly.
10.4.4 Note: Pre-multiplication (Post multiplication) of a matrix by an elementary matrix and hence
by a finite number of elementary matrices (by a finite number of elementary operations) does not
alter the rank of the matrix.
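Note 10.4.4 can be illustrated numerically: multiplying by an elementary matrix on either side leaves the rank unchanged. A Python/NumPy sketch (the sample matrix is our own):

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 4, 6], [1, 0, 1]], dtype=float)  # rank 2 (row 2 = 2 * row 1)
E = np.eye(3)
E[2, 0] = 5.0          # elementary matrix for the row operation R31(5)
r = np.linalg.matrix_rank

assert r(A) == 2
assert r(E @ A) == r(A)                          # pre-multiplication: row operation
assert r(A @ np.eye(3)[:, [1, 0, 2]]) == r(A)    # post-multiplication: column interchange
```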
10.4.5 Theorem: The rank of a matrix is equal to the maximum number of its linearly independent
columns i.e. the rank of a matrix is the dimension of the subspace generated by its columns.
Proof: Rank A = Rank LA = dim R(LA). Let B = { e1, ..., en } be the standard basis of F^n. Since B spans F^n, R(LA) = LA(F^n) is spanned by LA(e1), ..., LA(en). But LA(ej) = A ej = (a1 a2 ... an) ej = aj, the jth column of A. Hence R(LA) is the subspace generated by the columns of A, and Rank A is its dimension.
10.4.6 SAQ: Find the rank of the matrix A =
[1 0 1]
[0 1 1]
[1 0 1]
10.4.7 SAQ: Find the rank of A =
[1 2 1]
[1 0 3]
[1 1 2]
10.4.8 Theorem: Every non-zero matrix A can be reduced to the form
[Ir O]
[O  O]
by a finite number of elementary operations, where Ir is the unit matrix of order r and r is the rank of A.
Proof: Since A is not O, A has a nonzero entry k. By interchanging rows and columns we can bring k to the (1, 1) position; call the resulting matrix B. Now multiplying R1 of B with 1/k, we get a matrix C with leading element 1:
C =
[1   c12 c13 ... c1n]
[c21 c22 c23 ... c2n]
[...                ]
[cm1 cm2 cm3 ... cmn]
Adding suitable multiples of the column C1 of C to the other columns of C, and adding suitable multiples of the first row to the remaining rows of C, we get a matrix D in which all elements of R1 and C1 of D, except the leading element 1, are zeros:
D =
[1 0 ... 0]
[0        ]
[.   A1   ]
[0        ]
where A1 is an (m-1) x (n-1) matrix.
If A1 = O, then A ~
[I1 O]
[O  O]
and in this case there is nothing more to prove.
The elementary operations applied on A1 do not alter the elements of either 1st row or 1st
column of D.
Otherwise we repeat the process on A1. Proceeding like this, we get a matrix
[Ip O]
[O  O]
with p = r, since elementary operations do not alter the rank. Hence A can be reduced to the form
[Ir O]
[O  O]
10.4.9 Note: 1. Using elementary operations, a matrix A of rank r can be reduced to one of the forms
Ir ,   [Ir O] ,
[Ir]
[O ] ,
[Ir O]
[O  O]
called its normal form.
10.4.10 Note: 1) To reduce A to normal form, sometimes, both row operations and column opera-
tions are to be applied.
A can be reduced to the normal form
[Ir O]
[O  O]
For getting this, let the number of row operations used be s and the number of column operations used be t. Every elementary row (column) operation on A is equivalent to pre- (post-) multiplication of A by a suitable elementary matrix, so there are elementary matrices P1, P2, ..., Ps; Q1, Q2, ..., Qt such that
Ps Ps-1 ... P2 P1 A Q1 Q2 ... Qt =
[Ir O]
[O  O]
Writing P = Ps ... P2 P1 and Q = Q1 Q2 ... Qt, P and Q are products of elementary matrices and hence non-singular, and PAQ is in normal form. In particular, rho(PA) = rho(A) and rho(PAQ) = rho(A). (If A is invertible, then r = n and A can be reduced to In.)
Theorem: rho(A^T) = rho(A).
Proof: There exist non-singular matrices P (m x m) and Q (n x n) such that
PAQ = D =
[Ir O]
[O  O]
with rho(A) = rho(D) = r. Then D^T = (PAQ)^T = Q^T A^T P^T. Since P and Q are invertible, P^T and Q^T are invertible, so
rho(A^T) = rho(Q^T A^T P^T) = rho(D^T).
Since D^T is the n x m matrix
[Ir O]
[O  O]
we get rho(D^T) = r. Hence rho(A^T) = rho(D^T) = r = rho(A).
Hence the theorem.
Theorem: Let T : U -> V and S : V -> W be linear transformations, and let A, B be matrices of suitable orders. Then
(i) rho(ST) <= rho(S), (ii) rho(AB) <= rho(A), (iii) rho(AB) <= rho(B), (iv) rho(ST) <= rho(T).
Proof: (i) R(ST) = ST(U) = S(T(U)) is contained in S(V) = R(S), since T(U) is contained in V. Hence rho(ST) <= rho(S).
(ii) rho(AB) = rho(L(AB)) = rho(LA LB) <= rho(LA) = rho(A), by (i).
(iii) rho(AB) = rho((AB)^T) = rho(B^T A^T) <= rho(B^T) = rho(B), by (ii).
(iv) Let A1 = [T; B1, B2] and A2 = [S; B2, B3]. Then A2 A1 = [ST; B1, B3], so rho(ST) = rho(A2 A1) <= rho(A1) = rho(T), by (iii).
10.4.19 Note: rho(AB) <= min { rho(A), rho(B) }, where A, B are matrices of suitable orders.
We have A^(-1) (A | In) = (A^(-1) A | A^(-1) In) = (In | A^(-1)) ----------- (1)
10.5.3 Theorem: If A is an invertible n x n matrix and if, by a finite number of elementary row operations, the matrix (A | In) is transformed into a matrix of the form (In | B), then B = A^(-1).
Proof: Suppose (A | In) is transformed into (In | B) by elementary row operations, and let E1, E2, ..., Ep be the elementary matrices corresponding to these elementary row operations. Then
Ep Ep-1 ... E1 (A | In) = (In | B).
Let M = Ep Ep-1 ... E1. Then (MA | M) = (In | B), so MA = In and M = B. Hence B = A^(-1).
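Theorem 10.5.3 is exactly the Gauss-Jordan method for inverting a matrix. The following Python/NumPy sketch (inverse_gauss_jordan is our own function; it adds partial pivoting, which the theorem does not need but floating-point arithmetic does) row-reduces [A | I] to [I | B]:

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Row-reduce [A | I] to [I | B]; then B = A^{-1} (raises if A is singular)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        p = i + np.argmax(np.abs(M[i:, i]))   # partial pivoting
        if np.isclose(M[p, i], 0.0):
            raise ValueError("matrix is not invertible")
        M[[i, p]] = M[[p, i]]
        M[i] /= M[i, i]                       # make the pivot 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]        # clear the rest of column i
    return M[:, n:]

A = np.array([[0, 2, 4], [2, 4, 2], [3, 3, 1]])
B = inverse_gauss_jordan(A)
print(np.round(B, 4))   # same inverse as found by hand in Problem 10.5.12
```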
10.5.5 Find the rank of (a) A =
[1 2 1 1]
[1 1 1 1]
(b) A =
[1 3 1 1]
[1 0 1 1]
[0 3 0 0]
Solution: (a) Neither row is a multiple of the other, so the two rows are L.I. and rho(A) = 2.
(b) R21(-1):
[1  3 1 1]
[0 -3 0 0]
[0  3 0 0]
R32(1), R2(-1/3):
[1 3 1 1]
[0 1 0 0]
[0 0 0 0]
= B (say).
A is reduced to B, which has two non-zero rows, so rho(A) = 2.
10.5.6 Find the rank of A =
[ 1 2 3 1]
[ 2 1 1 1]
[-1 1 1 0]
Solution:
R21(-2), R31(1):
[1  2  3  1]
[0 -3 -5 -1]
[0  3  4  1]
R32(1):
[1  2  3  1]
[0 -3 -5 -1]
[0  0 -1  0]
R2(-1/3), R3(-1):
[1 2 3   1  ]
[0 1 5/3 1/3]
[0 0 1   0  ]
= B, the Echelon form of A. Hence rho(A) = 3.
10.5.7 Find the rank of A =
[2  3 -1 -1]
[1 -1 -2 -4]
[3  1  3 -2]
[6  3  0 -7]
Solution:
R12 (interchange R1 and R2):
[1 -1 -2 -4]
[2  3 -1 -1]
[3  1  3 -2]
[6  3  0 -7]
R21(-2), R31(-3), R41(-6):
[1 -1 -2 -4]
[0  5  3  7]
[0  4  9 10]
[0  9 12 17]
R23(-1):
[1 -1 -2 -4]
[0  1 -6 -3]
[0  4  9 10]
[0  9 12 17]
R32(-4), R42(-9):
[1 -1 -2 -4]
[0  1 -6 -3]
[0  0 33 22]
[0  0 66 44]
R43(-2):
[1 -1 -2 -4]
[0  1 -6 -3]
[0  0 33 22]
[0  0  0  0]
R3(1/33):
[1 -1 -2 -4 ]
[0  1 -6 -3 ]
[0  0  1 2/3]
[0  0  0  0 ]
= B (say). The Echelon form B has three non-zero rows, so rho(A) = 3.
10.5.8 Reduce the matrix A =
[ 1 1 2 3]
[-4 1 0 2]
[ 0 3 0 4]
[ 0 1 0 2]
to the normal form and hence find the rank.
Solution:
R21(4):
[1 1 2  3]
[0 5 8 14]
[0 3 0  4]
[0 1 0  2]
R24 (interchange R2 and R4):
[1 1 2  3]
[0 1 0  2]
[0 3 0  4]
[0 5 8 14]
C21(-1), C31(-2), C41(-3):
[1 0 0  0]
[0 1 0  2]
[0 3 0  4]
[0 5 8 14]
R32(-3), R42(-5):
[1 0 0  0]
[0 1 0  2]
[0 0 0 -2]
[0 0 8  4]
C42(-2):
[1 0 0  0]
[0 1 0  0]
[0 0 0 -2]
[0 0 8  4]
R34 (interchange R3 and R4):
[1 0 0  0]
[0 1 0  0]
[0 0 8  4]
[0 0 0 -2]
R3(1/8), R4(-1/2):
[1 0 0   0]
[0 1 0   0]
[0 0 1 1/2]
[0 0 0   1]
C43(-1/2): I4.
Thus A ~ I4 and rho(A) = 4.
10.5.9 Reduce the matrix A =
[0 2 3]
[2 4 0]
[3 0 1]
using row operations to a matrix B and obtain the rank.
Solution:
R12, then R1(1/2):
[1 2 0]
[0 2 3]
[3 0 1]
R31(-3):
[1  2 0]
[0  2 3]
[0 -6 1]
R32(3):
[1 2  0]
[0 2  3]
[0 0 10]
R3(1/10), then R23(-3):
[1 2 0]
[0 2 0]
[0 0 1]
R12(-1), R2(1/2):
[1 0 0]
[0 1 0]
[0 0 1]
= I3.
Thus A ~ B = I3 and rho(A) = 3.
10.5.10 Find non-singular matrices P and Q such that PAQ is of the form
[Ir O]
[O  O]
where A =
[1  1  2]
[1  2  3]
[0 -1 -1]
Solution: Write A = I3 A I3:
[1  1  2]   [1 0 0]   [1 0 0]
[1  2  3] = [0 1 0] A [0 1 0]
[0 -1 -1]   [0 0 1]   [0 0 1]
Apply R21(-1):
[1  1  2]   [ 1 0 0]   [1 0 0]
[0  1  1] = [-1 1 0] A [0 1 0]
[0 -1 -1]   [ 0 0 1]   [0 0 1]
Apply C21(-1), C31(-2):
[1  0  0]   [ 1 0 0]   [1 -1 -2]
[0  1  1] = [-1 1 0] A [0  1  0]
[0 -1 -1]   [ 0 0 1]   [0  0  1]
Apply R32(1):
[1 0 0]   [ 1 0 0]   [1 -1 -2]
[0 1 1] = [-1 1 0] A [0  1  0]
[0 0 0]   [-1 1 1]   [0  0  1]
Apply C32(-1):
[1 0 0]   [ 1 0 0]   [1 -1 -1]
[0 1 0] = [-1 1 0] A [0  1 -1]
[0 0 0]   [-1 1 1]   [0  0  1]
If P =
[ 1 0 0]
[-1 1 0]
[-1 1 1]
and Q =
[1 -1 -1]
[0  1 -1]
[0  0  1]
then PAQ =
[I2     O(2x1)]
[O(1x2) O(1x1)]
10.5.11 Reduce A =
[5  3 14 4]
[0  1  2 1]
[1 -1  2 0]
to Echelon form and hence find the rank.
Solution:
R13 (interchange R1 and R3):
[1 -1  2 0]
[0  1  2 1]
[5  3 14 4]
R31(-5):
[1 -1 2 0]
[0  1 2 1]
[0  8 4 4]
R32(-8):
[1 -1   2  0]
[0  1   2  1]
[0  0 -12 -4]
R3(-1/12):
[1 -1 2   0]
[0  1 2   1]
[0  0 1 1/3]
= B. The Echelon form has three non-zero rows, so rho(A) = 3.
10.5.12 Verify whether the matrix A =
[0 2 4]
[2 4 2]
[3 3 1]
is invertible. If so, find its inverse.
Solution: The augmented matrix is
(A | I) =
[0 2 4 | 1 0 0]
[2 4 2 | 0 1 0]
[3 3 1 | 0 0 1]
R12:
[2 4 2 | 0 1 0]
[0 2 4 | 1 0 0]
[3 3 1 | 0 0 1]
R1(1/2):
[1 2 1 | 0 1/2 0]
[0 2 4 | 1 0   0]
[3 3 1 | 0 0   1]
R31(-3):
[1  2  1 | 0 1/2  0]
[0  2  4 | 1 0    0]
[0 -3 -2 | 0 -3/2 1]
R2(1/2):
[1  2  1 | 0   1/2  0]
[0  1  2 | 1/2 0    0]
[0 -3 -2 | 0  -3/2  1]
R12(-2), R32(3):
[1 0 -3 | -1  1/2   0]
[0 1  2 | 1/2 0     0]
[0 0  4 | 3/2 -3/2  1]
R3(1/4):
[1 0 -3 | -1  1/2   0  ]
[0 1  2 | 1/2 0     0  ]
[0 0  1 | 3/8 -3/8  1/4]
R13(3), R23(-2):
[1 0 0 |  1/8 -5/8  3/4]
[0 1 0 | -1/4  3/4 -1/2]
[0 0 1 |  3/8 -3/8  1/4]
Hence A is invertible and
A^(-1) =
[ 1/8 -5/8  3/4]
[-1/4  3/4 -1/2]
[ 3/8 -3/8  1/4]
10.5.13 Using elementary operations, verify whether A =
[1 2  1]
[2 1 -1]
[1 5  4]
is invertible.
Solution: Consider the augmented matrix
(A | I) =
[1 2  1 | 1 0 0]
[2 1 -1 | 0 1 0]
[1 5  4 | 0 0 1]
We verify whether (A | I) can be converted into the form (I | B) using elementary row operations.
R21(-2), R31(-1):
[1  2  1 |  1 0 0]
[0 -3 -3 | -2 1 0]
[0  3  3 | -1 0 1]
R32(1):
[1  2  1 |  1 0 0]
[0 -3 -3 | -2 1 0]
[0  0  0 | -3 1 1]
(A | I) cannot be converted into the form (I | B), since the third row of the left block is a zero row. Hence A is not invertible.
10.5.14 SAQ: Find the rank of A =
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
10.5.15 SAQ: Find the rank of A =
[1 1 1]
[1 1 1]
[1 1 1]
10.4.6 SAQ (Solution): A =
[1 0 1]
[0 1 1]
[1 0 1]
Since R3 = R1, det A = 0 and rho(A) < 3. But det
[1 0]
[0 1]
= 1, which is not 0, so rho(A) = 2.
10.4.7 SAQ (Solution): A =
[1 2 1]
[1 0 3]
[1 1 2]
R21(-1), R31(-1):
[1  2 1]
[0 -2 2]
[0 -1 1]
R2(-1/2):
[1  2  1]
[0  1 -1]
[0 -1  1]
R32(1):
[1 2  1]
[0 1 -1]
[0 0  0]
Hence rho(A) = 2.
10.5.15 SAQ (Solution): Applying R21(-1), R31(-1):
A ~
[1 1 1]
[0 0 0]
[0 0 0]
so rho(A) = 1.
10.7 Summary:
The rank of an m x n matrix was discussed. Using elementary row/column operations, we obtained the normal form of a matrix. We also explained the procedure for finding the inverse of an invertible matrix.
10.9 Exercises:
10.9.1 Find the rank of the matrix A =
[1 2 3 0]
[2 4 3 2]
[3 2 1 3]
[6 8 7 5]
10.9.2 Find the rank of the matrix
[2 1 3 1]
[1 2 3 1]
[1 0 1 1]
[0 1 1 1]
10.9.3 Find the rank of the matrix A =
[2 1 3 1]
[1 8 6 8]
[1 2 0 2]
10.9.4 Find the rank of the matrix by reducing to normal form: A =
[1 1 1 1]
[1 2 3 4]
[3 4 5 2]
10.9.5 Find two non-singular matrices P and Q such that PAQ is in the normal form, where
A =
[1  1  2]
[1  2  3]
[0 -1 -1]
10.9.6 Find two non-singular matrices P and Q such that PAQ is in the normal form, where
A =
[1 0 1 0]
[3 1 2 1]
[2 1 2 1]
[2 2 1 0]
10.9.7 Find the inverse of A using elementary operations: A =
[1 3 3 1]
[1 2 1 0]
[2 5 2 3]
[1 1 0 1]
Answers:
10.9.5 P =
[ 1 0 0]
[-1 1 0]
[-1 1 1]
, Q =
[1 -1 -1]
[0  1 -1]
[0  0  1]
10.9.6 P =
[1 0 0 0]
[3 1 0 0]
[5 1 1 0]
[1 1 1 1]
, Q =
[1 0 0 1]
[0 1 2 3]
[0 0 0 1]
10.9.7 A^(-1) =
[0 2 1 3]
[1 1 1 2]
[1 2 0 1]
[1 1 2 6]
2. Using elementary operations, reduce A =
[ 1 1 2 3]
[-4 1 0 2]
[ 0 3 0 4]
[ 0 1 0 2]
to normal form and hence find the rank.
3. Find non-singular matrices P and Q such that PAQ is in the normal form, where A =
[1  1  2]
[1  2  3]
[0 -1 -1]
4. Using elementary operations, find the inverse of A =
[0 1 2]
[1 2 3]
[3 1 1]
- A Satyanarayana Murty
LESSON - 11
SYSTEMS OF LINEAR EQUATIONS
11.3 Introduction
11.7 Summary
11.9 Exercises
11.3 Introduction:
In this lesson, we study systems of linear equations. The statement "a system of n linear equations in n unknowns has a solution" is not always true: several possibilities, including having no solution at all, may arise.
11.4.1 Definition: A system of m linear equations in n unknowns over a field F, or simply a linear system, is a set of m linear equations, each in n unknowns:
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
.......................
am1 x1 + am2 x2 + ... + amn xn = bm     ... (S)
where aij and bi are in F (1 <= i <= m, 1 <= j <= n) and x1, x2, ..., xn are variables taking values in F.
The system can be written in the matrix form AX = B, where A = (aij) is called the coefficient matrix, X = (x1, x2, ..., xn)^T and B = (b1, b2, ..., bm)^T.
A solution of the system (S) is an n-tuple s = (s1, s2, ..., sn)^T in F^n such that
As = B.
The set of all solutions of a linear system is called the solution set of the system. A system
of linear equations is said to be consistent if it has a solution. Otherwise, the system is said to be
inconsistent.
11.4.2 System of Homogeneous linear equations:
Consider a system of m homogeneous linear equations in n unknowns, namely
a11 x1 + a12 x2 + ... + a1n xn = 0
a21 x1 + a22 x2 + ... + a2n xn = 0
.......................
am1 x1 + am2 x2 + ... + amn xn = 0
i.e. AX = O. Clearly X = O satisfies the system, so X = O is a solution of AX = O; it is called the trivial solution or zero solution. Any other solution is called a nonzero solution.
AX = O is always consistent.
A system AX = B of m linear equations in n unknowns is said to be
(a) homogeneous if B = O,
(b) non-homogeneous if B is not O.
Any homogeneous system has at least one solution, namely the zero solution.
Theorem: Let AX = O be a system of m homogeneous linear equations in n unknowns with rho(A) = r, and let K be its solution set. Then K is a subspace of F^n of dimension n - r.
Proof: We have K = { X in F^n : AX = O }. We have proved that K is a subspace of F^n and that it is equal to N(LA), the nullspace of LA, i.e. K = N(LA). Since rank LA + nullity LA = n and Rank A = Rank LA = r,
dim K = nullity LA = n - r.
Corollary: If m < n, then AX = O has a nonzero solution.
Proof: Rank A = Rank LA <= m, so dim K = n - Rank A >= n - m > 0, where K = N(LA). Hence there is a nonzero solution s in K, i.e. AX = O has a nonzero solution. (Here rho(A) = r <= m < n implies n - r > 0.)
Note: If X1, a nonzero vector, is a solution, then A(kX1) = k(AX1) = k.O = O, so kX1 is also a solution for every k in F. We observe that if AX = O has a nonzero solution, then it has an infinite number of nonzero solutions (when F is infinite).
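The dimension formula dim K = n - r is easy to check numerically; in the Python/NumPy sketch below, a basis of K is read off from the SVD (the sample matrix is our own choice):

```python
import numpy as np

A = np.array([[1, 1, 1, 1], [1, -1, -2, -1], [3, 1, 0, 1]], dtype=float)
n = A.shape[1]
r = np.linalg.matrix_rank(A)
print(n - r)    # dimension of the solution space K of AX = O: 4 - 2 = 2

# a basis of K: right singular vectors belonging to the zero singular values
_, s, Vt = np.linalg.svd(A)
K_basis = Vt[r:]                  # (n - r) orthonormal solutions
assert np.allclose(A @ K_basis.T, 0.0)
```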
11.4.8 System of non-homogeneous equations: The equations
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
.......................
am1 x1 + am2 x2 + ... + amn xn = bm
can be written as AX = B, where A = (aij) is m x n, X = (x1, x2, ..., xn)^T and B = (b1, b2, ..., bm)^T.
11.4.9 Theorem: Let K be the solution set of the system AX = B and let KH be the solution set of the corresponding homogeneous system AX = O. Then for any solution S of AX = B,
K = { S } + KH = { S + K0 : K0 in KH }.
Proof: Let W be in K, so AW = B. Then A(W - S) = AW - AS = B - B = O, so W - S is in KH. Writing K0 = W - S, we get W = S + K0, which is in { S } + KH. Hence K is contained in { S } + KH.
Conversely, suppose W = S + K0 with K0 in KH. Then
AW = A(S + K0) = AS + AK0 = B + O = B,
so W is in K. Hence { S } + KH is contained in K, and therefore K = { S } + KH.
11.4.10 Theorem: Let AX = B be a system of linear equations. Then the system is consistent iff rho(A) = rho(A | B).
Proof: We know that LA : F^n -> F^m with LA(X) = AX for X in F^n. Let a1, a2, ..., an denote the columns of A. Then
AX = B is consistent
iff AX = B has a solution X1
iff B = LA(X1)
iff B is in R(LA)
iff B is in Span { a1, a2, ..., an }
iff Span { a1, a2, ..., an } = Span { a1, a2, ..., an, B }
iff rho(A) = rho(A | B).
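Theorem 11.4.10 gives a mechanical consistency test: compare rho(A) with rho(A | B). A Python/NumPy sketch (is_consistent is our own helper; the sample system is our own):

```python
import numpy as np

def is_consistent(A, b):
    """AX = b is consistent iff rank A == rank [A | b]."""
    A = np.atleast_2d(A).astype(float)
    aug = np.hstack([A, np.asarray(b, float).reshape(-1, 1)])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

A = np.array([[1, 1, 1], [3, 4, 5], [2, 3, 4]])   # rank 2: R3 = R2 - R1
print(is_consistent(A, [0, 0, 0]))   # True: homogeneous systems are always consistent
print(is_consistent(A, [1, 1, 1]))   # False: rank of the augmented matrix jumps to 3
```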
Theorem: If A is an invertible n x n matrix, then AX = B has the unique solution X = A^(-1) B.
Proof: Write s = A^(-1) B. Then As = (A A^(-1)) B = IB = B, so s = A^(-1) B is a solution of AX = B.
For uniqueness, let X1, X2 be solutions of AX = B. Then AX1 = B and AX2 = B, so AX1 = AX2; multiplying by A^(-1) gives IX1 = IX2, i.e. X1 = X2.
Let KH denote the solution set of the corresponding homogeneous system AX = O. Since A is invertible, N(LA) = { O }, i.e. LA is non-singular, so KH = { O } and the solution of AX = B is unique.
We observe that alpha1 and alpha2 found above are independent vectors in K; hence { alpha1, alpha2 } is a basis of K.
Solution: The coefficient matrix is A =
[ 1 2 1]
[-1 1 1]
We observe that rho(A) = 2, so dim K = 3 - 2 = 1. Solving the system, K = { k (1, -2, 3)^T : k in F }, and we observe that { (1, -2, 3)^T } is a basis of K.
Solution: The coefficient matrix is A =
[1  2  3]
[3  4  4]
[7 10 12]
R21(-3), R31(-7):
[1  2  3]
[0 -2 -5]
[0 -4 -9]
R2(-1/2):
[1  2   3]
[0  1 5/2]
[0 -4  -9]
R32(4):
[1 2   3]
[0 1 5/2]
[0 0   1]
= B.
Hence rho(A) = 3 = n, so dim K = 0 and
K = { (0, 0, 0)^T }.
The system has the zero solution only.
Solution: The system is x + y + z = 0, 3x + 4y + 5z = 0, 2x + 3y + 4z = 0. The coefficient matrix is A =
[1 1 1]
[3 4 5]
[2 3 4]
R21(-3), R31(-2):
[1 1 1]
[0 1 2]
[0 1 2]
R32(-1):
[1 1 1]
[0 1 2]
[0 0 0]
= B.
A ~ B and rho(B) = 2, so rho(A) = 2 and dim K = 3 - 2 = 1. The system is equivalent to
x + y + z = 0
y + 2z = 0
so y = -2z and x = -y - z = 2z - z = z. Thus
(x, y, z) = z (1, -2, 1).
If z = k, then x = k, y = -2k, z = k, and
K = { k (1, -2, 1)^T : k in F }.
Solution: The coefficient matrix is A =
[1  1  1  1]
[1 -1 -2 -1]
[3  1  0  1]
R21(-1), R31(-3):
[1  1  1  1]
[0 -2 -3 -2]
[0 -2 -3 -2]
R32(-1):
[1  1  1  1]
[0 -2 -3 -2]
[0  0  0  0]
= B (say).
Now rho(B) = 2, so rho(A) = 2. Let K be the solution set; then
dim K = n - r = 4 - 2 = 2.
The given system is equivalent to
x + y + z + t = 0
2y + 3z + 2t = 0
Let z = k1, t = k2. Then 2y = -3k1 - 2k2, i.e. y = -(3/2) k1 - k2, and
x = -y - z - t = (3/2) k1 + k2 - k1 - k2 = (1/2) k1.
Hence
(x, y, z, t) = k1 (1/2, -3/2, 1, 0) + k2 (0, -1, 0, 1),
so the solution set is
K = { k1 (1/2, -3/2, 1, 0)^T + k2 (0, -1, 0, 1)^T : k1, k2 in F }.
11.5 Systems of linear equations - computational aspects: In this section, we use elementary row operations to find one solution of a given non-homogeneous system (when the system is consistent) and, using it, all the solutions.
11.5.1 Definition: If two systems of linear equations have the same solution set, then the systems are said to be equivalent.
Theorem: If C is an invertible m x m matrix, then the system AX = B is equivalent to the system (CA)X = CB.
Proof: Let K be the solution set of AX = B and K1 that of (CA)X = CB. If W is in K, then AW = B, so (CA)W = C(AW) = CB and W is in K1; hence K is contained in K1. Conversely, let W be in K1, so (CA)W = CB. Since C is invertible, AW = C^(-1) C(AW) = C^(-1) CB = B, so W is in K and K1 is contained in K. Hence K = K1.
Hence the theorem.
Corollary: If (A1 | B1) is a matrix obtained from (A | B) by a finite number of elementary row operations, then the system A1 X = B1 is equivalent to the system AX = B.
Proof: Each elementary row operation is equivalent to pre-multiplication of (A | B) by an elementary matrix of order m x m. So (A1 | B1) = C (A | B) = (CA | CB) for an invertible matrix C (a product of elementary matrices), and the result follows from the theorem.
Consider the system
3x1 + 2x2 + 3x3 - 2x4 = 1
x1 + x2 + x3 = 3
x1 + 2x2 + x3 - x4 = 2
The augmented matrix (A | B) is
[3 2 3 -2 | 1]
[1 1 1  0 | 3]
[1 2 1 -1 | 2]
1. To get 1 as the first-row, first-column element, we interchange R1 and R3:
[1 2 1 -1 | 2]
[1 1 1  0 | 3]
[3 2 3 -2 | 1]
2. Using type 3 row operations, we use R1 to get zeros in the remaining positions of C1. By applying R21(-1), R31(-3), we get
[1  2 1 -1 | 2]
[0 -1 0  1 | 1]
[0 -4 0  1 | -5]
3. We get 1 in the next row in the left-most possible column, without using previous rows. In this example, C2 is the left-most possible column. By applying R2(-1), we get 1 in the (2, 2) position:
[1  2 1 -1 | 2]
[0  1 0 -1 | -1]
[0 -4 0  1 | -5]
4. Now use type 3 elementary row operations to get zeros below this 1. In this example, we apply R32(4):
[1 2 1 -1 | 2]
[0 1 0 -1 | -1]
[0 0 0 -3 | -9]
5. By applying R3(-1/3), we get
[1 2 1 -1 | 2]
[0 1 0 -1 | -1]
[0 0 0  1 | 3]
6. Work upward, beginning with the last nonzero row, and add multiples of each row to the rows above (so that we get zeros above the first nonzero entry in each row). By applying R13(1), R23(1), we get
[1 2 1 0 | 5]
[0 1 0 0 | 2]
[0 0 0 1 | 3]
7. Repeat the process described in step 6 for each preceding row until it is performed with the 2nd row, at which point the reduction process is complete. In this example, by applying R12(-2), we get
[1 0 1 0 | 1]
[0 1 0 0 | 2]
[0 0 0 1 | 3]
The system is thus equivalent to x1 + x3 = 1, x2 = 2, x4 = 3; taking x3 = t, the solutions are (1 - t, 2, t, 3), t in F.
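Steps 1-7 above are the reduction to reduced row echelon form. A compact Python/NumPy sketch (rref is our own function, with a small tolerance added for floating point), applied to the augmented matrix of this example:

```python
import numpy as np

def rref(M):
    """Reduced row echelon form, following steps 1-7 above (a sketch, not optimized)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(M[i, c]) > 1e-12), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]   # bring a nonzero entry up (steps 1, 3)
        M[r] /= M[r, c]                 # make the leading entry 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]  # clear the column (steps 2, 4, 6, 7)
        r += 1
    return M

aug = np.array([[3, 2, 3, -2, 1],
                [1, 1, 1, 0, 3],
                [1, 2, 1, -1, 2]])
print(rref(aug))   # rows [1 0 1 0 1], [0 1 0 0 2], [0 0 0 1 3], as found above
```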
Consider the system x + y + z = 4, 2x + 5y - 2z = 3, x + 7y - 7z = 5. The augmented matrix is
(A | B) =
[1 1  1 | 4]
[2 5 -2 | 3]
[1 7 -7 | 5]
R21(-2), R31(-1):
[1 1  1 | 4 ]
[0 3 -4 | -5]
[0 6 -8 | 1 ]
R32(-2):
[1 1  1 | 4 ]
[0 3 -4 | -5]
[0 0  0 | 11]
Hence rho(A) = 2 and rho(A | B) = 3, so the system is inconsistent.
Consider the system
x + 2y + z = 2
3x + y - 2z = 1
4x - 3y - z = 3
2x + 4y + 2z = 4
Solution: The augmented matrix of the system is
(A | B) =
[1  2  1 | 2]
[3  1 -2 | 1]
[4 -3 -1 | 3]
[2  4  2 | 4]
R21(-3), R31(-4), R41(-2):
[1   2  1 | 2 ]
[0  -5 -5 | -5]
[0 -11 -5 | -5]
[0   0  0 | 0 ]
R2(-1/5):
[1   2  1 | 2 ]
[0   1  1 | 1 ]
[0 -11 -5 | -5]
[0   0  0 | 0 ]
R32(11):
[1 2 1 | 2]
[0 1 1 | 1]
[0 0 6 | 6]
[0 0 0 | 0]
R3(1/6):
[1 2 1 | 2]
[0 1 1 | 1]
[0 0 1 | 1]
[0 0 0 | 0]
rho(A) = 3 = rho(A | B), so the system is consistent; since rho(A) = 3 = n, the solution is unique. Back-substituting:
z = 1
y + z = 1 so y = 0
x + 2y + z = 2 so x = 1
Hence the unique solution is (1, 0, 1)^T.
Consider the system x + y + z = 6, x + 2y + 3z = 10, x + 2y + Lz = M (L, M scalars). The augmented matrix is
(A | B) =
[1 1 1 | 6 ]
[1 2 3 | 10]
[1 2 L | M ]
R21(-1), R31(-1), then R32(-1):
[1 1 1   | 6   ]
[0 1 2   | 4   ]
[0 0 L-3 | M-10]
If L is not 3, then rho(A) = 3 = rho(A | B) and the system has a unique solution.
If L = 3 and M is not 10, then
(A | B) ~
[1 1 1 | 6   ]
[0 1 2 | 4   ]
[0 0 0 | M-10]
Since M - 10 is not 0, rho(A) = 2 while rho(A | B) = 3, so the system is inconsistent.
If L = 3 and M = 10, then rho(A) = 2 = rho(A | B) < n, so the system is consistent with infinitely many solutions.
Consider the system
2x1 + 3x2 + x3 + 4x4 - 9x5 = 17
x1 + x2 + x3 + x4 - 3x5 = 6
x1 + x2 + x3 + 2x4 - 5x5 = 8
2x1 + 2x2 + 2x3 + 3x4 - 8x5 = 14
(A | B) =
[2 3 1 4 -9 | 17]
[1 1 1 1 -3 | 6 ]
[1 1 1 2 -5 | 8 ]
[2 2 2 3 -8 | 14]
R12:
[1 1 1 1 -3 | 6 ]
[2 3 1 4 -9 | 17]
[1 1 1 2 -5 | 8 ]
[2 2 2 3 -8 | 14]
R21(-2), R31(-1), R41(-2):
[1 1  1 1 -3 | 6]
[0 1 -1 2 -3 | 5]
[0 0  0 1 -2 | 2]
[0 0  0 1 -2 | 2]
R43(-1):
[1 1  1 1 -3 | 6]
[0 1 -1 2 -3 | 5]
[0 0  0 1 -2 | 2]
[0 0  0 0  0 | 0]
rho(A) = 3 = rho(A | B), so the system is consistent. It is equivalent to
x1 + x2 + x3 + x4 - 3x5 = 6
x2 - x3 + 2x4 - 3x5 = 5
x4 - 2x5 = 2
Let x3 = t1, x5 = t2. Then x4 = 2 + 2t2,
x2 = 5 + t1 - 2(2 + 2t2) + 3t2 = 1 + t1 - t2,
x1 = 6 - x2 - x3 - x4 + 3t2 = 3 - 2t1 + 2t2.
We observe that (3, 1, 0, 2, 0)^T is a particular solution of the system and that
{ (-2, 1, 1, 0, 0)^T , (2, -1, 0, 2, 1)^T }
is a basis of the solution space of the corresponding homogeneous system.
11.5.12 SAQ: Verify whether the system
x + 2y + 3z = 0
3x + 4y + 4z = 0
7x + 10y + 12z = 0
has a non-trivial solution.
11.5.13 SAQ: Verify whether the system
2x + 6y + 11 = 0
6x + 20y - 6z + 3 = 0
6y - 18z + 1 = 0
is consistent.
11.5.14 SAQ: Verify whether the system
x + 2y - z = -4
2x + y + 3z = 8
3x - y - 4z = 0
is consistent; if so, solve it.
11.5.15 SAQ: Verify whether the system
x + 2y + z = 4
3x + y + 4z = 0
3x + y + 2z = 2
2x + 4y + 7z = 7
is consistent.
11.5.16 SAQ: Verify whether the system
x - 2y - z = 3
3x + y + 2z = 1
2x + 2y + 3z = 2
x + y + z = -1
is consistent. If it is consistent, solve it.
11.5.12 SAQ (Solution): The coefficient matrix is A =
[1  2  3]
[3  4  4]
[7 10 12]
R21(-3), R31(-7):
[1  2  3]
[0 -2 -5]
[0 -4 -9]
R2(-1/2):
[1  2   3]
[0  1 5/2]
[0 -4  -9]
R32(4):
[1 2   3]
[0 1 5/2]
[0 0   1]
= B.
rho(A) = 3 = n, so the homogeneous system has only the trivial solution; it has no non-trivial solution.
11.5.13 SAQ (Solution): Reducing the augmented matrix, we find rho(A) = 2 and rho(A | B) = 3, so rho(A) is not equal to rho(A | B) and the system is not consistent.
11.5.14 SAQ (Solution): The augmented matrix is
(A | B) =
[2  1  3 | 8 ]
[1  2 -1 | -4]
[3 -1 -4 | 0 ]
R12:
[1  2 -1 | -4]
[2  1  3 | 8 ]
[3 -1 -4 | 0 ]
R21(-2), R31(-3):
[1  2 -1 | -4]
[0 -3  5 | 16]
[0 -7 -1 | 12]
R2(-1/3):
[1  2   -1 | -4   ]
[0  1 -5/3 | -16/3]
[0 -7   -1 | 12   ]
R32(7):
[1 2    -1 | -4   ]
[0 1  -5/3 | -16/3]
[0 0 -38/3 | -76/3]
R3(-3/38):
[1 2   -1 | -4   ]
[0 1 -5/3 | -16/3]
[0 0    1 | 2    ]
rho(A) = rho(A | B) = 3, so the system is consistent with a unique solution. Back-substituting:
z = 2
y - (5/3) z = -16/3 so y = (10 - 16)/3 = -2
x + 2y - z = -4 so x = -4 + 4 + 2 = 2
The solution is (2, -2, 2)^T.
11.5.15 SAQ (Solution): Reducing the augmented matrix (A | B) by row operations (clearing the first column with R21, R31, R41 and then the second column), a row of the form
[0 0 0 | c], with c not 0,
appears. Hence rho(A) < rho(A | B) and the system is inconsistent.
11.5.16 SAQ (Solution): The augmented matrix is
(A | B) =
[1 -2 -1 | 3 ]
[3  1  2 | 1 ]
[2  2  3 | 2 ]
[1  1  1 | -1]
R21(-3), R31(-2), R41(-1):
[1 -2 -1 | 3 ]
[0  7  5 | -8]
[0  6  5 | -4]
[0  3  2 | -4]
R23(-1):
[1 -2 -1 | 3 ]
[0  1  0 | -4]
[0  6  5 | -4]
[0  3  2 | -4]
R32(-6), R42(-3):
[1 -2 -1 | 3 ]
[0  1  0 | -4]
[0  0  5 | 20]
[0  0  2 | 8 ]
R3(1/5), R4(1/2):
[1 -2 -1 | 3 ]
[0  1  0 | -4]
[0  0  1 | 4 ]
[0  0  1 | 4 ]
R43(-1):
[1 -2 -1 | 3 ]
[0  1  0 | -4]
[0  0  1 | 4 ]
[0  0  0 | 0 ]
rho(A) = rho(A | B) = 3, so the system is consistent. Back-substituting: z = 4, y = -4, x = 3 + 2y + z = -1. The solution is (-1, -4, 4)^T.
11.9 Exercises:
11.9.1 Find the dimension and a basis of the solution set of x1 + 3x2 = 0, 2x1 + 6x2 = 0.
11.9.2 Solve: x1 + 2x2 + x3 = 0, 2x1 + x2 - x3 = 0.
11.9.3 Verify whether the system
3x + y + 2z = 1
2x + 2y + 3z = 2
x + y + z = -1
is consistent.
11.9.4 Solve:
2x + 2y + 2z = 1
4x + 2y + z = 2
6x + 6y + z = 3
11.9.5 Determine whether the following system has a solution:
x1 + 2x2 + 3x3 = 1
x1 + x2 + x3 = 0
x1 + 2x2 + x3 = 3
Answers:
11.9.1 Dimension 1; basis { (-3, 1)^T }.
11.9.2 K = { k (1, -1, 1)^T : k in F }.
11.9.3 Consistent.
11.9.4 The system has the unique solution x = 1/2, y = 0, z = 0.
11.9.5 The system has a solution.
11.11 Model Examination Questions:
1. Prove that the system AX = O has n - r linearly independent solutions, where r is the rank of A and n is the number of unknowns.
2. Solve:
3x1 + x2 + 2x3 = 1
4x1 + 3x2 + x3 = 3
2x1 + 4x2 + x3 = 4
- A. Satyanarayana Murty
LESSON - 12
DIAGONALIZATION
12.1 Objective of the Lesson:
This lesson is concerned with the diagonalization problem: for a given operator T on a finite dimensional vector space, we study whether and how T can be represented by a diagonal matrix.
A solution of the diagonalization problem leads to the concepts of eigen values and eigen vectors, so we study these as well.
12.2 Structure of the Lesson: This lesson contains the following items.
12.3 Introduction
12.4 Diagonalizable linear operator - Eigen Vector and eigen values of a
linear operator
12.5 Worked Out Examples
12.6 Properties of eigen values
12.7 Similarity
12.8 Similarity of matrices using trace.
12.9 Trace of a linear operator
12.10 Determinant of a linear operator - relating theorems
12.11 Exercise
12.12 Diagonalizability
12.13 Worked out examples
12.14 Polynomial Splitting and algebraic multiplicity
12.15 Eigen space
12.16 Summary - Test for Diagonalization
12.17 Worked out examples
12.18 Positive integral power of a diagonalizable matrix - Examples
12.19 Invariant Subspaces
12.20 T-Cyclic subspaces generated by a non-zero vector
12.21 Cayley-Hamilton Theorem and Examples
12.22 Summary
12.23 Technical Terms
12.24 Model Questions
12.25 Exercise
12.26 Reference Books
12.3 Introduction:
In this lesson we introduce the important notions of eigen values and eigen vectors of a
linear operator and a square matrix defined over a field. Using these concepts we discuss the
diagonalization and diagonalizability of linear operators and matrices.
12.4.3 Definition: Let A be in Mnxn(F). A non-zero vector v in F^n is called an eigen vector of A if v is an eigen vector of LA; that is, if Av = Lv for some scalar L. The scalar L is called the eigen value of A corresponding to the eigen vector v.
Note: i) The words characteristic vector, Latent Vectors, proper vector, spectral vector are also
used in place of eigen vector.
ii) Eigen values are also known as characteristic values, Latent roots, proper values, spec-
tral values.
In order to diagonalize a matrix or a linear operator we have to find a basis of eigen vectors
and the corresponding eigen values.
Before continuing our study of diagonalization problem, we give the method of computing
eigen values.
12.4.4 Method of Computing eigen Values:
Theorem: Let A be an n x n matrix with entries in the field F. Then a scalar L is an eigen value of A if and only if det(A - L In) = 0.
Proof: A scalar L is an eigen value of A if and only if there exists a nonzero vector v in F^n such that Av = Lv, that is, (A - L In) v = 0. This is true if and only if A - L In is not invertible, and that is equivalent to the statement that det(A - L In) = 0.
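The theorem reduces the computation of eigen values to finding the roots of det(A - LI) = 0. For instance, in Python/NumPy (np.poly returns the coefficients of the characteristic polynomial of a square matrix; the sample matrix is the one used later in W.E. 2):

```python
import numpy as np

A = np.array([[5, 4], [1, 2]], dtype=float)

char_poly = np.poly(A)        # coefficients of L^2 - 7L + 6, i.e. ~[1, -7, 6]
print(np.roots(char_poly))    # the eigen values, 6 and 1
print(np.linalg.eigvals(A))   # the same values, computed directly
```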
Definition: Let A be an n x n matrix with entries in the field F. The polynomial f(L) = det(A - L I) is called the characteristic polynomial of the square matrix A; it is of degree n in L. The equation f(L) = 0 is called the characteristic equation of A.
Note: By theorem 12-4-4 it follows that the eigen values of a matrix are the zeros of its character-
istic polynomial.
12.4.6A Characteristic polynomial of a linear operator:
Definition: Let T be a linear operator on an n-dimensional vector space V with an ordered basis B. We define the characteristic polynomial f(L) of T to be the characteristic polynomial of A = [T]B, i.e. f(L) = det(A - L In).
For example, for a 3 x 3 matrix A = (aij):
A11 = (-1)^(1+1) det
[a22 a23]
[a32 a33]
A22 = (-1)^(2+2) det
[a11 a13]
[a31 a33]
A33 = (-1)^(3+3) det
[a11 a12]
[a21 a22]
ii) For a diagonal element of a square matrix, its minor and cofactor are the same.
12.4.9 Show that a square matrix need not possess eigen values (over a given field).
Solution: Consider the matrix A =
[ 0 1]
[-1 0]
over the field of reals. Its characteristic equation is det(A - LI) = 0, i.e.
det
[-L  1]
[-1 -L]
= 0, i.e. L^2 + 1 = 0,
which has no solution in the field of real numbers. So A has no characteristic value, and hence no characteristic vector, over the field of reals.
However, if A is regarded as a complex matrix, then its characteristic equation L^2 + 1 = 0 has two distinct roots i, -i over the field of complex numbers, and consequently A has two distinct eigen values with corresponding eigen vectors.
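Over the complex numbers the same matrix does have eigen values; numerically (NumPy switches to complex output automatically):

```python
import numpy as np

A = np.array([[0, 1], [-1, 0]])
# characteristic polynomial L^2 + 1 has no real roots, but two complex ones: i and -i
evals = np.linalg.eigvals(A)
print(evals)   # purely imaginary pair, +1j and -1j
```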
Theorem: An n x n matrix A cannot have more than n eigen values.
Proof: Let A = (aij), n x n, where the entries of A belong to the field F. Expanding the determinant det(A - LI), we get a polynomial of the form
(-1)^n L^n + a1 L^(n-1) + ... + an,
so the leading coefficient is (-1)^n and the polynomial is of degree n; it cannot have more than n zeros. So A cannot have more than n eigen values.
Working rule:
Step 1: Form the matrix A - LI.
Step 2: Solve the equation det(A - LI) = 0 to get n roots L1, L2, ..., Ln, which are the eigen values of A.
Step 3: For each eigen value L, the corresponding eigen vectors of A are the nonzero vectors v = (v1, v2, ..., vn)^T satisfying (A - LI) v = O.
Theorem: A linear operator T on a finite dimensional vector space V is diagonalizable if and only if there exists an ordered basis for V consisting of eigen vectors of T.
Proof: Let V be a finite dimensional vector space and T be a linear operator on V. Suppose T is diagonalizable. Then there exists an ordered basis B = { v1, v2, ..., vn } for V such that [T]B is a diagonal matrix. Note that if D = [T]B is a diagonal matrix, then for each vector vj in B we have
T(vj) = sum over i of Dij vi = Djj vj = Lj vj, where Lj = Djj,
so each vj is an eigen vector of T.
Conversely, if B = { v1, v2, ..., vn } is an ordered basis for V consisting of eigen vectors of T, say T(vj) = Lj vj, then clearly
[T]B =
[L1 0  ... 0 ]
[0  L2 ... 0 ]
[...         ]
[0  0  ... Ln]
which is a diagonal matrix.
In the preceding paragraph, each vector v in the basis B satisfies the condition T(v) = Lv for some scalar L. Moreover, as v lies in a basis, v is non-zero. Hence the theorem.
W.E. 1: Let A = [1 3; 4 2] and B = {v1, v2}, where v1 = [1; −1], v2 = [3; 4], be an ordered basis of R². Prove that v1, v2 are eigen vectors of A. Find [L_A]_B. Show that A and [L_A]_B are diagonalizable.
Solution: Given A = [1 3; 4 2], v1 = [1; −1], v2 = [3; 4], where L_A = T.
L_A(v1) = [1 3; 4 2][1; −1] = [1 − 3; 4 − 2] = [−2; 2] = −2[1; −1] = −2 v1,
so v1 is an eigen vector of A, and λ1 = −2 is the eigen value corresponding to v1.
Furthermore L_A(v2) = [1 3; 4 2][3; 4] = [3 + 12; 12 + 8] = [15; 20] = 5[3; 4] = 5 v2,
so v2 is an eigen vector of A corresponding to the eigen value λ2 = 5. Note that B = {v1, v2} is an ordered basis of R² consisting of eigen vectors of A. Furthermore
[L_A]_B = [−2 0; 0 5]
is a diagonal matrix.
W.E. 2: Determine the eigen values and eigen vectors of the matrix A = [5 4; 1 2].
Solution: The characteristic equation is
|5−λ 4; 1 2−λ| = 0
(5−λ)(2−λ) − 4 = 0 ⇒ λ² − 7λ + 6 = 0
(λ − 6)(λ − 1) = 0 ⇒ λ = 6, 1.
So the eigen values of A are 6, 1.
The eigen vectors v′ = [v1; v2] of A corresponding to the eigen value 6 are given by the nonzero solutions of the equation (A − 6I)v′ = O:
[5−6 4; 1 2−6][v1; v2] = O
[−1 4; 1 −4][v1; v2] = [0; 0]
R2 + R1 gives
[−1 4; 0 0][v1; v2] = [0; 0]
⇒ −v1 + 4v2 = 0 ⇒ v1 = 4v2. Taking v2 = 1,
v′ = [4; 1] is an eigen vector of A corresponding to the eigen value 6.
The set of all eigen vectors of A corresponding to the eigen value 6 is given by C1 v′, where C1 is a nonzero scalar.
The eigen vectors v″ of A corresponding to the eigen value 1 are given by the nonzero solutions of the equation (A − I)v″ = O:
(A − 1I)v″ = O
[4 4; 1 1][v1; v2] = [0; 0]
4R2 − R1 gives [4 4; 0 0][v1; v2] = [0; 0]
4v1 + 4v2 = 0 ⇒ v1 = −v2
Let v1 = 1; then v2 = −1.
So v″ = [1; −1] is an eigen vector of A corresponding to the eigen value 1. Every nonzero multiple of v″, which is of the form C2[1; −1] where C2 ≠ 0, is an eigen vector corresponding to the eigen value 1.
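W.E. 2 can be double-checked numerically. The sketch assumes NumPy (an assumption of this check, not part of the lesson): the eigen values of [5 4; 1 2] should be 6 and 1, with eigen vectors proportional to [4; 1] and [1; −1].

```python
import numpy as np

# The matrix of W.E. 2 and the eigen pairs found by hand above.
A = np.array([[5.0, 4.0],
              [1.0, 2.0]])

assert np.allclose(np.sort(np.linalg.eigvals(A)), [1.0, 6.0])

v6 = np.array([4.0, 1.0])    # eigen vector for eigen value 6
v1 = np.array([1.0, -1.0])   # eigen vector for eigen value 1
assert np.allclose(A @ v6, 6 * v6)
assert np.allclose(A @ v1, 1 * v1)
```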
the usual ordered basis B = {e1, e2, e3} for R³, where e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1).
T(e1) = T(1, 0, 0) = (7(1) − 4(0) + 10(0), 4(1) − 3(0) + 8(0), −2(1) + 0 − 2(0)) = (7, 4, −2)
T(e2) = T(0, 1, 0) = (−4, −3, 1)
A = [T]_B = [7 −4 10; 4 −3 8; −2 1 −2]  ........... (1)
The characteristic equation is det(A − λI) = 0, i.e.
|7−λ −4 10; 4 −3−λ 8; −2 1 −2−λ| = 0
From (1):
trace of A = 7 − 3 − 2 = 2
A11 = (−1)^{1+1} (minor of 7) = |−3 8; 1 −2| = 6 − 8 = −2
A22 = (−1)^{2+2} (minor of −3) = |7 10; −2 −2| = −14 + 20 = 6
A33 = (−1)^{3+3} (minor of −2) = |7 −4; 4 −3| = −21 + 16 = −5
The characteristic equation is λ³ − (trace of A)λ² + (A11 + A22 + A33)λ − det A = 0,
i.e. λ³ − 2λ² − λ + 2 = 0 = f(λ), say, with roots λ = −1, 1, 2.
(A − λ1 I)v = O with λ1 = −1:
[7+1 −4 10; 4 −3+1 8; −2 1 −2+1][v1; v2; v3] = [0; 0; 0]
[8 −4 10; 4 −2 8; −2 1 −1][v1; v2; v3] = [0; 0; 0]
2R2 − R1, 4R3 + R1 gives
[8 −4 10; 0 0 6; 0 0 6] v = O
R3 − R2 gives
[8 −4 10; 0 0 6; 0 0 0] v = O
6v3 = 0 ⇒ v3 = 0, and 8v1 − 4v2 = 0 ⇒ v2 = 2v1.
Put v1 = 1; then v2 = 2.
So v′ = [1; 2; 0] and every nonzero scalar multiple of it is an eigen vector.
(A − λ2 I)v = O with λ2 = 1:
Centre for Distance Education 12.12 Acharya Nagarjuna University
[7−1 −4 10; 4 −3−1 8; −2 1 −2−1][v1; v2; v3] = [0; 0; 0]
[6 −4 10; 4 −4 8; −2 1 −3] v = O
(1/2)R1, (1/4)R2 gives
[3 −2 5; 1 −1 2; −2 1 −3] v = O
R1 − 3R2, R3 + 2R2 gives
[0 1 −1; 1 −1 2; 0 −1 1] v = O
R3 + R1 gives [0 1 −1; 1 −1 2; 0 0 0] v = O
v2 − v3 = 0 ⇒ v2 = v3
v1 − v2 + 2v3 = 0 ⇒ v1 + v3 = 0 ⇒ v3 = −v1
Putting v1 = 1, v3 = v2 = −1.
So v″ = [1; −1; −1] and every nonzero scalar multiple of it is an eigen vector.
(A − λ3 I)v = O with λ3 = 2:
[7−2 −4 10; 4 −3−2 8; −2 1 −2−2][v1; v2; v3] = [0; 0; 0]
[5 −4 10; 4 −5 8; −2 1 −4] v = O
R1 ↔ R3 gives
[−2 1 −4; 4 −5 8; 5 −4 10] v = O
R2 + 2R1, 2R3 + 5R1 gives
[−2 1 −4; 0 −3 0; 0 −3 0] v = O
R3 − R2 gives
[−2 1 −4; 0 −3 0; 0 0 0] v = O
−3v2 = 0 ⇒ v2 = 0, and −2v1 + v2 − 4v3 = 0 ⇒ v1 = −2v3.
Put v3 = −1; then v1 = 2, v2 = 0, and so v‴ = [2; 0; −1] and every nonzero scalar multiple of it is an eigen vector.
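The three eigen pairs just computed can be verified numerically. This sketch assumes NumPy (not used in the lesson): each hand-computed vector should satisfy Av = λv for its eigen value.

```python
import numpy as np

# The matrix A = [T]_B from equation (1) and the hand-computed eigen pairs.
A = np.array([[7.0, -4.0, 10.0],
              [4.0, -3.0, 8.0],
              [-2.0, 1.0, -2.0]])

pairs = [(-1.0, [1.0, 2.0, 0.0]),
         (1.0, [1.0, -1.0, -1.0]),
         (2.0, [2.0, 0.0, -1.0])]
for lam, v in pairs:
    v = np.array(v)
    assert np.allclose(A @ v, lam * v)   # A v = lam v for each pair

assert np.allclose(np.sort(np.linalg.eigvals(A)), [-1.0, 1.0, 2.0])
```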
The basis for which [T]_B is a diagonal matrix is B′ = {v′, v″, v‴},
i.e. B′ = {(1, 2, 0), (1, −1, −1), (2, 0, −1)}.
W.E. 4: T is a linear operator on P2(R) defined by T(f(x)) = f(x) + (x + 1)f′(x). Find the eigen values of T.
Solution: B = {1, x, x²} is the standard basis of P2(R). We are given the linear operator on P2(R) defined by T(f(x)) = f(x) + (x + 1)f′(x).
T(1) = 1 + (x + 1)(0) = 1(1) + 0·x + 0·x²
T(x) = x + (x + 1)(1) = 2x + 1 = 1(1) + 2·x + 0·x²
T(x²) = x² + (x + 1)(2x) = 3x² + 2x = 0(1) + 2·x + 3·x²
So A = [T]_B = [1 1 0; 0 2 2; 0 0 3]
|1−λ 1 0; 0 2−λ 2; 0 0 3−λ| = 0
(1 − λ)(2 − λ)(3 − λ) = 0
λ = 1, 2, 3.
12.6 Properties of eigen values:
12.6.1 Theorem: Let T be a linear operator on a vector space V, and let λ be an eigen value of T. Then a nonzero vector v is an eigen vector of T corresponding to λ if and only if (T − λI)v = 0.
Proof: If v is an eigen vector corresponding to λ, then T(v) = λv ⇒ (T − λI)v = 0.
Conversely, if (T − λI)v = 0, then Tv = λv,
so v is the characteristic vector corresponding to the characteristic value λ.
Hence the theorem.
12.6.2 Theorem: Prove that a square matrix A and its transpose A^T have the same set of eigen values.
Proof: Characteristic polynomial of A = det(A − λI)
= det(A − λI)^T = det(A^T − λI^T)
= det(A^T − λI)  (since I^T = I)
= characteristic polynomial of A^T.
So A and A^T have the same characteristic polynomial and hence the same set of eigen values.
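A quick numerical confirmation of 12.6.2, assuming NumPy and a hypothetical sample matrix: A and A^T give the same sorted list of eigen values.

```python
import numpy as np

# Hypothetical sample matrix; its eigen values are -1 and 5.
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

ev_A = np.sort(np.linalg.eigvals(A))
ev_At = np.sort(np.linalg.eigvals(A.T))
assert np.allclose(ev_A, ev_At)        # same set of eigen values
assert np.allclose(ev_A, [-1.0, 5.0])
```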
12.6.3 Show that zero is a characteristic root of a matrix if and only if the matrix is singular.
Solution: 0 is a characteristic value of A
⇔ |A − 0I| = 0 ⇔ |A| = 0 ⇔ A is singular.
If λ is an eigen value of A with eigen vector v, then Av = λv and
(A − KI)v = Av − K(Iv) = λv − Kv = (λ − K)v,
so λ − K is an eigen value of A − KI. Hence if the characteristic polynomial of A is
(λ1 − λ)(λ2 − λ)...(λn − λ),
the eigen values of A − kI are λ1 − k, λ2 − k, ..., λn − k.
12.6.6 If A is non singular, prove that the eigen values of A⁻¹ are the reciprocals of the eigen values of A.
Solution: Let λ be an eigen value of A and v be the corresponding eigen vector; then Av = λv.
v = A⁻¹(λv) = λ(A⁻¹v)
⇒ (1/λ)v = A⁻¹v  (since A is non singular, λ ≠ 0)
i.e. A⁻¹v = (1/λ)v.
So 1/λ is an eigen value of A⁻¹, and v is the corresponding eigen vector.
Conversely suppose that μ is an eigen value of A⁻¹. Since A is non singular, A⁻¹ is also non singular and (A⁻¹)⁻¹ = A. So it follows from the first part of this question that 1/μ is an eigen value of A.
Thus each eigen value of A⁻¹ is equal to the reciprocal of some eigen value of A.
Hence the eigen values of A⁻¹ are nothing but the reciprocals of the eigen values of A.
12.6.7 Corollary: If λ1, λ2, ..., λn are the eigen values of a non singular matrix A, then λ1⁻¹, λ2⁻¹, ..., λn⁻¹ are the eigen values of A⁻¹.
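The corollary can be checked numerically; this sketch assumes NumPy and a hypothetical nonsingular triangular sample with eigen values 2 and 3.

```python
import numpy as np

# Hypothetical nonsingular sample; triangular, so eigen values are 2 and 3.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

ev = np.sort(np.linalg.eigvals(A))                      # [2, 3]
ev_inv = np.sort(np.linalg.eigvals(np.linalg.inv(A)))   # [1/3, 1/2]
assert np.allclose(ev_inv, np.sort(1.0 / ev))           # reciprocals
```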
12.6.8 Theorem: If λ1, λ2, ..., λn are the eigen values of A, then Kλ1, Kλ2, ..., Kλn are the eigen values of KA.
Proof: If K = 0 then KA = O, and each eigen value of O is 0. Thus 0λ1, 0λ2, ..., 0λn are the eigen values of KA when λ1, λ2, ..., λn are eigen values of A.
If K ≠ 0, we have |KA − KλI| = |K(A − λI)| = K^n |A − λI|  (since |KB| = K^n |B|),
so Kλ is a root of the characteristic equation of KA whenever λ is a root of that of A.
Thus if λ1, λ2, ..., λn are eigen values of A, then Kλ1, Kλ2, ..., Kλn are eigen values of KA.
12.6.9 Similarly, if T is an invertible operator with T(v) = λv, then
v = T⁻¹(λv) = λT⁻¹(v), so λ⁻¹v = T⁻¹(v).
12.6.10 If λ is a characteristic root of a non singular matrix A, show that λ^r is the characteristic root of A^r, r being an integer.
Case i) Let r > 0 and Av = λv.
So A^r v = A^{r−1}(Av) = A^{r−1}(λv) = λ(A^{r−1}v)
= λ A^{r−2}(Av)
= λ A^{r−2}(λv)
= λ² A^{r−2}v.
Proceeding like this we get
A^r v = λ^r v.
Case ii) Let r = 0; then A⁰ = I, and the characteristic roots of I are all unity, i.e. λ⁰ = 1.
Case iii) Let r < 0. Since A is non singular, v = Iv = A⁻¹(Av) = λ(A⁻¹v), so A⁻¹v = λ⁻¹v, and applying case (i) to A⁻¹ gives A^r v = λ^r v.
In every case λ^r is a characteristic root of A^r.
Hence the theorem.
12.6.11 Corollary: If λ1, λ2, ..., λn are the characteristic roots of A, then the characteristic roots of A² are λ1², λ2², ..., λn².
12.6.12 If T(v) = λv and T^m(v) = λ^m v ... (3), then
T^{m+1}(v) = T(T^m v) = T(λ^m v) = λ^m T(v)  (since T is linear)
= λ^m (λv)  by (3)
= λ^{m+1} v.
The statement is true for n = 1; when it is assumed to be true for m, it is proved to be true for m + 1. Hence by mathematical induction the statement T^n(v) = λ^n v is true for all positive integral values of n.
12.6.13 Let T be a linear operator on a vector space V over a field F, and let g(x) be a polynomial with coefficients from F. Prove that if v is an eigen vector of T with corresponding eigen value λ, then g(T)(v) = g(λ)v; i.e. v is an eigen vector of g(T) with the corresponding eigen value g(λ).
Proof: Let T(v) = λv and g(x) = a0 + a1x + a2x² + ... + am x^m. Then
g(T)(v) = (a0 I + a1 T + a2 T² + ... + am T^m)(v)
= a0 v + a1 λv + a2 λ²v + ... + am λ^m v
= (a0 + a1 λ + a2 λ² + ... + am λ^m)v
= g(λ)v.
The characteristic values of a diagonal matrix are its diagonal elements.
Proof: Let D = diag(a11, a22, ..., ann).
The characteristic equation of D is |D − λI| = 0, i.e.
|a11−λ 0 ... 0; 0 a22−λ ... 0; ...; 0 0 ... ann−λ| = 0
⇒ (a11 − λ)(a22 − λ)...(ann − λ) = 0,
which shows the characteristic values are a11, a22, ..., ann, which are nothing but the elements in the diagonal.
12.7 Similarity:
12.7.1 Definition: i) Two n × n matrices A and B are said to be similar if there exists a non singular matrix P such that B = P⁻¹AP, equivalently AP = PB, or A = PBP⁻¹.
Definition II: Two linear operators T1 and T2 on V are said to be similar if there exists a nonsingular linear operator U on V such that T2 = U⁻¹T1U.
12.7.2 Show that similar matrices have the same characteristic polynomial and hence the same eigen values.
Proof: Let A and B be any two similar matrices; then for an invertible matrix P we have B = P⁻¹AP.
det(B − λI) = det(P⁻¹AP − λP⁻¹P) = det(P⁻¹(A − λI)P)
= det(P⁻¹) det(A − λI) det(P) = det(P⁻¹P) det(A − λI)
= det(I) det(A − λI)
= 1 · det(A − λI).
This shows that the matrices A and B have the same characteristic polynomial. Hence A and B have the same characteristic roots.
12.7.3 Corollary: A square matrix B is similar to a diagonal matrix D. Show that the character-
istic roots of B are diagonal elements of D.
Proof: Let D be the diagonal matrix of order n to which B is similar.
The characteristic roots of D are the elements along the principal diagonal of D.
By the above theorem, B and D have the same characteristic equation and hence the
same characteristic roots.
So the characteristic roots of B are the diagonal elements of D.
Hence the Theorem.
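Both facts can be confirmed numerically. This sketch assumes NumPy; the matrices A and P are hypothetical samples.

```python
import numpy as np

# B = P^{-1} A P is similar to A, so eigen values must agree (12.7.2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # eigen values 2 and 5
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # any invertible P

B = np.linalg.inv(P) @ A @ P
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(B)))
```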
12.8 Theorem: Similar matrices have the same trace.
Proof: First we show that trace(AB) = trace(BA). Let A = (a_ij)_{n×n} and B = (b_ij)_{n×n}. Then
AB = (c_ij)_{n×n}, where c_ij = Σ_{k=1}^{n} a_ik b_kj, and
BA = (d_ij)_{n×n}, where d_ij = Σ_{k=1}^{n} b_ik a_kj.
trace of (AB) = Σ_{i=1}^{n} c_ii = Σ_{i=1}^{n} Σ_{k=1}^{n} a_ik b_ki
= Σ_{k=1}^{n} Σ_{i=1}^{n} b_ki a_ik
= Σ_{k=1}^{n} d_kk = d11 + d22 + ... + dnn = trace of (BA).
Now let A = (a_ij)_{n×n}, B = (b_ij)_{n×n} and let A be similar to B; then there exists a non singular matrix P such that A = P⁻¹BP.
trace of A = trace of (P⁻¹(BP)) = trace of ((BP)P⁻¹) = trace of (B(PP⁻¹)) = trace of (IB) = trace of B.
Hence similar matrices have the same trace.
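Both steps of the proof can be spot-checked numerically. The sketch assumes NumPy; the sample matrices are hypothetical.

```python
import numpy as np

# trace(AB) = trace(BA), and trace is invariant under similarity.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])

assert np.isclose(np.trace(A @ B), np.trace(B @ A))

P = np.array([[2.0, 1.0], [1.0, 1.0]])   # invertible (det = 1)
assert np.isclose(np.trace(np.linalg.inv(P) @ B @ P), np.trace(B))
```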
12.9 Trace of a Matrix: The sum of the elements of a square matrix A lying along the principal diagonal is called the trace of the matrix. If A = (a_ij)_{n×n}, then trace of A = a11 + a22 + ... + ann.
Let T be a linear operator from V V . Then the trace of T written as tr T is the trace of
M (T ) where M (T ) is the matrix of T in some basis of V..
12.9.1 To show that the definition of trace of a linear operator is well defined:
To show that the trace of linear operator is independent of the basis of V.
Suppose T has the matrix M1(T) in the basis {v1, v2, ..., vn} and the matrix M2(T) in the basis {w1, w2, ..., wn}. By the change-of-basis theorem, there exists a non singular matrix P of order n such that M2(T) = P⁻¹M1(T)P.
i.e. M 1 (T ) and M 2 (T ) are similar matrices. But similar matrices have the same trace.
Hence trace of T does not depend upon any particular basis of V.
Hence the above definition is meaningful. So the trace depends only on T and not on any particular basis.
Then in the basis {(1, 0), (0, 1)} the matrix of T is [0 2; 3 1], so trace of T = 0 + 1 = 1.
Also in the basis {(1, 3), (2, 5)} the matrix of T is [30 48; −18 −29],
so trace of T = 30 − 29 = 1.
12.10.2 Theorem: Prove that the determinant of a linear operator T on a vector space is unique.
Or
Prove that the determinant of a linear operator is independent of the choice of an ordered
basis for V.
Proof:
Let (a_ij) and (b_ij) be the matrices of the linear operator T with respect to the bases B1 and B2 of V.
Then there exists an invertible matrix (c_ij) such that (b_ij) = (c_ij)⁻¹ (a_ij)(c_ij).
det(b_ij) = det[(c_ij)⁻¹ (a_ij)(c_ij)]
= det((c_ij)⁻¹) det((c_ij)) det((a_ij))
= 1 · det((a_ij)).
Hence the determinant of the operator T is unique even though the matrices of T are different with respect to the bases B1 and B2.
12.10.3 Theorem: If T1 and T2 are linear operators on a finite dimensional vector space V ( F ) ,
then prove that det(T1T2 ) (det T1 )(det T2 ) .
Proof: T1 and T2 are two linear operators on a finite dimensional vector space V(F). Choose B to be an ordered basis of V. Then the matrix of the operator T1T2 w.r.t. the basis B can be put in the form [T1T2]_B = [T1]_B [T2]_B.
So det[T1T2]_B = det([T1]_B [T2]_B)
= det[T1]_B · det[T2]_B  ............ (1)
as we know the determinant of the product of two matrices is equal to the product of their determinants.
Now by the definition det T = det[T]_B, and hence by (1) we have det(T1T2) = (det T1)(det T2).
12.10.4 T is a linear operator on a finite dimensional vector space V. Prove that T is invertible if and only if det(T) ≠ 0.
Proof: T is a linear operator on a finite dimensional vector space V. Let B be a basis of the vector space V.
If T is invertible, then TT⁻¹ = T⁻¹T = I and det(T⁻¹T) = det(I_B) = 1, where I_B is the matrix of the identity operator. So (det T⁻¹)(det T) = 1.
Now det T and det T⁻¹ are elements of the field F, and a field F is without zero divisors, i.e. ab = 0 ⇒ a = 0 or b = 0 or both zero. In other words, if a ≠ 0 and b ≠ 0 then ab ≠ 0 in a field F. Hence det T ≠ 0.
Conversely, if det T ≠ 0 then [T]_B is invertible, so T is invertible.
Moreover (det T⁻¹) = (det T)⁻¹.
12.10.6 Let T be a linear operator on a finite dimensional vector space V. Then show that 0 is a characteristic value of T iff T is not invertible.
Solution: Case i) Let 0 be an eigen value of T. Then we have to prove T is singular.
1) T is a linear operator on R³ defined by
T[a; b; c] = [3a + 2b + 2c; −4a − 3b − 2c; −c],
and B = {(0, 1, −1), (1, −1, 0), (1, 0, −2)} is an ordered basis of R³. Compute [T]_B. Is T diagonalizable?
Ans: [T]_B = [−1 0 0; 0 1 0; 0 0 −1]; yes.
2) T is a linear operator on R² defined by T(a, b) = (−2a + 3b, −10a + 9b). Find the eigen values of T.
4) Find the eigen values of the matrix [0 1 2; 1 0 1; 2 1 0].
Ans: −2, 1 + √3, 1 − √3
5) Find the characteristic polynomial of A = [1 1 2; 0 3 2; 1 3 9].
Ans: −λ³ + 13λ² − 31λ + 17
6) If A = [4 1 −1; 2 5 −2; 1 1 2], then find i) all eigen values of A, ii) a maximal set of linearly independent eigen vectors of A.
Ans: i) 3, 3, 5  ii) (1, −1, 0), (1, 0, 1), (1, 2, 1) is a maximal set of linearly independent eigen vectors.
7) If A = [1 1; 4 1], find the eigen values and eigen vectors of A. Prove that A is diagonalizable. Obtain a basis for R² containing eigen vectors of A.
Ans: λ = 3, −1, and the eigen vectors are [1; 2], [1; −2]; {[1; 2], [1; −2]} is a basis of R².
8) Find the eigen values and eigen vectors of the following matrices.
i) [6 −2 2; −2 3 −1; 2 −1 3]  ii) [8 −6 2; −6 7 −4; 2 −4 3]
Ans: i) 2, 2, 8; for λ = 2 the eigen vectors are a[1; 2; 0] + b[1; 0; −2], where a, b are any nonzero scalars, and for λ = 8 they are c[2; −1; 1], where c is any nonzero scalar.
ii) 0, 3, 15, with eigen vectors [1; 2; 2], [2; 1; −2], [2; −2; 1] and their nonzero scalar multiples.
9) Show that the matrices [10 6 3; 26 16 8; 16 10 5] and [0 6 16; 0 17 45; 0 6 16] are similar.
10) Prove that the matrix A = [1 1; 1 1] ∈ M_{2×2}(R) is diagonalizable.
11) Find all the eigen values and a basis for each eigen space of the linear operator T : R³ → R³ defined by T(a, b, c) = (2a + b, b − c, 2b + 4c).
Ans: eigen values 2, 2, 3; the eigen space of 2 is spanned by [1; 0; 0], and the eigen space of 3 is spanned by [1; 1; −2].
12) Find the eigen values of [2 1 0; 0 2 1; 0 0 2].
Ans: 2, 2, 2
13) Show that A = [0 h g; h 0 f; g f 0] and B = [0 f h; f 0 g; h g 0] have the same eigen values.
12.12 Diagonalizability:
We have seen in the preceding articles that not every linear operator or matrix is
diagonalizable. We need a simple test to determine whether an operator or a matrix can be diago-
nalized, as well as a method for actually finding a basis of eigen vectors.
12.12.1 Theorem: Let T be a linear operator on a vector space V, and let λ1, λ2, ..., λk be distinct eigen values of T. If v1, v2, ..., vk are eigen vectors of T such that λi corresponds to vi (1 ≤ i ≤ k), then {v1, v2, ..., vk} is linearly independent.
Proof (by induction on k): For k = 1, v1 is an eigen vector and so nonzero; hence {v1} is linearly independent. We assume the theorem is true for (k − 1) distinct eigen values, where (k − 1) ≥ 1.
Let there be k eigen vectors v1, v2, ..., vk corresponding to the distinct eigen values λ1, λ2, ..., λk, and suppose a1v1 + a2v2 + ... + akvk = 0. Applying (T − λkI) to both sides gives a1(λ1 − λk)v1 + ... + a_{k−1}(λ_{k−1} − λk)v_{k−1} = 0, and by the induction hypothesis each ai(λi − λk) = 0. Since λi − λk ≠ 0 for 1 ≤ i ≤ k − 1,
a1 = a2 = ... = a_{k−1} = 0, and then akvk = 0 gives ak = 0.
Thus a linear combination of the vectors v1, v2, ..., vk equal to the zero vector implies each scalar coefficient is zero.
Proof: Let the n distinct eigen values of T be λ1, λ2, ..., λn. For each i, choose an eigen vector vi corresponding to λi. By the above theorem {v1, v2, ..., vn} is linearly independent, and as dim V = n, it is a basis of V consisting of eigen vectors of T; hence T is diagonalizable.
Converse: The converse of the above theorem need not be true. i.e. If T is diagonalizable, then it
has n distinct eigen values need not be true. For example the identity operator is diagonalizable
even though it has only one eigen value namely 1.
W.E. 6: Worked Out Examples:
Show that A = [1 1; 1 1] ∈ M_{2×2}(R) is diagonalizable.
Solution: The characteristic polynomial of A is
|A − λI| = |1−λ 1; 1 1−λ|
= (1 − λ)² − 1²
= (1 − λ + 1)(1 − λ − 1)
= (2 − λ)(−λ).
Thus λ = 0, 2.
Hence the characteristic values of A are 0, 2, so the characteristic values of L_A are 0, 2, which are distinct. As dim R² = 2, A is diagonalizable.
W.E. 7: Show that A = [1 2; 0 1] is not diagonalizable.
Solution:
|A − λI| = |1−λ 2; 0 1−λ| = (1 − λ)²
λ = 1, 1.
A, and hence L_A, has the single eigen value 1 with multiplicity 2, but (A − 1I)v = O gives 2v2 = 0, so the eigen space of 1 is spanned by [1; 0] alone. Its dimension 1 is less than the multiplicity 2, so no basis of R² consisting of eigen vectors exists, and A is not diagonalizable.
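W.E. 7 can be confirmed numerically. The sketch assumes NumPy: the repeated eigen value is 1, and the rank of A − I shows the eigen space is only one-dimensional.

```python
import numpy as np

# The non-diagonalizable matrix of W.E. 7.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])

assert np.allclose(np.linalg.eigvals(A), [1.0, 1.0])
# rank(A - 1I) = 1, so dim E_1 = 2 - 1 = 1 < 2 = multiplicity of 1.
assert np.linalg.matrix_rank(A - np.eye(2)) == 1
```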
Example:
Note: If f(t) is the characteristic polynomial of a linear operator or a matrix over a field F, then
the statement that f(t) splits is to be understood to mean that it splits over F.
12.14.2 Theorem: The characteristic polynomial of any diagonalizable linear operator splits.
Proof: Let V be an n dimensional vector space. Let T be a diagonalizable linear operator on V. Let B be an ordered basis for V such that [T]_B = D is a diagonal matrix. Suppose that
D = [λ1 0 ... 0; 0 λ2 ... 0; ...; 0 0 ... λn].
Then f(t) = det(D − tI)
= det [λ1−t 0 ... 0; 0 λ2−t ... 0; ...; 0 0 ... λn−t]
= (λ1 − t)(λ2 − t)...(λn − t),
which splits.
W.E. 8: Example:
i) Let A = [3 1 0; 0 3 4; 0 0 4]. Then
det(A − tI) = |3−t 1 0; 0 3−t 4; 0 0 4−t|;
expanding along the first column, the characteristic polynomial is
(3 − t)[(3 − t)(4 − t) − 0] = (3 − t)²(4 − t).
The set E_λ = {v ∈ V : T(v) = λv} = N(T − λI) is called the eigen space of T corresponding to the eigen value λ.
Analogously we define the eigen space of a square matrix A to be the eigen space of L_A.
Note: E_λ is a subspace of V consisting of the zero vector and the eigen vectors of T corresponding to the eigen value λ. So the maximum number of linearly independent eigen vectors of T corresponding to the eigen value λ is the dimension of E_λ.
12.15.2 Theorem: Let T be a linear operator on a finite dimensional vector space V. Let be an
eigen value of T having multiplicity m. Then 1 dim( E ) m .
Proof: Let p = dim(E_λ). Choose an ordered basis {v1, ..., vp} of E_λ and extend it to an ordered basis of V. Since T(vi) = λvi for 1 ≤ i ≤ p, with respect to this basis the matrix of T has the block form
A = [λI_p  B; O  C].
Then the characteristic polynomial of T is
f(t) = det(A − tI_n) = det [(λ − t)I_p  B; O  C − tI_{n−p}]
= det((λ − t)I_p) · det(C − tI_{n−p})
= (λ − t)^p g(t), where g(t) is a polynomial.
Thus (λ − t)^p is a factor of f(t), and hence the multiplicity of λ is at least p. But dim(E_λ) = p, so dim(E_λ) ≤ m. Since E_λ contains a nonzero eigen vector, dim(E_λ) ≥ 1.
Hence 1 ≤ dim(E_λ) ≤ m.
12.15.3 We state some theorems without proofs:
Lemma: Let T be a linear operator and let λ1, λ2, ..., λk be distinct eigen values of T. For each i = 1, 2, ..., k,
let vi ∈ E_{λi}, the eigen space corresponding to λi. If v1 + v2 + ... + vk = 0, then vi = 0 for all i.
12.15.4 Theorem: Let T be a linear operator on a vector space V, and let λ1, λ2, ..., λk be distinct eigen values of T. For each i = 1, 2, ..., k, let Si be a finite linearly independent subset of the eigen space E_{λi}. Then S = S1 ∪ S2 ∪ S3 ∪ ... ∪ Sk is a linearly independent subset of V.
12.15.5 Theorem: Let T be a linear operator on a finite dimensional vector space V such that the characteristic polynomial of T splits. Let λ1, λ2, ..., λk be the distinct eigen values of T. Then i) T is diagonalizable if and only if the multiplicity of λi is equal to dim(E_{λi}) for all i.
12.16 Summary:
Test for Diagonalization :
Let T be a linear operator on an n dimensional vector space V. Then T is diagonalizable if
and only if both the following conditions hold.
i) The characteristic polynomial of T splits.
ii) For each eigen value λ of T, the multiplicity of λ equals n − rank(T − λI).
In order to test the diagonalizability of a square matrix A, the same conditions can be used, since the diagonalizability of A is equivalent to the diagonalizability of the operator L_A.
If T is a diagonalizable operator and B1, B2, ..., Bk are ordered bases for the eigen spaces of T, then the union B = B1 ∪ B2 ∪ ... ∪ Bk is an ordered basis for V consisting of eigen vectors of T, and [T]_B is a diagonal matrix.
When we want to test T for diagonalizability, we usually choose a convenient basis B for V and form A = [T]_B. If the characteristic polynomial of A splits, then use condition (ii) above to check whether the multiplicity of each repeated eigen value of A equals n − rank(A − λI). If the characteristic polynomial of A splits, condition (ii) is automatically satisfied for eigen values with multiplicity 1. If A is diagonalizable, then T is also diagonalizable.
If we find that T is diagonalizable and want to find a basis B for V consisting of eigen vectors of T, we adopt the following procedure.
1) We first find a basis for each eigen space of A. The union of these bases is a basis C for F^n consisting of eigen vectors of A. Each vector in C is the coordinate vector relative to B of an eigen vector of T. The set consisting of these n eigen vectors of T is the desired basis B.
Furthermore, if A is an n × n diagonalizable matrix, we can find an invertible n × n matrix Q and a diagonal n × n matrix D such that Q⁻¹AQ = D.
The matrix Q has as its columns the vectors in a basis of eigen vectors of A, and D has as
its j th diagonal entry the eigen value of A corresponding to the j th column of Q.
W.E. 9: Let T be the linear operator on P2(R) defined by T(f(x)) = f′(x). Find the eigen values of T.
Solution: The standard ordered basis of P2(R) is B = {1, x, x²}. Given T(f(x)) = f′(x):
T(1) = 0 = 0(1) + 0x + 0x²
T(x) = 1 = 1(1) + 0x + 0x²
T(x²) = 2x = 0(1) + 2x + 0x²
A = [T]_B = [0 1 0; 0 0 2; 0 0 0]
|A − λI| = |−λ 1 0; 0 −λ 2; 0 0 −λ| = (−λ)(λ² − 0) = 0
⇒ −λ³ = 0 ⇒ λ = 0.
Thus T has only one eigen value, λ = 0, with multiplicity 3.
W.E. 10: Test the matrix A = [3 1 0; 0 3 0; 0 0 4] ∈ M_{3×3}(R) for diagonalizability.
Solution: The characteristic equation is
|3−λ 1 0; 0 3−λ 0; 0 0 4−λ| = 0
⇒ (4 − λ)(3 − λ)² = 0, so λ1 = 4 and λ2 = 3 with multiplicity 2.
The characteristic polynomial splits. Since λ1 has multiplicity 1, condition (ii) is satisfied for λ1. Thus we need only test condition (ii) for λ2:
(A − λ2 I) = [3−3 1 0; 0 3−3 0; 0 0 4−3] = [0 1 0; 0 0 0; 0 0 1],
which can be put in echelon form. Here the number of nonzero rows is 2, so rank of (A − λ2 I) = 2, and 3 − rank(A − λ2 I) = 1 ≠ 2 = multiplicity of λ2. Hence A is not diagonalizable.
W.E. 11: Let T be the linear operator on P2(R) defined by T(f(x)) = f(1) + f′(0)x + (f′(0) + f″(0))x². Test T for diagonalizability, and also find an ordered basis for R³ of eigen vectors of A = [T]_B, where B is the standard basis of P2(R).
Solution: T is the linear operator on P2(R) defined by T(f(x)) = f(1) + f′(0)x + (f′(0) + f″(0))x².
T(1) = 1 + 0x + 0x²
T(x) = 1 + 1·x + (1 + 0)x², i.e. T(x) = 1 + x + x²
T(x²) = 1 + 0x + (0 + 2)x², i.e. T(x²) = 1 + 2x²
Thus A = [T]_B = [1 1 1; 0 1 0; 0 1 2]
det(A − λI) = |1−λ 1 1; 0 1−λ 0; 0 1 2−λ|
= (1 − λ)²(2 − λ)
The characteristic polynomial of A, and hence of T, is (1 − λ)²(2 − λ), which splits. Hence
condition (i) is satisfied. For the repeated eigen value λ = 1 (multiplicity 2):
3 − rank(A − 1I) = 3 − rank [1−1 1 1; 0 1−1 0; 0 1 2−1]
= 3 − rank [0 1 1; 0 0 0; 0 1 1]
= 3 − 1 = 2, since the matrix has only one linearly independent row.
So condition (ii) also holds, and T is diagonalizable.
We now find the ordered basis C for R³ of eigen vectors of A. We consider each eigen value separately.
Let λ1 = 1; then (A − λ1 I)v = 0:
[1−1 1 1; 0 1−1 0; 0 1 2−1][v1; v2; v3] = [0; 0; 0]
[0 1 1; 0 0 0; 0 1 1][v1; v2; v3] = [0; 0; 0]
v2 + v3 = 0 ⇒ v2 = −v3. Let v1 = s, v3 = t; then
[v1; v2; v3] = [s; −t; t] = s[1; 0; 0] + t[0; −1; 1].
So C1 = {[1; 0; 0], [0; −1; 1]} is a basis for the eigen space E_{λ1} = {v = [v1; v2; v3] ∈ R³ : (A − λ1I)v = 0}.
For λ2 = 2: (A − λ2 I)v = O
[1−2 1 1; 0 1−2 0; 0 1 2−2][v1; v2; v3] = [0; 0; 0]
[−1 1 1; 0 −1 0; 0 1 0][v1; v2; v3] = [0; 0; 0]
⇒ v2 = 0, and −v1 + v2 + v3 = 0 ⇒ v1 = v3.
Put v1 = 1; then v2 = 0, v3 = 1.
So C2 = {[1; 0; 1]} is the basis for the eigen space E_{λ2} = {v = [v1; v2; v3] ∈ R³ : (A − λ2I)v = 0}.
Consider C = C1 ∪ C2; then
C = {[1; 0; 0], [0; −1; 1], [1; 0; 1]}
and [L_A]_C = [1 0 0; 0 1 0; 0 0 2],
which is the required diagonal matrix.
W.E. 12: Show that the matrix [0 −2; 1 3] is diagonalizable and find a 2 × 2 matrix P such that P⁻¹AP is a diagonal matrix.
Solution: The characteristic equation of the given matrix A = [0 −2; 1 3] is
|A − λI| = |−λ −2; 1 3−λ| = 0
(3 − λ)(−λ) + 2 = 0 ⇒ λ² − 3λ + 2 = 0
(λ − 2)(λ − 1) = 0 ⇒ λ = 2, 1.
Thus A has two distinct eigen values λ1 = 1, λ2 = 2. As the dimensionality of the vector space is 2,
we see that A is diagonalizable.
We have (A − λ1 I)v = O with λ1 = 1:
[0−1 −2; 1 3−1][v1; v2] = [0; 0] ⇒ [−1 −2; 1 2][v1; v2] = [0; 0]
⇒ v1 + 2v2 = 0 ⇒ v1 = −2v2, so v = [−2; 1],
and C1 = {[−2; 1]} is a basis of the eigen space E_{λ1} = {[v1; v2] ∈ R² : (A − 1I)v = O}.
(A − λ2 I)v = O with λ2 = 2:
[0−2 −2; 1 3−2][v1; v2] = [0; 0] ⇒ [−2 −2; 1 1][v1; v2] = [0; 0]
⇒ v1 + v2 = 0, so v1 = −v2.
Put v2 = 1; then v1 = −1, so v = [−1; 1].
So C2 = {[−1; 1]} is the basis of the eigen space E_{λ2} = {[v1; v2] ∈ R² : (A − 2I)v = O}.
C = C1 ∪ C2 = {[−2; 1], [−1; 1]} is an ordered basis for R² consisting of the eigen vectors of A.
Let P = [−2 −1; 1 1], the matrix whose columns are the vectors in C. Then
D = P⁻¹AP = [L_A]_C = [1 0; 0 2].
W.E. 13: Let T be the linear operator on R³ which is represented in the standard basis by the matrix
[−9 4 4; −8 3 4; −16 8 7].
Prove that T is diagonalizable. Find a basis of R³ consisting of eigen vectors of T.
Solution: The given matrix is A = [−9 4 4; −8 3 4; −16 8 7].
The characteristic equation is λ³ − (trace of A)λ² + (A11 + A22 + A33)λ − det A = 0.
trace of A = −9 + 3 + 7 = 1
A11 = (−1)^{1+1}|3 4; 8 7| = 21 − 32 = −11
A22 = (−1)^{2+2}|−9 4; −16 7| = −63 + 64 = 1
A33 = (−1)^{3+3}|−9 4; −8 3| = −27 + 32 = 5
det A = 99 − 32 − 64 = 3
The characteristic polynomial is λ³ − 1·λ² + (−11 + 1 + 5)λ − 3,
i.e. f(λ) = λ³ − λ² − 5λ − 3.
f(−1) = −1 − 1 + 5 − 3 = 0,
so (λ + 1) is a factor of f(λ).
The other factor is
λ² − 2λ − 3
= (λ − 3)(λ + 1).
So f(λ) = 0 ⇒ (λ + 1)²(λ − 3) = 0.
(A − λ1 I)v = O with λ1 = −1:
[−9+1 4 4; −8 3+1 4; −16 8 7+1][v1; v2; v3] = O
[−8 4 4; −8 4 4; −16 8 8][v1; v2; v3] = O
R2 − R1, R3 − 2R1 gives
[−8 4 4; 0 0 0; 0 0 0] v = O
−(1/4)R1 gives [2 −1 −1; 0 0 0; 0 0 0] v = O
2v1 − v2 − v3 = 0; put v1 = t, v2 = s,
then v3 = 2v1 − v2 = 2t − s.
v = [v1; v2; v3] = [t; s; 2t − s] = t[1; 0; 2] + s[0; 1; −1]
So C1 = {[1; 0; 2], [0; 1; −1]} is a basis for the eigen space E_{λ1} = {v = [v1; v2; v3] ∈ R³ : (A − λ1 I)v = O}.
(A − λ2 I)v = O with λ2 = 3:
[−9−3 4 4; −8 3−3 4; −16 8 7−3][v1; v2; v3] = O
[−12 4 4; −8 0 4; −16 8 4] v = O
(1/4)R1, (1/4)R2, (1/4)R3 gives
[−3 1 1; −2 0 1; −4 2 1] v = O
3R2 − 2R1, 3R3 − 4R1 gives
[−3 1 1; 0 −2 1; 0 2 −1] v = O
R3 + R2 gives
[−3 1 1; 0 −2 1; 0 0 0] v = O
−2v2 + v3 = 0, so v3 = 2v2, and
−3v1 + v2 + v3 = 0 gives 3v1 = 3v2, so v2 = v1.
Put v1 = 1, so v2 = 1, v3 = 2. Hence v = [1; 1; 2].
So C2 = {[1; 1; 2]} is a basis of the eigen space E_{λ2} = {v = [v1; v2; v3] ∈ R³ : (A − λ2 I)v = O}.
If we consider the union of the bases of these two eigen spaces we get C = {[1; 0; 2], [0; 1; −1], [1; 1; 2]},
which is linearly independent. Thus the set C is a basis of R³ consisting of the eigen vectors of T.
Hence T is diagonalizable.
W.E. 14: Let T be the linear operator on R³ defined by
T[a1; a2; a3] = [4a1 + a3; 2a1 + 3a2 + 2a3; a1 + 4a3].
Show that T is diagonalizable.
Solution: Given T[a1; a2; a3] = [4a1 + a3; 2a1 + 3a2 + 2a3; a1 + 4a3],
and B = {e1, e2, e3}, where e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1), is the standard basis of R³.
T(e1) = T[1; 0; 0] = [4(1) + 0; 2(1) + 0 + 0; 1 + 0] = [4; 2; 1]
T(e2) = T[0; 1; 0] = [4(0) + 0; 2(0) + 3(1) + 2(0); 0 + 4(0)] = [0; 3; 0]
T(e3) = T[0; 0; 1] = [4(0) + 1; 2(0) + 3(0) + 2(1); 0 + 4(1)] = [1; 2; 4]
Writing [T]_B as A we get
A = [T]_B = [4 0 1; 2 3 2; 1 0 4]
trace of A = 4 + 3 + 4 = 11
A11 = (−1)^{1+1}|3 2; 0 4| = 12 − 0 = 12
A22 = (−1)^{2+2}|4 1; 1 4| = 16 − 1 = 15
A33 = (−1)^{3+3}|4 0; 2 3| = 12 − 0 = 12
det A = 4(12 − 0) − 0 + 1(0 − 3) = 45
f(λ) = λ³ − 11λ² + (12 + 15 + 12)λ − 45 = λ³ − 11λ² + 39λ − 45
f(3) = 27 − 99 + 117 − 45 = 0, so (λ − 3) is a factor. Dividing, the other factor is
λ² − 8λ + 15 = (λ − 3)(λ − 5).
So f(λ) = (λ − 3)²(λ − 5) = 0.
Hence the characteristic roots are 3, 3, 5.
So the eigen values of T are
λ1 = 5 with multiplicity 1,
λ2 = 3 with multiplicity 2.
(A − λ1 I)v = O with λ1 = 5:
[4−5 0 1; 2 3−5 2; 1 0 4−5][v1; v2; v3] = [0; 0; 0]
[−1 0 1; 2 −2 2; 1 0 −1][v1; v2; v3] = [0; 0; 0]
R2 + 2R1, R3 + R1 gives
[−1 0 1; 0 −2 4; 0 0 0][v1; v2; v3] = [0; 0; 0]
−v1 + v3 = 0 ⇒ v1 = v3, and −2v2 + 4v3 = 0 ⇒ v2 = 2v3.
Put v3 = 1; then v2 = 2, v1 = 1.
So v = [1; 2; 1].
So C1 = {[1; 2; 1]} is a basis of the eigen space E_{λ1} = {v = [v1; v2; v3] ∈ R³ : (A − λ1 I)v = O}.
(A − λ2 I)v = O with λ2 = 3:
[4−3 0 1; 2 3−3 2; 1 0 4−3][v1; v2; v3] = O
[1 0 1; 2 0 2; 1 0 1][v1; v2; v3] = O
R2 − 2R1, R3 − R1 gives
[1 0 1; 0 0 0; 0 0 0][v1; v2; v3] = O
v1 + v3 = 0 ⇒ v3 = −v1.
The unknown v2 does not appear in this system, so we assign it a parametric value, say v2 = s,
and solve the system for v3 and v1. If v3 = t, then v1 = −t, introducing another parameter t. The
result is the general solution to the system:
v = [v1; v2; v3] = [−t; s; t] = s[0; 1; 0] + t[−1; 0; 1] for s, t ∈ R.
So C2 = {[0; 1; 0], [−1; 0; 1]} is a basis of the eigen space E_{λ2} = {v = [v1; v2; v3] ∈ R³ : (A − λ2 I)v = O}.
So dim E_{λ2} = 2 = multiplicity of λ2.
C = C1 ∪ C2 = {[1; 2; 1], [0; 1; 0], [−1; 0; 1]} is linearly independent and hence is a basis of R³ consisting
of eigen vectors of T. Consequently T is diagonalizable.
W.E. 15: Find the eigen vectors of the linear operator T on P2(R) defined by T(f(x)) = f(x) + (x + 1)f′(x).
Solution: B = {1, x, x²} is the standard basis of P2(R), and we are given the linear operator on P2(R)
defined by T(f(x)) = f(x) + (x + 1)f′(x).
In W.E. 4 we have shown that A = [T]_B = [1 1 0; 0 2 2; 0 0 3] and the eigen values of T are 1, 2, 3.
We have (A − λ1 I)v = O with λ1 = 1:
[1−1 1 0; 0 2−1 2; 0 0 3−1][v1; v2; v3] = [0; 0; 0] ⇒ [0 1 0; 0 1 2; 0 0 2]v = O
⇒ v2 = 0, v3 = 0, and v1 is free.
Hence the eigen vector corresponding to λ1 = 1 is v = [1; 0; 0], and every nonzero scalar multiple of it is an eigen vector.
So C1 = {[1; 0; 0]} is a basis for the eigen space
E_{λ1} = {v = [v1; v2; v3] ∈ R³ : (A − λ1 I)v = O};
dim E_{λ1} = 1 = multiplicity of λ1 = 1.
(A − λ2 I)v = O with λ2 = 2:
[1−2 1 0; 0 2−2 2; 0 0 3−2][v1; v2; v3] = O
[−1 1 0; 0 0 2; 0 0 1]v = O ⇒ v3 = 0
−v1 + v2 = 0 ⇒ v2 = v1
Put v1 = 1; then v2 = 1, v3 = 0.
Hence v = [1; 1; 0] is the eigen vector, and every nonzero scalar multiple of it is also an eigen vector.
So C2 = {[1; 1; 0]} is a basis for the eigen space
E_{λ2} = {v = [v1; v2; v3] ∈ R³ : (A − λ2 I)v = O}; dim E_{λ2} = 1.
(A − λ3 I)v = O with λ3 = 3:
[1−3 1 0; 0 2−3 2; 0 0 3−3][v1; v2; v3] = O
[−2 1 0; 0 −1 2; 0 0 0][v1; v2; v3] = [0; 0; 0]
−v2 + 2v3 = 0, so v2 = 2v3, and −2v1 + v2 = 0 ⇒ v2 = 2v1.
Put v1 = 1, so v2 = 2, v3 = 1.
Hence v = [1; 2; 1] is an eigen vector, and every nonzero scalar multiple of it is also an eigen vector.
So C3 = {[1; 2; 1]} is a basis for the eigen space
E_{λ3} = {v = [v1; v2; v3] ∈ R³ : (A − λ3 I)v = O}; dim E_{λ3} = 1.
Thus we observe that the multiplicity of each eigen value is equal to the dimension of the
corresponding eigen space.
If D = Q⁻¹AQ = [L_A]_B, then A = QDQ⁻¹ and
A^k = (QDQ⁻¹)^k
= QD(Q⁻¹Q)D(Q⁻¹Q)...DQ⁻¹.
So A^k = Q D^k Q⁻¹.
W.E. 16: For A = [1 4; 2 3] ∈ M_{2×2}(R), find an expression for A^n, where n is a positive integer.
Solution: Given A = [1 4; 2 3]. We show that A is diagonalizable and find a 2 × 2 matrix Q such that
Q⁻¹AQ is a diagonal matrix.
|A − λI| = 0 ⇒ |1−λ 4; 2 3−λ| = 0
(1 − λ)(3 − λ) − 8 = 0
λ² − 4λ − 5 = 0 ⇒ (λ − 5)(λ + 1) = 0.
Hence the characteristic values are −1, 5.
(A − λ1 I)v = O with λ1 = −1:
[1+1 4; 2 3+1][v1; v2] = O ⇒ [2 4; 2 4][v1; v2] = [0; 0]
2v1 + 4v2 = 0 ⇒ v1 = −2v2; put v2 = 1, and so v1 = −2. Therefore v = [−2; 1].
Hence C1 = {[−2; 1]} is a basis for the eigen space
E_{λ1} = {[v1; v2] ∈ R² : (A + 1I)v = O}.
(A − λ2 I)v = O with λ2 = 5:
[1−5 4; 2 3−5][v1; v2] = O ⇒ [−4 4; 2 −2][v1; v2] = [0; 0]
−4v1 + 4v2 = 0 ⇒ v1 = v2; put v1 = 1, then v2 = 1, so v = [1; 1].
C2 = {[1; 1]} is a basis for the eigen space E_{λ2} = {[v1; v2] ∈ R² : (A − 5I)v = O}.
So C = C1 ∪ C2 = {[−2; 1], [1; 1]} is an ordered basis for R² consisting of eigen vectors of A.
Let Q = [−2 1; 1 1]; then Q⁻¹ = (1/3)[−1 1; 1 2], and
D = Q⁻¹AQ = [L_A]_C = [−1 0; 0 5].
A^n = QD^nQ⁻¹ = Q[(−1)^n 0; 0 5^n]Q⁻¹
= [−2 1; 1 1][(−1)^n 0; 0 5^n] · (1/3)[−1 1; 1 2]
= (1/3)[−2(−1)^n 5^n; (−1)^n 5^n][−1 1; 1 2]
= (1/3)[2(−1)^n + 5^n   −2(−1)^n + 2·5^n;  −(−1)^n + 5^n   (−1)^n + 2·5^n].
W.E. 17: If A = [1 1; 4 1], find the eigen values and eigen vectors of A. Prove that A is diagonalizable. Find a basis of R² containing eigen vectors of A.
Solution: |A − λI| = O ⇒ |1−λ 1; 4 1−λ| = 0 ⇒ (1 − λ)² − 4 = 0
(1 − λ + 2)(1 − λ − 2) = 0 ⇒ (3 − λ)(−1 − λ) = 0
λ = 3, −1.
(A − λ1 I)v = O with λ1 = 3:
[1−3 1; 4 1−3][v1; v2] = O
[−2 1; 4 −2][v1; v2] = O; R2 + 2R1 gives
[−2 1; 0 0][v1; v2] = [0; 0] ⇒ −2v1 + v2 = 0 ⇒ v2 = 2v1
put v1 = t, then v2 = 2t, so v = [v1; v2] = t[1; 2],
where t ∈ R.
Thus [1; 2] is the eigen vector corresponding to λ1 = 3.
(A − λ2 I)v = O with λ2 = −1:
[1+1 1; 4 1+1][v1; v2] = O
[2 1; 4 2][v1; v2] = O; R2 − 2R1 gives
[2 1; 0 0][v1; v2] = [0; 0] ⇒ 2v1 + v2 = 0 ⇒ v2 = −2v1
put v1 = s, then v2 = −2s.
Thus v = [v1; v2] = s[1; −2], where s ∈ R.
So [1; −2] is the eigen vector corresponding to λ2 = −1.
{[1; 2], [1; −2]} is a basis of R², since these vectors are linearly independent and dim R² = 2.
This is a basis of R² consisting of eigen vectors of A. So L_A, and hence A, is diagonalizable.
W.E. 18: Find the matrix P which diagonalizes the matrix A = [−2 2 −3; 2 1 −6; −1 −2 0], and verify that P⁻¹AP
is a diagonal matrix.
Solution:
trace of A = −2 + 1 + 0 = −1
A11 = (−1)^{1+1}|1 −6; −2 0| = 0 − 12 = −12
A22 = (−1)^{2+2}|−2 −3; −1 0| = 0 − 3 = −3
A33 = (−1)^{3+3}|−2 2; 2 1| = −2 − 4 = −6
det A = 24 + 12 + 9 = 45
The characteristic equation is
λ³ − (trace of A)λ² + (A11 + A22 + A33)λ − det A = 0, i.e. λ³ + λ² − 21λ − 45 = 0.
f(−3) = −27 + 9 + 63 − 45 = 0,
so (λ + 3) is a factor. Dividing synthetically by (λ + 3):
−3 | 1   1   −21   −45
   |     −3     6    45
     1  −2   −15     0
So f(λ) = (λ + 3)(λ² − 2λ − 15) = (λ + 3)²(λ − 5) = 0; λ1 = 5 and λ2 = −3 with multiplicity 2.
(A − λ1 I)v = O with λ1 = 5:
[−2−5 2 −3; 2 1−5 −6; −1 −2 0−5][v1; v2; v3] = O
[−7 2 −3; 2 −4 −6; −1 −2 −5][v1; v2; v3] = O
R1 − 7R3, R2 + 2R3 gives
[0 16 32; 0 −8 −16; −1 −2 −5] v = O
R1 + 2R2 gives
[0 0 0; 0 −8 −16; −1 −2 −5] v = O
−8v2 − 16v3 = 0 ⇒ v2 = −2v3, and −v1 − 2v2 − 5v3 = 0 ⇒ v1 = −v3.
Put v3 = −1; then v1 = 1, v2 = 2, so
v = [1; 2; −1]
is an eigen vector corresponding to λ1 = 5.
C1 = {[1; 2; −1]} is a basis of the eigen space E_{λ1} = {v = [v1; v2; v3] ∈ R³ : (A − λ1 I)v = 0}.
(A − λ2 I)v = O with λ2 = −3:
[−2+3 2 −3; 2 1+3 −6; −1 −2 0+3][v1; v2; v3] = O
[1 2 −3; 2 4 −6; −1 −2 3] v = O; R2 − 2R1, R3 + R1 give
[1 2 −3; 0 0 0; 0 0 0] v = O.
So v1 + 2v2 − 3v3 = 0
⇒ v1 = −2v2 + 3v3.
Put v2 = s, v3 = t.
Then v = [v1; v2; v3] = [−2s + 3t; s; t] = s[−2; 1; 0] + t[3; 0; 1].
So C2 = {[−2; 1; 0], [3; 0; 1]} is a basis of the eigen space E_{λ2} = {v = [v1; v2; v3] ∈ R³ : (A − λ2 I)v = O}.
Writing the eigenvectors as columns,
P = [1  -2  3; 2  1  0; -1  0  1];  det P = 1(1 - 0) + 2(2 - 0) + 3(0 + 1) = 1 + 4 + 3 = 8
Adj P = [1  2  -3; -2  4  6; 1  2  5]
P⁻¹ = (1/det P)·Adj P = (1/8)[1  2  -3; -2  4  6; 1  2  5]
P⁻¹A = (1/8)[1  2  -3; -2  4  6; 1  2  5][-2  2  -3; 2  1  -6; -1  -2  0] = (1/8)[5  10  -15; 6  -12  -18; -3  -6  -15]
P⁻¹AP = (1/8)[5  10  -15; 6  -12  -18; -3  -6  -15][1  -2  3; 2  1  0; -1  0  1] = (1/8)[40  0  0; 0  -24  0; 0  0  -24]
= [5  0  0; 0  -3  0; 0  0  -3] = diag(5, -3, -3)
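The diagonalization in W.E.18 can be checked numerically. The sketch below (with the matrix signs as reconstructed in the worked example above) forms P from the hand-computed eigenvectors and verifies that P⁻¹AP is diag(5, -3, -3):

```python
import numpy as np

# Matrix from W.E.18 (signs as reconstructed in the worked example).
A = np.array([[-2.0, 2.0, -3.0],
              [2.0, 1.0, -6.0],
              [-1.0, -2.0, 0.0]])

# Eigenvectors found by hand, written as the columns of P.
P = np.array([[1.0, -2.0, 3.0],
              [2.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0]])

D = np.linalg.inv(P) @ A @ P   # should be diag(5, -3, -3)
print(np.round(D, 10))
```

Any nonzero scaling of the eigenvector columns gives a different P but the same diagonal matrix D.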
W.E.19: Find the matrix P which transforms the matrix A = [1  0  -1; 1  2  1; 2  2  3] to diagonal form and hence calculate A⁴.
Solution: trace of A = 1 + 2 + 3 = 6
A11 = (-1)^(1+1) det[2  1; 2  3] = 6 - 2 = 4
A22 = (-1)^(2+2) det[1  -1; 2  3] = 3 + 2 = 5
A33 = (-1)^(3+3) det[1  0; 1  2] = 2 - 0 = 2
A11 + A22 + A33 = 11, det A = 6
f(λ) = λ³ - 6λ² + 11λ - 6 = 0
f(1) = 1 - 6 + 11 - 6 = 0, so (λ - 1) is a factor. Synthetic division by 1:
1 | 1  -6   11   -6
  |      1   -5    6
  | 1  -5    6    0
f(λ) = (λ - 1)(λ² - 5λ + 6) = (λ - 1)(λ - 2)(λ - 3), so the eigenvalues are λ = 1, 2, 3.
For λ = 1: (A - 1·I)v = O ⇒ [1-1  0  -1; 1  2-1  1; 2  2  3-1]v = O
i.e. [0  0  -1; 1  1  1; 2  2  2]v = O
R3 - 2R2 gives [0  0  -1; 1  1  1; 0  0  0]v = O
-v3 = 0 ⇒ v3 = 0
v1 + v2 + v3 = 0, i.e. v1 + v2 = 0 ⇒ v2 = -v1
So v = (1, -1, 0) (taking v1 = 1).
So C1 = {(1, -1, 0)} is a basis of the eigenspace E1 = {v ∈ R³ : (A - 1·I)v = O}.
For λ = 2: (A - 2·I)v = O ⇒ [1-2  0  -1; 1  2-2  1; 2  2  3-2]v = O ⇒ [-1  0  -1; 1  0  1; 2  2  1]v = O
R2 + R1 gives
[-1  0  -1; 0  0  0; 2  2  1]v = O ⇒ -v1 - v3 = 0, so v3 = -v1
2v1 + 2v2 + v3 = 0 ⇒ 2v1 + 2v2 - v1 = 0 ⇒ v1 + 2v2 = 0 ⇒ v1 = -2v2
If v2 = 1, then v1 = -2, v3 = 2.
So v = (-2, 1, 2), and C2 = {(-2, 1, 2)} is a basis of the eigenspace E2 = {v ∈ R³ : (A - 2·I)v = O}.
For λ = 3: (A - 3·I)v = O ⇒ [1-3  0  -1; 1  2-3  1; 2  2  3-3]v = O ⇒ [-2  0  -1; 1  -1  1; 2  2  0]v = O
-2v1 - v3 = 0 ⇒ v3 = -2v1
v1 - v2 + v3 = 0 ⇒ v1 - v2 - 2v1 = 0 ⇒ v2 = -v1
So v = (1, -1, -2) (taking v1 = 1). Hence C3 = {(1, -1, -2)} is a basis of E3 = {v ∈ R³ : (A - 3·I)v = O}.
Writing the three eigenvectors of the matrix as the three columns, the required transformation matrix is
P = [1  -2  1; -1  1  -1; 0  2  -2]
det P = 1(-2 + 2) + 2(2 - 0) + 1(-2 - 0) = 0 + 4 - 2 = 2
P⁻¹ = (1/det P)·Adj P = (1/2)[0  -2  1; -2  -2  0; -2  -2  -1]
P⁻¹A = (1/2)[0  -2  1; -4  -4  0; -6  -6  -3]
P⁻¹AP = (1/2)[0  -2  1; -4  -4  0; -6  -6  -3][1  -2  1; -1  1  -1; 0  2  -2] = (1/2)[2  0  0; 0  4  0; 0  0  6] = [1  0  0; 0  2  0; 0  0  3] = D (say)
P⁻¹AP = D ⇒ A = PDP⁻¹, so A⁴ = PD⁴P⁻¹
A⁴ = (1/2)[1  -2  1; -1  1  -1; 0  2  -2][1  0  0; 0  16  0; 0  0  81][0  -2  1; -2  -2  0; -2  -2  -1] = [-49  -50  -40; 65  66  40; 130  130  81]
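The A⁴ = PD⁴P⁻¹ computation of W.E.19 can be checked in a few lines. This sketch uses the eigenvector columns found above (chosen up to scale) and compares against direct repeated multiplication:

```python
import numpy as np

# Matrix from W.E.19.
A = np.array([[1.0, 0.0, -1.0],
              [1.0, 2.0, 1.0],
              [2.0, 2.0, 3.0]])

# Eigenvectors for lambda = 1, 2, 3 as columns.
P = np.array([[1.0, -2.0, 1.0],
              [-1.0, 1.0, -1.0],
              [0.0, 2.0, -2.0]])

D4 = np.diag([1.0, 2.0, 3.0]) ** 4      # D^4 = diag(1, 16, 81)
A4 = P @ D4 @ np.linalg.inv(P)
print(np.round(A4))                      # agrees with A multiplied by itself four times
```

Powers of a diagonalizable matrix reduce to powers of its eigenvalues, which is the whole point of the transformation.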
We observed that if v is an eigenvector of a linear operator T, then T maps the span of v into itself. Subspaces that are mapped into themselves are of great importance in the study of linear operators.
12.19.1 T-invariant Subspace: Let T be a linear operator on a vector space V. A subspace W of V is said to be a T-invariant subspace of V if T(W) ⊆ W, that is, if T(v) ∈ W for all v ∈ W.
W.E. 20: Suppose that T is a linear operator on a vector space V. Then the following subspaces are T-invariant: (i) {O}, (ii) V, (iii) R(T), (iv) N(T), (v) E_λ for any eigenvalue λ of T.
For instance, let u ∈ R(T). Then u ∈ V, so T(u) ∈ R(T), since T is a linear operator on V. Hence R(T) is T-invariant.
Solution: Given T : R³ → R³ is defined by T(a, b, c) = (a + b, b + c, 0).
But v ( x , 0, 0) W1
Let v ( x, 0, 0) W2
12.20.1 T-Cyclic Subspace: Let T be a linear operator on a vector space V and let v ∈ V. The subspace W = Span{v, T(v), T²(v), ...} is called the T-cyclic subspace of V generated by v.
We can easily prove that W is a T-invariant subspace of V. In fact, W is the smallest T-invariant subspace of V containing the nonzero vector v.
W.E. 22: Let T be the differentiation operator on P2(R). Find the T-cyclic subspace generated by x².
Solution: T(x²) = (x²)′ = 2x, and T²(x²) = 2.
T-cyclic subspace generated by x² = Span{x², 2x, 2} = P2(R)
W.E. 23: Let T be the linear operator on R³ defined by T(a, b, c) = (-b + c, a + c, 3c). Find the T-cyclic subspace generated by e1 = (1, 0, 0).
Solution: T(e1) = T(1, 0, 0) = (0, 1, 0) = e2, and T²(e1) = T(e2) = (-1, 0, 0) = -e1.
Thus the T-cyclic subspace generated by e1 is Span{e1, T(e1), T²(e1), ...} = Span{e1, e2} = {(s, t, 0) : s, t ∈ R}.
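The dimension of a T-cyclic subspace can be found mechanically: keep applying T and stop as soon as the next iterate falls in the span of the earlier ones. A minimal sketch, assuming the operator T(a, b, c) = (-b + c, a + c, 3c) as reconstructed in the worked example (the signs are an assumption recovered from T(e1) = e2):

```python
import numpy as np

def T(v):
    a, b, c = v
    return np.array([-b + c, a + c, 3.0 * c])  # operator from W.E.23 (signs assumed)

# Build v, T(v), T^2(v), ... until the next iterate adds no new direction.
v = np.array([1.0, 0.0, 0.0])                  # e1
iterates = [v]
while True:
    nxt = T(iterates[-1])
    if np.linalg.matrix_rank(np.vstack(iterates + [nxt])) == len(iterates):
        break                                  # nxt already lies in the span: subspace found
    iterates.append(nxt)

print(len(iterates))   # dimension of the T-cyclic subspace generated by e1
```

Here the loop stops after two iterates, matching the hand computation Span{e1, e2}.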
12.20.2 Theorem: Let T be a linear operator on a finite dimensional vector space V, and let W be a T-invariant subspace of V. Then the characteristic polynomial of T_W divides the characteristic polynomial of T.
Proof: Choose an ordered basis C = {v1, v2, ..., vk} for W, and extend it to an ordered basis B = {v1, v2, ..., vk, v(k+1), ..., vn} for V. Let A = [T]_B and B1 = [T_W]_C. Then A can be written as
A = [B1  B2; O  B3].
Let g(t) be the characteristic polynomial of T_W and f(t) that of T. Then
f(t) = det(A - t·In) = det[B1 - t·Ik   B2; O   B3 - t·I(n-k)] = g(t)·det(B3 - t·I(n-k))
Thus g(t) divides f(t).
For example, W = {(t, s, 0, 0) : t, s ∈ R} is a subspace of R⁴.
Consider C = {e1, e2}, which is an ordered basis of W. Extend this to the standard ordered basis B of R⁴. Then
B1 = [T_W]_C = [1  1; 0  1]  and  A = [T]_B = [1  1  2  -1; 0  1  0  1; 0  0  2  -1; 0  0  1  1]
Then f(t) = det(A - tI) = det[1-t  1  2  -1; 0  1-t  0  1; 0  0  2-t  -1; 0  0  1  1-t]
= (1 - t)(1 - t)·det[2-t  -1; 1  1-t]
= g(t)·[(2 - t)(1 - t) + 1]
= g(t)(t² - 3t + 3),
where g(t) = (1 - t)² = det(B1 - tI) is the characteristic polynomial of T_W.
Thus g(t) divides f(t).
Theorem: Let T be a linear operator on a finite dimensional vector space V, and let W denote the T-cyclic subspace of V generated by a nonzero vector v ∈ V. Let k = dim(W). Then
i) {v, T(v), T²(v), ..., T^(k-1)(v)} is a basis of W.
Proof: Let j be the largest positive integer for which B = {v, T(v), T²(v), ..., T^(j-1)(v)} is linearly independent. Such a j must exist because V is finite dimensional. Let Z = Span(B). Then B is a basis for Z. Furthermore, T^j(v) ∈ Z. We use this fact to show that Z is a T-invariant subspace of V. Let w ∈ Z. Since w is a linear combination of the vectors of B, there exist scalars b0, b1, ..., b(j-1) such
that w = b0·v + b1·T(v) + ... + b(j-1)·T^(j-1)(v), and hence T(w) = b0·T(v) + ... + b(j-1)·T^j(v) is a linear combination of vectors in Z and
hence belongs to Z. So Z is T-invariant. Furthermore, v ∈ Z, and W is the smallest T-invariant
subspace of V that contains v, so that W ⊆ Z. Clearly Z ⊆ W, and so we conclude that Z = W. It
follows that B is a basis for W, and therefore dim(W) = j. Thus j = k.
Hence {v, T(v), ..., T^(k-1)(v)} is a basis of W.
12.21 Cayley-Hamilton Theorem:
Every square matrix satisfies its characteristic equation.
Or:
Every matrix is a zero of its characteristic polynomial.
Proof: Consider an n-square matrix A over a field F (for a linear operator T, take A = [T]_B relative to an ordered basis B). Let its characteristic polynomial be
f(t) = det(A - t·In) = a0 + a1·t + a2·t² + ... + an·tⁿ .......... (1)
The elements of the matrix A - t·In are polynomials at most of the first degree in t, with the
result that the elements of the matrix Adj(A - t·In) are ordinary polynomials in t of degree n - 1 or
less. As we know, the elements of the matrix Adj(A - t·In) are the cofactors of the elements of
the matrix A - t·In. It implies that the matrix Adj(A - t·In) can be written as
Adj(A - t·In) = B0 + B1·t + B2·t² + ... + B(n-1)·t^(n-1) .......... (2)
where B0, B1, ..., B(n-1) are n × n matrices over F.
Also we have (A - t·In)·Adj(A - t·In) = det(A - t·In)·In = f(t)·In.
Substituting (2) and comparing coefficients of like powers of t:
A·B0 = a0·In,  A·B1 - B0 = a1·In,  ...,  A·B(n-1) - B(n-2) = a(n-1)·In,  -B(n-1) = an·In.
Pre-multiplying these successively by In, A, A², ..., Aⁿ and adding, the left-hand side telescopes to O, so
O = a0·In + a1·A + a2·A² + ... + an·Aⁿ = f(A).
Thus f(A) = O: A satisfies its characteristic equation.
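The theorem is easy to confirm numerically: compute the characteristic polynomial's coefficients and evaluate the polynomial at the matrix itself by Horner's scheme. A sketch using the matrix of W.E.19 as a sample (any square matrix works):

```python
import numpy as np

A = np.array([[1.0, 0.0, -1.0],
              [1.0, 2.0, 1.0],
              [2.0, 2.0, 3.0]])   # sample matrix (the one from W.E.19)

# np.poly returns the monic characteristic polynomial det(tI - A),
# highest power first: here [1, -6, 11, -6], i.e. t^3 - 6t^2 + 11t - 6.
coeffs = np.poly(A)

# Horner evaluation of f at the matrix A.
f_A = np.zeros_like(A)
for c in coeffs:
    f_A = f_A @ A + c * np.eye(3)

print(np.round(f_A, 8))   # the zero matrix, as Cayley-Hamilton predicts
```

The sign convention (det(tI - A) versus det(A - tI)) only changes f by a factor (-1)ⁿ, so f(A) = O either way.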
Solution: We have
and i.e.
So
or
correspondingly
nomial of
So
is satisfied by A. i.e. .
Proof: As the elements of A - tI are at most of the first degree in t, the elements of Adj(A - tI)
can be written as a matrix polynomial in t given by
Adj(A - tI) = B0 + B1·t + ... + B(n-1)·t^(n-1),
where B0, B1, ..., B(n-1) are matrices of the type n × n.
Now (A - tI)·Adj(A - tI) = f(t)·I.
Pre-multiplying the resulting coefficient equations successively by I, A, A², ..., Aⁿ and adding, we get
f(A) = O .......... (1)
Corollary 2: If m is a positive integer such that m ≥ n, then multiplying the result (1) by A^(m-n) we get a relation expressing A^m in terms of lower powers of A,
showing that any positive integral power of A is linearly expressible in terms of powers of
lower order.
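A standard use of the theorem (the pattern behind the inverse computations in the worked examples below) is to solve f(A) = O for A⁻¹ when the constant term of f is nonzero. A sketch with the W.E.19 matrix, whose characteristic polynomial is t³ - 6t² + 11t - 6:

```python
import numpy as np

A = np.array([[1.0, 0.0, -1.0],
              [1.0, 2.0, 1.0],
              [2.0, 2.0, 3.0]])   # sample matrix with char. poly t^3 - 6t^2 + 11t - 6

# Cayley-Hamilton: A^3 - 6A^2 + 11A - 6I = O.  Multiplying through by A^-1:
#   A^-1 = (A^2 - 6A + 11I) / 6
A_inv = (A @ A - 6.0 * A + 11.0 * np.eye(3)) / 6.0
print(np.round(A_inv @ A, 8))    # identity matrix
```

This replaces an explicit adjugate computation by two matrix multiplications, which is exactly what the "find the inverse by Cayley-Hamilton" exercises ask for.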
W.E. 25: Worked Out Examples:
the standard basis. Show that T satisfies its characteristic equation and that its matrix satisfies its
characteristic equation.
So
so
W.E. 26 :
i.e.
and
So
So
or
So
So
So
W.E. 28:
.
By Cayley - Hamilton theorem every matrix satisfies its own characteristic equation.
So = O (null matrix)
.......... (1)
since
Aliter:
So ............. (2)
Multiply (2) by
............. (3)
............. (4)
..............(5)
Now
using (5)
using (4)
using (3)
using (2)
W.E. 29 :
Using Cayley - Hamilton theorem find the inverse and of the matrix
trace of
det
Characteristic equation is
trace of
........... (1)
By the Cayley-Hamilton theorem, every matrix satisfies its characteristic equation.
So ....... (2)
By (2)
multiplying by A
So
W.E. 30:
given by
1)
2)
3)
4)
5)
6)
1)
2)
3)
4)
gives
i.e.
Hence
By (1) .
So
12.22 Summary:
In this lesson we discussed eigenvectors and eigenvalues of a linear operator and of a square matrix, properties of eigenvalues, diagonalizability of linear operators and matrices, the test for diagonalization with numerical problems, invariant subspaces, and the Cayley-Hamilton theorem for linear operators and matrices.
are .
3. Find the eigen values and the corresponding eigen vectors of the matrix
i)
ii)
Hamilton theorem.
Ans:
7) Find the characteristic equation of the matrix A and verify that it is satisfied by A and hence find
.
i)
Ans:
ii)
Ans:
Ans :
12.25 Exercises:
1. For each of the following matrices, test A for diagonalizability, and if A is diagonalizable, find an invertible matrix Q and a diagonal matrix D such that D = Q⁻¹AQ.
2. For each of the following linear operators T on a vector space V, test T for diagonalizability, and if T is diagonalizable, find a basis B for V such that [T]_B is a diagonal matrix.
Ans: T is diagonalizable
Ans : T is diagonalizable
6. For the matrix over the field C, Find the diagonal form and a diagonalizing
matrix Q.
Ans :
7. Let T be a linear operator on which is represented in the standard ordered basis by the matrix
Ans: 1, 2, 2
8) Let T be a linear operator on a finite dimensional vector space V, and let λ be an eigenvalue of T. Show that the eigenspace of λ, i.e. E_λ, is invariant under T.
9) For each of the following linear operators T on the vector space V, determine whether the given subspace W is a T-invariant subspace of V.
(i) and
Ans: T - invariant
(ii) and
Ans: T - invariant
(iii) and
10) Find the characteristic equation of the matrix and verify that it is satisfied by A
Ans:
11) Verify that the matrix satisfies its characteristic equation and compute .
Ans :
12) Find the characteristic equation of the matrix and show that it is satisfied by
Ans :
13) Find the characteristic roots of the matrix and verify the Cayley-Hamilton theorem for
this matrix. Find the inverse of the matrix A and also express as a
linear polynomial.
Ans :
Ans: - .
Ans :
61 93
Ans : B
5
62 94
- A. Mallikharjana Sarma
Rings and Linear Algebra 13.1 Inner Product Spaces
LESSON - 13
13.4 Inner product and inner product space - definitions - examples - basic
theorems
13.8 Summary
13.11 Exercises
13.3.1 Introduction:
In general a vector space is defined over an arbitrary field F. In this lesson we restrict the
field F to be the field of real numbers or complex numbers. In th first case the vector space is
called a real vector space and in the second case it is called a complex vector space. We study
real vector space in analytical geometry and vector analysis. There the concept of length and
orthogonality is disscussed.
In this lesson we introduce the concept of length and orthogonality of vectors by means of
an additional structure on the vector space known as an inner product.
We also have dot or scalar product of two vectors whose properties are discussed in
Before defining inner product and inner product spaces we shall state some important
properties of complex numbers.
13.3.2 Some Properties of Complex Numbers:
The modulus of the complex number z = x + iy is the non-negative real number √(x² + y²) and
is denoted by |z|. The conjugate of z = x + iy is z̄ = x - iy.
If z = z̄, then x + iy = x - iy, so y = 0, i.e. z is real.
v) (z̄)‾ = z   vi) |z̄| = |z|
For complex numbers z1, z2:
i) (z1 + z2)‾ = z̄1 + z̄2   ii) (z1 - z2)‾ = z̄1 - z̄2
iii) |z1·z2| = |z1|·|z2|   iv) (z1·z2)‾ = z̄1·z̄2
v) (z1/z2)‾ = z̄1/z̄2, provided z2 ≠ 0
i) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ (Linearity)
ii) ⟨au, v⟩ = a⟨u, v⟩ (Linearity)
(c) If F = C, then the inner product space V(F) is called a unitary space or complex inner
product space.
W.E. 1:
Worked Examples:
If u = (a1, a2, ..., an), v = (b1, b2, ..., bn) ∈ Vn(C), then show that ⟨u, v⟩ = a1·b̄1 + a2·b̄2 + ... + an·b̄n defines an inner product on Vn(C).
Solution: We will now show that all the postulates of an inner product hold for
⟨u, v⟩ = a1·b̄1 + a2·b̄2 + ... + an·b̄n .......... (1)
i) Linearity: Let a, b ∈ C and w = (c1, c2, ..., cn). Then
⟨au + bv, w⟩ = Σi (a·ai + b·bi)·c̄i = a·Σi ai·c̄i + b·Σi bi·c̄i = a⟨u, w⟩ + b⟨v, w⟩
Thus ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.
ii) Conjugate symmetry: From the definition of the product given in (1),
⟨v, u⟩ = b1·ā1 + b2·ā2 + ... + bn·ān
So (⟨v, u⟩)‾ = (b1·ā1 + b2·ā2 + ... + bn·ān)‾ = b̄1·a1 + b̄2·a2 + ... + b̄n·an = ⟨u, v⟩
So (⟨v, u⟩)‾ = ⟨u, v⟩.
iii) Non-negativity:
⟨u, u⟩ = a1·ā1 + a2·ā2 + ... + an·ān = |a1|² + |a2|² + ... + |an|² ≥ 0 .......... (2)
(as each ai is a complex number, |ai|² ≥ 0)
⟨u, u⟩ = 0 ⇒ |a1|² + |a2|² + ... + |an|² = 0
⇒ each |ai|² = 0, so each ai = 0, i.e. u = O.
Hence the product defined in (1) is an Inner product on Vn (C ) and with respect to this
inner product Vn (C ) is an inner product space.
Note: i) The inner product defined in (1) for u = (a1, a2, ..., an) and v = (b1, b2, ..., bn) is called the standard inner product on Vn(C).
ii) If u, v are two vectors in Vn(R), then the standard inner product of u, v is given by
⟨u, v⟩ = a1·b1 + ... + an·bn = u.v, which is the dot product of u and v, and the inner product ⟨u, v⟩
is then denoted by u.v.
iii) ⟨u, u⟩ = a1·ā1 + a2·ā2 + ... + an·ān = |a1|² + ... + |an|²
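The standard inner product on Cⁿ can be sketched in NumPy. Note the convention above conjugates the second argument, so the helper below conjugates v (NumPy's own `np.vdot` conjugates its first argument instead):

```python
import numpy as np

def ip(u, v):
    """Standard inner product on C^n: sum of a_i * conj(b_i) (second slot conjugated)."""
    return np.sum(np.asarray(u) * np.conj(v))

u = np.array([1 + 1j, 2 + 3j])
v = np.array([2 + 5j, 3 - 1j])
print(ip(u, v))          # -> (10+8j)
print(ip(u, u).real)     # ||u||^2 = 15, always a non-negative real number
```

Conjugate symmetry shows up as `ip(v, u) == np.conj(ip(u, v))`, matching postulate (ii) above.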
W.E. 2:
Let V be the vector space over C of all continuous complex valued functions defined on
[0, 1]. For f, g ∈ V define ⟨f, g⟩ = ∫₀¹ f(t)·ḡ(t) dt. Show that V is an inner product space.
Solution:
i) Linearity: ⟨af + bg, h⟩ = ∫₀¹ [a·f(t) + b·g(t)]·h̄(t) dt
= a·∫₀¹ f(t)·h̄(t) dt + b·∫₀¹ g(t)·h̄(t) dt
= a⟨f, h⟩ + b⟨g, h⟩
ii) Conjugate symmetry:
(⟨g, f⟩)‾ = (∫₀¹ g(t)·f̄(t) dt)‾ = ∫₀¹ ḡ(t)·f(t) dt = ∫₀¹ f(t)·ḡ(t) dt = ⟨f, g⟩
Thus (⟨g, f⟩)‾ = ⟨f, g⟩.
iii) Non-negativity:
⟨f, f⟩ = ∫₀¹ f(t)·f̄(t) dt = ∫₀¹ |f(t)|² dt ≥ 0
Also ⟨f, f⟩ = 0 ⇒ ∫₀¹ |f(t)|² dt = 0
⇒ |f(t)|² = 0 for all t ∈ [0, 1]
⇒ f = 0
As the required conditions are satisfied, V is an inner product space.
W.E. 3:
For f(x), g(x) ∈ P(R), the vector space of polynomials over the field R defined on [0, 1],
if ⟨f(x), g(x)⟩ = ∫₀¹ f′(t)·g(t) dt, then prove that it is not an inner product.
Solution: Take f(t) = t and g(t) = t². Then
⟨f, g⟩ = ∫₀¹ 1·(t²) dt = [t³/3]₀¹ = 1/3
⟨g, f⟩ = ∫₀¹ 2t·(t) dt = 2[t³/3]₀¹ = 2/3
Hence ⟨f, g⟩ ≠ (⟨g, f⟩)‾ (both are real), so conjugate symmetry fails and the given product is not an inner product.
i) Conjugate transpose: Let A be an m × n matrix over C. The conjugate transpose of A is the
n × m matrix denoted by A* and defined as follows: if A = [aij] of type m × n, then A* = [bij] of type n × m, where bij = āji, i.e.
A* = (Ā)ᵀ = (Āᵀ).
For example, let A = [i  1+2i; 2  3+4i]; then
A* = [-i  2; 1-2i  3-4i]
ii) Trace of a matrix: Let A be a square matrix of order n. The sum of all the elements of A lying
along the principal diagonal is called the trace of A. We write the trace of A as tr(A).
Thus trace of A = Σi=1..n aii
Ex: If A = [1  3  4; 2  -1  3; 2  1  2], then tr A = (1) + (-1) + 2 = 2
iv) tr(AB) = tr(BA)
v) tr A = tr Aᵀ
W.E. 4: On V = M(n×n)(C) define ⟨A, B⟩ = tr(B*A) (the Frobenius inner product). Show that this is an inner product on V.
Solution: i) Linearity:
⟨aA + bB, C⟩ = tr(C*(aA + bB)) = tr(C*(aA)) + tr(C*(bB))
= tr(a(C*A)) + tr(b(C*B))
= a·tr(C*A) + b·tr(C*B)
= a⟨A, C⟩ + b⟨B, C⟩
Hence the condition of linearity holds good.
ii) Non-negativity:
A*A = [cij] of type n × n, where cij = Σk=1..n bik·akj = Σk=1..n āki·akj
⟨A, A⟩ = tr(A*A) = c11 + c22 + ... + cnn = Σi=1..n cii = Σi=1..n Σk=1..n āki·aki
= Σi=1..n Σk=1..n |aki|² .......... (1)
Since each |aki|² ≥ 0,
trace of (A*A) ≥ 0, i.e. ⟨A, A⟩ ≥ 0 .......... (2)
Moreover ⟨A, A⟩ = 0 ⇒ each aki = 0 ⇒ A = O.
iii) Conjugate symmetry: Let A, B ∈ V.
(⟨A, B⟩)‾ = (tr(B*A))‾ = tr((B*A)‾) = tr(((B*A)*)ᵀ) = tr((A*B)ᵀ) = tr(A*B) = ⟨B, A⟩
(using M̄ = (M*)ᵀ, (B*A)* = A*B and tr(Mᵀ) = tr(M))
So (⟨A, B⟩)‾ = ⟨B, A⟩.
Hence ⟨A, B⟩ = tr(B*A) is an inner product on M(n×n)(C).
W.E. 5: Provide the reason why the product ⟨(a, b), (c, d)⟩ = ac - bd on R² is not an inner product
on the given vector space.
Solution: Take u = (3, 4). Then ⟨u, u⟩ = (3)(3) - (4)(4) = 9 - 16 = -7 < 0.
Hence the condition of non-negativity is not satisfied. Hence ⟨(a, b), (c, d)⟩ = ac - bd is
not an inner product.
W.E. 6:
For f(t) = t, g(t) = t² with the product ⟨f, g⟩ = ∫₀¹ f′(t)·g(t) dt:
⟨f, g⟩ = ∫₀¹ 1·t² dt = [t³/3]₀¹ = 1/3 - 0 = 1/3
⟨g, f⟩ = ∫₀¹ 2t·t dt = 2∫₀¹ t² dt = 2[t³/3]₀¹ = 2/3
Hence ⟨f, g⟩ ≠ (⟨g, f⟩)‾.
Hence the conjugate symmetry does not hold. So the given product is not an inner
product.
13.4.6 Theorem: Let V(F) be an inner product space. Then for u, v, w ∈ V and a, b, c ∈ F:
i) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
ii) ⟨u, cv⟩ = c̄⟨u, v⟩
iii) ⟨u, O⟩ = ⟨O, u⟩ = 0
iv) ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩
v) ⟨u, av + bw⟩ = ā⟨u, v⟩ + b̄⟨u, w⟩
Proof:
i) To show ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩:
By definition ⟨u, v + w⟩ = (⟨v + w, u⟩)‾
= (⟨v, u⟩ + ⟨w, u⟩)‾
= (⟨v, u⟩)‾ + (⟨w, u⟩)‾
= ⟨u, v⟩ + ⟨u, w⟩
Thus ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩.
ii) To show ⟨u, cv⟩ = c̄⟨u, v⟩:
⟨u, cv⟩ = (⟨cv, u⟩)‾ = (c⟨v, u⟩)‾ (by linearity)
= c̄·(⟨v, u⟩)‾
= c̄⟨u, v⟩
Thus ⟨u, cv⟩ = c̄⟨u, v⟩.
iii) To show ⟨u, O⟩ = ⟨O, u⟩ = 0:
⟨u, O⟩ = ⟨u, 0·O⟩ = 0̄·⟨u, O⟩ = 0 .......... (1)
⟨O, u⟩ = ⟨0·O, u⟩ = 0·⟨O, u⟩ = 0 .......... (2)
So ⟨u, O⟩ = ⟨O, u⟩ = 0.
iv) ⟨au + bv, w⟩ = ⟨au, w⟩ + ⟨bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩ by linearity.
Thus ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.
v) To show ⟨u, av + bw⟩ = ā⟨u, v⟩ + b̄⟨u, w⟩:
⟨u, av + bw⟩ = (⟨av + bw, u⟩)‾ = (a⟨v, u⟩ + b⟨w, u⟩)‾
= ā·(⟨v, u⟩)‾ + b̄·(⟨w, u⟩)‾
= ā⟨u, v⟩ + b̄⟨u, w⟩
Thus ⟨u, av + bw⟩ = ā⟨u, v⟩ + b̄⟨u, w⟩.
Theorem: If ⟨u, v⟩ = ⟨u, w⟩ for all u ∈ V, then v = w.
Proof: ⟨u, v⟩ - ⟨u, w⟩ = 0 for all u ∈ V
⇒ ⟨u, v - w⟩ = 0 for all u ∈ V
⇒ ⟨v - w, v - w⟩ = 0, choosing u = v - w
⇒ v - w = O
⇒ v = w
Thus if ⟨u, v⟩ = ⟨u, w⟩ for all u ∈ V, then v = w.
Remark: From (ii) and (v) of the above theorem, the reader should observe that the inner product
is conjugate-linear in the second argument. In particular:
i) ⟨-u, -v⟩ = (-1)(-1)⟨u, v⟩ = ⟨u, v⟩
ii) ⟨u, av + bw⟩ = ā⟨u, v⟩ + b̄⟨u, w⟩
iii) ⟨u + v, u + v⟩ = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
More generally, ⟨a1·u1 + a2·u2 + ... + an·un, v⟩ = a1⟨u1, v⟩ + a2⟨u2, v⟩ + ... + an⟨un, v⟩,
i.e. ⟨Σi=1..n ai·ui, v⟩ = Σi=1..n ai⟨ui, v⟩
Also ⟨v, Σi=1..n ai·ui⟩ = Σi=1..n āi⟨v, ui⟩
Consider the vector space V3(R) with the standard inner product defined on it.
Now we know that in three dimensional Euclidean space √(a1² + a2² + a3²) is the length of the
vector u = (a1, a2, a3). Motivated by this fact, we make the following definition.
13.5.1 Definition: Let V be an inner product space. If u ∈ V, then the norm or length of the
vector u, written as ‖u‖, is defined as the positive square root of ⟨u, u⟩, i.e. ‖u‖ = √⟨u, u⟩.
Ex: In R², u = (a, b) gives ‖u‖ = √(a² + b²) = √⟨u, u⟩; in Vn(C),
‖u‖² = Σi=1..n |ai|² = ⟨u, u⟩.
Definition: Let V be an inner product space. If u ∈ V is such that ‖u‖ = 1, then u is called a unit
vector. Thus in an inner product space a vector is called a unit vector if its length is one unit.
13.5.3 Theorem: Let V(F) be an inner product space and u a nonzero vector in V. Then show
that (1/‖u‖)u is a unit vector.
Proof: ‖(1/‖u‖)u‖² = ⟨(1/‖u‖)u, (1/‖u‖)u⟩ = (1/‖u‖)·(1/‖u‖)·⟨u, u⟩ = ‖u‖²/‖u‖² = 1
So ‖(1/‖u‖)u‖ = 1, i.e. u/‖u‖ is a unit vector.
Note: If u is a nonzero vector in an inner product space V(F), then the unit vector u/‖u‖ is called
the unit vector corresponding to u. This process of getting a unit vector along u is called
normalizing u.
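Normalizing is a one-liner in practice: divide the vector by its norm and the result has length 1. A quick sketch:

```python
import numpy as np

u = np.array([0.0, 3.0, 4.0])
unit = u / np.linalg.norm(u)         # normalizing u
print(unit, np.linalg.norm(unit))    # (0, 0.6, 0.8) with norm 1
```

For the zero vector the division is undefined, which is why the theorem above requires u to be nonzero.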
Rings and Linear Algebra 13.15 Inner Product Spaces
Note ii) In the inner product space R², i = (1, 0), j = (0, 1) are unit vectors, since the length of
each is 1.
iii) In the inner product space R³, i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1) are unit vectors, since
the length of each is 1.
iv) In the inner product space R³, if u = (a1, a2, a3), then ‖u‖ = √(a1² + a2² + a3²) and the unit
vector corresponding to u is
(1/‖u‖)u = (1/√(a1² + a2² + a3²))·(a1, a2, a3)
= (a1/√(a1² + a2² + a3²), a2/√(a1² + a2² + a3²), a3/√(a1² + a2² + a3²))
W.E.7:
If u = (1 + i, 2 + 3i), v = (2 + 5i, 3 - i) in V2(C) with the standard inner product, then find ⟨u, v⟩, ‖u‖, ‖v‖.
Solution: ⟨u, v⟩ = (1 + i)(2 + 5i)‾ + (2 + 3i)(3 - i)‾
= (1 + i)(2 - 5i) + (2 + 3i)(3 + i)
= (2 - 3i + 5) + (6 + 11i - 3)
So ⟨u, v⟩ = (7 - 3i) + (3 + 11i) = 10 + 8i
‖u‖² = ⟨u, u⟩ = ⟨(1 + i, 2 + 3i), (1 + i, 2 + 3i)⟩
= (1 + i)(1 - i) + (2 + 3i)(2 - 3i)
= (1 + 1) + (4 + 9) = 15
So ‖u‖ = √15
‖v‖² = ⟨v, v⟩ = (2 + 5i)(2 - 5i) + (3 - i)(3 + i)
= (4 + 25) + (9 + 1) = 39
So ‖v‖ = √39
W.E.8:
Find the unit vector corresponding to u = (2 - i, 3 + 2i, 2 + √3·i) in V3(C).
Solution: ‖u‖² = ⟨u, u⟩ = (4 + 1) + (9 + 4) + (4 + 3) = 25
Hence ‖u‖ = √25 = 5
Hence the unit vector corresponding to u is (1/‖u‖)u
= (1/5)(2 - i, 3 + 2i, 2 + √3·i)
W.E. 9:
If u = (0, 3, 4) and v = (1/√2, 0, 1/√2) are two vectors in a real inner product space, then find
⟨u, v⟩.
Solution: ⟨u, v⟩ = ⟨(0, 3, 4), (1/√2, 0, 1/√2)⟩
= (1/√2)·⟨(0, 3, 4), (1, 0, 1)⟩
= (1/√2)·[0(1) + 3(0) + 4(1)] = 4/√2 = 2√2
13.5.5 Theorem:
Let V be an inner product space over a field F. Then show that
‖cu‖ = |c|·‖u‖ for all u ∈ V, c ∈ F.
Proof: ‖cu‖² = ⟨cu, cu⟩ = c·c̄·⟨u, u⟩ = |c|²·‖u‖²
Hence ‖cu‖ = |c|·‖u‖.
13.5.6 Theorem:
Let V be an inner product space over a field F. Then show that ‖u‖ = 0 if and only if u = O.
Proof: ‖u‖² = ⟨u, u⟩, and by definition ⟨u, u⟩ ≥ 0 with
⟨u, u⟩ = 0 if and only if u = O.
So ‖u‖² = 0 if and only if u = O,
i.e. ‖u‖ = 0 if and only if u = O.
13.5.7 Theorem:
CAUCHY - SCHWARZ'S INEQUALITY: If u, v are any two vectors in an inner product space
V(F), then |⟨u, v⟩| ≤ ‖u‖·‖v‖.
Proof: If v = O, then ⟨u, v⟩ = ⟨u, O⟩ = 0 and ‖v‖ = 0.
So |⟨u, v⟩| = 0 = ‖u‖·‖v‖ .......... (1)
Now let v ≠ O. For any scalar c,
0 ≤ ‖u - cv‖² .......... (3)
Now ‖u - cv‖² = ⟨u - cv, u - cv⟩
= ⟨u, u - cv⟩ - c⟨v, u - cv⟩
= ⟨u, u⟩ - c̄⟨u, v⟩ - c⟨v, u⟩ + c·c̄⟨v, v⟩ .......... (4)
In particular set c = ⟨u, v⟩/⟨v, v⟩; then c̄ = (⟨u, v⟩)‾/⟨v, v⟩ = ⟨v, u⟩/⟨v, v⟩, since ⟨v, v⟩ is a real number.
Then ‖u - cv‖² = ⟨u, u⟩ - (⟨v, u⟩/⟨v, v⟩)⟨u, v⟩ - (⟨u, v⟩/⟨v, v⟩)⟨v, u⟩ + (⟨u, v⟩⟨v, u⟩/⟨v, v⟩²)⟨v, v⟩
So ‖u - cv‖² = ⟨u, u⟩ - ⟨u, v⟩·⟨v, u⟩/⟨v, v⟩ .......... (5)
i.e. 0 ≤ ‖u - cv‖², so using (5) we get
0 ≤ ⟨u, u⟩ - ⟨u, v⟩·(⟨u, v⟩)‾/⟨v, v⟩
0 ≤ ‖u‖² - |⟨u, v⟩|²/‖v‖²
0 ≤ ‖u‖²·‖v‖² - |⟨u, v⟩|²
|⟨u, v⟩|² ≤ ‖u‖²·‖v‖²
|⟨u, v⟩| ≤ ‖u‖·‖v‖
13.5.8 Corollary: For complex numbers a1, ..., an, b1, ..., bn,
|Σi=1..n ai·b̄i| ≤ (Σi=1..n |ai|²)^(1/2) · (Σi=1..n |bi|²)^(1/2)
Proof: Let u = (a1, a2, ..., an) and v = (b1, b2, ..., bn) be any two vectors in the vector space Vn(C)
with the standard inner product defined on it. Then
⟨u, v⟩ = a1·b̄1 + a2·b̄2 + ... + an·b̄n
‖u‖² = ⟨u, u⟩ = a1·ā1 + a2·ā2 + ... + an·ān = |a1|² + |a2|² + ... + |an|²
Similarly ‖v‖² = ⟨v, v⟩ = |b1|² + |b2|² + ... + |bn|²
By Cauchy-Schwarz, |⟨u, v⟩| ≤ ‖u‖·‖v‖, which gives the stated inequality.
In particular, for real numbers a1, ..., an, b1, ..., bn,
|a1·b1 + a2·b2 + ... + an·bn| ≤ √(a1² + a2² + ... + an²) · √(b1² + b2² + ... + bn²)
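The coordinate form of the inequality is easy to spot-check for random complex vectors; the left side never exceeds the right:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=5) + 1j * rng.normal(size=5)
v = rng.normal(size=5) + 1j * rng.normal(size=5)

lhs = abs(np.sum(u * np.conj(v)))                               # |<u, v>|
rhs = np.sqrt(np.sum(abs(u)**2)) * np.sqrt(np.sum(abs(v)**2))   # ||u|| ||v||
print(lhs <= rhs + 1e-12)   # True for every choice of u, v
```

Equality occurs exactly when u and v are linearly dependent, as proved in W.E. 11 below... (see the equality-case worked example in this lesson).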
W.E. 10: Using the Cauchy-Schwarz inequality, prove that the absolute value of the cosine of
an angle cannot be greater than 1.
Solution: Let u = (a1, a2, a3) and v = (b1, b2, b3) be nonzero vectors in R³ (u, v ≠ O = (0, 0, 0)).
If θ is the angle between them, then
cos θ = (a1·b1 + a2·b2 + a3·b3)/(√(a1² + a2² + a3²)·√(b1² + b2² + b3²))
= ⟨u, v⟩/(‖u‖·‖v‖)
|cos θ| = |⟨u, v⟩|/(‖u‖·‖v‖) ≤ ‖u‖·‖v‖/(‖u‖·‖v‖) = 1
So |cos θ| ≤ 1.
Hence the absolute value of the cosine of an angle cannot be greater than 1.
13.5.9 Triangle inequality: If u, v are any two vectors in an inner product space, then ‖u + v‖ ≤ ‖u‖ + ‖v‖.
Proof: ‖u + v‖² = ⟨u + v, u + v⟩
= ⟨u, u + v⟩ + ⟨v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² + ⟨u, v⟩ + (⟨u, v⟩)‾ + ‖v‖², since ⟨v, u⟩ = (⟨u, v⟩)‾
= ‖u‖² + 2 Re⟨u, v⟩ + ‖v‖²
≤ ‖u‖² + 2|⟨u, v⟩| + ‖v‖², since Re z ≤ |z|
≤ ‖u‖² + 2‖u‖·‖v‖ + ‖v‖², by Cauchy-Schwarz
= (‖u‖ + ‖v‖)²
Hence ‖u + v‖ ≤ ‖u‖ + ‖v‖.
Geometrical interpretation of triangular inequality:
Consider u , v to be the vectors in an Inner product space V3 ( R ) with standard inner prod-
uct defined on it. Let the vectors u , v be represented by the sides AB, BC of triangle ABC. Then
W.E. 11:
Prove that if V is an inner product space, then |⟨u, v⟩| = ‖u‖·‖v‖ if and only if one of the
vectors u or v is a multiple of the other (or u, v are linearly dependent).
Solution:
Case i) Let |⟨u, v⟩| = ‖u‖·‖v‖.
Let v ≠ O and let c = ⟨u, v⟩/‖v‖².
Let w = u - cv. Then
⟨w, w⟩ = ⟨u - cv, u - cv⟩
= ‖u‖² - ((⟨u, v⟩)‾/‖v‖²)·⟨u, v⟩ - (⟨u, v⟩/‖v‖²)·⟨v, u⟩ + (⟨u, v⟩·(⟨u, v⟩)‾/‖v‖⁴)·‖v‖²
= ‖u‖² - |⟨u, v⟩|²/‖v‖² - |⟨u, v⟩|²/‖v‖² + |⟨u, v⟩|²/‖v‖²
= ‖u‖² - |⟨u, v⟩|²/‖v‖²
= ‖u‖² - ‖u‖²·‖v‖²/‖v‖², since |⟨u, v⟩| = ‖u‖·‖v‖
= 0
Hence ⟨w, w⟩ = 0 ⇒ w = O
⇒ u - cv = O ⇒ u = cv, i.e.
u is a scalar multiple of v. Similarly, when u ≠ O we can prove v is a scalar multiple of u.
i.e. u, v are linearly dependent.
Converse: If one of the vectors u, v is the zero vector, then they are linearly dependent, i.e. one
can be expressed as a scalar multiple of the other, and
|⟨u, v⟩| = 0 and ‖u‖·‖v‖ = 0
So |⟨u, v⟩| = ‖u‖·‖v‖.
Let us now suppose that both u, v are nonzero vectors and they are linearly dependent.
So one is a scalar multiple of the other, say u = cv. Then
⟨u, v⟩ = ⟨cv, v⟩ = c⟨v, v⟩ = c‖v‖²
So |⟨u, v⟩| = |c|·‖v‖² .......... (1)
Also ‖u‖ = ‖cv‖ = |c|·‖v‖
Hence ‖u‖·‖v‖ = |c|·‖v‖² .......... (2)
From (1) and (2), |⟨u, v⟩| = ‖u‖·‖v‖.
Hence from the above two cases the theorem follows.
13.5.9 Theorem: If u, v are any two vectors in an inner product space, then |‖u‖ - ‖v‖| ≤ ‖u - v‖.
Proof: u = (u - v) + v, so ‖u‖ = ‖(u - v) + v‖ ≤ ‖u - v‖ + ‖v‖ by the triangle inequality.
So ‖u‖ - ‖v‖ ≤ ‖u - v‖ .......... (1)
Similarly ‖v‖ - ‖u‖ ≤ ‖v - u‖ = ‖u - v‖ .......... (2)
From (1) and (2), |‖u‖ - ‖v‖| ≤ ‖u - v‖.
Parallelogram Law: If u, v are any two vectors in an inner product space V(F), then show that
‖u + v‖² + ‖u - v‖² = 2‖u‖² + 2‖v‖²
Proof: ‖u + v‖² = ⟨u + v, u + v⟩, by definition of norm
= ⟨u, u + v⟩ + ⟨v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
i.e. ‖u + v‖² = ‖u‖² + ⟨u, v⟩ + ⟨v, u⟩ + ‖v‖² .......... (1)
Also ‖u - v‖² = ⟨u - v, u - v⟩
= ⟨u, u - v⟩ - ⟨v, u - v⟩
= ⟨u, u⟩ - ⟨u, v⟩ - ⟨v, u⟩ + ⟨v, v⟩
So ‖u - v‖² = ‖u‖² - ⟨u, v⟩ - ⟨v, u⟩ + ‖v‖² .......... (2)
Adding (1) and (2):
‖u + v‖² + ‖u - v‖² = 2‖u‖² + 2‖v‖²
Let u and v be two vectors in the vector space V3(R) with standard inner product defined
on it. Suppose the vector u is represented by the side AB, and the vector v is represented by the
side BC of a parallelogram ABCD. Then the vectors u + v and u - v represent the diagonals AC and
DB of the parallelogram.
(Figure: parallelogram ABCD with AB = u and BC = v.)
The law then reads AC² + DB² = AB² + BC² + CD² + DA²:
the sum of the squares on the diagonals of a parallelogram is equal to the sum of the
squares on the four sides.
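The parallelogram law is purely an identity of the norm, so it can be confirmed for arbitrary vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.normal(size=3), rng.normal(size=3)

lhs = np.linalg.norm(u + v)**2 + np.linalg.norm(u - v)**2
rhs = 2 * np.linalg.norm(u)**2 + 2 * np.linalg.norm(v)**2
print(np.isclose(lhs, rhs))   # True: the parallelogram law holds
```

Conversely, a norm that satisfies this identity always comes from an inner product (a fact not proved in this lesson), so the identity characterizes inner product spaces among normed spaces.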
13.5.11 Theorem: For any vectors u, v in an inner product space, ‖u + v‖² ≤ (‖u‖ + ‖v‖)².
Proof: ‖u + v‖² = ⟨u + v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² + ⟨u, v⟩ + (⟨u, v⟩)‾ + ‖v‖²
= ‖u‖² + 2 Re⟨u, v⟩ + ‖v‖²
We know Re z ≤ |z|,
so Re⟨u, v⟩ ≤ |⟨u, v⟩| ≤ ‖u‖·‖v‖ by Cauchy-Schwarz.
Hence ‖u + v‖² ≤ ‖u‖² + 2‖u‖·‖v‖ + ‖v‖² = (‖u‖ + ‖v‖)².
For example consider the inner product space V3 ( R ) with standard inner product defined
on it.
18 3 2
u v 2 3 2 4 2 ............ (1)
u v 4 0 4 8 2 2 ....... (2)
13.5.12 Theorem: If u, v are vectors in an inner product space, then
Re⟨u, v⟩ = (1/4)‖u + v‖² - (1/4)‖u - v‖², and if F = R, then ⟨u, v⟩ = (1/4)‖u + v‖² - (1/4)‖u - v‖².
Proof: We have ‖u + v‖² = ⟨u + v, u + v⟩
= ⟨u, u + v⟩ + ⟨v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² + ⟨u, v⟩ + (⟨u, v⟩)‾ + ‖v‖²
= ‖u‖² + 2 Re⟨u, v⟩ + ‖v‖² .......... (1)
Also ‖u - v‖² = ⟨u - v, u - v⟩
= ⟨u, u - v⟩ - ⟨v, u - v⟩
= ⟨u, u⟩ - ⟨u, v⟩ - ⟨v, u⟩ + ⟨v, v⟩
So ‖u - v‖² = ‖u‖² - 2 Re⟨u, v⟩ + ‖v‖² .......... (2)
Subtracting (2) from (1):
‖u + v‖² - ‖u - v‖² = 4 Re⟨u, v⟩
So Re⟨u, v⟩ = (1/4)‖u + v‖² - (1/4)‖u - v‖² .......... (3)
If F = R, then Re⟨u, v⟩ = ⟨u, v⟩.
So (3) becomes
⟨u, v⟩ = (1/4)‖u + v‖² - (1/4)‖u - v‖²
13.5.13 Theorem:
If u and v are vectors in a unitary space, then
4⟨u, v⟩ = ‖u + v‖² - ‖u - v‖² + i‖u + iv‖² - i‖u - iv‖²
Proof: ‖u + v‖² = ⟨u + v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
i.e. ‖u + v‖² = ‖u‖² + ⟨u, v⟩ + ⟨v, u⟩ + ‖v‖²
Similarly ‖u - v‖² = ‖u‖² - ⟨u, v⟩ - ⟨v, u⟩ + ‖v‖²
So ‖u + v‖² - ‖u - v‖² = 2⟨u, v⟩ + 2⟨v, u⟩ .......... (1)
Now ‖u + iv‖² = ⟨u + iv, u + iv⟩
= ⟨u, u + iv⟩ + i⟨v, u + iv⟩
= ⟨u, u⟩ + ī⟨u, v⟩ + i⟨v, u⟩ + i·ī⟨v, v⟩
= ‖u‖² - i⟨u, v⟩ + i⟨v, u⟩ + ‖v‖², since ī = -i
So i‖u + iv‖² = i‖u‖² + ⟨u, v⟩ - ⟨v, u⟩ + i‖v‖² .......... (2)
Also ‖u - iv‖² = ⟨u - iv, u - iv⟩
= ⟨u, u - iv⟩ - i⟨v, u - iv⟩
= ‖u‖² + i⟨u, v⟩ - i⟨v, u⟩ + ‖v‖², since (-i)‾ = i
So i‖u - iv‖² = i‖u‖² - ⟨u, v⟩ + ⟨v, u⟩ + i‖v‖² .......... (3)
Subtracting (3) from (2):
i‖u + iv‖² - i‖u - iv‖² = 2⟨u, v⟩ - 2⟨v, u⟩ .......... (4)
Adding (1) and (4):
4⟨u, v⟩ = ‖u + v‖² - ‖u - v‖² + i‖u + iv‖² - i‖u - iv‖²
13.5.14 Theorem: Im⟨u, v⟩ = Re⟨u, iv⟩.
Proof:
If z = x + iy, then y = Im z
= Re(-i(x + iy)) = Re(-iz)
So by this, Im⟨u, v⟩ = Re(-i⟨u, v⟩)
= Re⟨u, iv⟩, since ⟨u, iv⟩ = ī⟨u, v⟩ = -i⟨u, v⟩.
i) ‖u‖ ≥ 0, and ‖u‖ = 0 if and only if u = O
ii) ‖au‖ = |a|·‖u‖
iii) ‖u + v‖ ≤ ‖u‖ + ‖v‖
A vector space V(F) in which the above three conditions are satisfied is called a normed
vector space.
13.6.2 Definition: Normed Vector Space:
In an inner product space define ‖u‖ = √⟨u, u⟩. With this definition of norm the inner product space is a normed vector
space, since the following three conditions are satisfied:
i) ‖u‖ ≥ 0, and ‖u‖ = 0 if and only if u = O
ii) ‖au‖ = |a|·‖u‖, and
iii) ‖u + v‖ ≤ ‖u‖ + ‖v‖, for all u, v ∈ V and a ∈ F
Definition: Let u and v be two vectors in an inner product space V(F). The distance between
the vectors u and v is denoted by d(u, v) and is defined as d(u, v) = ‖u - v‖ = √⟨u - v, u - v⟩.
Ex: Let u = (a1, a2, a3), v = (b1, b2, b3) be two vectors in the inner product space R³. Then
u - v = (a1 - b1, a2 - b2, a3 - b3), so d(u, v) = √((a1 - b1)² + (a2 - b2)² + (a3 - b3)²).
13.6.5 Theorem: If u, v, w are any three vectors in an inner product space V(F), then prove
that
i) d(u, v) ≥ 0, and d(u, v) = 0 iff u = v
ii) d(u, v) = d(v, u)
iii) d(u, v) ≤ d(u, w) + d(w, v)
iv) d(u, v) = d(u + w, v + w)
Proof: i) By definition d(u, v) = ‖u - v‖ ≥ 0, since the norm of a vector is a non-negative real number.
And d(u, v) = 0 ⟺ ‖u - v‖ = 0 ⟺ ‖u - v‖² = 0
⟺ ⟨u - v, u - v⟩ = 0
⟺ u - v = O ⟺ u = v
Thus d(u, v) = 0 if and only if u = v.
ii) d(u, v) = ‖u - v‖
= ‖(-1)(v - u)‖
= |-1|·‖v - u‖
= ‖v - u‖ = d(v, u)
So d(u, v) = d(v, u).
iii) d(u, v) = ‖u - v‖
= ‖(u - w) + (w - v)‖
≤ ‖u - w‖ + ‖w - v‖, by the triangle inequality
= d(u, w) + d(w, v)
So d(u, v) ≤ d(u, w) + d(w, v).
iv) To show d(u, v) = d(u + w, v + w):
d(u + w, v + w) = ‖(u + w) - (v + w)‖
= ‖u - v‖ = d(u, v)
Thus d(u, v) = d(u + w, v + w).
W.E. 12: Show that ⟨u, v⟩ = x1·y1 + x2·y2 + x3·y3, for u = (x1, x2, x3), v = (y1, y2, y3) ∈ R³, is an inner product on R³.
Solution: As the given field is the field of real numbers, the conjugate symmetry is nothing but
symmetry.
i) Symmetry:
Let u = (x1, x2, x3), v = (y1, y2, y3).
So ⟨v, u⟩ = ⟨(y1, y2, y3), (x1, x2, x3)⟩ = y1·x1 + y2·x2 + y3·x3
= x1·y1 + x2·y2 + x3·y3
= ⟨u, v⟩
Thus ⟨v, u⟩ = ⟨u, v⟩.
ii) Linearity: Let a, b ∈ R and w = (z1, z2, z3).
au + bv = a(x1, x2, x3) + b(y1, y2, y3) = (a·x1 + b·y1, a·x2 + b·y2, a·x3 + b·y3)
⟨au + bv, w⟩ = a(x1·z1 + x2·z2 + x3·z3) + b(y1·z1 + y2·z2 + y3·z3)
= a⟨u, w⟩ + b⟨v, w⟩
Thus ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.
iii) Non-negativity: ⟨u, u⟩ = x1(x1) + x2(x2) + x3(x3) = x1² + x2² + x3² ≥ 0, and ⟨u, u⟩ = 0 ⇒ x1 = x2 = x3 = 0,
so u = (x1, x2, x3) = (0, 0, 0) = O.
Hence ⟨u, u⟩ = 0 ⟺ u = O.
As all the three required conditions are satisfied, the given product is an inner product.
W.E. 13: Show that the following are inner products on R², where u = (x1, x2), v = (y1, y2):
i) ⟨u, v⟩ = x1·y1 + 2x1·y2 + 2x2·y1 + 5x2·y2
ii) ⟨u, v⟩ = 2x1·y1 + 5x2·y2
Solution: i) Symmetry:
⟨v, u⟩ = ⟨(y1, y2), (x1, x2)⟩
= y1·x1 + 2y1·x2 + 2y2·x1 + 5y2·x2
= x1·y1 + 2x1·y2 + 2x2·y1 + 5x2·y2
= ⟨u, v⟩
Hence ⟨v, u⟩ = ⟨u, v⟩.
ii) Linearity:
Let a, b ∈ R and w = (z1, z2); then
au + bv = a(x1, x2) + b(y1, y2) = (a·x1 + b·y1, a·x2 + b·y2)
⟨au + bv, w⟩ = a(x1·z1 + 2x1·z2 + 2x2·z1 + 5x2·z2) + b(y1·z1 + 2y1·z2 + 2y2·z1 + 5y2·z2)
= a⟨u, w⟩ + b⟨v, w⟩
Thus ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.
Non-negativity:
⟨u, u⟩ = ⟨(x1, x2), (x1, x2)⟩
= x1·x1 + 2x1·x2 + 2x2·x1 + 5x2·x2
= (x1 + 2x2)² + x2² ≥ 0
and ⟨u, u⟩ = 0 ⇒ (x1 + 2x2)² + x2² = 0
⇒ x1 + 2x2 = 0 and x2 = 0 ⇒ x1 = 0, x2 = 0
⇒ u = (0, 0) = O
Hence ⟨u, u⟩ = 0 ⟺ u = O.
ii) Given u = (x1, x2), v = (y1, y2), ⟨u, v⟩ = 2x1·y1 + 5x2·y2.
Symmetry: ⟨v, u⟩ = ⟨(y1, y2), (x1, x2)⟩
= 2y1·x1 + 5y2·x2
= 2x1·y1 + 5x2·y2
= ⟨u, v⟩
Thus ⟨v, u⟩ = ⟨u, v⟩.
ii) Linearity:
Let a, b ∈ R and w = (z1, z2).
Then au + bv = a(x1, x2) + b(y1, y2) = (a·x1 + b·y1, a·x2 + b·y2)
⟨au + bv, w⟩ = a(2x1·z1 + 5x2·z2) + b(2y1·z1 + 5y2·z2)
= a⟨u, w⟩ + b⟨v, w⟩
Thus ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.
Non-negativity: ⟨u, u⟩ = 2x1·x1 + 5x2·x2
= 2x1² + 5x2² ≥ 0,
and ⟨u, u⟩ = 0 ⇒ x1 = 0, x2 = 0 ⇒ u = (x1, x2) = (0, 0)
⇒ u = O
Thus ⟨u, u⟩ = 0 ⟺ u = O.
W.E. 14:
Show that ‖au + bv‖² = |a|²·‖u‖² + a·b̄⟨u, v⟩ + ā·b⟨v, u⟩ + |b|²·‖v‖².
Solution: ‖au + bv‖² = ⟨au + bv, au + bv⟩
= a⟨u, au + bv⟩ + b⟨v, au + bv⟩
= a·ā⟨u, u⟩ + a·b̄⟨u, v⟩ + b·ā⟨v, u⟩ + b·b̄⟨v, v⟩
= |a|²·‖u‖² + a·b̄⟨u, v⟩ + ā·b⟨v, u⟩ + |b|²·‖v‖²
W.E. 15:
Using the Frobenius inner product, compute ‖A‖, ‖B‖ and ⟨A, B⟩ for A = [1  2+i; 3  i] and
B = [1+i  0; i  -i]; also compute the angle between A and B in M(2×2)(F).
Solution: B = [1+i  0; i  -i], hence B* = [1-i  -i; 0  i]
So B*A = [1-i  -i; 0  i][1  2+i; 3  i] = [1-4i  4-i; 3i  -1]
trace of (B*A) = (1 - 4i) + (-1) = -4i
As A = [1  2+i; 3  i], A* = [1  3; 2-i  -i]
A*A = [1  3; 2-i  -i][1  2+i; 3  i] = [10  2+4i; 2-4i  6]
trace of A*A = 10 + 6 = 16
‖A‖² = ⟨A, A⟩ = trace of A*A = 16
‖A‖ = √⟨A, A⟩ = 4
B*B = [1-i  -i; 0  i][1+i  0; i  -i] = [3  -1; -1  1]
trace of B*B = 3 + 1 = 4
‖B‖² = ⟨B, B⟩ = trace of B*B = 4
‖B‖ = √⟨B, B⟩ = 2
⟨A, B⟩ = trace of (B*A) = -4i
If θ is the angle between A and B,
then cos θ = |⟨A, B⟩|/(‖A‖·‖B‖) = |-4i|/(4·2) = 4/8 = 1/2
θ = cos⁻¹(1/2) = π/3
Thus the angle between A and B is π/3.
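The Frobenius computation above is a few lines in NumPy (the matrix entries are taken as reconstructed in the worked example, which is an assumption recovered from the printed traces):

```python
import numpy as np

A = np.array([[1, 2 + 1j], [3, 1j]])
B = np.array([[1 + 1j, 0], [1j, -1j]])

def frob_ip(X, Y):
    """Frobenius inner product <X, Y> = tr(Y* X)."""
    return np.trace(Y.conj().T @ X)

normA = np.sqrt(frob_ip(A, A).real)   # 4
normB = np.sqrt(frob_ip(B, B).real)   # 2
cos_theta = abs(frob_ip(A, B)) / (normA * normB)
print(frob_ip(A, B), cos_theta)       # -4j and 0.5, so the angle is pi/3
```

`Y.conj().T` is exactly the conjugate transpose Y* defined earlier in this lesson.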
W.E. 16:
Let T be a linear operator on an inner product space V such that ‖T(u)‖ = ‖u‖ for all u ∈ V. Prove
that T is one-one.
Solution: T is a linear operator on V. Suppose
T(u) = T(v)
⇒ T(u - v) = O
⇒ ‖T(u - v)‖ = 0 ⇒ ‖u - v‖ = 0
⇒ ‖u - v‖² = ⟨u - v, u - v⟩ = 0
⇒ u - v = O
⇒ u = v
Thus T(u) = T(v) ⇒ u = v for all u, v ∈ V.
So T is one-one.
W.E. 17:
For u = (2, 1 + i, i) and v = (2 - i, 2, 1 + 2i) in C³ with the standard inner product, verify both the Cauchy-Schwarz inequality and the triangle inequality.
Solution: ⟨u, v⟩ = 2(2 - i)‾ + (1 + i)(2)‾ + i(1 + 2i)‾
= 2(2 + i) + 2(1 + i) + i(1 - 2i)
= 4 + 2i + 2 + 2i + i - 2i²
= 4 + 5i + 2 + 2 = 8 + 5i
So ⟨u, v⟩ = 8 + 5i
‖u‖² = 2(2) + (1 + i)(1 - i) + i(-i)
= 4 + (1 - i²) + (-i²) = 7
So ‖u‖ = √7
‖v‖² = (2 - i)(2 + i) + 4 + (1 + 2i)(1 - 2i)
= (4 - i²) + 4 + (1 - 4i²)
= 5 + 4 + 5 = 14
So ‖v‖ = √14
u + v = (2, 1 + i, i) + (2 - i, 2, 1 + 2i)
= (2 + 2 - i, 1 + i + 2, i + 1 + 2i) = (4 - i, 3 + i, 1 + 3i)
‖u + v‖² = ⟨u + v, u + v⟩ = ⟨(4 - i, 3 + i, 1 + 3i), (4 - i, 3 + i, 1 + 3i)⟩
= (16 - i²) + (9 - i²) + (1 - 9i²) = 17 + 10 + 10 = 37
Hence ‖u + v‖ = √37
We have shown ⟨u, v⟩ = 8 + 5i,
so |⟨u, v⟩| = √(64 + 25) = √89 .......... (1)
‖u‖ = √7, ‖v‖ = √14
‖u‖·‖v‖ = √7·√14 = √98 .......... (2)
But 89 < 98,
so |⟨u, v⟩| ≤ ‖u‖·‖v‖: the Cauchy-Schwarz inequality is verified.
We have ‖u‖ + ‖v‖ = √7 + √14,
and 37 < (√7 + √14)² = 21 + 2√98,
so ‖u + v‖ ≤ ‖u‖ + ‖v‖: the triangle inequality is verified.
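The same verification takes three lines numerically (`np.linalg.norm` on a complex vector already computes the square root of Σ|ai|²):

```python
import numpy as np

u = np.array([2, 1 + 1j, 1j])
v = np.array([2 - 1j, 2, 1 + 2j])

ip = np.sum(u * np.conj(v))              # <u, v> = 8 + 5j
nu, nv = np.linalg.norm(u), np.linalg.norm(v)
print(abs(ip)**2, (nu * nv)**2)          # 89 vs 98: Cauchy-Schwarz
print(np.linalg.norm(u + v), nu + nv)    # sqrt(37) vs sqrt(7) + sqrt(14): triangle
```

Both inequalities are strict here because u and v are linearly independent.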
Solution: Let V = C[0, 1] be the inner product space of real valued continuous functions on [0, 1]
with the inner product of f and g defined by
⟨f, g⟩ = ∫₀¹ f(t)·g(t) dt
Take f(t) = t and g(t) = eᵗ, and verify the Cauchy-Schwarz and triangle inequalities.
⟨f, g⟩ = ∫₀¹ t·eᵗ dt
= [t·eᵗ]₀¹ - ∫₀¹ 1·eᵗ dt
= e - (e¹ - e⁰) = e - e + 1 = 1
So ⟨f, g⟩ = 1 .......... (1)
‖f‖² = ⟨f, f⟩ = ⟨t, t⟩ = ∫₀¹ t·t dt = [t³/3]₀¹
= 1/3 - 0
i.e. ‖f‖² = 1/3
So ‖f‖ = 1/√3
‖g‖² = ⟨g, g⟩ = ∫₀¹ eᵗ·eᵗ dt = ∫₀¹ e²ᵗ dt = [e²ᵗ/2]₀¹
i.e. ‖g‖² = (e² - e⁰)/2 = (e² - 1)/2
‖f‖·‖g‖ = (1/√3)·√((e² - 1)/2) = √((e² - 1)/6) .......... (2)
Since (e² - 1)/6 > 1, from (1) and (2) we get
|⟨f, g⟩| ≤ ‖f‖·‖g‖,
so the Cauchy-Schwarz inequality is verified.
Next, f + g = t + eᵗ, and
‖f + g‖² = ⟨f + g, f + g⟩ = ⟨(t + eᵗ), (t + eᵗ)⟩
= ∫₀¹ (t + eᵗ)² dt = ∫₀¹ (t² + 2t·eᵗ + e²ᵗ) dt
= [t³/3 + 2eᵗ(t - 1) + e²ᵗ/2]₀¹
= 1/3 + 2 + (e² - 1)/2 = (3e² + 11)/6
i.e. ‖f + g‖² = (3e² + 11)/6
‖f + g‖ = √((3e² + 11)/6)
We have ‖f‖ + ‖g‖ = 1/√3 + √((e² - 1)/2), and
(‖f‖ + ‖g‖)² = 1/3 + (e² - 1)/2 + 2‖f‖·‖g‖ = (3e² + 11)/6 - 2 + 2‖f‖·‖g‖ ≥ ‖f + g‖²,
since ‖f‖·‖g‖ ≥ 1.
Hence ‖f + g‖ ≤ ‖f‖ + ‖g‖: the triangle inequality is verified.
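The integrals above can be checked symbolically with SymPy, which evaluates ⟨f, g⟩ = ∫₀¹ t·eᵗ dt exactly:

```python
import sympy as sp

t = sp.symbols('t')
f, g = t, sp.exp(t)

ip = sp.integrate(f * g, (t, 0, 1))               # <f, g> = 1
norm_f = sp.sqrt(sp.integrate(f**2, (t, 0, 1)))   # 1/sqrt(3)
norm_g = sp.sqrt(sp.integrate(g**2, (t, 0, 1)))   # sqrt((e^2 - 1)/2)
norm_fg = sp.sqrt(sp.integrate((f + g)**2, (t, 0, 1)))

print(ip)                                         # 1
print(float(norm_f * norm_g))                     # about 1.03, at least |<f, g>| = 1
print(float(norm_fg), float(norm_f + norm_g))     # about 2.35 <= about 2.36
```

The triangle inequality is nearly tight here because t and eᵗ are both increasing on [0, 1] and so nearly "aligned" in this inner product.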
In the vector space Vn(F) with the standard inner product defined on it, show that
(Σi=1..n |ai + bi|²)^(1/2) ≤ (Σi=1..n |ai|²)^(1/2) + (Σi=1..n |bi|²)^(1/2)
Solution: In the vector space Vn(F) with the standard inner product defined on it, let
u = (a1, a2, ..., an), v = (b1, b2, ..., bn); then u + v = (a1, a2, ..., an) + (b1, b2, ..., bn)
So u + v = (a1 + b1, a2 + b2, ..., an + bn)
‖u + v‖² = Σi=1..n |ai + bi|²
‖u‖² = |a1|² + |a2|² + ... + |an|² = Σi=1..n |ai|²
Similarly ‖v‖² = Σi=1..n |bi|²
By the triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖,
so (Σi=1..n |ai + bi|²)^(1/2) ≤ (Σi=1..n |ai|²)^(1/2) + (Σi=1..n |bi|²)^(1/2)
13.8 Summary:
In this lesson we discussed inner products and inner product spaces, the norm or length of
a vector in an inner product space, normalizing vectors, the Cauchy-Schwarz inequality, the triangle
inequality, the parallelogram law, normed vector spaces, and the distance between two
vectors.
1. Find unit vector corresponding to (2 i,3 2i, 2 3i ) of V3 (C ) with respect to the standard
inner product.
1
Ans : (2 3i, 3 2i, 2 3i )
5
i. ⟨u, v⟩ = x1·y1 + 2x1·y2 + 2x2·y1 + 5x2·y2
ii. ⟨u, v⟩ = 2x1·y1 + 5x2·y2, where u = (x1, x2), v = (y1, y2)
13.11 Exercises:
1. If u (a1 , a2 ), v (b1 , b2 ) V2 ( R) then define u , v a1b2 a1b2 4a2b2 . Show that it is an inner
product on V2 ( R ) .
3. If u = (a1, a2), v = (b1, b2), then show that ⟨u, v⟩ = 2a1·b1 + a1·b2 + a2·b1 + a2·b2 is an inner product on
V2(R).
5. Let u (1, 3, 4, 2), v (4, 2, 2,1), w (5, 1, 2, 6) in R 4 then find u , w , v, w and verify
Ans: u , w 22, v, w 24
u 30, v 5, w 66
6. u = (1, 2), v = (-1, 1) are two vectors in the vector space R², with standard inner product. If w
is a vector such that ⟨u, w⟩ = -1, ⟨v, w⟩ = 3, then find w.
Ans: w = (-7/3, 2/3)
i) ⟨f, g⟩ and ⟨f, h⟩
ii) ‖f‖ and ‖g‖
Ans: ⟨f, g⟩ = 1, ⟨f, h⟩ = 37/4
‖f‖ = 1, ‖g‖ = √(57/3)
f̂ = unit vector along f = f/‖f‖; ĝ = g/‖g‖ = √(3/57) g
If A = [9 8 7; 6 5 4], B = [1 2 3; 4 5 6], C = [3 5 2; 1 0 4], then find ⟨2A + 3B, 4C⟩, ⟨A, A⟩ and ⟨B, B⟩.
Ans: ⟨2A + 3B, 4C⟩ = 324, ⟨A, A⟩ = 271, ⟨B, B⟩ = 91
iii) f(t) = 2t + 1, g(t) = t, where ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt
iv) A = [2 1; 3 1], B = [0 1; 2 3], where ⟨A, B⟩ = tr(BᵀA)
9 23 15 2
i) ii) iii) iv)
105 3 130 6 210
10. If u = (1, −5, 3) and v = (4, 2, −3) are two vectors in R³, then find the distance between u and v.
Ans: d(u, v) = √94
11. In the inner product space C², for u, v ∈ C² and A = [1 i; −i 2], if the inner product is ⟨u, v⟩ = uAv*, find ⟨u, v⟩.
Ans: 6 + 2i ∈ C
13. If u, v are two vectors in a Euclidean space V(R) such that ‖u‖ = ‖v‖, then prove that ⟨u + v, u − v⟩ = 0.
14. Find the norm of the vector v = (1, 2, −5) and also normalise this vector.
Ans: ‖v‖ = √30, v̂ = (1/√30, 2/√30, −5/√30)
15. Let V(R) be the vector space of polynomials with inner product determined by ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt for f, g ∈ V. If f(x) = x² + x − 4 and g(x) = x − 1 for all x ∈ [0, 1], then find ⟨f, g⟩, ‖f‖ and ‖g‖.
Ans: ⟨f, g⟩ = 7/4, ‖f‖ = √(311/30), ‖g‖ = 1/√3
- A. Mallikharjuna Sarma
LESSON - 14
ORTHOGONALIZATION
14.1 Objective of the Lesson:
In geometry, perpendicularity is a useful concept. We now introduce a similar concept in inner product spaces. In previous chapters we have seen the special role of the standard ordered bases for Cⁿ and Rⁿ. The special properties of these bases stem from the fact that the basis vectors form an orthonormal set. Just as bases are the building blocks of vector spaces, bases that are also orthonormal sets are the building blocks of inner product spaces.
14.3 Introduction
14.10 Exercises
14.16 Summary
14.19 Exercise
14.3 Introduction:
Let us consider vectors in R² and see how perpendicularity is handled there. Two vectors u, v ∈ R² are perpendicular if and only if the Pythagorean relation ‖u + v‖² = ‖u‖² + ‖v‖² ......(1) holds. In real inner product spaces this Pythagorean relation can be written in a very simple form using the condition that the angle between the vectors is 90°, i.e. that the cosine of the angle between u and v is zero. The condition (1) is equivalent to the very simple condition ⟨u, v⟩ = 0. We extend this idea to vectors of inner product spaces.
14.4 Orthogonality:
14.4.1 Orthogonality of Two Vectors:
Definition: Two vectors u and v in an inner product space V are said to be orthogonal or perpendicular if ⟨u, v⟩ = 0.
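In coordinates this definition reduces to a single dot-product test. A minimal sketch for the standard inner product on R³ (the vectors u and v are illustrative choices):

```python
def inner(u, v):
    # standard inner product on Vn(R)
    return sum(x * y for x, y in zip(u, v))

u = (1, 2, 1)
v = (1, -1, 1)
print(inner(u, v) == 0)  # True: u and v are orthogonal
```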
14.4.5 A Theorem:
Show that orthogonality in an inner product space is symmetric.
Proof: Let V(F) be the given inner product space and let u, v be two vectors in V such that u is orthogonal to v. Then ⟨u, v⟩ = 0.
So ⟨v, u⟩ = conjugate of ⟨u, v⟩ = conjugate of 0 = 0. Hence v is orthogonal to u.
Further, for any scalar k, ⟨ku, v⟩ = k⟨u, v⟩ = 0, so ku is orthogonal to v.
Hence every scalar multiple of u is orthogonal to v.
14.4.6 Theorem:
The zero vector is orthogonal to every vector in an inner product space, and it is the only vector orthogonal to itself.
Proof: For any v ∈ V, ⟨O, v⟩ = 0, so O is orthogonal to every vector. Conversely, let u be any vector in the given inner product space V(F) which is orthogonal to itself.
Then ⟨u, u⟩ = 0 ⟹ u = O (the zero vector), by the definition of an inner product.
The vectors u, v of a real inner product space V(F) are orthogonal if and only if
‖u + v‖² = ‖u‖² + ‖v‖².
Now ‖u + v‖² = ‖u‖² + ‖v‖²
⟺ ⟨u + v, u + v⟩ = ⟨u, u⟩ + ⟨v, v⟩
⟺ ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩ = ⟨u, u⟩ + ⟨v, v⟩
⟺ ⟨u, v⟩ + ⟨v, u⟩ = 0
⟺ 2⟨u, v⟩ = 0 (the space is real, so ⟨v, u⟩ = ⟨u, v⟩)
⟺ ⟨u, v⟩ = 0.
Let u, v be two vectors in the inner product space V₃(R) with the standard inner product defined on it. Let u, v represent the sides AB, BC of a triangle ABC in three-dimensional Euclidean space. Then the vector u + v represents the side AC of the triangle ABC, and ‖u + v‖ = AC. By the above theorem, ∠ABC = 90° if and only if AC² = AB² + BC², which is the Pythagorean theorem.
Note: The above theorem does not hold in a complex inner product space.
For example, take u = (0, i), v = (0, 1) in V₂(C). For any u, v,
‖u + v‖² = ⟨u + v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² + ⟨u, v⟩ + conjugate of ⟨u, v⟩ + ‖v‖²
= ‖u‖² + ‖v‖² + 2 Re⟨u, v⟩ .......... (1)  (since z + z̄ = 2 Re z)
Here ⟨u, v⟩ = i ≠ 0, yet Re⟨u, v⟩ = 0; using this in (1)
we get ‖u + v‖² = ‖u‖² + ‖v‖², although u and v are not orthogonal.
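The complex counterexample above can be replayed numerically. A sketch using Python's built-in complex numbers, with the standard inner product ⟨u, v⟩ = Σ uᵢ v̄ᵢ on V₂(C):

```python
def cinner(u, v):
    # standard inner product on Vn(C): ⟨u, v⟩ = Σ uᵢ conj(vᵢ)
    return sum(x * y.conjugate() for x, y in zip(u, v))

u = (0j, 1j)
v = (0j, 1 + 0j)
uv = tuple(a + b for a, b in zip(u, v))

lhs = cinner(uv, uv).real                     # ‖u + v‖²
rhs = (cinner(u, u) + cinner(v, v)).real      # ‖u‖² + ‖v‖²
print(cinner(u, v))            # 1j — not zero
print(abs(lhs - rhs) < 1e-12)  # True: the Pythagorean relation holds anyway
```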
4a + 2b − 3c = 0 ............. (1)
Any solution of this equation gives a vector orthogonal to u; take v = (1, 1, 2).
Hence v̂ = v/‖v‖ = (1/√6)(1, 1, 2) = (1/√6, 1/√6, 2/√6) is a unit vector orthogonal to the given vector u = (4, 2, −3).
W.E. 2: Find a non-zero vector w which is orthogonal to u = (1, 2, 1) and v = (2, 5, 4) in R³.
Solution: Let w = (x, y, z).
⟨u, w⟩ = 0 ⟹ ⟨(1, 2, 1), (x, y, z)⟩ = 0 ⟹ x + 2y + z = 0 .......... (1)
⟨v, w⟩ = 0 ⟹ 2x + 5y + 4z = 0 .......... (2)
(1) × 2: 2x + 4y + 2z = 0; subtracting this from (2): y + 2z = 0
So y = −2z
Put z = 1, y = −2; then from (1): x − 4 + 1 = 0 ⟹ x = 3
Thus w = (x, y, z) = (3, −2, 1) is the desired non-zero vector orthogonal to both u and v.
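In R³ the two conditions ⟨u, w⟩ = 0 and ⟨v, w⟩ = 0 can also be solved at once with the cross product, which reproduces the answer of W.E. 2:

```python
def cross(u, v):
    # in R³ the cross product u × v is orthogonal to both u and v
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

u = (1, 2, 1)
v = (2, 5, 4)
w = cross(u, v)
print(w)  # (3, -2, 1), matching the worked example
```

Any non-zero scalar multiple of w is an equally valid answer.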
W.E. 3: In a real inner product space, if u, v are two vectors such that ‖u‖ = ‖v‖, then prove that u + v and u − v are orthogonal. Interpret the result geometrically.
Solution: ⟨u + v, u − v⟩ = ⟨u, u⟩ − ⟨u, v⟩ + ⟨v, u⟩ − ⟨v, v⟩
= ‖u‖² − ⟨u, v⟩ + ⟨u, v⟩ − ‖v‖²  (the space is real, so ⟨v, u⟩ = ⟨u, v⟩)
= ‖u‖² − ‖v‖² = 0, since ‖u‖ = ‖v‖
⟨u + v, u − v⟩ = 0
So u + v and u − v are orthogonal.
W.E. 4: If u, v are two vectors in a real inner product space and u + v, u − v are orthogonal, then show that ‖u‖ = ‖v‖.
Solution: ⟨u + v, u − v⟩ = 0
⟹ ⟨u, u⟩ − ⟨u, v⟩ + ⟨v, u⟩ − ⟨v, v⟩ = 0
⟹ ‖u‖² − ⟨u, v⟩ + ⟨u, v⟩ − ‖v‖² = 0
⟹ ‖u‖² = ‖v‖², i.e. ‖u‖ = ‖v‖
Geometrical interpretation:
Let u, v be two vectors in the inner product space V₃(R) with the standard inner product defined on it. Let u, v represent the sides AB and BC of a parallelogram ABCD, so u = AB, v = BC. Then u + v and u − v represent the diagonals AC and DB of the parallelogram. Thus the diagonals of a parallelogram are perpendicular exactly when the adjacent sides have equal length, i.e. when the parallelogram is a rhombus.
Solution: As u and v are orthogonal vectors in an inner product space V, ⟨u, v⟩ = 0 ...... (1)
Now ‖u + v‖² = ⟨u + v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² + ⟨u, v⟩ + conjugate of ⟨u, v⟩ + ‖v‖² .......... (2)
= ‖u‖² + 0 + conjugate of 0 + ‖v‖², using (1)
So ‖u + v‖² = ‖u‖² + ‖v‖².
Deduction of the Pythagorean theorem:
If u, v are two orthogonal vectors in a real inner product space, then ⟨u, v⟩ = ⟨v, u⟩ .... (3); using (3) in (2),
‖u + v‖² = ‖u‖² + ‖v‖² + 2⟨u, v⟩
So ‖u + v‖² = ‖u‖² + ‖v‖² .......... (4), since ⟨u, v⟩ = 0.
W.E. 6: If u, v are two orthogonal vectors in an inner product space V(F) and ‖u‖ = ‖v‖ = 1, then prove that d(u, v) = ‖u − v‖ = √2.
Solution: d(u, v)² = ‖u − v‖²
= ⟨u − v, u − v⟩
= ⟨u, u⟩ − ⟨u, v⟩ − ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² − 0 − 0 + ‖v‖², since u, v are orthogonal
= 1 + 1 = 2
So d(u, v) = ‖u − v‖ = √2.
14.6.1 Theorem:
Show that any orthogonal set of non-zero vectors in an inner product space V is linearly independent.
Proof: Let S be an orthogonal set of non-zero vectors in an inner product space V. Let S₁ = {u₁, u₂, …, u_m} be a finite subset of S containing m distinct vectors.
Let Σⱼ₌₁ᵐ cⱼuⱼ = c₁u₁ + c₂u₂ + ⋯ + c_mu_m = O ............... (1)
We will show that each scalar coefficient is zero. Let uᵢ be any vector in S₁, i.e. 1 ≤ i ≤ m. Taking the inner product of (1) with uᵢ and using ⟨uⱼ, uᵢ⟩ = 0 for j ≠ i,
⟨O, uᵢ⟩ = cᵢ⟨uᵢ, uᵢ⟩ = cᵢ‖uᵢ‖² = 0
Since uᵢ ≠ O, ‖uᵢ‖² ≠ 0. Hence cᵢ = 0 for 1 ≤ i ≤ m
c₁ = 0, c₂ = 0, …, c_m = 0
Thus S₁ is linearly independent, and hence S is linearly independent.
Let S = {u₁, u₂, …, u_m} be an orthogonal set of non-zero vectors in an inner product space V(F). If a vector v in V is in the linear span of S, then
v = Σᵢ₌₁ᵐ (⟨v, uᵢ⟩/‖uᵢ‖²) uᵢ
Proof: v is a vector in the inner product space V which is in the linear span of S = {u₁, u₂, …, u_m}. So v can be expressed as a linear combination of the vectors of S: there exist scalars c₁, c₂, …, c_m such that
v = Σⱼ₌₁ᵐ cⱼuⱼ
Then ⟨v, uᵢ⟩ = ⟨Σⱼ₌₁ᵐ cⱼuⱼ, uᵢ⟩ ........... (1)
= Σⱼ₌₁ᵐ cⱼ⟨uⱼ, uᵢ⟩, by the linear property of inner products
= cᵢ⟨uᵢ, uᵢ⟩, since ⟨uⱼ, uᵢ⟩ = 0 for j ≠ i
So ⟨v, uᵢ⟩ = cᵢ‖uᵢ‖² ............ (2)
So by (2), cᵢ = ⟨v, uᵢ⟩/‖uᵢ‖²
So v = (⟨v, u₁⟩/‖u₁‖²) u₁ + (⟨v, u₂⟩/‖u₂‖²) u₂ + ⋯ + (⟨v, u_m⟩/‖u_m‖²) u_m
Hence v = Σᵢ₌₁ᵐ (⟨v, uᵢ⟩/‖uᵢ‖²) uᵢ
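The expansion formula can be exercised numerically. A sketch with an orthogonal (not normalised) set in R³; the set S and the vector v are illustrative choices:

```python
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

# an orthogonal (not yet normalised) set in R³
S = [(1, 1, 1), (1, -1, 0), (1, 1, -2)]
v = (2.0, 1.0, 3.0)

# v = Σ (⟨v, uᵢ⟩ / ‖uᵢ‖²) uᵢ
coeffs = [inner(v, u) / inner(u, u) for u in S]
recon = [sum(c * u[k] for c, u in zip(coeffs, S)) for k in range(3)]
print(recon)  # [2.0, 1.0, 3.0]
```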
14.7.1 Orthonormal Set: Definition: A set S of vectors in an inner product space V(F) is said to be an orthonormal set if
i) u ∈ S ⟹ ‖u‖ = 1, i.e. ⟨u, u⟩ = 1, and
ii) u, v ∈ S and u ≠ v ⟹ ⟨u, v⟩ = 0
Or, equivalently, ⟨uᵢ, uⱼ⟩ = δᵢⱼ, where δᵢⱼ = 1 if i = j and δᵢⱼ = 0 if i ≠ j.
Note: i) An orthonormal set is an orthogonal set in which every vector is a unit vector, i.e. a set consisting of mutually orthogonal unit vectors is called an orthonormal set.
Note ii): An orthonormal set does not contain the zero vector.
14.7.2 Worked Out Examples:
W.E. 7: Prove that S = {(1/3, 2/3, 2/3), (2/3, 1/3, −2/3), (2/3, −2/3, 1/3)} is an orthonormal set in R³ with the standard inner product.
Solution: Let u = (1/3, 2/3, 2/3), v = (2/3, 1/3, −2/3), w = (2/3, −2/3, 1/3) be the given vectors of S.
‖u‖² = (1/3)² + (2/3)² + (2/3)² = 1/9 + 4/9 + 4/9 = 1, so ‖u‖ = 1
‖v‖² = (2/3)² + (1/3)² + (−2/3)² = 4/9 + 1/9 + 4/9 = 1, so ‖v‖ = 1
‖w‖² = (2/3)² + (−2/3)² + (1/3)² = 4/9 + 4/9 + 1/9 = 1, so ‖w‖ = 1
⟨u, w⟩ = ⟨(1/3, 2/3, 2/3), (2/3, −2/3, 1/3)⟩ = 2/9 − 4/9 + 2/9 = 0 = ⟨w, u⟩
⟨u, v⟩ = ⟨(1/3, 2/3, 2/3), (2/3, 1/3, −2/3)⟩ = 2/9 + 2/9 − 4/9 = 0 = ⟨v, u⟩
⟨v, w⟩ = ⟨(2/3, 1/3, −2/3), (2/3, −2/3, 1/3)⟩ = 4/9 − 2/9 − 2/9 = 0 = ⟨w, v⟩
As the length of each vector in S is unity and the inner product of any two different vectors in S is 0, S is an orthonormal set.
W.E. 7b: Consider the usual basis E = {e₁, e₂, e₃} of the Euclidean space R³, where e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1). Show that E is an orthonormal set.
Solution:
e₁ = (1, 0, 0), so ‖e₁‖ = √(1² + 0² + 0²) = 1
e₂ = (0, 1, 0), so ‖e₂‖ = √(0² + 1² + 0²) = 1
e₃ = (0, 0, 1), so ‖e₃‖ = √(0² + 0² + 1²) = 1
Thus ‖e₁‖ = ‖e₂‖ = ‖e₃‖ = 1. Also
⟨e₁, e₂⟩ = 0 = ⟨e₂, e₁⟩, ⟨e₂, e₃⟩ = 0 = ⟨e₃, e₂⟩, ⟨e₃, e₁⟩ = 0 = ⟨e₁, e₃⟩
Thus the length of each vector in E is unity and the inner product of any two different vectors of E is zero. So E is an orthonormal set.
W.E. 8: S = {(1, 2, −3, 4), (3, 4, 1, −2), (3, −2, 1, 1)} is a subset of R⁴. Obtain an orthonormal set from S and verify the Pythagorean theorem.
Solution: Let u = (1, 2, −3, 4), v = (3, 4, 1, −2), w = (3, −2, 1, 1).
Now ⟨u, v⟩ = 1(3) + 2(4) + (−3)(1) + 4(−2) = 3 + 8 − 3 − 8 = 0
Thus ⟨u, v⟩ = 0 = ⟨v, u⟩
⟨v, w⟩ = 9 − 8 + 1 − 2 = 0, so ⟨v, w⟩ = 0 = ⟨w, v⟩
⟨w, u⟩ = 3 − 4 − 3 + 4 = 0, so ⟨w, u⟩ = 0 = ⟨u, w⟩
Thus the inner product of any two different vectors in S is zero, so S is orthogonal.
We normalise S to obtain an orthonormal set:
‖u‖ = √(1² + 2² + (−3)² + 4²) = √(1 + 4 + 9 + 16) = √30
‖v‖ = √(3² + 4² + 1² + (−2)²) = √(9 + 16 + 1 + 4) = √30
‖w‖ = √(3² + (−2)² + 1² + 1²) = √(9 + 4 + 1 + 1) = √15
The orthonormal set is
{(1/√30, 2/√30, −3/√30, 4/√30), (3/√30, 4/√30, 1/√30, −2/√30), (3/√15, −2/√15, 1/√15, 1/√15)}
Verification of the Pythagorean theorem: u + v + w = (7, 4, −1, 3), so
‖u + v + w‖² = 49 + 16 + 1 + 9 = 75
‖u‖² + ‖v‖² + ‖w‖² = 30 + 30 + 15 = 75
Hence ‖u + v + w‖² = ‖u‖² + ‖v‖² + ‖w‖².
W.E. 9: Let V(F) be an inner product space. If u is a non-zero vector in V, then show that {u/‖u‖} is an orthonormal set.
Solution: Since u ≠ O, ‖u‖ ≠ 0 and u/‖u‖ ∈ V.
Now ⟨u/‖u‖, u/‖u‖⟩ = (1/‖u‖)(1/‖u‖)⟨u, u⟩ = ‖u‖²/‖u‖² = 1
So ‖u/‖u‖‖ = 1.
As u/‖u‖ is a unit vector, {u/‖u‖} is an orthonormal set, and it is a subset of V.
14.8.1 Theorem: Every orthonormal set of vectors in an inner product space V(F) is linearly independent.
Proof: Let S be an orthonormal set of vectors in an inner product space V. Let S₁ = {u₁, u₂, …, u_m} be a finite subset of S containing m vectors. Suppose c₁u₁ + c₂u₂ + ⋯ + c_mu_m = O. Taking the inner product with uᵢ (1 ≤ i ≤ m) and using
⟨uⱼ, uᵢ⟩ = 1 if j = i, 0 if j ≠ i,
we get cᵢ⟨uᵢ, uᵢ⟩ = 0
cᵢ = 0, since ⟨uᵢ, uᵢ⟩ = 1 ≠ 0, where 1 ≤ i ≤ m.
So c₁ = 0, c₂ = 0, …, c_m = 0
Thus S₁ is linearly independent, and hence S is linearly independent.
Aliter: Let S be an orthonormal set in an inner product space. Let u ∈ S; then u ≠ O, since
u = O ⟹ ⟨u, u⟩ = 0 ≠ 1, a contradiction.
So S is an orthogonal set of non-zero vectors.
As we know every orthogonal set of non-zero vectors in an inner product space V is linearly independent, it follows that S is linearly independent.
Hence every orthonormal set of vectors in an inner product space V(F) is linearly independent.
14.8.2 Theorem: Let S = {u₁, u₂, …, u_m} be an orthonormal set of vectors in an inner product space V(F). If a vector v is in the linear span of S, then
v = Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ
Proof: v is given to be a vector in the linear span of S. So v can be expressed as a linear combination of the vectors of S. Hence there exist scalars c₁, c₂, …, c_m in F such that
v = c₁u₁ + c₂u₂ + ⋯ + c_mu_m = Σⱼ₌₁ᵐ cⱼuⱼ .......... (1)
Then ⟨v, uᵢ⟩ = ⟨Σⱼ₌₁ᵐ cⱼuⱼ, uᵢ⟩
= Σⱼ₌₁ᵐ cⱼ⟨uⱼ, uᵢ⟩, by linearity of the inner product
= cᵢ⟨uᵢ, uᵢ⟩, since ⟨uⱼ, uᵢ⟩ = 0 if j ≠ i and 1 if j = i
= cᵢ(1) = cᵢ
Thus ⟨v, uᵢ⟩ = cᵢ, where 1 ≤ i ≤ m. Substituting in (1),
we get v = Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ
14.8.3 Theorem:
Let {u₁, u₂, …, uₙ} be an orthonormal set, so that ‖uᵢ‖ = 1, ‖uⱼ‖ = 1 and ⟨uᵢ, uⱼ⟩ = 0 for i ≠ j. If v = a₁u₁ + a₂u₂ + ⋯ + aₙuₙ, then taking the inner product with uᵢ gives
aᵢ = ⟨v, uᵢ⟩ ............ (1)
Hence aᵢ = ⟨v, uᵢ⟩, and so v = a₁u₁ + a₂u₂ + ⋯ + aₙuₙ = Σᵢ₌₁ⁿ aᵢuᵢ, i.e.
v = Σᵢ₌₁ⁿ ⟨v, uᵢ⟩uᵢ, using (1)
14.8.4 Theorem: If {u₁, u₂, …, u_m} is an orthonormal set in an inner product space V(F) and v ∈ V, then the vector w = v − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ is orthogonal to each of u₁, u₂, …, u_m.
Proof:
For j = 1, 2, …, m
⟨w, uⱼ⟩ = ⟨v − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ, uⱼ⟩
= ⟨v, uⱼ⟩ − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩⟨uᵢ, uⱼ⟩
= ⟨v, uⱼ⟩ − [⟨v, u₁⟩⟨u₁, uⱼ⟩ + ⟨v, u₂⟩⟨u₂, uⱼ⟩ + ⋯ + ⟨v, uⱼ⟩⟨uⱼ, uⱼ⟩ + ⋯ + ⟨v, u_m⟩⟨u_m, uⱼ⟩]
= ⟨v, uⱼ⟩ − ⟨v, uⱼ⟩ = 0
Hence w = v − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ is orthogonal to each of u₁, u₂, …, u_m.
14.8.5 Corollary: If S = {u₁, u₂, …, u_m} is an orthonormal set in an inner product space V(F) and v ∈ V, then w = v − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ is orthogonal to each vector of L(S).
Proof: From the above theorem the vector w is orthogonal to each of u₁, u₂, …, u_m. Any vector of L(S) is of the form c₁u₁ + c₂u₂ + ⋯ + c_mu_m, and
⟨c₁u₁ + c₂u₂ + ⋯ + c_mu_m, w⟩ = c₁⟨u₁, w⟩ + c₂⟨u₂, w⟩ + ⋯ + c_m⟨u_m, w⟩ = 0
Hence w is orthogonal to every vector of L(S).
W.E. 10: If S = {u₁, u₂, …, uₙ} is an orthogonal set in an inner product space V(F), then prove that S₁ = {a₁u₁, a₂u₂, …, aₙuₙ} is also an orthogonal set for any choice of non-zero scalars a₁, a₂, …, aₙ ∈ F.
Solution: For i ≠ j, ⟨aᵢuᵢ, aⱼuⱼ⟩ = aᵢāⱼ⟨uᵢ, uⱼ⟩ = 0, since ⟨uᵢ, uⱼ⟩ = 0.
So ⟨aᵢuᵢ, aⱼuⱼ⟩ = 0 for aᵢuᵢ, aⱼuⱼ ∈ S₁ and i ≠ j; hence S₁ is an orthogonal set.
W.E. 11: If S = {(1/√2, 1/√2, 0), (1/√3, −1/√3, 1/√3), (1/√6, −1/√6, −2/√6)} is an orthonormal subset of the inner product space R³(R), express the vector v = (2, 1, 3) as a linear combination of the basis vectors of S.
Solution: Let u₁ = (1/√2, 1/√2, 0); u₂ = (1/√3, −1/√3, 1/√3); u₃ = (1/√6, −1/√6, −2/√6).
c₁ = ⟨v, u₁⟩ = 2(1/√2) + 1(1/√2) + 3(0) = 3/√2
c₂ = ⟨v, u₂⟩ = 2(1/√3) − 1(1/√3) + 3(1/√3) = 4/√3
c₃ = ⟨v, u₃⟩ = 2(1/√6) − 1(1/√6) − 3(2/√6) = −5/√6
So v = (2, 1, 3) = (3/√2)(1/√2, 1/√2, 0) + (4/√3)(1/√3, −1/√3, 1/√3) − (5/√6)(1/√6, −1/√6, −2/√6)
W.E. 12: Let V(C) be the inner product space of continuous complex-valued functions on [0, 2π] with inner product
⟨f, g⟩ = (1/2π) ∫₀²π f(t) ḡ(t) dt
Prove that S = {fₙ(t) = e^{int} : n ∈ Z} is an orthonormal subset of V.
Solution: ⟨fₙ, fₙ⟩ = (1/2π) ∫₀²π e^{int} e^{−int} dt
= (1/2π) ∫₀²π 1 dt = (1/2π)[t]₀²π
So ⟨fₙ, fₙ⟩ = 1
Let m ≠ n; then
⟨f_m, fₙ⟩ = (1/2π) ∫₀²π e^{imt} e^{−int} dt
= (1/2π) ∫₀²π e^{i(m−n)t} dt
= (1/2π) ∫₀²π [cos(m − n)t + i sin(m − n)t] dt
= (1/2π) [(1/(m − n)) sin(m − n)t − (i/(m − n)) cos(m − n)t]₀²π
= (1/2π)(0) = 0
Thus ⟨fₙ, fₙ⟩ = 1 for all n ∈ Z and ⟨f_m, fₙ⟩ = 0 if m ≠ n, m, n ∈ Z. Hence S is an orthonormal subset of V.
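The orthonormality of the fₙ can also be checked by numerical integration. A sketch using the midpoint rule; the choices m = 3, n = 5 and the panel count N are illustrative:

```python
import cmath, math

def ip(m, n, N=10000):
    # ⟨f_m, f_n⟩ = (1/2π) ∫₀^{2π} e^{imt} conj(e^{int}) dt, midpoint rule with N panels
    h = 2 * math.pi / N
    s = sum(cmath.exp(1j * (m - n) * (k + 0.5) * h) for k in range(N))
    return s * h / (2 * math.pi)

print(abs(ip(3, 3) - 1) < 1e-9)  # True: unit norm
print(abs(ip(3, 5)) < 1e-9)      # True: orthogonal for m ≠ n
```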
[t⁴/4 + kt³/3]₀¹ = 1/4 + k/3
So 1/4 + k/3 = 0 ⟹ 3 + 4k = 0
So k = −3/4
14.10 Exercise:
1. Find the vector of unit length which is orthogonal to u = (2, 1, 6) of V₃(R) with respect to the standard inner product.
Ans: (2/3, 2/3, −1/3)
2. Find two mutually orthogonal vectors, each of which is orthogonal to the vector u = (4, 2, −3) of V₃(R) with respect to the standard inner product.
3. Normalise the vectors:
(i) u = (2, 3, −1)  Ans: û = (2/√14, 3/√14, −1/√14)
(ii) v = (1/2, 1/3, 1/4)  Ans: v̂ = (6/√61, 4/√61, 3/√61)
4. Find a unit vector orthogonal to u₁ = (1, 2, 1) and u₂ = (3, 1, 0) in R³ with the standard inner product.
Ans: (1/√35)(1, −3, 5)
5. Let V be the vector space over R of all continuous real valued functions defined on [0, 1] with inner product ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt.
6. Let S consist of the following vectors in R³: u₁ = (1, 1, 1), u₂ = (1, 2, −3), u₃ = (5, −4, −1).
Then (i) show that S is orthogonal and S is a basis of R³; (ii) write v = (1, 5, −7) as a linear combination of u₁, u₂, u₃.
Ans: v = −(1/3)u₁ + (16/7)u₂ − (4/21)u₃
7 (a). Find the value of k so that the pair of vectors u = (1, 2, k, 3) and v = (3, k, 7, −5) are orthogonal in R⁴.
Ans: k = 4/3
Ans: m = 4, 1
8. If {(1, 0, 1, 0), (0, 1, 0, 1), (−1, 0, 1, 0)} is an orthogonal subset of the R⁴(R) inner product space, obtain the orthonormal set by normalizing.
Ans: (1/√2)(1, 0, 1, 0), (1/√2)(0, 1, 0, 1), (1/√2)(−1, 0, 1, 0)
9. If u, v are orthogonal vectors in a real inner product space, show that ‖au + bv‖² = a²‖u‖² + b²‖v‖².
11. Show that {(1/√3, 1/√3, 1/√3), (0, 1/√2, −1/√2), (2/√6, −1/√6, −1/√6)} is an orthonormal set in R³.
Ex: i) The basis S = {(1, 0), (0, 1)} of the inner product space R²(R) is also orthonormal. So S is an orthonormal basis of R².
ii) The set S = {(1/√5, 2/√5), (2/√5, −1/√5)} is an orthonormal basis of R²(R).
iii) The set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis of the inner product space R³(R) which is also orthonormal; so it is an orthonormal basis.
iv) The standard ordered basis for the inner product space Vₙ(R) is also orthonormal. So it is an orthonormal basis.
14.11.1 Finite dimensional inner product space: Definition:
A finite dimensional vector space in which an inner product is defined is called a finite dimensional inner product space.
We now establish that every finite dimensional inner product space possesses an orthonormal basis. If S is a basis of the finite dimensional inner product space V(F), we construct an orthonormal set S₁ from S such that L(S₁) = L(S) = V.
Proof: Let V(F) be an n-dimensional inner product space. Let B = {u₁, u₂, …, uₙ} be a basis of V(F). We will now construct an orthonormal set in V(F) with the help of the elements of B.
Now u₁ ≠ O ⟹ ‖u₁‖ ≠ 0
Further let u₁/‖u₁‖ = v₁ (say) ≠ O
This belongs to V(F), and
‖v₁‖² = ⟨v₁, v₁⟩ = ⟨u₁/‖u₁‖, u₁/‖u₁‖⟩ = ⟨u₁, u₁⟩/‖u₁‖² = 1 (since ‖u₁‖ is real)
So the set {v₁} forms an orthonormal set in V(F), and v₁ is in the linear span of u₁. Now let
w₂ = u₂ − ⟨u₂, v₁⟩v₁ and v₂ = w₂/‖w₂‖ (w₂ ≠ O), which belongs to V(F).
Evidently ‖v₂‖ = 1. Now
⟨v₂, v₁⟩ = ⟨w₂/‖w₂‖, v₁⟩ = (1/‖w₂‖)⟨w₂, v₁⟩
= (1/‖w₂‖)⟨u₂ − ⟨u₂, v₁⟩v₁, v₁⟩
= (1/‖w₂‖)[⟨u₂, v₁⟩ − ⟨u₂, v₁⟩⟨v₁, v₁⟩]
= (1/‖w₂‖)[⟨u₂, v₁⟩ − ⟨u₂, v₁⟩], since ⟨v₁, v₁⟩ = 1
= 0
As ⟨v₂, v₁⟩ = 0, v₁ and v₂ are orthogonal to each other and have unit norms, implying that the set {v₁, v₂} is an orthonormal set consisting of the distinct vectors v₁ and v₂. Also, since
w₂ = u₂ − ⟨u₂, v₁⟩v₁,
v₂ = w₂/‖w₂‖ = (1/‖w₂‖)(u₂ − ⟨u₂, v₁⟩v₁) = u₂/‖w₂‖ − (⟨u₂, v₁⟩/‖w₂‖)(u₁/‖u₁‖),
so v₂ is a linear combination of u₁ and u₂.
Next take w₃ = u₃ − ⟨u₃, v₁⟩v₁ − ⟨u₃, v₂⟩v₂ and v₃ = w₃/‖w₃‖, where w₃ ≠ O, v₃ ∈ V(F).
This shows that the set {v₁, v₂, v₃} is an orthonormal set of distinct vectors v₁, v₂, v₃, and v₃ is a linear combination of u₁, u₂, u₃. Suppose that, in this way, we have constructed an orthonormal set {v₁, v₂, …, v_k} of k distinct vectors such that each vⱼ (j = 1, 2, …, k) is a linear combination of u₁, u₂, …, uⱼ. Take
w_{k+1} = u_{k+1} − Σᵢ₌₁ᵏ ⟨u_{k+1}, vᵢ⟩vᵢ and v_{k+1} = w_{k+1}/‖w_{k+1}‖, so that ‖v_{k+1}‖ = 1,
and v_{k+1} is orthogonal to each of the vectors v₁, v₂, …, v_k; moreover v_{k+1} ≠ vⱼ (j = 1, 2, …, k), since equality with a vector orthogonal to it is impossible. Also, from the above it is clear that v_{k+1} is a linear combination of u₁, u₂, …, u_{k+1}. Hence we have constructed an orthonormal set {v₁, v₂, …, v_k, v_{k+1}} of k + 1 distinct vectors such that each vⱼ is a linear combination of u₁, u₂, …, uⱼ. Continuing in this way we finally obtain an orthonormal set {v₁, v₂, …, vₙ} of n distinct vectors such that vⱼ (j = 1, 2, …, n) is a linear combination of u₁, u₂, …, uⱼ.
This method of converting a basis of V(F) into a complete orthonormal set is called the Gram–Schmidt orthogonalisation process.
14.11.3 Working procedure to apply the Gram–Schmidt orthogonalization process to numerical problems:
Suppose B = {u₁, u₂, …, uₙ} is a given basis of a finite dimensional inner product space V. Let {v₁, v₂, …, vₙ} be the orthonormal basis for V which we are required to construct from the basis B.
Take v₁ = u₁/‖u₁‖
v₂ = w₂/‖w₂‖, where w₂ = u₂ − ⟨u₂, v₁⟩v₁
v₃ = w₃/‖w₃‖, where w₃ = u₃ − ⟨u₃, v₁⟩v₁ − ⟨u₃, v₂⟩v₂
⋮
vₙ = wₙ/‖wₙ‖, where wₙ = uₙ − ⟨uₙ, v₁⟩v₁ − ⟨uₙ, v₂⟩v₂ − ⋯ − ⟨uₙ, v_{n−1}⟩v_{n−1}
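The working procedure translates directly into code. A minimal sketch of classical Gram–Schmidt for the standard inner product on R³, applied to the basis (1, 0, 1), (1, 0, −1), (0, 3, 4) used in W.E. 14:

```python
import math

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(basis):
    # working procedure 14.11.3: wᵢ = uᵢ − Σ_{j<i} ⟨uᵢ, vⱼ⟩vⱼ, vᵢ = wᵢ/‖wᵢ‖
    vs = []
    for u in basis:
        w = list(u)
        for v in vs:
            c = inner(u, v)                       # coefficient ⟨uᵢ, vⱼ⟩
            w = [wk - c * vk for wk, vk in zip(w, v)]
        nw = math.sqrt(inner(w, w))               # ‖wᵢ‖ (assumes the input is independent)
        vs.append([wk / nw for wk in w])
    return vs

B = [(1, 0, 1), (1, 0, -1), (0, 3, 4)]
V = gram_schmidt(B)
# every output vector has unit norm, and distinct vectors are orthogonal
print(all(abs(inner(v, v) - 1) < 1e-9 for v in V))                             # True
print(all(abs(inner(V[i], V[j])) < 1e-9 for i in range(3) for j in range(i)))  # True
```

Classical Gram–Schmidt as written can lose orthogonality in floating point for ill-conditioned bases; the modified variant, which projects against the running w instead of the original u, is numerically more robust.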
W.E. 14: Apply the Gram–Schmidt process to the vectors u₁ = (1, 0, 1), u₂ = (1, 0, −1), u₃ = (0, 3, 4) to obtain an orthonormal basis for V₃(R) with the standard inner product.
Solution: u₁ = (1, 0, 1), so ‖u₁‖ = √(1² + 0² + 1²) = √2
v₁ = u₁/‖u₁‖ = (1/√2)(1, 0, 1) = (1/√2, 0, 1/√2)
u₂ = (1, 0, −1), so ⟨u₂, v₁⟩ = ⟨(1, 0, −1), (1/√2)(1, 0, 1)⟩
So ⟨u₂, v₁⟩ = (1/√2)[1(1) + 0(0) + (−1)(1)] = 0
w₂ = u₂ − ⟨u₂, v₁⟩v₁ = (1, 0, −1) − 0·(1/√2)(1, 0, 1) = (1, 0, −1)
‖w₂‖ = √(1² + 0² + (−1)²) = √2
v₂ = w₂/‖w₂‖ = (1/√2)(1, 0, −1) = (1/√2, 0, −1/√2)
u₃ = (0, 3, 4):
⟨u₃, v₁⟩ = ⟨(0, 3, 4), (1/√2)(1, 0, 1)⟩ = (1/√2)[0(1) + 3(0) + 4(1)] = 4/√2 = 2√2
⟨u₃, v₂⟩ = ⟨(0, 3, 4), (1/√2)(1, 0, −1)⟩ = (1/√2)[0(1) + 3(0) + 4(−1)] = −4/√2 = −2√2
Now w₃ = u₃ − ⟨u₃, v₁⟩v₁ − ⟨u₃, v₂⟩v₂
= (0, 3, 4) − 2√2·(1/√2)(1, 0, 1) + 2√2·(1/√2)(1, 0, −1)
= (0, 3, 4) − (2, 0, 2) + (2, 0, −2) = (0, 3, 0)
‖w₃‖ = √(0² + 3² + 0²) = 3
v₃ = w₃/‖w₃‖ = (1/3)(0, 3, 0) = (0, 1, 0)
Hence the required orthonormal basis is
{(1/√2, 0, 1/√2), (1/√2, 0, −1/√2), (0, 1, 0)}
W.E. 15: Apply the Gram–Schmidt process to obtain an orthonormal basis for V₃(R) with respect to the standard inner product from the vectors (2, 0, 1), (3, −1, 5), (0, 4, 2).
Solution: Let {u₁, u₂, u₃} be the basis of the finite dimensional vector space V₃(R), where u₁ = (2, 0, 1), u₂ = (3, −1, 5), u₃ = (0, 4, 2).
Now ‖u₁‖ = √(2² + 0² + 1²) = √5
v₁ = u₁/‖u₁‖ = (1/√5)(2, 0, 1) = (2/√5, 0, 1/√5)
u₂ = (3, −1, 5):
⟨u₂, v₁⟩ = ⟨(3, −1, 5), (1/√5)(2, 0, 1)⟩ = (1/√5)[3(2) + (−1)(0) + 5(1)] = 11/√5
w₂ = u₂ − ⟨u₂, v₁⟩v₁, so w₂ = (3, −1, 5) − (11/√5)(1/√5)(2, 0, 1)
So w₂ = (3, −1, 5) − (11/5)(2, 0, 1) = (3 − 22/5, −1 − 0, 5 − 11/5)
So w₂ = (−7/5, −1, 14/5) = (1/5)(−7, −5, 14)
‖w₂‖ = (1/5)√((−7)² + (−5)² + 14²) = (1/5)√(49 + 25 + 196)
So ‖w₂‖ = (1/5)√270
v₂ = w₂/‖w₂‖ = (5/√270)·(1/5)(−7, −5, 14) = (1/√270)(−7, −5, 14)
u₃ = (0, 4, 2):
⟨u₃, v₁⟩ = ⟨(0, 4, 2), (1/√5)(2, 0, 1)⟩ = (1/√5)[0(2) + 4(0) + 2(1)] = 2/√5
⟨u₃, v₂⟩ = ⟨(0, 4, 2), (1/√270)(−7, −5, 14)⟩ = (1/√270)[0(−7) + 4(−5) + 2(14)]
= (1/√270)(−20 + 28) = 8/√270
w₃ = u₃ − ⟨u₃, v₁⟩v₁ − ⟨u₃, v₂⟩v₂
= (0, 4, 2) − (2/√5)(1/√5)(2, 0, 1) − (8/√270)(1/√270)(−7, −5, 14)
= (0, 4, 2) − (2/5)(2, 0, 1) − (8/270)(−7, −5, 14)
= (0 − 4/5 + 56/270, 4 + 40/270, 2 − 2/5 − 112/270)
So w₃ = (16/27)(−1, 7, 2), ‖w₃‖ = (16/27)√((−1)² + 7² + 2²)
‖w₃‖ = (16/27)√54
v₃ = w₃/‖w₃‖ = (27/(16√54))·(16/27)(−1, 7, 2)
So v₃ = (1/√54)(−1, 7, 2) = (1/(3√6))(−1, 7, 2)
Hence the required orthonormal basis is {v₁, v₂, v₃}, where v₁ = (1/√5)(2, 0, 1) and
v₂ = (1/√270)(−7, −5, 14) and v₃ = (1/(3√6))(−1, 7, 2)
W.E. 16: If S = {(1, 1, 1), (0, 1, 1), (0, 0, 1)} is a subset of the vector space R³ and V = R³, obtain an orthonormal basis B for span(S) and find the Fourier coefficients of the vector (1, 0, 1) relative to B.
Solution: Let u₁ = (1, 1, 1), u₂ = (0, 1, 1), u₃ = (0, 0, 1).
‖u₁‖ = √(1² + 1² + 1²) = √3, so v₁ = u₁/‖u₁‖ = (1/√3)(1, 1, 1)
⟨u₂, v₁⟩ = ⟨(0, 1, 1), (1/√3)(1, 1, 1)⟩ = (1/√3)[0(1) + 1(1) + 1(1)] = 2/√3
w₂ = u₂ − ⟨u₂, v₁⟩v₁
= (0, 1, 1) − (2/√3)·(1/√3)(1, 1, 1)
= (0, 1, 1) − (2/3)(1, 1, 1) = (0 − 2/3, 1 − 2/3, 1 − 2/3)
w₂ = (−2/3, 1/3, 1/3) = (1/3)(−2, 1, 1)
So ‖w₂‖ = (1/3)√((−2)² + 1² + 1²) = √6/3
v₂ = w₂/‖w₂‖ = (3/√6)·(1/3)(−2, 1, 1) = (1/√6)(−2, 1, 1)
⟨u₃, v₁⟩ = ⟨(0, 0, 1), (1/√3)(1, 1, 1)⟩ = (1/√3)[0(1) + 0(1) + 1(1)]
i.e. ⟨u₃, v₁⟩ = 1/√3
⟨u₃, v₂⟩ = ⟨(0, 0, 1), (1/√6)(−2, 1, 1)⟩ = (1/√6)[0(−2) + 0(1) + 1(1)]
i.e. ⟨u₃, v₂⟩ = 1/√6
w₃ = u₃ − ⟨u₃, v₁⟩v₁ − ⟨u₃, v₂⟩v₂
= (0, 0, 1) − (1/√3)·(1/√3)(1, 1, 1) − (1/√6)·(1/√6)(−2, 1, 1)
= (0, 0, 1) − (1/3)(1, 1, 1) − (1/6)(−2, 1, 1)
= (0 − 1/3 + 2/6, 0 − 1/3 − 1/6, 1 − 1/3 − 1/6) = (0, −1/2, 1/2)
w₃ = (1/2)(0, −1, 1)
‖w₃‖ = (1/2)√(0² + (−1)² + 1²) = √2/2 = 1/√2
v₃ = w₃/‖w₃‖ = √2·(1/2)(0, −1, 1) = (1/√2)(0, −1, 1)
Thus B = {v₁, v₂, v₃} with
v₁ = (1/√3)(1, 1, 1), v₂ = (1/√6)(−2, 1, 1), v₃ = (1/√2)(0, −1, 1)
To find the Fourier coefficients of the vector v = (1, 0, 1) relative to B = {v₁, v₂, v₃}:
⟨v, v₁⟩ = ⟨(1, 0, 1), (1/√3)(1, 1, 1)⟩ = (1/√3)[1(1) + 0(1) + 1(1)]
⟨v, v₁⟩ = 2/√3
⟨v, v₂⟩ = ⟨(1, 0, 1), (1/√6)(−2, 1, 1)⟩ = (1/√6)[1(−2) + 0(1) + 1(1)]
⟨v, v₂⟩ = −1/√6
⟨v, v₃⟩ = ⟨(1, 0, 1), (1/√2)(0, −1, 1)⟩ = (1/√2)[1(0) + 0(−1) + 1(1)]
⟨v, v₃⟩ = 1/√2
Hence the Fourier coefficients relative to the set B are (2/√3, −1/√6, 1/√2), i.e. (2√3/3, −√6/6, √2/2).
W.E. 17: If V = L(S) where S = {(1, i, 0), (1 − i, 2, 4i)}, find an orthonormal basis B of V and compute the Fourier coefficients of the vector v = (3 + i, 4i, −4) relative to B.
Solution: Let u₁ = (1, i, 0), u₂ = (1 − i, 2, 4i).
‖u₁‖² = ⟨u₁, u₁⟩ = 1(1) + i(−i) + 0(0) = 1 − i² + 0 = 1 + 1 = 2
‖u₁‖ = √2, so v₁ = u₁/‖u₁‖ = (1/√2)(1, i, 0)
⟨u₂, v₁⟩ = (1/√2)⟨(1 − i, 2, 4i), (1, i, 0)⟩
= (1/√2)[(1 − i)(1) + 2(−i) + 4i(0)]
= (1/√2)[1 − i − 2i] = (1/√2)(1 − 3i)
w₂ = u₂ − ⟨u₂, v₁⟩v₁ = (1 − i, 2, 4i) − (1/√2)(1 − 3i)·(1/√2)(1, i, 0)
= (1 − i, 2, 4i) − ((1 − 3i)/2)(1, i, 0)
= (1 − i − (1 − 3i)/2, 2 − (3 + i)/2, 4i)   [since (1 − 3i)i = 3 + i]
= ((1 + i)/2, (1 − i)/2, 4i) = (1/2)(1 + i, 1 − i, 8i)
‖w₂‖² = (1/4)[(1 + i)(1 − i) + (1 − i)(1 + i) + 8i(−8i)]   (where ā is the conjugate of a)
= (1/4)[2 + 2 + 64] = 68/4 = 17
So ‖w₂‖ = √17
Hence v₂ = w₂/‖w₂‖ = (1/(2√17))(1 + i, 1 − i, 8i)
So v₂ = ((1 + i)/(2√17), (1 − i)/(2√17), 4i/√17)
Thus B = {v₁, v₂}, where v₁ = (1/√2, i/√2, 0) and v₂ = ((1 + i)/(2√17), (1 − i)/(2√17), 4i/√17).
Now ⟨v, v₁⟩ = (1/√2)⟨(3 + i, 4i, −4), (1, i, 0)⟩
= (1/√2)[(3 + i)(1) + 4i(−i) + (−4)(0)]
⟨v, v₁⟩ = (1/√2)(3 + i + 4) = (1/√2)(7 + i)
⟨v, v₂⟩ = (1/(2√17))⟨(3 + i, 4i, −4), (1 + i, 1 − i, 8i)⟩
= (1/(2√17))[(3 + i)(1 − i) + 4i(1 + i) + (−4)(−8i)]
= (1/(2√17))[(4 − 2i) + (−4 + 4i) + 32i]
= (1/(2√17))(0 + 34i) = 17i/√17 = √17 i
Hence the Fourier coefficients of v relative to B are ⟨v, v₁⟩, ⟨v, v₂⟩, i.e. (1/√2)(7 + i) and √17 i.
14.13.1 Theorem (Parseval's Identity): If B = {u₁, u₂, …, uₙ} is an orthonormal basis of an inner product space V(F), then ⟨u, v⟩ = Σᵢ₌₁ⁿ ⟨u, uᵢ⟩⟨uᵢ, v⟩ for all u, v ∈ V.
Proof: B = {u₁, u₂, …, uₙ} is a basis of V. Let u, v ∈ V. Then there exist scalars a₁, a₂, …, aₙ, b₁, b₂, …, bₙ ∈ F so that
u = a₁u₁ + a₂u₂ + ⋯ + aₙuₙ = Σᵢ₌₁ⁿ aᵢuᵢ
v = b₁u₁ + b₂u₂ + ⋯ + bₙuₙ = Σⱼ₌₁ⁿ bⱼuⱼ
Now ⟨u, v⟩ = ⟨Σᵢ₌₁ⁿ aᵢuᵢ, Σⱼ₌₁ⁿ bⱼuⱼ⟩
= Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢb̄ⱼ⟨uᵢ, uⱼ⟩ = Σᵢ₌₁ⁿ aᵢb̄ᵢ ........... (1)
since ⟨uᵢ, uⱼ⟩ = 1 if i = j and 0 if i ≠ j.
But ⟨u, uᵢ⟩ = ⟨Σⱼ₌₁ⁿ aⱼuⱼ, uᵢ⟩ = aᵢ⟨uᵢ, uᵢ⟩ = aᵢ .......... (2)
and ⟨uᵢ, v⟩ = ⟨uᵢ, Σⱼ₌₁ⁿ bⱼuⱼ⟩ = b̄ᵢ⟨uᵢ, uᵢ⟩ = b̄ᵢ .......... (3), for the same reason.
Using (2) and (3) in (1),
⟨u, v⟩ = Σᵢ₌₁ⁿ ⟨u, uᵢ⟩⟨uᵢ, v⟩ for all u, v ∈ V
14.13.2 Corollary: If S = {u₁, u₂, …, uₙ} is a complete orthonormal set in an inner product space V(F) and if v ∈ V, then Σᵢ₌₁ⁿ |⟨v, uᵢ⟩|² = ‖v‖².
Proof: By Parseval's identity, ⟨u, v⟩ = Σᵢ₌₁ⁿ ⟨u, uᵢ⟩⟨uᵢ, v⟩ for all u, v ∈ V.
So, as v ∈ V, taking u = v:
⟨v, v⟩ = Σᵢ₌₁ⁿ ⟨v, uᵢ⟩⟨uᵢ, v⟩
= Σᵢ₌₁ⁿ ⟨v, uᵢ⟩ · conjugate of ⟨v, uᵢ⟩
So ‖v‖² = Σᵢ₌₁ⁿ |⟨v, uᵢ⟩|²  (since z z̄ = |z|²)
Thus Σᵢ₌₁ⁿ |⟨v, uᵢ⟩|² = ‖v‖²
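The corollary can be checked numerically for any orthonormal basis of R². A sketch using a rotated standard basis; the rotation angle and v are illustrative choices:

```python
import math

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

# orthonormal basis of R² (a rotation of the standard basis)
c, s = math.cos(0.3), math.sin(0.3)
B = [(c, s), (-s, c)]
v = (2.0, -1.0)

lhs = inner(v, v)                        # ‖v‖²
rhs = sum(inner(v, u) ** 2 for u in B)   # Σ |⟨v, uᵢ⟩|²
print(abs(lhs - rhs) < 1e-12)  # True
```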
14.13.3 Theorem:
Bessel's Inequality: Let V be an inner product space and let S = {u₁, u₂, …, u_m} be an orthonormal subset of V. Prove that for any v ∈ V we have
Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² ≤ ‖v‖².
Furthermore, the equality holds if and only if v is in the subspace generated by u₁, u₂, …, u_m.
Proof: Put w = v − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ. Now
‖w‖² = ⟨w, w⟩ = ⟨v − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ, v − Σⱼ₌₁ᵐ ⟨v, uⱼ⟩uⱼ⟩
= ⟨v, v⟩ − Σⱼ₌₁ᵐ conj⟨v, uⱼ⟩⟨v, uⱼ⟩ − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩⟨uᵢ, v⟩ + Σᵢ₌₁ᵐ Σⱼ₌₁ᵐ ⟨v, uᵢ⟩ conj⟨v, uⱼ⟩⟨uᵢ, uⱼ⟩
= ‖v‖² − Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² − Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² + Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|²  (since ⟨uᵢ, uⱼ⟩ = 1 if i = j and 0 if i ≠ j)
i.e. ‖w‖² = ‖v‖² − Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² ............ (1)
Now ‖w‖² ≥ 0 ⟹ ‖v‖² − Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² ≥ 0
⟹ Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² ≤ ‖v‖²
To show that equality holds if and only if v is in the subspace spanned by u₁, u₂, …, u_m:
Case i) Let the equality hold, i.e. Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² = ‖v‖²; then from (1),
‖w‖² = ‖v‖² − Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² = 0
⟹ v − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ = O
⟹ v = Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ,
which shows that v is in the subspace spanned by u₁, u₂, …, u_m.
Case ii) Conversely, let v be in the subspace spanned by u₁, u₂, …, u_m, so that
v = Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ ........... (2)
Then w = v − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ = O, using (2)
So ‖w‖² = 0
⟹ ‖v‖² − Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² = 0
⟹ Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² = ‖v‖²
14.13.4 Corollary: Let {u₁, u₂, …, u_m} be an orthogonal set of non-zero vectors in an inner product space V. Then for any v ∈ V,
Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|²/‖uᵢ‖² ≤ ‖v‖²
Proof: Let B = {v₁, v₂, …, v_m} where vᵢ = uᵢ/‖uᵢ‖ (1 ≤ i ≤ m). Then ‖vᵢ‖ = 1, so the set B is an orthonormal set. Hence by Bessel's inequality we get
Σᵢ₌₁ᵐ |⟨v, vᵢ⟩|² ≤ ‖v‖² ....... (1)
Also ⟨v, vᵢ⟩ = ⟨v, uᵢ/‖uᵢ‖⟩ = (1/‖uᵢ‖)⟨v, uᵢ⟩
So |⟨v, vᵢ⟩|² = |⟨v, uᵢ⟩|²/‖uᵢ‖² .......... (2)
Using (2) in (1),
Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|²/‖uᵢ‖² ≤ ‖v‖²
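In Bessel's inequality, strict inequality appears as soon as v has a component outside the span of S. A sketch with an orthonormal set that does not span R³; S and v are illustrative choices:

```python
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

# an orthonormal subset of R³ that does NOT span the space
S = [(1, 0, 0), (0, 1, 0)]
v = (1.0, 2.0, 3.0)

bessel = sum(inner(v, u) ** 2 for u in S)   # Σ |⟨v, uᵢ⟩|² = 1 + 4 = 5
print(bessel <= inner(v, v))                # True: 5 ≤ ‖v‖² = 14
```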
14.13.5 Theorem: If V is a finite dimensional inner product space, and if {u₁, u₂, …, u_m} is an orthonormal set in V such that Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|² = ‖v‖² for every v ∈ V, then {u₁, u₂, …, u_m} must be a basis of V.
Proof: Let v be any vector in V. Consider
w = v − Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ ............. (1)
We have ‖w‖² = ⟨w, w⟩
= ‖v‖² − Σᵢ₌₁ᵐ |⟨v, uᵢ⟩|²
= 0, by the given condition.
So w = O ⟹ v = Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ
Thus every vector v in V can be expressed as a linear combination of the vectors in the set S = {u₁, u₂, …, u_m}, i.e. L(S) = V. As S is an orthonormal set, S is linearly independent.
Hence S is a basis of V.
Definition: Let S be a non-empty subset of an inner product space V(F). The set S⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ S} is called the orthogonal complement of S, and the symbol S⊥ is usually read as "S perpendicular".
Note i) S⊥ ⊆ V
ii) u ∈ S⊥, v ∈ S ⟹ ⟨u, v⟩ = 0
iii) ⟨O, u⟩ = 0 for every u ∈ S, so O ∈ S⊥. Hence S⊥ ≠ ∅.
14.14.2 Theorem: If S is any non-empty subset of an inner product space V(F), then S⊥ is a subspace of V(F).
Proof: By definition S⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ S}. O ∈ S⊥, so S⊥ ≠ ∅.
For u₁, u₂ ∈ S⊥, a, b ∈ F and any v ∈ S,
⟨au₁ + bu₂, v⟩ = a⟨u₁, v⟩ + b⟨u₂, v⟩ = a·0 + b·0 = 0
So au₁ + bu₂ ∈ S⊥; hence S⊥ is a subspace of V(F).
14.14.3 Theorem: If V(F) is an inner product space and O is the zero vector in V, then show that {O}⊥ = V.
Proof: Every u ∈ V satisfies ⟨u, O⟩ = 0, so u ∈ {O}⊥; hence V ⊆ {O}⊥ ......... (1)
Also {O}⊥ ⊆ V ......... (2)
From (1) and (2), {O}⊥ = V.
14.14.4 Theorem:
If V(F) is an inner product space and O is the zero vector of V, then show that V⊥ = {O}.
Proof: Let u ∈ V⊥. Then ⟨u, v⟩ = 0 for every v ∈ V; in particular, taking v = u,
‖u‖² = ⟨u, u⟩ = 0 ⟹ u = O. Hence V⊥ = {O}.
14.14.5 Theorem: If S is a subspace of an inner product space V(F), then S ∩ S⊥ = {O}.
Proof: Let u ∈ S ∩ S⊥. Then u ∈ S and u ∈ S⊥, so
⟨u, u⟩ = 0 ⟹ u = O
So S ∩ S⊥ = {O}.
14.14.6 Theorem:
If S₁, S₂ are two subsets of an inner product space V(F) with S₁ ⊆ S₂, then show that S₂⊥ ⊆ S₁⊥.
Proof: Let u ∈ S₂⊥. Then ⟨u, v⟩ = 0 for all v ∈ S₂; since S₁ ⊆ S₂, in particular ⟨u, v⟩ = 0 for all v ∈ S₁, so u ∈ S₁⊥. So S₂⊥ ⊆ S₁⊥.
14.14.7 Theorem: If S is a non-empty subset of an inner product space V(F), then S⊥ ⊆ (span S)⊥ (in fact S⊥ = (span S)⊥, since S ⊆ span S gives the reverse inclusion by 14.14.6).
Proof: Let u ∈ S⊥ and v ∈ span(S). Then there exist scalars a₁, a₂, …, aₙ in F such that
v = a₁w₁ + a₂w₂ + ⋯ + aₙwₙ, where w₁, w₂, …, wₙ ∈ S
So ⟨u, v⟩ = ā₁⟨u, w₁⟩ + ā₂⟨u, w₂⟩ + ⋯ + āₙ⟨u, wₙ⟩ = 0, since each ⟨u, wᵢ⟩ = 0
So u ∈ (span S)⊥
Thus u ∈ S⊥ ⟹ u ∈ (span S)⊥
14.14.8 Theorem: If B = {u₁, u₂, …, u_m} is an orthonormal subset of the inner product space V(F), then for each v ∈ V,
w = v − Σⱼ₌₁ᵐ ⟨v, uⱼ⟩uⱼ is a vector of B⊥.
Proof: We have proved in theorem 14.8.4 that w is orthogonal to each of u₁, u₂, …, u_m. By definition of orthogonal complement, w ∈ B⊥.
Hence the theorem.
14.14.9 Orthogonal Complement of an orthogonal complement:
The orthogonal complement of S⊥ is denoted by (S⊥)⊥ = S⊥⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ S⊥}.
14.14.10 Theorem: If S is a subset of an inner product space V(F), then S ⊆ S⊥⊥.
Solution: V(F) is the given inner product space and S is a subset of V; then S⊥, S⊥⊥ are subspaces of V.
Let u ∈ S. For every v ∈ S⊥ we have ⟨v, u⟩ = 0, and hence
⟨u, v⟩ = conjugate of ⟨v, u⟩ = 0, for v ∈ S⊥ and u ∈ V
So by definition u ∈ (S⊥)⊥ = S⊥⊥
Thus u ∈ S ⟹ u ∈ S⊥⊥
So S ⊆ S⊥⊥.
W.E. 18: If V(F) is an inner product space and S is any subset of V, then show that
i) S⊥ = (L(S))⊥
ii) L(S) ⊆ S⊥⊥
iv) S⊥ = S⊥⊥⊥
i) To show S⊥ = (L(S))⊥:
As S is a subset of L(S), (L(S))⊥ ⊆ S⊥ ........... (1)
To show S⊥ ⊆ (L(S))⊥: let u ∈ S⊥ and v ∈ L(S); then v = Σᵢ₌₁ⁿ aᵢuᵢ with uᵢ ∈ S.
For u ∈ S⊥ we have ⟨u, v⟩ = ⟨u, Σᵢ₌₁ⁿ aᵢuᵢ⟩
= Σᵢ₌₁ⁿ āᵢ⟨u, uᵢ⟩ = 0, by definition of S⊥
So u ∈ (L(S))⊥, i.e. S⊥ ⊆ (L(S))⊥ ........... (2)
From (1) and (2), S⊥ = (L(S))⊥.
ii) Let u ∈ L(S). For any v ∈ S⊥ = (L(S))⊥ we have ⟨u, v⟩ = 0, so u ∈ (S⊥)⊥, i.e. u ∈ S⊥⊥.
Thus u ∈ L(S) ⟹ u ∈ S⊥⊥. So L(S) ⊆ S⊥⊥.
iv) S ⊆ S⊥⊥ ⟹ (S⊥⊥)⊥ ⊆ S⊥, i.e. S⊥⊥⊥ ⊆ S⊥ ........... (1)
Also, applying S ⊆ S⊥⊥ to the set S⊥: S⊥ ⊆ (S⊥)⊥⊥ = S⊥⊥⊥ ........... (2)
From (1) and (2), S⊥ = S⊥⊥⊥.
14.14.11 Theorem: Let W be a finite dimensional subspace of an inner product space V. Let v ∈ V. Then there exist unique vectors u ∈ W and w ∈ W⊥ such that v = u + w.
Proof: Let B = {u₁, u₂, …, uₙ} be an orthonormal basis of W. Then B is linearly independent and
L(B) = W. Let u be defined as u = Σᵢ₌₁ⁿ ⟨v, uᵢ⟩uᵢ and w = v − u.
Now u ∈ W = L(B) and v = u + w; we have to prove that w ∈ W⊥.
As B is an orthonormal basis of the vector space W, w = v − Σᵢ₌₁ⁿ ⟨v, uᵢ⟩uᵢ is orthogonal to each vector of B (by theorem 14.8.4), hence to every vector of W = L(B); so w ∈ W⊥.
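The decomposition v = u + w of theorem 14.14.11 is easy to compute once an orthonormal basis of W is at hand. A sketch with W taken as the xy-plane in R³; the subspace and v are illustrative choices:

```python
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

# orthonormal basis of the subspace W = xy-plane in R³
B = [(1, 0, 0), (0, 1, 0)]
v = (3.0, -2.0, 5.0)

u = [sum(inner(v, b) * b[k] for b in B) for k in range(3)]  # projection of v onto W
w = [vk - uk for vk, uk in zip(v, u)]                       # component in W⊥

print(u)                                 # [3.0, -2.0, 0.0]
print(all(inner(w, b) == 0 for b in B))  # True: w ⊥ W
```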
Consider the problem in R³ of finding the distance from a point P to a plane W.
If we let v be the vector determined by O and P, we may restate the problem as follows: find the vector u ∈ W closest to v; the required distance is then ‖w‖ = ‖v − u‖.
[Figure: the plane W, the point P, the vector v = OP, its orthogonal projection u on W (with foot Q), and the perpendicular w = v − u meeting W at 90°.]
i.e. ‖v − x‖ ≥ ‖v − u‖ for every x ∈ W.
14.14.13 Orthogonal Projection: Let W be a subspace of the finite dimensional inner product space V. For v ∈ V there exist unique vectors u ∈ W, w ∈ W⊥ such that v = u + w.
The vector u ∈ W, that is u = Σᵢ₌₁ⁿ ⟨v, uᵢ⟩uᵢ where {u₁, u₂, …, uₙ} is an orthonormal basis of W, is called the orthogonal projection of v on W.
14.14.14 Theorem: Let S = {v₁, v₂, …, v_k} be an orthonormal set in an n-dimensional inner product space V. Then show that S can be extended to an orthonormal basis S₁ = {v₁, v₂, …, v_k, v_{k+1}, …, vₙ} for V.
14.14.15 Corollary 1: If S = {v₁, v₂, …, v_k} is an orthonormal set in an n-dimensional inner product space V and W = L(S), then W⊥ is spanned by S₂ = {v_{k+1}, …, vₙ}, where S₁ = {v₁, …, v_k, v_{k+1}, …, vₙ} is an orthonormal basis of V extending S.
Proof: Since S ⊆ W, for each u ∈ W⊥, expanding u in the orthonormal basis S₁,
u = ⟨u, v₁⟩v₁ + ⟨u, v₂⟩v₂ + ⋯ + ⟨u, v_k⟩v_k + Σᵢ₌ₖ₊₁ⁿ ⟨u, vᵢ⟩vᵢ
= O + O + ⋯ + O + Σᵢ₌ₖ₊₁ⁿ ⟨u, vᵢ⟩vᵢ  (since u ⊥ vᵢ for i ≤ k)
= Σᵢ₌ₖ₊₁ⁿ ⟨u, vᵢ⟩vᵢ
So u ∈ W⊥ ⟹ u ∈ L(S₂), i.e. W⊥ ⊆ L(S₂) ........... (2)
Conversely each vᵢ (i > k) is orthogonal to W, so L(S₂) ⊆ W⊥. Hence W⊥ = L(S₂).
Corollary 2: An orthonormal basis S of W can be extended to an orthonormal basis S₁ = {v₁, v₂, …, v_k, v_{k+1}, …, vₙ} of V, and by the above Corollary 1, W⊥ = L({v_{k+1}, …, vₙ}).
So dim W⊥ = n − k, and dim V = n = k + (n − k) = dim(W) + dim(W⊥)
ii) (W⊥)⊥ = W
Proof: W, being a subspace of a finite dimensional vector space V(F) of dimension n, is also finite dimensional, say of dimension k.
Thus we can find B₁ = {u₁, u₂, …, u_k}, an orthonormal set in W which is also a basis of W.
For any v ∈ V, put w = v − Σᵢ₌₁ᵏ ⟨v, uᵢ⟩uᵢ ........... (2). Now for 1 ≤ j ≤ k,
⟨w, uⱼ⟩ = ⟨v − Σᵢ₌₁ᵏ ⟨v, uᵢ⟩uᵢ, uⱼ⟩
= ⟨v, uⱼ⟩ − Σᵢ₌₁ᵏ ⟨v, uᵢ⟩⟨uᵢ, uⱼ⟩
= ⟨v, uⱼ⟩ − ⟨v, uⱼ⟩ = 0,
since ⟨uᵢ, uⱼ⟩ = 1 if i = j and 0 if i ≠ j.
This shows that w is orthogonal to each of the vectors u₁, u₂, …, u_k, i.e. orthogonal to the subspace W spanned by these vectors, and hence it belongs to W⊥.
Hence from (2), for each v ∈ V we have
v = Σᵢ₌₁ᵏ ⟨v, uᵢ⟩uᵢ + w
= an element of W + an element of W⊥
So V = W + W⊥ .......... (3); and since W ∩ W⊥ = {O}, V = W ⊕ W⊥, so that
dim V = dim(W) + dim(W⊥) = k + (n − k) = n
ii) To prove that (W⊥)⊥ = W when W is a subspace of an inner product space of finite dimension:
Proof: By definition (W⊥)⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ W⊥}
Let u ∈ W; then ⟨u, v⟩ = 0 for every v ∈ W⊥
Thus u ∈ W ⟹ u ∈ (W⊥)⊥, so W ⊆ (W⊥)⊥
Also, by i), dim((W⊥)⊥) = n − dim(W⊥) = n − (n − k) = k = dim W
Thus (W⊥)⊥ = W.
Note: If W is a subspace of any finite dimensional inner product space V(F), then V = W ⊕ W⊥
Let w ∈ (W₁ + W₂)⊥; then
⟨w, u⟩ = 0 for all u ∈ W₁
and ⟨w, v⟩ = 0 for all v ∈ W₂
So w ∈ W₁⊥ ∩ W₂⊥, and conversely; hence (W₁ + W₂)⊥ = W₁⊥ ∩ W₂⊥.
As W₁, W₂ are subspaces of V, W₁⊥, W₂⊥ are also subspaces of V. Hence, replacing W₁ and W₂ by W₁⊥ and W₂⊥ in the above,
(W₁⊥ + W₂⊥)⊥ = W₁⊥⊥ ∩ W₂⊥⊥ = W₁ ∩ W₂, since W₁⊥⊥ = W₁ and W₂⊥⊥ = W₂;
taking orthogonal complements on both sides, (W₁ ∩ W₂)⊥ = W₁⊥ + W₂⊥.
= 2(1, 0, 0) + 3(0, 1, 0) = (2, 3, 0)
W.E. 21 :
Let $V = P_3(R)$ be the inner product space of polynomials of degree at most 3, continuous on $[-1, 1]$, with inner product $\langle f, g \rangle = \int_{-1}^{1} f(t) g(t)\,dt$. Apply the Gram-Schmidt process to $\{u_1, u_2, u_3\} = \{1, x, x^2\}$ to obtain an orthonormal basis of the subspace spanned by these polynomials.

Take $u_1 = 1$. Then $\|u_1\|^2 = \langle u_1, u_1 \rangle = \int_{-1}^{1} (1)(1)\,dt = 2$, so $\|u_1\| = \sqrt 2$.
Rings and Linear Algebra 14.53 Orthogonalization
$v_1 = \dfrac{u_1}{\|u_1\|} = \dfrac{1}{\sqrt 2}$

With $u_2 = x$:
$\langle u_2, v_1 \rangle = \int_{-1}^{1} t \cdot \frac{1}{\sqrt 2}\,dt = \frac{1}{\sqrt 2}\left[\frac{t^2}{2}\right]_{-1}^{1} = \frac{1}{2\sqrt 2}(1 - 1) = 0$

$w_2 = u_2 - \langle u_2, v_1 \rangle v_1 = x - 0 = x$

$\|w_2\|^2 = \langle w_2, w_2 \rangle = \int_{-1}^{1} t^2\,dt = 2\int_0^1 t^2\,dt = 2\left[\frac{t^3}{3}\right]_0^1 = \frac{2}{3}$, so $\|w_2\| = \sqrt{\frac{2}{3}}$

$v_2 = \dfrac{w_2}{\|w_2\|} = \sqrt{\dfrac{3}{2}}\, x$

With $u_3 = x^2$:
$\langle u_3, v_1 \rangle = \int_{-1}^{1} t^2 \cdot \frac{1}{\sqrt 2}\,dt = \frac{1}{\sqrt 2}\cdot\frac{2}{3} = \frac{\sqrt 2}{3}$

$\langle u_3, v_2 \rangle = \sqrt{\frac{3}{2}}\int_{-1}^{1} t^3\,dt = \sqrt{\frac{3}{2}}\left[\frac{t^4}{4}\right]_{-1}^{1} = 0$, i.e. $\langle u_3, v_2 \rangle = 0$

$w_3 = u_3 - \langle u_3, v_2 \rangle v_2 - \langle u_3, v_1 \rangle v_1 = x^2 - 0 - \frac{\sqrt 2}{3}\cdot\frac{1}{\sqrt 2} = x^2 - \frac{1}{3}$

$\|w_3\|^2 = \langle w_3, w_3 \rangle = \int_{-1}^{1}\left(t^2 - \tfrac13\right)^2 dt = \int_{-1}^{1}\left(t^4 - \tfrac23 t^2 + \tfrac19\right) dt = 2\left[\frac{t^5}{5} - \frac{2}{3}\cdot\frac{t^3}{3} + \frac{t}{9}\right]_0^1 = 2\left(\frac15 - \frac29 + \frac19\right) = \frac{8}{45}$

$\|w_3\| = \sqrt{\dfrac{8}{45}}$

$v_3 = \dfrac{w_3}{\|w_3\|} = \sqrt{\dfrac{45}{8}}\left(x^2 - \dfrac13\right) = \sqrt{\dfrac{5}{8}}\,(3x^2 - 1)$

Hence the orthonormal basis of the subspace is $\{v_1, v_2, v_3\} = \left\{\dfrac{1}{\sqrt 2},\ \sqrt{\dfrac{3}{2}}\,x,\ \sqrt{\dfrac{5}{8}}(3x^2 - 1)\right\}$
Next, for $f(x) = 1 + 2x + 3x^2$, the Fourier coefficients relative to $\{v_1, v_2, v_3\}$ are:

$\langle f, v_1 \rangle = \int_{-1}^{1} (1 + 2t + 3t^2)\cdot\frac{1}{\sqrt 2}\,dt = \frac{1}{\sqrt 2}\left[\int_{-1}^{1}(1 + 3t^2)\,dt + \int_{-1}^{1} 2t\,dt\right] = \frac{1}{\sqrt 2}\left(2\left[t + t^3\right]_0^1 + 0\right) = \frac{4}{\sqrt 2} = 2\sqrt 2$

$\langle f, v_2 \rangle = \int_{-1}^{1} (1 + 2t + 3t^2)\,\sqrt{\tfrac32}\,t\,dt = \sqrt{\tfrac32}\int_{-1}^{1}(t + 2t^2 + 3t^3)\,dt = \sqrt{\tfrac32}\cdot 2\left[\frac{2t^3}{3}\right]_0^1 = \sqrt{\tfrac32}\cdot\frac43 = \frac{2\sqrt 6}{3}$
Rings and Linear Algebra 14.55 Orthogonalization
$\langle f, v_3 \rangle = \int_{-1}^{1} (1 + 2t + 3t^2)\,\sqrt{\tfrac58}\,(3t^2 - 1)\,dt = \sqrt{\tfrac58}\cdot 2\int_0^1 (9t^4 - 1)\,dt = \sqrt{\tfrac58}\cdot 2\left[\frac{9t^5}{5} - t\right]_0^1 = \sqrt{\tfrac58}\cdot\frac25(9 - 5)$

So $\langle f, v_3 \rangle = \dfrac{2\sqrt{10}}{5}$

So $f(x) = \langle f, v_1 \rangle v_1 + \langle f, v_2 \rangle v_2 + \langle f, v_3 \rangle v_3$
$= 2\sqrt 2\cdot\frac{1}{\sqrt 2} + \frac{2\sqrt 6}{3}\,\sqrt{\tfrac32}\,x + \frac{2\sqrt{10}}{5}\,\sqrt{\tfrac58}\,(3x^2 - 1)$
$= 2 + 2x + (3x^2 - 1) = 1 + 2x + 3x^2$, as expected.
Solution: The orthogonal projection of $f(x) = x^3$ on W is $\sum_{i=1}^{3} \langle f, v_i \rangle v_i = \langle f, v_1 \rangle v_1 + \langle f, v_2 \rangle v_2 + \langle f, v_3 \rangle v_3$.

$\langle f, v_1 \rangle = \int_{-1}^{1} t^3\cdot\frac{1}{\sqrt 2}\,dt = 0$

$\langle f, v_2 \rangle = \int_{-1}^{1} t^3\,\sqrt{\tfrac32}\,t\,dt = \sqrt{\tfrac32}\cdot 2\int_0^1 t^4\,dt = \sqrt{\tfrac32}\cdot 2\left[\frac{t^5}{5}\right]_0^1 = \sqrt{\tfrac32}\cdot\frac25$

So $\langle f, v_2 \rangle = \dfrac{\sqrt 6}{5}$

$\langle f, v_3 \rangle = \int_{-1}^{1} t^3\,\sqrt{\tfrac58}\,(3t^2 - 1)\,dt = \sqrt{\tfrac58}\int_{-1}^{1}(3t^5 - t^3)\,dt = \sqrt{\tfrac58}\left[\frac{3t^6}{6} - \frac{t^4}{4}\right]_{-1}^{1}$

So $\langle f, v_3 \rangle = 0$

Hence the projection is $0\cdot v_1 + \dfrac{\sqrt 6}{5}\,\sqrt{\dfrac32}\,x + 0\cdot\sqrt{\dfrac58}\,(3x^2 - 1) = \dfrac{3}{5}\,x$
By definition $S^{\perp} = \{ v \in C^3 : \langle v, u \rangle = 0\ \forall\, u \in S \}$.
Let $v = (a, b, c) \in C^3$, where a, b, c are scalars belonging to C.
$v \in S^{\perp} \Leftrightarrow \langle v, u_1 \rangle = 0$ ....... (1) and $\langle v, u_2 \rangle = 0$ ....... (2)
These are two linear equations in a, b, c. Let c = 1; then solving (1) and (2) gives $a = i$ and $b = \dfrac{1-i}{2}$.
So $v = (a, b, c) = \left(i, \dfrac{1-i}{2}, 1\right)$
So $S^{\perp} = \operatorname{span}\left\{\left(i, \dfrac{1-i}{2}, 1\right)\right\}$
14.16 Summary:
In this lesson we discussed orthogonality of vectors, orthonormality, properties of orthogonal and orthonormal sets, the Gram-Schmidt orthogonalization process to obtain orthonormal bases, Parseval's identity, Bessel's inequality, orthogonal complements, closest vectors and projections.
2. If u, v are two vectors in a real inner product space V(F) such that $\|u\| = \|v\|$, then show that $u + v$ is orthogonal to $u - v$.
3. Show that the vectors ( 1, 0), (0, 1) in R 2 form an orthonormal basis over R under usual inner
product on R 2 .
4. Prove that every orthogonal set of non zero vectors in an inner product space V ( F ) is linearly
independent.
5. State and prove Parseval's identity.
6. Apply Gram -Schmidt process to obtain an orthonormal basis for V3 ( R ) with the standard inner
product to the vectors.
i) (2, 1, 3), (1, 2, 3), (1, 1, 1)  Ans: $\frac{1}{\sqrt{14}}(2,1,3),\ \frac{1}{\sqrt{42}}(-4,5,1),\ \frac{1}{\sqrt 3}(1,1,-1)$

ii) (1, -1, 0), (2, -1, 2), (1, -1, 2)  Ans: $\frac{1}{\sqrt 2}(1,-1,0),\ \frac{1}{3\sqrt 2}(1,1,4),\ \frac{1}{3}(-2,-2,1)$
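Answer i) can be verified with a short NumPy sketch of classical Gram-Schmidt (the function name is ours):

```python
import numpy as np

def orthonormalize(vectors):
    # Classical Gram-Schmidt followed by normalisation
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

v1, v2, v3 = orthonormalize([np.array([2.0, 1, 3]),
                             np.array([1.0, 2, 3]),
                             np.array([1.0, 1, 1])])
print(v1, v2, v3)
```

The three returned vectors match the stated answer (up to the usual sign convention of Gram-Schmidt).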
14.19 Exercise:
1. Apply the Gram-Schmidt process to obtain an orthonormal basis for $V_3(R)$ with the standard inner product to the vectors

i) (2, 1, 3), (1, 2, 3), (1, 1, 1)
Ans: $\frac{1}{\sqrt{14}}(2,1,3),\ \frac{1}{\sqrt{42}}(-4,5,1),\ \frac{1}{\sqrt 3}(1,1,-1)$

ii) Ans: $\frac{1}{\sqrt 2}(1,0,1),\ \frac{1}{\sqrt 2}(1,0,-1),\ (0,1,0)$

iii) (1, -1, 0), (2, -1, 2), (1, -1, 2)
Ans: $\frac{1}{\sqrt 2}(1,-1,0),\ \frac{1}{3\sqrt 2}(1,1,4),\ \frac{1}{3}(-2,-2,1)$
2. In each part apply the Gram-Schmidt process to the given subset S of the inner product space V to obtain an orthogonal basis for span(S). Then normalise the vectors in this basis to obtain an orthonormal basis B for span(S), and compute the Fourier coefficients of the given vector relative to B.
i). v R ; S (2, 1, 2, 4), (2,1, 5,5), (1,3, 7,11) and v ( 11,8, 4,18) .
4
1 1 1
Ans: (2, 1, 2, 4), (4, 2, 3,1), (3, 4,9, 7)
5 30 155
ii) v C ; S (4,3 2i, i,1 4i),( 1 5i,5 4i, 3 5i, 7 2i ),( 27 i, 7 6i, 15 25i, 7 6i )
4
1 1 1
Ans: (4,3 2i, i,1 4i ), (3 i, 5i, 2 4i, 2 i), (17 i, 9 8i, 18 6i, 9 8i)
47 60 1160
1 1 3 3
Ans : 1, 2 3( x ),6 5( x x , ,
2
,0
2 6 2 6
3 5 1 9 7 17 1 27
iv) V M 22 ( R), S , , and A
1 1 5 1 2 6 4 8
1 3 5 1 4 4 1 9 3
Ans: , , 6 6
6 1 1 6 2 6 2 9 2
24, 6 2, 9 2
3. In each of the following parts find the orthogonal projection of the given vector on the given subspace W of the inner product space V.
1 2 6
17 10 4
Ans :
29
1
17
Ans : 14
40
4. Find the distance from the vector u = (2, 1, 3) to the subspace $W = \{(u_1, u_2, u_3) : u_1 + 3u_2 - 2u_3 = 0\}$ of the vector space $R^3$.
Ans: $\dfrac{1}{\sqrt{14}}$
5. If W is the subspace of the inner product space $V_3(R)$ spanned by $B_1 = \{(1, 0, 1), (1, 2, 2)\}$, find a basis of the orthogonal complement $W^{\perp}$ of W.
6. If $W = L\{(1, 2, 3, 2), (2, 4, 5, 1)\}$ is the subspace of $R^4(R)$, find a basis of the orthogonal complement $W^{\perp}$.
Ans: $(2, -1, 0, 0),\ \left(0, \dfrac{7}{2}, -3, 1\right)$
7. If $V = L(S)$ with inner product $\langle f, g \rangle = \int_0^{\pi} f(t) g(t)\,dt$ and $S = \{\sin t, \cos t, 1, t\}$, find an orthogonal basis of V.
Ans: $\sin t,\ \cos t,\ 1 - \dfrac{4}{\pi}\sin t,\ t - \dfrac{\pi}{2} + \dfrac{4}{\pi}\cos t$
- A. Mallikharjana Sarma
Rings and Linear Algebra 15.1 Linear Operators
LESSON - 15
LINEAR OPERATORS
15.1 Objective of the Lesson:
$A^* = [b_{ij}]_{n \times n}$ where $b_{ij} = \overline{a_{ji}}$, i.e. $A^*$ is the transpose of the matrix formed with the complex conjugates of the elements of A.
For a linear operator T on an inner product space V, we now define a related linear operator on V, called the adjoint of T, whose matrix representation with respect to any orthonormal basis B for V is $[T]_B^*$. The analogy between conjugate complex numbers and the adjoint of a linear operator will become apparent.
As V is an inner product space, in this chapter we study the conditions which guarantee that V has an orthonormal basis.
15.3 Introduction
15.12 Exercise
15.20 Summary
15.23 Exercise
15.3 Introduction:
Here we shall consider linear functionals defined on an inner product space V(F). Since an inner product space is also a vector space, all concepts of linear functionals on vector spaces are also applicable to inner product spaces. So we give some basic definitions that are useful in inner product spaces.
Or $f(au + v) = a f(u) + f(v)\ \forall\, u, v \in V$ and $a \in F$.
2. Inner Product: An inner product on V is a function that assigns to every ordered pair of vectors u, v in V a scalar $\langle u, v \rangle$ in F (R or C), linear in the first argument:
$\langle a u_1 + b u_2, v \rangle = a\langle u_1, v \rangle + b\langle u_2, v \rangle$ for $u_1, u_2 \in V$ and $a, b \in F$.
Proof: Let $B = \{u_1, u_2, \ldots, u_n\}$ be an orthonormal basis for V, and let f be a linear transformation from V to F.

Let $v = \sum_{j=1}^{n} \overline{f(u_j)}\, u_j$ ......... (1)

Define $g(u) = \langle u, v \rangle\ \forall\, u \in V$ .......... (2)

To show that g is a linear functional on V: let $a, b \in F$ and $w_1, w_2 \in V$. We have
$g(a w_1 + b w_2) = \langle a w_1 + b w_2, v \rangle = a\langle w_1, v \rangle + b\langle w_2, v \rangle = a\,g(w_1) + b\,g(w_2)$

Also for each basis vector $u_k$,
$g(u_k) = \left\langle u_k, \sum_{j=1}^{n} \overline{f(u_j)}\, u_j \right\rangle = \sum_{j=1}^{n} f(u_j)\,\langle u_k, u_j \rangle = f(u_k)$,
since $\langle u_k, u_j \rangle = 1$ if $j = k$ and $0$ if $j \ne k$.

So f and g agree on a basis, hence on all of V. In other words there exists a vector $v \in V$ corresponding to the linear functional f on V:
$f(u) = \langle u, v \rangle\ \forall\, u \in V$

Uniqueness of v:
Suppose there exists w in V such that
$f(u) = \langle u, w \rangle\ \forall\, u \in V$
Thus $\langle u, v \rangle = \langle u, w \rangle\ \forall\, u \in V$
$\Rightarrow \langle u, v \rangle - \langle u, w \rangle = 0\ \forall\, u \in V$
$\Rightarrow \langle u, v - w \rangle = 0$ for all $u \in V$
$\Rightarrow v - w = 0$
$\Rightarrow v = w$
So v is unique. Hence the theorem.
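On $C^n$ with $\langle u, v \rangle = \sum_i u_i \overline{v_i}$, formula (1) says the representing vector is $v = \sum_j \overline{f(e_j)}\, e_j$. A NumPy sketch (the functional `f` below is an arbitrary example of ours):

```python
import numpy as np

# A sample linear functional on C^3
f = lambda u: u[0] - 2j * u[1] + (1 + 1j) * u[2]

e = np.eye(3, dtype=complex)
v = np.array([np.conj(f(e[j])) for j in range(3)])   # v_j = conj(f(e_j))

u = np.array([2 - 1j, 0.5, 3 + 2j])
lhs = f(u)
rhs = np.dot(u, np.conj(v))   # <u, v>
print(lhs, rhs)
```

By linearity of f, the two printed values agree for every choice of u.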
15.5.2 Theorem: For any linear operator T on a finite dimensional inner product space V, there exists a unique linear operator $T^*$ on V such that
$\langle T(u), v \rangle = \langle u, T^*(v) \rangle\ \forall\, u, v \in V$
Proof: Let T be a linear operator on a finite dimensional inner product space V over the field F, and let v be a vector in V. Let f be the function from V into F defined by $f(u) = \langle T(u), v \rangle\ \forall\, u \in V$ ..... (1)
Then f is linear: for $a, b \in F$ and $u_1, u_2 \in V$, $f(au_1 + bu_2) = \langle T(au_1 + bu_2), v \rangle = a\,f(u_1) + b\,f(u_2)$. So by the previous theorem there is a unique vector $v' \in V$ such that $f(u) = \langle u, v' \rangle$ for all $u \in V$ ..... (2)
Hence from (1) and (2) we observe that if T is a linear operator on V, then corresponding to every v in V there is a uniquely determined vector $v'$ in V such that $\langle T(u), v \rangle = \langle u, v' \rangle$ for all $u \in V$. Let $T^*$ be the rule by which we associate v with $v'$, i.e. let $T^*(v) = v'$.
To show $T^*$ is linear: for $a, b \in F$ and $v_1, v_2 \in V$,
$\langle T(u), a v_1 + b v_2 \rangle = \bar a\,\langle T(u), v_1 \rangle + \bar b\,\langle T(u), v_2 \rangle$
$= \bar a\,\langle u, T^*(v_1) \rangle + \bar b\,\langle u, T^*(v_2) \rangle$
$= \langle u, a T^*(v_1) \rangle + \langle u, b T^*(v_2) \rangle$
$= \langle u, a T^*(v_1) + b T^*(v_2) \rangle$
so $T^*(a v_1 + b v_2) = a T^*(v_1) + b T^*(v_2)$.
Uniqueness: if F is another operator with $\langle T(u), v \rangle = \langle u, F(v) \rangle$, then
$\langle u, T^*(v) \rangle = \langle u, F(v) \rangle\ \forall\, u, v \in V$
Centre for Distance Education 15.6 Acharya Nagarjuna University
$\Rightarrow T^* = F$
So $T^*$ is unique.
Hence the theorem.
Note i) The symbol $T^*$ is read as "T star".
1. For each of the inner product spaces V(F) and linear transformations (linear functionals) f : V → F, find a vector v such that $f(u) = \langle u, v \rangle$ for all $u \in V$.
i) $V = R^2$; $F = R$; $f(a_1, a_2) = 2a_1 + a_2$
ii) $V = C^2$; $F = C$; $f(z_1, z_2) = z_1 - 2z_2$

Solution i): $R^2$ has the orthonormal basis $u_1 = (1, 0)$, $u_2 = (0, 1)$, such that $\langle u_1, u_1 \rangle = 1$, $\langle u_2, u_2 \rangle = 1$, $\langle u_1, u_2 \rangle = 0$, $\langle u_2, u_1 \rangle = 0$.
Let $u \in V$; then $u = a_1 u_1 + a_2 u_2$ for $a_1, a_2 \in R$ and $f(u) = f(a_1 u_1 + a_2 u_2) = a_1 f(u_1) + a_2 f(u_2)$ .... (1), since f is linear. We seek $v = b_1 u_1 + b_2 u_2$ such that $f(u) = \langle u, v \rangle = a_1 b_1 + a_2 b_2$.
Comparing with (1), $b_1 = f(u_1)$, $b_2 = f(u_2)$:
$b_1 = f(1, 0) = 2(1) + 0 = 2$
$b_2 = f(0, 1) = 2(0) + 1 = 1$
So $v = (2, 1)$.

Solution ii): here $\langle u, v \rangle = z_1\bar b_1 + z_2\bar b_2$ for $v = (b_1, b_2)$, so $b_j = \overline{f(u_j)}$:
$f(1, 0) = 1 - 2(0) = 1$, so $b_1 = 1$
$f(0, 1) = 0 - 2(1) = -2$, so $b_2 = -2$. Hence $v = (1, -2)$.
W.E.2: For each of the inner product spaces V(F) and linear functional g : V → F, find a vector v such that $g(u) = \langle u, v \rangle$ for all $u \in V$.
i) $V = R^3$; $g(a_1, a_2, a_3) = a_1 + 2a_2 + 4a_3$
ii) $V(F) = V(C)$; $x = (a_1, a_2, a_3)$; $g(x) = \dfrac13(a_1 + a_2 + a_3)$

Solution: V has the orthonormal basis $u_1 = (1, 0, 0)$, $u_2 = (0, 1, 0)$, $u_3 = (0, 0, 1)$, such that $\langle u_i, u_j \rangle = 1$ if $i = j$ and $0$ if $i \ne j$.
For i), since the field is real, $v = (g(u_1), g(u_2), g(u_3)) = (1, 2, 4)$.
For ii), write $v = b_1 u_1 + b_2 u_2 + b_3 u_3 = \sum_{j=1}^{3} b_j u_j$ ..... (2) for some $b_j \in C$. Then for $u = \sum_i a_i u_i$,
$\langle u, v \rangle = \left\langle \sum_{i} a_i u_i,\ \sum_{j} b_j u_j \right\rangle = \sum_i a_i \bar b_i$
so comparing with $g(u) = \sum_i a_i\,g(u_i)$, we need $b_j = \overline{g(u_j)}$.
Since $g(u) = g(a_1, a_2, a_3) = \dfrac13(a_1 + a_2 + a_3)$,
$g(u_1) = g(1, 0, 0) = \dfrac13(1 + 0 + 0) = \dfrac13$,
$g(u_2) = g(0, 1, 0) = \dfrac13(0 + 1 + 0) = \dfrac13$, $g(u_3) = g(0, 0, 1) = \dfrac13(0 + 0 + 1) = \dfrac13$
$\therefore v = \dfrac13(1, 0, 0) + \dfrac13(0, 1, 0) + \dfrac13(0, 0, 1) = \left(\dfrac13, \dfrac13, \dfrac13\right)$
which is the required vector.
W.E.3: If $V_3(F)$ is an inner product space with orthonormal basis $\{u_1, u_2, u_3\}$ where $u_1 = \frac{1}{\sqrt 2}(1, 1, 0)$, $u_2 = \frac{1}{\sqrt 2}(1, -1, 0)$, $u_3 = (0, 0, 1)$, and f is a linear functional on $V_3(F)$ such that $f(u_1) = 2$, $f(u_2) = 1$, $f(u_3) = 1$, find the vector v such that $f(u) = \langle u, v \rangle\ \forall\, u \in V_3(F)$.

Solution: $V_3$ has the orthonormal basis $\{u_1, u_2, u_3\}$ with $\langle u_i, u_j \rangle = 1$ if $j = i$ and $0$ if $j \ne i$. We seek $v \in V$ with $f(u) = \langle u, v \rangle$. Write
$v = b_1 u_1 + b_2 u_2 + b_3 u_3 = \sum_{j=1}^{3} b_j u_j$ ............ (2)
Then $f(u_k) = \langle u_k, v \rangle = \bar b_k$, so $b_k = \overline{f(u_k)}$ ........... (3), giving
$b_1 = 2,\ b_2 = 1,\ b_3 = 1$
So $v = b_1 u_1 + b_2 u_2 + b_3 u_3 = 2\cdot\frac{1}{\sqrt 2}(1, 1, 0) + 1\cdot\frac{1}{\sqrt 2}(1, -1, 0) + 1\,(0, 0, 1)$
so $v = \left(\dfrac{3}{\sqrt 2}, \dfrac{1}{\sqrt 2}, 1\right)$ is the required vector.
15.7 Definition:
Adjoint of an operator: Let T be a linear operator in an inner product space V (finite dimensional
or not). We say that T has an adjoint T * , if there exists a linear operator T * on V; such that
T (u ), v u , T *(v ) u , v V .
In theorem 15.5.2 we proved that every linear operator on a finite dimensional inner product space possesses an adjoint. But it should be noted that if V is not finite dimensional, then some linear operators may possess an adjoint while others may not. In any case, if T possesses an adjoint $T^*$, it is unique, as we proved in that theorem.
15.8.1 Theorem: Let V be a finite dimensional inner product space and let $B = \{u_1, u_2, \ldots, u_n\}$ be an orthonormal basis for V. Let T be a linear operator on V and let $A = [a_{ij}]$ be the matrix of T with respect to the ordered basis B. Then $a_{ij} = \langle T(u_j), u_i \rangle$.
n
Proof: As B is an orthonormal basis of V; and if v is any vector in V; then v v, u
i 1
i ui
n
Taking T (u j ) in place of v , in the above, we get T (u j ) T (u ), u
i 1
j i ui ......(1)
where j 1, 2,...n .
j 1, 2,...n . As the expression for T (u j ) as a linear combination of the vectors in B is unique and
Rings and Linear Algebra 15.13 Linear Operators
so from (1) and (2) we have aij T (u j ), ui where i 1, 2,...n and j 1, 2,...n .
15.8.2 Theorem:
Let V be a finite dimensional inner product space and let T be a linear operator on V. Let B be any orthonormal basis for V. Then the matrix of $T^*$ is the conjugate transpose of the matrix of T, i.e. $[T^*]_B = [T]_B^*$.
Proof: Let $B = \{u_1, u_2, \ldots, u_n\}$ be an orthonormal basis of V. Let $A = [a_{ij}]_{n\times n}$ be the matrix of T with respect to B, so $a_{ij} = \langle T(u_j), u_i \rangle$ ..... (1)
Let $C = [c_{ij}]_{n\times n}$ be the matrix of $T^*$ in the ordered basis B, so $c_{ij} = \langle T^*(u_j), u_i \rangle$ ..... (2)
Now $c_{ij} = \langle T^*(u_j), u_i \rangle = \overline{\langle u_i, T^*(u_j) \rangle} = \overline{\langle T(u_i), u_j \rangle}$ by def. of $T^*$
$= \overline{a_{ji}}$ by (1)
So $C = [\overline{a_{ji}}]_{n\times n} = A^*$, where $A^*$ is the conjugate transpose of A. So $[T^*]_B = [T]_B^*$.
Note: Here the basis B is an orthonormal basis, not an ordinary basis.
15.8.3 Corollary: If A and B are n x n matrices, then
(i) $(A + B)^* = A^* + B^*$ (ii) $(cA)^* = \bar c\,A^*$ (iii) $(AB)^* = B^* A^*$ (iv) $A^{**} = A$
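These identities are easy to spot-check numerically (a sketch; `star` is our shorthand for the conjugate transpose):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

star = lambda M: M.conj().T   # conjugate transpose
```

Random matrices suffice here because each identity is an exact algebraic fact, not a property of special matrices.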
For example, let T be the operator on $C^2$ (standard inner product) with $T(1, 0) = (2i, 1) = 2i(1, 0) + 1(0, 1)$ and $T(0, 1) = (3, 1)$. Then

$[T]_B = \begin{pmatrix} 2i & 3 \\ 1 & 1 \end{pmatrix}$, so $[T^*]_B = [T]_B^* = \begin{pmatrix} -2i & 1 \\ 3 & 1 \end{pmatrix}$. Hence the coordinate matrix of
$T^*(a_1, a_2) = (-2i a_1 + a_2,\ 3a_1 + a_2)$ in the same basis is $\begin{pmatrix} -2i & 1 \\ 3 & 1 \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \end{pmatrix}$.

Also, since $[L_A]_B = A$ and $[L_{A^*}]_B = A^*$ for the left-multiplication operator $L_A$,
$(L_A)^* = L_{A^*}$.
Solution: The matrix of T relative to the standard basis of $V_3(C)$, which is also an orthonormal basis, is given by

$[T] = \begin{pmatrix} 2 & 1+i & 0 \\ 3-2i & 0 & 4i \\ 2i & 4-3i & 3 \end{pmatrix} = [a_{ij}]_{3\times 3}$

If $T^*$ is the adjoint of T, then the matrix of $T^*$ relative to the standard basis B is

$[T^*] = [\overline{a_{ji}}] = \begin{pmatrix} 2 & 3+2i & -2i \\ 1-i & 0 & 4+3i \\ 0 & -4i & 3 \end{pmatrix}$
$\langle (a, b, c), T^*(x, y, z) \rangle = \langle T(a, b, c), (x, y, z) \rangle$
$= \langle (a+b,\ b,\ a+b+c), (x, y, z) \rangle$
$= (a+b)x + by + (a+b+c)z$
$= ax + bx + by + az + bz + cz$
$= a(x+z) + b(x+y+z) + cz$
$= \langle (a, b, c), (x+z,\ x+y+z,\ z) \rangle$
So $T^*(x, y, z) = (x+z,\ x+y+z,\ z)\ \forall\,(x, y, z)$, which determines $T^*$.
$\langle (S+T)(u), v \rangle = \langle S(u) + T(u), v \rangle$
$= \langle S(u), v \rangle + \langle T(u), v \rangle$
$= \langle u, S^*(v) \rangle + \langle u, T^*(v) \rangle$
$= \langle u, (S^* + T^*)(v) \rangle$
Thus for the linear operator S + T on V, there exists the operator $S^* + T^*$ on V such that
$\langle (S+T)(u), v \rangle = \langle u, (S^* + T^*)(v) \rangle\ \forall\, u, v \in V$
By uniqueness of the adjoint, $(S+T)^* = S^* + T^*$.
$\langle (cT)(u), v \rangle = c\,\langle T(u), v \rangle = c\,\langle u, T^*(v) \rangle$
$= \langle u, \bar c\,T^*(v) \rangle$
$= \langle u, (\bar c\,T^*)(v) \rangle$, so $(cT)^* = \bar c\,T^*$.
$\langle (ST)(u), v \rangle = \langle T(u), S^*(v) \rangle = \langle u, T^* S^*(v) \rangle = \langle u, (T^* S^*)(v) \rangle$, so $(ST)^* = T^* S^*$.
$\langle T^*(u), v \rangle = \overline{\langle v, T^*(u) \rangle}$ (since $\langle u, v \rangle = \overline{\langle v, u \rangle}$)
$= \overline{\langle T(v), u \rangle} = \langle u, T(v) \rangle$
Thus for the linear operator $T^*$, there exists the linear operator T on V such that
$\langle u, (T^*)^*(v) \rangle = \langle u, T(v) \rangle$, so $(T^*)^* = T$.
We have $\langle u, O^*(v) \rangle = \langle O(u), v \rangle$
$= 0 = \langle u, O(v) \rangle$, so $O^* = O$.
$\langle u, I^*(v) \rangle = \langle I(u), v \rangle = \langle u, v \rangle$
$= \langle u, I(v) \rangle$
So $I^* = I$ by uniqueness of the adjoint.
If T is invertible, $T T^{-1} = T^{-1} T = I$
$\Rightarrow (T T^{-1})^* = (T^{-1} T)^* = I^*$
$\Rightarrow (T^{-1})^*\,T^* = T^*\,(T^{-1})^* = I$ since $I^* = I$
$\Rightarrow (T^*)^{-1} = (T^{-1})^*$ by uniqueness of the inverse.
then $U_1^* = (T + T^*)^* = T^* + (T^*)^* = T^* + T = U_1$
and $U_2^* = (TT^*)^* = (T^*)^*\,T^* = TT^* = U_2$
Worked Out Examples:
Then by definition of $T^*$,
as $(a, b)$, $(a_1, b_1)$ are arbitrary elements of $R^2$, we have $T^*(a_1, b_1) = (2a_1 + b_1,\ a_1 + 3b_1)$
or $T^*(a, b) = (2a + b,\ a + 3b)$
So $T^*(3, 5) = (2\cdot 3 + 5,\ 3 + 3\cdot 5) = (11, 18)$.
W.E.9: Let V be the vector space $V_2(C)$ with standard inner product. Let T be the linear operator defined by $T(1, 0) = (1, 2)$, $T(0, 1) = (i, -1)$. If $u = (a, b)$ then find $T^*(u)$.
Solution: Let $B = \{(1, 0), (0, 1)\}$. Then B is the standard basis for V. It is an orthonormal basis for V.

$[T]_B = \begin{pmatrix} 1 & i \\ 2 & -1 \end{pmatrix}$

The matrix of $T^*$ in the ordered basis B is the conjugate transpose of the matrix $[T]_B$:

$[T^*]_B = \begin{pmatrix} 1 & 2 \\ -i & -1 \end{pmatrix}$

$\begin{pmatrix} 1 & 2 \\ -i & -1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} a + 2b \\ -ia - b \end{pmatrix}$, so $T^*(a, b) = (a + 2b,\ -ia - b)$.
W.E.10: The inner product space V is $C^2$ and T is the linear operator on V defined by
$T(z_1, z_2) = (2z_1 + iz_2,\ (1-i)z_1)$
Solution: Let $B = \{(1, 0), (0, 1)\}$. Then B is the standard ordered basis for V. It is an orthonormal basis.
$T(1, 0) = (2, 1-i) = 2(1, 0) + (1-i)(0, 1)$, and $T(0, 1) = (i, 0)$.

$[T]_B = \begin{pmatrix} 2 & i \\ 1-i & 0 \end{pmatrix}$, so $[T^*]_B = \begin{pmatrix} 2 & 1+i \\ -i & 0 \end{pmatrix}$

The coordinate matrix of $T^*(z_1, z_2)$ in the basis B is $\begin{pmatrix} 2 & 1+i \\ -i & 0 \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 2z_1 + (1+i)z_2 \\ -iz_1 \end{pmatrix}$

So $T^*(z_1, z_2) = (2z_1 + (1+i)z_2,\ -iz_1)$.
6 2i i 2 1 2i
(3 3i ), (3i 1)
Solution: Let $B = \{(1, 0), (0, 1)\}$. B is the standard ordered basis and is an orthonormal basis.

$[T]_B = \begin{pmatrix} 1+i & i \\ 2 & i \end{pmatrix}$, so $[T^*]_B = [T]_B^* = \begin{pmatrix} 1-i & 2 \\ -i & -i \end{pmatrix}$

We have $[T]_B\,[T^*]_B = \begin{pmatrix} 1+i & i \\ 2 & i \end{pmatrix}\begin{pmatrix} 1-i & 2 \\ -i & -i \end{pmatrix} = \begin{pmatrix} 3 & 3+2i \\ 3-2i & 5 \end{pmatrix}$ ............. (1)

Also $[T^*]_B\,[T]_B = \begin{pmatrix} 1-i & 2 \\ -i & -i \end{pmatrix}\begin{pmatrix} 1+i & i \\ 2 & i \end{pmatrix} = \begin{pmatrix} 6 & 1+3i \\ 1-3i & 2 \end{pmatrix}$ .......... (2)

From (1) and (2), $[TT^*]_B \ne [T^*T]_B$
So $TT^* \ne T^*T$
So T does not commute with $T^*$, i.e. T is not normal.
15.12 Exercise:
1. For each of the following inner product spaces V over F and linear transformations g : V F
find a vector v such that g (u ) u , v for all u V .
ii) $V = P_2(R)$ with $\langle f, h \rangle = \int_0^1 f(t)\,h(t)\,dt$
by f , g f (t ) g (t )dt;
1
T( f ) f ' 3f
f (t ) 4 2t Evaluate T * .
Ans: T * f (t ) 12 6t
Ans : T *( x, y ) ( x y , 2 x y )
4. Let T be a linear operator on $V_2(C)$ defined by $T(1, 0) = (1, 2)$, $T(0, 1) = (i, -1)$. Using the standard inner product, find $T^*(u)$ where $u = (a, b)$.
4. If $\hat 0$ is the zero operator on V and I is the identity operator on V, then $\hat 0^* = \hat 0$ and $I^* = I$. So $\hat 0$ and I are self adjoint operators.
5. An n x n real or complex matrix A is self adjoint if A A* .
6. A self adjoint operator is also called as Hermitian operator. A self adjoint matrix is also called as
Hermitian matrix.
15.13.2 Theorem:
Every linear operator T on a finite dimensional complex inner product space V can be uniquely expressed as $T = T_1 + iT_2$ where $T_1$ and $T_2$ are self adjoint linear operators on V.

Proof: Let $T_1 = \dfrac12(T + T^*)$ and $T_2 = \dfrac{1}{2i}(T - T^*)$ ......... (1)

Then $T_1 + iT_2 = \dfrac12(T + T^*) + \dfrac12(T - T^*) = T$.

Also $T_1^* = \left[\dfrac12(T + T^*)\right]^* = \dfrac12\left(T^* + T^{**}\right) = \dfrac12(T^* + T) = T_1$

and $T_2^* = \left[\dfrac{1}{2i}(T - T^*)\right]^* = -\dfrac{1}{2i}(T - T^*)^* = -\dfrac{1}{2i}(T^* - T) = \dfrac{1}{2i}(T - T^*) = T_2$ ........... (2)

So $T_1$ and $T_2$ are self adjoint.

Uniqueness: suppose $T = U_1 + iU_2$ with $U_1, U_2$ self adjoint. Then
$T^* = (U_1 + iU_2)^* = U_1^* + (iU_2)^*$
$= U_1^* - iU_2^*$
$= U_1 - iU_2$
So $T + T^* = 2U_1$ and $T - T^* = 2iU_2$, giving
$U_1 = \dfrac12(T + T^*) = T_1$
$U_2 = \dfrac{1}{2i}(T - T^*) = T_2$. Hence the expression $T = T_1 + iT_2$ is unique.
15.13.3 Prove that the product of two self adjoint operators on an inner product space is self adjoint iff they commute.
Proof: Let $T_1$ and $T_2$ be two self adjoint operators on an inner product space V. Then $(T_1 T_2)^* = T_2^* T_1^* = T_2 T_1$.
So $T_1 T_2$ is self adjoint iff $(T_1 T_2)^* = T_1 T_2$, i.e. iff $T_2 T_1 = T_1 T_2$, i.e. iff they commute.
15.13.4 If T1 and T2 are self adjoint linear operators on an inner product space, then show that
T1 T2 is self adjoint.
$\langle T(u+v), u+v \rangle = 0$
$\Rightarrow \langle T(u), u \rangle + \langle T(u), v \rangle + \langle T(v), u \rangle + \langle T(v), v \rangle = 0$
$\Rightarrow \langle T(u), v \rangle + \langle T(v), u \rangle = 0$ ....... (1) by the given condition.
Case i) Let V be a complex inner product space. Then in (1) replacing v by iv we get
$\langle T(u), iv \rangle + \langle T(iv), u \rangle = 0$
$\Rightarrow -i\langle T(u), v \rangle + i\langle T(v), u \rangle = 0 \Rightarrow \langle T(u), v \rangle - \langle T(v), u \rangle = 0$ ....... (3)
Adding (1) and (3) we get
$2\langle T(u), v \rangle = 0$ for all $u, v \in V$
So $\langle T(u), T(u) \rangle = 0$ putting $v = T(u)$
$\Rightarrow T(u) = 0\ \forall\, u$
$\Rightarrow T = \hat 0$
Case ii) In case V is a real inner product space (and T is self adjoint), we have $\langle T(v), u \rangle = \langle v, T(u) \rangle = \langle T(u), v \rangle$ since $\langle u, v \rangle = \langle v, u \rangle$.
From (1), $\langle T(u), v \rangle + \langle v, T(u) \rangle = 0$
$\Rightarrow 2\langle T(u), v \rangle = 0$ using the above, $\forall\, u, v \in V$
$\Rightarrow \langle T(u), T(u) \rangle = 0$ putting $v = T(u)$
$\Rightarrow T(u) = 0\ \forall\, u \in V$. So $T = \hat 0$.
Hence the theorem.
15.13.6 If T is a linear transformation on a complex inner product space, then T is self adjoint $\Leftrightarrow \langle T(u), u \rangle$ is real $\forall\, u \in V$.
Proof: Case i) Let T be self adjoint. Then
$\langle T(u), u \rangle = \langle u, T^*(u) \rangle = \langle u, T(u) \rangle = \overline{\langle T(u), u \rangle}$, so $\langle T(u), u \rangle$ is real.
Case ii) Converse:
Let $\langle T(u), u \rangle$ be real for each $u \in V$. Then to prove that T is a self adjoint transformation, i.e. to show $\langle T(u), v \rangle = \langle u, T(v) \rangle\ \forall\, u, v \in V$:
since each $\langle T(w), w \rangle$ is real, $\langle T(w), w \rangle = \overline{\langle T(w), w \rangle} = \langle w, T(w) \rangle$.
Now $\langle T(u+v), u+v \rangle = \langle T(u) + T(v), u+v \rangle$
$= \langle T(u), u \rangle + \langle T(u), v \rangle + \langle T(v), u \rangle + \langle T(v), v \rangle$
and equating this with $\langle u+v, T(u+v) \rangle$ and cancelling the real diagonal terms gives
$\langle T(u), v \rangle + \langle T(v), u \rangle = \langle v, T(u) \rangle + \langle u, T(v) \rangle$ ............ (1)
Replacing v by iv we get
$-i\langle T(u), v \rangle + i\langle T(v), u \rangle = i\langle v, T(u) \rangle - i\langle u, T(v) \rangle$ ............... (2)
Multiplying (2) by i and adding to (1) we get
$2\langle T(u), v \rangle = 2\langle u, T(v) \rangle$, so $T^* = T$ and T is self adjoint.
Further, if T is self adjoint, $T \ne \hat 0$, and aT is also self adjoint, then
$aT = (aT)^* = \bar a T^* = \bar a T$
$\Rightarrow (a - \bar a)T = \hat 0$
As $T \ne \hat 0$, $a - \bar a = 0$, i.e. $a = \bar a$.
Hence a is real.
Hence the theorem.
15.13.8 Theorem:
Let T be a linear operator on a finite dimensional inner product space V. Then T is self adjoint if and only if its matrix in every orthonormal basis is a self adjoint matrix.
Proof: For any orthonormal basis B,
$[T^*]_B = [T]_B^*$ ............ (1)
If T is self adjoint, $T = T^*$, so from (1) we get $[T]_B = [T]_B^*$, i.e. $[T]_B$ is self adjoint. Conversely, if $[T]_B = [T]_B^*$ then $[T]_B = [T^*]_B$, so $T = T^*$.
15.13.9 Theorem:
If T is a self adjoint linear operator on a finite dimensional inner product space, then prove that det(T) is real.
Proof: Let B be an orthonormal basis and $A = [T]_B$. Then $[T^*]_B = [T]_B^*$,
and T is self adjoint, so $T^* = T$, i.e. $A = [T]_B = [T]_B^* = A^*$.
Then we have
$\det A = \det A^* = \overline{\det A}$
so det(T) is real.
Prove that the range of T is the orthogonal complement of the null space of T, i.e. $R(T) = N(T)^{\perp}$.
Proof: Let u be any element in R(T). Then there exists a vector v in V such that $u = T(v)$.
Now if $w \in N(T)$ then $T(w) = O$, and
$\langle u, w \rangle = \langle T(v), w \rangle = \langle v, T^*(w) \rangle = \langle v, T(w) \rangle$ since $T^* = T$
$= \langle v, O \rangle = 0$
Thus $u \in N(T)^{\perp}$, so $R(T) \subseteq N(T)^{\perp}$.
Also $V = N(T) \oplus N(T)^{\perp}$
and
$\dim R(T) = \dim V - \dim N(T) = \dim N(T)^{\perp}$
So $R(T) = N(T)^{\perp}$.
15.13.11 Let T be a linear operator on a finite dimensional inner product space V. If T has an eigen vector, then show that $T^*$ has an eigen vector.
Proof: Let u be an eigen vector of T with eigen value $\lambda$. Then for any $v \in V$ we have
$0 = \langle O, v \rangle = \langle (T - \lambda I)(u), v \rangle$
$= \langle u, (T - \lambda I)^*(v) \rangle$
$= \langle u, (T^* - \bar\lambda I)(v) \rangle$
So $T^* - \bar\lambda I$ is not onto and hence is not one to one. Thus $T^* - \bar\lambda I$ has a non zero null space, and any non zero vector in this null space is an eigen vector of $T^*$ with corresponding eigen value $\bar\lambda$.
Solution: We are given that W is invariant under T. We have to prove that $W^{\perp}$ is invariant under $T^*$. Let v be any vector in $W^{\perp}$. Then we must prove that $T^*(v)$ is in $W^{\perp}$, i.e. that $T^*(v)$ is orthogonal to every vector in W. Let u be any vector in W. Then $\langle u, T^*(v) \rangle = \langle T(u), v \rangle = 0$,
since $u \in W \Rightarrow T(u) \in W$ as W is T invariant, while $v \in W^{\perp}$.
$\therefore T^*(v)$ is in $W^{\perp}$.
So $W^{\perp}$ is invariant under $T^*$.
15.15.1 Schur's Theorem: Let T be a linear operator on a finite dimensional inner product space V. Suppose that the characteristic polynomial of T splits. Then show that there exists an orthonormal basis B for V such that the matrix $[T]_B$ is upper triangular.
Proof: The proof is by mathematical induction on the dimension n of V. The result is immediate if n = 1. So suppose that the result is true for linear operators on (n − 1) dimensional inner product spaces whose characteristic polynomial splits. We know that if T is a linear operator on a finite dimensional inner product space V and T has an eigen vector, then $T^*$ also has an eigen vector.
So we can assume that $T^*$ has a unit eigen vector w, say $T^*(w) = \lambda w$. Let $W = \operatorname{span}\{w\}$; we show $W^{\perp}$ is invariant under T.
If $v \in W^{\perp}$ and $u = cw \in W$, then
$\langle T(v), u \rangle = \langle T(v), cw \rangle = \langle v, c\,T^*(w) \rangle = \langle v, c\lambda w \rangle$
$= \bar c\bar\lambda\,\langle v, w \rangle = \bar c\bar\lambda\,(0) = 0$
So $T(v) \in W^{\perp}$.
So $\dim(W^{\perp}) = n - 1$, so we may apply the induction hypothesis to $T_{W^{\perp}}$, the restriction of T to $W^{\perp}$, and obtain an orthonormal basis $B'$ of $W^{\perp}$ such that $[T_{W^{\perp}}]_{B'}$ is upper triangular. Then $B = B' \cup \{w\}$ is an orthonormal basis for V with $[T]_B$ upper triangular.
Proof: Let dim(V) = n. Let B be an orthonormal basis for V and $A = [T]_B$. Then A is self adjoint. Let $T_A$ be the linear operator on $C^n$ defined by $T_A(u) = Au$ for all $u \in C^n$. $T_A$ is self adjoint because $[T_A]_D = A$, where D is the standard ordered orthonormal basis for $C^n$. As $T_A$ is a self adjoint operator, the eigen values of $T_A$ are real.
By the fundamental theorem of algebra, the characteristic polynomial splits into factors of the form $t - \lambda$. Since each $\lambda$ is real, the characteristic polynomial splits over R. But $T_A$ has the same characteristic polynomial as A, which has the same characteristic polynomial as T. So the characteristic polynomial of T splits.
Hence the theorem.
Note: Fundamental theorem of algebra: every polynomial P of degree $n \ge 1$ with complex coefficients factors as $P(z) = a_n (z - c_1)(z - c_2)\cdots(z - c_n)$.
15.15.3 Theorem: Let V be a finite dimensional inner product space and let T be a self adjoint linear operator on V. Then there is an orthonormal basis B for V, each vector of which is a characteristic vector for T, and consequently the matrix of T with respect to B is a diagonal matrix.
Proof: As T is a self adjoint linear operator on a finite dimensional inner product space V, T must have a characteristic value and hence a characteristic vector.
Let $u \ne O$ be a characteristic vector for T. Let $u_1 = \dfrac{u}{\|u\|}$. Then $u_1$ is a characteristic vector for T and $\|u_1\| = 1$. If dim V = 1, then $\{u_1\}$ is an orthonormal basis for V and $u_1$ is a characteristic vector for T. Thus the theorem is true if dim V = 1. Now we proceed by induction on the dimension of V. Suppose the theorem is true for inner product spaces of dimension less than the dimension of V. Then we shall prove that it is true for V, and the proof will be complete by induction.
Let W be the one dimensional subspace of V spanned by the characteristic vector $u_1$ for T. Let $u_1$ correspond to the characteristic value C, so $T(u_1) = C u_1$.
If v is any vector in W, then $v = k u_1$ where k is a scalar. We have
$T(v) = T(k u_1) = k T(u_1) = k(C u_1) = kC\,u_1 \in W$. So W is invariant under T. Hence $W^{\perp}$ is invariant under $T^*$. But T self adjoint means $T = T^*$, so $W^{\perp}$ is invariant under T. If dim V = n, then
$\dim W^{\perp} = \dim V - \dim W = n - 1$
So $W^{\perp}$ with the inner product from V is an inner product space of dimension one less than the dimension of V.
Let S be the restriction of T to $W^{\perp}$; then S is a linear operator on $W^{\perp}$ with $S^* = S$.
Thus S is a self adjoint linear operator on $W^{\perp}$, whose dimension is less than the dimension of V. By the induction hypothesis $W^{\perp}$ has an orthonormal basis $\{u_2, \ldots, u_n\}$ of characteristic vectors for S. Suppose $u_i$ is the characteristic vector for S corresponding to the characteristic value $c_i$. Then $S(u_i) = c_i u_i$, so
$T(u_i) = c_i u_i$
and each $u_i$ is also a characteristic vector for T. Since $V = W \oplus W^{\perp}$, $B = \{u_1, u_2, \ldots, u_n\}$ is an orthonormal basis for V each vector of which is a characteristic vector of T. The matrix of T relative to B will be a diagonal matrix.
Note i) If V is finite dimensional then $T^*$ will definitely exist. If V is not finite dimensional, then the above definition makes sense if and only if T possesses an adjoint.
ii) Every self adjoint operator is normal.
Suppose T is a self adjoint operator; then $T^* = T$, so obviously $T^*T = TT^*$.
So T is normal.
Example: if $[T]_B = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$, show that T is normal.
Here $[T]_B\,[T]_B^* = [T]_B^*\,[T]_B = I$, so T is normal.
Normal Matrix: Definition:
A real or complex n x n matrix A is normal if and only if it commutes with its conjugate transpose, i.e. $AA^* = A^*A$.

Example: $A = \begin{pmatrix} 1 & 1 \\ -i & 3-2i \end{pmatrix}$, then $A^* = \begin{pmatrix} 1 & i \\ 1 & 3+2i \end{pmatrix}$

and $AA^* = \begin{pmatrix} 1 & 1 \\ -i & 3-2i \end{pmatrix}\begin{pmatrix} 1 & i \\ 1 & 3+2i \end{pmatrix} = \begin{pmatrix} 2 & 3+3i \\ 3-3i & 14 \end{pmatrix}$

$A^*A = \begin{pmatrix} 1 & i \\ 1 & 3+2i \end{pmatrix}\begin{pmatrix} 1 & 1 \\ -i & 3-2i \end{pmatrix} = \begin{pmatrix} 2 & 3+3i \\ 3-3i & 14 \end{pmatrix}$

So $AA^* = A^*A$ and A is normal.
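The example can be confirmed directly (a NumPy sketch of the same computation):

```python
import numpy as np

A = np.array([[1, 1],
              [-1j, 3 - 2j]])
A_star = A.conj().T

AAs = A @ A_star   # A times its conjugate transpose
AsA = A_star @ A
print(AAs)
```

Both products come out equal to the matrix computed above, so A commutes with its conjugate transpose.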
Theorem: T is normal $\Leftrightarrow \|T^*(u)\| = \|T(u)\|\ \forall\, u \in V$.
$\|T^*(u)\| = \|T(u)\|\ \forall u \Leftrightarrow \|T^*(u)\|^2 = \|T(u)\|^2$
$\Leftrightarrow \langle T^*(u), T^*(u) \rangle = \langle T(u), T(u) \rangle$
$\Leftrightarrow \langle TT^*(u), u \rangle = \langle T^*T(u), u \rangle$
$\Leftrightarrow \langle (TT^* - T^*T)u, u \rangle = 0\ \forall u$
$\Leftrightarrow TT^* - T^*T = \hat 0$ (zero operator)
$\Leftrightarrow TT^* = T^*T$
$\Leftrightarrow$ T is normal.
15.17.2 Theorem: If T is a normal operator on V and c is any scalar, then $T - cI$ is normal.
Proof: $(T - cI)^* = T^* - \bar c\,I^* = T^* - \bar c\,I$.
Now $(T - cI)^*(T - cI) = T^*T - c\,T^* - \bar c\,T + c\bar c\,I$ (using $IT = TI = T$)
and $(T - cI)(T - cI)^* = TT^* - \bar c\,T - c\,T^* + c\bar c\,I$.
Since $T^*T = TT^*$, the two products agree, so $T - cI$ is normal.
15.17.3 Theorem:
Let T be a normal operator on an inner product space V. Then a necessary and sufficient condition that u be a characteristic vector of T is that it be a characteristic vector of $T^*$.
Proof: Since T is normal, so is $T - CI$, hence
$\|(T - CI)(u)\| = \|(T - CI)^*(u)\|$ (since for a normal operator $\|S(u)\| = \|S^*(u)\|$)
$\Rightarrow \|(T - CI)u\| = \|(T^* - \bar C I)u\|$
So $T(u) = Cu \Leftrightarrow T^*(u) = \bar C u$. Hence u is a characteristic vector for T with characteristic value C if and only if u is a characteristic vector for $T^*$ with characteristic value $\bar C$.
Result: If u is an eigen vector of T, then u is also an eigen vector of $T^*$. In fact if $T(u) = Cu$, then $T^*(u) = \bar C u$.
15.17.4 Theorem: If T is normal then $\|T(u)\| = \|T^*(u)\|$ for every $u \in V$, and consequently $T(u) = O \Leftrightarrow T^*(u) = O$.
Proof: T is normal $\Rightarrow TT^* = T^*T$.
Also $\|T(u)\|^2 = \langle T(u), T(u) \rangle$
$= \langle u, T^*T(u) \rangle$
$= \langle u, (T^*T)u \rangle$
$= \langle u, (TT^*)u \rangle$
$= \langle T^*(u), T^*(u) \rangle$
$= \|T^*(u)\|^2$
So $\|T(u)\| = \|T^*(u)\|$.
Now $T(u) = O \Leftrightarrow \|T(u)\| = 0 \Leftrightarrow \|T^*(u)\| = 0$
$\Leftrightarrow T^*(u) = O$
Thus $T(u) = O \Leftrightarrow T^*(u) = O$.
15.17.5 Let V be an inner product space. Let T be a normal operator on V. If $\lambda_1, \lambda_2$ are distinct eigen values of T with corresponding eigen vectors $u_1$ and $u_2$, then show that $u_1$ and $u_2$ are orthogonal.
Proof: Let $u_1, u_2$ be the characteristic vectors of T corresponding to the distinct characteristic values $\lambda_1, \lambda_2$. Then
$\lambda_1\langle u_1, u_2 \rangle = \langle \lambda_1 u_1, u_2 \rangle = \langle T u_1, u_2 \rangle$
$= \langle u_1, T^* u_2 \rangle$
$= \langle u_1, \bar\lambda_2 u_2 \rangle = \lambda_2\langle u_1, u_2 \rangle$
or $\lambda_1\langle u_1, u_2 \rangle = \lambda_2\langle u_1, u_2 \rangle$
$\Rightarrow (\lambda_1 - \lambda_2)\langle u_1, u_2 \rangle = 0$
$\Rightarrow \langle u_1, u_2 \rangle = 0$ since $\lambda_1 \ne \lambda_2$
$\therefore u_1, u_2$ are orthogonal.
Let $u_1 = \dfrac{u}{\|u\|}$. Then $u_1$ is also a characteristic vector for T, with $\|u_1\| = 1$. If dim V = 1, the theorem holds.
Now we proceed by induction on the dimension of V. We suppose that the theorem is true for inner product spaces of dimension less than dim V; we shall then prove that it is true for V, and the proof is complete by induction.
Let W be the one dimensional subspace of V spanned by the characteristic vector $u_1$ for T.
Let $u_1$ correspond to the characteristic value C, so $T(u_1) = C u_1$.
If v is any vector in W then $v = K u_1$ where K is some scalar, and
$T(v) = K(C u_1) = (KC)u_1 \in W$,
so W is invariant under T. Since T is normal, $u_1$ is also a characteristic vector of $T^*$, so W is invariant under $T^*$ as well, and hence $W^{\perp}$ is invariant under both T and $T^*$.
So $W^{\perp}$ with the inner product from V is a complex inner product space of dimension less than the dimension of V. Let S be the restriction of T to $W^{\perp}$; then $S^*$ is the restriction of $T^*$, and for $v \in W^{\perp}$
$S S^*(v) = T T^*(v)$
$= (TT^*)(v)$
$= (T^*T)(v) = T^*T(v)$
$= T^* S(v)$
$= S^* S(v)$
$= (S^*S)(v)$
so S is a normal operator on $W^{\perp}$. By the induction hypothesis $W^{\perp}$ has an orthonormal basis $\{u_2, \ldots, u_n\}$ of characteristic vectors of S, hence of T. Then
$B = \{u_1, u_2, \ldots, u_n\}$ is an orthonormal basis for V each vector of which is a characteristic vector of T. The matrix of T relative to B will be a diagonal matrix. Hence the theorem.
15.15.7 Theorem:
Suppose T is a linear operator on a finite dimensional inner product space V, and suppose that there exists an orthonormal basis $B = \{u_1, u_2, \ldots, u_n\}$ for V such that each vector in B is a characteristic vector for T. Then prove that T is normal.
Proof: With respect to B, $[T]_B$ is a diagonal matrix, and $[T^*]_B = [T]_B^*$ is also diagonal. Diagonal matrices commute, so
$[T]_B\,[T^*]_B = [T^*]_B\,[T]_B$
$\Rightarrow [TT^*]_B = [T^*T]_B$
$\Rightarrow TT^* = T^*T \Rightarrow$ T is normal.
Hence the theorem.
Note: The above two theorems can be clubbed together and restated as:
Let T be a linear operator on a finite dimensional complex inner product space V. Then T is normal if and only if there exists an orthonormal basis of V consisting of eigen vectors of T.
ii) Positive semi definite operator: A linear operator T on an inner product space V is called positive semi definite (or non negative), in symbols $T \ge 0$, if it is self adjoint and $\langle T(u), u \rangle \ge 0\ \forall\, u \in V$.
Note i) An n x n matrix A with entries from R or C is called positive definite if $L_A$ is positive definite.
ii) If u is a characteristic vector of a positive operator T with characteristic value C, we have
$\langle T(u), u \rangle = \langle Cu, u \rangle = C\langle u, u \rangle = C\|u\|^2$
so $C = \dfrac{\langle T(u), u \rangle}{\|u\|^2} > 0$, i.e. the characteristic values of a positive operator are positive.
15.18.4 Theorem:
If T is a self adjoint operator on a finite dimensional inner product space V such that the characteristic values of T are non-negative, show that T is non-negative.
Proof: T is a self adjoint operator on a finite dimensional inner product space V with all characteristic values non-negative. So V has an orthonormal basis $\{v_1, \ldots, v_n\}$ of characteristic vectors of T, say $T(v_i) = c_i v_i$ with $c_i \ge 0$. Let $w = a_1 v_1 + \cdots + a_n v_n$ be any vector of V. Then
$T(w) = a_1 c_1 v_1 + a_2 c_2 v_2 + \cdots + a_n c_n v_n$
and $\langle T(w), w \rangle = |a_1|^2 c_1 + |a_2|^2 c_2 + \cdots + |a_n|^2 c_n \ge 0$
since $c_i \ge 0$ and $|a_i|^2 \ge 0$.
Thus $\langle T(w), w \rangle \ge 0\ \forall\, w \in V$.
Hence $T \ge O$, i.e. T is non negative.
15.18.5 Theorem:
Let T be a linear operator on a finite dimensional inner product space V. Let $A = [a_{ij}]_{n\times n}$ be the matrix of T relative to an ordered orthonormal basis $B = \{u_1, u_2, \ldots, u_n\}$. Then T is positive if and only if the matrix A satisfies the following conditions:
i) $A = A^*$, i.e. A is self adjoint;
ii) $\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\, x_j \bar x_i > 0$ for all scalars $x_1, x_2, \ldots, x_n$ not all zero.

Proof: Let $v = x_1 u_1 + \cdots + x_n u_n$. Then
$\langle T(v), v \rangle = \left\langle T\left(\sum_{j=1}^{n} x_j u_j\right),\ \sum_{i=1}^{n} x_i u_i \right\rangle$
$= \sum_{j=1}^{n}\sum_{i=1}^{n} x_j\,\overline{x_i}\,\langle T(u_j), u_i \rangle$
$= \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\, x_j \bar x_i$
since, as we know, if $A = [a_{ij}]_{n\times n}$ is the matrix of T with respect to the orthonormal basis B, then $a_{ij} = \langle T(u_j), u_i \rangle$.
If T is positive then T is self adjoint, so $A = A^*$. If $x_1, x_2, \ldots, x_n$ are any n scalars not all zero, then $v = x_1 u_1 + x_2 u_2 + \cdots + x_n u_n$ is a non zero vector, and $\langle T(v), v \rangle > 0$ gives condition (ii).
Conversely suppose that conditions (i) and (ii) of the theorem hold. $A = A^* \Rightarrow T = T^*$.
Also (ii) implies $\langle T(v), v \rangle > 0$ for every non zero $v \in V$, since such v can be written $v = x_1 u_1 + x_2 u_2 + \cdots + x_n u_n$ where $x_1, x_2, \ldots, x_n$ are scalars not all zero. Hence T is positive.
15.18.6 Working procedure to verify the positiveness of a square matrix:
Let $A = [a_{ij}]_{n\times n}$ be a square matrix of order n over the field F, and consider its n leading principal minors.
Then the matrix A is positive if and only if $A = A^*$ and the principal minors are all positive, whereas the matrix A is not positive if det A is not positive.
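The working procedure amounts to Sylvester's criterion. A small sketch (the function name is ours):

```python
import numpy as np

def is_positive(A, tol=1e-12):
    """A = A* and every leading principal minor > 0."""
    A = np.asarray(A, dtype=complex)
    if not np.allclose(A, A.conj().T):
        return False          # not self adjoint
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]).real > tol
               for k in range(1, n + 1))
```

For example, the matrix $\begin{pmatrix}2&2\\2&5\end{pmatrix}$ used below is positive (minors 2 and 6), while a non self adjoint matrix fails immediately.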
So $[T]_B = \begin{pmatrix} 2 & 2 \\ 2 & 5 \end{pmatrix}$ and $[T^*]_B = \begin{pmatrix} 2 & 2 \\ 2 & 5 \end{pmatrix}$

So $T = T^*$: T is self adjoint.

Also $[T]_B\,[T^*]_B = \begin{pmatrix} 2 & 2 \\ 2 & 5 \end{pmatrix}\begin{pmatrix} 2 & 2 \\ 2 & 5 \end{pmatrix} = \begin{pmatrix} 8 & 14 \\ 14 & 29 \end{pmatrix}$

$[T^*]_B\,[T]_B = \begin{pmatrix} 2 & 2 \\ 2 & 5 \end{pmatrix}\begin{pmatrix} 2 & 2 \\ 2 & 5 \end{pmatrix} = \begin{pmatrix} 8 & 14 \\ 14 & 29 \end{pmatrix}$

So $[T]_B\,[T^*]_B = [T^*]_B\,[T]_B$
$\Rightarrow [TT^*]_B = [T^*T]_B \Rightarrow TT^* = T^*T$:
T is normal.

Let $A = [T]_B$. The characteristic equation is $|A - \lambda I| = 0$:
$\begin{vmatrix} 2-\lambda & 2 \\ 2 & 5-\lambda \end{vmatrix} = 0 \Rightarrow (2-\lambda)(5-\lambda) - 4 = 0$
$\Rightarrow \lambda^2 - 7\lambda + 6 = 0 \Rightarrow (\lambda - 6)(\lambda - 1) = 0$
$\Rightarrow \lambda = 6,\ 1$

For $\lambda = 6$: $(A - 6I)X = O$
$\begin{pmatrix} 2-6 & 2 \\ 2 & 5-6 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = O \Rightarrow \begin{pmatrix} -4 & 2 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = O$
$\Rightarrow -4x_1 + 2x_2 = 0 \Rightarrow x_2 = 2x_1$
Put $x_1 = 1$; then $x_2 = 2$.
So $X = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and every scalar multiple of it is an eigen vector.

For $\lambda = 1$: $(A - I)X = O$
$\begin{pmatrix} 2-1 & 2 \\ 2 & 5-1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = O \Rightarrow \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = O$
$\Rightarrow x_1 + 2x_2 = 0$ and $2x_1 + 4x_2 = 0$
$\Rightarrow x_1 = -2x_2$; so put $x_2 = 1$, then $x_1 = -2$.
So $X = \begin{pmatrix} -2 \\ 1 \end{pmatrix}$ and every scalar multiple of it is an eigen vector.

An orthonormal basis of eigen vectors is $\left\{\dfrac{1}{\sqrt 5}(1, 2),\ \dfrac{1}{\sqrt 5}(-2, 1)\right\}$ with corresponding eigen values 6 and 1.
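The eigen computation agrees with `numpy.linalg.eigh`, which, for a self adjoint matrix, returns the eigen values in ascending order together with orthonormal eigen vectors:

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [2.0, 5.0]])
evals, evecs = np.linalg.eigh(A)   # columns of evecs are orthonormal eigenvectors
print(evals)
```

The returned columns are scalar multiples of $(1, 2)/\sqrt 5$ and $(-2, 1)/\sqrt 5$, and the spectral relation $A\,U = U\,\mathrm{diag}(\lambda)$ holds.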
W.E. 14: Let V be a finite dimensional inner product space and T an idempotent linear operator on V, i.e. $T^2 = T$. Then T is self adjoint if and only if $TT^* = T^*T$.
First let T be self adjoint. To prove $TT^* = T^*T$:
$\langle u, T^*T(v) \rangle = \langle u, TT(v) \rangle$
$= \langle u, TT^*(v) \rangle$ since $T^* = T$
$\Rightarrow T^*T = TT^*$
Conversely let $TT^* = T^*T$, i.e. T normal. Then $\|T(u)\| = \|T^*(u)\|$, so
$T(u) = O \Leftrightarrow T^*(u) = O$.
For any $v \in V$, $T(v - T(v)) = T(v) - T^2(v) = O$,
so $T^*(v) - T^*T(v) = O$
i.e. $T^*(v) = T^*T(v)\ \forall\, v \in V$
So $T^* = T^*T$ ............ (2)
Now $T = (T^*)^* = (T^*T)^* = T^*\,T^{**} = T^*T = T^*$, so T is self adjoint.
W.E.15: Let V be the inner product space of polynomials over R with $\langle f, g \rangle = \int_0^1 f(t)\,g(t)\,dt$, and let D be the differentiation operator, $D(f) = f'$. Determine whether D is self adjoint or not.
$\langle Df, g \rangle = \langle f', g \rangle = \int_0^1 f'(t)\,g(t)\,dt$
$= \left[f g\right]_0^1 - \int_0^1 f(t)\,g'(t)\,dt$ .......... (1) integrating by parts.
Also $\langle f, Dg \rangle = \langle f, g' \rangle = \int_0^1 f(t)\,g'(t)\,dt$ by definition ........ (2)
But since (1) and (2) are not the same, D is not self adjoint.
W.E.16: If $T_1, T_2$ are positive linear operators on an inner product space then prove that $T_1 + T_2$ is also positive.
Since $T_1, T_2$ are positive, $T_1^* = T_1$, $T_2^* = T_2$, $\langle T_1(u), u \rangle > 0$
and $\langle T_2(u), u \rangle > 0$ for non zero u.
Now $(T_1 + T_2)^* = T_1^* + T_2^* = T_1 + T_2$, so $T_1 + T_2$ is self adjoint, and
$\langle (T_1 + T_2)(u), u \rangle = \langle T_1(u), u \rangle + \langle T_2(u), u \rangle > 0$ by the given conditions.
Hence $T_1 + T_2$ is positive.
15.20 Summary:
In this lesson we discussed linear operators: the adjoint of an operator and its properties, normal and self adjoint operators and their properties, splitting of the characteristic polynomial, Schur's theorem, and positive and positive semi definite operators and matrices.
(i) ( S T )* S * T *
(ii) ( ST )* T * S *
2. Define self adjoint operator. Let T be a self adjoint linear operator on a finite dimensional inner
product space. Then prove that R (T ) N (T )
5. Define positive linear operator. If $T_1, T_2$ are positive linear operators on an inner product space, then prove that $T_1 + T_2$ is also positive.
15.23 Exercise:
1. For each linear operator T on an inner product space V, determine whether T is normal, self
adjoint or neither. If possible produce an orthonormal basis of eigen vectors of T for V and list the
corresponding eigen values.
Ans: T is normal, but not self adjoint. An orthonormal basis of eigenvectors is
{ ½(√2, 1 − i), ½(√2, −1 + i) }
with corresponding eigenvalues 2 + (1 + i)/√2 and 2 − (1 + i)/√2.
Ans: An orthonormal basis of eigenvectors is
{ (1/√2)[1 0; 0 1], (1/√2)[0 1; 1 0], (1/√2)[1 0; 0 −1], (1/√2)[0 1; −1 0] }
(each matrix written row by row), with corresponding eigenvalues 1, 1, −1, −1.
2. Let V be a complex inner product space, and let T be a linear operator on V.
3. Prove that every entry on the main diagonal of a positive matrix is positive.
4. Which of the following matrices are positive?
i) A = [ 0   i ]        ii) B = [ 1     1+i ]
       [ −i  0 ]                [ 1−i   3   ]
Ans: (i) A is not positive
(ii) B is positive
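Assuming the entries reconstructed above, the two answers can be verified with NumPy:

```python
import numpy as np

A = np.array([[0, 1j], [-1j, 0]])
B = np.array([[1, 1 + 1j], [1 - 1j, 3]])

# Both are Hermitian, so their eigenvalues are real.
assert np.allclose(A, A.conj().T) and np.allclose(B, B.conj().T)

eig_A = np.linalg.eigvalsh(A)   # eigenvalues of A are -1 and 1
eig_B = np.linalg.eigvalsh(B)   # roots of x^2 - 4x + 1, i.e. 2 +/- sqrt(3)

assert eig_A.min() < 0          # A has a negative eigenvalue: not positive
assert eig_B.min() > 0          # B is positive definite
```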
5. If T is a linear operator on an inner product space V(F) and a, b are scalars such that |a| = |b|, then show that aT + bT* is normal.
- A. Mallikharjana Sarma
Rings and Linear Algebra 16.1 Unitary and Orthogonal Operators
LESSON - 16
UNITARY AND ORTHOGONAL OPERATORS
16.4.4 Definition: Let T be a linear transformation from an inner product space U(F) to an inner product space V(F). Then T is said to be an inner product space isomorphism if
i) T is one-one,
ii) T is onto, and
iii) ⟨T(u), T(v)⟩ = ⟨u, v⟩ for all u, v ∈ U.
Hence an inner product space isomorphism from U onto V can also be defined as a linear transformation from U onto V which preserves inner products.
16.5 Definition:
i) Unitary Operator: Let T be a linear operator on a finite dimensional inner product space V over the field of complex numbers. If ‖T(u)‖ = ‖u‖ for all u ∈ V, then T is called a unitary operator.
ii) Orthogonal Operator: Let T be a linear operator on a finite dimensional inner product space V over the field of real numbers R. If ‖T(u)‖ = ‖u‖ for all u ∈ V, then T is said to be an orthogonal operator.
iii) Isometry: Let T be a linear operator on an infinite dimensional inner product space V over F. If ‖T(u)‖ = ‖u‖ for all u ∈ V, then T is called an isometry.
If, in addition, the operator is onto (the norm condition guarantees that it is one-to-one), then the operator is called unitary if F = C, or orthogonal if F = R.
iv) Definition: U and V are two vector spaces over a field F. Then the zero transformation T₀ : U → V is defined by T₀(u) = O for all u ∈ U. The zero transformation is also denoted by 0̂.
16.6.1 Theorem: Let T be a self adjoint operator on a finite dimensional inner product space V. If ⟨u, T(u)⟩ = 0 for all u ∈ V, then T = T₀.
Proof: We know that if T is a self adjoint linear operator on a finite dimensional inner product space V, then there exists an orthonormal basis B for V consisting of eigenvectors of T.
By the above we can choose such an orthonormal basis B. If u ∈ B, then T(u) = λu for some eigenvalue λ.
Then 0 = ⟨u, T(u)⟩ = ⟨u, λu⟩ = λ̄⟨u, u⟩ = λ̄‖u‖², and since u ≠ O, λ = 0.
So T(u) = O for every u in the basis B, and hence T = T₀.
16.6.2 Theorem: Let T be a linear operator on a finite dimensional inner product space V. Then the following statements are equivalent:
i) TT* = T*T = I
ii) ⟨T(u), T(v)⟩ = ⟨u, v⟩ for all u, v ∈ V
iii) If B is an orthonormal basis for V, then T(B) is an orthonormal basis for V.
iv) There exists an orthonormal basis B for V such that T(B) is an orthonormal basis for V.
v) ‖T(u)‖ = ‖u‖ for all u ∈ V.
Proof:
1. We will now prove that (i) ⟹ (ii).
Let u, v ∈ V. Then ⟨u, v⟩ = ⟨I(u), v⟩ = ⟨T*T(u), v⟩ = ⟨T(u), T(v)⟩.
2. Next, (ii) ⟹ (iii). Let B = {u₁, u₂, ..., uₙ} be an orthonormal basis for V, so that T(B) = {T(u₁), T(u₂), ..., T(uₙ)}. By (ii),
⟨T(uᵢ), T(uⱼ)⟩ = ⟨uᵢ, uⱼ⟩ = 1 if i = j, 0 if i ≠ j.
So if B is an orthonormal basis for V, then T(B) is an orthonormal basis for V ...... (3)
3. (iii) ⟹ (iv): Let B = {u₁, u₂, ..., uₙ} be an orthonormal basis for V; then by (3), T(B) = {T(u₁), T(u₂), ..., T(uₙ)} is an orthonormal basis for V.
4. Next we prove that (iv) ⟹ (v).
Let B = {u₁, u₂, ..., uₙ} be an orthonormal basis for V such that T(B) is also an orthonormal basis, and let u = Σᵢ₌₁ⁿ aᵢuᵢ. Then
‖u‖² = ⟨ Σᵢ₌₁ⁿ aᵢuᵢ , Σⱼ₌₁ⁿ aⱼuⱼ ⟩
     = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢ āⱼ ⟨uᵢ, uⱼ⟩
     = Σᵢ₌₁ⁿ aᵢ āᵢ ⟨uᵢ, uᵢ⟩, summing over j = 1, 2, ..., n and remembering that ⟨uᵢ, uⱼ⟩ = 0 if i ≠ j and ⟨uᵢ, uⱼ⟩ = 1 if i = j, as B is orthonormal,
     = Σᵢ₌₁ⁿ aᵢ āᵢ (1)
     = Σᵢ₌₁ⁿ |aᵢ|² ...... (A)
Applying the same manipulation to T(u) = Σᵢ₌₁ⁿ aᵢ T(uᵢ) and using the fact that T(B) is also orthonormal, we obtain
‖T(u)‖² = Σᵢ₌₁ⁿ |aᵢ|² ......... (B)
From (A) and (B), ‖T(u)‖ = ‖u‖. So (iv) ⟹ (v).
5. Finally we will prove that (v) ⟹ (i).
Let u ∈ V. We have ⟨u, u⟩ = ‖u‖² = ‖T(u)‖² = ⟨T(u), T(u)⟩ = ⟨u, T*T(u)⟩.
So ⟨u, u⟩ − ⟨u, T*T(u)⟩ = 0 for all u ∈ V,
i.e. ⟨u, (I − T*T)u⟩ = 0 for all u ∈ V.
We know that if S is a self adjoint operator on a finite dimensional inner product space and ⟨u, S(u)⟩ = 0 for all u ∈ V, then S = T₀, the zero transformation. Since I − T*T is self adjoint, T₀ = I − T*T.
So T*T = I.
Hence if ‖T(u)‖ = ‖u‖ for all u ∈ V, then TT* = T*T = I.
Aliter: ‖T(u)‖² = ‖u‖²
⟹ ⟨T(u), T(u)⟩ = ⟨u, u⟩
⟹ ⟨(T*T)(u), u⟩ = ⟨u, u⟩
⟹ ⟨(T*T)(u), u⟩ = ⟨I(u), u⟩
⟹ ⟨(T*T − I)u, u⟩ = 0 for all u ∈ V
⟹ T*T − I = Ô (the null operator), i.e. T*T = I.
Further, T*T = I ⟹ T*TT⁻¹ = IT⁻¹ ⟹ T*I = T⁻¹, i.e. T* = T⁻¹.
Thus if T is a unitary operator then T* = T⁻¹.
As T is a linear operator for which there exists an orthonormal basis B for V consisting of eigenvectors of T, T is self adjoint. So by this theorem V possesses an orthonormal basis B = {u₁, u₂, ..., uₙ} such that T(uᵢ) = λᵢuᵢ.
16.7 Reflection:
16.7.1 Definition: Let L be a one dimensional subspace of R², i.e. a line L through the origin. A linear operator T on R² is called a reflection of R² about L if T(u) = u for all u ∈ L and T(u) = −u for all u ∈ L⊥.
16.7.2 Example: Let T be a reflection of R² about a line L through the origin. We shall show that T is an orthogonal operator. Select vectors u₁ ∈ L and u₂ ∈ L⊥ such that ‖u₁‖ = ‖u₂‖ = 1.
Then T(u₁) = u₁ and T(u₂) = −u₂; thus u₁ and u₂ are eigenvectors of T with corresponding eigenvalues 1 and −1 respectively. Furthermore, {u₁, u₂} is an orthonormal basis for R². It follows that T is an orthogonal operator.
So T is normal.
Hence every unitary operator is normal.
16.8.2 If S and T are unitary operators, then ST is unitary, i.e. the product of two unitary operators is unitary.
Proof: S, T are two unitary operators on V. Then
S⁻¹ = S*, T⁻¹ = T* ............. (1)
Now (ST)* = T*S* = T⁻¹S⁻¹ = (ST)⁻¹, by (1). So ST is unitary.
Aliter i): S and T are two unitary operators on a finite dimensional inner product space V. We have to show that ST is unitary.
Now (ST)(ST)* = ST(T*S*) = S(TT*)S* = SIS* = SS* = I.
So (ST)(ST)* = I. Similarly (ST)*(ST) = I.
So ST is unitary.
Aliter ii): Also ‖(ST)(u)‖ = ‖S(T(u))‖ = ‖T(u)‖ = ‖u‖, since S and T are unitary.
So ‖(ST)(u)‖ = ‖u‖ for all u; so ST is unitary.
Hence the theorem.
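A quick numerical illustration of this theorem (S and T below are arbitrary sample unitary matrices, not from the text):

```python
import numpy as np

def is_unitary(M, tol=1e-12):
    """Check M M* = I for a square matrix M."""
    return np.allclose(M @ M.conj().T, np.eye(M.shape[0]), atol=tol)

theta = 0.7
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation: real unitary
T = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)     # a complex unitary matrix

assert is_unitary(S) and is_unitary(T)

# (ST)(ST)* = S(TT*)S* = SS* = I, so the product is again unitary.
assert is_unitary(S @ T)
```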
16.8.3 Corollary: Prove that the composite of orthogonal operators is orthogonal.
Proof: Similar as above.
16.8.4 Theorem:
Show that the inverse of a unitary operator is unitary.
Proof: Let V be an inner product space and T a unitary operator on V. We have to show that T⁻¹ is unitary.
Put v = T(u), i.e. u = T⁻¹(v).
Then ‖T⁻¹(v)‖ = ‖u‖ = ‖T(u)‖ = ‖v‖, since T is unitary.
Hence ‖T⁻¹(v)‖ = ‖v‖ for all v ∈ V, so T⁻¹ is unitary.
Also, since T is unitary, T⁻¹ = T*.
16.8.5 Show that the set of all unitary operators on an inner product space V is a group with respect to composition of operators.
Solution: Let G denote the set of all unitary operators on an inner product space V(F).
Let T₁, T₂ ∈ G, so that T₁T₁* = T₁*T₁ = I and T₂T₂* = T₂*T₂ = I.
Then (T₁T₂)(T₁T₂)* = T₁(T₂T₂*)T₁* = T₁IT₁* = T₁T₁* = I,
and (T₁T₂)*(T₁T₂) = T₂*(T₁*T₁)T₂ = T₂*IT₂ = T₂*T₂ = I.
i) Closure Property: If T₁, T₂ are any two unitary operators belonging to G, then T₁T₂ is a unitary operator and hence belongs to G. Hence G is closed.
ii) Composition of operators is associative, and the identity operator I is unitary, so I ∈ G is the identity element.
iii) Inverse: For T ∈ G, put T(u) = v .......... (1)
But T is unitary, so u = T⁻¹(v) and ‖T⁻¹(v)‖ = ‖u‖ = ‖T(u)‖ = ‖v‖.
Thus ‖T⁻¹(u)‖ = ‖u‖ for all u ∈ V, and T⁻¹ is also unitary.
So T⁻¹ ∈ G.
As all the group axioms are satisfied, the set G of all unitary operators on V is a group.
16.8.6 Let T be a unitary operator on an inner product space V, and let W be a finite dimensional T-invariant subspace of V. Prove that W⊥ is T-invariant.
Solution: Since W is T-invariant, T(w) ∈ W for any w ∈ W; as T is one-one and W is finite dimensional, T(W) = W.
Let u ∈ W⊥ and let w₁ ∈ W. Then w₁ = T(w) for some w ∈ W, and as T is unitary,
⟨T(u), w₁⟩ = ⟨T(u), T(w)⟩ = ⟨u, w⟩ = 0.
Thus ⟨T(u), w₁⟩ = 0 for all w₁ ∈ W.
This implies T(u) ⊥ W (T(u) is perpendicular to W), i.e. T(u) ∈ W⊥.
So for any u ∈ W⊥, T(u) ∈ W⊥; hence W⊥ is T-invariant.
16.8.7 Show that the determinant of a unitary operator has absolute value 1.
Solution: Let T be a unitary operator on a finite dimensional inner product space V(F). Let B be an ordered orthonormal basis for V and let A denote the matrix of T relative to B. Then
T is unitary ⟹ T*T = I ⟹ [T*]_B [T]_B = [I]_B ⟹ A*A = I.
Taking determinants, det(A*) · det A = 1, i.e. (det A)* · (det A) = 1, i.e. |det A|² = 1, so |det A| = 1.
So det A, and hence det T, has absolute value 1.
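A small numerical check of this fact (the matrix A below is an illustrative unitary matrix, not from the text):

```python
import numpy as np

# An illustrative unitary matrix: A = (1/sqrt(2)) [[1, i], [i, 1]].
A = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

assert np.allclose(A @ A.conj().T, np.eye(2))   # A is unitary

# det A is a unit complex number; here det A = (1 - i^2)/2 = 1.
assert np.isclose(abs(np.linalg.det(A)), 1.0)
```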
So a real unitary matrix is also orthogonal. In this case, we call it orthogonal rather than unitary.
ii) The condition AA* = I is equivalent to the statement that the rows of A form an orthonormal basis for Fⁿ, because
δᵢⱼ = Iᵢⱼ = (AA*)ᵢⱼ = Σₖ₌₁ⁿ Aᵢₖ(A*)ₖⱼ = Σₖ₌₁ⁿ Aᵢₖ Āⱼₖ,
and the last term represents the inner product of the i-th and j-th rows of A.
iii) The condition A*A = I is equivalent to the statement that the columns of A form an orthonormal basis of Fⁿ.
iv) A linear operator T on an inner product space V is unitary (orthogonal) if and only if [T]_B is unitary (orthogonal) for some orthonormal basis B for V.
Ex: The matrix
[ cos θ   −sin θ ]
[ sin θ    cos θ ]
is clearly orthogonal. One can easily see that the rows of the matrix form an orthonormal basis for R². Similarly the columns of the matrix form an orthonormal basis for R².
W.E.1: Let T be a reflection of R² about a line L through the origin, let B be the standard ordered basis for R², and let A = [T]_B; then T = L_A.
Since T is an orthogonal operator and B is an orthonormal basis, A is an orthogonal matrix. Describe A.
Solution: Suppose that θ is the angle from the positive x-axis to L. Let v₁ = (cos θ, sin θ) and v₂ = (−sin θ, cos θ); then W = {v₁, v₂} is an ordered orthonormal basis for R² with v₁ ∈ L and v₂ ∈ L⊥, so
[T]_W = [ 1   0 ]
        [ 0  −1 ]
Let Q = [ cos θ   −sin θ ]
        [ sin θ    cos θ ]
Thus A = Q [T]_W Q⁻¹ = [ cos 2θ    sin 2θ ]
                       [ sin 2θ   −cos 2θ ]
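The reflection matrix derived above can be verified numerically; a sketch with an arbitrary illustrative angle θ:

```python
import numpy as np

theta = 0.6  # illustrative angle between L and the positive x-axis

c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
A = np.array([[c2, s2], [s2, -c2]])        # reflection about L

# A is orthogonal: A A^T = I.
assert np.allclose(A @ A.T, np.eye(2))

# A fixes vectors on L and negates vectors on L-perp.
v1 = np.array([np.cos(theta), np.sin(theta)])    # spans L
v2 = np.array([-np.sin(theta), np.cos(theta)])   # spans L-perp
assert np.allclose(A @ v1, v1)
assert np.allclose(A @ v2, -v2)

# A = Q [T]_W Q^{-1}, with Q the rotation by theta (Q^{-1} = Q^T).
Q = np.column_stack([v1, v2])
D = np.diag([1.0, -1.0])
assert np.allclose(Q @ D @ Q.T, A)
```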
We know that for a complex normal (real symmetric) matrix A, there exists an orthonormal basis B for Fⁿ consisting of eigenvectors of A. Hence A is similar to a diagonal matrix D. We also know that if A ∈ M_{n×n}(F) and W is an ordered basis for Fⁿ, then [L_A]_W = Q⁻¹AQ, where Q is the n × n matrix whose j-th column is the j-th vector of W. Hence by this theorem, the matrix Q whose columns are the vectors in B is such that D = Q⁻¹AQ. But since the columns of Q form an orthonormal basis for Fⁿ, it follows that Q is unitary (orthogonal). Hence A is unitarily equivalent (orthogonally equivalent) to D.
16.10.1 Theorem:
Let V be a finite dimensional inner product space and T a linear operator on V. Then T is unitary if and only if the matrix of T in some (or every) ordered orthonormal basis for V is a unitary matrix.
Proof: V is a finite dimensional inner product space and T is a linear operator on V. Let B = {u₁, u₂, ..., uₙ} be an ordered orthonormal basis for V and let A be the matrix of T relative to B, i.e. [T]_B = A.
Suppose T is unitary. Then T*T = I, so A*A = [T*]_B [T]_B = [T*T]_B = [I]_B = I.
So A = [T]_B is unitary.
Converse: Suppose that the matrix A is unitary, so that A*A = I. Then
[T*]_B [T]_B = [I]_B ⟹ [T*T]_B = [I]_B ⟹ T*T = I.
So T is unitary.
From the above two cases the theorem follows.
16.10.2 Corollary: A linear operator T on an inner product space V is orthogonal if and only if [T]_B is orthogonal for some orthonormal basis B for V.
Proof: Let Cⁿ be the vector space V with the standard inner product defined on it and let B be its standard ordered basis. If T is the linear operator on V represented in the standard ordered basis by the matrix A, then [T]_B = A and [T*]_B = A*.
Case i) If A is a normal matrix, then A*A = AA*, and hence [TT*]_B = [T*T]_B, i.e. TT* = T*T, i.e. T is a normal operator. From the above (T being a normal operator on an inner product space V), it follows that there exists an orthonormal basis, say B₁, for V each vector of which is a characteristic vector of T, and hence [T]_{B₁} is a diagonal matrix D. Further, if P is the transition matrix from B to B₁, then it is a unitary matrix, as both B and B₁ are orthonormal bases. With A = P*DP we then get
AA* = (P*DP)(P*DP)*
    = (P*DP)(P*D*P**)
    = (P*DP)(P*D*P)
    = P*D(PP*)D*P
    = P*DID*P
    = P*(DD*)P ............... (1)
Solution: T is invertible, since there exists a linear transformation T⁻¹ which rotates every vector in R³ about the Z-axis by the same angle in the direction opposite to T. Hence
T⁻¹(T(u)) = (T⁻¹T)(u) = I(u) = u for all u ∈ R³.
Also, if u = (x, y, z), x, y, z ∈ R, then ‖u‖² = x² + y² + z². A rotation about the Z-axis leaves the z-coordinate fixed and preserves x² + y², so ‖T(u)‖² = x² + y² + z² = ‖u‖².
Hence ‖T(u)‖ = ‖u‖.
W.E.3: Show that the matrix
A = [ a + ic    b + id ]
    [ −b + id   a − ic ]
is unitary if and only if a² + b² + c² + d² = 1.
Solution: A* = [ a − ic   −b − id ]
               [ b − id    a + ic ]
Now AA* = [ a + ic    b + id ] [ a − ic   −b − id ]
          [ −b + id   a − ic ] [ b − id    a + ic ]
        = [ a² + b² + c² + d²           0          ]
          [         0           a² + b² + c² + d² ]
So AA* = I if and only if a² + b² + c² + d² = 1; hence A is unitary if and only if a² + b² + c² + d² = 1.
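The equivalence can be spot-checked numerically (the parameter values below are illustrative):

```python
import numpy as np

def A_of(a, b, c, d):
    """The matrix of W.E.3 for real parameters a, b, c, d."""
    return np.array([[a + 1j * c,  b + 1j * d],
                     [-b + 1j * d, a - 1j * c]])

def is_unitary(M):
    return np.allclose(M @ M.conj().T, np.eye(2))

# a^2 + b^2 + c^2 + d^2 = 1  ->  unitary
assert is_unitary(A_of(0.5, 0.5, 0.5, 0.5))

# sum of squares != 1  ->  not unitary
assert not is_unitary(A_of(1.0, 1.0, 0.0, 0.0))
```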
W.E.3A: Show that the matrix
A = [  0    2m   n ]
    [  l   −m    n ]
    [ −l   −m    n ]
where l = 1/√2, m = 1/√6, n = 1/√3, is orthogonal.
Solution: The columns of A are
C₁ = (0, l, −l); C₂ = (2m, −m, −m); C₃ = (n, n, n).
We have ⟨C₁, C₁⟩ = 0 + l² + l² = 2l² = 2 · 1/2 = 1
⟨C₂, C₂⟩ = 4m² + m² + m² = 6m² = 6 · 1/6 = 1
⟨C₃, C₃⟩ = n² + n² + n² = 3n² = 3 · 1/3 = 1
⟨C₁, C₂⟩ = 0 − lm + lm = 0
⟨C₂, C₃⟩ = 2mn − mn − mn = 0
⟨C₃, C₁⟩ = 0 · n + ln − ln = 0
Thus the columns of A form an orthonormal set of vectors. So A is orthogonal.
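Assuming the signs recovered above, the matrix can be verified numerically:

```python
import numpy as np

l, m, n = 1 / np.sqrt(2), 1 / np.sqrt(6), 1 / np.sqrt(3)

A = np.array([[ 0, 2 * m, n],
              [ l,    -m, n],
              [-l,    -m, n]])

# Columns (and rows) are orthonormal, so A^T A = A A^T = I.
assert np.allclose(A.T @ A, np.eye(3))
assert np.allclose(A @ A.T, np.eye(3))
```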
W.E.4: Find an orthogonal matrix P whose first row is (1/3, 2/3, 2/3).
Solution: Let u₁ = (1/3, 2/3, 2/3). We first find a nonzero vector w₂ = (x, y, z) orthogonal to u₁:
⟨u₁, w₂⟩ = 0 ⟹ ⟨(1/3, 2/3, 2/3), (x, y, z)⟩ = 0
⟹ x/3 + 2y/3 + 2z/3 = 0 ⟹ x + 2y + 2z = 0.
One solution is w₂ = (0, 1, −1).
Normalize w₂ to get the second row of P, i.e. u₂ = (0, 1/√2, −1/√2).
Next we find a nonzero vector w₃ = (x, y, z) with ⟨u₁, w₃⟩ = 0 and ⟨u₂, w₃⟩ = 0:
⟨u₁, w₃⟩ = 0 ⟹ ⟨(1/3, 2/3, 2/3), (x, y, z)⟩ = 0
⟹ x/3 + 2y/3 + 2z/3 = 0 ⟹ x + 2y + 2z = 0 ........... (1)
⟨u₂, w₃⟩ = 0 ⟹ ⟨(0, 1/√2, −1/√2), (x, y, z)⟩ = 0
⟹ 0x + y/√2 − z/√2 = 0 ⟹ y − z = 0 ............... (2)
Solving (1) and (2), y = z and x = −4z; put z = 1 to get w₃ = (−4, 1, 1), with ‖w₃‖ = √18 = 3√2.
i.e. u₃ = (−4/√18, 1/√18, 1/√18) = (−4/(3√2), 1/(3√2), 1/(3√2))
Hence the required orthogonal matrix is
P = [ 1/3        2/3       2/3     ]
    [ 0          1/√2     −1/√2    ]
    [ −4/(3√2)   1/(3√2)   1/(3√2) ]
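Assuming the signs recovered above, the matrix P can be verified numerically:

```python
import numpy as np

r2, r18 = np.sqrt(2), np.sqrt(18)

P = np.array([[ 1/3,    2/3,    2/3  ],
              [ 0,      1/r2,  -1/r2 ],
              [-4/r18,  1/r18,  1/r18]])

# P has orthonormal rows, hence is an orthogonal matrix.
assert np.allclose(P @ P.T, np.eye(3))
assert np.allclose(P.T @ P, np.eye(3))
```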
W.E.5: Let A = [ 1   1  −1 ]
               [ 1   3   4 ]
               [ 7  −5   2 ]
Determine whether or not (i) the rows of A are orthogonal, (ii) A is an orthogonal matrix.
Solution: (i) r₁ · r₂ = 1 + 3 − 4 = 0, r₁ · r₃ = 7 − 5 − 2 = 0, r₂ · r₃ = 7 − 15 + 8 = 0; so the rows of A are mutually orthogonal.
(ii) ‖r₁‖ = √3, which is not unity; the rows are orthogonal but not unit vectors, so A is not an orthogonal matrix.
W.E.6: For which values of λ is the following matrix unitary?
A = [ 1/2    λ   ]
    [ λ    −1/2 ]
Solution: A* = [ 1/2    λ̄   ]
               [ λ̄    −1/2 ]
AA* = [ 1/4 + λλ̄     (λ̄ − λ)/2 ]
      [ (λ − λ̄)/2    λλ̄ + 1/4  ]
A is unitary if AA* = I, i.e.
1/4 + λλ̄ = 1 and (λ̄ − λ)/2 = 0.
Solving, λ − λ̄ = 0 ⟹ λ is real, say λ = a.
(Since if λ = x + iy then λ̄ = x − iy, so (x + iy) − (x − iy) = 0 ⟹ y = 0.)
Again, for real λ we get 1/4 + λ² = 1,
so λ² = 1 − 1/4 = 3/4, i.e. λ = ±√3/2.
Hence the matrix A is unitary if λ = ±√3/2.
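Assuming the reconstructed form of A above, the answer λ = ±√3/2 can be checked numerically:

```python
import numpy as np

def A_of(lam):
    """The matrix of W.E.6 for a (possibly complex) scalar lambda."""
    return np.array([[0.5, lam], [lam, -0.5]], dtype=complex)

def is_unitary(M):
    return np.allclose(M @ M.conj().T, np.eye(2))

lam = np.sqrt(3) / 2
assert is_unitary(A_of(lam)) and is_unitary(A_of(-lam))

# A purely imaginary lambda of the same modulus fails the reality condition.
assert not is_unitary(A_of(1j * lam))
```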
W.E.7: If V(F) is a finite dimensional unitary space and T is a linear transformation on V(F), then show that T is self adjoint ⟺ ⟨T(u), u⟩ is real for each u ∈ V.
Solution: Suppose T is self adjoint, so that T* = T ...... (1)
For any u ∈ V, the conjugate of ⟨T(u), u⟩ is
⟨u, T(u)⟩ = ⟨u, T*(u)⟩ by (1)
= ⟨T(u), u⟩.
Thus ⟨T(u), u⟩ equals its own conjugate. So ⟨T(u), u⟩ is real.
Conversely, suppose ⟨T(u), u⟩ is real for each u ∈ V.
Then ⟨T(u), u⟩ equals its own conjugate, i.e. ⟨T(u), u⟩ = ⟨u, T(u)⟩.
Also ⟨T(u), u⟩ = ⟨u, T*(u)⟩, so ⟨u, T*(u)⟩ = ⟨u, T(u)⟩ for all u ∈ V,
i.e. ⟨u, (T* − T)(u)⟩ = 0 for all u ∈ V.
Since V is a complex inner product space, this forces T* − T = T₀, i.e. T* = T.
So T is self adjoint.
Hence from the two cases the result follows.
W.E.8: If B and B′ are two orthonormal bases for a finite dimensional complex inner product space V, prove that for each linear transformation T on V, the matrix [T]_{B′} is unitarily equivalent to the matrix [T]_B.
Solution: Let P be the transition matrix from B to B′; as B and B′ are orthonormal bases, P is a unitary matrix. Thus P*P = I, i.e. P* = P⁻¹.
Then [T]_{B′} = P⁻¹[T]_B P = P*[T]_B P, so [T]_{B′} is unitarily equivalent to [T]_B.
5. Let D be the diagonal matrix whose diagonal elements are the characteristic roots of A.
6. Then we get an orthogonal matrix P and a diagonal matrix D such that P*AP = D.
W.E.9: If A = [ 1  2 ]
              [ 2  1 ]
find an orthogonal matrix P and a diagonal matrix D such that PᵀAP = D.
Solution: The characteristic equation is
| 1 − λ     2   |
|   2     1 − λ | = 0 ⟹ (1 − λ)² − 4 = 0
⟹ (1 − λ + 2)(1 − λ − 2) = 0
⟹ (3 − λ)(1 + λ) = 0 ⟹ λ = 3, −1.
For λ = 3: (A − λI)X = O gives
[ 1 − 3     2   ] [ x ]   [ −2   2 ] [ x ]
[   2     1 − 3 ] [ y ] = [  2  −2 ] [ y ] = O
R₂ + R₁ gives [ −2  2 ] [ x ] = O ⟹ −2x + 2y = 0 ⟹ x = y.
              [  0  0 ] [ y ]
If x = 1, then y = 1. So v₁ = (1, 1).
For λ = −1: (A + I)X = O gives
[ 1 + 1     2   ] [ x ]
[   2     1 + 1 ] [ y ] = O ⟹ 2x + 2y = 0 ⟹ y = −x.
If x = 1, then y = −1. So v₂ = (1, −1).
Evidently v₂ is orthogonal to v₁.
Normalizing, u₁ = v₁/‖v₁‖ = (1/√2)(1, 1) = (1/√2, 1/√2) and u₂ = v₂/‖v₂‖ = (1/√2)(1, −1) = (1/√2, −1/√2).
Thus one possible choice for P (with columns u₁, u₂) is
P = [ 1/√2    1/√2 ] = (1/√2) [ 1   1 ]
    [ 1/√2   −1/√2 ]          [ 1  −1 ]
and D = [ 3   0 ]
        [ 0  −1 ]
Note: We can apply the Gram-Schmidt orthogonalisation process to find an orthonormal basis.
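The diagonalisation worked out above can be verified numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 1.0]])

r2 = np.sqrt(2)
P = np.array([[1/r2,  1/r2],
              [1/r2, -1/r2]])   # columns are the normalized eigenvectors
D = np.diag([3.0, -1.0])

assert np.allclose(P.T @ P, np.eye(2))      # P is orthogonal
assert np.allclose(P.T @ A @ P, D)          # P^T A P = D
```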
W.E.10: If A = [ 4  2  2 ]
               [ 2  4  2 ]
               [ 2  2  4 ]
find an orthogonal matrix P and a diagonal matrix D such that PᵀAP = D.
Solution: To find P we first find an orthonormal basis of eigenvectors.
The characteristic equation is
| 4 − λ     2       2   |
|   2     4 − λ     2   | = 0.
|   2       2     4 − λ |
On expansion we get the characteristic equation λ³ − 12λ² + 36λ − 32 = 0, where the coefficient 36 is the sum of the principal minors:
M₁₁ = | 4  2 ; 2  4 | = 16 − 4 = 12; M₂₂ = 16 − 4 = 12; M₃₃ = 16 − 4 = 12,
so M₁₁ + M₂₂ + M₃₃ = 12 + 12 + 12 = 36.
Testing λ = 2 by synthetic division:
2 |  1   −12    36   −32
  |         2   −20    32
     1   −10    16     0
So f(λ) = (λ − 2)(λ² − 10λ + 16) = 0
⟹ (λ − 2)(λ − 2)(λ − 8) = 0
So λ = 2, 2, 8.
For λ = 2: (A − 2I)X = O gives
[ 4 − 2     2       2   ] [ x ]   [ 2  2  2 ] [ x ]
[   2     4 − 2     2   ] [ y ] = [ 2  2  2 ] [ y ] = O
[   2       2     4 − 2 ] [ z ]   [ 2  2  2 ] [ z ]
By R₂ − R₁, R₃ − R₁ we get
[ 2  2  2 ] [ x ]
[ 0  0  0 ] [ y ] = O
[ 0  0  0 ] [ z ]
⟹ 2x + 2y + 2z = 0, i.e. x + y + z = 0 ........... (1)
This system has two independent solutions.
Put y = −1, z = 0; then x = 1, giving v₁ = (1, −1, 0).
For a second solution v₂ = (a, b, c) orthogonal to v₁ we need a + b + c = 0 and a − b = 0; if b = 1, then a = 1 and c = −2.
Thus v₁ = (1, −1, 0) and v₂ = (1, 1, −2) form an orthogonal basis for the eigenspace of λ = 2.
For λ = 8: (A − 8I)X = O gives
[ 4 − 8     2       2   ] [ x ]       [ −4    2    2 ] [ x ]
[   2     4 − 8     2   ] [ y ] = O ⟹ [  2   −4    2 ] [ y ] = O
[   2       2     4 − 8 ] [ z ]       [  2    2   −4 ] [ z ]
2R₂ + R₁, 2R₃ + R₁ gives
[ −4    2    2 ] [ x ]
[  0   −6    6 ] [ y ] = O
[  0    6   −6 ] [ z ]
R₃ + R₂ gives
[ −4    2    2 ] [ x ]
[  0   −6    6 ] [ y ] = O
[  0    0    0 ] [ z ]
⟹ −4x + 2y + 2z = 0 and −6y + 6z = 0,
or 2x − y − z = 0 and y − z = 0, i.e. y = z.
So 2x = 2z, i.e. x = z.
Put z = 1; then x = 1, y = 1, giving v₃ = (1, 1, 1).
Normalizing,
u₁ = v₁/‖v₁‖ = (1/√2)(1, −1, 0)
u₂ = v₂/‖v₂‖ = (1/√6)(1, 1, −2)
u₃ = v₃/‖v₃‖ = (1/√3)(1, 1, 1)
Aliter: we can apply the Gram-Schmidt orthogonalisation process to find an orthonormal basis.
Let P be the matrix whose columns are u₁, u₂, u₃. Then
P = [ 1/√2     1/√6    1/√3 ]
    [ −1/√2    1/√6    1/√3 ]
    [ 0       −2/√6    1/√3 ]
and the diagonal matrix D, formed with the characteristic roots, is
D = [ 2  0  0 ]
    [ 0  2  0 ]
    [ 0  0  8 ]
Note: In finding the basis for the eigenspace corresponding to λ = 2, we got x + y + z = 0 ...... (1). Putting z = 0 gives x = −y, i.e. the vector (1, −1, 0); putting y = 0 gives x = −z, and when z = 1, x = −1, giving (−1, 0, 1).
So {(1, −1, 0), (−1, 0, 1)} is a basis of the eigenspace for λ = 2, but this set is not orthogonal. So we can apply the Gram-Schmidt orthogonalisation process to obtain the orthogonal basis
{(1, −1, 0), ½(−1, −1, 2)}.
Find an orthogonal basis for the eigenspace for λ = 8; the union of these two bases is an orthogonal basis for R³. Normalising the vectors, we get the orthonormal basis.
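Similarly, the result of this example can be verified numerically:

```python
import numpy as np

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 4.0, 2.0],
              [2.0, 2.0, 4.0]])

r2, r6, r3 = np.sqrt(2), np.sqrt(6), np.sqrt(3)
P = np.array([[ 1/r2, 1/r6, 1/r3],
              [-1/r2, 1/r6, 1/r3],
              [ 0,   -2/r6, 1/r3]])   # columns u1, u2, u3
D = np.diag([2.0, 2.0, 8.0])

assert np.allclose(P.T @ P, np.eye(3))      # columns of P are orthonormal
assert np.allclose(P.T @ A @ P, D)          # P^T A P = D
```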
16.14 Summary:
In this lesson we discussed unitary operators, orthogonal operators and their equivalent characterizations, the inverse of a unitary operator, matrices representing unitary and orthogonal transformations, and unitarily (orthogonally) equivalent matrices.
4. Let A be a complex n x n matrix, then A is normal if and only if A is unitarily equivalent to a diagonal
matrix.
16.17 Exercise:
1. For each of the following matrices A, find an orthogonal or unitary matrix P and a diagonal matrix D such that P*AP = D.
i) A = [ 0  2  2 ]
       [ 2  0  2 ]
       [ 2  2  0 ]
Ans: P = [ 1/√2     1/√6    1/√3 ]
         [ −1/√2    1/√6    1/√3 ]
         [ 0       −2/√6    1/√3 ]
and D = [ −2    0   0 ]
        [  0   −2   0 ]
        [  0    0   4 ]
ii) A = [ 5   −2 ]
        [ −2   2 ]
Ans: P = [ 1/√5   −2/√5 ]
         [ 2/√5    1/√5 ]
and D = [ 1  0 ]
        [ 0  6 ]
2. Show that the matrices
[ 0  −1  0 ]     [ 1  0   0 ]
[ 1   0  0 ] and [ 0  i   0 ]
[ 0   0  1 ]     [ 0  0  −i ]
are unitarily equivalent.
3. Show that the matrices
[ 1  2 ]     [ i  4 ]
[ 2  i ] and [ 1  1 ]
are not unitarily equivalent.
4. Find the number of, and exhibit all, 2 × 2 orthogonal matrices of the form
[ 1/3  x ]
[  y   z ]
Ans: 4, namely
[ 1/3     √8/3 ]   [ 1/3     √8/3 ]   [ 1/3    −√8/3 ]   [ 1/3    −√8/3 ]
[ √8/3   −1/3  ] , [ −√8/3   1/3  ] , [ √8/3    1/3  ] , [ −√8/3  −1/3  ]
1 i 3i
2 (1 i )
3 2 15 1 i
1 4 3i 2
1
2
i) 2 3 2 15
i 1
ii)
1 i 5i
2 2 2
3 2 15
1 2
i
3 3
1 (1 i ) (1 i )
iii) i 1
2 (1 i ) (1 i )
iv)
23 3
6. Determine which of the following matrices are orthogonal:
i) [ 1/√2    1/√2   0 ]
   [ 1/√2   −1/√2   0 ]
   [ 0       0      1 ]
ii) [ 1/√14    2/√14    3/√14 ]
    [ 3/√10    0       −1/√10 ]
    [ −1/√35   5/√35   −3/√35 ]
7. Find an orthogonal matrix whose first row is
i) (1/√5, 2/√5)   ii) a multiple of (1, 1, 1)
Ans: i) [ 1/√5    2/√5 ]
        [ 2/√5   −1/√5 ]
ii) [ 1/√3    1/√3    1/√3 ]
    [ 0       1/√2   −1/√2 ]
    [ 2/√6   −1/√6   −1/√6 ]
8. Find a unitary matrix whose first row is
i) a multiple of (1, 1 − i)   ii) (1/2, i/2, (1 + i)/2)
Ans: i) [ 1/√3         (1 − i)/√3 ]
        [ (1 + i)/√3   −1/√3      ]
ii) [ 1/2          i/2          (1 + i)/2  ]
    [ (1 + i)/2    (1 − i)/2     0         ]
    [ 1/2          i/2         −(1 + i)/2  ]
9. Find a 3 × 3 orthogonal matrix P whose first two rows are multiples of u = (1, 1, 1) and v = (1, −2, 3) respectively.
Ans: P = [ 1/√3     1/√3     1/√3  ]
         [ 1/√14   −2/√14    3/√14 ]
         [ 5/√38   −2/√38   −3/√38 ]
10. Real matrices A and B are said to be orthogonally equivalent if there exists an orthogonal matrix P such that B = PᵀAP. Show that this relation is an equivalence relation.
11. Prove that if A and B are unitarily equivalent matrices, then A is positive definite if and only if B is
positive definite.
12. Let U be a unitary operator on an inner product space V, and let W be a finite dimensional U-invariant subspace of V. Prove that U(W) = W.
13. Let A and B be n × n matrices that are unitarily equivalent. Prove that tr(A*A) = tr(B*B).
- A. Mallikharjana Sarma