
Rings and Linear Algebra 1.1 Rings and Integral Domains

LESSON - 1
RINGS AND INTEGRAL DOMAINS
1.1 Objectives of the Lesson:
To learn the definitions of algebraic structures such as a ring, a field and an integral domain
and to study some examples of these structures and their basic properties.

1.2 Structure
1.3 Introduction

1.4 Definition and basic properties of a ring

1.5 Divisors of zero and cancellation laws

1.6 Integral Domain, Division Ring and Field

1.7 The characteristic of a ring

1.8 Subrings and subfields

1.9 Summary

1.10 Technical terms

1.11 Model exam questions

1.12 Exercises

1.3 Introduction:
In this lesson we define an algebraic structure called a ring. We also introduce the concepts of
a field and an integral domain. We learn some basic properties of a ring and, using these basic
properties, prove some theorems on rings and fields.

1.4.1 Definition of a Ring:

A ring is a nonempty set R together with two binary operations + and · called addition and
multiplication respectively such that

i) a, b ∈ R ⇒ a + b ∈ R

ii) (a + b) + c = a + (b + c), ∀ a, b, c ∈ R

iii) ∃ 0 ∈ R such that a + 0 = 0 + a = a, ∀ a ∈ R

iv) a ∈ R ⇒ ∃ −a ∈ R such that a + (−a) = 0 = (−a) + a
Centre for Distance Education 1.2 Acharya Nagarjuna University
v) a + b = b + a, ∀ a, b ∈ R

vi) a, b ∈ R ⇒ a·b ∈ R

vii) (a·b)·c = a·(b·c), ∀ a, b, c ∈ R

viii) a·(b + c) = a·b + a·c and (b + c)·a = b·a + c·a, ∀ a, b, c ∈ R

Note:

1) In a ring R the binary operation '+' is called addition and '·' is called multiplication.

2) We usually write ab instead of a·b.

3) R is a ring if i) (R, +) is an abelian group

ii) (R, ·) is a semigroup

iii) Multiplication is both left and right distributive over addition.

4) The element 0 in R is called the zero element of R.

5) Sometimes the ring R is denoted by (R, +, ·). The additive inverse of an element a is denoted
by −a.

1.4.2 Examples
1) Let R = {0} and +, · be the operations defined by 0 + 0 = 0 and 0 · 0 = 0. Then (R, +, ·)
is a ring. This ring is called the zero ring or null ring.

2) Let Z be the set of all integers and + and · the usual addition and multiplication respectively.
Then (Z, +, ·) is a ring.

3) Let Q be the set of all rational numbers and + and · the usual addition and multiplication
respectively. Then (Q, +, ·) is a ring.

4) The set of all real numbers R is a ring w.r.t. usual addition and multiplication.

5) The set of all complex numbers C is a ring w.r.t. usual addition and multiplication.

6) Let n > 0 be an integer. If a, b ∈ Z and n | (a − b) (i.e., n divides a − b) then a is said to be
congruent to b modulo n. This is denoted by a ≡ b (mod n). Congruence modulo n is an equiva-
lence relation on Z. Denote the equivalence class of an integer a by ā. Note that ā = b̄ iff
a ≡ b (mod n). Given a ∈ Z, there are integers q and r, with 0 ≤ r < n, such that a = nq + r.
Hence a − r = nq and a ≡ r (mod n). Therefore ā = r̄. Since a was arbitrary and 0 ≤ r < n, it
follows that every equivalence class must be one of 0̄, 1̄, 2̄, ..., (n−1)‾. Moreover these n equivalence
classes are distinct. For if 0 ≤ i < r < n, then 0 < r − i < n, and so n does not divide (r − i). Thus
i ≢ r (mod n) and hence ī ≠ r̄. Therefore there are exactly n equivalence classes. Let Zₙ denote the
set of all equivalence classes. Then Zₙ = {0̄, 1̄, 2̄, ..., (n−1)‾}. Define addition and multiplication by
ā + b̄ = (a + b)‾ and ā·b̄ = (ab)‾. Then Zₙ together with these operations is a ring. Zₙ is called the ring
of integers modulo n. It is customary to denote the elements in Zₙ as 0, 1, ..., n − 1 rather than
0̄, 1̄, ..., (n−1)‾. This notation will be used whenever convenient. Addition and multiplication in Zₙ are
sometimes written as +ₙ and ×ₙ respectively.
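The construction of Zₙ above can be sketched in a few lines of code. The class below is our own illustration (the name `Zn` is not from the lesson); the representatives 0, 1, ..., n−1 stand for the equivalence classes.

```python
# Illustrative sketch of the ring Z_n: the class a-bar is represented by a % n.
class Zn:
    def __init__(self, n):
        self.n = n

    def add(self, a, b):
        # a-bar + b-bar = (a + b)-bar
        return (a + b) % self.n

    def mul(self, a, b):
        # a-bar . b-bar = (ab)-bar
        return (a * b) % self.n

z6 = Zn(6)
print(z6.add(4, 5))   # 4 + 5 = 9 ≡ 3 (mod 6), so prints 3
print(z6.mul(2, 3))   # 2 . 3 = 6 ≡ 0 (mod 6), so prints 0
```

Reducing with `%` after each operation is exactly the rule ā + b̄ = (a + b)‾, since every integer falls into one of the n classes.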

1.4.3 Ring with Unity: A ring (R, +, ·) is said to be a ring with unity if there exists 1 ∈ R such that
a·1 = 1·a = a, ∀ a ∈ R.

1.4.4 Example: The ring of integers (Z, +, ·) is a ring with unity.

1.4.5 Note: If R is a ring with unity 1, then 1 is called the unity or multiplicative identity or simply an
identity element.

1.4.6 Commutative Ring: A ring (R, +, ·) is said to be a commutative ring if a·b = b·a, ∀ a, b ∈ R,
i.e. multiplication is commutative in R.

1.4.7 Example: The ring of integers (Z, +, ·) is a commutative ring.


1.4.8 Note:
i. If a ring R has no multiplicative identity then R is said to be a ring without unity.

ii. A ring R is said to be a non-commutative ring if R is not commutative.

1.4.9 Example:
i. The set of all even integers is a commutative ring without unity under usual addition and
multiplication.

ii. The set of all 2 × 2 matrices with real entries is a non-commutative ring under matrix
addition and multiplication (see 1.11.1).

1.4.10 Note: If (R, +, ·) is a ring then (R, +) is an abelian group. Therefore we have:

i) The zero element (additive identity) of R is unique and a + 0 = a, ∀ a ∈ R

ii) For a ∈ R the additive inverse −a is unique and a + (−a) = 0

iii) For a ∈ R, −(−a) = a

iv) For 0 ∈ R, −0 = 0

v) For a, b ∈ R, −(a + b) = −a − b

vi) For a, b, c ∈ R, a + b = a + c ⇒ b = c

and b + a = c + a ⇒ b = c

vii) The unity element 1 is unique, if R has a unity element 1.
1.4.11 Elementary Properties of Rings:

Theorem: If R is a ring then for a, b, c ∈ R

i) 0a = a0 = 0
ii) a(−b) = (−a)b = −(ab)
iii) (−a)(−b) = ab
iv) a(b − c) = ab − ac

Proof: i) 0a = (0 + 0)a since 0 + 0 = 0

= 0a + 0a by right distributive law

⇒ 0a + 0 = 0a + 0a since 0 is the additive identity

⇒ 0 = 0a by left cancellation law.

Similarly a0 = a(0 + 0) since 0 + 0 = 0

= a0 + a0 by left distributive law

⇒ a0 + 0 = a0 + a0 since 0 is the additive identity

⇒ 0 = a0 by left cancellation law.

∴ 0a = a0 = 0

ii) To show that a(−b) = −(ab) = (−a)b:

We have a(−b) + ab = a(−b + b) by left distributive law

= a0

= 0 by (i)

∴ a(−b) + ab = 0 ⇒ a(−b) = −(ab)

Similarly (−a)b + ab = (−a + a)b by right distributive law

= 0b

= 0 by (i)

∴ (−a)b + ab = 0 ⇒ (−a)b = −(ab)

iii) (−a)(−b) = −(a(−b)) by (ii)

= −(−(ab)) by (ii)

= ab

iv) a(b − c) = a(b + (−c))

= ab + a(−c) by left distributive law

= ab + (−(ac)) by (ii)

= ab − ac
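The four identities can be spot-checked numerically. The loop below over the ring Z₆ is our own sanity check, not part of the lesson; in Z₆ the additive inverse −x is represented by (−x) mod 6.

```python
# Verify 0a = 0, a(-b) = -(ab), (-a)(-b) = ab and a(b - c) = ab - ac in Z_6.
n = 6
neg = lambda x: (-x) % n                    # additive inverse in Z_n

for a in range(n):
    assert (0 * a) % n == 0                                       # (i)
    for b in range(n):
        assert (a * neg(b)) % n == neg((a * b) % n)               # (ii)
        assert (neg(a) * neg(b)) % n == (a * b) % n               # (iii)
        for c in range(n):
            assert (a * ((b - c) % n)) % n == (a * b - a * c) % n # (iv)

print("Theorem 1.4.11 holds in Z_6")
```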

1.4.12 Note: If R is a ring with unity element and a ∈ R, then

i) (−1)a = −a = a(−1)

ii) (−1)(−1) = 1

1.4.13 Definition: Idempotent Element: Let R be a ring. An element a ∈ R is said to be an
idempotent element if a² = a.

1.4.14 Example:
1) In the ring of integers (Z, +, ·), 0 and 1 are idempotent elements.

2) In the ring (Z₆, +₆, ×₆), 0, 1, 3, 4 are idempotent elements.

1.4.15 Boolean Ring: A ring R is said to be a Boolean ring if every element of R is an idempotent
element, i.e., a² = a, ∀ a ∈ R.

1.4.16 Theorem: If R is a Boolean ring then

i) a + a = 0, ∀ a ∈ R

ii) a + b = 0 ⇒ a = b

iii) R is commutative, i.e. ab = ba, ∀ a, b ∈ R.

Proof: Given that R is a Boolean ring.

i) Let a ∈ R ⇒ a + a ∈ R since R is closed w.r.t. '+'

⇒ (a + a)² = a + a since a² = a, ∀ a ∈ R

⇒ (a + a)(a + a) = a + a

⇒ (a + a)a + (a + a)a = a + a by left distributive law

⇒ (aa + aa) + (aa + aa) = a + a by right distributive law

⇒ (a² + a²) + (a² + a²) = a + a

⇒ (a + a) + (a + a) = a + a since a² = a, ∀ a ∈ R

⇒ (a + a) + (a + a) = (a + a) + 0 since 0 is the zero element of R

⇒ a + a = 0 by left cancellation law.

ii) Let a, b ∈ R and a + b = 0

⇒ a + b = b + b by (i), since b + b = 0

⇒ a = b by right cancellation law.

iii) Let a, b ∈ R ⇒ a + b ∈ R since R is closed under '+'

⇒ (a + b)² = a + b since R is a Boolean ring

⇒ (a + b)(a + b) = a + b

⇒ (a + b)a + (a + b)b = a + b by left distributive law

⇒ (a² + ba) + (ab + b²) = a + b by right distributive law

⇒ a + ba + ab + b = a + b since R is a Boolean ring

⇒ ba + ab = 0 by left and right cancellation laws

⇒ ba = ab by (ii)

∴ R is a commutative ring.
1.4.17 Definition: Nilpotent Element: Let R be a ring. An element a ∈ R is said to be a
nilpotent element if there exists a positive integer n such that aⁿ = 0.

1.4.18 Example: In the ring (Z₉, +₉, ×₉), 3 is a nilpotent element, since 3² = 9 ≡ 0 (mod 9).
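Both kinds of special elements can be found in Zₙ by direct search. The helper functions below are our own illustration (the names `idempotents` and `nilpotents` are not from the lesson).

```python
# Find the idempotent and nilpotent elements of Z_n by direct search (illustrative).
def idempotents(n):
    return [a for a in range(n) if (a * a) % n == a]

def nilpotents(n):
    # a is nilpotent iff a^k ≡ 0 (mod n) for some k > 0; the exponent k = n always suffices.
    return [a for a in range(n) if pow(a, n, n) == 0]

print(idempotents(6))   # [0, 1, 3, 4], matching Example 1.4.14
print(nilpotents(9))    # [0, 3, 6]; 3 is nilpotent since 3^2 = 9 ≡ 0 (mod 9)
```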


1.5 Zero Divisors of a Ring:
Let R be a ring. An element 0 ≠ a ∈ R is said to be a zero divisor if there exists b ≠ 0 in R
such that ab = 0.

1.5.1 Note: 1) A ring R is said to have zero divisors if there exist a, b ∈ R, a ≠ 0, b ≠ 0 but ab = 0.

2) A ring R is said to have no zero divisors if a, b ∈ R and ab = 0 ⇒ a = 0 or b = 0.

1.5.2 Examples: i) The ring (Z₆, +₆, ×₆) has zero divisors 2, 3 and 4. For 2 ×₆ 3 = 0, 3 ×₆ 4 = 0.

ii) The set R of all 2 × 2 matrices with real numbers as entries is a ring with zero divisors w.r.t.
addition and multiplication of matrices.

For example,

A = | 1 0 |   and   B = | 0 0 |
    | 0 0 |             | 1 0 |

are two non-zero elements of R but AB = O.

iii) The ring of integers Z is without zero divisors, for a, b ∈ Z and ab = 0 ⇒ a = 0 or b = 0.
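Both examples can be checked directly. The snippet below is our own illustration: it lists the zero divisors of Z₆ and multiplies the two matrices above.

```python
# List the zero divisors of Z_6 and check the 2x2 matrix example (illustrative).
def zero_divisors(n):
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

print(zero_divisors(6))   # [2, 3, 4], as in Example 1.5.2 (i)

# The matrix example: A and B are non-zero but AB = O.
A = [[1, 0], [0, 0]]
B = [[0, 0], [1, 0]]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(AB)                 # [[0, 0], [0, 0]]
```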

1.5.3 Cancellation Laws in a Ring: We say that cancellation laws hold in a ring R if
a ≠ 0, ab = ac ⇒ b = c and a ≠ 0, ba = ca ⇒ b = c for a, b, c ∈ R.

1.5.4 Theorem: A ring R is without zero divisors if and only if the cancellation laws hold in R.
Proof: Suppose R is a ring without zero divisors,

i.e. a, b ∈ R and ab = 0 ⇒ a = 0 or b = 0.

To prove that cancellation laws hold in R:

Let a ≠ 0, b, c ∈ R and ab = ac.

Now ab = ac ⇒ ab − ac = 0

⇒ a(b − c) = 0

⇒ b − c = 0 since a ≠ 0 and R is without zero divisors

⇒ b = c

Similarly we can show that ba = ca ⇒ b = c.

∴ Cancellation laws hold in R.

Conversely, suppose cancellation laws hold in R.
To prove that R has no zero divisors:

Let a, b ∈ R and ab = 0.

If possible suppose a ≠ 0.

Now ab = 0 ⇒ ab = a0

⇒ b = 0 by cancellation law.

Similarly if b ≠ 0, then

ab = 0 ⇒ ab = 0b

⇒ a = 0 by cancellation law.

∴ ab = 0 ⇒ either a = 0 or b = 0.
Hence R has no zero divisors.

1.6.1 Definition: Integral Domain: A ring (R, +, ·) is said to be an integral domain if i) R is a
commutative ring with unity 1 ≠ 0, ii) R is without zero divisors.

1.6.2 Example: The ring of integers (Z, +, ·) is an integral domain.

1.6.3 Theorem: A commutative ring R with unity 1 ≠ 0 is an integral domain if and only if cancel-
lation laws hold in R.
The proof of this theorem follows from the previous theorem.
1.6.4 Definition: Invertible Element (or) Unit: Let R be a ring with unity 1. A non-zero element
a ∈ R is said to be invertible if there exists b ∈ R such that ab = ba = 1. Here b is called the
multiplicative inverse of a and is denoted by a⁻¹.

1.6.5 Division Ring: A ring (R, +, ·) is said to be a division ring if i) R has unity 1 ≠ 0, ii) every
non-zero element has a multiplicative inverse.

1.6.6 Example: The ring of rational numbers (Q, +, ·) is a division ring.


1.6.7 Theorem: A division ring has no zero divisors.
Proof: Let R be a division ring.

Let a, b ∈ R and ab = 0.

If possible let a ≠ 0.

⇒ ∃ a⁻¹ ∈ R such that aa⁻¹ = a⁻¹a = 1 since R is a division ring.

Now ab = 0

⇒ a⁻¹(ab) = a⁻¹0

⇒ (a⁻¹a)b = 0

⇒ 1b = 0

⇒ b = 0

Similarly if b ≠ 0 we can prove that a = 0.

∴ ab = 0 ⇒ either a = 0 or b = 0.
Hence R has no zero divisors.
1.6.8 Definition: Field: A ring R with at least two elements is called a field if i) R is commuta-
tive, ii) R has unity, iii) every non-zero element of R has a multiplicative inverse.
1.6.9 Example: The rings of rational numbers, real numbers and complex numbers are fields.

1.6.10 Note: 1. A ring R is a field if (R − {0}, ·) is an abelian group.

2. A commutative division ring is a field.

1.6.11 Theorem: A field has no zero divisors.
Proof: A field is a commutative division ring. Hence from 1.6.7 it follows that a field has no
zero divisors.
1.6.12 Theorem: Every field is an integral domain.

Proof: Suppose (F, +, ·) is a field.

⇒ F is a commutative ring with unity.
To show that F is an integral domain, it is enough to show that F has no zero divisors.

Let a, b ∈ F and ab = 0.

If possible let a ≠ 0.

⇒ ∃ a⁻¹ ∈ F such that aa⁻¹ = a⁻¹a = 1 since F is a field.

Now ab = 0 ⇒ a⁻¹(ab) = a⁻¹0

⇒ (a⁻¹a)b = 0

⇒ 1b = 0

⇒ b = 0

Similarly if b ≠ 0 we can show that a = 0.

Hence ab = 0 ⇒ either a = 0 or b = 0.

∴ F has no zero divisors.

Now F is a commutative ring with unity and without zero divisors,
i.e. F is an integral domain.
1.6.13 Note: The converse of the above theorem is not true, i.e. an integral domain need not be
a field.
Example: The ring of integers is an integral domain, but not a field.
Theorem: A finite integral domain is a field.
Proof: Let R be a finite integral domain with n elements. Now R is a commutative ring without zero
divisors. To prove that R is a field it is enough to prove that i) R has a unity element, ii) every non-zero
element of R has a multiplicative inverse.

Let R = {a₁, a₂, ..., aₙ} where the aᵢ's are n distinct elements, and let 0 ≠ a ∈ R.

Now the n products aa₁, aa₂, ..., aaₙ ∈ R.

If possible let aaᵢ = aaⱼ for 1 ≤ i ≤ n, 1 ≤ j ≤ n and i ≠ j

⇒ aaᵢ − aaⱼ = 0

⇒ a(aᵢ − aⱼ) = 0

⇒ aᵢ − aⱼ = 0 since a ≠ 0 and R is an integral domain

⇒ aᵢ = aⱼ

This is a contradiction to the fact that the aᵢ's are distinct.

∴ The n products aa₁, aa₂, ..., aaₙ are distinct elements of R, and hence
{aa₁, aa₂, ..., aaₙ} = R.

Existence of Identity:

Since a ∈ R = {aa₁, aa₂, ..., aaₙ}, it follows that a = aaᵢ for some aᵢ ∈ R.

We claim that aᵢ is the identity element of R.

Let b ∈ R ⇒ b = aaⱼ for some aⱼ ∈ R.

Now baᵢ = (aaⱼ)aᵢ

= a(aⱼaᵢ)

= a(aᵢaⱼ) since R is commutative

= (aaᵢ)aⱼ

= aaⱼ since aaᵢ = a

= b

∴ aᵢ is the identity element of R (by commutativity, aᵢb = baᵢ = b as well).

Existence of Inverse: Since the identity 1 ∈ R we have 1 = aaₖ for some aₖ ∈ R.

∴ For 0 ≠ a ∈ R there exists aₖ ∈ R such that aaₖ = 1.

∴ Every non-zero element of R is invertible. Hence R is a field.
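The pigeonhole argument above can be watched in action in the finite integral domain Z₇ (our own illustration): for each a ≠ 0 the products a·x are distinct, so they sweep out all of Z₇ and some x satisfies a·x = 1.

```python
# In Z_7 the map x -> a*x is a bijection for a != 0, so inverses exist (illustrative).
p = 7
for a in range(1, p):
    products = sorted((a * x) % p for x in range(p))
    assert products == list(range(p))                  # the p products are distinct
    inv = next(x for x in range(p) if (a * x) % p == 1)
    print(a, "has inverse", inv, "in Z_7")             # e.g. 3 has inverse 5, since 3*5 = 15 ≡ 1
```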


1.7.1 Integral Multiples: In a ring R we define 0a = 0, where the left-hand zero is the integer 0,
the right-hand zero is the zero element of the ring R, and a ∈ R.

We define na = a + a + ... + a (n times), where n is a positive integer, and

(−n)a = (−a) + (−a) + ... + (−a) (n times).

Note that (−n)a = n(−a) = −(na).

1.7.2 Note: If m, n are integers and a, b are elements of a ring R then

i) (m + n)a = ma + na

ii) m(na) = (mn)a

iii) m(a + b) = ma + mb

iv) m(ab) = (ma)b = a(mb)

v) (ma)(nb) = mn(ab)

1.7.3 Integral Powers: If m, n are positive integers and a, b are elements of a ring R then

i) aᵐ = a·a· ... ·a (m times)

ii) aᵐ·aⁿ = aᵐ⁺ⁿ

iii) (aᵐ)ⁿ = aᵐⁿ

1.7.4 Characteristic of a Ring: The characteristic of a ring R is said to be n if n is the least
positive integer such that na = 0, ∀ a ∈ R.

If there is no such positive integer n then we say that the characteristic of the ring R is zero
(or infinite).

1.7.5 Example: 1. The characteristic of the ring (Z₅, +₅, ×₅) is 5.

2. The characteristic of the ring (Z, +, ·) is zero.
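The characteristic of Zₙ can be computed by brute force over the definition; the helper below is our own, not from the lesson.

```python
# Characteristic of Z_n: the least k > 0 with k*a ≡ 0 (mod n) for every a (illustrative).
def characteristic(n):
    for k in range(1, n + 1):
        if all((k * a) % n == 0 for a in range(n)):
            return k
    return 0   # never reached for Z_n, but matches the "characteristic zero" convention

print(characteristic(5))   # 5, as in Example 1.7.5
print(characteristic(6))   # 6
```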

1.7.6 Theorem: The characteristic of a ring with unity is zero or n > 0, according as the order of
the unity element, regarded as a member of the additive group of the ring, is infinite or n > 0
respectively.
Proof: Suppose R is a ring with unity element 1. Suppose the order of 1, regarded as an element
of the group (R, +), is not finite.

⇒ There is no positive integer n such that n·1 = 0

⇒ There is no positive integer n such that na = 0, ∀ a ∈ R

⇒ Characteristic of R is zero.

Suppose the order of 1 is n > 0, regarded as an element of the group (R, +).

⇒ n is the least positive integer such that n·1 = 0.

Let a ∈ R.

Now na = a + a + ... + a (n times)

= a(1 + 1 + ... + 1) (n times)

= a(n·1)

= a0 since n·1 = 0

= 0

∴ na = 0, ∀ a ∈ R.

Further, n is the least positive integer such that na = 0, ∀ a ∈ R, since if m < n then
m·1 ≠ 0. Hence the characteristic of the ring R is n.
1.7.7 Theorem: The characteristic of an integral domain is either zero or prime.
Proof: Suppose R is an integral domain.
Let p be the characteristic of R.
If p = 0 there is nothing to prove.

Suppose p ≠ 0.

We claim that p is a prime number.

If possible assume that p is not a prime number.
⇒ p = mn where 1 < m, n < p.
Let 0 ≠ a ∈ R.

⇒ pa = 0 since p is the characteristic of R

⇒ pa² = 0

⇒ mna² = 0

⇒ (ma)(na) = 0

⇒ ma = 0 or na = 0 since R has no zero divisors.

Suppose ma = 0.

Let b ∈ R.

Now ma = 0 ⇒ (ma)b = 0

⇒ (mb)a = 0 since m(ab) = m(ba)

⇒ mb = 0 since a ≠ 0 and R is without zero divisors

∴ mb = 0, ∀ b ∈ R

A contradiction, since m < p and the characteristic of R is p.

Similarly we can arrive at a contradiction if na = 0.

∴ Our assumption that p is not a prime is wrong.

Hence p is a prime number.
1.7.8 Corollary: The characteristic of a field is either zero or prime.
Proof: Every field is an integral domain.
∴ By Theorem 1.7.7 it follows that the characteristic of a field is either zero or prime.
1.8.1 Definition: Subring: Let R be a ring. A non-empty subset S of R is said to be a subring of
R if S itself is a ring relative to the same operations as in R.

1.8.2 Example: (Z, +, ·) is a subring of (Q, +, ·).

1.8.3 Note: If R is a ring then S = {0}, where 0 is the zero element of R, and S = R are subrings of
R. These subrings are called trivial or improper subrings of R. A subring other than the above two
is called a non-trivial or proper subring of R.

1.8.4 Theorem: Let R be a ring. A non-empty subset S of R is a subring iff a, b ∈ S ⇒ a − b ∈ S
and ab ∈ S.

Proof: Suppose R is a ring and S a non-empty subset of R. Suppose S is a subring of R.

Let a, b ∈ S.

b ∈ S ⇒ −b ∈ S since additive inverses exist in S.

Now a, −b ∈ S ⇒ a + (−b) = a − b ∈ S by the closure law for addition. Also S is closed w.r.t. multiplication.

∴ a, b ∈ S ⇒ ab ∈ S.

Hence a, b ∈ S ⇒ a − b ∈ S and ab ∈ S.

Conversely, suppose S is a non-empty subset of R such that a, b ∈ S ⇒ i) a − b ∈ S and
ii) ab ∈ S.
To show that S is a subring:

Existence of zero element: Since S is non-empty, S has at least one element, say a ∈ S.

a ∈ S ⇒ a − a ∈ S by the given condition

⇒ 0 ∈ S

Existence of additive inverse:

0 ∈ S and a ∈ S ⇒ 0 − a ∈ S
⇒ −a ∈ S

Closure w.r.t. addition:

Let a, b ∈ S.

b ∈ S ⇒ −b ∈ S

Now a, −b ∈ S ⇒ a − (−b) ∈ S by condition (i)

⇒ a + b ∈ S

Since elements of S are elements of R, the associative laws for addition and multiplication, the
commutative law for addition and the distributive laws hold good in S.
Also by condition (ii) S is closed w.r.t. multiplication.
∴ S is a ring and hence a subring of R.
1.8.5 Theorem: The intersection of two subrings of a ring R is a subring of R.

Proof: Let R be a ring and S₁, S₂ be two subrings of R.

Let S = S₁ ∩ S₂. To show that S is a subring of R:

Since every subring contains the zero element we have 0 ∈ S₁ and 0 ∈ S₂.

∴ 0 ∈ S₁ ∩ S₂ = S

∴ S is a non-empty subset of R.

Let a, b ∈ S ⇒ a, b ∈ S₁ ∩ S₂

⇒ a, b ∈ S₁ and a, b ∈ S₂

a, b ∈ S₁ ⇒ a − b ∈ S₁ and ab ∈ S₁ since S₁ is a subring

a, b ∈ S₂ ⇒ a − b ∈ S₂ and ab ∈ S₂ since S₂ is a subring

∴ a − b ∈ S₁ ∩ S₂ and ab ∈ S₁ ∩ S₂

i.e. a − b ∈ S and ab ∈ S.

Hence S is a subring of R by Theorem 1.8.4.

1.8.6 Note: 1. The intersection of an arbitrary family of subrings is a subring.
2. The union of two subrings need not be a subring.

Let S₁ = {2n : n ∈ Z} and S₂ = {3n : n ∈ Z}.

S₁ and S₂ are two subrings of the ring of integers (Z, +, ·).

S₁ ∪ S₂ = {..., −6, −4, −3, −2, 0, 2, 3, 4, 6, 8, 9, ...}

Clearly 2, 3 ∈ S₁ ∪ S₂.

But 3 − 2 = 1 ∉ S₁ ∪ S₂.

Hence S₁ ∪ S₂ is not a subring.

1.8.7 Theorem: Let S₁, S₂ be two subrings of a ring R.

Then S₁ ∪ S₂ is a subring iff S₁ ⊆ S₂ or S₂ ⊆ S₁.

(OR)
The union of two subrings of a ring R is again a subring iff one is contained in the other.

Proof: Let R be a ring and S₁, S₂ be two subrings of R.

Suppose S₁ ∪ S₂ is a subring.

If possible assume that S₁ ⊄ S₂ and S₂ ⊄ S₁.

S₁ ⊄ S₂ ⇒ there is an element a ∈ S₁ with a ∉ S₂ ... (1)

S₂ ⊄ S₁ ⇒ there is an element b ∈ S₂ with b ∉ S₁ ... (2)

a ∈ S₁ ⇒ a ∈ S₁ ∪ S₂

b ∈ S₂ ⇒ b ∈ S₁ ∪ S₂

Now a, b ∈ S₁ ∪ S₂ and S₁ ∪ S₂ is a subring

⇒ a + b ∈ S₁ ∪ S₂

⇒ a + b ∈ S₁ or a + b ∈ S₂

If a + b ∈ S₁ then (a + b) − a ∈ S₁, since a ∈ S₁ and S₁ is a subring

⇒ b ∈ S₁

A contradiction to (2).

If a + b ∈ S₂ then (a + b) − b ∈ S₂, since b ∈ S₂ and S₂ is a subring

⇒ a ∈ S₂

A contradiction to (1).

∴ a + b ∉ S₁ and a + b ∉ S₂

⇒ a + b ∉ S₁ ∪ S₂

⇒ S₁ ∪ S₂ is not a subring,

which is a contradiction to the hypothesis.

∴ Our assumption S₁ ⊄ S₂ and S₂ ⊄ S₁ is wrong.

Hence either S₁ ⊆ S₂ or S₂ ⊆ S₁.

Conversely, suppose S₁ ⊆ S₂ or S₂ ⊆ S₁.

If S₁ ⊆ S₂ then S₁ ∪ S₂ = S₂, which is a subring.

If S₂ ⊆ S₁ then S₁ ∪ S₂ = S₁, which is a subring.

∴ If either S₁ ⊆ S₂ or S₂ ⊆ S₁, then S₁ ∪ S₂ is a subring.

1.8.8 Definition: Subfield: Let (F, +, ·) be a field. A non-empty subset K of F is said to be a
subfield of F if K itself is a field w.r.t. the same operations as in F.

1.8.9 Example: The set of all rational numbers (Q, +, ·) is a subfield of (R, +, ·).

1.8.10 Theorem: Let (F, +, ·) be a field. A non-empty subset K of F is a subfield of F iff

i) a, b ∈ K ⇒ a − b ∈ K

ii) a ∈ K, 0 ≠ b ∈ K ⇒ ab⁻¹ ∈ K and

iii) 1 ∈ K.

Proof: Let F be a field and
K a non-empty subset of F.
Suppose K is a subfield of F.

Let a, b ∈ K.

b ∈ K ⇒ −b ∈ K since additive inverses exist in a field.

Now a, −b ∈ K ⇒ a + (−b) ∈ K by the closure axiom for addition

⇒ a − b ∈ K

Let a, 0 ≠ b ∈ K.

0 ≠ b ∈ K ⇒ b⁻¹ ∈ K since multiplicative inverses exist in a field.

Now a, b⁻¹ ∈ K ⇒ ab⁻¹ ∈ K by the closure axiom for multiplication.

Clearly 1 ∈ K.

∴ Conditions (i), (ii) and (iii) hold good.

Conversely, suppose the conditions (i) a, b ∈ K ⇒ a − b ∈ K,

(ii) a ∈ K, 0 ≠ b ∈ K ⇒ ab⁻¹ ∈ K and (iii) 1 ∈ K hold good.

From condition (i) it follows that (K, +) is a subgroup of (F, +). From (ii) and (iii) it follows
that (K − {0}, ·) is a subgroup of (F − {0}, ·).

Since K ⊆ F, the commutative laws for addition and multiplication and the distributive laws hold for
elements of K.
Hence K is a subfield of F.

1.9 Summary:
In this lesson we learnt the definitions of Ring, Integral domain, Field, Subring and Subfield.
We proved some theorems on Rings, Integral domains and fields.

1.10 Technical Terms:


i) Ring
ii) Commutative Ring
iii) Ring with Unity
iv) Idempotent Element
v) Boolean Ring
vi) Nilpotent Element
vii) Zero Divisors
viii) Integral Domain
ix) Unit or Invertible Element.
x) Division Ring or Skew Field
xi) Field
xii) Subring
xiii) Subfield
xiv) Characteristic of a Ring

1.11 Model Questions:


1.11.1 The set M of all n × n matrices with entries as real numbers is a non-commutative ring with
unity with respect to addition and multiplication of matrices.
Solution: Let M be the set of all n × n matrices with real entries.
i) We know that the sum of two n × n matrices and the product of two n × n matrices is again an
n × n matrix.

∴ M is closed w.r.t. addition and multiplication of matrices, i.e. A, B ∈ M ⇒ A + B ∈ M
and AB ∈ M.
ii) We know that addition of matrices is associative and commutative.

∴ (A + B) + C = A + (B + C), ∀ A, B, C ∈ M

A + B = B + A, ∀ A, B ∈ M

iii) The null matrix Oₙₓₙ ∈ M and A + O = A, ∀ A ∈ M.

iv) To each A ∈ M there is −A ∈ M and A + (−A) = Oₙₓₙ.

v) Multiplication of matrices is associative.

∴ (AB)C = A(BC), ∀ A, B, C ∈ M

vi) We know that multiplication of matrices is distributive over addition.

∴ A(B + C) = AB + AC

(B + C)A = BA + CA, ∀ A, B, C ∈ M

∴ M is a ring.

The unit matrix Iₙₓₙ ∈ M and IA = AI = A, ∀ A ∈ M.

∴ M is a ring with unity I.

But we know that multiplication of matrices is not commutative (for n ≥ 2).
Hence M is a non-commutative ring with unity.

1.11.2 The set Z(i) = {x + iy : x, y ∈ Z} of Gaussian integers is a commutative ring with unity
w.r.t. addition and multiplication of complex numbers.

Solution: Given Z(i) = {x + iy : x, y ∈ Z}.

Let a = x₁ + iy₁ and b = x₂ + iy₂ ∈ Z(i)

⇒ x₁, y₁, x₂, y₂ ∈ Z.

Now a + b = (x₁ + iy₁) + (x₂ + iy₂)

= (x₁ + x₂) + i(y₁ + y₂)

∴ a + b ∈ Z(i) since x₁ + x₂ and y₁ + y₂ ∈ Z

ab = (x₁ + iy₁)(x₂ + iy₂)

= (x₁x₂ − y₁y₂) + i(x₁y₂ + x₂y₁)

∴ ab ∈ Z(i)

i.e. Z(i) is closed under addition and multiplication. We know that addition of complex
numbers is associative and commutative.

∴ The associative law and commutative law for addition hold good in Z(i), since Z(i) is a sub-
set of the set of complex numbers C.

Clearly 0 = 0 + i0 ∈ Z(i) and

a + 0 = (x₁ + iy₁) + (0 + i0) = x₁ + iy₁ = a

∴ 0 is the zero element of Z(i).

a ∈ Z(i) ⇒ −a = −x₁ − iy₁ ∈ Z(i) and

a + (−a) = (x₁ + iy₁) + (−x₁ − iy₁) = 0

∴ −a is the additive inverse of a.

We know that multiplication of complex numbers is associative and commutative.

∴ a(bc) = (ab)c and ab = ba, ∀ a, b, c ∈ Z(i), since Z(i) ⊆ C.

Also multiplication of complex numbers is distributive over addition.

∴ a(b + c) = ab + ac, ∀ a, b, c ∈ Z(i), since Z(i) ⊆ C.

Clearly 1 = 1 + 0i ∈ Z(i) and a·1 = (x₁ + iy₁)(1 + 0i) = a.

∴ Z(i) is a commutative ring with unity.

1.11.3 If Q(√2) = {a + b√2 : a, b ∈ Q} then Q(√2) is a field with respect to addition and multipli-
cation of real numbers.

Solution: Q(√2) = {a + b√2 : a, b ∈ Q}

Let x = a + b√2 and y = c + d√2 ∈ Q(√2)

⇒ a, b, c, d ∈ Q.

Now x + y = a + b√2 + c + d√2

= (a + c) + (b + d)√2

∴ x + y ∈ Q(√2) since a + c, b + d ∈ Q

xy = (a + b√2)(c + d√2)

= (ac + 2bd) + (ad + bc)√2

∴ xy ∈ Q(√2) since ac + 2bd, ad + bc ∈ Q

∴ Q(√2) is closed under addition and multiplication.

We know that addition of real numbers is associative and commutative. Since Q(√2) ⊆ R,

we have (x + y) + z = x + (y + z) and x + y = y + x, ∀ x, y, z ∈ Q(√2).

Clearly 0 = 0 + 0√2 ∈ Q(√2) and x + 0 = x, ∀ x ∈ Q(√2).

x ∈ Q(√2) ⇒ −x = −a − b√2 ∈ Q(√2) and x + (−x) = 0

∴ −x is the additive inverse of x.

Also multiplication of real numbers is associative, commutative and distributive over addition.

∴ x(yz) = (xy)z, xy = yx, x(y + z) = xy + xz, for all x, y, z ∈ Q(√2), since Q(√2) ⊆ R.

Clearly 1 = 1 + 0√2 ∈ Q(√2) and x·1 = (a + b√2)(1 + 0√2) = x

∴ 1 is the multiplicative identity.

Let 0 ≠ x = a + b√2 ∈ Q(√2)

⇒ a ≠ 0 or b ≠ 0.

Note that a² − 2b² ≠ 0: if a² − 2b² = 0 then a² = 2b², and since a, b ∈ Q and √2 is irrational
this forces a = 0 and b = 0, contradicting x ≠ 0.

Now 1/x = 1/(a + b√2) = (a − b√2)/((a + b√2)(a − b√2)) = (a − b√2)/(a² − 2b²)

= a/(a² − 2b²) − (b/(a² − 2b²))√2 ∈ Q(√2)

since a/(a² − 2b²), b/(a² − 2b²) ∈ Q.

∴ Every non-zero element of Q(√2) has a multiplicative inverse. Hence Q(√2) is a field.
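The inverse formula can be checked with exact rational arithmetic. The pair representation (a, b) for a + b√2 and the helper names `mul` and `inv` below are our own, not from the lesson.

```python
# Exact arithmetic in Q(sqrt(2)): a + b*sqrt(2) is stored as a pair of Fractions (illustrative).
from fractions import Fraction

def mul(x, y):
    (a, b), (c, d) = x, y
    # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
    return (a * c + 2 * b * d, a * d + b * c)

def inv(x):
    a, b = x
    den = a * a - 2 * b * b        # nonzero for (a, b) != (0, 0), since √2 is irrational
    return (a / den, -b / den)     # the formula derived in 1.11.3

x = (Fraction(1), Fraction(3))     # the element 1 + 3√2
print(mul(x, inv(x)))              # (Fraction(1, 1), Fraction(0, 1)), i.e. the element 1
```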

1.11.4 Let Zₚ = {0, 1, 2, 3, ..., p − 1} where p is a prime. Then Zₚ is a field w.r.t. addition modulo p
and multiplication modulo p.

Solution: Zₚ = {0, 1, 2, 3, ..., p − 1}

Let a, b ∈ Zₚ.

a ⊕ b = the remainder when a + b is divided by p.

a ⊗ b = the remainder when ab is divided by p.

Clearly a ⊕ b and a ⊗ b ∈ Zₚ

∴ Zₚ is closed under addition and multiplication modulo p.

Let a, b, c ∈ Zₚ.

(a ⊕ b) ⊕ c = r₁ ⊕ c where a + b = q₁p + r₁, 0 ≤ r₁ < p

= r₂ where r₁ + c = q₂p + r₂, 0 ≤ r₂ < p

Now a + b + c = q₁p + r₁ + c = q₁p + q₂p + r₂ = (q₁ + q₂)p + r₂

∴ r₂ is the remainder when a + b + c is divided by p.

a ⊕ (b ⊕ c) = a ⊕ s₁ where b + c = m₁p + s₁, 0 ≤ s₁ < p

= s₂ where a + s₁ = m₂p + s₂, 0 ≤ s₂ < p

Now a + b + c = a + m₁p + s₁ = m₁p + a + s₁ = m₁p + m₂p + s₂

= (m₁ + m₂)p + s₂

∴ s₂ is the remainder when a + b + c is divided by p.

∴ r₂ = s₂ ⇒ (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c)

i.e. addition modulo p is associative.

Clearly 0 ∈ Zₚ and

a ⊕ 0 = a

∴ 0 is the zero element of Zₚ.

Let a ∈ Zₚ.

If a = 0 then the additive inverse of a is 0 itself.

If a ≠ 0 then 0 < a < p ⇒ 0 > −a > −p

⇒ p > p − a > 0

⇒ p − a ∈ Zₚ

Now a ⊕ (p − a) = the remainder when a + (p − a) = p is divided by p

= 0

∴ p − a is the additive inverse of a ≠ 0.

Clearly a ⊕ b = the remainder when a + b is divided by p

= the remainder when b + a is divided by p, since a + b = b + a

= b ⊕ a

∴ Addition modulo p is commutative.

(a ⊗ b) ⊗ c = r₁ ⊗ c where ab = q₁p + r₁, 0 ≤ r₁ < p

= r₂ where r₁c = q₂p + r₂, 0 ≤ r₂ < p

Now abc = (q₁p + r₁)c = q₁pc + r₁c = q₁pc + q₂p + r₂

= (q₁c + q₂)p + r₂

∴ r₂ is the remainder when abc is divided by p.

a ⊗ (b ⊗ c) = a ⊗ s₁ where bc = m₁p + s₁, 0 ≤ s₁ < p

= s₂ where as₁ = m₂p + s₂, 0 ≤ s₂ < p

Now abc = a(m₁p + s₁) = am₁p + as₁ = am₁p + m₂p + s₂

= (am₁ + m₂)p + s₂

∴ s₂ is the remainder when abc is divided by p.

∴ r₂ = s₂ ⇒ (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c), i.e. multiplication modulo p is associative.

Clearly 1 ∈ Zₚ and a ⊗ 1 = 1 ⊗ a = a, ∀ a ∈ Zₚ

∴ 1 is the multiplicative identity.

Clearly a ⊗ b = b ⊗ a, ∀ a, b ∈ Zₚ, since ab = ba.

∴ Multiplication modulo p is commutative.

Hence Zₚ is a commutative ring with unity.

Suppose a ⊗ b = 0 where a, b ∈ Zₚ

⇒ 0 is the remainder when ab is divided by p

⇒ p divides ab

⇒ p | a or p | b since p is prime

⇒ a = 0 or b = 0 since 0 ≤ a < p and 0 ≤ b < p

∴ Zₚ has no zero divisors.

∴ Zₚ is an integral domain.

Since Zₚ is a finite integral domain, Zₚ is a field.

Remark: The ring Zₚ described above is the same (up to isomorphism) as the ring Zₙ defined in
Example 6 of 1.4.2, with n = p.
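That every non-zero element of Zₚ is invertible can also be checked directly. The snippet below uses the well-known shortcut a⁻¹ ≡ a^(p−2) (mod p) from Fermat's little theorem, which is not proved in the lesson; it is just a convenient way to exhibit the inverses.

```python
# In Z_p (p prime) every nonzero a is invertible; a^(p-2) mod p is its inverse (illustrative).
p = 11
for a in range(1, p):
    inv = pow(a, p - 2, p)        # Fermat: a^(p-1) ≡ 1, so a * a^(p-2) ≡ 1 (mod p)
    assert (a * inv) % p == 1
print("every nonzero element of Z_11 has an inverse")
```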
1.11.5 Give example of a division ring which is not a field.

 a  ib c  id  
Solution: Let A    / a, b, c, d  R 
 c  id a  id  
We know that the set M of all matrices with complex numbers as entries is a non-commu-
tative ring with unity.
Clearly A is a non-empty subset of M.

Let X, Y ∈ A ⟹ X = [ a₁+ib₁  c₁+id₁ ; −c₁+id₁  a₁−ib₁ ] and

Y = [ a₂+ib₂  c₂+id₂ ; −c₂+id₂  a₂−ib₂ ]

Now X + Y = [ (a₁+a₂)+i(b₁+b₂)  (c₁+c₂)+i(d₁+d₂) ; −(c₁+c₂)+i(d₁+d₂)  (a₁+a₂)−i(b₁+b₂) ] ∈ A

XY = [ a₁+ib₁  c₁+id₁ ; −c₁+id₁  a₁−ib₁ ] [ a₂+ib₂  c₂+id₂ ; −c₂+id₂  a₂−ib₂ ]

The (1,1) entry of XY is (a₁a₂ − b₁b₂ − c₁c₂ − d₁d₂) + i(a₁b₂ + a₂b₁ + c₁d₂ − c₂d₁) and the
(1,2) entry is (a₁c₂ − b₁d₂ + a₂c₁ + b₂d₁) + i(a₁d₂ + b₁c₂ + a₂d₁ − b₂c₁); the (2,1) entry is
the negative of the conjugate of the (1,2) entry and the (2,2) entry is the conjugate of the
(1,1) entry, so XY again has the form of the elements of A.

⟹ XY ∈ A
Hence X, Y ∈ A ⟹ X + Y ∈ A and XY ∈ A

∴ A is a subring of M and hence a ring.

1 0
Since M is a non commutative ring with unity I   
0 1

For 0 ≠ X ∈ A, where X = [ a+ib  c+id ; −c+id  a−ib ]

X ≠ 0 ⟹ at least one of a, b, c, d is not equal to zero

⟹ |X| = a² + b² + c² + d² ≠ 0, and hence X is invertible.

Hence every non-zero element of A has a multiplicative inverse.

 A is a division ring.
Since A is non-commutative it is not a field.
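The computations in example 1.11.5 can be illustrated numerically. This is only a sketch (the helper names `quat`, `matmul` and `det` are ours): it checks that the determinant of such a matrix is a² + b² + c² + d², so every nonzero element is invertible, and that A is not commutative.

```python
def quat(a, b, c, d):
    """The matrix [[a+ib, c+id], [-c+id, a-ib]] from example 1.11.5."""
    return [[complex(a, b), complex(c, d)], [complex(-c, d), complex(a, -b)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

X = quat(1, 2, 3, 4)
Y = quat(0, 1, 0, 0)

assert det(X) == 1 + 4 + 9 + 16          # |X| = a^2 + b^2 + c^2 + d^2
assert matmul(X, Y) != matmul(Y, X)      # A is not commutative
```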
1.11.6 Show that the characteristic of a Boolean ring R is 2.
Solution: Let R be a Boolean ring

⟹ a² = a, ∀ a ∈ R

Let a ∈ R ⟹ a + a ∈ R

⟹ (a + a)² = a + a since R is a Boolean ring

⟹ (a + a)(a + a) = a + a

⟹ a² + a² + a² + a² = a + a

⟹ a + a + a + a = a + a since a² = a

⟹ a + a = 0

⟹ 2a = 0, ∀ a ∈ R

∴ The characteristic of a Boolean ring is 2.
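A concrete Boolean ring makes the result tangible. The example below is our own choice, not from the text: the subsets of a finite set form a Boolean ring under symmetric difference (as addition) and intersection (as multiplication).

```python
from itertools import combinations

universe = {1, 2, 3}
subsets = [frozenset(c) for r in range(4) for c in combinations(universe, r)]

add = lambda a, b: a ^ b    # symmetric difference plays the role of +
mul = lambda a, b: a & b    # intersection plays the role of multiplication
zero = frozenset()          # the empty set is the zero element

assert all(mul(a, a) == a for a in subsets)       # a^2 = a: a Boolean ring
assert all(add(a, a) == zero for a in subsets)    # a + a = 0: characteristic 2
```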

1.11.7 If the characteristic of a ring R is 2 and a, b ∈ R commute then (a + b)² = a² + b².

Solution: Let R be a ring with characteristic 2.

Let a, b ∈ R such that ab = ba

Now (a + b)² = (a + b)(a + b)

= a² + ab + ba + b²

= a² + ab + ab + b²

= a² + 2ab + b²

= a² + 0 + b² since ch. of R is 2, 2x = 0 ∀ x ∈ R

= a² + b²

1.11.8 If R is a ring and C(R) = { x ∈ R : ax = xa, ∀ a ∈ R } then show that C(R) is a subring of R.

C(R) is called the centre of R.

Solution: Let R be a ring.

C(R) = { x ∈ R : ax = xa, ∀ a ∈ R }

Clearly 0 ∈ R and a·0 = 0·a = 0, ∀ a ∈ R

⟹ 0 ∈ C(R)

∴ C(R) is a non-empty subset of R.

Let x, y  C ( R)

 ax  xa and ay  ya  a  R

Now a(x − y) = ax − ay

= xa − ya

= (x − y)a, ∀ a ∈ R

a(xy) = (ax)y = (xa)y = x(ay) = x(ya)

= (xy)a, ∀ a ∈ R

∴ x − y, xy ∈ C(R)

Hence C(R) is a subring of R, by 1.8.4.
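The centre can be computed by brute force in a small finite ring. The sketch below (setup and names are ours) finds C(R) for the ring of 2 × 2 matrices over Z₂ and shows that it consists exactly of the scalar matrices, in line with exercise 11 below.

```python
from itertools import product

def mul(X, Y, p=2):
    """Multiply 2x2 matrices with entries reduced mod p."""
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

# all 16 matrices over Z_2, each a tuple of rows
ring = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]

# C(R): elements commuting with every element of the ring
centre = [X for X in ring if all(mul(X, Y) == mul(Y, X) for Y in ring)]

# only the scalar matrices 0 and I survive
assert centre == [((0, 0), (0, 0)), ((1, 0), (0, 1))]
```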

1.11.9 The set of all those integers which are multiples of a given integer say m is a subring of
the ring of integers.

Solution: Let S = { mx : x ∈ Z } where m is a given integer.

Clearly 0 = m·0 ⟹ 0 ∈ S

∴ S is a non-empty subset of Z.

Let a, b ∈ S

⟹ a = mx and b = my for some x, y ∈ Z

Now a − b = mx − my

= m(x − y)

⟹ a − b ∈ S since x − y ∈ Z

Also ab = mx·my

= m(mxy)

⟹ ab ∈ S since mxy ∈ Z

Hence S is a subring of Z by 1.8.4.
1.11.10 Show that 0 and 1 are the only idempotent elements of an integral domain.
Solution: Let R be an integral domain.

Suppose a  R is an idempotent element.

⟹ a² = a

⟹ a² − a = 0

⟹ a(a − 1) = 0

⟹ a = 0 or a − 1 = 0, since R is an integral domain.

⟹ a = 0 or a = 1
 0 and 1 are the only idempotent elements of an integral domain.
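As a numerical contrast (the example is ours): in Z₇, an integral domain, only 0 and 1 are idempotent, while Z₁₂, which has zero divisors, acquires extra idempotents.

```python
def idempotents(n):
    """Elements a of Z_n with a^2 = a (mod n)."""
    return [a for a in range(n) if (a * a) % n == a]

assert idempotents(7) == [0, 1]         # Z_7 is an integral domain
assert idempotents(12) == [0, 1, 4, 9]  # Z_12 has zero divisors, so more appear
```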
2. Exercise:
1. Prove that the set of even integers is a ring with respect to usual addition and multiplication.

2. In the set of integers, addition ⊕ and multiplication ⊙ are defined as a ⊕ b = a + b − 1 and

a ⊙ b = a + b − ab, ∀ a, b ∈ Z. Prove that (Z, ⊕, ⊙) is a commutative ring.

3. Is R = { a√2 : a ∈ Q } a ring under ordinary addition and multiplication of real numbers?

4. Is the set of all pure imaginary numbers { iy : y ∈ R } a ring with respect to addition and
multiplication of complex numbers?

5. Let Q = { α₀ + α₁i + α₂j + α₃k : α₀, α₁, α₂, α₃ ∈ R } where i, j, k are the quaternion units

(i² = j² = k² = ijk = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j). Show that Q is a division ring with

respect to addition and multiplication defined as follows. For X, Y ∈ Q where

X = α₀ + α₁i + α₂j + α₃k,  Y = β₀ + β₁i + β₂j + β₃k.

X + Y = (α₀ + β₀) + (α₁ + β₁)i + (α₂ + β₂)j + (α₃ + β₃)k

XY = (α₀β₀ − α₁β₁ − α₂β₂ − α₃β₃) + (α₀β₁ + α₁β₀ + α₂β₃ − α₃β₂)i + (α₀β₂ + α₂β₀ + α₃β₁ − α₁β₃)j + (α₀β₃ + α₃β₀ + α₁β₂ − α₂β₁)k

This ring is called the ring of quaternions. This ring is not commutative.

6. Find the zero divisors of the ring (Z₁₂, +₁₂, ×₁₂)

7. Prove that ±1, ±i are the only units of the ring of Gaussian integers Z(i).

8. Show that the set of matrices [ a b ; 0 c ] where a, b, c ∈ Z is a subring of the ring of 2 × 2 matrices
whose elements are integers.

9. If R is a division ring show that C(R) = { x ∈ R : xa = ax, ∀ a ∈ R } is a field.

10. Is S = { 1̄, 3̄, 5̄ } a subring of the ring Z₆ of residue classes modulo 6 under addition and
multiplication of residue classes?

11. Find the centre of the ring of 3 × 3 matrices M₃(R), where R is the field of reals.

- Smt. K. Ruth
Rings and Linear Algebra 2.1 Ideals and Homomorphism of Rings

LESSON - 2

IDEALS AND HOMOMORPHISM OF RINGS


2.1 Objective of the Lesson:
To learn the definition of ideals, types of ideals, homomorphism of rings and theorems
related to them.

2.2 Structure
2.3 Introduction

2.4 Definition of Ideal and Ideal generated by a subset

2.5 Principal ideal and Principal ideal ring.

2.6 Prime ideal and maximal ideal

2.7 Quotient ring

2.8 Homomorphism of rings

2.9 Kernel of a homomorphism

2.10 Isomorphism of rings and Fundamental theorem.

2.11 Summary

2.12 Technical Terms

2.13 Model Questions

2.14 Exercises

2.3 Introduction:
In this lesson we define Right Ideal, Left Ideal, Ideal, Principal Ideal, Prime Ideal, Maximal
Ideal and Quotient rings. We also define homomorphism of rings and Kernel of a homomorphism.

2.4.1 Right Ideal: Let R be a ring. A nonempty subset I of R is said to be a right ideal of R if

i) a, b  I  a  b  I

ii) a  I , r  R  ar  I

2.4.2 Left Ideal: Let R be a ring : A nonempty subset I of R is said to be a left ideal of R if

i) a, b  I  a  b  I

ii) a  I , r  R  ra  I

2.4.3 Example: 1. The set I = { [ 0 a ; 0 b ] : a, b ∈ Z } is a left ideal of the ring of 2 × 2 matrices with
integers as entries but not a right ideal.

2. The set I = { [ a b ; 0 0 ] : a, b ∈ Z } is a right ideal of the ring of 2 × 2 matrices with integers as entries
but not a left ideal.

2.4.4 Ideal: Let R be a ring. A non-empty subset I of R is said to be an ideal of R if

i) a, b  I  a  b  I

ii) a  I , r  R  ar and ra  I

2.4.5 Example: 1. Let R be a ring. Then I  0 where 0 is the zero element of the ring R is an
ideal of R.

For 0 − 0 = 0 ∈ I and 0·r = r·0 = 0 ∈ I, ∀ r ∈ R

This ideal is called the Null ideal or zero ideal.


2. The ring R itself is an ideal of R.

For a, b ∈ R ⟹ a − b ∈ R, and a ∈ R, r ∈ R ⟹ ar and ra ∈ R.

This ideal is called the unit ideal.


The above two ideals are called trivial or improper ideals of R.

3. The set of even integers I = { 2n : n ∈ Z } is an ideal of the ring of integers.

2.4.6 Note: 1. Every ideal of a ring is a subring of the ring.


Let I be an ideal of a ring R.

Let a, b ∈ I ⟹ a − b ∈ I since I is an ideal.

b ∈ I ⟹ b ∈ R

∴ a ∈ I, b ∈ R and I is an ideal ⟹ ab ∈ I.

Hence by 1.8.4 I is a subring.


2. The converse of the above note is not true, i.e. every subring of a ring need not be an ideal of
the ring.

We know that (Z, +, ·) is a subring of (Q, +, ·).

We have 2 ∈ Z and 1/4 ∈ Q

But 2 · (1/4) = 1/2 ∉ Z

∴ Z is not an ideal of Q.

2.4.7 Corollary: Let I be an ideal of a ring R with unity element 1. If 1  I then I  R .

Proof: Let I be an ideal of a ring R with unity element 1. Suppose 1  I .

Clearly I  R ..... (1)

Let x  R

Now 1  I , x  R and I is an ideal.

 1.x  I

 xI
R  I ..... (2)

From (1) and (2) IR


2.4.8 Theorem: A field has no proper ideals.
(OR)

The only ideals of a field are {0} and the field itself

Proof: Let F be a field.

We know that I = {0} and I = F are ideals of F, by 2.4.5

Let I ≠ {0} be an ideal of F.

Since I ≠ {0} there is 0 ≠ a ∈ I
 aF since I  F

 a 1  F such that aa 1  a 1a  1 since F is a field.

Now a  I , a 1  F and I is an ideal



⟹ aa⁻¹ ∈ I

⟹ 1 ∈ I

Now by 2.4.7, I = F

∴ Every non-zero ideal of F is equal to F itself.

Hence the only ideals of a field are {0} and F itself.

2.4.9 Algebra of Ideals:


Theorem: The intersection of two ideals of a ring R is an ideal of R.

Proof: Let R be a ring and I1 , I 2 be two ideals of R

Take I  I1  I 2

Clearly 0 ∈ I₁ and 0 ∈ I₂ ⟹ 0 ∈ I₁ ∩ I₂ = I

 I is a non-empty subset of R.
Let a, b  I and r  R .

a , b  I  a , b  I1  I 2

 a, b  I1 and a, b  I 2

 a  b  I1 , ra and ar  I1 since I1 is an ideal of R and

a  b I2 , ra and ar  I 2 since I 2 is an ideal of R.

Now a  b  I1 , a  b  I 2 ra  I1 , ra  I 2 , ar  I1 , ar  I 2

 a  b  I1  I 2  I , ra  I1  I 2  I , ar  I1  I 2  I .

Hence I is an ideal of R.

2.4.10. Note: Union of two ideals need not be an ideal.


2.4.11 Theorem: Union of two ideals of a ring R is an ideal if and only if one is contained in the
other.

Proof: Let R be a ring and I1 , I 2 be two ideals of a ring R.

Suppose I1  I 2 is an ideal of R.

To prove that I₁ ⊆ I₂ or I₂ ⊆ I₁.

If possible assume that I₁ ⊄ I₂ and I₂ ⊄ I₁

I₁ ⊄ I₂ ⟹ there is a ∈ I₁ and a ∉ I₂ ........ (1)

I₂ ⊄ I₁ ⟹ there is b ∈ I₂ and b ∉ I₁ ........ (2)

a  I1  a  I1  I 2

b  I 2  b  I1  I 2

Now a, b  I1  I 2  a  b  I1  I 2 since I1  I 2 is an ideal.

a  b  I1  I 2  a  b  I1 or a  b  I 2

If a + b ∈ I₁ then (a + b) − a ∈ I₁ since a ∈ I₁ and I₁ is an ideal

⟹ b ∈ I₁

A contradiction to (2). ∴ a + b ∉ I₁ ...... (3)

If a + b ∈ I₂ then (a + b) − b ∈ I₂ since b ∈ I₂ and I₂ is an ideal

⟹ a ∈ I₂

A contradiction to (1). ∴ a + b ∉ I₂ ....... (4)

∴ a + b ∉ I₁ ∪ I₂ from (3) and (4)

 I1  I 2 is not an ideal.

This is a contradiction.
Hence our assumption is wrong.

 either I1  I 2 or I 2  I1

Conversely suppose I₁ ⊆ I₂ or I₂ ⊆ I₁

If I1  I 2 Then I1  I 2  I 2 which is an ideal

If I 2  I1 Then I1  I 2  I1 which is an ideal



 Either I1  I 2 or I 2  I1 ,  I1  I 2 is an ideal

2.4.12 Theorem: If I₁ and I₂ are two ideals of a ring R then I₁ + I₂ = { a + b : a ∈ I₁ and b ∈ I₂ }
is an ideal of R.

Proof: Let R be a ring and I₁, I₂ be two ideals of R.

Suppose I₁ + I₂ = { a + b : a ∈ I₁ and b ∈ I₂ }

Clearly 0 ∈ I₁ and 0 ∈ I₂

⟹ 0 = 0 + 0 ∈ I₁ + I₂

∴ I₁ + I₂ is a non-empty subset of R.

Let x, y  I1  I 2 and r  R

Then x  a1  b1 where a1  I1 and b1  I 2

y  a2  b2 where a2  I1 and b2  I 2

Now x  y  (a1  b1 )  (a2  b2 )

 (a1  a2 )  (b1  b2 )

Since a1 , a2  I1 , b1 , b2  I 2 and I1 , I 2 are ideals

We have a1  a2  I1 and b1  b2  I 2

 x  y  I1  I 2 ........ (1)

xr  (a1  b1 )r  a1r  b1r  I1  I 2

Now rx  r (a1  b1 )  ra1  rb1  I1  I 2 since a1  I1 , b1  I 2 , r  R and I1 , I 2 are ideals.

 rx and xr  I1  I 2 ........ (2)

From (1) and (2) I1  I 2 is an ideal.

2.4.13 Note: I₁ + I₂ is an ideal containing both I₁ and I₂.

By the above theorem, I₁ + I₂ is an ideal.



Clearly x ∈ I₁ ⟹ x = x + 0 ∈ I₁ + I₂ since 0 ∈ I₂

∴ I₁ ⊆ I₁ + I₂

y ∈ I₂ ⟹ y = 0 + y ∈ I₁ + I₂ since 0 ∈ I₁

∴ I₂ ⊆ I₁ + I₂

I₁ ⊆ I₁ + I₂ and I₂ ⊆ I₁ + I₂ ⟹ I₁ ∪ I₂ ⊆ I₁ + I₂

2.4.14 Definition: Ideal Generated by a subset:


Let R be a ring and S a subset of R. An ideal I of R is said to be generated by S

if i) S ⊆ I ii) I′ is an ideal of R and S ⊆ I′ ⟹ I ⊆ I′

The ideal generated by S is denoted by ⟨S⟩

2.4.15 Note: The ideal generated by S is the smallest ideal containing S.

2.4.16 Theorem: If I₁ and I₂ are two ideals of a ring R then I₁ + I₂ is the ideal generated by I₁ ∪ I₂,
i.e. I₁ + I₂ = ⟨I₁ ∪ I₂⟩

Proof: Let R be a ring and I1 , I 2 be two ideals of a ring R.

By 2.4.12, I₁ + I₂ is an ideal of R.

By 2.4.13, I₁ ∪ I₂ ⊆ I₁ + I₂ ....... (1)

Let I′ be an ideal of R such that I₁ ∪ I₂ ⊆ I′

To prove that I₁ + I₂ ⊆ I′

Let x ∈ I₁ + I₂

⟹ x = a + b where a ∈ I₁ and b ∈ I₂

a ∈ I₁ ⟹ a ∈ I₁ ∪ I₂ ⟹ a ∈ I′ since I₁ ∪ I₂ ⊆ I′

b ∈ I₂ ⟹ b ∈ I₁ ∪ I₂ ⟹ b ∈ I′

∴ a, b ∈ I′ and I′ is an ideal

⟹ a + b ∈ I′

⟹ x ∈ I′

Hence I₁ + I₂ ⊆ I′ ........ (2)

From (1) and (2), I₁ + I₂ = ⟨I₁ ∪ I₂⟩

2.5.1 Definition: Principal Ideal: Let R be a ring. An ideal I of R is said to be a principal ideal of
R if I is generated by a single element of R, i.e. I = ⟨a⟩ where a ∈ R.

2.5.2 Example: (1) The null ideal {0} of a ring R is a principal ideal.

(2) In a ring R with unity element, R itself is a principal ideal.


Definition: If a and b are nonzero elements of a ring R such that ab = 0, then a and b are called
divisors of zero (or zero divisors)

Definition: A commutative ring R with identity 1 ≠ 0 and no zero divisors is called an integral
domain.
2.5.3 Principal Ideal Domain: An integral domain R with unity is said to be a principal ideal
domain if every ideal of R is a principal ideal.

2.5.4 Note: Every field is a principal ideal domain. We know that a field has only two ideals, {0} and
itself, by 2.4.8.

Clearly {0} is generated by 0, i.e. {0} = ⟨0⟩

and F is generated by 1, i.e. F = ⟨1⟩

∴ The ideals of a field are principal ideals and hence a field is a principal ideal domain.
2.5.5. Theorem: The ring of integers is a principal ideal domain

Proof: Let (Z, +, ·) be the ring of integers and I be an ideal of Z.

If I = {0} then I is a principal ideal.

Suppose I ≠ {0}

⟹ there is 0 ≠ a ∈ I

⟹ −a ∈ I since I is an ideal.

Since a, −a ∈ I, I contains at least one positive integer.

Let I⁺ be the set of all positive integers of I.

Then I⁺ is non-empty.

∴ By the well ordering principle I⁺ has a least element, say b.

We claim that I = ⟨b⟩

Let x ∈ I

Now x, b are integers and b > 0.

By the division algorithm there exist q, r ∈ Z

such that x = bq + r where 0 ≤ r < b.

Now b ∈ I, q ∈ Z and I is an ideal.

⟹ bq ∈ I

⟹ x − bq ∈ I since x ∈ I

⟹ r ∈ I
Since b is the least positive integer in I and 0 ≤ r < b, r cannot be positive.

∴ r = 0

⟹ x = bq, q ∈ Z

⟹ I = { bq : q ∈ Z } = ⟨b⟩
Hence Z is a principal ideal domain.
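The well-ordering argument in the proof can be mimicked by brute force. In the sketch below (the function name is ours, and the coefficient bound is an arbitrary search limit), the least positive Z-linear combination of a set of generators turns out to be their gcd, which generates the same ideal.

```python
from math import gcd
from itertools import product

def least_positive_combination(gens, bound=10):
    """Least positive Z-linear combination of gens with coefficients in
    [-bound, bound] -- a brute-force stand-in for the well-ordering step."""
    best = None
    for coeffs in product(range(-bound, bound + 1), repeat=len(gens)):
        v = sum(c * g for c, g in zip(coeffs, gens))
        if v > 0 and (best is None or v < best):
            best = v
    return best

# the least positive element generates the whole ideal: it is the gcd
assert least_positive_combination([12, 18]) == gcd(12, 18) == 6
assert least_positive_combination([4, 7]) == 1
```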

2.6.1 Definition: Prime ideal: A proper ideal I of a commutative ring R is said to be a prime ideal
if a, b ∈ R and ab ∈ I ⟹ a ∈ I or b ∈ I

2.6.2 Example : In an integral domain, null ideal is prime.


Let R be an integral domain.

Suppose a, b ∈ R and ab ∈ {0}

⟹ ab = 0

⟹ a = 0 or b = 0 since R has no zero divisors

⟹ a ∈ {0} or b ∈ {0}

∴ {0} is a prime ideal.

2.6.3 Definition: Maximal ideal: Let R be a ring and M an ideal of R such that M ≠ R. M is
said to be a maximal ideal of R if for any ideal I of R with M ⊆ I ⊆ R, either I = M or I = R.

2.6.4 Theroem: An ideal M of the ring of integers Z is maximal iff M is generated by some prime
number.
Proof: Let Z be the ring of integers.
We know that Z is a principal ideal domain (by 2.5.5)

Suppose M is a maximal ideal of Z. Then M = ⟨n⟩, where n ≠ 0.

To prove that n is a prime number.


If possible assume that n is not prime.

⟹ n = pq where p and q are integers such that 1 < p < n and 1 < q < n.

Now I = ⟨p⟩ is an ideal of Z and M ⊆ I ⊆ Z

M is maximal ⟹ I = M or I = Z

If I = M then p ∈ ⟨n⟩

⟹ p = nm for some m ∈ Z

⟹ p = pqm, since n = pq

⟹ mq = 1

⟹ m = ±1 and q = ±1

This is a contradiction, since 1 < q < n.

If I = Z then 1 ∈ ⟨p⟩

⟹ p = ±1. Again this is a contradiction.

 Our assumption that n is not prime is wrong.



Hence M = ⟨n⟩ where n is a prime number.

Conversely suppose M is generated by a prime number,

i.e. M = ⟨n⟩, where n is a prime number.

To prove that M is a maximal ideal of Z.

Let I be an ideal of Z such that M ⊆ I ⊆ Z.

I is an ideal of Z ⟹ I = ⟨m⟩ for some m ≥ 0.

M ⊆ I ⊆ Z ⟹ ⟨n⟩ ⊆ ⟨m⟩ ⊆ Z

⟹ n ∈ ⟨m⟩

⟹ n = mq, q ∈ Z

⟹ m = ±1 or q = ±1, since n is prime.

If m = ±1 then I = ⟨m⟩ = Z

If q = ±1 then n = ±m

⟹ ⟨n⟩ = ⟨m⟩

⟹ M = I

∴ Either I = M or I = Z

∴ M is a maximal ideal of Z.
2.6.5 Note: In the ring of integers an ideal generated by a composite number is not maximal.

Eg: Let M = ⟨6⟩

Clearly ⟨6⟩ ⊆ ⟨3⟩ ⊆ Z, and ⟨3⟩ ≠ ⟨6⟩ and ⟨3⟩ ≠ Z

∴ M is not maximal.
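Theorem 2.6.4 and note 2.6.5 can be checked computationally for Z. The helper name below is ours; it tests whether some ⟨m⟩ sits strictly between ⟨n⟩ and Z, which happens exactly when n has a divisor m with 1 < m < n.

```python
def is_maximal_ideal_of_Z(n):
    """<n> is maximal in Z iff no ideal <m> lies strictly between <n> and Z,
    i.e. iff n has no divisor m with 1 < m < n -- iff n is prime."""
    return n > 1 and all(n % m != 0 for m in range(2, n))

assert is_maximal_ideal_of_Z(7)
assert not is_maximal_ideal_of_Z(6)   # <6> sits inside <3>, as in note 2.6.5
```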

2.7.1 Quotient Rings:

Coset: Let R be a ring and I an ideal of R. Then r + I = { r + a : a ∈ I } is called a coset of
I generated by r.

Since (R, +) is an abelian group, r + I = I + r, ∀ r ∈ R; r + I is also called the residue class
containing r.

2.7.2 Theorem: Let R be a ring and I an ideal of R.

Then R/I = { r + I : r ∈ R } is a ring with respect to the addition and multiplication of cosets

defined by (a + I) + (b + I) = (a + b) + I

(a + I)(b + I) = ab + I for a + I, b + I ∈ R/I

Proof: Let R be a ring and I an ideal of R.

R/I = { r + I : r ∈ R }

Addition of residue classes is well defined:
Addition of resedue classes is well defined:

Suppose a₁ + I = a₂ + I and b₁ + I = b₂ + I

⟹ a₁ − a₂ ∈ I and b₁ − b₂ ∈ I, since from groups a + H = b + H ⟺ a − b ∈ H.

⟹ (a₁ − a₂) + (b₁ − b₂) ∈ I since I is an ideal.

⟹ (a₁ + b₁) − (a₂ + b₂) ∈ I

⟹ (a₁ + b₁) + I = (a₂ + b₂) + I

⟹ (a₁ + I) + (b₁ + I) = (a₂ + I) + (b₂ + I)

 Addition is well defined.


Multiplication is well defined:

Suppose a₁ + I = a₂ + I and b₁ + I = b₂ + I

⟹ a₁ − a₂ ∈ I and b₁ − b₂ ∈ I

⟹ a₁ = a₂ + u₁ and b₁ = b₂ + u₂ for some u₁, u₂ ∈ I

Now a₁b₁ = (a₂ + u₁)(b₂ + u₂)

= a₂b₂ + a₂u₂ + u₁b₂ + u₁u₂

⟹ a₁b₁ − a₂b₂ = a₂u₂ + u₁b₂ + u₁u₂

⟹ a₁b₁ − a₂b₂ ∈ I since u₁, u₂ ∈ I, a₂, b₂ ∈ R and I is an ideal.

⟹ a₁b₁ + I = a₂b₂ + I

⟹ (a₁ + I)(b₁ + I) = (a₂ + I)(b₂ + I)

∴ Multiplication is well defined.

 Multiplication is well defined.


Closure Law: Let a + I, b + I ∈ R/I

⟹ a, b ∈ R

⟹ a + b ∈ R

⟹ (a + b) + I ∈ R/I

⟹ (a + I) + (b + I) ∈ R/I

Associative law: Let a + I, b + I, c + I ∈ R/I

Now [(a + I) + (b + I)] + (c + I) = [(a + b) + I] + (c + I)

= [(a + b) + c] + I

= [a + (b + c)] + I since a, b, c ∈ R

= (a + I) + [(b + c) + I]

= (a + I) + [(b + I) + (c + I)]

Existence of Identity:

Now (0 + I) + (a + I) = (0 + a) + I = a + I, ∀ a + I ∈ R/I

∴ 0 + I = I is the zero element of R/I


Existence of Inverse:

Let a + I ∈ R/I ⟹ a ∈ R

⟹ −a ∈ R

⟹ −a + I ∈ R/I

Now (a + I) + (−a + I) = (a + (−a)) + I

= 0 + I

∴ −a + I is the additive inverse of a + I


Commutative Law: Let a + I, b + I ∈ R/I

(a + I) + (b + I) = (a + b) + I = (b + a) + I since a, b ∈ R

= (b + I) + (a + I)
Closure with respect to multiplication:

Let a + I, b + I ∈ R/I ⟹ a, b ∈ R

⟹ ab ∈ R since R is a ring.

⟹ ab + I ∈ R/I

⟹ (a + I)(b + I) ∈ R/I

Associative Law for multiplication.

Let a + I, b + I, c + I ∈ R/I

[(a + I)(b + I)](c + I) = (ab + I)(c + I)

= (ab)c + I

= a(bc) + I since a, b, c ∈ R

= (a + I)(bc + I)

= (a + I)[(b + I)(c + I)]

Distributive Law:

Let a + I, b + I, c + I ∈ R/I

(a + I)[(b + I) + (c + I)] = (a + I)[(b + c) + I]

= a(b + c) + I

= (ab + ac) + I since a, b, c ∈ R

= (ab + I) + (ac + I)

= (a + I)(b + I) + (a + I)(c + I)

Similarly [(b + I) + (c + I)](a + I) = (b + I)(a + I) + (c + I)(a + I)

∴ R/I is a ring.

The ring R/I is called the quotient ring of R modulo I.

2.7.3 Note: 1. If R is a commutative ring and I is an ideal of R then the quotient ring R/I is also
commutative.

Proof: Let R be a commutative ring and I an ideal of R.

Let a + I, b + I ∈ R/I

(a + I)(b + I) = ab + I

= ba + I since R is commutative.

= (b + I)(a + I)

∴ R/I is commutative.

2. If R is a ring with unity then the quotient ring R/I is also a ring with unity.

Proof: Let R be a ring with unity and I an ideal of R.

1 ∈ R ⟹ 1 + I ∈ R/I

Now (1 + I)(a + I) = 1a + I = a + I

(a + I)(1 + I) = a1 + I = a + I

∴ 1 + I is the unity element of R/I

Hence R/I is a ring with unity.
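The coset operations of theorem 2.7.2 can be sketched concretely for R = Z and I = ⟨4⟩. Representing each coset by its canonical representative is our choice of encoding; well-definedness is then visible directly.

```python
n = 4  # I = <4> = 4Z, an ideal of Z

def coset(r):
    """Canonical representative of the coset r + I."""
    return r % n

def coset_add(a, b):    # (a + I) + (b + I) = (a + b) + I
    return coset(a + b)

def coset_mul(a, b):    # (a + I)(b + I) = ab + I
    return coset(a * b)

# well defined: replacing a representative by a + 4 changes nothing
assert coset_add(3, 2) == coset_add(3 + 4, 2) == 1
assert coset_mul(3, 2) == coset_mul(3 + 4, 2) == 2
# 0 + I is the zero element and 1 + I the unity of R/I
assert all(coset_add(0, r) == coset(r) for r in range(4))
assert all(coset_mul(1, r) == coset(r) for r in range(4))
```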

2.7.4 Theorem: An ideal I of a commutative ring R with unity is prime iff the quotient ring R/I
is an integral domain.

Proof: Let R be a commutative ring with unity and I an ideal of R

∴ The quotient ring R/I is a commutative ring with unity.

Suppose I is a prime ideal.

To show that the quotient ring R/I is an integral domain.

Since R/I is a commutative ring, it is enough to show that R/I has no zero divisors.

Let a + I, b + I ∈ R/I and

(a + I)(b + I) = 0 + I

⟹ ab + I = I

⟹ ab ∈ I

⟹ a ∈ I or b ∈ I, since I is prime.

⟹ a + I = I or b + I = I

∴ R/I has no zero divisors.

Hence R/I is an integral domain.

Conversely suppose R/I is an integral domain.

To prove that I is a prime ideal.

Let a, b ∈ R and ab ∈ I

⟹ ab + I = I

⟹ (a + I)(b + I) = I

⟹ a + I = I or b + I = I since R/I is an integral domain.

⟹ a ∈ I or b ∈ I

∴ I is a prime ideal.
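Theorem 2.7.4 can be illustrated with R = Z: the ideal ⟨n⟩ is prime exactly when Z/⟨n⟩ (i.e. Z_n) has no zero divisors, which happens exactly when n is prime. The helper names below are ours.

```python
def has_zero_divisors(n):
    """True if Z_n contains nonzero a, b with a*b = 0 (mod n)."""
    return any((a * b) % n == 0 for a in range(1, n) for b in range(1, n))

def is_prime(n):
    return n > 1 and all(n % d != 0 for d in range(2, n))

# <n> prime ideal of Z  <=>  Z/<n> an integral domain  <=>  n prime
for n in range(2, 30):
    assert is_prime(n) == (not has_zero_divisors(n))
```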
2.7.5 Theorem: An ideal I of a commutative ring R with unity is maximal iff the quotient ring R/I
is a field.

Proof: Let R be a commutative ring with unity and I an ideal of R.

∴ The quotient ring R/I is a commutative ring with unity.

Suppose I is a maximal ideal of R.

To prove that R/I is a field.

Since R/I is a commutative ring with unity, it is enough to prove that every non-zero element of
R/I has a multiplicative inverse.

Let 0 + I ≠ a + I ∈ R/I

a + I ≠ 0 + I ⟹ a ∉ I

We have ⟨a⟩ is a principal ideal of R.

⟹ ⟨a⟩ + I is an ideal of R and I ⊆ ⟨a⟩ + I ⊆ R

Since I is a maximal ideal, either ⟨a⟩ + I = I or ⟨a⟩ + I = R

But ⟨a⟩ + I ≠ I since a ∈ ⟨a⟩ + I and a ∉ I

∴ ⟨a⟩ + I = R

1 ∈ R ⟹ 1 ∈ ⟨a⟩ + I

⟹ 1 = ar + x for some r ∈ R and x ∈ I

⟹ 1 − ar = x ∈ I

⟹ 1 + I = ar + I since a + H = b + H ⟺ a − b ∈ H

⟹ 1 + I = (a + I)(r + I)

∴ 0 + I ≠ a + I ∈ R/I ⟹ there exists r + I ∈ R/I such that (a + I)(r + I) = 1 + I

Hence every non-zero element of R/I has a multiplicative inverse.

∴ R/I is a field.

Conversely suppose R/I is a field.

To prove that I is a maximal ideal.

Let I′ be an ideal of R such that I ⊆ I′ ⊆ R.

If possible assume that I′ ≠ I

⟹ there is a ∈ I′ and a ∉ I

⟹ a + I ≠ I

i.e. a + I is a non-zero element of R/I.

⟹ There exists b + I ∈ R/I such that (a + I)(b + I) = 1 + I, since R/I is a field.

⟹ ab + I = 1 + I

⟹ 1 − ab ∈ I

⟹ 1 − ab ∈ I′ since I ⊆ I′

Now a ∈ I′ and b ∈ R, I′ is an ideal ⟹ ab ∈ I′

⟹ 1 = (1 − ab) + ab ∈ I′

⟹ 1 ∈ I′

⟹ I′ = R by corollary 2.4.7

∴ I is a maximal ideal of R.
2.7.6 Note: (1) If R is a commutative ring with unity, then every maximal ideal is a prime ideal.

Let M be a maximal ideal of R.

∴ R/M is a field by theorem 2.7.5

∴ R/M is an integral domain

∴ M is a prime ideal by theorem 2.7.4

2. The converse of the above need not be true, i.e., every prime ideal of a commutative ring with
unity need not be a maximal ideal.

For example, in the ring of integers the null ideal is prime but not maximal.

For {0} ⊆ ⟨2⟩ ⊆ Z, and ⟨2⟩ ≠ {0} and ⟨2⟩ ≠ Z
2.8.1 Homomorphism of rings:

Definition: Let R and R1 be two rings. A mapping f : R → R1 is said to be a homomorphism

if i) f(a + b) = f(a) + f(b) ii) f(ab) = f(a) f(b) for all a, b ∈ R

2.8.2 Definition: Let R and R1 be two rings. A mapping f : R → R1 is said to be a

i) Monomorphism if f is a homomorphism and one-one

ii) Epimorphism if f is a homomorphism and onto

iii) Isomorphism if f is a homomorphism, one-one and onto.

2.8.3 Definition: A homomorphism f : R → R of a ring R into itself is called an endomorphism.

2.8.4 Definition: An isomorphism f : R → R of a ring R onto itself is called an automorphism.

2.8.5 Example: 1. Let R and R1 be two rings. f : R → R1 defined by f(x) = 01, ∀ x ∈ R, where 01
is the zero element of R1, is a homomorphism.

Solution: Let a, b ∈ R.

f(a + b) = 01 = 01 + 01 = f(a) + f(b)

f(ab) = 01 = 01 · 01 = f(a) f(b)

∴ f is a homomorphism.
This homomorphism is called Zero homomorphism.

2. Let R be a ring. Then the identity mapping I : R  R is an automorphism.

Solution: Let a, b ∈ R

I(a + b) = a + b = I(a) + I(b)

I(ab) = ab = I(a) · I(b)

∴ I is a homomorphism.

We know that the identity mapping is a bijection.

∴ I is an automorphism.
This is called identity homomorphism.

2.8.6 Note: If f : R → R1 is an isomorphism then we say R is isomorphic to R1 and we write
R ≅ R1.
2.8.7 Elementary Properties of Homomorphism:

Theorem: Let f : R → R1 be a homomorphism of a ring R into a ring R1 and 0, 01 be the zero
elements of R and R1 respectively. Then for a, b ∈ R,

i) f(0) = 01  ii) f(−a) = −f(a)  iii) f(a − b) = f(a) − f(b)

Proof: f : R → R1 is a homomorphism.

i) 0 ∈ R ⟹ f(0) ∈ R1

f(0) = f(0 + 0) since 0 + 0 = 0

⟹ f(0) = f(0) + f(0) since f is a homomorphism.

⟹ f(0) + 01 = f(0) + f(0) since 01 is the zero element of R1.

⟹ 01 = f(0) by left cancellation law.

ii) a ∈ R ⟹ −a ∈ R such that a + (−a) = 0

⟹ f(a + (−a)) = f(0)

⟹ f(a) + f(−a) = 01 by (i), since f is a homomorphism.

⟹ f(−a) = −f(a)

iii) Let a, b ∈ R

f(a − b) = f(a + (−b)) = f(a) + f(−b)

= f(a) + (−f(b))

= f(a) − f(b)

2.8.8 Homomorphic Image: Let f : R → R1 be a homomorphism of a ring R into a ring R1.

Then the image set { f(x) : x ∈ R } is called the homomorphic image of R and is denoted by f(R).

2.8.9 Theorem: A homomorphic image of a ring is a ring.

Proof: Let f : R → R1 be a homomorphism and f(R) = { f(x) : x ∈ R } be the homomorphic
image of R.

To prove that f(R) is a ring.

Clearly f(R) ⊆ R1 and f(0) = 01 ∈ f(R)

∴ f(R) is a non-empty subset of R1.

Let a, b ∈ f(R)

⟹ a = f(x) and b = f(y), where x, y ∈ R

x, y ∈ R ⟹ x − y ∈ R and xy ∈ R

⟹ f(x − y) ∈ f(R) and f(xy) ∈ f(R)

⟹ f(x) − f(y) ∈ f(R) and f(xy) ∈ f(R)

⟹ a − b ∈ f(R) and ab ∈ f(R)

∴ f(R) is a subring of R1

Hence f ( R ) is a ring.

2.8.10 Note: A homomorphic image of a commutative ring is commutative.

Proof: Let f : R → R1 be a homomorphism from a commutative ring R into a ring R1.

f(R) = { f(x) : x ∈ R } is the homomorphic image of R.

Let a, b ∈ f(R) ⟹ a = f(x), b = f(y) where x, y ∈ R.

Now ab  f ( x ) f ( y )

 f ( xy ) since f is a homomorphism.

 f ( yx ) since R is commutative.

 f ( y) f ( x)

 ba
 f ( R ) is commutative.

From 2.8.9, f(R) is a ring. ∴ f(R) is a commutative ring.

2.9.1 Kernel of a homomorphism:

Let f : R → R1 be a homomorphism from a ring R into a ring R1. Then { x ∈ R : f(x) = 01 }

is called the kernel of f and is denoted by ker f. Here 01 is the zero element of R1.

2.9.2 Theorem: If f : R  R1 is a homomorphism of a ring R into a ring R1 then ker f is an ideal


of R.

Proof: Let f : R  R1 be a homomorphism of a ring R into a ring R1 .

ker f = { x ∈ R : f(x) = 01 } where 01 is the zero element of R1.

Clearly 0 ∈ R and f(0) = 01

⟹ 0 ∈ ker f

∴ ker f is a nonempty subset of R.

Let a, b  ker f and r  R

a, b ∈ ker f ⟹ f(a) = 01 and f(b) = 01

⟹ f(a) − f(b) = 01 − 01 = 01

⟹ f(a − b) = 01

⟹ a − b ∈ ker f ...... (1)

f(ar) = f(a) f(r) = 01 f(r) = 01

f(ra) = f(r) f(a) = f(r) 01 = 01

⟹ ar ∈ ker f and ra ∈ ker f ..... (2)

From (1) and (2) ker f is an ideal of R.
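A concrete kernel (the example is ours): for f : Z → Z₆ given by f(x) = x mod 6, the kernel is the ideal 6Z, and the ideal properties of theorem 2.9.2 can be checked on a finite sample.

```python
def f(x):
    """The canonical homomorphism Z -> Z_6."""
    return x % 6

sample = range(-30, 31)
kernel = [x for x in sample if f(x) == 0]

assert kernel == [x for x in sample if x % 6 == 0]   # ker f = 6Z on the sample
# closed under subtraction, and absorbs multiplication by any r in Z
assert all(f(a - b) == 0 for a in kernel for b in kernel)
assert all(f(r * a) == 0 for a in kernel for r in range(-5, 6))
```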

2.9.3 Theorem: A homomorphism f from a ring R into a ring R1 is a monomorphism iff

ker f = {0}.

Proof: Let f : R  R1 be a homomorphism from a ring R into a ring R1 .

Suppose f is a monomorphism.

To prove that ker f = {0}

Let a ∈ ker f ⟹ f(a) = 01

⟹ f(a) = f(0)

⟹ a = 0 since f is one-one

∴ ker f = {0}.

Conversely suppose ker f = {0}.

To prove that f is one - one.



Suppose a, b ∈ R and f(a) = f(b)

⟹ f(a) − f(b) = 01

⟹ f(a − b) = 01

⟹ a − b ∈ ker f

⟹ a − b = 0 since ker f = {0}

∴ a = b

∴ f is one-one.
Hence f is a monomorphism.

2.9.4 Theorem: If I is an ideal of a ring R then there is an epimorphism f from R onto R/I such
that ker f = I

Proof: Let R be a ring and I an ideal of R.

Define f : R → R/I as f(a) = a + I, ∀ a ∈ R

f is well defined: Suppose a, b ∈ R and a = b

⟹ a + I = b + I

⟹ f(a) = f(b)

∴ f is well defined.

f(a + b) = (a + b) + I = (a + I) + (b + I) = f(a) + f(b)

f(ab) = ab + I = (a + I)(b + I) = f(a) f(b), ∀ a, b ∈ R

∴ f is a homomorphism.

f is onto: Let x + I ∈ R/I

⟹ x ∈ R

Now f(x) = x + I

∴ x + I ∈ R/I ⟹ there is x ∈ R such that f(x) = x + I

Hence f is onto.

∴ f : R → R/I is an epimorphism.

ker f = { a ∈ R : f(a) = 0 + I } = { a ∈ R : a + I = I }

= { a ∈ R : a ∈ I }

= I
2.9.5 Note: The above epimorphism is called canonical mapping or natural mapping.
2.10.1 Fundamental Theorem of Homomorphism:

If f : R → R1 is an epimorphism from a ring R onto a ring R1, then R/ker f ≅ R1

(OR)

Every homomorphic image of a ring is isomorphic to some quotient ring.

Proof: Suppose f : R → R1 is an epimorphism from a ring R onto a ring R1.

Let ker f = I

We know that ker f is an ideal of R.

∴ R/ker f is a quotient ring.

Define φ : R/I → R1 by

φ(a + I) = f(a), ∀ a + I ∈ R/I

φ is well defined and one-one:

Let a + I, b + I ∈ R/I

a + I = b + I

⟺ a − b ∈ I

⟺ f(a − b) = 01 since I = ker f.

⟺ f(a) − f(b) = 01

⟺ f(a) = f(b)

⟺ φ(a + I) = φ(b + I)

∴ φ is well defined and one-one.

φ is onto:

Let y ∈ R1

Since f : R → R1 is onto, there is x ∈ R such that

f(x) = y

x ∈ R ⟹ x + I ∈ R/I

∴ y ∈ R1 ⟹ there is x + I ∈ R/I and φ(x + I) = f(x) = y

∴ φ is onto.

φ is a homomorphism:

Let a + I, b + I ∈ R/I

φ[(a + I) + (b + I)] = φ[(a + b) + I] = f(a + b)

= f(a) + f(b)

= φ(a + I) + φ(b + I)

φ[(a + I)(b + I)] = φ[ab + I] = f(ab)

= f(a) f(b)

= φ(a + I) φ(b + I)

∴ φ is a homomorphism which is one-one and onto, i.e. φ is an isomorphism from R/I

to R1.

∴ R/I ≅ R1, i.e. R/ker f ≅ R1.
2.10.2 Definition: A ring R is said to be embedded in a ring R1 if there exists a monomorphism
from R into R1.
2.10.3 Example: A subring S of a ring R can be embedded in the ring R.

For I : S → R defined by I(a) = a, ∀ a ∈ S is a monomorphism from S into R.


2.10.4 Theorem: Every integral domain can be embedded in a field.
Proof: Let D be an integral domain.

Let R = { (a, b) : a, b ∈ D, b ≠ 0 }

Define ~ on R as (a, b) ~ (c, d) ⟺ ad = bc

We claim that ~ is an equivalence relation on R.

Let (a, b) ∈ R ⟹ a, b ∈ D and b ≠ 0

⟹ ab = ba since D is commutative

⟹ (a, b) ~ (a, b)

∴ ~ is reflexive.
Let (a, b), (c, d) ∈ R

Suppose (a, b) ~ (c, d) ⟹ ad = bc

⟹ da = cb since D is commutative

⟹ cb = da

⟹ (c, d) ~ (a, b)

∴ ~ is symmetric.
Let (a, b), (c, d), (e, f) ∈ R

Suppose (a, b) ~ (c, d) and (c, d) ~ (e, f)

⟹ ad = bc and cf = de

⟹ (ad)f = (bc)f

⟹ (ad)f = b(cf)

⟹ (ad)f = b(de) since cf = de

⟹ (af)d = (be)d since D is commutative

⟹ d(af) = d(be)

⟹ af = be by cancellation, since d ≠ 0 and D is an integral domain.

⟹ (a, b) ~ (e, f)

∴ ~ is transitive.

Hence ~ is an equivalence relation on R.
Let a/b be the equivalence class containing (a, b) with respect to the equivalence relation ~.

Let F = { a/b : a, b ∈ D, b ≠ 0 }, the set of all equivalence classes under ~.

We define addition + and multiplication · as a/b + c/d = (ad + bc)/bd and (a/b)·(c/d) = (ac)/(bd),
∀ a/b, c/d ∈ F

To show that + and · are well defined.

a a1 c c1
Suppose  and 
b b1 d d1

 (a, b)  (a1 , b1 ) and (c, d )  (c1 , d1 )

 x   y  iff x y

 ab1  ba1 and cd1  dc

Now ( ad  bc ) b1 d 1  adb1 d 1  bcb1 d 1

 ab1dd1  bb1cd1

 ba1dd1  bb1dc1

 a1d1bd  b1c1dd

 ( a1 d 1  b1c1 ) bd

 (ad  bc, bd )  (a1d1  b1c1 , b1d1 )

ad  bc a1d1  b1c1
 
bd b1d1

a c a1 c1
   
b d b1 d1
∴ + is well defined.

Also acb1d1 = ab1·cd1

= ba1·dc1 since ab1 = ba1 and cd1 = dc1

= bd·a1c1

⇒ (ac, bd) ∼ (a1c1, b1d1)

⇒ ac/bd = a1c1/b1d1

⇒ (a/b)·(c/d) = (a1/b1)·(c1/d1)

∴ · is well defined.

∴ + and · are binary operations on F.

We prove that (F, +, ·) is a field.

First we prove that if x ≠ 0 then a/b = ax/bx, and if x ≠ 0, y ≠ 0 then 0/x = 0/y and x/x = y/y ............. (1)

We have a(bx) = (ab)x = (ba)x = b(ax)

⇒ (a, b) ∼ (ax, bx)

⇒ a/b = ax/bx

Also 0y = 0 = x0 ⇒ (0, x) ∼ (0, y)

⇒ 0/x = 0/y

Also xy = xy ⇒ (x, x) ∼ (y, y)

⇒ x/x = y/y
Addition is associative:

Let a/b, c/d, e/f ∈ F

(a/b + c/d) + e/f = (ad + bc)/bd + e/f

= ((ad + bc)f + bde)/bdf

= (adf + bcf + bde)/bdf

= (adf + b(cf + de))/b(df)

= a/b + (cf + de)/df

= a/b + (c/d + e/f)

∴ addition is associative.

Existence of the additive identity:

If x ≠ 0 then 0/x ∈ F

Now a/b + 0/x = (ax + b0)/bx = ax/bx = a/b from (1)

0/x + a/b = (0b + xa)/xb = ax/bx = a/b, ∀ a/b ∈ F

∴ 0/x is the additive identity of F.

Existence of additive inverses:

Let a/b ∈ F ⇒ a, b ∈ D and b ≠ 0

⇒ −a, b ∈ D and b ≠ 0

⇒ (−a)/b ∈ F

Now a/b + (−a)/b = (ab + b(−a))/bb = (ab − ba)/bb = (ab − ab)/bb = 0/bb = 0/x from (1)

(−a)/b + a/b = ((−a)b + ab)/bb = (−ab + ab)/bb = 0/bb = 0/x

∴ (−a)/b is the additive inverse of a/b.

Addition is commutative:

Let a/b, c/d ∈ F

a/b + c/d = (ad + bc)/bd = (bc + ad)/bd = (cb + da)/db = c/d + a/b

∴ + is commutative.

∴ (F, +) is an abelian group.
Multiplication is associative:

Let a/b, c/d, e/f ∈ F

((a/b)(c/d))(e/f) = (ac/bd)(e/f) = (ac)e/(bd)f = a(ce)/b(df) = (a/b)((c/d)(e/f))

∴ multiplication is associative in F.

Existence of the multiplicative identity:

Let 0 ≠ x ∈ D; then x/x ∈ F

For a/b ∈ F, (a/b)(x/x) = ax/bx = a/b and

(x/x)(a/b) = xa/xb = ax/bx = a/b from (1)

∴ x/x, where x ≠ 0, is the multiplicative identity of F.

Existence of multiplicative inverses:

Let a/b ≠ 0/x in F

⇒ a ≠ 0

⇒ b/a ∈ F

Now (a/b)(b/a) = ab/ba = ab/ab = x/x from (1)

(b/a)(a/b) = ba/ab = ab/ab = x/x

∴ b/a is the multiplicative inverse of a/b in F.

Multiplication is commutative:

Let a/b, c/d ∈ F

(a/b)(c/d) = ac/bd = ca/db = (c/d)(a/b)

∴ · is commutative.

∴ (F − {0/x}, ·) is an abelian group.

Distributive law:

Let a/b, c/d, e/f ∈ F

(a/b)(c/d + e/f) = (a/b)((cf + de)/df) = a(cf + de)/bdf = (acf + ade)/bdf

(a/b)(c/d) + (a/b)(e/f) = ac/bd + ae/bf = (acbf + bdae)/bdbf = (acf + ade)b/bdfb = (acf + ade)/bdf

∴ (a/b)(c/d + e/f) = (a/b)(c/d) + (a/b)(e/f)

Similarly (c/d + e/f)(a/b) = (c/d)(a/b) + (e/f)(a/b)

∴ (F, +, ·) is a field.

Define f : D → F by f(a) = ax/x where 0 ≠ x ∈ D.

Clearly f is a mapping.

To show that f is a monomorphism.

f is a homomorphism:

Let a, b ∈ D

Now f(a + b) = (a + b)x/x = (a + b)x²/x²

= (ax² + bx²)/x²

= ((ax)x + (bx)x)/xx

= ax/x + bx/x

= f(a) + f(b)

f(ab) = abx/x = abx²/x² = (ax)(bx)/xx = (ax/x)·(bx/x)

= f(a)·f(b)

∴ f is a homomorphism.

f is one-one:

Let a, b ∈ D and f(a) = f(b)

⇒ ax/x = bx/x

⇒ (ax, x) ∼ (bx, x)

⇒ axx = xbx

⇒ ax² = bx²

⇒ (a − b)x² = 0

⇒ a − b = 0 since x² ≠ 0 and D has no zero divisors

⇒ a = b

∴ f is one-one and hence a monomorphism.

∴ Every integral domain can be embedded in a field.
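The construction above can be sketched in a few lines of Python: pairs (a, b) with b ≠ 0 stand for the classes a/b, equality is the relation ad = bc, and the two operations are exactly the ones defined in the proof. Taking D = Z the field obtained is Q; the function names eq, add and mul are our own illustrative choices.

```python
# Pairs (a, b) with b != 0 stand for the classes a/b over D = Z.
def eq(p, q):
    """(a, b) ~ (c, d) iff ad = bc."""
    (a, b), (c, d) = p, q
    return a * d == b * c

def add(p, q):
    """a/b + c/d = (ad + bc)/bd."""
    (a, b), (c, d) = p, q
    return (a * d + b * c, b * d)

def mul(p, q):
    """(a/b)(c/d) = ac/bd."""
    (a, b), (c, d) = p, q
    return (a * c, b * d)

half, third = (1, 2), (1, 3)
print(eq(add(half, third), (5, 6)))   # True: 1/2 + 1/3 = 5/6
print(eq(mul(half, third), (1, 6)))   # True: 1/2 * 1/3 = 1/6
print(eq((3 * 5, 5), (3, 1)))         # True: the embedding f(3) = 3x/x equals 3/1
```

Note that a class has many representatives, e.g. eq((2, 4), (1, 2)) also holds, which is exactly why the well-definedness checks in the proof are needed.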


2.11.11 Summary:
In this lesson we defined ideal, principal ideal, prime ideal, maximal ideal, quotient ring, homomorphism of rings and kernel of a homomorphism. We learnt the algebra of ideals, theorems on ideals, quotient rings, homomorphisms of rings, the kernel of a homomorphism, the fundamental theorem and the embedding of rings.

2.12 Technical Terms:


i) Ideal
ii) Ideal generated by a subset
iii) Principal ideal
iv) Prime ideal
v) Maximal ideal
vi) Quotient ring
vii) Homomorphism of rings
viii) Kernel of a homomorphism
ix) Embedding of rings
2.13.1 Model Questions:

2.4.3 The set I = { [0 a; 0 b] : a, b ∈ Z }, where [0 a; 0 b] denotes the 2×2 matrix with rows (0, a) and (0, b), is a left ideal of the ring of 2×2 matrices with integers as entries but not a right ideal.
Solution: Clearly I is a non-empty subset of the ring M of 2×2 matrices with integer entries.

Let A = [0 a; 0 b], B = [0 c; 0 d] ∈ I and C = [x y; r s] ∈ M, where a, b, c, d, x, y, r, s ∈ Z.

A − B = [0 a−c; 0 b−d] ∈ I since a − c, b − d ∈ Z

CA = [x y; r s][0 a; 0 b] = [0 ax+by; 0 ar+bs] ∈ I since ax + by, ar + bs ∈ Z

∴ I is a left ideal.

But AC = [0 a; 0 b][x y; r s] = [ar as; br bs] ∉ I in general.

Hence I is not a right ideal.
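A quick numerical check of this one-sidedness (a sketch; the particular matrices A and C below are our own arbitrary choices):

```python
def matmul(A, B):
    """Product of two 2x2 integer matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

in_I = lambda M: M[0][0] == 0 and M[1][0] == 0   # membership in I: first column zero

A = [[0, 1], [0, 2]]        # an element of I
C = [[3, 4], [5, 6]]        # an arbitrary element of M
print(in_I(matmul(C, A)))   # True:  CA is again in I (left ideal)
print(in_I(matmul(A, C)))   # False: AC falls outside I (not a right ideal)
```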

2.13.2 If R is a commutative ring and a ∈ R then Ra = {ra | r ∈ R} is an ideal of R.

Solution: Let Ra = {ra | r ∈ R}.

Clearly 0 ∈ R ⇒ 0a ∈ Ra, and 0a = 0, so 0 ∈ Ra.

∴ Ra is a non-empty subset of R.

Let x, y ∈ Ra and r ∈ R

⇒ x = r1a and y = r2a where r1, r2 ∈ R

Now x − y = r1a − r2a = (r1 − r2)a ∈ Ra

rx = r(r1a) = (rr1)a ∈ Ra

xr = (r1a)r = r(r1a) = (rr1)a ∈ Ra since R is commutative.

∴ x, y ∈ Ra and r ∈ R ⇒ x − y, rx, xr ∈ Ra

∴ Ra is an ideal of R.
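For a finite illustration, take R = Z12 and a = 4 (our own choice, not from the text): Ra is the set of multiples of 4, and the ideal tests of 2.13.2 can be verified exhaustively.

```python
n, a = 12, 4                     # R = Z_12, a = 4
R = range(n)
Ra = {(r * a) % n for r in R}    # the principal ideal Ra

# x - y and r*x stay inside Ra for all choices, so Ra is an ideal:
is_ideal = all((x - y) % n in Ra and (r * x) % n in Ra
               for x in Ra for y in Ra for r in R)
print(sorted(Ra), is_ideal)      # [0, 4, 8] True
```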

2.13.3 If R is a ring and a ∈ R then Ra is a left ideal and aR = {ar | r ∈ R} is a right ideal of R.

2.13.4 If m is a fixed integer then I = {mx | x ∈ Z} is an ideal of the ring of integers Z.

Solution: Let Z be the ring of integers and m ∈ Z.

We know that Z is a commutative ring and m ∈ Z.

∴ I = {mx | x ∈ Z} is an ideal of Z by 2.13.2.

2.13.5 Union of two ideals need not be an ideal (2.4.10).

Solution: Let I1 = {2x | x ∈ Z} and I2 = {3x | x ∈ Z}.

Then by 2.13.4, I1 and I2 are ideals of the ring of integers Z.

I1 ∪ I2 = {..., −4, −3, −2, 0, 2, 3, 4, 6, 8, 9, ...}

3, 2 ∈ I1 ∪ I2 but 3 − 2 = 1 ∉ I1 ∪ I2

∴ I1 ∪ I2 is not an ideal.
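The failure of closure can be checked directly (a minimal sketch):

```python
in_I1 = lambda x: x % 2 == 0                      # I1 = 2Z
in_I2 = lambda x: x % 3 == 0                      # I2 = 3Z
in_union = lambda x: in_I1(x) or in_I2(x)

x, y = 3, 2                        # both lie in I1 U I2
print(in_union(x), in_union(y))    # True True
print(in_union(x - y))             # False: 1 lies in neither ideal
```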

2.13.6 A commutative ring R with unity element 1 ≠ 0 is a field if R has no proper ideals.
Solution: Let R be a commutative ring with unity element 1.
Suppose R has no proper ideals.
To prove that R is a field.

Let 0 ≠ a ∈ R

⇒ Ra = {ra | r ∈ R} is an ideal of R by 2.13.2.

Since R has no proper ideals, either Ra = {0} or Ra = R.

But 0 ≠ a ∈ Ra since a = 1·a and 1 ∈ R

⇒ Ra ≠ {0}

∴ Ra = R

But 1 ∈ R ⇒ 1 ∈ Ra

⇒ 1 = ba for some b ∈ R

⇒ b is the multiplicative inverse of a.

∴ Every non-zero element of the commutative ring R with unity element has a multiplicative inverse.
∴ R is a field.
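The ring Zp for a prime p has only the trivial ideals, so by this result it is a field; the inverse b with ba = 1 can be found by search. A sketch with p = 7 (our own example):

```python
p = 7
# For each nonzero a in Z_7, find b with a*b = 1 (mod 7):
inverses = {a: next(b for b in range(1, p) if (a * b) % p == 1)
            for a in range(1, p)}
print(inverses)                                            # {1: 1, 2: 4, 3: 5, 4: 2, 5: 3, 6: 6}
print(all((a * b) % p == 1 for a, b in inverses.items()))  # True
```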

2.13.7 If I is an ideal of a ring R and r(I) = {x ∈ R | xa = 0 for all a ∈ I} then r(I) is an ideal of R.

Solution: Let R be a ring and I an ideal of R.

Let r(I) = {x ∈ R | xa = 0 for all a ∈ I}

We know that 0 ∈ R and 0a = 0 for all a ∈ I

⇒ 0 ∈ r(I)
∴ r(I) is a non-empty subset of R.
Let x, y ∈ r(I) and r ∈ R.

x, y ∈ r(I) ⇒ xa = 0 and ya = 0 for all a ∈ I

⇒ xa − ya = 0

⇒ (x − y)a = 0 for all a ∈ I

⇒ x − y ∈ r(I).

x ∈ r(I) ⇒ xa = 0 ∀ a ∈ I

⇒ r(xa) = 0 ∀ a ∈ I

⇒ (rx)a = 0 ∀ a ∈ I

⇒ rx ∈ r(I)

Further, I is an ideal ⇒ ra ∈ I for all a ∈ I

x ∈ r(I) ⇒ xa = 0 for all a ∈ I

⇒ x(ra) = 0 since ra ∈ I for all a ∈ I

⇒ (xr)a = 0 for all a ∈ I

⇒ xr ∈ r(I)

∴ r(I) is an ideal of R.

2.13.8 If n > 1 is a fixed positive integer then the mapping f : Z → nZ defined by f(x) = nx is not a homomorphism.
Solution: Z is the set of integers.

f : Z → nZ is defined by f(x) = nx, where n > 1 is a fixed positive integer.

For x, y ∈ Z,
f(x + y) = n(x + y)

= nx + ny

= f(x) + f(y)

f(xy) = nxy

f(x)·f(y) = nx·ny = n²xy

∴ f(xy) ≠ f(x)f(y), for instance f(1·1) = n while f(1)f(1) = n²

∴ f is not a homomorphism.
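This can be seen numerically (a sketch with n = 3 and sample inputs of our own choosing):

```python
n = 3
f = lambda x: n * x

x, y = 2, 5
print(f(x + y) == f(x) + f(y))   # True:  f respects addition
print(f(x * y), f(x) * f(y))     # 30 90: f does not respect multiplication
```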

2.13.9 If R is a ring with unity element 1 and f is an epimorphism from R onto a ring R¹ then f(1) is the unity element of R¹.
Solution: Let R be a ring with unity element 1.

Suppose f : R → R¹ is an epimorphism.

1 ∈ R ⇒ f(1) ∈ R¹

To show that f(1) is the unity element of R¹.

Let b ∈ R¹

f : R → R¹ is onto ⇒ there is a ∈ R such that f(a) = b

a ∈ R ⇒ a·1 = a = 1·a

Now f(1)b = f(1)·f(a) = f(1·a) = f(a) = b

Also bf(1) = f(a)·f(1) = f(a·1) = f(a) = b

∴ f(1) is the unity element of R¹.

2.13.10 The mapping f : C → C defined by f(x + iy) = x − iy is an isomorphism on the set of complex numbers C.
Solution: We know that the set of complex numbers C is a ring.

f : C → C is defined by f(x + iy) = x − iy.

Let a + ib, c + id ∈ C

f((a + ib) + (c + id)) = f((a + c) + i(b + d))

= (a + c) − i(b + d)

= a − ib + c − id

= f(a + ib) + f(c + id)

f((a + ib)(c + id)) = f((ac − bd) + i(ad + bc))

= (ac − bd) − i(ad + bc)

= (a − ib)(c − id)

= f(a + ib) f(c + id)

∴ f is a homomorphism.
f is one-one:

Suppose f(a + ib) = f(c + id)

⇒ a − ib = c − id
⇒ a = c and b = d

⇒ a + ib = c + id
∴ f is one-one.
f is onto:

Let x + iy ∈ C

⇒ x − iy ∈ C and f(x − iy) = x + iy

∴ f is onto.
Hence f is an isomorphism.
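Python's built-in complex numbers make this easy to spot-check (a sketch; the sample values are arbitrary):

```python
f = lambda z: z.conjugate()      # f(x + iy) = x - iy

z, w = 3 + 4j, 1 - 2j
print(f(z + w) == f(z) + f(w))   # True: conjugation respects addition
print(f(z * w) == f(z) * f(w))   # True: conjugation respects multiplication
print(f(f(z)) == z)              # True: f is its own inverse, hence a bijection
```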

2.13.11 If F is a field and f : F → F is a homomorphism then f is an isomorphism or the zero homomorphism.

Solution: Suppose F is a field and f : F → F is a homomorphism.

Then ker f is an ideal of F.

But we know that {0} and F are the only ideals of F, by 2.4.8.

∴ ker f = {0} or ker f = F

Suppose ker f = {0}.

We know that a homomorphism f is a monomorphism iff ker f = {0}, by 2.9.3.

∴ f : F → F is a monomorphism.
Hence f is an isomorphism.

Suppose ker f = F.

By the definition of ker f we have f(x) = 0¹, ∀ x ∈ F, where 0¹ is the zero element of F.

∴ f is the zero homomorphism.
2.14 Exercises:

1. Show that the set I = { [a b; 0 0] : a, b ∈ Z } is a right ideal but not a left ideal of the ring M of 2×2 matrices over the integers.

2. Show that if R is a commutative ring with unity element and a ∈ R then Ra = {ra | r ∈ R} is the principal ideal generated by a.

3. Find all the principal ideals of the ring (Z6, +6, ×6).

4. If R is a commutative ring with unity and a, b ∈ R then show that {ax + by | x, y ∈ R} is the ideal generated by a, b.

5. If I is an ideal of R and [R : I] = {x ∈ R | rx ∈ I for every r ∈ R} then [R : I] is an ideal of R and I ⊆ [R : I].

6. Let R¹ = { [a 0; 0 0] : a ∈ R } where R is the ring of real numbers. Prove that f : R¹ → R defined by f([a 0; 0 0]) = a for all [a 0; 0 0] ∈ R¹ is an isomorphism.

7. Show that every homomorphic image of a commutative ring is commutative.
8. Give an example to show that a homomorphic image of an integral domain may not be an integral domain.

9. Let R be a ring with unity. For each invertible element a ∈ R, show that the mapping f : R → R defined by f(x) = axa⁻¹, ∀ x ∈ R is an automorphism.

10. Let R be the ring of all real valued continuous functions defined on [0, 1] and M = { f(x) ∈ R | f(1/3) = 0 }. Show that M is a maximal ideal of R.
- Smt. K. Ruth
Rings and Linear Algebra 3.1 Rings of Polynomials

LESSON - 3

RINGS OF POLYNOMIALS
3.1 Objective of the Lesson:
To learn the definition of a Polynomial over a ring, polynomial ring, degree of a polynomial
and evaluation homomorphism.

3.2 Structure
3.3 Introduction

3.4 Definition of a Polynomial over a ring.

3.5 Algebra of Polynomials

3.6 Degree of a Polynomial

3.7 Polynomial Ring

3.8 Evaluation Homomorphism

3.9 Summary

3.10 Technical terms

3.11 Model Examination Questions

3.12 Exercises

3.3 Introduction:
In this lesson we will introduce the notion of a polynomial ring over a ring, over an integral
domain and over a field. We also introduce evaluation homomorphism.

3.4.1 Definition of a Polynomial over a ring:

Let R be a ring and x an indeterminate. If a0, a1, a2, ... ∈ R and ai = 0 for all except a finite number of values of i then a formal sum f(x) = a0 + a1x + a2x² + ... is called a polynomial over R.

If f(x) = a0 + a1x + ... + anx^n + ... has ai = 0 for all i > n, then we denote f(x) by a0 + a1x + ... + anx^n.

3.4.2 Example: 1. f(x) = 3 + x + 4x² + 5x⁴ is a polynomial over Z.

2. f(x) = 1/2 + 2x² + (3/7)x³ is a polynomial over Q.

3. f(x) = 1 + x² + 4x³ is a polynomial over Z5.

3.4.3 Note: 1. The set of all polynomials over a ring R with indeterminate x is denoted by R[x].

2. We omit altogether from the formal sum any term of the form 0xⁱ.

3.4.4 Note: 1. Let R be a ring. A polynomial over R can also be defined as a sequence (a0, a1, a2, ..., an, ...) of elements of R, where all but a finite number of the ai’s are zero.

∴ A polynomial f(x) = a0 + a1x + a2x² + ... can also be denoted by (a0, a1, ..., an, ...).

2. If f(x) = a0 + a1x + a2x² + ... + anx^n + ... is a polynomial over a ring R then a0, a1, a2, ... are called the coefficients of f(x); a0, a1x, a2x², ... are called the constant term, the x term, the x² term, ..., the x^n term, ... of f(x).

3.4.4 Zero Polynomial: If 0 is the zero element of a ring R then f(x) = 0 + 0x + 0x² + ... + 0x^n + ... = (0, 0, 0, ..., 0, ...) is called the zero polynomial. It is denoted by 0 or 0(x).

3.4.5 Constant Polynomial: Let R be a ring. Then any element a of the ring is a constant polynomial f(x) = a + 0x + 0x² + ... . It is usually written as f(x) = a.

3.5.1 Algebra of Polynomials:

Equality of Polynomials: Two polynomials f = (a0, a1, ..., an, ...) and g = (b0, b1, ..., bn, ...) over a ring R are said to be equal if ai = bi for all i ≥ 0.

If f and g are equal polynomials we write f = g.

3.5.2 Addition of Polynomials:

Let f = (a0, a1, a2, ..., am, ...) and g = (b0, b1, b2, ..., bn, ...) be two polynomials over a ring R. The sum of f and g is denoted by f + g = (c0, c1, c2, ..., ck, ...) where ci = ai + bi for i = 0, 1, 2, ....

3.5.3 Example: If f = (2, 3, 4, 0, 0, ...) and g = (3, −2, 0, −3, 0, 0, ...) are two polynomials over the ring of integers Z then f + g = (5, 1, 4, −3, 0, 0, ...) = 5 + x + 4x² − 3x³.
3.5.4 Multiplication of Polynomials:

Let f = (a0, a1, a2, ..., am, ...) and g = (b0, b1, b2, ..., bn, ...) be two polynomials over a ring R. The product of f and g is denoted by fg and fg = (c0, c1, c2, ..., cp, ...) where ci = a0bi + a1b(i−1) + ... + aib0, i.e. ci = Σ_{j=0}^{i} aj b(i−j) = Σ_{j+k=i} aj bk.

3.5.5 Example: If f = (1, 3, 4, 0, 0, ...) and g = (2, 2, 0, 5, 0, 0, ...) over Z6, find fg.

f = 1 + 3x + 4x², g = 2 + 2x + 0x² + 5x³

fg = (1·2) + (1·2 + 3·2)x + (1·0 + 3·2 + 4·2)x² + (1·5 + 3·0 + 4·2)x³ + (3·5 + 4·0)x⁴ + (4·5)x⁵, all sums and products taken modulo 6

= 2 + 2x + 2x² + x³ + 3x⁴ + 2x⁵
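The convolution formula of 3.5.4 is short to code; this sketch recomputes the example (coefficient lists are indexed so that entry i is the xⁱ coefficient):

```python
def poly_mul(f, g, n):
    """Product of coefficient lists f and g over Z_n."""
    c = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] = (c[i + j] + a * b) % n
    return c

# f = 1 + 3x + 4x^2 and g = 2 + 2x + 5x^3 over Z_6:
print(poly_mul([1, 3, 4], [2, 2, 0, 5], 6))   # [2, 2, 2, 1, 3, 2]
```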
3.6.1 Degree of a Polynomial:

Let f = (a0, a1, a2, ...) be a non-zero polynomial over a ring R. The largest integer i for which ai ≠ 0 is called the degree of the polynomial f. It is denoted by deg f(x) or deg f.

3.6.2 Note: 1. The degree of a non-zero polynomial f = (a0, a1, a2, ...) is n if an ≠ 0 and ai = 0 ∀ i > n.
2. The degree of a non-zero constant polynomial is zero.
3. The degree of the zero polynomial is not defined.

3.6.3 Definition: Leading Coefficient: If the degree of the polynomial f(x) = a0 + a1x + ... + anx^n is n then an ≠ 0 is called the leading coefficient of f(x).

3.6.4 Example: 1. The degree of the polynomial f(x) = 3 + 2x + x⁴ + x⁶ over the ring of integers is 6.

2. The degree of the polynomial f(x) = 7/4 over the ring of rational numbers is zero.

3.6.5 Theorem: Let f(x), g(x) be two non-zero polynomials over a ring R. Then

i) deg(f(x) + g(x)) ≤ max{deg f(x), deg g(x)} if f(x) + g(x) ≠ 0(x);

ii) deg(f(x)g(x)) ≤ deg f(x) + deg g(x) if f(x)g(x) ≠ 0(x), where 0(x) is the zero polynomial.

Proof: Let f(x) = a0 + a1x + a2x² + ... + amx^m and g(x) = b0 + b1x + b2x² + ... + bnx^n be two polynomials over a ring R with deg f(x) = m and deg g(x) = n.

∴ am ≠ 0 and ai = 0 ∀ i > m, and

bn ≠ 0 and bi = 0 ∀ i > n.

Case i): Suppose m > n; then max{m, n} = m.

f(x) + g(x) = (a0 + b0) + (a1 + b1)x + (a2 + b2)x² + ... + (an + bn)x^n + a(n+1)x^(n+1) + ... + amx^m

∴ deg(f(x) + g(x)) = m = max{deg f(x), deg g(x)}

Case ii): Suppose m < n; then max{m, n} = n.

f(x) + g(x) = (a0 + b0) + (a1 + b1)x + ... + (am + bm)x^m + b(m+1)x^(m+1) + ... + bnx^n

∴ deg(f(x) + g(x)) = n = max{deg f(x), deg g(x)}

Case iii): Suppose m = n.

f(x) + g(x) = (a0 + b0) + (a1 + b1)x + ... + (am + bm)x^m

∴ deg(f(x) + g(x)) ≤ m = max{deg f(x), deg g(x)}, since the leading coefficients may cancel.

ii) f(x)·g(x) = a0b0 + (a1b0 + a0b1)x + ...

= d0 + d1x + d2x² + ...

where dk = Σ_{i+j=k} ai bj from the definition.

Suppose k > m + n. Then i + j = k > m + n ⇒ i > m or j > n.

But i > m ⇒ ai = 0 and j > n ⇒ bj = 0

∴ ai bj = 0 if i > m or j > n

∴ ai bj = 0 if i + j > m + n

∴ dk = 0 if k > m + n

∴ deg(f(x)g(x)) ≤ m + n = deg f(x) + deg g(x)

3.6.6 Corollary: If f(x) and g(x) are two non-zero polynomials over an integral domain R then deg(f(x)·g(x)) = deg f(x) + deg g(x).

Proof: Let f(x) = a0 + a1x + ... + amx^m and g(x) = b0 + b1x + ... + bnx^n be two polynomials over an integral domain R with deg f(x) = m and deg g(x) = n.

∴ am ≠ 0 and ai = 0 ∀ i > m;

bn ≠ 0 and bi = 0 ∀ i > n.

am ≠ 0, bn ≠ 0 ⇒ am bn ≠ 0 since R is an integral domain.

Now f(x)·g(x) = d0 + d1x + d2x² + ... + d(m+n)x^(m+n) + ...

where d(m+n) = a0 b(m+n) + a1 b(m+n−1) + ... + am bn + a(m+1) b(n−1) + ... + a(m+n) b0

= am bn since ai = 0 for i > m and bi = 0 for i > n

≠ 0

∴ deg(f(x)·g(x)) ≥ m + n

By 3.6.5, deg(f(x)·g(x)) ≤ m + n.

Hence deg(f(x)·g(x)) = deg f(x) + deg g(x).

3.6.7 Corollary: If f(x) and g(x) are non-zero polynomials over an integral domain or a field then deg f(x) ≤ deg(f(x)·g(x)).

Proof: Let f(x) and g(x) be two non-zero polynomials over an integral domain or a field.

We always have deg f(x) ≤ deg f(x) + deg g(x), since deg g(x) ≥ 0.

∴ deg f(x) ≤ deg(f(x)·g(x)) by 3.6.6.
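Over a ring with zero divisors the equality of 3.6.6 can fail; this sketch (our own example over Z6) shows the degree of a product dropping below the sum of the degrees because 2·3 = 0 in Z6:

```python
def poly_mul(f, g, n):
    """Product of coefficient lists f and g over Z_n (entry i = x^i coefficient)."""
    c = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] = (c[i + j] + a * b) % n
    return c

def deg(f):
    nonzero = [i for i, a in enumerate(f) if a != 0]
    return nonzero[-1] if nonzero else None   # degree of 0(x) is undefined

f, g = [1, 2], [1, 3]              # f = 1 + 2x, g = 1 + 3x over Z_6
h = poly_mul(f, g, 6)              # [1, 5, 0], i.e. 1 + 5x: the x^2 term vanished
print(deg(h), deg(f) + deg(g))     # 1 2
```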

3.7.1 Theorem: The set of all polynomials over a ring R is a ring with respect to addition and multiplication of polynomials.

Proof: Let R[x] be the set of all polynomials over a ring R with indeterminate x.

Let f(x) = a0 + a1x + ..., g(x) = b0 + b1x + b2x² + ..., h(x) = c0 + c1x + c2x² + ... ∈ R[x].

f(x) + g(x) = (a0 + b0) + (a1 + b1)x + ... + (am + bm)x^m + ... = s0 + s1x + s2x² + ..., where si = ai + bi

f(x)·g(x) = d0 + d1x + d2x² + ..., where dk = Σ_{i+j=k} ai bj

ai, bi ∈ R ⇒ ai + bi ∈ R and ai bj ∈ R ∀ i, j

⇒ si ∈ R and dk ∈ R ∀ i, k

⇒ f(x) + g(x) and f(x)g(x) ∈ R[x].

∴ + and · are binary operations on R[x].


Commutative law w.r.to +:

f(x) + g(x) = Σ_{i≥0} ai xⁱ + Σ_{i≥0} bi xⁱ

= Σ_{i≥0} (ai + bi) xⁱ

= Σ_{i≥0} (bi + ai) xⁱ since ai + bi = bi + ai, ∀ ai, bi ∈ R

= Σ_{i≥0} bi xⁱ + Σ_{i≥0} ai xⁱ

= g(x) + f(x)

∴ addition is commutative.
Associative law w.r.to +:

(f(x) + g(x)) + h(x) = (Σ ai xⁱ + Σ bi xⁱ) + Σ ci xⁱ

= Σ (ai + bi) xⁱ + Σ ci xⁱ

= Σ ((ai + bi) + ci) xⁱ

= Σ (ai + (bi + ci)) xⁱ since (ai + bi) + ci = ai + (bi + ci) in R

= Σ ai xⁱ + Σ (bi + ci) xⁱ

= Σ ai xⁱ + (Σ bi xⁱ + Σ ci xⁱ)

= f(x) + (g(x) + h(x))

∴ addition is associative.
Existence of the zero element:

The zero polynomial 0(x) = 0 + 0x + 0x² + ... = Σ_{i≥0} 0xⁱ ∈ R[x]

f(x) + 0(x) = Σ ai xⁱ + Σ 0xⁱ

= Σ (ai + 0) xⁱ

= Σ ai xⁱ = f(x) since ai + 0 = ai, ∀ ai ∈ R.

∴ 0(x) is the zero element of R[x].

Existence of additive inverses:

f(x) = Σ ai xⁱ ∈ R[x] ⇒ ai ∈ R ∀ i

⇒ ∃ −ai ∈ R such that ai + (−ai) = 0

⇒ Σ (−ai) xⁱ ∈ R[x]

∴ (−f)(x) = Σ (−ai) xⁱ ∈ R[x]

Now f(x) + (−f)(x) = Σ ai xⁱ + Σ (−ai) xⁱ

= Σ (ai + (−ai)) xⁱ

= Σ 0xⁱ

= 0(x)

∴ every element of R[x] has an additive inverse.

Associative law w.r.to multiplication:

(f(x)·g(x))·h(x) = ((Σ ai xⁱ)(Σ bj xʲ))(Σ cs xˢ)

= (Σ_k (Σ_{i+j=k} ai bj) x^k)(Σ cs xˢ)

= Σ_n (Σ_{k+s=n} (Σ_{i+j=k} ai bj) cs) x^n

= Σ_n (Σ_{i+j+s=n} (ai bj) cs) x^n

f(x)·(g(x)·h(x)) = (Σ ai xⁱ)((Σ bj xʲ)(Σ cs xˢ))

= (Σ ai xⁱ)(Σ_t (Σ_{j+s=t} bj cs) x^t)

= Σ_n (Σ_{i+t=n} ai (Σ_{j+s=t} bj cs)) x^n

= Σ_n (Σ_{i+j+s=n} ai (bj cs)) x^n

Since (ai bj) cs = ai (bj cs) in R,

(f(x) g(x)) h(x) = f(x) (g(x) h(x))

∴ multiplication is associative.
Distributive law:

f(x)·(g(x) + h(x)) = (Σ ai xⁱ)(Σ bj xʲ + Σ cj xʲ)

= (Σ ai xⁱ)(Σ (bj + cj) xʲ)

= Σ_n (Σ_{i+j=n} ai (bj + cj)) x^n

= Σ_n (Σ_{i+j=n} (ai bj + ai cj)) x^n

= Σ_n (Σ_{i+j=n} ai bj) x^n + Σ_n (Σ_{i+j=n} ai cj) x^n

= f(x)·g(x) + f(x)·h(x)

Similarly we can prove the other distributive law:

(g(x) + h(x))·f(x) = g(x)·f(x) + h(x)·f(x)

Hence R[x] is a ring.

Definition: Let R be a ring. Then R[x] is called the ring of polynomials in the indeterminate x with coefficients in R.

Theorem: The ring R[x] of polynomials over an integral domain R is an integral domain.

Proof: Let R be an integral domain and R[x] the ring of polynomials over R with indeterminate x.

To prove that R[x] is an integral domain we have to prove that R[x] is commutative and without zero divisors.

Let f(x) = a0 + a1x + a2x² + ... = Σ ai xⁱ and

g(x) = b0 + b1x + b2x² + ... = Σ bj xʲ be two polynomials in R[x].

f(x) g(x) = (Σ ai xⁱ)(Σ bj xʲ)

= Σ_n (Σ_{i+j=n} ai bj) x^n

= Σ_n (Σ_{i+j=n} bj ai) x^n since R is commutative

= (Σ bj xʲ)(Σ ai xⁱ)

= g(x) f(x).

∴ R[x] is commutative.

Suppose f(x) ≠ 0, g(x) ≠ 0 and deg f(x) = m, deg g(x) = n

⇒ am ≠ 0 and bn ≠ 0

⇒ am bn ≠ 0 since R is an integral domain.

⇒ at least one coefficient in f(x) g(x) is non-zero.

⇒ f(x) g(x) ≠ 0

∴ R[x] has no zero divisors.

Hence R[x] is an integral domain.

3.7.3 Note: If R is a ring with unity then the ring of polynomials R[x] over R is also a ring with unity.

Suppose R is a ring with unity 1.

Then I(x) = 1 + 0x + 0x² + ... ∈ R[x],

i.e. I(x) = Σ bj xʲ where b0 = 1 and bj = 0 ∀ j ≥ 1.

Let f(x) = a0 + a1x + a2x² + ... ∈ R[x]

f(x) I(x) = (Σ ai xⁱ)(Σ bj xʲ)

= Σ_n (Σ_{i+j=n} ai bj) x^n

= Σ_n (an·1) x^n

= Σ_n an x^n = f(x)

Similarly I(x) f(x) = f(x).

∴ I(x) is the identity element of R[x].

3.7.4 Corollary: If R is a field then R[x] is an integral domain but not a field.

Proof: Suppose R is a field

⇒ R is an integral domain

⇒ R[x] is an integral domain by 3.7.2.

Let 0 ≠ f(x) ∈ R[x].

Suppose deg f(x) > 0.

If g(x) is the multiplicative inverse of f(x) in R[x]

then f(x) g(x) = I(x)

⇒ deg(f(x) g(x)) = 0

⇒ deg f(x) + deg g(x) = 0 by 3.6.6

⇒ deg f(x) = 0

A contradiction.

∴ f(x) has no multiplicative inverse.

Hence R[x] is not a field.

3.8.1 The Evaluation Homomorphism:

Theorem: Let F be a subfield of a field E and F[x] the ring of polynomials over the field F. If α ∈ E then the mapping φ_α : F[x] → E defined by

φ_α(a0 + a1x + a2x² + ... + anx^n) = a0 + a1α + a2α² + ... + anα^n

for all a0 + a1x + a2x² + ... + anx^n ∈ F[x] is a homomorphism.

Proof: Let F be a subfield of a field E and F[x] the ring of polynomials over the field F. For α ∈ E, φ_α : F[x] → E is defined by

φ_α(a0 + a1x + a2x² + ... + anx^n) = a0 + a1α + a2α² + ... + anα^n

for all a0 + a1x + a2x² + ... + anx^n ∈ F[x].

Clearly φ_α is well defined.

Let f(x) = a0 + a1x + ... + amx^m and g(x) = b0 + b1x + ... + bnx^n

⇒ f(x) + g(x) = c0 + c1x + ... + ckx^k where c0 = a0 + b0,

c1 = a1 + b1, ..., ck = ak + bk and k = max{m, n}

φ_α(f(x) + g(x)) = φ_α(c0 + c1x + ... + ckx^k)

= c0 + c1α + c2α² + ... + ckα^k

= (a0 + b0) + (a1 + b1)α + (a2 + b2)α² + ... + (ak + bk)α^k

= (a0 + b0) + (a1α + b1α) + (a2α² + b2α²) + ... + (akα^k + bkα^k)

= (a0 + a1α + a2α² + ... + akα^k) + (b0 + b1α + b2α² + ... + bkα^k)

since the ai’s, bi’s and α are elements of a field.

∴ φ_α(f(x) + g(x)) = φ_α(f(x)) + φ_α(g(x)) ........ (1)

f(x) g(x) = d0 + d1x + d2x² + ... + dpx^p where dn = Σ_{i+j=n} ai bj

φ_α(f(x) g(x)) = φ_α(d0 + d1x + d2x² + ... + dpx^p)

= d0 + d1α + d2α² + ... + dpα^p

= a0b0 + (a0b1 + a1b0)α + (a0b2 + a1b1 + a2b0)α² + ... + (Σ_{i+j=p} ai bj)α^p

= (a0 + a1α + a2α² + ... + amα^m)(b0 + b1α + ... + bnα^n)

= φ_α(f(x)) φ_α(g(x)) ........ (2)

From (1) and (2), φ_α is a homomorphism.

3.8.2 Note:

(1) The homomorphism φ_α defined in 3.8.1 is called the evaluation homomorphism at α.

(2) If φ_α : F[x] → E is the evaluation homomorphism at α ∈ E and

f(x) = a0 + a1x + a2x² + ... + amx^m ∈ F[x]

then φ_α(f(x)) = a0 + a1α + a2α² + ... + amα^m is denoted by f(α).

3.8.3 Definition: Zero of a Polynomial: Let F be a subfield of a field E and α ∈ E. Let φ_α : F[x] → E be the evaluation homomorphism. For f(x) = a0 + a1x + ... + anx^n ∈ F[x], if f(α) = φ_α(f(x)) = a0 + a1α + ... + anα^n = 0 then α ∈ E is called a zero of the polynomial f(x), or α ∈ E is a solution of the equation f(x) = 0.

3.8.4 Note: Let f(x) be a polynomial over a subfield F of a field E. Then α ∈ E is called a zero of the polynomial f(x) if f(α) = 0.

3.8.5 Example: The zeros of f(x) = x⁴ + 4 in Z5[x] are 1, 2, 3, 4.

Z5 = {0, 1, 2, 3, 4}

f(x) = x⁴ + 4

f(x) = 0 if x = 1, 2, 3, 4

∴ the zeros of f(x) are 1, 2, 3, 4.

3.8.6 Kernel of the Evaluation Homomorphism: Let F be a subfield of a field E. For α ∈ E we have the evaluation homomorphism φ_α : F[x] → E defined by φ_α(f(x)) = f(α). Then the set

{f(x) ∈ F[x] | φ_α(f(x)) = f(α) = 0}, where 0 is the zero element of E, is called the kernel of φ_α.

It is denoted by ker φ_α.

3.9 Summary:
In this lesson we defined a polynomial over a ring. We proved that the set of all polynomials over a ring is a ring and that the set of all polynomials over an integral domain (in particular, over a field) is an integral domain. We defined the evaluation homomorphism, the zero of a polynomial and the kernel of the evaluation homomorphism.

3.10 Technical Terms:

i) Polynomial over a ring
ii) Degree of a polynomial
iii) Leading coefficient
iv) Evaluation homomorphism
v) Zero of a polynomial
vi) Kernel of the evaluation homomorphism

3.11 Model Examination Questions:

1. Find the sum and product of the polynomials f(x) = 2 + 3x + 5x² and g(x) = 1 + 2x + 3x² over Z6.

Solution: Z6 = {0, 1, 2, 3, 4, 5}

f(x) = 2 + 3x + 5x² and g(x) = 1 + 2x + 3x²

f(x) + g(x) = (2 +₆ 1) + (3 +₆ 2)x + (5 +₆ 3)x²

= 3 + 5x + 2x²

f(x) g(x) = (2 + 3x + 5x²)(1 + 2x + 3x²)

= (2·1) + (2·2 + 3·1)x + (2·3 + 3·2 + 5·1)x² + (3·3 + 5·2)x³ + (5·3)x⁴, modulo 6

= 2 + x + 5x² + x³ + 3x⁴

2. If f(x) = 5 + 3x + 2x² and g(x) = 1 + 3x + 4x³ ∈ Z6[x], find deg(f(x) + g(x)) and deg(f(x)·g(x)).

Solution: Given f(x) = 5 + 3x + 2x² and g(x) = 1 + 3x + 4x³

deg f(x) = 2, deg g(x) = 3

We have deg(f(x) + g(x)) ≤ max{deg f(x), deg g(x)}

= max{2, 3}

= 3

f(x)·g(x) = (5 + 3x + 2x²)(1 + 3x + 4x³)

= 5 + (5·3 + 3·1)x + (3·3 + 2·1)x² + (5·4 + 2·3)x³ + (3·4)x⁴ + (2·4)x⁵, modulo 6

= 5 + 5x² + 2x³ + 2x⁵

deg(f(x)·g(x)) = 5

3. If φ5 : Z7[x] → Z7 is the evaluation homomorphism, find φ5(2 + x³), φ5(3 + 4x²) and φ5((2 + x³)(3 + 4x²)).

Solution: φ5 : Z7[x] → Z7 is the evaluation homomorphism.

φ5(2 + x³) = 2 + 5³ = 2 + 6 = 1, since 5³ = 125 ≡ 6 (mod 7)

φ5(3 + 4x²) = 3 + 4·5² = 3 + 4·4 = 5, since 5² ≡ 4 (mod 7)

φ5((2 + x³)(3 + 4x²)) = φ5(2 + x³) φ5(3 + 4x²)

= 1·5 = 5

4. Find the zeros of f(x) = 1 + x + x² in Z7[x].

Solution: We have Z7 = {0, 1, 2, 3, 4, 5, 6}

f(0) = 1 + 0 + 0² = 1 ≠ 0

f(1) = 1 + 1 + 1² = 3 ≠ 0

f(2) = 1 + 2 + 2² = 0

f(3) = 1 + 3 + 3² = 6 ≠ 0

f(4) = 1 + 4 + 4² = 0

f(5) = 1 + 5 + 5² = 3 ≠ 0

f(6) = 1 + 6 + 6² = 1 ≠ 0

∴ the zeros of f(x) = 1 + x + x² in Z7[x] are 2 and 4.
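Since Z_p is finite, zeros can always be found by trying every element, as in the table above. This sketch checks both this question and example 3.8.5:

```python
def zeros(coeffs, p):
    """All a in Z_p with f(a) = 0, where coeffs[i] is the x^i coefficient of f."""
    return [a for a in range(p)
            if sum(c * a**i for i, c in enumerate(coeffs)) % p == 0]

print(zeros([1, 1, 1], 7))          # [2, 4]        zeros of 1 + x + x^2 in Z_7
print(zeros([4, 0, 0, 0, 1], 5))    # [1, 2, 3, 4]  zeros of 4 + x^4 in Z_5
```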

3.12 Exercises:

1. If f(x) = 5 + x² and g(x) = 3 + 2x + x³ are polynomials in Z5[x], find the sum and product of f(x) and g(x).

2. If f(x) = 3 + 2x + x³ and g(x) = 4 + x⁴ ∈ Z6[x] then prove that deg(f(x)·g(x)) = deg f(x) + deg g(x).

3. If f(x) = 2 + 5x + 3x², g(x) = 1 + 4x + 2x³, find the sum, the product, deg(f(x) + g(x)) and deg(f(x)·g(x)) in Z7[x].

4. Find the zeros of f(x) = 1 + x² in Z5[x].

5. Let F be a field. Prove that a polynomial f(x) ∈ F[x] is a unit if and only if it is a non-zero constant polynomial.

6. Let F be a field and let F[x] be the ring of polynomials over F. Let f(x) and g(x) be non-zero polynomials in F[x]. We say that f(x) divides g(x) if there exists a polynomial h(x) ∈ F[x] such that g(x) = f(x)·h(x).

Prove that f(x) divides g(x) and g(x) divides f(x) if and only if g(x) = h(x) f(x), where h(x) is a non-zero constant polynomial in F[x].

- Smt. K. Ruth
Rings and Linear Algebra 4.1 Factorization of Polynomials over..

LESSON - 4

FACTORIZATION OF POLYNOMIALS OVER A


FIELD
4.1 Objectives of the Lesson:
To acquaint the student with the division algorithm of polynomials, irreducible polynomials, the famous Eisenstein criterion for irreducibility of polynomials, ideals in F[x] and the unique factorization of a nonconstant polynomial as a product of a finite number of irreducible polynomials.

4.2 Structure:
This lesson contains the following components:
4.3 Introduction

4.4 The Division Algorithm in F[x]

4.5 Irreducible Polynomials

4.6 Ideal Structure in F[x]

4.7 Summary
4.8 Technical Terms
4.9 Exercises
4.10 Model Examination Questions
4.11 Model Practical Problem with Solution
4.12 Problems for Practicals
4.3 Introduction:

Throughout the lesson, we assume that F is a field and F[x] is the ring of polynomials over F. In this lesson, in analogy with the division algorithm for integers, we prove the division algorithm for F[x]. Some important corollaries are proved. The concept of irreducibility of polynomials is introduced and some criteria for determining irreducibility of quadratic and cubic polynomials are obtained. The famous Eisenstein irreducibility criterion is discussed. Suitable examples are given. We also prove that every ideal in F[x] is a principal ideal. Finally, we prove that every nonconstant polynomial in F[x] can be written uniquely as a product of a finite number of

Centre for Distance Education 4.2 Acharya Nagarjuna University

irreducible polynomials in F[x].

Throughout the lesson we use the following notation.

Z : The ring of integers
Q : The field of rational numbers
R : The field of real numbers
C : The field of complex numbers
Zn : The ring of integers modulo n
Zn[x] : The ring of polynomials over Zn

4.4 The Division Algorithm in F[x]:

In order for F[x] to be a Euclidean ring we need to prove the division algorithm in F[x]. This is provided by the following.
4.4.1 Theorem: (The division algorithm) Let f(x) = a₀ + a₁x + ... + aₙxⁿ and g(x) = b₀ + b₁x + ... + bₘx^m ∈ F[x] be polynomials such that aₙ ≠ 0 and bₘ ≠ 0. Then there exist unique polynomials q(x), r(x) ∈ F[x] such that f(x) = q(x)g(x) + r(x), where r(x) = 0 or deg r(x) < deg g(x). (q(x) is called the quotient and r(x) the remainder.)

Proof: If deg g(x) > deg f(x), let q(x) = 0 and r(x) = f(x). Assume that deg g(x) ≤ deg f(x), i.e. m ≤ n. The proof is by induction on n. If n = 0 then m = 0, f(x) = a₀ and g(x) = b₀. Let q(x) = a₀b₀⁻¹ and r(x) = 0. Then q(x)g(x) + r(x) = (a₀b₀⁻¹)b₀ = a₀ = f(x). Assume that the existence part of the theorem is true for polynomials of degree less than n, where n > 0. Now

(aₙbₘ⁻¹x^(n−m))g(x) = aₙbₘ⁻¹x^(n−m)(b₀ + b₁x + ... + bₘx^m) = aₙbₘ⁻¹b₀x^(n−m) + aₙbₘ⁻¹b₁x^(n−m+1) + ... + aₙxⁿ.

Hence f(x) − (aₙbₘ⁻¹x^(n−m))g(x) = (a₀ + a₁x + ... + aₙxⁿ) − (aₙbₘ⁻¹b₀x^(n−m) + ... + aₙxⁿ) is a polynomial of degree less than n. By the induction hypothesis there are polynomials p(x) and r(x) such that f(x) − (aₙbₘ⁻¹x^(n−m))g(x) = p(x)g(x) + r(x), where r(x) = 0 or deg r(x) < deg g(x). Therefore, if q(x) = aₙbₘ⁻¹x^(n−m) + p(x), then f(x) = q(x)g(x) + r(x).

For uniqueness, suppose f(x) = q₁(x)g(x) + r₁(x) and f(x) = q₂(x)g(x) + r₂(x), where r₁(x) = 0 or deg r₁(x) < deg g(x), and r₂(x) = 0 or deg r₂(x) < deg g(x). Assume that r₁(x) ≠ r₂(x). Since (q₁(x) − q₂(x))g(x) = r₂(x) − r₁(x), we get that q₁(x) ≠ q₂(x). Thus deg(q₁(x) − q₂(x)) + deg g(x) = deg((q₁(x) − q₂(x))g(x)) = deg(r₂(x) − r₁(x)). But deg(q₁(x) − q₂(x)) + deg g(x) ≥ deg g(x), while deg(r₂(x) − r₁(x)) ≤ max{deg r₂(x), deg r₁(x)} < deg g(x). This is a contradiction. Therefore r₁(x) = r₂(x) and so q₁(x) = q₂(x).

We compute the polynomials q( x) and r ( x ) of theorem 4.4.1 by long division.

4.4.2. Example: For the polynomials f(x) = x⁶ + 3x⁵ + 4x² − 3x + 2 and g(x) = x² + 2x − 3 in Q[x], find q(x) and r(x) as described by the division algorithm so that f(x) = g(x)q(x) + r(x) with r(x) = 0 or deg r(x) < deg g(x).

                     x⁴ + x³ + x² + x + 5
x² + 2x − 3 ) x⁶ + 3x⁵             + 4x² − 3x + 2
              x⁶ + 2x⁵ − 3x⁴
                    x⁵ + 3x⁴
                    x⁵ + 2x⁴ − 3x³
                          x⁴ + 3x³ + 4x²
                          x⁴ + 2x³ − 3x²
                                x³ + 7x² − 3x
                                x³ + 2x² − 3x
                                     5x²       + 2
                                     5x² + 10x − 15
                                         − 10x + 17

Thus q(x) = x⁴ + x³ + x² + x + 5 and r(x) = −10x + 17.
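The long division above can be replayed mechanically. Below is a sketch of the division algorithm over Q using exact rational arithmetic (`poly_divmod` is my own name; polynomials are lists of coefficients with the index equal to the power of x):

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Divide f by g over Q. f, g are coefficient lists (index = power).
    Returns (q, r) with f = q*g + r and r = [] or deg r < deg g."""
    g = [Fraction(c) for c in g]
    r = [Fraction(c) for c in f]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    while len(r) >= len(g):
        c = r[-1] / g[-1]              # next quotient coefficient
        d = len(r) - len(g)            # its power of x
        q[d] = c
        for i, gc in enumerate(g):     # subtract c * x^d * g from r
            r[i + d] -= c * gc
        r.pop()                        # the leading term has cancelled
        while r and r[-1] == 0:
            r.pop()
    return q, r

# f = x^6 + 3x^5 + 4x^2 - 3x + 2, g = x^2 + 2x - 3
q, r = poly_divmod([2, -3, 4, 0, 0, 3, 1], [-3, 2, 1])
print([int(c) for c in q], [int(c) for c in r])
# -> [5, 1, 1, 1, 1] [17, -10], i.e. q = x^4 + x^3 + x^2 + x + 5, r = -10x + 17
```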

4.4.3 Corollary: (Remainder theorem) Let f(x) = a₀ + a₁x + ... + aₙxⁿ ∈ F[x]. For any a ∈ F, there exists a unique polynomial q(x) ∈ F[x] such that f(x) = q(x)(x − a) + f(a).

Proof: If f(x) = 0, let q(x) = 0. Then f(x) = q(x)(x − a) + f(a). Suppose that f(x) ≠ 0. By theorem 4.4.1 there exist unique polynomials q(x), r(x) ∈ F[x] such that f(x) = q(x)(x − a) + r(x), where r(x) = 0 or deg r(x) < 1. Thus we have that r(x) = c for some c ∈ F. So f(x) = q(x)(x − a) + c. Then f(a) = q(a)(a − a) + c = c. Thus f(x) = q(x)(x − a) + f(a).

4.4.4 Corollary (Factor Theorem): An element a ∈ F is a zero of f(x) ∈ F[x] if and only if x − a is a factor of f(x).

Proof: If a is a zero of f(x), then f(a) = 0. By corollary 4.4.3, f(x) = q(x)(x − a) + f(a) for some q(x) ∈ F[x]. Since f(a) = 0, we get that (x − a) is a factor of f(x). Conversely, if (x − a) is a factor of f(x), then f(x) = q(x)(x − a) for some q(x) ∈ F[x]. Then f(a) = q(a)(a − a) = 0. Thus a is a zero of f(x).
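Corollaries 4.4.3 and 4.4.4 are easy to check numerically: synthetic division (Horner's scheme) divides f(x) by (x − a), and the remainder it produces must equal f(a). A small sketch (the helper name is mine):

```python
def synthetic_division(coeffs, a):
    """Divide f by (x - a); coeffs are listed highest degree first.
    Returns (quotient_coeffs, remainder); by Corollary 4.4.3, remainder = f(a)."""
    acc, out = 0, []
    for c in coeffs:
        acc = acc * a + c
        out.append(acc)
    return out[:-1], out[-1]

# f(x) = x^3 - 2x + 7 at a = 2: the remainder is f(2) = 8 - 4 + 7 = 11, and
# (x - 2) is a factor exactly when this remainder is 0 (Corollary 4.4.4)
print(synthetic_division([1, 0, -2, 7], 2))  # -> ([1, 2, 2], 11)
```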

4.4.5 Example: Let x³ + 4x² + 4x + 1 ∈ Z5[x]. We divide this polynomial by x − 1 and get

          x² + 4
x − 1 ) x³ + 4x² + 4x + 1
        x³ −  x²
               0 + 4x + 1
                   4x − 4
                        0

Therefore x³ + 4x² + 4x + 1 = (x − 1)(x² + 4), and so (x − 1) is a factor of x³ + 4x² + 4x + 1. By corollary 4.4.4, we get that 1 is a zero of x³ + 4x² + 4x + 1.

4.4.6 Corollary: A nonzero polynomial f(x) ∈ F[x] of degree n has at most n zeros in F.

Proof: If f(x) has no root in F then the corollary is true. So, suppose that f(x) has at least one root in F. The proof is by induction on n. If n = 1, then f(x) = ax + b and −ba⁻¹ is the only root of f(x) in F. In this case the corollary is true. Assume that the corollary is true for all polynomials of degree n − 1. Let a ∈ F be a root of f(x). Then f(x) = (x − a)q(x), where q(x) ∈ F[x]. Therefore deg q(x) = n − 1. By the induction hypothesis, q(x) has at most n − 1 roots in F. If b ∈ F and b is a zero of f(x) other than a, then 0 = f(b) = q(b)(b − a). Since b ≠ a, we get that q(b) = 0. So b is a zero of q(x). Thus every zero of f(x) other than a is a zero of q(x). Hence f(x) has at most n roots in F.

4.4.7 Example: Let f(x) = x³ + x² + 6x + 6 ∈ Z7[x]. Note that 1 is a zero of f(x). By corollary 4.4.4, (x − 1) is a factor of f(x). Let us find the quotient by long division.

          x² + 2x + 1
x − 1 ) x³ +  x² + 6x + 6
        x³ −  x²
             2x² + 6x
             2x² − 2x
                    x + 6
                    x − 1
                        0

Thus f(x) = (x − 1)(x² + 2x + 1). Obviously x² + 2x + 1 = (x + 1)². Therefore x³ + x² + 6x + 6 = (x − 1)(x + 1)² in Z7[x].

4.4.8 Example: For the polynomials f(x) = x⁴ + 5x³ + 3x² and g(x) = 5x² + x + 2 in Z11[x], find the quotient and remainder when f(x) is divided by g(x).

Solution: Since 5⁻¹ = 9 in Z11, the long division proceeds as follows.

              9x² + 8x + 2
5x² + x + 2 ) x⁴ + 5x³ +  3x²
              x⁴ + 9x³ +  7x²
                   7x³ +  7x²
                   7x³ +  8x² + 5x
                         10x² + 6x
                         10x² + 2x + 4
                                4x + 7

Thus f(x) = (9x² + 8x + 2)(5x² + x + 2) + (4x + 7).

∴ The quotient q(x) = 9x² + 8x + 2 and the remainder r(x) = 4x + 7.
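Division in Zp[x] follows the same steps as over Q, except that dividing by the leading coefficient of g means multiplying by its inverse mod p. A sketch (the helper name is mine; `pow(b, -1, p)` computes the modular inverse and needs Python 3.8+) re-running the division of f = x⁴ + 5x³ + 3x² by g = 5x² + x + 2 in Z11[x]:

```python
def poly_divmod_mod_p(f, g, p):
    """Divide f by g in Z_p[x]; coefficient lists, index = power."""
    inv = pow(g[-1], -1, p)            # inverse of the leading coefficient mod p
    r = [c % p for c in f]
    q = [0] * max(len(f) - len(g) + 1, 1)
    while len(r) >= len(g):
        c = (r[-1] * inv) % p          # next quotient coefficient
        d = len(r) - len(g)            # its power of x
        q[d] = c
        for i, gc in enumerate(g):     # subtract c * x^d * g from r, mod p
            r[i + d] = (r[i + d] - c * gc) % p
        r.pop()                        # the leading term has cancelled
        while r and r[-1] == 0:
            r.pop()
    return q, r

q, r = poly_divmod_mod_p([0, 0, 3, 5, 1], [2, 1, 5], 11)
print(q, r)  # -> [2, 8, 9] [7, 4], i.e. q = 9x^2 + 8x + 2 and r = 4x + 7
```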

4.5 Irreducible Polynomials:

We start with the following important definition.

4.5.1 Definition: A nonconstant polynomial f(x) ∈ F[x] is called irreducible over F (or an irreducible polynomial in F[x]) if f(x) cannot be expressed as a product g(x)h(x) of two polynomials g(x) and h(x) in F[x] such that deg g(x) < deg f(x) and deg h(x) < deg f(x).

If f(x) ∈ F[x] is a nonconstant polynomial that is not irreducible over F, then f(x) is said to be reducible over F.

4.5.2 Examples: 1. Every first degree polynomial in F[x] is irreducible over F. In particular, 2x + 2 ∈ Q[x] is irreducible over Q.

2. Since x² + 1 = (x + i)(x − i) in C[x], x² + 1 is reducible over C.

Irreducible polynomials play an important role in the study of field theory. The problem of determining whether a given f(x) ∈ F[x] is irreducible over F is difficult. We now give some criteria for determining irreducibility of quadratic and cubic polynomials.

4.5.3 Theorem: Let f(x) ∈ F[x] be a polynomial of degree 2 or 3. Then f(x) is reducible over F if and only if it has a zero in F.

Proof: If f(x) is reducible over F, then f(x) = f₁(x)f₂(x), where f₁(x) and f₂(x) are polynomials in F[x] such that deg f₁(x) < deg f(x) and deg f₂(x) < deg f(x). But deg f(x) = deg f₁(x) + deg f₂(x). If deg f₁(x) ≥ 2 and deg f₂(x) ≥ 2, then deg f(x) ≥ 4, a contradiction. Therefore either f₁(x) or f₂(x) is of degree 1. If, say, f₁(x) is of degree 1, then f₁(x) = ax + b, where a, b ∈ F and a ≠ 0. Then f₁(−ba⁻¹) = 0 and hence f(−ba⁻¹) = 0, which proves that −ba⁻¹ is a zero of f(x) in F.

Conversely, if f(a) = 0 for some a ∈ F, then x − a is a factor of f(x), so f(x) = (x − a)q(x), where q(x) ∈ F[x]. Since deg(x − a) < deg f(x) and deg q(x) < deg f(x), we get that f(x) is reducible over F.
4.5.4 Examples: 1. Since the polynomial x² − 2 ∈ Q[x] has no zeros in Q, by theorem 4.5.3, x² − 2 is irreducible over Q.

2. Note that √2 is a zero of x² − 2 in R; so, by theorem 4.5.3, we get that x² − 2 is reducible over R.

From examples (1) and (2), we observe that irreducibility depends on the field.

3. Since x³ + 3x + 2 ∈ Z5[x] has no zeros in Z5, by theorem 4.5.3, x³ + 3x + 2 is irreducible over Z5.

We now state a theorem which is useful in proving some interesting theorems and whose proof is beyond the scope of this book.

4.5.5 Theorem: If f(x) ∈ Z[x], then f(x) factors into a product g(x)h(x) of two polynomials g(x) and h(x) in Q[x] such that deg g(x) = r < deg f(x) and deg h(x) = s < deg f(x) if and only if f(x) factors into a product u(x)v(x) of two polynomials u(x) and v(x) in Z[x] such that deg u(x) = r < deg f(x) and deg v(x) = s < deg f(x).

4.5.6 Example: Let f(x) = x³ + x + 1 ∈ Z2[x]. It can be checked that none of the elements of Z2 is a zero of f(x). So f(x) has no zero in Z2 and, by theorem 4.5.3, f(x) is irreducible over Z2.
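Theorem 4.5.3 turns irreducibility testing for quadratics and cubics over Z_p into a finite search for a zero, which is easy to automate (a sketch; the function name is mine):

```python
def irreducible_deg2or3_mod_p(coeffs, p):
    """coeffs: index = power; the polynomial must have degree 2 or 3.
    By Theorem 4.5.3 it is irreducible over Z_p iff it has no zero in Z_p."""
    assert len(coeffs) - 1 in (2, 3)
    f = lambda a: sum(c * a**i for i, c in enumerate(coeffs)) % p
    return all(f(a) != 0 for a in range(p))

print(irreducible_deg2or3_mod_p([1, 1, 0, 1], 2))  # x^3 + x + 1 over Z_2 -> True
print(irreducible_deg2or3_mod_p([2, 3, 0, 1], 5))  # x^3 + 3x + 2 over Z_5 -> True
```

Note that this shortcut is special to degrees 2 and 3: a degree-4 polynomial can be reducible without having any zero (e.g. as a product of two irreducible quadratics).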

4.5.7 Corollary: Let f(x) = a₀ + a₁x + ... + aₙ₋₁x^(n−1) + xⁿ ∈ Z[x] with a₀ ≠ 0. If f(x) has a zero a ∈ Q, then a ∈ Z and a divides a₀.

Proof: Since a ≠ 0, we can write a as α/β, where α, β ∈ Z and the gcd (α, β) = 1. Then

a₀ + a₁(α/β) + ... + aₙ₋₁(α/β)^(n−1) + (α/β)ⁿ = 0.

Multiply the above equation by βⁿ to obtain αⁿ = −β(a₀β^(n−1) + a₁αβ^(n−2) + ... + aₙ₋₁α^(n−1)). Because α, β ∈ Z, it follows that β divides αⁿ. Since (αⁿ, β) = 1 and β divides αⁿ, we have that β = ±1. Therefore a = ±α ∈ Z. Substituting back, a₀ = −a(a₁ + a₂a + ... + aₙ₋₁a^(n−2) + a^(n−1)), and this last equation shows that a divides a₀.

4.5.8 Example: Let us show that f(x) = x³ − x + 1 ∈ Q[x] is irreducible over Q. If f(x) is reducible over Q, then by theorem 4.5.3, f(x) has a zero a in Q. By corollary 4.5.7, a ∈ Z and a | 1, so a = ±1. But f(1) = 1 − 1 + 1 = 1 ≠ 0 and f(−1) = −1 + 1 + 1 = 1 ≠ 0. So a is not a zero of f(x), a contradiction. Hence f(x) is irreducible over Q.
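Corollary 4.5.7 gives a finite procedure for a monic f(x) ∈ Z[x]: any rational zero must be an integer divisor of a₀, so only finitely many candidates need checking. A sketch (names are mine):

```python
def rational_zeros_of_monic(coeffs):
    """coeffs: index = power, with coeffs[-1] == 1 and coeffs[0] != 0.
    By Corollary 4.5.7 every rational zero is an integer dividing coeffs[0]."""
    a0 = abs(coeffs[0])
    f = lambda a: sum(c * a**i for i, c in enumerate(coeffs))
    candidates = [s * d for d in range(1, a0 + 1) if a0 % d == 0 for s in (1, -1)]
    return [a for a in candidates if f(a) == 0]

# f(x) = x^3 - x + 1: the only candidates are 1 and -1, and neither is a
# zero, so this cubic has no rational zero and is irreducible over Q
print(rational_zeros_of_monic([1, -1, 0, 1]))  # -> []
```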

The question of deciding whether a given polynomial is irreducible or not can be a difficult
and laborious one. Few criteria exist which declare that a given polynomial is or is not irreducible.
One of these few is the following.

4.5.9 Theorem: (Eisenstein Criterion) Let f(x) = a₀ + a₁x + ... + aₙxⁿ ∈ Z[x], n ≥ 1. If there is a prime number p such that p | a₀, p | a₁, ..., p | aₙ₋₁, p ∤ aₙ and p² ∤ a₀, then f(x) is irreducible over Q.

Proof: Assume that f(x) is reducible over Q. Then f(x) factors into a product of two polynomials in Q[x] of lower degrees r and s. By theorem 4.5.5, f(x) has such a factorization with polynomials of the same degrees r and s in Z[x]. Accordingly, f(x) = (b₀ + b₁x + ... + bᵣx^r)(c₀ + c₁x + ... + cₛx^s) with bᵢ, cⱼ ∈ Z, bᵣ ≠ 0, cₛ ≠ 0, r < n and s < n. Then a₀ = b₀c₀ and aₙ = bᵣcₛ. Clearly r + s = n. Since p | a₀ and p² ∤ a₀, p divides exactly one of b₀ and c₀: either p | b₀ and p ∤ c₀, or p | c₀ and p ∤ b₀. Consider the case p | c₀ and p ∤ b₀.

Because p ∤ aₙ, it follows that p ∤ bᵣ and p ∤ cₛ. Let cₘ be the first coefficient in c₀ + c₁x + ... + cₛx^s such that p ∤ cₘ. Observe that aₘ = b₀cₘ + b₁cₘ₋₁ + ... + bₘc₀. From this we get that p ∤ aₘ, since p ∤ b₀cₘ while p | (b₁cₘ₋₁ + ... + bₘc₀). By hypothesis p divides every coefficient of f(x) below aₙ, so m ≥ n. Thus n ≤ m ≤ s < n, which is impossible. Similarly, if p | b₀ and p ∤ c₀, we arrive at a contradiction. Therefore our assumption is wrong and f(x) is irreducible over Q.

4.5.10 Examples: 1. Consider the polynomial x³ + 5x + 10 ∈ Z[x]. Here 5 is a prime number such that 5 ∤ 1, 5 | 5, 5 | 10 and 5² ∤ 10. By theorem 4.5.9, we get that x³ + 5x + 10 is irreducible over Q.

2. Similarly, by using the prime number 3, we get, by theorem 4.5.9, that 25x⁵ + 3x⁴ + 3x² + 12 is irreducible over Q.

3. The polynomial Φₚ(x) = 1 + x + ... + x^(p−1) is irreducible over Q, where p is any prime number.

Solution: Note that Φₚ(x) = (x^p − 1)/(x − 1). Let

g(x) = Φₚ(x + 1) = ((x + 1)^p − 1)/((x + 1) − 1) = (1/x)(x^p + (p choose p−1)x^(p−1) + ... + (p choose 1)x)

     = x^(p−1) + (p choose p−1)x^(p−2) + ... + (p choose 1).

Note that p | (p choose r) for r = 1, ..., p − 1, while p ∤ 1 and p² ∤ (p choose 1) = p. By theorem 4.5.9, g(x) is irreducible over Q. But if Φₚ(x) = f(x)h(x) were a nontrivial factorization of Φₚ(x) in Z[x], then Φₚ(x + 1) = f(x + 1)h(x + 1) would give a nontrivial factorization of g(x) in Z[x]. By theorem 4.5.5, g(x) would then be a product of two nonconstant polynomials in Q[x]. This is a contradiction, since g(x) is irreducible over Q. Thus Φₚ(x) must also be irreducible over Q.

4. Prove that x³ + 3x² + 8 is irreducible over Q.

Solution: Let f(x) = x³ + 3x² + 8. If f(x) is reducible over Q, then by theorem 4.5.3, f(x) has a zero a in Q. By corollary 4.5.7, a ∈ Z and a | 8. Therefore a = ±1, ±2, ±4, ±8. But none of these is a zero of f(x). This is a contradiction.

∴ f(x) is irreducible over Q.
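The hypotheses of theorem 4.5.9 are pure divisibility conditions, so whether a given prime witnesses the criterion can be checked directly (a sketch; the function name is mine):

```python
def satisfies_eisenstein(coeffs, p):
    """coeffs: index = power. True iff the prime p witnesses the Eisenstein
    criterion: p | a_i for all i < n, p does not divide a_n, and p^2 does
    not divide a_0."""
    a0, an = coeffs[0], coeffs[-1]
    return (a0 % p == 0 and a0 % (p * p) != 0
            and an % p != 0
            and all(c % p == 0 for c in coeffs[1:-1]))

print(satisfies_eisenstein([10, 5, 0, 1], 5))         # x^3 + 5x + 10 -> True
print(satisfies_eisenstein([12, 0, 3, 0, 3, 25], 3))  # 25x^5 + 3x^4 + 3x^2 + 12 -> True
print(satisfies_eisenstein([8, 0, 3, 1], 2))          # x^3 + 3x^2 + 8 -> False (4 | 8)
```

Note that the test failing for one prime proves nothing: example 4 above is irreducible even though no prime satisfies Eisenstein's conditions for it.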

4.6 Ideal Structure in F[x]:

We now introduce the notation ⟨a⟩ = {ra | r ∈ R} to represent the ideal of all multiples of a.

4.6.1 Definition: Let R be a commutative ring with unity. An ideal I of R is called a principal ideal if I = ⟨a⟩ for some a ∈ R.

4.6.2 Examples: 1. Every ideal of Z is a principal ideal.

Solution: Let I be an ideal of Z. If I = {0}, then clearly I = ⟨0⟩. So assume that I ≠ {0}. Since I is closed under negation, it contains a positive integer; choose the least positive integer m in I. Clearly ⟨m⟩ = {km | k ∈ Z} ⊆ I. Conversely, if h ∈ I, then h = qm + r, where q, r ∈ Z with 0 ≤ r < m. Since r = h − qm ∈ I, the minimality of m implies r = 0 and h = qm, i.e. h ∈ ⟨m⟩. Thus I ⊆ ⟨m⟩. Therefore I = ⟨m⟩ and hence I is a principal ideal.

2. We know that F[x] is a commutative ring with unity. Clearly x ∈ F[x]. The principal ideal ⟨x⟩ is the set {xf(x) | f(x) ∈ F[x]}.

∴ The ideal ⟨x⟩ consists of all polynomials in F[x] having zero constant term.

The next theorem is an application of the division algorithm for F[x].

4.6.3 Theorem: Every ideal in F[x] is a principal ideal.

Proof: Let M be an ideal of F[x]. If M = {0}, then M = ⟨0⟩. Assume that M ≠ {0}. Choose a nonzero polynomial g(x) in M of minimal degree. If the degree of g(x) is 0, then g(x) is a unit in F[x]; therefore M = F[x] = ⟨g(x)⟩. If the degree of g(x) is ≥ 1, we claim that M = ⟨g(x)⟩. If f(x) ∈ M, then f(x) = q(x)g(x) + r(x), where r(x) = 0 or deg r(x) < deg g(x). Since f(x), g(x) ∈ M, we get that r(x) = f(x) − q(x)g(x) ∈ M. Since g(x) is a nonzero element of minimal degree in M, we have that r(x) = 0. Thus f(x) = q(x)g(x) ∈ ⟨g(x)⟩. Hence M = ⟨g(x)⟩.

We now characterize the maximal ideals of F[x].

4.6.4 Theorem: Let p(x) be a nonzero polynomial in F[x]. Then p(x) is irreducible over F if and only if ⟨p(x)⟩ is a maximal ideal of F[x].

Proof: If ⟨p(x)⟩ is a maximal ideal of F[x], then ⟨p(x)⟩ ≠ F[x]. Therefore p(x) is a nonconstant polynomial. Let p(x) = u(x)v(x), where u(x), v(x) ∈ F[x]. Then ⟨p(x)⟩ ⊆ ⟨u(x)⟩. Hence ⟨p(x)⟩ = ⟨u(x)⟩ or ⟨u(x)⟩ = F[x]. If ⟨u(x)⟩ = F[x], then u(x) is a unit, i.e. deg u(x) = 0. If ⟨u(x)⟩ = ⟨p(x)⟩, then u(x) = p(x)g(x) for some g(x) ∈ F[x], and hence p(x) = u(x)v(x) = p(x)g(x)v(x). Since F[x] is an integral domain, 1 = g(x)v(x). Hence v(x) is a unit, i.e. deg v(x) = 0. Therefore p(x) is irreducible over F.

Conversely, if p(x) is irreducible over F, then ⟨p(x)⟩ ≠ F[x]. Suppose ⟨p(x)⟩ ⊆ I ⊆ F[x], where I is an ideal of F[x]. By theorem 4.6.3, I = ⟨f(x)⟩ for some f(x) ∈ F[x]. Since p(x) ∈ ⟨f(x)⟩, we get that p(x) = f(x)h(x) for some h(x) ∈ F[x]. Since p(x) is irreducible, either f(x) is a constant polynomial (whence ⟨f(x)⟩ = F[x]) or h(x) is a constant polynomial (whence ⟨p(x)⟩ = ⟨f(x)⟩). Thus we have that either ⟨p(x)⟩ = ⟨f(x)⟩ or ⟨f(x)⟩ = F[x]. Hence ⟨p(x)⟩ is a maximal ideal of F[x].

We now prove a useful theorem.

4.6.5 Theorem: Let p(x) be an irreducible polynomial in F[x]. If p(x) divides g(x)h(x) for some g(x), h(x) ∈ F[x], then either p(x) divides g(x) or p(x) divides h(x).

Proof: If p(x) divides g(x)h(x), then g(x)h(x) ∈ ⟨p(x)⟩. Since p(x) is irreducible, by theorem 4.6.4, ⟨p(x)⟩ is a maximal ideal of F[x]. Every maximal ideal is a prime ideal. Therefore ⟨p(x)⟩ is a prime ideal. Since g(x)h(x) ∈ ⟨p(x)⟩, it follows that either g(x) ∈ ⟨p(x)⟩ or h(x) ∈ ⟨p(x)⟩, i.e. either p(x) divides g(x) or p(x) divides h(x).

4.6.6 Corollary: If p(x) ∈ F[x] is irreducible over F and p(x) divides the product r₁(x)r₂(x)...rₙ(x), where rᵢ(x) ∈ F[x] for i = 1, ..., n, then p(x) divides rᵢ(x) for at least one i.

Proof: We prove the corollary by induction on n. By theorem 4.6.5, the result is true for n = 2. Assume that the result is true for n − 1. Let r(x) = r₁(x)...rₙ₋₁(x); then p(x) divides r(x)rₙ(x). Again by theorem 4.6.5, either p(x) | r(x) or p(x) | rₙ(x). If p(x) | r(x), then by the induction hypothesis p(x) | rᵢ(x) for some 1 ≤ i ≤ n − 1. Thus in either case we get that p(x) | rᵢ(x) for some i.

4.6.7 Theorem: If F is a field, then every nonconstant polynomial f(x) ∈ F[x] can be factored in F[x] uniquely (up to order and units) as a product of a finite number of irreducible polynomials in F[x].

Proof: Let f(x) ∈ F[x] be a nonconstant polynomial. We first prove that f(x) can be written as the product of a finite number of irreducible polynomials in F[x]. The proof is by induction on deg f(x). If deg f(x) = 1, then f(x) is irreducible and the result is true in this case. We assume that the result is true for all polynomials g(x) in F[x] such that deg g(x) < deg f(x). On the basis of this assumption we aim to prove the result for f(x). If f(x) is irreducible, then there is nothing to prove. If f(x) is not irreducible, then f(x) = g(x)h(x), where deg g(x) < deg f(x) and deg h(x) < deg f(x). Then by our induction hypothesis g(x) and h(x) can be written as products of a finite number of irreducible polynomials in F[x]: g(x) = r₁(x)...rₙ(x) and h(x) = t₁(x)...tₘ(x), where the rᵢ(x) and tⱼ(x) are irreducible polynomials in F[x]. Consequently f(x) = g(x)h(x) = r₁(x)...rₙ(x)t₁(x)...tₘ(x), and in this way f(x) has been factored as a product of a finite number of irreducible polynomials.

It remains for us to prove uniqueness. Suppose that f(x) = p₁(x)p₂(x)...pᵣ(x) = q₁(x)...qₛ(x) are two factorizations of f(x) into irreducible polynomials. By corollary 4.6.6, p₁(x) divides qᵢ(x) for some i. Since p₁(x) and qᵢ(x) are both irreducible polynomials in F[x] and p₁(x) | qᵢ(x), we have that qᵢ(x) = u₁p₁(x), where u₁ is a unit in F[x]. Thus p₁(x)p₂(x)...pᵣ(x) = u₁p₁(x)q₁(x)...qᵢ₋₁(x)qᵢ₊₁(x)...qₛ(x); cancel off p₁(x) and we are left with p₂(x)...pᵣ(x) = u₁q₁(x)...qᵢ₋₁(x)qᵢ₊₁(x)...qₛ(x). Repeat the argument on this relation with p₂(x). After r steps the left side becomes 1 and the right side is a unit times a product of a certain number of q(x)'s (the excess of s over r). Since the q(x)'s are not units, none can remain, which forces s ≤ r. Similarly r ≤ s, so that r = s. In the process we have also shown that every pᵢ(x) = uᵢqⱼ(x) for some j, where uᵢ is a unit.
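Over Z_p the factor theorem lets one peel off linear factors one at a time, giving a concrete (partial) version of the factorization promised by theorem 4.6.7 — this sketch finds only the linear irreducible factors, which suffices whenever f splits completely; the helper name is mine:

```python
def linear_factors_mod_p(coeffs, p):
    """coeffs: index = power, over Z_p. Repeatedly divide out (x - a) for
    each zero a (with multiplicity). Returns (roots, leftover_coeffs)."""
    f = [c % p for c in coeffs]
    roots = []
    a = 0
    while a < p and len(f) > 1:
        if sum(c * a**i for i, c in enumerate(f)) % p == 0:
            # synthetic division of f by (x - a), coefficients low-to-high
            q = [0] * (len(f) - 1)
            acc = 0
            for i in range(len(f) - 1, 0, -1):
                acc = (acc * a + f[i]) % p
                q[i - 1] = acc
            f = q
            roots.append(a)     # stay at the same a: it may be a repeated root
        else:
            a += 1
    return roots, f

# x^4 + 4 = x^4 - 1 in Z_5[x] splits completely into linear factors
print(linear_factors_mod_p([4, 0, 0, 0, 1], 5))  # -> ([1, 2, 3, 4], [1])
```

So x⁴ + 4 = (x − 1)(x − 2)(x − 3)(x − 4) in Z5[x], with leftover quotient the unit 1.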

4.7 Summary:

In this lesson you have learnt the division algorithm in F[x], irreducible polynomials, Eisenstein's irreducibility criterion, ideals in F[x] and factorization of polynomials in F[x].

4.8 Technical terms/Named theorems:


Irreducible polynomial
Principal ideal

Division algorithm in F[x]

Remainder theorem
Factor Theorem
Eisenstein’s irreducibility criterion

Factorization of polynomials in F[x]

4.9 Model Examination Questions:

1. State and prove the division algorithm in F[x].

2. Define an irreducible polynomial. Give an example of an irreducible polynomial.

3. Prove that a nonzero polynomial p(x) in F[x] is irreducible if and only if ⟨p(x)⟩ is a maximal ideal.

4. State and prove Eisenstein's irreducibility criterion.

5. For any prime number p, prove that the polynomial Φₚ(x) = 1 + x + ... + x^(p−1) is irreducible over Q.

6. Prove that every ideal in F[x] is a principal ideal.

7. Prove that every nonconstant polynomial in F[x] can be factored in F[x] uniquely (up to order and units) as a product of a finite number of irreducible polynomials in F[x].

8. Find all prime numbers p such that x + 2 is a factor of x⁴ + x³ + x² + x + 1 in Zp[x].

9. Let f(x) ∈ F[x] be a polynomial of degree 2 or 3. Prove that f(x) is reducible over F if and only if it has a zero in F.

10. Let f(x) = a₀ + a₁x + ... + aₙ₋₁x^(n−1) + xⁿ ∈ Z[x] with a₀ ≠ 0. If f(x) has a zero a ∈ Q, then prove that a ∈ Z and a | a₀.

4.10 Exercises:

1. Determine which of the following are irreducible over Q.

a) 2x⁵ + 6x³ + 9x² + 15

b) x⁴ + 3x² + 9

c) 3x⁵ + 7x⁴ + 7

2. Prove that

a) x² + 1 is irreducible over Z7.

b) x² + x + 1 is irreducible over Z2.

c) x² + x + 4 is irreducible over Z11.

3. Find all prime numbers p such that x + 2 is a factor of x⁴ + x³ + x² + x + 1 in Zp[x].

4. Show that x² + 8x + 2 is irreducible over Q.

5. For the polynomials f(x) = x⁶ + 3x⁵ + 4x² − 3x + 2 and g(x) = 3x² + 2x − 3 in Z7[x], find the quotient q(x) and remainder r(x).

6. For the polynomials 2x⁷ + x⁶ + 3x⁵ + 4x³ + x + 5 and x² + 2x + 4 in Q[x], find the quotient q(x) and remainder r(x).

7. If p is a prime number, prove that the polynomial xⁿ − p is irreducible over Q.

8. Determine whether the following polynomials in Z[x] satisfy an Eisenstein criterion for irreducibility over Q.

a) x² + 12

b) 8x³ + 6x² + 9x + 24

c) 4x¹⁰ + 6x³ + 24x + 18

d) 2x¹⁰ + 25x³ + 10x² + 30

4.11 Model Practical Problem with Solution:

Problem: The polynomial 2x³ + 3x² − 7x − 5 can be factored into linear factors in Z11[x]. Find this factorization.

AIM: To find the factorization of the polynomial 2x³ + 3x² − 7x − 5 into linear factors in Z11[x].

Hypothesis: The polynomial 2x³ + 3x² − 7x − 5 can be factored into linear factors in Z11[x].

Solution: By corollary 4.4.4, a linear polynomial (x − a) is a factor of a polynomial f(x) in F[x] if and only if a is a zero of f(x) in F. So, by hypothesis, all the zeros of 2x³ + 3x² − 7x − 5 are in Z11. On verification, we get that 3 is a zero of the given polynomial, since 2·3³ + 3·3² − 7·3 − 5 = 55 = 0 in Z11. Therefore x − 3 divides 2x³ + 3x² − 7x − 5.

Let us find the factorization by long division, reducing coefficients mod 11 as we go.

          2x² + 9x − 2
x − 3 ) 2x³ + 3x² − 7x − 5
        2x³ − 6x²
              9x² − 7x
              9x² − 5x         (since −27 ≡ −5 (mod 11))
                  − 2x − 5
                  − 2x + 6     (since 6 ≡ −5 (mod 11))
                         0

∴ 2x³ + 3x² − 7x − 5 = (x − 3)(2x² + 9x − 2).

Since −3 is a zero of 2x² + 9x − 2, we can divide this polynomial by x + 3.

          2x + 3
x + 3 ) 2x² + 9x − 2
        2x² + 6x
              3x − 2
              3x + 9           (since 9 ≡ −2 (mod 11))
                   0

Conclusion: 2x³ + 3x² − 7x − 5 = (x − 3)(x + 3)(2x + 3) is the required factorization.
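The factorization can be confirmed by multiplying the three factors back together in Z11[x]; note that −7 ≡ 4 and −5 ≡ 6 (mod 11). A sketch (`poly_mul` is my own helper):

```python
def poly_mul(f, g, p):
    """Multiply two polynomials over Z_p; coefficient lists, index = power."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

# (x - 3)(x + 3)(2x + 3) in Z_11; x - 3 is stored as [-3 % 11, 1] = [8, 1]
prod = poly_mul(poly_mul([8, 1], [3, 1], 11), [3, 2], 11)
print(prod)  # -> [6, 4, 3, 2], which is 2x^3 + 3x^2 - 7x - 5 reduced mod 11
```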

4.12 Problems for Practicals:

1. Find the remainder and quotient when the polynomial x⁸ + 5x⁷ + 3x⁶ + 2x⁵ + x⁴ + 8x³ + 6x² + 2x + 1 is divided by x⁴ + x³ + 3x² + 1 over Q[x].

2. Find the remainder and quotient when the polynomial x⁷ + 2x⁶ + 4x⁵ + x³ + 2x² + x + 3 is divided by 3x⁴ + 2x³ + 4x² + 1 over Q[x].

3. If f(x) = x⁴ + 5x³ + 3x² + 2x + 1 and g(x) = x² + x + 1 are polynomials in Z5[x], then find the quotient and remainder when f(x) is divided by g(x).

4. Show that the polynomial x² + 1 is irreducible over the field of real numbers and reducible over the field of complex numbers.

5. Show that the polynomial x² + x + 4 ∈ Z11[x] is irreducible over Z11.

6. Resolve x⁴ + 4 into linear factors in Z5[x].

7. Show that f(x) = x² + 8x + 2 is irreducible over the field of rational numbers. Is it irreducible over the field of real numbers? Give reasons for your answer.

8. If f(x) = 1 + x + x² + x³ + x⁴, prove that f(x) is irreducible over Q.

9. The polynomial x³ + 2x² + 2x + 1 can be factored into linear factors in Z7[x]. Find this factorization.

10. (a) Let F be a field and f(x), g(x) ∈ F[x]. Show that f(x) divides g(x) if and only if g(x) ∈ ⟨f(x)⟩.

(b) Show that the polynomial x⁴ + 2 is irreducible over Q.

- Prof. Y. Venkateswara Reddy



LESSON - 5
VECTOR SPACES
5.1 Objective of the Lesson:
To learn the definitions of a vector space and a subspace, some examples of vector spaces, the algebra of subspaces and related theorems.

5.2 Structure
5.3 Introduction

5.4 Definition and Properties of a vector space

5.5 Vector Subspaces

5.6 Linear sum of subspaces and Linear span of a set.

5.7 Linear dependence and Independence of Vectors

5.8 Summary

5.9 Technical Terms

5.10 Model Questions

5.11 Exercises

5.3 Introduction:
In this lesson we introduce vector space, subspace, linear sum of subspaces, linear span
of a set, linear combination of vectors, linearly dependent and independent vectors.

5.4 Vector Spaces:

Let F be a field. A vector space over F is an additive abelian group V together with a function F × V → V, (a, α) ↦ aα, such that

i) a(α + β) = aα + aβ

ii) (a + b)α = aα + bα

iii) a(bα) = (ab)α

iv) 1α = α

for all α, β ∈ V and a, b ∈ F, where 1 is the unity element of F.

5.4.4 Note: 1) If V is a vector space over F, we write "V(F) is a vector space". If the field is understood we simply say V is a vector space.

2. Elements of V are called vectors and elements of F are called scalars.

3. The internal composition + in V is called addition and the external composition · is called scalar multiplication.
5.4.5 Example: 1. Let F be a field and K be a subfield of F; then F is a vector space over K.

Solution: Suppose F is a field and K a subfield of F.

F is a field ⇒ (F, +) is an abelian group.

Let a ∈ K and α ∈ F ⇒ a ∈ F and α ∈ F, since K ⊆ F

⇒ aα ∈ F, since F is a field.

∴ · is an external composition in F over K.

Let a, b ∈ K and α, β ∈ F ⇒ a, b ∈ F and α, β ∈ F. Then

a(α + β) = aα + aβ, by the distributive law

(a + b)α = aα + bα, by the distributive law

a(bα) = (ab)α, by the associative law

1α = α, since 1 is the unity element of F.

∴ F is a vector space over the subfield K.

5.4.6 Note: Every field F is a subfield of itself.

∴ Every field is a vector space over itself.

5.4.7 Example: Let F be a field and Vₙ = {(a₁, a₂, ..., aₙ) | aᵢ ∈ F for 1 ≤ i ≤ n} be the set of n-tuples. Vₙ is a vector space over F with respect to the addition '+' and scalar multiplication '·' defined by

(a₁, a₂, ..., aₙ) + (b₁, b₂, ..., bₙ) = (a₁ + b₁, ..., aₙ + bₙ)

a(a₁, a₂, ..., aₙ) = (aa₁, aa₂, ..., aaₙ) for (a₁, a₂, ..., aₙ), (b₁, b₂, ..., bₙ) ∈ Vₙ and a ∈ F.

Solution: Let F be a field and Vₙ = {(a₁, a₂, ..., aₙ) | aᵢ ∈ F, 1 ≤ i ≤ n}.

Let α = (a₁, a₂, ..., aₙ), β = (b₁, b₂, ..., bₙ), γ = (c₁, c₂, ..., cₙ) ∈ Vₙ, so that the aᵢ's, bᵢ's and cᵢ's ∈ F.

α + β = (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ) ∈ Vₙ, since aᵢ + bᵢ ∈ F for 1 ≤ i ≤ n.

∴ Vₙ is closed w.r.t. '+'.

(α + β) + γ = ((a₁, a₂, ..., aₙ) + (b₁, b₂, ..., bₙ)) + (c₁, c₂, ..., cₙ)
            = (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ) + (c₁, c₂, ..., cₙ)
            = ((a₁ + b₁) + c₁, (a₂ + b₂) + c₂, ..., (aₙ + bₙ) + cₙ)
            = (a₁ + (b₁ + c₁), a₂ + (b₂ + c₂), ..., aₙ + (bₙ + cₙ))
            = (a₁, a₂, ..., aₙ) + (b₁ + c₁, b₂ + c₂, ..., bₙ + cₙ)
            = α + (β + γ)

∴ Addition is associative.

α + β = (a₁, a₂, ..., aₙ) + (b₁, b₂, ..., bₙ)
      = (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ)
      = (b₁ + a₁, b₂ + a₂, ..., bₙ + aₙ), since the aᵢ's, bᵢ's ∈ F
      = (b₁, b₂, ..., bₙ) + (a₁, a₂, ..., aₙ)
      = β + α

∴ Addition is commutative.

We have 0 ∈ F. ∴ 0̄ = (0, 0, ..., 0) ∈ Vₙ.

Now α + 0̄ = (a₁, a₂, ..., aₙ) + (0, 0, ..., 0)
          = (a₁ + 0, a₂ + 0, ..., aₙ + 0)
          = (a₁, a₂, ..., aₙ)
          = α

∴ 0̄ is the additive identity of Vₙ.

α = (a₁, a₂, ..., aₙ) ∈ Vₙ ⇒ a₁, a₂, ..., aₙ ∈ F

⇒ −a₁, −a₂, ..., −aₙ ∈ F

⇒ (−a₁, −a₂, ..., −aₙ) ∈ Vₙ.

Now (a₁, a₂, ..., aₙ) + (−a₁, −a₂, ..., −aₙ)
  = (a₁ + (−a₁), a₂ + (−a₂), ..., aₙ + (−aₙ))
  = (0, 0, ..., 0)
  = 0̄

∴ (−a₁, −a₂, ..., −aₙ) = −α is the inverse of α.

Hence (Vₙ, +) is an abelian group.

a  a ( a1 , a2 ...an )  ( aa1 , aa2 ...aan )  Vn , Since aa1 , aa2 ...aan  F

Vn is closed under scalar multiplication.

Let a , b  F and   ( a1 , a2 ...an ),   (b1 , b2 ...bn )  Vn

a (   )  a (a1  b1 , a2  b2 ...an  bn )

  a ( a1  b1 ), a( a2  b2 )...a ( an  bn ) 

  aa1  ab1 , aa2  ab2 ...aan  abn 

  aa1 , aa2 ...aan )  (ab1 , ab2 ...abn 

 a (a1 , a2 ...an )  a (b1 , b2 ...bn )

 a  a 

(a + b)α = (a + b)(a1, a2, …, an)

= ((a + b)a1, (a + b)a2, …, (a + b)an)

= (aa1 + ba1, aa2 + ba2, …, aan + ban)

= (aa1, aa2, …, aan) + (ba1, ba2, …, ban)

= a(a1, a2, …, an) + b(a1, a2, …, an)

= aα + bα
a(bα) = a(ba1, ba2, …, ban)

= (a(ba1), a(ba2), …, a(ban))

= ((ab)a1, (ab)a2, …, (ab)an)

= (ab)(a1, a2, …, an)

= (ab)α

1α = 1(a1, a2, …, an)

= (1a1, 1a2, …, 1an)

= (a1, a2, …, an)

= α

∴ Vn(F) is a vector space.
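The componentwise verifications above can be spot-checked numerically. The following sketch (illustrative only; the helper names vadd and smul are ours, not the text's) treats triples of rational numbers as elements of V3(Q):

```python
from fractions import Fraction as F

def vadd(u, v):            # componentwise addition in Vn
    return tuple(a + b for a, b in zip(u, v))

def smul(c, u):            # scalar multiplication c·u
    return tuple(c * a for a in u)

a, b = F(2), F(-3)
alpha, beta, gamma = (F(1), F(2), F(3)), (F(4), F(5), F(6)), (F(7), F(8), F(9))
zero = (F(0),) * 3

# (Vn, +) is an abelian group
assert vadd(vadd(alpha, beta), gamma) == vadd(alpha, vadd(beta, gamma))
assert vadd(alpha, beta) == vadd(beta, alpha)
assert vadd(alpha, zero) == alpha
assert vadd(alpha, smul(F(-1), alpha)) == zero

# scalar-multiplication axioms
assert smul(a, vadd(alpha, beta)) == vadd(smul(a, alpha), smul(a, beta))
assert smul(a + b, alpha) == vadd(smul(a, alpha), smul(b, alpha))
assert smul(a, smul(b, alpha)) == smul(a * b, alpha)
assert smul(F(1), alpha) == alpha
```

Exact rationals are used so the equality checks are free of floating-point error.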

5.4.8 Elementary Properties of a vector space:

Theorem: Let V(F) be a vector space. Then for all α, β ∈ V and a, b ∈ F:

i) a0 = 0

ii) 0α = 0

iii) a(−α) = −(aα)

iv) (−a)α = −(aα)

v) (−a)(−α) = aα

vi) a(α − β) = aα − aβ

vii) (a − b)α = aα − bα

viii) aα = 0 ⇒ a = 0 or α = 0

Proof: Let V(F) be a vector space, α, β ∈ V and a, b ∈ F.

i) a0 = a(0 + 0) = a0 + a0

⇒ a0 + 0 = a0 + a0

⇒ 0 = a0 by left cancellation law.

ii) 0α = (0 + 0)α since 0 is the zero element of F

= 0α + 0α

⇒ 0α + 0 = 0α + 0α since 0 is the identity in V

⇒ 0 = 0α by left cancellation law.

iii) a(−α) + aα = a(−α + α) = a0 = 0 by (i)

∴ a(−α) = −(aα)

iv) (−a)α + aα = (−a + a)α = 0α = 0 by (ii)

∴ (−a)α = −(aα)

v) (−a)(−α) = −((−a)α) by (iii)

= −(−(aα)) by (iv)

= aα

vi) a(α − β) = a(α + (−β))

= aα + a(−β)

= aα + (−(aβ)) by (iii)

= aα − aβ

vii) (a − b)α = (a + (−b))α = aα + (−b)α

= aα + (−(bα)) by (iv)

= aα − bα

viii) Suppose aα = 0.

If a ≠ 0 then there exists a⁻¹ ∈ F such that aa⁻¹ = a⁻¹a = 1.

aα = 0 and a ≠ 0

⇒ a⁻¹(aα) = a⁻¹0

⇒ (a⁻¹a)α = 0

⇒ 1α = 0

⇒ α = 0

∴ aα = 0 ⇒ either a = 0 or α = 0.

5.4.9 Note: Let V(F) be a vector space. For a, b ∈ F and α, β ∈ V:

i) aα = bα and α ≠ 0 ⇒ a = b

ii) aα = aβ and a ≠ 0 ⇒ α = β

5.5. Vector Subspaces:


5.5.1 Definition: Let V(F) be a vector space and ∅ ≠ W ⊆ V.

W is said to be a subspace of V if W is an additive subgroup of V and aα ∈ W for all a ∈ F, α ∈ W.

5.5.2 Example: (i) Let V(F) be a vector space. Then W = {0}, where 0 is the additive identity of V, is a subspace of V.

(ii) The vector space V itself is a subspace of V.

These two subspaces are called the trivial subspaces of V.
5.5.3 Theorem: Necessary and sufficient conditions for a non-empty subset W of a vector space V(F) to be a subspace are

i) α, β ∈ W ⇒ α + β ∈ W

ii) a ∈ F, α ∈ W ⇒ aα ∈ W

Proof: Let V ( F ) be a vector space and W a non-empty subset of V..

Suppose W is a subspace of V.
⇒ W itself is a vector space.

⇒ W is a group w.r.to +.

∴ α, β ∈ W ⇒ α + β ∈ W.

Clearly a ∈ F and α ∈ W ⇒ aα ∈ W.

∴ The conditions (i) and (ii) are necessary.

Now suppose i) α, β ∈ W ⇒ α + β ∈ W and

ii) a ∈ F and α ∈ W ⇒ aα ∈ W.

By (ii), α ∈ W ⇒ (−1)α = −α ∈ W, so W is a non-empty subset of the abelian group (V, +) closed under addition and under taking inverses.

∴ From group theory we know that (W, +) is a subgroup of (V, +).

From condition (ii), W is also closed under scalar multiplication; the remaining vector space axioms hold in W because they hold in V, so W is a vector space by itself.

Hence W is a subspace of V(F).

5.5.4 Theorem: A necessary and sufficient condition for a non-empty subset W of a vector space V(F) to be a subspace of V(F) is: a, b ∈ F and α, β ∈ W ⇒ aα + bβ ∈ W.

Proof: Let V(F) be a vector space and W a non-empty subset of V.

Suppose W is a subspace of V(F).

⇒ W itself is a vector space over F.

⇒ W is closed under addition '+' and scalar multiplication '·'.

∴ a, b ∈ F and α, β ∈ W ⇒ aα ∈ W, bβ ∈ W

⇒ aα + bβ ∈ W

∴ The condition is necessary.

Now suppose the condition: a, b ∈ F and α, β ∈ W ⇒ aα + bβ ∈ W.

To prove that W is a subspace of V(F):

Let α, β ∈ W.

Since F is a field, 1 and −1 ∈ F.

⇒ By the condition, 1·α + (−1)β ∈ W

⇒ α − β ∈ W
Rings and Linear Algebra 5.9 Vector Spaces

⇒ (W, +) is a subgroup of (V, +), from group theory.

All elements of W are elements of V.

∴ (W, +) is an abelian group.

Let a ∈ F and α ∈ W.

⇒ By the condition, aα = aα + 0α ∈ W, since 0 ∈ F.

∴ W is closed under scalar multiplication.

Since elements of W are elements of V, we have

a(α + β) = aα + aβ

(a + b)α = aα + bα

a(bα) = (ab)α

1·α = α for all a, b ∈ F and α, β ∈ W.

∴ W is a vector space over F.

Hence W is a subspace of V(F).

∴ The condition is sufficient.
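The single closure test aα + bβ ∈ W can be checked mechanically, at least on samples. A hedged sketch, taking as candidate the plane W = {(x, y, z) ∈ Q³ | x + y + z = 0}; the helpers in_W and comb are illustrative, not from the text:

```python
from fractions import Fraction as F
from itertools import product

def in_W(v):                       # membership test for W: x + y + z = 0
    return sum(v) == 0

def comb(a, u, b, v):              # the combination aα + bβ
    return tuple(a * x + b * y for x, y in zip(u, v))

W_samples = [(F(1), F(-1), F(0)), (F(2), F(3), F(-5)), (F(0), F(0), F(0))]
scalars = [F(-2), F(0), F(1), F(7)]

# every sampled combination aα + bβ stays inside W, as the criterion demands
for a, b in product(scalars, repeat=2):
    for u in W_samples:
        for v in W_samples:
            assert in_W(comb(a, u, b, v))
```

A finite sample cannot prove the criterion, of course; the proof above does that, and the code only illustrates it.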


5.5.5 Algebra of Subspaces:
Theorem: The intersection of any two subspaces of a vector space is a subspace.

Proof: Let V(F) be a vector space and W1, W2 be two subspaces of V.

To prove that W1 ∩ W2 is a subspace:

Take W = W1 ∩ W2.

Clearly 0 ∈ W1 and 0 ∈ W2 ⇒ 0 ∈ W1 ∩ W2 = W.

∴ W is a non-empty subset of V.

Let a, b ∈ F and α, β ∈ W.

α, β ∈ W ⇒ α, β ∈ W1 and α, β ∈ W2.

a, b ∈ F, α, β ∈ W1 and W1 is a subspace of V

⇒ aα + bβ ∈ W1 ........... (1)

a, b ∈ F, α, β ∈ W2 and W2 is a subspace of V

⇒ aα + bβ ∈ W2 ........... (2)

From (1) and (2), aα + bβ ∈ W1 ∩ W2 = W.

∴ a, b ∈ F, α, β ∈ W ⇒ aα + bβ ∈ W.

∴ W is a subspace of V, by 5.5.4.


5.5.6 Note: 1. Intersection of any family of subspaces of a vector space is a subspace.
2. Union of two subspaces of a vector space need not be a subspace.
Example: We know that the set of real numbers R is a field.

∴ R³ = V3(R) = {(a1, a2, a3) | a1, a2, a3 ∈ R} is a vector space by 5.4.7.

Let W1 = {(0, x, 0) | x ∈ R} and W2 = {(0, 0, y) | y ∈ R}.

Clearly W1 and W2 are non-empty subsets of V3(R).

Let a, b ∈ R and α = (0, x1, 0), β = (0, x2, 0) ∈ W1.

Then aα + bβ = a(0, x1, 0) + b(0, x2, 0)

= (0, ax1, 0) + (0, bx2, 0)

= (0, ax1 + bx2, 0)

⇒ aα + bβ ∈ W1 since a, b, x1, x2 ∈ R ⇒ ax1 + bx2 ∈ R.

Similarly a, b ∈ R and α = (0, 0, y1), β = (0, 0, y2) ∈ W2

⇒ aα + bβ = a(0, 0, y1) + b(0, 0, y2)

= (0, 0, ay1) + (0, 0, by2)

= (0, 0, ay1 + by2)

⇒ aα + bβ ∈ W2 since a, b, y1, y2 ∈ R ⇒ ay1 + by2 ∈ R.

Hence W1 and W2 are subspaces of V3(R).

Now α = (0, 2, 0) ∈ W1 and β = (0, 0, 3) ∈ W2

⇒ α + β = (0, 2, 3) ∉ W1 ∪ W2.

∴ W1 ∪ W2 need not be a subspace.

5.5.7 Theorem: Union of two subspaces of a vector space is a subspace iff one is contained in
the other.

Proof: Let W1 and W2 be two subspaces of a vector space V(F).

Suppose W1 ∪ W2 is a subspace of V(F).

To prove that either W1 ⊆ W2 or W2 ⊆ W1:

If possible, assume that W1 ⊄ W2 and W2 ⊄ W1.

W1 ⊄ W2 ⇒ there is an α ∈ W1 with α ∉ W2 ....... (1)

W2 ⊄ W1 ⇒ there is a β ∈ W2 with β ∉ W1 ....... (2)

α ∈ W1 ⇒ α ∈ W1 ∪ W2

β ∈ W2 ⇒ β ∈ W1 ∪ W2

⇒ α + β ∈ W1 ∪ W2 since W1 ∪ W2 is a subspace.

⇒ α + β ∈ W1 or α + β ∈ W2.

If α + β ∈ W1 then (α + β) − α ∈ W1 since α ∈ W1

⇒ β ∈ W1,

a contradiction to (2).

If α + β ∈ W2 then (α + β) − β ∈ W2 since β ∈ W2

⇒ α ∈ W2,

a contradiction to (1).

∴ α + β ∉ W1 and α + β ∉ W2

⇒ α + β ∉ W1 ∪ W2, a contradiction to α + β ∈ W1 ∪ W2.

Hence either W1 ⊆ W2 or W2 ⊆ W1.

Now suppose either W1 ⊆ W2 or W2 ⊆ W1.

If W1 ⊆ W2 then W1 ∪ W2 = W2.

If W2 ⊆ W1 then W1 ∪ W2 = W1.

∴ W1 ∪ W2 is a subspace.

5.6.1 Linear sum of two subspaces:

Definition: Let V(F) be a vector space and W1, W2 be two subspaces of V(F). The set

{α1 + α2 | α1 ∈ W1, α2 ∈ W2} is called the linear sum of W1 and W2 and is denoted by W1 + W2.

5.6.2 Theorem: If W1 and W2 are two subspaces of a vector space V(F) then W1 + W2 is a

subspace of V. Also W1 ∪ W2 ⊆ W1 + W2.

Proof: Let V(F) be a vector space and W1, W2 be two subspaces of V.

W1 + W2 = {α1 + α2 | α1 ∈ W1, α2 ∈ W2}

Clearly 0 ∈ W1 and 0 ∈ W2

⇒ 0 = 0 + 0 ∈ W1 + W2

∴ W1 + W2 is a non-empty subset of V.

Suppose a, b ∈ F and α, β ∈ W1 + W2.

α ∈ W1 + W2 ⇒ α = α1 + α2 where α1 ∈ W1 and α2 ∈ W2.

β ∈ W1 + W2 ⇒ β = β1 + β2 where β1 ∈ W1 and β2 ∈ W2.

Now aα + bβ = a(α1 + α2) + b(β1 + β2)

= (aα1 + aα2) + (bβ1 + bβ2)

= (aα1 + bβ1) + (aα2 + bβ2)

a, b ∈ F and α1, β1 ∈ W1 ⇒ aα1 + bβ1 ∈ W1 since W1 is a subspace.

a, b ∈ F and α2, β2 ∈ W2 ⇒ aα2 + bβ2 ∈ W2 since W2 is a subspace.

∴ aα + bβ = (aα1 + bβ1) + (aα2 + bβ2) ∈ W1 + W2.

Hence W1 + W2 is a subspace of V.

Let α1 ∈ W1 ⇒ α1 = α1 + 0 ∈ W1 + W2

∴ α1 ∈ W1 + W2

∴ W1 ⊆ W1 + W2 ........... (1)

α2 ∈ W2 ⇒ α2 = 0 + α2 ∈ W1 + W2

∴ α2 ∈ W1 + W2

∴ W2 ⊆ W1 + W2 ........... (2)

From (1) and (2), W1 ∪ W2 ⊆ W1 + W2.

5.6.3 Linear Combination: Let V(F) be a vector space and α1, α2, …, αn ∈ V. If a1, a2, …, an ∈ F

then the vector a1α1 + a2α2 + … + anαn is called a linear combination of α1, α2, …, αn.

5.6.4 Linear Span of a set: Let V(F) be a vector space and S a non-empty subset of V. The set
of all linear combinations of elements of all possible finite subsets of S is said to be linear span of
S.
It is denoted by L (S).
5.6.5 Note: 1. If S is a non-empty subset of a vector space V then

L(S) = {a1α1 + a2α2 + … + anαn | ai ∈ F and αi ∈ S for 1 ≤ i ≤ n}

2. S is a subset of L (S).
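Deciding whether a given vector lies in L(S) is a linear-system question, so it can be settled by comparing ranks: α ∈ L(S) iff appending α to S leaves the rank unchanged. A hedged Python sketch over Q (the rank helper is a plain Gaussian elimination, not part of the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of row vectors, by Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def in_span(S, alpha):
    return rank(S) == rank(S + [alpha])

S = [[1, 2, 1], [3, 1, 5]]
assert in_span(S, [4, 3, 6])        # (1,2,1) + (3,1,5) is in L(S)
assert not in_span(S, [0, 0, 1])    # (0,0,1) is not
```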
5.6.6. Theorem: The linear span L (S) of any subset S of a vector space V is a subspace of V.
Proof: Let V(F) be a vector space and S a non-empty subset of V.

L(S) is the linear span of S.

Clearly, by the definition of linear span, L(S) is a non-empty subset of V.

Let α, β ∈ L(S) and a, b ∈ F.

α ∈ L(S) ⇒ α = a1α1 + a2α2 + … + amαm where ai's ∈ F and αi's ∈ S.

β ∈ L(S) ⇒ β = b1β1 + b2β2 + … + bnβn where bi's ∈ F and βi's ∈ S.

Now aα + bβ = a(a1α1 + a2α2 + … + amαm) + b(b1β1 + b2β2 + … + bnβn)

= (aa1)α1 + (aa2)α2 + … + (aam)αm + (bb1)β1 + (bb2)β2 + … + (bbn)βn

i.e. aα + bβ is a linear combination of elements of S.

∴ aα + bβ ∈ L(S)

Hence L(S) is a subspace of V.
5.6.7 Theorem: If S is a non-empty sub-set of a vector space V(F) then linear span of S is the
intersection of all sub-spaces of V which contain S.

Proof: Let V(F) be a vector space and S a non-empty subset of V.

Suppose W is a subspace of V and S ⊆ W.

Let α ∈ L(S)

⇒ α = a1α1 + a2α2 + … + anαn where ai's ∈ F and αi's ∈ S.

α1, α2, …, αn ∈ S ⇒ α1, α2, …, αn ∈ W

⇒ a1α1 + a2α2 + … + anαn ∈ W, since W is a subspace and hence closed under addition and scalar multiplication.

∴ α ∈ L(S) ⇒ α ∈ W

i.e. L(S) ⊆ W

∴ L(S) is a subset of every subspace which contains S.

⇒ L(S) ⊆ intersection of all subspaces of V which contain S .......... (1)

We know that L(S) is a subspace and S ⊆ L(S), by theorems 5.6.6 and 5.6.5.

⇒ Intersection of all subspaces of V which contain S ⊆ L(S) ........ (2)

From (1) and (2), L(S) = intersection of all subspaces of V which contain S.

5.6.8 Note: If S is a non-empty subset of a vector space V(F) then L(S) is the smallest

subspace of V containing S.

5.6.9 Theorem: If S is a non-empty subset of a vector space V(F) then

i) S is a subspace of V ⇔ L(S) = S

ii) L(L(S)) = L(S)

Proof: Let V(F) be a vector space and S a non-empty subset of V.

i) Suppose S is a subspace of V.

Let α ∈ L(S)

⇒ α = a1α1 + a2α2 + … + anαn where ai's ∈ F and αi's ∈ S.

S is a subspace ⇒ it is closed under addition and scalar multiplication.

⇒ α ∈ S

∴ α ∈ L(S) ⇒ α ∈ S

∴ L(S) ⊆ S .......... (1)

Let α ∈ S ⇒ α = 1·α ∈ L(S)

∴ S ⊆ L(S) ........ (2)

Hence L(S) = S, from (1) and (2).

Now suppose L(S) = S.

By theorem 5.6.6, L(S) is a subspace of V.

∴ S is a subspace of V.

ii) We know that L(S) is a subspace of V by 5.6.6.

∴ By (i), L(L(S)) = L(S).
5.6.10 Theorem: If S and T are two subsets of a vector space V(F) then

(i) S ⊆ T ⇒ L(S) ⊆ L(T)

(ii) L(S ∪ T) = L(S) + L(T)

Proof: Let V(F) be a vector space and S, T two subsets of V.

i) Suppose S ⊆ T.

Let α ∈ L(S)

⇒ α = a1α1 + a2α2 + … + anαn where ai's ∈ F and αi's ∈ S,

i.e. α is a linear combination of a finite subset of S.

⇒ α is a linear combination of a finite subset of T, since S ⊆ T.

⇒ α ∈ L(T)

∴ α ∈ L(S) ⇒ α ∈ L(T)

∴ L(S) ⊆ L(T)

ii) Let α ∈ L(S ∪ T)

⇒ α = a1α1 + a2α2 + … + anαn + b1β1 + b2β2 + … + bmβm

where a1, a2, …, an, b1, b2, …, bm ∈ F and αi's ∈ S, βj's ∈ T.

⇒ α = (a L.C. of elements of S) + (a L.C. of elements of T)

⇒ α = an element of L(S) + an element of L(T)

⇒ α ∈ L(S) + L(T) .............. (1)

Now suppose α ∈ L(S) + L(T)

⇒ α = β + γ where β ∈ L(S) and γ ∈ L(T).

β ∈ L(S) ⇒ β is a L.C. of finitely many elements of S.

γ ∈ L(T) ⇒ γ is a L.C. of finitely many elements of T.

⇒ α is a L.C. of finitely many elements of S ∪ T.

⇒ α ∈ L(S ∪ T)

∴ L(S) + L(T) ⊆ L(S ∪ T) .............. (2)

From (1) and (2), L(S ∪ T) = L(S) + L(T).

5.6.11 Theorem: If W1 and W2 are two subspaces of a vector space V(F) then

L(W1 ∪ W2) = W1 + W2.

Proof: Let V(F) be a vector space and W1, W2 be two subspaces of V.

We know that W1 + W2 is a subspace of V containing W1 ∪ W2, by theorem 5.6.2.

Also L(W1 ∪ W2) is the smallest subspace of V containing W1 ∪ W2.

∴ L(W1 ∪ W2) ⊆ W1 + W2 ....... (1)

Let α ∈ W1 + W2

⇒ α = β + γ where β ∈ W1 and γ ∈ W2,

i.e. α is a L.C. of finitely many elements of W1 ∪ W2.

⇒ α ∈ L(W1 ∪ W2)

∴ W1 + W2 ⊆ L(W1 ∪ W2) ............ (2)

From (1) and (2), L(W1 ∪ W2) = W1 + W2.

5.7 Linearly dependent and independent vectors :

5.7.1 Definition: Linearly dependent vectors: A finite subset {α1, α2, …, αn} of vectors of a vector
space V(F) is said to be linearly dependent if there exist scalars a1, a2, …, an ∈ F, not all zero,

such that a1α1 + a2α2 + … + anαn = 0.

5.7.2 Linearly Independent Vectors: A finite subset {α1, α2, …, αn} of vectors of a vector space
V(F) is said to be linearly independent if it is not linearly dependent.

5.7.3 Note: A finite subset {α1, α2, …, αn} of vectors of a vector space V(F) is linearly independent iff every relation of the form a1α1 + a2α2 + … + anαn = 0, where ai's ∈ F, implies a1 = a2 = … = an = 0.
5.7.4 Note: A set of vectors which contains the zero vector is linearly dependent (L.D.).

Solution: Let {α1, α2, …, αn} be a finite set of vectors of a vector space V(F).

Suppose α1 = 0.

Then 1·α1 + 0α2 + 0α3 + … + 0αn = 0.

∴ We have scalars a1, a2, …, an, not all zero (since a1 = 1 ≠ 0), such that a1α1 + a2α2 + … + anαn = 0.

∴ {α1, α2, …, αn} is linearly dependent.

i.e. a set of vectors containing at least one zero vector is L.D.


5.7.5 Note: A single non-zero vector forms a L.I. set.

Solution: Suppose {α1}, where α1 ≠ 0, is a subset of a vector space V(F).

If aα1 = 0 where a ∈ F, then a = 0, since α1 ≠ 0 (by 5.4.8 (viii)).

∴ aα1 = 0 ⇒ a = 0.

∴ A single non-zero vector forms a L.I. set.


5.7.6 Theorem: Every superset of a linearly dependent set is linearly dependent.

Proof: Let V(F) be a vector space and S = {α1, α2, …, αn} a linearly dependent set.

⇒ There exist scalars a1, a2, …, an ∈ F, not all zero, such that a1α1 + a2α2 + … + anαn = 0 .....(1)

Let S¹ = {α1, α2, …, αn, β1, β2, …, βm} be a superset of S (i.e. S ⊆ S¹).

Now a1α1 + a2α2 + … + anαn + 0β1 + 0β2 + … + 0βm = 0 ............. (2), from (1).

In the above relation (2) the scalars are not all zero.

∴ S¹ is L.D.
5.7.7 Theorem: Every non-empty subset of a linearly independent set is linearly independent.

Proof: Let V(F) be a vector space and S = {α1, α2, …, αn} a linearly independent set.

Let S¹ = {α1, α2, …, αk}, 1 ≤ k ≤ n, be a subset of S.

Suppose a1α1 + a2α2 + … + akαk = 0 where a1, a2, …, ak ∈ F.

⇒ a1α1 + a2α2 + … + akαk + 0αk+1 + 0αk+2 + … + 0αn = 0

⇒ a1 = a2 = … = ak = 0 since S is L.I.

∴ S¹ = {α1, α2, …, αk} is L.I.

5.7.8 Theorem: Let V(F) be a vector space. A finite subset of non-zero vectors

S = {α1, α2, …, αn} of V(F) is linearly dependent iff some vector αk, 2 ≤ k ≤ n, can be expressed as a
linear combination of the vectors which precede it.

Proof: Let V(F) be a vector space and S = {α1, α2, …, αn} a finite subset of non-zero vectors of
V(F).

Suppose S is linearly dependent.

⇒ There exist scalars a1, a2, …, an ∈ F, not all zero, such that a1α1 + a2α2 + … + anαn = 0 ....... (1)

Suppose k is the greatest suffix such that ak ≠ 0.

If k = 1 then a1α1 = 0

⇒ α1 = 0 since a1 ≠ 0,

a contradiction since S is a set of nonzero vectors.

∴ 2 ≤ k ≤ n.

From (1), a1α1 + a2α2 + … + anαn = 0

⇒ a1α1 + a2α2 + … + akαk = 0 since ai = 0 for i > k

⇒ akαk = −a1α1 − a2α2 − … − ak−1αk−1

⇒ αk = (−ak⁻¹a1)α1 + (−ak⁻¹a2)α2 + … + (−ak⁻¹ak−1)αk−1

∴ αk is a L.C. of its preceding vectors.

Now suppose some vector αk, 2 ≤ k ≤ n, is a L.C. of its preceding vectors.

αk = b1α1 + b2α2 + … + bk−1αk−1 where b1, b2, …, bk−1 ∈ F

⇒ b1α1 + b2α2 + … + bk−1αk−1 + (−1)αk = 0

⇒ b1α1 + b2α2 + … + bk−1αk−1 + (−1)αk + 0αk+1 + … + 0αn = 0

Since −1 is a non-zero scalar, S = {α1, α2, …, αn} is L.D.

5.7.9 Theorem: Let V(F) be a vector space and S = {α1, α2, …, αn} a subset of V. If αi ∈ S is
a linear combination of its preceding vectors then L(S) = L(S¹) where

S¹ = {α1, α2, …, αi−1, αi+1, …, αn}.

Proof: Let V(F) be a vector space and S = {α1, α2, …, αn} a subset of V.

Suppose αi ∈ S is a linear combination of its preceding vectors,

i.e. αi = a1α1 + a2α2 + … + ai−1αi−1 where a1, a2, …, ai−1 ∈ F.

To prove that L(S) = L(S¹):

Clearly S¹ ⊆ S ⇒ L(S¹) ⊆ L(S) ................ (1)

Let α ∈ L(S)

⇒ α = b1α1 + b2α2 + … + bi−1αi−1 + biαi + bi+1αi+1 + … + bnαn where b1, b2, …, bn ∈ F.

But αi = a1α1 + a2α2 + … + ai−1αi−1

⇒ α = b1α1 + … + bi−1αi−1 + bi(a1α1 + a2α2 + … + ai−1αi−1) + bi+1αi+1 + … + bnαn

= (b1 + bia1)α1 + (b2 + bia2)α2 + … + (bi−1 + biai−1)αi−1 + bi+1αi+1 + … + bnαn

= a L.C. of elements of S¹.

⇒ α ∈ L(S¹)

∴ α ∈ L(S) ⇒ α ∈ L(S¹)

∴ L(S) ⊆ L(S¹) ........... (2)

Hence L(S) = L(S¹), from (1) and (2).

5.8 Summary:
In this lesson we learnt the definitions of vector space, subspace, linear sum, linear span, and
linearly dependent and linearly independent vectors. We proved theorems relating to the algebra of
subspaces, the linear sum of subspaces and the linear span of a set. We discussed the concepts of linear
combination and linear dependence and independence of vectors.

5.9. Technical Terms:


i) Internal composition
ii) External composition
iii) Vector space
iv) Vector subspace
v) Linear sum of subspaces
vi) Linear combination
vii) Linear span
viii) Linearly dependent vectors
ix) Linearly independent vectors.

5.10 Model Examination Questions:


1. The set of all m x n matrices with real entries is a vector space over the real field with respect to
addition of matrices and scalar multiplication of a matrix.
Solution: Let M = set of all m × n matrices with real entries.
i) We know that the sum of two m × n matrices is again an m × n matrix.

∴ A, B ∈ M ⇒ A + B ∈ M.
∴ M is closed w.r.to addition +.
ii) We know that addition of matrices is associative.

∴ (A + B) + C = A + (B + C) ∀ A, B, C ∈ M.

iii) Clearly the null matrix O of order m × n ∈ M and A + O = O + A = A ∀ A ∈ M.

∴ The null matrix O is the additive identity.

iv) A ∈ M ⇒ −A ∈ M and A + (−A) = (−A) + A = O.

∴ −A is the additive inverse of A.

v) Addition of matrices is commutative.

∴ A + B = B + A ∀ A, B ∈ M.

∴ (M, +) is an abelian group.

vi) Let a ∈ R and A ∈ M.

⇒ aA ∈ M, where aA = (a rij) if A = (rij).

∴ M is closed w.r.to scalar multiplication.

vii) a(A + B) = aA + aB for all a ∈ R and A, B ∈ M.

viii) (a + b)A = aA + bA for all a, b ∈ R and A ∈ M.

ix) a(bA) = (ab)A for all a, b ∈ R and A ∈ M.

x) 1A = A.

∴ M(R) is a vector space.
2. The set of all real valued continuous functions defined on the open interval (0, 1) is a vector space
over the field of real numbers with respect to addition and scalar multiplication defined by
(f + g)(x) = f(x) + g(x) and (af)(x) = a f(x), where a ∈ R and 0 < x < 1.

Solution: Let S = {f | f : (0, 1) → R is continuous}.

i) We know that the sum of two continuous functions is continuous.

∴ f, g ∈ S ⇒ f + g ∈ S.

∴ S is closed w.r.to addition of functions.

ii) Let f, g, h ∈ S.

((f + g) + h)(x) = (f + g)(x) + h(x) = (f(x) + g(x)) + h(x)

= f(x) + (g(x) + h(x))

= f(x) + (g + h)(x)

= (f + (g + h))(x)

∴ Addition of functions is associative.

iii) 0 : (0, 1) → R defined by 0(x) = 0 ∀ x ∈ (0, 1) is a constant function and is continuous.

∴ 0 ∈ S.
(f + 0)(x) = f(x) + 0(x) = f(x) + 0 = f(x)

∴ 0 is the additive identity.

iv) f : (0, 1) → R ⇒ −f : (0, 1) → R.

f is continuous ⇒ −f is continuous.

∴ −f ∈ S.

f + (−f) = 0

∴ −f is the additive inverse of f.

v) For f, g ∈ S we have (f + g)(x) = f(x) + g(x)

= g(x) + f(x)

= (g + f)(x)

∴ Addition is commutative.
Hence (S, +) is an abelian group.

vi) f ∈ S and a ∈ R ⇒ af ∈ S.

∴ S is closed under scalar multiplication.

vii) f, g ∈ S and a ∈ R ⇒

(a(f + g))(x) = a(f + g)(x) = a(f(x) + g(x))
= a f(x) + a g(x)

= (af)(x) + (ag)(x)

= (af + ag)(x)

∴ a(f + g) = af + ag.

viii) ((a + b)f)(x) = (a + b) f(x) = a f(x) + b f(x)

= (af)(x) + (bf)(x)

= (af + bf)(x)

∴ (a + b)f = af + bf ∀ a, b ∈ R and f ∈ S.

ix) ((ab)f)(x) = (ab) f(x) = a(b f(x)) = a(bf)(x)

= (a(bf))(x)

∴ (ab)f = a(bf).

x) For 1 ∈ R, (1f)(x) = 1·f(x) = f(x)

∴ 1f = f.

∴ S is a vector space.

3. Let V3(F) be a vector space. Then W = {(x, y, 0) | x, y ∈ F} is a subspace of V3(F).

Solution: Let F be a field and V3(F) = {(a, b, c) | a, b, c ∈ F} the vector space of ordered triads.

W = {(x, y, 0) | x, y ∈ F}

Clearly W is a non-empty subset of V3(F).

Let α, β ∈ W and a, b ∈ F.

α, β ∈ W ⇒ α = (x1, y1, 0) and β = (x2, y2, 0) where x1, y1, x2, y2 ∈ F.

aα + bβ = a(x1, y1, 0) + b(x2, y2, 0)

= (ax1, ay1, 0) + (bx2, by2, 0)

= (ax1 + bx2, ay1 + by2, 0 + 0)

= (ax1 + bx2, ay1 + by2, 0)

⇒ aα + bβ ∈ W since ax1 + bx2, ay1 + by2 ∈ F.

∴ W is a subspace of V3(F).

4. Let W = {(x, 3x, 3x − 2) | x ∈ R}. Show that W is not a subspace of V3(R).

Solution: Clearly α = (x, 3x, 3x − 2), β = (y, 3y, 3y − 2) ∈ W.

But α + β = (x + y, 3(x + y), 3(x + y) − 4) ∉ W.

∴ W is not a subspace of V3(R).



5. Express the vector α = (1, −2, 5) as a linear combination of the vectors

e1 = (1, 1, 1), e2 = (1, 2, 3), e3 = (2, −1, 1).

Solution: Suppose α = ae1 + be2 + ce3 where a, b, c ∈ R.

Then (1, −2, 5) = a(1, 1, 1) + b(1, 2, 3) + c(2, −1, 1)

= (a, a, a) + (b, 2b, 3b) + (2c, −c, c)

= (a + b + 2c, a + 2b − c, a + 3b + c)

⇒ a + b + 2c = 1 .......... (1)

a + 2b − c = −2 .......... (2)

a + 3b + c = 5 .......... (3)

Solving (1), (2) and (3):

(1) + 2 × (2): a + b + 2c = 1

2a + 4b − 2c = −4

3a + 5b = −3 ......... (4)

(2) + (3): 2a + 5b = 3 .......... (5)

(4) − (5): a = −6
From (5): b = 3
From (1): c = 2
∴ (1, −2, 5) = −6(1, 1, 1) + 3(1, 2, 3) + 2(2, −1, 1)

6. Show that the vectors (1, 2, 3), (1, 0, 0), (0, 1, 0) and (0, 0, 1) form a linearly dependent subset of
V3(R).

Solution: Suppose a(1, 2, 3) + b(1, 0, 0) + c(0, 1, 0) + d(0, 0, 1) = 0

⇒ (a, 2a, 3a) + (b, 0, 0) + (0, c, 0) + (0, 0, d) = (0, 0, 0)

⇒ (a + b, 2a + c, 3a + d) = (0, 0, 0)

⇒ a + b = 0 — (1); 2a + c = 0 — (2); 3a + d = 0 — (3)

⇒ b = −a, c = −2a, d = −3a

If a = K ≠ 0 then K(1, 2, 3) − K(1, 0, 0) − 2K(0, 1, 0) − 3K(0, 0, 1) = 0

⇒ (1, 2, 3) − (1, 0, 0) − 2(0, 1, 0) − 3(0, 0, 1) = 0

∴ There exist scalars, not all zero, such that a linear combination of the given vectors is
zero. Hence the given vectors are linearly dependent.

7. Show that the vectors (1, 0, −1), (1, 2, 1), (0, −3, 2) are linearly independent in V3(R).

Solution: Suppose a(1, 0, −1) + b(1, 2, 1) + c(0, −3, 2) = 0

⇒ (a, 0, −a) + (b, 2b, b) + (0, −3c, 2c) = (0, 0, 0)

⇒ (a + b, 2b − 3c, −a + b + 2c) = (0, 0, 0)

⇒ a + b = 0 — (1); 2b − 3c = 0 — (2); −a + b + 2c = 0 — (3)

(1) + (3) gives 2b + 2c = 0 — (4)

(2) − (4) gives −5c = 0 ⇒ c = 0

From (4): b = 0

From (1): a = 0

∴ a(1, 0, −1) + b(1, 2, 1) + c(0, −3, 2) = 0

⇒ a = b = c = 0
∴ The given vectors are linearly independent.
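Since a square matrix over a field has nonzero determinant exactly when its rows are linearly independent, the conclusion can be confirmed numerically (det3 is an illustrative 3×3 determinant helper, not part of the text):

```python
def det3(m):
    """Determinant of a 3×3 matrix, expanded along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

M = [[1, 0, -1], [1, 2, 1], [0, -3, 2]]   # the three vectors as rows
assert det3(M) == 10                       # nonzero ⇒ linearly independent
```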
8. If α, β, γ are linearly independent vectors in V(R), show that α + β, β + γ, γ + α are also linearly independent.

Solution: Suppose a(α + β) + b(β + γ) + c(γ + α) = 0

⇒ aα + aβ + bβ + bγ + cγ + cα = 0

⇒ (a + c)α + (a + b)β + (b + c)γ = 0

α, β, γ are linearly independent

⇒ a + c = 0, a + b = 0, b + c = 0
Solving, a = b = c = 0.

∴ α + β, β + γ, γ + α are also linearly independent.
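The coefficient system a + c = 0, a + b = 0, b + c = 0 has only the trivial solution because its matrix has nonzero determinant; a quick illustrative check:

```python
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# rows: coefficients of α, β, γ in a(α+β) + b(β+γ) + c(γ+α) = 0,
# i.e. the system a + c = 0, a + b = 0, b + c = 0
M = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]
assert det3(M) == 2        # nonzero ⇒ only a = b = c = 0
```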



5.11 Exercises:
(1) Prove that the set of all polynomials in an indeterminate x over a field F is a vector space.

(2) Show that {(a, b, c) | a, b, c ∈ R and a + b + 2c = 0} is a subspace of V3(R).

(3) Show that {(a, b, c) | a, b, c ∈ Q} is not a subspace of V3(R).

3 1
(4) Write the vector A    in the vector space of all 2 x 2 matrices as the linear combina-
1 2 
tion of the vectors.

1 1   1 1  1 1
0 1 ,  1 0 and 0 0 
     

(5) Show that the subspaces spanned by S = {α, β} and T = {α, β, γ} are the same in the

vector space V3(R) if α = (1, 2, 1), β = (3, 1, 5) and γ = (3, −4, 7).

(6) Show that (1, 2, 1), (3, 1, 5) in R³ are linearly independent.

(7) Show that (1, 3, 2), (1, −7, −8), (2, 1, −1) in R³ are linearly dependent.

(8) Prove that the set {1, x, x(1 − x)} is a linearly independent set of vectors in the space of all polynomials over the real field.

- Smt. K. Ruth
LESSON - 6
BASES AND DIMENSION
6.1 Objective of the Lesson:
To learn the definitions of basis and dimension of a vector space and of a quotient space,
and to study related theorems.

6.2 Structure
6.3 Introduction

6.4 Definition of basis of a vector space and its properties

6.5 Dimension of a vector space

6.6 Quotient space

6.7 Summary

6.8 Technical Terms

6.9 Model Examination Questions

6.10 Exercises

6.3 Introduction:
In this lesson we define basis and dimension of a vector space, Quotient space. We prove
some important theorems relating to dimension.

6.4.1 Definition: Basis of a vector space: A subset S of a vector space V(F) is said to be a

basis of V if
i) S is linearly independent

ii) S spans V, i.e. L(S) = V.

6.4.2 Example: The set S = {e1, e2, …, en} where e1 = (1, 0, 0, …, 0),

e2 = (0, 1, 0, …, 0), e3 = (0, 0, 1, 0, …, 0), …, en = (0, 0, …, 0, 1) is a basis of the vector space Vn(F).

Solution: Vn(F) = {(a1, a2, …, an) | a1, a2, …, an ∈ F} where F is a field.

First we show that S is L.I.

Suppose b1e1 + b2e2 + … + bnen = 0

⇒ b1(1, 0, …, 0) + b2(0, 1, 0, …, 0) + … + bn(0, 0, …, 0, 1) = 0

⇒ (b1, 0, …, 0) + (0, b2, 0, …, 0) + … + (0, 0, …, 0, bn) = 0

⇒ (b1, b2, …, bn) = (0, 0, …, 0)

⇒ b1 = b2 = … = bn = 0

∴ S is L.I.

Let α ∈ Vn(F)

⇒ α = (a1, a2, …, an)

Now α = a1e1 + a2e2 + … + anen

∴ S spans V, i.e. L(S) = Vn(F).

Hence S is a basis of Vn(F).

6.4.3 Note: The basis S = {e1, e2, …, en} is called the standard basis of Vn(F).

6.4.4 Definition: Finite Dimensional Vector Space: A vector space V(F) is said to be a finite
dimensional vector space if there is a finite subset S of V which spans V, i.e. L(S) = V.

6.4.5 Existence of Basis Theorem:

Every finite dimensional vector space has a basis.

Proof: Let V(F) be a finite dimensional vector space.

⇒ There is a finite subset S of V such that L(S) = V.

Let S = {α1, α2, …, αn}.

We may assume that 0 ∉ S.
If S is linearly independent then S itself is a basis of V.

If S is linearly dependent then there exists a vector αk, 2 ≤ k ≤ n, which is a linear combination of
its preceding vectors, by 5.7.8.

Take S1 = {α1, α2, …, αk−1, αk+1, …, αn}.

Now S1 ⊆ S and L(S1) = L(S) by 5.7.9.

∴ L(S1) = V since L(S) = V.

If S1 is linearly independent then S1 is a basis of V.

Suppose S1 is linearly dependent.

Proceeding as above we get a subset S2 of S with n − 2 elements and L(S2) = V.

Continuing the above process, after a finite number of steps we get a subset which is linearly independent and spans V.
∴ We get a basis of V.
Hence every finite dimensional vector space has a basis.
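The deletion argument in this proof is effectively an algorithm: scan the spanning list and keep a vector only when it is not a linear combination of those already kept. A hedged Python sketch over Q (rank is a plain Gaussian elimination helper, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of row vectors, by Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def extract_basis(spanning):
    """Keep each vector only if it enlarges the span of those kept so far."""
    basis = []
    for v in spanning:
        if rank(basis + [v]) > rank(basis):
            basis.append(v)
    return basis

S = [[1, 0, 0], [2, 0, 0], [0, 1, 0], [1, 1, 0]]
B = extract_basis(S)
assert B == [[1, 0, 0], [0, 1, 0]]     # dependent vectors are dropped
assert rank(B) == rank(S)              # same span, now independent
```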

6.4.6 Invariance Theorem: Let V(F) be a finite dimensional vector space. Then any two bases of V
have the same number of elements.

Proof: Let V(F) be a finite dimensional vector space.

Then V(F) has a basis, by 6.4.5.

Let S1 = {α1, α2, …, αm} and S2 = {β1, β2, …, βn} be two bases of V(F).

∴ S1 is L.I., L(S1) = V, and S2 is L.I., L(S2) = V.

β1 ∈ V ⇒ β1 is a linear combination of elements of S1.

∴ S3 = {β1, α1, α2, …, αm} is linearly dependent.

Also L(S3) = V.

∴ There exists a vector αi ∈ S3 which is a linear combination of the preceding vectors, and αi ≠ β1,
by 5.7.8.

Let S4 = {β1, α1, α2, …, αi−1, αi+1, αi+2, …, αm}.

Now L(S4) = V by 5.7.9.

∴ β2 is a linear combination of elements of S4.

∴ S5 = {β2, β1, α1, α2, …, αi−1, αi+1, …, αm} is L.D.

∴ There exists a vector αj ∈ S5 which is a linear combination of the preceding vectors, and
αj ≠ β1, αj ≠ β2.

Now S6 = {β1, β2, α1, α2, …, αi−1, αi+1, …, αj−1, αj+1, …, αm} generates V, i.e. L(S6) = V.

If we continue this process, at each step one α is excluded and a β is included in the set S1.
Obviously the set S1 of α's cannot be exhausted before the set S2 of β's; otherwise V(F) would
be the linear span of a proper subset of S2, and thus S2 would become linearly dependent.

∴ We must have n ≤ m.

Now, interchanging the roles of S1 and S2, we get m ≤ n.

∴ m = n, i.e. any two bases of a finite dimensional vector space have the same number of elements.
6.5.1 Definition: The number of elements in any basis of a finite dimensional vector space V ( F )
is called the dimension of the vector space V ( F ) and will be denoted by dim V..

6.5.2 Example: The dimension of the vector space Vn ( F ) is n.

6.5.3 Theorem: Every linearly independent subset of a finite dimensional vector space V ( F ) is
either a basis of V or can be extended to form a basis of V.

Proof: Let V(F) be a finite dimensional vector space and S = {α1, α2, …, αm} a linearly independent subset of V.

Suppose dim V = n.

∴ V has a basis B = {β1, β2, …, βn}.

Consider S1 = {α1, α2, …, αm, β1, β2, …, βn}.

Since B is a basis of V, each α can be expressed as a linear combination of the β's.

∴ S1 is linearly dependent.

⇒ One of the vectors of S1 is a linear combination of its preceding vectors.

This vector cannot be any of the α's, since the α's are linearly independent.

∴ That vector is one of the β's; let it be βi.

Let S2 = {α1, α2, …, αm, β1, β2, …, βi−1, βi+1, …, βn}.

Clearly L(S2) = V.

If S2 is linearly independent then S2 will be a basis of V which is the extended set of S.

If S2 is linearly dependent we repeat the above process a finite number of times and we get a linearly
independent set containing S and spanning V.
This set is a basis of V which is the extension of S.
∴ Every linearly independent subset of a finite dimensional vector space is either a basis or can
be extended to form a basis of V.

6.5.4 Corollary: Each subset of (n + 1) or more vectors of an n-dimensional vector space V(F)
is linearly dependent.

Proof: Let S be a subset of an n-dimensional vector space V(F).

Suppose S has (n + 1) or more vectors.

If S is linearly independent then either S is a basis of V or can be extended to form a basis of V, by
theorem 6.5.3.

Thus a basis of V would contain (n + 1) or more vectors,

which is a contradiction to the fact that dim V = n.

∴ S is linearly dependent.
6.5.5 Corollary: Any set of n linearly independent vectors of an n-dimensional vector space
V(F) forms a basis of V.

Proof: Let V(F) be an n-dimensional vector space and S be a linearly independent subset of V with
n vectors.

∴ S is a basis of V or can be extended to form a basis of V.

Since dim V = n, S must be a basis of V.

6.5.6 Corollary: Every set of n vectors of an n-dimensional vector space V(F) which generates
V is a basis of V.

Proof: Let V(F) be an n-dimensional vector space and S be a set of n vectors which generates V.
If S is linearly dependent then we get a proper subset of S which is a basis of V.

∴ We get a basis of V with fewer than n vectors, a contradiction to the fact that dim V = n.

∴ S is linearly independent.
Hence S is a basis of V.

Centre for Distance Education 6.6 Acharya Nagarjuna University

6.5.7 Theorem: If S = {α₁, α₂, ..., αₙ} is a basis of a finite dimensional vector space V(F) of
dimension n, then every element α ∈ V can be uniquely expressed as α = a₁α₁ + a₂α₂ + ... + aₙαₙ
where a₁, a₂, ..., aₙ ∈ F.

Proof: Let V(F) be a finite dimensional vector space with dim V = n.

S = {α₁, α₂, ..., αₙ} is a basis of V ⇒ L(S) = V.

∴ α ∈ V ⇒ α = a₁α₁ + a₂α₂ + ... + aₙαₙ where a₁, a₂, ..., aₙ ∈ F.

Suppose α = b₁α₁ + b₂α₂ + ... + bₙαₙ where b₁, b₂, ..., bₙ ∈ F.

⇒ a₁α₁ + a₂α₂ + ... + aₙαₙ = b₁α₁ + b₂α₂ + ... + bₙαₙ

⇒ (a₁ − b₁)α₁ + (a₂ − b₂)α₂ + ... + (aₙ − bₙ)αₙ = 0

⇒ a₁ − b₁ = a₂ − b₂ = ... = aₙ − bₙ = 0, since S is L.I.

⇒ a₁ = b₁, a₂ = b₂, ..., aₙ = bₙ.

Hence every vector α ∈ V can be uniquely expressed as α = a₁α₁ + a₂α₂ + ... + aₙαₙ
where a₁, a₂, ..., aₙ ∈ F.

6.5.8 Note: If B = {α₁, α₂, ..., αₙ} is a basis of a finite dimensional vector space V(F) then every
vector α ∈ V is uniquely expressed as α = a₁α₁ + a₂α₂ + ... + aₙαₙ where a₁, a₂, ..., aₙ ∈ F.
The scalars a₁, a₂, ..., aₙ are called the coordinates of α relative to the basis B.

6.5.9 Theorem: If W is a subspace of a finite dimensional vector space V(F) then W is also
finite dimensional and dim W ≤ dim V. Also V = W iff dim V = dim W.

Proof: Let V(F) be a finite dimensional vector space with dim V = n and W be a subspace of V.

dim V = n ⇒ every subset of (n + 1) or more vectors of V is L.D.

W is a subspace of V ⇒ every subset of (n + 1) or more vectors of W is L.D.

∴ Any set of L.I. vectors in W must contain at most n vectors.

Suppose S = {α₁, α₂, ..., αₘ} is the largest L.I. subset of W, where m ≤ n.
Now we prove that S is a basis of W.

Let  W

Now S 1   1 ,  2 ... m ,   is L.D. {Since S is the largest L.I. subset of W}

 There exists scalars a1 , a2 ...am  F . not all zero  a11  a2 2  ...  am m  a  0

If a  0 then a 1 1  a 2 2  ...  a m  m  0

 a1  a 2  ........  a m  0 since S is L.I.

 S1 is L.I.

A contradiction.
1
 a  0   a 1  F such that aa  1

 a11  a2 2  ...  am m  a  0 

  a 1a11  a 1a2 2 ........  a 1am m

 is L.C. of elements of S    L( S )

W  L ( S )

Hence S is a basis of W  dim W  m  n

W is also a finite dimensional vector space with dim W  dim V .


Now suppose V = W.

Then every basis of V is a basis of W.

∴ dim V = dim W.
Conversely suppose dim V = dim W = n.

Let S be a basis of W ⇒ S contains n vectors and L(S) = W.

S ⊆ W ⇒ S ⊆ V.

Now S is a linearly independent subset of the finite dimensional vector space V(F) with dim V = n.

∴ S is a basis of V by 6.5.5.
∴ L(S) = V.

∴ V = W.

6.5.10 Theorem: If W₁ and W₂ are two subspaces of a finite dimensional vector space V(F)
then dim(W₁ + W₂) = dim W₁ + dim W₂ − dim(W₁ ∩ W₂).

Proof: Let W₁ and W₂ be two subspaces of a finite dimensional vector space V(F).

W₁ + W₂ and W₁ ∩ W₂ are also subspaces of V(F) and hence they are finite dimensional by
6.5.9.

Let dim(W₁ ∩ W₂) = k and S = {γ₁, γ₂, ..., γₖ} be a basis of W₁ ∩ W₂ ⇒ S is L.I. and
W₁ ∩ W₂ = L(S)  (1).

Now S ⊆ W₁ and S ⊆ W₂.

Since S is a linearly independent subset of W₁,
S can be extended to form a basis of W₁ by 6.5.3.

Let S₁ = {γ₁, γ₂, ..., γₖ, α₁, α₂, ..., αₘ} be a basis of W₁.

∴ dim W₁ = k + m, W₁ = L(S₁) and S₁ is L.I. ......... (2)

S is also a linearly independent subset of W₂.

∴ S can be extended to form a basis of W₂ by 6.5.3.

Let S₂ = {γ₁, γ₂, ..., γₖ, β₁, β₂, ..., βₙ} be a basis of W₂.

∴ dim W₂ = k + n, W₂ = L(S₂) and S₂ is L.I. ..... (3)

Now we claim that B = {α₁, α₂, ..., αₘ, β₁, β₂, ..., βₙ, γ₁, γ₂, ..., γₖ} is a basis of W₁ + W₂.

To prove that B is L.I.:

Suppose a₁α₁ + ... + aₘαₘ + b₁β₁ + ... + bₙβₙ + c₁γ₁ + ... + cₖγₖ = 0 for
a₁, ..., aₘ, b₁, ..., bₙ, c₁, ..., cₖ ∈ F.

⇒ a₁α₁ + ... + aₘαₘ = −(b₁β₁ + ... + bₙβₙ + c₁γ₁ + ... + cₖγₖ)  (4)

Now −(b₁β₁ + ... + bₙβₙ + c₁γ₁ + ... + cₖγₖ) ∈ W₂ and a₁α₁ + ... + aₘαₘ ∈ W₁.

∴ a₁α₁ + ... + aₘαₘ ∈ W₁ ∩ W₂ = L(S) by (1)

⇒ a₁α₁ + ... + aₘαₘ = d₁γ₁ + d₂γ₂ + ... + dₖγₖ where the dᵢ's ∈ F

⇒ a₁α₁ + ... + aₘαₘ − d₁γ₁ − ... − dₖγₖ = 0

⇒ a₁ = a₂ = ... = aₘ = d₁ = ... = dₖ = 0, since S₁ is L.I.

⇒ a₁α₁ + ... + aₘαₘ = 0

⇒ b₁β₁ + ... + bₙβₙ + c₁γ₁ + ... + cₖγₖ = 0 from (4)

⇒ b₁ = ... = bₙ = c₁ = ... = cₖ = 0, since S₂ is L.I.

∴ a₁ = ... = aₘ = b₁ = ... = bₙ = c₁ = ... = cₖ = 0

⇒ B is linearly independent.

B spans W₁ + W₂:

Let α ∈ W₁ + W₂.

∴ α = β + γ where β ∈ W₁ and γ ∈ W₂.

β ∈ W₁ ⇒ β = a₁α₁ + ... + aₘαₘ + c₁γ₁ + ... + cₖγₖ
where a₁, ..., aₘ, c₁, ..., cₖ ∈ F, since W₁ = L(S₁).

γ ∈ W₂ ⇒ γ = b₁β₁ + ... + bₙβₙ + d₁γ₁ + ... + dₖγₖ
where b₁, ..., bₙ, d₁, ..., dₖ ∈ F, since W₂ = L(S₂).

Now α = a₁α₁ + ... + aₘαₘ + b₁β₁ + ... + bₙβₙ + (c₁ + d₁)γ₁ + ... + (cₖ + dₖ)γₖ
= L.C. of elements of B.

∴ α ∈ L(B) ⇒ B spans W₁ + W₂.

Hence B is a basis of W₁ + W₂.

∴ dim(W₁ + W₂) = m + n + k = (k + m) + (k + n) − k
= dim W₁ + dim W₂ − dim(W₁ ∩ W₂).
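The dimension formula just proved can be checked numerically: the dimension of a span equals the rank of the matrix whose rows are the spanning vectors, so dim(W₁ ∩ W₂) can be recovered as dim W₁ + dim W₂ − dim(W₁ + W₂). A minimal Python sketch (the two subspaces of ℝ⁴ chosen here are illustrative, not from the text):

```python
def rank(rows):
    """Rank of a matrix given as a list of rows, by Gaussian elimination."""
    m = [list(map(float, row)) for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        # find a pivot at or below row r in column c
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-9), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

W1 = [(1, 0, 0, 0), (0, 1, 0, 0)]        # dim W1 = 2
W2 = [(0, 1, 0, 0), (0, 0, 1, 0)]        # dim W2 = 2
dim_sum = rank(W1 + W2)                  # dim(W1 + W2) = 3
dim_int = rank(W1) + rank(W2) - dim_sum  # formula gives dim(W1 ∩ W2) = 1
```

Here W₁ ∩ W₂ is the line spanned by (0, 1, 0, 0), agreeing with the value the formula produces.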

6.6.1 Coset: Let W be a subspace of a vector space V(F) and α ∈ V. Then the set
{β + α : β ∈ W} is called the right coset of W in V generated by α. The set {α + β : β ∈ W} is called
the left coset of W in V generated by α.

6.6.2 Note: 1. {β + α : β ∈ W} is denoted by W + α and {α + β : β ∈ W} is denoted by α + W.

2. The cosets W + α are subsets of V.

3. W + α = α + W, since (V, +) is an abelian group.

∴ We say W + α is a coset of W generated by α.

6.6.3 Theorem: If W is a subspace of V(F) then

i) W = W + 0
ii) α ∈ W ⇒ W + α = W
iii) W + α = W + β ⇔ α − β ∈ W

Proof: i) 0 ∈ V and W + 0 = {α + 0 : α ∈ W}
= {α : α ∈ W}
= W.

ii) Suppose α ∈ W.

To prove that W + α ⊆ W:
Let β ∈ W + α ⇒ β = γ + α, γ ∈ W.

Now γ ∈ W, α ∈ W and W is a subspace

⇒ γ + α ∈ W

⇒ β ∈ W.

∴ W + α ⊆ W ............. (1)

To prove that W ⊆ W + α:
Let β ∈ W.

α ∈ W and W is a subspace ⇒ −α ∈ W.
β ∈ W, −α ∈ W ⇒ β − α ∈ W, since W is a subspace.

∴ β = (β − α) + α ∈ W + α.

∴ W ⊆ W + α .......... (2)

Hence W + α = W from (1) and (2).

iii) Suppose W + α = W + β.

Clearly 0 ∈ W since W is a subspace.

∴ 0 + α ∈ W + α = W + β

⇒ α ∈ W + β

⇒ α = γ + β where γ ∈ W

⇒ α − β = γ ∈ W.

∴ W + α = W + β ⇒ α − β ∈ W.

Conversely suppose α − β ∈ W.

⇒ W + (α − β) = W from ii).

⇒ (W + (α − β)) + β = W + β

⇒ W + α = W + β.

6.6.4 Theorem: If W is a subspace of a vector space V(F) then the set V/W of all cosets of W
in V is a vector space over F w.r.t. addition of cosets and scalar multiplication defined by
(W + α) + (W + β) = W + (α + β) and a(W + α) = W + aα, for all α, β ∈ V and a ∈ F.

Proof: Let W be a subspace of a vector space V(F) and V/W = {W + α : α ∈ V} be the set of all
cosets of W in V.
Addition is well defined:

Suppose W + α = W + γ and W + β = W + δ.

⇒ α − γ ∈ W and β − δ ∈ W by 6.6.3
⇒ (α − γ) + (β − δ) ∈ W since W is a subspace

⇒ (α + β) − (γ + δ) ∈ W

⇒ W + (α + β) = W + (γ + δ) by 6.6.3.
Scalar multiplication is well defined:

Suppose W + α = W + β.

⇒ α − β ∈ W by 6.6.3

⇒ a(α − β) ∈ W for a ∈ F

⇒ aα − aβ ∈ W

⇒ W + aα = W + aβ by 6.6.3

⇒ a(W + α) = a(W + β).

∴ Addition and scalar multiplication in V/W are well defined.


Addition is associative:

Let W + α, W + β, W + γ ∈ V/W.

[(W + α) + (W + β)] + (W + γ) = [W + (α + β)] + (W + γ)
= W + [(α + β) + γ]

= W + [α + (β + γ)]

= (W + α) + [W + (β + γ)]

= (W + α) + [(W + β) + (W + γ)]

Existence of identity:

Clearly W + 0 ∈ V/W and (W + α) + (W + 0) = W + (α + 0) = W + α,

(W + 0) + (W + α) = W + (0 + α) = W + α, for all W + α ∈ V/W.

∴ W + 0 is the additive identity of V/W.

Existence of inverse:

Let W + α ∈ V/W ⇒ α ∈ V ⇒ W + (−α) ∈ V/W.

Now (W + α) + (W + (−α)) = W + (α + (−α)) = W + 0 = W and

(W + (−α)) + (W + α) = W + ((−α) + α) = W + 0 = W.
∴ W + (−α) is the additive inverse of W + α.
Addition is commutative:

Let W + α, W + β ∈ V/W.

(W + α) + (W + β) = W + (α + β) = W + (β + α)

= (W + β) + (W + α)

∴ V/W is an abelian group w.r.t. addition.

Let W + α, W + β ∈ V/W and a, b ∈ F.

i) a[(W + α) + (W + β)] = a[W + (α + β)] = W + a(α + β) = W + (aα + aβ)

= (W + aα) + (W + aβ) = a(W + α) + a(W + β)

ii) (a + b)(W + α) = W + (a + b)α = W + (aα + bα) = (W + aα) + (W + bα)

= a(W + α) + b(W + α)

iii) (ab)(W + α) = W + (ab)α = W + a(bα) = a(W + bα) = a[b(W + α)]

iv) 1(W + α) = W + 1α = W + α

∴ V/W is a vector space.

6.6.5 Quotient Space: If W is a subspace of a vector space V(F), then V/W, the set of all
cosets of W in V, is a vector space called the quotient space.
6.6.6 Theorem: If W is a subspace of a finite dimensional vector space V(F) then
dim(V/W) = dim V − dim W.
Proof: Let W be a subspace of a finite dimensional vector space V(F).

Suppose dim W = m and S = {α₁, α₂, ..., αₘ} is a basis of W.

∴ S is linearly independent and L(S) = W ............ (1)

Now S is a linearly independent subset of V (W ⊆ V).

∴ S can be extended to form a basis of V.

Let S₁ = {α₁, α₂, ..., αₘ, β₁, β₂, ..., βₖ} be a basis of V.

∴ dim V = m + k, S₁ is L.I. and L(S₁) = V ............. (2)

Now we claim that B = {W + β₁, W + β₂, ..., W + βₖ} is a basis of V/W.

B is linearly independent:

Let b₁(W + β₁) + b₂(W + β₂) + ... + bₖ(W + βₖ) = W where b₁, b₂, ..., bₖ ∈ F.

⇒ (W + b₁β₁) + (W + b₂β₂) + ... + (W + bₖβₖ) = W

⇒ W + (b₁β₁ + b₂β₂ + ... + bₖβₖ) = W

⇒ b₁β₁ + b₂β₂ + ... + bₖβₖ ∈ W

⇒ b₁β₁ + ... + bₖβₖ = a₁α₁ + ... + aₘαₘ since L(S) = W by (1)

⇒ b₁β₁ + ... + bₖβₖ − a₁α₁ − ... − aₘαₘ = 0

⇒ b₁ = b₂ = ... = bₖ = a₁ = ... = aₘ = 0, since S₁ is L.I.

⇒ B is linearly independent.

B spans V/W:

Let W + α ∈ V/W.

⇒ α ∈ V

⇒ α = c₁α₁ + ... + cₘαₘ + d₁β₁ + ... + dₖβₖ where c₁, ..., cₘ and d₁, ..., dₖ ∈ F

⇒ W + α = W + (c₁α₁ + ... + cₘαₘ + d₁β₁ + ... + dₖβₖ)

= (W + γ) + (d₁β₁ + ... + dₖβₖ) where γ = c₁α₁ + ... + cₘαₘ ∈ W

= W + (d₁β₁ + ... + dₖβₖ), since W + γ = W

= d₁(W + β₁) + d₂(W + β₂) + ... + dₖ(W + βₖ)

∴ B spans V/W.

Hence B is a basis of V/W.

∴ dim(V/W) = k = (m + k) − m = dim V − dim W.


6.7 Summary:
In this lesson we learnt the definitions of basis of a vector space, dimension of a vector
space and quotient space. We proved theorems relating to basis and dimension.

6.8 Technical Terms:


i) Basis
ii) Finite dimensional vector space
iii) Dimension of a vector space
iv) Quotient space

6.9 Model Examination Questions:


1. Show that the vectors (1, 2, 1), (2, 1, 0), (1, −1, 2) form a basis for ℝ³.

Solution: Suppose a(1, 2, 1) + b(2, 1, 0) + c(1, −1, 2) = (0, 0, 0) where a, b, c ∈ ℝ.

⇒ (a + 2b + c, 2a + b − c, a + 2c) = (0, 0, 0)

⇒ a + 2b + c = 0; 2a + b − c = 0; a + 2c = 0

a + 2c = 0 ⇒ a = −2c

∴ a + 2b + c = 0 ⇒ −2c + 2b + c = 0 ⇒ 2b − c = 0 ⇒ c = 2b

2a + b − c = 0 ⇒ −4c + b − c = 0 ⇒ b − 5c = 0

⇒ b − 10b = 0 ⇒ −9b = 0 ⇒ b = 0
∴ c = 2b = 0 and a = −2c = 0.

∴ (1, 2, 1), (2, 1, 0), (1, −1, 2) are linearly independent.

We know that dim ℝ³ = 3.

∴ (1, 2, 1), (2, 1, 0), (1, −1, 2) form a basis of ℝ³ by 6.5.5.

6.9.2 Find the coordinates of the vector (2, 1, −6) of ℝ³ relative to the basis
(1, 1, 2), (3, −1, 0), (2, 0, −1).
Solution:

Suppose (2, 1, −6) = a(1, 1, 2) + b(3, −1, 0) + c(2, 0, −1) where a, b, c ∈ ℝ.

⇒ (2, 1, −6) = (a + 3b + 2c, a − b, 2a − c)

a + 3b + 2c = 2 .......... (1)

a − b = 1 .......... (2)

2a − c = −6 .......... (3)

From (2) b = a − 1; from (3) c = 2a + 6.

∴ From (1) a + 3(a − 1) + 2(2a + 6) = 2

⇒ a + 3a − 3 + 4a + 12 = 2

⇒ 8a = −7

⇒ a = −7/8

∴ b = −7/8 − 1 = −15/8 and c = −14/8 + 6 = 17/4

∴ The coordinates of (2, 1, −6) are (−7/8, −15/8, 17/4).
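Finding coordinates is just solving a linear system whose coefficient columns are the basis vectors. A sketch with exact rational arithmetic (vector and basis as read above, with the lost minus signs restored as an assumption):

```python
from fractions import Fraction

def solve3(A, y):
    """Solve the 3x3 system A x = y by Gauss-Jordan elimination over Q."""
    M = [[Fraction(A[i][j]) for j in range(3)] + [Fraction(y[i])] for i in range(3)]
    for c in range(3):
        p = next(i for i in range(c, 3) if M[i][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]                # normalize pivot row
        for i in range(3):
            if i != c and M[i][c] != 0:                   # clear column c elsewhere
                M[i] = [x - M[i][c] * z for x, z in zip(M[i], M[c])]
    return [M[i][3] for i in range(3)]

# rows of A: one equation per component; columns are the basis vectors
A = [(1, 3, 2), (1, -1, 0), (2, 0, -1)]
coords = solve3(A, (2, 1, -6))  # a, b, c with a(1,1,2) + b(3,-1,0) + c(2,0,-1) = (2,1,-6)
```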
6.9.3 If V is the vector space of ordered pairs of complex numbers over the real field ℝ, then
show that the set S = {(1, 0), (i, 0), (0, 1), (0, i)} is a basis of V.

Solution: Suppose a(1, 0) + b(i, 0) + c(0, 1) + d(0, i) = (0, 0) for a, b, c, d ∈ ℝ.

⇒ (a + ib, c + id) = (0, 0)
⇒ a + ib = 0 and c + id = 0

⇒ a = 0, b = 0, c = 0, d = 0

∴ The set S = {(1, 0), (i, 0), (0, 1), (0, i)} is L.I.

Let (a + ib, c + id) ∈ V.

Then (a + ib, c + id) = a(1, 0) + b(i, 0) + c(0, 1) + d(0, i)

∴ S spans V.
Hence S is a basis of V.
6.9.4 Let V be the vector space of 2 × 2 matrices over a field F. Show that V has dimension 4 by
exhibiting a basis for V which has four elements.

Solution: Writing [p q; r s] for the 2 × 2 matrix with rows (p, q) and (r, s), let S = {A, B, C, D}
where A = [1 0; 0 0], B = [0 1; 0 0], C = [0 0; 1 0], D = [0 0; 0 1].

Clearly S ⊆ V.

Suppose aA + bB + cC + dD = 0₂ₓ₂ where a, b, c, d ∈ F.

⇒ a[1 0; 0 0] + b[0 1; 0 0] + c[0 0; 1 0] + d[0 0; 0 1] = [0 0; 0 0]

⇒ [a b; c d] = [0 0; 0 0]

⇒ a = b = c = d = 0

∴ S is linearly independent.

Further if α = [a b; c d] ∈ V then

α = aA + bB + cC + dD.
∴ S spans V.
Hence S is a basis of V.

∴ dim V = 4.

6.9.5 Do the vectors (1, 1, 0), (0, 1, 2) and (0, 0, 1) form a basis of V₃(ℝ)?
Solution: Suppose a(1, 1, 0) + b(0, 1, 2) + c(0, 0, 1) = (0, 0, 0) where a, b, c ∈ ℝ.

⇒ (a, a + b, 2b + c) = (0, 0, 0)

⇒ a = 0; a + b = 0; 2b + c = 0

⇒ a = 0; b = −a = 0; c = −2b = 0

∴ The vectors (1, 1, 0), (0, 1, 2) and (0, 0, 1) are linearly independent.

Since dim V₃(ℝ) = 3, the set {(1, 1, 0), (0, 1, 2), (0, 0, 1)} is a basis of V₃(ℝ) by 6.5.5.
6.9.6 Show that the set {(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0)} spans the vector space ℝ³ but is
not a basis.

Solution: Let (a, b, c) ∈ ℝ³.

Suppose (a, b, c) = x(1, 0, 0) + y(1, 1, 0) + z(1, 1, 1) + t(0, 1, 0)

⇒ (a, b, c) = (x + y + z, y + z + t, z)

⇒ x + y + z = a; y + z + t = b; z = c

⇒ x + y = a − c; y + t = b − c

If y = 0 then x = a − c and t = b − c.

∴ (a, b, c) = (a − c)(1, 0, 0) + c(1, 1, 1) + (b − c)(0, 1, 0)

∴ S = {(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0)} spans ℝ³.

Since dim ℝ³ = 3, a basis of ℝ³ has exactly 3 vectors, so S (having 4 vectors) cannot be a basis.

6.9.7 If S = {α, β, γ} is a basis of C³(ℂ), show that S¹ = {α + β, β + γ, γ + α} is also a basis of
C³(ℂ).

Solution: Suppose a(α + β) + b(β + γ) + c(γ + α) = 0 where a, b, c ∈ ℂ.

⇒ aα + aβ + bβ + bγ + cγ + cα = 0

⇒ (a + c)α + (a + b)β + (b + c)γ = 0

⇒ a + c = 0; a + b = 0; b + c = 0, since S is L.I.

⇒ a = 0; b = 0; c = 0

∴ S¹ = {α + β, β + γ, γ + α} is linearly independent, and dim C³(ℂ) = 3.

∴ S¹ is a basis of C³(ℂ) by 6.5.5.
6.9.8 Extend the set of linearly independent vectors {(1, 0, 1, 0), (0, −1, 1, 0)} to a basis of V₄.

Solution: S = {(1, 0, 1, 0), (0, −1, 1, 0)} is a linearly independent subset of V₄.

Clearly B = {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)} is a basis of V₄.

Let S₁ = {(1, 0, 1, 0), (0, −1, 1, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}.

∴ S₁ spans V₄ since B ⊆ S₁ and L(B) = V₄.

But dim V₄ = 4, so S₁ (with 6 vectors) cannot be a basis.

∴ S₁ is linearly dependent, and some vector of S₁ is a L.C. of its preceding vectors.

Here (0, 0, 1, 0) = (1, 0, 1, 0) − (1, 0, 0, 0) is a L.C. of its preceding vectors.

∴ S₂ = {(1, 0, 1, 0), (0, −1, 1, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 0, 1)} still spans V₄.

But S₂ has 5 vectors, so S₂ cannot be a basis either.

∴ S₂ is linearly dependent; now (0, 1, 0, 0) = (1, 0, 1, 0) − (1, 0, 0, 0) − (0, −1, 1, 0) is a L.C.
of its preceding vectors.

∴ S₃ = {(1, 0, 1, 0), (0, −1, 1, 0), (1, 0, 0, 0), (0, 0, 0, 1)} spans V₄.

Since dim V₄ = 4 and S₃ has 4 elements, S₃ must be a basis of V₄.

6.9.9 Find a basis and the dimension of the subspace of ℝ³ spanned by the vectors
(2, 7, 3), (1, −1, 0), (1, 2, 1) and (0, 3, 1).

Solution: Let W be the subspace of ℝ³ spanned by S = {(2, 7, 3), (1, −1, 0), (1, 2, 1), (0, 3, 1)}.

W is a subspace of ℝ³ ⇒ dim W ≤ dim ℝ³ = 3.

∴ S cannot be linearly independent since S has 4 vectors.

∴ S is L.D., and (2, 7, 3) = −(1, −1, 0) + 3(1, 2, 1) is a L.C. of the remaining vectors.

Now S₁ = {(1, −1, 0), (1, 2, 1), (0, 3, 1)} spans W.

Suppose a(1, −1, 0) + b(1, 2, 1) + c(0, 3, 1) = (0, 0, 0)

⇒ (a + b, −a + 2b + 3c, b + c) = (0, 0, 0)

⇒ a + b = 0 ....... (1)  −a + 2b + 3c = 0 .............. (2)  b + c = 0 ............. (3)

(1) ⇒ a = −b

∴ From (2) b + 2b + 3c = 0 ⇒ 3b + 3c = 0 ⇒ b + c = 0

⇒ c = −b
∴ a = 1, b = −1, c = 1 is a non-trivial solution.

∴ S₁ is linearly dependent.

∴ (0, 3, 1) is a L.C. of the remaining vectors and S₂ = {(1, −1, 0), (1, 2, 1)} spans W.

Suppose a(1, −1, 0) + b(1, 2, 1) = (0, 0, 0)

⇒ (a + b, −a + 2b, b) = (0, 0, 0)

⇒ a + b = 0, −a + 2b = 0, b = 0

⇒ a = 0, b = 0

∴ S₂ is L.I. and L(S₂) = W.

Hence S₂ is a basis of W.

∴ dim W = 2.
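The step-by-step elimination of dependent vectors amounts to a rank computation: the dimension of the span equals the rank of the matrix whose rows are the spanning vectors. A sketch, assuming the four vectors are (2, 7, 3), (1, −1, 0), (1, 2, 1), (0, 3, 1) as read from the garbled source:

```python
def rank(rows):
    """Row-reduce and count pivot rows; the rank equals dim of the row span."""
    m = [list(map(float, row)) for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-9), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

dim_W = rank([(2, 7, 3), (1, -1, 0), (1, 2, 1), (0, 3, 1)])  # dimension of W
```

The two non-zero rows left after reduction give a basis of W, agreeing with dim W = 2 above.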

6.9.10 If W₁ and W₂ are subspaces of V₄(ℝ) generated by the sets
{(1, 1, −1, 2), (2, 1, 3, 0), (3, 2, 2, 2)} and {(1, −1, 0, 1), (−1, 1, 0, −1)} respectively,
then find dim(W₁ + W₂).


Solution:

Let S1  (1,1, 1, 2)(2,1,3, 0)(3, 2, 2, 2) and S2  (1, 1, 0,1)(1,1, 0, 1)

Given L( S1 )  W1 and L( S 2 )  W2

We know that L(W1  W2 )  W1  W2

suppose a (1,1, 1, 2)  b(2,1, 3, 0)  c(3, 2, 2, 2)  d (1, 1, 0,1)  e( 1,1, 0, 1)  (0, 0, 0, 0)

 a  2b  3c  d  e  0 ..... (1) a  b  2c  d  e  0 ............... (2)

a  3b  2c   0 ........... (3) 2a  2c  d  e  0 ............... (4)

(2) + (4) 3a  b  4c  0 ......... (5)


(1) - (4) a  2b  c  0 ......... (6)
(3) a  3b  2c  0
(6)-(3) b  c  0  b  c
From (5) ac 0
From (2) d e
 a  1; b  0; c  1; d  1; e  1

 S1  S2 is L.D

 One vector is the L.C. of remaining vectors.

 S 1  (1,1, 1, 2)(2,1,3, 0)(1, 1, 0,1)(1,1,0, 1) spans W1  W2

Suppose a (1,1, 1, 2)  b(2,1,3, 0)  c (1, 1, 0,1)  d ( 1,1, 0, 1)  (0, 0, 0, 0)

 a  2b  c  d  0 ......... (1) a  b  c  d  0 ............. (2)

a  3b  0 ..........(3) 2a  c  d  0 ...............(4)

From (3) a  3b From (1) 5b  c  d  0

From (4) 6b  c  d  0

b0

 a  0  b  0 and c  d

 S 1 is L.D

 One vector is S 1 is L.C of the remaining vectors.

 S 11  (1,1, 1, 2)(1, 1, 0,1)(1,1,0, 1) spans W1  W2

Suppose a (1,1, 1, 2)  b(1, 1, 0,1)  c ( 1,1, 0, 1)  (0, 0, 0, 0)

 a  b  c  0 ............... (1) a  b  c  0 ............ (2)

 a  0 ............... (3) 2a  b  c  0 ........... (4)

a  0 we get from (1) b = c

 a  0; b  1; c  1

 S 11 is L.I. and spans W1  W2

Hence S 11 is a basis of W1  W2

 dim(W1  W2 )  3

6.10 Exercises:

1. Show that B = {(1, 0, −1), (1, 2, 1), (0, −3, 2)} forms a basis for ℝ³.

2. Find the coordinates of (2, 3, 4, −1) w.r.t. the basis {(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1), (1, 0, 0, 0)} of V₄(ℝ).

3. Extend the set {(1, 0, 1), (0, −1, 1)} to form a basis for ℝ³.

4. Find a basis for the subspace spanned by the vectors (1, 2, 0), (−1, 0, 1), (0, 2, 1) in V₃(ℝ).

5. If V is the vector space generated by the polynomials
x³ − 2x² + 2x + 1, x³ + 3x² + x − 4, 2x³ + x² + 7x − 7, then find a basis of V and its dimension.

- Smt. K. Ruth

LESSON - 7

LINEAR TRANSFORMATION
7.1 Objective of the Lesson:
In Chapters 5 and 6, we discussed vector spaces and some related topics. Now it is
natural to consider functions from a vector space into a vector space. Such a function with a
certain condition is called a linear transformation or a homomorphism.

In this chapter, we discuss the properties of these linear transformations and related
problems.

7.2 Structure of the Lesson:


This lesson has the following components.

7.3 Introduction

7.4 Linear Transformation

7.5 Range, Null space of a linear transformation

7.6 Answers to SAQ’s

7.7 Summary

7.8 Technical terms

7.9 Exercises

7.10 Answers to Exercises

7.11 Model Examination Questions

7.12 Reference Books

7.3 Introduction:
In I B.Sc./B.A. homomorphisms from a group into a group are discussed. In chapter 2 of
this book, homomorphisms from a ring/ field into a ring/ field are discussed. In this chapter, we
discuss the homomorphisms (linear transformations) from a vector space into a vector space.

7.4 Linear Transformation:


7.4.1 Definition: Let U(F) and V(F) be vector spaces. A function T : U(F) → V(F) is called
a linear transformation from U into V if T(aα + bβ) = aT(α) + bT(β) for all α, β ∈ U and a, b ∈ F.
7.4.2 Note: 1) It is called (i) a monomorphism if T is one-one,
(ii) an epimorphism if T is onto,
(iii) an isomorphism if T is one-one and onto.

2. A linear transformation T : U → U is called a linear operator on U, and it is called an
automorphism if T is one-one and onto.

3. A linear transformation T : U(F) → F is called a linear functional on U.

4. Taking a = 1, b = 1 we get T(α + β) = T(α) + T(β) for all α, β ∈ U. Taking b = 0, we get
T(aα) = aT(α) for all a ∈ F and α ∈ U.

5. Throughout this lesson ℝ denotes the field of real numbers.


Solved Problems:

7.4.3 Define T : ℝ³ → ℝ² by T(x, y, z) = (2x + y, x) for all (x, y, z) ∈ ℝ³. Show that T is a
linear transformation.

Solution: Let a, b ∈ ℝ, α = (α₁, α₂, α₃), β = (β₁, β₂, β₃) ∈ ℝ³.

So aα + bβ = (aα₁ + bβ₁, aα₂ + bβ₂, aα₃ + bβ₃).

Now T(aα + bβ) = (2(aα₁ + bβ₁) + (aα₂ + bβ₂), aα₁ + bβ₁)

= a(2α₁ + α₂, α₁) + b(2β₁ + β₂, β₁) = aT(α) + bT(β) for all a, b ∈ ℝ and α, β ∈ ℝ³.

Hence T is a linear transformation.

7.4.4 Define T : ℝ³ → ℝ² by T(a₁, a₂, a₃) = (a₁ + a₂, a₁ + a₃) for all (a₁, a₂, a₃) ∈ ℝ³. Show that T is
a linear transformation.

Solution: Let a, b ∈ ℝ and α = (a₁, a₂, a₃), β = (b₁, b₂, b₃) ∈ ℝ³.

So T(aα + bβ) = T(aa₁ + bb₁, aa₂ + bb₂, aa₃ + bb₃)

= (aa₁ + bb₁ + aa₂ + bb₂, aa₁ + bb₁ + aa₃ + bb₃)

= a(a₁ + a₂, a₁ + a₃) + b(b₁ + b₂, b₁ + b₃)

= aT(α) + bT(β) for all α, β ∈ ℝ³ and a, b ∈ ℝ.
Hence T is a linear transformation.

7.4.5 Show that the mapping T : ℝ³ → ℝ² defined by T(a₁, a₂, a₃) = (1 + a₁, 0) for all
(a₁, a₂, a₃) ∈ ℝ³ is not a linear transformation.

Solution: Let a, b ∈ ℝ, α = (α₁, α₂, α₃), β = (β₁, β₂, β₃) ∈ ℝ³.

Now T(aα + bβ) = T(aα₁ + bβ₁, aα₂ + bβ₂, aα₃ + bβ₃)

= (1 + aα₁ + bβ₁, 0) ...... (1)

But aT(α) + bT(β) = aT(α₁, α₂, α₃) + bT(β₁, β₂, β₃)

= a(1 + α₁, 0) + b(1 + β₁, 0) = (a + b + aα₁ + bβ₁, 0) ......... (2)

From (1) and (2), it is clear that T(aα + bβ) ≠ aT(α) + bT(β) in general (e.g. when a = b = 1).

Hence T is not a linear transformation.

7.4.6 SAQ: Show that the mapping T : ℝ² → ℝ² defined by T(a₁, a₂) = (2a₁ + a₂, a₁), for all
(a₁, a₂) ∈ ℝ², is a linear transformation.

Properties of linear transformations:

7.4.7 Theorem: Let T : U → V be a linear transformation from the vector space U(F) to the
vector space V(F). Then

(a) T(0) = 0′  (b) T(−α) = −T(α) for all α ∈ U

(c) T(α − β) = T(α) − T(β) for all α, β ∈ U

(d) T(a₁α₁ + a₂α₂ + ... + aₙαₙ) = a₁T(α₁) + a₂T(α₂) + ... + aₙT(αₙ) for all α₁, α₂, ..., αₙ ∈ U and
a₁, a₂, ..., aₙ ∈ F.

(Here 0 and 0′ denote the zero vectors of U and V respectively.)

Proof: (a) We have T(α) = T(α + 0) = T(α) + T(0) and T(α) = T(α) + 0′

⇒ T(α) + T(0) = T(α) + 0′ ⇒ T(0) = 0′, by the left cancellation law in the group (V, +).

Hence T(0) = 0′.
(b) Let α ∈ U. Then α + (−α) = 0.

Now T(α + (−α)) = T(0)

⇒ T(α) + T(−α) = 0′

⇒ T(−α) = −T(α), by the uniqueness of inverses in the group (V, +).

This is true for all α ∈ U.

c) We have T(α − β) = T(α + (−β)) = T(α) + T(−β) = T(α) − T(β) for all α, β ∈ U.

d) We use induction on n.

If n = 1, then T(a₁α₁) = a₁T(α₁) for all a₁ ∈ F and α₁ ∈ U.

Assume the result for n = k. So we have, for any a₁, a₂, ..., aₖ ∈ F and
α₁, α₂, ..., αₖ ∈ U, T(a₁α₁ + a₂α₂ + ... + aₖαₖ) = a₁T(α₁) + a₂T(α₂) + ... + aₖT(αₖ).

Now let a₁, a₂, ..., aₖ, aₖ₊₁ ∈ F and α₁, α₂, ..., αₖ, αₖ₊₁ ∈ U.

T(a₁α₁ + a₂α₂ + ... + aₖαₖ + aₖ₊₁αₖ₊₁) = T((a₁α₁ + a₂α₂ + ... + aₖαₖ) + aₖ₊₁αₖ₊₁)

= T(a₁α₁ + a₂α₂ + ... + aₖαₖ) + T(aₖ₊₁αₖ₊₁)

= a₁T(α₁) + a₂T(α₂) + ... + aₖT(αₖ) + aₖ₊₁T(αₖ₊₁)

So the result is true for n = k + 1. Now the result follows by mathematical induction.

7.4.8 Theorem: Let U(F), V(F) be vector spaces. Let T : U → V be defined by
T(α) = 0 for all α ∈ U. Then T is a linear transformation.

Proof: Let a, b ∈ F, α, β ∈ U ⇒ aα + bβ ∈ U

∴ T(aα + bβ) = 0

= a0 + b0 = aT(α) + bT(β), for all a, b ∈ F and α, β ∈ U.
This shows that T is a linear transformation.

Note: The linear transformation T : U → V defined by T(α) = 0 for all α ∈ U is called the zero
transformation and is denoted by O.

7.4.9 Theorem: Let V(F) be a vector space over a field F.

Let T : V → V be defined by T(α) = α for all α ∈ V. Then T is a linear transformation (operator) on V.

Proof: Let a, b ∈ F, α, β ∈ V ⇒ aα + bβ ∈ V and

T(aα + bβ) = aα + bβ

= aT(α) + bT(β) for all a, b ∈ F and α, β ∈ V.
So, T is a linear transformation.

Note: The linear transformation T : V → V defined by T(α) = α for all α ∈ V is called the identity
operator on V and is denoted by I.

7.4.10 Theorem: Let U(F), V(F) be vector spaces over a field F and let T : U → V be a linear
transformation. Then the mapping −T : U → V defined by (−T)(α) = −T(α) for all α ∈ U is a linear
transformation.

Proof: Let a, b ∈ F and α, β ∈ U ⇒ aα + bβ ∈ U and

(−T)(aα + bβ) = −[T(aα + bβ)]

= −[aT(α) + bT(β)]

= a[−T(α)] + b[−T(β)]

= a(−T)(α) + b(−T)(β) for all a, b ∈ F and α, β ∈ U.
So, −T is a linear transformation.
Note: −T is called the negative of the linear transformation T.

7.4.11 Theorem: Let T₁, T₂ be linear transformations from a vector space U(F) into a vector
space V(F). Then the mapping T₁ + T₂ : U → V defined by (T₁ + T₂)(α) = T₁(α) + T₂(α) for all α ∈ U is
a linear transformation.

Proof: Let a, b ∈ F, α, β ∈ U.

Now (T₁ + T₂)(aα + bβ) = T₁(aα + bβ) + T₂(aα + bβ)

= aT₁(α) + bT₁(β) + aT₂(α) + bT₂(β)

= a[T₁(α) + T₂(α)] + b[T₁(β) + T₂(β)]

= a(T₁ + T₂)(α) + b(T₁ + T₂)(β), for all a, b ∈ F and α, β ∈ U.

So, T₁ + T₂ is a linear transformation.

7.4.12 Theorem: Let T : U → V be a linear transformation and a ∈ F. Then the mapping
aT : U → V defined by (aT)(α) = aT(α) for all α ∈ U is a linear transformation.

Proof: α, β ∈ U and x, y ∈ F ⇒ xα + yβ ∈ U.

Now (aT)(xα + yβ) = a[T(xα + yβ)]

= a[xT(α) + yT(β)]

= a[xT(α)] + a[yT(β)]

= (ax)T(α) + (ay)T(β)

= (xa)T(α) + (ya)T(β)

= x[aT(α)] + y[aT(β)]

= x[(aT)(α)] + y[(aT)(β)]

This is true for all x, y ∈ F and for all α, β ∈ U.

So, aT is a linear transformation.

7.4.13 Theorem: Let U = F^(n×1), V = F^(m×1) and A ∈ F^(m×n). Define T : U → V by
T(X) = AX for all X ∈ U. Then T is a linear transformation.

Proof: Let a, b ∈ F, X, Y ∈ U.

So, T(aX + bY) = A(aX + bY) = A(aX) + A(bY)

= a(AX) + b(AY) = aT(X) + bT(Y)

for all a, b ∈ F and X, Y ∈ U.

Hence T is a linear transformation.

7.4.14 SAQ: Define T : ℝ² → ℝ² by T(a₁, a₂) = (a₁, −a₂) for all (a₁, a₂) ∈ ℝ². Show that T is a
linear transformation. (This T is called the reflection in the X-axis.)
7.4.15 Define T : ℝ² → ℝ² by T(a₁, a₂) = (a₁, 0) for all (a₁, a₂) ∈ ℝ². Show that T is a linear
transformation. (This T is called the projection on the X-axis.)
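Maps such as the reflection (7.4.14) and the projection (7.4.15) can be spot-checked numerically against the defining identity T(aα + bβ) = aT(α) + bT(β); a minimal sketch (a translation is included as a contrasting non-example, mirroring 7.4.5):

```python
def is_linear_on_samples(T, samples):
    """Check T(a*u + b*v) == a*T(u) + b*T(v) on the given (a, b, u, v) samples."""
    for a, b, u, v in samples:
        lhs = T(tuple(a * x + b * y for x, y in zip(u, v)))
        rhs = tuple(a * x + b * y for x, y in zip(T(u), T(v)))
        if lhs != rhs:
            return False
    return True

reflect = lambda p: (p[0], -p[1])   # reflection in the X-axis (7.4.14)
project = lambda p: (p[0], 0)       # projection on the X-axis (7.4.15)
shift = lambda p: (p[0] + 1, p[1])  # a translation, which is not linear

samples = [(2, 3, (1, 2), (4, -1)), (-1, 5, (0, 3), (2, 2))]
```

Passing the check on samples does not prove linearity, but failing it (as `shift` does) disproves it.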

7.4.16 Theorem: T : U → V is a linear transformation iff

T(α + β) = T(α) + T(β) and T(aα) = aT(α), for all α, β ∈ U and for all a ∈ F (U, V are vector
spaces over a field F).

Proof: Suppose T : U → V is a linear transformation.

∴ T(aα + bβ) = aT(α) + bT(β) for all a, b ∈ F and α, β ∈ U ........... (1)

Taking a = 1, b = 1 in (1), we get T(1α + 1β) = 1T(α) + 1T(β) for all α, β ∈ U

⇒ T(α + β) = T(α) + T(β) for all α, β ∈ U.

Taking b = 0 in (1), we get T(aα + 0β) = aT(α) + 0T(β)

⇒ T(aα) = aT(α) for all a ∈ F and α ∈ U.

Conversely suppose that T(α + β) = T(α) + T(β) and

T(aα) = aT(α) for all α, β ∈ U and a ∈ F.

Now let a, b ∈ F and α, β ∈ U.

So, T(aα + bβ) = T(aα) + T(bβ)

= aT(α) + bT(β).

This is true for all α, β ∈ U and a, b ∈ F.

Hence T is a linear transformation.


In view of Theorem 7.4.16, a linear transformation may also be defined as:

7.4.17 Definition: Let U(F), V(F) be vector spaces. A mapping T : U → V is a linear
transformation if T(α + β) = T(α) + T(β) and T(aα) = aT(α) for all a ∈ F and α, β ∈ U.

The two conditions can also be replaced by the single condition:

T(aα + β) = aT(α) + T(β) for all α, β ∈ U and for all a ∈ F.
This can be justified on similar lines.

7.4.18 Theorem: Let U(F), V(F) be vector spaces and let {α₁, α₂, ..., αₙ} be a basis of U. Let
{β₁, β₂, ..., βₙ} be any set of n vectors in V. Then there exists a unique linear transformation
T : U → V such that T(αᵢ) = βᵢ, 1 ≤ i ≤ n.

Proof: Let S  1 ,  2 ,... n  Let  U . Since S is a basis of U, a1 , a2 ,....., an  F 


  a1 1  a 2 2  ...  a n n

Define T : U  V by T ( )  T (a11  a2 2  ...  an n )  a 1 1  a 2  2  ...  a n  n , for


every   a11  a22  ...  ann U .

Let a, b  F ,   a11  a2 2  ...  an n  U

  b11  b2 2  ...  bn n  U

Now T (a  b  )  T  (a1a1  bb1 )1  (a1a2  bb2 ) 2  .....  (aan  bbn ) n 

 (aa1  bb1 )1  (aa2  bb2 ) 2  .....  (aan  bbn ) n

 a(a11  a2 2  .....  an n )  b(b11  b2 2  .....  bn n )

 aT ( )  aT (  ), for all  ,   U and for all a, b  F .


This shows that T is a linear transformation.

Now for each i, 1 ≤ i ≤ n, T(αi) = T(0α1 + 0α2 + … + 1αi + 0αi+1 + … + 0αn)

 = 0β1 + 0β2 + … + βi + 0βi+1 + … + 0βn

 = βi

To show that T is unique, let T¹ : U → V be a linear transformation such that

T¹(αi) = βi, 1 ≤ i ≤ n. Let α ∈ U. So ∃ a1, a2, …, an ∈ F ∋

α = a1α1 + a2α2 + … + anαn

Now T¹(α) = T¹(a1α1 + a2α2 + … + anαn)

 = a1T¹(α1) + a2T¹(α2) + … + anT¹(αn)

 = a1β1 + a2β2 + … + anβn = T(α), ∀ α ∈ U

So T¹ = T. Hence the uniqueness and hence the theorem.
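The construction in theorem 7.4.18 can be sketched numerically. The following is a minimal illustration only (numpy; the particular basis of ℝ² and the images in ℝ³ are my own choices, not from the text): T is obtained by reading off the coordinates of a vector in the given basis and recombining the prescribed images.

```python
import numpy as np

# Basis {a1, a2} of R^2 and prescribed images {b1, b2} in R^3
# (illustrative choices, not from the text).
alphas = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
betas = [np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])]

A = np.column_stack(alphas)  # columns are the basis vectors

def T(x):
    # coordinates c of x in the basis solve A c = x; then T(x) = c1*b1 + c2*b2
    c = np.linalg.solve(A, np.asarray(x, dtype=float))
    return c[0] * betas[0] + c[1] * betas[1]

# T sends each basis vector to its prescribed image, and is linear:
print(T(alphas[0]))                      # b1
print(T(alphas[1]))                      # b2
print(T(2 * alphas[0] + 3 * alphas[1]))  # 2*b1 + 3*b2
```

Uniqueness corresponds to the fact that the coordinate vector c is uniquely determined by the basis.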


Rings and Linear Algebra 7.9 Linear Transformation
Product of linear transformations.

7.4.19 Theorem: Let U , V , W be vector spaces over a field F. Let T :U V, H :V W be linear
transformations. Then the composite function HT : U  W defined by
HT ( )  H T ( )     U is a linear transformation. (This HT is called product of linear trans-
formations)

Proof: If   U , then T ( )  V and H T ( )  W so that HT ( )  W

Let a , b  F and  ,   U

HT (a  b )  H T (a  b )  H  aT ( )  bT (  ) 

 aH T ( )   bH  T (  ) 

 a ( HT )( )  b( HT )(  )   ,   U and a, b  F
Hence HT is a linear transformation.
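A small numerical sketch of the product (with concrete matrices of my own choosing, not from the text): for T : ℝ³ → ℝ² and H : ℝ² → ℝ², the composite HT is again linear, and its matrix is the product of the two matrices.

```python
import numpy as np

# Illustrative matrices: T(a1,a2,a3) = (a1+a2, 2a3); H swaps the two coordinates.
T_mat = np.array([[1.0, 1.0, 0.0],
                  [0.0, 0.0, 2.0]])
H_mat = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

def T(x): return T_mat @ x
def H(y): return H_mat @ y
def HT(x): return H(T(x))      # (HT)(x) = H(T(x))

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, -1.0])
a, b = 2.0, -3.0

print(HT(a * x + b * y))       # equals a*HT(x) + b*HT(y): HT is linear
print((H_mat @ T_mat) @ x)     # equals HT(x): the matrix of HT is H_mat @ T_mat
```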

7.4.20 Theorem: Let U ,V , W be vector spaces over a field F..

Let T1 , T2 : U  V ; H1 , H 2 : V  W be linear transformations. Then

i) H1 (T1  T2 )  H1T1  H1T2

ii) ( H1  H 2 )T1  H1T1  H 2T1

iii) a ( H1T1 )  (aH1 )T1  H1 (aT1 )

Proof: i) Let  U

 ( H 1  H 2 )T1  ( )  ( H 1  H 2 ) T1 ( )   H 1 T1 ( )   H 2 T1 ( ) 


 ( H1T1 )( )  ( H 2T1 )( )  ( H1T1  H 2T1 )( )    U

 ( H1  H 2 )T1  H1T1  H 2T1


(ii) and (iii) can be proved on similar lines.

7.4.21 Theorem: Let T1 , T2 , T3 be linear operators on a vector space V(F).

Then (a) T1O = OT1 = O;  (b) T1I = IT1 = T1

(c) T1(T2 + T3) = T1T2 + T1T3  (d) (T1 + T2)T3 = T1T3 + T2T3



(e) T1 (T2T3 )  (T1T2 )T3

Proof: (a) For every α ∈ V, (T1O)(α) = T1(O(α)) = T1(0) = 0 = O(α).

∴ T1O = O. Similarly we can prove that OT1 = O. [Here O is the zero operator on V].

The remaining results can be proved similarly.

7.4.22 Theorem: Let L (U , V ) be the set of all linear transformations from a vector space U ( F )
into the vector space V ( F ) .For T1 , T2  L(U , V ) and a  F , define
(T1  T2 )( )  T1 ( )  T2 ( )    U and (aT1 )( )  aT1 ( )    U . Then L (U , V ) is a vector
space over F.

Proof: By theorems 7.4.11 and 7.4.12, we have T1  T2 , aT1  L(U ,V ) . This shows that L (U , V ) is
closed under vector addition and scalar multiplication.

Let T1 , T2 , T3  L(U , V ) and   U .

T1  (T2  T3 )  ( )  T1 ( )  T2  T3    T1 ( )  T2 ( )  T3 ( ) 

= T1 ( )  T2 ( )  T3    T1  T2  ( )  T3 ( )

  (T1  T2 )  T3      U

T1  (T2  T3 )  (T1  T2 )  T3  T , T2 , T3  L(U , V )

+ is associative in L (U , V ) .

Similarly we can prove that + is commutative in L (U , V ) .

The zero transformation O : U  V is identity with respect to +. For T  L (U , V ) and  U ,


we have

(T  O)( )  T ( )  O( )  T ( )  O  T ( )   U

 T  O  O . Similarly we can show that O  T  O .


T  O  O  O  T ,  T  L(U ,V )

O is the identity..
T  L (U , V )  T  L (U , V ) by theorem 7.4.10.

For  U , T  (T ) ( )  T ( )  (T )( )  T ( )  T ( )  O(U )   U

 T  ( T )  0 . Similarly we can show that (  T )  T  O

T is the inverse of T w.r.t. +.

For a  F , T1 , T2  L(U , V ),   U .

 a(T1  T2 )  ( )  a (T1  T2 )( )   a  (T1 ( )  T2 ( )   aT1 ( )  aT2 ( )

 (aT1 )( )  (aT2 )( )  a (T1  aT2 )( )    U

 a(T1  T2 )  aT1  aT2  a  F and  T1 , T2  L(U , V ) .

Similarly we can prove that (a  b )T  aT  bT , ( ab)T  a (bT ) and 1T  T  a , b  F and


 T  L (U , V )

Hence L (U , V ) is a vector space over F..

Note: L (U , V ) is also denoted by Hom F(U,V) or Hom (U , V ) .

7.4.23 Theorem: Let dim U  n , dim V  m . Then dim L (U , V )  mn (U, V are vector spaces
over the field F).

Proof: Let A  1 ,  2 ,....,  n  , B   1 ,  2 ,....,  m  be ordered bases of U and V respectively. By

Theorem 7.4.18,  a unique linear transformation Tij  L(U ,V )  .

Tij(αk) = βj if k = i, and Tij(αk) = 0 if k ≠ i,

for all i, j, 1 ≤ i ≤ n, 1 ≤ j ≤ m and for each fixed k, 1 ≤ k ≤ n.

For example: Ti1(α1) = 0, Ti1(α2) = 0, Ti1(α3) = 0, …, Ti1(αi) = β1, …, Ti1(αn) = 0

i.e. Tij(αi) = βj and Tij(αk) = 0 if k ≠ i.

Let S = {Tij | 1 ≤ i ≤ n, 1 ≤ j ≤ m}

a) S is L.I.: Let aij ∈ F and suppose Σⁿᵢ₌₁ Σᵐⱼ₌₁ aijTij = O (the zero of L(U,V)).

 n m 
For each k ,1  k  n , we have   aijTij   k   O  k   O
 i 1 j 1 

n m m
  aijTij  k   0   akj  j  0  ak 1 1  ak 2  2  ...  akm m  O
i 1 j 1 j 1

 ak1  0, ak 2  0,.....akm  0  k ,1  k  n  aij  0  i, j ,1  i  n,1  j  m

 S is L.I.
b) L(S) = L(U,V):

Clearly L ( S )  L (U , V )

Let T ∈ L(U, V). Now α1 ∈ U ⇒ T(α1) ∈ V. So T(α1) can be expressed as a linear combination of

elements of B. So ∃ b11, b12, b13, …, b1m ∈ F ∋ T(α1) = b11β1 + b12β2 + b13β3 + … + b1mβm.

In general for each k ,1  k  n , bk 1 , bk 2 ,...bkm  F  T ( k )  bk 1 1  bk 2  2  ...  bkm  m

n m
Let H  b T
i 1 j 1
ij ij  H  L(U ,V ) . We have Tij ( k )  0 if k  i and  j if k  i .

n m m

For each k ,1  k  n , H ( k )    bij Tij ( k ) 


i  1 j 1
b
j 1
kj  j  T ( k ) .

 H ( k )  T ( k ),   k  B1

So H and T agree on a basis of U. If   U , then a1 ,....an  F 

  a11  a2 2  ...an n .

So, H ( )  H (a11  a2 2  ...  an n )  a1 H (1 )  a2 H ( 2 )  ...  an H ( n )

= a1T (1 )  a2T ( 2 )  ...  anT ( n )  T (a11  a2 2  ...  an n  T ( )

for all   U . Hence T  H  L( S )  L (U , V )  L ( S )  L( S )  L(U , V ) .

 S is a basis of L(U,V)  dimL(U,V)= number of elements of S = mn. Hence the theorem.


7.5 Null space and Range:
7.5.1 Definition: Let U(F) and V(F) be vector spaces and T : U  V be a linear transformation.

The null space N(T) of T is defined as

N (T )    U T ( )  0

The range R (T) of T is defined as

R (T )  T ( )   U 

Null space is also called kernel.


Range is also called image.

7.5.2 Theorem: Let T : U  V be a linear transformation. Then N(T) is a subspace of U and


R(T) is a subspace of V.


Proof: (1) N (T )    U T ( )  0 
 
We know that 0  U and T 0  0.  0  N (T )

   N (T )  U

Let a, b  F ,  ,   N (T )

 T ( )  0, T (  )  0

T (a  b )  aT ( )  bT (  )  a.0  b.0  0

 a  b   N (T )   ,   N (T ) and  a, b  F .

 N(T) is a subspace of U.

2. R (T )  T ( )   U 

0  U  T  0   R (T )    R (T )  V

Let 1 ,  2  R (T ) and a , b  F .

 1 , 2  U  T (1 )  1 , T ( 2 )   2

 a 1  b  2  aT (1 )  bT ( 2 )

 T (a1  b 2 )

 R (T ) , since a1  b 2 U .

 a 1  b  2  R(T ),  1 ,  2  R(T ) and  a, b  F

 R (T ) is a subspace of V..

7.5.3 Example: Let V(F) be a vector space and I : V → V be the identity operator. Then N(I) = {0}, R(I) = V.

7.5.4 Example: Let U(F), V(F) be vector spaces and O : U → V be the zero linear transformation. Then N(O) = U and R(O) = {0}.
7.5.5 Theorem: Let T : U ( F )  V ( F ) be a linear transformation. If U is finite dimensional, then
R (T ) is finite dimensional.

Proof: Let S   1 ,  2 , ....,  n  be a basis of U.

and S  T ( 1 ), T ( 2 ),...., T ( n )
1

Let   R (T )    U  T ( )  

 a1 , a2 ,......, an  F    a11  a2 2  ......  an n

   T ( )  T (a11  a2 2  ......  an n )

   a1T (1 )  a2T ( 2 )  ....  anT ( n )

   L( S 1 )

 R(T )  L( S 1 )

Also S 1  R (T ) ( each element of S 1 is in R(T))

 L( S 1 )  R(T ) , since R(T) is a subspace of V containing S 1 and L( S 1 ) is the


smallest subspace containing S 1 .

Hence L( S 1 )  R (T ) .

 R (T ) is finite dimensional.

7.5.6 Note: 1. dim U  n  dim R (T )  n


2. If S is a basis of U, then R(T) = Linear span of T(S).

i.e. if S = {α1, α2, …, αn}, then R(T) = linear span of S¹ = {T(α1), T(α2), …, T(αn)}.

7.5.7 Example: Let T : ℝ³ → ℝ² be defined by T(a1, a2, a3) = (a1 + a2, 2a3) ∀ (a1, a2, a3) ∈ ℝ³.

Then N(T) = {(a1, a2, a3) | T(a1, a2, a3) = 0}

 = {(a1, a2, a3) | (a1 + a2, 2a3) = (0, 0)}

 = {(a1, a2, a3) | a1 = −a2, a3 = 0}

 = {(a, −a, 0) | a ∈ ℝ}

and R(T) = {T(a1, a2, a3) | (a1, a2, a3) ∈ ℝ³}

 = {(a1 + a2, 2a3) | a1, a2, a3 ∈ ℝ}

By note 7.5.6, R(T) is spanned by {T(1, 0, 0), T(0, 1, 0), T(0, 0, 1)}, where {e1, e2, e3} is the standard basis of ℝ³. So R(T) = span of {(1, 0), (1, 0), (0, 2)}

 = span of {(1, 0), (0, 1)} = ℝ².
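The null space and range in example 7.5.7 can also be computed numerically. This is a sketch only (numpy; the tolerance 1e-10 is my own choice): the right-singular vectors belonging to zero singular values span N(T), and the matrix rank gives dim R(T).

```python
import numpy as np

# Matrix of T(a1, a2, a3) = (a1 + a2, 2a3) with respect to the standard bases.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

# Null space via the SVD: rows of Vt whose singular value is (near) zero.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))    # numerical rank
null_basis = Vt[r:]           # one vector, proportional to (1, -1, 0)
print(null_basis)

print(np.linalg.matrix_rank(A))  # 2, so R(T) is all of R^2
```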

Rank and Nullity:

7.5.8 Definition: Let T  U ( F )  V ( F ) be a linear transformation.

The rank of T, denoted by  (T ) is defined as :

 (T )  dim R (T )

The nullity of T, denoted by  (T ) is defined as:

 (T )  dim N (T ) .

7.5.9 Theorem: Let T : U  V be a linear transformation and U be finite dimensional. Then


 (T )   (T )  dim U .

Proof: Let dimU  n , Since N(T) is a subspace of U and U is finite dimensional, it follows that N(T)
is finite dimensional. Let dim N (T ) be = k and let A  1 ,  2 ,...,  k  be a basis of N(T). Since A

is a L.I. subset of U, A can be extended to form a basis of U. Let B  1 ,  2 ,...,  k ,  k 1 ,.....,  n  be

a basis of U. Let S  T ( k 1 ), T ( k  2 ),..., T ( n ) . We now show that S is a basis of R(T).

a) S is LI: Let ak 1T ( k 1 )  ak  2T ( k  2 )  .....  anT ( n )  0

 T (ak 1 k 1  ak  2 k  2  .....  an n )  0

 ak 1 k 1  ak 2 k  2  .....  an n  N (T )

 a1 , a2 ,...,  k  F  ak 1 k 1  ak  2 k 2 .....  an n  a11  ....  akk

 a11  a2 2 ......  ak k   ak 1 k 1  ....  an n  0

  a1  0,  a2  0,..., ak  0; ak 1  0, ak 2  0,....an  0 , since B is LI.

 ak 1  0,......., an  0  S is L.I.

b) L ( S )  R (T ) : Clearly L ( S )  R (T ) .

Let   R (T )    U  T ( )  

Now   U  a1 , a2 ,...., ak k 1 ,...., an  F    a11  .....  ak k  ...  ak 1 k 1  ....  an n

   T ( )  T (a11  a2 2  .....  ak k  ak 1 k 1  ak 2 k 2  ....  an n )

 a1T (1 )  a2T ( 2 )  ...  ak T ( k )  ak 1T ( k 1 )  ak 2T ( k 2 )  ....  anT ( n )

 a1.0  a2 .0  ...  ak .0  ak 1T ( k 1 )  ak 2T ( k 2 )  ....  anT ( n )

 ak 1T ( k 1 )  ak  2T ( k  2 )  ....  anT ( n )  L( S )

 R (T )  L ( S ).  L ( S )  R (T )  S is a basis of R(T)

  (T )  Number of elements of S  n  k  n   (T ) .

  (T )   (T )  n  dim U ; Rank T + nullity T = dimU . Hence the theorem.


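Theorem 7.5.9 can be checked numerically for any matrix map L_A : ℝⁿ → ℝᵐ. A minimal sketch (numpy; the random 4×6 matrix is my own choice): the nullity is counted independently via the SVD and added to the rank.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))          # T = L_A : R^6 -> R^4, so dim U = 6

rank = np.linalg.matrix_rank(A)          # rho(T) = dim R(T)

# count null-space directions independently, via the SVD
_, s, Vt = np.linalg.svd(A)
nullity = sum(1 for v in Vt if np.allclose(A @ v, 0.0, atol=1e-8))  # nu(T)

print(rank, nullity, rank + nullity)     # rank + nullity = 6 = dim U
```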
Solved Problems:

7.5.10 Problem: Let T :  3   2 be defined by T (a1 , a2 , a3 )  (a1  a2 , 2a3 ) for all ( a1 , a2 , a3 )   3 .


Find the rank T, nullity T and verify  (T )   (T )  n  dim U .

Solution: We have N (T )  (a1 , a2 , a3 ) T (a1 , a2 , a3 )  (0, 0)



 = {(a1, a2, a3) | (a1 + a2, 2a3) = (0, 0)}

 = {(a1, a2, a3) | a1 = −a2, a3 = 0}

 = {(a, −a, 0) | a ∈ ℝ}

 = span of {(1, −1, 0)}

∴ ν(T) = dim N(T) = 1

R (T )  T ( a1 , a2 , a3 ) ( a1 , a2 , a3 )   3 

=  ( a1  a2 , 2a3 ) a1 , a2 , a3  

R(T) = Span of (T (e1 ), T (e2 ), T (e3 ) , where e1 , e2 , e3  is the standard basis of 3 .

= Span of (1, 0), (1, 0), (0, 2)

= Span of (1, 0), (0,1)

= 2

  (T )  dim R (T )  2

Now ρ(T) + ν(T) = 2 + 1 = 3 = dim ℝ³.

7.5.11 Problem: Find the null space, range, rank and nullity of the linear transformation T :  2   3
defined by T ( x, y )  ( x  y , x  y , y ) for all ( x, y )   2 .

 
Solution: N (T )  ( x, y ) T ( x, y )  0  ( x, y ) x  y, x  y , y )  (0, 0, 0

 ( x, y ) x  y  0, x  y  0, y  0  (0, 0)

 Nullity T is = 0.

R (T )  T ( x, y ) ( x, y )   2   ( x  y , x  y , y x, y  

Since rank T + nullity T = dim  2  2 , it follows that rank T = 2.

Also we note that R(T) is spanned by T (1, 0), T (0,1)  (1,1, 0), (1, 1,1) .

7.5.12 SAQ: Let T1 , T2 be two operators on  2 defined by T1 ( x, y )  ( y, x), T2 ( x, y )  ( x, 0) . Show


that T1T2  T2T1 .

7.5.13 SAQ : If T is a linear operator on  2 defined by T(x, y)  (x  y, y) then find T 2 ( x, y ) .

7.5.14 SAQ: Give an example of two linear operators T and S on ℝ² such that TS = O but ST ≠ O.
7.6 Answers to SAQ’s:
7.4.6 SAQ: T :  2   2 . T (a1 , a2 )  (2a1  a2 , a1 )  ( a1 , a2 )   2

Let   (1,2 ),   ( 1, 2 ) 2 and a, b   .

T (a  b )  T (a1  b1 , a 2  b 2 )

 (2(a1  b 1 )  a 2  b  2 , a1  b 1 )

 a(21   2 , 1 )  b(2 1   2 , 1 )

 aT ( )  bT (  )   ,    2 and  a, b  
So T is a linear transformation.

7.4.14 SAQ: T :  2   2 . T ( a1 , a 2 )  ( a1 ,  a 2 )  ( a1 , a 2 )   2

Let   (1,2 ),   ( 1, 2 ) 2 and a , b   .

T (a  b )  T (a1  b1 , a 2  b 2 )

 ( a 1  b  1 ,  a 2  b  2 )

 a ( 1 ,  2 )  b (  1 ,   2 )

 aT ( )  bT (  )   ,    2 and  a, b  

 T is a linear transformation.

7.4.15 SAQ : T :  2   2 . T ( a1 , a 2 )  ( a1 , 0 )  ( a1 , a 2 )   2

Let   (1,2 ),   ( 1, 2 ) 2 and a , b   .

T (a  b )  T (a1  b1 , a2  b2 )


 (a1  b 1 , 0)  a(1 , 0)  b( 1 , 0)

 aT ( )  bT (  )  a, b  and  ,  2
 T is a linear transformation.

7.5.12 SAQ: T1 , T2 :  2   2 , T1 ( x, y )  ( y, x)

T2 ( x, y )  ( x, 0)( x, y )   2

T1T2(x, y) = T1(T2(x, y)) = T1(x, 0) = (0, x)

T2T1(x, y) = T2(T1(x, y)) = T2(y, x) = (y, 0)

T1T2(x, y) ≠ T2T1(x, y), if x ≠ 0 or y ≠ 0.

∴ T1T2 ≠ T2T1

7.5.13 SAQ: T :  2   2 , T ( x, y )  ( x  y, y )

T 2 ( x , y )  T  T ( x, y )   T ( x  y , y )

 ( x  2 y, y )( x, y )   2

7.5.14 SAQ: Give an example to show that TS = O but ST ≠ O.

Define T : ℝ² → ℝ² by T(x, y) = (x, 0) ∀ (x, y) ∈ ℝ²

and S : ℝ² → ℝ² by S(x, y) = (0, x) ∀ (x, y) ∈ ℝ².

Then TS(x, y) = T(S(x, y)) = T(0, x) = (0, 0) ∀ (x, y) ∈ ℝ²

and ST(x, y) = S(T(x, y)) = S(x, 0) = (0, x).

Here TS(x, y) = (0, 0) for all (x, y), but ST(x, y) ≠ (0, 0) when x ≠ 0. ∴ TS = O and ST ≠ O.

7.7 Summary: Linear transformation and properties, null space, range, rank, nullity are
discussed.

7.8 Technical Terms:Linear transformation, range, null space, rank, nullity.


7.9 Exercise
7.9.1 Let V  F nn ; M  V . Define T : V  V by T ( A)  AM  MA  A  V . Show that T is linear..

7.9.2 Give an example of a linear operator T on ℝ³ such that T ≠ O, T² ≠ O but T³ = O.

 1 1
7.9.3 Let V  F 22 . Let P     V . Define T : V  V by T ( A)  PA  A  V Find nullity T
 2 2 

7.9.4 Let T :  4   3 be defined by T (e1 )  (1,1,1)

T (e2 )  (1,1,1), T (e3 )  (1, 0, 0), T (e4 )  (1, 0,1)

Verify that  (T )   (T )  dim  4 .

7.10 Answers to Exercises:

7.9.2 T :  3  3 , T (a, b, c)  (0, a, b)  (a, b, c)  3


7.9.3 2
7.11 Model Examination Questions:

7.11.1 Define a linear transformation. Prove that T (0)  0, T ( )  T ( )

7.11.2 Define range and null space of a linear transformation.


Show that range and null space are subspaces.

7.11.3 Define rank and nullity. Prove that  (T )   (T )  dim U

7.11.4 Show that L (U , V ) is a vector space.

7.11.5 Show that dim L (U , V )  mn .

7.12. Reference Books:


1. Stephen H: Fried berg and others - Linear Algebra - Prentice Hall India Pvt. Ltd., New Delhi.
2. K. Hoffman and Kunze - Linear Algebra.
2nd Edition - Prentice Hall, New Jersey - 1971.

- A. Satyanarayana Murty
Rings and Linear Algebra 8.1 Matrix Representation of a Linear....

LESSON - 8

MATRIX REPRESENTATION OF A LINEAR


TRANSFORMATION
8.1 Objective of the Lesson:
In Chapter 7, we discussed linear transformation from an n-dimensional vector space to an
m-dimenstional vector space. In this chapter we represent such a linear transformation as an
m x n matrix and study some properties and discuss some problems.

8.2. Structure of the Lesson:


This lesson has the following components.

8.3 Introduction

8.4 Matrix representation

8.5 Composition of linear transformation and matrix multiplication

8.6 Invertibility and isomorphism

8.7 Answers to SAQ’s

8.8 Summary

8.9 Technical Terms

8.10 Exercises

8.11 Answers to Exercises

8.12 Model Examination Questions

8.13 References Books

8.3 Introduction:
In this chapter we discuss matrix of a linear transformation from a vector space to a vector
space relative to ordered bases. We also study the matrix of a product of linear transformations
and the invertibility and isomorphism of linear transformations.

8.4 Matrix Representation:


First we define an ordered basis of an n-dimensional vector space V(F) as a basis for V
endowed with a specific order.

For example, in F³ the standard basis B = {e1, e2, e3} is considered as an ordered basis.

Also B¹ = {e2, e3, e1} is an ordered basis of F³, but B¹ ≠ B as ordered bases. For Fⁿ, {e1, e2, …, en}

is considered as the standard ordered basis, where e1 = (1, 0, 0, …, 0), e2 = (0, 1, 0, …, 0) etc.

8.4.1 Definition: If B = {α1, α2, …, αn} is an ordered basis of an n-dimensional vector space V(F)

and α ∈ V, then α can be uniquely expressed as α = a1α1 + a2α2 + … + anαn, where ai ∈ F.

We define the coordinate vector of α relative to the ordered basis B as the column vector

[α]B = (a1, a2, …, an)ᵀ.

8.4.2 Example: For V3 () , B  e1 , e2 , e3  is the standard ordered basis; where
e1  (1,0,0), e2  (0,1,0), e3  (0,0,1)

4
If   (4, 2,3) , then  B   2  , since
 3 

  (4, 2, 3)  4(1, 0, 0)  2(0,1, 0)  3(0, 0,1)

8.4.3 Example: For V3 () , we know that B1  1 , 2 ,3 is on ordered basis where

1 
1  (1, 0, 0), 2  (1,1, 0), 3  (1,1,1) , so   (4,3, 2)   B1  1  , since
 2 

(4, 3, 2)  1(1, 0, 0)  1(1,1, 0)  2(1,1,1)

8.4.4 Definition: Let U ( F ) and V ( F ) be vector spaces and dim U  n, dim V  m . Let
B1  1,2 ,...,n be an ordered basis of U and let B2 1, 2,..., m be an ordered basis of V..

Let T : U  V be a linear transformation.



Since T (1 ), T ( 2 ),...., T ( n )  R (T )  V , these vectors can be uniquely expressed as lin-


ear combinations of elements of B2 .

For each j ,1  j  n,  unique aij  F  T ( j )   aij  i


i 1

The matrix A   aij  mn is called the matrix of T relative to the ordered bases B1 , B2 and we

write T ; B1 , B2   A .

If U = V and B1  B2 , we write A  T ; B1 , B1   T B1 .

8.4.5 Note: T ( j )  B2  j th column of A.

8.4.6 Example: If I and O are the identity and zero operators an n-dimensional vector space V
and B  1,2 ,...,n is an ordered basis of V, then  I B  I nn and O B  Onn .

since I ( j )   j  o 1  o 2  ...  1 j  ...  o n ,

O ( j )  0  o 1  o 2  ...  o n for each j  1, 2,...., n .
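Concretely, the matrix of definition 8.4.4 can be computed column by column: the j-th column of [T; B1, B2] is the coordinate vector [T(αj)]B2. The following sketch uses numpy with a map and bases of my own choosing (not from the text).

```python
import numpy as np

def T(x):  # T : R^3 -> R^2, T(a1, a2, a3) = (a1 + a2, 2a3)
    return np.array([x[0] + x[1], 2.0 * x[2]])

B1 = [np.array([1.0, 0, 0]), np.array([1.0, 1, 0]), np.array([1.0, 1, 1])]  # basis of R^3
B2 = [np.array([1.0, 0]), np.array([1.0, 1])]                               # basis of R^2

P2 = np.column_stack(B2)

def coords_B2(v):        # [v]_{B2}: solve P2 c = v
    return np.linalg.solve(P2, v)

# j-th column of [T; B1, B2] is [T(alpha_j)]_{B2}
M = np.column_stack([coords_B2(T(a)) for a in B1])
print(M)   # [[1. 2. 0.], [0. 0. 2.]]
```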

8.4.7 Theorem: Let U(F), V(F) be vector spaces, dim U  n, dim V  m and let B1 , B2 be ordered
bases of U and V respectively. If T1 , T2  L(U , V ) , then T1  T2 ; B1 , B2   T1 ; B1 , B2   T2 ; B1 , B2 

and  aT1; B1 , B2   a T1 ; B1 , B2  , where a  F .

Proof: Let B1  1,2 ,...,n , B2 1, 2,..., m

Let T1 ; B1 , B2    aij  mn and T2 ; B1 , B2   bij  mn

n m
 T1 ( j )   aij  i , T2 ( j )   bij  i , for j = 1,2,.....n.
i 1 i 1

m m m

Now (T1  T2 )( j )  T1 ( j )  T2 ( j )   aij  i   bij  i   ( aij  bij )  i


i 1 i 1 i 1

T1  T2 ; B1, B2   aij  bij   aij   bij   T1 ; B1 , B2   T2 ; B1 , B2 


mn mn mn
m m

Let a  F N.ow ( aT1 )( j )  aT1 ( j )  a  aij  i   aaij  i ,  j  1, 2,....n


i 1 i 1

  aT1 ; B1 , B 2    aa ij   a  a ij   a T1 ; B1 , B2 
mn mn

8.4.8 Note: We defined the matrix of a linear transformation T : U  V w.r.t. the ordered bases
B1  1 , 2 ,....., n  , B2   1 ,  2 ,.....,  n  of U and V respectively as A   aij  mn , where

m
T ( j )   aij  i , for j  1, 2,....n
i 1

So, for each linear transformation from an n-dimensional vector space to an m-dimensional vector

space,  an m x n matrix A   aij  which is = T , B1 , B2 

Now if U, V are vector spaces, B1  1 ,  2 ,.....,  n  , B2   1 , 2 ,....., m  are ordered bases

of U, V respectively and A   aij  mn , then  a linear transformation T : U  V  T , B1 , B2   A

If we define T ( j )   aij  i , j  1, 2,....n , then T is a uniquely defermined linear transfor-


i 1

mation and T , B1 , B2   A (If we write a 


i 1
ij i   j , j  1, 2,....., n , then  1 ,....,  n are n vectors

in V and  a unique linear transformation T : U  V  T ( j )   j ,1  j  n . This result was


proved in chapter 7 - see theorem 7.4.18).

8.4.9 Theorem: Let A  F mn . Define T : F n1  F m1 by T ( X )  A( X ) , for all X  F n1 . Then T
is a linear transformation. If B1 , B2 are standard ordered bases of F n1 and F m1 respectively, then

T , B1 , B2   A .
Proof: We proved that T is a linear transformation. (See theorem 7.4.13).

Let B1  e1 , e2 ,.....en  , B2   f1 , f 2 ,....., f m  be standard ordered bases of F n1 , F m1 respectively..

( ei  F n1 ; i th component ei  1 and other components = 0).



T(ej) = Aej = the j-th column of A = (a1j, a2j, …, amj)ᵀ

 = a1j f1 + a2j f2 + … + amj fm = Σᵐᵢ₌₁ aij fi

∴ [T; B1, B2] = [aij] = A.

8.4.10 Note: T is called the left multiplication transformation and is denoted by LA .

8.4.11 Definition: Let A be an m  n matrix over F. The linear transformation L A : F n 1  F m 1


by LA ( X )  A( X )  X  F n (or F n1 ) is called the left multiplication transformation.
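A quick sketch of L_A (numpy; the matrix A is an illustrative choice): left multiplication by a fixed matrix is a linear map between column spaces.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])   # A in F^{3x2}, so L_A : F^{2x1} -> F^{3x1}

def L_A(X):                   # left multiplication: X -> AX
    return A @ X

X = np.array([[1.0], [2.0]])
Y = np.array([[0.0], [-1.0]])

# L_A is linear: L_A(2X + 3Y) = 2 L_A(X) + 3 L_A(Y)
print(L_A(2 * X + 3 * Y))
print(2 * L_A(X) + 3 * L_A(Y))
```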

8.4.12 Theorem: Let U ( F ), V ( F ) be vector spaces, dim U  n, dim V  m and B1 , B2 be or-


dered bases of U and V respectively.

Let T : U  V be a linear transformation and   U . Then T ; B1 , B2  B1  T ( ) B2 , where

 B 1
is the coordinate matrix of  w.r.t. the ordered basis B1.

Proof: Let B1  1 , 2 ,....., n  , B2   1 , 2 ,....., m 

Let A   aij  mn  T ; B1 , B2 

For  j  B1 , we have T ( j )   aij  i


i 1

Since   U ,  unique b1 , b2 ,......bn  F    b 


j 1
j j

 n  n
 T ( )  T   b j j    b jT ( j )
 j 1  j 1

n m m
 n  n m
  b j  aij  j     b j aij  i   ci i where ci   b j aij
j 1 i 1 i 1  j 1  j 1 j 1

 c1 
c 
T ( ) B  2
2 
 
cm 

 b1a11  b2 a12  b3 a13  .....  bn a1n 


 b a  b a  b a  .....  b a 
  1 21 2 22 3 23 n 2n 

  
 
b1am1  b2 am 2  b3 am 3  .....  bn amn 

 a11 a12 a13 ........a1 n   b1 


a a 22 a 23 ........a 2 n  b 
  21  2
 ... .... .............   
  
 a m1 am 2 a m 3 .........a mn   bn 

= T ; B 1 , B 2  B 1

Hence the Theorem.
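A numerical check of theorem 8.4.12 (numpy; the operator and basis are my own illustrative choices): applying the matrix [T]_B to the coordinate vector of α gives the coordinate vector of T(α).

```python
import numpy as np

B = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]   # ordered basis of R^2
P = np.column_stack(B)

def T(x):                                           # T(x, y) = (x + y, y)
    return np.array([x[0] + x[1], x[1]])

def coords(v):                                      # [v]_B: solve P c = v
    return np.linalg.solve(P, v)

T_B = np.column_stack([coords(T(b)) for b in B])    # [T]_B, column j = [T(b_j)]_B

alpha = np.array([2.0, 5.0])
print(T_B @ coords(alpha))   # [T]_B [alpha]_B
print(coords(T(alpha)))      # [T(alpha)]_B -- the same vector
```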

8.4.13 Corollary: If T : V  V is a linear operator, dimV  n , B is an ordered basis of V and


  V , then T ( )B  T B  B

8.5 Composition of linear transformation and matrix multiplication:

We defined the composition of two linear transformation as : TS ( )  T  S ( )    U .

8.5.1 Theorem: Let U ( F ), V ( F ), W ( F ) be finite dimensional vector space and let B1 , B2 , B3 be


ordered bases of U, V, W respectively.

Let T : U  V , S : V  W be linear transformations. then  ST ; B1 , B3    S ; B2 , B3 T ; B1 , B2 



Proof: We know that ST : U  W is defined by ( ST )( )  S T ( )  ,   U is a linear


transformation.

 
Let B1 = {α1, α2, …, αp}, B2 = {β1, β2, …, βn}, B3 = {γ1, γ2, …, γm} be ordered bases of U, V, W respectively.

Let T ; B1 , B2   bkj  n p

 S ; B2 , B3    aik mn
n
 T ( j )   bkj  k , j  1, 2,......, p
k 1

n
S (  k )   aik  i , k  1, 2,......, n
i 1

Now for each j  1, 2,......, p , we have

( ST )( j )  S T ( j ) 

 n  n
=   kj k    bkj S (  k )
S b 
 k 1  k 1

n n
  bkj  aik  i
k 1 i 1

n
 n 
    aik bkj  i
i 1  k 1 

n n
  cij i , where cij   aik bkj
i 1 k 1

 n 
 ST ; B1 , B3    cij     aik bkj    aik mn bkj 
m p n p
 k 1  m p

  S ; B2 , B3 T ; B1 , B2 
8.5.2 Corollary: If T, S are linear operators on an n-dimensional vector space V and B is an
ordered basis of V, then  ST B   S B T B .
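With standard ordered bases, corollary 8.5.2 reduces to matrix multiplication. A sketch (numpy, illustrative matrices of my own choosing):

```python
import numpy as np

T_mat = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 1.0]])   # T : R^3 -> R^2
S_mat = np.array([[1.0, 1.0],
                  [2.0, 0.0],
                  [0.0, 3.0]])        # S : R^2 -> R^3

def T(x): return T_mat @ x
def S(y): return S_mat @ y

ST_mat = S_mat @ T_mat                # [ST] = [S][T]

x = np.array([1.0, -2.0, 4.0])
print(ST_mat @ x)                     # equals S(T(x))
print(S(T(x)))
```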

8.5.3 Theorem: Let T : F n1  F m 1 be a linear transformation. Then A  F mn  T  LA .

Proof: Let B1 , B2 be the standard ordered bases of F n1 and F m1 respectively..

Then  a unique matrix A  F


mn
 T ; B1 , B2   A .

n1
 For every X  F , we have

T ( X ) B 2
 T ; B1 , B 2  X B
1

= A X B 1

T ( X )  AX  LA ( X )  X  F n1

 T  LA

We observe that ;

8.5.4 Theorem: If A, B  F mn , then

1) LA B  LA  LB

2) LaA  aLA , a  F

3) LAB  LA oLB

Proof: LAB ( X )  ( A  B) X  AX  BX  LA ( X )  LB ( X )  X  F n1

 LA B  LA  LB

LaA ( X )  (aA) X  a( AX )  aLA ( X )  X  F n1

 L aA  a L A

LAB ( X )  ( AB) X  A(BX )  LA (BX )  LA  LB ( X )

 LAB ( X )  LAoLB ( X )  X  F n1  LAB  LAoLB .


8.5.5 Theorem: Let U ( F ), V ( F ) be vector spaces, dim U  n dimV  m .

Then L(U , V )  F mn

Proof: We know that L (U , V ) and F mn are vector spaces over F. Let B1  1 ,  2 ,.....,  n  ,

B2  1 , 2 ,....., m  be ordered bases of U and V respectively..

Let T1 , T2  L(U , V ) T1; B1 , B2   A  aij  mn

and T2 ; B1 , B2   B   bij  m n

m m

 For j  1, 2,...., n , we have T1 ( j )   aij  i and T2 ( j )   bij  i


i 1 i 1

Define  : L(U , V )  F mn by

 (T )  T ; B1 , B2   T  L(U ,V )

We show that  is an isomorphism.

a) Let a, b  F , T1 , T2  L(U , V )

 (aT1  bT2 )   aT1  bT2 ; B1 , B2 

  aT1 ;  B1 , B2   bT2 ;  B1 , B2 

 a T1; B1 , B2   b T2 ; B1 , B2 

 a (T1 )  b (T2 )  T1 , T2  L(U , V ) and  a, b  F

 is a linear transformation.

b) Let T1 , T2  L(U , V ) and  (T1 )   (T2 )

 T1 ; B1 , B2   T2 ; B1 , B2 

 A  B   aij   bij   aij  bij  i, j ,1  i  m,1  j  n

m m

 for each  j  B1 , we have T1 ( j )   aij  i   bij  i T2 ( j );   j  B


i 1 i 1

 T1 , T2 agree an a basis B1 of U  T1  T2 on U  is one-one.


m

c) Let A  F m n and A   aij  mn . Write  j   aij  i  j  1, 2,...., n .


i 1

So  1 ,  2 ,..... n are n vectors in V..

1,.....,n is a basis of U and  1 , ..... n are n vectors in V..

 a unique linear transformation T : U  V  T ( j )   j , j  1, 2,....., n .

m
 T ( j )   aij 1 ,  j  1, 2,....., n
i 1

 T ; B1 , B2   A   (T )  A

  is onto   is an isomorphism  L(U , V )  F mn

Hence the theorem.

We observe that dim F mn is = mn  dim L (U , V )  mn . (We proved this independently)

8.6 Invertibility and isomorphism.

We know that a function f : A  B is said to be invertible if there exists a function g : B  A


such that gof  I A and fog  I B and g is called inverse of f and we write g  f 1 . Further,, f
and g are bijections and f ( x)  y  f 1 ( y )  x . Since linear transformations are also functions,
it is natural to expect that the inverse of a linear transformation is also a linear transformation.

8.6.1 Definition: Let U(F) and V(F) be vector spaces and let T : U → V be a bijective mapping.

Then the mapping S : V → U defined by S(β) = α ⇔ T(α) = β, α ∈ U, β ∈ V, is called the inverse
of T and it is denoted by T⁻¹. (If such a mapping S exists, then we say that T is invertible.)

8.6.2 Note: 1. T⁻¹(β) = α ⇔ T(α) = β, α ∈ U, β ∈ V
2. T is invertible iff T is a bijection.

3. T is a bijection iff T 1 is a bijection.

4. If T : U → V, S : V → W are bijections, then ST : U → W is also a bijection and

(ST)⁻¹ = T⁻¹S⁻¹, (T⁻¹)⁻¹ = T.
8.6.3 Theorem: Let U ( F ) , V ( F ) be vector spaces and T : U  V be a bijective linear transfor-
mation. Then T 1 : V  U is also a linear transformation.

Proof: Since T : U  V is a bijection, T 1 : V  U is also a bijection.

To prove that T 1 is a linear transformation, let  ,   V , a, b  F .

Since T : U  V is onto,  ,   V  1 , 1 U  T (1 )   and T ( 1 )   . Since T is a


bijection, we have

T (1 )    T 1 ( )  1 and T ( 1 )    T 1 (  )  1

Now a  b   aT (1 )  bT ( 1 )  T (a1  b 1 )

Since T is a bijection, T 1 ( a  b  )  a1  b 1  aT 1 ( )  bT 1 (  ) and this is true for all

 ,   V and for all a, b  F . Hence T 1 is a linear transformation.

8.6.4 Note: 1) If T : U  V is an isomorphism iff T 1 : V U is an isomorphism.


2) If inverse of T : U  V exists, then T is said to be an ivertible linear transformation
and ToT 1  IV , T 1oT  IU .

8.6.5 Definition: If there is an isomorphism (i.e. one-one onto linear transformation) from a vector
space U(F) to a vector space V(F), then we say that U and V are isomorphic and we write U  V .

8.6.6 Theorem: Let U(F), V(F) be finite dimensional vector spaces. Then U  V  dim U  dim V .

Proof: 1. Suppose U  V

  a one-one onto linear transformation T : U  V

Let dimU  n and S  1 ,  2 ,....., n  be a basis of U.

Let S  T (1 ), T ( 2 ),....., T ( n ) . We show that S 1 is a basis of V..


1

a) S 1 is L. I: Let a1T (1 )  a2T ( 2 )  ......  anT ( n )  0

 T ( a11  a2 2  ......  an n  0  T  0 

 a11  a2 2  ......  an n  0 ( T is one-one)

 a1  0, a2  0,...., an  0 ( S is L1)

 S 1 is L.I.

b) L( S 1 )  V : Let   V . Since T : U  V is onto,  U  T ( )  

Since   U , a1 , a2 ,...., an  F    a11  a2 2  ......  an n

   T ( )  T (a11  a2 2  ....  an n )

 a1T (1 )  a2T ( 2 )  .....  anT ( n )

   L( S 1 ). V  L( S 1 )

Also since L( S 1 )  V , we have L ( S 1 )  V

So, S 1 is a basis of V so that dimV  n . Hence dim U  dim V .

2. Suppose dim U = dim V = n. Let S = {α₁, α₂, ..., αₙ} and S¹ = {β₁, β₂, ..., βₙ} be bases of U and V respectively.

Let α ∈ U. Then ∃ a₁, a₂, ..., aₙ ∈ F such that α = a₁α₁ + a₂α₂ + ... + aₙαₙ.

Define T : U → V by T(α) = T(a₁α₁ + a₂α₂ + ... + aₙαₙ) = a₁β₁ + a₂β₂ + ... + aₙβₙ

for all α = a₁α₁ + a₂α₂ + ... + aₙαₙ ∈ U.

Let a, b ∈ F and α = a₁α₁ + ... + aₙαₙ, β = b₁α₁ + ... + bₙαₙ ∈ U. Then

T(aα + bβ) = T( Σᵢ₌₁ⁿ (aaᵢ + bbᵢ)αᵢ ) = Σᵢ₌₁ⁿ (aaᵢ + bbᵢ)βᵢ = a Σᵢ₌₁ⁿ aᵢβᵢ + b Σᵢ₌₁ⁿ bᵢβᵢ

= aT(α) + bT(β) ∀ α, β ∈ U and ∀ a, b ∈ F.

∴ T is a linear transformation.

T is one-one: Let α, β ∈ U and T(α) = T(β).

α, β ∈ U ⇒ ∃ aᵢ, bᵢ ∈ F such that α = Σᵢ₌₁ⁿ aᵢαᵢ, β = Σᵢ₌₁ⁿ bᵢαᵢ.

Now T(α) = T(β) ⇒ Σᵢ₌₁ⁿ aᵢβᵢ = Σᵢ₌₁ⁿ bᵢβᵢ
Rings and Linear Algebra 8.13 Matrix Representation of a Linear....

⇒ Σᵢ₌₁ⁿ (aᵢ − bᵢ)βᵢ = 0

⇒ aᵢ − bᵢ = 0 ∀ i = 1, 2, ..., n, since S¹ is L.I.

⇒ aᵢ = bᵢ ∀ i = 1, 2, ..., n

⇒ α = Σᵢ₌₁ⁿ aᵢαᵢ = Σᵢ₌₁ⁿ bᵢαᵢ = β ⇒ T is one-one.

T is onto: Let β ∈ V. Then ∃ aᵢ ∈ F such that β = Σᵢ₌₁ⁿ aᵢβᵢ.

Write α = Σᵢ₌₁ⁿ aᵢαᵢ. Then α ∈ U and

T(α) = Σᵢ₌₁ⁿ aᵢβᵢ = β ⇒ T is onto. ∴ T is a bijection.

Hence T is an isomorphism.

∴ U ≅ V.
8.6.7 Corollary: If T : U → V is an isomorphism between finite dimensional vector spaces and B is a basis of U, then T(B) is a basis of V.

8.6.8 Theorem: If dim U(F) = n, then U ≅ Fⁿ.

Proof: Let S = {α₁, α₂, ..., αₙ} be a basis of U.

Let α ∈ U. Then ∃ aᵢ ∈ F such that α = Σᵢ₌₁ⁿ aᵢαᵢ.

Define T : U → Fⁿ by T(α) = (a₁, a₂, ..., aₙ) ∀ α = Σᵢ₌₁ⁿ aᵢαᵢ ∈ U.

We show that T is a linear transformation: Let α, β ∈ U and a, b ∈ F

⇒ ∃ aᵢ, bᵢ ∈ F such that α = Σᵢ₌₁ⁿ aᵢαᵢ, β = Σᵢ₌₁ⁿ bᵢαᵢ.

T(aα + bβ) = T( Σᵢ₌₁ⁿ (aaᵢ + bbᵢ)αᵢ ) = (aa₁ + bb₁, aa₂ + bb₂, ..., aaₙ + bbₙ)

= a(a₁, a₂, ..., aₙ) + b(b₁, b₂, ..., bₙ) = aT(α) + bT(β) ∀ α, β ∈ U and ∀ a, b ∈ F.

∴ T is a linear transformation.

T is one-one: Let α, β ∈ U and T(α) = T(β). Let α = Σᵢ₌₁ⁿ aᵢαᵢ, β = Σᵢ₌₁ⁿ bᵢαᵢ.

T(α) = T(β) ⇒ (a₁, a₂, ..., aₙ) = (b₁, b₂, ..., bₙ) ⇒ aᵢ = bᵢ ∀ i = 1, 2, ..., n

⇒ α = Σᵢ₌₁ⁿ aᵢαᵢ = Σᵢ₌₁ⁿ bᵢαᵢ = β ⇒ T is one-one.

T is onto: Let (a₁, a₂, ..., aₙ) ∈ Fⁿ. Then α = Σᵢ₌₁ⁿ aᵢαᵢ ∈ U and T(α) = (a₁, a₂, ..., aₙ)

⇒ T is onto. ∴ T is a bijection.

∴ T is an isomorphism. Hence U ≅ Fⁿ.

8.6.9 Fundamental Theorem of Homomorphisms:

Theorem: Let U(F) and V(F) be vector spaces and let T : U → V be an onto linear transformation with null space N. Then U/N ≅ V.

Proof: Since N is the null space of T, we have N ⊆ U, and U/N = {N + α : α ∈ U} is the quotient space, with

(N + α) + (N + β) = N + (α + β) and a(N + α) = N + aα, ∀ N + α, N + β ∈ U/N and a ∈ F.

Define φ : U/N → V by

φ(N + α) = T(α) ∀ N + α ∈ U/N.

φ is well-defined: Let N + α, N + α₁ ∈ U/N and N + α = N + α₁

⇒ α − α₁ ∈ N = ker T

⇒ T(α − α₁) = 0 ⇒ T(α) − T(α₁) = 0

⇒ T(α) = T(α₁) ⇒ φ(N + α) = φ(N + α₁)

∴ φ is well-defined.

φ is a linear transformation: Let a, b ∈ F and N + α, N + β ∈ U/N.

φ[a(N + α) + b(N + β)] = φ[(N + aα) + (N + bβ)] = φ(N + aα + bβ)

= T(aα + bβ) = aT(α) + bT(β)

= aφ(N + α) + bφ(N + β) ∀ N + α, N + β ∈ U/N and ∀ a, b ∈ F.

∴ φ is a linear transformation.

φ is one-one: Let N + α, N + β ∈ U/N and φ(N + α) = φ(N + β)

⇒ T(α) = T(β) ⇒ T(α − β) = 0

⇒ α − β ∈ N ⇒ N + α = N + β

∴ φ is one-one.

φ is onto: Let β ∈ V. Since T : U → V is onto, ∃ α ∈ U such that T(α) = β

⇒ β = T(α) = φ(N + α)

∴ ∀ β ∈ V, ∃ N + α ∈ U/N such that φ(N + α) = β

∴ φ is onto.

∴ φ is an isomorphism, and U/N ≅ V.
8.6.10 Note: Even when T is not onto, the above theorem holds if V is replaced by T(U).

8.6.11 Definition: A linear transformation T : U(F) → V(F) is said to be singular if N(T) ≠ {0}, and it is said to be non-singular if N(T) = {0}.

8.6.12 Theorem: Let U(F), V(F) be two vector spaces and T : U → V be a linear transformation. Then T is non-singular iff "S is L.I. ⇒ T(S) is L.I.".

Proof: 1. Suppose T is non-singular and suppose S = {α₁, α₂, ..., αₙ} is L.I.

Let S¹ = T(S) = {T(α₁), T(α₂), ..., T(αₙ)}.

Let a₁, a₂, ..., aₙ ∈ F and a₁T(α₁) + a₂T(α₂) + ... + aₙT(αₙ) = 0

⇒ T(a₁α₁ + a₂α₂ + ... + aₙαₙ) = 0

⇒ a₁α₁ + a₂α₂ + ... + aₙαₙ = 0, since T is non-singular, i.e. N(T) = {0}

⇒ a₁ = 0, a₂ = 0, ..., aₙ = 0 (∵ S is L.I.)

⇒ S¹ = T(S) is L.I.

Conversely, suppose S is L.I. ⇒ T(S) is L.I.

Let α ∈ U and α ≠ 0 ⇒ S = {α} is L.I.

⇒ T(S) = {T(α)} is L.I.

⇒ T(α) ≠ 0

∴ α ∈ U, α ≠ 0 ⇒ T(α) ≠ 0

⇒ T is non-singular.

8.6.13 Theorem: Let U(F), V(F) be vector spaces and T : U → V be a linear transformation. Then T is non-singular iff T is one-one.

Proof: Suppose T is non-singular ⇒ N(T) = {0}.

Suppose α, β ∈ U and T(α) = T(β) ⇒ T(α) − T(β) = 0

⇒ T(α − β) = 0

⇒ α − β = 0 (∵ N(T) = {0})

⇒ α = β ⇒ T is one-one.

Conversely, suppose that T is one-one.

Suppose T(α) = 0 ⇒ T(α) = T(0)

⇒ α = 0, since T is one-one.

⇒ N(T) = {0} ⇒ T is non-singular.

Hence the Theorem.

8.6.14 Note: T : U → V is non-singular iff any one of the following holds:

N(T) = {0} (or) T(α) = 0 ⇒ α = 0 (or) α ≠ 0 ⇒ T(α) ≠ 0.

Solved Problems

8.6.15 Let T : ℝ³ → ℝ² be the linear transformation defined by

T(x, y, z) = (2x + y − z, 3x − 2y + 4z), ∀ (x, y, z) ∈ ℝ³.

Obtain the matrix of T relative to the ordered bases

B₁ = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}, B₂ = {(1, 3), (1, 4)}.

Solution: We have

T(1, 1, 1) = (2, 5)

T(1, 1, 0) = (3, 1)

T(1, 0, 0) = (2, 3)

Let (a, b) = x(1, 3) + y(1, 4) = (x + y, 3x + 4y)

⇒ x + y = a ............ (1)

3x + 4y = b ............ (2)

(1) × 3 : 3x + 3y = 3a

Subtracting this from (2), we get y = b − 3a

⇒ x = a − y = a − (b − 3a) = 4a − b

∴ (a, b) ∈ ℝ² ⇒ (a, b) = (4a − b)(1, 3) + (b − 3a)(1, 4)

T(1, 1, 1) = (2, 5) = (8 − 5)(1, 3) + (5 − 6)(1, 4) = 3(1, 3) + (−1)(1, 4)

T(1, 1, 0) = (3, 1) = 11(1, 3) + (−8)(1, 4)

T(1, 0, 0) = (2, 3) = 5(1, 3) + (−3)(1, 4)

∴ [T; B₁, B₂] = [  3  11   5 ]
               [ -1  -8  -3 ]

8.6.16 SAQ: Let T : ℝ² → ℝ³ be the linear transformation defined by T(x, y) = (x + y, 2x − y, 7y).

Find [T; B₁, B₂], where B₁, B₂ are the standard ordered bases of ℝ² and ℝ³ respectively.

8.6.17 SAQ: Let T : ℝ³ → ℝ² be the linear transformation defined by

T(x, y, z) = (3x + 2y − 4z, x − 5y + 3z).

Find [T; B₁, B₂], where B₁ = {(1, 1, 1), (1, 1, 0), (1, 0, 0)} and B₂ = {(1, 3), (2, 5)} are ordered bases of ℝ³ and ℝ² respectively.

Solved Problem:

8.6.18 If A and B are subspaces of a vector space V over a field F, then show that (A + B)/B ≅ A/(A ∩ B).

Solution: We know that A, B are subspaces ⇒ A + B, A ∩ B are subspaces.

Also B ⊆ A + B ⇒ (A + B)/B is a vector space over F.

Also A ∩ B is a subspace of A ⇒ A/(A ∩ B) is a vector space over F.

Any element of (A + B)/B is of the form B + (α + β), where α ∈ A and β ∈ B, i.e. B + (α + β) = B + α, since β ∈ B ⇒ B + β = B.

A B
So any element of is of the form B   for some   A .
B

A B
Define a mapping T : A  by T ( )  B      A
B

Clearly T is well defined (1   2  B  1  B   2  T (1 )  T ( 2 ) )

Let  ,   A, a, b  F

 T ( a  b  )  B  ( a  b  )  ( B  a )  ( B  b  )

 a ( B   )  b( B   )

 aT ( )  bT (  )

  ,   A and  a, b  F

 T is a linear transformation.

A B
Any element of is of the form B   for some   A
B

A B
B      A  T ( )  B  
B

 T is onto.
 By the Fundamental Theorem of homomorphisms, we have

A A B

ker T B

But ker T    A T ( )  B    A B    B

   A /   B  A  B

A A B A B A
  (or) 
A B B B A B

8.7 Answers to SAQ's:

8.6.16 SAQ: T : ℝ² → ℝ³, T(x, y) = (x + y, 2x − y, 7y).
B₁ = {e₁ = (1, 0), e₂ = (0, 1)}

B₂ = {f₁ = (1, 0, 0), f₂ = (0, 1, 0), f₃ = (0, 0, 1)}

are the standard bases of ℝ² and ℝ³ respectively.

T(e₁) = T(1, 0) = (1, 2, 0) = 1(1, 0, 0) + 2(0, 1, 0) + 0(0, 0, 1)

T(e₂) = T(0, 1) = (1, −1, 7) = 1(1, 0, 0) − 1(0, 1, 0) + 7(0, 0, 1)

∴ [T; B₁, B₂] = [ 1   1 ]
               [ 2  -1 ]
               [ 0   7 ]

8.6.17 SAQ: T : ℝ³ → ℝ², T(x, y, z) = (3x + 2y − 4z, x − 5y + 3z).

B₁ = {α₁ = (1, 1, 1), α₂ = (1, 1, 0), α₃ = (1, 0, 0)}

B₂ = {β₁ = (1, 3), β₂ = (2, 5)} are ordered bases of ℝ³ and ℝ² respectively.

T(α₁) = (1, −1), T(α₂) = (5, −4), T(α₃) = (3, 1)

Let (a, b) = xβ₁ + yβ₂ ⇒ (a, b) = x(1, 3) + y(2, 5)

⇒ x + 2y = a and 3x + 5y = b; from 3x + 6y = 3a and 3x + 5y = b we get y = 3a − b

⇒ x = a − 2y = a − 6a + 2b = 2b − 5a

∴ (a, b) = (2b − 5a)β₁ + (3a − b)β₂

T(α₁) = (1, −1) = −7β₁ + 4β₂

T(α₂) = (5, −4) = −33β₁ + 19β₂

T(α₃) = (3, 1) = −13β₁ + 8β₂

 7 33 13
T , B1 , B2   
 4 19 8 

8.8 Summary:
In this lesson, the matrix of a linear transformation between finite dimensional vector spaces relative to ordered bases is discussed. The invertibility of linear transformations and isomorphisms of vector spaces are also treated, together with several worked problems.

8.9 Technical Terms:


Matrix representation of a linear transformation, isomorphism, invertibility.

8.10 Exercises:
8.10.1 Let T : ℝ² → ℝ² be defined by T(x, y) = (4x − 2y, 2x + y).

Find [T; B], where B = {(1, 1), (1, 0)}.

8.10.2 Let T : ℝ³ → ℝ³ be defined by T(x, y, z) = (2y + z, x − 4y, 3x).

Find [T; B], where B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

8.10.3 If the matrix of T on ℝ² relative to the standard ordered basis is [ 2  3 ], find the matrix of
                                                                         [ 1  1 ]
T relative to the basis {(1, 1), (1, −1)}.

8.10.4 Find T : ℝ³ → ℝ⁴, a linear transformation whose range is spanned by (1, −1, 2, 3) and (2, 3, −1, 0).

8.10.5 Let V be the vector space of 2 x 2 matrices over ℝ.

Let P be the fixed matrix P = [ 1  -1 ] in V, and let T : V → V be defined by
                              [ 2  -2 ]
T(A) = PA, ∀ A ∈ V. Find nullity T.
8.11 Answers to exercises:

8.11.1 [  3  2 ]
       [ -1  2 ]

0 2 1 
1 4 0 
8.11.2  
3 0 0 

1 0 0 
0 0 0 
8.11.3  
1 0 1

8.11.4 T ( x, y, z )  ( x  2 y,  x  3 y, 2 x  y,3 x) ( x, y, z )   3

8.11.5 2
8.12 Model Examination Questions:
8.12.1 Explain the concept of a matrix of a linear transformation.
8.12.2 Define the invertibility of a linear operator.
8.12.3 State and prove the fundamental theorem of homomorphisms of vector spaces.
8.12.4 Find the matrix of the linear transformation

T : ℝ³ → ℝ² defined by T(x, y, z) = (x + y, 2z − x), ∀ (x, y, z) ∈ ℝ³,

relative to the ordered bases B₁ = {(1, 0, −1), (1, 1, 1), (1, 0, 0)} and B₂ = {(0, 1), (1, 0)}.

8.13 Reference Books:


1. Stephen H. Friedberg and others : Linear Algebra, Prentice Hall India Pvt. Ltd., New Delhi.
2. Hoffman and Kunze : Linear Algebra, 2nd Edition, Prentice Hall, New Jersey, 1971.

- A. Satyanarayana Murty

LESSON - 9

MATRICES AND DETERMINANTS


9.1 Objective of the Lesson:
In this chapter, we define elementary operations that are used to obtain simple computational methods for determining the rank of a linear transformation and the solution of a system of linear equations. There are two types of elementary matrix operations - row operations and column operations.

9.2. Structure of the Lesson:


This lesson has the following components.

9.3 Introduction

9.4 Elementary matrix operations and elementary matrices.

9.5 Determinants

9.6 Answers to SAQ’s

9.7 Summary

9.8 Technical terms

9.9 Exercises

9.10 Answers to exercises

9.11 Model Examination Questions.

9.12 Reference Books

9.3 Introduction:
In this chapter, we discuss the elementary operations and elementary matrices. We also
discuss determinants.

9.4 Elementary Matrix Operations and Elementary Matrices:


9.4.1 Definition: Let A be an m x n matrix. Any one of the following three operations on the rows (columns) of A is called an elementary row (column) operation:

1. Rᵢⱼ : interchange of the ith and jth rows.

2. Rᵢ(k) : multiplying every element of the ith row by k (k ≠ 0).

3. Rᵢⱼ(k) : multiplying every element of the jth row by k and adding to the corresponding elements of the ith row.

Similarly we have the column operations Cᵢⱼ, Cᵢ(k), Cᵢⱼ(k).

9.4.2 Elementary Matrix:

Definition: A matrix obtained from a unit matrix Iₙ by subjecting it to any one of the elementary transformations is called an elementary matrix.

Eᵢⱼ : the elementary matrix obtained by interchanging the ith and jth rows in Iₙ.

Eᵢ(k) : the elementary matrix obtained by multiplying every element of the ith row with k in Iₙ.

Eᵢⱼ(k) : the elementary matrix obtained by multiplying every element of the jth row with k and then adding them to the corresponding elements of the ith row in Iₙ.

Similarly E¹ᵢⱼ, E¹ᵢ(k), E¹ᵢⱼ(k) denote the elementary matrices obtained by applying the corresponding elementary column operations on I.

9.4.3 Note: 1. |Eᵢⱼ| = −1 ≠ 0

2. |Eᵢ(k)| = k ≠ 0, since k ≠ 0

3. |Eᵢⱼ(k)| = 1 ≠ 0

4. Every elementary matrix is non-singular and hence it is invertible.
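Elementary matrices are generated mechanically: start from Iₙ and apply the corresponding row operation. A minimal sketch in plain Python (the function names `identity`, `E_swap`, `E_scale`, `E_add` are ours, not from the lesson; rows are indexed from 0, while the lesson indexes from 1):

```python
# Sketch: build the three types of elementary matrices from the identity.

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def E_swap(n, i, j):            # E_ij : interchange rows i and j of I_n
    M = identity(n)
    M[i], M[j] = M[j], M[i]
    return M

def E_scale(n, i, k):           # E_i(k) : multiply row i of I_n by k (k != 0)
    M = identity(n)
    M[i] = [k * x for x in M[i]]
    return M

def E_add(n, i, j, k):          # E_ij(k) : add k times row j to row i of I_n
    M = identity(n)
    M[i] = [a + k * b for a, b in zip(M[i], M[j])]
    return M
```

For example, `E_add(4, 2, 3, -1)` is the lesson's E₃₄(−1) for I₄ (0-based row indices 2 and 3 correspond to rows 3 and 4).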


9.4.4 Theorem: Every elementary row (column) transformation of a matrix can be obtained by pre-multiplication (post-multiplication) with the corresponding elementary matrix.

Proof: First we prove that every elementary row (column) operation on a product C = AB can be effected by subjecting the prefactor A (post-factor B) to the same row (column) operation.

Let A be an m x n matrix and B an n x p matrix. Then AB is of order m x p.

We can write

A = [ R₁ ]          B = [ C₁  C₂  ...  Cₚ ],
    [ R₂ ]
    [ ⋮  ]
    [ Rₘ ]

where R₁, R₂, ..., Rₘ are the rows of A (each of order 1 x n) and C₁, C₂, ..., Cₚ are the columns of B (each of order n x 1).

∴ AB = [ R₁C₁  R₁C₂  ...  R₁Cₚ ]
       [ R₂C₁  R₂C₂  ...  R₂Cₚ ]
       [  ⋮     ⋮           ⋮  ]
       [ RₘC₁  RₘC₂  ...  RₘCₚ ]

This shows that if the rows R₁, R₂, ..., Rₘ of A are subjected to an elementary row operation, then the rows of AB are subjected to the same elementary row operation.

Similarly, if the columns C₁, C₂, ..., Cₚ of B are subjected to an elementary column operation, then the columns of AB are subjected to the same elementary column operation.

Now, to prove the theorem, let A be an m x n matrix and Iₘ the unit matrix, so that A = IₘA.

Let R be any elementary row operation applied to A.

Then R(A) = R(IₘA) = (R(Iₘ))A = EA, where E is the elementary matrix corresponding to the row operation R.

Again, let Iₙ be the unit matrix, so that A = AIₙ.

Let C be any elementary column operation applied to A.

Then C(A) = C(AIₙ) = A(C(Iₙ)) = AE¹, where E¹ is the elementary matrix corresponding to the column operation C.
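Theorem 9.4.4 is easy to spot-check numerically: applying a row operation to A gives the same matrix as pre-multiplying A by the elementary matrix obtained from I by that operation. A minimal check in plain Python (0-based indices; helper names `matmul` and `row_add` are ours):

```python
# Verify: (elementary matrix from I) * A == (same row operation applied to A).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def row_add(M, i, j, k):        # R_ij(k): row i += k * row j (returns a copy)
    M = [row[:] for row in M]
    M[i] = [a + k * b for a, b in zip(M[i], M[j])]
    return M

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A = [[1, 2, 1], [-1, 0, 2], [2, 1, -3]]

E = row_add(I3, 1, 0, 1)            # the lesson's E_21(1)
assert matmul(E, A) == row_add(A, 1, 0, 1)
```

The analogous check with post-multiplication verifies the column-operation half of the theorem.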

1 1
9.4.5 Theorem: Eij 1  Eij ,  Ei (k )   Ei   , k  0,  Eij (k )   Eij (k ) if c  0 .
1

k

Proof: a) Eij is the E-matrix (elementary marix) obtained from I n by applying Rij . If we again

apply Rij on Eij , we get I

 E ij E ij  I  Eij is invertible and  Eij   Eij .


1

b) Ei (k ) is the E - matrix obtained from I by applying Ri ( k ) . If we again apply Ri 


1
 on
k
Ei , we get I .

1 1 1
 Ei   Ei (k )  I  Ei (k ) is invertible and  E ij ( k )   E i  
k k

c) Eij  k  is the E - matrix obtained from I by applying Rij  k  . If we again apply Rij  k  ,
we get I

1
 Eij (  k ) Eij ( k )  I  Eij ( k ) is invertible and  E ij ( k )   E ij (  k )

Similarly, we can show that

1
E  1 1 1
ij  Eij1 ,  Ei1 (k )   Ei1   , k  0
k
1
 Eij1 (k )   Eij1 ( k )

We note that every E - matrix is nonsingular and the inverse of an E - matrix is also non-
singular.
9.4.6 Definition: A matrix A is said to be equivalent to B, written A ~ B, if B can be obtained from A by a finite number of E-operations.

In the set M of all m x n matrices, ~ is an equivalence relation, since:

a) A ~ A, so ~ is reflexive.

b) A ~ B ⇒ B ~ A, so ~ is symmetric.

c) A ~ B, B ~ C ⇒ A ~ C, so ~ is transitive.

9.4.7 Definition: A matrix A is said to be row (column) equivalent to B, denoted by A ~R B (A ~C B), if it is possible to obtain B from A by a finite number of E-row (column) operations.
Solved Problems:

9.4.8 Compute the matrices E₂₃, E₃₄(−1), E₂(−2), E₁₂ for I₄.

Solution: Applying R₂₃ to I₄:

E₂₃ = [ 1  0  0  0 ]
      [ 0  0  1  0 ]
      [ 0  1  0  0 ]
      [ 0  0  0  1 ]

1 0 0 0
R34 (  1) 
0 1 0 0 
I4    E 34 (  1)
0 0 1  1
 
0 0 0 1 

1 0 0 0
R2 (  2 ) 0 2 0 0 
I4  
0 0 1 0
 E2 (  2)
 
0 0 0 1

0 1 0 0
R1 2 1 0 0 0 
I4    E12
0 0 1 0
 
0 0 0 1

1 2 1
 1 0 2   I
9.4.9 Show that   3
 2 1 3

 1 2 1  R (1) 1 2 1 
  21  3 
Solution: Let A   1 0 2  R ( 2)  0 2
31
 2 1 3 0 3 5

1 0 2  1 0 2 
R12 ( 1)
  R32 ( 1)  
 0 1
R2 ( 1 ) 
2
 3
2

  

0 1 3
2


R3 ( 1 )  5   1 
3 2
 1 0 0
3   6

1 0 2  1 0 0 
R3 (6)
  R13 (2)
 
  0 1 3 2   0 1 0   I 3
  R23 (  3 2 ) 0 0 1 
 0 0 1   

 A  I3

1 2 3
1 0 1
9.4.10 Express   as a product of E - matrices.
1 1 1

   
1 2 3 R (1) 1 2 3  R2 ( 1 ) 1 2 3  R ( 2) 1 0 1  R (3)
  21   2  12   3
Solution: Let A  1 0 1  0 2 2  0 1 1   0 1 1  
1 1 1 31 0 3 2 3 3  2  32  1
R ( 1) R ( 1 ) R ( 1)

0 0  0 0  
 3  3

1 0 1  C (  1)  1 0 0
0 1 1    0
31

1 0   I 3
 C (  1)
 0 0 1  32  0 0 1 

R2 (  1 )


2

 A I3
R 3 (  13 )
R1 2 (  2 ), R 3 2 (  1 ) , R 3 (  3 )
C 3 1 (  1 ), C 3 2 (  1 )

 I3  E3 (3) E32 (1) E12 (2) E3 (13 ) E2 ( 12 ) E31 (1) E21 (1) AE32
1
(1) E31
1
(1)
1 1 1
A E21(1)  E31(1)  E2(12) E3( 13)  E12 (2) E32 (1)  E3(3) I3 E321 (1) E311 (1)
1 1 1 1 1 1

 E 21 (1) E 31 (1) E 2 (2) E 3 (3) E12 (2) E 32 (1) E 3 (  13 ) E 32


1 1
(1) E 31 (1)

1 3 3 
9.4.11 Express A  1 4 3  as a product of E - matrices.
1 3 4 

1 3 3 C ( 3) 1 0 0 
R21 ( 1)
Solution: A   0 1 0   0 1 0   I 3
21

R31 ( 1)
  C ( 3)  
0 0 1  31 0 0 1 
Rings and Linear Algebra 9.7 Matrices and Determinants

∴ I₃ = E₃₁(−1) E₂₁(−1) A E¹₂₁(−3) E¹₃₁(−3)

⇒ A = [E₂₁(−1)]⁻¹ [E₃₁(−1)]⁻¹ I₃ [E¹₃₁(−3)]⁻¹ [E¹₂₁(−3)]⁻¹

⇒ A = E₂₁(1) E₃₁(1) E¹₃₁(3) E¹₂₁(3)

∴ A is a product of E-matrices.


9.5 Determinants:
Determinants of matrices of order 2 x 2 and 3 x 3 were studied at the Intermediate level. Here we define the determinant of a square matrix of order n x n and discuss its properties, although we mostly deal with determinants of order 2 x 2 and 3 x 3.

9.5.1 Definition: Let A = [ a  b ] be a 2 x 2 matrix over F.
                          [ c  d ]

We define the determinant of A, denoted by det A or |A|, as the scalar ad − bc.

If A = [ a  b ], we define the adjoint of A as adj A = [  d  -b ].
       [ c  d ]                                        [ -c   a ]

9.5.2 Note: We observe that det(A + B) ≠ det A + det B in general. For instance, if

A = [ 1  2 ],  B = [ 3  2 ],  then A + B = [ 4  4 ]
    [ 3  4 ]       [ 6  4 ]                [ 9  8 ]

and det A = −2, det B = 0, det(A + B) = −4 ≠ det A + det B.

9.5.3 Note: In particular, the function det : F^(2×2) → F is not a linear transformation.

9.5.4 Theorem: The function det : F^(2×2) → F is a linear function of each row of a 2 x 2 matrix when the second row is fixed; i.e. if u, v, w ∈ F² and k ∈ F, then

det [ u + kv ] = det [ u ] + k det [ v ]
    [   w    ]       [ w ]         [ w ]

Proof: Let u = (a₁, b₁), v = (a₂, b₂), w = (a₃, b₃) ∈ F² and k ∈ F. Then

det [ u ] + k det [ v ] = det [ a₁  b₁ ] + k det [ a₂  b₂ ]
    [ w ]         [ w ]       [ a₃  b₃ ]         [ a₃  b₃ ]

= (a₁b₃ − a₃b₁) + k(a₂b₃ − a₃b₂) = (a₁ + ka₂)b₃ − (b₁ + kb₂)a₃

= det [ a₁ + ka₂   b₁ + kb₂ ] = det [ u + kv ]
      [    a₃         b₃    ]       [   w    ]

Similarly we can show that det [   w    ] = det [ w ] + k det [ w ].
                               [ u + kv ]       [ u ]         [ v ]

9.5.5 Theorem: Let A ∈ F^(2×2). Then det A ≠ 0 ⇔ A is invertible.

(Recall that A is invertible iff ∃ B such that AB = BA = I.)

If A = [ a₁₁  a₁₂ ], then A⁻¹ = (1/det A) [  a₂₂  -a₁₂ ]
       [ a₂₁  a₂₂ ]                       [ -a₂₁   a₁₁ ]

Proof: Suppose det A ≠ 0. Let C = (1/det A) [  a₂₂  -a₁₂ ]
                                            [ -a₂₁   a₁₁ ]

∴ AC = [ a₁₁  a₁₂ ] · (1/det A) [  a₂₂  -a₁₂ ]
       [ a₂₁  a₂₂ ]             [ -a₂₁   a₁₁ ]

= (1/det A) [ a₁₁a₂₂ − a₁₂a₂₁    −a₁₁a₁₂ + a₁₂a₁₁ ]
            [ a₂₁a₂₂ − a₂₂a₂₁    −a₂₁a₁₂ + a₂₂a₁₁ ]

= (1/det A) [ det A     0   ]  =  [ 1  0 ]  =  I
            [   0     det A ]     [ 0  1 ]

Similarly we can show that CA = I. So A is invertible and A⁻¹ = C.

Conversely, suppose A is invertible, so that the rank of A is 2 (the rank of an invertible n x n matrix is n; for the definition of rank, see Lesson 10).

∴ a₁₁ ≠ 0 or a₂₁ ≠ 0. Suppose a₁₁ ≠ 0. Multiply R₁ by −a₂₁/a₁₁ and add to R₂, so that we get:

[ a₁₁          a₁₂         ]
[  0    a₂₂ − a₁₂a₂₁/a₁₁   ]

The rank of this matrix is 2, so that a₂₂ − a₁₂a₂₁/a₁₁ ≠ 0

⇒ a₁₁a₂₂ − a₁₂a₂₁ ≠ 0 ⇒ det A ≠ 0.

Similarly we can prove that det A ≠ 0 when a₂₁ ≠ 0.

So in any case, we have det A ≠ 0. Hence the Theorem.

Now we define determinants of order n.

9.5.6 Definition: Let A ∈ F^(n×n).

If n = 1, then A = [a₁₁], and we define det A = a₁₁.

If n ≥ 2, we define det A recursively as follows:

det A = Σⱼ₌₁ⁿ (−1)^(1+j) a₁ⱼ · det A₁ⱼ.

(This is called the determinant of A and is denoted by |A|.)

Here A₁ⱼ is the (n−1) x (n−1) matrix obtained from A by deleting the first row and the jth column of A.

If we write c₁ⱼ = (−1)^(1+j) det A₁ⱼ, then

det A = a₁₁c₁₁ + a₁₂c₁₂ + ... + a₁ₙc₁ₙ.

In general, cᵢⱼ = (−1)^(i+j) det Aᵢⱼ is called the cofactor of aᵢⱼ.

9.5.7 Note: det A = sum of the products of each entry in R₁ of A with the corresponding cofactor.

9.5.8 Example: Find det A using cofactor expansion along the first row of the matrix

A = [  1   3  -3 ]
    [ -3  -5   2 ]
    [ -4   4  -6 ]

Solution: det A = (−1)^(1+1) a₁₁ det A₁₁ + (−1)^(1+2) a₁₂ det A₁₂ + (−1)^(1+3) a₁₃ det A₁₃

= (−1)² · 1 · (30 − 8) + (−1)³ · 3 · (18 + 8) + (−1)⁴ · (−3) · (−12 − 20)

= 22 − 78 + 96 = 40

9.5.9 Theorem: det Iₙ = 1.

Proof (by induction on n):

If n = 1, then I₁ = [1] ⇒ det I₁ = 1. So the theorem is true when n = 1.

Assume the truth of the theorem for n − 1.

Expanding det Iₙ along the first row, we get det Iₙ = 1 · det Iₙ₋₁ + 0 + 0 + ... + 0 = 1(1) = 1, so the theorem is true for n. By mathematical induction, the theorem is true for all positive integers n. Hence the theorem.

More generally, the determinant of a square matrix can be evaluated by cofactor expansion along any row: if A ∈ F^(n×n), then

det A = Σⱼ₌₁ⁿ (−1)^(i+j) aᵢⱼ det Aᵢⱼ,

where Aᵢⱼ is the (n−1) x (n−1) matrix obtained by deleting Rᵢ and Cⱼ from A.

9.5.10 Theorem: If A ∈ F^(n×n) and B is a matrix obtained from A by interchanging two rows of A, then det B = −det A.

Proof: Let A ∈ F^(n×n) and let a₁, a₂, ..., aₙ be the rows of A, so that

A = [ a₁ ]
    [ a₂ ]
    [ ⋮  ]
    [ aₙ ]

Let B be the matrix obtained from A by interchanging the rth row and the sth row, r < s; thus B has aₛ in row r and aᵣ in row s. Now consider the matrix whose rth and sth rows are both replaced by aᵣ + aₛ.
Rings and Linear Algebra 9.11 Matrices and Determinants

Since that matrix has two equal rows, its determinant is 0. Using the linearity of det in the rth row and then in the sth row (Theorem 9.5.4, extended to n x n matrices), we have

0 = det [   ⋮     ]  =  det [ ⋮  ]  +  det [ ⋮  ]  +  det [ ⋮  ]  +  det [ ⋮  ]
        [ aᵣ + aₛ ]         [ aᵣ ]         [ aᵣ ]         [ aₛ ]         [ aₛ ]
        [   ⋮     ]         [ ⋮  ]         [ ⋮  ]         [ ⋮  ]         [ ⋮  ]
        [ aᵣ + aₛ ]         [ aᵣ ]         [ aₛ ]         [ aᵣ ]         [ aₛ ]
        [   ⋮     ]         [ ⋮  ]         [ ⋮  ]         [ ⋮  ]         [ ⋮  ]

= 0 + det A + det B + 0

∴ det A + det B = 0 ⇒ det B = −det A.

9.5.11 Theorem: If A, B ∈ F^(n×n), then det AB = det A · det B.

Proof: First we prove the theorem when A is an elementary matrix. If A is obtained from I by interchanging two rows of I, then det A = −1.

By Theorem 9.4.4, AB is then the matrix obtained by interchanging the corresponding two rows of B. So, by Theorem 9.5.10,

det AB = −det B = det A · det B.

Similarly we can prove the theorem when A is an elementary matrix of the other types.

If A is an n x n matrix with rank < n, then det A = 0.

Since rank AB ≤ rank A < n, we have det AB = 0.

∴ det AB = 0 = det A · det B.
If rank A = n, then A is invertible and hence it is a product of elementary matrices.

Let A = Eₘ Eₘ₋₁ ... E₂ E₁. Then

det AB = det(Eₘ Eₘ₋₁ ... E₂ E₁ B)

= det Eₘ · det(Eₘ₋₁ ... E₂ E₁ B)

= det Eₘ · det Eₘ₋₁ · det(Eₘ₋₂ ... E₂ E₁ B)

= ...........................................

= det Eₘ · det Eₘ₋₁ · ... · det E₂ · det E₁ · det B

= det(Eₘ Eₘ₋₁ ... E₂ E₁) · det B

= det A · det B.
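Multiplicativity of det is easy to spot-check on integer matrices, where the arithmetic is exact. A sketch in plain Python (our own cofactor-expansion `det` and `matmul`; a numerical check, not a proof):

```python
def det(A):
    # Cofactor expansion along the first row (Definition 9.5.6)
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(n))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 0], [3, -1, 4], [2, 0, 1]]
B = [[2, 1, 1], [0, 3, -2], [1, 0, 5]]
assert det(A) == 9 and det(B) == 25
assert det(matmul(A, B)) == det(A) * det(B)
```

Any pair of square integer matrices of the same size would do; the assertion holds by Theorem 9.5.11.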

9.5.12 Theorem: A matrix A ∈ F^(n×n) is invertible iff det A ≠ 0.

Further, if A is invertible, then det A⁻¹ = 1/det A.

Proof: Suppose A is not invertible ⇒ the rank of A is < n ⇒ det A = 0.

Suppose A is invertible ⇒ A⁻¹ exists and AA⁻¹ = I

⇒ 1 = det I = det AA⁻¹ = det A · det A⁻¹

∴ det A · det A⁻¹ = 1 ⇒ det A ≠ 0 and det A⁻¹ = 1/det A.

9.5.13 Theorem: Let A ∈ F^(n×n) and let B be a matrix obtained by adding a multiple of one row to another row of A. Then det B = det A.

Proof: Suppose B is the n x n matrix obtained from A by adding k times the rth row to the sth row, where r ≠ s.

Let A = [ a₁ ],  B = [ b₁ ] ;  then bᵢ = aᵢ for i ≠ s and bₛ = aₛ + kaᵣ.
        [ ⋮  ]       [ ⋮  ]
        [ aₙ ]       [ bₙ ]

Let C be the matrix obtained from A by replacing aₛ with aᵣ.

∴ By Theorem 9.5.4 (linearity in the sth row), we get

det B = det A + k det C = det A, since det C = 0 (C has two equal rows, namely aᵣ in rows r and s).

9.5.14 Theorem: If A ∈ F^(n×n) and ρ(A) < n, then det A = 0.

Proof: If ρ(A) < n, then the rows a₁, a₂, ..., aₙ of A are linearly dependent. So some row aᵣ is a linear combination of the other rows:

∃ c₁, c₂, ..., cᵣ₋₁, cᵣ₊₁, ..., cₙ such that aᵣ = c₁a₁ + c₂a₂ + ... + cᵣ₋₁aᵣ₋₁ + cᵣ₊₁aᵣ₊₁ + ... + cₙaₙ.

Let B be the matrix obtained from A by adding −cᵢ times row i to row r, for each i ≠ r.

The rth row of B then consists of zeros only, and so det B = 0.

But by Theorem 9.5.13, det B = det A. Hence det A = 0.

9.5.15 Note: If A ∈ F^(n×n), then det kA = kⁿ det A, det(−A) = (−1)ⁿ det A, and det A = 0 if two rows of A are identical.

9.5.16 Theorem: For any A ∈ F^(n×n), det Aᵀ = det A.

Proof: If A is not invertible, then ρ(A) < n and det A = 0.

But we know that ρ(A) = ρ(Aᵀ) ⇒ ρ(Aᵀ) < n

⇒ Aᵀ is not invertible

⇒ det Aᵀ = 0

⇒ det Aᵀ = det A = 0.

Suppose A is invertible. Then A is a product of elementary matrices, say A = Eₘ Eₘ₋₁ ... E₂ E₁.

⇒ det Aᵀ = det(Eₘ Eₘ₋₁ ... E₂ E₁)ᵀ = det(E₁ᵀ E₂ᵀ ... Eₘᵀ)

= det(E₁ᵀ) det(E₂ᵀ) ... det(Eₘᵀ)

= det E₁ · det E₂ · ... · det Eₘ  (∵ det Eᵀ = det E for every elementary matrix E)

= (det Eₘ)(det Eₘ₋₁) ... (det E₂)(det E₁)

= det(Eₘ Eₘ₋₁ ... E₁)

= det A.

Hence the Theorem.
Solved Problem:

2 0 0 1
 0 1 3 3
9.5.17 Evaluate the determinant of the matrix A   
 2 3 5 2 
 
 4 4 4 6 

Solution:

2 0 0 1  2 0 0 1 
R31 (1)  0 1 3 3 R32 (3)  0
 1 3 3 
A 
3 5 3  R42(4)  0

R41 ( 2)  0 0 4 6 
   
 0 4 4 8 0 0 16 20 

2 0 0 1 
R 43 (  4 ) 
0 1 3  3 
    B ( say )
0 0 4 6 
 
0 0 0 4 

B is an upper triangular matrix.

 det A  Product of diagonal elements  2(1)(4)(4)  32

9.5.18 SAQ: If w is a complex cube root of unity, then show that

| 1   w   w² |
| w   w²  1  | = 0
| w²  1   w  |

9.5.19 SAQ: Show that

| 1²  2²  3²  4² |
| 2²  3²  4²  5² | = 0
| 3²  4²  5²  6² |
| 4²  5²  6²  7² |

9.5.20 SAQ: Show that

| b−c  c−a  a−b |
| c−a  a−b  b−c | = 0
| a−b  b−c  c−a |

Solved Problems:

9.5.21 Show that | 1+a  1    1   |
                 | 1    1+b  1   | = abc (1 + 1/a + 1/b + 1/c)
                 | 1    1    1+c |

Solution: Taking a, b, c common from R₁, R₂, R₃ respectively, we get

LHS = abc | 1 + 1/a  1/a      1/a     |
          | 1/b      1 + 1/b  1/b     |
          | 1/c      1/c      1 + 1/c |

Applying R₁₂(1) and R₁₃(1) (so that R₁ becomes R₁ + R₂ + R₃), every entry of R₁ becomes 1 + 1/a + 1/b + 1/c:

LHS = abc (1 + 1/a + 1/b + 1/c) | 1    1        1       |
                                | 1/b  1 + 1/b  1/b     |
                                | 1/c  1/c      1 + 1/c |

Applying C₂₁(−1) and C₃₁(−1):

= abc (1 + 1/a + 1/b + 1/c) | 1    0  0 |
                            | 1/b  1  0 |
                            | 1/c  0  1 |

Expanding along R₁, the last determinant is 1.

∴ the determinant of the given matrix = abc (1 + 1/a + 1/b + 1/c).

bc ca ab a b c


9.5.22 Show that c  a ab bc  2 b c a
ab bc ca c a b

Solution: Applying C12 (1) and C13 (1) , we get

2(a  b  c) c  a a  b
LHS  2(a  b  c) a  b b  c
2(a  b  c) b  c c  a

abc ca ab


 2 abc ab bc
abc bc ca

abc b c
C 21 ( 1)
 2 abc c a
C31 (  1)
abc a b

a  c b c
C12 (1)
 2 a  b c  a
b  c a b

Applying C₁₂(1) and then C₁₃(1):

= 2 | a  -b  -c |
    | b  -c  -a |
    | c  -a  -b |

Applying C₂(−1) and C₃(−1) (the two factors of −1 cancel):

= 2 | a  b  c |
    | b  c  a | = RHS
    | c  a  b |

9.6 Answers to SAQ's:

9.5.18 SAQ: Applying C₁₂(1) and C₁₃(1) (so that C₁ becomes C₁ + C₂ + C₃), we get

| 1+w+w²  w   w² |   | 0  w   w² |
| w+w²+1  w²  1  | = | 0  w²  1  | = 0,
| w²+1+w  1   w  |   | 0  1   w  |

since 1 + w + w² = 0, w being a complex cube root of unity.

9.5.19 SAQ: Applying C₃₂(−1) and C₄₁(−1), we get

| 1²  2²  3²  4² |   | 1²  2²  5·1   5·3  |
| 2²  3²  4²  5² | = | 2²  3²  7·1   7·3  | = 0,
| 3²  4²  5²  6² |   | 3²  4²  9·1   9·3  |
| 4²  5²  6²  7² |   | 4²  5²  11·1  11·3 |

since C₄ = 3C₃ in the resulting determinant.

9.5.20 SAQ: Applying C₁₂(1), we get

| b−a  c−a  a−b |
| c−b  a−b  b−c |
| a−c  b−c  c−a |

Applying C₁₃(1) also (so that C₁ becomes C₁ + C₂ + C₃):

| 0  c−a  a−b |
| 0  a−b  b−c | = 0.
| 0  b−c  c−a |

9.7 Summary:
In this lesson, we discussed elementary transformations and applied the techniques in
determinants.

9.8 Technical Terms:


Elementary row operations, Elementary Matrix, Determinants.

9.9 Exercises:

1. If | x  x²  1+x³ |
      | y  y²  1+y³ | = 0 and x, y, z are all different,
      | z  z²  1+z³ |

show that xyz = −1.

2. Show that | 1  x  x² |
             | 1  y  y² | = (x − y)(y − z)(z − x)
             | 1  z  z² |

3. Show that | 0  c  b |²   | b²+c²  ab     ac    |
             | c  0  a |  = | ab     c²+a²  bc    |
             | b  a  0 |    | ac     bc     a²+b² |

4. Show that | 2bc−a²  c²      b²     |   | a  b  c |²
             | c²      2ca−b²  a²     | = | b  c  a |  = (a³ + b³ + c³ − 3abc)²
             | b²      a²      2ab−c² |   | c  a  b |

5. Find the value of the determinant | 1  2  3 |
                                     | 4  5  6 |
                                     | 7  8  9 |

a  b  2c a b
6. Evaluate c b  c  2a b
c a c  a  2b

9.10 Answers to Exercises:

5. 0

6. 2(a + b + c)³

9.11 Model Examination Questions:

1. Explain the concept of elementary row/column operations.

2. Explain the concept of determinants of order 2 x 2 and n x n.

3. Show that | 1+a  1    1    1   |
             | 1    1+b  1    1   | = abcd (1 + 1/a + 1/b + 1/c + 1/d)
             | 1    1    1+c  1   |
             | 1    1    1    1+d |

4. Show that | a+x  a    a    a   |
             | b    b+y  b    b   | = xyzw (1 + a/x + b/y + c/z + d/w)
             | c    c    c+z  c   |
             | d    d    d    d+w |

9.12 Reference Books:


1. Stephen H. Friedberg and others - Linear Algebra Prentice Hall India Pvt. Ltd - New Delhi.
2. K. Hoffman and Kunze - Linear Algebra - Prentice Hall, New Jersey.

- A. Satyanarayana Murty

LESSON - 10

RANK OF A MATRIX
10.1 Objective of the Lesson:
In this lesson, we define the rank of a matrix and use elementary operations to compute the
rank of a matrix. We also discuss the procedure for computing the inverse of an invertible matrix.

10.2. Structure of the Lesson:


This lesson has the following components.

10.3 Introduction

10.4 Rank of a Matrix

10.5 Matrix Inverses

10.6 Answers to SAQ’s

10.7 Summary

10.8 Technical Terms

10.9 Exercises

10.10 Answers to Exercises

10.11 Model Examination Questions

10.12 Reference Books

10.3 Introduction:
In this lesson, the concept of a rank of the matrix is introduced. We use elementary opera-
tions to compute the rank of a matrix and the rank of a linear transformation. We also introduce a
procedure for computing the inverse of an invertible matrix using elementary transformations.

10.4 Rank of a Matrix:


10.4.1 Definition: Let A ∈ F^(m×n). We define the rank of A as the rank of the linear transformation L_A : Fⁿ → Fᵐ.

We recall the definition of L_A : Fⁿ → Fᵐ as:


LA ( X )  AX ,  X  F n

We observe that many results about the rank of a matrix can be obtained from the corre-
sponding results about linear transformation.

We know that every matrix A is the matrix representation of the linear transformation L A
w.r.t. appropriate standard ordered bases.

i.e. if A  F mn , then  standard bases B1 , B2 of F n and F m respectively

such that A   LA ; B1 , B2  .

We also observe that L_A = L_B ⟺ A = B; L_{A+B} = L_A + L_B; L_{kA} = k L_A

for all k ∈ F and A, B ∈ F^{m×n}; L_{AB} = L_A L_B for A ∈ F^{m×n}, B ∈ F^{n×p}; and L_{Iₙ} = I_{Fⁿ}.

We also know that: if T ∈ L(U, V), where U(F), V(F) are finite dimensional vector spaces and
B₁, B₂ are ordered bases of U and V respectively, then

Rank T = Rank [T; B₁, B₂]

So the problem of finding rank of a linear transformation is reduced to that of finding rank
of a matrix.
Now we prove
10.4.2 Theorem: Let A be an m × n matrix. If P and Q are invertible m × m and n × n matrices
respectively, then

(a) Rank AQ = Rank A, (b) Rank PA = Rank A, (c) Rank PAQ = Rank A.

Proof: We have L_A : Fⁿ → Fᵐ and L_Q : Fⁿ → Fⁿ.

Since Q is non-singular, we observe that L_Q is onto.

(For each y ∈ Fⁿ (codomain), X = Q⁻¹y satisfies L_Q(X) = QQ⁻¹y = y.)

Now R(L_{AQ}) = R(L_A L_Q) = L_A(L_Q(Fⁿ)) = L_A(Fⁿ)  (since L_Q is onto)

= R(L_A)  (Here R(L_A) means the range of L_A)

∴ Rank L_{AQ} = dim R(L_{AQ}) = dim R(L_A) = Rank L_A, i.e. Rank AQ = Rank A.

(b) Similarly we can prove that Rank PA = Rank A:

Rank PA = Rank (PA)ᵀ = Rank AᵀPᵀ = Rank Aᵀ  (by (a))

= Rank A, by Theorem 10.4.17.

(c) Rank PAQ = Rank (PA)Q = Rank PA = Rank A.
10.4.3 Corollary: Elementary row (column) operations on a matrix are rank preserving.
Proof: Suppose B is a matrix obtained from A by an elementary row operation. Then ∃ an elementary
matrix E such that B = EA.

Since E is invertible, we have ρ(B) = ρ(A).

10.4.4 Note: Pre-multiplication (Post multiplication) of a matrix by an elementary matrix and hence
by a finite number of elementary matrices (by a finite number of elementary operations) does not
alter the rank of the matrix.
10.4.5 Theorem: The rank of a matrix is equal to the maximum number of its linearly independent
columns i.e. the rank of a matrix is the dimension of the subspace generated by its columns.

Proof: Let A ∈ F^{m×n}. Then Rank A = Rank L_A = dim R(L_A).

Let B = {e₁, e₂, ..., eₙ} be the standard basis of Fⁿ.

⟹ B spans Fⁿ.

⟹ R(L_A) = Span L_A(B) = Span {L_A(e₁), L_A(e₂), ..., L_A(eₙ)}

But we know that L_A(e_j) = Ae_j = (a₁ a₂ ... a_j ... aₙ) e_j, where e_j is the column vector with 1
in the jth place and 0 elsewhere, and a₁, a₂, ..., aₙ are the columns of A,

= a_j, the jth column of A.



⟹ R(L_A) = Span {a₁, a₂, ..., aₙ}

∴ Rank A = Rank L_A = dim R(L_A)

= dim Span {a₁, a₂, ..., aₙ}

1 0 1
 
10.4.6 SAQ: Find the rank of the matrix A  0 1 1
1 0 1

1 2 1 
10.4.7 SAQ : Find the rank of A  1 0 3 
 
1 1 2 

Reduction to Normal Form:

10.4.8 Theorem: Every non-zero matrix A can be reduced to the form

| I_r  O |
| O    O |

by a finite number of elementary operations, where I_r is the unit matrix of order r and r is the rank of A.

Proof: Let A = [a_ij]_{m×n}, ρ(A) = r, A ≠ O.

Since A ≠ O, ∃ at least one element a_ij = k ≠ 0.

Interchanging R_i with R₁ and C_j with C₁, we obtain a matrix B with leading element k (≠ 0).

Now multiplying R₁ of B by 1/k, we get a matrix C with leading element 1, so that

    | 1    c₁₂   c₁₃ ...... c₁ₙ |
C = | c₂₁  c₂₂   c₂₃ ...... c₂ₙ |
    | ...                       |
    | cₘ₁  cₘ₂   cₘ₃ ...... cₘₙ |

Adding suitable multiples of the column C1 of C to the other columns of C and adding
suitable multiples of the first row to the remaining rows of C, we get a matrix D in which all elements
of R1 and C1 of D, except the leading element (=1) are zeros.

1 0 0  0  1 0  0
0 d 22 d 23  d 2 n   0 
D    
    A1  , where A1 is a (m  1)  ( n  1) matrix.
   
0 dm2 dm3  d mn   0  m n

 I1 O
If A1  O , then A    and is this case, there is nothing to prove.
O O

If A₁ ≠ O, we proceed with A₁ as we did with A.

The elementary operations applied on A₁ do not alter the elements of either the 1st row or the 1st
column of D.

Proceeding like this, we get a matrix

P = | I_p  O |
    | O    O |

Now ρ(P) = p. But P is obtained from A by elementary operations and hence ρ(A) is unaltered.

∴ p = r. Hence A can be reduced to the form

| I_r  O |
| O    O |
 O O
10.4.9 Note: Using elementary operations, a matrix A of rank r can be reduced to one of the forms

I_r ,  [ I_r  O ] ,  | I_r | ,  | I_r  O |
                     | O   |    | O    O |

called its normal form.

10.4.10 Note: 1) To reduce A to normal form, sometimes both row operations and column opera-
tions are to be applied.

2) If ρ(A) = r and A is m × n, then r ≤ m and r ≤ n; i.e. r ≤ min {m, n}.

3) ρ(A) = r means: every (r + 1)th order minor of A is 0 and ∃ an rth order minor of A which is ≠ 0.

10.4.11 Theorem: If A is an m × n matrix of rank r, then ∃ non-singular matrices P and Q such that

PAQ = | I_r  O |
      | O    O |
Proof: Given that ρ(A) = r.

∴ A can be reduced to the normal form

| I_r  O |
| O    O |

For getting this, let the number of row operations used be s and let the number of column
operations used be t. We also know that every elementary row (column) operation on A is equiva-
lent to pre- (post-) multiplication of A by a suitable elementary matrix.

∴ ∃ elementary matrices P₁, P₂, ..., Pₛ and Q₁, Q₂, ..., Qₜ such that

Pₛ Pₛ₋₁ ... P₂ P₁ A Q₁ Q₂ ... Qₜ = | I_r  O |
                                   | O    O |

We know that each elementary matrix is non-singular and the product of non-singular ma-
trices is non-singular.

Let Pₛ Pₛ₋₁ ... P₂ P₁ = P and Q₁ Q₂ ... Qₜ = Q.

∴ P, Q are non-singular.

∴ PAQ = | I_r  O |
        | O    O |

10.4.12 Note: If P and Q are non-singular, then ρ(A) = ρ(PAQ).


10.4.13 Theorem: Every invertible n x n matrix is a product of elementary matrices.

Proof: Let A be an invertible matrix ⟹ A is non-singular ⟹ |A| ≠ 0 ⟹ ρ(A) = n.

 A can be reduced to I n by a finite number of row, column operations.


We know that elementary row (column)operation is equivalent to pre (post) multiplication of
A by a suitable elementary matrix.

∴ ∃ elementary matrices P₁, P₂, ..., Pₛ; Q₁, Q₂, ..., Qₜ such that

Pₛ Pₛ₋₁ ... P₂ P₁ A Q₁ Q₂ ... Qₜ = Iₙ

Since each of the Pᵢ, Qⱼ is non-singular, we have A = P₁⁻¹ P₂⁻¹ ... Pₛ⁻¹ Qₜ⁻¹ ... Q₁⁻¹.

∴ A is a product of elementary matrices.

10.4.14 Note: If |A| ≠ 0, then A can be expressed as a product of elementary matrices in many
ways.

10.4.15 Theorem: If A ∈ F^{m×n} and P is a non-singular matrix for which the products below are
defined, then ρ(PA) = ρ(A) and ρ(AP) = ρ(A).

Proof: Since P is non-singular, ∃ elementary matrices

P₁, P₂, ..., Pₛ such that P = P₁ P₂ ... Pₛ

⟹ PA = P₁ P₂ ... Pₛ A

Pre-multiplication by s elementary matrices is equivalent to s elementary row operations on A.

But elementary operations do not alter the rank.

∴ ρ(PA) = ρ(A)

Similarly we can prove that ρ(AP) = ρ(A).

10.4.16 Theorem: If A ~ B, then ρ(A) = ρ(B).

Proof: If A ~ B, then B is obtained from A by a finite number of elementary operations.

∴ ρ(A) = ρ(B)

10.4.17 Theorem: If A is an m × n matrix, then ρ(Aᵀ) = ρ(A).

Proof: ∃ non-singular matrices P (m × m) and Q (n × n) such that

PAQ = D = | I_r  O | ,  ρ(A) = ρ(D) = r
          | O    O |

⟹ Dᵀ = (PAQ)ᵀ = Qᵀ Aᵀ Pᵀ

Since P and Q are invertible, Pᵀ and Qᵀ are invertible.

∴ ρ(Aᵀ) = ρ(Qᵀ Aᵀ Pᵀ) = ρ(Dᵀ)

Since D is an m × n matrix, Dᵀ is an n × m matrix of the form

Dᵀ = | I_r  O | , so that ρ(Dᵀ) = r.
     | O    O |

∴ ρ(Aᵀ) = ρ(Dᵀ) = r = ρ(A)

Hence the theorem.

10.4.18 Theorem: Let U, V, W be finite dimensional vector spaces and T : U → V, S : V → W
be linear transformations. Let A, B be matrices such that AB is defined. Then

(i) ρ(ST) ≤ ρ(S)

(ii) ρ(AB) ≤ ρ(A)

(iii) ρ(AB) ≤ ρ(B)

(iv) ρ(ST) ≤ ρ(T)

Proof: (i) We have R(ST) = ST(U) = S(T(U)) = S(R(T))

⊆ S(V)  (since R(T) ⊆ V)

= R(S)

∴ ρ(ST) = dim R(ST) ≤ dim R(S) = ρ(S)

∴ ρ(ST) ≤ ρ(S)

(ii) ρ(AB) = ρ(L_{AB}) = rank (L_A L_B) ≤ rank L_A  (by (i)) = rank A

∴ ρ(AB) ≤ ρ(A)

(iii) ρ(AB) = ρ((AB)ᵀ) = ρ(Bᵀ Aᵀ)

≤ ρ(Bᵀ)  (by (ii))

= ρ(B)

∴ ρ(AB) ≤ ρ(B)

(iv) Let B₁, B₂, B₃ be ordered bases of U, V, W respectively.

Let A₁ = [T; B₁, B₂], A₂ = [S; B₂, B₃]. Then A₂A₁ = [ST; B₁, B₃].

∴ Rank ST = Rank A₂A₁ ≤ Rank A₁  (by (iii))

= Rank T

∴ Rank ST ≤ Rank T.

10.4.19 Note: ρ(AB) ≤ min {ρ(A), ρ(B)}, where A, B are matrices of suitable orders, and

ρ(ST) ≤ min {ρ(S), ρ(T)}, where T, S are linear transformations such that ST exists.

10.5 The Inverse of a Matrix:


We have remarked that an n × n matrix is invertible iff its rank is n. Since we know how to
compute the rank of any matrix, we can always test a matrix to determine whether it is invertible.
We now provide a simple technique for computing the inverse of a matrix using elementary row
operations.

10.5.1 Definition: Let A, B be m × n, m × p matrices respectively. By the augmented matrix

[A B] or (A B) or (A | B) we mean the m × (n + p) matrix whose first n columns are the columns
of A and whose last p columns are the columns of B.
10.5.2 Theorem:

If A is an invertible n × n matrix, it is possible to transform the matrix (A Iₙ) into the matrix

(Iₙ A⁻¹) by a finite number of elementary row operations.

Proof: Suppose A is an n × n invertible matrix. Consider the n × 2n augmented matrix C = (A Iₙ).

We have A⁻¹C = (A⁻¹A  A⁻¹Iₙ) = (Iₙ  A⁻¹)  ----------- (1)

We know that A⁻¹ is a product of elementary matrices, say

A⁻¹ = E_p E_{p−1} ... E₁. Then (1) becomes:

E_p E_{p−1} ... E₁ (A Iₙ) = A⁻¹C = (Iₙ  A⁻¹)

Since multiplication on the left by an elementary matrix transforms the matrix by an

elementary row operation, it is possible to transform the matrix (A Iₙ) into the form (Iₙ A⁻¹) by a

finite number of elementary row operations.

10.5.3 Theorem: If A (n × n) is an invertible matrix and if, by a finite number of elementary row
operations, the matrix (A Iₙ) is transformed into a matrix of the form (Iₙ B), then B = A⁻¹.

Proof: Suppose A is an n × n invertible matrix.

Suppose that (A Iₙ) is transformed into (Iₙ B) by a finite number of

elementary row operations. Let E₁, E₂, ..., E_p be the elementary matrices corresponding to these
elementary row operations.

∴ E_p E_{p−1} ... E₁ (A Iₙ) = (Iₙ B)

Let M = E_p E_{p−1} ... E₁

⟹ (MA  M) = M (A Iₙ) = (Iₙ B)

Clearly MA = Iₙ and M = B, and hence we have M = A⁻¹.

∴ B = A⁻¹
10.5.4 Note: We write IA = A.
For each row operation applied to A on the LHS, the same row operation should be applied to I.
This should be continued till we get BA = I. Then B = A⁻¹.
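Theorems 10.5.2 and 10.5.3 translate directly into an algorithm: row-reduce (A | Iₙ) and read A⁻¹ off the right half, reporting failure when no pivot can be found. A hedged Python sketch (the function name `inverse` and the way singularity is signalled are implementation choices, not the text's):

```python
from fractions import Fraction

def inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^(-1)]."""
    n = len(A)
    # build the augmented matrix [A | I]
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c] != 0), None)
        if piv is None:
            return None                       # no pivot in this column: A is singular
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]    # make the pivot 1
        for i in range(n):                    # clear the rest of column c
            if i != c and M[i][c] != 0:
                f = M[i][c]
                M[i] = [M[i][j] - f * M[c][j] for j in range(2 * n)]
    return [row[n:] for row in M]
```

On the matrix of Problem 10.5.12 this reproduces the inverse computed there; on a singular matrix (as in Problem 10.5.13) it returns None.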
Solved Problems:

1 3 1 1 
1 2 1 1  
10.5.5 Find the rank of (a) A    , (b) A  1 0 1 1 
1 1 1 1  0 3 0 0 

Solution: (a) One row is not a multiple of other so that the two rows are L. I.

  ( A)  2

R21 ( 1)
1 3 1 1  R (1) 1 3 1 1  R2 (  1 ) 1 3 1 1 
0 3 0 0  32 0 3 0 0  3 0 1 0 0   B
(b) A          (say)
0 3 0 0  0 0 0 0  0 0 0 0 

A is reduced to B.

  ( A)   ( B)  2 , since B has two independent rows.

1 2 3 1 
10.5.6 Find the rank of A   2 1 1 1 
 
1 1 1 0 

Solution:

1 2 3 1  R ( 1) 1 2 3 1  1
R2 (  )
1 2 3 1 
R21 ( 2)
 1   B , the Echelon
A  0 3 5 1  0 3 5 1 
32 3
0 1 5 3 3
R31 ( 1)
0 3 4 1 0 0 1 0   
0 0 1 0 
form of A.

 ( A) = number of nonzero rows in the Echelon form = 2.



 2 3 1  1
 1 1 2 4 
10.5.7 Find the rank of A  
3 1 3 2 
 
6 3 0 7 

Solution:

1 1 2 4  1 1 2 4 
2
R12
3 1 1  R21 (  2 ) 
7 
A  0 5 3
3 1 3 
 2  R31 ( 3)  0 4 9 10 
  R41 (  6 )  
6 3 0 7  0 9 12 17 

1 1 2 4 1 1 2 4 
R23( 1) 0 1 6 3 R32 ( 4) 0 1
 6 3
A  
0 4 9 10  R42
( 9)  0 0 33 22 
   
0 9 12 17  0 0 66 44 

1 1 2 4 
1 1 2 4  1 
R43 ( 2)   R3 ( ) 0 1 6 3
 
0 1 6 3
 0 4 33 22   0 0 1
33
 2   B( say )
 
   3
0 0 0 0 
0 0 0 0 

B is the Echelon form   ( A)  3 . ( = number of nonzero rows of B)

 1 1 2 3
4 1 0 2 
10.5.8 Reduce the matrix A   to the normal form and hence find the rank.
0 3 0 4
 
0 1 0 2
Centre for Distance Education 10.12 Acharya Nagarjuna University
Solution:

1  1 2 3   1  1 2 3 
R21 ( 4) 0 5  8 14  R24  0 1 0 2 
A  
0 3 0 4   0 3 0 4
   
0 1 0 2   0 5  8 14 

1 0 0 0 1 0 0 0
C21 (1) 
0 1 0 2  R32 ( 3) 0
 1 0 2 

C31 ( 2)  0 3 0 4  R42( 5)  0 0 0 2 
   
0 5 8 14  0 0 8 4 

1 0 0 0  1 0 0 0
C42 ( 2) 0 1 0 0  R34  0 1 0 0 
 
0 0 0 2    0 0 8 4 
   
0 0 8 4   0 0 0 2 

1 0 0 0 
0 1 0 0 0
1 0 0  R34 ( 12 ) 
1 0 0 
1
R3 (  )
 0
1 0 1  
8
 I4
R4 (  ) 
0 1   0 0 1 0
2
 2  
0 0 0 1
0 0 0 1 

 A  I 4   ( A)  4

0 2 3
10.5.9 Reduce the matrix A   2 4 0  using row operations to a matrix B and obtain the rank.
 3 0 1 

Solution:

 2 4 0  R1 ( 1 ) 1 2 0  R ( 3) 1 2 0
A  0 2 3    0 2 3    0 2 3 
R12 2 31

 3 0 1   3 0 1   0 6 1 
Rings and Linear Algebra 10.13 Rank of a Matrix

1 2 0  R3 ( 1 ) 1 2 0  R ( 3) 1 2 0 
0 2 3  10 0 2 3 23 0 2 0 
R32 (3)

        
0 0 10  0 0 1  0 0 1 

R12 ( 1)
1 0 0  R2 ( 1 ) 1 0 0 
0 2 0  2 0 1 0   I
      3
0 0 1  0 0 1 

 A  I 3 .B  I 3   ( A)  3

10.5.10 Find non-singular matrices P and Q such that PAQ is of the form | I_r  O | , where
                                                                        | O    O |

A = | 1  1  2 |
    | 1  2  3 |
    | 0 −1 −1 |

Solution: Write A = IAI

| 1  1  2 |   | 1 0 0 |   | 1 0 0 |
| 1  2  3 | = | 0 1 0 | A | 0 1 0 |
| 0 −1 −1 |   | 0 0 1 |   | 0 0 1 |

Apply R₂₁(−1):  | 1  1  2 |   |  1 0 0 |   | 1 0 0 |
                | 0  1  1 | = | −1 1 0 | A | 0 1 0 |
                | 0 −1 −1 |   |  0 0 1 |   | 0 0 1 |

Apply C₂₁(−1), C₃₁(−2):  | 1  0  0 |   |  1 0 0 |   | 1 −1 −2 |
                         | 0  1  1 | = | −1 1 0 | A | 0  1  0 |
                         | 0 −1 −1 |   |  0 0 1 |   | 0  0  1 |

Apply R₃₂(1):  | 1 0 0 |   |  1 0 0 |   | 1 −1 −2 |
               | 0 1 1 | = | −1 1 0 | A | 0  1  0 |
               | 0 0 0 |   | −1 1 1 |   | 0  0  1 |

Apply C₃₂(−1):  | 1 0 0 |   |  1 0 0 |   | 1 −1 −1 |
                | 0 1 0 | = | −1 1 0 | A | 0  1 −1 |
                | 0 0 0 |   | −1 1 1 |   | 0  0  1 |

If P = |  1 0 0 | ,  Q = | 1 −1 −1 | , then
       | −1 1 0 |        | 0  1 −1 |
       | −1 1 1 |        | 0  0  1 |

PAQ = | I₂     O₂ₓ₁ |
      | O₁ₓ₂   O₁ₓ₁ |

5 3 14 4 
 
10.5.11 Reduce A  0 1 2 1  to Echelon form and hence find the rank.
1 1 2 0 

Solution:

1 1 2 0  R ( 5) 1 1 2 0 
A 0 1 2 1   0 1 2 1 
R13 31

5 3 14 4  0 8 4 4 

 
 1  1 2 0  1  1  1 2 0 
R3 ( )
R32 ( 8) 12  
0 1 
  2 1   0 1 2 1   B
 0 0 12 4   1
0 0 1 
 3

B is in Echelon form, and hence the rank of A is 3.

0 2 4
10.5.12 Verify whether the matrix A   2 4 2  is invertible. If so, find its inverse.
 3 3 1 
Rings and Linear Algebra 10.15 Rank of a Matrix

0 2 4

Solution: Given A  2 4 2

 
 3 3 1 

0 2 41 0 0 

The argumented matrix  A I A   2 4 20 1 0 

 3 3 10 0 1 

We convert the matrix into the form  I B  .

2 4 20 1 0 
 A I    2 0 
R12
4 21 0
 3 3 10 0 1 

R1 ( 12 )
1 2 10 1
2 0 
0 0 
  2 41 0
 3 3 10 0 1 

R31 ( 3)
1 2 1 0 1
2 0 
0 2 4 1 0 
  0
0 3 2 0 3
2 1 

R 2 ( 12 )
1 2 1 0 1
2 0 
0 0 
 
1 2 12 0

 0 3 2 0 3
2 1 

R12 ( 2)
1 0 3 1 12 0  R3 ( 14 )
1 0 3 1 12 0 
0 1 2 1 0 0  0 1 2 1 0 0 
  2    2 
0 0 4 2 1  0 0 1 8 8 4 
R32 (3) 3 3 3 3 1
2
Centre for Distance Education 10.16 Acharya Nagarjuna University

R13 (3)
R 23 (  2 )
1 0 1  1
8  5
8
3
4 
0 
  1 0  14 3
4  1
2 
 0 0 1 83  3
8
1
4


 A is invertible and

  81  85 43 
A1   14 43  12 
 83  83 14 

1 2 1 
10.5.13 Using elementary operations, verify whether A   2 1 1 is invertible.
1 5 4 

1 2 1 1 00 

Solution: Consider the argumented matrix  A I   2 1 1 0 1 0 

 1 5 4 0 0 1 

We verify whether  A / I  can be converted into the form  I / B  using elementary opera-
tions.

1 2 1 1 0 0 
R21 ( 2)
 
Now  A / I     0 3 3 2 1 0 
R31 ( 1)
 0 3 3 1 0 1 

1 2 1 1 0 0 
 0 3 3 2 1 0 
R32 (1)

  
 0 0 0 3 1 1 

 A / I  cannot be converted into the form  I / B  , since the third row is a zero row..
Hence A is not invertible.

1 1 1 1
 
10.5.14 SAQ : Find the rank of A  1 1 1 1
1 1 1 1

1 1 1
 
10.5.15 SAQ : Find the rank of A  1 1 1
 1 1 1 

10.6 Answers to SAQ’s:

1 0 1
 
10.4.6 SAQ: A   0 1 1   ( A)  ?
1 0 1

We observe that R1 , R3 are identical  A  0

1 0 
But    0.   ( A)  2
0 1 

10.4.7 SAQ: A = | 1 2 1 |  → R₂₁(−1), R₃₁(−1) → | 1  2 1 |  → R₂(−1/2) → | 1  2  1 |
                | 1 0 3 |                       | 0 −2 2 |               | 0  1 −1 |
                | 1 1 2 |                       | 0 −1 1 |               | 0 −1  1 |

→ R₃₂(1) → | 1 2  1 |  ⟹ ρ(A) = 2
           | 0 1 −1 |
           | 0 0  0 |

10.5.14 SAQ : Every 2nd order minor is 0.


 Rank A = 1.

1 1 1 
 
R21 (1)
10.5.15 SAQ : A   0 0 0    ( A)  1
R31 ( 1)
 0 0 0 
10.7 Summary:
The rank of an m × n matrix is discussed. Using elementary row/column operations, we
obtained the normal form. We also explained the procedure for finding the inverse.

10.8 Technical Terms:


Rank, Elementary operations, Normal form.

10.9 Exercises:

1 2 3 0
2 4 3 2 
10.9.1 Find the rank of the matrix A  
3 2 1 3
 
6 8 7 5

 2 1 3 1
 1 2 3 1
10.9.2 Find the rank of the matrix  
1 0 1 1
 
 0 1 1 1

 2 1 3 1 
 
10.9.3 Find the rank of the matrix A  1 8 6 8 
1 2 0 2 

1 1 1 1
10.9.4 Find the rank of the matrix by reducing to normal form A  1 2 3 4 
3 4 5 2 

10.9.5 Find two non singular matrices P and Q such that PAQ is in the normal form, where

1 1 2 
A  1 2 3 
 0 1 1
10.9.6 Find two non-singular matrices P and Q such that PAQ is in the normal form, where

1 0 1 0
 3 1 2 1 
A
2 1 2 1
 
 2 2 1 0

 1 3 3 1
 1 2 1 0 
10.9.7 Find the inverse of A using elementary operations: A   
 2 5 2 3
 
 1 1 0 1 

10.10 Answers to Exercises:


10.9.1 3
10.9.2 3
10.9.3 2
10.9.4 2

 1 0 0  1 1 1 
   
10.9.5 P   1 1 0  , Q   0 1 1
 1 1 1  0 0 1 
   

1 0 0 0
   1 0 0 1 
3 1 0
, Q   0 1 2 3 
0
10.9.6 P
 5 1 1 0 0 0 0 1 
   
 1 1 1 1

0 2 1 3
 
1 1 1 2 
A1  
10.9.7 1 2 0 1
 
 1 1 2 6

10.11 Model Examination Questions:


1. Explain the concept of rank of a matrix.

 1 1 2 3 
 
2. Using elementary operations reduce A  4 1 0 2
to normal form and hence find rank.
0 3 0 4
 
0 1 0 2

1 1 2 
 3 
3. Find nonsingular matrices P and Q such that PAQ is in the normal form, where A  1 2
 0 1 1

0 1 2
 
4. Using elementary operations, find the inverse of A  1 2 3 
 3 1 1 

10.12 Reference Books:


1. Hoffman and Kunze, Linear Algebra, 2nd edition - Prentice Hall.
2. Stephen H. Friedberg and others - Linear Algebra, Prentice Hall India Pvt. Ltd - New Delhi.

- A Satyanarayana Murty
Rings and Linear Algebra 11.1 Systems of Linear Equations

LESSON - 11

SYSTEMS OF LINEAR EQUATIONS


11.1 Objective of the Lesson:
In this lesson, we study systems of linear equations and find their solutions, when they exist,
using elementary operations. The elementary operations are used to provide a computational
method for finding all solutions to such systems.

11.2. Structure of the Lesson:


This lesson has the following components.

11.3 Introduction

11.4 System of linear equations - Theoretical aspects

11.5 System of linear equations - Computational aspects

11.6 Answers to SAQ’s

11.7 Summary

11.8 Technical Terms

11.9 Exercises

11.10 Answers to Exercises

11.11 Model Examination Questions

11.12 Reference Books

11.3 Introduction:
In this lesson, we study systems of linear equations. The statement “A system of n linear
equations in n unknowns has a solution” is not always correct, because several possibilities,
including no solution, may arise.

11.4 System of Linear Equations - Theoretical Aspects:


The equation a₁x₁ + a₂x₂ + ⋯ + aₙxₙ = b ......... (1)

expressing b in terms of the variables x₁, x₂, ..., xₙ and the scalars a₁, a₂, ..., aₙ is called a

linear equation.

For a given b, we must find x₁, x₂, ..., xₙ satisfying (1).



A solution to the linear equation (1) is an ordered collection of n scalars y₁, y₂, ..., yₙ such that (1) is

satisfied when x₁ = y₁, x₂ = y₂, ..., xₙ = yₙ are substituted in (1).

11.4.1 Definition: A system of m linear equations in n unknowns over a field F or simply a linear
system, is a set of m linear equations, each in n unknowns. A linear system can be denoted by

a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = b₂
.......................                                        .... (S)
aₘ₁x₁ + aₘ₂x₂ + ⋯ + aₘₙxₙ = bₘ

where aᵢⱼ and bᵢ ∈ F, 1 ≤ i ≤ m, 1 ≤ j ≤ n, and x₁, x₂, ..., xₙ are variables taking values in F.

If we write

    | a₁₁ a₁₂ ... a₁ₙ |       | x₁ |       | b₁ |
A = | a₂₁ a₂₂ ... a₂ₙ | , X = | x₂ | , B = | b₂ | , then
    | ...             |       | .. |       | .. |
    | aₘ₁ aₘ₂ ... aₘₙ |       | xₙ |       | bₘ |

the system can be represented as:

AX  B
A is called coefficient matrix.

A solution of the system (S) is an n-tuple

s = | s₁ |  ∈ Fⁿ  such that As = B.
    | s₂ |
    | .. |
    | sₙ |

The set of all solutions of a linear system is called the solution set of the system. A system
of linear equations is said to be consistent if it has a solution. Otherwise, the system is said to be
inconsistent.
11.4.2 System of Homogeneous linear equations:
Consider a system of m homogeneous linear equations in n unknowns namely

a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = 0
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = 0
......................                                        .......... (I)
aₘ₁x₁ + aₘ₂x₂ + ⋯ + aₘₙxₙ = 0

where aᵢⱼ ∈ F. This system can be written as AX = O.

11.4.3 Note: x₁ = 0, x₂ = 0, ..., xₙ = 0 is a solution of AX = O,

i.e. X = O is a solution of AX = O. This is called the trivial solution or zero solution. Any
other solution is called a nonzero solution.

∴ AX = O is always consistent.

A system AX = B of m linear equations in n unknowns is said to be

(a) homogeneous if B = O

(b) non-homogeneous if B ≠ O

Any homogeneous system has at least one solution, namely the zero solution.

11.4.4 Theorem: Let AX = O be a homogeneous system of m linear equations in n unknowns

over a field F. Let K denote the set of all solutions of AX = O. Then K = N(L_A);

K is a subspace of Fⁿ, and dim K = n − rank L_A = n − rank A.

Proof: We have K = { X ∈ Fⁿ | AX = O }

We have proved that K is a subspace of Fⁿ and it is also equal to N(L_A), the null space of

L_A, i.e. K = N(L_A).

We know that rank L_A + nullity L_A = dim Fⁿ = n  (L_A : Fⁿ → Fᵐ)

⟹ rank A + nullity L_A = n

⟹ nullity L_A = n − r, i.e. dim K = n − r, where r = rank A.

11.4.5 Note: The system AX = O has n − r L.I. solutions.

11.4.6 Corollary: If m < n, the system AX = O has a non-zero solution.


Proof: Suppose m < n.

⟹ Rank A = Rank L_A ≤ m

⟹ dim K = n − ρ(L_A) ≥ n − m, where K = N(L_A)

Since n − m > 0, we have dim K > 0, so K ≠ {O}.

∴ ∃ a nonzero solution s ∈ K.

∴ s is a nonzero solution to AX = O,

i.e. AX = O has a nonzero solution.

We know that the solution set S of (I), i.e. of AX = O, is a subspace of Fⁿ. [Since

AO = O, O ∈ S, so ∅ ≠ S ⊆ Fⁿ. Let X₁, X₂ be solutions of AX = O

⟹ AX₁ = O, AX₂ = O ⟹ A(X₁ + X₂) = O.

Also A(kX₁) = k(AX₁) = k·O = O,

so A(kX₁ + lX₂) = A(kX₁) + A(lX₂) = kAX₁ + lAX₂ = k·O + l·O = O ⟹ kX₁ + lX₂ ∈ S.]

We observe that if AX = O has a nonzero solution, then it has an infinite number of nonzero solu-
tions, since if X₁ ≠ O is a solution then kX₁ is also a solution for every k ∈ F.

11.4.7 Note: A system of m homogeneous equations in n unknowns with m < n has a nonzero

solution.

(∵ ρ(A) = r ≤ m < n ⟹ n − r > 0)
11.4.8 System of non-homogeneous equations : The equations

a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = b₂
.......................
aₘ₁x₁ + aₘ₂x₂ + ⋯ + aₘₙxₙ = bₘ

can be written as AX = B, where

                    | x₁ |       | b₁ |
A = (aᵢⱼ)ₘₓₙ ,  X = | x₂ | , B = | b₂ |
                    | .. |       | .. |
                    | xₙ |       | bₘ |

The system AX = O is called the homogeneous system corresponding to AX = B.

11.4.9 Theorem: Let K be the solution set of the system AX = B and let K_H be the solution set
of the corresponding homogeneous system AX = O. Then for any solution s to AX = B,

K = {s} + K_H = { s + k | k ∈ K_H }.

Proof: Let s be any solution to AX = B, so As = B. We show that K = {s} + K_H.

Let W ∈ K. ⟹ AW = B. ⟹ A(W − s) = AW − As = B − B = O

⟹ W − s ∈ K_H

⟹ W = s + (W − s) ∈ {s} + K_H

∴ K ⊆ {s} + K_H

Now suppose W ∈ {s} + K_H ⟹ W = s + k₀ for some k₀ ∈ K_H

⟹ AW = A(s + k₀) = As + Ak₀

= B + O = B ⟹ W ∈ K

∴ {s} + K_H ⊆ K

∴ K = {s} + K_H

11.4.10 Theorem: Let AX = B be a system of linear equations. Then the system is consis-
tent iff ρ(A) = ρ([A B]).

Proof: Suppose AX = B is a system of linear equations.

Here A is the coefficient matrix and [A B] is the augmented matrix.

We know that L_A : Fⁿ → Fᵐ, L_A(X) = AX, ∀ X ∈ Fⁿ.

AX = B has a solution means that ∃ X₁ ∈ Fⁿ such that AX₁ = B

⟹ B = L_A(X₁)

⟹ B ∈ R(L_A)

Let a₁, a₂, ..., aₙ be the columns of A.

We know that R(L_A) = Span {a₁, a₂, ..., aₙ}

∴ AX = B is consistent

⟺ AX = B has a solution

⟺ B ∈ Span {a₁, a₂, ..., aₙ}

But B ∈ Span {a₁, a₂, ..., aₙ} ⟺ Span {a₁, a₂, ..., aₙ} = Span {a₁, a₂, ..., aₙ, B}

⟺ dim Span {a₁, a₂, ..., aₙ} = dim Span {a₁, a₂, ..., aₙ, B}

⟺ rank A = rank [A B]. Hence the theorem.

Note: If ρ(A) ≠ ρ([A B]), the system is inconsistent (i.e. the system has no solution).

11.4.11 Theorem: If A is a non-singular matrix of order n, then the system AX = B in n un-
knowns has a unique solution.

Proof: Since A is a non-singular matrix of order n, |A| ≠ 0.

∴ ρ(A) = n and ρ([A B]) = n

∴ ρ(A) = ρ([A B]) ⟹ AX = B is consistent.

⟹ AX = B has a solution.

Since A is non-singular, A⁻¹ exists, and AX = B ⟹ A⁻¹(AX) = A⁻¹B

⟹ (A⁻¹A)X = A⁻¹B ⟹ IX = A⁻¹B

⟹ X = A⁻¹B

∴ X = A⁻¹B is a solution of AX = B.

Let X₁, X₂ be solutions of AX = B ⟹ AX₁ = B, AX₂ = B ⟹ AX₁ = AX₂

⟹ A⁻¹(AX₁) = A⁻¹(AX₂) ⟹ (A⁻¹A)X₁ = (A⁻¹A)X₂ ⟹ IX₁ = IX₂ ⟹ X₁ = X₂

Hence the uniqueness, and hence the theorem.

11.4.12 Theorem: Let AX = B be a system of n linear equations in n unknowns. If A is invertible,

then the system has exactly one solution, A⁻¹B. Conversely, if the system has exactly one solution,
then A is invertible.

Proof: Suppose A is invertible ⟹ A⁻¹ exists.

Write s = A⁻¹B.

∴ As = A(A⁻¹B) = (AA⁻¹)B = IB = B

∴ s = A⁻¹B is a solution of AX = B.

Let s₁ be a solution of AX = B ⟹ As₁ = B

Now As = B, As₁ = B ⟹ As = As₁

⟹ A⁻¹(As₁) = A⁻¹(As) ⟹ (A⁻¹A)s₁ = (A⁻¹A)s

⟹ Is₁ = Is ⟹ s₁ = s

∴ s = A⁻¹B is the only solution of AX = B.

Conversely, suppose that the system AX = B has exactly one solution, namely s. ∴ As = B

Let K_H denote the solution set of the corresponding homogeneous system AX = O.

For any k₀ ∈ K_H, s + k₀ is also a solution of AX = B, so s + k₀ = s ⟹ k₀ = O ⟹ K_H = {O}

⟹ N(L_A) = {O} ⟹ L_A is non-singular

⟹ A is invertible  [L_A : Fⁿ → Fⁿ is one-one ⟹ L_A is onto]

Hence the theorem.


Solved Problems:

11.4.13 Solve x + 2y + z = 0 (a system containing one equation in 3 unknowns)

Solution: A = (1 2 1) is the coefficient matrix ⟹ ρ(A) = 1.

If K is the solution set, then dim K = 3 − 1 = 2.

We observe that α₁ = |  0 | , α₂ = |  1 |  are independent vectors in K; {α₁, α₂} is a basis of K.
                     |  1 |        |  0 |
                     | −2 |        | −1 |

K = { a₁α₁ + a₂α₂ | a₁, a₂ ∈ F } is the solution set.

11.4.14 Solve: x + 2y + z = 0, x − y − z = 0 (a system of 2 equations in 3 unknowns)

Solution: The coefficient matrix is A = | 1  2  1 |
                                        | 1 −1 −1 |

We observe that ρ(A) = 2.

If K is the solution set, then dim K = 3 − 2 = 1.

We observe that α = |  1 |  is a basis of K.
                    | −2 |
                    |  3 |

The solution set is { aα | a ∈ F }.
11.4.15 Solve: x + 2y + 3z = 0, 3x + 4y + 4z = 0, 7x + 10y + 12z = 0

Solution: The coefficient matrix is A = | 1  2  3 |
                                        | 3  4  4 |
                                        | 7 10 12 |

A → R₂₁(−3), R₃₁(−7) → | 1  2  3 |  → R₂(−1/2) → | 1  2  3  |  → R₃₂(4) → | 1 2  3  |  = B
                       | 0 −2 −5 |               | 0  1 5/2 |             | 0 1 5/2 |
                       | 0 −4 −9 |               | 0 −4 −9  |             | 0 0  1  |

A ~ B and B is the echelon form.

∴ ρ(A) = 3 = number of unknowns.

∴ K = { (0, 0, 0)ᵀ }. The system has the zero solution only.

11.4.16 Problem: Solve: x + y + z = 0, 3x + 4y + 5z = 0, 2x + 3y + 4z = 0

Solution: The coefficient matrix is A = | 1 1 1 |
                                        | 3 4 5 |
                                        | 2 3 4 |

A → R₂₁(−3), R₃₁(−2) → | 1 1 1 |  → R₃₂(−1) → | 1 1 1 |  = B
                       | 0 1 2 |              | 0 1 2 |
                       | 0 1 2 |              | 0 0 0 |

A ~ B, ρ(B) = 2 ⟹ ρ(A) = 2.

If K is the solution set, then dim K = 3 − 2 = 1.

∴ The system has only one L.I. solution.

The given system is equivalent to x + y + z = 0

y + 2z = 0

⟹ y = −2z

⟹ x − 2z + z = 0 ⟹ x = z

∴ | x |   |  z  |     |  1 |
  | y | = | −2z | = z | −2 |
  | z |   |  z  |     |  1 |

If z = k, then x = k, y = −2k, z = k.

∴ K = { kα | k ∈ F } ,  α = |  1 |
                            | −2 |
                            |  1 |

11.4.17 Solve x + y − z + t = 0, x − y + 2z − t = 0, 3x + y + t = 0

Solution: The coefficient matrix is A = | 1  1 −1  1 |
                                        | 1 −1  2 −1 |
                                        | 3  1  0  1 |

Now A → R₂₁(−1), R₃₁(−3) → | 1  1 −1  1 |  → R₃₂(−1) → | 1  1 −1  1 |  = B (say)
                           | 0 −2  3 −2 |              | 0 −2  3 −2 |
                           | 0 −2  3 −2 |              | 0  0  0  0 |

ρ(B) = 2 ⟹ ρ(A) = 2.

Let K be the solution set.

∴ dim K = n − r = 4 − 2 = 2

The given system is equivalent to x + y − z + t = 0

2y − 3z + 2t = 0

Let z = k₁, t = k₂ ⟹ 2y = 3k₁ − 2k₂ ⟹ y = (3/2)k₁ − k₂

x = −y + z − t = −(3/2)k₁ + k₂ + k₁ − k₂ = −(1/2)k₁

∴ | x |      | −1/2 |      |  0 |
  | y | = k₁ |  3/2 | + k₂ | −1 |
  | z |      |   1  |      |  0 |
  | t |      |   0  |      |  1 |

∴ Solution set K = { k₁(−1/2, 3/2, 1, 0)ᵀ + k₂(0, −1, 0, 1)ᵀ | k₁, k₂ ∈ F }
11.5 System of linear equations - computational aspects: In this section, we use elemen-
tary row operations to find one solution and, using that, all the solutions to a given non-homoge-
neous system (when the system is consistent).
11.5.1 Definition: If two systems of linear equations have the same solution set, then the sys-
tems are said to be equivalent.

11.5.2 Theorem: Let AX  B be a system of m linear equations in n unknowns and C be an


invertible m x m matrix.

Then the system (CA) X  CB is equivalent to AX  B .

Proof: Let K be the solution set for AX  B and let

K1 be the solution set for (CA) X  CB .

Let W ∈ K. Then AW = B ⟹ C(AW) = CB ⟹ (CA)W = CB

⟹ W ∈ K₁.  ∴ K ⊆ K₁

Let W ∈ K₁ ⟹ (CA)W = CB ⟹ C(AW) = CB

Hence AW = C⁻¹(C(AW)) = C⁻¹(CB) = (C⁻¹C)B = IB = B

⟹ W ∈ K.  ∴ K₁ ⊆ K.  ∴ K = K₁

Hence the theorem.

11.5.3 Corollary: Let AX = B be a system of m linear equations in n unknowns. If (A₁ B₁) is a

matrix obtained from (A B) by a finite number of elementary row operations, then the system

A₁X = B₁ is equivalent to the original system.

Proof: Let (A B) be the augmented matrix of the system AX = B.

Let (A₁ B₁) be obtained from (A B) by elementary row operations.

This is equivalent to pre-multiplication of (A B) by elementary matrices (of order m × m),

E₁, E₂, ..., E_p. Let C = E_p E_{p−1} ... E₂ E₁.

∴ (A₁ B₁) = C (A B) = (CA  CB). Now C is invertible, since each Eᵢ is invertible.

Hence the system A₁X = B₁ is equivalent to AX = B.


Centre for Distance Education 11.12 Acharya Nagarjuna University
11.5.4 Echelon form of a matrix:
If all the elements in a row of a matrix are zeros, then it is called a zero row; if there is at least one nonzero element in a row, then it is called a nonzero row.
11.5.5 Definition: A matrix is said to be in Echelon form if it has the following properties:
i) Zero rows, if any, must follow nonzero rows.
ii) The first nonzero element in each nonzero row is 1.
iii) The number of zeros before the first nonzero element in a row is less than the number of zeros before the first nonzero element of the next row.
11.5.6 Note: 1) Some authors ignore the property (ii) to consider a matrix to be in Echelon form.
2) The rank of a matrix in Echelon form is equal to the number of nonzero rows of
the matrix.
11.5.7 Gaussian Elimination:
We now explain a procedure for solving any system of linear equations by using the following example. This procedure is called Gaussian elimination.

Consider the system of linear equations:
2x1 coefficients as follows:
3x1 + 2x2 + 3x3 - 2x4 = 1
x1 + x2 + x3 = 3
x1 + 2x2 + x3 - x4 = 2
The augmented matrix (A | B) is
[3 2 3 -2 | 1; 1 1 1 0 | 3; 1 2 1 -1 | 2]

By performing elementary row operations, the augmented matrix is transformed into an upper triangular matrix in which the first nonzero entry of each row is 1, and it occurs in a column to the right of the first nonzero entry of each preceding row.
(A = [aij]n×n is upper triangular if aij = 0 whenever i > j.)

1. To get 1 as the first-row, first-column element, we interchange R1 and R3:
(A | B) ~ [1 2 1 -1 | 2; 1 1 1 0 | 3; 3 2 3 -2 | 1]
2. Using type 3 row operations, we use R1 to get zeros in the remaining positions of C1.
By applying R21(-1), R31(-3), we get
[1 2 1 -1 | 2; 0 -1 0 1 | 1; 0 -4 0 1 | -5]

3. We get 1 in the next row in the leftmost possible column, without using previous rows. In this example, C2 is the leftmost possible column.
By applying R2(-1), we get 1 in the (2, 2) position:
[1 2 1 -1 | 2; 0 1 0 -1 | -1; 0 -4 0 1 | -5]

4. Now use type 3 elementary row operations to get zeros below the 1. In this example, we apply R32(4):
[1 2 1 -1 | 2; 0 1 0 -1 | -1; 0 0 0 -3 | -9]

5. Repeat steps 3, 4 on each succeeding row until no nonzero rows remain.
By applying R3(-1/3), we get
[1 2 1 -1 | 2; 0 1 0 -1 | -1; 0 0 0 1 | 3]

6. Work upward, beginning with the last nonzero row, and add multiples of each row to the rows above (so that we get zeros above the first nonzero entry in each row).
By applying R13(1), R23(1), we get
[1 2 1 0 | 5; 0 1 0 0 | 2; 0 0 0 1 | 3]

7. Repeat the process described in step 6 for each preceding row until it is performed with the 2nd row, at which point the reduction process is complete.
In this example, by applying R12(-2), we get
[1 0 1 0 | 1; 0 1 0 0 | 2; 0 0 0 1 | 3]

This matrix corresponds to the system:
x1 + x3 = 1
x2 = 2
x4 = 3

This system is equivalent to the original system.


The equivalent system can easily be solved.
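The seven steps above can be collected into a small routine. A minimal sketch in plain Python (the function name `rref` is my own; the input is the augmented matrix of the worked example):

```python
def rref(M):
    """Reduce a matrix (list of row lists) to reduced row echelon form."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a pivot row for column c at or below row r
        piv = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]           # step 1: interchange rows
        M[r] = [x / M[r][c] for x in M[r]]    # steps 3, 5: make the pivot 1
        for i in range(rows):                 # steps 2, 4, 6, 7: clear the column
            if i != r and abs(M[i][c]) > 1e-12:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

aug = [[3, 2, 3, -2, 1],
       [1, 1, 1, 0, 3],
       [1, 2, 1, -1, 2]]
# final rows: [1 0 1 0 | 1], [0 1 0 0 | 2], [0 0 0 1 | 3] (up to float rounding)
print(rref(aug))
```

Running this on the example reproduces the final matrix obtained above.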
Solved Problems:

11.5.8 Show that the equations x + y + z = 4, 2x + 5y - 2z = 3, x + 7y - 7z = 5 are inconsistent.

Solution: The augmented matrix is
(A | B) = [1 1 1 | 4; 2 5 -2 | 3; 1 7 -7 | 5]
R21(-2), R31(-1): ~ [1 1 1 | 4; 0 3 -4 | -5; 0 6 -8 | 1]
R32(-2): ~ [1 1 1 | 4; 0 3 -4 | -5; 0 0 0 | 11]
ρ(A) = 2, ρ(A | B) = 3.
Since ρ(A) ≠ ρ(A | B), the system is inconsistent.
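The rank criterion can be confirmed numerically; a sketch with numpy, taking the equations as x + y + z = 4, 2x + 5y - 2z = 3, x + 7y - 7z = 5 (the signs are my reading of the problem):

```python
import numpy as np

A = np.array([[1, 1, 1], [2, 5, -2], [1, 7, -7]], dtype=float)
b = np.array([4, 3, 5], dtype=float)
aug = np.column_stack([A, b])

# rank(A) != rank(A | B) means the system is inconsistent
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))  # 2 3
```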

11.5.9 Solve the system: x + 2y + z = 2, 3x + y - 2z = 1, 4x - 3y - z = 3, 2x + 4y + 2z = 4.

Solution: The augmented matrix of the system is
(A | B) = [1 2 1 | 2; 3 1 -2 | 1; 4 -3 -1 | 3; 2 4 2 | 4]
R21(-3), R31(-4), R41(-2): ~ [1 2 1 | 2; 0 -5 -5 | -5; 0 -11 -5 | -5; 0 0 0 | 0]
R2(-1/5): ~ [1 2 1 | 2; 0 1 1 | 1; 0 -11 -5 | -5; 0 0 0 | 0]
R32(11): ~ [1 2 1 | 2; 0 1 1 | 1; 0 0 6 | 6; 0 0 0 | 0]
R3(1/6): ~ [1 2 1 | 2; 0 1 1 | 1; 0 0 1 | 1; 0 0 0 | 0]
∴ ρ(A) = 3 = ρ(A | B)
The system is equivalent to x + 2y + z = 2, y + z = 1, z = 1.
⇒ z = 1, y = 0, x = 2 - 0 - 1 = 1.
Since ρ(A) = 3 equals the number of unknowns, the system has the unique solution (x, y, z) = (1, 0, 1).

11.5.10 Problem: Obtain for what values of λ and μ the equations
x + y + z = 6, x + 2y + 3z = 10, x + 2y + λz = μ have
(a) no solution (b) a unique solution (c) an infinite number of solutions.

Solution: The augmented matrix of the system is
(A | B) = [1 1 1 | 6; 1 2 3 | 10; 1 2 λ | μ]
R21(-1), R31(-1), then R32(-1):
~ [1 1 1 | 6; 0 1 2 | 4; 0 0 λ-3 | μ-10]
If λ ≠ 3, then ρ(A) = 3 and ρ(A | B) = 3.
∴ The system has a unique solution.
If λ = 3 and μ = 10, then ρ(A) = 2 = ρ(A | B) < 3.
∴ The system has an infinite number of solutions.
If λ = 3 and μ ≠ 10, then (A | B) ~ [1 1 1 | 6; 0 1 2 | 4; 0 0 0 | μ-10].
Since μ - 10 ≠ 0, ρ(A) = 2 < ρ(A | B) (= 3).
In this case, the system is inconsistent.
Hence: λ = 3, μ ≠ 10 ⇒ the system is inconsistent;
λ ≠ 3 ⇒ the system has a unique solution;
λ = 3, μ = 10 ⇒ the system has an infinite number of solutions.
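The three cases can be checked by computing ranks for sample values of λ and μ; a sketch with numpy (the helper name `ranks` is mine):

```python
import numpy as np

def ranks(lam, mu):
    """Return (rank A, rank (A|B)) for the system with parameters lam, mu."""
    A = np.array([[1, 1, 1], [1, 2, 3], [1, 2, lam]], dtype=float)
    aug = np.column_stack([A, np.array([6.0, 10.0, mu])])
    return int(np.linalg.matrix_rank(A)), int(np.linalg.matrix_rank(aug))

print(ranks(5, 0))    # (3, 3): unique solution
print(ranks(3, 10))   # (2, 2): infinitely many solutions
print(ranks(3, 7))    # (2, 3): inconsistent
```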


11.5.11 Solve the following system by reducing to reduced row echelon form:
2x1 + 3x2 + x3 + 4x4 + 9x5 = 17
x1 + x2 + x3 + x4 + 3x5 = 6
x1 + x2 + x3 + 2x4 + 5x5 = 8
2x1 + 2x2 + 2x3 + 3x4 + 8x5 = 14

Solution: The augmented matrix is
(A | B) = [2 3 1 4 9 | 17; 1 1 1 1 3 | 6; 1 1 1 2 5 | 8; 2 2 2 3 8 | 14]
R12: ~ [1 1 1 1 3 | 6; 2 3 1 4 9 | 17; 1 1 1 2 5 | 8; 2 2 2 3 8 | 14]
R21(-2), R31(-1), R41(-2): ~ [1 1 1 1 3 | 6; 0 1 -1 2 3 | 5; 0 0 0 1 2 | 2; 0 0 0 1 2 | 2]
R43(-1): ~ [1 1 1 1 3 | 6; 0 1 -1 2 3 | 5; 0 0 0 1 2 | 2; 0 0 0 0 0 | 0]
∴ ρ(A) = 3 = ρ(A | B)
The system is equivalent to
x1 + x2 + x3 + x4 + 3x5 = 6
x2 - x3 + 2x4 + 3x5 = 5
x4 + 2x5 = 2
The corresponding homogeneous system has n - r = 5 - 3 = 2 L.I. solutions.
Let x3 = t1, x5 = t2 ⇒ x4 = 2 - 2t2
⇒ x2 = 5 + t1 - 2(2 - 2t2) - 3t2 ⇒ x2 = 1 + t1 + t2
⇒ x1 = 6 - (1 + t1 + t2) - t1 - (2 - 2t2) - 3t2 ⇒ x1 = 3 - 2t1 - 2t2
∴ (x1, x2, x3, x4, x5) = (3, 1, 0, 2, 0) + t1(-2, 1, 1, 0, 0) + t2(-2, 1, 0, -2, 1), t1, t2 ∈ ℝ
We observe that (3, 1, 0, 2, 0) is a particular solution of the system and {(-2, 1, 1, 0, 0), (-2, 1, 0, -2, 1)} is a basis of the solution space of the corresponding homogeneous system.
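The particular solution and the two homogeneous basis vectors can be verified by direct multiplication; a sketch with numpy (the coefficient signs are my reading of the garbled statement):

```python
import numpy as np

A = np.array([[2, 3, 1, 4, 9],
              [1, 1, 1, 1, 3],
              [1, 1, 1, 2, 5],
              [2, 2, 2, 3, 8]], dtype=float)
b = np.array([17, 6, 8, 14], dtype=float)

p  = np.array([3, 1, 0, 2, 0], dtype=float)    # particular solution
h1 = np.array([-2, 1, 1, 0, 0], dtype=float)   # homogeneous solutions
h2 = np.array([-2, 1, 0, -2, 1], dtype=float)

print(A @ p - b)        # zero vector
print(A @ h1, A @ h2)   # zero vectors
```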

11.5.12 SAQs: Verify whether the system x  2 y  3 z  0

3x  4 y  4 z  0

7 x  10 y  12 z  0
has a non-trivial solution.

11.5.13 SAQ: Verify whether the system 2 x  6 y  11  0

6 x  20 y  6 z  3  0

6 y  18 z  1  0
is consistent.

11.5.14 SAQ : Solve 2 x  y  3z  8

x  2 y  z  4

3x  y  4 z  0

11.5.15 SAQ: Verify whether the system x  y  z  3

3x  y  2 z  2

2x  4 y  7 z  7
is consistent.

11.5.16 SAQ: Verify whether the system
x + 2y - z = 3
3x - y + 2z = 1
2x - 2y + 3z = 2
x - y + z = -1
is consistent. If it is consistent, solve.

11.6 Answers to SAQs:

1 2 3 
 
11.5.12 SAQ: The coefficient matrix is A   3 4 4 
7 10 12 

1 2 3 
1 2 3  R2 ( 1 ) 
R21 ( 3)
5 
 A   0 2 5   0 1
2

R31 ( 7)  2
0 4 9  0 4 9 
 

1 2 3
R32 (4) 
5 
 0 1 2
B
0 0 1 

Since A  B and B is the echelon form,  ( A)  3

 r  n The system has trivial solution only..


11.5.13 SAQ: The augmented matrix is
(A | B) = [2 6 0 | -11; 6 20 -6 | -3; 0 6 -18 | -1]
R21(-3): ~ [2 6 0 | -11; 0 2 -6 | 30; 0 6 -18 | -1]
R32(-3): ~ [2 6 0 | -11; 0 2 -6 | 30; 0 0 0 | -91]
ρ(A) = 2, ρ(A | B) = 3 ⇒ ρ(A) ≠ ρ(A | B)
∴ The system is inconsistent.



11.5.14 SAQ: The augmented matrix of the system is
(A | B) = [2 -1 3 | 8; -1 2 1 | 4; 3 1 -4 | 0]
R12: ~ [-1 2 1 | 4; 2 -1 3 | 8; 3 1 -4 | 0]
R21(2), R31(3): ~ [-1 2 1 | 4; 0 3 5 | 16; 0 7 -1 | 12]
R2(1/3): ~ [-1 2 1 | 4; 0 1 5/3 | 16/3; 0 7 -1 | 12]
R32(-7): ~ [-1 2 1 | 4; 0 1 5/3 | 16/3; 0 0 -38/3 | -76/3]
R3(-3/38): ~ [-1 2 1 | 4; 0 1 5/3 | 16/3; 0 0 1 | 2]
∴ ρ(A) = ρ(A | B) = 3
∴ The system is consistent and has a unique solution.
The equivalent system is -x + 2y + z = 4, y + (5/3)z = 16/3, z = 2.
⇒ y + 10/3 = 16/3 ⇒ y = 2
⇒ -x + 4 + 2 = 4 ⇒ x = 2
∴ The solution is (2, 2, 2).
11.5.15 SAQ: By elementary row operations, the augmented matrix (A | B) reduces to a form whose last row is (0 0 0 | 20).
∴ ρ(A) = 2 < 3 = ρ(A | B) ⇒ The system is inconsistent.

 1 2 1 3 
 3 1 2 1 
11.5.16 SAQ: The augmented matrix is  A B    
 2 2 3 2 
 
 1 1 1 1

 1 2 1 3   1 2 1 3 
R21 ( 3)0 7 5 8  R23 ( 1) 0 1 0 4 
 A B   
6 5 4   0 6 5 4 
  
R31 ( 2) 0
R41 ( 1)    
0 3 2 4  0 3 2 4 

 1 2 1 3  1 2 1 3 
R2 ( 1)   R32 (6) 0
0 1 0 4  1 0 4 
 0 6 5 4 R42(3) 0 0 5 20 
   
0 3 2 4  0 0 2 8

1 2 1 3
R3 ( 5 ) 
1

0 1 0 4 

R4 ( 1 2 )  0 0 1 4
 
0 0 1 4

1 2 1 3 
R43 (  1) 
0 1 0 4 
  0 0 1 4
 
0 0 0 0
Centre for Distance Education 11.22 Acharya Nagarjuna University

  ( A)    A B   3  The system is consistent. Equivalent system is


x  2 y  z  3, y  4, z  4  x  1

 The system has unique solution.


11.7 Summary:
The theory of homogeneous and of non-homogeneous systems of linear equations has been discussed.

11.8 Technical Terms:


Linear equations, homogeneous and non-homogeneous linear equations, consistency, inconsistency.

11.9 Exercises:
11.9.1 Find the dimension and a basis of the solution space of x1 + 3x2 = 0, 2x1 + 6x2 = 0.

11.9.2 Solve: x1 + 2x2 - x3 = 0, 2x1 + x2 + x3 = 0.

11.9.3 Examine for consistency: x + 2y - z = 3
3x - y + 2z = 1
2x - 2y + 3z = 2
x - y + z = -1

11.9.4 Solve : 2 x  2 y  2 z  1

4x  2 y  z  2

6x  6 y   z  3    
11.9.5 Determine whether the following system has a solution:

x1  2 x2  3x3  1

x1  x2  x3  0

x1  2 x2  x3  3

11.10 Answers to Exercises:
11.9.1 The solution space has dimension 1, with basis {(-3, 1)}.
11.9.2 {k(-1, 1, 1) : k ∈ ℝ}

11.9.3 Consistent
11.9.4 If λ ≠ 2, the system has a unique solution;
if λ = 2, then x = 1/2 - k, y = k, z = 0, k ∈ ℝ.
11.9.5 System has a solution.
11.11 Model Examination Questions:

1. Prove that the system AX = O has n - r L.I. solutions, where r is the rank of A and n is the number of unknowns.

2. Prove that the system AX = B is consistent iff ρ(A) = ρ(A | B).

3. Solve the system x1 + 2x2 + x3 = 2, 3x1 + x2 - 2x3 = 1, 4x1 - 3x2 - x3 = 3, 2x1 + 4x2 + 2x3 = 4.

4. Prove that the system AX = B has a unique solution iff A is invertible.

5. Show that the system x + 2y - z = 3, 3x - y + 2z = 1, 2x - 2y + 3z = 2, x - y + z = -1 is consistent, and solve it.

11.12 Reference Books:


1. Hoffman and Kunze - Linear Algebra, 2nd Edition, Prentice Hall.
2. Stephen H. Friedberg and others - Linear Algebra - Prentice Hall India Pvt. Ltd., New Delhi.

- A. Satyanarayana Murty

LESSON - 12

DIAGONALIZATION
12.1 Objective of the Lesson:
This lesson is concerned with the diagonalization problem. For a given linear operator T on a finite dimensional vector space V we study:

i) the existence of an ordered basis B for V such that [T]_B is a diagonal matrix, and

ii) if such a basis exists, the method of finding it.

A solution of the diagonalization problem leads to the concepts of eigenvalues and eigenvectors, and so we also study:

iii) finding eigenvalues and eigenvectors of linear transformations.

12.2 Structure of the Lesson: This lesson contains the following items.

12.3 Introduction
12.4 Diagonalizable linear operator - Eigen Vector and eigen values of a
linear operator
12.5 Worked Out Examples
12.6 Properties of eigen values
12.7 Similarity
12.8 Similarity of matrices using trace.
12.9 Trace of a linear operator
12.10 Determinant of a linear operator - relating theorems
12.11 Exercise
12.12 Diagonalizability
12.13 Worked out examples
12.14 Polynomial Splitting and algebraic multiplicity
12.15 Eigen space
12.16 Summary - Test for Diagonalization
12.17 Worked out examples
12.18 Positive integral power of a diagonalizable matrix - Examples
12.19 Invariant Subspaces
12.20 T-Cyclic subspaces generated by a non-zero vector
12.21 Cayley-Hamilton Theorem and Examples
12.22 Summary
12.23 Technical Terms
12.24 Model Questions
12.25 Exercise
12.26 Reference Books
12.3 Introduction:
In this lesson we introduce the important notions of eigen values and eigen vectors of a
linear operator and a square matrix defined over a field. Using these concepts we discuss the
diagonalization and diagonalizability of linear operators and matrices.

12.4 Diagonalizable Linear Operator:


12.4.1 Definition:
1) A linear operator T on a finite dimensional vector space V is called diagonalizable if there is an ordered basis B for V such that [T]_B is a diagonal matrix.

2) A square matrix A is called diagonalizable if LA (left multiplication transformation by


matrix A) is diagonalizable.
12.4.2 Eigen vector and eigen value of a linear operator :
Definition: Let T be a linear operator on a vector space V. A nonzero vector v ∈ V is called an eigenvector of T if there exists a scalar λ such that T(v) = λv. The scalar λ is called the eigenvalue of T corresponding to the eigenvector v.

12.4.3 Definition: Let A be in Mn×n(F). A nonzero vector v ∈ Fⁿ is called an eigenvector of A if v is an eigenvector of L_A; that is, if Av = λv for some scalar λ. The scalar λ is called the eigenvalue of A corresponding to the eigenvector v.
Note: i) The words characteristic vector, Latent Vectors, proper vector, spectral vector are also
used in place of eigen vector.
ii) Eigen values are also known as characteristic values, Latent roots, proper values, spec-
tral values.

iii) A vector is an eigen vector of a matrix A if and only if it is an eigen vector of LA .



iv) A scalar λ is an eigenvalue of A if and only if it is an eigenvalue of L_A.

In order to diagonalize a matrix or a linear operator we have to find a basis of eigen vectors
and the corresponding eigen values.
Before continuing our study of diagonalization problem, we give the method of computing
eigen values.
12.4.4 Method of Computing eigen Values:
Theorem: Let A be an n × n matrix with entries in the field F. Then a scalar λ is an eigenvalue of A if and only if det(A - λI_n) = 0.

Proof: A scalar λ is an eigenvalue of A if and only if there exists a nonzero vector v ∈ Fⁿ such that Av = λv, that is, (A - λI_n)v = 0. This holds if and only if A - λI_n is not invertible, which is equivalent to the statement that det(A - λI_n) = 0.
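For example, for the matrix A = [5 4; 1 2] of W.E. 2 below, det(A - λI) vanishes exactly at the eigenvalues; a quick numpy check:

```python
import numpy as np

A = np.array([[5, 4], [1, 2]], dtype=float)

# det(A - lam*I) is zero at the eigenvalues 6 and 1, nonzero elsewhere
for lam in (6.0, 1.0, 2.0):
    print(lam, np.linalg.det(A - lam * np.eye(2)))
```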

12.4.5 Characteristic matrix of the given matrix A:


Definition: Let A be an n × n matrix with entries in the field F, and let λ be a scalar. Then the matrix (A - λI) is called the characteristic matrix of A.
12.4.6 Characteristic polynomial of A:

Definition: Let A be an n × n matrix with entries in the field F. Then the polynomial f(λ) = det(A - λI), of degree n in λ, is called the characteristic polynomial of the square matrix A; f(λ) is also called the characteristic function of A.
Note: By Theorem 12.4.4 it follows that the eigenvalues of a matrix are the zeros of its characteristic polynomial.
12.4.6A Characteristic polynomial of a linear operator:
Definition: Let T be a linear operator on an n-dimensional vector space V with an ordered basis B. We define the characteristic polynomial f(λ) of T to be the characteristic polynomial of A = [T]_B,
i.e. f(λ) = det(A - λI_n).

Note: The characteristic polynomial of an operator T is defined by det(T - λI).

12.4.6B Characteristic Equation:

The equation f(λ) = |A - λI| = 0 is called the characteristic equation of A.
Note: λ is a characteristic value of the matrix A if and only if det(A - λI) = 0.


12.4.7 Definition: Spectrum: The set of all characteristic values of A is called the spectrum of A.
12.4.8 Short cut method:
Working procedure to find the characteristic polynomial of a matrix A.
Let A = [a11 a12 a13; a21 a22 a23; a31 a32 a33].
The characteristic equation of A in the characteristic root λ is
λ³ - tr(A)·λ² + (A11 + A22 + A33)·λ - det A = 0,
where A11, A22, A33 denote the cofactors of the diagonal elements a11, a22 and a33 respectively. Observe that the coefficients of the characteristic polynomial of a 3 × 3 square matrix A appear with alternating signs, as follows:
S1 = tr(A) = sum of the entries of the principal diagonal,
S2 = A11 + A22 + A33, where
A11 = (-1)^{1+1} |a22 a23; a32 a33|, A22 = (-1)^{2+2} |a11 a13; a31 a33|, A33 = (-1)^{3+3} |a11 a12; a21 a22|,
S3 = det A.
Here each S_k is the sum of all principal minors of A of order k.
Note: i) If A is an n × n square matrix, then its characteristic equation is
λⁿ - S1·λ^{n-1} + S2·λ^{n-2} - ... + (-1)ⁿ·Sn = 0,
where S_k is the sum of the principal minors of order k.
ii) For a diagonal element of a square matrix, its minor and cofactor are the same.
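The rule can be checked against a matrix whose eigenvalues are known. A sketch in Python with numpy (the helper name `char_poly_coeffs` is mine); for the triangular matrix below the eigenvalues are 2, 3, 5, so the characteristic equation must be (λ-2)(λ-3)(λ-5) = λ³ - 10λ² + 31λ - 30 = 0:

```python
import numpy as np
from itertools import combinations

def char_poly_coeffs(A):
    """Return [S1, ..., Sn], where S_k is the sum of all principal
    minors of A of order k; the characteristic equation is
    lam^n - S1*lam^(n-1) + S2*lam^(n-2) - ... + (-1)^n * Sn = 0."""
    n = A.shape[0]
    return [round(float(sum(np.linalg.det(A[np.ix_(idx, idx)])
                            for idx in combinations(range(n), k))), 6)
            for k in range(1, n + 1)]

A = np.array([[2, 1, 0], [0, 3, 4], [0, 0, 5]], dtype=float)
print(char_poly_coeffs(A))  # [10.0, 31.0, 30.0]
```

Here S1 = tr(A) = 10, S2 = 6 + 10 + 15 = 31 and S3 = det A = 30, as the shortcut predicts.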
12.4.9 Show that a square matrix need not possess eigen values.

Solution: Consider the matrix A = [0 1; -1 0] over the field of reals. Its characteristic equation is |A - λI| = 0,
i.e. det([0 1; -1 0] - λ[1 0; 0 1]) = 0
⇒ det [-λ 1; -1 -λ] = 0 ⇒ λ² + 1 = 0,
which has no solution in the field of real numbers. So A has no characteristic value, and hence no characteristic vector, over the field of reals.
However, if A is regarded as a complex matrix, then its characteristic equation λ² + 1 = 0 has two distinct roots i, -i over the field of complex numbers, and consequently A has two distinct eigenvectors.
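numpy confirms this: computed over ℂ, the same matrix has the two eigenvalues ±i (a quick check):

```python
import numpy as np

A = np.array([[0, 1], [-1, 0]], dtype=float)

# the two roots of lam^2 + 1 = 0, i.e. -i and i
vals = sorted((complex(z) for z in np.linalg.eigvals(A)), key=lambda z: z.imag)
print(vals)
```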

12.4.10 Theorem: Let A ∈ Mn×n(F). Then:
i) the characteristic polynomial of A is a polynomial of degree n with leading coefficient (-1)ⁿ;
ii) A has at most n distinct eigenvalues.

Proof: Let A = [aij]n×n, where the entries of A belong to the field F.
The characteristic polynomial of A is |A - λI|, i.e. the determinant
| a11-λ  a12   ...  a1n  |
| a21    a22-λ ...  a2n  |
| ...                    |
| an1    an2   ...  ann-λ |
Expanding the determinant, we get a polynomial of the form
(-1)ⁿλⁿ + a1·λ^{n-1} + a2·λ^{n-2} + ... + an,
so the leading coefficient is (-1)ⁿ, and as the polynomial is of degree n, it cannot have more than n zeros. So A cannot have more than n eigenvalues.

Note: If T : V → V is a linear operator with dim V = n, then A = [T]_B is an n × n matrix, so det(A - λI) is a polynomial of degree n. Hence neither A nor T can have more than n distinct eigenvalues.
12.4.11 Procedure to find eigen values and eigen vectors:

Let A = [aij]n×n be a square matrix of order n.

Step 1: Write the characteristic equation of A, given by |A - λI| = 0. This is an equation of degree n in λ.

Step 2: Solve the equation |A - λI| = 0 to get n roots λ1, λ2, ..., λn, which are the eigenvalues of A.

Step 3: The corresponding eigenvectors of A are given by the nonzero vectors v = (v1, v2, ..., vn) satisfying the equation Av = λi·v, i.e. (A - λi·I)v = 0, where i = 1, 2, ..., n.

12.4.12 Theorem: A linear operator T on a finite dimensional vector space V is diagonalizable if and only if there exists an ordered basis B for V consisting of eigenvectors of T. Furthermore, if T is diagonalizable, B = {v1, v2, ..., vn} is an ordered basis of eigenvectors of T and D = [T]_B, then D is a diagonal matrix and d_jj is the eigenvalue corresponding to v_j for 1 ≤ j ≤ n.

Proof: Let V be a finite dimensional vector space and T a linear operator on V. Suppose T is diagonalizable. Then there exists an ordered basis B = {v1, v2, ..., vn} for V such that D = [T]_B is a diagonal matrix. Then for each vector v_j ∈ B, we have
T(v_j) = Σ (i = 1 to n) d_ij · v_i = d_jj · v_j = λ_j · v_j, where λ_j = d_jj.

Conversely, if B = {v1, v2, ..., vn} is an ordered basis of V such that T(v_j) = λ_j·v_j for some scalars λ1, λ2, ..., λn, then T(v_j) = 0·v1 + 0·v2 + ... + λ_j·v_j + ... + 0·vn.
Then clearly [T]_B = diag(λ1, λ2, ..., λn), which is a diagonal matrix.

In the preceding paragraph, each vector v in the basis B satisfies the condition T(v) = λv for some scalar λ. Moreover, as v lies in a basis, v is nonzero. Hence the theorem.

Note: To diagonalize a matrix or a linear operator we have to find a basis of eigen vectors and the
corresponding eigen values.

12.5 Worked Out Examples:

1 3 1  3
W.E. 1: Let A    , B  v1 , v2  where v1    , v2    is an ordered basis of R 2 . Prove
4 2  1  4
that v1 , v2 are eigen vectors of A. Find  LA B . Show that A and  LA B are diagonalizable.
Rings and Linear Algebra 12.7 Diagonalization

 1 3 1 3
Solution: Given A    , v1    , v2   
 4 2 1  4

1 3  1  1 3 2 1


LA (v1 )            2    2v1
4 2 1 4 2 21  2  1

Where LA  T .

So v1 is an eigen vector of L A and hence v1 is an eigen vector of A. Here 1  2 is the eigen.

value corresponding to v1 .

1 3  3  3  12  15  3
Further more LA (v2 )            5    5v2
 4 2   4  12  8  20  4

So v2 is an eigen vector of LA and so v2 is an eigen vector of A; with the corresponding

eigen value 2  5 . Note that B  v1 , v2  is an ordered basis of R 2 consisting of eigen vectors of

both A and LA and so A and LA are diagonalizable.

 2 0 
Further more  LA B    is a diagonal matrix.
 0 5

5 4 
W.E. 2: Determine the eigen values and eigen vectors of the matrix A   .
1 2 

Solution: The characteristic equation of A is A   I  0 .

5 4
 0
1 2

 (5   )(2   )  4  0   2  7  6  0

 (  6)(  1)  0    6,1
So the eigen values of A are 6, 1.

To find the eigen vector corresponding to   6 :


Centre for Distance Education 12.8 Acharya Nagarjuna University

 v1 
The eigen vector v '    of A Corresponding to the eigen value 6 are given by the non zero
 v2 
solution of the equation ( A  6 I )v '  O

5  6 4   v1 
 O
 1 2  6 v2 

 1 4   v1  0 
    O 
 1 4 v2  0 

R2  R1 gives

 1 4   v1  0 
 0 0   v   0 
  2  

 v1  4v2  0  v1  4v2 putting v2  1 we get v1  4 .

4
So v '    is an eigen vector of A; Corresponding to the eigen value 6.
1 

The set of all eigen values of A corresponding to the eigen value 6 is given by C 1 v ' where
C1 is a nonzero scalar..

The eigen value v of A corresponding to the eigen value 1 are given by the non zero solu-
tion of the equation ( A   I )v  O

 ( A  1I )v  O

 4 4   v1  0 
     
 1 1   v2  0 

 4 4   v1  0 
4R2  R1 gives  0 0  v   0 
  2  

 4v1  v2  0  v1  v2

Let v1  1 them v2  1
Rings and Linear Algebra 12.9 Diagonalization

1 
So v "    is an eigen vector of A corresponding to the eigen value 1. Every non zero
  1
 1 
multiple of v " . Which is of the form C 2   where C2  0 is an eigen vector corresponding to
  1
the eigen value 1.
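Both eigenpairs can be verified by direct multiplication; a quick numpy check:

```python
import numpy as np

A = np.array([[5, 4], [1, 2]], dtype=float)
v1 = np.array([4, 1], dtype=float)    # eigenvector for lam = 6
v2 = np.array([1, -1], dtype=float)   # eigenvector for lam = 1

print(A @ v1)  # equals 6 * v1
print(A @ v2)  # equals 1 * v2
```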

W.E. 3: T : R³(R) → R³(R) is the linear operator defined by
T(a, b, c) = (-7a + 4b - 10c, -4a + 3b - 8c, 2a - b + 2c).
Find the eigenvalues of T and an ordered basis B' for R³(R) such that [T]_{B'} is a diagonal matrix.

Solution: Consider the usual ordered basis B = {e1, e2, e3} for R³, where e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1).
T(e1) = T(1, 0, 0) = (-7, -4, 2) = -7e1 - 4e2 + 2e3
T(e2) = T(0, 1, 0) = (4, 3, -1) = 4e1 + 3e2 - 1e3
T(e3) = T(0, 0, 1) = (-10, -8, 2) = -10e1 - 8e2 + 2e3
∴ A = [T]_B = [-7 4 -10; -4 3 -8; 2 -1 2]  ........... (1)

If λ is an eigenvalue of A, then |A - λI| = 0.
From (1):
trace of A = -7 + 3 + 2 = -2
A11 = (-1)^{1+1} · |3 -8; -1 2| = 6 - 8 = -2
A22 = (-1)^{2+2} · |-7 -10; 2 2| = -14 + 20 = 6
A33 = (-1)^{3+3} · |-7 4; -4 3| = -21 + 16 = -5
Their sum = A11 + A22 + A33 = -2 + 6 - 5 = -1
det A = -7(6 - 8) - 4(-8 + 16) - 10(4 - 6) = -7(-2) - 4(8) - 10(-2) = 2

The characteristic equation is
λ³ - (trace of A)·λ² + (A11 + A22 + A33)·λ - det A = 0
⇒ λ³ + 2λ² - λ - 2 = 0 = f(λ), say.
f(1) = 1 + 2 - 1 - 2 = 0 ⇒ (λ - 1) is a factor. By synthetic division, the other factor is λ² + 3λ + 2 = (λ + 1)(λ + 2).
Hence the characteristic equation is (λ - 1)(λ + 1)(λ + 2) = 0, and the characteristic values are 1, -1, -2.

i) To find the characteristic vector corresponding to λ = 1: (A - λI)v = O gives
[-8 4 -10; -4 2 -8; 2 -1 1](v1, v2, v3)ᵀ = O
2R2 - R1 and 4R3 + R1 give [-8 4 -10; 0 0 -6; 0 0 -6]v = O
and R3 - R2 gives [-8 4 -10; 0 0 -6; 0 0 0]v = O.
-6v3 = 0 ⇒ v3 = 0
-8v1 + 4v2 - 10v3 = 0 ⇒ 4v2 = 8v1 ⇒ v2 = 2v1
Put v1 = 1; then v2 = 2.
So v' = (1, 2, 0)ᵀ, and every nonzero scalar multiple of it, is an eigenvector.

ii) To find the eigenvector corresponding to λ = -1: (A + I)v = O gives
[-6 4 -10; -4 4 -8; 2 -1 3](v1, v2, v3)ᵀ = O
R1(-1/2) and R2(-1/4) give [3 -2 5; 1 -1 2; 2 -1 3]v = O
R1 - 3R2 and R3 - 2R2 give [0 1 -1; 1 -1 2; 0 1 -1]v = O
and R3 - R1 gives [0 1 -1; 1 -1 2; 0 0 0]v = O.
v2 - v3 = 0 ⇒ v2 = v3
v1 - v2 + 2v3 = 0 ⇒ v1 + v3 = 0 ⇒ v3 = -v1
Putting v1 = 1: v3 = v2 = -1.
So v'' = (1, -1, -1)ᵀ, and every nonzero scalar multiple of it, is an eigenvector.

iii) To find the eigenvector corresponding to λ = -2: (A + 2I)v = O gives
[-5 4 -10; -4 5 -8; 2 -1 4](v1, v2, v3)ᵀ = O
R1 ↔ R3 gives [2 -1 4; -4 5 -8; -5 4 -10]v = O
R2 + 2R1 and 2R3 + 5R1 give [2 -1 4; 0 3 0; 0 3 0]v = O
and R3 - R2 gives [2 -1 4; 0 3 0; 0 0 0]v = O.
3v2 = 0 ⇒ v2 = 0, and 2v1 - v2 + 4v3 = 0 ⇒ 2v1 = -4v3 ⇒ v1 = -2v3.
Put v3 = 1; then v1 = -2, v2 = 0.
So v''' = (-2, 0, 1)ᵀ, and every nonzero scalar multiple of it, is an eigenvector.

The basis for which [T]_{B'} is a diagonal matrix is B' = {v', v'', v'''},
i.e. B' = {(1, 2, 0), (1, -1, -1), (-2, 0, 1)}.

W.E. 4: Let T be the linear operator on P2(R) defined by T(f(x)) = f(x) + (x + 1)f'(x). Find the eigenvalues of T.

Solution: B = {1, x, x²} is the standard basis of P2(R). We are given the linear operator on P2(R) defined by T(f(x)) = f(x) + (x + 1)f'(x).
T(1) = 1 + (x + 1)(0) = 1 = 1(1) + 0·x + 0·x²
T(x) = x + (x + 1)(1) = 2x + 1 = 1(1) + 2·x + 0·x²
T(x²) = x² + (x + 1)(2x) = 2x + 3x² = 0(1) + 2·x + 3·x²
So A = [T]_B = [1 1 0; 0 2 2; 0 0 3].
The characteristic equation is |A - λI| = 0:
|1-λ 1 0; 0 2-λ 2; 0 0 3-λ| = 0
⇒ (1 - λ)(2 - λ)(3 - λ) = 0 (expanding along the first column)
⇒ λ = 1, 2, 3.
12.6 Properties of eigen values:
12.6.1 Theorem: Let T be a linear operator on a vector space V, and let λ be an eigenvalue of T. A vector v ∈ V is an eigenvector of T corresponding to λ if and only if v ≠ 0 and v ∈ N(T - λI), the null space of the linear operator (T - λI).

Proof: Let v be an eigenvector corresponding to the eigenvalue λ.
Then T(v) = λv ⇒ (T - λI)v = 0,
so v belongs to the null space N(T - λI) of T - λI.
Conversely, let v ≠ 0 with v ∈ N(T - λI).
⇒ (T - λI)v = 0 ⇒ T(v) = λv.
So v is an eigenvector corresponding to the eigenvalue λ.
Hence the theorem.

12.6.2 Theorem: Prove that a square matrix A and its transpose Aᵀ have the same set of eigenvalues.

Proof: Characteristic polynomial of A = det(A - λI)
= det(A - λI)ᵀ, since the determinant of a matrix and its transpose are equal
= det(Aᵀ - (λI)ᵀ)
= det(Aᵀ - λIᵀ)
= det(Aᵀ - λI), since Iᵀ = I
= characteristic polynomial of Aᵀ.
So A and Aᵀ have the same characteristic polynomial, and hence the same set of eigenvalues.
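A quick numerical illustration with the matrix A = [5 4; 1 2] of W.E. 2:

```python
import numpy as np

A = np.array([[5, 4], [1, 2]], dtype=float)

# A and A.T have the same characteristic polynomial,
# hence the same multiset of eigenvalues {1, 6}
print(sorted(np.linalg.eigvals(A).real))
print(sorted(np.linalg.eigvals(A.T).real))
```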
12.6.3 Show that zero is a characteristic root of a matrix if and only if the matrix is singular.
Solution: 0 is a characteristic value of A
⇔ λ = 0 satisfies the equation |A - λI| = 0
⇔ |A - 0I| = 0 ⇔ |A| = 0 ⇔ A is singular.

Note: i) If λ is a characteristic root of a nonsingular matrix, then λ ≠ 0.
ii) At least one characteristic root of every singular matrix is zero.

12.6.4 If λ is a characteristic root of a matrix A and K is a scalar, show that K + λ is a characteristic root of the matrix A + KI.

Solution: Let λ be a characteristic root of the matrix A, and let v be a corresponding characteristic vector.
Then Av = λv, and
(A + KI)v = Av + K(Iv) = λv + Kv = (λ + K)v.
Since v ≠ 0, λ + K is a characteristic root of the matrix A + KI, and v is a corresponding characteristic vector.

12.6.5 If λ1, λ2, ..., λn are the characteristic roots of an n × n matrix A and k is a scalar, then the characteristic roots of A + kI are λ1 + k, λ2 + k, ..., λn + k.

Solution: Given that λ1, λ2, ..., λn are the characteristic roots of A, the characteristic polynomial of A is
|A - λI| = (λ1 - λ)(λ2 - λ) ... (λn - λ).
The characteristic polynomial of A + kI is
|A + kI - λI| = |A - (λ - k)I|
= (λ1 - (λ - k))(λ2 - (λ - k)) ... (λn - (λ - k))
= ((λ1 + k) - λ)((λ2 + k) - λ) ... ((λn + k) - λ).
Hence the characteristic roots of A + kI are λ1 + k, λ2 + k, ..., λn + k.

12.6.6 If A is non-singular, prove that the eigen values of A⁻¹ are the reciprocals of the eigen values of A.

Solution: Let λ be an eigen value of A and v a corresponding eigen vector; then Av = λv.
⇒ v = A⁻¹(λv) = λ(A⁻¹v)
Rings and Linear Algebra 12.17 Diagonalization
⇒ v = λA⁻¹v, and since A is non-singular, λ ≠ 0,
⇒ A⁻¹v = (1/λ)v
So 1/λ is an eigen value of A⁻¹, and v is a corresponding eigen vector.

Conversely, suppose μ is an eigen value of A⁻¹. Since A is non-singular, A⁻¹ is also non-singular and (A⁻¹)⁻¹ = A. So it follows from the first part that 1/μ is an eigen value of A. Thus each eigen value of A⁻¹ is the reciprocal of some eigen value of A.

Hence the eigen values of A⁻¹ are nothing but the reciprocals of the eigen values of A.

12.6.7 Corollary: If λ₁, λ₂, ..., λₙ are the eigen values of a non-singular matrix A, then λ₁⁻¹, λ₂⁻¹, ..., λₙ⁻¹ are the eigen values of A⁻¹.

Solution: The solution follows from the above proof.
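The reciprocal relationship can be checked numerically as well. The sketch below (illustrative, not from the text) compares the eigenvalues of a 2 × 2 matrix with those of its inverse:

```python
def eig2(a, b, c, d):
    # eigenvalues of [[a, b], [c, d]] via the quadratic formula
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

def inv2(a, b, c, d):
    # inverse of a non-singular 2x2 matrix [[a, b], [c, d]]
    det = a * d - b * c
    return (d / det, -b / det, -c / det, a / det)

A = (2, 0, 1, 3)     # lower triangular, so its eigenvalues are 2 and 3
lams = eig2(*A)
inv_lams = eig2(*inv2(*A))
assert lams == [2.0, 3.0]
# the eigenvalues of the inverse are 1/3 and 1/2, up to float rounding
assert all(abs(x - y) < 1e-12
           for x, y in zip(inv_lams, sorted(1 / l for l in lams)))
```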

12.6.8 Theorem: If λ₁, λ₂, ..., λₙ are the eigen values of A, then Kλ₁, Kλ₂, ..., Kλₙ are the eigen values of KA.

Proof: If K = 0, then KA = O and every eigen value of O is 0. Thus 0λ₁, 0λ₂, ..., 0λₙ are the eigen values of KA when λ₁, λ₂, ..., λₙ are the eigen values of A.
So let us suppose that K ≠ 0.
We have KA − λKI = K(A − λI)
⇒ |KA − λKI| = Kⁿ|A − λI|, since |KB| = Kⁿ|B| for an n × n matrix B.
If K ≠ 0, then |KA − λKI| = 0 if and only if |A − λI| = 0,
i.e. Kλ is an eigen value of KA if and only if λ is an eigen value of A.
Thus if λ₁, λ₂, ..., λₙ are the eigen values of A, then Kλ₁, Kλ₂, ..., Kλₙ are the eigen values of KA.
12.6.9 Corollary: Let λ ≠ 0 be an eigen value of an invertible operator T. Show that λ⁻¹ is an eigen value of T⁻¹.

Solution: As λ is an eigen value of T, there exists a non-zero v ∈ V such that T(v) = λv.
⇒ v = T⁻¹(λv) = λT⁻¹(v)
So λ⁻¹v = λ⁻¹ · λ T⁻¹(v)
⇒ λ⁻¹v = T⁻¹(v)
Hence λ⁻¹ is an eigen value of T⁻¹.
12.6.10 If λ is a characteristic root of a non-singular matrix A, show that λʳ is a characteristic root of Aʳ, r being an integer.

Solution: λ is a characteristic root of a non-singular matrix, so λ ≠ 0.

Case i) Let r > 0. Let v be a characteristic vector corresponding to λ; then Av = λv.
So Aʳv = Aʳ⁻¹(Av) = Aʳ⁻¹(λv) = λ(Aʳ⁻¹v)
= λAʳ⁻²(Av)
= λAʳ⁻²(λv)
= λ²Aʳ⁻²v
Proceeding like this we get
Aʳv = λʳv
Thus λʳ is a characteristic root of Aʳ, and v is also a characteristic vector of Aʳ.

Case ii) Let r = 0. Then A⁰ = I, and the characteristic roots of I are all unity, i.e. λ⁰ = 1.

Case iii) Let r = −1. Then Av = λv ⇒ A⁻¹(Av) = A⁻¹(λv)
So Iv = λ(A⁻¹v)
⇒ λ⁻¹(Iv) = λ⁻¹ · λ(A⁻¹v)
⇒ λ⁻¹v = A⁻¹v, since λ⁻¹λ = 1
⇒ λ⁻¹ is a characteristic root of A⁻¹,
i.e. when r = −1, λʳ is a characteristic root of Aʳ.

Case iv) Let r be negative, say r = −s, where s is a positive integer. Then Aʳ = A⁻ˢ = (A⁻¹)ˢ.
By Case iii), λ⁻¹ is a characteristic root of A⁻¹.
By Case i), (λ⁻¹)ˢ is a characteristic root of (A⁻¹)ˢ.
⇒ λʳ is a characteristic root of Aʳ.
Hence the theorem.
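Case i) can be illustrated for r = 2 with a small numerical sketch (not from the text): squaring a matrix squares its eigenvalues.

```python
def matmul2(X, Y):
    # product of two 2x2 matrices given as pairs of rows
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def eig2(M):
    # eigenvalues of a 2x2 matrix from its characteristic polynomial
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

A = ((2, 1), (1, 2))      # eigenvalues 1 and 3
A2 = matmul2(A, A)        # its square should have eigenvalues 1 and 9
assert eig2(A) == [1.0, 3.0]
assert eig2(A2) == [1.0, 9.0]
```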

12.6.11 Corollary: If λ₁, λ₂, ..., λₙ are the characteristic roots of A, then the characteristic roots of A² are λ₁², λ₂², ..., λₙ².

Solution: The solution follows from the above theorem.
12.6.12 Let T be a linear operator on a vector space V. Let v be an eigen vector of T corresponding to the eigen value λ. For any positive integer n, prove that v is an eigen vector of Tⁿ corresponding to the eigen value λⁿ.

Solution: Given that v is an eigen vector of the linear operator T, we have
Tv = λv, where v ≠ O .......... (1)
We have to prove Tⁿ(v) = λⁿv, n being a positive integer. We prove the result by induction. For n = 1 the result is true because of (1).
Let the result be true for a positive integer m,
i.e. Tᵐ(v) = λᵐv .......... (2)
Then Tᵐ⁺¹(v) = (TᵐT)(v) = Tᵐ(Tv)
= Tᵐ(λv)
= λ(Tᵐv), since Tᵐ is linear,
= λ(λᵐv), by (2).
So Tᵐ⁺¹(v) = λᵐ⁺¹v; so the statement is true for m + 1.
The statement is true for n = 1, and when it is assumed to be true for m, it is proved to be true for m + 1. Hence by mathematical induction the statement is true for all positive integral values of n.
12.6.13 Let T be a linear operator on a vector space V over a field F, and let g(x) be a polynomial with coefficients from F. Prove that if v is an eigen vector of T with corresponding eigen value λ, then g(T)(v) = g(λ)v, i.e. v is an eigen vector of g(T) with corresponding eigen value g(λ).

Proof:
Let g(x) = a₀ + a₁x + a₂x² + ... + aₘxᵐ, where aᵢ ∈ F.
Let T(v) = λv.
⇒ T²(v) = λ²v, and in general Tʳ(v) = λʳv.
Now g(T) = a₀I + a₁T + a₂T² + ... + aₘTᵐ
⇒ g(T)(v) = a₀v + a₁T(v) + a₂T²(v) + ... + aₘTᵐ(v)
= a₀v + a₁λv + a₂λ²v + ... + aₘλᵐv
= (a₀ + a₁λ + a₂λ² + ... + aₘλᵐ)v
= g(λ)v
Hence the theorem.
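The identity g(T)(v) = g(λ)v can be verified concretely. The sketch below (illustrative, not from the text) applies g(x) = x² + 2x + 1 to a 2 × 2 matrix and, separately, to one of its eigenvalues:

```python
def matvec2(M, v):
    # apply a 2x2 matrix (given as a pair of rows) to a vector
    (a, b), (c, d) = M
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

A = ((2, 1), (1, 2))
v = (1, 1)                  # eigenvector of A with eigenvalue lam = 3
lam = 3

# g(x) = x^2 + 2x + 1, so g(A)v = A(Av) + 2Av + v, computed entrywise
Av = matvec2(A, v)
AAv = matvec2(A, Av)
gAv = tuple(AAv[i] + 2 * Av[i] + v[i] for i in range(2))

g_lam = lam ** 2 + 2 * lam + 1     # g(3) = 16
assert Av == (3, 3)
assert gAv == (g_lam * v[0], g_lam * v[1])
```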


12.6.14 Theorem: Let A be an n × n triangular matrix over F. Prove that the eigen values of A are its diagonal elements.

Proof: Let
A = [a₁₁ a₁₂ a₁₃ ... a₁ₙ]
    [0   a₂₂ a₂₃ ... a₂ₙ]
    [ ...               ]
    [0   0   0   ... aₙₙ]
i.e. A = [aᵢⱼ]ₙₓₙ, where aᵢⱼ = 0 for i > j.
The characteristic equation of A is |A − λI| = 0:
|a₁₁−λ a₁₂  ...  a₁ₙ |
|0     a₂₂−λ ... a₂ₙ | = 0
| ...               |
|0     0    ... aₙₙ−λ|
⇒ (a₁₁ − λ)(a₂₂ − λ)...(aₙₙ − λ) = 0
So the eigen values of A are a₁₁, a₂₂, ..., aₙₙ.
So in a triangular matrix, the eigen values are the diagonal elements of the matrix.
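This can be confirmed numerically for a particular upper triangular matrix. The sketch below (illustrative, not from the text) checks that each diagonal entry is a root of det(A − λI):

```python
def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = ((3, 1, 0), (0, 5, 4), (0, 0, 7))   # upper triangular; diagonal: 3, 5, 7

def char_poly(lam):
    # evaluates det(A - lam * I)
    shifted = tuple(tuple(A[r][c] - (lam if r == c else 0) for c in range(3))
                    for r in range(3))
    return det3(shifted)

# each diagonal entry is a root of the characteristic polynomial ...
assert all(char_poly(lam) == 0 for lam in (3, 5, 7))
# ... while a value off the diagonal is not
assert char_poly(4) != 0
```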
12.6.15 Corollary: Show that the characteristic values of a diagonal matrix D are the elements on the diagonal.

Proof: Let
D = [a₁₁ 0   ... 0 ]
    [0   a₂₂ ... 0 ]
    [0   0   ... aₙₙ]
The characteristic equation of D is |D − λI| = 0:
|a₁₁−λ 0     ... 0   |
|0     a₂₂−λ ... 0   | = 0
|0     0    ... aₙₙ−λ|
⇒ (a₁₁ − λ)(a₂₂ − λ)...(aₙₙ − λ) = 0
which shows that the characteristic values are a₁₁, a₂₂, ..., aₙₙ, which are nothing but the elements on the diagonal.
12.7 Similarity:
12.7.1 Definition i): Two n × n matrices A and B are said to be similar if there exists a non-singular matrix P such that AP = PB, i.e. A = PBP⁻¹.

Definition ii): Two linear operators T₁ and T₂ on V are said to be similar if there exists a non-singular linear operator T on V such that T₁T = TT₂, i.e. T₁ = TT₂T⁻¹.
12.7.2 Show that similar matrices have the same characteristic polynomial and hence the same eigen values.

Proof: Let A and B be any two similar matrices. Then for some invertible matrix P we have B = P⁻¹AP.
det(B − λI) = det(P⁻¹AP − λI)
= det(P⁻¹(A − λI)P)
= det(P⁻¹) det(A − λI) det(P) = det(P⁻¹) det(P) det(A − λI)
= det(P⁻¹P) det(A − λI)
= det(I) det(A − λI)
= 1 · det(A − λI)
= det(A − λI)
This shows that the matrices A and B have the same characteristic polynomial. Hence A and B have the same characteristic roots.
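Since a 2 × 2 matrix's characteristic polynomial t² − (trace)t + det is determined by its trace and determinant, the invariance can be checked by comparing those two numbers for A and P⁻¹AP. A small sketch (illustrative, not from the text):

```python
def matmul2(X, Y):
    # product of two 2x2 matrices given as pairs of rows
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

A = ((2, 1), (1, 2))
P = ((1, 1), (0, 1))
P_inv = ((1, -1), (0, 1))           # inverse of P, since det P = 1

B = matmul2(P_inv, matmul2(A, P))   # B = P^{-1} A P, similar to A

def trace_det(M):
    # these two numbers fix the characteristic polynomial of a 2x2 matrix
    (a, b), (c, d) = M
    return (a + d, a * d - b * c)

assert trace_det(A) == trace_det(B)   # same characteristic polynomial
```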
12.7.3 Corollary: A square matrix B is similar to a diagonal matrix D. Show that the characteristic roots of B are the diagonal elements of D.
Proof: Let D be the diagonal matrix of order n to which B is similar.
The characteristic roots of D are the elements along the principal diagonal of D.
By the above theorem, B and D have the same characteristic equation and hence the same characteristic roots.
So the characteristic roots of B are the diagonal elements of D.
Hence the theorem.
12.8 Similarity of matrices using Trace:

12.8.1 Theorem: Let A and B be two square matrices of order n. Then show that trace(AB) = trace(BA).

Proof: Let A = [aᵢⱼ]ₙₓₙ and B = [bᵢⱼ]ₙₓₙ.
AB = [cᵢⱼ]ₙₓₙ, where cᵢⱼ = Σₖ₌₁ⁿ aᵢₖ bₖⱼ
BA = [dᵢⱼ]ₙₓₙ, where dᵢⱼ = Σₖ₌₁ⁿ bᵢₖ aₖⱼ
trace(AB) = Σᵢ₌₁ⁿ cᵢᵢ = Σᵢ₌₁ⁿ (Σₖ₌₁ⁿ aᵢₖ bₖᵢ)
Interchanging the order of summation in the last sum,
= Σₖ₌₁ⁿ (Σᵢ₌₁ⁿ bₖᵢ aᵢₖ)
= Σₖ₌₁ⁿ dₖₖ = d₁₁ + d₂₂ + ... + dₙₙ
So trace(AB) = trace(BA).
Hence the theorem.
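The cancellation of the double sum can be seen directly in code. The sketch below (illustrative, not from the text) checks trace(AB) = trace(BA) for a pair of 3 × 3 matrices that do not commute:

```python
def matmul(X, Y):
    # product of two square matrices given as lists of rows
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    # sum of the diagonal entries
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2, 0], [3, -1, 4], [0, 5, 2]]
B = [[2, 0, 1], [1, 3, 0], [4, -2, 5]]

assert trace(matmul(A, B)) == trace(matmul(B, A))
# AB and BA themselves generally differ; only their traces agree
assert matmul(A, B) != matmul(B, A)
```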


12.8.2 Theorem: Prove that similar matrices have the same trace.
Proof: Let A and B be n × n matrices over F such that A is similar to B. We have to show that trace(A) = trace(B).
Since A is similar to B, there exists a non-singular matrix P such that A = P⁻¹BP.
trace(A) = trace(P⁻¹BP)
= trace(PP⁻¹B), since trace(AB) is equal to trace(BA),
= trace(IB)
= trace(B).
Hence similar matrices have the same trace.
12.9 Trace of a Matrix: The sum of the elements of a square matrix A lying along the principal diagonal is called the trace of the matrix. If A = [aᵢⱼ]ₙₓₙ, then trace(A) = a₁₁ + a₂₂ + ... + aₙₙ.

Definition: Trace of a linear operator:
Let T be a linear operator from V → V. Then the trace of T, written tr T, is the trace of M(T), where M(T) is the matrix of T in some basis of V.

12.9.1 To show that the definition of the trace of a linear operator is well defined, i.e. that the trace of a linear operator is independent of the basis of V:

Solution: Let M₁(T) and M₂(T) be the matrices of T in two different bases of V. We know that if T is a linear operator on an n-dimensional vector space V over a field F, and T has the matrix M₁(T) in the basis {v₁, v₂, ..., vₙ} and the matrix M₂(T) in the basis {w₁, w₂, ..., wₙ}, then there exists a non-singular matrix P of order n such that M₂(T) = P⁻¹M₁(T)P.
So M₁(T) and M₂(T) are similar matrices. But similar matrices have the same trace. Hence the trace of T does not depend upon any particular basis of V, and the above definition is meaningful: the trace depends only on T and not on any particular basis.
W.E. 5: Example: Let T : R² → R² be the linear transformation defined by T(x, y) = (2y, 3x − y).

In the basis {(1, 0), (0, 1)} the matrix of T is
[0  2]
[3 −1]
So trace of T = 0 + (−1) = −1.

Also, in the basis {(1, 3), (2, 5)} the matrix of T is
[−30 −48]
[ 18  29]
and trace of T = −30 + 29 = −1.

From this we observe that the trace of T does not depend on the basis of V.
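The basis-independence of the trace in W.E. 5 can be replayed numerically. The sketch below (illustrative, not from the text; it assumes T(x, y) = (2y, 3x − y) as in the example) computes the matrix of T in each basis by solving the 2 × 2 coordinate systems with Cramer's rule, then compares the diagonal sums:

```python
def T(x, y):
    # the linear transformation from W.E. 5
    return (2 * y, 3 * x - y)

def coords(w, u1, u2):
    # solve a*u1 + b*u2 = w by Cramer's rule (2x2 system)
    det = u1[0] * u2[1] - u2[0] * u1[1]
    a = (w[0] * u2[1] - u2[0] * w[1]) / det
    b = (u1[0] * w[1] - w[0] * u1[1]) / det
    return (a, b)

def trace_in_basis(u1, u2):
    # [T]_B has columns coords(T(u1)), coords(T(u2)); sum its diagonal
    a1, _ = coords(T(*u1), u1, u2)
    _, b2 = coords(T(*u2), u1, u2)
    return a1 + b2

assert trace_in_basis((1, 0), (0, 1)) == -1.0
assert trace_in_basis((1, 3), (2, 5)) == -1.0
```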

12.10 Determinant of a linear operator T on V(F):

12.10.1 Definition: Let T be a linear operator on a vector space V(F), and let [T]_B be the matrix of this linear operator relative to a basis B; then det T = det [T]_B.

12.10.2 Theorem: Prove that the determinant of a linear operator T on a vector space is unique.
Or
Prove that the determinant of a linear operator is independent of the choice of an ordered basis for V.
Proof:
Let [aᵢⱼ] and [bᵢⱼ] be the matrices of the linear operator T with respect to the bases B₁ and B₂ of V,
i.e. T : B₁ → [aᵢⱼ] and T : B₂ → [bᵢⱼ].
Then there exists an invertible matrix [cᵢⱼ] such that [bᵢⱼ] = [cᵢⱼ]⁻¹[aᵢⱼ][cᵢⱼ]
⇒ det[bᵢⱼ] = det([cᵢⱼ]⁻¹[aᵢⱼ][cᵢⱼ])
= det[cᵢⱼ]⁻¹ det[aᵢⱼ] det[cᵢⱼ]
i.e. det[bᵢⱼ] = det([cᵢⱼ]⁻¹[cᵢⱼ]) det[aᵢⱼ]
= det(I) det[aᵢⱼ]
= 1 · det[aᵢⱼ]
i.e. det[bᵢⱼ] = det[aᵢⱼ]
Hence the determinant of the operator T is unique even though the matrices of T are different with respect to the bases B₁ and B₂.
12.10.3 Theorem: If T₁ and T₂ are linear operators on a finite dimensional vector space V(F), then prove that det(T₁T₂) = (det T₁)(det T₂).

Proof: T₁ and T₂ are two linear operators on a finite dimensional vector space V(F). Choose B to be an ordered basis of V. Then the matrix of the operator T₁T₂ w.r.t. the basis B can be put in the form [T₁T₂]_B = [T₁]_B [T₂]_B.
So det[T₁T₂]_B = det([T₁]_B [T₂]_B)
= det[T₁]_B · det[T₂]_B ............ (1)
since, as we know, the determinant of the product of two matrices is equal to the product of their determinants.
Now by the definition det T = det[T]_B, and hence by (1) we have det(T₁T₂) = (det T₁)(det T₂).
12.10.4 T is a linear operator on a finite dimensional vector space V. Prove that T is invertible if and only if det(T) ≠ 0.

Proof: T is a linear operator on a finite dimensional vector space V. Let B be a basis of the vector space V.
If T is invertible, then TT⁻¹ = T⁻¹T = I and det(T⁻¹T) = det[I]_B, where [I]_B is the matrix of the identity operator.
⇒ det(T) · det(T⁻¹) = 1, since the determinant of the unit matrix is 1.
Now det T and det T⁻¹ are elements of the field F, and a field F is without zero divisors, i.e. ab = 0 ⇒ a = 0 or b = 0 (or both).
In particular, if ab = 1 in a field F, then a ≠ 0.
Hence, since det(T) · det(T⁻¹) = 1, we have det T ≠ 0.
Conversely, let det T ≠ 0. Then by the definition of det T we have det[T]_B ≠ 0
⇒ [T]_B is invertible.
Hence the operator T is invertible.
12.10.5 Theorem: T is a linear operator on a finite dimensional vector space V. Prove that if T is invertible, then det(T⁻¹) = (det T)⁻¹.

Solution: T is invertible, so TT⁻¹ = I = T⁻¹T.
det(TT⁻¹) = det(I) = det(T⁻¹T)
⇒ (det T) · (det T⁻¹) = 1
So det(T⁻¹) = (det T)⁻¹.
12.10.6 Let T be a linear operator on a finite dimensional vector space V. Then show that 0 is a characteristic value of T iff T is not invertible.
Solution: Case i) Let 0 be an eigen value of T. Then we have to prove T is singular.
As 0 is an eigen value of T, there exists a non-zero v in V such that T(v) = 0v.
⇒ T(v) = O (the zero vector)
⇒ T is singular, so T is not invertible.

Case ii) Converse:
Suppose T is not invertible; we have to show 0 is an eigen value of T.
As T is a linear operator on a finite dimensional vector space V and T is not invertible, T is singular. So there exists a non-zero vector v in V such that Tv = O = 0v.
⇒ 0 is a characteristic value of T, with v a corresponding characteristic vector.
Hence the theorem.

12.10.7 Corollary:
Prove that a linear operator on a finite dimensional vector space is invertible if and only if zero is not an eigen value of T.
Exercise 12.11:
1) T is a linear operator on R³ defined by
T(a, b, c) = (3a + 2b − 2c, −4a − 3b + 2c, −c),
and B = {(0, 1, 1), (1, −1, 0), (1, 0, 2)} is an ordered basis of R³. Compute [T]_B and determine whether B is a basis consisting of eigen vectors of T.

Ans) [T]_B = [−1 0  0]
             [ 0 1  0] ; yes
             [ 0 0 −1]

2) T is a linear operator on R² defined by T(a, b) = (−2a + 3b, −10a + 9b). Find the eigen values of T and an ordered basis B for V such that [T]_B is a diagonal matrix.

Ans) λ = 3, 4; B = {(3, 5), (1, 2)}

3) T is a linear operator on P₃(R) defined by T(f(x)) = f(x) + f(2)x. Find the eigen values of T and an ordered basis B for V such that [T]_B is a diagonal matrix.

Ans) λ = 1, 3; B = {2 − x, 4 − x², 8 − x³, x}

4) Find the eigen values of the matrix
[0 1 2]
[1 0 1]
[2 1 0]

Ans) −2, 1 + √3, 1 − √3

5) Find the characteristic polynomial of
A = [1 1 2]
    [0 3 2]
    [1 3 9]

Ans) −λ³ + 13λ² − 31λ + 17

6) If
A = [4 1 −1]
    [2 5 −2]
    [1 1  2]
then find i) all eigen values of A; ii) a maximal set S of linearly independent eigen vectors of A.

Ans) i) λ = 3, 3, 5; ii) {(1, −1, 0), (1, 0, 1), (1, 2, 1)} is a maximal set of linearly independent eigen vectors.

7) If
A = [1 1]
    [4 1]
find the eigen values and eigen vectors of A. Prove that A is diagonalizable. Obtain a basis for R² containing eigen vectors of A.

Ans) λ = 3, −1, with eigen vectors (1, 2) and (1, −2); {(1, 2), (1, −2)} is a basis of R².

8) Find the eigen values and eigen vectors of the following matrices:
i) [ 6 −2  2]    ii) [ 8 −6  2]
   [−2  3 −1]        [−6  7 −4]
   [ 2 −1  3]        [ 2 −4  3]

Ans) i) 2, 2, 8; for λ = 2 the eigen vectors are a(1, 2, 0) + b(1, 0, −2), where a, b are scalars not both zero, and for λ = 8 they are c(2, −1, 1), where c is any non-zero scalar.
ii) 0, 3, 15, with eigen vectors (1, 2, 2), (2, 1, −2), (2, −2, 1) respectively, and their non-zero scalar multiples.

9) Show that the matrices
[−10   6   3]     [0  6  16]
[−26  16   8] and [0 17  45]
[ 16 −10  −5]     [0 −6 −16]
are similar.

10) Prove that the matrix
A = [1 1] ∈ M₂ₓ₂(R)
    [1 1]
is diagonalizable.

11) Find all the eigen values and a basis for each eigen space of the linear operator T : R³ → R³ defined by T(a, b, c) = (2a + b, b − c, 2b + 4c).

Ans) eigen values 2, 3;
the eigen space of 2 is spanned by (1, 0, 0);
the eigen space of 3 is spanned by (1, 1, −2).

12) Find the eigen values of
[2 1 0]
[0 2 1]
[0 0 2]

Ans) 2, 2, 2

13) Show that
A = [ 0  h −g]     B = [ 0 −f  h]
    [−h  0  f] and     [ f  0 −g]
    [ g −f  0]         [−h  g  0]
have the same characteristic equation.
12.12 Diagonalizability:
We have seen in the preceding articles that not every linear operator or matrix is diagonalizable. We need a simple test to determine whether an operator or a matrix can be diagonalized, as well as a method for actually finding a basis of eigen vectors.
12.12.1 Theorem: Let T be a linear operator on a vector space V, and let λ₁, λ₂, ..., λ_k be distinct eigen values of T. If v₁, v₂, ..., v_k are eigen vectors of T such that λᵢ corresponds to vᵢ (1 ≤ i ≤ k), then {v₁, v₂, ..., v_k} is linearly independent.

Proof: We prove the theorem by mathematical induction on k. If k = 1, then v₁ ≠ O since v₁ is an eigen vector, and so {v₁} is linearly independent. We assume the theorem is true for k − 1 distinct eigen values, where k − 1 ≥ 1.

Let there be k eigen vectors v₁, v₂, ..., v_k corresponding to the distinct eigen values λ₁, λ₂, ..., λ_k. We will show that {v₁, v₂, ..., v_k} is linearly independent.
Suppose a₁, a₂, ..., a_k are scalars such that
a₁v₁ + a₂v₂ + ... + a_k v_k = O ...... (1)
Applying (T − λ_k I) to both sides of (1), and using (T − λ_k I)vᵢ = (λᵢ − λ_k)vᵢ, we get
a₁(λ₁ − λ_k)v₁ + a₂(λ₂ − λ_k)v₂ + ... + a_{k−1}(λ_{k−1} − λ_k)v_{k−1} = O
By the induction hypothesis {v₁, v₂, ..., v_{k−1}} is linearly independent, so we must have
aᵢ(λᵢ − λ_k) = 0 for 1 ≤ i ≤ k − 1.
As λ₁, λ₂, ..., λ_k are distinct, it follows that λᵢ − λ_k ≠ 0 for 1 ≤ i ≤ k − 1.
So a₁ = a₂ = ... = a_{k−1} = 0.
Substituting these values in (1), we get a_k v_k = O.
As v_k ≠ O, a_k = 0; consequently a₁ = a₂ = ... = a_{k−1} = a_k = 0.
Thus a linear combination of the vectors v₁, v₂, ..., v_k equal to the zero vector implies that each of the scalar coefficients is zero.
So {v₁, v₂, ..., v_k} is linearly independent.
Hence the theorem.
12.12.2 Corollary: Let T be a linear operator on an n-dimensional vector space V. If T has n distinct eigen values, then T is diagonalizable.

Proof: Let the n distinct eigen values of T be λ₁, λ₂, ..., λₙ. For each i, choose an eigen vector vᵢ corresponding to λᵢ. By the above theorem {v₁, v₂, ..., vₙ} is linearly independent, and as dim V = n, the set {v₁, v₂, ..., vₙ} is a basis of V. Hence T is diagonalizable.

Converse: The converse of the above corollary need not be true; i.e. if T is diagonalizable, it need not have n distinct eigen values. For example, the identity operator is diagonalizable even though it has only one eigen value, namely 1.
W.E. 6: Worked Out Examples:

Show that A = [1 1] ∈ M₂ₓ₂(R) is diagonalizable.
              [1 1]

Solution: The characteristic polynomial of A is
|A − λI| = |1−λ  1 |
           | 1  1−λ|
= (1 − λ)² − 1²
= (1 − λ − 1)(1 − λ + 1)
= (−λ)(2 − λ)
The characteristic equation is λ(λ − 2) = 0,
thus λ = 0, 2.
Hence the characteristic values of A, and hence of L_A, are 0 and 2, which are distinct.
L_A is a linear operator on the vector space R², whose dimension is 2.
∴ L_A, and hence A, is diagonalizable.
1 2 
W.E. 7: Show that A    is not diagonalizable.
0 1 
Solution:

The characteristic polynomial of A is A   I

1  2
  (1   )2
0 1 
Centre for Distance Education 12.32 Acharya Nagarjuna University

The characteristic equation of A is A   I  0 i.e. (1   ) 2  0

   1,1

As A and hence LA has one distinct eigen value and dim R 2 is 2, it follows that A is not
diagonalizable.
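The rank criterion used here, dim E_λ = n − rank(A − λI), is what separates this matrix from the identity, which also has 1 as its only eigen value. A small sketch (illustrative, not from the text):

```python
def rank2(M):
    # rank of a 2x2 matrix (0, 1, or 2)
    (a, b), (c, d) = M
    if a * d - b * c != 0:
        return 2
    return 1 if any(x != 0 for x in (a, b, c, d)) else 0

# W.E. 7: A = [[1, 2], [0, 1]], only eigenvalue 1 with multiplicity 2,
# but dim E_1 = 2 - rank(A - I) = 2 - 1 = 1, so A is not diagonalizable.
A_minus_I = ((0, 2), (0, 0))
assert 2 - rank2(A_minus_I) == 1

# The identity matrix also has the single eigenvalue 1, yet
# dim E_1 = 2 - rank(I - I) = 2, so it IS diagonalizable.
I_minus_I = ((0, 0), (0, 0))
assert 2 - rank2(I_minus_I) == 2
```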

12.14 Polynomial Splitting:

12.14.1 Definition: A polynomial f(t) in P(F) splits over F if there are scalars c, a₁, a₂, ..., aₙ, not necessarily distinct, in F such that f(t) = c(t − a₁)(t − a₂)...(t − aₙ).

Example:
i) t² − 1 = (t − 1)(t + 1) splits over R,
but ii) (t² + 1)(t − 2) does not split over R;
it does split over C, because there it factors into the product (t + i)(t − i)(t − 2).

Note: If f(t) is the characteristic polynomial of a linear operator or a matrix over a field F, then the statement that f(t) splits is understood to mean that it splits over F.
12.14.2 Theorem: The characteristic polynomial of any diagonalizable linear operator splits.
Proof: Let V be an n-dimensional vector space, and let T be a diagonalizable linear operator on V. Let B be an ordered basis for V such that [T]_B = D is a diagonal matrix. Suppose that
D = [λ₁ 0  ... 0 ]
    [0  λ₂ ... 0 ]
    [ ...        ]
    [0  0  ... λₙ]
Let f(t) be the characteristic polynomial of T. Then
f(t) = det(D − tI)
= det [λ₁−t 0    ... 0  ]
      [0    λ₂−t ... 0  ]
      [ ...             ]
      [0    0    ... λₙ−t]
= (λ₁ − t)(λ₂ − t)...(λₙ − t)
= (−1)ⁿ(t − λ₁)(t − λ₂)...(t − λₙ)
So f(t) is factored into a product of linear factors.
∴ f(t) splits. Hence the theorem.

Note: i) From the above theorem it is clear that if T is a diagonalizable linear operator on an n-dimensional vector space that fails to have n distinct eigen values, then the characteristic polynomial of T must have repeated zeros.
ii) The converse of the above theorem need not be true. That is, the characteristic polynomial may split, but T need not be diagonalizable. For example, consider W.E. 7, where the characteristic polynomial (1 − λ)² splits even though the matrix is not diagonalizable.
12.14.3 Algebraic Multiplicity:
Definition: Let λ be an eigen value of a linear operator or matrix with characteristic polynomial f(t). The algebraic multiplicity of λ is the largest positive integer k for which (t − λ)ᵏ is a factor of f(t).

W.E. 8: Example:
i) Let
A = [3 1 0]
    [0 3 4]
    [0 0 4]
The characteristic polynomial is
|A − tI| = |3−t  1   0 |
           | 0  3−t  4 |
           | 0   0  4−t|
Expanding along the first column, the characteristic polynomial is
(3 − t)[(3 − t)(4 − t) − 0]
= (3 − t)²(4 − t)
Hence 3, 3, 4 are the eigen values.
So λ = 3 is an eigen value of A with multiplicity 2, and λ = 4 is an eigen value of A with multiplicity 1.
ii) For the n × n null matrix, zero is the characteristic root of algebraic multiplicity n.
iii) For the identity matrix of order n, unity is the characteristic root of algebraic multiplicity n.
12.15 Eigen Space:

12.15.1 Definition: Let T be a linear operator on a vector space V, and let λ be an eigen value of T.
Define E_λ = {v ∈ V : T(v) = λv} = N(T − λI).

The set E_λ is called the eigen space of T corresponding to the eigen value λ.

Analogously, we define the eigen space of a square matrix A to be the eigen space of L_A.

Note: E_λ is a subspace of V consisting of the zero vector and the eigen vectors of T corresponding to the eigen value λ. So the maximum number of linearly independent eigen vectors of T corresponding to the eigen value λ is the dimension of E_λ.
12.15.2 Theorem: Let T be a linear operator on a finite dimensional vector space V. Let λ be an eigen value of T having multiplicity m. Then 1 ≤ dim(E_λ) ≤ m.

Proof: Choose an ordered basis {v₁, v₂, ..., v_p} for E_λ and extend it to an ordered basis B = {v₁, v₂, ..., v_p, v_{p+1}, ..., vₙ} for V.
Let A = [T]_B. Observe that vᵢ (1 ≤ i ≤ p) is an eigen vector of T corresponding to λ, and therefore, in block form,
A = [λI_p  B′]
    [ O    C ]
Then the characteristic polynomial of T is
f(t) = det(A − tIₙ) = det [(λ−t)I_p   B′       ]
                          [ O         C − tI_{n−p}]
= det((λ − t)I_p) det(C − tI_{n−p})
= (λ − t)^p g(t), where g(t) is a polynomial.
Thus (λ − t)^p is a factor of f(t), and hence the multiplicity of λ is at least p. But dim(E_λ) = p, so dim(E_λ) ≤ m. Also dim(E_λ) ≥ 1, since E_λ contains a non-zero vector (an eigen vector).
Hence 1 ≤ dim(E_λ) ≤ m.
12.15.3 We state some theorems without proof:

Lemma: Let T be a linear operator, and let λ₁, λ₂, ..., λ_k be distinct eigen values of T. For each i = 1, 2, ..., k, let vᵢ ∈ E_λᵢ, the eigen space corresponding to λᵢ. If v₁ + v₂ + ... + v_k = O, then vᵢ = O for all i.

12.15.4 Theorem: Let T be a linear operator on a vector space V, and let λ₁, λ₂, ..., λ_k be distinct eigen values of T. For each i = 1, 2, ..., k, let Sᵢ be a finite linearly independent subset of the eigen space E_λᵢ. Then S = S₁ ∪ S₂ ∪ S₃ ∪ ... ∪ S_k is a linearly independent subset of V.

12.15.5 Theorem: Let T be a linear operator on a finite dimensional vector space V such that the characteristic polynomial of T splits. Let λ₁, λ₂, ..., λ_k be the distinct eigen values of T. Then:
i) T is diagonalizable if and only if the multiplicity of λᵢ is equal to dim(E_λᵢ) for all i.
ii) If T is diagonalizable and Bᵢ is an ordered basis for E_λᵢ for each i, then B = B₁ ∪ B₂ ∪ B₃ ∪ ... ∪ B_k is an ordered basis for V consisting of eigen vectors of T.
12.16 Summary:
Test for Diagonalization:
Let T be a linear operator on an n-dimensional vector space V. Then T is diagonalizable if and only if both of the following conditions hold:
i) The characteristic polynomial of T splits.
ii) For each eigen value λ of T, the multiplicity of λ equals n − rank(T − λI).

In order to test the diagonalizability of a square matrix A, the same conditions can be used, since the diagonalizability of A is equivalent to the diagonalizability of the operator L_A.

If T is a diagonalizable operator and B₁, B₂, ..., B_k are ordered bases for the eigen spaces of T, then the union B = B₁ ∪ B₂ ∪ ... ∪ B_k is an ordered basis for V consisting of eigen vectors of T, and hence [T]_B is a diagonal matrix.

When we want to test T for diagonalizability, we usually choose a convenient basis B for V and form A = [T]_B. If the characteristic polynomial of A splits, we then use condition (ii) above to check whether the multiplicity of each repeated eigen value λ of A equals n − rank(A − λI); when the characteristic polynomial splits, condition (ii) is automatically satisfied for eigen values with multiplicity 1. If A is diagonalizable, then T is also diagonalizable.
If we find that T is diagonalizable and want to find a basis B for V consisting of eigen vectors of T, we adopt the following procedure.
1) We first find a basis for each eigen space of A. The union of these bases is a basis C for Fⁿ consisting of eigen vectors of A. Each vector in C is the coordinate vector relative to B of an eigen vector of T. The set consisting of these n eigen vectors of T is the desired basis B.
Furthermore, if A is an n × n diagonalizable matrix, we can find an invertible n × n matrix Q and a diagonal n × n matrix D such that Q⁻¹AQ = D.
The matrix Q has as its columns the vectors in a basis of eigen vectors of A, and D has as its j-th diagonal entry the eigen value of A corresponding to the j-th column of Q.
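The factorization Q⁻¹AQ = D can be checked end-to-end for the matrix of exercise 7, A = [1 1; 4 1], whose eigen vectors (1, 2) and (1, −2) form the columns of Q. A small sketch (illustrative, not from the text):

```python
def matmul2(X, Y):
    # product of two 2x2 matrices given as pairs of rows
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def inv2(M):
    # inverse of a non-singular 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

A = ((1, 1), (4, 1))       # eigenvalues 3 and -1 (exercise 7)
Q = ((1, 1), (2, -2))      # columns: eigenvectors (1, 2) and (1, -2)

D = matmul2(inv2(Q), matmul2(A, Q))
# D is diagonal, with the eigenvalues in the column order of Q
assert D == ((3.0, 0.0), (0.0, -1.0))
```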

12.17 Worked Out Examples:

W.E. 9:
Let T be the linear operator on P₂(R) defined by T(f(x)) = f′(x). Test T for diagonalizability.

Solution: T is a linear operator on P₂(R) defined by T(f(x)) = f′(x). The standard ordered basis for P₂(R) is B = {1, x, x²}.
Given T(f(x)) = f′(x):
T(1) = 0 = 0·1 + 0·x + 0·x²
T(x) = 1 = 1·1 + 0·x + 0·x²
T(x²) = 2x = 0·1 + 2·x + 0·x²
A = [T]_B = [0 1 0]
            [0 0 2]
            [0 0 0]
The characteristic equation is |A − λI| = 0:
|−λ  1  0|
| 0 −λ  2| = 0
| 0  0 −λ|
Expanding along the third row we get
(−λ)(λ² − 0) = 0
⇒ −λ³ = 0
Thus T has only one eigen value, λ = 0, with multiplicity 3.
T(f(x)) = f′(x) = 0 ⇔ f(x) is a constant polynomial.
E_λ = N(T − λI) = N(T) is the subspace of P₂(R) consisting of the constant polynomials.
So {1} is a basis of E_λ, and dim(E_λ) = 1 < 3. Consequently there is no basis of P₂(R) consisting of eigen vectors of T, and so T is not diagonalizable.
3 1 0 
W.E. 10: Test the matrix A   0 3 0   M 33 ( R) for diagonalizability..
 0 0 4 

Solution: The characteristic equation is A   I  0 .

3   1 0 
  0 3  0   0
 0 0 4   

expanding along the third row.

0  0  (4   ) (3   ) 2  0  0

 (4   )(3   ) 2  0 so   4 , 3, 3

Also A has eigen values 1  4 and 2  3 with multiplicities 1 and 2 respectively..

Since 1 has multiplicity 1; condition (ii) is satisfied for 1 . Thus we need only to test
condition (ii) for 2 .

To find the rank of ( A  2 I ) where 2  3

3  3 1 0  0 1 0 

( A  2 I )   0 33 0   0 0 0 
 0 0 4  3 0 0 1 
Centre for Distance Education 12.38 Acharya Nagarjuna University
Which can be put in Echelon form. Here the number of non zero rows 2. So Rank of
( A  2 I )  2 .

For 2  3 , n - rank ( A   2 I )  3  2  1 . Which is not the multiplicity of 2 . So the


condition (ii) fails for 2 and so A is not diagonalizable.

W.E. 11:

Test for diagonalizability of the linear operator T on P2 ( R ) defined as follows:

T  f ( x )   f (1)  f ' (0) x   f ' (0)  f " (0)  x 2

Also find an ordered basis for R 3 of eigen vectors of T B where B is the standard basis of P2 ( R ) .

'

' ''

Solution: T is a linear operator on P2 ( R ) defined by T  f ( x )   f (1)  f (0) x  f (0)  f (0) x
2

B  1, x, x 2  is the standard ordered basis for P2 ( R) and A  T B

T (1)  1  0 x  0 x 2 ; T ( x)  1  1.x  (1  0) x 2

i.e. T ( x )  1  x  x 2

T ( x 2 )  1  0 x  (0  2) x 2

i.e. T ( x 2 )  1  0 x  2 x 2

1 1 1 
  0 1 0 
Thus A  T B
 0 1 2

1  1 1
det ( A   I )  0 1  0
0 1 2

 (1   )  (1   )(2   )  0 expanding along the first column.

 (1   ) 2 (2   )
Rings and Linear Algebra 12.39 Diagonalization
The characteristic polynomial of A and hence of T is (1   ) 2 (2   ) which splits. Hence
the condition (i) is satisfied.

Also 1  1 has multiplicity 2.

and 2  2 has multiplicity 1 and hence condition (ii) is satisfied.

So we verify condition (ii) for 1  1

For this (n - rank ( A  1 I ))

1  1 1 1 
 11 0 
 3 - rank of  0
 0 1 2  1 

 0 1 1 
 0 
 3 - rank  0 0
 0 1 1 

 3  (1)  2 . Since the matrix has only one linearly independent row.

Here n - rank ( A   1 I )  2 = multiplicity of 1 .

Hence as the required conditions are satisfied, T is diagonalizable.

We now find the ordered basis C for R 3 of eigen vectors of A. We consider each eigen
value separately.

Let 1  1; then ( A   I )v  0

1  1 1 1   v1   0 

 0 1  1 0  .  v 2    0 
 0 1 1   v3   0 

 0 1 1   v1  0 
  0 0 0  v2   0 
 0 1 1   v3  0 

 v2  v3  0  v2  v3 Let v1  s, v3  t
Centre for Distance Education 12.40 Acharya Nagarjuna University

then v = (v₁, v₂, v₃) = (s, −t, t) = s(1, 0, 0) + t(0, −1, 1)

So C₁ = {(1, 0, 0), (0, −1, 1)} is a basis for the eigenspace E₁ = {v = (v₁, v₂, v₃) ∈ R³ : (A − λ₁I)v = 0}.

To find the eigen space corresponding to 2  2 .

( A  2 I )v  O

1  2 1 1   v1  0 

  0 1 2 0  v2   0 
 0 0 2  2   v3  0 

 1 1 1   v1  0 
  0 1 0  v2   0 
 0 0 0   v3  0 

 v2  0, v1  v2  v3  0  v1  v3  0 so v3  v1

Put v1  1 then v2  0 , v3  1

 1     v1  
    
So C2   0   is the basis for the eigen space E2  v   v2   R 3 ( A  2 I )v  0 
 
 1     v3  
    

consider C  C1  C2 then

 1   0  1  
 
C  0 , 1 , 0 
     
  0   1  1  
      

Thus C is an ordered basis for R 3 consisting of eigen vectors of A.


Finally we observe that the vectors in C are the coordinate vectors, relative to B, of the vectors in the set

A = {1, −x + x², 1 + x²}, which is an ordered basis for P₂(R) consisting of eigenvectors of T. Thus

[T]_A = [ 1 0 0 ; 0 1 0 ; 0 0 2 ]

which is the required diagonal matrix.

 0 2 
W.E. 12: Show that the matrix   is diagonalizable and find a 2  2 matrix P such that
1 3 
P 1 AP is a diagonalizable matrix.

 0 2 
Solution: The characteristic equation of the given matrix A  is
1 3 
  2
A  I  0   0.
1 3

  (3   )  2  0   2  3  2  0

 (  2)(  1)  0 in   2,1

Thus A has two distinct eigen values 1  1, 2  2 . As the diamensionality of the vector
space is 2.
We see that A is diagonalizable.

1) To find the eigen space corresponding to 1  1 :

We have ( A   I )v  O

1 2  v1  0   1 2   v1  0 
   
1 3  1 v2  0   1 2  v2  0 

 v1  2v2  0  v1  2v2 put v2  1 then v1  2

 v1   2 
So v      
v2  1

  2    v1  
or C1      is a basis of eigen space. E1      R ( A  1I )  O 
2

 1    v2  

ii) To find eigen space corresponding to 2  2 .

0  2 2   v1  0 
( A  1I )v  O   
 1 3  2 v2  0 

 2 2  v1 
     0  v1  v2  0 so v1  v2
 1 1  v2 

 v1   1
Put v2  1, then v1  1 so v      
v2  1

  1    v  
So C2      is the basis of the eigen space E2    1   R 2 ( A  2 I )v  O 
 1    v2  

  2   1 
C  C1  C2        is an ordered basis for 2 consisting of the eigen vectors of A.
R
 1   1  

 2 1
Let P    is the matrix whose columns are vectors in C.
1 1

1 0 
D  P 1 AP   LA B   
0 2
W.E. 13:
Let T be the linear operator on R 3 which is represented in the standard basis by the matrix

 9 4 4 
 8 3 4 
  . Prove that T is diagonalizable. Find a basis of R 3 consisting of eigen
 16 8 7 
vectors of T.

 9 4 4 
 
Solution: The given matrix A   8 3 4 
 16 8 7 

The characteristic polynomial of A corresponding to the characteristic root λ is

λ³ − (trace of A)λ² + (A₁₁ + A₂₂ + A₃₃)λ − det A,

where Aᵢᵢ is the cofactor of the diagonal element aᵢᵢ.

trace of A  a11  a22  a33  9  3  7  1

3 4
A11  (1)11  21  32  11
8 7

9 4
A22  (1) 2  2  63  64  1
16 7

9 4
A33  (1)33  27  32  5
8 3

A11  A22  A33  11  1  5  5

det A  9(21  32)  4( 56  64)  4( 64  48)

 99  32  64  3
The characteristic polynomial is 1 11 5  3

 3   2  5  3  f ( ) say 1 2 3

f ( 1)  1  1  5  3  0 1 2 3 0

  1 is a factor of f ( )  0
The other factor is

 2  2  3

 (  3)(  1)
Centre for Distance Education 12.44 Acharya Nagarjuna University

So f ( )  0  (  1) 2 (  3)  0

Hence the characteristic roots are  1, 1,3

Thus the eigen value 1  1 has the multiplicity 2

2  3 has the multiplicity 1.

1) To find eigen space corresponding to 1  1

( A   I )v  O

 9  1 4 4   v1 
  8 3  1 4  v2   O
 16 8 7  1  v3 

 8 4 4  v1 
  8 4 4 v2   O
16 8 8   v3 

R2  R1 , R3  2 R1 gives

 8 4 4  v1 
 0 0 0  v   O
  2
 0 0 0  v3 

 2 1 1   v1  0 
     
R1  gives  0 0 0 v2   O  0
1
4  0 0 0  v3  0

2v1  v2  v3  0 put v1  t , v2  s

then v3  2v1  v2  2t  s

 v1   t  1   0 
v   v2    s   t  0   s  1 
 v3   2t  s   2   1

 1   0    v1  
      
So C1   0 , 1  is a basis of for the eigen space E   v2   R 3 ( A  1 I )v  O 
    1  
  2   1   v  
      3  

Dim of E1  2 which is equal to the multiplication of 1  1 .

i.e. multiplicity of 1  Dim E1

ii) To find the eigen space corresponding to 2  3 :

( A  2 I )v  O

 9  3 4 4   v1 

  8 33 4  v2   O
 16 8 7  3  v3 

12 4 4  v1 
  8 0 4 v2   O
16 8 4  v3 

1 1 1
R1  , R2  , R3  gives
4 4 4

 3 1 1  v1 
 2 0 1 v   O
  2
 4 2 1  v3 

3R2  2 R1 ,3R3  3R1 gives

 3 1 1   v1 
 0 2 1  v   O
  2
 0 2 1  v3 

R3  R2 gives

 3 1 1   v1  0 
 0 2 1  v   0   2v  v  0 so v  2v and
  2   2 3 3 2

 0 0 0   v3  0 

3v1  v2  v3  0  3v1  v2  2v2  0

 3v1  3v2 so v2  v1

 v1  1 
   
Put v1  1, so v2  1, v3  2 Hence v  v2  1
   
 v3   2 

1   v1  
    
So C2   1  is a basis of eigen space E   v2   R3 ( A  2v)  O 
   
  2 
2
 v  
    3  

Dim E  2  1 which is equal to the multiplicity of 2  3

Multiplicity of 2  dim E2

 1   0  1  
 
If we consider the union of bases of these two subspaces we get C    0  ,  1  , 1  
     
  2   1  2  
      
which is linearly independent. Thus the set C is a basis of R 3 consisting of the eigen vectors of T..

Hence T is diagonalizable.
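The eigenvalues and the basis C found above can be verified numerically. This NumPy sketch (an aid added for checking, not part of the original text) confirms both the eigenvalues −1, −1, 3 and the diagonalization:

```python
import numpy as np

A = np.array([[ -9.0, 4.0, 4.0],
              [ -8.0, 3.0, 4.0],
              [-16.0, 8.0, 7.0]])

vals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(vals, [-1.0, -1.0, 3.0])

# Columns: the basis C = C1 ∪ C2 = {(1,0,2), (0,1,-1), (1,1,2)}
Q = np.array([[1.0,  0.0, 1.0],
              [0.0,  1.0, 1.0],
              [2.0, -1.0, 2.0]])
D = np.linalg.inv(Q) @ A @ Q
assert np.allclose(D, np.diag([-1.0, -1.0, 3.0]))
```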
W.E. 14:

Let T : R3  R3 is defined as follows

 a1   4a1  a3 
T  a2    2a1  3a2  2a3 
  
 a3   a1  4a3 

Find whether the linear operator T is diagonalizable or not.



 a1   4a1  a3 
  
Solution: Given T a2  2a1  3a2  2a3

   
 a3   a1  4a3 

B  e1 , e2 , e3 where e1  1, 0, 0 , e2  0,1, 0 , e3  0,0,1 is the standard basis of for R 3 .

1   4  0   4 
T (e1 )  T 0    2  0  0    2 
0  1  0  0  1 

 4(1, 0, 0)  2(0,1, 0)  1(0, 0,1)

 4e1  2e2  1e3

0   4(0)  0  0 
T (e2 )  T 1   2(0)  3(1)  2(0)   3
  
0   0  4(0)  0 

 0(1, 0, 0)  3(0,1, 0)  0(0, 0,1)

 0e1  3e2  0e3

0   4(0)  1  1 
T (e3 )  T  0    2(0)  3(1)  2(1)   2 
  
1   0  4(1)  4 

 1(1, 0, 0)  2(0,1, 0)  4(0, 0,1)

 1e1  3e2  0e3

Writing T B as A we get

4 0 1 
A  T B   2 3 2 
1 0 4 

Hence the characteristic polynomial is A   I .


The characteristic polynomial of A corresponding to the characteristic root λ is

λ³ − (trace of A)λ² + (A₁₁ + A₂₂ + A₃₃)λ − det A,

where Aᵢᵢ is the cofactor of the diagonal element aᵢᵢ. Trace of A = 4 + 3 + 4 = 11.

3 2
A11  (1)11  12  0  12
0 4

4 1
A22  (1) 2 2  16  1  15
1 4

4 0
A33  (1)33  12  0  12
2 3

A11  A22  A33  12  15  12  39

det A  4(12  0)  0  1(0  3)  48  3  45

Hence the characteristic polynomial is λ³ − 11λ² + 39λ − 45 = f(λ), say.

f(3) = 27 − 99 + 117 − 45 = 0

∴ (λ − 3) is a factor of the characteristic equation f(λ) = 0.

The other factor is λ² − 8λ + 15 = (λ − 3)(λ − 5).

So f(λ) = (λ − 3)²(λ − 5).
Hence the characteristic roots are 3, 3, 5.
So the eigen values of T are

1  5 with multiplicity 1

2  3 with multiplicity 2

i) To find the eigen space corresponding to 1  5 :

( A  1 I )v  O

4  5 0 1   v1  0 
 2 35 2  v2   0 

 1 0 4  5  v3  0 

 1 0 1   v1  0 
  2 2 2  v2   0 
 1 0 1  v3  0 

R2  2 R1 , R3  R1 gives

 1 0 1   v1  0 
  0 2 4  v2   0 
 0 0 0   v3  0 

 2v2  4v3  0  v2  2v3

v1  v3  0 v1  v3

Put v3  1, then v2  2, v1  1

 v1  1 
   
So v  v2  2
   
 v3  1 

 1     v1  
    
So C1   2  is a basis of eigen space E   v2   R 3 ( A  1v)  O 
  1  
 1   v  
    3  

Dim of E1  1 which is equal to the multiplicity of 1 .

ii) To find the eigen space corresponding to 2  3

( A  2 I )v  O

4  3 0 1   v1 
 2 33 2  v2   O

 1 0 4  3  v3 

1 0 1 v1  0
 2 0 2 v2   O  0
   
1 0 1 v3  0

R2  2 R1 , R3  R1 gives

1 0 1   v1  0 
0 0 0  v   0 
  2  
0 0 0   v3  0 

 1v1  v3  0  v3  v1

The unknown v2 does not appear in this system, we assign it a parametric value say v2  s
and solve the system for v3 and v1 . If v3  t , then v1  t , introducing another parameter t. The
result is the general solution to the system.

 v1   t  0   1
v    s   s 1   t  0 
 2       for s, t  R
 v3   t  0   1 

 0   1    v1  
       
So C2   1 , 0  is a basis of the eigen space E2    v2   R
3
( A  2 v)  O 
   
 0   1   v  
      3  

dim E2  2 ; The multicity of 2  2

So dim E  2  multicity of 2  2

The union of two bases C1 and C2 .



 1   0   1 
 
is C  C1  C2    2  , 1  ,  0   is linearly independent and hence is a basis of R 3 ; consisting
     
 1   0  1  
      
of eigenvectors of T. Consequently, T is diagonalizable.
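As a check, the basis C = C₁ ∪ C₂ computed above can be verified to diagonalize A with NumPy (an illustrative sketch added here, not part of the original text):

```python
import numpy as np

A = np.array([[4.0, 0.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 0.0, 4.0]])
# Columns: eigenvectors (1,2,1), (0,1,0), (-1,0,1)
Q = np.array([[1.0, 0.0, -1.0],
              [2.0, 1.0,  0.0],
              [1.0, 0.0,  1.0]])

D = np.linalg.inv(Q) @ A @ Q   # should equal diag(5, 3, 3)
assert np.allclose(D, np.diag([5.0, 3.0, 3.0]))
```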

W.E.15: Let T be a linear operator an P2 ( R) defined by T  f ( x)   f ( x)  ( x  1) f ' ( x) . Show


that T is diagonalizable.


Solution: B  1, x, x
2
 is the standard basis of P2 ( R) we are given a linear operator on P2 ( R)

defined by T  f ( x)   f ( x)  ( x  1) f ( x) .
'

1 1 0 
In W.E. 4. We have shown A  T B  0 2 2  and the eigen values of T are 1, 2, 3.
0 0 3 

We will now find eigen vectors corresponding to the eigen values 1, 2, 3.

To find the eigen space corresponding to 1  1 .

We have  A  1I  v  O

1  1 1 0   v1  0 1 0   v1  0 
  0 2  1 2  v2   0  0 1 2  v2   0 
   
 0 0 3  1  v3  0 0 2   v3  0 

 2v3  0 so v3  0 and v2  2v3  0  v2  0 .

Since v3  0 and v2  0 and v1 can take any real value. Say v1  1

 v1  1 
   
Hence the eigen vector corresponding to 1  1 is given by v  v2   0  and every non
 v3  0 
zero scalar multiple of it is an eigen vector.

 1  
  
So C1   0   is a basis for the eigen space.
 0  
  

 v1  
 
E2  v2   R3 ( A  2 I )v  O 
 
 v  
 3  

dim E 1  1 ; Multiplicity of 1  1 .

 dim E1  Multiplicity of 1

To find the eigen space corresponding to 2  2 :

( A  2 I )v  O

1  2 1 0   v1 
 0 22 2  v2   O

 0 0 3  2   v3 

 1 1 0  v1 
  0 0 2 v2   O  v3  0
 0 0 1   v3 

v1  v2  0

v2  v1

put v1  1 , then v2  1 , v3  0

 v1  1 
   
Hence v  v2  1 is the eigen vector and every scalar multiple of it is also an eigen vector..
   
 v3  0 

 1  
  
So C2   1   is a basis for the eigen space.
 0  
  

 v1  
  
E2  v2   R ( A  2 I )  O 
3

 v  
 3  

Dim of E2  1 ; Multiplicity of 2  1

 Dim of E2  Multiplicity of 2

To find the eigen space corresponding to 3  3 :

( A  3 I )v  O

1  3 1 0   v1 
 0 2  3 2  v   O
  2
 0 0 3  3 v3 

 2 1 0   v1  0 
  0 1 2  v2   0 
 v2  2v3  0 so v2  2v3 2v1  v2  0 v2  2v1
 0 0 0   v3  0 

put v1  1, so v2  2, v3  1

 v1  1 
Hence v  v2    2  is an eigen vector and every scalar multiple of it is also an eigen vector..
   
 v3  1 

 1  
  
So C3    2   is a basis for the eigen space.
 1  
  

  v1  
 
E3    v2   R 3 ( A  3 I )v  O  ; dim E3  1
v  
 3  

Multiplicity of 3  1 so dim E 3  Multiplicity of 3



Thus we observe that the multiplicity of each eigen value is equal to the dimension of the
corresponding eigen space.

1 1 1 


 
Let C  C1  C2  C3  0 , 1 , 2  is linearly independent. Thus this set is a basis of
     
  0  0  1  
      

R 3 consisting of eigen vectors of T..


So T is diagonalizable.
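The conclusion of this example can be confirmed numerically; the sketch below (a NumPy aid, not part of the original text) checks that the eigenvector coordinates found above diagonalize A = [T]_B:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 2.0],
              [0.0, 0.0, 3.0]])
# Columns: eigenvectors (1,0,0), (1,1,0), (1,2,1) for eigenvalues 1, 2, 3
Q = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

D = np.linalg.inv(Q) @ A @ Q
assert np.allclose(D, np.diag([1.0, 2.0, 3.0]))
```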

12.18 Method to find any positive integral power of a diagonalizable


matrix A:
A is a diagonalizable matrix of order n. We can find an n × n matrix Q such that Q⁻¹AQ is a diagonal matrix D:

D = Q⁻¹AQ = [L_A]_B,

where Q is the matrix whose columns are the eigenvectors.

Pre-multiplying by Q and post-multiplying by Q⁻¹ we get A = QDQ⁻¹.

Aᵏ = (QDQ⁻¹)ᵏ

So Aᵏ = (QDQ⁻¹)(QDQ⁻¹)...(QDQ⁻¹) (k factors)

= QD(Q⁻¹Q)D(Q⁻¹Q)...DQ⁻¹

So Aᵏ = QDᵏQ⁻¹.
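The method just described can be sketched in code. The helper below is a minimal illustration (the name `matrix_power_diag` is my own, and it assumes the input matrix is diagonalizable, as the method requires):

```python
import numpy as np

def matrix_power_diag(A, k):
    """Compute A**k via A = Q D Q^{-1}, so A**k = Q D**k Q^{-1}.
    Assumes A is diagonalizable; Q holds eigenvectors as columns."""
    vals, Q = np.linalg.eig(A)
    Dk = np.diag(vals ** k)
    return Q @ Dk @ np.linalg.inv(Q)

# Sanity check against direct repeated multiplication
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
assert np.allclose(matrix_power_diag(A, 3).real, np.linalg.matrix_power(A, 3))
```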

W.E. 16:

For A = [ 1 4 ; 2 3 ] ∈ M2×2(R), find an expression for Aⁿ where n is a positive integer.

1 4
Solution: given A    . We show that A is diagonalizable and find a 2  2 matrix Q, such that
 2 3
Q1 A Q is a diagonal matrix.

Then we compute Aⁿ for any positive integer n.

The characteristic equation of A is



1  4
A  I  0  0
2 3

 (1   )(3   )  8  0

 2  4  5  0  (  5)(  1)  0
Hence the characteristic values are −1, 5.

Hence the eigenvalues of L_A are −1, 5, which are distinct. L_A is a linear operator on a two-dimensional vector space, so L_A, and hence A, is diagonalizable.

To find eigen space corresponding to 1  1 :

1  1 4   v1 
( A  1 I )v  0   O
 2 3  1  v2 

1  1 4   v1  2 4  v1  0 
    O   2 4  v   O  0 
 2 3  1 v2    2  

 v1   2 
 2v1  4v2  0  v1  2v2 put v2  1 and so v1  2 . There fore v    1 
 2  

  2 
Hence C1      is a basis for the eigen
 1  

 v1  
space E1    v   R ( A  1I )v  O 
2

 2  

To find the eigen space corresponding to 2  5 :

1  5 4   v1 
( A  2 I )v  O     O
 2 3  5 v2 

 4 4   v1  0 
    O 
 2 2 v2  0 

 v1  1
 4v1  4v2  0  v1  v2  v1  v2 put v1  1, them v2  1 so v      
v2  1

 1 
C2      is a basis for the eigen space E   v1   R 2 ( A   I )v  O 
  
 1  1
v2 
1


2 1 
So C  C1  C2    , 1  is an ordered basis for R 2 consisting of eigen vectors of A.
    
1

  2 1 1  1  1  1 
Let Q    , Q  3  1 2 
 1 1  

1 0
D  Q1 AQ   LA    
 0 5

An  QD n D 1

(1)n 0  1
 Q n
Q
 0 (5) 

 2 1 (1) n 0   1   1 1
    
 1 1  0 (5)n   3   1 2 

1  2 1 (1) n 0   1 1 
   
3  1 1  0 
5n   1 2 

1 (2)(1) n 5n   1 1 
   
3  (1)n 5n   1 2 

1  2(1)n  5n (2)( 1)n  2.(5) n 


  
3  (1) n  5n (1)n  2.5n 
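The closed form for Aⁿ obtained above can be tested against direct matrix multiplication; the following NumPy sketch (added as a check, not part of the original text) compares the two for a few values of n:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

def A_power(n):
    # Closed form: A^n = (1/3) [[2(-1)^n + 5^n,  -2(-1)^n + 2*5^n],
    #                           [ -(-1)^n + 5^n,   (-1)^n + 2*5^n]]
    s, p = (-1.0) ** n, 5.0 ** n
    return (1.0 / 3.0) * np.array([[2*s + p, -2*s + 2*p],
                                   [ -s + p,    s + 2*p]])

for n in (1, 2, 5):
    assert np.allclose(A_power(n), np.linalg.matrix_power(A, n))
```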

1 1
W.E. 17: If A    . Find the eigen values and eigen vectors of A. Prove that A is diagonal-
 4 1
izable. Find a basis of R 2 containing eigen vectors of A.

Solution: The characteristic equation is |A − λI| = 0

⇒ | 1−λ 1 ; 4 1−λ | = 0 ⇒ (1 − λ)² − 4 = 0

⇒ (1 − λ − 2)(1 − λ + 2) = 0 ⇒ (3 − λ)(−1 − λ) = 0

⇒ λ = 3, λ = −1

Thus A has two eigenvalues λ₁ = 3, λ₂ = −1.

To find the eigen vector corresponding to 1  3 :

1  3 1   v1 
( A  1I )v  O     O
 4 1  3 v2 

2 1   v1 
   O R2  2 R1 gives
 4 2 v2 

 2 1   v1  0 
 0 0 v   0   2v1  v2  0  v2  2v1
  2  

 v1  t  1 
put v1  t , then v2  2t so v        t  
v2   2t   2

Where t  R

1 
Thus   is the eigen vector corresponding to 1  3
2

To find the eigen vector corresponding to 2  1 :

1  1 1   v1 
( A  2 I )v  O     O
 4 1  1 v2 

2 1   v1 
   O R2  2 R1 gives
4 2 v2 

 2 1   v1   0 
 0 0   v    0   2 v1  v2  0  v2  2v1
  2  

put v1  s them v2  2 s

 v1   s  1 
Thus v        s   where s  R
v2   2s  2

1
So   is the eigen vector corresponding to 2  1 .
 2 

 1   1  
      is a basis of R 2 , since there vectors are linearly in dependent and dim R 2  2 .
  2  2  
This is basis of R 2 consisting of eigen vectors of A. So LA and hence A is diagonalizable
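The eigenpairs found in this example are easy to verify numerically (an illustrative NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
# Columns: eigenvectors (1, 2) and (1, -2) for eigenvalues 3 and -1
Q = np.array([[1.0,  1.0],
              [2.0, -2.0]])

D = np.linalg.inv(Q) @ A @ Q
assert np.allclose(D, np.diag([3.0, -1.0]))
```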

 2 2 3
 
W.E.18: Find the matrix P which diagonalizes the matrix A   2 1 6  and verify that P 1 AP
 1 2 0 
is a diagonal matrix.

Solution: The characteristic equation of A is A   I  0

trace of A = −2 + 1 + 0 = −1

A₁₁ = (−1)¹⁺¹ | 1 −6 ; −2 0 | = 0 − 12 = −12

A₂₂ = (−1)²⁺² | −2 −3 ; −1 0 | = 0 − 3 = −3

A₃₃ = (−1)³⁺³ | −2 2 ; 2 1 | = −2 − 4 = −6

A₁₁ + A₂₂ + A₃₃ = −12 − 3 − 6 = −21

det A = −2(0 − 12) − 2(0 − 6) − 3(−4 + 1)

= 24 + 12 + 9 = 45
The characteristic equation is

λ³ − (trace of A)λ² + (A₁₁ + A₂₂ + A₃₃)λ − det A = 0

i.e. f(λ) = λ³ − (−1)λ² + (−21)λ − 45 = 0

⇒ λ³ + λ² − 21λ − 45 = 0

f ( 3)  27  9  63  45

0

  3 is a factor..

3 1 1 21 45

3 6 45

1 2 15 0

The other factor is λ² − 2λ − 15 = (λ − 5)(λ + 3)

So f(λ) = (λ + 3)²(λ − 5) = 0

The eigenvalues are λ = −3, −3, 5.

i) To find the eigen space corresponding to 1  5 .

( A  1 I )v  O

 2  5 2 3  v1 
  2 1  5 6 v2   O
 1 2 5  v3 

 7 2 3  v1 
  2 4 6 v2   O
 1 2 5  v3 

R1  7 R3 , R2  2 R3 given

 0 16 32   v1 
 0 8 16 v   O
  2
 1 2 5   v3 

R1  2 R2 gives

0 0 0   v1  0 
 0 8 16  v   0   8v  16v  0  v  2v
  2   2 3 2 3

 1 2 5   v3  0 

v1  2v2  5v3  0

v1  4v3  5v3  0 since v2  2v3

 v3  v1

put v1  1 then v3  1, v2  2

 v1   1 
v  v2    2 
is an eigen vector corresponding to 1  5 .
 v3   1

 1     v1  
    
C1    2   is a basis of the eigen space E1   v2   R ( A  1 I )v  0 
3
  v  
  1 
    3  

ii) To find the eigenspace corresponding to λ₂ = −3:

( A  2 I )v  O

2  3 2 3  v1 
  2 1 3  6  v   O
 2
  1 2  3   v 3 

 1 2 3  v1 
  2 4 6  v2   O , R2  2 R1 , R3  R1 give
1 2 3  v3 

1 2 3  v1 
0 0 0  v   O ,
  2
 0 0 0   v 3 

So v1  2v2  3v3  0
v1  2v2  3v3

put v2   s; v3  t

 v1   2s  3t   2  3
       
Then v  v2     s  0t   s  1  t 0 
 v3   0s  1t   0  1 

 2  3    v1  
       
So C2   1 , 0  is a basis corresponding to E1   v2   R
3
( A  2 I )v  O 
   
  0  1   v  
      3  

 1 2 3
P   2 1 0  ;det P  1( 1  0)  2(2  0)  3(0  1)
 1 0 1 

 1  4  3  8

 1 2 3 

Adj. A  2 4 6 
1 1
P1 
det P 8
 1 2 5

 1 2 3   2 2 3   5 10 15 
1  1 
P A   2 4 6   2 4 6    6 12 18 
1   
8 8
 1 2 5  1 2 5  3 6 15 
Centre for Distance Education 12.62 Acharya Nagarjuna University

 5 10 15   1 2 3   40 0 0 
1  1 
P AP   6 12 18  2 1 0    0 24 0 
1   
8 8
 3 6 15   1 0 1   0 0 24 

5 0 0 
 0 3 0  = diag (5, 3, 3)
0 0 3

Hence P 1 AP is a diagonal matrix.
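The verification asked for in this example can also be carried out numerically; this NumPy sketch (a checking aid, not part of the original text) confirms P⁻¹AP = diag(5, −3, −3):

```python
import numpy as np

A = np.array([[-2.0,  2.0, -3.0],
              [ 2.0,  1.0, -6.0],
              [-1.0, -2.0,  0.0]])
P = np.array([[ 1.0, -2.0, 3.0],
              [ 2.0,  1.0, 0.0],
              [-1.0,  0.0, 1.0]])

D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([5.0, -3.0, -3.0]))
```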

1 0 1
 
W.E.19: Find the matrix P which transforms the matrix A  1 2 1  to diagonal form and
 2 2 3 

hence calculate A4 .

Solution: Characteristic equation is A   I  0

trace of A  1  2  3  6

2 1
A11  (1)11  62  4
2 3

1 1
A22  (1) 2  2  3 2  5
2 3

1 0
A33  (1)33  20  2
1 2

A11  A22 A33  4  5  2  11

det A  1(6  2)  0  ( 1)(2  4)  4  2  6


The characteristic equation is

 3   2 trace of A   ( A11  A22  A33 )  det A  0

f ( )   3  6 2  11  6  0
f(1) = 1 − 6 + 11 − 6 = 0, so (λ − 1) is a factor.

The other factor is λ² − 5λ + 6 = (λ − 2)(λ − 3)

Hence f(λ) = (λ − 1)(λ − 2)(λ − 3) = 0

The eigenvalues of A are 1, 2, 3.

i) To find the eigen space corresponding to   1 :

1  1 0 1   v1 
( A  1I )v  O   1 2  1 1  v2   O

 2 2 3  1  v3 

0 0 1  v1 
 1 1 1  v2   O
2 2 2   v3 

 0 0 1  v1 
R3  2 R2 gives  1 1 1  .  v2   O
   
 0 0 0   v3 

 v3  0  v3  0
v1  v2  v3  0

i.e. v1  v2  0

 v2  v1

Put v1  1, them v2  1, v3  0



 v1   1 
   
So v  v2    1
 v3   0 

 1    v1  
     
So C1    1  is a basis of the eigen space E1   v2  R3 ( A  1I )v  O
 
 0   v  
    3  

ii) To find eigen space corresponding to 2  2

1  2 0 1   v1   1 0 1  v1 
( A  2 I )v  O   1 2  2 1  v2   O   1 0 1  v2   O
 2 2 3  2 v3   2 2 1  v3 

R2  R3 gives

 1 0 1  v1 
 0 0 0  v   O  v  v  0 so v  v
  2 1 3 3 1

 2 2 1   v3 

2v1  2v2  v3  0

 2v1  2v2  v1  0

 v1  2v2  0 v1  2v2

if v2  1, v1  2, v3  2

 v1   2   2     v1  
     
v  v2    1  so C2   1  is a basis of eigen space E   v2   R3
  
  ( A  2 I )v  O 
 
 2  
2

 v3   2  v  
     3  

iii) To find the eigen space corresponding to 3  3 .

1  3 0 1   v1   2 0 1  v1 
 1 2  3 1  v   O   1 1 1  v   O
  2   2
 2 2 3  3  v3   2 2 0  v3 

 2v1  v3  0  v3  2v1

2v1  2v2  0  v2  v1

v1  v2  v3  0  v1  v1  2v1  0

put v1  1, then v2  1, v3  2

 v1   1   1    v1  
        
So v  v2  1 Hence C3    1   is a basis of E3   v2   R 3 ( A  3 I )v  O 
     
 v3   2   2   v  
    3  

Writing the three eigen vectors of the matrix as the three columns, the required transfor-

 1 2 1

mation matrix P  1 1 1 

 0 2 2 

det P  1(2  2)  2( 2  0)  1( 2  0)

 0  4  2  2

P⁻¹ = (1/det P) Adj P = (1/2) [ 0 −2 1 ; −2 −2 0 ; −2 −2 −1 ]

 0 2 1  1 0 1  1 2 1


1
Now P AP 
1  2 2 0  1 2 1   1 1 1 
2    
 2 2 1  2 2 3   0 2 2 
Centre for Distance Education 12.66 Acharya Nagarjuna University

 0 2 1  1 2 1
1 
  4 4 0   1 1 1 
2
 6 6 3  0 2 2 

 2 0 0  1 0 0
1 
 0 4 0    0 2 0   D (say)
2 
 0 0  6   0 0 3 

P 1 AP  D  A  PDP 1

A⁴ = PD⁴P⁻¹

= [ 1 −2 1 ; −1 1 −1 ; 0 2 −2 ] [ (1)⁴ 0 0 ; 0 (2)⁴ 0 ; 0 0 (3)⁴ ] · (1/2) [ 0 −2 1 ; −2 −2 0 ; −2 −2 −1 ]

= [ 1 −2 1 ; −1 1 −1 ; 0 2 −2 ] [ 1 0 0 ; 0 16 0 ; 0 0 81 ] · (1/2) [ 0 −2 1 ; −2 −2 0 ; −2 −2 −1 ]

= (1/2) [ 1 −32 81 ; −1 16 −81 ; 0 32 −162 ] [ 0 −2 1 ; −2 −2 0 ; −2 −2 −1 ]

A⁴ = (1/2) [ −98 −100 −80 ; 130 132 80 ; 260 260 162 ] = [ −49 −50 −40 ; 65 66 40 ; 130 130 81 ]

12.19 Invariant Subspaces:

We observed that if v is an eigenvector of a linear operator T, then T maps the span of {v} into itself. Subspaces that are mapped into themselves are of great importance in the study of linear operators.
12.19.1 T-invariant Subspace: Let T be a linear operator on a vector space V. A subspace W of V is said to be a T-invariant subspace of V if T(W) ⊆ W, that is, if T(v) ∈ W for all v ∈ W.
W.E. 20: Example: Suppose that T is a linear operator on a vector space V. Then the following subspaces are T-invariant: (i) {0}, (ii) V, (iii) R(T), (iv) N(T), (v) E_λ for any eigenvalue λ of T.

Solution : i) To show 0 is T - invariant.

Let W1  0 . We know that W1 is a subspace of V..

Also T (0)  0  W1 for 0  v

Thus W1 is a T invariant subspace of V..

ii) To show that V is T - invariant.


We know V is a subspace of V.

Let v  V then T (v )  V for all v  V which proves that V is a T - invariant subsspace of


V.

iii) To show range of T i.e. R (T ) is T invariant.

We know that R (T ) is a subspace of V..

Let u  R(T )  V  u V

 T (u )  R (T ) for all u  R (T ) so R (T ) is a T - invariant subspace of V..

iv) To show the null space N(T) is T-invariant:

T is a linear operator on V.

N(T) = {v ∈ V : T(v) = 0} is the null space of T, i.e. a subspace of V.

If u ∈ N(T), then T(u) = 0 ∈ N(T); thus T(N(T)) ⊆ N(T), i.e. T(u) ∈ N(T) for all u ∈ N(T).

∴ N(T) is a T-invariant subspace of V.

v) To show the eigenspace E_λ is T-invariant:

E_λ is a subspace of V. If v ∈ E_λ, then T(v) = λv, and λv ∈ E_λ since E_λ is closed under scalar multiplication.

∴ E_λ is a T-invariant subspace of V.

W.E. 21: T is the linear operator on R³ defined by T(a, b, c) = (a + b, b + c, 0).

Then the xy-plane = {(x, y, 0) : x, y ∈ R} and the x-axis = {(x, 0, 0) : x ∈ R} are T-invariant subspaces of R³.
Solution: Given T : R³ → R³ is defined by T(a, b, c) = (a + b, b + c, 0).

We know that W₁ = xy-plane = {(x, y, 0) : x, y ∈ R} is a subspace of R³.


Let v = (x, y, 0) ∈ W₁; then

T(v) = T(x, y, 0) = (x + y, y + 0, 0) = (x + y, y, 0) ∈ W₁ for all v ∈ W₁.

Thus W₁ is a T-invariant subspace of R³.

Similarly, let W₂ = x-axis = {(x, 0, 0) : x ∈ R}.

We know that W2 is a subspace of R 3 .

Let v = (x, 0, 0) ∈ W₂; then

T(v) = T(x, 0, 0) = (x + 0, 0 + 0, 0) = (x, 0, 0) ∈ W₂ for all v ∈ W₂. Thus W₂ is a T-invariant subspace of R³.
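The two invariance claims can be spot-checked numerically. The sketch below (an illustration I have added, not part of the original text) applies T to random vectors of each subspace and confirms the images stay in that subspace:

```python
import numpy as np

def T(v):
    # T(a, b, c) = (a + b, b + c, 0)
    a, b, c = v
    return np.array([a + b, b + c, 0.0])

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.standard_normal(2)
    # xy-plane is T-invariant: third coordinate of the image stays 0
    assert T(np.array([x, y, 0.0]))[2] == 0.0
    # x-axis is T-invariant: image of (x, 0, 0) is (x, 0, 0)
    image = T(np.array([x, 0.0, 0.0]))
    assert image[1] == 0.0 and image[2] == 0.0
```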

12.20 T - Cyclic subspace of V generated by a non zero vector:


12.20.1 Definition: Let T be a linear operator on a vector space V and let v be a nonzero vector in V. The subspace W = span{v, T(v), T²(v), ...} is called the T-cyclic subspace of V generated by v.

We can easily prove that W is a T-invariant subspace of V. In fact, W is the smallest T-invariant subspace of V containing the nonzero vector v.
W.E. 22:

T is a linear operator on P(R) defined as T(f(x)) = f′(x). Find the T-cyclic subspace generated by x².

Solution: We have T(f(x)) = f′(x), and so

T(x²) = (x²)′ = 2x

T²(x²) = T(T(x²)) = T(2x) = (2x)′ = 2

∴ the T-cyclic subspace generated by x² = span{x², 2x, 2} = P₂(R).

W.E. 23:

Let T be the linear operator on R³ defined by T(a, b, c) = (−b + c, a + c, 3c). Find the T-cyclic subspace generated by e₁ = (1, 0, 0).

Solution: Given T : R³ → R³ is defined by

T(a, b, c) = (−b + c, a + c, 3c)

T(e₁) = T(1, 0, 0) = (0, 1, 0) = e₂

T²(e₁) = T(T(e₁)) = T(e₂) = T(0, 1, 0) = (−1, 0, 0) = −e₁

T³(e₁) = T(T²(e₁)) = T(−e₁) = −T(e₁) = (0, −1, 0) = −e₂

Thus the T-cyclic subspace generated by e₁ is span{e₁, T(e₁), T²(e₁), ...}

= span{e₁, e₂} = {(s, t, 0) : s, t ∈ R}
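The iteration above can be reproduced with the matrix of T in the standard basis; the sketch below (a NumPy aid I have added, not part of the original text) checks the iterates and the dimension of the cyclic subspace:

```python
import numpy as np

# Matrix of T(a, b, c) = (-b + c, a + c, 3c) in the standard basis
M = np.array([[0.0, -1.0, 1.0],
              [1.0,  0.0, 1.0],
              [0.0,  0.0, 3.0]])
e1 = np.array([1.0, 0.0, 0.0])

v1 = M @ e1    # T(e1)   = e2
v2 = M @ v1    # T^2(e1) = -e1
assert np.allclose(v1, [0.0, 1.0, 0.0])
assert np.allclose(v2, [-1.0, 0.0, 0.0])

# span{e1, T(e1), T^2(e1), ...} = span{e1, e2}: a 2-dimensional subspace
assert np.linalg.matrix_rank(np.column_stack([e1, v1, v2])) == 2
```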

12.20.2 Theorem: Let T be a linear operator on a finite dimensional vector space V, and let W be
a T - invariant subspace of V. Then the characteristic polynomial of TW devides the characteristic
polynomial of T.

Proof: Choose an ordered basis C  v1 , v2 ,....vk  for W , and it is extended to form an ordered

basis B = {v₁, v₂, ..., v_k, v_{k+1}, ..., v_n} for V. Let A = [T]_B and B₁ = [T_W]_C. So A can be written as

A = [ B₁ B₂ ; O B₃ ].

Let f(t) be the characteristic polynomial of T and g(t) the characteristic polynomial of T_W.

Then f(t) = det(A − tIₙ) = det [ B₁ − tI_k  B₂ ; O  B₃ − tI_{n−k} ]

= g(t) · det(B₃ − tI_{n−k}).

Thus g(t) divides f(t).

W.E. 24: T is a linear operator on R⁴ defined by T(a, b, c, d) = (a + b + 2c − d, b + d, 2c − d, c + d). If W = {(t, s, 0, 0) : t, s ∈ R} is a subspace of R⁴, verify that the characteristic polynomial of T_W divides the characteristic polynomial of T.

Solution: Given T : R⁴ → R⁴ is a linear operator defined by

T(a, b, c, d) = (a + b + 2c − d, b + d, 2c − d, c + d)

W = {(t, s, 0, 0) : t, s ∈ R} is a subspace of R⁴.

Let (a, b, 0, 0) ∈ W; then T(a, b, 0, 0) = (a + b, b, 0, 0) ∈ W.

Consider C = {e₁, e₂}, which is an ordered basis of W. Extend this to the standard ordered basis B of R⁴.

Then B₁ = [T_W]_C = [ 1 1 ; 0 1 ] and A = [T]_B = [ 1 1 2 −1 ; 0 1 0 1 ; 0 0 2 −1 ; 0 0 1 1 ]

Let f(t) be the characteristic polynomial of T and g(t) the characteristic polynomial of T_W.

Then f(t) = |A − tI| = | 1−t 1 2 −1 ; 0 1−t 0 1 ; 0 0 2−t −1 ; 0 0 1 1−t |

= (1 − t) | 1−t 0 1 ; 0 2−t −1 ; 0 1 1−t | = (1 − t)(1 − t) | 2−t −1 ; 1 1−t |

= g(t)·[(2 − t)(1 − t) + 1]

= g(t)(t² − 3t + 3)

Thus g(t) divides f(t), i.e. the characteristic polynomial of T_W divides the characteristic polynomial of T.


12.20.3 Theorem: Let T be a linear operator on a finite dimensional vector space V, and let W
denote the T - Cyclic subspace of V generated by a non zero vector v  V . Let k  dim(W )

Then

i) {v, T(v), T²(v), ..., T^{k−1}(v)} is a basis of W.

ii) If a₀v + a₁T(v) + ... + a_{k−1}T^{k−1}(v) + T^k(v) = 0,

then the characteristic polynomial of T_W is f(t) = (−1)^k (a₀ + a₁t + a₂t² + ... + a_{k−1}t^{k−1} + t^k).

Proof: i) Since v ≠ 0, the set {v} is linearly independent.

Let j be the largest positive integer for which B = {v, T(v), T²(v), ..., T^{j−1}(v)} is linearly independent. Such a j must exist because V is finite dimensional. Let Z = span(B). Then B is a basis for Z. Furthermore, T^j(v) ∈ Z. We use this fact to show that Z is a T-invariant subspace of V. Let

w ∈ Z. Since w is a linear combination of the vectors of B, there exist scalars b₀, b₁, ..., b_{j−1} such that w = b₀v + b₁T(v) + ... + b_{j−1}T^{j−1}(v), and hence T(w) = b₀T(v) + b₁T²(v) + ... + b_{j−1}T^j(v) is a linear combination of vectors in Z and hence belongs to Z. So Z is T-invariant. Furthermore, v ∈ Z, and W is the smallest T-invariant subspace of V that contains v, so that W ⊆ Z. Clearly Z ⊆ W, and so we conclude that Z = W. It follows that B is a basis for W, and therefore dim(W) = j = k.

Hence {v, T(v), T²(v), ..., T^{k−1}(v)} is a basis of W.

ii) Now view B as an ordered basis for W.

Let a₀, a₁, ..., a_{k−1} be scalars such that a₀v + a₁T(v) + ... + a_{k−1}T^{k−1}(v) + T^k(v) = 0.

Observe that

[T_W]_B = [ 0 0 ... 0 −a₀ ; 1 0 ... 0 −a₁ ; 0 1 ... 0 −a₂ ; ... ; 0 0 ... 1 −a_{k−1} ]

which has the characteristic polynomial

f(t) = (−1)^k (a₀ + a₁t + a₂t² + ... + a_{k−1}t^{k−1} + t^k).

Thus f(t) is the characteristic polynomial of T_W, proving (ii).


12.21.1 Cayley-Hamilton Theorem for Linear Operators:
Theorem:
Let T be a linear operator on the n-dimensional vector space V (or let A be an n × n matrix over the field F) and let f(t) be the characteristic polynomial for T (or for A). Then f(T) = 0̂, the zero transformation (or f(A) = O, the null matrix).

Or
Every square matrix satisfies its characteristic equation.
Or
Every matrix is a zero of its characteristic polynomial.

Proof: Consider the n square matrix A over a field F representing T relative to an ordered basis B,

i.e. A = [T]_B.

The characteristic polynomial of A is given by f(t) = |A − tI|,

i.e. f(t) = a₀ + a₁t + a₂t² + ... + aₙtⁿ ......... (1) for suitable scalars a₀, a₁, ..., aₙ in F.

The characteristic equation is f(t) = 0,

i.e. a₀ + a₁t + ... + aₙtⁿ = 0.

The elements of the matrix A − tI are polynomials at most of the first degree in t, with the result that the elements of the matrix Adj(A − tI) are ordinary polynomials in t of degree n − 1 or less. As we know, the elements of the matrix Adj(A − tI) are the cofactors of the elements of the matrix A − tI. It implies that the matrix Adj(A − tI) can be written as

Adj(A − tI) = B₀ + B₁t + B₂t² + ... + B_{n−1}t^{n−1} .......... (2)

where each Bᵢ is a square matrix of order n over F with elements independent of t. Now by the property of adjoints we have

(A − tI) Adj(A − tI) = |A − tI| I

or (A − tI)(B₀ + B₁t + ... + B_{n−1}t^{n−1}) = (a₀ + a₁t + ... + aₙtⁿ) I.

Equating the coefficients of corresponding powers of t we get

AB₀ = a₀I,  AB₁ − B₀ = a₁I,  AB₂ − B₁ = a₂I,  ...,  AB_{n−1} − B_{n−2} = a_{n−1}I,  −B_{n−1} = aₙI.

Multiplying the above matrix equations by I, A, A², ..., Aⁿ respectively we get

AB₀ = a₀I
A²B₁ − AB₀ = a₁A
A³B₂ − A²B₁ = a₂A²
...
AⁿB_{n−1} − A^{n−1}B_{n−2} = a_{n−1}A^{n−1}
−AⁿB_{n−1} = aₙAⁿ

Adding, the left hand side cancels in pairs, so O = a₀I + a₁A + a₂A² + ... + aₙAⁿ.

Also we have f(t) = a₀ + a₁t + ... + aₙtⁿ,

so f(A) = a₀I + a₁A + ... + aₙAⁿ.

Thus f(A) = O, because a₀I + a₁A + ... + aₙAⁿ = O.

Hence the theorem.


From this theorem, we conclude that if T is a linear transformation on the n dimensional vector space V, then there is a polynomial f of degree n such that f(T) = O.

Corollary: To find an expression for the inverse of a nonsingular matrix A:

Solution: We have f(t) = |A − tI| = a₀ + a₁t + ... + aₙtⁿ

and f(0) = |A| = a₀, i.e. a₀ ≠ 0 since A is nonsingular.

Thus from the Cayley - Hamilton theorem we have a₀I + a₁A + ... + aₙAⁿ = O.

So a₀I = −(a₁A + a₂A² + ... + aₙAⁿ)

or a₀A⁻¹ = −(a₁I + a₂A + ... + aₙA^{n−1})   (pre-multiplying by A⁻¹)

correspondingly A⁻¹ = −(1/a₀)(a₁I + a₂A + ... + aₙA^{n−1}),

giving an expression for the inverse of a nonsingular matrix A.
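The corollary above can be checked numerically. The following sketch is illustrative and not part of the text; it uses NumPy, whose `np.poly(A)` returns the coefficients c₀ = 1, c₁, ..., cₙ of det(tI − A), so the Cayley - Hamilton theorem reads c₀Aⁿ + c₁A^{n−1} + ... + cₙI = O. The sample matrix is an arbitrary choice.

```python
# Illustrative sketch (not from the text): verify the Cayley-Hamilton theorem
# for a sample 3x3 matrix and recover its inverse from the same identity.
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 2.0]])
n = A.shape[0]

# Coefficients of det(tI - A): c[0]*t^n + c[1]*t^(n-1) + ... + c[n], with c[0] = 1.
c = np.poly(A)

# f(A) = c0*A^n + c1*A^(n-1) + ... + cn*I should be the null matrix.
f_A = sum(c[k] * np.linalg.matrix_power(A, n - k) for k in range(n + 1))
assert np.allclose(f_A, np.zeros((n, n)))

# From f(A) = O:  A * (c0*A^(n-1) + ... + c_{n-1}*I) = -c[n]*I, so if c[n] != 0,
# A^{-1} = -(1/c[n]) * (c0*A^(n-1) + c1*A^(n-2) + ... + c_{n-1}*I).
A_inv = -(1.0 / c[n]) * sum(c[k] * np.linalg.matrix_power(A, n - 1 - k)
                            for k in range(n))
assert np.allclose(A_inv, np.linalg.inv(A))
```

The sign conventions differ from the proof above (which expands |A − tI| rather than det(tI − A)), but the identity f(A) = O and the inverse formula are the same up to an overall constant.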


Aliter: To show every linear operator satisfies its characteristic equation.

Let T be a linear operator defined on V and let f(t) be its characteristic polynomial. We will show that f(T)(v) = O for all v ∈ V. If v = O, then, as f(T) is linear, f(T)(O) = O. Let v ≠ O and let W be the T - cyclic subspace generated by v, with dim(W) = k. Then {v, T(v), ..., T^{k−1}(v)} is a basis for W.

Hence there exist scalars a₀, a₁, ..., a_{k−1}

such that a₀v + a₁T(v) + ... + a_{k−1}T^{k−1}(v) + T^k(v) = O ...................... (1)

Hence g(t) = (−1)^k (a₀ + a₁t + ... + a_{k−1}t^{k−1} + t^k) .................. (2) is the characteristic polynomial of T_W.

From (1) and (2) we get g(T)(v) = (−1)^k (a₀I + a₁T + ... + a_{k−1}T^{k−1} + T^k)(v) = O.

But we have that g(t) divides f(t). Hence there exists a polynomial q(t) such that f(t) = q(t) g(t).

So f(T)(v) = q(T) g(T)(v) = q(T)(O) = O.

Hence T satisfies its characteristic equation.


12.21.2 Theorem: Cayley - Hamilton theorem for matrices:
Every square matrix satisfies its characteristic equation, i.e. if for a square matrix A of order n,

|A − λI| = (−1)ⁿ (λⁿ + a₁λ^{n−1} + a₂λ^{n−2} + ... + aₙ),

then the matrix equation λⁿ + a₁λ^{n−1} + ... + aₙI = O

is satisfied by A, i.e. Aⁿ + a₁A^{n−1} + ... + aₙI = O.

Proof: As the elements of A − λI are at most of the first degree in λ, the elements of Adj(A − λI) can be written as a matrix polynomial in λ given by

Adj(A − λI) = B₀λ^{n−1} + B₁λ^{n−2} + ... + B_{n−2}λ + B_{n−1}

where B₀, B₁, ..., B_{n−1} are matrices of the type n × n whose elements are functions of the entries of A.

Now (A − λI) Adj(A − λI) = |A − λI| I,

since A · Adj A = det A · I.

So (A − λI)(B₀λ^{n−1} + B₁λ^{n−2} + ... + B_{n−1}) = (−1)ⁿ (λⁿ + a₁λ^{n−1} + ... + aₙ) I.

Comparing coefficients of like powers of λ on both sides we get

−B₀ = (−1)ⁿ I
AB₀ − B₁ = (−1)ⁿ a₁I
AB₁ − B₂ = (−1)ⁿ a₂I
...
AB_{n−1} = (−1)ⁿ aₙI

Pre-multiplying these successively by Aⁿ, A^{n−1}, ..., A, I and adding, the left hand side cancels in pairs and we get

O = (−1)ⁿ (Aⁿ + a₁A^{n−1} + ... + aₙI)

Thus Aⁿ + a₁A^{n−1} + ... + aₙI = O .......... (1)

So every square matrix satisfies its characteristic equation.

Corollary 1: Let A be a nonsingular matrix, i.e. |A| ≠ 0; then aₙ ≠ 0 and therefore A⁻¹ exists.

Pre-multiplying (1) by A⁻¹ we get A^{n−1} + a₁A^{n−2} + ... + a_{n−1}I + aₙA⁻¹ = O

or A⁻¹ = −(1/aₙ)(A^{n−1} + a₁A^{n−2} + ... + a_{n−1}I).

Corollary 2: If m is a positive integer such that m ≥ n, then multiplying the result (1) by A^{m−n} we get

A^m + a₁A^{m−1} + ... + aₙA^{m−n} = O, i.e. A^m = −(a₁A^{m−1} + ... + aₙA^{m−n}),

showing that any positive integral power A^m (m ≥ n) of A is linearly expressible in terms of those of lower order.
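Corollary 2 can also be illustrated numerically. The sketch below is illustrative and not from the text: it repeatedly applies the 2 × 2 Cayley - Hamilton relation A² = (tr A)A − (det A)I to write A⁵ as a linear polynomial in A; the matrix chosen is arbitrary.

```python
# Illustrative sketch: express A^5 through A and I via the Cayley-Hamilton recurrence.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# For a 2x2 matrix the characteristic equation is t^2 - (tr A) t + (det A) = 0,
# so by Cayley-Hamilton  A^2 = (tr A) A - (det A) I.
tr, det = np.trace(A), np.linalg.det(A)

# Maintain A^m = p*A + q*I; then A^(m+1) = p*A^2 + q*A = (p*tr + q)*A - (p*det)*I.
p, q = 1.0, 0.0              # A^1 = 1*A + 0*I
for _ in range(4):           # step up to A^5
    p, q = p * tr + q, -p * det

A5 = p * A + q * np.eye(2)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```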
W.E. 25: Worked Out Examples:

Let T be a linear operator on defined by and is

the standard basis of . Show that T satisfies its characteristic equation and that the matrix of T relative to B satisfies its characteristic equation.

Solution: is a linear operator defined by

is the standard basis of where and



The characteristic polynomial of T is

So

Hence T satisfies its characteristic equation. Moreover,



so

Thus the matrix of T satisfies its characteristic equation.

W.E. 26 :

is a linear operator defined by . Find the


characteristic polynomial. Verify the Cayley - Hamilton theorem.

Solution: T is a linear transformation defined by

is the standard ordered basis of T where



The characteristic equation is .

i.e.

We have to show where O is the zero operator from .

and

So

So

So T satisfies its characteristic equation.


W.E. 27: Find the characteristic equation of the matrix

and verify that it is satisfied by A and hence find .

Solution: The characteristic equation of the matrix is

or

We have to show that where O is the null matrix.



So

So

Hence A satisfies its characteristic equation. Pre-operating with A⁻¹ we get


(null matrix)

So

W.E. 28:

If then express as a linear polynomial in A by using

Cayley - Hamilton theorem.

Solution: The characteristic equation is .

.
By Cayley - Hamilton theorem every matrix satisfies its own characteristic equation.

So = O (null matrix)

.......... (1)

The given linear polynomial is using (1) we get

since

Thus the given polynomial in A is expressed as a linear polynomial.

Aliter:

By (1) (Zero matrix)

So ............. (2)

Multiply (2) by

............. (3)

............. (4)

..............(5)

Now

using (5)

using (4)

using (3)

using (2)

which is a linear polynomial.

W.E. 29 :
Using the Cayley - Hamilton theorem, find the inverse of the matrix

trace of

det
Characteristic equation is

trace of

........... (1)
By the Cayley - Hamilton theorem every matrix satisfies its characteristic equation.

So ....... (2)

Pre-operating with A⁻¹ we get

By (2)
multiplying by A

So

W.E. 30:

Find the inverse of the matrix by using cayley hamilton theorem.

Solution: We know that if A is an n x n square matrix, its characteristic polynomial is

|tI − A| = tⁿ − S₁t^{n−1} + S₂t^{n−2} − ... + (−1)ⁿ Sₙ

where Sₖ is the sum of the principal minors of order k.

By this result, the characteristic polynomial of the given matrix A is given by

S₁ is the trace of A and Sₙ is the determinant of A.

To find S₂ (i.e. the sum of the principal minors of order 2):


The rows and columns deleted.

1)

2)

3)

4)

5)

6)

To find S₃, the sum of the principal minors of order 3:

Deleted row and column

1)

2)

3)

4)

gives

expanding along the first column.

Hence the characteristic equation is

i.e.

By the Cayley - Hamilton theorem every matrix satisfies its characteristic equation.

Hence

Pre-operating with A⁻¹:


So .............. (1)

By (1) .

So
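The principal-minor formula for the characteristic polynomial used in this example can be verified numerically. The sketch below is illustrative and not from the text; it uses NumPy, takes det(tI − A) = tⁿ − S₁t^{n−1} + S₂t^{n−2} − ..., which matches the coefficient convention of `np.poly`, and the sample matrix is arbitrary.

```python
# Illustrative: characteristic polynomial coefficients from sums of principal minors.
import numpy as np
from itertools import combinations

def char_poly_coeffs(A):
    n = A.shape[0]
    coeffs = [1.0]                        # coefficient of t^n in det(tI - A)
    for k in range(1, n + 1):
        Sk = sum(np.linalg.det(A[np.ix_(rows, rows)])   # k x k principal minors
                 for rows in combinations(range(n), k))
        coeffs.append((-1) ** k * Sk)     # coefficient of t^(n-k) is (-1)^k * Sk
    return coeffs

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(char_poly_coeffs(A), np.poly(A))
```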

12.22 Summary:
In this lesson we discussed:
i) Eigen vectors and eigen values of a linear operator and of a square matrix; properties of eigen values; diagonalizability of linear operators and matrices; the test for diagonalization and numerical problems; invariant subspaces; the Cayley - Hamilton theorem for linear operators and matrices.

12.23 Technical Terms:


In this chapter we come across the following technical terms.
Eigen vectors
Eigen values
Characteristic equation
Diagonalization
Trace of a linear operator
Eigen space.

12.24 Model Questions:


1. Prove that if the characteristic roots of a matrix A are then the characteristic roots of

are .

2. If is an eigen value of a nonsingular matrix A, then show that is an eigen value of .

3. Find the eigen values and the corresponding eigen vectors of the matrix

i)

Ans: i) 2, 2, 8. The characteristic vector corresponding to 2 is where a, b are scalars, and the one corresponding to 8 is where k is a scalar.

ii)

Ans : ii) 0, 3, 15, k, l, m are scalars.

4) Using Cayley - Hamilton theorem show that where

5) State and prove Cayley - Hamilton Theorem.

6) If then express as a linear polynomial in A by using the Cayley - Hamilton theorem.

Ans:

7) Find the characteristic equation of the matrix A and verify that it is satisfied by A and hence find
.

i)

Ans:

ii)

Ans:

8) Find the inverse of the matrix by the Cayley - Hamilton theorem

Ans :

12.25 Exercises:
1. For each of the following matrices A, test A for diagonalizability, and if A is diagonalizable, find an invertible matrix Q and a diagonal matrix D such that D = Q⁻¹AQ.

(i) Ans: diagonalizable

ii) Ans: Not diagonalizable

iii) Ans : Diagonalizable

iv) Ans : Diagonalizable

2. For each of the following linear operators T on a vector space V, test T for diagonalizability and if
T is diagonalizable, find a basis B for V such that the matrix of T relative to B is a diagonal matrix.

i) and T is defined by respectively..

Ans: T is not diagonalizable



ii) and T is defined by

Ans: T is not diagonalizable.

iii) and T is defined by

Ans: T is diagonalizable

iv) and T is defined

Ans : T is diagonalizable

3. Prove that if T is diagonalizable, then is diagonalizable, when T is a linear operator on a finite


dimensional vector space V.

4. Show that is not diagonalizable.

5. Prove that the matrix is not diagonalizable over the field C.

6. For the matrix over the field C, find the diagonal form and a diagonalizing

matrix Q.

Ans :

7. Let T be a linear operator on which is represented in the standard ordered basis by the matrix

. Find the characteristic values of A and prove that T is diagonalizable.

Ans: 1, 2, 2
8) Let T be a linear operator on a finite dimensional vector space V, and let λ be an eigen value of T. Show that the eigen space of λ is invariant under T.

9) For each of the following linear operators T on the vector space V, determine whether the given
subspace is a T - invariant subspace of V.

(i) and

Ans: T - invariant

(ii) and

Ans: T - invariant

(iii) and

Ans: not T - invariant

10) Find the characteristic equation of the matrix and verify that it is satisfied by A

and hence find .

Ans:

11) Verify that the matrix satisfies its characteristic equation and compute .

Ans :

12) Find the characteristic equation of the matrix and show that it is satisfied by

A. Hence obtain the inverse of the given matrix.

Ans :

13) Find the characteristic roots of the matrix and verify the Cayley - Hamilton theorem for

this matrix. Find the inverse of the matrix A and also express as a
linear polynomial.

Ans :

The given matrix polynomial = .

14) If express as a linear polynomial in A.

Ans: - .

15) Calculate by using Cayley - Hamilton theorem given

Ans :

16) If show that and hence find .



 61 93
Ans : B  
5

 62 94 

12.26 Reference Books:


1) Linear Algebra 4th edition : Stephen H. Friedberg, Arnold J. Insel, Lawrence E. Spence.
2) Schaum’s out lines : Beginning Linear Algebra Seymour Lipschutz
3) A course in abstract Algebra : Vijay K . khanna. S.K. Bhambari
4) Linear Algebra : Gupta and Sharma
5) Fundamentals of Linear Algebra : M.L. Aggarwal, Romesh Kumar

- A. Mallikharjana Sarma
Rings and Linear Algebra 13.1 Inner Product Spaces

LESSON - 13

INNER PRODUCT SPACES


13.1 Objective of the Lesson:
In the previous lessons, the properties of vector spaces discussed are based on addition and scalar multiplication of vectors. In this lesson we introduce the concept of length of vectors by means of an additional structure on the vector space known as an inner product.

13.2. Structure of the Lesson:


In this lesson the following concepts are discussed:

13.3 Introduction and properties of complex numbers

13.4 Inner product and inner product space - definitions - examples - basic
theorems

13.5 Norm of a vector definition, theorems in inner product spaces

13.6 Norm of a vector, normed vector spaces - definitions and theorems

13.7 Worked out examples

13.8 Summary

13.9 Technical terms

13.10 Model Questions

13.11 Exercises

13.12 Reference Books

13.3.1 Introduction:
In general a vector space is defined over an arbitrary field F. In this lesson we restrict the field F to be the field of real numbers or complex numbers. In the first case the vector space is called a real vector space, and in the second case it is called a complex vector space. We study real vector spaces in analytical geometry and vector analysis. There the concepts of length and orthogonality are discussed.

In this lesson we introduce the concept of length and orthogonality of vectors by means of
an additional structure on the vector space known as an inner product.

We also have dot or scalar product of two vectors whose properties are discussed in

vector algebra. An inner product on a vector space is a generalisation of the dot product in R³.

Before defining inner product and inner product spaces we shall state some important
properties of complex numbers.
13.3.2 Some Properties of Complex Numbers:

Let z = x + iy, for some x, y ∈ R and i = √(−1), be the given complex number. Here x is called the real part of the complex number z, y is called the imaginary part of z, and we write x = Re z and y = Im z.

The modulus of the complex number z = x + iy is the non negative real number √(x² + y²) and is denoted by |z|.

Also if z = x + iy is a complex number then z̄ = x − iy is called the conjugate complex number of z.

If z = z̄, then x + iy = x − iy, so y = 0,

i.e. z = z̄ ⟺ z is a real number. Obviously we have

i) z + z̄ = 2x = 2 Re z   ii) z − z̄ = 2iy = 2i Im z

iii) z z̄ = x² + y² = |z|²   iv) |z| = 0 ⟺ x = 0, y = 0, i.e. |z| = 0 ⟺ z = 0

v) conj(z̄) = z   vi) |z̄| = |z|

vii) |z| = √(x² + y²) ≥ 0 and |z| ≥ |Re z|   viii) If z₁, z₂ are two complex numbers then

i) conj(z₁ + z₂) = z̄₁ + z̄₂   ii) conj(z₁ − z₂) = z̄₁ − z̄₂

iii) |z₁ z₂| = |z₁| |z₂|   iv) |z₁ + z₂| ≤ |z₁| + |z₂|

v) conj(z₁ / z₂) = z̄₁ / z̄₂ provided z₂ ≠ 0

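These properties can be illustrated with Python's built-in complex numbers; the checks below are an illustrative aside, not part of the text, and the sample values are arbitrary.

```python
# Illustrative check of the listed conjugate/modulus properties.
z, z1, z2 = 3 + 4j, 1 - 2j, 2 + 1j

assert z + z.conjugate() == 2 * z.real                # z + z_bar = 2 Re z
assert z - z.conjugate() == 2j * z.imag               # z - z_bar = 2i Im z
assert z * z.conjugate() == abs(z) ** 2               # z z_bar = |z|^2
assert (z1 + z2).conjugate() == z1.conjugate() + z2.conjugate()
assert abs(abs(z1 * z2) - abs(z1) * abs(z2)) < 1e-12  # |z1 z2| = |z1||z2|
assert abs(z1 + z2) <= abs(z1) + abs(z2)              # triangle inequality
```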
13.4 Inner Product and Inner Product Space:


13.4.1 Definition: Let V be a vector space over F. An inner product on V is a function that assigns to every ordered pair of vectors u and v in V a scalar in the field F, denoted by ⟨u, v⟩, such that for all u, v, w in V and for all a, b ∈ F the following hold good:

i) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ (linearity)

ii) ⟨au, v⟩ = a⟨u, v⟩ (linearity)

These two conditions can be clubbed as ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.

iii) ⟨u, v⟩ = conj⟨v, u⟩, where conj (a bar in print) denotes the complex conjugate (conjugate symmetry)

iv) ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 ⟺ u = O, where O denotes the zero vector in V (non negativity)
13.4.2. Definition II: A vector space together with an inner product defined on it is called an inner
product space.
Thus an inner product space is a vector space over the field of real or complex numbers with
an inner product function.
Note: a) Conditions (i) and (ii) simply require that the inner product is linear in the first
component.
(b) If F = R, condition (iii) reduces to ⟨u, v⟩ = ⟨v, u⟩. In this case, i.e. when F = R, the inner product space V(F) is called a Euclidean space.

(c) If F = C, then the inner product space V(F) is called a unitary space or a complex inner product space.

(d) It can be easily shown that if a₁, a₂, ..., aₙ ∈ F and u₁, u₂, ..., uₙ, v ∈ V, then

⟨Σᵢ₌₁ⁿ aᵢuᵢ, v⟩ = Σᵢ₌₁ⁿ aᵢ⟨uᵢ, v⟩

W.E. 1: Worked Examples:

If u = (a₁, a₂, ..., aₙ), v = (b₁, b₂, ..., bₙ) ∈ Vₙ(C), then show that ⟨u, v⟩ = a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ defines an inner product on Vₙ(C).

Solution: We will now show that all the postulates of an inner product hold for

⟨u, v⟩ = a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ .......... (1)

1) Linearity: Let w = (c₁, c₂, ..., cₙ) ∈ Vₙ(C) and let a, b ∈ C. We have

au + bv = a(a₁, a₂, ..., aₙ) + b(b₁, b₂, ..., bₙ) = (aa₁ + bb₁, aa₂ + bb₂, ..., aaₙ + bbₙ)

So ⟨au + bv, w⟩ = (aa₁ + bb₁)c̄₁ + (aa₂ + bb₂)c̄₂ + ... + (aaₙ + bbₙ)c̄ₙ

= (aa₁c̄₁ + aa₂c̄₂ + ... + aaₙc̄ₙ) + (bb₁c̄₁ + bb₂c̄₂ + ... + bbₙc̄ₙ)

= a(a₁c̄₁ + a₂c̄₂ + ... + aₙc̄ₙ) + b(b₁c̄₁ + b₂c̄₂ + ... + bₙc̄ₙ)

= a⟨u, w⟩ + b⟨v, w⟩

Thus ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩

ii) Conjugate symmetry: From the definition of the product given in (1),

⟨v, u⟩ = b₁ā₁ + b₂ā₂ + ... + bₙāₙ

So conj⟨v, u⟩ = conj(b₁ā₁ + b₂ā₂ + ... + bₙāₙ)

= conj(b₁ā₁) + conj(b₂ā₂) + ... + conj(bₙāₙ)

= b̄₁a₁ + b̄₂a₂ + ... + b̄ₙaₙ

= a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ (since multiplication is commutative)

= ⟨u, v⟩ by (1)

So conj⟨v, u⟩ = ⟨u, v⟩

iii) Non negativity:

⟨u, u⟩ = a₁ā₁ + a₂ā₂ + ... + aₙāₙ = |a₁|² + |a₂|² + ... + |aₙ|² ............. (2)

As each aᵢ is a complex number, |aᵢ|² ≥ 0. So (2) is a sum of n non-negative real numbers and so is ≥ 0. Thus ⟨u, u⟩ ≥ 0, and also

⟨u, u⟩ = 0 ⟹ |a₁|² + |a₂|² + ... + |aₙ|² = 0 ⟹ each |aᵢ|² = 0, so each aᵢ = 0.

So u = (a₁, a₂, ..., aₙ) = (0, 0, ..., 0) = O



Hence the product defined in (1) is an inner product on Vₙ(C), and with respect to this inner product Vₙ(C) is an inner product space.
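The verification above can be mirrored in code. This is an illustrative sketch, not from the text, of the standard inner product ⟨u, v⟩ = Σ aᵢb̄ᵢ on Cⁿ with numerical checks of linearity, conjugate symmetry and non negativity; the sample vectors and scalars are arbitrary.

```python
# Illustrative: the standard inner product on C^n and its three axioms.
def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

u = [1 + 1j, 2 - 3j]
v = [2 + 5j, 3 - 1j]
w = [1j, 4 + 0j]
a, b = 2 - 1j, 3 + 2j

au_bv = [a * x + b * y for x, y in zip(u, v)]
assert abs(inner(au_bv, w) - (a * inner(u, w) + b * inner(v, w))) < 1e-12  # linearity
assert abs(inner(u, v) - inner(v, u).conjugate()) < 1e-12   # conjugate symmetry
assert inner(u, u).real > 0 and abs(inner(u, u).imag) < 1e-12  # non negativity
```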

13.4.3 Note: i) Standard inner product:

Definition: The inner product ⟨u, v⟩ on Vₙ(C) defined as ⟨u, v⟩ = a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ, where u = (a₁, a₂, ..., aₙ) and v = (b₁, b₂, ..., bₙ), is called the standard inner product on Vₙ(C).

ii) If u, v are two vectors in Vₙ(R), then the standard inner product of u, v is given by

⟨u, v⟩ = a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ = a₁b₁ + a₂b₂ + ... + aₙbₙ (since in the field of real numbers b̄ᵢ = bᵢ)

= u · v, which is the dot product of u and v, and the inner product ⟨u, v⟩ is then denoted by u · v.

iii) ⟨u, u⟩ = a₁ā₁ + a₂ā₂ + ... + aₙāₙ, which for real vectors equals a₁² + a₂² + ... + aₙ² = u · u.

W.E. 2:

Let V be the vector space over C of all continuous complex valued functions defined on [0, 1]. If f, g ∈ V, then ⟨f, g⟩ = ∫₀¹ f(t) conj(g(t)) dt defines an inner product.

Solution: Let f, g, h ∈ V and a, b ∈ C. Then

i) Linearity: ⟨af + bg, h⟩ = ∫₀¹ (af(t) + bg(t)) conj(h(t)) dt

= a ∫₀¹ f(t) conj(h(t)) dt + b ∫₀¹ g(t) conj(h(t)) dt

= a⟨f, h⟩ + b⟨g, h⟩

ii) Conjugate symmetry:

conj⟨g, f⟩ = conj( ∫₀¹ g(t) conj(f(t)) dt ) = ∫₀¹ conj(g(t)) f(t) dt = ∫₀¹ f(t) conj(g(t)) dt = ⟨f, g⟩

Thus conj⟨g, f⟩ = ⟨f, g⟩

iii) Non negativity:

⟨f, f⟩ = ∫₀¹ f(t) conj(f(t)) dt = ∫₀¹ |f(t)|² dt ≥ 0

Also ⟨f, f⟩ = 0 ⟹ ∫₀¹ |f(t)|² dt = 0 ⟹ f(t) = 0 for all t ∈ [0, 1] ⟹ f = 0
As the required conditions are satisfied V is an inner product space.
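The integral inner product can also be checked numerically. The sketch below, illustrative and not from the text, approximates ⟨f, g⟩ = ∫₀¹ f(t) conj(g(t)) dt by a midpoint Riemann sum for two arbitrary continuous complex valued functions.

```python
# Illustrative numerical check of conjugate symmetry and non negativity for
# <f, g> = integral over [0,1] of f(t) * conj(g(t)) dt.
def inner(f, g, n=20000):
    h = 1.0 / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h).conjugate()
               for k in range(n)) * h

f = lambda t: t + 1j * t * t
g = lambda t: 1 - 1j * t

fg, gf = inner(f, g), inner(g, f)
assert abs(fg - gf.conjugate()) < 1e-6   # <f, g> = conj(<g, f>)
assert inner(f, f).real > 0              # <f, f> > 0 for f != 0
```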
W.E. 3:

For f(x), g(x) ∈ P(R), the vector space of polynomials over the field R defined on [0, 1], if ⟨f(x), g(x)⟩ = ∫₀¹ f′(t) g(t) dt, then prove that it is not an inner product.

Solution: Take f(x) = x, g(x) = x² on [0, 1].

Then f′(x) = 1, g′(x) = 2x.

So ⟨f, g⟩ = ∫₀¹ 1 · t² dt = [t³/3]₀¹ = 1/3

⟨g, f⟩ = ∫₀¹ 2t · t dt = 2[t³/3]₀¹ = 2/3

Hence ⟨f, g⟩ ≠ ⟨g, f⟩.

So the conjugate symmetry is not satisfied. Hence it is not an inner product.


13.4.4 Some Key Points on Matrices:

1) Definition: i) Let A ∈ M_{m×n}(F). We define the conjugate transpose or adjoint of A to be the n x m matrix denoted by A*, defined as follows: if A = [aᵢⱼ]_{m×n} then A* = [bᵢⱼ]_{n×m} where bᵢⱼ = āⱼᵢ, i.e. A* = conj(Aᵀ) = (conj A)ᵀ.

For example, let A = [ i  1+2i ; 2  3−4i ]; then A* = [ −i  2 ; 1−2i  3+4i ].

Note: If A has real entries then A* is the transpose of A, i.e. A* = Aᵀ.

ii) Trace of a matrix: Let A be a square matrix of order n. The sum of all the elements of A lying along the principal diagonal is called the trace of A. We write the trace of A as tr(A).

Thus trace of A = Σᵢ₌₁ⁿ aᵢᵢ

Ex: If A = [ 1 3 4 ; 2 −1 3 ; 2 1 2 ] then tr A = 1 + (−1) + 2 = 2

ii) tr(A + B) = tr A + tr B

iii) tr(αA) = α tr A, where α ∈ C

iv) tr(AB) = tr(BA)

v) tr A = tr Aᵀ

W.E. 4:

Let V = M_{m×n}(F). Define ⟨A, B⟩ = tr(B*A) for all A, B ∈ V; then show that V is an inner product space.

Solution: We are given that V = M_{m×n}(F) is a vector space and ⟨A, B⟩ = tr(B*A) .......... (1)

With this definition we will show that V(F) is an inner product space.

i) Linearity: Let A, B, C ∈ V and let a, b ∈ F. Then by (1),
⟨aA + bB, C⟩ = tr(C*(aA + bB))

= tr(C*(aA) + C*(bB))

= tr(a(C*A) + b(C*B))

= tr(a(C*A)) + tr(b(C*B))

= a tr(C*A) + b tr(C*B)

= a⟨A, C⟩ + b⟨B, C⟩
Hence the condition of linearity holds good.
ii) Non negativity:

If A = [aᵢⱼ] then A* = [bᵢⱼ] where bᵢⱼ = āⱼᵢ.

A*A = [cᵢⱼ] where cᵢⱼ = Σₖ bᵢₖaₖⱼ = Σₖ āₖᵢaₖⱼ

⟨A, A⟩ = tr(A*A) = c₁₁ + c₂₂ + ... + cₙₙ = Σᵢ cᵢᵢ = Σᵢ Σₖ āₖᵢaₖᵢ = Σᵢ Σₖ |aₖᵢ|² ........... (1)

Now if A ≠ O then aₖᵢ ≠ 0 for some k and i, so ⟨A, A⟩ > 0.

If A = O (the null matrix) then aᵢⱼ = 0 for all i, j, so aₖᵢ = 0 for all k, i,

so Σ |aₖᵢ|² = 0, i.e. the trace of A*A is 0, i.e. ⟨A, A⟩ = 0 .......... (2)

Thus from (1) and (2), ⟨A, A⟩ ≥ 0, and ⟨A, A⟩ = 0 if and only if A = O.

Hence the condition of non negativity is satisfied.


Conjugate Symmetry:

Let A, B ∈ V. Then ⟨A, B⟩ = trace of (B*A), by the given data.

conj⟨A, B⟩ = conj(tr(B*A)) = tr((B*A)*)   (since tr(X*) = conj(tr X) for any square matrix X)

= tr(A*(B*)*)   (since (XY)* = Y*X*)

= tr(A*B)   (since (B*)* = B)

= ⟨B, A⟩

So conj⟨A, B⟩ = ⟨B, A⟩, i.e. ⟨A, B⟩ = conj⟨B, A⟩.

As all the three required conditions are satisfied, V = M_{m×n}(F) is an inner product space.


13.4.5 Frobenius Inner Product:

Definition: The inner product on V = M_{n×n}(F) defined by ⟨A, B⟩ = tr(B*A) for all A, B ∈ V is called the Frobenius inner product, and V = M_{n×n}(F) is an inner product space with the inner product defined above.
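A quick numerical sanity check, illustrative and not from the text: with NumPy, tr(B*A) agrees with the entrywise sum Σᵢⱼ aᵢⱼ conj(bᵢⱼ), which is exactly the standard inner product applied to the matrices viewed as tuples of their entries. The random matrices are arbitrary.

```python
# Illustrative: Frobenius inner product <A, B> = tr(B* A) as an entrywise sum.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

frob = np.trace(B.conj().T @ A)        # tr(B* A)
entrywise = np.sum(A * B.conj())       # sum of a_ij * conj(b_ij)
assert np.isclose(frob, entrywise)
assert np.trace(A.conj().T @ A).real > 0   # <A, A> > 0 for A != O
```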
W.E. 5:

Provide the reason why the product ⟨(a, b), (c, d)⟩ = ac − bd on R² is not an inner product on the given vector space.

Solution: Let (a, b) = (3, 4) ∈ R². Then by the given data

⟨(a, b), (a, b)⟩ = ⟨(3, 4), (3, 4)⟩ = (3)(3) − (4)(4) = 9 − 16 = −7 < 0

Hence the condition of non negativity is not satisfied. Hence ⟨(a, b), (c, d)⟩ = ac − bd is not an inner product.
W.E. 6:

Provide the reason why the product ⟨f(x), g(x)⟩ = ∫₀¹ f′(t) g(t) dt on P(R), where ′ denotes differentiation, is not an inner product on the given vector space.

Solution: Take f(x) = x, g(x) = x² on [0, 1]. Then

⟨f, g⟩ = ∫₀¹ 1 · t² dt = [t³/3]₀¹ = 1/3 − 0 = 1/3

⟨g, f⟩ = ∫₀¹ 2t · t dt = 2 ∫₀¹ t² dt = 2[t³/3]₀¹ = 2/3

Hence ⟨f, g⟩ ≠ ⟨g, f⟩.

Hence the conjugate symmetry does not hold. So the given product is not an inner product.
13.4.6 Theorem:

Let V be an inner product space. Then for u, v, w ∈ V and a, b, c ∈ F the following statements are true:

i) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩

ii) ⟨u, cv⟩ = c̄⟨u, v⟩

iii) ⟨u, O⟩ = ⟨O, u⟩ = 0

iv) ⟨au − bv, w⟩ = a⟨u, w⟩ − b⟨v, w⟩

v) ⟨u, av + bw⟩ = ā⟨u, v⟩ + b̄⟨u, w⟩

vi) If ⟨u, v⟩ = ⟨u, w⟩ for all u ∈ V, then v = w.


Proof:

i) To show ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩:

By definition ⟨u, v + w⟩ = conj⟨v + w, u⟩ = conj(⟨v, u⟩ + ⟨w, u⟩) = conj⟨v, u⟩ + conj⟨w, u⟩ = ⟨u, v⟩ + ⟨u, w⟩

Thus ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩

ii) To show ⟨u, cv⟩ = c̄⟨u, v⟩:

⟨u, cv⟩ = conj⟨cv, u⟩ (by conjugate symmetry) = conj(c⟨v, u⟩) (by linearity) = c̄ conj⟨v, u⟩ = c̄⟨u, v⟩

Thus ⟨u, cv⟩ = c̄⟨u, v⟩

iii) To show ⟨u, O⟩ = ⟨O, u⟩ = 0:

Now ⟨u, O⟩ = ⟨u, 0(O)⟩ = conj(0)⟨u, O⟩ = 0⟨u, O⟩ = 0 .............. (1)

(since 0 is a real number, its conjugate is itself)

Similarly ⟨O, u⟩ = ⟨0(O), u⟩ = 0⟨O, u⟩ = 0 ........... (2)

From (1) and (2),

⟨u, O⟩ = ⟨O, u⟩ = 0

iv) To show that ⟨au − bv, w⟩ = a⟨u, w⟩ − b⟨v, w⟩:

Here ⟨au − bv, w⟩ = ⟨au + (−b)v, w⟩

= a⟨u, w⟩ + (−b)⟨v, w⟩ (by linearity)

= a⟨u, w⟩ − b⟨v, w⟩

Thus ⟨au − bv, w⟩ = a⟨u, w⟩ − b⟨v, w⟩

v) To show that ⟨u, av + bw⟩ = ā⟨u, v⟩ + b̄⟨u, w⟩:

⟨u, av + bw⟩ = conj⟨av + bw, u⟩ (by conjugate symmetry)

= conj(a⟨v, u⟩ + b⟨w, u⟩)

= ā conj⟨v, u⟩ + b̄ conj⟨w, u⟩

= ā⟨u, v⟩ + b̄⟨u, w⟩ (by conjugate symmetry)

Thus ⟨u, av + bw⟩ = ā⟨u, v⟩ + b̄⟨u, w⟩

Corollary: If a, b are real numbers then

⟨u, av + bw⟩ = ā⟨u, v⟩ + b̄⟨u, w⟩ = a⟨u, v⟩ + b⟨u, w⟩, since if x is a real number then x̄ = x.

vi) If ⟨u, v⟩ = ⟨u, w⟩ for all u ∈ V, then to show v = w:

Given ⟨u, v⟩ = ⟨u, w⟩ for all u ∈ V

⟹ ⟨u, v⟩ − ⟨u, w⟩ = 0 for all u ∈ V

⟹ ⟨u, v − w⟩ = 0 for all u ∈ V

⟹ ⟨v − w, v − w⟩ = 0, choosing u = v − w

⟹ v − w = O
⟹ v = w
Thus if ⟨u, v⟩ = ⟨u, w⟩ for all u ∈ V, then v = w.

Remark: From (ii) and (v) of the above theorem, the reader should observe that the inner product is conjugate linear in the second part.

Note: i) ⟨−u, −v⟩ = (−1) conj(−1) ⟨u, v⟩ = (−1)(−1)⟨u, v⟩ = ⟨u, v⟩

ii) ⟨u, av − bw⟩ = ā⟨u, v⟩ − b̄⟨u, w⟩

iii) ⟨u + v, u + v⟩ = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩

iv) If u₁, u₂, ..., uₙ, v ∈ V and a₁, a₂, ..., aₙ ∈ F then

⟨a₁u₁ + a₂u₂ + ... + aₙuₙ, v⟩ = a₁⟨u₁, v⟩ + a₂⟨u₂, v⟩ + ... + aₙ⟨uₙ, v⟩

i.e. ⟨Σᵢ₌₁ⁿ aᵢuᵢ, v⟩ = Σᵢ₌₁ⁿ aᵢ⟨uᵢ, v⟩

Also ⟨v, Σᵢ₌₁ⁿ aᵢuᵢ⟩ = Σᵢ₌₁ⁿ āᵢ⟨v, uᵢ⟩

v) ⟨au, bv⟩ = a b̄ ⟨u, v⟩ for all u, v ∈ V and a, b ∈ F.

13.5 Norm or length of a vector in an inner product space:

Consider the vector space V₃(R) with the standard inner product defined on it.

If u = (a₁, a₂, a₃) ∈ V₃(R) then ⟨u, u⟩ = a₁² + a₂² + a₃².

Now we know that in the three dimensional Euclidean space √(a₁² + a₂² + a₃²) is the length of the vector u = (a₁, a₂, a₃). Motivated by this fact, we make the following definition.

13.5.1 Definition: Let V be an inner product space. If u ∈ V, then the norm or length of the vector u, written as ||u||, is defined as the positive square root of ⟨u, u⟩, i.e. ||u|| = √⟨u, u⟩.

Note: i) If u is a vector in an inner product space V(F) then ⟨u, u⟩ is always non negative, and hence ||u|| = √⟨u, u⟩ is meaningful and non negative.

ii) In the inner product space V₂(R) = R²(R), if u = (a, b) ∈ V₂ then ||u|| = ||(a, b)|| = √(a² + b²) = √⟨u, u⟩.

iii) In the inner product space Vₙ(C) = Cⁿ, if u = (a₁, a₂, ..., aₙ) then

||u|| = ||(a₁, a₂, ..., aₙ)|| = √(|a₁|² + |a₂|² + ... + |aₙ|²) = √(Σᵢ₌₁ⁿ |aᵢ|²) = √⟨u, u⟩

13.5.2 Unit Vector:

Definition: Let V be an inner product space. If u ∈ V is such that ||u|| = 1, then u is called a unit vector. Thus in an inner product space a vector is called a unit vector if its length is one unit.

13.5.3 Theorem: Let V(F) be an inner product space and u a non zero vector in V. Then show that (1/||u||)u is a unit vector.

Proof: ⟨(1/||u||)u, (1/||u||)u⟩ = (1/||u||) conj(1/||u||) ⟨u, u⟩ = (1/||u||²)⟨u, u⟩ = (1/||u||²)||u||² = 1

(since 1/||u|| is a real number). So ||(1/||u||)u|| = 1, i.e. u/||u|| is a unit vector.

Note: If u is a non zero vector in an inner product space V(F), then the vector u/||u|| is called the unit vector corresponding to u. This process of getting a unit vector along u is called normalizing u.
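Normalizing can be sketched as follows; this is an illustrative aside, not from the text, with an arbitrary sample vector.

```python
# Illustrative: normalizing a non-zero vector yields the unit vector u/||u||.
import math

def normalize(u):
    n = math.sqrt(sum((a * a.conjugate()).real for a in u))
    return [a / n for a in u]

v = normalize([0.0, 3.0, 4.0])
assert v == [0.0, 0.6, 0.8]
length = math.sqrt(sum(x * x for x in v))
assert abs(length - 1.0) < 1e-12         # the normalized vector has length 1
```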

13.5.4 Definition: Normalising:


The process of multiplying a non zero vector in an inner product space by the reciprocal of
its length is called normalizing.

Note: ii) In the inner product space R², i = (1, 0), j = (0, 1) are unit vectors since the length of each is 1.

iii) In the inner product space R³, i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1) are unit vectors since the length of each is 1.

iv) In the inner product space R³, if u = (a₁, a₂, a₃), then ||u|| = √(a₁² + a₂² + a₃²) and the unit vector corresponding to u is

(1/||u||) u = (1/√(a₁² + a₂² + a₃²)) (a₁, a₂, a₃)

= ( a₁/√(a₁² + a₂² + a₃²), a₂/√(a₁² + a₂² + a₃²), a₃/√(a₁² + a₂² + a₃²) )
W.E. 7:

If u = (1 + i, 2 − 3i), v = (2 + 5i, 3 − i) are two vectors in a complex inner product space, then find ⟨u, v⟩, ||u||, ||v||.

Solution: i) ⟨u, v⟩ = ⟨(1 + i, 2 − 3i), (2 + 5i, 3 − i)⟩

= (1 + i) conj(2 + 5i) + (2 − 3i) conj(3 − i)

= (1 + i)(2 − 5i) + (2 − 3i)(3 + i)

= (2 − 3i + 5) + (6 − 7i + 3)

= (7 − 3i) + (9 − 7i) = 16 − 10i

ii) ||u||² = ⟨u, u⟩ = (1 + i)(1 − i) + (2 − 3i)(2 + 3i) = (1 + 1) + (4 + 9) = 15

So ||u|| = √15

iii) ||v||² = ⟨v, v⟩ = (2 + 5i)(2 − 5i) + (3 − i)(3 + i) = (4 + 25) + (9 + 1) = 39

So ||v|| = √39

W.E. 8:

Find the unit vector corresponding to (2 − i, 3 + 2i, 2 + √3 i) of V₃(C) with respect to the standard inner product.

Solution: Let u = (2 − i, 3 + 2i, 2 + √3 i)

||u||² = ⟨u, u⟩ = (2 − i)(2 + i) + (3 + 2i)(3 − 2i) + (2 + √3 i)(2 − √3 i)

= (4 + 1) + (9 + 4) + (4 + 3) = 25

So ||u||² = 25, hence ||u|| = √25 = 5.

Hence the unit vector corresponding to u is (1/||u||)u = (1/5)(2 − i, 3 + 2i, 2 + √3 i)
W.E. 9:

If u = (0, 3, 4), v = (1/√2, 0, 1/√2) are two vectors in a real inner product space, then find ⟨u, v⟩.

Solution: ⟨u, v⟩ = ⟨(0, 3, 4), (1/√2, 0, 1/√2)⟩

= (1/√2)⟨(0, 3, 4), (1, 0, 1)⟩

= (1/√2)[0(1) + 3(0) + 4(1)] = 4/√2 = 2√2

13.5.5 Theorem:
Let V be an inner product space over a field F. Then show that ||cu|| = |c| ||u|| for all u ∈ V and all c ∈ F.

Proof: ||cu||² = ⟨cu, cu⟩ = c c̄ ⟨u, u⟩ = |c|² ||u||²

Hence ||cu|| = |c| ||u||.

13.5.6 Theorem:

Let V be an inner product space over a field F. Then show that ||u|| = 0 if and only if u = O, and in any case ||u|| ≥ 0.

Proof: ||u|| = √⟨u, u⟩ by the definition of norm.

So ||u||² = ⟨u, u⟩ ≥ 0, since by definition ⟨u, u⟩ ≥ 0; hence ||u|| ≥ 0.

Also we know by the definition of inner product that

⟨u, u⟩ = 0 if and only if u = O,

i.e. ||u||² = 0 if and only if u = O,

i.e. ||u|| = 0 if and only if u = O.

Thus in an inner product space ||u|| = 0 if and only if u = O.
13.5.7 Theorem:
CAUCHY - SCHWARZ'S INEQUALITY: If u, v are any two vectors in an inner product space V(F), then |⟨u, v⟩| ≤ ||u|| ||v||.

Proof: Case (i): If v = O then ⟨u, v⟩ = ⟨u, O⟩ = 0.

So |⟨u, v⟩| = |0| = 0 .......... (1)

and ||u|| ||v|| = ||u|| ||O|| = ||u||(0) = 0 ....... (2)

From (1) and (2), |⟨u, v⟩| = ||u|| ||v||,

and hence |⟨u, v⟩| ≤ ||u|| ||v|| holds good.

Case (ii): Let v ≠ O. For any c in F, ||u − cv|| ≥ 0.

So 0 ≤ ||u − cv||² .......... (3)

Now ||u − cv||² = ⟨u − cv, u − cv⟩

= ⟨u, u − cv⟩ − c⟨v, u − cv⟩

= ⟨u, u⟩ − c̄⟨u, v⟩ − c⟨v, u⟩ + c c̄⟨v, v⟩ .............. (4)

In particular set c = ⟨u, v⟩ / ⟨v, v⟩.

Using this value of c in (4), and noting that ⟨v, v⟩ is a real number and conj⟨u, v⟩ = ⟨v, u⟩, we get

||u − cv||² = ⟨u, u⟩ − ⟨u, v⟩ conj⟨u, v⟩ / ⟨v, v⟩ .............. (5)

Thus from (3) and (5), 0 ≤ ⟨u, u⟩ − ⟨u, v⟩ conj⟨u, v⟩ / ⟨v, v⟩

⟹ 0 ≤ ||u||² − |⟨u, v⟩|² / ||v||²

⟹ 0 ≤ ||u||² ||v||² − |⟨u, v⟩|²

⟹ |⟨u, v⟩|² ≤ ||u||² ||v||²

⟹ |⟨u, v⟩| ≤ ||u|| ||v||

Hence the theorem.
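The inequality can be stress-tested numerically. The sketch below, illustrative and not from the text, checks |⟨u, v⟩| ≤ ||u|| ||v|| for random complex vectors with the standard inner product.

```python
# Illustrative: Cauchy-Schwarz inequality on random vectors in C^4.
import math, random

random.seed(1)

def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

for _ in range(100):
    u = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
    v = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
    lhs = abs(inner(u, v))
    rhs = math.sqrt(inner(u, u).real) * math.sqrt(inner(v, v).real)
    assert lhs <= rhs + 1e-12
```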


13.5.8 Special Case of Cauchy - Schwarz's inequality:
Cauchy's inequality Theorem:

In the vector space Vₙ(C) with the standard inner product defined on it,

|Σᵢ₌₁ⁿ aᵢb̄ᵢ| ≤ (Σᵢ₌₁ⁿ |aᵢ|²)^(1/2) (Σᵢ₌₁ⁿ |bᵢ|²)^(1/2)

Or:
If a₁, a₂, a₃, ..., aₙ and b₁, b₂, ..., bₙ are complex numbers, then

|a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ| ≤ √(|a₁|² + |a₂|² + ... + |aₙ|²) √(|b₁|² + |b₂|² + ... + |bₙ|²)

Proof: Let u = (a₁, a₂, ..., aₙ) and v = (b₁, b₂, ..., bₙ) be any two vectors in the vector space Vₙ(C)
with the standard inner product defined on it.

Then a₁, a₂, ..., aₙ and b₁, b₂, ..., bₙ are all complex numbers.

We have ⟨u, v⟩ = a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ
|⟨u, v⟩| = |a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ|

Also ‖u‖² = ⟨u, u⟩ = a₁ā₁ + a₂ā₂ + ... + aₙāₙ
 = |a₁|² + |a₂|² + ... + |aₙ|²
Similarly ‖v‖² = ⟨v, v⟩ = |b₁|² + |b₂|² + ... + |bₙ|²

By the Cauchy - Schwarz inequality,
|⟨u, v⟩| ≤ ‖u‖ ‖v‖

i.e. |a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ| ≤ √(|a₁|² + |a₂|² + ... + |aₙ|²) √(|b₁|² + |b₂|² + ... + |bₙ|²)

Note: If u, v ∈ Vₙ(R), then āᵢ = aᵢ, b̄ᵢ = bᵢ; i.e. when a₁, a₂, ..., aₙ, b₁, b₂, ..., bₙ are real numbers,

(a₁b₁ + a₂b₂ + ... + aₙbₙ)² ≤ (a₁² + a₂² + ... + aₙ²)(b₁² + b₂² + ... + bₙ²)
Or
|a₁b₁ + a₂b₂ + ... + aₙbₙ| ≤ √(a₁² + a₂² + ... + aₙ²) √(b₁² + b₂² + ... + bₙ²)
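For real sequences the inequality can be compared side by side; the short sketch below (our own illustration, not part of the text) checks it for random real tuples:

```python
import random

random.seed(1)
a = [random.uniform(-10, 10) for _ in range(6)]
b = [random.uniform(-10, 10) for _ in range(6)]

# (sum a_i b_i)^2 <= (sum a_i^2)(sum b_i^2)
lhs = sum(x * y for x, y in zip(a, b)) ** 2
rhs = sum(x * x for x in a) * sum(y * y for y in b)
cauchy_holds = lhs <= rhs + 1e-9
```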
W.E. 10: Using the Cauchy - Schwarz inequality, prove that the absolute value of the cosine of
an angle cannot be greater than 1.

Solution: Let F be the field of real numbers R and V = F³.

Consider the standard inner product on V.

Let u = (a₁, a₂, a₃), v = (b₁, b₂, b₃) be any two non-zero vectors in V, where
O = (0, 0, 0).

Let θ be the angle between the vectors u and v. Then

cos θ = (a₁b₁ + a₂b₂ + a₃b₃) / (√(a₁² + a₂² + a₃²) √(b₁² + b₂² + b₃²))
 = ⟨u, v⟩ / (‖u‖ ‖v‖)

|cos θ| = |⟨u, v⟩| / (‖u‖ ‖v‖) ≤ (‖u‖ ‖v‖) / (‖u‖ ‖v‖),
since by the Cauchy - Schwarz inequality |⟨u, v⟩| ≤ ‖u‖ ‖v‖

⟹ |cos θ| ≤ 1

Hence the absolute value of the cosine of an angle cannot be greater than 1.
13.5.9 Triangle inequality:

In an inner product space V(F), prove that ‖u + v‖ ≤ ‖u‖ + ‖v‖.

Proof: By the definition of norm,

‖u + v‖² = ⟨u + v, u + v⟩
 = ⟨u, u + v⟩ + ⟨v, u + v⟩
 = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
 = ‖u‖² + 2 Re⟨u, v⟩ + ‖v‖², since ⟨v, u⟩ is the conjugate of ⟨u, v⟩
 ≤ ‖u‖² + 2 |⟨u, v⟩| + ‖v‖², since Re z ≤ |z|
 ≤ ‖u‖² + 2 ‖u‖ ‖v‖ + ‖v‖², since by the Cauchy - Schwarz inequality |⟨u, v⟩| ≤ ‖u‖ ‖v‖

Hence ‖u + v‖² ≤ (‖u‖ + ‖v‖)²
⟹ ‖u + v‖ ≤ ‖u‖ + ‖v‖

Geometrical interpretation of the triangle inequality:

Consider u, v to be vectors in the inner product space V₃(R) with the standard inner prod-
uct defined on it. Let the vectors u, v be represented by the sides AB, BC of triangle ABC. Then
evidently u + v is represented by the side AC, and we have ‖u‖ = AB, ‖v‖ = BC and ‖u + v‖ = AC.

Then the triangle inequality implies that AC ≤ AB + BC.
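As a quick numerical illustration of AC ≤ AB + BC, the sketch below (a hypothetical example of ours, not from the text) uses two vectors whose norms are integers:

```python
def norm3(u):
    # Euclidean norm in R^3
    return sum(x * x for x in u) ** 0.5

u = (1.0, -2.0, 2.0)   # ||u|| = 3
v = (4.0, 0.0, 3.0)    # ||v|| = 5
s = tuple(x + y for x, y in zip(u, v))   # u + v = (5, -2, 5)

triangle_holds = norm3(s) <= norm3(u) + norm3(v)
```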

W.E. 11:

Prove that if V is an inner product space then |⟨u, v⟩| = ‖u‖ ‖v‖ if and only if one of the
vectors u, v is a scalar multiple of the other (i.e. u, v are linearly dependent).

Solution:

Case (i): Let |⟨u, v⟩| = ‖u‖ ‖v‖.

If v = O (the zero vector), then clearly ⟨u, v⟩ = 0 and ‖u‖ ‖v‖ = ‖u‖ (0) = 0,
so |⟨u, v⟩| = ‖u‖ ‖v‖, and we can write v = 0u, i.e. v is a scalar multiple of u (i.e.
u, v are linearly dependent).

Similarly if u = O, then u = 0v, i.e. u is a scalar multiple of v (i.e. u, v are linearly dependent).

Let v ≠ O and let c = ⟨u, v⟩ / ‖v‖².
Let w = u − cv. Then
⟨w, w⟩ = ⟨u − cv, u − cv⟩
 = ⟨u, u⟩ − c̄ ⟨u, v⟩ − c ⟨v, u⟩ + c c̄ ⟨v, v⟩, by (4) of the Cauchy - Schwarz proof.

Substituting the value of c, and using the fact that ⟨v, u⟩ is the conjugate of ⟨u, v⟩, each of the
last three terms equals |⟨u, v⟩|² / ‖v‖², so

⟨w, w⟩ = ‖u‖² − |⟨u, v⟩|² / ‖v‖²
 = ‖u‖² − ‖u‖² ‖v‖² / ‖v‖², since |⟨u, v⟩| = ‖u‖ ‖v‖
 = 0

Hence ⟨w, w⟩ = 0 ⟹ w = O
⟹ u − cv = O ⟹ u = cv, i.e.
u is a scalar multiple of v. Similarly, when u ≠ O we can prove that v is a scalar multiple of u;
i.e. u, v are linearly dependent.

Converse: If one of the vectors u, v is the zero vector, then they are linearly dependent, i.e. one
can be expressed as a scalar multiple of the other, and

⟨u, v⟩ = 0 and ‖u‖ ‖v‖ = 0.

So |⟨u, v⟩| = ‖u‖ ‖v‖.

Let us suppose that both u, v are non-zero vectors and that they are linearly dependent.
So one is a scalar multiple of the other.

Let u = cv for some c ∈ F.

⟨u, v⟩ = ⟨cv, v⟩ = c ⟨v, v⟩ = c ‖v‖²
⟹ |⟨u, v⟩| = |c| ‖v‖² .......... (1)

Also ‖u‖ = ‖cv‖ = |c| ‖v‖.

Hence ‖u‖ ‖v‖ = |c| ‖v‖² .......... (2)

From (1) and (2) it follows that
|⟨u, v⟩| = ‖u‖ ‖v‖.
Hence from the above two cases the theorem follows.
13.5.9 Theorem:

If u, v are vectors of an inner product space V(F), then | ‖u‖ − ‖v‖ | ≤ ‖u − v‖.

Proof: u, v are vectors in an inner product space V(F).

‖u‖ = ‖(u − v) + v‖ ≤ ‖u − v‖ + ‖v‖, by the triangle inequality.

So ‖u‖ − ‖v‖ ≤ ‖u − v‖ .......... (1)

Again ‖v‖ = ‖(v − u) + u‖ ≤ ‖v − u‖ + ‖u‖, by the triangle inequality
⟹ ‖v‖ − ‖u‖ ≤ ‖v − u‖ = ‖u − v‖ ........... (2)

From (1) and (2),

| ‖u‖ − ‖v‖ | ≤ ‖u − v‖

13.5.10 Parallelogram law in an inner product space:

If u, v are any two vectors in an inner product space V(F), then show that

‖u + v‖² + ‖u − v‖² = 2‖u‖² + 2‖v‖²

Proof: u, v are any two vectors in an inner product space V(F).

So ‖u + v‖² = ⟨u + v, u + v⟩, by the definition of norm
 = ⟨u, u + v⟩ + ⟨v, u + v⟩
 = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩

i.e. ‖u + v‖² = ‖u‖² + ⟨u, v⟩ + ⟨v, u⟩ + ‖v‖² ............ (1)

Also ‖u − v‖² = ⟨u − v, u − v⟩
 = ⟨u, u − v⟩ − ⟨v, u − v⟩
 = ⟨u, u⟩ − ⟨u, v⟩ − ⟨v, u⟩ + ⟨v, v⟩

So ‖u − v‖² = ‖u‖² − ⟨u, v⟩ − ⟨v, u⟩ + ‖v‖² ............... (2)

Adding (1) and (2) we get

‖u + v‖² + ‖u − v‖² = 2‖u‖² + 2‖v‖²

Hence the theorem.

Geometrical interpretation of the parallelogram law:

Let u and v be two vectors in the vector space V₃(R) with the standard inner product defined
on it. Suppose the vector u is represented by the side AB, and the vector v is represented by the
side BC of a parallelogram ABCD. Then the vectors u + v, u − v represent the diagonals AC and
DB of the parallelogram.

    D ---u--- C
    |         |
    v         v
    |         |
    A ---u--- B

From the theorem of the parallelogram law, ‖u + v‖² + ‖u − v‖² = 2‖u‖² + 2‖v‖²

⟹ AC² + DB² = 2(AB² + BC²)
 = AB² + BC² + CD² + DA²

∴ The sum of the squares on the diagonals of a parallelogram is equal to the sum of the
squares on the four sides.
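The identity can be confirmed numerically for any pair of vectors; below is a small check of the parallelogram law in R³ (the two vectors are our own arbitrary choices):

```python
def dot(u, v):
    # Standard inner product on R^3
    return sum(x * y for x, y in zip(u, v))

u = (2.0, -1.0, 3.0)
v = (0.5, 4.0, -2.0)
plus = tuple(x + y for x, y in zip(u, v))
minus = tuple(x - y for x, y in zip(u, v))

# ||u+v||^2 + ||u-v||^2 should equal 2||u||^2 + 2||v||^2
lhs = dot(plus, plus) + dot(minus, minus)
rhs = 2 * dot(u, u) + 2 * dot(v, v)
```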
13.5.11 Theorem:

If u and v are two vectors in an inner product space V(F) such that ‖u + v‖ = ‖u‖ + ‖v‖,
then the vectors are linearly dependent.

Proof: u, v are two vectors in an inner product space V(F) such that ‖u + v‖ = ‖u‖ + ‖v‖

⟹ ‖u + v‖² = (‖u‖ + ‖v‖)²
⟹ ⟨u + v, u + v⟩ = ‖u‖² + ‖v‖² + 2 ‖u‖ ‖v‖
⟹ ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩ = ‖u‖² + ‖v‖² + 2 ‖u‖ ‖v‖
⟹ ‖u‖² + 2 Re⟨u, v⟩ + ‖v‖² = ‖u‖² + ‖v‖² + 2 ‖u‖ ‖v‖
⟹ 2 Re⟨u, v⟩ = 2 ‖u‖ ‖v‖

i.e. Re⟨u, v⟩ = ‖u‖ ‖v‖ ............ (1)

We know Re z ≤ |z|,
so Re⟨u, v⟩ ≤ |⟨u, v⟩|
⟹ ‖u‖ ‖v‖ ≤ |⟨u, v⟩| ≤ ‖u‖ ‖v‖, by (1) and the Cauchy - Schwarz inequality
⟹ |⟨u, v⟩| = ‖u‖ ‖v‖

Hence, by W.E. 11, u, v are linearly dependent.

Note: The converse of the above theorem need not be true.

For example, consider the inner product space V₃(R) with the standard inner product defined
on it.

Let u = (−1, 0, 1), v = (3, 0, −3) ∈ V₃(R);

then v = −3u. Hence u and v are linearly dependent.

‖u‖ = √((−1)² + 0² + (1)²) = √2, ‖v‖ = √((3)² + 0² + (−3)²)
 = √18 = 3√2

‖u‖ + ‖v‖ = √2 + 3√2 = 4√2 ............ (1)

u + v = (−1, 0, 1) + (3, 0, −3) = (2, 0, −2)

‖u + v‖ = √(4 + 0 + 4) = √8 = 2√2 ....... (2)

From (1) and (2), ‖u + v‖ ≠ ‖u‖ + ‖v‖.
13.5.12 If u, v are vectors in an inner product space V(F), then show that

Re⟨u, v⟩ = (1/4) ‖u + v‖² − (1/4) ‖u − v‖², and if F = R, then show that
⟨u, v⟩ = (1/4) ‖u + v‖² − (1/4) ‖u − v‖².

Proof: u, v are two vectors in an inner product space V(F).

We have ‖u + v‖² = ⟨u + v, u + v⟩
 = ⟨u, u + v⟩ + ⟨v, u + v⟩
 = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
 = ‖u‖² + 2 Re⟨u, v⟩ + ‖v‖², since ⟨v, u⟩ is the conjugate of ⟨u, v⟩ ............. (1)

Also ‖u − v‖² = ⟨u − v, u − v⟩
 = ⟨u, u − v⟩ − ⟨v, u − v⟩
 = ⟨u, u⟩ − ⟨u, v⟩ − ⟨v, u⟩ + ⟨v, v⟩

So ‖u − v‖² = ‖u‖² − 2 Re⟨u, v⟩ + ‖v‖² ........... (2)

Subtracting (2) from (1) we get

‖u + v‖² − ‖u − v‖² = 4 Re⟨u, v⟩

So Re⟨u, v⟩ = (1/4) ‖u + v‖² − (1/4) ‖u − v‖² ......... (3)

If F = R, then Re⟨u, v⟩ = ⟨u, v⟩.

So (3) becomes

⟨u, v⟩ = (1/4) ‖u + v‖² − (1/4) ‖u − v‖²
13.5.13 Theorem:
If u and v are vectors in a unitary space then

4⟨u, v⟩ = ‖u + v‖² − ‖u − v‖² + i ‖u + iv‖² − i ‖u − iv‖²

Proof: u and v are vectors in a unitary space V(F).

‖u + v‖² = ⟨u + v, u + v⟩
 = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩

i.e. ‖u + v‖² = ‖u‖² + ⟨u, v⟩ + ⟨v, u⟩ + ‖v‖²

Similarly ‖u − v‖² = ‖u‖² − ⟨u, v⟩ − ⟨v, u⟩ + ‖v‖²

So ‖u + v‖² − ‖u − v‖² = 2⟨u, v⟩ + 2⟨v, u⟩ ................. (1)

‖u + iv‖² = ⟨u + iv, u + iv⟩
 = ⟨u, u + iv⟩ + i ⟨v, u + iv⟩
 = ⟨u, u⟩ + ī ⟨u, v⟩ + i ⟨v, u⟩ + i ī ⟨v, v⟩
 = ⟨u, u⟩ − i ⟨u, v⟩ + i ⟨v, u⟩ + ⟨v, v⟩

Since ī = −i and i ī = 1,

‖u + iv‖² = ‖u‖² − i ⟨u, v⟩ + i ⟨v, u⟩ + ‖v‖²

So i ‖u + iv‖² = i ‖u‖² + ⟨u, v⟩ − ⟨v, u⟩ + i ‖v‖² ........... (2)

Also ‖u − iv‖² = ⟨u − iv, u − iv⟩
 = ⟨u, u − iv⟩ − i ⟨v, u − iv⟩
 = ⟨u, u⟩ + i ⟨u, v⟩ − i ⟨v, u⟩ + ⟨v, v⟩

So −i ‖u − iv‖² = −i ‖u‖² + ⟨u, v⟩ − ⟨v, u⟩ − i ‖v‖² ............ (3)

Adding (2) and (3) we get

i ‖u + iv‖² − i ‖u − iv‖² = 2⟨u, v⟩ − 2⟨v, u⟩ ........ (4)

Adding (1) and (4) we get

4⟨u, v⟩ = ‖u + v‖² − ‖u − v‖² + i ‖u + iv‖² − i ‖u − iv‖²
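The identity can be tested on concrete vectors of C² with the standard inner product (linear in the first slot, conjugate-linear in the second, as used throughout this lesson). The sketch below, including the helper names, is ours:

```python
def inner(u, v):
    # <u, v> = sum a_i * conjugate(b_i): linear in u, conjugate-linear in v
    return sum(a * b.conjugate() for a, b in zip(u, v))

def sq_norm(u):
    # ||u||^2 = <u, u>, which is real
    return inner(u, u).real

def comb(u, v, c):
    # the vector u + c v
    return [a + c * b for a, b in zip(u, v)]

u = [1 + 2j, 3 - 1j]
v = [-2 + 1j, 0.5j]

lhs = 4 * inner(u, v)
rhs = (sq_norm(comb(u, v, 1)) - sq_norm(comb(u, v, -1))
       + 1j * sq_norm(comb(u, v, 1j)) - 1j * sq_norm(comb(u, v, -1j)))
```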

13.5.14 Theorem:

If u and v are vectors in a unitary space then prove that ⟨u, v⟩ = Re⟨u, v⟩ + i Re⟨u, iv⟩.

Proof:

If z = x + iy, then y = Im z
 = Re(−i(x + iy))

i.e. y = Re(−iz)

So by this, Im⟨u, v⟩ = Re(−i ⟨u, v⟩)
 = Re⟨u, iv⟩, since ⟨u, iv⟩ = ī ⟨u, v⟩
 = −i ⟨u, v⟩

Hence ⟨u, v⟩ = Re⟨u, v⟩ + i Re⟨u, iv⟩, since z = Re z + i Im z.

13.6 Norm of a Vector in a vector space:

Definition: Let V be a vector space over F, where F is either R or C. Regardless of whether V
is or is not an inner product space, we define a norm ‖·‖ as a real valued function on V satisfying
the following three conditions for u, v ∈ V and a ∈ F:

i) ‖u‖ ≥ 0, and ‖u‖ = 0 if and only if u = 0

ii) ‖au‖ = |a| ‖u‖

iii) ‖u + v‖ ≤ ‖u‖ + ‖v‖

A vector space V(F) in which the above three conditions are satisfied is called a normed
vector space.

13.6.2 Definition: Normed Vector Space:

Let V(F) be an inner product space in which the norm of a vector u ∈ V is defined as
‖u‖ = √⟨u, u⟩. The inner product space with this definition of norm is called a normed vector
space if the following three conditions are satisfied:

i) ‖u‖ ≥ 0, and ‖u‖ = 0 if and only if u = O

ii) ‖au‖ = |a| ‖u‖, and

iii) ‖u + v‖ ≤ ‖u‖ + ‖v‖, for all u, v ∈ V and a ∈ F.

13.6.3 Theorem: Every inner product space is a normed vector space.

Proof: As the three conditions required for a normed vector space are true in every inner product
space, it follows that every inner product space is a normed vector space.
13.6.4 Distance in an inner product space:

Definition: Let u and v be two vectors in an inner product space V(F). The distance between
the vectors u and v is denoted by d(u, v) and is defined as d(u, v) = ‖u − v‖ = √⟨u − v, u − v⟩.

Note: d(u, v) is a non-negative real number.

Ex: Let u = (a₁, a₂, a₃), v = (b₁, b₂, b₃) be two vectors in the inner product space R³. Then

u − v = (a₁ − b₁, a₂ − b₂, a₃ − b₃)

d(u, v) = ‖u − v‖ = √((a₁ − b₁)² + (a₂ − b₂)² + (a₃ − b₃)²)

13.6.5 Theorem: If u, v, w are any three vectors in an inner product space V(F), then prove
that

i) d(u, v) ≥ 0, and d(u, v) = 0 iff u = v

ii) d(u, v) = d(v, u)

iii) d(u, v) ≤ d(u, w) + d(w, v)

iv) d(u, v) = d(u + w, v + w)

Proof: i) To show that d(u, v) ≥ 0 and d(u, v) = 0 iff u = v:

By definition d(u, v) = ‖u − v‖ ≥ 0, since the norm of a vector is a non-negative real number,

and d(u, v) = 0 ⟺ ‖u − v‖ = 0 ⟺ ‖u − v‖² = 0
⟺ ⟨u − v, u − v⟩ = 0
⟺ u − v = O ⟺ u = v

Thus d(u, v) = 0 if and only if u = v.

ii) To show d(u, v) = d(v, u):

Proof: We have by definition

d(u, v) = ‖u − v‖
 = ‖(−1)(v − u)‖
 = |−1| ‖v − u‖
 = ‖v − u‖ = d(v, u)

So d(u, v) = d(v, u).

iii) To show that d(u, v) ≤ d(u, w) + d(w, v):

Proof: d(u, v) = ‖u − v‖
 = ‖(u − w) + (w − v)‖
 ≤ ‖u − w‖ + ‖w − v‖, by the triangle inequality
 = d(u, w) + d(w, v)

So d(u, v) ≤ d(u, w) + d(w, v).

iv) To show d(u, v) = d(u + w, v + w):

Proof: d(u, v) = ‖u − v‖
 = ‖(u + w) − (v + w)‖
 = d(u + w, v + w)

Thus d(u, v) = d(u + w, v + w).
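These four properties say that d is a translation-invariant metric; a quick numerical sanity check in R², with points of our own choosing:

```python
def dist(u, v):
    # d(u, v) = ||u - v|| for the standard inner product on R^n
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

u, v, w = (1.0, 2.0), (4.0, -2.0), (0.5, 0.5)

symmetric = abs(dist(u, v) - dist(v, u)) < 1e-12
triangle = dist(u, v) <= dist(u, w) + dist(w, v) + 1e-12
shifted_u = tuple(x + 1 for x in u)
shifted_v = tuple(x + 1 for x in v)
shift_invariant = abs(dist(u, v) - dist(shifted_u, shifted_v)) < 1e-12
```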

13.7 Worked Out Examples:

W.E.12:

Show that V₃(R) is an inner product space under

⟨(x₁, x₂, x₃), (y₁, y₂, y₃)⟩ = x₁y₁ + x₂y₂ + x₃y₃.

Solution: As the given field is the field of real numbers, the conjugate symmetry is nothing but
symmetry.

i) Symmetry:

Let u = (x₁, x₂, x₃), v = (y₁, y₂, y₃).

Given ⟨u, v⟩ = ⟨(x₁, x₂, x₃), (y₁, y₂, y₃)⟩ = x₁y₁ + x₂y₂ + x₃y₃

So ⟨v, u⟩ = ⟨(y₁, y₂, y₃), (x₁, x₂, x₃)⟩ = y₁x₁ + y₂x₂ + y₃x₃
 = x₁y₁ + x₂y₂ + x₃y₃
 = ⟨u, v⟩

Thus ⟨v, u⟩ = ⟨u, v⟩.

ii) Linearity: Let a, b ∈ R. Then

au + bv = a(x₁, x₂, x₃) + b(y₁, y₂, y₃)
 = (ax₁ + by₁, ax₂ + by₂, ax₃ + by₃)

Let w = (z₁, z₂, z₃) be any vector in V₃(R).

Then ⟨au + bv, w⟩ = ⟨(ax₁ + by₁, ax₂ + by₂, ax₃ + by₃), (z₁, z₂, z₃)⟩
 = (ax₁ + by₁)z₁ + (ax₂ + by₂)z₂ + (ax₃ + by₃)z₃
 = (ax₁z₁ + by₁z₁) + (ax₂z₂ + by₂z₂) + (ax₃z₃ + by₃z₃)
 = a(x₁z₁ + x₂z₂ + x₃z₃) + b(y₁z₁ + y₂z₂ + y₃z₃)
 = a⟨u, w⟩ + b⟨v, w⟩

Thus ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.

iii) Non-negativity: ⟨u, u⟩ = ⟨(x₁, x₂, x₃), (x₁, x₂, x₃)⟩
 = x₁(x₁) + x₂(x₂) + x₃(x₃)

⟨u, u⟩ = x₁² + x₂² + x₃² ≥ 0

⟨u, u⟩ = 0 ⟺ x₁² + x₂² + x₃² = 0 ⟺ x₁ = 0, x₂ = 0, x₃ = 0

So u = (x₁, x₂, x₃) = (0, 0, 0) = O.

Hence ⟨u, u⟩ = 0 ⟺ u = O.

As all the three required conditions are satisfied, the given product is an inner product.

So V₃(R) is an inner product space.

W.E. 13:

Which of the following define an inner product on V₂(R)? Give reasons.

i) ⟨u, v⟩ = x₁y₁ + 2x₁y₂ + 2x₂y₁ + 5x₂y₂

ii) ⟨u, v⟩ = 2x₁y₁ + 5x₂y₂, where u = (x₁, x₂), v = (y₁, y₂)

Solution: u = (x₁, x₂), v = (y₁, y₂) are any two vectors in V₂(R).

1. To verify whether ⟨u, v⟩ = x₁y₁ + 2x₁y₂ + 2x₂y₁ + 5x₂y₂ is an inner product or not:

i) Symmetry:

⟨v, u⟩ = ⟨(y₁, y₂), (x₁, x₂)⟩
 = y₁x₁ + 2y₁x₂ + 2y₂x₁ + 5y₂x₂
 = x₁y₁ + 2x₁y₂ + 2x₂y₁ + 5x₂y₂
 = ⟨u, v⟩

Hence ⟨v, u⟩ = ⟨u, v⟩.

ii) Linearity:

Let a, b ∈ R; then

au + bv = a(x₁, x₂) + b(y₁, y₂)
 = (ax₁ + by₁, ax₂ + by₂)

Let w = (z₁, z₂) be any vector in V₂(R).

Then ⟨au + bv, w⟩ = ⟨(ax₁ + by₁, ax₂ + by₂), (z₁, z₂)⟩
 = (ax₁ + by₁)z₁ + 2(ax₁ + by₁)z₂ + 2(ax₂ + by₂)z₁ + 5(ax₂ + by₂)z₂

Thus

⟨au + bv, w⟩ = ax₁z₁ + by₁z₁ + 2ax₁z₂ + 2by₁z₂ + 2ax₂z₁ + 2by₂z₁ + 5ax₂z₂ + 5by₂z₂
 = a(x₁z₁ + 2x₁z₂ + 2x₂z₁ + 5x₂z₂) + b(y₁z₁ + 2y₁z₂ + 2y₂z₁ + 5y₂z₂)
 = a⟨u, w⟩ + b⟨v, w⟩

Thus ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.

iii) Non-negativity:

⟨u, u⟩ = ⟨(x₁, x₂), (x₁, x₂)⟩
 = x₁x₁ + 2x₁x₂ + 2x₂x₁ + 5x₂x₂
 = (x₁ + 2x₂)² + x₂² ≥ 0

and ⟨u, u⟩ = 0 ⟺ (x₁ + 2x₂)² + x₂² = 0
⟺ x₁ + 2x₂ = 0, x₂ = 0 ⟺ x₁ = 0, x₂ = 0
⟺ u = (0, 0) = O

Hence ⟨u, u⟩ = 0 ⟺ u = O.

As the three required conditions are satisfied, ⟨u, v⟩ = x₁y₁ + 2x₁y₂ + 2x₂y₁ + 5x₂y₂ is an
inner product.

2. To verify whether ⟨u, v⟩ = 2x₁y₁ + 5x₂y₂ is an inner product or not:

Given u = (x₁, x₂), v = (y₁, y₂), ⟨u, v⟩ = 2x₁y₁ + 5x₂y₂.

i) Symmetry: ⟨v, u⟩ = ⟨(y₁, y₂), (x₁, x₂)⟩
 = 2y₁x₁ + 5y₂x₂
 = 2x₁y₁ + 5x₂y₂
 = ⟨u, v⟩

Thus ⟨v, u⟩ = ⟨u, v⟩.

ii) Linearity:

Let w = (z₁, z₂) be any vector in V₂(R).

Let a, b ∈ R.

Then au + bv = a(x₁, x₂) + b(y₁, y₂)
 = (ax₁ + by₁, ax₂ + by₂)

Now ⟨au + bv, w⟩ = ⟨(ax₁ + by₁, ax₂ + by₂), (z₁, z₂)⟩
 = 2(ax₁ + by₁)z₁ + 5(ax₂ + by₂)z₂, by the given definition
 = 2ax₁z₁ + 2by₁z₁ + 5ax₂z₂ + 5by₂z₂
 = a(2x₁z₁ + 5x₂z₂) + b(2y₁z₁ + 5y₂z₂)
 = a⟨u, w⟩ + b⟨v, w⟩

Thus ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.

iii) Non-negativity: ⟨u, u⟩ = ⟨(x₁, x₂), (x₁, x₂)⟩
 = 2x₁x₁ + 5x₂x₂
 = 2x₁² + 5x₂² ≥ 0

Moreover ⟨u, u⟩ = 0 ⟺ 2x₁² + 5x₂² = 0
⟺ x₁ = 0, x₂ = 0 ⟺ u = (x₁, x₂) = (0, 0)
⟺ u = O

Thus ⟨u, u⟩ = 0 ⟺ u = O.

As all the three required conditions are satisfied, ⟨u, v⟩ = 2x₁y₁ + 5x₂y₂, where
u = (x₁, x₂), v = (y₁, y₂), is an inner product.

W.E. 14:

If u, v are vectors in an inner product space V(F) and a, b ∈ F, then prove that

‖au + bv‖² = |a|² ‖u‖² + a b̄ ⟨u, v⟩ + ā b ⟨v, u⟩ + |b|² ‖v‖²

Solution: u, v are vectors in the inner product space V(F).

‖au + bv‖² = ⟨au + bv, au + bv⟩
 = a ⟨u, au + bv⟩ + b ⟨v, au + bv⟩
 = a ā ⟨u, u⟩ + a b̄ ⟨u, v⟩ + b ā ⟨v, u⟩ + b b̄ ⟨v, v⟩
 = |a|² ‖u‖² + a b̄ ⟨u, v⟩ + ā b ⟨v, u⟩ + |b|² ‖v‖²

Hence the problem.


W.E. 15:

Using the Frobenius inner product ⟨A, B⟩ = trace(B*A), compute ‖A‖, ‖B‖ and ⟨A, B⟩ for

A = [ 1   2+i ]       B = [ 1+i   0  ]
    [ 3    i  ]           [  i   −i  ]

Also compute the angle between A and B in M₂ₓ₂(F).

Solution: B = [ 1+i  0 ; i  −i ]. Hence B* = [ 1−i  −i ; 0  i ].

B*A = [ 1−i  −i ] [ 1  2+i ] = [ (1−i)1 + (−i)3    (1−i)(2+i) + (−i)i ]
      [  0    i ] [ 3   i  ]   [ 0(1) + i(3)       0(2+i) + i(i)      ]

So B*A = [ 1−4i   4−i ]
         [  3i    −1  ]

trace(B*A) = (1 − 4i) + (−1) = −4i

As A = [ 1  2+i ; 3  i ], A* = [ 1  3 ; 2−i  −i ].

A*A = [ 1    3  ] [ 1  2+i ] = [ 10     2+4i ]
      [ 2−i  −i ] [ 3   i  ]   [ 2−4i    6   ]

trace(A*A) = 10 + 6 = 16

‖A‖² = ⟨A, A⟩ = trace(A*A) = 16

‖A‖ = √⟨A, A⟩ = 4

B*B = [ 1−i  −i ] [ 1+i  0 ] = [ 3   −1 ]
      [  0    i ] [  i  −i ]   [ −1   1 ]

trace(B*B) = 3 + 1 = 4

‖B‖² = ⟨B, B⟩ = trace(B*B) = 4

‖B‖ = √⟨B, B⟩ = 2

⟨A, B⟩ = trace(B*A) = −4i

If θ is the angle between A and B,

then cos θ = |⟨A, B⟩| / (‖A‖ ‖B‖) = |−4i| / (4(2)) = 4/8 = 1/2

cos θ = 1/2 ⟹ θ = π/3

Thus the angle between A and B is π/3.
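The matrix arithmetic above can be replayed in a few lines of Python using built-in complex numbers (the helper functions below are our own, not part of the text):

```python
def conj_transpose(M):
    # M* : conjugate transpose of a rectangular matrix
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2 + 1j], [3, 1j]]
B = [[1 + 1j, 0], [1j, -1j]]

inner_AB = trace(matmul(conj_transpose(B), A))           # <A, B> = tr(B* A)
norm_A = trace(matmul(conj_transpose(A), A)).real ** 0.5
norm_B = trace(matmul(conj_transpose(B), B)).real ** 0.5
cos_theta = abs(inner_AB) / (norm_A * norm_B)
```

This reproduces ⟨A, B⟩ = −4i, ‖A‖ = 4, ‖B‖ = 2 and cos θ = 1/2.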
W.E. 16:

Let T be a linear operator on an inner product space V such that ‖T(u)‖ = ‖u‖ for all u ∈ V. Prove
that T is one-one.

Solution: V(F) is an inner product space.

T is a linear operator on V.

So T(au + bv) = aT(u) + bT(v) for all u, v ∈ V and for all a, b ∈ F.

To show that T is one-one:

Let u, v ∈ V such that

T(u) = T(v)
⟹ T(u) − T(v) = O (the zero vector in V)
⟹ T(u − v) = O (since T is a linear operator)
⟹ ‖T(u − v)‖ = ‖O‖ = 0
⟹ ‖u − v‖ = 0, since ‖T(u)‖ = ‖u‖ for all u ∈ V
⟹ ‖u − v‖² = 0
⟹ ⟨u − v, u − v⟩ = 0
⟹ u − v = O
⟹ u = v

Thus T(u) = T(v) ⟹ u = v for all u, v ∈ V.

So T is one-one.
W.E. 17:

Let u = (2, 1+i, i) and v = (2−i, 2, 1+2i) be vectors in C³. Compute ⟨u, v⟩, ‖u‖, ‖v‖ and
‖u + v‖. Then verify both the Cauchy - Schwarz inequality and the triangle inequality.

Solution: ⟨u, v⟩ = ⟨(2, 1+i, i), (2−i, 2, 1+2i)⟩
 = 2(2+i) + (1+i)(2) + i(1−2i)
 = 4 + 2i + 2 + 2i + i + 2
 = 8 + 5i

So ⟨u, v⟩ = 8 + 5i.

‖u‖² = ⟨u, u⟩ = ⟨(2, 1+i, i), (2, 1+i, i)⟩
 = 2(2) + (1+i)(1−i) + i(−i)
 = 4 + 2 + 1 = 7

So ‖u‖ = √7.

‖v‖² = ⟨v, v⟩ = ⟨(2−i, 2, 1+2i), (2−i, 2, 1+2i)⟩
 = (2−i)(2+i) + 2(2) + (1+2i)(1−2i)
 = 5 + 4 + 5 = 14

So ‖v‖ = √14.

u + v = (2, 1+i, i) + (2−i, 2, 1+2i)
 = (2 + 2 − i, 1 + i + 2, i + 1 + 2i) = (4−i, 3+i, 1+3i)

‖u + v‖² = ⟨u + v, u + v⟩ = ⟨(4−i, 3+i, 1+3i), (4−i, 3+i, 1+3i)⟩
 = (4−i)(4+i) + (3+i)(3−i) + (1+3i)(1−3i)
 = 17 + 10 + 10 = 37

Hence ‖u + v‖ = √37.

To verify the Cauchy - Schwarz inequality |⟨u, v⟩| ≤ ‖u‖ ‖v‖:

We have shown ⟨u, v⟩ = 8 + 5i,
|⟨u, v⟩| = √(64 + 25) = √89 ...... (1)

‖u‖ = √7, ‖v‖ = √14,
‖u‖ ‖v‖ = √7 √14 = √98 ..... (2)

But √89 ≤ √98,

so |⟨u, v⟩| ≤ ‖u‖ ‖v‖.

Hence the Cauchy - Schwarz inequality is verified.

To verify the triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖:

We have ‖u‖ + ‖v‖ = √7 + √14.

But √37 ≤ √7 + √14,

so ‖u + v‖ ≤ ‖u‖ + ‖v‖.

Hence the triangle inequality is verified.
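The same numbers fall out of a direct computation with Python's complex type (a sketch of ours; `inner` is our helper name):

```python
def inner(u, v):
    # <u, v> = sum a_i * conjugate(b_i)
    return sum(a * b.conjugate() for a, b in zip(u, v))

u = [2, 1 + 1j, 1j]
v = [2 - 1j, 2, 1 + 2j]

ip = inner(u, v)                      # expected: 8 + 5i
nu = inner(u, u).real ** 0.5          # expected: sqrt(7)
nv = inner(v, v).real ** 0.5          # expected: sqrt(14)
w = [a + b for a, b in zip(u, v)]
nw = inner(w, w).real ** 0.5          # expected: sqrt(37)

cs_ok = abs(ip) <= nu * nv + 1e-12    # sqrt(89) <= sqrt(98)
tri_ok = nw <= nu + nv + 1e-12        # sqrt(37) <= sqrt(7) + sqrt(14)
```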


W.E. 18 :

In C 0,1 if f (t )  t ; g (t )  et . Compute  f , g , f , g , and f  g . Then verify


1

both the Cauchy - Schwarz in equality and the triangle in equality. If  f , g   f (t ) g (t ) dt .


0
Rings and Linear Algebra 13.41 Inner Product Spaces

 
Solution: Let V  C  0,1 be the inner product space of real valued continuous functions on  0,1
with the inner product f and g defined by

1
 f , g   f (t ) g (t ) dt
0

1
 f , g   e t .t dt
0

1
  t .e t    1.e t dt
1

0
0

1.e1   et   e1   e1  e0   e  e  1  1
1

 f , g  1

1
1
t3  1
 f , f  t , t   t.t dt   t det   
2 2
f
0 0  3 0

1 1
 0 
2
i.e. f
3 3

1 1
 So f 
2
f
3 3

1
1
 e 2t  1
 g , g   e .e dt   e dt   
2 t t 2t
g
0 0  2 0

i.e. g
2

2
 e  e    e 2  1
1 2 0 1
2

g 
2
 e  1
1 2

f  g  t  et

( f  g )  f  g , f  g  (t  et ), (t  et ) 
2
Centre for Distance Education 13.42 Acharya Nagarjuna University

1 1
  (t  e ) dt   (t 2  2 te t  e 2 t ) dt
t 2

0 0

1
 t 3 e 2t 
   2et .(t  1) 
3 2 0

1 1 1
f g   (e 2  1)  2  (3e2  11)
2

3 2 6

1 2
i.e. f  g  (3e  11)
6

To verify Cauchy - Schwarz inequality  u, v   u . v

We have  f , g   1  1 .......... (1)

1 1
f g  . (e 2  1) .............. (2)
3 2
From (1) and (2)

 f ,g   f g

Hence Cauchy - Schwarz in equality is verified.

ii) To verify triangular in equality u  v  u  v

1 2
We have f  g  (3e  11)
6

1 1 2
f  g   (e  1)
3 2

Hence f  g  f  g

So triangle inequality is verified.
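The closed-form values above can be cross-checked by numerical integration; the sketch below uses a composite Simpson rule (our own helper, not part of the text):

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n even) -- plenty accurate for these smooth integrands
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

fg = simpson(lambda t: t * math.exp(t), 0, 1)            # <f, g> = 1
ff = simpson(lambda t: t * t, 0, 1)                      # ||f||^2 = 1/3
gg = simpson(lambda t: math.exp(2 * t), 0, 1)            # ||g||^2 = (e^2 - 1)/2
ss = simpson(lambda t: (t + math.exp(t)) ** 2, 0, 1)     # ||f+g||^2 = (3e^2 + 11)/6
```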


W.E. 19:

In the vector space Vₙ(F) with the standard inner product defined on it, show that

(Σᵢ₌₁ⁿ |aᵢ + bᵢ|²)^(1/2) ≤ (Σᵢ₌₁ⁿ |aᵢ|²)^(1/2) + (Σᵢ₌₁ⁿ |bᵢ|²)^(1/2)

Solution: In the vector space Vₙ(F) with the standard inner product defined on it, let
u = (a₁, a₂, ..., aₙ), v = (b₁, b₂, ..., bₙ). Then u + v = (a₁, a₂, ..., aₙ) + (b₁, b₂, ..., bₙ)

So u + v = (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ)

and ‖u + v‖ = (Σᵢ₌₁ⁿ |aᵢ + bᵢ|²)^(1/2)

‖u‖ = (|a₁|² + |a₂|² + ... + |aₙ|²)^(1/2) = (Σᵢ₌₁ⁿ |aᵢ|²)^(1/2)

Similarly ‖v‖ = (Σᵢ₌₁ⁿ |bᵢ|²)^(1/2)

By the triangle inequality, ‖u + v‖ ≤ ‖u‖ + ‖v‖.

So (Σᵢ₌₁ⁿ |aᵢ + bᵢ|²)^(1/2) ≤ (Σᵢ₌₁ⁿ |aᵢ|²)^(1/2) + (Σᵢ₌₁ⁿ |bᵢ|²)^(1/2)

13.8 Summary:
In this lesson we discussed inner products and inner product spaces, the norm or length of
a vector in an inner product space, normalising vectors, the Cauchy - Schwarz inequality, the
triangle inequality, the parallelogram law, the norm of a vector in a vector space, and the distance
between two vectors.

13.9 Technical Terms:


Inner Product, Inner Product Space, Norm of a vector, Normed Vector space, Distance
between vectors.

13.10 Model Questions:

1. Find the unit vector corresponding to (2−i, 3+2i, 2−3i) of V₃(C) with respect to the standard
inner product.

Ans: (1/√31)(2−i, 3+2i, 2−3i)

2. Which of the following define an inner product on V₂(R)? Give reasons.

i. ⟨u, v⟩ = x₁y₁ + 2x₁y₂ + 2x₂y₁ + 5x₂y₂

ii. ⟨u, v⟩ = 2x₁y₁ + 5x₂y₂, where u = (x₁, x₂), v = (y₁, y₂)

Ans: i) inner product

ii) inner product

3. Show that V₃(R) is an inner product space under the inner product

⟨(x₁, x₂, x₃), (y₁, y₂, y₃)⟩ = x₁y₁ + x₂y₂ + x₃y₃.

13.11 Exercises:
1. If u = (a₁, a₂), v = (b₁, b₂) ∈ V₂(R), define ⟨u, v⟩ = a₁b₁ + a₁b₂ + a₂b₁ + 4a₂b₂. Show that it is an inner
product on V₂(R).

2. Show that V₃(R) is an inner product space under the product defined by

⟨(x₁, x₂, x₃), (y₁, y₂, y₃)⟩ = x₁y₁ + x₂y₂ + x₃y₃

3. If u = (a₁, a₂), v = (b₁, b₂), then show that ⟨u, v⟩ = 2a₁b₁ + a₁b₂ + a₂b₁ + a₂b₂ is an inner product on
V₂(R).

4. Prove that ⟨u, v⟩ = a₁a₂ + b₁b₂ does not define an inner product on V₂(R).

5. Let u = (1, 3, −4, 2), v = (4, −2, 2, 1), w = (5, −1, −2, 6) in R⁴. Find ⟨u, w⟩, ⟨v, w⟩, and verify
that ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩, and compute ‖u‖, ‖v‖, ‖w‖.

Ans: ⟨u, w⟩ = 22, ⟨v, w⟩ = 24

‖u‖ = √30, ‖v‖ = 5, ‖w‖ = √66

6. u = (1, 2), v = (−1, 1) are two vectors in the vector space R², with the standard inner product. If w
is a vector such that ⟨u, w⟩ = −1, ⟨v, w⟩ = 3, then find w.

Ans: w = (−7/3, 2/3)

7. In the vector space P(t) of all polynomials with inner product ⟨f, g⟩ = ∫₀¹ f(t) g(t) dt,
if f(t) = t + 2, g(t) = 3t − 2, h(t) = t² − 2t − 3, then find

i) ⟨f, g⟩ and ⟨f, h⟩

ii) ‖f‖ and ‖g‖

iii) Normalize f and g.

Ans: ⟨f, g⟩ = −1, ⟨f, h⟩ = −37/4

‖f‖ = √57/3, ‖g‖ = 1

f̂ = unit vector along f = (3/√57)(t + 2); ĝ = g = 3t − 2

8. Let M = M₂ₓ₃ with inner product ⟨A, B⟩ = tr(BᵀA), and let

A = [ 9 8 7 ]   B = [ 1 2 3 ]   C = [ 3 −5  2 ]
    [ 6 5 4 ]       [ 4 5 6 ]       [ 1  0 −4 ]

Then find

i) ⟨A, B⟩, ⟨A, C⟩, ⟨B, C⟩  ii) ⟨2A + 3B, 4C⟩  iii) ‖A‖ and ‖B‖

Ans: ⟨A, B⟩ = 119; ⟨A, C⟩ = −9; ⟨B, C⟩ = −21

⟨2A + 3B, 4C⟩ = −324

‖A‖ = √271, ‖B‖ = √91

9. Find cos θ where θ is the angle between

i) u = (1, −3, 2) and v = (2, 1, 5) in R³

ii) u = (1, 3, −5, 4) and v = (2, −3, 4, 1) in R⁴

iii) f(t) = 2t − 1, g(t) = t², where ⟨f, g⟩ = ∫₀¹ f(t) g(t) dt

iv) A = [ 2 1 ; −3 1 ], B = [ 0 1 ; 2 3 ], where ⟨A, B⟩ = tr(BᵀA)

Ans: i) 9/(2√105)  ii) −23/(3√170)  iii) √15/6  iv) −2/√210

10. If u = (1, −5, 3) and v = (4, 2, −3) are two vectors in R³, find the distance between u and v.

Ans: d(u, v) = √94

11. In the inner product space C², for u, v ∈ C² and A = [ 1 i ; −i 2 ], if the inner product is
⟨u, v⟩ = uAv*, compute ⟨u, v⟩ for u = (1−i, 2+3i) and v = (2+i, 3−2i).

Ans: 6 − 2i ∈ C

12. If u = (1−i, 2+3i), v = (2−5i, 3−i) are two vectors in V(C) = C²(C) with the standard inner
product, find ⟨u, v⟩ and ‖u‖, ‖v‖.

Ans: 10 + 14i, √15, √39

13. If u, v are two vectors in a Euclidean space V(R) such that ‖u‖ = ‖v‖, then prove that
⟨u + v, u − v⟩ = 0.

14. Find the norm of the vector v = (1, 2, 5) and also normalise this vector.

Ans: ‖v‖ = √30, v̂ = (1/√30, 2/√30, 5/√30)

15. Let V(R) be the vector space of polynomials with inner product determined by
⟨f, g⟩ = ∫₀¹ f(t) g(t) dt for f, g ∈ V. If f(x) = x² + x − 4, g(x) = x − 1 for all x ∈ [0, 1], then find
⟨f, g⟩, ‖f‖, ‖g‖.

Ans: 7/4, √(311/30), 1/√3

13.12 Reference Books:

1. Linear Algebra - 4th edition
   Stephen H. Friedberg, Arnold J. Insel, Lawrence E. Spence
2. Topics in Algebra - I.N. Herstein
3. Modern Algebra Vol. II - K.S. Narayanan, T.K. Manicavachagom Pillay
4. A Course in Abstract Algebra - Vijay K. Khanna, S.K. Bhambri

- A. Mallikharjuna Sarma

LESSON - 14

ORTHOGONALIZATION
14.1 Objective of the Lesson:
In geometry, perpendicularity is a useful concept. We now introduce a similar concept in
inner product spaces. In previous chapters, we have seen the special role of the standard ordered
bases for Cⁿ and Rⁿ. The special properties of these bases stem from the fact that the basis
vectors form an orthonormal set. Just as bases are the building blocks of vector spaces, bases
that are also orthonormal sets are the building blocks of inner product spaces.

14.2. Structure of the Lesson:


This lesson contains the following items.

14.3 Introduction

14.4 Orthogonality and Orthonormality definitions and Theorems

14.5 Worked out examples

14.6 Orthogonality - Linear independence-theorems

14.7 Orthonormal set definition - worked out examples

14.8 Orthonormal set of vectors - Linear independence Theorems

14.9 Worked out examples

14.10 Exercises

14.11 Orthonormal basis - Gram-Schmidt Orthogonalization process - Working procedure - Worked out examples

14.12 Fourier coefficients - Worked out examples

14.13 Parseval’s Identity - Bessel’s inequality - Theorems

14.14 Orthogonal complement - Theorems - Closest vector - Orthogonal projection - Theorems

14.15 Worked out examples

14.16 Summary

14.17 Technical Terms


14.18 Model Questions

14.19 Exercise

14.20 Reference Books

14.3 Introduction:
Let us consider the case of vectors in R² and see how perpendicularity is characterised
there. Two vectors u, v ∈ R² are perpendicular if and only if the Pythagorean
relation ‖u + v‖² = ‖u‖² + ‖v‖² ......(1) holds. In real inner product spaces this Pythagorean relation can be
written in a very simple form by using the condition that the angle between the vectors is 90° or
that the cosine of the angle between the vectors u and v is zero. Here the condition (1) is equivalent to the
very simple condition ⟨u, v⟩ = 0. We extend this idea to the vectors of inner product spaces.
very simple condition  u , v  0 . We extend this idea to the vectors of Inner product spaces.

14.4 Orthogonality:
14.4.1 Orthogonality of Two Vectors:
Definition: Two vectors u and v in an inner product space V are said to be orthogonal or perpendicular if ⟨u, v⟩ = 0.

14.4.2 Orthogonal Set :


Definition: A subset S of an inner product space is said to be orthogonal if any two distinct
vectors in S are orthogonal.
14.4.3 A Vector Orthogonal to a Subset S of V:
Definition: A vector u is said to be orthogonal to a subset S of Inner product space V; if it is
orthogonal to each vector in S.
14.4.4 Orthogonal Subspaces :
Two subspaces of an inner product space are called orthogonal if every vector in each is
orthogonal to every vector in the other.

Two subspaces W 1 and W 2 of an inner product space V ( F ) are said to be orthogonal if


 u , v  0  u  W1 and  v  W2

14.4.5 A Theorem:
Show that orthogonality in an inner product space is symmetric.

Proof: Let V(F) be the given inner product space, and let u, v be two vectors in V such that u is orthogonal
to v.

So ⟨u, v⟩ = 0, and hence ⟨v, u⟩ = conjugate of ⟨u, v⟩ = conjugate of 0 = 0.
So ⟨v, u⟩ = 0. Hence v is orthogonal to u.

Hence orthogonality in an inner product space is symmetric.
Hence orthogonality in an inner product is symmetric.


14.4.5B Theorem:
If u is orthogonal to v, then every scalar multiple of u is orthogonal to v, where u, v are
vectors in an inner product space.

Proof: u, v are vectors in an inner product space V(F).

As u is orthogonal to v, ⟨u, v⟩ = 0 ............ (1)

Let k be a scalar belonging to F.

Then ⟨ku, v⟩ = k ⟨u, v⟩ = k(0) = 0, using (1)

⟹ ku is orthogonal to v.
Hence every scalar multiple of u is orthogonal to v.
14.4.6 Theorem:
The zero vector in V is orthogonal to every vector in an inner product space.

Proof: O is the zero vector in the inner product space V(F).

Let u be any vector in V. Then ⟨O, u⟩ = ⟨0u, u⟩ = 0 ⟨u, u⟩ = 0.

As u is arbitrary, the zero vector is orthogonal to every vector.

14.4.7 Theorem: Show that the zero vector is the only vector which is orthogonal to itself in an inner
product space.

Proof: Let u be any vector in the given inner product space V(F) which is orthogonal to itself.
So ⟨u, u⟩ = 0 ⟹ u = O (the zero vector), by the definition of inner product.

Hence the zero vector is the only vector which is orthogonal to itself.


14.4.8 Theorem:

The vectors u, v of a real inner product space V(F) are orthogonal if and only if

‖u + v‖² = ‖u‖² + ‖v‖².

Proof: Let u, v be two vectors in a real inner product space V(F).

Now ‖u + v‖² = ‖u‖² + ‖v‖²
⟺ ⟨u + v, u + v⟩ = ⟨u, u⟩ + ⟨v, v⟩
⟺ ⟨u, u + v⟩ + ⟨v, u + v⟩ = ⟨u, u⟩ + ⟨v, v⟩
⟺ ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩ = ⟨u, u⟩ + ⟨v, v⟩
⟺ ⟨u, v⟩ + ⟨v, u⟩ = 0
⟺ ⟨u, v⟩ + ⟨u, v⟩ = 0, since ⟨u, v⟩ is real, so that ⟨v, u⟩ = ⟨u, v⟩
⟺ ⟨u, v⟩ = 0 ⟺ u, v are orthogonal vectors.


14.4.9 Geometrical interpretation:

Let u, v be two vectors in the inner product space V3 ( R ) with standard inner product defined

on it. Let u, v represent the sides AB, BC of triangle ABC. In the three dimensional Eucledian

space. Then u  AB. v  BC ..

Also the vector u + v represent the side AC of the triangle ABC and u  v  AC . Then

from the above theorem ABC  900 if and only if AC 2  AB 2  BC 2 which is pythogorean
theorem.
Note The above theorem does not held in complex inner product space.

For example, take u = (0, i), v = (0, 1) in V₂(C). Then

‖u + v‖² = ⟨u + v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² + ⟨u, v⟩ + conjugate of ⟨u, v⟩ + ‖v‖²
= ‖u‖² + ‖v‖² + 2 Re⟨u, v⟩ .......... (1)

since z + z̄ = 2 Re z.

But ⟨u, v⟩ = ⟨(0, i), (0, 1)⟩ = 0(0̄) + i(1̄) = i ≠ 0, while Re⟨u, v⟩ = 0. Using this in (1) we get

‖u + v‖² = ‖u‖² + ‖v‖²,

which does not imply ⟨u, v⟩ = 0.

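The complex counterexample above is easy to confirm numerically with Python's built-in complex numbers (the helper name `hinner` is ours):

```python
# Verify: in C^2 with the standard Hermitian inner product, u = (0, i) and
# v = (0, 1) satisfy the Pythagorean identity even though <u, v> = i != 0.
def hinner(u, v):
    """Standard inner product on C^n: sum of u_k * conj(v_k)."""
    return sum(x * y.conjugate() for x, y in zip(u, v))

u, v = (0, 1j), (0, 1)
s = tuple(x + y for x, y in zip(u, v))   # u + v = (0, 1 + i)

lhs = hinner(s, s).real                  # ||u + v||^2
rhs = hinner(u, u).real + hinner(v, v).real
print(lhs, rhs)                          # both equal 2.0
print(hinner(u, v))                      # 1j, so u and v are NOT orthogonal
```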
14.5 Worked out Examples:


W.E.1 : Find a unit vector orthogonal to (4, 2, 3) in R³.

Solution: Let u = (4, 2, 3) and let v = (a, b, c) be orthogonal to u.

Hence ⟨u, v⟩ = 0 ⟹ ⟨(4, 2, 3), (a, b, c)⟩ = 0
⟹ 4a + 2b + 3c = 0 ............. (1)

Any solution of this equation gives a vector orthogonal to u.

a = 1, b = 1, c = -2 satisfy equation (1). So v = (a, b, c) = (1, 1, -2) is orthogonal to u, and ‖v‖ = √(1² + 1² + (-2)²) = √6.

Hence v̂ = v/‖v‖ = (1/√6)(1, 1, -2) = (1/√6, 1/√6, -2/√6) is a unit vector orthogonal to the given vector u = (4, 2, 3).

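The two steps of W.E.1 — pick any solution of 4a + 2b + 3c = 0, then divide by its length — can be checked in a few lines (a sketch; `normalize` is our helper name):

```python
import math

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def normalize(v):
    """Return v / ||v||, a unit vector in the same direction."""
    n = math.sqrt(inner(v, v))
    return tuple(x / n for x in v)

u = (4, 2, 3)
v = (1, 1, -2)                      # one solution of 4a + 2b + 3c = 0
v_hat = normalize(v)
print(inner(u, v))                  # 0: v is orthogonal to u
print(inner(v_hat, v_hat))          # 1.0 (up to rounding): unit length
```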
W.E. 2 : Find a non-zero vector w which is orthogonal to u = (1, 2, 1) and v = (2, 5, 4) in R³.

Solution: Let w = (x, y, z) be a vector orthogonal to both u and v.

So ⟨u, w⟩ = 0 ⟹ ⟨(1, 2, 1), (x, y, z)⟩ = 0 ⟹ x + 2y + z = 0 .......... (1)

and ⟨v, w⟩ = 0 ⟹ ⟨(2, 5, 4), (x, y, z)⟩ = 0 ⟹ 2x + 5y + 4z = 0 .......... (2)

(1) × 2 : 2x + 4y + 2z = 0.

Subtracting from (2): y + 2z = 0, so y = -2z.

Put z = 1, then y = -2. Using these values in (1): x - 4 + 1 = 0 ⟹ x = 3.

Thus w = (x, y, z) = (3, -2, 1) is the desired non-zero vector orthogonal to both u and v.

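In R³ a vector orthogonal to two given vectors can also be produced by the cross product, which reproduces the elimination result above up to a scalar multiple (a sketch; `cross` is our helper, not part of the text):

```python
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    """Cross product in R^3; the result is orthogonal to both u and v."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1, 2, 1), (2, 5, 4)
w = cross(u, v)
print(w)                            # (3, -2, 1)
print(inner(u, w), inner(v, w))     # 0 0
```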
W.E. 3 : In a real inner product space, if u, v are two vectors such that ‖u‖ = ‖v‖, then prove that u + v and u - v are orthogonal. Interpret the result geometrically.

Solution: ⟨u + v, u - v⟩ = ⟨u, u⟩ - ⟨u, v⟩ + ⟨v, u⟩ - ⟨v, v⟩
= ‖u‖² - ⟨u, v⟩ + ⟨u, v⟩ - ‖v‖² = ‖u‖² - ‖v‖² = 0,

since in a real inner product space ⟨v, u⟩ = ⟨u, v⟩, and ‖u‖ = ‖v‖ is given.

So, as ⟨u + v, u - v⟩ = 0, the vectors u + v, u - v are orthogonal.

Geometrical interpretation: In three dimensional space let u = AB, v = BC represent two adjacent sides of a rhombus ABCD, so that ‖u‖ = ‖v‖. Then u + v and u - v represent the diagonals AC and DB.

⟨u + v, u - v⟩ = 0 ⟹ u + v, u - v are orthogonal ⟹ the diagonals AC and DB are perpendicular.

Hence in a rhombus the diagonals are perpendicular.

W.E. 4: If u, v are two vectors in a real inner product space and u + v, u - v are orthogonal, then ‖u‖ = ‖v‖.

Solution: u + v, u - v are orthogonal
⟹ ⟨u + v, u - v⟩ = 0
⟹ ⟨u, u⟩ - ⟨u, v⟩ + ⟨v, u⟩ - ⟨v, v⟩ = 0
⟹ ‖u‖² - ⟨u, v⟩ + ⟨u, v⟩ - ‖v‖² = 0, since in a real inner product space ⟨v, u⟩ = ⟨u, v⟩
⟹ ‖u‖² - ‖v‖² = 0 ⟹ ‖u‖ = ‖v‖.

Thus if the vectors u + v, u - v are orthogonal, then ‖u‖ = ‖v‖.

Geometrical interpretation: Let u, v be two vectors in the inner product space V₃(R) with the standard inner product, representing the sides AB and BC of a parallelogram ABCD. Then u + v, u - v represent the diagonals AC and DB of the parallelogram.

As u + v, u - v are orthogonal, the diagonals AC, DB are perpendicular, and by the above result ‖u‖ = ‖v‖, i.e. AB = BC. Thus if the diagonals of a parallelogram are perpendicular, then the parallelogram is a rhombus.
W.E. 5: Let V be an inner product space and suppose that u and v are orthogonal vectors in V. Prove that ‖u + v‖² = ‖u‖² + ‖v‖². Deduce the Pythagorean theorem in R².

Solution: As u and v are orthogonal vectors in an inner product space V, ⟨u, v⟩ = 0 ...... (1)

Now ‖u + v‖² = ⟨u + v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² + ⟨u, v⟩ + conjugate of ⟨u, v⟩ + ‖v‖² .......... (2)
= ‖u‖² + 0 + conjugate of 0 + ‖v‖², using (1).

So ‖u + v‖² = ‖u‖² + ‖v‖².

Deduction of the Pythagorean theorem:

If u, v are two vectors in a real inner product space, then ⟨v, u⟩ = ⟨u, v⟩ ...... (3). Using this in (2),

‖u + v‖² = ‖u‖² + ‖v‖² + 2⟨u, v⟩
= ‖u‖² + ‖v‖², as u, v are orthogonal vectors, so ⟨u, v⟩ = 0 .......... (4)

In R², if AB = u, BC = v then AC = u + v, and (4) states AC² = AB² + BC², which is the Pythagorean theorem.

W.E. 6 : If u, v are two orthogonal vectors in an inner product space V(F) and ‖u‖ = ‖v‖ = 1, then prove that d(u, v) = √2.

Solution: u, v are orthogonal vectors in an inner product space V(F), so ⟨u, v⟩ = 0 ....... (1)

[d(u, v)]² = ‖u - v‖²
= ⟨u - v, u - v⟩
= ⟨u, u⟩ - ⟨u, v⟩ - ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² - 0 - 0 + ‖v‖², since u, v are orthogonal
= (1)² + (1)² = 2, since ‖u‖ = ‖v‖ = 1.

So d(u, v) = ‖u - v‖ = √2.

14.6.1 Theorem:
Show that any orthogonal set of non-zero vectors in an inner product space V is linearly independent.

Proof: Let S be an orthogonal set of non-zero vectors in an inner product space V.

Let S₁ = {u₁, u₂, ..., uₘ} be a finite subset of S containing m distinct vectors.

Let Σⱼ₌₁ᵐ cⱼuⱼ = c₁u₁ + c₂u₂ + ... + cₘuₘ = O ............... (1)

We will show that each scalar coefficient is zero. Let uᵢ be any vector in S₁, i.e. 1 ≤ i ≤ m.

Now consider
⟨c₁u₁ + c₂u₂ + ... + cᵢuᵢ + ... + cₘuₘ, uᵢ⟩
= c₁⟨u₁, uᵢ⟩ + c₂⟨u₂, uᵢ⟩ + ... + cᵢ⟨uᵢ, uᵢ⟩ + ... + cₘ⟨uₘ, uᵢ⟩,

i.e. ⟨Σⱼ₌₁ᵐ cⱼuⱼ, uᵢ⟩ = cᵢ⟨uᵢ, uᵢ⟩ ............ (2)

as the vectors are orthogonal, so ⟨uⱼ, uᵢ⟩ = 0 for j ≠ i.

But by (1) Σⱼ₌₁ᵐ cⱼuⱼ = O. Using this in (2):

0 = ⟨O, uᵢ⟩ = cᵢ⟨uᵢ, uᵢ⟩ = cᵢ‖uᵢ‖².

So cᵢ‖uᵢ‖² = 0. But uᵢ is a non-zero vector, so ‖uᵢ‖ ≠ 0. Hence cᵢ = 0 for 1 ≤ i ≤ m.

Thus c₁u₁ + c₂u₂ + ... + cₘuₘ = O ⟹ c₁ = 0, c₂ = 0, ..., cₘ = 0.

Hence S₁ = {u₁, u₂, ..., uₘ} is a linearly independent set.

Thus every finite subset of S is linearly independent. Hence S is linearly independent.


14.6.2 Theorem:

Let S = {u₁, u₂, ..., uₘ} be an orthogonal set of non-zero vectors in an inner product space V(F). If a vector v in V is in the linear span of S, then

v = Σᵢ₌₁ᵐ (⟨v, uᵢ⟩/‖uᵢ‖²) uᵢ

Proof: v is a vector in the inner product space V which is in the linear span of S = {u₁, u₂, ..., uₘ}. So v can be expressed as a linear combination of the vectors of S: there exist scalars c₁, c₂, ..., cₘ in F such that

v = c₁u₁ + c₂u₂ + ... + cₘuₘ = Σⱼ₌₁ᵐ cⱼuⱼ

Then for each i, where 1 ≤ i ≤ m,

⟨v, uᵢ⟩ = ⟨Σⱼ₌₁ᵐ cⱼuⱼ, uᵢ⟩ ........... (1)
= Σⱼ₌₁ᵐ cⱼ⟨uⱼ, uᵢ⟩, by the linearity of the inner product
= cᵢ⟨uᵢ, uᵢ⟩, on summing with respect to j, since S is an orthogonal set of non-zero vectors, so that ⟨uⱼ, uᵢ⟩ = 0 if j ≠ i.

So ⟨v, uᵢ⟩ = cᵢ‖uᵢ‖² ............ (2)

As uᵢ is a non-zero vector in S, ‖uᵢ‖ ≠ 0. So by (2),

cᵢ = ⟨v, uᵢ⟩/‖uᵢ‖².

But v = c₁u₁ + c₂u₂ + ... + cₘuₘ. So

v = (⟨v, u₁⟩/‖u₁‖²)u₁ + (⟨v, u₂⟩/‖u₂‖²)u₂ + ... + (⟨v, uₘ⟩/‖uₘ‖²)uₘ

Hence v = Σᵢ₌₁ᵐ (⟨v, uᵢ⟩/‖uᵢ‖²) uᵢ. Hence the theorem.

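Theorem 14.6.2 gives the expansion coefficients directly, without solving a linear system. A short Python sketch with an illustrative orthogonal (but not orthonormal) set in R³ (the data is ours, not from the text):

```python
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def expand(v, basis):
    """Coefficients c_i = <v, u_i> / ||u_i||^2 for an orthogonal basis."""
    return [inner(v, u) / inner(u, u) for u in basis]

basis = [(1, 1, 0), (1, -1, 0), (0, 0, 2)]   # mutually orthogonal, not unit
v = (3, 1, 5)
coeffs = expand(v, basis)
print(coeffs)          # [2.0, 1.0, 2.5]

# reconstruct v from the coefficients, confirming the theorem
rec = [sum(c * u[k] for c, u in zip(coeffs, basis)) for k in range(3)]
print(rec)             # [3.0, 1.0, 5.0]
```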

14.7.1 Definition:
Orthonormal set: Let S be a set of unit vectors in an inner product space V which are mutually orthogonal; then S is said to be an orthonormal set.

Or

Let S be a set of vectors in an inner product space V. Then S is said to be an orthonormal set if

i) u ∈ S ⟹ ‖u‖ = 1, i.e. ⟨u, u⟩ = 1, and
ii) u, v ∈ S and u ≠ v ⟹ ⟨u, v⟩ = 0.

Or

A finite set S = {u₁, u₂, ..., uₘ} of an inner product space V is orthonormal if ⟨uᵢ, uⱼ⟩ = δᵢⱼ, where δᵢⱼ denotes the Kronecker delta:

δᵢⱼ = 1 if i = j, and δᵢⱼ = 0 if i ≠ j.

Note i): An orthonormal set is an orthogonal set in which every vector is a unit vector, i.e. a set consisting of mutually orthogonal unit vectors is called an orthonormal set.
Note ii): An orthonormal set does not contain the zero vector.
14.7.2 Worked Out Examples:

 1 2 2   2 1 2   2 2 1  
W.E. 7: Prove that S   , ,  ,  , ,  ,  , ,   is an orthonormal set in R 3 with
 3 3 3   3 3 3   3 3 3  
standard inner product.

 1 2 2   2 1 2   2 2 1 
Solution: Let u   , ,  , v   , ,  , w   , ,  are the given vectors of S.
3 3 3  3 3 3 3 3 3 

 1    2   2 
2 2 2
1 4 4
u           1
3  3   3  9 9 9

 2   1   2 
2 2 2
4 1 4
v            1 1
3  3  3 9 9 9

 2   2   1 
2 2 2
4 4 1
w           1
3 3  3  9 9 9
Centre for Distance Education 14.12 Acharya Nagarjuna University

 1 2 2   2 2 1  1  2   2  2   2   1 
 u , w   , ,  ,  , ,              
3 3 3  3 3 3  3  3   3  3   3   3 

2 4 2
i.e.  u , w     0  w, u 
9 9 9

  1 2 2   2 1 2  
 u, v    , ,  ,  , ,   
 3 3 3   3 3 3 

1  2   2  1   2   2  2  2  4
             0
3  3   3  3   3   3  9

i.e.  u, v  0  v, u 

 2 1 2   2 2  1 
 v, w   , ,  ,  , ,  
3 3 3 3 3 3 

 2  2   1   2   2   1  4  2  2
               0
 3  3   3   3   3   3  9

Thus  v, w  0  w, v 

As the length of each vector in S is unity, the inner product of two different vectors in S is 0,
So S is an orthonormal set.

W.E.7b : Consider the usual basis E = {e₁, e₂, e₃} of the Euclidean space R³, where e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1). Show that E is orthonormal.

Solution:

e₁ = (1, 0, 0), so ‖e₁‖ = √(1² + 0² + 0²) = 1
e₂ = (0, 1, 0), so ‖e₂‖ = √(0² + 1² + 0²) = 1
e₃ = (0, 0, 1), so ‖e₃‖ = √(0² + 0² + 1²) = 1

Thus ‖e₁‖ = ‖e₂‖ = ‖e₃‖ = 1. Moreover,

⟨e₁, e₂⟩ = ⟨(1, 0, 0), (0, 1, 0)⟩ = 1(0) + 0(1) + 0(0) = 0 = ⟨e₂, e₁⟩
⟨e₂, e₃⟩ = ⟨(0, 1, 0), (0, 0, 1)⟩ = 0(0) + 1(0) + 0(1) = 0 = ⟨e₃, e₂⟩
⟨e₃, e₁⟩ = ⟨(0, 0, 1), (1, 0, 0)⟩ = 0(1) + 0(0) + 1(0) = 0 = ⟨e₁, e₃⟩

Thus the length of each vector in E is unity and the inner product of any two different vectors of E is zero. So E is an orthonormal set.

W.E. 8: If S = {(1, 2, -3, 4), (3, 4, 1, -2), (3, -2, 1, 1)} is a subset of R⁴, obtain an orthonormal set from S. Verify the Pythagorean theorem.

Solution: Let u = (1, 2, -3, 4), v = (3, 4, 1, -2), w = (3, -2, 1, 1).

Now ⟨u, v⟩ = 1(3) + 2(4) + (-3)(1) + 4(-2) = 3 + 8 - 3 - 8 = 0 = ⟨v, u⟩
⟨v, w⟩ = 3(3) + 4(-2) + 1(1) + (-2)(1) = 9 - 8 + 1 - 2 = 0 = ⟨w, v⟩
⟨w, u⟩ = 3(1) + (-2)(2) + 1(-3) + 1(4) = 3 - 4 - 3 + 4 = 0 = ⟨u, w⟩

Thus the inner product of any two different vectors in S is zero, so S is orthogonal. We normalise S to obtain an orthonormal set.

‖u‖ = √(1² + 2² + (-3)² + 4²) = √(1 + 4 + 9 + 16) = √30
‖v‖ = √(3² + 4² + 1² + (-2)²) = √(9 + 16 + 1 + 4) = √30
‖w‖ = √(3² + (-2)² + 1² + 1²) = √(9 + 4 + 1 + 1) = √15

Hence the required orthonormal set of vectors is

{(1/√30, 2/√30, -3/√30, 4/√30), (3/√30, 4/√30, 1/√30, -2/√30), (3/√15, -2/√15, 1/√15, 1/√15)}

Moreover u + v + w = (1, 2, -3, 4) + (3, 4, 1, -2) + (3, -2, 1, 1) = (7, 4, -1, 3)

‖u + v + w‖² = 49 + 16 + 1 + 9 = 75
‖u‖² + ‖v‖² + ‖w‖² = 30 + 30 + 15 = 75

Hence ‖u + v + w‖² = ‖u‖² + ‖v‖² + ‖w‖²,

which verifies the Pythagorean theorem for the orthogonal set S.

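The arithmetic of W.E.8 is easy to confirm in code; the same three vectors and the same two sums appear below (a sketch, with our helper `inner`):

```python
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

u = (1, 2, -3, 4)
v = (3, 4, 1, -2)
w = (3, -2, 1, 1)

# pairwise orthogonality
print(inner(u, v), inner(v, w), inner(w, u))        # 0 0 0

s = tuple(a + b + c for a, b, c in zip(u, v, w))    # u + v + w = (7, 4, -1, 3)
print(inner(s, s))                                   # 75
print(inner(u, u) + inner(v, v) + inner(w, w))       # 30 + 30 + 15 = 75
```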
W.E. 9: Let V(F) be an inner product space. If u is a non-zero vector in V, then show that {u/‖u‖} is an orthonormal set.

Solution: u is a non-zero vector in V, so ‖u‖ ≠ 0. Hence

⟨u/‖u‖, u/‖u‖⟩ = (1/‖u‖)(1/‖u‖)⟨u, u⟩ = (1/‖u‖²)‖u‖² = 1.

As u/‖u‖ is a unit vector, {u/‖u‖} is an orthonormal set; it is a subset of V.

Note: Every inner product space has an orthonormal subset.

14.8.1 Theorem: Every orthonormal set of vectors in an inner product space V(F) is linearly independent.

Proof: Let S be an orthonormal set of vectors in an inner product space V. Let S₁ = {u₁, u₂, ..., uₘ} be a finite subset of S containing m vectors.

Let c₁, c₂, ..., cₘ be scalars belonging to F such that

c₁u₁ + c₂u₂ + ... + cₘuₘ = O (zero vector) ............ (1)

Now, for each i with 1 ≤ i ≤ m,

⟨c₁u₁ + c₂u₂ + ... + cᵢuᵢ + ... + cₘuₘ, uᵢ⟩ = ⟨O, uᵢ⟩ = 0

⟹ c₁⟨u₁, uᵢ⟩ + c₂⟨u₂, uᵢ⟩ + ... + cᵢ⟨uᵢ, uᵢ⟩ + ... + cₘ⟨uₘ, uᵢ⟩ = 0 .... (2)

As u₁, u₂, ..., uₘ are orthonormal vectors,

⟨uⱼ, uᵢ⟩ = 1 if j = i and ⟨uⱼ, uᵢ⟩ = 0 if j ≠ i.

Using this in (2) we get

cᵢ⟨uᵢ, uᵢ⟩ = 0 ⟹ cᵢ = 0, since ⟨uᵢ, uᵢ⟩ = 1 ≠ 0, where 1 ≤ i ≤ m.

So c₁ = 0, c₂ = 0, ..., cₘ = 0. Thus

c₁u₁ + c₂u₂ + ... + cₘuₘ = O ⟹ c₁ = 0, c₂ = 0, ..., cₘ = 0.

Hence S₁ = {u₁, u₂, ..., uₘ} is linearly independent.

As S₁ is arbitrary, every finite subset of S is linearly independent. Hence S is linearly independent.

Aliter: Let S be an orthonormal set in an inner product space. Let u ∈ S; then u ≠ O, since u = O would give ⟨u, u⟩ = 0 ≠ 1, a contradiction.

So S is an orthogonal set of non-zero vectors. As we know every orthogonal set of non-zero vectors in an inner product space V is linearly independent, it follows that S is linearly independent.

Hence every orthonormal set of vectors in an inner product space V(F) is linearly independent.

14.8.2 Theorem: Let S = {u₁, u₂, ..., uₘ} be an orthonormal set of vectors in an inner product space V(F). If a vector v is in the linear span of S, then

v = Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ

Proof: v is given to be a vector in the linear span of S, so v can be expressed as a linear combination of the vectors of S. Hence there exist scalars c₁, c₂, ..., cₘ in F such that

v = c₁u₁ + c₂u₂ + ... + cₘuₘ = Σⱼ₌₁ᵐ cⱼuⱼ .......... (1)

We have, for each i with 1 ≤ i ≤ m,

⟨v, uᵢ⟩ = ⟨Σⱼ₌₁ᵐ cⱼuⱼ, uᵢ⟩
= Σⱼ₌₁ᵐ cⱼ⟨uⱼ, uᵢ⟩, by the linearity of the inner product
= cᵢ⟨uᵢ, uᵢ⟩, since ⟨uⱼ, uᵢ⟩ = 0 if j ≠ i and ⟨uⱼ, uᵢ⟩ = 1 if j = i
= cᵢ(1) = cᵢ.

Thus cᵢ = ⟨v, uᵢ⟩ where 1 ≤ i ≤ m. Putting these values of c₁, c₂, ..., cₘ in (1), we get

v = Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ
14.8.3 Theorem:

If S is an orthonormal set of vectors of an inner product space V(F), then v = a₁u₁ + a₂u₂ + ... + aₙuₙ implies aᵢ = ⟨v, uᵢ⟩, where uᵢ ∈ S, aᵢ ∈ F for i = 1, 2, ..., n.

Proof: S is an orthonormal set. So for uᵢ, uⱼ ∈ S,

‖uᵢ‖ = 1, ‖uⱼ‖ = 1 and ⟨uᵢ, uⱼ⟩ = 0 where i ≠ j.

Thus for each i = 1, 2, ..., n:

⟨v, uᵢ⟩ = ⟨a₁u₁ + a₂u₂ + ... + aₙuₙ, uᵢ⟩ = aᵢ⟨uᵢ, uᵢ⟩ = aᵢ ............ (1)

Hence aᵢ = ⟨v, uᵢ⟩, and so

v = a₁u₁ + a₂u₂ + ... + aₙuₙ = Σᵢ₌₁ⁿ aᵢuᵢ = Σᵢ₌₁ⁿ ⟨v, uᵢ⟩uᵢ, using (1).

14.8.4 Theorem:

If S = {u₁, u₂, ..., uₘ} is an orthonormal set in an inner product space V(F) and v ∈ V, then

w = v - Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ is orthogonal to each of u₁, u₂, ..., uₘ.

Proof:

For j = 1, 2, ..., m,

⟨w, uⱼ⟩ = ⟨v - Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ, uⱼ⟩
= ⟨v, uⱼ⟩ - Σᵢ₌₁ᵐ ⟨v, uᵢ⟩⟨uᵢ, uⱼ⟩
= ⟨v, uⱼ⟩ - ⟨v, u₁⟩(0) - ⟨v, u₂⟩(0) - ... - ⟨v, uⱼ⟩(1) - ... - ⟨v, uₘ⟩(0)
= ⟨v, uⱼ⟩ - ⟨v, uⱼ⟩ = 0

⟹ w is orthogonal to each of u₁, u₂, ..., uₘ.

Hence v - Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ is orthogonal to each of u₁, u₂, ..., uₘ.

14.8.5 Corollary: If S = {u₁, u₂, ..., uₘ} is an orthonormal set in an inner product space V(F) and v ∈ V, then w = v - Σᵢ₌₁ᵐ ⟨v, uᵢ⟩uᵢ is orthogonal to each vector of L(S).

Proof: From the above theorem the vector w is orthogonal to each of u₁, u₂, ..., uₘ.

So ⟨w, uᵢ⟩ = 0 = ⟨uᵢ, w⟩ for each i where 1 ≤ i ≤ m ......... (1)

Let u ∈ L(S). Then there exist scalars c₁, c₂, ..., cₘ in F such that

u = c₁u₁ + c₂u₂ + ... + cₘuₘ

Now ⟨u, w⟩ = ⟨c₁u₁ + c₂u₂ + ... + cₘuₘ, w⟩
= c₁⟨u₁, w⟩ + c₂⟨u₂, w⟩ + ... + cₘ⟨uₘ, w⟩
= c₁(0) + c₂(0) + ... + cₘ(0) = 0, using (1).

So u is orthogonal to w, and hence w is orthogonal to u. Hence w is orthogonal to every vector of L(S).

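Theorem 14.8.4 is the key step behind the Gram-Schmidt process of section 14.11: subtracting the projections of v onto an orthonormal set leaves a residual orthogonal to every vector of the set. A numerical sketch (the orthonormal set and the vector below are illustrative choices of ours):

```python
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

# an orthonormal set in R^3 (two of the standard basis vectors)
S = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
v = (2.0, -3.0, 5.0)

# w = v - sum_i <v, u_i> u_i
proj = [sum(inner(v, u) * u[k] for u in S) for k in range(3)]
w = tuple(v[k] - proj[k] for k in range(3))

print(w)                                  # (0.0, 0.0, 5.0)
print([inner(w, u) for u in S])           # [0.0, 0.0]
```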
14.9 Worked Out Examples:

W.E.10: If S = {u₁, u₂, ..., uₙ} is an orthogonal set of an inner product space V(F), then prove that {a₁u₁, a₂u₂, ..., aₙuₙ} is also an orthogonal set for any choice of non-zero scalars a₁, a₂, ..., aₙ ∈ F.

Solution: S = {u₁, u₂, ..., uₙ} is an orthogonal set

⟹ ⟨uᵢ, uⱼ⟩ = 0 ..... (1) for all uᵢ, uⱼ ∈ S with i ≠ j.

Consider aᵢuᵢ, aⱼuⱼ ∈ S₁ = {a₁u₁, a₂u₂, ..., aₙuₙ} with i ≠ j. Then

⟨aᵢuᵢ, aⱼuⱼ⟩ = aᵢ āⱼ ⟨uᵢ, uⱼ⟩ = aᵢ āⱼ (0) = 0, using (1),

where āⱼ denotes the conjugate of aⱼ. So ⟨aᵢuᵢ, aⱼuⱼ⟩ = 0 for all aᵢuᵢ, aⱼuⱼ ∈ S₁ with i ≠ j.

Hence S₁ = {a₁u₁, a₂u₂, ..., aₙuₙ} is also an orthogonal set.

 1 1   1 1 1   1 1 2  
W.E.11: If S   , ,0, , , , , ,   is an orthonormal subset of the
 2 2   3 3 3   6 6 6  
inner product space R 3 ( R ) express the vector (2,1, 3) as a linear combination of the basis vectors
of S.

 1 1   1 1 1   1 1 2 
Solution: Let u1   , , 0  ; u2   , ,  ; u3   , , 
 2 2   3 3 3  6 6 6

Then S  u1 , u2 , u3  and let v  (2,1, 3)

Let v  c1u1  c2u2  c3u3 where c1 , c2 , c3  R by 14.8.3.

 1   1  3
c1  v, u1  2    1   3(0) 
 2  2 2

 1   1   1  4
c2  v, u2  2    1   3 
 3  3  3 3

 1   1   2  5
c3  v, u3  2    1   3 
 6  6  6 6

3  1 1  4  1 1 1  5  1 1 2 
So v  (2,1,3)   , ,0   , ,   , , 
2 2 2  3 3 3 3 6 6 6 6

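The rule of 14.8.3 — in an orthonormal basis each coefficient is just ⟨v, uᵢ⟩ — can be sketched numerically. The orthonormal basis below is an illustrative one of ours, not taken from the worked example:

```python
import math

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

r = 1 / math.sqrt(2)
B = [(r, r, 0.0), (r, -r, 0.0), (0.0, 0.0, 1.0)]   # orthonormal basis of R^3
v = (2.0, 1.0, 3.0)

coeffs = [inner(v, u) for u in B]                   # c_i = <v, u_i>
rec = [sum(c * u[k] for c, u in zip(coeffs, B)) for k in range(3)]
print(coeffs)       # c_1 = 3/sqrt(2), c_2 = 1/sqrt(2), c_3 = 3
print(rec)          # recovers (2, 1, 3) up to rounding
```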
W.E.12 : Let V(C) be the inner product space of continuous complex valued functions on [0, 2π] with inner product

⟨f, g⟩ = (1/2π) ∫₀²π f(t) conj(g(t)) dt.

Prove that S = {fₙ(t) = e^{int} : n ∈ Z} is an orthonormal subset of V.

Solution: ⟨fₙ, fₙ⟩ = (1/2π) ∫₀²π e^{int} conj(e^{int}) dt
= (1/2π) ∫₀²π e^{int} e^{-int} dt
= (1/2π) ∫₀²π 1 dt = (1/2π)[t]₀²π.

So ⟨fₙ, fₙ⟩ = 1.

Let m ≠ n. Then

⟨fₘ, fₙ⟩ = (1/2π) ∫₀²π e^{imt} conj(e^{int}) dt = (1/2π) ∫₀²π e^{imt} e^{-int} dt
= (1/2π) ∫₀²π e^{i(m-n)t} dt
= (1/2π) ∫₀²π [cos(m - n)t + i sin(m - n)t] dt
= (1/2π) [ (1/(m - n)) sin(m - n)t - (i/(m - n)) cos(m - n)t ]₀²π
= (1/2π)(0) = 0.

Thus ⟨fₙ, fₙ⟩ = 1 for all n ∈ Z, and ⟨fₘ, fₙ⟩ = 0 if m ≠ n, m, n ∈ Z.

As the required two conditions are satisfied, S = {fₙ(t) = e^{int} : n ∈ Z} is an orthonormal subset of V.

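The two integrals of W.E.12 can be approximated with a Riemann sum, which makes the orthonormality of the exponentials concrete (a numerical sketch; the function `ip` and the step count are our choices):

```python
import cmath

def ip(m, n, steps=5000):
    """Approximate (1/(2 pi)) * integral_0^{2 pi} e^{imt} conj(e^{int}) dt
    by a left Riemann sum, as a numerical check of W.E.12."""
    total = 0.0 + 0.0j
    h = 2 * cmath.pi / steps
    for k in range(steps):
        t = k * h
        total += cmath.exp(1j * m * t) * cmath.exp(1j * n * t).conjugate()
    return total * h / (2 * cmath.pi)

print(abs(ip(3, 3)))        # ~1  (norm condition: <f_n, f_n> = 1)
print(abs(ip(3, 5)))        # ~0  (orthogonality: <f_m, f_n> = 0, m != n)
```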
W.E.13 : Find k so that the following pair is orthogonal: f(t) = t + k, g(t) = t², where

⟨f, g⟩ = ∫₀¹ f(t)g(t) dt

Solution: We first find ⟨f, g⟩.

⟨f, g⟩ = ∫₀¹ (t + k)t² dt = ∫₀¹ (t³ + kt²) dt = [t⁴/4 + kt³/3]₀¹ = 1/4 + k/3

As f, g are orthogonal vectors, ⟨f, g⟩ = 0.

So 1/4 + k/3 = 0 ⟹ 3 + 4k = 0

So k = -3/4

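Since the integral in W.E.13 evaluates to the exact rational 1/4 + k/3, the value k = -3/4 can be checked with exact arithmetic (a sketch using the standard library's `fractions`):

```python
from fractions import Fraction

def ip(k):
    """<f, g> = integral_0^1 (t + k) t^2 dt = 1/4 + k/3, as in W.E.13."""
    return Fraction(1, 4) + Fraction(k) / 3

k = Fraction(-3, 4)
print(ip(k))            # 0: f(t) = t - 3/4 is orthogonal to g(t) = t^2
print(ip(Fraction(1)))  # 7/12: any other k fails
```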
14.10 Exercise:

1. Find a vector of unit length which is orthogonal to u = (2, 1, 6) of V₃(R) with respect to the standard inner product.

Ans: (2/3, 2/3, -1/3)

2. Find two mutually orthogonal vectors, each of which is orthogonal to the vector u = (4, 2, 3) of V₃(R) with respect to the standard inner product.

Ans: (3, -3, -2); (5, 17, -18)

3. Normalize the following vectors in R³:

(i) u = (2, 3, -1)   Ans: û = (2/√14, 3/√14, -1/√14)

(ii) v = (1/2, 1/3, 1/4)   Ans: v̂ = (6/√61, 4/√61, 3/√61)

4. Find a unit vector orthogonal to u₁ = (1, 2, 1) and u₂ = (3, 1, 0) in R³ with the standard inner product.

Ans: (1/√35)(1, -3, 5)

5. Let V be the vector space over R of all continuous real valued functions defined on [0, 1] with inner product defined by ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt. For each positive integer n, define

fₙ(t) = √2 cos(2πnt), gₙ(t) = √2 sin(2πnt). Then show that S = {1, f₁, g₁, f₂, g₂, ...} is an orthonormal set.

6. Let S consist of the following vectors in R³: u₁ = (1, 1, 1), u₂ = (1, 2, -3), u₃ = (5, -4, -1).

Then i) show that S is orthogonal and S is a basis of R³; ii) write v = (1, 5, -7) as a linear combination of u₁, u₂, u₃.

Ans: v = -(1/3)u₁ + (16/7)u₂ - (4/21)u₃

7 (a). Find the value of k so that the pair of vectors u = (1, 2, k, 3) and v = (3, k, 7, -5) are orthogonal in R⁴.

Ans: k = 4/3

(b) Find m so that (m, 3, 4), (m, m, -1) may be orthogonal vectors in R³.

Ans: m = -4, 1

8. If {(1, 0, 1, 0), (0, 1, 0, 1), (-1, 0, 1, 0)} is an orthogonal subset of the inner product space R⁴(R), obtain an orthonormal set by normalizing.

Ans: (1/√2)(1, 0, 1, 0), (1/√2)(0, 1, 0, 1), (1/√2)(-1, 0, 1, 0)

9. If u, v are orthonormal vectors in V(F), prove that d(u, v) = √2.

10. Two vectors u, v in a unitary space V(C) are orthogonal if and only if

‖au + bv‖² = |a|²‖u‖² + |b|²‖v‖² for all scalars a, b.

11. Show that {(1/√3, 1/√3, 1/√3), (0, 1/√2, -1/√2), (2/√6, -1/√6, -1/√6)} is an orthonormal set in R³.

14.11 Orthonormal Basis:

Definition: A basis of an inner product space that consists of mutually orthogonal unit vectors is called an orthonormal basis.

Ex: i) The basis S = {(1, 0), (0, 1)} of the inner product space R²(R) is also orthonormal. So S is an orthonormal basis of R².

ii) The set S = {(1/√5, 2/√5), (2/√5, -1/√5)} is an orthonormal basis of R²(R).

iii) The set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis of the inner product space R³(R), which is also orthonormal. So S is an orthonormal basis of R³(R).

iv) The standard ordered basis for the inner product space Vₙ(R) is also orthonormal. So it is an orthonormal basis.

14.11.1 Finite dimensional inner product space: Definition:
A finite dimensional vector space in which an inner product is defined is called a finite dimensional inner product space.

We now establish that every finite dimensional inner product space possesses an orthonormal basis. If S is a basis of the finite dimensional inner product space V(F), we construct an orthonormal set S₁ from S such that L(S) = L(S₁) = V.

14.11.2 Gram - Schmidt Orthogonalisation Process:

Theorem: Show that every finite dimensional inner product space has an orthonormal basis.

Proof: Let V(F) be an n-dimensional inner product space and let B = {u₁, u₂, ..., uₙ} be a basis of V(F). We will now construct an orthonormal set in V(F) with the help of the elements of B.

As B is a basis of V(F), each uᵢ (i = 1, 2, ..., n) ∈ B is a non-zero vector. Now u₁ ≠ O ⟹ ‖u₁‖ ≠ 0.

Let v₁ = u₁/‖u₁‖ (say) ≠ O. This belongs to V(F), and

⟨v₁, v₁⟩ = ⟨u₁/‖u₁‖, u₁/‖u₁‖⟩ = ⟨u₁, u₁⟩/‖u₁‖² = 1 (since ‖u₁‖ is real).

So the set {v₁} forms an orthonormal set in V(F), and v₁ is in the linear span of u₁.

Now we extend the above set by putting w₂ = u₂ - ⟨u₂, v₁⟩v₁ ∈ V(F). Evidently w₂ ≠ O; otherwise w₂ = O would imply u₂ = ⟨u₂, v₁⟩v₁, i.e. u₂ is a scalar multiple of v₁, i.e. u₂ is a scalar multiple of u₁, so that u₁, u₂ are linearly dependent, which is not possible as they are elements of the basis.

Now let v₂ = w₂/‖w₂‖ (say) (≠ O) ∈ V(F). Evidently ‖v₂‖ = 1. Now

⟨v₂, v₁⟩ = ⟨w₂/‖w₂‖, v₁⟩ = (1/‖w₂‖)⟨w₂, v₁⟩
= (1/‖w₂‖)⟨u₂ - ⟨u₂, v₁⟩v₁, v₁⟩
= (1/‖w₂‖)[⟨u₂, v₁⟩ - ⟨u₂, v₁⟩⟨v₁, v₁⟩]
= (1/‖w₂‖)[⟨u₂, v₁⟩ - ⟨u₂, v₁⟩], since ⟨v₁, v₁⟩ = 1
= 0.

As ⟨v₂, v₁⟩ = 0, the vectors v₁, v₂ are orthogonal to each other and have unit norms, so the set {v₁, v₂} is an orthonormal set consisting of the distinct vectors v₁ and v₂. Also, from w₂ = u₂ - ⟨u₂, v₁⟩v₁ and v₂ = w₂/‖w₂‖,

v₂ = u₂/‖w₂‖ - (⟨u₂, v₁⟩/‖w₂‖)(u₁/‖u₁‖),

so v₂ is a linear combination of u₁ and u₂, i.e. u₁, u₂ generate v₂; furthermore u₁ generates v₁.

Now we extend the set {v₁, v₂} by putting w₃ = u₃ - ⟨u₃, v₁⟩v₁ - ⟨u₃, v₂⟩v₂ ∈ V(F), with

v₃ = w₃/‖w₃‖ (≠ O) ∈ V(F).

Here again ⟨v₃, v₁⟩ = 0, ⟨v₃, v₂⟩ = 0 and ⟨v₃, v₃⟩ = 1.

This shows that the set {v₁, v₂, v₃} is an orthonormal set of distinct vectors v₁, v₂, v₃. Also v₃ is a linear combination of u₁, u₂, u₃, i.e. u₁, u₂, u₃ generate v₃. Thus we have constructed an orthonormal set {v₁, v₂, v₃} such that v₁, v₂, v₃ are distinct vectors and vⱼ (j = 1, 2, 3) is a linear combination of u₁, u₂, ..., uⱼ. Suppose that, in this way, we have constructed an orthonormal set {v₁, v₂, ..., vₖ} (k < n) of k distinct vectors such that vⱼ (j = 1, 2, ..., k) is a linear combination of u₁, u₂, ..., uⱼ.

To prove the result by induction, we consider the vector

wₖ₊₁ = uₖ₊₁ - ⟨uₖ₊₁, v₁⟩v₁ - ⟨uₖ₊₁, v₂⟩v₂ - ... - ⟨uₖ₊₁, vₖ⟩vₖ ......... (1)

Evidently wₖ₊₁ is orthogonal to each of the vectors v₁, v₂, ..., vₖ, and wₖ₊₁ ≠ O; otherwise wₖ₊₁ = O would mean that uₖ₊₁ is a linear combination of v₁, v₂, ..., vₖ, and since by assumption each vⱼ is a linear combination of u₁, u₂, ..., uⱼ, we would infer that uₖ₊₁ is a linear combination of u₁, u₂, ..., uₖ, which is impossible as {u₁, u₂, ..., uₖ, uₖ₊₁} is a linearly independent set of vectors. Now we write

vₖ₊₁ = wₖ₊₁/‖wₖ₊₁‖ ⟹ ‖vₖ₊₁‖ = 1,

and also vₖ₊₁ is orthogonal to each of the vectors v₁, v₂, ..., vₖ. Here vₖ₊₁ ≠ vⱼ (j = 1, 2, ..., k), because vₖ₊₁ = vⱼ for some j would imply that uₖ₊₁ is a linear combination of u₁, u₂, ..., uₖ, which is impossible. Also from the above it is clear that vₖ₊₁ is a linear combination of u₁, u₂, ..., uₖ₊₁. Hence we have constructed an orthonormal set {v₁, v₂, ..., vₖ, vₖ₊₁} of k + 1 distinct vectors such that vⱼ (j = 1, 2, ..., k + 1) is a linear combination of u₁, u₂, ..., uⱼ. Thus by induction we can construct an orthonormal set {v₁, v₂, ..., vₙ} of n distinct vectors such that vⱼ (j = 1, 2, ..., n) is a linear combination of u₁, u₂, ..., uⱼ.

As we know that an orthonormal set is always linearly independent, it follows that {v₁, v₂, ..., vₙ} is a linearly independent set, and consequently it is a basis of V(F), as it contains a number of vectors equal to the dimension of V(F). Furthermore this basis is a complete orthonormal set, as the maximum number of vectors in an orthonormal set in V can be n.

This method of converting a basis of V(F) into a complete orthonormal set is called the Gram - Schmidt orthogonalisation process.
14.11.3 Working procedure to apply the Gram - Schmidt orthogonalisation process to numerical problems:

Suppose B = {u₁, u₂, ..., uₙ} is a given basis of a finite dimensional inner product space V. Let {v₁, v₂, ..., vₙ} be an orthonormal basis for V, which we are required to construct from the basis B. The vectors v₁, v₂, ..., vₙ are obtained in the following way:

Take v₁ = u₁/‖u₁‖

v₂ = w₂/‖w₂‖, where w₂ = u₂ - ⟨u₂, v₁⟩v₁

v₃ = w₃/‖w₃‖, where w₃ = u₃ - ⟨u₃, v₁⟩v₁ - ⟨u₃, v₂⟩v₂

...

vₙ = wₙ/‖wₙ‖, where wₙ = uₙ - ⟨uₙ, v₁⟩v₁ - ⟨uₙ, v₂⟩v₂ - ... - ⟨uₙ, vₙ₋₁⟩vₙ₋₁

Note: {u₁, w₂, w₃, ..., wₙ} is an orthogonal basis.

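The working procedure above translates directly into code. A compact Python sketch of the Gram-Schmidt process for R^n with the standard inner product (the function names are ours), run on the data of the next worked example:

```python
import math

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(basis):
    """Turn a basis of R^n into an orthonormal basis: subtract the
    projections onto the earlier v's (w_k = u_k - sum <u_k, v_i> v_i),
    then normalise w_k to get v_k."""
    vs = []
    for u in basis:
        w = list(u)
        for v in vs:
            c = inner(u, v)
            w = [wi - c * vi for wi, vi in zip(w, v)]
        norm = math.sqrt(inner(w, w))   # nonzero when the input is a basis
        vs.append([wi / norm for wi in w])
    return vs

# the vectors of W.E.14: u1 = (1, 0, 1), u2 = (1, 0, -1), u3 = (0, 3, 4)
B = gram_schmidt([(1, 0, 1), (1, 0, -1), (0, 3, 4)])
for v in B:
    print([round(x, 6) for x in v])
# v1, v2 have entries +-1/sqrt(2); v3 comes out as (0, 1, 0)
```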
Worked out Examples:

W.E. 14: Apply the Gram - Schmidt process to the vectors u₁ = (1, 0, 1); u₂ = (1, 0, -1); u₃ = (0, 3, 4) to obtain an orthonormal basis for V₃(R) with the standard inner product.

Solution: u₁ = (1, 0, 1), so ‖u₁‖ = √(1² + 0² + 1²) = √2

v₁ = u₁/‖u₁‖ = (1/√2)(1, 0, 1) = (1/√2, 0, 1/√2)

u₂ = (1, 0, -1), so ⟨u₂, v₁⟩ = (1/√2)⟨(1, 0, -1), (1, 0, 1)⟩ = (1/√2)[1(1) + 0(0) + (-1)(1)] = 0

w₂ = u₂ - ⟨u₂, v₁⟩v₁ = (1, 0, -1) - 0 · (1/√2)(1, 0, 1) = (1, 0, -1)

‖w₂‖ = √(1² + 0² + (-1)²) = √2

v₂ = w₂/‖w₂‖ = (1/√2)(1, 0, -1) = (1/√2, 0, -1/√2)

u₃ = (0, 3, 4), so

⟨u₃, v₁⟩ = (1/√2)⟨(0, 3, 4), (1, 0, 1)⟩ = (1/√2)[0(1) + 3(0) + 4(1)] = 4/√2 = 2√2

⟨u₃, v₂⟩ = (1/√2)⟨(0, 3, 4), (1, 0, -1)⟩ = (1/√2)[0(1) + 3(0) + 4(-1)] = -4/√2 = -2√2

Now w₃ = u₃ - ⟨u₃, v₁⟩v₁ - ⟨u₃, v₂⟩v₂
= (0, 3, 4) - 2√2 · (1/√2)(1, 0, 1) + 2√2 · (1/√2)(1, 0, -1)
= (0, 3, 4) - (2, 0, 2) + (2, 0, -2) = (0, 3, 0)

‖w₃‖ = √(0² + 3² + 0²) = 3

v₃ = w₃/‖w₃‖ = (1/3)(0, 3, 0) = (0, 1, 0)

The required orthonormal basis is {v₁, v₂, v₃} =

{(1/√2, 0, 1/√2), (1/√2, 0, -1/√2), (0, 1, 0)}

W.E. 15: Apply the Gram - Schmidt process to the vectors (2, 0, 1), (3, -1, 5), (0, 4, 2) to obtain an orthonormal basis for V₃(R) with respect to the standard inner product.

Solution: Let {u₁, u₂, u₃} be the given basis of the finite dimensional vector space V₃(R), where

u₁ = (2, 0, 1), u₂ = (3, -1, 5), u₃ = (0, 4, 2)

Now ‖u₁‖ = √(2² + 0² + 1²) = √5

v₁ = u₁/‖u₁‖ = (1/√5)(2, 0, 1) = (2/√5, 0, 1/√5)

u₂ = (3, -1, 5):

⟨u₂, v₁⟩ = (1/√5)⟨(3, -1, 5), (2, 0, 1)⟩ = (1/√5)[3(2) + (-1)(0) + 5(1)] = 11/√5

w₂ = u₂ - ⟨u₂, v₁⟩v₁, so w₂ = (3, -1, 5) - (11/√5)·(1/√5)(2, 0, 1)

= (3, -1, 5) - (11/5)(2, 0, 1) = (3 - 22/5, -1 - 0, 5 - 11/5)

So w₂ = (-7/5, -1, 14/5) = (1/5)(-7, -5, 14)

‖w₂‖ = (1/5)√((-7)² + (-5)² + 14²) = (1/5)√(49 + 25 + 196)

So ‖w₂‖ = (1/5)√270

v₂ = w₂/‖w₂‖ = (5/√270)·(1/5)(-7, -5, 14) = (1/√270)(-7, -5, 14)

u₃ = (0, 4, 2):

⟨u₃, v₁⟩ = (1/√5)⟨(0, 4, 2), (2, 0, 1)⟩ = (1/√5)[0(2) + 4(0) + 2(1)] = 2/√5

⟨u₃, v₂⟩ = (1/√270)⟨(0, 4, 2), (-7, -5, 14)⟩ = (1/√270)[0(-7) + 4(-5) + 2(14)] = (1/√270)(-20 + 28) = 8/√270

w₃ = u₃ - ⟨u₃, v₁⟩v₁ - ⟨u₃, v₂⟩v₂

= (0, 4, 2) - (2/√5)·(1/√5)(2, 0, 1) - (8/√270)·(1/√270)(-7, -5, 14)

= (0, 4, 2) - (2/5)(2, 0, 1) - (8/270)(-7, -5, 14)

= (0 - 4/5 + 56/270, 4 - 0 + 40/270, 2 - 2/5 - 112/270)

= ((-216 + 56)/270, (1080 + 40)/270, (432 - 112)/270)

= (-160/270, 1120/270, 320/270) = (-16/27, 112/27, 32/27)

So w₃ = (16/27)(-1, 7, 2), and ‖w₃‖ = (16/27)√((-1)² + 7² + 2²) = (16/27)√54

v₃ = w₃/‖w₃‖ = (27/(16√54))·(16/27)(-1, 7, 2)

So v₃ = (1/√54)(-1, 7, 2) = (1/(3√6))(-1, 7, 2)

Hence the required orthonormal basis is {v₁, v₂, v₃}, where v₁ = (1/√5)(2, 0, 1),

v₂ = (1/√270)(-7, -5, 14) and v₃ = (1/(3√6))(-1, 7, 2)

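The fractions in W.E.15 are easy to mis-copy, so it is worth checking numerically that the three directions obtained there, w₁ = (2, 0, 1), w₂ = (-7, -5, 14), w₃ = (-1, 7, 2), are mutually orthogonal and normalise to unit vectors (a sketch, with signs as in our reading of the printed computation):

```python
import math

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

w1 = (2, 0, 1)
w2 = (-7, -5, 14)
w3 = (-1, 7, 2)

print(inner(w1, w2), inner(w2, w3), inner(w3, w1))   # 0 0 0

# dividing by the norms sqrt(5), sqrt(270), sqrt(54) gives unit vectors
for w in (w1, w2, w3):
    n = math.sqrt(inner(w, w))
    v = tuple(x / n for x in w)
    print(round(inner(v, v), 9))                      # 1.0 each time
```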
14.12 Fourier Coefficients :


Definition: Let B be an orthonormal subset (possibly infinite) of an inner product space V and let v ∈ V. We define the Fourier coefficients of v relative to B to be the scalars ⟨v, u⟩, where u ∈ B.

Worked out Examples:

W.E. 16: If S = {(1, 1, 1), (0, 1, 1), (0, 0, 1)} is a subset of the vector space R³ and V = R³, obtain an orthonormal basis B for span(S) and find the Fourier coefficients of the vector (1, 0, 1) relative to B.

Solution: Let u1 = (1, 1, 1), u2 = (0, 1, 1), u3 = (0, 0, 1)

‖u1‖ = √(1² + 1² + 1²) = √3, so v1 = u1/‖u1‖ = (1/√3)(1, 1, 1)

⟨u2, v1⟩ = ⟨(0, 1, 1), (1/√3)(1, 1, 1)⟩ = (1/√3)[0(1) + 1(1) + 1(1)] = 2/√3

w2 = u2 − ⟨u2, v1⟩ v1 = (0, 1, 1) − (2/√3)·(1/√3)(1, 1, 1)

= (0, 1, 1) − (2/3)(1, 1, 1) = (0 − 2/3, 1 − 2/3, 1 − 2/3)

So w2 = (−2/3, 1/3, 1/3) = (1/3)(−2, 1, 1)

‖w2‖ = (1/3)√((−2)² + 1² + 1²) = √6/3

v2 = w2/‖w2‖ = (1/3)(−2, 1, 1)·(3/√6) = (1/√6)(−2, 1, 1)

⟨u3, v1⟩ = ⟨(0, 0, 1), (1/√3)(1, 1, 1)⟩ = (1/√3)[0(1) + 0(1) + 1(1)] = 1/√3

⟨u3, v2⟩ = ⟨(0, 0, 1), (1/√6)(−2, 1, 1)⟩ = (1/√6)[0(−2) + 0(1) + 1(1)] = 1/√6

w3 = u3 − ⟨u3, v1⟩ v1 − ⟨u3, v2⟩ v2

= (0, 0, 1) − (1/√3)·(1/√3)(1, 1, 1) − (1/√6)·(1/√6)(−2, 1, 1)

= (0, 0, 1) − (1/3)(1, 1, 1) − (1/6)(−2, 1, 1)

= (0 − 1/3 + 2/6, 0 − 1/3 − 1/6, 1 − 1/3 − 1/6) = (0, −1/2, 1/2)

So w3 = (1/2)(0, −1, 1), ‖w3‖ = (1/2)√(0 + 1 + 1) = 1/√2

v3 = w3/‖w3‖ = √2·(1/2)(0, −1, 1) = (1/√2)(0, −1, 1)

So the orthonormal basis of span(S) is B = {v1, v2, v3}, where

v1 = (1/√3)(1, 1, 1), v2 = (1/√6)(−2, 1, 1), v3 = (1/√2)(0, −1, 1)

To find the Fourier coefficients of the vector v = (1, 0, 1) relative to B = {v1, v2, v3}:

⟨v, v1⟩ = ⟨(1, 0, 1), (1/√3)(1, 1, 1)⟩ = (1/√3)[1(1) + 0(1) + 1(1)] = 2/√3

⟨v, v2⟩ = ⟨(1, 0, 1), (1/√6)(−2, 1, 1)⟩ = (1/√6)[1(−2) + 0(1) + 1(1)] = −1/√6

⟨v, v3⟩ = ⟨(1, 0, 1), (1/√2)(0, −1, 1)⟩ = (1/√2)[1(0) + 0(−1) + 1(1)] = 1/√2

Hence the Fourier coefficients relative to the set B are 2/√3, −1/√6, 1/√2, i.e. 2√3/3, −√6/6, √2/2
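The Fourier coefficients above are just ordinary dot products against the orthonormal basis, so they are easy to check in a few lines (a sketch; the names `dot`, `coeffs` are mine):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# orthonormal basis B of span(S) obtained in W.E. 16
v1 = [c / math.sqrt(3) for c in (1, 1, 1)]
v2 = [c / math.sqrt(6) for c in (-2, 1, 1)]
v3 = [c / math.sqrt(2) for c in (0, -1, 1)]

v = (1, 0, 1)
coeffs = [dot(v, u) for u in (v1, v2, v3)]   # Fourier coefficients <v, u_i>
# coeffs = [2/sqrt(3), -1/sqrt(6), 1/sqrt(2)]
```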

W.E. 17: If V = L(S) where S = {(1, i, 0), (1 − i, 2, 4i)}, find the orthonormal basis B of V and compute the Fourier coefficients of the vector (3 + i, 4i, −4) relative to B.

Solution: Let u1 = (1, i, 0), u2 = (1 − i, 2, 4i)

Then ‖u1‖² = ⟨u1, u1⟩ = ⟨(1, i, 0), (1, i, 0)⟩

= 1(1̄) + i(ī) + 0(0̄), where ā is the conjugate of a

= 1(1) + i(−i) + 0(0) = 1 + 1 = 2

‖u1‖ = √2, so v1 = u1/‖u1‖ = (1/√2)(1, i, 0)

⟨u2, v1⟩ = ⟨(1 − i, 2, 4i), (1/√2)(1, i, 0)⟩

= (1/√2)[(1 − i)(1) + 2(−i) + 4i(0)] = (1/√2)(1 − 3i)

w2 = u2 − ⟨u2, v1⟩ v1 = (1 − i, 2, 4i) − (1/√2)(1 − 3i)·(1/√2)(1, i, 0)

= (1 − i, 2, 4i) − (1/2)(1 − 3i)(1, i, 0)

= ((1 − i) − (1 − 3i)/2, 2 − (3 + i)/2, 4i − 0), since (1 − 3i)i = 3 + i

So w2 = ((1 + i)/2, (1 − i)/2, 4i) = (1/2)(1 + i, 1 − i, 8i)

‖w2‖² = ⟨w2, w2⟩ = ⟨(1/2)(1 + i, 1 − i, 8i), (1/2)(1 + i, 1 − i, 8i)⟩

= (1/4)[(1 + i)(1 − i) + (1 − i)(1 + i) + 8i(−8i)], where ā is the conjugate of a

= (1/4)[2 + 2 + 64] = 17

‖w2‖ = √17

Hence v2 = w2/‖w2‖ = (1/(2√17))(1 + i, 1 − i, 8i)

So v2 = ((1 + i)/(2√17), (1 − i)/(2√17), 4i/√17)

So the orthonormal basis is B = {v1, v2},

where v1 = (1/√2, i/√2, 0) and v2 = ((1 + i)/(2√17), (1 − i)/(2√17), 4i/√17)

To find the Fourier coefficients of v = (3 + i, 4i, −4):

Now ⟨v, v1⟩ = ⟨(3 + i, 4i, −4), (1/√2)(1, i, 0)⟩

= (1/√2)[(3 + i)(1) + 4i(−i) + (−4)(0)]   (since ī = −i)

⟨v, v1⟩ = (1/√2)(3 + i + 4) = (1/√2)(7 + i)

⟨v, v2⟩ = ⟨(3 + i, 4i, −4), (1/(2√17))(1 + i, 1 − i, 8i)⟩

= (1/(2√17))[(3 + i)(1 − i) + 4i(1 + i) + (−4)(−8i)]

= (1/(2√17))[(4 − 2i) + (−4 + 4i) + 32i]

= (1/(2√17))(34i) = √17 i

Hence the Fourier coefficients of v relative to B are ⟨v, v1⟩, ⟨v, v2⟩, i.e. (1/√2)(7 + i), √17 i
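Python's built-in complex numbers make the complex inner product ⟨x, y⟩ = Σ xᵢ ȳᵢ easy to check. The sketch below (function name `herm` is mine) redoes the two steps of W.E. 17 numerically:

```python
import math

def herm(x, y):
    # standard inner product on C^3: <x, y> = sum of x_i * conjugate(y_i)
    return sum(a * b.conjugate() for a, b in zip(x, y))

u1 = (1, 1j, 0)
u2 = (1 - 1j, 2, 4j)

# normalise u1, then remove its component from u2 and normalise the residue
v1 = tuple(a / math.sqrt(herm(u1, u1).real) for a in u1)
c = herm(u2, v1)
w2 = tuple(a - c * b for a, b in zip(u2, v1))
v2 = tuple(a / math.sqrt(herm(w2, w2).real) for a in w2)

# Fourier coefficients of v relative to B = {v1, v2}
v = (3 + 1j, 4j, -4)
f1, f2 = herm(v, v1), herm(v, v2)
# f1 = (7 + i)/sqrt(2), f2 = sqrt(17) i
```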

14.13 Parseval's Identity:

14.13.1 Theorem (Parseval's Identity): If B = {u1, u2, ..., un} is an orthonormal basis of a finite dimensional inner product space V(F), then ⟨u, v⟩ = Σᵢ₌₁ⁿ ⟨u, ui⟩⟨ui, v⟩ for all u, v ∈ V.

Proof: B = {u1, u2, ..., un} is a basis of V. Let u, v ∈ V. Then there exist scalars a1, a2, ..., an, b1, b2, ..., bn ∈ F so that

u = a1u1 + a2u2 + ... + anun = Σᵢ₌₁ⁿ ai ui

v = b1u1 + b2u2 + ... + bnun = Σⱼ₌₁ⁿ bj uj

Now ⟨u, v⟩ = ⟨Σᵢ ai ui, Σⱼ bj uj⟩

= Σᵢ ⟨ai ui, bi ui⟩, since ⟨ui, uj⟩ = 0 for i ≠ j

= Σᵢ ai b̄i ⟨ui, ui⟩ = Σᵢ ai b̄i ........... (1)

since ⟨ui, uj⟩ = 1 if i = j and 0 if i ≠ j

But ⟨u, ui⟩ = ⟨Σⱼ aj uj, ui⟩ = ai ⟨ui, ui⟩ = ai .......... (2)

and ⟨ui, v⟩ = ⟨ui, Σⱼ bj uj⟩ = b̄i ⟨ui, ui⟩ = b̄i (1) = b̄i .......... (3)

Using (2) and (3) in (1) we get

⟨u, v⟩ = Σᵢ₌₁ⁿ ⟨u, ui⟩⟨ui, v⟩ for all u, v ∈ V

14.13.2 Corollary: If S = {u1, u2, ..., un} is a complete orthonormal set in an inner product space V(F) and if v ∈ V, then Σᵢ₌₁ⁿ |⟨v, ui⟩|² = ‖v‖².

Proof: By Parseval's identity, ⟨u, v⟩ = Σᵢ₌₁ⁿ ⟨u, ui⟩⟨ui, v⟩ for all u, v ∈ V.

So, taking u = v, ⟨v, v⟩ = Σᵢ₌₁ⁿ ⟨v, ui⟩⟨ui, v⟩

= Σᵢ₌₁ⁿ ⟨v, ui⟩ · (conjugate of ⟨v, ui⟩)

Since z z̄ = |z|², we get ‖v‖² = Σᵢ₌₁ⁿ |⟨v, ui⟩|²

Thus Σᵢ₌₁ⁿ |⟨v, ui⟩|² = ‖v‖²

14.13.3 Theorem (Bessel's Inequality): Let V be an inner product space and let S = {u1, u2, ..., um} be an orthonormal subset of V. Prove that for any v ∈ V we have

Σᵢ₌₁ᵐ |⟨v, ui⟩|² ≤ ‖v‖².

Furthermore, the equality holds if and only if v is in the subspace generated by u1, u2, ..., um.

Proof: Consider the vector w = v − Σᵢ₌₁ᵐ ⟨v, ui⟩ ui

Now ‖w‖² = ⟨w, w⟩ = ⟨v − Σᵢ₌₁ᵐ ⟨v, ui⟩ ui, v − Σⱼ₌₁ᵐ ⟨v, uj⟩ uj⟩

On expanding, and remembering that ⟨ui, uj⟩ = 1 if i = j and 0 if i ≠ j, each of the two cross terms and the double sum reduces, on summing with respect to j, to Σᵢ₌₁ᵐ |⟨v, ui⟩|² (note that ⟨ui, v⟩ is the conjugate of ⟨v, ui⟩ and z z̄ = |z|²):

‖w‖² = ‖v‖² − Σᵢ₌₁ᵐ |⟨v, ui⟩|² − Σᵢ₌₁ᵐ |⟨v, ui⟩|² + Σᵢ₌₁ᵐ |⟨v, ui⟩|²

i.e. ‖w‖² = ‖v‖² − Σᵢ₌₁ᵐ |⟨v, ui⟩|² ............ (1)

Now ‖w‖² ≥ 0 ⇒ ‖v‖² − Σᵢ₌₁ᵐ |⟨v, ui⟩|² ≥ 0

⇒ Σᵢ₌₁ᵐ |⟨v, ui⟩|² ≤ ‖v‖²

To show that equality holds if and only if v is in the subspace spanned by u1, u2, ..., um:

Case i) Let the equality hold good, i.e. Σᵢ₌₁ᵐ |⟨v, ui⟩|² = ‖v‖². Then from (1),

‖w‖² = ‖v‖² − Σᵢ₌₁ᵐ |⟨v, ui⟩|² = 0

We get ‖w‖ = 0 ⇒ w = O (the zero vector)

⇒ v − Σᵢ₌₁ᵐ ⟨v, ui⟩ ui = O ⇒ v = Σᵢ₌₁ᵐ ⟨v, ui⟩ ui

Thus if the equality holds good then v is a linear combination of u1, u2, ..., um.

Hence v is in the subspace spanned by u1, u2, ..., um.

Case ii) Converse: let v be in the subspace spanned by u1, u2, ..., um.

So v can be expressed as a linear combination of u1, u2, ..., um, and by theorem 14.8.2 we know that

v = Σᵢ₌₁ᵐ ⟨v, ui⟩ ui ........... (2)

But w = v − Σᵢ₌₁ᵐ ⟨v, ui⟩ ui = O, using (2)

So ‖w‖² = 0 ⇒ ‖v‖² − Σᵢ₌₁ᵐ |⟨v, ui⟩|² = 0

⇒ Σᵢ₌₁ᵐ |⟨v, ui⟩|² = ‖v‖²

Thus the equality holds good.

Hence the theorem.

14.13.4 Corollary: Let u1, u2, ..., um be an orthogonal set of non-zero vectors in an inner product space V. If v is any vector in V, then

Σᵢ₌₁ᵐ ( |⟨v, ui⟩|² / ‖ui‖² ) ≤ ‖v‖²

Proof: Let B = {v1, v2, ..., vm} where vi = ui/‖ui‖ (1 ≤ i ≤ m). Then ‖vi‖ = 1, so the set B is an orthonormal set. Hence by Bessel's inequality we get Σᵢ₌₁ᵐ |⟨v, vi⟩|² ≤ ‖v‖² ....... (1)

Also ⟨v, vi⟩ = ⟨v, ui/‖ui‖⟩ = (1/‖ui‖) ⟨v, ui⟩

So |⟨v, vi⟩|² = |⟨v, ui⟩|² / ‖ui‖² .......... (2)

From (1) and (2) we get

Σᵢ₌₁ᵐ ( |⟨v, ui⟩|² / ‖ui‖² ) ≤ ‖v‖²
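Bessel's inequality is easy to observe numerically with an orthonormal set that is not a full basis; the inequality is then strict for a vector outside the span (an illustrative sketch; names are mine):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# an orthonormal set in R^3 that is NOT a basis (only two vectors)
u1 = [1 / math.sqrt(2), 1 / math.sqrt(2), 0]
u2 = [0, 0, 1]

v = (1, 2, 3)
partial = dot(v, u1) ** 2 + dot(v, u2) ** 2   # sum of |<v, u_i>|^2
norm_sq = dot(v, v)                           # ||v||^2
# Bessel: partial <= norm_sq, with equality iff v lies in span{u1, u2};
# here partial = 13.5 < 14 = norm_sq since v is outside the span
```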

14.13.5 Theorem: If V is a finite dimensional inner product space, and if {u1, u2, ..., um} is an orthonormal set in V such that Σᵢ₌₁ᵐ |⟨v, ui⟩|² = ‖v‖² for every v ∈ V, prove that {u1, u2, ..., um} must be a basis of V.

Proof: Let v be any vector in V. Consider

w = v − Σᵢ₌₁ᵐ ⟨v, ui⟩ ui ............. (1)

As in the proof of Bessel's inequality, we have

‖w‖² = ⟨w, w⟩ = ‖v‖² − Σᵢ₌₁ᵐ |⟨v, ui⟩|² = 0, by the given condition.

⇒ w = O ⇒ v = Σᵢ₌₁ᵐ ⟨v, ui⟩ ui

Thus every vector v in V can be expressed as a linear combination of the vectors in the set S = {u1, u2, ..., um}, i.e. L(S) = V. As S is an orthonormal set, S is linearly independent.

Hence S is a basis of V.

14.14 Orthogonal Complement:

14.14.1 Definition: Let S be a non-empty subset of an inner product space V. We define S⊥ (read "S perp") to be the set of all vectors in V that are orthogonal to every vector in S; i.e.

S⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ S}

S⊥ is called the orthogonal complement of S, and the symbol is usually read as "S perpendicular".

Note i) S⊥ ⊆ V

ii) u ∈ S⊥, v ∈ S ⇒ ⟨u, v⟩ = 0

iii) O (the zero vector) is an element of V, and ⟨O, v⟩ = 0 for every v ∈ S, so O ∈ S⊥. Hence S⊥ ≠ ∅

14.14.2 Theorem: If S is any non-empty subset of an inner product space V(F), then S⊥ is a subspace of V(F).

Proof: By definition S⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ S}

Let u1, u2 ∈ S⊥ and v ∈ S. Then ⟨u1, v⟩ = 0 .......... (1)

and ⟨u2, v⟩ = 0 ........... (2). Now for a, b ∈ F and for each v ∈ S we have

⟨au1 + bu2, v⟩ = ⟨au1, v⟩ + ⟨bu2, v⟩

= a ⟨u1, v⟩ + b ⟨u2, v⟩

= a(0) + b(0), using (1) and (2)

= 0

Thus for u1, u2 ∈ S⊥ and a, b ∈ F,

au1 + bu2 ∈ S⊥, so S⊥ is a subspace of V.

14.14.3 Theorem: If V(F) is an inner product space and O is the zero vector in V, then show that {O}⊥ = V.

Proof: Let v ∈ V. Then ⟨v, O⟩ = 0 by the definition of inner product. So v is orthogonal to O, i.e. any element v of V is also an element of {O}⊥. So V ⊆ {O}⊥ ......... (1). Also {O}⊥ ⊆ V ......... (2). From (1) and (2),

{O}⊥ = V.

14.14.4 Theorem:

If V(F) is an inner product space and O is the zero vector of V, then show that V⊥ = {O}.

Proof: Let u ∈ V⊥. Then ⟨u, v⟩ = 0 for all v ∈ V, by the definition of V⊥. When v = u, then ⟨u, u⟩ = 0,

i.e. ‖u‖² = 0 ⇒ u = O

Thus O is the only vector orthogonal to every vector of V, and hence V⊥ = {O}



14.14.5 Theorem:

If S is a subspace of an inner product space V(F), then show that S ∩ S⊥ = {O}.

Proof: If u ∈ S ∩ S⊥, then u ∈ S and u ∈ S⊥

⇒ u is orthogonal to u

⇒ ⟨u, u⟩ = 0 ⇒ u = O

So S ∩ S⊥ = {O}

14.14.6 Theorem:

If S1, S2 are two subsets of an inner product space V(F), then show that S1 ⊆ S2 ⇒ S2⊥ ⊆ S1⊥.

Proof: Let u ∈ S2⊥ ⇒ u is orthogonal to every vector in S2.

As S1 ⊆ S2, u is orthogonal to every vector in S1.

⇒ u ∈ S1⊥. Thus u ∈ S2⊥ ⇒ u ∈ S1⊥.

So S2⊥ ⊆ S1⊥

14.14.7 Theorem:

If S is a subset of an inner product space V(F), then show that S⊥ = (span S)⊥.

Proof: V is an inner product space over the field F.

S is a subset of V, so S ⊆ span(S).

We know that if S1, S2 are two subsets of V, then S1 ⊆ S2 ⇒ S2⊥ ⊆ S1⊥.

Hence by this, (span S)⊥ ⊆ S⊥ .......... (1)

Let u ∈ S⊥ and v ∈ span(S). Hence there exist scalars a1, a2, ..., an in F such that v = a1w1 + a2w2 + ... + anwn, where w1, w2, ..., wn ∈ S.

As u ∈ S⊥, ⟨u, v⟩ = ⟨u, a1w1 + a2w2 + ... + anwn⟩

So ⟨u, v⟩ = ā1 ⟨u, w1⟩ + ā2 ⟨u, w2⟩ + ... + ān ⟨u, wn⟩

= ā1(0) + ā2(0) + ... + ān(0) = 0

As v ∈ span(S) was arbitrary and ⟨u, v⟩ = 0, we get u ∈ (span S)⊥.

Thus u ∈ S⊥ ⇒ u ∈ (span S)⊥

⇒ S⊥ ⊆ (span S)⊥ ................ (2)

From (1) and (2), S⊥ = (span S)⊥

14.14.8 Theorem: If B = {u1, u2, ..., um} is an orthonormal subset of the inner product space V(F), then for each v ∈ V, w = v − Σⱼ₌₁ᵐ ⟨v, uj⟩ uj is a vector of B⊥.

Proof: We have proved in theorem 14.8.4 that w is orthogonal to each of u1, u2, ..., um. By the definition of orthogonal complement, w ∈ B⊥.

Hence the theorem.
14.14.9 Orthogonal Complement of an Orthogonal Complement:

Definition: If S is a subset of an inner product space V(F), then S⊥ is a subspace of V. We define (S⊥)⊥, written as S⊥⊥ (containing those vectors in V(F) which are orthogonal to each vector of S⊥), by

(S⊥)⊥ = S⊥⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ S⊥}

Note: Obviously (S⊥)⊥ is a subspace of V, since we know that if S is any set of vectors in an inner product space V(F) then S⊥ is a subspace of V.

14.14.10 Theorem:

Show that for any subset S of an inner product space V(F), S ⊆ S⊥⊥.

Proof: V(F) is the given inner product space and S is a subset of V. Then S⊥, S⊥⊥ are subspaces of V.

Let u ∈ S. Then for every v ∈ S⊥ we have ⟨v, u⟩ = 0, and hence ⟨u, v⟩ = 0.

So by definition u ∈ S⊥⊥.

Thus u ∈ S ⇒ u ∈ S⊥⊥

So S ⊆ S⊥⊥.

W.E. 18: If V(F) is an inner product space and S is any subset of V, then show that

i) S⊥ = (L(S))⊥

ii) L(S) ⊆ S⊥⊥

iii) L(S) = S⊥⊥ if V is finite dimensional

iv) S⊥ = S⊥⊥⊥

Solution: V(F) is an inner product space and S is a subset of V.

i) To show S⊥ = (L(S))⊥:

As S is a subset of V, S ⊆ L(S).

Hence (L(S))⊥ ⊆ S⊥ ......... (1)

To show S⊥ ⊆ (L(S))⊥:

Let v ∈ L(S). Then v = Σᵢ₌₁ⁿ ai ui for some ui ∈ S.

For u ∈ S⊥ we have ⟨u, v⟩ = ⟨u, Σᵢ ai ui⟩

= Σᵢ āi ⟨u, ui⟩

= 0, since u is perpendicular to each ui ∈ S.

This implies u is perpendicular to every v ∈ L(S), i.e. u ∈ (L(S))⊥, or S⊥ ⊆ (L(S))⊥ ...... (2)

From (1) and (2), S⊥ = (L(S))⊥

ii) To show that L(S) ⊆ S⊥⊥:

Let u ∈ L(S) and v ∈ S⊥. Then v is orthogonal to every vector of S, or in other words v is orthogonal to every linear combination of a finite number of vectors in S; i.e. v is orthogonal to u.

⇒ u ∈ (S⊥)⊥, i.e. u ∈ S⊥⊥

Thus u ∈ L(S) ⇒ u ∈ S⊥⊥. So L(S) ⊆ S⊥⊥

iii) By (i), S⊥ = (L(S))⊥, so S⊥⊥ = ((L(S))⊥)⊥. When V is finite dimensional, the projection theorem (14.14.17) gives (W⊥)⊥ = W for the subspace W = L(S); hence S⊥⊥ = L(S).

iv) To show S⊥ = S⊥⊥⊥:

We have S ⊆ L(S) and L(S) ⊆ S⊥⊥ from (ii)

So S ⊆ S⊥⊥ ⇒ (S⊥⊥)⊥ ⊆ S⊥

⇒ S⊥⊥⊥ ⊆ S⊥ ........... (1)

Now applying theorem 14.14.10 to the set S⊥, we get S⊥ ⊆ (S⊥)⊥⊥ = S⊥⊥⊥ ........ (2)

From (1) and (2) we get S⊥ = S⊥⊥⊥

14.14.11 Theorem: Let W be a finite dimensional subspace of an inner product space V. Let v ∈ V. Then there exist unique vectors u ∈ W and w ∈ W⊥ such that v = u + w.

Proof: Let B = {u1, u2, ..., un} be an orthonormal basis of W. Then B is linearly independent and L(B) = W. Let u be defined as u = Σᵢ₌₁ⁿ ⟨v, ui⟩ ui and w = v − u.

Now u ∈ W = L(B) and v = u + w; we have to prove that w ∈ W⊥.

As B is an orthonormal basis of the vector space W, by theorem 14.14.8, w = v − Σᵢ₌₁ⁿ ⟨v, ui⟩ ui is a vector of W⊥.

So for v ∈ V, there exist u ∈ W and w ∈ W⊥ such that v = u + w.

To show the uniqueness:

Let, if possible, v = u′ + w′ where u′ ∈ W and w′ ∈ W⊥.

Then u + w = u′ + w′ ⇒ u − u′ = w′ − w

But u ∈ W, u′ ∈ W and W is a subspace ⇒ u − u′ ∈ W

w′ ∈ W⊥ and w ∈ W⊥ ⇒ w′ − w ∈ W⊥, since W⊥ is a subspace. As W ∩ W⊥ = {O}, and

u − u′ = w′ − w ∈ W ∩ W⊥, we get u − u′ = w′ − w = O, so u = u′ and w = w′.

So the representation v = u + w is unique.

Note: For u ∈ W, there exists w ∈ W⊥ such that u + w = v ∈ V

14.14.12 Closest Vector:

If V = R³ and S = {e3} where e3 = (0, 0, 1), then S⊥ is equal to the xy-plane.

Consider the problem in R³ of finding the distance from a point P to a plane W.

If we let v be the vector determined by O and P, we may restate the problem as follows.

[Figure: the plane W through the origin O, the point P with v = OP, the foot Q of the perpendicular from P to W with u = OQ, the perpendicular w = v − u meeting W at a right angle, and another point Q′ of W.]

Determine the vector u in W that is closest to v. The desired distance is clearly ‖v − u‖. We observe from the figure that the vector w = v − u is orthogonal to every vector in W, and so w ∈ W⊥.

For any x ∈ W, there is a point Q′ in the plane W with PQ′ ≥ PQ,

i.e. ‖v − x‖ ≥ ‖v − u‖

The vector u ∈ W is clearly the orthogonal projection of v ∈ V on the plane.

14.14.13 Orthogonal Projection: Let W be a subspace of a finite dimensional inner product space V. For v ∈ V there exist unique vectors u ∈ W, w ∈ W⊥ such that v = u + w.

The vector u ∈ W, that is u = Σᵢ₌₁ⁿ ⟨v, ui⟩ ui where {u1, u2, ..., un} is an orthonormal basis of W, is called the orthogonal projection of v ∈ V on the subspace W.



14.14.14 Theorem: Let S = {v1, v2, ..., vk} be an orthonormal set in an n-dimensional inner product space V. Then show that S can be extended to an orthonormal basis S′ = {v1, v2, ..., vk, vk+1, ..., vn} for V.

Proof: V(F) is an n-dimensional inner product space, so V(F) is an n-dimensional vector space. S = {v1, v2, ..., vk} is an orthonormal subset of V, so it can be extended to a set S1 = {v1, v2, ..., vk, wk+1, wk+2, ..., wn} which forms a basis of the vector space V(F).

By applying the Gram-Schmidt orthogonalisation process to S1, we can obtain an orthonormal basis in which the first k vectors are the vectors in S; the last n − k vectors, obtained after normalising, give S′ = {v1, v2, ..., vk, vk+1, ..., vn}, so that L(S′) = V.

Hence the theorem.

14.14.15 Corollary 1: If S = {v1, v2, ..., vk} is an orthonormal set in an inner product space V and if W = span(S), i.e. W is a subspace of V, then S2 = {vk+1, vk+2, ..., vn} is an orthonormal basis of W⊥.

Proof: In the above theorem we have shown that S can be extended to an orthonormal basis S′ = {v1, v2, ..., vk, vk+1, ..., vn} of V.

So S2 = {vk+1, vk+2, ..., vn} ⊆ S′ (a basis of V).

Hence S2 is linearly independent, being a subset of a basis of V.

We are given L(S) = W, and each vector of S2 is orthogonal to each vector of S, so S2 ⊆ W⊥, whence L(S2) ⊆ W⊥ ........... (1)

We will now show that W⊥ ⊆ L(S2). As L(S′) = V, for any u ∈ V we have u = Σᵢ₌₁ⁿ ⟨u, vi⟩ vi.

Let u ∈ W⊥. Then ⟨u, vi⟩ = 0 for i = 1, 2, ..., k, since S ⊆ W.

So for each u ∈ W⊥,

u = ⟨u, v1⟩ v1 + ⟨u, v2⟩ v2 + ... + ⟨u, vk⟩ vk + Σᵢ₌ₖ₊₁ⁿ ⟨u, vi⟩ vi

= O + O + ... + O + Σᵢ₌ₖ₊₁ⁿ ⟨u, vi⟩ vi

= Σᵢ₌ₖ₊₁ⁿ ⟨u, vi⟩ vi

So u ∈ W⊥ ⇒ u ∈ L(S2), i.e. W⊥ ⊆ L(S2) ........... (2)

From (1) and (2), L(S2) = W⊥.

As S2 is linearly independent and L(S2) = W⊥,

S2 = {vk+1, vk+2, ..., vn} is a basis of W⊥.

14.14.16 Corollary 2: If V is an n-dimensional inner product space and W is any subspace of V, then show that dim(V) = dim(W) + dim(W⊥).

Proof: As V is a finite dimensional inner product space and W is a subspace of V, W is a finite dimensional inner product space. So it has an orthonormal basis S = {v1, v2, ..., vk}, which can be extended to an orthonormal basis S1 = {v1, v2, ..., vk, vk+1, ..., vn}, and by the above Corollary 1,

S2 = {vk+1, vk+2, ..., vn} is an orthonormal basis of W⊥.

So dim V = n = k + (n − k)

= dim(W) + dim(W⊥)

Hence dim V = dim(W) + dim(W⊥).

14.14.17 Projection Theorem:

If W is any subspace of a finite dimensional inner product space V, then show that

i) V = W ⊕ W⊥, where W⊥ is the orthogonal complement of W

ii) (W⊥)⊥ = W

Proof: W, being a subspace of a finite dimensional vector space V(F) of dimension n, is also finite dimensional, say of dimension k.

Thus we can find B1 = {u1, u2, ..., uk}, an orthonormal set in W which is also a basis of W. This can be extended to give B = {u1, u2, ..., uk, uk+1, ..., un}, an orthonormal basis for V(F).

As W is a subspace of V, W⊥ is also a subspace of V, and W ∩ W⊥ = {O} ...... (1), where O is the zero vector.

We will now prove V = W + W⊥.

For v ∈ V, consider the vector w = v − Σᵢ₌₁ᵏ ⟨v, ui⟩ ui .......... (2)

Now ⟨w, uj⟩ = ⟨v − Σᵢ₌₁ᵏ ⟨v, ui⟩ ui, uj⟩, for 1 ≤ j ≤ k

= ⟨v, uj⟩ − Σᵢ₌₁ᵏ ⟨v, ui⟩ ⟨ui, uj⟩

= ⟨v, uj⟩ − ⟨v, uj⟩ = 0,

since ⟨ui, uj⟩ = 1 if i = j and 0 if i ≠ j.

This shows that w is orthogonal to each of the vectors u1, u2, ..., uk, i.e. orthogonal to the subspace W spanned by these vectors, and hence w belongs to W⊥.

Also, the vector Σᵢ₌₁ᵏ ⟨v, ui⟩ ui, being a linear combination of elements of B1, belongs to W.

Hence from (2), for each v ∈ V we have

v = (Σᵢ₌₁ᵏ ⟨v, ui⟩ ui) + w

= an element of W + an element of W⊥.

So V = W + W⊥ .......... (3)

and we have W ∩ W⊥ = {O} ......... (1)

So from (1) and (3), V = W ⊕ W⊥.

ii) To prove that (W⊥)⊥ = W when W is a subspace of an inner product space of finite dimension:

Proof: By definition (W⊥)⊥ = W⊥⊥ = {w ∈ V : ⟨w, v⟩ = 0 for all v ∈ W⊥}

Let u ∈ W. Then ⟨u, v⟩ = 0 for all v ∈ W⊥.

Hence from the definition of W⊥⊥, u ∈ W⊥⊥.

Thus u ∈ W ⇒ u ∈ W⊥⊥, so W ⊆ W⊥⊥.

We have V = W ⊕ W⊥ ........ (1)

So dim V = dim W + dim W⊥ ......... (2)

Putting W⊥ for W in (1) we get

V = W⊥ ⊕ W⊥⊥ ............ (3)

So dim V = dim(W⊥) + dim(W⊥⊥) ........ (4)

From (2) and (4) we get dim W = dim W⊥⊥.

Now W ⊆ W⊥⊥. Hence W is a subspace of W⊥⊥ with dim W = dim W⊥⊥, so W = W⊥⊥.

Thus (W⊥)⊥ = W.

Note: If W is a subspace of any finite dimensional inner product space V(F), then V = W ⊕ W⊥

⇒ dim V = dim W + dim W⊥

⇒ dim W⊥ = dim V − dim W
14.15 Worked Out Examples:

W.E. 19: If W1 and W2 are two subspaces of a finite dimensional inner product space V(F), then show that

i) (W1 + W2)⊥ = W1⊥ ∩ W2⊥

and ii) (W1 ∩ W2)⊥ = W1⊥ + W2⊥

Solution: i) To show (W1 + W2)⊥ = W1⊥ ∩ W2⊥:

We know S1 ⊆ S2 ⇒ S2⊥ ⊆ S1⊥.

Now W1 ⊆ W1 + W2 ⇒ (W1 + W2)⊥ ⊆ W1⊥

and W2 ⊆ W1 + W2 ⇒ (W1 + W2)⊥ ⊆ W2⊥

Hence from the above two, it follows that

(W1 + W2)⊥ ⊆ W1⊥ ∩ W2⊥ ....... (1)

To show that W1⊥ ∩ W2⊥ ⊆ (W1 + W2)⊥:

Let w ∈ W1⊥ ∩ W2⊥ ⇒ w ∈ W1⊥ and w ∈ W2⊥

⇒ ⟨w, u⟩ = 0 for all u ∈ W1

and ⟨w, v⟩ = 0 for all v ∈ W2

Thus ⟨w, u⟩ = 0 and ⟨w, v⟩ = 0 ⇒ ⟨w, u + v⟩ = 0, where u + v is an arbitrary element of W1 + W2.

So w ∈ (W1 + W2)⊥

Thus w ∈ W1⊥ ∩ W2⊥ ⇒ w ∈ (W1 + W2)⊥

So W1⊥ ∩ W2⊥ ⊆ (W1 + W2)⊥ ........ (2)

From (1) and (2), (W1 + W2)⊥ = W1⊥ ∩ W2⊥.

ii) To show that (W1 ∩ W2)⊥ = W1⊥ + W2⊥:

As W1, W2 are subspaces of V, W1⊥, W2⊥ are also subspaces of V. Hence replacing W1 and W2 by W1⊥ and W2⊥ respectively in (i), i.e. in (W1 + W2)⊥ = W1⊥ ∩ W2⊥, we get

(W1⊥ + W2⊥)⊥ = (W1⊥)⊥ ∩ (W2⊥)⊥

= W1 ∩ W2, since W1⊥⊥ = W1 and W2⊥⊥ = W2

Taking orthogonal complements on both sides, ((W1⊥ + W2⊥)⊥)⊥ = (W1 ∩ W2)⊥

So (W1 ∩ W2)⊥ = W1⊥ + W2⊥, since W⊥⊥ = W
W.E. 20: For the space R³(R), W = L({(1, 0, 0), (0, 1, 0)}) is a subspace. Let v = (2, 3, 4) ∈ R³.

An orthonormal basis of W is {u1, u2}, where u1 = (1, 0, 0), u2 = (0, 1, 0).

Find the orthogonal projection of v on W.

Solution: The orthogonal projection of v on W is equal to Σᵢ₌₁² ⟨v, ui⟩ ui = ⟨v, u1⟩ u1 + ⟨v, u2⟩ u2

= ⟨(2, 3, 4), (1, 0, 0)⟩ (1, 0, 0) + ⟨(2, 3, 4), (0, 1, 0)⟩ (0, 1, 0)

= 2(1, 0, 0) + 3(0, 1, 0)

= (2, 3, 0)
W.E. 21:

Let V = P3(R) be the inner product space of polynomials of degree at most 3, continuous on [−1, 1]. Let the inner product be defined as ⟨f, g⟩ = ∫₋₁¹ f(t) g(t) dt, where f, g ∈ V. If W = P2(R) is a subspace of V, with standard basis B = {1, x, x²}:

i) Obtain an orthonormal basis by the Gram-Schmidt process.

ii) Represent the polynomial f(x) = 1 + 2x + 3x² ∈ P2(R) as a linear combination of the orthonormal basis obtained above.

iii) Obtain the orthogonal projection of f(x) = x³, belonging to P3(R), on the subspace P2(R).

Solution: i) To obtain an orthonormal basis by the Gram-Schmidt process:

The standard basis of P2(R) is B = {u1, u2, u3}, given by u1 = 1, u2 = x, u3 = x².

Let B′ = {v1, v2, v3} be the corresponding orthonormal basis.

‖u1‖² = ⟨u1, u1⟩ = ∫₋₁¹ (1)(1) dt = [t]₋₁¹ = 1 − (−1) = 2, so ‖u1‖ = √2

v1 = u1/‖u1‖ = 1/√2

⟨u2, v1⟩ = ∫₋₁¹ t·(1/√2) dt = (1/√2)[t²/2]₋₁¹ = (1/(2√2))(1 − 1) = 0

w2 = u2 − ⟨u2, v1⟩ v1

w2 = x − 0 = x

‖w2‖² = ⟨w2, w2⟩ = ∫₋₁¹ t·t dt = 2 ∫₀¹ t² dt = 2 [t³/3]₀¹ = 2/3

‖w2‖ = √(2/3)

v2 = w2/‖w2‖ = √(3/2) x

⟨u3, v1⟩ = ∫₋₁¹ t²·(1/√2) dt = (1/√2)·(2/3) = √2/3

⟨u3, v2⟩ = ∫₋₁¹ t²·√(3/2) t dt = √(3/2) ∫₋₁¹ t³ dt = 0, i.e. ⟨u3, v2⟩ = 0

w3 = u3 − ⟨u3, v2⟩ v2 − ⟨u3, v1⟩ v1

= x² − 0 − (√2/3)·(1/√2) = x² − 1/3

‖w3‖² = ⟨w3, w3⟩ = ∫₋₁¹ (t² − 1/3)² dt = ∫₋₁¹ (t⁴ − (2/3)t² + 1/9) dt

= 2 ∫₀¹ (t⁴ − (2/3)t² + 1/9) dt = 2 [t⁵/5 − (2/9)t³ + t/9]₀¹

= 2 (1/5 − 2/9 + 1/9) = 2 (1/5 − 1/9) = 2·(4/45) = 8/45

‖w3‖ = √(8/45)

v3 = w3/‖w3‖ = √(45/8)(x² − 1/3) = √(45/8)·(3x² − 1)/3 = √(5/8)(3x² − 1)

Hence the orthonormal basis of the subspace is {v1, v2, v3} = {1/√2, √(3/2) x, √(5/8)(3x² − 1)}

ii) The given polynomial is f(x) = 1 + 2x + 3x² ∈ P2(R).

To express f(x) as a linear combination of the vectors in the orthonormal basis:

The required linear combination is f(x) = Σᵢ₌₁³ ⟨f, vi⟩ vi

⟨f, v1⟩ = ∫₋₁¹ (1 + 2t + 3t²)·(1/√2) dt

= (1/√2)[t + t² + t³]₋₁¹ = (1/√2)(3 − (−1)) = 4/√2

⟨f, v1⟩ = 2√2

⟨f, v2⟩ = ∫₋₁¹ (1 + 2t + 3t²)·√(3/2) t dt

= √(3/2) ∫₋₁¹ (t + 2t² + 3t³) dt

= √(3/2)·2 ∫₀¹ 2t² dt = √(3/2)·(4/3)

⟨f, v2⟩ = 2√6/3

⟨f, v3⟩ = ∫₋₁¹ (1 + 2t + 3t²)·√(5/8)(3t² − 1) dt

= √(5/8) ∫₋₁¹ (9t⁴ + 6t³ − 2t − 1) dt

= √(5/8)·2 ∫₀¹ (9t⁴ − 1) dt = √(5/8)·2 (9/5 − 1) = √(5/8)·(8/5)

So ⟨f, v3⟩ = 2√10/5

So f(x) = ⟨f, v1⟩ v1 + ⟨f, v2⟩ v2 + ⟨f, v3⟩ v3

= 2√2·(1/√2) + (2√6/3)·√(3/2) x + (2√10/5)·√(5/8)(3x² − 1),

where f(x) is a linear combination of the vectors of the orthonormal basis.

iii) To find the orthogonal projection of f(x) = x³ ∈ P3(R) on W = P2(R):

Solution: The orthogonal projection of f(x) = x³ on W is Σᵢ₌₁³ ⟨f, vi⟩ vi

= ⟨f, v1⟩ v1 + ⟨f, v2⟩ v2 + ⟨f, v3⟩ v3

⟨f, v1⟩ = ∫₋₁¹ t³·(1/√2) dt = 0

⟨f, v2⟩ = ∫₋₁¹ t³·√(3/2) t dt = √(3/2)·2 ∫₀¹ t⁴ dt = √(3/2)·(2/5)

So ⟨f, v2⟩ = √6/5

⟨f, v3⟩ = ∫₋₁¹ t³·√(5/8)(3t² − 1) dt = √(5/8) ∫₋₁¹ (3t⁵ − t³) dt

So ⟨f, v3⟩ = 0

Hence the orthogonal projection of f(x) = x³ on W = P2(R) is

⟨f, v1⟩ v1 + ⟨f, v2⟩ v2 + ⟨f, v3⟩ v3

= 0·(1/√2) + (√6/5)·√(3/2) x + 0·√(5/8)(3x² − 1)

= (3/5) x
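Because the inner product ∫₋₁¹ f(t) g(t) dt of two polynomials can be computed exactly (∫₋₁¹ tⁿ dt is 0 for odd n and 2/(n+1) for even n), the projection above can be verified with exact rational arithmetic. This sketch uses the orthogonal (not yet normalised) polynomials 1, x, x² − 1/3 found in part (i), with the equivalent projection formula Σ (⟨f, wᵢ⟩/⟨wᵢ, wᵢ⟩) wᵢ (the helper names `mul`, `inner` are mine):

```python
from fractions import Fraction as F

def mul(f, g):
    """Multiply two polynomials given as coefficient lists [a0, a1, ...]."""
    out = [F(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += F(a) * F(b)
    return out

def inner(f, g):
    """<f, g> = integral of f(t) g(t) over [-1, 1], computed exactly:
    the integral of t^n is 0 for odd n and 2/(n+1) for even n."""
    return sum(c * F(2, n + 1) for n, c in enumerate(mul(f, g)) if n % 2 == 0)

# orthogonal polynomials 1, x, x^2 - 1/3 from the Gram-Schmidt step above
w1, w2, w3 = [F(1)], [F(0), F(1)], [F(-1, 3), F(0), F(1)]

# projection of f(x) = x^3 onto P2(R): sum of (<f, w_i>/<w_i, w_i>) w_i
f = [F(0), F(0), F(0), F(1)]
coeffs = [inner(f, w) / inner(w, w) for w in (w1, w2, w3)]
# coeffs = [0, 3/5, 0], so the projection is (3/5) x, as found by hand
```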

W.E. 22: Compute S⊥ if S = {(1, 0, i), (1, 2, 1)} in the inner product space C³(C).

Solution: Let S = {u1, u2}, so that u1 = (1, 0, i), u2 = (1, 2, 1).

By definition S⊥ = {v ∈ C³ : ⟨v, u⟩ = 0 for all u ∈ S}

Let v = (a, b, c) ∈ C³, where a, b, c are scalars belonging to C.

v ∈ S⊥ ⇒ ⟨v, u1⟩ = 0 and ⟨v, u2⟩ = 0

i.e. ⟨v, u1⟩ = 0 ⇒ ⟨(a, b, c), (1, 0, i)⟩ = 0

⇒ a(1̄) + b(0̄) + c(ī) = 0, where ā is the conjugate of a

So a − ci = 0 .... (1), since ī = −i

and ⟨v, u2⟩ = 0 ⇒ ⟨(a, b, c), (1, 2, 1)⟩ = 0

⇒ a(1) + b(2) + c(1) = 0

⇒ a + 2b + c = 0 .... (2)

Let c = 1; then from (1), a = i.

Using this in (2): i + 2b + 1 = 0 ⇒ b = −(1 + i)/2

So v = (a, b, c) = (i, −(1 + i)/2, 1)

So S⊥ is the subspace spanned by the vector (i, −(1 + i)/2, 1)

14.16 Summary:

In this lesson we discussed the orthogonality of vectors, orthonormality, properties of orthogonality and orthonormality, the Gram-Schmidt orthogonalization process to obtain orthonormal bases, Parseval's identity, Bessel's inequality, orthogonal complements, closest vectors and orthogonal projection.

14.17 Technical Terms:

Orthogonality of vectors, orthonormality of vectors, orthogonalization, orthogonal complement, orthogonal projection.

14.18 Model Questions:


1. Define i) Orthogonal set ii) Orthonormal set in an inner product space.

2. If u, v are two vectors in a real inner product space V(F) such that ‖u‖ = ‖v‖, then show that u + v is orthogonal to u − v.

3. Show that the vectors ( 1, 0), (0, 1) in R 2 form an orthonormal basis over R under usual inner
product on R 2 .

4. Prove that every orthogonal set of non zero vectors in an inner product space V ( F ) is linearly
independent.
5. State and prove Parseval's identity.

6. Apply Gram -Schmidt process to obtain an orthonormal basis for V3 ( R ) with the standard inner
product to the vectors.

1 1 1
i) (2,1,3), (1, 2,3), (1,1,1) Ans: (2,1,3), (4,5,1) , (1,1, 1)
14 42 3

1 1 1
ii) (1, 1, 0), (2, 1, 2), (1, 1, 2) Ans: (1, 1, 0), (1,1, 4), (2, 2, 1)
2 3 2 3

7. State and prove Bessel’s inequality.

14.19 Exercise:
1. Apply the Gram - Schmidt process to obtain an orthonormal basis for V3 ( R ) with the standard
inner product to the vectors

i) (2,1,3), (1, 2,3), (1,1,1)

1 1 1
Ans : (2,1,3), (4,5,1), (1,1, 1)
14 42 3

ii) (1, 0,1), (1, 0, 1), (0,1, 4)

1 1
Ans : (1, 0,1), (1, 0, 1),(0,1, 0)
2 2

iii) (1, 1, 0), (2, 1, 2), (1, 1, 2)

1 1 1
Ans : (1, 1, 0), (1,1, 4), (2, 2, 1)
2 3 2 3

2. In each part apply the Gram-Schmidt process to the given subset S of the inner product space V to obtain an orthogonal basis for span(S). Then normalise these vectors to obtain an orthonormal basis B for span(S), and compute the Fourier coefficients of the given vector relative to B.

i). v  R ; S  (2, 1, 2, 4), (2,1, 5,5), (1,3, 7,11) and v  ( 11,8, 4,18) .
4

1 1 1 
Ans:  (2, 1, 2, 4), (4, 2, 3,1), (3, 4,9, 7) 
5 30 155 

10,3 30, 155

ii) v  C ; S  (4,3  2i, i,1  4i),( 1  5i,5  4i, 3  5i, 7  2i ),( 27  i, 7  6i, 15  25i, 7  6i )
4

and v  ( 13  7i, 12  3i , 39  11i, 26  5i ) .



 1 1 1 
Ans:  (4,3  2i, i,1  4i ), (3  i, 5i, 2  4i, 2  i), (17  i, 9  8i, 18  6i, 9  8i) 
 47 60 1160 

47( 1  i ), 60( 1  2i ), 1160(1  i )

iii) V  P2 ( R ) with the inner product  f ( x ), g ( x )   f (t ) g (t )dt; S  1, x, x  ; h( x)  (1  x)


2

 1 1 3 3
Ans : 1, 2 3( x  ),6 5( x  x   , ,
2
,0
 2 6 2 6

  3 5  1 9  7 17    1 27 
iv) V  M 22 ( R), S     ,  ,   and A   
  1 1  5 1  2 6    4 8 

 1  3 5 1  4 4  1 9 3  
Ans:   ,  , 6 6  
 6  1 1 6 2  6 2  9 2  

24, 6 2, 9 2
3. In each of the following parts find the orthogonal projection of the given vectors on the given
subspace W of the inner product space V.

i) V  R 2 ; u  (2, 6) ; and W  (v, w) : w  4v

1  2 6
17 10 4 
Ans :

ii) V  R 3 ; u  (2,1,3) and W  (u1 , u2 , u3 ) u1  3u2  2u3  0

 29 
1  
17
Ans : 14  
 40 

4. Find the distance from the vector u  (2,1,3) to the sub space W  (u1 , u2 , u3 ) u1  3u2  2u3  0
of the vector space R 3 .

1
Ans:
14
5. If W be a sub space of the inner product space V3 ( R ) spanned by B1  (1, 0,1), (1, 2, 2) then
find a basis of orthogonal compliment of W  .

Ans : (2, 3, 2) is the basis of W 

 
6. If W  L (1, 2,3, 2), (2, 4,5, 1) the subspace of R 4 ( R ) ; find a basis of the orthogonal compli-
ment W  .

 7 
Ans : (2, 1, 0, 0), (0, , 3,1) 
 2 


7. If V = L(S) with inner product ⟨f, g⟩ = ∫₀^π f(t) g(t) dt and S = {sin t, cos t, 1, t}, find an orthogonal basis and compute the Fourier coefficients of h(t) = 2t + 1

 4 4 
Ans : sin t , cos t ,1  sin t , t  cos t  
   2

14.20 Reference Books:


1. Linear Algebra 4th edition Stephen H. Friedberg, Arnold J. Insel, Lawrence E. Spence
2. Schaum’s outline; Beginning Linear Albegra Seymour Lipschutz
3. Topics in Algebra I.N. Herstein
4. Linear Algebra J.N. Sharma & A.R. Vasishtha

- A. Mallikharjana Sarma

LESSON - 15

LINEAR OPERATORS
15.1 Objective of the Lesson:

We are familiar with the conjugate transpose A* of a matrix A. If A = [aij]m×n, then A* = [bij]n×m where bij = conj(aji), i.e. A* is the transpose of the matrix formed with the complex conjugates of the elements of A.

For a linear operator T on an inner product space V, we now define a related linear operator on V, called the adjoint of T, whose matrix representation with respect to any orthonormal basis B for V is ([T]B)*. The analogy between conjugate complex numbers and the adjoint of a linear operator will

become apparent.
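As a quick numerical illustration (a sketch using NumPy; the matrix A below is an arbitrary example, not one taken from the text), the conjugate transpose satisfies ⟨Au, v⟩ = ⟨u, A*v⟩ for the standard inner product on Cⁿ:

```python
import numpy as np

# An arbitrary 2x2 complex matrix (example only).
A = np.array([[2, 1j], [1 - 1j, 0]])

# Conjugate transpose (adjoint) of A.
A_star = A.conj().T

# Standard inner product on C^n: <u, v> = sum u_i * conj(v_i).
def inner(u, v):
    return np.vdot(v, u)  # np.vdot conjugates its FIRST argument

u = np.array([1 + 2j, 3 - 1j])
v = np.array([-2j, 1 + 1j])

lhs = inner(A @ u, v)        # <Au, v>
rhs = inner(u, A_star @ v)   # <u, A*v>
assert np.isclose(lhs, rhs)
```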
As V is an inner product space, in this chapter we also study the conditions which guarantee that V has an orthonormal basis.

15.2. Structure of the Lesson:


This lesson contains the following items.

15.3 Introduction

15.4 Basic definitions

15.5 Theorems on linear transformations

15.6 Worked Out Examples

15.7 Adjoint of an operator - Definition

15.8 Theorems on adjoint operators

15.9 Worked out examples

15.10 Some properties of adjoint operators - theorems

15.11 Worked out Examples

15.12 Exercise

15.13 Normal and self-adjoint operators - definitions and theorems

15.14 T-invariance and splitting polynomials - definitions

15.15 Schur Theorem and other theorems



15.16 Normal operator definition - and examples

15.17 Theorems on normal operators

15.18 Positive definite and semidefinite transformations - definitions and theorems

15.19 Worked out examples

15.20 Summary

15.21 Technical Terms

15.22 Model Questions

15.23 Exercise

15.24 Reference Books

15.3 Introduction:
Here we shall consider linear functionals defined on an inner product space V(F). Since an inner product space is also a vector space, all concepts concerning linear functionals on vector spaces apply to inner product spaces as well. So we first give some basic definitions that are useful in inner product spaces.

15.4 Basic Definitions:


1. Linear Operators : Let V ( F ) be a vector space. A linear transformation from V ( F ) to V ( F )
is called a linear operator.

Also, a linear functional f over V(F) is a mapping f : V → F that assigns to every vector v in V an element f(v) in F, such that f is linear.

In other words, f(u + v) = f(u) + f(v) for every u, v ∈ V, and f(au) = af(u) for every a ∈ F.

These two can be clubbed together as f(au + bv) = af(u) + bf(v) ∀ u, v ∈ V and a, b ∈ F

or

f(au + v) = af(u) + f(v) ∀ u, v ∈ V and ∀ a ∈ F.
2. Inner Product: An inner product on V is a function that assigns to every ordered pair of vectors u, v in V a scalar ⟨u, v⟩ in F (= R or C).

Also ⟨au1 + bu2, v⟩ = a⟨u1, v⟩ + b⟨u2, v⟩ for u1, u2 ∈ V and a, b ∈ F.

Hence, for each fixed v, the map u ↦ ⟨u, v⟩ is linear, and so it is a linear functional on V.


If V ( F ) is a finite dimensional inner product space; then it will have an orthonormal basis.

15.5 Theorems on Linear Transformations:


15.5.1 Theorem: Let V be a finite dimensional inner product space and let f be a linear transformation from V to F. Then there exists a unique vector v in V such that f(u) = ⟨u, v⟩ ∀ u ∈ V.

Proof: Let B = {u1, u2, ..., un} be an orthonormal basis for V, and let f be a linear transformation from V to F.

Let v = Σ_{j=1}^{n} conj(f(uj)) uj, where each f(uj) ∈ F ......... (1)

Here conj(f(uj)) simply denotes the complex conjugate of f(uj).

Then v ∈ V. Furthermore, let g be a function from V to F defined by

g(u) = ⟨u, v⟩ ∀ u ∈ V .......... (2)
To show that g is a linear functional on V;

Let a, b  F , w1 , w2  V , we have

g(aw1 + bw2) = ⟨aw1 + bw2, v⟩ by (2)

= a⟨w1, v⟩ + b⟨w2, v⟩

= ag(w1) + bg(w2)

Thus g is a linear functional on V.


Now we will show g = f.

Let uk ∈ B; then g(uk) = ⟨uk, v⟩ ............ (3)

Now substituting the value of v from (1), we get

g(uk) = ⟨uk, Σ_{j=1}^{n} conj(f(uj)) uj⟩

= Σ_{j=1}^{n} f(uj) ⟨uk, uj⟩

= f(uk), since ⟨uk, uj⟩ = 0 if j ≠ k and ⟨uk, uj⟩ = 1 if j = k.

i.e. g(uk) = f(uk)

Thus f and g agree on a basis for V ( F ) and hence g  f .

In other words, there exists a vector v ∈ V corresponding to the linear functional f on V:

f (u )  u , v   u  V
Uniqueness of v:
Suppose there exists w in V such that

f (u )  u , w   u  V

Thus ⟨u, v⟩ = ⟨u, w⟩ ∀ u ∈ V

⟹ ⟨u, v⟩ − ⟨u, w⟩ = 0 ∀ u ∈ V

⟹ ⟨u, v − w⟩ = 0 for all u ∈ V

⟹ ⟨v − w, v − w⟩ = 0, substituting the particular value u = v − w

⟹ v − w = 0

⟹ v = w
So v is unique. Hence the theorem.
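The theorem's recipe v = Σ conj(f(uj)) uj is easy to carry out numerically (a sketch; the functional f below is an example chosen for illustration, not one from the text):

```python
import numpy as np

# Example functional on C^3 (illustrative choice): f(z1, z2, z3) = z1 + 2i*z2 - z3.
def f(z):
    return z[0] + 2j * z[1] - z[2]

basis = np.eye(3, dtype=complex)  # standard orthonormal basis of C^3

# Representing vector: v = sum_j conj(f(u_j)) u_j.
v = sum(np.conj(f(e)) * e for e in basis)

# Check f(u) = <u, v> = sum u_i conj(v_i) on a sample vector.
u = np.array([1 + 1j, -2, 3j])
assert np.isclose(f(u), np.vdot(v, u))  # np.vdot conjugates its first argument
```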
15.5.2 Theorem: For any linear operator T on a finite dimensional inner product space V, there exists a unique linear operator T* on V such that

⟨T(u), v⟩ = ⟨u, T*(v)⟩ ∀ u, v ∈ V
Proof: Let T be a linear operator on a finite dimensional inner product space V; over the field F let
v be a vector in V. Let f be a function from V into F defined by f (u )  T (u ), v   u  V ..... (1)

To show that f is a linear functional on V.

Let a, b ∈ F and u1, u2 ∈ V. Then

f(au1 + bu2) = ⟨T(au1 + bu2), v⟩ from (1)

= ⟨aT(u1) + bT(u2), v⟩, since T is linear.

So f(au1 + bu2) = af(u1) + bf(u2) from (1).

Thus f is a linear functional on V.

So there exists a unique vector v′ in V

such that f(u) = ⟨u, v′⟩ ∀ u ∈ V ............ (2)

Hence from (1) and (2) we observe that if T is a linear operator on V, then corresponding to every v in V there is a uniquely determined vector v′ in V such that ⟨T(u), v⟩ = ⟨u, v′⟩ for all u ∈ V. Let T* be the rule by which we associate v with v′, i.e. let T*(v) = v′.

Then T* is a function from V to V defined by ⟨T(u), v⟩ = ⟨u, T*(v)⟩ ∀ u, v ∈ V .... (3)

1. To show that T * is linear:

Let a, b ∈ F and v1, v2 ∈ V. Then for every u in V we have

⟨u, T*(av1 + bv2)⟩ = ⟨T(u), av1 + bv2⟩ from (3)

= conj(a)⟨T(u), v1⟩ + conj(b)⟨T(u), v2⟩

= conj(a)⟨u, T*(v1)⟩ + conj(b)⟨u, T*(v2)⟩ from (3)

= ⟨u, aT*(v1)⟩ + ⟨u, bT*(v2)⟩

= ⟨u, aT*(v1) + bT*(v2)⟩

So T*(av1 + bv2) = aT*(v1) + bT*(v2),

since in an inner product space ⟨u, v⟩ = ⟨u, w⟩ for every u ⟹ v = w.

So T* is a linear operator on V.

Thus corresponding to a linear operator T on V there exists a linear operator T* on V such

that ⟨T(u), v⟩ = ⟨u, T*(v)⟩ ∀ u, v ∈ V.

To show that T * is unique:

Let S be a linear operator on V such that ⟨T(u), v⟩ = ⟨u, S(v)⟩ ∀ u, v ∈ V.

⟹ ⟨u, T*(v)⟩ = ⟨u, S(v)⟩ ∀ u, v ∈ V

⟹ T* = S.

So T* is unique.
Hence the theorem.
Note i) The symbol T* is read as "T star".

ii) T(u) can be written as Tu, and T*(v) as T*v.


iii) T is a linear operator on a finite dimensional inner product space V. If T has an eigen
vector, then T * does so.

iv) T * is called an adjoint operator. Which we deal later in a detailed manner..

15.6 Worked Out Examples:


W.E.1:

1. For each of the following inner product spaces V(F) and linear functionals f : V → F, find a vector v such that f(u) = ⟨u, v⟩ for all u ∈ V.

i) V = R²; F = R; f(a1, a2) = 2a1 + a2

ii) V = C²; F = C; f(z1, z2) = z1 + 2z2

Solution: Given V = R², F = R, f(a1, a2) = 2a1 + a2; to find v.

V  F i.e. R 2  R is an inner product space of dim  2

V ( F ) i.e. R2 ( R) has an orthonormal basis u1  (1, 0), u2  (0,1)

Such that  u1 , u1  1,  u2 , u2  1,  u1 , u2  0,  u2 , u1  0

f : V  F is a linear functional such that f (u )  f (u1 , u2 )  2u1  u2 for u  (u1 , u2 )

Let u ∈ V; then u = a1u1 + a2u2 for a1, a2 ∈ R, and f(u) = f(a1u1 + a2u2) = a1f(u1) + a2f(u2) .... (1), since f is linear. We seek v ∈ V such that f(u) = ⟨u, v⟩.

Let v = b1u1 + b2u2 for b1, b2 ∈ R ............. (2)

⟨u, v⟩ = ⟨a1u1 + a2u2, b1u1 + b2u2⟩

= a1b1⟨u1, u1⟩ + a1b2⟨u1, u2⟩ + a2b1⟨u2, u1⟩ + a2b2⟨u2, u2⟩

= a1b1 + a2b2 ......... (3)

Since f(u) = ⟨u, v⟩, from (1) and (3) we get a1b1 + a2b2 = a1f(u1) + a2f(u2).

Comparing both sides of the above, we get

b1 = f(u1), b2 = f(u2)

⟹ b1 = f(1, 0) = 2(1) + 0 = 2

b2 = f(0, 1) = 2(0) + 1 = 1

Hence v = b1u1 + b2u2 = 2(1, 0) + 1(0, 1) = (2, 1) is the required vector.

ii) V  F is C 2  C defined by f ( z1 , z2 )  z1  2 z2 to find v..

Solution: V  F in C 2  C is a linear functional such that f (u )  f ( z1 , z2 )  z1  2 z2

Let u  V , then u  a1 z1  a 2 z 2 for some scalar belonging to C.

 f (u )  f (a1 z1  a2 z2 )  a1 f ( z1 )  a2 f ( z2 ) .......(1) for a1 , a2  C

 since f is linear..
We have v  V such that f (u )  u , v 

So v  b1 z1  b2 z2 for some b1 , b2  C

 u, v  a1 z1  a2 z2 , b1 z1  b2 z2 

 a1b1  z1 , z1   a1b2  z1 , z2   a2b1  z2 , z1   a2 b2  z2 , z2 

 a1b1  a2b2 ............. (3)

Since f (u )  u , v  from (3) and (1)

We have a1b1  a2b2  a1 f ( z1 )  a2 f ( z2 )

Comparing on both sides we get

b1  f ( z1 ); b2  f ( z2 )

So b1  f (1, 0)  1  2(0)  1
Centre for Distance Education 15.8 Acharya Nagarjuna University

b2  0  2(1)  2 . Hence b1  1, b2  2

Hence v  b1 z1  b2 z2  1(1, 0)  2(0,1)  (1, 2)

Which is the required vector.
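Both answers can be spot-checked numerically (a sketch; note that NumPy's vdot conjugates its first argument, so ⟨u, v⟩ = np.vdot(v, u)):

```python
import numpy as np

def ip(u, v):
    # standard inner product <u, v> = sum u_i * conj(v_i)
    return np.vdot(v, u)

# Part (i): f(a1, a2) = 2*a1 + a2 on R^2, representing vector v = (2, 1).
u = np.array([5.0, -3.0])
assert np.isclose(2 * u[0] + u[1], ip(u, np.array([2.0, 1.0])))

# Part (ii): f(z1, z2) = z1 + 2*z2 on C^2, representing vector v = (1, 2).
w = np.array([1 + 4j, 2 - 1j])
assert np.isclose(w[0] + 2 * w[1], ip(w, np.array([1.0 + 0j, 2.0 + 0j])))
```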

W.E.2: For each of the following inner product spaces V(F) and linear functionals g : V → F, find a vector v such that g(u) = ⟨u, v⟩ for all u ∈ V.

i) V = R³; g(a1, a2, a3) = a1 + 2a2 + 4a3

ii) V(F) = V3(C); x = (a1, a2, a3); g(x) = (1/3)(a1 + a2 + a3)

Solution: (i) V = R³, g(a1, a2, a3) = a1 + 2a2 + 4a3; to find v.

V  F .i.e.R 3  R;V is an inner product space of dim  3 .

V ( R ) has an orthonormal basis u1  (1, 0, 0), u2  (0,1, 0), u3  (0, 0,1)

1 if i  j

Such that  u i , u j 

0 if i  j

g : V → R, i.e. g : R³ → R, is a linear functional

such that g(u) = g(a1, a2, a3) = a1 + 2a2 + 4a3 for u = (a1, a2, a3).

Let u ∈ V; then u = a1u1 + a2u2 + a3u3 = Σ_{i=1}^{3} aiui for ai ∈ R.

So g(u) = g(a1u1 + a2u2 + a3u3) = a1g(u1) + a2g(u2) + a3g(u3) ........... (1)

since g is linear.

We have v ∈ V such that g(u) = ⟨u, v⟩.

So v = b1u1 + b2u2 + b3u3 = Σ_{j=1}^{3} bjuj .......... (2) for bj ∈ R.

So ⟨u, v⟩ = ⟨Σ_{i=1}^{3} aiui, Σ_{j=1}^{3} bjuj⟩ = a1b1 + a2b2 + a3b3 ......... (3)

As g(u) = ⟨u, v⟩, from (1) and (3) we have

a1b1 + a2b2 + a3b3 = a1g(u1) + a2g(u2) + a3g(u3).

Comparing, b1 = g(u1), b2 = g(u2), b3 = g(u3)

⟹ b1 = g(1, 0, 0) = 1 + 2(0) + 4(0) = 1

b2 = g(0, 1, 0) = 0 + 2(1) + 4(0) = 2

b3 = g(0, 0, 1) = 0 + 2(0) + 4(1) = 4

∴ v = b1u1 + b2u2 + b3u3 from (2)

= 1(1, 0, 0) + 2(0, 1, 0) + 4(0, 0, 1)

= (1, 2, 4), which is the required vector.

ii) g : V3 → C is a linear functional; V3(C) is an inner product space of dim = 3.

V3 has an orthonormal basis {u1, u2, u3}

such that ⟨ui, uj⟩ = 1 if i = j and ⟨ui, uj⟩ = 0 if i ≠ j.

Given g : V3 → C is a linear transformation

such that g(u) = g(a1, a2, a3) = (1/3)(a1 + a2 + a3) for u = (a1, a2, a3).

Let u ∈ V3; then u = a1u1 + a2u2 + a3u3 = Σ_{i=1}^{3} aiui for ai ∈ C.

⟹ g(u) = g(a1u1 + a2u2 + a3u3) = a1g(u1) + a2g(u2) + a3g(u3) .......... (1)

We have v ∈ V3 such that g(u) = ⟨u, v⟩.

∴ v = b1u1 + b2u2 + b3u3 = Σ_{j=1}^{3} bjuj ..... (2) for some bj ∈ C.

So ⟨u, v⟩ = ⟨Σ aiui, Σ bjuj⟩ = a1 conj(b1) + a2 conj(b2) + a3 conj(b3) ........... (3), since ⟨ui, uj⟩ = 0 if i ≠ j and 1 if i = j.

Since g(u) = ⟨u, v⟩, from (1) and (3),

a1 conj(b1) + a2 conj(b2) + a3 conj(b3) = a1g(u1) + a2g(u2) + a3g(u3).

Comparing, conj(bj) = g(uj), i.e. bj = conj(g(uj)) for j = 1, 2, 3.

Taking the orthonormal basis to be the standard basis, u1 = (1, 0, 0), u2 = (0, 1, 0), u3 = (0, 0, 1).

Since g(u) = g(a1, a2, a3) = (1/3)(a1 + a2 + a3), we have

g(u1) = g(1, 0, 0) = (1/3)(1 + 0 + 0) = 1/3,

g(u2) = g(0, 1, 0) = (1/3)(0 + 1 + 0) = 1/3, g(u3) = g(0, 0, 1) = (1/3)(0 + 0 + 1) = 1/3.

∴ b1 = b2 = b3 = 1/3 (these values are real, so the conjugates change nothing).

So v = b1u1 + b2u2 + b3u3 from (2)

⟹ v = (1/3)(1, 0, 0) + (1/3)(0, 1, 0) + (1/3)(0, 0, 1) = (1/3, 1/3, 1/3),

which is the required vector.
W.E.3: If V3(F) is an inner product space with orthonormal basis {u1, u2, u3}, where u1 = (1/√2)(1, 1, 0), u2 = (1/√2)(1, −1, 0), u3 = (0, 0, 1), and f is a linear functional on V3(F) such that f(u1) = 2, f(u2) = −1, f(u3) = 1, find the vector v such that f(u) = ⟨u, v⟩ ∀ u ∈ V3(F).

Solution: V3(F) is an inner product space of dimension 3 with orthonormal basis {u1, u2, u3},

so that ⟨ui, uj⟩ = 1 if i = j and ⟨ui, uj⟩ = 0 if i ≠ j.

Let u ∈ V3; then u = a1u1 + a2u2 + a3u3 = Σ aiui for ai ∈ F, and

f(u) = f(a1u1 + a2u2 + a3u3) = a1f(u1) + a2f(u2) + a3f(u3) ............ (1)

We have v ∈ V3 with f(u) = ⟨u, v⟩.

∴ v = b1u1 + b2u2 + b3u3 = Σ bjuj for bj ∈ F ............ (2)

⟨u, v⟩ = ⟨Σ aiui, Σ bjuj⟩ = a1 conj(b1) + a2 conj(b2) + a3 conj(b3) ........... (3)

Since f(u) = ⟨u, v⟩, from (1) and (3):

conj(bj) = f(uj), so bj = conj(f(uj))

⟹ b1 = 2, b2 = −1, b3 = 1 (the given values are real).

So v = b1u1 + b2u2 + b3u3 = 2 · (1/√2)(1, 1, 0) − 1 · (1/√2)(1, −1, 0) + 1 · (0, 0, 1),

so v = (1/√2, 3/√2, 1) is the required vector.

15.7 Definition:

Adjoint of an operator: Let T be a linear operator on an inner product space V (finite dimensional or not). We say that T has an adjoint T* if there exists a linear operator T* on V such that ⟨T(u), v⟩ = ⟨u, T*(v)⟩ ∀ u, v ∈ V.

In Theorem 15.5.2 we proved that every linear operator on a finite dimensional inner product space possesses an adjoint. It should be noted that if V is not finite dimensional, then some linear operators may possess an adjoint while others may not. In any case, if T possesses an adjoint T*, it is unique, as we proved in that theorem.

Note: We have ⟨u, T(v)⟩ = conj(⟨T(v), u⟩) = conj(⟨v, T*(u)⟩) = ⟨T*(u), v⟩.

Hence ⟨u, T(v)⟩ = ⟨T*(u), v⟩ for all u, v ∈ V.

15.8.1 Theorem: Let V be a finite dimensional inner product space and let B = {u1, u2, ..., un} be an orthonormal basis for V. Let T be a linear operator on V with matrix A = [aij] relative to the ordered basis B. Then aij = ⟨T(uj), ui⟩.

Proof: As B is an orthonormal basis of V, if v is any vector in V then v = Σ_{i=1}^{n} ⟨v, ui⟩ ui.

Taking T(uj) in place of v in the above, we get T(uj) = Σ_{i=1}^{n} ⟨T(uj), ui⟩ ui ......(1)

where j = 1, 2, ..., n.

Now if A = [aij]n×n is the matrix of T in the ordered basis B, then we have T(uj) = Σ_{i=1}^{n} aij ui; j = 1, 2, ..., n ......(2). As the expression for T(uj) as a linear combination of the vectors in B is unique, from (1) and (2) we have aij = ⟨T(uj), ui⟩, where i = 1, 2, ..., n and j = 1, 2, ..., n.

15.8.2 Theorem:

Let V be a finite dimensional inner product space and let T be a linear operator on V. Let B be any orthonormal basis for V. Then the matrix of T* is the conjugate transpose of the matrix of T, i.e. [T*]B = ([T]B)*.

Proof: Let B = {u1, u2, ..., un} be an orthonormal basis of V and let A = [aij]n×n be the matrix of T with respect to the ordered basis B. Then aij = ⟨T(uj), ui⟩ .......... (1)

Now T* is also a linear operator on V.

Let C = [cij]n×n be the matrix of T* in the ordered basis B. Then cij = ⟨T*(uj), ui⟩ ..... (2)

Now cij = ⟨T*(uj), ui⟩ = conj(⟨ui, T*(uj)⟩), since ⟨u, v⟩ = conj(⟨v, u⟩)

= conj(⟨T(ui), uj⟩) by the definition of T*

= conj(aji) by (1)

∴ C = A*, where A* is the conjugate transpose of A. So [T*]B = ([T]B)*.

Note: Here the basis B is an orthonormal basis, not an arbitrary basis.
15.8.3 Corollary: If A and B are n × n matrices, then

(i) (A + B)* = A* + B*  (ii) (cA)* = conj(c)A* for all c ∈ F

(iii) (AB)* = B*A*  (iv) A** = A

(v) I* = I  (vi) 0*ₙₓₙ = 0ₙₓₙ (the null matrix of order n)
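These identities are easy to spot-check numerically (a sketch; the matrices below are randomly chosen examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
c = 2 - 3j

star = lambda M: M.conj().T  # conjugate transpose

assert np.allclose(star(A + B), star(A) + star(B))     # (A+B)* = A* + B*
assert np.allclose(star(c * A), np.conj(c) * star(A))  # (cA)* = conj(c) A*
assert np.allclose(star(A @ B), star(B) @ star(A))     # (AB)* = B* A*
assert np.allclose(star(star(A)), A)                   # A** = A
```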

15.9 Worked out examples:


W.E.4: Let T be the linear operator on C² defined by T(a1, a2) = (2ia1 + 3a2, a1 − a2). If B is the standard ordered basis for C², find T*(a1, a2).

Solution: We are given T(a1, a2) = (2ia1 + 3a2, a1 − a2).

The standard ordered basis is B = {(1, 0), (0, 1)}.

T(1, 0) = (2i(1) + 3(0), 1 − 0) = (2i, 1) = 2i(1, 0) + 1(0, 1)

T(0, 1) = (0 + 3, 0 − 1) = (3, −1) = 3(1, 0) − 1(0, 1)

[T]B = [[2i, 3], [1, −1]]

[T*]B = ([T]B)* = [[−2i, 1], [3, −1]]. Hence the coordinate matrix of T*(a1, a2) in the same basis is

[[−2i, 1], [3, −1]] [[a1], [a2]] = [[−2ia1 + a2], [3a1 − a2]]

∴ T*(a1, a2) = (−2ia1 + a2)(1, 0) + (3a1 − a2)(0, 1) = (−2ia1 + a2, 3a1 − a2)

W.E.5: Let A be an n × n matrix. Show that (L_A)* = L_{A*}.

Solution: Let B be the standard basis for Fⁿ. Then [L_A]B = A. Hence

[(L_A)*]B = ([L_A]B)* = A* = [L_{A*}]B.

So (L_A)* = L_{A*}.
W.E.6: If the linear operator T on V3(C) is defined by

T(a, b, c) = (2a + (1 + i)b, (3 − 2i)a + 4ic, 2ia + (4 − 3i)b + 3c) for every (a, b, c) ∈ V3(C),

find T*(a, b, c) with respect to the standard basis.

Solution: The matrix of T relative to the standard basis of V3(C), which is also an orthonormal basis, is

[T] = [[2, 1 + i, 0], [3 − 2i, 0, 4i], [2i, 4 − 3i, 3]] = [aij]3×3

If T* is the adjoint of T, then the matrix of T* relative to the standard basis B is

[T*] = [conj(aji)] = [[2, 3 + 2i, −2i], [1 − i, 0, 4 + 3i], [0, −4i, 3]]

Thus

T*(a, b, c) = (2a + (3 + 2i)b − 2ic, (1 − i)a + (4 + 3i)c, −4ib + 3c) for each (a, b, c) ∈ V3(C)

W.E.7: Let T be the linear operator on V3(F) defined by T(a, b, c) = (a + b, b, a − b + c) for a, b, c ∈ F. Find T*.

Solution: Let (x, y, z) ∈ V3(F) and let T be the given linear operator on V3. By definition,

⟨(a, b, c), T*(x, y, z)⟩ = ⟨T(a, b, c), (x, y, z)⟩

= ⟨(a + b, b, a − b + c), (x, y, z)⟩

= (a + b)x + by + (a − b + c)z

= ax + bx + by + az − bz + cz

= a(x + z) + b(x + y − z) + cz

= ⟨(a, b, c), (x + z, x + y − z, z)⟩

So T*(x, y, z) = (x + z, x + y − z, z).

15.10 Some Properties of adjoint operators:


15.10.1 Theorem: Suppose S and T are linear operators on an inner product space V, and c is a scalar. If S and T possess adjoints on V, then

i) (S + T)* = S* + T*  (ii) (cT)* = conj(c)T*, where c is a scalar

iii) (ST)* = T*S*  (iv) (T*)* = T

v) O* = O, where O is the zero operator; I* = I, where I is the identity operator

vi) If T is invertible, then T* is also invertible, and in this case (T*)⁻¹ = (T⁻¹)*

vii) T** = T

Proof: (i) To show that (S + T)* = S* + T*:

As S and T are linear operators on V, S + T is also a linear operator on V. For every u, v ∈ V,

⟨u, (S + T)*(v)⟩ = ⟨(S + T)(u), v⟩

= ⟨S(u) + T(u), v⟩

= ⟨S(u), v⟩ + ⟨T(u), v⟩

= ⟨u, S*(v)⟩ + ⟨u, T*(v)⟩ by definition of adjoint

= ⟨u, S*(v) + T*(v)⟩

= ⟨u, (S* + T*)(v)⟩

Thus for the linear operator S + T on V there exists the operator S* + T* on V such that

⟨(S + T)(u), v⟩ = ⟨u, (S* + T*)(v)⟩ ∀ u, v ∈ V.

By uniqueness of the adjoint, (S + T)* = S* + T*.

ii) To show that (cT)* = conj(c)T*, where c is a scalar in F:

As T is a linear operator on V, so is cT. For every u, v in V

we have ⟨u, (cT)*(v)⟩ = ⟨(cT)(u), v⟩ = ⟨cT(u), v⟩ = c⟨T(u), v⟩

= c⟨u, T*(v)⟩

= ⟨u, conj(c)T*(v)⟩

= ⟨u, (conj(c)T*)(v)⟩

So by the uniqueness of the adjoint we get (cT)* = conj(c)T*.

iii) To show that (ST)* = T*S*:

As S and T are linear operators on V, so is ST. For every u, v in V

we have ⟨u, (ST)*(v)⟩ = ⟨(ST)(u), v⟩

= ⟨S(T(u)), v⟩ by definition of the product of two operators

= ⟨T(u), S*(v)⟩ by definition of adjoint

= ⟨u, T*(S*(v))⟩

= ⟨u, (T*S*)(v)⟩

i.e. ⟨u, (ST)*(v)⟩ = ⟨u, (T*S*)(v)⟩

⟹ (ST)* = T*S*, as the adjoint operator is unique.

iv) To show that (T*)* = T:

⟨u, (T*)*(v)⟩ = ⟨T*(u), v⟩

= conj(⟨v, T*(u)⟩) (since ⟨u, v⟩ = conj(⟨v, u⟩))

= conj(⟨T(v), u⟩) by definition of adjoint

= ⟨u, T(v)⟩ (since ⟨u, v⟩ = conj(⟨v, u⟩))

Thus for the linear operator T* there exists the linear operator T on V such that

⟨u, (T*)*(v)⟩ = ⟨u, T(v)⟩.

Hence (T*)* = T by the uniqueness of the adjoint.

v) a) To show that O* = O, where O is the zero operator:

O is the zero operator on V. For every u, v in V

we have ⟨u, O*(v)⟩ = ⟨O(u), v⟩ = ⟨0, v⟩ = 0 = ⟨u, O(v)⟩

Thus ⟨u, O*(v)⟩ = ⟨u, O(v)⟩ for all u, v ∈ V

⟹ O* = O (where O is the zero operator) by uniqueness of the adjoint.

(b) To show that I* = I:

For every u, v ∈ V we have

⟨u, I*(v)⟩ = ⟨I(u), v⟩ = ⟨u, v⟩ = ⟨u, I(v)⟩

Thus for all u, v ∈ V, ⟨u, I*(v)⟩ = ⟨u, I(v)⟩.

So I* = I by uniqueness of the adjoint.

vi) To show that (T*)⁻¹ = (T⁻¹)*:

T is an invertible operator on V, so we have

TT⁻¹ = T⁻¹T = I

⟹ (TT⁻¹)* = (T⁻¹T)* = I*

⟹ (T⁻¹)*T* = T*(T⁻¹)* = I, since I* = I.

This shows that T* is invertible with (T*)⁻¹ = (T⁻¹)*,

i.e. the inverse of the adjoint of T is the adjoint of the inverse of T.

vii) For u, v ∈ V, ⟨u, T(v)⟩ = ⟨T*(u), v⟩ = ⟨u, T**(v)⟩

⟹ T** = T by uniqueness of the adjoint.

Note: If T is a linear operator on an inner product space V and U1 = T + T*, U2 = TT*,

then U1* = (T + T*)* = T* + (T*)* = T* + T = T + T* = U1

and U2* = (TT*)* = (T*)*T* = TT* = U2.
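A quick numerical check of this note (a sketch; the matrix below stands in for the matrix of T in some orthonormal basis):

```python
import numpy as np

T = np.array([[1 + 1j, 2], [0, 3 - 2j]])  # example operator matrix
T_star = T.conj().T

U1 = T + T_star   # should be self-adjoint
U2 = T @ T_star   # should be self-adjoint

is_self_adjoint = lambda M: np.allclose(M, M.conj().T)
assert is_self_adjoint(U1)
assert is_self_adjoint(U2)
```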
15.11 Worked Out Examples:

W.E.8: V is the inner product space R² and T is the linear operator on V given by T(a, b) = (2a + b, a − 3b). Evaluate T* at u = (3, 5).

Solution: Let (a, b), (a1, b1) ∈ R².

Then by definition of T*,

⟨(a, b), T*(a1, b1)⟩ = ⟨T(a, b), (a1, b1)⟩

= ⟨(2a + b, a − 3b), (a1, b1)⟩ by definition of T

= (2a + b)a1 + (a − 3b)b1

= (2a1 + b1)a + (a1 − 3b1)b

So ⟨(a, b), T*(a1, b1)⟩ = ⟨(a, b), (2a1 + b1, a1 − 3b1)⟩.

As (a, b), (a1, b1) are arbitrary elements of R², we have T*(a1, b1) = (2a1 + b1, a1 − 3b1),

or T*(a, b) = (2a + b, a − 3b).

So T*(3, 5) = (2·3 + 5, 3 − 3·5),

i.e. T*(3, 5) = (11, −12).

W.E.9: Let V be the vector space V2(C) with the standard inner product. Let T be the linear operator defined by T(1, 0) = (1, 2), T(0, 1) = (i, −1). If u = (a, b), find T*(u).

Solution: Let B = {(1, 0), (0, 1)}. Then B is the standard basis for V, and it is an orthonormal basis for V.

Let us find [T]B, i.e. the matrix of T in the ordered basis B.

We have T(1, 0) = (1, 2) = 1(1, 0) + 2(0, 1)

T(0, 1) = (i, −1) = i(1, 0) − 1(0, 1)

[T]B = [[1, i], [2, −1]]

The matrix of T* in the ordered basis B is the conjugate transpose of the matrix [T]B:

[T*]B = [[1, 2], [−i, −1]]

Now (a, b) = a(1, 0) + b(0, 1).

The coordinate matrix of T*(a, b) in the basis B is

[[1, 2], [−i, −1]] [[a], [b]] = [[a + 2b], [−ia − b]]

T*(a, b) = (a + 2b)(1, 0) + (−ia − b)(0, 1)

= (a + 2b, −ia − b)

W.E.10: The inner product space V is C² and T is the linear operator on V defined by

T(z1, z2) = (2z1 + iz2, (1 − i)z1). Find T* at u = (3 − i, 1 + 2i).

Solution: Let B = {(1, 0), (0, 1)}, the standard ordered basis for V; it is an orthonormal basis. Let us find [T]B.

T(z1, z2) = (2z1 + iz2, (1 − i)z1)

T(1, 0) = (2, 1 − i) = 2(1, 0) + (1 − i)(0, 1)

T(0, 1) = (i, 0) = i(1, 0) + 0(0, 1)

[T]B = [[2, i], [1 − i, 0]]

[T*]B = [[2, 1 + i], [−i, 0]]

Now (z1, z2) = z1(1, 0) + z2(0, 1).

The coordinate matrix of T*(z1, z2) in the basis B is [[2, 1 + i], [−i, 0]] [[z1], [z2]] = [[2z1 + (1 + i)z2], [−iz1]]

T*(z1, z2) = (2z1 + (1 + i)z2)(1, 0) − iz1(0, 1)

= (2z1 + (1 + i)z2, −iz1)

T*(3 − i, 1 + 2i) = (2(3 − i) + (1 + i)(1 + 2i), −i(3 − i))

= (6 − 2i + (−1 + 3i), −3i − 1)

= (5 + i, −(1 + 3i))

W.E.11: Let T be the linear operator on V2(C) defined by T(1, 0) = (1 + i, 2),

T(0, 1) = (i, i), using the standard

inner product. Find the matrix of T* in the standard ordered basis. Does T commute with T*?

Solution: Let B = {(1, 0), (0, 1)}, the standard ordered basis; it is an orthonormal basis.

We have T(1, 0) = (1 + i, 2) = (1 + i)(1, 0) + 2(0, 1)

T(0, 1) = (i, i) = i(1, 0) + i(0, 1)

So [T]B = [[1 + i, i], [2, i]]

So [T*]B = the conjugate transpose of the matrix [T]B

= [[1 − i, 2], [−i, −i]]

We have [T]B [T*]B = [[1 + i, i], [2, i]] [[1 − i, 2], [−i, −i]]

So [T]B [T*]B = [[3, 3 + 2i], [3 − 2i, 5]] ............. (1)

Also [T*]B [T]B = [[1 − i, 2], [−i, −i]] [[1 + i, i], [2, i]]

= [[6, 1 + 3i], [1 − 3i, 2]] .......... (2)

Now from (1) and (2), [T]B [T*]B ≠ [T*]B [T]B

⟹ [TT*]B ≠ [T*T]B

So TT* ≠ T*T.

So T does not commute with T*.
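The same conclusion can be reached numerically (a sketch with NumPy, using the matrix computed above):

```python
import numpy as np

A = np.array([[1 + 1j, 1j], [2, 1j]])  # [T]_B from the example
A_star = A.conj().T

AAs = A @ A_star
AsA = A_star @ A

assert np.allclose(AAs, np.array([[3, 3 + 2j], [3 - 2j, 5]]))
assert np.allclose(AsA, np.array([[6, 1 + 3j], [1 - 3j, 2]]))
assert not np.allclose(AAs, AsA)  # TT* != T*T, so T does not commute with T*
```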

15.12 Exercise:
1. For each of the following inner product spaces V over F and linear transformations g : V → F, find a vector v such that g(u) = ⟨u, v⟩ for all u ∈ V.

i) V = R³; g(a1, a2, a3) = a1 + 2a2 + 4a3

Ans: v = (1, 2, 4)

ii) V = P2(R) with ⟨f, h⟩ = ∫₀¹ f(t)h(t) dt,

g(f) = f(0) + f′(1)

Ans: v = 210u² − 204u + 33

2. V = P1(R) is an inner product space and T is a linear operator on V. The inner product on V is given

by ⟨f, g⟩ = ∫₋₁¹ f(t)g(t) dt; T(f) = f′ + 3f;

f(t) = 4 − 2t. Evaluate T*(f).

Ans: T*(f)(t) = 12 + 6t

3. A linear operator T on R²(R) is given by T(x, y) = (x + 2y, x − y) for all x, y ∈ R. If the inner product on R² is the standard one, find the adjoint T*.

Ans: T*(x, y) = (x + y, 2x − y)

4. Let T be the linear operator on V2(C) defined by T(1, 0) = (1, 2), T(0, 1) = (i, −1). Using the standard inner product, find T*(u), where u = (a, b).

Ans: (a + 2b, −ia − b)

15.13 Normal and Self-Adjoint Operators:

15.13.1 Definition: Self-Adjoint Operator: A linear operator T on an inner product space V(F) is called a self-adjoint operator if and only if T = T*.

Note: 1) A self-adjoint operator is called symmetric when the space is Euclidean, i.e. F = R.

2. A self-adjoint operator is called Hermitian when the vector space is unitary, i.e. F = C.

3. In an inner product space, if T is a self-adjoint operator, then

⟨T(u), v⟩ = ⟨u, T*(v)⟩ = ⟨u, T(v)⟩ ∀ u, v ∈ V.

4. If 0̂ is the zero operator on V and I is the identity operator on V, then 0̂* = 0̂ and I* = I. So 0̂ and I are self-adjoint operators.

5. An n × n real or complex matrix A is self-adjoint if A = A*.

6. A self-adjoint operator is also called a Hermitian operator; a self-adjoint matrix is also called a Hermitian matrix.
15.13.2 Theorem:

Every linear operator T on a finite dimensional complex inner product space V can be uniquely expressed as T = T1 + iT2, where T1 and T2 are self-adjoint linear operators on V.

Proof: Let T1 = (1/2)(T + T*) and T2 = (1/2i)(T − T*) ......... (1)

Then T1* = [(1/2)(T + T*)]* = (1/2)[T* + (T*)*]

= (1/2)(T* + T) = (1/2)(T + T*) = T1,

i.e. T1* = T1, so T1 is self-adjoint.

Also T2* = [(1/2i)(T − T*)]* = conj(1/2i)(T − T*)*

= (−1/2i)(T* − T) = (1/2i)(T − T*) = T2 ........... (2)

i.e. T2* = T2, so T2 is self-adjoint.

From (1) we get T + T* = 2T1 and

T − T* = 2iT2.

Adding, 2T = 2(T1 + iT2), so T = T1 + iT2 ............ (3)

Subtracting, 2T* = 2(T1 − iT2), so T* = T1 − iT2,

where T1 and T2 are self-adjoint.

Uniqueness of the resolution of T:

Let T = U1 + iU2, where U1 and U2 are self-adjoint operators. Then

T* = (U1 + iU2)* = U1* + (iU2)*

= U1* + conj(i)U2*

= U1* − iU2*

= U1 − iU2.

T + T* = (U1 + iU2) + (U1 − iU2) = 2U1

⟹ U1 = (1/2)(T + T*) = T1.

Also T − T* = (U1 + iU2) − (U1 − iU2) = 2iU2

⟹ U2 = (1/2i)(T − T*) = T2. Hence the expression T = T1 + iT2 is unique.
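In matrix form this is the familiar Cartesian (Hermitian/skew-Hermitian) decomposition; a sketch with an example matrix:

```python
import numpy as np

T = np.array([[1 + 2j, 3], [1j, 4 - 1j]])  # example operator matrix
T_star = T.conj().T

T1 = (T + T_star) / 2      # self-adjoint part
T2 = (T - T_star) / (2j)   # also self-adjoint

is_hermitian = lambda M: np.allclose(M, M.conj().T)
assert is_hermitian(T1) and is_hermitian(T2)
assert np.allclose(T, T1 + 1j * T2)  # T = T1 + i*T2
```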
15.13.3 Theorem: Prove that the product of two self-adjoint operators on an inner product space is self-adjoint if and only if they commute.

Proof: Let T1 and T2 be two self-adjoint operators on an inner product space V.

Case i) Suppose the product T1T2 is self-adjoint.

Then (T1T2)* = T1T2 ⟹ T2*T1* = T1T2

⟹ T2T1 = T1T2, since T1 and T2 are self-adjoint operators (T1* = T1, T2* = T2).

Thus when T1T2 is self-adjoint, T1T2 = T2T1, i.e. they commute.

Case ii) Converse: Let T1 and T2 commute, i.e. T1T2 = T2T1.

Then (T1T2)* = T2*T1* = T2T1 (since T1* = T1 and T2* = T2)

= T1T2, since T1T2 = T2T1.

Thus (T1T2)* = T1T2, so T1T2 is self-adjoint.

Hence the theorem.
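Numerically (a sketch): diagonal Hermitian matrices commute, so their product is Hermitian, while a generic non-commuting Hermitian pair fails:

```python
import numpy as np

is_hermitian = lambda M: np.allclose(M, M.conj().T)

# Commuting pair: two real diagonal matrices (both self-adjoint).
D1 = np.diag([1.0, 2.0, 3.0])
D2 = np.diag([4.0, 5.0, 6.0])
assert np.allclose(D1 @ D2, D2 @ D1)
assert is_hermitian(D1 @ D2)

# Non-commuting Hermitian pair: the product is not Hermitian.
H1 = np.array([[0, 1], [1, 0]], dtype=complex)
H2 = np.array([[0, -1j], [1j, 0]])
assert is_hermitian(H1) and is_hermitian(H2)
assert not np.allclose(H1 @ H2, H2 @ H1)
assert not is_hermitian(H1 @ H2)
```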



15.13.4 Theorem: If T1 and T2 are self-adjoint linear operators on an inner product space, then show that T1 + T2 is self-adjoint.

Solution: T1 is a self-adjoint operator, so T1* = T1,

and T2 is a self-adjoint operator, so T2* = T2.

Now (T1 + T2)* = T1* + T2* = T1 + T2.

So (T1 + T2)* = T1 + T2; hence T1 + T2 is self-adjoint.

15.13.5 Theorem: If T is a self-adjoint linear transformation on an inner product space, then

T = 0̂ ⟺ ⟨T(u), u⟩ = 0 for all u ∈ V.

Proof: Let T = 0̂ (the zero operator).

Then ⟨T(u), u⟩ = ⟨0̂(u), u⟩ = ⟨0, u⟩ = 0 for all u ∈ V.

Conversely, let ⟨T(u), u⟩ = 0 ∀ u ∈ V; then we show that T = 0̂.

Consider ⟨T(u + v), u + v⟩ = 0 by the given condition

⟹ ⟨T(u) + T(v), u + v⟩ = 0

⟹ ⟨T(u), u⟩ + ⟨T(u), v⟩ + ⟨T(v), u⟩ + ⟨T(v), v⟩ = 0

⟹ ⟨T(u), v⟩ + ⟨T(v), u⟩ = 0 ...... (1) by the given condition.

Case i) Let V be a complex inner product space. Then in (1), replacing v by iv, we get

⟨T(u), iv⟩ + ⟨T(iv), u⟩ = 0

⟹ −i⟨T(u), v⟩ + i⟨T(v), u⟩ = 0, since the conjugate of i is −i

⟹ ⟨T(u), v⟩ − ⟨T(v), u⟩ = 0 ....... (2), dividing by −i.

Adding (1) and (2) we get

2⟨T(u), v⟩ = 0 for all u, v ∈ V.

So ⟨T(u), T(u)⟩ = 0, putting v = T(u)

⟹ T(u) = 0 for all u ∈ V

⟹ T = 0̂.

Case ii) If V is a real inner product space, we have ⟨v, T(u)⟩ = ⟨T(u), v⟩ since

⟨u, v⟩ = ⟨v, u⟩, and as T is self-adjoint, ⟨T(v), u⟩ = ⟨v, T(u)⟩ = ⟨T(u), v⟩.

From (1), ⟨T(u), v⟩ + ⟨T(v), u⟩ = 0

⟹ 2⟨T(u), v⟩ = 0, using the above, ∀ u, v ∈ V

⟹ ⟨T(u), T(u)⟩ = 0, putting v = T(u)

⟹ T(u) = 0 ∀ u ∈ V. So T = 0̂.

Hence the theorem.
15.13.6 If T is a linear transformation on a complex inner product space, then T is self adjoint ⇔ ⟨T(u), u⟩ is real ∀ u ∈ V.

Proof: Case i) Let T be self adjoint. So T* = T.

Then ∀ u ∈ V, ⟨T(u), u⟩ = ⟨u, T*(u)⟩ = ⟨u, T(u)⟩, which is the complex conjugate of ⟨T(u), u⟩.

⇒ ⟨T(u), u⟩ equals its own conjugate ⇒ ⟨T(u), u⟩ is real.

Case ii) Converse:

Let ⟨T(u), u⟩ be real for each u ∈ V. Then to prove that T is a self adjoint transformation, i.e. to show ⟨T(u), v⟩ = ⟨u, T(v)⟩ ∀ u, v ∈ V.

Now ⟨T(u + v), u + v⟩ = ⟨T(u) + T(v), u + v⟩

= ⟨T(u), u⟩ + ⟨T(u), v⟩ + ⟨T(v), u⟩ + ⟨T(v), v⟩

Since by hypothesis ⟨T(u + v), u + v⟩, ⟨T(u), u⟩ and ⟨T(v), v⟩ are all real, ⟨T(u), v⟩ + ⟨T(v), u⟩ is also real. Equating it to its complex conjugate (the conjugate of ⟨x, y⟩ is ⟨y, x⟩),

we get ⟨T(u), v⟩ + ⟨T(v), u⟩ = ⟨v, T(u)⟩ + ⟨u, T(v)⟩.

Thus for all u, v ∈ V,
Rings and Linear Algebra 15.27 Linear Operators

⟨T(u), v⟩ + ⟨T(v), u⟩ = ⟨v, T(u)⟩ + ⟨u, T(v)⟩ ............ (1)

Replacing v by iv we get

⟨T(u), iv⟩ + ⟨T(iv), u⟩ = ⟨iv, T(u)⟩ + ⟨u, T(iv)⟩

⇒ ⟨T(u), iv⟩ + ⟨iT(v), u⟩ = ⟨iv, T(u)⟩ + ⟨u, iT(v)⟩

⇒ −i⟨T(u), v⟩ + i⟨T(v), u⟩ = i⟨v, T(u)⟩ − i⟨u, T(v)⟩ ............... (2)

Multiplying (2) by i and adding to (1) we get

2⟨T(u), v⟩ = 2⟨u, T(v)⟩

⇒ ⟨T(u), v⟩ = ⟨u, T(v)⟩ ∀ u, v ∈ V

⇒ T is a self adjoint operator.

Hence the theorem.
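A quick numerical illustration of 15.13.6 (the matrices and vector below are chosen here for illustration; they are not from the text): for a Hermitian matrix, ⟨Au, u⟩ is real for every u, while for a non-Hermitian matrix it generally is not.

```python
# Illustrative check: <Au, u> is real when A is self adjoint (Hermitian),
# and typically not real otherwise.

def matvec(A, u):
    return [A[0][0]*u[0] + A[0][1]*u[1], A[1][0]*u[0] + A[1][1]*u[1]]

def inner(x, y):             # complex inner product <x, y> = sum x_k * conj(y_k)
    return sum(a * b.conjugate() for a, b in zip(x, y))

H = [[2, 1 - 1j], [1 + 1j, 3]]      # Hermitian: equals its conjugate transpose
N = [[2, 1j], [3, 0]]               # not Hermitian

u = [1 + 2j, -1j]
val_H = inner(matvec(H, u), u)      # comes out real (imaginary part 0)
val_N = inner(matvec(N, u), u)      # has a nonzero imaginary part
print(val_H.imag, val_N.imag)
```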
15.13.7 Theorem:

Let T be a self adjoint linear operator on an inner product space V with T ≠ 0̂, and let a ≠ 0 be a scalar. Then aT is self adjoint if and only if a is real.

Proof: Let a be real. Then we have (aT)* = āT* = āT = aT, since T is self adjoint and a is real (ā = a).

Thus (aT)* = aT, so it follows that aT is self adjoint.

Converse: Let aT be self adjoint. So (aT)* = aT

⇒ āT* = aT

⇒ āT = aT since T is self adjoint

⇒ (ā − a)T = 0̂

As T ≠ 0̂, so ā − a = 0 ⇒ ā = a.

Hence a is real.

Hence the theorem.
15.13.8 Theorem:
Let T be a linear operator on a finite dimensional inner product space V. Then T is self
adjoint if and only if its matrix in every orthonormal basis is a self adjoint matrix.

Proof: Let B be any orthonormal basis of V. Then [T*]_B = ([T]_B)* ........... (1)

If T is self adjoint, then T* = T.

So from (1) we get ([T]_B)* = [T*]_B = [T]_B.

So [T]_B is a self adjoint matrix.

Conversely, let [T]_B be a self adjoint matrix. Then [T*]_B = ([T]_B)* = [T]_B from (1).

So T* = T. Hence T is self adjoint.

15.13.9 Theorem:
If T is a self adjoint linear operator on a finite dimensional inner product space, then prove that det(T) is real.

Proof: Let B be any orthonormal basis for V. Then we have [T*]_B = ([T]_B)*, and T is self adjoint, so T* = T, i.e. ([T]_B)* = [T]_B.

Suppose [T]_B = A; then ([T]_B)* = A*.

Then A = A* ⇒ det(A) = det(A*) = the conjugate of det(A).

So det A is real. Hence det T is real, since det T = det [T]_B = det A.

Hence the theorem.
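A short numerical sketch of 15.13.9 (the matrix below is chosen here for illustration; it is not from the text): the determinant of a self adjoint (Hermitian) 2×2 matrix is real.

```python
# Illustrative check: det(A) is real when A is Hermitian.

A = [[2, 1 - 3j],
     [1 + 3j, 6]]       # diagonal real, off-diagonal entries are conjugates

# Verify A equals its conjugate transpose, entry by entry.
is_hermitian = all(A[i][j] == A[j][i].conjugate()
                   for i in range(2) for j in range(2))

# det A = 2*6 - (1-3j)(1+3j) = 12 - 10 = 2, a real number.
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(is_hermitian, det_A)
```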


15.13.10 Theorem:
Let T be a self adjoint linear operator on a finite dimensional inner product space.

Prove that the range of T is the orthogonal complement of the null space of T, i.e. R(T) = (N(T))⊥.

Proof: Let u be any element in R(T). Then there exists a vector v in V such that u = T(v).

Now if w ∈ N(T) then T(w) = O.

We have ⟨u, w⟩ = ⟨T(v), w⟩ = ⟨v, T*(w)⟩

= ⟨v, T(w)⟩ since T* = T
 v, O  0

it follows that  u , w  0 for all w  N (T )

Thus u   N (T ) 

Thus u  R (T )  u   N (T )  and hence


R(T )  N (T )  ......... (1)

Again dim  R(T )  dim  N (T )   dim V

V  N (T )   N (T ) 

and

 dim  N (T )   dim  N (T )   dim V


 dim R (T )  dim  N (T ) 

Now R (T )   N (T )  and dim  R (T )    N (T ) 


 

So R (T )   N (T ) 

15.13.11 Let T be a linear operator on a finite dimensional inner product space V. If T has an eigen vector, then show that T* has an eigen vector.

Proof: Let u be an eigen vector of T with eigen value λ. Then for any v ∈ V, we have

0 = ⟨O, v⟩ = ⟨(T − λI)(u), v⟩

= ⟨u, (T − λI)*(v)⟩

= ⟨u, (T* − λ̄I)(v)⟩

Hence u is orthogonal to the range of T* − λ̄I.

So T* − λ̄I is not onto, and hence is not one to one. Thus T* − λ̄I has a non zero null space, and any non zero vector in this null space is an eigen vector of T* with corresponding eigen value λ̄.

15.14 Some Basic Concepts which are already discussed:


15.14.1 T-Invariance:
W is a subspace of a vector space V and T : V → V is linear. W is said to be T-invariant

if T ( w) W for every w W that is T (W )  W . If W is T - invariant. We define the restriction of T


on W to be the function TW :W W defined by TW (w)  T (w) for all w  W .

15.14.2 Definition: Polynomial split:

A polynomial f(t) in P(F) is said to split over F if there exist scalars c, a₁, a₂, ..., aₙ, not necessarily distinct, in F such that f(t) = c(t − a₁)(t − a₂)...(t − aₙ).

Thus a polynomial is said to split if it factors into linear factors.


15.14.3 Theorem:
Let V be a finite dimensional inner product space and let T be a linear operator on V. Suppose W is a subspace of V which is invariant under T. Then show that the orthogonal complement W⊥ is invariant under T*.

Solution: We are given that W is invariant under T. We have to prove that W⊥ is invariant under T*. Let v be any vector in W⊥. Then to prove that T*(v) is in W⊥, i.e. that T*(v) is orthogonal to every vector in W. Let u be any vector in W. Then ⟨u, T*(v)⟩ = ⟨T(u), v⟩ = 0,

since u ∈ W ⇒ T(u) ∈ W, as W is T-invariant,

and v is orthogonal to every vector in W.

So T*(v) is orthogonal to every vector u in W

⇒ T*(v) ∈ W⊥.

So W⊥ is invariant under T*.

15.15.1 Schur's Theorem: Let T be a linear operator on a finite dimensional inner product space V. Suppose that the characteristic polynomial of T splits. Then show that there exists an orthonormal basis B for V such that the matrix [T]_B is upper triangular.

Proof: The proof is by mathematical induction on the dimension n of V. The result is immediate if n = 1. So suppose that the result is true for linear operators on (n − 1) dimensional inner product spaces whose characteristic polynomial splits. We know that if T is a linear operator on a finite dimensional inner product space V, and if T has an eigen vector, then T* will have an eigen vector. So we can assume that T* has a unit eigen vector w. Suppose that T*(w) = λw and that W = span{w}. We will now show that W⊥ is T-invariant.

If v ∈ W⊥ and u = cw ∈ W, then

⟨T(v), u⟩ = ⟨T(v), cw⟩ = ⟨v, T*(cw)⟩

= ⟨v, cT*(w)⟩ = ⟨v, cλw⟩

= c̄λ̄⟨v, w⟩ = c̄λ̄(0) = 0

So T(v) ∈ W⊥, and W⊥ is T-invariant.

We know that if T is a linear operator on a finite dimensional vector space V and W is a T-invariant subspace of V, then the characteristic polynomial of T_W divides the characteristic polynomial of T.

By this theorem, the characteristic polynomial of the restriction of T to W⊥ divides the characteristic polynomial of T and hence splits.

We know that if W is any subspace of a finite dimensional inner product space V, then

dim(V) = dim(W) + dim(W⊥)

So dim(W⊥) = n − 1, so we may apply the induction hypothesis to the restriction of T to W⊥ and obtain an orthonormal basis B′ of W⊥ such that the matrix of this restriction relative to B′ is upper triangular.

Clearly B = B′ ∪ {w} is an orthonormal basis for V such that [T]_B is upper triangular.

Hence the theorem.


15.15.2 Theorem: Let T be a self adjoint operator on a finite dimensional real inner product space V. Then show that the characteristic polynomial of T splits.

Proof: Let dim(V) = n. Let B be an orthonormal basis for V and A = [T]_B. Then A is self adjoint.

Let T_A be the linear operator on Cⁿ defined by T_A(u) = Au for all u ∈ Cⁿ. T_A is self adjoint because [T_A]_D = A, where D is the standard ordered orthonormal basis for Cⁿ. As T_A is a self adjoint operator, the eigen values of T_A are real.

By the fundamental theorem of algebra, the characteristic polynomial of T_A splits into factors of the form t − λ. Since each λ is real, the characteristic polynomial splits over R. But T_A has the same characteristic polynomial as A, which has the same characteristic polynomial as T. So the characteristic polynomial of T splits.

Hence the theorem.
Note: Fundamental theorem of algebra:

Suppose P(z) = aₙzⁿ + aₙ₋₁zⁿ⁻¹ + ... + a₁z + a₀ is a polynomial in P(C) of degree n ≥ 1. Then P(z) has a zero, and there exist complex numbers c₁, c₂, ..., cₙ, not necessarily distinct, such

that P ( z )  an ( z  c1 )( z  c2 )...( z  cn ) .

15.15.3 Theorem: Let V be a finite dimensional inner product space and let T be a self adjoint linear operator on V. Then there is an orthonormal basis B for V, each vector of which is a characteristic vector for T, and consequently the matrix of T with respect to B is a diagonal matrix.

Proof: As T is a self adjoint linear operator on a finite dimensional inner product space V, T must have a characteristic value and hence a characteristic vector.

Let u ≠ O be a characteristic vector for T. Let u₁ = u/‖u‖. Then u₁ is a characteristic vector for T and ‖u₁‖ = 1. If dim V = 1, then {u₁} is an orthonormal basis for V, and u₁ is a characteristic vector for T. Thus the theorem is true if dim V = 1. Now we proceed by induction on the dimension of V. Suppose the theorem is true for inner product spaces of dimension less than the dimension of V. Then we shall prove that it is true for V, and the proof will be complete by induction.

Let W be the one dimensional subspace of V spanned by the characteristic vector u₁ for T. Let u₁ be the characteristic vector corresponding to the characteristic value c. Then T(u₁) = cu₁. If v is any vector in W, then v = ku₁, where k is a scalar. We have T(v) = T(ku₁) = kT(u₁) = k(cu₁) = (kc)u₁. So T(v) ∈ W; W is invariant under T. So W⊥ is invariant under T*. But T self adjoint means T = T*. So W⊥ is invariant under T. If dim V = n, then

dim W⊥ = dim V − dim W = n − 1

So W⊥ with the inner product from V is an inner product space of dimension one less than the dimension of V.

Suppose S is the linear operator induced by T on W⊥, i.e. S is the restriction of T to W⊥. Then S(w) = T(w) ∀ w ∈ W⊥. The restriction of T* to W⊥ is the adjoint S* of S. Thus S is a self adjoint linear operator on W⊥, because if w is any vector in W⊥ then

S*(w) = T*(w) = T(w) = S(w)

⇒ S* = S.

Thus S is a self adjoint linear operator on W⊥, whose dimension is less than the dimension of V. So by our induction hypothesis W⊥ has an orthonormal basis {u₂, u₃, ..., uₙ} consisting of characteristic vectors for S. Suppose uᵢ is the characteristic vector for S corresponding to the characteristic value cᵢ. Then S(uᵢ) = cᵢuᵢ

⇒ T(uᵢ) = cᵢuᵢ

So uᵢ is also a characteristic vector of T. Thus u₂, u₃, ..., uₙ are also characteristic vectors for T. Since V = W ⊕ W⊥, B = {u₁, u₂, ..., uₙ} is an orthonormal basis for V, each vector of which is a characteristic vector of T. The matrix of T relative to B will be a diagonal matrix.

15.16 Normal Operator:

Definition: Let T be a linear operator on an inner product space V. Then T is said to be normal if it commutes with its adjoint, i.e. if TT* = T*T.

Note i) If V is finite dimensional then T* will definitely exist. If V is not finite dimensional, then the above definition makes sense if and only if T possesses an adjoint.

ii) Every self adjoint operator is normal: if T is a self adjoint operator, then T* = T, so obviously T*T = TT*. So T is normal.

W.E.12 Example: If T : R² → R² is rotation by θ, where 0 < θ < π, and if, for the standard basis B,

[T]_B = | cos θ   −sin θ |
        | sin θ    cos θ |

show that T is normal.

Solution: [T]_B = | cos θ   −sin θ |      [T*]_B = |  cos θ   sin θ |
                  | sin θ    cos θ |               | −sin θ   cos θ |

Now [T]_B [T*]_B = | cos θ   −sin θ | |  cos θ   sin θ | = | 1  0 |
                   | sin θ    cos θ | | −sin θ   cos θ |   | 0  1 |

and [T*]_B [T]_B = |  cos θ   sin θ | | cos θ   −sin θ | = | 1  0 |
                   | −sin θ   cos θ | | sin θ    cos θ |   | 0  1 |

As [T]_B [T*]_B = [T*]_B [T]_B, i.e. [TT*]_B = [T*T]_B, we get TT* = T*T.

So T is normal.
Normal Matrix: Definition:
A real or complex n × n matrix A is normal if and only if it commutes with its conjugate transpose, i.e. AA* = A*A.

Example: A = | 1   1    |      then A* = | 1   −i   |
             | i   3+2i |                | 1   3−2i |

and AA* = | 1   1    | | 1   −i   | = | 2      3−3i |
          | i   3+2i | | 1   3−2i |   | 3+3i   14   |

A*A = | 1   −i   | | 1   1    | = | 2      3−3i |
      | 1   3−2i | | i   3+2i |   | 3+3i   14   |

Thus AA* = A*A. So A is normal.
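The matrix products above can be verified numerically; the following sketch recomputes AA* and A*A for the same matrix and confirms they agree:

```python
# Numerical verification of the normal-matrix example A = [[1, 1], [i, 3+2i]]:
# A commutes with its conjugate transpose, so A is normal.

def mul(A, B):   # product of two 2x2 complex matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def ct(A):       # conjugate transpose A*
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

A = [[1, 1],
     [1j, 3 + 2j]]

AAstar = mul(A, ct(A))
AstarA = mul(ct(A), A)
print(AAstar == AstarA)   # True
```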

15.17 Theorems on Normal Operators:


15.17.1 An operator T on an inner product space is normal

⇔ ‖T*(u)‖ = ‖T(u)‖ ∀ u ∈ V.

Proof: For all u ∈ V, we have

‖T*(u)‖ = ‖T(u)‖ ⇔ ‖T*(u)‖² = ‖T(u)‖²

⇔ ⟨T*(u), T*(u)⟩ = ⟨T(u), T(u)⟩

⇔ ⟨TT*(u), u⟩ = ⟨T*T(u), u⟩

⇔ ⟨(TT* − T*T)u, u⟩ = 0

⇔ TT* − T*T = 0̂ (zero operator), since TT* − T*T is self adjoint (by 15.13.5)

⇔ TT* = T*T

⇔ T is normal.
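This characterisation can be checked numerically; the sketch below reuses the normal matrix A = [[1, 1], [i, 3+2i]] from the example above (the sample vector is chosen here for illustration) and confirms ‖A*u‖ = ‖Au‖:

```python
# Illustrative check of 15.17.1: for a normal matrix A, ||A* u|| = ||A u||.

import math

A = [[1, 1],
     [1j, 3 + 2j]]              # normal, as shown in the example above

def matvec(M, u):
    return [M[0][0]*u[0] + M[0][1]*u[1], M[1][0]*u[0] + M[1][1]*u[1]]

def ct(M):                      # conjugate transpose M*
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def norm(x):
    return math.sqrt(sum(abs(c)**2 for c in x))

u = [2 - 1j, 0.5j]
n1 = norm(matvec(A, u))         # ||A u||
n2 = norm(matvec(ct(A), u))     # ||A* u||
print(abs(n1 - n2) < 1e-12)     # True
```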
15.17.2 Theorem:

If T is a normal operator on an inner product space V(F), then T − CI is normal for every C ∈ F.

Proof: (T − CI)* = T* − (CI)*

= T* − C̄I* = T* − C̄I ........ (1) since I* = I

Again (T − CI)*(T − CI)

 (T *  CI )(T  CI ) using (1)

 T * T  CT * CT  CCI ( IT  TI  T )

 T * T  CT  CT *  CCI (TT *  T * T )

 (T  CI )(T  CI )*

As (T  CI )*(T  CI )  (T  CI )(T  CI )* it follows that T  CI is normal.

15.17.3 Theorem:
Let T be a normal operator on an inner product space V. Then a necessary and sufficient condition that u be a characteristic vector of T is that it be a characteristic vector of T*.

Proof: T is a normal operator on an inner product space V, so TT* = T*T

⇒ ⟨(TT*)(u), u⟩ = ⟨(T*T)(u), u⟩ for any u ∈ V

⇒ ⟨T(T*(u)), u⟩ = ⟨T*(T(u)), u⟩

⇒ ⟨T*(u), T*(u)⟩ = ⟨T(u), (T*)*(u)⟩

⇒ ‖T*(u)‖² = ⟨T(u), T(u)⟩ since T** = T

⇒ ‖T*(u)‖² = ‖T(u)‖²

⇒ ‖T*(u)‖ = ‖T(u)‖, i.e. ‖T(u)‖ = ‖T*(u)‖ ............. (1)

We know that if T is normal and C is a scalar, then T − CI is normal.

Hence from (1), for all u ∈ V,

‖(T − CI)(u)‖ = ‖(T − CI)*(u)‖

⇒ ‖(T − CI)u‖ = ‖(T* − C̄I)u‖

In particular (T − CI)u = O if and only if (T* − C̄I)(u) = O, i.e. T(u) = Cu if and only if T*(u) = C̄u. Hence u is a characteristic vector for T with characteristic value C if and only if u is a characteristic vector for T* with characteristic value C̄.

Result: If u is an eigen vector of T, then u is also an eigen vector of T*. In fact if T(u) = Cu, then T*(u) = C̄u.
15.17.4 Theorem:

Let T be a normal operator on an inner product space. If u ∈ V, then T(u) = O ⇔ T*(u) = O.

Proof: T is normal ⇒ TT* = T*T.

Also ‖T(u)‖² = ⟨T(u), T(u)⟩

= ⟨u, T*T(u)⟩

= ⟨u, (T*T)u⟩

= ⟨u, (TT*)u⟩

= ⟨T*(u), T*(u)⟩

= ‖T*(u)‖²

So ‖T(u)‖ = ‖T*(u)‖.

Now T(u) = O ⇔ ‖T(u)‖ = 0 ⇔ ‖T*(u)‖ = 0 ⇔ T*(u) = O.

Thus T(u) = O ⇔ T*(u) = O.

15.17.5 Let V be an inner product space and let T be a normal operator on V. If λ₁, λ₂ are distinct eigen values of T with corresponding eigen vectors u₁ and u₂, then show that u₁ and u₂ are orthogonal.

Proof: Let u₁, u₂ be the characteristic vectors of T corresponding to the characteristic values λ₁, λ₂ (λ₁ ≠ λ₂). Now T(u₁) = λ₁u₁ and T(u₂) = λ₂u₂; then T*(u₂) = λ̄₂u₂.

Again λ₁⟨u₁, u₂⟩ = ⟨λ₁u₁, u₂⟩

= ⟨T(u₁), u₂⟩

= ⟨u₁, T*(u₂)⟩

= ⟨u₁, λ̄₂u₂⟩ = λ₂⟨u₁, u₂⟩

or λ₁⟨u₁, u₂⟩ = λ₂⟨u₁, u₂⟩

 (1  2 )  u1 , u2  0

 u1 , u2  0 since 1  2

 u1 , u2 are orthogonal.

Hence the theorem.
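A numerical illustration: the symmetric (hence normal) matrix [[2, 2], [2, 5]], which also appears in worked example W.E.13 below, has distinct eigen values 6 and 1, and the corresponding eigen vectors (1, 2) and (−2, 1) are orthogonal:

```python
# Eigen vectors of a normal matrix for distinct eigen values are orthogonal.

A = [[2, 2], [2, 5]]

def matvec(A, u):
    return [A[0][0]*u[0] + A[0][1]*u[1], A[1][0]*u[0] + A[1][1]*u[1]]

u1, lam1 = [1, 2], 6     # A u1 = 6 u1
u2, lam2 = [-2, 1], 1    # A u2 = 1 u2

eig1_ok = matvec(A, u1) == [lam1 * x for x in u1]
eig2_ok = matvec(A, u2) == [lam2 * x for x in u2]
dot = u1[0]*u2[0] + u1[1]*u2[1]        # inner product of the eigen vectors
print(eig1_ok, eig2_ok, dot)           # True True 0
```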


15.17.6 Let V be a finite dimensional complex inner product space and let T be a normal operator on V. Then V has an orthonormal basis B, each vector of which is a characteristic vector for T, and consequently the matrix of T with respect to B is a diagonal matrix.

Proof: As T is a linear operator on a finite dimensional complex inner product space V, T must have a characteristic value and so T must have a characteristic vector.

Let u ≠ O be a characteristic vector of T.

Let u₁ = u/‖u‖. Then u₁ is also a characteristic vector for T, and ‖u₁‖ = 1. If dim V = 1, then {u₁} is an orthonormal basis for V, and u₁ is a characteristic vector for T.

Thus the theorem is true if dim V = 1.

Now we proceed by induction on the dimension of V. We suppose that the theorem is true for inner product spaces of dimension less than dim V. Then we shall prove that it is true for V, and the proof is complete by induction.

Let W be the one dimensional subspace of V spanned by the characteristic vector u₁ for T. Let u₁ be the characteristic vector corresponding to the characteristic value C. Then T(u₁) = Cu₁. If v is any vector in W then v = Ku₁, where K is some scalar.

We have T(v) = T(Ku₁) = KT(u₁)

= K(Cu₁) = (KC)u₁

So T(v) ∈ W. Thus W is invariant under T, and so W⊥ is invariant under T*.

Now T is normal, so if u₁ is a characteristic vector of T, then u₁ is also a characteristic vector of T*.

So by the same argument as above, W is also invariant under T*. So W⊥ is invariant under (T*)*, i.e. W⊥ is invariant under T. If dim V = n, then dim W⊥ = dim V − dim W = n − 1.

So W  with the inner product from V is a complex inner product space of dimension less
than dimenstion of V.

Suppose S is the linear operator induced by T on W  i.e. S is the restriction of T on W  .

Then S (v)  T (v)  v  W 

This restriction of T * to W  will be the adjoint of S * of S. Now S is a normal operator on


W  . For v is any vector in W 

then ( SS *)(v)  S  S *(v)   S T *(v) 

 T T *(v) 

 (TT *)(v)

 (T * T )v  T * T (v)

 T *  S (v ) 

 S *  S (v ) 

 ( S * S )(v )

So ( SS *)  ( S * S ) and thus S is a normal operator on W  ; whose dimension is less than


the dimension of V.

So by our induction hypothesis, W  has an orthonormal basis u1 , u2 ,..., un  consisting of

characteristic vectors of S. Suppose ui is the characteristic vector for S corresponding to the


characteristic value Ci .

Then S (ui )  Cui  T (ui )  Cui

So ui is also a characteristic vector for T..

Thus u2 , u3 ,...un are also characteristic vectors for T. Since V  W  W  , so

B  u1 , u2 ,...un  is also an orthornormal basis for T each vector of which is a characteristic vector
for T. The matrix of T relative to B will be the diagonal matrix. Hence the theorem.
15.17.7 Theorem:
Suppose T is a linear operator on a finite dimensional inner product space V and suppose
that there exists an orthonormal basis B = {u₁, u₂, ..., uₙ} for V such that each vector in B is a characteristic vector for T. Then prove that T is normal.

Proof: If uᵢ ∈ B, then uᵢ is a characteristic vector of T. So let T(uᵢ) = Cᵢuᵢ for i = 1, 2, ..., n.

Then [T]_B is a diagonal matrix with diagonal elements C₁, C₂, ..., Cₙ. Since [T*]_B = ([T]_B)*, [T*]_B is also a diagonal matrix, with diagonal elements C̄₁, C̄₂, ..., C̄ₙ. Now as two diagonal matrices commute, it follows that

[T]_B [T*]_B = [T*]_B [T]_B

⇒ [TT*]_B = [T*T]_B

⇒ TT* = T*T ⇒ T is normal.

Hence the theorem.

Note: The above two theorems can be clubbed together and restated as:

Let T be a linear operator on a finite dimensional complex inner product space V. Then T is normal if and only if there exists an orthonormal basis of V consisting of eigen vectors of T.

15.18 Positive Definite and Positive Semidefinite Transformations:

15.18.1 i) Positive definite operator: Definition:
A linear operator T on a finite dimensional inner product space V is called positive definite, in symbols T > 0, if T is self adjoint and ⟨T(u), u⟩ > 0 for all O ≠ u ∈ V.

ii) Positive semidefinite operator: A linear operator T on an inner product space V is called positive semidefinite (or non negative), in symbols T ≥ 0, if it is self adjoint and ⟨T(u), u⟩ ≥ 0 ∀ u ∈ V.

Note i) An n × n matrix A with entries from R or C is called positive definite if L_A is positive definite.

ii) An n × n matrix A with entries from R or C is called positive semidefinite if L_A is positive semidefinite.
15.18.3 Theorem: Let T be a self adjoint operator on an inner product space V. If T is positive or non negative then every characteristic value of T is positive or non negative respectively.

Proof: Let C be a characteristic value of T.

Then T(u) = Cu for some non zero vector u.

We have ⟨T(u), u⟩ = ⟨Cu, u⟩ = C⟨u, u⟩

= C‖u‖²

⇒ C = ⟨T(u), u⟩ / ‖u‖²

If T is positive then ⟨T(u), u⟩ > 0, so C > 0, i.e. C is positive.

If T is non negative then ⟨T(u), u⟩ ≥ 0,

so C ≥ 0, i.e. C is non negative.

15.18.4 Theorem:
If T is a self adjoint operator on a finite dimensional inner product space V such that the characteristic values of T are non negative, show that T is non negative.

Proof: T is a self adjoint operator on a finite dimensional inner product space V. Let all the characteristic values of T be non negative.

As T is self adjoint, we can find an orthonormal basis B = {v₁, v₂, v₃, ..., vₙ} consisting of characteristic vectors of T.

For each vᵢ we have T(vᵢ) = cᵢvᵢ, where cᵢ ≥ 0. Let w be any vector in V:

w = a₁v₁ + a₂v₂ + ... + aₙvₙ

Then T(w) = T(a₁v₁ + a₂v₂ + ... + aₙvₙ)

= a₁T(v₁) + a₂T(v₂) + ... + aₙT(vₙ)

= a₁c₁v₁ + a₂c₂v₂ + ... + aₙcₙvₙ

We have ⟨T(w), w⟩ = ⟨a₁c₁v₁ + a₂c₂v₂ + ... + aₙcₙvₙ, a₁v₁ + a₂v₂ + ... + aₙvₙ⟩

= a₁ā₁c₁ + a₂ā₂c₂ + ... + aₙāₙcₙ (since {v₁, v₂, ..., vₙ} is orthonormal)

= |a₁|²c₁ + |a₂|²c₂ + ... + |aₙ|²cₙ ≥ 0,

since cᵢ ≥ 0 and |aᵢ|² ≥ 0.

Thus ⟨T(w), w⟩ ≥ 0 ∀ w ∈ V.

Hence T ≥ 0, i.e. T is non negative.
15.18.5 Theorem:

Let T be a linear operator on a finite dimensional inner product space V. Let A = [aᵢⱼ] (n × n) be the matrix of T relative to an ordered orthonormal basis B = {u₁, u₂, ..., uₙ}. Then T is positive if and only if the matrix A satisfies the following conditions:

i) A = A*, i.e. A is self adjoint

ii) Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ xⱼ x̄ᵢ > 0, where x₁, x₂, ..., xₙ are any n scalars, not all zero.

Proof: Let v be any vector in V. Then

v = x₁u₁ + x₂u₂ + ... + xₙuₙ

Then ⟨T(v), v⟩ = ⟨T(Σⱼ₌₁ⁿ xⱼuⱼ), Σᵢ₌₁ⁿ xᵢuᵢ⟩

= Σⱼ₌₁ⁿ Σᵢ₌₁ⁿ ⟨xⱼT(uⱼ), xᵢuᵢ⟩

= Σⱼ₌₁ⁿ Σᵢ₌₁ⁿ xⱼ x̄ᵢ ⟨T(uⱼ), uᵢ⟩

We know that if A = [aᵢⱼ] is the matrix of T with respect to the ordered orthonormal basis B, then aᵢⱼ = ⟨T(uⱼ), uᵢ⟩. Using this in the above we get

⟨T(v), v⟩ = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ xⱼ x̄ᵢ

Now suppose T is positive. Then T = T*,

so A = A*.

If x₁, x₂, ..., xₙ are any n scalars, not all zero, then v = x₁u₁ + x₂u₂ + ... + xₙuₙ is a non zero vector in V. Since T is positive, ⟨T(v), v⟩ > 0. Hence Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ xⱼ x̄ᵢ > 0.

Conversely, suppose that conditions (i) and (ii) of the theorem hold. A = A* ⇒ T = T*.

Also (ii) implies ⟨T(v), v⟩ > 0 for every non zero v ∈ V, since any such v can be written as v = x₁u₁ + x₂u₂ + ... + xₙuₙ where x₁, x₂, ..., xₙ are scalars, not all zero. Hence T is positive.
15.18.6 Working procedure to verify the positiveness of a square matrix:

Let A = [aᵢⱼ] be a square matrix of order n over the field F. Then the principal minors of A are the following n scalars:

      | a₁₁  a₁₂  ....  a₁ₖ |
det   | ....  ....  ....  .... |  = Xₖ (say), k = 1, 2, ..., n
      | aₖ₁  ....  ....  aₖₖ |

Then the matrix A is positive if and only if A = A* and the principal minors X₁, X₂, ..., Xₙ are all positive; whereas the matrix A is not positive if det A is not positive.
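This working procedure can be sketched for a 2×2 real symmetric matrix (the matrix below is chosen here for illustration; it is not from the text):

```python
# Testing positive definiteness of a real symmetric 2x2 matrix
# by its leading principal minors, as in the working procedure above.

A = [[2, 1],
     [1, 3]]

symmetric = A[0][1] == A[1][0]                 # A = A* for a real matrix
minor1 = A[0][0]                               # leading 1x1 minor: X1 = 2
minor2 = A[0][0]*A[1][1] - A[0][1]*A[1][0]     # X2 = det A = 2*3 - 1*1 = 5
positive_definite = symmetric and minor1 > 0 and minor2 > 0
print(minor1, minor2, positive_definite)       # 2 5 True
```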

15.19 Worked out Examples:

W.E.13: T is a linear operator on the inner product space V = R² defined by T(a, b) = (2a + 2b, 2a + 5b). Determine whether T is normal, self adjoint or neither. If possible, find an orthonormal basis of eigen vectors of T for V and list the corresponding eigen values.

Solution: V = R² (F = R) is an inner product space of dimension 2.

So V(F) = R²(R) has an orthonormal basis B = {u₁ = (1, 0), u₂ = (0, 1)}. Here T is a linear operator on V(F) such that T(a, b) = (2a + 2b, 2a + 5b) for u = (a, b).

T(1, 0) = (2(1) + 2(0), 2(1) + 5(0)) = (2, 2)

and T(0, 1) = (2(0) + 2(1), 2(0) + 5(1)) = (2, 5)

So [T]_B = | 2  2 |   and [T*]_B = | 2  2 |
           | 2  5 |               | 2  5 |

So T = T* ⇒ T is self adjoint.

Also [T]_B [T*]_B = | 2  2 | | 2  2 | = | 8   14 |
                    | 2  5 | | 2  5 |   | 14  29 |

[T*]_B [T]_B = | 2  2 | | 2  2 | = | 8   14 |
               | 2  5 | | 2  5 |   | 14  29 |

So [T]_B [T*]_B = [T*]_B [T]_B

⇒ [TT*]_B = [T*T]_B ⇒ TT* = T*T

⇒ T is normal.

Let A = [T]_B. The characteristic equation is |A − λI| = 0:

| 2−λ   2   |
| 2     5−λ | = 0 ⇒ (2 − λ)(5 − λ) − 4 = 0

⇒ λ² − 7λ + 6 = 0 ⇒ (λ − 6)(λ − 1) = 0

⇒ λ = 6, λ = 1.

To find the eigen vector corresponding to λ = 6:

(A − λI)X = O

| 2−6   2   | | x₁ |       | −4   2 | | x₁ |
| 2     5−6 | | x₂ | = O ⇒ |  2  −1 | | x₂ | = O

⇒ −4x₁ + 2x₂ = 0 ⇒ x₂ = 2x₁

Put x₁ = 1; then x₂ = 2.

So X = (x₁, x₂) = (1, 2), and every scalar multiple of it is an eigen vector.

To find the eigen vector corresponding to λ = 1:

(A − λI)X = O

| 2−1   2   | | x₁ |       | 1  2 | | x₁ |
| 2     5−1 | | x₂ | = O ⇒ | 2  4 | | x₂ | = O

⇒ x₁ + 2x₂ = 0 and 2x₁ + 4x₂ = 0

⇒ x₁ = −2x₂, so put x₂ = 1; then x₁ = −2.

So X = (x₁, x₂) = (−2, 1), and every scalar multiple of it is an eigen vector.

 1 1 
An orthonormal basis of eigen vectors is  (1, 2), (2,1)  with corresponding eigen
 5 5 
values 6 and 1.
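The diagonalization found in W.E.13 can be verified numerically: with P whose columns are the unit eigen vectors and D = diag(6, 1), the product P D Pᵀ recovers A.

```python
# Numerical verification of W.E.13: A = P D P^T with
# P = [ (1,2)/sqrt(5) | (-2,1)/sqrt(5) ] and D = diag(6, 1).

import math

s = 1 / math.sqrt(5)
P = [[1*s, -2*s],
     [2*s,  1*s]]
D = [[6, 0],
     [0, 1]]

def mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(X):                               # transpose (P is real orthogonal)
    return [[X[j][i] for j in range(2)] for i in range(2)]

A_rebuilt = mul(mul(P, D), tr(P))
ok = all(abs(A_rebuilt[i][j] - [[2, 2], [2, 5]][i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)   # True
```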
W.E.14: Let V be a finite dimensional inner product space and T an idempotent operator on V, i.e. T² = T. Then T is self adjoint if and only if TT* = T*T.

Solution: Let T be self adjoint; then T* = T and T² = T.

To prove TT* = T*T:

Now ⟨T(u), v⟩ = ⟨u, T*(v)⟩ = ⟨u, T(v)⟩

= ⟨u, TT(v)⟩ = ⟨u, T*T(v)⟩

= ⟨u, TT*(v)⟩ since T* = T

⇒ T*T = TT*.

Converse: Given T² = T and TT* = T*T, to prove that T is self adjoint, i.e. T* = T.

Now ⟨T(u), T(u)⟩ = ⟨u, T*T(u)⟩ = ⟨u, TT*(u)⟩

= ⟨T*(u), T*(u)⟩

⇒ ‖T(u)‖² = ‖T*(u)‖²

So T(u) = O if and only if T*(u) = O .......... (1)

Choose any vector u in V of the form u = v − T(v).

T(u) = T(v − T(v)) = T(v) − T²(v) = T(v) − T(v) = O

Since T(u) = O, it follows from (1) that T*(u) = O.

But T*(u) = O ⇒ T*(v − T(v)) = O

or T*(v) − T*T(v) = O

or T*(v) = T*T(v) ∀ v ∈ V

So T* = T*T ............ (2)

Now T = (T*)* = (T*T)* = T*T** = T*T = T*.

As T = T*, therefore T is self adjoint.


W.E.15: Let V be the space of polynomials over the field of complex numbers with inner product defined as ⟨f, g⟩ = ∫₀¹ f(t) ḡ(t) dt, f, g ∈ V. If D is the differentiation operator, find out whether D is self adjoint or not.

Solution: Let a dash denote differentiation, i.e. Df = f′.

⟨Df, g⟩ = ⟨f′, g⟩ = ∫₀¹ f′(t) ḡ(t) dt

Integrating by parts we get

⟨Df, g⟩ = [f(t) ḡ(t)]₀¹ − ∫₀¹ f(t) ḡ′(t) dt .......... (1)

Also ⟨f, Dg⟩ = ⟨f, g′⟩ = ∫₀¹ f(t) ḡ′(t) dt by definition ........ (2)

If D were to be self adjoint, we would have ⟨Df, g⟩ = ⟨f, Dg⟩. But (1) and (2) are not the same. So D is not self adjoint.

W.E.16: If T₁, T₂ are positive linear operators on an inner product space, then prove that T₁ + T₂ is also positive.

Solution: Given T₁* = T₁, ⟨T₁(u), u⟩ > 0; T₂* = T₂

and ⟨T₂(u), u⟩ > 0.

Now (T₁ + T₂)* = T₁* + T₂* = T₁ + T₂

⇒ T₁ + T₂ is self adjoint.

Again ⟨(T₁ + T₂)(u), u⟩ = ⟨T₁(u) + T₂(u), u⟩

= ⟨T₁(u), u⟩ + ⟨T₂(u), u⟩ > 0 by the given conditions.

Hence T₁ + T₂ is also positive.



15.20 Summary:
In this lesson we discussed linear operators: the adjoint operator and properties of adjoint operators; normal and self adjoint operators and their properties; polynomial splitting and Schur's theorem; and positive and positive semidefinite operators and matrices.

15.21 Technical Terms:

The technical terms we come across in this lesson are: adjoint operator, self adjoint operator, polynomial split, normal operator, positive matrix, positive semidefinite matrix.

15.22 Model Questions:

1. Define the adjoint of a linear operator on an inner product space V. If S*, T* are the adjoint operators of S and T, then prove that

(i) (S + T)* = S* + T*

(ii) (ST)* = T*S*

2. Define self adjoint operator. Let T be a self adjoint linear operator on a finite dimensional inner product space. Then prove that R(T) = (N(T))⊥.

3. State and prove Schur's theorem on linear operators.

4. Define normal operator. If T is a normal operator on an inner product space then show that T − CI is normal for every C ∈ F.

5. Define positive linear operator. If T₁, T₂ are positive linear operators on an inner product space then prove that T₁ + T₂ is also positive.

15.23 Exercise:
1. For each linear operator T on an inner product space V, determine whether T is normal, self adjoint or neither. If possible produce an orthonormal basis of eigen vectors of T for V and list the corresponding eigen values.

i) V = C², and T defined by T(a, b) = (2a + ib, a + 2b)

Ans: T is normal, but not self adjoint. An orthonormal basis of eigen vectors is

{ (1/2)(1 + i, √2), (1/2)(1 + i, −√2) }

with corresponding eigen values 2 + (1 + i)/√2 and 2 − (1 + i)/√2.


ii) V  M 22 ( R ) and T is defined by T ( A)  AT


Ans: T is self adjoint. An orthonormal basis of eigen vectors is

{ (1/√2)[0 1; 1 0], (1/√2)[1 0; 0 1], (1/√2)[0 1; −1 0], (1/√2)[1 0; 0 −1] }

with corresponding eigen values 1, 1, −1, 1.
2. Let V be a complex inner product space, and let T be a linear operator on V. Write T = T₁ + iT₂, where T₁ and T₂ are self adjoint.

Prove that T is normal if and only if T₁T₂ = T₂T₁.

3. Prove that every entry on the main diagonal of a positive matrix is positive.
4. Which of the following matrices are positive?

i) A = | 0    i |        ii) B = | 1      1−i |
       | −i   0 |                | 1+i    3   |

Ans: (i) A is not positive
(ii) B is positive

5. If T is a linear operator on an inner product space V(F) and a, b are scalars such that |a| = |b|, then show that aT + bT* is normal.

15.24 Reference Books:


1. Linear Algebra 4th edition; Stephen H. Friedberg. Arnold J. Insel, Lawrence E. Spence.
2. Schaum's Outlines, Beginning Linear Algebra, Seymour Lipschutz.
3. Linear Algebra Dr. S.N. Goel
4. Linear Algebra K.P. Gupta

- A. Mallikharjana Sarma

LESSON - 16

UNITARY AND ORTHOGONAL OPERATORS


16.1 Objective of the Lesson:
In this lesson we study the analogy between complex numbers and linear operators. In the previous lessons, we observed that the adjoint of a linear operator acts similarly to the conjugate of a complex number. A complex number z has length 1 if z z̄ = 1.

In this lesson we study those linear operators T on an inner product space V such that TT* = T*T = I. We see that these are precisely the linear operators that preserve length, in the sense that ‖T(u)‖ = ‖u‖ for all u ∈ V. As another characterisation, we prove that on a finite dimensional complex inner product space, these are the normal operators whose eigen values all have absolute value 1.

16.2 Structure of the lesson :


This lesson contains the following items:
16.3 Introduction
16.4 Some basic definitions
16.5 Unitary operator and orthogonal operator, definition
16.6 Theorems - Equivalent statements
16.7 Reflection - definition - examples.
16.8 Some basic properties of unitary operators
16.9 Matrices representing unitary and orthogonal transformations
16.10 Theorems
16.11 Worked out examples
16.12 Working procedure to find a unitary matrix P and a diagonal matrix D such that P*AP = D
16.13 Worked Out examples
16.14 Summary
16.15 Technical Terms
16.16 Model Questions
16.17 Exercise
16.18 Reference Books
16.3 Introduction :
Linear operators preserve the operations of vector addition and scalar multiplication and
isomorphisms preserve all the vector space structure. We now consider those linear operators T
on an inner product space that preserve the length. We see that this condition guarantees, that T
preserves the inner product.

16.4 Some Basic Definitions:


16.4.1 Definition:
Let U and V be two inner product spaces over F. Let T : U  V be a linear transformation.
Then we say that

i) T preserves inner products if <T(u), T(v)> = <u, v> for all u, v ∈ U.

ii) T preserves norms if ||T(u)|| = ||u|| for all u ∈ U.

iii) T is an isometry if T preserves distances, i.e. if ||T(u) - T(v)|| = ||u - v|| for all u, v ∈ U.

Note that the distance from T(u) to T(v) is d(T(u), T(v)) and is equal to ||T(u) - T(v)||.

16.4.2 Equivalent Conditions:

Let U ( F ) and V ( F ) be two inner product spaces. Let T : U ( F )  V ( F ) be a linear trans-


formation. Then the following three conditions are equivalent.
i) T preserves inner product ii) T preserves norms iii) T is an isometry.
16.4.3 Inner Product Isomorphism:

16.4.4 Definition: Let T be a linear transformation from an inner product space V ( F ) to an inner
product space V ( F ) . Then T is said to be an inner product space isomorphism if

i) T is invertible i.e. T is one one onto


ii) T preserves inner products

Here U(F) and V(F) are said to be isomorphic and we write U ≅ V.

T preserves inner products ⟹ T is non-singular

⟹ T is one-one.
Hence an inner product space isomorphism from U onto V can also be defined as a linear
transformation from U onto V, which preserves inner products.
16.5 Definition:
i) Unitary Operator:
Let T be a linear operator on a finite dimensional inner product space over the field of com-
plex numbers such that ||T(u)|| = ||u|| for all u ∈ V; then T is called a unitary operator.

ii) Orthogonal Operator: Let T be a linear operator on a finite dimensional inner product space
V over the field of real numbers R. If ||T(u)|| = ||u|| for all u ∈ V, then T is said to be an orthogonal
operator.
iii) Isometry: Let T be a linear operator on an infinite dimensional inner product space V over F.
If ||T(u)|| = ||u|| for all u ∈ V, then T is called an isometry.

If in addition the operator is onto (the norm condition already guarantees that it is one to one), then
the operator is called unitary if F = C, or orthogonal if F = R.

iv) Definition: U and V are two vector spaces over a field F. Then the zero transformation
T0 : U  V is defined by T0 (u )  O  u U . The zero transformation is also denoted by 0̂ .

16.6.1 Theorem: Let T be a self adjoint operator on a finite dimensional inner product space V. If
<u, T(u)> = 0 for all u ∈ V, then T = T0.
Proof: We know that if T is a self adjoint linear operator on a finite dimensional inner product
space V, then there exists an orthonormal basis B for V consisting of eigen vectors of T.
By this result we can choose an orthonormal basis B for V consisting of eigen
vectors of T. If u ∈ B then T(u) = λu for some λ.

Then 0 = <u, T(u)> = <u, λu> = λ̄<u, u>

⟹ λ = 0. Hence T(u) = O for all u ∈ B.

So T = T0.

16.6.2 Equivalent Statements:


Theorem: Let T be a linear operator on a finite dimensional inner product space V. Then the
following statements are equivalent
i) TT* = T*T = I

ii) <T(u), T(v)> = <u, v> for all u, v ∈ V

iii) If B is an orthonormal basis for V, then T(B) is an orthonormal basis for V.

iv) There exists an orthonormal basis B for V such that T(B) is an orthonormal basis for V.

v) ||T(u)|| = ||u|| for all u ∈ V

Proof:
1. We will now prove that (i) ⟹ (ii)

Given TT* = T*T = I .......... (1)

Let u, v ∈ V. Then <u, v> = <I(u), v>

= <(T*T)(u), v>     using (1)

= <T*(T(u)), v>

= <T(u), T(v)>

Thus TT* = T*T = I ⟹ <T(u), T(v)> = <u, v> for all u, v ∈ V.

Hence (i) ⟹ (ii)


ii) To show (ii) ⟹ (iii)

Given <T(u), T(v)> = <u, v> for all u, v ∈ V .......... (2)

Let B = {u1, u2, ..., un} be an orthonormal basis for V. So T(B) = {T(u1), T(u2), ..., T(un)}.

From (2), <T(ui), T(uj)> = <ui, uj> = δij = 1 if i = j, 0 if i ≠ j.

⟹ T(B) = {T(u1), T(u2), ..., T(un)} is an orthonormal basis for V.

Hence (ii) ⟹ (iii)


3. To show that (iii) ⟹ (iv)

Given: if B is an orthonormal basis for V, then T(B) is an orthonormal basis for V ....... (3)

Let B = {u1, u2, ..., un} be an orthonormal basis for V; then by (3), T(B) = {T(u1), T(u2), ..., T(un)}
is an orthonormal basis for V.

Hence (iii) ⟹ (iv)


4. To show that (iv) ⟹ (v)

Let u ∈ V and let B = {u1, u2, ..., un} be an orthonormal basis for V such that T(B) is also
orthonormal, as given by (iv). So u = Σi ai ui (i = 1, 2, ..., n) for some scalars ai.

||u||² = < Σi ai ui , Σj aj uj >

= Σi Σj ai āj <ui, uj>

= Σi ai āi <ui, ui>     (summing over j = 1, 2, ..., n and remembering <ui, uj> = 0 if i ≠ j
and 1 if i = j, as B is orthonormal)

= Σi ai āi (1)

= Σi |ai|²     ...... (A)

Applying the same manipulation to T(u) = Σi ai T(ui) and using the fact that T(B) is also
orthonormal, we obtain

||T(u)||² = Σi |ai|²     ......... (B)

From (A) and (B) we get ||T(u)|| = ||u||

So (iv) ⟹ (v)
5. Finally we will prove that (v) ⟹ (i)

Let u ∈ V. We have <u, u> = ||u||² = ||T(u)||²

= <T(u), T(u)>

= <u, T*T(u)>

So <u, u> - <u, T*T(u)> = 0 for all u ∈ V

⟹ <u, (I - T*T)u> = 0 for all u ∈ V

Let S = I - T*T; then S is self adjoint and <u, S(u)> = 0 for all u ∈ V.

We know that if S is a self adjoint operator on a finite dimensional inner product space and
<u, S(u)> = 0 for all u ∈ V, then S = T0, where T0 is the zero transformation.

So T0 = S = I - T*T

So T*T = I

Since V is finite dimensional, TT* = I also.

Hence ||T(u)|| = ||u|| for all u ∈ V ⟹ TT* = T*T = I

Hence (v) ⟹ (i)


Note: Among the equivalent conditions, any one can be taken as a definition in doing problems.
16.6.3 Theorem:
If T is a unitary operator, then show that T* = T⁻¹.

Proof: T is a unitary operator ⟹ ||T(u)|| = ||u||

⟹ ||T(u)||² = ||u||²

⟹ <T(u), T(u)> = <u, u>

⟹ <(T*T)(u), u> = <u, u>

⟹ <(T*T)(u), u> = <I(u), u>

⟹ <(T*T - I)u, u> = 0

⟹ T*T - I = Ô (the null operator)

⟹ T*T = I ⟹ T*TT⁻¹ = IT⁻¹

⟹ T*I = T⁻¹ ⟹ T* = T⁻¹
Thus if T is a unitary operator then T* = T⁻¹.
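The relation T* = T⁻¹ can be checked numerically on a concrete unitary operator. The sketch below (plain Python, matrices stored as lists of rows; the rotation angle is an arbitrary choice for the illustration) verifies that T*T is the identity, so T* acts as the inverse of T.

```python
import math

def conj_transpose(M):
    # conjugate transpose A* of a matrix stored as a list of rows
    return [[complex(M[r][c]).conjugate() for r in range(len(M))]
            for c in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

t = 0.7  # an arbitrary rotation angle
T = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]      # a unitary (in fact orthogonal) matrix

P = matmul(conj_transpose(T), T)       # T* T should equal I
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```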

Remark: If T is unitary, then T is non singular.


16.6.4 Theorem:
Let T be a linear operator on a finite dimensional real inner product space V. Then V has an
orthonormal basis of eigen vectors of T with corresponding eigen values of absolute value 1 if and
only if T is both self adjoint and orthogonal.

Proof: Suppose V has an orthonormal basis B = {u1, u2, ..., un}

such that T(ui) = λi ui and |λi| = 1 for all i.

As T is a linear operator and there exists an orthonormal basis B for V consisting of eigen
vectors of T, T is self adjoint.

Thus (TT*)(ui) = T(λi ui) = λi² ui = ui for each i (the λi are real since T is self adjoint, and
λi² = |λi|² = 1). So TT* = I, so T is orthogonal.

Hence T is both self adjoint and orthogonal.

Converse: Let T be both self adjoint and orthogonal.
We know that if T is a self adjoint linear operator on a finite dimensional real inner product
space V, then there exists an orthonormal basis B for V consisting of eigen vectors of T.

So V possesses an orthonormal basis B = {u1, u2, ..., un} such that T(ui) = λi ui for all i.
As T is also orthogonal, we have |λi| ||ui|| = ||λi ui|| = ||T(ui)|| = ||ui||.

Thus |λi| = 1 for every i.

Hence from the above two cases the theorem follows.


16.6.5 Corollary: Let T be a linear operator on a finite dimensional complex inner product space
V. Then V has an orthonormal basis of eigen vectors of T with corresponding eigen values of
absolute value 1 if and only if T is unitary.
Proof: The proof is similar as above.

16.7 Reflection:
16.7.1 Definition: Let L be a one dimensional subspace of R², i.e. a line through the origin.
A linear operator T on R² is called a reflection of R² about L if T(u) = u for all u ∈ L and
T(u) = -u for all u ∈ L⊥.
16.7.2 Example: Let T be a reflection of R² about a line L through the origin. We shall show that T
is an orthogonal operator. Select vectors u1 ∈ L and u2 ∈ L⊥ such that ||u1|| = ||u2|| = 1.

Then T(u1) = u1, T(u2) = -u2; thus u1 and u2 are eigen vectors of T with corresponding

eigen values 1 and -1 respectively. Furthermore {u1, u2} is an orthonormal basis for R². It follows
that T is an orthogonal operator.

16.8 Some Basic Properties of Unitary Operators:


16.8.1 Show that every unitary operator is normal:
Proof: Let T be a linear operator on the inner product space V which is unitary.
So TT* = T*T = I. Hence T is invertible and T⁻¹ = T*.

So T commutes with its adjoint, i.e. T is normal.
Hence every unitary operator is normal.
16.8.2 If S and T are unitary operators, then ST is unitary or the product of two unitary operators is
unitary.
Proof: S, T are two unitary operators on V. Then

S⁻¹ = S*, T⁻¹ = T* ............. (1)

Now (ST)⁻¹ = T⁻¹S⁻¹ = T*S*     using (1)

= (ST)*

As (ST)⁻¹ = (ST)*, ST is unitary. Hence the product of two unitary operators is unitary.

Aliter i) : S and T are two unitary operators on a finite dimensional inner product space V.
We have to show that ST is unitary.

Now (ST)(ST)* = (ST)(T*S*)

= S(TT*)S*

= S I S* = SS* = I
So (ST)(ST)* = I

Similarly (ST)*(ST) = I

So ST is unitary.

Hence the theorem.
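The same fact can be illustrated numerically: the composite of two plane rotations (both orthogonal operators) again satisfies (ST)(ST)ᵀ = I. A sketch under these assumptions, with arbitrarily chosen angles:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def rot(t):
    # rotation of the plane by angle t: an orthogonal operator
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

S, T = rot(0.4), rot(1.1)
ST = matmul(S, T)

# (ST)(ST)^T = I, so the product ST is again orthogonal
P = matmul(ST, transpose(ST))
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```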


ii) Let S and T be the given unitary operators then S and T are invertible. So ST is invertible

Also ||(ST)(u)|| = ||S(T(u))||

= ||T(u)|| = ||u||     since S and T are unitary.

So ||(ST)(u)|| = ||u||

So ST is unitary.
Hence the theorem.
16.8.3 Corollary: Prove that the composite of orthogonal operators is orthogonal.
Proof: Similar as above.
16.8.4 Theorem:
Show that the inverse of a unitary operator is unitary.

Proof: Let V be an inner product space and T a unitary operator on V. We have to show that T⁻¹
is unitary.

As T is unitary, ||T(u)|| = ||u|| for all u ∈ V.

We put v = T(u), i.e. T⁻¹(v) = u.

We get ||T(T⁻¹(v))|| = ||T⁻¹(v)||

⟹ ||v|| = ||T⁻¹(v)||

Hence ||T⁻¹(v)|| = ||v|| for all v ∈ V,

which implies T⁻¹ is unitary.

Aliter: Let T be unitary and let T⁻¹ be the inverse operator of T. Since T is unitary, T⁻¹ = T*.

Now (T⁻¹)⁻¹ = (T*)⁻¹ = (T⁻¹)*

Thus (T⁻¹)⁻¹ = (T⁻¹)*, and it follows that the inverse of a unitary operator is unitary.

16.8.5 Show that the set of all unitary operators on an inner product space V is a group with
respect to composite of operations.

Solution: Let G denote the set of all unitary operators on an inner product space V ( F ) .

Let T1, T2 be arbitrary unitary operators belonging to G.

So T1T1* = T1*T1 = I and T2T2* = T2*T2 = I.

First we will show that T1T2 is a unitary operator.

Now (T1T2)(T1T2)* = (T1T2)(T2*T1*) = T1(T2T2*)T1*

= T1 I T1* = T1T1* = I

Thus (T1T2)(T1T2)* = I

(T1T2)*(T1T2) = (T2*T1*)(T1T2) = T2*(T1*T1)T2

= T2* I T2 = T2*T2 = I

Hence (T1T2)(T1T2)* = I = (T1T2)*(T1T2)

So T1T2 is a unitary operator.

We verify the group axioms.

i) Closure Property: If T1, T2 are any two unitary operators belonging to G, then T1T2 is a unitary
operator and hence belongs to G. Hence G is closed.

ii) Associativity: We know the composite of operators is associative. Hence if T1, T2, T3 are any

three unitary operators in G, then (T1T2)T3 = T1(T2T3).

iii) Existence of Identity: Let I be the identity operator. I is invertible and

I(u) = u for all u ∈ V, so I is a unitary operator on V. Hence I belongs to G and IT = TI = T for all T ∈ G.

So I is the identity element in G.

iv) Existence of Inverse: T is unitary ⟹ T is invertible ⟹ T⁻¹ exists.

Let T⁻¹(u) = v so that T(v) = u.

Then ||u|| = ||T(v)|| = ||v||     since T is unitary

⟹ ||T⁻¹(u)|| = ||v|| = ||u|| ⟹ T⁻¹ is also unitary.

So T⁻¹ ∈ G.
As all the group axioms are satisfied, the set G of all unitary operators on V is a group.
16.8.6 Let T be a unitary operator on an inner product space V, let W be a finite dimensional
T-invariant subspace of V. Prove that W  is T - invariant.

Solution: W is a subspace of V and W is T-invariant.

So w ∈ W ⟹ T(w) ∈ W.

Since T is unitary, T is one-one; as W is finite dimensional and T-invariant, T maps W onto W,
so every w1 ∈ W can be written as w1 = T(w) for some w ∈ W.

To prove that W⊥ is T-invariant, it is enough to show that u ∈ W⊥ ⟹ T(u) ∈ W⊥.

Let u ∈ W⊥ and w1 ∈ W be arbitrary; write w1 = T(w) with w ∈ W. Then <u, w> = 0.

As T is unitary,

<T(u), w1> = <T(u), T(w)> = <u, w> = 0

Thus <T(u), w1> = 0 for all w1 ∈ W.

This implies T(u) ⊥ W (T(u) is perpendicular to W), i.e. T(u) ∈ W⊥.

So u ∈ W⊥ ⟹ T(u) ∈ W⊥.

Hence W⊥ is T-invariant.

16.8.7 Show that the determinant of a unitary operator has absolute value 1.

Solution: Let T be a unitary operator on a finite dimensional inner product space V(F). Let B
be an ordered orthonormal basis for V and let A denote the matrix of T relative to B. Then

det T = det A, where A = [T]B

T is unitary ⟹ T*T = I ⟹ [T*T]B = [I]B

⟹ [T*]B · [T]B = [I]B

So det [T*]B · det [T]B = 1 ⟹ det A* · det A = 1

⟹ (conjugate of det A)(det A) = 1

⟹ |det A|² = 1 ⟹ |det A| = 1
So det A and hence det T has absolute value 1.
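As a numerical illustration (the matrix chosen here, (1/√2)[1 i; i 1], is an assumption for the example, not taken from the text), one can confirm both the unitarity of a complex matrix and that its determinant has absolute value 1:

```python
import math

s = 1 / math.sqrt(2)
A = [[s * 1, s * 1j],
     [s * 1j, s * 1]]   # a unitary matrix: its rows are orthonormal in C^2

# check unitarity: <row_i, row_j> = delta_ij, with the conjugate on the second slot
for i in range(2):
    for j in range(2):
        ip = sum(A[i][k] * A[j][k].conjugate() for k in range(2))
        assert abs(ip - (1 if i == j else 0)) < 1e-12

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # = (1/2)(1 - i^2) = 1
assert abs(abs(det) - 1) < 1e-12              # |det A| = 1
```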

16.9 Matrices representing Unitary and Orthogonal Transformations:


16.9.1 Definition: A square matrix A is called an orthogonal matrix if AᵀA = AAᵀ = I, and unitary
if A*A = AA* = I.

Note i): For a real matrix A we have A* = Aᵀ.

So a real unitary matrix is also orthogonal. In this case we call it orthogonal rather than
unitary.
ii) The condition AA* = I is equivalent to the statement that the rows of A form an orthonor-
mal basis for Fⁿ, because

δij = Iij = (AA*)ij = Σk Aik (A*)kj = Σk Aik conj(Ajk) (k = 1, ..., n), and the last sum is the inner product
of the ith row and the jth row of A.

Here Aik represents the ikth entry of the matrix A.

iii) The condition A*A = I is equivalent to the statement that the columns of A form an
orthonormal basis of Fⁿ.

iv) A linear operator T on an inner product space V is unitary (orthogonal) if and only if [T]B
is unitary (orthogonal) for some orthonormal basis B for V.

Ex: The matrix

[ cos θ   -sin θ ]
[ sin θ    cos θ ]

is clearly orthogonal. One can easily see that the rows of the matrix form an orthonormal basis
for R². Similarly the columns of the matrix form an orthonormal basis for R².
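This can be confirmed in a few lines of Python: for the rotation matrix, every pairwise inner product of rows, and of columns, equals δij (the angle below is an arbitrary choice):

```python
import math

t = 0.3
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

rows = A
cols = [list(c) for c in zip(*A)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# both the rows and the columns form an orthonormal basis of R^2
for i in range(2):
    for j in range(2):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(rows[i], rows[j]) - expected) < 1e-12
        assert abs(dot(cols[i], cols[j]) - expected) < 1e-12
```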

16.9.2 Equivalent Matrices: Definition:

Let A and B be n x n complex (real) matrices. Then B is unitarily equivalent (orthogonally
equivalent) to A if and only if there exists an n x n unitary (orthogonal) matrix
P such that B = P*AP.
Note: The relation "is unitarily equivalent to" ("is orthogonally equivalent to") is an equivalence relation on
Mₙₓₙ(C) (on Mₙₓₙ(R)).

W.E.1: Let T be a reflection of R² about a line L through the origin. Let B be the standard ordered
basis for R² and let A = [T]B; then T = L_A.
Since T is an orthogonal operator and B is an orthonormal basis, A is an orthogonal matrix.
Describe A.

Solution: Suppose that θ is the angle from the positive x-axis to L. Let v1 = (cos θ, sin θ) and

v2 = (-sin θ, cos θ). Then ||v1|| = ||v2|| = 1,

v1 ∈ L and v2 ⊥ L. Hence W = {v1, v2} is an orthonormal basis for R². Because T(v1) = v1

and T(v2) = -v2, we have

[T]W = [L_A]W = [ 1    0 ]
                [ 0   -1 ]

Let Q = [ cos θ   -sin θ ]
        [ sin θ    cos θ ]

Then A = Q [L_A]W Q⁻¹

= [ cos θ   -sin θ ] [ 1    0 ] [  cos θ   sin θ ]
  [ sin θ    cos θ ] [ 0   -1 ] [ -sin θ   cos θ ]

= [ cos θ    sin θ ] [  cos θ   sin θ ]
  [ sin θ   -cos θ ] [ -sin θ   cos θ ]

= [ cos²θ - sin²θ        2 sin θ cos θ     ]
  [ 2 sin θ cos θ     -(cos²θ - sin²θ)     ]

= [ cos 2θ    sin 2θ ]
  [ sin 2θ   -cos 2θ ]
We know that for a complex normal (real symmetric) matrix A, there exists an orthonormal
basis B for Fⁿ consisting of eigen vectors of A. Hence A is similar to a diagonal matrix D. We
also know that if A ∈ Mₙₓₙ(F) and W is an ordered basis for Fⁿ, then

[L_A]W = Q⁻¹AQ, where Q is the n x n matrix whose jth column is the jth vector of W.

Hence the matrix Q whose columns are the vectors in B satisfies D = Q⁻¹AQ.
But since the columns of Q form an orthonormal basis for Fⁿ, it follows that Q is unitary (orthogo-
nal). Hence A is unitarily equivalent (orthogonally equivalent) to D.
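The reflection matrix obtained in W.E.1 can be sanity-checked numerically: it fixes the direction of L, negates the perpendicular direction, and has determinant -1. A sketch with an arbitrarily chosen angle:

```python
import math

t = 0.5                                    # angle of the line L with the positive x-axis
A = [[math.cos(2*t),  math.sin(2*t)],
     [math.sin(2*t), -math.cos(2*t)]]      # reflection of R^2 about L

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

v1 = [math.cos(t), math.sin(t)]            # on L:               A v1 = v1
v2 = [-math.sin(t), math.cos(t)]           # perpendicular to L: A v2 = -v2

w1, w2 = apply(A, v1), apply(A, v2)
assert all(abs(w1[i] - v1[i]) < 1e-12 for i in range(2))
assert all(abs(w2[i] + v2[i]) < 1e-12 for i in range(2))

detA = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # a reflection has determinant -1
assert abs(detA + 1) < 1e-12
```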
16.10.1 Theorem:
Let V be a finite dimensional inner product space and T be the linear operator on V. Then T
is unitary if and only if the matrix T in some (or every) ordered orthonormal basis for V is a unitary
matrix.

Proof: V is a finite dimensional inner product space and T is a linear operator on V. Let
B = {u1, u2, ..., un} be an ordered orthonormal basis for V, and let A be the matrix of T relative to
B, i.e. [T]B = A.

Case i): Let T be unitary. Then T*T = I ⟹ [T*T]B = [I]B

⟹ [T*]B · [T]B = I, where A* = [T*]B

With A = [T]B this gives A*A = I.

So A = [T]B is unitary.

Converse:
Suppose that the matrix A is unitary, so that A*A = I.

⟹ [T*]B [T]B = [I]B

⟹ [T*T]B = [I]B

⟹ T*T = I
So T is unitary.
From the above two cases the theorem follows.
16.10.2 Corollary: A linear operator T on an inner product space V is orthogonal if and only if
[T]B is orthogonal for some orthonormal basis B for V.

Proof: Similar as above.


16.10.3 Let A be a complex n x n matrix. Then A is normal if and only if A is unitarily equivalent to a
diagonal matrix.

Proof: Let Cⁿ be the vector space V with the standard inner product defined on it and let B be its
standard ordered basis. If T is the linear operator on V represented in the standard ordered
basis by the matrix A, then we have [T]B = A and [T*]B = A*.

Now [TT*]B = [T]B · [T*]B = AA* and [T*T]B = A*A.



Case i): If A is a normal matrix then A*A = AA*, and hence [TT*]B = [T*T]B, i.e. TT* = T*T, i.e.
T is a normal operator. Since T is a normal operator on a finite dimensional inner product space V,
there exists an orthonormal basis, say B1, for V each vector of which is a characteristic

vector of T, and hence [T]B1 is a diagonal matrix. Further, if P is the transition matrix from B to B1,

then P is a unitary matrix, as both B and B1 are orthonormal bases. When P is a unitary matrix we

have P*P = I, i.e. P* = P⁻¹. Also we have [T]B1 = P⁻¹[T]B P = P*AP = a diagonal matrix.

Converse: Suppose that A is unitarily equivalent to a diagonal matrix D.


Suppose that A = P*DP, where P is a unitary matrix and D is a diagonal matrix.

Then AA* = (P*DP)(P*DP)*

= (P*DP)(P*D*P**)

= (P*DP)(P*D*P)

= P*D(PP*)D*P

= P*DID*P

= P*(DD*)P ............... (1)

Similarly A*A = P*(D*D)P ........ (2)

Since D is a diagonal matrix, we have DD* = D*D, so from (1) and (2), AA* = A*A.
Hence A is normal.
From the above two cases, the theorem follows.
16.10.4 Corollary: Let A be a real n x n matrix: Then A is symmetric if and only if A is orthogonally
equivalent to a real diagonal matrix.
Proof: The proof is similar as above.
16.10.5 Schur's Theorem for Matrices: Schur's theorem, proved in the lesson on normal and adjoint
operators, can be stated in matrix form as follows.

Let A ∈ Mₙₓₙ(F) be a matrix whose characteristic polynomial splits over F. Then

i) If F = C, then A is unitarily equivalent to a complex upper triangular matrix.

ii) If F = R, then A is orthogonally equivalent to a real upper triangular matrix.


16.11 Worked out examples:
W.E.2: Let T be the linear operator on R³ which rotates every vector in R³ about the z-axis by a
constant angle θ. Prove that T is an orthogonal transformation.

Solution: T is invertible, since there exists a linear transformation T⁻¹ which rotates every vector in
R³ about the z-axis by the constant angle θ in the direction opposite to T. Hence

T⁻¹(T(u)) = (T⁻¹T)u = I(u) = u for all u ∈ R³.

Also, if u = (x, y, z), x, y, z ∈ R, then

T(u) = T(x, y, z) = (x cos θ - y sin θ, x sin θ + y cos θ, z)

||u||² = x² + y² + z²

||T(u)||² = (x cos θ - y sin θ)² + (x sin θ + y cos θ)² + z²

= x² + y² + z²

Hence ||T(u)|| = ||u||.

Hence the linear transformation T is orthogonal.
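The length computation can be spot-checked on sample vectors; the sketch below applies the rotation about the z-axis and compares norms (the angle and the sample points are arbitrary choices for the illustration):

```python
import math

t = 1.2  # the constant angle of rotation

def T(u):
    # rotation of R^3 about the z-axis by angle t
    x, y, z = u
    return (x*math.cos(t) - y*math.sin(t),
            x*math.sin(t) + y*math.cos(t),
            z)

def norm(u):
    return math.sqrt(sum(c*c for c in u))

samples = [(1.0, 0.0, 0.0), (1.0, -2.0, 3.0), (0.5, 0.5, -4.0)]
results = [abs(norm(T(u)) - norm(u)) for u in samples]
assert all(r < 1e-12 for r in results)   # ||T(u)|| = ||u|| for each sample
```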

 a  ic b  id 
W.E.3: Show that the matrix A  
a  ic 
is unitary if and only if a 2  b 2  c 2  d 2  1 .
b  id

 a  ic b  id 
Solution: A*  
 b  id a  ic 

 a  ic b  id   a  ic b  id 
Now AA*  
b  id a  ic   b  id a  ic 

a 2  b2  c 2  d 2 
0
 
 0 a b c d 
2 2 2 2

So AA*  I and only if a 2  b 2  c 2  d 2  1

Hence A is unitary if and only if a 2  b 2  c 2  d 2  1 .
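The criterion can be tested numerically. The helper below builds A with the sign pattern used in this worked example as reconstructed here (an assumption, since the printed signs are unclear), and checks AA* = I entrywise:

```python
def check_unitary(a, b, c, d, tol=1e-12):
    # A with the reconstructed sign pattern [[a+ic, -b+id], [b+id, a-ic]]
    A = [[complex(a, c), complex(-b, d)],
         [complex(b, d), complex(a, -c)]]
    # AA*: entry (i, j) is <row_i, row_j> with conjugation on the second row
    for i in range(2):
        for j in range(2):
            ip = sum(A[i][k] * A[j][k].conjugate() for k in range(2))
            if abs(ip - (1 if i == j else 0)) > tol:
                return False
    return True

ok = check_unitary(0.5, 0.5, 0.5, 0.5)    # a^2 + b^2 + c^2 + d^2 = 1
bad = check_unitary(1.0, 1.0, 0.0, 0.0)   # sum of squares = 2, not unitary
assert ok and not bad
```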



W.E.3A: Show that the matrix A = [  0   -2m   n ]
                                 [  l    m    n ]
                                 [ -l    m    n ]
where l = 1/√2, m = 1/√6, n = 1/√3, is orthogonal.

Solution: Let C1, C2, C3 be the column vectors of A. Then

C1 = (0, l, -l);  C2 = (-2m, m, m);  C3 = (n, n, n)

We have <C1, C1> = 0 + l² + l² = 2l² = 2 · (1/2) = 1

<C2, C2> = 4m² + m² + m² = 6m² = 6 · (1/6) = 1

<C3, C3> = n² + n² + n² = 3n² = 3 · (1/3) = 1

<C1, C2> = 0(-2m) + l(m) + (-l)(m) = 0

<C2, C3> = -2mn + mn + mn = 0

<C3, C1> = (n)(0) + (n)(l) + (n)(-l) = 0
Thus the columns of A form an orthonormal set of vectors. So A is orthogonal.
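The same check in code, with the signs of the entries as reconstructed here (an assumption where the print is unclear): the columns of A are orthonormal, hence AᵀA = I and A is orthogonal.

```python
import math

l, m, n = 1/math.sqrt(2), 1/math.sqrt(6), 1/math.sqrt(3)
A = [[0.0, -2*m, n],
     [l,     m,  n],
     [-l,    m,  n]]

cols = [list(c) for c in zip(*A)]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

# every pair of columns has inner product delta_ij
for i in range(3):
    for j in range(3):
        assert abs(dot(cols[i], cols[j]) - (1 if i == j else 0)) < 1e-12
```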

1 2 2
W.E.4 : Find an orthogonal matrix P whose first row is  , ,  .
3 3 3

1 2 2
Solution: Let u1   , , 
3 3 3

First we find a non zero vector w2  ( x, y, z ) which is orthogonal to u1 , for which


 u1 , w2  0

1 2 2
  , ,  , ( x, y, z )  0
3 3 3
Centre for Distance Education 16.18 Acharya Nagarjuna University

x 2 y 2z
    0  x  2 y  2z  0
3 3 3

Put x  0,  z   y put y  1 , then z  1

So one such solution is w2  ( x, y, z )  (0,1, 1)

 1 1 
Normalize w2 to get the second row of P i.e. u2   0, , 
 2 2

Now find a non zero vector w3 = (x, y, z)

which is orthogonal to both u1 and u2, for which

<u1, w3> = 0, <u2, w3> = 0

<u1, w3> = 0 ⟹ <(1/3, 2/3, 2/3), (x, y, z)> = 0

⟹ x/3 + 2y/3 + 2z/3 = 0 ⟹ x + 2y + 2z = 0 ........... (1)

<u2, w3> = 0 ⟹ <(0, 1/√2, -1/√2), (x, y, z)> = 0

⟹ 0x + y/√2 - z/√2 = 0 ⟹ y - z = 0 ............... (2)

Put z = 1; then y = 1, and x = -4 from (1).

So w3 = (-4, 1, 1). Normalize w3 to

obtain the third row of P,

i.e. u3 = (-4/√18, 1/√18, 1/√18) = (-4/(3√2), 1/(3√2), 1/(3√2))

Hence the required orthogonal matrix is

P = [  1/3        2/3        2/3      ]
    [  0          1/√2      -1/√2     ]
    [ -4/(3√2)    1/(3√2)    1/(3√2)  ]
Caution: The above matrix is not unique.
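One such choice of P can be verified by checking that its rows are orthonormal, i.e. PPᵀ = I (the sign of the first entry of the third row is as reconstructed here, since the printed signs are unclear):

```python
import math

r2, r18 = math.sqrt(2), math.sqrt(18)
P = [[1/3,     2/3,    2/3],
     [0.0,     1/r2,  -1/r2],
     [-4/r18,  1/r18,  1/r18]]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

# P P^T = I: the rows of P are orthonormal, so P is orthogonal
for i in range(3):
    for j in range(3):
        assert abs(dot(P[i], P[j]) - (1 if i == j else 0)) < 1e-12
```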

1 1 1
W.E.5 : Let A  1 3 4  determine whether or not i) the rows of A are orthogonal.
 
7 5 2 

ii) A is an orthogonal matrix.


iii) The columns of A are orthogonal.
Solution: 1. The rows of A are orthogonal, since

<(1, 1, -1), (1, 3, 4)> = 1(1) + 1(3) + (-1)4 = 0

<(1, 1, -1), (7, -5, 2)> = (1)(7) + 1(-5) + (-1)2 = 0

<(1, 3, 4), (7, -5, 2)> = 1(7) + 3(-5) + 4(2) = 0

2. A is not an orthogonal matrix, since the rows of A are not unit vectors:

||u1||² = <(1, 1, -1), (1, 1, -1)> = (1)(1) + 1(1) + (-1)(-1) = 3

||u1|| = √3, not unity.

3. The columns of A are not orthogonal, since for example

<(1, 1, 7), (1, 3, -5)> = 1(1) + 1(3) + 7(-5) = -31 ≠ 0.

 1 
  2 
 
W.E.6: For which value of  is the following matrix is unitary   1
 
 2 
Centre for Distance Education 16.20 Acharya Nagarjuna University

 1  1 
 2   2
Solution: Let A    so A*   
 1  1
 2  
 2 

Where A* is the cojugate transpose of the matrix A.

The matrix A is unitary if AA*  I

 1  1 
  2   2   1 0
   
i.e.  1  0 1 
 
1

 2   2 

 1 1 1 
     
4 2 2 1 0 
 
i.e.  1 0 1 
   
1 1
 
 2 2 4 

1 1 1
i.e.   1   0
4 2 2

1 1 1
   0    1
2 2 4

1
Solving we get (   )  0   is real say a
2

(Since if   x  iy then   x  iy so

    ( x  iy)  ( x  iy)  0  y  0 )

1
Again for  real we get    1
4

1 3 3
  2  1 2  so   
4 4 4

3
Hence the matrix A is unitary if   
4
W.E.7: If V(F) is a finite dimensional unitary space and T is a linear transformation on V(F),
then show that T is self adjoint if and only if <T(u), u> is real for each u ∈ V.

Solution: Case i) Let T be self adjoint.

So T* = T .......... (1)

Now <T(u), u> = <u, T*(u)> for all u ∈ V

= <u, T(u)>     by (1)

= the conjugate of <T(u), u>

Thus <T(u), u> equals its own conjugate.

So <T(u), u> is real.

(Since for a complex number z, z = z̄ ⟹ z is real.)

Case ii) Converse: Let <T(u), u> be real for all u ∈ V.

Then <T(u), u> = conjugate of <T(u), u> = <u, T(u)>

Also, by the definition of the adjoint, <T(u), u> = <u, T*(u)>

So <u, T*(u)> = <u, T(u)> for all u ∈ V

⟹ T* = T.
So T is self adjoint.
Hence from the two cases the result follows.

W.E.8: If B and B' are two orthonormal bases for a finite dimensional complex inner product
space V, prove that for each linear transformation T on V the matrix [T]B' is unitarily equivalent
to the matrix [T]B.

Solution: Let P be the transition matrix from B to B'; as B and B' are orthonormal bases, P is a
unitary matrix. Thus P*P = I ⟹ P* = P⁻¹.

Now [T]B' = P⁻¹[T]B P = P*[T]B P

So [T]B' is unitarily equivalent to the matrix [T]B.


16.12 Working Procedure:
To find an orthogonal or unitary matrix P and a diagonal matrix D for a given matrix A such
that P*AP = D:

1. Find the characteristic polynomial f(λ) and all eigen values of A.

2. Find a maximal set S of non zero orthogonal eigen vectors of A.

3. Normalize the vectors in S to obtain an orthonormal set S' = {u1, u2, ..., un} of eigen vectors of A.

4. Let P be the matrix whose columns are u1, u2, ..., un.

5. Let D be the diagonal matrix whose diagonal elements are the characteristic roots of A.
6. Then we get the orthogonal (unitary) matrix P and diagonal matrix D such that P*AP = D.

1 2
W.E.9 : If A    find an orthogonal matrix P and a diagonal matrix D such that PT AP  D
2 1

Solution: The characteristic equation is A   I  0

1  2
  0  (1   ) 2  4  0
2 1 

 (1    2)(1    2)  0

 (3   )(   1)  0

   3,   1

For λ = 3, to find a basis for the eigen space of A:

(A - λI)X = O

⟹ [ 1-3    2  ] [ x ] = O ⟹ [ -2    2 ] [ x ] = O
  [  2    1-3 ] [ y ]        [  2   -2 ] [ y ]

R2 + R1 gives [ -2   2 ] [ x ] = O ⟹ -2x + 2y = 0
              [  0   0 ] [ y ]

⟹ x = y

If x = 1, then y = 1.

So v1 = (1, 1).

To find a basis for the eigen space of A for λ = -1:

(A - λI) [ x ] = O
         [ y ]

⟹ [ 1+1    2  ] [ x ] = O ⟹ [ 2  2 ] [ x ] = O ⟹ 2x + 2y = 0
  [  2    1+1 ] [ y ]        [ 2  2 ] [ y ]

⟹ y = -x

If x = 1, then y = -1.

So v2 = (1, -1).

Evidently v2 is orthogonal to v1.

Hence an orthogonal basis of eigen vectors of A is {v1, v2} = {(1, 1), (1, -1)}.

The corresponding orthonormal basis is {u1, u2},

where u1 = v1/||v1|| = (1/√2)(1, 1) = (1/√2, 1/√2)

u2 = v2/||v2|| = (1/√2)(1, -1) = (1/√2, -1/√2)

Thus one possible choice for P is

P = [ 1/√2    1/√2 ]  =  (1/√2) [ 1    1 ]
    [ 1/√2   -1/√2 ]            [ 1   -1 ]

and D = [ 3    0 ]
        [ 0   -1 ]
Note: We can apply the Gram-Schmidt orthogonalisation process to find an orthonormal basis.
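The result of W.E.9 can be verified by computing PᵀAP directly:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(r) for r in zip(*M)]

s = 1 / math.sqrt(2)
A = [[1, 2], [2, 1]]
P = [[s, s], [s, -s]]

D = matmul(transpose(P), matmul(A, P))
# D should be diag(3, -1)
assert abs(D[0][0] - 3) < 1e-12 and abs(D[1][1] + 1) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
```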

4 2 2
W.E.10 : If A   2 4 2  find an orthogonal matrix P and a diagonal matrix D such that PT AP  D .
 2 2 4 

Solution: A is a symmetric matrix. A is orthogonally equivalent to a diagonal matrix.

We will now find the orthogonal matrix P and a diagonal matrix D such that PT AP  D .

To find P:
To find P we first find an orthonormal basis of eigen vectors.

The characteristic equation is |A - λI| = 0:

| 4-λ    2     2  |
|  2    4-λ    2  | = 0; on expansion we get the characteristic equation. Alternatively:
|  2     2    4-λ |

trace of A = 4 + 4 + 4 = 12;  |A| = 4(16 - 4) - 2(8 - 4) + 2(4 - 8) = 48 - 8 - 8 = 32

M11 = | 4  2 | = 16 - 4 = 12;  M22 = | 4  2 | = 12;  M33 = | 4  2 | = 12
      | 2  4 |                       | 2  4 |              | 2  4 |

So M11 + M22 + M33 = 12 + 12 + 12 = 36.

Hence the characteristic equation is λ³ - (trace of A)λ² + (M11 + M22 + M33)λ - |A| = 0,

i.e. f(λ) = λ³ - 12λ² + 36λ - 32 = 0

Here f(2) = 2³ - 12(4) + 36(2) - 32 = 8 - 48 + 72 - 32 = 0.

Hence one characteristic value is λ = 2. Dividing f(λ) by (λ - 2) (synthetic division):

λ = 2 |  1   -12    36   -32
      |        2   -20    32
      |  1   -10    16     0

So f(λ) = (λ - 2)(λ² - 10λ + 16) = 0

⟹ (λ - 2)(λ - 2)(λ - 8) = 0

So λ = 2, 2, 8.

To find a basis for the eigen space of A corresponding to λ = 2: (A - λI)X = O

⟹ [ 4-2    2     2  ] [ x ]       [ 2  2  2 ] [ x ]
  [  2    4-2    2  ] [ y ] = O ⟹ [ 2  2  2 ] [ y ] = O
  [  2     2    4-2 ] [ z ]       [ 2  2  2 ] [ z ]

By R2 - R1, R3 - R1 we get

[ 2  2  2 ] [ x ]
[ 0  0  0 ] [ y ] = O
[ 0  0  0 ] [ z ]

⟹ 2x + 2y + 2z = 0, i.e. x + y + z = 0 ........... (1)
This system has two independent solutions.

Put y = 1, z = 0; then x = -y = -1.

v1 = (-1, 1, 0)

We seek a second solution which is orthogonal to v1. Let this solution be v2 = (a, b, c).

So -a + b = 0, and from (1), a + b + c = 0.

Using a = b we get 2b + c = 0, i.e. c = -2b.

If b = 1, then c = -2 and a = 1.

So v2 = (a, b, c) = (1, 1, -2).

Thus v1 = (-1, 1, 0) and v2 = (1, 1, -2) form an orthogonal basis for the eigen space of λ = 2.

For λ = 8: (A - λI)X = O

⟹ [ 4-8    2     2  ] [ x ]        [ -4    2    2 ] [ x ]
  [  2    4-8    2  ] [ y ] = O ⟹ [  2   -4    2 ] [ y ] = O
  [  2     2    4-8 ] [ z ]        [  2    2   -4 ] [ z ]

2R2 + R1, 2R3 + R1 gives

[ -4    2    2 ] [ x ]
[  0   -6    6 ] [ y ] = O
[  0    6   -6 ] [ z ]

R3 + R2 gives

[ -4    2    2 ] [ x ]
[  0   -6    6 ] [ y ] = O
[  0    0    0 ] [ z ]

⟹ -4x + 2y + 2z = 0

and -6y + 6z = 0,

or 2x - y - z = 0 and -y + z = 0 ⟹ y = z.

So 2x = 2z ⟹ x = z.

Put z = 1; then x = 1, y = 1.

This system gives the non zero solution

v3 = (1, 1, 1), which is orthogonal to both v1 and v2.

Thus v1, v2, v3 form a maximal set of non zero orthogonal eigen vectors of A.

Normalize v1, v2, v3 by dividing each with its corresponding length to obtain an orthonor-
mal basis {u1, u2, u3}:

v1 1
u1   (1,1, 0)
v1 2

v2 1
u2   (1,1, 2)
v2 6

v3 1
u3   (1,1,1)
v3 3

Aliter: we can apply the Gram-Schmidt orthogonalisation process to find an orthonormal basis.

Let P be the matrix whose columns are u1, u2, u3. Then

P = [ -1/√2    1/√6    1/√3 ]
    [  1/√2    1/√6    1/√3 ]
    [   0     -2/√6    1/√3 ]

and the diagonal matrix D, which is formed with the characteristic roots, is given by

D = [ 2  0  0 ]
    [ 0  2  0 ]
    [ 0  0  8 ]

Note: In finding the basis for the eigen space corresponding to λ = 2, we get x + y + z = 0 ...... (1).
By putting y = 0 we get x + z = 0; when z = 1, x = -1.

So v2 = (-1, 0, 1), v1 = (-1, 1, 0).

So {(-1, 1, 0), (-1, 0, 1)} is a basis of the eigen space for λ = 2; this set is not orthogonal. So
we can apply the Gram-Schmidt orthogonalisation process to obtain the orthogonal basis
{(-1, 1, 0), -(1/2)(1, 1, -2)}.

Find an orthogonal basis for the eigen space for λ = 8; the union of these two bases is an
orthogonal basis. Normalising the vectors, we get the orthonormal basis.
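The result of W.E.10 can be verified by computing PᵀAP for the P assembled above (signs as reconstructed here) and comparing with diag(2, 2, 8):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(r) for r in zip(*M)]

a, b, c = 1/math.sqrt(2), 1/math.sqrt(6), 1/math.sqrt(3)
A = [[4, 2, 2], [2, 4, 2], [2, 2, 4]]
P = [[-a,    b,    c],
     [ a,    b,    c],
     [ 0.0, -2*b,  c]]   # columns are the orthonormal eigen vectors u1, u2, u3

D = matmul(transpose(P), matmul(A, P))
expected = [[2, 0, 0], [0, 2, 0], [0, 0, 8]]
assert all(abs(D[i][j] - expected[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```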
16.14 Summary:
In this lesson we discussed unitary operators, orthogonal operators and their equivalent
characterisations, the inverse of a unitary operator, matrices representing unitary and orthogonal
transformations, and equivalent matrices.

16.15 Technical Terms:


The technical terms we come across in this lesson are: unitary operator, orthogonal operator, isometry, orthogonal matrix, unitary matrix, equivalent matrices, unitarily equivalent.

16.16 Model Questions:


1. Define a unitary operator. If T is a unitary operator, then show that T* = T⁻¹.

2. If S and T are unitary operators then show that ST is unitary.


3. Let T be a unitary operator on an inner product space V, and let W be a finite dimensional T-invariant subspace of V. Then prove that W⊥ is T-invariant.

4. Let A be a complex n x n matrix. Then prove that A is normal if and only if A is unitarily equivalent to a diagonal matrix.
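Model questions 1 and 2 can be illustrated concretely with matrices (an illustrative Python sketch; the rotation S and the matrix T below are sample unitary operators, not taken from the lesson):

```python
import math

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def conj_transpose(X):
    return [[X[j][i].conjugate() for j in range(len(X))] for i in range(len(X[0]))]

def is_identity(X, tol=1e-12):
    return all(abs(X[i][j] - (1 if i == j else 0)) < tol
               for i in range(len(X)) for j in range(len(X)))

t = math.pi / 6
S = [[complex(math.cos(t)), complex(-math.sin(t))],
     [complex(math.sin(t)), complex(math.cos(t))]]   # a rotation: unitary
T = [[0j, 1j], [1j, 0j]]                             # also unitary

# Question 1: T* acts as T^(-1), i.e. T T* = T* T = I.
assert is_identity(mul(T, conj_transpose(T)))
assert is_identity(mul(conj_transpose(T), T))

# Question 2: a product of unitary operators is unitary.
ST = mul(S, T)
assert is_identity(mul(ST, conj_transpose(ST)))
```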

16.17 Exercise:
1. For each of the following matrices A, find an orthogonal or unitary matrix P and a diagonal matrix D such that P*AP = D.

 1 1 1 
 
0 2 2  2 6 3
 2 0 0 
 1 1 1   
i)  2 0 2 Ans: P  and D  0 2 0
  
 2 2 0   2 6 3
 0 0 4 
 2 1 
 0 3 
 6

 1 5
5 4   2 2 1 0 
ii)   Ans : P    and D   
1 2  1 5 0 6 
 2 2 

 0 1 0 1 0 0 
 1 0 0   
2. Show that the matrices   and 0 i 0  are unitarily equivalent.
 0 0 1  0 0 i 

3. Show that the matrices

 [ 1  2 ]        [ i  4 ]
 [ 2  i ]  and   [ 1  1 ]

are not unitarily equivalent.

1 
x
4. Find the number and exhibit all 2 x 2 orthogonal matrices of the form  3
 
y z

 1   1   1   1 
 3 8 3  8 3   8 3  3  8 3
3 3
Ans: 4,  ,  
,  and  
 8 3 1   1   1   8 3 1 
 8 3 8 3
 3   
3    3   3 
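The count in the answer can be confirmed by brute force: with first entry 1/3, the unit-length condition on the first row forces x = ±√8/3, and each sign pattern for the second row either is or is not orthogonal to it. A Python sketch:

```python
import math

# Exercise 4: search for orthogonal matrices [[1/3, x], [y, z]].
a = 1 / 3
r = math.sqrt(8) / 3
candidates = [[[a, sx * r], [sy * r, sz * a]]
              for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]

def is_orthogonal(M, tol=1e-12):
    r1, r2 = M
    return (abs(r1[0]**2 + r1[1]**2 - 1) < tol and
            abs(r2[0]**2 + r2[1]**2 - 1) < tol and
            abs(r1[0]*r2[0] + r1[1]*r2[1]) < tol)

solutions = [M for M in candidates if is_orthogonal(M)]
assert len(solutions) == 4   # exactly the four matrices in the answer
```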

5. Prove that the following matrices are unitary.

1 i 3i 
 2 (1  i ) 
 3 2 15   1 i 
 1 4  3i   2 
1
 2
i)  2 3 2 15  
i 1 
ii) 
 1 i 5i  
 2 2 2 
 3 2 15 

 1 2
 i 
 3 3
1 (1  i ) (1  i ) 
iii)  i 1 
2 (1  i ) (1  i ) 
iv)
 
 23 3

6. Prove that the following matrices are orthogonal.

 i)  [ 1/√2   -1/√2   0 ]
     [ 1/√2    1/√2   0 ]
     [  0       0     1 ]

 ii)  [  1/√14    2/√14    3/√14 ]
      [  3/√10     0      -1/√10 ]
      [ -1/√35    5/√35   -3/√35 ]
7. Find an orthogonal matrix whose first row is

 i) (1/√5, 2/√5)     ii) a multiple of (1, 1, 1)

 Ans: i)  [ 1/√5    2/√5 ]
          [ 2/√5   -1/√5 ]

      ii)  [ 1/√3    1/√3    1/√3 ]
           [  0      1/√2   -1/√2 ]
           [ 2/√6   -1/√6   -1/√6 ]

8. Find a unitary matrix whose first row is

 i) a multiple of (1, 1-i)     ii) (1/2, (1/2)i, 1/2 - (1/2)i)

 Ans: i)  [ 1/√3        (1-i)/√3 ]
          [ (1+i)/√3    -1/√3    ]

      ii)  [ 1/2      (1/2)i    1/2 - (1/2)i  ]
           [ i/√2     1/√2         0          ]
           [ 1/2      (1/2)i   -1/2 + (1/2)i  ]

9. Find a 3 x 3 orthogonal matrix P whose first two rows are multiples of u = (1, 1, 1) and v = (1, 2, -3) respectively.

 Ans: P = [  1/√3    1/√3    1/√3  ]
          [  1/√14   2/√14  -3/√14 ]
          [ -5/√42   4/√42   1/√42 ]

10. Real matrices A and B are said to be orthogonally equivalent if there exists an orthogonal matrix P such that B = P^T A P. Show that this relation is an equivalence relation.
11. Prove that if A and B are unitarily equivalent matrices, then A is positive definite if and only if B is
positive definite.
12. Let U be a unitary operator on an inner product space V, and let W be a finite dimensional U-invariant subspace of V. Prove that U(W) = W.

13. Let A and B be n x n matrices that are unitarily equivalent. Prove that tr(A*A) = tr(B*B).
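The identity in question 13 amounts to invariance of the Frobenius norm under unitary equivalence, since tr(A*A) equals the sum of the squared moduli of the entries of A. A numerical illustration (the matrices U and A below are arbitrary samples):

```python
import math

# Question 13 illustrated: B = U* A U with a sample unitary U.
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def ct(X):  # conjugate transpose
    return [[X[j][i].conjugate() for j in range(len(X))] for i in range(len(X[0]))]

def tr_star(X):  # tr(X* X) = sum of squared moduli of the entries
    return sum(abs(X[i][j])**2 for i in range(len(X)) for j in range(len(X[0])))

s2 = 1 / math.sqrt(2)
U = [[complex(s2), complex(s2)], [complex(-s2), complex(s2)]]  # unitary
A = [[1 + 2j, 3j], [2, -1j]]

B = mul(mul(ct(U), A), U)
assert abs(tr_star(A) - tr_star(B)) < 1e-9   # tr(A*A) = tr(B*B)
```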

16.18 Reference Books:


1. Linear Algebra, 4th edition - Stephen H. Friedberg, Arnold J. Insel, Lawrence E. Spence.
2. Schaum's Outlines: Beginning Linear Algebra - Seymour Lipschutz.
3. Topics in Algebra - I.N. Herstein.
4. Linear Algebra - P.P. Gupta and S.K. Sharma.

- A. Mallikharjana Sarma
