Linear Algebra

The document is a 50th edition textbook on Linear Algebra. It contains chapters on vector spaces, linear transformations, inner product spaces, and bilinear forms. Some key topics covered include the definition of a vector space, basis of a vector space, dimension, linear transformations and their properties, matrices, determinants, inner product spaces, orthogonality, and bilinear forms. The textbook is intended for degree and honors students of mathematics at Indian universities.


Educational Publishers

Since 1942

LINEAR ALGEBRA

A. R. Vasishtha
J. N. Sharma

LINEAR ALGEBRA
(A Course in Finite Dimensional Vector Spaces)
(For Degree and Honours Students of all Indian Universities)

By

A. R. Vasishtha                          J. N. Sharma
Retd. Head, Dept. of Mathematics         Ex. Sr. Lect., Dept. of Mathematics
Meerut College, Meerut                   Meerut College, Meerut

&

A. K. Vasishtha
M.Sc., Ph.D.
C.C.S. University, Meerut

KRISHNA Prakashan Media (P) Ltd.


KRISHNA HOUSE, 11, Shivaji Road, Meerut-250 001 (U.P.), India

LINEAR ALGEBRA
(A course in Finite Dimensional Vector Spaces)

First Edition : 1982


Fiftieth Edition: 2018

Name, style or any part of this book thereof may not be reproduced in any form or by any means without the
written permission from the publishers and the author. Every effort has been made to avoid errors or
omissions in this publication. In spite of this, some errors might have crept in. Any mistake, error or
discrepancy noted may be brought to our notice which shall be taken care of in the next edition. It is notified
that neither the publisher nor the author or seller will be responsible for any damage or loss of action to
anyone, of any kind, in any manner, therefrom. For binding mistakes, misprints or for missing pages, etc. the
publisher's liability is limited to replacement within one month of purchase by similar edition. All expenses in
this connection are to be borne by the purchaser.
ISBN: 978-93-87620-68-1

Book Code No.:225-50

Price : ₹295.00 Only

Published by : Satyendra Rastogi


for KRISHNA Prakashan Media(P) Ltd.
11, Shivaji Road, Meerut-250 001 (U.P.), India.
Phones : 91.121.2644766, 2642946, 4026111, 4026112    Fax : 91.121.2645855

Website : www.krishnaprakashan.com
E-mail : [email protected]
Printed at : Vimal Offset Printers, Meerut

Jai Shri Radhey Shyam

Dedicated
to
Lord
Krishna
To The Latest Edition

In this edition the book has been thoroughly revised. Many more
questions asked in various university examinations have been added to
enhance the utility of the book. Suggestions for the further improvement of
the book will be gratefully received.

—The Authors

To The First Edition

This book on Linear Algebra has been written for the use of the students
of Degree-Honours and Post-Graduate classes of Indian Universities.

The subject matter has been discussed in such a simple way that the students
will find no difficulty in understanding it. All the examples have been completely
solved. The students should first try to understand the theorems and then
they should try to solve the problems independently. Definitions should be
read again and again.

Suggestions for the improvement of the book will be gratefully received.

—The Authors

∃    "there exists"
∀    "for every"
⇒    "implies"
⇔    "implies and is implied by"
iff  "if and only if"
∈    "belongs to"
∉    "does not belong to"
⊆    "is a subset of"
⊇    "is a superset of"
⊂    "is a proper subset of"
: or |  "such that"
∪    "union"
∩    "intersection"
∅    "the null set"
R    "field of real numbers"
C    "field of complex numbers"

Contents

1. Vector Spaces (01-106)

Binary operation on a set 01
Group. Definition 02
Field. Definition 04
Vector space. Definition 07
General properties of vector spaces 22
Vector subspaces. Definition 24
Algebra of subspaces 34
Linear combination of vectors. Linear span of a set 36
Linear sum of two subspaces. Definition 38
Linear dependence and linear independence of vectors 42
Basis of a vector space. Definition 60
Finite-dimensional vector spaces. Definition 61
Dimension of a finitely generated vector space 64
Dimension of a subspace 66
Homomorphism of vector spaces or Linear Transformation 80
Isomorphism of vector spaces 81
Quotient space 90
Direct sum of spaces 94
Disjoint subspaces 94
Complementary subspaces 97
Co-ordinates 103

2. Linear Transformations (107-283)

Linear Transformations 107
Linear operator 107
Range and null space of a linear transformation 110
Rank and nullity of a linear transformation 112
Linear transformations as vectors 119
Product of linear transformations 126
Algebra of linear transformations 132
Invertible linear transformations 133
Singular and non-singular transformations 136
Matrix. Definition 153
Representation of transformations by matrices 157
Similarity 168
Determinant of linear transformations on a finite dimensional vector space 173
Trace of a matrix 173
Trace of a linear transformation on a finite dimensional vector space 174
Linear functionals 188
Dual spaces 191
Dual bases 194
Reflexivity 198
Annihilators 205
Invariant direct sum decompositions 212
Reducibility 214
Projections 219
The adjoint or transpose of a linear transformation 233
Sylvester's law of nullity 246
Characteristic values and characteristic vectors or proper vectors 247
The Cayley-Hamilton theorem 255
Diagonalizable operators 257
Minimal polynomial and minimal equation 274

3. Inner Product Spaces (284-396)

Inner product spaces. Definition 284
Euclidean and unitary spaces 285
Norm or length of a vector 289
Schwarz's inequality 291
Orthogonality 304
Orthonormal set 306
Complete orthonormal set 308
Gram-Schmidt orthogonalization process 312
Projection theorem 318
Linear functionals and adjoints 330
Self-adjoint transformation 335
Positive operators 348
Non-negative operators 348
Positive matrix 352
Unitary operators 354
Normal operators 363
Characteristics of spectra 373
Perpendicular projections 377
Spectral theorem 382

4. Bilinear Forms (397-416)

Bilinear Forms. Definitions 397
Bilinear forms as vectors 400
Matrix of a bilinear form 403
Symmetric bilinear forms 409
Skew-symmetric bilinear forms 412
Groups preserving bilinear forms 414

Model Question Papers
Index
1
Vector Spaces

§ 1. Binary operation on a set. Let G be a non-empty set.
Then G × G = {(a, b) : a ∈ G, b ∈ G}. If f : G × G → G, then f is
said to be a binary operation on the set G. The image of the ordered
pair (a, b) under the function f is denoted by f(a, b) or by afb.

Often we use the symbols +, ×, ., o, * etc. to denote binary
operations on a set. Thus '+' will be a binary operation on G iff
a + b ∈ G ∀ a, b ∈ G and a + b is unique.

Similarly '*' will be a binary operation on G iff
a * b ∈ G ∀ a, b ∈ G and a * b is unique.

A binary operation on a set G is sometimes also called a binary
composition in the set G. If '*' is a binary composition in G, then
∀ a, b ∈ G, a * b is a unique element of G. If a * b ∈ G ∀ a,
b ∈ G, then we also say that G is closed with respect to the
composition denoted by '*'.

If there is a binary composition in a set G, the most convenient
notation to denote this composition is the multiplicative
notation. In this notation if a, b ∈ G, then ab represents the
element obtained on multiplying a and b. Thus ab ∈ G ∀ a, b ∈ G if
the binary composition in G has been denoted multiplicatively.

Examples. Addition is a binary operation on the set N of
natural numbers. The sum of two natural numbers is also a natural
number. Therefore N is closed with respect to addition
i.e., a + b ∈ N ∀ a, b ∈ N.

Subtraction is not a binary operation on N. We have
4 − 7 = −3 ∉ N whereas 4 ∈ N, 7 ∈ N. Thus N is not closed with
respect to subtraction.

But subtraction is a binary operation in the set of integers I.
We have a − b ∈ I ∀ a, b ∈ I.

Division is not a binary operation in the set R of all real
numbers. We have 0 ∈ R, 5 ∈ R but 5 ÷ 0 is not an element of R.
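The closure test in the examples above is mechanical enough to automate. The sketch below (function and variable names are my own, not the book's) probes pairs drawn from a finite sample and reports whether the operation stays inside the set; one counterexample, such as 4 − 7 = −3, settles the question for subtraction.

```python
def is_closed(sample, universe, op):
    """op is a binary operation only if every op(a, b) stays in `universe`."""
    return all(op(a, b) in universe for a in sample for b in sample)

N = set(range(1, 1000))       # finite stand-in for the natural numbers
sample = set(range(1, 20))    # pairs drawn from here have sums well inside N

print(is_closed(sample, N, lambda a, b: a + b))   # True: N is closed under +
print(is_closed(sample, N, lambda a, b: a - b))   # False: e.g. 4 - 7 = -3
```

Since N is infinite, a finite check can only refute closure, never prove it; here the failing pair (4, 7) is exactly the book's counterexample.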
§ 2. Algebraic Structure. Definition. A non-empty set G equipped
with one or more binary operations is called an algebraic structure.

Suppose * is a binary operation on G. Then (G, *) is an
algebraic structure. (N, +), (I, +), (I, −), (R, +, ·) are all
algebraic structures. Obviously addition and multiplication are both
binary operations on the set R of real numbers. Therefore (R, +, ·)
is an algebraic structure equipped with two binary operations.
§ 3. Group. Definition.
Let G be a non-empty set equipped with a binary operation
denoted by the symbol * i.e., a * b ∈ G ∀ a, b ∈ G. Then this
algebraic structure is a group if the binary operation * satisfies the
following postulates :

1. Associativity i.e., (a * b) * c = a * (b * c) ∀ a, b, c ∈ G.

2. Existence of right identity. There exists an element e ∈ G
such that a * e = a ∀ a ∈ G.
The element e is called the right identity.

3. Existence of right inverse. Each element of G possesses
right inverse i.e., for each element a ∈ G, there exists an element
a⁻¹ ∈ G such that a * a⁻¹ = e.
The element a⁻¹ is then called the right inverse of a.

Abelian Group or Commutative Group.
Definition. A group G is said to be abelian or commutative if
in addition to the above three postulates the following postulate is
also satisfied :

4. Commutativity i.e., a * b = b * a ∀ a, b ∈ G.

Note 1. In our definition of a group we have denoted the
composition in G by the symbol *. However we can use any symbol
like ×, o, ., etc. to denote the composition. If we use the additive
notation '+' to denote the composition in G, then the right inverse
of an element a ∈ G is denoted by the symbol −a i.e., we have
a + (−a) = e.

Note 2. If we use multiplicative notation to denote the
composition in G, then often we denote the right identity by the
symbol '1'. Thus 1 is an element of G such that a1 = a ∀ a ∈ G.
There should be no confusion about 1. It is not the number 1
but it is an element of the set G whatever it may be.
In multiplicative notation the right inverse a⁻¹ of a is often
denoted by 1/a.
In additive notation, we often denote the right identity by the
symbol '0'. Thus 0 is an element of G such that
a + 0 = a ∀ a ∈ G.

Note 3. In additive notation the element a + (−b) ∈ G is
denoted by a − b. In multiplicative notation the element ab⁻¹ ∈ G
is denoted by a/b.

Ex. Show that the set I of all integers
..., −4, −3, −2, −1, 0, 1, 2, 3, 4, ...
is an abelian group with respect to the operation of addition of
integers.
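The four group postulates of § 3 can be checked exhaustively on a finite set. The sketch below (a construction of my own, not from the text) runs the check on the integers modulo 5 under addition, a finite stand-in for (I, +), which cannot be tested pair by pair.

```python
# Exhaustive check of the group postulates on Z_5 = {0,1,2,3,4}
# under addition mod 5.

G = set(range(5))
op = lambda a, b: (a + b) % 5
e = 0                                  # candidate right identity

associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in G for b in G for c in G)
right_identity = all(op(a, e) == a for a in G)
right_inverse = all(any(op(a, x) == e for x in G) for a in G)
commutative = all(op(a, b) == op(b, a) for a in G for b in G)

print(associative and right_identity and right_inverse and commutative)  # True
```

Swapping `op` for subtraction mod 5 makes `associative` fail, which is why subtraction never yields a group.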
§ 4. Some general properties of groups. Suppose G is a group
with binary operation being denoted multiplicatively. Then we
have the following properties in G.

1. Existence of right cancellation law i.e., if a, b, c are in G,
then ba = ca ⇒ b = c.
Proof. Since a ∈ G, therefore ∃ a⁻¹ ∈ G such that aa⁻¹ = e
where e is the right identity.
Now ba = ca
⇒ (ba) a⁻¹ = (ca) a⁻¹
⇒ b (aa⁻¹) = c (aa⁻¹) [by associativity]
⇒ be = ce [∵ a⁻¹ is right inverse of a]
⇒ b = c. [∵ e is right identity]

2. The right identity is also the left identity i.e., if e is the
right identity then ea = a ∀ a ∈ G.
Proof. Let a ∈ G and e be the right identity. Since a possesses
right inverse, therefore there exists a⁻¹ ∈ G such that aa⁻¹ = e.
Now (ea) a⁻¹ = e (aa⁻¹) [by associativity]
= ee [∵ aa⁻¹ = e]
= e [∵ e is right identity]
= aa⁻¹. [∵ aa⁻¹ = e]
Now (ea) a⁻¹ = aa⁻¹
⇒ ea = a. [by right cancellation law]
∴ e is also the left identity.
Hence e is the identity i.e., ea = a = ae ∀ a ∈ G.

3. The right inverse of an element is also its left inverse i.e., if
a⁻¹ is the right inverse of a then a⁻¹a = e.
Proof. Let a ∈ G and e be the identity element. Let a⁻¹ be
the right inverse of a i.e., aa⁻¹ = e. To prove that a⁻¹a = e.
We have
(a⁻¹a) a⁻¹ = a⁻¹ (aa⁻¹) [by associativity]
= a⁻¹e [∵ aa⁻¹ = e]
= a⁻¹ [∵ e is right identity]
= ea⁻¹. [∵ e is also left identity]
Now (a⁻¹a) a⁻¹ = ea⁻¹
⇒ a⁻¹a = e. [by right cancellation law]
∴ a⁻¹ is also the left inverse of a.
∴ a⁻¹ is the inverse of a i.e., a⁻¹a = e = aa⁻¹.

4. Uniqueness of identity. The identity element in a group is
unique.
Proof. Suppose e and e′ are two identity elements of a group
G. We have
ee′ = e if e′ is identity
and ee′ = e′ if e is identity.
But ee′ is a unique element of G.
∴ ee′ = e and ee′ = e′ ⇒ e = e′.
Hence the identity element is unique.

5. Uniqueness of inverse. The inverse of each element of a
group is unique.
Proof. Let a be any element of a group G and let e be the
identity element. Suppose b and c are two inverses of a
i.e., ba = e = ab
and ca = e = ac.
We have b (ac) = be = b [∵ e is identity]
Also (ba) c = ec = c. [∵ e is identity]
But in a group the composition is associative.
∴ b (ac) = (ba) c
⇒ b = c.

6. Existence of left cancellation law i.e., if a, b, c are in G, then
ab = ac ⇒ b = c.
Proof. We have ab = ac
⇒ a⁻¹ (ab) = a⁻¹ (ac)
⇒ (a⁻¹a) b = (a⁻¹a) c [by associativity]
⇒ eb = ec [∵ a⁻¹ is left inverse of a]
⇒ b = c. [∵ e is left identity]
§ 5. Field. Definition. Suppose F is a non-empty set equipped
with two binary operations called addition and multiplication and
denoted by '+' and '.' respectively i.e., for all a, b ∈ F we have
a + b ∈ F and a.b ∈ F. Then this algebraic structure (F, +, .) is
called a field if the following postulates are satisfied :

F₁. Addition is commutative, i.e.,
a + b = b + a ∀ a, b ∈ F.

F₂. Addition is associative, i.e.,
(a + b) + c = a + (b + c) ∀ a, b, c ∈ F.

F₃. ∃ an element denoted by 0 (called zero) in F such that
a + 0 = a ∀ a ∈ F.

F₄. To each element a in F there exists an element −a in F
such that a + (−a) = 0.

F₅. Multiplication is commutative, i.e.,
a.b = b.a ∀ a, b ∈ F.

F₆. Multiplication is associative, i.e.,
a.(b.c) = (a.b).c ∀ a, b, c ∈ F.

F₇. ∃ a non-zero element denoted by 1 (called one) in F such
that a.1 = a ∀ a ∈ F.

F₈. To every non-zero element a in F there corresponds an
element a⁻¹ (or 1/a) in F, such that a.a⁻¹ = 1.

F₉. Multiplication is distributive with respect to addition i.e.,
for all a, b, c in F, we have a.(b + c) = a.b + a.c.

The element 0 is identity element for addition composition in
F. It is called the zero element of the field. Obviously (F, +) is
an abelian group. If 0 ≠ a ∈ F, then a is called a non-zero element
of F. The element 1 is identity element for multiplication
composition in F. It is called the unity of the field. In a field each
non-zero element is invertible i.e., possesses inverse for multiplication
composition.

Note. In future we shall denote the multiplication composition
in a field F not by the symbol '.' but by multiplicative notation.
Thus we shall omit '.' and we shall write ab in place of a.b.

Subfield. Definition. Let F be a field. A non-empty subset K of
the set F is said to be a subfield of F if K is closed with respect to
the operations of addition and multiplication in F and K itself is a
field for these operations.

Examples of Fields

Example 1. The set Q of all rational numbers is a field, the
addition and multiplication of rational numbers being the two field
compositions. The rational number 0 is the zero element of this
field and the rational number 1 is the unity of this field.

Example 2. The set R of all real numbers is a field, the
addition and multiplication of real numbers being the two field
compositions. Since Q ⊂ R, therefore the field of rational numbers
is a subfield of the field of real numbers.

Example 3. The set C of all complex numbers is a field, the
addition and multiplication of complex numbers being the two field
compositions. Since R ⊂ C, therefore the field of real numbers is a
subfield of the field of complex numbers.

Example 4. The set of numbers of the form a + b√2 with a
and b as rational numbers is a field. We can easily show that all
the field postulates are satisfied in this case.
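Example 4 can be made concrete with exact rational arithmetic. In the sketch below (the pair encoding and helper names are my own), a number a + b√2 is stored as the pair (a, b); the product formula follows from (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2, and the inverse from rationalizing the denominator.

```python
from fractions import Fraction as Fr

# Represent a + b*sqrt(2) as the pair (a, b) with a, b rational.

def mul(x, y):
    (a, b), (c, d) = x, y
    # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
    return (a * c + 2 * b * d, a * d + b * c)

def inv(x):
    a, b = x
    n = a * a - 2 * b * b   # nonzero for (a, b) != (0, 0), since √2 is irrational
    return (a / n, -b / n)

x = (Fr(3), Fr(5))          # the number 3 + 5√2
y = mul(x, inv(x))          # should be the unity 1 + 0√2
print(y)                    # (Fraction(1, 1), Fraction(0, 1))
```

Closure under multiplication and the existence of x⁻¹ for x ≠ 0 are exactly the postulates F₈ and closure that distinguish this set from, say, the set a + b·∛2.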
§ 6. Elementary properties of a field.
Theorem. If F is a field, then for all a, b, c ∈ F
(i) a + b = a + c ⇒ b = c.
(ii) a0 = 0a = 0.
(iii) a (−b) = −(ab).
(iv) −(−a) = a.
(v) (−a)(−b) = ab.
(vi) a (b − c) = ab − ac.
(vii) ab = 0 ⇒ a = 0 or b = 0 (or both).
(viii) a ≠ 0, ab = ac ⇒ b = c.

Proof. (i) We have a + b = a + c
⇒ (−a) + (a + b) = (−a) + (a + c)
⇒ [(−a) + a] + b = [(−a) + a] + c [by F₂]
⇒ [a + (−a)] + b = [a + (−a)] + c [by F₁]
⇒ 0 + b = 0 + c [by F₄]
⇒ b + 0 = c + 0 [by F₁]
⇒ b = c. [by F₃]

(ii) We have
a0 = a (0 + 0) [∵ by F₃, 0 + 0 = 0]
= a0 + a0. [by F₉]
∴ a0 + 0 = a0 + a0 [∵ a0 ∈ F and a0 + 0 = a0]
⇒ 0 = a0. [cancelling a0 by (i) which we have just proved]
Since in a field multiplication is commutative, therefore
0a = a0 = 0.

(iii) We have a [b + (−b)] = a0 [∵ b + (−b) = 0]
⇒ ab + a (−b) = 0 [by F₉ and by (ii)]
⇒ ab + a (−b) = ab + [−(ab)] [∵ ab + [−(ab)] = 0]
⇒ a (−b) = −(ab). [cancelling ab by (i)]

(iv) We have a + (−a) = 0 [by F₄]
⇒ (−a) + a = 0 [by F₁]
⇒ (−a) + a = (−a) + [−(−a)] [by F₄]
⇒ a = −(−a). [by (i)]

(v) (−a)(−b) = −[(−a) b] [by (iii)]
= −[b (−a)] [by F₅]
= −[−(ba)] [by (iii)]
= ba [by (iv)]
= ab. [by F₅]

(vi) a (b − c) = a [b + (−c)]
= ab + a (−c) [by F₉]
= ab + [−(ac)] [by (iii)]
= ab − ac.

(vii) Let ab = 0.
Suppose a ≠ 0. Then a⁻¹ exists.
∴ ab = 0 ⇒ a⁻¹ (ab) = a⁻¹0 ⇒ (a⁻¹a) b = 0 ⇒ (aa⁻¹) b = 0
⇒ 1b = 0 ⇒ b1 = 0 ⇒ b = 0.
Similarly we can prove that if b ≠ 0 then a must be zero.

(viii) We have a ≠ 0 and ab = ac.
∵ a ≠ 0, therefore a⁻¹ exists.
∴ ab = ac ⇒ a⁻¹ (ab) = a⁻¹ (ac) ⇒ (a⁻¹a) b = (a⁻¹a) c
⇒ (aa⁻¹) b = (aa⁻¹) c ⇒ 1b = 1c ⇒ b1 = c1 ⇒ b = c.
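Properties such as (ii) and (vii) can be verified exhaustively in a finite field. The sketch below (my own illustration, not from the text) checks them in Z₅, the integers {0, 1, 2, 3, 4} with addition and multiplication mod 5, which is a field because 5 is prime.

```python
p = 5
F = range(p)

# (vii): ab = 0 implies a = 0 or b = 0 (a field has no zero divisors)
no_zero_divisors = all((a * b) % p != 0
                       for a in F for b in F if a != 0 and b != 0)

# (ii): a0 = 0a = 0 for every a
zero_absorbs = all((a * 0) % p == 0 for a in F)

print(no_zero_divisors and zero_absorbs)   # True
```

Replacing p = 5 with a composite modulus such as 6 makes `no_zero_divisors` fail (2 · 3 ≡ 0), which is one quick way to see that Z₆ is not a field.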
§ 7. Vector spaces. So far we have studied groups and fields.
Now we shall study another important algebraic structure known
as vector space (or linear space). Before giving the definition of a
vector space we shall make a distinction between internal and
external compositions.

Let A be any set. If a * b ∈ A ∀ a, b ∈ A and a * b is
unique, then * is said to be an internal composition in the set A.
Here a and b are both elements of the set A.

Let V and F be any two sets. If a o α ∈ V for all a ∈ F and
for all α ∈ V and a o α is unique, then o is said to be an external
composition in V over F. Here a is an element of the set F and α
is an element of the set V and the resulting element a o α is an
element of the set V.

Vector space. Definition. (Nagarjuna 1980; Kakatiya 91;
Osmania 90; Marathwada 92; Meerut 89, 90; Madras 81;
Kanpur 81; Poona 88)

Let (F, +, .) be a field. The elements of F will be called
scalars. Let V be a non-empty set whose elements will be called
vectors. Then V is a vector space over the field F, if

1. There is defined an internal composition in V called addition
of vectors and denoted by '+'. Also for this composition V is an
abelian group i.e.,
(i) α + β ∈ V for all α, β ∈ V.
(ii) α + β = β + α for all α, β ∈ V.
(iii) α + (β + γ) = (α + β) + γ for all α, β, γ ∈ V.
(iv) ∃ an element 0 ∈ V such that α + 0 = α for all α ∈ V.
This element 0 ∈ V is called the zero vector.
(v) To every vector α ∈ V there exists a vector −α ∈ V such
that α + (−α) = 0.

2. There is defined an external composition in V over F called
scalar multiplication and denoted multiplicatively i.e., aα ∈ V for all
a ∈ F and for all α ∈ V. In other words V is closed with respect to
scalar multiplication.

3. The two compositions i.e., scalar multiplication and addition
of vectors satisfy the following postulates :
(i) a (α + β) = aα + aβ ∀ a ∈ F and ∀ α, β ∈ V.
(ii) (a + b) α = aα + bα ∀ a, b ∈ F and ∀ α ∈ V.
(iii) (ab) α = a (bα) ∀ a, b ∈ F and ∀ α ∈ V.
(iv) 1α = α ∀ α ∈ V where 1 is the unity element of the field F.

When V is a vector space over the field F, we shall say that
V(F) is a vector space. If the field F is understood we can simply
say that V is a vector space. If F is the field R of real numbers, V
is called a real vector space; similarly if F is Q or F is C, we speak
of rational vector spaces or complex vector spaces.

In the above definition of a vector space V over the field F, we
have denoted the addition of vectors by the symbol '+'. This
symbol also denotes the addition composition of the field F i.e.,
addition of scalars. There should be no confusion about the two
compositions though we have used the same symbol to denote each
of them. If α, β ∈ V, then α + β represents addition of V i.e.,
addition of vectors. If a, b ∈ F, then a + b represents addition of
scalars i.e., addition in the field F. Similarly there should be no
confusion in multiplication of scalars i.e., multiplication of the
elements of F and in scalar multiplication i.e., multiplication of an
element of V by an element of F. If a, b ∈ F, then ab represents
multiplication of F and ab ∈ F. If a ∈ F and α ∈ V, then aα
represents scalar multiplication and aα ∈ V. Since 1 ∈ F and α ∈ V,
therefore 1α represents scalar multiplication. Again aα ∈ V, aβ ∈ V,
therefore aα + aβ represents addition of vectors and thus aα + aβ is
an element of V. Further a ∈ F and α + β ∈ V, therefore a (α + β)
represents scalar multiplication and we have a (α + β) ∈ V.
Note 1. Since (V, +) is an abelian group, therefore all the
properties of an abelian group will hold in V. A few of them are
as follows :
(i) α + β = α + γ ⇒ β = γ (left cancellation law)
(ii) β + α = γ + α ⇒ β = γ (right cancellation law)
(iii) α + β = 0 ⇒ α = −β and β = −α.
(iv) −(α + β) = −α − β where by α − β we mean α + (−β).
(v) −(−α) = α.
(vi) α + β = α ⇒ β = 0.
(vii) α + (β − α) = β.
(viii) The additive identity 0 will be unique.
(ix) The additive inverse of each vector will be unique.
(x) If α + β = γ, then α + β − γ = 0.

Note 2. There should be no confusion about the use of the
word vector. Here by vector we do not mean the vector quantity
which we have defined in vector algebra as a directed line segment.
Here we shall call the elements of the set V as vectors.

Note 3. In a vector space we shall be dealing with two types
of zero elements. One is the zero vector and the other is the zero
element of the field F i.e., the 0 scalar. To distinguish between the
two we shall use the zero letter in bold type to represent the zero
vector. Also we shall use the lower case Greek letters α, β, γ
etc. to denote vectors i.e., the elements of V and the lower case
Latin letters a, b, c etc. to denote scalars i.e., the elements of the
field F.
Example 1. A field K can be regarded as a vector space over
any subfield F of K. (Kanpur 1981; Poona 85; Meerut 89)

Here K is the set of vectors. Addition of vectors is the
addition composition in the field K. Since K is a field, therefore
(K, +) is an abelian group. Further the elements of the subfield F
constitute the set of scalars. The composition of scalar multiplication
is the multiplication composition in the field K. Since K is a field,
therefore aα ∈ K ∀ a ∈ F and ∀ α ∈ K because both a and α
are elements of K. If 1 is the unity element of K, then 1 is also
the unity element of the subfield F. We make the following
observations :
(i) a (α + β) = aα + aβ ∀ a ∈ F and ∀ α, β ∈ K. This result
follows from the left distributive law in K.
(ii) (a + b) α = aα + bα ∀ a, b ∈ F and ∀ α ∈ K. This result
is a consequence of the right distributive law in K.
(iii) (ab) α = a (bα) ∀ a, b ∈ F and ∀ α ∈ K. This result is a
consequence of associativity of multiplication in K.
(iv) 1α = α ∀ α ∈ K where 1 is the unity element of the subfield
F. Since 1 is also the unity element of the field K, therefore 1α = α
∀ α ∈ K. Hence K(F) is a vector space.

Note 1. If F is any field, then F itself is a vector space over
the field F.

Note 2. If C is the field of complex numbers and R is the field
of real numbers, then C is a vector space over R because R is a
subfield of C. But R is not a vector space over C. Here R is not
closed with respect to scalar multiplication. For example 2 ∈ R
and 3 + 4i ∈ C but (3 + 4i) 2 ∉ R. (Meerut 1988)
Example 2. The set V of all m × n matrices with their elements
as real numbers is a vector space over the field F of real numbers
with respect to addition of matrices as addition of vectors and
multiplication of a matrix by a scalar as scalar multiplication.
(Meerut 1967)

We can easily prove that V is an abelian group with respect
to addition of matrices. The null matrix O of the type m × n is the
additive identity of this abelian group.

If a ∈ F and α ∈ V (i.e., α is a matrix of the type m × n with
elements as real numbers), then aα ∈ V because aα is also a matrix
of the type m × n with elements as real numbers. Therefore V is
closed with respect to scalar multiplication. Also from our study
of matrices we observe that
(i) a (α + β) = aα + aβ ∀ a ∈ F and ∀ α, β ∈ V.
(ii) (a + b) α = aα + bα ∀ a, b ∈ F and ∀ α ∈ V.
(iii) (ab) α = a (bα) ∀ a, b ∈ F and ∀ α ∈ V.
(iv) 1α = α ∀ α ∈ V where 1 is the unity element of the field F of
real numbers.
Hence V(F) is a vector space.

Note. If V is the set of all m × n matrices with their elements
as rational numbers and F is the field of real numbers, then V will
not be closed with respect to scalar multiplication. For √7 ∈ F
and if α ∈ V, then √7α ∉ V because the elements of the matrix
√7α will not be rational numbers. Therefore V(F) will not be a
vector space.
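The matrix operations of Example 2 are easy to realize directly; in the sketch below (the helper names `mat_add` and `mat_scale` are mine, not the book's) a 2 × 3 real matrix is a list of rows, and one distributive postulate is spot-checked on sample data.

```python
# 2 x 3 real matrices with entrywise addition and scalar multiplication.

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(a, A):
    return [[a * x for x in row] for row in A]

A = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
B = [[0.5, 0.0, -1.0], [2.0, -3.0, 0.0]]

# Postulate (i): a(A + B) = aA + aB, checked for the sample scalar a = 2
lhs = mat_scale(2.0, mat_add(A, B))
rhs = mat_add(mat_scale(2.0, A), mat_scale(2.0, B))
print(lhs == rhs)   # True
```

The zero vector here is the 2 × 3 null matrix, and −α is `mat_scale(-1.0, A)`, matching the abelian group structure described above.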
Example 3. The vector space of all ordered n-tuples over a
field F. (Meerut 1983P; Gorakhpur 81)

Let F be a field. An ordered set α = (a₁, a₂, a₃, ..., aₙ) of n
elements of F is called an n-tuple over F. Let V be the totality of
all ordered n-tuples over F i.e., let
V = {(a₁, a₂, ..., aₙ) : a₁, a₂, ..., aₙ ∈ F}.
Now we shall give a vector space structure to V over the field
F. For this we define equality of two n-tuples, addition of two
n-tuples and multiplication of an n-tuple by a scalar as follows :

Equality of two n-tuples. Two elements α = (a₁, a₂, ..., aₙ) and
β = (b₁, b₂, ..., bₙ) of V are said to be equal if and only if
aᵢ = bᵢ for each i = 1, 2, ..., n.

Addition composition in V. We write
α + β = (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ)
∀ α = (a₁, a₂, ..., aₙ), β = (b₁, b₂, ..., bₙ) ∈ V.
Since a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ are all elements of F, therefore
α + β ∈ V and thus V is closed with respect to addition of n-tuples.

Scalar multiplication composition in V over F. We define
aα = (aa₁, aa₂, ..., aaₙ) ∀ a ∈ F, α = (a₁, a₂, ..., aₙ) ∈ V.
Since aa₁, aa₂, ..., aaₙ are all elements of F, therefore aα ∈ V
and thus V is closed with respect to scalar multiplication.

Now we shall see that V is a vector space for these two
compositions.

Associativity of addition in V. We have
(a₁, a₂, ..., aₙ) + [(b₁, b₂, ..., bₙ) + (c₁, c₂, ..., cₙ)]
= (a₁, a₂, ..., aₙ) + (b₁ + c₁, b₂ + c₂, ..., bₙ + cₙ)
= (a₁ + [b₁ + c₁], a₂ + [b₂ + c₂], ..., aₙ + [bₙ + cₙ])
= ([a₁ + b₁] + c₁, [a₂ + b₂] + c₂, ..., [aₙ + bₙ] + cₙ)
= [(a₁, a₂, ..., aₙ) + (b₁, b₂, ..., bₙ)] + (c₁, c₂, ..., cₙ).

Commutativity of addition in V. We have
(a₁, a₂, ..., aₙ) + (b₁, b₂, ..., bₙ) = (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ)
= (b₁ + a₁, b₂ + a₂, ..., bₙ + aₙ) = (b₁, b₂, ..., bₙ) + (a₁, a₂, ..., aₙ).

Existence of additive identity in V. We have
(0, 0, ..., 0) ∈ V. Also if (a₁, a₂, ..., aₙ) ∈ V, then
(a₁, a₂, ..., aₙ) + (0, 0, ..., 0) = (a₁ + 0, a₂ + 0, ..., aₙ + 0)
= (a₁, a₂, ..., aₙ).
∴ (0, 0, ..., 0) is the additive identity in V.

Existence of additive inverse of each element of V. If
(a₁, a₂, ..., aₙ) ∈ V, then (−a₁, −a₂, ..., −aₙ) ∈ V.
Also we have
(−a₁, −a₂, ..., −aₙ) + (a₁, a₂, ..., aₙ)
= (−a₁ + a₁, ..., −aₙ + aₙ) = (0, 0, ..., 0).
∴ (−a₁, −a₂, ..., −aₙ) is the additive inverse of (a₁, a₂, ..., aₙ).

Thus V is an abelian group with respect to addition. Further
we observe that

1. If a ∈ F and α = (a₁, a₂, ..., aₙ), β = (b₁, b₂, ..., bₙ) ∈ V, then
a (α + β) = a (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ)
= (a [a₁ + b₁], a [a₂ + b₂], ..., a [aₙ + bₙ])
= (aa₁ + ab₁, aa₂ + ab₂, ..., aaₙ + abₙ)
= (aa₁, aa₂, ..., aaₙ) + (ab₁, ab₂, ..., abₙ)
= a (a₁, a₂, ..., aₙ) + a (b₁, b₂, ..., bₙ) = aα + aβ.

2. If a, b ∈ F and α = (a₁, a₂, ..., aₙ) ∈ V, then
(a + b) α = ([a + b] a₁, [a + b] a₂, ..., [a + b] aₙ)
= (aa₁ + ba₁, aa₂ + ba₂, ..., aaₙ + baₙ)
= (aa₁, aa₂, ..., aaₙ) + (ba₁, ba₂, ..., baₙ)
= a (a₁, a₂, ..., aₙ) + b (a₁, a₂, ..., aₙ) = aα + bα.

3. If a, b ∈ F and α = (a₁, a₂, ..., aₙ) ∈ V, then
(ab) α = ([ab] a₁, [ab] a₂, ..., [ab] aₙ) = (a [ba₁], a [ba₂], ..., a [baₙ])
= a (ba₁, ba₂, ..., baₙ) = a [b (a₁, a₂, ..., aₙ)] = a (bα).

4. If 1 is the unity element of F and α = (a₁, a₂, ..., aₙ) ∈ V,
then 1α = (1a₁, 1a₂, ..., 1aₙ) = (a₁, a₂, ..., aₙ) = α.

Hence V(F) is a vector space over F. The vector space of all
ordered n-tuples over F will be denoted by Vₙ(F). Sometimes we also
denote it by F⁽ⁿ⁾ or by Fⁿ. Here the zero vector 0 is the n-tuple
(0, 0, ..., 0).

Note. V₂(F) = {(a₁, a₂) : a₁, a₂ ∈ F} is the vector space of all
ordered pairs over F. Similarly V₃(F) = {(a₁, a₂, a₃) : a₁, a₂, a₃ ∈ F}
is the vector space of all ordered triads over F.
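The componentwise definitions of Example 3 map directly onto Python tuples. The sketch below (function names `add` and `scale` are my own) works over the field of rationals for exactness and spot-checks a few of the postulates just verified.

```python
from fractions import Fraction as Fr

# n-tuples over Q as Python tuples, with componentwise operations.

def add(alpha, beta):
    return tuple(a + b for a, b in zip(alpha, beta))

def scale(a, alpha):
    return tuple(a * x for x in alpha)

alpha = (Fr(1), Fr(2), Fr(3))
beta  = (Fr(4), Fr(-1), Fr(0))
zero  = (Fr(0),) * 3                  # the zero vector (0, 0, 0)

print(add(alpha, beta) == add(beta, alpha))            # commutativity
print(add(alpha, zero) == alpha)                       # additive identity
print(scale(Fr(2), add(alpha, beta)) ==
      add(scale(Fr(2), alpha), scale(Fr(2), beta)))    # a(α + β) = aα + aβ
```

These are sample checks, not proofs; the proofs above hold for arbitrary elements because each component computation happens inside the field F.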
Example 4. The vector space of all polynomials over a field F.
(Meerut 1982; Andhra 92)

Sol. Let F[x] denote the set of all polynomials in an indeterminate
x over a field F. Then F[x] is a vector space over the field F
with respect to addition of two polynomials as addition of vectors
and the product of a polynomial by a constant polynomial (i.e.,
by an element of F) as scalar multiplication.

Let f(x) = Σ aᵢxⁱ = a₀ + a₁x + a₂x² + a₃x³ + ...,
g(x) = Σ bᵢxⁱ = b₀ + b₁x + b₂x² + b₃x³ + ...
and h(x) = Σ cᵢxⁱ = c₀ + c₁x + c₂x² + c₃x³ + ...
be any arbitrary members of F[x].

Equality of two polynomials. We define f(x) = g(x) if and only
if aᵢ = bᵢ for each i = 0, 1, 2, ... .

Addition composition in F[x]. We define
f(x) + g(x) = (a₀ + b₀) + (a₁ + b₁) x + (a₂ + b₂) x² + ...
= Σ (aᵢ + bᵢ) xⁱ.
Since a₀ + b₀, a₁ + b₁, ... are all elements of F, therefore
f(x) + g(x) ∈ F[x] and thus F[x] is closed with respect to addition
of polynomials.

Scalar multiplication in F[x] over F. If k is any scalar i.e.,
k ∈ F, we define
kf(x) = ka₀ + (ka₁) x + (ka₂) x² + (ka₃) x³ + ...
= Σ (kaᵢ) xⁱ.
Since ka₀, ka₁, ... are all elements of F, therefore
kf(x) ∈ F[x] and thus F[x] is closed with respect to scalar
multiplication.

Now we shall show that F[x] is a vector space for these two
compositions.

Commutativity of addition in F[x]. We have
f(x) + g(x) = (a₀ + b₀) + (a₁ + b₁) x + (a₂ + b₂) x² + ...
= (b₀ + a₀) + (b₁ + a₁) x + (b₂ + a₂) x² + ...
[∵ addition in the field F is commutative]
= g(x) + f(x).

Associativity of addition in F[x]. We have
[f(x) + g(x)] + h(x) = Σ (aᵢ + bᵢ) xⁱ + Σ cᵢxⁱ
= Σ [(aᵢ + bᵢ) + cᵢ] xⁱ = Σ [aᵢ + (bᵢ + cᵢ)] xⁱ
= Σ aᵢxⁱ + Σ (bᵢ + cᵢ) xⁱ = f(x) + [g(x) + h(x)].

Existence of additive identity in F[x]. Let 0 denote the zero
polynomial over the field F i.e.,
0 = 0 + 0x + 0x² + 0x³ + ... .
Then 0 ∈ F[x] and 0 + f(x) = f(x).
∴ the zero polynomial 0 is the additive identity.

Existence of additive inverse of each member of F[x]. Let −f(x)
be the polynomial over the field F defined as
−f(x) = (−a₀) + (−a₁) x + (−a₂) x² + ...
= −a₀ − a₁x − a₂x² − ... .
Then −f(x) ∈ F[x] and we have
−f(x) + f(x) = 0 i.e., the zero polynomial.
∴ −f(x) is the additive inverse of f(x).

Thus F[x] is an abelian group with respect to addition of
polynomials.

Now for the operation of scalar multiplication we make the
following observations.

1. If k ∈ F, then
k [f(x) + g(x)] = k [(a₀ + b₀) + (a₁ + b₁) x + (a₂ + b₂) x² + ...]
= k (a₀ + b₀) + k (a₁ + b₁) x + k (a₂ + b₂) x² + ...
= (ka₀ + kb₀) + (ka₁ + kb₁) x + (ka₂ + kb₂) x² + ...
= [ka₀ + (ka₁) x + (ka₂) x² + ...] + [kb₀ + (kb₁) x + (kb₂) x² + ...]
= k (a₀ + a₁x + a₂x² + ...) + k (b₀ + b₁x + b₂x² + ...)
= kf(x) + kg(x).

2. If k₁, k₂ ∈ F, then
(k₁ + k₂) f(x) = (k₁ + k₂) a₀ + [(k₁ + k₂) a₁] x + [(k₁ + k₂) a₂] x² + ...
= (k₁a₀ + k₂a₀) + (k₁a₁ + k₂a₁) x + (k₁a₂ + k₂a₂) x² + ...
= [k₁a₀ + (k₁a₁) x + (k₁a₂) x² + ...] + [k₂a₀ + (k₂a₁) x + (k₂a₂) x² + ...]
= k₁ (a₀ + a₁x + a₂x² + ...) + k₂ (a₀ + a₁x + a₂x² + ...)
= k₁f(x) + k₂f(x).

3. If k₁, k₂ ∈ F, then
(k₁k₂) f(x) = (k₁k₂) a₀ + [(k₁k₂) a₁] x + [(k₁k₂) a₂] x² + ...
= k₁ (k₂a₀) + [k₁ (k₂a₁)] x + [k₁ (k₂a₂)] x² + ...
= k₁ [k₂a₀ + (k₂a₁) x + (k₂a₂) x² + ...]
= k₁ [k₂f(x)].

4. If 1 is the unity element of the field F, then
1f(x) = (1a₀) + (1a₁) x + (1a₂) x² + ...
= a₀ + a₁x + a₂x² + ... = f(x).

Hence F[x] is a vector space over the field F.
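A polynomial in Example 4 is determined by its coefficient sequence, so a coefficient list is a natural computer representation. The sketch below (the list encoding and helper names are mine) implements the two compositions over Q; `zip_longest` pads the shorter list with zero coefficients, mirroring the convention that missing terms have coefficient 0.

```python
from fractions import Fraction as Fr
from itertools import zip_longest

# A polynomial over Q stored as its coefficient list [a0, a1, a2, ...].

def poly_add(f, g):
    return [a + b for a, b in zip_longest(f, g, fillvalue=Fr(0))]

def poly_scale(k, f):
    return [k * a for a in f]

f = [Fr(1), Fr(0), Fr(2)]          # 1 + 2x^2
g = [Fr(3), Fr(-1)]                # 3 - x

print(poly_add(f, g))               # [Fraction(4, 1), Fraction(-1, 1), Fraction(2, 1)]
print(poly_scale(Fr(2), poly_add(f, g)) ==
      poly_add(poly_scale(Fr(2), f), poly_scale(Fr(2), g)))   # k(f+g) = kf+kg
```

The zero polynomial is the empty list (or any list of zeros), and −f(x) is `poly_scale(Fr(-1), f)`, matching the additive inverse constructed above.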
Example 5. Lei S be any non-empty set and let F be anyfield.
Let V be the set of allfunctionsfrom S to F /.e., let
l^—{f-f‘ S->F).
Let us define sum oftwo elementsfand g in V asfollows
(/+^)(2f)=/(2f)+g(x) ¥ x^S.
Also lei us define scalar multiplication of an elementf in V by
an element c in F asfollows:
(c/)(jc)=:c/(x) V JceS.
Then V(F) is a vector space*. (Marathwada 1971)
Solution 1.
«● ^ . We have v *ss,(/+g)(*)=/(;r>+y(*).
Since/(2f) and g (Af) are m Fand Fis a field, therefore/(x)+g(;c)
Vectof^paces 15

is also in F. Thus is also a function from S to F. Therefore


/+g e F for all/, g e K.
Associativity of addition. We have
[(f+g)+h] (x) = (f+g) (x)+h (x) [by def.]
= [f(x)+g(x)]+h (x) [by def.]
= f(x)+[g(x)+h (x)]
[∵ f(x), g(x), h(x) are elements of F and addition in F is
associative]
= f(x)+(g+h) (x) = [f+(g+h)] (x).
∴ (f+g)+h = f+(g+h).
Commutativity of addition. We have
(f+g) (x) = f(x)+g(x)
= g(x)+f(x) [∵ addition in F is commutative]
= (g+f) (x).
∴ f+g = g+f.
Existence of additive identity. Let us define a function
0 : S → F such that 0 (x) = 0 ∀ x ∈ S.
Then 0 ∈ V and it is called the zero function.
We have (f+0) (x) = f(x)+0 (x) = f(x)+0 = f(x).
∴ f+0 = f.
∴ the function 0 is the additive identity.
Existence of additive inverse. Let f ∈ V. Let us define a
function -f : S → F by the formula
(-f) (x) = -[f(x)] ∀ x ∈ S.
Then -f ∈ V and we have
[f+(-f)] (x) = f(x)+[(-f) (x)] = f(x)+[-f(x)]
= f(x)-f(x) = 0 = 0 (x).
∴ the function -f is the additive inverse of f.
Thus V is an abelian group with respect to addition composition.
2. If c ∈ F and f ∈ V, then ∀ x ∈ S, we have
(cf) (x) = cf(x).
Now f(x) ∈ F and c ∈ F. Therefore cf(x) is in F. Then cf
is a function from S to F. Therefore
cf ∈ V for all c ∈ F and for all f ∈ V.
Thus V is closed with respect to scalar multiplication.
3. We observe that
(i) If c ∈ F and f, g ∈ V, then
[c (f+g)] (x) = c [(f+g) (x)] = c [f(x)+g(x)] = cf(x)+cg(x)
= (cf) (x)+(cg) (x) = (cf+cg) (x).
∴ c (f+g) = cf+cg.
(ii) If c1, c2 ∈ F and f ∈ V, then
[(c1+c2) f] (x) = (c1+c2) f(x) = c1f(x)+c2f(x)
= (c1f) (x)+(c2f) (x) = (c1f+c2f) (x).
∴ (c1+c2) f = c1f+c2f.
(iii) If c1, c2 ∈ F and f ∈ V, then
[(c1c2) f] (x) = (c1c2) f(x) = c1 [c2f(x)] = c1 [(c2f) (x)]
= [c1 (c2f)] (x).
∴ (c1c2) f = c1 (c2f).
(iv) If 1 is the unity element of F and f ∈ V, then
(1f) (x) = 1f(x) = f(x).
∴ 1f = f.
Hence V is a vector space over F.
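When S is finite, the function space of Example 5 can be modelled very concretely. The illustration below is my own (the three-element set S, the helper names f_add and f_scale, and the choice F = R are all assumptions for the demo): a function f : S → F is stored as a dict, and the operations become pointwise.

```python
S = {'p', 'q', 'r'}          # an arbitrary finite set S, chosen for the demo

def f_add(f, g):
    return {x: f[x] + g[x] for x in S}      # (f+g)(x) = f(x) + g(x)

def f_scale(c, f):
    return {x: c * f[x] for x in S}         # (cf)(x) = c f(x)

zero = {x: 0.0 for x in S}                  # the zero function

f = {'p': 1.0, 'q': -2.0, 'r': 0.5}
g = {'p': 3.0, 'q': 4.0, 'r': -1.0}

assert f_add(f, zero) == f                  # zero function is the identity
assert f_add(f, f_scale(-1.0, f)) == zero   # -f is the additive inverse
assert f_add(f, g) == f_add(g, f)           # commutativity of addition
assert f_scale(2.0, f_add(f, g)) == f_add(f_scale(2.0, f), f_scale(2.0, g))
```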
Example 6. The vector space of all real valued continuous
(differentiable or integrable) functions defined in some interval [0, 1].
(Nagarjuna 1980)
Solution. If f is a real valued function in the interval [0, 1],
then we mean that f(x) is a real number ∀ x ∈ [0, 1]. Let V
denote the set of all real valued continuous functions defined in
the interval [0, 1]. Then V is a vector space over the field R of
real numbers with vector addition and scalar multiplication defined
as below :
(f+g) (x) = f(x)+g(x) ∀ f, g ∈ V
and (af) (x) = af(x) ∀ a ∈ R, ∀ f ∈ V.
The sum of two continuous functions is also a continuous
function. Therefore if f, g ∈ V, then f+g ∈ V. Thus V is closed
with respect to addition of vectors. Further if a ∈ R and f is a
real valued continuous function in the interval [0, 1], then af is
also a real valued continuous function in the interval [0, 1].
Therefore V is closed with respect to scalar multiplication. To
verify the other postulates of a vector space proceed as in
Example 5.
Example 7. The set of all convergent sequences is a vector
space over the field of real numbers.

Solution. Let V denote the set of all convergent sequences
over the field of real numbers.
Let α = {a_1, a_2, ..., a_n, ...} = {a_n}, β = {b_1, b_2, ..., b_n, ...} = {b_n},
γ = {c_1, c_2, ..., c_n, ...} = {c_n} be three convergent sequences.
1. (V, +) is an abelian group.
(i) We have α+β = {a_n}+{b_n} = {a_n+b_n} which is also a con-
vergent sequence. Therefore V is closed for addition of sequences.
(ii) Commutativity of addition. We have
α+β = {a_n}+{b_n} = {a_n+b_n} = {b_n+a_n} = {b_n}+{a_n} = β+α.
(iii) Associativity of addition. We have
α+(β+γ) = {a_n}+[{b_n}+{c_n}] = {a_n}+{b_n+c_n}
= {a_n+(b_n+c_n)} = {(a_n+b_n)+c_n} = {a_n+b_n}+{c_n}
= [{a_n}+{b_n}]+{c_n} = (α+β)+γ.
(iv) Existence of additive identity. The zero sequence
{0} = {0, 0, 0, ...} is the additive identity.
(v) Existence of additive inverse. For every sequence {a_n}
there exists a sequence {-a_n} such that
{a_n}+{-a_n} = {a_n-a_n} = {0} = additive identity.
∴ (V, +) is an abelian group.
2. V is closed for scalar multiplication.
Let a be any scalar i.e., a be any real number. Then
aα = a {a_n} = {a a_n} which is also a convergent sequence because
lim (n→∞) a a_n = a lim (n→∞) a_n.
Thus V is closed for scalar multiplication.
3. Laws of scalar multiplication. Let a, b ∈ R. We have
(i) a (α+β) = a [{a_n}+{b_n}] = a {a_n+b_n}
= {a (a_n+b_n)} = {a a_n+a b_n} = {a a_n}+{a b_n}
= a {a_n}+a {b_n} = aα+aβ.
(ii) (a+b) α = {(a+b) a_n}
= {a a_n+b a_n} = {a a_n}+{b a_n}
= a {a_n}+b {a_n} = aα+bα.
(iii) (ab) α = (ab) {a_n} = {(ab) a_n} = {a (b a_n)}
= a {b a_n} = a [b {a_n}] = a (bα).
(iv) 1α = 1 {a_n} = {1 a_n} = {a_n} = α.
Thus all the postulates of a vector space are satisfied. Hence
V is a vector space over the field of real numbers.
Example 8. Prove that the set of all vectors in a plane over
the field of real numbers is a vector space.

Solution. Let V be the set of all vectors in a plane defined as
directed line segments. Let R be the field of real numbers whose
elements will be scalars.
Let α, β ∈ V. If α = AB and β = BC, then we define
α+β = AB+BC = AC. Since AC is also a vector, therefore
α, β ∈ V => α+β ∈ V and thus V is closed for addition of vectors.
Also from our knowledge of Vector Algebra we know that addition
of vectors on the set V is commutative as well as associative. The
zero vector 0 = AA is identity for addition of vectors. If α = AB,
then the vector -α = BA is the additive inverse of α because
-α+α = BA+AB = BB = the zero vector i.e., the identity for addi-
tion of vectors.
Hence (V, +) is an abelian group.
If α ∈ V and m ∈ R i.e., m is any scalar, then the scalar
multiplication mα is defined as a vector whose direction is that
of α or opposite to that of α according as m is +ive or -ive and
| mα | = | m |.| α |.
Since m ∈ R, α ∈ V => mα ∈ V, therefore V is closed for
scalar multiplication.
Now if a, b ∈ R and α, β ∈ V, then from our knowledge of
Vector Algebra we know that
a (α+β) = aα+aβ, (a+b) α = aα+bα and (ab) α = a (bα).
Also if α is any vector and 1 is the multiplicative identity of
the field R, then by our definition of scalar multiplication the
vector 1α is in the direction of the vector α and
| 1α | = | 1 |.| α | = 1.| α | = | α |.
∴ by our definition of equality of two vectors, we have
1α = α.
Hence V is a vector space over the field R.

Example 9. Let V be the set of all pairs (x, y) of real numbers,
and let F be the field of real numbers. Define
(x, y)+(x1, y1) = (x+x1, 0)
c (x, y) = (cx, 0).
Is V, with these operations, a vector space over the field of real
numbers ?

Solution. If any of the postulates of a vector space is not
satisfied, then V will not be a vector space. We shall show that for
the operation of addition of vectors as defined in this problem the
identity element does not exist. Suppose the ordered pair (x1, y1) is
to be the identity element for the operation of addition of vectors.
Then we must have
(x, y)+(x1, y1) = (x, y) ∀ x, y ∈ R
=> (x+x1, 0) = (x, y) ∀ x, y ∈ R.
But if y ≠ 0, then we cannot have (x+x1, 0) = (x, y). Thus
there exists no element (x1, y1) of V such that
(x, y)+(x1, y1) = (x, y) ∀ (x, y) ∈ V.
Therefore the identity element does not exist and V is not a
vector space over the field R.
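The failure can be confirmed numerically. In the sketch below (my own, not the book's), addition is taken to be (x, y)+(x1, y1) = (x+x1, 0), as in the solution above, and we check that even the natural candidate (0, 0) fails to act as an additive identity for a vector whose second coordinate is non-zero.

```python
def add(u, v):
    """The addition of Example 9: second coordinate is always forced to 0."""
    return (u[0] + v[0], 0.0)

candidate = (0.0, 0.0)   # the only plausible identity candidate
alpha = (2.0, 5.0)       # any vector with y != 0

# add(alpha, candidate) has second coordinate 0, so it cannot equal alpha
assert add(alpha, candidate) != alpha
```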
Example 10. Let V be the set of all pairs (x, y) of real num-
bers, and let F be the field of real numbers. Examine in each of the
following cases whether V is a vector space over the field of real
numbers or not :
(i) (x, y)+(x1, y1) = (x+x1, y+y1)
c (x, y) = (| c | x, | c | y).
(ii) (x, y)+(x1, y1) = (x+x1, y+y1)
c (x, y) = (0, cy).
(iii) (x, y)+(x1, y1) = (x+x1, y+y1)
c (x, y) = (c^2x, c^2y).
Solution. (i) We shall show that in this case the postulate
(a+b) α = aα+bα ∀ a, b ∈ F and α ∈ V fails.
Let α = (x, y) and a, b ∈ R. We have
(a+b) α = (a+b) (x, y) = (| a+b | x, | a+b | y),
by def. ...(1)
Also aα+bα = a (x, y)+b (x, y)
= (| a | x, | a | y)+(| b | x, | b | y),
by def. of scalar multiplication
= (| a | x+| b | x, | a | y+| b | y),
by def. of addition of vectors
= ({| a |+| b |} x, {| a |+| b |} y). ...(2)
Since | a+b | ≠ | a |+| b | in general, therefore from (1) and (2), we
conclude that in general (a+b) α ≠ aα+bα. Hence V(R) is not a
vector space.
(ii) We shall show that in this case the postulate 1α = α ∀
α ∈ V fails. Let α = (x, y) where x, y ∈ R. By definition of
scalar multiplication we have 1α = 1 (x, y) = (0, 1y) = (0, y).

But (0, y) ≠ (x, y) if x ≠ 0. Thus there exists α ∈ V such that
1α ≠ α. Hence V(R) is not a vector space.
(iii) Show that in this case the postulate (a+b) α = aα+bα
∀ a, b ∈ F and α ∈ V fails. Note that in general
(a+b)^2 ≠ a^2+b^2.
Ex. 11. Let R be the field of real numbers and let Pn be the
set of all polynomials (of degree at most n) over the field R. Prove
that Pn is a vector space over the field R.
Sol.
Here Pn is the set of all polynomials of degree at most n
over the field R. The set Pn also includes the zero polynomial.
Thus Pn = {f(x) : f(x) = a0+a1x+a2x^2+...+anx^n,
where a0, a1, ..., an ∈ R}.
If f(x) = a0+a1x+a2x^2+...+anx^n
and g(x) = b0+b1x+b2x^2+...+bnx^n
be any two members of Pn, then
f(x)+g(x) = (a0+b0)+(a1+b1) x+...+(an+bn) x^n
is also a member of Pn because it is also a polynomial of degree
at most n over the field R.
Thus Pn is closed for addition of polynomials.
Also we know that addition of polynomials is commutative as
well as associative. The zero polynomial 0 is a member of Pn and
is identity for addition of polynomials.
Also if f(x) = a0+a1x+...+anx^n ∈ Pn,
then -f(x) = -a0-a1x-...-anx^n ∈ Pn because it is also a poly-
nomial of degree at most n over the field R.
We have -f(x)+f(x) = the zero polynomial.
∴ the polynomial -f(x) is the inverse of f(x) for addition of
polynomials.
Hence Pn is an abelian group for addition of polynomials.
Now if f(x) = a0+a1x+a2x^2+...+anx^n is any member of Pn
and c ∈ R, we define scalar multiplication cf(x) by the relation
cf(x) = ca0+(ca1) x+(ca2) x^2+...+(can) x^n.
Obviously cf(x) ∈ Pn because it is also a polynomial of
degree at most n over the field R. Thus Pn is closed for scalar
multiplication.
Now if a, b ∈ R and f(x), g(x) ∈ Pn, we have
a [f(x)+g(x)] = af(x)+ag(x), (a+b) f(x) = af(x)+bf(x)
and (ab) f(x) = a [bf(x)], as can be easily shown.
Also 1f(x) = f(x) ∀ f(x) ∈ Pn.
Hence Pn is a vector space over the field R.
Ex. 12. How many elements are there in the vector space of
polynomials of degree at most n in which the coefficients are the ele-
ments of the field I(p) over the field I(p), p being a prime number ?
(Meerut 1975)
Sol. The field I(p) is the field
({0, 1, 2, ..., p-1}, +p, ×p).
The number of distinct elements in the field I(p) is p.
If f(x) is a polynomial of degree at most n over the field I(p),
then
f(x) = a0+a1x+a2x^2+...+anx^n, where a0, a1, a2, ..., an ∈ I(p).
Now in the polynomial f(x), the coefficient of each of the n+1
terms a0, a1x, a2x^2, ..., anx^n can be filled in p ways because any of
the p elements of the field I(p) can be filled there.
Thus we can have p×p×p×... upto (n+1) times i.e., p^(n+1) dis-
tinct polynomials of degree at most n over the field I(p). Hence if
Pn is the vector space of polynomials of degree at most n in which
the coefficients are the elements of the field I(p) over the field I(p),
then Pn has p^(n+1) distinct elements.
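The count p^(n+1) can be confirmed by direct enumeration for small p and n (a check of my own, feasible only for small parameters): a polynomial of degree at most n over I(p) is just an (n+1)-tuple of coefficients, each drawn from {0, 1, ..., p-1}.

```python
from itertools import product

def count_polynomials(p, n):
    """Enumerate every coefficient tuple (a0, ..., an) over I(p) and count."""
    return sum(1 for _ in product(range(p), repeat=n + 1))

assert count_polynomials(3, 2) == 3 ** 3   # p = 3, n = 2: p^(n+1) = 27
assert count_polynomials(5, 1) == 5 ** 2   # p = 5, n = 1: 25
```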
Exercises
1. In the axiom (c1+c2) α = c1α+c2α of a vector space what ope-
ration does each plus sign represent ? (Meerut 1976)
2. In the axiom (c1c2) α = c1 (c2α) of a vector space what operation
does each product represent ?
3. What is the zero vector in the vector space R^n ?
4. Is the set of all polynomials in x of degree < 2 a vector space ?
Ans. Yes.
5. Is the set of all non-zero polynomials in x of degree 2 a vector
space ? (Meerut 1977)
Ans. No.
6. Show that any field F may be considered as a vector space
over F if scalar multiplication is identified with the field
multiplication.
7. Show that the complex field C is a vector space over the real
field R. (Kumayon 1987)
8. Prove that the set V = {(a, b) : a, b ∈ R} is a vector space over
the field R for the compositions of addition and scalar multi-
plication defined as under :
(a, b)+(c, d) = (a+c, b+d)
k (a, b) = (ka, kb). (Meerut 1987, Garhwal 86)
9. Let V be the set of all pairs (x, y) of real numbers and let F
be the field of real numbers. Define
(x, y)+(x1, y1) = (x+x1, y+y1)
c (x, y) = (cx, y).
Show that with these operations V is not a vector space over
the field of real numbers. (Meerut 1976)
10. Let V be the set of all pairs (x, y) of real numbers and let F
be the field of real numbers. Define
(x, y)+(x1, y1) = (3y+3y1, -x-x1)
c (x, y) = (3cy, -cx).
Verify that F, with these operations, is not a vector space over
the field of real numbers. (Meerut 1981, 83, 90)
§ 8. General properties of vector spaces.
Theorem 1. Let V(F) be a vector space and 0 be the zero vector
of V. Then
(i) a0 = 0 ∀ a ∈ F.
(Andhra 1992; Nagarjuna 91; Meerut 89; I.A.S. 74)
(ii) 0α = 0 ∀ α ∈ V. (Nagarjuna 1991; Andhra 92;
Meerut 90, Kanpur 80; I.A.S. 74)
(iii) a (-α) = -(aα) ∀ a ∈ F, ∀ α ∈ V.
(Andhra 1992; Meerut 90)
(iv) (-a) α = -(aα) ∀ a ∈ F, ∀ α ∈ V.
(Andhra 1992; Meerut 86, Kanpur 80; I.A.S. 74)
(v) a (α-β) = aα-aβ ∀ a ∈ F and ∀ α, β ∈ V.
(Meerut 1989)
(vi) aα = 0 => a = 0 or α = 0. (Meerut 1989; I.A.S. 74)
Proof. (i) We have a0 = a (0+0) [∵ 0 = 0+0]
= a0+a0.
∴ 0+a0 = a0+a0 [∵ a0 ∈ V and 0+a0 = a0].
Now V is an abelian group with respect to addition.
Therefore by right cancellation law in V, we get 0 = a0.
(ii) We have 0α = (0+0) α [∵ 0 = 0+0]
= 0α+0α.
∴ 0+0α = 0α+0α [∵ 0α ∈ V and 0+0α = 0α].
Now V is an abelian group with respect to addition of vectors.
Therefore by right cancellation law in V, we get 0 = 0α.
(iii) We have a [α+(-α)] = aα+a (-α)
=> a0 = aα+a (-α)
=> 0 = aα+a (-α) [∵ a0 = 0]
=> a (-α) is the additive inverse of aα
=> a (-α) = -(aα).
(iv) We have [a+(-a)] α = aα+(-a) α
=> 0α = aα+(-a) α
=> 0 = aα+(-a) α
=> (-a) α is the additive inverse of aα
=> (-a) α = -(aα).
(v) We have a (α-β) = a [α+(-β)] = aα+a (-β)
= aα+[-(aβ)] [∵ a (-β) = -(aβ)]
= aα-aβ.
(vi) Let aα = 0 and a ≠ 0. Then a⁻¹ exists because a is a non-
zero element of the field F.
∴ aα = 0 => a⁻¹ (aα) = a⁻¹ 0 => (a⁻¹ a) α = 0 => 1α = 0 => α = 0.
Again let aα = 0 and α ≠ 0. Then to prove that a = 0. Suppose
a ≠ 0. Then a⁻¹ exists.
∴ aα = 0 => a⁻¹ (aα) = a⁻¹ 0 => (a⁻¹ a) α = 0 => 1α = 0 => α = 0.
Thus we get a contradiction, since α ≠ 0. Therefore a must be
equal to 0. Hence α ≠ 0 and aα = 0 => a = 0.
Theorem 2. Let V(F) be a vector space. Then
(i) If a, b ∈ F and α is a non-zero vector of V, we have
aα = bα => a = b. (Poona 1972)
(ii) If α, β ∈ V and a is a non-zero element of F, we have
aα = aβ => α = β. (Allahabad 1978)
Proof. (i) We have aα = bα
=> aα-bα = 0
=> (a-b) α = 0
=> a-b = 0, since α ≠ 0
=> a = b.
(ii) We have aα = aβ
=> aα-aβ = 0
=> a (α-β) = 0
=> α-β = 0, since a ≠ 0
=> α = β.
§ 9. Vector Subspaces. Definition.
Let V be a vector space over the field F and let W ⊆ V. Then
W is called a subspace of V if W itself is a vector space over F with
respect to the operations of vector addition and scalar multiplication
in V. (Meerut 1993P; S.V.U. Tirupati 93; Poona 72;
Madras 81; Nagarjuna 90)
Theorem 1. The necessary and sufficient condition for a non-
empty subset W of a vector space V(F) to be a subspace of V is
that W is closed under vector addition and scalar multiplication
in V.
Proof. If W itself is a vector space over F with respect to
vector addition and scalar multiplication in V, then W must be
closed with respect to these two compositions. Hence the condition
is necessary.
The condition is sufficient. Now suppose that W is a non-
empty subset of V and W is closed under vector addition and
scalar multiplication in V.
Let α ∈ W. If 1 is the unity element of F, then -1 ∈ F. Now
W is closed under scalar multiplication. Therefore
-1 ∈ F, α ∈ W => (-1) α ∈ W => -(1α) ∈ W
=> -α ∈ W
[∵ α ∈ W => α ∈ V and 1α = α in V].
Thus the additive inverse of each element of W is also in W.
Now W is closed under vector addition.
Therefore
α ∈ W, -α ∈ W => α+(-α) ∈ W
=> 0 ∈ W where 0 is the zero vector of V.
Hence the zero vector of V is also the zero vector of W. Since
the elements of W are also the elements of V, therefore vector
addition will be commutative as well as associative in W. Hence
W is an abelian group with respect to vector addition. Also it is
given that W is closed under scalar multiplication. The remaining
postulates of a vector space will hold in W since they hold in V of
which W is a subset.
Hence W itself is a vector space for the two compositions.
∴ W is a subspace of V.
Theorem 2. The necessary and sufficient conditions for a non-
empty subset W of a vector space V(F) to be a subspace of V are
(i) α ∈ W, β ∈ W => α-β ∈ W,
(ii) a ∈ F, α ∈ W => aα ∈ W.

Proof. The conditions are necessary. If W is a subspace of
V, then W is an abelian group with respect to vector addition.
Therefore α ∈ W, β ∈ W => α-β ∈ W. Also W must be closed
under scalar multiplication. Therefore condition (ii) is also
necessary.
The conditions are sufficient. Now suppose W is a non-empty
subset of V satisfying the two given conditions. From condition
(i) we have
α ∈ W, α ∈ W => α-α ∈ W => 0 ∈ W.
Thus the zero vector of V belongs to W and it will also be the
zero vector of W.
Now 0 ∈ W, α ∈ W => 0-α ∈ W => -α ∈ W.
Thus the additive inverse of each element of W is also in W.
Again α ∈ W, β ∈ W => α ∈ W, -β ∈ W
=> α-(-β) ∈ W => α+β ∈ W.
Thus W is closed with respect to vector addition.
Since the elements of W are also the elements of V, therefore
vector addition will be commutative as well as associative in W.
Hence W is an abelian group under vector addition. Also from
condition (ii), W is closed under scalar multiplication. The
remaining postulates of a vector space will hold in W since they
hold in V of which W is a subset. Hence W is a subspace of V.
Theorem 3. The necessary and sufficient condition for a non-
empty subset W of a vector space V(F) to be a subspace of V is
a, b ∈ F and α, β ∈ W => aα+bβ ∈ W.
(Nagarjuna 1990; S.V.U. Tirupati 93; Allahabad 76)
Proof. The condition is necessary. If W is a subspace of V,
then W must be closed under scalar multiplication and vector
addition.
Therefore a ∈ F, α ∈ W => aα ∈ W
and b ∈ F, β ∈ W => bβ ∈ W.
Now aα ∈ W, bβ ∈ W => aα+bβ ∈ W. Hence the condition
is necessary.
The condition is sufficient. Now suppose W is a non-empty
subset of V satisfying the given condition i.e., a, b ∈ F and
α, β ∈ W => aα+bβ ∈ W.
Taking a = 1, b = 1, we see that if α, β ∈ W, then
1α+1β ∈ W => α+β ∈ W
[∵ α ∈ W => α ∈ V and 1α = α in V].
Thus W is closed under vector addition.
Now taking a = -1, b = 0, we see that if α ∈ W then
(-1) α+0α ∈ W [in place of β we have taken α]
=> -(1α)+0 ∈ W => -α ∈ W.
Thus the additive inverse of each element of W is also in W.
Taking a = 0, b = 0, we see that if α ∈ W then
0α+0α ∈ W => 0+0 ∈ W => 0 ∈ W.
Thus the zero vector of V belongs to W. It will also be the
zero vector of W.
Since the elements of W are also the elements of V, therefore
vector addition will be associative as well as commutative in W.
Thus W is an abelian group with respect to vector addition.
Now taking β = 0, we see that if a, b ∈ F and α ∈ W, then
aα+b0 ∈ W i.e., aα+0 ∈ W i.e., aα ∈ W.
Thus W is closed under scalar multiplication.
The remaining postulates of a vector space will hold in W
since they hold in V of which W is a subset.
Hence W(F) is a subspace of V(F).
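Theorem 3 also suggests a practical randomized membership test for a candidate subspace: sample α, β from W and scalars a, b from F, and verify that aα+bβ stays in W. The helper below is my own construction (not from the text), applied to the set {(x1, x2, x3) : x2+4x3 = 0} of Example 7 (iii) further on; floating-point tolerance replaces exact equality.

```python
import random

def in_W(v):
    """Membership test for W = {(x1, x2, x3) : x2 + 4*x3 = 0}."""
    return abs(v[1] + 4 * v[2]) < 1e-9

def sample_W():
    """Produce a random member of W by forcing x2 = -4*x3."""
    x1, x3 = random.uniform(-5, 5), random.uniform(-5, 5)
    return (x1, -4 * x3, x3)

def combo(a, u, b, v):
    """The vector a*u + b*v."""
    return tuple(a * ui + b * vi for ui, vi in zip(u, v))

random.seed(0)
closed = all(
    in_W(combo(random.uniform(-3, 3), sample_W(),
               random.uniform(-3, 3), sample_W()))
    for _ in range(100)
)
assert closed   # consistent with W being a subspace of R^3
```

A randomized check of this kind can only refute closure (by finding a counterexample), never prove it; the proof is the algebra in the theorem.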
Theorem 4. A non-empty subset W of a vector space V(F) is
a subspace of V if and only if for each pair of vectors α, β in W
and each scalar a in F the vector aα+β is again in W.
Proof. The condition is necessary. If W is a subspace of V, then
W must be closed with respect to scalar multiplication as well
as with respect to vector addition. Therefore
a ∈ F, α ∈ W => aα ∈ W.
Further aα ∈ W, β ∈ W => aα+β ∈ W.
Hence the condition is necessary.
The condition is sufficient. It is given that W is a non-empty
subset of V and a ∈ F, α, β ∈ W => aα+β ∈ W. We are to prove
that W is a subspace of V.
(i) Since W is non-empty, therefore there is at least one vector
in W, say γ. Now 1 ∈ F => -1 ∈ F. Therefore taking a = -1,
α = γ, β = γ, we get from the given condition that
(-1) γ+γ = -(1γ)+γ = -γ+γ = 0 is in W.
(ii) Now let a ∈ F, α ∈ W. Since 0 is in W, therefore taking
β = 0 in the given condition, we get
aα+0 = aα is in W.
Thus W is closed with respect to scalar multiplication.
(iii) Let α ∈ W. Since -1 ∈ F and W is closed with respect to
scalar multiplication, therefore (-1) α = -(1α) = -α is in W.
(iv) We have 1 ∈ F. If α, β ∈ W, then 1α+β = α+β is in W.
Thus W is closed with respect to vector addition.
The remaining postulates of a vector space will hold in W
since they hold in V of which W is a subset.
Hence W is a subspace of V.
Note. If we are to prove that a subset W of a vector space V
is a subspace of V, then either it is sufficient to prove that
a, b ∈ F and α, β ∈ W => aα+bβ ∈ W,
or it is sufficient to prove that
a ∈ F and α, β ∈ W => aα+β ∈ W.
Illustrative Examples
Example 1. Let V(F) be any vector space. Then V itself and
the subset of V consisting of zero vector only are always subspaces
of V. These two are called improper subspaces. If V has any
other subspace, then it is called a proper subspace. The subspace
of V consisting of zero vector only is called the zero subspace.
Example 2. The set W of ordered triads (a1, a2, 0), where
a1, a2 ∈ F is a subspace of V3(F). (Meerut 1986)
Solution. Let α = (a1, a2, 0) and β = (b1, b2, 0) be any two
elements of W. Then a1, a2, b1, b2 ∈ F. If a, b be any two elements
of F, we have
aα+bβ = a (a1, a2, 0)+b (b1, b2, 0)
= (aa1, aa2, 0)+(bb1, bb2, 0)
= (aa1+bb1, aa2+bb2, 0) ∈ W
since aa1+bb1, aa2+bb2 ∈ F and the last co-ordinate of this triad
is zero.
Hence W is a subspace of V3(F).
Example 3. Let V be the vector space of all polynomials in an
indeterminate x over a field F. Let W be a subset of V consisting of
all polynomials of degree ≤ n. Then W is a subspace of V.
Solution. Let α and β be any two elements of W. Then α, β
are polynomials over F of degree ≤ n. If a, b are any two ele-
ments of F, then aα+bβ will also be a polynomial of degree ≤ n.
Therefore aα+bβ ∈ W. Hence W is a subspace of V.
Example 4. If a1, a2, a3 are fixed elements of a field F, then
the set W of all ordered triads (x1, x2, x3) of elements of F, such that
a1x1+a2x2+a3x3 = 0,
is a subspace of V3(F). (Poona 1972)

Solution. Let α = (x1, x2, x3) and β = (y1, y2, y3) be any two
elements of W. Then x1, x2, x3, y1, y2, y3 are elements of F and
are such that a1x1+a2x2+a3x3 = 0 ...(1)
and a1y1+a2y2+a3y3 = 0. ...(2)
If a, b be any two elements of F, we have
aα+bβ = a (x1, x2, x3)+b (y1, y2, y3)
= (ax1, ax2, ax3)+(by1, by2, by3) = (ax1+by1, ax2+by2, ax3+by3).
Now a1 (ax1+by1)+a2 (ax2+by2)+a3 (ax3+by3)
= a (a1x1+a2x2+a3x3)+b (a1y1+a2y2+a3y3)
= a0+b0 = 0 [by (1) and (2)]
∴ aα+bβ = (ax1+by1, ax2+by2, ax3+by3) ∈ W.
Hence W is a subspace of V3(F).
Example 5. Prove that the set of all solutions (a, b, c) of the
equation a+b+2c = 0 is a subspace of the vector space V3(R).
(Meerut 1989)
Sol. Let W = {(a, b, c) : a, b, c ∈ R and a+b+2c = 0}.
To prove that W is a subspace of V3(R).
Let α = (a1, b1, c1) and β = (a2, b2, c2) be any two elements of
W. Then
a1+b1+2c1 = 0 ...(1)
and a2+b2+2c2 = 0. ...(2)
If a, b be any two elements of R, we have
aα+bβ = a (a1, b1, c1)+b (a2, b2, c2)
= (aa1, ab1, ac1)+(ba2, bb2, bc2)
= (aa1+ba2, ab1+bb2, ac1+bc2).
Now (aa1+ba2)+(ab1+bb2)+2 (ac1+bc2)
= a (a1+b1+2c1)+b (a2+b2+2c2)
= a.0+b.0 [from (1) and (2)]
= 0.
∴ aα+bβ = (aa1+ba2, ab1+bb2, ac1+bc2) ∈ W.
Thus α, β ∈ W and a, b ∈ R => aα+bβ ∈ W.
Hence W is a subspace of V3(R).
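A quick numerical spot check of Example 5 (the particular vectors and scalars are my own illustrative choices): two solutions of a+b+2c = 0 are combined with scalars and the result is verified to solve the equation again.

```python
def satisfies(v):
    """True when the triad (a, b, c) solves a + b + 2c = 0."""
    return v[0] + v[1] + 2 * v[2] == 0

alpha = (2, -4, 1)        # 2 - 4 + 2 = 0
beta = (0, 6, -3)         # 0 + 6 - 6 = 0
a, b = 5, -7

gamma = tuple(a * x + b * y for x, y in zip(alpha, beta))
assert satisfies(alpha) and satisfies(beta)
assert satisfies(gamma)   # a*alpha + b*beta again solves the equation
```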
Example 6. Show that the set W of the elements of the vector
space V3(R) of the form (x+2y, y, -x+3y),
where x, y ∈ R, is a subspace of V3(R). (Meerut 1974)
Solution. Let W = {(x+2y, y, -x+3y) : x, y ∈ R}.
To prove that W is a subspace of V3(R).
Let α = (x1+2y1, y1, -x1+3y1) and β = (x2+2y2, y2, -x2+3y2)
be any two elements of W.
If a, b be any two elements of R, we have
aα+bβ = a (x1+2y1, y1, -x1+3y1)+b (x2+2y2, y2, -x2+3y2)
= (ax1+2ay1, ay1, -ax1+3ay1)+(bx2+2by2, by2, -bx2+3by2)
= (ax1+2ay1+bx2+2by2, ay1+by2, -ax1+3ay1-bx2+3by2)
= ([ax1+bx2]+2 [ay1+by2], ay1+by2, -[ax1+bx2]+3 [ay1+by2])
which is in W because it is of the form (x+2y, y, -x+3y).
Here in place of y we have ay1+by2 and in place of x we
have ax1+bx2.
Thus α, β ∈ W and a, b ∈ R => aα+bβ ∈ W.
Hence W is a subspace of V3(R).
Example 7. Which of the following sets of vectors
α = (a1, a2, ..., an) in R^n are subspaces of R^n (n ≥ 3) ?
(i) all α such that a1 ≤ 0;
(ii) all α such that a3 is an integer;
(iii) all α such that a2+4a3 = 0;
(iv) all α such that a1+a2+...+an = k (a given constant).
Solution. (i) Let W = {α : α ∈ R^n and a1 ≤ 0}.
If we take a1 = -3, then a1 < 0 and so
α = (-3, a2, ..., an) ∈ W.
Now if we take a = -2, then
aα = (6, -2a2, ..., -2an).
Since the first coordinate of aα is 6 which is > 0, therefore
aα ∉ W.
Thus α ∈ W, a ∈ R but aα ∉ W. Therefore W is not
closed for scalar multiplication and so W is not a subspace
of R^n.
(ii) Let W = {α : α ∈ R^n and a3 is an integer}.
If we take a3 = 5, then a3 is an integer and so
α = (a1, a2, 5, ..., an) ∈ W.
Now if we take a = 1/2, then
aα = ((1/2) a1, (1/2) a2, 5/2, ..., (1/2) an).
Since the third coordinate of aα is 5/2 which is not an integer,
therefore aα ∉ W.
Thus α ∈ W, a ∈ R but aα ∉ W. Therefore W is not
closed for scalar multiplication and so W is not a subspace
of R^n.
(iii) Let W = {α : α ∈ R^n and a2+4a3 = 0}.
Let α = (a1, ..., an) and β = (b1, ..., bn) be any two members of
W. Then a2+4a3 = 0 and b2+4b3 = 0.
If a, b ∈ R, then aα+bβ = (aa1+bb1, ..., aan+bbn).
We have (aa2+bb2)+4 (aa3+bb3)
= a (a2+4a3)+b (b2+4b3) = a.0+b.0 = 0.
Thus according to the definition of W, aα+bβ ∈ W.
In this way α, β ∈ W and a, b ∈ R => aα+bβ ∈ W. Hence
W is a subspace of R^n.
(iv) A subspace if k = 0 and not a subspace if k ≠ 0.
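The counterexample of part (i) can be rerun concretely. In the sketch below (mine; the tail entries 1.0 and 2.0 are filler values, and the set is taken as vectors with first coordinate a1 ≤ 0, as in the solution above), scaling by -2 carries a member of W outside W.

```python
alpha = (-3.0, 1.0, 2.0)                 # a1 = -3 <= 0, so alpha is in W
scaled = tuple(-2.0 * x for x in alpha)  # a*alpha with a = -2

assert alpha[0] <= 0                     # alpha belongs to W
assert scaled[0] > 0                     # but a*alpha = (6, ...) does not:
                                         # W is not closed, hence no subspace
```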
Example 8. Let R be thefield of real numbers. Which of the
following are subspaces of K3(R):
(i) {(Ai, 2y, 32): x, y,zeR}. (Meerut 1990)
(ii) {(X, Xy x): xeR}.
(i7i) {(x, y,2): XyPyZ are rational numbers).(Meerut 1989 P)
Solution,(i) Let W={{Xy 2y, 3z): Xy y, zeR}.
Let a=(*i, 2yi, 3zi) and P=(x2y 2y2, 3z2> be any two elements
of W. Then xu yi, zi, X2y yi, zi are all real numbers. If a, b are
any two real numbers, then
(Xi, 2yi, 3z,)+^> (X2, 2y2, 3za)
={axt+bx2y 2ayi+2by2y 3azi+3hz2)
=(flxi+fex2, 2[oyi+/>y2l, 3[azi+bz2])
eIF since ax\-\-bx2y ay\+by2y az\-\-bz2 are real numbers.
Thus o,^ € R and oc, jSs W ao.’\‘b^^lV.
IF is a subspace of F3(R).
(ii) Let W—{(Xy Xy jc): jceR}.
Let a=(^i, Xu jci) and ?=^{X2, X2y X2) be any two elements of
IF. Then xi, 21:2 are real numbers. If a,6 are any real numbers,
then aa.+bp=a {xu Xu :vi)-|-^> (x2, X2, X2)
=(axi+bx2y axi-\-bx2y axi-\-bx2)^W
since axi+bx2 e R.
Thus IF is a subspace of Fs(R).
(iii) Let W={(Xy y, z): x, y, z are rational numbers}.
Now a=(3,4, 5) is an element of IF. Also a—^1 is an ele
ment of R. But fla=V7(3, 4. 5)-(3V7.4V7,5^7) € IF since
3V7,4V7,5V7 are not rational numbers.
Therefore IF is not closed under scalar multiplication. Hence
W is not a subspace of Fa(R).
Example 9. The solutions of a system of homogeneous linear
equations. Let V(F) be the vector space of all n×1 matrices over
the field F. Let A be an m×n matrix over F. Then the set W of all
n×1 matrices X over F such that AX = O is a subspace of V. Here
O is a null matrix of the type m×1.
Solution. Let X, Y ∈ W. Then X and Y are n×1 matrices
over F such that AX = O, AY = O.
Let a ∈ F. Then aX+Y is also an n×1 matrix over F.
We have
A (aX+Y) = A (aX)+AY = a (AX)+AY
= aO+O = O+O = O.
Therefore aX+Y ∈ W. Thus
a ∈ F, X, Y ∈ W => aX+Y ∈ W.
Hence W is a subspace of V.
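A small numeric illustration of Example 9 (the matrix A and the vectors X, Y are my own choices): if AX = O and AY = O, then A(aX+Y) = O as well, so the solution set is closed under the criterion of Theorem 4.

```python
def matvec(A, X):
    """Multiply an m x n matrix (list of rows) by an n-vector."""
    return [sum(A[i][j] * X[j] for j in range(len(X))) for i in range(len(A))]

A = [[1, -1, 0],
     [0, 2, -2]]          # a 2x3 matrix over R, chosen for the demo
X = [1, 1, 1]             # a solution: AX = O
Y = [2, 2, 2]             # another solution: AY = O
a = 7

Z = [a * x + y for x, y in zip(X, Y)]   # the combination aX + Y
assert matvec(A, X) == [0, 0]
assert matvec(A, Y) == [0, 0]
assert matvec(A, Z) == [0, 0]           # aX + Y again satisfies AX = O
```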
Example 10. Which of the following sets of vectors α = (a1, ..., an)
in R^n are subspaces of R^n ? (n ≥ 3).
(i) all α such that a1 ≥ 0;
(ii) all α such that a1+3a2 = a3;
(iii) all α such that a2 = a1^2;
(iv) all α such that a1a2 = 0;
(v) all α such that a2 is rational. (Meerut 1972)
Solution. (i) Let W = {α : α ∈ R^n and a1 ≥ 0}. Let α = (a1, ..., an)
and β = (b1, ..., bn) be any two members of W. Then a1 ≥ 0 and b1 ≥ 0.
If a, b ∈ R, then aα+bβ = (aa1+bb1, ..., aan+bbn). If a and b are
any two real numbers, then aa1+bb1 will not necessarily be ≥ 0.
For example if we take a1 = 3, b1 = 3, a = -2 and b = -2, then
aa1+bb1 = -6-6 = -12 which is < 0. Thus α, β ∈ W and
a, b ∈ R do not give aα+bβ ∈ W. Hence W is not a subspace of R^n.
(ii) Let W = {α : α ∈ R^n and a1+3a2 = a3}. Let α = (a1, ..., an)
and β = (b1, ..., bn) be any two members of W. Then a1+3a2 = a3
and b1+3b2 = b3. If a, b ∈ R, then aα+bβ = (aa1+bb1, ..., aan+bbn).
We have (aa1+bb1)+3 (aa2+bb2) = a (a1+3a2)+b (b1+3b2)
= aa3+bb3. Thus according to the definition of W, aα+bβ ∈ W.
In this way α, β ∈ W and a, b ∈ R => aα+bβ ∈ W. Hence W is a
subspace of R^n.
(iii) Let W = {α : α ∈ R^n and a2 = a1^2}. Let α = (a1, ..., an) and
β = (b1, ..., bn) be any two members of W. Then a2 = a1^2 and b2 = b1^2.
If a, b ∈ R, then aα+bβ = (aa1+bb1, ..., aan+bbn). Now aa2+bb2
= aa1^2+bb1^2, which is not necessarily equal to (aa1+bb1)^2. For
example, take a1 = 2, a2 = 4, b1 = 3, b2 = 9, a = 2, b = 3. Then a2 = a1^2
and b2 = b1^2. Also aa2+bb2 = 8+27 = 35 and (aa1+bb1)^2 = (13)^2.
Thus aa2+bb2 ≠ (aa1+bb1)^2. In this way aα+bβ ∉ W. Hence W is
not a subspace of R^n.
(iv) Let W = {α : α ∈ R^n and a1a2 = 0}. Let α = (a1, ..., an) and
β = (b1, ..., bn) be any two members of W. Then a1a2 = 0, b1b2 = 0.
If a, b ∈ R, then aα+bβ = (aa1+bb1, ..., aan+bbn). We have
(aa1+bb1) (aa2+bb2) = a^2a1a2+ab (a1b2+a2b1)+b^2b1b2 = ab (a1b2+a2b1)
which is not necessarily equal to zero. In this way aα+bβ is not
necessarily a member of W. Hence W is not a subspace of R^n.
(v) Let W = {α : α ∈ R^n and a2 is rational}. Let α = (a1, ..., an)
and β = (b1, ..., bn) be any two members of W. Then a2 is rational
and b2 is rational. If a, b ∈ R, then aα+bβ = (aa1+bb1, ..., aan+bbn).
Now aa2+bb2 is not necessarily rational. For example if we take
a = √3, b = √5, a2 = 3, b2 = 4, then aa2+bb2 is not rational. Thus
in this case aα+bβ ∉ W. Hence W is not a subspace of R^n.
Example 11. Let V be the (real) vector space of all functions f
from R into R. Which of the following sets of functions are sub-
spaces of V ?
(i) all f such that f(x^2) = [f(x)]^2;
(ii) all f such that f(0) = f(1);
(iii) all f such that f(3) = 1+f(-5);
(iv) all f such that f(-1) = 0;
(v) all f which are continuous.
Solution. (i) Let W = {f : f ∈ V and f(x^2) = [f(x)]^2}. Let
f, g be any two members of W. Then f(x^2) = [f(x)]^2 and g(x^2)
= [g(x)]^2. Let a, b ∈ R. Then (af+bg) (x^2) = (af) (x^2)+(bg) (x^2)
= af(x^2)+bg(x^2) = a [f(x)]^2+b [g(x)]^2. Also [(af+bg) (x)]^2
= [af(x)+bg(x)]^2 = a^2 [f(x)]^2+b^2 [g(x)]^2+2abf(x) g(x). Now
(af+bg) (x^2) is not necessarily equal to [(af+bg) (x)]^2. Thus
af+bg is not necessarily a member of W. Hence W is not a sub-
space of V.
(ii) Let f, g ∈ W in this case. Then f(0) = f(1) and g(0)
= g(1). Let a, b ∈ R. Then (af+bg) (0) = af(0)+bg(0) = af(1)
+bg(1) = (af+bg) (1). Therefore by definition of W, af+bg ∈ W.
Hence W is a subspace of V.
(iii) Let f, g ∈ W in this case. Then f(3) = 1+f(-5) and
g(3) = 1+g(-5). We have (af+bg) (3) = af(3)+bg(3) = a [1+
f(-5)]+b [1+g(-5)] = a+b+(af+bg) (-5) which is not necessa-
rily equal to 1+(af+bg) (-5). Hence W is not a subspace of V.
(iv) Let W = {f : f ∈ V and f(-1) = 0}. Let f, g ∈ W. Then
f(-1) = 0 and g(-1) = 0. If a, b ∈ R, then (af+bg) (-1)
= (af) (-1)+(bg) (-1) = af(-1)+bg(-1) = a (0)+b (0) = 0.
Therefore af+bg ∈ W. Hence W is a subspace of V.
(v) If f and g are continuous functions and a, b ∈ R, then
af+bg is also a continuous function. Hence in this case W is a
subspace of V.
Example 12. If a vector space V is the set of all real valued
continuous functions over the field of real numbers, then show that
the set W of solutions of the differential equation
2 (d^2y/dx^2)-9 (dy/dx)+2y = 0
is a subspace of V. (Meerut 1993P)
Solution. We have W = {y : 2 (d^2y/dx^2)-9 (dy/dx)+2y = 0},
where y = f(x).
Obviously y = 0 satisfies the given differential equation and as
such it belongs to W and thus W ≠ ∅.
Now let y1, y2 ∈ W. Then
2 (d^2y1/dx^2)-9 (dy1/dx)+2y1 = 0 ...(1)
and 2 (d^2y2/dx^2)-9 (dy2/dx)+2y2 = 0. ...(2)
Let a, b ∈ R. If W is to be a subspace then we should show
that ay1+by2 also belongs to W i.e., it is a solution of the given
differential equation.
We have
2 (d^2/dx^2) (ay1+by2)-9 (d/dx) (ay1+by2)+2 (ay1+by2)
= 2a (d^2y1/dx^2)+2b (d^2y2/dx^2)-9a (dy1/dx)-9b (dy2/dx)
+2ay1+2by2
= a [2 (d^2y1/dx^2)-9 (dy1/dx)+2y1]
+b [2 (d^2y2/dx^2)-9 (dy2/dx)+2y2]
= a.0+b.0, by (1) and (2)
= 0.
Thus ay1+by2 is a solution of the given differential equation
and so it belongs to W.
Hence W is a subspace of V.
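The differential equation here is 2 d^2y/dx^2 - 9 dy/dx + 2y = 0 (the coefficients as read off the worked derivation above). For exponential trial solutions y = e^(rx) the check reduces to arithmetic on the auxiliary polynomial 2r^2 - 9r + 2, which the sketch below (my own) performs numerically, confirming that a combination a*y1 + b*y2 of two solutions solves the equation again.

```python
import math

# roots of the auxiliary equation 2r^2 - 9r + 2 = 0
r1 = (9 + math.sqrt(65)) / 4
r2 = (9 - math.sqrt(65)) / 4

def residual(r, coef, x):
    """Value of 2y'' - 9y' + 2y at x for y = coef * e^(r x)."""
    y = coef * math.exp(r * x)
    return 2 * r * r * y - 9 * r * y + 2 * y

a, b, x = 3.0, -5.0, 0.7
combined = residual(r1, a, x) + residual(r2, b, x)  # plugs a*y1 + b*y2 in
assert abs(residual(r1, 1.0, x)) < 1e-9             # y1 is a solution
assert abs(residual(r2, 1.0, x)) < 1e-9             # y2 is a solution
assert abs(combined) < 1e-9                         # so is a*y1 + b*y2
```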

Exercises
1. Let F=R* and W be the set of all ordered triads(x, z)
such that X—3y+4zss0. Prove that I^is a subspace of R^.
(Meerut 1992)
. 2. Let C be the field of complex numbers and let n be a
positive integer (n ^ 2). Let F be the vector space of all nxit
matrit^s over C. Which of the following sets o^ matrices Ain V
are subspaoes of F?
(i) all invertible A; (Meerut 1981)
(ii) all non-invertible A;
(Hi) all such that where R is some fixed matrix
in F.
Ads. (i)not a subspace; (ii) not a subspace; (iii) a subspace.
3. Let V be the vector space of all real n×n matrices. Prove that the set consisting of all n×n real matrices which commute with a given matrix T of V forms a subspace of V. (Meerut 1980)
4. Let V be the vector space of all 2×2 matrices over the real field R. Show that the subset of V consisting of all matrices A for which A² = A is not a subspace of V. (Meerut 1970)
5. State whether the following statements are true or false:
(i) A subspace of V₃(R), where R is the real field, must always contain the origin. (Meerut 1977)
(ii) The set of vectors α = (x, y) ∈ V₂(R) for which x² = y² is a subspace of V₂(R). (Meerut 1977)
(iii) The set of ordered triads (x, y, z) of real numbers with x > 0 is a subspace of V₃(R). (Meerut 1977)
(iv) The set of ordered triads (x, y, z) of real numbers with x + y = 0 is a subspace of V₃(R).
Ans. (i) true; (ii) false; (iii) false; (iv) true.
§ 10. Algebra of subspaces.
Theorem 1. The intersection of any two subspaces W₁ and W₂ of a vector space V(F) is also a subspace of V(F).
(Meerut 1990; Andhra 92; Allahabad 78; Nagarjuna 74)
Proof. Since 0 ∈ W₁ and W₂ both, therefore W₁ ∩ W₂ is not empty.
Let α, β ∈ W₁ ∩ W₂ and a, b ∈ F.
Now α ∈ W₁ ∩ W₂ ⇒ α ∈ W₁ and α ∈ W₂,
and β ∈ W₁ ∩ W₂ ⇒ β ∈ W₁ and β ∈ W₂.
Since W₁ is a subspace, therefore
a, b ∈ F and α, β ∈ W₁ ⇒ aα + bβ ∈ W₁.
Similarly a, b ∈ F and α, β ∈ W₂ ⇒ aα + bβ ∈ W₂.
Now aα + bβ ∈ W₁ and aα + bβ ∈ W₂ ⇒ aα + bβ ∈ W₁ ∩ W₂.
Thus a, b ∈ F and α, β ∈ W₁ ∩ W₂ ⇒ aα + bβ ∈ W₁ ∩ W₂.
Hence W₁ ∩ W₂ is a subspace of V(F).
Note. The union of two subspaces of V(F) may not be a subspace of V(F). For example, if R is the field of real numbers, then W₁ = {(0, 0, z) : z ∈ R} and W₂ = {(0, y, 0) : y ∈ R} are two subspaces of V₃(R). We have (0, 0, 3) ∈ W₁ and (0, 5, 0) ∈ W₂, so (0, 0, 3) and (0, 5, 0) are both elements of W₁ ∪ W₂.
But (0, 0, 3) + (0, 5, 0) = (0, 5, 3) ∉ W₁ ∪ W₂, since neither (0, 5, 3) ∈ W₁ nor (0, 5, 3) ∈ W₂. Thus W₁ ∪ W₂ is not closed under vector addition. Hence W₁ ∪ W₂ is not a subspace of V₃(R).
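The counterexample can be checked mechanically. The Python sketch below (the helper names in_W1 and in_W2 are ours, not the text's) tests membership of each triple in the two subspaces:

```python
def in_W1(v):
    # W1 = {(0, 0, z) : z real}
    return v[0] == 0 and v[1] == 0

def in_W2(v):
    # W2 = {(0, y, 0) : y real}
    return v[0] == 0 and v[2] == 0

u = (0, 0, 3)   # lies in W1
w = (0, 5, 0)   # lies in W2
s = tuple(a + b for a, b in zip(u, w))   # the sum (0, 5, 3)

assert in_W1(u) and in_W2(w)
# The sum lies in neither subspace, so the union is not closed
# under vector addition.
assert not in_W1(s) and not in_W2(s)
```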
Theorem 2. The union of two subspaces is a subspace if and only if one is contained in the other.
(Meerut 1988; Poona 72; Allahabad 77)
Proof. Suppose W₁ and W₂ are two subspaces of a vector space V.
Let W₁ ⊆ W₂ or W₂ ⊆ W₁. Then W₁ ∪ W₂ = W₂ or W₁. But W₁, W₂ are subspaces, and therefore W₁ ∪ W₂ is also a subspace.
Conversely, suppose W₁ ∪ W₂ is a subspace.
To prove that W₁ ⊆ W₂ or W₂ ⊆ W₁, let us assume that W₁ is not a subset of W₂ and W₂ is also not a subset of W₁.
Now W₁ is not a subset of W₂ ⇒ ∃ α such that α ∈ W₁ and α ∉ W₂, ...(1)
and W₂ is not a subset of W₁ ⇒ ∃ β such that β ∈ W₂ and β ∉ W₁. ...(2)
From (1) and (2), we have
α ∈ W₁ ∪ W₂ and β ∈ W₁ ∪ W₂.
Since W₁ ∪ W₂ is a subspace, therefore α + β is also in W₁ ∪ W₂.
But α + β ∈ W₁ ∪ W₂ ⇒ α + β ∈ W₁ or α + β ∈ W₂.
Suppose α + β ∈ W₁. Since α ∈ W₁ and W₁ is a subspace, therefore (α + β) − α = β is in W₁.
But from (2), we have β ∉ W₁. Thus we get a contradiction.
Again suppose that α + β ∈ W₂. Since β ∈ W₂ and W₂ is a subspace, therefore (α + β) − β = α is in W₂. But from (1), α ∉ W₂. Thus here also we get a contradiction. Hence either W₁ ⊆ W₂ or W₂ ⊆ W₁.
Theorem 3. Arbitrary intersection of subspaces, i.e., the intersection of any family of subspaces of a vector space, is a subspace.
(Meerut 1971, 73)
Proof. Let {Wₜ : t ∈ T} be any family of subspaces of V. Here T is an index set, and for each t ∈ T, Wₜ is a subspace of V.
Let U = ∩_{t∈T} Wₜ = {x : x ∈ Wₜ ∀ t ∈ T}
be the intersection of this family of subspaces of V. Then we are to prove that U is also a subspace of V.
Obviously U ≠ ∅, since at least the zero vector 0 of V is in Wₜ ∀ t ∈ T.
Now let a, b ∈ F and α, β be any two elements of ∩_{t∈T} Wₜ.
Then α, β ∈ Wₜ ∀ t ∈ T. Since each Wₜ is a subspace of V, therefore aα + bβ ∈ Wₜ ∀ t ∈ T. Thus aα + bβ ∈ ∩_{t∈T} Wₜ.
Thus a, b ∈ F and α, β ∈ ∩_{t∈T} Wₜ ⇒ aα + bβ ∈ ∩_{t∈T} Wₜ.
Hence ∩_{t∈T} Wₜ is a subspace of V(F).

Smallest subspace containing any subset of V(F). Let V(F) be a vector space and S be any subset of V. If U is a subspace of V containing S and is itself contained in every subspace of V containing S, then U is called the smallest subspace of V containing S.
The smallest subspace of V containing S is also called the subspace generated (or spanned) by S, and we denote it by the symbol {S}. It can be easily seen that the intersection of all the subspaces of V containing S is the subspace of V(F) generated by S. If {S} = V, then we say that V is spanned by S.
§ 11. Linear combination of vectors. Linear span of a set.
Linear combination. Definition. Let V(F) be a vector space.
If α₁, α₂, ..., αₙ ∈ V, then any vector
α = a₁α₁ + a₂α₂ + ... + aₙαₙ, where a₁, a₂, ..., aₙ ∈ F,
is called a linear combination of the vectors α₁, α₂, ..., αₙ.
(Nagarjuna 1980)
Linear span. Definition. Let V(F) be a vector space and S be any non-empty subset of V. Then the linear span of S is the set
of all linear combinations of finite sets of elements of S and is denoted by L(S). Thus we have
L(S) = {a₁α₁ + a₂α₂ + ... + aₙαₙ : {α₁, α₂, ..., αₙ} is any arbitrary finite subset of S and a₁, a₂, ..., aₙ are any arbitrary elements of F}.
Theorem 1. The linear span L(S) of any subset S of a vector space V(F) is a subspace of V generated by S, i.e., L(S) = {S}.
(Andhra 1992; Kakatiya 91)
Proof. Let α, β be any two elements of L(S). Then
α = a₁α₁ + a₂α₂ + ... + aₙαₙ and β = b₁β₁ + b₂β₂ + ... + bₘβₘ,
where the a's and b's are elements of F and the α's and β's are elements of S.
If a, b be any two elements of F, then
aα + bβ = a (a₁α₁ + a₂α₂ + ... + aₙαₙ) + b (b₁β₁ + b₂β₂ + ... + bₘβₘ)
= a (a₁α₁) + a (a₂α₂) + ... + a (aₙαₙ) + b (b₁β₁) + b (b₂β₂) + ... + b (bₘβₘ)
= (aa₁) α₁ + (aa₂) α₂ + ... + (aaₙ) αₙ + (bb₁) β₁ + (bb₂) β₂ + ... + (bbₘ) βₘ.
Thus aα + bβ has been expressed as a linear combination of a finite set α₁, α₂, ..., αₙ, β₁, β₂, ..., βₘ of the elements of S. Consequently aα + bβ ∈ L(S).
Thus a, b ∈ F and α, β ∈ L(S) ⇒ aα + bβ ∈ L(S).
Hence L(S) is a subspace of V(F).
Also each element of S belongs to L(S), because if α ∈ S, then α = 1α, and this implies that α ∈ L(S). Thus L(S) is a subspace of V and S is contained in L(S).
Now if W is any subspace of V containing S, then each element of L(S) must be in W, because W is closed under vector addition and scalar multiplication. Therefore L(S) will be contained in W.
Hence L(S) = {S}, i.e., L(S) is the smallest subspace of V containing S.
Note 1. Important. Suppose S is a non-empty subset of a vector space V(F). Then a vector α ∈ V will be in the subspace of V generated by S if it can be expressed as a linear combination over F of a finite number of vectors belonging to S.
Note 2. If in any case we are to prove that L(S) = V, then we should prove that V ⊆ L(S), because L(S) ⊆ V since L(S) is a subspace of V. In order to prove that V ⊆ L(S), we should prove that each element of V can be expressed as a linear combination of a finite number of elements of S. Then each element of V will also be an element of L(S) and we shall have V ⊆ L(S).
Finally V ⊆ L(S) and L(S) ⊆ V ⇒ L(S) = V.
Illustrative Examples
Example 1. The subset containing a single element (1, 0, 0) of the vector space V₃(F) generates the subspace which is the totality of the elements of the form (a, 0, 0).
Example 2. The subset {(1, 0, 0), (0, 1, 0)} of V₃(F) generates the subspace which is the totality of the elements of the form (a, b, 0).
Example 3. The subset S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} of V₃(F) generates or spans the entire vector space V₃(F), i.e., L(S) = V.
If (a, b, c) be any element of V, then
(a, b, c) = a (1, 0, 0) + b (0, 1, 0) + c (0, 0, 1).
Thus (a, b, c) ∈ L(S). Hence V ⊆ L(S). Also L(S) ⊆ V.
Hence L(S) = V.
Example 4. Let V be the vector space of all polynomials over the field F. Let S be the subset of V consisting of the polynomials f₀, f₁, f₂, ..., defined by fₙ = xⁿ, n = 0, 1, 2, ....
Then V = L(S).
§ 12. Linear sum of two subspaces. Definition. Let W₁ and W₂ be two subspaces of the vector space V(F). Then the linear sum of the subspaces W₁ and W₂, denoted by W₁ + W₂, is the set of all sums α₁ + α₂ such that α₁ ∈ W₁, α₂ ∈ W₂.
Thus W₁ + W₂ = {α₁ + α₂ : α₁ ∈ W₁, α₂ ∈ W₂}.
Theorem. If W₁ and W₂ are subspaces of the vector space V(F), then
(i) W₁ + W₂ is a subspace of V(F);
(Marathwada 1971; Madras 81; Nagarjuna 90)
(ii) W₁ + W₂ = {W₁ ∪ W₂}, i.e., W₁ + W₂ = L(W₁ ∪ W₂).
Proof. (i) Let α, β be any two elements of W₁ + W₂. Then
α = α₁ + α₂, β = β₁ + β₂, where α₁, β₁ ∈ W₁ and α₂, β₂ ∈ W₂.
∴ aα + bβ = a (α₁ + α₂) + b (β₁ + β₂)
= (aα₁ + bβ₁) + (aα₂ + bβ₂).
Since W₁ is a subspace of V, therefore a, b ∈ F
and α₁, β₁ ∈ W₁ ⇒ aα₁ + bβ₁ ∈ W₁.
Similarly aα₂ + bβ₂ ∈ W₂.
Consequently aα + bβ = (aα₁ + bβ₁) + (aα₂ + bβ₂) ∈ W₁ + W₂.
Thus a, b ∈ F and α, β ∈ W₁ + W₂ ⇒ aα + bβ ∈ W₁ + W₂.
Hence W₁ + W₂ is a subspace of V(F).
(ii) Since W₂ contains the zero vector, therefore if α₁ ∈ W₁, then we can write
α₁ = α₁ + 0 ∈ W₁ + W₂. Thus W₁ ⊆ W₁ + W₂.
Similarly W₂ ⊆ W₁ + W₂.
Hence W₁ ∪ W₂ ⊆ W₁ + W₂.
Therefore W₁ + W₂ is a subspace of V(F) containing W₁ ∪ W₂.
Now to prove that W₁ + W₂ = {W₁ ∪ W₂}, we should prove that
W₁ + W₂ ⊆ L(W₁ ∪ W₂) and L(W₁ ∪ W₂) ⊆ W₁ + W₂.
Let α = α₁ + β₁ be any element of W₁ + W₂. Then α₁ ∈ W₁ and β₁ ∈ W₂. Therefore α₁, β₁ ∈ W₁ ∪ W₂. We can write
α₁ + β₁ = 1α₁ + 1β₁.
Thus α₁ + β₁ is a linear combination of a finite number of elements α₁, β₁ ∈ W₁ ∪ W₂.
Therefore α₁ + β₁ ∈ L(W₁ ∪ W₂).
∴ W₁ + W₂ ⊆ L(W₁ ∪ W₂).
Also L(W₁ ∪ W₂) is the smallest subspace containing W₁ ∪ W₂, and W₁ + W₂ is a subspace containing W₁ ∪ W₂. Therefore L(W₁ ∪ W₂) must be contained in W₁ + W₂. Consequently
L(W₁ ∪ W₂) ⊆ W₁ + W₂.
Hence W₁ + W₂ = L(W₁ ∪ W₂) = {W₁ ∪ W₂}.
Note. If W₁, W₂, ..., Wₖ are subspaces of the vector space V, then their linear sum, denoted by W₁ + W₂ + ... + Wₖ, is the set of all sums α₁ + α₂ + ... + αₖ such that αᵢ ∈ Wᵢ. It can be proved that W₁ + W₂ + ... + Wₖ is a subspace of V which contains each of the subspaces Wᵢ and which is spanned by the union of W₁, W₂, ..., Wₖ.
Solved Examples

Ex. 1. If S, T are subsets of V(F), then
(i) S ⊆ T ⇒ L(S) ⊆ L(T);
(ii) L(S ∪ T) = L(S) + L(T);
(iii) S is a subspace of V ⇔ L(S) = S;
(iv) L(L(S)) = L(S).
Solution. (i) Let α = a₁α₁ + a₂α₂ + ... + aₙαₙ ∈ L(S), where {α₁, α₂, ..., αₙ} is a finite subset of S. Since S ⊆ T, therefore {α₁, α₂, ..., αₙ} is also a finite subset of T. So α ∈ L(T).
Thus α ∈ L(S) ⇒ α ∈ L(T).
∴ L(S) ⊆ L(T).
(ii) Let α be any element of L(S ∪ T). Then
α = a₁α₁ + a₂α₂ + ... + aₘαₘ + b₁β₁ + b₂β₂ + ... + bₚβₚ,
where {α₁, α₂, ..., αₘ, β₁, β₂, ..., βₚ} is a finite subset of S ∪ T such that {α₁, α₂, ..., αₘ} ⊆ S and {β₁, β₂, ..., βₚ} ⊆ T.
Now a₁α₁ + a₂α₂ + ... + aₘαₘ ∈ L(S)
and b₁β₁ + b₂β₂ + ... + bₚβₚ ∈ L(T).
Therefore α ∈ L(S) + L(T).
Consequently L(S ∪ T) ⊆ L(S) + L(T).
Now let γ be any element of L(S) + L(T). Then γ = β + δ, where β ∈ L(S) and δ ∈ L(T). Now β will be a linear combination of a finite number of elements of S and δ will be a linear combination of a finite number of elements of T. Therefore β + δ will be a linear combination of a finite number of elements of S ∪ T.
Thus β + δ ∈ L(S ∪ T). Consequently
L(S) + L(T) ⊆ L(S ∪ T).
Hence L(S ∪ T) = L(S) + L(T).
(iii) Suppose S is a subspace of V. Then we are to prove that L(S) = S.
Let α ∈ L(S). Then α = a₁α₁ + a₂α₂ + ... + aₙαₙ,
where a₁, ..., aₙ ∈ F and α₁, ..., αₙ ∈ S. But S is a subspace of V. Therefore it is closed with respect to scalar multiplication and vector addition. Hence α = a₁α₁ + ... + aₙαₙ ∈ S. Thus
α ∈ L(S) ⇒ α ∈ S.
Therefore L(S) ⊆ S. Also S ⊆ L(S). Therefore L(S) = S.
Converse. Suppose L(S) = S. Then to prove that S is a subspace of V: we know that L(S) is a subspace of V. Since S = L(S), therefore S is also a subspace of V.
(iv) L(L(S)) is the smallest subspace of V containing L(S). But L(S) is a subspace of V. Therefore the smallest subspace of V containing L(S) is L(S) itself.
Hence L(L(S)) = L(S).
Ex. 2. Show that the intersection of any collection of subspaces of a vector space is a subspace. Can you replace 'intersection' by 'union' in this proposition? (Meerut 1973)
Ans. No.
Ex. 3. Let V be the vector space of all functions from R into R; let Vₑ be the subset of even functions, f(−x) = f(x); let V₀ be the subset of odd functions, f(−x) = −f(x).
(i) Prove that Vₑ and V₀ are subspaces of V.
(ii) Prove that Vₑ + V₀ = V.
(iii) Prove that Vₑ ∩ V₀ = {0}. (I.A.S. 1985)
Solution. (i) Suppose fₑ and gₑ ∈ Vₑ and a is any scalar, i.e., a ∈ R. Then
(afₑ + gₑ)(−x) = afₑ(−x) + gₑ(−x)
= afₑ(x) + gₑ(x) = (afₑ + gₑ)(x).
Therefore afₑ + gₑ is an even function.
Thus a ∈ R and fₑ, gₑ ∈ Vₑ ⇒ afₑ + gₑ ∈ Vₑ.
Hence Vₑ is a subspace of V.
Again suppose f₀ and g₀ ∈ V₀ and a is any scalar. Then
(af₀ + g₀)(−x) = af₀(−x) + g₀(−x) = a [−f₀(x)] − g₀(x)
= −[af₀(x) + g₀(x)] = −(af₀ + g₀)(x).
Therefore af₀ + g₀ is an odd function.
Thus a ∈ R and f₀, g₀ ∈ V₀ ⇒ af₀ + g₀ ∈ V₀.
Hence V₀ is a subspace of V.
(ii) Since Vₑ and V₀ are subspaces of V, therefore Vₑ + V₀ is also a subspace of V and consequently Vₑ + V₀ ⊆ V.
Now let f ∈ V. We shall show that f can be expressed as the sum of an even and an odd function.
Let f₁(x) = ½ [f(x) + f(−x)] and
f₂(x) = ½ [f(x) − f(−x)].
Then obviously f₁ is an even function and f₂ is an odd function. We can easily see that f₁(−x) = f₁(x)
and f₂(−x) = −f₂(x).
Now f(x) = ½ [f(x) + f(−x)] + ½ [f(x) − f(−x)]
= f₁(x) + f₂(x).
Therefore f = f₁ + f₂, where f₁ ∈ Vₑ, f₂ ∈ V₀.
Thus f ∈ V ⇒ f ∈ Vₑ + V₀. Therefore V ⊆ Vₑ + V₀.
Hence V = Vₑ + V₀.
(iii) Let 0 denote the zero function, i.e., 0(x) = 0 ∀ x ∈ R.
Then 0 ∈ Vₑ and also 0 ∈ V₀.
Let f ∈ Vₑ and also f ∈ V₀.
Then f(−x) = f(x) = −f(x).
∴ 2f(x) = 0
⇒ f(x) = 0 = 0(x).
Therefore f = 0 (zero function).
Thus f ∈ Vₑ ∩ V₀ ⇒ f = 0. Hence Vₑ ∩ V₀ = {0}.
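The decomposition used in part (ii) is easy to compute. The following Python sketch (the names even_part and odd_part are ours, not the text's) splits a sample function and checks the three claims: the parts are even and odd respectively, and they sum back to f:

```python
def even_part(f):
    # f_e(x) = [f(x) + f(-x)] / 2
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    # f_o(x) = [f(x) - f(-x)] / 2
    return lambda x: (f(x) - f(-x)) / 2

# A sample function that is neither even nor odd.
f = lambda x: x ** 3 + 2 * x ** 2 - 5 * x + 7
fe, fo = even_part(f), odd_part(f)

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert fe(-x) == fe(x)           # fe is even
    assert fo(-x) == -fo(x)          # fo is odd
    assert fe(x) + fo(x) == f(x)     # f = fe + fo
```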
§ 13. Linear dependence and linear independence of vectors.
(Kakatiya 1991; Osmania 90; Nagarjuna 78)
Linear dependence. Definition. Let V(F) be a vector space. A finite set {α₁, α₂, ..., αₙ} of vectors of V is said to be linearly dependent if there exist scalars a₁, a₂, ..., aₙ ∈ F, not all of them 0 (some of them may be zero), such that
a₁α₁ + a₂α₂ + a₃α₃ + ... + aₙαₙ = 0.
Linear independence. Definition. (Meerut 1980; Nagarjuna 78)
Let V(F) be a vector space. A finite set {α₁, α₂, ..., αₙ} of vectors of V is said to be linearly independent if every relation of the form
a₁α₁ + a₂α₂ + a₃α₃ + ... + aₙαₙ = 0, aᵢ ∈ F, 1 ≤ i ≤ n
⇒ aᵢ = 0 for each i, 1 ≤ i ≤ n.
Any infinite set of vectors of V is said to be linearly independent if its every finite subset is linearly independent; otherwise it is linearly dependent.
Illustrative Examples
Example 1. Prove that if two vectors are linearly dependent, one of them is a scalar multiple of the other.
Solution. Let α, β be two linearly dependent vectors of the vector space V. Then ∃ scalars a, b, not both zero, such that
aα + bβ = 0.
If a ≠ 0, then we get
aα = −bβ
⇒ α = (−b/a) β ⇒ α is a scalar multiple of β.
If b ≠ 0, then we get
bβ = −aα
⇒ β = (−a/b) α ⇒ β is a scalar multiple of α.
Thus one of the vectors α and β is a scalar multiple of the other.
Example 2. In the vector space Vₙ(F), the system of n vectors
e₁ = (1, 0, 0, ..., 0), e₂ = (0, 1, 0, ..., 0), ..., eₙ = (0, 0, ..., 0, 1)
is linearly independent, where 1 denotes the unity of the field F.
Solution. If a₁, a₂, ..., aₙ be any scalars, then
a₁e₁ + a₂e₂ + ... + aₙeₙ = 0
⇒ a₁ (1, 0, 0, ..., 0) + a₂ (0, 1, 0, ..., 0) + ...
+ aₙ (0, 0, ..., 0, 1) = 0
⇒ (a₁, a₂, ..., aₙ) = (0, 0, ..., 0)
⇒ a₁ = 0, a₂ = 0, ..., aₙ = 0.
Therefore the given set of n vectors is linearly independent.
In particular, {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a linearly independent subset of V₃(F). (Nagarjuna 1980)
Example 3. If the set S = {α₁, α₂, ..., αₙ} of vectors of V(F) is linearly independent, then none of the vectors α₁, α₂, ..., αₙ can be the zero vector.
Solution. Let αᵣ be equal to the zero vector, where 1 ≤ r ≤ n.
Then 0α₁ + 0α₂ + ... + aαᵣ + 0αᵣ₊₁ + ... + 0αₙ = 0
for any a ≠ 0 in F.
Since a ≠ 0, therefore from this relation we conclude that S is linearly dependent. Thus we get a contradiction, because it is given that S is linearly independent. Hence none of the vectors α₁, α₂, ..., αₙ can be the zero vector. We also conclude that a set of vectors which contains the zero vector is necessarily linearly dependent. (Kanpur 1981; Meerut 88)
Example 4. Every superset of a linearly dependent set of vectors is linearly dependent.
Solution. Let S = {α₁, α₂, ..., αₙ} be a linearly dependent set of vectors. Then there exist scalars a₁, a₂, ..., aₙ not all zero such that
a₁α₁ + a₂α₂ + ... + aₙαₙ = 0. ...(1)
Now let S' = {α₁, α₂, ..., αₙ, β₁, ..., βₘ} be a superset of S. Then we have from (1)
a₁α₁ + a₂α₂ + ... + aₙαₙ + 0β₁ + 0β₂ + ... + 0βₘ = 0. ...(2)
Since in the relation (2) the scalar coefficients are not all 0, therefore S' is linearly dependent.
From this we also conclude that any subset of a linearly independent set of vectors is also linearly independent.
(Nagarjuna 1990)
Example 5. A system consisting of a single non-zero vector is always linearly independent.
Solution. Let S = {α} be a subset of a vector space V and let α be not equal to the zero vector. If a is any scalar, then
aα = 0
⇒ a = 0. [Since α is not the zero vector]
∴ the set S is linearly independent.
Example 6. Show that
S = {(1, 2, 4), (1, 0, 0), (0, 1, 0), (0, 0, 1)}
is a linearly dependent subset of the vector space V₃(R), where R is the field of real numbers.
(Poona 1972)
Solution. We have
1 (1, 2, 4) + (−1) (1, 0, 0) + (−2) (0, 1, 0) + (−4) (0, 0, 1)
= (1, 2, 4) + (−1, 0, 0) + (0, −2, 0) + (0, 0, −4)
= (0, 0, 0), i.e., the zero vector.
Since in this relation the scalar coefficients 1, −1, −2, −4 are not all zero, therefore the given system S is linearly dependent.
Example 7. In V₃(R), where R is the field of real numbers, examine each of the following sets of vectors for linear dependence:
(i) {(2, 1, 2), (8, 4, 8)}
(ii) {(1, 2, 0), (0, 3, 1), (−1, 0, 1)} (Meerut 1989)
(iii) {(−1, 2, 1), (3, 0, −1), (−5, 4, 3)}
(iv) {(2, 3, 5), (4, 9, 25)}
(v) {(1, 3, 2), (1, −7, −8), (2, 1, −1)}
(vi) {(1, 2, 1), (3, 1, 5), (3, −4, 7)}. (Meerut 1986)
Solution. (i) We have
4 (2, 1, 2) + (−1) (8, 4, 8)
= (8, 4, 8) + (−8, −4, −8) = (0, 0, 0), i.e., the zero vector.
Since in this relation the scalar coefficients 4, −1 are not both zero, therefore the given set is linearly dependent.
(ii) Let a, b, c be scalars, i.e., real numbers, such that
a (1, 2, 0) + b (0, 3, 1) + c (−1, 0, 1) = (0, 0, 0),
i.e., (a − c, 2a + 3b, b + c) = (0, 0, 0),
i.e., a + 0b − c = 0,
2a + 3b + 0c = 0,
0a + b + c = 0.
These equations will have a non-zero solution, i.e., a solution in which a, b, c are not all zero, if the rank of the coefficient matrix is less than three, i.e., the number of unknowns a, b, c. If the rank is 3, then the zero solution a = 0, b = 0, c = 0 will be the only solution.
Coefficient matrix A = [ 1  0  -1 ]
                       [ 2  3   0 ]
                       [ 0  1   1 ].
We have | A | = 1 (3 − 0) − 2 (0 + 1) = 1 ≠ 0.
∴ rank A = 3. Hence a = 0, b = 0, c = 0 is the only solution.
Therefore the given system is linearly independent.
(iii) Let a, b, c be scalars such that
a (−1, 2, 1) + b (3, 0, −1) + c (−5, 4, 3) = (0, 0, 0),
i.e.,
(−a + 3b − 5c, 2a + 0b + 4c, a − b + 3c) = (0, 0, 0),
i.e., −a + 3b − 5c = 0, 2a + 0b + 4c = 0, a − b + 3c = 0.
The coefficient matrix A of these equations is
A = [ -1  3  -5 ]
    [  2  0   4 ]
    [  1 -1   3 ].
We have | A | = −1 (0 + 4) − 2 (9 − 5) + 1 (12 − 0) = 0.
∴ rank A is < 3, i.e., less than the number of unknowns a, b, c.
Therefore the given system of equations will possess a non-zero solution. For example, a = −2, b = 1, c = 1 is a non-zero solution.
Hence the given system of vectors is linearly dependent.
(iv) Let a, b be scalars, i.e., real numbers, such that
a (2, 3, 5) + b (4, 9, 25) = (0, 0, 0),
i.e., (2a + 4b, 3a + 9b, 5a + 25b) = (0, 0, 0),
i.e., 2a + 4b = 0, 3a + 9b = 0, 5a + 25b = 0.
The coefficient matrix A of these equations is
A = [ 2   4 ]
    [ 3   9 ]
    [ 5  25 ].
Obviously rank A = 2, i.e., equal to the number of unknowns a and b. Therefore these equations have the only solution a = 0, b = 0. Hence the given set of vectors is linearly independent.
(v) Let a, b, c be scalars, i.e., real numbers, such that
a (1, 3, 2) + b (1, −7, −8) + c (2, 1, −1) = (0, 0, 0),
i.e., (a + b + 2c, 3a − 7b + c, 2a − 8b − c) = (0, 0, 0),
i.e., a + b + 2c = 0, ...(1)
3a − 7b + c = 0, ...(2)
2a − 8b − c = 0. ...(3)
Eliminating c between (1) and (2), we get
5a − 15b = 0 or a − 3b = 0.
Eliminating c between (2) and (3), we get
5a − 15b = 0 or a − 3b = 0,
which is the same equation as obtained on eliminating c between (1) and (2).
If we choose b = 1, then a = 3, and putting these in any one of the equations (1), (2), (3), we get c = −2. Hence the given set is linearly dependent.
(vi) Let a, b, c be scalars, i.e., real numbers, such that
a (1, 2, 1) + b (3, 1, 5) + c (3, −4, 7) = (0, 0, 0),
i.e., (a + 3b + 3c, 2a + b − 4c, a + 5b + 7c) = (0, 0, 0),
i.e., a + 3b + 3c = 0, ...(1)
2a + b − 4c = 0, ...(2)
a + 5b + 7c = 0. ...(3)
Multiplying (1) by 2, we get
2a + 6b + 6c = 0. ...(4)
Subtracting (4) from (2), we get
−5b − 10c = 0
or b + 2c = 0. ...(5)
Again subtracting (3) from (1), we get
−2b − 4c = 0 or b + 2c = 0. ...(6)
The equations (5) and (6) are the same and give b = −2c.
Putting b = −2c in (1), we get a = 3c. If we take c = 1, we get b = −2 and a = 3. Thus a = 3, b = −2, c = 1 is a non-zero solution of the equations (1), (2) and (3). Hence the given set of vectors is linearly dependent.
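All six parts reduce to a rank computation. The Python sketch below (our own helper, not part of the text, using exact rational arithmetic to avoid rounding error) reproduces the conclusions of parts (i) to (iv):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (given as a list of rows) by exact Gaussian
    elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent(vectors):
    # Vectors are linearly independent iff the rank equals their number.
    return rank(vectors) == len(vectors)

assert not independent([(2, 1, 2), (8, 4, 8)])                 # part (i)
assert independent([(1, 2, 0), (0, 3, 1), (-1, 0, 1)])         # part (ii)
assert not independent([(-1, 2, 1), (3, 0, -1), (-5, 4, 3)])   # part (iii)
assert independent([(2, 3, 5), (4, 9, 25)])                    # part (iv)
```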
Example 8. If F is the field of real numbers, prove that the vectors (a₁, a₂) and (b₁, b₂) in V₂(F) are linearly dependent iff
a₁b₂ − a₂b₁ = 0. (Kanpur 1981; Gorakhpur 79)
Solution. Let x, y ∈ F. Then
x (a₁, a₂) + y (b₁, b₂) = (0, 0)
⇒ (xa₁ + yb₁, xa₂ + yb₂) = (0, 0).
Therefore a₁x + b₁y = 0
and a₂x + b₂y = 0.
The necessary and sufficient condition for these equations to possess a non-zero solution is that
| a₁  b₁ |
| a₂  b₂ | = 0,
i.e., a₁b₂ − a₂b₁ = 0.
Hence the given system is linearly dependent iff
a₁b₂ − a₂b₁ = 0.
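The determinant criterion is a one-line test in code. A brief Python sketch (the helper name dependent_2d is ours):

```python
def dependent_2d(v, w):
    # (a1, a2) and (b1, b2) are linearly dependent iff a1*b2 - a2*b1 == 0
    return v[0] * w[1] - v[1] * w[0] == 0

assert dependent_2d((2, 3), (4, 6))        # (4, 6) = 2 * (2, 3)
assert not dependent_2d((1, 0), (0, 1))    # standard basis of V2(R)
assert dependent_2d((0, 0), (5, 7))        # a zero vector forces dependence
```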
Example 9. If α₁ and α₂ are vectors of V(F), and a, b ∈ F, show that the set {α₁, α₂, aα₁ + bα₂} is linearly dependent.
(Poona 1972)
Solution. We have
(−a) α₁ + (−b) α₂ + 1 (aα₁ + bα₂)
= (−a + a) α₁ + (−b + b) α₂
= 0α₁ + 0α₂ = 0, i.e., the zero vector.
Whatever may be the scalars −a and −b, since 1 ≠ 0, therefore the given set of vectors is linearly dependent.
Example 10. Let α₁, α₂, α₃ be vectors of V(F), and a, b ∈ F. Show that the set {α₁, α₂, α₃} is linearly dependent if the set {α₁ + aα₂ + bα₃, α₂, α₃} is linearly dependent.
Solution. Since the set {α₁ + aα₂ + bα₃, α₂, α₃} is linearly dependent, therefore there exist scalars x, y, z not all zero such that
x (α₁ + aα₂ + bα₃) + yα₂ + zα₃ = 0,
i.e., xα₁ + (xa + y) α₂ + (xb + z) α₃ = 0. ...(1)
If in the relation (1) the coefficients x, xa + y, xb + z are not all zero, then the set {α₁, α₂, α₃} will also be linearly dependent.
If x ≠ 0, then the problem is at once solved, whatever y and z may be. However, if x = 0, then at least one of y and z is not zero. Therefore at least one of xa + y and xb + z will not be zero, since when x = 0 then xa + y and xb + z reduce to y and z respectively.
Hence in the relation (1) the scalar coefficients of α₁, α₂, α₃ are not all zero. Therefore the set {α₁, α₂, α₃} is also linearly dependent.
Example 11. If α, β, γ are linearly independent vectors of V(F), where F is any subfield of the field of complex numbers, then so also are α + β, β + γ, γ + α. (Meerut 1986)
Solution. Let a, b, c be scalars such that
a (α + β) + b (β + γ) + c (γ + α) = 0,
i.e., (a + c) α + (a + b) β + (b + c) γ = 0. ...(1)
But α, β, γ are linearly independent. Therefore (1) implies
a + 0b + c = 0,
a + b + 0c = 0,
0a + b + c = 0.
The coefficient matrix A of these equations is
A = [ 1  0  1 ]
    [ 1  1  0 ]
    [ 0  1  1 ].
We have | A | = 2 ≠ 0, so rank A = 3, i.e., the number of unknowns a, b, c.
Therefore a = 0, b = 0, c = 0 is the only solution of the given equations.
Hence α + β, β + γ, γ + α are also linearly independent.
Example 12. If α, β, γ are linearly independent vectors of V(F), where F is the field of complex numbers, then so also are
α + β, α − β, α − 2β + γ.
Solution. Let a, b, c be scalars such that
a (α + β) + b (α − β) + c (α − 2β + γ) = 0, ...(1)
i.e., (a + b + c) α + (a − b − 2c) β + cγ = 0. ...(2)
But α, β, γ are linearly independent. Therefore (2) implies
a + b + c = 0, a − b − 2c = 0, c = 0.
The only solution of these equations is c = 0, a = 0, b = 0.
Thus (1) implies a = 0, b = 0, c = 0. Therefore the vectors α + β, α − β, α − 2β + γ are linearly independent.
Example 13. Show that the set {1, x, 1 + x + x²} is a linearly independent set of vectors in the vector space of all polynomials over the real number field. (Meerut 1976)
Solution. Let a, b, c be scalars (real numbers) such that
a (1) + bx + c (1 + x + x²) = 0. We have
a (1) + bx + c (1 + x + x²) = 0
⇒ (a + c) + (b + c) x + cx² = 0
⇒ a + c = 0, b + c = 0, c = 0
⇒ c = 0, b = 0, a = 0.
∴ the vectors 1, x, 1 + x + x² are linearly independent over the field of real numbers.
Example 14. In the vector space F[x] of all polynomials over the field F, the infinite set S = {1, x, x², ...} is linearly independent.
Solution. Let S' = {x^(m₁), x^(m₂), ..., x^(mₙ)} be any finite subset of S having n vectors. Here m₁, m₂, ..., mₙ are some distinct non-negative integers. Let a₁, a₂, ..., aₙ be scalars such that
a₁x^(m₁) + a₂x^(m₂) + ... + aₙx^(mₙ) = 0
(i.e., the zero polynomial). ...(1)
By the definition of equality of two polynomials, we have from (1)
a₁ = 0, a₂ = 0, ..., aₙ = 0.
Thus every finite subset of S is linearly independent.
Therefore S is linearly independent.
Example 15. Is the vector (2, −5, 3) in the subspace of R³ spanned by the vectors (1, −3, 2), (2, −4, −1), (1, −5, 7)?
(Meerut 1990)
Solution. Let α = (2, −5, 3), α₁ = (1, −3, 2), α₂ = (2, −4, −1), α₃ = (1, −5, 7). If α can be expressed as a linear combination of the vectors α₁, α₂, α₃, then it will be in the subspace of R³ spanned by these vectors; otherwise it will not be.
Let α = a₁α₁ + a₂α₂ + a₃α₃, where a₁, a₂, a₃ ∈ R.
Then (2, −5, 3) = a₁ (1, −3, 2) + a₂ (2, −4, −1) + a₃ (1, −5, 7)
⇒ (2, −5, 3) = (a₁ + 2a₂ + a₃, −3a₁ − 4a₂ − 5a₃, 2a₁ − a₂ + 7a₃).
∴ a₁ + 2a₂ + a₃ = 2, ...(1)
−3a₁ − 4a₂ − 5a₃ = −5, ...(2)
2a₁ − a₂ + 7a₃ = 3. ...(3)
Multiplying the equation (1) by 3 and adding to (2), we get
2a₂ − 2a₃ = 1 or a₂ − a₃ = 1/2. ...(4)
Again multiplying the equation (1) by 2 and subtracting from (3), we get
−5a₂ + 5a₃ = −1 or a₂ − a₃ = 1/5. ...(5)
The relations (4) and (5) show that the above equations are inconsistent. Hence the vector α cannot be expressed as a linear combination of the vectors α₁, α₂, α₃. Therefore α is not in the subspace of R³ generated by the vectors α₁, α₂, α₃.
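The inconsistency found above can be replayed step by step. The Python sketch below mirrors the elimination in the solution, using exact fractions so that the two incompatible constants appear exactly:

```python
from fractions import Fraction as F

# Augmented rows [coef of a1, a2, a3 | right-hand side] from Example 15.
eq1 = [F(1), F(2), F(1), F(2)]
eq2 = [F(-3), F(-4), F(-5), F(-5)]
eq3 = [F(2), F(-1), F(7), F(3)]

# Follow the elimination in the text:
# 3*(1) + (2)  gives  2a2 - 2a3 = 1
r4 = [3 * x + y for x, y in zip(eq1, eq2)]
# (3) - 2*(1)  gives  -5a2 + 5a3 = -1
r5 = [y - 2 * x for x, y in zip(eq1, eq3)]

# Normalise both rows to the form a2 - a3 = const.
c4 = r4[3] / r4[1]
c5 = r5[3] / r5[1]
assert [x / r4[1] for x in r4[:3]] == [x / r5[1] for x in r5[:3]]
assert (c4, c5) == (F(1, 2), F(1, 5))
# Same left-hand side, different constants: the system is inconsistent,
# so (2, -5, 3) is not in the span of the three vectors.
assert c4 != c5
```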
Example 16. In the vector space R³ let α = (1, 2, 1), β = (3, 1, 5), γ = (3, −4, 7). Show that the subspaces spanned by S = {α, β} and T = {α, β, γ} are the same. (Meerut 1977)
Solution. First we shall show that the vector γ can be expressed as a linear combination of the vectors α and β. Let
(3, −4, 7) = a (1, 2, 1) + b (3, 1, 5).
Then a + 3b = 3, 2a + b = −4, a + 5b = 7. Solving the first two equations we get a = −3, b = 2, and these satisfy the third equation also. Therefore we can write γ = −3α + 2β.
Now S ⊆ T ⇒ L(S) ⊆ L(T).
Further, let δ ∈ L(T). Then δ can be expressed as a linear combination of the vectors α, β and γ. In this linear combination the vector γ can be replaced by −3α + 2β. Thus δ can be expressed as a linear combination of the vectors α and β. Therefore δ ∈ L(S).
Thus δ ∈ L(T) ⇒ δ ∈ L(S). Therefore L(T) ⊆ L(S).
Hence L(T) = L(S).
Example 17. Show that the three vectors (1, 1, −1), (2, −3, 5) and (−2, 1, 4) of R³ are linearly independent. (Meerut 1986)
Solution. Let a, b, c be scalars, i.e., real numbers, such that
a (1, 1, −1) + b (2, −3, 5) + c (−2, 1, 4) = (0, 0, 0),
i.e., (a + 2b − 2c, a − 3b + c, −a + 5b + 4c) = (0, 0, 0),
i.e., a + 2b − 2c = 0, ...(1)
a − 3b + c = 0, ...(2)
−a + 5b + 4c = 0. ...(3)
Now we shall solve the simultaneous equations (1), (2) and (3).
Multiplying (2) by 2 and adding to (1), we get
3a − 4b = 0. ...(4)
Again multiplying (1) by 2 and adding to (3), we get
a + 9b = 0. ...(5)
Multiplying (5) by 3 and subtracting from (4), we get
−31b = 0 or b = 0.
Putting b = 0 in (5), we get a = 0.
Now putting a = 0, b = 0 in (1), we get c = 0.
Thus a = 0, b = 0, c = 0 is the only solution of the equations (1), (2) and (3).
∴ a (1, 1, −1) + b (2, −3, 5) + c (−2, 1, 4) = (0, 0, 0)
⇒ a = 0, b = 0, c = 0.
Hence the vectors (1, 1, −1), (2, −3, 5), (−2, 1, 4) of R³ are linearly independent.
Example 18. Show that the vectors (1, 1, 2, 4), (2, −1, −5, 2), (1, −1, −4, 0) and (2, 1, 1, 6) are linearly dependent in R⁴.
(Nagarjuna 1990; Meerut 89)
Solution. Let (1, 1, 2, 4) = a (2, −1, −5, 2) + b (1, −1, −4, 0) + c (2, 1, 1, 6).
Then 2a + b + 2c = 1, ...(1)
−a − b + c = 1, ...(2)
−5a − 4b + c = 2, ...(3)
2a + 0b + 6c = 4. ...(4)
Now we shall solve the simultaneous equations (1), (2), (3) and (4).
Adding (1) and (2), we get a + 3c = 2, which is the same equation as (4) after dividing (4) by 2.
If we take c = 0, we get a = 2.
Putting a = 2 and c = 0 in (1), we get b = −3.
We see that a = 2, b = −3, c = 0 satisfy all the four equations (1), (2), (3) and (4).
∴ (1, 1, 2, 4) = 2 (2, −1, −5, 2) − 3 (1, −1, −4, 0) + 0 (2, 1, 1, 6),
or 1 (1, 1, 2, 4) − 2 (2, −1, −5, 2) + 3 (1, −1, −4, 0) − 0 (2, 1, 1, 6) = (0, 0, 0, 0). ...(5)
Since in the linear relation (5) among the four given vectors the scalar coefficients 1, −2, 3, 0 are not all zero, therefore the given vectors are linearly dependent in R⁴.
Example 19. Show that the vectors (1, 1, 0, 0), (0, 1, −1, 0), (0, 0, 0, 3) in R⁴ are linearly independent.
Solution. Let a, b, c be scalars, i.e., real numbers, such that
a (1, 1, 0, 0) + b (0, 1, −1, 0) + c (0, 0, 0, 3) = (0, 0, 0, 0). ...(1)
Then a + 0b + 0c = 0,
a + b + 0c = 0,
0a − b + 0c = 0,
0a + 0b + 3c = 0.
The only solution of the above equations is
a = 0, b = 0, c = 0.
Thus the linear relation (1) among the three given vectors is possible only if a = 0, b = 0, c = 0.
Hence the three given vectors in R⁴ are linearly independent.
Example 20. Is the vector (3, −1, 0, −1) in the subspace of R⁴ spanned by the vectors (2, −1, 3, 2), (−1, 1, 1, −3) and (1, 1, 9, −5)? (Meerut 1983)
Solution. Let α = (3, −1, 0, −1), α₁ = (2, −1, 3, 2),
α₂ = (−1, 1, 1, −3), α₃ = (1, 1, 9, −5).
If α can be expressed as a linear combination of the vectors α₁, α₂, α₃, then it will be in the subspace of R⁴ spanned by these vectors; otherwise it will not be.
Let α = aα₁ + bα₂ + cα₃, where a, b, c ∈ R.
Then (3, −1, 0, −1) = a (2, −1, 3, 2) + b (−1, 1, 1, −3) + c (1, 1, 9, −5).
∴ 2a − b + c = 3, ...(1)
−a + b + c = −1, ...(2)
3a + b + 9c = 0, ...(3)
and 2a − 3b − 5c = −1. ...(4)
Adding the equations (1) and (2), we get
a + 2c = 2. ...(5)
Again adding the equations (1) and (3), we get
5a + 10c = 3. ...(6)
Multiplying the equation (5) by 5, we get
5a + 10c = 10. ...(7)
The relations (6) and (7) show that the equations (1), (2), (3) and (4) are inconsistent, i.e., do not possess a common solution. Hence the vector α cannot be expressed as a linear combination of the vectors α₁, α₂, α₃. Therefore α is not in the subspace of R⁴ generated by the vectors α₁, α₂, α₃.
Example 21. Show that the set {1, x, x (1 − x)} is a linearly independent set of vectors in the space of all polynomials over the real number field.
(Meerut 1985)
Solution. The zero vector of the vector space of all polynomials over the real number field is the zero polynomial.
Let a, b, c be scalars (i.e., real numbers) such that
a (1) + bx + c [x (1 − x)] = 0, i.e., the zero polynomial.
We have a (1) + bx + c (x − x²) = 0
⇒ a + (b + c) x − cx² = 0. ...(1)
Now two polynomials in x are said to be equal if the coefficients of like powers of x on both sides are equal. So by the definition of the equality of two polynomials, we have from (1),
a = 0, b + c = 0, −c = 0
⇒ a = 0, b = 0, c = 0.
Thus a (1) + bx + c [x (1 − x)] = 0
⇒ a = 0, b = 0, c = 0.
∴ the vectors 1, x, x (1 − x) are linearly independent over the field of real numbers.
Example 22. Find whether the vectors 2x³ + x² + x + 1, x³ + 3x² + x − 2 and x³ + 2x² − x + 3 of R[x], the vector space of all polynomials over the real number field, are linearly independent or not. (Meerut 1975)
Solution. The zero vector of the vector space R[x] is the zero polynomial.
Let a, b, c be scalars (i.e., real numbers) such that
a (2x³ + x² + x + 1) + b (x³ + 3x² + x − 2) + c (x³ + 2x² − x + 3) = 0,
i.e., the zero polynomial.
Then
(2a + b + c) x³ + (a + 3b + 2c) x² + (a + b − c) x + (a − 2b + 3c) = 0. ...(1)
Equating the coefficients of like powers of x on both sides of (1), we get
2a + b + c = 0,
a + 3b + 2c = 0,
a + b − c = 0, ...(2)
and a − 2b + 3c = 0.
The coefficient matrix A of the system of equations (2) is
A = [ 2   1   1 ]
    [ 1   3   2 ]
    [ 1   1  -1 ]
    [ 1  -2   3 ]
~ [ 1   3   2 ]
  [ 2   1   1 ]
  [ 1   1  -1 ]
  [ 1  -2   3 ], by R₁ ↔ R₂
~ [ 1   3   2 ]
  [ 0  -5  -3 ]
  [ 0  -2  -3 ]
  [ 0  -5   1 ], by R₂ → R₂ − 2R₁, R₃ → R₃ − R₁, R₄ → R₄ − R₁
~ [ 1   3    2 ]
  [ 0   1  3/5 ]
  [ 0  -2   -3 ]
  [ 0  -5    1 ], by R₂ → −(1/5) R₂
~ [ 1   3     2 ]
  [ 0   1   3/5 ]
  [ 0   0  -9/5 ]
  [ 0   0     4 ], by R₃ → R₃ + 2R₂, R₄ → R₄ + 5R₂
~ [ 1   3    2 ]
  [ 0   1  3/5 ]
  [ 0   0    1 ]
  [ 0   0    4 ], by R₃ → −(5/9) R₃
~ [ 1   3    2 ]
  [ 0   1  3/5 ]
  [ 0   0    1 ]
  [ 0   0    0 ], by R₄ → R₄ − 4R₃,
which is in echelon form.
∴ rank A = number of non-zero rows in its echelon form
= 3 = number of unknowns a, b, c in the system of equations (2).
Hence the system of equations (2) has the only solution
a = 0, b = 0, c = 0.
∴ the given set of vectors is linearly independent.
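The rank argument can also be checked by exhibiting one non-zero 3×3 minor of the 3×4 coefficient matrix whose rows are the polynomials' coefficient vectors. A Python sketch (the helper det3 is ours):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Coefficient vectors (x^3, x^2, x, constant) of the three polynomials.
p1 = (2, 1, 1, 1)    # 2x^3 + x^2 + x + 1
p2 = (1, 3, 1, -2)   # x^3 + 3x^2 + x - 2
p3 = (1, 2, -1, 3)   # x^3 + 2x^2 - x + 3

# A non-zero 3x3 minor means the coefficient matrix has rank 3,
# i.e., the three polynomials are linearly independent.
minor = det3([p1[:3], p2[:3], p3[:3]])
assert minor == -9 and minor != 0
```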
- Ex. 23. Prove that a set of vectors which contains the zero
vector is linearly dependent. (Meerut 1988)
Sol. Let 5={ai, az, ..., be a set of vectors of the vector
, space V{F).
heioLr be equal to zero vector where 1 < r < n.
To show that the set S of vectors is linearly dependent.
Obviously
Oa,+0«j+...+fl«,+Ooi,+,+...+Oa..=0 U„ zero vector. ...(1)
for any non-zero scalar a i.e., for any non-zero element a in the
field F.
Since in the linear relation (1) among the vectors «i, ai, .
the scalar coefficient a is not zero, therefore the vectors ai, a2,
are linearly dependent.
Ex. 24. Show that the system of three vectors (1, 3, 2), (1, -7, -8), (2, 1, -1) of V3(R) is linearly dependent. (Meerut 1989; Gorakhpur 80)

Sol. Let a, b, c be scalars i.e., real numbers such that
a (1, 3, 2)+b (1, -7, -8)+c (2, 1, -1)=(0, 0, 0)
i.e., (a+b+2c, 3a-7b+c, 2a-8b-c)=(0, 0, 0).

Then a+b+2c=0,   ...(1)
3a-7b+c=0,   ...(2)
and 2a-8b-c=0.   ...(3)

If the equations (1), (2), (3) possess a non-zero solution, then the given vectors are linearly dependent.

Adding (2) and (3), we get
5a-15b=0 or a-3b=0.   ...(4)

Multiplying (3) by 2 and adding to (1), we get
5a-15b=0 or a-3b=0.   ...(5)

The equations (4) and (5) are the same and give a=3b.

Putting a=3b in (1), we get 2c+4b=0 or c=-2b.

If we take b=1, we get a=3, c=-2.

Thus a=3, b=1, c=-2 is a non-zero solution of the equations (1), (2) and (3). Hence the given set of vectors is linearly dependent.

Alternative method. The coefficient matrix A of the system of equations (1), (2) and (3) is

      [1  1  2]
A  =  [3 -7  1]
      [2 -8 -1]

We have
          |1  1  2|
det A  =  |3 -7  1|
          |2 -8 -1|

          |1   0   0|
       =  |3 -10  -5| , by C2 → C2-C1 and C3 → C3-2C1
          |2 -10  -5|

       =  50-50 = 0.

∴ rank A < 3 i.e., rank A < the number of unknowns a, b, c in the equations (1), (2) and (3).

Therefore the equations (1), (2) and (3) must possess a non-zero solution. Hence the given vectors are linearly dependent.
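As a quick numerical cross-check (an added illustration, not from the original text), the 3×3 determinant can be expanded directly, and the non-zero solution found above can be substituted back into the vectors:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 1, 2], [3, -7, 1], [2, -8, -1]]
print(det3(A))  # 0, so the system has a non-zero solution

# The solution a=3, b=1, c=-2 really does combine the vectors to zero:
v1, v2, v3 = (1, 3, 2), (1, -7, -8), (2, 1, -1)
combo = tuple(3 * x + 1 * y - 2 * z for x, y, z in zip(v1, v2, v3))
print(combo)  # (0, 0, 0)
```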
Ex. 25. Determine whether the following set of vectors in Q³ is linearly dependent or independent, Q being the field of rational numbers:
{(-1, 2, 1), (3, 1, -2)}. (Meerut 1974)

Sol. Let a, b be scalars (i.e., a, b ∈ Q) such that
a (-1, 2, 1)+b (3, 1, -2)=(0, 0, 0)
i.e., (-a+3b, 2a+b, a-2b)=(0, 0, 0).

Then -a+3b=0,
2a+b=0,          ...(1)
a-2b=0.

The coefficient matrix A of the system of equations (1) is

      [-1  3]
A  =  [ 2  1]
      [ 1 -2]

         |-1  3|
We have  | 2  1|  = -1-6 = -7 ≠ 0.

Thus there exists a 2-rowed minor of the matrix A which is not zero. Also the matrix can have no minor of order greater than 2.

∴ rank A = 2 = the number of unknowns a and b.

Therefore the equations (1) have the only solution a=0, b=0. Hence the given set of vectors is linearly independent.

Note. If we do not want to use the concept of the rank of a matrix to discuss the solutions of the system of equations (1), we can directly say that, solving the system of equations (1), we find that its only solution is a=0, b=0.
Ex. 26. Find a linearly independent subset T of the set
S={α1, α2, α3, α4},
where α1=(1, 2, -1), α2=(-3, -6, 3), α3=(2, 1, 3), α4=(8, 7, 7) ∈ R³,
which spans the same space as S.

Sol. First we observe that α2=-3α1, so that the vectors α1 and α2 are linearly dependent.

∴ if S1={α1, α3, α4}, then the subspace of R³ spanned by S1 is the same as that spanned by S.

Now there exists no real number c such that α3=cα1. Therefore the vectors α1 and α3 are linearly independent.

Let us now see whether the vector α4 lies in the subspace of R³ spanned by the vectors α1 and α3 or not.

Let α4=aα1+bα3, where a, b ∈ R.

Then (8, 7, 7)=a (1, 2, -1)+b (2, 1, 3).

∴ a+2b=8,
2a+b=7,
and -a+3b=7.

Solving the first two of these three equations, we get a=2, b=3. These values of a and b also satisfy the third equation.

∴ α4=2α1+3α3.

Thus the vector α4 has been expressed as a linear combination of α1 and α3, so that the subspace of R³ spanned by the vectors α1, α3 and α4 is the same as that spanned by the vectors α1 and α3.

Hence T={α1, α3} is a linearly independent subset of S which spans the same subspace of R³ as is spanned by S.
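The two relations used above can be confirmed by direct arithmetic on the tuples; a small sketch added for illustration:

```python
a1 = (1, 2, -1)
a2 = (-3, -6, 3)
a3 = (2, 1, 3)
a4 = (8, 7, 7)

def scale(c, v):
    """Multiply a vector (tuple) by a scalar."""
    return tuple(c * x for x in v)

def add(u, v):
    """Add two vectors componentwise."""
    return tuple(x + y for x, y in zip(u, v))

# alpha2 = -3 * alpha1, so {alpha1, alpha2} is linearly dependent
print(a2 == scale(-3, a1))                     # True

# alpha4 = 2*alpha1 + 3*alpha3, so alpha4 adds nothing to the span
print(a4 == add(scale(2, a1), scale(3, a3)))   # True
```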
Ex. 27. Determine a basis of the subspace spanned by the vectors
α1=(1, 2, 3), α2=(2, 1, -1), α3=(1, -1, -4), α4=(4, 2, -2). (I.A.S. 1988)

Sol. Proceed as in Ex. 26. Ans. {α1, α2}.
Ex. 28. Find a maximal linearly independent subsystem of the system of vectors
α1=(2, -2, -4), α2=(1, 9, 3), α3=(-2, -4, 1) and α4=(3, 7, -1). (I.A.S. 1986)

Sol. Let A denote the matrix

      [ 2 -2 -4]
A  =  [ 1  9  3]
      [-2 -4  1]
      [ 3  7 -1]

whose rows consist of the vectors α1, α2, α3, α4.

We shall reduce the matrix A to echelon form by applying row transformations. We have

      [ 1 -1 -2]
A  ~  [ 1  9  3] , by R1 → (1/2) R1
      [-2 -4  1]
      [ 3  7 -1]

      [1 -1 -2]
   ~  [0 10  5] , by R2 → R2-R1, R3 → R3+2R1, R4 → R4-3R1
      [0 -6 -3]
      [0 10  5]

      [1 -1   -2]
   ~  [0  1  1/2] , by R2 → (1/10) R2, R3 → (-1/6) R3, R4 → (1/10) R4
      [0  1  1/2]
      [0  1  1/2]

      [1 -1   -2]
   ~  [0  1  1/2] , by R3 → R3-R2, R4 → R4-R2,
      [0  0    0]
      [0  0    0]

which is in echelon form.

We have rank A = the number of non-zero rows in its echelon form = 2.

∴ the maximum number of linearly independent row vectors in the matrix A = the rank of A = 2.

The vectors α1 and α2 are linearly independent and so {α1, α2} is a maximal linearly independent subsystem of the given system of vectors. We observe that none of the given four vectors is a scalar multiple of any of the remaining three vectors. So any two of the given four vectors form a maximal linearly independent subsystem of the given system of vectors.
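The rank computation above can be mirrored in code; a sketch using exact rational arithmetic, added as an illustration (the `rank` helper is mine, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

A = [(2, -2, -4), (1, 9, 3), (-2, -4, 1), (3, 7, -1)]
print(rank(A))      # 2: at most two of the four vectors are independent
print(rank(A[:2]))  # 2: alpha1 and alpha2 already achieve this maximum
```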
Exercises

1. Fill up the blanks in the following statements:
(i) Any set of vectors containing the zero vector is linearly ...... (Meerut 1976)
(ii) Intersection of two linearly independent subsets of a vector space will be linearly ...... (Meerut 1976)
(iii) The subset {(1, 0, 0), (0, 1, 0), (0, 0, 1)} of the vector space R³ is linearly ......
(iv) A system consisting of a single non-zero vector is always linearly ......
(v) In the vector space R³ the vectors (1, 0, 1), (3, 7, 0) and (-1, 0, -1) are linearly ......

2. State whether the following statements are true or false:
(i) If A and B are subsets of a vector space, then A ⊂ B ⇒ L(A) ⊂ L(B). (Meerut 1976)
(ii) A set containing a linearly independent set of vectors is itself linearly independent. (Meerut 1976)
(iii) Union of two linearly independent subsets of a vector space is linearly independent. (Meerut 1976)
(iv) The union of two subspaces of a vector space V is also a subspace of V.
(v) The intersection of two subspaces of a vector space V is also a subspace of V.
Ans. (i) False; (ii) false; (iii) false; (iv) false; (v) true.

3. Determine if the vectors (1, -2, 1), (2, 1, -1), (7, -4, 1) in R³ are linearly independent. (I.A.S. 1985)
Ans. Linearly dependent.

4. In the vector space R³ express the vector (1, -2, 5) as a linear combination of the vectors (1, 1, 1), (1, 2, 3) and (2, -1, 1). (Meerut 1976)
Ans. (1, -2, 5)=-6 (1, 1, 1)+3 (1, 2, 3)+2 (2, -1, 1).

5. In the vector space R⁴ determine whether or not the vector (3, 9, -4, 2) is a linear combination of the vectors (1, -2, 0, 3), (2, 3, 0, -1) and (2, -1, 2, 1). (Meerut 1976)
Ans. No.

6. Prove that the four vectors (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1) in V3(C) form a linearly dependent set but any three of them are linearly independent. (Meerut 1969)

7. If α, β and γ are vectors such that α+β+γ=0, then α and β span the same subspace as β and γ. (Meerut 1976, 80)
§ 14. Some Theorems on linear dependence and linear independence.

Theorem 1. Let V(F) be a vector space. If α1, α2, ..., αn are non-zero vectors ∈ V, then either they are linearly independent or some αk, 2 ≤ k ≤ n, is a linear combination of the preceding ones α1, α2, ..., αk-1. (Nagarjuna 1980, 91)

Proof. If α1, α2, ..., αn are linearly independent, we have nothing to prove. So let α1, α2, ..., αn be linearly dependent. Then there exists a relation of the form
a1α1+a2α2+...+anαn=0,   ...(1)
where not all the scalar coefficients a1, a2, ..., an are 0.

Let k be the largest integer for which ak≠0 i.e., ak+1=0, ak+2=0, ..., an=0 and ak≠0. There is no harm in this assumption because at the most k=n.

Also k≠1. Because if a2=0, a3=0, ..., an=0, then a1α1=0 and α1≠0 ⇒ a1=0. This contradicts the fact that not all the a's are 0.

Now the relation (1) reduces to
a1α1+a2α2+...+akαk=0, where ak≠0
or akαk = -a1α1-a2α2-...-ak-1αk-1
or ak⁻¹ (akαk) = ak⁻¹ (-a1α1-a2α2-...-ak-1αk-1)
or αk = (-ak⁻¹a1) α1+(-ak⁻¹a2) α2+...+(-ak⁻¹ak-1) αk-1.

Thus αk is a linear combination of its preceding vectors.
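The proof suggests a procedure: scan the vectors left to right and return the first one that lies in the span of its predecessors. A sketch of that procedure, added here as an illustration only (the helper names are mine):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def first_dependent(vectors):
    """0-based index k of the first vector that is a linear combination
    of its predecessors, or None if the whole list is independent."""
    for k in range(1, len(vectors)):
        # alpha_k depends on the predecessors iff adjoining it
        # does not raise the rank
        if rank(vectors[:k + 1]) == rank(vectors[:k]):
            return k
    return None

# The vectors of Ex. 24: the third is a combination of the first two
print(first_dependent([(1, 3, 2), (1, -7, -8), (2, 1, -1)]))  # 2
```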
Theorem 2. The set of non-zero vectors α1, α2, ..., αn of V(F) is linearly dependent if some αk, 2 ≤ k ≤ n, is a linear combination of the preceding ones. (Nagarjuna 1980)

Proof. If some αk, 2 ≤ k ≤ n, is a linear combination of the preceding ones α1, α2, ..., αk-1, then there exist scalars a1, a2, ..., ak-1 such that
αk = a1α1+...+ak-1αk-1
⇒ 1αk-a1α1-a2α2-...-ak-1αk-1=0
⇒ the set {α1, α2, ..., αk} is linearly dependent.
Hence the set {α1, α2, ..., αn}, of which {α1, α2, ..., αk} is a subset, must be linearly dependent.
Theorem 3. If in a vector space V(F) a vector β is a linear combination of the set of vectors α1, α2, α3, ..., αn, then the set of vectors β, α1, α2, ..., αn is linearly dependent.

Proof. Since β is a linear combination of α1, α2, ..., αn, therefore there exist scalars a1, a2, ..., an such that
β = a1α1+a2α2+...+anαn
⇒ 1β-a1α1-a2α2-...-anαn=0.   ...(1)

In the relation (1) the scalar coefficient of β is 1, which is ≠0. Hence in the relation (1) not all the scalar coefficients are 0. Therefore the set β, α1, ..., αn is linearly dependent.
Theorem 4. Let S be a linearly independent subset of a vector space V. Suppose β is a vector in V which is not in the subspace spanned by S. Then the set obtained by adjoining β to S is linearly independent. (Meerut 1977)

Proof. Suppose α1, α2, ..., αm are distinct vectors in S. Let
c1α1+c2α2+...+cmαm+bβ=0.   ...(1)

Then b must be zero, for otherwise
β = (-c1/b) α1+(-c2/b) α2+...+(-cm/b) αm
and consequently β is in the subspace spanned by S, which is a contradiction.

Putting b=0 in (1), we get
c1α1+c2α2+...+cmαm=0
⇒ c1=0, c2=0, ..., cm=0, because the set {α1, α2, ..., αm} is linearly independent, since it is a subset of a linearly independent set S.

Thus the relation (1) implies
c1=0, c2=0, ..., cm=0, b=0.

Therefore the set {α1, α2, ..., αm, β} is linearly independent. If S' is the set obtained by adjoining β to S, then we have proved that every finite subset of S' is linearly independent.

Hence S' is linearly independent.
Theorem 5. The set of non-zero vectors α1, α2, ..., αn of V(F) is linearly dependent iff one of these vectors is a linear combination of the remaining (n-1) vectors.

Proof. This theorem can be easily proved.
§ 15. Basis of a Vector Space. Definition.
(Meerut 1990; Nagarjuna 90; Allahabad 76; S.V.U. Tirupati 90)

A subset S of a vector space V(F) is said to be a basis of V(F), if
(i) S consists of linearly independent vectors,
(ii) S generates V(F) i.e., L(S)=V i.e., each vector in V is a linear combination of a finite number of elements of S.

Example 1. A system S consisting of the n vectors
e1=(1, 0, 0, ..., 0), e2=(0, 1, 0, ..., 0), ..., en=(0, 0, ..., 0, 1)
is a basis of Vn(F). (Meerut 1990)

Solution. First we should show that S is a linearly independent set of vectors. We have proved it in one of the previous examples.

Now we should prove that L(S)=Vn(F). We always have L(S) ⊆ Vn(F). So we should prove that Vn(F) ⊆ L(S) i.e., each vector in Vn(F) is a linear combination of elements of S.

Let α=(a1, a2, ..., an) be any vector in Vn(F). We can write
(a1, a2, ..., an)=a1 (1, 0, ..., 0)+a2 (0, 1, 0, ..., 0)+...+an (0, 0, ..., 0, 1)
i.e., α=a1e1+a2e2+...+anen.

Hence S is a basis of Vn(F). We shall call this particular basis the standard basis of Vn(F).

Note. The set {(1, 0), (0, 1)} is a basis of V2(F). The set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis of V3(F). As a particular case a basis of F(F) is the set consisting of only the unity element of F.
Example 2. Show that the infinite set
S={1, x, x², ..., xⁿ, ...}
is a basis of the vector space F[x] of polynomials over the field F.

Solution. First we should prove that S is a linearly independent set of vectors. For proof refer to some previous example.

Now we should show that S spans F[x] i.e., each polynomial in F[x] can be expressed as a linear combination of a finite number of elements of S.

Let f(x)=a0+a1x+a2x²+...+atxᵗ be a polynomial of degree t.
Then f(x)=(a0) 1+a1x+a2x²+...+atxᵗ.

Hence S is a basis of F[x].

Note. The vector space F[x] has no finite basis. If we take any finite set S of polynomials, we can find a polynomial of degree greater than that of each of them. Such a polynomial cannot at any cost be expressed as a linear combination of the elements of S.
§ 16. Finite Dimensional Vector Spaces. Definition. The vector space V(F) is said to be finite dimensional or finitely generated if there exists a finite subset S of V such that V=L(S).

The vector space Vn(F) of n-tuples is a finite dimensional vector space.

The vector space F[x] of all polynomials over a field F is not finite dimensional. There exists no finite subset S of F[x] which spans F[x]. A vector space which is not finitely generated may be referred to as an infinite dimensional space. Thus the vector space F[x] of all polynomials over a field F is infinite dimensional.

Existence of basis of a finite dimensional vector space.

Theorem. There exists a basis for each finite dimensional vector space. [Meerut 1987, 92; Allahabad 76]

Proof. Let V(F) be a finitely generated vector space. Let S={α1, α2, ..., αm} be a finite subset of V such that L(S)=V. We may suppose that no member of S is 0.

If S is linearly independent, then S itself is a basis of V.

If S is linearly dependent, then there exists a vector αi ∈ S which can be expressed as a linear combination of the preceding vectors α1, α2, ..., αi-1.

If we omit this vector αi from S, then the remaining set S' of m-1 vectors
α1, α2, ..., αi-1, αi+1, ..., αm
also generates V i.e., V=L(S'). For if α is any element of V, then L(S)=V implies that α can be written as a linear combination of α1, α2, ..., αm. Let
α=a1α1+a2α2+...+ai-1αi-1+aiαi+ai+1αi+1+...+amαm.
But αi can be expressed as a linear combination of α1, ..., αi-1. Let αi=b1α1+...+bi-1αi-1. Putting this value of αi in the expression for α, we get
α=a1α1+...+ai-1αi-1+ai (b1α1+...+bi-1αi-1)+ai+1αi+1+...+amαm.
Thus α has been expressed as a linear combination of the vectors α1, α2, ..., αi-1, αi+1, ..., αm. In this way α ∈ V ⇒ α can be expressed as a linear combination of the vectors belonging to the set S'. Thus S' generates V i.e., L(S')=V.

If S' is linearly independent, then S' will be a basis of V. If S' is linearly dependent, then proceeding as above we shall get a new set of m-2 vectors which generates V. Continuing this process, we shall, after a finite number of steps, obtain a linearly independent subset of S which generates V and which is therefore a basis of V.

At the most it may happen that we shall be left with a subset of S which contains only one non-zero vector and which spans V. We know that a set containing a single non-zero vector is definitely linearly independent and so it will form a basis of V.

Note. The above theorem may also be stated as below:
If a finite set S of vectors spans a finite dimensional vector space V(F), there exists a subset of S which forms a basis of V.
Invariance of the number of elements in the basis of a finite dimensional vector space.

Dimension theorem for vector spaces. If V(F) is a finite dimensional vector space, then any two bases of V have the same number of elements. (Nagarjuna 1991; Tirupati 90, 93; Meerut 81, 82, 87, 89, 93; Poona 72; Allahabad 79)

Proof. Suppose V(F) is a finite dimensional vector space. Then V definitely possesses a basis. Let
S1={α1, α2, ..., αn}
and S2={β1, β2, ..., βm}
be two bases of V. We shall prove that m=n.

Since V=L(S1) and β1 ∈ V, therefore β1 can be expressed as a linear combination of α1, α2, ..., αn. Consequently the set
S3={β1, α1, α2, ..., αn},
which also obviously generates V(F), is linearly dependent. Therefore there exists a member αi ≠ β1 of this set S3 such that αi is a linear combination of the preceding vectors β1, α1, ..., αi-1. If we omit the vector αi from S3, then V is also generated by the remaining set
S4={β1, α1, α2, ..., αi-1, αi+1, ..., αn}.

Since V=L(S4) and β2 ∈ V, therefore β2 can be expressed as a linear combination of the vectors belonging to S4. Consequently the set
S5={β2, β1, α1, α2, ..., αi-1, αi+1, ..., αn}
is linearly dependent. Therefore there exists a member αj of this set S5 such that αj is a linear combination of the preceding vectors. Obviously αj will be different from β1 and β2, since {β1, β2} is a linearly independent set. If we exclude the vector αj from S5, then the remaining set will generate V(F).

We may continue to proceed in this manner. Here each step consists in the exclusion of an α and the inclusion of a β in the set S1.

Obviously the set S1 of α's cannot be exhausted before the set S2 of β's, otherwise V(F) would be the linear span of a proper subset of S2 and thus S2 would become linearly dependent. Therefore we must have
m ≤ n.
Interchanging the roles of S1 and S2, we shall get that
n ≤ m.
Hence n=m.
Example. For the vector space V3, the sets
S1={(1, 0, 0), (0, 1, 0), (0, 0, 1)}
and S2={(1, 0, 0), (1, 1, 0), (1, 1, 1)}
are bases, as can easily be seen. Both these bases contain the same number of elements i.e., 3.

Dimension of a finitely generated vector space. Definition. The number of elements in any basis of a finite dimensional vector space V(F) is called the dimension of the vector space V(F) and will be denoted by dim V. (Marathwada 1971; S.V.U. Tirupati 93; Nagarjuna 80)

The vector space Vn(F) is of dimension n. The vector space V3(F) is of dimension 3. If a field F is regarded as a vector space over F, then F will be of dimension 1 and the set S={1} consisting of the unity element of F alone is a basis of F. In fact every non-zero element of F will form a basis of F.
§ 17. Some Properties of finite dimensional vector spaces.

Theorem 1. Extension theorem. Every linearly independent subset of a finitely generated vector space V(F) forms a part of a basis of V.
Or
Every linearly independent subset of a finitely generated vector space V(F) is either a basis of V or can be extended to form a basis of V. (Meerut 1980, 84; Nagarjuna 90, 91; Andhra 90; Allahabad 76)

Proof. Let S={α1, α2, ..., αm} be a linearly independent subset of a finite dimensional vector space V(F). If dim V=n, then V has a finite basis, say {β1, β2, ..., βn}. Consider the set
S1={α1, α2, ..., αm, β1, β2, ..., βn}.

Obviously L(S1)=V. Since the α's can be expressed as linear combinations of the β's, therefore the set S1 is linearly dependent. Therefore there is some vector of S1 which is a linear combination of its preceding vectors. This vector cannot be any of the α's, since the α's are linearly independent. Therefore this vector must be some β, say βi. Now omit the vector βi from S1 and consider the set
S2={α1, α2, ..., αm, β1, β2, ..., βi-1, βi+1, ..., βn}.

Obviously L(S2)=V. If S2 is linearly independent, then S2 will be a basis of V and it is the required extended set which is a basis of V. If S2 is not linearly independent, then, repeating the above process a finite number of times, we shall get a linearly independent set containing α1, α2, ..., αm and spanning V. This set will be a basis of V and it will contain S. Since each basis of V contains the same number of elements, therefore exactly n-m elements of the set of β's will be adjoined to S so as to form a basis of V.
Theorem 2. Each set of (n+1) or more vectors of a finite dimensional vector space V(F) of dimension n is linearly dependent. (Nagarjuna 1978; Andhra 92; Meerut 68)

Proof. Let V(F) be a finite dimensional vector space of dimension n. Let S be a linearly independent subset of V containing (n+1) or more vectors. Then S will form a part of a basis of V. Thus we shall get a basis of V containing more than n vectors. But every basis of V will contain exactly n vectors. Hence our assumption is wrong. Therefore if S contains (n+1) or more vectors, then S must be linearly dependent.

Theorem 3. Let V be a vector space which is spanned by a finite set of vectors β1, β2, ..., βm. Then any linearly independent set of vectors in V is finite and contains no more than m vectors.
(Meerut 1972, 73, 85; Allahabad 79; Nagarjuna 78)

Proof. Let S={β1, β2, ..., βm}. Since L(S)=V, therefore V has a finite basis and dim V ≤ m. Hence every subset S' of V which contains more than m vectors is linearly dependent. This proves the theorem.
Theorem 4. If V(F) is a finite dimensional vector space of dimension n, then any set of n linearly independent vectors in V forms a basis of V. (Andhra 1992)

Proof. Let S={α1, α2, ..., αn} be a linearly independent subset of a finite dimensional vector space V(F) of dimension n. If S is not a basis of V, then it can be extended to form a basis of V. Thus we shall get a basis of V containing more than n vectors. But every basis of V must contain exactly n vectors. Therefore our assumption is wrong and S must be a basis of V.

Theorem 5. If a set S of n vectors of a finite dimensional vector space V(F) of dimension n generates V(F), then S is a basis of V. (Andhra 1992)

Proof. Let V(F) be a finite dimensional vector space of dimension n. Let S={α1, α2, ..., αn} be a subset of V such that L(S)=V. If S is linearly independent, then S will form a basis of V. If S is not linearly independent, then there will exist a proper subset of S which will form a basis of V. Thus we shall get a basis of V containing less than n elements. But every basis of V must contain exactly n elements. Hence S cannot be linearly dependent and so S must be a basis of V.

Note. If V is a finite dimensional vector space of dimension n, then V cannot be generated by fewer than n vectors.
Theorem 6. Dimension of a subspace.
Each subspace W of a finite dimensional vector space V(F) of dimension n is a finite dimensional space with dim W ≤ n.
Also V=W iff dim V=dim W. (Nagarjuna 1980; Andhra 92; Kakatiya 91; Poona 72)

Proof. Let V(F) be a finite dimensional vector space of dimension n. Let W be a subspace of V. Any subset of W containing (n+1) or more vectors is also a subset of V, and any (n+1) vectors in V are linearly dependent. Therefore any linearly independent set of vectors in W can contain at the most n vectors. Let
S={α1, α2, ..., αm}
be a linearly independent subset of W with the maximum number of elements. We claim that S is a basis of W. The proof is as follows:

(i) S is a linearly independent subset of W.
(ii) L(S)=W. Let α be any element of W. Then the (m+1) vectors α1, α2, ..., αm, α belonging to W are linearly dependent, because we have supposed that the largest independent subset of W contains m vectors.

Now {α1, α2, ..., αm, α} is a linearly dependent set. Therefore there exists a vector belonging to it which can be expressed as a linear combination of the preceding vectors. Since α1, α2, ..., αm are linearly independent, therefore this vector cannot be any of these m vectors. So it must be α itself. Thus α can be expressed as a linear combination of α1, α2, ..., αm. Hence L(S)=W.
∴ S is a basis of W.
∴ dim W=m and m ≤ n.

Now if V=W, then every basis of V is also a basis of W. Hence dim V=dim W=n.

Conversely, let dim W=dim V=n. Then to prove that W=V.

Let S be a basis of W. Then L(S)=W and S contains n vectors. Since S is also a subset of V and S contains n linearly independent vectors, therefore S will also be a basis of V. Therefore L(S)=V. Hence W=V. We thus conclude:

If W is a proper subspace of a finite-dimensional vector space V, then W is finite dimensional and dim W < dim V. (Meerut 1988)
Theorem 7. If W is a subspace of a finite-dimensional vector space V, every linearly independent subset of W is finite and is part of a (finite) basis for W.

Proof. Let dim V=n. Let W be a subspace of V. Let S0 be a linearly independent subset of W. Let S be a linearly independent subset of W which contains S0 and which has the maximum number of elements. Then S is also a linearly independent subset of V, so it will have at the most n elements. Therefore S is finite and consequently S0 is finite.

Now our claim is that S is a basis of W. The proof is as follows:

(i) S is a linearly independent subset of W.
(ii) L(S)=W. Because if β ∈ W, then β must be in the linear span of S. If β is not in the linear span of S, then the set obtained by adjoining β to S will be linearly independent. Then S will not remain the maximum linearly independent subset of W containing S0. Hence β ∈ W ⇒ β ∈ L(S). Thus W ⊆ L(S).
Since S is a subset of the subspace W, therefore L(S) ⊆ W.
Hence L(S)=W.

Thus S is a finite basis of W and S0 ⊆ S.
Theorem 8. Let S={α1, α2, ..., αn} be a basis of a finite dimensional vector space V(F) of dimension n. Then every element α of V can be uniquely expressed as
α=a1α1+a2α2+...+anαn, ai ∈ F.

Proof. Since S is a basis of V, therefore L(S)=V. Therefore any vector α ∈ V can be expressed as
α=a1α1+a2α2+...+anαn.

To show uniqueness, let us suppose that also
α=b1α1+b2α2+...+bnαn.
Then we must show that a1=b1, a2=b2, ..., an=bn.

We have a1α1+a2α2+...+anαn=b1α1+b2α2+...+bnαn
⇒ (a1-b1) α1+(a2-b2) α2+...+(an-bn) αn=0
⇒ a1-b1=0, a2-b2=0, ..., an-bn=0, since α1, α2, ..., αn are linearly independent
⇒ a1=b1, a2=b2, ..., an=bn.
Hence the theorem.
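Concretely, the unique coefficients (coordinates) are found by solving a linear system. A small illustration in R², added here as a sketch (the basis is my own choice, not from the text): with the basis {(1, 1), (1, -1)}, the coordinates of (3, 1) come from a+b=3, a-b=1, solved by Cramer's rule.

```python
from fractions import Fraction

# Basis of R^2 (assumed for illustration): alpha1=(1,1), alpha2=(1,-1).
# Express alpha=(3,1) as a*alpha1 + b*alpha2, i.e. solve
#   a + b = 3
#   a - b = 1
# The coefficient determinant is (1)(-1) - (1)(1) = -2 != 0,
# so the coordinate vector exists and is unique.
det = 1 * (-1) - 1 * 1
a = Fraction(3 * (-1) - 1 * 1, det)   # numerator: rhs replaces column 1
b = Fraction(1 * 1 - 1 * 3, det)      # numerator: rhs replaces column 2
print(a, b)  # 2 1

# Check: 2*(1,1) + 1*(1,-1) = (3,1)
print((2 * 1 + 1 * 1, 2 * 1 + 1 * (-1)))  # (3, 1)
```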
Theorem 9. If W1, W2 are two subspaces of a finite dimensional vector space V(F), then
dim (W1+W2)=dim W1+dim W2-dim (W1 ∩ W2).
(Andhra 1992; Kakatiya 91; I.A.S. 85, 86, 88; Marathwada 71; Poona 70; Nagarjuna 74; Tirupati 90; Allahabad 77; Meerut 80, 83, 85, 87, 89, 92)

Proof. Let dim (W1 ∩ W2)=k and let the set
S={γ1, γ2, γ3, ..., γk}
be a basis of W1 ∩ W2. Then S ⊆ W1 and S ⊆ W2.

Since S is linearly independent and S ⊆ W1, therefore S can be extended to form a basis of W1. Let
{γ1, γ2, ..., γk, α1, α2, ..., αm}
be a basis of W1. Then dim W1=k+m. Similarly let
{γ1, γ2, ..., γk, β1, β2, ..., βt}
be a basis of W2. Then dim W2=k+t.

∴ dim W1+dim W2-dim (W1 ∩ W2)=(k+m)+(k+t)-k=k+m+t.

∴ to prove the theorem we must show that
dim (W1+W2)=k+m+t.

We claim that the set
S1={γ1, γ2, ..., γk, α1, α2, ..., αm, β1, β2, ..., βt}
is a basis of W1+W2.

First we show that S1 is linearly independent. Let
c1γ1+c2γ2+...+ckγk+a1α1+a2α2+...+amαm+b1β1+b2β2+...+btβt=0   ...(1)
⇒ b1β1+b2β2+...+btβt = -(c1γ1+...+ckγk+a1α1+...+amαm).   ...(2)

Now -(c1γ1+...+ckγk+a1α1+...+amαm) ∈ W1, since it is a linear combination of a basis of W1. Again
b1β1+b2β2+...+btβt ∈ W2,
since it is a linear combination of elements belonging to a basis of W2.

Also, by virtue of the equality (2), b1β1+...+btβt ∈ W1. Therefore b1β1+b2β2+...+btβt ∈ W1 ∩ W2. Therefore it can be expressed as a linear combination of the basis of W1 ∩ W2. Thus we have a relation of the form
b1β1+b2β2+...+btβt=d1γ1+...+dkγk
⇒ b1β1+b2β2+...+btβt-d1γ1-d2γ2-...-dkγk=0.

But β1, β2, ..., βt, γ1, ..., γk are linearly independent vectors. Therefore we must have b1=0, b2=0, ..., bt=0.

Putting these values of b's in (1), it reduces to
c1γ1+c2γ2+...+ckγk+a1α1+a2α2+...+amαm=0
⇒ c1=0, c2=0, ..., ck=0, a1=0, a2=0, ..., am=0,
since the vectors γ1, γ2, ..., γk, α1, ..., αm are linearly independent.

Thus the relation (1) implies that
c1=0, c2=0, ..., ck=0, a1=0, ..., am=0, b1=0, ..., bt=0.
Therefore the set S1 of vectors
γ1, ..., γk, α1, ..., αm, β1, ..., βt
is linearly independent.

Now to show that L(S1)=W1+W2.

Since W1+W2 is a subspace of V and each element of S1 belongs to W1+W2, therefore L(S1) ⊆ W1+W2.

Again let α be any element of W1+W2. Then
α=some element of W1+some element of W2
=a linear combination of elements of the basis of W1+a linear combination of elements of the basis of W2
=a linear combination of elements of S1.
∴ α ∈ L(S1). Hence W1+W2 ⊆ L(S1).
∴ L(S1)=W1+W2.

∴ S1 is a basis of W1+W2 and consequently
dim (W1+W2)=k+m+t.
Hence the theorem.
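A concrete check of the dimension formula (an added illustration; the subspaces and the `rank` routine are my own choices, not from the text): in R³ take W1 spanned by e1, e2 and W2 spanned by e2, e3. Then W1 ∩ W2 is spanned by e2, and dim (W1+W2) should be 2+2-1=3.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

W1 = [(1, 0, 0), (0, 1, 0)]   # basis of W1
W2 = [(0, 1, 0), (0, 0, 1)]   # basis of W2
dim_sum = rank(W1 + W2)       # W1 + W2 is spanned by the two bases together
dim_int = 1                   # W1 ∩ W2 = span{(0,1,0)}, known by inspection
print(dim_sum)                                    # 3
print(rank(W1) + rank(W2) - dim_int == dim_sum)   # True
```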
Solved Examples

Ex. 1. Let V be the vector space of all 2×2 matrices over the field F. Prove that V has dimension 4 by exhibiting a basis for V which has 4 elements. (Meerut 1979; I.A.S. 85; Nagarjuna 78)

Sol. Let
      [1 0]        [0 1]        [0 0]            [0 0]
  α = [0 0],   β = [0 0],   γ = [1 0]   and  δ = [0 1]
be four elements of V.

The subset S={α, β, γ, δ} of V is linearly independent because
aα+bβ+cγ+dδ=O (the zero matrix)
⇒ [a b]   [0 0]
  [c d] = [0 0]
⇒ a=0, b=0, c=0, d=0.

Also L(S)=V because if
      [a b]
  A = [c d]
is any vector in V, then we can write A=aα+bβ+cγ+dδ.

Therefore S is a basis of V. Since the number of elements in S is 4, therefore dim V=4.
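Identifying each 2×2 matrix with the 4-tuple of its entries, the independence of these four matrices becomes an ordinary rank computation; an added sketch (the `rank` helper is mine):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

# Each 2x2 matrix [p q; r s] flattened to (p, q, r, s)
basis = [
    (1, 0, 0, 0),   # alpha
    (0, 1, 0, 0),   # beta
    (0, 0, 1, 0),   # gamma
    (0, 0, 0, 1),   # delta
]
print(rank(basis))  # 4, so dim V = 4
```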
Ex. 2. Show that if {α, β, γ} is a basis of C³(C), then the set
S'={α+β, β+γ, γ+α}
is also a basis of C³(C).

Sol. The vector space C³(C) is of dimension 3. Any subset of C³ having three linearly independent vectors will form a basis of C³. We have shown in one of the previous examples that if S={α, β, γ} is a linearly independent subset of C³, then S' is also a linearly independent subset of C³. (Give the proof here.)
Therefore S' is also a basis of C³.

Ex. 3. If W1 and W2 are finite-dimensional subspaces with the same dimension, and if W1 ⊆ W2, then W1=W2.

Sol. Since W1 ⊆ W2, therefore W1 is also a subspace of W2. Now dim W1=dim W2. Therefore we must have W1=W2.

Ex. 4. Let V be the vector space of ordered pairs of complex numbers over the real field R i.e., let V be the vector space C²(R). Show that the set S={(1, 0), (i, 0), (0, 1), (0, i)} is a basis for V.

Sol. S is linearly independent. We have
a (1, 0)+b (i, 0)+c (0, 1)+d (0, i)=(0, 0), where a, b, c, d ∈ R
⇒ (a+ib, c+id)=(0, 0)
⇒ a+ib=0, c+id=0
⇒ a=0, b=0, c=0, d=0.
Therefore S is linearly independent.

Now we shall show that L(S)=V. Let (a+ib, c+id) be any vector in V, where a, b, c, d ∈ R. Then, as shown above, we can write
(a+ib, c+id)=a (1, 0)+b (i, 0)+c (0, 1)+d (0, i).
Thus any vector in V is expressible as a linear combination of elements of S. Therefore L(S)=V and so S is a basis for V.

Ex. 5. In the vector space R³ let α=(1, 2, 1), β=(3, 1, 5), γ=(3, -4, 7). Show that there exists more than one basis for the subspace spanned by the set S={α, β, γ}. (Marathwada 1971)

Sol. First show that the vector γ can be expressed as a linear combination of the vectors α and β. Therefore if T={α, β}, then L(T)=L(S). Now the set {α, β} is linearly independent, as can be easily shown. Therefore the set {α, β} is a basis for L(T)=L(S). Therefore dim L(S)=2. Now {α, γ} is a linearly independent subset of L(S) containing two vectors. Therefore {α, γ} is also a basis for L(S). Similarly {β, γ} is also a basis for L(S).
Ex. 6. Show that the vectors (1, 2, 1), (2, 1, 0), (1, -1, 2) form a basis of R³. (Nagarjuna 1980; Meerut 90)

Sol. We know that the set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} forms a basis for R³. Therefore dim R³=3. If we show that the set S={(1, 2, 1), (2, 1, 0), (1, -1, 2)} is linearly independent, then this set will also form a basis for R³. [See theorem 4 of § 17.]

We have
a1 (1, 2, 1)+a2 (2, 1, 0)+a3 (1, -1, 2)=(0, 0, 0)
⇒ (a1+2a2+a3, 2a1+a2-a3, a1+2a3)=(0, 0, 0).
∴ a1+2a2+a3=0,   ...(1)
2a1+a2-a3=0,   ...(2)
a1+2a3=0.   ...(3)

Now we shall solve these equations to get the values of a1, a2, a3. Multiplying the equation (2) by 2, we get
4a1+2a2-2a3=0.   ...(4)
Subtracting (4) from (1), we get
-3a1+3a3=0
or -a1+a3=0.   ...(5)
Adding (3) and (5), we get 3a3=0 or a3=0. Putting a3=0 in (3), we get a1=0. Now putting a3=0 and a1=0 in (1), we get a2=0.

Thus, solving the equations (1), (2) and (3), we get a1=0, a2=0, a3=0. Therefore the set S is linearly independent. Hence it forms a basis for R³.
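Since three vectors in R³ form a basis exactly when the determinant of the matrix having them as rows is non-zero, the conclusion can be double-checked; an added illustration:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

M = [[1, 2, 1], [2, 1, 0], [1, -1, 2]]
print(det3(M))  # -9, non-zero, so the rows form a basis of R^3
```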
Ex. 7. Determine whether or not the following vectors form a basis of R³:
(1, 1, 2), (1, 2, 5), (5, 3, 4). (Meerut 1980)

Sol. We know that dim R³=3. If the given set of vectors is linearly independent, it will form a basis of R³, otherwise not.

We have
a1 (1, 1, 2)+a2 (1, 2, 5)+a3 (5, 3, 4)=(0, 0, 0)
⇒ (a1+a2+5a3, a1+2a2+3a3, 2a1+5a2+4a3)=(0, 0, 0).
∴ a1+a2+5a3=0,   ...(1)
a1+2a2+3a3=0,   ...(2)
2a1+5a2+4a3=0.   ...(3)

Now we shall solve these equations to get the values of a1, a2, a3. Subtracting (2) from (1), we get
-a2+2a3=0.   ...(4)
Multiplying (1) by 2, we get
2a1+2a2+10a3=0.   ...(5)
Subtracting (5) from (3), we get
3a2-6a3=0
or a2-2a3=0.   ...(6)

We see that the equations (4) and (6) are the same and give a2=2a3. Putting a2=2a3 in (1), we get a1=-7a3. If we put a3=1, we get a2=2 and a1=-7. Thus a1=-7, a2=2, a3=1 is a non-zero solution of the equations (1), (2) and (3). Hence the given set is linearly dependent and so it does not form a basis of R³.
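The dependence just found can be confirmed by substituting the non-zero solution back into the vectors; an added check:

```python
v1, v2, v3 = (1, 1, 2), (1, 2, 5), (5, 3, 4)

# a1=-7, a2=2, a3=1 should combine the vectors to the zero vector
combo = tuple(-7 * a + 2 * b + 1 * c for a, b, c in zip(v1, v2, v3))
print(combo)  # (0, 0, 0): a non-zero solution exists, so no basis
```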
Ex. 8. For the 3-dimensional space R³ over the field of real numbers, determine if the set {(2, −1, 0), (3, 5, 1), (1, 1, 2)} is a basis.
Sol. We have dim R³ = 3. If the given set containing three vectors is linearly independent, it will form a basis of R³, otherwise not.
Let a, b, c ∈ R be such that
a (2, −1, 0) + b (3, 5, 1) + c (1, 1, 2) = (0, 0, 0)
⇒ (2a + 3b + c, −a + 5b + c, 0a + b + 2c) = (0, 0, 0).
∴ 2a + 3b + c = 0, ...(1)
−a + 5b + c = 0, ...(2)
and b + 2c = 0. ...(3)
Now we shall solve these equations to get the values of a, b, c.
Multiplying (2) by 2 and adding to (1), we get
13b + 3c = 0. ...(4)
Multiplying (3) by 13 and then subtracting (4) from it, we get
23c = 0 or c = 0.
Putting c = 0 in (3), we get b = 0.
Putting b = 0, c = 0 in (1), we get a = 0.
Thus the only solution of the equations (1), (2) and (3) is
a = 0, b = 0, c = 0. Therefore the three given vectors are linearly independent and so they form a basis of R³.
Ex. 9. Show that the vectors α1 = (1, 0, −1), α2 = (1, 2, 1), α3 = (0, −3, 2) form a basis for R³. Express each of the standard basis vectors as a linear combination of α1, α2, α3.
(Meerut 1981, 84P, 93P)
Sol. Let S = {α1, α2, α3}.
First we shall show that the set S is linearly independent. Let a, b, c be scalars i.e., real numbers such that
aα1 + bα2 + cα3 = 0
i.e., a (1, 0, −1) + b (1, 2, 1) + c (0, −3, 2) = (0, 0, 0)
i.e., (a + b, 2b − 3c, −a + b + 2c) = (0, 0, 0)
i.e., a + b = 0 ...(1)
2b − 3c = 0 ...(2)
−a + b + 2c = 0. ...(3)
Adding both sides of the equations (1) and (3), we get
2b + 2c = 0. ...(4)
Subtracting (2) from (4), we get 5c = 0 or c = 0.
Putting c = 0 in (2), we get b = 0 and then putting b = 0 in (1), we get a = 0.
Thus a = 0, b = 0, c = 0 is the only solution of the equations (1), (2) and (3) and so
aα1 + bα2 + cα3 = 0 ⇒ a = 0, b = 0, c = 0.
∴ The vectors α1, α2, α3 are linearly independent.
Now we shall show that the vectors α1, α2, α3 also generate R³.
Let γ = (p, q, r) be any vector in R³ and let
γ = (p, q, r) = xα1 + yα2 + zα3, x, y, z ∈ R
= x (1, 0, −1) + y (1, 2, 1) + z (0, −3, 2). ...(5)
Then x + y = p ...(6)
2y − 3z = q ...(7)
−x + y + 2z = r. ...(8)
Adding both sides of equations (6) and (8), we get
2y + 2z = p + r. ...(9)
Subtracting (7) from (9), we get
5z = p + r − q or z = (p + r − q)/5.
Then from (9), we get
y = (p + r)/2 − z = (3p + 2q + 3r)/10,
and from (6), we get
x = p − y = (7p − 2q − 3r)/10.
Thus every vector γ = (p, q, r) in R³ can be expressed as
γ = xα1 + yα2 + zα3, where x, y, z ∈ R are as found above.
∴ The set S generates R³.
Since S is linearly independent and it also generates R³, therefore it is a basis of R³.
The relation (5) expresses the vector γ = (p, q, r) as a linear combination of α1, α2 and α3.
The standard basis vectors are
e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1).
If γ = e1, then p = 1, q = 0, r = 0 and so
x = 7/10, y = 3/10 and z = 1/5.
∴ e1 = (7/10) α1 + (3/10) α2 + (1/5) α3.
If γ = e2, then p = 0, q = 1, r = 0 and so
x = −1/5, y = 1/5 and z = −1/5.
∴ e2 = −(1/5) α1 + (1/5) α2 − (1/5) α3.
Finally if γ = e3, then p = 0, q = 0, r = 1 and so
x = −3/10, y = 3/10 and z = 1/5.
∴ e3 = −(3/10) α1 + (3/10) α2 + (1/5) α3.
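The coordinates found in Ex. 9 can be recovered in one step by solving the linear system M(x, y, z)ᵀ = eₖ, where the columns of M are α1, α2, α3. A sketch using NumPy (the library is an assumption of this illustration, not part of the text):

```python
import numpy as np

# Columns of M are alpha1 = (1,0,-1), alpha2 = (1,2,1), alpha3 = (0,-3,2).
M = np.array([[ 1, 1,  0],
              [ 0, 2, -3],
              [-1, 1,  2]], dtype=float)

# Solving M @ (x, y, z) = e_k gives the coordinates of each standard
# basis vector e_k relative to {alpha1, alpha2, alpha3}.
coords = np.linalg.solve(M, np.eye(3))
print(coords[:, 0])  # e1: (7/10, 3/10, 1/5)
print(coords[:, 1])  # e2: (-1/5, 1/5, -1/5)
print(coords[:, 2])  # e3: (-3/10, 3/10, 1/5)
```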
Ex. 10. Show that the set {(1, i, 0), (2i, 1, 1), (0, 1+i, 1−i)} is a basis for V3(C). (Meerut 1981)
Sol. We know that dim V3(C) = 3. If the given set containing three vectors is linearly independent, it will form a basis of V3(C), otherwise not.
Let a, b, c ∈ C be such that
a (1, i, 0) + b (2i, 1, 1) + c (0, 1+i, 1−i) = (0, 0, 0)
⇒ (a + 2ib + 0c, ai + b + c(1+i), 0a + b + c(1−i)) = (0, 0, 0).
∴ a + 2ib = 0, ...(1)
ai + b + c(1+i) = 0, ...(2)
and b + c(1−i) = 0. ...(3)
Now we shall solve these equations to get the values of a, b, c.
Multiplying (1) by −i and adding to (2), we get
3b + c(1+i) = 0. ...(4)
Multiplying (3) by 3 and subtracting from (4), we get
c(1+i) − 3c(1−i) = 0
or c(1+i−3+3i) = 0 or c(−2+4i) = 0 or c = 0.
Putting c = 0 in (3), we get b = 0.
Putting b = 0 in (1), we get a = 0.
Thus the only solution of the equations (1), (2) and (3) is
a = 0, b = 0, c = 0. Therefore the three given vectors are linearly independent and so they form a basis of V3(C).
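The independence of these complex vectors can also be confirmed numerically: the matrix with the given vectors as rows must have full rank over C. A sketch (NumPy assumed):

```python
import numpy as np

# Rows are the vectors of Ex. 10, regarded as elements of C^3.
A = np.array([[1, 1j, 0],
              [2j, 1, 1],
              [0, 1 + 1j, 1 - 1j]])
print(np.linalg.matrix_rank(A))  # 3: the set is linearly independent
```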
Ex. 11. Show that the system X consisting of the vectors α1 = (1, 0, 0, 0), α2 = (0, 1, 0, 0), α3 = (0, 0, 1, 0) and α4 = (0, 0, 0, 1) is a basis set of R⁴(R).
Sol. First we show that the set X is a linearly independent set of vectors.
If a1, a2, a3, a4 be any scalars (elements of the field R), then
a1α1 + a2α2 + a3α3 + a4α4 = 0
⇒ a1 (1, 0, 0, 0) + a2 (0, 1, 0, 0) + a3 (0, 0, 1, 0) + a4 (0, 0, 0, 1) = (0, 0, 0, 0)
⇒ (a1, a2, a3, a4) = (0, 0, 0, 0)
⇒ a1 = 0, a2 = 0, a3 = 0, a4 = 0.
Therefore the given set X of four vectors is linearly independent.
Now we shall show that X generates R⁴ i.e., each vector of R⁴ can be expressed as a linear combination of the vectors of X.
Let (a, b, c, d) be any vector in R⁴. We can write
(a, b, c, d) = a (1, 0, 0, 0) + b (0, 1, 0, 0) + c (0, 0, 1, 0) + d (0, 0, 0, 1)
= aα1 + bα2 + cα3 + dα4.
Thus (a, b, c, d) has been expressed as a linear combination of the vectors of X and so X generates R⁴.
Since X is a linearly independent subset of R⁴ and it also generates R⁴, therefore it is a basis of R⁴.
Ex. 12. Show that the set S = {1, x, x², ..., xⁿ} of polynomials in x is a basis of the vector space Pn(R) of all polynomials in x (of degree at most n) over the field of real numbers.
(I.A.S. 1977; Meerut 74)
Sol. Pn(R) is the vector space of all polynomials in x (of degree at most n) over the field R of real numbers.
S = {1, x, x², ..., xⁿ} is a subset of Pn(R) consisting of n+1 polynomials. To prove that S is a basis of the vector space Pn(R).
First we show that the vectors in the set S are linearly independent over the field R.
The zero vector of the vector space Pn(R) is the zero polynomial. Let a0, a1, ..., an ∈ R be such that
a0 (1) + a1 x + a2 x² + ... + an xⁿ = zero polynomial.
Now by the definition of the equality of two polynomials, we have
a0 + a1 x + a2 x² + ... + an xⁿ = 0
⇒ a0 = 0, a1 = 0, a2 = 0, ..., an = 0.
∴ The vectors 1, x, x², ..., xⁿ of the vector space Pn(R) are linearly independent.
Now we shall show that the set S generates the whole vector space Pn(R).
Let α = a0 + a1 x + a2 x² + ... + an xⁿ be any arbitrary member of Pn(R), where a0, a1, ..., an ∈ R. Then α is a linear combination of the polynomials 1, x, x², ..., xⁿ over the field R. Therefore S generates Pn(R).
Since S is a linearly independent subset of Pn(R) and it also generates Pn(R), therefore it is a basis of Pn(R).
Ex. 13. Select a basis, if any, of R3(R) from the set {α1, α2, α3, α4}, where α1 = (1, −3, 2), α2 = (2, 4, 1), α3 = (3, 1, 3), α4 = (1, 1, 1).
Sol. Let S = {α1, α2, α3, α4}.
If any three vectors in S are linearly independent, then they will form a basis of the vector space R3(R).
First consider the set S1 = {α1, α2, α3}. Let us see whether the vectors in the set S1 are linearly independent or not.
The determinant of order 3 whose columns consist of the coordinates of the vectors α1, α2, α3 is
  |  1   2   3 |
  | −3   4   1 |
  |  2   1   3 |

  |  1   0   0 |
= | −3  10  10 | ,  by C2 − 2C1 and C3 − 3C1
  |  2  −3  −3 |

= 1·(10·(−3) − 10·(−3)) = −30 + 30 = 0.
∴ The vectors α1, α2, α3 are linearly dependent and so they do not form a basis of R3(R).
Now consider the set S2 = {α1, α2, α4}.
The determinant of order 3 whose columns consist of the coordinates of the vectors α1, α2, α4 is
  |  1   2   1 |
  | −3   4   1 |
  |  2   1   1 |
  |  1   0   0 |
= | −3  10   4 | ,  by C2 − 2C1 and C3 − C1
  |  2  −3  −1 |

= −10 + 12 = 2, i.e., ≠ 0.
∴ The vectors α1, α2, α4 are linearly independent.
Since S2 = {α1, α2, α4} is a linearly independent subset of R³ containing three vectors, therefore it is a basis of R³.
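The search in Ex. 13 can be automated: test every 3-element subset by computing the determinant of the matrix whose columns are its vectors. A sketch (NumPy assumed):

```python
from itertools import combinations
import numpy as np

vectors = [(1, -3, 2), (2, 4, 1), (3, 1, 3), (1, 1, 1)]

# A triple is a basis of R^3 iff its determinant is non-zero.
for triple in combinations(range(4), 3):
    cols = np.array([vectors[i] for i in triple], dtype=float).T
    print(triple, round(np.linalg.det(cols), 6))
# (0, 1, 3), i.e. {alpha1, alpha2, alpha4}, has determinant 2: a basis.
```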
Ex. 14. Show that a finite subset W of a vector space V(F) is linearly dependent if and only if some element of W can be expressed as a linear combination of the others.
Sol. Let W = {α1, α2, ..., αn} be a finite subset of a vector space V(F).
First suppose that W is linearly dependent. Then to show that some vector of W can be expressed as a linear combination of the others.
Since the vectors α1, α2, ..., αn are linearly dependent, therefore there exist scalars a1, a2, ..., an not all zero such that
a1α1 + a2α2 + ... + anαn = 0. ...(1)
Suppose ar ≠ 0, 1 ≤ r ≤ n.
Then from (1), we have
ar αr = −a1α1 − a2α2 − ... − ar−1 αr−1 − ar+1 αr+1 − ... − an αn
or αr = (−ar⁻¹ a1) α1 + (−ar⁻¹ a2) α2 + ... + (−ar⁻¹ ar−1) αr−1 + (−ar⁻¹ ar+1) αr+1 + ... + (−ar⁻¹ an) αn.
Thus αr is a linear combination of the other vectors of the set W.
Conversely suppose that some vector of W is a linear combination of the others. Then to prove that W is linearly dependent.
Without loss of generality suppose α1 is a linear combination of the vectors α2, ..., αn.
Let α1 = b2α2 + b3α3 + ... + bnαn, where b2, b3, ..., bn ∈ F.
Then α1 − b2α2 − b3α3 − ... − bnαn = 0. ...(2)
Since in the linear relation (2) among the vectors α1, α2, ..., αn, the scalar coefficient of α1 is 1, which is ≠ 0, therefore the vectors α1, α2, ..., αn are linearly dependent.
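Ex. 14 in miniature: the dependent set of Ex. 7 satisfies −7α1 + 2α2 + α3 = 0, so α3 (whose coefficient is non-zero) can be written in terms of the other two. A quick check (NumPy assumed):

```python
import numpy as np

# From the relation -7*a1 + 2*a2 + 1*a3 = 0 of Ex. 7, the vector a3 can
# be expressed in terms of the others, as Ex. 14 guarantees.
a1, a2, a3 = map(np.array, [(1., 1., 2.), (1., 2., 5.), (5., 3., 4.)])
print(np.allclose(7 * a1 - 2 * a2, a3))  # True: a3 = 7*a1 - 2*a2
```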
Ex. 15. Prove that any finite set S of vectors, not all the zero vectors, contains a linearly independent subset T which spans the same space as S.
Sol. Let S be a finite set of vectors belonging to a vector space V(F). Assume that no member of S is the zero vector, for if any member of S is the zero vector, we can omit it from S without affecting the subspace spanned by S.
Let S = {α1, α2, ..., αm}.
If S is linearly independent, then S itself is the required linearly independent subset T of S which spans the same subspace of V as S.
If S is linearly dependent, then there exists a vector αk ∈ S which can be expressed as a linear combination of the preceding vectors α1, ..., αk−1.
If we omit this vector αk from S, then the remaining subset S′ of S containing m − 1 vectors spans the same subspace of V as S.
If S′ is linearly independent, then S′ will be the required linearly independent subset of S which spans the same subspace of V as S. If S′ is linearly dependent, then proceeding as above we shall get a new subset of S containing m − 2 vectors which spans the same space as S. Continuing this process we shall, after a finite number of steps, obtain a linearly independent subset of S which spans the same space as S.
At the most it may happen that we shall be left with a subset of S which contains only one non-zero vector and which spans the same space as S. We know that a set containing a single non-zero vector is definitely linearly independent.
Hence any finite set S of vectors, not all the zero vectors, definitely contains a linearly independent subset T which spans the same space as S.
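The construction in Ex. 15 can be sketched as an algorithm: scan the vectors in order and keep one only if it is not a linear combination of those already kept, detected here by a rank test (NumPy is an assumption of this sketch):

```python
import numpy as np

def independent_spanning_subset(vectors):
    """Keep each vector only if it raises the rank of the kept set,
    i.e. if it is not a linear combination of the vectors kept so far."""
    kept = []
    for v in vectors:
        trial = np.array(kept + [v], dtype=float)
        if np.linalg.matrix_rank(trial) == len(trial):
            kept.append(v)
    return kept

S = [(1, -3, 2), (2, 4, 1), (3, 1, 3), (1, 1, 1)]
print(independent_spanning_subset(S))
# Drops (3,1,3) = (1,-3,2) + (2,4,1); the remaining three span the same space.
```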
Ex. 16. Let V be a vector space. Let W be a subspace of V generated by the vectors α1, ..., αs. Prove that W is spanned by a linearly independent subset of α1, ..., αs.
Sol. W is a subspace of V generated by a finite set S = {α1, ..., αs}.
To show that there exists a linearly independent subset T of S which also spans W.
Now for proof proceed as in solved example 15 above.
Ex. 17. If W is a subspace of a finite dimensional vector space V, prove that any basis of W can be extended to form a basis of V.
Sol. Let V(F) be a finite dimensional vector space of dimension n. Let {β1, β2, ..., βn} be a basis of V.
Let W be a subspace of V. Then W itself is finite dimensional and dim W ≤ n. Let dim W = m and let S = {α1, α2, ..., αm} be a basis of W.
Then S is a linearly independent subset of V. To show that S can be extended to form a basis of V, proceed as in theorem 1 of § 17.
Ex. 18. If n vectors span a vector space V containing r linearly independent vectors, then show that n ≥ r.
Sol. Suppose a subset S of V containing n vectors spans V. Then there exists a linearly independent subset T of S which also spans V. This subset T of S will form a basis of V. Suppose the number of vectors in T is m. Then dim V = m ≤ n.
Since dim V = m, therefore any subset of V containing more than m vectors will be linearly dependent.
Hence if S1 is a linearly independent subset of V and S1 contains r vectors, we must have
r ≤ m
⇒ r ≤ n. [∵ m ≤ n]
Exercises
1. Tell with reason whether or not the vectors (2, 1, 0), (1, 1, 0), and (4, 2, 0) form a basis of R³. (Meerut 1976)
2. State whether the following statements are true or false:
(i) If a subset of an n-dimensional vector space V consists of n non-zero vectors, then it will be a basis of V. (Meerut 1976)
(ii) If A and B are subspaces of a vector space, then dim A ≤ dim B ⇒ A ⊂ B. (Meerut 1976)
(iii) If A and B are subspaces of a vector space, then A ≠ B ⇒ dim A ≠ dim B. (Meerut 1976)
(iv) If M and N are finite dimensional subspaces with the same dimension, and if M ⊆ N, then M = N.
(v) In an n-dimensional space any subset consisting of n linearly independent vectors will form a basis.
Ans. (i) false; (ii) false; (iii) false; (iv) true; (v) true.
3. (i) Show that the vectors (2, 1, 4), (1, −1, 2), (3, 1, −2) form a basis for R³.
(ii) Show that the vectors (0, 1, 1), (1, 0, 1) and (1, 1, 0) form a basis of R³. (S.V.U. Tirupati 1993)
(iii) Determine whether or not the following vectors form a basis of R³:
(1, 1, 2), (1, 2, 5), (5, 3, 4). (Meerut 1980)
Ans. Do not form a basis of R³.
4. Show that the vectors β1 = (1, 1, 0) and β2 = (1, i, 1+i) are in the subspace W of C³ spanned by (1, 0, i) and (1+i, 1, −1), and that β1 and β2 form a basis of W. (Meerut 1974)
5. Find three vectors in R³ which are linearly dependent, and are such that any two of them are linearly independent.
6. Prove that the space of all m × n matrices over the field F has dimension mn, by exhibiting a basis for this space.
7. If a vector space V is spanned by a finite set of m vectors, then show that any linearly independent set of vectors in V has at most m elements. (Meerut 1973)
8. In a vector space V over the field F, let B = {α1, α2, ..., αn} span V. Prove that the following two statements are equivalent:
(i) B is linearly independent.
(ii) If α ∈ V, then the expression α = Σ aiαi with ai ∈ F is unique. (Meerut 1979)
9. If {α1, α2, α3} is a basis of V3(R), show that {α1+α2, α2+α3, α3+α1} is also a basis of V3(R). (Nagarjuna 1991)
§ 18. Homomorphism of vector spaces or Linear transformation.
Definition. Let U(F) and V(F) be two vector spaces. Then a mapping
f : U → V
is called a homomorphism or a linear transformation of U into V if
(i) f(α + β) = f(α) + f(β) ∀ α, β ∈ U
and (ii) f(aα) = a f(α) ∀ a ∈ F, ∀ α ∈ U.
(Nagarjuna 1991; Poona 72)
The conditions (i) and (ii) can be combined into a single condition
f(aα + bβ) = a f(α) + b f(β) ∀ a, b ∈ F and ∀ α, β ∈ U.
If f is a homomorphism of U onto V, then V is called a homomorphic image of U.
Theorem. If f is a homomorphism of U(F) into V(F), then
(i) f(0) = 0′, where 0 and 0′ are the zero vectors of U and V respectively;
(ii) f(−α) = −f(α) ∀ α ∈ U.
Proof. (i) Let α ∈ U. Then f(α) ∈ V. Since 0′ is the zero vector of V, therefore
f(α) + 0′ = f(α) = f(α + 0) = f(α) + f(0).
Now V is an abelian group with respect to addition of vectors.
∴ f(α) + 0′ = f(α) + f(0)
⇒ 0′ = f(0), by left cancellation law.
(ii) If α ∈ U, then −α ∈ U. Also we have
0′ = f(0) = f[α + (−α)] = f(α) + f(−α).
Now f(α) + f(−α) = 0′ ⇒ f(−α) = additive inverse of f(α)
⇒ f(−α) = −f(α).
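Both parts of the theorem can be checked concretely for a particular linear transformation, say the projection f(a1, a2, a3) = (a1, a2) of Example 1 in the solved examples below. A minimal sketch:

```python
def f(v):
    # the projection f(a1, a2, a3) = (a1, a2): a linear transformation
    return (v[0], v[1])

zero_U, zero_V = (0, 0, 0), (0, 0)
negate = lambda v: tuple(-x for x in v)

alpha = (3, -1, 4)
print(f(zero_U) == zero_V)                   # True: f(0) = 0'
print(f(negate(alpha)) == negate(f(alpha)))  # True: f(-a) = -f(a)
```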

§ 19. Isomorphism of Vector Spaces.
Definition. Let U(F) and V(F) be two vector spaces. Then a mapping f : U → V
is called an isomorphism of U onto V if
(i) f is one-one,
(ii) f is onto,
(iii) f(aα + bβ) = a f(α) + b f(β) ∀ a, b ∈ F, ∀ α, β ∈ U.
Also then the two vector spaces U and V are said to be isomorphic and symbolically we write U(F) ≅ V(F). (Kanpur 1980)
The vector space V(F) is also called the isomorphic image of the vector space U(F).
If f is a homomorphism of U(F) into V(F), then f will become an isomorphism of U into V if f is one-one. Also in addition if f is onto V, then f will become an isomorphism of U onto V.
Isomorphism of finite dimensional vector spaces.
Theorem 1. Two finite dimensional vector spaces over the same field are isomorphic if and only if they are of the same dimension.
(Meerut 1988, 91, 92, 93P; S.V.U. Tirupati 90)
Proof. First suppose that U(F) and V(F) are two finite dimensional vector spaces each of dimension n. Then to prove that U(F) ≅ V(F).
Let the sets of vectors
{α1, α2, ..., αn}
and {β1, β2, ..., βn}
be the bases of U and V respectively.
Any vector α ∈ U can be uniquely expressed as
α = a1α1 + a2α2 + ... + anαn.
Let f : U → V be defined by
f(α) = a1β1 + a2β2 + ... + anβn.
Since in the expression of α as a linear combination of α1, α2, ..., αn the scalars a1, a2, ..., an are unique, therefore the mapping f is well defined i.e., f(α) is a unique element of V.
f is one-one. We have
f(a1α1 + a2α2 + ... + anαn) = f(b1α1 + b2α2 + ... + bnαn)
⇒ a1β1 + a2β2 + ... + anβn = b1β1 + b2β2 + ... + bnβn
⇒ (a1 − b1) β1 + (a2 − b2) β2 + ... + (an − bn) βn = 0′ (zero vector of V)
⇒ a1 − b1 = 0, a2 − b2 = 0, ..., an − bn = 0 because β1, β2, ..., βn are linearly independent
⇒ a1 = b1, a2 = b2, ..., an = bn
⇒ a1α1 + a2α2 + ... + anαn = b1α1 + b2α2 + ... + bnαn.
∴ f is one-one.
f is onto V. If a1β1 + a2β2 + ... + anβn is any element of V, then there exists an element a1α1 + a2α2 + ... + anαn ∈ U such that
f(a1α1 + a2α2 + ... + anαn) = a1β1 + a2β2 + ... + anβn.
∴ f is onto V.
f is a linear transformation. We have
f[a (a1α1 + a2α2 + ... + anαn) + b (b1α1 + b2α2 + ... + bnαn)]
= f[(aa1 + bb1) α1 + (aa2 + bb2) α2 + ... + (aan + bbn) αn]
= (aa1 + bb1) β1 + (aa2 + bb2) β2 + ... + (aan + bbn) βn
= a (a1β1 + a2β2 + ... + anβn) + b (b1β1 + b2β2 + ... + bnβn)
= a f(a1α1 + a2α2 + ... + anαn) + b f(b1α1 + b2α2 + ... + bnαn).
∴ f is a linear transformation.
Hence f is an isomorphism of U onto V.
Conversely, let U(F) and V(F) be two isomorphic finite dimensional vector spaces.
Then to prove that dim U = dim V.
Let dim U = n. Let S = {α1, α2, ..., αn}
be a basis of U. If f is an isomorphism of U onto V, we shall show that
S′ = {f(α1), f(α2), ..., f(αn)}
is a basis of V. Then V will also be of dimension n.
First we shall show that S′ is linearly independent.
Let a1 f(α1) + a2 f(α2) + ... + an f(αn) = 0′ (zero vector of V)
⇒ f(a1α1 + a2α2 + ... + anαn) = 0′
[∵ f is a linear transformation]
⇒ a1α1 + a2α2 + ... + anαn = 0
[∵ f is one-one and f(0) = 0′, where 0 is the zero vector of U]
⇒ a1 = 0, a2 = 0, ..., an = 0 since α1, α2, ..., αn are linearly independent.
∴ S′ is linearly independent.
Now to prove that L(S′) = V. For this we shall prove that any vector β ∈ V can be expressed as a linear combination of the vectors of the set S′. Since f is onto V, therefore β ∈ V ⇒ there exists α ∈ U such that f(α) = β.
Let α = c1α1 + c2α2 + ... + cnαn.
Then β = f(α) = f(c1α1 + c2α2 + ... + cnαn)
= c1 f(α1) + c2 f(α2) + ... + cn f(αn).
Thus β is a linear combination of the vectors of S′.
Hence V = L(S′).
∴ S′ is a basis of V. Since S′ contains n vectors, therefore dim V = n.
Note. While proving the converse, we have proved that if f is an isomorphism of U onto V, then f maps a basis of U onto a basis of V.
Theorem 2. Every n-dimensional vector space V(F) is isomorphic to Vn(F).
(Nagarjuna 1980; Meerut 89, 93; Poona 72; Kanpur 80)
Proof. Let {α1, α2, ..., αn} be any basis of V(F). Then every vector α ∈ V can be uniquely expressed as
α = a1α1 + a2α2 + ... + anαn, ai ∈ F.
The ordered n-tuple (a1, a2, ..., an) ∈ Vn(F).
Let f : V(F) → Vn(F) be defined by f(α) = (a1, a2, ..., an).
Since in the expression of α as a linear combination of α1, α2, ..., αn the scalars a1, a2, ..., an are unique, therefore f(α) is a unique element of Vn(F) and thus the mapping f is well defined.
f is one-one. Let α = a1α1 + a2α2 + ... + anαn and
β = b1α1 + b2α2 + ... + bnαn
be any two elements of V. We have
f(α) = f(β)
⇒ f(a1α1 + a2α2 + ... + anαn) = f(b1α1 + b2α2 + ... + bnαn)
⇒ (a1, a2, ..., an) = (b1, b2, ..., bn)
⇒ a1 = b1, a2 = b2, ..., an = bn
⇒ α = β.
∴ f is one-one.
f is onto Vn(F). Let (a1, a2, ..., an) be any element of Vn(F). Then there exists an element a1α1 + a2α2 + ... + anαn ∈ V(F) such that
f(a1α1 + a2α2 + ... + anαn) = (a1, a2, ..., an).
∴ f is onto Vn(F).
f is a linear transformation. If a, b ∈ F and α, β ∈ V(F), we have
f(aα + bβ)
= f[a (a1α1 + a2α2 + ... + anαn) + b (b1α1 + b2α2 + ... + bnαn)]
= f[(aa1 + bb1) α1 + (aa2 + bb2) α2 + ... + (aan + bbn) αn]
= (aa1 + bb1, aa2 + bb2, ..., aan + bbn)
= (aa1, aa2, ..., aan) + (bb1, bb2, ..., bbn)
= a (a1, a2, ..., an) + b (b1, b2, ..., bn)
= a f(a1α1 + a2α2 + ... + anαn) + b f(b1α1 + b2α2 + ... + bnαn)
= a f(α) + b f(β).
∴ f is a linear transformation.
∴ f is an isomorphism of V(F) onto Vn(F).
Hence V(F) ≅ Vn(F).
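Theorem 2 is what justifies working with coordinates. With the basis {1, x, x²} of P2(R) (Ex. 12), the map sending a polynomial to its coefficient triple is exactly the isomorphism f of the proof. The sketch below stores polynomials as coefficient lists (a representation assumed for this illustration) and verifies the linearity condition on a sample pair:

```python
# Coordinate map P2(R) -> V3(R) relative to the basis {1, x, x^2}:
# f(a0 + a1*x + a2*x^2) = (a0, a1, a2).
def coord(p):
    return tuple(p)

def add(p, q):
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    return [c * a for a in p]

p, q = [1, 2, 3], [4, 0, -1]      # 1 + 2x + 3x^2 and 4 - x^2
a, b = 2, 5
lhs = coord(add(scale(a, p), scale(b, q)))
rhs = tuple(a * u + b * v for u, v in zip(coord(p), coord(q)))
print(lhs == rhs)  # True: f(a*p + b*q) = a*f(p) + b*f(q)
```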
Solved Examples
Example 1. Show that the mapping f : V3(F) → V2(F) defined by f(a1, a2, a3) = (a1, a2)
is a homomorphism of V3(F) onto V2(F). (Kanpur 81)
Solution. Let α = (a1, a2, a3) and β = (b1, b2, b3) be any two elements of V3(F). Also let a, b be any two elements of F. We have
f(aα + bβ) = f[a (a1, a2, a3) + b (b1, b2, b3)]
= f[(aa1 + bb1, aa2 + bb2, aa3 + bb3)] = (aa1 + bb1, aa2 + bb2)
= a (a1, a2) + b (b1, b2) = a f(a1, a2, a3) + b f(b1, b2, b3)
= a f(α) + b f(β).
∴ f is a linear transformation.
To show that f is onto V2(F). Let (a1, a2) be any element of V2(F). Then (a1, a2, 0) ∈ V3(F) and we have f(a1, a2, 0) = (a1, a2). Therefore f is onto V2(F).
Therefore f is a homomorphism of V3(F) onto V2(F).
Example 2. Let V(R) be the vector space of all complex numbers a + ib over the field of reals R, and let T be a mapping from V(R) to V2(R) defined as T(a + ib) = (a, b).
Show that T is an isomorphism.
Solution. T is one-one. Let
α = a + ib, β = c + id
be any two members of V(R). Then a, b, c, d ∈ R.
We have
T(α) = T(β) ⇒ (a, b) = (c, d) [∵ T(α) = (a, b)]
⇒ a = c, b = d ⇒ a + ib = c + id
⇒ α = β.
∴ T is one-one.
T is onto.
Let (a, b) be an arbitrary member of V2(R).
Then there exists a vector a + ib ∈ V(R) such that T(a + ib) = (a, b). Hence T is onto.
T is a linear transformation. Let α = a + ib, β = c + id be any two members of V(R) and k1, k2 be any two elements of the field R. Then
k1 (a + ib) + k2 (c + id) = (k1a + k2c) + i(k1b + k2d).
We have
T(k1α + k2β) = (k1a + k2c, k1b + k2d), by definition of T
= (k1a, k1b) + (k2c, k2d) = k1 (a, b) + k2 (c, d)
= k1 T(a + ib) + k2 T(c + id) [by definition of T]
= k1 T(α) + k2 T(β).
Hence T is a linear transformation.
Hence T is an isomorphism.
Example 3. If V is a finite dimensional vector space and f is an isomorphism of V into V, prove that f must map V onto V.
(Poona 1970; Meerut 85)
Solution. Let V(F) be a finite dimensional vector space of dimension n. Let f be an isomorphism of V into V i.e., f is a linear transformation and f is one-one. To prove that f is onto V.
Let S = {α1, α2, ..., αn} be a basis of V. We shall first prove that
S′ = {f(α1), f(α2), ..., f(αn)}
is also a basis of V. We claim that S′ is linearly independent. The proof is as follows:
Let a1 f(α1) + a2 f(α2) + ... + an f(αn) = 0 (zero vector of V)
⇒ f(a1α1 + a2α2 + ... + anαn) = 0 [∵ f is a linear transformation]
⇒ a1α1 + a2α2 + ... + anαn = 0 [∵ f is one-one and f(0) = 0]
⇒ a1 = 0, a2 = 0, ..., an = 0 since α1, α2, ..., αn are linearly independent.
∴ S′ is linearly independent.
Now V is of dimension n and S′ is a linearly independent subset of V containing n vectors. Therefore S′ must be a basis of V. Therefore each vector in V can be expressed as a linear combination of the vectors belonging to S′.
Now we shall show that f is onto V. Let α be any element of V. Then there exist scalars c1, c2, ..., cn such that
α = c1 f(α1) + c2 f(α2) + ... + cn f(αn)
= f(c1α1 + c2α2 + ... + cnαn).
Now c1α1 + c2α2 + ... + cnαn ∈ V and the f-image of this element is α. Therefore f is onto V. Hence f is an isomorphism of V onto V.
Example 4. V(F) and W(F) are two finite dimensional vector spaces such that dim V = dim W. If f is an isomorphism of V into W, prove that f must map V onto W.
Solution. Proceed as in example 3.
Example 5. If V is finite dimensional and f is a homomorphism of V onto V, prove that f must be one-one, and so, an isomorphism.
(Meerut 1985)
Solution. Let V(F) be a finite dimensional vector space of dimension n. Let f be a homomorphism of V onto V i.e., f is a linear transformation and f is onto V. To prove that f is one-one.
Let S = {α1, α2, ..., αn} be a basis of V. We shall first prove that S′ = {f(α1), f(α2), ..., f(αn)} is also a basis of V. We claim that L(S′) = V. The proof is as follows:
Let α be any element of V. We shall show that α can be expressed as a linear combination of f(α1), f(α2), ..., f(αn). Since f is onto V, therefore α ∈ V implies that there exists β ∈ V such that f(β) = α. Now β can be expressed as a linear combination of α1, α2, ..., αn. Let
β = a1α1 + a2α2 + ... + anαn.
Then α = f(β) = f(a1α1 + a2α2 + ... + anαn)
= a1 f(α1) + a2 f(α2) + ... + an f(αn).
Thus α has been expressed as a linear combination of f(α1), f(α2), ..., f(αn).
Therefore L(S′) = V.
Since V is of dimension n and S′ is a subset of V containing n vectors and L(S′) = V, therefore S′ must be a basis of V. Therefore each vector in V can be expressed as a linear combination of the vectors belonging to S′, and S′ is linearly independent.
Now we shall show that f is one-one. Let γ and δ be any two elements of V such that
γ = c1α1 + c2α2 + ... + cnαn, δ = d1α1 + d2α2 + ... + dnαn.
We have f(γ) = f(δ)
⇒ f(c1α1 + c2α2 + ... + cnαn) = f(d1α1 + d2α2 + ... + dnαn)
⇒ c1 f(α1) + c2 f(α2) + ... + cn f(αn) = d1 f(α1) + d2 f(α2) + ... + dn f(αn)
⇒ (c1 − d1) f(α1) + (c2 − d2) f(α2) + ... + (cn − dn) f(αn) = 0
⇒ c1 − d1 = 0, c2 − d2 = 0, ..., cn − dn = 0 since f(α1), f(α2), ..., f(αn) are linearly independent
⇒ c1 = d1, c2 = d2, ..., cn = dn
⇒ γ = δ.
∴ f is one-one.
∴ f is an isomorphism of V onto V.
Example 6. If V is finite dimensional and f is a homomorphism of V into itself which is not onto, prove that there is some α ≠ 0 in V such that f(α) = 0. (Meerut 1969)
Solution. If f is a homomorphism of V into itself, then f(0) = 0. Suppose there is no non-zero vector α in V such that f(α) = 0. Then f is one-one. Because
f(β) = f(γ)
⇒ f(β) − f(γ) = 0
⇒ f(β − γ) = 0 [∵ f is a linear transformation]
⇒ β − γ = 0 ⇒ β = γ.
Now V is finite dimensional and f is a linear transformation of V into itself. Since f is one-one, therefore f must be onto V. But it is given that f is not onto. Therefore our assumption is wrong. Hence there will be a non-zero vector α in V such that f(α) = 0.
Example 7. Define linear transformation of a vector space V(F) into a vector space W(F). Show that the mapping
T : (a, b) → (a + 2, b + 3)
of V2(R) into itself is not a linear transformation.
Solution. Linear Transformation. Definition. Let V(F) and W(F) be two vector spaces over the same field F. A mapping
T : V → W
is called a linear transformation of V into W if
T(aα + bβ) = a T(α) + b T(β) ∀ a, b ∈ F and ∀ α, β ∈ V.
Now to show that the mapping
T : (a, b) → (a + 2, b + 3)
of V2(R) into itself is not a linear transformation.
Take α = (1, 2) and β = (1, 3) as two vectors of V2(R) and a = 1, b = 1 as two elements of the field R.
Then aα + bβ = 1 (1, 2) + 1 (1, 3) = (1, 2) + (1, 3) = (2, 5).
By the definition of the mapping T, we have
T(aα + bβ) = T(2, 5) = (2 + 2, 5 + 3) = (4, 8). ...(1)
Also T(α) = T(1, 2) = (1 + 2, 2 + 3) = (3, 5)
and T(β) = T(1, 3) = (1 + 2, 3 + 3) = (3, 6).
∴ a T(α) + b T(β) = 1 (3, 5) + 1 (3, 6)
= (3, 5) + (3, 6) = (6, 11). ...(2)
From (1) and (2), we see that
T(aα + bβ) ≠ a T(α) + b T(β).
Hence T is not a linear transformation of V2(R) into itself.
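The failure of linearity in Example 7 can be replayed mechanically:

```python
def T(v):
    # the translation map of Example 7: not a linear transformation
    a, b = v
    return (a + 2, b + 3)

alpha, beta = (1, 2), (1, 3)
lhs = T((alpha[0] + beta[0], alpha[1] + beta[1]))        # T(alpha + beta)
rhs = tuple(x + y for x, y in zip(T(alpha), T(beta)))    # T(alpha) + T(beta)
print(lhs, rhs)  # (4, 8) (6, 11): unequal, so T is not linear
```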
Example 8. Let f be a linear transformation from a vector space U into a vector space V. If S is a subspace of U, prove that f(S) will be a subspace of V. (Meerut 1974)
Solution. U(F) and V(F) are two vector spaces over the same field F. The mapping f is a linear transformation of U into V i.e.,
f : U → V such that
f(aα + bβ) = a f(α) + b f(β) ∀ a, b ∈ F and ∀ α, β ∈ U.
Let S be a subspace of U. Then to prove that f(S) is a subspace of V.
Let a, b ∈ F and f(α), f(β) ∈ f(S), where α, β ∈ S.
Since S is a subspace of U, therefore
a, b ∈ F and α, β ∈ S ⇒ aα + bβ ∈ S
⇒ f(aα + bβ) ∈ f(S)
⇒ a f(α) + b f(β) ∈ f(S).
[∵ f(aα + bβ) = a f(α) + b f(β)]
Thus a, b ∈ F and f(α), f(β) ∈ f(S)
⇒ a f(α) + b f(β) ∈ f(S).
Hence f(S) is a subspace of V.
Example 9. If f : U → V is an isomorphism of the vector space U into the vector space V, then a set of vectors {f(α1), f(α2), ..., f(αr)} is linearly independent if and only if the set {α1, α2, ..., αr} is linearly independent. (Nagarjuna 1990)
Solution. U(F) and V(F) are two vector spaces over the same field F and f is an isomorphism of U into V i.e.,
f : U → V such that
f is 1-1 and f(aα + bβ) = a f(α) + b f(β)
∀ a, b ∈ F and ∀ α, β ∈ U.
Let {α1, α2, ..., αr} be a subset of U. First suppose that the vectors α1, α2, ..., αr are linearly independent. Then to show that the vectors f(α1), f(α2), ..., f(αr) are also linearly independent.
We have
a1 f(α1) + a2 f(α2) + ... + ar f(αr) = 0, where a1, a2, ..., ar ∈ F
⇒ f(a1α1 + a2α2 + ... + arαr) = 0 [∵ f is a linear transformation]
⇒ f(a1α1 + a2α2 + ... + arαr) = f(0) [∵ f(0) = 0]
⇒ a1α1 + a2α2 + ... + arαr = 0 [∵ f is 1-1]
⇒ a1 = 0, a2 = 0, ..., ar = 0 since the vectors α1, α2, ..., αr are linearly independent.
Hence the vectors f(α1), f(α2), ..., f(αr) are also linearly independent.
Conversely suppose that the vectors f(α1), f(α2), ..., f(αr) are linearly independent. Then to show that the vectors α1, α2, ..., αr are also linearly independent.
We have
a1α1 + a2α2 + ... + arαr = 0, where a1, ..., ar ∈ F
⇒ f(a1α1 + a2α2 + ... + arαr) = f(0)
⇒ a1 f(α1) + a2 f(α2) + ... + ar f(αr) = 0 [∵ f is a linear transformation]
⇒ a1 = 0, a2 = 0, ..., ar = 0
since the vectors f(α1), f(α2), ..., f(αr) are linearly independent.
Hence the vectors α1, α2, ..., αr are also linearly independent.
Exercises
1. If f : U → V is an isomorphism of the vector space U into the vector space V, then a set of vectors f(α1), f(α2), ..., f(αr) is linearly dependent in V if and only if the set α1, α2, ..., αr is linearly dependent in U.
2. Let T : V2(R) → V2(R) be defined as
T(a1, b1) = (b1, a1).
Show that T is an isomorphism.
3. If f is an isomorphism of a vector space V onto a vector space W, prove that f maps a basis of V onto a basis of W.
4. Prove that a finite dimensional vector space V(R) with dimension V = n is isomorphic to Rⁿ. (Meerut 1987)
5. Let V be a finite dimensional vector space. If f : V → V is a one-one linear transformation, show that f is an isomorphism of V onto itself. (Poona 1971)
6. Give an example of a one-one linear transformation of an infinite dimensional vector space which is not an isomorphism.
7. If T be a linear operator on a finite dimensional vector space V, show that T is one-one if and only if T is onto. (Meerut 1985)
§ 20. Quotient Space. Let W be any subspace of a vector space V(F). Let α be any element of V. Then the set
W + α = {γ + α : γ ∈ W}
is called a right coset of W in V generated by α. Similarly the set
α + W = {α + γ : γ ∈ W}
is called a left coset of W in V generated by α.
Obviously W + α and α + W are both subsets of V. Since addition in V is commutative, therefore we have W + α = α + W. Hence we shall call W + α simply a coset of W in V generated by α.
The following results about cosets should be remembered:
(i) We have 0 ∈ V and W + 0 = W. Therefore W itself is a coset of W in V.
(ii) α ∈ W ⇒ W + α = W.
Proof. First we shall prove that W + α ⊆ W.
Let γ + α be any arbitrary element of W + α. Then γ ∈ W. Now W is a subspace of V. Therefore
γ ∈ W, α ∈ W ⇒ γ + α ∈ W.
Thus every element of W + α is also an element of W. Hence W + α ⊆ W.
Now we shall prove that W ⊆ W + α.
Let β ∈ W. Since W is a subspace, therefore
α ∈ W ⇒ −α ∈ W.
Consequently β ∈ W, −α ∈ W ⇒ β − α ∈ W. Now we can write
β = (β − α) + α ∈ W + α since β − α ∈ W.
Thus β ∈ W ⇒ β ∈ W + α. Therefore W ⊆ W + α.
Hence W = W + α.
(iii) If W + α and W + β are two cosets of W in V, then
W + α = W + β ⇔ α − β ∈ W.
Proof. Since 0 ∈ W, therefore 0 + α ∈ W + α. Thus α ∈ W + α.
Now W + α = W + β ⇒ α ∈ W + β
⇒ α − β ∈ (W + β) + (−β)
⇒ α − β ∈ W + 0 ⇒ α − β ∈ W.
Conversely,
α − β ∈ W ⇒ W + (α − β) = W
⇒ W + [(α − β) + β] = W + β
⇒ W + α = W + β.
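Result (iii) can be visualised in R² with W = {(t, 0) : t ∈ R}: the coset W + (a, b) is the horizontal line at height b, and two cosets coincide exactly when the difference of their representatives lies in W. A sketch (the subspace and the test vectors are chosen for illustration):

```python
# W = {(t, 0) : t real} is a subspace of R^2; membership in W means the
# second component is zero.
def in_W(v):
    return abs(v[1]) < 1e-12

# W + alpha = W + beta  iff  alpha - beta is in W (result (iii)).
def same_coset(alpha, beta):
    return in_W((alpha[0] - beta[0], alpha[1] - beta[1]))

print(same_coset((3, 2), (-1, 2)))  # True:  difference (4, 0) lies in W
print(same_coset((3, 2), (3, 5)))   # False: difference (0, -3) does not
```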
Let V/W denote the set of all cosets of W in V i.e., let
V/W = {W + α : α ∈ V}.
We have just seen that if α − β ∈ W, then W + α = W + β. Thus a coset of W in V can have more than one representation.
Now if V(F) is a vector space, then we shall give a vector space structure to the set V/W over the same field F. For this we shall have to define addition in V/W i.e., addition of cosets of W in V, and multiplication of a coset by an element of F i.e., scalar multiplication.
Theorem. If W is any subspace of a vector space V(F), then the set V/W of all cosets W + α, where α is any arbitrary element of V, is a vector space over F for the addition and scalar multiplication compositions defined as follows:
(W + α) + (W + β) = W + (α + β) ∀ α, β ∈ V
and a (W + α) = W + aα ; a ∈ F, α ∈ V.
(Meerut 1993; Kanpur 80; Nagarjuna 90)
Proof. We have α, β ∈ V ⇒ α + β ∈ V.
Also a ∈ F, α ∈ V ⇒ aα ∈ V.
Therefore W + (α + β) ∈ V/W and also W + aα ∈ V/W. Thus V/W is closed with respect to addition of cosets and scalar multiplication as defined above. Now first of all we shall show that these two compositions are well defined i.e., are independent of the particular representative chosen to denote a coset.
Let W + α = W + α′, α, α′ ∈ V
and W + β = W + β′, β, β′ ∈ V.
We have W + α = W + α′ ⇒ α − α′ ∈ W
and W + β = W + β′ ⇒ β − β′ ∈ W.
Now W is a subspace, therefore
α − α′ ∈ W, β − β′ ∈ W
⇒ (α + β) − (α′ + β′) ∈ W
⇒ W + (α + β) = W + (α′ + β′)
⇒ (W + α) + (W + β) = (W + α′) + (W + β′).
Therefore addition in V/W is well defined.
92 Linear Algebra

Again a' e F, a—a'e => a («-a') e W


=> aa.—aot! e
=> JV-ha<x= lV+aoi\
scalar multiplication in is also well defined.
Commutativity of addition. Let W+α, W+β be any two elements
of V/W. Then
(W+α)+(W+β)=W+(α+β)=W+(β+α)=(W+β)+(W+α).
Associativity of addition. Let W+α, W+β, W+γ be any three
elements of V/W. Then
(W+α)+[(W+β)+(W+γ)]=(W+α)+[W+(β+γ)]
=W+[α+(β+γ)]
=W+[(α+β)+γ]
=[W+(α+β)]+(W+γ)
=[(W+α)+(W+β)]+(W+γ).
Existence of additive identity. If 0 is the zero vector of V, then
W+0=W ∈ V/W. If W+α is any element of V/W, then
(W+0)+(W+α)=W+(0+α)=W+α.
∴ W+0=W is the additive identity.
Existence of additive inverse. If W+α is any element of V/W,
then W+(−α)=W−α ∈ V/W. Also we have
(W+α)+(W−α)=W+(α−α)=W+0=W.
∴ W−α is the additive inverse of W+α.
Thus V/W is an abelian group with respect to addition
composition. Further we observe that if
a, b ∈ F and W+α, W+β ∈ V/W, then
1. a [(W+α)+(W+β)]=a [W+(α+β)]
=W+a (α+β)=W+(aα+aβ)
=(W+aα)+(W+aβ)
=a (W+α)+a (W+β).
2. (a+b) (W+α)=W+(a+b) α
=W+(aα+bα)
=(W+aα)+(W+bα)
=a (W+α)+b (W+α).
3. (ab) (W+α)=W+(ab) α=W+a (bα)
=a (W+bα)=a [b (W+α)].
4. 1 (W+α)=W+1α=W+α.
∴ V/W is a vector space over F for these two compositions.
The vector space V/W is called the Quotient Space of V relative to
W. The coset W is the zero vector of this vector space.
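The coset arithmetic above can be checked concretely. The following Python sketch (an illustration added here, not part of the original text) takes V=R², W={(a, 0) : a ∈ R}, and represents the coset W+(x, y) by the canonical element (0, y), since two vectors lie in the same coset exactly when their second coordinates agree:

```python
from fractions import Fraction

# Illustrative sketch: in V = R^2 with W = {(a, 0) : a in R},
# the coset W + (x, y) is determined entirely by y, so we may
# use (0, y) as a canonical representative.
def coset(v):
    """Return the canonical representative of W + v."""
    return (Fraction(0), Fraction(v[1]))

def coset_add(u, v):
    # (W + u) + (W + v) = W + (u + v)
    return coset((u[0] + v[0], u[1] + v[1]))

def coset_scale(a, v):
    # a (W + v) = W + a v
    return coset((a * v[0], a * v[1]))

# Well-definedness: (3, 2) and (-5, 2) represent the same coset,
# so sums built from either representative agree.
assert coset((3, 2)) == coset((-5, 2))
assert coset_add((3, 2), (1, 7)) == coset_add((-5, 2), (4, 7))
# The coset W itself (second coordinate 0) acts as the zero
# vector of V/W.
assert coset_add((3, 2), (9, 0)) == coset((3, 2))
```

The last assertion mirrors the statement that the coset W is the zero vector of the quotient space.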
Vector Spaces 93

Dimension of a Quotient Space. Theorem.
If W be a subspace of a finite dimensional vector space V(F),
then dim V/W=dim V−dim W.
(Meerut 1990, 93; I.A.S. 89; Allahabad 78; Nagarjuna 91;
Andhra 92; Poona 72)
Proof. Let m be the dimension of the subspace W of the
vector space V(F). Let
S={α₁, α₂, ..., αₘ}
be a basis of W. Since S is a linearly independent subset of V,
therefore it can be extended to form a basis of V. Let
S'={α₁, α₂, ..., αₘ, β₁, β₂, ..., βₗ}
be a basis of V. Then dim V=m+l.
∴ dim V−dim W=(m+l)−m=l.
So we should prove that dim V/W=l.
We claim that the set of l cosets
S₁={W+β₁, W+β₂, ..., W+βₗ}
is a basis of V/W.
First we show that S₁ is linearly independent. The zero vector
of V/W is W.
Let a₁ (W+β₁)+a₂ (W+β₂)+...+aₗ (W+βₗ)=W
⇒ W+(a₁β₁+a₂β₂+...+aₗβₗ)=W+0
⇒ a₁β₁+a₂β₂+...+aₗβₗ ∈ W
⇒ a₁β₁+a₂β₂+...+aₗβₗ=b₁α₁+b₂α₂+...+bₘαₘ
[∵ any vector in W can be expressed as a
linear combination of its basis vectors]
⇒ a₁β₁+a₂β₂+...+aₗβₗ−b₁α₁−b₂α₂−...−bₘαₘ=0
⇒ a₁=0, a₂=0, ..., aₗ=0, since the vectors
β₁, β₂, ..., βₗ, α₁, α₂, ..., αₘ are linearly independent.
∴ the set S₁ is linearly independent.
Now to show that L(S₁)=V/W. Let W+α be any element of
V/W. The vector α ∈ V can be expressed as
α=c₁α₁+c₂α₂+...+cₘαₘ+d₁β₁+d₂β₂+...+dₗβₗ
=γ+d₁β₁+d₂β₂+...+dₗβₗ, where
γ=c₁α₁+c₂α₂+...+cₘαₘ ∈ W.
So W+α=W+(γ+d₁β₁+d₂β₂+...+dₗβₗ)
=(W+γ)+(d₁β₁+d₂β₂+...+dₗβₗ)
=W+(d₁β₁+d₂β₂+...+dₗβₗ)
[∵ γ ∈ W ⇒ W+γ=W]
=(W+d₁β₁)+(W+d₂β₂)+...+(W+dₗβₗ)
=d₁ (W+β₁)+d₂ (W+β₂)+...+dₗ (W+βₗ).
Thus any element W+α of V/W can be expressed as a linear
combination of the elements of S₁.
∴ V/W=L(S₁).
∴ S₁ is a basis of V/W.
∴ dim V/W=l.
Hence the theorem.
§ 21. Direct sum of spaces.
Vector space as a direct sum of subspaces.
Definition. Let V(F) be a vector space and let W₁, W₂, ..., Wₙ
be subspaces of V. Then V is said to be the direct sum of W₁, W₂,
..., Wₙ if every element α ∈ V can be written in one and only one way
as α=α₁+α₂+...+αₙ, where
α₁ ∈ W₁, α₂ ∈ W₂, ..., αₙ ∈ Wₙ. (Nagarjuna 1978)
If a vector space V(F) is a direct sum of its two subspaces W₁
and W₂, then we should have not only V=W₁+W₂ but also that
each vector of V can be uniquely expressed as the sum of an element of
W₁ and an element of W₂. Symbolically the direct sum is represented
by the notation V=W₁⊕W₂.
Example. Let V₂(F) be the vector space of all ordered pairs
of elements of F. Then W₁={(a, 0) : a ∈ F} and W₂={(0, b) : b ∈ F} are two
subspaces of V₂(F). Obviously any element (x, y) ∈ V₂(F) can be
uniquely expressed as the sum of an element of W₁ and an element of
W₂. The unique expression is (x, y)=(x, 0)+(0, y). Thus V₂(F) is
the direct sum of W₁ and W₂. Also we observe that the only element
common to both W₁ and W₂ is the zero vector (0, 0).
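The unique decomposition in this example is mechanical enough to compute. A short Python sketch (illustrative, not part of the original text) splits a vector of V₂ into its W₁ and W₂ parts and checks that the parts recombine:

```python
def decompose(v):
    """Split v in R^2 as the sum of an element of W1 = {(a, 0)}
    and an element of W2 = {(0, b)}; the split is unique."""
    w1_part = (v[0], 0)
    w2_part = (0, v[1])
    return w1_part, w2_part

p, q = decompose((7, -3))
assert p == (7, 0) and q == (0, -3)
# The two parts really recombine to the original vector.
assert (p[0] + q[0], p[1] + q[1]) == (7, -3)
```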
Disjoint subspaces. Definition. Two subspaces W₁ and W₂ of
the vector space V(F) are said to be disjoint if their intersection is the
zero subspace, i.e., if W₁ ∩ W₂={0}.
Theorem. The necessary and sufficient conditions for a vector
space V(F) to be the direct sum of its two subspaces W₁ and W₂ are
that
(i) V=W₁+W₂
and (ii) W₁ ∩ W₂={0}, i.e., W₁ and W₂ are disjoint.
(Meerut 1987, 90, 91, 92; Kanpur 81; Andhra 92)
Proof. The conditions are necessary.
Let V be the direct sum of its two subspaces W₁ and W₂. Then
each element of V is expressible uniquely as the sum of an element of
W₁ and an element of W₂. Therefore we have
V=W₁+W₂.
Let, if possible, 0≠α ∈ W₁ ∩ W₂. Then α ∈ W₁, α ∈ W₂. Also
α ∈ V and we can write
α=0+α where 0 ∈ W₁, α ∈ W₂
and α=α+0 where α ∈ W₁, 0 ∈ W₂.
Thus α ∈ V can be expressed in at least two different ways as
the sum of an element of W₁ and an element of W₂. This contradicts
the fact that V is the direct sum of W₁ and W₂. Hence 0 is the only
vector common to both W₁ and W₂, i.e., W₁ ∩ W₂={0}. Thus the
conditions are necessary.
The conditions are sufficient.
Let V=W₁+W₂ and W₁ ∩ W₂={0}. Then to show that V is
the direct sum of W₁ and W₂.
V=W₁+W₂ ⇒ each element of V can be expressed as the sum
of an element of W₁ and an element of W₂. Now to show that this
expression is unique.
Let, if possible,
α=α₁+α₂, α ∈ V, α₁ ∈ W₁, α₂ ∈ W₂,
and α=β₁+β₂, β₁ ∈ W₁, β₂ ∈ W₂.
Then to show that α₁=β₁ and α₂=β₂.
We have α₁+α₂=β₁+β₂
⇒ α₁−β₁=β₂−α₂.
Since W₁ is a subspace, therefore
α₁ ∈ W₁, β₁ ∈ W₁ ⇒ α₁−β₁ ∈ W₁.
Similarly β₂−α₂ ∈ W₂.
∴ α₁−β₁=β₂−α₂ ∈ W₁ ∩ W₂.
But 0 is the only vector which belongs to W₁ ∩ W₂. Therefore
α₁−β₁=0 ⇒ α₁=β₁. Also β₂−α₂=0 ⇒ α₂=β₂.
Thus each vector α ∈ V is uniquely expressible as the sum of an
element of W₁ and an element of W₂. Hence V=W₁⊕W₂.
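Over a finite field the two conditions of this theorem can be verified exhaustively. The sketch below (an illustration added here; the text itself works over an arbitrary field F) checks over the two-element field F₂={0, 1} that V=W₁+W₂ together with W₁ ∩ W₂={0} forces a unique decomposition of every vector:

```python
from itertools import product

# V = F2^2; vectors are pairs of bits, addition is mod 2.
V = list(product((0, 1), repeat=2))
add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

W1 = [(0, 0), (1, 0)]          # subspace spanned by (1, 0)
W2 = [(0, 0), (0, 1)]          # subspace spanned by (0, 1)

# Condition (i): V = W1 + W2.
assert set(add(u, v) for u in W1 for v in W2) == set(V)
# Condition (ii): W1 and W2 are disjoint.
assert set(W1) & set(W2) == {(0, 0)}
# Conclusion of the theorem: every vector of V has exactly one
# expression as (element of W1) + (element of W2).
for v in V:
    ways = [(u, w) for u in W1 for w in W2 if add(u, w) == v]
    assert len(ways) == 1
```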
Dimension of a Direct Sum. If a finite dimensional vector space
V(F) is the direct sum of two subspaces W₁ and W₂, then
dim V=dim W₁+dim W₂. (Nagarjuna 1980)
Proof. Let dim W₁=m and dim W₂=l. Also let the sets of vectors
S₁={α₁, α₂, ..., αₘ}
and S₂={β₁, β₂, ..., βₗ}
be the bases of W₁ and W₂ respectively.
We have dim W₁+dim W₂=m+l.
In order to prove that dim V=dim W₁+dim W₂, we should
therefore prove that dim V=m+l. We claim that the set
S=S₁ ∪ S₂={α₁, α₂, ..., αₘ, β₁, β₂, ..., βₗ}
is a basis of V.
First we show that the set S is linearly independent. Let
a₁α₁+a₂α₂+...+aₘαₘ+b₁β₁+b₂β₂+...+bₗβₗ=0
⇒ a₁α₁+a₂α₂+...+aₘαₘ=−(b₁β₁+b₂β₂+...+bₗβₗ).
Now a₁α₁+a₂α₂+...+aₘαₘ ∈ W₁
and −(b₁β₁+b₂β₂+...+bₗβₗ) ∈ W₂. Therefore
a₁α₁+a₂α₂+...+aₘαₘ ∈ W₁ ∩ W₂
and −(b₁β₁+b₂β₂+...+bₗβₗ) ∈ W₁ ∩ W₂.
But V is the direct sum of W₁ and W₂. Therefore 0 is the only
vector belonging to W₁ ∩ W₂. Then we have
a₁α₁+a₂α₂+...+aₘαₘ=0, b₁β₁+b₂β₂+...+bₗβₗ=0.
Since both the sets {α₁, α₂, ..., αₘ} and {β₁, β₂, ..., βₗ} are linearly
independent, therefore we have
a₁=0, a₂=0, ..., aₘ=0, b₁=0, b₂=0, ..., bₗ=0.
Therefore S is linearly independent.
Now we shall show that L(S)=V. Let α be any element of V.
Then
α=an element of W₁+an element of W₂
=a linear combination of S₁+a linear combination of S₂
=a linear combination of elements of S.
∴ L(S)=V.
∴ S is a basis of V. Therefore dim V=m+l.
Hence the theorem.
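Over F₂ a subspace of dimension d has exactly 2^d elements, so the dimension formula can be confirmed by counting. A small illustrative sketch (not part of the original text):

```python
from itertools import product
from math import log2

# V = F2^3 as the direct sum of W1 = span{(1,0,0), (0,1,0)} and
# W2 = span{(0,0,1)}; addition is coordinatewise mod 2.
W1 = [(a, b, 0) for a in (0, 1) for b in (0, 1)]
W2 = [(0, 0, c) for c in (0, 1)]
V = list(product((0, 1), repeat=3))

dim = lambda S: int(log2(len(S)))   # over F2, |S| = 2^(dim S)
assert dim(W1) == 2 and dim(W2) == 1
# dim V = dim W1 + dim W2
assert dim(V) == dim(W1) + dim(W2)
```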
A sort of converse of this theorem is true. It has been proved
in the following theorem.
Theorem. Let V be a finite dimensional vector space and let
W₁, W₂ be subspaces of V such that V=W₁+W₂ and
dim V=dim W₁+dim W₂. Then V=W₁⊕W₂.
Proof. Let dim W₁=l and dim W₂=m. Then
dim V=l+m.
Let S₁={α₁, α₂, ..., αₗ}
be a basis of W₁ and
S₂={β₁, β₂, ..., βₘ}
be a basis of W₂. We shall show that S₁ ∪ S₂ is a basis of V.
Let α ∈ V. Since V=W₁+W₂, therefore we can write
α=γ+δ where γ ∈ W₁, δ ∈ W₂.
Now γ ∈ W₁ can be expressed as a linear combination of the
elements of S₁ and δ ∈ W₂ can be expressed as a linear combination
of the elements of S₂. Therefore α ∈ V can be expressed as a linear
combination of the elements of S₁ ∪ S₂. Therefore V=L(S₁ ∪ S₂).
Since dim V=l+m and L(S₁ ∪ S₂)=V, therefore the number
of distinct elements in S₁ ∪ S₂ cannot be less than l+m. Thus
S₁ ∪ S₂ has l+m distinct elements and therefore S₁ ∪ S₂ is a basis
of V. Therefore the set
{α₁, α₂, ..., αₗ, β₁, β₂, ..., βₘ}
is linearly independent.
Now we shall show that W₁ ∩ W₂={0}.
Let α ∈ W₁ ∩ W₂. Then α ∈ W₁, α ∈ W₂.
Therefore α=a₁α₁+a₂α₂+...+aₗαₗ
and α=b₁β₁+b₂β₂+...+bₘβₘ
for some a's and b's ∈ F.
∴ a₁α₁+a₂α₂+...+aₗαₗ=b₁β₁+b₂β₂+...+bₘβₘ
⇒ a₁α₁+a₂α₂+...+aₗαₗ−b₁β₁−b₂β₂−...−bₘβₘ=0
⇒ a₁=0, a₂=0, ..., aₗ=0, b₁=0, b₂=0, ..., bₘ=0
⇒ α=0.
∴ W₁ ∩ W₂={0}.
Since V=W₁+W₂ and W₁ ∩ W₂={0}, therefore by the previous
theorem V=W₁⊕W₂.
Complementary subspaces. Definition. Let V(F) be a vector
space and W₁, W₂ be two subspaces of V. Then the subspace W₂ is
called the complement of W₁ in V if V is the direct sum of W₁ and W₂.
Existence of complementary subspaces. Theorem. Corresponding
to each subspace W₁ of a finite dimensional vector space V(F), there
exists a subspace W₂ such that V is the direct sum of W₁ and W₂.
(I.A.S. 1972; Meerut 68)
Proof. Let dim W₁=m. Let the set
S₁={α₁, α₂, ..., αₘ}
be a basis of W₁.
Since S₁ is a linearly independent subset of V, therefore it can
be extended to form a basis of V. Let the set
S={α₁, α₂, ..., αₘ, β₁, β₂, ..., βₗ}
be a basis of V.
Let W₂ be the subspace of V generated by the set
S₂={β₁, β₂, ..., βₗ}.
We shall prove that V is the direct sum of W₁ and W₂. For
this we shall prove that V=W₁+W₂ and W₁ ∩ W₂={0}.
Let α be any element of V. Then we can express
α=a linear combination of elements of S
=a linear combination of S₁+a linear combination of S₂
=an element of W₁+an element of W₂.
∴ V=W₁+W₂.
Again let β ∈ W₁ ∩ W₂. Then β can be expressed as a linear
combination of S₁ and also as a linear combination of S₂. So we
have
β=a₁α₁+a₂α₂+...+aₘαₘ=b₁β₁+b₂β₂+...+bₗβₗ
⇒ a₁α₁+a₂α₂+...+aₘαₘ−b₁β₁−b₂β₂−...−bₗβₗ=0
⇒ a₁=0, a₂=0, ..., aₘ=0, b₁=0, b₂=0, ..., bₗ=0, since
α₁, α₂, ..., αₘ, β₁, β₂, ..., βₗ are linearly independent.
∴ β=0 (zero vector).
Thus W₁ ∩ W₂={0}.
Hence V is the direct sum of W₁ and W₂.
Dimension of a Quotient space. Alternative method.
Theorem. If W₁ and W₂ are complementary subspaces of a
vector space V, then the mapping f which assigns to each vector β in
W₂ the coset W₁+β is an isomorphism between W₂ and V/W₁.
(Meerut 1969)
Proof. It is given that
V=W₁⊕W₂
and f : W₂ → V/W₁ such that
f(β)=W₁+β ∀ β ∈ W₂.
We shall show that f is an isomorphism of W₂ onto V/W₁.
(i) f is one-one.
If β₁, β₂ ∈ W₂, then
f(β₁)=f(β₂) ⇒ W₁+β₁=W₁+β₂ [by def. of f]
⇒ β₁−β₂ ∈ W₁
⇒ β₁−β₂ ∈ W₁ ∩ W₂
[∵ β₁−β₂ ∈ W₂ because W₂ is a subspace]
⇒ β₁−β₂=0 [∵ W₁ ∩ W₂={0}]
⇒ β₁=β₂.
∴ f is one-one.
(ii) f is onto.
Let W₁+α be any coset in V/W₁, where α ∈ V. Since V is
the direct sum of W₁ and W₂, therefore we can write
α=γ+β where γ ∈ W₁, β ∈ W₂.
This gives α−β=γ ∈ W₁.
Since α−β ∈ W₁, therefore W₁+α=W₁+β.
Now f(β)=W₁+β [by def. of f]
=W₁+α.
Thus W₁+α ∈ V/W₁ ⇒ ∃ β ∈ W₂ such that f(β)=W₁+α.
Therefore f is onto.
(iii) f is a linear transformation.
Let a, b ∈ F and β₁, β₂ ∈ W₂.
Then f(aβ₁+bβ₂)=W₁+(aβ₁+bβ₂)
=(W₁+aβ₁)+(W₁+bβ₂)
=a (W₁+β₁)+b (W₁+β₂)
=a f(β₁)+b f(β₂).
Therefore f is a linear transformation.
Hence f is an isomorphism between W₂ and V/W₁.
Corollary. Dimension of a quotient space. If W is an m-dimensional
subspace of an n-dimensional vector space V, then the dimension
of the quotient space V/W is n−m.
Proof. Since W is a subspace of a finite dimensional vector
space V, therefore there exists a subspace W₁ of V such that
V=W⊕W₁.
Also dim V=dim W+dim W₁
or dim W₁=dim V−dim W=n−m.
Now by the above theorem, we have
V/W ≅ W₁.
∴ dim V/W=dim W₁=n−m.
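Over F₂ this corollary can be confirmed by counting cosets: V/W has 2^(n−m) elements, so its dimension is n−m. An illustrative sketch (not part of the original text):

```python
from itertools import product

# V = F2^3 (n = 3), W = span{(1, 0, 0)} (m = 1).
V = list(product((0, 1), repeat=3))
W = [(0, 0, 0), (1, 0, 0)]
add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

# Collect the distinct cosets W + v as frozensets of vectors.
cosets = {frozenset(add(w, v) for w in W) for v in V}

# |V/W| = 2^(n - m) = 2^(3 - 1) = 4, i.e. dim V/W = n - m = 2.
assert len(cosets) == 2 ** (3 - 1)
```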
Direct sum of several subspaces. Now we shall have some
discussion on the direct sum of several subspaces. For this purpose
we shall first define the concept of independence of subspaces
analogous to the disjointness condition on two subspaces.
Definition. Suppose W₁, W₂, ..., Wₖ are subspaces of the vector
space V. We shall say that W₁, W₂, ..., Wₖ are independent if
α₁+α₂+...+αₖ=0, αᵢ ∈ Wᵢ, implies that each αᵢ=0.
Theorem 1. Suppose V(F) is a vector space. Let W₁, W₂, ...,
Wₖ be subspaces of V and let W=W₁+W₂+...+Wₖ. The following
are equivalent:
(i) W₁, W₂, ..., Wₖ are independent.
(ii) Each vector α in W can be uniquely expressed in the form
α=α₁+α₂+...+αₖ with αᵢ in Wᵢ, i=1, ..., k.
(iii) For each j, 2 ≤ j ≤ k, the subspace Wⱼ is disjoint from
the sum W₁+...+Wⱼ₋₁.
Proof. In order to prove the equivalence of the three statements
we shall prove that (i) ⇒ (ii), (ii) ⇒ (iii) and (iii) ⇒ (i).
(i) ⇒ (ii). Suppose W₁, ..., Wₖ are independent. Let α ∈ W.
Since W=W₁+...+Wₖ, therefore we can write
α=α₁+...+αₖ with αᵢ in Wᵢ.
Suppose that also
α=β₁+...+βₖ with βᵢ in Wᵢ.
Then α₁+...+αₖ=β₁+...+βₖ
⇒ (α₁−β₁)+...+(αₖ−βₖ)=0 with αᵢ−βᵢ in Wᵢ,
as Wᵢ is a subspace
⇒ αᵢ−βᵢ=0 [∵ W₁, ..., Wₖ are independent]
⇒ αᵢ=βᵢ, i=1, ..., k.
Therefore the αᵢ are uniquely determined by α.
(ii) ⇒ (iii). Let α ∈ Wⱼ ∩ (W₁+...+Wⱼ₋₁).
Then α ∈ Wⱼ and α ∈ W₁+...+Wⱼ₋₁.
Now α ∈ W₁+...+Wⱼ₋₁ implies that there exist vectors
α₁, ..., αⱼ₋₁ with αᵢ in Wᵢ such that
α=α₁+...+αⱼ₋₁.
Also α ∈ Wⱼ.
Therefore we get two expressions for α as a sum of vectors,
one in each Wᵢ. These are
α=α₁+...+αⱼ₋₁+0+...+0,
in which the vector belonging to Wⱼ is 0,
and α=0+...+0+α+...+0,
in which the vector belonging to Wⱼ is α.
Since the expression for α is given to be unique, therefore we
must have α=0.
Thus Wⱼ ∩ (W₁+...+Wⱼ₋₁)={0}.
(iii) ⇒ (i). Let α₁+...+αₖ=0 ...(1)
where αᵢ ∈ Wᵢ, i=1, ..., k.
Then we are to prove that each αᵢ=0.
Suppose that for some i we have αᵢ≠0.
Let j be the largest integer i between 1 and k such that αᵢ≠0.
Obviously j must be ≥ 2 and at the most j can be equal to k.
Then (1) reduces to
α₁+...+αⱼ=0, αⱼ≠0
⇒ αⱼ=−α₁−...−αⱼ₋₁
⇒ αⱼ ∈ W₁+...+Wⱼ₋₁
[∵ −α₁−...−αⱼ₋₁ ∈ W₁+...+Wⱼ₋₁]
⇒ αⱼ ∈ Wⱼ ∩ (W₁+...+Wⱼ₋₁) [∵ αⱼ ∈ Wⱼ]
⇒ αⱼ=0.
Thus we get a contradiction. Hence each αᵢ=0.
Note. If any (and hence all) of the three conditions of theorem 1
hold for W₁, ..., Wₖ, then we shall say that W is the direct sum of
W₁, ..., Wₖ and we write
W=W₁⊕W₂⊕...⊕Wₖ.
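The independence condition of theorem 1 can be checked exhaustively over a finite field. The sketch below (an illustration added here, not part of the original text) verifies that the three coordinate-axis subspaces of F₂³ are independent:

```python
from itertools import product

# W1, W2, W3 are the three coordinate axes of V = F2^3.
W = [
    [(a, 0, 0) for a in (0, 1)],
    [(0, b, 0) for b in (0, 1)],
    [(0, 0, c) for c in (0, 1)],
]
add = lambda u, v: tuple((x + y) % 2 for x, y in zip(u, v))
zero = (0, 0, 0)

# Independence: a1 + a2 + a3 = 0 with ai in Wi forces each ai = 0.
for a1, a2, a3 in product(W[0], W[1], W[2]):
    if add(add(a1, a2), a3) == zero:
        assert a1 == a2 == a3 == zero
```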
Theorem 2. Let V(F) be a vector space. Let W₁, ..., Wₙ be
subspaces of V. Suppose that V=W₁+...+Wₙ and that
Wᵢ ∩ (W₁+...+Wᵢ₋₁+Wᵢ₊₁+...+Wₙ)={0}
for every i=1, 2, ..., n. Prove that V is the direct sum of W₁, ..., Wₙ.
Proof. In order to prove that V is the direct sum of W₁, ...,
Wₙ, we should prove that each vector α ∈ V can be uniquely
expressed as
α=α₁+...+αₙ where αᵢ ∈ Wᵢ, i=1, ..., n.
Since V=W₁+...+Wₙ, therefore any vector α in V can be
written as
α=α₁+...+αₙ where αᵢ ∈ Wᵢ. ...(1)
To show that α₁, ..., αₙ are unique.
Let α=β₁+...+βₙ where βᵢ ∈ Wᵢ. ...(2)
From (1) and (2), we get
α₁+...+αₙ=β₁+...+βₙ
⇒ (α₁−β₁)+...+(αᵢ₋₁−βᵢ₋₁)+(αᵢ−βᵢ)
+(αᵢ₊₁−βᵢ₊₁)+...+(αₙ−βₙ)=0. ...(3)
Now each Wᵢ is a subspace of V. Therefore αᵢ−βᵢ and also
its additive inverse βᵢ−αᵢ ∈ Wᵢ, i=1, ..., n.
From (3), we get
αᵢ−βᵢ=(β₁−α₁)+...+(βᵢ₋₁−αᵢ₋₁)+(βᵢ₊₁−αᵢ₊₁)+...+(βₙ−αₙ). ...(4)
Now the vector on the right hand side of (4), and consequently
the vector αᵢ−βᵢ, is in W₁+...+Wᵢ₋₁+Wᵢ₊₁+...+Wₙ.
Also αᵢ−βᵢ ∈ Wᵢ.
∴ αᵢ−βᵢ ∈ Wᵢ ∩ (W₁+...+Wᵢ₋₁+Wᵢ₊₁+...+Wₙ).
But for every i=1, ..., n, it is given that
Wᵢ ∩ (W₁+...+Wᵢ₋₁+Wᵢ₊₁+...+Wₙ)={0}.
Therefore αᵢ−βᵢ=0, i=1, ..., n
⇒ αᵢ=βᵢ, i=1, ..., n
⇒ the expression (1) for α is unique.
Hence V is the direct sum of W₁, ..., Wₙ.
Theorem 3. Let V(F) be a finite dimensional vector space and
let W₁, ..., Wₖ be subspaces of V. Then the following two statements
are equivalent:
(i) V is the direct sum of W₁, ..., Wₖ.
(ii) If Bᵢ is a basis of Wᵢ, i=1, ..., k, then the union
B=B₁ ∪ B₂ ∪ ... ∪ Bₖ is also a basis for V. (Meerut 1973, 75, 80, 83)
Proof. Let Bᵢ={αᵢ₁, αᵢ₂, ..., αᵢₙᵢ} be a basis for Wᵢ. Here
nᵢ=dim Wᵢ=number of vectors in Bᵢ. Also let B be the union of
the bases Bᵢ.
(i) ⇒ (ii). It is given that V is the direct sum of W₁, ..., Wₖ;
therefore for any α ∈ V, we can write
α=α₁+...+αₖ for αᵢ ∈ Wᵢ, i=1, ..., k.
Now αᵢ can be expressed as a linear combination of the vectors
in Bᵢ, which is a basis of Wᵢ. Therefore α can be expressed as a
linear combination of the elements of B. Therefore L(B)=V,
i.e., B spans V.
Now to show that B is linearly independent. Let
(a₁₁α₁₁+...+a₁ₙ₁α₁ₙ₁)+...+(aₖ₁αₖ₁+...+aₖₙₖαₖₙₖ)=0. ...(1)
Since V is the direct sum of W₁, ..., Wₖ, therefore 0 ∈ V can
be uniquely expressed as a sum of vectors one in each Wᵢ. This
unique expression is
0=0+...+0 where 0 ∈ Wᵢ, i=1, ..., k.
Now aᵢ₁αᵢ₁+...+aᵢₙᵢαᵢₙᵢ ∈ Wᵢ. Therefore from (1), which
is an expression for 0 ∈ V as a sum of vectors one in each Wᵢ, we
get
aᵢ₁αᵢ₁+...+aᵢₙᵢαᵢₙᵢ=0, i=1, ..., k
⇒ aᵢ₁=0, ..., aᵢₙᵢ=0, since {αᵢ₁, ..., αᵢₙᵢ}
is linearly independent, being a basis for Wᵢ.
Therefore B is linearly independent. Therefore B is a
basis of V.
(ii) ⇒ (i). It is given that B=B₁ ∪ ... ∪ Bₖ is a basis of V.
Therefore for any α ∈ V, we can write
α=α₁+α₂+...+αₖ ...(2)
where αᵢ=aᵢ₁αᵢ₁+...+aᵢₙᵢαᵢₙᵢ ∈ Wᵢ.
Thus each vector in V can be expressed as a sum of vectors
one in each Wᵢ.
Now to show that the expression (2) for α is unique. Let
α=β₁+β₂+...+βₖ ...(3)
where βᵢ=bᵢ₁αᵢ₁+...+bᵢₙᵢαᵢₙᵢ ∈ Wᵢ.
From (2) and (3), we get
α₁+...+αₖ=β₁+...+βₖ
⇒ [(a₁₁−b₁₁) α₁₁+...+(a₁ₙ₁−b₁ₙ₁) α₁ₙ₁]+...
+[(aₖ₁−bₖ₁) αₖ₁+...+(aₖₙₖ−bₖₙₖ) αₖₙₖ]=0
⇒ aᵢ₁=bᵢ₁, ..., aᵢₙᵢ=bᵢₙᵢ, i=1, ..., k
[∵ B is linearly independent, being a basis of V]
⇒ αᵢ=βᵢ, i=1, ..., k
⇒ the expression (2) for α is unique.
Hence V is the direct sum of W₁, ..., Wₖ.
Note. While proving this theorem we have proved that if a
finite dimensional vector space V(F) is the direct sum of its
subspaces W₁, ..., Wₖ, then dim V=dim W₁+...+dim Wₖ.
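Statement (ii) of theorem 3 can be checked for a small example. The sketch below (an illustration added here, not part of the original text) takes the three axis subspaces of F₂³, forms the union of their one-element bases, and verifies that this union both spans F₂³ and is linearly independent:

```python
from itertools import product

# Bases of the three axis subspaces of V = F2^3.
B1, B2, B3 = [(1, 0, 0)], [(0, 1, 0)], [(0, 0, 1)]
B = B1 + B2 + B3            # the union of the bases

def combo(coeffs, vecs):
    """Linear combination over F2 (coefficients are bits)."""
    s = (0, 0, 0)
    for c, v in zip(coeffs, vecs):
        s = tuple((a + c * b) % 2 for a, b in zip(s, v))
    return s

# B spans V: the combinations of B give all 8 vectors of F2^3 ...
spanned = {combo(cs, B) for cs in product((0, 1), repeat=3)}
assert len(spanned) == 8
# ... and B is linearly independent: only the zero combination
# of its three vectors gives the zero vector.
zero_combos = [cs for cs in product((0, 1), repeat=3)
               if combo(cs, B) == (0, 0, 0)]
assert zero_combos == [(0, 0, 0)]
```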
§ 22. Co-ordinates. (Meerut 1983P). Let V(F) be a finite
dimensional vector space. Let
B={α₁, α₂, ..., αₙ}
be an ordered basis of V. By an ordered basis we mean that the
vectors of B have been enumerated in some well-defined way, i.e.,
the vectors occupying the first, second, ..., nth places in the set B
are fixed.
Let α ∈ V. Then there exists a unique n-tuple (x₁, x₂, ..., xₙ)
of scalars such that
α=x₁α₁+x₂α₂+...+xₙαₙ=Σᵢ₌₁ⁿ xᵢαᵢ.
The n-tuple (x₁, x₂, ..., xₙ) is called the n-tuple of co-ordinates
of α relative to the ordered basis B. The scalar xᵢ is called the ith
coordinate of α relative to the ordered basis B. The n×1 matrix
X=[x₁]
  [x₂]
  [⋮ ]
  [xₙ]
is called the coordinate matrix of α relative to the ordered basis B.
We shall use the symbol
[α]_B
for the coordinate matrix of the vector α relative to the ordered
basis B.
It should be noted that for the same basis set B, the coordinates
of the vector α are unique only with respect to a particular ordering
of B. The basis set B can be ordered in several ways. The
coordinates of α may change with a change in the ordering of B.
Solved Examples
Example 1. Show that the set
S={(1, 0, 0), (1, 1, 0), (1, 1, 1)}
is a basis of R³(R) where R is the field of real numbers. Hence
find the coordinates of the vector (a, b, c) with respect to the above
basis. (Meerut 1989)
Solution. The dimension of the vector space R³(R) is 3. If
the set S is linearly independent, then S will form a basis of R³(R)
because S contains 3 vectors. Let x, y, z be scalars in R such that
x (1, 0, 0)+y (1, 1, 0)+z (1, 1, 1)=0=(0, 0, 0)
⇒ (x+y+z, y+z, z)=(0, 0, 0)
⇒ x+y+z=0, y+z=0, z=0
⇒ x=0, y=0, z=0
⇒ the set S is linearly independent.
∴ S is a basis of R³(R).
Now to find the coordinates of (a, b, c) with respect to the
ordered basis S. Let p, q, r be scalars in R such that
(a, b, c)=p (1, 0, 0)+q (1, 1, 0)+r (1, 1, 1)
⇒ (a, b, c)=(p+q+r, q+r, r)
⇒ p+q+r=a, q+r=b, r=c
⇒ r=c, q=b−c, p=a−b.
Hence the coordinates of the vector (a, b, c) are (p, q, r), i.e.,
(a−b, b−c, c).
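The coordinate formula just derived can be verified numerically. A short Python sketch (an illustration added here, not part of the original text) recombines the basis vectors with the coordinates (a−b, b−c, c) and checks that the original vector is recovered:

```python
from fractions import Fraction

# Basis of Example 1 and the coordinate formula derived there.
basis = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]

def coords(a, b, c):
    # (p, q, r) = (a - b, b - c, c), as found in the example
    return (a - b, b - c, c)

a, b, c = Fraction(5), Fraction(2), Fraction(-3)
p, q, r = coords(a, b, c)
# Recombining the basis vectors with these coordinates must
# return the original vector (a, b, c).
recombined = tuple(p * u + q * v + r * w
                   for u, v, w in zip(*basis))
assert recombined == (a, b, c)
```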
Ex. 2. Find the coordinates of the vector (2, 1, −6) of R³
relative to the basis α₁=(1, 1, 2), α₂=(3, −1, 0), α₃=(2, 0, −1).
Sol. To find the coordinates of the vector (2, 1, −6) relative to
the ordered basis {α₁, α₂, α₃}, we shall express the vector (2, 1, −6)
as a linear combination of the vectors α₁, α₂, α₃. Let p, q, r be
scalars in R such that
(2, 1, −6)=pα₁+qα₂+rα₃
⇒ (2, 1, −6)=p (1, 1, 2)+q (3, −1, 0)+r (2, 0, −1)
⇒ (2, 1, −6)=(p+3q+2r, p−q, 2p−r).
∴ p+3q+2r=2, p−q=1, and 2p−r=−6. ...(1)
Solving the equations (1), we get
p=−7/8, q=−15/8 and r=17/4.
Hence the required coordinates of the vector (2, 1, −6) relative
to the ordered basis {α₁, α₂, α₃} are (p, q, r), i.e.,
(−7/8, −15/8, 17/4).
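The solution of the system (1) can be checked with exact rational arithmetic. An illustrative Python sketch (added here, not part of the original text):

```python
from fractions import Fraction as F

# Check the solution of the system (1) of Ex. 2:
#   p + 3q + 2r = 2,  p - q = 1,  2p - r = -6.
p, q, r = F(-7, 8), F(-15, 8), F(17, 4)
assert p + 3 * q + 2 * r == 2
assert p - q == 1
assert 2 * p - r == -6

# Equivalently: p*a1 + q*a2 + r*a3 reproduces (2, 1, -6).
a1, a2, a3 = (1, 1, 2), (3, -1, 0), (2, 0, -1)
v = tuple(p * x + q * y + r * z for x, y, z in zip(a1, a2, a3))
assert v == (2, 1, -6)
```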
Example 3. Construct three subspaces W₁, W₂, W₃ of a vector
space V so that V=W₁⊕W₂=W₁⊕W₃ but W₂≠W₃. (Kanpur 1980)
Solution. Take the vector space V=R².
Obviously W₁={(a, 0) : a ∈ R},
W₂={(0, a) : a ∈ R},
and W₃={(a, a) : a ∈ R}
are three subspaces of R².
We have V=W₁+W₂ and W₁ ∩ W₂={(0, 0)}.
∴ V=W₁⊕W₂.
Also it can be easily shown that
V=W₁+W₃ and W₁ ∩ W₃={(0, 0)}.
∴ V=W₁⊕W₃.
Thus V=W₁⊕W₂=W₁⊕W₃ but W₂≠W₃.
Exercises
1. Let W₁ and W₂ be two subspaces of a finite dimensional vector
space V. If
dim V=dim W₁+dim W₂ and W₁ ∩ W₂={0}, prove that
V=W₁⊕W₂.
2. Show that the set S={(1, 0, 0), (1, 1, 0), (1, 1, 1)} is a basis
of C³(C) where C is the field of complex numbers. Hence find
the coordinates of the vector (3+4i, 6i, 3+7i) in C³ with
respect to the above basis.
Ans. (3−2i, −3−i, 3+7i).
3. Let B={α₁, α₂, α₃} be an ordered basis for R³,
where α₁=(1, 0, −1), α₂=(1, 1, 1), α₃=(1, 0, 0).
Obtain the coordinates of the vector (a, b, c) in the ordered
basis B. (Meerut 1973, 77, 85)
Ans. (b−c, b, a−2b+c).
4. Let V be the vector space of all polynomial functions of degree
less than or equal to two from the field of real numbers R into
itself. For a fixed t ∈ R, let
g₁ (x)=1, g₂ (x)=x+t, g₃ (x)=(x+t)².
Prove that {g₁, g₂, g₃} is a basis for V and obtain the coordinates
of c₀+c₁x+c₂x² in this ordered basis. (Meerut 1973)
Ans. (c₀−c₁t+c₂t², c₁−2c₂t, c₂).
5. Let V be a finite-dimensional vector space and let W₁, ..., Wₖ
be subspaces of V such that
V=W₁+...+Wₖ and dim V=dim W₁+...+dim Wₖ.
Prove that V=W₁⊕...⊕Wₖ. (Meerut 1976)
Linear Transformations

§ 1. Linear transformations or Vector space homomorphism.
Definition. Let U(F) and V(F) be two vector spaces over the
same field F. A linear transformation from U into V is a function T
from U into V such that
T (aα+bβ)=aT (α)+bT (β) ...(1)
for all α, β in U and for all a, b in F.
(Meerut 1978, 79; Allahabad 77; Nagarjuna 80, 91)
The condition (1) is also called the linearity property. It can be
easily seen that the condition (1) is equivalent to the condition
T (aα+β)=aT (α)+T (β)
for all α, β in U and for all scalars a in F.
Linear operator. Definition. Let V(F) be a vector space. A
linear operator on V is a function T from V into V such that
T (aα+bβ)=aT (α)+bT (β)
for all α, β in V and for all a, b in F. (Meerut 1983)
Thus T is a linear operator on V if T is a linear transformation
from V into V itself.
Example 1. The function
T : V₃(R) → V₂(R)
defined by T (a, b, c)=(a, b) ∀ a, b, c ∈ R is a linear transformation
from V₃(R) into V₂(R).
Let α=(a₁, b₁, c₁), β=(a₂, b₂, c₂) ∈ V₃(R).
If a, b ∈ R, then
T (aα+bβ)=T [a (a₁, b₁, c₁)+b (a₂, b₂, c₂)]
=T (aa₁+ba₂, ab₁+bb₂, ac₁+bc₂)
=(aa₁+ba₂, ab₁+bb₂) [by def. of T]
=(aa₁, ab₁)+(ba₂, bb₂)
=a (a₁, b₁)+b (a₂, b₂)
=aT (a₁, b₁, c₁)+bT (a₂, b₂, c₂)
=aT (α)+bT (β).
∴ T is a linear transformation from V₃(R) into V₂(R).
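The linearity property of this projection map can be tested numerically. The following Python sketch (an illustration added here, not part of the original text) checks the defining identity T(aα+bβ)=aT(α)+bT(β) for sample vectors and scalars:

```python
from fractions import Fraction as F

# The projection T(a, b, c) = (a, b) of Example 1.
def T(v):
    return (v[0], v[1])

def lincomb(a, u, b, w):
    """Return a*u + b*w componentwise."""
    return tuple(a * x + b * y for x, y in zip(u, w))

alpha, beta = (F(1), F(2), F(3)), (F(-4), F(5), F(0))
a, b = F(3), F(-2)
# T(a*alpha + b*beta) == a*T(alpha) + b*T(beta)
assert T(lincomb(a, alpha, b, beta)) == lincomb(a, T(alpha), b, T(beta))
```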

Example 2. Let V(F) be the vector space of all m×n matrices
over the field F. Let P be a fixed m×m matrix over F, and let Q
be a fixed n×n matrix over F. The correspondence T from V into
V defined by T (A)=PAQ ∀ A ∈ V
is a linear operator on V.
If A is an m×n matrix over the field F, then PAQ is also an
m×n matrix over the field F. Therefore T is a function from V
into V. Now let A, B ∈ V and a, b ∈ F. Then
T (aA+bB)=P (aA+bB) Q [by def. of T]
=(aPA+bPB) Q=a PAQ+b PBQ=aT (A)+bT (B).
∴ T is a linear transformation from V into V. Thus T is a
linear operator on V.
Example 3. Let V(F) be the vector space of all polynomials
over the field F. Let f(x)=a₀+a₁x+a₂x²+...+aₙxⁿ ∈ V
be a polynomial of degree n in the indeterminate x. Let us define
Df(x)=a₁+2a₂x+...+naₙxⁿ⁻¹ if n ≥ 1
and Df(x)=0 if f(x) is a constant polynomial.
Then the correspondence D from V into V is a linear operator
on V.
If f(x) is a polynomial over the field F, then Df(x) as defined
above is also a polynomial over the field F. Thus if f(x) ∈ V, then
Df(x) ∈ V. Therefore D is a function from V into V.
Also if f(x), g(x) ∈ V and a, b ∈ F, then
D [af(x)+bg(x)]=a Df(x)+b Dg(x).
∴ D is a linear transformation from V into V.
The operator D on V is called the differentiation operator. It
should be noted that for polynomials the definition of differentiation
can be given purely algebraically, and does not require the
usual theory of limiting processes.
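The purely algebraic nature of D is easy to see if a polynomial is stored as its list of coefficients. The sketch below (illustrative, not part of the original text; the coefficient-list representation is a choice made for the example) implements D and checks its linearity:

```python
from fractions import Fraction as F

# Polynomials as coefficient lists [a0, a1, ..., an].
def D(p):
    """Formal derivative: a_i x^i maps to i * a_i x^(i-1)."""
    return [F(i) * c for i, c in enumerate(p)][1:] or [F(0)]

def lin(a, p, b, q):
    """Return the coefficient list of a*p + b*q."""
    n = max(len(p), len(q))
    p, q = p + [F(0)] * (n - len(p)), q + [F(0)] * (n - len(q))
    return [a * x + b * y for x, y in zip(p, q)]

# D(x^2 + 3x + 5) = 2x + 3
assert D([F(5), F(3), F(1)]) == [F(3), F(2)]
# Constant polynomials are sent to the zero polynomial.
assert D([F(7)]) == [F(0)]
# Linearity: D(a f + b g) = a Df + b Dg.
f, g, a, b = [F(1), F(0), F(2)], [F(0), F(4)], F(3), F(-1)
assert D(lin(a, f, b, g)) == lin(a, D(f), b, D(g))
```

No limits are taken anywhere; the definition acts on coefficients alone, exactly as the example remarks.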
Example 4. Let V(R) be the vector space of all continuous
functions from R into R. If f ∈ V and we define T by
(Tf)(x)=∫₀ˣ f(t) dt ∀ x ∈ R,
then T is a linear transformation from V into V.
If f is a real valued continuous function, then Tf, as defined
above, is also a real valued continuous function. Thus
f ∈ V ⇒ Tf ∈ V.
Also the operation of integration satisfies the linearity property.
Therefore T is a linear transformation from V into V.

§ 2. Some particular transformations.
1. Zero transformation. Let U(F) and V(F) be two vector
spaces. The function T from U into V defined by
T (α)=0 (zero vector of V) ∀ α ∈ U
is a linear transformation from U into V.
Let α, β ∈ U and a, b ∈ F. Then aα+bβ ∈ U.
We have T (aα+bβ)=0 [by def. of T]
=a0+b0=aT (α)+bT (β).
∴ T is a linear transformation from U into V. It is called the
zero transformation and we shall in future denote it by 0.
2. Identity operator. Let V(F) be a vector space. The function
I from V into V defined by I (α)=α ∀ α ∈ V
is a linear transformation from V into V.
If α, β ∈ V and a, b ∈ F, then aα+bβ ∈ V and we have
I (aα+bβ)=aα+bβ [by def. of I]
=aI (α)+bI (β).
∴ I is a linear transformation from V into V. The transformation
I is called the identity operator on V and we shall always
denote it by I.
3. Negative of a linear transformation.
Let U(F) and V(F) be two vector spaces. Let T be a linear
transformation from U into V. The correspondence −T defined by
(−T)(α)=−[T (α)] ∀ α ∈ U
is a linear transformation from U into V.
Since T (α) ∈ V ⇒ −T (α) ∈ V, therefore
−T is a function from U into V.
Let α, β ∈ U and a, b ∈ F. Then aα+bβ ∈ U and we have
(−T)(aα+bβ)=−[T (aα+bβ)] [by def. of −T]
=−[aT (α)+bT (β)] [∵ T is a linear transformation]
=a [−T (α)]+b [−T (β)]=a [(−T) α]+b [(−T) β].
∴ −T is a linear transformation from U into V. The linear
transformation −T is called the negative of the linear transformation
T.
§ 3. Properties of linear transformations.
Theorem. Let T be a linear transformation from a vector space
U(F) into a vector space V(F). Then
(i) T (0)=0, where 0 on the left hand side is the zero vector of U
and 0 on the right hand side is the zero vector of V.
(Meerut 1974, 76, 79)
(ii) T (−α)=−T (α) ∀ α ∈ U. (Meerut 1979)
(iii) T (α−β)=T (α)−T (β) ∀ α, β ∈ U.
(iv) T (a₁α₁+a₂α₂+...+aₙαₙ)
=a₁T (α₁)+a₂T (α₂)+...+aₙT (αₙ),
where α₁, α₂, ..., αₙ ∈ U and a₁, a₂, ..., aₙ ∈ F.
Proof. (i) Let α ∈ U. Then T (α) ∈ V. We have
T (α)+0=T (α) [∵ 0 is the zero vector of V and T (α) ∈ V]
=T (α+0) [∵ 0 is the zero vector of U]
=T (α)+T (0). [∵ T is a linear transformation]
Now in the vector space V, we have
T (α)+0=T (α)+T (0)
⇒ 0=T (0), by the left cancellation law for addition in V.
Note. When we write T (0)=0, there should be no confusion
about the vector 0. Here T is a function from U into V. Therefore
if 0 ∈ U, then its image under T, i.e., T (0) ∈ V. Thus in T (0)=0,
the zero on the right hand side is the zero vector of V.
(ii) We have T [α+(−α)]=T (α)+T (−α)
[∵ T is a linear transformation]
But T [α+(−α)]=T (0)=0 ∈ V. [by (i)]
Thus in V, we have
T (α)+T (−α)=0
⇒ T (−α)=−T (α).
(iii) T (α−β)=T [α+(−β)]
=T (α)+T (−β) [∵ T is linear]
=T (α)+[−T (β)] [by (ii)]
=T (α)−T (β).
(iv) We shall prove the result by induction on n, the number
of vectors in the linear combination. Suppose the result is true for
n−1 vectors, i.e.,
T (a₁α₁+...+aₙ₋₁αₙ₋₁)=a₁T (α₁)+a₂T (α₂)+...+aₙ₋₁T (αₙ₋₁). ...(1)
Then T (a₁α₁+a₂α₂+...+aₙαₙ)
=T [(a₁α₁+a₂α₂+...+aₙ₋₁αₙ₋₁)+aₙαₙ]
=T (a₁α₁+a₂α₂+...+aₙ₋₁αₙ₋₁)+aₙT (αₙ)
=[a₁T (α₁)+a₂T (α₂)+...+aₙ₋₁T (αₙ₋₁)]+aₙT (αₙ) [by (1)]
=a₁T (α₁)+a₂T (α₂)+...+aₙ₋₁T (αₙ₋₁)+aₙT (αₙ).
Now the proof is complete by induction since the result is true
when the number of vectors in the linear combination is 1.
Note. On account of this property sometimes we say that a
linear transformation preserves linear combinations.
§ 4. Range and null space of a linear transformation.
Range of a linear transformation. Definition. Let U(F) and
V(F) be two vector spaces and let T be a linear transformation from
U into V. Then the range of T, written as R (T), is the set of all
vectors β in V such that β=T (α) for some α in U.
(Marathwada 1971)
Thus the range of T is the image set of U under T, i.e.,
Range (T)={T (α) ∈ V : α ∈ U}.
Theorem 1. If U(F) and V(F) are two vector spaces and T is
a linear transformation from U into V, then the range of T is a subspace
of V. (Meerut 1980)
Proof. Obviously R (T) is a non-empty subset of V.
Let β₁, β₂ ∈ R (T). Then there exist vectors α₁, α₂ in U such
that T (α₁)=β₁, T (α₂)=β₂.
Let a, b be any elements of the field F. We have
aβ₁+bβ₂=aT (α₁)+bT (α₂)
=T (aα₁+bα₂). [∵ T is a linear transformation]
Now U is a vector space. Therefore α₁, α₂ ∈ U and
a, b ∈ F ⇒ aα₁+bα₂ ∈ U.
Consequently T (aα₁+bα₂)=aβ₁+bβ₂ ∈ R (T).
Thus a, b ∈ F and β₁, β₂ ∈ R (T) ⇒ aβ₁+bβ₂ ∈ R (T).
Therefore R (T) is a subspace of V.
Null space of a linear transformation. Definition.
Let U(F) and V(F) be two vector spaces and let T be a linear
transformation from U into V. Then the null space of T, written as
N (T), is the set of all vectors α in U such that T (α)=0 (zero vector
of V). Thus
N (T)={α ∈ U : T (α)=0 ∈ V}. (Marathwada 1971)
If we regard the linear transformation T from U into V as a
vector space homomorphism of U into V, then the null space of T
is also called the kernel of T. (Nagarjuna 1991; Allahabad 75)
Theorem 2. If U(F) and V(F) are two vector spaces and T is a
linear transformation from U into V, then the kernel of T or the null
space of T is a subspace of U.
(Meerut 1980, 90; Allahabad 75; Nagarjuna 91; Andhra 92)
Proof. Let N (T)={α ∈ U : T (α)=0 ∈ V}.
Since T (0)=0 ∈ V, therefore at least 0 ∈ N (T).
Thus N (T) is a non-empty subset of U.
Let α₁, α₂ ∈ N (T). Then T (α₁)=0 and T (α₂)=0.
Let a, b ∈ F. Then aα₁+bα₂ ∈ U and
T (aα₁+bα₂)=aT (α₁)+bT (α₂)
[∵ T is a linear transformation]
=a0+b0=0+0=0 ∈ V.
∴ aα₁+bα₂ ∈ N (T).
Thus a, b ∈ F and α₁, α₂ ∈ N (T) ⇒ aα₁+bα₂ ∈ N (T).
Therefore N (T) is a subspace of U.
§ 5. Rank and nullity of a linear transformation.
Theorem 1. Let T be a linear transformation from a vector
space U{F) into a vector space V(F). If U isfinite dimensional^ then
the range of T is a finite dimensional subspace of V.
Proof. Since U is finite dimensionaU therefore there exists a
finite subset of U, say {ai, a2,...,afl} which spans U.
Let e range of T. Then there exists a in C/ such that
r(a)=i3.
Now a e U => 3 fli, a„^ F such that
a=flia 1+a2«2+.♦. +
=> T(a)=r(aia,-|-fl2«2+...+«/.««)
=> p=ai r(ai)+fl2 T(oi2)-\-...+a„Tic(.„^. ...(1)
Now the vectors r(ai), r(a2)„..., T{ct„) are in the range of T. If
P is any vector in the range of T, then from (1), we see that /3 can
be expressed as a linear combination of ^(ai), 7’(a2),...,7’(a„)
Therefore range of T is spanned by the vectors
n«i), r(a2),...,r(a„).
Hence range of T is finite dimensional.
Now we are in a position to define rank and nullity of a linear
transformation.
Rank and nullity of a linear transformation. Definition.
(S.V. 1992; Meerut 75; Kanpur 81; Allahabad 79)
Let T be a linear transformationfrom a vector space U{F) into
a vector space V(F) with U as finite dimensional. The rank of T
denoted by p{T) is the dimension of the range of TJ.e.,
p{T)=dim R(T).
The nullity of T denoted by v(T) is the dimension of the null
space of T i.e. viT)=dim N{T).
Theorem 2. Let U and V be vector spaces over the field F
and let T be a linear transformationfrom U into V. Suppose that U
isfinite dimensional. Then
rank {T)^-nullity {T)=dim U.
(Meerut 1983P, 87, 93; Andhra 92; Tirupati 90; I.A.S. 85;
Madras 83; Madurai 85; Nagarjuna 90; Kanpur 81; Allahabad 79)
Proof Let N be the null space of T. Tnen iV is a subspace
of U. Since U is finite dimensional, therefore N is finite dimen
sional. Let dim iV= nullity (T)=A:. and let {aj, a2,..., a*} be a
basis for N. A .●'*
Linear Transformations 113

Since {ai, oc2,...»«4 is a linearly independent subset of


therefore we can extend it to form a basis of U. Let dim U^n
and let {ai, c(A+i^...,an} be a basis for U.
The vectors T(ai), r(a2) T(a&), r(a&+i) r(a„) are in range
of r. We claim that {r(a*+0, r(aA+2),...,^(««)} is a basis for the
range pf T.
(i) First we shall prove that ihe vectors
rCait+i), r(a/k+2),...,7’(a„) span the range of T.
Let p € range of T. Then there exists a € £/ such that

Now a e £/ => 3fli, aif^^On S Fsuch that


a=fliai+J2«2+...+
=> T(a)=r(fliai 4- fl2«2+...+On<t„)
=> p=ai 7’(«*)+fl*+i 7’{aA+i)+***+Ofl 7’(a«)
=> P=Ok+i T(«A+i)+fl*+2 2Ya*+:v)-|-...-{-fl„ T{vn)
[V «i, a2,...,aife G N => T(ai)=0,...,r(a*)«0]
the vectors ^(aA+i r(a„) span the range of T.
(ii) Now we shall show that the vectors
T(αk+1), T(αk+2), ..., T(αn)
are linearly independent.
Let ck+1, ..., cn ∈ F be such that
ck+1 T(αk+1) + ... + cn T(αn) = 0
⇒ T(ck+1 αk+1 + ... + cn αn) = 0
⇒ ck+1 αk+1 + ... + cn αn ∈ null space of T, i.e., N
⇒ ck+1 αk+1 + ... + cn αn = b1α1 + b2α2 + ... + bkαk
for some b1, b2, ..., bk ∈ F
[∵ each vector in N can be expressed as a linear combination
of the vectors α1, ..., αk forming a basis of N]
⇒ b1α1 + ... + bkαk − ck+1 αk+1 − ... − cn αn = 0
⇒ b1 = ... = bk = ck+1 = ... = cn = 0
[∵ α1, ..., αn are linearly independent, being a basis for U]
⇒ the vectors T(αk+1), ..., T(αn) are linearly independent.
∴ the vectors T(αk+1), ..., T(αn) form a basis of the range of T.
∴ rank T = dim of range of T = n − k.
∴ rank (T) + nullity (T) = (n − k) + k = n = dim U.
Note. If in place of the vector space V we take the vector
space U itself, i.e., if T is a linear transformation on an n-dimensional
vector space U, then as a special case of the above theorem,
ρ(T) + ν(T) = n.
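The rank-nullity relation can be checked numerically by representing T by a matrix. The sketch below assumes NumPy; the matrix A is an arbitrary illustrative choice (one row is the sum of the other two), not an example from the text:

```python
import numpy as np

# A hypothetical T : R^4 -> R^3, represented by its matrix.
A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 1],
              [1, 3, 1, 2]])   # row 3 = row 1 + row 2, so rank < 3

rank = np.linalg.matrix_rank(A)   # dim of range of T
nullity = A.shape[1] - rank       # dim of null space of T
print(rank, nullity)              # rank + nullity = dim U = 4
```

Here rank(T) = 2 and nullity(T) = 2, and their sum is 4, the dimension of the domain, as the theorem asserts.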

Solved Examples
Ex. 1. Show that the mapping T : V3(R) → V2(R) defined as
T(a1, a2, a3) = (3a1 − 2a2 + a3, a1 − 3a2 − 2a3)
is a linear transformation from V3(R) into V2(R).
Solution. Let α = (a1, a2, a3), β = (b1, b2, b3) ∈ V3(R).
Then T(α) = T(a1, a2, a3) = (3a1 − 2a2 + a3, a1 − 3a2 − 2a3)
and T(β) = (3b1 − 2b2 + b3, b1 − 3b2 − 2b3).
Also let a, b ∈ R. Then aα + bβ ∈ V3(R). We have
T(aα + bβ) = T[a(a1, a2, a3) + b(b1, b2, b3)]
= T(aa1 + bb1, aa2 + bb2, aa3 + bb3)
= (3(aa1 + bb1) − 2(aa2 + bb2) + aa3 + bb3,
aa1 + bb1 − 3(aa2 + bb2) − 2(aa3 + bb3))
= (a(3a1 − 2a2 + a3) + b(3b1 − 2b2 + b3),
a(a1 − 3a2 − 2a3) + b(b1 − 3b2 − 2b3))
= a(3a1 − 2a2 + a3, a1 − 3a2 − 2a3) + b(3b1 − 2b2 + b3, b1 − 3b2 − 2b3)
= aT(α) + bT(β).
Hence T is a linear transformation from V3(R) into V2(R).
Ex. 2. Show that the mapping T : V2(R) → V3(R) defined as
T(a, b) = (a + b, a − b, b)
is a linear transformation from V2(R) into V3(R). Find the range,
rank, null-space and nullity of T. (Nagarjuna 1990; Tirupati 90)
Solution. Let α = (a1, b1), β = (a2, b2) ∈ V2(R).
Then T(α) = T(a1, b1) = (a1 + b1, a1 − b1, b1)
and T(β) = (a2 + b2, a2 − b2, b2).
Also let a, b ∈ R. Then aα + bβ ∈ V2(R) and
T(aα + bβ) = T[a(a1, b1) + b(a2, b2)]
= T(aa1 + ba2, ab1 + bb2)
= (aa1 + ba2 + ab1 + bb2, aa1 + ba2 − ab1 − bb2, ab1 + bb2)
= (a[a1 + b1] + b[a2 + b2], a[a1 − b1] + b[a2 − b2], ab1 + bb2)
= a(a1 + b1, a1 − b1, b1) + b(a2 + b2, a2 − b2, b2)
= aT(α) + bT(β).
∴ T is a linear transformation from V2(R) into V3(R).
Now {(1, 0), (0, 1)} is a basis for V2(R).
We have T(1, 0) = (1 + 0, 1 − 0, 0) = (1, 1, 0)
and T(0, 1) = (0 + 1, 0 − 1, 1) = (1, −1, 1).
∴ the vectors T(1, 0), T(0, 1) span the range of T.
Thus the range of T is the subspace of V3(R) spanned by the
vectors (1, 1, 0), (1, −1, 1).
Now the vectors (1, 1, 0), (1, −1, 1) ∈ V3(R) are linearly
independent, because if x, y ∈ R, then
x(1, 1, 0) + y(1, −1, 1) = (0, 0, 0)
⇒ (x + y, x − y, y) = (0, 0, 0)
⇒ x + y = 0, x − y = 0, y = 0 ⇒ x = 0, y = 0.
∴ the vectors (1, 1, 0), (1, −1, 1) form a basis for the range of
T. Hence rank T = dim of range of T = 2.
∴ nullity of T = dim V2(R) − rank T = 2 − 2 = 0.
∴ the null space of T must be the zero subspace of V2(R).
Directly, (a, b) ∈ null space of T
⇒ T(a, b) = (0, 0, 0)
⇒ (a + b, a − b, b) = (0, 0, 0)
⇒ a + b = 0, a − b = 0, b = 0
⇒ a = 0, b = 0.
∴ (0, 0) is the only element of V2(R) which belongs to the null
space of T.
∴ the null space of T is the zero subspace of V2(R).
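The hand computation in Ex. 2 can be replayed mechanically (a NumPy sketch, an assumption of this note; the text itself works by hand). The columns of the matrix are the images T(1, 0) and T(0, 1):

```python
import numpy as np

# Matrix of T(a, b) = (a+b, a-b, b) with respect to the standard bases:
# columns are T(1, 0) = (1, 1, 0) and T(0, 1) = (1, -1, 1).
M = np.array([[1,  1],
              [1, -1],
              [0,  1]])

rank = np.linalg.matrix_rank(M)
nullity = M.shape[1] - rank
print(rank, nullity)   # rank 2, nullity 0, as found above
```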
Example 3. Let F be the field of complex numbers and let T be
the function from F³ into F³ defined by
T(x1, x2, x3) = (x1 − x2 + 2x3, 2x1 + x2 − x3, −x1 − 2x2).
Verify that T is a linear transformation. Describe the null space of
T. (Meerut 85)
Solution. Let α = (x1, x2, x3), β = (y1, y2, y3) ∈ F³. Then
T(α) = (x1 − x2 + 2x3, 2x1 + x2 − x3, −x1 − 2x2) and
T(β) = (y1 − y2 + 2y3, 2y1 + y2 − y3, −y1 − 2y2).
Also let a, b ∈ F. Then aα + bβ ∈ F³ and
aα + bβ = a(x1, x2, x3) + b(y1, y2, y3)
= (ax1 + by1, ax2 + by2, ax3 + by3).
Now, by the definition of T, we have
T(aα + bβ) = ([ax1 + by1] − [ax2 + by2] + 2[ax3 + by3],
2[ax1 + by1] + [ax2 + by2] − [ax3 + by3], −[ax1 + by1] − 2[ax2 + by2])
= (a[x1 − x2 + 2x3] + b[y1 − y2 + 2y3],
a[2x1 + x2 − x3] + b[2y1 + y2 − y3],
a[−x1 − 2x2] + b[−y1 − 2y2])
= a(x1 − x2 + 2x3, 2x1 + x2 − x3, −x1 − 2x2)
+ b(y1 − y2 + 2y3, 2y1 + y2 − y3, −y1 − 2y2)
= aT(α) + bT(β).
∴ T is a linear transformation from F³ into F³.
Now (x1, x2, x3) ∈ null space of T
⇔ T(x1, x2, x3) = (0, 0, 0)
⇔ (x1 − x2 + 2x3, 2x1 + x2 − x3, −x1 − 2x2) = (0, 0, 0)
⇔ x1 − x2 + 2x3 = 0,
   2x1 + x2 − x3 = 0,      ...(1)
   −x1 − 2x2 + 0x3 = 0.
∴ the null space of T is the solution space of the system of
linear homogeneous equations (1). Let A be the coefficient matrix
of the equations (1). Then
A = [  1  −1   2
       2   1  −1
      −1  −2   0 ]
~ [ 1  −1   2
    0   3  −5
    0  −3   2 ], performing the elementary row operations
R2 → R2 − 2R1, R3 → R3 + R1
~ [ 1  −1   2
    0   3  −5
    0   0  −3 ], by R3 → R3 + R2.
This last matrix is in Echelon form. Its rank = the number
of non-zero rows = 3. Therefore rank A = 3 = the number of unknowns
in the equations (1). Hence the equations (1) have no
linearly independent solutions. Therefore x1 = 0, x2 = 0, x3 = 0 is
the only solution of the equations (1). Thus (0, 0, 0) is the only
vector which belongs to the null space of T. Hence the null space
of T is the zero subspace of F³.
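The rank computation for the coefficient matrix above can be confirmed mechanically (a NumPy sketch, not part of the text): full rank of A means the homogeneous system has only the trivial solution, so the null space of T is the zero subspace.

```python
import numpy as np

A = np.array([[ 1, -1,  2],
              [ 2,  1, -1],
              [-1, -2,  0]])

# rank 3 (equivalently det != 0) means Ax = 0 has only x = 0.
rank = np.linalg.matrix_rank(A)
det = np.linalg.det(A)
print(rank, round(det))   # 3, -9
```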
Example 4. Let V be the vector space of all n×n matrices over
the field F, and let B be a fixed n×n matrix. If
T(A) = AB − BA ∀ A ∈ V,
verify that T is a linear transformation from V into V. (Meerut 1982)
Solution. If A ∈ V, then T(A) = AB − BA ∈ V because AB − BA
is also an n×n matrix over the field F. Thus T is a function from
V into V.
Let A1, A2 ∈ V and a, b ∈ F. Then aA1 + bA2 ∈ V and
T(aA1 + bA2) = (aA1 + bA2)B − B(aA1 + bA2)
= aA1B + bA2B − aBA1 − bBA2 = a(A1B − BA1) + b(A2B − BA2)
= aT(A1) + bT(A2).
∴ T is a linear transformation from V into V.
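A quick numerical check of the linearity of the commutator map T(A) = AB − BA (a NumPy illustration with random matrices; this is a sanity check, not the book's argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
B = rng.standard_normal((n, n))          # the fixed matrix B

def T(A):
    """The commutator map T(A) = AB - BA."""
    return A @ B - B @ A

A1 = rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n))
a, b = 2.0, -1.5
lhs = T(a * A1 + b * A2)
rhs = a * T(A1) + b * T(A2)
ok = bool(np.allclose(lhs, rhs))         # T(aA1 + bA2) = aT(A1) + bT(A2)
print(ok)
```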
Example 5. Let V be an n-dimensional vector space over the field
F and let T be a linear transformation from V into V such that the
range and null space of T are identical. Prove that n is even. Give
an example of such a linear transformation.
Solution. Let N be the null space of T. Then N is also the
range of T.
Now ρ(T) + ν(T) = dim V,
i.e., dim of range of T + dim of null space of T = dim V = n,
i.e., 2 dim N = n  [∵ range of T = null space of T = N],
i.e., n is even.
Example of such a transformation.
Let T : V2(R) → V2(R) be defined by
T(a, b) = (b, 0) ∀ a, b ∈ R.
Let α = (a1, b1), β = (a2, b2) ∈ V2(R) and let x, y ∈ R.
Then T(xα + yβ) = T[x(a1, b1) + y(a2, b2)]
= T(xa1 + ya2, xb1 + yb2) = (xb1 + yb2, 0)
= (xb1, 0) + (yb2, 0) = x(b1, 0) + y(b2, 0)
= xT(a1, b1) + yT(a2, b2) = xT(α) + yT(β).
∴ T is a linear transformation from V2(R) into V2(R).
Now {(1, 0), (0, 1)} is a basis of V2(R).
We have T(1, 0) = (0, 0) and T(0, 1) = (1, 0).
Thus the range of T is the subspace of V2(R) spanned by the
vectors (0, 0) and (1, 0). The vector (0, 0) can be omitted from
this spanning set because it is the zero vector. Therefore the range of
T is the subspace of V2(R) spanned by the vector (1, 0). Thus
range of T = {a(1, 0) : a ∈ R} = {(a, 0) : a ∈ R}.
Now let (a, b) ∈ N (the null space of T).
Then (a, b) ∈ N ⇒ T(a, b) = (0, 0) ⇒ (b, 0) = (0, 0) ⇒ b = 0.
∴ null space of T = {(a, 0) : a ∈ R}.
Thus range of T = null space of T.
Also we observe that dim V2(R) = 2, which is even.
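For the map T(a, b) = (b, 0) of Example 5, the representing matrix is nilpotent of index 2, and range = null space shows up as rank = nullity = 1 together with T∘T = 0 (a NumPy sketch):

```python
import numpy as np

# Matrix of T(a, b) = (b, 0): columns are T(1, 0) = (0, 0), T(0, 1) = (1, 0).
M = np.array([[0, 1],
              [0, 0]])

rank = np.linalg.matrix_rank(M)
nullity = M.shape[1] - rank
squared_is_zero = not np.any(M @ M)   # T(T(v)) = 0, i.e. range ⊆ null space
print(rank, nullity, squared_is_zero)
```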
Example 6. Let U(F) and V(F) be two vector spaces and let
T1, T2 be two linear transformations from U into V. Let x, y be two
given elements of F. Then the mapping T defined as
T(α) = xT1(α) + yT2(α) ∀ α ∈ U
is a linear transformation from U into V.
(Marathwada 1971)
Solution. If α ∈ U, then T1(α) and T2(α) ∈ V. Therefore
xT1(α) + yT2(α) ∈ V. Thus T as defined above is a mapping
from U into V. Let α, β ∈ U and a, b ∈ F. Then
T(aα + bβ) = xT1(aα + bβ) + yT2(aα + bβ)  (by def. of T)
= x[aT1(α) + bT1(β)] + y[aT2(α) + bT2(β)]
[∵ T1 and T2 are linear transformations]
= a[xT1(α) + yT2(α)] + b[xT1(β) + yT2(β)]
= aT(α) + bT(β)  (by def. of T).
∴ T is a linear transformation from U into V.

Example 7. Let V be a vector space and T a linear transformation
from V into V. Prove that the following two statements about T
are equivalent:
(i) The intersection of the range of T and the null space of T
is the zero subspace of V, i.e., R(T) ∩ N(T) = {0}.
(ii) T[T(α)] = 0 ⇒ T(α) = 0. (Meerut 1979, 85)
Solution. First we shall show that (i) ⇒ (ii).
We have T[T(α)] = 0 ⇒ T(α) ∈ N(T).
Also T(α) ∈ R(T) ∀ α ∈ V. Thus
T(α) ∈ R(T) ∩ N(T)
⇒ T(α) = 0, because R(T) ∩ N(T) = {0}.
Now we shall show that (ii) ⇒ (i).
Let α ≠ 0 and α ∈ R(T) ∩ N(T).
Then α ∈ R(T) and α ∈ N(T).
Since α ∈ N(T), therefore T(α) = 0.
Also α ∈ R(T) ⇒ ∃ β ∈ V such that T(β) = α.
Now T(β) = α
⇒ T[T(β)] = T(α) = 0.
Thus ∃ β ∈ V such that T[T(β)] = 0 but T(β) = α ≠ 0.
This contradicts the given hypothesis (ii).
Therefore there exists no α ∈ R(T) ∩ N(T) such that
α ≠ 0. Hence R(T) ∩ N(T) = {0}.
Example 8. Consider the basis S = {α1, α2, α3} of R³, where
α1 = (1, 1, 1), α2 = (1, 1, 0), α3 = (1, 0, 0). Express (2, −3, 5) in
terms of the basis α1, α2, α3.
Let T : R³ → R² be defined as
T(α1) = (1, 0), T(α2) = (2, −1), T(α3) = (4, 3).
Find T(2, −3, 5). (I.A.S. 1985)
Solution. Let (2, −3, 5) = aα1 + bα2 + cα3
= a(1, 1, 1) + b(1, 1, 0) + c(1, 0, 0).
Then a + b + c = 2, a + b = −3, a = 5.
Solving these equations, we get a = 5, b = −8, c = 5.
∴ (2, −3, 5) = 5α1 − 8α2 + 5α3.
Now T(2, −3, 5) = T(5α1 − 8α2 + 5α3)
= 5T(α1) − 8T(α2) + 5T(α3)  [∵ T is a linear transformation]
= 5(1, 0) − 8(2, −1) + 5(4, 3)
= (5, 0) − (16, −8) + (20, 15)
= (9, 23).
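The coordinate computation in Example 8 can be verified by solving the linear system for (a, b, c) and then combining the prescribed images (a NumPy sketch):

```python
import numpy as np

# Columns are the basis vectors α1 = (1,1,1), α2 = (1,1,0), α3 = (1,0,0).
P = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
coords = np.linalg.solve(P, np.array([2.0, -3.0, 5.0]))   # (a, b, c)

images = np.array([[1.0, 0.0],
                   [2.0, -1.0],
                   [4.0, 3.0]])          # T(α1), T(α2), T(α3) as rows
result = coords @ images                  # 5·(1,0) - 8·(2,-1) + 5·(4,3)
print(coords, result)                     # coords (5, -8, 5), result (9, 23)
```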
Exercises
1. Show that the mapping T : V3(R) → V2(R) defined as
T(a1, a2, a3) = (a1 − a2, a1 − a3) is a linear transformation.
2. Show that the mapping T : R² → R³ defined as
T(a, b) = (a − b, a + b, a)
is a linear transformation from R² into R³. Find the range,
rank, null-space and nullity of T.
Ans. Null space of T = {0}; nullity of T = 0; rank T = 2. The
set {(1, 1, 1), (−1, 1, 0)} is a basis for R(T).
3. Let F be a subfield of the field of complex numbers and let T be the
function from F³ into F³ defined by
T(x1, x2, x3) = …
Verify that T is a linear transformation. Find also the rank and
the nullity of T.
(Meerut 1983)
4. Which of the following functions T from R² into R² are linear
transformations?
(a) T(a, b) = (1 + a, b);  (b) T(a, b) = (b, a);  (c) T(a, b) = …
Ans. (a) T is not a linear transformation. (b) T is a linear
transformation. (c) T is a linear transformation.
5. Let V be the space of n×1 matrices over a field F and let W be
the space of m×1 matrices over F. Let A be a fixed m×n
matrix over F and let T be the linear transformation from V
into W defined by T(X) = AX.
Prove that T is the zero transformation if and only if A is the
zero matrix.
6. Let T : R³ → R³ be the linear transformation defined by
T(x, y, z) = (x + 2y − z, y + z, x + y − 2z).
Find a basis and the dimension of (i) the range of T, (ii) the
null space of T. (Meerut 1981)
Ans. (i) {(1, 0, 1), (2, 1, 1)} is a basis of R(T) and
dim R(T) = 2; (ii) {(3, −1, 1)} is a basis of N(T) and
dim N(T) = 1.
7. Let T : V4(R) → V3(R) be a linear transformation defined by
T(a, b, c, d) = (a − b + c + d, a + 2c − d, a + b + 3c − 3d).
Then obtain the basis and dimension of the range space of T
and the null space of T. (Meerut 1992)
§ 6. Linear transformations as vectors.
Let L(U, V) be the set of all linear transformations from a
vector space U(F) into a vector space V(F). Sometimes we denote
this set by Hom (U, V). Now we want to impose a vector space
structure on the set L(U, V) over the same field F. For this purpose
we shall have to suitably define addition in L(U, V) and
scalar multiplication in L(U, V) over F.
Theorem 1. Let U and V be vector spaces over the field F.
Let T1 and T2 be linear transformations from U into V. The function
T1 + T2 defined by
(T1 + T2)(α) = T1(α) + T2(α) ∀ α ∈ U
is a linear transformation from U into V. If c is any element of F,
the function cT defined by
(cT)(α) = cT(α) ∀ α ∈ U
is a linear transformation from U into V. The set L(U, V) of all
linear transformations from U into V, together with the addition and
scalar multiplication defined above, is a vector space over the field F.
(Andhra 1992; Meerut 78, 91; Madras 81; Kanpur 69)
Proof. Suppose T1 and T2 are linear transformations from U
into V and we define T1 + T2 as follows:
(T1 + T2)(α) = T1(α) + T2(α) ∀ α ∈ U.  ...(1)
Since T1(α) + T2(α) ∈ V, therefore T1 + T2
is a function from U into V.
Let a, b ∈ F and α, β ∈ U. Then
(T1 + T2)(aα + bβ) = T1(aα + bβ) + T2(aα + bβ)  [by (1)]
= [aT1(α) + bT1(β)] + [aT2(α) + bT2(β)]
[∵ T1 and T2 are linear transformations]
= a[T1(α) + T2(α)] + b[T1(β) + T2(β)]  [∵ V is a vector space]
= a(T1 + T2)(α) + b(T1 + T2)(β).  [by (1)]
∴ T1 + T2 is a linear transformation from U into V. Thus
T1, T2 ∈ L(U, V) ⇒ T1 + T2 ∈ L(U, V).
Therefore L(U, V) is closed with respect to the addition defined in it.
Again let T ∈ L(U, V) and c ∈ F. Let us define cT as
follows:
(cT)(α) = cT(α) ∀ α ∈ U.  ...(2)
Since cT(α) ∈ V, therefore cT is a function from U into V.
Let a, b ∈ F and α, β ∈ U. Then
(cT)(aα + bβ) = cT(aα + bβ)  [by (2)]
= c[aT(α) + bT(β)]  [∵ T is a linear transformation]
= c[aT(α)] + c[bT(β)] = (ca)T(α) + (cb)T(β)
= (ac)T(α) + (bc)T(β) = a[cT(α)] + b[cT(β)]
= a[(cT)(α)] + b[(cT)(β)].
∴ cT is a linear transformation from U into V. Thus
T ∈ L(U, V) and c ∈ F ⇒ cT ∈ L(U, V).
Therefore L(U, V) is closed with respect to the scalar multiplication
defined in it.
Associativity of addition in L(U, V).
Let T1, T2, T3 ∈ L(U, V). If α ∈ U, then
[T1 + (T2 + T3)](α) = T1(α) + (T2 + T3)(α)
[by (1), i.e., by def. of addition in L(U, V)]
= T1(α) + [T2(α) + T3(α)]  [by (1)]
= [T1(α) + T2(α)] + T3(α)  [∵ addition in V is associative]
= (T1 + T2)(α) + T3(α)  [by (1)]
= [(T1 + T2) + T3](α)  [by (1)]
∴ T1 + (T2 + T3) = (T1 + T2) + T3.
[by def. of equality of two functions]
Commutativity of addition in L(U, V). Let T1, T2 ∈ L(U, V).
If α is any element of U, then
(T1 + T2)(α) = T1(α) + T2(α)  [by (1)]
= T2(α) + T1(α)  [∵ addition in V is commutative]
= (T2 + T1)(α)  [by (1)]
∴ T1 + T2 = T2 + T1.  [by def. of equality of two functions]
Existence of additive identity in L(U, V). Let 0̂ be the zero
function from U into V, i.e., 0̂(α) = 0 ∈ V ∀ α ∈ U.
Then 0̂ ∈ L(U, V). If T ∈ L(U, V) and α ∈ U, we have
(0̂ + T)(α) = 0̂(α) + T(α)  [by (1)]
= 0 + T(α)  [by def. of 0̂]
= T(α).  [0 being the additive identity in V]
∴ 0̂ + T = T ∀ T ∈ L(U, V).
∴ 0̂ is the additive identity in L(U, V).
Existence of additive inverse of each element in L(U, V).
Let T ∈ L(U, V). Let us define −T as follows:
(−T)(α) = −T(α) ∀ α ∈ U.
Then −T ∈ L(U, V). If α ∈ U, we have
(−T + T)(α) = (−T)(α) + T(α)
[by def. of addition in L(U, V)]
= −T(α) + T(α)  [by def. of −T]
= 0 = 0̂(α).  [by def. of 0̂]
∴ −T + T = 0̂ for every T ∈ L(U, V).
Thus each element in L(U, V) possesses an additive inverse.
∴ L(U, V) is an abelian group with respect to the addition
defined in it.
Further we make the following observations:
(i) Let c ∈ F and T1, T2 ∈ L(U, V). If α is any element in
U, we have
[c(T1 + T2)](α) = c[(T1 + T2)(α)]
[by (2), i.e., by def. of scalar multiplication in L(U, V)]
= c[T1(α) + T2(α)]  [by (1)]
= cT1(α) + cT2(α)
[∵ c ∈ F and T1(α), T2(α) ∈ V, which is a vector space]
= (cT1)(α) + (cT2)(α)  [by (2)]
= (cT1 + cT2)(α)  [by (1)]
∴ c(T1 + T2) = cT1 + cT2.
(ii) Let a, b ∈ F and T ∈ L(U, V). If α ∈ U, we have
[(a + b)T](α) = (a + b)T(α)  [by (2)]
= aT(α) + bT(α)  [∵ V is a vector space]
= (aT)(α) + (bT)(α)  [by (2)]
= (aT + bT)(α)  [by (1)]
∴ (a + b)T = aT + bT.
(iii) Let a, b ∈ F and T ∈ L(U, V). If α ∈ U, we have
[(ab)T](α) = (ab)T(α)  [by (2)]
= a[bT(α)]  [∵ V is a vector space]
= a[(bT)(α)]  [by (2)]
= [a(bT)](α)  [by (2)]
∴ (ab)T = a(bT).
(iv) Let 1 ∈ F and T ∈ L(U, V). If α ∈ U, we have
(1T)(α) = 1·T(α)  [by (2)]
= T(α).  [∵ V is a vector space]
∴ 1T = T.
Hence L(U, V) is a vector space over the field F.

Note. If in place of the vector space V we take U, then we
observe that the set of all linear operators on U forms a vector
space with respect to addition and scalar multiplication defined as
above.
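When U and V are finite dimensional, T1 + T2 and cT correspond to entrywise addition and scaling of the representing matrices, so the vector-space axioms for L(U, V) mirror those for matrices. A NumPy illustration (the matrices are arbitrary stand-ins for two operators):

```python
import numpy as np

T1 = np.array([[1, 0], [2, 1]])   # stand-ins for two operators on R^2
T2 = np.array([[0, 3], [1, 1]])
c = 4
v = np.array([1, 2])

# (T1 + T2)(v) = T1(v) + T2(v) and (c T1)(v) = c . T1(v)
sum_ok = bool(np.array_equal((T1 + T2) @ v, T1 @ v + T2 @ v))
scale_ok = bool(np.array_equal((c * T1) @ v, c * (T1 @ v)))
print(sum_ok, scale_ok)
```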
Dimension of L(U, V). Now we shall prove that if U(F)
and V(F) are finite dimensional, then the vector space of linear
transformations from U into V is also finite dimensional. For this
purpose we shall require an important result, which we prove in the
following theorem:
Theorem 2. Let U be a finite dimensional vector space over the
field F and let {α1, α2, ..., αn} be an ordered basis for U. Let V
be a vector space over the same field F and let β1, ..., βn be any n
vectors in V. Then there exists a unique linear transformation T
from U into V such that
T(αi) = βi, i = 1, 2, ..., n.
(Meerut 1980, 81, 93; Nagarjuna 91)
Proof. Existence of T.
Let α ∈ U.
Since B = {α1, α2, ..., αn} is a basis for U, therefore there exist
unique scalars x1, x2, ..., xn such that
α = x1α1 + x2α2 + ... + xnαn.
For this vector α, let us define
T(α) = x1β1 + x2β2 + ... + xnβn.
Obviously T(α) as defined above is a unique element of V.
Therefore T is a well-defined rule for associating with each vector
α in U a unique vector T(α) in V. Thus T is a function from U
into V.
The unique representation of αi ∈ U as a linear combination of
the vectors belonging to the basis B is
αi = 0α1 + 0α2 + ... + 1αi + 0αi+1 + ... + 0αn.
Therefore according to our definition of T, we have
T(αi) = 0β1 + ... + 1βi + ... + 0βn, i.e., T(αi) = βi.
Now to show that T is a linear transformation.
Let a, b ∈ F and α, β ∈ U. Let
α = x1α1 + ... + xnαn
and β = y1α1 + ... + ynαn.
Then T(aα + bβ) = T[a(x1α1 + ... + xnαn) + b(y1α1 + ... + ynαn)]
= T[(ax1 + by1)α1 + ... + (axn + byn)αn]
= (ax1 + by1)β1 + ... + (axn + byn)βn  [by def. of T]
= a(x1β1 + ... + xnβn) + b(y1β1 + ... + ynβn)
= aT(α) + bT(β).  [by def. of T]
∴ T is a linear transformation from U into V. Thus there
exists a linear transformation T from U into V such that
T(αi) = βi, i = 1, 2, ..., n.
Uniqueness of T. Let T′ be a linear transformation from U
into V such that T′(αi) = βi, i = 1, 2, ..., n.
For the vector α = x1α1 + ... + xnαn ∈ U, we have
T′(α) = T′(x1α1 + ... + xnαn)
= x1T′(α1) + ... + xnT′(αn)  [∵ T′ is a linear transformation]
= x1β1 + ... + xnβn  [by def. of T′]
= T(α).  [by def. of T]
Thus T′(α) = T(α) ∀ α ∈ U.
∴ T′ = T.
This shows the uniqueness of T.
Note. From this theorem we conclude that if T is a linear
transformation from a finite dimensional vector space U(F) into a
vector space V(F), then T is completely determined once we specify
the images under T of the elements of a basis set of U. If S and T
are two linear transformations from U into V such that
S(αi) = T(αi) ∀ αi belonging to a basis of U, then
S(α) = T(α) ∀ α ∈ U, i.e., S = T.
Thus two linear transformations from U into V are equal if they
agree on a basis of U.
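Theorem 2 is constructive: in coordinates, T(α) = x1β1 + ... + xnβn. The sketch below (NumPy; the basis and the images β1, β2 are hypothetical choices) builds T from prescribed images of a basis:

```python
import numpy as np

basis = np.array([[1.0, 1.0],
                  [1.0, -1.0]])          # rows are α1, α2, a basis of R^2
images = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 1.0]])     # chosen β1, β2 in R^3

def T(alpha):
    """Express alpha in the basis, then map coordinates onto the images."""
    x = np.linalg.solve(basis.T, alpha)  # coordinates x1, x2 of alpha
    return x @ images                    # x1·β1 + x2·β2

out1 = T(basis[0])                       # must be β1
out2 = T(np.array([2.0, 0.0]))           # (2,0) = α1 + α2, so β1 + β2
print(out1, out2)
```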
Theorem 3. Let U be an n-dimensional vector space over the
field F, and let V be an m-dimensional vector space over F. Then the
vector space L(U, V) of all linear transformations from U into V is
finite dimensional and is of dimension mn.
(Meerut 1970, 76, 80, 93P; Madras 81; Nagarjuna 77; Andhra 81)
Proof. Let B = {α1, ..., αn} and B′ = {β1, ..., βm}
be ordered bases for U and V respectively. By theorem 2 there
exists a unique linear transformation T11 from U into V such that
T11(α1) = β1, T11(α2) = 0, ..., T11(αn) = 0, where
β1, 0, ..., 0 are vectors in V.
In fact, for each pair of integers (p, q) with 1 ≤ p ≤ m and
1 ≤ q ≤ n, there exists a unique linear transformation Tpq from U
into V such that
Tpq(αi) = 0 if i ≠ q,
Tpq(αi) = βp if i = q,
i.e., Tpq(αi) = δiq βp,  ...(1)
where δiq ∈ F is the Kronecker delta, i.e., δiq = 1 if i = q
and δiq = 0 if i ≠ q.
Since p can be any of 1, 2, ..., m and q any of 1, 2, ..., n, there
are mn such Tpq's. Let B1 denote the set of these mn transformations.
We shall show that B1 is a basis for L(U, V).
(i) First we shall show that L(U, V) is the linear span of B1.
Let T ∈ L(U, V). Since T(α1) ∈ V and any element in V is
a linear combination of β1, ..., βm, therefore
T(α1) = a11β1 + ... + am1βm
for some a11, ..., am1 ∈ F. In fact, for each i, 1 ≤ i ≤ n,
T(αi) = a1iβ1 + ... + amiβm = Σ_{p=1}^{m} api βp.  ...(2)
Now consider S = Σ_{p=1}^{m} Σ_{q=1}^{n} apq Tpq.
Obviously S is a linear combination of elements of B1, which
is a subset of L(U, V). Since L(U, V) is a vector space, therefore
S ∈ L(U, V), i.e., S is also a linear transformation from U into V.
We shall show that S = T.
Let us compute S(αi), where αi is any vector in the basis B of
U. We have
S(αi) = [Σ_{p=1}^{m} Σ_{q=1}^{n} apq Tpq](αi) = Σ_{p=1}^{m} Σ_{q=1}^{n} apq Tpq(αi)
= Σ_{p=1}^{m} Σ_{q=1}^{n} apq δiq βp  [From (1)]
= Σ_{p=1}^{m} api βp
[on summing with respect to q; remember that δiq = 1 when
q = i and δiq = 0 when q ≠ i]
= T(αi).  [From (2)]
Thus S(αi) = T(αi) ∀ αi ∈ B. Therefore S and T agree on a
basis of U. So we must have S = T. Thus T is a linear combination
of the elements of B1. Therefore L(U, V) is the linear span
of B1.

(ii) Now we shall show that B1 is linearly independent. For
this, let
Σ_{p=1}^{m} Σ_{q=1}^{n} bpq Tpq = 0̂, i.e., the zero vector of L(U, V)
⇒ Σ_{p=1}^{m} Σ_{q=1}^{n} bpq Tpq(αi) = 0̂(αi) ∀ αi ∈ B
⇒ Σ_{p=1}^{m} Σ_{q=1}^{n} bpq Tpq(αi) = 0 ∈ V
[∵ 0̂ is the zero transformation]
⇒ Σ_{p=1}^{m} Σ_{q=1}^{n} bpq δiq βp = 0
⇒ Σ_{p=1}^{m} bpi βp = 0, 1 ≤ i ≤ n
⇒ b1i β1 + ... + bmi βm = 0, 1 ≤ i ≤ n
⇒ b1i = 0, b2i = 0, ..., bmi = 0, 1 ≤ i ≤ n
[∵ β1, β2, ..., βm are linearly independent]
⇒ bpq = 0, where 1 ≤ p ≤ m and 1 ≤ q ≤ n.
∴ B1 is linearly independent.
Therefore B1 is a basis of L(U, V).
∴ dim L(U, V) = number of elements in B1 = mn.
Corollary. The vector space L(U, U) of all linear operators on
an n-dimensional vector space U is of dimension n².
Note. Suppose U(F) is an n-dimensional vector space and
V(F) is an m-dimensional vector space. If U ≠ {0} and V ≠ {0}, then
n ≥ 1 and m ≥ 1. Therefore L(U, V) does not consist of the zero
element 0̂ alone, because the dimension of L(U, V) is mn ≥ 1.
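In matrix terms the transformations Tpq are the "matrix units" Epq (1 in row p, column q, 0 elsewhere), and any m×n matrix is Σ apq Epq, which is exactly the spanning argument of the proof. A NumPy sketch:

```python
import numpy as np

m, n = 2, 3
# The mn matrix units E_pq: a basis for all m x n matrices over R.
units = [np.zeros((m, n)) for _ in range(m * n)]
for k, E in enumerate(units):
    E[k // n, k % n] = 1.0

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
# Decompose A in this basis: the coefficient of E_pq is the entry a_pq.
recon = sum(A[k // n, k % n] * units[k] for k in range(m * n))
ok = bool(np.array_equal(recon, A))
print(len(units), ok)   # mn = 6 basis elements, decomposition recovers A
```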
§ 7. Product of Linear Transformations.
Theorem 1. Let U, V and W be vector spaces over the field F.
Let T be a linear transformation from U into V and S a linear
transformation from V into W. Then the composite function ST (called the
product of the linear transformations S and T) defined by
(ST)(α) = S[T(α)] ∀ α ∈ U
is a linear transformation from U into W.
(Meerut 1983, 89; Nagarjuna 84)

Proof. T is a function from U into V and S is a function from
V into W.
So α ∈ U ⇒ T(α) ∈ V. Further,
T(α) ∈ V ⇒ S[T(α)] ∈ W. Thus (ST)(α) ∈ W.
Therefore ST is a function from U into W. Now to show that
ST is a linear transformation from U into W.
Let a, b ∈ F and α, β ∈ U. Then
(ST)(aα + bβ) = S[T(aα + bβ)]  [by def. of product of
two functions]
= S[aT(α) + bT(β)]  [∵ T is a linear
transformation]
= aS[T(α)] + bS[T(β)]
[∵ S is a linear transformation]
= a(ST)(α) + b(ST)(β).
Hence ST is a linear transformation from U into W.
Note. If T and S are linear operators on a vector space V(F),
then both the products ST and TS exist and each is a linear
operator on V. However, in general TS ≠ ST, as is obvious from
the following examples.
Example 1. Let T1 and T2 be linear operators on R² defined
as follows:
T1(x1, x2) = (x2, x1)
and T2(x1, x2) = (x1, 0).
Show that T1T2 ≠ T2T1.
Solution. We have
(T1T2)(x1, x2) = T1[T2(x1, x2)]
= T1(x1, 0), by def. of T2
= (0, x1), by def. of T1.
Also (T2T1)(x1, x2) = T2[T1(x1, x2)],
by def. of T2T1
= T2(x2, x1), by def. of T1
= (x2, 0), by def. of T2.
Thus (T1T2)(x1, x2) and (T2T1)(x1, x2) differ in general; for
instance, (T1T2)(1, 0) = (0, 1) while (T2T1)(1, 0) = (0, 0).
Hence by the definition of equality of two mappings, we have
T1T2 ≠ T2T1.
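In matrix form T1 is the coordinate swap and T2 the projection onto the first coordinate, and the two products are visibly different matrices (a NumPy sketch):

```python
import numpy as np

T1 = np.array([[0, 1], [1, 0]])   # T1(x1, x2) = (x2, x1)
T2 = np.array([[1, 0], [0, 0]])   # T2(x1, x2) = (x1, 0)

prod12 = T1 @ T2                  # matrix of T1T2: sends (x1, x2) to (0, x1)
prod21 = T2 @ T1                  # matrix of T2T1: sends (x1, x2) to (x2, 0)
commute = bool(np.array_equal(prod12, prod21))
print(prod12, prod21, commute)
```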
Example 2. Let V(R) be the vector space of all polynomial
functions in x with coefficients as elements of the field of real
numbers. Let D and T be two linear operators on V defined by
D(f(x)) = (d/dx) f(x)  ...(1)
and T(f(x)) = ∫₀ˣ f(t) dt  ...(2)
for every f(x) ∈ V.
Then show that DT = I (the identity operator) and TD ≠ I.
Solution. Let f(x) = a0 + a1x + a2x² + ... ∈ V.
We have (DT)(f(x)) = D[T(f(x))]
= D[∫₀ˣ (a0 + a1t + a2t² + ...) dt]
= D[a0x + (a1/2)x² + (a2/3)x³ + ...]
= a0 + a1x + a2x² + ... = f(x) = I(f(x)).
Thus we have
(DT)(f(x)) = I(f(x)) ∀ f(x) ∈ V. Therefore DT = I.
Now (TD)(f(x)) = T[D(f(x))]
= T[(d/dx)(a0 + a1x + a2x² + ...)]
= ∫₀ˣ (a1 + 2a2t + ...) dt
= a1x + a2x² + ...
≠ f(x) unless a0 = 0.
Thus ∃ f(x) ∈ V such that
(TD)(f(x)) ≠ I(f(x)).
∴ TD ≠ I.
Hence TD ≠ DT,
showing that the product of linear operators is not in general
commutative.
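Representing a polynomial a0 + a1x + a2x² + ... by its coefficient list, D and T become simple list operations, and one can check DT = I while TD drops the constant term (a plain-Python sketch of the idea, not the book's proof):

```python
def D(coeffs):
    """Differentiate: a0 + a1 x + a2 x^2 + ...  ->  a1 + 2 a2 x + ..."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def T(coeffs):
    """Integrate from 0: prepend constant term 0, divide a_k by k + 1."""
    return [0] + [c / (k + 1) for k, c in enumerate(coeffs)]

f = [5, 2, 6]            # 5 + 2x + 6x^2
dt = D(T(f))             # (DT)f = f
td = T(D(f))             # (TD)f loses the constant term 5
print(dt, td)
```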
Example 3. Let V(R) be the vector space of all polynomials
in x with coefficients in the field R. Let D and T be two linear
transformations on V defined as
D[f(x)] = (d/dx) f(x) ∀ f(x) ∈ V
and T[f(x)] = x f(x) ∀ f(x) ∈ V.
Then show that DT ≠ TD.
Solution. We have
(DT)[f(x)] = D[T(f(x))]
= D[x f(x)] = (d/dx)[x f(x)]
= f(x) + x (d/dx) f(x).  ...(1)
Also (TD)[f(x)] = T[D(f(x))]
= T[(d/dx) f(x)]
= x (d/dx) f(x).  ...(2)
From (1) and (2), we see that ∃ f(x) ∈ V such that
(DT)(f(x)) ≠ (TD)(f(x))
⇒ DT ≠ TD.
Also we see that
(DT − TD)(f(x)) = (DT)(f(x)) − (TD)(f(x))
= f(x) = I(f(x)).
∴ DT − TD = I.
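With the same coefficient-list encoding, multiplication by x is a shift of coefficients, and the identity DT − TD = I can be checked degree by degree (a plain-Python sketch):

```python
def D(coeffs):
    """Differentiation on coefficient lists."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def X(coeffs):
    """Multiplication by x: shift coefficients up one degree."""
    return [0] + list(coeffs)

def pad(c, n):
    return c + [0] * (n - len(c))

f = [2, 0, 7, 1]                      # 2 + 7x^2 + x^3
dx = D(X(f))                          # (DT)f
xd = X(D(f))                          # (TD)f
n = max(len(dx), len(xd))
diff = [a - b for a, b in zip(pad(dx, n), pad(xd, n))]
print(diff)                           # equals f, i.e. (DT - TD) f = f
```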
Theorem 2. Let V(F) be a vector space and A, B, C be linear
transformations on V. Then
(i) A0̂ = 0̂ = 0̂A;
(ii) AI = A = IA;
(iii) A(BC) = (AB)C;
(iv) A(B + C) = AB + AC;
(v) (A + B)C = AC + BC;
(vi) c(AB) = (cA)B = A(cB), where c is any element of F.
Proof. Just for the sake of convenience we first mention here
our definitions of addition, scalar multiplication and product of
linear transformations:
(A + B)(α) = A(α) + B(α)  ...(1)
(cA)(α) = cA(α)  ...(2)
(AB)(α) = A[B(α)]  ...(3)
∀ α ∈ V and ∀ c ∈ F.
Now we shall prove the above results.
(i) We have, ∀ α ∈ V,
(A0̂)(α) = A[0̂(α)]  [by (3)]
= A(0)  [∵ 0̂ is the zero transformation]
= 0 = 0̂(α).
∴ A0̂ = 0̂.
[by def. of equality of two functions]
Similarly we can show that 0̂A = 0̂.
(ii) We have, ∀ α ∈ V,
(AI)(α) = A[I(α)]
= A(α)  [∵ I is the identity transformation]
∴ AI = A.
Similarly we can show that IA = A.
(iii) We have, ∀ α ∈ V,
[A(BC)](α) = A[(BC)(α)]  [by (3)]
= A[B(C(α))]  [by (3)]
= (AB)[C(α)]  [by (3)]
= [(AB)C](α).  [by (3)]
∴ A(BC) = (AB)C.
(iv) We have, ∀ α ∈ V,
[A(B + C)](α) = A[(B + C)(α)]  [by (3)]
= A[B(α) + C(α)]  [by (1)]
= A[B(α)] + A[C(α)]
[∵ A is a linear transformation and
B(α), C(α) ∈ V]
= (AB)(α) + (AC)(α)  [by (3)]
= (AB + AC)(α)  [by (1)]
∴ A(B + C) = AB + AC.
(v) We have, ∀ α ∈ V,
[(A + B)C](α) = (A + B)[C(α)]  [by (3)]
= A[C(α)] + B[C(α)]
[by (1), since C(α) ∈ V]
= (AC)(α) + (BC)(α)  [by (3)]
= (AC + BC)(α)  [by (1)]
∴ (A + B)C = AC + BC.
(vi) We have, ∀ α ∈ V,
[c(AB)](α) = c[(AB)(α)]  [by (2)]
= c[A(B(α))]  [by (3)]
= (cA)[B(α)]  [by (2), since B(α) ∈ V]
= [(cA)B](α)  [by (3)]
∴ c(AB) = (cA)B.
Again [c(AB)](α) = c[(AB)(α)]  [by (2)]
= c[A(B(α))]  [by (3)]
= A[cB(α)]
[∵ A is a linear transformation and B(α) ∈ V]
= A[(cB)(α)]  [by (2)]
= [A(cB)](α).  [by (3)]
∴ c(AB) = A(cB).
§ 8. Ring of linear operators on a vector space.
Ring. Definition. A non-empty set R with two binary operations,
to be denoted additively and multiplicatively, is called a ring if the
following postulates are satisfied:
R1. R is closed with respect to addition, i.e.,
a + b ∈ R ∀ a, b ∈ R.
R2. (a + b) + c = a + (b + c) ∀ a, b, c ∈ R.
R3. a + b = b + a ∀ a, b ∈ R.
R4. ∃ an element 0 (called the zero element) in R such that
0 + a = a ∀ a ∈ R.
R5. a ∈ R ⇒ ∃ −a ∈ R such that
(−a) + a = 0.
R6. R is closed with respect to multiplication.
R7. (ab)c = a(bc) ∀ a, b, c ∈ R.
R8. Multiplication is distributive with respect to addition, i.e.,
a(b + c) = ab + ac
and (a + b)c = ac + bc ∀ a, b, c ∈ R.
Ring with unity element. Definition.
If in a ring R there exists an element 1 ∈ R such that
1a = a = a1 ∀ a ∈ R,
then R is called a ring with unity element. The element 1 is then called
the unity element of the ring.
Theorem. The set L(V, V) of all linear transformations from
a vector space V(F) into itself is a ring with unity element with
respect to addition and multiplication of linear transformations
defined as below:
(S + T)(α) = S(α) + T(α)
and (ST)(α) = S[T(α)]
∀ S, T ∈ L(V, V) and ∀ α ∈ V.
Proof. The students should themselves write the complete
proof of this theorem. We have proved all the steps here and there.
They should show that all the ring postulates are satisfied
in the set L(V, V). The transformation 0̂ will act as the zero
element and the identity transformation I will act as the unity
element of this ring.
§ 9. Algebra or Linear Algebra. Definition. Let F be a field.
A vector space V over F is called a linear algebra over F if there is
defined an additional operation in V, called multiplication of vectors,
satisfying the following postulates:
1. αβ ∈ V ∀ α, β ∈ V.
2. α(βγ) = (αβ)γ ∀ α, β, γ ∈ V.
3. α(β + γ) = αβ + αγ
and (α + β)γ = αγ + βγ ∀ α, β, γ ∈ V.
4. c(αβ) = (cα)β = α(cβ) ∀ α, β ∈ V and c ∈ F.
If there is an element 1 in V such that
1α = α = α1 ∀ α ∈ V,
then we call V a linear algebra with identity over F. Also 1 is then
called the identity of V. The algebra V is commutative if
αβ = βα ∀ α, β ∈ V.
Theorem. Let V(F) be a vector space. The vector space
L(V, V) over F of all linear transformations from V into V is a
linear algebra with identity with respect to the product of linear
transformations as the multiplication composition in L(V, V).
Proof. The students should write the complete proof here.
All the necessary steps have been proved here and there.
§ 10. Polynomials. Let T be a linear transformation on a
vector space V(F). Then TT is also a linear transformation on V.
We shall write T¹ = T and T² = TT. Since the product of linear
transformations is an associative operation, therefore if m is a
positive integer, we define
Tᵐ = TT...T up to m times.
Obviously Tᵐ is a linear transformation on V.
Also we define T⁰ = I (the identity transformation).
If m and n are non-negative integers, it can easily be seen that
TᵐTⁿ = Tᵐ⁺ⁿ
and (Tᵐ)ⁿ = Tᵐⁿ.
The set L(V, V) of all linear transformations on V is a vector
space over the field F. If a0, a1, ..., an ∈ F, then
p(T) = a0I + a1T + ... + anTⁿ ∈ L(V, V),
i.e., p(T) is also a linear transformation on V because it is a linear
combination over F of elements of L(V, V). We call p(T) a
polynomial in the linear transformation T. The polynomials in a linear
transformation behave like ordinary polynomials.
§ 11. Invertible linear transformations. Definition. Let U and
V be vector spaces over the field F. Let T be a linear transformation
from U into V such that T is one-one and onto. Then T is called
invertible.
If T is a function from U into V, then T is said to be one-one if
α1, α2 ∈ U and α1 ≠ α2 ⇒ T(α1) ≠ T(α2).
In other words, T is said to be one-one if
α1, α2 ∈ U and T(α1) = T(α2) ⇒ α1 = α2.
Further, T is said to be onto if
β ∈ V ⇒ ∃ α ∈ U such that T(α) = β.
If T is one-one and onto, then we define a function from V
into U, called the inverse of T and denoted by T⁻¹, as follows:
Let β be any vector in V. Since T is onto, therefore
β ∈ V ⇒ ∃ α ∈ U such that T(α) = β.
Also α determined in this way is a unique element of U
because T is one-one, and therefore
α0, α ∈ U and α0 ≠ α ⇒ β = T(α) ≠ T(α0).
We define T⁻¹(β) to be α. Thus
T⁻¹(β) = α ⇔ T(α) = β.
The function T⁻¹ is itself one-one and onto. In the following
theorem we shall prove that T⁻¹ is a linear transformation from V
into U.
Theorem 1. Let U and V be vector spaces over the field F and
let T be a linear transformation from U into V. If T is one-one and
onto, then the inverse function T⁻¹ is a linear transformation from V
into U. (Meerut 1983; Andhra 92)
Proof. Let β1, β2 ∈ V and a, b ∈ F.
Since T is one-one and onto, therefore there exist unique
vectors α1, α2 ∈ U such that T(α1) = β1, T(α2) = β2. By definition
of T⁻¹, we have T⁻¹(β1) = α1, T⁻¹(β2) = α2.
Now aα1 + bα2 ∈ U and we have, by linearity of T,
T(aα1 + bα2) = aT(α1) + bT(α2)
= aβ1 + bβ2 ∈ V.
∴ by def. of T⁻¹, we have
T⁻¹(aβ1 + bβ2) = aα1 + bα2
= aT⁻¹(β1) + bT⁻¹(β2).
∴ T⁻¹ is a linear transformation from V into U.


Theorem 2. Let T be an invertible linear transformation on a
vector space V(F). Then
T⁻¹T = I = TT⁻¹.
Proof. Let α be any element of V and let T(α) = β. Then
T⁻¹(β) = α.
We have T(α) = β
⇒ T⁻¹[T(α)] = T⁻¹(β)
⇒ (T⁻¹T)(α) = α
⇒ (T⁻¹T)(α) = I(α)
⇒ T⁻¹T = I.
Let β be any element of V. Since T is onto, therefore
β ∈ V ⇒ ∃ α ∈ V such that T(α) = β. Then T⁻¹(β) = α.
Now T⁻¹(β) = α
⇒ T[T⁻¹(β)] = T(α)
⇒ (TT⁻¹)(β) = β = I(β)
⇒ TT⁻¹ = I.
Theorem 3. If A, B and C are linear transformations on a
vector space V(F) such that
AB = CA = I,
then A is invertible and A⁻¹ = B = C. (Meerut 1968, 69)
Proof. In order to show that A is invertible, we have to show
that A is one-one and onto.
(i) A is one-one.
Let α1, α2 ∈ V. Then
A(α1) = A(α2)
⇒ C[A(α1)] = C[A(α2)]
⇒ (CA)(α1) = (CA)(α2)
⇒ I(α1) = I(α2)
⇒ α1 = α2.
∴ A is one-one.
(ii) A is onto.
Let β be any element of V. Since B is a linear transformation
on V, therefore B(β) ∈ V. Let B(β) = α. Then
B(β) = α
⇒ A[B(β)] = A(α)
⇒ (AB)(β) = A(α)
⇒ I(β) = A(α)  [∵ AB = I]
⇒ β = A(α).
Thus β ∈ V ⇒ ∃ α ∈ V such that A(α) = β.
∴ A is onto.
Since A is one-one and onto, therefore A is invertible, i.e.,
A⁻¹ exists.
(iii) Now we shall show that A⁻¹ = B = C.
We have AB = I
⇒ A⁻¹(AB) = A⁻¹I
⇒ (A⁻¹A)B = A⁻¹
⇒ IB = A⁻¹
⇒ B = A⁻¹.
Also CA = I
⇒ (CA)A⁻¹ = IA⁻¹
⇒ C(AA⁻¹) = A⁻¹
⇒ CI = A⁻¹
⇒ C = A⁻¹.
Hence the theorem.
Theorem 4. The necessary and sufficient condition for a linear
transformation A on a vector space V(F) to be invertible is that there
exists a linear transformation B on V such that
AB = I = BA.
Proof. The condition is necessary. For proof see theorem 2.
The condition is sufficient. For proof see theorem 3, taking B
in place of C.
Also we note that B = A⁻¹ and A = B⁻¹.
Theorem 5. Uniqueness of inverse. Let A be an invertible
linear transformation on a vector space V(F). Then A possesses a
unique inverse.
Proof. Let B and C be two inverses of A. Then
AB = I = BA
and AC = I = CA.
We have C(AB) = CI = C.  ...(1)
Also (CA)B = IB = B.  ...(2)
Since the product of linear transformations is associative, therefore
from (1) and (2), we get
C(AB) = (CA)B
⇒ C = B.
Hence the inverse of A is unique.

Theorem 6. Let V(F) be a vector space and let A, B be linear transformations on V. Then show that
(i) If A and B are invertible, then AB is invertible and (AB)⁻¹=B⁻¹A⁻¹.
(ii) If A is invertible and 0≠a ∈ F, then aA is invertible and (aA)⁻¹=a⁻¹A⁻¹.
(iii) If A is invertible, then A⁻¹ is invertible and (A⁻¹)⁻¹=A.
Proof. (i) We have
(B⁻¹A⁻¹)(AB)=B⁻¹[A⁻¹(AB)]=B⁻¹[(A⁻¹A)B]
=B⁻¹(IB)=B⁻¹B=I.
Also (AB)(B⁻¹A⁻¹)=A[B(B⁻¹A⁻¹)]=A[(BB⁻¹)A⁻¹]
=A(IA⁻¹)=AA⁻¹=I.
Thus (AB)(B⁻¹A⁻¹)=I=(B⁻¹A⁻¹)(AB).
∴ by theorem 3, AB is invertible and (AB)⁻¹=B⁻¹A⁻¹.
(ii) We have
(aA)(a⁻¹A⁻¹)=(aa⁻¹)(AA⁻¹)=1I=I.
Also (a⁻¹A⁻¹)(aA)=(a⁻¹a)(A⁻¹A)=1I=I.
Thus (aA)(a⁻¹A⁻¹)=I=(a⁻¹A⁻¹)(aA).
∴ by theorem 3, aA is invertible and
(aA)⁻¹=a⁻¹A⁻¹.
(iii) Since A is invertible, therefore
AA⁻¹=I=A⁻¹A.
∴ by theorem 3, A⁻¹ is invertible and
A=(A⁻¹)⁻¹.
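The three parts of this theorem are easy to check numerically for linear operators represented by matrices. The following Python sketch (not part of the original text; the matrices A and B are arbitrary invertible examples) verifies each identity with NumPy:

```python
import numpy as np

# Two invertible linear operators on R^3, given by arbitrary example matrices.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [4.0, 0.0, 1.0]])

# (i) (AB)^-1 = B^-1 A^-1  (note the reversed order of the factors)
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# (ii) (aA)^-1 = a^-1 A^-1 for a non-zero scalar a
a = 5.0
assert np.allclose(np.linalg.inv(a * A), (1 / a) * np.linalg.inv(A))

# (iii) (A^-1)^-1 = A
assert np.allclose(np.linalg.inv(np.linalg.inv(A)), A)
```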
Singular and non-singular transformations. Definition. Let T be a linear transformation from a vector space U(F) into a vector space V(F). Then T is said to be non-singular if the null space of T (i.e., ker T) consists of the zero vector alone, i.e., if
α ∈ U and T(α)=0 ⇒ α=0.
If there exists a non-zero vector α ∈ U such that T(α)=0, then T is said to be singular.
Theorem 7. Let T be a linear transformation from a vector space U(F) into a vector space V(F). Then T is non-singular if and only if T is one-one. (Allahabad 1976)
Proof. Given that T is non-singular. Then to prove that T is one-one.
Let α₁, α₂ ∈ U. Then
T(α₁)=T(α₂)
⇒ T(α₁)−T(α₂)=0
⇒ T(α₁−α₂)=0
⇒ α₁−α₂=0   [∵ T is non-singular]
⇒ α₁=α₂.
∴ T is one-one.
Conversely let T be one-one. We know that T(0)=0. Since T is one-one, therefore α ∈ U and T(α)=0=T(0) ⇒ α=0. Thus the null space of T consists of the zero vector alone. Therefore T is non-singular.
Theorem 8. Let T be a linear transformation from U into V. Then T is non-singular if and only if T carries each linearly independent subset of U onto a linearly independent subset of V.
(Meerut 1972, 78, 84P, 90, 91, 93P)
Proof. First suppose that T is non-singular.
Let B={α₁, α₂, ..., αₙ} be a linearly independent subset of U. Then the image of B under T is the subset B′ of V given by
B′={T(α₁), T(α₂), ..., T(αₙ)}.
To prove that B′ is linearly independent, let a₁, a₂, ..., aₙ ∈ F and let
a₁T(α₁)+...+aₙT(αₙ)=0
⇒ T(a₁α₁+...+aₙαₙ)=0   [∵ T is linear]
⇒ a₁α₁+...+aₙαₙ=0   [∵ T is non-singular]
⇒ aᵢ=0, i=1, 2, ..., n.   [∵ α₁, ..., αₙ are linearly independent]
Thus the image of B under T is linearly independent.
Conversely suppose that T carries linearly independent subsets onto linearly independent subsets. Then to prove that T is non-singular.
Let 0≠α ∈ U. Then the set S={α} consisting of the one non-zero vector α is linearly independent. The image of S under T is the set
S′={T(α)}.
It is given that S′ is also linearly independent. Therefore T(α)≠0, because the set consisting of the zero vector alone is linearly dependent. Thus
0≠α ∈ U ⇒ T(α)≠0.
This shows that the null space of T consists of the zero vector alone. Therefore T is non-singular.
Theorem 9. (i) A linear transformation T on a finite dimensional vector space is invertible iff T is non-singular.
(Kanpur 1981, Meerut 93)
(ii) A linear transformation T on a finite dimensional vector space is invertible iff T is onto.
Proof. (i) Let V(F) be a finite dimensional vector space of dimension n. Let T be a linear transformation on V.
If T is invertible, then T must be one-one. Hence T must be non-singular.
Conversely if T is non-singular, then T must be one-one. Now T will be invertible if we prove that T is onto. For proof see example 3 page 85.
(ii) If T is invertible, then T must be onto.
Conversely let T be onto. Then T will be invertible if we prove that T is one-one. For proof see example 5 page 86.
Important Note. In the above theorem, we have proved that if T is a linear transformation on a finite dimensional vector space V, then T is one-one implies T must be onto. Also T is onto implies T must be one-one. However this theorem fails if V is not finite dimensional. This is clear from the following example:
Example. Let V(F) be the vector space of all polynomials in x with coefficients in F. Let D be the differentiation operator on V. The range of D is all of V and so D is onto. However D is not one-one because all constant polynomials are sent into 0 by D.
Now let T be the linear transformation on V defined by
T[f(x)]=x f(x) ∀ f(x) ∈ V.
Then T is one-one. But T is not onto V. If g(x) is a non-zero constant polynomial in V, then g(x) will not be the image of any element in V under the function T.
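This infinite dimensional example can be made concrete by representing a polynomial as its list of coefficients. The following Python sketch (an illustration added here, not from the text; the function names D and M are ad hoc) shows D killing distinct constants while every image of multiplication by x has zero constant term:

```python
# Polynomials over R as coefficient lists [c0, c1, c2, ...] for c0 + c1 x + ...

def D(p):
    """Differentiation operator: drops c0, scales the rest."""
    return [k * p[k] for k in range(1, len(p))]

def M(p):
    """Multiplication by x: f(x) -> x f(x), shifts coefficients up."""
    return [0] + list(p)

# D is not one-one: the distinct constants 3 and 7 have the same image 0.
assert D([3]) == D([7]) == []          # empty list = the zero polynomial

# D is onto: a term-by-term antiderivative is a preimage of any p.
p = [1, 2, 3]                           # 1 + 2x + 3x^2
antider = [0] + [c / (k + 1) for k, c in enumerate(p)]
assert D(antider) == [1.0, 2.0, 3.0]

# M is not onto: every image has constant term 0, so e.g. the
# constant polynomial 1 is never of the form x f(x).
assert M([5, 1])[0] == 0
```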
Theorem 10. Let U and V be finite dimensional vector spaces over the field F such that dim U=dim V. If T is a linear transformation from U into V, the following are equivalent:
(i) T is invertible.
(ii) T is non-singular.
(iii) The range of T is V.
(iv) If {α₁, ..., αₙ} is any basis for U, then {T(α₁), ..., T(αₙ)} is a basis for V.
(v) There is some basis {α₁, ..., αₙ} for U such that {T(α₁), ..., T(αₙ)} is a basis for V.
Proof. (i) ⇒ (ii).
If T is invertible, then T is one-one. Therefore T is non-singular.
(ii) ⇒ (iii).
Let T be non-singular. Let {α₁, ..., αₙ} be a basis for U. Then {α₁, ..., αₙ} is a linearly independent subset of U. Since T is non-singular, therefore {T(α₁), ..., T(αₙ)} is a linearly independent subset of V and it contains n vectors. Since dim V is also n, therefore this set of vectors is a basis for V. Now let β be any vector in V. Then there exist scalars a₁, ..., aₙ ∈ F such that
β=a₁T(α₁)+...+aₙT(αₙ)
=T(a₁α₁+...+aₙαₙ),
which shows that β is in the range of T because
a₁α₁+...+aₙαₙ ∈ U.
Thus every vector in V is in the range of T. Hence the range of T is V.
(iii) ⇒ (iv).
Now suppose that the range of T is V, i.e., T is onto. If {α₁, ..., αₙ} is any basis for U, then the vectors T(α₁), ..., T(αₙ) span the range of T which is equal to V. Thus the vectors T(α₁), ..., T(αₙ), which are n in number, span V whose dimension is also n. Therefore {T(α₁), ..., T(αₙ)} must be a basis set for V.
(iv) ⇒ (v).
Since U is finite dimensional, therefore there exists a basis for U. Let {α₁, ..., αₙ} be a basis for U. Then {T(α₁), ..., T(αₙ)} is a basis for V, as is given in (iv).
(v) ⇒ (i).
Suppose there is some basis {α₁, ..., αₙ} for U such that {T(α₁), ..., T(αₙ)} is a basis for V. The vectors T(α₁), ..., T(αₙ) span the range of T. Also they span V. Therefore the range of T must be all of V, i.e., T is onto.
If α=c₁α₁+...+cₙαₙ is in the null space of T, then
T(c₁α₁+...+cₙαₙ)=0
⇒ c₁T(α₁)+...+cₙT(αₙ)=0
⇒ cᵢ=0, 1 ≤ i ≤ n, because T(α₁), ..., T(αₙ) are linearly independent
⇒ α=0.
∴ T is non-singular and consequently T is one-one. Hence T is invertible.

Solved Examples

Example 1. Describe explicitly the linear transformation T from F² to F² such that T(e₁)=(a, b), T(e₂)=(c, d), where e₁=(1, 0), e₂=(0, 1).
Solution. Let (x₁, x₂) be any member of F². Then we are to find a formula for T(x₁, x₂) under the given conditions that
T(1, 0)=(a, b), T(0, 1)=(c, d).
We know that the set {e₁, e₂} is a basis for the vector space F². Therefore any vector (x₁, x₂) ∈ F² can be expressed as a linear combination of the elements of this basis set.
Obviously (x₁, x₂)=x₁(1, 0)+x₂(0, 1)=x₁e₁+x₂e₂.
∴ T(x₁, x₂)=T(x₁e₁+x₂e₂)
=x₁T(e₁)+x₂T(e₂), by linearity of T
=x₁(a, b)+x₂(c, d)=(x₁a, x₁b)+(x₂c, x₂d)
=(x₁a+x₂c, x₁b+x₂d).
Example 2. Describe explicitly the linear transformation T : R² → R² such that T(2, 3)=(4, 5) and T(1, 0)=(0, 0).
Solution. First we shall show that the set {(2, 3), (1, 0)} is a basis of R². For linear independence of this set let
a(2, 3)+b(1, 0)=(0, 0), where a, b ∈ R.
Then (2a+b, 3a)=(0, 0)
⇒ 2a+b=0, 3a=0
⇒ a=0, b=0.
Hence the set {(2, 3), (1, 0)} is linearly independent.
Now we shall show that the set {(2, 3), (1, 0)} spans R². Let (x₁, x₂) ∈ R² and let (x₁, x₂)=a(2, 3)+b(1, 0)=(2a+b, 3a).
Then 2a+b=x₁, 3a=x₂. Therefore
a=x₂/3, b=(3x₁−2x₂)/3.
∴ (x₁, x₂)=(x₂/3)(2, 3)+((3x₁−2x₂)/3)(1, 0).   ...(1)
From the relation (1) we see that the set {(2, 3), (1, 0)} spans R². Hence this set is a basis for R².
Now let (x₁, x₂) be any member of R². Then we are to find a formula for T(x₁, x₂) under the conditions that T(2, 3)=(4, 5), T(1, 0)=(0, 0). We have
T(x₁, x₂)=T[(x₂/3)(2, 3)+((3x₁−2x₂)/3)(1, 0)], by (1)
=(x₂/3) T(2, 3)+((3x₁−2x₂)/3) T(1, 0), by linearity of T
=(x₂/3)(4, 5)+((3x₁−2x₂)/3)(0, 0)=(4x₂/3, 5x₂/3).
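The same answer can be obtained mechanically: if the columns of P are the given input vectors and the columns of Q their images, the matrix of T is QP⁻¹. The following NumPy sketch (an added illustration, not from the text) recovers the formula of Example 2:

```python
import numpy as np

# Columns of P: the input vectors (2, 3) and (1, 0);
# columns of Q: their prescribed images (4, 5) and (0, 0).
P = np.array([[2.0, 1.0],
              [3.0, 0.0]])
Q = np.array([[4.0, 0.0],
              [5.0, 0.0]])

# T P = Q, so the matrix of T is Q P^{-1}.
T = Q @ np.linalg.inv(P)

# T reproduces the given data ...
assert np.allclose(T @ np.array([2.0, 3.0]), [4.0, 5.0])
assert np.allclose(T @ np.array([1.0, 0.0]), [0.0, 0.0])

# ... and matches the closed form T(x1, x2) = (4 x2/3, 5 x2/3).
x1, x2 = 7.0, 6.0
assert np.allclose(T @ np.array([x1, x2]), [4 * x2 / 3, 5 * x2 / 3])
```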


Example 3. Find a linear transformation T : R² → R² such that T(1, 0)=(1, 1) and T(0, 1)=(−1, 2). Prove that T maps the square with vertices (0, 0), (1, 0), (1, 1) and (0, 1) into a parallelogram.
Solution. Let (x₁, x₂) ∈ R². We can write
(x₁, x₂)=x₁(1, 0)+x₂(0, 1).
Now T(x₁, x₂)=T[x₁(1, 0)+x₂(0, 1)]=x₁T(1, 0)+x₂T(0, 1)
=x₁(1, 1)+x₂(−1, 2)
=(x₁−x₂, x₁+2x₂).   ...(1)
(1) gives the required formula for T. Now let the given vertices of the square be A, B, C, D respectively and let A′, B′, C′, D′ be their T-images. We have
A′=T(A)=T(0, 0)=(0, 0), on putting x₁=0, x₂=0 in (1)
B′=T(B)=T(1, 0)=(1, 1), on putting x₁=1, x₂=0 in (1)
C′=T(C)=T(1, 1)=(0, 3), on putting x₁=1, x₂=1 in (1)
D′=T(D)=T(0, 1)=(−1, 2), on putting x₁=0, x₂=1 in (1).
Now A′B′=√2=C′D′. Also A′D′=√5=B′C′. Hence A′B′C′D′ is a parallelogram.
Example 4. Describe explicitly a linear transformation from V₃(R) into V₃(R) which has as its range the subspace spanned by (1, 0, −1) and (1, 2, 2).
Solution. The set B={(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis for V₃(R).
Also {(1, 0, −1), (1, 2, 2), (0, 0, 0)} is a subset of V₃(R). It should be noted that in this subset the number of vectors has been taken the same as the number of vectors in the basis B.
∴ There exists a unique linear transformation T from V₃(R) into V₃(R) such that
T(1, 0, 0)=(1, 0, −1), T(0, 1, 0)=(1, 2, 2), T(0, 0, 1)=(0, 0, 0).   ...(1)
Now the vectors T(1, 0, 0), T(0, 1, 0), T(0, 0, 1) span the range of T. In other words the vectors
(1, 0, −1), (1, 2, 2), (0, 0, 0)
span the range of T. Thus the range of T is the subspace of V₃(R) spanned by the set {(1, 0, −1), (1, 2, 2)}, because the zero vector (0, 0, 0) can be omitted from the spanning set. Therefore T defined in (1) is the required transformation.
Now let us find an explicit expression for T. Let (a, b, c) be any element of V₃(R). Then we can write
(a, b, c)=a(1, 0, 0)+b(0, 1, 0)+c(0, 0, 1).
∴ T(a, b, c)=aT(1, 0, 0)+bT(0, 1, 0)+cT(0, 0, 1)
=a(1, 0, −1)+b(1, 2, 2)+c(0, 0, 0)   [from (1)]
=(a+b, 2b, 2b−a).
Example 5. Let T be a linear operator on V₃(R) defined by
T(a, b, c)=(3a, a−b, 2a+b+c) ∀ (a, b, c) ∈ V₃(R).
Is T invertible? If so, find a rule for T⁻¹ like the one which defines T. (Meerut 1982, 87, 89; Nagarjuna 91)
Solution. Let us see whether T is one-one or not.
Let α=(a₁, b₁, c₁), β=(a₂, b₂, c₂) ∈ V₃(R).
Then T(α)=T(β)
⇒ T(a₁, b₁, c₁)=T(a₂, b₂, c₂)
⇒ (3a₁, a₁−b₁, 2a₁+b₁+c₁)=(3a₂, a₂−b₂, 2a₂+b₂+c₂)
⇒ 3a₁=3a₂, a₁−b₁=a₂−b₂, 2a₁+b₁+c₁=2a₂+b₂+c₂
⇒ a₁=a₂, b₁=b₂, c₁=c₂
⇒ (a₁, b₁, c₁)=(a₂, b₂, c₂) ⇒ α=β.
∴ T is one-one.
Now T is a linear transformation on a finite dimensional vector space V₃(R) whose dimension is 3. Since T is one-one, therefore T must be onto also, and thus T is invertible.
If T(a, b, c)=(p, q, r), then T⁻¹(p, q, r)=(a, b, c).
Now T(a, b, c)=(p, q, r)
⇒ (3a, a−b, 2a+b+c)=(p, q, r)
⇒ p=3a, q=a−b, r=2a+b+c
⇒ a=p/3, b=p/3−q, c=r−p+q.
∴ T⁻¹(p, q, r)=(p/3, p/3−q, r−p+q) ∀ (p, q, r) ∈ V₃(R)
is the rule which defines T⁻¹.
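The rule just found can be spot-checked by composing the two maps in both orders. The short Python sketch below (added for illustration; T and T_inv are the formulas of this example written as plain functions) confirms T⁻¹∘T and T∘T⁻¹ are both the identity:

```python
# T and its claimed inverse from Example 5, as plain functions on triples.
def T(a, b, c):
    return (3 * a, a - b, 2 * a + b + c)

def T_inv(p, q, r):
    return (p / 3, p / 3 - q, r - p + q)

# T_inv undoes T, and T undoes T_inv, on arbitrary test triples.
for v in [(1, 2, 3), (-4, 0.5, 7), (0, 0, 0)]:
    assert T_inv(*T(*v)) == v
w = (9, -3, 2)
assert T(*T_inv(*w)) == w
```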
Example 6. For the linear operator T of Ex. 5, prove that (T²−I)(T−3I)=0̂.
Solution. We have
(T−3I)(a, b, c)=T(a, b, c)−3I(a, b, c)
=(3a, a−b, 2a+b+c)−3(a, b, c), by def. of T and I
=(0, a−4b, 2a+b−2c).   ...(1)
∴ (T²−I)(T−3I)(a, b, c)=(T²−I)[(T−3I)(a, b, c)]
=(T²−I)(0, a−4b, 2a+b−2c), by (1)
=T²(A, B, C)−I(A, B, C),   ...(2)
where A=0, B=a−4b, C=2a+b−2c.
Now T(A, B, C)=(3A, A−B, 2A+B+C), by def. of T.
∴ T²(A, B, C)=T(3A, A−B, 2A+B+C)
=T(l, m, n), say
=(3l, l−m, 2l+m+n), by def. of T
=(9A, 3A−A+B, 6A+A−B+2A+B+C), on putting the values of l, m, n
=(9A, 2A+B, 9A+C)
=(0, a−4b, 2a+b−2c).   [∵ A=0]
Also I(A, B, C)=(A, B, C)=(0, a−4b, 2a+b−2c).
Hence from (2), we have
(T²−I)(T−3I)(a, b, c)=T²(A, B, C)−I(A, B, C)
=(0, a−4b, 2a+b−2c)−(0, a−4b, 2a+b−2c)
=(0, 0, 0)=0̂(a, b, c) ∀ (a, b, c) ∈ V₃(R).
Therefore, by def. of the zero transformation, we have
(T²−I)(T−3I)=0̂.
Example 7. A linear transformation T is defined on V₂(C) by
T(a, b)=(αa+βb, γa+δb),
where α, β, γ, δ are fixed elements of C. Prove that T is invertible if and only if αδ−βγ≠0. (Nagarjuna 1977)
Solution. The vector space V₂(C) is of dimension 2. Therefore T is a linear transformation on a finite-dimensional vector space. T will be invertible if and only if the null space of T consists of the zero vector alone. The zero vector of the space V₂(C) is the ordered pair (0, 0). Thus T is invertible
iff T(x, y)=(0, 0) ⇒ x=0, y=0,
i.e., iff (αx+βy, γx+δy)=(0, 0) ⇒ x=0, y=0,
i.e., iff αx+βy=0, γx+δy=0 ⇒ x=0, y=0.
Now the necessary and sufficient condition for the equations αx+βy=0, γx+δy=0 to have the only solution x=0, y=0 is that αδ−βγ≠0.
Hence T is invertible iff αδ−βγ≠0.
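The condition αδ−βγ≠0 is exactly the non-vanishing of the determinant of T's matrix. A small Python sketch (an added illustration with example coefficients) checks both an invertible and a singular case, and exhibits a non-zero null-space vector in the singular one:

```python
import numpy as np

def is_invertible(alpha, beta, gamma, delta):
    """T(a, b) = (alpha*a + beta*b, gamma*a + delta*b) is invertible
    iff alpha*delta - beta*gamma != 0."""
    return alpha * delta - beta * gamma != 0

assert is_invertible(1, 2, 3, 4)        # 1*4 - 2*3 = -2, non-zero
assert not is_invertible(1, 2, 2, 4)    # 1*4 - 2*2 = 0: singular

# In the singular case the null space is non-trivial: T(2, -1) = (0, 0).
M = np.array([[1, 2], [2, 4]])
assert np.allclose(M @ np.array([2, -1]), [0, 0])
```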
Example 8. Find two linear operators T and S on V₂(R) such that TS=0̂ but ST≠0̂.
Solution. Consider the linear transformations T and S on V₂(R) defined by
T(a, b)=(a, 0) ∀ (a, b) ∈ V₂(R)
and S(a, b)=(0, a) ∀ (a, b) ∈ V₂(R).
We have (TS)(a, b)=T[S(a, b)]=T(0, a)=(0, 0)
=0̂(a, b) ∀ (a, b) ∈ V₂(R).
∴ TS=0̂.
Again (ST)(a, b)=S[T(a, b)]=S(a, 0)=(0, a)
≠0̂(a, b) ∀ (a, b) ∈ V₂(R) with a≠0.
Thus ST≠0̂.
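In matrix form, composing "S first, then T" is the product of T's matrix by S's matrix, so the example amounts to two matrices whose product vanishes in one order only. A NumPy sketch (added here as an illustration) of the operators above:

```python
import numpy as np

# Matrices of T(a, b) = (a, 0) and S(a, b) = (0, a) in the standard basis.
T = np.array([[1, 0],
              [0, 0]])
S = np.array([[0, 0],
              [1, 0]])

# TS means "apply S first, then T", i.e. the matrix product T @ S.
assert (T @ S == 0).all()       # TS is the zero operator
assert (S @ T != 0).any()       # but ST is not
```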
Example 9. Let V be a vector space over the field F and T a linear operator on V. If T²=0̂, what can you say about the relation of the range of T to the null space of T? Give an example of a linear operator T on V₂(R) such that T²=0̂ but T≠0̂.
Solution. We have T²=0̂
⇒ T²(α)=0 ∀ α ∈ V
⇒ T[T(α)]=0 ∀ α ∈ V
⇒ T(α) ∈ null space of T ∀ α ∈ V.
But T(α) ∈ range of T ∀ α ∈ V.
⇒ range of T ⊆ null space of T.
For the second part of the question, consider the linear transformation T on V₂(R) defined by
T(a, b)=(0, a) ∀ (a, b) ∈ V₂(R).
Then obviously T≠0̂.
We have T²(a, b)=T[T(a, b)]=T(0, a)=(0, 0)
=0̂(a, b) ∀ (a, b) ∈ V₂(R).
∴ T²=0̂.
Example 10. Let T : R³ → R³ be defined as T(a, b, c)=(0, a, b). Show that T≠0̂ and T²≠0̂, but T³=0̂.
Solution. We have T(a, b, c)=(0, a, b). Therefore, for example, T(1, 0, 0)=(0, 1, 0)≠(0, 0, 0). Hence T≠0̂.
Again T²(a, b, c)=T[T(a, b, c)]=T(0, a, b)=(0, 0, a).
Thus T²(1, 0, 0)=(0, 0, 1)≠(0, 0, 0). Hence T²≠0̂.
Finally T³(a, b, c)=T[T²(a, b, c)]=T(0, 0, a)=(0, 0, 0). Thus T³(a, b, c)=(0, 0, 0) ∀ (a, b, c) ∈ R³. In other words T³(a, b, c)=0̂(a, b, c) ∀ (a, b, c) ∈ R³. Hence T³=0̂.
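The operator of Example 10 is a shift, and its matrix in the standard basis makes the nilpotency visible with matrix powers. A NumPy sketch (added as an illustration):

```python
import numpy as np

# Matrix of T(a, b, c) = (0, a, b): each basis vector shifts down one slot.
N = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]])

assert (N != 0).any()                                  # T  != 0
assert (np.linalg.matrix_power(N, 2) != 0).any()       # T^2 != 0
assert (np.linalg.matrix_power(N, 3) == 0).all()       # T^3  = 0
```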
Example 11. Let T be a linear transformation from a vector space U into a vector space V with Ker T≠{0}. Show that there exist vectors α₁ and α₂ in U such that α₁≠α₂ and Tα₁=Tα₂. (Meerut 1977)
Solution. Let α₁ be the zero vector of U. Then Tα₁=0.
Since Ker T≠{0}, therefore there exists a non-zero vector, say α₂, in U such that Tα₂=0.
Now α₁, α₂ are vectors in U such that α₁≠α₂ and Tα₁=Tα₂.
Example 12. If T : U → V is a linear transformation and U is finite dimensional, show that U and the range of T have the same dimension iff T is non-singular. Determine all non-singular linear transformations
T : V₄(R) → V₃(R).
Solution. We know that
dim U=rank (T)+nullity (T)
=dim of range of T+dim of null space of T.
∴ dim U=dim of range of T
iff dim of null space of T is zero,
i.e., iff null space of T consists of the zero vector alone,
i.e., iff T is non-singular.
Let T be a linear transformation from V₄(R) into V₃(R). Then T will be non-singular iff
dim V₄(R)=dim of range of T.
Now dim V₄(R)=4 and dim of range of T ≤ 3, because range of T ⊆ V₃(R).
∴ dim V₄(R) cannot be equal to dim of range of T.
Hence T cannot be non-singular. Thus there can be no non-singular linear transformation from V₄(R) into V₃(R).
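Any linear map V₄(R) → V₃(R) is given by a 3×4 matrix, so the rank-nullity bookkeeping of this example can be demonstrated numerically. The sketch below (an added illustration; the matrix entries are arbitrary) shows that the nullity is forced to be at least 1:

```python
import numpy as np

# An arbitrary linear map T : R^4 -> R^3, as a 3x4 matrix.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 2.0, 3.0]])

rank = np.linalg.matrix_rank(A)
nullity = 4 - rank            # rank-nullity: dim U = rank + nullity

# rank <= 3 < 4 = dim R^4, so nullity >= 1: T cannot be non-singular.
assert rank <= 3
assert nullity >= 1
```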
Example 13. If A and B are linear transformations (on the same vector space), then a necessary and sufficient condition that both A and B be invertible is that both AB and BA be invertible.
Solution. Let A and B be two invertible linear transformations on a vector space V.
We have (AB)(B⁻¹A⁻¹)=I=(B⁻¹A⁻¹)(AB).
∴ AB is invertible.
Also we have (BA)(A⁻¹B⁻¹)=I=(A⁻¹B⁻¹)(BA).
∴ BA is also invertible.
Thus the condition is necessary.
Conversely, let AB and BA be both invertible. Then AB and BA are both one-one and onto.
First we shall show that A is invertible.
A is one-one. Let α₁, α₂ ∈ V. Then
A(α₁)=A(α₂)
⇒ B[A(α₁)]=B[A(α₂)] ⇒ (BA)(α₁)=(BA)(α₂)
⇒ α₁=α₂.   [∵ BA is one-one]
∴ A is one-one.
A is onto.
Let β ∈ V. Since AB is onto, therefore there exists α ∈ V such that
(AB)(α)=β
⇒ A[B(α)]=β.
Thus β ∈ V ⇒ ∃ B(α) ∈ V such that A[B(α)]=β.
∴ A is onto.
∴ A is invertible.
Interchanging the roles played by AB and BA in the above proof, we can prove that B is invertible.
Example 14. Let T be a linear transformation from V₃(R) into V₂(R), and let S be a linear transformation from V₂(R) into V₃(R). Prove that the transformation ST is not invertible.
Solution. As proved in Ex. 12, we can show that T cannot be non-singular, i.e., T cannot be one-one.
Let ST be invertible. Then ST is one-one.
Let α₁, α₂ ∈ V₃(R). Then
T(α₁)=T(α₂)
⇒ S[T(α₁)]=S[T(α₂)] ⇒ (ST)(α₁)=(ST)(α₂)
⇒ α₁=α₂.   [∵ ST is one-one]
∴ T is one-one, which is not possible.
∴ ST cannot be invertible.
Example 15. Let A and B be linear transformations on a finite dimensional vector space V and let AB=I. Then A and B are both invertible and A⁻¹=B.
Give an example to show that this is false when V is not finite dimensional. (Meerut 1983P)
Solution. First we shall show that B is invertible.
B is one-one. Let α₁, α₂ ∈ V.
Then B(α₁)=B(α₂)
⇒ A[B(α₁)]=A[B(α₂)] ⇒ (AB)(α₁)=(AB)(α₂)
⇒ I(α₁)=I(α₂) ⇒ α₁=α₂.
∴ B is one-one.
Now B is a linear transformation on a finite dimensional vector space V. Therefore B is one-one implies that B must be onto. Consequently B is invertible.
Now AB=I
⇒ (AB)B⁻¹=IB⁻¹ ⇒ A(BB⁻¹)=B⁻¹
⇒ AI=B⁻¹ ⇒ A=B⁻¹.
But B⁻¹B=I=BB⁻¹
⇒ AB=I=BA
⇒ A is invertible and A⁻¹=B.
Example. Let V(R) be the vector space of all polynomials in x with coefficients in R. Consider the linear transformations D and T on V defined as follows:
D[f(x)]=(d/dx) f(x) ∀ f(x) ∈ V
and T[f(x)]=∫₀ˣ f(t) dt ∀ f(x) ∈ V.
Here DT=I.
But D is not invertible because D is not one-one.
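This counterexample can be run on coefficient lists: differentiating the integral recovers the polynomial, but integrating the derivative loses the constant term. A Python sketch (added for illustration; the functions D and T mirror the operators above):

```python
# Polynomials as coefficient lists [c0, c1, ...]; the space is infinite
# dimensional, so AB = I does not force invertibility.

def D(p):
    """Differentiation."""
    return [k * p[k] for k in range(1, len(p))] or [0]

def T(p):
    """Integration from 0 to x (constant of integration 0)."""
    return [0] + [c / (k + 1) for k, c in enumerate(p)]

f = [5, 3, 0, 2]                         # 5 + 3x + 2x^3

# DT = I: differentiating the integral gives f back ...
assert D(T(f)) == [5.0, 3.0, 0.0, 2.0]

# ... but TD != I: the constant term 5 of f is lost,
# which is exactly the failure of D to be one-one.
assert T(D(f))[0] == 0
```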

Example 16. If A is a linear transformation on a vector space V such that A²−A+I=0̂, then A is invertible. (Meerut 1969, 88)
Solution. If A²−A+I=0̂, then A²−A=−I.
First we shall prove that A is one-one.
Let α₁, α₂ ∈ V. Then
A(α₁)=A(α₂)   ...(1)
⇒ A[A(α₁)]=A[A(α₂)]
⇒ A²(α₁)=A²(α₂)   ...(2)
⇒ A²(α₁)−A(α₁)=A²(α₂)−A(α₂)   [from (2) and (1)]
⇒ (A²−A)(α₁)=(A²−A)(α₂)
⇒ (−I)(α₁)=(−I)(α₂) ⇒ −I(α₁)=−I(α₂)
⇒ −α₁=−α₂ ⇒ α₁=α₂.
∴ A is one-one.
Now to prove that A is onto.
Let α ∈ V. Then α−A(α) ∈ V.
We have A[α−A(α)]=A(α)−A²(α)
=(A−A²)(α)
=I(α)   [∵ A²−A=−I ⇒ A−A²=I]
=α.
Thus α ∈ V ⇒ ∃ α−A(α) ∈ V such that
A[α−A(α)]=α.
∴ A is onto.
Hence A is invertible.
Example 17. If A and B are linear transformations (on the same vector space) and if AB=I, then A is called a left inverse of B and B is called a right inverse of A. Prove that if A has exactly one right inverse, say B, then A is invertible. (Kanpur 1980)
Solution. Given that A has a unique right inverse B, i.e., AB=I and B is unique.
We have A(BA+B−I)
=A(BA)+AB−AI=(AB)A+AB−A
=IA+I−A=A+I−A=I.
∴ BA+B−I is a right inverse of A. But B is the unique right inverse of A.
∴ BA+B−I=B
⇒ BA=I.
Thus AB=I=BA.
∴ A is invertible.

Example 18. Let V be a finite dimensional vector space and T be a linear operator on V. Suppose that rank (T²)=rank (T). Prove that the range and null space of T are disjoint, i.e., have only the zero vector in common.
Solution. We have
dim V=rank (T)+nullity (T)
and dim V=rank (T²)+nullity (T²).
Since rank (T)=rank (T²), therefore we get
nullity (T)=nullity (T²),
i.e., dim of null space of T=dim of null space of T².
Now T(α)=0
⇒ T[T(α)]=T(0) ⇒ T²(α)=0.
∴ α ∈ null space of T ⇒ α ∈ null space of T².
∴ null space of T ⊆ null space of T².
But null space of T and null space of T² are both subspaces of V and have the same dimension.
∴ null space of T=null space of T².
∴ null space of T² ⊆ null space of T,
i.e., T²(α)=0 ⇒ T(α)=0.
∴ range and null space of T are disjoint. [See Ex. 7 page 118]
Example 19. Let V be a finite dimensional vector space over the field F. Let
{α₁, α₂, ..., αₙ} and {β₁, β₂, ..., βₙ}
be two ordered bases for V. Show that there exists a unique invertible linear transformation T on V such that
T(αᵢ)=βᵢ, 1 ≤ i ≤ n.
Solution. We have proved in one of the previous theorems that there exists a unique linear transformation T on V such that T(αᵢ)=βᵢ, 1 ≤ i ≤ n.
Here we are to show that T is invertible. Since V is finite dimensional, therefore in order to prove that T is invertible, it is sufficient to prove that T is non-singular.
Let α ∈ V and T(α)=0.
Let α=a₁α₁+...+aₙαₙ where a₁, ..., aₙ ∈ F.
We have T(α)=0
⇒ T(a₁α₁+...+aₙαₙ)=0
⇒ a₁T(α₁)+...+aₙT(αₙ)=0
⇒ a₁β₁+...+aₙβₙ=0
⇒ aᵢ=0 for each i, 1 ≤ i ≤ n   [∵ β₁, ..., βₙ are linearly independent]
⇒ α=0.
∴ T is non-singular, because the null space of T consists of the zero vector alone. Hence T is invertible.
Example 20. If {α₁, ..., αₖ} and {β₁, ..., βₖ} are linearly independent sets of vectors in a finite dimensional vector space V, then there exists an invertible linear transformation T on V such that T(αᵢ)=βᵢ, i=1, ..., k.
Solution. Let dim V=n.
Since {α₁, ..., αₖ} is a linearly independent subset of V, therefore it can be extended to form a basis for V. Let
{α₁, ..., αₖ, αₖ₊₁, ..., αₙ}
be a basis for V. Similarly let
{β₁, ..., βₖ, βₖ₊₁, ..., βₙ}
be a basis for V.
Now there exists a unique linear transformation T on V such that T(αⱼ)=βⱼ, j=1, 2, ..., n.
Also T is invertible because T maps a basis of V onto a basis of V and V is finite dimensional.
Thus there exists an invertible linear transformation T on V such that T(αᵢ)=βᵢ, i=1, ..., k.
Example 21. Let T₁ be a linear transformation on a vector space V(F). Prove that the set of all linear transformations S on V for which T₁S=0̂ is a subspace of the vector space of all linear transformations.
Solution. Let L denote the vector space of all linear transformations on the vector space V. Let W={S : S is a linear transformation on V and T₁S=0̂}. To prove that W is a subspace of L.
Let a, b ∈ F and S₁, S₂ ∈ W. Then by def. of W, we have T₁S₁=0̂ and T₁S₂=0̂. We shall show that
T₁(aS₁+bS₂)=0̂.
Let α ∈ V. Then
[T₁(aS₁+bS₂)](α)=T₁[(aS₁+bS₂)(α)],
by def. of product of linear transformations
=T₁[(aS₁)(α)+(bS₂)(α)]=T₁[aS₁(α)+bS₂(α)]
=aT₁[S₁(α)]+bT₁[S₂(α)], since T₁ is a linear transformation
=a(T₁S₁)(α)+b(T₁S₂)(α)=a0̂(α)+b0̂(α)
=a0+b0=0=0̂(α).
Thus [T₁(aS₁+bS₂)](α)=0̂(α) ∀ α ∈ V.
Therefore T₁(aS₁+bS₂)=0̂. Consequently by def. of W, aS₁+bS₂ ∈ W. Thus a, b ∈ F and S₁, S₂ ∈ W ⇒ aS₁+bS₂ ∈ W.
Hence W is a subspace of L.
Exercises
1. Fill up the blanks in the following statements:
(i) A linear operator T on R² defined by T(x, y)=(ax+by, cx+dy) will be invertible iff .... (Meerut 1976)
(ii) If T is a linear operator on R² defined by
T(x, y)=(x−y, y), then T⁻¹(x, y)=....
(Meerut 1976)
Ans. (i) ad−bc≠0; (ii) (x+y, y).
2. State whether the following statements are true or false:
(i) For two linear operators T and U on R²,
TU=0̂ ⇒ UT=0̂. (Meerut 1976)
(ii) If S and T are linear operators on a vector space U, then
(S+T)²=S²+2ST+T². (Meerut 1977)
Ans. (i) False; (ii) False, because it is not necessary that ST=TS.
3. If T is a linear transformation from a vector space V into a vector space W, then obtain conditions for T⁻¹ to be a linear transformation from W to V. (Meerut 1976)
Ans. T should be one-one and onto.
4. Show that the identity operator on a vector space is always invertible.
5. Prove that the set of invertible linear operators on a vector space V with the operation of composition forms a group. Check if this group is commutative. (Meerut 1976, 80)
6. A linear operator T on a vector space V is said to be nilpotent if there exists a positive integer r such that Tʳ=0̂.
If V is the space of all polynomials of degree less than or equal to n over a field F, prove that the differentiation operator on V is nilpotent. (Meerut 1976)
7. Describe explicitly a linear transformation from V₃(R) into V₄(R) which has as its range the subspace spanned by the vectors (1, 2, 0, −4), (2, 0, −1, 3).
Ans. T(a, b, c)=(a+2b, 2a, −b, −4a+3b).

8. Let F be any field and let T be a linear operator on F² defined by T(a, b)=(a+b, a).
Show that T is invertible and find a rule for T⁻¹ like the one which defines T.
Ans. T⁻¹(a, b)=(b, a−b).
9. Let T and U be the linear operators on R² defined by
T(a, b)=(b, a) and U(a, b)=(a, 0).
Give rules like the ones defining T and U for each of the transformations U+T, UT, TU, T², U².
Ans. (U+T)(a, b)=(a+b, a); (UT)(a, b)=(b, 0); (TU)(a, b)=(0, a); T²(a, b)=(a, b); U²(a, b)=(a, 0).
10. Let T be the (unique) linear operator on C³ for which
T(1, 0, 0)=(1, 0, i), T(0, 1, 0)=(0, 1, 1), T(0, 0, 1)=(i, 1, 0).
Show that T is not invertible.
Hint: The vectors e₁=(1, 0, 0), e₂=(0, 1, 0), e₃=(0, 0, 1) form a basis for C³. Show that T(e₁), T(e₂), T(e₃) are linearly dependent vectors. Consequently they do not form a basis for C³. Since T does not map a basis of C³ onto a basis of C³, therefore T is not invertible.
11. Let V and W be vector spaces over the field F and let U be an isomorphism of V onto W. Prove that T → UTU⁻¹ is an isomorphism of L(V, V) onto L(W, W).
12. Give an example of a one-one linear transformation of an infinite dimensional vector space which is not an isomorphism. (Poona 1970)
13. Show that if two linear transformations of a finite dimensional vector space coincide on a basis of that vector space, then they are identical. (Poona 1970)
14. If T is a linear transformation on a finite dimensional vector space V such that range (T) is a proper subset of V, show that there exists a non-zero element α in V with T(α)=0. (Banaras 1972)
15. Let V and W be n-dimensional vector spaces over the field F. Show that for any linear transformation T : V → W, the following statements are all equivalent:
(i) T is invertible.
(ii) T(V)=W.
(iii) There is a basis {α₁, ..., αₙ} for V such that {Tα₁, Tα₂, ..., Tαₙ} is a basis for W. (Meerut 1973)
16. Let T be a linear transformation from a finite dimensional vector space V into a finite dimensional vector space W. Show that
(i) T(0)=0.
(ii) If U is a subspace of V, then the image of U under T is also a subspace of W.
(iii) If dim V=dim W and if T(V)=W, then T is invertible. (Meerut 1974)
17. Show that the operator T on R³ defined by
T(x, y, z)=(x+z, x−z, y)
is invertible and find a similar rule defining T⁻¹. (Meerut 1980)
Ans. T⁻¹(x, y, z)=(½(x+y), z, ½(x−y)).
§ 12. Matrix. Definition. Let F be any field. A set of mn elements of F arranged in the form of a rectangular array having m rows and n columns is called an m×n matrix over the field F.
An m×n matrix is usually written as
    a₁₁ a₁₂ ... a₁ₙ
    a₂₁ a₂₂ ... a₂ₙ
    ...
    aₘ₁ aₘ₂ ... aₘₙ
In a compact form the above matrix is represented by A=[aᵢⱼ]m×n. The element aᵢⱼ is called the (i, j)th element of the matrix A. In this element the first suffix i will always denote the number of the row in which this element occurs.
If in a matrix A the number of rows is equal to the number of columns and is equal to n, then A is called a square matrix of order n, and the elements aᵢⱼ for which i=j constitute its principal diagonal.
Unit matrix. A square matrix each of whose diagonal elements is equal to 1 and each of whose non-diagonal elements is equal to zero is called a unit matrix or an identity matrix. We shall denote it by I. Thus if I is the unit matrix of order n, then I=[δᵢⱼ]n×n where δᵢⱼ is the Kronecker delta.
Diagonal matrix. A square matrix is said to be a diagonal matrix if all the elements lying above and below the principal diagonal are equal to 0. For example,
    0  0    0  0
    0  2+i  0  0
    0  0    0  0
    0  0    0  5
is a diagonal matrix of order 4 over the field of complex numbers.
Null matrix. The m×n matrix whose elements are all zero is called the null matrix (or zero matrix) of the type m×n.
Equality of two matrices. Definition.
Let A=[aᵢⱼ]m×n and B=[bᵢⱼ]m×n. Then
A=B if aᵢⱼ=bᵢⱼ for each pair of subscripts i and j.
Addition of two matrices. Definition.
Let A=[aᵢⱼ]m×n and B=[bᵢⱼ]m×n. Then we define
A+B=[aᵢⱼ+bᵢⱼ]m×n.
Multiplication of a matrix by a scalar. Definition.
Let A=[aᵢⱼ]m×n and a ∈ F, i.e., a be a scalar. Then we define
aA=[a aᵢⱼ]m×n.
Multiplication of two matrices. Definition.
Let A=[aᵢⱼ]m×n and B=[bⱼₖ]n×p, i.e., the number of columns in the matrix A is equal to the number of rows in the matrix B. Then we define
AB=[Σⱼ₌₁ⁿ aᵢⱼbⱼₖ]m×p,
i.e., AB is an m×p matrix whose (i, k)th element is equal to Σⱼ₌₁ⁿ aᵢⱼbⱼₖ.
If A and B are both square matrices of order n, then both the products AB and BA exist, but in general AB≠BA.
Transpose of a matrix. Definition.
Let A=[aᵢⱼ]m×n. The n×m matrix obtained by interchanging the rows and columns of A is called the transpose of A. Thus Aᵀ=[bᵢⱼ]n×m where bᵢⱼ=aⱼᵢ, i.e., the (i, j)th element of Aᵀ is the (j, i)th element of A. If A is an m×n matrix and B is an n×p matrix, it can be shown that (AB)ᵀ=BᵀAᵀ. The transpose of a matrix A is also denoted by Aᵗ or by A′.
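Both facts just stated, that AB≠BA in general and that (AB)ᵀ=BᵀAᵀ, are quick to check numerically. A NumPy sketch (added as an illustration; the two matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Square matrices of the same order multiply both ways,
# but in general the two products differ.
assert not np.array_equal(A @ B, B @ A)

# The transpose reverses a product: (AB)^T = B^T A^T.
assert np.array_equal((A @ B).T, B.T @ A.T)
```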
Determinant of a square matrix. Let Pₙ denote the group of all permutations of degree n on the set {1, 2, ..., n}. If θ ∈ Pₙ, then θ(i) will denote the image of i under θ. The symbol (−1)^θ for θ ∈ Pₙ will mean +1 if θ is an even permutation and −1 if θ is an odd permutation.
Definition. Let A=[aᵢⱼ]n×n. Then the determinant of A, written as det A or |A| or |aᵢⱼ|n×n, is the element
Σ_{θ∈Pₙ} (−1)^θ a₁θ(1) a₂θ(2) ... aₙθ(n) in F.
The number of terms in this summation is n! because there are n! permutations in the set Pₙ.

We shall often use the notation
    | a₁₁ ... a₁ₙ |
    | a₂₁ ... a₂ₙ |
    |     ...     |
    | aₙ₁ ... aₙₙ |
for the determinant of the matrix [aᵢⱼ]n×n.
The following properties of determinants are worth noting:
(i) The determinant of a unit matrix is always equal to 1.
(ii) The determinant of a null matrix is always equal to 0.
(iii) If A=[aᵢⱼ]n×n and B=[bᵢⱼ]n×n, then det (AB)=(det A)(det B).
Cofactors. Definition. Let A=[aᵢⱼ]n×n. We define
Aᵢⱼ = cofactor of aᵢⱼ in A
=(−1)^(i+j) × [determinant of the matrix of order n−1 obtained by deleting the row and column of A passing through aᵢⱼ].
It should be noted that
Σᵢ₌₁ⁿ aᵢₖAᵢⱼ=0 if k≠j,
or =det A if k=j.
Adjoint of a square matrix. Definition. Let A=[aᵢⱼ]n×n.
The n×n matrix which is the transpose of the matrix of cofactors of A is called the adjoint of A and is denoted by adj A.
It should be remembered that
A (adj A)=(adj A) A=(det A) I, where I is the unit matrix of order n.
Inverse of a square matrix. Definition. Let A be a square matrix of order n. If there exists a square matrix B of order n such that
AB=I=BA,
then A is said to be invertible and B is called the inverse of A. Also we write B=A⁻¹.
The following results should be remembered:
(i) The necessary and sufficient condition for a square matrix A to be invertible is that det A≠0.
(ii) If A is invertible, then A⁻¹ is unique and
A⁻¹=(1/det A) (adj A).
(iii) If A and B are invertible square matrices of order n, then AB is also invertible and (AB)⁻¹=B⁻¹A⁻¹.
(iv) If A is invertible, so is A⁻¹, and (A⁻¹)⁻¹=A.
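The adjoint identity and the formula A⁻¹=(1/det A)(adj A) can be verified on a small example. For a 2×2 matrix [[a, b], [c, d]] the adjoint is [[d, −b], [−c, a]]; the NumPy sketch below (an added illustration with an arbitrary matrix) checks both results:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
d = np.linalg.det(A)                     # det A = 2*3 - 1*5 = 1
assert not np.isclose(d, 0)              # so A is invertible

# adj A for a 2x2 matrix [[a, b], [c, d]] is [[d, -b], [-c, a]].
adj = np.array([[3.0, -1.0],
                [-5.0, 2.0]])

# A (adj A) = (adj A) A = (det A) I ...
assert np.allclose(A @ adj, d * np.eye(2))
assert np.allclose(adj @ A, d * np.eye(2))
# ... and A^-1 = (1/det A) adj A.
assert np.allclose(np.linalg.inv(A), adj / d)
```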
Elementary row operations on a matrix.
Detinition. Let A be an m x n matrix over the field F. The
following three operations arc called elementaiy row operations :
(1) multiplication ofany row of.4 by a non-zero element c of F.
(2) addition to the elements of any row of ^4 the coriesponding
elements of any other row of A multiplied by any element a in F.
(3) interchange of two rows of A.
Row equivalent matrices. Definition, If A and B are mxn
matrices over the field F, then H is said to be row equivalent to A
if B can be obtained from /I by a finite sequence of elementary row
operations. It can be easily seen that the relation of being row
equivalent is an equivalence relation in the set of all mxn matrices
over F
Row reduced echelon matrix. Definition.
An m×n matrix R is called a row reduced echelon matrix if :
(1) Every row of R which has all its entries 0 occurs below
every row which has a non-zero entry.
(2) The first non-zero entry in each non-zero row is equal to 1.
(3) If the first non-zero entry in row i appears in column k_i,
then all other entries in column k_i are zero.
(4) If r is the number of non-zero rows, then
k_1 < k_2 < ... < k_r
(i.e., the first non-zero entry in row i is to the left of the first
non-zero entry in row i+1).
Row and column rank of a matrix.
Definition. Let A = [a_ij]m×n be an m×n matrix over the field
F. The row vectors of A are the vectors α1, ..., αm ∈ V_n(F)
defined by αi = (a_i1, a_i2, ..., a_in), 1 ≤ i ≤ m.
The row space of A is the subspace of V_n(F) spanned by these
vectors. The row rank of A is the dimension of the row space of A.
The column vectors of A are the vectors β1, ..., βn ∈ V_m(F)
defined by βj = (a_1j, a_2j, ..., a_mj), 1 ≤ j ≤ n.
The column space of A is the subspace of V_m(F) spanned by
these vectors. The column rank of A is the dimension of the column
space of A.
Linear Transformations 157
The following two results are to be remembered :
(1) Row equivalent matrices have the same row space.
(2) If R is a non-zero row reduced echelon matrix, then the
non-zero row vectors of R are linearly independent and therefore
they form a basis for the row space of R.
In order to find the row rank of a matrix we should reduce
it to a row reduced echelon matrix R by elementary row operations.
The number of non-zero rows in R will give us the row rank of A.
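As a quick computational check, the reduction just described can be sketched in Python; the function name is illustrative, and exact rational arithmetic is used so that pivots are never lost to rounding.

```python
from fractions import Fraction

def row_rank(rows):
    """Row rank via reduction to row reduced echelon form
    using the three elementary row operations."""
    A = [[Fraction(x) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    pivot_row = 0
    for col in range(n):
        # find a row (at or below pivot_row) with a non-zero entry in this column
        pr = next((r for r in range(pivot_row, m) if A[r][col] != 0), None)
        if pr is None:
            continue
        A[pivot_row], A[pr] = A[pr], A[pivot_row]       # interchange two rows
        pv = A[pivot_row][col]
        A[pivot_row] = [x / pv for x in A[pivot_row]]   # make the pivot entry 1
        for r in range(m):
            if r != pivot_row and A[r][col] != 0:
                f = A[r][col]                           # clear the rest of the column
                A[r] = [a - f * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
    return pivot_row    # number of non-zero rows = row rank

# the second row is twice the first, so the rank is 2
assert row_rank([[1, 2, 3], [2, 4, 6], [1, 1, 1]]) == 2
```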
§ 13. Representation of transformations by matrices
Matrix of a linear transformation.
(Meerut 1980, 83; I.A.S. 88)
Let U be an n-dimensional vector space over the field F and
let V be an m-dimensional vector space over F. Let
B = {α1, ..., αn} and B' = {β1, ..., βm}
be ordered bases for U and V respectively. Suppose T is a linear
transformation from U into V. We know that T is completely
determined by its action on the vectors αj belonging to a basis for
U. Each of the n vectors T(αj) is uniquely expressible as a linear
combination of β1, ..., βm because T(αj) ∈ V and these m vectors
form a basis for V. Let, for j = 1, 2, ..., n,
T(αj) = a_1j β1 + a_2j β2 + ... + a_mj βm = Σ_{i=1}^{m} a_ij βi.

The scalars a_ij are the coordinates of T(αj) in the
ordered basis B'. The m×n matrix whose j-th column (j = 1, 2, ..., n)
consists of these coordinates is called the matrix of the linear
transformation T relative to the pair of ordered bases B and B'.
We shall denote it by the symbol [T; B; B'] or simply by [T] if the
bases are understood. Thus
[T] = [T; B; B'] = matrix of T relative to ordered
bases B and B' = [a_ij]m×n,
where T(αj) = Σ_{i=1}^{m} a_ij βi, for each j = 1, 2, ..., n. ...(1)
The coordinates of T(α1) in the ordered basis B' form the first
column of this matrix, the coordinates of T(α2) in the ordered basis B'
form the second column of this matrix, and so on.
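For maps between real coordinate spaces in their standard ordered bases, the coordinates of a vector are just its components, so the columns of [T] can be read off directly from the images of the basis vectors. The helper below is an illustrative sketch, not part of the text:

```python
def matrix_of(T, n, m):
    """Matrix of T : R^n -> R^m relative to the standard ordered bases:
    the j-th column holds the coordinates of T(e_j)."""
    cols = []
    for j in range(n):
        e = [0] * n
        e[j] = 1
        cols.append(T(e))                 # coordinates of T(e_j) in the standard basis
    # transpose the list of columns into an m x n row-major matrix
    return [[cols[j][i] for j in range(n)] for i in range(m)]

# T(x1, x2, x3) = (3x1 + x3, -2x1 + x2), a map from R^3 into R^2
T = lambda v: [3*v[0] + v[2], -2*v[0] + v[1]]
A = matrix_of(T, 3, 2)
assert A == [[3, 0, 1], [-2, 1, 0]]       # one column per basis vector
```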
The m×n matrix [a_ij]m×n completely determines the linear
transformation T through the formulae given in (1). Therefore the
matrix [a_ij]m×n represents the transformation T.
Note. Let T be a linear transformation from an n-dimensional
vector space V(F) into itself. Then in order to represent T by a
matrix, it is most convenient to use the same ordered basis in each
case, i.e., to take B = B'. The representing matrix will then be
called the matrix of T relative to the ordered basis B and will be
denoted by [T; B] or sometimes also by [T]_B.
Thus if B = {α1, ..., αn} is an ordered basis for V, then
[T]_B or [T; B] = matrix of T relative to ordered basis B
= [a_ij]n×n,
where T(αj) = Σ_{i=1}^{n} a_ij αi, for each j = 1, 2, ..., n.

Example 1. Let T be a linear transformation on the vector space
V2(F) defined by T(a, b) = (a, 0).
Write the matrix of T relative to the standard ordered basis of
V2(F).
Solution. Let B = {α1, α2} be the standard ordered basis for
V2(F). Then α1 = (1, 0), α2 = (0, 1).
We have T(α1) = T(1, 0) = (1, 0).
Now let us express T(α1) as a linear combination of vectors in
B. We have T(α1) = (1, 0) = 1 (1, 0) + 0 (0, 1) = 1α1 + 0α2.
Thus 1, 0 are the coordinates of T(α1) with respect to the
ordered basis B. These coordinates will form the first column of
the matrix of T relative to the ordered basis B.
Again T(α2) = T(0, 1) = (0, 0) = 0 (1, 0) + 0 (0, 1).
Thus 0, 0 are the coordinates of T(α2) and will form the second
column of the matrix of T relative to the ordered basis B.
Thus the matrix of T relative to the ordered basis B
= [T]_B or [T; B] = [1  0]
                    [0  0].
Example 2. Let V(R) be the vector space of all polynomials in
x with coefficients in R of the form
f(x) = a0 + a1x + a2x² + a3x³,
i.e., the space of polynomials of degree three or less. The differen-
tiation operator D is a linear transformation on V. The set
B = {α1, ..., α4} where α1 = x⁰, α2 = x¹, α3 = x², α4 = x³
is an ordered basis for V. Write the matrix of D relative to the
ordered basis B.

Solution. We have
D(α1) = D(x⁰) = 0 = 0x⁰ + 0x¹ + 0x² + 0x³
= 0α1 + 0α2 + 0α3 + 0α4,
D(α2) = D(x¹) = x⁰ = 1x⁰ + 0x¹ + 0x² + 0x³
= 1α1 + 0α2 + 0α3 + 0α4,
D(α3) = D(x²) = 2x¹ = 0x⁰ + 2x¹ + 0x² + 0x³
= 0α1 + 2α2 + 0α3 + 0α4,
D(α4) = D(x³) = 3x² = 0x⁰ + 0x¹ + 3x² + 0x³
= 0α1 + 0α2 + 3α3 + 0α4.
∴ the matrix of D relative to the ordered basis B is
         [0  1  0  0]
[D; B] = [0  0  2  0]
         [0  0  0  3]
         [0  0  0  0] (4×4).
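The matrix just found can be checked numerically: a polynomial a0 + a1x + a2x² + a3x³ is represented by its coefficient vector, and multiplying by the matrix of D should produce the coefficients of the derivative. This is an illustrative sketch:

```python
# coefficient vector (a0, a1, a2, a3) represents a0 + a1 x + a2 x^2 + a3 x^3
D = [[0, 1, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3],
     [0, 0, 0, 0]]

def apply(M, v):
    """Multiply matrix M by coordinate column v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# d/dx (1 + 2x + 3x^2 + 4x^3) = 2 + 6x + 12x^2
assert apply(D, [1, 2, 3, 4]) == [2, 6, 12, 0]
```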
Theorem 1. Let U be an n-dimensional vector space over the
field F and let V be an m-dimensional vector space over F. Let B and
B' be ordered bases for U and V respectively. Then corresponding to
every matrix [a_ij]m×n of mn scalars belonging to F there corresponds
a unique linear transformation T from U into V such that
[T; B; B'] = [a_ij]m×n. (I.A.S. 1988)
Proof. Let B = {α1, α2, ..., αn} and B' = {β1, β2, ..., βm}.
Now Σ_{i=1}^{m} a_i1 βi, Σ_{i=1}^{m} a_i2 βi, ..., Σ_{i=1}^{m} a_in βi
are vectors belonging to V because each of them is a linear combi-
nation of the vectors belonging to a basis for V. It should be noted
that the vector Σ_{i=1}^{m} a_ij βi has been obtained with the help of the
j-th column of the matrix [a_ij]m×n.
Since B is a basis for U, therefore by theorem 2 page 123
there exists a unique linear transformation T from U into V such
that
T(αj) = Σ_{i=1}^{m} a_ij βi, where j = 1, 2, ..., n. ...(1)
By our definition of the matrix of a linear transformation, we have
from (1)
[T; B; B'] = [a_ij]m×n.
Note. If we take V = U, then in place of B' we also take B.
In that case the above theorem will run as :

Let V be an n-dimensional vector space over the field F and B
be an ordered basis or co-ordinate system for V. Then corresponding
to every matrix [a_ij]n×n of scalars belonging to F there corresponds
a unique linear transformation T from V into V such that
[T; B] or [T]_B = [a_ij]n×n.
Explicit expression for a linear transformation in terms of its
matrix. Now our aim is to establish a formula which will give us
the image of any vector under a linear transformation T in terms
of its matrix.
Theorem 2. Let T be a linear transformation from an n-dimen-
sional vector space U into an m-dimensional vector space V and let B
and B' be ordered bases for U and V respectively. If A is the matrix
of T relative to B and B', then ∀ α ∈ U, we have
[T(α)]_B' = A [α]_B, where
[α]_B is the co-ordinate matrix of α with respect to the ordered basis B
and [T(α)]_B' is the co-ordinate matrix of T(α) ∈ V with respect to B'.
(Meerut 1980)
Proof. Let B = {α1, α2, ..., αn} and B' = {β1, β2, ..., βm}.
Then A = [T; B; B'] = [a_ij]m×n, where
T(αj) = Σ_{i=1}^{m} a_ij βi, j = 1, 2, ..., n. ...(1)
If α = x1α1 + ... + xnαn is a vector in U, then
T(α) = T(Σ_{j=1}^{n} xj αj)
= Σ_{j=1}^{n} xj T(αj) [∵ T is a linear transformation]
= Σ_{j=1}^{n} xj Σ_{i=1}^{m} a_ij βi [From (1)]
= Σ_{i=1}^{m} (Σ_{j=1}^{n} a_ij xj) βi. ...(2)
The co-ordinate matrix of T(α) with respect to the ordered basis
B' is an m×1 matrix. From (2), we see that the i-th entry of this
column matrix [T(α)]_B' is Σ_{j=1}^{n} a_ij xj,
i.e., the coefficient of βi in the linear combination (2) for T(α).



If X is the co-ordinate matrix [α]_B of α with respect to the ordered
basis B, then X is an n×1 matrix. The product AX will be an m×1
matrix. The i-th entry of this column matrix AX will be
Σ_{j=1}^{n} a_ij xj.
∴ [T(α)]_B' = AX = A [α]_B = [T; B; B'] [α]_B.
Note. If we take U = V, then the above result will be
[T(α)]_B = [T]_B [α]_B.
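The identity [T(α)]_B' = A [α]_B can be confirmed on a concrete operator. In the standard basis of R³ the coordinate matrix of a vector is the vector itself, so applying T directly and multiplying by its matrix must agree; this sketch uses the operator from the solved examples later in the section:

```python
# T(x1, x2, x3) = (3x1 + x3, -2x1 + x2, -x1 + 2x2 + 4x3)
T = lambda x: [3*x[0] + x[2], -2*x[0] + x[1], -x[0] + 2*x[1] + 4*x[2]]
A = [[3, 0, 1], [-2, 1, 0], [-1, 2, 4]]     # matrix of T in the standard basis

def mat_vec(M, v):
    """Multiply matrix M by coordinate column v."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

x = [1, 5, -2]
assert mat_vec(A, x) == T(x)                # [T(a)]_B' = A [a]_B
```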
Matrices of Identity and Zero transformations.
Theorem 3. Let V(F) be an n-dimensional vector space and B
be any ordered basis for V. If I is the identity transformation and
0 is the zero transformation on V, then
(i) [I; B] = I (unit matrix of order n),
and (ii) [0; B] = null matrix of the type n×n.
Proof. Let B = {α1, α2, ..., αn}.
(i) We have I(αj) = αj, j = 1, 2, ..., n
= 0α1 + ... + 1αj + 0α(j+1) + ... + 0αn
= Σ_{i=1}^{n} δ_ij αi, where δ_ij is the Kronecker delta.
∴ By def. of the matrix of a linear transformation, we have
[I; B] = [δ_ij]n×n = I, i.e., the unit matrix of order n.
(ii) We have 0(αj) = 0, j = 1, 2, ..., n
= 0α1 + 0α2 + ... + 0αn
= Σ_{i=1}^{n} a_ij αi, where each a_ij = 0.
∴ By def. of the matrix of a linear transformation, we have
[0; B] = [a_ij]n×n = null matrix of the type n×n.


Theorem 4. Let T and S be linear transformations from an
n-dimensional vector space U into an m-dimensional vector space V
and let B and B' be ordered bases for U and V respectively. Then
(i) [T+S; B; B'] = [T; B; B'] + [S; B; B']
(ii) [cT; B; B'] = c [T; B; B'] where c is any scalar.
Proof. Let B = {α1, α2, ..., αn} and B' = {β1, β2, ..., βm}.
Let [a_ij]m×n be the matrix of T relative to B, B'. Then
T(αj) = Σ_{i=1}^{m} a_ij βi, j = 1, 2, ..., n.
Also let [b_ij]m×n be the matrix of S relative to B, B'. Then
S(αj) = Σ_{i=1}^{m} b_ij βi, j = 1, 2, ..., n.
(i) We have
(T+S)(αj) = T(αj) + S(αj), j = 1, 2, ..., n
= Σ_{i=1}^{m} a_ij βi + Σ_{i=1}^{m} b_ij βi = Σ_{i=1}^{m} (a_ij + b_ij) βi.
∴ matrix of T+S relative to B, B' = [a_ij + b_ij]m×n
= [a_ij]m×n + [b_ij]m×n.
∴ [T+S; B; B'] = [T; B; B'] + [S; B; B'].
(ii) We have (cT)(αj) = c T(αj), j = 1, 2, ..., n
= c Σ_{i=1}^{m} a_ij βi = Σ_{i=1}^{m} (c a_ij) βi.
∴ [cT; B; B'] = matrix of cT relative to B, B'
= [c a_ij]m×n = c [a_ij]m×n = c [T; B; B'].
Theorem 5. Let U, V and W be finite dimensional vector
spaces over the field F; let T be a linear transformation from U into
V and S a linear transformation from V into W. Further let B, B'
and B'' be ordered bases for the spaces U, V and W respectively. If A
is the matrix of T relative to the pair B, B' and D is the matrix of S
relative to the pair B', B'', then the matrix of the composite transfor-
mation ST relative to the pair B, B'' is the product matrix C = DA.
(Meerut 1975)
Proof. Let dim U = n, dim V = m and dim W = p. Further let
B = {α1, α2, ..., αn}, B' = {β1, β2, ..., βm}
and B'' = {γ1, γ2, ..., γp}.
Let A = [a_ij]m×n, D = [d_ki]p×m and C = [c_kj]p×n. Then
T(αj) = Σ_{i=1}^{m} a_ij βi, j = 1, 2, ..., n, ...(1)
S(βi) = Σ_{k=1}^{p} d_ki γk, i = 1, 2, ..., m, ...(2)
and (ST)(αj) = Σ_{k=1}^{p} c_kj γk, j = 1, 2, ..., n. ...(3)
We have (ST)(αj) = S[T(αj)] = S(Σ_{i=1}^{m} a_ij βi), j = 1, 2, ..., n
[From (1)]
= Σ_{i=1}^{m} a_ij S(βi) [∵ S is linear]
= Σ_{i=1}^{m} a_ij Σ_{k=1}^{p} d_ki γk [From (2)]
= Σ_{k=1}^{p} (Σ_{i=1}^{m} d_ki a_ij) γk. ...(4)
Therefore from (3) and (4), we have
c_kj = Σ_{i=1}^{m} d_ki a_ij, j = 1, 2, ..., n; k = 1, 2, ..., p
⇒ [c_kj]p×n = [Σ_{i=1}^{m} d_ki a_ij]p×n
= [d_ki]p×m [a_ij]m×n, by def. of the product of two
matrices.
Thus C = DA.
Note. If U = V = W, then the statement and proof of the
above theorem will be as follows :
Let V be an n-dimensional vector space over the field F; let T
and S be linear transformations of V. Further let B be an ordered
basis for V. If A is the matrix of T relative to B, and D is the
matrix of S relative to B, then the matrix of the composite transfor-
mation ST relative to B is the product matrix
C = DA, i.e. [ST]_B = [S]_B [T]_B. (Banaras 1972)
Proof. Let B = {α1, α2, ..., αn}.
Let A = [a_ij]n×n, D = [d_ki]n×n, C = [c_kj]n×n. Then
T(αj) = Σ_{i=1}^{n} a_ij αi, j = 1, 2, ..., n, ...(1)
S(αi) = Σ_{k=1}^{n} d_ki αk, i = 1, 2, ..., n, ...(2)
and (ST)(αj) = Σ_{k=1}^{n} c_kj αk, j = 1, 2, ..., n. ...(3)
We have (ST)(αj) = S[T(αj)]
= S(Σ_{i=1}^{n} a_ij αi) = Σ_{i=1}^{n} a_ij S(αi) = Σ_{i=1}^{n} a_ij Σ_{k=1}^{n} d_ki αk
= Σ_{k=1}^{n} (Σ_{i=1}^{n} d_ki a_ij) αk. ...(4)
∴ from (3) and (4), we have c_kj = Σ_{i=1}^{n} d_ki a_ij.
∴ [c_kj]n×n = [Σ_{i=1}^{n} d_ki a_ij]n×n = [d_ki]n×n [a_ij]n×n.
∴ C = DA.
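The relation [ST]_B = [S]_B [T]_B can be verified directly for two operators on R² in the standard basis; the concrete operators chosen here are only illustrative:

```python
def mat_mul(D, A):
    """Product of matrices D and A (row-major lists of lists)."""
    return [[sum(D[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(D))]

T = lambda v: [v[0] + v[1], 2 * v[1]]          # an operator on R^2
S = lambda v: [3 * v[0], v[0] - v[1]]          # another operator on R^2
A  = [[1, 1], [0, 2]]                          # [T]_B in the standard basis
Dm = [[3, 0], [1, -1]]                         # [S]_B in the standard basis

ST = lambda v: S(T(v))                         # the composite transformation
C = mat_mul(Dm, A)                             # should be the matrix of ST
for e in ([1, 0], [0, 1]):
    assert [sum(C[i][j] * e[j] for j in range(2)) for i in range(2)] == ST(e)
```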
Theorem 6. Let U be an n-dimensional vector space over the
field F and let V be an m-dimensional vector space over F. For each
pair of ordered bases B, B' for U and V respectively, the function
which assigns to a linear transformation T its matrix relative to B,
B' is an isomorphism between the space L(U, V) and the space of all
m×n matrices over the field F. (Meerut 19?7)
Proof. Let B = {α1, ..., αn} and B' = {β1, ..., βm}.
Let M be the vector space of all m×n matrices over the field
F. Let
ψ : L(U, V) → M such that
ψ(T) = [T; B; B'] ∀ T ∈ L(U, V).
Let T1, T2 ∈ L(U, V); and let
[T1; B; B'] = [a_ij]m×n and [T2; B; B'] = [b_ij]m×n.
Then T1(αj) = Σ_{i=1}^{m} a_ij βi, j = 1, 2, ..., n
and T2(αj) = Σ_{i=1}^{m} b_ij βi, j = 1, 2, ..., n.
To prove that ψ is one-one.
We have ψ(T1) = ψ(T2)
⇒ [T1; B; B'] = [T2; B; B'] [by def. of ψ]
⇒ [a_ij]m×n = [b_ij]m×n
⇒ a_ij = b_ij for i = 1, ..., m and j = 1, ..., n
⇒ Σ_{i=1}^{m} a_ij βi = Σ_{i=1}^{m} b_ij βi for j = 1, ..., n
⇒ T1(αj) = T2(αj) for j = 1, ..., n
⇒ T1 = T2. [∵ T1 and T2 agree on a basis for U]
∴ ψ is one-one.
ψ is onto.
Let [c_ij]m×n ∈ M. Then there exists a linear transformation T
from U into V such that
T(αj) = Σ_{i=1}^{m} c_ij βi, j = 1, ..., n.
We have [T; B; B'] = [c_ij]m×n
⇒ ψ(T) = [c_ij]m×n.
∴ ψ is onto.
ψ is a linear transformation.
If a, b ∈ F, then
ψ(aT1 + bT2) = [aT1 + bT2; B; B'] [by def. of ψ]
= [aT1; B; B'] + [bT2; B; B'] [by theorem 4]
= a [T1; B; B'] + b [T2; B; B'] [by theorem 4]
= a ψ(T1) + b ψ(T2), by def. of ψ.
∴ ψ is a linear transformation.
Hence ψ is an isomorphism from L(U, V) onto M.
Note. It should be noted that in the above theorem if U = V,
then ψ also preserves products and the identity, i.e.,
ψ(T1 T2) = ψ(T1) ψ(T2)
and ψ(I) = I, i.e., the unit matrix.
Theorem 7. Let T be a linear operator on an n-dimensional
vector space V and let B be an ordered basis for V. Prove that T is
invertible iff [T]_B is an invertible matrix. Also if T is invertible, then
[T⁻¹]_B = ([T]_B)⁻¹,
i.e. the matrix of T⁻¹ relative to B is the inverse of the matrix
of T relative to B.
Proof. Let T be invertible. Then T⁻¹ exists and we have
T⁻¹T = I = TT⁻¹
⇒ [T⁻¹T]_B = [I]_B = [TT⁻¹]_B
⇒ [T⁻¹]_B [T]_B = I = [T]_B [T⁻¹]_B
⇒ [T]_B is invertible and ([T]_B)⁻¹ = [T⁻¹]_B.
Conversely, let [T]_B be an invertible matrix. Let [T]_B = A. Let
C = A⁻¹ and let S be the linear transformation of V such that
[S]_B = C.
We have CA = I = AC
⇒ [S]_B [T]_B = [I]_B = [T]_B [S]_B
⇒ [ST]_B = [I]_B = [TS]_B
⇒ ST = I = TS
⇒ T is invertible.
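The criterion "T is invertible iff det [T]_B ≠ 0" can be checked numerically for a 2×2 case, computing the inverse matrix by the adjugate formula recalled at the start of this section. This is an illustrative sketch:

```python
def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate; requires det != 0."""
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det != 0, "T is invertible iff det [T]_B != 0"
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4, -2], [2, 1]]          # [T]_B for T(x, y) = (4x - 2y, 2x + y)
Ainv = inv2(A)                 # represents T^{-1} in the same basis

# check A * Ainv = I, the matrix of the identity transformation
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1.0, 0.0], [0.0, 1.0]]
```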
Change of basis. Suppose V is an n-dimensional vector space
over the field F. Let B and B' be two ordered bases for V. If α is
any vector in V, then we are now interested to know the
relation between its coordinates with respect to B and its coordi-
nates with respect to B'.
Theorem 8. Let V(F) be an n-dimensional vector space and let
B and B' be two ordered bases for V. Then there is a unique, nece-
ssarily invertible, n×n matrix A with entries in F such that
(1) [α]_B = A [α]_B'
(2) [α]_B' = A⁻¹ [α]_B
for every vector α in V. (Meerut 1984P, 93P)
Proof. Let B = {α1, α2, ..., αn} and B' = {β1, β2, ..., βn}.
Then there exists a unique linear transformation T from V
into V such that
T(αj) = βj, j = 1, 2, ..., n. ...(1)
Since T maps a basis B onto a basis B', therefore T is neces-
sarily invertible. The matrix of T relative to B, i.e. [T]_B, will be a
unique n×n matrix with elements in F. Also this matrix will be
invertible because T is invertible.
Let [T]_B = A = [a_ij]n×n. Then
T(αj) = Σ_{i=1}^{n} a_ij αi, j = 1, 2, ..., n. ...(2)
Let x1, x2, ..., xn be the coordinates of α with respect to B and
y1, y2, ..., yn be the coordinates of α with respect to B'. Then
α = y1β1 + y2β2 + ... + ynβn = Σ_{j=1}^{n} yj βj
= Σ_{j=1}^{n} yj T(αj) [From (1)]
= Σ_{j=1}^{n} yj Σ_{i=1}^{n} a_ij αi [From (2)]
= Σ_{i=1}^{n} (Σ_{j=1}^{n} a_ij yj) αi.
Also α = Σ_{i=1}^{n} xi αi.
∴ xi = Σ_{j=1}^{n} a_ij yj, because the expression for α as a linear
combination of the elements of B is unique.
Now [α]_B is a column matrix of the type n×1. Also [α]_B' is a
column matrix of the type n×1. The product matrix A [α]_B' will
also be of the type n×1.
The i-th entry of [α]_B = xi = Σ_{j=1}^{n} a_ij yj
= i-th entry of A [α]_B'.
∴ [α]_B = A [α]_B'
⇒ A⁻¹ [α]_B = A⁻¹ A [α]_B'
⇒ A⁻¹ [α]_B = [α]_B'
⇒ [α]_B' = A⁻¹ [α]_B.
Note. The matrix A = [T]_B is called the transition matrix
from B to B'. It expresses the coordinates of each vector in V rela-
tive to B in terms of its coordinates relative to B'.
How to write the transition matrix from one basis to another.
Let B = {α1, ..., αn} and B' = {β1, ..., βn} be two ordered
bases for the n-dimensional vector space V(F). Let A be the
transition matrix from the basis B to the basis B'. Let T be the
linear transformation from V into V which maps the basis B onto
the basis B'. Then A is the matrix of T relative to B, i.e. A = [T]_B.
So in order to find the matrix A, we should first express each vec-
tor in the basis B' as a linear combination over F of the vectors in
B. Thus we write the relations
β1 = a_11 α1 + a_21 α2 + ... + a_n1 αn
β2 = a_12 α1 + a_22 α2 + ... + a_n2 αn
...
βn = a_1n α1 + a_2n α2 + ... + a_nn αn.
Then the matrix A = [a_ij]n×n, i.e. A is the transpose of the matrix
of coefficients in the above relations. Thus
    [a_11  a_12  ...  a_1n]
A = [a_21  a_22  ...  a_2n]
    [ ...   ...  ...   ...]
    [a_n1  a_n2  ...  a_nn].
Now suppose α is any vector in V. If [α]_B is the coordinate
matrix of α relative to the basis B and [α]_B' its coordinate matrix
relative to the basis B', then
[α]_B = A [α]_B'
and [α]_B' = A⁻¹ [α]_B.
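When B is the standard basis of R², the B-coordinates of each vector of B' are simply its components, so the transition matrix can be written down column by column. The following sketch uses the basis B' = {(1, 1), (−1, 0)} that appears in a later solved example:

```python
# B is the standard basis of R^2; B' = {(1, 1), (-1, 0)}.
# Column j of the transition matrix P holds the B-coordinates of the
# j-th vector of B' (the transpose of the coefficient rows in the relations).
Bprime = [(1, 1), (-1, 0)]
P = [[Bprime[j][i] for j in range(2)] for i in range(2)]   # [[1, -1], [1, 0]]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# a vector with B'-coordinates (2, 3), i.e. 2(1,1) + 3(-1,0) = (-1, 2),
# has B-coordinates P [a]_{B'}
assert mat_vec(P, [2, 3]) == [-1, 2]
```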
Theorem 9. Let B = {α1, ..., αn} and B' = {β1, ..., βn} be two or-
dered bases for an n-dimensional vector space V(F). If (x1, ..., xn) is
an ordered set of n scalars, let α = Σ_{i=1}^{n} xi αi and β = Σ_{i=1}^{n} xi βi.
Then show that T(α) = β,
where T is the linear operator on V defined by
T(αi) = βi, i = 1, 2, ..., n.
Proof. We have T(α) = T(Σ_{i=1}^{n} xi αi)
= Σ_{i=1}^{n} xi T(αi) [∵ T is linear]
= Σ_{i=1}^{n} xi βi = β.
Similarity.
Similarity of matrices. Definition. Let A and B be square
matrices of order n over the field F. Then B is said to be similar
to A if there exists an n×n invertible square matrix C with elements
in F such that B = C⁻¹AC. (Meerut 1976)
Theorem 10. The relation of similarity is an equivalence relation
in the set of all n×n matrices over the field F.
(Meerut 1969, 76; Kanpur 81)
Proof. If A and B are two n×n matrices over the field F, then
B is said to be similar to A if there exists an n×n invertible matrix
C over F such that B = C⁻¹AC.
Reflexive. Let A be any n×n matrix over F. We can write
A = I⁻¹AI, where I is the n×n unit matrix over F.
∴ A is similar to A because I is definitely invertible.
Symmetric. Let A be similar to B. Then there exists an n×n
invertible matrix P over F such that
A = P⁻¹BP
⇒ PAP⁻¹ = P(P⁻¹BP)P⁻¹
⇒ PAP⁻¹ = B
⇒ B = (P⁻¹)⁻¹AP⁻¹
[∵ P is invertible means P⁻¹ is invertible and (P⁻¹)⁻¹ = P]
⇒ B is similar to A.
Transitive. Let A be similar to B and B be similar to C. Then
A = P⁻¹BP
and B = Q⁻¹CQ,
where P and Q are invertible n×n matrices over F.
We have A = P⁻¹(Q⁻¹CQ)P
= (P⁻¹Q⁻¹)C(QP)
= (QP)⁻¹C(QP)
[∵ P and Q are invertible means QP is
invertible and (QP)⁻¹ = P⁻¹Q⁻¹]
⇒ A is similar to C.
Hence similarity is an equivalence relation on the set of all n×n
matrices over the field F.
Theorem 11. Similar matrices have the same determinant.
Proof. Let B be similar to A. Then there exists an invertible
matrix C such that
B = C⁻¹AC
⇒ det B = det(C⁻¹AC) ⇒ det B = (det C⁻¹)(det A)(det C)
⇒ det B = (det C⁻¹)(det C)(det A) ⇒ det B = (det C⁻¹C)(det A)
⇒ det B = (det I)(det A) ⇒ det B = 1·(det A) ⇒ det B = det A.
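This invariance is easy to verify on a 2×2 example; here P and its inverse are written out by hand (det P = 1), and the similar matrix P⁻¹AP is checked to have the same determinant as A:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4, -2], [2, 1]]
P = [[1, -1], [1, 0]]
Pinv = [[0, 1], [-1, 1]]                 # inverse of P, since det P = 1
B = mat_mul(mat_mul(Pinv, A), P)         # B = P^{-1} A P, similar to A
assert det2(B) == det2(A) == 8
```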
Similarity of linear transformations. Definition. Let A and B
be linear transformations on a vector space V(F). Then B is said
to be similar to A if there exists an invertible linear transformation
C on V such that B = CAC⁻¹.
Theorem 12. The relation of similarity is an equivalence rela-
tion in the set of all linear transformations on a vector space V(F).
Proof. If A and B are two linear transformations on the
vector space V(F), then B is said to be similar to A if there exists
an invertible linear transformation C on V such that
B = CAC⁻¹.
Reflexive. Let A be any linear transformation on V. We can
write A = IAI⁻¹,
where I is the identity transformation on V.
∴ A is similar to A because I is definitely invertible.
Symmetric. Let A be similar to B. Then there exists an in-
vertible linear transformation P on V such that
A = PBP⁻¹
⇒ P⁻¹AP = P⁻¹(PBP⁻¹)P
⇒ P⁻¹AP = B ⇒ B = P⁻¹AP
⇒ B = P⁻¹A(P⁻¹)⁻¹ ⇒ B is similar to A.
Transitive. Let A be similar to B and B be similar to C.
Then A = PBP⁻¹
and B = QCQ⁻¹,
where P and Q are invertible linear transformations on V.
We have A = P(QCQ⁻¹)P⁻¹
= (PQ)C(Q⁻¹P⁻¹) = (PQ)C(PQ)⁻¹.
∴ A is similar to C.
Hence similarity is an equivalence relation on the set of all
linear transformations on V(F).
Theorem 13. Let T be a linear operator on an n-dimensional
vector space V(F) and let B and B' be two ordered bases for V. Then
the matrix of T relative to B' is similar to the matrix of T relative
to B. (Andhra 1992)
Proof. Let B = {α1, α2, ..., αn} and B' = {β1, β2, ..., βn}.
Let A = [a_ij]n×n be the matrix of T relative to B
and C = [c_ij]n×n be the matrix of T relative to B'. Then
T(αj) = Σ_{i=1}^{n} a_ij αi, j = 1, 2, ..., n ...(1)
and T(βj) = Σ_{i=1}^{n} c_ij βi, j = 1, 2, ..., n. ...(2)
Let S be the linear operator on V defined by
S(αj) = βj, j = 1, 2, ..., n. ...(3)
Since S maps a basis B onto a basis B', therefore S is necessa-
rily invertible. Let P be the matrix of S relative to B. Then P is
also an invertible matrix.
If P = [p_ij]n×n, then
S(αj) = Σ_{i=1}^{n} p_ij αi, j = 1, 2, ..., n. ...(4)
We have
T(βj) = T[S(αj)] [From (3)]
= T(Σ_{k=1}^{n} p_kj αk) [From (4), on replacing i by
k, which is immaterial]
= Σ_{k=1}^{n} p_kj T(αk) [∵ T is linear]
= Σ_{k=1}^{n} p_kj Σ_{i=1}^{n} a_ik αi [From (1), on replacing j by k]
= Σ_{i=1}^{n} (Σ_{k=1}^{n} a_ik p_kj) αi. ...(5)
Also T(βj) = Σ_{k=1}^{n} c_kj βk [From (2), on replacing i by k]
= Σ_{k=1}^{n} c_kj S(αk) [From (3)]
= Σ_{k=1}^{n} c_kj Σ_{i=1}^{n} p_ik αi [From (4), on replacing j by k]
= Σ_{i=1}^{n} (Σ_{k=1}^{n} p_ik c_kj) αi. ...(6)
From (5) and (6), we have
Σ_{i=1}^{n} (Σ_{k=1}^{n} a_ik p_kj) αi = Σ_{i=1}^{n} (Σ_{k=1}^{n} p_ik c_kj) αi
⇒ Σ_{k=1}^{n} a_ik p_kj = Σ_{k=1}^{n} p_ik c_kj
⇒ [a_ik]n×n [p_kj]n×n = [p_ik]n×n [c_kj]n×n
[by def. of matrix multiplication]
⇒ AP = PC
⇒ P⁻¹AP = P⁻¹PC [∵ P⁻¹ exists]
⇒ P⁻¹AP = IC ⇒ P⁻¹AP = C
⇒ C is similar to A.

Note. Suppose B and B' are two ordered bases for an n-
dimensional vector space V(F). Let T be a linear operator on V.
Suppose A is the matrix of T relative to B and C is the matrix of T
relative to B'. If P is the transition matrix from the basis B to the
basis B', then C = P⁻¹AP.
This result will enable us to find the matrix of T relative to
the basis B' when we already know the matrix of T relative to the
basis B.
Theorem 14. Let V be an n-dimensional vector space over the
field F and T1 and T2 be two linear operators on V. If there exist
two ordered bases B and B' for V such that [T1]_B = [T2]_B', then show
that T2 is similar to T1.
Proof. Let B = {α1, ..., αn} and B' = {β1, ..., βn}.
Let [T1]_B = [T2]_B' = A = [a_ij]n×n. Then
T1(αj) = Σ_{i=1}^{n} a_ij αi, j = 1, 2, ..., n ...(1)
and T2(βj) = Σ_{i=1}^{n} a_ij βi, j = 1, 2, ..., n. ...(2)
Let S be the linear operator on V defined by
S(αj) = βj, j = 1, 2, ..., n. ...(3)
Since S maps a basis of V onto a basis of V, therefore S is
invertible.
We have T2(βj) = T2[S(αj)] [From (3)]
= (T2S)(αj). ...(4)
Also T2(βj) = Σ_{i=1}^{n} a_ij βi [From (2)]
= Σ_{i=1}^{n} a_ij S(αi) [From (3)]
= S(Σ_{i=1}^{n} a_ij αi) [∵ S is linear]
= S[T1(αj)] [From (1)]
= (ST1)(αj). ...(5)
From (4) and (5), we have
(T2S)(αj) = (ST1)(αj), j = 1, 2, ..., n.
Since T2S and ST1 agree on a basis for V, therefore we have
T2S = ST1
⇒ T2SS⁻¹ = ST1S⁻¹ ⇒ T2 = ST1S⁻¹
⇒ T2 is similar to T1.
Determinant of a linear transformation on a finite dimensional
vector space. Let T be a linear operator on an n-dimensional vector
space V(F). If B and B' are two ordered bases for V, then
[T]_B and [T]_B'
are similar matrices. Also similar matrices have the same determi-
nant. This enables us to make the following definition :
Definition. Let T be a linear operator on an n-dimensional
vector space V(F). Then the determinant of T is the determinant of
the matrix of T relative to any ordered basis for V.
By the above discussion the determinant of T as defined by us
will be a unique element of F and thus our definition is sensible.
Scalar Transformation. Definition. Let V(F) be a vector space.
A linear transformation T on V is said to be a scalar transformation
of V if T(α) = cα ∀ α ∈ V, where c is a fixed scalar in F.
In this case we write T = c and we say that the linear transforma-
tion T is equal to the scalar c.
Also obviously if the linear transformation T is equal to the
scalar c, then we have T = cI,
where I is the identity transformation on V.
Trace of a Matrix. Definition. (Kanpur 1981). Let A be a
square matrix of order n over a field F. The sum of the elements of
A lying along the principal diagonal is called the trace of A. We
shall write the trace of A as trace A or tr A. Thus if A = [a_ij]n×n, then
tr A = Σ_{i=1}^{n} a_ii = a_11 + a_22 + ... + a_nn.
In the following two theorems we have given some fundamen-
tal properties of the trace function.
Theorem 15. Let A and B be two square matrices of order n
over a field F and λ ∈ F. Then
(1) tr(λA) = λ tr A;
(2) tr(A + B) = tr A + tr B;
(3) tr(AB) = tr(BA). (Poona 1970)
Proof. Let A = [a_ij]n×n and B = [b_ij]n×n.
(1) We have λA = [λ a_ij]n×n, by def. of multiplication of a
matrix by a scalar.

∴ tr(λA) = Σ_{i=1}^{n} λ a_ii = λ Σ_{i=1}^{n} a_ii = λ tr A.
(2) We have A + B = [a_ij + b_ij]n×n.
∴ tr(A + B) = Σ_{i=1}^{n} (a_ii + b_ii) = Σ_{i=1}^{n} a_ii + Σ_{i=1}^{n} b_ii
= tr A + tr B.
(3) We have AB = [c_ij]n×n where c_ij = Σ_{k=1}^{n} a_ik b_kj.
Also BA = [d_ij]n×n where d_ij = Σ_{k=1}^{n} b_ik a_kj.
Now tr(AB) = Σ_{i=1}^{n} c_ii = Σ_{i=1}^{n} (Σ_{k=1}^{n} a_ik b_ki)
= Σ_{k=1}^{n} (Σ_{i=1}^{n} a_ik b_ki), interchanging the order of sum-
mation in the last sum
= Σ_{k=1}^{n} (Σ_{i=1}^{n} b_ki a_ik) = Σ_{k=1}^{n} d_kk
= d_11 + d_22 + ... + d_nn = tr(BA).
Theorem 16. Similar matrices have the same trace.

Proof. Suppose A and B are two similar matrices. Then


there exists an invertible matrix C such that B = C⁻¹AC.
Let C⁻¹A = D.
Then tr B = tr(DC)
= tr(CD) [by theorem 15]
= tr(CC⁻¹A) = tr(IA) = tr A.
Trace of a linear transformation on a finite dimensional vector
space. Let T be a linear operator on an n-dimensional vector space
V(F). If B and B' are two ordered bases for V, then
[T]_B and [T]_B'
are similar matrices. Also similar matrices have the same trace.
This enables us to make the following definition.
Definition of trace of a linear transformation. Let T be a linear
operator on an n-dimensional vector space V(F). Then the trace of
T is the trace of the matrix of T relative to any ordered basis for V.
By the above discussion the trace of T as defined by us will be
a unique element of F and thus our definition is sensible.
Solved Examples
Example 1. Find the matrix of the linear transformation T on
V3(R) defined as T(a, b, c) = (2b + c, a − 4b, 3a),
with respect to the ordered basis B and also with respect to the or-
dered basis B' where
(i) B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
(ii) B' = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}.
(Andhra 1992; Nagarjuna 90; Tirupati 90)
Solution. (i) We have
T(1, 0, 0) = (0, 1, 3) = 0 (1, 0, 0) + 1 (0, 1, 0) + 3 (0, 0, 1),
T(0, 1, 0) = (2, −4, 0) = 2 (1, 0, 0) − 4 (0, 1, 0) + 0 (0, 0, 1),
and T(0, 0, 1) = (1, 0, 0) = 1 (1, 0, 0) + 0 (0, 1, 0) + 0 (0, 0, 1).
∴ by def. of the matrix of T with respect to B, we have
        [0   2  1]
[T]_B = [1  −4  0].
        [3   0  0]
Note. In order to find the matrix of T relative to the stand-
ard ordered basis B, it is sufficient to compute T(1, 0, 0), T(0, 1, 0)
and T(0, 0, 1). There is no need of further expressing these
vectors as linear combinations of (1, 0, 0), (0, 1, 0) and (0, 0, 1).
Obviously the co-ordinates of the vectors T(1, 0, 0), T(0, 1, 0)
and T(0, 0, 1) respectively constitute the first, second and third
columns of the matrix [T]_B.
(ii) We have T(1, 1, 1) = (3, −3, 3).
Now our aim is to express (3, −3, 3) as a linear combination
of vectors in B'. Let
(a, b, c) = x (1, 1, 1) + y (1, 1, 0) + z (1, 0, 0)
= (x + y + z, x + y, x).
Then x + y + z = a, x + y = b, x = c,
i.e. x = c, y = b − c, z = a − b. ...(1)
Putting a = 3, b = −3 and c = 3 in (1), we get
x = 3, y = −6 and z = 6.
∴ T(1, 1, 1) = (3, −3, 3) = 3 (1, 1, 1) − 6 (1, 1, 0) + 6 (1, 0, 0).
Also T(1, 1, 0) = (2, −3, 3).
Putting a = 2, b = −3 and c = 3 in (1), we get x = 3, y = −6, z = 5.
∴ T(1, 1, 0) = (2, −3, 3) = 3 (1, 1, 1) − 6 (1, 1, 0) + 5 (1, 0, 0).
Finally, T(1, 0, 0) = (0, 1, 3).
Putting a = 0, b = 1 and c = 3 in (1), we get x = 3, y = −2, z = −1.
∴ T(1, 0, 0) = (0, 1, 3) = 3 (1, 1, 1) − 2 (1, 1, 0) − 1 (1, 0, 0).
         [ 3   3   3]
∴ [T]_B' = [−6  −6  −2].
         [ 6   5  −1]
Example 2. Let T be the linear operator on R³ defined by
T(x1, x2, x3) = (3x1 + x3, −2x1 + x2, −x1 + 2x2 + 4x3).
What is the matrix of T in the ordered basis {α1, α2, α3} where
α1 = (1, 0, 1), α2 = (−1, 2, 1) and α3 = (2, 1, 1) ?
(Meerut 1972, 77, 90, 91)
Solution. By def. of T, we have
T(α1) = T(1, 0, 1) = (4, −2, 3).
Now our aim is to express (4, −2, 3) as a linear combination
of the vectors in the basis B = {α1, α2, α3}. Let
(a, b, c) = xα1 + yα2 + zα3
= x (1, 0, 1) + y (−1, 2, 1) + z (2, 1, 1)
= (x − y + 2z, 2y + z, x + y + z).
Then x − y + 2z = a, 2y + z = b, x + y + z = c.
Solving these equations, we get
x = (−a − 3b + 5c)/4, y = (−a + b + c)/4, z = (a + b − c)/2. ...(1)
Putting a = 4, b = −2, c = 3 in (1), we get
x = 17/4, y = −3/4, z = −1/2.
∴ T(α1) = (17/4) α1 − (3/4) α2 − (1/2) α3.
Also T(α2) = T(−1, 2, 1) = (−2, 4, 9). Putting
a = −2, b = 4, c = 9 in (1), we get x = 35/4, y = 15/4, z = −7/2.
∴ T(α2) = (35/4) α1 + (15/4) α2 − (7/2) α3.
Finally T(α3) = T(2, 1, 1) = (7, −3, 4). Putting
a = 7, b = −3, c = 4 in (1), we get x = 11/2, y = −3/2, z = 0.
∴ T(α3) = (11/2) α1 − (3/2) α2 + 0α3.
        [ 17/4   35/4   11/2]
∴ [T]_B = [−3/4   15/4   −3/2].
        [−1/2   −7/2     0 ]
Example 3. Let T be a linear operator on R³ defined by
T(x1, x2, x3) = (3x1 + x3, −2x1 + x2, −x1 + 2x2 + 4x3). Prove that T
is invertible and find a formula for T⁻¹. (Meerut 1976, 93)

Solution. Suppose B is the standard ordered basis for R³.
Then B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Let A = [T]_B, i.e. let A be
the matrix of T with respect to B. First we shall compute A.
We have
T(1, 0, 0) = (3, −2, −1),
T(0, 1, 0) = (0, 1, 2),
and T(0, 0, 1) = (1, 0, 4).
            [ 3  0  1]
∴ A = [T]_B = [−2  1  0].
            [−1  2  4]
Now T will be invertible if the matrix [T]_B is invertible. [See
theorem 7 on page 165]
              | 3  0  1|
We have det A = |−2  1  0|
              |−1  2  4|
= 3 (4 − 0) + 1 (−4 + 1) = 9.
Since det A ≠ 0, therefore the matrix A is invertible and
consequently T is invertible.
Now we shall compute the matrix A⁻¹. For this let us first
find adj A.
The cofactors of the elements of the first row of A are
|1 0; 2 4|, −|−2 0; −1 4|, |−2 1; −1 2|, i.e. 4, 8, −3.
The cofactors of the elements of the second row of A are
−|0 1; 2 4|, |3 1; −1 4|, −|3 0; −1 2|, i.e. 2, 13, −6.
The cofactors of the elements of the third row of A are
|0 1; 1 0|, −|3 1; −2 0|, |3 0; −2 1|, i.e. −1, −2, 3.
                                 [ 4   8  −3]
∴ adj A = transpose of the matrix [ 2  13  −6]
                                 [−1  −2   3]
  [ 4   2  −1]
= [ 8  13  −2].
  [−3  −6   3]
              1 [ 4   2  −1]
∴ A⁻¹ = (1/det A) (adj A) = − [ 8  13  −2].
              9 [−3  −6   3]
Now [T⁻¹]_B = ([T]_B)⁻¹ = A⁻¹. [See theorem 7 page 165]
We shall now find a formula for T⁻¹. Let α = (a, b, c) be any
vector belonging to R³. Then

[T⁻¹(α)]_B = [T⁻¹]_B [α]_B [See Note on page 161]
    1 [ 4   2  −1] [a]     1 [ 4a + 2b − c ]
  = − [ 8  13  −2] [b] = − [8a + 13b − 2c].
    9 [−3  −6   3] [c]     9 [−3a − 6b + 3c]
Since B is the standard ordered basis for R³,
∴ T⁻¹(a, b, c) = (1/9) (4a + 2b − c, 8a + 13b − 2c,
−3a − 6b + 3c).
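The formula just obtained can be sanity-checked by composing it with T: for any vector x we should get T⁻¹(T(x)) = x. An illustrative sketch:

```python
# the operator of Example 3
T = lambda v: (3*v[0] + v[2], -2*v[0] + v[1], -v[0] + 2*v[1] + 4*v[2])

def T_inv(a, b, c):
    # the formula obtained above, with the common factor 1/9
    return ((4*a + 2*b - c) / 9,
            (8*a + 13*b - 2*c) / 9,
            (-3*a - 6*b + 3*c) / 9)

x = (1, -2, 5)
assert T_inv(*T(x)) == x     # composing T^{-1} with T recovers x
```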
Example 4. Let T be the linear operator on R³ defined by
T(x1, x2, x3) = (3x1 + x3, −2x1 + x2, −x1 + 2x2 + 4x3).
(i) What is the matrix of T in the standard ordered basis B
for R³ ?
(ii) Find the transition matrix P from the ordered basis B to
the ordered basis B' = {α1, α2, α3} where α1 = (1, 0, 1), α2 = (−1, 2, 1),
and α3 = (2, 1, 1). Hence find the matrix of T relative to the ordered
basis B'.
Solution. (i) Let A = [T]_B. Then
    [ 3  0  1]
A = [−2  1  0]. [For calculation work see Ex. 3]
    [−1  2  4]
(ii) Since B is the standard ordered basis, therefore the transi-
tion matrix P from B to B' can be immediately written as
    [1  −1  2]
P = [0   2  1].
    [1   1  1]
Now [T]_B' = P⁻¹ [T]_B P. [See note on page 172]
In order to compute the matrix P⁻¹, we find that det P = −4.
                                   1 [ 1   3  −5]
Therefore P⁻¹ = (1/det P) adj P = − − [ 1  −1  −1].
                                   4 [−2  −2   2]
             1 [ 1   3  −5] [ 3  0  1] [1  −1  2]
∴ [T]_B' = − − [ 1  −1  −1] [−2  1  0] [0   2  1]
             4 [−2  −2   2] [−1  2  4] [1   1  1]
     1 [ 2  −7  −19] [1  −1  2]
 = − − [ 6  −3   −3] [0   2  1]
     4 [−4   2    6] [1   1  1]
     1 [−17  −35  −22]
 = − − [  3  −15    6].
     4 [  2   14    0]

          [ 17/4   35/4   11/2]
∴ [T]_B' = [−3/4   15/4   −3/2]. [Note that this result
          [−1/2   −7/2     0 ]  tallies with that of Ex. 2]
Example 5. Let T be the linear operator on R² defined by
T(x, y) = (4x − 2y, 2x + y).
Compute the matrix of T relative to the basis {α1, α2} where
α1 = (1, 1), α2 = (−1, 0). (Meerut 1976, 93P)
Solution. By def. of T, we have
T(α1) = T(1, 1) = (2, 3).
Now our aim is to express (2, 3) as a linear combination of
the vectors in the basis {α1, α2}.
Let (a, b) = xα1 + yα2 = x (1, 1) + y (−1, 0) = (x − y, x).
Then x − y = a, x = b.
Solving these equations, we get
x = b, y = b − a. ...(1)
Putting a = 2, b = 3 in (1), we get x = 3, y = 1.
∴ T(α1) = 3α1 + 1α2. ...(2)
Again T(α2) = T(−1, 0) = (−4, −2). Putting a = −4, b = −2
in (1), we get x = −2, y = 2.
∴ T(α2) = −2α1 + 2α2. ...(3)
From the relations (2) and (3), we see that the matrix of T
relative to the basis {α1, α2} is
[3  −2]
[1   2].
Example 6. Let T be a linear operator on R² defined by :
T(x, y) = (2y, 3x − y).
Find the matrix representation of T relative to the basis {(1, 3),
(2, 5)}. (Meerut 1980, 85, 89; S.V.U. Tirupati 93P)
Solution. Let α1 = (1, 3) and α2 = (2, 5). By def. of T, we have
T(α1) = T(1, 3) = (2·3, 3·1 − 3) = (6, 0)
and T(α2) = T(2, 5) = (2·5, 3·2 − 5) = (10, 1).
Now our aim is to express the vectors T(α1) and T(α2) as linear
combinations of the vectors in the basis {α1, α2}.
Let (a, b) = pα1 + qα2 = p (1, 3) + q (2, 5) = (p + 2q, 3p + 5q).
Then p + 2q = a, 3p + 5q = b.
Solving these equations, we get
p = −5a + 2b, q = 3a − b. ...(1)
Putting a = 6, b = 0 in (1), we get p = −30, q = 18.
∴ T(α1) = (6, 0) = −30α1 + 18α2. ...(2)
Again putting a = 10, b = 1 in (1), we get
p = −48, q = 29.
∴ T(α2) = (10, 1) = −48α1 + 29α2. ...(3)
From the relations (2) and (3), we see that the matrix of T
relative to the basis {α1, α2} is
[−30  −48]
[ 18   29].
Example 7. Let T be the linear operator on R² defined by
T(x, y) = (4x-2y, 2x+y).
(i) What is the matrix of T in the standard ordered basis B
for R² ?
(ii) Find the transition matrix P from the ordered basis B to
the ordered basis B' = {α₁, α₂} where α₁ = (1, 1), α₂ = (-1, 0). Hence
find the matrix of T relative to the ordered basis B'.
Solution. (i) We have T(1, 0) = (4, 2) and T(0, 1) = (-2, 1).
Since B is the standard ordered basis for R², therefore
[T]_B = [ 4  -2 ]
        [ 2   1 ].
(ii) Since B is the standard ordered basis for R², therefore
the transition matrix P from B to B' can be immediately written as
P = [ 1  -1 ]
    [ 1   0 ].
Now [T]_B' = P⁻¹ [T]_B P.
We have det P = 1×0 - (1×(-1)) = 1. The cofactors of the
elements of the first row of P are 0, -1. Also the cofactors of
the elements of the second row of P are -(-1), 1 i.e., are 1, 1.
Therefore
P⁻¹ = (1/det P) Adj P = [  0  1 ]
                        [ -1  1 ].
∴ [T]_B' = [  0  1 ] [ 4  -2 ] [ 1  -1 ]
           [ -1  1 ] [ 2   1 ] [ 1   0 ]
         = [  2  1 ] [ 1  -1 ]  =  [ 3  -2 ]
           [ -2  3 ] [ 1   0 ]     [ 1   2 ].
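The change-of-basis formula of Example 7 is easy to verify numerically; the NumPy sketch below (illustrative, not from the text) evaluates P⁻¹ [T]_B P directly.

```python
import numpy as np

# Standard-basis matrix of T(x, y) = (4x - 2y, 2x + y)
T_B = np.array([[4, -2],
                [2,  1]])

# Transition matrix from B to B': columns are alpha1 = (1,1), alpha2 = (-1,0)
P = np.array([[1, -1],
              [1,  0]])

# Change of basis: [T]_B' = P^-1 [T]_B P
T_Bdash = np.linalg.inv(P) @ T_B @ P
print(T_Bdash)   # [[ 3. -2.], [ 1.  2.]]
```

The result agrees with the coordinate-by-coordinate computation of Example 5.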
Example 8. Let T be the linear operator on R³ defined by
T(x₁, x₂, x₃) = (x₁+x₂+x₃, -x₁-x₂-4x₃, 2x₁-x₃).
What is the matrix of T in the ordered basis {α₁, α₂, α₃} where
α₁ = (1, 1, 1), α₂ = (0, 1, 1), α₃ = (1, 0, 1) ?
Note. Calculate the required matrix in two ways and check
your answer.
Example 9. Consider the vector space V(R) of all 2×2 matrices
over the field R of real numbers. Let T be the linear transformation
on V that sends each matrix X onto AX, where
A = [ 1  1 ]
    [ 1  1 ].
Find the matrix of T with respect to the ordered basis B = {α₁, α₂, α₃, α₄}
for V where
α₁ = [ 1  0 ],  α₂ = [ 0  1 ],  α₃ = [ 0  0 ],  α₄ = [ 0  0 ].
     [ 0  0 ]        [ 0  0 ]        [ 1  0 ]        [ 0  1 ]
Solution. We have
T(α₁) = [ 1  1 ] [ 1  0 ]  =  [ 1  0 ]  =  1α₁ + 0α₂ + 1α₃ + 0α₄,
        [ 1  1 ] [ 0  0 ]     [ 1  0 ]
T(α₂) = [ 1  1 ] [ 0  1 ]  =  [ 0  1 ]  =  0α₁ + 1α₂ + 0α₃ + 1α₄,
        [ 1  1 ] [ 0  0 ]     [ 0  1 ]
T(α₃) = [ 1  1 ] [ 0  0 ]  =  [ 1  0 ]  =  1α₁ + 0α₂ + 1α₃ + 0α₄,
        [ 1  1 ] [ 1  0 ]     [ 1  0 ]
and T(α₄) = [ 1  1 ] [ 0  0 ]  =  [ 0  1 ]  =  0α₁ + 1α₂ + 0α₃ + 1α₄.
            [ 1  1 ] [ 0  1 ]     [ 0  1 ]
∴ [T]_B = [ 1  0  1  0 ]
          [ 0  1  0  1 ]
          [ 1  0  1  0 ]
          [ 0  1  0  1 ].
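Example 9 can also be reproduced mechanically. In the matrix-unit basis α₁, ..., α₄, the coordinates of a 2×2 matrix are simply its entries read row by row, so each column of [T]_B is A @ αⱼ flattened. A NumPy sketch (ours, not the book's):

```python
import numpy as np

A = np.ones((2, 2), dtype=int)      # the fixed matrix A of Example 9

# Matrix units alpha1..alpha4; coordinates relative to them are just the
# entries of a 2x2 matrix read row by row.
units = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
         np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]

# Column j of [T]_B is A @ alpha_j flattened row-wise
M = np.column_stack([(A @ u).reshape(-1) for u in units])
print(M)
```

The printed matrix matches the 4×4 matrix [T]_B found above.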
Example 10. If the matrix of a linear transformation T on
V₂(C), with respect to the ordered basis B = {(1, 0), (0, 1)}, is
[ 1  1 ]
[ 1  1 ],
what is the matrix of T with respect to the ordered basis
B' = {(1, 1), (1, -1)} ?
Solution. Let us first define T explicitly. It is given that
[T]_B = [ 1  1 ]
        [ 1  1 ].
∴ T(1, 0) = 1 (1, 0) + 1 (0, 1) = (1, 1),
and T(0, 1) = 1 (1, 0) + 1 (0, 1) = (1, 1).
If (a, b) ∈ V₂(C), then we can write
(a, b) = a (1, 0) + b (0, 1).
∴ T(a, b) = a T(1, 0) + b T(0, 1)
= a (1, 1) + b (1, 1) = (a+b, a+b).
This is the explicit expression for T.
Now let us find the matrix of T with respect to B'.
We have T(1, 1) = (2, 2).
Let (2, 2) = x (1, 1) + y (1, -1) = (x+y, x-y).
Then x+y = 2, x-y = 2
⇒ x = 2, y = 0.
∴ (2, 2) = 2 (1, 1) + 0 (1, -1).
Also T(1, -1) = (0, 0) = 0 (1, 1) + 0 (1, -1).
∴ [T]_B' = [ 2  0 ]
           [ 0  0 ].
Note. If P is the transition matrix from the basis B to the
basis B', then P = [ 1   1 ].  We can compute [T]_B' by using the
                   [ 1  -1 ]
formula [T]_B' = P⁻¹ [T]_B P.
Example 11. Show that the vectors α₁ = (1, 0, -1), α₂ = (1, 2, 1),
α₃ = (0, -3, 2) form a basis for R³. Express each of the standard
basis vectors as a linear combination of α₁, α₂, α₃.
[Meerut 1981, 84P, 93P]
Solution. Let a, b, c be scalars i.e., real numbers such that
aα₁ + bα₂ + cα₃ = 0
i.e. a (1, 0, -1) + b (1, 2, 1) + c (0, -3, 2) = (0, 0, 0)
i.e. (a+b+0c, 0a+2b-3c, -a+b+2c) = (0, 0, 0)
i.e. a+b+0c = 0,
     0a+2b-3c = 0,    ...(1)
     -a+b+2c = 0.
The coefficient matrix of these equations is
A = [  1  1   0 ]
    [  0  2  -3 ]
    [ -1  1   2 ].
We have det A = | A | = |  1  1   0 |
                        |  0  2  -3 |
                        | -1  1   2 |
= 1 (4+3) - 1 (0-3) = 7+3 = 10.
Since det A ≠ 0, therefore the matrix A is non-singular and
rank A = 3 i.e. equal to the number of unknowns a, b, c. Hence
a = 0, b = 0, c = 0 is the only solution of the equations (1). There-
fore the vectors α₁, α₂, α₃ are linearly independent over R. Since
dim R³ = 3, therefore the set {α₁, α₂, α₃} containing three linearly
independent vectors forms a basis for R³.
Now let B = {e₁, e₂, e₃} be the standard ordered basis for R³.
Then e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1). Let B' = {α₁, α₂, α₃}.
We have α₁ = (1, 0, -1) = 1e₁ + 0e₂ - 1e₃,
α₂ = (1, 2, 1) = 1e₁ + 2e₂ + 1e₃,
α₃ = (0, -3, 2) = 0e₁ - 3e₂ + 2e₃.
If P is the transition matrix from the basis B to the basis B',
then
P = [  1  1   0 ]
    [  0  2  -3 ]
    [ -1  1   2 ].
Let us find the matrix P⁻¹. For this let us first find Adj P.
The cofactors of the elements of the first row of P are
|  2  -3 |,  - |  0  -3 |,  |  0  2 |    i.e.  7, 3, 2.
|  1   2 |     | -1   2 |   | -1  1 |
The cofactors of the elements of the second row of P are
- | 1  0 |,  |  1  0 |,  - |  1  1 |    i.e.  -2, 2, -2.
  | 1  2 |   | -1  2 |     | -1  1 |
The cofactors of the elements of the third row of P are
| 1   0 |,  - | 1   0 |,  | 1  1 |    i.e.  -3, 3, 2.
| 2  -3 |     | 0  -3 |   | 0  2 |
∴ Adj P = transpose of the matrix [  7  3   2 ]   =  [ 7  -2  -3 ]
                                  [ -2  2  -2 ]      [ 3   2   3 ]
                                  [ -3  3   2 ]      [ 2  -2   2 ].
∴ P⁻¹ = (1/det P) Adj P = (1/10) [ 7  -2  -3 ]
                                 [ 3   2   3 ]
                                 [ 2  -2   2 ].
Now e₁ = 1e₁ + 0e₂ + 0e₃.
∴ Coordinate matrix of e₁ relative to the basis B
= [e₁]_B = [ 1 ]
           [ 0 ]
           [ 0 ].
∴ Coordinate matrix of e₁ relative to the basis B'
= [e₁]_B' = P⁻¹ [ 1 ]  =  (1/10) [ 7 ]  =  [ 7/10 ]
                [ 0 ]            [ 3 ]     [ 3/10 ]
                [ 0 ]            [ 2 ]     [ 2/10 ].
∴ e₁ = (7/10)α₁ + (3/10)α₂ + (2/10)α₃.
Also [e₂]_B = [ 0 ]  and  [e₃]_B = [ 0 ].
              [ 1 ]                [ 0 ]
              [ 0 ]                [ 1 ]
∴ [e₂]_B' = P⁻¹ [ 0 ]  and  [e₃]_B' = P⁻¹ [ 0 ].
                [ 1 ]                     [ 0 ]
                [ 0 ]                     [ 1 ]
Thus [e₂]_B' = (1/10) [ -2 ]  and  [e₃]_B' = (1/10) [ -3 ].
                      [  2 ]                        [  3 ]
                      [ -2 ]                        [  2 ]
∴ e₂ = -(1/5)α₁ + (1/5)α₂ - (1/5)α₃
and e₃ = -(3/10)α₁ + (3/10)α₂ + (1/5)α₃.
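All three coordinate columns of Example 11 come from one linear solve: since the columns of P are α₁, α₂, α₃, solving P X = I yields X = P⁻¹, whose i-th column lists the coordinates of eᵢ. A NumPy check (ours, not the book's):

```python
import numpy as np

P = np.array([[ 1, 1,  0],
              [ 0, 2, -3],
              [-1, 1,  2]])        # columns are alpha1, alpha2, alpha3

# Solving P X = I gives X = P^-1; column i holds the coordinates of e_i
# relative to {alpha1, alpha2, alpha3}.
X = np.linalg.solve(P, np.eye(3))
print(np.round(X, 2))
```

The first column comes out as (0.7, 0.3, 0.2), matching e₁ = (7/10)α₁ + (3/10)α₂ + (2/10)α₃ above.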
Exaoiple 12. Let A be an mxn matrix with real entries. Prove that
A=0 {null matrix) if and only if trace {A‘ A)=0. (Meerut 1981)
Solution. Let A^[aij]myn> Then A*s=s[bij\„xnti
where bij^ajt.
Now is a matrix of the type nxn,
Let A* A—[ctj]n>in^ Then
Cf/Bsthe sum of the products of the corresponding elements of the
row of A* and the i*^ column of A
^bit au+bi2a2i+.„+bi„fi„i
*=ut/fl|/+fl2rfl2/+●●●+flm/««r [V bij=aji]

N^ trace .4)==2? c«

/-I

c=the sum of the squares of all the elements of A,


Now'the elements of A are all real numbers. Therefore trace
{A* A)—0 ^ the sum of the squares of all the elements of A is zero
^ each element of A is zero i4 is a null matrix.
Conversely if i4 is a null matrix, then A* A is also a null matrix
and so trace (^* i4)=0. '
Hence ti^ice. (A*. 4)=0 iff ^4»0.
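The identity at the heart of Example 12, trace (Aᵗ A) = sum of the squares of the entries of A, is quick to confirm numerically (a NumPy sketch, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))          # any real m x n matrix

lhs = np.trace(A.T @ A)                  # trace(A^t A)
rhs = (A ** 2).sum()                     # sum of squares of all entries
assert np.isclose(lhs, rhs)

# and the trace vanishes for the null matrix
Z = np.zeros((3, 4))
assert np.trace(Z.T @ Z) == 0.0
```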
Example 13. Show that the only matrix similar to the identity
matrix I is I itself.    (Meerut 1976)
Solution. The identity matrix I is invertible and we can write
I = I⁻¹ I I. Therefore I is similar to I. Further let B be a matrix
similar to I. Then there exists an invertible matrix P such that
B = P⁻¹ I P
⇒ B = P⁻¹ P    [∵ I P = P]
⇒ B = I.
Hence the only matrix similar to I is I itself.
Example 14. If two linear transformations A and B on V(F) are
similar, then show that A² and B² are also similar, and if A, B are
invertible, then A⁻¹, B⁻¹ are also similar.
Solution. Since A and B are similar, therefore there exists an
invertible linear transformation C on V such that
A = CBC⁻¹.    ...(1)
We have A² = (CBC⁻¹)(CBC⁻¹) = CB (C⁻¹ C) BC⁻¹
= CBIBC⁻¹ = CBBC⁻¹ = CB²C⁻¹.
∴ A² is similar to B².
If A and B are invertible, then from (1), we have
A⁻¹ = (CBC⁻¹)⁻¹ = (C⁻¹)⁻¹ B⁻¹ C⁻¹ = CB⁻¹C⁻¹.
∴ A⁻¹ is similar to B⁻¹.
Example 15. If A and B are linear transformations on the same
vector space and if at least one of them is invertible, then AB and BA
are similar.
Solution. Let A be invertible.
We have A (BA) A⁻¹ = ABAA⁻¹ = ABI = AB.
Thus AB = A (BA) A⁻¹.
∴ AB is similar to BA.
Now let B be invertible.
We have B (AB) B⁻¹ = BABB⁻¹ = BAI = BA.
∴ BA is similar to AB.
Example 16. Let T and S be linear operators on the finite dimen-
sional vector space V (F); prove that
(i) det (TS) = (det T)(det S);
(ii) T is invertible iff det T ≠ 0.
Solution. (i) Let B be any ordered basis for V.
We have [TS]_B = [T]_B [S]_B.
∴ det [TS]_B = det ([T]_B [S]_B) = (det [T]_B)(det [S]_B)
[∵ the determinant of the product of two matrices is equal to the
product of their determinants].
Now the determinant of a linear transformation is equal to the
determinant of its matrix with respect to any ordered basis.
∴ det (TS) = (det T)(det S).
(ii) Suppose T is invertible. Then there exists a linear trans-
formation T⁻¹ on V such that TT⁻¹ = I = T⁻¹T.
∴ det (TT⁻¹) = det I
⇒ (det T)(det T⁻¹) = det [I]_B, where B is any ordered basis
for V
⇒ (det T)(det T⁻¹) = 1.    [∵ [I]_B is a unit matrix and the deter-
minant of a unit matrix is equal to 1]
Now det T and det T⁻¹ are elements of F. In a field the
product of two elements can be 0 iff at least one of them is 0.
∴ (det T)(det T⁻¹) = 1
⇒ det T ≠ 0.
Conversely suppose that det T ≠ 0.
Then det [T]_B ≠ 0, where B is any ordered basis for V.
Now det [T]_B ≠ 0 implies that the matrix [T]_B is invertible.
∴ T is also invertible.
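Part (i) of Example 16 is the multiplicativity of the determinant, which can be spot-checked on random matrices (a NumPy sketch, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))     # matrices of T and S in some basis
S = rng.standard_normal((3, 3))

# det(TS) = det(T) det(S)
assert np.isclose(np.linalg.det(T @ S),
                  np.linalg.det(T) * np.linalg.det(S))
```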
Example 17. If T and S are linear transformations on a finite
dimensional vector space V such that
TS = 0̂, T ≠ 0̂, S ≠ 0̂, then det T = det S = 0.
Solution. Let det T ≠ 0.
Then T is invertible and T⁻¹ exists.
∴ TS = 0̂
⇒ T⁻¹ (TS) = T⁻¹ 0̂
⇒ (T⁻¹T) S = 0̂ ⇒ IS = 0̂
⇒ S = 0̂, which is contradictory to the hypothesis that S ≠ 0̂.
∴ det T must be equal to 0.
Again let det S ≠ 0. Then S is invertible.
∴ TS = 0̂
⇒ (TS) S⁻¹ = 0̂ S⁻¹
⇒ T (SS⁻¹) = 0̂ ⇒ TI = 0̂
⇒ T = 0̂, which is contradictory to the hypothesis that T ≠ 0̂.
∴ det S must be equal to zero.
Example 18. If {α₁, ..., αₙ} and {β₁, ..., βₙ} are bases in the
same finite dimensional vector space V (F), and if T is a linear
transformation such that
T(αᵢ) = βᵢ, i = 1, ..., n, then det T ≠ 0.
Solution. Since T maps a basis for V onto a basis for V,
therefore T is invertible.
Now T is invertible implies that det T ≠ 0. For proof see
Ex. 16.
Example 19. If T and S are similar linear transformations on a
finite dimensional vector space V (F), then det T = det S.
Solution. Since T and S are similar, therefore there exists an
invertible linear transformation P on V such that T = PSP⁻¹.
Therefore det T = det (PSP⁻¹) = (det P)(det S)(det P⁻¹)
= (det P)(det P⁻¹)(det S) = [det (PP⁻¹)](det S)
= (det I)(det S) = 1 (det S) = det S.
Exercises
1. Let T be the linear operator on R² defined by
T(a, b) = (a, 0). Write the matrix of T in the standard ordered
basis B = {(1, 0), (0, 1)}.
If B' = {(1, 1), (2, 1)} is another ordered basis for R², find the
transition matrix P from the basis B to the basis B'. Hence find the
matrix of T relative to the basis B'.
Ans. [T]_B = [ 1  0 ],  P = [ 1  2 ],  [T]_B' = [ -1  -2 ]
             [ 0  0 ]       [ 1  1 ]            [  1   2 ].
2. Find the matrix, relative to the basis {α₁, α₂, α₃} of R³, of the
linear transformation T : R³ → R³ whose matrix relative to the stan-
dard ordered basis is
[ 2  0  0 ]
[ 0  4  0 ]
[ 0  0  3 ].
3. Find the co-ordinates of the vector (2, 1, 3, 4) of R⁴ rela-
tive to the basis vectors
α₁ = (1, 1, 0, 0), α₂ = (1, 0, 1, 1), α₃ = (2, 0, 0, 2), α₄ = (0, 0, 2, 2).
Ans. (2, 1, 3, 4) = α₁ + ½α₃ + (3/2)α₄.
4. Explain what is meant by the matrix of a linear trans-
formation on V relative to a basis of V. Let F be a field and V
the set of all polynomials in x over F of degree < 5. If D : V → V
is defined by D [f(x)] = f''(x), where f''(x) is the second derivative
of f(x), show that D is a linear transformation on V. Find the
matrix of D in the basis {1, x, x², x³, x⁴}.
Ans. [ 0  0  2  0   0 ]
     [ 0  0  0  6   0 ]
     [ 0  0  0  0  12 ]
     [ 0  0  0  0   0 ]
     [ 0  0  0  0   0 ].
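The answer matrix of exercise 4 can be generated programmatically: the second derivative of x^j is j(j-1) x^(j-2), so column j of the matrix carries j(j-1) in row j-2. A NumPy sketch (ours, not the book's):

```python
import numpy as np

deg = 5                                  # polynomials of degree < 5
D2 = np.zeros((deg, deg), dtype=int)

# d^2/dx^2 x^j = j(j-1) x^(j-2): column j gets j(j-1) in row j-2
for j in range(2, deg):
    D2[j - 2, j] = j * (j - 1)

print(D2)
```

Printing D2 reproduces the 5×5 answer matrix above.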
5. Let V be the vector space of those polynomial functions
from the reals into itself which have degree ≤ 3. Let
B = {f₁, f₂, f₃, f₄}, where fᵢ(x) = x^(i-1) (1 ≤ i ≤ 4).
Show that B forms a basis for V. For any real number t let
gᵢ(x) = (x+t)^(i-1). Show that B' = {g₁, g₂, g₃, g₄} is also a basis for
V. If D is the differentiation operator on V, write the matrices of
D in the ordered bases B and B'.    (Meerut 1974)
Ans. [D]_B = [D]_B' = [ 0  1  0  0 ]
                      [ 0  0  2  0 ]
                      [ 0  0  0  3 ]
                      [ 0  0  0  0 ].
6. If A and B are n×n complex matrices, show that
AB - BA = I is impossible.
7. Let V be the space of all 2×2 matrices over the field F
and let P be a fixed 2×2 matrix over F. Let T be the linear
operator on V defined by T(A) = PA, ∀ A ∈ V. Prove that
trace (T) = 2 trace (P).
8. Show that the only matrix similar to the zero matrix is
the zero matrix itself.    (Meerut 1976)
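Exercise 7 can be spot-checked by building the 4×4 matrix of T(A) = PA in the matrix-unit basis, exactly as in Example 9, and comparing traces (a NumPy sketch under the assumption F = R; not part of the text):

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((2, 2))          # the fixed matrix of exercise 7

# Matrix-unit basis of the 2x2 matrices; column j of the operator's
# matrix is P @ (unit j) flattened row-wise.
units = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
         np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]
T_mat = np.column_stack([(P @ u).reshape(-1) for u in units])

assert np.isclose(np.trace(T_mat), 2 * np.trace(P))
```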
§ 14. Linear Functionals. Let V(F) be a vector space. We
know that the field F can be regarded as a vector space over F.
This is the vector space F(F) or F¹. We shall simply denote it by
F. A linear transformation from V into F is called a linear func-
tional on V. We shall now give an independent definition of a linear
functional.
Linear Functionals. Definition. Let V(F) be a vector space. A
function f from V into F is said to be a linear functional on V if
f(aα+bβ) = a f(α) + b f(β) ∀ a, b ∈ F and ∀ α, β ∈ V.
If f is a linear functional on V(F), then f(α) is in F for each
α belonging to V. Since f(α) is a scalar, therefore a linear func-
tional on V is a scalar valued function.
Example 1. Let Vₙ(F) be the vector space of ordered n-tuples
of the elements of the field F.
Let x₁, x₂, ..., xₙ be n fixed elements of F. If
α = (a₁, a₂, ..., aₙ) ∈ Vₙ(F),
let f be a function from Vₙ(F) into F defined by
f(α) = x₁a₁ + x₂a₂ + ... + xₙaₙ.
Let β = (b₁, b₂, ..., bₙ) ∈ Vₙ(F). If a, b ∈ F, we have
f(aα+bβ) = f [a (a₁, ..., aₙ) + b (b₁, ..., bₙ)]
= f (aa₁+bb₁, ..., aaₙ+bbₙ)
= x₁ (aa₁+bb₁) + ... + xₙ (aaₙ+bbₙ)
= a (x₁a₁+...+xₙaₙ) + b (x₁b₁+...+xₙbₙ)
= a f(α) + b f(β).
∴ f is a linear functional on Vₙ(F).
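The functional of Example 1 is just a dot product with the fixed scalars x₁, ..., xₙ, and its linearity can be checked numerically (a NumPy sketch; the sample values are ours):

```python
import numpy as np

x = np.array([2.0, -1.0, 3.0])       # the fixed scalars x1, x2, x3
f = lambda alpha: x @ alpha          # f(alpha) = x1 a1 + x2 a2 + x3 a3

a, b = 5.0, -2.0
alpha = np.array([1.0, 0.0, 4.0])
beta = np.array([2.0, 2.0, 1.0])

# linearity: f(a alpha + b beta) = a f(alpha) + b f(beta)
assert np.isclose(f(a * alpha + b * beta), a * f(alpha) + b * f(beta))
```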
Example 2. Now we shall give a very important example of
a linear functional.
We shall prove that the trace function is a linear functional on
the space of all n×n matrices over a field F.    (Meerut 1977)
Let n be a positive integer and F a field. Let V(F) be the
vector space of all n×n matrices over F. If A = [aᵢⱼ]ₙₓₙ ∈ V, then
the trace of A is the scalar
tr A = a₁₁ + a₂₂ + ... + aₙₙ.
Thus the trace of A is the scalar obtained by adding the ele-
ments of A lying along the principal diagonal.
The trace function is a linear functional on V because if
a, b ∈ F and A = [aᵢⱼ]ₙₓₙ, B = [bᵢⱼ]ₙₓₙ ∈ V, then
tr (aA + bB) = tr (a [aᵢⱼ]ₙₓₙ + b [bᵢⱼ]ₙₓₙ) = tr ([aaᵢⱼ + bbᵢⱼ]ₙₓₙ)
= Σᵢ₌₁ⁿ (aaᵢᵢ + bbᵢᵢ) = a Σᵢ₌₁ⁿ aᵢᵢ + b Σᵢ₌₁ⁿ bᵢᵢ = a (tr A) + b (tr B).

Example 3. Now we shall give another important example of
a linear functional.
Let V be a finite-dimensional vector space over the field F and
let B be an ordered basis for V. The function fᵢ which assigns to
each vector α in V the i-th coordinate of α relative to the ordered basis
B is a linear functional on V.
Let B = {α₁, α₂, ..., αₙ}.
If α = a₁α₁ + a₂α₂ + ... + aₙαₙ ∈ V, then by definition of fᵢ, we
have fᵢ (α) = aᵢ.
Similarly if β = b₁α₁ + ... + bₙαₙ ∈ V, then fᵢ (β) = bᵢ.
If a, b ∈ F, we have
fᵢ (aα+bβ) = fᵢ [a (a₁α₁+...+aₙαₙ) + b (b₁α₁+...+bₙαₙ)]
= fᵢ [(aa₁+bb₁) α₁ + ... + (aaₙ+bbₙ) αₙ]
= aaᵢ + bbᵢ = a fᵢ (α) + b fᵢ (β).
Hence fᵢ is a linear functional on V.
Some particular linear functionals.
1. Zero functional. Let V be a vector space over the field F.
The function f from V into F defined by
f(α) = 0 (zero of F) ∀ α ∈ V
is a linear functional on V.
Proof. Let α, β ∈ V and a, b ∈ F. We have
f(aα+bβ) = 0    (by def. of f)
= a0 + b0 = a f(α) + b f(β).
∴ f is a linear functional on V. It is called the zero func-
tional and we shall in future denote it by 0̂.
2. Negative of a linear functional.
Let V be a vector space over the field F. Let f be a linear
functional on V. The correspondence -f defined by
(-f)(α) = -[f(α)] ∀ α ∈ V
is a linear functional on V.
Proof. Since f(α) ∈ F ⇒ -f(α) ∈ F, therefore -f is a function
from V into F.
Let a, b ∈ F and α, β ∈ V. Then
(-f)(aα+bβ) = -[f(aα+bβ)]    [by def. of -f]
= -[a f(α) + b f(β)]    [∵ f is a linear functional]
= a [-f(α)] + b [-f(β)]
= a [(-f)(α)] + b [(-f)(β)].
∴ -f is a linear functional on V.
Properties of a linear functional.
Theorem. Let f be a linear functional on a vector space V (F).
Then
(i) f(0) = 0, where 0 on the left hand side is the zero vector of V,
and 0 on the right hand side is the zero element of F;
(ii) f(-α) = -f(α) ∀ α ∈ V.
Proof. (i) Let α ∈ V. Then f(α) ∈ F.
We have f(α) + 0 = f(α)    [∵ 0 is the zero element of F]
= f(α + 0)    [∵ 0 is the zero vector of V]
= f(α) + f(0).    [∵ f is a linear functional]
Now F is a field. Therefore
f(α) + 0 = f(α) + f(0)
⇒ f(0) = 0, by left cancellation law for addition in F.
(ii) We have f[α + (-α)] = f(α) + f(-α)
[∵ f is a linear functional]
But f[α + (-α)] = f(0) = 0.    [by (i)]
Thus in F, we have
f(α) + f(-α) = 0
⇒ f(-α) = -f(α).
§ 15. Dual Spaces.
Let V' be the set of all linear functionals on a vector space
V(F). Sometimes we denote this set by V*. Now our aim is to
impose a vector space structure on the set V' over the same field F.
For this purpose we shall have to suitably define addition in V'
and scalar multiplication in V' over F.
Theorem. Let V be a vector space over the field F. Let f₁ and
f₂ be linear functionals on V. The function f₁+f₂ defined by
(f₁+f₂)(α) = f₁ (α) + f₂ (α) ∀ α ∈ V
is a linear functional on V. If c is any element of F, the function
cf defined by
(cf)(α) = c f(α) ∀ α ∈ V
is a linear functional on V. The set V' of all linear functionals on V,
together with the addition and scalar multiplication defined as above,
is a vector space over the field F.
Proof. Suppose f₁ and f₂ are linear functionals on V and we
define f₁+f₂ as follows :
(f₁+f₂)(α) = f₁ (α) + f₂ (α) ∀ α ∈ V.    ...(1)
Since f₁(α) + f₂(α) ∈ F, therefore f₁+f₂ is a function from V
into F.
Let a, b ∈ F and α, β ∈ V. Then
(f₁+f₂)(aα+bβ) = f₁ (aα+bβ) + f₂ (aα+bβ)    [by (1)]
= [a f₁(α) + b f₁(β)] + [a f₂(α) + b f₂(β)]
[∵ f₁ and f₂ are linear functionals]
= a [f₁(α) + f₂(α)] + b [f₁(β) + f₂(β)]
= a [(f₁+f₂)(α)] + b [(f₁+f₂)(β)].    [by (1)]
∴ f₁+f₂ is a linear functional on V. Thus
f₁, f₂ ∈ V' ⇒ f₁+f₂ ∈ V'.
Therefore V' is closed with respect to addition defined in it.
Again let f ∈ V' and c ∈ F. Let us define cf as follows :
(cf)(α) = c f(α) ∀ α ∈ V.    ...(2)
Since c f(α) ∈ F, therefore cf is a function from V into F.

Let a, b ∈ F and α, β ∈ V. Then
(cf)(aα+bβ) = c f(aα+bβ)    [by (2)]
= c [a f(α) + b f(β)]    [∵ f is a linear functional]
= c [a f(α)] + c [b f(β)]    [∵ F is a field]
= (ca) f(α) + (cb) f(β)
= (ac) f(α) + (bc) f(β)
= a [c f(α)] + b [c f(β)]
= a [(cf)(α)] + b [(cf)(β)].
∴ cf is a linear functional on V. Thus
f ∈ V' and c ∈ F ⇒ cf ∈ V'.
Therefore V' is closed with respect to scalar multiplication
defined in it.
Associativity of addition in V'.
Let f₁, f₂, f₃ ∈ V'. If α ∈ V, then
[f₁+(f₂+f₃)](α) = f₁ (α) + (f₂+f₃)(α)    [by (1)]
= f₁(α) + [f₂(α) + f₃(α)]    [by (1)]
= [f₁(α) + f₂(α)] + f₃(α)    [∵ addition in F is associative]
= (f₁+f₂)(α) + f₃ (α)    [by (1)]
= [(f₁+f₂)+f₃](α).    [by (1)]
∴ f₁+(f₂+f₃) = (f₁+f₂)+f₃.
[by def. of equality of two functions]
Commutativity of addition in V'. Let f₁, f₂ ∈ V'. If α is any
element of V, then
(f₁+f₂)(α) = f₁ (α) + f₂ (α)    [by (1)]
= f₂(α) + f₁ (α)    [∵ addition in F is commutative]
= (f₂+f₁)(α).    [by (1)]
∴ f₁+f₂ = f₂+f₁.
Existence of additive identity in V'. Let 0̂ be the zero linear
functional on V i.e.
0̂(α) = 0 ∀ α ∈ V.
Then 0̂ ∈ V'. If f ∈ V' and α ∈ V, we have
(0̂+f)(α) = 0̂(α) + f(α)    [by (1)]
= 0 + f(α)    [by def. of 0̂]
= f(α).    [0 being additive identity in F]
∴ 0̂+f = f ∀ f ∈ V'.
∴ 0̂ is the additive identity in V'.

Existence of additive inverse of each element in V'.
Let f ∈ V'. Let us define -f as follows :
(-f)(α) = -f(α) ∀ α ∈ V.
Then -f ∈ V'. If α ∈ V, we have
(-f+f)(α) = (-f)(α) + f(α)    [by (1)]
= -f(α) + f(α)    [by def. of -f]
= 0
= 0̂(α).    [by def. of 0̂]
∴ -f+f = 0̂ for every f ∈ V'.
Thus each element in V' possesses an additive inverse. Therefore
V' is an abelian group with respect to addition defined in it.
Further we make the following observations :
(i) Let c ∈ F and f₁, f₂ ∈ V'. If α is any element in V,
we have
[c (f₁+f₂)](α) = c [(f₁+f₂)(α)]    [by (2)]
= c [f₁(α) + f₂(α)]    [by (1)]
= c f₁(α) + c f₂(α)
= (cf₁)(α) + (cf₂)(α)    [by (2)]
= (cf₁+cf₂)(α).    [by (1)]
∴ c (f₁+f₂) = cf₁+cf₂.
(ii) Let a, b ∈ F and f ∈ V'. If α ∈ V, we have
[(a+b) f](α) = (a+b) f(α)    [by (2)]
= a f(α) + b f(α)    [∵ F is a field]
= (af)(α) + (bf)(α)    [by (2)]
= (af+bf)(α).    [by (1)]
∴ (a+b) f = af+bf.
(iii) Let a, b ∈ F and f ∈ V'. If α ∈ V, we have
[(ab) f](α) = (ab) f(α)    [by (2)]
= a [b f(α)]    [∵ multiplication in F is associative]
= a [(bf)(α)]    [by (2)]
= [a (bf)](α).    [by (2)]
∴ (ab) f = a (bf).
(iv) Let 1 be the multiplicative identity of F and f ∈ V'. If
α ∈ V, we have
(1f)(α) = 1 f(α)    [by (2)]
= f(α).    [∵ F is a field]
∴ 1f = f.
Hence V' is a vector space over the field F.
Dual Space. Definition. Let V be a vector space over the field
F. Then the set V' of all linear functionals on V is also a vector
space over the field F. The vector space V' is called the dual space
of V.
(Meerut 1970, 72, 83; Allahabad 78)
Sometimes V* and V̂ are also used to denote the dual space
of V. The dual space of V is also called the conjugate space
of V.
§ 16. Dual bases.
Theorem 1. Let V be an n-dimensional vector space over the
field F and let B = {α₁, ..., αₙ} be an ordered basis for V. If (x₁, ..., xₙ)
is any ordered set of n scalars, then there exists a unique linear
functional f on V such that f(αᵢ) = xᵢ, i = 1, 2, ..., n.
Proof. Existence of f. Let α ∈ V.
Since B = {α₁, α₂, ..., αₙ} is a basis for V, therefore there exist
unique scalars a₁, a₂, ..., aₙ such that
α = a₁α₁ + ... + aₙαₙ.
For this vector α, let us define
f(α) = a₁x₁ + ... + aₙxₙ.
Obviously f(α) as defined above is a unique element of F.
Therefore f is a well-defined rule for associating with each vector
α in V a unique scalar f(α) in F. Thus f is a function from V
into F.
The unique representation of αᵢ ∈ V as a linear combination of
the vectors belonging to the basis B is
αᵢ = 0α₁ + 0α₂ + ... + 1αᵢ + ... + 0αₙ.
Therefore according to our definition of f, we have
f(αᵢ) = 0x₁ + 0x₂ + ... + 1xᵢ + ... + 0xₙ
i.e. f(αᵢ) = xᵢ, i = 1, 2, ..., n.
Now to show that f is a linear functional.
Let a, b ∈ F and α, β ∈ V. Let
α = a₁α₁ + ... + aₙαₙ,
and β = b₁α₁ + ... + bₙαₙ. Then
f(aα+bβ) = f [a (a₁α₁+...+aₙαₙ) + b (b₁α₁+...+bₙαₙ)]
= f [(aa₁+bb₁) α₁ + ... + (aaₙ+bbₙ) αₙ]
= (aa₁+bb₁) x₁ + ... + (aaₙ+bbₙ) xₙ    [by def. of f]
= a (a₁x₁+...+aₙxₙ) + b (b₁x₁+...+bₙxₙ) = a f(α) + b f(β).
∴ f is a linear functional on V. Thus there exists a linear
functional f on V such that f(αᵢ) = xᵢ, i = 1, 2, ..., n.
Uniqueness of f. Let g be a linear functional on V such that
g(αᵢ) = xᵢ, i = 1, 2, ..., n.
For any vector α = a₁α₁ + ... + aₙαₙ ∈ V, we have
g(α) = g (a₁α₁ + ... + aₙαₙ)
= a₁ g(α₁) + ... + aₙ g(αₙ)    [∵ g is linear]
= a₁x₁ + ... + aₙxₙ    [by def. of g]
= f(α).    [by def. of f]
Thus g(α) = f(α) ∀ α ∈ V.
∴ g = f. This shows the uniqueness of f.
Remark. From this theorem we conclude that if f is a linear
functional on a finite dimensional vector space V, then f is com-
pletely determined if we mention under f the images of the elements
of a basis set of V. If f and g are two linear functionals on V
such that f(αᵢ) = g(αᵢ) for all αᵢ belonging to a basis of V, then
f(α) = g(α) ∀ α ∈ V i.e., f = g. Thus two linear functionals on V
are equal if they agree on a basis of V.
Theorem 2. Let V be an n-dimensional vector space over the
field F and let B = {α₁, ..., αₙ} be a basis for V. Then there is a uni-
quely determined basis B' = {f₁, ..., fₙ} for V' such that fᵢ(αⱼ) = δᵢⱼ.
Consequently the dual space of an n-dimensional space is n-dimen-
sional.    (Meerut 1974, 77, 88, 93; Poona 70; Allahabad 78)
The basis B' is called the dual basis of B.
Proof. B = {α₁, ..., αₙ} is an ordered basis for V. Therefore
by theorem 1, there exists a unique linear functional f₁ on V such
that
f₁(α₁) = 1, f₁(α₂) = 0, ..., f₁(αₙ) = 0,
where {1, 0, ..., 0} is an ordered set of n scalars.
In fact, for each i = 1, 2, ..., n there exists a unique linear
functional fᵢ on V such that
fᵢ(αⱼ) = 1 if j = i
       = 0 if j ≠ i,
i.e. fᵢ(αⱼ) = δᵢⱼ,    ...(1)
where δᵢⱼ ∈ F is the Kronecker delta i.e. δᵢⱼ = 1 if i = j
and δᵢⱼ = 0 if i ≠ j.
Let B' = {f₁, ..., fₙ}. Then B' is a subset of V' containing n
distinct elements of V'. We shall show that B' is a basis for V'.
First we shall show that B' is linearly independent.
Let c₁f₁ + c₂f₂ + ... + cₙfₙ = 0̂
⇒ (c₁f₁+...+cₙfₙ)(α) = 0̂(α) ∀ α ∈ V
⇒ c₁ f₁(α) + ... + cₙ fₙ(α) = 0 ∀ α ∈ V    [∵ 0̂(α) = 0]
⇒ Σᵢ₌₁ⁿ cᵢ fᵢ(α) = 0 ∀ α ∈ V
⇒ Σᵢ₌₁ⁿ cᵢ fᵢ(αⱼ) = 0, j = 1, 2, ..., n
[putting α = αⱼ where j = 1, 2, ..., n]
⇒ Σᵢ₌₁ⁿ cᵢ δᵢⱼ = 0, j = 1, 2, ..., n
⇒ cⱼ = 0, j = 1, 2, ..., n
⇒ f₁, f₂, ..., fₙ are linearly independent.
In the second place, we shall show that the linear span of B'
is equal to V'.
Let f be any element of V'. The linear functional f will be
completely determined if we define it on a basis for V. So let
f(αᵢ) = aᵢ, i = 1, 2, ..., n.    ...(2)
We shall show that f = a₁f₁ + ... + aₙfₙ = Σᵢ₌₁ⁿ aᵢ fᵢ.
We know that two linear functionals on V are equal if they
agree on a basis of V. So let αⱼ ∈ B where j = 1, ..., n. Then
(Σᵢ₌₁ⁿ aᵢ fᵢ)(αⱼ) = Σᵢ₌₁ⁿ aᵢ fᵢ (αⱼ)
= Σᵢ₌₁ⁿ aᵢ δᵢⱼ    [from (1)]
= aⱼ, on summing with respect to i
and remembering that δᵢⱼ = 1 when
i = j and δᵢⱼ = 0 when i ≠ j
= f(αⱼ).    [from (2)]
Thus (Σᵢ₌₁ⁿ aᵢ fᵢ)(αⱼ) = f(αⱼ) ∀ αⱼ ∈ B. Therefore
f = Σᵢ₌₁ⁿ aᵢ fᵢ. Thus every element f in V' can be expressed as a
linear combination of f₁, ..., fₙ.
∴ V' = linear span of B'. Hence B' is a basis for V'.
Now dim V' = number of distinct elements in B' = n.
Corollary. If V is an n-dimensional vector space over the field
F, then V is isomorphic to its dual space V'.
Proof. We have dim V = n = dim V'.
∴ V is isomorphic to V'.
Theorem 3. Let V be an n-dimensional vector space over the
field F and let B = {α₁, ..., αₙ} be a basis for V. Let B' = {f₁, ..., fₙ}
be the dual basis of B. Then for each linear functional f on V, we
have
f = Σᵢ₌₁ⁿ f(αᵢ) fᵢ,
and for each vector α in V we have
α = Σᵢ₌₁ⁿ fᵢ(α) αᵢ.    (Meerut 1972, 79, 85)
Proof. Since B' is the dual basis of B, therefore
fᵢ(αⱼ) = δᵢⱼ.    ...(1)
If f is a linear functional on V, then f ∈ V', for which B' is a
basis. Therefore f can be expressed as a linear combination of
f₁, ..., fₙ. Let f = Σᵢ₌₁ⁿ cᵢ fᵢ.
Then f(αⱼ) = (Σᵢ₌₁ⁿ cᵢ fᵢ)(αⱼ) = Σᵢ₌₁ⁿ cᵢ fᵢ(αⱼ)
= Σᵢ₌₁ⁿ cᵢ δᵢⱼ    [from (1)]
= cⱼ, j = 1, 2, ..., n.
∴ f = Σᵢ₌₁ⁿ f(αᵢ) fᵢ.
Now let α be any vector in V. Let
α = x₁α₁ + ... + xₙαₙ.    ...(2)
Then fᵢ(α) = fᵢ (Σⱼ₌₁ⁿ xⱼ αⱼ)    [from (2)]
= Σⱼ₌₁ⁿ xⱼ fᵢ(αⱼ)    [∵ fᵢ is a linear functional]
= Σⱼ₌₁ⁿ xⱼ δᵢⱼ    [from (1)]
= xᵢ.
∴ α = f₁(α) α₁ + ... + fₙ(α) αₙ = Σᵢ₌₁ⁿ fᵢ(α) αᵢ.
Important. It should be noted that if B = {α₁, ..., αₙ} is an
ordered basis for V and B' = {f₁, ..., fₙ} is the dual basis, then fᵢ is
precisely the function which assigns to each vector α in V the i-th
coordinate of α relative to the ordered basis B.
Theorem 4. Let V be an n-dimensional vector space over the
field F. If α is a non-zero vector in V, there exists a linear functional
f on V such that f(α) ≠ 0.    (Poona 1970)
Proof. Since α ≠ 0, therefore {α} is a linearly independent sub-
set of V. So it can be extended to form a basis for V. Thus
there exists a basis B = {α₁, ..., αₙ} for V such that α₁ = α.
If B' = {f₁, ..., fₙ} is the dual basis, then
f₁(α) = f₁(α₁) = 1 ≠ 0.
Thus there exists a linear functional f₁ such that f₁(α) ≠ 0.
Corollary. Let V be an n-dimensional vector space over the field
F. If f(α) = 0 ∀ f ∈ V', then α = 0.
Proof. Suppose α ≠ 0. Then there is a linear functional f on
V such that f(α) ≠ 0. This contradicts the hypothesis that
f(α) = 0 ∀ f ∈ V'. Hence we must have α = 0.
Theorem 5. Let V be an n-dimensional vector space over the
field F. If α, β are any two different vectors in V, then there exists
a linear functional f on V such that f(α) ≠ f(β).
Proof. We have α ≠ β ⇒ α-β ≠ 0.
Now α-β is a non-zero vector in V. Therefore by theorem 4,
there exists a linear functional f on V such that
f(α-β) ≠ 0
⇒ f(α) - f(β) ≠ 0 ⇒ f(α) ≠ f(β).
Hence the result.
§ 17. Reflexivity.
Second dual space. We know that every vector space V
possesses a dual space V' consisting of all linear functionals on V.
Now V' is also a vector space. Therefore it will also possess a dual
space (V')' consisting of all linear functionals on V'. This dual
space of V' is called the second dual space of V and for the sake
of simplicity we shall denote it by V''.
If V is finite-dimensional, then
dim V = dim V' = dim V'',
showing that they are isomorphic to each other.
Theorem 1. Let V be a finite dimensional vector space over
the field F. If α is any vector in V, the function L_α on V' defined by
L_α(f) = f(α) ∀ f ∈ V'
is a linear functional on V' i.e. L_α ∈ V''.
Also the mapping α → L_α is an isomorphism of V onto V''.
(Meerut 1973, 76, 77, 78, 82, 83, 88, 90, 93P; Kanpur 69)
Proof. If α ∈ V and f ∈ V', then f(α) is a unique element
of F. Therefore the correspondence L_α defined by
L_α(f) = f(α) ∀ f ∈ V'    ...(1)
is a function from V' into F.
Let a, b ∈ F and f, g ∈ V'. Then
L_α (af+bg) = (af+bg)(α)    [from (1)]
= (af)(α) + (bg)(α)
= a f(α) + b g(α)    [by scalar multiplication of linear functionals]
= a L_α(f) + b L_α(g).    [from (1)]
Therefore L_α is a linear functional on V' and thus L_α ∈ V''.
Now let φ be the function from V into V'' defined by
φ(α) = L_α ∀ α ∈ V.
φ is one-one. If α, β ∈ V, then
φ(α) = φ(β)
⇒ L_α = L_β ⇒ L_α(f) = L_β(f) ∀ f ∈ V'
⇒ f(α) = f(β) ∀ f ∈ V'    [from (1)]
⇒ f(α) - f(β) = 0 ∀ f ∈ V' ⇒ f(α-β) = 0 ∀ f ∈ V'
⇒ α-β = 0    [∵ by theorem 4 of § 16, if α-β ≠ 0,
then there exists a linear functional f on V such
that f(α-β) ≠ 0. Here we have
f(α-β) = 0 ∀ f ∈ V' and so α-β must
be 0]
⇒ α = β.
∴ φ is one-one.
φ is a linear transformation.
Let a, b ∈ F and α, β ∈ V. Then
φ(aα+bβ) = L_(aα+bβ).    [by def. of φ]
For every f ∈ V', we have
L_(aα+bβ)(f) = f(aα+bβ)    [from (1)]
= a f(α) + b f(β)
= a L_α(f) + b L_β(f)    [from (1)]
= (aL_α)(f) + (bL_β)(f) = (aL_α+bL_β)(f).
∴ L_(aα+bβ) = aL_α+bL_β.
Thus φ(aα+bβ) = aL_α+bL_β = a φ(α) + b φ(β).
∴ φ is a linear transformation from V into V''. We have
dim V = dim V''. Therefore φ is one-one implies that φ must also
be onto.
Hence φ is an isomorphism of V onto V''.
Note. The correspondence α → L_α as defined in the above
theorem is called the natural correspondence between V and V''. It
is important to note that the above theorem shows not only that
V and V'' are isomorphic (this much is obvious from the fact that
they have the same dimension) but that the natural correspon-
dence is an isomorphism. This property of vector spaces is called
reflexivity. Thus in the above theorem we have proved that every
finite-dimensional vector space is reflexive.
In future we shall identify V with V'' through the natural
isomorphism α ↔ L_α. We shall say that the element L of V'' is the
same as the element α of V iff L = L_α i.e. iff
L(f) = f(α) ∀ f ∈ V'.
It will be in this sense that we shall regard V'' = V.
Theorem 2. Let V be a finite dimensional vector space over the
field F. If L is a linear functional on the dual space V' of V, then
there is a unique vector α in V such that
L(f) = f(α) ∀ f ∈ V'.
Proof. This theorem is an immediate corollary of theorem 1.
We should first prove theorem 1. Then we should conclude like
this :
The correspondence α → L_α is a one-to-one correspondence
between V and V''. Therefore if L ∈ V'', there exists a unique vector
α in V such that L = L_α i.e. such that
L(f) = f(α) ∀ f ∈ V'.
Theorem 3. Let V be a finite dimensional vector space over the
field F. Each basis for V' is the dual of some basis for V.
Proof. Let B' = {f₁, f₂, ..., fₙ} be a basis for V'. Then there
exists a dual basis {L₁, L₂, ..., Lₙ} for V'' such that
Lᵢ(fⱼ) = δᵢⱼ.    ...(1)
By the previous theorem, for each i there is a vector αᵢ in V such
that
Lᵢ = L_αᵢ, where L_αᵢ(f) = f(αᵢ) ∀ f ∈ V'.    ...(2)
The correspondence α → L_α is an isomorphism of V onto V''.
Under an isomorphism a basis is mapped onto a basis. There-
fore B = {α₁, ..., αₙ} is a basis for V because it is the image set of a
basis for V'' under the above isomorphism.
Putting f = fⱼ in (2), we get
Lᵢ(fⱼ) = fⱼ(αᵢ) = δᵢⱼ.    [from (1)]
∴ B' = {f₁, ..., fₙ} is the dual of the basis B.
Hence the result.
Theorem 4. Let V be a finite dimensional vector space over the
field F. Let B be a basis for V and B' be the dual basis of B. Then
show that B'' = (B')' = B.
Proof. Let B = {α₁, ..., αₙ} be a basis for V,
B' = {f₁, ..., fₙ} be the dual basis of B in V', and
B'' = {L₁, ..., Lₙ} be the dual basis of B' in V''. Then
fᵢ(αⱼ) = δᵢⱼ, and Lᵢ(fⱼ) = δᵢⱼ, i = 1, ..., n; j = 1, ..., n.
If α ∈ V, then there exists L_α ∈ V'' such that
L_α(f) = f(α) ∀ f ∈ V'.
Taking αᵢ in place of α, we see that for each j = 1, ..., n,
L_αᵢ(fⱼ) = fⱼ(αᵢ) = δᵢⱼ = Lᵢ(fⱼ).
Thus L_αᵢ and Lᵢ agree on a basis for V'. Therefore L_αᵢ = Lᵢ.
If we identify V'' with V through the natural isomorphism,
then we consider L_α as the same element as α.
So Lᵢ = L_αᵢ = αᵢ where i = 1, 2, ..., n.
Thus B'' = B.

Solved Examples
Example 1. Find the dual basis of the basis set
B = {(1, -1, 3), (0, 1, -1), (0, 3, -2)}
for V₃(R).
Solution. Let α₁ = (1, -1, 3), α₂ = (0, 1, -1), α₃ = (0, 3, -2).
Then B = {α₁, α₂, α₃}.
If B' = {f₁, f₂, f₃} is the dual basis of B, then
f₁(α₁) = 1, f₁(α₂) = 0, f₁(α₃) = 0,
f₂(α₁) = 0, f₂(α₂) = 1, f₂(α₃) = 0,
and f₃(α₁) = 0, f₃(α₂) = 0, f₃(α₃) = 1.
Now to find explicit expressions for f₁, f₂, f₃.
Let (a, b, c) ∈ V₃(R).
Let (a, b, c) = x (1, -1, 3) + y (0, 1, -1) + z (0, 3, -2)    ...(1)
= xα₁ + yα₂ + zα₃.
Then f₁(a, b, c) = x, f₂(a, b, c) = y, and f₃(a, b, c) = z.
Now to find the values of x, y, z.
From (1), we have
x = a, -x+y+3z = b, 3x-y-2z = c.
Solving these equations, we have
x = a, y = 7a-2b-3c, z = b+c-2a.
Hence f₁(a, b, c) = a,
f₂(a, b, c) = 7a-2b-3c,
and f₃(a, b, c) = -2a+b+c.
Therefore B' = {f₁, f₂, f₃} is the dual basis of B, where f₁, f₂, f₃
are as defined above.
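The explicit functionals of Example 1 can be obtained in one step: if the basis vectors are placed as the columns of a matrix M, then fᵢ(v) is the i-th entry of M⁻¹v, so the coefficient row of each dual functional is a row of M⁻¹. A NumPy check (ours, not the book's):

```python
import numpy as np

# Basis vectors of Example 1 as the columns of M
M = np.column_stack([(1, -1, 3), (0, 1, -1), (0, 3, -2)])

# f_i(v) is the i-th coordinate of v relative to the basis, i.e. the
# i-th entry of M^-1 v; so the dual functionals' coefficients are the
# rows of M^-1.
rows = np.linalg.inv(M)
print(np.rint(rows).astype(int))
```

The rows come out as (1, 0, 0), (7, -2, -3) and (-2, 1, 1), matching f₁, f₂, f₃ above.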
Example 2. The vectors α₁ = (1, 1, 1), α₂ = (1, 1, -1) and
α₃ = (1, -1, -1) form a basis of V₃(C). If {f₁, f₂, f₃} is the dual
basis and if α = (0, 1, 0), find f₁(α), f₂(α) and f₃(α).
Solution. Let α = a₁α₁ + a₂α₂ + a₃α₃. Then
f₁(α) = a₁, f₂(α) = a₂, f₃(α) = a₃.
Now α = a₁α₁ + a₂α₂ + a₃α₃
⇒ (0, 1, 0) = a₁ (1, 1, 1) + a₂ (1, 1, -1) + a₃ (1, -1, -1)
⇒ (0, 1, 0) = (a₁+a₂+a₃, a₁+a₂-a₃, a₁-a₂-a₃)
⇒ a₁+a₂+a₃ = 0, a₁+a₂-a₃ = 1, a₁-a₂-a₃ = 0
⇒ a₁ = 0, a₂ = ½, a₃ = -½.
∴ f₁(α) = 0, f₂(α) = ½, f₃(α) = -½.
Example 3. If f is a non-zero linear functional on a vector space
V and if x is an arbitrary scalar, does there necessarily exist a
vector α in V such that f(α) = x ?
Solution. f is a non-zero linear functional on V. Therefore
there must be some non-zero vector β in V such that
f(β) = y, where y is a non-zero element of F.
If x is any element of F, then
x = (xy⁻¹) y = (xy⁻¹) f(β)
= f((xy⁻¹) β).    [∵ f is a linear functional]
Thus there exists α = (xy⁻¹) β such that f(α) = x.
Note. If f is a non-zero linear functional on V(F), then f is
necessarily a function from V onto F.
Important Note. In some books f(α) is written as [α, f].
Example 4. Prove that if f is a linear functional on an n-dimensional vector space V(F), then the set of all those vectors α for which f(α) = 0 is a subspace of V. What is the dimension of that subspace ?
Solution. Let N = {α ∈ V : f(α) = 0}.
N is not empty because at least 0 ∈ N. Remember that f(0) = 0.
Let α, β ∈ N. Then f(α) = 0, f(β) = 0.
If a, b ∈ F, we have
f(aα + bβ) = a f(α) + b f(β) = a0 + b0 = 0.
∴ aα + bβ ∈ N.
Thus a, b ∈ F and α, β ∈ N ⇒ aα + bβ ∈ N.
∴ N is a subspace of V. This subspace N is the null space of f.
We know that dim V = dim N + dim (range of f).
(i) If f is the zero linear functional, then the range of f consists of the zero element of F alone. Therefore dim (range of f) = 0 in this case.
In this case, we have
dim V = dim N + 0
⇒ n = dim N.
(ii) If f is a non-zero linear functional on V, then f is onto F. So the range of f consists of all of F in this case. The dimension of the vector space F over F is 1.
∴ In this case we have
dim V = dim N + 1
⇒ dim N = n - 1.
Example 5. Let V be a vector space over the field F. Let f be a non-zero linear functional on V and let N be the null space of f. Fix a vector α0 in V which is not in N. Prove that for each α in V there is a scalar c and a vector β in N such that α = cα0 + β. Prove that c and β are unique.
Solution. Since f is a non-zero linear functional on V, therefore there exists a non-zero vector α0 in V such that f(α0) ≠ 0. Consequently α0 ∉ N. Let f(α0) = y ≠ 0.
Let α be any element of V and let f(α) = x.
We have f(α) = x
⇒ f(α) = (x y⁻¹) y   [∵ 0 ≠ y ∈ F ⇒ y⁻¹ exists]
⇒ f(α) = cy where c = x y⁻¹ ∈ F
⇒ f(α) = c f(α0)
⇒ f(α) = f(cα0)   [∵ f is a linear functional]
⇒ f(α) - f(cα0) = 0
⇒ f(α - cα0) = 0
⇒ α - cα0 ∈ N
⇒ α - cα0 = β for some β ∈ N
⇒ α = cα0 + β.
If possible, let
α = c'α0 + β' where c' ∈ F and β' ∈ N.
Then cα0 + β = c'α0 + β'   ...(1)
⇒ (c - c')α0 + (β - β') = 0
⇒ f[(c - c')α0 + (β - β')] = f(0)
⇒ (c - c') f(α0) + f(β - β') = 0
⇒ (c - c') f(α0) = 0   [∵ β, β' ∈ N ⇒ β - β' ∈ N and thus f(β - β') = 0]
⇒ c - c' = 0   [∵ f(α0) is a non-zero element of F]
⇒ c = c'.
Putting c = c' in (1), we get
cα0 + β = cα0 + β'
⇒ β = β'.
Hence c and β are unique.
Example 6. If f and g are in V' such that f(α) = 0 ⇒ g(α) = 0, prove that g = kf for some k ∈ F. (Meerut 1979, 84)
Solution. It is given that f(α) = 0 ⇒ g(α) = 0. Therefore if α belongs to the null space of f, then α also belongs to the null space of g. Thus the null space of f is a subset of the null space of g.
(i) If f is the zero linear functional, then the null space of f is equal to V. Therefore in this case V is a subset of the null space of g. Hence the null space of g is equal to V. So g is also the zero linear functional. Hence we have
g = kf ∀ k ∈ F.
(ii) Let f be a non-zero linear functional on V. Then there exists a non-zero vector α0 ∈ V such that f(α0) = y where y is a non-zero element of F.
Let k = g(α0) / f(α0).
If α ∈ V, then we can write
α = cα0 + β where c ∈ F and β ∈ null space of f.
We have g(α) = g(cα0 + β) = c g(α0) + g(β)
= c g(α0)
[∵ β ∈ null space of f ⇒ f(β) = 0 and so g(β) = 0]
Also (kf)(α) = k f(α) = k f(cα0 + β)
= k [c f(α0) + f(β)]
= k c f(α0)   [∵ f(β) = 0]
= [g(α0) / f(α0)] c f(α0) = c g(α0).
Thus g(α) = (kf)(α) ∀ α ∈ V.
∴ g = kf.

Exercises
1. Prove that every finite dimensional vector space V is isomor* ,
phic to its second conjugate space V** under an isomorphism
which is independent of the choice of a basis in F.
(Meerut 1973)
2. Find the dual basis of the basis set
B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
for V3(R).
Ans. B' = {f1, f2, f3}
where f1(a, b, c) = a, f2(a, b, c) = b, f3(a, b, c) = c.
3. Find the dual basis of the basis set
B = {(1, -2, 3), (1, -1, 1), (2, -4, 7)} of V3(R).
Ans. B' = {f1, f2, f3}
where f1(a, b, c) = -3a - 5b - 2c,
f2(a, b, c) = 2a + b, f3(a, b, c) = a + 2b + c.
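The answer to Exercise 3 can be reproduced mechanically (a sketch, not part of the text): if the columns of a matrix P are the given basis vectors, the coefficient rows of the dual functionals are exactly the rows of P⁻¹.

```python
from fractions import Fraction

def invert(M):
    """Gauss-Jordan inversion over the rationals."""
    n = len(M)
    A = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

# Columns of P are the basis vectors of Exercise 3.
alphas = [(1, -2, 3), (1, -1, 1), (2, -4, 7)]
P = [[alphas[j][i] for j in range(3)] for i in range(3)]
# If (a,b,c) = x*alpha_1 + y*alpha_2 + z*alpha_3, then (x,y,z) = P^{-1}(a,b,c),
# so the rows of P^{-1} hold the coefficients of f_1, f_2, f_3.
rows = invert(P)
for f in rows:
    print([int(x) for x in f])
```

The three printed rows match the answer given above.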
§ 18. Annihilators.
Definition. If V is a vector space over the field F and S is a subset of V, the annihilator of S is the set S⁰ of all linear functionals f on V such that
f(α) = 0 ∀ α ∈ S.
(Meerut 1970, 71, 76, 92; Marathwada 71; S.V.U. Tirupati 90)
Sometimes A(S) is also used to denote the annihilator of S.
Thus S⁰ = {f ∈ V' : f(α) = 0 ∀ α ∈ S}.
It should be noted that we have defined the annihilator of S, which is simply a subset of V ; S need not be a subspace of V.
If S = zero subspace of V, then S⁰ = V'. (Meerut 1976)
If S = V, then S⁰ = zero subspace of V'. (Meerut 1976)
If V is finite dimensional and S contains a non-zero vector, then S⁰ ≠ V'. If 0 ≠ α ∈ S, then there is a linear functional f on V such that f(α) ≠ 0. Thus there is f ∈ V' such that f ∉ S⁰. Therefore S⁰ ≠ V'.
Theorem 1. If S is any subset of a vector space V(F), then S⁰ is a subspace of V'. (Poona 1970; Meerut 79, 89, 92; Kanpur 81)
Proof. First we see that S⁰ is a non-empty subset of V' because at least the zero functional 0̂ ∈ S⁰. We have
0̂(α) = 0 ∀ α ∈ S.
Let f, g ∈ S⁰. Then f(α) = 0 ∀ α ∈ S,
and g(α) = 0 ∀ α ∈ S.
If a, b ∈ F, then
(af + bg)(α) = (af)(α) + (bg)(α) = a f(α) + b g(α) = a0 + b0 = 0.
∴ af + bg ∈ S⁰.
Thus a, b ∈ F and f, g ∈ S⁰ ⇒ af + bg ∈ S⁰.
∴ S⁰ is a subspace of V'.
Dimension of annihilator.
Theorem 2. Let V be a finite dimensional vector space over the field F, and let W be a subspace of V. Then
dim W + dim W⁰ = dim V.
(Meerut 1980, 81, 82, 84, 87, 91, 92; Marathwada 71, 93; S.V.U. Tirupati 90)
Proof. If W is the zero subspace of V, then W⁰ = V'.
∴ dim W⁰ = dim V' = dim V.
Also in this case dim W = 0. Hence the result.
Similarly the result is obvious when W = V.
Let us now suppose that W is a proper subspace of V. Let dim V = n, and dim W = m where 0 < m < n.
Let B1 = {α1, ..., αm} be a basis for W. Since B1 is a linearly independent subset of V also, therefore it can be extended to form a basis for V. Let
B = {α1, ..., αm, αm+1, ..., αn}
be a basis for V.
Let B' = {f1, ..., fn} be the dual basis of B. Then B' is a basis for V' such that fi(αj) = δij.
We claim that S = {fm+1, ..., fn} is a basis for W⁰.
Since S ⊆ B', therefore S is linearly independent because B' is linearly independent. So S will be a basis for W⁰ if W⁰ is equal to the subspace of V' spanned by S, i.e. if W⁰ = L(S).
First we shall show that W⁰ ⊆ L(S). Let f ∈ W⁰. Then f ∈ V'. So let
f = Σ (i = 1 to n) xi fi.   ...(1)
Now f ∈ W⁰ ⇒ f(α) = 0 ∀ α ∈ W
⇒ f(αj) = 0 for each j = 1, ..., m   [∵ α1, ..., αm are in W]
⇒ Σ (i = 1 to n) xi fi(αj) = 0   [From (1)]
⇒ Σ (i = 1 to n) xi δij = 0
⇒ xj = 0 for each j = 1, ..., m.
Putting x1 = 0, ..., xm = 0 in (1), we get
f = xm+1 fm+1 + ... + xn fn
= a linear combination of the elements of S.
∴ f ∈ L(S).
Thus f ∈ W⁰ ⇒ f ∈ L(S).
∴ W⁰ ⊆ L(S).
Now we shall show that L(S) ⊆ W⁰.
Let g ∈ L(S). Then g is a linear combination of fm+1, ..., fn.
Let g = Σ (k = m+1 to n) yk fk.   ...(2)
Let α ∈ W. Then α is a linear combination of α1, ..., αm. Let
α = Σ (j = 1 to m) cj αj.   ...(3)
We have g(α) = g (Σ (j = 1 to m) cj αj)   [From (3)]
= Σ (j = 1 to m) cj g(αj)   [∵ g is a linear functional]
= Σ (j = 1 to m) cj Σ (k = m+1 to n) yk fk(αj)   [From (2)]
= Σ (j = 1 to m) cj Σ (k = m+1 to n) yk δkj
= Σ (j = 1 to m) cj 0   [∵ δkj = 0 if k ≠ j, which is so for each k = m+1, ..., n and for each j = 1, ..., m]
= 0.
Thus g(α) = 0 ∀ α ∈ W. Therefore g ∈ W⁰.
Thus g ∈ L(S) ⇒ g ∈ W⁰.
∴ L(S) ⊆ W⁰.
Hence W⁰ = L(S) and S is a basis for W⁰.
∴ dim W⁰ = n - m = dim V - dim W
or dim V = dim W + dim W⁰.
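For V = F^n the theorem can be illustrated concretely (a sketch with an assumed example, not from the text): identifying a functional Σ ci xi with its coefficient vector (c1, ..., cn), W⁰ corresponds to the null space of the matrix whose rows span W, so dim W⁰ = n − rank.

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce over the rationals and count the pivots."""
    A = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(A[0])):
        pivot = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        A[r] = [x / A[r][col] for x in A[r]]
        for i in range(len(A)):
            if i != r and A[i][col] != 0:
                A[i] = [x - A[i][col] * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

# W = span of two vectors in V = R^4.  A functional sum(c_i x_i) annihilates W
# exactly when (c_1, ..., c_4) lies in the null space of the matrix whose rows
# span W, so dim W0 = 4 - rank.
spanning = [(1, 0, 2, -1), (0, 1, 1, 3)]
n = 4
dim_W = rank(spanning)
dim_W0 = n - dim_W
assert dim_W + dim_W0 == n
```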
Corollary. If V is finite-dimensional and W is a subspace of V, then W' is isomorphic to V'/W⁰.
Proof. Let dim V = n and dim W = m. W' is the dual space of W, so dim W' = dim W = m.
Now dim V'/W⁰ = dim V' - dim W⁰
= dim V - (dim V - dim W) = dim W = m.
Since dim W' = dim V'/W⁰, therefore W' ≅ V'/W⁰.
Annihilator of an annihilator. Let V be a vector space over the field F. If S is any subset of V, then S⁰ is a subspace of V'. By definition of an annihilator, we have
(S⁰)⁰ = S⁰⁰ = {L ∈ V'' : L(f) = 0 ∀ f ∈ S⁰}.
Obviously S⁰⁰ is a subspace of V''. But if V is finite dimensional, then we have identified V'' with V through the natural isomorphism α ↔ L_α. Therefore we may regard S⁰⁰ as a subspace of V. Thus
S⁰⁰ = {α ∈ V : f(α) = 0 ∀ f ∈ S⁰}.
Theorem 3. Let V be a finite dimensional vector space over the field F and let W be a subspace of V. Then W⁰⁰ = W.
(Meerut 1968, 78, 90, 91)
Proof. We have
W⁰ = {f ∈ V' : f(α) = 0 ∀ α ∈ W}   ...(1)
and W⁰⁰ = {α ∈ V : f(α) = 0 ∀ f ∈ W⁰}.   ...(2)
Let α ∈ W. Then from (1), f(α) = 0 ∀ f ∈ W⁰ and so from (2), α ∈ W⁰⁰.
∴ α ∈ W ⇒ α ∈ W⁰⁰.
Thus W ⊆ W⁰⁰. Now W is a subspace of V and W⁰⁰ is also a subspace of V. Since W ⊆ W⁰⁰, therefore W is a subspace of W⁰⁰.
Now dim W + dim W⁰ = dim V.   [by theorem 2]
Applying the same theorem for the vector space V' and its subspace W⁰, we get
dim W⁰ + dim W⁰⁰ = dim V' = dim V.
∴ dim W⁰⁰ = dim V - dim W⁰ = dim V - [dim V - dim W] = dim W.
Since W is a subspace of W⁰⁰ and dim W = dim W⁰⁰, therefore W = W⁰⁰.

Solved Examples
Example 1. If S1 and S2 are two subsets of a vector space V such that S1 ⊆ S2, then show that S2⁰ ⊆ S1⁰.
Solution. Let f ∈ S2⁰. Then
f(α) = 0 ∀ α ∈ S2
⇒ f(α) = 0 ∀ α ∈ S1   [∵ S1 ⊆ S2]
⇒ f ∈ S1⁰.
∴ S2⁰ ⊆ S1⁰.
Example 2. Let V be a vector space over the field F. If S is any subset of V, then show that S⁰ = [L(S)]⁰.
Solution. We know that S ⊆ L(S).
∴ [L(S)]⁰ ⊆ S⁰.   ...(1)
Now let f ∈ S⁰. Then f(α) = 0 ∀ α ∈ S.
If β is any element of L(S), then
β = Σ xi αi where each αi ∈ S.
We have f(β) = Σ xi f(αi)
= 0, since each f(αi) = 0.
Thus f(β) = 0 ∀ β ∈ L(S).
∴ f ∈ [L(S)]⁰.
Therefore S⁰ ⊆ [L(S)]⁰.   ...(2)
From (1) and (2), we conclude that S⁰ = [L(S)]⁰.
Example 3. Let V be a finite-dimensional vector space over the field F. If S is any subset of V, then S⁰⁰ = L(S).
Solution. We have S⁰ = [L(S)]⁰. [See Ex. 2]
∴ S⁰⁰ = [L(S)]⁰⁰.   ...(1)
But V is finite-dimensional and L(S) is a subspace of V. Therefore by theorem 3, [L(S)]⁰⁰ = L(S).
∴ from (1), we have S⁰⁰ = L(S).
Example 4. Let V be a finite dimensional vector space over the field F. If W1 and W2 are subspaces of V, then W1⁰ = W2⁰ iff W1 = W2. (Meerut 1979)
Solution. We have W1 = W2
⇒ W1⁰ = W2⁰.
Conversely, let W1⁰ = W2⁰.
Then W1⁰⁰ = W2⁰⁰
⇒ W1 = W2.
Example 5. Let W1 and W2 be subspaces of a finite dimensional vector space V.
(a) Prove that (W1 + W2)⁰ = W1⁰ ∩ W2⁰.
(I.A.S. 1985; Meerut 76, 77, 88, 91, 93P)
(b) Prove that (W1 ∩ W2)⁰ = W1⁰ + W2⁰.
(Meerut 1970, 73, 75, 77, 88, 91, 93P)
Solution. (a) First we shall prove that
W1⁰ ∩ W2⁰ ⊆ (W1 + W2)⁰.
Let f ∈ W1⁰ ∩ W2⁰. Then f ∈ W1⁰, f ∈ W2⁰.
Suppose α is any vector in W1 + W2. Then
α = α1 + α2 where α1 ∈ W1, α2 ∈ W2.
We have f(α) = f(α1 + α2)
= f(α1) + f(α2)
= 0 + 0   [∵ α1 ∈ W1 and f ∈ W1⁰ ⇒ f(α1) = 0, and similarly f(α2) = 0]
= 0.
Thus f(α) = 0 ∀ α ∈ W1 + W2.
∴ f ∈ (W1 + W2)⁰.
∴ W1⁰ ∩ W2⁰ ⊆ (W1 + W2)⁰.   ...(1)
Now we shall prove that
(W1 + W2)⁰ ⊆ W1⁰ ∩ W2⁰.
We have W1 ⊆ W1 + W2.
∴ (W1 + W2)⁰ ⊆ W1⁰.   ...(2)
Similarly W2 ⊆ W1 + W2.
∴ (W1 + W2)⁰ ⊆ W2⁰.   ...(3)
From (2) and (3), we have
(W1 + W2)⁰ ⊆ W1⁰ ∩ W2⁰.   ...(4)
From (1) and (4), we have
(W1 + W2)⁰ = W1⁰ ∩ W2⁰.
(b) Let us use the result (a) for the vector space V' in place of the vector space V. Thus replacing W1 by W1⁰ and W2 by W2⁰ in (a), we get
(W1⁰ + W2⁰)⁰ = W1⁰⁰ ∩ W2⁰⁰
⇒ (W1⁰ + W2⁰)⁰ = W1 ∩ W2   [∵ W1⁰⁰ = W1 etc.]
⇒ (W1⁰ + W2⁰)⁰⁰ = (W1 ∩ W2)⁰
⇒ W1⁰ + W2⁰ = (W1 ∩ W2)⁰.
Example 6. If W1 and W2 are subspaces of a vector space V, and if V = W1 ⊕ W2, then V' = W1⁰ ⊕ W2⁰.
Solution. To prove that V' = W1⁰ ⊕ W2⁰ we are to prove that
(i) W1⁰ ∩ W2⁰ = {0̂}
and (ii) V' = W1⁰ + W2⁰, i.e. each f ∈ V' can be written as f1 + f2 where f1 ∈ W1⁰, f2 ∈ W2⁰.
(i) First to prove that W1⁰ ∩ W2⁰ = {0̂}.
Let f ∈ W1⁰ ∩ W2⁰. Then f ∈ W1⁰ and f ∈ W2⁰.
If α is any vector in V, then, V being the direct sum of W1 and W2, we can write
α = α1 + α2 where α1 ∈ W1, α2 ∈ W2.
We have f(α) = f(α1 + α2)
= f(α1) + f(α2)   [∵ f is a linear functional]
= 0 + 0   [∵ f ∈ W1⁰ and α1 ∈ W1 ⇒ f(α1) = 0, and similarly f(α2) = 0]
= 0.
Thus f(α) = 0 ∀ α ∈ V.
∴ f = 0̂.
∴ W1⁰ ∩ W2⁰ = {0̂}.
(ii) Now to prove that V' = W1⁰ + W2⁰.
Let f ∈ V'.
If α ∈ V, then α can be uniquely written as
α = α1 + α2 where α1 ∈ W1, α2 ∈ W2.
For each f, let us define two functions f1 and f2 from V into F such that
f1(α) = f1(α1 + α2) = f(α2)   ...(1)
and f2(α) = f2(α1 + α2) = f(α1).   ...(2)
First we shall show that f1 is a linear functional on V. Let a, b ∈ F and α = α1 + α2, β = β1 + β2 ∈ V where α1, β1 ∈ W1 and α2, β2 ∈ W2. Then
f1(aα + bβ) = f1 [a (α1 + α2) + b (β1 + β2)]
= f1 [(aα1 + bβ1) + (aα2 + bβ2)]
= f(aα2 + bβ2)   [∵ aα1 + bβ1 ∈ W1, aα2 + bβ2 ∈ W2]
= a f(α2) + b f(β2)   [∵ f is a linear functional]
= a f1(α) + b f1(β).   [From (1)]
∴ f1 is a linear functional on V, i.e. f1 ∈ V'.
Now we shall show that f1 ∈ W1⁰.
Let α1 be any vector in W1. Then α1 is also in V. We can write
α1 = α1 + 0, where α1 ∈ W1, 0 ∈ W2.
∴ from (1), we have
f1(α1) = f1(α1 + 0) = f(0) = 0.
Thus f1(α1) = 0 ∀ α1 ∈ W1.
∴ f1 ∈ W1⁰.
Similarly we can show that f2 is a linear functional on V and f2 ∈ W2⁰.
Now we claim that f = f1 + f2.
Let α be any element in V. Let
α = α1 + α2, where α1 ∈ W1, α2 ∈ W2.
Then (f1 + f2)(α) = f1(α) + f2(α)
= f(α2) + f(α1)   [From (1) and (2)]
= f(α1) + f(α2)
= f(α1 + α2)   [∵ f is a linear functional]
= f(α).
Thus (f1 + f2)(α) = f(α) ∀ α ∈ V.
∴ f = f1 + f2.
Thus f ∈ V' ⇒ f = f1 + f2 where f1 ∈ W1⁰, f2 ∈ W2⁰.
Hence V' = W1⁰ ⊕ W2⁰.
Example 7. If W1 and W2 are subspaces of a finite-dimensional vector space V and if V = W1 ⊕ W2, then
(i) W1' is isomorphic to W2⁰,
(ii) W2' is isomorphic to W1⁰.
Solution. Let dim V = n, dim W1 = m.
Then dim W2 = n - m.
We have dim W1' = dim W1 = m.
Also dim W2⁰ = dim V - dim W2 = n - (n - m) = m.
∴ dim W1' = dim W2⁰
⇒ W1' is isomorphic to W2⁰.
Again dim W2' = dim W2 = n - m.
Also dim W1⁰ = dim V - dim W1 = n - m.
∴ dim W2' = dim W1⁰
⇒ W2' ≅ W1⁰.

§ 19. Invariant Direct-sum Decompositions.
Let T be a linear operator on a vector space V(F). If S is a non-empty subset of V, then by T(S) we mean the set of those elements of V which are images under T of the elements in S. Thus
T(S) = {T(α) ∈ V : α ∈ S}.
Obviously T(S) ⊆ V. We call it the image of S under T.
Invariance. Definition. Let V be a vector space and T a linear operator on V. If W is a subspace of V, we say that W is invariant under T if α ∈ W ⇒ T(α) ∈ W. (Meerut)
Example 1. If T is any linear operator on V, then V is invariant under T. If α ∈ V, then T(α) ∈ V because T is a linear operator on V. Thus V is invariant under T.
The zero subspace of V is also invariant under T. The zero subspace contains only one vector, i.e., 0, and we know that T(0) = 0, which is in the zero subspace.
Example 2. Let V(F) be the vector space of all polynomials over F and let D be the differentiation operator on V. Let W be the subspace of V consisting of all polynomials of degree not greater than n.
If f(x) ∈ W, then D [f(x)] ∈ W because the differentiation operator D is degree-decreasing. Therefore W is invariant under D.
Let W be a subspace of the vector space V and let W be invariant under the linear operator T on V, i.e. let
α ∈ W ⇒ T(α) ∈ W.
We know that W itself is a vector space. If we ignore the fact that T is defined outside W, then we may regard T as a linear operator on W.
Thus the linear operator T induces a linear operator T_W on the vector space W defined by
T_W(α) = T(α) ∀ α ∈ W.
It should be noted that T_W is quite a different object from T because the domain of T_W is W while the domain of T is V.
Invariance can also be considered for several linear transformations. Thus W is invariant under a set of linear transformations if it is invariant under each member of the set.
Matrix interpretation of invariance. Let V be a finite dimensional vector space over the field F and let T be a linear operator on V. Suppose V has a subspace W which is invariant under T. Then we can choose a suitable ordered basis B for V so that the matrix of T with respect to B takes some particularly simple form.
Let B1 = {α1, ..., αm} be an ordered basis for W where dim W = m. We can extend B1 to form a basis for V. Let
B = {α1, ..., αm, αm+1, ..., αn}
be an ordered basis for V where dim V = n.
Let A = [aij] (n × n) be the matrix of T with respect to the ordered basis B. Then
T(αj) = Σ (i = 1 to n) aij αi, j = 1, 2, ..., n.   ...(1)
If 1 ≤ j ≤ m, then αj is in W. But W is invariant under T. Therefore if 1 ≤ j ≤ m, then T(αj) is in W and so it can be expressed as a linear combination of the vectors α1, ..., αm which form a basis for W. This means that
T(αj) = Σ (i = 1 to m) aij αi, 1 ≤ j ≤ m.   ...(2)
In other words, in the relation (1), the scalars aij are all zero if 1 ≤ j ≤ m and m + 1 ≤ i ≤ n.
Therefore the matrix A takes the simple form
A = [ M  C ]
    [ O  D ]
where M is an m × m matrix, C is an m × (n - m) matrix, O is the null matrix of the type (n - m) × m and D is an (n - m) × (n - m) matrix.
From the relation (2) it is obvious that the matrix M is nothing but the matrix of the induced operator T_W on W relative to the ordered basis B1 for W.
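A small numeric illustration (assumed data, not from the text): for a block upper-triangular matrix of the displayed shape, a vector whose last n − m coordinates vanish is mapped to a vector of the same kind, which is exactly the invariance of W = span{α1, ..., αm} read off in coordinates.

```python
# Block upper-triangular matrix: the zero block in the lower-left corner
# encodes invariance of the subspace spanned by the first m basis vectors.
m, n = 2, 4
A = [
    [2, 1, 5, -3],   # [ M  C ]
    [0, 3, 1,  4],   # [ O  D ]
    [0, 0, 7,  2],
    [0, 0, 1,  1],
]

def apply(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

w = [4, -1, 0, 0]            # an element of W = span{e1, e2}, in coordinates
image = apply(A, w)
assert image[m:] == [0, 0]   # T(w) again lies in W
```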
Reducibility. Definition. Let W1 and W2 be two subspaces of a vector space V and let T be a linear operator on V. Then T is said to be reduced by the pair (W1, W2) if
(i) V = W1 ⊕ W2;
(ii) both W1 and W2 are invariant under T.
It should be noted that if a subspace W1 of V is invariant under T, then there are many ways of finding a subspace W2 of V such that V = W1 ⊕ W2, but it is not necessary that some such W2 will also be invariant under T. In other words, among the collection of all subspaces invariant under T we may not be able to select any two, other than V and the zero subspace, with the property that V is their direct sum.
The definition of reducibility can be extended to more than two subspaces. Thus let W1, ..., Wk be k subspaces of a vector space V and let T be a linear operator on V. Then T is said to be reduced by (W1, ..., Wk) if
(i) V is the direct sum of the subspaces W1, ..., Wk;
(ii) each of the subspaces Wi is invariant under T.
Direct sum of linear operators. Definition.
Suppose T is a linear operator on the vector space V. Let
V = W1 ⊕ ... ⊕ Wk
be a direct sum decomposition of V in which each subspace Wi is invariant under T. Then T induces a linear operator Ti on each Wi by restricting its domain from V to Wi. If α ∈ V, then there exist unique vectors α1, ..., αk with αi in Wi such that
α = α1 + ... + αk
⇒ T(α) = T(α1 + ... + αk)
⇒ T(α) = T(α1) + ... + T(αk)   [∵ T is linear]
⇒ T(α) = T1(α1) + ... + Tk(αk)   [∵ if αi ∈ Wi, then by def. of Ti, we have T(αi) = Ti(αi)]
Thus we can find the action of T on V with the help of the independent action of the operators Ti on the subspaces Wi. In such a situation we say that the operator T is the direct sum of the operators T1, ..., Tk. It should be noted carefully that T is a linear operator on V, while the Ti are linear operators on the various subspaces Wi.
Matrix representation of reducibility. If T is a linear operator on a finite dimensional vector space V and T is reduced by the pair (W1, W2), then by choosing a suitable basis B for V we can give a particularly simple form to the matrix of T with respect to B.
Let dim V = n and dim W1 = m. Then dim W2 = n - m since V is the direct sum of W1 and W2.
Let B1 = {α1, ..., αm} be a basis for W1 and B2 = {αm+1, ..., αn} be a basis for W2. Then B = B1 ∪ B2 = {α1, ..., αn} is a basis for V.
It can be easily seen, as in the case of invariance, that
[T]_B = [ M  O ]
        [ O  N ]
where M is an m × m matrix, N is an (n - m) × (n - m) matrix and the O are null matrices of suitable sizes.
Also if T1 and T2 are the linear operators induced by T on W1 and W2 respectively, then
M = [T1]_B1 and N = [T2]_B2.
Solved Examples
Example 1. If T is a linear operator on a vector space V and if W is any subspace of V, then T(W) is a subspace of V. Also W is invariant under T iff T(W) ⊆ W.
Solution. We have, by definition,
T(W) = {T(α) : α ∈ W}.
Since 0 ∈ W and T(0) = 0, therefore T(W) is not empty because at least 0 ∈ T(W).
Now let T(α1), T(α2) be any two elements of T(W) where α1, α2 are any two elements of W.
If a, b ∈ F, then
a T(α1) + b T(α2) = T(aα1 + bα2), because T is linear.
But W is a subspace of V. Therefore α1, α2 ∈ W and a, b ∈ F ⇒ aα1 + bα2 ∈ W. Consequently T(aα1 + bα2) ∈ T(W). Thus
a, b ∈ F and T(α1), T(α2) ∈ T(W)
⇒ a T(α1) + b T(α2) ∈ T(W).
∴ T(W) is a subspace of V.
Second Part. Suppose W is invariant under T.
Let T(α) be any element of T(W) where α ∈ W.
Since α ∈ W and W is invariant under T, therefore T(α) ∈ W. Thus T(α) ∈ T(W) ⇒ T(α) ∈ W.
Therefore T(W) ⊆ W.
Conversely suppose that T(W) ⊆ W.
Then T(α) ∈ W ∀ α ∈ W. Therefore W is invariant under T.
Example 2. If T is any linear operator on a vector space V, then the range of T and the null space of T are both invariant under T.
Solution. Let N(T) be the null space of T. Then
N(T) = {α ∈ V : T(α) = 0}.
If β ∈ N(T), then T(β) = 0 ∈ N(T) because N(T) is a subspace.
∴ N(T) is invariant under T.
Again let R(T) be the range of T. Then
R(T) = {T(α) ∈ V : α ∈ V}.
Since R(T) is a subset of V, therefore β ∈ R(T) ⇒ β ∈ V.
Now β ∈ V ⇒ T(β) ∈ R(T).
Thus β ∈ R(T) ⇒ T(β) ∈ R(T). Therefore R(T) is invariant under T.
Example 3. If the set S = {Wi} is a collection of subspaces of a vector space V which are invariant under T, then show that W = ∩ Wi is also invariant under T.
Solution. We have
α ∈ W = ∩ Wi ⇒ α ∈ Wi for each i
⇒ T(α) ∈ Wi for each i   [∵ each Wi is invariant under T]
⇒ T(α) ∈ ∩ Wi ⇒ T(α) ∈ W.
∴ W is invariant under T.
Example 4. Prove that the subspace spanned by two subspaces, each of which is invariant under some linear operator T, is itself invariant under T. (Meerut 1987)
Solution. Let W1 and W2 be two subspaces of a vector space V. Let W be the subspace of V spanned by W1 ∪ W2. Then we know that W = W1 + W2.
Now it is given that both W1 and W2 are invariant under a linear operator T and we are to prove that W is also invariant under T.
Let α ∈ W. Then
α = α1 + α2, where α1 ∈ W1, α2 ∈ W2.
We have T(α) = T(α1 + α2)
= T(α1) + T(α2) because T is linear.
Now T(α1) ∈ W1 since W1 is invariant under T and α1 ∈ W1. Similarly T(α2) ∈ W2.
Thus T(α1) + T(α2) ∈ W1 + W2,
i.e. T(α) = T(α1) + T(α2) ∈ W.
Thus α ∈ W ⇒ T(α) ∈ W.
∴ W is invariant under T.
Example 5. Let V be a vector space over the field F, let T be a linear operator on V and let f(t) be a polynomial in the indeterminate t over the field F. If W is the null space of the operator f(T), then W is invariant under T.
Solution. If f(t) is a polynomial in the indeterminate t over the field F, then we know that f(T) is a linear operator on V where T is a linear operator on V.
Now W is the null space of f(T). Therefore
α ∈ W ⇒ f(T)(α) = 0.   ...(1)
We are to show that W is invariant under T,
i.e. α ∈ W ⇒ T(α) ∈ W.
Obviously [f(T)] T = T f(T) because t f(t) = f(t) t and polynomials in T behave like ordinary polynomials.
∴ α ∈ W ⇒ [(f(T)) T](α) = (T f(T))(α)
⇒ f(T) [T(α)] = T [f(T)(α)]
⇒ f(T) [T(α)] = T(0)   [From (1)]
⇒ f(T) [T(α)] = 0
⇒ T(α) ∈ W since W is the null space of f(T).
∴ W is invariant under T.
Example 6. Give an example of a linear transformation T on a finite-dimensional vector space V such that V and the zero subspace are the only subspaces invariant under T.
Solution. Let T be the linear operator on V2(R) which is represented in the standard ordered basis by the matrix
[ 0  -1 ]
[ 1   0 ]
Let W be a proper non-zero subspace of V2(R) which is invariant under T. Then W must be of dimension 1. Let W be the subspace spanned by some non-zero vector α. Now α ∈ W and W is invariant under T. Therefore T(α) ∈ W.
∴ T(α) = cα for some c ∈ R
⇒ T(α) = cI(α) where I is the identity operator on V
⇒ (T - cI)(α) = 0
⇒ T - cI is singular   [∵ α ≠ 0]
⇒ T - cI is not invertible.
If B denotes the standard ordered basis for V2(R), then
[T - cI]_B = [T]_B - c [I]_B
= [ 0  -1 ] - c [ 1  0 ] = [ -c  -1 ]
  [ 1   0 ]     [ 0  1 ]   [  1  -c ]
Now det [ -c  -1 ; 1  -c ] = c² + 1 ≠ 0 for any real number c.
∴ [T - cI]_B is invertible, and consequently T - cI is invertible, which contradicts the result that T - cI is not invertible.
Hence no proper non-zero subspace W of V2(R) can be invariant under T.
Example 7. Show that the space generated by (1, 1, 1) and (1, 2, 1) is an invariant subspace of R³ under T, where
T(x, y, z) = (x + y - z, x + y, x + y - z). (Meerut 1977)
Solution. Let W be the subspace of R³ generated by the vectors (1, 1, 1) and (1, 2, 1). T is a linear transformation on R³ defined by T(x, y, z) = (x + y - z, x + y, x + y - z).
Now W will be invariant under T if α ∈ W ⇒ T(α) ∈ W. If α is an arbitrary vector in W, then α = a (1, 1, 1) + b (1, 2, 1) for some a, b ∈ R. Since T is a linear transformation, therefore
T(α) = a T(1, 1, 1) + b T(1, 2, 1).
Now T(α) will be in W if we show that T(1, 1, 1) and T(1, 2, 1) are both in W. We have T(1, 1, 1) = (1 + 1 - 1, 1 + 1, 1 + 1 - 1) = (1, 2, 1), which is a vector belonging to the set generating W. Therefore T(1, 1, 1) is in W. Also T(1, 2, 1) = (1 + 2 - 1, 1 + 2, 1 + 2 - 1) = (2, 3, 2) = (1, 1, 1) + (1, 2, 1). Thus T(1, 2, 1) is a linear combination of the vectors (1, 1, 1) and (1, 2, 1) which generate W. Therefore T(1, 2, 1) is also in W.
Hence α ∈ W ⇒ T(α) ∈ W. Therefore W is invariant under T.
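The invariance just proved can also be checked in coordinates (a sketch, not part of the original solution; the extension vector α3 = (0, 0, 1) is an assumption made here). Computing the matrix of T in the ordered basis {(1,1,1), (1,2,1), (0,0,1)} exhibits the block form of § 19: the α3-coordinate of T(α1) and T(α2) vanishes.

```python
from fractions import Fraction

def T(v):
    x, y, z = v
    return (x + y - z, x + y, x + y - z)

# Ordered basis: alpha_1, alpha_2 span the invariant subspace W;
# alpha_3 = (0, 0, 1) extends them to a basis of R^3 (an assumed choice).
B = [(1, 1, 1), (1, 2, 1), (0, 0, 1)]

def coords(v):
    """Solve v = a*alpha_1 + b*alpha_2 + c*alpha_3 by exact elimination."""
    A = [[Fraction(B[j][i]) for j in range(3)] + [Fraction(v[i])] for i in range(3)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(3):
            if r != col and A[r][col] != 0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    return [A[i][3] for i in range(3)]

# Column j of the matrix of T w.r.t. B holds the B-coordinates of T(alpha_j).
M = [coords(T(alpha)) for alpha in B]   # stored column-by-column
# Invariance of W = span{alpha_1, alpha_2} forces the alpha_3-coordinate of
# T(alpha_1) and T(alpha_2) to be zero -- the "O block" of the text.
assert M[0][2] == 0 and M[1][2] == 0
```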
Exercises
1. Let T be the linear operator on R², the matrix of which in the standard ordered basis is
A = [ 2  1 ]
    [ 0  2 ]
If W1 is the subspace of R² spanned by the vector (1, 0), prove that W1 is invariant under T.
2. Let T be the linear operator on R², the matrix of which in the standard ordered basis is
A = [ 1  -1 ]
    [ 2   2 ]
(a) Prove that the only subspaces of R² invariant under T are R² and the zero subspace. (Meerut 1976, 80)
(b) If U is the linear operator on C², the matrix of which in the standard ordered basis is A, show that U has one-dimensional invariant subspaces.
§ 20. Projections. Definition. Suppose a vector space V is the direct sum of its subspaces W1 and W2. Then every vector α in V can be uniquely written as α = α1 + α2 where α1 ∈ W1 and α2 ∈ W2. The projection on W1 along W2 is the linear transformation E on V defined by E(α) = α1. (Meerut 1975)
In order to make the definition sensible, we shall show that the correspondence E as defined in it is a linear transformation on V.
Obviously E is a function from V into V.
Let a, b ∈ F and α = α1 + α2, β = β1 + β2, where α1, β1 ∈ W1 and α2, β2 ∈ W2. Then E(α) = α1, E(β) = β1. Also W1 is a subspace and therefore aα1 + bβ1 ∈ W1. Similarly aα2 + bβ2 ∈ W2.
Now E(aα + bβ) = E [a (α1 + α2) + b (β1 + β2)]
= E [(aα1 + bβ1) + (aα2 + bβ2)]
= aα1 + bβ1
[by def. of E, since aα1 + bβ1 ∈ W1 and aα2 + bβ2 ∈ W2]
= a E(α) + b E(β).
∴ E is a linear transformation on V.
Theorem 1. A linear transformation E on V is a projection on some subspace if and only if it is idempotent, i.e. E² = E.
(Meerut 1975, 78, 85, 88, 91, 93, 93P)
Proof. Let V = W1 ⊕ W2 and let E be the projection on W1 along W2. Then to prove that E² = E.
Let α be any vector in V.
Then α = α1 + α2 where α1 ∈ W1, α2 ∈ W2.
By def. of projection, we have
E(α) = α1.   ...(1)
Now E²(α) = E [E(α)]
= E(α1)
= E(α1 + 0) where α1 ∈ W1 and 0 ∈ W2
= α1   [by def. of projection since the decomposition for α1 ∈ V is α1 + 0]
= E(α).   [From (1)]
Thus E²(α) = E(α) ∀ α ∈ V. Therefore E² = E.
Conversely, let E² = E.
Let W1 = {α ∈ V : E(α) = α},
and W2 = {α ∈ V : E(α) = 0}.
W2 is a subspace of V because it is the null space of E.
Also W1 is a subspace of V as shown below:
Let a, b ∈ F and α, β ∈ W1. Then E(α) = α, E(β) = β.
We have E(aα + bβ) = a E(α) + b E(β)   [∵ E is linear]
= aα + bβ.
∴ aα + bβ ∈ W1 and therefore W1 is a subspace of V.
Now we shall prove that E is the projection on W1 along W2.
First we shall prove that V = W1 ⊕ W2. For this we are to prove that
(i) V = W1 + W2
and (ii) W1 and W2 are disjoint.
Proof of (i). Let α ∈ V. Then α can be written as
α = E(α) + [α - E(α)].
Let α1 = E(α) and α2 = α - E(α).
We have E(α1) = E [E(α)] = E²(α)
= E(α)   [∵ E = E²]
= α1.
∴ α1 ∈ W1.
Also E(α2) = E [α - E(α)] = E(α) - E²(α)
= E(α) - E(α)   [∵ E = E²]
= 0.
∴ α2 ∈ W2.
Thus α ∈ V can be written as α = α1 + α2 where α1 ∈ W1, α2 ∈ W2.
Therefore V = W1 + W2.
Proof of (ii). Let α ∈ W1 ∩ W2. Then α ∈ W1, α ∈ W2.
Now α ∈ W1 ⇒ E(α) = α;
also α ∈ W2 ⇒ E(α) = 0.
∴ α ∈ W1 ∩ W2 ⇒ α = 0.
∴ W1 and W2 are disjoint.
Thus V = W1 ⊕ W2.
Now let α ∈ V and α = α1 + α2 where α1 ∈ W1, α2 ∈ W2.
Then E(α) = E(α1 + α2)
= E(α1) + E(α2)   [∵ E is linear]
= α1 + 0   [∵ α1 ∈ W1 ⇒ E(α1) = α1 and α2 ∈ W2 ⇒ E(α2) = 0]
= α1.
Hence E is the projection on W1 along W2.
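A concrete sketch of the theorem in R² (an assumed example, not from the text): with W1 = span{(1, 0)} and W2 = span{(1, 1)}, every (x, y) splits uniquely as (x − y)(1, 0) + y(1, 1), so the projection on W1 along W2 is E(x, y) = (x − y, 0).

```python
# Projection on W1 = span{(1,0)} along W2 = span{(1,1)} in R^2.
def E(v):
    x, y = v
    return (x - y, 0)

for v in [(3, 1), (-2, 5), (0, 0), (7, 7)]:
    assert E(E(v)) == E(v)          # idempotent: E^2 = E
assert E((4, 0)) == (4, 0)          # E fixes W1 ...
assert E((2, 2)) == (0, 0)          # ... and annihilates W2
```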
Theorem 2. Suppose the vector space V is the direct sum of its subspaces W1 and W2. If E is the projection on W1 along W2, then W1 and W2 are, respectively, the sets of all solutions of the equations E(α) = α and E(α) = 0.
Proof. Let V be the vector space. Let
M = {α ∈ V : E(α) = α}
and N = {α ∈ V : E(α) = 0}.
Then to prove that
(i) M = W1
and (ii) N = W2.
Proof of (i). Let α ∈ W1. Then α ∈ V and it can be written as α = α + 0 where α ∈ W1, 0 ∈ W2.
∴ by def. of projection, we have
E(α) = α
⇒ α ∈ M.
∴ W1 ⊆ M.
Now let α ∈ M. Then E(α) = α.
Let α = α1 + α2 where α1 ∈ W1, α2 ∈ W2.
We have E(α) = α1   [by def. of projection]
⇒ α = α1 ∈ W1.   [∵ E(α) = α]
∴ M ⊆ W1.
Hence M = W1.
Proof of (ii). Let α ∈ W2. Then α ∈ V can be written as
α = 0 + α where 0 ∈ W1, α ∈ W2.
∴ by def. of projection, we have
E(α) = 0
⇒ α ∈ N.
∴ W2 ⊆ N.
Now let α ∈ N. Then E(α) = 0.
Let α = α1 + α2 where α1 ∈ W1, α2 ∈ W2.
We have E(α) = α1   [by def. of projection]
⇒ 0 = α1   [∵ E(α) = 0]
⇒ α = α2   [∵ α = α1 + α2]
⇒ α ∈ W2 because α2 ∈ W2.
∴ N ⊆ W2.
Hence N = W2.
Theorem 3. Suppose the vector space V is the direct sum of its subspaces W1 and W2. If E is the projection on W1 along W2, then
(i) the range of E, i.e. R(E) = W1,
and (ii) the null space of E, i.e. N(E) = W2.
Proof. (i) We have
R(E) = {α ∈ V : α = E(β) for some β ∈ V}.
To prove that R(E) = W1.
Let α ∈ W1. Then α = α + 0 where α ∈ W1, 0 ∈ W2.
∴ E(α) = α   [by def. of projection]
⇒ α ∈ R(E) because α is the image of α under E.
∴ W1 ⊆ R(E).
Now let α ∈ R(E). Then there exists β ∈ V such that
E(β) = α
⇒ E [E(β)] = E(α)
⇒ E(β) = E(α)   [∵ E² = E, E being a projection]
⇒ E(α) = α   [∵ E(β) = α]
⇒ α ∈ W1.   [by theorem 2]
∴ R(E) ⊆ W1.
Hence R(E) = W1.
(ii) We have
N(E) = {α ∈ V : E(α) = 0}.
To prove that N(E) = W2.
For proof, see theorem 2, part (ii).
Theorem 4. A linear transformation E is a projection if and only if I - E is a projection; if E is the projection on W1 along W2, then I - E is the projection on W2 along W1. (Meerut 1989)
Proof. We recall that the set of all linear operators on a vector space V forms a ring with unity element I with respect to addition and multiplication of transformations.
Suppose E is a projection. Then E² = E.
I - E will be a projection if (I - E)² = I - E.
We have (I - E)² = (I - E)(I - E) = I·I - IE - EI + E²
= I - E - E + E   [∵ E² = E]
= I - E.
∴ I - E is a projection.
Conversely, let I - E be a projection. Then
(I - E)² = I - E
⇒ (I - E)(I - E) = I - E ⇒ I - E - E + E² = I - E
⇒ E² - E = 0 ⇒ E² = E
⇒ E is a projection.
Now let E be the projection on W1 along W2.
Then V = W1 ⊕ W2 = W2 ⊕ W1.
Let α = α1 + α2 ∈ V where α1 ∈ W1, α2 ∈ W2.
Then E(α) = α1, by def. of projection.
Now (I - E)(α) = I(α) - E(α) = α2.
∴ I - E is the projection on W2 along W1.
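Continuing the same R² sketch (an assumed example, not from the text): with E(x, y) = (x − y, 0), the projection on W1 = span{(1, 0)} along W2 = span{(1, 1)}, the operator I − E should itself be idempotent, with range W2 and null space W1.

```python
# I - E for the projection E(x, y) = (x - y, 0) on W1 along W2 in R^2.
def E(v):
    x, y = v
    return (x - y, 0)

def I_minus_E(v):
    Ev = E(v)
    return (v[0] - Ev[0], v[1] - Ev[1])

for v in [(3, 1), (-2, 5), (7, 7)]:
    w = I_minus_E(v)
    assert I_minus_E(w) == w        # (I - E)^2 = I - E
    assert w[0] == w[1]             # range of I - E lies in W2 = span{(1,1)}
assert I_minus_E((4, 0)) == (0, 0)  # W1 is the null space of I - E
```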
Theorem 5. If V = W1 ⊕ ... ⊕ Wk, then there exist k linear operators E1, ..., Ek on V such that
(a) each Ei is a projection (Ei² = Ei);
(b) Ei Ej = 0̂ if i ≠ j;
(c) I = E1 + ... + Ek;
(d) the range of Ei is Wi. (Meerut 1979, 83P)
Conversely, if E1, ..., Ek are k linear operators on V which satisfy conditions (a), (b) and (c), and if we let Wi be the range of Ei, then V is the direct sum of W1, ..., Wk.
Proof. (a) Since V is the direct sum of W1, ..., Wk, therefore if α ∈ V then we can uniquely write
α = α1 + ... + αk where each αi ∈ Wi, 1 ≤ i ≤ k.
Let Ei be a function from V into V defined by the rule
Ei(α) = αi.
Ei is a linear transformation on V as shown below:
Let α, β ∈ V. Then
α = α1 + ... + αk, αi ∈ Wi,
and β = β1 + ... + βk, βi ∈ Wi.
If a, b ∈ F, we have
Ei(aα + bβ) = Ei [(aα1 + bβ1) + ... + (aαk + bβk)]
= aαi + bβi   [by def. of Ei]
= a Ei(α) + b Ei(β).
∴ Ei is a linear transformation on V.
We have Ei²(α) = Ei [Ei(α)] = Ei(αi)
= Ei(0 + ... + αi + ... + 0), where αi ∈ Wi,
= αi   [by def. of Ei]
= Ei(α).
Thus Ei²(α) = Ei(α) ∀ α ∈ V.
∴ Ei² = Ei, i.e. Ei is a projection.
Thus there exist k linear operators Ei, 1 ≤ i ≤ k, on V such that Ei² = Ei.
(b) Let i ≠ j.
Let α ∈ V. Then α = α1 + ... + αk, where αi ∈ Wi.
We have (Ei Ej)(α) = Ei [Ej(α)]
= Ei(αj)   [by def. of Ej]
= 0
[∵ i ≠ j means that in the decomposition of αj as the sum of vectors of W1, ..., Wk, the vector belonging to Wi is 0]
= 0̂(α).
∴ Ei Ej = 0̂ if i ≠ j.
(c) Let α ∈ V. Then α = α1 + ... + αk where each αi ∈ Wi. We have
(E1 + ... + Ek)(α) = E1(α) + ... + Ek(α)
= α1 + ... + αk = α = I(α).
∴ I = E1 + ... + Ek.
(d) Let R(Ei) denote the range of Ei.
Let α ∈ R(Ei). Then there exists β ∈ V such that Ei(β) = α.
Let β = β1 + ... + βk where βi ∈ Wi.
Then Ei(β) = βi.
∴ α = βi ∈ Wi.
Thus α ∈ R(Ei) ⇒ α ∈ Wi.
∴ R(Ei) ⊆ Wi.
Now let α ∈ Wi. Then Ei(α) = α.
∴ α ∈ R(Ei).
Thus α ∈ Wi ⇒ α ∈ R(Ei).
∴ Wi ⊆ R(Ei).
Hence R(Ei) = Wi.
Converse. Suppose E1, ..., Ek are linear operators on V which satisfy the first three conditions. Let Wi be the range of Ei.
From (c), we have
I = E1 + ... + Ek
∴ I(α) = (E1 + ... + Ek)(α) ∀ α ∈ V
⇒ α = E1(α) + ... + Ek(α).   ...(1)
Since Wi is the range of Ei, therefore
Ei(α) ∈ Wi.
Thus from (1), we see that if α ∈ V, then α ∈ W1 + ... + Wk.
Therefore V = W1 + ... + Wk.
Now to show that the expression (1) for α as the sum of vectors belonging to W1, ..., Wk is unique. Let
α = α1 + ... + αk where αi ∈ Wi.
Since Wi is the range of Ei, therefore
let αi = Ei(βi) where βi ∈ V, 1 ≤ i ≤ k.
We have Ej(α) = Ej(α1 + ... + αk) = Ej(α1) + ... + Ej(αk)
= Σ (i = 1 to k) Ej(αi) = Σ (i = 1 to k) Ej [Ei(βi)] = Σ (i = 1 to k) Ej Ei(βi)
= Ej Ej(βj)   [∵ Ej Ei = 0̂ if i ≠ j]
= Ej(βj)   [∵ Ej² = Ej]
= αj.
Hence the expression for α is unique.
∴ V is the direct sum of W1, ..., Wk.
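Conditions (a)-(c) can be checked for the two projections attached to the decomposition R² = W1 ⊕ W2 with W1 = span{(1, 0)}, W2 = span{(1, 1)} (an assumed example, not from the text): here E1(x, y) = (x − y, 0) and E2(x, y) = (y, y).

```python
# The two projections of Theorem 5 for R^2 = W1 (+) W2,
# W1 = span{(1,0)}, W2 = span{(1,1)}.
def E1(v):
    x, y = v
    return (x - y, 0)

def E2(v):
    x, y = v
    return (y, y)

for v in [(3, 1), (-2, 5), (0, 4)]:
    # (a) each E_i is idempotent
    assert E1(E1(v)) == E1(v) and E2(E2(v)) == E2(v)
    # (b) E_i E_j = 0 for i != j
    assert E1(E2(v)) == (0, 0) and E2(E1(v)) == (0, 0)
    # (c) E_1 + E_2 = I
    assert tuple(a + b for a, b in zip(E1(v), E2(v))) == v
```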

§ 21. Projections and Invariance.
Theorem 1. If a subspace W1 of a vector space V is invariant
under the linear operator T on V, then ETE = TE for every projection
E on W1. Conversely, if ETE = TE for some projection E on W1,
then W1 is invariant under T. (Meerut 1979, 88)
Proof, r is a linear operator on V and Wi is a subspace of V
invariant under T.■ Let
y= W'i0 W2 for some W2.
Let E be the projection on W\ along W2, Then to prove that
ETE^TE.
Let ae-p.
Then we can write
f a=ai+a2 where ai e iPi, aj e HV
We have (ETE){(x)={Er)[E(a)]
=(ET)(a,) [by def. of projection £]
^E ir(a,)]
=r(«o
[ai e Wt and W\ is invariant under T. So S Wt
and consequently E lT(ai)]=r(ai)]
=r[£(«)] (V JF(a)=a]
=(r£)(a).
Thus (ETE)(*)=(r£)(a) V a e V.
ETE=TE.
Conversely, let Wt®W2 and ETE=TE for the projection
Eon Wt along W2. Then to prove that Wt is invariant under T.
Let ote Wf. Since ETE=TE, therefore
{ETE)(a)=.(7’£)(a)
r>
(£r)[£(a)]=r[£(a)l
=> (£7’)(a)=r(a) [*.* K G Wt £(a)=a]
:>£[r(«)]=7’(a)
T(a) e W\ [Since £ is projection on Wt along W2^
therefore £[T(a))=7’(a) 7’(a) e Wt]
Th usa e => T'(a) ^Wt.
Wt is invariant under T.
Note. The above theorem can be also stated in a slightly diff
erent form :
Let E be a projection on V and let T be a linear operator on V.
Trove that the range of T is invariant under T iff ETE=^TE.
Proof. Let the range of £= Wx and the null space of £= W2.
Then V— Wi(BW2 and £ is the projection on Wt along W2.
Now proceed as in the above theorem.
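A quick numerical check of the theorem (a sketch assuming NumPy; the two operators below are illustrative choices, not from the text): with E the projection on W1 = span e1 along W2 = span e2, an operator that maps W1 into itself satisfies ETE = TE, while one that does not fails the identity.

```python
import numpy as np

E = np.diag([1.0, 0.0])            # projection on W1 = span(e1) along W2 = span(e2)

T_inv = np.array([[2.0, 3.0],      # upper triangular: T_inv(e1) = 2 e1, so W1 is
                  [0.0, 5.0]])     # invariant under T_inv
T_not = np.array([[2.0, 3.0],      # T_not(e1) = 2 e1 + 4 e2 leaves W1,
                  [4.0, 5.0]])     # so W1 is not invariant under T_not

assert np.allclose(E @ T_inv @ E, T_inv @ E)       # invariant      => ETE = TE
assert not np.allclose(E @ T_not @ E, T_not @ E)   # not invariant  => ETE != TE
```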


Theorem 2. If W1 and W2 are subspaces with V = W1 ⊕ W2,
then a necessary and sufficient condition that the linear transforma-
tion T be reduced by the pair (W1, W2) is that ET = TE, where E is
the projection on W1 along W2.
Proof. V = W1 ⊕ W2 and E is the projection on W1 along W2.
T is a linear operator on V and T is reduced by the pair (W1, W2).
∴ W1 and W2 are both invariant under T.
To prove that ET = TE.
Let α ∈ V. We can write
α = α1+α2 where α1 ∈ W1, α2 ∈ W2.
W1 and W2 are both invariant under T. Therefore T (α1) ∈ W1,
T (α2) ∈ W2. E is the projection on W1 along W2. Therefore
E [T (α1)] = T (α1) and E [T (α2)] = 0.
Now we have (ET) (α) = E [T (α)]
= E [T (α1)+T (α2)] = E [T (α1)]+E [T (α2)] = T (α1)+0
= T (α1) = T [E (α)] = (TE) (α).
∴ ET = TE.
Conversely, suppose V is the direct sum of W1 and W2, E is
the projection on W1 along W2 and ET = TE where T is a linear
operator on V. Then to prove that T is reduced by the pair
(W1, W2), i.e. both W1 and W2 are invariant under T.
Let α ∈ W1. Since ET = TE, therefore
(ET) (α) = (TE) (α)
⇒ E [T (α)] = T [E (α)]
⇒ E [T (α)] = T (α) [∵ α ∈ W1 ⇒ E (α) = α if E
is the projection on W1 along W2]
⇒ T (α) ∈ W1 [∵ if E is the projection on W1
along W2, then E (α) = α ⇒ α ∈ W1]
Thus α ∈ W1 ⇒ T (α) ∈ W1.
∴ W1 is invariant under T.
Now let β ∈ W2. Since ET = TE, therefore
(ET) (β) = (TE) (β)
⇒ E [T (β)] = T [E (β)]
⇒ E [T (β)] = T (0) [∵ β ∈ W2 ⇒ E (β) = 0
if E is the projection on W1 along W2]
⇒ E [T (β)] = 0 [∵ T (0) = 0]
⇒ T (β) ∈ W2 [∵ if E is the projection on W1
along W2, then E (β) = 0 ⇒ β ∈ W2]

S'*’
228
Linear Algebra
W2 is invariant under T.
Hence T is reduced by the pair (Wi, W2).
Note. The above theorem may also be stated as below:
Let E be a projection on V and let T be a linear operator on
V. Prove that both the range and null space of E are invariant under
T iff ET = TE. (Meerut 1973)
Solved Examples
Example 1. Let V be the direct sum of its subspaces W1 and W2.
If E1 is the projection on W1 along W2 and E2 is the projection on
W2 along W1, prove that
(i) E1+E2 = I,
and (ii) E1E2 = 0̂, E2E1 = 0̂.
Solution. (i) Let α ∈ V. Then
α = α1+α2 where α1 ∈ W1, α2 ∈ W2.
Since E1 is the projection on W1 along W2, therefore E1 (α) = α1.
Also E2 is the projection on W2 along W1. Therefore E2 (α) = α2.
We have (E1+E2) (α) = E1 (α)+E2 (α)
= α1+α2 = α = I (α).
∴ E1+E2 = I.
(ii) We have E1E2 = E1 (I−E1) [∵ E1+E2 = I ⇒ E2 = I−E1]
= E1I−E1²
= E1−E1 [∵ E1 is a projection ⇒ E1² = E1]
= 0̂.
Similarly E2E1 = E2 (I−E2) = E2I−E2²
= E2−E2 = 0̂.
Example 2. Let E1, ..., Ek be linear operators on a vector space
V such that E1+...+Ek = I. Prove that if
EiEj = 0̂ for i ≠ j, then Ei² = Ei for each i.
Solution. We have
Ei² = EiEi = Ei (I−E1−E2−...−Ei−1−Ei+1−...−Ek)
= EiI−EiE1−EiE2−...−EiEi−1−EiEi+1−...−EiEk
= Ei. [∵ EiEj = 0̂ for i ≠ j]
Example 3. Let E be an idempotent linear operator on a vector
space V i.e., E² = E. If W1 is the range of E and W2 is the null space
of E, show that
(i) α is in W1 iff E (α) = α;
(ii) V is the direct sum of W1 and W2;
(iii) E is the projection on W1 along W2. (Meerut 1985)
Solution. (i) Let α ∈ W1 where W1 is the range of E.
Then ∃ β ∈ V such that E (β) = α.
Now E (β) = α
⇒ E [E (β)] = E (α) ⇒ E² (β) = E (α)
⇒ E (β) = E (α) [∵ E² = E]
⇒ α = E (α).
Now let α ∈ V be such that E (α) = α. Since α is the image of α
under E, therefore α ∈ the range of E i.e., α ∈ W1.
(ii) Let α ∈ V. We can write
α = E (α)+[α−E (α)]
= α1+α2 where α1 = E (α) and α2 = α−E (α).
Since α1 = E (α), therefore α1 is in the range of E i.e., α1 is in W1.
Also E (α2) = E [α−E (α)] = E (α)−E² (α)
= E (α)−E (α) [∵ E² = E]
= 0.
∴ α2 ∈ the null space of E i.e. α2 ∈ W2.
Thus α = α1+α2 where α1 ∈ W1, α2 ∈ W2.
∴ V = W1+W2.
Now let α ∈ W1 ∩ W2. Then α ∈ W1, α ∈ W2. We have
α ∈ W1 ⇒ E (α) = α.
Also since α ∈ W2, therefore E (α) = 0.
∴ α = 0 and thus W1 and W2 are disjoint.
∴ V is the direct sum of W1 and W2.
(iii) Let α ∈ V. Then α = α1+α2 where α1 ∈ W1, α2 ∈ W2.
We have E (α) = E (α1)+E (α2) = α1+0 = α1.
∴ E is the projection on W1 along W2.
Example 4. V is an n-dimensional vector space over a field F and
E is a linear operator on V which is idempotent. Show that V = R ⊕ N
where R is the range space of E and N is the null space of E.
(Meerut 1977, 90)
Solution. In order to prove that V = R ⊕ N, we are to prove
that
(i) V = R+N, and (ii) R ∩ N = {0}.
For proof of (i) see Ex. 3.
Now we shall prove (ii). Let α ∈ R ∩ N.
Then α ∈ R and α ∈ N.
By def. of N, we have
α ∈ N ⇒ E (α) = 0.
Also by def. of R, we have α ∈ R ⇒ ∃ β ∈ V such that
E (β) = α.
Now E (β) = α
⇒ E [E (β)] = E (α) ⇒ E² (β) = E (α)
⇒ E (β) = E (α) [∵ E² = E]
⇒ α = E (α) [∵ E (β) = α]
⇒ α = 0. [∵ E (α) = 0]
∴ R ∩ N = {0}. Hence V = R ⊕ N.
Example 5. Prove that if T is a linear transformation on V
such that T² (I−T) = 0̂ = T (I−T)², then T is a projection on V.
Solution. We have
T² (I−T) = 0̂ ⇒ T²−T³ = 0̂ ⇒ T³ = T².
Also T (I−T)² = 0̂ ⇒ T [(I−T) (I−T)] = 0̂
⇒ T [I−T−T+T²] = 0̂
⇒ T−2T²+T³ = 0̂ ⇒ T³ = 2T²−T = T²+T²−T.
∴ T² = T²+T²−T ⇒ T² = T.
∴ T is a projection on V.
Example 6. If T is a linear transformation on V, E is a projection
on V and F = I−E, then T = ETE+ETF+FTE+FTF.
Solution. We have
ETE+ETF+FTE+FTF
= ETE+ET (I−E)+FTE+FT (I−E) [∵ F = I−E]
= ETE+ETI−ETE+FTE+FTI−FTE
= (ETE−ETE)+ET+(FTE−FTE)+FT
= 0̂+ET+0̂+FT = ET+(I−E) T
= ET+IT−ET = T.
Example 7. If E1 and E2 are projections on V and if E1E2 = E2E1,
then E1+E2−E1E2 is a projection.
Solution. Since E1, E2 are projections, therefore
E1² = E1, E2² = E2.
Also it is given that E1E2 = E2E1.
Therefore (E1+E2−E1E2)²
= (E1+E2−E1E2) (E1+E2−E1E2)
= E1²+E1E2−E1E1E2+E2E1+E2²−E2E1E2−E1E2E1
−E1E2E2+E1E2E1E2
= E1+E1E2−E1E2+E1E2+E2−E1E2−E1E2−E1E2+E1E2
[using E1² = E1, E2² = E2 and E1E2 = E2E1]
= E1+E2−E1E2.
∴ E1+E2−E1E2 is idempotent and therefore is a projection.
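Example 7 can be illustrated numerically (a sketch assuming NumPy; the two commuting coordinate projections in R³ are an arbitrary choice):

```python
import numpy as np

# Two commuting projections in R^3 (an illustrative choice):
# E1 projects onto span(e1, e2), E2 onto span(e2, e3); diagonal matrices commute.
E1 = np.diag([1.0, 1.0, 0.0])
E2 = np.diag([0.0, 1.0, 1.0])
assert np.allclose(E1 @ E2, E2 @ E1)

P = E1 + E2 - E1 @ E2
assert np.allclose(P @ P, P)        # P is idempotent, i.e. a projection
assert np.allclose(P, np.eye(3))    # here its range is span(e1,e2)+span(e2,e3) = R^3
```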
Example 8. Two projections E and F have the same range iff
EF = F and FE = E.
Solution. E and F are two projections having W1 and W2 as
their ranges respectively. Also EF = F and FE = E. Then to prove
that W1 = W2.
Let α ∈ W1. Then
E (α) = α [∵ W1 is the range of E]
⇒ F [E (α)] = F (α)
⇒ (FE) (α) = F (α)
⇒ E (α) = F (α) [∵ FE = E]
⇒ α = F (α)
⇒ α ∈ W2
[∵ F (α) = α ⇒ α ∈ the range of F i.e. α ∈ W2]
∴ W1 ⊆ W2.
Now let β ∈ W2. Then
F (β) = β
⇒ E [F (β)] = E (β)
⇒ (EF) (β) = E (β)
⇒ F (β) = E (β) [∵ EF = F]
⇒ β = E (β)
⇒ β ∈ W1. [∵ W1 is the range of E]
∴ W2 ⊆ W1.
Hence W1 = W2.
Conversely, suppose E and F are two projections having the
same range. Then to prove that EF = F and FE = E.
Let α ∈ V. We have
(EF) (α) = E [F (α)]
= F (α)
[∵ F (α) ∈ the range of F ⇒ F (α) ∈ the range of E]
∴ EF = F.
Also (FE) (α) = F [E (α)]
= E (α)
[∵ E (α) ∈ the range of E ⇒ E (α) ∈ the range of F].
∴ FE = E.
Example 9. Suppose that E and F are projections on a vector
space V over the field F whose characteristic is not equal to 2 i.e.
1 ∈ F is such that 1+1 ≠ 0. Then prove that E+F is a projection if
and only if EF = FE = 0̂.
Solution. Let E+F be a projection. Then
(E+F)² = E+F
⇒ (E+F) (E+F) = E+F
⇒ E²+EF+FE+F² = E+F
⇒ E+EF+FE+F = E+F [∵ E² = E, F² = F]
⇒ EF+FE = 0̂. ...(1)
Multiplying (1) on the left and on the right by E, we get
E²F+EFE = 0̂
i.e. EF+EFE = 0̂, ...(2)
and EFE+FE² = 0̂
i.e. EFE+FE = 0̂. ...(3)
Subtracting (3) from (2), we get
EF−FE = 0̂
⇒ EF = FE.
Putting FE = EF in (1), we get
EF+EF = 0̂
⇒ (1+1) EF = 0̂
⇒ EF = 0̂. [∵ 1+1 ≠ 0]
Thus EF = FE = 0̂.
Conversely, suppose that EF = FE = 0̂.
We have (E+F)² = E²+EF+FE+F²
= E+0̂+0̂+F = E+F.
∴ E+F is a projection.
Example 10. Let E be a projection on a finite-dimensional
vector space V. Show that by choosing a suitable basis B for V, the
matrix of E with respect to B can be put into a particularly simple
form.
Solution. Let V = W1 ⊕ W2. Suppose E is the projection on
W1 along W2. Let dim V = n, dim W1 = m.
Then dim W2 = n−m. Let B1 = {α1, ..., αm}
be a basis for W1 and B2 = {αm+1, ..., αn}
be a basis for W2. Then
B = {α1, ..., αm, αm+1, ..., αn} is a basis for V.
We have, for 1 ≤ i ≤ m,
E (αi) = αi [∵ αi ∈ W1 and E is the projection on W1 along W2].
Also for m+1 ≤ i ≤ n,
E (αi) = 0 [∵ αi ∈ W2 in this case].
∴ the matrix of E with respect to the ordered basis B is
[ I  O ]
[ O  O ]
where I is the unit matrix of order m and the O's are null matrices of
suitable sizes.
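The block form obtained above can be verified numerically. The following sketch (assuming NumPy; the particular subspaces of R³ are illustrative) builds a projection from an adapted basis B and recovers the matrix diag(I, O) by the change of basis B⁻¹EB:

```python
import numpy as np

# Oblique projection on W1 = span((1,0,0),(0,1,0)) along W2 = span((1,1,1)):
# the first m = 2 columns of B span W1, the last column spans W2.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
D = np.diag([1.0, 1.0, 0.0])          # matrix of E relative to the adapted basis B
E = B @ D @ np.linalg.inv(B)          # matrix of E relative to the standard basis

assert np.allclose(E @ E, E)                       # E is a projection
assert np.allclose(np.linalg.inv(B) @ E @ B, D)    # block form [[I, O], [O, O]]
```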
Exercises
1. Let E be a projection on a vector space V and let T be a linear
operator on V. Prove that the range of E is invariant under
T if and only if ETE = TE.
2. Let E be a projection on a vector space V and let T be a
linear operator on V. Prove that both the range and null space
of E are invariant under T if and only if ET = TE.
3. If V = W1 ⊕ W2 ⊕ W3, prove that there exist three linear opera-
tors E1, E2, E3 on V such that
(i) each Ei is a projection (Ei² = Ei);
(ii) EiEj = 0̂ if i ≠ j;
(iii) the range of Ei is Wi. (Meerut 1971, 91)
4. Let E1, E2 be two linear operators on a vector space V such
that E1+E2 = I, E1² = E1, E2² = E2.
Prove that E1E2 = 0̂ = E2E1.
5. Let E1 and E2 be linear operators on a vector space V such
that E1+E2 = I. Prove that
EiEj = 0̂ for i ≠ j if and only if Ei² = Ei for each i. (Meerut 1974)
§ 22. The Adjoint or the Transpose of a Linear Transformation.
In order to bring some simplicity in our work we shall intro-
duce a few changes in our notation of writing the image of an ele-
ment of a vector space under a linear transformation and that
under a linear functional. If T is a linear transformation on a
vector space V and α ∈ V, then in place of writing T (α) we shall
simply write Tα i.e. we shall omit the brackets. Thus Tα will
mean the image of α under T. If T1 and T2 are two linear trans-
formations on V, then in our new notation T1T2α will stand for
T1 [T2 (α)].
Let f be a linear functional on V. If α ∈ V, then in place of
writing f (α) we shall write [α, f]. This is the square brackets
notation to write the image of a vector under a linear functional.
Thus [α, f] will stand for f (α). If a, b ∈ F and α, β ∈ V, then
in this new notation the linearity property of f
i.e. f (aα+bβ) = af (α)+bf (β)
will be written as
[aα+bβ, f] = a [α, f]+b [β, f].
Also if f and g are two linear functionals on V and a, b ∈ F,
then the property defining addition and scalar multiplication of
linear functionals i.e. the property
(af+bg) (α) = af (α)+bg (α)
will be written as
[α, af+bg] = a [α, f]+b [α, g].
Note that in this new notation, we have
[α, f] = f (α), [α, g] = g (α).
Theorem 1. Let U and V be vector spaces over the field F. For
each linear transformation T from U into V, there is a unique linear
transformation T' from V' into U' such that
[T' (g)] (α) = g [T (α)] (in old notation)
or [α, T'g] = [Tα, g] (in new notation)
for every g in V' and α in U.
The linear transformation T' is called the adjoint or the trans-
pose or the dual of T. In some books it is denoted by Tᵗ or by T*.
(Allahabad 1977)
Proof. T is a linear transformation from U to V. U' is the
dual space of U and V' is the dual space of V. Suppose g ∈ V'
i.e. g is a linear functional on V. Let us define
f (α) = g [T (α)] ∀ α ∈ U. ...(1)
Then f is a function from U into F. We see that f is nothing
but the product or composite of the two functions T and g where
T : U → V and g : V → F. Since both T and g are linear, therefore f
is also linear. Thus f is a linear functional on U i.e. f ∈ U'. In
this way T provides us with a rule T' which associates with each
functional g on V a linear functional f = T' (g) on U, defined by (1).
Thus
T' : V' → U' such that
T' (g) = f ∀ g ∈ V' where
f (α) = g [T (α)] ∀ α ∈ U.
Putting f = T' (g) in (1), we see that T' is a function from V'
into U' such that
[T' (g)] (α) = g [T (α)]
or in square brackets notation
[α, T'g] = [Tα, g] ...(2)
∀ g ∈ V' and ∀ α ∈ U.
Now we shall show that T' is a linear transformation from V'
into U'. Let g1, g2 ∈ V' and a, b ∈ F.
Then we are to prove that
T' (ag1+bg2) = aT'g1+bT'g2 ...(3)
where T'g1 stands for T' (g1) and T'g2 stands for T' (g2).
We see that both the sides of (3) are elements of U' i.e. both
are linear functionals on U. So if α is any element of U, we have
[α, T' (ag1+bg2)] = [Tα, ag1+bg2]
[From (2), because ag1+bg2 ∈ V']
= [Tα, ag1]+[Tα, bg2]
[by def. of addition in V']
= a [Tα, g1]+b [Tα, g2]
[by def. of scalar multiplication in V']
= a [α, T'g1]+b [α, T'g2] [From (2)]
= [α, aT'g1]+[α, bT'g2]
[by def. of scalar multiplication in U'.
Note that T'g1, T'g2 ∈ U']
= [α, aT'g1+bT'g2]. [by def. of addition in U']
Thus ∀ α ∈ U, we have
[α, T' (ag1+bg2)] = [α, aT'g1+bT'g2].
∴ T' (ag1+bg2) = aT'g1+bT'g2
[by def. of equality of two functions]
Hence T' is a linear transformation from V' into U'.
Now let us show that T' is uniquely determined for a given T.
If possible, let T1' be a linear transformation from V' into U' such
that
[α, T1'g] = [Tα, g] ∀ g ∈ V' and α ∈ U. ...(4)
Then from (2) and (4), we get
[α, T1'g] = [α, T'g] ∀ α ∈ U, ∀ g ∈ V'
⇒ T1'g = T'g ∀ g ∈ V'
⇒ T1' = T'.
∴ T' is uniquely determined for each T. Hence the theorem.


Note. If T is a linear transformation on the vector space V,
then in the proof of the above theorem we should simply replace U
by V.
Theorem 2. If T is a linear transformation from a vector space
U into a vector space V, then:
(i) the annihilator of the range of T is equal to the null space
of T' i.e.,
[R (T)]⁰ = N (T'). (Marathwada 1971; Meerut 83P, 87, 88)
If in addition U and V are finite dimensional, then
(ii) ρ (T') = ρ (T) (Meerut 1983P, 88; Poona 70)
and (iii) the range of T' is the annihilator of the null space of T i.e.
R (T') = [N (T)]⁰. (Marathwada 1971; Meerut 88)
Proof. (i) If g ∈ V', then by definition of T', we have
[α, T'g] = [Tα, g] ∀ α ∈ U. ...(1)
Let g ∈ N (T') which is a subspace of V'. Then
T'g = 0̂ where 0̂ is the zero element of U' i.e., 0̂ is the zero functional
on U. Therefore from (1), we get
[Tα, g] = [α, 0̂] ∀ α ∈ U
⇒ [Tα, g] = 0 ∀ α ∈ U [∵ 0̂ (α) = 0 ∀ α ∈ U]
⇒ g (β) = 0 ∀ β ∈ R (T) [∵ R (T) = {β ∈ V : β = Tα
for some α ∈ U}]
⇒ g ∈ [R (T)]⁰.
∴ N (T') ⊆ [R (T)]⁰.
Now let g ∈ [R (T)]⁰ which is a subspace of V'. Then
g (β) = 0 ∀ β ∈ R (T)
⇒ [Tα, g] = 0 ∀ α ∈ U [∵ ∀ α ∈ U, Tα ∈ R (T)]
⇒ [α, T'g] = 0 ∀ α ∈ U [From (1)]
⇒ T'g = 0̂ (the zero functional on U)
⇒ g ∈ N (T').
∴ [R (T)]⁰ ⊆ N (T').
Hence [R (T)]⁰ = N (T').
(ii) Suppose U and V are finite dimensional. Let dim U = n,
dim V = m. Let r = ρ (T) = the dimension of R (T).
Now R (T) is a subspace of V. Therefore
dim R (T)+dim [R (T)]⁰ = dim V. [See Th. 2 Page 206]
∴ dim [R (T)]⁰ = dim V−dim R (T)
= m−r.
By part (i) of this theorem [R (T)]⁰ = N (T').
∴ dim N (T') = m−r
⇒ nullity of T' = ν (T') = m−r.
But T' is a linear transformation from V' into U'.
∴ ρ (T')+ν (T') = dim V'
or ρ (T') = dim V'−ν (T')
= m−(m−r) = r. [∵ dim V' = dim V = m]
∴ ρ (T') = ρ (T) = r.
(iii) T' is a linear transformation from V' into U'. Therefore
R (T') is a subspace of U'. Also [N (T)]⁰ is a subspace of U' because
N (T) is a subspace of U. First we shall show that
R (T') ⊆ [N (T)]⁰.
Let f ∈ R (T'). Then f = T'g for some g ∈ V'.
If α is any vector in N (T), then Tα = 0. We have
[α, f] = [α, T'g] = [Tα, g] = [0, g] = 0.
Thus f (α) = 0 ∀ α ∈ N (T). Therefore f ∈ [N (T)]⁰.
∴ R (T') ⊆ [N (T)]⁰
⇒ R (T') is a subspace of [N (T)]⁰.
Now dim N (T)+dim [N (T)]⁰ = dim U. [Theorem 2 page 206]
∴ dim [N (T)]⁰ = dim U−dim N (T)
= dim R (T) [∵ dim U = dim R (T)
+dim N (T)]
= ρ (T)
= ρ (T') = dim R (T').
Thus dim R (T') = dim [N (T)]⁰ and R (T') ⊆ [N (T)]⁰.
∴ R (T') = [N (T)]⁰.
Note. If T is a linear transformation on a vector space V,
then in the proof of the above theorem we should replace U by V
and m by n.
Theorem 3. Let U and V be finite-dimensional vector spaces
over the field F. Let B be an ordered basis for U with dual basis B',
and let B1 be an ordered basis for V with dual basis B1'. Let T be a
linear transformation from U into V. Let A be the matrix of T rela-
tive to B, B1 and let C be the matrix of T' relative to B1', B'. Then
C = A' i.e. the matrix C is the transpose of the matrix A.
Proof. Let dim U = n, dim V = m.
Let B = {α1, ..., αn}, B' = {f1, ..., fn},
B1 = {β1, ..., βm}, B1' = {g1, ..., gm}.
Now T is a linear transformation from U into V and T' is that
from V' into U'. The matrix A of T relative to B, B1 will be of the
type m×n. If A = [aij]m×n, then by definition
T (αj) or simply Tαj = Σ_{i=1}^{m} aij βi, j = 1, 2, ..., n. ...(1)
The matrix C of T' relative to B1', B' will be of the type n×m.
If C = [cji]n×m, then by definition
T' (gi) or simply T'gi = Σ_{j=1}^{n} cji fj, i = 1, 2, ..., m. ...(2)
Now T'gi is an element of U' i.e. T'gi is a linear functional
on U. If f is any linear functional on U, then we know that
f = Σ_{j=1}^{n} f (αj) fj. [See theorem 3 page 197]
Applying this formula for T'gi in place of f, we get
T'gi = Σ_{j=1}^{n} [(T'gi) (αj)] fj. ...(3)
Now let us find (T'gi) (αj). We have
(T'gi) (αj) = gi (Tαj) [by def. of T']
= gi (Σ_{k=1}^{m} akj βk) [From (1), replacing the
suffix i by k which is
immaterial]
= Σ_{k=1}^{m} akj gi (βk) [∵ gi is linear]
= Σ_{k=1}^{m} akj δik [∵ gi ∈ B1' which is the dual
basis of B1]
= aij. [On summing with respect to k and
remembering that δik = 1 when k = i
and δik = 0 when k ≠ i]
Putting this value of (T'gi) (αj) in (3), we get
T'gi = Σ_{j=1}^{n} aij fj. ...(4)
Since f1, ..., fn are linearly independent, therefore from (2) and
(4), we get cji = aij.
Hence by definition of transpose of a matrix, we have C = A'.
Note. If T is a linear transformation on a finite-dimensional
vector space V, then in the above theorem we put U = V and m = n.
Also according to our convention we take B1 = B. The students
should write the complete proof themselves.
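In coordinates, Theorem 3 says the transpose map acts by the transposed matrix. A small check (assuming NumPy; the matrix, functional, and vector are arbitrary illustrations) verifies the defining relation [α, T'g] = [Tα, g]:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])         # matrix of T : U -> V with n = 2, m = 3

# Relative to the dual bases, a functional g on V has a coordinate vector, and
# the matrix of T' is A' (the transpose): T'g has coordinates A.T @ g.
g = np.array([1.0, -2.0, 3.0])     # arbitrary functional on V
alpha = np.array([4.0, 5.0])       # arbitrary vector in U

lhs = (A.T @ g) @ alpha            # [alpha, T'g], computed with the matrix A'
rhs = g @ (A @ alpha)              # [T(alpha), g]
assert np.isclose(lhs, rhs)
```

The equality holds for every choice of g and α, since both sides equal gᵀAα by associativity of matrix multiplication.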
Theorem 4. Let A be any m×n matrix over the field F. Then
the row rank of A is equal to the column rank of A.
Proof. Let A = [aij]m×n. Let
B = {α1, ..., αn} and B1 = {β1, ..., βm}
be the standard ordered bases for Vn (F) and Vm (F) respectively.
Let T be the linear transformation from Vn (F) into Vm (F) whose
matrix is A relative to the ordered bases B and B1. Then obviously the
vectors T (α1), ..., T (αn) are nothing but the column vectors of the
matrix A. Also these vectors span the range of T because α1, ..., αn
form a basis for the domain of T i.e. Vn (F).
∴ the range of T = the column space of A
⇒ the dimension of the range of T = the dimension of the
column space of A
⇒ ρ (T) = the column rank of A. ...(1)
If T' is the adjoint of the linear transformation T, then the
matrix of T' relative to the dual bases B1' and B' is the matrix A'
which is the transpose of the matrix A. The columns of the matrix
A' are nothing but the rows of the matrix A. By the same reasoning
as given in proving the result (1), we have
ρ (T') = the column rank of A'
= the row rank of A. ...(2)
Since ρ (T) = ρ (T'), therefore from (1) and (2), we get the result
that
the column rank of A = the row rank of A.
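Theorem 4 is easy to confirm numerically (a sketch assuming NumPy; `matrix_rank` returns the dimension of the column space, so applying it to A and to A' compares the column and row ranks):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # 2 * (first row), so the rows span a 2-dim space
              [0.0, 1.0, 1.0]])

# Column rank of A equals column rank of A', i.e. the row rank of A.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 2
```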
Theorem 5. Prove the following properties of adjoints of linear
operators on a vector space V (F):
(i) 0̂' = 0̂;
(ii) I' = I;
(iii) (T1+T2)' = T1'+T2'; (Meerut 1975, 90)
(iv) (T1T2)' = T2'T1'; (Meerut 1975, 90; Allahabad 77)
(v) (aT)' = aT' where a ∈ F; (Meerut 1975)
(vi) (T⁻¹)' = (T')⁻¹ if T is invertible;
(vii) (T')' = T'' = T if V is finite-dimensional.
Proof. (i) If 0̂ is the zero transformation on V, then by the
definition of the adjoint of a linear transformation, we have
[α, 0̂'g] = [0̂α, g] for every g in V' and α in V
= [0, g] for every g in V' [∵ 0̂ (α) = 0 ∀ α ∈ V]
= 0 [∵ g (0) = 0]
= [α, 0̂g] ∀ g ∈ V' and α ∈ V
[Here 0̂ is the zero transformation on V']
Thus we have
[α, 0̂'g] = [α, 0̂g] for all g in V' and α in V.
∴ 0̂' = 0̂.
(ii) If I is the identity transformation on V, then by the
definition of the adjoint of a linear transformation, we have
[α, I'g] = [Iα, g] for every g in V' and α in V
= [α, g] for every g in V' and α in V
[∵ I (α) = α ∀ α ∈ V]
= [α, Ig] for every g in V' and α in V
[Here I is the identity operator on V']
∴ I' = I.
(iii) If T1, T2 are linear operators on V, then T1+T2 is also a
linear operator on V. By the definition of adjoint, we have
[α, (T1+T2)'g] = [(T1+T2) α, g] for every g in V' and α in V
= [T1α+T2α, g] [by def. of addition of linear
transformations]
= [T1α, g]+[T2α, g]
[by linearity property of g]
= [α, T1'g]+[α, T2'g] [by def. of adjoint]
= [α, T1'g+T2'g]
[by def. of addition of linear functionals.
Note that T1'g, T2'g are elements of V']
= [α, (T1'+T2') g].
Thus we have
[α, (T1+T2)'g] = [α, (T1'+T2') g] for every g in V' and α in V.
∴ (T1+T2)'g = (T1'+T2') g ∀ g ∈ V'.
∴ (T1+T2)' = T1'+T2'.
(iv) If T1, T2 are linear operators on V, then T1T2 is also a
linear operator on V. By the definition of adjoint, we have
[α, (T1T2)'g] = [(T1T2) α, g] for every g in V' and α in V
= [T1 (T2α), g] [by def. of product of linear
transformations]
= [T2α, T1'g] [by def. of adjoint]
= [α, T2'T1'g]. [by def. of adjoint]
Thus we have
[α, (T1T2)'g] = [α, T2'T1'g] for every g in V' and α in V.
∴ (T1T2)' = T2'T1'.
Note. This is called the reversal law for the adjoint of the
product of two linear transformations.
(v) If T is a linear operator on V and a ∈ F, then aT is also
a linear operator on V. By the definition of the adjoint, we have
[α, (aT)'g] = [(aT) α, g] for every g in V' and α in V
= [a (Tα), g] [by def. of scalar multiplication
of a linear transformation]
= a [Tα, g] [∵ g is linear]
= a [α, T'g] [by def. of adjoint]
= [α, a (T'g)] [by def. of scalar multiplication
in V'. Note that T'g ∈ V']
= [α, (aT') g]. [by def. of scalar multiplication
of T' by a]
∴ (aT)' = aT'.
(vi) Suppose T is an invertible linear operator on V. If T⁻¹
is the inverse of T, we have
T⁻¹T = I = TT⁻¹
⇒ (T⁻¹T)' = I' = (TT⁻¹)'
⇒ T' (T⁻¹)' = I = (T⁻¹)' T'
[Using results (ii) and (iv)]
∴ T' is invertible and
(T')⁻¹ = (T⁻¹)'.
(vii) V is a finite-dimensional vector space, T is a linear
operator on V, T' is a linear operator on V' and (T')' or T'' is a
linear operator on V''. We have identified V'' with V through the
natural isomorphism α ↔ Lα where α ∈ V and Lα ∈ V''. Here Lα
is a linear functional on V' and is such that
Lα (g) = g (α) ∀ g ∈ V'. ...(1)
Through this natural isomorphism we shall take α = Lα and
thus T'' will be regarded as a linear operator on V.
Now T' is a linear operator on V'. Therefore by the definition
of adjoint, we have
[g, T''Lα] = [T'g, Lα] for every g ∈ V' and α ∈ V.
Now T'g is an element of V'. Therefore from (1), we have
[T'g, Lα] = [α, T'g] [Note that from (1), Lα (T'g)
= (T'g) (α)]
= [Tα, g]. [by def. of adjoint]
Again T''Lα is an element of V''. Therefore from (1), we have
[g, T''Lα] = [β, g] where β ∈ V and β ↔ T''Lα
under the natural isomorphism
[∵ β = T''Lα = T''α, when we regard T'' as a
linear operator on V in place of V'']
Thus, we have
[T''α, g] = [Tα, g] for every g in V' and α in V
⇒ g (T''α) = g (Tα) for every g in V' and α in V
⇒ g (Tα−T''α) = 0 for every g in V' and α in V
⇒ Tα−T''α = 0 for every α in V
⇒ (T−T'') α = 0 for every α in V
⇒ T−T'' = 0̂
⇒ T = T''.
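In matrix form the adjoint is the transpose (Theorem 3), so properties (iv) and (vi) become matrix identities that can be spot-checked (a sketch assuming NumPy; the matrices are arbitrary invertible examples):

```python
import numpy as np

T1 = np.array([[1.0, 2.0],
               [3.0, 5.0]])        # det = -1, invertible
T2 = np.array([[0.0, 1.0],
               [4.0, 2.0]])        # det = -4, invertible

# (iv) reversal law: (T1 T2)' = T2' T1'
assert np.allclose((T1 @ T2).T, T2.T @ T1.T)

# (vi) (T^-1)' = (T')^-1 for an invertible T
assert np.allclose(np.linalg.inv(T1).T, np.linalg.inv(T1.T))
```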
§ 23. Adjoints of Projections.
Theorem 1. If E is the projection on W1 along W2, then E' is
the projection on W2⁰ along W1⁰. (Meerut 1975)
Proof. E is a linear operator on V and E' is a linear operator
on V'. E is the projection on W1 along W2. Since E is a projec-
tion, therefore E² = E. We have
(E')² = E'E' = (EE)' = (E²)' = E'.
∴ E' is also a projection.
If W1, W2 are subspaces of V, then W1⁰, W2⁰ are subspaces of
V'. Also if V = W1 ⊕ W2, then V' = W2⁰ ⊕ W1⁰. [See Ex. 6 page 211]
Now E' will be the projection on W2⁰ along W1⁰ if we prove
that W2⁰ = M and W1⁰ = N where
M = {f ∈ V' : E' (f) = f}
and N = {f ∈ V' : E' (f) = 0̂}.
(i) First we shall prove that W2⁰ = M.
Let f ∈ W2⁰. Then to prove that f ∈ M i.e. E'f = f.
For every α in V, we have
[α, f] = [Eα+(I−E) α, f]
= [Eα, f]+[(I−E) α, f] [∵ f is linear]
= [Eα, f]+0 [Since f ∈ W2⁰ and (I−E) α ∈ W2, E
being the projection on W1 along W2]
= [α, E'f]. [by def. of adjoint]
Thus E'f = f. Therefore f ∈ M.
∴ W2⁰ ⊆ M.
Now let f ∈ M i.e. let E'f = f. Then to prove that f ∈ W2⁰.
For every α in W2, we have
[α, f] = [α, E'f]
= [Eα, f] [by def. of adjoint]
= [0, f] [∵ α ∈ W2 ⇒ Eα = 0, E being the
projection on W1 along W2]
= 0.
∴ f ∈ W2⁰.
∴ M ⊆ W2⁰.
Hence M = W2⁰.
(ii) Now we shall prove that W1⁰ = N.
Let f ∈ W1⁰. Then to prove that f ∈ N i.e., to prove that
E'f = 0̂. For every α in V, we have
[α, E'f] = [Eα, f] [by def. of adjoint]
= 0 [Since f ∈ W1⁰ and Eα ∈ W1, E being the
projection on W1 along W2]
Thus [α, E'f] = 0 for every α in V.
∴ E'f = 0̂.
∴ f ∈ N and thus W1⁰ ⊆ N.
Now let f ∈ N i.e., let E'f = 0̂. Then to prove that f ∈ W1⁰.
For every α in W1, we have
[α, f] = [Eα, f] [∵ α ∈ W1 ⇒ Eα = α, E being the
projection on W1 along W2]
= [α, E'f] [by def. of adjoint]
= [α, 0̂]
= 0.
∴ f ∈ W1⁰ and thus N ⊆ W1⁰.
Hence N = W1⁰.
This completes the proof of the theorem.
Theorem 2. If W1 is invariant under T, then W1⁰ is invariant
under T', and if T is reduced by (W1, W2), then T' is reduced by
(W2⁰, W1⁰).
Proof. Since W1 is invariant under T, therefore
α ∈ W1 ⇒ Tα ∈ W1.
Now let f ∈ W1⁰. Then to prove that T'f ∈ W1⁰. For every α in
W1, we have
[α, T'f] = [Tα, f]
= 0 [∵ f ∈ W1⁰ and Tα ∈ W1]
∴ T'f ∈ W1⁰.
Hence W1⁰ is invariant under T'.
Now suppose that T is reduced by the pair (W1, W2). Then
we have
(i) V = W1 ⊕ W2,
(ii) both W1 and W2 are invariant under T.
Now V' = W2⁰ ⊕ W1⁰ because V = W1 ⊕ W2.
Also W1⁰ and W2⁰ are both invariant under T', as we have just
proved.
Hence T' is reduced by the pair (W2⁰, W1⁰).
Solved Examples
Example 1. If A and B are similar linear transformations on a
vector space V, then so also are A' and B'.
Solution. A is similar to B means that there exists an invertible
linear transformation C on V such that
A = CBC⁻¹
⇒ A' = (CBC⁻¹)'
⇒ A' = (C⁻¹)' B'C'.
Now C is invertible implies that C' is also invertible and
(C')⁻¹ = (C⁻¹)'.
∴ A' = (C')⁻¹ B'C'
⇒ C'A' (C')⁻¹ = B' [Multiplying on the right by (C')⁻¹
and on the left by C']
⇒ B' is similar to A'
⇒ A' and B' are similar.
Example 2. Let V be a finite-dimensional vector space over the
field F. Show that T → T' is an isomorphism of L (V, V) onto L (V', V').
Solution. Let dim V = n. Then dim V' = n.
Also dim L (V, V) = n², dim L (V', V') = n².
Let φ : L (V, V) → L (V', V') such that
φ (T) = T' ∀ T ∈ L (V, V).
(i) φ is a linear transformation.
Let a, b ∈ F and T1, T2 ∈ L (V, V). Then
φ (aT1+bT2) = (aT1+bT2)' [by def. of φ]
= (aT1)'+(bT2)' [∵ (A+B)' = A'+B']
= aT1'+bT2' [∵ (aA)' = aA']
= aφ (T1)+bφ (T2). [by def. of φ]
∴ φ is a linear transformation from L (V, V) into
L (V', V').
(ii) φ is one-one.
Let T1, T2 ∈ L (V, V). Then
φ (T1) = φ (T2)
⇒ T1' = T2'
⇒ T1'' = T2''
⇒ T1 = T2 [∵ V is finite-dimensional]
∴ φ is one-one.
(iii) φ is onto.
We have dim L (V, V) = dim L (V', V') = n².
Since φ is a linear transformation from L (V, V) into L (V', V'),
therefore φ is one-one implies that φ must be onto.
Hence φ is an isomorphism of L (V, V) onto L (V', V').
Example 3. If A and B are linear transformations on a finite-
dimensional vector space V, then prove that
(i) ρ (A+B) ≤ ρ (A)+ρ (B);
(ii) ρ (AB) ≤ min {ρ (A), ρ (B)}; (Poona 1970)
(iii) if B is invertible, then ρ (AB) = ρ (BA) = ρ (A).
Solution. If A is a linear transformation on V, let R (A) denote
the range of A. We know that R (A), R (B), and R (A+B) are all
subspaces of V. First we shall prove that R (A+B) is a subspace of
R (A)+R (B).
Let α ∈ R (A+B). Then
α = (A+B) (β) for some β ∈ V
= A (β)+B (β).
But A (β) ∈ R (A) and B (β) ∈ R (B).
∴ α ∈ R (A)+R (B).
Thus R (A+B) ⊆ R (A)+R (B).
∴ R (A+B) is a subspace of R (A)+R (B).
∴ dim R (A+B) ≤ dim {R (A)+R (B)}
i.e. ρ (A+B) ≤ dim {R (A)+R (B)}. ...(1)
Now if W1 and W2 are subspaces of a finite-dimensional vector
space V, then
dim (W1+W2) = dim W1+dim W2−dim (W1 ∩ W2)
⇒ dim (W1+W2) ≤ dim W1+dim W2,
i.e. the dimension of the sum ≤ the sum of the dimensions.
∴ dim {R (A)+R (B)} ≤ dim R (A)+dim R (B)
i.e. dim {R (A)+R (B)} ≤ ρ (A)+ρ (B). ...(2)
From (1) and (2), we get
ρ (A+B) ≤ ρ (A)+ρ (B).
(ii) For every α in V, we have
(AB) (α) = A [B (α)].
∴ the range of AB is a subset of the range of A i.e.,
R (AB) ⊆ R (A).
∴ R (AB) is a subspace of R (A).
∴ dim R (AB) ≤ dim R (A)
i.e. ρ (AB) ≤ ρ (A). ...(1)
Applying the result (1) for the transformations B' and A', we
get
ρ (B'A') ≤ ρ (B')
⇒ ρ [(AB)'] ≤ ρ (B') [∵ (AB)' = B'A']
⇒ ρ (AB) ≤ ρ (B). [∵ ρ (B) = ρ (B') and
ρ [(AB)'] = ρ (AB)]
Thus ρ (AB) ≤ ρ (A), and ρ (AB) ≤ ρ (B). ...(2)
∴ ρ (AB) ≤ minimum {ρ (A), ρ (B)}.
(iii) Let B be invertible. Then we can write
A = (AB) B⁻¹.
∴ ρ (A) = ρ [(AB) B⁻¹]
≤ ρ (AB). [by result (1) proved in
the proof of (ii)]
Similarly, we can write
A = B⁻¹ (BA).
∴ ρ (A) = ρ [B⁻¹ (BA)]
≤ ρ (BA). [by result (2) proved in the proof of (ii)]
Now ρ (A) ≤ ρ (AB), and ρ (AB) ≤ ρ (A) implies that
ρ (A) = ρ (AB).
Similarly ρ (A) ≤ ρ (BA), and ρ (BA) ≤ ρ (A) implies that
ρ (A) = ρ (BA).
Ex. 4. If A and B are linear transformations on an n-dimen-
sional vector space V, then prove that
(i) ρ (AB) ≥ ρ (A)+ρ (B)−n;
(ii) ν (AB) ≤ ν (A)+ν (B). (Sylvester's law of nullity)
Solution. (i) First we shall prove that if T is a linear trans-
formation on V and W1 is an h-dimensional subspace of V, then the
dimension of T (W1) is ≥ h−ν (T).
Since V is finite-dimensional, therefore the subspace W1 will
possess a complement. Let V = W1 ⊕ W2. Then
dim W2 = n−h = k (say).
Since V = W1+W2, therefore
T (V) = T (W1)+T (W2), as can be easily seen.
∴ dim T (V) = dim [T (W1)+T (W2)]
≤ dim T (W1)+dim T (W2) [∵ the dimension
of a sum is ≤ the sum of the dimensions]
But T (V) = the range of T.
∴ dim T (V) = ρ (T).
Thus dim T (W1)+dim T (W2) ≥ ρ (T). ...(1)
Now T (W2) is the image of the subspace W2 under T. Therefore
dim W2 ≥ dim T (W2). ...(2)
From (1) and (2), we get
dim T (W1)+dim W2 ≥ ρ (T)
⇒ dim T (W1) ≥ ρ (T)−dim W2
⇒ dim T (W1) ≥ n−ν (T)−k [∵ ρ (T)+ν (T) = n]
⇒ dim T (W1) ≥ h−ν (T). [∵ n−k = h] ...(3)
Now taking T = A and W1 = B (V) in (3), we get
dim A [B (V)] ≥ dim B (V)−ν (A)
⇒ dim (AB) (V) ≥ ρ (B)−ν (A)
[∵ B (V) = the range of B]
⇒ ρ (AB) ≥ ρ (B)−[n−ρ (A)]
⇒ ρ (AB) ≥ ρ (A)+ρ (B)−n.
(ii) We have ρ (AB)+ν (AB) = n.
∴ ρ (AB) = n−ν (AB).
But ρ (AB) ≥ ρ (A)+ρ (B)−n.
∴ n−ν (AB) ≥ ρ (A)+ρ (B)−n
⇒ ν (AB) ≤ [n−ρ (A)]+[n−ρ (B)]
⇒ ν (AB) ≤ ν (A)+ν (B). [∵ ρ (A)+ν (A) = n etc.]
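The inequalities of Examples 3 and 4 can be spot-checked numerically (a sketch assuming NumPy; the seeded random low-rank factors are an arbitrary illustration — the inequalities themselves hold for any pair of operators):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
# Operators on an n-dimensional space built as products of thin factors,
# so that rank(A) <= 3 and rank(B) <= 4.
A = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n))
B = rng.standard_normal((n, 4)) @ rng.standard_normal((4, n))
rk = np.linalg.matrix_rank

assert rk(A + B) <= rk(A) + rk(B)                      # Example 3 (i)
assert rk(A @ B) <= min(rk(A), rk(B))                  # Example 3 (ii)
assert rk(A @ B) >= rk(A) + rk(B) - n                  # Ex. 4 (i)
assert (n - rk(A @ B)) <= (n - rk(A)) + (n - rk(B))    # Ex. 4 (ii), Sylvester
```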
§ 24. Characteristic Values and Characteristic Vectors.
Throughout this discussion T will be regarded as a linear
operator on a finite-dimensional vector space.
Definition. Let T be a linear operator on an n-dimensional
vector space V over the field F. Then a scalar c ∈ F is called a
characteristic value of T if there is a non-zero vector α in V such
that Tα = cα. Also if c is a characteristic value of T, then any non-
zero vector α in V such that Tα = cα is called a characteristic vector
of T belonging to the characteristic value c.
(Meerut 1976, 79; Nagarjuna 78)
Characteristic values are sometimes also called proper values,
eigenvalues, or spectral values. Similarly characteristic vectors are
called proper vectors, eigenvectors, or spectral vectors.
The set of all characteristic values of T is called the spectrum
of T.
Theorem 1. If α is a characteristic vector of T corresponding
to the characteristic value c, then kα is also a characteristic vector of
T corresponding to the same characteristic value c. Here k is any
non-zero scalar.
Proof. Since α is a characteristic vector of T corresponding
to the characteristic value c, therefore α ≠ 0 and
T (α) = cα. ...(1)
If k is any non-zero scalar, then kα ≠ 0.
Also T (kα) = kT (α) = k (cα) = (kc) α
= (ck) α = c (kα).
∴ kα is a characteristic vector of T corresponding to the
characteristic value c.
Thus corresponding to a characteristic value c, there may
correspond more than one characteristic vector.
Theorem 2. If α is a characteristic vector of T, then α cannot
correspond to more than one characteristic value of T.
Proof. Let α be a characteristic vector of T corresponding to
two distinct characteristic values c1 and c2 of T. Then
Tα = c1α
and Tα = c2α.
∴ c1α = c2α
⇒ (c1−c2) α = 0
⇒ c1−c2 = 0 [∵ α ≠ 0]
⇒ c1 = c2, a contradiction.
Theorem 3. Let T be a linear operator on a finite-dimensional
vector space V and let c be a characteristic value of T. Then the set
Wc = {α ∈ V : Tα = cα} is a subspace of V. (Meerut 1992)
Proof. Let α, β ∈ Wc. Then Tα = cα, and Tβ = cβ.
If a, b ∈ F, then
T (aα+bβ) = aTα+bTβ = a (cα)+b (cβ) = c (aα+bβ).
∴ aα+bβ ∈ Wc.
Therefore Wc is a subspace of V.
Note. The set Wc is nothing but the set of all characteristic
vectors of T corresponding to the characteristic value c provided
we include the zero vector in this set. In other words Wc is the
null space of the linear operator T−cI. The subspace Wc of V is
called the characteristic space of the characteristic value c of the
linear operator T. It is also called the space of characteristic
vectors of T associated with the characteristic value c.
Theorem 4. Distinct characteristic vectors of T corresponding
to distinct characteristic values of T are linearly independent.
(Poona 1970; Meerut 77; Nagarjuna 78)
Proof. Let c1, c2, ..., cm be m distinct characteristic values of
T and let α1, α2, ..., αm be characteristic vectors of T corresponding
to these characteristic values respectively. Then
Tαi = ciαi, where 1 ≤ i ≤ m.
Let S = {α1, α2, ..., αm}.
We are to prove that the set S is linearly independent. We shall
prove the theorem by induction on m, the number of vectors in S.
If m = 1, then S is linearly independent, because S contains
only one non-zero vector. Note that a characteristic vector cannot
be 0 by our definition.
Now suppose that the set
S1 = {α1, ..., αk}, where k < m,
is linearly independent.
Consider the set S2 = {α1, ..., αk, αk+1}.
We shall show that S2 is linearly independent.
Let a1, ..., ak+1 ∈ F and let
a1α1 + ... + ak+1αk+1 = 0 ...(1)
⇒ T(a1α1 + ... + ak+1αk+1) = T(0)
⇒ a1T(α1) + ... + ak+1T(αk+1) = 0
⇒ a1(c1α1) + ... + ak+1(ck+1αk+1) = 0. ...(2)
Multiplying (1) by the scalar ck+1 and subtracting from (2),
we get
a1(c1 − ck+1)α1 + ... + ak(ck − ck+1)αk = 0.
∴ a1 = 0, ..., ak = 0, since α1, ..., αk are linearly independent
according to our assumption and c1, ..., ck+1 are all distinct.
Putting each of a1, ..., ak equal to 0 in (1), we get
ak+1αk+1 = 0
⇒ ak+1 = 0, since αk+1 ≠ 0.
Thus the relation (1) implies that
a1 = 0, ..., ak = 0, ak+1 = 0.
∴ the set S2 is linearly independent.
Now the proof is complete by induction.
Corollary. If T is a linear operator on an n-dimensional vector
space V, then T cannot have more than n distinct characteristic
values.
Proof. Suppose T has more than n distinct characteristic
values. Then the corresponding set of distinct characteristic vectors
of T will be linearly independent. Thus we shall have a linearly
independent subset of V containing more than n vectors, which is
not possible because V is of dimension n. Hence T cannot have
more than n distinct characteristic values.
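Theorem 4 and its corollary are easy to confirm numerically. The following NumPy sketch (the matrix and names are illustrative, not from the text) checks that eigenvectors belonging to distinct characteristic values come out linearly independent:

```python
import numpy as np

# A matrix chosen to have the distinct characteristic values 1, 3, 5.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

values, vectors = np.linalg.eig(A)   # columns of `vectors` are eigenvectors

assert len(set(np.round(values, 8))) == 3     # the three values are distinct
assert np.linalg.matrix_rank(vectors) == 3    # so the eigenvectors are independent
```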
Theorem 5. Let T be a linear operator on a finite-dimensional
vector space V. Then the following are equivalent:
(i) c is a characteristic value of T.
(ii) The operator T − cI is singular (not invertible).
(iii) det (T − cI) = 0. (Meerut 1989)
Proof. (i) ⇒ (ii).
c is a characteristic value of T implies that there exists a
non-zero vector α in V such that
Tα = cα,
or Tα = cIα, where I is the identity operator on V,
or Tα = (cI)α,
or (T − cI)α = 0.
Thus (T − cI)α = 0 while α ≠ 0. Therefore the operator T − cI
is singular, and thus T − cI is not invertible.
(ii) ⇒ (iii).
If the operator T − cI is singular, then it is not invertible.
Therefore det (T − cI) = 0.
(iii) ⇒ (i).
If det (T − cI) = 0, then T − cI is not invertible.
If T − cI is not invertible, then T − cI is singular, because every
non-singular operator on a finite-dimensional vector space is invertible.
Now T − cI singular means that there is a non-zero vector α in
V such that
(T − cI)α = 0,
or Tα − cIα = 0,
or Tα = cα.
∴ c is a characteristic value of T.
This completes the proof of the theorem.
Let T be a linear operator on an n-dimensional vector space
V. Let B be an ordered basis for V and let A be the matrix of T
with respect to B, i.e., let A = [T]B. If c is any scalar, we have
[T − cI]B = [T]B − c[I]B
= A − cI, where I is the unit matrix of order n.
[Note that [I]B = I.]
We have det (T − cI) = det [T − cI]B
= det (A − cI).
Therefore c is a characteristic value of T iff det (A − cI) = 0.
This enables us to make the following definition.
Characteristic values of a matrix. Definition.
Let A = [aij]n×n be a square matrix of order n over the field F.
An element c in F is called a characteristic value of A if
det (A − cI) = 0, where I is the unit matrix of order n.
Now suppose T is a linear operator on an n-dimensional vector
space V and A is the matrix of T with respect to any ordered
basis B. Then c is a characteristic value of T iff c is a characteristic
value of the matrix A. Therefore our definition of characteristic
values of a matrix is sensible.
Characteristic equation of a matrix. Definition.
Let A be a square matrix of order n over the field F. Consider
the matrix A − xI. The elements of this matrix are polynomials in
x of degree at most 1. If we evaluate det (A − xI), then it will be a
polynomial in x of degree n, in which the coefficient of x^n is
(−1)^n. Let us denote this polynomial by f(x).
Then f(x) = det (A − xI) is called the characteristic polynomial
of the matrix A. The equation f(x) = 0 is called the characteristic
equation of the matrix A. Now c is a characteristic value of the
matrix A iff det (A − cI) = 0, i.e., iff f(c) = 0, i.e., iff c is a root of the
characteristic equation of A. Thus in order to find the characteristic
values of a matrix we should first obtain its characteristic equation
and then we should find the roots of this equation.
Characteristic vector of a matrix.
Definition. If c is a characteristic value of an n×n matrix A,
then a non-zero matrix X of the type n×1 such that AX = cX is
called a characteristic vector of A corresponding to the characteristic
value c.
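As a numerical illustration (a NumPy sketch, not part of the text; the matrix is chosen for convenience), the characteristic values of a small matrix and the relation AX = cX can be checked directly:

```python
import numpy as np

# det(A - xI) = (2 - x)(3 - x), so the characteristic values are 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

values, vectors = np.linalg.eig(A)   # columns of `vectors` are the vectors X

for c, X in zip(values, vectors.T):
    assert np.allclose(A @ X, c * X)          # AX = cX for each pair (c, X)

assert np.allclose(sorted(values.real), [2.0, 3.0])
```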
Theorem 6. Let T be a linear operator on an n-dimensional
vector space V and A be the matrix of T relative to any ordered
basis B. Then a vector α in V is an eigenvector of T corresponding
to its eigenvalue c if and only if its coordinate vector X relative to
the basis B is an eigenvector of A corresponding to its eigenvalue c.
Proof. We have
[T − cI]B = [T]B − c[I]B = A − cI.
If α ≠ 0, then the coordinate vector X of α is also non-zero.
Now [(T − cI)(α)]B = [T − cI]B [α]B [See theorem 2 of § 13]
= (A − cI)X.
∴ (T − cI)(α) = 0 iff (A − cI)X = O,
or T(α) = cα iff AX = cX.
∴ α is an eigenvector of T iff X is an eigenvector of A.
Thus with the help of this theorem we see that our definition
of characteristic vector of a matrix is sensible. Now we shall define
the characteristic polynomial of a linear operator. Before doing so
we shall prove the following theorem.
Theorem 7. Similar matrices A and B have the same characteristic
polynomial and hence the same eigenvalues. If X is an eigenvector
of A corresponding to the eigenvalue c, then P⁻¹X is an eigenvector
of B corresponding to the eigenvalue c, where B = P⁻¹AP.
(Meerut 1976, 83, 92)
Proof. Suppose A and B are similar matrices. Then there
exists an invertible matrix P such that
B = P⁻¹AP.
We have B − xI = P⁻¹AP − xI
= P⁻¹AP − P⁻¹(xI)P [∵ P⁻¹(xI)P = xP⁻¹IP = xI]
= P⁻¹(A − xI)P.
∴ det (B − xI) = det P⁻¹ . det (A − xI) . det P
= det P⁻¹ . det P . det (A − xI) = det (P⁻¹P) . det (A − xI)
= det I . det (A − xI) = 1 . det (A − xI) = det (A − xI).
Thus the matrices A and B have the same characteristic polynomial,
and consequently they will have the same characteristic
values.
If c is an eigenvalue of A and X is a corresponding eigenvector,
then AX = cX, and hence
B(P⁻¹X) = (P⁻¹AP)(P⁻¹X) = P⁻¹AX = P⁻¹(cX) = c(P⁻¹X).
∴ P⁻¹X is an eigenvector of B corresponding to c. This
completes the proof of the theorem.
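Theorem 7 can be confirmed numerically. The sketch below (NumPy; the particular A and P are illustrative choices, not from the text) checks that B = P⁻¹AP has the same characteristic polynomial as A, and that P⁻¹X is an eigenvector of B:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # any invertible matrix will do
B = np.linalg.inv(P) @ A @ P

# np.poly gives the coefficients of det(xI - M); similar matrices agree.
assert np.allclose(np.poly(A), np.poly(B))

values, vectors = np.linalg.eig(A)
c, X = values[0], vectors[:, 0]
Y = np.linalg.inv(P) @ X
assert np.allclose(B @ Y, c * Y)      # P^{-1} X is an eigenvector of B
```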
Now suppose that T is a linear operator on an n-dimensional
vector space V. If B1, B2 are any two ordered bases for V, then we
know that the matrices [T]B1 and [T]B2 are similar. Also similar
matrices have the same characteristic polynomial. This enables us
to define sensibly the characteristic polynomial of T as follows:
Characteristic polynomial of a linear operator.
Definition. Let T be a linear operator on an n-dimensional
vector space V. The characteristic polynomial of T is the characteristic
polynomial of any n×n matrix which represents T in some
ordered basis for V. On account of the above discussion the
characteristic polynomial of T as defined by us will be unique.
If B is any ordered basis for V and A is the matrix of T with
respect to B, then
det (T − xI) = det [T − xI]B = det ([T]B − x[I]B)
= det (A − xI)
= the characteristic polynomial of A and so also that of T.
∴ the characteristic polynomial of T = det (T − xI).
The equation det (T − xI) = 0 is called the characteristic equation
of T.
Existence of characteristic values. Let T be a linear operator
on an n-dimensional vector space V over the field F. Then c belonging
to F will be a characteristic value of T iff c is a root of the
characteristic equation of T, i.e., iff c is a root of the equation
det (T − xI) = 0. ...(1)
The equation (1) is of degree n in x. If the field F is algebraically
closed, i.e., if every polynomial equation over F possesses a root
in F, then T will definitely have at least one characteristic value. If the
field F is not algebraically closed, then T may or may not have a
characteristic value, according as the equation (1) has or has not a
root in F. Since the equation (1) is of degree n in x, if T
has a characteristic value then it cannot have more than n distinct
characteristic values. The field of complex numbers is algebraically
closed: by the fundamental theorem of algebra we know that every
polynomial equation over the field of complex numbers is solvable.
Therefore if F is the field of complex numbers, then T will definitely
have at least one characteristic value. The field of real numbers
is not algebraically closed. If F is the field of real numbers, then T
may or may not have a characteristic value.
Example. Consider the linear operator T on V2(R) which is
represented in the standard ordered basis by the matrix
A = [ 0 −1 ]
    [ 1  0 ].
The characteristic polynomial for T (or for A) is
det (A − xI) = | −x −1 |
               |  1 −x | = x² + 1.
The polynomial equation x² + 1 = 0 has no roots in R. Therefore
T has no characteristic values.
However, if T is a linear operator on V2(C), then the characteristic
equation of T has two distinct roots i and −i in C. In this
case T has two characteristic values, i and −i.
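A quick numerical check of this example (a NumPy sketch, not part of the text):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
values = np.linalg.eigvals(A)

# Over C the characteristic values are i and -i, as found above.
assert np.allclose(sorted(values, key=lambda z: z.imag), [-1j, 1j])
```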
Algebraic and geometric multiplicity of a characteristic value.
Definition. Let T be a linear operator on an n-dimensional vector
space V and let c be a characteristic value of T. By the geometric
multiplicity of c we mean the dimension of the characteristic space Wc
of c. By the algebraic multiplicity of c we mean the multiplicity of c
as a root of the characteristic equation of T.
Method of finding the characteristic values and the corresponding
characteristic vectors of a linear operator T. Let T be a linear
operator on an n-dimensional vector space V over the field F. Let
B be any ordered basis for V and let A = [T]B. The roots of the
equation det (A − xI) = 0 will give the characteristic values of A and
so also of T. Let c be a characteristic value of T. Then α ≠ 0 will be
a characteristic vector corresponding to this characteristic value if
(T − cI)α = 0,
i.e., if [T − cI]B [α]B = [0]B,
i.e., if (A − cI)X = O, ...(1)
where X = [α]B is a column matrix of the type n×1 and O is the null
matrix of the type n×1. Thus to find the coordinate matrix of α
with respect to B, we should solve the matrix equation (1) for X.
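This method translates directly into a computation: given a characteristic value c, solve (A − cI)X = O for a non-zero X. The sketch below (NumPy; the matrix is an illustrative choice, not from the text) takes X from the null space of A − cI via the singular value decomposition:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
c = 3.0                                # a characteristic value of A

# The right-singular vector for the smallest singular value of A - cI
# spans its null space (here that null space is one-dimensional).
_, _, Vt = np.linalg.svd(A - c * np.eye(3))
X = Vt[-1]

assert np.allclose((A - c * np.eye(3)) @ X, 0)
assert np.allclose(A @ X, c * X)       # X is a characteristic vector for c
```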
Matric Polynomials. Definition. An expression of the form
f(x) = A0 + A1x + A2x² + ... + Amx^m,
where A0, A1, A2, ..., Am are all square matrices of order n, is called
a matric polynomial of degree m, provided Am is not a null matrix.
The symbol x is called an indeterminate.
Equality of Matric Polynomials. Two matric polynomials are
equal iff the coefficients of the like powers of x are the same.
Lemma. Every square matrix over the field F whose elements
are ordinary polynomials in x over F can essentially be expressed as
a matric polynomial in x of degree m, where m is the highest power
of x occurring in any element of the matrix.
We shall illustrate this lemma by the following example.
Consider the matrix

A = | 1+2x+3x²   x²      4−6x     |
    | 1+x³       3+4x²   1−2x+4x³ |
    | 2−3x+2x³   5       6        |

in which the highest power of x occurring in any element is 3.
Rewriting each element as a cubic in x, supplying missing coefficients
with zeros, we get

A = | 1+2x+3x²+0x³   0+0x+1x²+0x³   4−6x+0x²+0x³ |
    | 1+0x+0x²+1x³   3+0x+4x²+0x³   1−2x+0x²+4x³ |
    | 2−3x+0x²+2x³   5+0x+0x²+0x³   6+0x+0x²+0x³ |

Obviously A can be written as the matric polynomial

A = | 1 0 4 |     |  2 0 −6 |      | 3 1 0 |      | 0 0 0 |
    | 1 3 1 | + x |  0 0 −2 | + x² | 0 4 0 | + x³ | 1 0 4 |
    | 2 5 6 |     | −3 0  0 |      | 0 0 0 |      | 2 0 0 |
Theorem 8. The Cayley-Hamilton Theorem.
Let T be a linear operator on an n-dimensional vector space
V(F). Then T satisfies its characteristic equation, i.e., if f(x) be the
characteristic polynomial of T, then f(T) = Ô.
Or
Every square matrix satisfies its characteristic equation.
(Meerut 1980, 81, 82, 85, 87, 92; Banaras 73; Poona 70; Raj. 65;
G.N.D.U. Amritsar 90; Andhra 92; S.V.U. Tirupati 90)
Proof. Let T be a linear operator on an n-dimensional vector
space V over the field F. Let B be any ordered basis for V and A
be the matrix of T relative to B, i.e., let A = [T]B. The characteristic
polynomial of T is the same as the characteristic polynomial of A.
If A = [aij]n×n, then the characteristic polynomial f(x) of A is given by

f(x) = det (A − xI) = | a11−x  a12    ...  a1n   |
                      | a21    a22−x  ...  a2n   |
                      | ........................ |
                      | an1    an2    ...  ann−x |

= a0 + a1x + a2x² + ... + anx^n (say), ...(1)
where the ai are in F.
The characteristic equation of A is f(x) = 0,
i.e., a0 + a1x + a2x² + ... + anx^n = 0.
Since the elements of the matrix A − xI are polynomials at
most of the first degree in x, the elements of the matrix
adj (A − xI) are ordinary polynomials in x of degree n − 1 or less.
Note that the elements of the matrix adj (A − xI) are the cofactors
of the elements of the matrix A − xI. Therefore adj (A − xI) can
be written as a matric polynomial in x in the form
adj (A − xI) = B0 + B1x + B2x² + ... + Bn−1 x^(n−1), ...(2)
where the Bi's are square matrices of order n over F with elements
independent of x.
Now by the property of adjoints, we know that
(A − xI) . adj (A − xI) = {det (A − xI)} I.
∴ (A − xI)(B0 + xB1 + x²B2 + ... + x^(n−1) Bn−1)
= (a0 + a1x + ... + anx^n) I. [from (1) and (2)]
Equating the coefficients of like powers of x on both sides,
we get
AB0 = a0I,
AB1 − IB0 = a1I,
AB2 − IB1 = a2I,
. . . . . . . . .
ABn−1 − IBn−2 = an−1 I,
−IBn−1 = anI.
Premultiplying these equations successively by I, A, A², ..., A^n
and adding, we get
O = a0I + a1A + a2A² + ... + anA^n,
where O is the null matrix of order n.
Thus f(A) = O.
Now f(T) = a0I + a1T + a2T² + ... + anT^n.
∴ [f(T)]B = [a0I + a1T + a2T² + ... + anT^n]B
= a0[I]B + a1[T]B + ... + an[T^n]B
= a0I + a1A + ... + anA^n
= f(A).
∴ f(A) = O
⇒ [f(T)]B = O = [Ô]B
⇒ f(T) = Ô,
i.e., a0I + a1T + a2T² + ... + anT^n = Ô. ...(3)
Corollary. We have
f(x) = a0 + a1x + ... + anx^n = det (A − xI).
∴ f(0) = a0 = det A = det T.
If T is non-singular, then T is invertible and
det T ≠ 0, i.e., a0 ≠ 0.
Then from (3), we get
a0I = −(a1T + a2T² + ... + anT^n)
⇒ I = T[−(a1/a0)I − (a2/a0)T − ... − (an/a0)T^(n−1)]
⇒ T⁻¹ = −(a1/a0)I − (a2/a0)T − ... − (an/a0)T^(n−1).
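Both the theorem and the corollary can be verified numerically. In this NumPy sketch (the matrix is an illustrative choice, not from the text), substituting A into its characteristic polynomial gives the null matrix, and the corollary's formula reproduces A⁻¹:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
n = A.shape[0]

# np.poly gives det(xI - A) = x^2 - 5x + 5, coefficients [1, -5, 5].
coeffs = np.poly(A)

# Evaluate f(A) by Horner's rule; Cayley-Hamilton says it is the null matrix.
f_A = np.zeros_like(A)
for c in coeffs:
    f_A = f_A @ A + c * np.eye(n)
assert np.allclose(f_A, 0)

# Corollary: A^{-1} = -(a1/a0) I - (a2/a0) A.  (det(xI - A) and det(A - xI)
# differ only by the factor (-1)^n, which cancels in the ratios ai/a0.)
a = coeffs[::-1]                        # a[k] is the coefficient of x^k
A_inv = -(a[1] * np.eye(n) + a[2] * A) / a[0]
assert np.allclose(A_inv, np.linalg.inv(A))
```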
Diagonalizable operator. Definition. Let T be a linear operator
on a finite dimensional vector space V. Then T is said to be
diagonalizable if there is a basis B for V each vector of which is a
characteristic vector of T.
Matrix of a diagonalizable operator. Let T be a diagonalizable
operator on an n-dimensional vector space V. Let B = {α1, ..., αn}
be an ordered basis for V such that each αi is a characteristic
vector of T. Let Tαi = ciαi. Then
Tα1 = c1α1 = c1α1 + 0α2 + ... + 0αn
Tα2 = c2α2 = 0α1 + c2α2 + ... + 0αn
. . . . . . . . .
Tαn = cnαn = 0α1 + 0α2 + ... + 0αn−1 + cnαn.
Therefore the matrix of T relative to B is

[T]B = | c1  0  ...  0  |
       | 0   c2 ...  0  |
       | ............... |
       | 0   0  ...  cn |

This matrix is a diagonal matrix. Note that a square matrix
of order n is said to be a diagonal matrix if all the elements lying
above and below the principal diagonal are equal to zero. The
scalars c1, ..., cn need not all be distinct. If V is n-dimensional,
then T is diagonalizable iff T has n linearly independent characteristic
vectors.
Diagonalizable Matrix. Definition. A matrix A over a field F
is said to be diagonalizable if it is similar to a diagonal matrix over
the field F. Thus a matrix A is diagonalizable if there exists an
invertible matrix P such that P⁻¹AP = D, where D is a diagonal
matrix. Also the matrix P is then said to diagonalize A, or to
transform A to diagonal form.
Theorem 9. A necessary and sufficient condition that an n×n
matrix A over a field F be diagonalizable is that A has n linearly
independent characteristic vectors in Vn(F).
Proof. If A is diagonalizable, then A is similar to a diagonal
matrix D. Therefore there exists an invertible matrix P such that
P⁻¹AP = D
or AP = PD. ...(1)
If c1, c2, ..., cn are the diagonal elements of D, then c1, c2, ..., cn
are the characteristic values of D, as can easily be seen. But similar
matrices have the same characteristic values. Therefore c1, c2, ..., cn
are the characteristic values of A.
Now suppose P1, ..., Pn are the column vectors of the
matrix P. Then equating corresponding columns on each side of
(1), we get
APi = ciPi (i = 1, 2, ..., n). ...(2)
But (2) shows that Pi is a characteristic vector of A corresponding
to the characteristic value ci. Since the matrix P is invertible,
its column vectors P1, ..., Pn are n linearly independent
vectors belonging to Vn(F). Thus A has n linearly independent
characteristic vectors P1, ..., Pn.
Conversely, if P1, ..., Pn are n linearly independent characteristic
vectors of A corresponding to the characteristic values c1, ..., cn,
then equations (2) hold. Therefore equation (1) holds, where P is
the matrix with columns P1, ..., Pn. Since the columns of P are
linearly independent, P is invertible, and hence (1) implies
P⁻¹AP = D. Thus A is similar to a diagonal matrix and so is
diagonalizable.
This completes the proof of the theorem.
Remark. In the proof of the above theorem we have shown
that if A is diagonalizable and P diagonalizes A, then

P⁻¹AP = | c1  0  ...  0  |
        | 0   c2 ...  0  |
        | ............... |
        | 0   0  ...  cn |

if and only if the jth column of P is a characteristic vector of A
corresponding to the characteristic value cj of A (j = 1, 2, ..., n).
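The construction in the proof is exactly how one diagonalizes a matrix in practice. A NumPy sketch (the matrix is an illustrative choice, not from the text):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])             # characteristic values 5 and 2

values, P = np.linalg.eig(A)           # columns of P are characteristic vectors
D = np.linalg.inv(P) @ A @ P           # P^{-1} A P

assert np.allclose(D, np.diag(values)) # A is transformed to diagonal form
```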
Theorem 10. A linear operator T on an n-dimensional vector
space V(F) is diagonalizable if and only if its matrix A relative to
any ordered basis B of V is diagonalizable.
Proof. Suppose T is diagonalizable. Then T has n linearly
independent characteristic vectors α1, α2, ..., αn in V. Suppose
X1, X2, ..., Xn are the coordinate vectors of α1, ..., αn relative to
the basis B. Then X1, ..., Xn are also linearly independent, since V
is isomorphic to Vn(F) by the isomorphism which takes a vector in V
to its coordinate vector in Vn(F), and under an isomorphism a linearly
independent set is mapped onto a linearly independent set. Further,
X1, ..., Xn are characteristic vectors of the matrix A [see
theorem 6]. Therefore the matrix A is diagonalizable [see
theorem 9].
Conversely, suppose the matrix A is diagonalizable. Then A
has n linearly independent characteristic vectors X1, ..., Xn in Vn(F).
If α1, ..., αn are the vectors in V having X1, ..., Xn as their coordinate
vectors, then α1, ..., αn will be n linearly independent characteristic
vectors of T. So T is diagonalizable.
Theorem 11. Let T be any linear operator on a finite dimensional
vector space V, let c1, c2, ..., ck be the distinct characteristic
values of T, and let Wi be the null space of (T − ciI). Then the
subspaces W1, ..., Wk are independent. (Meerut 1972, 74, 77)
Further show that if in addition T is diagonalizable, then V is
the direct sum of the subspaces W1, ..., Wk. (Meerut 1993, 93P)
Proof. By definition of Wi, we have
Wi = {α : α ∈ V and (T − ciI)α = 0, i.e., Tα = ciα}.
Now let αi be in Wi, i = 1, ..., k, and suppose that
α1 + α2 + ... + αk = 0. ...(1)
Let j be any integer between 1 and k, and let
Uj = Π (T − ciI), the product being taken over all i with
1 ≤ i ≤ k, i ≠ j.
Note that Uj is the product of the operators (T − ciI) for i ≠ j.
In other words, Uj = (T − c1I)...(T − ckI), where in the product
the factor T − cjI is missing.
Let us find Ujαi, i = 1, ..., k. By the definition of Wi, we have
(T − ciI)αi = 0. Since the operators (T − ciI) all commute, being
polynomials in T, therefore Ujαi = 0 for i ≠ j. Note that for each
i ≠ j, Uj contains the factor (T − ciI), and (T − ciI)αi = 0.
Also
Ujαj = [(T − c1I)...(T − ckI)]αj (the factor T − cjI being absent)
= [(T − c1I)...(T − ck−1I)](Tαj − ckIαj)
= [(T − c1I)...(T − ck−1I)](cjαj − ckαj)
[∵ Tαj = cjαj and Iαj = αj]
= [(T − c1I)...(T − ck−1I)](cj − ck)αj
= (cj − ck)[(T − c1I)...(T − ck−1I)]αj
= (cj − ck)(cj − ck−1)...(cj − c1)αj, the factor cj − cj being
missing. Thus
Ujαj = [ Π (cj − ci) ] αj, the product taken over i ≠ j. ...(2)
Now applying Uj to both sides of (1), we get
Ujα1 + Ujα2 + ... + Ujαk = 0
⇒ Ujαj = 0 [∵ Ujαi = 0 if i ≠ j]
⇒ [ Π (cj − ci) ] αj = 0, the product taken over i ≠ j. [by (2)]
Since the scalars ci are all distinct, the product Π (cj − ci),
i ≠ j, is a non-zero scalar. Hence
[ Π (cj − ci) ] αj = 0
⇒ αj = 0. Thus αj = 0 for every integer j between 1 and k.
In this way α1 + ... + αk = 0
⇒ αi = 0 for each i. Hence the subspaces W1, ..., Wk are
independent.
Second Part. Now suppose that T is diagonalizable. Then
we shall show that V = W1 + ... + Wk. Since T is diagonalizable,
there exists a basis of V each vector of which is a characteristic
vector of T. Thus there exists a basis of V consisting of
vectors belonging to the characteristic subspaces W1, ..., Wk. If
α ∈ V, then α can be expressed as a linear combination of these
basis vectors. Thus α can be written as α = α1 + ... + αk, where
αi ∈ Wi, i = 1, ..., k. In this way α ∈ W1 + ... + Wk. Therefore
V = W1 + ... + Wk. But in the first part we have proved that the
subspaces are independent. Hence
V = W1 ⊕ ... ⊕ Wk.
Theorem 12. If T is a diagonalizable operator on a finite
dimensional vector space V, and c1, ..., ck are the distinct characteristic
values of T, then there are linear operators E1, ..., Ek on V such
that
(a) T = c1E1 + ... + ckEk;
(b) I = E1 + ... + Ek;
(c) EiEj = Ô, i ≠ j;
(d) Ei² = Ei;
(e) the range of Ei is the space of characteristic vectors of T
associated with the characteristic value ci. (Meerut 1984P)
Proof. Let Wi be the null space of the operator T − ciI, for
i = 1, ..., k. Then W1, ..., Wk are the characteristic spaces of the
characteristic values c1, ..., ck respectively. By theorem 11, V is
the direct sum of the subspaces W1, ..., Wk. If α ∈ V, then α can
be uniquely written as
α = α1 + ... + αk, where each αi ∈ Wi.
Now let Ei be the function from V into V defined by the rule
Ei(α) = αi.
Then Ei is a linear transformation on V. For its proof, and
for the proofs of parts (b), (c), (d), and (e) of this theorem, consult
theorem 5 on page 223.
Now it remains to prove part (a) of this theorem. Let
α ∈ V and let
α = α1 + ... + αk, where αi ∈ Wi for i = 1, ..., k.
We have
Tα = T(α1 + ... + αk)
= Tα1 + ... + Tαk
= c1α1 + ... + ckαk [∵ αi ∈ Wi and, by def. of Wi, we
have Tαi = ciαi]
= c1E1α + ... + ckEkα [∵ by def. of Ei, we have
Eiα = αi]
= (c1E1 + ... + ckEk)α.
Thus we have Tα = (c1E1 + ... + ckEk)α for all α ∈ V. Hence
T = c1E1 + ... + ckEk.
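For a diagonalizable matrix, the operators Ei can be built explicitly from an eigenvector matrix P: Ei is (column i of P) times (row i of P⁻¹). The following NumPy sketch (the matrix is an illustrative choice, not from the text) checks properties (a)-(d):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])             # distinct characteristic values 5 and 2
values, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)

# E_i projects onto the characteristic space of c_i along the other one.
E = [np.outer(P[:, i], P_inv[i, :]) for i in range(len(values))]

assert np.allclose(values[0] * E[0] + values[1] * E[1], A)   # (a)
assert np.allclose(E[0] + E[1], np.eye(2))                   # (b)
assert np.allclose(E[0] @ E[1], 0)                           # (c)
assert np.allclose(E[0] @ E[0], E[0])                        # (d)
```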

Solved Examples

Example 1. Let V be an n-dimensional vector space over F. What
is the characteristic polynomial of (i) the identity operator on V,
(ii) the zero operator on V? (Meerut 1981)
Solution. Let B be any ordered basis for V.
(i) If I is the identity operator on V, then
[I]B = I.
∴ the characteristic polynomial of I = det (I − xI)

= | 1−x  0   ...  0   |
  | 0    1−x ...  0   |
  | .................. |
  | 0    0   ...  1−x |

= (1 − x)^n.
(ii) If Ô is the zero operator on V, then [Ô]B = O, i.e., the null
matrix of order n.
∴ the characteristic polynomial of Ô = det (O − xI)

= | −x  0   ...  0  |
  | 0   −x  ...  0  |
  | ................ |
  | 0   0   ...  −x |

= (−1)^n x^n.
Example 2. Let T be a linear operator on a finite dimensional
vector space V and let c be a characteristic value of T. Show that
the characteristic space of c, i.e., Wc, is invariant under T.
Solution. We have, by definition,
Wc = {α ∈ V : Tα = cα}.
Let α ∈ Wc. Then Tα = cα.
Since Wc is a subspace, therefore
c ∈ F and α ∈ Wc ⇒ cα ∈ Wc.
Thus α ∈ Wc ⇒ Tα = cα ∈ Wc.
Hence Wc is invariant under T.
Example 3. If T be a linear operator on a finite dimensional
vector space V and c be a characteristic value of T, then show that
the characteristic space of c, i.e., Wc, is the null space of the operator
T − cI.
Solution. Let α ∈ Wc. Then
Tα = cα
⇒ (T − cI)α = 0
⇒ α ∈ the null space of T − cI.
Again, let α ∈ the null space of T − cI.
Then (T − cI)α = 0
⇒ Tα = cα
⇒ α ∈ Wc.
Hence Wc = the null space of T − cI.
Example 4. Show that the characteristic values of a diagonal
matrix are precisely the elements in the diagonal. Hence show that
if a matrix B is similar to a diagonal matrix D, then the diagonal
elements of D are the characteristic values of B.
Solution. Let

A = | a11  0    0  ...  0   |
    | 0    a22  0  ...  0   |
    | ..................... |
    | 0    0    0  ...  ann |

be a diagonal matrix of order n. The characteristic equation of
A is
det (A − xI) = 0,
i.e., (a11 − x)(a22 − x)...(ann − x) = 0,
whose roots are a11, a22, ..., ann, i.e., the diagonal elements.
Now we know that similar matrices have the same characteristic
values. Hence if B is similar to D, then the eigenvalues of B
are the diagonal elements of D.
Example 5. Let T be a linear operator on a finite dimensional
vector space V. Then show that 0 is a characteristic value of T iff T
is not invertible.
Solution. Suppose 0 is a characteristic value of T. Then there
exists a non-zero vector α in V such that
Tα = 0α
⇒ Tα = 0.
∴ T is singular, and so T is not invertible.
Conversely, suppose that T is not invertible. Since T is a linear
operator on a finite-dimensional vector space V, T is not
invertible means that T is singular. Thus there exists a non-zero
vector α in V such that
Tα = 0 = 0α.
∴ 0 is a characteristic value of T.
Example 6. If c is a characteristic value of an invertible transformation
T, then show that c⁻¹ is a characteristic value of T⁻¹.
Solution. Since T is invertible, therefore c ≠ 0. So c⁻¹ exists.
Now c is a characteristic value of T. Therefore there exists a
non-zero vector α in V such that
Tα = cα
⇒ T⁻¹(Tα) = T⁻¹(cα) ⇒ (T⁻¹T)α = cT⁻¹(α)
⇒ I(α) = cT⁻¹(α) ⇒ α = cT⁻¹(α) ⇒ c⁻¹α = T⁻¹(α)
⇒ T⁻¹α = c⁻¹α, α ≠ 0.
∴ c⁻¹ is a characteristic value of T⁻¹.
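A numerical check of this example (a NumPy sketch; the matrix is an illustrative choice, not from the text):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])             # invertible; characteristic values 5 and 2
vals = np.linalg.eigvals(A)
inv_vals = np.linalg.eigvals(np.linalg.inv(A))

# The characteristic values of A^{-1} are the reciprocals 1/5 and 1/2.
assert np.allclose(sorted(inv_vals), sorted(1 / vals))
```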
Example 7. If c ∈ F is a characteristic value of a linear operator
T on a vector space V(F), then for any polynomial p(x) over F,
p(c) is a characteristic value of p(T).
Solution. Since c is a characteristic value of T, there
exists a non-zero vector α in V such that
Tα = cα
⇒ T(Tα) = T(cα) ⇒ T²α = cTα
⇒ T²α = c(cα) [∵ Tα = cα]
⇒ T²α = c²α.
∴ c² is a characteristic value of T².
Repeating this process k times, we get
T^k α = c^k α.
∴ c^k is a characteristic value of T^k, where k is any positive
integer.
Let p(x) = a0 + a1x + a2x² + ... + amx^m,
where the ai's ∈ F.
Then p(T) = a0I + a1T + ... + amT^m.
We have [p(T)]α = (a0I + a1T + ... + amT^m)α
= a0Iα + a1Tα + ... + amT^m α
= a0α + a1(cα) + ... + am(c^m α)
= (a0 + a1c + ... + amc^m)α.
∴ p(c) = a0 + a1c + ... + amc^m is a characteristic value of p(T).
Example 8. Let A be a square matrix of order n over the field
F. If c ∈ F is a characteristic value of A, then for any polynomial
p(x) over F, p(c) is a characteristic value of p(A).
Solution. Proceed as in Example 7, replacing the linear operator
T by the matrix A.
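Examples 7 and 8 can be confirmed numerically; this NumPy sketch (the matrix and the polynomial are illustrative choices, not from the text) uses p(x) = x² + 3x + 1:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])             # characteristic values 5 and 2
pA = A @ A + 3 * A + np.eye(2)         # p(A) for p(x) = x^2 + 3x + 1

vals_A = np.linalg.eigvals(A)
vals_pA = np.linalg.eigvals(pA)

# p maps the characteristic values of A onto those of p(A).
assert np.allclose(sorted(vals_pA), sorted(v**2 + 3*v + 1 for v in vals_A))
```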
Example 9. If A and B are similar linear operators on a finite
dimensional vector space V, then A and B have the same characteristic
polynomial.
Solution. Suppose A and B are similar linear transformations
on a finite dimensional vector space V. Then there exists an invertible
linear operator C on V such that
A = CBC⁻¹.
We have A − xI = CBC⁻¹ − xI = CBC⁻¹ − C(xI)C⁻¹
= C(B − xI)C⁻¹.
∴ det (A − xI) = det {C(B − xI)C⁻¹}
= det C . det (B − xI) . det C⁻¹ = det C . det C⁻¹ . det (B − xI)
= det (CC⁻¹) . det (B − xI) = det I . det (B − xI) = 1 . det (B − xI)
= det (B − xI).
Now the characteristic polynomial of A = det (A − xI).
∴ the characteristic polynomial of A = the characteristic
polynomial of B.
Example 10. Suppose S and T are two linear operators on a
finite dimensional vector space V. If S and T have the same characteristic
polynomial, then det S = det T.
Solution. Let dim V = n. Let B be any ordered basis for V.
If A is the matrix of S relative to B, then the characteristic polynomial
f(x) of S is given by
f(x) = det (A − xI).
Let f(x) = a0 + a1x + ... + anx^n = det (A − xI).
The constant term in this polynomial is
a0 = f(0) = det A.
Similarly, the constant term in the characteristic polynomial of
T = det C, where C is the matrix of T relative to B.
Since S and T have the same characteristic polynomial, the
two constant terms must be equal.
∴ det A = det C
⇒ det [S]B = det [T]B
⇒ det S = det T.
Example 11. Find all (complex) proper values and proper
vectors of the following matrices:

(a) [ 0 1 ]    (b) [ 1 0 ]    (c) [ 1 1 ]
    [ 0 0 ]        [ 0 i ]        [ 0 i ]

Solution. (a) Let A = [ 0 1 ]
                      [ 0 0 ].
We have A − xI = [ 0 1 ] − x [ 1 0 ] = [ −x  1 ]
                 [ 0 0 ]     [ 0 1 ]   [  0 −x ].
∴ characteristic polynomial of A = det (A − xI)
= | −x  1 |
  |  0 −x | = x².
∴ the characteristic equation of A is
det (A − xI) = 0, i.e., x² = 0.
The only root of this equation is x = 0.
∴ 0 is the only characteristic value of A.
Now let x1, x2 be the components of a characteristic vector α
corresponding to this characteristic value, and let X be the coordinate
matrix of α, so that
X = [ x1 ]
    [ x2 ].
Now X will be given by a non-zero solution of the equation
(A − 0I)X = O,
i.e., [ 0 1 ] [ x1 ] = [ 0 ]
      [ 0 0 ] [ x2 ]   [ 0 ],
i.e., [ x2 ] = [ 0 ]
      [ 0  ]   [ 0 ].
Thus x2 = 0, x1 = k, where k is any non-zero complex number.
∴ X = [ k ]
      [ 0 ], where k is any non-zero complex number.

(b) Let A = [ 1 0 ]
            [ 0 i ].
We have A − xI = [ 1−x  0   ]
                 [ 0    i−x ].
∴ the characteristic equation of A is
det (A − xI) = 0,
i.e., | 1−x  0   |
      | 0    i−x | = 0,
i.e., (1 − x)(i − x) = 0.
The roots of this equation are x = 1, x = i.
∴ 1 and i are the two characteristic values of A.
Now let x1, x2 be the components of a characteristic vector
corresponding to the characteristic value 1, and let X be the coordinate
matrix of this vector, so that
X = [ x1 ]
    [ x2 ].
Now X will be given by a non-zero solution of the equation
(A − 1I)X = O,
i.e., [ 1−1  0   ] [ x1 ] = [ 0 ]
      [ 0    i−1 ] [ x2 ]   [ 0 ],
i.e., [ 0       ] = [ 0 ]
      [ (i−1)x2 ]   [ 0 ].
Thus x2 = 0, x1 = k, where k is any non-zero complex number.
∴ X = [ k ]
      [ 0 ], where k is any non-zero complex number.
Similarly, to find the characteristic vectors corresponding to the
characteristic value i, we consider the equation
(A − iI)X = O,
i.e., [ 1−i  0 ] [ x1 ] = [ 0 ]
      [ 0    0 ] [ x2 ]   [ 0 ],
i.e., [ (1−i)x1 ] = [ 0 ]
      [ 0       ]   [ 0 ].
Thus x1 = 0, x2 = c, where c is any non-zero complex number.
∴ X = [ 0 ]
      [ c ], where c ≠ 0.

(c) Let A = [ 1 1 ]
            [ 0 i ].
We have A − xI = [ 1−x  1   ]
                 [ 0    i−x ].
∴ the characteristic equation of A is det (A − xI) = 0,
i.e., | 1−x  1   |
      | 0    i−x | = 0,
i.e., (1 − x)(i − x) = 0.
The roots of this equation are x = 1, i.
∴ 1 and i are the two characteristic values of A.
Let X = [ x1 ]
        [ x2 ]
be the coordinate matrix of a characteristic vector corresponding
to the characteristic value x = 1. Then X will be given by a
non-zero solution of the equation
(A − I)X = O,
i.e., [ 0  1   ] [ x1 ] = [ 0 ]
      [ 0  i−1 ] [ x2 ]   [ 0 ],
i.e., [ x2       ] = [ 0 ]
      [ (i−1)x2 ]   [ 0 ].
Thus x2 = 0, x1 = k, where k ≠ 0.
∴ X = [ k ]
      [ 0 ], where k ≠ 0.
To find the characteristic vectors corresponding to the characteristic
value i, we consider the equation
(A − iI)X = O,
i.e., [ 1−i  1 ] [ x1 ] = [ 0 ]
      [ 0    0 ] [ x2 ]   [ 0 ],
i.e., (1 − i)x1 + x2 = 0.
Let x1 = c. Then x2 = (i − 1)c.
∴ X = [   c    ]
      [ (i−1)c ], where c ≠ 0.
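A numerical check of part (c) (a NumPy sketch, not part of the text):

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1j]])
values = np.linalg.eigvals(A)
assert np.allclose(sorted(values, key=lambda z: z.imag), [1, 1j])

# (k, 0) and (c, (i-1)c) are characteristic vectors for 1 and i.
assert np.allclose(A @ np.array([1, 0]), 1 * np.array([1, 0]))
X = np.array([1, 1j - 1])
assert np.allclose(A @ X, 1j * X)
```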
Example 12. Find all (complex) characteristic values and
characteristic vectors of the following matrices:

(a) [ 1 1 1 ]    (b) [ 1 1 1 ]
    [ 1 1 1 ]        [ 0 1 1 ]    (Meerut 1968)
    [ 1 1 1 ]        [ 0 0 1 ]

Solution. (a) Let A = [ 1 1 1 ]
                      [ 1 1 1 ]
                      [ 1 1 1 ].
We have A − xI = [ 1−x  1    1   ]
                 [ 1    1−x  1   ]
                 [ 1    1    1−x ].
∴ the characteristic polynomial of A
= det (A − xI) = | 1−x  1    1   |
                 | 1    1−x  1   |
                 | 1    1    1−x |
= | 3−x  1    1   |
  | 3−x  1−x  1   |    C1 → C1 + C2 + C3
  | 3−x  1    1−x |
= (3−x) | 1  1    1   |
        | 1  1−x  1   |
        | 1  1    1−x |
= (3−x) | 1  1   1  |
        | 0  −x  0  |    R2 → R2 − R1, R3 → R3 − R1
        | 0  0   −x |
= (3−x)x².
∴ the characteristic equation of A is
(3 − x)x² = 0.
The only roots of this equation are x = 3, 0.
∴ 0 and 3 are the only characteristic values of A.
Let X = [ x1 ]
        [ x2 ]
        [ x3 ]
be the coordinate matrix of a characteristic vector corresponding
to the characteristic value x = 0. Then X will be given by a
non-zero solution of the equation
(A − 0I)X = O,
i.e., [ 1 1 1 ] [ x1 ]   [ 0 ]
      [ 1 1 1 ] [ x2 ] = [ 0 ]
      [ 1 1 1 ] [ x3 ]   [ 0 ],
i.e., [ x1+x2+x3 ]   [ 0 ]
      [ x1+x2+x3 ] = [ 0 ]
      [ x1+x2+x3 ]   [ 0 ],
i.e., x1 + x2 + x3 = 0.
This equation has two linearly independent solutions, e.g.,
X1 = [  1 ]         X2 = [  0 ]
     [  0 ]    and       [  1 ].
     [ −1 ]              [ −1 ]
Every non-zero multiple of these column matrices X1 and X2
is a characteristic vector of A corresponding to the characteristic
value 0.
The characteristic space of this characteristic value will be the
subspace W spanned by these two vectors X1 and X2. Any non-zero
vector in W will be a characteristic vector corresponding to
this characteristic value.
To find the characteristic vectors corresponding to the characteristic
value 3, we consider the equation
(A − 3I)X = O,
i.e., [ −2  1   1 ] [ x1 ]   [ 0 ]
      [  1 −2   1 ] [ x2 ] = [ 0 ]
      [  1  1  −2 ] [ x3 ]   [ 0 ],
i.e., −2x1 + x2 + x3 = 0,
x1 − 2x2 + x3 = 0,
x1 + x2 − 2x3 = 0.
Solving these equations, we get
x1 = x2 = x3 = k.
∴ X = [ k ]
      [ k ]
      [ k ], where k ≠ 0.

(b) Let A = [ 1 1 1 ]
            [ 0 1 1 ]
            [ 0 0 1 ].
The characteristic equation of A is
(1 − x)³ = 0.
∴ x = 1 is the only characteristic value of A.
Let X = [ x1 ]
        [ x2 ]
        [ x3 ]
be the coordinate matrix of a characteristic vector corresponding
to the characteristic value 1. Then X will be given by a non-zero
solution of the equation
(A − I)X = O,
i.e., [ 0 1 1 ] [ x1 ]   [ 0 ]
      [ 0 0 1 ] [ x2 ] = [ 0 ]
      [ 0 0 0 ] [ x3 ]   [ 0 ],
i.e., [ x2+x3 ]   [ 0 ]
      [ x3    ] = [ 0 ]
      [ 0     ]   [ 0 ],
i.e., x2 + x3 = 0, x3 = 0.
∴ x1 = k, x2 = 0, x3 = 0.
Thus X = [ k ]
         [ 0 ]
         [ 0 ], where k ≠ 0.
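A numerical check of part (a) (a NumPy sketch, not part of the text):

```python
import numpy as np

A = np.ones((3, 3))
values = np.linalg.eigvals(A)
assert np.allclose(sorted(values.real), [0.0, 0.0, 3.0])

# Any solution of x1 + x2 + x3 = 0 is a characteristic vector for 0,
# and (k, k, k) is a characteristic vector for 3.
assert np.allclose(A @ np.array([1.0, 0.0, -1.0]), 0)
v = np.array([1.0, 1.0, 1.0])
assert np.allclose(A @ v, 3 * v)
```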
270 Linear Algebra

Example 13. Let T be a linear operator on the n-dimensional
vector space V, and suppose that T has n distinct characteristic
values. Prove that T is diagonalizable. (Meerut 1989)

Solution. We know that distinct characteristic vectors of a
linear operator T corresponding to distinct characteristic values
of T are linearly independent. [See theorem 4 of § 24.]

Since T has n distinct characteristic values, T has n linearly
independent characteristic vectors. These n linearly independent
vectors form a basis for V because dim V = n. Thus we get a basis
for V each vector of which is a characteristic vector of T. Hence
T is diagonalizable.
Example 14. If an n×n matrix A has n distinct eigenvalues, then
A is diagonalizable.

Solution. Let k1, k2, ..., kn be the n distinct eigenvalues of A
and let Xi be an eigenvector of A corresponding to the eigenvalue
ki, i = 1, 2, ..., n. Then A Xi = ki Xi.

We shall prove that the eigenvectors X1, ..., Xn are linearly
independent.

If X1, X2, ..., Xn are linearly dependent, we can choose r so
that 1 ≤ r < n and X1, X2, ..., Xr are linearly independent but
X1, X2, ..., Xr, X_{r+1} are linearly dependent. Hence we can
choose scalars c1, c2, ..., c_{r+1}, not all zero, such that

    c1 X1 + c2 X2 + ... + c_{r+1} X_{r+1} = O.                 ...(1)

Multiplying (1) on the left by A, we get

    c1 A X1 + c2 A X2 + ... + c_{r+1} A X_{r+1} = O

or  c1 k1 X1 + c2 k2 X2 + ... + c_{r+1} k_{r+1} X_{r+1} = O,   ...(2)

since A Xi = ki Xi.

Now multiplying (1) by the scalar k_{r+1} and subtracting from
(2), we get

    c1 (k1 - k_{r+1}) X1 + c2 (k2 - k_{r+1}) X2 + ...
                         + cr (kr - k_{r+1}) Xr = O.           ...(3)

But X1, X2, ..., Xr are linearly independent according to our
assumption, and k1, k2, ..., k_{r+1} are all distinct. Therefore
from (3), we get

    c1 = 0, c2 = 0, ..., cr = 0.

Putting c1 = 0, ..., cr = 0 in (1), we get

    c_{r+1} X_{r+1} = O
    => c_{r+1} = 0,  since X_{r+1} ≠ O.

Thus the relation (1) implies that

    c1 = 0, c2 = 0, ..., cr = 0, c_{r+1} = 0.

But this contradicts our assumption that the scalars
c1, ..., c_{r+1} are not all zero. Hence our initial assumption is
wrong and the vectors X1, ..., Xn are linearly independent. Since
the matrix A has n linearly independent eigenvectors, it is
diagonalizable.
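Example 14 can be illustrated numerically. The matrix below is a hypothetical choice, used only because its eigenvalues 2, 3, 5 are distinct; numpy is assumed available.

```python
import numpy as np

# A hypothetical 3x3 matrix with three distinct eigenvalues: its
# eigenvectors are linearly independent, so the eigenvector matrix P
# is invertible and P^{-1} A P is diagonal.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

values, P = np.linalg.eig(A)          # columns of P are eigenvectors
D = np.linalg.inv(P) @ A @ P          # should equal diag(values)

assert abs(np.linalg.det(P)) > 1e-9   # the eigenvectors are independent
assert np.allclose(D, np.diag(values))
```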
Example 15. Let T be the linear operator on R^3 which is
represented in the standard basis by the matrix

    [ -9  4  4]
    [ -8  3  4].
    [-16  8  7]

Prove that T is diagonalizable. (Meerut 1984, 87, 90)

Solution. Let

        [ -9  4  4]
    A = [ -8  3  4].
        [-16  8  7]

The characteristic equation of A is

    |-9-x    4     4 |
    | -8    3-x    4 | = 0
    |-16     8    7-x|

       |-1-x    4     4 |
or     |-1-x   3-x    4 | = 0,   applying C1 + C2 + C3
       |-1-x    8    7-x|

                |1    4     4 |
or     -(1 + x) |1   3-x    4 | = 0
                |1    8    7-x|

                |1    4     4 |
or     -(1 + x) |0   -1-x   0 | = 0,   applying R2 - R1, R3 - R1
                |0    4    3-x|

or     (1 + x)(1 + x)(3 - x) = 0.

The roots of this equation are -1, -1, 3. Thus the eigenvalues of
the matrix A are -1, -1, 3.

The characteristic vectors X of A corresponding to the
eigenvalue -1 are given by the equation

    (A + I) X = O

       [ -8  4  4] [x1]   [0]
or     [ -8  4  4] [x2] = [0].
       [-16  8  8] [x3]   [0]

These equations are equivalent to the equations

    [-8  4  4] [x1]   [0]
    [ 0  0  0] [x2] = [0],   applying R2 - R1, R3 - 2R1.
    [ 0  0  0] [x3]   [0]

The matrix of coefficients of these equations has rank 1.
Therefore these equations have two linearly independent solutions.
We see that these equations reduce to the single equation

    -2x1 + x2 + x3 = 0.

Obviously

         [1]            [ 0]
    X1 = [1]  and  X2 = [ 1]
         [1]            [-1]

are two linearly independent solutions of this equation. Therefore
X1 and X2 are two linearly independent eigenvectors of A
corresponding to the eigenvalue -1.

Now the eigenvectors of A corresponding to the eigenvalue 3 are
given by

    (A - 3I) X = O

         [-12  4  4] [x1]   [0]
    i.e. [ -8  0  4] [x2] = [0].
         [-16  8  4] [x3]   [0]

These equations are equivalent to the equations

    [-12  4  4] [x1]   [0]
    [  4 -4  0] [x2] = [0],   applying R2 - R1, R3 - R1.
    [ -4  4  0] [x3]   [0]

The matrix of coefficients of these equations has rank 2.
Therefore these equations have a non-zero solution; in fact they
have 3 - 2 = 1 linearly independent solution. These equations can
be written as

    -12x1 + 4x2 + 4x3 = 0,
      4x1 - 4x2       = 0,
     -4x1 + 4x2       = 0.

From these, we get x1 = x2 = 1, say. Then x3 = 2. Therefore

         [1]
    X3 = [1]
         [2]

is an eigenvector of A corresponding to the eigenvalue 3.

Now let

        [1   0  1]
    P = [1   1  1].
        [1  -1  2]

We have det P = 1 ≠ 0. Therefore the matrix P is invertible and
the columns of P are linearly independent vectors belonging to R^3.
Since the matrix A has three linearly independent eigenvectors in
R^3, it is diagonalizable, i.e., the linear operator T is
diagonalizable. Also the diagonal form D of A is given by

                       [-1   0   0]
    D = P^(-1) A P  =  [ 0  -1   0].
                       [ 0   0   3]
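The diagonalization worked out in Example 15 can be verified numerically. This sketch assumes numpy is available; P is the matrix of eigenvectors found above.

```python
import numpy as np

# The matrix of Example 15 and the eigenvector matrix P found above.
A = np.array([[-9.0, 4.0, 4.0],
              [-8.0, 3.0, 4.0],
              [-16.0, 8.0, 7.0]])
P = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, -1.0, 2.0]])

# P^{-1} A P should be the diagonal matrix diag(-1, -1, 3).
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([-1.0, -1.0, 3.0]))
```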
Example 16. Is the matrix

        [ 1  1]
    A = [-1  1]

similar over the field R to a diagonal matrix? Is A similar over
the field C to a diagonal matrix?

Solution. The characteristic equation of A is

    |1-x    1 |
    | -1   1-x| = 0

or  (1 - x)^2 + 1 = 0
or  x^2 - 2x + 2 = 0.

The roots of this equation are 1 + i, 1 - i. Since the
characteristic equation of A has no roots in R, the matrix A has no
eigenvalue if we regard it as a matrix over R. Consequently A has
no eigenvector in R^2, and the matrix is not diagonalizable over
the field R.

If we regard A as a matrix over C, then it has two eigenvalues,
namely 1 + i and 1 - i. Since A has two distinct eigenvalues, it
will have two linearly independent eigenvectors. Consequently A is
diagonalizable over C.

The eigenvectors of A corresponding to the eigenvalues 1 + i,
1 - i are given by the systems of equations

    [-i   1] [x1]   [0]          [ i   1] [x1]   [0]
    [-1  -i] [x2] = [0]   and    [-1   i] [x2] = [0]

respectively. From these, we get

         [1]             [ 1]
    X1 = [i]  and  X2 = [-i]

as eigenvectors corresponding to the eigenvalues 1 + i, 1 - i
respectively. If

        [1   1]
    P = [i  -i],

then

                       [1+i    0 ]
    D = P^(-1) A P  =  [ 0    1-i]

gives the diagonal form of A.
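Example 16 can be checked numerically; numpy computes eigenvalues over C, so it finds the roots 1 ± i that do not exist over R. This is an illustrative sketch, assuming numpy is available.

```python
import numpy as np

# The matrix of Example 16: no real eigenvalues, two complex ones.
A = np.array([[1.0, 1.0],
              [-1.0, 1.0]])

values = np.linalg.eigvals(A)   # numpy works over C automatically
assert np.allclose(sorted(values, key=lambda z: z.imag),
                   [1 - 1j, 1 + 1j])
```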
Example 17. Prove that the matrix

        [1  2]
    A = [0  1]

is not diagonalizable over the field C.

Solution. The characteristic equation of A is

    |1-x   2 |
    | 0   1-x| = 0

or  (1 - x)^2 = 0.

The roots of this equation are 1, 1. Therefore the only distinct
eigenvalue of A is 1. The eigenvectors of A corresponding to this
eigenvalue are given by

    [0  2] [x1]   [0]
    [0  0] [x2] = [0]

or  0·x1 + 2x2 = 0.

This equation has only one linearly independent solution. We see
that

        [1]
    X = [0]

is the only linearly independent eigenvector of A. Since A does not
have two linearly independent eigenvectors, it is not
diagonalizable.

Example 18. Show that the matrix

        [1  1]
    A = [0  1]

is not diagonalizable. (Meerut 1992)
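Examples 17 and 18 can be confirmed by comparing multiplicities: the eigenvalue 1 has algebraic multiplicity 2, but its eigenspace is only one-dimensional, so there is no basis of eigenvectors. A sketch assuming numpy is available:

```python
import numpy as np

# The matrix of Example 17. The eigenspace of the eigenvalue 1 is the
# null space of A - I, whose dimension is 2 - rank(A - I).
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
I = np.eye(2)

geometric_multiplicity = 2 - np.linalg.matrix_rank(A - I)
assert geometric_multiplicity == 1   # fewer than 2: not diagonalizable
```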
§ 25. Minimal polynomial and minimal equation of a linear
operator or of a matrix.

Annihilating polynomials. Suppose T is a linear operator on a
finite dimensional vector space over the field F and f(x) is a
polynomial over F. If f(T) = 0, then we say that the polynomial
f(x) annihilates the linear operator T. Similarly suppose A is a
square matrix of order n over the field F and f(x) is a polynomial
over F. If f(A) = O, then we say that the polynomial f(x)
annihilates the matrix A. We know that every linear operator T on
an n-dimensional vector space V(F) satisfies its characteristic
equation. Also the characteristic polynomial of T is a non-zero
polynomial, i.e., a polynomial in which the coefficients of the
various terms are not all zero. Note that if A is the matrix of T
in some ordered basis B, then the characteristic polynomial of T is
| A - xI |, in which the coefficient of x^n is (-1)^n, which is not
zero. Thus we see that at least the characteristic polynomial of T
is a non-zero polynomial which annihilates T. Therefore the set of
those non-zero polynomials which annihilate T is not empty.

Monic polynomial. Definition. A polynomial over a field F is
called a monic polynomial if the coefficient of the highest power
of x in it is unity. Thus x^3 - 2x^2 + 4x + 5 is a monic polynomial
of degree 3 over the field of rational numbers.

Among the non-zero polynomials which annihilate a linear
operator T, the monic polynomial of lowest degree is of special
interest. It is called the minimal polynomial of the linear
operator T.

Minimal polynomial of a linear operator. Definition.
[Meerut 1976, 80; Nagarjuna 74]

Suppose T is a linear operator on an n-dimensional vector space
V(F). The monic polynomial of lowest degree over the field F that
annihilates T is called the minimal polynomial of T. Also if f(x)
is the minimal polynomial of T, the equation f(x) = 0 is called the
minimal equation of the linear operator T.

Similarly we can define the minimal polynomial of a matrix.
Suppose A is a square matrix of order n over the field F. The monic
polynomial of lowest degree over the field F that annihilates A is
called the minimal polynomial of A.

Now suppose T is a linear operator on an n-dimensional vector
space V(F) and A is the matrix of T in some ordered basis B. If
f(x) is any polynomial over F, then [f(T)]_B = f(A). Therefore
f(T) = 0 if and only if f(A) = O. Thus f(x) annihilates T iff it
annihilates A. Therefore if f(x) is the polynomial of lowest degree
that annihilates T, then it is also the polynomial of lowest degree
that annihilates A, and conversely. Hence T and A have the same
minimal polynomial. Further, the characteristic polynomial of the
matrix A is of degree n. Since the characteristic polynomial of A
annihilates A, the minimal polynomial of A cannot be of degree
greater than n. Its degree must be less than or equal to n.
Theorem 1. The minimal polynomial of a matrix or of a linear
operator is unique.

Proof. Suppose the minimal polynomial of a matrix A is of
degree r. Then no non-zero polynomial of degree less than r can
annihilate A. Let

    f(x) = x^r + a1 x^(r-1) + a2 x^(r-2) + ... + a_{r-1} x + ar
and g(x) = x^r + b1 x^(r-1) + b2 x^(r-2) + ... + b_{r-1} x + br

be two minimal polynomials of A. Then both f(x) and g(x)
annihilate A. Therefore we have f(A) = O and g(A) = O. These give

    A^r + a1 A^(r-1) + ... + a_{r-1} A + ar I = O,    ...(1)
and A^r + b1 A^(r-1) + ... + b_{r-1} A + br I = O.    ...(2)

Subtracting (1) from (2), we get

    (b1 - a1) A^(r-1) + ... + (b_{r-1} - a_{r-1}) A + (br - ar) I = O.  ...(3)

From (3), we see that the polynomial
(b1 - a1) x^(r-1) + ... + (br - ar) also annihilates A. Since its
degree is less than r, it must be the zero polynomial. This gives
b1 - a1 = 0, b2 - a2 = 0, ..., br - ar = 0. Thus a1 = b1, ...,
ar = br. Therefore f(x) = g(x), and thus the minimal polynomial of
A is unique.

Theorem 2. The minimal polynomial of a matrix (linear operator)
is a divisor of every polynomial that annihilates the matrix
(linear operator).

Proof. Suppose m(x) is the minimal polynomial of a matrix A.
Let h(x) be any polynomial that annihilates A. Since m(x) and h(x)
are two polynomials, by the division algorithm there exist two
polynomials q(x) and r(x) such that

    h(x) = m(x) q(x) + r(x),    ...(1)

where either r(x) is the zero polynomial or its degree is less than
the degree of m(x). Putting x = A on both sides of (1), we get

    h(A) = m(A) q(A) + r(A)
    => O = O q(A) + r(A)   [since both m(x) and h(x) annihilate A]
    => r(A) = O.

Thus r(x) is a polynomial which also annihilates A. If r(x) ≠ 0,
then it is a non-zero polynomial of degree smaller than the degree
of the minimal polynomial m(x), and we arrive at a contradiction of
the fact that m(x) is the minimal polynomial of A. Therefore r(x)
must be the zero polynomial. Then (1) gives

    h(x) = m(x) q(x)  =>  m(x) is a divisor of h(x).
Proof. Suppose f(x) is the characteristic polynomial of a
matrix A. Then f(A) = O by the Cayley-Hamilton theorem. Thus f(x)
annihilates A. If m(x) is the minimal polynomial of A, then by the
above theorem m(x) must be a divisor of f(x).
Theorem 3. Let T be a linear operator on an n-dimensional
vector space V [or, let A be an n×n matrix]. The characteristic
and minimal polynomials for T [for A] have the same roots, except
for multiplicities. (Meerut 1984; G.N.D.U. Amritsar 90)

Proof. Suppose f(x) is the characteristic polynomial of a
linear operator T and m(x) is its minimal polynomial. First we
shall prove that every root of the equation m(x) = 0 is also a root
of the equation f(x) = 0. We know that the minimal polynomial is a
divisor of the characteristic polynomial. Therefore m(x) is a
divisor of f(x), so there exists a polynomial q(x) such that

    f(x) = m(x) q(x).    ...(1)

Suppose c is a root of the equation m(x) = 0. Then m(c) = 0.
Putting x = c on both sides of (1), we get
f(c) = m(c) q(c) = 0 · q(c) = 0. Therefore c is also a root of
f(x) = 0. Thus c is a characteristic root of the linear operator T.

Conversely, suppose that c is a characteristic value of T. Then
there exists a non-zero vector α such that Tα = cα. Since m(x) is a
polynomial, we have

    [m(T)](α) = m(c) α.    [See Ex. 7, page 263]

But m(x) is the minimal polynomial for T, so m(x) annihilates T,
i.e., m(T) is the zero operator. Therefore

    0 = m(c) α   [since the zero operator sends α to the zero vector]
    => m(c) = 0.   [since α ≠ 0]

Thus c is a root of the minimal equation of T.

Hence every root of the minimal equation of T is also a root of
its characteristic equation, and every root of the characteristic
equation of T is also a root of its minimal equation.
Theorem 4. Let T be a diagonalizable linear operator and let
c1, ..., ck be the distinct characteristic values of T. Then the
minimal polynomial for T is the polynomial

    p(x) = (x - c1)...(x - ck).    (Meerut 1980, 84P)

Proof. We know that each characteristic value of T is a root of
the minimal polynomial for T. Therefore each of the scalars
c1, ..., ck is a root of the minimal polynomial for T, and so each
of the polynomials x - c1, ..., x - ck is a factor of the minimal
polynomial for T. Therefore the polynomial
p(x) = (x - c1)...(x - ck) will be the minimal polynomial for T
provided it annihilates T, i.e., provided p(T) = 0.

Let α be a characteristic vector of T. Then one of the
operators T - c1 I, ..., T - ck I sends α into 0. Therefore

    (T - c1 I)...(T - ck I) α = 0

for every characteristic vector α, i.e., p(T) α = 0 for every
characteristic vector α.

Now T is a diagonalizable operator. Let V be the underlying
vector space. Then there exists a basis B for V which consists of
characteristic vectors. If β is any vector in V, then β can be
expressed as a linear combination of the vectors in the basis B.
But we have just shown that p(T) α = 0 for every characteristic
vector α. Therefore we have

    p(T) β = 0 for every β in V
    => p(T) = 0.

Thus p(x) annihilates T, and so p(x) is the minimal polynomial
for T.

Thus we have proved that if T is a diagonalizable linear
operator, the minimal polynomial for T is a product of distinct
linear factors.

Corollary. If the roots of the characteristic equation of a
linear operator T are all distinct, say c1, c2, ..., cn, then the
minimal polynomial for T is the polynomial

    p(x) = (x - c1)(x - c2)...(x - cn).

Proof. Since the roots of the characteristic equation of T are
all distinct, T is diagonalizable. Hence by the above theorem, the
minimal polynomial for T is the polynomial
(x - c1)(x - c2)...(x - cn).

Solved Examples

Example 1. Let V be a finite-dimensional vector space. What is
the minimal polynomial for the identity operator on V? What is the
minimal polynomial for the zero operator? (Meerut 1981)

Solution. We have I - 1I = I - I = 0, the zero operator.
Therefore the monic polynomial x - 1 annihilates the identity
operator I, and it is the monic polynomial of lowest degree that
annihilates I (the only monic polynomial of degree zero is the
constant 1, which annihilates no operator). Hence x - 1 is the
minimal polynomial for I.

Again, the monic polynomial x annihilates the zero operator, and
it is the monic polynomial of lowest degree that annihilates it.
Hence x is the minimal polynomial for the zero operator.
Example 2. Let V be an n-dimensional vector space and let T be
a linear operator on V. Suppose that there exists some positive
integer k such that T^k = 0. Prove that T^n = 0.

Solution. Since T^k = 0, the polynomial x^k annihilates T. So
the minimal polynomial for T is a divisor of x^k; let it be x^r.
Since the degree of the minimal polynomial is at most n, we have
r ≤ n, and T^r = 0. Now T^n = T^(n-r) T^r = 0.
Example 3. Find the minimal polynomial for the real matrix

        [ 7   4  -1]
    A = [ 4   7  -1].
        [-4  -4   4]

Solution. We have

               |7-x   4   -1 |
    |A - xI| = | 4   7-x  -1 |
               |-4   -4   4-x|

               |7-x    4    -1 |
             = | 4    7-x   -1 |,   by R3 + R2
               | 0    3-x   3-x|

                       |7-x   4   -1|
             = (3 - x) | 4   7-x  -1|
                       | 0    1    1|

                       |7-x    4    -5 |
             = (3 - x) | 4    7-x   x-8|,   by C3 - C2
                       | 0     1     0 |

                        |7-x   -5 |
             = -(3 - x) | 4    x-8|,   expanding along the third row

             = -(3 - x) [(7 - x)(x - 8) + 20]
             = -(3 - x) (-x^2 + 15x - 36)
             = -(3 - x)^2 (x - 12).

Therefore the roots of the equation |A - xI| = 0 are x = 3, 3, 12.
These are the characteristic roots of A.
Let us now find the minimal polynomial of A. We know that each
characteristic root of A is also a root of its minimal polynomial.
So if m(x) is the minimal polynomial for A, then both x - 3 and
x - 12 are factors of m(x). Let us try whether the polynomial
h(x) = (x - 3)(x - 12) = x^2 - 15x + 36 annihilates A or not.

We have

          [ 69   60  -15]
    A^2 = [ 60   69  -15].
          [-60  -60   24]

Therefore

    A^2 - 15A + 36I

      [ 69   60  -15]        [ 7   4  -1]   [36   0   0]
    = [ 60   69  -15] - 15 · [ 4   7  -1] + [ 0  36   0]
      [-60  -60   24]        [-4  -4   4]   [ 0   0  36]

      [105   60  -15]   [105   60  -15]
    = [ 60  105  -15] - [ 60  105  -15] = O.
      [-60  -60   60]   [-60  -60   60]

Therefore h(x) annihilates A. Thus h(x) is the monic polynomial
of lowest degree which annihilates A. Hence
h(x) = x^2 - 15x + 36 is the minimal polynomial for A.

Note. In order to find the minimal polynomial of a matrix A, we
should not forget that each characteristic root of A must also be a
root of the minimal polynomial. We should try to find the monic
polynomial of lowest degree which annihilates A and which also has
the characteristic roots of A as its roots.
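The computation of Example 3 can be replayed numerically (assuming numpy is available): h(x) = x^2 - 15x + 36 annihilates A, while neither linear factor alone does.

```python
import numpy as np

# The matrix of Example 3 and its minimal polynomial (x - 3)(x - 12).
A = np.array([[7.0, 4.0, -1.0],
              [4.0, 7.0, -1.0],
              [-4.0, -4.0, 4.0]])
I = np.eye(3)

assert np.allclose(A @ A - 15 * A + 36 * I, 0)   # h(A) = O
assert not np.allclose(A - 3 * I, 0)             # x - 3 alone fails
assert not np.allclose(A - 12 * I, 0)            # x - 12 alone fails
```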
Example 4. Show that similar matrices have the same minimal
polynomial. (Meerut 1976)

Solution. Suppose A and B are two similar matrices. Then there
exists a non-singular matrix P such that B = P^(-1) A P.

Now we are to show that the matrices A and P^(-1) A P have the
same minimal polynomial. First we shall show that a polynomial
f(x) annihilates A if and only if it annihilates P^(-1) A P. We
have

    (P^(-1) A P)^2 = P^(-1) A P P^(-1) A P = P^(-1) A^2 P.

Proceeding in this way we can show that
(P^(-1) A P)^k = P^(-1) A^k P, where k is any positive integer.

Let f(x) = x^r + a1 x^(r-1) + ... + a_{r-1} x + ar.
Then f(A) = A^r + a1 A^(r-1) + ... + a_{r-1} A + ar I.

Also f(P^(-1) A P)
    = (P^(-1) A P)^r + a1 (P^(-1) A P)^(r-1) + ...
                     + a_{r-1} (P^(-1) A P) + ar I
    = P^(-1) A^r P + a1 P^(-1) A^(r-1) P + ...
                   + a_{r-1} P^(-1) A P + ar P^(-1) I P
    = P^(-1) (A^r + a1 A^(r-1) + ... + a_{r-1} A + ar I) P
    = P^(-1) f(A) P.

Since P is non-singular, P^(-1) f(A) P = O if and only if
f(A) = O. Thus f(x) annihilates A if and only if it annihilates
P^(-1) A P. Therefore f(x) is the monic polynomial of lowest degree
that annihilates A if and only if it is the monic polynomial of
lowest degree that annihilates P^(-1) A P. Hence A and P^(-1) A P
have the same minimal polynomial.
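Example 4 can be illustrated numerically with a hypothetical invertible P (numpy assumed available): the similar matrix P^{-1} A P is annihilated by exactly the same minimal polynomial as A.

```python
import numpy as np

# A is the matrix of Example 3, whose minimal polynomial is
# x^2 - 15x + 36; P is an arbitrary invertible matrix (det P = 3).
A = np.array([[7.0, 4.0, -1.0],
              [4.0, 7.0, -1.0],
              [-4.0, -4.0, 4.0]])
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
B = np.linalg.inv(P) @ A @ P

# The same polynomial annihilates both A and the similar matrix B,
# since f(P^{-1} A P) = P^{-1} f(A) P.
assert np.allclose(A @ A - 15 * A + 36 * np.eye(3), 0)
assert np.allclose(B @ B - 15 * B + 36 * np.eye(3), 0)
```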
Exercises

1. Find the characteristic roots of the matrix

    [5  6  8]
    [0  7  2].    (Meerut 1976)
    [0  0  4]

2. Write the characteristic polynomial and the minimal polynomial
   of the matrix

    [4  3  0]
    [2  1  0].
    [5  7  9]

3. State whether the following statement is true or false:
   the matrices

    [1  0  0]       [1  1  1]
    [0  2  0]  and  [0  2  1]
    [0  0  3]       [0  0  3]

   have the same characteristic roots. (Meerut 1977)

4. Show that the characteristic equation of the complex matrix

        [0  0  c]
    A = [1  0  b]
        [0  1  a]

   is x^3 - ax^2 - bx - c = 0. (Meerut 1980)

5. Show that the minimal polynomial of the real matrix

    [ 0  1]
    [-1  0]

   is x^2 + 1.

6. Show that the minimal polynomial of the real matrix

    [ 5  -6  -6]
    [-1   4   2]
    [ 3  -6  -4]

   is (x - 1)(x - 2).
7. Find all (complex) eigenvalues and eigenvectors of the
   following matrices.

   (a) [2  2]      (b) [ 2  -6]      (c) [3  -2]
       [1  3]          [-6   0]          [2   1]

   (d) [3  2  4]
       [2  0  2].
       [4  2  3]
8. What are the eigenvalues and eigenvectors of the identity
   matrix?

9. For each of the following matrices over the field C, find the
   diagonal form and a diagonalizing matrix P.

   (a) [ 20   18]      (b) [ 3  4]
       [-27  -25]          [-4  3]

   (c) [ 4  2  -2]      (d) [-17  18  -6]
       [-5  3   2]          [-18  19  -6]
       [-2  4   1]          [ -9   9  -2]

10. Show that distinct eigenvectors of a matrix A corresponding
    to distinct eigenvalues of A are linearly independent.

11. Let T be the linear operator on R^3 which is represented in
    the standard ordered basis by the matrix

        [ 5  -6  -6]
    A = [-1   4   2].
        [ 3  -6  -4]

    Find the characteristic values of A and prove that T is
    diagonalizable. (Meerut 1985)

12. Is the matrix

        [3  1  -1]
    A = [2  2  -1]
        [2  2   0]

    similar over the field R to a diagonal matrix? Is A similar
    over the field C to a diagonal matrix? (Meerut 1983)

13. Is the matrix

        [ 6  -3  -2]
    A = [ 4  -1  -2]
        [10  -5  -3]

    similar over the field R to a diagonal matrix? Is A similar
    over the field C to a diagonal matrix?
    (G.N.D.U. Amritsar 1990)

14. Find the characteristic equation of the matrix

        [ 2  -1   1]
    A = [-1   2  -1]
        [ 1  -1   2]

    and verify that it is satisfied by A. (Meerut 1988)

Answers

1. 5, 7, 4.
2. Characteristic polynomial is (9 - x)(x^2 - 5x - 2); minimal
   polynomial is (x - 9)(x^2 - 5x - 2).
3. True.
7. (a) 4, 1; [1, 1], [2, -1].
   (b) 1 ± √37; [6, 1 - √37], [6, 1 + √37].
   (c) 2 ± √3 i; [2, 1 - √3 i], [2, 1 + √3 i].
   (d) 8, -1, -1; linearly independent eigenvectors are
       [2, 1, 2], [0, 2, -1], [1, 0, -1].
8. All eigenvalues are 1. Every non-zero vector is an eigenvector.
9. (a) D = [2   0]     P = [ 1   2]
           [0  -7],        [-1  -3].

   (b) D = [3+4i    0 ]     P = [1   1]
           [ 0    3-4i],        [i  -i].

       [1  0  0]       [2  1  0]
   (c) [0  2  0] = D,  [1  1  1] = P.
       [0  0  5]       [4  2  1]

       [-2  0  0]       [2  1  -1]
   (d) [ 0  1  0] = D,  [2  1   0] = P.
       [ 0  0  1]       [1  0   3]

11. 1, 2, 2.
12. The roots of the characteristic equation of A are 1, 2, 2. A
    is not similar over the field R to a diagonal matrix, because
    A has only two linearly independent eigenvectors belonging to
    R^3. A is also not similar over the field C to a diagonal
    matrix.
13. The roots of the characteristic equation of A are 2, i, -i.
    A is not diagonalizable over R. But A is diagonalizable over
    C, because in this case A has 3 distinct eigenvalues.
3
Inner Product Spaces

Throughout this chapter we shall deal only with real or complex
vector spaces. Thus if V is a vector space over the field F, then F
will not be an arbitrary field: in this chapter F will be either
the field R of real numbers or the field C of complex numbers.

Before defining inner products and inner product spaces, we
shall recall some important properties of complex numbers.

Let z ∈ C, i.e., let z be a complex number. Then z = x + iy,
where x, y ∈ R and i = √(-1). Here x is called the real part of z
and y is called the imaginary part of z. We write x = Re z and
y = Im z. The modulus of the complex number z = x + iy is the
non-negative real number √(x^2 + y^2) and is denoted by |z|. Also
if z = x + iy is a complex number, then the complex number
z̄ = x - iy is called the conjugate complex of z. If z = z̄, then
x + iy = x - iy and therefore y = 0. Thus z = z̄ implies that z is
real. Obviously we have

    (i)   z + z̄ = 2x = 2 Re z;
    (ii)  z - z̄ = 2iy = 2i Im z;
    (iii) z z̄ = x^2 + y^2 = |z|^2;
    (iv)  |z| = 0 ⇔ x = 0, y = 0, i.e., |z| = 0 ⇔ z = 0;
    (v)   the conjugate of z̄ is z;
    (vi)  |z̄| = |z|;
    (vii) |z| ≥ Re z.

If z1 and z2 are two complex numbers, then

    (i)   |z1 + z2| ≤ |z1| + |z2|;
    (ii)  the conjugate of z1 + z2 is z̄1 + z̄2;
    (iii) the conjugate of z1 z2 is z̄1 z̄2;
    (iv)  the conjugate of z1 - z2 is z̄1 - z̄2.
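These identities can be spot-checked with Python's built-in complex type; the value z = 3 + 4i below is an arbitrary choice.

```python
# Check z + z̄ = 2 Re z, z z̄ = |z|^2, and |z| >= Re z for one z.
z = 3 + 4j

assert z + z.conjugate() == 2 * z.real        # (i):  6 = 2 * 3
assert z * z.conjugate() == abs(z) ** 2       # (iii): 25 = 5^2
assert abs(z) >= z.real                       # (vii): 5 >= 3
```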


§ 1. Inner Product Spaces. Definition. (Meerut 1982, 83;
Allahabad 75; Madras 81; Nagarjuna 78; Andhra 81, 85)

Let V(F) be a vector space where F is either the field of real
numbers or the field of complex numbers. An inner product on V is
a function from V × V into F which assigns to each ordered pair of
vectors α, β in V a scalar (α, β) in such a way that

    (1) (α, β) is the conjugate complex of (β, α);
    (2) (aα + bβ, γ) = a (α, γ) + b (β, γ);
    (3) (α, α) ≥ 0, and (α, α) = 0 => α = 0,

for any α, β, γ ∈ V and a, b ∈ F.

Also the vector space V is then said to be an inner product
space with respect to the specified inner product defined on it.

It should be noted that in the above definition (α, β) does not
denote the ordered pair of the vectors α and β. It denotes the
inner product of the vectors α and β: it is the element of F which
the function (named the inner product) assigns to the ordered pair
of vectors α and β. Sometimes the inner product of the ordered
pair of vectors α, β is also written as (α | β). If F = R, then
(α, β) is a real number, and if F = C, then (α, β) is a complex
number.

If F is the field of real numbers, then the complex conjugate
appearing in (1) is superfluous, and (1) should be read as
(α, β) = (β, α). If F is the field of complex numbers, then from
(1) we have that (α, α) equals its own conjugate, and therefore
(α, α) is real. Thus (α, α) is always real whether F = R or
F = C. Therefore the inequality given in (3) makes sense.

If V(F) is an inner product space, then it is called a
Euclidean space if F is the field of real numbers. Also it is
called a unitary space if F is the field of complex numbers.

Note 1. The property (3) in the definition of inner product is
called non-negativity. The property (2) is called the linearity
property. If F = R, then the property (1) is called symmetry, and
if F = C, then it is called conjugate symmetry.

Note 2. If in an inner product space V(F) the vector α is 0,
then (α, α) = 0. We have

    (0, 0) = (0·0, 0)   [since the scalar 0 times the vector 0 is
                         the vector 0 in V]
           = 0 (0, 0)   [by the linearity property]
           = 0.         [since (0, 0) ∈ F, and 0 times any scalar
                         is 0]
EXAMPLES OF INNER PRODUCT SPACES

Example 1. On V_n(C) there is an inner product which we call
the standard inner product.

If α = (a1, a2, ..., an), β = (b1, b2, ..., bn) ∈ V_n(C), then
we define

    (α, β) = a1 b̄1 + a2 b̄2 + ... + an b̄n = Σ ai b̄i.   ...(1)

Let us see that all the postulates of an inner product hold in (1).

(i) Conjugate symmetry. From the definition of the product
given in (1), we have (β, α) = b1 ā1 + ... + bn ān. Therefore

    conjugate of (β, α)
        = conjugate of (b1 ā1) + ... + conjugate of (bn ān)
        = b̄1 a1 + ... + b̄n an
        = a1 b̄1 + ... + an b̄n   [multiplication in C is commutative]
        = (α, β).

Thus (α, β) is the conjugate of (β, α).

(ii) Linearity. Let γ = (c1, ..., cn) ∈ V_n(C) and let
a, b ∈ C. We have

    aα + bβ = a (a1, ..., an) + b (b1, ..., bn)
            = (a a1 + b b1, ..., a an + b bn).

Therefore

    (aα + bβ, γ)
        = (a a1 + b b1) c̄1 + ... + (a an + b bn) c̄n       [by (1)]
        = (a a1 c̄1 + ... + a an c̄n) + (b b1 c̄1 + ... + b bn c̄n)
        = a (a1 c̄1 + ... + an c̄n) + b (b1 c̄1 + ... + bn c̄n)
        = a (α, γ) + b (β, γ).                             [by (1)]

(iii) Non-negativity. We have

    (α, α) = a1 ā1 + ... + an ān                           [by (1)]
           = |a1|^2 + ... + |an|^2.   ...(2)

Now each ai is a complex number, so |ai|^2 ≥ 0. Thus (2) is a sum
of n non-negative real numbers and therefore it is ≥ 0. Thus
(α, α) ≥ 0. Also

    (α, α) = 0
    => |a1|^2 + ... + |an|^2 = 0
    => each |ai|^2 = 0, and so each ai = 0
    => α = 0.

Hence the product defined in (1) is an inner product on V_n(C),
and with respect to this inner product V_n(C) is an inner product
space.

If α, β are two vectors in V_n(C), then the standard inner
product of α and β is also called the dot product of α and β and is
denoted by α·β. Thus if α = (a1, ..., an), β = (b1, ..., bn), then

    α·β = a1 b̄1 + a2 b̄2 + ... + an b̄n.
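The standard inner product on V_n(C) can be sketched in Python with numpy (assumed available). Note that definition (1) conjugates the second argument; numpy's own `vdot` conjugates its first argument instead, so a small helper is written out directly.

```python
import numpy as np

def inner(alpha, beta):
    # (alpha, beta) = sum of a_i * conjugate(b_i), as in equation (1).
    return np.sum(alpha * np.conj(beta))

# Arbitrary sample vectors in V_2(C).
a = np.array([1 + 1j, 2 - 1j])
b = np.array([3 + 0j, 0 + 1j])

# Conjugate symmetry: (a, b) is the conjugate of (b, a).
assert np.isclose(inner(a, b), np.conj(inner(b, a)))
# Non-negativity: (a, a) is real and positive for a != 0.
assert np.isclose(inner(a, a).imag, 0) and inner(a, a).real > 0
```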
Example 2. On V_n(R) there is an inner product which we call
the standard inner product.

If α = (a1, ..., an), β = (b1, ..., bn) ∈ V_n(R), then we define

    (α, β) = a1 b1 + a2 b2 + ... + an bn.

As shown in Example 1, we can see that this definition satisfies
all the postulates of an inner product.

If α, β are two vectors in V_n(R), then the standard inner
product of α and β is also called the dot product of α and β and is
denoted by α·β. Thus if α = (a1, ..., an), β = (b1, ..., bn), then

    α·β = a1 b1 + a2 b2 + ... + an bn.

Example 3. If α = (a1, a2), β = (b1, b2) ∈ V_2(R), let us
define

    (α, β) = a1 b1 - a2 b1 - a1 b2 + 4 a2 b2.   ...(1)

We shall show that all the postulates of an inner product hold
good in (1).

(i) Symmetry. We have

    (β, α) = b1 a1 - b2 a1 - b1 a2 + 4 b2 a2     [from (1)]
           = a1 b1 - a2 b1 - a1 b2 + 4 a2 b2     [a1, a2, b1, b2 ∈ R]
           = (α, β).

(ii) Linearity. If a, b ∈ R, we have

    aα + bβ = a (a1, a2) + b (b1, b2) = (a a1 + b b1, a a2 + b b2).

Let γ = (c1, c2) ∈ V_2(R). Then

    (aα + bβ, γ) = (a a1 + b b1) c1 - (a a2 + b b2) c1
                   - (a a1 + b b1) c2 + 4 (a a2 + b b2) c2  [from (1)]
        = (a a1 c1 - a a2 c1 - a a1 c2 + 4 a a2 c2)
          + (b b1 c1 - b b2 c1 - b b1 c2 + 4 b b2 c2)
        = a (a1 c1 - a2 c1 - a1 c2 + 4 a2 c2)
          + b (b1 c1 - b2 c1 - b1 c2 + 4 b2 c2)
        = a (α, γ) + b (β, γ).                              [from (1)]

(iii) Non-negativity. We have

    (α, α) = a1 a1 - a2 a1 - a1 a2 + 4 a2 a2
           = a1^2 - 2 a1 a2 + 4 a2^2
           = (a1 - a2)^2 + 3 a2^2.   ...(2)

Now (2) is a sum of two non-negative real numbers, so it is ≥ 0.
Thus (α, α) ≥ 0. Also

    (α, α) = 0
    => (a1 - a2)^2 + 3 a2^2 = 0
    => (a1 - a2)^2 = 0, 3 a2^2 = 0
    => a1 - a2 = 0, a2 = 0
    => a1 = 0, a2 = 0  =>  α = 0.

Hence the product defined in (1) is an inner product on V_2(R).
Also with respect to this inner product V_2(R) is an inner product
space.
Note. There can be defined more than one inner product on a
vector space. For example, on V_2(R) we have the standard inner
product and also the inner product defined in Example 3.
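The inner product of Example 3 can be spot-checked numerically; this is a plain-Python sketch, and the sample vectors are arbitrary.

```python
def inner(a, b):
    # (a, b) = a1 b1 - a2 b1 - a1 b2 + 4 a2 b2, as in equation (1).
    return a[0]*b[0] - a[1]*b[0] - a[0]*b[1] + 4*a[1]*b[1]

alpha = (2, -1)
beta = (1, 3)

assert inner(alpha, beta) == inner(beta, alpha)    # symmetry
# Non-negativity via the completed square (a1 - a2)^2 + 3 a2^2:
assert inner(alpha, alpha) == (alpha[0] - alpha[1])**2 + 3*alpha[1]**2
assert inner(alpha, alpha) > 0
```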

Example 4. Let V(C) be the vector space of all continuous
complex-valued functions on the unit interval 0 ≤ t ≤ 1. If
f(t), g(t) ∈ V, we define

    (f(t), g(t)) = ∫₀¹ f(t) ḡ(t) dt.   ...(1)

We shall show that all the postulates of an inner product hold.

(i) Conjugate symmetry. We have

    (g(t), f(t)) = ∫₀¹ g(t) f̄(t) dt.   [from (1)]

Therefore

    conjugate of (g(t), f(t)) = ∫₀¹ conjugate of [g(t) f̄(t)] dt
        = ∫₀¹ ḡ(t) f(t) dt = ∫₀¹ f(t) ḡ(t) dt = (f(t), g(t)).

(ii) Linearity. Let a, b ∈ C and h(t) ∈ V. Then

    (a f(t) + b g(t), h(t)) = ∫₀¹ [a f(t) + b g(t)] h̄(t) dt
        = a ∫₀¹ f(t) h̄(t) dt + b ∫₀¹ g(t) h̄(t) dt
        = a (f(t), h(t)) + b (g(t), h(t)).

(iii) Non-negativity. We have

    (f(t), f(t)) = ∫₀¹ f(t) f̄(t) dt = ∫₀¹ |f(t)|^2 dt.   ...(2)

Since |f(t)|^2 ≥ 0 for every t lying in the closed interval
[0, 1], the integral (2) is ≥ 0. Thus (f(t), f(t)) ≥ 0. Also

    (f(t), f(t)) = 0
    => ∫₀¹ |f(t)|^2 dt = 0
    => |f(t)|^2 = 0 for every t lying in [0, 1]
                  [since |f(t)|^2 is continuous and non-negative]
    => f(t) = 0 for every t lying in [0, 1]
    => f(t) = 0.

Hence the product defined in (1) is an inner product on V(C).
Example 5. Let V(C) be the vector space of all polynomials in
t with coefficients in C. If f(t), g(t) ∈ V, let us define

    (f(t), g(t)) = ∫₀¹ f(t) ḡ(t) dt.   ...(1)

As in Example 4, we can show that all the postulates of an inner
product are satisfied by (1). Since V is not a finite dimensional
vector space, this example gives us an inner product on a vector
space which is not finite-dimensional.

Important note. In some books the inner product space is
defined as follows:

The vector space V over F is said to be an inner product space
if there is defined for any two vectors α, β ∈ V an element
(α | β) in F such that

    (1) (α | β) is the conjugate complex of (β | α);
    (2) (aα + bβ | γ) = a (α | γ) + b (β | γ);
    (3) (α | α) > 0 if α ≠ 0,

for any α, β, γ ∈ V and a, b ∈ F.

There is a slight difference in the postulate (3). It can be
easily seen that both the definitions are equivalent.
§ 2. Norm or length of a vector in an inner product space.
Consider the vector space V_3(R) with the standard inner product
defined on it. If α = (a1, a2, a3) ∈ V_3(R), we have

    (α, α) = a1^2 + a2^2 + a3^2.

Now we know that in three-dimensional Euclidean space
√(a1^2 + a2^2 + a3^2) is the length of the vector
α = (a1, a2, a3). Taking motivation from this fact, we make the
following definition.

Definition. Let V be an inner product space. If α ∈ V, then
the norm or the length of the vector α, written as ||α||, is
defined as the positive square root of (α, α), i.e.,

    ||α|| = √(α, α).   (Nagarjuna 1978)

Unit vector. Definition. Let V be an inner product space. If
α ∈ V is such that ||α|| = 1, then α is called a unit vector. Thus
in an inner product space a vector is called a unit vector if its
length is 1.

Theorem 1. In an inner product space V(F), prove that

    (i)  (aα - bβ, γ) = a (α, γ) - b (β, γ);
    (ii) (α, aβ + bγ) = ā (α, β) + b̄ (α, γ).

Proof. (i) We have

    (aα - bβ, γ) = (aα + (-b) β, γ)
                 = a (α, γ) + (-b) (β, γ)   [by the linearity property]
                 = a (α, γ) - b (β, γ).

(ii) We have

    (α, aβ + bγ) = conjugate of (aβ + bγ, α)    [by conjugate symmetry]
        = conjugate of [a (β, α) + b (γ, α)]    [by linearity]
        = ā · conjugate of (β, α) + b̄ · conjugate of (γ, α)
        = ā (α, β) + b̄ (α, γ).

Note 1. If F = R, then the result (ii) can be simply read as
(α, aβ + bγ) = a (α, β) + b (α, γ).

Note 2. Similarly it can be proved that

    (α, aβ - bγ) = ā (α, β) - b̄ (α, γ).

Also (α, β + γ) = (α, 1β + 1γ) = 1̄ (α, β) + 1̄ (α, γ)
               = (α, β) + (α, γ),  since the conjugate of 1 is 1.
Theorem 2. In an inner product space V(F), prove that

    (i)  ||α|| ≥ 0, and ||α|| = 0 if and only if α = 0;
    (ii) ||aα|| = |a| ||α||.

Proof. (i) We have

    ||α|| = √(α, α)        [by def. of norm]
    => ||α||^2 = (α, α)
    => ||α||^2 ≥ 0         [since (α, α) ≥ 0]
    => ||α|| ≥ 0.

Also (α, α) = 0 iff α = 0. Therefore ||α||^2 = 0 iff α = 0,
i.e., ||α|| = 0 iff α = 0. Thus in an inner product space,
||α|| > 0 iff α ≠ 0.

(ii) We have

    ||aα||^2 = (aα, aα)    [by def. of norm]
             = a (α, aα)   [by the linearity property]
             = a ā (α, α)  [by Theorem 1]
             = |a|^2 ||α||^2.

Thus ||aα||^2 = |a|^2 ||α||^2. Taking square roots, we get
||aα|| = |a| ||α||.
Note. If α is any non-zero vector of an inner product space V,
then α / ||α|| is a unit vector in V. We have ||α|| ≠ 0 because
α ≠ 0, so 1/||α|| is defined and is a positive real number. Now

    ( α/||α||, α/||α|| ) = (1/||α||)(1/||α||) (α, α)
                         = (1/||α||^2) ||α||^2 = 1.

Therefore || α/||α|| || = 1, and thus α/||α|| is a unit vector.

For example, if α = (2, 1, 2) is a vector in V_3(R) with the
standard inner product, then

    ||α|| = √(α, α) = √(4 + 1 + 4) = 3.

Therefore (1/3)(2, 1, 2) = (2/3, 1/3, 2/3) is a unit vector.
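Normalization can be sketched numerically (numpy assumed available), reproducing the example α = (2, 1, 2):

```python
import numpy as np

# Normalizing a non-zero vector: alpha / ||alpha|| has norm 1.
alpha = np.array([2.0, 1.0, 2.0])
norm = np.sqrt(np.dot(alpha, alpha))   # norm under the standard inner product

unit = alpha / norm
assert np.isclose(norm, 3.0)
assert np.allclose(unit, [2/3, 1/3, 2/3])
assert np.isclose(np.dot(unit, unit), 1.0)   # unit vector
```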
Theorem 3. Schwarz's Inequality. In an inner product space
V(F), prove that
| (α, β) | ≤ || α || · || β ||.
(Meerut 1981, 83, 89, 92, 93P; Madras 83; Andhra 92;
S.V.U. Tirupati 90, 93; Nagarjuna 70)
Proof. If α = 0, then || α || = 0. Also in that case
(α, β) = (0, β) = (0·0, β) = 0 (0, β) = 0.
∴ | (α, β) | = 0.
Thus if α = 0, then | (α, β) | = 0 and || α || · || β || = 0.
∴ the inequality | (α, β) | ≤ || α || · || β || is valid.

Now let α ≠ 0. Then || α || > 0, and therefore 1/|| α ||² is a positive
real number. Write c = (β, α)/|| α ||² and consider the vector
γ = β − cα. We have
(γ, γ) = (β − cα, β − cα)
= (β, β) − c̄ (β, α) − c (α, β) + cc̄ (α, α)
[by the linearity property and theorem 1]
= || β ||² − (α, β)(β, α)/|| α ||² − (β, α)(α, β)/|| α ||² + (β, α)(α, β)/|| α ||²
[∵ c̄ = (α, β)/|| α ||² and (α, α) = || α ||²; the third and
fourth terms cancel]
= || β ||² − | (α, β) |²/|| α ||².   [∵ (β, α)(α, β) = z̄z = | z |² with z = (α, β)]
But (γ, γ) = || γ ||² ≥ 0.
∴ || β ||² − | (α, β) |²/|| α ||² ≥ 0,
or || α ||² · || β ||² ≥ | (α, β) |²,
or | (α, β) | ≤ || α || · || β ||, taking the square root of both sides.
Schwarz's inequality has very important applications in mathe-
matics.
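As a quick numerical check (ours, not from the text), the inequality can be tested under the standard inner product on Cⁿ; the helper names `ip` and `norm` are our own. For linearly dependent vectors the inequality becomes an equality, as Example 10 below proves.

```python
import math

def ip(a, b):
    # standard inner product on C^n: (a, b) = sum a_k * conj(b_k)
    return sum(x * y.conjugate() for x, y in zip(a, b))

def norm(v):
    return math.sqrt(ip(v, v).real)

a = [1 + 2j, 3 - 1j, 0.5j]
b = [2 - 1j, 1 + 1j, 4 + 0j]
lhs = abs(ip(a, b))          # | (a, b) |
rhs = norm(a) * norm(b)      # || a || * || b ||

# equality case: c = 2a is linearly dependent on a
c = [2 * x for x in a]
eq_lhs = abs(ip(a, c))
eq_rhs = norm(a) * norm(c)
```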

Theorem 4. Triangle Inequality. If α, β are vectors in an inner
product space V, prove that
|| α + β || ≤ || α || + || β ||.
(Meerut 1974, 76, 77, 81, 90, 92, 93)
Proof. We have
|| α + β ||² = (α + β, α + β)   [by def. of norm]
= (α, α + β) + (β, α + β)   [by linearity property]
= (α, α) + (α, β) + (β, α) + (β, β)   [by theorem 1]
= || α ||² + (α, β) + (β, α) + || β ||²
= || α ||² + 2 Re (α, β) + || β ||²   [∵ (β, α) = conj (α, β), and z + z̄ = 2 Re z]
≤ || α ||² + 2 | (α, β) | + || β ||²   [∵ Re z ≤ | z |]
≤ || α ||² + 2 || α || · || β || + || β ||²
[∵ by Schwarz's inequality, | (α, β) | ≤ || α || · || β ||]
= (|| α || + || β ||)².
Thus || α + β ||² ≤ (|| α || + || β ||)².
Taking the square root of both sides, we get
|| α + β || ≤ || α || + || β ||.
Geometrical interpretation. Let α, β be vectors in the inner
product space V₃(R) with standard inner product defined on it.
Suppose the vectors α, β represent the sides AB and BC respectively
of a triangle ABC in three dimensional Euclidean space. Then
|| α || = AB, || β || = BC. Also the vector α + β represents the side AC
of the triangle ABC, and || α + β || = AC. Then from the above ine-
quality, we have AC ≤ AB + BC.
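The inequality AC ≤ AB + BC can be checked on concrete vectors; this sketch is ours (the helper names are assumptions), again under the standard inner product on R³.

```python
import math

def norm(v):
    # Euclidean norm on R^3
    return math.sqrt(sum(x * x for x in v))

alpha = [1.0, -2.0, 2.0]   # side AB
beta = [3.0, 0.0, -4.0]    # side BC
ac = norm([x + y for x, y in zip(alpha, beta)])   # || alpha + beta || = AC
ab_plus_bc = norm(alpha) + norm(beta)             # AB + BC
```

Here `norm(alpha)` is 3 and `norm(beta)` is 5, so `ac` cannot exceed 8.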
Special cases of Schwarz's inequality.
Case I. Consider the vector space Vₙ(C) with standard inner
product defined on it.
Let α = (a₁, a₂, ..., aₙ) and β = (b₁, b₂, ..., bₙ) ∈ Vₙ(C).
Then a₁, ..., aₙ, b₁, ..., bₙ are all complex numbers.
We have (α, β) = a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ.
∴ | (α, β) |² = | a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ |².
Also || α ||² = (α, α) = a₁ā₁ + ... + aₙāₙ
= | a₁ |² + ... + | aₙ |².
Similarly || β ||² = | b₁ |² + ... + | bₙ |².
By Schwarz's inequality, we have
| (α, β) |² ≤ || α ||² · || β ||².
∴ if a₁, ..., aₙ, b₁, ..., bₙ are complex numbers, then
| a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ |² ≤ (| a₁ |² + ... + | aₙ |²)(| b₁ |² + ... + | bₙ |²).
This inequality is known as Cauchy's inequality.
If a₁, ..., aₙ, b₁, ..., bₙ are all real numbers, then this ine-
quality gives that
(a₁b₁ + a₂b₂ + ... + aₙbₙ)² ≤ (a₁² + ... + aₙ²)(b₁² + ... + bₙ²).
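Cauchy's inequality in its real form is easy to test directly; the snippet below is our illustration (the vectors chosen are arbitrary).

```python
a = [1.0, -2.0, 3.0, 0.5]
b = [4.0, 1.0, -1.0, 2.0]
# (sum a_k b_k)^2 <= (sum a_k^2)(sum b_k^2)
lhs = sum(x * y for x, y in zip(a, b)) ** 2
rhs = sum(x * x for x in a) * sum(y * y for y in b)
```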
Case II. Consider the vector space V(C) of all continuous,
complex-valued functions on the unit interval 0 ≤ t ≤ 1, with
inner product defined by (f(t), g(t)) = ∫₀¹ f(t) ḡ(t) dt.

We have || f(t) ||² = (f(t), f(t)) = ∫₀¹ f(t) f̄(t) dt = ∫₀¹ | f(t) |² dt.

Similarly || g(t) ||² = ∫₀¹ | g(t) |² dt.

Also | (f(t), g(t)) |² = | ∫₀¹ f(t) ḡ(t) dt |².

By Schwarz's inequality, we have
| (f(t), g(t)) |² ≤ || f(t) ||² · || g(t) ||².
Therefore if f(t), g(t) are continuous complex-valued functions
on the unit interval [0, 1], then
| ∫₀¹ f(t) ḡ(t) dt |² ≤ (∫₀¹ | f(t) |² dt)(∫₀¹ | g(t) |² dt).
Case III. Consider the vector space V₃(R) with standard inner
product defined on it, i.e., if
α = (a₁, a₂, a₃), β = (b₁, b₂, b₃) ∈ V₃(R),
then (α, β) = a₁b₁ + a₂b₂ + a₃b₃.   ...(1)
We see that (1) is nothing but the dot product of two vectors
α and β in three dimensional Euclidean space. If θ is the angle
between the non-zero vectors α and β, then we know that
cos² θ = (a₁b₁ + a₂b₂ + a₃b₃)² / [(a₁² + a₂² + a₃²)(b₁² + b₂² + b₃²)]
= {(α, β)}² / [(α, α)(β, β)]
= | (α, β) |² / (|| α ||² · || β ||²)
[∵ if (α, β) is real, then {(α, β)}² = | (α, β) |²]
But by Schwarz's inequality, we have
| (α, β) |² ≤ || α ||² · || β ||².
∴ cos² θ ≤ (|| α ||² · || β ||²)/(|| α ||² · || β ||²), i.e., cos² θ ≤ 1.
Thus the absolute value of the cosine of a real angle cannot be
greater than 1.

Normed vector space. Definition. Let V(F) be a vector space
where F is either the field of real numbers or the field of complex
numbers. Then V is said to be a normed vector space if to each
vector α there corresponds a real number, denoted by || α || and called
the norm of α, in such a manner that
(1) || α || ≥ 0, and || α || = 0 ⟺ α = 0;
(2) || aα || = | a | · || α ||, ∀ a ∈ F, α ∈ V;
(3) || α + β || ≤ || α || + || β ||, ∀ α, β ∈ V.
We have shown in theorems 2 and 4 that the norm of an inner
product space satisfies all the three conditions of the norm of a
normed vector space. Hence every inner product space is a normed
vector space.
Distance in an inner product space.
Definition. Let V(F) be an inner product space. Then we
define the distance d(α, β) between two vectors α and β by
d(α, β) = || α − β || = √[(α − β, α − β)].
Theorem 5. In an inner product space V(F) we define the dis-
tance d(α, β) from α to β by d(α, β) = || α − β ||. Prove that
(1) d(α, β) ≥ 0, and d(α, β) = 0 iff α = β;
(2) d(α, β) = d(β, α);
(3) d(α, β) ≤ d(α, γ) + d(γ, β);   [triangle inequality]
(4) d(α, β) = d(α + γ, β + γ).   (Meerut 1984)
Proof. (1) We have d(α, β) = || α − β ||.   [by definition]
Now || α − β || ≥ 0, and || α − β || = 0 if and only if α − β = 0.
∴ d(α, β) ≥ 0, and d(α, β) = 0 if and only if α = β.
(2) We have d(α, β) = || α − β ||   [by def.]
= || (−1)(β − α) || = | −1 | · || β − α ||   [∵ || aα || = | a | · || α ||]
= || β − α || = d(β, α).
(3) We have d(α, β) = || α − β || = || (α − γ) + (γ − β) ||
≤ || α − γ || + || γ − β ||   [by theorem 4]
= d(α, γ) + d(γ, β).
∴ d(α, β) ≤ d(α, γ) + d(γ, β).
(4) We have d(α, β) = || α − β || = || (α + γ) − (β + γ) ||
= d(α + γ, β + γ).
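The four properties of theorem 5 can all be exercised on concrete vectors. This check is ours (the names `dist` and `shift` are assumptions), using the Euclidean distance on R².

```python
import math

def dist(u, v):
    # d(u, v) = || u - v || for the standard inner product on R^n
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

a, b, c = [1.0, 2.0], [4.0, 6.0], [0.0, -1.0]
t = [2.0, 2.0]
shift = lambda v: [x + y for x, y in zip(v, t)]   # translation by gamma = t
```

Here `dist(a, b)` is 5, symmetry and the triangle inequality hold, and translating both vectors by `t` leaves the distance unchanged, as in property (4).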
Matrix of an inner product.
Before defining the matrix of an inner product, we shall define
some special types of matrices over the complex field C.

Conjugate transpose of a matrix. Definition. Let A = [aᵢⱼ]ₙₓₙ
be a square matrix of order n over the field C of complex numbers.
Then the matrix [āⱼᵢ]ₙₓₙ is called the conjugate transpose of A, and
we shall denote it by A*.
Thus in order to obtain A* from A, we should first replace
each element of A by its complex conjugate and then we should
write the transpose of the new matrix.
If in place of the field C we take the field R of the real num-
bers, then ā = a. So in this case A* will simply be the transpose
of the matrix A.

If A = A*, then the matrix A is said to be a self-adjoint matrix
or a Hermitian matrix.
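The two-step recipe (conjugate each entry, then transpose) can be written out directly; the sketch below is ours, with `conj_transpose` an assumed helper name.

```python
def conj_transpose(A):
    # A* : entry (i, j) of A* is the conjugate of entry (j, i) of A
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

A = [[1 + 0j, 2 + 3j],
     [2 - 3j, 5 + 0j]]
A_star = conj_transpose(A)
is_hermitian = (A_star == A)   # A is self-adjoint: A = A*
```

For the matrix `A` above, `is_hermitian` is `True`, illustrating the definition of a Hermitian matrix.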
Symmetric matrix. Definition. A square matrix A over a field
F is said to be a symmetric matrix if it is equal to its transpose, i.e.,
if A = Aᵀ.

Obviously a Hermitian matrix over the field of real numbers
is a symmetric matrix.
Theorem. If B = {α₁, α₂, ..., αₙ} is an ordered basis of a finite
dimensional inner product space V, then an inner product on V is
completely determined by the values which it takes on pairs of vectors
in B.   (Meerut 1976)
Proof. Suppose we are given a particular inner product on V.
We shall show that this inner product on V is completely deter-
mined by the values
gᵢⱼ = (αⱼ, αᵢ), where i = 1, 2, ..., n and j = 1, 2, ..., n.
Let α = Σⱼ₌₁ⁿ xⱼαⱼ and β = Σᵢ₌₁ⁿ yᵢαᵢ be any two vectors in V. Then
(α, β) = (Σⱼ₌₁ⁿ xⱼαⱼ, β) = Σⱼ₌₁ⁿ xⱼ (αⱼ, β), by the linearity property of
the inner product,
= Σⱼ₌₁ⁿ xⱼ (αⱼ, Σᵢ₌₁ⁿ yᵢαᵢ) = Σⱼ₌₁ⁿ Σᵢ₌₁ⁿ xⱼ ȳᵢ (αⱼ, αᵢ)
[see theorem 1, part (ii) on page 289]
= Σⱼ₌₁ⁿ Σᵢ₌₁ⁿ ȳᵢ gᵢⱼ xⱼ
= Y*GX,   ...(1)
where X, Y are the coordinate matrices of α, β in the ordered basis
B, Y* is the conjugate transpose of Y, and G is the matrix [gᵢⱼ]ₙₓₙ.
From (1) we observe that the inner product (α, β) is comp-
letely determined by the matrix G, i.e., by the scalars gᵢⱼ. Hence the
result of the theorem.
Definition. Let B = {α₁, ..., αₙ} be an ordered basis for an
n-dimensional inner product space V. The matrix G = [gᵢⱼ]ₙₓₙ, where
gᵢⱼ = (αⱼ, αᵢ), is called the matrix of the inner product in
the ordered basis B.
We observe that ḡᵢⱼ = conj (αⱼ, αᵢ) = (αᵢ, αⱼ) = gⱼᵢ. Therefore the mat-
rix G is such that G* = G. Thus G is a Hermitian matrix.
Further, we know that in an inner product space, (α, α) > 0
if α ≠ 0. Therefore from the relation (1) given above we observe
that the matrix G is such that
X*GX > 0 if X ≠ 0.   ...(2)
From the relation (2) we conclude that the matrix G is inver-
tible. For if G is not invertible, then there exists an X ≠ 0 such
that GX = 0. For any such X the relation (2) is impossible. Hence
G must be invertible.
Lastly, from the relation (2) we observe that if x₁, x₂, ..., xₙ
are scalars not all of which are zero, then
Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ x̄ᵢ gᵢⱼ xⱼ > 0.   ...(3)
Now suppose that out of the n scalars x₁, ..., xₙ we take xᵢ = 1
and each of the remaining n − 1 scalars is taken as 0. Then from
(3) we conclude that gᵢᵢ > 0. Thus gᵢᵢ > 0 for each i = 1, ..., n.
Hence each entry along the principal diagonal of the matrix G is
positive.
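The identity (α, β) = Y*GX can be tested on a small example; the sketch below is ours (basis and coordinates chosen arbitrarily), using the standard inner product on C² and the convention gᵢⱼ = (αⱼ, αᵢ) from the theorem.

```python
def ip(u, v):
    # standard inner product on C^n
    return sum(x * y.conjugate() for x, y in zip(u, v))

basis = [[1 + 0j, 1j], [0j, 2 + 0j]]           # ordered basis {a1, a2} of C^2
G = [[ip(basis[j], basis[i]) for j in range(2)] for i in range(2)]   # g_ij = (a_j, a_i)

x = [2 + 1j, 1 - 1j]    # coordinates of alpha in the basis
y = [1 + 0j, 3j]        # coordinates of beta in the basis
alpha = [x[0] * basis[0][k] + x[1] * basis[1][k] for k in range(2)]
beta = [y[0] * basis[0][k] + y[1] * basis[1][k] for k in range(2)]

# (alpha, beta) = Y* G X = sum_i sum_j conj(y_i) g_ij x_j
via_G = sum(y[i].conjugate() * G[i][j] * x[j] for i in range(2) for j in range(2))
direct = ip(alpha, beta)
```

The two computations agree, and `G` is Hermitian, as the text argues.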
Solved Examples
Example 1. Show that we can always define an inner product
on a finite dimensional vector space, real or complex.
Solution. Let V be a finite dimensional vector space over the
field F, real or complex.
Let B = {α₁, ..., αₙ} be a basis for V.
Let α, β ∈ V. Then we can write α = a₁α₁ + ... + aₙαₙ
and β = b₁α₁ + ... + bₙαₙ,
where a₁, ..., aₙ and b₁, ..., bₙ are uniquely determined elements of
F. Let us define
(α, β) = a₁b̄₁ + ... + aₙb̄ₙ.   ...(1)

We shall show that (1) satisfies all the conditions for an inner
product.
(i) Conjugate symmetry. We have
(β, α) = b₁ā₁ + ... + bₙāₙ.
∴ conj (β, α) = conj (b₁ā₁ + ... + bₙāₙ) = b̄₁a₁ + ... + b̄ₙaₙ
= a₁b̄₁ + ... + aₙb̄ₙ = (α, β).
(ii) Linearity. Let γ = c₁α₁ + ... + cₙαₙ ∈ V and a, b ∈ F.
We have
aα + bβ = (aa₁ + bb₁) α₁ + ... + (aaₙ + bbₙ) αₙ.

∴ (aα + bβ, γ) = (aa₁ + bb₁) c̄₁ + ... + (aaₙ + bbₙ) c̄ₙ
= a (a₁c̄₁ + ... + aₙc̄ₙ) + b (b₁c̄₁ + ... + bₙc̄ₙ)
= a (α, γ) + b (β, γ).
(iii) Non-negativity. We have
(α, α) = a₁ā₁ + ... + aₙāₙ = | a₁ |² + ... + | aₙ |² ≥ 0.
Also (α, α) = 0
⟹ | a₁ |² + ... + | aₙ |² = 0
⟹ | a₁ |² = 0, ..., | aₙ |² = 0
⟹ a₁ = 0, ..., aₙ = 0
⟹ α = 0.
Hence (1) is an inner product on V.
Example 2. In V₂(F) define, for α = (a₁, a₂) and β = (b₁, b₂),
(α, β) = 2a₁b̄₁ + a₁b̄₂ + a₂b̄₁ + a₂b̄₂.
Show that this defines an inner product on V₂(F).
Solution. (1) Conjugate symmetry. We have
(β, α) = 2b₁ā₁ + b₁ā₂ + b₂ā₁ + b₂ā₂.
∴ conj (β, α) = conj (2b₁ā₁ + b₁ā₂ + b₂ā₁ + b₂ā₂)
= 2b̄₁a₁ + b̄₁a₂ + b̄₂a₁ + b̄₂a₂
= 2a₁b̄₁ + a₂b̄₁ + a₁b̄₂ + a₂b̄₂ = (α, β).
(2) Linearity. Let a, b ∈ F and γ = (c₁, c₂) ∈ V₂(F). Then
aα + bβ = a (a₁, a₂) + b (b₁, b₂) = (aa₁ + bb₁, aa₂ + bb₂).
∴ (aα + bβ, γ) = 2 (aa₁ + bb₁) c̄₁ + (aa₁ + bb₁) c̄₂
+ (aa₂ + bb₂) c̄₁ + (aa₂ + bb₂) c̄₂
= a (2a₁c̄₁ + a₁c̄₂ + a₂c̄₁ + a₂c̄₂) + b (2b₁c̄₁ + b₁c̄₂ + b₂c̄₁ + b₂c̄₂)
= a (α, γ) + b (β, γ).
(3) Non-negativity. We have
(α, α) = 2a₁ā₁ + a₁ā₂ + a₂ā₁ + a₂ā₂
= a₁ā₁ + (a₁ + a₂)(ā₁ + ā₂) = | a₁ |² + (a₁ + a₂) conj (a₁ + a₂)
= | a₁ |² + | a₁ + a₂ |² ≥ 0.

Also (α, α) = 0 ⟹ | a₁ |² + | a₁ + a₂ |² = 0
⟹ a₁ = 0, a₁ + a₂ = 0
⟹ a₂ = 0
⟹ α = 0.
Hence the result.
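The three axioms verified above can also be spot-checked numerically for this particular inner product; the code below is our illustration (vectors and scalars chosen arbitrarily).

```python
def ip(a, b):
    # (alpha, beta) = 2 a1 conj(b1) + a1 conj(b2) + a2 conj(b1) + a2 conj(b2)
    a1, a2 = a
    b1, b2 = b
    return (2 * a1 * b1.conjugate() + a1 * b2.conjugate()
            + a2 * b1.conjugate() + a2 * b2.conjugate())

u = (1 + 2j, -1 + 0j)
v = (3 + 0j, 1 - 1j)
w = (0.5j, 2 + 0j)
s, t = 2 - 1j, 1 + 3j

sym_ok = abs(ip(u, v) - ip(v, u).conjugate()) < 1e-9      # conjugate symmetry
lin_lhs = ip((s * u[0] + t * v[0], s * u[1] + t * v[1]), w)
lin_rhs = s * ip(u, w) + t * ip(v, w)                     # linearity in first slot
pos = ip(u, u)   # should be real and positive since u != 0
```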
Example 3. Let V be a vector space over F. Show that the
sum of two inner products on V is an inner product on V. Is the diffe-
rence of two inner products an inner product? Show that a positive
multiple of an inner product is an inner product.   (Meerut 1979)
Solution. Let p and q be two inner products on V(F). Then
p and q are both mappings from V × V into F and are such that
∀ α, β, γ ∈ V and ∀ a, b ∈ F, we have
p (α, β) = conj p (β, α),
p (aα + bβ, γ) = a p (α, γ) + b p (β, γ),
p (α, α) > 0 if α ≠ 0;
and similar results for the inner product q. Note that here p (α, β)
is an element of F which is the image of the ordered pair (α, β)
under the inner product p.
Now let us define the sum p + q of the inner products p and q
by the relation
(p + q)(α, β) = p (α, β) + q (α, β)  ∀ (α, β) ∈ V × V.
We shall show that p + q is also an inner product on V, i.e., all
the postulates of an inner product hold for p + q.
(i) Conjugate symmetry. We have
(p + q)(β, α) = p (β, α) + q (β, α).   [by def. of p + q]
∴ conj {(p + q)(β, α)} = conj {p (β, α) + q (β, α)}
= conj p (β, α) + conj q (β, α)
= p (α, β) + q (α, β)   [∵ both p and q are inner products]
= (p + q)(α, β).   [by def. of p + q]
(ii) Linearity. We have
(p + q)(aα + bβ, γ) = p (aα + bβ, γ) + q (aα + bβ, γ)
[by def. of p + q]
= a p (α, γ) + b p (β, γ) + a q (α, γ) + b q (β, γ)
[by linearity of p and q]
= a {p (α, γ) + q (α, γ)} + b {p (β, γ) + q (β, γ)}
= a {(p + q)(α, γ)} + b {(p + q)(β, γ)}.
(iii) Non-negativity. Suppose α ≠ 0. We have
(p + q)(α, α) = p (α, α) + q (α, α).
Since p (α, α) > 0 and q (α, α) > 0, therefore
(p + q)(α, α) > 0.
Hence p + q is an inner product on V.
If we define p − q by the relation
(p − q)(α, β) = p (α, β) − q (α, β),
then p − q is not necessarily an inner product on V.
In this case the postulate of non-negativity might not be satis-
fied. Note that even if p (α, α) > 0 and q (α, α) > 0, yet (p − q)
(α, α) = p (α, α) − q (α, α) may come out to be negative.
Again, suppose we define np = p + p + ... + p up to n terms, where
n is a positive integer. Then np is an inner product on V. As we
have proved above, p + p = 2p is an inner product on V. Then
2p + p = 3p is also an inner product, 3p + p = 4p is also an inner
product, and so on. By induction we can show that np is an inner
product on V.
Example 4. Let V be an inner product space.
(a) Show that (0, β) = 0 for all β in V.
(b) Show that if (α, β) = 0 for all β in V, then α = 0.
Solution. (a) We have (0, β) = (0·0, β)  ∀ β ∈ V
= 0 (0, β) = 0.
(b) Let (α, β) = 0 for all β in V. Then taking β = α, we get
(α, α) = 0
⟹ α = 0.
Example 5. Let V be an inner product space, and α, β be vec-
tors in V. Show that α = β if and only if (α, γ) = (β, γ) for every γ
in V.   (Meerut 1979; S.V.U. Tirupati 93)
Solution. Let α = β. Then for every γ in V we have
(α, γ) = (β, γ).
Conversely, let (α, γ) = (β, γ)  ∀ γ ∈ V.
Then (α, γ) − (β, γ) = 0  ∀ γ ∈ V
⟹ (α − β, γ) = 0  ∀ γ ∈ V
⟹ (α − β, α − β) = 0   [taking γ = α − β]
⟹ α − β = 0   [∵ (α, α) = 0 ⟹ α = 0]
⟹ α = β.
Example 6. If α and β are vectors in an inner product space,
then show that || α + β ||² + || α − β ||² = 2 || α ||² + 2 || β ||².
(Parallelogram law)  (Meerut 1969, 74; Madras 81, 83;
Nagarjuna 78; Andhra 81; S.V.U. Tirupati 93)
Solution. We have
|| α + β ||² = (α + β, α + β)   [by def. of norm]
= (α, α + β) + (β, α + β)   [by linearity property]
= (α, α) + (α, β) + (β, α) + (β, β)
= || α ||² + (α, β) + (β, α) + || β ||².   ...(1)
Also || α − β ||² = (α − β, α − β) = (α, α − β) − (β, α − β)
= (α, α) − (α, β) − (β, α) + (β, β)
= || α ||² − (α, β) − (β, α) + || β ||².   ...(2)
Adding (1) and (2), we get
|| α + β ||² + || α − β ||² = 2 || α ||² + 2 || β ||².
Geometrical interpretation. Let α and β be vectors in the vec-
tor space V₂(R) with standard inner product defined on it. Sup-
pose the vector α is represented by the side AB and the vector β by
the side BC of a parallelogram ABCD. Then the vectors α + β and
α − β represent the diagonals AC and DB of the parallelogram.
∴ AC² + DB² = 2AB² + 2BC²,
i.e., the sum of the squares of the diagonals of a parallelogram is equal
to the sum of the squares of its four sides.
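The parallelogram law is exact in integer arithmetic, so it can be checked without any tolerance; the example below is ours (vectors chosen arbitrarily), under the standard inner product.

```python
def norm2(v):
    # || v ||^2 under the standard real inner product
    return sum(x * x for x in v)

alpha = [1.0, -3.0, 2.0]
beta = [4.0, 0.0, -1.0]
s = [x + y for x, y in zip(alpha, beta)]   # alpha + beta (diagonal AC)
d = [x - y for x, y in zip(alpha, beta)]   # alpha - beta (diagonal DB)
lhs = norm2(s) + norm2(d)
rhs = 2 * norm2(alpha) + 2 * norm2(beta)
```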
Example 7. If α, β are vectors in an inner product space V(F)
and a, b ∈ F, then prove that
(i) || aα + bβ ||² = | a |² || α ||² + ab̄ (α, β) + āb (β, α) + | b |² || β ||².
(ii) Re (α, β) = ¼ || α + β ||² − ¼ || α − β ||².
Solution. (i) We have
|| aα + bβ ||² = (aα + bβ, aα + bβ)
= a (α, aα + bβ) + b (β, aα + bβ)
= a {ā (α, α) + b̄ (α, β)} + b {ā (β, α) + b̄ (β, β)}
= aā (α, α) + ab̄ (α, β) + āb (β, α) + bb̄ (β, β)
= | a |² || α ||² + ab̄ (α, β) + āb (β, α) + | b |² || β ||².
(ii) We have
|| α + β ||² = (α + β, α + β) = (α, α + β) + (β, α + β)
= (α, α) + (α, β) + (β, α) + (β, β)
= || α ||² + (α, β) + (β, α) + || β ||²
= || α ||² + 2 Re (α, β) + || β ||².   ...(1)
Also || α − β ||² = (α − β, α − β) = (α, α − β) − (β, α − β)
= (α, α) − (α, β) − (β, α) + (β, β)
= || α ||² − {(α, β) + (β, α)} + || β ||²
= || α ||² − 2 Re (α, β) + || β ||².   ...(2)
Subtracting (2) from (1), we get
|| α + β ||² − || α − β ||² = 4 Re (α, β).
∴ Re (α, β) = ¼ || α + β ||² − ¼ || α − β ||².
Note. If F = R, then Re (α, β) = (α, β).
Example 8. Prove that if α and β are vectors in a unitary space,
then
(i) 4 (α, β) = || α + β ||² − || α − β ||² + i || α + iβ ||² − i || α − iβ ||².
(Meerut 1987, 88, 91)
(ii) (α, β) = Re (α, β) + i Re (α, iβ).   (Meerut 1985, 88)
Solution. (i) We have
|| α + β ||² = || α ||² + (α, β) + (β, α) + || β ||²   [See Ex. 7]
Also || α − β ||² = || α ||² − (α, β) − (β, α) + || β ||²   [See Ex. 7]
∴ || α + β ||² − || α − β ||² = 2 (α, β) + 2 (β, α).   ...(1)
Also || α + iβ ||² = (α + iβ, α + iβ) = (α, α + iβ) + i (β, α + iβ)
= (α, α) + ī (α, β) + i {(β, α) + ī (β, β)}
= (α, α) − i (α, β) + i {(β, α) − i (β, β)}   [∵ ī = −i]
= || α ||² − i (α, β) + i (β, α) + || β ||².
∴ i || α + iβ ||² = i || α ||² + (α, β) − (β, α) + i || β ||².   ...(2)
Also || α − iβ ||² = (α − iβ, α − iβ)
= (α, α − iβ) − i (β, α − iβ)
= (α, α) + i (α, β) − i {(β, α) + i (β, β)}
= || α ||² + i (α, β) − i (β, α) + || β ||².
∴ −i || α − iβ ||² = −i || α ||² + (α, β) − (β, α) − i || β ||².   ...(3)
Adding (2) and (3), we get
i || α + iβ ||² − i || α − iβ ||² = 2 (α, β) − 2 (β, α).   ...(4)
Adding (1) and (4), we get
4 (α, β) = || α + β ||² − || α − β ||² + i || α + iβ ||² − i || α − iβ ||².
(ii) We have (α, β) = Re (α, β) + i Im (α, β).
If z = x + iy, then
y = Im z = Re {−i (x + iy)} = Re (−iz).
∴ Im (α, β) = Re {−i (α, β)}
= Re (α, iβ)   [∵ (α, iβ) = −i (α, β)]
∴ (α, β) = Re (α, β) + i Re (α, iβ).
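The polarization identity of part (i) recovers the full complex inner product from norms alone, which makes it easy to verify numerically. This check is ours (vectors chosen arbitrarily), under the standard inner product on C².

```python
def ip(a, b):
    # standard inner product on C^n, linear in the first argument
    return sum(x * y.conjugate() for x, y in zip(a, b))

def norm2(v):
    return ip(v, v).real

a = [1 + 2j, -1j]
b = [3 - 1j, 2 + 0j]
comb = lambda c: [x + c * y for x, y in zip(a, b)]   # a + c*b

# 4 (a, b) = ||a+b||^2 - ||a-b||^2 + i ||a+ib||^2 - i ||a-ib||^2
rhs = (norm2(comb(1)) - norm2(comb(-1))
       + 1j * norm2(comb(1j)) - 1j * norm2(comb(-1j)))
lhs = 4 * ip(a, b)
```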
Example 9. Suppose that α and β are vectors in an inner product
space V. If | (α, β) | = || α || · || β || (that is, if the Schwarz inequality
reduces to an equality), then α and β are linearly dependent.
(Meerut 1969)
Solution. It is given that | (α, β) | = || α || · || β ||.   ...(1)
If α = 0, then (1) is satisfied. Therefore if α and β satisfy (1),
then α can be 0 also. If α = 0, the vectors α and β are linearly
dependent, because any set of vectors containing the zero vector is
linearly dependent.
Let us now suppose that α and β satisfy (1) and α ≠ 0.
If α ≠ 0, then || α || > 0. Consider the vector
γ = β − [(β, α)/|| α ||²] α.
We have
(γ, γ) = (β − (β, α)/|| α ||² α, β − (β, α)/|| α ||² α)
= (β, β) − [conj (β, α)/|| α ||²] (β, α) − [(β, α)/|| α ||²] (α, β)
+ [(β, α) conj (β, α)/|| α ||⁴] (α, α)
= || β ||² − | (α, β) |²/|| α ||²
[∵ (α, β) = conj (β, α); the fourth term cancels one of
the two middle terms]
= 0.   [from (1)]
Now (γ, γ) = 0 ⟹ γ = 0
⟹ β − [(β, α)/|| α ||²] α = 0
⟹ α and β are linearly dependent.
Example 10. If in an inner product space the vectors α and β are
linearly dependent, then
| (α, β) | = || α || · || β ||.
Solution. If α = 0, then | (α, β) | = 0 and || α || = 0. Therefore
the given result is true.
Also if β = 0, then (α, β) = (α, 0) = 0 and || β || = 0.
So let us suppose that both α and β are non-zero vectors.
Since they are linearly dependent, therefore α = cβ, where c is some
scalar. We have
(α, β) = (cβ, β) = c (β, β) = c || β ||².
∴ | (α, β) | = | c | · || β ||².
Also || α || = || cβ || = | c | · || β ||, so || α || · || β || = | c | · || β ||².

Hence | (α, β) | = || α || · || β ||.



Example 11. If in an inner product space
|| α + β || = || α || + || β ||,
then prove that the vectors α and β are linearly dependent. Give an
example to show that the converse of this statement is false.
Solution. We have
|| α + β || = || α || + || β ||
⟹ || α + β ||² = (|| α || + || β ||)²
⟹ || α ||² + 2 Re (α, β) + || β ||² = || α ||² + || β ||²
+ 2 || α || · || β ||   [See Ex. 7]
⟹ Re (α, β) = || α || · || β ||
∴ || α || · || β || ≤ | (α, β) |.   ...(1)   [∵ Re z ≤ | z |]
But by Schwarz's inequality, we have
| (α, β) | ≤ || α || · || β ||.   ...(2)
∴ from (1) and (2), we get
|| α || · || β || = | (α, β) |.
Thus for the vectors α and β the Schwarz inequality reduces
to an equality. Hence the vectors α and β are linearly dependent.
Note that the proof we have given is applicable whether the
inner product is complex or real. In the real case,
Re (α, β) = (α, β).
The converse is not true, as is obvious from the following
example.
Take the inner product space V₃(R) with standard inner pro-
duct defined on it.
Let α = (1, 0, −1), β = (−2, 0, 2)
be two vectors in V₃(R). Then β = −2α; therefore α and β are
linearly dependent. We have
|| α || = √(1 + 0 + 1) = √2, || β || = √8.
∴ || α || + || β || = √2 + √8.
Also α + β = (−1, 0, 1),
∴ || α + β || = √2.
Obviously || α + β || ≠ || α || + || β ||.
Exercises
1. If α = (a₁, ..., aₙ), β = (b₁, b₂, ..., bₙ) ∈ Vₙ(R), then prove that
(α, β) = a₁b₁ + a₂b₂ + ... + aₙbₙ
defines an inner product on Vₙ(R).
2. Which of the following define inner products in V₂(R)? Give
reasons. [Assume α = (x₁, x₂), β = (y₁, y₂).]
(a) (α, β) = x₁y₁ + 2x₁y₂ + 2x₂y₁ + 5x₂y₂.
(b) (α, β) = x₁² − 2x₁y₂ − 2x₂y₁ + y₁².
(c) (α, β) = 2x₁y₁ + 5x₂y₂.
(d) (α, β) = x₁y₁ − 2x₁y₂ − 2x₂y₁ + 4x₂y₂.
Ans. (a) and (c) are inner products; (b) and (d) are not.
3. Show that, for the vectors α = (x₁, x₂) and β = (y₁, y₂) of V₂(R),
the following defines an inner product on V₂(R):
(α, β) = x₁y₁ − x₂y₁ − x₁y₂ + 2x₂y₂.   (Meerut 1977)
4. Let α = (a₁, a₂) and β = (b₁, b₂) be any two vectors ∈ V₂(C).
Prove that (α, β) = a₁b̄₁ + (a₁ + a₂)(b̄₁ + b̄₂) defines an inner
product on V₂(C). Show that the norm of the vector (3, 4) in
this inner product space is √(58).
5. If α, β be vectors in a real inner product space such that
|| α || = || β ||, then prove that (α + β, α − β) = 0.
6. Show that any two vectors α, β of an inner product space are
linearly dependent if and only if | (α, β) | = || α || · || β ||.
7. Let V be a finite-dimensional vector space and let B = {α₁, ..., αₙ}
be a basis for V. Let ( | ) or ( , ) be an inner product on V.
If c₁, ..., cₙ are any n scalars, show that there is exactly one
vector α in V such that (α | αⱼ) or (α, αⱼ) = cⱼ, j = 1, 2, ..., n.
§ 3. Orthogonality. Definition. Let α and β be vectors in an
inner product space V. Then α is said to be orthogonal to β if
(α, β) = 0.
The relation of orthogonality in an inner product space is
symmetric. We have
α is orthogonal to β ⟹ (α, β) = 0 ⟹ conj (α, β) = 0
⟹ (β, α) = 0 ⟹ β is orthogonal to α.
So we can say that two vectors α and β in an inner product
space are orthogonal if (α, β) = 0.
Note 1. If α is orthogonal to β, then every scalar multiple of α
is orthogonal to β. Let k be any scalar. Then
(kα, β) = k (α, β) = k·0 = 0.   [∵ (α, β) = 0]
Therefore kα is orthogonal to β.
Note 2. The zero vector is orthogonal to every vector. For
every vector α in V, we have (0, α) = 0.
Note 3. The zero vector is the only vector which is orthogonal
to itself.
We have
α is orthogonal to α ⟹ (α, α) = 0
⟹ α = 0, by def. of an inner product space.

Definition. A vector α is said to be orthogonal to a set S if it
is orthogonal to each vector in S. Similarly, two subspaces are called
orthogonal if every vector in each is orthogonal to every vector in
the other.
Orthogonal set. Definition. Let S be a set of vectors in an
inner product space V. Then S is said to be an orthogonal set pro-
vided that any two distinct vectors in S are orthogonal.
Theorem 1. Let S = {α₁, ..., αₘ} be an orthogonal set of non-zero
vectors in an inner product space V. If a vector β in V is in the
linear span of S, then
β = Σₖ₌₁ᵐ [(β, αₖ)/|| αₖ ||²] αₖ.
Proof. Since β ∈ L(S), therefore β can be expressed as a
linear combination of the vectors in S. Let
β = c₁α₁ + ... + cₘαₘ = Σⱼ₌₁ᵐ cⱼαⱼ.   ...(1)
We have, for each k where 1 ≤ k ≤ m,
(β, αₖ) = (Σⱼ₌₁ᵐ cⱼαⱼ, αₖ)
= Σⱼ₌₁ᵐ cⱼ (αⱼ, αₖ)   [by the linearity property of the inner product]
= cₖ (αₖ, αₖ)   [on summing with respect to j; note
that S is an orthogonal set of non-zero
vectors, so (αⱼ, αₖ) = 0 if j ≠ k]
= cₖ || αₖ ||².
Now αₖ ≠ 0, therefore (αₖ, αₖ) ≠ 0. Thus
cₖ = (β, αₖ)/|| αₖ ||², 1 ≤ k ≤ m.
Putting these values of c₁, ..., cₘ in (1), we get
β = Σₖ₌₁ᵐ [(β, αₖ)/|| αₖ ||²] αₖ.
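The coefficient formula cₖ = (β, αₖ)/|| αₖ ||² can be exercised on a small orthogonal (not yet normalized) set; the sketch below is ours, under the standard inner product on R³.

```python
def ip(u, v):
    # standard real inner product
    return sum(x * y for x, y in zip(u, v))

# an orthogonal set of non-zero vectors in R^3
S = [[1.0, 1.0, 0.0], [1.0, -1.0, 0.0], [0.0, 0.0, 2.0]]
beta = [3.0, -1.0, 4.0]   # lies in the span of S (here, all of R^3)

coeffs = [ip(beta, a) / ip(a, a) for a in S]          # c_k = (beta, a_k)/||a_k||^2
recon = [sum(c * a[k] for c, a in zip(coeffs, S)) for k in range(3)]
```

Reconstructing `beta` from the coefficients returns the original vector exactly.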

Theorem 2. Any orthogonal set of non-zero vectors in an inner
product space V is linearly independent.   (Meerut 1984 P)
Proof. Let S be an orthogonal set of non-zero vectors in an
inner product space V. Let S₁ = {α₁, ..., αₘ} be a finite subset of S
containing m distinct vectors. Let
Σⱼ₌₁ᵐ cⱼαⱼ = c₁α₁ + ... + cₘαₘ = 0.   ...(1)
We have, for each k where 1 ≤ k ≤ m,
(Σⱼ₌₁ᵐ cⱼαⱼ, αₖ) = Σⱼ₌₁ᵐ cⱼ (αⱼ, αₖ)
= cₖ (αₖ, αₖ)   [∵ (αⱼ, αₖ) = 0 if j ≠ k]
= cₖ || αₖ ||².
But from (1), Σⱼ₌₁ᵐ cⱼαⱼ = 0. Therefore (Σⱼ₌₁ᵐ cⱼαⱼ, αₖ) = (0, αₖ) = 0.
∴ (1) implies that cₖ || αₖ ||² = 0, 1 ≤ k ≤ m
⟹ cₖ = 0.   [∵ αₖ ≠ 0 ⟹ || αₖ ||² ≠ 0]
∴ the set S₁ is linearly independent. Thus every finite sub-
set of S is linearly independent. Therefore S is linearly indepen-
dent.
Orthonormal set. Definition. Let S be a set of vectors in an
inner product space V. Then S is said to be an orthonormal set if
(i) α ∈ S ⟹ || α || = 1, i.e., (α, α) = 1,
and (ii) α, β ∈ S and α ≠ β ⟹ (α, β) = 0.   (Nagarjuna 1980)
Thus an orthonormal set is an orthogonal set with the addi-
tional property that each vector in it is of length 1; in other words,
a set S consisting of mutually orthogonal unit vectors is called an
orthonormal set. Obviously an orthonormal set cannot contain the
zero vector because || 0 || = 0.
A finite set S = {α₁, ..., αₘ} is orthonormal if
(αᵢ, αⱼ) = δᵢⱼ, where δᵢⱼ = 1 if i = j and δᵢⱼ = 0 if i ≠ j.
(Andhra 1981)
Existence of an orthonormal set. Every inner product space V
which is not equal to the zero space possesses an orthonormal set.
Let 0 ≠ α ∈ V. Then || α || ≠ 0. The set {α/|| α ||} containing
only one vector is necessarily an orthonormal set.
We have
(α/|| α ||, α/|| α ||) = (1/|| α ||²)(α, α) = || α ||²/|| α ||² = 1.
Theorem 3. Let S = {α₁, ..., αₘ} be an orthonormal set of vectors
in an inner product space V. If a vector β is in the linear span of S,
then β = Σₖ₌₁ᵐ (β, αₖ) αₖ.
Proof. Since β ∈ L(S), therefore β can be expressed as a
linear combination of the vectors in S. Let
β = c₁α₁ + ... + cₘαₘ = Σⱼ₌₁ᵐ cⱼαⱼ.   ...(1)
We have, for each k where 1 ≤ k ≤ m,
(β, αₖ) = (Σⱼ₌₁ᵐ cⱼαⱼ, αₖ)
= Σⱼ₌₁ᵐ cⱼ (αⱼ, αₖ)   [by linearity of the inner product]
= Σⱼ₌₁ᵐ cⱼ δⱼₖ   [∵ S is an orthonormal set]
= cₖ.   [on summing with respect to j and
remembering that δⱼₖ = 1 if j = k and
δⱼₖ = 0 if j ≠ k]
Putting the values of c₁, ..., cₘ in (1), we get
β = Σₖ₌₁ᵐ (β, αₖ) αₖ.
Theorem 4. If S = {α₁, ..., αₘ} is an orthonormal set in V and
if β ∈ V, then γ = β − Σᵢ₌₁ᵐ (β, αᵢ) αᵢ is orthogonal to each of α₁, ..., αₘ
and, consequently, to the subspace spanned by S.
Proof. We have, for each k where 1 ≤ k ≤ m,
(γ, αₖ) = (β − Σᵢ₌₁ᵐ (β, αᵢ) αᵢ, αₖ)
= (β, αₖ) − (Σᵢ₌₁ᵐ (β, αᵢ) αᵢ, αₖ)   [by linearity of the inner product]
= (β, αₖ) − Σᵢ₌₁ᵐ (β, αᵢ)(αᵢ, αₖ)   [by linearity of the inner product]
= (β, αₖ) − Σᵢ₌₁ᵐ (β, αᵢ) δᵢₖ   [∵ α₁, ..., αₘ belong to an
orthonormal set]
= (β, αₖ) − (β, αₖ)   [∵ δᵢₖ = 1 if i = k and δᵢₖ = 0 if i ≠ k]
= 0.
Hence the first part of the theorem.
Now let δ be any vector in the subspace spanned by S, i.e., let
δ ∈ L(S). Then δ = Σᵢ₌₁ᵐ aᵢαᵢ, where each aᵢ is some scalar.
We have (γ, δ) = (γ, Σᵢ₌₁ᵐ aᵢαᵢ) = Σᵢ₌₁ᵐ āᵢ (γ, αᵢ) = Σᵢ₌₁ᵐ āᵢ · 0 = 0.
Thus γ is orthogonal to every vector δ in L(S). Therefore γ is
orthogonal to L(S).
Theorem 5. Any orthonormal set of vectors in an inner product
space is linearly independent.
Proof. Let S be any orthonormal set of vectors in an inner
product space V. Let S₁ = {α₁, ..., αₘ} be a finite subset of S con-
taining m distinct vectors. Let
Σⱼ₌₁ᵐ cⱼαⱼ = c₁α₁ + ... + cₘαₘ = 0.   ...(1)
We have, for each k where 1 ≤ k ≤ m,
(Σⱼ₌₁ᵐ cⱼαⱼ, αₖ) = Σⱼ₌₁ᵐ cⱼ (αⱼ, αₖ)   [by linearity of the inner product]
= Σⱼ₌₁ᵐ cⱼ δⱼₖ   [∵ (αⱼ, αₖ) = δⱼₖ]
= cₖ.   [on summing with respect to j]
But from (1), Σⱼ₌₁ᵐ cⱼαⱼ = 0. Therefore (Σⱼ₌₁ᵐ cⱼαⱼ, αₖ) = (0, αₖ) = 0.
∴ (1) implies that cₖ = 0 for each k, 1 ≤ k ≤ m.
∴ the set S₁ is linearly independent. Thus every finite subset
of S is linearly independent. Therefore S is linearly independent.
Complete orthonormal set. Definition. An orthonormal set is
said to be complete if it is not contained in any larger orthonormal
set.
Orthogonal dimension of a finite-dimensional vector space.
Definition. Let V be a finite-dimensional inner product space of
dimension n. If S is any orthonormal set in V, then S is linearly in-
dependent. Therefore S cannot contain more than n distinct vectors,
because in an n-dimensional vector space a linearly independent set
cannot contain more than n vectors.
The orthogonal dimension of V is defined as the largest number
of vectors an orthonormal set in V can contain.
Obviously the orthogonal dimension of V will be ≤ n, where
n is the linear dimension of V.
The following theorem gives us a characterization of com-
pleteness, i.e., it gives us equivalent definitions of completeness.
Theorem 6. If S = {α₁, ..., αₙ} is any finite orthonormal set in
an inner product space V, then the following six conditions on S are
equivalent:
(i) The orthonormal set S is complete.
(ii) If (β, αᵢ) = 0 for i = 1, ..., n, then β = 0.
(iii) The linear span of S is equal to V, i.e., L(S) = V.
(iv) If β ∈ V, then β = Σᵢ₌₁ⁿ (β, αᵢ) αᵢ.
(v) If β and γ are in V, then (β, γ) = Σᵢ₌₁ⁿ (β, αᵢ)(αᵢ, γ).
(vi) If β is in V, then Σᵢ₌₁ⁿ | (β, αᵢ) |² = || β ||².
Proof. (i) ⟹ (ii).
It is given that S is a complete orthonormal set. Let β ∈ V
and (β, αᵢ) = 0 for i = 1, ..., n.
Then β is orthogonal to each of the vectors α₁, ..., αₙ.
If β ≠ 0, then adjoining the vector β/|| β || to the set S we obtain
an orthonormal set larger than S. This contradicts the given state-
ment that S is a complete orthonormal set. Hence β = 0.
(ii) ⟹ (iii).
It is given that if (β, αᵢ) = 0 for i = 1, ..., n, then β = 0. To
prove that L(S) = V.
Let γ be any vector in V. Consider the vector
δ = γ − Σᵢ₌₁ⁿ (γ, αᵢ) αᵢ.
We know that δ is orthogonal to each of the vectors α₁, ..., αₙ,
i.e., (δ, αᵢ) = 0 for each i = 1, ..., n. Therefore, according to the
given statement, δ = 0. This gives γ = Σᵢ₌₁ⁿ (γ, αᵢ) αᵢ. Thus every vector
γ in V can be expressed as a linear combination of α₁, ..., αₙ.
Therefore L(S) = V.
(iii) ⟹ (iv).
It is given that L(S) = V. Therefore if β ∈ V, then β can be
expressed as a linear combination of α₁, ..., αₙ. From theorem 3 we
know that this expression for β will be
β = Σᵢ₌₁ⁿ (β, αᵢ) αᵢ.
(iv) ⟹ (v).
It is given that if β is in V, then β = Σᵢ₌₁ⁿ (β, αᵢ) αᵢ. If γ is another
vector in V, then γ = Σⱼ₌₁ⁿ (γ, αⱼ) αⱼ.
We have (β, γ) = (Σᵢ₌₁ⁿ (β, αᵢ) αᵢ, Σⱼ₌₁ⁿ (γ, αⱼ) αⱼ)
= Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ (β, αᵢ) conj (γ, αⱼ) (αᵢ, αⱼ) = Σᵢ₌₁ⁿ (β, αᵢ) conj (γ, αᵢ)
[on summing with respect to j]
= Σᵢ₌₁ⁿ (β, αᵢ)(αᵢ, γ).
(v) ⟹ (vi).
It is given that if β and γ are in V, then (β, γ) = Σᵢ₌₁ⁿ (β, αᵢ)(αᵢ, γ).
If β is in V, then taking γ = β in the given result, we get
(β, β) = Σᵢ₌₁ⁿ (β, αᵢ)(αᵢ, β) = Σᵢ₌₁ⁿ (β, αᵢ) conj (β, αᵢ)
⟹ || β ||² = Σᵢ₌₁ⁿ | (β, αᵢ) |².
(vi) ⟹ (i).
It is given that if β is in V, then || β ||² = Σᵢ₌₁ⁿ | (β, αᵢ) |². To prove
that S is a complete orthonormal set.
Let S be not a complete orthonormal set, i.e., let S be contained
in a larger orthonormal set S₁.
Then there exists a vector α₀ in S₁ such that || α₀ || = 1 and α₀
is orthogonal to each of the vectors α₁, ..., αₙ. Since α₀ is in V,
therefore from the given condition we have
|| α₀ ||² = Σᵢ₌₁ⁿ | (α₀, αᵢ) |² = 0.
This contradicts the fact that || α₀ || = 1. Hence S must be
complete.
Corollary. Every complete orthonormal set in a finite-dimensional
inner product space V forms a basis for V.
Proof. Let S be a complete orthonormal set in a finite-dimen-
sional inner product space V. Then S is linearly independent. Also,
by the above theorem, L(S) = V. Hence S must be a basis for V.
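Conditions (iv) and (vi) of theorem 6 are easy to confirm for a concrete complete orthonormal set; the sketch below is ours, using a rotated orthonormal basis of R².

```python
import math

def ip(u, v):
    # standard real inner product
    return sum(x * y for x, y in zip(u, v))

# a complete orthonormal set in R^2
S = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
beta = [2.0, 5.0]

coeffs = [ip(beta, a) for a in S]
# (iv): beta = sum (beta, a_i) a_i
recon = [sum(c * a[k] for c, a in zip(coeffs, S)) for k in range(2)]
# (vi): sum | (beta, a_i) |^2 = || beta ||^2  (Parseval-type identity)
sum_sq = sum(c * c for c in coeffs)
```

Here `sum_sq` equals || β ||² = 4 + 25 = 29, and `recon` reproduces `beta`.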
In the next theorem we shall prove that the orthogonal dimen-
sion of a finite dimensional inner product space is equal to its linear
dimension.

Theorem 7. If V is an n-dimensional inner product space, then
there exist complete orthonormal sets in V, and every complete ortho-
normal set in V contains exactly n elements. The orthogonal dimen-
sion of V is the same as its linear dimension.
(Meerut 1969, 70, 73, 76, 77, 80)
Proof. Let 0 ≠ α ∈ V. Then {α/|| α ||} is an orthonormal set
in V. If it is not complete, then we can enlarge it by adding one
more vector to it so that the resulting set is also an orthonormal
set. If this resulting orthonormal set is still not complete, then we
enlarge it again. Thus we proceed by induction. Ultimately we
must reach a complete orthonormal set, because an orthonormal set
is linearly independent and so it can contain at most n elements.
Thus there exist complete orthonormal sets in V.
Now suppose S = {α₁, ..., αₘ} is a complete orthonormal set in
V. Then S is linearly independent. Also the linear span of S is V.
Therefore S is a basis for V. Hence the number of vectors in S must
be n. Thus we must have m = n.
Thus we have proved that there exist complete orthonormal
sets in V and each of them has n elements. Hence n is the
largest number of vectors that an orthonormal set in V can contain.
Therefore the orthogonal dimension of V is equal to n, which is also
the linear dimension of V.
Now we shall give an alternative proof of theorem 7. This
proof will be a constructive proof, i.e., it will also give us a process
to construct an orthonormal basis for a finite dimensional inner
product space.
Orthonormal basis. Definition. A basis of an inner product
space that consists of mutually orthogonal unit vectors is called an
orthonormal basis.
Gram-Schmidt orthogonalization process. Theorem 8. Every
finite-dimensional inner product space has an orthonormal basis.
(Meerut 1972, 73, 74, 79, 80, 82, 83P, 92, 93P; Madras 83;
Andhra 92; Tirupati 90; Nagarjuna 77)
Proof. Let V be an n-dimensional inner product space and let
B = {β₁, ..., βₙ} be a basis for V. From this set we shall construct
an orthonormal set B₁ = {α₁, ..., αₙ} of n distinct vectors by means of
a construction known as the Gram-Schmidt orthogonalization process.
The main idea behind this construction is that each αⱼ, 1 ≤ j ≤ n,
will be in the linear span of β₁, ..., βⱼ.
We have β₁ ≠ 0 because the set B is linearly independent. Let
α₁ = β₁/|| β₁ ||. We have (α₁, α₁) = (β₁/|| β₁ ||, β₁/|| β₁ ||)
= (1/|| β₁ ||²)(β₁, β₁) = || β₁ ||²/|| β₁ ||² = 1.
Thus we have constructed an orthonormal set {α₁} containing
one vector. Also α₁ is in the linear span of β₁.
Now let γ₂ = β₂ − (β₂, α₁) α₁. By theorem 4, γ₂ is orthogonal to
α₁. Also γ₂ ≠ 0, because if γ₂ = 0, then β₂ is a scalar multiple of α₁
and therefore of β₁. But this is not possible because the vectors
β₁ and β₂ are linearly independent. Hence γ₂ ≠ 0. Let us now put
α₂ = γ₂/|| γ₂ ||. Then || α₂ || = 1. Also α₂ is orthogonal to α₁, because
α₂ is simply a scalar multiple of γ₂, which is orthogonal to α₁.
Further α₂ ≠ α₁, for otherwise β₂ would become a scalar multiple of
β₁. Thus {α₁, α₂} is an orthonormal set containing two distinct
vectors such that α₁ is in the linear span of β₁ and α₂ is in the linear
span of β₁, β₂.
The way ahead is now clear. Suppose that we have construc-
ted an orthonormal set {α₁, ..., αₖ} of k (where k < n) distinct

vectors such that each αⱼ (1 ≤ j ≤ k) is a linear combination of
β₁, ..., βⱼ. Consider the vector
γₖ₊₁ = βₖ₊₁ − (βₖ₊₁, α₁) α₁ − (βₖ₊₁, α₂) α₂ − ... − (βₖ₊₁, αₖ) αₖ.   ...(1)
By theorem 4, γₖ₊₁ is orthogonal to each of the vectors α₁, ...,
αₖ. Suppose γₖ₊₁ = 0. Then βₖ₊₁ is a linear combination of α₁, ..., αₖ.
But according to our assumption each αⱼ (j = 1, ..., k) is a linear
combination of β₁, β₂, ..., βⱼ. Therefore βₖ₊₁ is a linear combination
of β₁, ..., βₖ. This is not possible because β₁, ..., βₖ, βₖ₊₁ are linearly
independent.
Therefore we must have γₖ₊₁ ≠ 0.
Let us now put αₖ₊₁ = γₖ₊₁/|| γₖ₊₁ ||.   ...(2)
We have || αₖ₊₁ || = 1. Also αₖ₊₁ is orthogonal to each of the
vectors α₁, ..., αₖ, because αₖ₊₁ is simply a scalar multiple of γₖ₊₁,
which is orthogonal to each of the vectors α₁, ..., αₖ. Further, obvio-
usly αₖ₊₁ ≠ αⱼ, j = 1, ..., k, for otherwise from (1) and (2) we see
that βₖ₊₁ would become a linear combination of β₁, ..., βₖ. Also from
(1) and (2), we see that αₖ₊₁ is in the linear span of β₁, ..., βₖ₊₁.
Thus we have been able to construct an orthonormal set
{α₁, ..., αₖ, αₖ₊₁}
containing k + 1 distinct vectors such that αⱼ (j = 1, 2, ..., k + 1) is
in the linear span of β₁, ..., βⱼ. The induction step is thus complete.
Continuing in this way, we shall ultimately obtain
an orthonormal set B₁ = {α₁, ..., αₙ} containing n distinct vectors.
The set B₁ is linearly independent because it is an orthonormal set.
Therefore B₁ is a basis for V, because the number of vectors in B₁ is
equal to the dimension of V. Also the set B₁ is a complete orthonor-
mal set, because the maximum number of vectors in an orthonormal
set in V can be n. Thus there exist complete orthonormal sets in V.
Also the orthogonal dimension of V is equal to n, i.e., equal to the
linear dimension of V.
Note. In the above construction the vector α₂ will be γ₂/|| γ₂ ||
where γ₂ = β₂ − (β₂, α₁) α₁. Similarly the vector α₃ will be γ₃/|| γ₃ ||
where γ₃ = β₃ − (β₃, α₁) α₁ − (β₃, α₂) α₂. Similarly the other vectors
can be found.
How to apply Gram-Schmidt orthogonalization process to
numerical problems ?
Suppose B = {β1, β2, ..., βn} is a given basis of a finite dimensional inner product space V. Let
B1 = {α1, ..., αn}
be an orthonormal basis for V which we are required to construct
from the basis B. The vectors α1, ..., αn will be obtained in
the following way.
Take α1 = β1 / || β1 ||,
α2 = γ2 / || γ2 || where γ2 = β2 − (β2, α1) α1,
α3 = γ3 / || γ3 || where γ3 = β3 − (β3, α1) α1 − (β3, α2) α2,
..........................................................
αn = γn / || γn || where γn = βn − (βn, α1) α1 − (βn, α2) α2
− ... − (βn, αn−1) αn−1.
Now we shall give an example to illustrate the Gram-Schmidt
process.
Example. Apply the Gram-Schmidt process to the vectors
β1 = (1, 0, 1), β2 = (1, 0, −1), β3 = (0, 3, 4), to obtain an orthonormal
basis for V3(R) with the standard inner product.
(Meerut 1980, 81, 83, 88, 93; Nagarjuna 80; S.V.U. Tirupati 90)
Solution. We have || β1 ||² = (β1, β1) = 1·1 + 0·0 + 1·1
= (1)² + (0)² + (1)² = 2.
Let α1 = β1 / || β1 || = (1/√2) (1, 0, 1) = (1/√2, 0, 1/√2).
Now let γ2 = β2 − (β2, α1) α1.
We have (β2, α1) = 1·(1/√2) + 0·0 + (−1)·(1/√2) = 0.
∴ γ2 = (1, 0, −1) − 0·(1/√2, 0, 1/√2) = (1, 0, −1).
Now || γ2 ||² = (γ2, γ2) = (1)² + (0)² + (−1)² = 2.
Let α2 = γ2 / || γ2 || = (1/√2) (1, 0, −1) = (1/√2, 0, −1/√2).
Now let γ3 = β3 − (β3, α1) α1 − (β3, α2) α2.
We have (β3, α1) = ((0, 3, 4), (1/√2, 0, 1/√2))
= 0·(1/√2) + 3·0 + 4·(1/√2) = 2√2.
Also (β3, α2) = ((0, 3, 4), (1/√2, 0, −1/√2))
= 0·(1/√2) + 3·0 − 4·(1/√2) = −2√2.
∴ γ3 = (0, 3, 4) − 2√2 (1/√2, 0, 1/√2) + 2√2 (1/√2, 0, −1/√2)
= (0, 3, 4) − (2, 0, 2) + (2, 0, −2) = (0, 3, 0).
Now || γ3 ||² = (γ3, γ3) = (0)² + (3)² + (0)² = 9.
Put α3 = γ3 / || γ3 || = (1/3) (0, 3, 0) = (0, 1, 0).
Now {α1, α2, α3}, i.e. {(1/√2, 0, 1/√2), (1/√2, 0, −1/√2), (0, 1, 0)},
is the required orthonormal basis for V3(R).
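The computation above is easy to check mechanically. Below is a small Python sketch of the Gram-Schmidt process for Rⁿ with the standard inner product, applied to the same β1, β2, β3; the function names are our own, not from the text.

```python
import math

def inner(u, v):
    # standard inner product on R^n
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(basis):
    """Turn a linearly independent list of vectors into an orthonormal list."""
    ortho = []
    for b in basis:
        # gamma = b minus its components along the alphas found so far
        gamma = list(b)
        for a in ortho:
            c = inner(b, a)
            gamma = [g - c * x for g, x in zip(gamma, a)]
        norm = math.sqrt(inner(gamma, gamma))
        ortho.append([g / norm for g in gamma])
    return ortho

# the worked example: beta1 = (1,0,1), beta2 = (1,0,-1), beta3 = (0,3,4)
alphas = gram_schmidt([[1, 0, 1], [1, 0, -1], [0, 3, 4]])
```

Printing `alphas` reproduces (1/√2, 0, 1/√2), (1/√2, 0, −1/√2) and (0, 1, 0), the basis obtained above.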
Theorem 9. Bessel's inequality.
If {α1, ..., αm} is any finite orthonormal set in an inner product space V and if β is any vector in V, then
Σᵢ₌₁ᵐ | (β, αi) |² ≤ || β ||².
(Meerut 1978, 79, 83P, 89, 91; Nagarjuna 91)
Furthermore, equality holds if and only if β is in the subspace
spanned by α1, ..., αm.
Proof. Consider the vector γ = β − Σᵢ₌₁ᵐ (β, αi) αi.
We have
|| γ ||² = (γ, γ) = (β − Σᵢ (β, αi) αi, β − Σⱼ (β, αj) αj)
= (β, β) − Σⱼ (β, αj) (αj, β) − Σᵢ (β, αi) (αi, β)
+ Σᵢ Σⱼ (β, αi) (αj, β) (αi, αj)
= (β, β) − Σⱼ | (β, αj) |² − Σᵢ | (β, αi) |² + Σᵢ | (β, αi) |²
[On summing with respect to j and remembering that (αj, β) is the
conjugate of (β, αj), so that (β, αj)(αj, β) = | (β, αj) |², and that
(αi, αj) = 1 when j = i and (αi, αj) = 0 when j ≠ i]
∴ || γ ||² = || β ||² − Σᵢ₌₁ᵐ | (β, αi) |².   ...(1)
Now || γ ||² ≥ 0.
∴ || β ||² − Σᵢ₌₁ᵐ | (β, αi) |² ≥ 0
or Σᵢ₌₁ᵐ | (β, αi) |² ≤ || β ||².
If the equality holds, i.e., if Σᵢ₌₁ᵐ | (β, αi) |² = || β ||², then from (1)
we have || γ ||² = 0. This implies that γ = 0, i.e., β = Σᵢ₌₁ᵐ (β, αi) αi.
Thus if the equality holds, then β is a linear combination of α1, ..., αm.
Conversely, if β is a linear combination of α1, ..., αm, then from theorem
3, we know that β = Σᵢ₌₁ᵐ (β, αi) αi. This implies that γ = 0, which in
itself implies that || γ ||² = 0. Then from (1), we get
Σᵢ₌₁ᵐ | (β, αi) |² = || β ||²
and thus the equality holds.
Note. Another statement of Bessel's inequality.
Let {α1, ..., αn} be an orthogonal set of non-zero vectors in an
inner product space V. If β is any vector in V, then
Σᵢ₌₁ⁿ | (β, αi) |² / || αi ||² ≤ || β ||².   (Meerut 1971, 74)
Proof. Let B = {δ1, ..., δn} where δi = αi / || αi ||, 1 ≤ i ≤ n.
Then || δi || = 1. Thus the set B is an orthonormal set. Now
proceeding as in the previous theorem, we get
Σᵢ | (β, δi) |² ≤ || β ||².   ...(1)
Also (β, δi) = (β, αi / || αi ||) = (1 / || αi ||) (β, αi).
∴ | (β, δi) |² = | (β, αi) |² / || αi ||².   ...(2)
From (1) and (2), we get the required result.
Corollary. If V is finite dimensional and if {α1, ..., αn} is an
orthonormal set in V such that Σᵢ₌₁ⁿ | (β, αi) |² = || β ||² for every β ∈ V,
prove that {α1, ..., αn} must be a basis of V.
Proof. Let β be any vector in V. Consider the vector
γ = β − Σᵢ₌₁ⁿ (β, αi) αi.   ...(1)
As in the proof of Bessel's inequality, we have
|| γ ||² = (γ, γ) = || β ||² − Σᵢ₌₁ⁿ | (β, αi) |²   [prove it here]
= 0, by the given condition.
∴ γ = 0, i.e., β = Σᵢ₌₁ⁿ (β, αi) αi.   [from (1)]
Thus every vector β in V can be expressed as a linear combination of the vectors in the set S = {α1, ..., αn}, i.e., L(S) = V. Also
S is linearly independent because it is an orthonormal set. Hence
S must be a basis for V.
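Bessel's inequality is easy to test numerically. The sketch below (our own illustration, not from the text) evaluates Σ |(β, αi)|² against || β ||² for an orthonormal set in R³ that does not span the space, and checks that equality holds exactly when β lies in the span.

```python
import math

def inner(u, v):
    # standard inner product on R^n
    return sum(x * y for x, y in zip(u, v))

# an orthonormal set in R^3 that is NOT a basis (it spans the xz-plane)
r = math.sqrt(0.5)
alphas = [[r, 0.0, r], [r, 0.0, -r]]

def bessel_sum(beta):
    # sum of |(beta, alpha_i)|^2 over the orthonormal set
    return sum(inner(beta, a) ** 2 for a in alphas)

beta1 = [3.0, 4.0, 1.0]      # has a component outside the span
beta2 = [2.0, 0.0, 5.0]      # lies in the span of alphas

strict = bessel_sum(beta1)   # strictly less than ||beta1||^2 = 26
equal = bessel_sum(beta2)    # equals ||beta2||^2 = 29
```

For `beta1` the sum is 10 < 26, the deficit being the squared component along the missing y-direction; for `beta2`, which lies in the span, equality holds.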
Orthogonal complement. Definition. Let V be an inner product
space, and let S be any set of vectors in V. The orthogonal complement of S, written as S⊥ and read as S perpendicular, is defined
by
S⊥ = {α ∈ V : (α, β) = 0 ∀ β ∈ S}.
Thus S⊥ is the set of all those vectors in V which are orthogonal to every vector in S.
Theorem 10. Let S be any set of vectors in an inner product
space V. Then S⊥ is a subspace of V.   (Madras 1981; Andhra 81)
Proof. We have, by definition,
S⊥ = {α ∈ V : (α, β) = 0 ∀ β ∈ S}.
Since (0, β) = 0 ∀ β ∈ S, therefore at least 0 ∈ S⊥ and thus S⊥
is not empty.
Let a, b ∈ F and γ, δ ∈ S⊥. Then (γ, β) = 0 ∀ β ∈ S and
(δ, β) = 0 ∀ β ∈ S.
For every β ∈ S, we have
(aγ + bδ, β) = a (γ, β) + b (δ, β) = a·0 + b·0 = 0.
Therefore aγ + bδ ∈ S⊥. Hence S⊥ is a subspace of V.
Note. The orthogonal complement of V is the zero subspace
and the orthogonal complement of the zero subspace is V itself.
Orthogonal complement of an orthogonal complement.
Definition. Let S be any subset of an inner product space V.
Then S⊥ is a subset of V. We define (S⊥)⊥, written as S⊥⊥, by
S⊥⊥ = {α ∈ V : (α, β) = 0 ∀ β ∈ S⊥}.
Obviously S⊥⊥ is a subspace of V. Also it can be easily seen
that S ⊆ S⊥⊥.
Let α ∈ S. Then (α, β) = 0 ∀ β ∈ S⊥. Therefore by definition
of S⊥⊥, α ∈ S⊥⊥. Thus α ∈ S => α ∈ S⊥⊥. Therefore S ⊆ S⊥⊥.
The following theorem known as the projection theorem is
very important.
Theorem 11. Let W be any subspace of a finite dimensional
inner product space V. Then
(i) V = W ⊕ W⊥, and (ii) W⊥⊥ = W.
(Meerut 1974, 76, 84, 85, 90, 93P; Andhra 92)
Proof. (i) First we shall prove that V = W + W⊥.
Since W is a subspace of a finite dimensional vector space V,
therefore W itself is also finite-dimensional. Let dim V = n and
dim W = m.
Now every finite-dimensional inner product space possesses an
orthonormal basis. Let {α1, ..., αm} be an orthonormal basis for W.
Let β be any vector in V. Consider the vector
γ = β − Σᵢ₌₁ᵐ (β, αi) αi.   ...(1)
By theorem 4, the vector γ is orthogonal to each of the vectors
α1, ..., αm, and consequently γ is orthogonal to the subspace W
spanned by these vectors. Thus γ is orthogonal to every vector in
W. Therefore γ ∈ W⊥. Also the vector Σᵢ₌₁ᵐ (β, αi) αi is in W because it is a linear combination of vectors belonging to a basis
for W.
Now from (1), we have
β = Σᵢ₌₁ᵐ (β, αi) αi + γ, where Σᵢ₌₁ᵐ (β, αi) αi is in W and γ is in
W⊥. Therefore V = W + W⊥.
Now we shall prove that the subspaces W and W⊥ are disjoint.
Let α ∈ W ∩ W⊥. Then α ∈ W and α ∈ W⊥. Since α ∈ W⊥, therefore α is orthogonal to every vector in W. In particular α is orthogonal to α, because α ∈ W. Now (α, α) = 0 => α = 0. Thus 0 is the only vector which
belongs to both W and W⊥. Hence W and W⊥ are disjoint.
∴ V = W ⊕ W⊥.
(ii) We have V = W ⊕ W⊥.   ...(2)
Now W⊥ is also a subspace of V. Therefore taking W⊥ in
place of W and using the result (2), we get
V = W⊥ ⊕ W⊥⊥.   ...(3)
Since V is the direct sum of W and W⊥ and V is finite-dimensional, therefore
dim V = dim W + dim W⊥.   ...(4)
Similarly from (3), we get
dim V = dim W⊥ + dim W⊥⊥.   ...(5)
From (4) and (5), we get
dim W = dim W⊥⊥.   ...(6)
Now we shall prove that W ⊆ W⊥⊥.
Let α ∈ W. Then (α, β) = 0 ∀ β ∈ W⊥. Therefore by definition
of W⊥⊥, α ∈ (W⊥)⊥. Thus α ∈ W => α ∈ W⊥⊥. Therefore W ⊆ W⊥⊥.
Since W ⊆ W⊥⊥, therefore W is a subspace of W⊥⊥. Also
dim W = dim W⊥⊥. Hence W = W⊥⊥.
Corollary. Let W be any subspace of a finite-dimensional inner
product space V. Then
dim W⊥ = dim V − dim W.
Proof. Since V is finite dimensional and V = W ⊕ W⊥,
therefore dim V = dim W + dim W⊥
=> dim W⊥ = dim V − dim W.
Definition. If W is a subspace of a finite dimensional inner
product space V, then V = W ⊕ W⊥. Therefore every vector α in V
can be uniquely expressed as α = α1 + α2, where α1 ∈ W and
α2 ∈ W⊥. The vectors α1 and α2 are then called the orthogonal projections of α
on the subspaces W and W⊥ respectively.
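The decomposition α = α1 + α2 with α1 ∈ W and α2 ∈ W⊥ can be computed directly from an orthonormal basis of W, exactly as in the proof of theorem 11: α1 = Σ (α, αi) αi and α2 = α − α1. A Python sketch of this (the names are our own illustrations):

```python
import math

def inner(u, v):
    # standard inner product on R^n
    return sum(x * y for x, y in zip(u, v))

def project(alpha, w_basis):
    """Split alpha into (part in W, part in W-perp).

    w_basis must be an ORTHONORMAL basis of the subspace W.
    """
    in_w = [0.0] * len(alpha)
    for a in w_basis:
        c = inner(alpha, a)
        in_w = [p + c * x for p, x in zip(in_w, a)]
    in_w_perp = [x - p for x, p in zip(alpha, in_w)]
    return in_w, in_w_perp

# W spanned by (2, -1, 6), normalized; this subspace reappears in example 19 below
n = math.sqrt(41.0)
a1, a2 = project([4.0, 1.0, 2.0], [[2 / n, -1 / n, 6 / n]])
```

Here `a1` is (38/41, −19/41, 114/41) and `a2` is (126/41, 60/41, −32/41); the two parts are orthogonal to each other.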
Solved Examples
Example 1. State whether the following statement is true or
false. Give reasons to support your answer.
If α is an element of an n-dimensional unitary space V and α is
perpendicular to n linearly independent vectors from V, then α = 0.
(Meerut 1977)
Solution. True. Suppose α is perpendicular to n linearly independent vectors α1, ..., αn.
Since V is of dimension n, therefore the n linearly independent
vectors α1, ..., αn constitute a basis for V. So we can write
α = a1α1 + ... + anαn. Now
(α, α) = (a1α1 + ... + anαn, α) = a1 (α1, α) + ... + an (αn, α)
= a1 × 0 + ... + an × 0   [∵ α is ⊥ to each of the vectors α1, ..., αn]
= 0.
∴ α = 0.

Example 2. If α and β are orthogonal unit vectors (that is,
{α, β} is an orthonormal set), what is the distance between α and β?
Solution. If d(α, β) denotes the distance between α and β,
then d(α, β) = || α − β ||.
We have || α − β ||² = (α − β, α − β) = (α, α − β) − (β, α − β)
= (α, α) − (α, β) − (β, α) + (β, β)
= || α ||² − 0 − 0 + || β ||²   [∵ α is orthogonal to β]
= 1 + 1   [∵ α and β are unit vectors]
= 2.
∴ d(α, β) = || α − β || = √2.

Example 3. Prove that two vectors α and β in a real inner
product space are orthogonal if and only if || α + β ||² = || α ||² + || β ||².
(Nagarjuna 1991; Madurai 85)
Solution. Let α, β be two vectors in a real inner product
space V. We have
|| α + β ||² = (α + β, α + β)
= (α, α) + (α, β) + (β, α) + (β, β)
= || α ||² + 2 (α, β) + || β ||²   [∵ (β, α) = (α, β)]
Thus in a real inner product space V, we have
|| α + β ||² = || α ||² + 2 (α, β) + || β ||².   ...(1)
If α and β are orthogonal, (α, β) = 0.
Therefore from (1), we get || α + β ||² = || α ||² + || β ||².
Conversely, suppose that || α + β ||² = || α ||² + || β ||².
Then from (1), we get 2 (α, β) = 0, i.e., (α, β) = 0.
Therefore α and β are orthogonal.
Note 1. The above result is known as the Pythagorean theorem.
Its geometrical interpretation is that if ABC is a triangle in three
dimensional Euclidean space, then the angle B is a right angle if
and only if AC² = AB² + BC².
Note 2. If V is a complex inner product space, then the above
result becomes false.
In this case
|| α + β ||² = || α ||² + (α, β) + (β, α) + || β ||²
= || α ||² + 2 Re (α, β) + || β ||².
If α and β are orthogonal, then (α, β) = 0. So Re (α, β) = 0
and we get || α + β ||² = || α ||² + || β ||².
But if || α + β ||² = || α ||² + || β ||², then we get 2 Re (α, β) = 0.
This implies that Re (α, β) = 0.
This does not necessarily imply that (α, β) = 0, i.e., that α and β are
orthogonal. Thus in a complex inner product space if α and β are
orthogonal, then we have || α + β ||² = || α ||² + || β ||². But if we
have || α + β ||² = || α ||² + || β ||², then it is not necessary that α and β
are orthogonal.
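Note 2 can be made concrete with Python's built-in complex numbers: the sketch below (our own illustration, not from the text) exhibits two vectors in C² that are not orthogonal yet satisfy || α + β ||² = || α ||² + || β ||², because their inner product is purely imaginary.

```python
def cinner(u, v):
    # standard inner product on C^n: sum of u_i times conjugate(v_i)
    return sum(x * y.conjugate() for x, y in zip(u, v))

def norm_sq(u):
    # ||u||^2 is always real
    return cinner(u, u).real

alpha = [1 + 0j, 0j]
beta = [1j, 0j]

ip = cinner(alpha, beta)             # purely imaginary, non-zero
lhs = norm_sq([a + b for a, b in zip(alpha, beta)])
rhs = norm_sq(alpha) + norm_sq(beta)
# lhs equals rhs even though alpha and beta are not orthogonal
```

The equality only forces Re (α, β) = 0; here (α, β) = −i ≠ 0.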
Example 4. If α and β are vectors in a real inner product space,
and if || α || = || β ||, then α − β and α + β are orthogonal. Interpret the
result geometrically.
Solution. Let α and β be vectors in a real inner product space
V. Also let || α || = || β ||. We have
(α − β, α + β) = (α, α + β) − (β, α + β)
= (α, α) + (α, β) − (β, α) − (β, β)
= || α ||² + (α, β) − (α, β) − || β ||²   [∵ (β, α) = (α, β)]
= 0.   [∵ || α ||² = || β ||²]
∴ α − β and α + β are orthogonal.
Geometrical Interpretation. Let V be the three dimensional
Euclidean space, i.e., let V be the inner product space V3(R) with
the standard inner product defined on it. Let vectors α and β represent
the sides AB and BC of a parallelogram ABCD. Since the length
of α is equal to the length of β, therefore ABCD is a rhombus.
The vectors α + β and α − β are along the diagonals AC and DB of
the rhombus. Therefore the diagonals of a rhombus intersect at right
angles.
Example 5. If α and β are vectors in a real inner product space,
and if α + β is orthogonal to α − β, then prove that || α || = || β ||.
Interpret the result geometrically.
Solution. We have α + β is orthogonal to α − β
=> (α − β, α + β) = 0
=> (α, α + β) − (β, α + β) = 0
=> (α, α) + (α, β) − (β, α) − (β, β) = 0
=> || α ||² + (α, β) − (α, β) − || β ||² = 0
=> || α || = || β ||.
Geometrical Interpretation. Let V be the three dimensional
Euclidean space, i.e., let V be the inner product space V3(R) with
the standard inner product defined on it. Let vectors α and β represent
the sides AB and BC of a parallelogram ABCD. Then the vectors
α + β and α − β are along the diagonals AC and DB of the parallelogram. If these diagonals are at right angles, then the length of α
is equal to the length of β. So AB = BC and the parallelogram is a
rhombus.

Example 6. Two vectors α and β in a complex inner product
space are orthogonal if and only if || aα + bβ ||² = || aα ||² + || bβ ||² for
all pairs of scalars a and b.   (Meerut 1975)
Solution. Let α and β be any two vectors in a complex inner
product space V. Also let a, b be any two scalars. Writing ā, b̄ for
the complex conjugates of a, b, we have
|| aα + bβ ||² = (aα + bβ, aα + bβ)
= (aα, aα) + (aα, bβ) + (bβ, aα) + (bβ, bβ)
= || aα ||² + || bβ ||² + a b̄ (α, β) + b ā (β, α).   ...(1)
If α and β are orthogonal, then (α, β) = 0 = (β, α).
Therefore from (1), we get || aα + bβ ||² = || aα ||² + || bβ ||² for
all pairs of scalars a and b.
Conversely, suppose that for all pairs of scalars a and b we
have || aα + bβ ||² = || aα ||² + || bβ ||². Then from (1), for all pairs of
scalars a and b, we get
a b̄ (α, β) + b ā (β, α) = 0.   ...(2)
Take a = 1, b = 1. Then (2) gives
(α, β) + (β, α) = 0
=> 2 Re (α, β) = 0
=> Re (α, β) = 0.
Again take a = i, b = 1. Then (2) gives
i (α, β) − i (β, α) = 0
=> (α, β) − (β, α) = 0   [∵ i ≠ 0]
=> 2i Im (α, β) = 0   [∵ z − z̄ = 2i Im z]
=> Im (α, β) = 0.
Thus we have Re (α, β) = 0, Im (α, β) = 0. Therefore (α, β) = 0
and thus α and β are orthogonal.
Example 7. If V is an inner product space, then prove that
(i) {0}⊥ = V, (ii) V⊥ = {0}.
Solution. (i) We shall show that V ⊆ {0}⊥. Let α ∈ V.
Since (α, 0) = 0, therefore α ∈ {0}⊥. Thus α ∈ V => α ∈ {0}⊥.
Therefore V ⊆ {0}⊥. But {0}⊥ ⊆ V. Hence {0}⊥ = V.
(ii) Let α ∈ V⊥. Then by def. of V⊥, we have (α, β) = 0
∀ β ∈ V. Taking β = α, we get (α, α) = 0, which implies α = 0.
Thus α ∈ V⊥ => α = 0. Therefore V⊥ = {0}.
Example 8. If V is an inner product space and S, S1, S2 are
subsets of V, then
(i) S1 ⊆ S2 => S2⊥ ⊆ S1⊥
(ii) S⊥ = [L(S)]⊥
(iii) L(S) ⊆ S⊥⊥
(iv) L(S) = S⊥⊥ if V is finite dimensional.
Solution. (i) Let S1 ⊆ S2. We have
α ∈ S2⊥ => α is orthogonal to every vector in S2
=> α is orthogonal to every vector in S1, because S1 ⊆ S2
=> α ∈ S1⊥.
∴ S2⊥ ⊆ S1⊥.
(ii) We have S ⊆ L(S).
∴ [L(S)]⊥ ⊆ S⊥.   [From (i)]
Now let α ∈ S⊥. Then α is orthogonal to every vector in S.
Let β be any vector in L(S). Then β is a linear combination of a
finite number of vectors in S. Let β = Σᵢ₌₁ⁿ aᵢαᵢ, where each αᵢ ∈ S.
We have (α, β) = (α, Σᵢ aᵢαᵢ) = Σᵢ āᵢ (α, αᵢ)
= 0, since α is orthogonal to each αᵢ.
Thus α is orthogonal to every vector β in L(S). Therefore
α ∈ [L(S)]⊥.
∴ S⊥ ⊆ [L(S)]⊥.
Hence S⊥ = [L(S)]⊥.
(iii) Let α ∈ L(S). If β is any vector in S⊥, then β is orthogonal to every vector in S. Consequently β is orthogonal to α,
which is nothing but a linear combination of a finite number of
vectors in S. Thus
α ∈ L(S) => α is orthogonal to every vector β in S⊥
=> α ∈ S⊥⊥.
∴ L(S) ⊆ S⊥⊥.
(iv) We have
S⊥ = [L(S)]⊥   [as proved in (ii)]
=> (S⊥)⊥ = ([L(S)]⊥)⊥
=> S⊥⊥ = [L(S)]⊥⊥
=> S⊥⊥ = L(S).   [∵ L(S) is a subspace of V. If V is
finite dimensional and W is a subspace
of V, then W⊥⊥ = W]
Example 9. If S is a subset of an inner product space V, then
prove that S⊥ = S⊥⊥⊥.
Solution. We know that S ⊆ S⊥⊥. Taking S⊥ in place of
S, we see that
S⊥ ⊆ (S⊥)⊥⊥,
i.e., S⊥ ⊆ S⊥⊥⊥.   ...(1)
Also S ⊆ S⊥⊥
=> (S⊥⊥)⊥ ⊆ S⊥   [∵ S1 ⊆ S2 => S2⊥ ⊆ S1⊥]
=> S⊥⊥⊥ ⊆ S⊥.   ...(2)
From (1) and (2), we get S⊥ = S⊥⊥⊥.

Example 10. Let V be a finite-dimensional inner product space
of dimension n. If {α1, ..., αm} is an orthonormal set in V, prove
that there exist vectors αm+1, ..., αn such that {α1, ..., αn} is
an orthonormal basis for V.
Solution. Let B = {α1, ..., αm}. If m = n, then B is a complete
orthonormal set in V. Therefore B will form an orthonormal basis
for V. If m < n, then B is not a complete orthonormal set and
so B can be enlarged by adding one more vector to it so that the
resulting set is also an orthonormal set. This process can be continued till B becomes a complete orthonormal set, and it will
happen only when the number of vectors in B becomes n.
Thus if m < n, then we can find vectors αm+1, ..., αn such that
{α1, ..., αn} is a complete orthonormal set in V and
so is a basis for V.
Example 11. Let W be a subspace of an inner product space V.
If {α1, ..., αn} is a basis for W, then
β ∈ W⊥ if and only if (β, αi) = 0 ∀ i = 1, 2, ..., n.
Solution. Suppose β ∈ W⊥. Then by definition of W⊥, we
have (β, α) = 0 ∀ α ∈ W. Since each αi ∈ W, therefore we must
have (β, αi) = 0 ∀ i = 1, 2, ..., n.
Conversely suppose that (β, αi) = 0 ∀ i = 1, ..., n. Then to
prove that β ∈ W⊥. Let α be any vector in W. Then α can be
expressed as a linear combination of the vectors belonging to
the basis {α1, ..., αn} of W. Therefore we can write α = Σ ciαi.
Now
(β, α) = (β, Σ ciαi) = (β, c1α1 + ... + cnαn)
= 0, since (β, αi) = 0 ∀ i.
Thus (β, α) = 0 ∀ α ∈ W. Therefore β ∈ W⊥.
Example 12. Let W be a finite dimensional proper subspace
of an inner product space V. Let α ∈ V and α ∉ W. Show that
there is a vector β ∈ W such that α − β ⊥ W.   (Meerut 1979)
Solution. We know that every finite dimensional inner product
space possesses an orthonormal basis. Here W is a finite dimensional
inner product space. So let {α1, ..., αn} be an orthonormal basis of
W. Consider the vector
β = (α, α1) α1 + ... + (α, αn) αn = Σᵢ₌₁ⁿ (α, αi) αi.
Since β is a linear combination of the vectors belonging to a
basis of W, therefore β ∈ W. We shall show that α − β ⊥ W.
We have, for each k where 1 ≤ k ≤ n,
(α − β, αk) = (α − Σᵢ (α, αi) αi, αk)
= (α, αk) − (Σᵢ (α, αi) αi, αk)   [by linearity of inner product]
= (α, αk) − Σᵢ (α, αi) (αi, αk)   [by linearity of inner product]
= (α, αk) − Σᵢ (α, αi) δik
[∵ αi, αk belong to an orthonormal basis]
= (α, αk) − (α, αk)   [∵ δik = 1 if i = k and δik = 0 if i ≠ k]
= 0.
Thus α − β is orthogonal to every vector belonging to a basis
of W. Hence α − β ⊥ W. Thus we have found a vector β in W such
that α − β ⊥ W.
Example 13. Let V be a finite-dimensional inner product space,
and let {α1, ..., αn} be an orthonormal basis for V. Show that for any
vectors α, β in V
(α, β) = Σₖ₌₁ⁿ (α, αk) (αk, β).   (Meerut 1979)
Solution. Here {α1, ..., αn} is an orthonormal basis for V. Since
α, β ∈ V, therefore we have
α = Σᵢ₌₁ⁿ (α, αi) αi, and β = Σⱼ₌₁ⁿ (β, αj) αj.
[Refer theorem 3 on page 306]
Now (α, β) = (Σᵢ (α, αi) αi, Σⱼ (β, αj) αj)
= Σᵢ Σⱼ (α, αi) (αj, β) (αi, αj)
[∵ the scalar (β, αj) comes out of the second slot as its
conjugate (αj, β)]
= Σᵢ Σⱼ (α, αi) (αj, β) δij
= Σᵢ (α, αi) (αi, β)
[On summing with respect to j. We remember that δij = 1 if
j = i and δij = 0 if j ≠ i]
= Σₖ₌₁ⁿ (α, αk) (αk, β).
Example 14. If A = {α1, ..., αm} is an orthonormal basis for a subspace W of a finite dimensional inner product space V and
B = {β1, ..., βt} is an orthonormal basis for W⊥, then prove that
S = A ∪ B is an orthonormal basis for V.
Solution. First we shall prove that the set S is an orthonormal
set. Obviously each vector in S is a unit vector. So it remains to
prove that any two distinct vectors in S are orthogonal.
Now (αi, αj) = 0 ∀ i, j = 1, ..., m, i ≠ j.   [∵ A is orthonormal]
Similarly (βi, βj) = 0 ∀ i = 1, ..., t, j = 1, ..., t, i ≠ j.
Lastly we are to verify that (αi, βj) = 0 ∀ i = 1, ..., m and
j = 1, ..., t. But this is true since αi ∈ W and βj ∈ W⊥.
Hence the set S is an orthogonal set. Therefore it is a linearly
independent set. So S will be a basis for V if L(S) = V.
Let α ∈ V. Since V = W ⊕ W⊥, therefore we can write α = γ + δ
where γ ∈ W and δ ∈ W⊥. Now γ ∈ W can be expressed as a linear
combination of the vectors belonging to the basis A of W. Similarly
δ ∈ W⊥ can be expressed as a linear combination of the vectors
belonging to the basis B of W⊥. Therefore α can be expressed as a
linear combination of the vectors belonging to A ∪ B, i.e., belonging
to S. Therefore L(S) = V.
Hence S is a basis for V.
Example 15. If W1 and W2 are subspaces of a finite-dimensional
inner product space V, then
(i) (W1 + W2)⊥ = W1⊥ ∩ W2⊥   (Meerut 1989)
(ii) (W1 ∩ W2)⊥ = W1⊥ + W2⊥.   (Meerut 1969, 75, 89)
Solution. (i) We have W1 ⊆ W1 + W2.
∴ (W1 + W2)⊥ ⊆ W1⊥.   ...(1)
Also W2 ⊆ W1 + W2.
∴ (W1 + W2)⊥ ⊆ W2⊥.   ...(2)
From (1) and (2), we conclude that
(W1 + W2)⊥ ⊆ W1⊥ ∩ W2⊥.   ...(3)
Now we shall show that W1⊥ ∩ W2⊥ ⊆ (W1 + W2)⊥.
Let α ∈ W1⊥ ∩ W2⊥. Then α ∈ W1⊥ and α ∈ W2⊥. Therefore α
is orthogonal to every vector in W1 and also to every vector in W2.
Let β be any vector in W1 + W2. Then we can write β = γ1 + γ2
where γ1 ∈ W1, γ2 ∈ W2.
We have (α, β) = (α, γ1 + γ2) = (α, γ1) + (α, γ2)
= 0 + 0 = 0.
Therefore α is orthogonal to every vector β in W1 + W2. So
α ∈ (W1 + W2)⊥. Thus α ∈ W1⊥ ∩ W2⊥ => α ∈ (W1 + W2)⊥.
∴ W1⊥ ∩ W2⊥ ⊆ (W1 + W2)⊥.   ...(4)
From (3) and (4), we have
(W1 + W2)⊥ = W1⊥ ∩ W2⊥.
(ii) W1⊥ and W2⊥ are also subspaces of V. Taking W1⊥ in
place of W1 and W2⊥ in place of W2 in the result (i), we get
(W1⊥ + W2⊥)⊥ = W1⊥⊥ ∩ W2⊥⊥ = W1 ∩ W2
[∵ V is finite dimensional and so W1⊥⊥ = W1, etc.]
∴ W1⊥ + W2⊥ = (W1 ∩ W2)⊥.
Example 16. If W1, ..., Wk are pairwise orthogonal subspaces in
an inner product space V, and if α = α1 + ... + αk with αi in Wi for
i = 1, ..., k, then || α ||² = || α1 ||² + ... + || αk ||².
Solution. We have
|| α ||² = (α, α) = (α1 + ... + αk, α1 + ... + αk)
= Σᵢ Σⱼ (αi, αj)
= Σᵢ (αi, αi)   [On summing with respect to j and remembering
that αj is orthogonal to αi if j ≠ i]
= || α1 ||² + ... + || αk ||².
Example 17. Find a vector of unit length which is orthogonal to the
vector α = (2, −1, 6) of V3(R) with respect to the standard inner product.
Solution. Let β = (x, y, z) be the required vector, so that
α·β = (2, −1, 6)·(x, y, z) = 2x − y + 6z = 0.
Any solution of this equation, for example,
β = (2, −2, −1),
gives a vector orthogonal to α. But
|| β || = [2² + (−2)² + (−1)²]^(1/2) = 3.
Hence the vector (1/3)β = (2/3, −2/3, −1/3) has length 1 and is orthogonal to α.
Example 18. Find two mutually orthogonal vectors each of which
is orthogonal to the vector α = (4, 2, 3) of V3(R) with respect to the standard inner product.
Solution. Let β = (x1, x2, x3) be any vector orthogonal to the
vector (4, 2, 3). Then 4x1 + 2x2 + 3x3 = 0.
Obviously β = (3, −3, −2) is a solution of this equation. We
now require a third vector γ = (y1, y2, y3) orthogonal to both α and
β. This means γ must be a solution vector of the system of equations
4y1 + 2y2 + 3y3 = 0, 3y1 − 3y2 − 2y3 = 0.
Obviously γ = (5, 17, −18) is a solution of these equations.
Thus β and γ are orthogonal to each other and to α. The solution
is, of course, by no means unique.
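In R³, the second step of example 18 — finding a vector orthogonal to two given vectors — is exactly what the cross product produces. A quick Python check (our own illustration, not part of the text):

```python
def cross(u, v):
    # cross product of two vectors in R^3; the result is orthogonal to both
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

alpha = [4, 2, 3]
beta = [3, -3, -2]
gamma = cross(alpha, beta)   # one solution of the pair of equations above
```

Here `gamma` comes out as (5, 17, −18), the very solution quoted in the example; any non-zero scalar multiple would serve equally well.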
Example 19. Let V3(R) be the inner product space with respect
to the standard inner product and let W be the subspace of V3(R)
spanned by the vector α = (2, −1, 6). Find the projections of the
vector β = (4, 1, 2) on W and W⊥.
Solution. Let β = α1 + α2 where α1 ∈ W and α2 ∈ W⊥. But every
vector of W is a scalar multiple of α. So let α1 = kα. Then
β = kα + α2.   ...(1)
Since α2 ∈ W⊥ and α ∈ W, therefore α2·α = 0. From (1), we get
β·α = k (α·α) + α2·α = k (α·α).
But β·α = (4, 1, 2)·(2, −1, 6) = 8 − 1 + 12 = 19
and α·α = (2, −1, 6)·(2, −1, 6) = 4 + 1 + 36 = 41.
∴ 19 = 41k, or k = 19/41.
∴ α1 = (19/41) (2, −1, 6) = (38/41, −19/41, 114/41).
Again, from (1),
α2 = β − α1 = (4, 1, 2) − (38/41, −19/41, 114/41) = (126/41, 60/41, −32/41).
Exercises
1. Let V3(R) be the inner product space relative to the standard
inner product. Then find
(a) two linearly independent vectors each of which is orthogonal to the vector (1, 1, 2).
(b) two mutually orthogonal vectors, each of which is orthogonal to (5, 2, −1).
(c) two mutually orthogonal unit vectors, each of which is
orthogonal to (2, −1, 3).
(d) the projections of the vector (3, 4, 1) onto the space spanned by (1, 1, 1) and on its orthogonal complement.
2. Verify that the vectors (2/3, −2/3, −1/3), (2/3, 1/3, 2/3) and
(1/3, 2/3, −2/3) form an orthonormal basis for V3(R) relative
to the standard inner product.
3. Given the basis (2, 0, 1), (3, −1, 5), and (0, 4, 2) for V3(R),
construct from it by the Gram-Schmidt process an orthonormal basis relative to the standard inner product.
4. Given the basis (1, 0, 0), (1, 1, 0), (1, 1, 1) for V3(R), construct
from it by the Gram-Schmidt process an orthonormal basis
relative to the standard inner product.
5. Let P be the vector space over the field R consisting of all
polynomials in x of degree ≤ 2 with real coefficients. Define
an inner product on P by (f, g) = ∫₀¹ f(x) g(x) dx.
(a) Verify that this does define an inner product.
(b) Apply the Gram-Schmidt process to the basis 1, x, x² of
P to obtain an orthonormal basis, relative to this inner
product.
6. If W is a finite-dimensional subspace of an inner product space
V, then show that V = W ⊕ W⊥. Hence show that for an
orthogonal set {α1, ..., αn} of non-zero vectors in V and for any
arbitrary β in V,
Σₖ₌₁ⁿ | (β, αk) |² / || αk ||² ≤ || β ||².
7. Prove that Σₖ₌₁ⁿ | (β, αk) |² / || αk ||² = || β ||² if and only if
β = Σₖ₌₁ⁿ [(β, αk) / || αk ||²] αk.   (Meerut 1979)
Answers
1. (a), (b), (c). Check yourself.
(d) (8/3, 8/3, 8/3) and (1/3, 4/3, −5/3).
3. (1/√5)(2, 0, 1), (1/√270)(−7, −5, 14), (1/(3√6))(−1, 7, 2).
4. (1, 0, 0), (0, 1, 0), (0, 0, 1).
5. (b) 1, √3(2x − 1), √5(6x² − 6x + 1).
§ 4. Linear functionals and adjoints.
Theorem 1. Let V be a finite dimensional inner product space,
and f a linear functional on V. Then there exists a unique vector β
in V such that f(α) = (α, β) for all α in V.   (Meerut 1974, 79, 83, 85)
Proof. Suppose B = {α1, α2, ..., αn} is an orthonormal basis for
V and f is a linear functional on V. Let cj = f(αj) and let
β = Σⱼ₌₁ⁿ c̄j αj,   ...(1)
where c̄j denotes the complex conjugate of cj.
Then β is a vector in V. Let g be a function from V to F
defined by
g(α) = (α, β) ∀ α ∈ V.   ...(2)
We claim that g is a linear functional on V. If a, b ∈ F and
γ1, γ2 ∈ V, then we have
g (aγ1 + bγ2) = (aγ1 + bγ2, β)   [from (2)]
= a (γ1, β) + b (γ2, β)
= a g(γ1) + b g(γ2).   [from (2)]
Thus g is a linear functional on V.
Now we shall show that g = f. If 1 ≤ k ≤ n, then
g (αk) = (αk, β) = (αk, Σⱼ c̄j αj)   [putting β from (1)]
= Σⱼ cj (αk, αj)
= ck = f(αk)
[On summing with respect to j. Remember that
(αk, αj) = 1 if j = k and (αk, αj) = 0 if j ≠ k.]
Thus g and f agree on a basis for V. Therefore g = f. Therefore corresponding to a linear functional f on V there exists a vector β in V such that f(α) = (α, β) ∀ α ∈ V.
Now to show that β is unique.
Let γ be a vector in V such that f(α) = (α, γ) ∀ α ∈ V.
Then (α, β) = (α, γ) ∀ α ∈ V
=> (α, β) − (α, γ) = 0 ∀ α ∈ V
=> (α, β − γ) = 0 ∀ α ∈ V
=> (β − γ, β − γ) = 0   [taking α = β − γ]
=> β − γ = 0
=> β = γ.
Thus β is unique. Hence the theorem.
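Theorem 1's formula β = Σⱼ c̄j αj with cj = f(αj) is constructive. The sketch below (our own illustration) builds β for a linear functional on C², given in the standard orthonormal basis by made-up coefficients w, and checks that f(α) = (α, β).

```python
def cinner(u, v):
    # standard inner product on C^n
    return sum(x * y.conjugate() for x, y in zip(u, v))

# a linear functional on C^2, with arbitrary (made-up) coefficients
w = [2 + 1j, -3j]
def f(alpha):
    return w[0] * alpha[0] + w[1] * alpha[1]

# representing vector from theorem 1, using the standard orthonormal basis
basis = [[1 + 0j, 0j], [0j, 1 + 0j]]
beta = [0j, 0j]
for e in basis:
    c = f(e).conjugate()               # conjugate of f(alpha_j)
    beta = [b + c * x for b, x in zip(beta, e)]

test_alpha = [1 - 2j, 4 + 0j]
lhs = f(test_alpha)
rhs = cinner(test_alpha, beta)
```

Here `beta` works out to (2 − i, 3i), and `lhs` equals `rhs`, confirming f(α) = (α, β) for this α.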
Theorem 2. For any linear operator T on a finite-dimensional
inner product space V, there exists a unique linear operator T* on V
such that
(Tα, β) = (α, T*β) for all α, β in V.
(Meerut 1973, 74, 78, 81, 83P, 88, 91, 93P)
Proof. Let T be a linear operator on a finite-dimensional
inner product space V over the field F. Let β be a vector in V.
Let f be a function from V into F defined by
f(α) = (Tα, β) ∀ α ∈ V.   ...(1)
Here Tα stands for T(α). We claim that f is a linear functional
on V. Let a, b ∈ F and α1, α2 ∈ V. Then
f(aα1 + bα2) = (T(aα1 + bα2), β)   [from (1)]
= (aTα1 + bTα2, β)   [∵ T is linear]
= a (Tα1, β) + b (Tα2, β)
= a f(α1) + b f(α2).   [from (1)]
Thus f is a linear functional on V. Therefore by theorem 1,
there exists a unique vector β′ in V such that
f(α) = (α, β′) ∀ α ∈ V.   ...(2)
From (1) and (2) we see that if T is a linear operator on V,
then corresponding to every vector β in V there is a uniquely
determined vector β′ in V such that
(Tα, β) = (α, β′) ∀ α ∈ V.
Let us denote by T* the rule which associates β′ with β, i.e., let
T*β = β′. Then T* is a function from V into V and is such that
(Tα, β) = (α, T*β) for all α, β in V.   ...(3)
Now we shall show that T* is a linear operator on V. Let
a, b ∈ F and β1, β2 ∈ V. Then for every α in V, we have
(α, T* (aβ1 + bβ2)) = (Tα, aβ1 + bβ2)   [from (3)]
= ā (Tα, β1) + b̄ (Tα, β2)
= ā (α, T*β1) + b̄ (α, T*β2)   [from (3)]
= (α, aT*β1) + (α, bT*β2)
= (α, aT*β1 + bT*β2).
∴ T* (aβ1 + bβ2) = aT*β1 + bT*β2.   [Note that in an inner
product space V if (α, β) = (α, γ) for every α in V, then β = γ]
Thus T* is a linear operator on V. Therefore corresponding
to a linear operator T on V there exists a linear operator T* on V
such that
(Tα, β) = (α, T*β) for all α, β in V.
Now to show that T* is unique. Let S be a linear operator
on V such that
(Tα, β) = (α, Sβ) for all α, β in V.
Then (α, T*β) = (α, Sβ) for all α, β in V
=> T*β = Sβ for every β in V
=> T* = S.
Therefore T* is unique.
Hence the theorem.

Adjoint. Definition. Let T be a linear operator on an inner
product space V (finite-dimensional or not). We say that T has an
adjoint T* if there exists a linear operator T* on V such that
(Tα, β) = (α, T*β) for all α, β in V.
In theorem 2, we have proved that every linear operator on a
finite-dimensional inner product space possesses an adjoint. But it
should be noted that if V is not finite-dimensional then some linear
operators on V may possess an adjoint while others may not.
In any case if T possesses an adjoint T*, then it must be unique, as
we have proved in the last part of theorem 2. Also mark that the
adjoint of T depends not only upon T but also on the inner product on V.

Theorem 3. Let V be a finite-dimensional inner product space
and let B = {α1, ..., αn} be an ordered orthonormal basis for V. Let
T be a linear operator on V and let A = [aij]n×n be the matrix of T
with respect to the ordered basis B. Then aij = (Tαj, αi).
Proof. Since B is an orthonormal basis for V, therefore if β
is any vector in V, then
β = Σᵢ₌₁ⁿ (β, αi) αi.
Taking Tαj in place of β, we have
Tαj = Σᵢ₌₁ⁿ (Tαj, αi) αi,  j = 1, 2, ..., n.   ...(1)
Now if A = [aij]n×n is the matrix of T in the ordered basis B,
then we have
Tαj = Σᵢ₌₁ⁿ aij αi,  j = 1, ..., n.   ...(2)
Since the expression for Tαj as a linear combination of the vectors
in B is unique, therefore from (1) and (2) we get
aij = (Tαj, αi),  i = 1, ..., n and j = 1, ..., n.
Corollary. Let V be a finite-dimensional inner product space
and let T be a linear operator on V. In any orthonormal basis for
V, the matrix of T* is the conjugate transpose of the matrix of T.
Proof. Let B = {α1, ..., αn} be an orthonormal basis for V.
Let A = [aij]n×n be the matrix of T in the ordered basis B. Then
aij = (Tαj, αi).   ...(1)
Now T* is also a linear operator on V. Let C = [cij]n×n be the
matrix of T* in the ordered basis B. Then
cij = (T*αj, αi).   ...(2)
We have
cij = (T*αj, αi)
= conjugate of (αi, T*αj)   [∵ (α, β) is the conjugate of (β, α)]
= conjugate of (Tαi, αj)   [by def. of T*]
= āji.   [from (1)]
∴ C = [āji]n×n. Hence C = A*, where A* is the conjugate
transpose of the matrix A.
Note. It should be marked that in this corollary the basis B
is an orthonormal basis and not an ordinary basis.
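In matrix terms the corollary says: in an orthonormal basis, the matrix of T* is obtained by transposing and conjugating the matrix of T. A minimal check in C² (our own illustration, with an arbitrarily chosen matrix), verifying (Tα, β) = (α, T*β):

```python
def cinner(u, v):
    # standard inner product on C^n
    return sum(x * y.conjugate() for x, y in zip(u, v))

def apply(m, v):
    # multiply a matrix (given as a list of rows) by a column vector
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def conj_transpose(m):
    n = len(m)
    return [[m[j][i].conjugate() for j in range(n)] for i in range(n)]

A = [[1 + 2j, 3j], [4 + 0j, 5 - 1j]]   # matrix of T in the standard basis
A_star = conj_transpose(A)             # matrix of T*

alpha = [1 - 1j, 2 + 0j]
beta = [3j, 1 + 1j]
lhs = cinner(apply(A, alpha), beta)    # (T alpha, beta)
rhs = cinner(alpha, apply(A_star, beta))  # (alpha, T* beta)
```

Both sides agree for every choice of α and β; the test vectors here are arbitrary.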
Theorem 4. Suppose S and T are linear operators on an inner
product space V and c is a scalar. If S and T possess adjoints, the
operators S + T, cT, ST, T* will also possess adjoints. Also we have
(i) (S + T)* = S* + T*   (Meerut 1972, 76, 79, 87)
(ii) (cT)* = c̄T*   (Meerut 1970, 71, 76, 87)
(iii) (ST)* = T*S*   (Meerut 1972, 78, 79, 87, 91)
(iv) (T*)* = T.   (Meerut 1970, 87)
Proof. (i) Since S and T are linear operators on V, therefore
S + T is also a linear operator on V. For every α, β in V, we have
((S + T) α, β) = (Sα + Tα, β) = (Sα, β) + (Tα, β)
= (α, S*β) + (α, T*β)   [by def. of adjoint]
= (α, S*β + T*β) = (α, (S* + T*) β).
Thus for the linear operator S + T on V there exists a linear
operator S* + T* on V such that
((S + T) α, β) = (α, (S* + T*) β) for all α, β in V.
Therefore the linear operator S + T has an adjoint. By the
definition and by the uniqueness of adjoint, we get
(S + T)* = S* + T*.
(ii) Since T is a linear operator on V, therefore cT is also a
linear operator on V. For every α, β in V, we have
((cT) α, β) = (cTα, β) = c (Tα, β) = c (α, T*β)
= (α, c̄T*β) = (α, (c̄T*) β).
Thus for the linear operator cT on V there exists a linear
operator c̄T* on V such that
((cT) α, β) = (α, (c̄T*) β) for all α, β in V.
Therefore the linear operator cT possesses an adjoint. By the
definition and by the uniqueness of adjoint, we get
(cT)* = c̄T*.
Inner Product Spaces 335

U’Ji) 57’is a linear operator on V. For every a, jS in F,


we have
((57’) «,i8)=^(S7a. )5) [by the def. of product of two
operators]
=(7'a, S*i8) (by def, of adjoint]
=(a,r*5*i3) [by def. of adjoint]
«(«.(7'*5*)i9).
Thus for the linear operator ST on V, there exists a linear
operator 7*5'* on F such that
((57) a,j8)=(a,(7*5*)jS) for all «. j8 in F.
Therefore the linear operator 57 has an adjoint. By the defini
tion and by the uniqueness of adjoint, we get
(57)*=7*5*.
(iv) The adjoint of T, i.e., T*, is a linear operator on V. For
every α, β in V, we have
(T*α, β) = conj (β, T*α)      [∵ (α, β) = conj (β, α)]
= conj (Tβ, α)                [by def. of adjoint]
= (α, Tβ).                    [∵ (α, β) = conj (β, α)]
Thus for the linear operator T* on V, there exists a linear
operator T on V such that
(T*α, β) = (α, Tβ) for all α, β in V.
Therefore the linear operator T* has an adjoint. By the defini-
tion and by the uniqueness of adjoint, we have
(T*)* = T.
Note 1. If in the above theorem the vector space V is finite-
dimensional, then the results hold for arbitrary linear
operators S and T, since in a finite-dimensional inner product space
each linear operator possesses an adjoint.
Note 2. The operation of taking adjoints behaves like the operation
of conjugation on complex numbers.
Self-adjoint transformation. Definition. A linear operator T
on an inner product space V is said to be self-adjoint if
T* = T.
A self-adjoint linear operator on a real inner product space is
called symmetric, while a self-adjoint linear operator on a complex
inner product space is called Hermitian.
The zero operator 0̂ and the identity operator I on any inner
product space V are self-adjoint operators. For every α, β in V,
we have
(0̂α, β) = (0, β) = 0 = (α, 0) = (α, 0̂β).
∴ 0̂* = 0̂.                                    (Meerut 1976)
Similarly, for every α, β in V, we have
(Iα, β) = (α, β) = (α, Iβ).
∴ I* = I.                                     (Meerut 1976)
Skew-symmetric or skew-Hermitian operators. Definition. If
a linear operator T on an inner product space V is such that
T* = -T,
then T is called skew-symmetric or skew-Hermitian according as the
vector space V is real or complex.
Theorem 5. Every linear operator T on a finite-dimensional
complex inner product space V can be uniquely expressed as
T = T₁ + iT₂
where T₁ and T₂ are self-adjoint linear operators on V.
Proof. Let T₁ = ½(T + T*) and T₂ = (1/2i)(T - T*).
Then T = T₁ + iT₂.                            ...(1)
Now T₁* = [½(T + T*)]* = ½[T* + (T*)*]
= ½[T* + T] = ½(T + T*) = T₁.
∴ T₁ is self-adjoint.
Also T₂* = [(1/2i)(T - T*)]* = conj (1/2i) [T* - (T*)*]
= (-1/2i)(T* - T) = (1/2i)(T - T*) = T₂.
∴ T₂ is self-adjoint.
Thus T can be expressed in the form (1) where T₁ and T₂ are
both self-adjoint operators.
To show that the expression (1) for T is unique:
Let T = U₁ + iU₂ where U₁ and U₂ are both self-adjoint.
We have T* = (U₁ + iU₂)* = U₁* + (iU₂)*
= U₁* + conj(i) U₂* = U₁* - iU₂*
= U₁ - iU₂.                   [∵ U₁ and U₂ are self-adjoint]
∴ T + T* = (U₁ + iU₂) + (U₁ - iU₂) = 2U₁.
This gives U₁ = ½(T + T*) = T₁.
Also T - T* = (U₁ + iU₂) - (U₁ - iU₂) = 2iU₂.
This gives U₂ = (1/2i)(T - T*) = T₂.
Hence the expression (1) for T is unique.
Note. If T is a linear operator on a complex inner product
space V which is not finite-dimensional, then the above result is
still true provided it is given that T possesses an adjoint. Also, in
the resolution T = T₁ + iT₂, T₁ is called the real part of T and T₂ is
called the imaginary part of T.
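The matrix analogue of this decomposition is easy to verify directly. In the Python sketch below, A is an arbitrary complex matrix chosen for illustration (not taken from the text); H1 = (A + A*)/2 and H2 = (A − A*)/2i are its Hermitian "real" and "imaginary" parts:

```python
def conj_transpose(M):
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def is_hermitian(M, tol=1e-12):
    Mh = conj_transpose(M)
    return all(abs(M[i][j] - Mh[i][j]) < tol
               for i in range(len(M)) for j in range(len(M)))

A = [[1 + 2j, 3],
     [4j, 5 - 1j]]           # arbitrary complex matrix
A_star = conj_transpose(A)

# the two Hermitian parts, as in Theorem 5
H1 = [[(A[i][j] + A_star[i][j]) / 2 for j in range(2)] for i in range(2)]
H2 = [[(A[i][j] - A_star[i][j]) / (2j) for j in range(2)] for i in range(2)]

assert is_hermitian(H1) and is_hermitian(H2)

# recomposing H1 + i*H2 recovers A exactly
recomposed = [[H1[i][j] + 1j * H2[i][j] for j in range(2)] for i in range(2)]
assert all(abs(recomposed[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

The uniqueness in the theorem corresponds to the fact that H1 and H2 are forced to be exactly these two combinations of A and A*.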
Theorem 6. Every linear operator T on a finite-dimensional
inner product space V can be uniquely expressed as
T = T₁ + T₂
where T₁ is self-adjoint and T₂ is skew.
Proof. Let T₁ = ½(T + T*) and T₂ = ½(T - T*).
Then T = T₁ + T₂.                             ...(1)
Now T₁* = [½(T + T*)]* = ½[T* + (T*)*] = ½(T* + T) = T₁.
∴ T₁ is self-adjoint.
Also T₂* = [½(T - T*)]* = ½[T* - (T*)*] = ½(T* - T)
= -½(T - T*) = -T₂.
∴ T₂ is skew.
Thus T can be expressed in the form (1) where T₁ is self-
adjoint and T₂ is skew.
Now to show that the expression (1) for T is unique. Let
T = U₁ + U₂,
where U₁ is self-adjoint and U₂ is skew.
Then T* = (U₁ + U₂)* = U₁* + U₂*
= U₁ - U₂.          [∵ U₁ is self-adjoint and U₂ is skew]
∴ ½(T + T*) = U₁ = T₁
and ½(T - T*) = U₂ = T₂.
Hence the expression (1) for T is unique.
Note. If T is a linear operator on an inner product space V
which is not finite-dimensional, then the above result is still
true provided it is given that T possesses an adjoint.
Theorem 7. A necessary and sufficient condition that a linear
transformation T on an inner product space V be 0̂ is that (Tα, β) = 0
for all α and β in V.
Proof. Let T = 0̂. Then for all α and β, we have
(Tα, β) = (0̂α, β) = (0, β) = 0.
Hence the condition is necessary.
Conversely, let (Tα, β) = 0 for all α and β.
Taking β = Tα, we get
(Tα, Tα) = 0 ∀ α
⇒ Tα = 0 ∀ α                  [∵ (α, α) = 0 ⇒ α = 0]
⇒ T = 0̂.
Hence the condition is sufficient.
Theorem 8. A necessary and sufficient condition that a linear
transformation T on a unitary space (complex inner product space) V
be 0̂ is that (Tα, α) = 0 for all α in V.
Proof. Let V be a complex inner product space.
Let T = 0̂. Then for all α in V, we have
(Tα, α) = (0̂α, α) = (0, α) = 0.
Hence the condition is necessary.
Conversely, let (Tα, α) = 0 for all α in V.
Then for every α, β in V, we have
(T(α+β), α+β) = 0             [taking α+β in place of α]
⇒ (Tα + Tβ, α+β) = 0
⇒ (Tα, α) + (Tα, β) + (Tβ, α) + (Tβ, β) = 0
⇒ (Tα, β) + (Tβ, α) = 0.
                   [∵ (Tα, α) = 0 and (Tβ, β) = 0]
Thus for every α, β in V, we have
(Tα, β) + (Tβ, α) = 0.                        ...(1)
Since the result (1) is true for every β in V, therefore taking
iβ in place of β, we get
(Tα, iβ) + (T(iβ), α) = 0
⇒ -i(Tα, β) + (iTβ, α) = 0
⇒ -i(Tα, β) + i(Tβ, α) = 0
⇒ -i[(Tα, β) - (Tβ, α)] = 0
⇒ (Tα, β) - (Tβ, α) = 0.       [∵ i ≠ 0]      ...(2)
Adding (1) and (2), we get
2(Tα, β) = 0
⇒ (Tα, β) = 0 ∀ α, β in V
⇒ (Tα, Tα) = 0 ∀ α ∈ V         [taking β = Tα]
⇒ Tα = 0 ∀ α ∈ V
⇒ T = 0̂.
Hence the condition is sufficient.
Note. If V is a real inner product space, then the above
theorem may fail. For example, consider the vector space V₂(R)
with the standard inner product defined on it. Let T be the linear
operator on V₂(R) defined as
T(a, b) = (b, -a) ∀ (a, b) ∈ V₂(R).
Then T ≠ 0̂. But
(T(a, b), (a, b)) = ((b, -a), (a, b))      [by def. of T]
= ba - ab                      [by def. of the standard inner
                                product in V₂(R)]
= 0.
Thus (Tα, α) = 0 ∀ α ∈ V₂(R) and yet T ≠ 0̂.
However, if T is self-adjoint, then the above theorem is true
for real inner product spaces also. Thus we have the following
theorem.
Theorem 9. A necessary and sufficient condition that a self-
adjoint linear transformation T on an inner product space V be 0̂ is
that (Tα, α) = 0 for all α in V.
Proof. T is a self-adjoint linear operator on an inner product
space V, i.e., T* = T.
Let T = 0̂. Then for all α in V, we have
(Tα, α) = (0̂α, α) = (0, α) = 0.
Hence the condition is necessary.
Conversely, let (Tα, α) = 0 for all α in V.
Then for every α, β in V, we have
(T(α+β), α+β) = 0
⇒ (Tα + Tβ, α+β) = 0
⇒ (Tα, α) + (Tα, β) + (Tβ, α) + (Tβ, β) = 0
⇒ (Tα, β) + (Tβ, α) = 0
⇒ (Tα, β) + (β, T*α) = 0       [∵ (Tβ, α) = (β, T*α)]
⇒ (Tα, β) + (β, Tα) = 0.       [∵ T = T*]     ...(1)
Now two cases arise.
Case I. V is a real vector space.
In this case (β, Tα) = (Tα, β)
         [∵ in a Euclidean space (α, β) = (β, α)]
∴ (1) gives
2(Tα, β) = 0
⇒ (Tα, β) = 0 for all α, β in V
⇒ (Tα, Tα) = 0 for all α in V   [taking β = Tα]
⇒ Tα = 0 for every α in V
⇒ T = 0̂.
Case II. V is a complex vector space.
Now proceed as in theorem 8. Repeat the steps after the
result (1).
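The self-adjointness hypothesis is genuinely needed on real spaces. A short Python check of the counterexample given in the Note above: on R², the right-angle rotation T(a, b) = (b, −a) satisfies (Tα, α) = 0 for every α although T is not the zero operator. The sample vectors below are arbitrary.

```python
def T(v):
    # the book's counterexample operator on R^2
    a, b = v
    return (b, -a)

def dot(x, y):
    # standard (real) inner product on R^2
    return x[0] * y[0] + x[1] * y[1]

samples = [(1.0, 0.0), (2.0, -3.0), (0.5, 7.0)]
assert all(dot(T(v), v) == 0 for v in samples)   # (Tα, α) = 0 for every sample
assert any(T(v) != (0.0, 0.0) for v in samples)  # and yet T is not the zero operator
```

Note that this T is skew-symmetric, not self-adjoint, which is exactly why Theorem 9 does not apply to it.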
Theorem 10. A necessary and sufficient condition that a linear
transformation T on a complex inner product space V (unitary space)
be self-adjoint (Hermitian) is that (Tα, α) be real for all α.
(Meerut 1978, 91, 93)
Proof. Let V be a complex inner product space. Suppose T
is a self-adjoint linear operator on V, i.e., T* = T. Then for every α
in V, we have
(Tα, α) = (α, T*α) = (α, Tα) = conj (Tα, α).
Thus (Tα, α) is equal to its own conjugate and is therefore
real. Hence the condition is necessary.
Conversely, let (Tα, α) be real for every α in V. Then to prove
that T is self-adjoint. For this we should show that
(Tα, β) = (α, Tβ), for all α, β in V.
For every α, β in V, we have
(T(α+β), α+β) = (Tα + Tβ, α+β)
= (Tα, α) + (Tα, β) + (Tβ, α) + (Tβ, β).      ...(1)
Since (T(α+β), α+β), (Tα, α) and (Tβ, β) are all real, there-
fore from (1) we see that (Tα, β) + (Tβ, α) is also real. So equating
it to its complex conjugate, we get
(Tα, β) + (Tβ, α) = conj (Tα, β) + conj (Tβ, α)
= (β, Tα) + (α, Tβ).
Thus for all α, β in V, we have
(Tα, β) + (Tβ, α) = (β, Tα) + (α, Tβ).        ...(2)
Taking iβ in place of β in (2), we get
(Tα, iβ) + (T(iβ), α) = (iβ, Tα) + (α, T(iβ))
or -i(Tα, β) + i(Tβ, α) = i(β, Tα) - i(α, Tβ).   ...(3)
Multiplying (3) by i and adding to (2), we get
2(Tα, β) = 2(α, Tβ)
or (Tα, β) = (α, Tβ) for all α, β in V.
∴ T is self-adjoint.
Note. If V is finite-dimensional, then we can take advantage
of the fact that T must possess an adjoint. So in that case the con-
verse part of the theorem can be easily proved as follows:
Since (Tα, α) is real for all α in V, therefore
(Tα, α) = conj (Tα, α) = (α, Tα) = (T*α, α).
From this, we get for every α in V
(Tα - T*α, α) = 0
⇒ ((T - T*)α, α) = 0
⇒ T - T* = 0̂                   [by theorem 8]
⇒ T = T*.
Solved Examples
Example 1. Let V be the vector space V₂(C), with the standard
inner product. Let T be the linear operator defined by
T(1, 0) = (1, -2), T(0, 1) = (i, -1).
If α = (a, b), find T*α.                       (Meerut 1984P)
Solution. Let B = {(1, 0), (0, 1)}. Then B is the standard
ordered basis for V. It is an orthonormal basis. Let us find [T]_B,
i.e., the matrix of T in the ordered basis B.
We have T(1, 0) = (1, -2) = 1(1, 0) - 2(0, 1)
and T(0, 1) = (i, -1) = i(1, 0) - 1(0, 1).
∴ [T]_B = [  1    i ]
          [ -2   -1 ].
The matrix of T* in the ordered basis B is the conjugate trans-
pose of the matrix [T]_B.
∴ [T*]_B = [  1   -2 ]
           [ -i   -1 ].
Now (a, b) = a(1, 0) + b(0, 1).
∴ the coordinate matrix of T*(a, b) in the basis B
= [  1   -2 ] [ a ]   [  a - 2b  ]
  [ -i   -1 ] [ b ] = [ -ia - b  ].            [See page 161]
∴ T*(a, b) = (a - 2b)(1, 0) + (-ia - b)(0, 1)
= (a - 2b, -ia - b).
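This answer can be confirmed numerically by checking (Tα, β) = (α, T*β) for the two maps just computed. The sample vectors below are arbitrary, not from the text.

```python
def inner(x, y):
    # standard inner product on C^2
    return sum(xi * yi.conjugate() for xi, yi in zip(x, y))

def T(v):
    # T(1,0) = (1,-2) and T(0,1) = (i,-1), extended linearly
    a, b = v
    return (a + 1j * b, -2 * a - b)

def T_star(v):
    # the adjoint found in Example 1: T*(a, b) = (a - 2b, -ia - b)
    a, b = v
    return (a - 2 * b, -1j * a - b)

alpha, beta = (1 + 1j, 2), (3j, -1)
assert abs(inner(T(alpha), beta) - inner(alpha, T_star(beta))) < 1e-12
```

Since the identity (Tα, β) = (α, T*β) characterises the adjoint uniquely, this check pins down T* exactly.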
Example 2. A linear operator T on R² is defined by
T(x, y) = (x + 2y, x - y).
Find the adjoint T*, if the inner product is the standard one.
(Meerut 1977)
Solution. Let B = {(1, 0), (0, 1)}. Then B is the standard
ordered basis for V. It is an orthonormal basis. Let us find [T]_B.
We have T(1, 0) = (1, 1), and T(0, 1) = (2, -1).
∴ [T]_B = [ 1    2 ]
          [ 1   -1 ].
The matrix of T* in the ordered basis B is the transpose of
the matrix [T]_B. Note that R² is a real inner product space.
∴ [T*]_B = [ 1    1 ]
           [ 2   -1 ].
∴ the coordinate matrix of T*(x, y) in the basis B
= [ 1    1 ] [ x ]   [  x + y  ]
  [ 2   -1 ] [ y ] = [ 2x - y  ].
∴ T*(x, y) = (x + y, 2x - y).
Example 3. Let T be the linear operator on V₂(C) defined by
T(1, 0) = (1+i, 2), T(0, 1) = (i, i).
Using the standard inner product, find the matrix of T* in the stan-
dard ordered basis. Does T commute with T*?
Solution. Let B = {(1, 0), (0, 1)}. Then B is the standard
ordered basis for V. It is an orthonormal basis. We have
T(1, 0) = (1+i, 2) = (1+i)(1, 0) + 2(0, 1)
and T(0, 1) = (i, i) = i(1, 0) + i(0, 1).
∴ [T]_B = [ 1+i   i ]
          [  2    i ].
[T*]_B = the conjugate transpose of the matrix [T]_B
= [ 1-i    2 ]
  [ -i    -i ].
We have [T]_B [T*]_B = [ 1+i   i ][ 1-i    2 ]   [  3     3+2i ]
                       [  2    i ][ -i    -i ] = [ 3-2i    5   ].
Also [T*]_B [T]_B = [ 1-i    2 ][ 1+i   i ]   [  6     1+3i ]
                    [ -i    -i ][  2    i ] = [ 1-3i    2   ].
Now [T]_B [T*]_B ≠ [T*]_B [T]_B ⇒ [TT*]_B ≠ [T*T]_B
⇒ TT* ≠ T*T.
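The two matrix products in this example can be recomputed in a few lines of Python, which confirms both the entries and the failure to commute:

```python
def conj_transpose(M):
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def matmul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

T = [[1 + 1j, 1j],
     [2, 1j]]                 # [T]_B from Example 3
Ts = conj_transpose(T)        # [T*]_B

TTs, TsT = matmul(T, Ts), matmul(Ts, T)
assert TTs == [[3, 3 + 2j], [3 - 2j, 5]]
assert TsT == [[6, 1 + 3j], [1 - 3j, 2]]
assert TTs != TsT             # T does not commute with T*
```

An operator that does commute with its adjoint is called normal; this T is an example of a non-normal operator.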
Example 4. If β is a vector in an inner product space, if T is
a linear transformation on that space, and if f(α) = (Tα, β) for every
vector α, then f is a linear functional; find a vector β' such that
f(α) = (α, β') for every α.
Solution. It is given that f(α) = (Tα, β) ∀ α ∈ V.
∴ f is a function from V into F.
Let a, b ∈ F and α₁, α₂ ∈ V. Then
f(aα₁ + bα₂) = (T(aα₁ + bα₂), β) = (aTα₁ + bTα₂, β)
= a(Tα₁, β) + b(Tα₂, β) = a f(α₁) + b f(α₂).
Hence f is a linear functional on V. If V is finite-dimensional,
then there will exist a unique vector β' such that f(α) = (α, β') for
every α.
We have f(α) = (Tα, β) = (α, T*β) for every α.
∴ if f(α) = (α, β') for every α, then
(α, T*β) = (α, β') for every α.
Hence β' = T*β.
Example 5. If T₁ and T₂ are self-adjoint linear operators on
an inner product space V, then
(i) T₁ + T₂ is self-adjoint,
(ii) if T₁ ≠ 0̂ and a is a non-zero scalar, then aT₁ is self-adjoint
iff a is real.
Solution. (i) It is given that T₁* = T₁, T₂* = T₂.
We have (T₁ + T₂)* = T₁* + T₂* = T₁ + T₂.
∴ T₁ + T₂ is self-adjoint.
(ii) Let a be real. Then
(aT₁)* = āT₁*
= aT₁.                  [∵ a is real and T₁* = T₁]
∴ aT₁ is self-adjoint.
Conversely, let aT₁ be self-adjoint. Then
(aT₁)* = aT₁
⇒ āT₁* = aT₁
⇒ āT₁ = aT₁             [∵ T₁* = T₁]
⇒ (ā - a)T₁ = 0̂
⇒ ā - a = 0             [∵ T₁ ≠ 0̂]
⇒ ā = a ⇒ a is real.
Example 6. Show that the product of two self-adjoint operators
on an inner product space is self-adjoint iff the two operators
commute.                 (Meerut 1977, 79, 80, 82, 89)
Solution. Let S and T be two self-adjoint operators on an
inner product space V. Suppose S and T commute, i.e., ST = TS.
Then to prove that ST is self-adjoint.
We have (ST)* = T*S*
= TS                     [∵ S and T are both self-adjoint]
= ST.                    [∵ ST = TS]
∴ ST is self-adjoint.
Conversely, suppose that ST is self-adjoint.
Then (ST)* = ST
⇒ T*S* = ST
⇒ TS = ST                [∵ S and T are both self-adjoint]
⇒ S and T commute.
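The matrix analogue (real symmetric matrices, where "self-adjoint" means symmetric) is easy to test: the product of two symmetric matrices is symmetric exactly when they commute. The matrices below are arbitrary illustrations; note that A always commutes with A², so that pair gives a symmetric product.

```python
def matmul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

A = [[1, 2], [2, 3]]          # symmetric
B = [[0, 1], [1, 0]]          # symmetric, but does not commute with A

AB = matmul(A, B)
assert matmul(A, B) != matmul(B, A)   # A and B do not commute...
assert AB != transpose(AB)            # ...and indeed AB is not symmetric

A2 = matmul(A, A)                     # A commutes with A^2
AA2 = matmul(A, A2)
assert AA2 == transpose(AA2)          # product of commuting symmetric matrices is symmetric
```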
Example 7. Let V be a finite-dimensional inner product space
and T a linear operator on V. If T is invertible, show that T* is
invertible and (T*)⁻¹ = (T⁻¹)*.         (Meerut 1976, 81, 85)
Solution. Suppose T is invertible. Then there exists a linear
operator T⁻¹ on V such that
T⁻¹T = I = TT⁻¹
⇒ (T⁻¹T)* = I* = (TT⁻¹)*
⇒ T*(T⁻¹)* = I = (T⁻¹)*T*               [∵ I is self-adjoint]
∴ T* is invertible and (T*)⁻¹ = (T⁻¹)*.
Example 8. Let T be a linear operator on a finite-dimensional
inner product space V. Then T is self-adjoint iff its matrix in every
orthonormal basis is a self-adjoint matrix.
Solution. Let B be any orthonormal basis for V. Then
[T*]_B = [T]_B*.                        ...(1)
If T is self-adjoint, then T* = T. Therefore from (1), we get
[T]_B = [T]_B*.
∴ [T]_B is a self-adjoint matrix.
Conversely, let [T]_B be a self-adjoint matrix. Then
[T]_B = [T]_B*
= [T*]_B.                               [from (1)]
∴ T = T*, i.e., T is self-adjoint.
Example 9. If T is self-adjoint, then S*TS is self-adjoint for
all S; if S is invertible and S*TS is self-adjoint, then T is self-
adjoint.
Solution. T is a self-adjoint operator. Let S be any operator.
Then (S*TS)* = S*T*(S*)* = S*TS.
∴ S*TS is self-adjoint.
Conversely, let S be invertible. Then S* is also invertible. If
S*TS is self-adjoint, then
(S*TS)* = S*TS
⇒ S*T*(S*)* = S*TS ⇒ S*T*S = S*TS
⇒ (S*)⁻¹(S*T*S)S⁻¹ = (S*)⁻¹(S*TS)S⁻¹
⇒ T* = T ⇒ T is self-adjoint.
Example 10. If T is a self-adjoint linear operator on a finite-
dimensional inner product space V, then det T is real.
Solution. Let B be any orthonormal basis for V. Then
[T*]_B = [T]_B*.
Since T* = T, therefore
[T]_B = [T]_B*.                         ...(1)
Let [T]_B = A. Then from (1), we get
A = A*
⇒ det A = det A*
⇒ det A = conj (det A)
        [∵ det A* = the complex conjugate of det A]
⇒ det A is real
⇒ det T is real.      [∵ det T = det [T]_B = det A]
Example 11. Let V be a finite-dimensional inner product space,
and let T be any linear operator on V. Suppose W is a subspace of
V which is invariant under T. Then the orthogonal complement of W
is invariant under T*.
Solution. It is given that W is invariant under T. To prove
that W⊥ is invariant under T*.
Let β be any vector in W⊥. Then to prove that T*β is in W⊥,
i.e., T*β is orthogonal to every vector in W.
Let α be any vector in W. Then
(α, T*β) = (Tα, β)
= 0.            [∵ α ∈ W ⇒ Tα ∈ W. Also β is
                 orthogonal to every vector in W]
∴ T*β is orthogonal to every vector α in W.
Hence T*β is in W⊥.
∴ W⊥ is invariant under T*.
Example 12. Let V be a finite-dimensional inner product space,
and let E be an idempotent linear operator on V, i.e., E² = E. Prove
that E is self-adjoint iff EE* = E*E.
Solution. E is idempotent, i.e., E² = E.
Let E be self-adjoint, i.e., E* = E.
Then EE* = E*E*          [putting E* in place of E in the L.H.S.]
= E*E.                   [∵ E* = E]
Conversely, let EE* = E*E. Then to prove that E* = E.
For every vector β in V, we have
(Eβ, Eβ) = (β, E*Eβ)
= (β, EE*β)              [∵ E*E = EE*]
= (E*β, E*β).            [∵ (β, Eγ) = (E*β, γ)]
From this we conclude that Eβ = 0 iff E*β = 0.
Now let α be any vector in V. Let β = α - Eα.
Then Eβ = E(α - Eα) = Eα - E²α = Eα - Eα = 0.
∴ 0 = E*β = E*(α - Eα) = E*α - E*Eα.
This gives E*α = E*Eα for all α in V.
Therefore E* = E*E.
Now E = (E*)* = (E*E)* = E*E = E*.
∴ E is self-adjoint.
Example 13. If T is skew, does it follow that T² is also skew?
How about T³?
Solution. T is skew ⇒ T* = -T.
We have (T²)* = (TT)* = T*T* = (-T)(-T) = T².
∴ T² is self-adjoint and not skew.
Again (T³)* = T*T*T* = (-T)(-T)(-T) = -(T³).
∴ T³ is skew.
Example 14. If T is self-adjoint, or skew, and if T²α = 0, then
Tα = 0.                                  (Meerut 1979)
Solution. (i) Let T be self-adjoint, i.e., T* = T.
Suppose T²α = 0. Then for every β in V, we have
(T²α, β) = 0
⇒ (TTα, β) = 0 ⇒ (Tα, T*β) = 0 ⇒ (Tα, Tβ) = 0.
Taking β = α, we get
(Tα, Tα) = 0 ⇒ Tα = 0.
(ii) Let T be skew, i.e., T* = -T.
Suppose T²α = 0. Then for every β in V, we have
(T²α, β) = 0
⇒ (TTα, β) = 0 ⇒ (Tα, T*β) = 0 ⇒ (Tα, -Tβ) = 0
⇒ (-1)(Tα, Tβ) = 0 ⇒ (Tα, Tβ) = 0.
Taking β = α, we get
(Tα, Tα) = 0 ⇒ Tα = 0.

Example 15. If T is a skew-symmetric transformation on a
Euclidean space, then (Tα, α) = 0 for every vector α.
Solution. It is given that T is a linear operator on a real inner
product space V and T* = -T. For every vector α in V, we have
(Tα, α) = (α, T*α) = (α, -Tα) = -(α, Tα) = -(Tα, α).
∴ 2(Tα, α) = 0 ⇒ (Tα, α) = 0.
Example 16. Let V be a finite-dimensional inner product space
and T a self-adjoint linear operator on V. Prove that the range of T
is the orthogonal complement of the null space of T, i.e.,
R(T) = [N(T)]⊥.
Solution. We shall prove that R(T) is a subspace of [N(T)]⊥
and dim R(T) = dim [N(T)]⊥. Then we must have
R(T) = [N(T)]⊥.
Let α ∈ R(T). Then there exists a vector β ∈ V such that α = Tβ.
Let γ be an arbitrary vector of N(T). Then Tγ = 0.
We have
(α, γ) = (Tβ, γ) = (β, T*γ) = (β, Tγ)     [∵ T* = T]
= (β, 0) = 0.
Thus (α, γ) = 0 ∀ γ ∈ N(T). Therefore α ∈ [N(T)]⊥. Thus we have
proved that α ∈ R(T) ⇒ α ∈ [N(T)]⊥. Therefore R(T) ⊆ [N(T)]⊥.
Again we know that
dim R(T) + dim N(T) = dim V.              ...(1)
Also V = N(T) ⊕ [N(T)]⊥.
∴ dim N(T) + dim [N(T)]⊥ = dim V.         ...(2)
From (1) and (2), we get
dim R(T) = dim [N(T)]⊥.
Since R(T) ⊆ [N(T)]⊥ and dim R(T) = dim [N(T)]⊥, therefore
R(T) = [N(T)]⊥.
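A small concrete instance of this result, chosen for illustration: the singular symmetric matrix A = [[1,1],[1,1]] on R² has null space spanned by (1,−1) and range spanned by (1,1), and every vector in the range is orthogonal to the null space, as the example predicts.

```python
def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[1, 1], [1, 1]]           # symmetric and singular
null_vec = [1, -1]
assert apply(A, null_vec) == [0, 0]        # (1, -1) lies in N(A)

# every image A·x lies in R(A); each is orthogonal to the null vector
for x in ([3, 5], [-2, 7], [1, 0]):
    assert dot(apply(A, x), null_vec) == 0
```

For a non-symmetric matrix this orthogonality can fail, which is why self-adjointness is part of the hypothesis.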
Exercises
1. Let β be a fixed vector in an inner product space V over a field
F. If f_β : V → F is defined by f_β(α) = (α, β) for all α ∈ V, then
show that f_β is a linear functional on V. If V is finite-dimen-
sional, then prove that each linear functional on V arises in this
way from some β.                           (Meerut 1973)
2. Suppose T is a self-adjoint linear operator on a finite-dimen-
sional inner product space V. If a subspace W of V is invariant
under T, then prove that W⊥ is also invariant under T.
3. If both T₁ and T₂ are self-adjoint, or else if both are skew,
then T₁T₂ + T₂T₁ is self-adjoint and T₁T₂ - T₂T₁ is skew. What
happens if one of T₁ and T₂ is self-adjoint and the other is
skew?
4. State whether the following statements are true or false:
(i) The product of two symmetric matrices is symmetric.
                                            (Meerut 1977)
(ii) The product of two self-adjoint operators on an inner
product space is a self-adjoint operator.
Ans. (i) False; (ii) false.

§ 5. Positive Operators.
Positive operator. Definition. A linear operator T on an inner
product space V is called positive, in symbols T > 0, if it is self-
adjoint and if (Tα, α) > 0 whenever α ≠ 0.     (Meerut 1972, 82)
If α = 0, then (Tα, α) = 0. Thus if T is positive, then (Tα, α) ≥ 0
for all α, and (Tα, α) = 0 ⇒ α = 0. Also if T is self-adjoint and if
(Tα, α) ≥ 0 for all α, and (Tα, α) = 0 ⇒ α = 0, then T is positive.
If V is a complex inner product space, then by theorem 10 of § 4,
(Tα, α) ≥ 0 for every α implies that T must be self-adjoint. There-
fore a linear operator T on a complex inner product space is posi-
tive if and only if (Tα, α) > 0 whenever α ≠ 0.
Non-negative operator. Definition. A linear operator T on an
inner product space V is called non-negative, in symbols T ≥ 0, if it
is self-adjoint and if (Tα, α) ≥ 0 for all α in V.
Every positive operator is also a non-negative operator. If T
is a non-negative operator, then (Tα, α) = 0 is possible even if α ≠ 0.
Therefore a non-negative operator may or may not be a positive
operator.
If S and T are two linear operators on an inner product space V,
then we define S > T (or T < S) if S - T > 0.
Note. Some authors call a positive operator by the name
'strictly positive' or 'positive definite'. Also, they use the phrase
'positive operator' in place of 'non-negative operator'.
Theorem 1. Let V be an inner product space, and let T be a
linear operator on V. Let p be the function defined on ordered pairs
of vectors α, β in V by
p(α, β) = (Tα, β).
Show that the function p is an inner product on V if and only if T is
a positive operator.                        (Meerut 1969, 88)
Proof. The function p obviously satisfies the linearity property.
If a, b ∈ F and α₁, α₂ ∈ V, then
p(aα₁ + bα₂, β) = (T(aα₁ + bα₂), β) = (aTα₁ + bTα₂, β)
= a(Tα₁, β) + b(Tα₂, β) = a p(α₁, β) + b p(α₂, β).
∴ the function p satisfies the linearity property.
Now the function p will be an inner product on V if and only
if p(α, β) = conj p(β, α) and p(α, α) > 0 if α ≠ 0.
We have p(α, β) = (Tα, β) and conj p(β, α) = conj (Tβ, α) = (α, Tβ).
Also p(α, α) = (Tα, α).
∴ the function p will be an inner product on V if and only if
(i) (Tα, β) = (α, Tβ) ∀ α, β ∈ V, i.e., T is self-adjoint,
(ii) (Tα, α) > 0 if α ≠ 0.
Hence the function p will be an inner product on V if and
only if the linear operator T is positive.
Now we shall show that if V is finite-dimensional, then every
inner product on V is of the type just described.
Theorem 2. Let V(F) be a finite-dimensional inner product
space with inner product ( , ). If p is any inner product on V, there
is a unique positive linear operator T on V such that p(α, β) = (Tα, β)
for all α, β in V.                          (Meerut 1980)
Proof. Fix a vector β in V. Let f be a mapping from V to
F defined as f(α) = p(α, β) ∀ α ∈ V. Since the inner product p
satisfies the linearity property, therefore f is a linear functional on V.
By theorem 1 on page 330, there exists a unique vector β' in V
such that f(α) = (α, β') for all α in V, i.e., p(α, β) = (α, β') for all α
in V.
Now let us define a mapping T from V into V by the rule
Tβ = β'.
Then we have
p(α, β) = (α, β') = (α, Tβ) for all α, β in V.   ...(1)
We also have
p(α, β) = conj p(β, α)    [∵ by the conjugate property of the inner
                           product p, we have conj p(α, β) = p(β, α)]
= conj (β, Tα)            [by (1)]
= (Tα, β).                [by the conjugate property of the
                           inner product ( , )]
Thus we have
p(α, β) = (Tα, β) for all α, β in V.             ...(2)
Now we shall show that T is linear. Let α₁, α₂ ∈ V and
a₁, a₂ ∈ F. Then for all γ in V, we have
(T(a₁α₁ + a₂α₂), γ) = p(a₁α₁ + a₂α₂, γ)          [by (2)]
= a₁ p(α₁, γ) + a₂ p(α₂, γ)                      [by linearity of p]
= a₁(Tα₁, γ) + a₂(Tα₂, γ)                        [by (2)]
= (a₁Tα₁ + a₂Tα₂, γ).     [by linearity of the inner product ( , )]
Therefore we have T(a₁α₁ + a₂α₂) = a₁Tα₁ + a₂Tα₂. Hence T is
a linear operator. Thus we have proved the existence of a linear
operator T with p(α, β) = (Tα, β). Since p is an inner product,
therefore by theorem 1, T is positive.
Uniqueness of T. Suppose there are two linear operators T
and U such that p(α, β) = (Tα, β) = (Uα, β) for all α, β in V. Then
we have
(Tα - Uα, β) = 0 ∀ α, β ∈ V.                     ...(3)
Suppose we keep α fixed. Then from (3) we see that the
vector Tα - Uα is orthogonal to every vector β in V. Therefore
Tα - Uα is the zero vector. Thus we have Tα - Uα = 0 ∀ α ∈ V.
Therefore Tα = Uα ∀ α ∈ V and so T = U. Hence T is unique.
Theorem 3. Let V be a finite-dimensional inner product space
and T a linear operator on V. Then T is positive if and only if there
is an invertible linear operator U on V such that T = U*U.
(Meerut 1973, 76, 78, 82, 85, 87, 89)
Proof. Let T = U*U, where U is an invertible linear operator
on V. We have T* = (U*U)* = U*(U*)* = U*U = T. Therefore T is
self-adjoint.
Also (Tα, α) = (U*Uα, α) = (Uα, (U*)*α) = (Uα, Uα) ≥ 0.
Further, (Tα, α) = 0 ⇒ (Uα, Uα) = 0 ⇒ Uα = 0
⇒ α = 0.        [∵ U invertible on the finite-dimensional
                 space V implies that U is non-singular]
Therefore if α ≠ 0, then (Tα, α) > 0. Hence T is positive.
Conversely, suppose that T is positive. Then by theorem 1,
p(α, β) = (Tα, β) is an inner product on V. Let {α₁,..., α_n} be a
basis for V which is orthonormal with respect to the inner product
( , ) and let {β₁,..., β_n} be a basis orthonormal with respect to the
inner product p. Then
p(β_i, β_j) = δ_ij = (α_i, α_j).
Now let U be the unique linear operator on V such that
Uβ_i = α_i, i = 1,..., n. Obviously U is invertible because it carries a
basis onto a basis. We have
p(β_i, β_j) = (α_i, α_j) = (Uβ_i, Uβ_j).
Now let α, β be any two vectors in V.
Let α = Σ_{i=1}^{n} x_i β_i and β = Σ_{j=1}^{n} y_j β_j. Then
(Tα, β) = p(α, β)                         [by def. of p]
= p(Σ_i x_i β_i, Σ_j y_j β_j)
= Σ_i Σ_j x_i ȳ_j p(β_i, β_j)
= Σ_i Σ_j x_i ȳ_j (Uβ_i, Uβ_j)
= (U Σ_i x_i β_i, U Σ_j y_j β_j)
= (Uα, Uβ) = (U*Uα, β).
Thus for all α, β in V, we have
(Tα, β) = (U*Uα, β).
∴ T = U*U.
Hence the theorem.
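The "if" half of Theorem 3 in matrix form, on R² so that U* is simply the transpose: for any invertible U, the matrix A = UᵀU is symmetric and (Aα, α) > 0 for α ≠ 0. The matrix U below is an arbitrary invertible choice for illustration.

```python
def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def matmul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def quad(A, x):
    # (Aα, α) = x^T A x for the standard real inner product
    y = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    return sum(yi * xi for yi, xi in zip(y, x))

U = [[1, 2], [0, 3]]           # invertible: det U = 3 ≠ 0
A = matmul(transpose(U), U)    # A = U^T U

assert A == transpose(A)       # A is self-adjoint (symmetric)
for x in ([1, 0], [0, 1], [1, -1], [-2, 5]):
    assert quad(A, x) > 0      # (Aα, α) > 0 for these non-zero α
```

The converse direction corresponds to the familiar Cholesky-type factorisation of a positive matrix.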
Theorem 4. Let T be a linear operator on a finite-dimensional
inner product space V. Let A = [a_ij]_{n×n} be the matrix of T relative
to an ordered orthonormal basis B = {α₁,..., α_n}. Then T is positive
if and only if the matrix A satisfies the following two conditions:
(i) A = A*, i.e., A is self-adjoint,
(ii) Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x̄_i x_j > 0, where x₁,..., x_n are any n scalars not
all zero.
Proof. Let α be any vector in V. Let α = x₁α₁ + ... + x_nα_n.
Then (Tα, α) = (T Σ_j x_j α_j, Σ_i x_i α_i)
= (Σ_j x_j Tα_j, Σ_i x_i α_i) = Σ_j Σ_i x_j x̄_i (Tα_j, α_i)
= Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x̄_i x_j.
        [∵ by theorem 3 of § 4, (Tα_j, α_i) = a_ij]
Now suppose T is positive. Then T = T*. Therefore A = A*.
If x₁,..., x_n are any n scalars not all zero, then
α = x₁α₁ + ... + x_nα_n is a non-zero vector in V. Since T is positive,
therefore (Tα, α) > 0. Hence Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x̄_i x_j > 0.
Conversely, suppose that the conditions (i) and (ii) of the
theorem hold. Then T = T*. Also, (ii) implies that
(Tα, α) > 0 if α ≠ 0. Note that if 0 ≠ α ∈ V, then we can write
α = x₁α₁ + ... + x_nα_n where x₁,..., x_n are scalars not all zero. Hence
T is positive.
Positive matrix. Definition. Let A = [a_ij]_{n×n} be a square matrix
of order n over the field of real or complex numbers. Then A is said
to be positive if
(i) A = A*,
and (ii) Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x̄_i x_j > 0, where x₁,..., x_n are any n scalars not
all zero.
If the field is real, then the bars may be omitted. If the field
is complex, then the condition (i) will automatically follow from
the condition (ii) and so it may be omitted.
Now theorem 4 may be stated as follows:
Let V be a finite-dimensional inner product space and B an
ordered orthonormal basis for V. If T is a linear operator on V,
then T is positive if and only if the matrix of T in the ordered basis
B is positive.
Principal minors of a matrix.
Definition. Let A = [a_ij]_{n×n} be a square matrix of order n over
an arbitrary field F. The principal minors of A are the n scalars
defined as
det A_k = det [ a₁₁ ... a₁ₖ ]
              [  ⋮        ⋮  ]
              [ aₖ₁ ... aₖₖ ],   k = 1,..., n.
We shall now give, without proof, a criterion for a matrix to
be positive:
Let A be an n×n self-adjoint matrix over the field of real or
complex numbers. Then A is positive if and only if the principal
minors of A are all positive.
If det A is not positive, then the matrix A is not positive.
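This criterion is directly computable. The sketch below implements it for real symmetric matrices (determinants by cofactor expansion); the two test matrices are arbitrary examples.

```python
def det(M):
    # determinant by cofactor expansion along the first row
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def leading_minors_positive(A):
    # det A_k > 0 for k = 1,..., n  (the principal-minor criterion)
    n = len(A)
    return all(det([row[:k] for row in A[:k]]) > 0 for k in range(1, n + 1))

assert leading_minors_positive([[2, 1], [1, 2]])      # minors 2 and 3: positive
assert not leading_minors_positive([[1, 2], [2, 1]])  # second minor is -3: not positive
```

Note that the function only applies the criterion; the symmetry (self-adjointness) of its argument must be checked separately, as the criterion itself requires.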
Solved Examples
Example 1. Suppose S and T are two positive linear operators
on an inner product space V. Then show that S + T is also positive.
Solution. Since S and T are both positive, therefore S* = S
and T* = T.
We have (S + T)* = S* + T* = S + T.
∴ S + T is also self-adjoint.
Also, if α is any vector in V, then
((S+T)α, α) = (Sα + Tα, α) = (Sα, α) + (Tα, α).
Since S and T are both positive, therefore
(Sα, α) > 0 and (Tα, α) > 0 if α ≠ 0.
So ((S+T)α, α) > 0 if α ≠ 0.
Hence S + T is positive.

Example 2. Which of the following matrices are positive?
(i) [  1    1+i ]     (ii) [ 1   2 ]     (iii) [ 1   ½ ]
    [ 1-i    3  ]          [ 3   4 ]           [ ½   1 ]
Solution. (i) Let A = [  1    1+i ]
                      [ 1-i    3  ].
We have A* = the conjugate transpose of A
= [  1    1+i ] = A.
  [ 1-i    3  ]
∴ A is self-adjoint.
Now the principal minors of A are 1 and |  1    1+i |
                                        | 1-i    3  |.
We have |  1    1+i |
        | 1-i    3  | = 3 - (1+i)(1-i) = 3 - 2 = 1.
Since A is self-adjoint and the principal minors of A are all
positive, therefore A is a positive matrix.
(ii) The given matrix is not self-adjoint because its transpose
is not equal to itself. Hence it is not positive.
(iii) Let A denote the given matrix. Obviously A = Aᵗ, i.e.,
A equals its own transpose, so A is self-adjoint.
The principal minors of A are 1 and | 1   ½ |
                                    | ½   1 | = ¾.
All these are positive, as can be easily seen. Hence A is
positive.
Example 3. Prove that every entry on the main diagonal of a
positive matrix is positive.                (Meerut 1976, 83)
Solution. Let A = [a_ij]_{n×n} be a positive matrix. Then
Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x̄_i x_j > 0,          ...(1)
where x₁,..., x_n are any n scalars not all zero. Now suppose that
out of the n scalars x₁,..., x_n we take x_i = 1 and each of the remaining
n-1 scalars is taken as 0. Then from (1) we conclude that a_ii > 0.
This holds for each i = 1,..., n. Hence each entry on the main
diagonal of a positive matrix is positive.
§ 6. Unitary operators.
Definitions. Let U and V be two inner product spaces over the
same field F and let T be a linear transformation from U into V.
We say that
(i) T preserves inner products if (Tα, Tβ) = (α, β) for all α, β
in U.
(ii) T preserves norms if || Tα || = || α || ∀ α ∈ U.
(iii) T is an isometry if T preserves distances, i.e., if
|| Tα - Tβ || = || α - β || for all α, β in U.
Note that d(Tα, Tβ) = || Tα - Tβ ||.
Theorem 1. Let U and V be two inner product spaces over the
same field F and let T be a linear transformation from U into V. Then
the following three conditions on T are equivalent:
(i) T preserves inner products.
(ii) T preserves norms.
(iii) T is an isometry.
Proof. (i) ⇒ (ii).
It is given that T preserves inner products. Therefore
(Tα, Tβ) = (α, β) for all α, β in U. Taking β = α, we get
(Tα, Tα) = (α, α), i.e., || Tα ||² = || α ||². Thus || Tα || = || α || for
every α in U. Hence T preserves norms.
(ii) ⇒ (iii).
It is given that || Tα || = || α || for every α in U. Therefore for
all α, β in U, we have
|| T(α - β) || = || α - β ||      [taking α - β in place of α]
⇒ || Tα - Tβ || = || α - β ||
⇒ T is an isometry.
(iii) ⇒ (i).
It is given that || Tα - Tβ || = || α - β || for all α, β in U. Taking
β = 0, we see that
|| Tα || = || α || for every α in U
⇒ (Tα, Tα) = (α, α) for all α in U.
Now let α, β ∈ U. Then
(T(α+β), T(α+β)) = (α+β, α+β)
⇒ (Tα + Tβ, Tα + Tβ) = (α+β, α+β)
⇒ (Tα, Tα) + (Tα, Tβ) + (Tβ, Tα) + (Tβ, Tβ)
        = (α, α) + (α, β) + (β, α) + (β, β)
⇒ (Tα, Tβ) + (Tβ, Tα) = (α, β) + (β, α).
        [∵ (Tα, Tα) = (α, α) and (Tβ, Tβ) = (β, β)]
Thus if α, β are any vectors in U, then
(Tα, Tβ) + (Tβ, Tα) = (α, β) + (β, α).            ...(1)
If F is the field of real numbers, then
(Tβ, Tα) = (Tα, Tβ) and (β, α) = (α, β).
∴ (1) gives 2(Tα, Tβ) = 2(α, β)
⇒ (Tα, Tβ) = (α, β).
If F is the field of complex numbers, then β ∈ U ⇒ iβ ∈ U.
So replacing β by iβ in (1), we get
(Tα, T(iβ)) + (T(iβ), Tα) = (α, iβ) + (iβ, α)
⇒ (Tα, iTβ) + (iTβ, Tα) = (α, iβ) + (iβ, α)
⇒ -i(Tα, Tβ) + i(Tβ, Tα) = -i(α, β) + i(β, α)
⇒ -(Tα, Tβ) + (Tβ, Tα) = -(α, β) + (β, α).
        [on multiplying throughout by -i]          ...(2)
Subtracting (2) from (1), we get
2(Tα, Tβ) = 2(α, β)
⇒ (Tα, Tβ) = (α, β) for all α, β in U.
Hence the theorem.
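A concrete map satisfying all three equivalent conditions is rotation of R² through an angle θ; the sketch below (θ and the sample vectors are arbitrary choices) checks that it preserves inner products and hence norms.

```python
import math

def T(v, theta=0.7):
    # rotation of R^2 through the angle theta
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def dot(x, y):
    return x[0] * y[0] + x[1] * y[1]

a, b = (3.0, -1.0), (0.5, 2.0)
assert abs(dot(T(a), T(b)) - dot(a, b)) < 1e-12   # inner products preserved
assert abs(dot(T(a), T(a)) - dot(a, a)) < 1e-12   # hence norms preserved
```

By the theorem, the distance-preservation property then follows automatically and need not be tested separately.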
Inner product space isomorphism.
Definition. Let U and V be inner product spaces over the same
field F, and let T be a linear transformation from U into V. Then T
is said to be an inner product space isomorphism if
(i) T is invertible, i.e., T is one-one and onto,
(ii) T preserves inner products.
Also, the inner product spaces U and V are then said to be
isomorphic, and we write U ≅ V.
If T preserves inner products, then || Tα || = || α || for every α
in U, i.e., (Tα, Tα) = (α, α) for every α in U. So
Tα = 0 ⇒ (α, α) = 0 ⇒ α = 0.
∴ T is non-singular, i.e., T is one-one.
Therefore an inner product space isomorphism from U onto V
can also be defined as a linear transformation from U onto V which
preserves inner products.
Theorem 2. Let V be any unitary vector space of dimension n
with inner product(,). Then V is inner product space isomorphic
to Va(C) with standard inner product.
356
Lineef Algebra

Proof. Let 5={ai, a„} be an orthonormal basis for the


vector space P. Then
(a,, ay)=8,y, where 3/;=0 if i^jand S/y=»l if /=y.
Now let be any vector in V and let (;cu Xn) be the
coordinate vector of relative to the basis B. Then
Pi =‘Xi»i +^2«2+
Let/be a mapping from V to V„(C)defined as
/ X2t X||).
Then we know that/is an isomorphism from Fonto V„(C).
Now if ^2=3'i«i+y2a2+..,+yn«n be any vector in F, then
(Pu p2)=‘(B xicti, B yj<Kj)

(«'» «y)
/●I /«! .

= Ij xi yi, on summing with respect to j

^(xuX2 x„),(yuy2 y„)


=f(Pl)f(P2)
=the standard inner product of/(j8i) and/(/32) in V„ (C).
Hence F is an inner product space isomorphic to F„ (C) with
standard inner product.
Corollary. In an n-dimensional unitary vector space V the dot
product of the coordinate vectors of any two vectors of V is invariant
under transformation from one orthonormal basis to another.
Proof. Let B₁ = {α₁, ..., αₙ} and B₂ = {γ₁, ..., γₙ} be two ortho-
normal bases for V and let β₁, β₂ be any two vectors in V. Suppose
X₁, X₂ are the coordinate vectors of β₁, β₂ relative to the basis B₁
and Y₁, Y₂ are their coordinate vectors relative to the basis B₂.
Then, by theorem 2,
(β₁, β₂) = X₁ · X₂
and (β₁, β₂) = Y₁ · Y₂.
∴ X₁ · X₂ = Y₁ · Y₂.
Hence the result.
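This invariance is easy to check numerically. The sketch below is an illustration only (NumPy and the randomly generated bases are my additions, not part of the text): two orthonormal bases of C³ are taken as the columns of unitary matrices, and the standard inner products of the coordinate vectors agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two orthonormal bases for C^3, obtained as the columns of unitary
# matrices produced by QR factorisations of random complex matrices.
B1, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
B2, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

beta1 = rng.normal(size=3) + 1j * rng.normal(size=3)
beta2 = rng.normal(size=3) + 1j * rng.normal(size=3)

# Coordinate vectors of beta1, beta2 relative to each basis.
X1, X2 = np.linalg.solve(B1, beta1), np.linalg.solve(B1, beta2)
Y1, Y2 = np.linalg.solve(B2, beta1), np.linalg.solve(B2, beta2)

# Standard inner product of the coordinate vectors: sum_i x_i * conj(y_i).
dot_B1 = X1 @ X2.conj()
dot_B2 = Y1 @ Y2.conj()
assert np.isclose(dot_B1, dot_B2)                # invariant under the change of basis
assert np.isclose(dot_B1, beta1 @ beta2.conj())  # and equal to (beta1, beta2)
```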
Unitary operator.
Definition. A linear operator T on an inner product space V
over the field F is said to be a unitary operator if T is an inner pro-
duct space isomorphism of V onto itself.
In other words, T is a unitary operator on an inner product
space V if T is invertible and if T preserves inner products.
Theorem 3. Let T be a linear operator on an inner product
space V. Then T is unitary if and only if the adjoint T* of T exists
and TT* = T*T = I. (Meerut 1974, 76, 83, 87, 89)
Proof. Suppose T is unitary. Then T is invertible.
For all α, β in V, we have
(Tα, β) = (Tα, Iβ) [∵ Iβ = β]
= (Tα, TT⁻¹β) [∵ TT⁻¹ = I]
= (α, T⁻¹β) [∵ T is unitary ⇒ T preserves
inner products]
∴ T⁻¹ is the adjoint of T. Thus T is unitary implies that T*
exists and TT* = I = T*T.
Conversely, suppose that T* exists and TT* = T*T = I. Then
T is invertible and T⁻¹ = T*. So T will be unitary if we show that
T preserves inner products.
For all α, β in V, we have
(Tα, Tβ) = (α, T*Tβ) = (α, Iβ) = (α, β).
∴ T is unitary.
Theorem 4. A linear operator T on a finite-dimensional inner
product space is unitary iff T*T = I. (Meerut 1990)
Proof. Since T is a linear operator on a finite dimensional
inner product space, therefore T* exists. Also T*T = I implies that
T is invertible.
Now give the same proof as in theorem 3.
Theorem 5. A linear operator T on a finite dimensional inner
product space V is unitary if and only if T preserves inner products.
(Meerut 1977)
Proof. T is a linear operator on a finite dimensional inner
product space V.
Suppose T is unitary. Then T preserves inner products.
Conversely suppose that T preserves inner products. Then
T will be unitary if we prove that T is invertible. Since T preserves
inner products, therefore (Tα, Tβ) = (α, β) for every α, β in V.
Taking β = α, we get (Tα, Tα) = (α, α) for every α in V. So
Tα = 0 ⇒ (α, α) = 0 ⇒ α = 0.
∴ T is non-singular, i.e. T is one-one. Now V is finite
dimensional. Therefore T is one-one implies that T is onto. Hence
T is invertible.
∴ T is unitary.
Theorem 6. A linear operator T on a finite dimensional inner
product space V is unitary if and only if it takes an orthonormal
basis of V onto an orthonormal basis of V. (Meerut 1976, 88)
Proof. Suppose T is a unitary operator on a finite-dimensional
inner product space V. Then T preserves inner products. If
B = {α₁, ..., αₙ} is an orthonormal basis of V, then we are to show
that {Tα₁, ..., Tαₙ} is an orthonormal basis of V. For i = 1, ..., n and
j = 1, ..., n, we have
(Tαᵢ, Tαⱼ) = (αᵢ, αⱼ) [∵ T preserves inner products]
= δᵢⱼ [∵ αᵢ, αⱼ ∈ an orthonormal set B]
∴ {Tα₁, ..., Tαₙ} is an orthonormal set in V. It will be a basis
of V because it is linearly independent and it contains n vectors
which is the dimension of V.
Conversely, suppose that T is a linear operator on V such
that both
B = {α₁, ..., αₙ} and {Tα₁, ..., Tαₙ}
are orthonormal bases of V. Then (αᵢ, αⱼ) = δᵢⱼ = (Tαᵢ, Tαⱼ). Since
T carries a basis of V onto a basis of V, therefore T is invertible.
Now T will be unitary if T preserves inner products.
For any α = Σᵢ xᵢαᵢ, β = Σⱼ yⱼαⱼ in V, we have
(α, β) = (Σᵢ xᵢαᵢ, Σⱼ yⱼαⱼ) = Σᵢ Σⱼ xᵢ ȳⱼ δᵢⱼ = Σᵢ xᵢ ȳᵢ.
Also (Tα, Tβ) = (T Σᵢ xᵢαᵢ, T Σⱼ yⱼαⱼ)
= (Σᵢ xᵢ Tαᵢ, Σⱼ yⱼ Tαⱼ) = Σᵢ Σⱼ xᵢ ȳⱼ (Tαᵢ, Tαⱼ)
= Σᵢ Σⱼ xᵢ ȳⱼ δᵢⱼ = Σᵢ xᵢ ȳᵢ = (α, β).
∴ T preserves inner products. Hence T is unitary.
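Theorem 6 can be illustrated numerically. In the sketch below (NumPy is my choice, not the text's), a unitary matrix T applied to the columns of an orthonormal basis yields vectors whose Gram matrix is the identity, i.e. another orthonormal basis.

```python
import numpy as np

rng = np.random.default_rng(1)

# A unitary matrix T (the operator) and an orthonormal basis {a_1,...,a_n},
# both built from QR factorisations of random complex matrices.
T, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
A, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

TA = T @ A            # columns are T(a_1), ..., T(a_n)
G = TA.conj().T @ TA  # Gram matrix of the images: entries (Ta_i, Ta_j)

# The Gram matrix is the identity, i.e. {Ta_1, ..., Ta_n} is again orthonormal.
assert np.allclose(G, np.eye(4))
```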


Unitary or Isometric matrix. Definition. (Meerut 1980). Let
A be an n×n matrix over the field of real or complex numbers. Then
A is said to be unitary or isometric if A*A = I.
Orthogonal matrix. Definition. (Meerut 1980). A real or
complex n×n matrix A is said to be orthogonal if A′A = I, where A′
is the transpose of A.
A real orthogonal matrix is unitary and a unitary matrix over
the real field is orthogonal. Therefore sometimes a unitary matrix
over the real field is also called an orthogonal matrix.
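The two matrix conditions translate directly into code. The helper names below (is_unitary, is_orthogonal) are illustrative choices of mine, not from the text; note that the complex matrix U passes the conjugate-transpose test but fails the plain-transpose one.

```python
import numpy as np

def is_unitary(A, tol=1e-10):
    """A*A = I, with A* the conjugate transpose of A."""
    n = A.shape[0]
    return np.allclose(A.conj().T @ A, np.eye(n), atol=tol)

def is_orthogonal(A, tol=1e-10):
    """A'A = I, with A' the plain transpose (no conjugation)."""
    n = A.shape[0]
    return np.allclose(A.T @ A, np.eye(n), atol=tol)

R = np.array([[0.6, -0.8], [0.8, 0.6]])        # real rotation-like matrix
U = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # genuinely complex unitary

assert is_unitary(R) and is_orthogonal(R)      # real orthogonal => also unitary
assert is_unitary(U) and not is_orthogonal(U)
```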
Unitarily Equivalent matrices. Definition. Let A and B be com-
plex n×n matrices. We say that B is unitarily equivalent to A if
there is an n×n unitary matrix P such that B = P*AP.
Orthogonally Equivalent matrices. Definition. Let A and B be
complex n×n matrices. We say that B is orthogonally equivalent to
A if there is an n×n orthogonal matrix P such that
B = P′AP.
Unitarily equivalent matrices are also sometimes called unitarily
similar matrices. Similarly orthogonally equivalent matrices are
also sometimes called orthogonally similar matrices.
Theorem 7. Let V be a finite-dimensional inner product space,
and let U be a linear operator on V. Then U is unitary if and only if
the matrix of U in some (or every) ordered orthonormal basis is a
unitary matrix. (Meerut 1977)
Proof. Let B = {α₁, α₂, ..., αₙ} be an ordered orthonormal basis
of V. Let A be the matrix of U relative to the basis B, i.e., let
[U]_B = A. Then [U*]_B = A*.
Now suppose U is unitary. Then
U*U = I
⇒ [U*U]_B = [I]_B ⇒ [U*]_B [U]_B = I
⇒ A*A = I ⇒ the matrix A is unitary.
Conversely suppose that the matrix A is unitary. Then
A*A = I
⇒ [U*]_B [U]_B = I ⇒ [U*U]_B = I
⇒ U*U = I ⇒ the linear operator U is unitary.
Solved Examples
Example 1. Show that the set of all unitary operators on an
inner product space V is a group under the operation of composition.
Solution. The product of two unitary operators is unitary. If
T₁, T₂ are two unitary operators, then both of them are invertible.
Therefore T₁T₂ is also invertible. Also || T₁T₂ α || = || T₂ α || = || α ||
for each α. Hence T₁T₂ is unitary.
The composition of linear operators is an associative opera-
tion.
The identity operator I is invertible and || Iα || = || α || for each
α. Hence I is also unitary.
Finally, if T is a unitary operator, then T is invertible. Now
to show that T⁻¹ is also unitary. Let α be any vector in V and let
T⁻¹α = β. Then Tβ = α. We have
|| T⁻¹α || = || β || = || Tβ || [∵ T is unitary]
= || α ||.
∴ T⁻¹ is unitary. Hence the result.
Example 2. Show that the determinant of a unitary operator
has absolute value 1.
Solution. Let T be a unitary operator on a finite-dimensional
vector space V. Then T*T = I. Let B be an ordered orthonormal
basis for V. Let A be the matrix of T relative to B. Then
det T = det A.
Also A* will be the matrix of T* with respect to B.
Now T*T = I
⇒ [T*T]_B = [I]_B ⇒ [T*]_B [T]_B = I ⇒ A*A = I
⇒ det (A*A) = det I ⇒ (det A*)(det A) = 1.
Since det A* is the complex conjugate of det A, this gives
| det A |² = 1 ⇒ | det A | = 1
⇒ det A has absolute value 1
⇒ det T has absolute value 1.
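A quick numerical check of Example 2 (illustrative only; the QR factorisation is simply a convenient way to produce a unitary matrix standing in for the matrix of a unitary operator):

```python
import numpy as np

rng = np.random.default_rng(2)

# A unitary matrix, playing the role of the matrix of a unitary
# operator in an ordered orthonormal basis.
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

d = np.linalg.det(U)
assert np.isclose(abs(d), 1.0)  # the determinant lies on the unit circle
```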

Example 3. For which values of a are the following matrices
isometric:
(i)  [  a    1/2 ]      (ii)  [  a   0 ]
     [ -1/2   a  ]            [ -i   a ]
Solution. (i) Let A = [  a    1/2 ]
                      [ -1/2   a  ]
Then A* = [  ā   -1/2 ]  = the conjugate transpose of A.
          [ 1/2    ā  ]
Now A will be isometric if A*A = I, i.e. if
[  ā   -1/2 ] [  a    1/2 ]  =  [ 1  0 ]
[ 1/2    ā  ] [ -1/2   a  ]     [ 0  1 ]
or
[ āa + 1/4      ā/2 − a/2 ]  =  [ 1  0 ]
[ a/2 − ā/2     1/4 + aā  ]     [ 0  1 ]
or
aā + 1/4 = 1,  ā/2 − a/2 = 0.
From these equations, we get ā = a. Therefore a must be real.
Then we get a² = 3/4. This gives a = ±√3/2.
∴ A is isometric if a = ±√3/2.
(ii) Proceed as in part (i).
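The value a = √3/2 found in part (i) can be checked numerically; the sketch below (NumPy, an addition of mine) verifies that A*A is then the identity, while a wrongly sized real entry fails.

```python
import numpy as np

a = np.sqrt(3) / 2
A = np.array([[a, 0.5], [-0.5, a]])

# A*A is the identity exactly when the diagonal entries aa-bar + 1/4 equal 1.
assert np.allclose(A.conj().T @ A, np.eye(2))

# A real a of the wrong size fails the isometry test.
B = np.array([[0.5, 0.5], [-0.5, 0.5]])
assert not np.allclose(B.conj().T @ B, np.eye(2))
```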
Example 4. Show that the following three conditions on a linear
operator T on an inner product space V are equivalent:
(i) T*T = I,
(ii) (Tα, Tβ) = (α, β) for all α and β,
(iii) || Tα || = || α || for all α. (Meerut 1978, 91)
Solution. (i) ⇒ (ii). It is given that T*T = I. We have
(Tα, Tβ) = (α, T*Tβ)
= (α, Iβ) = (α, β) for all α and β.
(ii) ⇒ (iii). It is given that (Tα, Tβ) = (α, β) for all α and β.
Taking β = α, we get
(Tα, Tα) = (α, α)
⇒ || Tα ||² = || α ||²
⇒ || Tα || = || α || for all α.
(iii) ⇒ (i). It is given that || Tα || = || α || for all α. Therefore
(Tα, Tα) = (α, α)
⇒ (T*Tα, α) = (α, α)
⇒ ((T*T − I)α, α) = 0 for all α. ...(1)
Now (T*T − I)* = (T*T)* − I* = T*T − I.
∴ T*T − I is self-adjoint.
Hence from (1), we get T*T − I = 0, i.e. T*T = I.
Example 5. If B₁ = {α₁, ..., αₙ} is an orthonormal basis of an
n-dimensional unitary space V and if B₂ = {β₁, ..., βₙ} is a second
basis of V, then the basis B₂ is orthonormal if and only if the transi-
tion matrix from the basis B₁ to the basis B₂ is unitary.
Solution. Let P = [pᵢⱼ]ₙₓₙ be the transition matrix from the
basis B₁ to the basis B₂. Then
βⱼ = p₁ⱼα₁ + p₂ⱼα₂ + ... + pₙⱼαₙ, j = 1, 2, ..., n.
We have
(βᵢ, βⱼ) = (p₁ᵢα₁ + p₂ᵢα₂ + ... + pₙᵢαₙ, p₁ⱼα₁ + p₂ⱼα₂
+ ... + pₙⱼαₙ)
= p₁ᵢ p̄₁ⱼ + p₂ᵢ p̄₂ⱼ + ... + pₙᵢ p̄ₙⱼ,
since (αᵢ, αⱼ) = δᵢⱼ, B₁ being an
orthonormal basis.
Now suppose B₂ is an orthonormal basis. Then (βᵢ, βⱼ) = δᵢⱼ
⇒ p₁ᵢ p̄₁ⱼ + ... + pₙᵢ p̄ₙⱼ = δᵢⱼ
⇒ P*P = [δᵢⱼ]ₙₓₙ = unit matrix
⇒ P is a unitary matrix.
Conversely suppose that P is a unitary matrix. Then
P*P = unit matrix
⇒ p₁ᵢ p̄₁ⱼ + ... + pₙᵢ p̄ₙⱼ = δᵢⱼ
⇒ (βᵢ, βⱼ) = δᵢⱼ
⇒ B₂ is an orthonormal basis.
Example 6. Let B and B′ be two ordered orthonormal bases
for a finite dimensional complex inner product space V. Prove that
for each linear operator T on V, the matrix [T]_B′ is unitarily
equivalent to the matrix [T]_B.
Solution. Let P be the transition matrix from the basis B to
the basis B′. Since B and B′ are orthonormal bases, therefore P
is a unitary matrix.
Therefore P*P = I
⇒ P* = P⁻¹.
Now [T]_B′ = P⁻¹ [T]_B P = P* [T]_B P.
∴ [T]_B′ is unitarily equivalent to the matrix [T]_B.
Exercises
1. Let V be any Euclidean vector space of dimension n with inner
product ( , ). Then V is inner product space isomorphic to
Vₙ(R) with standard inner product.
2. Prove that in an n-dimensional Euclidean space V the dot
product of the coordinate vectors of any two vectors of V is
invariant under transformation from one orthonormal basis to
another.
3. If B₁ = {α₁, α₂, ..., αₙ} is an orthonormal basis of an n-dimen-
sional Euclidean space V and if B₂ = {β₁, ..., βₙ} is a second
basis of V, then the basis B₂ is orthonormal if and only if the
transition matrix from the basis B₁ to the basis B₂ is ortho-
gonal.
4. If an isometric matrix is triangular, then it is diagonal.
(Meerut 1969)
5. If P and Q are orthogonal matrices, then PQ, P′, and P⁻¹ are
orthogonal and det P = ±1.
6. If P and Q are unitary matrices, then so is PQ.
7. Let B and B′ be two ordered orthonormal bases for a finite
dimensional real inner product space V. Prove that for each
linear operator T on V, the matrix [T]_B′ is orthogonally equi-
valent to the matrix [T]_B.
8. Prove that the relation of being ‘unitarily similar’ is an equi-
valence relation in the set of all n×n complex matrices.
9. Fill up the blanks in the following statements:
(i) Transition matrices expressing a change of orthonormal
bases are...
(ii) Matrices of a linear transformation expressing a change of
orthonormal bases are...
Ans. (i) unitary, (ii) unitarily equivalent.
§ 7. Normal Operators. Definition. Let T be a linear operator
on an inner product space V. Then T is said to be normal if it com-
mutes with its adjoint, i.e. if TT* = T*T. (Meerut 1972, 87)
If V is finite-dimensional, then T* will definitely exist. If V
is not finite-dimensional, then the above definition will make sense
only if T possesses an adjoint.
Note 1. Every self-adjoint operator is normal.
Suppose T is a self-adjoint operator, i.e. T* = T. Then obviously
T*T = TT*. Therefore T is normal.
Note 2. Every unitary operator is normal.
Suppose T is a unitary operator. Then the adjoint T* of T
exists and we have T*T = TT* = I. Therefore T is normal.
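Both notes can be verified on small matrices. In the sketch below (the helper name is_normal is mine, not the text's), a Hermitian matrix, a unitary matrix, and a matrix that is normal without being either all pass the commutation test, while a shear fails it.

```python
import numpy as np

def is_normal(A, tol=1e-10):
    """T commutes with its adjoint: T T* = T* T."""
    return np.allclose(A @ A.conj().T, A.conj().T @ A, atol=tol)

H = np.array([[2, 1 - 1j], [1 + 1j, 3]])  # Hermitian (self-adjoint)
U = np.array([[0, -1], [1, 0]])           # orthogonal, hence unitary
N = np.array([[1, -1], [1, 1]])           # normal, but neither of the above
S = np.array([[1, 1], [0, 1]])            # a shear: not normal

assert is_normal(H) and is_normal(U) and is_normal(N)
assert not is_normal(S)
```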
Theorem 1. Let T be a normal operator on an inner product
space V. Then a necessary and sufficient condition that α be a charac-
teristic vector of T is that it be a characteristic vector of T*.
(Meerut 1977, 78, 81, 84P, 88, 91)
Proof. Suppose T is a normal operator on an inner product
space V. Then TT* = T*T. If α is any vector in V, then we have
|| Tα ||² = (Tα, Tα) = (α, T*Tα) = (α, TT*α)
= (T*α, T*α) = || T*α ||².
∴ If T is normal and if α is any vector in V, then
|| Tα || = || T*α ||. ...(1)
Further if c is any scalar, then
(T − cI)* = T* − c̄I* = T* − c̄I.
We shall show that T − cI is normal.
We have (T − cI)(T − cI)* = (T − cI)(T* − c̄I)
= TT* − c̄T − cT* + cc̄I.
Also (T − cI)* (T − cI) = (T* − c̄I)(T − cI)
= T*T − cT* − c̄T + cc̄I.
Since T*T = TT*, therefore
(T − cI)(T − cI)* = (T − cI)* (T − cI).
Thus T − cI is normal. Therefore from (1), we have
|| (T − cI)α || = || (T − cI)* α || ∀ α ∈ V
i.e.
|| (T − cI)α || = || (T* − c̄I)α || ∀ α ∈ V. ...(2)
From (2), we conclude that
(T − cI)α = 0 iff (T* − c̄I)α = 0
i.e. Tα = cα iff T*α = c̄α.
Thus α is a characteristic vector of T with characteristic value
c iff it is a characteristic vector of T* with characteristic value c̄.
Theorem 2. If T is a normal operator on an inner product space
V, then the characteristic vectors for T belonging to distinct characte-
ristic values are orthogonal.
Proof. Suppose T is a normal operator on an inner product
space V. Let α, β be the characteristic vectors for T corresponding
to the characteristic values c₁ and c₂ where c₁ ≠ c₂. Then
Tα = c₁α and Tβ = c₂β. Also T*β = c̄₂β. We have
c₁ (α, β) = (c₁α, β) = (Tα, β) = (α, T*β) = (α, c̄₂β) = c₂ (α, β).
∴ (c₁ − c₂)(α, β) = 0
⇒ (α, β) = 0 [∵ c₁ ≠ c₂]
⇒ α, β are orthogonal.
Corollary. Characteristic spaces of a normal operator are
pair-wise orthogonal. (Meerut 1975, 78)
Proof. Let W₁, W₂ be characteristic spaces of a normal opera-
tor T corresponding to the distinct characteristic values c₁ and c₂.
Then to prove that W₁ is orthogonal to W₂, let α ∈ W₁, β ∈ W₂.
Then Tα = c₁α, Tβ = c₂β. By theorem 2, (α, β) = 0. Thus every
vector α ∈ W₁ is orthogonal to every vector β belonging to W₂.
Therefore W₁ is orthogonal to W₂.
Theorem 3. Let V be a finite-dimensional complex inner
product space, and let T be a normal operator on V. Then V has an
orthonormal basis B, each vector of which is a characteristic vector
for T, and consequently the matrix of T with respect to B is a diago-
nal matrix.
Proof. Since T is a linear operator on a finite dimensional
complex inner product space V, therefore T must have a charac-
teristic value and so T must have a characteristic vector.
Let 0 ≠ α be a characteristic vector for T. Let α₁ = α / || α ||.
Then α₁ is also a characteristic vector for T and || α₁ || = 1. If
dim V = 1, then {α₁} is an orthonormal basis for V and α₁ is a
characteristic vector for T. Thus the theorem is true if dim V = 1.
Now we proceed by induction on the dimension of V. Suppose the
theorem is true for inner product spaces of dimension less than
dim V. Then we shall prove that it is true for V and the proof
will be complete by induction.
Let W be the one-dimensional subspace of V spanned by the
characteristic vector α₁ for T. Let α₁ be the characteristic vector
corresponding to the characteristic value c. Then Tα₁ = cα₁. If β
is any vector in W, then β = kα₁ where k is some scalar. We have
Tβ = T(kα₁) = kTα₁ = k(cα₁) = (kc) α₁. Therefore Tβ ∈ W. Thus W
is invariant under T. Therefore W⊥ is invariant under T*. Now T
is normal. Therefore if α₁ is a characteristic vector of T, then α₁
is also a characteristic vector of T*. Therefore by the same argu-
ment as above W is also invariant under T*. So W⊥ is invariant
under (T*)*, i.e., is invariant under T. If dim V = n, then
dim W⊥ = dim V − dim W = n − 1. Therefore W⊥, with the inner
product from V, is a complex inner product space of dimension one
less than the dimension of V.
Suppose U is the linear operator induced by T on W⊥, i.e. U is
the restriction of T to W⊥. Then Uγ = Tγ ∀ γ ∈ W⊥. The restri-
ction of T* to W⊥ will be the adjoint U* of U. Now U is a normal
operator on W⊥. For if γ is any vector in W⊥, then
(UU*) γ = U(U*γ) = U(T*γ) = T(T*γ) = (TT*) γ
= (T*T) γ = T*(Tγ) = T*(Uγ) = U*(Uγ) = (U*U) γ.
∴ UU* = U*U and thus U is a normal operator on W⊥ whose
dimension is less than dimension of V.
Therefore by our induction hypothesis, W⊥ has an orthonormal
basis {α₂, ..., αₙ} consisting of characteristic vectors for U. Suppose
αᵢ is the characteristic vector for U corresponding to the characteri-
stic value cᵢ. Then Uαᵢ = cᵢαᵢ ⇒ Tαᵢ = cᵢαᵢ. Therefore αᵢ is also a
characteristic vector for T. Thus α₂, ..., αₙ are also characteristic
vectors for T. Since V = W ⊕ W⊥, therefore B = {α₁, α₂, ..., αₙ} is an
orthonormal basis for V each vector of which is a characteristic
vector for T. The matrix of T relative to B will be a diagonal
matrix.
Hence the theorem.

Normal matrix. Definition. A complex n×n matrix A is said
to be normal if
AA* = A*A.
If D is a diagonal matrix, then obviously
DD* = D*D.
Therefore every diagonal matrix is necessarily a normal matrix.
Theorem 4. Let A be an n×n matrix with complex entries.
There exists a unitary matrix P such that P*AP is diagonal if and
only if
AA* = A*A.
In other words, A is unitarily equivalent to a diagonal matrix if and
only if A is normal.
Proof. Let V be the vector space Cⁿ, with the standard inner
product. Let B denote the standard ordered basis for V and let
T be the linear operator on V which is represented in the standard
ordered basis by the matrix A. Then
[T]_B = A.
Also [T*]_B = A*.
We have
[TT*]_B = [T]_B [T*]_B = AA*
and [T*T]_B = A*A.
If A is a normal matrix, then
AA* = A*A
⇒ [TT*]_B = [T*T]_B
⇒ TT* = T*T ⇒ T is normal.
Since T is a normal operator on a finite dimensional complex
inner product space V, therefore there exists an orthonormal basis,
say B′, for V each vector of which is a characteristic vector for T.
Consequently [T]_B′ will be a diagonal matrix. Now let P be the
transition matrix from the basis B to the basis B′. Since B and B′
are orthonormal bases, therefore P is a unitary matrix. [Note that
the standard ordered basis is an orthonormal basis.] Now P is a
unitary matrix implies that P*P = I and therefore P⁻¹ = P*.
We have [T]_B′ = P⁻¹ [T]_B P = P* AP.
Thus there exists a unitary matrix P such that
P*AP = [T]_B′ = a diagonal matrix.
Hence A is unitarily similar to a diagonal matrix.
Conversely suppose that A is unitarily similar to a diagonal
matrix. Then there exists a unitary matrix P such that P*AP = D
where D is a diagonal matrix. Since P is unitary, therefore
P⁻¹ = P*. Therefore, we have
P⁻¹AP = D
⇒ A = PDP⁻¹.
We have AA* = (PDP⁻¹)(PDP⁻¹)*
= (PDP*)(PDP*)* [∵ P⁻¹ = P*]
= PDP* (P*)* D*P* = PDP* PD*P*
= PDD*P* [∵ P*P = I]
= PD*DP* [∵ D is diagonal ⇒ D is normal]
= PD*P*PDP* [∵ P*P = I]
= (PD*P*)(PDP*) = (PDP*)* (PDP*) = (PDP⁻¹)* (PDP⁻¹)
= A*A.
∴ A is normal.
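Theorem 4 can be illustrated with a real normal matrix that is not symmetric. Because its eigenvalues are distinct, the unit eigenvectors returned by np.linalg.eig are pairwise orthogonal (Theorem 2 of this section), so the eigenvector matrix is unitary; this is a sketch for that special case, not a general diagonalisation algorithm.

```python
import numpy as np

# A real matrix that is normal (A A* = A* A) but not symmetric.
A = np.array([[1.0, -1.0], [1.0, 1.0]])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

# Its eigenvalues 1 +/- i are distinct, so the unit eigenvectors
# returned by eig are mutually orthogonal and P is unitary.
w, P = np.linalg.eig(A)
assert np.allclose(P.conj().T @ P, np.eye(2))

# P* A P is the diagonal matrix of characteristic values.
D = P.conj().T @ A @ P
assert np.allclose(D, np.diag(w))
```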
Solved Examples
Example 1. Let T be a normal operator on an inner product
space V. If c is a scalar, prove that cT is also normal.
Solution. It is given that T is normal. Therefore TT* = T*T.
We have (cT)* = c̄T*.
Now (cT)(cT)* = (cT)(c̄T*) = cc̄ (TT*).
Also (cT)* (cT) = (c̄T*)(cT) = (c̄c)(T*T)
= (cc̄)(TT*).
∴ (cT)(cT)* = (cT)* (cT).
Hence cT is normal.
Example 2. If T₁, T₂ are normal operators on an inner product
space with the property that either commutes with the adjoint of the
other, then prove that T₁ + T₂ and T₁T₂ are also normal operators.
(Meerut 1989)
Solution. Since T₁, T₂ are normal operators, therefore
T₁T₁* = T₁*T₁ and T₂T₂* = T₂*T₂. ...(1)
Also it is given that
T₁T₂* = T₂*T₁ and T₂T₁* = T₁*T₂. ...(2)
To prove that T₁ + T₂ is normal. T₁ + T₂ will be normal if
(T₁ + T₂)(T₁ + T₂)* = (T₁ + T₂)* (T₁ + T₂).
We have (T₁ + T₂)(T₁ + T₂)* = (T₁ + T₂)(T₁* + T₂*)
= T₁T₁* + T₁T₂* + T₂T₁* + T₂T₂*
= T₁*T₁ + T₂*T₁ + T₁*T₂ + T₂*T₂ [from (1) and (2)]
= T₁*(T₁ + T₂) + T₂*(T₁ + T₂) = (T₁* + T₂*)(T₁ + T₂)
= (T₁ + T₂)* (T₁ + T₂).
∴ T₁ + T₂ is normal.
Also to prove that T₁T₂ is normal.
We have
(T₁T₂)(T₁T₂)* = T₁T₂T₂*T₁* = T₁ (T₂T₂*) T₁*
= T₁ (T₂*T₂) T₁* [by (1)]
= (T₁T₂*)(T₂T₁*)
= (T₂*T₁)(T₁*T₂) [by (2)]
= T₂* (T₁T₁*) T₂
= T₂* (T₁*T₁) T₂ [from (1)]
= (T₂*T₁*)(T₁T₂) = (T₁T₂)* (T₁T₂).
∴ T₁T₂ is normal.
Example 3. Let T be a linear operator on a finite-dimensional
complex inner product space V. If || Tα || = || T*α || for all α in V,
then T is normal. (Meerut 1973, 90)
Solution. If α ∈ V, then we have
|| Tα || = || T*α || ⇒ || Tα ||² = || T*α ||²
⇒ (Tα, Tα) = (T*α, T*α) ⇒ (T*Tα, α) = (TT*α, α)
⇒ (T*Tα, α) − (TT*α, α) = 0 ⇒ ((T*T − TT*) α, α) = 0.
Thus if || Tα || = || T*α || for all α, then ((T*T − TT*) α, α) = 0
for all α. Since V is a complex inner product space, therefore
from this we get T*T − TT* = 0
⇒ T*T = TT* ⇒ T is normal.
Example 4. Let T be a normal operator on an inner product
space V. If α ∈ V, Tα = 0 ⇔ T*α = 0.
Solution. Since T is a normal operator on an inner product
space V, therefore || Tα || = || T*α || for all α in V.
Here Tα = 0 ⇔ || Tα || = 0 ⇔ || T*α || = 0 ⇔ T*α = 0.
Example 5. Let T be a linear operator on a finite-dimensional
complex inner product space. Prove that T is normal if and only if
its real and imaginary parts commute. (Meerut 1974)
Solution. Let T = T₁ + iT₂, where T₁* = T₁ and T₂* = T₂.
Suppose T₁T₂ = T₂T₁. Then to prove that T is normal.
We have T* = (T₁ + iT₂)* = T₁* − iT₂* = T₁ − iT₂.
∴ TT* = (T₁ + iT₂)(T₁ − iT₂) = T₁² − iT₁T₂ + iT₂T₁ + T₂²
= T₁² + T₂² [∵ T₁T₂ = T₂T₁]
Also T*T = (T₁ − iT₂)(T₁ + iT₂) = T₁² + iT₁T₂ − iT₂T₁ + T₂²
= T₁² + T₂².
∴ TT* = T*T. Hence T is normal.
Conversely, suppose that T is normal.
Then TT* = T*T
⇒ T₁² − iT₁T₂ + iT₂T₁ + T₂² = T₁² + iT₁T₂ − iT₂T₁ + T₂²
⇒ 2i (T₁T₂ − T₂T₁) = 0
⇒ T₁T₂ − T₂T₁ = 0 [∵ 2i ≠ 0]
⇒ T₁T₂ = T₂T₁.
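The decomposition T = T₁ + iT₂ with T₁ = (T + T*)/2 and T₂ = (T − T*)/2i can be tested numerically. This is an illustrative sketch (the function name parts is mine): for a normal matrix the two self-adjoint parts commute, and for a shear they do not.

```python
import numpy as np

def parts(T):
    """Cartesian decomposition T = T1 + i T2 with T1, T2 self-adjoint."""
    T1 = (T + T.conj().T) / 2
    T2 = (T - T.conj().T) / (2j)
    return T1, T2

# A normal matrix: here a unitary one obtained from a QR factorisation.
rng = np.random.default_rng(3)
T, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
T1, T2 = parts(T)
assert np.allclose(T1, T1.conj().T) and np.allclose(T2, T2.conj().T)
assert np.allclose(T1 @ T2, T2 @ T1)   # parts of a normal operator commute

# A non-normal matrix: its parts fail to commute.
S = np.array([[1, 1], [0, 1]], dtype=complex)
S1, S2 = parts(S)
assert not np.allclose(S1 @ S2, S2 @ S1)
```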

Example 6. If T is an arbitrary linear operator on a finite-
dimensional complex inner product space, and a and b are complex
numbers such that | a | = | b | = 1, then aT + bT* is normal.
Solution. We have (aT + bT*)* = (aT)* + (bT*)* = āT* + b̄T.
Now (aT + bT*)(aT + bT*)* = (aT + bT*)(āT* + b̄T)
= aā TT* + ab̄ T² + bā (T*)² + bb̄ T*T
= | a |² TT* + ab̄ T² + bā (T*)² + | b |² T*T
= TT* + ab̄ T² + bā (T*)² + T*T.
Also (aT + bT*)* (aT + bT*) = (āT* + b̄T)(aT + bT*)
= | a |² T*T + āb (T*)² + b̄a T² + | b |² TT*
= T*T + āb (T*)² + b̄a T² + TT*.
∴ (aT + bT*)(aT + bT*)* = (aT + bT*)* (aT + bT*).
Hence aT + bT* is normal.
Example 7. If T is normal, c is a characteristic value of T, and
W is the characteristic space of c, i.e. W is the set of all solutions of
Tα = cα, then both W and W⊥ are invariant under T.
Solution. Let α ∈ W. Then Tα = cα ∈ W.
∴ W is invariant under T.
Now in order to prove that W⊥ is invariant under T, we shall
first prove that W is invariant under T*.
Let α ∈ W. Then Tα = cα. We have
T(T*α) = (TT*)α = (T*T)α
= T*(Tα) = T*(cα) = c (T*α).
Now T(T*α) = c (T*α) implies that T*α is in W.
Thus α ∈ W ⇒ T*α ∈ W. Therefore W is invariant under T*.
Hence W⊥ is invariant under (T*)*, i.e., T.
Example 8. Suppose T is a linear operator on a finite-dimen-
sional inner product space V and suppose there exists an orthonormal
basis B = {α₁, ..., αₙ} for V such that each vector in B is a characteristic
vector for T. Then prove that T is normal.
Solution. If αᵢ ∈ B, then it is given that αᵢ is a characteristic
vector for T. So let
Tαᵢ = cᵢαᵢ.
Then [T]_B is a diagonal matrix with diagonal elements c₁, ..., cₙ.
Since [T*]_B = ([T]_B)*, therefore [T*]_B is also a diagonal matrix with
diagonal elements c̄₁, ..., c̄ₙ. Now two diagonal matrices commute.
Therefore
[T]_B [T*]_B = [T*]_B [T]_B
⇒ [TT*]_B = [T*T]_B
⇒ TT* = T*T ⇒ T is normal.
Example 9. Let T be a normal operator and α a vector such that
T²α = 0. Then Tα = 0.
Hence show that the range and null space of a normal operator
are disjoint.
Solution. Let α be a vector such that T²α = 0.
Let Tα = β. Then to prove that β = 0.
We have Tα = β
⇒ T(Tα) = Tβ ⇒ T²α = Tβ ⇒ 0 = Tβ.
Now T is normal. Therefore || Tβ || = || T*β ||. Hence
Tβ = 0 ⇒ T*β = 0.
Now (β, β) = (β, Tα) = (T*β, α) = (0, α) = 0.
∴ β = 0, i.e. Tα = 0.
Second Part. Let R(T) denote the range of T and N(T) denote
the null space of T. Let α ∈ R(T) ∩ N(T). Then α ∈ R(T)
and α ∈ N(T). Since α ∈ N(T), therefore Tα = 0. Also α ∈ R(T)
⇒ α = Tβ for some vector β. Now
Tβ = α ⇒ T(Tβ) = Tα ⇒ T²β = Tα = 0.
But T is a normal operator, therefore by first part
T²β = 0 ⇒ Tβ = 0
⇒ α = 0.
Thus α ∈ R(T) ∩ N(T) ⇒ α = 0. Therefore
R(T) ∩ N(T) = {0}, i.e. R(T) and N(T) are disjoint.
Example 10. Let T be a normal operator on a finite-dimensional
complex inner product space V and f a polynomial with complex
coefficients. Then the operator f(T) is normal. (Meerut 1972)
Solution. Let f = a₀ + a₁x + ... + aₘx^m; then
f(T) = a₀I + a₁T + ... + aₘT^m.
We know that (cT)* = c̄T*. Also if k is any positive integer,
then (T^k)* = (TT...T, up to k times)*
= T*T*...T*, up to k times = (T*)^k.
We have [f(T)]* = [a₀I + a₁T + ... + aₘT^m]*
= ā₀I + ā₁T* + ... + āₘ(T*)^m.
Now f(T) will be a normal operator if we prove that
[f(T)]* f(T) = f(T) [f(T)]*.
Since T is normal, therefore TT* = T*T. First we shall prove
that if p is any positive integer, then T(T*)^p = (T*)^p T. We shall
prove it by induction on p. Obviously the result is true when p = 1.
Assume as our induction hypothesis that the result is true for p = k,
i.e. T(T*)^k = (T*)^k T. Then
T(T*)^(k+1) = (TT*)(T*)^k = (T*T)(T*)^k = T* (T(T*)^k)
= T* ((T*)^k T) = (T*)^(k+1) T.
Thus the result is true for p = k + 1 if it is true for p = k. Hence
it is true for all positive integers p.
Now we shall prove that if p is any fixed positive integer,
then (T*)^p T^q = T^q (T*)^p for all positive integers q. Obviously this
result is true for q = 1, as we have just proved it. Now assume
that this result is true for q = k, i.e., (T*)^p T^k = T^k (T*)^p. Then
(T*)^p T^(k+1) = ((T*)^p T^k) T = (T^k (T*)^p) T = T^k ((T*)^p T)
= T^k (T(T*)^p) = T^(k+1) (T*)^p.
Thus the result is true for q = k + 1 if it is true for q = k.
Hence it is true for all positive integers q.
So far we have proved that (T*)^p T^q = T^q (T*)^p for all positive
integers p and q. Now if p is any positive integer, then
(T*)^p [f(T)] = (T*)^p [a₀I + a₁T + ... + aₘT^m]
= a₀(T*)^p I + a₁(T*)^p T + ... + aₘ(T*)^p T^m
= a₀I (T*)^p + a₁T (T*)^p + ... + aₘT^m (T*)^p
= [f(T)] (T*)^p.
Now [f(T)]* f(T) = [ā₀I + ā₁T* + ... + āₘ(T*)^m] f(T)
= ā₀I f(T) + ā₁T* f(T) + ... + āₘ(T*)^m f(T)
= ā₀ f(T) I + ā₁ f(T) T* + ... + āₘ f(T) (T*)^m
= f(T) [ā₀I + ā₁T* + ... + āₘ(T*)^m] = f(T) [f(T)]*.
Hence f(T) is normal.
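Example 10 is easy to spot-check for a concrete polynomial. This is an illustrative sketch (the matrix and the polynomial f(x) = 2 − 3x + x² are arbitrary choices of mine):

```python
import numpy as np

def is_normal(A, tol=1e-10):
    return np.allclose(A @ A.conj().T, A.conj().T @ A, atol=tol)

# A normal (here real, rotation-like) matrix T.
T = np.array([[1.0, -1.0], [1.0, 1.0]])
assert is_normal(T)

# f(x) = 2 - 3x + x^2, applied to T as f(T) = 2I - 3T + T^2.
fT = 2 * np.eye(2) - 3 * T + T @ T
assert is_normal(fT)
```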
Example 11. Show that the minimal polynomial of a normal
operator on a finite-dimensional inner product space has distinct
roots. (Meerut 1976, 80)
Solution. Let p(x) be the minimal polynomial of a normal
operator T on a finite dimensional inner product space V. Then
p(x) is the monic polynomial of lowest degree that annihilates
T, i.e., for which p(T) = 0. We shall show that p(x) = (x − c₁)...
(x − cₖ) where c₁, ..., cₖ are distinct complex numbers. Suppose
c₁, ..., cₖ are not all distinct, i.e., suppose some root c of p(x) is
repeated. Then p(x) = (x − c)² g(x) for some polynomial g(x).
Now p(T) = 0 ⇒ (T − cI)² g(T) = 0
⇒ (T − cI)² g(T)α = 0, ∀ α ∈ V.
Let us set U = T − cI. Since the operator T is normal and x − c
is a polynomial with complex coefficients, therefore T − cI = U is a
normal operator. [Refer Ex. 10 above]
Now let α be any vector in V and let β = g(T)α. Then
U²β = U² g(T)α = (T − cI)² g(T)α = 0.
Since the operator U is normal, therefore
U²β = 0 ⇒ Uβ = 0 [See Ex. 9]
⇒ (T − cI)β = 0
⇒ (T − cI) g(T)α = 0, ∀ α ∈ V
⇒ (T − cI) g(T) = 0.
Thus the operator T annihilates the monic polynomial
(x − c) g(x) and the degree of this polynomial is less than that of
p(x). Therefore it contradicts the assumption that p(x) is the
minimal polynomial for T. Hence no root c of p(x) is repeated.
Therefore p(x) has distinct roots.
§ 8. Characterization of Spectra.
Theorem 1. Let T be a self-adjoint linear operator on an inner
product space V. Then each characteristic value of T is real. Also
if T is positive, or non-negative, then every characteristic value of T
is positive, or non-negative, respectively.
Proof. Suppose c is a characteristic value of T. Then Tα = cα
for some non-zero vector α. We have
c (α, α) = (cα, α) = (Tα, α) = (α, T*α)
= (α, Tα) [∵ T* = T]
= (α, cα) = c̄ (α, α).
∴ (c − c̄)(α, α) = 0
⇒ c = c̄ [∵ α ≠ 0 ⇒ (α, α) ≠ 0]
⇒ c is real.
Also (Tα, α) = (cα, α) = c (α, α) = c || α ||².
∴ c = (Tα, α) / || α ||².
If T is positive, then (Tα, α) > 0. Therefore c > 0,
i.e. c is positive.
If T is non-negative, then (Tα, α) ≥ 0. Therefore c ≥ 0, i.e.,
c is non-negative.
Theorem 2. Every characteristic value of an isometry has
absolute value one.
Proof. Let T be an isometry and let c be a characteristic
value of T. Then Tα = cα for some non-zero vector α.
Since T is an isometry, therefore
|| Tα || = || α || ⇒ || cα || = || α || ⇒ | c | || α || = || α ||.
∵ || α || ≠ 0, therefore | c | = 1.
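Theorems 1 and 2 of this section can be illustrated together. In the sketch below (NumPy and the random test matrices are my additions), a Hermitian matrix has real characteristic values and a unitary matrix has characteristic values of absolute value one.

```python
import numpy as np

rng = np.random.default_rng(4)

# Self-adjoint: characteristic values are real.
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = H + H.conj().T                      # force H* = H
assert np.allclose(np.linalg.eigvals(H).imag, 0, atol=1e-10)

# Isometry (unitary): characteristic values have absolute value one.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)
```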
Theorem 3. Let T be either self-adjoint or isometric; then
characteristic vectors of T belonging to distinct characteristic values
are orthogonal. (Meerut 1968, 69)
Proof. Let T be a linear operator on an inner product space
V. Let c₁ and c₂ be two distinct characteristic values of T. Let α,
β be characteristic vectors for T corresponding to the characteristic
values c₁ and c₂ respectively. Then
Tα = c₁α, Tβ = c₂β.
Case I. If T is self-adjoint, then T* = T. Also by theorem 1
both c₁ and c₂ are real. We have
c₁ (α, β) = (c₁α, β) = (Tα, β) = (α, T*β)
= (α, Tβ) = (α, c₂β) = c̄₂ (α, β) = c₂ (α, β).
∴ (c₁ − c₂)(α, β) = 0
⇒ (α, β) = 0 [∵ c₁ ≠ c₂]
⇒ α and β are orthogonal.
Case II. If T is an isometry, then (α, β) = (Tα, Tβ). Also by
theorem 2, | c₂ | = 1, i.e., | c₂ |² = 1, i.e., c₂c̄₂ = 1, i.e., c̄₂ = 1/c₂. We
have
(α, β) = (Tα, Tβ) = (c₁α, c₂β) = c₁c̄₂ (α, β) = (c₁/c₂)(α, β).
∴ (c₁/c₂ − 1)(α, β) = 0
⇒ (α, β) = 0 [∵ c₁ ≠ c₂]
⇒ α and β are orthogonal.
Theorem 4. Every root of the characteristic equation of a self-
adjoint operator on a finite-dimensional inner product space is real.
Proof. Suppose T is a self-adjoint linear operator on a finite
dimensional inner product space V. If V is a complex vector
space, then every root of the characteristic equation of T is also a
characteristic value of T and so is real by theorem 1.
If V is a real vector space, then it is possible to show that
there exists a Hermitian operator T⁺ on some complex inner pro-
duct space such that T and T⁺ have the same characteristic equa-
tion. Now every root of the characteristic equation of T⁺ is also
a characteristic value of T⁺. So it must be real. Hence every root
of the characteristic equation of T must also be real.
Corollary. Every self-adjoint operator on a finite-dimensional
inner product space has a characteristic value and consequently a
characteristic vector. (Meerut 1973)
Theorem 5. Let V be a finite-dimensional inner product space
and let T be a self-adjoint linear operator on V. Then there is an
orthonormal basis B for V, each vector of which is a characteristic
vector for T, and consequently the matrix of T with respect to B is a
diagonal matrix.
Proof. Since T is a self-adjoint linear operator on a finite
dimensional inner product space V, therefore T must have a charac-
teristic value and so T must have a characteristic vector.
Let 0 ≠ α be a characteristic vector for T.
Let α₁ = α / || α ||. Then α₁ is also a characteristic vector for T
and || α₁ || = 1. If dim V = 1, then {α₁} is an orthonormal basis for V
and α₁ is a characteristic vector for T. Thus the theorem is true
if dim V = 1. Now we proceed by induction on the dimension of V.
Suppose the theorem is true for inner product spaces of dimension
less than dim V. Then we shall prove that it is true for V and the
proof will be complete by induction.
Let W be the one-dimensional subspace of V spanned by the
characteristic vector α₁ for T. Let α₁ be the characteristic vector
corresponding to the characteristic value c. Then Tα₁ = cα₁. If β is
any vector in W, then β = kα₁ where k is some scalar. We have
Tβ = T(kα₁) = kTα₁ = k(cα₁) = (kc) α₁.
Therefore Tβ ∈ W. Thus W is invariant under T. Therefore
W⊥ is invariant under T*. But T is self-adjoint means that T = T*.
Therefore W⊥ is invariant under T.
If dim V = n, then dim W⊥ = dim V − dim W = n − 1.
Therefore W⊥, with the inner product from V, is an inner pro-
duct space of dimension one less than the dimension of V.
Suppose U is the linear operator induced by T on W⊥, i.e., U is
the restriction of T to W⊥. Then Uγ = Tγ ∀ γ ∈ W⊥. The restri-
ction of T* to W⊥ will be the adjoint U* of U. Now U is a self-
adjoint linear operator on W⊥, because if γ is any vector in W⊥,
then
U*γ = T*γ = Tγ = Uγ.
Thus U is a self-adjoint linear operator on W⊥ whose dimen-
sion is less than dimension of V.
Therefore by our induction hypothesis, W⊥ has an orthonormal
basis {α₂, ..., αₙ} consisting of characteristic vectors for U. Suppose
αᵢ is the characteristic vector for U corresponding to the character-
istic value cᵢ. Then Uαᵢ = cᵢαᵢ ⇒ Tαᵢ = cᵢαᵢ. Therefore αᵢ is also a
characteristic vector of T. Thus α₂, ..., αₙ are also characteristic
vectors for T. Since V = W ⊕ W⊥, therefore B = {α₁, α₂, ..., αₙ} is
an orthonormal basis for V each vector of which is a characteristic
vector of T. The matrix of T relative to B will be a diagonal
matrix.
Theorem 6. Let A be an nxn Hermitian matrix. Then there
exists a unitary matrix P such that P* AP is a diagonal matrix.
Proof. Let V be the vector space C", with the standard inner
product. Let B denote the standard ordered basis for V and let T
be the linear operator on V which is represented in the standard
ordered basis by the matrix A.
Then {T]b^A.
Also
Since ^4 is a Hermitian mistrix, therefore A*=A, Consequently
{T*]b=‘[T]b. Hence T*=>T and thus T is a self-adjoint linear
operator.
Since T is a self-adjoint linear operator on a finite dimensional
complex inner product space, therefore there exists an orthonormal
basis, say B′, for V each vector of which is a characteristic
vector for T. Consequently [T]B′ will be a diagonal matrix.
Now let P be the transition matrix from the basis B to the basis
B′. Since B and B′ are orthonormal bases, therefore P is a unitary
matrix. Now P is a unitary matrix implies that P⁻¹ = P*.
We have [T]B′ = P⁻¹ [T]B P = P* AP.
Thus there exists a unitary matrix P such that P* AP = [T]B′ = a
diagonal matrix.
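As a numerical illustration of Theorem 6 (an addition here, not part of the original text), the sketch below uses NumPy's `numpy.linalg.eigh`, which for a Hermitian matrix returns real characteristic values together with an orthonormal basis of characteristic vectors; the matrix P whose columns are those vectors is unitary, and P* AP comes out diagonal. The particular matrix A is an arbitrary choice for the demonstration.

```python
import numpy as np

# A Hermitian matrix A (A* = A), chosen arbitrarily for illustration.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)  # A is Hermitian

# eigh returns real eigenvalues and an orthonormal set of eigenvectors;
# the columns of P are the vectors of the orthonormal basis B'.
eigvals, P = np.linalg.eigh(A)

# P is unitary: P* P = I
assert np.allclose(P.conj().T @ P, np.eye(2))

# P* A P is the diagonal matrix of the characteristic values
D = P.conj().T @ A @ P
assert np.allclose(D, np.diag(eigvals))
```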
Theorem 7. The self-adjoint linear transformation T on a finite
dimensional inner product space V is non-negative if and only if all of
its characteristic values are non-negative.
Proof. Suppose that T is non-negative, i.e., T ≥ 0.
Let λ be a characteristic value of T.
Then T(α) = λα for some α ≠ 0.
We have (Tα, α) = (λα, α) = λ (α, α) = λ || α ||².
∴ λ = (Tα, α) / || α ||².
Since T ≥ 0, therefore (Tα, α) ≥ 0.
Also || α ||² > 0. Therefore λ ≥ 0.
Conversely suppose that T has all its characteristic values non-negative.
Since T is self-adjoint, therefore we can find an orthonormal
basis {β1, ..., βn} consisting of characteristic vectors of T.
For each βi, we have Tβi = λiβi, where λi ≥ 0.

Now let γ be any vector in V. Let γ = a1β1 + a2β2 + ... + anβn.
Then Tγ = a1Tβ1 + a2Tβ2 + ... + anTβn
= a1λ1β1 + a2λ2β2 + ... + anλnβn.
We have (Tγ, γ) = (a1λ1β1 + a2λ2β2 + ... + anλnβn,
a1β1 + a2β2 + ... + anβn)
= a1ā1λ1 + a2ā2λ2 + ... + anānλn
[∵ {β1, ..., βn} is an orthonormal basis]
= | a1 |² λ1 + | a2 |² λ2 + ... + | an |² λn
≥ 0, since each λi ≥ 0 and | ai |² ≥ 0.
Thus (Tγ, γ) ≥ 0 for all γ ∈ V.
Hence T ≥ 0.
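Theorem 7 lends itself to a numerical check. The sketch below (an added illustration, not from the text) builds a self-adjoint non-negative operator as MᵀM — such a product always satisfies (Tα, α) = ||Mα||² ≥ 0 — and verifies that its characteristic values are non-negative and that the quadratic form is non-negative on random vectors; the matrix M is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# A self-adjoint (real symmetric) operator built as M^T M is non-negative,
# since (T a, a) = (M a, M a) = ||M a||^2 >= 0.
M = rng.standard_normal((4, 4))
T = M.T @ M

# All characteristic values are non-negative...
assert np.all(np.linalg.eigvalsh(T) >= -1e-12)

# ...and equivalently (T a, a) >= 0 for every vector a.
for _ in range(100):
    a = rng.standard_normal(4)
    assert a @ T @ a >= -1e-12
```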
Exercises
1. Let A be an n × n real symmetric matrix. Then there exists a
real orthogonal matrix P such that P*AP is a diagonal matrix.
(Meerut 1988)
2. The self-adjoint linear operator T on a finite dimensional inner
product space V is positive if and only if all its characteristic
values are positive.
§ 9. Perpendicular projections or Orthogonal projections.
Definition. Suppose V is a finite dimensional inner product space
and W is a subspace of V. Then V = W ⊕ W⊥. Let E be the projection
on W along W⊥. Then E is called a perpendicular projection
or an orthogonal projection. (Meerut 1975)
Since W⊥ is uniquely determined by W, therefore there is no
necessity of saying that E is a perpendicular projection on W along
W⊥. It is sufficient to say that E is a perpendicular projection on
W and we shall write E = P_W.
If E is the perpendicular projection on W, then W is the range
of E and W⊥ is the null space of E. Also W is the set of all solutions
of the equation Eα = α and W⊥ is the set of all solutions of
the equation Eα = 0. Further I − E is the projection on W⊥ along
W. Therefore I − E is the perpendicular projection on W⊥.
Theorem 1. A linear transformation E is a perpendicular projection
if and only if E = E² = E*. Perpendicular projections are
non-negative linear transformations and have the property that
|| Eα || ≤ || α || for all α. (Meerut 1975, 78)
Proof. Suppose E is the perpendicular projection on W. Then
E is the projection on W along W⊥. Since E is a projection, therefore
E² = E. Also
R(E) = the range of E = W = {α ∈ V : Eα = α}
and N(E) = the null space of E = W⊥ = {α ∈ V : Eα = 0}.

Let α, β be any two vectors in V. Since V = W ⊕ W⊥, so let
α = α1 + α2 and β = β1 + β2 where α1, β1 ∈ W and α2, β2 ∈ W⊥. Then
Eα = α1 and Eβ = β1. We have
(Eα, β) = (α1, β1) + (α1, β2)
= (α1, β1)
[∵ (α1, β2) = 0 because α1 ∈ W and β2 ∈ W⊥]
= (α1, β1) + (α2, β1) [∵ (α2, β1) = 0]
= (α1 + α2, β1) = (α, β1) = (α, Eβ).
Thus (Eα, β) = (α, Eβ) for all α, β in V. Therefore E is the
adjoint of E, i.e., E = E*.
Conversely, suppose E = E² = E*. Since E = E², therefore E is
a projection on R(E) along N(E). In order to show that E is a
perpendicular projection, we are simply to prove that the subspaces
R(E) and N(E) are orthogonal. Let α ∈ R(E) and β ∈ N(E). Then
Eα = α and Eβ = 0. We have
(α, β) = (Eα, β) = (α, E*β) = (α, Eβ) = (α, 0) = 0.
Thus R(E) and N(E) are orthogonal and consequently E is a
perpendicular projection.
Now to show that if E is a perpendicular projection, i.e., if
E = E² = E*, then E is non-negative. We have
(Eα, α) = (E²α, α) = (EEα, α) = (Eα, E*α) = (Eα, Eα) = || Eα ||².
Since || Eα ||² ≥ 0 for all α in V, therefore
(Eα, α) ≥ 0 for all α in V.
Hence E is non-negative.
Now I − E is also a perpendicular projection. Therefore I − E
is non-negative. So for all α in V, we have
((I − E)α, α) ≥ 0
⇒ (α − Eα, α) ≥ 0
⇒ (α, α) − (Eα, α) ≥ 0
⇒ || α ||² − || Eα ||² ≥ 0 [∵ (Eα, α) = || Eα ||²]
⇒ || α ||² ≥ || Eα ||²
⇒ || Eα || ≤ || α ||.
Hence the theorem.
Note. If E is a perpendicular projection, then (Eα, α) = || Eα ||²
for all α.
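The characterization E = E² = E* and the bound ||Eα|| ≤ ||α|| can be verified numerically. In the sketch below (an added illustration, not from the text), the perpendicular projection on the column space W of a matrix A is formed by the standard formula E = A(AᵀA)⁻¹Aᵀ; the particular matrix A is an arbitrary choice.

```python
import numpy as np

# Perpendicular projection of R^3 onto W = column space of A,
# via the standard formula E = A (A^T A)^{-1} A^T.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
E = A @ np.linalg.inv(A.T @ A) @ A.T

# E = E^2 = E* characterizes a perpendicular projection.
assert np.allclose(E @ E, E)
assert np.allclose(E, E.T)

# ||E a|| <= ||a|| for every a.
rng = np.random.default_rng(1)
for _ in range(100):
    a = rng.standard_normal(3)
    assert np.linalg.norm(E @ a) <= np.linalg.norm(a) + 1e-12
```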
Theorem 2. If a linear transformation E is such that E = E²
and || Eα || ≤ || α || for all α, then E = E*. (Meerut 1968)
Proof. Let R = R(E) and N = N(E) be the range and the null
space of E respectively. Since E² = E, therefore E is the projection

on R along N. We shall show that E is the perpendicular projection
on R. Then by theorem 1, we shall have E = E*.
Now E will be the perpendicular projection on R iff R and N are
orthogonal. For this we shall prove that R = N⊥.
Let α ∈ N⊥. Put β = Eα − α.
We have Eβ = E(Eα − α) = E²α − Eα = Eα − Eα = 0.
∴ β ∈ N.
Now α ∈ N⊥ and β ∈ N ⇒ (α, β) = 0.
We have || Eα ||² = (Eα, Eα)
= (α + β, α + β) [∵ β = Eα − α]
= (α, α) + (α, β) + (β, α) + (β, β)
= || α ||² + || β ||²
[∵ (α, β) = 0 and so (β, α) = 0]
Now it is given that || Eα ||² ≤ || α ||².
∴ || α ||² + || β ||² ≤ || α ||²
or || β ||² ≤ 0
or || β || = 0 [∵ || β || cannot be negative]
or β = 0.
Putting β = 0 in the relation β = Eα − α, we get
Eα = α
⇒ α ∈ the range of E
⇒ α ∈ R.
Thus α ∈ N⊥ ⇒ α ∈ R. So N⊥ ⊆ R.
Conversely, let γ ∈ R. Then Eγ = γ.
Since V = N⊥ ⊕ N, therefore let γ = γ1 + γ2 where γ1 ∈ N⊥ and
γ2 ∈ N. Since γ2 ∈ N, therefore Eγ2 = 0. Also γ1 ∈ N⊥ ⇒ γ1 ∈ R
because we have just proved that N⊥ ⊆ R. Now γ1 ∈ R ⇒ Eγ1 = γ1.
We have
γ = Eγ = E(γ1 + γ2) = Eγ1 + Eγ2 = γ1 + 0 = γ1.
Since γ1 ∈ N⊥, therefore γ ∈ N⊥.
Thus γ ∈ R ⇒ γ ∈ N⊥. So R ⊆ N⊥.
Hence R = N⊥.
Definition. Two perpendicular projections E and F are said to be
orthogonal if EF = 0̂.
Also if E, F are perpendicular projections, then
EF = 0̂ ⇔ (EF)* = 0̂* ⇔ F*E* = 0̂ ⇔ FE = 0̂.

Theorem 3. Let E and F be two perpendicular projections on
W1 and W2 respectively. Then E and F are orthogonal if and only
if the subspaces W1 and W2 (that is, the ranges of E and F) are
orthogonal.
Proof. Let EF = 0̂ and let α ∈ W1 and β ∈ W2. Since W1 is the
range of E, therefore Eα = α. Also W2 is the range of F. Therefore
Fβ = β. We have
(α, β) = (Eα, Fβ) = (α, E*Fβ)
= (α, EFβ) [∵ E* = E]
= (α, 0̂β) [∵ EF = 0̂]
= (α, 0) = 0.
∴ W1 and W2 are orthogonal.
Conversely, let W1 and W2 be orthogonal. If β is any vector
in W2, then β is orthogonal to every vector in W1. So β ∈ W1⊥.
Consequently W2 ⊆ W1⊥.
Now let γ be any vector in V. Then Fγ is in the range of F,
i.e., Fγ is in W2. Consequently Fγ is in W1⊥ which is the null
space of E. Therefore E(Fγ) = 0.
Thus EFγ = 0 for all γ in V. Hence EF = 0̂.
Theorem 4. If E1, ..., En are perpendicular projections on the
subspaces W1, ..., Wn respectively of an inner product space V, then
a necessary and sufficient condition that E = E1 + ... + En be a perpendicular
projection is that Ei Ej = 0̂ whenever i ≠ j (that is, that
the Ei be pairwise orthogonal). Also then E is the perpendicular
projection on W = W1 + ... + Wn.
Proof. It is given that E1, ..., En are perpendicular projections.
Therefore Ei² = Ei = Ei* for each i.
Let E = E1 + ... + En.
Then E* = (E1 + ... + En)* = E1* + ... + En* = E1 + ... + En = E.
The condition is sufficient. Suppose that Ei Ej = 0̂ whenever
i ≠ j. Then to prove that E is a perpendicular projection.
We have E² = EE = (E1 + ... + En)(E1 + ... + En)
= E1² + ... + En² [∵ Ei Ej = 0̂ if i ≠ j]
= E1 + ... + En = E.
Thus E² = E = E*. Therefore E is a perpendicular projection.

The condition is necessary. Suppose E is a perpendicular projection.
Then E² = E = E*.
We are to prove that Ei Ej = 0̂ if i ≠ j.
Let α belong to the range of some Ei. Then Eiα = α.
Therefore || α ||² = || Eiα ||². ...(1)
Since E is a perpendicular projection, therefore by theorem 1,
we get
|| α ||² ≥ || Eα ||². ...(2)
From (1) and (2), we get
|| Eiα ||² ≥ || Eα ||². ...(3)
Now || Eα ||² = (Eα, α) [See Note below theorem 1]
= ((Σ_{j=1}^{n} Ej) α, α) = Σ_{j=1}^{n} (Ejα, α)
= Σ_{j=1}^{n} || Ejα ||²
[∵ Ej is a perpendicular projection
⇒ || Ejα ||² = (Ejα, α). See Note below theorem 1]
∴ From (3), we get
|| Eiα ||² ≥ Σ_{j=1}^{n} || Ejα ||². ...(4)
But || Eiα ||² ≤ Σ_{j=1}^{n} || Ejα ||² ...(5)
[∵ || Eiα ||² is one of the n terms in R.H.S. of (5)]
From (4) and (5), we get
|| Eiα ||² = Σ_{j=1}^{n} || Ejα ||²
⇒ || Ejα ||² = 0 if j ≠ i ⇒ || Ejα || = 0 if j ≠ i
⇒ Ejα = 0 if j ≠ i ⇒ α is in the null space of Ej if j ≠ i
⇒ α ∈ Wj⊥ if j ≠ i
⇒ α is orthogonal to the range Wj of every Ej with j ≠ i.
Thus every vector α in the range of Ei (i = 1, ..., n) is orthogonal
to the range of every Ej with j ≠ i. Therefore the range of Ei
is orthogonal to the range of every Ej with j ≠ i. Hence by theorem
3, we have Ei Ej = 0̂ whenever i ≠ j.

Finally, in order to show that E is the perpendicular projection
on W = W1 + ... + Wn, we are to show that R(E) = W, where R(E) is the
range of E.
Let α ∈ R(E). Then Eα = α.
∴ α = Σ_{i=1}^{n} Eiα ∈ W, since Eiα ∈ Wi (i.e., the
range of Ei) for each i.
∴ R(E) ⊆ W.
Conversely, let α ∈ W. Then α = Σ_{j=1}^{n} αj where αj ∈ Wj.
We have Eα = Σ_{i=1}^{n} Eiα = Σ_{i=1}^{n} Σ_{j=1}^{n} Eiαj
= Σ_{j=1}^{n} Ejαj [∵ αj ∈ Wj ⇒ αj ∈ the null space
of Ei if i ≠ j]
= Σ_{j=1}^{n} αj [∵ αj is in the range of Ej]
= α.
∴ α is in the range of E.
Hence W ⊆ R(E).
∴ W = R(E).
This completes the proof of the theorem.
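Theorem 4 admits a quick numerical check. In the sketch below (an added illustration, not from the text), E1 and E2 are the perpendicular projections on the mutually orthogonal subspaces span{e1} and span{e2} of R³; their sum is again a perpendicular projection, on W1 + W2 (the x-y coordinate plane).

```python
import numpy as np

# E1, E2: perpendicular projections onto two mutually orthogonal
# one-dimensional subspaces of R^3.
e1 = np.array([[1.0], [0.0], [0.0]])
e2 = np.array([[0.0], [1.0], [0.0]])
E1 = e1 @ e1.T          # projection on span{e1}
E2 = e2 @ e2.T          # projection on span{e2}

# Pairwise orthogonality: E1 E2 = 0
assert np.allclose(E1 @ E2, 0)

# Hence E = E1 + E2 is again a perpendicular projection
E = E1 + E2
assert np.allclose(E @ E, E) and np.allclose(E, E.T)

# and its range is W1 + W2 = the x-y plane.
assert np.allclose(E @ np.array([3.0, -2.0, 5.0]), [3.0, -2.0, 0.0])
```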
§ 10. The Spectral Theorem.
Theorem 1. (Spectral theorem for a normal operator). To
every normal operator T on a finite-dimensional complex inner
product space V there correspond distinct complex numbers c1, ..., ck
and perpendicular projections E1, ..., Ek (where k is a strictly
positive integer, not greater than the dimension of the space) so
that
(1) the Ei are pairwise orthogonal and different from 0̂,
(2) E1 + ... + Ek = I,
(3) T = c1E1 + ... + ckEk.
(Meerut 1976, 78, 80, 83 P, 84, 91, 93, 93 P)
Proof. Suppose T is a normal operator on a finite-dimensional

complex inner product space V. Then T will definitely possess a
characteristic value. Let c1, ..., ck be the distinct characteristic
values of T. If dim V = n, then we must have k ≤ n. Let
W1, ..., Wk be the characteristic subspaces of the characteristic
values c1, ..., ck respectively. Then
Wi = {α ∈ V : Tα = ciα}, 1 ≤ i ≤ k.
Let E1, ..., Ek be the perpendicular projections on W1, ..., Wk
respectively. Then Wi is the range of Ei for each i = 1, ..., k.
(1) To show that Ei ≠ 0̂ where i = 1, ..., k. Let α be a
characteristic vector for T corresponding to the characteristic value
ci. Then Tα = ciα. Therefore α ∈ Wi which is the range of the
perpendicular projection Ei. So Eiα = α. Since α ≠ 0, therefore
Ei ≠ 0̂.
Further to show that the Ei are pairwise orthogonal. We
know that if T is a normal operator, then the characteristic vectors
of T corresponding to distinct characteristic values are orthogonal.
Therefore each vector in Wi is orthogonal to each vector in Wj if
i ≠ j. Therefore the subspaces Wi and Wj are orthogonal. Hence
by theorem 3 of § 9, we have Ei Ej = 0̂ if i ≠ j.
(2) To prove that E1 + ... + Ek = I.
Let E = E1 + ... + Ek and W = W1 + ... + Wk. Then by theorem 4
of § 9, E is a perpendicular projection on W and consequently
I − E is a perpendicular projection having W⊥ as its range.
First we shall prove that W⊥ is invariant under T. For this it
is sufficient to show that W is invariant under T*. Let α = Σ_{i=1}^{k} αi ∈ W
where αi ∈ Wi for each i. Then Tαi = ciαi and T*αi = c̄iαi
because T is normal. Therefore T*α = Σ_{i=1}^{k} T*αi = Σ_{i=1}^{k} c̄iαi ∈ W since
c̄iαi ∈ Wi for each i. Thus W is invariant under T* and consequently
W⊥ is invariant under T**, i.e., T.
Let U be the restriction of T to W⊥. Then Uα = Tα for all α
in W⊥.
Suppose E ≠ I. Then I − E ≠ 0̂. Therefore the range W⊥ of
the perpendicular projection I − E is not equal to the zero subspace
of V. Since U is a linear operator on a finite dimensional
non-zero complex space W⊥, therefore U must have a characteristic

value and consequently a characteristic vector. Suppose α is a
characteristic vector for U corresponding to the characteristic value
c. Then Uα = cα. Therefore Tα = cα and thus α is also a characteristic
vector for T. Therefore each characteristic vector for U is
also a characteristic vector for T. But T has no characteristic
vector in W⊥ since all the characteristic vectors for T are in W.
So U has no characteristic vector and thus we get a contradiction.
Therefore we must have E = I, i.e., E1 + ... + Ek = I.
(3) To prove that T = c1E1 + ... + ckEk.
Let β be any vector in V. Let Eiβ = βi, i = 1, ..., k. Then βi
is in Wi which is the range of Ei. Therefore by the definition of
Wi, we have Tβi = ciβi.
Now Tβ = T(Iβ) = T Σ_{i=1}^{k} Eiβ = T Σ_{i=1}^{k} βi = Σ_{i=1}^{k} Tβi
= Σ_{i=1}^{k} ciβi = Σ_{i=1}^{k} ciEiβ = (Σ_{i=1}^{k} ciEi) β.
∴ T = Σ_{i=1}^{k} ciEi = c1E1 + ... + ckEk.
Note. We shall call the decomposition T = c1E1 + ... + ckEk
the spectral resolution of T.
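The spectral resolution can be computed for a concrete normal operator. The sketch below (an added illustration, not from the text) takes the rotation matrix T, which is normal but not self-adjoint, with distinct characteristic values i and −i; since the characteristic values are distinct, the unit characteristic vectors returned by `numpy.linalg.eig` are orthogonal, and the perpendicular projections Ei are the outer products vi vi*.

```python
import numpy as np

# T: a normal but non-Hermitian operator on C^2 (a rotation),
# with distinct characteristic values i and -i.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(T @ T.conj().T, T.conj().T @ T)   # T is normal

c, V = np.linalg.eig(T)   # distinct eigenvalues => orthogonal unit eigenvectors

# E_i = v_i v_i*: perpendicular projection on the i-th characteristic space
E = [np.outer(V[:, i], V[:, i].conj()) for i in range(2)]

assert np.allclose(E[0] @ E[1], 0)                   # pairwise orthogonal
assert np.allclose(E[0] + E[1], np.eye(2))           # E1 + E2 = I
assert np.allclose(c[0] * E[0] + c[1] * E[1], T)     # T = c1 E1 + c2 E2
```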
Theorem 2. (Spectral theorem for a self-adjoint operator).
To every self-adjoint operator T on a finite-dimensional inner product
space V there correspond distinct real numbers c1, ..., ck and perpendicular
projections E1, ..., Ek (where k is a strictly positive integer, not
greater than the dimension of the space) so that
(1) the Ei are pairwise orthogonal and different from 0̂,
(2) E1 + ... + Ek = I,
(3) T = c1E1 + ... + ckEk. (Meerut 1969)
Proof. Suppose T is a self-adjoint operator on a finite-dimensional
inner product space V. Then T will definitely possess a
characteristic value and all the characteristic values of T will be
real. Let c1, ..., ck be the distinct characteristic values of T. If
dim V = n, then we must have k ≤ n. Let W1, ..., Wk be the
characteristic subspaces of the characteristic values c1, ..., ck respectively.
Then Wi = {α ∈ V : Tα = ciα}. Let E1, ..., Ek be the

perpendicular projections on W1, ..., Wk respectively. Then Wi is
the range of Ei for each i = 1, ..., k.
(1) To show that Ei ≠ 0̂. Let α be a characteristic vector
for T corresponding to the characteristic value ci. Then Tα = ciα.
Therefore α ∈ Wi. So Eiα = α. Since α ≠ 0, therefore Ei ≠ 0̂.
Further to show that the Ei are pairwise orthogonal. We
know that if T is a self-adjoint operator, then the characteristic
vectors of T belonging to distinct characteristic values are orthogonal.
Therefore each vector in Wi is orthogonal to each vector
in Wj if i ≠ j. Therefore the subspaces Wi and Wj are orthogonal.
But Wi and Wj are the ranges of Ei and Ej respectively. Therefore
Ei Ej = 0̂ if i ≠ j.
(2) To prove that I = E1 + ... + Ek.
Let E = E1 + ... + Ek and W = W1 + ... + Wk. Then E is a perpendicular
projection on W and consequently I − E is a perpendicular
projection having W⊥ as its range.
First we shall prove that W⊥ is invariant under T. Let
α = Σ_{i=1}^{k} αi ∈ W where αi ∈ Wi for each i. Then Tαi = ciαi. Therefore
Tα = T Σ_{i=1}^{k} αi = Σ_{i=1}^{k} Tαi = Σ_{i=1}^{k} ciαi ∈ W since ciαi ∈ Wi for each i. Thus
W is invariant under T and consequently W⊥ is invariant under
T* = T, T being self-adjoint. Let U be the restriction of T to W⊥.
Then Uα = Tα for all α in W⊥. Also U will be self-adjoint.
Suppose E ≠ I, i.e., I − E ≠ 0̂. Then the range W⊥ of I − E is
not equal to the zero subspace of V. Since U is a self-adjoint operator
on a finite-dimensional non-zero space W⊥, therefore U must
have a characteristic vector. Obviously each characteristic vector
for U is also a characteristic vector for T. But T has no characteristic
vector in W⊥ and so U has no characteristic vector. Thus we
get a contradiction. Hence we must have E = I, i.e.,
E1 + ... + Ek = I.
(3) To prove that T = c1E1 + ... + ckEk. Give the same proof
here as we have given in the corresponding part of the spectral
theorem for a normal operator.

Theorem 3. Let T be a normal operator on a finite-dimensional
complex inner product space V. Then V has an orthonormal basis B
consisting of characteristic vectors for T. Consequently the matrix of
T relative to B is a diagonal matrix.
Proof. Let T = c1E1 + ... + ckEk be the spectral resolution of T.
Then c1, ..., ck are all the distinct characteristic values of T. Also
E1, ..., Ek are the perpendicular projections on W1, ..., Wk which
are the characteristic subspaces of the characteristic values c1, ..., ck
respectively.
Thus Wi is the set of all vectors α which satisfy the equation
Tα = ciα.
Also E1 + ... + Ek = I
and Ei Ej = 0̂ for i ≠ j.
For each i, the subspace Wi is the range of Ei. For i ≠ j, the
subspaces Wi and Wj are orthogonal. For if α ∈ Wi, β ∈ Wj, then
(α, β) = (Eiα, Ejβ) [∵ α ∈ the range of Ei ⇒ Eiα = α, etc.]
= (α, Ei*Ejβ)
= (α, EiEjβ) [∵ Ei* = Ei, Ei being a perpendicular
projection]
= (α, 0̂β) = (α, 0) = 0.
Now let B1, ..., Bk be orthonormal bases for the subspaces
W1, ..., Wk respectively. Then we claim that B = ∪ Bi, i.e., the
union of the Bi, is an orthonormal basis for V. Obviously B is an
orthonormal set because each Bi is an orthonormal set and any
vector in Bi is orthogonal to any vector in Bj, if i ≠ j. Note that
the vectors in Bi are some elements of Wi and the vectors in Bj are
some elements of Wj. The subspaces Wi and Wj are orthogonal
for i ≠ j.
Since B is an orthonormal set, therefore B is linearly independent.
Now B will be a basis for V if we prove that B generates V.
Let γ be any vector in V. Then
γ = Iγ = (E1 + ... + Ek)γ
= α1 + ... + αk where αi = Eiγ.
Since Eiγ is in the range of Ei, therefore αi is in Wi. So for
each i the vector αi can be expressed as a linear combination of
the vectors in Bi, which is a basis for Wi. Therefore γ can be

expressed as a linear combination of the vectors in B. Hence V is
generated by B. Therefore B is an orthonormal basis for V. Since
each non-zero vector in Wi is a characteristic vector for T, therefore
each vector in Bi is a characteristic vector for T. Consequently
each vector in B is a characteristic vector for T. Thus there exists
an orthonormal basis B for V such that each vector in B is a
characteristic vector for T. Consequently the matrix of T relative
to B will be a diagonal matrix.
Theorem 4. Let T be a self-adjoint operator on a finite-dimensional
inner product space V. Then V has an orthonormal basis B
consisting of characteristic vectors for T.
Proof. Give the same proof as in theorem 3.
Theorem 5. Let T be a normal operator on the finite-dimensional
complex inner product space V. Let c1, ..., ck be distinct complex
numbers and E1, ..., Ek be non-zero linear operators on V such that
(a) T = c1E1 + ... + ckEk,
(b) I = E1 + ... + Ek,
(c) Ei Ej = 0̂, if i ≠ j.
Then c1, ..., ck are precisely the distinct characteristic values of
T. Also for each i, Ei is a polynomial in T and is the perpendicular
projection of V on the characteristic space of the characteristic value
ci. In short the decomposition of T given in (a) is the spectral resolution
of T.
Proof. (i) First we shall prove that for each i, Ei is a projection.
We have
I = E1 + ... + Ek
⇒ EiI = Ei(E1 + ... + Ek)
⇒ Ei = EiE1 + ... + EiEk
⇒ Ei = Ei² [∵ Ei Ej = 0̂ if i ≠ j]
⇒ Ei is a projection.
(ii) Now we shall prove that c1, ..., ck are precisely the distinct
characteristic values of T.
First we shall show that for each i, ci is a characteristic value
of T.
Since Ei ≠ 0̂, therefore there exists a non-zero vector α in the
range of Ei.
Since Ei is a projection, therefore Eiα = α.

Now Tα = (c1E1 + ... + ckEk)α
= (c1E1 + ... + ckEk) Eiα [∵ α = Eiα]
= c1E1Eiα + ... + ckEkEiα
= ciEi²α [∵ Ei Ej = 0̂ if i ≠ j]
= ciEiα [∵ Ei² = Ei]
= ciα.
∴ ci is a characteristic value of T.
Since T is a linear operator on a finite-dimensional complex
inner product space, therefore T must possess a characteristic
value. Suppose c is a characteristic value of T. Then there exists
a non-zero vector α such that
Tα = cα
⇒ Tα = cIα
⇒ (c1E1 + ... + ckEk)α = c(E1 + ... + Ek)α
⇒ (c1 − c)E1α + ... + (ck − c)Ekα = 0.
Operating on this with Ei and remembering that Ei² = Ei and
Ei Ej = 0̂ if i ≠ j, we obtain
(ci − c)Eiα = 0 for i = 1, ..., k.
If ci ≠ c for each i, then we have Eiα = 0 for each i. Therefore
E1α + ... + Ekα = 0
⇒ (E1 + ... + Ek)α = 0
⇒ Iα = 0
⇒ α = 0.
This contradicts the fact that α ≠ 0. Hence c must be equal to
ci for some i.
(iii) Now we shall prove that Ei is a projection on Wi where
Wi is the characteristic space of the characteristic value ci. For this
we shall prove that the range of Ei is Wi.
Remember that α is in the range of Ei iff Eiα = α, and α is in
Wi iff Tα = ciα.
Let α ∈ Wi. Then
Tα = ciα = ciIα = ci(E1 + ... + Ek)α
⇒ (c1E1 + ... + ckEk)α = (ciE1 + ... + ciEk)α
⇒ (c1 − ci)E1α + ... + (ck − ci)Ekα = 0.
Operating with Ej, we get
(c1 − ci)EjE1α + ... + (ck − ci)EjEkα = 0
⇒ (cj − ci)Ej²α = 0
⇒ (cj − ci)Ejα = 0
⇒ Ejα = 0 if j ≠ i
[∵ c1, ..., ck are all distinct]
Then α = Iα = E1α + ... + Ekα = Eiα.
Thus α ∈ the range of Ei.
Conversely, suppose that α is in the range of Ei. Then Eiα = α.
As proved in (ii), we shall get Tα = ciα. Thus α ∈ Wi.
(iv) Now we shall prove that, for each i, Ei is a polynomial
in T.
We have T⁰ = I = E1 + ... + Ek,
T = c1E1 + ... + ckEk,
T² = (c1E1 + ... + ckEk)(c1E1 + ... + ckEk)
= c1²E1 + ... + ck²Ek [∵ Ei² = Ei, Ei Ej = 0̂ if i ≠ j].
Similarly Tⁿ = c1ⁿE1 + ... + ckⁿEk where n is any non-negative
integer.
Therefore if g is any polynomial, then taking linear combinations
of the above relations, we get
g(T) = g(c1)E1 + ... + g(ck)Ek = Σ_{j=1}^{k} g(cj) Ej.
Now suppose pi is a polynomial such that pi(cj) = δij. Then
taking pi in place of g, we get
pi(T) = Σ_{j=1}^{k} pi(cj) Ej = Σ_{j=1}^{k} δij Ej = Ei.
Thus for each i, Ei is a polynomial in T. But we must show
the existence of such a polynomial pi over the field F. Obviously
pi(x) = [(x − c1)...(x − ci−1)(x − ci+1)...(x − ck)] /
[(ci − c1)...(ci − ci−1)(ci − ci+1)...(ci − ck)]
serves the purpose, i.e., pi(ci) = 1 and pi(cj) = 0 if j ≠ i.
(v) Now we shall show that for each i, Ei is a perpendicular
projection.
T is a normal operator. For each i, Ei is a polynomial in T.
Therefore for each i, Ei is a normal operator. Since Ei is a projection,
therefore we must have Ei* = Ei. Hence Ei is a perpendicular
projection.
[Note. If T is a self-adjoint operator on an inner product
space, then also the above theorem is true. The only difference in

proof will be in part (v). Since c1, ..., ck are all real, therefore pi
is a polynomial with real coefficients.
We have Ei = pi(T).
Therefore Ei* = [pi(T)]* = pi(T) = Ei.
Hence Ei is a perpendicular projection.]
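Part (iv) of Theorem 5 is constructive: the Lagrange polynomials pi recover the projections Ei directly from T. The sketch below (an added illustration, not from the text) carries this out for a self-adjoint T with the distinct characteristic values 1 and 3.

```python
import numpy as np

# A self-adjoint operator with distinct characteristic values 1 and 3.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])
c = [1.0, 3.0]
I = np.eye(2)

# Lagrange polynomial p_i with p_i(c_j) = delta_ij, evaluated at T:
# p_i(T) = prod_{j != i} (T - c_j I) / (c_i - c_j)
def p(i):
    result = I
    for j in range(len(c)):
        if j != i:
            result = result @ (T - c[j] * I) / (c[i] - c[j])
    return result

E = [p(0), p(1)]
for Ei in E:
    # each p_i(T) is a perpendicular projection: E = E^2 = E*
    assert np.allclose(Ei @ Ei, Ei) and np.allclose(Ei, Ei.T)
assert np.allclose(E[0] @ E[1], 0)                # pairwise orthogonal
assert np.allclose(c[0] * E[0] + c[1] * E[1], T)  # the spectral resolution
```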
Theorem 6. Let T be a normal operator on the finite-dimensional
complex inner product space V. Then T is self-adjoint, positive,
or unitary according as each characteristic value of T is real, positive,
or of absolute value 1. (Meerut 1979)
Proof. Let T = c1E1 + ... + ckEk be the spectral resolution for
T. Then c1, ..., ck are all the distinct characteristic values for T and
E1, ..., Ek are perpendicular projections.
Now E1 + ... + Ek = I, Ei² = Ei = Ei*, and Ei Ej = 0̂ for i ≠ j.
(i) Suppose each characteristic value for T is real. Then to
prove that T is self-adjoint.
We have T* = (c1E1 + ... + ckEk)*
= c̄1E1* + ... + c̄kEk*
= c̄1E1 + ... + c̄kEk
= c1E1 + ... + ckEk [∵ each ci is real ⇒ c̄i = ci]
= T.
∴ T is self-adjoint.
(ii) Suppose each characteristic value for T is positive, i.e.,
ci > 0 for each i. Then to prove that T is positive.
Since ci > 0 for each i, therefore each ci is real. Consequently
by case (i) T is self-adjoint. Now
(Tα, α) = (Tα, Iα)
= (Σ_{i=1}^{k} ciEiα, Σ_{j=1}^{k} Ejα) = Σ_{i=1}^{k} Σ_{j=1}^{k} ci (Eiα, Ejα)
= Σ_{i=1}^{k} Σ_{j=1}^{k} ci (α, Ei*Ejα) = Σ_{i=1}^{k} Σ_{j=1}^{k} ci (α, EiEjα)
= Σ_{i=1}^{k} ci (α, EiEiα) [summing with respect to j; note
that Ei Ej = 0̂ for i ≠ j]
= Σ_{i=1}^{k} ci (Ei*α, Eiα) = Σ_{i=1}^{k} ci (Eiα, Eiα)
= Σ_{i=1}^{k} ci || Eiα ||².
Since || Eiα || ≥ 0, therefore if ci > 0, then
(Tα, α) ≥ 0.
Also (Tα, α) = 0 ⇒ Σ_{i=1}^{k} ci || Eiα ||² = 0
⇒ || Eiα || = 0 for each i
⇒ Eiα = 0 for each i
⇒ (E1 + ... + Ek)α = 0
⇒ Iα = 0
⇒ α = 0.
Thus T is self-adjoint. Also (Tα, α) ≥ 0 for each α and
(Tα, α) = 0 ⇒ α = 0.
∴ T is positive.
(iii) Suppose each characteristic value of T is of absolute
value 1. Then to show that T is unitary. We have
T* = c̄1E1 + ... + c̄kEk.
∴ TT* = (c1E1 + ... + ckEk)(c̄1E1 + ... + c̄kEk)
= | c1 |² E1 + ... + | ck |² Ek
[∵ Ei² = Ei and Ei Ej = 0̂ for i ≠ j]
= E1 + ... + Ek [∵ | ci |² = 1 for each i]
= I.
Since V is finite-dimensional, therefore TT* = I ⇒ T*T = I.
Hence T is unitary.
Solved Examples
Example 1. Let T be a normal operator on the finite dimensional
complex inner product space V. Then T is non-negative,
invertible, or idempotent according as each characteristic value of T
is non-negative, different from 0, or equal to zero or one.
Solution. Let T = c1E1 + ... + ckEk be the spectral resolution
for T.
(i) Suppose each characteristic value of T is non-negative,
i.e., ci ≥ 0 for each i = 1, ..., k. Then as in theorem 6, we have
T* = T. Also
(Tα, α) = Σ_{i=1}^{k} ci || Eiα ||².

Since || Eiα || ≥ 0, therefore if ci ≥ 0, then (Tα, α) ≥ 0 for all
α in V.
Hence T is non-negative.
(ii) Suppose each characteristic value of T is different from
zero, i.e., ci ≠ 0 for each i.
Consider the linear operator
S = (1/c1) E1 + ... + (1/ck) Ek.
We have TS = (c1E1 + ... + ckEk) ((1/c1) E1 + ... + (1/ck) Ek)
= E1 + ... + Ek [∵ Ei² = Ei and Ei Ej = 0̂ for i ≠ j]
= I.
∴ T is invertible.
(iii) Suppose each characteristic value of T is equal to
zero or one, i.e., ci = 0 or 1 for each i.
Then ci² = ci for each i = 1, ..., k.
Now T² = (c1E1 + ... + ckEk)(c1E1 + ... + ckEk)
= c1²E1 + ... + ck²Ek
= c1E1 + ... + ckEk
= T.
∴ T is idempotent.
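Part (ii) above can be checked numerically: the operator S = (1/c1)E1 + ... + (1/ck)Ek built from the spectral pieces of T is its inverse. The sketch below (an added illustration, not from the text) uses a self-adjoint T with non-zero characteristic values 1 and 3.

```python
import numpy as np

# A self-adjoint T with non-zero characteristic values, and its
# spectral pieces computed from an orthonormal eigenbasis.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])
c, P = np.linalg.eigh(T)                     # characteristic values 1 and 3
E = [np.outer(P[:, i], P[:, i]) for i in range(2)]

# S = (1/c1) E1 + (1/c2) E2 satisfies T S = I, so T is invertible.
S = (1.0 / c[0]) * E[0] + (1.0 / c[1]) * E[1]
assert np.allclose(T @ S, np.eye(2))
assert np.allclose(S, np.linalg.inv(T))
```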

Example 2. If Σ ciEi is the spectral resolution of a normal
operator T on a finite-dimensional complex inner product space V,
then a necessary and sufficient condition that a linear operator S
commutes with T is that it commutes with each Ei.
Solution. The condition is necessary. Suppose S commutes
with T. Then S commutes with every polynomial in T. Since each
Ei is a polynomial in T, therefore S will commute with each Ei.
Hence the condition is necessary.
The condition is sufficient. Suppose S commutes with each Ei,
i.e., SEi = EiS for each i. Since T = Σ_{i=1}^{k} ciEi, therefore obviously
ST = TS. Hence the condition is sufficient.
Note. A similar result can be proved if T is a self-adjoint linear
operator on a finite-dimensional inner product space.

Example 3. If T is a normal operator on a finite-dimensional
complex inner product space V and S is an arbitrary linear operator
that commutes with T, then S commutes with T*.
Solution. Let T = c1E1 + ... + ckEk = Σ_{i=1}^{k} ciEi be the spectral
resolution of T.
Then T* = (c1E1 + ... + ckEk)*
= c̄1E1 + ... + c̄kEk [∵ each Ei is self-adjoint]
Now for each i the operator Ei is a polynomial in T. Therefore
T* is also a polynomial in T. Since S commutes with T, therefore
S will also commute with every polynomial in T. In particular
S commutes with T* which is also a polynomial in T.
Example 4. Let T be a linear operator on a finite-dimensional
complex inner product space V. Then T is normal if and only if the
adjoint T* is a polynomial in T. (Meerut 1971, 76, 88)
Solution. Suppose T* is a polynomial in T. Let T* = f(T).
Then obviously T*T = TT*. Therefore T is normal.
Conversely suppose that T is normal. Let T = c1E1 + ... + ckEk
be the spectral resolution of T. Then
T* = (c1E1 + ... + ckEk)*
= c̄1E1* + ... + c̄kEk*
= c̄1E1 + ... + c̄kEk. [∵ each Ei is self-adjoint]
But we know that each Ei, where i = 1, ..., k, is a polynomial
in T. Therefore T* is also a polynomial in T.
Example 5. Let T be a normal operator on a finite-dimensional
complex inner product space V. Then every subspace of V which is
invariant under T is also invariant under T*. (Meerut 1973, 74, 87)
Solution. Let W be a subspace of V which is invariant under
T. We have T²α = TTα.
Since W is invariant under T, therefore α ∈ W ⇒ Tα ∈ W.
Consequently TTα, i.e., T²α is also in W. Thus α ∈ W ⇒ T²α ∈ W.
Therefore W is invariant under T².
Similarly W is invariant under Tⁿ where n is any positive
integer. Consequently W is invariant under f(T) where f is any
polynomial.

Now T* is also a polynomial in T. Therefore W is invariant
under T*.
Example 6. If T is a normal operator on a finite-dimensional
inner product space V and if W is a subspace of V invariant under T,
then the restriction of T to W is also normal.
Solution. If W is invariant under T, then W is also invariant
under T*. Let U be the restriction of T to W and S be the restriction
of T* to W. Then
Uα = Tα and Sα = T*α for all α in W.
Obviously S = U*, i.e., S is the adjoint of U, because for all α, β
in W, we have
(Uα, β) = (Tα, β) = (α, T*β) = (α, Sβ).
∴ S is the adjoint of U.
Now let α be any vector in W. Then
UU*α = UT*α [∵ U* = S and Sα = T*α]
= TT*α
= T*Tα [∵ T is normal]
= T*Uα
= U*Uα.
∴ UU* = U*U.
Hence U is normal.
Example 7. Let T be a linear operator on a finite-dimensional inner
product space V. Let E1, ..., Ek be linear operators on V such that
(1) T = c1E1 + ... + ckEk;
(2) Ei = Ei* and Ei Ej = 0̂ for i ≠ j.
Then T is normal.
Solution. We have
T* = (c1E1 + ... + ckEk)*
= c̄1E1* + ... + c̄kEk*
= c̄1E1 + ... + c̄kEk.
Now TT* = (c1E1 + ... + ckEk)(c̄1E1 + ... + c̄kEk)
= c1c̄1E1² + ... + ckc̄kEk²
[∵ Ei Ej = 0̂ for i ≠ j]
= | c1 |² E1² + ... + | ck |² Ek².
Also T*T = (c̄1E1 + ... + c̄kEk)(c1E1 + ... + ckEk)
= c̄1c1E1² + ... + c̄kckEk²

= | c1 |² E1² + ... + | ck |² Ek².
Thus TT* = T*T. Hence T is normal.
Example 8. Let W1, ..., Wk be subspaces of an inner product
space V, and let Ei be the perpendicular projection on Wi, i = 1, ..., k.
Then the following two statements are equivalent:
(i) V = W1 ⊕ ... ⊕ Wk, and this is an orthogonal direct sum, i.e.,
the subspaces Wi and Wj are orthogonal for i ≠ j.
(ii) I = E1 + ... + Ek and Ei Ej = 0̂ for i ≠ j.
Solution. (i) ⇒ (ii).
For each i the subspace Wi is the range of the perpendicular
projection Ei. Since Wi and Wj are orthogonal subspaces for i ≠ j,
therefore Ei Ej = 0̂ for i ≠ j.
Let α ∈ V. Since V is the direct sum of W1, ..., Wk, therefore
we can write
α = α1 + ... + αk where αi ∈ Wi, i = 1, ..., k.
Now αi ∈ Wi ⇒ Eiαi = αi. Therefore we have
α = E1α1 + ... + Ekαk
⇒ (E1 + ... + Ek)α = (E1 + ... + Ek)(E1α1 + ... + Ekαk)
= E1²α1 + ... + Ek²αk [∵ Ei Ej = 0̂ for i ≠ j]
= E1α1 + ... + Ekαk [∵ Ei² = Ei]
= α.
∴ (E1 + ... + Ek)α = α = Iα for all α in V.
∴ E1 + ... + Ek = I.
(ii) ⇒ (i).
Let α ∈ V. Then
α = Iα = (E1 + ... + Ek)α
= E1α + ... + Ekα. ...(1)
For each i the vector Eiα is in the range of Ei, i.e., in Wi. Therefore
(1) is an expression for α as a sum of vectors, one from each
subspace Wi. We shall show that this expression is unique.
Let α = α1 + ... + αk with αi in Wi.
Then Eiαi = αi. Therefore
Eiα = EiE1α1 + ... + EiEkαk
= Ei²αi [∵ Ei Ej = 0̂ for i ≠ j]
= Eiαi = αi.
∴ αi = Eiα for each i.

Thus the expression (1) for α as a sum of vectors from the
subspaces Wi is unique. Hence V = W1 ⊕ ... ⊕ Wk.
Now to show that this sum is an orthogonal direct sum. Let
α ∈ Wi, β ∈ Wj and i ≠ j.
Then (α, β) = (Eiα, Ejβ) = (α, Ei*Ejβ) = (α, EiEjβ)
= (α, 0) = 0.
Therefore the subspaces Wi and Wj are orthogonal for i ≠ j.
Hence V = W1 ⊕ ... ⊕ Wk is an orthogonal direct sum.
Exercises
1. Let T be a self-adjoint operator on a finite-dimensional inner
product space V. Then V has an orthonormal basis B consisting
of characteristic vectors of T.
2. If U and T are normal operators which commute, prove that
U + T and UT are normal. (Meerut 1972, 80, 84)
[Hint. Take help of Ex. 3 page 393 and Ex. 2 page 368.]
4
Bilinear Forms

Suppose U and V are two vector spaces over the same field F.
Let W = {(α, β) : α ∈ U, β ∈ V}.
If (α1, β1) and (α2, β2) are two elements in W, then we define
their equality as follows:
(α1, β1) = (α2, β2) if α1 = α2 and β1 = β2.
Also we define the sum of (α1, β1) and (α2, β2) as follows:
(α1, β1) + (α2, β2) = (α1 + α2, β1 + β2).
If c is any element in F and (α, β) is any element in W, then
we define scalar multiplication in W as follows:
c(α, β) = (cα, cβ).
It can be easily shown that with respect to addition and scalar
multiplication as defined above, W is a vector space over the field
F. We call W the external direct product of the vector spaces U
and V and we shall write W = U ⊕ V.
Now we shall consider some special type of scalar-valued
functions on W known as bilinear forms.
§ 1. Bilinear forms.
Definition. Let U and V be two vector spaces over the same field
F. A bilinear form on W = U ⊕ V is a function f from W into F,
which assigns to each element (α, β) in W a scalar f(α, β) in such a
way that
f(aα1 + bα2, β) = af(α1, β) + bf(α2, β)
and f(α, aβ1 + bβ2) = af(α, β1) + bf(α, β2). (Meerut 1975)
Here f(α, β) is an element of F. It denotes the image of (α, β)
under the function f. Thus a bilinear form on W is a function from
W into F which is linear as a function of either of its arguments
when the other is fixed.
If U = V, then in place of saying that f is a bilinear form on
W = V ⊕ V, we shall simply say that f is a bilinear form on V.

Thus if V is a vector space over thefield then a bilinear form on


V is afunction/, which assigns to each ordered pair of vectors a, j8
in V a scalarf p)in F, and which satisfies
(«i, ^)\bf(a2,
and /(«, («, iSa).
Example 1. Suppose V is a vector space over the field F. Let
L_1, L_2 be linear functionals on V. Let f be a function from V × V
into F defined as
f(α, β) = L_1(α) L_2(β).
Then f is a bilinear form on V.
If α, β ∈ V, then L_1(α), L_2(β) are scalars. We have
f(aα_1 + bα_2, β) = L_1(aα_1 + bα_2) L_2(β)
= [a L_1(α_1) + b L_1(α_2)] L_2(β)
= a L_1(α_1) L_2(β) + b L_1(α_2) L_2(β)
= a f(α_1, β) + b f(α_2, β).
Also f(α, aβ_1 + bβ_2) = L_1(α) L_2(aβ_1 + bβ_2)
= L_1(α) [a L_2(β_1) + b L_2(β_2)]
= a L_1(α) L_2(β_1) + b L_1(α) L_2(β_2)
= a f(α, β_1) + b f(α, β_2).
Hence f is a bilinear form on V.
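Example 1's construction is easy to check numerically. The sketch below is illustrative only (the particular functionals L1 and L2 on R² are arbitrary choices, not from the text); it builds f(α, β) = L1(α) L2(β) and verifies linearity in the first argument.

```python
# Bilinear form built from two linear functionals on R^2 (Example 1):
# f(alpha, beta) = L1(alpha) * L2(beta).

def L1(v):
    # an arbitrary linear functional on R^2
    return 2 * v[0] - v[1]

def L2(v):
    # another arbitrary linear functional on R^2
    return v[0] + 3 * v[1]

def f(alpha, beta):
    return L1(alpha) * L2(beta)

def add(u, v):
    return [u[0] + v[0], u[1] + v[1]]

def scale(c, v):
    return [c * v[0], c * v[1]]

a, b = 5, -3
alpha1, alpha2, beta = [1, 2], [4, -1], [2, 7]

# Linearity in the first slot:
# f(a*alpha1 + b*alpha2, beta) == a*f(alpha1, beta) + b*f(alpha2, beta)
lhs = f(add(scale(a, alpha1), scale(b, alpha2)), beta)
rhs = a * f(alpha1, beta) + b * f(alpha2, beta)
assert lhs == rhs
```

Linearity in the second argument can be checked the same way, mirroring the two halves of the verification in the text.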
Example 2. Suppose V is a vector space over the field F. Let
T be a linear operator on V and f a bilinear form on V. Suppose g
is a function from V × V into F defined as
g(α, β) = f(Tα, Tβ).
Then g is a bilinear form on V.
We have g(aα_1 + bα_2, β) = f(T(aα_1 + bα_2), Tβ)
= f(aTα_1 + bTα_2, Tβ)
= a f(Tα_1, Tβ) + b f(Tα_2, Tβ)
= a g(α_1, β) + b g(α_2, β).
Also g(α, aβ_1 + bβ_2) = f(Tα, T(aβ_1 + bβ_2))
= f(Tα, aTβ_1 + bTβ_2)
= a f(Tα, Tβ_1) + b f(Tα, Tβ_2)
= a g(α, β_1) + b g(α, β_2).
Hence g is a bilinear form on V.
Example 3. Let V = V_n(F), i.e., let V be the vector space of all
ordered n-tuples over the field F. If α = (a_1, ..., a_n) and β = (b_1, ..., b_n)
be any two elements in V, let f be a function from V × V into F
defined as
f(α, β) = a_1 b_1 + ... + a_n b_n.

Then it can be easily seen that f is a bilinear form on V.


Example 4. Let U and V be two vector spaces over the field F.
Let W = U ⊕ V. If θ is the zero function from W into F (i.e., θ
maps each element of W into the zero of F), then θ is a bilinear
form on W.
We have θ(α, β) = 0 for all (α, β) ∈ W.
Now θ(aα_1 + bα_2, β) = 0 = 0 + 0
= a0 + b0
= a θ(α_1, β) + b θ(α_2, β).
Also θ(α, aβ_1 + bβ_2) = 0 = 0 + 0
= a0 + b0
= a θ(α, β_1) + b θ(α, β_2).
Thus θ is a bilinear form on W.

Example 5. Let U and V be two vector spaces over the field F.
Let f be a bilinear form on U × V. Then the function −f from U × V
into F defined as
(−f)(α, β) = −f(α, β)
is a bilinear form on U × V.

Solved Examples
Example. Which of the following functions f, defined on vectors
α = (x_1, x_2) and β = (y_1, y_2) in R², are bilinear forms ?
(i) f(α, β) = x_1 y_2 − x_2 y_1
(Meerut 1972, 77, 79, 85, 90, 91, 93 P)
(ii) f(α, β) = (x_1 − y_1)² + x_2 y_2 (Meerut 1978, 85, 91, 93 P)
Solution. Let α = (x_1, x_2),
β = (y_1, y_2),
and γ = (z_1, z_2)
be any three vectors in R². Let a, b ∈ R. Then
aα + bβ = a(x_1, x_2) + b(y_1, y_2)
= (ax_1 + by_1, ax_2 + by_2).

(i) By definition of f, we have
f(α, γ) = f((x_1, x_2), (z_1, z_2)) = x_1 z_2 − x_2 z_1,
f(β, γ) = y_1 z_2 − y_2 z_1, f(γ, α) = z_1 x_2 − z_2 x_1,
and f(γ, β) = z_1 y_2 − z_2 y_1.
Now
f(aα + bβ, γ) = f((ax_1 + by_1, ax_2 + by_2), (z_1, z_2))
= (ax_1 + by_1) z_2 − (ax_2 + by_2) z_1
= a (x_1 z_2 − x_2 z_1) + b (y_1 z_2 − y_2 z_1)
= a f(α, γ) + b f(β, γ).
Also
f(γ, aα + bβ) = f((z_1, z_2), (ax_1 + by_1, ax_2 + by_2))
= z_1 (ax_2 + by_2) − z_2 (ax_1 + by_1)
= a (z_1 x_2 − z_2 x_1) + b (z_1 y_2 − z_2 y_1)
= a f(γ, α) + b f(γ, β).
Hence f is a bilinear form on R².
(ii) By definition of f, we have
f(α, γ) = (x_1 − z_1)² + x_2 z_2,
and f(β, γ) = (y_1 − z_1)² + y_2 z_2.
Now
f(aα + bβ, γ) = f((ax_1 + by_1, ax_2 + by_2), (z_1, z_2))
= (ax_1 + by_1 − z_1)² + (ax_2 + by_2) z_2.
Also
a f(α, γ) + b f(β, γ) = a (x_1 − z_1)² + a x_2 z_2 + b (y_1 − z_1)² + b y_2 z_2
= a (x_1 − z_1)² + b (y_1 − z_1)² + (ax_2 + by_2) z_2.
Obviously f(aα + bβ, γ) ≠ a f(α, γ) + b f(β, γ) in general. Hence f is
not a bilinear form on R².
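The two parts of this solved example can also be tested numerically. The helper below is an illustrative sketch (not from the text): it checks linearity in the first slot at a few sample vectors and scalars, which is enough to confirm (i) and refute (ii).

```python
def check_first_slot_linear(f):
    """Check f(a*alpha + b*beta, gamma) == a*f(alpha, gamma) + b*f(beta, gamma)
    at a handful of sample vectors in R^2 and sample scalars."""
    samples = [([1, 2], [3, -1], [2, 5]), ([0, 1], [1, 0], [4, 4])]
    for alpha, beta, gamma in samples:
        for a, b in [(2, 3), (-1, 5)]:
            combo = [a * alpha[0] + b * beta[0], a * alpha[1] + b * beta[1]]
            if f(combo, gamma) != a * f(alpha, gamma) + b * f(beta, gamma):
                return False
    return True

f1 = lambda alpha, beta: alpha[0] * beta[1] - alpha[1] * beta[0]          # part (i)
f2 = lambda alpha, beta: (alpha[0] - beta[0]) ** 2 + alpha[1] * beta[1]   # part (ii)

assert check_first_slot_linear(f1)        # (i) passes: bilinear
assert not check_first_slot_linear(f2)    # (ii) fails: not bilinear
```

A passing numerical check is only evidence, not a proof; the algebraic verification above is what establishes the bilinearity of (i).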
§ 2. Bilinear forms as vectors. Suppose U and V are two
vector spaces over the field F. Let L(U, V, F) denote the set of all
bilinear forms on U × V. We can impose a vector space structure
on L(U, V, F) over the field F. For this purpose we define addi-
tion and scalar multiplication in L(U, V, F) as follows :
Addition of bilinear forms. Suppose f, g are two bilinear forms
on U × V. Then we define their sum as follows :
(f + g)(α, β) = f(α, β) + g(α, β).
It can be easily seen that f + g is also a bilinear form on U × V.
We have
(f + g)(aα_1 + bα_2, β) = f(aα_1 + bα_2, β) + g(aα_1 + bα_2, β)
= [a f(α_1, β) + b f(α_2, β)] + [a g(α_1, β) + b g(α_2, β)]
= a [f(α_1, β) + g(α_1, β)] + b [f(α_2, β) + g(α_2, β)]
= a [(f + g)(α_1, β)] + b [(f + g)(α_2, β)].

Similarly, we can show that
(f + g)(α, aβ_1 + bβ_2) = a [(f + g)(α, β_1)] + b [(f + g)(α, β_2)].
Hence f + g is a bilinear form on U × V.
Thus L(U, V, F) is closed with respect to addition defined
on it.
Scalar multiplication of bilinear forms. Suppose f is a bilinear
form on U × V and c is a scalar.
Then we define cf as follows :
(cf)(α, β) = c f(α, β) for all (α, β) ∈ U × V.
Obviously cf is a function from U × V into F. We have
(cf)(aα_1 + bα_2, β) = c f(aα_1 + bα_2, β)
= c [a f(α_1, β) + b f(α_2, β)]
= ca f(α_1, β) + cb f(α_2, β)
= a [c f(α_1, β)] + b [c f(α_2, β)]
= a [(cf)(α_1, β)] + b [(cf)(α_2, β)].
Similarly we can show that
(cf)(α, aβ_1 + bβ_2) = a [(cf)(α, β_1)] + b [(cf)(α, β_2)].
Therefore cf is also a bilinear form on U × V.
Thus L(U, V, F) is closed with respect to scalar multiplication
defined on it.
Important. It can be easily seen that the set of all bilinear
forms on U × V is a vector space over the field F with respect to
addition and scalar multiplication just defined above.
(Meerut 1975, 78, 85)
The bilinear form θ will act as the zero vector of this space.
The bilinear form −f will act as the additive inverse of the
vector f.
Theorem 1. If U is an n-dimensional vector space with basis
{α_1, ..., α_n}, if V is an m-dimensional vector space with basis
{β_1, ..., β_m}, and if (a_ij) is any set of nm scalars (i = 1, ..., n;
j = 1, ..., m), then there is one and only one bilinear form f on U ⊕ V
such that
f(α_i, β_j) = a_ij for all i, j. (Meerut 1969, 78, 88, 91, 93)
Proof. Let α = Σ_{i=1}^{n} x_i α_i ∈ U and β = Σ_{j=1}^{m} y_j β_j ∈ V.
Let us define a function f from U × V into F such that
f(α, β) = Σ_{i=1}^{n} Σ_{j=1}^{m} x_i y_j a_ij.   ...(1)
We shall show that f is a bilinear form on U × V.
Let a, b ∈ F and let α_1, α_2 ∈ U.
Let α_1 = Σ_{i=1}^{n} a_i α_i, α_2 = Σ_{i=1}^{n} b_i α_i.
Then f(α_1, β) = Σ_{i=1}^{n} Σ_{j=1}^{m} a_i y_j a_ij
and f(α_2, β) = Σ_{i=1}^{n} Σ_{j=1}^{m} b_i y_j a_ij.
Also aα_1 + bα_2 = a Σ_{i=1}^{n} a_i α_i + b Σ_{i=1}^{n} b_i α_i = Σ_{i=1}^{n} (aa_i + bb_i) α_i.
∴ f(aα_1 + bα_2, β) = Σ_{i=1}^{n} Σ_{j=1}^{m} (aa_i + bb_i) y_j a_ij
= Σ_{i=1}^{n} Σ_{j=1}^{m} aa_i y_j a_ij + Σ_{i=1}^{n} Σ_{j=1}^{m} bb_i y_j a_ij
= a Σ_{i=1}^{n} Σ_{j=1}^{m} a_i y_j a_ij + b Σ_{i=1}^{n} Σ_{j=1}^{m} b_i y_j a_ij
= a f(α_1, β) + b f(α_2, β).
Similarly, we can prove that if a, b ∈ F and β_1, β_2 ∈ V, then
f(α, aβ_1 + bβ_2) = a f(α, β_1) + b f(α, β_2).
Therefore f is a bilinear form on U × V.
Now α_i = 0α_1 + ... + 0α_{i−1} + 1α_i + 0α_{i+1} + ... + 0α_n
and β_j = 0β_1 + ... + 0β_{j−1} + 1β_j + 0β_{j+1} + ... + 0β_m.
Therefore from (1), we have
f(α_i, β_j) = a_ij.
Thus there exists a bilinear form f on U × V such that
f(α_i, β_j) = a_ij.
Now to show that f is unique.
Let g be a bilinear form on U × V such that
g(α_i, β_j) = a_ij.   ...(2)
If α = Σ_{i=1}^{n} x_i α_i be in U and β = Σ_{j=1}^{m} y_j β_j be in V, then
g(α, β) = g(Σ_{i=1}^{n} x_i α_i, Σ_{j=1}^{m} y_j β_j)

m
Bxtyjg(tChPj) [V g is a bilinear form]
r-i y-i
m
£ SxiyiUtj Ifrom (2)]
/-I

==/(a. Ph [from (1)1


By the equality of two functions, we have
g^f-
Thusfis unique.
Matrix of a bilinear form. Definition. Let V be a finite-
dimensional vector space and let B = {α_1, ..., α_n} be an ordered basis
for V. If f is a bilinear form on V, the matrix of f in the ordered
basis B is the n × n matrix A = [a_ij]_{n×n} such that
f(α_i, α_j) = a_ij, i = 1, ..., n; j = 1, ..., n.
We shall denote this matrix A by [f]_B.
Rank of a bilinear form. The rank of a bilinear form is defined
as the rank of the matrix of the form in any ordered basis.
Let us describe all bilinear forms on a finite-dimensional vector
space V of dimension n.
If α = Σ_{i=1}^{n} x_i α_i and β = Σ_{j=1}^{n} y_j α_j are vectors in V, then
f(α, β) = f(Σ_{i=1}^{n} x_i α_i, Σ_{j=1}^{n} y_j α_j)
= Σ_{i=1}^{n} Σ_{j=1}^{n} x_i y_j f(α_i, α_j)
= Σ_{i=1}^{n} Σ_{j=1}^{n} x_i y_j a_ij
= X′AY,
where X and Y are the coordinate matrices of α and β in the ordered
basis B and X′ is the transpose of the matrix X. Thus
f(α, β) = [α]′_B A [β]_B.
From the definition of the matrix of a bilinear form, we note
that if f is a bilinear form on an n-dimensional vector space V over
the field F and B is an ordered basis of V, then there exists a
unique n × n matrix A = [a_ij]_{n×n} over the field F such that
A = [f]_B.   ...(1)
Conversely, if A = [a_ij]_{n×n} be an n × n matrix over the field F,
then from theorem 1 we see that there exists a unique bilinear
form f on V such that f(α_i, α_j) = a_ij.
If α = Σ_{i=1}^{n} x_i α_i and β = Σ_{j=1}^{n} y_j α_j are vectors in V, then the
bilinear form f is defined as
f(α, β) = Σ_{i=1}^{n} Σ_{j=1}^{n} x_i y_j a_ij = X′AY,   ...(2)
where X, Y are the coordinate matrices of α, β in the ordered basis
B. Hence the bilinear forms on V are precisely those obtained from
an n × n matrix as in (1).
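The formula f(α, β) = X′AY is a double sum and can be evaluated directly. The sketch below is illustrative (the matrix A and the coordinate columns X, Y are arbitrary choices, not from the text):

```python
# Evaluate a bilinear form from its matrix in a fixed ordered basis:
# f(alpha, beta) = X' A Y = sum_i sum_j x_i * a_ij * y_j.

def bilinear(A, X, Y):
    n = len(A)
    return sum(X[i] * A[i][j] * Y[j] for i in range(n) for j in range(n))

A = [[1, 2],
     [0, -1]]        # an arbitrary 2x2 matrix of a form
X = [3, 1]           # coordinates of alpha in the basis
Y = [2, 5]           # coordinates of beta in the basis

print(bilinear(A, X, Y))   # prints 31
```
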
Example. Let f be the bilinear form on R² defined by
f((x_1, y_1), (x_2, y_2)) = x_1 y_1 + x_2 y_2.
Find the matrix of f in the ordered basis
B = {(1, −1), (1, 1)} of V_2(R).
Solution. Let B = {α_1, α_2},
where α_1 = (1, −1), α_2 = (1, 1).
We have f(α_1, α_1) = f((1, −1), (1, −1)) = −1 − 1 = −2,
f(α_1, α_2) = f((1, −1), (1, 1)) = −1 + 1 = 0,
f(α_2, α_1) = f((1, 1), (1, −1)) = 1 − 1 = 0,
f(α_2, α_2) = f((1, 1), (1, 1)) = 1 + 1 = 2.
∴ [f]_B = [ −2  0 ]
          [  0  2 ]
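The computation in this example can be mechanized: the (i, j) entry of [f]_B is just f(α_i, α_j). A sketch (illustrative; it follows the example's notation literally, with (x_1, y_1) and (x_2, y_2) the two argument vectors):

```python
# Matrix of the worked example's form in the basis {(1, -1), (1, 1)}:
# entry (i, j) of [f]_B is f(alpha_i, alpha_j).

def f(alpha, beta):
    (x1, y1), (x2, y2) = alpha, beta
    return x1 * y1 + x2 * y2     # f((x1, y1), (x2, y2)) = x1*y1 + x2*y2

basis = [(1, -1), (1, 1)]
M = [[f(u, v) for v in basis] for u in basis]

assert M == [[-2, 0], [0, 2]]    # agrees with the matrix found above
```
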
Theorem 2. Let V be a finite-dimensional vector space over the
field F. For each ordered basis B of V, the function φ which associates
with each bilinear form on V its matrix in the ordered basis B is an
isomorphism of the space L(V, V, F) onto the space of n × n matrices
over the field F.
Proof. Let M denote the vector space of all n × n matrices over
the field F. Let φ be a function from L(V, V, F) into M such that
φ(f) = [f]_B for all f ∈ L(V, V, F).   ...(1)
φ is a linear transformation.
Let B = {α_1, α_2, ..., α_n}.
Let a, b ∈ F and f, g ∈ L(V, V, F). Then
φ(af + bg) = [af + bg]_B.
Now (af + bg)(α_i, α_j) = (af)(α_i, α_j) + (bg)(α_i, α_j)
= a f(α_i, α_j) + b g(α_i, α_j).   ...(2)

The result (2) is true for each i = 1, ..., n; j = 1, ..., n.
Therefore [af + bg]_B = a [f]_B + b [g]_B
⇒ φ(af + bg) = a φ(f) + b φ(g).
Therefore φ is a linear transformation.
φ is one-one.
Let f, g ∈ L(V, V, F). Then
φ(f) = φ(g)
⇒ [f]_B = [g]_B
⇒ f = g. [by theorem 1]
∴ φ is one-one.
φ is onto.
Let A = [a_ij]_{n×n} be an element of M. Then from theorem 1,
there exists a bilinear form f on V such that [f]_B = A
⇒ φ(f) = A.
∴ φ is onto.
Hence φ is an isomorphism of L(V, V, F) onto M.
Corollary. If V is an n-dimensional vector space over the field
F, then dim L(V, V, F) = n².
Proof. If M is the vector space of all n × n matrices over the
field F, then
L(V, V, F) ≅ M.
Since dim M = n², therefore dim L(V, V, F) = n².
Theorem 3. If U is an n-dimensional vector space with basis
{α_1, ..., α_n}, and if V is an m-dimensional vector space with basis
{β_1, ..., β_m}, then there is a uniquely determined basis
{f_pq} (p = 1, ..., n; q = 1, ..., m)
in the vector space of all bilinear forms on U ⊕ V with the property
that f_pq(α_i, β_j) = δ_ip δ_jq. Consequently the dimension of the vector
space L(U, V, F) of bilinear forms on U ⊕ V is the product of the
dimensions of U and V. (Meerut 1992)
Proof. By theorem 1, for each fixed p and q we determine a
unique bilinear form f_pq on U ⊕ V such that
f_pq(α_i, β_j) = δ_ip δ_jq, i = 1, ..., n; j = 1, ..., m.   ...(1)
Let B denote the set of these nm uniquely determined bilinear
forms on U ⊕ V.
First we shall show that L(U, V, F) is generated by B. Let f
be any bilinear form on U × V, i.e., f ∈ L(U, V, F).

If f(α_i, β_j) = a_ij, then we shall prove that
f = Σ_{p=1}^{n} Σ_{q=1}^{m} a_pq f_pq.
Let α be any element of U and β be any element of V. Let
α = Σ_{i=1}^{n} x_i α_i, β = Σ_{j=1}^{m} y_j β_j.
Then f_pq(α, β) = f_pq(Σ_{i=1}^{n} x_i α_i, Σ_{j=1}^{m} y_j β_j)
= Σ_{i=1}^{n} Σ_{j=1}^{m} x_i y_j f_pq(α_i, β_j)
= Σ_{i=1}^{n} Σ_{j=1}^{m} x_i y_j δ_ip δ_jq
= Σ_{i=1}^{n} x_i y_q δ_ip  [summing with respect to j]
= x_p y_q.   ...(2)
Now f(α, β) = f(Σ_{i=1}^{n} x_i α_i, Σ_{j=1}^{m} y_j β_j)
= Σ_{i=1}^{n} Σ_{j=1}^{m} x_i y_j f(α_i, β_j)
= Σ_{i=1}^{n} Σ_{j=1}^{m} x_i y_j a_ij
= Σ_{p=1}^{n} Σ_{q=1}^{m} a_pq x_p y_q
= Σ_{p=1}^{n} Σ_{q=1}^{m} a_pq [f_pq(α, β)]  [from (2)]
= (Σ_{p=1}^{n} Σ_{q=1}^{m} a_pq f_pq)(α, β).

∴ f = Σ_{p=1}^{n} Σ_{q=1}^{m} a_pq f_pq.
Thus every element f in L(U, V, F) is a linear combination of
elements of B.
Now we shall prove that B is linearly independent. Let
Σ_{p=1}^{n} Σ_{q=1}^{m} b_pq f_pq = θ (the zero form). Then
(Σ_{p=1}^{n} Σ_{q=1}^{m} b_pq f_pq)(α, β) = θ(α, β)
for all α in U and all β in V
⇒ Σ_{p=1}^{n} Σ_{q=1}^{m} b_pq f_pq(α, β) = 0
for all α in U and all β in V
⇒ Σ_{p=1}^{n} Σ_{q=1}^{m} b_pq f_pq(α_i, β_j) = 0, i = 1, ..., n; j = 1, ..., m
⇒ Σ_{p=1}^{n} Σ_{q=1}^{m} b_pq δ_ip δ_jq = 0, i = 1, ..., n; j = 1, ..., m
⇒ Σ_{p=1}^{n} b_pj δ_ip = 0, i = 1, ..., n; j = 1, ..., m
⇒ b_ij = 0, i = 1, ..., n; j = 1, ..., m
⇒ B is linearly independent.
∴ B is a basis for L(U, V, F).
∴ dim L(U, V, F) = number of vectors in B = nm.

Theorem 4. Let V be a finite-dimensional vector space over
the field F. If B = {α_1, ..., α_n} is an ordered basis for V,
and B′ = {L_1, ..., L_n}
is the dual basis for V*, then the bilinear forms
f_ij(α, β) = L_i(α) L_j(β), 1 ≤ i ≤ n, 1 ≤ j ≤ n,
form a basis for the space L(V, V, F). In particular, the dimension
of L(V, V, F) is n².
Proof. Since B′ is the dual basis of B, therefore L_i(α_p) = δ_ip
and L_j(α_q) = δ_jq for all values of i, j, p, q from 1 to n.

Thus f_ij(α_p, α_q) = L_i(α_p) L_j(α_q) = δ_ip δ_jq.
Now proceed as in theorem 3. Simply replace U by V and
{β_1, ..., β_m} by {α_1, ..., α_n}.
Theorem 5. Let V be a finite-dimensional vector space over
the field F, and let
B = {α_1, ..., α_n} and B′ = {β_1, ..., β_n}
be ordered bases for V. Suppose f is a bilinear form on V. Then
there exists an invertible n × n matrix P over the field F such that
[f]_{B′} = P′ [f]_B P,
where P′ denotes the transpose of the matrix P.
The matrix P is called the transition matrix from the ordered
basis B to the ordered basis B′.
Proof. Let T be the linear operator on V defined as
T(α_i) = β_i, i = 1, ..., n.
Since T maps a basis onto a basis, therefore T is invertible.
Let P denote the matrix of T relative to the ordered basis B.
Then P is an invertible matrix. We recall that if P = [p_ij]_{n×n}, then
T(α_j) = β_j = Σ_{i=1}^{n} p_ij α_i.
Also if α is any vector in V, then
[α]_B = P [α]_{B′}.   ...(1)
[See theorem 8 page 166]
For any vectors α, β in V, we have
f(α, β) = [α]′_B [f]_B [β]_B
= (P [α]_{B′})′ [f]_B [β]_B  [from (1)]
= [α]′_{B′} P′ [f]_B [β]_B  [∵ if P, Q are two matrices, then (PQ)′ = Q′P′]
= [α]′_{B′} P′ [f]_B P [β]_{B′}.  [from (1), taking β in place of α]
By the definition and uniqueness of the matrix representing f
in the ordered basis B′, we must have
[f]_{B′} = P′ [f]_B P.
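Theorem 5 can be spot-checked with small matrices. In the sketch below (illustrative; A and P are arbitrary, with P invertible), A plays the role of [f]_B and P the transition matrix; the value of the form must come out the same whether computed in the old or the new coordinates.

```python
# Change of basis for a bilinear form: [f]_B' = P' [f]_B P (Theorem 5).

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[1, 2],
     [3, 4]]          # [f]_B, an arbitrary matrix of the form
P = [[1, 1],
     [-1, 1]]         # an invertible transition matrix from B to B'

A_new = matmul(matmul(transpose(P), A), P)   # [f]_B' = P' A P

def form(M, X, Y):
    return sum(X[i] * M[i][j] * Y[j] for i in range(2) for j in range(2))

# New coordinates relate to old ones by X = P X_new, as in equation (1).
Xn, Yn = [2, 1], [1, 3]
X = [P[0][0] * Xn[0] + P[0][1] * Xn[1], P[1][0] * Xn[0] + P[1][1] * Xn[1]]
Y = [P[0][0] * Yn[0] + P[0][1] * Yn[1], P[1][0] * Yn[0] + P[1][1] * Yn[1]]

assert form(A, X, Y) == form(A_new, Xn, Yn)   # same scalar either way
```
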

Degenerate and non-degenerate bilinear forms. Definitions. A
bilinear form f on a vector space V is called degenerate if either
(i) there is some non-zero α in V such that f(α, β) = 0 for all β in V,
or (ii) there is some non-zero β in V such that f(α, β) = 0 for all α in V.
A bilinear form is called non-degenerate if it is not degenerate.
In other words, a bilinear form f on a vector space V is called non-
degenerate if
(i) for each 0 ≠ α ∈ V, there is a β in V such that f(α, β) ≠ 0,
and (ii) for each 0 ≠ β ∈ V, there is an α in V such that
f(α, β) ≠ 0.
§ 3. Symmetric bilinear forms. Definition. Let f be a bilinear
form on the vector space V. Then f is said to be symmetric if
f(α, β) = f(β, α) for all vectors α, β in V.
Theorem 1. If V is a finite-dimensional vector space, then a
bilinear form f on V is symmetric if and only if its matrix A in some
(or every) ordered basis is symmetric, i.e., A′ = A. (Meerut 1990)

Proof. Let B be an ordered basis for V. Let α, β be any
two vectors in V. Let X, Y be the coordinate matrices of the vec-
tors α and β respectively in the ordered basis B. If f is a bilinear
form on V and A is the matrix of f in the ordered basis B, then
f(α, β) = X′AY,
and f(β, α) = Y′AX.
f will be symmetric if and only if
X′AY = Y′AX
for all column matrices X and Y.
Now X′AY is a 1 × 1 matrix, therefore we have
X′AY = (X′AY)′ = Y′A′(X′)′ = Y′A′X.
∴ f will be symmetric if and only if
Y′A′X = Y′AX for all column matrices X and Y,
i.e., A′ = A,
i.e., A is symmetric.
Hence the theorem.

Quadratic form. Definition. Let f be a bilinear form on a
vector space V over the field F. Then the quadratic form on V asso-
ciated with the bilinear form f is the function q from V into F
defined by
q(α) = f(α, α) for all α in V.

Theorem 2. Let V be a vector space over the field F whose
characteristic is not equal to 2, i.e., 1 + 1 ≠ 0. Then every symmetric
bilinear form on V is uniquely determined by the corresponding
quadratic form.
Proof. Let f be a symmetric bilinear form on V and q be
the quadratic form on V associated with f. For all α, β in V we
have
q(α + β) = f(α + β, α + β)
= f(α, α + β) + f(β, α + β)
= f(α, α) + f(α, β) + f(β, α) + f(β, β)
= q(α) + f(α, β) + f(α, β) + q(β)  [∵ f is symmetric]
= q(α) + (1 + 1) f(α, β) + q(β).
∴ (1 + 1) f(α, β) = q(α + β) − q(α) − q(β).   ...(1)
Thus f(α, β) is uniquely determined by q with the help of
the polarization identity (1) provided 1 + 1 ≠ 0, i.e., F is not of
characteristic 2.
Note. If F is a subfield of the complex numbers, the symmetric
bilinear form f is completely determined by its associated quadratic
form according to the polarization identity
f(α, β) = ¼ q(α + β) − ¼ q(α − β).
As in theorem 2, we have
2 f(α, β) = q(α + β) − q(α) − q(β).   ...(1)
Also q(α − β) = f(α − β, α − β)
= f(α, α − β) − f(β, α − β)
= f(α, α) − f(α, β) − f(β, α) + f(β, β)
= q(α) + q(β) − 2 f(α, β).
∴ 2 f(α, β) = q(α) + q(β) − q(α − β).   ...(2)
Adding (1) and (2), we get
4 f(α, β) = q(α + β) − q(α − β)
⇒ f(α, β) = ¼ q(α + β) − ¼ q(α − β).
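Both identities are easy to check numerically for a concrete symmetric form; the sketch below (illustrative) takes f to be the dot product on R².

```python
# Polarization: recover a symmetric bilinear form f from its quadratic
# form q(a) = f(a, a).

def f(a, b):                        # a symmetric bilinear form (dot product)
    return a[0] * b[0] + a[1] * b[1]

def q(a):                           # associated quadratic form
    return f(a, a)

def add(a, b): return [a[0] + b[0], a[1] + b[1]]
def sub(a, b): return [a[0] - b[0], a[1] - b[1]]

a, b = [3, -2], [1, 4]
assert 2 * f(a, b) == q(add(a, b)) - q(a) - q(b)      # identity (1)
assert 4 * f(a, b) == q(add(a, b)) - q(sub(a, b))     # identity of the note
```
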
Theorem 3. Let V be a finite-dimensional vector space over a
subfield of the complex numbers, and let f be a symmetric bilinear
form on V. Then there is an ordered basis for V in which f is repre-
sented by a diagonal matrix.
(Meerut 1981, 82, 83P, 84, 87, 89; G.N.D.U. Amritsar 90)

Proof. In order to prove the theorem, we should find an
ordered basis {α_1, ..., α_n} for V such that f(α_i, α_j) = 0 for i ≠ j.
If f = θ or n = 1, the theorem is obviously true. So let us sup-
pose that f ≠ θ and n > 1.
If f(α, α) = 0 for every α in V, then q(α) = 0 for every α in V,
where q is the quadratic form associated with f. Therefore from
the polarization identity f(α, β) = ¼ q(α + β) − ¼ q(α − β) we see
that f(α, β) = 0 for all α, β in V and thus f = θ, which is a contra-
diction. Therefore there must be a vector α_1 in V such that
f(α_1, α_1) = q(α_1) ≠ 0.
Let W_1 be the one-dimensional subspace of V spanned by the
vector α_1 and let W_2 be the set of all vectors β in V such that
f(α_1, β) = 0. Obviously W_2 is a subspace of V. Now we claim that
V = W_1 ⊕ W_2. We shall first prove our claim.
First we show that W_1 and W_2 are disjoint.
Let γ ∈ W_1 ∩ W_2. Then γ ∈ W_1 and γ ∈ W_2.
But γ ∈ W_1 ⇒ γ = c α_1 for some scalar c.
Also γ ∈ W_2 ⇒ f(α_1, γ) = 0
⇒ f(α_1, c α_1) = 0
⇒ c f(α_1, α_1) = 0
⇒ c = 0  [∵ f(α_1, α_1) ≠ 0]
⇒ γ = 0 α_1 = 0.

∴ W_1 and W_2 are disjoint.
Now we shall show that V = W_1 + W_2.
Let γ be any vector in V. Since f(α_1, α_1) ≠ 0, so put
β = γ − [f(γ, α_1) / f(α_1, α_1)] α_1.
Thus f(α_1, β) = f(α_1, γ − [f(γ, α_1) / f(α_1, α_1)] α_1)
= f(α_1, γ) − [f(γ, α_1) / f(α_1, α_1)] f(α_1, α_1)
= f(α_1, γ) − f(γ, α_1)
= f(α_1, γ) − f(α_1, γ)  [∵ f is symmetric]
= 0.

∴ β ∈ W_2 by definition of W_2. Also by definition of β,
γ = [f(γ, α_1) / f(α_1, α_1)] α_1 + β ∈ W_1 + W_2.
Hence V = W_1 + W_2.
∴ V = W_1 ⊕ W_2.
So dim W_2 = dim V − dim W_1 = n − 1.
Now let g be the restriction of f to W_2. Then g is a
symmetric bilinear form on W_2 and dim W_2 is less than dim V.
Now we may assume by induction that W_2 has a basis {α_2, ..., α_n}
such that
g(α_i, α_j) = 0, i ≠ j (i ≥ 2, j ≥ 2)
⇒ f(α_i, α_j) = 0, i ≠ j (i ≥ 2, j ≥ 2)
[∵ g is the restriction of f]
Now by definition of W_2, we have
f(α_1, α_j) = 0 for j = 2, 3, ..., n.
Since {α_1} is a basis for W_1 and V = W_1 ⊕ W_2, therefore
{α_1, ..., α_n} is a basis for V such that
f(α_i, α_j) = 0 for i ≠ j.
Corollary. Let F be a subfield of the complex numbers, and
let A be a symmetric n × n matrix over F. Then there is an invertible
n × n matrix P over F such that P′AP is diagonal.
Proof. Let V be a finite dimensional vector space over the
field F and let B be an ordered basis for V. Let f be the bilinear
form on V such that [f]_B = A. Since A is a symmetric matrix,
therefore the bilinear form f is also symmetric. Therefore by the
above theorem there exists an ordered basis B′ of V such that [f]_{B′}
is a diagonal matrix. If P is the transition matrix from B to B′,
then P is an invertible matrix and
[f]_{B′} = P′AP,
i.e., P′AP is a diagonal matrix.
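The proof of the theorem is constructive, and in matrix terms it amounts to clearing a row and the matching column with the same multiplier, which keeps the transformation of the shape P′AP. A minimal sketch for a 2 × 2 real symmetric matrix whose (1, 1) entry is nonzero (illustrative; the general algorithm must also handle a zero pivot by first bringing a vector with q(α) ≠ 0 into the first position, as in the proof):

```python
# Congruence-diagonalize a 2x2 symmetric A with A[0][0] != 0:
# with c = -A[0][1] / A[0][0] and P = [[1, c], [0, 1]], P' A P is diagonal.

A = [[2.0, 3.0],
     [3.0, 5.0]]          # an arbitrary symmetric matrix, A[0][0] != 0

c = -A[0][1] / A[0][0]
P = [[1.0, c],
     [0.0, 1.0]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Pt = [[P[0][0], P[1][0]],
      [P[0][1], P[1][1]]]            # transpose of P

D = matmul(matmul(Pt, A), P)         # P' A P

assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12   # off-diagonal cleared
```
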
§ 4. Skew-symmetric bilinear forms.
Definition. Let f be a bilinear form on the vector space V.
Then f is said to be skew-symmetric if f(α, β) = −f(β, α) for all
vectors α, β in V.
Theorem 1. Every bilinear form on a vector space V over a
subfield F of the complex numbers can be uniquely expressed as the
sum of a symmetric and a skew-symmetric bilinear form.

Proof. Let f be a bilinear form on a vector space V. Let
g(α, β) = ½ [f(α, β) + f(β, α)]   ...(1)
and h(α, β) = ½ [f(α, β) − f(β, α)]   ...(2)
for all α, β in V.
Then it can be easily seen that both g and h are bilinear forms
on V. We have
g(β, α) = ½ [f(β, α) + f(α, β)] = g(α, β).
∴ g is symmetric.
Also h(β, α) = ½ [f(β, α) − f(α, β)] = −½ [f(α, β) − f(β, α)]
= −h(α, β).
∴ h is skew-symmetric.
Adding (1) and (2), we get
g(α, β) + h(α, β) = f(α, β)
⇒ (g + h)(α, β) = f(α, β) for all α, β in V.
∴ g + h = f.
Now suppose that f = f_1 + f_2 where f_1 is symmetric and f_2 is
skew-symmetric.
Then f(α, β) = (f_1 + f_2)(α, β)
or f(α, β) = f_1(α, β) + f_2(α, β).   ...(3)
Also f(β, α) = (f_1 + f_2)(β, α)
or f(β, α) = f_1(β, α) + f_2(β, α)
or f(β, α) = f_1(α, β) − f_2(α, β).   ...(4)
[∵ f_1 is symmetric and f_2 is skew-symmetric]
Adding (3) and (4), we get
f(α, β) + f(β, α) = 2 f_1(α, β),
i.e., f_1(α, β) = ½ [f(α, β) + f(β, α)] = g(α, β).
∴ f_1 = g.
Subtracting (4) from (3), we get
f(α, β) − f(β, α) = 2 f_2(α, β),
i.e., f_2(α, β) = ½ [f(α, β) − f(β, α)] = h(α, β).
∴ f_2 = h.
Thus the resolution f = g + h is unique.
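In matrix terms the theorem says every square matrix A over such a field splits uniquely as A = (A + A′)/2 + (A − A′)/2, a symmetric part plus a skew-symmetric part. A quick sketch (illustrative; the matrix is arbitrary):

```python
# Unique decomposition into symmetric + skew-symmetric parts, mirroring
# g = (1/2)[f(a,b) + f(b,a)] and h = (1/2)[f(a,b) - f(b,a)].

A = [[1, 4],
     [2, 3]]          # an arbitrary square matrix

n = len(A)
S = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
K = [[(A[i][j] - A[j][i]) / 2 for j in range(n)] for i in range(n)]

assert all(S[i][j] == S[j][i] for i in range(n) for j in range(n))     # symmetric
assert all(K[i][j] == -K[j][i] for i in range(n) for j in range(n))    # skew
assert all(S[i][j] + K[i][j] == A[i][j] for i in range(n) for j in range(n))
```
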
Theorem 2. If V is a finite-dimensional vector space, then a
bilinear form f on V is skew-symmetric if and only if its matrix A in
some (or every) ordered basis is skew-symmetric, i.e., A′ = −A.
Proof. Let B be an ordered basis for V. Let α, β be any two
vectors in V. Let X, Y be the coordinate matrices of the vectors α
and β respectively in the ordered basis B. If f is a bilinear form on

V and A is the matrix of f in the ordered basis B, then
f(α, β) = X′AY,
and f(β, α) = Y′AX.
f will be skew-symmetric if and only if
X′AY = −Y′AX
for all column matrices X and Y.
Now X′AY is a 1 × 1 matrix, therefore we have
X′AY = (X′AY)′ = Y′A′(X′)′ = Y′A′X.
∴ f will be skew-symmetric if and only if
Y′A′X = −Y′AX for all column matrices X and Y,
i.e., A′ = −A,
i.e., A is skew-symmetric.
§ 5. Groups preserving bilinear forms.
Definition. Let f be a bilinear form and T be a linear operator
on a vector space V over the field F. We say that T preserves f if
f(Tα, Tβ) = f(α, β) for all α, β in V.
The identity operator I preserves every bilinear form. For if
f is any bilinear form, then
f(Iα, Iβ) = f(α, β) for all α, β in V.
If S and T are linear operators which preserve f, then the
product ST also preserves f. In fact for all α, β in V we have
f(STα, STβ) = f(Tα, Tβ)  [∵ S preserves f]
= f(α, β).  [∵ T preserves f]
∴ ST preserves f.
Therefore if G is the set of all linear operators on V which
preserve a given bilinear form f on V, then G is closed with respect
to the operation of product of two linear operators.
Theorem. Let f be a non-degenerate bilinear form on a finite-
dimensional vector space V. The set G of all linear operators on V
which preserve f is a group under the operation of composition.
Proof. If S, T are elements of G, i.e., if S and T are linear
operators on V preserving f, then ST is also a linear operator on
V preserving f. Therefore ST is also an element of G. Thus G is
closed with respect to the operation of product of two linear
operators.
The product of linear operators is an associative operation
because the product of functions is associative.
The identity operator I on V also preserves f. Therefore I is
an element of G.

Now let re Gj Then from the fact that/is nondegenerate»


we shall prove that any operator T in G is.invertible, and T"* is
also in Q. Since T e G, therefore T preserves /. Let a be a vector
in the null space of T i e.y ra=0. Then for any jS in K, we have
/(«, i8)=/(ra, Ti3)=/(0, r/3)
=/(00, T/3)=0/(0. r^)=0.
Since/(a, j8)~0 for all j3 ip K and/ is non-degenerate, there
fore pt=0.
Then 7flt=0 => as=0.
T is non-singular. Since V is finite-dimensional, therefore
T is invertible. Clearly r-‘ also preserves/; for
/(T-’a, T ●»i8)=^/(7’r“‘a, [V r preserves/]
=/(/«, 7)8)=/(a, ^).
Thus r-» e G.
Hence G is a group with respect to the operation of product
of linear operators.
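For the dot-product form on R², the operators preserving f are exactly the orthogonal ones, which gives a concrete instance of the group G. A sketch (illustrative; the 90° rotation is one convenient element):

```python
# A rotation preserves the dot-product form, f(T a, T b) = f(a, b),
# and so does a product of two preserving operators, as in the theorem.

def f(a, b):
    return a[0] * b[0] + a[1] * b[1]

def apply(T, v):
    return [T[0][0] * v[0] + T[0][1] * v[1],
            T[1][0] * v[0] + T[1][1] * v[1]]

R = [[0, -1],
     [1, 0]]          # rotation by 90 degrees

a, b = [3, 1], [-2, 5]
assert f(apply(R, a), apply(R, b)) == f(a, b)

# Closure in G: R composed with itself (rotation by 180 degrees).
RR = [[-1, 0],
      [0, -1]]
assert f(apply(RR, a), apply(RR, b)) == f(a, b)
```
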
Exercises
1. Which of the following functions f, defined on vectors
α = (x_1, x_2) and β = (y_1, y_2) in R², are bilinear forms ?
(i) f(α, β) = x_1 y_1 + x_2 y_2 ;
(ii) f(α, β) = 1 ;
(iii) f(α, β) = x_1 y_1 + x_1 y_2 + x_2 y_1 + x_2 y_2. (Meerut 1979)
2. Let m and n be positive integers and F a field. Let V be
the vector space of all m × n matrices over F. Let A be a fixed
m × m matrix over F. Define
f(X, Y) = tr (X′AY), where tr stands for trace and X′ denotes
the transpose of the matrix X.
Show that f is a bilinear form on V.
3. Let F be a field. Find all bilinear forms on the vector
space F². (Meerut 1972)
4. Let f be the bilinear form on R² defined by
f((x_1, y_1), (x_2, y_2)) = x_1 y_1 + x_2 y_2.
Find the matrix of f in each of the following bases :
(i) {(1, 0), (0, 1)} ;
(ii) {(1, 2), (3, 4)} ; (Meerut 1976, 77)
(iii) {(1, 1), (0, 1)}. (Meerut 1977)
5. Let f be a bilinear form on R² defined by
f((x_1, x_2), (y_1, y_2)) = 2x_1 y_1 − 3x_1 y_2 + x_2 y_2.
Find the matrix of f in the basis
{α_1 = (1, 0), α_2 = (1, 1)}. (Meerut 1976)

6. Let f be the bilinear form on R² defined by
f((x_1, x_2), (y_1, y_2)) = (x_1 + x_2)(y_1 + y_2).
(i) Find the matrix of f in the standard ordered basis
B = {(1, 0), (0, 1)}.
(ii) Find the transition matrix from the basis B to the basis
B′ = {(1, −1), (1, 1)}.
(iii) Find the matrix of f in the basis B′.
7. Describe explicitly all bilinear forms f on R³ with the pro-
perty that f(α, β) = f(β, α) for all α, β. (Meerut 1973, 84)
Or
Describe explicitly all symmetric bilinear forms on R³.
8. Find all skew-symmetric bilinear forms on R³.
(Meerut 1980)
9. Let V be an n-dimensional vector space over the field of
complex numbers and f be a skew-symmetric bilinear form on V.
Prove that the rank of f is even. (Meerut 1977)
Answers
1. (i) Bilinear form ; (ii) not a bilinear form ; (iii) bilinear form.
3. Let A = [a_ij]_{2×2} be any 2 × 2 matrix over F and B any ordered
basis of F². Then the bilinear forms on F² are precisely
those obtained by f(α, β) = X′AY, where X, Y are the co-
ordinate matrices of α and β in the ordered basis B.
4. (i) [ 0 0 ]   (ii) [  4 14 ]   (iii) [ 2 1 ]
       [ 0 0 ]        [ 14 24 ]         [ 1 0 ]
5. [ 2 −1 ]
   [ 2  0 ]
6. (i) [ 1 1 ]   (ii) [  1 1 ]   (iii) [ 0 0 ]
       [ 1 1 ]        [ −1 1 ]         [ 0 4 ]
Model Question Papers
( 1 )
1. (a) Define a vector space. Prove that every n-dimensional
vector space V over a field F is isomorphic to Fⁿ.
(Page 7; Theorem 2 page 83)
(b) Show that the necessary and sufficient condition for a
non-empty subset W of a vector space V(F) to be a sub-
space is that a, b ∈ F, α, β ∈ W ⇒ aα + bβ ∈ W.
(Theorem 3 page 25)
(c) Prove that in a vector space V, 0α = 0 for all α ∈ V.
(Theorem 1 page 22)

2. (a) Show that there exists a basis for each finitely generated
space and that any two bases of such a space have the
same number of vectors. (Theorems pages 62-63)
(b) If W_1, W_2 are two subspaces of a finite dimensional vec-
tor space V(F), then
dim (W_1 + W_2) = dim W_1 + dim W_2 − dim (W_1 ∩ W_2).
(Theorem 9 page 68)
3. (a) If U and V be two vector spaces over a field F, then show
that the set of all linear transformations from U into V is
a vector space. (Theorem 1 page 120)
(b) Let U and V be vector spaces over the field F and let T be
a linear transformation from U into V. Suppose that U
is finite-dimensional. Then
rank (T) + nullity (T) = dim U. (Theorem 2 page 112)
( ii )
4. (a) If W is an m-dimensional subspace of an n-dimensional
vector space V, then show that W⁰ is an (n − m)-dimen-
sional subspace of V*. (Theorem 2 page 206)
(b) If T is a linear transformation on a vector space V such
that
T² − T + I = 0,
then T is invertible. (Ex. 16 page 148)
5. (a) Define an inner product space. If α, β are vectors in a
complex inner product space, prove that
4(α, β) = ‖α + β‖² − ‖α − β‖² + i ‖α + iβ‖² − i ‖α − iβ‖².
(Page 284; Ex. 8 page 301)
(b) If W_1 and W_2 are subspaces of a finite dimensional inner
product space, prove that
(W_1 + W_2)^⊥ = W_1^⊥ ∩ W_2^⊥
and (W_1 ∩ W_2)^⊥ = W_1^⊥ + W_2^⊥.
(Ex. 15 page 327)
6. (a) What is a self-adjoint operator ? If an operator T is
either self-adjoint or isometric, then prove that the eigen-
vectors of T belonging to distinct eigenvalues are ortho-
gonal. (Page 335; Theorem 3 page 373)
(b) State and prove the spectral theorem for self-adjoint
operators on a finite-dimensional inner product space.
(Theorem 2 page 384)

( 2 )
1. (a) Show that the intersection of any collection of subspaces
of a vector space is a subspace. (Theorem 3 page 36)
(b) Show that every linearly independent subset of a finite
dimensional vector space V can be extended to form a
basis for V. (Theorem 1 page 64)
(c) Show that every subspace of a finite-dimensional vector
space has a complement. (Theorem page 97)
( iii )
2. (a) If A, B, C are linear transformations such that AB = CA = I,
show that A is invertible and B = C = A⁻¹.
(Theorem 3 page 134)
(b) Show that a linear transformation is a projection on
some subspace if and only if it is idempotent.
(Theorem 1 page 220)
3. (a) Define the 'rank' and 'nullity' of a linear transformation.
If T is a linear transformation on a finite-dimensional
vector space, then ρ(T) = ρ(T′).
(Page 112; Theorem 2 page 236)
(b) Let W_1 and W_2 be subspaces of a vector space V. If E
is the projection on W_1 along W_2, then show that E′ is
the projection on W_2⁰ along W_1⁰. (Theorem 1 page 242)
4. (a) If W_1 and W_2 are complementary subspaces of a vector
space V, then prove that W_1 is isomorphic to V/W_2.
(Theorem page 98)
(b) Show that two finite dimensional vector spaces over the
same field are isomorphic if and only if they are of the
same dimension. (Theorem 1 page 81)

5. (a) If α and β are vectors in an inner product space V, prove
that |(α, β)| ≤ ‖α‖ ‖β‖.
Prove that the above inequality reduces to an equality iff
α and β are linearly dependent. (Theorem 3 page 291;
Ex. 9, 10 pages 301-302)
(b) If W is any subspace of a finite dimensional inner pro-
duct space V, prove that V is the direct sum of W and W^⊥,
and W^⊥⊥ = W. (Theorem 11 page 318)
6. (a) Show that to any linear functional f on a finite dimen-
sional inner product space V, there exists a unique vector
β in V such that f(α) = (α, β) for all α in V.
(Theorem 1 page 330)
(b) State and prove the spectral theorem for normal opera-
tors on a finite-dimensional complex inner product space.
(Theorem 1 page 382)

( iv )

( 3 )
1. (a) Show that each set of (n + 1) or more vectors of an n-dimen-
sional vector space V(F) is linearly dependent.
(Theorem 2 page 65)
(b) Show that each subspace W of a finite-dimensional
vector space V(F) of dimension n is a finite-dimensional
space with dim W ≤ n. Also W = V if and only if
dim V = dim W. (Theorem 6 page 66)
2. (a) Show that if a finite-dimensional vector space V(F) is a
direct sum of two subspaces W_1 and W_2 then
dim V = dim W_1 + dim W_2. (Theorem page 95)
(b) Show that a linear transformation T on a finite-dimen-
sional vector space V is invertible iff it is non-singular.
(Theorem 9 page 138)
3. (a) Show that the dual space V* of an n-dimensional vector
space V is also n-dimensional. (Theorem 2 page 195)
(b) Let W_1 and W_2 be subspaces of a finite-dimensional vec-
tor space V over the field F. Prove that
(i) (W_1 + W_2)⁰ = W_1⁰ ∩ W_2⁰ ;
(ii) (W_1 ∩ W_2)⁰ = W_1⁰ + W_2⁰. (Ex. 5 page 210)
4. (a) Show that the distinct characteristic vectors of a linear
transformation T corresponding to distinct characteristic
values are linearly independent. (Theorem 4 page 249)
(b) Let E be a projection on a vector space V and let T be
a linear operator on V. Prove that both the range and
the null space of E are invariant under T if and only if
ET = TE. (Theorem 2 page 227)
5. (a) If α, β are vectors in an inner product space V, prove that
‖α + β‖ ≤ ‖α‖ + ‖β‖. (Theorem 4 page 292)
(b) Show that every orthonormal set in an inner product
space is linearly independent and that every finite dimen-
sional inner product space has an orthonormal basis.
(Theorem 5 page 308; Theorem 7 page 311)
6. (a) If A and B be self-adjoint operators on an inner product
space V, then a necessary and sufficient condition that
AB be self-adjoint is that AB = BA. (Ex. 6 page 343)
(b) Let E be an idempotent linear operator on an inner pro-
duct space V. Prove that E is self-adjoint if and only if
E is normal. (Ex. 12 page 345)
(c) Let V be a finite-dimensional complex inner product space
and let T be a normal operator on V. Then show that V
has an orthonormal basis, each vector of which is a
characteristic vector for T. (Theorem 3 page 386)
7. (a) Let V be a complex inner product space and T a linear
operator on V. Then show that T is Hermitian iff (Tα, α)
is real for all α. (Theorem 10 page 340)
(b) Let V be a finite-dimensional inner product space and T
a normal operator on V. Then show that any charac-
teristic vector for T is also a characteristic vector for T*.
(Theorem 1 page 363)

INDEX

A
Abelian group 2
Addition of vectors 7
Adjoint of a linear operator on an inner product space 332
Adjoint of a linear transformation 234
Algebra 132
Annihilating polynomial 274
Annihilator 205

B
Basis of a vector space 60
Bessel's inequality 315
Bilinear forms 397
Binary operation 1

C
Cauchy's inequality 293
Cayley-Hamilton theorem 255
Characteristic equation of a matrix 251
Characteristic space 248
Characteristic value 247
Characteristic vector 247
Characteristic vector of a matrix 251
Column rank of a matrix 156
Complementary subspaces 97
Complete orthonormal set 308
Complex vector space 8
Conjugate space 194
Conjugate transpose of a matrix 295
Coordinate matrix of a vector 104
Coordinates of a vector 103

D
Determinant of a linear transformation 173
Determinant of a matrix 154
Diagonalizable matrix 257
Diagonalizable operator 257
Diagonal matrix 153
Dimension of a quotient space 93
Dimension of a subspace 66
Dimension of a vector space 64
Direct sum of spaces 94
Disjoint subspaces 94
Distance in an inner product space 294
Dot product 286
Dual basis 195
Dual space 194

E
Eigenvalue 247
Eigenvector 247
Euclidean space 285
External composition 7

F
Field 4
Finite dimensional vector space 61
Finitely generated vector space 61

G
Gram-Schmidt process 312
Group 2
Groups preserving bilinear forms 414

H
Hermitian matrix 295
Homomorphism of vector spaces 80

I
Identity operator 109
Independence of subspaces 99
Infinite dimensional space 61
Inner product 284
Inner product space 285
Inner product space isomorphism 355
Invariant subspace 213
Inverse of a matrix 155
Invertible linear transformation 133
Isometry 354
Isomorphism of vector spaces 81

L
Linear combination 36
Linear dependence of vectors 42
Linear functional 188
Linear independence of vectors 42
Linearity property of inner product 285
Linear operator 107
Linear space 7
Linear span 36
Linear sum of two subspaces 38
Linear transformation 80, 107

M
Matrix 153
Matrix of a bilinear form 402
Matrix of a linear transformation 157
Minimal polynomial of a matrix 275
Minimal polynomial of an operator 275

N
Non-negative operator 348
Non-singular transformation 136
Normal matrix 366
Normal operator 363
Normed vector space 294
Norm of a vector 289
Nullity of a linear transformation 112
Null matrix 153
Null space of a L.T. 111

O
Ordered basis 103
Ordered n-tuple 11
Orthogonal complement 317
Orthogonal dimension 309
Orthogonally equivalent matrices 359
Orthogonal matrix 359
Orthogonal projections 377
Orthogonal set 305
Orthogonal vectors 304
Orthonormal basis 312
Orthonormal set 306

P
Parallelogram law 299
Perpendicular projections 377
Polynomials of linear operators 132
Positive matrix 352
Positive operator 348
Product of linear transformations 126
Projections 219
Projection theorem for inner product spaces 318
Proper subspace 27
Proper values 247
Pythagorean theorem 321

Q
Quadratic form 409
Quotient space 92

R
Range of a linear transformation 110
Rank of a bilinear form 403
Rank of a linear transformation 112
Real part of an operator 337
Real vector space 8
Reducibility 214
Reflexivity of a finite dimensional vector space 200
Ring of linear operators 131
Row rank of a matrix 156

S
Scalar multiplication 8
Scalars 7
Schwarz's inequality 29
Second dual space 199
Self-adjoint matrix 295
Self-adjoint transformation 335
Similar matrices 168
Similar transformations 169
Singular transformation 136
Skew-Hermitian operator 336
Skew-symmetric operator 336
Spectral theorem 382, 384
Spectral values 247
Standard basis of Fⁿ 61
Standard inner product 285, 286
Subfield 5
Subspace 24
Sylvester's law of nullity 246
Symmetric bilinear forms 409
Symmetric matrix 295

T
Trace of a linear transformation 174
Trace of a matrix 173
Transition matrix 167
Transpose of a linear transformation 234
Transpose of a matrix 154
Triangle inequality 292

U
Unitarily equivalent matrices 359
Unitary matrix 358
Unitary operator 356
Unitary space 285
Unit matrix 153
Unit vector 289

V
Vectors 7
Vector space 7

Z
Zero functional 190
Zero subspace 27
Zero transformation 109
Zero vector 8
Most Popular Books in MATHEMATICS

Advanced Differential Equations - R.K. Gupta & J.N. Sharma
Analytical Solid Geometry - Vasishtha & Agarwal
Advanced Differential Calculus - J.N. Sharma
Advanced Integral Calculus - D.C. Agarwal
Calculus of Finite Differences & Numerical Analysis - Gupta, Malik & Chauhan
Differential Equations - Sharma & Gupta
Differential Geometry - Mithal & Agarwal
Dynamics of a Particle - Vasishtha & Agarwal
Fluid Dynamics - Shanti Swarup
Functional Analysis - Sharma & Vasishtha
Functions of a Complex Variable - J.N. Sharma
Complex Analysis - A.R. Vasishtha
Hydrodynamics - Shanti Swarup
Infinite Series & Products - J.N. Sharma
Integral Transforms (Transform Calculus) - Vasishtha & Gupta
Linear Algebra (Finite Dimensional Vector Spaces) - Sharma & Vasishtha
Linear Difference Equations - Gupta & Agarwal
Integral Equations - Shanti Swarup
Linear Programming - R.K. Gupta
Mathematical Analysis - I (Metric Spaces) - J.N. Sharma
Mathematical Analysis - II - Sharma & Vasishtha
Measure and Integration - Gupta & Gupta
Real Analysis (General) - Sharma & Vasishtha
Vector Calculus - Sharma & Vasishtha
Modern Algebra (Abstract Algebra) - A.R. Vasishtha
Matrices - A.R. Vasishtha
Mathematical Methods (Special Functions & Boundary Value Problems) - Sharma & Vasishtha
Special Functions (Spherical Harmonics) - Sharma & Vasishtha
Vector Algebra - A.R. Vasishtha
Mathematical Statistics - Sharma & Goel
Operations Research - R.K. Gupta
Rigid Dynamics - I (Dynamics of Rigid Bodies) - Gupta & Malik
Rigid Dynamics - II (Analytical Dynamics) - P.R. Gupta
Set Theory & Related Topics - K.P. Gupta
Spherical Astronomy - Gupta, Sharma & Kumar
Statics - Goel & Gupta
Tensor Calculus & Riemannian Geometry - D.C. Agarwal
Theory of Relativity - Goel & Gupta
Topology - J.N. Sharma
Discrete Mathematics - M.K. Gupta
Basic Mathematics for Chemists - A.R. Vasishtha
Number Theory - Hari Kishan
Bio-Mathematics - Singh & Agrawal
Partial Differential Equations - R.K. Gupta
Cryptography & Network Security - Manoj Kumar
Advanced Abstract Algebra - S.K. Pundir
Space Dynamics - J.P. Chauhan
Spherical Astronomy and Space Dynamics - J.P. Chauhan
Advanced Mathematical Methods - Shiv Raj Singh
Fuzzy Set Theory - Shiv Raj Singh
Advanced Numerical Analysis (MRT) - Gupta, Malik & Chauhan
Analysis - I (Real Analysis) - J.P. Chauhan
Calculus of Variations - Mukesh Kumar
KRISHNA Prakashan Media (P) Ltd., Meerut
Write to us at: info@krishnaprakashan.com
Buy online at: www.krishnaprakashan.com