LINEAR ALGEBRA
(A Course in Finite Dimensional Vector Spaces)
(For Degree and Honours Students of all Indian Universities)

Since 1942

By
A. R. Vasishtha
Retd. Head, Dept. of Mathematics, Meerut College, Meerut
J. N. Sharma
Ex. Sr. Lect., Dept. of Mathematics, Meerut College, Meerut
&
A. K. Vasishtha
M.Sc., Ph.D.
C.C.S. University, Meerut
No part of this book may be reproduced in any form or by any means without the written permission of the publishers and the author. Every effort has been made to avoid errors or omissions in this publication. In spite of this, some errors might have crept in. Any mistake, error or discrepancy noted may be brought to our notice and shall be taken care of in the next edition. It is notified that neither the publisher nor the author or seller will be responsible for any damage or loss of action to anyone, of any kind, in any manner arising therefrom. For binding mistakes, misprints or missing pages, etc., the publisher's liability is limited to replacement within one month of purchase by a similar edition. All expenses in this connection are to be borne by the purchaser.
ISBN: 978-93-87620-68-1
Website: www.krishnaprakashan.com
E-mail: [email protected]
Printed at: Vimal Offset Printers, Meerut
Dedicated to Lord Krishna
To The Latest Edition

In this edition the book has been thoroughly revised. Many more questions asked in various university examinations have been added to enhance the utility of the book. Suggestions for the further improvement of the book will be gratefully received.

—The Authors
This book on Linear Algebra has been written for the use of the students of Degree, Honours and Post-Graduate classes of Indian Universities. The subject matter has been discussed in such a simple way that the students will find no difficulty in understanding it. All the examples have been completely solved. The students should first try to understand the theorems and then they should try to solve the problems independently. Definitions should be read again and again.

—The Authors
∃    "there exists"
∀    "for every"
⇒    "implies"
⇔    "implies and is implied by"
iff  "if and only if"
∈    "belongs to"
∉    "does not belong to"
⊆    "is a subset of"
⊇    "is a superset of"
⊂    "is a proper subset of"
: or s.t.  "such that"
∪    "union"
∩    "intersection"
∅    "the null set"
R    "field of real numbers"
C    "field of complex numbers"
1
Vector Spaces
= (aa1+ab1, aa2+ab2, ..., aan+abn)
= (aa1, aa2, ..., aan) + (ab1, ab2, ..., abn)
= a (a1, a2, ..., an) + a (b1, b2, ..., bn) = aα + aβ.
2. If a, b ∈ F and α = (a1, a2, ..., an) ∈ Vn, then
(a+b) α = ([a+b] a1, [a+b] a2, ..., [a+b] an)
= (aa1+ba1, aa2+ba2, ..., aan+ban)
= (aa1, aa2, ..., aan) + (ba1, ba2, ..., ban)
= a (a1, a2, ..., an) + b (a1, a2, ..., an) = aα + bα.
3. If a, b ∈ F and α = (a1, a2, ..., an) ∈ Vn, then
(ab) α = ([ab] a1, [ab] a2, ..., [ab] an) = (a [ba1], a [ba2], ..., a [ban])
= a (ba1, ba2, ..., ban) = a [b (a1, a2, ..., an)] = a (bα).
4. If 1 is the unity element of F and α = (a1, a2, ..., an) ∈ Vn, then
1α = (1a1, 1a2, ..., 1an) = (a1, a2, ..., an) = α.
Hence Vn is a vector space over F. The vector space of all ordered n-tuples over F will be denoted by Vn(F). Sometimes we also denote it by F^(n) or by F^n. Here the zero vector 0 is the n-tuple (0, 0, ..., 0).
Note. V2(F) = {(a1, a2) : a1, a2 ∈ F} is the vector space of all ordered pairs over F. Similarly V3(F) = {(a1, a2, a3) : a1, a2, a3 ∈ F} is the vector space of all ordered triads over F.
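The componentwise verifications above can be spot-checked mechanically. The following sketch (an illustration of mine, not from the text) tests the four scalar-multiplication postulates for ordered triples over the rationals, using Python's exact Fraction arithmetic:

```python
from fractions import Fraction as F

# Componentwise operations on n-tuples, exactly as in the definition of Vn(F)
def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def scale(c, u):
    return tuple(c * x for x in u)

a, b = F(2), F(-3)
alpha = (F(1), F(2), F(5))
beta  = (F(4), F(0), F(-1))

# The four postulates verified in the text:
assert scale(a, add(alpha, beta)) == add(scale(a, alpha), scale(a, beta))  # a(α+β) = aα+aβ
assert scale(a + b, alpha) == add(scale(a, alpha), scale(b, alpha))        # (a+b)α = aα+bα
assert scale(a * b, alpha) == scale(a, scale(b, alpha))                    # (ab)α = a(bα)
assert scale(F(1), alpha) == alpha                                         # 1α = α
```

Exact rational arithmetic is used so that the equalities hold literally, not merely up to rounding.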
Example 4. The vector space of all polynomials over a field F. (Meerut 1982; Andhra 92)
Sol. Let F[x] denote the set of all polynomials in an indeterminate x over a field F. Then F[x] is a vector space over the field F with respect to addition of two polynomials as addition of vectors and the product of a polynomial by a constant polynomial (i.e., by an element of F) as scalar multiplication.
Let f(x) = Σ ai x^i = a0 + a1 x + a2 x² + a3 x³ + ...,
g(x) = Σ bi x^i = b0 + b1 x + b2 x² + b3 x³ + ...,
and h(x) = Σ ci x^i = c0 + c1 x + c2 x² + c3 x³ + ...
be any arbitrary members of F[x].
satisfied, then V will not be a vector space. We shall show that for the operation of addition of vectors as defined in this problem the identity element does not exist. Suppose the ordered pair (x1, y1) is to be the identity element for the operation of addition of vectors. Then we must have
(x, y) + (x1, y1) = (x, y) ∀ x, y ∈ R
⇒ (x+x1, 0) = (x, y) ∀ x, y ∈ R.
But if y ≠ 0, then we cannot have (x+x1, 0) = (x, y). Thus there exists no element (x1, y1) of V such that
(x, y) + (x1, y1) = (x, y) ∀ (x, y) ∈ V.
Therefore the identity element does not exist and V is not a vector space over the field R.
Example 10. Let V be the set of all pairs (x, y) of real numbers, and let F be the field of real numbers. Examine in each of the following cases whether V is a vector space over the field of real numbers or not:
(i) (x, y) + (x1, y1) = (x+x1, y+y1),
c (x, y) = (|c| x, |c| y);
(ii) (x, y) + (x1, y1) = (x+x1, y+y1),
c (x, y) = (0, cy);
(iii) (x, y) + (x1, y1) = (x+x1, y+y1),
c (x, y) = (c²x, c²y).
Solution. (i) We shall show that in this case the postulate
(a+b) α = aα + bα ∀ a, b ∈ F and α ∈ V fails.
Let α = (x, y) and a, b ∈ R. We have
(a+b) α = (a+b) (x, y) = (|a+b| x, |a+b| y), by def.  ...(1)
Also aα + bα = a (x, y) + b (x, y)
= (|a| x, |a| y) + (|b| x, |b| y), by def. of scalar multiplication
= (|a| x + |b| x, |a| y + |b| y), by def. of addition of vectors
= ({|a|+|b|} x, {|a|+|b|} y).  ...(2)
Since |a+b| ≠ |a|+|b| in general, therefore from (1) and (2) we conclude that in general (a+b) α ≠ aα + bα. Hence V(R) is not a vector space.
(ii) We shall show that in this case the postulate 1α = α ∀ α ∈ V fails. Let α = (x, y) where x, y ∈ R. By definition of scalar multiplication we have 1α = 1 (x, y) = (0, 1y) = (0, y). Thus 1α ≠ α whenever x ≠ 0, and V(R) is not a vector space.
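A numerical counterexample makes both failures concrete; the vectors and scalars below are illustrative choices of mine:

```python
# Case (i): c·(x, y) = (|c| x, |c| y).  Test (a+b)α against aα + bα with a = 1, b = -1.
def smul_i(c, v):
    return (abs(c) * v[0], abs(c) * v[1])

alpha = (1.0, 2.0)
a, b = 1.0, -1.0
lhs = smul_i(a + b, alpha)                                               # (0, 0)
rhs = tuple(p + q for p, q in zip(smul_i(a, alpha), smul_i(b, alpha)))   # (2, 4)
assert lhs != rhs        # the postulate (a+b)α = aα + bα fails

# Case (ii): c·(x, y) = (0, cy).  Test 1α = α.
def smul_ii(c, v):
    return (0.0, c * v[1])

assert smul_ii(1.0, alpha) != alpha   # 1α = (0, y) ≠ (x, y) when x ≠ 0
```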
Solution. Let α = (x1, x2, x3) and β = (y1, y2, y3) be any two elements of W. Then x1, x2, x3, y1, y2, y3 are elements of F and are such that
a1x1 + a2x2 + a3x3 = 0  ...(1)
and a1y1 + a2y2 + a3y3 = 0.  ...(2)
If a, b be any two elements of F, we have
aα + bβ = a (x1, x2, x3) + b (y1, y2, y3)
= (ax1, ax2, ax3) + (by1, by2, by3) = (ax1+by1, ax2+by2, ax3+by3).
Now a1 (ax1+by1) + a2 (ax2+by2) + a3 (ax3+by3)
= a (a1x1+a2x2+a3x3) + b (a1y1+a2y2+a3y3)
= a.0 + b.0 = 0. [by (1) and (2)]
∴ aα + bβ = (ax1+by1, ax2+by2, ax3+by3) ∈ W.
Hence W is a subspace of V3(F).
Example 5. Prove that the set of all solutions (a, b, c) of the equation a + b + 2c = 0 is a subspace of the vector space V3(R). (Meerut 1989)
Sol. Let W = {(a, b, c) : a, b, c ∈ R and a + b + 2c = 0}.
To prove that W is a subspace of V3(R).
Let α = (a1, b1, c1) and β = (a2, b2, c2) be any two elements of W. Then
a1 + b1 + 2c1 = 0  ...(1)
and a2 + b2 + 2c2 = 0.  ...(2)
If a, b be any two elements of R, we have
aα + bβ = a (a1, b1, c1) + b (a2, b2, c2)
= (aa1, ab1, ac1) + (ba2, bb2, bc2)
= (aa1+ba2, ab1+bb2, ac1+bc2).
Now (aa1+ba2) + (ab1+bb2) + 2 (ac1+bc2)
= a (a1+b1+2c1) + b (a2+b2+2c2)
= a.0 + b.0 [from (1) and (2)]
= 0.
∴ aα + bβ = (aa1+ba2, ab1+bb2, ac1+bc2) ∈ W.
Thus α, β ∈ W and a, b ∈ R ⇒ aα + bβ ∈ W.
Hence W is a subspace of V3(R).
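The closure computation of Example 5 can be spot-checked numerically; the two vectors and the random scalars below are arbitrary choices of mine:

```python
from fractions import Fraction as F
import random

def in_W(v):           # W = {(a, b, c) : a + b + 2c = 0}
    a, b, c = v
    return a + b + 2 * c == 0

def comb(a, u, b, v):  # the linear combination aα + bβ, componentwise
    return tuple(a * x + b * y for x, y in zip(u, v))

alpha = (F(1), F(1), F(-1))   # 1 + 1 + 2·(-1) = 0, so α ∈ W
beta  = (F(3), F(-5), F(1))   # 3 - 5 + 2·1  = 0, so β ∈ W
assert in_W(alpha) and in_W(beta)

random.seed(0)
for _ in range(100):
    a, b = F(random.randint(-9, 9)), F(random.randint(-9, 9))
    assert in_W(comb(a, alpha, b, beta))   # closure under aα + bβ
```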
Example 6. Show that the set W of the elements of the vector space V3(R) of the form (x+2y, y, −x+3y), where x, y ∈ R, is a subspace of V3(R). (Meerut 1974)
Solution. Let W = {(x+2y, y, −x+3y) : x, y ∈ R}.
To prove that W is a subspace of V3(R).
Let α = (x1+2y1, y1, −x1+3y1) and β = (x2+2y2, y2, −x2+3y2) be any two elements of W.
If a, b ∈ R, then aα + bβ = (aa1+bb1, aa2+bb2, aa3+bb3).
We have (aa2+bb2) + 4 (aa3+bb3)
= a (a2+4a3) + b (b2+4b3) = a.0 + b.0 = 0.
Thus according to the definition of W, aα + bβ ∈ W.
In this way α, β ∈ W and a, b ∈ R ⇒ aα + bβ ∈ W. Hence W is a subspace of R³.
(b) A subspace if k = 0 and not a subspace if k ≠ 0.
Example 8. Let R be the field of real numbers. Which of the following are subspaces of V3(R):
(i) {(x, 2y, 3z) : x, y, z ∈ R}; (Meerut 1990)
(ii) {(x, x, x) : x ∈ R};
(iii) {(x, y, z) : x, y, z are rational numbers}? (Meerut 1989 P)
Solution. (i) Let W = {(x, 2y, 3z) : x, y, z ∈ R}.
Let α = (x1, 2y1, 3z1) and β = (x2, 2y2, 3z2) be any two elements of W. Then x1, y1, z1, x2, y2, z2 are all real numbers. If a, b are any two real numbers, then
aα + bβ = a (x1, 2y1, 3z1) + b (x2, 2y2, 3z2)
= (ax1+bx2, 2ay1+2by2, 3az1+3bz2)
= (ax1+bx2, 2 [ay1+by2], 3 [az1+bz2])
∈ W since ax1+bx2, ay1+by2, az1+bz2 are real numbers.
Thus a, b ∈ R and α, β ∈ W ⇒ aα + bβ ∈ W.
∴ W is a subspace of V3(R).
(ii) Let W = {(x, x, x) : x ∈ R}.
Let α = (x1, x1, x1) and β = (x2, x2, x2) be any two elements of W. Then x1, x2 are real numbers. If a, b are any real numbers, then
aα + bβ = a (x1, x1, x1) + b (x2, x2, x2)
= (ax1+bx2, ax1+bx2, ax1+bx2) ∈ W
since ax1+bx2 ∈ R.
Thus W is a subspace of V3(R).
(iii) Let W = {(x, y, z) : x, y, z are rational numbers}.
Now α = (3, 4, 5) is an element of W. Also a = √7 is an element of R. But aα = √7 (3, 4, 5) = (3√7, 4√7, 5√7) ∉ W since 3√7, 4√7, 5√7 are not rational numbers.
Therefore W is not closed under scalar multiplication. Hence W is not a subspace of V3(R).
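Parts (ii) and (iii) can be illustrated in code (my own sketch, with arbitrarily chosen vectors and scalars). For (iii), the key fact is that √7 is irrational, which follows because 7 is not a perfect square:

```python
import math

# (ii) W = {(x, x, x)} is closed under aα + bβ:
def in_W2(v):
    return v[0] == v[1] == v[2]

alpha, beta = (2.0, 2.0, 2.0), (-5.0, -5.0, -5.0)
a, b = 3.0, 4.0
assert in_W2(tuple(a * x + b * y for x, y in zip(alpha, beta)))

# (iii) W = rational triples.  √7 is irrational (7 is not a perfect square),
# so √7·(3, 4, 5) = (3√7, 4√7, 5√7) has no rational component, and W is not
# closed under multiplication by the real scalar √7.
assert math.isqrt(7) ** 2 != 7
```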
Example 9. The solutions of a system of homogeneous linear equations. Let V(F) be the vector space of all n × 1 matrices over the field F. Let A be an m × n matrix over F. Then the set W of all
2 S'-’ ...(1)
and
...(2)
Let n, h € R. If IF is to be a subspace then we should show
that ayi-{-by2 also belongs to W i e., it is a solution of the given
differential equation;
We have
2 ^2 (oj'i +6>-2)+2(ayt+byt)
d^yz 9a
=2fl H-26 dx
“ dx^ dx^ “ ^'-9b ^+2ay,+2byi
dx^
fl.O+h.O, by (1) and (2)
=0.
Thus ayi+by2 is a solution of the giveri differential equation
and so it belongs to IF.
Hence IF is a subspace of F.
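Reading the damaged print as the equation 2 d²y/dx² − 9 dy/dx + 2y = 0 (the coefficients 2, −9, 2 are what survive in this copy), the closure argument can be checked numerically using the exponential solutions e^(mx), where m ranges over the roots of the characteristic polynomial 2m² − 9m + 2. This sketch is my own illustration:

```python
import math

# Characteristic roots of 2m² - 9m + 2 = 0
m1 = (9 + math.sqrt(65)) / 4
m2 = (9 - math.sqrt(65)) / 4

def L(y, dy, d2y, x):
    # the differential operator 2y'' - 9y' + 2y evaluated at x
    return 2 * d2y(x) - 9 * dy(x) + 2 * y(x)

a, b = 3.0, -5.0   # an arbitrary linear combination a·y1 + b·y2

y   = lambda x: a * math.exp(m1 * x) + b * math.exp(m2 * x)
dy  = lambda x: a * m1 * math.exp(m1 * x) + b * m2 * math.exp(m2 * x)
d2y = lambda x: a * m1**2 * math.exp(m1 * x) + b * m2**2 * math.exp(m2 * x)

# the combination is again a solution (up to floating-point rounding)
for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(L(y, dy, d2y, x)) < 1e-8 * (1 + abs(y(x)))
```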
Exercises
1. Let V = R³ and W be the set of all ordered triads (x, y, z) such that x − 3y + 4z = 0. Prove that W is a subspace of R³. (Meerut 1992)
2. Let C be the field of complex numbers and let n be a positive integer (n ≥ 2). Let V be the vector space of all n × n matrices over C. Which of the following sets of matrices A in V are subspaces of V?
(i) all invertible A; (Meerut 1981)
(ii) all non-invertible A;
(iii) all A such that AB = BA, where B is some fixed matrix in V.
Ans. (i) not a subspace; (ii) not a subspace; (iii) a subspace.
3. Let V be the vector space of all real n × n matrices. Prove that the set consisting of all n × n real matrices which commute with a given matrix T of V forms a subspace of V. (Meerut 1980)
4. Let V be the vector space of all 2 × 2 matrices over the real field R. Show that the subset of V consisting of all matrices A for which A² = A is not a subspace of V. (Meerut 1970)
5. State whether the following statements are true or false:
(i) A subspace of V3(R), where R is the real field, must always contain the origin. (Meerut 1977)
(ii) The set of vectors α = (x, y) ∈ V2(R) for which x² = y² is a subspace of V2(R). (Meerut 1977)
(iii) The set of ordered triads (x, y, z) of real numbers with x > 0 is a subspace of V3(R). (Meerut 1977)
(iv) The set of ordered triads (x, y, z) of real numbers with x + y = 0 is a subspace of V3(R).
Ans. (i) true; (ii) false; (iii) false; (iv) true.
§ 10. Algebra of subspaces.
Theorem 1. The intersection of any two subspaces W1 and W2 of a vector space V(F) is also a subspace of V(F).
(Meerut 1990; Andhra 92; Allahabad 78; Nagarjuna 74)
Proof. Since 0 ∈ W1 and W2 both, therefore W1 ∩ W2 is not empty.
Let α, β ∈ W1 ∩ W2 and a, b ∈ F.
Now α ∈ W1 ∩ W2 ⇒ α ∈ W1 and α ∈ W2,
and β ∈ W1 ∩ W2 ⇒ β ∈ W1 and β ∈ W2.
Since W1 is a subspace, therefore
Hence V = Ve + Vo.
(iii) Let 0 denote the zero function, i.e., 0(x) = 0 ∀ x ∈ R. Then 0 ∈ Ve and also 0 ∈ Vo.
Let f ∈ Ve and also f ∈ Vo.
Then f(x) = f(−x) = −f(x).
∴ 2f(x) = 0
⇒ f(x) = 0 = 0(x).
Therefore f = 0 (zero function).
Thus f ∈ Ve ∩ Vo ⇒ f = 0. Hence Ve ∩ Vo = {0}.
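Conversely, every function is the sum of an even part fe(x) = [f(x) + f(−x)]/2 and an odd part fo(x) = [f(x) − f(−x)]/2, which is what makes V the direct sum of Ve and Vo. A small numerical sketch of this decomposition (my own illustration, using f(x) = e^x, whose even and odd parts are cosh x and sinh x):

```python
import math

def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

f = lambda x: math.exp(x)          # e^x = cosh x + sinh x
fe, fo = even_part(f), odd_part(f)

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(fe(x) - fe(-x)) < 1e-12          # fe is even
    assert abs(fo(x) + fo(-x)) < 1e-12          # fo is odd
    assert abs(fe(x) + fo(x) - f(x)) < 1e-12    # f = fe + fo
    assert abs(fe(x) - math.cosh(x)) < 1e-9     # here fe = cosh, fo = sinh
```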
§ 13. Linear dependence and linear independence of vectors.
(Kakatiya 1991; Osmania 90; Nagarjuna 78)
Linear dependence. Definition. Let V(F) be a vector space. A finite set {α1, α2, ..., αn} of vectors of V is said to be linearly dependent if there exist scalars a1, a2, ..., an ∈ F, not all of them 0 (some of them may be zero), such that
a1α1 + a2α2 + a3α3 + ... + anαn = 0.
Linear independence. Definition. (Meerut 1980; Nagarjuna 78)
Let V(F) be a vector space. A finite set {α1, α2, ..., αn} of vectors of V is said to be linearly independent if every relation of the form
a1α1 + a2α2 + a3α3 + ... + anαn = 0, ai ∈ F, 1 ≤ i ≤ n
⇒ ai = 0 for each i, 1 ≤ i ≤ n.
Any infinite set of vectors of V is said to be linearly independent if its every finite subset is linearly independent; otherwise it is linearly dependent.
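In coordinates, the definition can be tested mechanically: a set {v1, ..., vk} in Vn(F) is linearly independent iff the k × n matrix whose rows are the vectors has rank k. A small exact-arithmetic sketch (my own, not from the text), using Gaussian elimination over the rationals:

```python
from fractions import Fraction as F

def rank(rows):
    """Rank of a matrix via Gaussian elimination over the rationals."""
    m = [[F(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        m[r], m[piv] = m[piv], m[r]        # bring the pivot row up
        for i in range(len(m)):
            if i != r and m[i][c] != 0:    # clear the column elsewhere
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent(vectors):
    # {v1,...,vk} is linearly independent iff rank of the k×n matrix is k
    return rank(vectors) == len(vectors)

assert independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
assert not independent([(1, 2, 3), (2, 4, 6)])   # second vector = 2 × first
```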
Illustrative Examples
Example 1. Prove that if two vectors are linearly dependent, one of them is a scalar multiple of the other.
Solution. Let α, β be two linearly dependent vectors of the vector space V. Then ∃ scalars a, b, not both zero, such that
aα + bβ = 0.
If a ≠ 0, then we get
aα = −bβ.
Solution. We have
(−a) α1 + (−b) α2 + 1 (aα1 + bα2)
= (−a+a) α1 + (−b+b) α2
= 0α1 + 0α2 = 0, i.e., the zero vector.
Whatever may be the scalars −a and −b, since 1 ≠ 0, therefore the given set of vectors is linearly dependent.
Example 10. Let α1, α2, α3 be vectors of V(F) and a, b ∈ F. Show that the set {α1, α2, α3} is linearly dependent if the set {α1+aα2+bα3, α2, α3} is linearly dependent.
Solution. Since the set {α1+aα2+bα3, α2, α3} is linearly dependent, therefore there exist scalars x, y, z, not all zero, such that
x (α1+aα2+bα3) + yα2 + zα3 = 0,
i.e., xα1 + (xa+y) α2 + (xb+z) α3 = 0.  ...(1)
If in the relation (1) the coefficients x, xa+y, xb+z are not all zero, then the set {α1, α2, α3} will also be linearly dependent.
If x ≠ 0, then the problem is at once solved, whatever y and z may be. However, if x = 0, then at least one of y and z is not zero. Therefore at least one of xa+y and xb+z will not be zero, since when x = 0 then xa+y and xb+z reduce to y and z respectively.
Hence in the relation (1) the scalar coefficients of α1, α2, α3 are not all zero. Therefore the set {α1, α2, α3} is also linearly dependent.
Example 11. If α, β, γ are linearly independent vectors of V(F), where F is any subfield of the field of complex numbers, then so also are α+β, β+γ, γ+α. (Meerut 1986)
Solution. Let a, b, c be scalars such that
a (α+β) + b (β+γ) + c (γ+α) = 0,
i.e., (a+c) α + (a+b) β + (b+c) γ = 0.  ...(1)
But α, β, γ are linearly independent. Therefore (1) implies
a + 0b + c = 0,
a + b + 0c = 0,
0a + b + c = 0.
The coefficient matrix A of these equations is
A = | 1 0 1 |
    | 1 1 0 |
    | 0 1 1 |.
We have rank A = 3, i.e., equal to the number of unknowns a, b, c. Therefore a = 0, b = 0, c = 0 is the only solution, and hence α+β, β+γ, γ+α are linearly independent.
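The rank claim reduces to det A ≠ 0, which is a one-line check. The helper below is my own sketch; it expands the 3 × 3 determinant along the first row:

```python
def det3(m):
    # cofactor expansion of a 3×3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Coefficient matrix of the system  a+c = 0,  a+b = 0,  b+c = 0
A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
assert det3(A) == 2   # nonzero, so a = b = c = 0 is the only solution
```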
and consequently β is in the subspace spanned by S, which is a contradiction.
Putting b = 0 in (1), we get
c1α1 + c2α2 + ... + cmαm = 0
⇒ c1 = 0, c2 = 0, ..., cm = 0, because the set {α1, α2, ..., αm} is linearly independent, since it is a subset of a linearly independent set S.
Thus the relation (1) implies
c1 = 0, c2 = 0, ..., cm = 0, b = 0.
Therefore the set {α1, α2, ..., αm, β} is linearly independent. If S′ is the set obtained by adjoining β to S, then we have proved that every finite subset of S′ is linearly independent.
Hence S′ is linearly independent.
Theorem 5. The set of non-zero vectors α1, α2, ..., αn of V(F) is linearly dependent iff one of these vectors is a linear combination of the remaining (n−1) vectors.
Proof. This theorem can be easily proved.
§ 15. Basis of a Vector Space. Definition.
(Meerut 1990; Nagarjuna 90; Allahabad 76; S.V.U. Tirupati 90)
A subset S of a vector space V(F) is said to be a basis of V(F) if
(i) S consists of linearly independent vectors, and
(ii) S generates V(F), i.e., L(S) = V, i.e., each vector in V is a linear combination of a finite number of elements of S.
Example 1. The system S consisting of the n vectors
e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, ..., 0, 1)
is a basis of Vn(F). (Meerut 1990)
Solution. First we should show that S is a linearly independent set of vectors. We have proved it in one of the previous examples.
dimensional vector space, then any two bases of V have the same number of elements. (Nagarjuna 1991; Tirupati 90, 93; Meerut 81, 82, 87, 89, 93; Poona 72; Allahabad 79)
Proof. Suppose V(F) is a finite dimensional vector space. Then V definitely possesses a basis. Let
S1 = {α1, α2, ..., αn}
and S2 = {β1, β2, ..., βm}
be two bases of V. We shall prove that m = n.
Since V = L(S1) and β1 ∈ V, therefore β1 can be expressed as a linear combination of α1, α2, ..., αn. Consequently the set
S3 = {β1, α1, α2, ..., αn},
which also obviously generates V(F), is linearly dependent. Therefore there exists a member αi ≠ β1 of this set S3 such that αi is a linear combination of the preceding vectors β1, α1, ..., αi−1. If we omit the vector αi from S3, then V is also generated by the remaining set
S4 = {β1, α1, α2, ..., αi−1, αi+1, ..., αn}.
are bases as can easily be seen. Both these bases contain the same number of elements, i.e., 3.
Dimension of a finitely generated vector space. Definition. The number of elements in any basis of a finite dimensional vector space V(F) is called the dimension of the vector space V(F) and will be denoted by dim V. (Marathwada 1971; S.V.U. Tirupati 93; Nagarjuna 80)
The vector space Vn(F) is of dimension n. The vector space V3(F) is of dimension 3. If a field F is regarded as a vector space over F, then F will be of dimension 1, and the set S = {1}, consisting of the unity element of F alone, is a basis of F. In fact every non-zero element of F will form a basis of F.
§ 17. Some properties of finite dimensional vector spaces.
Theorem 1. Extension theorem. Every linearly independent subset of a finitely generated vector space V(F) forms a part of a basis of V.
Or
Every linearly independent subset of a finitely generated vector space V(F) is either a basis of V or can be extended to form a basis of V. (Meerut 1980, 84; Nagarjuna 90, 91; Andhra 90; Allahabad 76)
Proof. Let S = {α1, α2, ..., αm} be a linearly independent subset of a finite dimensional vector space V(F). If dim V = n, then V has a finite basis, say {β1, β2, ..., βn}. Consider the set
S1 = {α1, α2, ..., αm, β1, β2, ..., βn}.
ξ = aα + bβ + cγ + dδ.
Therefore S is a basis of V. Since the number of elements in S is 4, therefore dim V = 4.
Ex. 2. Show that if {α, β, γ} is a basis of V3(C), then the set
S′ = {α+β, β+γ, γ+α}
is also a basis of V3(C).
Sol. The vector space V3(C) is of dimension 3. Any subset of C³ having three linearly independent vectors will form a basis of C³. We have shown in one of the previous examples that if S is a linearly independent subset of C³, then S′ is also a linearly independent subset of C³. (Give the proof here.)
Therefore S′ is also a basis of C³.
Ex. 3. If W1 and W2 are finite-dimensional subspaces with the same dimension, and if W1 ⊆ W2, then W1 = W2.
Sol. Since W1 ⊆ W2, therefore W1 is also a subspace of W2. Now dim W1 = dim W2. Therefore we must have W1 = W2.
Ex. 4. Let V be the vector space of ordered pairs of complex numbers over the real field R, i.e., let V be the vector space C²(R). Show that the set S = {(1, 0), (i, 0), (0, 1), (0, i)} is a basis for V.
Sol. S is linearly independent. We have
a (1, 0) + b (i, 0) + c (0, 1) + d (0, i) = (0, 0), where a, b, c, d ∈ R
⇒ (a+ib, c+id) = (0, 0)
⇒ a + ib = 0, c + id = 0
⇒ a = 0, b = 0, c = 0, d = 0.
Therefore S is linearly independent.
Now we shall show that L(S) = V. Let (a+ib, c+id) be any ordered pair in V, where a, b, c, d ∈ R. Then as shown above we can write
(a+ib, c+id) = a (1, 0) + b (i, 0) + c (0, 1) + d (0, i).
Thus any vector in V is expressible as a linear combination of elements of S.
Therefore L(S) = V, and so S is a basis for V.
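Identifying each pair of complex numbers with its four real components makes the argument visible: the four vectors of S become the standard basis of R⁴, so the real dimension of C² is 4. A short sketch of mine:

```python
def to_r4(v):
    # identify the pair of complex numbers (z1, z2) with (Re z1, Im z1, Re z2, Im z2)
    z1, z2 = v
    return (z1.real, z1.imag, z2.real, z2.imag)

S = [(1 + 0j, 0j), (1j, 0j), (0j, 1 + 0j), (0j, 1j)]
M = [to_r4(v) for v in S]

# M is the 4×4 identity matrix, so S is independent and spans: dim_R C² = 4
assert M == [(1.0, 0.0, 0.0, 0.0),
             (0.0, 1.0, 0.0, 0.0),
             (0.0, 0.0, 1.0, 0.0),
             (0.0, 0.0, 0.0, 1.0)]
```

Note that over the complex field the same space has dimension 2; the dimension depends on the field of scalars.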
Ex. 5. In the vector space R³ let α = (1, 2, 1), β = (3, 1, 5), γ = (3, −4, 7). Show that there exists more than one basis for the subspace spanned by the set S = {α, β, γ}. (Marathwada 1971)
= aα1 + bα2 + cα3 + dα4.
Thus (a, b, c, d) has been expressed as a linear combination of the vectors of S, and so S generates R⁴.
Since S is a linearly independent subset of R⁴ and it also generates R⁴, therefore it is a basis of R⁴.
Ex. 12. Show that the set S = {1, x, x², ..., xⁿ} of n+1 polynomials in x is a basis of the vector space Pn(R) of all polynomials in x (of degree at most n) over the field of real numbers. (I.A.S. 1977; Meerut 74)
Sol. Pn(R) is the vector space of all polynomials in x (of degree at most n) over the field R of real numbers.
S = {1, x, x², ..., xⁿ} is a subset of Pn(R) consisting of n+1 polynomials. To prove that S is a basis of the vector space Pn(R).
First we show that the vectors in the set S are linearly independent over the field R.
The zero vector of the vector space Pn(R) is the zero polynomial. Let a0, a1, ..., an ∈ R be such that
a0 (1) + a1 x + a2 x² + ... + an xⁿ = 0, the zero polynomial.
= |  1   0   0 |
  | −3  10   4 |, by C2 − 2C1 and C3 − C1
  |  2  −3  −1 |
= 1 [10 × (−1) − 4 × (−3)] = −10 + 12 = 2, i.e., ≠ 0.
∴ the vectors α1, α2, α4 are linearly independent.
Since {α1, α2, α4} is a linearly independent subset of R³ containing three vectors, therefore it is a basis of R³.
Ex. 14. Show that a finite subset W of a vector space V(F) is linearly dependent if and only if some element of W can be expressed as a linear combination of the others.
Sol. Let W = {α1, α2, ..., αn} be a finite subset of a vector space V(F).
First suppose that W is linearly dependent. Then to show that some vector of W can be expressed as a linear combination of the others.
Since the vectors α1, α2, ..., αn are linearly dependent, therefore there exist scalars a1, a2, ..., an, not all zero, such that
a1α1 + a2α2 + ... + anαn = 0.  ...(1)
Suppose ar ≠ 0, 1 ≤ r ≤ n.
Then from (1), we have
ar αr = −a1α1 − a2α2 − ... − ar−1αr−1 − ar+1αr+1 − ... − anαn.
1. Tell with reason whether or not the vectors (2, 1, 0), (1, 1, 0) and (4, 2, 0) form a basis of R³. (Meerut 1976)
2. State whether the following statements are true or false:
(i) If a subset of an n-dimensional vector space V consists of n non-zero vectors, then it will be a basis of V. (Meerut 1976)
(ii) If A and B are subspaces of a vector space, then dim A < dim B ⇒ A ⊂ B. (Meerut 1976)
(iii) If A and B are subspaces of a vector space, then A ≠ B ⇒ dim A ≠ dim B. (Meerut 1976)
(iv) If M and N are finite dimensional subspaces with the same dimension, and if M ⊆ N, then M = N.
(v) In an n-dimensional space any subset consisting of n linearly independent vectors will form a basis.
Ans. (i) false; (ii) false; (iii) false; (iv) true; (v) true.
3. (i) Show that the vectors (2, 1, 4), (1, −1, 2), (3, 1, −2) form a basis for R³.
(ii) Show that the vectors (0, 1, 1), (1, 0, 1) and (1, 1, 0) form a basis of R³. (S.V.U. Tirupati 1993)
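Each part of Exercise 3 (and Exercise 1 above) reduces to a determinant test: three vectors in R³ form a basis iff the 3 × 3 determinant they make is nonzero. A sketch of mine:

```python
def det3(m):
    # cofactor expansion of a 3×3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Exercise 3(i): determinant nonzero, so the vectors form a basis of R³
assert det3([(2, 1, 4), (1, -1, 2), (3, 1, -2)]) == 24
# Exercise 3(ii): again nonzero
assert det3([(0, 1, 1), (1, 0, 1), (1, 1, 0)]) == 2
# Exercise 1 above: the third coordinates are all 0, so the determinant
# vanishes and the vectors do NOT form a basis of R³
assert det3([(2, 1, 0), (1, 1, 0), (4, 2, 0)]) == 0
```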
f is onto Vn(F). Let (a1, a2, ..., an) be any element of Vn(F). Then there exists an element a1α1 + a2α2 + ... + anαn ∈ V(F) such that
f (a1α1 + a2α2 + ... + anαn) = (a1, a2, ..., an).
∴ f is onto Vn(F).
f is a linear transformation. If a, b ∈ F and α, β ∈ V(F), we have
f (aα + bβ)
= f [a (a1α1 + a2α2 + ... + anαn) + b (b1α1 + b2α2 + ... + bnαn)]
= f [(aa1+bb1) α1 + (aa2+bb2) α2 + ... + (aan+bbn) αn]
= (aa1+bb1, aa2+bb2, ..., aan+bbn)
= (aa1, aa2, ..., aan) + (bb1, bb2, ..., bbn)
= a (a1, a2, ..., an) + b (b1, b2, ..., bn)
= a f (a1α1 + a2α2 + ... + anαn) + b f (b1α1 + b2α2 + ... + bnαn)
= a f(α) + b f(β).
∴ f is a linear transformation.
∴ f is an isomorphism of V(F) onto Vn(F).
Hence V(F) ≅ Vn(F).
Solved Examples
Example 1. Show that the mapping f : V3(F) → V2(F) defined by
f (a1, a2, a3) = (a1, a2)
is a homomorphism of V3(F) onto V2(F). [Kanpur 81]
Solution. Let α = (a1, a2, a3) and β = (b1, b2, b3) be any two elements of V3(F). Also let a, b be any two elements of F. We have
f (aα + bβ) = f [a (a1, a2, a3) + b (b1, b2, b3)]
= f [(aa1+bb1, aa2+bb2, aa3+bb3)] = (aa1+bb1, aa2+bb2)
= a (a1, a2) + b (b1, b2) = a f (a1, a2, a3) + b f (b1, b2, b3)
= a f(α) + b f(β).
∴ f is a linear transformation.
To show that f is onto V2(F). Let (a1, a2) be any element of V2(F). Then (a1, a2, 0) ∈ V3(F) and we have f (a1, a2, 0) = (a1, a2). Therefore f is onto V2(F).
Therefore f is a homomorphism of V3(F) onto V2(F).
Example 2. Let V(R) be the vector space of all complex numbers a+ib over the field of reals R, and let T be a mapping from V(R) to V2(R) defined as T (a+ib) = (a, b). Show that T is an isomorphism.
Solution. T is one-one. Let
α = a + ib, β = c + id
be any two members of V(R). Then a, b, c, d ∈ R.
We have
T(α) = T(β) ⇒ (a, b) = (c, d) [∵ T(α) = (a, b), T(β) = (c, d)]
⇒ a = c, b = d ⇒ a + ib = c + id
⇒ α = β.
∴ T is one-one.
T is onto. Let (a, b) be an arbitrary member of V2(R). Then ∃ a vector a+ib ∈ V(R) such that T (a+ib) = (a, b). Hence T is onto.
⇒ (c1−d1) f(α1) + (c2−d2) f(α2) + ... + (cn−dn) f(αn) = 0
⇒ c1 − d1 = 0, c2 − d2 = 0, ..., cn − dn = 0, since f(α1), f(α2), ..., f(αn) are linearly independent
⇒ c1 = d1, c2 = d2, ..., cn = dn
⇒ γ = δ.
∴ f is one-one.
∴ f is an isomorphism of V onto V′.
Example 6. If V is finite dimensional and f is a homomorphism of V into itself which is not onto, prove that there is some non-zero vector α in V such that f(α) = 0. (Meerut 1969)
Solution. If f is a homomorphism of V into itself, then f(0) = 0. Suppose there is no non-zero vector α in V such that f(α) = 0. Then f is one-one. Because
f(β) = f(γ)
⇒ f(β) − f(γ) = 0
⇒ f(β − γ) = 0 [∵ f is a linear transformation]
⇒ β − γ = 0 ⇒ β = γ.
Now V is finite dimensional and f is a linear transformation of V into itself. Since f is one-one, therefore f must be onto V. But it is given that f is not onto. Therefore our assumption is wrong. Hence there will be a non-zero vector α in V such that f(α) = 0.
Example 7. Define a linear transformation of a vector space V(F) into a vector space W(F). Show that the mapping
T : (a, b) → (a+2, b+3)
of V2(R) into itself is not a linear transformation.
Solution. Linear Transformation. Definition. Let V(F) and W(F) be two vector spaces over the same field F. A mapping
T : V → W
is called a linear transformation of V into W if
T (aα + bβ) = a T(α) + b T(β) ∀ a, b ∈ F and ∀ α, β ∈ V.
Now to show that the mapping
T : (a, b) → (a+2, b+3)
of V2(R) into itself is not a linear transformation.
Take α = (1, 2) and β = (1, 3) as two vectors of V2(R) and a = 1, b = 1 as two elements of the field R.
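The failure can be confirmed with the concrete vectors used in the text (α = (1, 2), β = (1, 3), a = b = 1); the sketch below is an illustration of mine:

```python
def T(v):
    # T(a, b) = (a + 2, b + 3) — a translation, not a linear map
    a, b = v
    return (a + 2, b + 3)

alpha, beta = (1, 2), (1, 3)
a, b = 1, 1
lhs = T(tuple(a * x + b * y for x, y in zip(alpha, beta)))   # T(α + β) = T(2, 5) = (4, 8)
rhs = tuple(p + q for p, q in zip(T(alpha), T(beta)))        # T(α) + T(β) = (6, 11)
assert lhs != rhs
assert T((0, 0)) != (0, 0)   # a linear map must send the zero vector to zero
```

The second assertion is the quickest test in practice: any map with T(0) ≠ 0 cannot be linear.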
α = 0 + α where 0 ∈ W1, α ∈ W2,
and α = α + 0 where α ∈ W1, 0 ∈ W2.
Thus α ∈ V can be expressed in at least two different ways as the sum of an element of W1 and an element of W2. This contradicts the fact that V is the direct sum of W1 and W2. Hence 0 is the only vector common to both W1 and W2, i.e., W1 ∩ W2 = {0}. Thus the conditions are necessary.
The conditions are sufficient.
Let V = W1 + W2 and W1 ∩ W2 = {0}. Then to show that V is the direct sum of W1 and W2.
V = W1 + W2 ⇒ each element of V can be expressed as the sum of an element of W1 and an element of W2. Now to show that this expression is unique.
Let, if possible,
α = α1 + α2, α ∈ V, α1 ∈ W1, α2 ∈ W2,
and α = β1 + β2, β1 ∈ W1, β2 ∈ W2.
Then to show that α1 = β1 and α2 = β2.
We have α1 + α2 = β1 + β2
⇒ α1 − β1 = β2 − α2.
Since W1 is a subspace, therefore
α1 ∈ W1, β1 ∈ W1 ⇒ α1 − β1 ∈ W1.
Similarly β2 − α2 ∈ W2.
∴ α1 − β1 = β2 − α2 ∈ W1 ∩ W2
⇒ α1 − β1 = 0 and β2 − α2 = 0 [∵ W1 ∩ W2 = {0}]
⇒ α1 = β1, α2 = β2.
Hence V = W1 ⊕ W2.
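A concrete instance (of my own choosing) makes the uniqueness tangible: in R², take W1 = span{(1, 0)} and W2 = span{(1, 1)}. These satisfy W1 ∩ W2 = {0}, and each vector splits in exactly one way:

```python
def decompose(v):
    """Split v ∈ R² as α1 + α2 with α1 ∈ W1 = span{(1,0)}, α2 ∈ W2 = span{(1,1)}."""
    x, y = v
    a2 = (y, y)          # the W2-component is forced: its second coordinate must equal y
    a1 = (x - y, 0)      # whatever remains lies along (1, 0), i.e., in W1
    return a1, a2

v = (7, 3)
a1, a2 = decompose(v)
assert tuple(p + q for p, q in zip(a1, a2)) == v   # v = α1 + α2
assert a1[1] == 0                                  # α1 ∈ W1
assert a2[0] == a2[1]                              # α2 ∈ W2
```

Because both components are forced by v, no second decomposition is possible, which is exactly the uniqueness proved above.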
Again let β ∈ W1 ∩ W2. Then β can be expressed as a linear combination of S1 and also as a linear combination of S2. So we have
β = a1α1 + a2α2 + ... + anαn = b1β1 + b2β2 + ... + blβl.
∴ a1α1 + a2α2 + ... + anαn − b1β1 − b2β2 − ... − blβl = 0
⇒ a1 = 0, a2 = 0, ..., an = 0, b1 = 0, b2 = 0, ..., bl = 0, since α1, α2, ..., αn, β1, β2, ..., βl are linearly independent,
∴ β = 0 (zero vector).
Thus W1 ∩ W2 = {0}.
Hence V is the direct sum of W1 and W2.
Dimension of a quotient space. Alternative method.
Theorem. If W1 and W2 are complementary subspaces of a vector space V, then the mapping f which assigns to each vector β in W2 the coset W1 + β is an isomorphism between W2 and V/W1. (Meerut 1969)
Proof. It is given that
V = W1 ⊕ W2
and f : W2 → V/W1 such that
f(β) = W1 + β ∀ β ∈ W2.
We shall show that f is an isomorphism of W2 onto V/W1.
(i) f is one-one.
If β1, β2 ∈ W2, then
f(β1) = f(β2) ⇒ W1 + β1 = W1 + β2 [by def. of f]
⇒ β1 − β2 ∈ W1
⇒ β1 − β2 ∈ W1 ∩ W2 [∵ β1 − β2 ∈ W2 because W2 is a subspace]
⇒ β1 − β2 = 0 [∵ W1 ∩ W2 = {0}]
⇒ β1 = β2.
∴ f is one-one.
(ii) f is onto.
Let W1 + α be any coset in V/W1, where α ∈ V. Since V is the direct sum of W1 and W2, therefore we can write
α = γ + β where γ ∈ W1, β ∈ W2.
This gives α − β = γ ∈ W1.
Since α − β ∈ W1, therefore W1 + α = W1 + β.
Now f(β) = W1 + β [by def. of f]
= W1 + α.
Thus W1 + α ∈ V/W1 ⇒ ∃ β ∈ W2 such that f(β) = W1 + α.
Therefore f is onto.
⇒ (α1−β1) + ... + (αi−1−βi−1) + (αi−βi) + (αi+1−βi+1) + ... + (αn−βn) = 0.  ...(3)
Now each Wi is a subspace of V. Therefore αi − βi, and also its additive inverse βi − αi, belongs to Wi, i = 1, ..., n.
From (3), we get
(αi−βi) = (β1−α1) + ... + (βi−1−αi−1) + (βi+1−αi+1) + ... + (βn−αn).  ...(4)
Now the vector on the right hand side of (4), and consequently the vector αi − βi, is in W1 + ... + Wi−1 + Wi+1 + ... + Wn.
Also αi − βi ∈ Wi.
∴ αi − βi ∈ Wi ∩ (W1 + ... + Wi−1 + Wi+1 + ... + Wn).
But for every i = 1, ..., n, it is given that
Wi ∩ (W1 + ... + Wi−1 + Wi+1 + ... + Wn) = {0}.
Therefore αi − βi = 0, i = 1, ..., n
⇒ αi = βi, i = 1, ..., n
⇒ the expression (1) for α is unique.
Hence V is the direct sum of W1, ..., Wn.
Theorem 3. Let V(F) be a finite dimensional vector space and let W1, ..., Wk be subspaces of V. Then the following two statements are equivalent:
(i) V is the direct sum of W1, ..., Wk.
(ii) If Bi is a basis of Wi, i = 1, ..., k, then the union B = B1 ∪ B2 ∪ ... ∪ Bk is also a basis for V. (Meerut 1973, 75, 80, 83)
i.e., B spans V.
Now to show that B is linearly independent. Let the relation
(a1^(1) α1^(1) + ... + an1^(1) αn1^(1)) + ... + (a1^(k) α1^(k) + ... + ank^(k) αnk^(k)) = 0  ...(1)
hold, where for each i the vectors αj^(i) are the members of Bi.
Since V is the direct sum of W1, ..., Wk, therefore 0 ∈ V can be uniquely expressed as a sum of vectors, one in each Wi. This unique expression is
0 = 0 + ... + 0, where 0 ∈ Wi, i = 1, ..., k.
(a1^(i) − b1^(i)) α1^(i) + ... + (ani^(i) − bni^(i)) αni^(i) = 0
⇒ a1^(i) = b1^(i), ..., ani^(i) = bni^(i) [∵ Bi is linearly independent, being a basis of Wi]
for each i = 1, ..., k
⇒ the expression (2) for α is unique.
Hence V is the direct sum of W1, ..., Wk.
Note. While proving this theorem we have proved that if a finite dimensional vector space V(F) is the direct sum of its subspaces W1, ..., Wk, then dim V = dim W1 + ... + dim Wk.
§ 18. Co-ordinates. (Meerut 1983 P). Let V(F) be a finite dimensional vector space. Let
B = {α1, ..., αn}
be an ordered basis of V. By an ordered basis we mean that the vectors of B have been enumerated in some well-defined way, i.e., the vectors occupying the first, second, ..., nth places in the set B are fixed. If α ∈ V and α = x1α1 + x2α2 + ... + xnαn, then the column matrix
| x1 |
| x2 |
| ...|
| xn |
is called the coordinate matrix of α relative to the ordered basis B. We shall use the symbol
[α]B
for the coordinate matrix of the vector α relative to the ordered basis B.
It should be noted that for the same basis set B, the coordinates of the vector α are unique only with respect to a particular ordering of B. The basis set B can be ordered in several ways. The coordinates of α may change with a change in the ordering of B.
Solved Examples
Example 1. Show that the set
S = {(1, 0, 0), (1, 1, 0), (1, 1, 1)}
is a basis of R³(R), where R is the field of real numbers. Hence find the coordinates of the vector (a, b, c) with respect to the above basis. (Meerut 1989)
Solution. The dimension of the vector space R³(R) is 3. If the set S is linearly independent, then S will form a basis of R³(R) because S contains 3 vectors. Let x, y, z be scalars in R such that
x (1, 0, 0) + y (1, 1, 0) + z (1, 1, 1) = 0 = (0, 0, 0)
⇒ (x+y+z, y+z, z) = (0, 0, 0)
⇒ x + y + z = 0, y + z = 0, z = 0
⇒ x = 0, y = 0, z = 0
⇒ the set S is linearly independent.
∴ S is a basis of R³(R).
Now to find the coordinates of (a, b, c) with respect to the ordered basis S. Let p, q, r be scalars in R such that
(a, b, c) = p (1, 0, 0) + q (1, 1, 0) + r (1, 1, 1)
⇒ (a, b, c) = (p+q+r, q+r, r)
⇒ p + q + r = a, q + r = b, r = c
⇒ r = c, q = b − c, p = a − b.
Hence the coordinates of the vector (a, b, c) are (p, q, r), i.e., (a−b, b−c, c).
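The formula (a−b, b−c, c) can be verified by rebuilding the vector from its coordinates; a short sketch of mine with one sample vector:

```python
def coords(v):
    # coordinates of (a, b, c) relative to the ordered basis
    # B = {(1,0,0), (1,1,0), (1,1,1)}:  p = a - b, q = b - c, r = c
    a, b, c = v
    return (a - b, b - c, c)

v = (5, 2, -1)
p, q, r = coords(v)
rebuilt = tuple(p * e1 + q * e2 + r * e3
                for e1, e2, e3 in zip((1, 0, 0), (1, 1, 0), (1, 1, 1)))
assert rebuilt == v              # p(1,0,0) + q(1,1,0) + r(1,1,1) recovers v
assert (p, q, r) == (3, 3, -1)   # coordinates of (5, 2, -1) in this ordered basis
```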
Ex. 2. Find the coordinates of the vector (2, 1, −6) of R³ relative to the basis α1 = (1, 1, 2), α2 = (3, −1, 0), α3 = (2, 0, −1).
Sol. To find the coordinates of the vector (2, 1, −6) relative to the ordered basis {α1, α2, α3}, we shall express the vector (2, 1, −6)
aα1 + bα2 ∈ N(T).
Thus a, b ∈ F and α1, α2 ∈ N(T) ⇒ aα1 + bα2 ∈ N(T). Therefore N(T) is a subspace of U.
§ 5. Rank and nullity of a linear transformation.
Theorem 1. Let T be a linear transformation from a vector space U(F) into a vector space V(F). If U is finite dimensional, then the range of T is a finite dimensional subspace of V.
Proof. Since U is finite dimensional, therefore there exists a finite subset of U, say {α1, α2, ..., αn}, which spans U.
Let β ∈ range of T. Then there exists α in U such that T(α) = β.
Now α ∈ U ⇒ ∃ a1, a2, ..., an ∈ F such that
α = a1α1 + a2α2 + ... + anαn
⇒ T(α) = T (a1α1 + a2α2 + ... + anαn)
⇒ β = a1 T(α1) + a2 T(α2) + ... + an T(αn).  ...(1)
Now the vectors T(α1), T(α2), ..., T(αn) are in the range of T. If β is any vector in the range of T, then from (1) we see that β can be expressed as a linear combination of T(α1), T(α2), ..., T(αn). Therefore the range of T is spanned by the vectors
T(α1), T(α2), ..., T(αn).
Hence the range of T is finite dimensional.
Now we are in a position to define rank and nullity of a linear
transformation.
Rank and nullity of a linear transformation. Definition.
(S.V. 1992; Meerut 75; Kanpur 81; Allahabad 79)
Let T be a linear transformation from a vector space U(F) into
a vector space V(F) with U finite dimensional. The rank of T,
denoted by ρ(T), is the dimension of the range of T, i.e.,
ρ(T) = dim R(T).
The nullity of T, denoted by ν(T), is the dimension of the null
space of T, i.e., ν(T) = dim N(T).
Theorem 2. Let U and V be vector spaces over the field F
and let T be a linear transformation from U into V. Suppose that U
is finite dimensional. Then
rank (T) + nullity (T) = dim U.
(Meerut 1983P, 87, 93; Andhra 92; Tirupati 90; I.A.S. 85;
Madras 83; Madurai 85; Nagarjuna 90; Kanpur 81; Allahabad 79)
Proof. Let N be the null space of T. Then N is a subspace
of U. Since U is finite dimensional, therefore N is finite dimensional.
Let dim N = nullity (T) = k, and let {α1, α2, …, αk} be a
basis for N.
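The theorem can be verified numerically for a concrete matrix transformation. In the sketch below (our own illustrative example, not from the book), the matrix `A` represents a hypothetical T : R⁴ → R³ whose third row is the sum of the first two, and the null-space basis was found by hand from A x = 0:

```python
import numpy as np

# A hypothetical T : R^4 -> R^3; row 3 = row 1 + row 2, so the rank is 2.
A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 0],
              [1, 3, 1, 1]], dtype=float)

dim_U = A.shape[1]                  # dim U = 4
rank_T = np.linalg.matrix_rank(A)   # dimension of the range of T

# A basis of the null space, found by hand from the equations A x = 0.
null_basis = np.array([[2, -1, 1, 0],
                       [-1, 0, 0, 1]], dtype=float)
nullity_T = null_basis.shape[0]
```

Here rank (T) + nullity (T) = 2 + 2 = 4 = dim U, as the theorem asserts.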
Linear Transformations
Solved Examples
Ex. 1. Show that the mapping T : V3(R) → V2(R) defined as
T(a1, a2, a3) = (3a1 − 2a2 + a3, a1 − 3a2 − 2a3)
is a linear transformation from V3(R) into V2(R).
Solution. Let α = (a1, a2, a3), β = (b1, b2, b3) ∈ V3(R).
Then T(α) = T(a1, a2, a3) = (3a1 − 2a2 + a3, a1 − 3a2 − 2a3)
and T(β) = T(b1, b2, b3) = (3b1 − 2b2 + b3, b1 − 3b2 − 2b3).
Also let a, b ∈ R. Then aα + bβ ∈ V3(R). We have
T(aα + bβ) = T[a (a1, a2, a3) + b (b1, b2, b3)]
= T(aa1 + bb1, aa2 + bb2, aa3 + bb3)
= (3(aa1 + bb1) − 2(aa2 + bb2) + aa3 + bb3,
aa1 + bb1 − 3(aa2 + bb2) − 2(aa3 + bb3))
= (a (3a1 − 2a2 + a3) + b (3b1 − 2b2 + b3),
a (a1 − 3a2 − 2a3) + b (b1 − 3b2 − 2b3))
= a (3a1 − 2a2 + a3, a1 − 3a2 − 2a3) + b (3b1 − 2b2 + b3, b1 − 3b2 − 2b3)
= aT(α) + bT(β).
Hence T is a linear transformation from V3(R) into V2(R).
Ex. 2. Show that the mapping T : V2(R) → V3(R) defined as
T(a, b) = (a + b, a − b, b)
is a linear transformation from V2(R) into V3(R). Find the range,
rank, null-space and nullity of T. (Nagarjuna 1990; Tirupati 90)
Solution. Let α = (a1, b1), β = (a2, b2) ∈ V2(R).
Then T(α) = T(a1, b1) = (a1 + b1, a1 − b1, b1)
and T(β) = T(a2, b2) = (a2 + b2, a2 − b2, b2).
Also let a, b ∈ R. Then aα + bβ ∈ V2(R) and
T(aα + bβ) = T[a (a1, b1) + b (a2, b2)]
= T(aa1 + ba2, ab1 + bb2)
= (aa1 + ba2 + ab1 + bb2, aa1 + ba2 − ab1 − bb2, ab1 + bb2)
= (a [a1 + b1] + b [a2 + b2], a [a1 − b1] + b [a2 − b2], ab1 + bb2)
= a (a1 + b1, a1 − b1, b1) + b (a2 + b2, a2 − b2, b2)
= aT(α) + bT(β).
∴ T is a linear transformation from V2(R) into V3(R).
Now {(1, 0), (0, 1)} is a basis for V2(R).
We have T(1, 0) = (1 + 0, 1 − 0, 0) = (1, 1, 0)
and T(0, 1) = (0 + 1, 0 − 1, 1) = (1, −1, 1).
The vectors T(1, 0), T(0, 1) span the range of T.
Thus the range of T is the subspace of V3(R) spanned by the
vectors (1, 1, 0), (1, −1, 1).
Now the vectors (1, 1, 0), (1, −1, 1) ∈ V3(R) are linearly
independent because if x, y ∈ R, then
x1 − x2 + 2x3 = 0,
2x1 + x2 − x3 = 0, …(1)
−x1 − 2x2 + 0x3 = 0.
∴ the null space of T is the solution space of the system of
linear homogeneous equations (1). Let A be the coefficient matrix
of the equations (1). Then
A = [  1  −1   2 ]
    [  2   1  −1 ]
    [ −1  −2   0 ]
~ [ 1  −1   2 ]
  [ 0   3  −5 ]   performing the elementary row operations
  [ 0  −3   2 ]   R2 → R2 − 2R1, R3 → R3 + R1
~ [ 1  −1   2 ]
  [ 0   3  −5 ]   by R3 → R3 + R2.
  [ 0   0  −3 ]
This last matrix is in Echelon form. Its rank = the number
of non-zero rows = 3. Therefore rank A = 3 = the number of
unknowns in the equations (1). Hence the equations (1) have no
linearly independent solutions. Therefore x1 = 0, x2 = 0, x3 = 0 is
the only solution of the equations (1). Thus (0, 0, 0) is the only
vector which belongs to the null space of T. Hence the null space
of T is the zero subspace.
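The conclusion can be checked with numpy (our own spot-check, not the book's method): the coefficient matrix has rank 3 and nonzero determinant, so the homogeneous system has only the trivial solution.

```python
import numpy as np

# Coefficient matrix of the homogeneous system (1).
A = np.array([[1, -1, 2],
              [2, 1, -1],
              [-1, -2, 0]], dtype=float)

rank_A = np.linalg.matrix_rank(A)    # 3 = the number of unknowns
x = np.linalg.solve(A, np.zeros(3))  # the unique solution of A x = 0
```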
Example 4. Let V be the vector space of all n×n matrices over
the field F, and let B be a fixed n×n matrix. If
T(A) = AB − BA ∀ A ∈ V,
verify that T is a linear transformation from V into V. (Meerut 1982)
Solution. If A ∈ V, then T(A) = AB − BA ∈ V because AB − BA
is also an n×n matrix over the field F. Thus T is a function from
V into V.
Let A1, A2 ∈ V and a, b ∈ F. Then aA1 + bA2 ∈ V and
T(aA1 + bA2) = (aA1 + bA2) B − B (aA1 + bA2)
= aA1B + bA2B − aBA1 − bBA2 = a (A1B − BA1) + b (A2B − BA2)
= aT(A1) + bT(A2).
∴ T is a linear transformation from V into V.
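A quick randomized check of this linearity, using numpy (our own illustration; the matrix sizes and scalars are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))      # the fixed matrix B (random, for illustration)

def T(A):
    """T(A) = AB - BA."""
    return A @ B - B @ A

A1, A2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
a, b = 2.0, -1.5
lhs = T(a * A1 + b * A2)
rhs = a * T(A1) + b * T(A2)
```

`lhs` and `rhs` agree up to floating-point rounding, as the algebra above guarantees.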
Example 5. Let V be an n-dimensional vector space over the field
F and let T be a linear transformation from V into V such that the
range and null space of T are identical. Prove that n is even. Give
an example of such a linear transformation.
Solution. Let N be the null space of T. Then N is also the
range of T.
Now ρ(T) + ν(T) = dim V
i.e., dim of range of T + dim of null space of T = dim V = n
i.e., 2 dim N = n [∵ range of T = null space of T = N]
i.e., n is even.
Example of such a transformation.
Let T : V2(R) → V2(R) be defined by
T(a, b) = (b, 0) ∀ (a, b) ∈ V2(R).
Let α = (a1, b1), β = (a2, b2) ∈ V2(R) and let x, y ∈ R.
Then T(xα + yβ) = T[x (a1, b1) + y (a2, b2)]
= T(xa1 + ya2, xb1 + yb2) = (xb1 + yb2, 0)
= (xb1, 0) + (yb2, 0) = x (b1, 0) + y (b2, 0)
= xT(a1, b1) + yT(a2, b2) = xT(α) + yT(β).
∴ T is a linear transformation from V2(R) into V2(R).
Now {(1, 0), (0, 1)} is a basis of V2(R).
We have T(1, 0) = (0, 0) and T(0, 1) = (1, 0).
Thus the range of T is the subspace of V2(R) spanned by the
vectors (0, 0) and (1, 0). The vector (0, 0) can be omitted from
this spanning set because it is the zero vector. Therefore the range of
T is the subspace of V2(R) spanned by the vector (1, 0). Thus
range of T = {a (1, 0) : a ∈ R} = {(a, 0) : a ∈ R}.
Now let (a, b) ∈ N (the null space of T).
Then (a, b) ∈ N ⇒ T(a, b) = (0, 0) ⇒ (b, 0) = (0, 0) ⇒ b = 0.
∴ null space of T = {(a, 0) : a ∈ R}.
Thus range of T = null space of T.
Also we observe that dim V2(R) = 2, which is even.
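The example can be verified in a few lines of plain Python (our own sketch of the book's operator):

```python
def T(v):
    """The book's operator T(a, b) = (b, 0) on V2(R)."""
    a, b = v
    return (b, 0.0)

# Images all lie on the x-axis (the range), and x-axis vectors map to
# zero (they form the null space), so range(T) = null(T).
samples = [(1.0, 2.0), (-3.0, 0.5), (0.0, 4.0)]
images = [T(v) for v in samples]
```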
Example 6. Let U(F) and V(F) be two vector spaces and let
T1, T2 be two linear transformations from U to V. Let x, y be two
given elements of F. Then the mapping T defined as
T(α) = xT1(α) + yT2(α) ∀ α ∈ U
is a linear transformation from U into V.
(Marathwada 1971)
Solution. If α ∈ U, then T1(α) and T2(α) ∈ V. Therefore
xT1(α) + yT2(α) ∈ V. Thus T as defined above is a mapping
from U into V. Let α, β ∈ U and a, b ∈ F. Then
T(aα + bβ) = xT1(aα + bβ) + yT2(aα + bβ) (by def. of T)
= x [aT1(α) + bT1(β)] + y [aT2(α) + bT2(β)]
[∵ T1 and T2 are linear transformations]
= a [xT1(α) + yT2(α)] + b [xT1(β) + yT2(β)]
= aT(α) + bT(β) (by def. of T).
∴ T is a linear transformation from U into V.
structure on the set L(U, V) over the same field F. For this purpose
we shall have to suitably define addition in L(U, V) and
scalar multiplication in L(U, V) over F.
Theorem 1. Let U and V be vector spaces over the field F.
Let T1 and T2 be linear transformations from U into V. The function
T1 + T2 defined by
(T1 + T2)(α) = T1(α) + T2(α) ∀ α ∈ U
is a linear transformation from U into V. If c is any element of F,
the function cT defined by
(cT)(α) = cT(α) ∀ α ∈ U
is a linear transformation from U into V. The set L(U, V) of all
linear transformations from U into V, together with the addition and
scalar multiplication defined above, is a vector space over the field F.
(Andhra 1992; Meerut 78, 91; Madras 81; Kanpur 69)
Proof. Suppose T1 and T2 are linear transformations from U
into V and we define T1 + T2 as follows:
(T1 + T2)(α) = T1(α) + T2(α) ∀ α ∈ U. …(1)
Since T1(α) + T2(α) ∈ V, therefore T1 + T2
is a function from U into V.
Let a, b ∈ F and α, β ∈ U. Then
(T1 + T2)(aα + bβ) = T1(aα + bβ) + T2(aα + bβ) [by (1)]
= [aT1(α) + bT1(β)] + [aT2(α) + bT2(β)]
[∵ T1 and T2 are linear transformations]
= a [T1(α) + T2(α)] + b [T1(β) + T2(β)] [∵ V is a vector space]
= a (T1 + T2)(α) + b (T1 + T2)(β) [by (1)]
∴ T1 + T2 is a linear transformation from U into V. Thus
T1, T2 ∈ L(U, V) ⇒ T1 + T2 ∈ L(U, V).
Therefore L(U, V) is closed with respect to addition defined in it.
Again let T ∈ L(U, V) and c ∈ F. Let us define cT as
follows:
(cT)(α) = cT(α) ∀ α ∈ U. …(2)
Since cT(α) ∈ V, therefore cT is a function from U into V.
Let a, b ∈ F and α, β ∈ U. Then
(cT)(aα + bβ) = cT(aα + bβ) [by (2)]
= c [aT(α) + bT(β)] [∵ T is a linear transformation]
= c [aT(α)] + c [bT(β)] = (ca) T(α) + (cb) T(β)
= (ac) T(α) + (bc) T(β) = a [cT(α)] + b [cT(β)]
= a [(cT)(α)] + b [(cT)(β)].
= 0̂(α) [by def. of 0̂]
∴ −T + T = 0̂ for every T ∈ L(U, V).
Thus each element in L(U, V) possesses an additive inverse.
Hence L(U, V) is an abelian group with respect to the addition defined in it.
Further we make the following observations:
(i) Let c ∈ F and T1, T2 ∈ L(U, V). If α is any element in
U, we have
[c (T1 + T2)](α) = c [(T1 + T2)(α)]
[by (2), i.e., by def. of scalar multiplication in L(U, V)]
= c [T1(α) + T2(α)] [by (1)]
= cT1(α) + cT2(α)
[∵ c ∈ F and T1(α), T2(α) ∈ V, which is
a vector space]
= (cT1)(α) + (cT2)(α) [by (2)]
= (cT1 + cT2)(α) [by (1)]
∴ c (T1 + T2) = cT1 + cT2.
(ii) Let a, b ∈ F and T ∈ L(U, V). If α ∈ U, we have
[(a + b) T](α) = (a + b) T(α) [by (2)]
= aT(α) + bT(α) [∵ V is a vector space]
= (aT)(α) + (bT)(α) [by (2)]
= (aT + bT)(α) [by (1)]
∴ (a + b) T = aT + bT.
(iii) Let a, b ∈ F and T ∈ L(U, V). If α ∈ U, we have
[(ab) T](α) = (ab) T(α) [by (2)]
= a [b T(α)] [∵ V is a vector space]
= a [(bT)(α)] [by (2)]
= [a (bT)](α) [by (2)]
∴ (ab) T = a (bT).
(iv) Let 1 ∈ F and T ∈ L(U, V). If α ∈ U, we have
(1 T)(α) = 1 T(α) [by (2)]
= T(α) [∵ V is a vector space]
i.e., 1 T = T.
Now to show that T is a linear transformation.
Let a, b ∈ F and α, β ∈ U. Let
α = x1α1 + … + xnαn
and β = y1α1 + … + ynαn.
Then T(aα + bβ) = T[a (x1α1 + … + xnαn) + b (y1α1 + … + ynαn)]
= T[(ax1 + by1) α1 + … + (axn + byn) αn]
= (ax1 + by1) … [by def. of T]
Tpq(αi) = 0 if i ≠ q, and Tpq(αi) = βp if i = q,
i.e., Tpq(αi) = δiq βp, …(1)
where δiq ∈ F is the Kronecker delta, i.e., δiq = 1 if i = q
and δiq = 0 if i ≠ q.
Since p can be any of 1, 2, …, m and q any of 1, 2, …, n, there
are mn such Tpq's. Let B1 denote the set of these mn transformations
Tpq. We shall show that B1 is a basis for L(U, V).
(i) First we shall show that L(U, V) is the linear span of B1.
Let T ∈ L(U, V). Since T(αj) ∈ V and any element in V is
a linear combination of β1, β2, …, βm, therefore
T(αj) = a1j β1 + a2j β2 + … + amj βm.
Now consider S = Σ_{p=1}^{m} Σ_{q=1}^{n} apq Tpq. Then
S(αj) = Σ_{p=1}^{m} apj βp [on summing with respect to q, by (1)]
= T(αj), j = 1, 2, …, n, so that S = T.
(ii) Next, to show that B1 is linearly independent, suppose
Σ_{p=1}^{m} Σ_{q=1}^{n} bpq Tpq = 0̂ [0̂ is the zero transformation]
Applying both sides to αj, we get
Σ_{p=1}^{m} bpj βp = 0
⇒ b1j β1 + b2j β2 + … + bmj βm = 0, 1 ≤ j ≤ n
⇒ b1j = 0, b2j = 0, …, bmj = 0, 1 ≤ j ≤ n
[∵ β1, β2, …, βm are linearly independent]
⇒ bpq = 0, where 1 ≤ p ≤ m and 1 ≤ q ≤ n.
∴ B1 is linearly independent.
Therefore B1 is a basis of L(U, V).
∴ dim L(U, V) = number of elements in B1
= mn.
Corollary. The vector space L(U, U) of all linear operators on
an n-dimensional vector space U is of dimension n².
Note. Suppose U(F) is an n-dimensional vector space and
V(F) is an m-dimensional vector space. If U ≠ {0} and V ≠ {0}, then
n ≥ 1 and m ≥ 1. Therefore L(U, V) does not consist of the zero
element 0̂ alone, because the dimension of L(U, V) is mn ≥ 1.
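With bases fixed, each Tpq corresponds to the "matrix unit" Epq (1 in row p, column q, zeros elsewhere), so the theorem can be checked concretely. The sizes m = 2, n = 3 below are our own illustrative choice, not from the book:

```python
import numpy as np

m, n = 2, 3   # dim V = m, dim U = n (illustrative sizes)

# Build the mn matrix units E_pq, the matrices of the transformations T_pq.
units = []
for p in range(m):
    for q in range(n):
        E = np.zeros((m, n))
        E[p, q] = 1.0
        units.append(E)

# Any matrix A (i.e. any T in L(U, V)) is the combination sum_{p,q} a_pq E_pq.
A = np.arange(6, dtype=float).reshape(m, n)
recombined = sum(A[p, q] * units[p * n + q] for p in range(m) for q in range(n))

# Flattening the units gives mn linearly independent vectors in F^(mn).
flat = np.array([E.ravel() for E in units])
```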
§ 7. Product of Linear Transformations.
Theorem 1. Let U, V and W be vector spaces over the field F.
Let T be a linear transformation from U into V and S a linear
transformation from V into W. Then the composite function ST (called the
product of the linear transformations), defined by
(ST)(α) = S[T(α)] ∀ α ∈ U,
is a linear transformation from U into W.
(Meerut 1983, 89; Nagarjuna 94)
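Representing S and T by matrices (a sketch with our own random choices of sizes and entries), the composite ST is again linear, and it corresponds to the matrix product of the two representing matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
T_mat = rng.standard_normal((3, 4))   # T : U = R^4 -> V = R^3
S_mat = rng.standard_normal((2, 3))   # S : V = R^3 -> W = R^2

T = lambda u: T_mat @ u
S = lambda v: S_mat @ v
ST = lambda u: S(T(u))                # the composite (ST)(u) = S[T(u)]

u1, u2 = rng.standard_normal(4), rng.standard_normal(4)
a, b = 3.0, -2.0
```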
D(f(x)) = d/dx f(x)
and T(f(x)) = ∫₀ˣ f(x) dx …(2)
for every f(x) ∈ V.
(TD)[f(x)] = ∫₀ˣ (a1 + 2a2x + …) dx
= a1x + a2x² + …
≠ f(x) unless a0 = 0.
Thus ∃ f(x) ∈ V such that
(TD)[f(x)] ≠ I[f(x)].
∴ TD ≠ I.
Hence TD ≠ DT,
showing that the product of linear operators is not in general
commutative.
Example 3. Let V(R) be the vector space of all polynomials
in x with coefficients in the field R. Let D and T be the two linear
transformations on V defined as
D(f(x)) = d/dx f(x) and T(f(x)) = x f(x) for every f(x) ∈ V.
Solution. We have
(DT)[f(x)] = D[T(f(x))]
= D[x f(x)] = d/dx [x f(x)]
= f(x) + x d/dx f(x). …(1)
Also (TD)[f(x)] = T[D(f(x))]
= T[d/dx f(x)]
= x d/dx f(x). …(2)
From (1) and (2), we see that ∃ f(x) ∈ V such that
(DT)(f(x)) ≠ (TD)(f(x))
⇒ DT ≠ TD.
Also we see that
(DT − TD)(f(x)) = (DT)(f(x)) − (TD)(f(x))
= f(x) = I(f(x)).
∴ DT − TD = I.
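The identity DT − TD = I can be checked on coefficient lists in plain Python (our own encoding of polynomials as [a0, a1, a2, …], not the book's):

```python
def D(c):
    """Differentiate: c = [a0, a1, a2, ...] are polynomial coefficients."""
    return [k * c[k] for k in range(1, len(c))] or [0]

def T(c):
    """Multiply by x."""
    return [0] + c

def pad(c, n):
    return c + [0] * (n - len(c))

f = [5, -3, 0, 2]            # f(x) = 5 - 3x + 2x^3, an arbitrary test polynomial
DT = D(T(f))                 # (DT)f = f + x f'
TD = T(D(f))                 # (TD)f = x f'
n = max(len(DT), len(TD))
diff = [u - v for u, v in zip(pad(DT, n), pad(TD, n))]
```

Here `diff` comes out equal to `f`, i.e. (DT − TD)f = f, as the example proves.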
Theorem 2. Let V(F) be a vector space and A, B, C be linear
transformations on V. Then
(i) A0̂ = 0̂ = 0̂A
(ii) AI = A = IA
(iii) A (BC) = (AB) C
(iv) A (B + C) = AB + AC
(v) (A + B) C = AC + BC
(vi) c (AB) = (cA) B = A (cB), where c is any element of F.
Proof. Just for the sake of convenience we first mention here
our definitions of addition, scalar multiplication and product of
linear transformations:
(A + B)(α) = A(α) + B(α) …(1)
(cA)(α) = cA(α) …(2)
(AB)(α) = A[B(α)] …(3)
∀ α ∈ V and ∀ c ∈ F.
Now we shall prove the above results.
(i) We have ∀ α ∈ V,
(A0̂)(α) = A[0̂(α)] [by (3)]
in the set L(V, V). The transformation 0̂ will act as the zero
element and the identity transformation I will act as the unity
element of this ring.
§ 9. Algebra or Linear Algebra. Definition. Let F be a field.
A vector space V over F is called a linear algebra over F if there is
defined an additional operation in V, called multiplication of vectors,
satisfying the following postulates:
1. αβ ∈ V ∀ α, β ∈ V
2. α (βγ) = (αβ) γ ∀ α, β, γ ∈ V
3. α (β + γ) = αβ + αγ
and (α + β) γ = αγ + βγ ∀ α, β, γ ∈ V
4. c (αβ) = (cα) β = α (cβ) ∀ α, β ∈ V and c ∈ F.
If there is an element 1 in V such that
1α = α = α1 ∀ α ∈ V,
then we call V a linear algebra with identity over F. Also 1 is then
called the identity of V. The algebra V is commutative if
αβ = βα ∀ α, β ∈ V.
Theorem. Let V(F) be a vector space. The vector space
L(V, V) over F of all linear transformations from V into V is a
linear algebra with identity with respect to the product of linear
transformations as the multiplication composition in L(V, V).
Proof.
The students should write the complete proof here.
All the necessary steps have been proved here and there.
§ 10. Polynomials. Let T be a linear transformation on a
vector space V(F). Then TT is also a linear transformation on V.
We shall write T¹ = T and T² = TT. Since the product of linear
transformations is an associative operation, therefore if m is a
positive integer, we define
Tᵐ = TT…T up to m times.
Obviously Tᵐ is a linear transformation on V.
Also we define T⁰ = I (identity transformation).
If m and n are non-negative integers, it can be easily seen that
TᵐTⁿ = Tᵐ⁺ⁿ
and (Tᵐ)ⁿ = Tᵐⁿ.
The set L(V, V) of all linear transformations on V is a vector
space over the field F. If a0, a1, … ∈ F, then
p(T) = a0 I + a1 T + … ∈ L(V, V),
i.e., p(T) is also a linear transformation on V because it is a linear
⇒ TT⁻¹ = I.
Theorem 3. If A, B and C are linear transformations on a
vector space V(F) such that
AB = CA = I,
then A is invertible and A⁻¹ = B = C. (Meerut 1968, 69)
Proof. In order to show that A is invertible, we are to show
that A is one-one and onto.
(i) A is one-one.
Let α1, α2 ∈ V. Then
A(α1) = A(α2)
⇒ C[A(α1)] = C[A(α2)]
⇒ (CA)(α1) = (CA)(α2)
⇒ I(α1) = I(α2)
⇒ α1 = α2.
∴ A is one-one.
(ii) A is onto.
Let β be any element of V. Since B is a linear transformation
on V, therefore B(β) ∈ V. Let B(β) = α. Then
B(β) = α
⇒ A[B(β)] = A(α)
⇒ (AB)(β) = A(α)
⇒ I(β) = A(α) ⇒ A(α) = β.
∴ A is onto. Hence A is invertible.
Also A⁻¹ = A⁻¹ I = A⁻¹ (AB) = (A⁻¹ A) B = IB = B,
and B = IB = (CA) B = C (AB) = CI = C.
∴ A⁻¹ = B = C.
(i) T is invertible.
(ii) T is non-singular.
(iii) The range of T is V.
(iv) If {α1, …, αn} is any basis for U, then
{T(α1), …, T(αn)} is a basis for V.
(v) There is some basis {α1, …, αn} for U such that
{T(α1), …, T(αn)} is a basis for V.
Proof. (i) ⇒ (ii).
If T is invertible, then T is one-one. Therefore T is non-singular.
(ii) ⇒ (iii).
Let T be non-singular. Let {α1, …, αn} be a basis for U. Then
{α1, …, αn} is a linearly independent subset of U. Since T is non-singular,
therefore {T(α1), …, T(αn)} is a linearly independent subset
of V, and it contains n vectors. Since dim V is also n, therefore this
set of vectors is a basis for V. Now let β be any vector in V. Then
there exist scalars a1, …, an ∈ F such that
β = a1 T(α1) + … + an T(αn)
= T(a1α1 + … + anαn),
which shows that β is in the range of T because
a1α1 + … + anαn ∈ U.
Thus every vector in V is in the range of T. Hence the range of
T is V.
(iii) ⇒ (iv).
Now suppose that the range of T is V, i.e., T is onto. If {α1, …, αn}
is any basis for U, then the vectors T(α1), …, T(αn) span the range
of T, which is equal to V. Thus the vectors T(α1), …, T(αn), which
are n in number, span V, whose dimension is also n. Therefore
{T(α1), …, T(αn)} must be a basis set for V.
(iv) ⇒ (v).
Since U is finite dimensional, therefore there exists a basis for
U. Let {α1, …, αn} be a basis for U. Then {T(α1), …, T(αn)} is a
basis for V, as is given in (iv).
(v) ⇒ (i).
Suppose there is some basis {α1, …, αn}
for U such that {T(α1), …, T(αn)}
is a basis for V. The vectors {T(α1), …, T(αn)}
Solved Examples
(x1, x2) = (x2/3)(2, 3) + ((3x1 − 2x2)/3)(1, 0). …(1)
From the relation (1) we see that the set {(2, 3), (1, 0)} spans
R². Hence this set is a basis for R².
Now let (x1, x2) be any member of R². Then we are to find a
formula for T(x1, x2) under the conditions that T(2, 3) = (4, 5),
T(1, 0) = (0, 0). We have
T(x1, x2) = (x2/3) T(2, 3) + ((3x1 − 2x2)/3) T(1, 0)
= (x2/3)(4, 5) = (4x2/3, 5x2/3).
TS = 0̂ but ST ≠ 0̂.
Solution. Consider the linear transformations T and S on
V2(R) defined by
T(a, b) = (a, 0) ∀ (a, b) ∈ V2(R)
and S(a, b) = (0, a) ∀ (a, b) ∈ V2(R).
We have (TS)(a, b) = T[S(a, b)] = T(0, a) = (0, 0)
= 0̂(a, b) ∀ (a, b) ∈ V2(R).
∴ TS = 0̂.
Again (ST)(a, b) = S[T(a, b)] = S(a, 0) = (0, a)
≠ 0̂(a, b) ∀ (a, b) ∈ V2(R).
Thus ST ≠ 0̂.
Example 9. Let V be a vector space over the field F and T a
linear operator on V. If T² = 0̂, what can you say about the relation
of the range of T to the null space of T? Give an example of a
linear operator T on V2(R) such that T² = 0̂ but T ≠ 0̂.
Solution. We have T² = 0̂
⇒ T²(α) = 0̂(α) ∀ α ∈ V
⇒ T[T(α)] = 0 ∀ α ∈ V
⇒ T(α) ∈ null space of T ∀ α ∈ V.
But T(α) ∈ range of T ∀ α ∈ V.
⇒ range of T ⊆ null space of T.
For the second part of the question, consider the linear
transformation T on V2(R) defined by
T(a, b) = (0, a) ∀ (a, b) ∈ V2(R).
that
(i) T(0) = 0.
(ii) If U is a subspace of V, then the image of U under T is
also a subspace of W.
(iii) If dim V = dim W and if T(V) = W, then T is invertible.
(Meerut 1974)
17. Show that the operator T on R³ defined by
T(x, y, z) = (x + z, x − z, y)
is invertible, and find a similar rule defining T⁻¹. (Meerut 1980)
Ans. T⁻¹(x, y, z) = (½x + ½y, z, ½x − ½y).
§ 12. Matrix. Definition. Let F be any field. A set of mn
elements of F arranged in the form of a rectangular array having m
rows and n columns is called an m×n matrix over the field F.
An m×n matrix is usually written as
[ a11 a12 … a1n ]
[ a21 a22 … a2n ]
[ …             ]
AB = [ Σ_{j=1}^{n} aij bjk ]m×p,
i.e., AB is an m×p matrix whose (i, k)th element is Σ_{j=1}^{n} aij bjk.
If A and B are both square matrices of order n, then both the
products AB and BA exist, but in general AB ≠ BA.
Transpose of a matrix. Definition.
Let A = [aij]m×n. The n×m matrix obtained by interchanging
the rows and columns of A is called the transpose of A.
Thus the transpose of A is the matrix [bij]n×m where bij = aji, i.e., the
(i, j)th element of the transpose is the (j, i)th element of A. If A is an
m×n matrix and B is an n×p matrix, it can be shown that
(AB)ᵀ = BᵀAᵀ. The transpose of a matrix A is also denoted by Aᵀ
or by A′.
Determinant of a square matrix. Let Pn denote the group of
all permutations of degree n on the set {1, 2, …, n}. If θ ∈ Pn,
then θ(i) will denote the image of i under θ. The symbol (−1)^θ
for θ ∈ Pn will mean +1 if θ is an even permutation and −1 if θ
is an odd permutation.
Definition. Let A = [aij]n×n. Then the determinant of A, written
as det A or |A| or |aij|n×n, is the element
det A = Σ_{θ∈Pn} (−1)^θ a1θ(1) a2θ(2) … anθ(n)
of F. We also write |aij|n×n for the determinant of the matrix [aij]n×n.
The following properties of determinants are worth noting:
(i) The determinant of a unit matrix is always equal to 1.
(ii) The determinant of a null matrix is always equal to 0.
(iii) If A and B are square matrices of the same order,
then det (AB) = (det A)(det B).
Cofactors. Definition. Let A = [aij]n×n. We define
Aij = cofactor of aij in A
= (−1)^{i+j} [determinant of the matrix of order n−1
obtained by deleting the row and column
of A passing through aij].
It should be noted that
a1j A1k + a2j A2k + … + anj Ank = 0 if k ≠ j, or = det A if k = j.
Adjoint of a square matrix. Definition. Let A = [aij]n×n.
The n×n matrix which is the transpose of the matrix of cofactors
of A is called the adjoint of A and is denoted by adj A.
It should be remembered that
A (adj A) = (adj A) A = (det A) I, where
I is the unit matrix of order n.
Inverse of a square matrix. Definition. Let A be a square matrix
of order n. If there exists a square matrix B of order n such that
AB = I = BA,
then A is said to be invertible and B is called the inverse of A.
Also we write B = A⁻¹.
The following results should be remembered:
(i) The necessary and sufficient condition for a square matrix
A to be invertible is that det A ≠ 0.
(ii) If A is invertible, then A⁻¹ is unique and
A⁻¹ = (1/det A) (adj A).
where T(αj) = Σ_{i=1}^{m} aij βi, for each j = 1, 2, …, n. …(1)
Solution. We have
D(α1) = D(x⁰) = 0 = 0x⁰ + 0x + 0x² + 0x³
= 0α1 + 0α2 + 0α3 + 0α4
D(α2) = D(x) = x⁰ = 1x⁰ + 0x + 0x² + 0x³
= 1α1 + 0α2 + 0α3 + 0α4
D(α3) = D(x²) = 2x = 0x⁰ + 2x + 0x² + 0x³
= 0α1 + 2α2 + 0α3 + 0α4
D(α4) = D(x³) = 3x² = 0x⁰ + 0x + 3x² + 0x³
= 0α1 + 0α2 + 3α3 + 0α4.
∴ the matrix of D relative to the ordered basis B is
[D] = [ 0 1 0 0 ]
      [ 0 0 2 0 ]
      [ 0 0 0 3 ]
      [ 0 0 0 0 ]
Theorem 1. Let U be an n-dimensional vector space over the
field F and let V be an m-dimensional vector space over F. Let B and
B′ be ordered bases for U and V respectively. Then corresponding to
every matrix [aij]m×n of mn scalars belonging to F there corresponds
a unique linear transformation T from U into V such that
[T; B; B′] = [aij]m×n. (I.A.S. 1988)
Proof. Let B = {α1, α2, …, αn} and B′ = {β1, β2, …, βm}.
that the vector Σ_{i=1}^{m} aij βi has been obtained with the help of the
T(α) = T( Σ_{j=1}^{n} xj αj )
= Σ_{j=1}^{n} xj T(αj) [∵ T is a linear transformation]
= Σ_{j=1}^{n} xj Σ_{i=1}^{m} aij βi [From (1)]
= Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij xj ) βi. …(2)
The co-ordinate matrix of T(α) with respect to the ordered basis
B′ is an m×1 matrix. From (2), we see that the iᵗʰ entry of this
column matrix [T(α)]B′
= Σ_{j=1}^{n} aij xj.
(i) We have
(T + S)(αj) = T(αj) + S(αj), j = 1, 2, …, n
= Σ_{i=1}^{m} aij βi + Σ_{i=1}^{m} bij βi = Σ_{i=1}^{m} (aij + bij) βi.
and (ST)(αj) = Σ_{k=1}^{p} ckj γk, j = 1, 2, …, n. …(3)
We have (ST)(αj) = S[T(αj)], j = 1, 2, …, n
= S( Σ_{i=1}^{m} aij βi ) [From (1)]
= Σ_{i=1}^{m} aij S(βi) [∵ S is linear]
= Σ_{i=1}^{m} aij Σ_{k=1}^{p} dki γk
= Σ_{k=1}^{p} ( Σ_{i=1}^{m} dki aij ) γk. …(4)
∴ [ckj]p×n = [ Σ_{i=1}^{m} dki aij ]p×n
= [dki]p×m [aij]m×n, by def. of the product of two
matrices.
Thus C = DA.
Note. If U = V = W, then the statement and proof of the
above theorem will be as follows:
Let V be an n-dimensional vector space over the field F; let T
and S be linear transformations of V. Further let B be an ordered
basis for V. If A is the matrix of T relative to B, and D is the
matrix of S relative to B, then the matrix of the composite
transformation ST relative to B is the product matrix
C = DA, i.e., [ST]B = [S]B [T]B. (Banaras 1972)
Proof. Let B = {α1, α2, …, αn}.
Let A = [aij]n×n, D = [dkj]n×n, C = [ckj]n×n. Then
T(αj) = Σ_{i=1}^{n} aij αi, j = 1, 2, …, n, …(1)
and
(ST)(αj) = Σ_{k=1}^{n} ckj αk, j = 1, 2, …, n. …(3)
We have [T; B; B′] = [cij]m×n
⇒ ψ(T) = [cij]m×n.
∴ ψ is onto.
ψ is a linear transformation:
If a, b ∈ F, then
ψ(aT1 + bT2) = [aT1 + bT2; B; B′] [by def. of ψ]
= [aT1; B; B′] + [bT2; B; B′] [by theorem 4]
= a [T1; B; B′] + b [T2; B; B′] [by theorem 4]
= aψ(T1) + bψ(T2), by def. of ψ.
∴ ψ is a linear transformation.
Hence ψ is an isomorphism from L(U, V) onto M.
Note. It should be noted that in the above theorem if U = V,
then ψ also preserves products and I, i.e.,
ψ(T1T2) = ψ(T1) ψ(T2)
and ψ(I) = I, the unit matrix.
Theorem 7. Let T be a linear operator on an n-dimensional
vector space V and let B be an ordered basis for V. Prove that T is
invertible iff [T]B is an invertible matrix. Also, if T is invertible, then
[T⁻¹]B = ([T]B)⁻¹.
We have CA = I = AC
⇒ [S]B [T]B = I = [T]B [S]B
⇒ [ST]B = [I]B = [TS]B
⇒ ST = I = TS
⇒ T is invertible.
Change of basis. Suppose V is an n-dimensional vector space
over the field F. Let B and B′ be two ordered bases for V. If α is
any vector in V, then we are now interested to know what is the
relation between its coordinates with respect to B and its
coordinates with respect to B′.
Theorem 8. Let V(F) be an n-dimensional vector space and let
B and B′ be two ordered bases for V. Then there is a unique,
necessarily invertible, n×n matrix A with entries in F such that
(1) [α]B = A [α]B′
(2) [α]B′ = A⁻¹ [α]B
for every vector α in V. (Meerut 1984P, 93P)
Proof. Let B = {α1, α2, …, αn} and B′ = {β1, β2, …, βn}.
Then there exists a unique linear transformation T from V
into V such that
T(αj) = βj, j = 1, 2, …, n. …(1)
α = y1β1 + y2β2 + … = Σ_{j=1}^{n} yj βj
= Σ_{i=1}^{n} ( Σ_{j=1}^{n} aij yj ) αi.
Also α = Σ_{i=1}^{n} xi αi.
Similarity.
Similarity of matrices. Definition. Let A and B be square
matrices of order n over the field F. Then B is said to be similar
to A if there exists an n×n invertible square matrix C with elements
in F such that B = C⁻¹AC. (Meerut 1976)
Theorem 10. The relation of similarity is an equivalence relation
in the set of all n×n matrices over the field F.
(Meerut 1969, 76; Kanpur 81)
Proof. If A and B are two n×n matrices over the field F, then
B is said to be similar to A if there exists an n×n invertible matrix
C over F such that B = C⁻¹AC.
Reflexive. Let A be any n×n matrix over F. We can write
A = I⁻¹AI, where I is the n×n unit matrix over F.
∴ A is similar to A.
Symmetric. Let B be similar to A. Then B = P⁻¹AP, where P
is invertible
⇒ PBP⁻¹ = A
⇒ A = (P⁻¹)⁻¹ BP⁻¹
[∵ P is invertible means P⁻¹ is invertible and (P⁻¹)⁻¹ = P]
⇒ A is similar to B.
Transitive. Let A be similar to B and B be similar to C. Then
A = P⁻¹BP
and B = Q⁻¹CQ,
where P and Q are invertible n×n matrices over F.
We have A = P⁻¹ (Q⁻¹CQ) P
= (P⁻¹Q⁻¹) C (QP)
= (QP)⁻¹ C (QP)
[∵ P and Q are invertible means QP is
invertible and (QP)⁻¹ = P⁻¹Q⁻¹]
∴ A is similar to C.
Hence similarity is an equivalence relation on the set of n×n
matrices over the field F.
Theorem 11. Similar matrices have the same determinant.
Proof. Let B be similar to A. Then there exists an invertible
matrix C such that
B = C⁻¹AC
⇒ det B = det (C⁻¹AC) ⇒ det B = (det C⁻¹)(det A)(det C)
⇒ det B = (det C⁻¹)(det C)(det A) ⇒ det B = (det C⁻¹C)(det A)
⇒ det B = (det I)(det A) ⇒ det B = 1 (det A) ⇒ det B = det A.
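A numerical spot-check of the theorem with numpy (the random matrices below are our own choice; a generic random matrix is invertible, which the sketch verifies before using it):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
assert abs(np.linalg.det(C)) > 1e-12   # C is invertible

B = np.linalg.inv(C) @ A @ C           # B = C^{-1} A C is similar to A
```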
Similarity of linear transformations. Definition. Let A and B
be linear transformations on a vector space V(F). Then B is said
to be similar to A if there exists an invertible linear transformation
C on V such that B = CAC⁻¹.
Theorem 12. The relation of similarity is an equivalence
relation in the set of all linear transformations on a vector space V(F).
Proof. If A and B are two linear transformations on the
vector space V(F), then B is said to be similar to A if there exists
an invertible linear transformation C on V such that
B = CAC⁻¹.
Reflexive. Let A be any linear transformation on V. We can
write A = IAI⁻¹,
where I is the identity transformation on V.
∴ A is similar to A, because I is certainly invertible.
Symmetric. Let A be similar to B. Then there exists an
invertible linear transformation P on V such that
A = PBP⁻¹
⇒ P⁻¹AP = P⁻¹ (PBP⁻¹) P
⇒ P⁻¹AP = B ⇒ B = P⁻¹AP
⇒ B = P⁻¹A (P⁻¹)⁻¹ ⇒ B is similar to A.
Transitive. Let A be similar to B and B be similar to C.
Then A = PBP⁻¹,
and B = QCQ⁻¹,
where P and Q are invertible linear transformations on V.
We have A = P (QCQ⁻¹) P⁻¹
= (PQ) C (Q⁻¹P⁻¹) = (PQ) C (PQ)⁻¹.
∴ A is similar to C.
Hence similarity is an equivalence relation on the set of all
linear transformations on V(F).
Theorem 13. Let T be a linear operator on an n-dimensional
vector space V(F) and let B and B′ be two ordered bases for V. Then
the matrix of T relative to B′ is similar to the matrix of T relative
to B. (Andhra 1992)
Proof. Let B = {α1, α2, …, αn} and B′ = {β1, β2, …, βn}.
Let A = [aij]n×n be the matrix of T relative to B
and C = [cij]n×n be the matrix of T relative to B′. Then
= Σ_{k=1}^{n} pkj Σ_{i=1}^{n} aik αi [From (1), on replacing j by k]
= Σ_{i=1}^{n} ( Σ_{k=1}^{n} aik pkj ) αi. …(5)
= Σ_{k=1}^{n} ckj Σ_{i=1}^{n} pik αi [From (4), on replacing j by k]
= Σ_{i=1}^{n} ( Σ_{k=1}^{n} pik ckj ) αi. …(6)
[∵ S is linear]
= S[T1(αj)] [From (1)]
= (ST1)(αj). …(5)
From (4) and (5), we have
(T2S)(αj) = (ST1)(αj), j = 1, 2, …, n.
Since T2S and ST1 agree on a basis for V, therefore we have
T2S = ST1.
tr (λA) = Σ_{i=1}^{n} λaii = λ Σ_{i=1}^{n} aii = λ tr A.
(2) We have A + B = [aij + bij]n×n.
∴ tr (A + B) = Σ_{i=1}^{n} (aii + bii) = Σ_{i=1}^{n} aii + Σ_{i=1}^{n} bii
= tr A + tr B.
(3) We have AB = [cij]n×n where cij = Σ_{k=1}^{n} aik bkj.
Also BA = [dij]n×n where dij = Σ_{k=1}^{n} bik akj.
= (matrix display garbled in the source) [Note that this result
tallies with that of Ex. 2].
Example 5. Let T be the linear operator on R² defined by
T(x, y) = (4x − 2y, 2x + y).
Compute the matrix of T relative to the basis {α1, α2} where
α1 = (1, 1), α2 = (−1, 0). (Meerut 1976, 93P)
Solution. By def. of T, we have
T(α1) = T(1, 1) = (2, 3).
Now our aim is to express (2, 3) as a linear combination of
the vectors in the basis {α1, α2}.
Let (a, b) = xα1 + yα2 = x (1, 1) + y (−1, 0) = (x − y, x).
Then x − y = a, x = b.
Solving these equations, we get
x = b, y = b − a. …(1)
Putting a = 2, b = 3 in (1), we get x = 3, y = 1.
∴ T(α1) = 3α1 + 1α2. …(2)
Again T(α2) = T(−1, 0) = (−4, −2). Putting a = −4, b = −2
in (1), we get x = −2, y = 2.
∴ T(α2) = −2α1 + 2α2. …(3)
From the relations (2) and (3), we see that the matrix of T
relative to the basis {α1, α2} is
[ 3 −2 ]
[ 1  2 ]
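The same matrix can be obtained numerically as P⁻¹AP, where A represents T in the standard basis and the columns of P are the basis vectors (a numpy sketch of the change-of-basis formula, not the book's step-by-step method):

```python
import numpy as np

A = np.array([[4, -2],
              [2, 1]], dtype=float)   # T in the standard basis
P = np.array([[1, -1],
              [1, 0]], dtype=float)   # columns: alpha1 = (1,1), alpha2 = (-1,0)

M = np.linalg.inv(P) @ A @ P          # matrix of T relative to {alpha1, alpha2}
```

`M` comes out to [[3, −2], [1, 2]], agreeing with the example.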
Example 6. Let T be a linear operator on R² defined by
T(x, y) = (2y, 3x − y).
Find the matrix representation of T relative to the basis {(1, 3),
(2, 5)}. (Meerut 1980, 85, 89; S.V.U. Tirupati 93P)
Solution. Let α1 = (1, 3) and α2 = (2, 5). By def. of T, we have
T(α1) = T(1, 3) = (2·3, 3·1 − 3) = (6, 0)
and T(α2) = T(2, 5) = (2·5, 3·2 − 5) = (10, 1).
Now our aim is to express the vectors T(α1) and T(α2) as linear
combinations of the vectors in the basis {α1, α2}.
Let (a, b) = pα1 + qα2 = p (1, 3) + q (2, 5) = (p + 2q, 3p + 5q).
Then p + 2q = a, 3p + 5q = b.
Solving these equations, we get
p = −5a + 2b, q = 3a − b. …(1)
[  0 1 ] [ 4 −2 ] [ 1 −1 ]
[ −1 1 ] [ 2  1 ] [ 1  0 ]
= [  2 1 ] [ 1 −1 ]  =  [ 3 −2 ]
  [ −2 3 ] [ 1  0 ]     [ 1  2 ]
Example 8. Let T be the linear operator on R³ defined by
T(x1, x2, x3) = (x1 + x2 + x3, −x1 − x2 − 4x3, 2x1 − x3).
What is the matrix of T in the ordered basis {α1, α2, α3} where
α1 = (1, 1, 1), α2 = (0, 1, 1), α3 = (1, 0, 1)?
dim R³ = 3, therefore the set {α1, α2, α3} containing three linearly
independent vectors forms a basis for R³.
Now let B = {e1, e2, e3} be the standard ordered basis for R³.
Then e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). Let B′ = {α1, α2, α3}.
We have α1 = (1, 0, −1) = 1e1 + 0e2 − 1e3
α2 = (1, 2, 1) = 1e1 + 2e2 + 1e3
α3 = (0, −3, 2) = 0e1 − 3e2 + 2e3.
If P is the transition matrix from the basis B to the basis B′,
then
P = [  1  1  0 ]
    [  0  2 −3 ]
    [ −1  1  2 ]
Let us find the matrix P⁻¹. For this let us first find adj P.
The cofactors of the elements of the first row of P are
| 2 −3; 1 2 |, −| 0 −3; −1 2 |, | 0 2; −1 1 |, i.e., 7, 3, 2.
The cofactors of the elements of the second row of P are
−| 1 0; 1 2 |, | 1 0; −1 2 |, −| 1 1; −1 1 |, i.e., −2, 2, −2.
The cofactors of the elements of the third row of P are
| 1 0; 2 −3 |, −| 1 0; 0 −3 |, | 1 1; 0 2 |, i.e., −3, 3, 2.
∴ adj P = transpose of the matrix
[  7  3  2 ]
[ −2  2 −2 ]
[ −3  3  2 ]
= [  7 −2 −3 ]
  [  3  2  3 ]
  [  2 −2  2 ]
Also det P = 1·7 + 1·3 + 0·2 = 10.
∴ P⁻¹ = (1/det P) adj P = (1/10) [ 7 −2 −3; 3 2 3; 2 −2 2 ].
Now e1 = 1e1 + 0e2 + 0e3.
∴ the coordinate matrix of e1 relative to the basis B is
[ 1 ]
[ 0 ]
[ 0 ]
The co-ordinate matrix of e1 relative to the basis B′ is
[e1]B′ = P⁻¹ [1; 0; 0] = (1/10) [7; 3; 2] = [7/10; 3/10; 2/10].
∴ e1 = (7/10)α1 + (3/10)α2 + (2/10)α3.
Also [e2]B = [0; 1; 0] and [e3]B = [0; 0; 1].
∴ [e2]B′ = P⁻¹ [e2]B and [e3]B′ = P⁻¹ [e3]B.
Thus [e2]B′ = (1/10) [−2; 2; −2] and [e3]B′ = (1/10) [−3; 3; 2].
∴ e2 = −(2/10)α1 + (2/10)α2 − (2/10)α3
and e3 = −(3/10)α1 + (3/10)α2 + (2/10)α3.
Example 12. Let A be an m×n matrix with real entries. Prove that
A = 0 (null matrix) if and only if trace (AᵀA) = 0. (Meerut 1981)
Solution. Let A = [aij]m×n. Then Aᵀ = [bij]n×m,
where bij = aji.
Now AᵀA is a matrix of the type n×n.
Let AᵀA = [cij]n×n. Then
cii = the sum of the products of the corresponding elements of the
iᵗʰ row of Aᵀ and the iᵗʰ column of A
= bi1 a1i + bi2 a2i + … + bim ami
= a1i a1i + a2i a2i + … + ami ami [∵ bij = aji]
Now trace (AᵀA) = Σ_{i=1}^{n} cii
is similar to B⁻¹.
Example 15. If A and B are linear transformations on the same
vector space and if at least one of them is invertible, then AB and BA
are similar.
Solution. Let A be invertible.
We have A (BA) A⁻¹ = ABAA⁻¹ = ABI = AB.
Thus AB = A (BA) A⁻¹.
∴ AB is similar to BA.
Now let B be invertible.
We have B (AB) B⁻¹ = BABB⁻¹ = BAI = BA.
∴ BA is similar to AB.
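The identity AB = A (BA) A⁻¹ used above is easy to check numerically (the random matrices are our own choice; a generic random matrix is invertible, which the sketch verifies):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
assert abs(np.linalg.det(A)) > 1e-12   # A is invertible

lhs = A @ B
rhs = A @ (B @ A) @ np.linalg.inv(A)   # A (BA) A^{-1}
```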
Example 16. Let T and S be linear operators on the finite
dimensional vector space V(F); prove that
(i) det (TS) = (det T)(det S);
(ii) T is invertible iff det T ≠ 0.
Solution. (i) Let B be any ordered basis for V.
We have [TS]B = [T]B [S]B.
∴ det [TS]B = det ([T]B [S]B) = (det [T]B)(det [S]B)
[∵ the determinant of the product of two matrices is equal to the
product of their determinants].
Now the determinant of a linear transformation is equal to the
determinant of its matrix with respect to any ordered basis.
∴ det (TS) = (det T)(det S).
= a f(α) + b f(β).
∴ f is a linear functional on Vn(F).
Example 2. Now we shall give a very important example of
a linear functional.
We shall prpve that the trace function is a linear functional on
the space of all nxn matrices over a field F. (Meerut 1977)
Let R be a positive integer and Fa field. Let K(F)bethe
vector space of all nXn matrices over F. lf^=[a,;]„xBeK, then
the trace of A is the scalar
tr 4 fl22+●●●+^Bfl= On*
If a,b ^ Fy we have
ft (fla+i>j3,*=// [a (fliai4-...+fl;i«fi)+^ (^i«i+..*+^n««)]
=a/f [(aOi *I+■●●● +
=>aar\-bbi=^afi (a)+6/} (^).
Hence ft is a linear functional on V,
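A numerical spot-check of the linearity of the trace (the sample matrices and scalars are my own):

```python
import numpy as np

# tr(aA + bB) = a tr(A) + b tr(B): the trace is a linear functional on
# the space of n x n matrices.
a, b = 2.0, -3.0
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
lhs = np.trace(a * A + b * B)
rhs = a * np.trace(A) + b * np.trace(B)
```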
Some particular linear functionals.
1. Zero functional. Let V be a vector space over the field F. The function f from V into F defined by
f(α) = 0 (zero of F) ∀ α ∈ V
is a linear functional on V.
Proof. Let α, β ∈ V and a, b ∈ F. We have
f(aα + bβ) = 0 (by def. of f)
= a0 + b0 = a f(α) + b f(β).
∴ f is a linear functional on V. It is called the zero functional and we shall in future denote it by 0̂.
2. Negative of a linear functional.
Let V be a vector space over the field F. Let f be a linear functional on V. The correspondence −f defined by
(−f)(α) = −[f(α)] ∀ α ∈ V
is a linear functional on V.
Proof. Since f(α) ∈ F ⇒ −f(α) ∈ F, therefore −f is a function from V into F.
Let a, b ∈ F and α, β ∈ V. Then
(−f)(aα + bβ) = −[f(aα + bβ)] [by def. of −f]
= −[a f(α) + b f(β)] [∵ f is a linear functional]
= a[−f(α)] + b[−f(β)]
= a[(−f)(α)] + b[(−f)(β)].
∴ −f is a linear functional on V.
Properties of a linear functional.
Theorem. Let f be a linear functional on a vector space V(F). Then
(i) f(0) = 0, where 0 on the left hand side is the zero vector of V and 0 on the right hand side is the zero element of F;
(ii) f(−α) = −f(α) ∀ α ∈ V.
Proof. Let α ∈ V. Then f(α) ∈ F.
We have f(α) + 0 = f(α) [∵ 0 is the zero element of F]
= f(α + 0) [∵ 0 is the zero vector of V]
= f(α) + f(0) [∵ f is a linear functional]
Now F is a field. Therefore
f(α) + 0 = f(α) + f(0)
Let c₁f₁ + c₂f₂ + ... + c_nf_n = 0̂.
Then for each j = 1, 2, ..., n,
Σ_{i=1}^{n} cᵢ fᵢ(αⱼ) = 0
⇒ Σ_{i=1}^{n} cᵢ δᵢⱼ = 0 [from (1)]
⇒ cⱼ = 0, j = 1, 2, ..., n.
∴ f₁, ..., f_n are linearly independent. Further, each f in V′ can be written as
f = Σ_{i=1}^{n} f(αᵢ) fᵢ,
and for each vector α in V we have
α = Σ_{i=1}^{n} fᵢ(α) αᵢ. (Meerut 1972, 79, 85)
Now let α be any vector in V. Let
α = x₁α₁ + ... + x_nα_n. ...(2)
Then fⱼ(α) = xⱼ, since from (2), fⱼ(α) = Σ_{i=1}^{n} xᵢ fⱼ(αᵢ) = xⱼ.
⇒ f(α) − f(β) ≠ 0
Solved Examples
Example 1. Find the dual basis of the basis set
B = {(1, −1, 3), (0, 1, −1), (0, 3, −2)}
for V₃(R).
Solution. Let α₁ = (1, −1, 3), α₂ = (0, 1, −1), α₃ = (0, 3, −2).
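The computation can be finished numerically. A standard recipe (a sketch, not the book's worked steps): write α₁, α₂, α₃ as the columns of a matrix M; the coefficient rows of the dual functionals fᵢ are then the rows of M⁻¹, since M⁻¹M = I encodes fᵢ(αⱼ) = δᵢⱼ.

```python
import numpy as np

# Columns of M are alpha_1, alpha_2, alpha_3.
M = np.array([[ 1.0,  0.0,  0.0],
              [-1.0,  1.0,  3.0],
              [ 3.0, -1.0, -2.0]])
F = np.linalg.inv(M)   # row i gives f_i(a, b, c) = F[i] @ (a, b, c)
```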
Exercises
1. Prove that every finite dimensional vector space V is isomorphic to its second conjugate space V** under an isomorphism which is independent of the choice of a basis in V. (Meerut 1973)
2. Find the dual basis of the basis set
B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
for V₃(R).
Ans. B′ = {f₁, f₂, f₃}
where f₁(a, b, c) = a, f₂(a, b, c) = b, f₃(a, b, c) = c.
3. Find the dual basis of the basis set
B = {(1, −2, 3), (1, −1, 1), (2, −4, 7)} of V₃(R).
Ans. B′ = {f₁, f₂, f₃}
where f₁(a, b, c) = −3a − 5b − 2c,
f₂(a, b, c) = 2a + b, f₃(a, b, c) = a + 2b + c.
§ 18. Annihilators.
Definition. If V is a vector space over the field F and S is a subset of V, the annihilator of S is the set S⁰ of all linear functionals f on V such that
f(α) = 0 ∀ α ∈ S.
(Meerut 1970, 71, 76, 92; Marathwada 71; S.V.U. Tirupati 90)
Sometimes A(S) is also used to denote the annihilator of S.
Thus S⁰ = {f ∈ V′ : f(α) = 0 ∀ α ∈ S}.
It should be noted that we have defined the annihilator of S, which is simply a subset of V; S need not be a subspace of V.
If S = the zero subspace of V, then S⁰ = V′. (Meerut 1976)
If S = V, then S⁰ = the zero subspace of V′. (Meerut 1976)
If V is finite dimensional and S contains a non-zero vector, then S⁰ ≠ V′. If 0 ≠ α ∈ S, then there is a linear functional f on V such that f(α) ≠ 0. Thus there is f ∈ V′ such that f ∉ S⁰.
f = Σ_{i=1}^{n} xᵢ fᵢ. ...(1)
Now f ∈ W⁰ ⇒ f(α) = 0 ∀ α ∈ W
⇒ f(αⱼ) = 0 for each j = 1, ..., m [∵ α₁, ..., α_m are in W]
⇒ Σ_{i=1}^{n} xᵢ fᵢ(αⱼ) = 0 ⇒ Σ_{i=1}^{n} xᵢ δᵢⱼ = 0
m \
We have g (a)=g ^cj»j [From (3)]
y-> 7
m
^ 2 Cjg (ay) ['/ g is linear functional]
y -i
m
-2:cy («y) [From (2)]
y -i
m n m n
^.2cj .^ ykfk{<t})=^2cj 2 ynhu
y-i At-w+i
208 Linear Algebra
m
S cj0 [V S*y=0 if k^J which is so for each
y-i- kssm-\-l»●●●» n and for eachy= 1, i.., m]
=0.
Thus g (a)«0 V a e FT. Therefore^ e fFo.
Thu8s^€£(5) =>« e JFo.
L{S)Q fVO,
Hence W^=:L (S) and 5^ is a basis for fF®.
dim i^'®=«—w=dim F—dim fV
or , dim F=dim JF+dim fF®.
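The relation dim V = dim W + dim W⁰ can be illustrated numerically. In the sketch below (coordinates chosen by me, not from the text) a functional is identified with its coefficient vector c, so that f(x) = c·x; f annihilates W exactly when Bc = 0 for a matrix B whose rows span W, so W⁰ corresponds to the null space of B.

```python
import numpy as np

# Rows of B form a basis of a subspace W of R^4, so dim W = 2.
B = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
# Rows of vt beyond the rank span the (right) null space of B, i.e. W0.
_, s, vt = np.linalg.svd(B)
null_mask = np.concatenate([s, np.zeros(vt.shape[0] - len(s))]) < 1e-10
W0_basis = vt[null_mask]   # rows spanning the annihilator W0
```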
Corollary. If V is finite-dimensional and W is a subspace of V, then W′ is isomorphic to V′/W⁰.
Proof. Let dim V = n and dim W = m. W′ is the dual space of W, so dim W′ = dim W = m.
Now dim V′/W⁰ = dim V′ − dim W⁰
= dim V − (dim V − dim W) = dim W = m.
Since dim W′ = dim V′/W⁰, therefore W′ ≅ V′/W⁰.
Annihilator of an annihilator. Let V be a vector space over the field F. If S is any subset of V, then S⁰ is a subspace of V′. By definition of an annihilator, we have
(S⁰)⁰ = S⁰⁰ = {L ∈ V″ : L(f) = 0 ∀ f ∈ S⁰}.
Obviously S⁰⁰ is a subspace of V″. But if V is finite dimensional, then we have identified V″ with V through the natural isomorphism α ↔ L_α. Therefore we may regard S⁰⁰ as a subspace of V. Thus
S⁰⁰ = {α ∈ V : f(α) = 0 ∀ f ∈ S⁰}.
Theorem 3. Let V be a finite dimensional vector space over the field F and let W be a subspace of V. Then W⁰⁰ = W.
(Meerut 1968, 78, 90, 91)
Proof. We have
W⁰ = {f ∈ V′ : f(α) = 0 ∀ α ∈ W} ...(1)
and W⁰⁰ = {α ∈ V : f(α) = 0 ∀ f ∈ W⁰}. ...(2)
Let α ∈ W. Then from (1), f(α) = 0 ∀ f ∈ W⁰, and so from (2), α ∈ W⁰⁰.
∴ α ∈ W ⇒ α ∈ W⁰⁰.
Thus W ⊆ W⁰⁰. Now W is a subspace of V and W⁰⁰ is also a subspace of V. Since W ⊆ W⁰⁰, therefore W is a subspace of W⁰⁰.
Now dim W + dim W⁰ = dim V. [by theorem (2)]
Applying the same theorem for the vector space V′ and its subspace W⁰, we get
Solved Examples
Example 1. If S₁ and S₂ are two subsets of a vector space V such that S₁ ⊆ S₂, then show that S₂⁰ ⊆ S₁⁰.
Solution. Let f ∈ S₂⁰. Then
f(α) = 0 ∀ α ∈ S₂
⇒ f(α) = 0 ∀ α ∈ S₁ [∵ S₁ ⊆ S₂]
⇒ f ∈ S₁⁰.
∴ S₂⁰ ⊆ S₁⁰.
Example 2. Let V be a vector space over the field F. If S is any subset of V, then show that S⁰ = [L(S)]⁰.
Solution. We know that S ⊆ L(S).
∴ [L(S)]⁰ ⊆ S⁰. ...(1)
Now let f ∈ S⁰. Then f(α) = 0 ∀ α ∈ S.
If β is any element of L(S), then β = Σᵢ cᵢαᵢ with each αᵢ ∈ S, and we have
f(β) = Σᵢ cᵢ f(αᵢ) = 0.
∴ f = 0̂.
∴ W₁⁰ ∩ W₂⁰ = {0̂}.
(ii) Now to prove that V′ = W₁⁰ + W₂⁰.
Let f ∈ V′.
If α ∈ V, then α can be uniquely written as
α = α₁ + α₂ where α₁ ∈ W₁, α₂ ∈ W₂.
For each f, let us define two functions f₁ and f₂ from V into F such that
f₁(α) = f₁(α₁ + α₂) = f(α₂) ...(1)
and f₂(α) = f₂(α₁ + α₂) = f(α₁). ...(2)
First we shall show that f₁ is a linear functional on V. Let a, b ∈ F and α = α₁ + α₂, β = β₁ + β₂ ∈ V where α₁, β₁ ∈ W₁ and α₂, β₂ ∈ W₂. Then
f₁(aα + bβ) = f₁[a(α₁ + α₂) + b(β₁ + β₂)]
= f₁[(aα₁ + bβ₁) + (aα₂ + bβ₂)]
= f(aα₂ + bβ₂) [∵ aα₁ + bβ₁ ∈ W₁, aα₂ + bβ₂ ∈ W₂]
= a f(α₂) + b f(β₂) [∵ f is a linear functional]
= a f₁(α) + b f₁(β) [From (1)]
∴ f₁ is a linear functional on V, i.e., f₁ ∈ V′.
Now we shall show that f₁ ∈ W₁⁰.
Let α₁ be any vector in W₁. Then α₁ is also in V. We can write
α₁ = α₁ + 0, where α₁ ∈ W₁, 0 ∈ W₂.
∴ from (1), we have
f₁(α₁) = f₁(α₁ + 0) = f(0) = 0.
Thus f₁(α₁) = 0 ∀ α₁ ∈ W₁.
T(S) = {T(α) ∈ V : α ∈ S}.
Obviously T(S) ⊆ V. We call it the image of S under T.
Invariance. Definition. Let V be a vector space and T a linear operator on V. If W is a subspace of V, we say that W is invariant under T if α ∈ W ⇒ T(α) ∈ W. (Meerut)
Example 1. If T is any linear operator on V, then V is invariant under T. If α ∈ V, then T(α) ∈ V because T is a linear operator on V. Thus V is invariant under T.
The zero subspace of V is also invariant under T. The zero subspace contains only one vector, i.e., 0, and we know that T(0) = 0, which is in the zero subspace.
In other words, in the relation (1) the scalars a_ij are all zero if 1 ≤ j ≤ m and m+1 ≤ i ≤ n.
Therefore the matrix A takes the simple form
A = [ M  C ]
    [ O  D ]
where M is an m×m matrix, C is an m×(n−m) matrix, O is the null matrix of the type (n−m)×m and D is an (n−m)×(n−m) matrix.
From the relation (2) it is obvious that the matrix M is nothing but the matrix of the induced operator T_W on W relative to the ordered basis B₁ for W.
Reducibility. Definition. Let W₁ and W₂ be two subspaces of a vector space V and let T be a linear operator on V. Then T is said to be reduced by the pair (W₁, W₂) if
(i) V = W₁ ⊕ W₂;
(ii) both W₁ and W₂ are invariant under T.
It should be noted that if a subspace W₁ of V is invariant under T, then there are many ways of finding a subspace W₂ of V such that V = W₁ ⊕ W₂, but it is not necessary that some such W₂ will also be invariant under T. In other words, among the collection of all subspaces invariant under T we may not be able to select any two, other than V and the zero subspace, with the property that V is their direct sum.
The definition of reducibility can be extended to more than two subspaces. Thus let W₁, ..., W_k be k subspaces of a vector space V and let T be a linear operator on V. Then T is said to be reduced by (W₁, ..., W_k) if
(i) V is the direct sum of the subspaces W₁, ..., W_k;
(ii) each of the subspaces Wᵢ is invariant under T.
Solved Examples
Example 1. If T is a linear operator on a vector space V and if W is any subspace of V, then T(W) is a subspace of V. Also W is invariant under T iff T(W) ⊆ W.
Solution. We have
α ∈ W ⇒ α ∈ Wᵢ for each i
⇒ T(α) ∈ Wᵢ for each i [∵ each Wᵢ is invariant under T]
⇒ T(α) ∈ ∩ Wᵢ ⇒ T(α) ∈ W.
∴ W is invariant under T.
Example 4. Prove that the subspace spanned by two subspaces, each of which is invariant under some linear operator T, is itself invariant under T. (Meerut 1987)
Solution. Let W₁ and W₂ be two subspaces of a vector space V. Let W be the subspace of V spanned by W₁ ∪ W₂. Then we know that W = W₁ + W₂.
Now it is given that both W₁ and W₂ are invariant under a linear operator T and we are to prove that W is also invariant under T.
Let α ∈ W. Then
α = α₁ + α₂, where α₁ ∈ W₁, α₂ ∈ W₂.
We have T(α) = T(α₁ + α₂)
= T(α₁) + T(α₂) because T is linear.
Now T(α₁) ∈ W₁ since W₁ is invariant under T and α₁ ∈ W₁. Similarly T(α₂) ∈ W₂.
Thus T(α₁) + T(α₂) ∈ W₁ + W₂,
i.e. T(α) = T(α₁) + T(α₂) ∈ W.
Thus α ∈ W ⇒ T(α) ∈ W.
∴ W is invariant under T.
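A numerical version of this example (all data invented for illustration): a subspace spanned by the columns of S is invariant under T exactly when appending the columns of TS does not raise the rank.

```python
import numpy as np

def is_invariant(T, S):
    # span(S) is invariant under T iff every column of T @ S already lies
    # in span(S), i.e. the rank does not grow.
    return np.linalg.matrix_rank(np.hstack([S, T @ S])) == np.linalg.matrix_rank(S)

T = np.diag([1.0, 2.0, 3.0])           # eigen-directions below are invariant
W1 = np.array([[1.0], [0.0], [0.0]])   # span{e1}
W2 = np.array([[0.0], [1.0], [0.0]])   # span{e2}
W = np.hstack([W1, W2])                # W1 + W2
```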
Example 5. Let V be a vector space over the field F, let T be a linear operator on V and let f(t) be a polynomial in the indeterminate t over the field F. If W is the null space of the operator f(T), then W is invariant under T.
Solution. If f(t) is a polynomial in the indeterminate t over the field F, then we know that f(T) is a linear operator on V where T is a linear operator on V.
Now W is the null space of f(T). Therefore
α ∈ W ⇒ f(T)(α) = 0. ...(1)
We are to show that W is invariant under T,
i.e. α ∈ W ⇒ T(α) ∈ W.
Obviously [f(T)] T = T f(T) because t f(t) = f(t) t, and polynomials in T behave like ordinary polynomials.
If a, b ∈ F, we have
Eᵢ(aα + bβ) = Eᵢ[a(α₁ + ... + α_k) + b(β₁ + ... + β_k)]
= Eᵢ[(aα₁ + bβ₁) + ... + (aα_k + bβ_k)]
= aαᵢ + bβᵢ [by def. of Eᵢ]
= aEᵢ(α) + bEᵢ(β).
∴ Eᵢ is a linear transformation on V.
We have Eᵢ²(α) = Eᵢ[Eᵢ(α)] = Eᵢ(αᵢ)
= Eᵢ(0 + ... + αᵢ + 0 + ... + 0), where αᵢ ∈ Wᵢ
= αᵢ [by def. of Eᵢ]
= Eᵢ(α).
Thus Eᵢ²(α) = Eᵢ(α) ∀ α ∈ V.
∴ Eᵢ² = Eᵢ, i.e., Eᵢ is a projection.
Thus there exist k linear operators Eᵢ, 1 ≤ i ≤ k, on V such that Eᵢ² = Eᵢ.
(b) Let i ≠ j.
Let α ∈ V. Then α = α₁ + ... + α_k, where αᵢ ∈ Wᵢ.
We have (Eᵢ Eⱼ)(α) = Eᵢ[Eⱼ(α)]
= Eᵢ(αⱼ) [by def. of Eⱼ]
= 0
[∵ i ≠ j means that in the decomposition of αⱼ as the sum of the vectors of W₁, ..., W_k, the vector belonging to Wᵢ will be 0]
= 0̂(α).
∴ Eᵢ Eⱼ = 0̂ if i ≠ j.
(c) Let α ∈ V. Then α = α₁ + ... + α_k where each αᵢ ∈ Wᵢ. We have
(E₁ + ... + E_k)(α) = E₁(α) + ... + E_k(α)
= α₁ + ... + α_k = α = I(α).
[∵ EᵢEⱼ = 0̂ if i ≠ j]
= Eⱼ(βⱼ) [∵ Eⱼ² = Eⱼ]
= αⱼ.
Hence the expression for α is unique.
∴ V is the direct sum of W₁, ..., W_k.
∴ W₂ is invariant under T.
Hence T is reduced by the pair (W₁, W₂).
Note. The above theorem may also be stated as below:
Let E be a projection on V and let T be a linear operator on V. Prove that both the range and null space of E are invariant under T iff ET = TE. (Meerut 1973)
Solved Examples
Example 1. Let V be the direct sum of its subspaces W₁ and W₂. If E₁ is the projection on W₁ along W₂ and E₂ is the projection on W₂ along W₁, prove that
(i) E₁ + E₂ = I;
(ii) E₁E₂ = 0̂, E₂E₁ = 0̂.
Solution. (i) Let α ∈ V. Then
α = α₁ + α₂ where α₁ ∈ W₁, α₂ ∈ W₂.
Since E₁ is the projection on W₁ along W₂, therefore E₁(α) = α₁.
Also E₂ is the projection on W₂ along W₁. Therefore E₂(α) = α₂.
We have (E₁ + E₂)(α) = E₁(α) + E₂(α)
= α₁ + α₂ = α = I(α).
∴ E₁ + E₂ = I.
(ii) We have E₁E₂ = E₁(I − E₁) [∵ E₁ + E₂ = I ⇒ E₂ = I − E₁]
= E₁I − E₁²
= E₁ − E₁ [∵ E₁ is a projection ⇒ E₁² = E₁]
= 0̂.
Similarly E₂E₁ = E₂(I − E₂) = E₂I − E₂²
= E₂ − E₂ = 0̂.
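A matrix sketch of this example in R², with W₁ = span{(1, 0)} and W₂ = span{(1, 1)} chosen by me: in the basis formed by these two vectors, E₁ and E₂ are diag(1, 0) and diag(0, 1), and conjugating by the basis matrix P expresses them in standard coordinates.

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns: a basis vector of W1, then of W2
Pinv = np.linalg.inv(P)
E1 = P @ np.diag([1.0, 0.0]) @ Pinv   # projection on W1 along W2
E2 = P @ np.diag([0.0, 1.0]) @ Pinv   # projection on W2 along W1
I = np.eye(2)
```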
Example 2. Let E₁, ..., E_k be linear operators on a vector space V such that E₁ + ... + E_k = I. Prove that if
EᵢEⱼ = 0̂ for i ≠ j, then Eᵢ² = Eᵢ for each i.
Solution. We have
Eᵢ² = EᵢEᵢ = Eᵢ(I − E₁ − E₂ − ... − Eᵢ₋₁ − Eᵢ₊₁ − ... − E_k)
= EᵢI − EᵢE₁ − EᵢE₂ − ... − EᵢEᵢ₋₁ − EᵢEᵢ₊₁ − ... − EᵢE_k
= Eᵢ. [∵ EᵢEⱼ = 0̂ for i ≠ j]
Example 3. Let E be an idempotent linear operator on a vector space V, i.e., E² = E. If W₁ is the range of E and W₂ is the null space of E, show that
(i) α is in W₁ iff E(α) = α;
(ii) V is the direct sum of W₁ and W₂;
(iii) E is the projection on W₁ along W₂. (Meerut 1985)
(E₁ + E₂ − E₁E₂)²
= E₁² + E₁E₂ − E₁²E₂ + E₂E₁ + E₂² − E₂E₁E₂ − E₁E₂E₁ − E₁E₂² + E₁E₂E₁E₂
= E₁ + E₁E₂ − E₁E₂ + E₁E₂ + E₂ − E₁E₂ − E₁E₂ − E₁E₂ + E₁E₂ [∵ E₁² = E₁, E₂² = E₂ and E₁E₂ = E₂E₁]
= E₁ + E₂ − E₁E₂.
∴ E₁ + E₂ − E₁E₂ is idempotent and therefore is a projection.
Example 8. Two projections E and F have the same range iff EF = F and FE = E.
Solution. Let E and F be two projections having W₁ and W₂ as their ranges respectively. Also let EF = F and FE = E. Then to prove that W₁ = W₂.
Let α ∈ W₁. Then
E(α) = α [∵ W₁ is the range of E]
⇒ F[E(α)] = F(α)
⇒ (FE)(α) = F(α)
⇒ E(α) = F(α) [∵ FE = E]
⇒ α = F(α)
⇒ α ∈ W₂
[∵ F(α) = α ⇒ α ∈ the range of F, i.e., α ∈ W₂]
∴ W₁ ⊆ W₂.
Now let β ∈ W₂. Then
F(β) = β
⇒ E[F(β)] = E(β)
⇒ (EF)(β) = E(β)
⇒ F(β) = E(β) [∵ EF = F]
⇒ β = E(β)
⇒ β ∈ W₁
[∵ W₁ is the range of E]
∴ W₂ ⊆ W₁.
Hence W₁ = W₂.
Conversely, suppose E and F are two projections having the same range. Then to prove that EF = F and FE = E.
Let α ∈ V. We have
(EF)(α) = E[F(α)]
= F(α)
[∵ F(α) ∈ the range of F ⇒ F(α) ∈ the range of E]
∴ EF = F.
Also (FE)(α) = F[E(α)]
= E(α)
[∵ E(α) ∈ the range of E ⇒ E(α) ∈ the range of F].
∴ FE = E.
[ I  O ]
[ O  O ]
where I is the unit matrix of order m and the O's are null matrices of suitable sizes.
Exercises
The matrix C of T′ relative to B₁′, B′ will be of the type n×m.
If C = [c_ji]_{n×m}, then by definition
= Σ_{k=1}^{m} a_kj δ_ik [∵ gᵢ ∈ B₁′, which is the dual basis of B₁]
= a_ij [on summing with respect to k and remembering that δ_ik = 1 when k = i and δ_ik = 0 when k ≠ i]
(i) 0̂′ = 0̂;
(ii) I′ = I;
(iii) (T₁ + T₂)′ = T₁′ + T₂′; (Meerut 1975, 90)
(iv) (T₁T₂)′ = T₂′T₁′; (Meerut 1975, 90; Allahabad 77)
(v) (aT)′ = aT′ where a ∈ F; (Meerut 1975)
(vi) (T⁻¹)′ = (T′)⁻¹ if T is invertible;
(vii) (T′)′ = T″ = T if V is finite-dimensional.
Proof. (i) If 0̂ is the zero transformation on V, then by the definition of the adjoint of a linear transformation, we have
[α, 0̂′g] = [0̂α, g] for every g in V′ and α in V
= [0, g] for every g in V′ [∵ 0̂(α) = 0 ∀ α ∈ V]
= 0 [∵ g(0) = 0]
= [α, 0̂] ∀ α ∈ V [Here 0̂ ∈ V′ and 0̂(α) = 0]
= [α, 0̂g] ∀ g ∈ V′ and α ∈ V [Here 0̂ is the zero transformation on V′].
Thus we have 0̂′g = 0̂g ∀ g ∈ V′, i.e., 0̂′ = 0̂.
= [α, (T₁′ + T₂′)g].
Thus we have
[α, (T₁ + T₂)′g] = [α, (T₁′ + T₂′)g] for every g in V′ and α in V.
∴ (T₁ + T₂)′g = (T₁′ + T₂′)g ∀ g ∈ V′.
∴ (T₁ + T₂)′ = T₁′ + T₂′.
(iv) If T₁, T₂ are linear operators on V, then T₁T₂ is also a linear operator on V. By the definition of adjoint, we have
[α, (T₁T₂)′g] = [(T₁T₂)α, g] for every g in V′ and α in V
= [T₁(T₂α), g] [by def. of product of linear transformations]
= [T₂α, T₁′g] [by def. of adjoint]
= [α, T₂′T₁′g] [by def. of adjoint].
Thus we have
[α, (T₁T₂)′g] = [α, T₂′T₁′g] for every g in V′ and α in V.
∴ (T₁T₂)′ = T₂′T₁′.
Note. This is called the reversal law for the adjoint of the product of two linear transformations.
(v) If T is a linear operator on V and a ∈ F, then aT is also a linear operator on V. By the definition of the adjoint, we have
[α, (aT)′g] = [(aT)α, g] for every g in V′ and α in V
= [a(Tα), g] [by def. of scalar multiplication of a linear transformation]
= a[Tα, g] [∵ g is linear]
= a[α, T′g] [by def. of adjoint]
= [α, a(T′g)] [by def. of scalar multiplication in V′. Note that T′g ∈ V′]
= [α, (aT′)g] [by def. of scalar multiplication of T′ by a].
∴ (aT)′ = aT′.
(vi) Suppose T is an invertible linear operator on V. If T⁻¹ is the inverse of T, we have
T⁻¹T = I = TT⁻¹
⇒ (T⁻¹T)′ = I′ = (TT⁻¹)′
⇒ T′(T⁻¹)′ = I = (T⁻¹)′T′ [using results (ii) and (iv)].
∴ T′ is invertible and
(T′)⁻¹ = (T⁻¹)′.
(vii) V is a finite dimensional vector space, T is a linear operator on V, T′ is a linear operator on V′ and (T′)′ or T″ is a linear operator on V″. We have identified V″ with V through
= [α, 0̂]
= 0.
∴ f ∈ W₁⁰ and thus N ⊆ W₁⁰.
Hence N = W₁⁰.
This completes the proof of the theorem.
Theorem 2. If W₁ is invariant under T, then W₁⁰ is invariant
[T − cI]_B = [T]_B − c[I]_B
= A − cI, where I is the unit matrix of order n.
[Note that [I]_B = I.]
We have det (T − cI) = det [T − cI]_B
= det (A − cI).
Therefore c is a characteristic value of T iff det (A − cI) = 0.
This enables us to make the following definition.
Characteristic values of a matrix. Definition.
Let A = [a_ij]_{n×n} be a square matrix of order n over the field F. An element c in F is called a characteristic value of A if
det (A − cI) = 0, where I is the unit matrix of order n.
Now suppose T is a linear operator on an n-dimensional vector space V and A is the matrix of T with respect to any ordered basis B. Then c is a characteristic value of T iff c is a characteristic value of the matrix A. Therefore our definition of characteristic values of a matrix is sensible.
Characteristic equation of a matrix. Definition.
Let A be a square matrix of order n over the field F. Consider the matrix A − xI. The elements of this matrix are polynomials in x of degree at most 1. If we evaluate det (A − xI), then it will be a polynomial in x of degree n. The coefficient of xⁿ in this polynomial will be (−1)ⁿ. Let us denote this polynomial by f(x).
Then f(x) = det (A − xI) is called the characteristic polynomial of the matrix A. The equation f(x) = 0 is called the characteristic equation of the matrix A. Now c is a characteristic value of the matrix A iff det (A − cI) = 0, i.e. iff f(c) = 0, i.e. iff c is a root of the characteristic equation of A. Thus in order to find the characteristic values of a matrix we should first obtain its characteristic equation and then we should find the roots of this equation.
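In practice the two steps (form the characteristic polynomial, then find its roots) can be carried out with NumPy; note that np.poly returns the coefficients of det(xI − A), which is (−1)ⁿ det(A − xI) and so has the same roots:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
coeffs = np.poly(A)        # coefficients of det(xI - A) = x^2 - 4x + 3
roots = np.roots(coeffs)   # the characteristic values of A
```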
Characteristic vector of a matrix.
Definition. If c is a characteristic value of an nxn matrix A,
then a non-zero matrix X of the type nxl such that AX=cX is
called a characteristic vector of A corresponding to the characteristic
value c.
Theorem 6. Let T be a linear operator on an n-dimensional vector space V and A be the matrix of T relative to any ordered basis B. Then a vector α in V is an eigenvector of T corresponding to its eigenvalue c if and only if its coordinate vector X relative to the basis B is an eigenvector of A corresponding to its eigenvalue c.
Proof. We have
[T − cI]_B = [T]_B − c[I]_B = A − cI.
If α ≠ 0, then the coordinate vector X of α is also non-zero.
Now
[(T − cI)(α)]_B = [T − cI]_B [α]_B [See theorem 2 of § 13]
= (A − cI)X.
∴ (T − cI)(α) = 0 iff (A − cI)X = 0
or T(α) = cα iff AX = cX
or α is an eigenvector of T iff X is an eigenvector of A.
Thus with the help of this theorem we see that our definition of characteristic vector of a matrix is sensible. Now we shall define the characteristic polynomial of a linear operator. Before doing so we shall prove the following theorem.
Theorem 7. Similar matrices A and B have the same characteristic polynomial and hence the same eigenvalues. If X is an eigenvector of A corresponding to the eigenvalue c, then P⁻¹X is an eigenvector of B corresponding to the eigenvalue c, where B = P⁻¹AP. (Meerut 1976, 83, 92)
Proof. Suppose A and B are similar matrices. Then there exists an invertible matrix P such that
B = P⁻¹AP.
We have B − xI = P⁻¹AP − xI
= P⁻¹AP − P⁻¹(xI)P [∵ P⁻¹(xI)P = xP⁻¹IP = xI]
= P⁻¹(A − xI)P.
∴ det (B − xI) = det P⁻¹ · det (A − xI) · det P
= det P⁻¹ · det P · det (A − xI) = det (P⁻¹P) · det (A − xI)
= det I · det (A − xI) = 1 · det (A − xI) = det (A − xI).
Thus the matrices A and B have the same characteristic polynomial and consequently they will have the same characteristic values.
If c is an eigenvalue of A and X is a corresponding eigenvector, then AX = cX, and hence
B(P⁻¹X) = (P⁻¹AP)(P⁻¹X) = P⁻¹AX = P⁻¹(cX) = c(P⁻¹X).
∴ P⁻¹X is an eigenvector of B corresponding to c. This completes the proof of the theorem.
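Theorem 7 checked on sample matrices (A and P are my own choices, with det P = 1 so that P is invertible):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])
Pinv = np.linalg.inv(P)
B = Pinv @ A @ P                 # similar to A
X = np.array([1.0, 0.0])         # A X = 2 X, an eigenvector for c = 2
```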
Now suppose that T is a linear operator on an n-dimensional vector space V. If B₁, B₂ are any two ordered bases for V, then we know that the matrices [T]_B₁ and [T]_B₂ are similar. Also similar
Linear Transformations 253
det (A − xI) = | −x   1 |
               | −1  −x | = x² + 1.
/^_£2 p-i W
\flo ^0 ^0 /
r> r-i
u Oq «0 )●
Let T be a linear operator on a finite dimensional vector space V. Then T is said to be diagonalizable if there is a basis B for V each vector of which is a characteristic vector of T.
⇒ αⱼ = 0. Thus αⱼ = 0 for every integer j between 1 and k.
In this way α₁ + ... + α_k = 0
⇒ αᵢ = 0 for each i. Hence the subspaces W₁, ..., W_k are independent.
Second Part. Now suppose that T is diagonalizable. Then we shall show that V = W₁ + ... + W_k. Since T is diagonalizable, therefore there exists a basis of V each vector of which is a characteristic vector of T. Thus there exists a basis of V consisting of vectors belonging to the characteristic subspaces W₁, ..., W_k. If α ∈ V, then α can be expressed as a linear combination of these basis vectors. Thus α can be written as α = α₁ + ... + α_k where αᵢ ∈ Wᵢ, i = 1, ..., k. In this way α ∈ W₁ + ... + W_k. Therefore V = W₁ + ... + W_k. But in the first part, we have proved that the subspaces are independent. Hence
V = W₁ ⊕ ... ⊕ W_k.
Theorem 12. If T is a diagonalizable operator on a finite dimensional vector space V, and c₁, ..., c_k are the distinct characteristic values of T, then there are linear operators E₁, ..., E_k on V such that
(a) T = c₁E₁ + ... + c_kE_k;
(b) I = E₁ + ... + E_k;
(c) EᵢEⱼ = 0̂, i ≠ j;
(d) Eᵢ² = Eᵢ;
(e) the range of Eᵢ is the space of characteristic vectors of T associated with the characteristic value cᵢ. (Meerut 1984P)
Proof. Let Wᵢ be the null space of the operator T − cᵢI, for i = 1, ..., k. Then W₁, ..., W_k are the characteristic spaces of the characteristic values c₁, ..., c_k respectively. By theorem 11, V is
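The decomposition of Theorem 12 can be sketched concretely for a small diagonalizable matrix (my own example, not the book's): with eigenvalues c₁ = 3, c₂ = 2 and eigenvector matrix P, the operators Eᵢ are built by conjugating diag(1, 0) and diag(0, 1) by P.

```python
import numpy as np

T = np.array([[3.0, 0.0],
              [1.0, 2.0]])
# eigenvalues 3 and 2 with eigenvectors (1, 1) and (0, 1) as columns of P
P = np.array([[1.0, 0.0],
              [1.0, 1.0]])
Pinv = np.linalg.inv(P)
E1 = P @ np.diag([1.0, 0.0]) @ Pinv   # projection on the c1 = 3 eigenspace
E2 = P @ np.diag([0.0, 1.0]) @ Pinv   # projection on the c2 = 2 eigenspace
```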
Solved Examples
Example 2. Let T be a linear operator on a finite dimensional vector space V and let c be a characteristic value of T. Show that the characteristic space of c, i.e., W_c, is invariant under T.
Solution. We have by definition,
W_c = {α ∈ V : Tα = cα}.
Let α ∈ W_c. Then Tα = cα.
Since W_c is a subspace, therefore
c ∈ F and α ∈ W_c ⇒ cα ∈ W_c.
Thus α ∈ W_c ⇒ Tα = cα ∈ W_c.
Hence W_c is invariant under T.
Example 3. If T is a linear operator on a finite dimensional vector space V and c is a characteristic value of T, then show that the characteristic space of c, i.e., W_c, is the null space of the operator T − cI.
Solution. Let α ∈ W_c. Then
Tα = cα
⇒ (T − cI)α = 0
⇒ α ∈ the null space of T − cI.
Again let α ∈ the null space of T − cI.
Then (T − cI)α = 0
⇒ Tα = cα
⇒ α ∈ W_c.
Hence W_c = the null space of T − cI.
Example 4. Show that the characteristic values of a diagonal matrix are precisely the elements in the diagonal. Hence show that if a matrix B is similar to a diagonal matrix D, then the diagonal elements of D are the characteristic values of B.
Solution. Let
A = [ a₁₁  0    ...  0   ]
    [ 0    a₂₂  ...  0   ]
    [ ...            ... ]
    [ 0    0    ...  a_nn ]
be a diagonal matrix of order n. The characteristic equation of A is
det (A − xI) = 0
i.e. (a₁₁ − x)(a₂₂ − x) ... (a_nn − x) = 0,
whose roots are a₁₁, a₂₂, ..., a_nn, i.e., the diagonal elements.
Now we know that similar matrices have the same characteristic values. Hence if B is similar to D, then the eigenvalues of B are the diagonal elements of D.
Example 5. Let T be a linear operator on a finite dimensional vector space V. Then show that 0 is a characteristic value of T iff T is not invertible.
Solution. Suppose 0 is a characteristic value of T. Then there exists a non-zero vector α in V such that
Tα = 0α
⇒ Tα = 0.
∴ T is singular and so T is not invertible.
Conversely suppose that T is not invertible. Since T is a linear operator on a finite-dimensional vector space V, therefore T is not invertible means that T is singular. Thus there exists a non-zero vector α in V such that
Tα = 0 = 0α.
∴ 0 is a characteristic value of T.
Example 6. If c is a characteristic value of an invertible transformation T, then show that c⁻¹ is a characteristic value of T⁻¹.
Solution. Since T is invertible, therefore c ≠ 0. So c⁻¹ exists.
Now c is a characteristic value of T. Therefore there exists a non-zero vector α in V such that
Tα = cα
⇒ T⁻¹(Tα) = T⁻¹(cα) ⇒ (T⁻¹T)α = c T⁻¹(α)
⇒ I(α) = c T⁻¹(α) ⇒ α = c T⁻¹(α) ⇒ c⁻¹α = T⁻¹(α)
⇒ T⁻¹α = c⁻¹α, α ≠ 0.
∴ c⁻¹ is a characteristic value of T⁻¹.
Example 7. If c ∈ F is a characteristic value of a linear operator T on a vector space V(F), then for any polynomial p(x) over F, p(c) is a characteristic value of p(T).
Solution. Since c is a characteristic value of T, therefore there exists a non-zero vector α in V such that
Tα = cα
⇒ T(Tα) = T(cα) ⇒ T²α = c(Tα)
⇒ T²α = c(cα) [∵ Tα = cα]
⇒ T²α = c²α.
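Example 7 in matrix form, with a matrix and a polynomial p(x) = x² + 2x + 5 of my own choosing: if Tα = cα then p(T)α = p(c)α.

```python
import numpy as np

T = np.array([[3.0, 0.0],
              [0.0, 1.0]])
a = np.array([1.0, 0.0])             # T a = 3 a, so c = 3
pT = T @ T + 2 * T + 5 * np.eye(2)   # p(T) for p(x) = x^2 + 2x + 5
pc = 3**2 + 2 * 3 + 5                # p(c)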
f(x) = det (A − xI).
Let f(x) = a₀ + a₁x + ... + a_nxⁿ = det (A − xI).
The constant term in this polynomial is
a₀ = f(0) = det A.
Similarly the constant term in the characteristic polynomial of T = det C, where C is the matrix of T relative to B.
Since S and T have the same characteristic polynomial, therefore the two constant terms must be equal.
∴ det A = det C
⇒ det [S]_B = det [T]_B
⇒ det S = det T.
Example 11. Find all (complex) proper values and proper vectors of the following matrices.
(a) [ 0 1 ]   (b) [ 1 0 ]   (c) [ 1 1 ]
    [ 0 0 ]       [ 0 i ]       [ 0 i ]
Solution. (a) Let A = [ 0 1 ]
                      [ 0 0 ].
We have A − xI = [ 0 1 ] − x [ 1 0 ] = [ −x  1 ]
                 [ 0 0 ]     [ 0 1 ]   [  0 −x ].
∴ characteristic polynomial of A = det (A − xI)
= | −x  1 |
  |  0 −x | = x².
∴ the characteristic equation of A is
det (A − xI) = 0, i.e. x² = 0.
The only root of this equation is x = 0.
∴ 0 is the only characteristic value of A.
Now let x₁, x₂ be the components of a characteristic vector α corresponding to this characteristic value. Let X be the coordinate matrix of α. Then
X = [ x₁ ]
    [ x₂ ].
Now X will be given by a non-zero solution of the equation
(A − 0I)X = O
i.e. [ 0 1 ] [ x₁ ]   [ 0 ]
     [ 0 0 ] [ x₂ ] = [ 0 ]
i.e. [ x₂ ]   [ 0 ]
     [ 0  ] = [ 0 ].
Thus x₂ = 0, x₁ = k where k is any non-zero complex number.
∴ X = [ k ]
      [ 0 ] where k is any non-zero complex number.
(b) Let A = [ 1 0 ]
            [ 0 i ].
We have A − xI = [ 1−x  0   ]
                 [ 0    i−x ].
∴ the characteristic equation of A is
det (A − xI) = 0
i.e. | 1−x  0   |
     | 0    i−x | = 0
i.e. (1 − x)(i − x) = 0.
The roots of this equation are x = 1, x = i.
∴ 1 and i are the two characteristic values of A.
Now let x₁, x₂ be the components of a characteristic vector corresponding to the characteristic value 1. Let X be the coordinate matrix of this vector. Then
X = [ x₁ ]
    [ x₂ ].
Now X will be given by a non-zero solution of the equation
(A − 1I)X = O
i.e. [ 1−1  0   ] [ x₁ ]   [ 0 ]
     [ 0    i−1 ] [ x₂ ] = [ 0 ]
i.e. [ 0        ]   [ 0 ]
     [ (i−1)x₂  ] = [ 0 ].
Thus x₂ = 0, x₁ = k where k is any non-zero complex number.
∴ X = [ k ]
      [ 0 ] where k ≠ 0.
For the matrix in (c) the characteristic values are again 1 and i (the matrix is triangular). To find the characteristic vectors corresponding to the characteristic value i we consider the equation
(A − iI)X = O
i.e. [ 1−i  1 ] [ x₁ ]   [ 0 ]
     [ 0    0 ] [ x₂ ] = [ 0 ]
i.e. [ (1−i)x₁ + x₂ ]   [ 0 ]
     [ 0            ] = [ 0 ]
i.e. (1 − i)x₁ + x₂ = 0.
Let x₁ = c. Then x₂ = (i − 1)c.
∴ X = [ c      ]
      [ (i−1)c ] where c ≠ 0.
Example 12. Find all (complex) characteristic values and characteristic vectors of the following matrices.
(a) [ 1 1 1 ]   (b) [ 1 1 1 ]   (Meerut 1968)
    [ 1 1 1 ]       [ 0 1 1 ]
    [ 1 1 1 ]       [ 0 0 1 ]
Solution. (a) Let A = [ 1 1 1 ]
                      [ 1 1 1 ]
                      [ 1 1 1 ].
We have A − xI = [ 1−x  1    1   ]
                 [ 1    1−x  1   ]
                 [ 1    1    1−x ].
∴ the characteristic polynomial of A is
det (A − xI) = | 1−x  1    1   |
               | 1    1−x  1   |
               | 1    1    1−x |
= | 3−x  1    1   |
  | 3−x  1−x  1   |   [C₁ → C₁ + C₂ + C₃]
  | 3−x  1    1−x |
= (3−x) | 1  1    1   |
        | 1  1−x  1   |
        | 1  1    1−x |
= (3−x) | 1  1   1  |
        | 0  −x  0  |   [R₂ → R₂ − R₁, R₃ → R₃ − R₁]
        | 0  0   −x |
= (3−x)x².
∴ the characteristic equation of A is
(3−x)x² = 0.
The only roots of this equation are x = 3, 0.
∴ 0 and 3 are the only characteristic values of A.
Let X = [ x₁ ]
        [ x₂ ]
        [ x₃ ]
be the coordinate matrix of a characteristic vector corresponding to the characteristic value x = 0. Then X will be given by a non-zero solution of the equation
(A − 0I)X = O
i.e. [ 1 1 1 ] [ x₁ ]   [ 0 ]
     [ 1 1 1 ] [ x₂ ] = [ 0 ]
     [ 1 1 1 ] [ x₃ ]   [ 0 ]
i.e. [ x₁+x₂+x₃ ]   [ 0 ]
     [ x₁+x₂+x₃ ] = [ 0 ]
     [ x₁+x₂+x₃ ]   [ 0 ]
i.e. x₁ + x₂ + x₃ = 0.
This equation has two linearly independent solutions, e.g.
X₁ = [  1 ]          [  0 ]
     [  0 ] and X₂ = [  1 ].
     [ −1 ]          [ −1 ]
Every non-zero linear combination of these column matrices X₁ and X₂ is a characteristic vector of A corresponding to the characteristic value 0.
The characteristic space of this characteristic value will be the subspace W spanned by these two vectors X₁ and X₂. Any non-zero vector in W will be a characteristic vector corresponding to this characteristic value.
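The eigenvalues 0, 0, 3 and the solutions of x₁ + x₂ + x₃ = 0 can be confirmed numerically:

```python
import numpy as np

A = np.ones((3, 3))
vals = np.sort(np.linalg.eigvals(A).real)   # expected: 0, 0, 3
X1 = np.array([1.0, 0.0, -1.0])             # solutions of x1 + x2 + x3 = 0
X2 = np.array([0.0, 1.0, -1.0])
```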
or (A + I)X = O
or [ −8   4  4 ] [ x₁ ]   [ 0 ]
   [ −8   4  4 ] [ x₂ ] = [ 0 ].
   [ −16  8  8 ] [ x₃ ]   [ 0 ]
These equations are equivalent to the equations
[ −8  4  4 ] [ x₁ ]   [ 0 ]
[  0  0  0 ] [ x₂ ] = [ 0 ],  applying R₂ → R₂ − R₁, R₃ → R₃ − 2R₁.
[  0  0  0 ] [ x₃ ]   [ 0 ]
The matrix of coefficients of these equations has rank 1. Therefore these equations have two linearly independent solutions. We see that these equations reduce to the single equation
−2x₁ + x₂ + x₃ = 0.
Obviously
X₁ = [ 1 ]          [  0 ]
     [ 1 ] and X₂ = [  1 ]
     [ 1 ]          [ −1 ]
are two linearly independent solutions of this equation. Therefore X₁ and X₂ are two linearly independent eigenvectors of A corresponding to the eigenvalue −1.
Now the eigenvectors of A corresponding to the eigenvalue 3 are given by
(A − 3I)X = O
i.e. [ −12  4  4 ] [ x₁ ]   [ 0 ]
     [ −8   0  4 ] [ x₂ ] = [ 0 ].
     [ −16  8  4 ] [ x₃ ]   [ 0 ]
These equations are equivalent to the equations
[ −12   4  4 ] [ x₁ ]   [ 0 ]
[   4  −4  0 ] [ x₂ ] = [ 0 ],  applying R₂ → R₂ − R₁, R₃ → R₃ − R₁.
[  −4   4  0 ] [ x₃ ]   [ 0 ]
The matrix of coefficients of these equations has rank 2. Therefore these equations will have a non-zero solution. Also these equations will have 3 − 2 = 1 linearly independent solution. These equations can be written as
−12x₁ + 4x₂ + 4x₃ = 0
4x₁ − 4x₂ = 0
−4x₁ + 4x₂ = 0.
From these, we get
x₁ = x₂ = 1, say.
Then x₃ = 2.
rn
/. Xii= 1
0 1—/
gives the diagonal form of A.
Example 17. Prove that the matrix
A = [ 1 2 ]
    [ 0 1 ]
is not diagonalizable over the field C.
Solution. The characteristic equation of A is
| 1−x  2   |
| 0    1−x | = 0
or (1 − x)² = 0.
The roots of this equation are 1, 1. Therefore the only distinct eigenvalue of A is 1. The eigenvectors of A corresponding to this eigenvalue are given by
[ 0 2 ] [ x₁ ]   [ 0 ]
[ 0 0 ] [ x₂ ] = [ 0 ]
or 0x₁ + 2x₂ = 0.
This equation has only one linearly independent solution. We see that
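Numerically, the failure to diagonalize shows up as a geometric multiplicity (the dimension of the eigenspace, i.e. of the null space of A − I) smaller than the algebraic multiplicity:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
# A - I has rank 1, so the eigenspace of the double eigenvalue 1 is only
# 1-dimensional: too few independent eigenvectors to form a basis.
geometric_mult = 2 - np.linalg.matrix_rank(A - np.eye(2))
```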
Answers
1. 5, 7, 4.
2. Characteristic polynomial is (9 − x)(x² − 5x − 2); minimal polynomial is (x − 9)(x² − 5x − 2).
3. True.
7. (a) 14, 1; [1, 3], [4, −1].
(b) (1 ± √37)/2; [6, 1 − √37], [6, 1 + √37].
(c) 2 ± √3 i; [2, 1 − √3 i], [2, 1 + √3 i].
(d) 8, −1, −1; linearly independent eigenvectors are [2, 1, 2], [0, 2, −1], [1, 0, −1].
8. All eigenvalues are 1. Every non-zero vector is an eigenvector.
9.;> (a) Z)= P
(b) D
2 01 p
- -7j»^
*3+4i 0 *
-1
.
-a
1
1 -1
0 3-4/J*‘~U
■1 0 0] f2 1 01
(c) D=t 0 '2 0 . P~ 1 1 .
LO 0 5. 4 2 ij
-2 0 01 r2 1 -11
(d) /)«. 0 1 0 . 2 1 0 .
0 0. ij li 0 3J
11. 1, 2, 2.
12. Roots of the characteristic equation of A are 1, 2, 2. A is not similar over the field R to a diagonal matrix. Here A has only two linearly independent eigenvectors belonging to R³. A is also not similar over the field C to a diagonal matrix.
13. Roots of the characteristic equation of A are 2, i, −i. A is not diagonalizable over R. But A is diagonalizable over C because in this case A has 3 distinct eigenvalues.
3
Inner Product Spaces
(g(t), f(t)) = ∫₀¹ g(t) f̄(t) dt. [from (1)]
∴ (g(t), f(t))‾ = [ ∫₀¹ g(t) f̄(t) dt ]‾ = ∫₀¹ ḡ(t) f(t) dt
= ∫₀¹ f(t) ḡ(t) dt = (f(t), g(t)).
(II) Linearity. Let a, b ∈ C and h(t) ∈ V. Then
(a f(t) + b g(t), h(t)) = ∫₀¹ [a f(t) + b g(t)] h̄(t) dt
= a ∫₀¹ f(t) h̄(t) dt + b ∫₀¹ g(t) h̄(t) dt
= a (f(t), h(t)) + b (g(t), h(t)).
(III) Non-negativity. We have
(f(t), f(t)) = ∫₀¹ f(t) f̄(t) dt = ∫₀¹ | f(t) |² dt. ...(2)
Since | f(t) |² ≥ 0 for every t lying in the closed interval [0, 1], therefore (2) ≥ 0. Thus (f(t), f(t)) ≥ 0.
Also (f(t), f(t)) = 0
⇒ ∫₀¹ | f(t) |² dt = 0
⇒ | f(t) |² = 0 for every t lying in [0, 1]
⇒ f(t) = 0 for every t lying in [0, 1]
⇒ f(t) = 0̂.
Hence the product defined in (1) is an inner product on V(C).
(f(t), g(t)) = ∫₀¹ f(t) ḡ(t) dt. ...(1)
As in example 4, we can show that all the postulates of an inner product are satisfied by (1). Since V is not a finite dimensional vector space, therefore this example gives us an inner product on a vector space which is not finite-dimensional.
Important note. In some books the inner product space is defined as follows:
The vector space V over F is said to be an inner product space if there is defined, for any two vectors α, β ∈ V, an element (α | β) of F such that
(1) (α | β) = (β | α)‾;
(2) (aα + bβ | γ) = a(α | γ) + b(β | γ);
(3) (α | α) > 0 if α ≠ 0,
for any α, β, γ ∈ V and a, b ∈ F.
There is a slight difference in the postulate (3). It can be easily seen that both the definitions are equivalent.
§ 2. Norm or length of a vector in an inner product space.
Consider the vector space V₃(R) with standard inner product de-
fined on it. If α = (a₁, a₂, a₃) ∈ V₃(R), we have
(α, α) = a₁² + a₂² + a₃².
Now we know that in the three dimensional Euclidean space
√(a₁² + a₂² + a₃²) is the length of the vector α = (a₁, a₂, a₃). Taking
motivation from this fact, we make the following definition.
Definition. Let V be an inner product space. If α ∈ V, then
the norm or the length of the vector α, written as ||α||, is defined as
the positive square root of (α, α), i.e., ||α|| = √(α, α).
(Nagarjuna 1978)
Unit vector. Definition. Let V be an inner product space. If
α ∈ V is such that ||α|| = 1, then α is called a unit vector. Thus in an
inner product space a vector is called a unit vector if its length is 1.
Theorem 1. In an inner product space V(F), prove that
(i) (aα − bβ, γ) = a(α, γ) − b(β, γ);
(ii) (α, aβ + bγ) = ā(α, β) + b̄(α, γ).
Proof. (i) We have
(aα − bβ, γ) = (aα + (−b)β, γ)
= a(α, γ) + (−b)(β, γ) [by linearity property]
= a(α, γ) − b(β, γ).
(ii) (α, aβ + bγ) = conjugate of (aβ + bγ, α) [by conjugate symmetry]
= conjugate of [a(β, α) + b(γ, α)] [by linearity property]
= ā · conjugate of (β, α) + b̄ · conjugate of (γ, α)
= ā(α, β) + b̄(α, γ).
Note 1. If F = R, then the result (ii) can be simply read as
(α, aβ + bγ) = a(α, β) + b(α, γ).
Note 2. Similarly it can be proved that
(α, aβ − bγ) = ā(α, β) − b̄(α, γ).
Also (α, β + γ) = (α, 1β + 1γ) = 1̄(α, β) + 1̄(α, γ)
= (α, β) + (α, γ).
Theorem 2. In an inner product space V(F), prove that
(i) ||α|| ≥ 0, and ||α|| = 0 if and only if α = 0;
(ii) ||aα|| = |a| ||α||.
Proof. (i) We have
||α|| = √(α, α) [by def. of norm]
⇒ ||α||² = (α, α)
⇒ ||α||² ≥ 0 [∵ (α, α) ≥ 0]
⇒ ||α|| ≥ 0.
Also (α, α) = 0 iff α = 0.
∴ ||α||² = 0 iff α = 0, i.e., ||α|| = 0 iff α = 0.
Thus in an inner product space, ||α|| > 0 iff α ≠ 0.
(ii) We have ||aα||² = (aα, aα) [by def. of norm]
= a(α, aα) [by linearity property]
= a ā (α, α) [by theorem 1]
= |a|² ||α||².
Thus ||aα||² = |a|² ||α||². Taking square roots, we get ||aα|| = |a| ||α||.
Note. If α is any non-zero vector of an inner product space V,
then (1/||α||) α is a unit vector in V. We have ||α|| ≠ 0 because
α ≠ 0. Therefore 1/||α|| is a positive real number.
Now ( (1/||α||)α, (1/||α||)α ) = (1/||α||)(1/||α||)(α, α)
= (1/||α||²)(α, α) = ||α||²/||α||² = 1.
Therefore || (1/||α||)α || = 1 and thus α/||α|| is a unit vector.
For example, if α = (2, 1, 2) is a vector in V₃(R) with standard
inner product, then ||α|| = √(α, α) = √(4 + 1 + 4) = 3.
Therefore (1/3)(2, 1, 2) = (2/3, 1/3, 2/3) is a unit vector.
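The normalization just described is easy to check numerically. The sketch below (not from the book; the helper name `unit_vector` is ours) reproduces the worked example α = (2, 1, 2):

```python
import numpy as np

def unit_vector(alpha):
    """Return alpha / ||alpha||, where ||alpha|| = sqrt((alpha, alpha))."""
    norm = np.sqrt(np.dot(alpha, alpha))  # positive square root of (alpha, alpha)
    return alpha / norm

alpha = np.array([2.0, 1.0, 2.0])
u = unit_vector(alpha)  # (2/3, 1/3, 2/3), a vector of length 1
```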
Theorem 3. Schwarz's Inequality. In an inner product space
V(F), prove that
|(α, β)| ≤ ||α|| ||β||.
(Meerut 1981, 83, 89, 92, 93P; Madras 83; Andhra 92;
S.V.U. Tirupati 90, 93; Nagarjuna 70)
Proof. If α = 0, then ||α|| = 0. Also in that case
(α, β) = (0, β) = (00, β) = 0(0, β) = 0.
∴ |(α, β)| = 0.
Thus if α = 0, then |(α, β)| = 0 and ||α|| ||β|| = 0.
∴ the inequality |(α, β)| ≤ ||α|| ||β|| is valid.
Now let α ≠ 0. Then ||α|| > 0. Therefore 1/||α||² is a positive
real number. Consider the vector
γ = β − [(β, α)/||α||²] α. We have
(γ, γ) = ( β − [(β, α)/||α||²]α, β − [(β, α)/||α||²]α )
= (β, β) − [conjugate of (β, α)/||α||²](β, α) − [(β, α)/||α||²](α, β)
+ [(β, α) · conjugate of (β, α)/||α||⁴](α, α) [by linearity property]
= ||β||² − |(β, α)|²/||α||² − |(β, α)|²/||α||² + |(β, α)|²/||α||²
[∵ z z̄ = |z|² if z ∈ C]
= ||β||² − |(α, β)|²/||α||², the second and the fourth terms cancelling.
But (γ, γ) = ||γ||² ≥ 0.
∴ ||β||² − |(α, β)|²/||α||² ≥ 0
or ||β||² ||α||² ≥ |(α, β)|²
or |(α, β)| ≤ ||α|| ||β||, taking the square root of both sides.
Schwarz's inequality has very important applications in mathe-
matics.
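Schwarz's inequality can be verified numerically for the standard inner product on Cⁿ, (α, β) = Σ aᵢ b̄ᵢ. An illustrative sketch (the sample vectors are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.standard_normal(4) + 1j * rng.standard_normal(4)
beta = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# (alpha, beta) = sum of a_i * conj(b_i); np.vdot conjugates its FIRST
# argument, so (alpha, beta) is np.vdot(beta, alpha).
inner = np.vdot(beta, alpha)
lhs = abs(inner)                                    # |(alpha, beta)|
rhs = np.linalg.norm(alpha) * np.linalg.norm(beta)  # ||alpha|| ||beta||
```

For linearly dependent vectors the two sides are equal, which is the converse case treated in Example 10 below.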
We have ||f(t)||² = (f(t), f(t)) = ∫₀¹ f(t) f̄(t) dt.
Case III. Consider the vector space V₃(R) with standard inner
product defined on it, i.e., if
α = (a₁, a₂, a₃), β = (b₁, b₂, b₃) ∈ V₃(R),
then (α, β) = a₁b₁ + a₂b₂ + a₃b₃. ...(1)
We see that (1) is nothing but the dot product of two vectors
α and β in three dimensional Euclidean space. If θ is the angle
between the non-zero vectors α and β, then we know that
cos² θ = (a₁b₁ + a₂b₂ + a₃b₃)² / [(a₁² + a₂² + a₃²)(b₁² + b₂² + b₃²)]
= {(α, β)}² / [(α, α)(β, β)]
= |(α, β)|² / (||α||² ||β||²)
[∵ if (α, β) is real then {(α, β)}² = |(α, β)|²]
But by Schwarz's inequality, we have
|(α, β)|² ≤ ||α||² ||β||².
∴ cos² θ ≤ ||α||² ||β||² / (||α||² ||β||²), i.e., cos² θ ≤ 1.
Thus the absolute value of the cosine of a real angle cannot be
greater than 1.
inner product
(α, β) = ( Σⱼ₌₁ⁿ xⱼαⱼ, Σᵢ₌₁ⁿ yᵢαᵢ ) = Σⱼ₌₁ⁿ Σᵢ₌₁ⁿ xⱼ ȳᵢ (αⱼ, αᵢ) [See theorem
1, part (ii) on page 289]
= Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ ȳᵢ gᵢⱼ xⱼ
= Y*GX. ...(1)
Solution. We have
||α + β||² = (α + β, α + β) [by def. of norm]
= (α, α + β) + (β, α + β) [by linearity property]
= (α, α) + (α, β) + (β, α) + (β, β)
= ||α||² + (α, β) + (β, α) + ||β||². ...(1)
Also ||α − β||² = (α − β, α − β) = (α, α − β) − (β, α − β)
= (α, α) − (α, β) − (β, α) + (β, β)
= ||α||² − (α, β) − (β, α) + ||β||². ...(2)
Adding (1) and (2), we get
||α + β||² + ||α − β||² = 2||α||² + 2||β||².
Geometrical interpretation. Let α and β be vectors in the vec-
tor space V₂(R) with standard inner product defined on it. Sup-
pose the vector α is represented by the side AB and the vector β by
the side BC of a parallelogram ABCD. Then the vectors α + β and
α − β represent the diagonals AC and DB of the parallelogram.
∴ AC² + DB² = 2AB² + 2BC²,
i.e., the sum of the squares of the sides of a parallelogram is equal
to the sum of the squares of its diagonals.
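The parallelogram law just proved can be confirmed for concrete vectors; a small illustrative sketch (the vectors are ours):

```python
import numpy as np

alpha = np.array([1.0, -2.0, 3.0])
beta = np.array([4.0, 0.0, -1.0])

# ||a + b||^2 + ||a - b||^2 should equal 2||a||^2 + 2||b||^2
lhs = np.linalg.norm(alpha + beta) ** 2 + np.linalg.norm(alpha - beta) ** 2
rhs = 2 * np.linalg.norm(alpha) ** 2 + 2 * np.linalg.norm(beta) ** 2
```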
Example 7. If α, β are vectors in an inner product space V(F)
and a, b ∈ F, then prove that
(i) ||aα + bβ||² = |a|² ||α||² + a b̄ (α, β) + ā b (β, α) + |b|² ||β||²;
(ii) Re (α, β) = ¼ ||α + β||² − ¼ ||α − β||².
Solution. (i) We have
||aα + bβ||² = (aα + bβ, aα + bβ)
= a(α, aα + bβ) + b(β, aα + bβ)
= a{ā(α, α) + b̄(α, β)} + b{ā(β, α) + b̄(β, β)}
= a ā(α, α) + a b̄(α, β) + ā b(β, α) + b b̄(β, β)
= |a|² ||α||² + a b̄(α, β) + ā b(β, α) + |b|² ||β||².
(ii) We have
||α + β||² = (α + β, α + β) = (α, α + β) + (β, α + β)
= (α, α) + (α, β) + (β, α) + (β, β)
= ||α||² + (α, β) + conjugate of (α, β) + ||β||²
= ||α||² + 2 Re (α, β) + ||β||². ...(1)
Also ||α − β||² = (α − β, α − β) = (α, α − β) − (β, α − β)
= (α, α) − (α, β) − (β, α) + (β, β)
= ||α||² − {(α, β) + (β, α)} + ||β||²
= ||α||² − 2 Re (α, β) + ||β||². ...(2)
We have (γ, γ) = ( β − [(β, α)/||α||²]α, β − [(β, α)/||α||²]α )
= (β, β) − [conjugate of (β, α)/||α||²](β, α) − [(β, α)/||α||²](α, β)
+ [(β, α) · conjugate of (β, α)/||α||⁴](α, α)
= ||β||² − |(β, α)|²/||α||² − |(β, α)|²/||α||² + |(β, α)|²/||α||²
= ||β||² − |(α, β)|²/||α||²
= 0. [from (1)]
Now (γ, γ) = 0 ⇒ γ = 0
⇒ β − [(β, α)/||α||²] α = 0
⇒ α and β are linearly dependent.
Example 10. If in an inner product space the vectors α and β are
linearly dependent, then
|(α, β)| = ||α|| ||β||.
Solution. If α = 0, then |(α, β)| = 0 and ||α|| = 0. Therefore
the given result is true.
Also if β = 0, then (α, β) = (α, 0) = 0 and ||β|| = 0.
So let us suppose that both α and β are non-zero vectors.
Since they are linearly dependent, therefore α = cβ where c is some
scalar. We have
(α, β) = (cβ, β) = c(β, β) = c ||β||².
∴ |(α, β)| = |c| ||β||².
Also ||α|| = ||cβ|| = |c| ||β||.
∴ ||α|| ||β|| = |c| ||β||² = |(α, β)|.
(a) (α, β) = x₁y₁ + 2x₁y₂ + 2x₂y₁ + 5x₂y₂;
(b) (α, β) = x₁² − 2x₁y₂ − 2x₂y₁ + y₁²;
(c) (α, β) = 2x₁y₁ + 5x₂y₂;
(d) (α, β) = x₁y₁ − 2x₁y₂ − 2x₂y₁ + 4x₂y₂.
Ans. (a) and (c) are inner products; (b) and (d) are not.
3. Show that for the vectors α = (x₁, x₂) and β = (y₁, y₂) from
V₂(R) the following defines an inner product on V₂(R):
(α, β) = x₁y₁ − x₂y₁ − x₁y₂ + 2x₂y₂. (Meerut 1971)
4. Let α = (a₁, a₂) and β = (b₁, b₂) be any two vectors ∈ V₂(C).
Prove that (α, β) = a₁b̄₁ + (a₁ + a₂)(b̄₁ + b̄₂) defines an inner
product in V₂(C). Show that the norm of the vector (3, 4) in
this inner product space is √58.
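For exercise 2, each candidate product is of the form Xᵀ G Y for a 2×2 matrix G, and it defines a real inner product exactly when G is symmetric with all leading principal minors positive (the criterion quoted later in § 5). A sketch of that check (the Gram matrices below are our reading of parts (a), (c), (d)):

```python
import numpy as np

def is_inner_product(G):
    """X^T G Y is a real inner product iff G is symmetric and all
    leading principal minors of G are positive."""
    G = np.asarray(G, dtype=float)
    if not np.allclose(G, G.T):
        return False
    return all(np.linalg.det(G[:k, :k]) > 0 for k in range(1, G.shape[0] + 1))

Ga = [[1, 2], [2, 5]]    # (a): x1*y1 + 2*x1*y2 + 2*x2*y1 + 5*x2*y2
Gc = [[2, 0], [0, 5]]    # (c): 2*x1*y1 + 5*x2*y2
Gd = [[1, -2], [-2, 4]]  # (d): determinant 0, so not positive definite
```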
5. If α, β be vectors in a real inner product space such that
(β, αₖ) = ( Σⱼ₌₁ᵐ cⱼαⱼ, αₖ )
= Σⱼ₌₁ᵐ cⱼ(αⱼ, αₖ) [by linearity property of inner product]
= cₖ(αₖ, αₖ) = cₖ ||αₖ||², 1 ≤ k ≤ m.
∴ cₖ = (β, αₖ)/||αₖ||².
Putting these values of c₁, …, c_m in (1), we get
β = Σⱼ₌₁ᵐ [(β, αⱼ)/||αⱼ||²] αⱼ.
Σⱼ₌₁ᵐ cⱼαⱼ = c₁α₁ + … + c_mα_m = 0. ...(1)
We have, for each k where 1 ≤ k ≤ m,
( Σⱼ₌₁ᵐ cⱼαⱼ, αₖ ) = Σⱼ₌₁ᵐ cⱼ(αⱼ, αₖ)
= cₖ(αₖ, αₖ) [∵ (αⱼ, αₖ) = 0 if j ≠ k]
= cₖ ||αₖ||².
But from (1), Σⱼ₌₁ᵐ cⱼαⱼ = 0. Therefore (0, αₖ) = 0.
We have
(γ, αₖ) = ( β − Σᵢ₌₁ᵐ (β, αᵢ)αᵢ, αₖ )
= (β, αₖ) − Σᵢ₌₁ᵐ (β, αᵢ)(αᵢ, αₖ) [∵ the αᵢ belong to an
orthonormal set]
= (β, αₖ) − (β, αₖ) [∵ (αᵢ, αₖ) = δᵢₖ = 1 if i = k and δᵢₖ = 0 if i ≠ k]
= 0.
We have (γ, δ) = ( γ, Σᵢ₌₁ᵐ bᵢαᵢ ) = Σᵢ₌₁ᵐ b̄ᵢ(γ, αᵢ) = 0.
Thus γ is orthogonal to every vector δ in L(S). Therefore γ is
orthogonal to L(S).
Theorem 5. Any orthonormal set of vectors in an inner product
space is linearly independent.
Proof. Let S be any orthonormal set of vectors in an inner
product space V. Let {α₁, …, α_m} be a finite subset of S con-
taining m distinct vectors. Let
Σⱼ₌₁ᵐ cⱼαⱼ = c₁α₁ + … + c_mα_m = 0. ...(1)
(v) If β and γ are in V, then (β, γ) = Σᵢ₌₁ⁿ (β, αᵢ)(αᵢ, γ).
(vi) If β is in V, then Σᵢ₌₁ⁿ |(β, αᵢ)|² = ||β||².
(iv) ⇒ (v).
(β, γ) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ (β, αᵢ) · conjugate of (γ, αⱼ) · (αᵢ, αⱼ)
= Σᵢ₌₁ⁿ (β, αᵢ)(αᵢ, γ).
(v) ⇒ (vi).
It is given that if β and γ are in V, then (β, γ) = Σᵢ₌₁ⁿ (β, αᵢ)(αᵢ, γ).
α₃ = γ₃/||γ₃||, where γ₃ = β₃ − (β₃, α₁)α₁ − (β₃, α₂)α₂,
…
αₙ = γₙ/||γₙ||, where γₙ = βₙ − (βₙ, α₁)α₁ − (βₙ, α₂)α₂
− … − (βₙ, αₙ₋₁)αₙ₋₁.
Now we shall give an example to illustrate the Gram-Schmidt
process.
Example. Apply the Gram-Schmidt process to the vectors
β₁ = (1, 0, 1), β₂ = (1, 0, −1), β₃ = (0, 3, 4), to obtain an orthonormal
basis for V₃(R) with the standard inner product.
(Meerut 1980, 81, 83, 88, 93; Nagarjuna 80;
S.V.U. Tirupati 90)
Solution. We have ||β₁||² = (β₁, β₁) = 1·1 + 0·0 + 1·1
= (1)² + (0)² + (1)² = 2.
Let α₁ = β₁/||β₁|| = (1/√2)(1, 0, 1) = (1/√2, 0, 1/√2).
Now let γ₂ = β₂ − (β₂, α₁)α₁.
(β₃, α₂) = 0·(1/√2) + 3·0 + 4·(−1/√2) = −2√2.
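The Gram-Schmidt computation of this example can be sketched in code (the function name `gram_schmidt` is ours); applied to β₁ = (1, 0, 1), β₂ = (1, 0, −1), β₃ = (0, 3, 4), it yields α₁ = (1/√2)(1, 0, 1), α₂ = (1/√2)(1, 0, −1), α₃ = (0, 1, 0):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors in R^n."""
    basis = []
    for b in vectors:
        # subtract the projections of b on the orthonormal vectors found so far
        gamma = b - sum(np.dot(b, a) * a for a in basis)
        basis.append(gamma / np.linalg.norm(gamma))  # normalize gamma
    return basis

a1, a2, a3 = gram_schmidt([np.array([1.0, 0.0, 1.0]),
                           np.array([1.0, 0.0, -1.0]),
                           np.array([0.0, 3.0, 4.0])])
```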
= (β, β) − Σᵢ₌₁ᵐ (β, αᵢ) · conjugate of (β, αᵢ)
= ||β||² − Σᵢ₌₁ᵐ |(β, αᵢ)|²
[On summing with respect to j and remembering that
(αⱼ, αᵢ) = 1 when j = i and (αⱼ, αᵢ) = 0 when j ≠ i]
If the equality holds, i.e., if Σᵢ₌₁ᵐ |(β, αᵢ)|² = ||β||², then from (1)
we have ||γ||² = 0. This implies that γ = 0, i.e., β = Σᵢ₌₁ᵐ (β, αᵢ)αᵢ.
Conversely, if β = Σᵢ₌₁ᵐ (β, αᵢ)αᵢ, then
Σᵢ₌₁ᵐ |(β, αᵢ)|² = ||β||²
and thus the equality holds.
Note. Another statement of Bessel's inequality.
Let {α₁, …, α_m} be an orthogonal set of non-zero vectors in an
inner product space V. If β is any vector in V, then
Σᵢ₌₁ᵐ |(β, αᵢ)|² / ||αᵢ||² ≤ ||β||². (Meerut 1971, 74)
Proof. Let B = {δ₁, …, δ_m} where δᵢ = αᵢ/||αᵢ||, 1 ≤ i ≤ m.
Then ||δᵢ|| = 1. Thus the set B is an orthonormal set. Now
proceeding as in the previous theorem, we get
Σᵢ₌₁ᵐ |(β, δᵢ)|² ≤ ||β||². ...(1)
Also (β, δᵢ) = ( β, (1/||αᵢ||)αᵢ ) = (1/||αᵢ||)(β, αᵢ).
∴ |(β, δᵢ)|² = |(β, αᵢ)|² / ||αᵢ||². ...(2)
From (1) and (2), we get the required result.
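Bessel's inequality can be illustrated with an orthonormal set that is smaller than a full basis of R³, so the inequality is strict; a sketch with numbers of our own choosing:

```python
import numpy as np

# an orthonormal set with m = 2 < 3 = dim V
ortho = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
beta = np.array([3.0, 4.0, 12.0])

bessel_sum = sum(np.dot(beta, a) ** 2 for a in ortho)  # 3^2 + 4^2 = 25
norm_sq = np.dot(beta, beta)                           # ||beta||^2 = 169
```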
Corollary. If V is finite dimensional and if {α₁, …, α_m} is an
β = Σᵢ₌₁ᵐ (β, αᵢ)αᵢ + γ, where Σᵢ₌₁ᵐ (β, αᵢ)αᵢ is in W and γ is in
W⊥. Therefore V = W + W⊥.
Now we shall prove that the subspaces W and W⊥ are disjoint.
The vectors α₁ and α₂ are then called the orthogonal projections of α
on the subspaces W and W⊥.
Solved Examples
Example 1. State whether the following statement is true or
false. Give reasons to support your answer.
If α is an element of an n-dimensional unitary space V and α is
perpendicular to n linearly independent vectors from V, then α = 0.
(Meerut 1977)
Solution. True. Suppose α is perpendicular to n linearly in-
dependent vectors α₁, …, αₙ.
Since V is of dimension n, therefore the n linearly independent
vectors α₁, …, αₙ constitute a basis for V. So we can write
α = a₁α₁ + … + aₙαₙ. Now
(α, α) = (a₁α₁ + … + aₙαₙ, α) = a₁(α₁, α) + … + aₙ(αₙ, α)
= a₁ × 0 + … + aₙ × 0 [∵ α is ⊥ to each of
the vectors α₁, …, αₙ]
= 0.
∴ α = 0.
= (α, αₖ) − ( Σᵢ₌₁ⁿ (α, αᵢ)αᵢ, αₖ )
[by linearity of inner product]
= (α, αₖ) − Σᵢ₌₁ⁿ (α, αᵢ)(αᵢ, αₖ)
is an orthonormal basis for V.
W₁⊥ + W₂⊥ = (W₁ ∩ W₂)⊥
⇒ (W₁ ∩ W₂)⊥ = W₁⊥ + W₂⊥.
Example 16. If W₁, …, W_k are pairwise orthogonal subspaces in
an inner product space V, and if α = α₁ + … + α_k with αᵢ in Wᵢ for
i = 1, …, k, then ||α||² = ||α₁||² + … + ||α_k||².
Solution. We have
||α||² = (α, α)
= Σᵢ₌₁ᵏ (αᵢ, αᵢ) [On summing with respect to j and remembering
that αⱼ is orthogonal to each αᵢ if j ≠ i]
= ||α₁||² + … + ||α_k||².
Example 17. Find a vector of unit length which is orthogonal to the
vector α = (2, −1, 6) of V₃(R) with respect to the standard inner product.
Solution. Let β = (x, y, z) be the required vector, so that
α · β = (2, −1, 6) · (x, y, z) = 2x − y + 6z = 0.
Any solution of this equation, for example
β = (2, −2, −1),
gives a vector orthogonal to α. But
||β|| = [2² + (−2)² + (−1)²]^(1/2) = 3.
Hence the vector (1/3)β = (2/3, −2/3, −1/3) has length 1 and is ortho-
gonal to α.
Example 18. Find two mutually orthogonal vectors each of which
is orthogonal to the vector α = (4, 2, 3) of V₃(R) with respect to the stan-
dard inner product.
Solution. Let β = (x₁, x₂, x₃) be any vector orthogonal to the
vector (4, 2, 3). Then 4x₁ + 2x₂ + 3x₃ = 0.
Obviously β = (3, −3, −2) is a solution of this equation. We
now require a third vector γ = (y₁, y₂, y₃) orthogonal to both α and β.
Prove that
Σₖ₌₁ᵐ |(β, αₖ)|² / ||αₖ||² = ||β||²
if and only if
β = Σₖ₌₁ᵐ [(β, αₖ)/||αₖ||²] αₖ. (Meerut 1979)
Answers
1. (a), (b), (c): Check yourself.
(d) (5, 15, 5) and (1, 1, −4).
2. (1/√18)(1, 4, 1), (1/√54)(−1, 7, 2).
3. (1/√5)(2, 0, 1), (1/√270)(−7, 5, 14).
4. (1, 0, 0), (0, 1, 0), (0, 0, 1).
5. (b) 1, √3(2x − 1), √5(6x² − 6x + 1).
§ 4. Linear functionals and adjoints.
Theorem 1. Let V be a finite dimensional inner product space,
and f a linear functional on V. Then there exists a unique vector β
in V such that f(α) = (α, β) for all α in V. (Meerut 1974, 79, 83, 85)
Proof. Suppose B = {α₁, α₂, …, αₙ} is an orthonormal basis for
V and f is a linear functional on V. Let
β = Σⱼ₌₁ⁿ [conjugate of f(αⱼ)] αⱼ,
and let g be the linear functional defined by g(α) = (α, β). Then
g(αₖ) = ( αₖ, Σⱼ₌₁ⁿ [conjugate of f(αⱼ)] αⱼ ) = Σⱼ₌₁ⁿ f(αⱼ)(αₖ, αⱼ)
= f(αₖ)
[On summing with respect to j. Remember that
(αₖ, αⱼ) = 1 if j = k and (αₖ, αⱼ) = 0 if j ≠ k.]
Thus g and f agree on a basis for V. Therefore g = f. There-
fore corresponding to a linear functional f on V there exists a vec-
tor β in V such that f(α) = (α, β) ∀ α ∈ V.
Now to show that β is unique.
Let γ be a vector in V such that f(α) = (α, γ) ∀ α ∈ V.
Then (α, β) = (α, γ) ∀ α ∈ V
⇒ (α, β) − (α, γ) = 0 ∀ α ∈ V
⇒ (α, β − γ) = 0 ∀ α ∈ V
⇒ (β − γ, β − γ) = 0 [taking α = β − γ]
⇒ β − γ = 0
⇒ β = γ.
Thus β is unique. Hence the theorem.
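Theorem 1's construction of β can be sketched on C² with the standard basis, which is orthonormal for the standard inner product; the sample functional f below is our own assumption, not from the text:

```python
import numpy as np

def f(alpha):
    # a sample linear functional on C^2 (our assumption)
    return (2 - 1j) * alpha[0] + 3j * alpha[1]

basis = [np.array([1.0 + 0j, 0.0]), np.array([0.0, 1.0 + 0j])]
# beta = sum over j of conj(f(alpha_j)) * alpha_j, as in the proof
beta = sum(np.conj(f(e)) * e for e in basis)

alpha = np.array([1.0 + 2j, -1.0 + 1j])
lhs = f(alpha)
rhs = np.vdot(beta, alpha)  # (alpha, beta): np.vdot conjugates its first argument
```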
Theorem 2. For any linear operator T on a finite-dimensional
inner product space V, there exists a unique linear operator T* on V
such that
(Tα, β) = (α, T*β) for all α, β in V.
(Meerut 1973, 74, 78, 81, 83P, 88, 91, 93P)
Proof. Let T be a linear operator on a finite-dimensional
inner product space V over the field F. Let β be a vector in V.
Let f be a function from V into F defined by
f(α) = (Tα, β) ∀ α ∈ V. ...(1)
Here Tα stands for T(α). We claim that f is a linear functional
on V. Let a, b ∈ F and α₁, α₂ ∈ V. Then
f(aα₁ + bα₂) = (T(aα₁ + bα₂), β) [from (1)]
= (aTα₁ + bTα₂, β) [∵ T is linear]
= a(Tα₁, β) + b(Tα₂, β)
= a f(α₁) + b f(α₂). [from (1)]
Thus f is a linear functional on V. Therefore by theorem 1,
there exists a unique vector β′ in V such that
f(α) = (α, β′) ∀ α ∈ V. ...(2)
Tαⱼ = Σᵢ₌₁ⁿ aᵢⱼ αᵢ, j = 1, …, n. ...(2)
Since the expression for Tαⱼ as a linear combination of vectors
in B is unique, therefore from (1) and (2) we get
aᵢⱼ = (Tαⱼ, αᵢ), i = 1, …, n and j = 1, …, n.
Corollary. Let V be a finite-dimensional inner product space
and let T be a linear operator on V. In any orthonormal basis for
V, the matrix of T* is the conjugate transpose of the matrix of T.
Proof. Let B = {α₁, …, αₙ} be an orthonormal basis for V.
Let A = [aᵢⱼ]ₙₓₙ be the matrix of T in the ordered basis B. Then
aᵢⱼ = (Tαⱼ, αᵢ). ...(1)
Now T* is also a linear operator on V. Let C = [cᵢⱼ]ₙₓₙ be the
matrix of T* in the ordered basis B. Then
cᵢⱼ = (T*αⱼ, αᵢ). ...(2)
We have
cᵢⱼ = (T*αⱼ, αᵢ)
= conjugate of (αᵢ, T*αⱼ) [∵ (α, β) = conjugate of (β, α)]
= conjugate of (Tαᵢ, αⱼ) [by def. of T*]
= āⱼᵢ. [from (1)]
∴ C = [āⱼᵢ]ₙₓₙ. Hence C = A*, where A* is the conjugate
transpose of the matrix A.
Note. It should be remarked that in this corollary the basis B
is an orthonormal basis and not an arbitrary basis.
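The corollary can be tested numerically: with the standard (orthonormal) basis of C³, the matrix of T* is A*, and then (Tα, β) = (α, T*β). An illustrative sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # matrix of T
A_star = A.conj().T                                                 # matrix of T*

alpha = rng.standard_normal(3) + 1j * rng.standard_normal(3)
beta = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# (alpha, beta) = sum a_i * conj(b_i) = np.vdot(beta, alpha)
lhs = np.vdot(beta, A @ alpha)       # (T alpha, beta)
rhs = np.vdot(A_star @ beta, alpha)  # (alpha, T* beta)
```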
Theorem 4. Suppose S and T are linear operators on an inner
product space V and c is a scalar. If S and T possess adjoints, the
operators S + T, cT, ST, T* will also possess adjoints. Also we have
(i) (S + T)* = S* + T*; (Meerut 1972, 76, 79, 87)
(ii) (cT)* = c̄T*; (Meerut 1970, 71, 76, 87)
(iii) (ST)* = T*S*; (Meerut 1972, 78, 79, 87, 91)
(iv) (T*)* = T. (Meerut 1970, 87)
Proof. (i) Since S and T are linear operators on V, therefore
S + T is also a linear operator on V. For every α, β in V, we have
((S + T)α, β) = (Sα + Tα, β) = (Sα, β) + (Tα, β)
= (α, S*β) + (α, T*β) [by def. of adjoint]
= (α, S*β + T*β) = (α, (S* + T*)β).
Thus for the linear operator S + T on V there exists a linear
operator S* + T* on V such that
((S + T)α, β) = (α, (S* + T*)β) for all α, β in V.
Therefore the linear operator S + T has an adjoint. By the
definition and by the uniqueness of adjoint, we get
(S + T)* = S* + T*.
(ii) Since T is a linear operator on V, therefore cT is also a
linear operator on V. For every α, β in V, we have
((cT)α, β) = (cTα, β) = c(Tα, β) = c(α, T*β)
= (α, c̄T*β) = (α, (c̄T*)β).
Thus for the linear operator cT on V there exists a linear
operator c̄T* on V such that
((cT)α, β) = (α, (c̄T*)β) for all α, β in V.
Therefore the linear operator cT possesses an adjoint. By the
definition and by the uniqueness of adjoint, we get
(cT)* = c̄T*.
(0̂α, β) = (0, β) = 0 = (α, 0) = (α, 0̂β).
This gives U₂ = T₂.
Hence the expression (1) for T is unique.
Note. If T is a linear operator on a complex inner product
space V which is not finite-dimensional, then the above result will
be still true provided it is given that T possesses an adjoint. Also, in
the resolution T = T₁ + iT₂, T₁ is called the real part of T and T₂ is
called the imaginary part of T.
Theorem 6. Every linear operator T on a finite-dimensional
inner product space V can be uniquely expressed as
T = T₁ + T₂,
where T₁ is self-adjoint and T₂ is skew.
Proof. Let T₁ = ½(T + T*) and T₂ = ½(T − T*).
Then T = T₁ + T₂. ...(1)
Now T₁* = [½(T + T*)]* = ½(T* + T) = T₁.
∴ T₁ is self-adjoint.
Also T₂* = [½(T − T*)]* = ½(T* − T) = −½(T − T*) = −T₂.
∴ T₂ is skew.
Thus T can be expressed in the form (1) where T₁ is self-
adjoint and T₂ is skew.
Now to show that the expression (1) for T is unique. Let
T = U₁ + U₂,
where U₁ is self-adjoint and U₂ is skew.
Then T* = (U₁ + U₂)* = U₁* + U₂* = U₁ − U₂
[∵ U₁ is self-adjoint and U₂ is skew]
∴ ½(T + T*) = U₁ = T₁
and ½(T − T*) = U₂ = T₂.
Hence the expression (1) for T is unique.
Note. If T is a linear operator on an inner product space V
which is not finite-dimensional, then the above result will be still
true provided it is given that T possesses an adjoint.
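Theorem 6's decomposition is immediate for matrices: with T₁ = ½(T + T*) and T₂ = ½(T − T*), T₁ is self-adjoint, T₂ is skew, and T = T₁ + T₂. A sketch on an arbitrary complex matrix of our choosing:

```python
import numpy as np

T = np.array([[1 + 2j, 3 + 0j], [4j, 5 - 1j]])
T_star = T.conj().T

T1 = (T + T_star) / 2  # self-adjoint part
T2 = (T - T_star) / 2  # skew part
```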
Theorem 7. A necessary and sufficient condition that a linear
transformation T on an inner product space V be 0̂ is that (Tα, β) = 0
for all α and β in V.
∴ T is self-adjoint.
Note. If V is finite-dimensional, then we can take advantage
of the fact that T must possess an adjoint. So in that case the con-
verse part of the theorem can be easily proved as follows:
Since (Tα, α) is real for all α in V, therefore
(Tα, α) = conjugate of (Tα, α) = conjugate of (α, T*α) = (T*α, α).
From this, we get for every α in V
(Tα − T*α, α) = 0
⇒ ((T − T*)α, α) = 0
⇒ T − T* = 0̂ [by theorem 8]
⇒ T = T*.
Solved Examples
Example 1. Let V be the vector space V₂(C), with the standard
inner product. Let T be the linear operator defined by
T(1, 0) = (1, −2), T(0, 1) = (i, −1).
If α = (a, b), find T*α. (Meerut 1984P)
Solution. Let B = {(1, 0), (0, 1)}. Then B is the standard
ordered basis for V. It is an orthonormal basis. Let us find [T]_B,
i.e., the matrix of T in the ordered basis B.
We have T(1, 0) = (1, −2) = 1(1, 0) − 2(0, 1)
and T(0, 1) = (i, −1) = i(1, 0) − 1(0, 1).
∴ [T]_B = [ 1   i ]
          [ −2 −1 ]
The matrix of T* in the ordered basis B is the conjugate trans-
pose of the matrix [T]_B.
∴ [T*]_B = [ 1  −2 ]
           [ −i −1 ]
Now (a, b) = a(1, 0) + b(0, 1).
∴ the coordinate matrix of T*(a, b) in the basis B
= [ 1  −2 ] [ a ]  =  [ a − 2b  ]
  [ −i −1 ] [ b ]     [ −ia − b ]   [See page 161]
∴ T*(a, b) = (a − 2b)(1, 0) + (−ia − b)(0, 1)
= (a − 2b, −ia − b).
Example 2. A linear operator T on R² is defined by
T(x, y) = (x + 2y, x − y).
Find the adjoint T*, if the inner product is the standard one.
(Meerut 1977)
∴ [T]_B = [ 1  2 ]
          [ 1 −1 ]
The coordinate matrix of T*(x, y) in the basis B
= [T*]_B [ x ], where [T*]_B = the conjugate transpose of [T]_B.
         [ y ]
Here [T]_B = [ 1+i 1 ] and [T*]_B = [ 1−i 2 ].
             [ 2   1 ]              [ 1   1 ]
We have [T]_B [T*]_B = [ 1+i 1 ] [ 1−i 2 ] = [ 3    3+2i ].
                       [ 2   1 ] [ 1   1 ]   [ 3−2i  5   ]
Also [T*]_B [T]_B = [ 1−i 2 ] [ 1+i 1 ] = [ 6    3−i ].
                    [ 1   1 ] [ 2   1 ]   [ 3+i   2  ]
Now [T]_B[T*]_B ≠ [T*]_B[T]_B ⇒ [TT*]_B ≠ [T*T]_B
⇒ TT* ≠ T*T.
Example 4. If β is a vector in an inner product space, if T is
a linear transformation on that space, and if f(α) = (β, Tα) for every
vector α, then f is a linear functional; find a vector β′ such that
f(α) = (α, β′) for every α.
Solution. It is given that f(α) = (β, Tα) ∀ α ∈ V.
∴ f is a function from V into F.
Let a, b ∈ F and α₁, α₂ ∈ V. Then
Since T* = T, therefore
...(1)
Let [T]_B = A. Then from (1), we get
§ 5. Positive Operators.
Positive Operator. Definition. A linear operator T on an inner
product space V is called positive, in symbols T > 0, if it is self-
adjoint and if (Tα, α) > 0 whenever α ≠ 0. (Meerut 1972, 82)
If α = 0, then (Tα, α) = 0. Thus if T is positive, then (Tα, α) ≥ 0
for all α and (Tα, α) = 0 ⇒ α = 0. Also if T is self-adjoint and if
(Tα, α) ≥ 0 for all α, and (Tα, α) = 0 ⇒ α = 0, then T is positive.
If V is a complex inner product space, then by theorem 10 of § 4,
(Tα, α) ≥ 0 for every α implies that T must be self-adjoint. There-
fore a linear operator T on a complex inner product space is posi-
tive if and only if (Tα, α) > 0 whenever α ≠ 0.
Non-negative operator. Definition. A linear operator T on an
inner product space V is called non-negative, in symbols T ≥ 0, if it
is self-adjoint and if (Tα, α) ≥ 0 for all α in V.
Every positive operator is also a non-negative operator. If T
is a non-negative operator, then (Tα, α) = 0 is possible even if α ≠ 0.
Therefore a non-negative operator may or may not be a positive
operator.
If S and T are two linear operators on an inner product space V,
then we define S > T (or T < S) if S − T > 0.
Note. Some authors call a positive operator by the name
'strictly positive' or 'positive definite'. Also they use the phrase
'positive operator' in place of 'non-negative operator'.
Theorem 1. Let V be an inner product space, and let T be a
linear operator on V. Let p be the function defined on ordered pairs
of vectors α, β in V by
p(α, β) = (Tα, β).
Show that the function p is an inner product on V if and only if T is
a positive operator. (Meerut 1969, 88)
Proof.
The function p obviously satisfies the linearity property.
If a, b ∈ F and α₁, α₂ ∈ V, then
p(aα₁ + bα₂, β) = (T(aα₁ + bα₂), β) = (aTα₁ + bTα₂, β)
= a(Tα₁, β) + b(Tα₂, β) = a p(α₁, β) + b p(α₂, β).
∴ the function p satisfies the linearity property.
Now the function p will be an inner product on V if and only
if p(α, β) = conjugate of p(β, α) and p(α, α) > 0 if α ≠ 0.
= Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ x̄ᵢ xⱼ [∵ by theorem 3 of § 4, (Tαⱼ, αᵢ) = aᵢⱼ]
where the scalars x₁, …, xₙ are not all zero.
If the field is real, then the bars may be omitted. If the field
is complex, then the condition (i) will automatically follow from
the condition (ii) and so it may be omitted.
Now the theorem 8 may be stated as follows:
Let V be a finite-dimensional inner product space and B an
ordered orthonormal basis for V. If T is a linear operator on V,
then T is positive if and only if the matrix of T in the ordered basis
B is positive.
Principal minors of a matrix
Definition. Let A = [aᵢⱼ]ₙₓₙ be a square matrix of order n over
an arbitrary field F. The principal minors of A are the n scalars
defined as
det Aₖ = det [ a₁₁ … a₁ₖ ]
             [  …       ]
             [ aₖ₁ … aₖₖ ],  k = 1, …, n.
We shall now give without proof a criterion for a matrix to
be positive.
Let A be an n×n self-adjoint matrix over the field of real or
complex numbers. Then A is positive if and only if the principal
minors of A are all positive.
If det A is not positive, then the matrix A is not positive.
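The criterion can be sketched on the self-adjoint matrix of Example 2(i) below, A = [1 1+i; 1−i 3], whose principal minors are 1 and 3 − 2 = 1:

```python
import numpy as np

A = np.array([[1, 1 + 1j], [1 - 1j, 3]])

is_self_adjoint = bool(np.allclose(A, A.conj().T))
# leading principal minors det A_k, k = 1, 2 (real, since A is self-adjoint)
minors = [np.linalg.det(A[:k, :k]).real for k in (1, 2)]
is_positive = is_self_adjoint and all(m > 0 for m in minors)
```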
Solved Examples
Example 1. Suppose S and T are two positive linear operators
on an inner product space V. Then show that S + T is also positive.
Solution. Since S and T are both positive, therefore S* = S
and T* = T.
We have (S + T)* = S* + T* = S + T.
∴ S + T is also self-adjoint.
Also if α is any vector in V, then
We have det A = | 1    1+i |  = 3 − 2 = 1.
                | 1−i   3  |
Since A is self-adjoint and the principal minors of A are all
positive, therefore A is a positive matrix.
(ii) The given matrix is not self-adjoint, because its transpose
is not equal to itself. Hence it is not positive.
(iii) Let A denote the given matrix. Obviously A = Aᵀ, i.e.,
A = the transpose of A.
The principal minors of A are all positive, as can be easily
seen. Hence A is positive.
Example 3. Prove that every entry on the main diagonal of a
positive matrix is positive. (Meerut 1976, 83)
Solution. Let A = [aᵢⱼ]ₙₓₙ be a positive matrix. Then
Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ x̄ᵢ xⱼ > 0, ...(1)
where x₁, …, xₙ are any n scalars not all zero. Now suppose that
out of the n scalars x₁, …, xₙ we take xᵢ = 1 and each of the remaining
n − 1 scalars is taken as 0. Then from (1) we conclude that aᵢᵢ > 0.
This holds for each i = 1, …, n. Hence each entry on the main
diagonal of a positive matrix is positive.
§ 6. Unitary operators.
Definitions. Let U and V be two inner product spaces over the
same field F and let T be a linear transformation from U into V.
We say that
(i) T preserves inner products if (Tα, Tβ) = (α, β) for all α, β
in U.
(Tα, Tβ) = ( Σᵢ₌₁ⁿ xᵢαᵢ′, Σⱼ₌₁ⁿ yⱼαⱼ′ ) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ xᵢ ȳⱼ (αᵢ′, αⱼ′)
= Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ xᵢ ȳⱼ δᵢⱼ = Σᵢ₌₁ⁿ xᵢ ȳᵢ = (α, β).
= ||Tγ|| [∵ T is unitary]
= ||γ||.
∴ T⁻¹ is unitary. Hence the result.
Example 2. Show that the determinant of a unitary operator
has absolute value 1.
Solution. Let T be a unitary operator on a finite-dimensional
vector space V. Then T*T = I. Let B be an ordered orthonormal
basis for V. Let A be the matrix of T relative to B. Then
det T = det A.
Also A* will be the matrix of T* with respect to B.
Now T*T = I
⇒ [T*T]_B = [I]_B ⇒ [T*]_B [T]_B = I ⇒ A*A = I
⇒ det (A*A) = det I ⇒ (det A*)(det A) = 1
⇒ (conjugate of det A)(det A) = 1 ⇒ |det A|² = 1 ⇒ |det A| = 1
⇒ det A has absolute value 1
⇒ det T has absolute value 1.
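Example 2 can be illustrated with a concrete unitary matrix (the particular choice below is ours): U = (1/√2)[1 1; i −i] satisfies U*U = I, and its determinant has absolute value 1.

```python
import numpy as np

U = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)  # a unitary matrix

unitary = bool(np.allclose(U.conj().T @ U, np.eye(2)))
d = np.linalg.det(U)  # a complex number of absolute value 1
```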
Example 3. Determine a so that the following matrix is isometric:
(i) A = [  a   −1/2 ]
        [ 1/2    a  ]
Solution. (i) Let
A = [  a   −1/2 ]
    [ 1/2    a  ]
Then A* = [  ā   1/2 ]  = the conjugate transpose of A.
          [ −1/2   ā ]
Now A will be isometric if A*A = I, i.e., if
[  ā   1/2 ] [  a   −1/2 ]  =  [ 1 0 ]
[ −1/2   ā ] [ 1/2    a  ]     [ 0 1 ]
or
[ āa + 1/4      −ā/2 + a/2 ]  =  [ 1 0 ]
[ −a/2 + ā/2     1/4 + āa  ]     [ 0 1 ]
or
āa + 1/4 = 1, a/2 − ā/2 = 0.
From these equations, we get ā = a. Therefore a must be real.
Then we get a² = 3/4. This gives a = ±√3/2.
∴ A is isometric if a = ±√3/2.
(ii) Proceed as in part (i).
Example 4. Show that the following three conditions on a linear
operator T on an inner product space V are equivalent:
(i) T*T = I;
(ii) (Tα, Tβ) = (α, β) for all α and β;
(iii) ||Tα|| = ||α|| for all α. (Meerut 1978, 91)
Solution. (i) ⇒ (ii). It is given that T*T = I. We have
(Tα, Tβ) = (α, T*Tβ)
= (α, Iβ) = (α, β) for all α and β.
(ii) ⇒ (iii). It is given that (Tα, Tβ) = (α, β) for all α and β.
Taking β = α, we get
(Tα, Tα) = (α, α)
⇒ ||Tα||² = ||α||²
⇒ ||Tα|| = ||α|| for all α.
(iii) ⇒ (i). It is given that ||Tα|| = ||α|| for all α. Therefore
(Tα, Tα) = (α, α)
⇒ (T*Tα, α) = (α, α)
⇒ ((T*T − I)α, α) = 0 for all α. ...(1)
Now (T*T − I)* = (T*T)* − I* = T*T − I.
∴ T*T − I is self-adjoint.
Hence from (1), we get T*T − I = 0̂, i.e., T*T = I.
Example 5. If B₁ = {α₁, …, αₙ} is an orthonormal basis of an
n-dimensional unitary space V and if B₂ = {β₁, …, βₙ} is a second
basis of V, then
p̄₁ᵢ p₁ⱼ + … + p̄ₙᵢ pₙⱼ = δᵢⱼ
⇒ P*P = [δᵢⱼ]ₙₓₙ = unit matrix
⇒ P is a unitary matrix.
Conversely suppose that P is a unitary matrix. Then
P*P = unit matrix
⇒ p̄₁ᵢ p₁ⱼ + … + p̄ₙᵢ pₙⱼ = δᵢⱼ
⇒ (βᵢ, βⱼ) = δᵢⱼ
⇒ B₂ is an orthonormal basis.
Example 6. Let B and B′ be two ordered orthonormal bases
for a finite dimensional complex inner product space V. Prove that
for each linear operator T on V, the matrix [T]_B′ is unitarily
equivalent to the matrix [T]_B.
Solution. Let P be the transition matrix from the basis B to
the basis B′. Since B and B′ are orthonormal bases, therefore P
is a unitary matrix.
Therefore P*P = I
⇒ P* = P⁻¹.
Now [T]_B′ = P⁻¹ [T]_B P = P* [T]_B P.
∴ [T]_B′ is unitarily equivalent to the matrix [T]_B.
Exercises
1. Let V be any Euclidean vector space of dimension n with inner
product ( , ). Then V is inner product space isomorphic to
Vₙ(R) with the standard inner product.
2. Prove that in an n-dimensional Euclidean space V the dot
product of the coordinate vectors of any two vectors of V is
invariant under transformation from one orthonormal basis to
another.
3. If B₁ = {α₁, α₂, …, αₙ} is an orthonormal basis of an n-dimen-
sional Euclidean space V and if B₂ = {β₁, …, βₙ} is a second
basis of V, then the basis B₂ is orthonormal if and only if the
transition matrix from the basis B₁ to the basis B₂ is ortho-
gonal.
4. If an isometric matrix is triangular, then it is diagonal.
(Meerut 1969)
5. If P and Q are orthogonal matrices, then PQ, Pᵀ, and P⁻¹ are
orthogonal and det P = ±1.
6. If P and Q are unitary matrices, then so is PQ.
7. Let B and B′ be two ordered orthonormal bases for a finite
dimensional real inner product space V. Prove that for each
linear operator T on V, the matrix [T]_B′ is orthogonally equi-
valent to the matrix [T]_B.
8. Prove that the relation of being 'unitarily similar' is an equi-
valence relation in the set of all n×n complex matrices.
9. Fill up the blanks in the following statements:
(i) Transition matrices expressing a change of orthonormal
bases are...
(ii) Matrices of a linear transformation expressing a change of
orthonormal bases are...
Ans. (i) unitary, (ii) unitarily equivalent.
§ 7. Normal Operators. Definition. Let T be a linear operator
on an inner product space V. Then T is said to be normal if it com-
mutes with its adjoint, i.e., if TT* = T*T. (Meerut 1972, 87)
If V is finite-dimensional, then T* will definitely exist. If V
is not finite-dimensional, then the above definition will make sense
only if T possesses an adjoint.
Note 1. Every self-adjoint operator is normal.
Suppose T is a self-adjoint operator, i.e., T* = T. Then obviously
T*T = TT*. Therefore T is normal.
Note 2. Every unitary operator is normal.
Suppose T is a unitary operator. Then the adjoint T* of T
exists and we have T*T = TT* = I. Therefore T is normal.
Theorem 1. Let T be a normal operator on an inner product
space V. Then a necessary and sufficient condition that α be a charac-
teristic vector of T is that it be a characteristic vector of T*.
(Meerut 1977, 78, 81, 84P, 88, 91)
(UU*)γ = U(U*γ) = U(T*γ) = T(T*γ) = (TT*)γ
= (T*T)γ = T*(Tγ) = T*(Uγ) = U*(Uγ) = (U*U)γ.
∴ UU* = U*U, and thus U is a normal operator on W, whose
dimension is less than the dimension of V.
Proof. Let V be the vector space Cⁿ, with the standard inner
product. Let B denote the standard ordered basis for V and let
T be the linear operator on V which is represented in the standard
ordered basis by the matrix A. Then
[T]_B = A.
Also [T*]_B = A*.
We have
[TT*]_B = [T]_B [T*]_B = AA*
and [T*T]_B = A*A.
If A is a normal matrix, then
AA* = A*A
⇒ [TT*]_B = [T*T]_B
⇒ TT* = T*T ⇒ T is normal.
Since T is a normal operator on a finite dimensional complex
inner product space V, therefore there exists an orthonormal basis,
say B′, for V each vector of which is a characteristic vector of T.
Consequently [T]_B′ will be a diagonal matrix. Now let P be the
transition matrix from the basis B to the basis B′. Since B and B′
are orthonormal bases, therefore P is a unitary matrix. [Note that
the standard ordered basis is an orthonormal basis.] Now P is a
unitary matrix implies that P*P = I and therefore
= (cc̄)(TT*) = (cc̄)(T*T) = (c̄T*)(cT) = (cT)*(cT).
∴ (cT)(cT)* = (cT)*(cT).
Hence cT is normal.
Example 2. If T₁, T₂ are normal operators on an inner product
space with the property that either commutes with the adjoint of the
other, then prove that T₁ + T₂ and T₁T₂ are also normal operators.
(Meerut 1988, 89)
Solution. Since T₁, T₂ are normal operators, therefore
T₁T₁* = T₁*T₁ and T₂T₂* = T₂*T₂. ...(1)
Also it is given that
T₁T₂* = T₂*T₁ and T₂T₁* = T₁*T₂. ...(2)
To prove that T₁ + T₂ is normal. T₁ + T₂ will be normal if
(T₁ + T₂)(T₁ + T₂)* = (T₁ + T₂)*(T₁ + T₂).
We have (T₁ + T₂)(T₁ + T₂)* = (T₁ + T₂)(T₁* + T₂*)
= T₁T₁* + T₁T₂* + T₂T₁* + T₂T₂*
= T₁*T₁ + T₂*T₁ + T₁*T₂ + T₂*T₂ [from (1) and (2)]
= T₁*(T₁ + T₂) + T₂*(T₁ + T₂) = (T₁* + T₂*)(T₁ + T₂)
= (T₁ + T₂)*(T₁ + T₂).
∴ T₁ + T₂ is normal.
Also to prove that T₁T₂ is normal.
We have
(T₁T₂)(T₁T₂)* = T₁T₂T₂*T₁* = T₁(T₂T₂*)T₁*
= T₁(T₂*T₂)T₁* [by (1)]
= (T₁T₂*)(T₂T₁*)
= (T₂*T₁)(T₁*T₂) [by (2)]
= T₂*(T₁T₁*)T₂
= T₂*(T₁*T₁)T₂ [from (1)]
= (T₂*T₁*)(T₁T₂) = (T₁T₂)*(T₁T₂).
∴ T₁T₂ is normal.
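Example 2 can be checked for matrices: two normal matrices that are diagonal in the same orthonormal basis commute with each other's adjoints, and their sum and product are again normal. An illustrative sketch (the particular matrices are ours):

```python
import numpy as np

def is_normal(M):
    return bool(np.allclose(M @ M.conj().T, M.conj().T @ M))

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # real orthogonal, hence unitary
# simultaneously diagonalizable pair Q D Q*, so each commutes
# with the adjoint of the other (the hypothesis of the example)
T1 = Q @ np.diag([1 + 1j, 2 + 0j]) @ Q.conj().T
T2 = Q @ np.diag([3 + 0j, 1 - 2j]) @ Q.conj().T
```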
Example 3. Let T be a linear operator on a finite-dimensional
complex inner product space V. If ||Tα|| = ||T*α|| for all α in V,
then T is normal. (Meerut 1973, 90)
Solution. If α ∈ V, then we have
||Tα|| = ||T*α|| ⇒ ||Tα||² = ||T*α||²
⇒ (Tα, Tα) = (T*α, T*α) ⇒ (T*Tα, α) = (TT*α, α)
⇒ (T*Tα, α) − (TT*α, α) = 0 ⇒ ((T*T − TT*)α, α) = 0.
Thus if ||Tα|| = ||T*α|| for all α, then ((T*T − TT*)α, α) = 0
Thus the result is true for p = k + 1 if it is true for p = k. Hence it is true for all positive integers p.
Now we shall prove that if p is any fixed positive integer, then T*^p T^q = T^q T*^p for all positive integers q. Obviously this result is true for q = 1, as we have just proved it. Now assume that this result is true for q = k, i.e., T*^p T^k = T^k T*^p. Then
T*^p T^(k+1) = (T*^p T^k) T = (T^k T*^p) T = T^k (T*^p T)
= T^k (T T*^p) = T^(k+1) T*^p.
Thus the result is true for q = k + 1 if it is true for q = k. Hence it is true for all positive integers q.
So far we have proved that T*^p T^q = T^q T*^p for all positive integers p and q. Now if p is any positive integer, then
T*^p [f(T)] = T*^p [a₀I + a₁T + ... + aₙT^n]
= a₀T*^p + a₁T*^p T + ... + aₙT*^p T^n
= a₀T*^p + a₁T T*^p + ... + aₙT^n T*^p
= [a₀I + a₁T + ... + aₙT^n] T*^p = [f(T)] T*^p.
Now [f(T)]* f(T) = [ā₀I + ā₁T* + ... + āₙT*^n] f(T)
= ā₀ f(T) + ā₁T* f(T) + ... + āₙT*^n f(T)
= ā₀ f(T) + ā₁ f(T) T* + ... + āₙ f(T) T*^n
= f(T) [ā₀I + ā₁T* + ... + āₙT*^n] = f(T) [f(T)]*.
Hence f(T) is normal.
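The conclusion that a polynomial in a normal operator is again normal can be illustrated numerically. A sketch; the matrix and the polynomial coefficients are arbitrary illustrations, not from the text:

```python
import numpy as np

# A normal matrix T (real orthogonal, hence normal: T T* = T* T).
T = np.array([[0, -1], [1, 0]], dtype=complex)

# Apply the polynomial f(x) = 2 + 3x + x^2 to T.
coeffs = [2, 3, 1]   # a0, a1, a2 (illustrative values)
fT = sum(c * np.linalg.matrix_power(T, k) for k, c in enumerate(coeffs))

# f(T) is again normal: f(T) f(T)* = f(T)* f(T).
assert np.allclose(fT @ fT.conj().T, fT.conj().T @ fT)
```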
Example 11. Show that the minimal polynomial of a normal operator on a finite-dimensional inner product space has distinct roots. (Meerut 1976, 80)

Solution. Let p(x) be the minimal polynomial of a normal operator T on a finite-dimensional inner product space V. Then p(x) is the monic polynomial of lowest degree that annihilates T, i.e., for which p(T) = 0̂. We shall show that p(x) = (x − c₁)...(x − cₖ) where c₁, ..., cₖ are distinct complex numbers. Suppose c₁, ..., cₖ are not all distinct, i.e., suppose some root c of p(x) is repeated. Then p(x) = (x − c)² g(x) for some polynomial g(x).
Now p(T) = 0̂ ⇒ (T − cI)² g(T) = 0̂
⇒ (T − cI)² g(T)α = 0 for all α ∈ V.
Let us set U = T − cI. Since the operator T is normal and x − c is a polynomial with complex coefficients, therefore T − cI = U is a normal operator. [Refer Ex. 10 above]
Now let α be any vector in V and let β = g(T)α. Then
U²β = U² g(T)α = (T − cI)² g(T)α = 0.
Since the operator U is normal, therefore
U²β = 0 ⇒ Uβ = 0 [See Ex. 9]
⇒ (T − cI)β = 0
⇒ (T − cI) g(T)α = 0 for all α ∈ V
⇒ (T − cI) g(T) = 0̂.
∴ (cᵢ − cⱼ)(α, β) = 0
⇒ (α, β) = 0 [∵ cᵢ ≠ cⱼ]
⇒ α and β are orthogonal.
Theorem 4. Every root of the characteristic equation of a self-adjoint operator on a finite-dimensional inner product space is real.
Proof. Suppose T is a self-adjoint linear operator on a finite-dimensional inner product space V. If V is a complex vector space, then every root of the characteristic equation of T is also a characteristic value of T and so is real by Theorem 1.
If V is a real vector space, then it is possible to show that there exists a Hermitian operator T⁺ on some complex inner product space such that T and T⁺ have the same characteristic equation. Now every root of the characteristic equation of T⁺ is also a characteristic value of T⁺. So it must be real. Hence every root of the characteristic equation of T must also be real.
Corollary. Every self-adjoint operator on a finite-dimensional inner product space has a characteristic value and consequently a characteristic vector. (Meerut 1973)
Theorem 5. Let V be a finite-dimensional inner product space and let T be a self-adjoint linear operator on V. Then there is an orthonormal basis B for V, each vector of which is a characteristic vector for T, and consequently the matrix of T with respect to B is a diagonal matrix.
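Theorems 4 and 5 can be illustrated numerically: for a real symmetric (self-adjoint) matrix, numpy's `eigh` returns real characteristic values and an orthonormal basis of characteristic vectors that diagonalizes the matrix. A sketch, with an arbitrarily chosen matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # self-adjoint: A == A.T

vals, Q = np.linalg.eigh(A)              # columns of Q: orthonormal eigenbasis B

assert np.all(np.isreal(vals))           # Theorem 4: every root is real
assert np.allclose(Q.T @ Q, np.eye(3))   # B is an orthonormal basis
D = Q.T @ A @ Q                          # matrix of T with respect to B
assert np.allclose(D, np.diag(vals))     # Theorem 5: it is diagonal
```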
or ||β||² ≤ 0
or ||β|| = 0 [∵ ||β|| cannot be negative]
or β = 0.
Putting β = 0 in the relation β = Eα − α, we get
Eα = α
⇒ α ∈ the range of E
⇒ α ∈ R.
Thus α ∈ N⊥ ⇒ α ∈ R. So N⊥ ⊆ R.
Conversely, let γ ∈ R. Then Eγ = γ.
Since V = N⊥ ⊕ N, therefore let γ = γ₁ + γ₂ where γ₁ ∈ N⊥ and γ₂ ∈ N. Since γ₂ ∈ N, therefore Eγ₂ = 0. Also γ₁ ∈ N⊥ ⇒ γ₁ ∈ R because we have just proved that N⊥ ⊆ R. Now γ₁ ∈ R ⇒ Eγ₁ = γ₁. We have
γ = Eγ = E(γ₁ + γ₂) = Eγ₁ + Eγ₂ = γ₁ + 0 = γ₁.
Since γ₁ ∈ N⊥, therefore γ ∈ N⊥.
Thus γ ∈ R ⇒ γ ∈ N⊥. So R ⊆ N⊥.
Hence R = N⊥.
Definition. Two perpendicular projections E and F are said to be orthogonal if EF = 0̂.
Also if E, F are perpendicular projections, then
EF = 0̂ ⇔ (EF)* = 0̂* ⇔ F*E* = 0̂ ⇔ FE = 0̂.
Proof. Let EF = 0̂ and let α ∈ W₁ and β ∈ W₂. Since W₁ is the range of E, therefore Eα = α. Also W₂ is the range of F. Therefore Fβ = β. We have
(α, β) = (Eα, Fβ) = (α, E*Fβ)
= (α, EFβ) [∵ E* = E]
= (α, 0̂β) [∵ EF = 0̂]
= (α, 0) = 0.
∴ W₁ and W₂ are orthogonal.
Conversely, let W₁ and W₂ be orthogonal. If β is any vector in W₂, then β is orthogonal to every vector in W₁. So β ∈ W₁⊥. Consequently W₂ ⊆ W₁⊥.
Let α ∈ R(E), the range of E. Then Eα = α, and
α = Eα = (E₁ + ... + Eₙ)α = E₁α + ... + Eₙα ∈ W, since Eᵢα ∈ Wᵢ (i.e., the range of Eᵢ) for each i.
∴ R(E) ⊆ W.
Conversely, let α ∈ W. Then α = α₁ + ... + αₙ where αⱼ ∈ Wⱼ.
∴ α is in the range of E.
Hence W ⊆ R(E).
∴ W = R(E).
This completes the proof of the theorem.
§10. The Spectral Theorem.
Theorem 1. (Spectral theorem for a normal operator.) To every normal operator T on a finite-dimensional complex inner product space V there correspond distinct complex numbers c₁, ..., cₖ and perpendicular projections E₁, ..., Eₖ (where k is a strictly positive integer, not greater than the dimension of the space) so that
T = c₁E₁ + ... + cₖEₖ.
Also E₁ + ... + Eₖ = I
and EᵢEⱼ = 0̂ for i ≠ j.
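The resolution asserted by the theorem can be built explicitly from an eigendecomposition. A numpy sketch, using a real symmetric (hence normal) matrix with eigenvalues 1, 1, 3 as an arbitrary illustration:

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])          # normal (symmetric), eigenvalues 1, 1, 3
vals, Q = np.linalg.eigh(T)

# Group eigenvectors by distinct eigenvalue c_i and form
# E_i = sum of outer products q q* over that group
# (the perpendicular projection on the eigenspace W_i).
cs = sorted(set(np.round(vals, 10)))
Es = [sum(np.outer(Q[:, j], Q[:, j]) for j in range(3)
          if np.isclose(vals[j], c)) for c in cs]

assert np.allclose(sum(c * E for c, E in zip(cs, Es)), T)   # T = sum c_i E_i
assert np.allclose(sum(Es), np.eye(3))                      # E_1 + ... + E_k = I
assert np.allclose(Es[0] @ Es[1], 0)                        # E_i E_j = 0 for i != j
```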
For each i, the subspace Wᵢ is the range of Eᵢ. For i ≠ j, the subspaces Wᵢ and Wⱼ are orthogonal. For if α ∈ Wᵢ, β ∈ Wⱼ, then
(α, β) = (Eᵢα, Eⱼβ) [∵ α ∈ the range of Eᵢ ⇒ Eᵢα = α, etc.]
= (α, Eᵢ*Eⱼβ)
= (α, EᵢEⱼβ) [∵ Eᵢ, being a perpendicular projection, is self-adjoint]
= (α, 0̂β) = (α, 0) = 0.
Now let B₁, ..., Bₖ be orthonormal bases for the subspaces W₁, ..., Wₖ respectively. Then we claim that B = ∪Bᵢ, i.e., the union of the Bᵢ, is an orthonormal basis for V. Obviously B is an orthonormal set, because each Bᵢ is an orthonormal set and any vector in Bᵢ is orthogonal to any vector in Bⱼ if i ≠ j. Note that the vectors in Bᵢ are some elements of Wᵢ, the vectors in Bⱼ are some elements of Wⱼ, and the subspaces Wᵢ and Wⱼ are orthogonal for i ≠ j.
Since B is an orthonormal set, therefore B is linearly independent.
Now B will be a basis for V if we prove that B generates V. Let γ be any vector in V. Then
γ = Iγ = (E₁ + ... + Eₖ)γ = α₁ + ... + αₖ, where αᵢ = Eᵢγ.
Since Eᵢγ is in the range of Eᵢ, therefore αᵢ is in Wᵢ. So for each i the vector αᵢ can be expressed as a linear combination of the vectors in Bᵢ, which is a basis for Wᵢ. Therefore γ can be expressed as a linear combination of the vectors in B.
(c) EᵢEⱼ = 0̂ if i ≠ j.
Then c₁, ..., cₖ are precisely the distinct characteristic values of T. Also for each i, Eᵢ is a polynomial in T and is the perpendicular projection of V on the characteristic space of the characteristic value cᵢ. In short, the decomposition of T given in (a) is the spectral resolution of T.
Proof. (i) First we shall prove that for each i, Eᵢ is a projection.
We have
Eᵢ = EᵢI = Eᵢ(E₁ + ... + Eₖ)
⇒ Eᵢ = EᵢE₁ + ... + EᵢEₖ
⇒ Eᵢ = Eᵢ² [∵ EᵢEⱼ = 0̂ if i ≠ j]
⇒ Eᵢ is a projection.
(ii) Now we shall prove that c₁, ..., cₖ are precisely the distinct characteristic values of T.
First we shall show that for each i, cᵢ is a characteristic value of T. Let α be a non-zero vector in the range of Eᵢ, so that α = Eᵢα.
Now Tα = (c₁E₁ + ... + cₖEₖ)α
= (c₁E₁ + ... + cₖEₖ)Eᵢα [∵ α = Eᵢα]
= c₁E₁Eᵢα + ... + cₖEₖEᵢα
= cᵢEᵢ²α [∵ EⱼEᵢ = 0̂ if j ≠ i]
= cᵢEᵢα [∵ Eᵢ² = Eᵢ]
= cᵢα.
∴ cᵢ is a characteristic value of T.
Since T is a linear operator on a finite-dimensional complex inner product space, therefore T must possess a characteristic value. Suppose c is a characteristic value of T. Then there exists a non-zero vector α such that
Tα = cα
⇒ Tα = cIα
⇒ (c₁E₁ + ... + cₖEₖ)α = c(E₁ + ... + Eₖ)α
⇒ (c₁ − c)E₁α + ... + (cₖ − c)Eₖα = 0.
Operating on this with Eᵢ and remembering that Eᵢ² = Eᵢ and EᵢEⱼ = 0̂ if i ≠ j, we obtain
(cᵢ − c)Eᵢα = 0 for i = 1, ..., k.
If cᵢ ≠ c for each i, then we have Eᵢα = 0 for each i. Therefore
E₁α + ... + Eₖα = 0
⇒ (E₁ + ... + Eₖ)α = 0
⇒ Iα = 0
⇒ α = 0.
This contradicts the fact that α ≠ 0. Hence c must be equal to cᵢ for some i.
(iii) Now we shall prove that Eᵢ is a projection on Wᵢ where Wᵢ is the characteristic space of the characteristic value cᵢ. For this we shall prove that the range of Eᵢ is Wᵢ.
Remember that α is in the range of Eᵢ iff Eᵢα = α, and α is in Wᵢ iff Tα = cᵢα.
Let α ∈ Wᵢ. Then
Tα = cᵢα = cᵢIα = cᵢ(E₁ + ... + Eₖ)α
⇒ (c₁E₁ + ... + cₖEₖ)α = (cᵢE₁ + ... + cᵢEₖ)α
⇒ (c₁ − cᵢ)E₁α + ... + (cₖ − cᵢ)Eₖα = 0.
Operating with Eⱼ, we get
(c₁ − cᵢ)EⱼE₁α + ... + (cₖ − cᵢ)EⱼEₖα = 0
⇒ (cⱼ − cᵢ)Eⱼ²α = 0
⇒ (cⱼ − cᵢ)Eⱼα = 0
⇒ Eⱼα = 0 if j ≠ i [∵ c₁, ..., cₖ are all distinct]
Then α = Iα = E₁α + ... + Eₖα = Eᵢα.
∴ T is self-adjoint.
(ii) Suppose each characteristic value of T is non-negative, i.e., cᵢ ≥ 0 for each i. Then to prove that T is non-negative.
Since cᵢ ≥ 0 for each i, therefore each cᵢ is real. Consequently by case (i), T is self-adjoint. Now
(Tα, α) = (Tα, Iα)
= (c₁E₁α + ... + cₖEₖα, E₁α + ... + Eₖα)
= Σᵢ Σⱼ cᵢ(Eᵢα, Eⱼα)
= Σᵢ cᵢ(Eᵢα, Eᵢα) [∵ (Eᵢα, Eⱼα) = 0 for i ≠ j]
= Σᵢ cᵢ ||Eᵢα||².
Since ||Eᵢα||² ≥ 0, therefore if cᵢ ≥ 0 for each i, then (Tα, α) ≥ 0 for all α in V.
Hence T is non-negative.
(iii) Suppose each characteristic value of T is different from zero, i.e., cᵢ ≠ 0 for each i.
Consider the linear operator
S = (1/c₁)E₁ + ... + (1/cₖ)Eₖ,
where T = c₁E₁ + ... + cₖEₖ is the spectral resolution of T.
Then T* = (c₁E₁ + ... + cₖEₖ)*
= c̄₁E₁* + ... + c̄ₖEₖ*
= c̄₁E₁ + ... + c̄ₖEₖ. [∵ each Eᵢ is self-adjoint]
But we know that each Eᵢ, where i = 1, ..., k, is a polynomial in T. Therefore T* is also a polynomial in T.
Example 5. Let T be a normal operator on a finite-dimensional complex inner product space V. Then every subspace of V which is invariant under T is also invariant under T*. (Meerut 1973, 74, 87)
Solution. Let W be a subspace of V which is invariant under T. We have T²α = T(Tα).
Since W is invariant under T, therefore α ∈ W ⇒ Tα ∈ W. Consequently T(Tα), i.e., T²α, is also in W. Thus W is invariant under T².
= |c₁|²E₁² + ... + |cₖ|²Eₖ².
Example 8. Let W₁, ..., Wₖ be subspaces of an inner product space V, and let Eᵢ be the perpendicular projection on Wᵢ, i = 1, ..., k. Then the following two statements are equivalent:
(i) V = W₁ ⊕ ... ⊕ Wₖ, and this is an orthogonal direct sum, i.e., the subspaces Wᵢ and Wⱼ are orthogonal for i ≠ j.
(ii) I = E₁ + ... + Eₖ and EᵢEⱼ = 0̂ for i ≠ j.
Solution. (i) ⇒ (ii).
For each i the subspace Wᵢ is the range of the perpendicular projection Eᵢ. Since Wᵢ and Wⱼ are orthogonal subspaces for i ≠ j, therefore EᵢEⱼ = 0̂ for i ≠ j.
Let α ∈ V. Since V is the direct sum of W₁, ..., Wₖ, therefore we can write
α = α₁ + ... + αₖ where αᵢ ∈ Wᵢ, i = 1, ..., k.
Now αᵢ ∈ Wᵢ ⇔ Eᵢαᵢ = αᵢ. Therefore we have
α = E₁α₁ + ... + Eₖαₖ
⇒ (E₁ + ... + Eₖ)α = (E₁ + ... + Eₖ)(E₁α₁ + ... + Eₖαₖ)
⇒ (E₁ + ... + Eₖ)α = E₁²α₁ + ... + Eₖ²αₖ [∵ EᵢEⱼ = 0̂ for i ≠ j]
⇒ (E₁ + ... + Eₖ)α = E₁α₁ + ... + Eₖαₖ = α [∵ Eᵢ² = Eᵢ]
∴ (E₁ + ... + Eₖ)α = Iα for all α in V.
∴ E₁ + ... + Eₖ = I.
(ii) ⇒ (i).
Let α ∈ V. Then
α = Iα = (E₁ + ... + Eₖ)α = E₁α + ... + Eₖα. ...(1)
For each i the vector Eᵢα is in the range of Eᵢ, i.e., in Wᵢ. Therefore (1) is an expression for α as a sum of vectors, one from each subspace Wᵢ. We shall show that this expression is unique.
Let α = α₁ + ... + αₖ with αᵢ in Wᵢ. Then Eᵢαᵢ = αᵢ. Therefore
Eᵢα = EᵢE₁α₁ + ... + EᵢEₖαₖ.
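The equivalence of Example 8 can be illustrated with a concrete orthogonal direct sum of R³; the subspaces and the vector below are arbitrary illustrations:

```python
import numpy as np

# Orthogonal direct sum R^3 = W1 (+) W2, with W1 = span{e1, e2}, W2 = span{e3}.
E1 = np.diag([1.0, 1.0, 0.0])   # perpendicular projection on W1
E2 = np.diag([0.0, 0.0, 1.0])   # perpendicular projection on W2

assert np.allclose(E1 + E2, np.eye(3))   # statement (ii): I = E1 + E2
assert np.allclose(E1 @ E2, 0)           # statement (ii): E1 E2 = 0

# Statement (i): every alpha splits as E1(alpha) + E2(alpha),
# one summand from each subspace.
alpha = np.array([3.0, -1.0, 2.0])
a1, a2 = E1 @ alpha, E2 @ alpha
assert np.allclose(a1 + a2, alpha)
```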
Suppose U and V are two vector spaces over the same field F. Let
W = {(α, β) : α ∈ U, β ∈ V}.
If (α₁, β₁) and (α₂, β₂) are two elements in W, then we define their equality as follows:
(α₁, β₁) = (α₂, β₂) if α₁ = α₂ and β₁ = β₂.
Also we define the sum of (α₁, β₁) and (α₂, β₂) as follows:
(α₁, β₁) + (α₂, β₂) = (α₁ + α₂, β₁ + β₂).
If c is any element in F and (α, β) is any element in W, then we define scalar multiplication in W as follows:
c(α, β) = (cα, cβ).
It can be easily shown that with respect to addition and scalar multiplication as defined above, W is a vector space over the field F. We call W the external direct product of the vector spaces U and V and we shall write W = U ⊕ V.
Now we shall consider some special types of scalar-valued functions on W known as bilinear forms.
§ 1. Bilinear forms.
Definition. Let U and V be two vector spaces over the same field F. A bilinear form on W = U ⊕ V is a function f from W into F, which assigns to each element (α, β) in W a scalar f(α, β) in such a way that
f(aα₁ + bα₂, β) = a f(α₁, β) + b f(α₂, β)
and f(α, aβ₁ + bβ₂) = a f(α, β₁) + b f(α, β₂). (Meerut 1975)
Here f(α, β) is an element of F. It denotes the image of (α, β) under the function f. Thus a bilinear form on W is a function from W into F which is linear as a function of either of its arguments when the other is fixed.
If U = V, then in place of saying that f is a bilinear form on W = V ⊕ V, we shall simply say that f is a bilinear form on V.
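A concrete bilinear form on R³ × R² can be written as f(α, β) = αᵀAβ for a matrix A, and its linearity in each argument checked numerically. A sketch; the matrix and vectors are arbitrary illustrations:

```python
import numpy as np

# The bilinear form f(alpha, beta) = alpha^T A beta on R^3 x R^2.
A = np.array([[1.0, 2.0],
              [0.0, -1.0],
              [3.0, 1.0]])
f = lambda alpha, beta: alpha @ A @ beta

a1, a2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, -1.0])
b1, b2 = np.array([2.0, 3.0]), np.array([-1.0, 4.0])
a, b = 5.0, -2.0

# Linear in the first argument with the second fixed, and vice versa:
assert np.isclose(f(a*a1 + b*a2, b1), a*f(a1, b1) + b*f(a2, b1))
assert np.isclose(f(a1, a*b1 + b*b2), a*f(a1, b1) + b*f(a1, b2))
```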
= a 0̂(α₁, β) + b 0̂(α₂, β).
Also 0̂(α, aβ₁ + bβ₂) = 0 = a·0 + b·0
= a 0̂(α, β₁) + b 0̂(α, β₂).
Solved Examples
Let α = x₁α₁ + ... + xₙαₙ ∈ U and β = y₁β₁ + ... + yₘβₘ ∈ V. Let us define a function f from U × V into F such that
f(α, β) = Σ_{i,j} x_i y_j a_ij (i = 1, ..., n; j = 1, ..., m). ...(1)
Let α₁ = Σᵢ aᵢαᵢ and α₂ = Σᵢ bᵢαᵢ. Then
f(α₁, β) = Σ_{i,j} a_i y_j a_ij and f(α₂, β) = Σ_{i,j} b_i y_j a_ij.
∴ f(aα₁ + bα₂, β) = Σ_{i,j} (a a_i + b b_i) y_j a_ij
= a Σ_{i,j} a_i y_j a_ij + b Σ_{i,j} b_i y_j a_ij
= a f(α₁, β) + b f(α₂, β).
Similarly, we can prove that if a, b ∈ F and β₁, β₂ ∈ V, then
f(α, aβ₁ + bβ₂) = a f(α, β₁) + b f(α, β₂).
Therefore f is a bilinear form on U × V.
Now αᵢ = 0α₁ + ... + 0αᵢ₋₁ + 1αᵢ + 0αᵢ₊₁ + ... + 0αₙ
and βⱼ = 0β₁ + ... + 0βⱼ₋₁ + 1βⱼ + 0βⱼ₊₁ + ... + 0βₘ.
Therefore from (1), we have
f(αᵢ, βⱼ) = a_ij.
Thus there exists a bilinear form f on U × V such that f(αᵢ, βⱼ) = a_ij.
Now to show that f is unique.
Let g be a bilinear form on U × V such that
g(αᵢ, βⱼ) = a_ij. ...(2)
Then g(α, β) = g(Σᵢ xᵢαᵢ, Σⱼ yⱼβⱼ)
= Σ_{i,j} x_i y_j g(αᵢ, βⱼ) [∵ g is a bilinear form]
= Σ_{i,j} x_i y_j a_ij [from (2)]
= f(α, β).
∴ g = f, and so f is unique.
If α = Σᵢ xᵢαᵢ and β = Σⱼ yⱼβⱼ are vectors, then the bilinear form f_pq is defined by
f_pq(α, β) = x_p y_q. ...(2)
Now f(α, β) = f(Σᵢ xᵢαᵢ, Σⱼ yⱼβⱼ)
= Σ_{i,j} x_i y_j f(αᵢ, βⱼ)
= Σ_{i,j} x_i y_j a_ij
= Σ_{p,q} a_pq x_p y_q
= Σ_{p,q} a_pq [f_pq(α, β)] [from (2)]
= (Σ_{p,q} a_pq f_pq)(α, β).
∴ f = Σ_{p,q} a_pq f_pq (p = 1, ..., n; q = 1, ..., m).
Now suppose Σ_{p,q} b_pq f_pq = 0̂, the zero bilinear form. Then
(Σ_{p,q} b_pq f_pq)(α, β) = 0̂(α, β) = 0 for all α in U and all β in V
⇒ Σ_{p,q} b_pq f_pq(α, β) = 0 for all α, β
⇒ b_pq = 0, p = 1, ..., n; q = 1, ..., m [taking α = α_p, β = β_q]
⇒ B is linearly independent.
∴ B is a basis for L(U, V, F).
∴ dim L(U, V, F) = number of vectors in B = mn.
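The basis {f_pq} and the count dim L(U, V, F) = mn can be illustrated in matrix terms: f_pq corresponds to the matrix with a single 1 in position (p, q), and any form's matrix A decomposes as Σ a_pq f_pq. A sketch with illustrative sizes n = 3, m = 2:

```python
import numpy as np

n, m = 3, 2   # dim U = 3, dim V = 2 (illustrative sizes)

def f_pq(p, q):
    """Matrix of the elementary bilinear form f_pq(alpha, beta) = x_p y_q."""
    E = np.zeros((n, m))
    E[p, q] = 1.0
    return E

# Any bilinear form with matrix A is sum_{p,q} a_pq f_pq, so the
# n*m forms f_pq span L(U, V, F); A here is an arbitrary example.
A = np.arange(6, dtype=float).reshape(n, m)
B = sum(A[p, q] * f_pq(p, q) for p in range(n) for q in range(m))
assert np.allclose(B, A)      # f = sum a_pq f_pq; dim L(U, V, F) = n*m = 6
```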
∴ γ = α + β ∈ W₁ + W₂. Hence V = W₁ + W₂.
∴ V = W₁ ⊕ W₂.
So dim W₂ = dim V − dim W₁ = n − 1.
Now let g be the restriction of f to W₂. Then g is a symmetric bilinear form on W₂, and dim W₂ is less than dim V.
Now we may assume by induction that W₂ has a basis {α₂, ..., αₙ} such that
g(αᵢ, αⱼ) = 0, i ≠ j (i ≥ 2, j ≥ 2)
⇒ f(αᵢ, αⱼ) = 0, i ≠ j (i ≥ 2, j ≥ 2) [∵ g is the restriction of f]
Now by definition of W₂, we have
f(α₁, αⱼ) = 0 for j = 2, 3, ..., n.
Since {α₁} is a basis for W₁ and V = W₁ ⊕ W₂, therefore {α₁, ..., αₙ} is a basis for V such that
f(αᵢ, αⱼ) = 0 for i ≠ j.
Corollary. Let F be a subfield of the complex numbers, and let A be a symmetric n × n matrix over F. Then there is an invertible n × n matrix P over F such that P' A P is diagonal.
Proof. Let V be a finite-dimensional vector space over the field F and let B be an ordered basis for V. Let f be the bilinear form on V such that [f]_B = A. Since A is a symmetric matrix, therefore the bilinear form f is also symmetric. Therefore by the above theorem there exists an ordered basis B' of V such that [f]_B' is a diagonal matrix. If P is the transition matrix from B to B', then P is an invertible matrix and
[f]_B' = P' A P,
i.e., P' A P is a diagonal matrix.
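Over the real numbers, one invertible P that works in the corollary is the orthogonal matrix from an eigendecomposition (for an orthogonal P, P' A P is a congruence and also a similarity). A sketch with an arbitrary symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric matrix over R
vals, P = np.linalg.eigh(A)              # P orthogonal, hence invertible

D = P.T @ A @ P                          # the congruence P' A P
assert np.allclose(D, np.diag(vals))     # P' A P is diagonal
assert np.allclose(P.T @ P, np.eye(2))   # P is orthogonal, so invertible
```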
§ 4. Skew-symmetric bilinear forms.
Definition. Let f be a bilinear form on the vector space V. Then f is said to be skew-symmetric if f(α, β) = −f(β, α) for all vectors α, β in V.
Theorem 1. Every bilinear form on a vector space V over a subfield F of the complex numbers can be uniquely expressed as the sum of a symmetric and a skew-symmetric bilinear form.
f(α, β) = X' A Y and f(β, α) = Y' A X.
f will be skew-symmetric if and only if
X' A Y = −Y' A X
for all column matrices X and Y.
Now X' A Y is a 1 × 1 matrix, therefore we have
X' A Y = (X' A Y)' = Y' A' (X')' = Y' A' X.
∴ f will be skew-symmetric if and only if
Y' A' X = −Y' A X for all column matrices X and Y,
i.e., A' = −A,
i.e., A is skew-symmetric.
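In matrix terms, the unique decomposition of Theorem 1 is A = (A + A')/2 + (A − A')/2: the first summand is symmetric, the second skew-symmetric. A sketch with an arbitrary form matrix:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])     # matrix of an arbitrary bilinear form

S = (A + A.T) / 2              # symmetric part
K = (A - A.T) / 2              # skew-symmetric part

assert np.allclose(S, S.T)     # S' = S
assert np.allclose(K, -K.T)    # K' = -K
assert np.allclose(S + K, A)   # the decomposition of Theorem 1
```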
§ 5. Groups Preserving Bilinear Forms.
Definition. Let f be a bilinear form and T be a linear operator on a vector space V over the field F. We say that T preserves f if
f(Tα, Tβ) = f(α, β) for all α, β in V.
The identity operator I preserves every bilinear form. For if f is any bilinear form, then
f(Iα, Iβ) = f(α, β) for all α, β in V.
If S and T are linear operators which preserve f, then the product ST also preserves f. In fact, for all α, β in V we have
f(STα, STβ) = f(Tα, Tβ) [∵ S preserves f]
= f(α, β). [∵ T preserves f]
∴ ST preserves f.
Therefore if G is the set of all linear operators on V which preserve a given bilinear form f on V, then G is closed with respect to the operation of product of two linear operators.
Theorem. Let f be a non-degenerate bilinear form on a finite-dimensional vector space V. The set G of all linear operators on V which preserve f is a group under the operation of composition.
Proof. If S, T are elements of G, i.e., if S and T are linear operators on V preserving f, then ST is also a linear operator on V preserving f. Therefore ST is also an element of G. Thus G is closed with respect to the operation of product of two linear operators.
The product of linear operators is an associative operation because the product of functions is associative.
The identity operator I on V also preserves f. Therefore I is an element of G.
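A familiar instance of this group: the dot product on R² is a non-degenerate bilinear form, and the rotations preserve it, along with their products and inverses. A numpy sketch; the angles and vectors are arbitrary illustrations:

```python
import numpy as np

def rot(t):
    """Rotation of R^2 by angle t, an operator preserving the dot product."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

S, T = rot(0.7), rot(-1.2)
a, b = np.array([1.0, 2.0]), np.array([-3.0, 0.5])

f = lambda x, y: x @ y                               # the dot product form
assert np.isclose(f(S @ a, S @ b), f(a, b))          # S preserves f
assert np.isclose(f(S @ T @ a, S @ T @ b), f(a, b))  # so does the product ST
Sinv = np.linalg.inv(S)
assert np.isclose(f(Sinv @ a, Sinv @ b), f(a, b))    # and the inverse of S
```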
2. (a) Show that there exists a basis for each finitely generated space and that any two bases of such a space have the same number of vectors. (Theorems pages 62-63)
(b) Let U and V be vector spaces over the field F and let T be a linear transformation from U into V. Suppose that U is finite-dimensional. Then
rank (T) + nullity (T) = dim U. (Theorem 2 page 112)
If T² − T + I = 0̂, then T is invertible. (Ex. 16 page 148)
(b) Define an inner product space. If α, β are vectors in a complex inner product space, prove that
4(α, β) = ||α + β||² − ||α − β||² + i||α + iβ||² − i||α − iβ||². (Page 284, Ex. 8 page 301)
1. (a) Show that each set of (n + 1) or more vectors of an n-dimensional vector space V(F) is linearly dependent. (Theorem 2 page 65)
(b) Show that each subspace W of a finite-dimensional vector space V(F) of dimension n is a finite-dimensional space with dim W ≤ n. Also V = W if and only if dim V = dim W. (Theorem 6 page 66)
2. (a) Show that if a finite-dimensional vector space V(F) is a direct sum of two subspaces W₁ and W₂, then dim V = dim W₁ + dim W₂. (Theorem page 95)
(b) Show that a linear transformation T on a finite-dimensional vector space V is invertible iff it is non-singular. (Theorem 9 page 138)
3. (a) Show that the dual space V* of an n-dimensional vector space V is also n-dimensional. (Theorem 2 page 195)
(b) Let W₁ and W₂ be subspaces of a finite-dimensional vector space V over the field F. Prove that
(i) (W₁ + W₂)⁰ = W₁⁰ ∩ W₂⁰.
(ii) (W₁ ∩ W₂)⁰ = W₁⁰ + W₂⁰. (Ex. 5 page 210)
4. (a) Show that the distinct characteristic vectors of a linear transformation T corresponding to distinct characteristic values are linearly independent. (Theorem 4 page 249)
(b) Let E be a projection on a vector space V and let T be a linear operator on V. Prove that both the range and the null space of E are invariant under T if and only if ET = TE. (Theorem 2 page 227)
5. (a) If α, β are vectors in an inner product space V, prove that … (Theorem 4 page 292)
(b) Show that every orthonormal set in an inner product space is linearly independent and that every finite-dimensional inner product space has an orthonormal basis. (Theorem 5 page 308; Theorem 7 page 311)
6. (a) If A and B be self-adjoint operators on an inner product space V, then a necessary and sufficient condition that AB be self-adjoint is that AB = BA. (Ex. 6 page 343)
INDEX

A
Abelian group 2
Addition of vectors 7
Adjoint of a linear operator on an inner product space 332
Adjoint of a linear transformation 234
Algebra 132
Annihilating polynomial 274
Annihilator 205

B
Basis of a vector space 60
Bessel's inequality 315
Bilinear forms 397
Binary operation 1

C
Cauchy's inequality 293
Cayley-Hamilton theorem 255
Characteristic equation of a matrix 251
Characteristic space 248
Characteristic value 247
Characteristic vector 247
Characteristic vector of a matrix 251
Column rank of a matrix 156
Complementary subspaces 97
Complete orthonormal set 308
Complex vector space 8
Conjugate space 194
Conjugate transpose of a matrix 295
Coordinate matrix of a vector 104
Coordinates of a vector 103

D
Determinant of a linear transformation 173
Determinant of a matrix 154
Diagonalizable matrix 257
Diagonalizable operator 257
Diagonal matrix 153
Dimension of a quotient space 93
Dimension of a subspace 66
Dimension of a vector space 64
Direct sum of spaces 94
Disjoint subspaces 94
Distance in an inner product space 294
Dot product 286
Dual basis 195
Dual space 194

E
Eigen value 247
Eigen vector 247
Euclidean space 285
External composition 7

F
Field 4
Finite dimensional vector space 61
Finitely generated vector space 61

G
Gram-Schmidt process 312
Group 2
Groups preserving bilinear forms 414

H
Hermitian matrix 295
Homomorphism of vector spaces 80

I
Identity operator 109
Independence of subspaces 99
Infinite dimensional space 61
Inner product 284
Inner product space 285
Inner product space isomorphism 355
Invariant subspace 213
Inverse of a matrix 155
Invertible linear transformation 133
Isometry 354
Isomorphism of vector spaces 81

L
Linear combination 36
Linear dependence of vectors 42
Linear functional 188
Linear independence of vectors 42
Linearity property of inner product 285
Linear operator 107
Linear transformation 80, 107
Linear space 7
Linear span 36
Linear sum of two subspaces 38

M
Matrix 153
Matrix of a bilinear form 402
Matrix of a linear transformation 157
Minimal polynomial of a matrix 275
Minimal polynomial of an operator 275

N
Non-negative operator 348
Non-singular transformation 136
Normal matrix 366
Normal operator 363
Normed vector space 294
Norm of a vector 289
Nullity of a linear transformation 112
Null matrix 153
Null space of a L.T. 111

O
Ordered basis 103
Ordered n-tuple 11
Orthogonal complement 317
Orthogonal dimension 309
Orthogonally equivalent matrices 359
Orthogonal matrix 359
Orthogonal projections 377
Orthogonal set 305
Orthogonal vectors 304
Orthonormal basis 312
Orthonormal set 306

P
Parallelogram law 299
Perpendicular projections 377
Polynomials of linear operators 132
Positive matrix 352
Positive operator 348
Product of linear transformations 126
Projections 219
Projection theorem for inner product spaces 318
Proper subspace 27
Proper values 247
Pythagorean theorem 321

Q
Quadratic form 409
Quotient space 92

R
Range of a linear transformation 110
Rank of a bilinear form 403
Rank of a linear transformation 112
Real part of an operator 337
Real vector space 8
Reducibility 214
Reflexivity of a finite dimensional vector space 200
Ring of linear operators 131
Row rank of a matrix 156

S
Scalar multiplication 8
Scalars 7
Schwarz's inequality 293
Second dual space 199
Self-adjoint matrix 295
Self-adjoint transformation 335
Similar matrices 168
Similar transformations 169
Singular transformation 136
Skew-Hermitian operator 336
Skew-symmetric operator 336
Spectral theorem 382, 384
Spectral values 247
Standard basis 61
Standard inner product 285, 286
Subfield 5
Subspace 24
Sylvester's law of nullity 246
Symmetric bilinear forms 409
Symmetric matrix 295

T
Trace of a linear transformation 174
Trace of a matrix 173
Transition matrix 167
Transpose of a linear transformation 234
Transpose of a matrix 154
Triangle inequality 292

U
Unitarily equivalent matrices 359
Unitary matrix 358
Unitary operator 356
Unitary space 285
Unit matrix 153
Unit vector 289

V
Vectors 7
Vector space 7

Z
Zero functional 190
Zero subspace 27
Zero transformation 109
Zero vector 8
Most Popular Books in MATHEMATICS
Advanced Differential Equations : R.K. Gupta & J.N. Sharma
Analytical Solid Geometry : Vasishtha & Agarwal
Advanced Differential Calculus : J.N. Sharma
Advanced Integral Calculus : D.C. Agarwal
Calculus of Finite Differences & Numerical Analysis : Gupta, Malik & Chauhan
Differential Equations : Sharma & Gupta
Differential Geometry : Mithal & Agarwal
Dynamics of a Particle : Vasishtha & Agarwal
Fluid Dynamics : Shanti Swarup
Functional Analysis : Sharma & Vasishtha
Functions of a Complex Variable : J.N. Sharma
Complex Analysis : A.R. Vasishtha
Hydrodynamics : Shanti Swarup
Infinite Series & Products : J.N. Sharma
Integral Transforms (Transform Calculus) : Vasishtha & Gupta
Linear Algebra (Finite Dimensional Vector Spaces) : Sharma & Vasishtha
Linear Difference Equations : Gupta & Agarwal
Integral Equations : Shanti Swarup
Linear Programming : R.K. Gupta
Mathematical Analysis I (Metric Spaces) : J.N. Sharma
Mathematical Analysis II : Sharma & Vasishtha
Measure and Integration : Gupta & Gupta
Real Analysis (General) : Sharma & Vasishtha
Vector Calculus : Sharma & Vasishtha
Modern Algebra (Abstract Algebra) : A.R. Vasishtha
Matrices : A.R. Vasishtha
Mathematical Methods (Special Functions & Boundary Value Problems) : Sharma & Vasishtha
Special Functions (Spherical Harmonics) : Sharma & Vasishtha
Vector Algebra : A.R. Vasishtha
Mathematical Statistics : Sharma & Goel
Operations Research : R.K. Gupta
Rigid Dynamics I (Dynamics of Rigid Bodies) : Gupta & Malik
Rigid Dynamics II (Analytical Dynamics) : P.R. Gupta
Set Theory & Related Topics : K.P. Gupta
Spherical Astronomy : Gupta, Sharma & Kumar
Statics : Goel & Gupta
Tensor Calculus & Riemannian Geometry : D.C. Agarwal
Theory of Relativity : Goel & Gupta
Topology : J.N. Sharma
Discrete Mathematics : M.K. Gupta
Basic Mathematics for Chemists : A.R. Vasishtha
Number Theory : Hari Kishan
Bio-Mathematics : Singh & Agrawal
Partial Differential Equations : R.K. Gupta
Cryptography & Network Security : Manoj Kumar
Advanced Abstract Algebra : S.K. Pundir
Space Dynamics : J.P. Chauhan
Spherical Astronomy and Space Dynamics : J.P. Chauhan
Advanced Mathematical Methods : Shiv Raj Singh
Fuzzy Set Theory : Shiv Raj Singh
Advanced Numerical Analysis (MRT) : Gupta, Malik & Chauhan
Analysis I (Real Analysis) : J.P. Chauhan
Calculus of Variations : Mukesh Kumar
Krishna Prakashan Media (P) Ltd., Meerut