A Practical Approach to
LINEAR ALGEBRA
Prabhat Choudhary
ISBN: 978-81-89473-95-2
Oxford Book Company, 267, 10-B-Scheme, Opp. Narayan Niwas, Gopalpura By Pass Road, Jaipur-302018. Phone: 0141-2594705, Fax: 0141-2597527, e-mail: [email protected], website: www.oxfordbookcompany.com
Reserved
Typeset by:
Shivangi Computers, 267, 10-B-Scheme, Opp. Narayan Niwas, Gopalpura By Pass Road, Jaipur-302018
Printed at :
Rajdhani Printers, Delhi
All Rights are Reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, without the prior written permission of the copyright owner. Responsibility for the facts stated, opinions expressed, conclusions reached and plagiarism, if any, in this volume is entirely that of the Author, according to whom the matter encompassed in this book has been originally created/edited, and resemblance with any such publication may be incidental. The Publisher bears no responsibility for them, whatsoever.
Preface
Linear Algebra occupies a crucial place in Mathematics. Linear Algebra is a continuation of the classical course in the light of modern developments in Science and Mathematics. We must emphasize that mathematics is not a spectator sport, and that in order to understand and appreciate mathematics it is necessary to do a great deal of personal cogitation and problem solving.
Scientific and engineering research is becoming increasingly dependent upon the development and implementation of efficient parallel algorithms. Linear algebra is an indispensable tool in such research, and this book attempts to collect and describe a selection of some of its more important parallel algorithms. The purpose is to review the current status and to provide an overall perspective of parallel algorithms for solving dense, banded, or block-structured problems arising in the major areas of direct solution of linear systems, least squares computations, eigenvalue and singular value computations, and rapid elliptic solvers. There is a widespread feeling that the non-linear world is very different, and it is usually studied as a sophisticated phenomenon of interpolation between different approximately linear regimes.
Prabhat Choudhary
Contents
Preface v
1. Basic Notions
Chapter 1
Basic Notions
VECTOR SPACES
A vector space V is a collection of objects, called vectors, along with two operations, addition of vectors and multiplication by a number (scalar), such that the following properties (the so-called axioms of a vector space) hold. The first four properties deal with the addition of vectors:
1. Commutativity: v + w = w + v for all v, w ∈ V.
2. Associativity: (u + v) + w = u + (v + w) for all u, v, w ∈ V.
3. Zero vector: there exists a special vector, denoted by 0, such that v + 0 = v for all v ∈ V.
4. Additive inverse: for every vector v ∈ V there exists a vector w ∈ V such that v + w = 0. Such an additive inverse is usually denoted as -v.
The next two properties concern multiplication:
5. Multiplicative identity: 1v = v for all v ∈ V.
6. Multiplicative associativity: (αβ)v = α(βv) for all v ∈ V and all scalars α, β.
And finally, two distributive properties, which connect multiplication and addition:
7. α(u + v) = αu + αv for all u, v ∈ V and all scalars α.
8. (α + β)v = αv + βv for all v ∈ V and all scalars α, β.
Remark: The above properties seem hard to memorize, but it is not necessary. They are simply the familiar rules of algebraic manipulations with numbers. The only new twist here is that you have to understand what operations you can apply to what objects. You can add vectors, and you can multiply a vector by a number (scalar). Of course, you can do with numbers all possible manipulations that you have learned before. But you cannot multiply two vectors, or add a number to a vector.
Remark: It is not hard to show that the zero vector 0 is unique. It is also easy to show that
given v ∈ V the inverse vector -v is unique. In fact, the last two properties can be deduced from the remaining ones: they imply that 0 = 0v for any v ∈ V, and that -v = (-1)v.
If the scalars are the usual real numbers, we call the space V a real vector space. If the scalars are the complex numbers, i.e., if we can multiply vectors by complex numbers, we call the space V a complex vector space. Note that any complex vector space is a real vector space as well (if we can multiply by complex numbers, we can multiply by real numbers), but not the other way around. It is also possible to consider the situation when the scalars are elements of an arbitrary field 𝔽. In this case we say that V is a vector space over the field 𝔽. Although many of the constructions in the book work for general fields, in this text we consider only real and complex vector spaces, i.e., 𝔽 is always either ℝ or ℂ.
Example: The space ℝ^n consists of all columns of size n,
v = (v_1, v_2, ..., v_n)^T,
whose entries are real numbers. Addition and multiplication by a scalar are defined entrywise, i.e.,
α(v_1, v_2, ..., v_n)^T = (αv_1, αv_2, ..., αv_n)^T,
(v_1, v_2, ..., v_n)^T + (w_1, w_2, ..., w_n)^T = (v_1 + w_1, v_2 + w_2, ..., v_n + w_n)^T.
Example: The space ℂ^n also consists of columns of size n, only the entries now are complex numbers. Addition and multiplication are defined exactly as in the case of ℝ^n; the only difference is that we can now multiply vectors by complex numbers, i.e., ℂ^n is a complex vector space.
Example: The space M_{m×n} (also denoted M_{m,n}) of m × n matrices: the multiplication and addition are defined entrywise. If we allow only real entries (and so multiplication only by reals), then we have a real vector space; if we allow complex entries and multiplication by complex numbers, we then have a complex vector space.
Example: The space ℙ_n of polynomials of degree at most n consists of all polynomials p of the form p(t) = a_0 + a_1 t + a_2 t^2 + ... + a_n t^n, where t is the independent variable. Note that some, or even all, coefficients a_k can be 0. In the case of real coefficients a_k we have a real vector space; complex coefficients give us a complex vector space.
Matrix notation
An m × n matrix is a rectangular array with m rows and n columns. Elements of the array are called entries of the matrix. It is often convenient to denote matrix entries by indexed letters a_{j,k}: the first index denotes the number of the row where the entry a_{j,k} sits, and the second one is the number of the column. For example,
A = (a_{j,k})_{j=1,...,m; k=1,...,n} =
[ a_{1,1}  a_{1,2}  ...  a_{1,n} ]
[ a_{2,1}  a_{2,2}  ...  a_{2,n} ]
[   ...      ...    ...    ...   ]
[ a_{m,1}  a_{m,2}  ...  a_{m,n} ]
is a general way to write an m × n matrix. Very often for a matrix A the entry in row number j and column number k is denoted by A_{j,k} or (A)_{j,k}, and sometimes, as in the example above, the same letter but in lowercase is used for the matrix entries. Given a matrix A, its transpose (or transposed matrix) A^T is defined by transforming the rows of A into the columns. For example
[1 2 3]^T   [1 4]
[4 5 6]   = [2 5]
            [3 6].
So, the columns of A^T are the rows of A and vice versa, the rows of A^T are the columns of A. The formal definition is as follows: (A^T)_{j,k} = (A)_{k,j}, meaning that the entry of A^T in row number j and column number k equals the entry of A in row number k and column number j. The transpose of a matrix has a very nice interpretation in terms of linear transformations, namely it gives the so-called adjoint transformation. We will study this in detail later, but for now transposition will be just a useful formal operation. One of the first uses of the transpose is that we can write a column vector x ∈ ℝ^n as x = (x_1, x_2, ..., x_n)^T. If we put the column vertically, it would use significantly more space.
Definition: A system of vectors v_1, v_2, ..., v_n ∈ V is called a basis (for the vector space V) if any vector v ∈ V admits a unique representation as a linear combination
v = α_1 v_1 + α_2 v_2 + ... + α_n v_n = Σ_{k=1}^n α_k v_k.
The coefficients α_1, α_2, ..., α_n are called coordinates of the vector v (in the basis, or with respect to the basis v_1, v_2, ..., v_n). Another way to say that v_1, v_2, ..., v_n is a basis is to say that the equation x_1 v_1 + x_2 v_2 + ... + x_n v_n = v (with unknowns x_k) has a unique solution for an arbitrary right side v. Before discussing any properties of bases, let us give a few examples, showing that such objects exist and that it makes sense to study them.
Example: The space V is ℝ^n. Consider the vectors
e_1 = (1, 0, 0, ..., 0)^T,  e_2 = (0, 1, 0, ..., 0)^T,  e_3 = (0, 0, 1, ..., 0)^T,  ...,  e_n = (0, 0, ..., 0, 1)^T
(the vector e_k has all entries 0 except the entry number k, which is 1). The system of vectors e_1, e_2, ..., e_n is a basis in ℝ^n. Indeed, any vector
v = (x_1, x_2, ..., x_n)^T ∈ ℝ^n can be represented as
v = x_1 e_1 + x_2 e_2 + ... + x_n e_n = Σ_{k=1}^n x_k e_k,
and this representation is unique. The system e_1, e_2, ..., e_n ∈ ℝ^n is called the standard basis in ℝ^n.
Example: In this example the space is the space ℙ_n of polynomials of degree at most n. Consider the vectors (polynomials) e_0, e_1, e_2, ..., e_n ∈ ℙ_n defined by e_0 = 1, e_1 = t, e_2 = t^2, e_3 = t^3, ..., e_n = t^n.
Indeed, any polynomial p ∈ ℙ_n, p(t) = a_0 + a_1 t + a_2 t^2 + ... + a_n t^n, admits a unique representation
p = a_0 e_0 + a_1 e_1 + ... + a_n e_n.
So the system e_0, e_1, e_2, ..., e_n ∈ ℙ_n is a basis in ℙ_n. We will call it the standard basis in ℙ_n.
Remark: If a vector space V has a basis v_1, v_2, ..., v_n, then any vector v ∈ V is uniquely defined by its coefficients in the decomposition v = Σ_{k=1}^n α_k v_k.
So, if we stack the coefficients α_k in a column, we can operate with them as if they were column vectors, i.e., as with elements of ℝ^n. Namely, if
v = Σ_{k=1}^n α_k v_k  and  w = Σ_{k=1}^n β_k v_k,
then
v + w = Σ_{k=1}^n α_k v_k + Σ_{k=1}^n β_k v_k = Σ_{k=1}^n (α_k + β_k) v_k,
i.e., to get the column of coordinates of the sum one just needs to add the columns of coordinates of the summands.
Generating and Linearly Independent Systems. The definition of a basis says that any vector admits a unique representation as a linear combination. This statement is in fact two statements, namely that the representation exists and that it is unique. Let us analyse these two statements separately.
Definition: A system of vectors v_1, v_2, ..., v_p ∈ V is called a generating system (also a spanning system, or a complete system) in V if any vector v ∈ V admits a representation as a linear combination
v = α_1 v_1 + α_2 v_2 + ... + α_p v_p = Σ_{k=1}^p α_k v_k.
The only difference with the definition of a basis is that we do not assume that the representation above is unique. The words generating, spanning and complete here are synonyms; the term complete comes from operator theory. Clearly, any basis is a generating (complete) system. Also, if we have a basis, say v_1, v_2, ..., v_n, and we add to it several vectors, say v_{n+1}, ..., v_p, then the new system will be a generating (complete) system. Indeed, we can represent any vector as a linear combination of the vectors v_1, v_2, ..., v_n and just ignore the new ones (by putting the corresponding coefficients α_k = 0). Now let us turn our attention to the uniqueness. We do not want to worry about existence, so let us consider the zero vector 0, which always admits a representation as a linear combination.
Definition: A linear combination α_1 v_1 + α_2 v_2 + ... + α_p v_p is called trivial if α_k = 0 ∀k. A trivial linear combination is always (for all choices of vectors v_1, v_2, ..., v_p) equal to 0, and that is probably the reason for the name.
Definition: A system of vectors v_1, v_2, ..., v_p ∈ V is called linearly independent if only the trivial linear combination (Σ_{k=1}^p α_k v_k with α_k = 0 ∀k) of the vectors v_1, v_2, ..., v_p equals 0.
In other words, the system v_1, v_2, ..., v_p is linearly independent if and only if the equation x_1 v_1 + x_2 v_2 + ... + x_p v_p = 0 (with unknowns x_k) has only the trivial solution x_1 = x_2 = ... = x_p = 0. If a system is not linearly independent, it is called linearly dependent. By negating the definition of linear independence, we get the following.
Definition: A system of vectors v_1, v_2, ..., v_p is called linearly dependent if 0 can be represented as a nontrivial linear combination, 0 = Σ_{k=1}^p α_k v_k. Non-trivial here means that at least one of the coefficients α_k is non-zero. This can be (and usually is) written as Σ_{k=1}^p |α_k| ≠ 0.
So, restating the definition, we can say that a system is linearly dependent if and only if there exist scalars α_1, α_2, ..., α_p, Σ_{k=1}^p |α_k| ≠ 0, such that Σ_{k=1}^p α_k v_k = 0.
An alternative definition (in terms of equations) is that a system v_1, v_2, ..., v_p is linearly dependent if and only if the equation x_1 v_1 + x_2 v_2 + ... + x_p v_p = 0 (with unknowns x_k) has a non-trivial solution. Non-trivial, once again, means that at least one of the x_k is different from 0, and it can be written as Σ_{k=1}^p |x_k| ≠ 0.
The following proposition gives an alternative description of linearly dependent systems.
Proposition: A system of vectors v_1, v_2, ..., v_p ∈ V is linearly dependent if and only if one of the vectors v_k can be represented as a linear combination of the other vectors,
v_k = Σ_{j=1, j≠k}^p β_j v_j.
Proof: Suppose the system v_1, v_2, ..., v_p is linearly dependent. Then there exist scalars α_k, Σ_{k=1}^p |α_k| ≠ 0, such that
α_1 v_1 + α_2 v_2 + ... + α_p v_p = 0. Let k be an index such that α_k ≠ 0. Then, moving all terms except α_k v_k to the right side, we get
α_k v_k = - Σ_{j=1, j≠k}^p α_j v_j.
Dividing both sides by α_k we get v_k = Σ_{j≠k} β_j v_j with β_j = -α_j / α_k.
On the other hand, if v_k = Σ_{j≠k} β_j v_j holds, then 0 can be represented as the non-trivial linear combination
v_k - Σ_{j=1, j≠k}^p β_j v_j = 0.
Obviously, any basis is a linearly independent system. Indeed, if a system v_1, v_2, ..., v_n is a basis, 0 admits a unique representation
0 = α_1 v_1 + α_2 v_2 + ... + α_n v_n = Σ_{k=1}^n α_k v_k.
Since the trivial linear combination always gives 0, the trivial linear combination must be the only one giving 0. So, as we already discussed, if a system is a basis it is a complete (generating) and linearly independent system. The following proposition shows that the converse implication is also true.
Proposition: A system of vectors v_1, v_2, ..., v_n ∈ V is a basis if and only if it is linearly independent and complete (generating).
Proof: We already know that a basis is always linearly independent and complete, so in one direction the proposition is already proved. Let us prove the other direction. Suppose a system v_1, v_2, ..., v_n is linearly independent and complete. Take an arbitrary vector v ∈ V. Since the system v_1, v_2, ..., v_n is complete (generating), v can be represented as
v = Σ_{k=1}^n α_k v_k.
We only need to show that this representation is unique. Suppose v admits another representation
v = Σ_{k=1}^n α̃_k v_k.
Then
Σ_{k=1}^n α_k v_k - Σ_{k=1}^n α̃_k v_k = Σ_{k=1}^n (α_k - α̃_k) v_k = 0.
Since the system is linearly independent, α_k - α̃_k = 0 ∀k, and thus the representation v = α_1 v_1 + α_2 v_2 + ... + α_n v_n is unique.
Remark: In many textbooks a basis is defined as a complete and linearly independent system. Although that definition is more common than the one presented in this text, the definition given here emphasizes the main property of a basis, namely that any vector admits a unique representation as a linear combination.
Proposition: Any (finite) generating system contains a basis.
Proof: Suppose v_1, v_2, ..., v_p ∈ V is a generating (complete) set. If it is linearly independent, it is a basis, and we are done. Suppose it is not linearly independent, i.e., it is linearly dependent. Then there exists a vector v_k which can be represented as a linear combination of the vectors v_j, j ≠ k. Since v_k can be represented as a linear combination of the vectors v_j, j ≠ k, any linear combination of the vectors v_1, v_2, ..., v_p can be represented as a linear combination of the same vectors without v_k (i.e., the vectors v_j, 1 ≤ j ≤ p, j ≠ k). So, if we delete the vector v_k, the new system will still be a complete one. If the new system is linearly independent, we are done. If not, we repeat the procedure. Repeating this procedure finitely many times we arrive at a linearly independent and complete system, because otherwise we would delete all vectors and end up with the empty set. So, any finite complete (generating) set contains a complete linearly independent subset, i.e., a basis.
T(p) := p' for p ∈ ℙ_n. Since (f + g)' = f' + g' and (αf)' = αf', this is a linear transformation.
Example: Rotation: in this example V = W = ℝ^2 (the usual coordinate plane), and the transformation T_γ: ℝ^2 → ℝ^2 takes a vector in ℝ^2 and rotates it counterclockwise by γ radians. Since T_γ rotates the plane as a whole, it rotates as a whole the parallelogram used to define the sum of two vectors (parallelogram law). Therefore property 1 of a linear transformation holds. It is also easy to see that property 2 is also true.
Example: Reflection: in this example again V = W = ℝ^2, and the transformation T: ℝ^2 → ℝ^2 is the reflection in the first coordinate axis. It can also be shown geometrically that this transformation is linear, but we will use another way to show that.
Fig. Rotation
T (x_1, x_2)^T = (x_1, -x_2)^T,
and from this formula it is easy to check that the transformation is linear.
Example: Let us investigate linear transformations T: ℝ → ℝ. Any such transformation is given by the formula T(x) = ax, where a = T(1). Indeed, T(x) = T(x · 1) = x T(1) = xa = ax. So, any linear transformation of ℝ is just multiplication by a constant.
Linear transformations ℝ^n → ℝ^m. Matrix-column multiplication: It turns out that a linear transformation T: ℝ^n → ℝ^m also can be represented as a multiplication, not by a number, but by a matrix.
Let us see how. Let T: ℝ^n → ℝ^m be a linear transformation. What information do we need to compute T(x) for all vectors x ∈ ℝ^n? My claim is that it is sufficient to know how T acts on the standard basis e_1, e_2, ..., e_n of ℝ^n. Namely, it is sufficient to know the n vectors in ℝ^m (i.e., the vectors of size m) a_1 = T(e_1), a_2 = T(e_2), ..., a_n = T(e_n). Indeed, let x = (x_1, x_2, ..., x_n)^T = Σ_{k=1}^n x_k e_k. Then
T(x) = T(Σ_{k=1}^n x_k e_k) = Σ_{k=1}^n x_k T(e_k) = Σ_{k=1}^n x_k a_k.
So, if we join the vectors (columns) a_1, a_2, ..., a_n together in a matrix A = [a_1, a_2, ..., a_n] (a_k being the kth column of A, k = 1, 2, ..., n), this matrix contains all the information about T. Let us show how one should define the product of a matrix and a vector (column) to represent the transformation T as a product, T(x) = Ax. Let
A = [ a_{1,1}  a_{1,2}  ...  a_{1,n} ]
    [ a_{2,1}  a_{2,2}  ...  a_{2,n} ]
    [   ...      ...    ...    ...   ]
    [ a_{m,1}  a_{m,2}  ...  a_{m,n} ].
Recall that the column number k of A is the vector a_k = (a_{1,k}, a_{2,k}, ..., a_{m,k})^T. Then, if we want Ax = T(x), we must define
Ax = Σ_{k=1}^n x_k a_k = x_1 (a_{1,1}, a_{2,1}, ..., a_{m,1})^T + x_2 (a_{1,2}, a_{2,2}, ..., a_{m,2})^T + ... + x_n (a_{1,n}, a_{2,n}, ..., a_{m,n})^T.
In other words, the product Ax is computed by the following "column by coordinate" rule: multiply each column of the matrix by the corresponding coordinate of the vector. Example:
The "column by coordinate" rule is very well adapted for parallel computing. It will also be very important in different theoretical constructions later. However, when doing computations manually, it is more convenient to compute the result one entry at a time. This can be expressed as the following "row by column" rule: to get entry number k of the result, one needs to multiply row number k of the matrix by the vector; that is, if Ax = y, then
y_k = Σ_{j=1}^n a_{k,j} x_j,  k = 1, 2, ..., m;
here x_j and y_k are coordinates of the vectors x and y respectively, and a_{j,k} are the entries of the matrix A. Example:
[1 2 3] [1]   [1·1 + 2·2 + 3·3]   [14]
[4 5 6] [2] = [4·1 + 5·2 + 6·3] = [32]
        [3]
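For readers who want to experiment, both multiplication rules are easy to check numerically. The following short Python/NumPy sketch (added for illustration, not part of the original text; the numbers simply repeat the example above) computes Ax by the "column by coordinate" rule, by the "row by column" rule, and with the built-in matrix product, and confirms that all three agree.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
x = np.array([1, 2, 3])

# "Column by coordinate" rule: sum of the columns weighted by the coordinates of x.
col_by_coord = sum(x[k] * A[:, k] for k in range(A.shape[1]))

# "Row by column" rule: entry k of the result is (row k of A) times x.
row_by_col = np.array([A[k, :] @ x for k in range(A.shape[0])])

print(col_by_coord)                       # [14 32]
print(row_by_col)                         # [14 32]
print(np.allclose(col_by_coord, A @ x))   # True
```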
Linear transformations and generating sets: As we discussed above, a linear transformation T (acting from ℝ^n to ℝ^m) is completely defined by its values on the standard basis in ℝ^n. The fact that we consider the standard basis is not essential; one can consider any basis, even any generating (spanning) set. Namely, a linear transformation T: V → W is completely defined by its values on a generating set (in particular by its values on a basis). In particular, if v_1, v_2, ..., v_n is a generating set (in particular, if it is a basis) in V, and T and T_1 are linear transformations T, T_1: V → W such that T v_k = T_1 v_k, k = 1, 2, ..., n, then T = T_1.
Recall that there are two ways of computing the matrix-vector product Ax:
1. the "column by coordinate" rule;
2. the "row by column" rule.
The latter seems more appropriate for manual computations. The former is well adapted for parallel computers, and will be used in different theoretical constructions. For a linear transformation T: ℝ^n → ℝ^m, its matrix is usually denoted as [T]. However, very often people do not distinguish between a linear transformation and its matrix, and use the same symbol for both. When it does not lead to confusion, we will also use the same symbol for a transformation and its matrix. Since a linear transformation is essentially a multiplication, the notation Tv is often used instead of T(v). We will also use this notation. Note that the usual order of algebraic operations applies, i.e., Tv + u means T(v) + u, not T(v + u).
Remark: In the matrix-vector multiplication Ax the number of columns of the matrix A must coincide with the size of the vector x, i.e., a vector in ℝ^n can only be multiplied by an m × n matrix. It makes sense, since an m × n matrix defines a linear transformation ℝ^n → ℝ^m, so the vector x must belong to ℝ^n. The easiest way to remember this is to remember that if, performing the multiplication, you run out of some elements faster, then the multiplication is not defined. For example, if using the "row by column" rule you run out of row entries but still have some unused entries in the vector, the multiplication is not defined. It is also not defined if you run out of the vector's entries but still have unused entries in the row.
The matrix-matrix product AB is defined by the same "row by column" rule: the entry of AB in row j and column k is obtained by multiplying row j of A by column k of B, (AB)_{j,k} = Σ_l a_{j,l} b_{l,k}, if a_{j,k} and b_{j,k} are entries of the matrices A and B respectively. I intentionally did not speak about sizes of the matrices A and B, but if we recall the "row by column" rule for the matrix-vector multiplication, we can see that in order for the multiplication to be defined, the size of a row of A should be equal to the size of a column of B. In other words, the product AB is defined if and only if A is an m × n and B is an n × r matrix.
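As a quick numerical illustration of the size condition and of the "row by column" rule for matrices (a small NumPy sketch added here, not from the text; the matrices are arbitrary):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])           # 3 x 2

# AB is defined (columns of A = rows of B = 3); entry (j, k) is row j of A times column k of B.
AB = np.array([[A[j, :] @ B[:, k] for k in range(B.shape[1])]
               for j in range(A.shape[0])])
print(AB)                        # [[ 4  5] [10 11]]
print(np.allclose(AB, A @ B))    # True

# Here BA happens to be defined as well (2 = 2), but in general it need not be,
# and even when it is, AB and BA are usually different.
print(B @ A)
```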
Motivation: Composition of linear transformations. Why are we using such a complicated rule of multiplication? Why don't we just multiply matrices entrywise? The answer is that the multiplication, as defined above, arises naturally from the composition of linear transformations. Suppose we have two linear transformations, T_1: ℝ^n → ℝ^m and T_2: ℝ^r → ℝ^n. Define the composition T = T_1 T_2 of the transformations T_1, T_2 as
T(x) = T_1(T_2(x)) ∀x ∈ ℝ^r.
Note that T_2(x) ∈ ℝ^n. Since T_1: ℝ^n → ℝ^m, the expression T_1(T_2(x)) is well defined and the result belongs to ℝ^m. So, T: ℝ^r → ℝ^m. It is easy to show that T is a linear transformation, so it is defined by an m × r matrix. How can one find this matrix, knowing the matrices of T_1 and T_2?
Let A be the matrix of T_1 and B be the matrix of T_2. As we discussed in the previous section, the columns of the matrix of T are the vectors T(e_1), T(e_2), ..., T(e_r), where e_1, e_2, ..., e_r is the standard basis in ℝ^r. For k = 1, 2, ..., r we have
T(e_k) = T_1(T_2(e_k)) = T_1(B e_k) = T_1(b_k) = A b_k
(the operators T_2 and T_1 are simply multiplication by B and A respectively). So, the columns of the matrix of T are Ab_1, Ab_2, ..., Ab_r, and that is exactly how the matrix AB was defined!
Let us return to identifying again a linear transformation with its matrix. Since matrix multiplication agrees with composition, we can (and will) write T_1 T_2 instead of T_1 ∘ T_2 and T_1 T_2 x instead of T_1(T_2(x)). Note that in the composition T_1 T_2 the transformation T_2 is applied first! The way to remember this is to see that in T_1 T_2 x the transformation T_2 meets x first.
Remark: There is another way of checking the dimensions of matrices in a product, different from the "row by column" rule: for a composition T_1 T_2 to be defined it is necessary that T_2 x belongs to the domain of T_1. If T_2 acts from some space, say ℝ^r, to ℝ^n, then T_1 must act from ℝ^n to some space, say ℝ^m. So, in order for T_1 T_2 to be defined, the matrices of T_1 and T_2 should be of sizes m × n and n × r respectively, the same condition as obtained from the "row by column" rule. (We will usually identify a linear transformation and its matrix, but in the next few paragraphs we will distinguish them.)
Example: Let T: ℝ^2 → ℝ^2 be the reflection in the line x_1 = 3x_2. It is a linear transformation, so let us find its matrix. To find the matrix, we need to compute Te_1 and Te_2. However, the direct computation of Te_1 and Te_2 involves significantly more trigonometry than a sane person is willing to remember. An easier way to find the matrix of T is to represent it as a composition of simple linear transformations. Namely, let γ be the angle between the x_1-axis and the line x_1 = 3x_2, and let T_0 be the reflection in the x_1-axis. Then to get the reflection T we can first rotate the plane by the angle -γ, moving the line x_1 = 3x_2 to the x_1-axis, then reflect everything in the x_1-axis, and then rotate the plane by γ, taking everything back. Formally it can be written as
T = R_γ T_0 R_{-γ},
where R_γ is the rotation by γ. The matrix of T_0 is easy to compute,
T_0 = [1  0]
      [0 -1],
and the rotation matrices are known:
R_γ = [cos γ  -sin γ]        R_{-γ} = [cos(-γ)  -sin(-γ)]   [ cos γ  sin γ]
      [sin γ   cos γ],                [sin(-γ)   cos(-γ)] = [-sin γ  cos γ].
To compute sin γ and cos γ, take a vector in the line x_1 = 3x_2, say the vector (3, 1)^T. Then
cos γ = (first coordinate)/(length) = 3/√(3² + 1²) = 3/√10,
and similarly
sin γ = (second coordinate)/(length) = 1/√(3² + 1²) = 1/√10.
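The composition T = R_γ T_0 R_{-γ} is easy to evaluate numerically. The following NumPy sketch (an illustration added here, not from the text) builds the three matrices for the line x_1 = 3x_2 and multiplies them; the checks at the end confirm that points on the line are fixed and that reflecting twice gives the identity.

```python
import numpy as np

cos_g, sin_g = 3 / np.sqrt(10), 1 / np.sqrt(10)   # cos γ and sin γ for the line x1 = 3 x2

R_g     = np.array([[cos_g, -sin_g], [ sin_g, cos_g]])   # rotation by γ
R_minus = np.array([[cos_g,  sin_g], [-sin_g, cos_g]])   # rotation by -γ
T0      = np.array([[1, 0], [0, -1]])                    # reflection in the x1-axis

T = R_g @ T0 @ R_minus                 # reflection in the line x1 = 3 x2
print(T)                               # approximately [[0.8  0.6] [0.6 -0.8]]

print(T @ np.array([3, 1]))            # approximately [3. 1.]: points on the line are fixed
print(np.allclose(T @ T, np.eye(2)))   # True: reflecting twice is the identity
```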
Properties of matrix multiplication:
1. Associativity: A(BC) = (AB)C, provided that either the left or the right side is well defined;
2. Distributivity: A(B + C) = AB + AC, (A + B)C = AC + BC, provided either the left or the right side of each equation is well defined;
3. One can take scalar multiples out: A(αB) = α(AB).
These properties are easy to prove. One should prove the corresponding properties for linear transformations, and they almost trivially follow from the definitions. The properties of linear transformations then imply the properties for matrix multiplication.
The new twist here is that commutativity fails: matrix multiplication is non-commutative, i.e., generally for matrices AB ≠ BA.
One can easily see why it would be unreasonable to expect the commutativity of matrix multiplication. Indeed, let A and B be matrices of sizes m × n and n × r respectively. Then the product AB is well defined, but if m ≠ r, BA is not defined. Even when both products are well defined, for example when A and B are n × n (square) matrices, the multiplication is still non-commutative. If we just pick the matrices A and B at random, the chances are that AB ≠ BA: we have to be very lucky to get AB = BA.
Transposed Matrices and Multiplication. Given a matrix A, its transpose (or transposed matrix) A^T is defined by transforming the rows of A into the columns. For example
[1 2 3]^T   [1 4]
[4 5 6]   = [2 5]
            [3 6].
So, the columns of A^T are the rows of A and vice versa, the rows of A^T are the columns of A. The formal definition is as follows: (A^T)_{j,k} = (A)_{k,j}, meaning that the entry of A^T in row number j and column number k equals the entry of A in row number k and column number j. The transpose of a matrix has a very nice interpretation in terms of linear transformations, namely it gives the so-called adjoint transformation. We will study this in detail later, but for now transposition will be just a useful formal operation. One of the first uses of the transpose is that we can write a column vector x ∈ ℝ^n as x = (x_1, x_2, ..., x_n)^T. If we put the column vertically, it would use significantly more space.
A simple analysis of the "row by column" rule shows that (AB)^T = B^T A^T, i.e., when you take the transpose of the product, you change the order of the terms.
Trace and Matrix Multiplication. For a square (n × n) matrix A, its trace is defined as the sum of its diagonal entries:
trace A = Σ_{k=1}^n a_{k,k}.
Theorem: Let A and B be matrices of size m × n and n × m respectively (so both products AB and BA are well defined). Then trace(AB) = trace(BA).
There are essentially two ways of proving this theorem. One is to compute the diagonal entries of AB and of BA and compare their sums. This method requires some proficiency in manipulating sums in Σ notation. If you are not comfortable with such algebraic manipulations, there is another way. We can consider two linear transformations, T and T_1, acting from M_{n×m} to ℝ = ℝ^1, defined by T(X) = trace(AX), T_1(X) = trace(XA). To prove the theorem it is sufficient to show that T = T_1; the equality for X = B then gives the theorem. Since a linear transformation is completely defined by its values on a generating system, we need just to check the equality on some simple matrices, for example on the matrices which have all entries 0 except the entry 1 in the intersection of the jth column and the kth row.
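A quick numerical check of the theorem (a NumPy sketch added for illustration; the random integer matrices are not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 5))   # 3 x 5
B = rng.integers(-5, 5, size=(5, 3))   # 5 x 3

# AB is 3 x 3 while BA is 5 x 5, yet their traces coincide.
print(np.trace(A @ B), np.trace(B @ A))
print(np.trace(A @ B) == np.trace(B @ A))   # True
```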
INVERTIBLE TRANSFORMATIONS AND MATRICES. ISOMORPHISMS
IDENTITY TRANSFORMATION AND IDENTITY MATRIX
Among all linear transformations, there is a special one, the identity transformation (operator) I: Ix = x, ∀x. To be precise, there are infinitely many identity transformations: for any vector space V, there is the identity transformation I = I_V: V → V, I_V x = x, ∀x ∈ V. However, when it does not lead to confusion we will use the same symbol I for all identity operators (transformations). We will use the notation I_V only when we want to emphasize in what space the transformation is acting. Clearly, if I: ℝ^n → ℝ^n is the identity transformation in ℝ^n, its matrix is the n × n matrix
I = I_n = [1 0 ... 0]
          [0 1 ... 0]
          [  ...    ]
          [0 0 ... 1]
(1 on the main diagonal and 0 everywhere else). When we want to emphasize the size of the matrix, we use the notation I_n; otherwise we just use I. Clearly, for an arbitrary linear transformation A, the equalities
AI = A,  IA = A
hold (whenever the product is defined).
INVERTIBLE TRANSFORMATIONS
Definition: Let A: V → W be a linear transformation. We say that the transformation A is left invertible if there exists a linear transformation B: W → V such that BA = I (I = I_V here). The transformation A is called right invertible if there exists a linear transformation C: W → V such that
AC = I (here I = I_W).
The transformations B and C are called left and right inverses of A. Note that we did not assume the uniqueness of B or C here, and generally left and right inverses are not unique.
Definition: A linear transformation A: V → W is called invertible if it is both right and left invertible.
Theorem: If a linear transformation A: V → W is invertible, then its left and right inverses B and C are unique and coincide.
Corollary: A transformation A: V → W is invertible if and only if there exists a unique linear transformation (denoted A^{-1}), A^{-1}: W → V, such that
A^{-1}A = I_V,  AA^{-1} = I_W.
The transformation A^{-1} is called the inverse of A.
Proof: Let BA = I and AC = I. Then
BAC = B(AC) = BI = B.
On the other hand
BAC = (BA)C = IC = C,
and therefore B = C. Suppose for some transformation B_1 we have B_1 A = I. Repeating the above reasoning with B_1 instead of B we get B_1 = C. Therefore the left inverse B is unique. The uniqueness of C is proved similarly.
Definition: A matrix is called invertible (resp. left invertible, right invertible) if the corresponding linear transformation is invertible (resp. left invertible, right invertible).
The above theorem asserts that a matrix A is invertible if and only if there exists a unique matrix A^{-1} such that A^{-1}A = I, AA^{-1} = I. The matrix A^{-1} is called (surprise) the inverse of A.
Examples:
1. The identity transformation (matrix) is invertible, and I^{-1} = I;
2. The rotation R_γ,
R_γ = [cos γ  -sin γ]
      [sin γ   cos γ],
is invertible, and the inverse is given by (R_γ)^{-1} = R_{-γ}. This equality is clear from the geometric description of R_γ, and it also can be checked by matrix multiplication;
3. The column (1, 1)^T is left invertible but not right invertible. One of the possible left inverses is the row (1/2, 1/2). To show that this matrix is not right invertible, we just notice that there is more than one left inverse. Exercise: describe all left inverses of this matrix.
4. The row (1, 1) is right invertible, but not left invertible. The column (1/2, 1/2)^T is a possible right inverse.
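These one-sided inverses can be verified directly (a small NumPy sketch added for illustration, not part of the original text):

```python
import numpy as np

col  = np.array([[1], [1]])       # the column (1, 1)^T
row  = np.array([[1, 1]])         # the row (1, 1)
half = np.array([[0.5, 0.5]])     # the row (1/2, 1/2)

print(half @ col)                 # [[1.]]: (1/2, 1/2) is a left inverse of the column
print(row @ half.T)               # [[1.]]: (1/2, 1/2)^T is a right inverse of the row

# The left inverse of the column is not unique: (1, 0) works too.
print(np.array([[1.0, 0.0]]) @ col)   # [[1.]]
```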
Remark: An invertible matrix must be square (n × n). Moreover, if a square matrix A has either a left or a right inverse, it is invertible. So, it is sufficient to check only one of the identities AA^{-1} = I, A^{-1}A = I. This fact will be proved later; until we prove it, we will not use it. It is presented here only to stop you from trying wrong directions.
If A and B are invertible and the product AB is defined, then AB is invertible and (AB)^{-1} = B^{-1}A^{-1}. If A is invertible, then A^T is also invertible and (A^T)^{-1} = (A^{-1})^T.
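Both identities are easy to confirm on a concrete pair of invertible matrices (an illustrative NumPy sketch, not from the text; the matrices are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])   # det = -1, invertible
B = np.array([[2.0, 1.0], [1.0, 1.0]])   # det =  1, invertible

inv = np.linalg.inv
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # True: (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(A.T), inv(A).T))            # True: (A^T)^-1 = (A^-1)^T
```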
meaning that all properties and constructions involving vector space operations are preserved under isomorphisms. The theorem below illustrates this statement.
Theorem: Let A: V → W be an isomorphism, and let v_1, v_2, ..., v_n be a basis in V. Then the system Av_1, Av_2, ..., Av_n is a basis in W.
Remark: In the above theorem one can replace "basis" by "linearly independent", or "generating", or "linearly dependent": all these properties are preserved under isomorphisms.
Remark: If A is an isomorphism, then so is A^{-1}. Therefore in the above theorem we can state that v_1, v_2, ..., v_n is a basis if and only if Av_1, Av_2, ..., Av_n is a basis.
The converse to the theorem is also true.
Theorem: Let A: V → W be a linear map, and let v_1, v_2, ..., v_n and w_1, w_2, ..., w_n be bases in V and W respectively. If Av_k = w_k, k = 1, 2, ..., n, then A is an isomorphism.
Proof: Define the inverse transformation A^{-1} by A^{-1}w_k = v_k, k = 1, 2, ..., n (as we know, a linear transformation is defined by its values on a basis).
Examples:
1. Let A: ℝ^{n+1} → ℙ_n (ℙ_n is the space of polynomials of degree at most n) be defined by
A e_1 = 1, A e_2 = t, A e_3 = t^2, ..., A e_{n+1} = t^n,
where e_1, e_2, ..., e_{n+1} is the standard basis in ℝ^{n+1}. By the above theorem A is an isomorphism, so ℙ_n ≅ ℝ^{n+1}.
2. Let V be a vector space with a basis v_1, v_2, ..., v_n. Define the transformation A: ℝ^n → V by
A e_k = v_k, k = 1, 2, ..., n,
where e_1, e_2, ..., e_n is the standard basis in ℝ^n. Again by the theorem A is an isomorphism, so V ≅ ℝ^n.
3. M_{2×3} ≅ ℝ^6;
4. More generally, M_{m×n} ≅ ℝ^{m·n}.
A^{-1}Ax_1 = A^{-1}b, and therefore x_1 = A^{-1}b = x. Note that both identities, AA^{-1} = I and A^{-1}A = I, were used here.
Let us now suppose that the equation Ax = b has a unique solution x for any b ∈ W. Let us use the symbol y instead of b. We know that, given y ∈ W, the equation Ax = y has a unique solution x ∈ V. Let us call this solution B(y). Let us check that B is a linear transformation. We need to show that B(αy_1 + βy_2) = αB(y_1) + βB(y_2). Let x_k := B(y_k), k = 1, 2, i.e., Ax_k = y_k, k = 1, 2. Then
A(αx_1 + βx_2) = αAx_1 + βAx_2 = αy_1 + βy_2,
which means
B(αy_1 + βy_2) = αx_1 + βx_2 = αB(y_1) + βB(y_2).
Corollary: An m
SUBSPACES
A subspace of a vector space V is a subset V_0 ⊂ V which is closed under vector addition and multiplication by scalars, i.e.,
1. If v ∈ V_0 then αv ∈ V_0 for all scalars α;
2. For any u, v ∈ V_0 the sum u + v ∈ V_0.
Again, the conditions 1 and 2 can be replaced by the following one: αu + βv ∈ V_0 for all u, v ∈ V_0 and for all scalars α, β. Note that a subspace V_0 ⊂ V with the operations (vector addition and multiplication by scalars) inherited from V is itself a vector space. Indeed, because all operations are inherited from the vector space V they must satisfy all eight axioms of a vector space. The only thing that could possibly go wrong is that the result of some operation does not belong to V_0, but the definition of a subspace prohibits this! Now let us consider some examples:
1. Trivial subspaces of a space V, namely V itself and {0} (the subspace consisting only of the zero vector). Note that the empty set ∅ is not a vector space, since it does not contain a zero vector, so it is not a subspace.
With each linear transformation A: V → W we can associate the following two subspaces:
2. The null space, or kernel, of A, which is denoted Null A or Ker A and consists of all vectors v ∈ V such that Av = 0;
3. The range Ran A, which is defined as the set of all vectors w ∈ W that can be represented as w = Av for some v ∈ V.
If A is a matrix, i.e., A: ℝ^n → ℝ^m, then, recalling the "column by coordinate" rule of the matrix-vector multiplication, we can see that any vector w ∈ Ran A can be represented as
a linear combination of the columns of the matrix A. That explains why the term column space (and the notation Col A) is often used for the range of the matrix: for a matrix A, the notation Col A is often used instead of Ran A. And now the last example.
4. Given a system of vectors v_1, v_2, ..., v_r ∈ V, its linear span (sometimes called simply span), span{v_1, v_2, ..., v_r}, is the collection of all vectors v ∈ V that can be represented as a linear combination v = α_1 v_1 + α_2 v_2 + ... + α_r v_r of the vectors v_1, v_2, ..., v_r.
It is easy to check that in all of these examples we indeed have subspaces.
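For a concrete matrix A the two subspaces Ker A and Ran A (= Col A) are easy to exhibit numerically. The sketch below (added for illustration, not from the text; the matrix is arbitrary) checks that a particular vector lies in the kernel and that Ax is always the corresponding combination of the columns of A.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])          # rank 1: Ker A is 2-dimensional, Ran A is a line

v = np.array([1, 1, -1])           # 1*a1 + 1*a2 - 1*a3 = 0, so v is in Ker A (Null A)
print(A @ v)                       # [0 0]

x = np.array([2.0, -1.0, 0.5])
w = A @ x                          # w lies in Ran A = Col A ...
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(np.allclose(w, combo))       # ... and equals that combination of the columns
```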
translated by a, i.e., the vector v is replaced by v + a (the notation v ↦ v + a is used for this). Vector addition is very well adapted to computers, so the translation is easy to implement. Note that the translation is not a linear transformation (if a ≠ 0): while it preserves straight lines, it does not preserve 0. All other transformations used in computer graphics are linear. The first one that comes to mind is rotation. The rotation by γ around the origin 0 is given by multiplication by the rotation matrix R_γ we discussed above,
R_γ = [cos γ  -sin γ]
      [sin γ   cos γ].
If we want to rotate around a point a, we first need to translate the picture by -a, moving the point a to 0, then rotate around 0 (multiply by R_γ) and then translate everything back by a. Another very useful transformation is scaling, given by a matrix
[a 0]
[0 b],
a, b ≥ 0. If a = b it is a uniform scaling, which enlarges (reduces) an object, preserving its shape. If a ≠ b then the x and y coordinates scale differently; the object becomes "taller" or "wider". Another often used transformation is reflection: for example the matrix
[1  0]
[0 -1]
defines the reflection through the x-axis. We will show later in the book that any linear transformation in ℝ^2 can be represented as a composition of scalings, rotations and reflections. However it is sometimes convenient to consider some different transformations, like the shear transformation, given by the matrix
This transformation makes all objects slanted; the horizontal lines remain horizontal, but the vertical lines go to slanted lines at the angle φ to the horizontal ones.
3-Dimensional Graphics. Three-dimensional graphics is more complicated. First we need to be able to manipulate 3-dimensional objects, and then we need to represent them on a 2-dimensional plane (the monitor). The manipulations with 3-dimensional objects are pretty straightforward: we have the same basic transformations, namely translation, reflection through a plane, scaling, and rotation. Matrices of these transformations are very similar to the matrices of their 2-dimensional counterparts.
For example, the matrix
[1 0 0]
[0 1 0]
[0 0 0]
projects everything orthogonally onto the x-y plane (the z coordinate is simply dropped).
Fig. Perspective Projection onto x - y plane: F is the centre (focal point) of the projection
Such a method is often used in technical illustrations. Rotating an object and projecting it is equivalent to looking at it from different points. However, this method does not give a very realistic picture, because it does not take into account the perspective, the fact that objects that are further away look smaller. To get a more realistic picture one needs to use the so-called perspective projection. To define a perspective projection one needs to pick a point (the centre of projection, or the
focal point) and a plane to project onto. Then each point in ℝ^3 is projected into a point on the plane such that the point, its image and the centre of the projection lie on the same line. This is exactly how a camera works, and it is a reasonable first approximation of how our eyes work. Let us get a formula for the projection. Assume that the focal point is (0, 0, d)^T and that we are projecting onto the x-y plane. Consider a point v = (x, y, z)^T, and let v* = (x*, y*, 0)^T be its projection. From similar triangles (see the figure below) we get that
x*/x = d/(d - z),
Fig. Finding coordinates x*, y* of the perspective projection of the point (x, y, z)^T
so
x* = xd/(d - z) = x/(1 - z/d), and similarly y* = y/(1 - z/d).
Note that these formulas also work if z > d and if z < 0: you can draw the corresponding similar triangles to check it. Thus the perspective projection maps a point (x, y, z)^T to the point
(x/(1 - z/d), y/(1 - z/d), 0)^T.
This transformation is definitely not linear (because of the z in the denominator). However it is still possible to represent it as a linear transformation. To do this, let us introduce the so-called homogeneous coordinates. In homogeneous coordinates, every point in ℝ^3 is represented by 4 coordinates, the last (4th) coordinate playing the role of a scaling coefficient. Thus, to get the usual 3-dimensional coordinates of the vector v = (x, y, z)^T from its homogeneous coordinates (x_1, x_2, x_3, x_4)^T,
one needs to divide all entries by the last coordinate x_4 and take the first 3 coordinates (if x_4 = 0 this recipe does not work, so we assume that the case x_4 = 0 corresponds to the point at infinity). If we multiply the homogeneous coordinates of a point by a non-zero scalar, we do not change the point. In other words, in homogeneous coordinates a point in ℝ^3 is represented by a line through 0 in ℝ^4.
Thus in homogeneous coordinates the vector v* can be represented as (x, y, 0, 1 - z/d)^T, so in homogeneous coordinates the perspective projection is a linear transformation:
[ x       ]   [1  0    0    0] [x]
[ y       ]   [0  1    0    0] [y]
[ 0       ] = [0  0    0    0] [z]
[ 1 - z/d ]   [0  0  -1/d   1] [1].
Note that in the homogeneous coordinates the translation is also a linear transformation:
[x + a_1]   [1 0 0 a_1] [x]
[y + a_2]   [0 1 0 a_2] [y]
[z + a_3] = [0 0 1 a_3] [z]
[   1   ]   [0 0 0  1 ] [1].
But what happens if the centre of projection is not the point (0, 0, d)^T but some arbitrary point (d_1, d_2, d_3)^T? Then we first need to apply the translation by -(d_1, d_2, 0)^T to move the centre to (0, 0, d_3)^T while preserving the x-y plane, apply the projection, and then move everything back, translating it by (d_1, d_2, 0)^T. Similarly, if the plane we project onto is not the x-y plane, we move it to the x-y plane by using rotations and translations, and so on. All these operations are just multiplications by 4 × 4 matrices. That explains why modern graphics cards have 4 × 4 matrix operations embedded in the processor. Of course, here we only touched the mathematics behind 3-dimensional graphics; there is much more. For example, how to determine which parts of the object are visible and which are hidden, how to make realistic lighting, shades, etc.
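In homogeneous coordinates both the translation and the perspective projection become single 4 × 4 matrix multiplications. The following NumPy sketch (an illustration added here, not from the original text; the focal distance d, the translation vector and the sample point are arbitrary) applies the two matrices above and converts back to ordinary coordinates by dividing by the 4th entry.

```python
import numpy as np

d = 4.0                                   # focal point (0, 0, d)^T
v = np.array([1.0, 2.0, 2.0, 1.0])        # the point (1, 2, 2)^T in homogeneous coordinates

# Perspective projection onto the x-y plane with centre (0, 0, d)^T.
P = np.array([[1, 0,    0, 0],
              [0, 1,    0, 0],
              [0, 0,    0, 0],
              [0, 0, -1/d, 1]])

# Translation by a = (a1, a2, a3)^T.
a1, a2, a3 = 5.0, -1.0, 0.0
T = np.array([[1, 0, 0, a1],
              [0, 1, 0, a2],
              [0, 0, 1, a3],
              [0, 0, 0,  1]])

w = P @ v
print(w)                     # [1.  2.  0.  0.5]
print(w[:3] / w[3])          # [2. 4. 0.] = (x/(1 - z/d), y/(1 - z/d), 0)

print((T @ v)[:3])           # [6. 1. 2.] = v translated by a
```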
Chapter 2
A system of m linear equations with n unknowns x_1, x_2, ..., x_n can be written as
a_{1,1} x_1 + a_{1,2} x_2 + ... + a_{1,n} x_n = b_1
a_{2,1} x_1 + a_{2,2} x_2 + ... + a_{2,n} x_n = b_2
...
a_{m,1} x_1 + a_{m,2} x_2 + ... + a_{m,n} x_n = b_m.
To solve the system is to find all n-tuples of numbers x_1, x_2, ..., x_n which satisfy all m equations simultaneously. If we denote x := (x_1, x_2, ..., x_n)^T ∈ ℝ^n, b := (b_1, b_2, ..., b_m)^T ∈ ℝ^m, and
A = [ a_{1,1}  a_{1,2}  ...  a_{1,n} ]
    [ a_{2,1}  a_{2,2}  ...  a_{2,n} ]
    [   ...      ...    ...    ...   ]
    [ a_{m,1}  a_{m,2}  ...  a_{m,n} ],
then the above linear system can be written in the matrix form (as a matrix-vector equation)
Ax = b.
To solve the above equation is to find all vectors x ∈ ℝ^n satisfying Ax = b. Finally, recalling the "column by coordinate" rule of the matrix-vector multiplication, we can write the system as a vector equation
x_1 a_1 + x_2 a_2 + ... + x_n a_n = b,
where a_k is the kth column of the matrix A, a_k = (a_{1,k}, a_{2,k}, ..., a_{m,k})^T, k = 1, 2, ..., n. Note that these three forms are essentially just different representations of the same mathematical object.
Before explaining how to solve a linear system, let us notice that it does not matter what we call the unknowns, x_k, y_k or anything else. So, all the information necessary to solve the system is contained in the matrix A, which is called the coefficient matrix of the system, and in the vector (right side) b. Hence, all the information we need is contained in the following matrix
(A | b),
which is obtained by attaching the column b to the matrix A. This matrix is called the augmented matrix of the system. We will usually put a vertical line separating A and b to distinguish between the augmented matrix and the coefficient matrix.
It is clear that the operations 1 and 2 do not change the set of solutions of the system;
they essentially do not change the system. As for the operation 3, one can easily see that it does not lose solutions. Namely, let a "new" system be obtained from an "old" one by a row operation of type 3. Then any solution of the "old" system is a solution of the "new" one. To see that we do not gain anything extra, i.e., that any solution of the "new" system is also a solution of the "old" one, we just notice that row operations of type 3 are reversible, i.e., the "old" system also can be obtained from the "new" one by applying a row operation of type 3.
Row operations and multiplication by elementary matrices. There is another, more "advanced" explanation of why the above row operations are legal. Namely, every row operation is equivalent to multiplication of the matrix from the left by one of the special elementary matrices. Namely, multiplication by the matrix
obtained from the identity matrix I by interchanging its rows number j and number k just interchanges the rows number j and number k of the matrix being multiplied. Multiplication by the matrix obtained from I by replacing the diagonal entry in row number k by a non-zero scalar a
multiplies the row number k by a. Finally, multiplication by the matrix
obtained from I by placing an extra entry a in row number k, column number j (j ≠ k),
adds to the row number k the row number j multiplied by a, and leaves all other rows intact. A way to describe (or to remember) these elementary matrices: they are obtained from I by applying the corresponding row operation to it. To see that the multiplication by these matrices works as advertised, one can just check how the multiplications act on vectors (columns). Note that all these matrices are invertible (compare with the reversibility of row operations). The inverse of the first matrix is the matrix itself. To get the inverse of the second one, one just replaces a by 1/a. And finally, the inverse of the third matrix is obtained by replacing a by -a. To see that the inverses are indeed obtained this way, one again can simply check how they act on columns. So, performing a row operation on the augmented matrix of the system Ax = b is equivalent to multiplication of the system (from the left) by a special invertible matrix E. Left multiplying the equality Ax = b by E, we get that any solution of the equation Ax = b
is also a solution of
EAx = Eb.
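The correspondence between row operations and left multiplication by elementary matrices is easy to see on a small example (a NumPy sketch added for illustration; the 3 × 3 matrices are arbitrary and not from the text):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

E_exchange = np.array([[0., 1., 0.],    # I with rows 1 and 2 interchanged
                       [1., 0., 0.],
                       [0., 0., 1.]])
E_scale = np.diag([1., 3., 1.])         # I with the (2, 2) entry replaced by 3
E_replace = np.eye(3)
E_replace[2, 0] = -7.                   # I with an extra -7 in position (3, 1)

print(E_exchange @ A)   # rows 1 and 2 of A interchanged
print(E_scale @ A)      # row 2 of A multiplied by 3
print(E_replace @ A)    # row 3 of A replaced by (row 3) - 7*(row 1)

# Each elementary matrix is invertible; for the third kind, the inverse flips the sign of a.
E_inv = np.eye(3)
E_inv[2, 0] = 7.
print(np.allclose(E_inv @ E_replace, np.eye(3)))   # True
```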
Multiplying this equation (from the left) by E^{-1}, we get that any of its solutions is a solution of the equation E^{-1}EAx = E^{-1}Eb, which is the original equation Ax = b. So, a row operation does not change the solution set of a system.
Row reduction. The main step of row reduction consists of three sub-steps:
1. Find the leftmost non-zero column of the matrix;
2. Make sure, by applying row operations of type 2, if necessary, that the first (the upper) entry of this column is non-zero. This entry will be called the pivot entry or simply the pivot;
3. "Kill" (i.e., make them 0) all non-zero entries below the pivot by adding (subtracting) an appropriate multiple of the first row from the rows number 2, 3, ..., m.
We apply the main step to a matrix, then we leave the first row alone and apply the main step to rows 2, ... , m, then to rows 3, ... , m, etc. The point to remember is that after we subtract a multiple of a row from all rows below it (step 3), we leave it alone and do not change it in any way, not even interchange it with another row. After applying the main step finitely many times (at most m), we get what is called the echelon form of the matrix. An example of row reduction. Let us consider the following linear system:
x_1 + 2x_2 + 3x_3 = 1
3x_1 + 2x_2 + x_3 = 7
2x_1 + x_2 + 2x_3 = 1
The augmented matrix of the system is
[1 2 3 | 1]
[3 2 1 | 7]
[2 1 2 | 1].
Subtracting 3·(row 1) from row 2 and 2·(row 1) from row 3, we get
[1  2  3 |  1]
[0 -4 -8 |  4]
[0 -3 -4 | -1].
Multiplying row 2 by (-1/4), we get
[1  2  3 |  1]
[0  1  2 | -1]
[0 -3 -4 | -1].
Adding 3·(row 2) to row 3, we obtain the echelon form
[1  2  3 |  1]
[0  1  2 | -1]
[0  0  2 | -4].
Now we can use the so-called back substitution to solve the system. Namely, from the last row (equation) we get x_3 = -2. Then from the second equation we get
x_2 = -1 - 2x_3 = -1 - 2(-2) = 3,
and finally, from the first row (equation),
x_1 = 1 - 2x_2 - 3x_3 = 1 - 6 + 6 = 1.
So, the solution is
x = [ 1]
    [ 3]
    [-2],
or x = (1, 3, -2)^T. We can check the solution by multiplying Ax, where A is the coefficient matrix. Instead of using back substitution, we can do row reduction from the bottom to the top, killing all the entries above the main diagonal of the coefficient matrix: we start by multiplying the last row by 1/2, and the rest is pretty self-explanatory; we end up with the reduced form
[1 0 0 |  1]
[0 1 0 |  3]
[0 0 1 | -2],
and we just read the solution x = (1, 3, -2)^T off the augmented matrix.
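The solution of this worked example can be double-checked with a one-line matrix-vector multiplication (a NumPy sketch added for illustration; the coefficient matrix and right side are those of the system as written above):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [3., 2., 1.],
              [2., 1., 2.]])
b = np.array([1., 7., 1.])

x = np.array([1., 3., -2.])
print(A @ x)                                   # [1. 7. 1.]: x solves the system
print(np.allclose(np.linalg.solve(A, b), x))   # True
```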
Echelon form. A matrix is in echelon form if it satisfies the following two conditions:
1. All zero rows (i.e., the rows with all entries equal to 0), if any, are below all non-zero rows.
For a non-zero row, let us call the leftmost non-zero entry the leading entry. Then the second property of the echelon form can be formulated as follows:
2. For any non-zero row its leading entry is strictly to the right of the leading entry in the previous row.
The leading entry in each row of an echelon form is also called a pivot entry, or simply a pivot, because these entries are exactly the pivots we used in the row reduction.
A particular case of the echelon form is the so-called triangular form. We got this form in our example above. In this form the coefficient matrix is square (n × n), all its entries on the main diagonal are non-zero, and all the entries below the main diagonal are zero. The right side, i.e., the rightmost column of the augmented matrix, can be arbitrary. After the backward phase of the row reduction, we get what is called the reduced echelon form of the matrix: the case of the coefficient matrix equal to I, as in the above example, is a particular case of the reduced echelon form. The general definition is as follows: we say that a matrix is in reduced echelon form if it is in echelon form and
3. All pivot entries are equal to 1;
4. All entries above the pivots are 0. Note that all entries below the pivots are also 0 because of the echelon form.
To get the reduced echelon form from the echelon form, we work from the bottom to the top and from the right to the left, using row replacement to kill all entries above the pivots. An example of the reduced echelon form is the system with the coefficient matrix equal to I. In this case, one just reads the solution off the reduced echelon form. In the general case, one can also easily read the solution off the reduced echelon form. For example, let the reduced echelon form of the system (augmented matrix) be
[1 2 0 0 0 | 1]
[0 0 1 5 0 | 2]
[0 0 0 0 1 | 3].
Then the solution reads:
x_1 = 1 - 2x_2,
x_2 is free,
x_3 = 2 - 5x_4,
x_4 is free,
x_5 = 3.
One can also find the solution from the echelon form by using back substitution: the idea is to work from bottom to top, moving all free variables to the right side.
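If one wants to experiment, a computer algebra system can produce the reduced echelon form directly. The following SymPy sketch (an added illustration; SymPy is not used in this text, and the matrix simply repeats the example above) computes the reduced echelon form and reports the pivot columns, from which the free variables can be read off.

```python
from sympy import Matrix

# Augmented matrix whose reduced echelon form was discussed above.
M = Matrix([[1, 2, 0, 0, 0, 1],
            [0, 0, 1, 5, 0, 2],
            [0, 0, 0, 0, 1, 3]])

R, pivot_cols = M.rref()
print(R)            # already in reduced echelon form, so R equals M
print(pivot_cols)   # (0, 2, 4): pivots in columns 1, 3, 5, so x2 and x4 are free
```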
Proposition:
1. A solution of the equation Ax = b (if it exists) is unique if and only if the echelon form of the coefficient matrix A has a pivot in every column;
2. The equation Ax = b is consistent for all right sides b if and only if the echelon form of the coefficient matrix has a pivot in every row;
3. The equation Ax = b has a unique solution for any right side b if and only if the echelon form of the coefficient matrix A has a pivot in every column and every row.
The first statement is trivial, because free variables are responsible for all non-uniqueness. I should only emphasize that this statement does not say anything about existence. The second statement is a tiny bit more complicated. If we have a pivot in every row of the coefficient matrix, we cannot have a pivot in the last column of the augmented matrix, so the system is always consistent, no matter what the right side b is. Let us show that if we have a zero row in the echelon form of the coefficient matrix A, then we can pick a right side b such that the system Ax = b is not consistent. Let A_e be the echelon form of the coefficient matrix A. Then A_e = EA, where E is the product of the elementary matrices corresponding to the row operations, E = E_N ··· E_2 E_1. If A_e has a zero row, then the last row is also zero. Therefore, if we put b_e = (0, ..., 0, 1)^T (all entries are 0, except the last one), then the equation A_e x = b_e does not have a solution. Multiplying this equation by E^{-1} from the left, and recalling that E^{-1}A_e = A, we get that the equation Ax = E^{-1}b_e does not have a solution. Finally, statement 3 immediately follows from statements 1 and 2.
From the above analysis of pivots we get several very important corollaries. The main observation. In echelon form, any row and any column have no more than 1 pivot in it (it can have 0 pivots)
Proposition: Let v_1, v_2, ..., v_m ∈ ℝ^n, and let A = [v_1, v_2, ..., v_m] be the n × m matrix with columns v_1, v_2, ..., v_m. Then:
1. The system v_1, v_2, ..., v_m is linearly independent if and only if the echelon form of A has a pivot in every column;
2. The system v_1, v_2, ..., v_m is complete (generating) in ℝ^n if and only if the echelon form of A has a pivot in every row;
3. The system v_1, v_2, ..., v_m is a basis in ℝ^n if and only if the echelon form of A has a pivot in every column and in every row.
Proof: The system v_1, v_2, ..., v_m ∈ ℝ^n is linearly independent if and only if the equation
x_1 v_1 + x_2 v_2 + ... + x_m v_m = 0 has the unique (trivial) solution x_1 = x_2 = ... = x_m = 0, or equivalently, the equation Ax = 0 has the unique solution x = 0. By statement 1 above, this happens if and only if there is a pivot in every column of the matrix. Similarly, the system v_1, v_2, ..., v_m ∈ ℝ^n is complete in ℝ^n if and only if the equation
x_1 v_1 + x_2 v_2 + ... + x_m v_m = b
has a solution for any right side b ∈ ℝ^n. By statement 2 above, this happens if and only if there is a pivot in every row of the echelon form of the matrix. And finally, the system v_1, v_2, ..., v_m is a basis in ℝ^n if and only if the equation
x_1 v_1 + x_2 v_2 + ... + x_m v_m = b
has a unique solution for any right side b ∈ ℝ^n.
By statement 3 this happens if and only if there is a pivot in every column and in every row of the echelon form of A.
Proposition: Any linearly independent system of vectors in ℝ^n cannot have more than n vectors in it.
Proof: Let a system v_1, v_2, ..., v_m ∈ ℝ^n be linearly independent, and let A = [v_1, v_2, ..., v_m] be the n × m matrix with columns v_1, v_2, ..., v_m. By the Proposition above, the echelon form of A must have a pivot in every column, which is impossible if m > n (the number of pivots cannot be more than the number of rows).
Proposition: Any two bases in a vector space V have the same number of vectors in them.
Proof: Let v_1, v_2, ..., v_n and w_1, w_2, ..., w_m be two different bases in V. Without loss of generality we can assume that n ≤ m. Consider an isomorphism A: ℝ^n → V defined by
A e_k = v_k, k = 1, 2, ..., n,
where e_1, e_2, ..., e_n is the standard basis in ℝ^n. Since A^{-1} is also an isomorphism, the system A^{-1}w_1, A^{-1}w_2, ..., A^{-1}w_m is a basis in ℝ^n. So it is linearly independent, and therefore m ≤ n. Together with the assumption n ≤ m this implies that m = n. The statement below is a particular case of the above proposition.
Proposition: Any basis in ℝ^n must have exactly n vectors in it.
Proof: This fact follows immediately from the previous proposition, but there is also a direct proof. Let v_1, v_2, ..., v_m be a basis in ℝ^n and let A be the n × m matrix with columns v_1, v_2, ..., v_m. The fact that the system is a basis means that the equation Ax = b has a unique solution for any (all possible) right side b. The existence means that there is a pivot in every row (of the echelon form of the matrix), hence the number of pivots is exactly n. The uniqueness means that there is a pivot in every column of the coefficient matrix (its echelon form), so
m = number of columns = number of pivots = n.
Proposition: Any spanning (generating) set in ℝ^n must have at least n vectors.
Proof: Let v_1, v_2, ..., v_m be a complete system in ℝ^n, and let A be the n × m matrix with columns v_1, v_2, ..., v_m. Statement 2 of the Proposition above implies that the echelon form of A has a pivot in every row. Since the number of pivots cannot exceed the number of columns, n ≤ m.
Note that this proposition applies only to square matrices!
Proof: We know that a matrix A is invertible if and only if the equation Ax = b has a unique solution for any right side b. This happens if and only if the echelon form of the matrix A has pivots in every row and in every column.
If a matrix A is left invertible, the equation Ax = 0 has the unique solution x = 0. Indeed, if B is a left inverse of A (i.e., BA = I), and x satisfies Ax = 0, then multiplying this identity by B from the left we get x = 0, so the solution is unique. Therefore, the echelon form of A has pivots in every column. If the matrix A is square (n × n), the echelon form also has pivots in every row, so the matrix is invertible.
If a matrix A is right invertible, and C is its right inverse (AC = I), then for x = Cb, b ∈ ℝ^n,
Ax = ACb = Ib = b.
Therefore, for any right side b the equation Ax = b has a solution x = Cb. Thus, the echelon form of A has pivots in every row. If A is square, it also has a pivot in every column, so A is invertible.
To compute the inverse A^{-1} of an invertible matrix A, one augments the identity matrix to A, forming (A | I), and row reduces until the left half becomes I; the right half is then A^{-1}. There are several possible explanations of this algorithm. The first, a naive one, is as follows: we know that (for an invertible A) the vector A^{-1}b is the solution of the equation Ax = b. So to find the column number k of A^{-1} we need to find the solution of Ax = e_k, where e_1, e_2, ..., e_n is the standard basis in ℝ^n. The above algorithm just solves the equations Ax = e_k, k = 1, 2, ..., n, simultaneously!
Let us also present another, more "advanced" explanation. As we discussed above, every row operation can be realized as a left multiplication by an elementary matrix. Let E_1, E_2, ..., E_N be the elementary matrices corresponding to the row operations we performed, and let E = E_N ··· E_2 E_1 be their product. We know that the row operations transform A to the identity, i.e., EA = I, so E = A^{-1}. But the same row operations transform the augmented matrix (A | I) to (EA | E) = (I | A^{-1}). This "advanced" explanation using elementary matrices implies an important proposition that will often be used later.
Theorem: Any invertible matrix can be represented as a product of elementary matrices.
Proof: As we discussed in the previous paragraph,
A^{-1} = E_N ··· E_2 E_1,
so
A = (A^{-1})^{-1} = E_1^{-1} E_2^{-1} ··· E_N^{-1}.
Example: Let us find the inverse of the matrix
[ 1   4  -2]
[-2  -7   7]
[ 3  11  -6].
Augmenting the identity matrix to it and performing row reduction we get
[ 1  4 -2 | 1 0 0]    [1  4 -2 |  1 0 0]    [3 12 -6 |  3 0 0]
[-2 -7  7 | 0 1 0] →  [0  1  3 |  2 1 0] →  [0  1  3 |  2 1 0]
[ 3 11 -6 | 0 0 1]    [0  0  3 | -1 1 1]    [0  0  3 | -1 1 1].
Here in the last row operation we multiplied the first row by 3 to avoid fractions in the backward phase of the row reduction. Continuing with the row reduction (backward phase) we get
[3 0 0 | -35  2 14]
[0 1 0 |   3  0 -1]
[0 0 3 |  -1  1  1].
Dividing the first and the last rows by 3 we get the inverse matrix
A^{-1} = [-35/3  2/3  14/3]
         [    3    0    -1]
         [ -1/3  1/3   1/3].
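The result can be double-checked numerically (a NumPy sketch added for illustration):

```python
import numpy as np

A = np.array([[ 1.,  4., -2.],
              [-2., -7.,  7.],
              [ 3., 11., -6.]])

A_inv = np.array([[-35., 2., 14.],
                  [  9., 0., -3.],
                  [ -1., 1.,  1.]]) / 3.0

print(np.allclose(A @ A_inv, np.eye(3)))      # True
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```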
For a vector space consisting only of the zero vector 0 we put dim V = 0. If V does not have a (finite) basis, we put dim V = ∞. If dim V is finite, we call the space V finite-dimensional; otherwise we call it infinite-dimensional. The Proposition above asserts that the dimension is well defined, i.e., that it does not depend on the choice of a basis. This immediately implies the following.
Proposition: A vector space V is finite-dimensional if and only if it has a finite spanning system.
Suppose we have a system of vectors in a finite-dimensional vector space, and we want to check whether it is a basis (or whether it is linearly independent, or whether it is complete). Probably the simplest way is to use an isomorphism A: V → ℝ^n, n = dim V, to move the problem to ℝ^n, where all such questions can be answered by row reduction (studying pivots). Note that if dim V = n, then there always exists an isomorphism A: V → ℝ^n. Indeed, if dim V = n then there exists a basis v_1, v_2, ..., v_n ∈ V, and one can define an isomorphism
A : V -t IR n
by
AVk =
ek , k = 1,2, ... , n.
Proposition. Any linearly independent system in a finite-dimensional vector space V cannot have more than dim V vectors in it.
Proof. Let v_1, v_2, ..., v_m in V be a linearly independent system, and let A : V -> R^n be an isomorphism. Then A v_1, A v_2, ..., A v_m is a linearly independent system in R^n, and by the proposition above m <= n.
Proposition. Any generating system in a finite-dimensional vector space V must have at least dim V vectors in it.
Proof. Let v_1, v_2, ..., v_m in V be a complete (generating) system in V, and let A : V -> R^n be an isomorphism. Then A v_1, A v_2, ..., A v_m is a complete system in R^n, and by the proposition above m >= n.
Proposition. Any linearly independent system of vectors in a finite-dimensional space can be extended to a basis, i.e., if v_1, v_2, ..., v_r are linearly independent vectors in a finite-dimensional vector space V then one can find vectors v_{r+1}, v_{r+2}, ..., v_n such that the system of vectors v_1, v_2, ..., v_n is a basis in V.
Proof. Let n = dim V and let r < n (if r = n then the system v_1, v_2, ..., v_r is already a basis, and the case r > n is impossible). Take any vector not belonging to span{v_1, v_2, ..., v_r} and call it v_{r+1} (one can always do that because the system v_1, v_2, ..., v_r is not generating). The system v_1, v_2, ..., v_r, v_{r+1} is linearly independent. Repeat the procedure with the new system to get a vector v_{r+2}, and so on. We stop the process when we obtain a generating system, which is then a basis. Note that the process cannot continue indefinitely, because a linearly independent system of vectors in V cannot have more than n = dim V vectors.
Theorem (general solution of a linear equation). Let x_1 be a particular solution of the equation Ax = b, and let H denote the set of all solutions of the homogeneous equation Ax = 0. Then the set of all solutions of Ax = b is exactly
{x_1 + x_h : x_h in H},
i.e., the general solution of Ax = b is a particular solution of Ax = b plus the general solution of Ax = 0.
Proof. Fix a vector x_1 satisfying A x_1 = b. Let a vector x_h satisfy A x_h = 0. Then for x = x_1 + x_h we have
Ax = A(x_1 + x_h) = A x_1 + A x_h = b + 0 = b,
so any x of the form x = x_1 + x_h, x_h in H, is a solution of Ax = b. Now let x satisfy Ax = b. Then for x_h := x - x_1 we get
A x_h = Ax - A x_1 = b - b = 0,
so x_h is in H. Therefore any solution x of Ax = b admits the representation x = x_1 + x_h with x_h in H.
The power of this theorem is in its generality. It applies to all linear equations; we do not have to assume here that the vector spaces are finite-dimensional. You will meet this theorem again in differential equations, integral equations, partial differential equations, etc. Besides showing the structure of the solution set, this theorem allows one to separate the investigation of uniqueness from the study of existence. Namely, to study uniqueness, we only need to analyse the homogeneous equation Ax = 0, which always has a solution. There is an immediate application in this course: this theorem allows us to check a proposed solution of a system Ax = b. For example, consider a system Ax = b in five unknowns x_1, ..., x_5. Performing row reduction one can find the general solution of this system: a particular solution plus a combination of two kernel vectors multiplied by the free parameters x_3 and x_5.
The parameters x_3, x_5 can be denoted here by any other letters, t and s for example; we keep the notation x_3 and x_5 only to remind us that they came from the corresponding free variables. Now, let us suppose that we are just given this solution, and we want to check whether or not it is correct. Of course, we can repeat the row operations, but this is too time consuming. Moreover, if the solution was obtained by some non-standard method, it can look different from what we get from the row reduction. For example, replacing the last vector in the formula by its sum with the second one gives the same solution set (can you say why?). The resulting formula is different from the solution we got from the row reduction, but it is nevertheless correct. The simplest way to check that a formula gives us correct solutions is to check that the first vector, (3, 1, 0, 2, 0)^T, satisfies the equation Ax = b, and that the other two (the ones with
the parameters x_3 and x_5, or s and t, in front of them) satisfy the associated homogeneous equation Ax = 0. If this checks out, we are assured that any vector x so defined is indeed a solution. Note that this method of checking does not guarantee that the formula gives us all the solutions. For example, if we somehow missed one of the kernel terms, the above method of checking would still work fine. What comes to mind is to count the pivots again. In this example, if one does the row operations, the number of pivots is 3. So indeed, there should be 2 free variables, and it looks like we did not miss anything. To be able to prove this, we will need the new notions of the fundamental subspaces and of the rank of a matrix. With these notions one does not have to perform all row operations to check that there are only 2 free variables, and that both formulas give a correct general solution. A small numerical check of this kind is sketched below.
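To make the checking recipe concrete, here is a small Python/NumPy sketch. The system, the candidate particular solution, and the kernel vectors below are illustrative stand-ins chosen by us, not the numbers from the example above.

```python
import numpy as np

# Illustrative data (ours, not the system from the text): a 3 x 5 coefficient matrix.
A = np.array([[1., 1., 1., 2., -5.],
              [0., 1., 0., 1., -2.],
              [0., 0., 1., 1., -1.]])
b = np.array([8., 3., 2.])

x_p = np.array([3., 1., 0., 2., 0.])                 # candidate particular solution
kernel = [np.array([0., -1., -1., 1., 0.]),          # candidate basis of Ker A
          np.array([2., 2., 1., 0., 1.])]

assert np.allclose(A @ x_p, b)                        # the particular solution works
for v in kernel:
    assert np.allclose(A @ v, 0)                      # each direction solves Ax = 0

# Counting pivots shows nothing was missed: rank A + dim Ker A = number of unknowns.
rank = np.linalg.matrix_rank(A)
print(rank, A.shape[1] - rank)                        # 3 and 2: exactly two free variables
```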
1. The pivot columns of the original matrix A (i.e., the columns where after row operations we will have pivots in the echelon form) give us a basis (one of many possible) in Ran A.
2. The pivot rows of the echelon form A_e give us a basis in the row space. Of course, it is possible just to transpose the matrix and then do row operations. But if we already have the echelon form of A, say from computing Ran A, then we get Ran A^T for free.
3. To find a basis in the null space Ker A one needs to solve the homogeneous equation Ax = 0; the details can be seen from the example below.
Example. Consider the matrix
( 1  1  2  2  1 )
( 2  2  1  1  1 )
( 3  3  3  3  2 )
( 1  1 -1 -1  0 )
Performing row operations we get the echelon form
( 1  1  2  2  1 )
( 0  0 -3 -3 -1 )
( 0  0  0  0  0 )
( 0  0  0  0  0 )
(the pivots are the entries 1 and -3 in the first two rows). So, the columns 1 and 3 of the original matrix, i.e., the columns
(1, 2, 3, 1)^T and (2, 1, 3, -1)^T,
give us a basis in Ran A. We also get a basis of the row space Ran A^T for free: the first and second rows of the echelon form of A, i.e., the vectors
(1, 1, 2, 2, 1)^T and (0, 0, -3, -3, -1)^T
(we put the vectors vertically here; the question of whether to write vectors vertically as columns or horizontally as rows is really a matter of convention. Our reason for putting them vertically is that although we call Ran A^T the row space, we define it as the column space of A^T). To compute a basis in the null space Ker A we need to solve the equation Ax = 0. Compute the reduced echelon form of A, which in this example is
( 1  1  0  0  1/3 )
( 0  0  1  1  1/3 )
( 0  0  0  0   0  )
( 0  0  0  0   0  )
Note that when solving the homogeneous equation Ax = 0, it is not necessary to write the whole augmented matrix; it is sufficient to work with the coefficient matrix. Indeed, in this case the last column of the augmented matrix is the column of zeroes, which does not change under row operations. So we can just keep this column in mind, without actually writing it. Keeping this last zero column in mind, we can read the solution off the reduced echelon form above:
x_1 = -x_2 - (1/3)x_5,  x_2 is free,  x_3 = -x_4 - (1/3)x_5,  x_4 is free,  x_5 is free,
so the general solution of Ax = 0 is
x = x_2 (-1, 1, 0, 0, 0)^T + x_4 (0, 0, -1, 1, 0)^T + x_5 (-1/3, 0, -1/3, 0, 1)^T.
The vectors attached to the free variables, i.e., in our case the vectors
(-1, 1, 0, 0, 0)^T,  (0, 0, -1, 1, 0)^T,  (-1/3, 0, -1/3, 0, 1)^T,
form a basis in Ker A. Unfortunately, there is no shortcut for finding a basis in Ker A^T: one must solve the equation A^T x = 0, and the knowledge of the echelon form of A does not help here.
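The whole computation can be reproduced with a computer algebra system. The sketch below (ours, not from the text) uses SymPy's exact-arithmetic routines on the example matrix above, and also checks the identity dim Ker A + rank A = n that appears in the next theorem.

```python
from sympy import Matrix

A = Matrix([[1, 1,  2,  2, 1],
            [2, 2,  1,  1, 1],
            [3, 3,  3,  3, 2],
            [1, 1, -1, -1, 0]])

rref, pivot_cols = A.rref()               # reduced echelon form and pivot column indices
print(pivot_cols)                          # (0, 2): columns 1 and 3 give a basis of Ran A
print([A.col(j).T for j in pivot_cols])

print([v.T for v in A.nullspace()])        # basis of Ker A (free variables x2, x4, x5)
print([r.T for r in A.T.columnspace()])    # basis of the row space Ran A^T

# rank-nullity: dim Ker A + rank A = number of columns
assert len(A.nullspace()) + A.rank() == A.cols
```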
The null space Ker A. The case of the null space Ker A is probably the simplest one: since we solved the equation Ax = 0, i.e., found all the solutions, any vector in Ker A is a linear combination of the vectors we obtained. Thus, the vectors we obtained form a spanning system in Ker A. To see that the system is linearly independent, let us multiply each vector by the corresponding free variable and add everything up. Then for each free variable x_k, the entry number k of the resulting vector is exactly x_k, so the only way this vector (the linear combination) can be 0 is when all free variables are 0.
The column space Ran A. Let us now explain why the method for finding a basis in the column space Ran A works. First of all, notice that the pivot columns of the reduced echelon form A_re form a basis in Ran A_re. Since row operations are just left multiplications by invertible matrices, they do not change linear independence. Therefore, the pivot columns of the original matrix A are linearly independent. Let us now show that the pivot columns of A span the column space of A. Let v_1, v_2, ..., v_r be the pivot columns of A, and let v be an arbitrary column of A. We want to show that v can be represented as a linear combination of the pivot columns v_1, v_2, ..., v_r, i.e., v = a_1 v_1 + a_2 v_2 + ... + a_r v_r. The reduced echelon form A_re is obtained from A by a left multiplication A_re = EA, where E is a product of elementary matrices, so E is an invertible matrix. The vectors E v_1, E v_2, ..., E v_r are the pivot columns of A_re, and the column v of A is transformed to the column Ev of A_re. Since the pivot columns of A_re form a basis in Ran A_re, the vector Ev can be represented as a linear combination Ev = a_1 E v_1 + a_2 E v_2 + ... + a_r E v_r. Multiplying this equality by E^{-1} from the left we get the representation v = a_1 v_1 + a_2 v_2 + ... + a_r v_r, so indeed the pivot columns of A span Ran A.
The row space Ran A^T. It is easy to see that the pivot rows of the echelon form A_e of A are linearly independent. Indeed, let w_1, w_2, ..., w_r be the transposed (since we agreed always to put vectors vertically) pivot rows of A_e. Suppose a_1 w_1 + a_2 w_2 + ... + a_r w_r = 0. Consider the first non-zero entry of w_1. Since for all other vectors w_2, w_3, ..., w_r the corresponding entries equal 0 (by the definition of echelon form), we conclude that a_1 = 0. So we can just ignore the first term in the sum. Consider now the first non-zero entry of w_2. The corresponding entries of the vectors w_3, ..., w_r are 0, so a_2 = 0. Repeating this procedure, we get that a_k = 0 for all k = 1, 2, ..., r. To see that the vectors w_1, w_2, ..., w_r span the row space, one can notice that row operations do not change the row space. This can be obtained directly by analyzing the row operations, but we present here a more formal way to demonstrate this fact.
For a transformation A and a set X let us denote by A(X) the set of all elements y which can be represented as y = A(x), x in X:
A(X) := {y = A(x) : x in X}.
If A is an m x n matrix, and A_e is its echelon form, then A_e is obtained from A by a left multiplication A_e = EA, where E is an m x m invertible matrix (the product of the corresponding elementary matrices). Then
Ran A_e^T = Ran (EA)^T = Ran (A^T E^T) = A^T E^T (R^m) = A^T (R^m) = Ran A^T,
because the invertible matrix E^T maps R^m onto itself. So the echelon form A_e has the same row space as A.
Theorem. Let A be an m x n matrix, i.e., a linear transformation from R^n to R^m. Then
1. dim Ker A + dim Ran A = dim Ker A + rank A = n (the dimension of the domain of A);
2. dim Ker A^T + dim Ran A^T = dim Ker A^T + rank A^T = dim Ker A^T + rank A = m (the dimension of the target space of A).
Proof. The proof, modulo the above algorithms for finding bases in the fundamental subspaces, is almost trivial. The first statement is simply the fact that the number of free variables (dim Ker A) plus the number of basic variables (i.e., the number of pivots, i.e., rank A) adds up to the number of columns (i.e., to n). The second statement, if one takes into account that rank A = rank A^T, is simply the first statement applied to A^T.
Example. Let us return to the system Ax = b in five unknowns considered above, and to the two formulas for its general solution obtained there (the one produced by row reduction, and the modified one in which the last vector was replaced by its sum with the second one).
A vector x given by either formula is indeed a solution of the equation. But how can we guarantee that either of the formulas describes all solutions? First of all, we know that in either formula the last 2 vectors (the ones multiplied by the parameters) belong to Ker A. It is easy to see that in either case both vectors are linearly independent (two vectors are linearly dependent if and only if one is a multiple of the other). Now, let us count dimensions: interchanging the first and the second rows and performing the first round of row operations, we see that there are three pivots already, so rank A >= 3. (Actually, we can already see that the rank is 3, but the estimate is enough here.) By the theorem, rank A + dim Ker A = 5, hence dim Ker A <= 2, and therefore there cannot be more than 2 linearly independent vectors in Ker A. Therefore, the last 2 vectors in either formula form a basis in Ker A, so either formula gives all solutions of the equation. An important corollary of the rank theorem is the following theorem connecting existence and uniqueness for linear equations.
Theorem. Let A be an m x n matrix. Then the equation
Ax = b
has a solution for every b in R^m if and only if the dual equation
A^T x = 0
has a unique (only the trivial) solution. (Note that in the second equation we have A^T, not A.)
Proof. The proof follows immediately from the rank theorem by counting dimensions.
There is a very nice geometric interpretation of the rank theorem. Namely, statement 1 of the theorem says that if a transformation A : R^n -> R^m has trivial kernel (Ker A = {0}), then the dimensions of the domain R^n and of the range Ran A coincide. If the kernel is non-trivial, then the transformation "kills" dim Ker A dimensions, so dim Ran A = n - dim Ker A.
The numbers x_1, x_2, ..., x_n are called the coordinates of the vector v in the basis B. It is convenient to join these coordinates into the so-called coordinate vector of v relative to the basis B, which is the column vector
[v]_B = (x_1, x_2, ..., x_n)^T.
The mapping v -> [v]_B is an isomorphism between V and R^n. It transforms the basis v_1, v_2, ..., v_n into the standard basis e_1, e_2, ..., e_n in R^n.
Matrix of a linear transformation. Let T : V -> W be a linear transformation, and let A = {a_1, a_2, ..., a_n}, B = {b_1, b_2, ..., b_m} be bases in V and W respectively. A matrix of the transformation T in (or with respect to) the bases A and B is an m x n matrix, denoted by [T]_BA, which relates the coordinate vectors [Tv]_B and [v]_A:
[Tv]_B = [T]_BA [v]_A;
notice the balance of the symbols A and B here: this is the reason we put the first basis A in the second position. The matrix [T]_BA is easy to find: its kth column is just the coordinate vector [T a_k]_B (compare this with finding the matrix of a linear transformation from R^n to R^m). As in the case of standard bases, composition of linear transformations corresponds to multiplication of their matrices; one only has to be a bit more careful about bases. Namely, let T_1 : X -> Y and T_2 : Y -> Z be linear transformations, and let A, B and C be bases in X, Y and Z respectively. Then for the composition T = T_2 T_1, T : X -> Z, Tx := T_2(T_1(x)), we have
[T]_CA = [T_2 T_1]_CA = [T_2]_CB [T_1]_BA
(notice again the balance of indices here). The proof goes exactly as in the case of the spaces R^n with standard bases, so we do not repeat it here. Another possibility is to transfer everything to the spaces R^n via the coordinate isomorphisms v -> [v]_B; then one does not need any new proof, everything follows from the results about matrix multiplication.
Change of Coordinates Matrix. Suppose we have two bases A = {a_1, a_2, ..., a_n} and B = {b_1, b_2, ..., b_n} in a vector space V. Consider the identity transformation I = I_V and its matrix [I]_BA in these bases. By the definition,
[v]_B = [I]_BA [v]_A for all v in V,
i.e., for any vector v in V the matrix [I]_BA transforms its coordinates in the basis A into its coordinates in the basis B. The matrix [I]_BA is often called the change of coordinates (from the basis A to the basis B) matrix. The matrix [I]_BA is easy to compute: according to the general rule for finding the matrix of a linear transformation, its kth column is the coordinate representation [a_k]_B of the kth element of the basis A. Note that
[I]_AB = ([I]_BA)^{-1}
(this follows immediately from the rule for multiplication of matrices), so any change of coordinates matrix is always invertible.
An example: change of coordinates from the standard basis. Let our space V be R^n, and let us have a basis B = {b_1, b_2, ..., b_n} there. We also have the standard basis S = {e_1, e_2, ..., e_n} there. The change of coordinates matrix [I]_SB is easy to compute:
[I]_SB = [b_1, b_2, ..., b_n] =: B,
i.e., it is just the matrix B whose kth column is the vector (column) b_k. And in the other direction,
[I]_BS = ([I]_SB)^{-1} = B^{-1}.
For example, consider the basis B = {(1, 2)^T, (2, 1)^T} in R^2. Then
[I]_SB = ( 1 2 )
         ( 2 1 ) =: B
and
[I]_BS = ([I]_SB)^{-1} = B^{-1} = (1/3) ( -1  2 )
                                        (  2 -1 )
(we know how to compute inverses, and it is also easy to check that the above matrix is indeed the inverse of B).
An example: going through the standard basis. In the space P_1 of polynomials of degree at most 1 we have the bases A = {1, 1 + x} and B = {1 + 2x, 1 - 2x}, and we want to find the change of coordinates matrix [I]_BA. Of course, we can always take vectors from the basis A and try to decompose them in the basis B; this involves solving linear systems, and we know how to do that. However, the following way is simpler. In P_1 we also have the standard basis S = {1, x}, and for this basis
[I]_SA = ( 1 1 )              [I]_SB = ( 1  1 )
         ( 0 1 ) =: A,                 ( 2 -2 ) =: B,
so [I]_AS = A^{-1} and [I]_BS = B^{-1}. Then
[I]_BA = [I]_BS [I]_SA = B^{-1} A.
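A quick numerical check of this change of coordinates computation (our own sketch, using NumPy) is as follows.

```python
import numpy as np

# Coordinates of the basis vectors in the standard basis S = {1, x}:
# A = {1, 1 + x},  B = {1 + 2x, 1 - 2x}.
I_SA = np.array([[1., 1.],
                 [0., 1.]])          # columns: [1]_S, [1 + x]_S
I_SB = np.array([[1., 1.],
                 [2., -2.]])         # columns: [1 + 2x]_S, [1 - 2x]_S

I_BA = np.linalg.inv(I_SB) @ I_SA    # change of coordinates from A to B, via S
print(I_BA)

# Sanity check: the polynomial 1 + x has A-coordinates (0, 1)^T; its B-coordinates
# should rebuild 1 + x = c1*(1 + 2x) + c2*(1 - 2x).
c = I_BA @ np.array([0., 1.])
print(c[0] * np.array([1., 2.]) + c[1] * np.array([1., -2.]))   # -> [1., 1.], the S-coordinates of 1 + x
```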
Notice the balance of indices here.
Matrix of a transformation and change of coordinates. Let T : V -> W be a linear transformation, and let A, A' be two bases in V and let B, B' be two bases in W. Suppose we know the matrix [T]_BA, and we would like to find the matrix representation with respect to the new bases A', B'. To get the matrix in the "new" bases one has to surround the matrix in the "old" bases by change of coordinates matrices. We did not say which change of coordinates matrix should go where, because there is no choice if we follow the balance of indices rule. Namely, the matrix representation of a linear transformation changes according to the formula
[T]_B'A' = [I]_B'B [T]_BA [I]_AA'.
Notice the balance of indices.
The proof can be done just by analyzing what each of the matrices does.
Case of one basis: similar matrices. Let V be a vector space and let A = {a_1, a_2, ..., a_n} be a basis in V. Consider a linear transformation T : V -> V and let [T]_AA be its matrix in this basis (we use the same basis for "inputs" and "outputs"). The case when we use the same basis for "inputs" and "outputs" is very important (because in this case we can multiply a matrix by itself), so let us study it a bit more carefully. Notice that very often the shorter notation [T]_A is used instead of [T]_AA; however, the two-index notation is better adapted to the balance of indices rule, so it is worth keeping in mind when doing a change of coordinates.
Let B = {b_1, b_2, ..., b_n} be another basis in V. By the change of coordinates rule above,
[T]_BB = [I]_BA [T]_AA [I]_AB.
Recalling that [I]_BA = ([I]_AB)^{-1} and denoting Q := [I]_AB, we can rewrite the above formula as
[T]_BB = Q^{-1} [T]_AA Q.
This gives a motivation for the following definition.
Definition. We say that a matrix A is similar to a matrix B if there exists an invertible matrix Q such that A = Q^{-1} B Q.
Since an invertible matrix must be square, it follows from counting dimensions that similar matrices A and B have to be square and of the same size. If A is similar to B, i.e., if A = Q^{-1} B Q, then B = Q A Q^{-1} = (Q^{-1})^{-1} A (Q^{-1}), and Q^{-1} is invertible, therefore B is similar to A. So we can just say that A and B are similar. The above reasoning also shows that it does not matter where to put Q and where Q^{-1}: one can use the formula A = Q B Q^{-1} in the definition of similarity. The above discussion shows that one can treat similar matrices as different matrix representations of the same linear operator (transformation). A small numerical illustration follows.
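As a small illustration of similarity (our own example, not from the text), the sketch below forms [T]_BB = Q^{-1}[T]_AA Q for an arbitrary invertible Q and checks two quantities that similar matrices must share, the trace and the determinant.

```python
import numpy as np

T_AA = np.array([[2., 1.],
                 [0., 3.]])          # matrix of T in the basis A (example data)
Q = np.array([[1., 2.],
              [1., 3.]])             # Q = [I]_AB; any invertible matrix works

T_BB = np.linalg.inv(Q) @ T_AA @ Q   # matrix of the same operator in the basis B

# Similar matrices represent the same operator, so these quantities agree:
print(np.trace(T_AA), np.trace(T_BB))
print(np.linalg.det(T_AA), np.linalg.det(T_BB))
```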
Chapter 3
Matrices
Introduction
A rectangular array of numbers of the form
( a11 ... a1n )
( ...         )                                                    (1)
( am1 ... amn )
is called an m x n matrix, with m rows and n columns. We count rows from the top and columns from the left. Hence
( ai1 ... ain )   and   ( a1j )
                        ( ... )
                        ( amj )
represent respectively the i-th row and the j-th column of the matrix (1), and a_ij represents the entry in the matrix (1) on the i-th row and j-th column.
Example. Consider the 3 x 4 matrix
( 2  4  3 -1 )
( 3  1  5  2 )
(-1  0  7  6 )
Here
( 3  1  5  2 )   and   ( 3 )
                        ( 5 )
                        ( 7 )
represent respectively the 2-nd row and the 3-rd column of the matrix, and 5 represents the entry in the matrix on the 2-nd row and 3-rd column. We now consider the question of arithmetic involving matrices. First of all, let us
study the problem of addition. A reasonable theory can be derived from the following definition.
Definition. Suppose that the two matrices
A = ( a11 ... a1n )          B = ( b11 ... b1n )
    ( ...         )    and       ( ...         )
    ( am1 ... amn )              ( bm1 ... bmn )
both have m rows and n columns. Then we write
A + B = ( a11 + b11 ... a1n + b1n )
        ( ...                     )
        ( am1 + bm1 ... amn + bmn )
and call this the sum of the two matrices A and B. Example. Suppose that
A=[
Then
; -1 0 7
~ ~11
6
and B
=[ ~
-2 1
~ ~2 ~Il'
3 3
and
A+B =
10
2 [ -1
4 3 0 7
-I] [~ ~l
6
and ;
-1 0 7
Proposition (Matrix Addition). Suppose that A, B, C are m x n matrices. Suppose further that 0 represents the m x n matrix with all entries zero. Then
(a) A + B = B + A;
(b) A + (B + C) = (A + B) + C;
(c) A + 0 = A; and
(d) there is an m x n matrix A' such that A + A' = 0.
Proof. Parts (a)-(c) are easy consequences of ordinary addition, as matrix addition is simply entry-wise addition. For part (d), we can consider the matrix A' obtained from A by multiplying each entry of A by -1. The theory of multiplication is rather more complicated, and includes multiplication of a matrix by a scalar as well as multiplication of two matrices. We first study the simpler case of multiplication by scalars.
Definition. Suppose that the matrix
A = ( a11 ... a1n )
    ( ...         )
    ( am1 ... amn )
has m rows and n columns, and that c is a real number. Then we write
cA = ( c a11 ... c a1n )
     ( ...             )
     ( c am1 ... c amn )
and call this the product of the matrix A by the scalar c.
Example. Suppose that
A = ( 2  4  3 -1 )
    ( 3  1  5  2 )
    (-1  0  7  6 )
Then
2A = ( 4  8  6 -2 )
     ( 6  2 10  4 )
     (-2  0 14 12 )
Proposition (Multiplication by Scalar). Suppose that A, B are m x n matrices, and that c, d are real numbers. Suppose further that 0 represents the m x n matrix with all entries zero. Then
(a) c(A + B) = cA + cB;
(b) (c + d)A = cA + dA;
(c) 0A = 0; and
(d) c(dA) = (cd)A.
Proof. These are all easy consequences of ordinary multiplication, as multiplication by a scalar c is simply entry-wise multiplication by the number c.
The question of multiplication of two matrices is rather more complicated. To motivate this, let us consider the representation of a system of linear equations
a11 x1 + ... + a1n xn = b1,
...
am1 x1 + ... + amn xn = bm
in the form Ax = b, where A is the m x n matrix of coefficients.
Can you work out the meaning of this representation? Now let us define matrix multiplication more formally.
Definition. Suppose that
A = ( a11 ... a1n )           B = ( b11 ... b1p )
    ( ...         )    and        ( ...         )
    ( am1 ... amn )               ( bn1 ... bnp )
are respectively an m x n matrix and an n x p matrix. Then the matrix product AB is given by the m x p matrix
AB = ( q11 ... q1p )
     ( ...         )
     ( qm1 ... qmp )
where for every i = 1, ..., m and j = 1, ..., p, we have
q_ij = sum_{k=1}^{n} a_ik b_kj.
Remark. Note first of all that the number of columns of the first matrix must be equal to the number of rows of the second matrix. On the other hand, for a simple way to work out q_ij, the entry in the i-th row and j-th column of AB, we observe that the i-th row of A and the j-th column of B are respectively
( a_i1 ... a_in )   and   ( b_1j )
                          ( ...  )
                          ( b_nj )
We now multiply the corresponding entries, from a_i1 with b_1j and so on until a_in with b_nj, and then add these products to obtain q_ij.
Example. Consider the matrices
A = ( 2  4  3 -1 )            ( 1  4 )
    ( 3  1  5  2 )    and B = ( 2  3 )
    (-1  0  7  6 )            ( 0 -2 )
                              ( 3  1 )
Note that A is a 3 x 4 matrix and B is a 4 x 2 matrix, so that the product AB is a 3 x 2 matrix. Let us calculate the product AB.
Consider first of all q11. To calculate this, we need the 1-st row of A and the 1-st column of B, so let us cover up all unnecessary information. From the definition, we have
q11 = 2(1) + 4(2) + 3(0) + (-1)(3) = 2 + 8 + 0 - 3 = 7.
Consider next q12. To calculate this, we need the 1-st row of A and the 2-nd column of B. From the definition, we have
q12 = 2(4) + 4(3) + 3(-2) + (-1)(1) = 8 + 12 - 6 - 1 = 13.
Consider next q21. To calculate this, we need the 2-nd row of A and the 1-st column of B. From the definition, we have
q21 = 3(1) + 1(2) + 5(0) + 2(3) = 3 + 2 + 0 + 6 = 11.
Consider next q22. To calculate this, we need the 2-nd row of A and the 2-nd column of B. From the definition, we have
q22 = 3(4) + 1(3) + 5(-2) + 2(1) = 12 + 3 - 10 + 2 = 7.
Consider next q31. To calculate this, we need the 3-rd row of A and the 1-st column of B. From the definition, we have
q31 = (-1)(1) + 0(2) + 7(0) + 6(3) = -1 + 0 + 0 + 18 = 17.
Consider finally q32. To calculate this, we need the 3-rd row of A and the 2-nd column of B. From the definition, we have
q32 = (-1)(4) + 0(3) + 7(-2) + 6(1) = -4 + 0 - 14 + 6 = -12.
We therefore conclude that
AB = ( 2  4  3 -1 ) ( 1  4 )   (  7  13 )
     ( 3  1  5  2 ) ( 2  3 ) = ( 11   7 )
     (-1  0  7  6 ) ( 0 -2 )   ( 17 -12 )
                    ( 3  1 )
Example. Consider again the matrices
A = ( 2  4  3 -1 )            ( 1  4 )
    ( 3  1  5  2 )    and B = ( 2  3 )
    (-1  0  7  6 )            ( 0 -2 )
                              ( 3  1 )
Note that B is a 4 x 2 matrix and A is a 3 x 4 matrix, so that we do not have a definition for the "product" BA. We leave the proofs of the following results as exercises for the interested reader. A short computational check of the definition is given below.
Proposition (Associative Law). Suppose that A is an m x n matrix, B is an n x p matrix and C is a p x r matrix. Then A(BC) = (AB)C.
Proposition (Distributive Law). Suppose that A and B are m x n matrices and C is an n x p matrix. Then
(A + B)C = AC + BC.
Proposition. Suppose that A is an m x n matrix, B is an n x p matrix, and that c is a real number. Then c(AB) = (cA)B = A(cB).
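The entry-by-entry definition of the product is easy to code. The sketch below (ours, not from the text) recomputes the worked example above and compares the result with NumPy's built-in product.

```python
import numpy as np

def matmul_by_definition(A, B):
    """Compute AB entry by entry: q_ij = sum_k a_ik * b_kj."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "columns of A must match rows of B"
    Q = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            Q[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return Q

A = np.array([[2, 4, 3, -1],
              [3, 1, 5,  2],
              [-1, 0, 7,  6]])
B = np.array([[1,  4],
              [2,  3],
              [0, -2],
              [3,  1]])

print(matmul_by_definition(A, B))                       # [[7, 13], [11, 7], [17, -12]], as above
print(np.allclose(matmul_by_definition(A, B), A @ B))   # agrees with NumPy's product
```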
Inversion of Matrices
We shall deal with square matrices, those where the number of rows equals the number of columns.
Definition. The n x n matrix I_n = (a_ij), where
a_ij = 1 if i = j and a_ij = 0 if i is not equal to j,
is called the identity matrix of order n.
Remark. Note that
I_1 = (1)   and   I_4 = ( 1 0 0 0 )
                        ( 0 1 0 0 )
                        ( 0 0 1 0 )
                        ( 0 0 0 1 )
The following result is relatively easy to check. It shows that the identity matrix I_n acts as the identity for multiplication of n x n matrices.
Proposition. For every n x n matrix A, we have A I_n = I_n A = A.
This raises the following question: given an n x n matrix A, is it possible to find another n x n matrix B such that AB = BA = I_n? We shall be content with finding such a matrix B if it exists, and we shall relate the existence of such a matrix B to some properties of the matrix A.
Definition. An n x n matrix A is said to be invertible if there exists an n x n matrix B such that AB = BA = I_n. In this case, we say that B is the inverse of A and write B = A^{-1}.
Proposition. Suppose that A is an invertible n x n matrix. Then its inverse A^{-1} is unique.
Proof. Suppose that B and C both satisfy the requirements for being the inverse of A. Then AB = BA = I_n and AC = CA = I_n. It follows that
B = B I_n = B(AC) = (BA)C = I_n C = C.
Proposition. Suppose that A and B are invertible n x n matrices. Then AB is invertible, and (AB)^{-1} = B^{-1} A^{-1}.
Proof. In view of the uniqueness of the inverse, it is sufficient to show that B^{-1} A^{-1} satisfies the requirements for being the inverse of AB. Note that
(AB)(B^{-1} A^{-1}) = A(B(B^{-1} A^{-1})) = A((B B^{-1}) A^{-1}) = A(I_n A^{-1}) = A A^{-1} = I_n,
and similarly (B^{-1} A^{-1})(AB) = I_n.
Proposition. Suppose that A is an invertible n x n matrix. Then (A^{-1})^{-1} = A.
Proof. Note that both (A^{-1})^{-1} and A satisfy the requirements for being the inverse of A^{-1}. Equality follows from the uniqueness of the inverse.
Definition. The n x n matrix
A = ( a11 ... a1n )
    ( ...         )
    ( an1 ... ann )
where a_ij = 0 whenever i is not equal to j, is called a diagonal matrix of order n.
Example. Any 3 x 3 matrix whose off-diagonal entries are all zero, for instance diag(d1, d2, d3), is diagonal.
Given an n x n matrix A, it is usually rather complicated to calculate the powers
A^k = A A ... A (k factors).
However, the calculation is rather simple when A is a diagonal matrix, as we shall see in the following example.
Example. Consider the 3 x 3 matrix
A = (  17 -10  -5 )
    (  45 -28 -15 )
    ( -30  20  12 )
Suppose that we wish to calculate A^98. It can be checked that A = P D P^{-1} for a suitable invertible matrix P and the diagonal matrix D = diag(-3, 2, 2). Then
A^98 = (P D P^{-1})(P D P^{-1}) ... (P D P^{-1}) = P D^98 P^{-1} = P ( 3^98   0     0   ) P^{-1}.
                                                                    (  0    2^98   0   )
                                                                    (  0     0    2^98 )
This is much simpler than calculating A^98 directly. Note that this example is only an illustration: we have not discussed here how the matrices P and D are found.
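The identity A^k = P D^k P^{-1} can be checked numerically. In the sketch below (ours), P and D are produced by NumPy's eigenvalue routine rather than taken from the text, and the eigenvalue machinery itself is not discussed until later; the point is only to compare P D^k P^{-1} with the directly computed power.

```python
import numpy as np

A = np.array([[ 17., -10.,  -5.],
              [ 45., -28., -15.],
              [-30.,  20.,  12.]])

# Diagonalize numerically: the columns of P are eigenvectors, D holds the eigenvalues.
eigvals, P = np.linalg.eig(A)          # eigenvalues come out as -3, 2, 2 (in some order)
k = 10
A_k = P @ np.diag(eigvals ** k) @ np.linalg.inv(P)    # A^k = P D^k P^(-1)

print(np.allclose(A_k, np.linalg.matrix_power(A, k))) # True: same result, far less work for large k
```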
Example. Consider the 3 x 3 matrix
A = ( a11 a12 a13 )                 ( 1 0 0 )
    ( a21 a22 a23 )    and   I_3 =  ( 0 1 0 )
    ( a31 a32 a33 )                 ( 0 0 1 )
Let us interchange rows 1 and 2 of A and do likewise for I_3. We obtain respectively
( a21 a22 a23 )         ( 0 1 0 )
( a11 a12 a13 )   and   ( 1 0 0 )
( a31 a32 a33 )         ( 0 0 1 )
Note that
( a21 a22 a23 )   ( 0 1 0 ) ( a11 a12 a13 )
( a11 a12 a13 ) = ( 1 0 0 ) ( a21 a22 a23 )
( a31 a32 a33 )   ( 0 0 1 ) ( a31 a32 a33 )
Let us interchange rows 2 and 3 of A and do likewise for I_3; again the result of the operation on A equals the result of the operation on I_3 multiplied by A:
( a11 a12 a13 )   ( 1 0 0 ) ( a11 a12 a13 )
( a31 a32 a33 ) = ( 0 0 1 ) ( a21 a22 a23 )
( a21 a22 a23 )   ( 0 1 0 ) ( a31 a32 a33 )
Let us add 3 times row 1 to row 2 of A and do likewise for I_3. We obtain respectively
( a11          a12          a13          )         ( 1 0 0 )
( a21 + 3 a11  a22 + 3 a12  a23 + 3 a13  )   and   ( 3 1 0 )
( a31          a32          a33          )         ( 0 0 1 )
and again the first matrix equals the second multiplied by A. Let us add -2 times row 3 to row 1 of A and do likewise for I_3. We obtain respectively
( a11 - 2 a31  a12 - 2 a32  a13 - 2 a33 )         ( 1 0 -2 )
( a21          a22          a23         )   and   ( 0 1  0 )
( a31          a32          a33         )         ( 0 0  1 )
Note that
( a11 - 2 a31  a12 - 2 a32  a13 - 2 a33 )   ( 1 0 -2 ) ( a11 a12 a13 )
( a21          a22          a23         ) = ( 0 1  0 ) ( a21 a22 a23 )
( a31          a32          a33         )   ( 0 0  1 ) ( a31 a32 a33 )
Let us multiply row 2 of A by 5 and do likewise for I_3. We obtain respectively
( a11   a12   a13  )         ( 1 0 0 )
( 5a21  5a22  5a23 )   and   ( 0 5 0 )
( a31   a32   a33  )         ( 0 0 1 )
Note that
( a11   a12   a13  )   ( 1 0 0 ) ( a11 a12 a13 )
( 5a21  5a22  5a23 ) = ( 0 5 0 ) ( a21 a22 a23 )
( a31   a32   a33  )   ( 0 0 1 ) ( a31 a32 a33 )
Finally, let us multiply row 3 of A by -1 and do likewise for I_3. We obtain respectively
( a11   a12   a13  )         ( 1 0  0 )
( a21   a22   a23  )   and   ( 0 1  0 )
(-a31  -a32  -a33  )         ( 0 0 -1 )
and once more the first matrix equals the second multiplied by A.
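The fact that a row operation amounts to a left multiplication by the corresponding elementary matrix can be verified directly; the following short NumPy sketch (with an arbitrary 3 x 3 matrix of our own choosing) does so for the operation "add 3 times row 1 to row 2".

```python
import numpy as np

A = np.array([[2., 4., 3.],
              [3., 1., 5.],
              [-1., 0., 7.]])          # any 3 x 3 matrix will do (example data)

# Elementary matrix: apply the row operation to the identity.
E = np.eye(3)
E[1] += 3 * E[0]

# Apply the same row operation to A directly ...
B = A.copy()
B[1] += 3 * B[0]

# ... and compare with left multiplication by the elementary matrix.
print(np.allclose(B, E @ A))           # True: the row operation is B = E A
```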
Definition. By an elementary n x n matrix, we mean an n x n matrix obtained from I_n by an elementary row operation. We state without proof the following important result. The interested reader may wish to construct a proof, taking into account the different types of elementary row operations.
Proposition. Suppose that A is an n x n matrix, and suppose that B is obtained from A by an elementary row operation. Suppose further that E is the elementary matrix obtained from I_n by the same elementary row operation. Then B = EA.
We now adopt the following strategy. Consider an n x n matrix A. Suppose that it is possible to reduce the matrix A by a sequence a_1, a_2, ..., a_k of elementary row operations to the identity matrix I_n. If E_1, E_2, ..., E_k are respectively the elementary n x n matrices obtained from I_n by the same elementary row operations a_1, a_2, ..., a_k, then
I_n = E_k ... E_2 E_1 A.
We therefore must have
A^{-1} = E_k ... E_2 E_1 = E_k ... E_2 E_1 I_n.
It follows that the inverse A^{-1} can be obtained from I_n by performing the same elementary row operations a_1, a_2, ..., a_k. Since we are performing the same elementary row operations on A and I_n, it makes sense to put them side by side. The process can then be described pictorially by
(A | I_n) -> (E_1 A | E_1 I_n) -> (E_2 E_1 A | E_2 E_1 I_n) -> ... -> (E_k ... E_2 E_1 A | E_k ... E_2 E_1 I_n) = (I_n | A^{-1}).
In other words, we consider an array with the matrix A on the left and the matrix I_n on the right. We now perform elementary row operations on the array and try to reduce the left hand half to the matrix I_n. If we succeed in doing so, then the right hand half of the array gives the inverse A^{-1}.
Example. Consider an invertible 3 x 3 matrix
A, and form the array (A | I_3). We now perform elementary row operations on this array and try to reduce the left hand half to the matrix I_3. Note that if we succeed, then the final array is clearly in reduced row echelon form, so we follow the same procedure as reducing an array to reduced row echelon form: we add suitable multiples of one row to the others to clear each pivot column, and rescale the rows so that every pivot equals 1. Once the array is in reduced row echelon form with the left hand half equal to the identity matrix I_3, the right hand half of the array represents the inverse A^{-1}.
Example. Consider the matrix
A = ( 1 1 2 3 )
    ( 2 2 4 5 )
    ( 0 3 0 0 )
    ( 0 0 0 1 )
so that
(A | I_4) = ( 1 1 2 3 | 1 0 0 0 )
            ( 2 2 4 5 | 0 1 0 0 )
            ( 0 3 0 0 | 0 0 1 0 )
            ( 0 0 0 1 | 0 0 0 1 )
We now perform elementary row operations on this array and try to reduce the left hand half to the matrix I_4. Adding -2 times row 1 to row 2, we obtain
( 1 1 2  3 |  1 0 0 0 )
( 0 0 0 -1 | -2 1 0 0 )
( 0 3 0  0 |  0 0 1 0 )
( 0 0 0  1 |  0 0 0 1 )
Adding 1 times row 2 to row 4, we obtain
( 1 1 2  3 |  1 0 0 0 )
( 0 0 0 -1 | -2 1 0 0 )
( 0 3 0  0 |  0 0 1 0 )
( 0 0 0  0 | -2 1 0 1 )
At this point, we observe that it is impossible to reduce the left hand half of the array to I_4: the fourth row of the left hand half is now zero, so the left hand half has rank at most 3. Continuing with further row operations to reach reduced row echelon form, the left hand half is still not the identity matrix I_4. Our technique has failed. In fact, the matrix A is not invertible.
Proposition. Suppose that A is an n x n matrix, and that x is an n x 1 matrix whose entries x1, ..., xn are variables.
(a) Suppose that the matrix A is invertible. Then the system Ax = 0 of linear equations has only the trivial solution.
(b) Suppose that the system Ax = 0 of linear equations has only the trivial solution. Then the matrices A and I_n are row equivalent.
(c) Suppose that the matrices A and I_n are row equivalent. Then A is invertible.
Proof. (a) Suppose that x0 is a solution of Ax = 0. Then
x0 = I_n x0 = (A^{-1} A) x0 = A^{-1}(A x0) = A^{-1} 0 = 0.
It follows that the trivial solution is the only solution.
(b) Note that if the system Ax = 0 of linear equations has only the trivial solution, then it can be reduced by elementary row operations to the system
x1 = 0, ..., xn = 0.
This is equivalent to saying that the array (A | 0) can be reduced by elementary row operations to the reduced row echelon form (I_n | 0). Hence the matrices A and I_n are row equivalent.
(c) Suppose that the matrices A and I_n are row equivalent. Then there exist elementary n x n matrices E_1, ..., E_k such that
I_n = E_k ... E_1 A.
By the proposition above, the matrices E_1, ..., E_k are all invertible, so that
A = E_1^{-1} ... E_k^{-1} I_n = E_1^{-1} ... E_k^{-1}
is a product of invertible matrices, and hence invertible.
Consequences of Invertibility
Suppose that the matrix
A = ( a11 ... a1n )
    ( ...         )
    ( an1 ... ann )
is invertible. Consider the system Ax = b, where
x = ( x1 )             ( b1 )
    ( .. )   and   b = ( .. )
    ( xn )             ( bn )
are n x 1 matrices, x1, ..., xn are variables, and b1, ..., bn are arbitrary real numbers. Since A is invertible, let us consider x = A^{-1} b. Clearly
Ax = A(A^{-1} b) = (A A^{-1}) b = I_n b = b,
so that x = A^{-1} b is a solution of the system. On the other hand, let x0 be any solution of the system. Then A x0 = b, so that
x0 = I_n x0 = (A^{-1} A) x0 = A^{-1}(A x0) = A^{-1} b.
It follows that the system has a unique solution. We have proved the following important result.
Proposition. Suppose that A is an invertible n x n matrix, and that b1, ..., bn are arbitrary real numbers. Then the system Ax = b of linear equations has the unique solution x = A^{-1} b.
We next attempt to study the question in the opposite direction.
Proposition. Suppose that A is an n x n matrix and that x is an n x 1 matrix whose entries x1, ..., xn are variables. Suppose further that for every b1, ..., bn, the system Ax = b of linear equations is soluble. Then the matrix A is invertible.
Proof. Suppose that
b1 = ( 1 )   b2 = ( 0 )   ...   bn = ( 0 )
     ( 0 )        ( 1 )              ( 0 )
     ( ..)        ( ..)              ( ..)
     ( 0 )        ( 0 )              ( 1 )
in other words, for every j = 1, ..., n, bj is the n x 1 matrix with entry 1 in position j and entry 0 elsewhere. Let x1, ..., xn denote respectively solutions of the systems of linear equations Ax = b1, ..., Ax = bn. It is easy to check that
A (x1 ... xn) = (b1 ... bn) = I_n;
in other words, the n x n matrix whose columns are the solutions x1, ..., xn is a right inverse of A, so that A is invertible.
We can now summarize the propositions above as follows.
Proposition. In the notation above, the following four statements are equivalent:
(a) The matrix A is invertible.
(b) The system Ax = 0 of linear equations has only the trivial solution.
(c) The matrices A and I_n are row equivalent.
(d) The system Ax = b of linear equations is soluble for every n x 1 matrix b.
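In computational practice one rarely forms A^{-1} explicitly; the sketch below (ours, not from the text) compares the solution x = A^{-1} b of the proposition with the solution produced by elimination, using the invertible matrix from the inversion example in the previous chapter.

```python
import numpy as np

A = np.array([[1., 4., -2.],
              [-2., -7., 7.],
              [3., 11., -6.]])        # an invertible matrix
b = np.array([1., 2., 3.])            # an arbitrary right side

x_inv = np.linalg.inv(A) @ b          # x = A^(-1) b, as in the proposition
x_solve = np.linalg.solve(A, b)       # elimination, without forming the inverse

print(np.allclose(x_inv, x_solve))    # True
print(np.allclose(A @ x_solve, b))    # True: it really solves the system
```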
Application to Economics
In this section, we describe briefly the Leontief input-output model, where an economy is divided into n sectors. For every i = 1, ..., n, let x_i denote the monetary value of the total output of sector i over a fixed period, and let d_i denote the output of sector i needed to satisfy outside demand over the same fixed period. Collecting together x_i and d_i for i = 1, ..., n, we obtain the vectors
x = ( x1 )                    ( d1 )
    ( .. )  in R^n   and  d = ( .. )  in R^n,
    ( xn )                    ( dn )
known respectively as the production vector and demand vector of the economy. On the other hand, each of the n sectors requires material from some or all of the sectors to produce its output. For i, j = 1, ... n, let cij denote the monetary value of the output of sector i needed by sector j to produce one unit of monetary value of output. For every j = 1, ... n, the vector
c_j = ( c1j )
      ( ... )  in R^n
      ( cnj )
is known as the unit consumption vector of sector j. Note that the column sum
c1j + ... + cnj < 1
in order to ensure that sector j does not make a loss. Collecting together the unit consumption vectors, we obtain the matrix
C = (c1 ... cn) = ( c11 ... c1n )
                  ( ...         )
                  ( cn1 ... cnn )
known as the consumption matrix of the economy. Consider the matrix product Cx. For every i = 1, ..., n, the entry c_i1 x1 + ... + c_in xn represents the monetary value of the output of sector i needed by all the sectors to produce their output. This leads to the production equation
x= Cx+ d
Here Cx represents the part of the total output that is required by the various sectors of
the economy to produce the output in the first place, and d represents the part of the total output that is available to satisfy outside demand. Clearly (I - C)x = d. If the matrix (I - C) is invertible, then x = (I - C)^{-1} d represents the perfect production level. We state without proof the following fundamental result.
Proposition. Suppose that the entries of the consumption matrix C and the demand vector d are non-negative. Suppose further that the column sum inequality above holds for each column of C. Then the inverse matrix (I - C)^{-1} exists, and the production vector x = (I - C)^{-1} d has non-negative entries and is the unique solution of the production equation.
Let us indulge in some heuristics. Initially, we have demand d. To produce d, we need Cd as input. To produce this extra Cd, we need C(Cd) = C^2 d as input. To produce this extra C^2 d, we need C(C^2 d) = C^3 d as input. And so on. Hence we need to produce
d + Cd + C^2 d + C^3 d + ... = (I + C + C^2 + C^3 + ...) d
in total. Now it is not difficult to check that for every positive integer k, we have
(I - C)(I + C + C^2 + C^3 + ... + C^k) = I - C^{k+1}.
If the entries of C^{k+1} are all very small, then
(I - C)(I + C + C^2 + C^3 + ... + C^k) is approximately I,
so that (I - C)^{-1} is approximately I + C + C^2 + C^3 + ... + C^k. This gives a practical way of approximating (I - C)^{-1}, and also suggests that
(I - C)^{-1} = I + C + C^2 + C^3 + ...
Example. An economy consists of three sectors. Their dependence on each other is summarized in the table below:
To produce one unit of monetary value of output in sector:        1      2      3
monetary value of output required from sector 1                  0.3    0.2    0.1
monetary value of output required from sector 2                  0.4    0.5    0.2
monetary value of output required from sector 3                  0.1    0.1    0.3
Suppose that the final demand from sectors 1, 2 and 3 are respectively 30, 50 and 20. Then the production vector and demand vector are respectively
x = ( x1 )             ( 30 )
    ( x2 )   and   d = ( 50 )
    ( x3 )             ( 20 )
The consumption matrix is
C = ( 0.3  0.2  0.1 )                        ( 0.7  -0.2  -0.1 )
    ( 0.4  0.5  0.2 ),   so that   I - C =   (-0.4   0.5  -0.2 )
    ( 0.1  0.1  0.3 )                        (-0.1  -0.1   0.7 )
The production equation (I - C)x = d has augmented matrix
( 0.7  -0.2  -0.1 | 30 )                    (  7  -2  -1 | 300 )
(-0.4   0.5  -0.2 | 50 ),  equivalent to    ( -4   5  -2 | 500 ),
(-0.1  -0.1   0.7 | 20 )                    ( -1  -1   7 | 200 )
which can be converted to reduced row echelon form
( 1 0 0 | 3200/27 )
( 0 1 0 | 6100/27 )
( 0 0 1 |  700/9  )
This gives x1 = 3200/27, x2 = 6100/27 and x3 = 700/9, approximately 118.5, 225.9 and 77.8 respectively.
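The production vector and the series approximation of (I - C)^{-1} can both be checked numerically. The following NumPy sketch (ours) uses the consumption matrix and demand vector of the example.

```python
import numpy as np

C = np.array([[0.3, 0.2, 0.1],
              [0.4, 0.5, 0.2],
              [0.1, 0.1, 0.3]])        # consumption matrix from the example
d = np.array([30., 50., 20.])          # outside demand

x = np.linalg.solve(np.eye(3) - C, d)  # production vector: (I - C) x = d
print(x)                               # [118.51..., 225.92..., 77.77...] = (3200/27, 6100/27, 700/9)

# The series I + C + C^2 + ... approximates (I - C)^(-1), since the column sums are < 1:
S = sum(np.linalg.matrix_power(C, k) for k in range(200))
print(np.allclose(S @ d, x, atol=1e-6))   # True
```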
We now consider matrix transformations on the plane. Let A be a 2 x 2 matrix with real entries. For every x = (x1, x2) in R^2 we can write T(x) = y, where y = (y1, y2) in R^2 satisfies
( y1 ) = A ( x1 )
( y2 )     ( x2 )
Such a transformation T is linear: T(x + x') = T(x) + T(x') and T(cx) = cT(x) for every x, x' in R^2 and every real number c, because
A ( x1 + x1' ) = A ( x1 ) + A ( x1' )    and    A ( c x1 ) = c A ( x1 )
  ( x2 + x2' )     ( x2 )     ( x2' )             ( c x2 )       ( x2 )
Here we confine ourselves to looking at a few simple matrix transformations on the plane.
Example. The matrix
A = ( -1  0 )
    (  0  1 )
satisfies
A ( x1 ) = ( -1  0 )( x1 ) = ( -x1 )
  ( x2 )   (  0  1 )( x2 )   (  x2 )
for every (x1, x2) in R^2, and so represents reflection across the x2-axis. On the other hand, the matrix
A = ( -1   0 )
    (  0  -1 )
represents reflection across the origin, whereas the matrix
A = ( 0 1 )
    ( 1 0 )
represents reflection across the line x1 = x2. We give a summary in the table below:
Transformation                      Equations                  Matrix
Reflection across x1-axis           y1 = x1,   y2 = -x2        ( 1  0 ; 0 -1 )
Reflection across x2-axis           y1 = -x1,  y2 = x2         ( -1 0 ; 0  1 )
Reflection across origin            y1 = -x1,  y2 = -x2        ( -1 0 ; 0 -1 )
Reflection across x1 = x2           y1 = x2,   y2 = x1         ( 0  1 ; 1  0 )
Example. The matrix
A = ( k 0 )
    ( 0 k )
satisfies A(x1, x2)^T = (k x1, k x2)^T for every (x1, x2) in R^2, and so represents a dilation if k > 1 and a contraction if 0 < k < 1. On the other hand, the matrix
A = ( k 0 )
    ( 0 1 )
represents an expansion in the x1-direction if k > 1 and a compression in the x1-direction if 0 < k < 1, while the matrix
A = ( 1 0 )
    ( 0 k )
represents an expansion in the x2-direction if k > 1 and a compression in the x2-direction if 0 < k < 1. We give a summary in the table below:
Transformation                                              Equations                 Matrix
Dilation or contraction by factor k > 0                     y1 = kx1,  y2 = kx2       ( k 0 ; 0 k )
Expansion or compression in x1-direction by factor k > 0    y1 = kx1,  y2 = x2        ( k 0 ; 0 1 )
Expansion or compression in x2-direction by factor k > 0    y1 = x1,   y2 = kx2       ( 1 0 ; 0 k )
Example. The matrix
A = ( 1 k )
    ( 0 1 )
satisfies A(x1, x2)^T = (x1 + k x2, x2)^T for every (x1, x2) in R^2, and so represents a shear in the x1-direction. (Figures: the effect of the shear for the cases k = 1 and k = -1.) On the other hand, the matrix
A = ( 1 0 )
    ( k 1 )
satisfies A(x1, x2)^T = (x1, k x1 + x2)^T for every (x1, x2) in R^2, and so represents a shear in the x2-direction. We give a summary in the table below:
Transformation              Equations                        Matrix
Shear in x1-direction       y1 = x1 + k x2,  y2 = x2         ( 1 k ; 0 1 )
Shear in x2-direction       y1 = x1,  y2 = k x1 + x2         ( 1 0 ; k 1 )
Example. For anticlockwise rotation by an angle t, we have T(x1, x2) = (y1, y2), where y1 + i y2 = (x1 + i x2)(cos t + i sin t), and so
( y1 ) = ( cos t  -sin t )( x1 )
( y2 )   ( sin t   cos t )( x2 )
We give a summary in the table below:
Transformation                         Equations                                  Matrix
Anticlockwise rotation by angle t      y1 = x1 cos t - x2 sin t,                  ( cos t  -sin t )
                                       y2 = x1 sin t + x2 cos t                   ( sin t   cos t )
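The matrices in the tables above can be applied to points directly. The sketch below (our own numbers) applies a reflection, a shear, and a rotation, and illustrates that composing transformations corresponds to multiplying their matrices.

```python
import numpy as np

theta = np.pi / 2                        # anticlockwise rotation by 90 degrees
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection_x1 = np.array([[1., 0.],
                          [0., -1.]])    # reflection across the x1-axis
shear = np.array([[1., 0.5],
                  [0., 1.]])             # shear in the x1-direction with k = 1/2

x = np.array([1., 2.])
print(rotation @ x)                      # approximately [-2., 1.]
print(reflection_x1 @ x)                 # [1., -2.]
print(shear @ x)                         # [2., 2.]
print(shear @ reflection_x1 @ x)         # composition = product of the matrices, applied to x
```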
We conclude this section by establishing the following result which reinforces the linearity of matrix transformations on the plane.
Proposition. Suppose that a matrix transformation T : R^2 -> R^2 is given by an invertible matrix A. Then
(a) the image under T of a straight line is a straight line;
(b) the image under T of a straight line through the origin is a straight line through the origin; and
(c) the images under T of parallel straight lines are parallel straight lines.
Proof. Suppose that T(x1, x2) = (y1, y2). Since A is invertible, we have x = A^{-1} y, where
x = ( x1 )   and   y = ( y1 )
    ( x2 )             ( y2 )
A straight line in the plane has an equation of the form a x1 + b x2 = c, i.e., in matrix form,
( a  b ) x = c,   and hence   ( a  b ) A^{-1} y = c.
Writing ( a'  b' ) = ( a  b ) A^{-1}, the image under T of the straight line a x1 + b x2 = c is therefore a' y1 + b' y2 = c, clearly another straight line. This proves (a). To prove (b), note that straight lines through the origin correspond to c = 0. To prove (c), note that parallel straight lines correspond to different values of c for the same values of a and b.
Example. Consider the letter M. Following the boundary in the anticlockwise direction starting at the origin, the 12 vertices can be represented by the coordinates
( 0 1 1 4 7 7 8 8 7 4 1 0 )
( 0 0 6 0 6 0 0 8 8 2 8 8 )
where the columns are the coordinates of the vertices. Let us apply the shear
A = ( 1 1/2 )
    ( 0  1  )
so that
A ( x1 ) = ( x1 + (1/2) x2 )
  ( x2 )   (       x2      )
for every (x1, x2) in R^2. Applying A to the array of vertices, we obtain the array
( 0 1 4 4 10 7 8 12 11 5 5 4 )
( 0 0 6 0  6 0 0  8  8 2 8 8 )
In view of the proposition above, the image of any line segment that joins two vertices is a line segment that joins the images of the two vertices. Hence the image of the letter M under the shear is again a letter M, now slanted to the right.
Next, we may wish to translate this image. However, a translation by a vector h = (h1, h2) is a transformation of the form
( x1 ) -> ( x1 + h1 )
( x2 )    ( x2 + h2 )
and this cannot be described by a matrix transformation on the plane. To overcome this deficiency, we introduce homogeneous coordinates: every point (x1, x2) in R^2 is identified with the point (x1, x2, 1) in R^3. Now we wish to translate a point (x1, x2) to (x1, x2) + (h1, h2) = (x1 + h1, x2 + h2), so we attempt to find a 3 x 3 matrix A* such that
( x1 + h1 )        ( x1 )   ( 1 0 h1 )( x1 )
( x2 + h2 ) =  A*  ( x2 ) = ( 0 1 h2 )( x2 )
(    1    )        (  1 )   ( 0 0 1  )(  1 )
It follows that, using homogeneous coordinates, translation by a vector h = (h1, h2) in R^2 can be described by the matrix
A* = ( 1 0 h1 )
     ( 0 1 h2 )
     ( 0 0 1  )
Remark. Consider a matrix transformation T : R^2 -> R^2 on the plane given by a matrix
A = ( a11 a12 )
    ( a21 a22 )
Under homogeneous coordinates, the same transformation is described by the 3 x 3 matrix
A* = ( a11 a12 0 )
     ( a21 a22 0 )
     (  0   0  1 )
It follows that homogeneous coordinates can also be used to study all the matrix transformations we have discussed.
Example. For the letter M, the 12 vertices are now represented in homogeneous coordinates by the array
( 0 1 1 4 7 7 8 8 7 4 1 0 )
( 0 0 6 0 6 0 0 8 8 2 8 8 )
( 1 1 1 1 1 1 1 1 1 1 1 1 )
The shear considered above corresponds to the 3 x 3 matrix
A* = ( 1 1/2 0 )
     ( 0  1  0 )
     ( 0  0  1 )
Note that
A* ( 0 1 1 4 7 7 8 8 7 4 1 0 )   ( 0 1 4 4 10 7 8 12 11 5 5 4 )
   ( 0 0 6 0 6 0 0 8 8 2 8 8 ) = ( 0 0 6 0  6 0 0  8  8 2 8 8 )
   ( 1 1 1 1 1 1 1 1 1 1 1 1 )   ( 1 1 1 1  1 1 1  1  1 1 1 1 )
Next, let us consider a translation by the vector (2, 3). The matrix under homogeneous coordinates for this translation is given by
B* = ( 1 0 2 )
     ( 0 1 3 )
     ( 0 0 1 )
Note that
B* A* ( 0 1 1 4 7 7 8 8 7 4 1 0 )   ( 2 3 6 6 12 9 10 14 13 7  7  6 )
      ( 0 0 6 0 6 0 0 8 8 2 8 8 ) = ( 3 3 9 3  9 3  3 11 11 5 11 11 )
      ( 1 1 1 1 1 1 1 1 1 1 1 1 )   ( 1 1 1 1  1 1  1  1  1 1  1  1 )
Hence the image of the letter M under the shear followed by the translation is the slanted letter M shifted by the vector (2, 3).
Example. Under homogeneous coordinates, the transformation representing a reflection across the x1-axis, followed by a shear by factor 2 in the x1-direction, followed by an anticlockwise rotation by 90 degrees, and followed by a translation by the vector (2, -1), has matrix
( 1 0  2 ) ( 0 -1 0 ) ( 1 2 0 ) ( 1  0 0 )   ( 0  1  2 )
( 0 1 -1 ) ( 1  0 0 ) ( 0 1 0 ) ( 0 -1 0 ) = ( 1 -2 -1 )
( 0 0  1 ) ( 0  0 1 ) ( 0 0 1 ) ( 0  0 1 )   ( 0  0  1 )
We next estimate the number of arithmetic operations needed to solve a system of n linear equations in n unknowns. We form the augmented matrix
( a11 ... a1n | b1 )
( ...              )
( an1 ... ann | bn )
and then convert it to reduced row echelon form by elementary row operations. The first step is to reduce it to row echelon form:
(I) First of all, we may need to interchange two rows in order to ensure that the top left entry in the array is non-zero. This requires n + 1 operations.
(II) Next, we need to multiply the new first row by a constant in order to make the top left pivot entry equal to 1. This requires n + 1 operations. Note that we are abusing notation somewhat, as the entry a12 at this stage, for example, may well be different from the entry a12 in the original augmented matrix.
(III) For each row i = 2, ..., n, we now multiply the first row by -a_i1 and then add it to row i. This requires 2(n - 1)(n + 1) operations.
(IV) In summary, to clear the first pivot column in this way, the number of operations required is at most 2(n + 1) + 2(n - 1)(n + 1) = 2n(n + 1).
(V) Our next task is to repeat the procedure on the smaller array obtained by deleting the first row and first column. This array has one row and one column fewer, and the number of operations required is at most 2m(m + 1), where m = n - 1. We continue in this way systematically to reach row echelon form, and conclude that the number of operations required to convert the augmented matrix to row echelon form is at most
sum_{m=1}^{n} 2m(m + 1), which is roughly (2/3) n^3.
The next step is to convert the row echelon form to reduced row echelon form. This is simpler, as many entries are now zero. It can be shown that the number of operations required is bounded by something like 2n^2, indeed by something like n^2 if one analyses the problem more carefully. In any case, these estimates are insignificant compared to the estimate (2/3) n^3 above.
We therefore conclude that the number of operations required to solve the system Ax = b by reducing the augmented matrix to reduced row echelon form is bounded by something like (2/3) n^3 for large n. Another way of solving the system Ax = b is to invert the matrix A, which involves converting the array (A | I_n) to reduced row echelon form by elementary row operations. It can be shown that the number of operations required is something like 2n^3, so this is less efficient than our first method.
Matrix Factorization
In some situations, we may need to solve systems of linear equations of the form Ax = b, with the same coefficient matrix A but for many different vectors b. If A is an invertible square matrix, then we can find its inverse A^{-1} and then compute A^{-1}b for each vector b. However, the matrix A may not be a square matrix, and we may have to convert the augmented matrix to reduced row echelon form. In this section, we describe a more efficient way of solving this problem. To describe it, we first need a definition.
Definition. A rectangular array of numbers is said to be in quasi row echelon form if the following conditions are satisfied:
(1) The left-most non-zero entry of any non-zero row is called a pivot entry. It is not necessary for its value to be equal to 1.
(2) All zero rows are grouped together at the bottom of the array.
(3) The pivot entry of a non-zero row occurring lower in the array is to the right of the pivot entry of a non-zero row occurring higher in the array.
In other words, the array looks like row echelon form in shape, except that the pivot entries do not have to be equal to 1. We consider first of all a special case.
Proposition. Suppose that an m x n matrix A can be converted to quasi row echelon form by elementary row operations but without interchanging any two rows. Then A = LU, where L is an m x m lower triangular matrix with diagonal entries all equal to 1 and U is a quasi row echelon form of A.
Proof. Recall that applying an elementary row operation to an m x n matrix corresponds to multiplying the matrix on the left by an elementary m x m matrix. On the other hand, if we are aiming for quasi row echelon form and not row echelon form, then there is no need to multiply any row of the array by a non-zero constant. Hence the only elementary row operation we need to perform is to add a multiple of one row to another row. In fact, it is sufficient even to restrict this to adding a multiple of a row higher in the array to another row lower in the array, and it is easy to see that the corresponding elementary matrix is lower triangular, with diagonal entries all equal to 1. Let us call such elementary matrices unit lower triangular. If an m x n matrix A can be reduced in this way to quasi row echelon form U, then
U = E_k ... E_2 E_1 A,
where the elementary matrices E_1, E_2, ..., E_k are unit lower triangular. Let
L = (E_k ... E_2 E_1)^{-1}.
Then A = LU. It can be shown that products and inverses of unit lower triangular matrices are also unit lower triangular. Hence L is a unit lower triangular matrix as required.
"
If Ax = b and A = LU, then L(Ux) = b. Writing y = Ux, we have Ly = b and Ux = y. It follows that the problem of solving the system Ax = b corresponds to first solving the system Ly = b and then solving the system Ux = y. Both of these systems are easy to solve since both L and U have many zero entries. It remains to find L and U. If we reduce the matrix A to quasi row echelon form by only performing the elementary row operation of adding a multiple of a row higher in the array to another row lower in the array, then U can be taken as the quasi row echelon form resulting from this. It remains to find L. However, note that L = (E_k ... E_2 E_1)^{-1}, where U = E_k ... E_2 E_1 A, and so I = E_k ... E_2 E_1 L. This means that the very elementary row operations that convert A to U will convert L to I. We therefore wish to create a matrix L with this property. It is simplest to illustrate the technique by an example.
Example. Consider the matrix
A = ( 2  -1   2  -2   3 )
    ( 4   1   6  -5   8 )
    ( 2 -10  -4   8  -5 )
    ( 2 -13  -6  16  -5 )
The entry 2 in row 1 and column 1 is a pivot entry, and column 1 is a pivot column. Adding -2 times row 1 to row 2, adding -1 times row 1 to row 3, and adding -1 times row 1 to row 4, we obtain
( 2  -1   2  -2   3 )
( 0   3   2  -1   2 )
( 0  -9  -6  10  -8 )
( 0 -12  -8  18  -8 )
Next, the entry 3 in row 2 and column 2 is a pivot entry, and column 2 is a pivot column. Adding 3 times row 2 to row 3, and adding 4 times row 2 to row 4, we obtain
( 2  -1   2  -2   3 )
( 0   3   2  -1   2 )
( 0   0   0   7  -2 )
( 0   0   0  14   0 )
Next, the entry 7 in row 3 and column 4 is a pivot entry, and column 4 is a pivot column. Adding -2 times row 3 to row 4, we obtain the quasi row echelon form
U = ( 2  -1   2  -2   3 )
    ( 0   3   2  -1   2 )
    ( 0   0   0   7  -2 )
    ( 0   0   0   0   4 )
where the entry 4 in row 4 and column 5 is a pivot entry, and column 5 is a pivot column. Note that the same elementary row operations convert
( 1   0  0  0 )         ( 1 0 0 0 )
( 2   1  0  0 )   to    ( 0 1 0 0 )
( 1  -3  1  0 )         ( 0 0 1 0 )
( 1  -4  2  1 )         ( 0 0 0 1 )
so we may take
L = ( 1   0  0  0 )
    ( 2   1  0  0 )
    ( 1  -3  1  0 )
    ( 1  -4  2  1 )
In practice L is built up as we go: we record the pivot columns at the time of their first recognition,
( 2 )   ( * )   ( * )   ( * )
( 4 )   ( 3 )   ( * )   ( * )
( 2 )   (-9 )   ( 7 )   ( * )
( 2 )   (-12)   (14 )   ( 4 )
Dividing them respectively by the pivot entries 2, 3, 7 and 4, we obtain respectively the columns
( 1 )   ( * )   ( * )   ( * )
( 2 )   ( 1 )   ( * )   ( * )
( 1 )   (-3 )   ( 1 )   ( * )
( 1 )   (-4 )   ( 2 )   ( 1 )
and the entries of L below the diagonal correspond precisely to the entries in these columns.
LU FACTORIZATION ALGORITHM.
(1) Reduce the matrix A to quasi row echelon form by only performing the elementary row operation of adding a multiple of a row higher in the array to another row lower in the array. Let U be the quasi row echelon form obtained.
(2) Record each new pivot column at the time of its first recognition, and modify it by replacing every entry above the pivot entry by zero and dividing every other entry by the value of the pivot entry.
(3) Let L denote the square matrix obtained by letting its columns be the pivot columns as modified in step (2). Then A = LU. A short implementation is sketched below.
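The algorithm can be coded in a few lines. The sketch below (ours, not from the text) implements the no-interchange LU factorization described above and checks it on the matrix of the preceding example; like the text, it assumes that no row interchanges are needed.

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization as above: no row interchanges, L unit lower triangular,
    U a quasi row echelon form of A."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    L = np.eye(m)
    U = A.copy()
    row = 0
    for col in range(n):
        if row >= m or np.isclose(U[row, col], 0.0):
            continue                              # not a pivot column
        for i in range(row + 1, m):
            L[i, row] = U[i, col] / U[row, col]   # multiplier, recorded in L
            U[i] -= L[i, row] * U[row]            # eliminate below the pivot
        row += 1
    return L, U

A = np.array([[2,  -1,  2,  -2,  3],
              [4,   1,  6,  -5,  8],
              [2, -10, -4,   8, -5],
              [2, -13, -6,  16, -5]], dtype=float)
L, U = lu_no_pivot(A)
print(L)                       # matches the L found above
print(U)                       # matches the quasi row echelon form U
print(np.allclose(L @ U, A))   # True
```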
Example. Consider the system Ax = b, where
A = ( 3  -1   2  -4   1 )             (  1 )
    (-3   3  -5   5  -2 )   and   b = ( -2 )
    ( 6  -4  11 -10   6 )             (  9 )
    (-6   8 -21  13  -9 )             (-15 )
Let us first apply LU factorization to the matrix A. The first pivot column is column 1, with modified version (1, -1, 2, -2)^T. Adding row 1 to row 2, adding -2 times row 1 to row 3, and adding 2 times row 1 to row 4, we obtain
( 3 -1   2  -4   1 )
( 0  2  -3   1  -1 )
( 0 -2   7  -2   4 )
( 0  6 -17   5  -7 )
The next pivot column is column 2, with modified version (*, 1, -1, 3)^T. Adding row 2 to row 3, and adding -3 times row 2 to row 4, we obtain
( 3 -1  2 -4  1 )
( 0  2 -3  1 -1 )
( 0  0  4 -1  3 )
( 0  0 -8  2 -4 )
The next pivot column is column 3, with modified version (*, *, 1, -2)^T. Adding 2 times row 3 to row 4, we obtain the quasi row echelon form with pivot entry 2 in column 5 of the last row. It follows that
L = ( 1  0  0  0 )             ( 3 -1  2 -4  1 )
    (-1  1  0  0 )   and   U = ( 0  2 -3  1 -1 )
    ( 2 -1  1  0 )             ( 0  0  4 -1  3 )
    (-2  3 -2  1 )             ( 0  0  0  0  2 )
We now consider the system Ly = b, i.e.,
( 1  0  0  0 )( y1 )   (  1 )
(-1  1  0  0 )( y2 ) = ( -2 )
( 2 -1  1  0 )( y3 )   (  9 )
(-2  3 -2  1 )( y4 )   (-15 )
Using row 1, we obtain y1 = 1. Using row 2, we obtain y2 - y1 = -2, so that y2 = -1. Using row 3, we obtain y3 + 2y1 - y2 = 9, so that y3 = 6. Using row 4, we obtain y4 - 2y1 + 3y2 - 2y3 = -15, so that y4 = 2. Hence
y = (1, -1, 6, 2)^T.
Next we consider the system Ux = y, i.e.,
( 3 -1  2 -4  1 ) ( x1 )   (  1 )
( 0  2 -3  1 -1 ) ( x2 ) = ( -1 )
( 0  0  4 -1  3 ) ( x3 )   (  6 )
( 0  0  0  0  2 ) ( x4 )   (  2 )
                  ( x5 )
Here the free variable is x4. Let x4 = t. Using row 4, we obtain 2x5 = 2, so that x5 = 1. Using row 3, we obtain 4x3 = 6 + x4 - 3x5 = 3 + t, so that x3 = 3/4 + t/4. Using row 2, we obtain 2x2 = -1 + 3x3 - x4 + x5 = 9/4 - t/4, so that x2 = 9/8 - t/8. Using row 1, we obtain 3x1 = 1 + x2 - 2x3 + 4x4 - x5 = 27t/8 - 3/8, so that x1 = 9t/8 - 1/8. Hence
(x1, x2, x3, x4, x5) = ((9t - 1)/8, (9 - t)/8, (3 + t)/4, t, 1), where t is any real number.
Remarks. (1) In practical situations, interchanging rows is usually necessary to convert a matrix A to quasi row echelon form. The technique here can be modified to produce a matrix L which is not unit lower triangular, but which can be made unit lower triangular by interchanging rows.
(2) Computing an LU factorization of an n x n matrix takes approximately (2/3) n^3 operations. Solving the systems Ly = b and Ux = y requires approximately 2n^2 operations.
(3) LU factorization is particularly efficient when the matrix A has many zero entries, in which case the matrices L and U may also have many zero entries.
We next describe an application to games of strategy. Consider a zero sum game between two players R and C, where player R has m possible moves and player C has n possible moves, and where the m x n payoff matrix
A = ( a11 ... a1n )
    ( ...         )
    ( am1 ... amn )
records the payoff a_ij from player C to player R when R makes move i and C makes move j. The entries can be positive, negative or zero. Suppose that for every i = 1, 2, 3, ..., m, player R makes move i with probability p_i, and that for every j = 1, 2, 3, ..., n, player C makes move j with probability q_j. Then p1 + ... + pm = 1 and q1 + ... + qn = 1. Assume that the players make moves independently of each other. Then for every i = 1, 2, 3, ..., m and j = 1, 2, 3, ..., n, the number p_i q_j represents the probability that player R makes move i and player C makes move j. Then the double sum
E_A(p, q) = sum_{i=1}^{m} sum_{j=1}^{n} a_ij p_i q_j
represents the expected payoff that player C has to make to player R. The matrices
p = (p1 ... pm)   and   q = ( q1 )
                            ( .. )
                            ( qn )
are known as the strategies of player R and player C respectively. Clearly the expected payoff is
E_A(p, q) = (p1 ... pm) ( a11 ... a1n ) ( q1 )
                        ( ...         ) ( .. )  = p A q.
                        ( am1 ... amn ) ( qn )
Here we have slightly abused notation: the right hand side is a 1 x 1 matrix!
We now consider the following problem. Suppose that A is fixed. Is it possible for player R to choose a strategy p to try to maximize the expected payoff E_A(p, q)? Is it possible for player C to choose a strategy q to try to minimize the expected payoff E_A(p, q)?
Fundamental Theorem of Zero Sum Games. There exist strategies p* and q* such that
E_A(p*, q) >= E_A(p*, q*) >= E_A(p, q*)
for every strategy p of player R and every strategy q of player C.
Remark. The strategy p* is known as an optimal strategy for player R, and the strategy q* is known as an optimal strategy for player C. The quantity E_A(p*, q*) is known as the value of the game. Optimal strategies are not necessarily unique. However, if p** and q** are another pair of optimal strategies, then E_A(p*, q*) = E_A(p**, q**).
Zero sum games which are strictly determined are very easy to analyse. Here the payoff matrix A contains saddle points. An entry a_ij in the payoff matrix A is called a saddle point if it is a least entry in its row and a greatest entry in its column. In this case, the strategies
p* = (0 ... 0 1 0 ... 0)   and   q* = (0 ... 0 1 0 ... 0)^T,
where the 1's occur in position i in p* and position j in q*, are optimal strategies, so that the value of the game is a_ij.
Remark. It is very easy to show that different saddle points in the payoff matrix have the same value.
Example. In some sports mad school, the teachers require 100 students to each choose between rowing (R) and cricket (C). However, the students cannot make up their minds,
and will only decide when the identities of the rowing coach and cricket coach are known. There are 3 possible rowing coaches and 4 possible cricket coaches the school can hire. The number of students who will choose rowing ahead of cricket in each scenario is as follows, where R1, R2 and R3 denote the 3 possible rowing coaches, and C1, C2, C3 and C4 denote the 4 possible cricket coaches:
       C1   C2   C3   C4
R1     75   50   45   60
R2     20   60   30   55
R3     45   70   35   30
[For example, if coaches R2 and C1 are hired, then 20 students will choose rowing, and so 80 students will choose cricket.] We first reset the problem by subtracting 50 from each entry and create a payoff matrix
A = ( 25    0   -5   10 )
    (-30   10  -20    5 )
    ( -5   20  -15  -20 )
[For example, the top left entry denotes that if each sport starts with 50 students, then 25 is the number cricket concedes to rowing.] Here the entry -5 in row I and column 3 is a saddle point, so the optimal strategy for rowing is to use coach RI and the optimal strategy for cricket is to use coach C3. In general, saddle points may not exist, so that the problem is not strictly determined. Then the solution for these optimal problems are solved by linear programming techniques which we do not discuss here. However, in the case of 2 x 2 payo matrices
which do not contain saddle points, we can write P2 = I - PI and q2 = I Eip, q) = allPlql + a I 2PI(1- ql) + a 21 (1- PI)ql + a 22(1 - PI)(1 = ((all - a 12 - a 21 + a 22 )Pl - (a 22 - a 21 ))ql + (a 12 - a 22 )Pl + Let
- ql. Then
ql) a 22
Then
E A(P ,q
* ) = (a12 -a22)(a22 -a21 ) + a22 = --!..!.-==.-....!..:.~a lI a22 -a12 a21 all - al2 - a21 + a22 all - a l2 - a 21 + a22
Matrics
91
then
EA ( p,q *) alla22 - a12 a 2l
al1 - a12 - a 2l
+ a 22
* [
P =
and
a22 -a2l
all - a12
with value
ORTHOGONAL MATRICES
Definition. A square matrix A with real entries and satisfying the condition A-I = At is called an orthogonal matrix.
Example. Consider the euclidean space ~2 with the euclidean inner product. The
vectors u l = (1, 0) and u2 = (0, 1) form an orthonormal basis B = {up u2}. Let us now rotate u l and u2 anti clockwise by an angle to obtain vI = (cose sin e) and v2 = (-sine, cose ). Then C = {vI' v2 } is also an orthonormal basis.
92
The transition matrix from the basis C to the basis B is given by
Matries
p
Clearly
p -I =p t = [cos
In fact, our example is a special case of the following general result. Proposition. Suppoiie that B = {up'" un} and C = {vI' ... , vn } are two orthonormal bases of a real inner product space V. Then the transition matrix P from the basis C to the basis B is an orthogonal matrix. Example. The matrix
113
A=
2/3 ( 2/3
2/3 ) -2/3
1/3
is orthogonal, since
At A
113
-2/3
2/3
-2/3 2/3) (113 -2/3 2/3) (1 0 0 ) -113 2/3 2/3 -1/3 -2/3 = 0 1 0 . -2/3 113 2/3 2/3 1/3 0 0 1
Note also that the row vectors of A, namely (113,-2/3,2/3), (2/3,-113,-2/3) and (2/3, 2/3, 113) are orthonormal. So are the column vectors of A. In fact, our last observation is not a coincidence. Proposition. Suppose that A is an n x n matrix with real entries. Then (a) A is orthogonal if and only if the row vectors of A form an orthonormal basis of 1R n under the euclidean inner product; and (b) A is orthogonal if and only if the column vectors of A form an orthonormal basis of 1R n under the euclidean inner product.
Proof We shall only prove (a), since the proof of (b) is almost identical. Let r., .... , rn denote the row vectors of A. Then
t _
AA
-.
(
Ii. Ii
:
.. . Ii .:rn )
rn .rn
..
rn .r.
Matrics
r r
93
if i == j, /. } { 0 ifi::/:.j, I
if and only if r I' ... , rn are orthonormal. Proposition. Suppose that A is an n x n matrix with real entries. Suppose further that the inner product in lR n is the euclidean inner product. Then the following are equivalent: (a) A is orthogonal. (b) For every x E lRn, we have (c) For every
Proof a)) :::} (b)) Suppose that A is orthogonal, so that AlA = 1. It follows that for
every x E lR n, we have II Ax 112 = Ax . Ax = xlAIAx = xlIx = XIX b)) :::} (c)) Suppose that II Ax we have
= X . X = II X 112.
21
(AlA -l)u . v = o.
In particular, this holds when v = (AlA -l)u, so that (AlA -I)u . (AlA -I) u = 0, whence (AlA -I)u = 0, ...(i) But then (1) is a system ofn homogeneous linear equations in n unknowns satised by" n every u E lR Hence the coefficient matrix AI A -I must be the zero matrix, and so AlA = 1. Proof For every UE V, we can write u = ~Iul + ... + ~nun = Ylvi +... + ynvn' where
~I''~n' YI' YnE lR, and where B = {u l ' ..... , un} and C= {vi' ..... , vn} are two orthonormal bases of V. Then II u 112 = (u, u) = (~Iul + ... + ~nun' ~Iul + .... + ~nun)
94
Matrics
Similarly,
IIu /1 2 = (U,
n n
LL'Yi'Yj(V;Vj ) = L'Y~
1=1 j=1
It follows that in lR n with the euclidean norm, we have /I [u]B /I = /I [u]c /I, and so
/I P[u]c /I = II [u]c /I n for every u E V. Hence /I Px II = II x II holds for every x E R . It now follows from
Proposition that P is orthogonal.
a~1
A=:
( ani
a~n J
:
ann
is an n x n matrix with real entries. Suppose further that there exist a number A E 1R and a non-zero vector vERn such that Av = v. Then we say that A is an eigenvalue of the matrix A, and that v is an eigenvector corresponding to the eigenvalue A . In this case, we have Av = AV = ')Jv, where I is the n x n identity matrix, so that (A - AI)v = O. Since
vERn is non-zero, it follows that we must have det(A - AI) = O. In other words, we must have
...(ii)
det
ani a n2 ann -
=0
Note that (ii) is a polynomial equation. The polynomial det(A - AI) is called the characteristic polynomial of the matrix A. Solving this equation (2) gives the eigenvalues of the matrix A. On the other hand, for any eigenvalue of the matrix A, the set { v E lR n : (A - AI)v = O} ... (iii) is the nullspace of the matrix A - ')J, and forms a subspace of R n This space (iii) is
Matrics
95
called the eigenspace corresponding to the eigenvalue A. Suppose now that A has eigenvalues
AI' .... An E lR, not necessarily distinct, with corresponding eigenvectors vI' .... , vn E lRn,
and that vi' .... vn are linearly independent. Then it can be shown that p-IAP=D, where
In fact, we say that A is diagonalizable if there exists an invertible matrix P with real entries such that P-IAP is a diagonal matrix with real entries. It follows that A is diagonalizable if its eigenvectors form a basis of lR n In the opposite direction, one can show that if A is diagonalizable, then it has n linearly independent eigenvectors in lR n It therefore follows that the question of diagonalizing a matrix A with real entries is reduced to one of linear independence of its eigenvectors. We now summarize our discussion so far. Diagonalization Process. Suppose that A is an n x n matrix with real entries. (1) Determine whether the n roots of the characteristic polynomial det(A - IJ) are real. (2) If not, then A is not diagonalizable. If so, then nd the eigenvectors corresponding to these eigen-values. Determine whether we can nd n linearly independent eigenvectors. If not, then A is not diagonalizable. If so, then write
(3)
~ (vI .... v
n)
and D
where Ai' ... , An E lR are the eigenvalues ofA and where vl' .....vn E lRn are respectively their corresponding eigenvectors. Then P-IAP = D. In particular, it can be shown that if A has distinct eigenvalues AI' .. An E lR, with corresponding eigenvectors vi' ...... , vn E lR n , then vi' ..... vn are linearly independent. It follows that all such matrices A are diagonalizable.
Orthonormal Diagonalization
We now consider the euclidean space ffi. n an as inner product space with the euclidean inner product. Given any n x n matrix A with real entries, we wish to nd out whether there
96
Matrics
exists an orthonormal basis of ~n consisting of eigenvectors of A. Recall that in the Diagonalization process discussed in the last section, the columns of the matrix Pare eigenvectors of A, and these vectors form a basis of ~ n . It follows from Proposition basis is orthonormal if and only if the matrix P is orthogonal. Definition. An nn matrix A with real entries is said to be orthogonally dagonalizable if there exists an orthogonal matrix P with real entries such that P-IAP = plAP is a diagonal matrix with real entries. First of all, we would like to determine which matrices are orthogonally diagonalizable. For those that are, we then need to discuss how we may find an orthogonal matrix P to carry out the diagonalization. To study the first question, we have the following result which gives a restriction on those matrices that are orthogonally diagonalizable. Proposition. Suppose that A is a orthogonally diagonalizable matrix with real entries. Then A is symmetric. Proof Suppose that A is orthogonally diagonalizable. Then there exists an orthogonal matrix P and a diagonal matrix D, both with real entries and such that P'AP = D. Since PP' = pIp = I and Dt = D, we have A = PDP' = PD'P', so that A' = (PD'P0 t = (P0 t(D0'P' = PDP' = A, whence A is symmetric. Our first question is in fact answered by the following result which we state without proof. Proposition. Suppose that A is an n x n matrix with real entries. Then it is orthogonally diagonalizable if and only if it is symmetric. The remainder of this section is devoted to nding a way to orthogonally diagonalize a symmetric matrix with real entries. We begin by stating without proof the following result. The proof requires results from the theory of complex vector spaces. Proposition. Suppose that A is a symmetric matrix with real entries. Then all the eigenvalues of A are real. Our idea here is to follow the Diagonalization process discussed in the last section, knowing that since A is diagonalizable, we shall find a basis of ~n consisting of eigenvectors of A. We may then wish to orthogonalize this basis by the Gram-Schmidt process. This last step is considerably simplied in view of the following result. Proposition. Suppose that u 1 and u2 are eigenvectors of a symmetric matrix A with real entries, corresponding to distinct eigenvalues 1 and 2 respectively. Then u 1 u 2 = O. In other words, eigenvectors of a symmetric real matrix corresponding to distinct eigenvalues are orthogonal. Proof Note that if we write u 1 and u2 as column matrices, then since A is symmetric, we have
Matrics
97
It follows that
A)U) u 2 = Au) . u 2 = u) AU2 = u) .2u2, so that (AI - A2)(u l . u2) = o. Since A) 7: A 2, we must have u l . u2 = o. We can now follow the procedure below. Orthogonal Diagonalization Process. Suppose that A is a symmetric n x n matrix with real entries. (1) Determine the n real roots AI, .... An of the characteristic polynomial det (A- AJ), and find n linearly independent eigenvectors u l ' , un of A corresponding to these eigenvalues as in the Diagonalization process.
(2)
Aply the Gram~Schmidt orthogonalization process to the eigenvectors u I ' , un to obtain orthogonal eigenvectors vIm, ... , vn of A, noting that eigenvectors corresponding to distinct eigenvalues are already orthogonal. Normalize the orthogonal eigenvectors v .. ..., vn to obtain orthonormal eigenvectors
wI' ... ,
(3)
w.)
where A., ... , An E lR are the eigenvalues of A and where wI' ... , wn E lR n are respectively their orthogonalized and normalized eigenvectors. Then PtAP = D. Remark. Note that if we apply the Gram-Schmidt orthogonalization process to eigenvectors corresponding to the same eigenvalue, then the new vectors that result from this process are also eigenvectors corresponding to this eigenvalue. Why? Example. Consider the matrix
A=( ~
122
~ ~).
1) =
0;
in other words, (A-7)(A- 1)2 = O. The eigenvalues are therefore Al = 7 and (double root)
~=A3=1.
An eigenvector corresponding to A)
98
Matrics
-5
(A-71)u
=(
(A-7I)U=U
~ ~}=
0 wiili root .,
=UJ
Md ",
=Ul J
which are linearly independent. Next, we apply the Gram-Schmidt orthogonalization process to u2 and u3, and obtain
which are now orthogonal to each other. Note that we do not have to do anything to ul at this stage, in view of Proposition. We now conclude that
wI
We now take
Then
p= pI
1IJi [
1IJ3
1116 2/16
0
-1IJ3
~:::n] /AP~(~ ~
and
1IJ3
0 0
Matries
99
det(-I~A
~102 Jo;
o -9 20-A in other words, (A + 1)( A- 2)( A - 5) = O. The eigenvalues are therefore Al = -1, A2 = 2 and A3 = 5. An eigenvector corresponding Al = -I is a solution of the system
with root
ul
=(~J.
u3
=(~5J.
-3
(A+5/)u =
u3 =
(~5 J.
-3
Note that while uI ' u2' u3 correspond to distinct eigenvalues of A, they are not orthogonal. The matrix A is not symmetric, and so Proposition 100 does not apply in this case. Example. Consider the matrix
A=( ~2
det
~2
2
n
0 2
5-A -2 -2 6 - A
J= 0;
7-A
100
Matrics
and 1..3 = 9. An eigenvector corresponding 1..1
(A-3J)u
=(~2 ~2 ~)u =
o
2 4
0, with root
=(
~ ).
-1
(A-6nu
=( ~~ ~2 ~} =
0, wiili root
~ =( ~1
)-
(A-9J)u=
-2 0)
[-1)
2 . 2
Note now that the eigenvalues are distinct, so it follows from Proposition that up u2, u3 are orthogonal, so we do not have to apply Step (2) of the Orthogonal diagonalization process. Normalizing each of these vectors, we obtain respectively
f-113
w3
2/3
We now take
w2
2/3 2/3 -1/3) w3)= 2/3 -113 2/3. ( -113 2/3 . 2/3 2/3 2/3
= pi =
-113) 2/3
2/3
and pIAP= 0
(3
Chapter 4
Determinants
Introduction
The reader probably already met determinants in calculus or algebra, at least the determinants of 2 x 2 and 3 x 3 matrices. For a 2 x 2 matrix
the determinant is simply ad - be; the determinant of a 3 x 3 matrix can be found by the "Star of David" rule. In this chapter we would like to introduce determinants for n x n matrices. I don't want just to give a formal definition. First I want to give some motivation, and then derive some properties the determinant should have. Then if we want to have these properties, we do not have any'choice, and arrive to several equivalent definitions of the determinant. It is more convenient to start not with the determinant of a matrix, but with determinant of a system of vectors. There is no real difference here, since we always can join vectors together (say as columns) to form a matrix. Let us have n vectors VI' V2, ... , vn in R,n (notice that the number of vectors coincide with ~imension), and we want to find the n-dimensional volume of the parallelepiped determined by these vectors. The parallelepiped determined by the vectors VI' V2, ... , Vn can be defined as the collection of all vectors V E ]Rn that can be represented as
v=tIvI +t2v2 + .. +tnvn ,O S tk S 1 "ik = 1,2, ... , n. It can be easily visualized when n = 2 (parallelogram) and n = 3 (parallel~piped). So, what is the n-dimensional volume? If n = 2 it is area; if n = 3 it is indeed the volume. In dimension 1 is it just the length. Finally, let us introduce some notation. For a system of vectors (columns) VI' V2, ... , Vn we will denote its determinant (that we are going to construct) as D(v I , V2, ... , vn). If we
102
Determinants
join these vectors in a matrix A (column number k of a is vk ), then we will use the notation detA, detA = D(v\, v2, ... , vn) Also, for a matrix
an,}
al.l a2)
an'2
aI,2
an,n
aI,n
a2,!1 .
In other words, the above two properties say that the determinant of n vectors is linear in each argument (vector), meaning that if we fix n - 1 vectors and interpret the remaining vector as a variable (argument), we get a linear function.
'
D(
VI''''Ub' .. , vn
k
+ D( VI'"'' Vb"', vn )
k
Determinants
103
Remark. We already know that linearity is a very nice property, that helps in many situations. So, admitting negative heights (and therefore negative volumes) is a very small price to pay to get linearity, since we can always put on the absolute value afterwards. In fact, by admitting negative heights, we did not sacrifice anything! To the contrary, we even gained something, because the sign of the determinant contains some information about the system of vectors (orientation). Preservation Under "Column Replacement"
The next property also seems natural. Namely, if we take a vector, say vp and add to it a multiple of another vector vk , the "height" does not change, so
D(v\,. "Vj
,
In other words, if we apply the column operation of the third type, the determinant does not change. Remark. Although it is not essential here, let us notice that the second part of linearity is not independent: it can be deduced from properties. Antisymmetry The next property the determinant should have, is Functions of several variables that change sign
D[V\>''''
At first sight this property does not look natural, but it can be deduced from the previous ones. Namely, applying property three times, and then using we get
D(v\, ... , Vj, ... , Vb'''' Vn ) =
j
= D(V1"",Vj"",:k -Vj,'''''Vn )
j
104
Determinants
-+,.
Normalization
The last property is the easiest one. For the D(e!, e2, ... , en) In matrix notation this can be written as det(l) = 1
= 1.
for all scalars a, ~. 2. Determinant is antisymmetric, i.e., if one interchanges two columns, the determinant changes sign. 3. Normalization property: det 1= 1. The first propertyis just the combined. The second one and the last one is the normalization property. Note, that we did not use property: it can be deduced from the above three. These three properties completely define determinant.
If a has two equal columns, then det A = 0; If one column of a is a multiple of another, then detA = 0; If columns of a are linearly dependent, i.e., if the matrix is not invertible,
detA =
o.
then
Determinants
105
Proof Statement 1 follows immediately from linearity. Ifwe multiply the zero column by zero, we do not change the matrix and its determinant. But by the property 1 above, we should get O. The fact that determinant is antisymmetric, implies statement 2. Indeed, if we interchange two equal columns, we change nothing, so the determinant remains the same. On the other hand, interchanging two columns changes sign of determinant, so det4 = - det A, which is possible only if det A = O. Statement 3 is immediate corollary of statement 2 and linearity. To prove the last statement, let us first suppose that the first vector VI is a linear combination of the other vectors,
n VI
v.l = D
[[t, "'tVt].
vn )
v,. v3'"'' v.
and each determinant in the sum is zero because of two equal columns. Let us now consider general case, i.e., let us assume that the system vI' v2, ... , vn is linearly dependent. Then one of the vectors, say vk can be represented as a linear combination ofthe others. Interchanging this vector with VI we arrive to the situation we just treated, so
D(vl""'Vk "", vn ) = -D(vk""'V)'''''vn ) = -0 = 0,
k k
so the determinant in this case is also O. The next proposition generalizes property. As we already have said above, this property can be deduced from the three "basic" properties of the determinant, we are using in this section. Proposition. The determinant does not change if we add to a column a linear combination of the other columns (leaving the other columns intact). In particular, the determinant is preserved under "column replacement" (column operation of third type). Proof Fix a vector vk , and let u be a linear combination of the other vectors,
u = L>~"jVj'
j"",k
Then by linearity
D(vl'''' vk
106
Determinants
A = {aj, k} ~,j=l is called diagonal if all entries off the main diagonal are zero, i.e., if aj k
= for all}
::j::
k. We will often use the notation diag{a\, a 2, ... , an} for the diagonal matrix
an Since a diagonal matrix diag{a\, a 2, ... , an} can be obtained from the identity matrix I by multiplying column number k by ak,
?~ .. [
g].
Determinant of a diagonal matrix equal the product of the diagonal entries,det(diag{a\, a2' ... , an}) = a\a2 .... an' The next important class is the class of so-called triangular matrices. A square matrix
A = {a j
'k } ~,j=l
i.e" if a"k = for all k <i. A square matrix is called lower triangular if all entries above the mai~ are 0, i.e if aj'k = for all} < k. We call a matrix triangular, if it is either lower or upper triangular matrix. It is easy to see that
is called upper triangular if all entries below the main diagonal are 0,
~--------------------------------------~
detA = a\,\a2'2 ... an' n. Indeed, if a triangular matrix has zero on the main diagonal, it is not invertible (this can easily be checked by column operations) and therefore both sides equal zero. If all diagonal entries are non-zero, then using column replacement (column operations of third type) one can transform the matrix into a diagonal one with the same diagonal entries: For upper triangular matrix one should first subtract appropriate multiples of the first column from the columns number 2, 3, ... , n, "killing" all entries in the first row, then subtract appropriate mUltiples of the second column from columns number 3, ... , n, and so on. To treat the case of lower triangular matrices one has to do "column reduction" from the left to the right, i.e., first subtract appropriate multiples of the last column from columns number n - 1, ... , 2, 1, and so on. Computing the Determinant Now we know how to compute determinants, using their properties: one just need to do column reduction (i.e., row reduction for AT) keeping track of column operations
Determinants
107
changing the determinant. Fortunately, the most often used operation - row replacement, i.e., operation of third type does not change the determinant. So we only need to keep track of interchanging of columns and of multiplication of column by a scalar. Ifan echelon form of AT does not have pivots in every column (and row), then a is not invertible, so det A = O. If a is invertible, we arrive at a triangular matrix, and det A is the product of diagonal entries times the correction from column interchanges and multiplications. The above algorithm implies that detA can be zero only if a matrix A is not invertible. Combining this with the last statement of Proposition we get Proposition. det A = 0 if and 0 if and only if A is invertible. only if a is not invertible. An equivalent statement: det A Note, that although we now know how to compute determinants, the determinant is still not defined. One can ask: why don't we define it as the result we get from the above algorithm? The problem is that formally this result is not well defined: that means we did not prove that different sequences of column operations yield the same answer. Determinants of Transpose and Product
In this section we prove two important theorems Theorem. (Determinant of a transpose). For a square matrix A, det A = det(AT). This theorem implies that for all statement about columns the corresponding statements about rows are also true. In particular, determinants behave under row operations the same way they behave under column operations. So, we can use row operations to compute determinants. Theorem. (Determinant of a product). For n x n matrices A and B det(AB) = (detA)(detB)
In other words
are easy to compute; right multiplication by an elementary matrix is a column operation, and eect of column operations on the determinant is well known. This can look like a lucky coincidence, that the determinants of elementary matrices agree with the corresponding column operations, but it is not a coincidence at all. Namely, for a column operation the corresponding elementary matrix can be obtained from the identity matrix 1 by this column operation. So, its determinant is 1 (determinant of 1) times the eect of the column operation. And that is all! It may be hard to realise at first, but the above paragraph is a complete and rigorous proof of the lemma! Applying N times Lemma we get the following corollary.
108
Determinants
Corollary. For any matrix A and any sequence of elementary matrices E I, E 2, ... , EN (all matrices are n x n) det(AE IE 2, ..., EN) = (det A)(det EI)(det E 2), ... , (det.EN) Lemma. Any invertible matrix is a product of elementary matrices. Proof We know that any invertible matrix is row equivalent to the identity matrix, which is its reduced echelon form. So /= ENElV l , ..., E 2E IA, and therefore any invertible matrix can be represented as a product of elementary matrices, A = EI E2 ... EN_IEN / = EI E2 ... EN_IEN (inverse of an elementary matrix is an elementary matrix). Proof First of all, it can be easily checked, that for an elementary matrix E we have detE = det(ET). Notice, that it is sucient to prove the theorem only for invertible matrices A, since if A is not invertible then AT is also not invertible, and both determinants are zero. By Lemma matrix A can be represented as a product of elementary matrices, A = E IE 2, ..., EN' and by Corollary the determinant of A is the product of determinants of the elementary matrices. Since taking the transpose just transposes each elementary matrix and reverses their order, Corollary implies that detA = detA T. Proof Let us first suppose that the matrix B is invertible. Then Lemma implies that B can be represented as a product of elementary matrices B = E IE 2, ..., EN' and so by Corollary det (AB) = (det A)[(det EI)(det E 2), ... , (det EN)] = (det A)(det B). If B is not invertible, then the product AB is also not invertible, and the theorem just says that 0 = O. To check that the product AB = C is not invertible, let us assume that it is invertible. Then multiplying the identity AB = C by C-I from the left, we get C-1AB = /, so C-IA is a left inverse of B. So B is left invertible, and since it is square, it is invertible. We got a contradiction.
-I -I -I -I -I -I -I -I
Properties of Determinant
First of all, let us say once more, that determinant is defined only for square matrices Since we now know that det A = det(AT), the statements that we knew about columns are true for rows too. 1. Determinant is linear in each row (column) when the other rows (columns) are fixed. 2. If one interchanges two rows (columns) of a matrix A, the determinant changes sign.
Determinants
109
3. 4. 5. 6. 7. 8. 9.
For a triangular (in particular, for a diagonal) matrix its determinant is the product of the diagonal entries. In particular, det 1= 1. If a matrix A has a zero row (or column), det A = O. If a matrix A has two equal rows (columns), det A
= O.
If one of the rows (columns) of A is a linear combination of the other rows (columns), i.e., if the matrix is not invertible, then det A = 0; More generally, det A
= 0 if and only if A
det A*,O if and only if A is invertible. det A does not change if we add to a row (column) a linear combination of the other rows (columns). In particular, the determinant is preserved under the row (column) replacement, i.e., under the row (column) operation of the third kind.
= (det A)(det B). And finally, 12. If a is an n x n matrix, then det(aA) = an det A.
11. det(AB)
The last property follows from the linearity of the determinant, if we recall that to multiply a matrix A by a we have to multiply each row by a, and that each multiplication multiplies the determinant by a.
n matrix A = {aj,k }nj,k = 1, and let vI' v 2, ... , vn be its columns, i.e.,
vk
[~~:~l =
~~
= taj'k ej .
J~
vI: D(v., v 2, ... , vn) = D(2: a j,l e j' v2 ... ,en = 2: a j'l D(ej , v2'''' vn )
j=1 j=1 Then we expand it in the second column, then in the third, and so on. We get D(v., v 2, .. " vn) =
110
Determinants
Notice, that we have to use a different index of summation for each column:we call themjl,h, ... ,jn; the indexh here is the same as the index}. It is a huge sum, it contains nn terms. Fortunately, some of the terms are zero. Namely, if any 2 of the indices j l' h, ... , jn coincide, the determinant D(ej I . eh, ... ef) is zero, because there are two equal rows here. So, let us rewrite the sum, omitting all zero terms. The most convenient way to do that is using the notion of a permutation. a permutation of an ordered set {l, 2, ... , n} is a rearrangement of its elements. a convenient way to represent a permutation is by using a function a: {I, 2, ... , n} ~ {l, 2, ... , n}, where a(I), a(2), ... , (n) gives the new order of the set 1,2, ... , n. In other words, the permutation rearranges the ordered set 1,2, ... , n into a(1), a(2), ... , (n). Such function a has to be one-to-one (different values for different arguments) and onto (assumes all possible values from the target space). Such functions (one-to-one and onto) are called bijections, and they give one-toone correspondence between two sets. Although it is not directly relevant here, let us notice, that it is wellknown in combinatorics, that the number of different perturbations of the set {I, 2, ... , n} is exactly n!. The set of all permutations of the set {l, 2, ... , n} will be denoted Perm(n). Using the notion of a permutation, we can rewrite the determinant as D(v l , v2, ... , vn) =
2::
crEPenn(n)
acr(I),lacr(2),2acr(n),nD(ecr(I),ecr(2)',ecr(n
The matrix with columns ecr(l)' e cr (2)' ... , ecr(n) can be obtained from the identity matrix by finitely many column interchanges, so the determinant D(ecr(I)' ecr(2), ... , ecr(n is I or -1 depending on the number of column interchanges. To formalize that, we define sign (denoted sign a) of a permutation to be 1 if even number of interchanges is necessary to rearrange the ntuple 1, 2, ... , n into a(1), a(2), ... , a(n), and sign(a) = -1 if the number of interchanges is odd. It is a well-known fact from the combinatorics, that the sign of permutation is well defined, i.e., that although there are infinitely many ways to get the n-tuple (1), (2), ... , (n) from 1, 2, ... , n, the number of interchanges is either always odd or always even. One of the ways to show that is to count the number K of pairs j, k, j < k such that aU) > a(k), and see if the number is even or odd. We call the permutation odd if K is odd and even if K is even. Then define signum of to be (-I)K. We want to show that signum and sign coincide, so sign is well defined. If (k) = k V k, then the number of such pairs is 0, so signum of such identity permutation is 1. Note also, that any elementary transpose, which interchange two neighbors, changes the signum of a permutation, because it changes (increases or decreases) the number of the pairs exactly by I. So, to get from a permutation to another one always needs an even
Determinants
111
number of elementary transposes if the permutation have the same signum, and an oddnumber if the signums are different. Finally, any interchange of two entries can be achieved by an odd number of elementary transposes. This implies that signum changes under an interchange of two entries. So, to get from 1,2, ... , n to an even permutation (positive signum) one always need even number of interchanges, and odd number of interchanges is needed to get an odd permutation (negative signum). That means signum and sign coincide, and so sign is well defined. So, if we want determinant to satisfy basic properties 1-3 from Section 3, we must define it as aa(l),l,aa(2),2,aa(n),n sign(a), detA =
2:
aEPenn(n)
where the sum is taken over all permutations of the set {I, 2, ... , n}. If we define the determinant this way, it is easy to check that it satisfies the basic properties. Indeed, it is linear in each column, because for each column every term (product) in the sum contains exactly one entry from this column. Interchanging two columns of a just adds an extra interchange to the perturbation, so right side in changes sign. Finally, for the identitymatrix I, the right side is 1 (it has one non-zero term).
COFACTOR EXPANSION
For an n x n matrix A = {aj,d~,k=1 letAj'kdenotes the (n-l) x (n-l) matrix obtained from A by crossing out row number j and column number k. Theorem. (Cofactor expansion of determinant). Let A be an n x n matrix. For each j, ~ j ~ n, determinant ofA can be expanded in the row number j as
detA - aj,l (-I)J detAj,1 + aj ,2(-I)j+2 detAj ,2 + ... + a' (_I)J+n detA. J, l J,n
= 2:aj,k(-I)
k=1
'+1
j+k
det Aj,k'
n, the determinant can be expanded in the
detA
= 2:aj,k(-I)
k=1
j+k
det Aj,k'
Proof Let us first prove the formula for the expansion in row number 1. The formula for expansion in row number k then can be obtained from it by interchanging rows number 1 and k. Since det A = det AT, column expansion follows automatically. Let us first consider a special case, when the first row has one non zero term a I I' Performing column operations on columns 2, 3, ... , n we transform a to the lower triangular form. The determinant of A then can be computed as
112
Determinants
aI'k by
where the matrix A(k) is obtained from A by replacing all entries in the first row except O. As we just discussed above detA(k) = (_I)1+k al,k detAl'k, so det A =
To get the cofactor expansion in the second row, we can interchange the first and second rows and apply the above formula. The row exchange changes the sign, so we get det A
= - l:)-1)
k=I
I+k
Exchanging rows 3 and 2 and expanding in the second row we get formula detA = 2:(-1)
k=1
and so on. To expand the determinant det A in a column one need to apply the row expansion formula for AT. Definition. The numbers '+k C detA' 1, k =(-I)l j, k are called co/actors. Using this notation, the formula for expansion of the determinant in the row number j can be rewritten as
Determinants
n
113
detA
= a"1 C"I J J
= Laj,kCj,k' k=1
n
Remark. Ver~ often the cofactor expansion formula is Jused as the definition of determinant. It is not dicult to show that the quantity given by this formula satisfies the basic properties of the determinant: the normalization property is trivial, the proof of anti symmetry is easy. However, the proof of linearity is a bit tedious (although not too dicult). Remark. Although it looks very nice, the cofactor expansion formula is not suitable for computing determinant of matrices bigger than 3 x 3. As one can count it requires n! multiplications, and n! grows very rapidly. For example, cofactor expansion of a 20 x 20 matrix require 20! ~ 2.4 . 10 18 multiplications: it would take a computer performing a billion multiplications per second over 77 years to perform the multiplications. On the other hand, computing the determinant of an n x n matrix using row reduction requires (n 3 + 2n - 3)/3 multiplications (and about the same number of additions). It would take a computer performing a million operations per second (very slow, by today's standards) a fraction of a second to compute the determinant of a 100 x 100 matrix by row reduction. It can only be practical to apply the cofactor expansion formula in higher dimensions if a row (or a column) has a lot of zero entries. However, the cofactor expansion formula is of great theoretical importance, as the next section shows. Cofactor Formula for Inverse Matrix
The matrix C = {Cj,k }~,k=1 whose' entries are c<!factors of A given matrix A is called the cofactor matrix of A. Theorem. Let a be an invertible matrix and let C be its cofactor matrix. Then
A-I =_l_C T . detA Proof Let us find the product ACT. The diagonal entry number j is obtained by mUltiplyingjth row of a by jth column of a (i.e., jth row of C), so (ACT)JJ .. = a'IC.'1 + a C. + ... + a C. = detA, J J J, 2 J, 2 J.n J,n by the cofactor expansion formula. To get the off diagonal terms we need to mUltiply jth row of A by kth column of CT,j -:t= k, to get a,IC C + ...+ a C . k, 1+ a, j, j" 2 k2 j,n k,n
114
Determinants
It follows from the cofactor expansions formula (expanding in kth row) that this is the determinant of the matrix obtained from a by replacing row number k by the row number } (and leaving all other rows as they were). But the rows} and k of this matrix coincide, so the determinant is O. So, all off-diagonal entries of ACT are zeroes (and all diagonal ones equal det A), thus ACT = (det A) 1.
That means that the matrix det A CT is a right inverse of A, and since a is square, it is the inverse. Recalling that for an invertible matrix A the equation Ax solution
X
= b has a unique
=A- 1 b = - - CTb. detA We get the following corollary of the above theorem. Corollary. (Cramer's rule). For an invertible matrix a the entry number k ofthe solution of the equation Ax = b is given by the formula detBk xk=--' detA where the matrix Bk is obtained from a by replacing column number k of A by the vector b. Some applications of the cofactor formula for the inverse. Example (Inverting 2 x 2 matrices). The cofactor formula really shines when one needs to invert a 2 x 2 matrix
I
A=(~ ~).
The cofactors are.just entries (1 x ] matrices), the cofactor matrix is
While the cofactor formula for the inverse does not look practical for dimensions higher than 3, it has a great theoretical value, as the examples below illustrate. Example. (Matrix with integer inverse). Suppose that we want to construct a matrix a with integer entries, such that its inverse also has integer entries (inverting such matrix would make a nice homework problem: no messing with fractions). If det A = 1 and its entries are integer, the cofactor formula for inverses implies that A-I also have integer entries. Note, that it is easy to construct an integer matrix A with det A = 1: one should start with a triangular matrix with 1 on the main diagonal, and then apply several row or column replacements (operations of the third type) to make the matrix look generic.
Determinants
115
Examp:e (Inverse of a polynomial matrix). Another example is to consider a polynomial matrix A(x), i.e., a matrix whose entries are not numbers but polynomials a ix) of the variable x. If det A(x) 1, then the inverse matrix A-I(x) is also a polynomial ri;atrix. If det A (x) =p(x) == 0, it follows from the cofactor expansion that p(x) is a polynomial, so A-I (x) is a has rational entries: moreover, p(x) is a multiple of each denominator.
(~l).( ~)
different k
The'Jrem. For a non-zero matrix a its rank equals to the maximal integer k such that there exists a non-zero minor of order k. Proof Let us first show, that if k > rank A then all minors of order k are 0. Indeed, since the dimension of the column space Ran A is rank A < k, any k columns of A are linearly dependent. Therefore, for any k x k submatrix of A its column are linearly dependent, and so all minors of order k are 0. To complete the proof we need to show that there exists a non-zero minor of order k = rankA. There can be many such minors, but probably the easiest way to get such a minor is to take pivot rows and pivot column (i.e., rows and columns of the original matrix, containing a pivot). This k x k submatrix has the same pivots as t:.e original matrix, so it is invertible (pivot in every column and every row) and its determinant is non-zero. This theorem does not look very useful, because it is much easier to perform row reduction than to compute all minors. However, it is of greattheoretical importance, as the following corollary shows. Corollary. Let A = A(x) be an m x n polynomial matrix (i.e., a matrix whose entries are polynomials ofx). Then rank A (x) is constant everywhere, except maybe finitely many points. Proof Let r be the largest integer such that rankA(x) = r for some x. To show that such r exists, we first try r = min {m, n}. If there exists x such that rank A (x) = r, we found r. If not, we replace r by r -1 and try again. After finitely many steps we either stop or hit 0. So, r exists. Let Xo be a point such that rankA(xo) = r, and let M be a minor of order k such that M(xo) 0. Since M(x) is the determinant of a k x k polynomial matrix, M(x) is a polynomial. Since M(xo) 0, it is not identically zero, so it can be zero onl; at finitely many points. So, everywhere except maybe finitely many points rankA(x) ~ r. But by the definition ofr, rankA(x) ~ r for all x.
'*
'*
DETERMINANTS
We have related the question of the invertibility of a square matrix to a question of
116
Determinants
solutions of systems of linear equations. In some sense, this is unsatisfactory, since it is not simple to nd an answer to either of these questions without a lot of work. We shall relate these two uestions to the question of the determinant of the matrix in question. The task is reduced to checking whether this determinant is zero or non-zero. So what is the determinant? Let us start with 1 x 1 matrices, of the form
A
= (a)
Note here that 1\ = (1). If a 6 ::;c 0, then clearly the matrix A is invertible, with inverse matrix A-I = (a - 1) On the other hand, if a = 0, then clearly no matrix B can satisfy AB = BA = 1\, so that the matrix A is not invertible. We therefore conclude that the value a is a good "determinant" to determine whether the 1 x 1 matrix A is invertible, since the matrix A is invertible if and , only if a ::;c O. Let us then agree on the following definition. Definition. Suppose that
A
= (a).
is a 1 x 1 matrix. We write det (A) = a, and call this the determinant of the matrix A. Next, let us turn to 202 0 x 2 matrices, of the form
A=(~ ~).
We shall use elementary row operations to nd out when the matrix A is invertible. So we consider the array
(AII2 ) =
(~ ~
6 ~),
and try to use elementary row operations to reduce the left hand half of the array to 12, Suppose first of all that a = e = O. Then the array becomes
0 b (o d O l ' and so it is impossible to reduce the left hand half of the array by elementary row operations to the matrix 12 , Consider next the case a 6:f. O. Multiplying row 2 of the array (1) by a, we obtain Adding -e times row 1 to row 2, we obtain
I 0)
1 0)
a b ( o 0 -c
1 a' 0)
Determinants
117
and so it is impossible to reduce the left hand half of the array by elementary row operations to the matrix 12, On the other hand, if D = ad - be :f. 0, then the array (2) can be reduced by elementary row operations to ( so that
1 0
1 -c/ D
d / D -b / D) a/ D '
A-I =
1 ad-be -e
(d -b). a
Consider nally the case e:f. O. Interchanging rows I and 2 of the array (I), we obtain
e (a
ad-be -e a Finally, note that a = e = 0 is a special case of ad - be = O. We therefore conclude that the value ad - be is a good determinant" to determine whether the 2 x 2 matrix A is invertible, since the matrix A is invertible if and only if ad - be :f. O. Let us then agree on the following definition. Definition. Suppose that
A=
d -b).
(~ ~)
118
Determinants
2 matrix. We write det(A) = ad - be, and call this the determinant of the matrix A.
x
is a 2
A =.
(
all...
a ln ]
anI
ann
be an n matrix. For every i,) = 1, ... , n, let us delete row i and column} of A to obtain the (n - 1) (n - 1) matrix
all a(i-I)I Aij aI(I-I)
aI(I+I)
aln
a(i_I)"
a(i-l)(j-I) a(i-I)(j+I)
=
a(i,!"I)I anI
a(i+l}(j-I) : a(j+I)(j+I)
a(i~l)"
ann
an(j-I)
an(j+I)
Here denotes that the entry has been deleted. Definition. The number Cij = (_l)i+j det(Aij) is called the cofactor of the entry aij of A. In other words, the cofactor of the entry a ij is obtained from A by first deleting the row and the column containing the entry aij' then calculating the d~t.erminant of the resulting (n - 1) x (n - 1) matrix, and nally multiplying by a sign (-ly+J. Note that the entries of A in row i are given by
(ail' ... , ain)
L aijCij =aijC
j=I
il
+ ... + ainCin .
Determinants a F] ( an) Definition. By the cofactor expansion of A by column j, we mean the expression
n
IaijCy i=l
119
Proposition. Suppose that A is an n x n matrix. Then the expressions are all equal and independent of the row or column chosen. Definition. Suppose that A is an n x n matrix. We call the common value in the determinant of the matrix A, denoted by det(A). Let us check whether this agrees with our earlier definition of the determinant of a 2 x 2 matrix. Writil1g
A=
we have
CII
(a a
l1 a21
12 ), a22
It follows that by row 1 : allC ll + a l2 C l2 = a ll a22 - a 12a21 , by row 2 : a 21 C21 + a 22 C22 = -a2l a l2 + a 22 a ll , by column 1 : allC ll + a 21 C 21 = a ll a 22 - a 21 a 12 , by column 2 : a l2 C l2 + a22 C22 = -a l2 a21 + a22a ll : The four values are clearly equal, and of the form ad - bc as before. Example. Consider the matrix
A= 1 4 2. 215
Let us use cofactor expansion by row 1. Then
Cll
2 3 5) (
= (_1)1+1 det
U;)
= (_1)3 (5 - 4) = -1,
U i) = (_1)4 (1 - 8) = -7,
det(A) = allC ll + a l2 C l2 + a l3 C l3 = 36 - 3 - 35 = -2: Alternatively, let us use cofactor expansion by column 2. Then
120
Determinants
C l2 = (_1)1+2 det
C22 = (_1)2+2 det
U;) =
(f
~)
(-1)3(5 - 4)
= -1, = 0,
(~ ~) = (_1)4(10 -
10)
= (-1)5(4 - 5)
= 1,
det(A) = a 12 C 12 + a22 C22 + a 32 C32 = -3 + 0 + 1 = -2. When using co-factor expansion, we should choose a row or column with as few nonzero entries as possible in order to minimize the calculations. Example. Consider the matrix
2 30 02 5J 1 4 A= (5 4 8 5 . 2 1 0 5
Here it is convenient to use cofactor expansion by column 3, since then 2 3 det(A) = a 13 C 13 + a23 C23 + a 33 C33 + a43 C43 = 8C33 = 8E-l)3+3 det ( ~ in view of Example.
i D=-16,
(ap ... al n J.
an, ... ann If aij = 0 whenever i > j, then A is called an upper triangular matrix. If aij = 0 whenever i <j, then A is called a lower triangular matrix. We also say that A is a triangular matrix if it is upper triangular or lower triangular. Example. The matrix
( ~ ~ ~J
006
is upper triangular.
Determinants Example. A diagonal matrix is both upper triangular and lower triangular. Proposition. Suppose that the n x n matrix is triangular.
A=
121
... ann Then det(A) = all a22 , ... , ann' the product of the diagonal entries. Proof Let us assume that A is upper triangular for the case when A is lower triangular, change the term left-most column to the term top row in the proof. Using cofactor expansion by the left-most column at each step, we see that
( ani
all .
...
aln'J .
f3
'"
'"
ann
a~nJ = '"
all
a22 .. ann
(b) Suppose that the matrix B is obtained from the matrix A by adding a multiple of one row of A to another row. Then det(B} = det(A}. (c) Suppose that the matrix B is obtained from the matrix A by multiplying one row of A by a non-zero constant c. Then det(B} = c det(A}. Proof (a) The proof is by induction on n. It is easily checked that the result holds when n = 2. When n> 2, we use co-factor expansion by a third row, say row i. Then
n
det(B)
= .LJaij(-l)i+ j
'""
det(Bij)'
j=1
Note that the (n - 1) x (n - I) matrices Bij are obtained from the matrices Ai" by interchanging two rows of Aij , so that det(Bij) = -det(Aij)' It follows that !)
n
det(B)
as required.
=-
,
122
Determinants
(b) Again, the proof is by induction on n. It is easily checked that the result holds when n = 2. When n> 2, we use co-factor expansion by a third row, say row i. Then
n
det(B)
= L,aij(-li+ j
j=1
det(Bij)
Note that the (n -1) x (n - 1) matrices Bij are obtained from the matricesAij by adding a multiple of one row of Aij to another row, so that det(Bij) = det(Aij)' It follows that
n
det(B)
= L,uij(-l)i+ j
1=1
det(Aij)
= det(A)
as required. (c) This is simpler. Suppose that the matrix B is obtained from the matrix A by multiplying row i of A by a non-zero constant c. Then det{ B) = Note now that
Bij
= Aij'
n
since row i has been removed respectively from Band A. It follows that det(B) = Lcaij(-I)i+1 det(Aij) = cdet(A)
j=1
as required. In fact, the above operations can also be carried out on the columns of A. More precisely, we have the following result. Proposition. Suppose that A is an n x n matrix. (a) Suppose that the matrix B is obtained from the matrix A by interchanging two columns of A. Then det(B} = - det(A).
(b) Suppose that the matrix B is obtained from the matrix A by adding a multiple of one column of A to another column. Then det(B} = det(A}. (c) Suppose that the matrix B is obtained from the matrix A by multiplying one column of A by a non-zero constant c. Then det(B} = c det(A}.
Elementary row and column operations can be combined with cofactor expansion to calculate the determinant of a given matrix. We shall illustrate this point by the following examples. Example. Consider the matrix
=(1
! ! ~J
Determinants
123
det(A) = det[?
! ~J.
det(A)
= det
[H ! ~J.
2 2 0 4
= 2(_1)4+1 det
(~3 i ~J = -2 det(~3 4 3
(~ ~ J. o i 2 -2
de~ ~ ~2 det (~
det(A) det(A)
n
(!
~)
= -2.2(-1)3+2 det
= det
(!
~).
Using the formula for the determinant of2 x 2 matrices, we conclude that
Let us start again and try a different way. Dividing row 4 by 2, we have
det(A)
= 2 det
(! : ! ~J.
det(A) = 2 det =
(~ ! ! ~J
2det(~
! ! ~J.
1 0 2
124
Determinants
Using cofactor expansion by row 2, we have (; 1 I 2 Adding -2 times row 3 to row 1, we have
det(A)
= 2 1(-1)2+3 det
.
=~ ~) =-2det(;1
det(A)
= -2 det
(~1 =~ 1) 1 2
det(A)
~ -2 de{~ ~53
-H
~5) =-2det(-=-15 3 ~5)'
Using the formula for the determinant of 22 matrices, we conclude that det(A) = -2(25 + 13) = -76. Example. Consider the matrix
2 102 0 Here we have the least number of non-zero entries in column 3, so let us work to get more zeros into this column. Adding -1 times row 4 to row 2, we have
A=[~ ~ ~ H]
1 0 1
1 3 .
det(A)
det[~ ? ~ H].
1 0 1 1 3
2 1 1 3J (
(2 1 1
Determinants
125
o 3 1 2 det( A) = - det 1 7 1 1
Adding -1 times row 1 to row 3, we have 1 3 1 1 3 o 1 2 det(A) = -det 0 6 0 -2 (
1 1 1 3J (o 1 2 0
1 2
Using cofactor expansion by column 1, we have 2'J. det(A)=-l(-I)I+ldet(~1 2 ~ 32)=-detdet(A)=-det(~ ~ 3 0 1 2 0 Adding 1 times row 1 to row 2, we have
3 1 2) (
~) = -2det(i ~).
Using the formula for the determinant of 2 x 2 matrices, we conclude that det(A) =2(18 - 1) = -34. Example. Consider the matrix 1 024 1 0 2 4 5 7 6 2 A= 4 6 1 9 2 1 350 125 245 362
1 0 2 5
1 0
Here note that rows 1 and 6 are almost identical. Adding -1 times row 1 to row 6, we have 1 0 2 4 1 0 2 4 5 7 6 2 4 6 1 9 2 1 det(A) = det 3 5 0 1 2 5 2 4 5 3 6 2 0 0 0 1 0 0 Adding -1 times row 5 to row 2, we have
126
Determinants
det(A) = det
4 6
102 4 1 0 0 0 4 0 0
1 9 2 1
0 0 0 0 0
4 6 1 9 2 1 det(A) = det 3 5 0 1 2 5
n matrix
= 11 ...
(a
a 1n ).
anI'" ann By the transpose At of A, we mean the matrix obtained from A by transposing rows and columns.
. . . . A =' al n ... ann Example. Consider the matrix
I
A= 4 5 6.
Then
1 2 3) (789
369
AI
=(1
~ ~J.
Recall that determinants of 2 x 2 matrices depend on determinants of 1 x 1 matrices, in turn, determinants of 3 x 3 matrices depend on determinants of 2 x 2 matrices, and so on. It follows that determinants of n x n matrices ultimately depend on determinants of 1 x 1 matrices. Note now that transposing a 1x 1 matrix does not aect its determinant (why?). The result below follows in view of Proposition. For every n x n matrix A, we have det(A t) = det(A). Example. We have
Determinants
127
H~l=det[~ i ~ i ~l=-34.
10113 2 102 0
Next, we shall study the determinant of a product. We shall sketch a proof of the following important result Proposition. For every n x n matrices A and B, we have det(AB) = det(A) det(B). Proposition. Suppose that the n x n matrix A is invertible. Then
1 det(A) Proof In view of Propositions 3G and 3C, we have det(A) det(A- 1) = det(In) = 1. The result follows immediately. Finally, the main reason for studying determinants, as outlined in the introduction, is summarized by the following result. Proposition. Suppose that A is an n x n matrix. Then A is invertible if and only if det(A) o. Proof Suppose that A is invertible. Then det(A) 0 follows immediately from Proposition. Suppose now that det(A) ::f:. O. Let us now reduce A by elementary row operations to reduced row echelon form B. Then there exist a finite sequence E 1, , Ek of elementary n x n matrices such that B = Ek , , EIA It foIrows from Proposition that det(B) = det(Ek), , det(E 1) det(A) Recall that all elementary matrices are invertible and so have non-zero determinants. It follows that det(B) ::f:. 0, so that B has no zero rows by Proposition. Since B is an n x n matrix in reduced row echelon form, it must be In. We therefore conclude that A is row equivalent to In. It now follows from Proposition that A is invertible. Combining Propositions, we have the following result. Proposition. In the notation of Proposition, the following statements are equivalent: (a) The matrix A is invertible.
det(A- 1)
(b) The system Ax = 0 of linear equations has only the trivial solution. (c) The matrices A and In are row equivalent. (d) The system Ax = b of linear equations is soluble for every n 1 matrix b. (e) The determinant det(A)
::f:.
o.
n variables has a non-trivial solution if and only if the determinant if the coefficient matrix
128
Determinants
is equal to zero. In this section, we shall use this to solve some problems in geometry. We illustrate our ideas by a few simple examples. Example. Suppose that we wish to determine the equation of the unique line on the xy-plane that passes through two distinct given points (xI' Yl) and (x2' Y2)' The equation of a line on the xy-plane is of the form ax + by + c = O. Since the two points lie on the line, we must have aX I + bYI + c = 0 and ax2 + bY2 + c = O. Hence xa + yb + c = 0, xla + Ylb + c = 0, x 2a + Y2b + c = O. Written in matrix notation, we have
x2 Y2 1 c 0 Clearly there is a non-trivial solution (a, b, c) to this system of linear equations, and so we must have
( ~ ~ ~J (~J =(~J.
det(~ ~ ~J = O. x2 Y2 1
lying on a straight line. The equation of a circle on the xy-plane is of the form a(x2 +Y2) + bx + cy + d = O. Since the three points lie on the circle, we must have 2 2) (2 2) a (al + YI + bX 1+ cYI + d = 0, a x2 + Y2 + bX2 + cY2 + d = 0 and Hence
(x2 + Y2)a + xb + yc + d = 0, 2 (x1 + yf}a + xlb + ylc + d = 0 (xi + yi)a + x 2b + Y2c + d, 0 (x; + y;)a + x3b + Y3c + d = 0
Determinants
129
Clearly there is a non-trivial solution (a, b, c, d) to this system of linear equations, and so we must have
det
2 x +i 2 xI +i I x2 + Y2 x3 + Y3
2 2 2 2
x xI x2 x3
Y YI Y2 Y3
= 0,
the equation of the circle required. Example. Suppose that we wish to Qetermine the equation of the unique plane in 3space that passes through three distinct given points (xl' YI' ZI)' (X2'Y2' z2) and (x 3' Y3' z3)' not all lying on a straight line. The equation of a plane in 3-space is of the form ax + by + cz + d = o. Since the three points lie on the plane, we must have ax I + bYI + cz I + d = 0, aX2 + bY2 + CZ2 + d= 0, and ax3 + bY3 + cZ3 + d= O. Hence xa + yb + zc + d = 0,
i
2
xi + yi x3
2
x 2 Y2 x3 Y3
+ Y3
c d
=o
Clearly there is a non-trivial solution (a, b, c, d) to this system of linear equations, and so we must have the equation of the plane required.
1 0' 1 Example. Suppose that we wish to determine the equation of the unique sphere in 3space that passes through four distinct given points (xl' YI' ZI)' (x 2' Y2' Z2)' (x 3' Y3' z3) and (x4' Y4' Z4)' not all lying on a plane. The equation of a sphere in 3-space is of the form
det(~ ~ ~ ~J =
z2 x3 Y2 Y3 z2 z3
a(x2 + Y2 + z2) + bx + cy + dz + e = o.
Since the four points lie on the sphere, we must have
a(xf + yf + zf)+ bXI +cYI +dzl +e = 0, a (xi + yi + zi) + bX2 + CY2 + dz 2 + e = 0, a(x; + Y; + z;) + bX3 +CY3 +dz3 +e = 0,
130
Determinants
Hence
(x 2 + y2 + z2)a+ xb+ yc+ zd +e = 0, (xf + yf +zf)a+xlb+ Ylc+zl d +e = 0, (x; + Y; + zi)a+ x2 b + Y2 C + z2d + e = 0, (xi + yi + zi)a+x3 b + Y3 C + z3 d +e = 0, (xi + yi + zi)a+x4 b + Y4 C + Z4 d +e = o.
Written in matrix notation, we have
x 2 + i+z2 x 2 + i +z2
I I I
x xI x3
Y YI Y3
Z 1 zi z2 z3 z4
1 1
a b
C
0 0
x2 + Y2 +z2 x 2 + i+z2 x4 + Y4 + z4
3 2 3 2
x2 Y2 x4 Y4
=0
0
3 2
d
e
Clearly there is a non-trivial solution (a, b, c, d, e) to this system of linear equations, and so we must have
X2 + Y2 +z2 x 2 + i +z2
I I I
x xI X2 X3
Y YI Y2 Y3
Z 1 zi Z2 z3 Z4
1 1
det xi + Y; +z;
=0,
xf + yf +zf
x~ + y~ +z~
Some Useful Formulas
x4 Y4
In this section, we shall discuss two very useful formulas which involve determinants only. The rst one enables us to nd the inverse of a matrix, while the second one enables us to solve a system of linear equations. The interested reader is referred to Section 3.8 for proofs. Recall rst of all that for any n x n matrix
all A = . ( ... a ln )
., .
ani
... ann
the number Cij = (-ly+j det(Aij) is called the cofactor of the entry aij' and the (n - 1) (n - 1) matrix
Determinants
al(j._I) a(i-I)I a(i_l)(j-I) a(i-I)(j+I)
131
a.ln
a(i+I){i+I)
an(j+I)
a(i-I)(i_l)
ani
an(j_I)
is obtained from A by deleting row i and column}, here denotes that the entry has been deleted. Definition. The n x n matrix
ad} (A)
= (C?I
C11I
...
,
Cf/lJ
CIIII is called the adjoint of the matrix A. Remark. Note that adj(A) is obtained from the matrix A rst by replacing each entry of
A-I =
1
det(A)
ad(A).
lj
A= 0 1 2. 203
Then
1 -1 0] [
-det(~ ~]
det[O I] -det[1 -1] detr1 -I] 2 0 2 0 ,0 1 On the other hand, adding 1 times column 1 to column 2 and then using cofactor expansion on row 1, we have
132
Determinants
A-I = -4
-3 -3 2] [
-3
2 2 -1
2.
Next, we turn our attention to systems of n linear equations in n unknowns, of the form anix i + ... anrfn = bn, represented in matrix notation in the form Ax = b, where
Cnn
C :
Jn
and b =
(b) :1
bn
. ,
bn
an(J+I)
. . . ann
Proposition. (Cramer's Rule) Suppose that the matrix A is invertible. Then the unique solution of the system Ax = b, where A, x and b are given by equation, is given by det(A, (b x - _.....:.......e,--,-,-,1det(A) , x n det(An(b = -...:......:.' -'--':..:... det(A) '
where the matrices AI(b), ... , AI(b). Example. Consider the system Ax = b, where A=
XI
= --'-d-et-(A-)--'-= -3, x2 =
det[~ 11 ~] 3 0 3
det[~ ~ ~] 2 3 3
det(A)
= -4,
Determinants
133
det 0 1 2 203 x = =3 3 det{A) , Let us check our calculations. Recall from Example that
(1 -1 1]
[-3 -3 2]
Further Discussion
In this section, we shall first discuss a definition of the determinant in terms of permutations. In order to do so, we need to make a digression and discuss first the rudiments of permutations on non-empty finite sets. Definition. LetXbe a non-empty finite set. A permutation $ onXis a function: X ~ X which is one-to-one and onto. If x E X, we den()te by x the image of x under the permutation. It is not dicult to see that if: $ X ~ X and: X ~ X are both permutations on X, then: X ~ X, dened by x<l>'I' = {x$)'I' for every x E X so that is followed by, is also a permutation on X. Remark. Note that we use the notation x instead of our usual notation (x) to denote the image of x under. Note also that we write to denote the composition. We shall do this only for permutations. The reasons will become a little clearer later in the discussion. Since the set X is non-empty and finite, we may assume, without loss of generality, that it is {I, 2, ... , n}, where n EN. We now let Sn denote the set of all permutations on the set {l, 2, ... , n}. In other words, Sn denotes the collection of all functions from (1, 2, ... , n) to {I, 2, ... , n} that are both one-to-one and onto. Proposition. For every n EN, the set Sn has n! elements. Proof There are n choices for 1$. For each such choice, there are (n - 1) choices left for 2$. And so on. To represent particular elements of Sn' there are various notations. For example, we can use the notation 1 2 ... (1<1> 2<1> ... n<1> to denote the permutation $. Example. In S4' 1 2 3 4) (2 4 1 3
n)
134
Determinants
denotes the permutation, where 1 = 2, 2 = 4, 3 = 1 and 4 = 3. On the other hand, the reader can easily check that 1 2 3 4)(1 2 3 4) (I 2 3 4) (2 4 1 3 3 2 4 1 - 2 1 3 4 ' A more convenient way is to use the cycle notation. The permutations
U~
j) and (~
l-:i)
can be represented respectively by the cycles (I 243) and (I 34). Here the cycle (1 243) gives the information 1<1> = 2, 2<1> = 4, 4<1> = 3 and 3<1> = 1. Note also that in the latter case, since the image of2 is 2, it is not necessary to include this in the cycle. Furthermore, the information 1 2 3 4)(1 2 3 4) (I 2 3 4) (2 4 1 3 3 2 4 1 - 2 1 3 4 ' can be represented in cycle notation by (1 243)(1 34) = (1 2). We also say that the cycles (1 2 4 3), (1 3 4) and (1 2) have lengths 4, 3 and 2 respectively. Example. In 8 6, the permutation I 2 3 4 5 6) (2 4 1 3 6 5 can be represented in cycle notation as (1 243)(5 6). Example. In 84 or 86 , we have (1 2 4 3) = (1 2)(1 4)(1 3). The last example motivates the following important idea. Definition. Suppose that n EN. A pet:mutation in 8n that interchanges two numbers among the elements of {I, 2, ... , n} and leaves all the others unchanged is called a transposition. Remark. It is obvious that a transposition can be represented by a 2-cycle, and is its own inverse. Two cycles (XI' x 2' ... , x k) and (YI' Y2' ... , YI) in 8n are said to be disjoint if the elements XI' ... , xk' YI' ... , YI are all different. The interested reader may try to prove the following result. Proposition. Suppose that n EN. (a) Every permutation in 8n can be written as a product of disjoint cycles. (b)
X 2' ... , Xk
For every subset (XI' x 2, ... , xk) of the set {I, 2, ... , n}, where the elements are distinct, the cycle (XI' X2' ... , Xk) satises
x 3), ... , (XI' xk), in other words, every cycle can be written as a product of transpositions.
(XI X2 , .. , Xk) X2 )(X I ,
Xl'
= (XI'
(c)
Consequently, every permutation in 8n can be written as a product of transpositions. Example. In 8 9, the permutation
I 2 3 4 5 6 7 8 9) (3 2 5 1 7 8 4 9 6
can be written in cycle notation as (1 3 5 74)(6 8 9). By Theorem 3P(b), we have
Determinants
13S
(1 3 5 7 4) = (1 3)(1 5)(1 7)(1 4) and (6 8 9) = (6 8)(6 9). Hence the permutation can be represented by (1 3)(1 5)(1 7)(1 4)(68)(69). Definition. Suppose that n EN. Then a permutation in Sn is said to be even if it is representable as the product of an even number oftransp0sitions and odd ifit is representable as the product of an odd number of transpositions. Furthermore, we write
E 'f' -
(,!..) _
Remark. It can be shown that no permutation can be simultaneously odd and even. We are now in a position to dene the determinant of a matrix. Suppose that
A=[aI~
anI
is an n x n matrix. Definition. By an elementary product from the matrix A, we mean the product of n entries of A, no two of which are from the same row or same column. It follows that any such elementary product must be of the form a I(1<1ai 2<1 ... an(nj), where <I> is a permutation in Sn' Definition. By the determinant of an n x n matrix A of the form (11), we mean the sum
<Pe:S"
where the summation is over all the n! permutations in Sn' It is be shown that the determinant de ned in this way is the same as that dened earlier by row or column expansions. Indeed, one can use (12) to establish Proposition. The very interested reader may wish to make an attempt. Here we conne our study to the special cases when n = 2 and n = 3. In the two examples below, we use e to denote the identity permutation. Example. Suppose that n = 2. We have the following: elementary product permutation sign contribution (1 2) -1 Hence det (A) = all a 22 - a I2 a 2I as shown before. Example. Suppose that n = 3. We have the following: elementary product permutation sign e +1 alla22a33 (123) +1 a12a21a33 (1 32) +1 a13a21a32 -1 (1 3) a13a22a3I
a ll a22 a 12a2I
+1
136
Determinants
(23) -1 - alla23a32 alla23a32 a12a21a33 (1 2) -1 - a12a21a33 Hence det(A) = alla22a33 + a12a23a31 + a13a21a32 - a13a22a31- alla23a32 - a12a2Ia33' We have the picture lielow:
Next, we discuss briey how one may prove Proposition concerning the determinant of the, product of two matrices. The idea is to use elementary matrices. Corresponding to Proposition, we can easily establish the following result. Proposition. Suppose that E is an elementary matrix. (a) If E arises from interchanging two rows of In, then det(E) = -I.
(b) (c)
If E arises from adding one row of In to another row, then det(E) = 1. If E arises from mUltiplying one row of In by a non-zero constant c, det(E) = c.
then
Combining Propositions 3D and 3Q, we can establish the following intermediate result. Proposition. Suppose that E is an n x n elementary matrix. Thenfor any n x n matrix B, we have det(EB) = det(E) det(B). Proof of Proposition. Let us reduce A by elementary row operations to reduced row echelon form A'. Then there exist a finite sequence G 1, ... , Gk,ofelementary matrices such that A' = Gk , ... , G1A. Since elementary matrices are invertible with elementary inverse matrices, it follows that there exist a nite sequence E 1, , Ek of elementary matrices such that A = EI ... EJI1' Suppose first of all that det(A) = O. Then it follows from (13) that the matrix Ao must have a zero row. Hence A' B must have a zero row, and so det(A' B) = O. ButAB = E 1, ... , E/A 'B), so it follows from Proposition that det(AB) = O. Suppose next that det(A) ::t: O. Then A' = In' and so it follows from Equation that AB = E 1, , E~.
Determinants
Proof It sucess to show that Aad}(A) = det(A)In, as this clearly implies
1 ad}(A)] = In' det(A) giving the result. To show, note that
137
A[
Aad}(A) =: :: anI'" ann Ctn Suppose that the right hand side of is equal to
all ... [
al n
1[CJl
(B)=:
... bnn Then for every i,} = 1, ... , n, we have bij = ajJCjI + ... + ainCjn .
It follows that when i = }, we have bii = ailCil + ... + ainC;n = det(A): On the other hand, if i :;; }, then equation is equal to the determinant of the matrix obtained from A by replacing row} by row i. This matrix has therefore two identical rows, and so the determinant is 0 (why?). Proof Since A is invertible, we get
: .
A-I =
x=A
-I
adj(A)b.
[::J- det~A) ~~
[
J
det(A) . To complete the proof, it remains to show that bIClj + ... + bnCnj = det(Aib)): Note, on using cofactor expansion by column}, that
x = ---=-----"-
138
Determinants
Hj
a(i-I)I
a(i-l)(j-I)
a(i-I)(j-I)
a(i-I)n
det(A .(b
}
= L...,.bi ( -1)
i=l
det
a(i+}) I
a(i+I)(J-I)
an(j-I)
Chapter 5
MAIN DEFINITIONS
Eigenvalues, Eigenvectors, Spectrum
A scalar A. is called an eigenvalue of an operator A : V ~ V if there exists a non-zero vector v E V such that
140
Av = Av. The vector v is called the eigenvector ofa (corresponding to the eigenvalue A). Ifwe know that ').. is an eigenvalue, the eigenvectors are easy to find: one just has to solve the equation Ax = Ax, or, equivalently (A - Ai)x = o. So, finding all eigenvectors, corresponding to an eigenvalue is simply finding the nUllspace of A - AI. The nullspace Ker(A - AI), i.e., the set of all eigenvectors and 0 vector, is called the eigenspace. The set of all eigenvalues of an operator A is called spectrum of A, and is usually denoted cr(A). Finding Eigenvalues: Characteristic Polynomials
A scalar A is an eigenvalue if and only if the nullspace Ker(A - AI) is non-trivial (so the equation (A - Ai) x = 0 has a non-trivial solution). Let a act on lR n (i.e., a: lR n ~ lRn). Since the matrix of A is square, A -Ihas a nontrivial nullspace if and only if it is not invertible. We know that a square matrix is not invertible if and only if its determinant is O. Therefore
II E a(A),i.e.A is an eigenvalue of A
01
If A is an n x n matrix, the determinant det(A - AI) is a polynomial of degree n of the variable A. This polynomial is called the characteristic polynomial of A. So, to find all eigenvalues of A one just needs to compute the characteristic polynomial and find all its roots. This method of finding the spectrum of an operator is not very practical in higher dimensions. Finding roots of a polynomial of high degree can be a very dicult problem, and it is impossible to solve the equation of degree higher than 4 in radicals. So, in higher dimensions different numerical methods of finding eigenvalues and eigenvectors are used. Characteristic Polynomial of an Operator So we know how to find the spectrum of a matrix. But how do we find eigenvalues of an operator acting in an abstract vector space? The recipe is simple: Take an arbitrary basis, and compute eigenvalues of the matrix of the operator in this basis. But how do we know that the result does not depend on a choice of the basis? There can be several possible explanations. One is based on the notion of similar matrices. Let us recall that square matrices A and B are called similar if there exist an invertible matrix S such that
A =SBS-l.
Note, that determinants of similar matrices coincide. Indeed det A = det(SBS-l) = det S det B det S-l = det B because det S-l = 1/ det S. Note that if A = SBS-I then
141
ASIS-I
US-I) so the matrices A -lJ and B -lJ are similar. Therefore det(A -'JJ) = det(B - 'JJ), i.e., If T: V -7
A - U
= SBS- I -
= S(BS-I -
= S(B _IJ.)S-I,
Icharacteristic polynomials of similar matrices coincide. I V is a linear transformation, and A and B are two bases in V, then
[T]AA = [I]AB[TbB[I]BA and since [l]BA = ([l]AB)-1 the matrices [11 AA and [11 BB are similar. In other words, matrices of a linear transformation in different bases are similar. Therefore, we can define the characteristic polynomial of an operator as the characteristic polynomial of its matrix in some basis. As we have discussed above, the result does not depend on the choice of the basis, so characteristic polynomial of an operator is well defined.
Multiplicities of Eigenvalues
Let us remind the reader, that ifp is a polynomial, and Ais its root (i.e., peA) = 0) then Z - A divides p(z), i.e., p can be represented as p(z) = (z - A)q(Z), where q is some polynomial. If q(A) = 0, then q also can be divided by z -, so (z - )2 divides p and so on. The largest. positive integer k such that (z - A i divides p(z) is called the multiplicity. of the root A. If A i~ an eigenvalue of an operator (matrix) A, then it is a root of the characteristic polynomial p(z) = det(A - zl). The mUltiplicity of this root is called the (algebraic) multiplicity of the eigenvalue A. Any polynomial p(z) = L~=o akz k of degree n has exactly n complex roots, counting multiplicity. The words counting multiplicities mean that if a root has multiplicity d we have to count it d times. In other words, p can be represented as p(z) = an(z - AI)(Z - A2) ... (z - An). where Ai' ~, ... , An are its complex roots, counting multiplicities. There is another notion of multiplicity of an eigenvalue: the dimension of the eigenspace Ker(A-1) is called geometric multiplicity of the eigenvalue A. Geometric multiplicity is not as widely used as algebraic mUltiplicity. So, when people say simply "multiplicity" they usually mean algebraic multiplicity. Let us mention, that algebraic and geometric multiplicities of an eigenvalue can differ. Proposition. Geometric multiplicity of an eigenvalue cannot exceed its algebraic multiplicity. Trace and Determinant
Theorem. Let A be n x n matrix, and let AI' A 2, ... , An be its eigenvalues (counting multiplicities). Then 1. traceA = AI + A2 + ... + An.
142
2.
DIAGONALIZATION
Suppose an operator (matrix) a has a basis b = vI' v2' ... vn of eigenvectors, and let A.), A.2, ... , n be the corresponding eigenvalues. Then the matrix of A in this basis is the diagonal matrix with 1, 2, ... , n on the diagonal
Therefore, it is easy to find an Nth power of the operator A. Namely, its matrix in the basis B is
o
N "N "N _ [A N ]BB = dlag = {" A) ,A2 , ... ,An -
o
Moreover, functions of the operator are also very easy to compute: for example the operator (matrix) exponent et4 is defined as et4 and its matrix in the basis B is
= I +tA+~+
t2A2
t3 A3
3!
=Lkl , k=O
00
Ak
143
o o
To find the matrices in the standard basis S, we need to recall that the change of coordinate matrix [l]SB is the matrix with columns vI' v2, ... , vn . Let us call this matrix S, then
A = [A]ss = s
AI [
2. . .
1 -I-I S = SDS ,
An
Af
AN
= SD
S-I
=S
o
A~
and similarly for etA. Another way of thinking about powers (or other functions) of diagonalizable operators is to see that if operator A can be represented as A = SDS- I , then
AN
NTimes
and it is easy to compute the Nth power of a diagonal matrix. The following theorem is almost trivial. Theorem. A matrix a admits a representation A = SDS-I, where D is a diagonal matrix if and only if there exists a basis of eigenvectors of A. Proof We already discussed above that if there is a basis of eigenvectors, then the matrix admits the representation A = SDS-l, where S = [l]SB is the change of coordinate matrix from coordinates in the basis B to the standard coordinates. On the other hand if the matrix admits the representation a = SDS-I with a diagonal matrix D, then columns of S are eigenvectors of A (column number k corresponds to the kth diagonal entry of D). Since S is invertible, its columns form a basis. Theorem. Let A. I , A.z, ... , A.r be distinct eigenvalues of A, and let vI' v 2, ... , vr be the corresponding eigenvectors. Then vectors vI' v2, ... , vr are linearly independent. Proof We will use induction on r. The case r = 1 is trivial, because by the definition an eigenvector is non-zero, and a system consisting of one non-zero vector is linearly independent.
144
Suppose that the statement of the theorem is true for r - 1. Suppose there exists a non-trivial linear combination
r
cIv I + c2v2 + ...+ crvr = L::CkVk = 0 k=I Applying A - Ar I and using the fact that (A - A/)Vr = 0 we get r-I L::Ck (Ak - Ar )Vk = O. k=I By the induction hypothesis vectors vI' v2, ... , vr- I are linearly independent, so ciAk-r) = 0 for k = 1, 2, ... , r - 1. Since Ak"* Ar we can conclude that ck = 0 for k < r. Then it follows from that cr = 0, i.e., we have the trivial linear combination. Corollary. If an operator A " V ~ V has exactly n = dim V distinct then it is diagonalizable. Proof For each eigenvalue k let vk be a corresponding eigenvector (just pick one eigenvector for each eigenvalue). By Theorem the system vI' v 2, ... , vn is linearly
independent, and since it consists of exactly n
= dim
V vectors it is a basis.
v = v J + v2 + ...+ vp
= L::Vk,Vk E Vk
k=1
We also say, that a system of subspaces VI' V2, ... , Vp is linearly independent if the equation VI + v2 + ...+ vp = 0, vk E Vk has only trivial solution (vk = 0 Vk = 1,2, ... , p). Another way to phrase that is to say that a system of subspaces VI' V2, ... , Vp is linearly independent if and only if any system of non-zero vectors vk, where vk E Vk, is linearly independent. , We say that the system of subspaces VI' V2, ... , Vp is generating (or complete, or spanning) if any vector v E V admits representation. Remark. From the above definition one can immediately see that Theorem states in fact that the system of eigenspaces Ek of an operator A
Ek := Ker(A - AkI), Ak
is linearly independent.
cr(A), '
Remark. It is easy to see that similarly to the bases of vectors, a system of subspaces VI' V2, ... , Vp is a basis if and only if it is generating and linearly independent.
145
There is a simple example of a basis of subspaces. Let V be a vector space with a basis vI' v2, ... , vn ' Split the set of indices I, 2, ... , n into p subsets AI' A2, ... , Ap ' and define subspaces Vk := span {Vj :} E A k }. Clearly the subspaces Vk form a basis of V. The following theorem shows that in the finite-dimensional case it is essentially the only possible example of a basis of subspaces. Theorem. Let VI' V2' ... , Vp be a basis of subspaces, and let us have in each subspace
Vk a basis (of vectors) B;. Then the union [kBk ofthese bases is a basis in V. To prove the
theorem we need the following lemma.
Lemma. Let VI' V2' ... , Vp be a linearly independent family of subspaces, and let us have in each subspace Vk a linearly independent system Bk of vectors 3 Then the union B "= U~k is a linearly independent system. Proof The proof of the lemma is almost trivial, if one thinks a bit about it. The main
diculty in writing the proof is a choice of a appropriate notation. Instead of using two indices (one for the number k and the other for the number of a vector in Bk, let us use "flat" notation. Namely, let n be the number of vectors in B := [U~k' Let us order the set B, for example as follows: first list all vectors from B I , then all vectors in B2, etc, listing all vectors from Bp last. This way, we index all vectors in B by integers 1,2, ... , n, and the set of indices {I, 2, ... , n} splits into the sets 1,2, ... , P such that the set Bk consists of vectors bj :} E A k . Suppose we have a non-trivial linear combination
n
Denote
+ v 2 + ...+ vp = O.
Since vk E vk and the system of subspaces Vk is linearly independent, vk = 0 Vk. Than means that for every k
L:
JEAk
Cij
=0,
and since the system of vectors bj :} E A k (i.e., the system Bk) are linearly independent, we have cj = 0 for all} E A k' Since it is true for all A k' we can conclude that cj = 0 for all }. Proof To prove the theorem we will use the same notation as in the proof of Lemma, i.e., the system Bk consists of vectors bi'} E A k'
146
Lemma asserts that the system of vectors b"j = 12, "" n is linearly independent, so it only remains to show that the system is compl~te. Since the system of subspaces VI' V2, "., Vp is a basis, any vector v E V can be represented as
p
vk
2:: cjbj ,
jEA k
and therefore v = 2::~={Jbj' Criterion of Diagonalizability. First of all let us mention a simple necessary condition. Since for a diagonal matrix D = diag{A 1, A2, "., An} det(D - 'Ai) = (AI - A)(A2 - A) ".(An - A), we see that if an operator A is diagonalizable, its characteristic polynomial splits into the product of monomials. Note, that any polynomial can be decomposed into the product of monomials, if we allow complex coefficients (i.e., complex eigenvalues). In what follows we always assume that the characteristic polynomial splits into the product of monomials, either by working in a complex vector space, or simply assuming that a has exactly n = dim Veigenvalues(counting multiplicity). Theorem. An operator A : V ~ V is diagonalizable ifand only iffor each eigenvalue Il the dimension of the eigenspace Ker(A - Ai) (i.e., the geometric multiplicity) coincides with the algebraic multiplicity of Il. Proof First of all let us note, that for a diagonal matrix, the algebraic and geometric multiplicities of eigenvalues coincide, and therefore the same holds for the diagonalizable operators. Let us now prove the other implication. Let AI' A2, "" Ap be eigenvalues of A, and let Ek := Ker(A - At/) be the corresponding eigenspaces. The subspaces Ek, k = 1,2, '''' pare linearly independent. Let Bk be a basis in Ek. By Lemma the system B = [U~k is a linearly independent system of vectors. We know that each Bk consists of dim Ei= mUltiplicity ofAk) vectors. So the number of vectors in b equal to the sum of multiplicities of eigenvectors k, which is exactly n = dim V. So, we have a linearly independent system of dim V eigenvectors, which means it is a basis. Some Examples Real eigenvalues. Consider the matrix
A=
(~
r).
147
A-51 = 8 1- 5 = 8 -4 A basis in its nullspace consists of one vector (1, 2)T, so this is the corresponding eigenvector. Similarly, for t.. = -3
(1-5 2) (-4 2)
~)
1
A-U=A+31=(:
and the eigenspace Ker(A + 31) is spanned by the vector (1, _2)T . The matrix A can be diagonalized as
A=
(~
i)
(~ l2)(~ ~3)(~
-2
)-1
A=(l2 i)
Its characteristic polynomial is 1
1=2>- 1~>-1=(1->-)2+22
t.. = 1 2i. For
= (-=-~ _~i)
This matrix has rank 1, so the eigenspace Ker(A - AI) is spanned by one vector, for example by (1, OT. Since the matrix A is real, we do not need to compute an eigenvector for t.. = 1-2i: we can get it for free by taking the complex conjugate of the above eigenvector. So, for t..= 1-2i a corresponding eigenvector is (1 ,-if , and so the matrix A can be diagonalized as
A=(b O
Its characteristic polynomial is
110>- 1~>-I=(l_>-)2,
148
so A has an eigenvalue 1 of mUltiplicity 2. But, it is easy to see that dimKer(A - 1) = 1 (1 pivot, so 2 - 1 = 1 free variable). Therefore, the geometric multiplicity of the eigenvalue 1 is different from its algebraic multiplicity, so a is not diagonalizable. There is also an explanation which does not use Theorem. Namely, we got that the eigenspace Ker(A-I1) is one dimensional (spanned by the vector (1, Of). If A were diagonalizable, it would have a diagonal form
(6
where
(;)=G
Note that
~)(;).
It follows that every U E JR2 can be written uniquely in the form u=c1v 1 +c2v2' where c I , c2 E lR, so that Au =A(clv l + c2v2) = clAvI + c0 v2 = 2c l v I + 6c2v2 . Note that in this case, the function f: JR2 ~ JR2 can be described easily in terms of
.
the two special vectors vI and v 2 and the two special numbers 2 and 6. Let us now examine how these special vectors and numbers arise. We hope to find numbers A E JR and nonzero vectors v
E
G~)V=AV.
Since
149
(G ~)-(~ ~)}= o.
In other words, we must have
3- A. 3) v=O. ( 1 5-A.
In order to have non-zero v E]R2 , we must therefore ensure that det (
3- A.
1
5 _ A.
J= O.
= 2 and A.2 = 6. Substituting A = 2 into (1),
G!} =
0, willi root
v, = (~J
A=:
a~1
...
a~nJ
:
ani
ann
is an n x n matrix with entries in lR. Suppose further that there exist a number E R and a non-zero vector v E]Rn such that Av = AV. Then we say that A is an eigenvalue of the matrix A, and that v is an eigenvector corresponding to the eigenvalue A. Suppose that A is an eigenvalue of the n x n matrix A, and that v is an eigenvector corresponding to the eigenvalue A. Then Av = AV = 'Alv, where I is the n x n identity matrix, so that (A - AJ)v = O. Since vERn is non-zero, it follows that we must have det (A - A1) = O. In other words, we must have
al1
-A
a12
det
a21
=0.
ani
that is a polynomial equation. Solving this equation gives the eigenvalues of the matrix A. On the other hand, for any eigenvalue A of the matrix A, the set n {v elR : (A-A1)v=O}
ISO
is the nullspace of the matrix A - IJ., a subspace of ~ n Definition. The polynomial is called the characteristic polynomial of the matrix A. For any root Aof equation, the space is called the eigenspace corresponding to the eigenvalue
A.
Example. The matrix
has characteristic polynomial (3 - 1..)(5 - A) - 3 = 0, in other words, 1..2 - 81.. + 12 = O. Hence the eigenvalues are Al = 2 and ~ = 6, with corresponding eigenvectors
G~)
{VE~2:G :)V=+H~JCE~}.
The eigenspace corresponding to the eigenvalue 6 is
-1 6 -12J A= 0 -1,3 30 . ( o -9 20 To find the eigenvalues of A, we need to nd the roots of -12 J 30 = 0; ( o -9 20-1.. in other words, (A + 1)(1.. - 2)(1.. - 5) = O. The eigenvalues are therefore Al = -1, 1..2 = 2
det 6 -1-1.. 0 -13 - A and
1..3 = 5.
An eigenvector corresponding to the eigenvalue -1 is a solution of the system
(A + f)v
=(~
= (IJ ~ .
151
-15 -9 -12) 30 v = 0, with root v2 = 2. 18 1
-3 6 (o
o
(0)
(A-5J)v=
-9 15
-3
Note that the three eigenspaces are all lines through the origin. Note also that the eigenvectors vI' v2 and v3 are linearly independent, and so form a basis for ]R3. Example. Consider the matrix
A=
-30 20 12 To find the eigenvalues of A, we need to nd the roots of 17-A -to 45 -28-A -15 = 0; ( -30 20 12-A in other words, (A + 3)(A - 2)2 = O. The eigenvalues are therefore Al = -3 and An eigenvector corresponding to the eigenvalue -3 is a solution of the system
det
-5)
Az = 2.
(A+31)v=
( 1) 3. -2
(A - 2I)v = 45
15
-to
-5)
(1)
(2)
Note that the eigenspace corresponding to the eigenvalue -3 is a line through the origin, while the eigenspace corresponding to the eigenvalue 2 is a plane through the origin. Note also that the eigenvectors VI' v2 and v3 are linearly independent, and so form a basis for
]R3.
152
A= 1
2 -1 OJ 0. (0 0 3
det
in other words, 0. The eigenvalues are therefore Al = 3 and A2 = 1. An eigenvector corresponding to the eigenvalue 3 is a solution of the system
0;
(A - 3I)v =
(A- J)v=
(i ~: ~}=
= (i}
Note that the eigenspace corresponding to the eigenvalue 3 is a line through the origin. On the other hand, the matrix
(i ~: ~J
has rank 2, and so the eigenspace corresponding to the eigenvalue 1 is of dimension 1 and so is also a line through the origin. We can therefore only nd two linearly independent eigenvectors, so that ]R3 does not have a basis consisting of linearly independent eigenvectors of the matrix A. Example. Consider the matrix
A=
(~ =~ ~J.
1 -3 4
153
3-A
det
-3
-1- A
2
2
1 -3 4-A in other words, (A - 2)3 = O. The eigenvalue is therefore A = 2. An eigenvector corresponding to the eigenvalue 2 is a solution of the system
J= 0;
vI = ( ~ J and v 2= (~J.
-1 0
-3 2
Note now that the matrix
=~ ~Jv =
0, with roots
has rank 1, and so the eigenspace corresponding to the eigenvalue 2 is of dimension 2 and so is a plane through the origin. We can therefore only nd two linearly independent eigenvectors, so that IR does not have a basis consisting of linearly independent eigenvectors of the matrix A. Example. Suppose that A is an eigenvalue of a matrix A, with corresponding eigenvector v. Then A 2v = A(Av) = A(AV) = A(Av) = A(AV) = A2V. Hence A2 is an eigenvalue of the matrixA2, with corresponding eigenvector v. In fact, it can be proved by induction that for every natural number kEN, Ak is an eigenvalue of the matrix Ak, with corresponding eigenvector v. Example. Consider the matrix
3
(i
=~ ~J
( ~ ~ :J.
003
To find the eigenvalues of A, we need to find the roots of det(l
~A 2 ~ A
o
0
J= 0;
3-A
in other words, (A - 1)(A - 2)(A - 3) = O. It follows that the eigenvalues of the matrix A are given by the entries on the diagonal. In fact, this is true for all triangular matrices.
154
A~G
!}
ne, every u E R2 can be written
We have already shown that the matrix A has eigenvalues Al = 2 and 11.2 = 6, with corresponding eigenvectors
respectively. Since the eigenvectors form a basis for uniquely in the form u = civ i + c2v2' where cl' c2 E R, and Write
= P Dc respectively, so that APc = P Dc. Note that c E R2 is arbitrary. This implies that (AP - PD)c = 0 for every c E R2. Hence we must have AP = PD. Since P is invertible, we conclude that
P-IAP=D. Note here that P = (vI v2) and D = ( 0
AI
Note also the crucial point that the eigenvectors of A form a basis for 1R 2 We now consider the problem in general.
Proposition. Suppose that A is an nn matrix, with entries in R. Suppose further that
155
vI' ... , vn ERn, and that vI' ... , vn are linearly independent. Then
p-IAP=D,
where
Proof Since VI' ... , Vn are linearly independent, they form a basis for ]Rn, so that
every u E]Rn can be written uniquely in the form
u=
and Writing
civ i
... ,
cn E]Rn ,
respectively, so that
APc = PDc.
Note that C E]Rn is arbitrary. This implies that (AP - PD)c = 0 for every cERn. Hence we must have AP = PD. Since the columns of P are linearly independent, it follows that P is invertible. Hence P-IAP = D as required. Example. Consider the matrix
A
156
17 -10 -5)
3 -2
1 1 2)
0 3 and D = 3 0
(-3 0
0 0 2 0
Definition. Suppose that A is an n x n matrix, with entries in lR. We say that A is diagonalizable ifthere exists an invertible matrix P, with entries in lR, such that P-IAP is a diagonal matrix, with entries in lR. It follows from Proposition that an n x n matrix A with entries in lR is diagonalizable if its eigenvectors form a basis for lR n In the opposite direction, we establish the following result.
Proposition. Suppose that A is an nn matrix, with entries in lR. Suppose further that
entries in lR, such that D = P-IAP is a diagonal matrix, with entries in lR. Denote by vI' ... , vn the columns of P; in other words, write P=(vl .. vn) Also write
Clearly we have AP
= (AIV I ... Alnvn) Equating columns, we obtain AVI = Alvl' ... , AVn = AnVn' It follows that A has eigenvalues AI' ... , An E lR, with corresponding eigenvectors vI'
... , Vn E lRn. Since P is invertible and vI' ... , vn are the columns of P, it follows that the eigenvectors vI' ... , vn are l!nearly independent.
157
In view of Propositions, the question of diagonalizing a matrix A with entries in IR is reduced to one of linear-independence of its eigenvectors.
Proposition. Suppose that A is an n x n matrix, with entries in IR. Suppose further that A has distinct eigenvalues AI' ... , An E IR, with corresponding eigenvectors vI' ... , Vn
E IR n . Then vI' ... , vn are linearly independent. Proof Suppose that vI' ... , vn are linearly dependent. Then there exist c I' ... , cn E IR, not all zero, such that clvl++cnvn=O. Then A(CIV I + ... + cnvn) = clAvI + ... + c~vn = Atclv I + ... + Ancnvn = O. Since vI' ... , vn are all eigenvectors and hence non-zero, it follows that at least two numbers among c I ' ... , cn are non-zero, so that c I ' , cn_1 are not all zero. Multiplying by An and subtracting, we obtain (AI - n)c 1VI + ... + (An-I - An)Cn- IVn-I = O. Note that since AI' ... , An are distinct, the numbers Al - An' ... , An-I - An are all nonzero. It follows that VI' ... , Vn- l are linearly dependent. To summarize, we can eliminate one eigenvector and the remaining ones are still linearly dependent. Repeating this argument a finite number of times, we arrive at a linearly dependent set of one eigenvector, clearly an absurdity. We now summarize our discussion in this section.
(l)
Diagonalization Process. Suppose that A is an n x n matrix with entries in lR. Determine whether the n roots of the characteristic polynomial det(A - IJ) are
real. If not, then A is not diagonalizable. If so, then nd the eigenvectors corresponding to these eigenvalues. Determine whether we can find n linearly independent eigenvectors. If not, then A is not diagonalizable. If so, then write
(2)
(3)
. . J,
where AI' ... , An E IR are the eigenvalues ofA and where vI' ... , vn E lR n are respectively their corresponding eigenvectors. Then P-IAP = D.
Some Remarks
In all the examples we have discussed, we have chosen matrices A such that the characteristic polynomial det(A -IJ) has only real roots. However, there are matrices A where the characteristic polynomial has non-real roots. Ifwe permit AI' ... , An to take values
158
in C and permit "eigenvectors" to have entries in <C , then we may be able to "diagonalize" the matrix A, using matrices P and D with entries in <c. The details are similar. Example. Consider the matrix
A
(1 -5).
I-A -5 = 0; 1 -I-A in other words, A2 + 4 = O. Clearly there are no real roots, so the matrix A has no
det= ( eigenvalues in matrix
1R. Try to show, however, that the matrix A can be "diagonalized" to the
D= (
. -2i We also state without proof the following useful result which will guarantee many examples where the characteristic polynomial has only real roots. Proposition. Suppose that A is an n x n matrix, with entries in 1R. Suppose further thatA is symmetric. Then the characteristic polynomial det(A -IJ) has only real roots. We conclude this section by discussing an application of diagonalization. We illustrate this by an example. Example. Consider the matrix
2i
0)
= D, where
P=
o OJ 2 O. o 2
= (PDP-I) ...(PDP- 1) = PD 98 p- 1 = P
98
159
An Application to Genetics
In this section, we discuss very briey the problem of autosomal inheritance. Here we consider a set oftwo genes designated by G and g. Each member of the population inherits one from each parent, resulting in possible genotypes GG, Gg and gg. Furthermore, the gene G dominates the gene g, so that in the case of human eye colours, for example, people with genotype GG or Gg have brown eyes while people with genotype gg have blue eyes. It is also believed that each member of the population has equal probability of inheriting one or the other gene from each parent. The table below gives these peobabilities in detail. Here the genotypes of the parents are listed on top, and the genotypes of the ospring are listed on the left.
GG-GG GG Gg gg
1
GG-Gg
GG-gg
Gg-Gg
Gg-gg
gg-gg
0 0
2 1 2
0
1
4 1
0 0
1 2 Example. Suppose that a plant breeder has a large population consisting of all three genotypes. At regular intervals, each plant he owns is fertilized with a plant known to have genotype GG, and is then disposed of and replaced by one of its osprings. We would like to study the distribution of the three genotypes after n rounds of fertilization and replacements, where n is an arbitrary positive integer. Suppose that GG(n), Gg(n) and gg(n) denote the proportion of each genotype after n rounds offertilization and replacements, and that GG(O), Gg(O) and gg(O) denote the initial proportions. Then clearly we have GG(n) + Gg(n) + gg(n) = 1 for every n = 0, 1,2, ... On the other hand, the left hand half of the table above shows that for every n = 1, 2, 3, ... , we have 1 GG(n) = GG(n - 1) + Gg(n - 1),
2 1
2 1
"2
Gg(n)
and
gg(n) = 0,
160
It follows that
( ~~~;J = (~~~;J
An
gg(n) gg(O)
A=(~ ~;~ ~J
p=(~ ~2 ~IJ
We therefore write
p=(~
-I
-2
0 0 o 0 112
OJ ,
with P
(I
0
-1
-2
:J
1 0 0 0 1I2n 0 -1 -2
An =PDnp-
1-1I2n
=
It follows that
0 0
I/2n
l/2n-1
0
(GG(n))
gg(n)
n I-1I2 n- 1 I/2n-1
(GG(O))
Gg(O) gg(O)
161
n n-\
+ gg(O)/2
n-\
o
I-Gg(O)/2 -gg(O)/2
n n-\
Gg(O)/2
+ gg(O)/2
n-\
o
This means that nearly the whole crop will have genotype GG.
Chapter 6
en
Inner product and norm in IRn. In dimensions 2 and 3, we defined the length of a vector x (i.e., the distance from its endpoint to the origin) by the Pythagorean rule, for example in lR the length of the vector is defined as
3
II x 11= -V Xl + x2 + x3 .
It is natural to generalize this formula for all n, to define the norm ofthe vector x
ERn
as
IIxll=-vxI +X2 + ... +xnThe word norm is used as a fancy replacement to the word length. The dot product in IR3 was defined as x . Y = x IY2 + xV'2 + xJY3' where x = (xl' X2' x3)T andy = (YI' Y2' Y3l. Similarly, in IR one can define the inner product (x, y) oftwo vectors x = (xl' X2, ... , xnl, Y = (Yl' Y2' ... , ynl by (X, y):= XIYI + XV'2 + ... + X,ln =yT X, so II X 11= ~(x,x). Note, that yT X = xT y, and we use the notation yT X only to be consistent. Inner product and norm in en. Let us now define norm and inner product for en. The complex space en is the most natural space from the point of view of spectral theory: even if one starts from a matrix with real coefficients (or operator on a real vectors space), the eigenvalues can be complex, and one needs to work in a complex space. For a complex number z = X + iy, we have 1z 12 = x2 + y2 = z Z . If Z
E
en is given by
163
z=
~~ - [:~ ! &~
2 n 2 2 n 2
k=l" k=l
and that will be our definition of the inner product in en . To simplify the notation, let us introduce a new notion. For a matrix A let us define its Hermitian adjoint, or simply adjoint A* by A* = A-~ meaning that we take the transpose of the matrix, and then take the complex conjugate of each entry. Note, that for a real matrix A, A* = AT . Using the notion of A *, one can write the inner product in en as
(z, w) = w*z.
Remark. It is easy to see that one can define a different inner product in en such that II z Ib = (z, z), namely the inner product given by
(z,
w)1 = Z I WI
+ Z 2 w 2 + ... + Z n Wn = z*w.
We did not specify what properties we want the inner product to satisfy, but z*w and
w*z are the only reasonable choices giving II z 112 = (z, z).
Note, that the above two choices of the inner product are essentially equivalent: the only difference between them is notatioool, because (z, w)l = (w, z). While the second choice of the inner product looks more natural, the first one, (z, w) = w*z is more widely used, so we will use it as well.
Inner Product Spaces. The inner product we defined for ]Rn and en satisfies the following properties:
1.
2. 3. 4.
(Conjugate) symmetry: (x, y) is just symmetry, (x, y) Linearity: (ax + ay, z) Non-negativity: (x, x)
= (y, x);
= (x,
~
p;
0 '\Ix;
Non-degeneracy: (x, x)
= 0 if and only if x = o.
164
Let V be a (complex or real) vector space. An inner product on V is a function, that assign to each pair of vectors x, Y a scalar, denoted by (x, y) such that the properties 1-4 from the previous section are satisfied. Note that for a real space V we assume that (x, y) is always real, and for a complex space the inner product (x, y) can be complex. A space V together with an inner product on it is called an inner product space. Given an inner product space, one defines the norm on it by
II x II = ~(x,x).
Example. Let V be ]Rn or en . We already have an inner product (x, y) = y*x = y * x defined above.
= 2:;=1 xkYk
This inner product is called the standard inner product in ]Rn or en We will use symbol F to denote both
means the statement is true for both lR and Example. Let V be the space Pn of polynomials of degree at most n. Define the inner product by
(f,g) =
en.
JI f(t)g(t)dt.
-1
It is easy to check, that the above properties 1-4 are satisfied. This definition works both for complex and real cases. In the real case we only allow polynomials with real coefficients, and we do not need the complex conjugate here. Let us recall, that for a square matrix A, its trace is defined as the sum of the diagonal entries,
n
trace A = 2:ak,k'
k=l
Example. For the space Mm x n of m x n matrices let us define the so-called Frobenius inner product by (A, B) = trace (B*A). Again, it is easy to check that the properties, i.e., that we indeed defined an inner product. Note, that
trace (B* A) =
2: Aj,kBj,k ,
j,k
e mn
165
Properties of Inner Product. The statements we get in this section are true for any inner product space, not only for Fn' To prove them we use only properties 1-4 of the inner product. First of all let us notice, that properties 1 and 2 imply that
2 '. (x, ay + ~z) Indeed,
= ~ (x,
y) +
i3 (x, z).
+ 0(z,x) =
---
a(x,y) + 0(x,z)
---
Note also that property 2 implies that for all vectors x (0, x) = (x, 0) = O. Lemma. Let x be a vector in an inner product space V. Then x = 0 if and only
if
(x, y) = 0 \ly E V. Proof Since (0, y) = 0 we only need to show that implies x = O. Putting y = x in) we get(x, x) = 0, so x = O. Applying the above lemma to the difference x - y we get the following Corollary. Let x, y be vectors in an inner product space V. The equality x = y holds if and only if (x, z) = (y, z) \lz E V . The following corollary is very simple, but will be used a lot Corollary. Suppose two operators A, B : x ~ Y satisfo (Ax, y) = (Bx, y) \Ix E X, \ly E V.
Then A = B Proof By the previous corollary (fix x and take all possible y's) we get Ax = Bx' Since this is true for all x E X, the transformations A and B coincide. The following property relates the norm and the inner product. Theorem. (Cauchy-Schwarz inequality). I (x, y) I ~ II x II . II y II Proof The proofwe are going to present, is not the shortest one, but it gives a lot for the understanding. Let us consider the real case first. If y = 0, the statement is trivial, so we can assume that y O. By the properties of an inner product, for all scalar t
"*
o ::; II x becomes
ty 112
= (x -
ty, x - ty)
= II x 112 -
lIyll ,
166
which is exactly the inequality we need to prove. There are several possible ways to treat the complex case. One is to replace x by ax, where a is a complex constant, I a I = 1 such that (ax, y) is real, and then repeat the proof for the real case. The other possibility is again to consider o : :; II x - ty 112 = (x - ty, x - ty) = (x, x - ty) - t(y, x - ty)
= (x,y) + (x,y)
II y 112
(x,y)=4"(llx+ yll
-llx-yll )
L:::
167
NORMS
Normed spaces
We have proved before that the norm II v II satisfies the following properties: 1. Homogeneity: II v II = II . II v II for all vectors v and all scalars. 2. 3. 4.
Suppose in a vector space V we assigned to each vector v a number II v II such that above properties 1-4 are satisfied. Then we say that the function v ~ II v II is a norm. A vector space V equipped with a norm is called a normed space. Any inner product space is a normed space, because the norm II v II = J(v, v) satisfies the above properties 1-4. However, there are many other normed spaces. For example, given p, 1 < p < 00 one can define the norm
en by
P xk I
II x lip
~ (I
XI
~ [~I
r
p
One can also define the norm 11.11"" (p = 00) by II x 1100 = max{1 xkl: k= 1,2, ... , n}. The norm product. To check that 1IlI p is indeed a norm one has to check that it satisfies all the above properties 1-4. Properties 1,3 and 4 are very easy to check. The triangle inequality (property 2) is easy to check for P = 1 and p = 1 (and we proved it for p = 2). For all other p the triangle inequality is true, but the proof is not so simple, and we will not present it here. The triangle inequality for k . kp even has special name: its called Minkowski inequality, after the German mathematician H. Minkowski. Note, that the norm 1I.lI p for p
1Ill p
for P = 2 coincides with the regular norm obtained from the inner
easy to see that this norm is not obtained from the standard inner product in Rn (en). But we claim more! We claim that it is impossible to introduce an inner product which gives rise to the norm 1I.lIp'p ~ 2. This statement is actually quite easy to prove. It is easy to see that the Parallelogram Identity fails for the norm 1I.lIp'p ~ 2. and one can easily find a counter example in 1R , which then gives rise to a counter example in all other spaces. In fact, the Parallelogram Identity, as the theorem below asserts completely characterizes norms obtained from an inner product.
2
168
Inner Product Spaces Theorem. A norm in a normed space is obtained from some inner product if and only
"i/u, v E V. The inverse implication is more complicated. If we are given a norm, and this norm came from an inner product, then we do not have any choice; this inner product must be given by the polarization identities. But, we need to show that (x, y) we got from the polarization identities is indeed an inner product, i.e., that it satisfies alt the properties. It is indeed possible to check if the norm satisfies the parallelogram identity, but the proof is a bit too involved, so we do not present it here.
ORTHOGONALITY
II u + v 112 = II U 112 + II v 112 if u 1. v. The proof is straightforward computation, II u + v 112 = (u + v, u + v) = (u, u) + (v, v) + (u, v) + (v, u) = II U 112 + II v 112 ((u, v) = (v, u) = 0 because of orthogonality). Definition. We say that a vector v is orthogonal to a subspace E if v is orthogonal to all vectors w in E. We say that subspaces E and F are orthogonal if all vectors in E are orthogonal to F, i.e., all vectors in E are orthogonal to all vectors in F The following lemma shows how to check that a vector is orthogonal to a subspace. Lemma. Let E be spanned by vectors VI' v 2, ... , Yr' Then v ? E if and only if v J.. vk, "i/k = 1, 2, ... , r.
Proof By the definition, if v 1. E then v is orthogonal to all vectors in E. In particular,
v J.. v k , k = 1,2, ... , r.
= 1, 2,
Definition. A system of vectors v I' v2, ... , vn is called orthogonal if any two vectors are orthogonal to each other (i.e., if(vj , vk) = 0 forj k). If, in addition Itvk 11= 1 for all k, we call the system orthonormal. Lemma. (Generalized Pythagorean identity). Let VI' v2, ... , vn be an orthogonal system. Then
169
I:CXkVk = I:lcxk Illvk II k=1 k=1 This formula looks particularly simple for orthonormal systems, where II vk II = 1. Proof of the Lemma. tCXkVk 2 = [tCXkVk,tCXjVj] = ttakaj(Vk>Vj)' k=1 k=1 j=1 k=1 j=1 Because of orthogonality vk' v) = 0 if} ::j:. k. Therefore we only need to sum the terms with} = k, which gives exactly (Vk,Vk) = I:lak IlIvk II . k=1 k=1 Corollary. Any orthogonal system Vi' v2, ... , Vn of non-zero vectors is linearly independent. Proof Suppose for some ai' ... , ~2' Generalized Pythagorean identity
... ,
I:lcxk I
Since
II vk II
::j:.
0 (vk
::j:.
a k = 0 Vk,
so only the trivial linear combination gives O. Remark. In what follows we will usually mean by an orthogonal system an orthogonal system of non-zero vectors. Since the zero vector 0 is orthogonal to everything, it always can be added to any orthogonal system, but it is really not interesting to consider orthogonal systems with zero vectors.
j=1 Taking inner product of both sides of the equation with VI we get
170
(x, VI)
= I:>~ j
j=1
(Vj , VI)
= 0 if}
:t= 1), so
(XI = - - 2 II vIII
Similarly, mUltiplying both sides by v k we get
(x, Vk)
(x, vI)
so
Therefore, to find coordinates of a vector in an orthogonal basis one does not need to solve a linear system, the coordinates are determined by the formula. This formula is especially simple for orthonormal bases, when
II V k II = 1.
v- W 1. E.
We will use notation w = PEV for the orthogonal projection. After introducing an object, it is natural to ask: 1. Does the object exist? 2. 3. Is the object unique? How does one find it?
We will show first that the projection is unique. Then we present a method of finding the projection, proving its existence. The following theorem shows why the orthogonal projection is important and also proves that it is unique.
171
i.e.,
II = II v - x II,
x. Then
=w-
v-x=v-w+w-x=v-w+~
Since v - w ..L E we have y ..L v - wand so by Pythagorean Theorem II v - x 112 = II v - w 112 + II y 112 ~ II v - w 112. Note that equality happens only ify = 0 i.e., if x = w. The following proposition shows how to find an orthogonal projection if we know an orthogonal basis in E. Proposition. Let VI' v2, ... , vr be an orthogonal basis in E. Then the orthogonal projection P EV of a vector v is given by the formula
r
k=I
(V'Vk~. II vk II
~ (v,vk) PEV= L...J--2vk k=III vk II
Note that the formula for k coincides with, i.e., this formula for an orthogonal system (not a basis) gives us a projection onto its span. Remark. It is easy to see now from formula that the orthogonal projection P E is a linear transformation. One can also see linearity of P E directly, from the definition and uniqueness of the orthogonal projection. Indeed, it is easy to check that for any x and y the vector ax + ~y - (aPeX - ~P sY) is orthogonal to any vector in E, so by the definition PE(ax + ~y) = a.PE! + ~PsY.
en and lR. none can get from the ) is given by above formula the matrix of the orthogonal projection PE onto E in en (lR. n
Remark. Recalling the definition of inner product in
r
1
2
PE = L
k=ll1 Vk II
* VkVk
where columns VI' V 2' ... , vr form an orthogonal basis in E. Proof of Proposition. Let
172
W:L:>~kVk'
k=I
where Olk=
(V'Vk~. II vk II
W
We want to show that v - W ..1 E. By Lemma it is sucient to show that v = 1, 2, ... , n. Computing the inner product we get for k = 1, 2, ... , r
(v - w, vk)
..L vk' k
= (v, v k) -
(w, vk)
= (v, v k) -
L Ol /Vj' vk)
SO, if we know an orthogonal basis in E we can find the orthogonal projection onto E.
In particular, since any system consisting of one vector is an orthogonal system, we know
how to perform orthogonal projection onto one-dimensional spaces. But how do we find an orthogonal projection if we are only given a basis in E? Fortunately, there exists a simple algorithm allowing one to get an orthogonal basis from a basis. Gram-Schmidt Orthogonalization Algorithm. Suppose we have a linearly independent system x I' X2, ... , x n . The Gram-Schmidt method constructs from this system an orthogonal system vI' v 2' ... , vn such that span{xl' x 2, .. " xn} = span {vI' V2, ... , v n } Moreover, for all r $ n we get span {xl' x 2, ... , x r } = span {vI' V2, ... , v r } Now let us describe the algorithm. Step 1. Put VI :=x I ' Denote by EI := span{x I } = span{v I}. Step 2. Define v2 by
V2 =X2 -PE,X2 =X2 (x2' VI)
2 VI'
II VI II
(x3' VI) (X3' V2)
2 VI -
2 V2
II vIII
IIv211
E2 so
O.
Step r + 1. Suppose that we already made r steps of the process, constructing an orthogonal system (consisting of non-zero vectors) VI' v 2, ... , vr such that Er := span{vl' v2' ... , vr } = span{x I, x 2, ... , x r }. Define
._ _ ~ (Xr+I,Vk) vr+I .- xr+I - PEr Xr+I - Xr+I - 6 I 2 Vk k=I ,I Vk II
173
Note, that xr+I e Er so vr+I '* 0. Continuing this algorithm we get an orthogonal system vI' v2, ... , vn . An example. Suppose we are given vectors xI = (1, I,ll, X2 = (0, 1, 2l, X3 = (1,0, 2l, and we want to orthogonalize it by Gram-Schmidt. On the first step define vI =x l = (1,1, ll On the second step we get (x2' vI)
v2
= x2 -PE1X 2 = x2 -
2 VI
II vI II
Computing
(x2' v,)
we get
Finally, define
v3
= x3 -
PE2 X3 = x3 -
(x3' vI)
II vIII
vI -
(x3' v2)
II v211
2 v2
Computing
vJ
=[g]-~[l]-M -1]= -I
2 Remark. Since the multiplication by a scalar does not change the orthogonality, one can multiply vectors vk obtained by Gram-Schmidt by any non-zero numbers. In particular, in many theoretical constructions one normalizes vectors vk by dividing them by their respective norms II vk II. Then the resulting system will be orthonormal, and the formulas will look simpler. On the other hand, when performing the computations one may want to avoid fractional entries by multiplying a vector by the least common denominator of its entries. Thus one may want to replace the vector v 3 from the above example by (1, -2, ll.
..
174
E,
V2
..L E (eqv.
V2 E
E1..)
, (where clearly VI = PEV)' This statement is often symbolically written as V = E $ E1.., which mean exactly that any vector admits the unique decomposition above. The following proposition gives an important property ofthe orthogonal complement. Proposition. For a subspace E
L: L:Ak,jxj -bk
k=I j=I
175
There are several ways to find the least square solution. Ifwe are in ~n ,and everything is real, we can forget about absolute values. Then we can just take partial derivatives with respect to Xj and find the where all of them are 0, which gives us minimum. Geometric approach. However, there is a simpler way offinding the minimum. Namely, if we take all possible vectors x, then Ax gives us all possible vectors in Ran A, so minimum of II Ax - b II is exactly the distance from b to Ran A. Therefore the value of II Ax - b II is minimal if and only if Ax = PRan Ab, where PRanA stands for the orthogonal projection onto the column space Ran A. So, to find the least square solution we simply need to solve the equation Ax = PRanAb. Ifwe know an orthogonal basis vI' v2, .. , vn in Ran A, we can find vector P Ra~b by the formula
~ (b,vk)
PRanAb=
w 11 2vk k=I 11 vk
Ifwe only know a basis in Ran A, we need to use the Gram-Schmidt orthogonalization to obtain an orthogonal basis from it. So, theoretically, the problem is solved, but the solution is not very simple: it involves Gram-Schmidt orthogonalization, which can be computationally intensive. Fortunately, there exists a simpler solution. Normal equation. Namely, Ax is the orthogonal projection P Ra~b if and only if b Ax 1. Ran A (Ax E Ran A for all x). If aI' a2, ... , an are columns of A, then the condition Ax 1. Ran A can be rewritten as b -Ax 1. ak' That means
0= (b -Ax, ak)
Joining rows
A*(b -Ax) = 0, which in tum is equivalent to the so-called normal equation A * Ax =A * b. A solution of this equation gives us the least square solution of Ax = b. Note, that the least square solution is unique if and only if A * A is invertible. Formula for the orthogonal projection. As we already discussed above, ifx is a solution of the normal equation A * Ax = A * b (i.e., a least square solution of Ax = b), then Ax = PRanAb. So, to find the orthogonal projection of b onto the column space Ran A we need to solve the normal equation A *Ax = A *b, and then mUltiply the solution by A. If the operator A *A is invertible, the solution of the normal equation A *Ax = A *b is given by x = (A*ArIA*b, so the orthogonal projection P~b can be computed as
176
PRa"A =A(A*ArIA* is the formula for the matrix of the orthogonal projection onto Ran A. The following theorem implies that for an m x n matrix A the matrix A *A is invertible if and only if rank A = n. Theorem. For an m x n matrix A KerA = Ker(A*A). Indeed, according to the rank theorem KerA = {O} if and only rank A is n. Therefore Ker(A *A) = {O} if and only if rank A = n. Since the matrix A *A is square, it is invertible if and only if rank A = n. To prove the equality Ker A = Ker (A *A) one needs to prove two inclusions Ker(A *A) KerA and KerA Ker(A *A). One of the inclusion is trivial, for the other one use the fact that 1/ Ax 1/ 2 = (Ax, Ax) = (A *Ax, x). Example. line fitting. Let us introduce a few examples where the least square solution appears naturally. Suppose that we know that two quantities x and yare related by the law Y = a + bx. The coefficients a and b are unknown, and we would like to find them from experimental data. Suppose we run the experiment n times, and we get n pairs (xk' Yk)' k = 1, 2, ... , n. Ideally, all the points (xk' Yk) should be on a straight line, but because of errors in measurements, it usually does not happen: the point are usually close to some line, but not exactly on it. That is where the least square solution helps! Ideally, the coefficients a and b should satisfy the equations a + bxk = Yk' k = 1, 2, ... , n (note that here, xk andYk are some fixed numbers, and the unknowns are a and b). If it is possible to find such a and b we are lucky. If not, the standard thing to do, is to minimize the total quadratic error
2:1 a+bxk - Yk
k=l
2
1
But, minimizing this error is exactly finding the least square solution of the system
:;j 1
'1
1 xn
[bl
~ J~l
Yn
(recall, that xkYk are some given numbers, and the unknowns are a and b). Example. Suppose our data (xk' Yk) consist of pairs (-2,4), (-1,2), (0, 1), (2, 1), (3, 1). Then we need to find the least square solution of
177
4
-1 0 2 3 Then 1 -2 -1 1 1 1 1) 1 0 -1 0 2 3 1 1 2 1 3
[~] =
2 1 I
1
A*A=(_i
and
=(~ 1~)
4
1 1 1 A*b=(_i -1 0 2 so the normal equation A *Ax
j) =(-~)
2 I 1 I
= A *b
is rewritten as
(~ ?8)(~) =(-~).
The solution of this equation is
= 2, b = -112,
y = 2 - 1I2x. Examples. Curves and Planes. The least square method is not limited to the line fitting. It can also be applied to more general curves, as well as to surfaces in higher dimensions. The only constraint here is that the parameters we want to find be involved linearly. The general algorithm is as follows: 1. Find the equations that your data should satisfy if there is exact fit;
2. Write these equations as a linear system, where unknowns are the parameters you want to find. Note, that the system need not to be consistent (and usually is not); Find the least square solution of the system.
3.
An example: curve fitting. For example, suppose we know that the relation between x and y is given by the quadratic law y = a + bx + cx2' so we want to fit a parabola y = a + bx + cx 2 to the data. Then our unknowns a, b, c should satisfy the equations
178
: :; :~ [~]=[;?l
1
X
Yn
For example, for the data from the previous example we need to find the least square solution of 1 -2 4 4 1 -1 2 1 0 1 1 2 1 1 3 9 1 Then 1 -2 4 2 -1 1 1 1 1 -1 0 2 3 1 o 0 18 26 18 26 114 1 0 4 9 1 2 4 1 3 9 and 4 1 1 1 2 -1 0 2 1 1 0 4 1 1 Therefore the normal equation A *Ax = A*b is
H~]=
<A*A=H
=p
IS]
A*b=H
i]=
=
=[3H
[ 18
which has the unique solution a = 86/77, b = -62/77, C = 43/154. Therefore, Y = 86/77 - 62 x/77 + 43 x2/154 is the best fitting parabola. Plane fitting. As another example, let us fit a plane z = a + bx + cy to the data
(xk'Yk' zk) E E]R3, k
= 1,2, ... n.
The equations we should have in the case of exact fit are a + bXk + cYk = zk' k = 1,2, .'" n, or, in the matrix form
179
So, to find the best fitting plane, we need to find the best square solution ofthis system (the unknowns are a, b, c).
n matrix A its
Hermitian adjoint (or simply adjoint) A* is defined by A * : = AT . In other words, the matrix A * is obtained from the transposed matrix AT by taking complex conjugate of each entry. The following identity is the main property of adjoint matrix:
V.I
Before proving this identity, let us introduce some useful formulas. Let us recall that for transposed matrices we have the identity (AB)T = BT AT. Since for complex numbers z and
W
(AB)* = B*A*
holds for the adjoint. Also, since (AT l = A and z = ~,z,
(A*) =A*
= A.
V, 'dy
W.
Why does such an operator exists? We can simply construct it: consider orthonormal bases A = VI' v2, ... , vn in Vand B = w l ,w2' ... ,wm in W. If [AlBA is the matrix of A with respect to these bases, we define the operator A * by defining its matrix [A *lAB as
180
Useful form ulas. Below we present the properties of the adjoint operators (matrices) we will use a lot. l. (A + B) = A* + B*;
2. (aA)*
(iA*;
3.
4.
5.
(y, Ax)
= (A *y,
x).
Theorem. Let A : V -> W be an operator acting between inner product spaces. Then
1. Ker A* = (Ran A)⊥;
2. Ker A = (Ran A*)⊥;
3. Ran A = (Ker A*)⊥;
4. Ran A* = (Ker A)⊥.
Proof. First of all, let us notice that, since for a subspace E we have (E⊥)⊥ = E, statements 1 and 3 are equivalent. Similarly, for the same reason, statements 2 and 4 are equivalent as well. Finally, statement 2 is exactly statement 1 applied to the operator A* (here we use the fact that (A*)* = A).
So, to prove the theorem we only need to prove statement 1. We will present two proofs of this statement: a "matrix" proof, and an "invariant", or "coordinate-free", one.
In the "matrix" proof, we assume that A is an m x n matrix, i.e., that A : F^n -> F^m. The general case can always be reduced to this one by picking orthonormal bases in V and W, and considering the matrix of A in these bases. Let a1, a2, ..., an be the columns of A. The inclusion x ∈ (Ran A)⊥ means that x is orthogonal to each column of A, i.e., (x, ak) = 0 for all k = 1, 2, ..., n. By the definition of the inner product in F^m, that means ak* x = 0 for all k, which is exactly the statement A*x = 0. So x ∈ (Ran A)⊥ if and only if A*x = 0, i.e., if and only if x ∈ Ker A*.
Now, let us present the "coordinate-free" proof. The inclusion x ∈ (Ran A)⊥ means that x is orthogonal to all vectors of the form Ay, i.e., that
(x, Ay) = 0 for all y.
Since (x, Ay) = (A*x, y), by the Lemma this happens if and only if A*x = 0. So we proved that x ∈ (Ran A)⊥ if and only if A*x = 0, which is exactly statement 1 of the theorem.
The above theorem makes the structure of the operator A and the geometry of the fundamental subspaces much more transparent. It follows from this theorem that the operator A can be represented as a composition of the orthogonal projection onto Ran A* and an isomorphism from Ran A* to Ran A.
Isometries and Unitary Operators. Unitary and Orthogonal Matrices: Main Definitions
Definition. An operator U : X -> Y is called an isometry if it preserves the norm, i.e., if
||Ux|| = ||x|| for all x ∈ X.
The following theorem shows that an isometry also preserves the inner product.
Theorem. An operator U : X -> Y is an isometry if and only if it preserves the inner product, i.e., if and only if (x, y) = (Ux, Uy) for all x, y ∈ X.
Proof. The proof uses the polarization identities. For example, if X is a complex space,
(Ux, Uy) = (1/4) Σ_{α = ±1, ±i} α ||Ux + αUy||² = (1/4) Σ_{α = ±1, ±i} α ||U(x + αy)||² = (1/4) Σ_{α = ±1, ±i} α ||x + αy||² = (x, y).
Similarly, if X is a real space,
(Ux, Uy) = (1/4)(||Ux + Uy||² - ||Ux - Uy||²) = (1/4)(||x + y||² - ||x - y||²) = (x, y).
Conversely, if U preserves the inner product, then ||Ux||² = (Ux, Ux) = (x, x) = ||x||², so U is an isometry.
Lemma. An operator U : X -> Y is an isometry if and only if U*U = I.
Proof. By the definitions of the isometry and of the adjoint operator,
(x, y) = (Ux, Uy) = (U*Ux, y) for all x, y ∈ X.
Therefore, if U*U = I, we have (Ux, Uy) = (x, y), and therefore U is an isometry. On the other hand, if U is an isometry, then for all x ∈ X we have (U*Ux, y) = (x, y) for all y ∈ X, and therefore by the Corollary U*Ux = x. Since this is true for all x ∈ X, we have U*U = I.
The above lemma implies that an isometry is always left invertible (U* being a left inverse).
Definition. An isometry U : X -> Y is called a unitary operator if it is invertible.
Proposition. An isometry U : X -> Y is a unitary operator if and only if dim X = dim Y.
Proof. Since U is an isometry, it is left invertible, and if dim X = dim Y it is invertible (a left invertible square matrix is invertible). On the other hand, if U : X -> Y is invertible, then dim X = dim Y (only square matrices are invertible; isomorphic spaces have equal dimensions).
A square matrix U is called unitary if U*U = I, i.e., a unitary matrix is the matrix of a unitary operator acting in F^n. A unitary matrix with real entries is called an orthogonal matrix. In other words, an orthogonal matrix is the matrix of a unitary operator acting in the real space R^n.
A few properties of unitary operators:
1. For a unitary transformation U, U^{-1} = U*;
2. If U is unitary, then U* = U^{-1} is also unitary;
3. If U is an isometry, and v1, v2, ..., vn is an orthonormal basis, then Uv1, Uv2, ..., Uvn is an orthonormal system. Moreover, if U is unitary, Uv1, Uv2, ..., Uvn is an orthonormal basis;
4. A product of unitary operators is a unitary operator as well.
Examples. First of all, let us notice that a matrix U is an isometry if and only if its columns form an orthonormal system. This statement can be checked directly by computing the product U*U. It is easy to check that the columns of the rotation matrix
[ cos α  -sin α ]
[ sin α   cos α ]
are orthogonal to each other, and that each column has norm 1. Therefore, the rotation matrix is an isometry, and since it is square, it is unitary. Since all its entries are real, it is an orthogonal matrix.
The next example is more abstract. Let X and Y be inner product spaces with dim X = dim Y = n, and let x1, x2, ..., xn and y1, y2, ..., yn be orthonormal bases in X and Y respectively. Define an operator U : X -> Y by Ux_k = y_k, k = 1, 2, ..., n. Since for a vector x = c1 x1 + c2 x2 + ... + cn xn
||x||² = |c1|² + |c2|² + ... + |cn|²   and   ||Ux||² = ||c1 y1 + ... + cn yn||² = |c1|² + ... + |cn|²,
we have ||Ux|| = ||x|| for all x ∈ X, so U is a unitary operator.
Proposition. Let U be a unitary matrix. Then
1. |det U| = 1;
2. If λ is an eigenvalue of U, then |λ| = 1.
Remark. Note that for an orthogonal matrix an eigenvalue (unlike the determinant) does not have to be real. Our old friend, the rotation matrix, gives an example.
Proof. Let det U = z. Since det(U*) = conj(det U), we have |z|² = z conj(z) = det(U*U) = det I = 1, so |det U| = |z| = 1. Statement 1 is proved.
To prove statement 2, let us notice that if Ux = λx, x ≠ 0, then ||x|| = ||Ux|| = ||λx|| = |λ| ||x||, so |λ| = 1.
Suppose now that the operator A has an orthonormal basis of eigenvectors u1, u2, ..., un. Let D be the matrix of A in the basis B = u1, u2, ..., un. Clearly, D is a diagonal matrix. Denote by U the matrix with columns u1, u2, ..., un. Since the columns form an orthonormal basis, U is unitary. The standard change of coordinate formula implies
A = [A]_SS = [I]_SB [A]_BB [I]_BS = UDU^{-1},
and since U is unitary, A = UDU*.
We now extend to C^n the concepts of dot product, norm and distance, first developed for R^n in Chapter 9.
Definition. Suppose that u = (u1, ..., un) and v = (v1, ..., vn) are vectors in C^n. The complex euclidean inner product of u and v is defined by
u . v = u1 conj(v1) + ... + un conj(vn);
the complex euclidean norm of u is defined by
||u|| = (u . u)^{1/2} = (|u1|² + ... + |un|²)^{1/2};
and the complex euclidean distance between u and v is defined by
d(u, v) = ||u - v|| = (|u1 - v1|² + ... + |un - vn|²)^{1/2}.
Corresponding to the Proposition, we have the following result.
Proposition. Suppose that u, v, w ∈ C^n and c ∈ C. Then
(a) u . v = conj(v . u);
(b) u . (v + w) = (u . v) + (u . w);
(c) (cu) . v = c(u . v); and
(d) u . u >= 0, and u . u = 0 if and only if u = 0.
The following definition is motivated by the Proposition above.
Definition. Suppose that V is a complex vector space. By a complex inner product on V, we mean a function (·,·) : V x V -> C which satisfies the following conditions:
(IP1) For every u, v ∈ V, we have (u, v) = conj((v, u)).
(IP2) For every u, v, w ∈ V, we have (u, v + w) = (u, v) + (u, w).
(IP3) For every u, v ∈ V and c ∈ C, we have c(u, v) = (cu, v).
(IP4) For every u ∈ V, we have (u, u) >= 0, and (u, u) = 0 if and only if u = 0.
Definition. A complex vector space with an inner product is called a complex inner product space or a unitary space.
Definition. Suppose that u and v are vectors in a complex inner product space V. Then the norm of u is defined by
||u|| = (u, u)^{1/2},
and the distance between u and v is defined by d(u, v) = ||u - v||. Using this inner product, we can discuss orthogonality, orthogonal and orthonormal bases, the Gram-Schmidt orthogonalization process, as well as orthogonal projections, in a similar way as for real inner product spaces. In particular, the results of the earlier sections can be generalized to the case of complex inner product spaces.
Unitary Matrices
For matrices with real entries, orthogonal matrices and symmetric matrices play an important role in the orthogonal diagonalization problem. For matrices with complex entries, the analogous roles are played by unitary matrices and hermitian matrices respectively.
Definition. Suppose that A is a matrix with complex entries, and that conj(A) is the matrix obtained from A by replacing each entry of A by its complex conjugate. Then the matrix
A* = (conj A)^T
is called the conjugate transpose of the matrix A.
Proposition. Suppose that A and B are matrices with complex entries, and that c ∈ C. Then
(a) (A*)* = A;
(b) (A + B)* = A* + B*.
Definition. A square matrix A with complex entries and satisfying the condition A^{-1} = A* is said to be a unitary matrix.
Corresponding to the Proposition, we have the following result.
Proposition. Suppose that A is an n x n matrix with complex entries. Then
(a) A is unitary if and only if the row vectors of A form an orthonormal basis of C^n;
(b) A is unitary if and only if the column vectors of A form an orthonormal basis of C^n.
Unitary Diagonalization Process. Suppose that A is a normal n x n matrix with complex entries.
(1) Determine the n complex roots λ1, ..., λn of the characteristic polynomial det(A - λI), and n linearly independent eigenvectors u1, ..., un of A corresponding to these eigenvalues, as in the Diagonalization process.
(2) Apply the Gram-Schmidt orthogonalization process to the eigenvectors u1, ..., un to obtain orthogonal eigenvectors v1, ..., vn of A, noting that eigenvectors corresponding to distinct eigenvalues are already orthogonal. Normalize the orthogonal eigenvectors v1, ..., vn to obtain orthonormal eigenvectors w1, ..., wn of A. These form an orthonormal basis of C^n.
(3) Write P = (w1 ... wn) and D = diag(λ1, ..., λn), where λ1, ..., λn ∈ C are the eigenvalues of A and w1, ..., wn ∈ C^n are respectively their orthogonalized and normalized eigenvectors. Then P*AP = D.
We conclude this chapter by discussing the following important result, which implies the Proposition that all the eigenvalues of a symmetric real matrix are real.
Proposition. Suppose that A is a hermitian matrix. Then all the eigenvalues of A are real.
Proof. Suppose that A is a hermitian matrix, and that λ is an eigenvalue of A with corresponding eigenvector v. Then Av = λv. Multiplying on the left by the conjugate transpose v* of v, we obtain v*Av = v*(λv) = λ v*v. To show that λ is real, it suffices to show that the 1 x 1 matrices v*Av and v*v both have real entries. Now (v*Av)* = v*A*(v*)* = v*Av and (v*v)* = v*(v*)* = v*v. It follows that both v*Av and v*v are hermitian. It is easy to prove that hermitian matrices must have real entries on the main diagonal. Since v*Av and v*v are 1 x 1, it follows that they are real, and hence λ = (v*Av)/(v*v) is real.
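As a numerical illustration of the process (added here, not from the original text, and assuming NumPy is available), the routine eigh returns real eigenvalues and an orthonormal basis of eigenvectors for a Hermitian matrix.

import numpy as np

# a small Hermitian matrix: A equals its conjugate transpose
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

eigvals, P = np.linalg.eigh(A)   # columns of P are orthonormal eigenvectors
D = np.diag(eigvals)

print(eigvals)                                  # real, as the Proposition asserts
print(np.allclose(P.conj().T @ A @ P, D))       # P*AP = D
print(np.allclose(P.conj().T @ P, np.eye(2)))   # P is unitary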
Suppose that f : [a, b] -> R is a continuous function, and that we wish to approximate f by a polynomial g of degree at most k so that the quantity
∫_a^b |f(x) - g(x)|² dx
is minimized. The purpose of this section is to study this problem using the theory of real inner product spaces. Our argument is underpinned by the following simple result in the theory.
Proposition. Suppose that V is a real inner product space, and that W is a finite-dimensional subspace of V. Given any u ∈ V, the inequality
||u - proj_W u|| <= ||u - w||
holds for every w ∈ W. In other words, the distance from u to any w ∈ W is minimized by the choice w = proj_W u, the orthogonal projection of u onto the subspace W. Alternatively, proj_W u can be thought of as the vector in W closest to u.
Proof. Note that
u - proj_W u ∈ W⊥   and   proj_W u - w ∈ W.
It follows from Pythagoras's theorem that
||u - w||² = ||(u - proj_W u) + (proj_W u - w)||² = ||u - proj_W u||² + ||proj_W u - w||²,
so that
||u - w||² - ||u - proj_W u||² = ||proj_W u - w||² >= 0.
The result follows immediately.
Let V denote the vector space C[a, b] of all continuous real valued functions on the closed interval [a, b], with inner product
(f, g) = ∫_a^b f(x)g(x) dx.
Then
||f - g||² = (f - g, f - g) = ∫_a^b |f(x) - g(x)|² dx.
It follows that the least squares approximation problem is reduced to one of finding a suitable polynomial g to minimize the norm ||f - g||.
Now let W = P_k[a, b] be the collection of all polynomials g : [a, b] -> R with real coefficients and of degree at most k. Note that W is essentially P_k, although the variable is restricted to the closed interval [a, b]. It is easy to show that W is a subspace of V. In view of the Proposition, we conclude that g = proj_W f gives the best least squares approximation among polynomials in W = P_k[a, b]. This subspace is of dimension k + 1. Suppose that {v0, v1, ..., vk} is an orthogonal basis of W = P_k[a, b]. Then by the Proposition, we have
g = (f, v0)/||v0||² v0 + (f, v1)/||v1||² v1 + ... + (f, vk)/||vk||² vk.
Example. Consider the function f(x) = x² in the interval [0, 2]. Suppose that we wish to find a least squares approximation by a polynomial of degree at most 1. In this case, we can take V = C[0, 2], with inner product
(f, g) = ∫_0^2 f(x)g(x) dx,
and W = P_1[0, 2], with basis {1, x}. We now apply the Gram-Schmidt orthogonalization process to this basis to obtain an orthogonal basis {1, x - 1} of W, and take
g = (x², 1)/||1||² · 1 + (x², x - 1)/||x - 1||² · (x - 1).
Now
(x², 1) = ∫_0^2 x² dx = 8/3,    ||1||² = ∫_0^2 1 dx = 2,
(x², x - 1) = ∫_0^2 x²(x - 1) dx = 4/3,    ||x - 1||² = ∫_0^2 (x - 1)² dx = 2/3.
It follows that
g = 4/3 + 2(x - 1) = 2x - 2/3.
Example. Consider the function f(x) = e^x in the interval [0, 1]. Suppose that we wish
to find a least squares approximation by a polynomial of degree at most 1. In this case, we can take V = C[0, 1], with inner product
(f, g) = ∫_0^1 f(x)g(x) dx,
and W = P_1[0, 1], with basis {1, x}. We now apply the Gram-Schmidt orthogonalization process to this basis to obtain an orthogonal basis {1, x - 1/2} of W, and take
g = (e^x, 1)/||1||² · 1 + (e^x, x - 1/2)/||x - 1/2||² · (x - 1/2).
Now
(e^x, 1) = ∫_0^1 e^x dx = e - 1,    ||1||² = 1,
(e^x, x - 1/2) = ∫_0^1 e^x (x - 1/2) dx = (3 - e)/2,    ||x - 1/2||² = ∫_0^1 (x - 1/2)² dx = 1/12,
so that
g = (e - 1) + (18 - 6e)(x - 1/2) = (18 - 6e)x + (4e - 10).
Remark. From the proof of the Proposition, it is clear that ||u - w|| is minimized by the unique choice w = proj_W u. It follows that the least squares approximation problem posed here has a unique solution.
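The projection formula can also be checked numerically. The sketch below is my own illustration (assuming NumPy is available); it approximates the inner-product integrals on a grid and recovers the best fitting line for f(x) = e^x on [0, 1].

import numpy as np

xs = np.linspace(0.0, 1.0, 20001)   # fine grid on [0, 1] for numerical integration
f = np.exp(xs)

def inner(u, v):
    # (u, v) = integral of u(x) v(x) over [0, 1], via the trapezoid rule
    return np.trapz(u * v, xs)

v0 = np.ones_like(xs)    # orthogonal basis {1, x - 1/2} of P_1[0, 1]
v1 = xs - 0.5

g = inner(f, v0) / inner(v0, v0) * v0 + inner(f, v1) / inner(v1, v1) * v1

e = np.e
print(np.max(np.abs(g - ((18 - 6*e)*xs + (4*e - 10)))))   # close to 0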
Quadratic Form
A real quadratic form in n variables x1, ..., xn is an expression of the form
Q = Σ_{i=1}^{n} Σ_{j=i}^{n} c_ij x_i x_j,
where the coefficients c_ij are real and the sum is taken over all pairs i <= j.
Example. The expression 5x1² + 6x1x2 + 7x2² is a quadratic form in two variables x1 and x2. It can be written in the form
5x1² + 6x1x2 + 7x2² = (x1  x2) [ 5  3 ] [ x1 ]
                               [ 3  7 ] [ x2 ].
A quadratic form in three variables x1, x2, x3 can be written similarly as (x1 x2 x3) A (x1 x2 x3)^T for a suitable 3 x 3 matrix A.
Note that in both cases the quadratic form can be described in terms of a real symmetric matrix. In fact, this is always possible. To see this, note that given any quadratic form as above, we can write, for every i, j = 1, ..., n,
a_ij = c_ii             if i = j,
a_ij = a_ji = c_ij / 2  if i < j.
Then
Q = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x_i x_j,
where the matrix A = (a_ij) is real and symmetric. We are interested in the case when x1, ..., xn take real values. In this case, writing x = (x1, ..., xn)^T, we have
Q = x^T A x.
Since A is real and symmetric, it can be orthogonally diagonalized: there is an orthogonal matrix P and a diagonal matrix D such that P^T A P = D. Writing y = P^T x, we have
x^T A x = y^T D y.
Also, since P is an orthogonal matrix, we also have x = Py. This answers our second question. Furthermore, in view of the Orthogonal diagonalization process, the diagonal entries of the matrix D can be taken to be the eigenvalues of A, so that
D = diag(λ1, ..., λn).
Writing y = (y1, ..., yn)^T, we have
x^T A x = y^T D y = λ1 y1² + ... + λn yn².
Note now that x = 0 if and only if y = 0, since P is an invertible matrix.
Example. Consider the quadratic form 2x1² + 5x2² + 2x3² + 4x1x2 + 2x1x3 + 4x2x3. This can be written in the form x^T A x, where
A = [ 2  2  1 ]            [ x1 ]
    [ 2  5  2 ]   and  x = [ x2 ]
    [ 1  2  2 ]            [ x3 ].
The matrix A has eigenvalues λ1 = 7 and (double root) λ2 = λ3 = 1; see Example. Furthermore, we have P^T A P = D, where D = diag(7, 1, 1) and the columns of the orthogonal matrix P are orthonormal eigenvectors corresponding to these eigenvalues, for instance
P = [ 1/√6   -1/√2   1/√3 ]
    [ 2/√6    0     -1/√3 ]
    [ 1/√6    1/√2   1/√3 ].
Writing y = P^T x, the quadratic form becomes 7y1² + y2² + y3², which is clearly positive definite.
Example. Consider the quadratic form 5x1² + 6x2² + 7x3² - 4x1x2 + 4x2x3. This can be written in the form x^T A x, where
A = [ 5  -2   0 ]            [ x1 ]
    [ -2  6   2 ]   and  x = [ x2 ]
    [ 0   2   7 ]            [ x3 ].
The matrix A has eigenvalues λ1 = 3, λ2 = 6 and λ3 = 9; see Example. Furthermore, we have P^T A P = D, where
P = [ 2/3   2/3  -1/3 ]             [ 3  0  0 ]
    [ 2/3  -1/3   2/3 ]   and  D =  [ 0  6  0 ]
    [ -1/3  2/3   2/3 ]             [ 0  0  9 ].
Writing y = P^T x, the quadratic form becomes 3y1² + 6y2² + 9y3², which is clearly positive definite.
Example. Consider the quadratic form x1² + x2² + 2x1x2. Clearly this is equal to (x1 + x2)² and is therefore not positive definite. The quadratic form can be written in the form x^T A x, where
A = [ 1  1 ]            [ x1 ]
    [ 1  1 ]   and  x = [ x2 ].
It follows from the Proposition that the eigenvalues of A are not all positive. Indeed, the matrix A has eigenvalues λ1 = 2 and λ2 = 0, with corresponding eigenvectors
(1, 1)^T/√2   and   (1, -1)^T/√2.
Writing y = P^T x, where P has these eigenvectors as columns, the quadratic form becomes 2y1², which is not positive definite.
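These diagonalizations are easy to reproduce numerically. The sketch below is my own illustration (assuming NumPy is available) and checks the second example.

import numpy as np

# symmetric matrix of the form 5x1^2 + 6x2^2 + 7x3^2 - 4x1x2 + 4x2x3
A = np.array([[5.0, -2.0, 0.0],
              [-2.0, 6.0, 2.0],
              [0.0, 2.0, 7.0]])

eigvals, P = np.linalg.eigh(A)   # eigenvalues in ascending order, P orthogonal
print(eigvals)                                        # [3. 6. 9.]
print(np.allclose(P.T @ A @ P, np.diag(eigvals)))     # P^T A P = D

# positive definiteness: Q[x] = (Ax, x) > 0 for a random nonzero x
x = np.random.randn(3)
print(float(x @ A @ x) > 0)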
Let E denote the set of all piecewise continuous real valued functions on the interval [-π, π]. Then E forms a real vector space: for every f, g ∈ E, we have f + g ∈ E; for every c ∈ R and f ∈ E, we have cf ∈ E; for every c ∈ R and f, g ∈ E, we have c(f + g) = cf + cg; for every a, b ∈ R and f ∈ E, we have (a + b)f = af + bf and (ab)f = a(bf); and for every f ∈ E, we have 1f = f.
We now give this vector space E more structure by introducing an inner product. For every f, g ∈ E, let
(f, g) = (1/π) ∫_{-π}^{π} f(x)g(x) dx.
The integral exists since the function f(x)g(x) is clearly piecewise continuous on [-π, π]. It is easy to check that the following conditions hold: for every f, g ∈ E, we have (f, g) = (g, f); for every f, g, h ∈ E, we have (f, g + h) = (f, g) + (f, h); for every f, g ∈ E and c ∈ R, we have c(f, g) = (cf, g); and for every f ∈ E, we have (f, f) >= 0, with (f, f) = 0 if and only if f = 0. Hence E is a real inner product space.
The difficulty here is that the inner product space E is not finite-dimensional. It is not straightforward to show that the set
{ 1/√2, sin x, cos x, sin 2x, cos 2x, sin 3x, cos 3x, ... }
in E forms an orthonormal "basis" for E. The difficulty is to show that the set spans E.
Remark. It is easy to check that the elements of this set form an orthonormal "system". For k, m ∈ N we have
(cos kx, cos mx) = (1/π) ∫_{-π}^{π} cos kx cos mx dx = (1/π) ∫_{-π}^{π} (1/2)(cos(k - m)x + cos(k + m)x) dx = { 1 if k = m, 0 if k ≠ m },
as well as
(sin kx, sin mx) = (1/π) ∫_{-π}^{π} sin kx sin mx dx = (1/π) ∫_{-π}^{π} (1/2)(cos(k - m)x - cos(k + m)x) dx = { 1 if k = m, 0 if k ≠ m },
and
(sin kx, cos mx) = (1/π) ∫_{-π}^{π} sin kx cos mx dx = (1/π) ∫_{-π}^{π} (1/2)(sin(k + m)x + sin(k - m)x) dx = 0.
Let us assume that we have established that this set forms an orthonormal basis for E. Then a natural extension of Proposition 9H gives rise to the following: every function f ∈ E can be written uniquely in the form
a0/2 + Σ_{n=1}^{∞} (a_n cos nx + b_n sin nx),
known usually as the (trigonometric) Fourier series of the function f, with Fourier coefficients
a_n = (f, cos nx) = (1/π) ∫_{-π}^{π} f(x) cos nx dx,   n = 0, 1, 2, ...,
and
b_n = (f, sin nx) = (1/π) ∫_{-π}^{π} f(x) sin nx dx,   n = 1, 2, 3, ....
Example. Consider the function f : [-π, π] -> R, given by f(x) = x for every x ∈ [-π, π]. For every n ∈ N ∪ {0}, we have
a_n = (1/π) ∫_{-π}^{π} x cos nx dx = 0,
since the integrand is an odd function. On the other hand, for every n ∈ N, we have
b_n = (1/π) ∫_{-π}^{π} x sin nx dx = (2/π) ∫_0^{π} x sin nx dx,
since the integrand is an even function. On integrating by parts, we have
b_n = (2/π) [ -x cos nx / n ]_0^{π} + (2/π) ∫_0^{π} (cos nx / n) dx = -2(-1)^n / n = 2(-1)^{n+1}/n.
We therefore have the (trigonometric) Fourier series
Σ_{n=1}^{∞} (2(-1)^{n+1}/n) sin nx.
Note that the function f is odd, and this plays a crucial role in eliminating the Fourier coefficients a_n corresponding to the even part of the Fourier series.
Example. Consider the function f : [-π, π] -> R, given by f(x) = |x| for every x ∈ [-π, π]. For every n ∈ N ∪ {0}, we have
a_n = (1/π) ∫_{-π}^{π} |x| cos nx dx = (2/π) ∫_0^{π} x cos nx dx,
since the integrand is an even function. In particular,
a_0 = (2/π) ∫_0^{π} x dx = π,
while for every n ∈ N, integrating by parts gives
a_n = (2/π) ( [ x sin nx / n ]_0^{π} + [ cos nx / n² ]_0^{π} ) = (2/(πn²))((-1)^n - 1) = { 0 if n is even, -4/(πn²) if n is odd }.
On the other hand, b_n = 0 for every n ∈ N, since the integrand |x| sin nx is an odd function. We therefore have the (trigonometric) Fourier series
π/2 + Σ_{n odd} (-4/(πn²)) cos nx = π/2 - (4/π) Σ_{k=1}^{∞} cos(2k - 1)x / (2k - 1)².
Note that the function f is even, and this plays a crucial role in eliminating the Fourier coefficients b_n corresponding to the odd part of the Fourier series.
Example. Consider the function f : [-π, π] -> R, given for every x ∈ [-π, π] by f(x) = sgn x, so that f(x) = -1 when x < 0, f(x) = 0 when x = 0 and f(x) = 1 when x > 0. For every n ∈ N ∪ {0}, we have
a_n = (1/π) ∫_{-π}^{π} f(x) cos nx dx = 0,
since the integrand is an odd function. On the other hand, for every n ∈ N, we have
b_n = (1/π) ∫_{-π}^{π} f(x) sin nx dx = (2/π) ∫_0^{π} sin nx dx = -(2/(nπ)) [ cos nx ]_0^{π} = (2/(nπ))(1 - (-1)^n) = { 0 if n is even, 4/(nπ) if n is odd }.
We therefore have the (trigonometric) Fourier series
Σ_{n odd} (4/(nπ)) sin nx = Σ_{k=1}^{∞} (4/(π(2k - 1))) sin(2k - 1)x.
Example. Consider the function f : [-π, π] -> R, given by f(x) = x² for every x ∈ [-π, π]. For every n ∈ N ∪ {0}, we have
a_n = (1/π) ∫_{-π}^{π} x² cos nx dx = (2/π) ∫_0^{π} x² cos nx dx.
In particular,
a_0 = (2/π) ∫_0^{π} x² dx = 2π²/3,
while integrating by parts twice gives, for every n ∈ N,
a_n = 4(-1)^n / n².
On the other hand, b_n = 0 for every n ∈ N, since the integrand x² sin nx is an odd function. We therefore have the (trigonometric) Fourier series
π²/3 + Σ_{n=1}^{∞} (4(-1)^n / n²) cos nx.
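Such expansions are easy to verify numerically. The sketch below is my own illustration (assuming NumPy is available); it computes the first few Fourier coefficients of f(x) = x² by numerical integration and compares a partial sum with the function.

import numpy as np

xs = np.linspace(-np.pi, np.pi, 40001)
f = xs**2

def a(n):
    # a_n = (1/pi) * integral of f(x) cos(nx) over [-pi, pi]
    return np.trapz(f * np.cos(n * xs), xs) / np.pi

print(a(0) / 2, np.pi**2 / 3)        # a_0/2 should be pi^2/3
print([a(n) for n in (1, 2, 3)])     # close to 4*(-1)^n/n^2: -4, 1, -4/9

partial = a(0) / 2 + sum(a(n) * np.cos(n * xs) for n in range(1, 30))
print(np.max(np.abs(partial - f)))   # moderately small; the series converges to f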
Chapter 7
Structure of Operators in Inner Product Spaces
Theorem. Let A : X -> X be an operator acting in a complex inner product space. There exists an orthonormal basis u1, u2, ..., un in X such that the matrix of A in this basis is upper triangular.
In other words, any n x n matrix A can be represented as A = UTU*, where U is a unitary matrix and T is an upper triangular matrix.
Proof. We prove the theorem by induction on dim X. If dim X = 1 the theorem is trivial, since any 1 x 1 matrix is upper triangular.
Suppose we have proved that the theorem is true if dim X = n - 1, and we want to prove it for dim X = n. Let λ1 be an eigenvalue of A, and let u1, ||u1|| = 1, be a corresponding eigenvector, Au1 = λ1 u1. Denote E = u1⊥, and let v2, ..., vn be some orthonormal basis in E (clearly, dim E = dim X - 1 = n - 1), so u1, v2, ..., vn is an orthonormal basis in X. In this basis the matrix of A has the form
[ λ1  *  ...  * ]
[ 0             ]
[ ..     A1     ]
[ 0             ]
here all entries below λ1 are zeroes, and * means that we do not care what entries are in the first row to the right of λ1. We do care enough about the lower right (n - 1) x (n - 1) block to give it a name: we denote it A1.
Note that A1 defines a linear transformation in E, and since dim E = n - 1, the induction hypothesis implies that there exists an orthonormal basis (let us denote it as u2, ..., un) in which the matrix of A1 is upper triangular.
So the matrix of A in the orthonormal basis u1, u2, ..., un has the above form, where the matrix A1 is upper triangular. Therefore, the matrix of A in this basis is upper triangular as well.
Remark. Note that the subspace E = u1⊥ introduced in the proof is not invariant under A, i.e. the inclusion AE ⊂ E does not necessarily hold. That means that A1 is not a part of A; it is some operator constructed from A. Note also that AE ⊂ E if and only if all entries denoted by * (i.e. all entries in the first row, except λ1) are zero.
Remark. Note that even if we start from a real matrix A, the matrices U and T can have complex entries. The rotation matrix
[ cos α  -sin α ]
[ sin α   cos α ],   α ≠ kπ, k ∈ Z,
is not unitarily equivalent (not even similar) to a real upper triangular matrix. Indeed, the eigenvalues of this matrix are complex, and the eigenvalues of an upper triangular matrix are its diagonal entries.
Remark. An analogue of the Theorem can be stated and proved for an arbitrary vector space, without requiring it to have an inner product. In this case the theorem claims that any operator has an upper triangular form in some basis. A proof can be modeled after the proof of the Theorem. An alternative way is to equip V with an inner product by fixing a basis in V and declaring it to be an orthonormal one. Note that the version for inner product spaces is stronger than the one for vector spaces, because it says that we can always find an orthonormal basis, not just a basis.
The following theorem is a real-valued version of the Theorem.
Theorem. Let A : X -> X be an operator acting in a real inner product space. Suppose that all eigenvalues of A are real. Then there exists an orthonormal basis u1, u2, ..., un in X such that the matrix of A in this basis is upper triangular.
In other words, any real n x n matrix A with real eigenvalues can be represented as A = UTU* = UTU^T, where U is an orthogonal matrix and T is a real upper triangular matrix.
Proof. To prove the theorem we just need to analyse the proof of the previous Theorem. Let us assume (we can always do that without loss of generality) that the operator (matrix) A acts in R^n. Suppose the theorem is true for (n - 1) x (n - 1) matrices. As before, let λ1 be an eigenvalue of A (real by assumption), let u1 be a corresponding eigenvector of norm 1, and complete u1 to an orthonormal basis in R^n. The matrix of A in this basis has the block form above, where A1 is some real matrix. If we can prove that the matrix A1 has only real eigenvalues, then we are done. Indeed, then by the induction hypothesis there exists an orthonormal basis u2, ..., un in E = u1⊥ such that the matrix of A1 in this basis is upper triangular, so the matrix of A in the basis u1, u2, ..., un is also upper triangular. To show that A1 has only real eigenvalues, let us notice that
det(A - λI) = (λ1 - λ) det(A1 - λI)
(take the cofactor expansion in the first column, for example), and so any eigenvalue of A1 is also an eigenvalue of A. But A has only real eigenvalues!
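The triangular representation in these theorems is what numerical libraries call the Schur decomposition, and it can be computed directly. The sketch below is my own illustration and assumes SciPy is available.

import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [0.0, 1.0, 1.0]])

T, U = schur(A, output='complex')   # A = U T U*, U unitary, T upper triangular
print(np.allclose(U @ T @ U.conj().T, A))
print(np.allclose(np.tril(T, -1), 0))   # T is upper triangular
print(np.sort_complex(np.diag(T)))      # diagonal of T = eigenvalues of A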
Let us now show that all eigenvalues of a self-adjoint operator A = A* are real. If Ax = λx, x ≠ 0, then
λ(x, x) = (Ax, x) = (x, Ax) = conj(λ)(x, x),
and since (x, x) ≠ 0, we can conclude λ = conj(λ), so λ is real.
It also follows from the Theorem that eigenspaces of a self-adjoint operator are orthogonal. Let us give an alternative proof of this result.
Proposition. Let A = A* be a self-adjoint operator, and let u, v be its eigenvectors, Au = λu, Av = μv. Then, if λ ≠ μ, the eigenvectors u and v are orthogonal.
Proof. This proposition follows from the spectral theorem, but here we are giving a direct proof. Namely,
(Au, v) = (λu, v) = λ(u, v).
On the other hand,
(Au, v) = (u, A*v) = (u, Av) = (u, μv) = conj(μ)(u, v) = μ(u, v)
(the last equality holds because eigenvalues of a self-adjoint operator are real), so λ(u, v) = μ(u, v). If λ ≠ μ this is possible only if (u, v) = 0.
Now let us try to find what matrices are unitarily equivalent to a diagonal one. It is easy to check that for a diagonal matrix D
D*D = DD*.
Therefore A*A = AA* if the matrix of A in some orthonormal basis is diagonal.
Definition. An operator (matrix) N is called normal if N*N = NN*.
Clearly, any self-adjoint operator (A* = A) is normal. Also, any unitary operator U : X -> X is normal, since U*U = UU* = I. Note that a normal operator is an operator acting in one space, not from one space to another. So, if U is a unitary operator acting from one space to another, we cannot say that U is normal.
Theorem. Any normal operator N in a complex vector space has an orthonormal basis of eigenvectors.
In other words, any matrix N satisfying N*N = NN* can be represented as N = UDU*, where U is a unitary matrix and D is a diagonal one.
Remark. Note that in the above theorem, even if N is a real matrix, we did not claim that the matrices U and D are real. Moreover, it can easily be shown that if D is real, N must be self-adjoint.
Proof. To prove the Theorem we apply the previous Theorem to get an orthonormal basis such that the matrix of N in this basis is upper triangular. To complete the proof of the theorem we only need to show that an upper triangular normal matrix must be diagonal.
We will prove this using induction in the dimension of the matrix. The case of a 1 x 1 matrix is trivial, since any 1 x 1 matrix is diagonal.
Suppose we have proved that any (n - 1) x (n - 1) upper triangular normal matrix is diagonal, and we want to prove it for n x n matrices. Let N be an n x n upper triangular normal matrix. We can write it as
N = [ a11  a12 ... a1n ]
    [ 0               ]
    [ ..       N1     ]
    [ 0               ]
where N1 is an upper triangular (n - 1) x (n - 1) matrix.
Let us compare the upper left entries (first row, first column) of N*N and NN*. Direct computation shows that
(N*N)_{1,1} = |a11|²   and   (NN*)_{1,1} = |a11|² + |a12|² + ... + |a1n|².
So (N*N)_{1,1} = (NN*)_{1,1} if and only if a12 = ... = a1n = 0. Therefore the matrix N has the form
N = [ a11  0 ... 0 ]
    [ 0            ]
    [ ..     N1    ]
    [ 0            ]
and then
N*N = [ |a11|²    0    ]          NN* = [ |a11|²    0    ]
      [   0     N1*N1  ]   and          [   0     N1N1*  ],
so N1*N1 = N1N1*, i.e. N1 is normal, and by the induction hypothesis it is diagonal. So the matrix N is also diagonal.
The following proposition gives a very useful characterization of normal operators.
Proposition. An operator N : X -> X is normal if and only if
||Nx|| = ||N*x|| for all x ∈ X.
Proof. Let N be normal, N*N = NN*. Then
||Nx||² = (Nx, Nx) = (N*Nx, x) = (NN*x, x) = (N*x, N*x) = ||N*x||²,
so ||Nx|| = ||N*x||.
Now let ||Nx|| = ||N*x|| for all x ∈ X. The polarization identities imply that for all x, y ∈ X
(N*Nx, y) = (Nx, Ny) = (1/4) Σ_{α = ±1, ±i} α ||Nx + αNy||² = (1/4) Σ_{α = ±1, ±i} α ||N(x + αy)||²,
and similarly
(NN*x, y) = (N*x, N*y) = (1/4) Σ_{α = ±1, ±i} α ||N*(x + αy)||².
Since ||Nz|| = ||N*z|| for all z, the two right-hand sides coincide, so (N*Nx, y) = (NN*x, y) for all x, y ∈ X, and therefore N*N = NN*.
We will use the notation A > 0 for positive definite operators, and A >= 0 for positive semidefinite ones. The following theorem describes positive definite and semidefinite operators.
Theorem. Let A = A*. Then
1. A > 0 if and only if all eigenvalues of A are positive;
2. A >= 0 if and only if all eigenvalues of A are non-negative.
Proof. Pick an orthonormal basis such that the matrix of A in this basis is diagonal. To finish the proof it remains to notice that a diagonal matrix is positive definite (positive semidefinite) if and only if all its diagonal entries are positive (non-negative).
Corollary. Let A = A* >= 0 be a positive semidefinite operator. There exists a unique positive semidefinite operator B such that B² = A. Such a B is called the (positive) square root of A and is denoted √A or A^{1/2}.
Proof. Let us prove that √A exists. Let v1, v2, ..., vn be an orthonormal basis of eigenvectors of A, and let λ1, λ2, ..., λn be the corresponding eigenvalues. Note that since A >= 0, all λk >= 0. In the basis v1, v2, ..., vn the matrix of A is the diagonal matrix diag{λ1, λ2, ..., λn}. Define the matrix of B in the same basis as diag{√λ1, √λ2, ..., √λn}. Then B = B* >= 0 and B² = A.
To prove that such a B is unique, let us suppose that there exists an operator C = C* >= 0 such that C² = A. Let u1, u2, ..., un be an orthonormal basis of eigenvectors of C, and let μ1, μ2, ..., μn be the corresponding eigenvalues (note that μk >= 0 for all k). The matrix of C in the basis u1, u2, ..., un is the diagonal matrix diag{μ1, μ2, ..., μn}, and therefore the matrix of A = C² in the same basis is diag{μ1², μ2², ..., μn²}. This implies that any eigenvalue λ of A is of the form μk², and, moreover, if Ax = λx, then Cx = √λ x. Therefore in the basis v1, v2, ..., vn the matrix of C has the diagonal form diag{√λ1, √λ2, ..., √λn}, i.e. B = C.
Modulus of an operator. Singular values. Consider an operator A : X -> Y. Its Hermitian square A*A is a positive semidefinite operator acting in X. Indeed,
(A*A)* = A*(A*)* = A*A
and
(A*Ax, x) = (Ax, Ax) = ||Ax||² >= 0 for all x ∈ X.
Therefore, there exists a (unique) positive semidefinite square root R = √(A*A). This operator R is called the modulus of the operator A, and is often denoted |A|. The modulus of A shows how "big" the operator A is:
Proposition. For a linear operator A : X -> Y
|| |A|x || = ||Ax|| for all x ∈ X.
Proof. For any x ∈ X
|| |A|x ||² = (|A|x, |A|x) = (|A|²x, x) = (A*Ax, x) = (Ax, Ax) = ||Ax||².
Corollary.
Ker A = Ker |A| = (Ran |A|)⊥.
Proof. The first equality follows immediately from the Proposition; the second one follows from the identity Ker T = (Ran T*)⊥ (|A| is self-adjoint).
Theorem (Polar decomposition of an operator). Let A : X -> X be an operator (square matrix). Then A can be represented as
A = U|A|,
where U is a unitary operator.
Remark. The unitary operator U is generally not unique. As one will see from the proof of the theorem, U is unique if and only if A is invertible.
Remark. The polar decomposition A = U|A| also holds for operators A : X -> Y acting from one space to another. But in this case we can only guarantee that U is an isometry from Ran |A| = (Ker A)⊥ to Y. If dim X <= dim Y this isometry can be extended to an isometry from the whole X to Y (if dim X = dim Y this will be a unitary operator).
Proof. Consider a vector x ∈ Ran |A|. Then the vector x can be represented as x = |A|v for some vector v ∈ X. Define U0 x := Av. By the Proposition
||U0 x|| = ||Av|| = || |A|v || = ||x||,
so it looks like U0 is an isometry from Ran |A| to X. But first we need to prove that U0 is well defined. Let v1 be another vector such that x = |A|v1. But x = |A|v = |A|v1 means that v - v1 ∈ Ker |A| = Ker A, so Av = Av1, meaning that U0 x is well defined. By the construction
A = U0 |A|.
The operator U0 is so far defined only on Ran |A|. To extend it to a unitary operator U on the whole space X, define U1 to be any unitary operator from (Ran |A|)⊥ = Ker |A| = Ker A onto (Ran A)⊥ = Ker A*. It is always possible to do this, since for square matrices dim Ker A = dim Ker A* (the Rank Theorem). It is easy to check that U = U0 + U1 is a unitary operator, and that
A = U|A|.
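A numerical illustration of the polar decomposition (added here, not from the original text, and assuming SciPy is available): scipy.linalg.polar returns a unitary factor and a positive semidefinite factor with A = U|A|.

import numpy as np
from scipy.linalg import polar, sqrtm

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

U, P = polar(A, side='right')    # A = U P, with U unitary and P positive semidefinite
print(np.allclose(U @ P, A))
print(np.allclose(U.T @ U, np.eye(2)))   # U is orthogonal (real unitary)
print(np.allclose(P, sqrtm(A.T @ A)))    # P is the square root of A*A, i.e. |A|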
Singular Values
Eigenvalues of |A| are called the singular values of A. In other words, if λ1, λ2, ..., λn are the eigenvalues of A*A, then √λ1, √λ2, ..., √λn are the singular values of A.
Consider an operator A : X -> Y, and let σ1, σ2, ..., σn be the singular values of A counting multiplicities. Assume also that σ1, σ2, ..., σr are the non-zero singular values of A, counting multiplicities. That means σk = 0 for k > r. By the definition of singular values, σ1², σ2², ..., σn² are the eigenvalues of A*A, and let v1, v2, ..., vn be an orthonormal basis of eigenvectors of A*A, A*Avk = σk² vk.
Proposition. The system
wk := (1/σk) Avk, k = 1, 2, ..., r,
is an orthonormal system.
Proof.
(Avj, Avk) = (A*Avj, vk) = σj²(vj, vk) = { 0 if j ≠ k, σj² if j = k },
since v1, v2, ..., vr is an orthonormal system.
In the notation of the above proposition, the operator A can be represented as
A = Σ_{k=1}^{r} σk wk vk*,
or, equivalently,
Ax = Σ_{k=1}^{r} σk (x, vk) wk for all x ∈ X.
Indeed, both sides agree on the basis vectors: for j = 1, 2, ..., r
Σ_{k=1}^{r} σk (vj, vk) wk = σj wj = Avj,
and for j > r both sides give 0 (Avj = 0 because ||Avj||² = (A*Avj, vj) = σj² ||vj||² = 0). So the operators on the left and right sides of the equation coincide on the basis v1, v2, ..., vn, so they are equal.
Definition. The above representation
A = Σ_{k=1}^{r} σk wk vk*,
where σk > 0 and v1, ..., vr, w1, ..., wr are orthonormal systems, is called a singular value decomposition of the operator A.
Remark. The singular value decomposition is not unique. Why?
Lemma. Suppose A can be represented as
A = Σ_{k=1}^{r} σk wk vk*,
where σk > 0 and v1, v2, ..., vr and w1, w2, ..., wr are some orthonormal systems. Then this representation gives a singular value decomposition of A.
Proof. We only need to show that the vk are eigenvectors of A*A with A*Avk = σk² vk. Since w1, w2, ..., wr is an orthonormal system,
wj* wk = (wk, wj) = δ_kj := { 1 if j = k, 0 if j ≠ k },
and therefore
A*A = ( Σ_{j=1}^{r} σj vj wj* )( Σ_{k=1}^{r} σk wk vk* ) = Σ_{k=1}^{r} σk² vk vk*.
Since v1, v2, ..., vr is an orthonormal system,
A*A vj = Σ_{k=1}^{r} σk² vk (vk* vj) = σj² vj,
so the vk are indeed eigenvectors of A*A corresponding to the eigenvalues σk².
Matrix representation of the singular value decomposition. The singular value decomposition can be written in a nice matrix form. It is especially easy to do if the operator A is invertible. In this case dim X = dim Y = n, and the operator A has n non-zero singular values (counting multiplicities), so the singular value decomposition has the form
A = Σ_{k=1}^{n} σk wk vk*,
where v1, v2, ..., vn and w1, w2, ..., wn are orthonormal bases in X and Y respectively. It can be rewritten as
A = W Σ V*,
where Σ = diag{σ1, σ2, ..., σn}, and V, W are the unitary matrices with columns v1, v2, ..., vn and w1, w2, ..., wn respectively.
Such a representation can be written even if A is not invertible. Let us first consider the case dim X = dim Y = n, and let σ1, σ2, ..., σr, r < n, be the non-zero singular values of A. Let
A = Σ_{k=1}^{r} σk wk vk*
be a singular value decomposition of A. To write it in matrix form we complete the systems {vk}_{k=1}^{r} and {wk}_{k=1}^{r} to orthonormal bases. Namely, let v_{r+1}, ..., vn and w_{r+1}, ..., wn be orthonormal bases in Ker A = Ker |A| and (Ran A)⊥ respectively. Then v1, v2, ..., vn and w1, w2, ..., wn are orthonormal bases in X and Y respectively, and A can be represented as
A = W Σ V*,
where Σ is the n x n diagonal matrix diag{σ1, ..., σr, 0, ..., 0}, and V, W are the n x n unitary matrices with columns v1, v2, ..., vn and w1, w2, ..., wn respectively.
Remark. Another way to interpret the singular value decomposition A = W Σ V* is to say that Σ is the matrix of A in the (orthonormal) bases A = v1, v2, ..., vn and B := w1, w2, ..., wn, i.e. that Σ = [A]_BA. We will use this interpretation later.
From singular value decomposition to the polar decomposition. Note that if we know the singular value decomposition A = W Σ V* of a square matrix A, we can write a polar decomposition of A:
A = W Σ V* = (W V*)(V Σ V*) = U|A|,
so |A| = V Σ V* and U = W V*.
General matrix form of the singular value decomposition. In the general case when dim X = n, dim Y = m (i.e. A is an m x n matrix), the above representation A = W Σ V* is also possible. Namely, if
A = Σ_{k=1}^{r} σk wk vk*
is a singular value decomposition of A, we need to complete the systems v1, v2, ..., vr and w1, w2, ..., wr to orthonormal bases in X and Y respectively. Then A can be represented as
A = W Σ V*,
where V ∈ M_{n x n} and W ∈ M_{m x m} are unitary matrices with columns v1, v2, ..., vn and w1, w2, ..., wm respectively, and Σ is the "diagonal" m x n matrix
Σ_{j,k} = { σk if j = k <= r, 0 otherwise }.
In other words, to get the matrix Σ one has to take the diagonal matrix diag{σ1, σ2, ..., σr} and make it into an m x n matrix by adding extra zeroes "south and east".
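The matrix form A = W Σ V* is exactly what numerical libraries compute. The snippet below is an added illustration (assuming NumPy is available) and checks the shapes and the reconstruction for a rectangular matrix.

import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])       # a 2 x 3 matrix, so m = 2, n = 3

W, s, Vh = np.linalg.svd(A)           # full SVD: W is 2x2, Vh = V*, s holds singular values
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)  # "diagonal" m x n matrix padded with zeros

print(s)                              # non-increasing singular values
print(np.allclose(W @ Sigma @ Vh, A)) # A = W Sigma V*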
SINGULAR VALUES
As we discussed above, the singular value decomposition is simply diagonalization with respect to two different orthonormal bases. Since we have two different bases here, we cannot say much about the spectral properties of an operator from its singular value decomposition. For example, the diagonal entries of Σ in the singular value decomposition are not the eigenvalues of A. Note also that for A = W Σ V* as above we generally have A^n ≠ W Σ^n V*.
However, as the examples below show, singular values tell us a lot about the so-called metric properties of a linear transformation.
Final Remark: performing a singular value decomposition requires finding eigenvalues and eigenvectors of the Hermitian (self-adjoint) matrix A*A. To find eigenvalues we usually computed the characteristic polynomial, found its roots, and so on... This looks like quite a complicated process, especially if one takes into account that there is no formula for finding roots of polynomials of degree 5 and higher. However, there are very effective numerical methods for finding eigenvalues and eigenvectors of a Hermitian matrix up to any given precision. These methods do not involve computing the characteristic polynomial and finding its roots; they compute approximate eigenvalues and eigenvectors directly by an iterative procedure. Because a Hermitian matrix has an orthogonal basis of eigenvectors, these methods work extremely well. We will not discuss these methods here; that goes beyond the scope of this book. However, you should believe me that there are very effective numerical methods for computing eigenvalues and eigenvectors of a Hermitian matrix and for finding the singular value decomposition, and these methods are only a little more computationally intensive than solving a linear system.
Image of the unit ball. Consider for example the following problem: let A : R^n -> R^m be a linear transformation, and let
B = {x ∈ R^n : ||x|| <= 1}
be the closed unit ball in R^n. We want to describe A(B), i.e. we want to find out how the unit ball is transformed under the linear transformation.
Let us first consider the simplest case, when A is a diagonal matrix A = diag{a1, a2, ..., an}, ak > 0, k = 1, 2, ..., n. Then for x = (x1, x2, ..., xn)^T and y = (y1, y2, ..., yn)^T = Ax we have yk = ak xk (equivalently, xk = yk/ak) for k = 1, 2, ..., n, so y = Ax for some x with ||x|| <= 1 if and only if the coordinates y1, y2, ..., yn satisfy the inequality
y1²/a1² + y2²/a2² + ... + yn²/an² <= 1
(this is simply the inequality Σ_k |xk|² <= 1). The set of points in R^n satisfying the above inequality is called an ellipsoid. If n = 2 it is an ellipse with half-axes a1 and a2; for n = 3 it is an ellipsoid with half-axes a1, a2 and a3. In R^n the geometry of this set is also easy to visualize, and we call that set an ellipsoid with half-axes a1, a2, ..., an. The vectors e1, e2, ..., en, or, more precisely, the corresponding lines, are called the principal axes of the ellipsoid.
The singular value decomposition essentially says that any operator in an inner product space is diagonal with respect to a pair of orthonormal bases (see the Remark above). Namely, consider the orthonormal bases A = v1, v2, ..., vn and B = w1, w2, ..., wn from the singular value decomposition. Then the matrix of A in these bases is the diagonal matrix [A]_BA = diag{σ1, σ2, ..., σn}. Assuming that all σk > 0 and essentially repeating the above reasoning, it is easy to show that a point y = Ax belongs to A(B) if and only if it satisfies the inequality
y1²/σ1² + y2²/σ2² + ... + yn²/σn² = Σ_{k=1}^{n} yk²/σk² <= 1,
where y1, y2, ..., yn are the coordinates of y in the orthonormal basis B = w1, w2, ..., wn, not in the standard one. Similarly, (x1, x2, ..., xn)^T = [x]_A. But that is essentially the same ellipsoid as before, only "rotated" (with different, but still orthogonal, principal axes)! There is also an alternative explanation, which is presented below.
Consider the general case, when the matrix A is not necessarily square, and (or) not all singular values are non-zero. Consider first the case of the "diagonal" matrix Σ. It is easy to see that the image Σ(B) of the closed unit ball B is an ellipsoid with half-axes σ1, σ2, ..., σr, not in the whole space but in Ran Σ.
Consider now the general case, A = W Σ V*, where V, W are unitary operators. Unitary transformations do not change the unit ball (because they preserve the norm), so V*(B) = B. We know that Σ(B) is an ellipsoid in Ran Σ with half-axes σ1, σ2, ..., σr. Unitary transformations do not change the geometry of objects, so W(Σ(B)) is also an ellipsoid with the same half-axes. It is not hard to see from the decomposition A = W Σ V* (using the fact that both W and V are invertible) that W transforms Ran Σ to Ran A, so we can conclude: the image A(B) of the closed unit ball B is an ellipsoid in Ran A with half-axes σ1, σ2, ..., σr. Here r is the number of non-zero singular values, i.e. the rank of A.
Operator norm of a linear transformation. Given a linear transformation A : X -> Y, let us consider the following optimization problem: find the maximum of ||Ax|| on the closed unit ball B = {x ∈ X : ||x|| <= 1}. Again, the singular value decomposition allows us to solve the problem. For a diagonal matrix A with non-negative entries the maximum is exactly the maximal diagonal entry. Indeed, let s1, s2, ..., sr be the non-zero diagonal entries of A, and let s1 be the maximal one. Since
Ax = Σ_{k=1}^{r} sk xk ek,
we have
||Ax||² = Σ_{k=1}^{r} sk² |xk|² <= s1² Σ_{k=1}^{r} |xk|² <= s1² ||x||²,
so ||Ax|| <= s1 ||x||. On the other hand, ||Ae1|| = ||s1 e1|| = s1 ||e1||, so indeed s1 is the maximum of ||Ax|| on the closed unit ball B. Note that in the above reasoning we did not assume that the matrix A is square; we only assumed that all entries outside the "main diagonal" are 0, so the formula holds.
To treat the general case let us consider the singular value decomposition A = W Σ V*, where W, V are unitary operators and Σ is the diagonal matrix with non-negative entries. Since unitary transformations do not change the norm, one can conclude that the maximum of ||Ax|| on the unit ball B is the maximal diagonal entry of Σ, i.e. that the maximum of ||Ax|| on the unit ball B is the maximal singular value of A.
Definition. The quantity
max{ ||Ax|| : x ∈ X, ||x|| <= 1 }
is called the operator norm of A and is denoted ||A||. It is an easy exercise to see that ||A|| satisfies all the properties of a norm:
1. ||αA|| = |α| · ||A||;
2. ||A + B|| <= ||A|| + ||B||;
3. ||A|| >= 0 for all A;
4. ||A|| = 0 if and only if A = 0.
So it is indeed a norm on the space of linear transformations from X to Y. One of the main properties of the operator norm is the inequality
||Ax|| <= ||A|| · ||x||,
which follows easily from the homogeneity of the norm ||x||.
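The operator norm is directly computable; the sketch below is my own illustration (assuming NumPy is available) of the fact that it equals the largest singular value and dominates ||Ax||/||x||.

import numpy as np

A = np.random.randn(4, 3)

op_norm = np.linalg.norm(A, 2)          # operator norm = largest singular value
s = np.linalg.svd(A, compute_uv=False)
print(np.isclose(op_norm, s[0]))        # ||A|| = s_1

x = np.random.randn(3)
print(np.linalg.norm(A @ x) <= op_norm * np.linalg.norm(x) + 1e-12)   # ||Ax|| <= ||A|| ||x||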
In fact, it can be shown that the operator norm ||A|| is the best (smallest) number C >= 0 such that
||Ax|| <= C ||x|| for all x ∈ X.
This is often used as a definition of the operator norm.
On the space of linear transformations we already have one norm, the Frobenius, or Hilbert-Schmidt, norm ||A||_2, defined by ||A||_2² = trace(A*A). The eigenvalues of A*A are the squares of the singular values of A (counting multiplicities). Recalling that the trace equals the sum of the eigenvalues, we conclude that
||A||_2² = trace(A*A) = Σ_k sk²,
where s1, s2, ... are the singular values of A. On the other hand, we know that the operator norm of A equals its largest singular value, i.e. ||A|| = s1. So we can conclude that ||A|| <= ||A||_2, i.e. that the operator norm of a matrix cannot be more than its Frobenius norm. This statement also admits a direct proof using the Cauchy-Schwarz inequality, and such a proof is presented in some textbooks. The beauty of the proof presented here is that it does not require any computations and illuminates the reasons behind the inequality.
Condition number of a matrix. Suppose we have an invertible matrix A and we want to solve the equation Ax = b. The solution, of course, is given by x = A^{-1}b, but we want to investigate what happens if we know the data only approximately. That happens in real life, when the data is obtained, for example, from experiments. But even if we have exact data, round-off errors during computations by a computer may have the same effect of distorting the data.
Let us consider the simplest model: suppose there is a small error in the right side of the equation. That means, instead of the equation Ax = b we are solving
Ax = b + Δb, where Δb is a small perturbation of the right side b. So, instead of the exact solution x of Ax = b we get the approximate solution x + Δx of A(x + Δx) = b + Δb. We are assuming that A is invertible, so Δx = A^{-1}Δb.
We want to know how big the relative error in the solution ||Δx||/||x|| is in comparison with the relative error in the right side ||Δb||/||b||. It is easy to see that
||Δx||/||x|| = ||A^{-1}Δb||/||x|| = (||A^{-1}Δb||/||Δb||) · (||Δb||/||b||) · (||b||/||x||).
Since ||A^{-1}Δb|| <= ||A^{-1}|| · ||Δb|| and ||b|| = ||Ax|| <= ||A|| · ||x||, we can conclude that
||Δx||/||x|| <= ||A^{-1}|| · ||A|| · ||Δb||/||b||.
The quantity ||A|| · ||A^{-1}|| is called the condition number of the matrix A. It estimates how the relative error in the solution x depends on the relative error in the right side b.
Let us see how this quantity is related to singular values. Let s1, s2, ..., sn be the singular values of A, and let us assume that s1 is the largest singular value and sn is the smallest. We know that the (operator) norm of an operator equals its largest singular value, so
||A|| = s1,  ||A^{-1}|| = 1/sn,  and therefore  ||A|| · ||A^{-1}|| = s1/sn.
In other words, the condition number of a matrix equals the ratio of the largest and the smallest singular values.
We deduced above that
||Δx||/||x|| <= ||A^{-1}|| · ||A|| · ||Δb||/||b||.
It is not hard to see that this estimate is sharp, i.e. that it is possible to pick a right side b and an error Δb such that we have the equality
||Δx||/||x|| = ||A^{-1}|| · ||A|| · ||Δb||/||b||.
We just put b = w1 and Δb = a·wn, where w1 and wn are the first and the last columns of the matrix W in the singular value decomposition A = W Σ V*; here a can be any scalar.
A matrix is called well conditioned if its condition number is not too big. If the condition number is big, the matrix is called ill conditioned. What is "big" here depends on the problem: with what precision you can find your right side, what precision is required for the solution, etc.
Effective rank of a matrix. Theoretically, the rank of a matrix is easy to compute: one just needs to row reduce the matrix and count the pivots. However, in practical
applications not everything is so easy. The main reason is that very often we do not know the exact matrix; we only know its approximation up to some precision. Moreover, even if we know the exact matrix, most computer programs introduce round-off errors in the computations, so effectively we cannot distinguish between a zero pivot and a very small pivot.
A simple naive idea for working with round-off errors is as follows. When computing the rank (and other objects related to it, like the column space, kernel, etc.) one simply sets up a tolerance (some small number) and, if a pivot is smaller than the tolerance, counts it as zero. The advantage of this approach is its simplicity, since it is very easy to program. However, the main disadvantage is that it is impossible to see what the tolerance is responsible for. For example, what do we lose if we set the tolerance equal to 10^-6? How much better will 10^-8 be?
While the above approach works well for well conditioned matrices, it is not very reliable in the general case. A better approach is to use singular values. It requires more computations, but gives much better results, which are easier to interpret. In this approach we also set up some small number as a tolerance, and then perform the singular value decomposition. Then we simply treat singular values smaller than the tolerance as zero. The advantage of this approach is that we can see what we are doing: the singular values are the half-axes of the ellipsoid A(B) (B is the closed unit ball), so by setting up the tolerance we are just deciding how "thin" the ellipsoid should be to be considered "flat".
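Both quantities are exposed by standard numerical routines. The sketch below is my own illustration (assuming NumPy is available); it computes the condition number as the ratio s1/sn and an effective rank with an explicit tolerance on the singular values.

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-9]])        # nearly singular, hence ill conditioned

s = np.linalg.svd(A, compute_uv=False)
print(s[0] / s[-1])                                   # condition number = largest/smallest singular value
print(np.isclose(np.linalg.cond(A, 2), s[0] / s[-1]))

tol = 1e-6
print(np.sum(s > tol))                    # effective rank: singular values above the tolerance
print(np.linalg.matrix_rank(A, tol=tol))  # same idea, built in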
Theorem. Let U be an orthogonal operator in R^n, and suppose that det U = 1. Then there exists an orthonormal basis v1, v2, ..., vn such that the matrix of U in this basis has the block diagonal form
[ R_φ1                      ]
[       R_φ2                ]
[             ...           ]
[                  R_φk     ]
[                      I_{n-2k} ]
where the R_φj are 2-dimensional rotations,
R_φj = [ cos φj  -sin φj ]
       [ sin φj   cos φj ],
and I_{n-2k} stands for the identity matrix of size (n - 2k) x (n - 2k).
Proof. We know that if p is a polynomial with real coefficients, p(z) = Σ_{k=0}^{n} ak z^k, and λ is its complex root, p(λ) = 0, then conj(λ) is also a root, p(conj λ) = 0. In particular, the complex eigenvalues of the real matrix U come in conjugate pairs. We also know that eigenvalues of a unitary matrix have absolute value 1, so all complex eigenvalues of U can be written as λk = cos αk + i sin αk.
Fix a pair of complex eigenvalues λ and conj(λ), and let u ∈ C^n be an eigenvector of U, Uu = λu. Then U conj(u) = conj(λ) conj(u). Write
x := Re u = (u + conj(u))/2,   y := Im u = (u - conj(u))/(2i),
so u = x + iy (note that x, y are real vectors, i.e. vectors with real entries). Then
Ux = U(u + conj(u))/2 = (λu + conj(λ)conj(u))/2 = Re(λu),
and similarly
Uy = U(u - conj(u))/(2i) = (λu - conj(λ)conj(u))/(2i) = Im(λu).
Since λ = cos α + i sin α, we have
λu = (cos α + i sin α)(x + iy) = ((cos α)x - (sin α)y) + i((cos α)y + (sin α)x),
so
Ux = Re(λu) = (cos α)x - (sin α)y,   Uy = Im(λu) = (cos α)y + (sin α)x.
In other words, U leaves the 2-dimensional subspace E_λ spanned by the vectors x, y invariant, and the matrix of the restriction of U onto this subspace is
[ cos α   sin α ]
[ -sin α  cos α ],
i.e. the rotation matrix R_{-α}.
Note that the vectors u and conj(u) (eigenvectors of a unitary matrix corresponding to different eigenvalues) are orthogonal, so by the Pythagorean Theorem ||x|| = ||y|| = ||u||/√2, and x ⊥ y. Normalizing, we can complete v1 = x/||x||, v2 = y/||y|| to an orthonormal basis in R^n. Since UE_λ ⊂ E_λ, i.e. E_λ is an invariant subspace of U, the matrix of U in this basis has the block form
[ R_{-α}   * ]
[   0     U1 ]
where 0 stands for the (n - 2) x 2 block of zeroes. Since the rotation matrix R_{-α} is invertible, we have UE_λ = E_λ. Therefore U*E_λ = U^{-1}E_λ = E_λ, so the matrix of U in the basis we constructed is in fact block diagonal,
[ R_{-α}   0 ]
[   0     U1 ].
Since U is unitary, so is this block diagonal matrix, and, since U1 is square, it is also unitary. If U1 has complex eigenvalues we can apply the same procedure to decrease its size by 2, until we are left with a block that has only real eigenvalues. Real eigenvalues of an orthogonal matrix can only be +1 or -1, so in some orthonormal basis the matrix of U has the form
[ R_{-α1}                              ]
[         R_{-α2}                      ]
[                  ...                 ]
[                       -I_r           ]
[                             I_l      ]
here I_r and I_l are identity matrices of size r x r and l x l respectively. Since det U = 1, the multiplicity of the eigenvalue -1 (i.e. r) must be even. Note that the 2 x 2 matrix -I_2 can be interpreted as the rotation through the angle π. Therefore, the above matrix has the form given in the conclusion of the theorem with φk = -αk or φk = π.
Let us give a different interpretation of the Theorem. Define T_j to be the rotation through φj in the plane spanned by the basis vectors v_{2j-1}, v_{2j}. Then the Theorem simply says that U is the composition of the rotations T_j, j = 1, 2, ..., k. Note that because the rotations T_j act in mutually orthogonal planes, they commute, i.e. it does not matter in what order we take the composition. So, the theorem can be interpreted as follows: any rotation in R^n can be represented as a composition of at most n/2 commuting planar rotations.
If an orthogonal matrix has determinant -1, its structure is described by the following theorem.
Theorem. Let U be an orthogonal operator in R^n, and let det U = -1. Then there exists an orthonormal basis v1, v2, ..., vn such that the matrix of U in this basis has the block diagonal form
[ R_φ1                          ]
[       ...                     ]
[            R_φk               ]
[                  -1           ]
[                       I_r     ]
where r = n - 2k - 1, the R_φj are 2-dimensional rotations,
R_φj = [ cos φj  -sin φj ]
       [ sin φj   cos φj ],
and I_r stands for the identity matrix of size r x r. The modifications that one should make to the proof of the previous Theorem are pretty obvious. Note that it follows from the above theorem that an orthogonal 2 x 2 matrix U with determinant -1 is always a reflection.
Let us now fix an orthonormal basis, say the standard basis in R^n. We call an elementary rotation a rotation in the xj-xk plane, i.e. a linear transformation which changes only the coordinates xj and xk, and acts on these two coordinates as a plane rotation.
Theorem. Any rotation U (i.e. an orthogonal transformation U with det U = 1) can be represented as a product of at most n(n - 1)/2 elementary rotations.
To prove the theorem we will need the following simple lemmas.
Lemma. Let x = (x1, x2)^T ∈ R². Then there exists a rotation R_α such that R_α x = (a, 0)^T, where a = √(x1² + x2²). One can just draw a picture or/and write a formula for R_α.
Lemma. Let x = (x1, x2, ..., xn)^T ∈ R^n. There exist n - 1 elementary rotations R1, R2, ..., R_{n-1} such that R_{n-1}...R2R1x = (a, 0, 0, ..., 0)^T, where a = √(x1² + x2² + ... + xn²).
Proof. The idea of the proof of the lemma is very simple. We use an elementary rotation R1 in the x_{n-1}-x_n plane to "kill" the last coordinate of x. Then we use an elementary rotation R2 in the x_{n-2}-x_{n-1} plane to "kill" the coordinate number n - 1 of R1x (the rotation R2 does not change the last coordinate, so the last coordinate of R2R1x remains zero), and so on.
For a formal proof we will use induction in n. The case n = 1 is trivial, since any vector in R^1 is already of the desired form. Assuming now that the Lemma is true for n - 1, let us prove it for n. By the previous Lemma there exists a 2 x 2 rotation matrix R_α such that
R_α [ x_{n-1} ]  =  [ a_{n-1} ]
    [ x_n     ]     [ 0       ],
where a_{n-1} = √(x_{n-1}² + x_n²). Define the n x n elementary rotation R1 by
R1 = [ I_{n-2}   0   ]
     [   0      R_α  ]
(I_{n-2} is the (n - 2) x (n - 2) identity matrix); then R1x = (x1, x2, ..., x_{n-2}, a_{n-1}, 0)^T.
We assumed that the conclusion of the lemma holds for n - 1, so there exist n - 2 elementary rotations (let us call them R2, R3, ..., R_{n-1}) in R^{n-1} which transform the vector (x1, x2, ..., x_{n-2}, a_{n-1})^T ∈ R^{n-1} into the vector (a, 0, ..., 0)^T ∈ R^{n-1}. In other words,
R_{n-1}...R3R2 (x1, x2, ..., x_{n-2}, a_{n-1})^T = (a, 0, ..., 0)^T.
We can always assume that the elementary rotations R2, R3, ..., R_{n-1} act in R^n, simply by assuming that they do not change the last coordinate. Then
R_{n-1}...R3R2R1x = (a, 0, ..., 0)^T ∈ R^n.
It can be checked directly that a = √(x1² + x2² + ... + xn²), but we can apply the following indirect reasoning: we know that orthogonal transformations preserve the norm, and we know that a >= 0; but then we do not have any choice, the only possibility is a = √(x1² + x2² + ... + xn²).
Lemma. Let A be an n x n matrix with real entries. There exist elementary rotations R1, R2, ..., RN, N <= n(n - 1)/2, such that the matrix B = RN...R2R1A is upper triangular, and, moreover, all its diagonal entries except possibly the last one B_{n,n} are non-negative.
Proof. We will use induction in n. The case n = 1 is trivial, since we can say that any 1 x 1 matrix is of the desired form.
Let us consider the case n = 2. Let a1 be the first column of A. There exists a rotation R which "kills" the second coordinate of a1, making the first coordinate non-negative. Then the matrix B = RA is of the desired form.
Let us now assume that the lemma holds for (n - 1) x (n - 1) matrices, and we want to prove it for n x n matrices. For the n x n matrix A let a1 be its first column. We can find n - 1 elementary rotations (say R1, R2, ..., R_{n-1}) which transform a1 into (a, 0, ..., 0)^T ∈ R^n, a >= 0. So, the matrix R_{n-1}...R2R1A has the block triangular form
R_{n-1}...R2R1A = [ a   * ]
                  [ 0  A1 ],
where A1 is an (n - 1) x (n - 1) block. We assumed that the lemma holds for n - 1, so A1 can be transformed by at most (n - 1)(n - 2)/2 rotations into the desired upper triangular form. Note that these rotations act in R^{n-1} (only on the coordinates x2, x3, ..., xn), but we can always assume that they act on the whole R^n, simply by assuming that they do not change the first coordinate. Then these rotations do not change the vector (a, 0, ..., 0)^T (the first column of R_{n-1}...R2R1A), so the matrix A can be transformed into the desired upper triangular form by at most n - 1 + (n - 1)(n - 2)/2 = n(n - 1)/2 elementary rotations.
Proof of the Theorem. There exist elementary rotations R1, R2, ..., RN such that the matrix U1 = RN...R2R1U is upper triangular, and all its diagonal entries, except maybe the last one, are non-negative. Note that the matrix U1 is orthogonal. Any orthogonal matrix is normal, and we know that an upper triangular matrix can be normal only if it is diagonal. Therefore, U1 is a diagonal matrix. We know that an eigenvalue of an orthogonal matrix can only be 1 or -1, so we can have only 1 or -1 on the diagonal of U1. But we know that all diagonal entries of U1, except maybe the last one, are non-negative, so all the diagonal entries of U1, except maybe the last one, are 1. The last diagonal entry can be 1 or -1. Since elementary rotations have determinant 1, we can conclude that det U1 = det U = 1, so the last diagonal entry also must be 1. So U1 = I, and therefore U can be represented as a product of elementary rotations
U = R1^{-1} R2^{-1} ... RN^{-1}.
Here we use the fact that the inverse of an elementary rotation is an elementary rotation as well.
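The "killing" of coordinates by elementary rotations is exactly how Givens rotations are used in numerical linear algebra. The sketch below is my own illustration of the vector lemma (assuming NumPy is available; the helper name elementary_rotation is mine).

import numpy as np

def elementary_rotation(n, i, j, c, s):
    # plane rotation acting only on coordinates i and j; identity elsewhere
    R = np.eye(n)
    R[i, i], R[j, j] = c, c
    R[i, j], R[j, i] = s, -s
    return R

x = np.array([3.0, -1.0, 2.0, 2.0])
n = len(x)

# kill coordinates from the last one up, as in the proof of the lemma
for j in range(n - 1, 0, -1):
    r = np.hypot(x[j - 1], x[j])
    if r == 0.0:
        continue
    c, s = x[j - 1] / r, x[j] / r
    x = elementary_rotation(n, j - 1, j, c, s) @ x   # zeroes coordinate j

print(x)   # (a, 0, 0, 0) with a = norm of the original vector
print(np.isclose(x[0], np.linalg.norm([3.0, -1.0, 2.0, 2.0])))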
Orientation
Motivation. In the figures below we show orthonormal bases in R² and R³ respectively. In each figure, the basis (b) can be obtained from the standard basis (a) by a rotation, while it is impossible to rotate the standard basis to get the basis (c) (so that ek goes to vk for all k). You have probably heard the word "orientation" before, and you probably know that the bases (a) and (b) have positive orientation, and the orientation of the basis (c) is negative. You also probably know some rules to determine the orientation, like the right hand rule from physics. So, if you can see a basis, say in R³, you probably can say what orientation it has. But what if you are only given the coordinates of the vectors v1, v2, v3? Of course, you can try to draw a picture to visualize the vectors, and then see what the orientation is. But this is not always easy. Moreover, how do you "explain" this to a computer? It turns out that there is an easier way. Let us explain it. We need to check whether it is possible to get a basis v1, v2, v3 in R³ by rotating the standard basis e1, e2, e3, i.e. whether there exists a rotation U with Uek = vk, k = 1, 2, 3.
[Fig.: Standard bases in R², panels (a)-(c)]
[Fig.: Orientation in R³, panels (a)-(c)]
There is a unique linear transformation U such that its matrix (in the standard basis) is the matrix with columns v1, v2, v3. It is an orthogonal matrix (because it transforms an orthonormal basis to an orthonormal basis), so we need to see when it is a rotation. The Theorems above give us the answer: the matrix U is a rotation if and only if det U = 1. Note that (for 3 x 3 matrices) if det U = -1, then U is the composition of a rotation about some axis and a reflection in the plane of rotation, i.e. in the plane orthogonal to this axis. This gives us a motivation for the formal definition below.
Definition. Let a and b be two bases in a real vector space X. We say that the bases a and b have the same orientation if the change of coordinates matrix [I]_{b,a} has positive determinant, and say that they have different orientations if the determinant of [I]_{b,a} is negative. Note that since [I]_{a,b} = ([I]_{b,a})^{-1}, the determinants of [I]_{b,a} and [I]_{a,b} have the same sign, so one can use the matrix [I]_{a,b} in the definition as well.
We usually assume that the standard basis e1, e2, ..., en in R^n has positive orientation. In an abstract space one just needs to fix a basis and declare that its orientation is positive.
If an orthonormal basis v1, v2, ..., vn in R^n has positive orientation (i.e. the same orientation as the standard basis), then the orthogonal matrix U with columns v1, v2, ..., vn satisfies det U = 1, and the Theorems above show that the basis v1, v2, ..., vn is obtained from the standard basis by a rotation.
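The determinant test is straightforward to carry out numerically. The sketch below is my own illustration (assuming NumPy is available) and checks the orientation of an orthonormal basis in R³.

import numpy as np

# an orthonormal basis of R^3, written as the columns of U
U = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

print(np.allclose(U.T @ U, np.eye(3)))   # columns are orthonormal
print(np.linalg.det(U))   # +1: same orientation as the standard basis, i.e. a rotation
# a determinant of -1 would mean the opposite orientation (rotation composed with a reflection)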
Chapter 8
A bilinear form L on R^n is a function L(x, y) of two arguments x, y ∈ R^n which is linear in each argument, i.e. such that
1. L(αx1 + βx2, y) = αL(x1, y) + βL(x2, y);
2. L(x, αy1 + βy2) = αL(x, y1) + βL(x, y2).
One can consider bilinear forms whose values belong to an arbitrary vector space, but in this book we only consider forms that take real values.
If x = (x1, x2, ..., xn)^T and y = (y1, y2, ..., yn)^T, a bilinear form can be written as
L(x, y) = Σ_{j,k=1}^{n} a_{j,k} xk yj,
or in matrix form
L(x, y) = (Ax, y),
where A = {a_{j,k}}_{j,k=1}^{n}. The matrix A is determined uniquely by the bilinear form L.
Quadratic forms on R^n. There are several equivalent definitions of a quadratic form. One can say that a quadratic form on R^n is the "diagonal" of a bilinear form L, i.e. that any quadratic form Q is defined by
Q[x] = L(x, x) = (Ax, x).
Another, more algebraic way, is to say that a quadratic form is a homogeneous polynomial of degree 2, i.e. that Q[x] is a polyn~mial of n variables xl' x 2, ... , xn having only terms of degree 2. That means that only terms axi and cX/'k are allowed. There many ways (in fact, infinitely many) to write a quadratic form Q[x] as Q[x] =
2 2
(Ax, x). For example, the quadratic form Q[x] =xl + x2 - 4xlx2 on ~2 can be represented as (Ax, x) where A can be any of the matrices
1 -4) (1 0) (1 -1). (
-2 1
(~a a~4)
will work. But if we require the matrix a to be symmetric, then such a matrix is unique: Any quadratic form Q[x] on ~n admits unique representation Q[x] = (Ax, x) where a is A (real) symmetric matrix. For example, for the quadratic form
[~
-8 3.5
~ ;,~J.
5
Quadratic forms on One can also define a quadratic form on (or any complex inner product space) by taking a self-adjoint transformation A = A * and defining Q by Q[x] = (Ax, x). While our main examples will be in Rn, all the theorems are true in the settipg of Cn as well. Bearing this in mind, we will always use a instead of AT
en.
en
Q[x] = alx~ + a2x~ + ... + anx~, we can easily visualize this set, especially if n = 2, 3. In
223
higher dimensions, it is also possible, if not to visualize, then to understand the structure of the set very well. So, if we are given a general, complicated quadratic form, we want to simplify it as much as possible, for example to make it diagonal. The standard way of doing that is the change of variables. Orthogonal diagonalization. Let us have a quadratic form
= (Ax, x) in IRn. Introduce new variables y = (Y!, Y2' where S is some invertible n x n matrix, so x = Sy.
Q[x]
... ,
ynf E
IR n , with y = S-I x,
Then,
Q[x]
y),
so in the new variables y the quadratic form has matrix S*AS. So, we want to find an invertible matrix S such that the matrix S*AS is diagonal. Note, that it is dierent from the diagonalization of matrices we had before: we tried to represent a matrix a as A = SDS-I, so the matrix D = S-lAS is diagonal. However, for orthogonal matrices U, we have U = U-!, and we can orthogonally diagonalize symmetric matrices. So we can apply orthogonal diagonalization we studied before to the quadratic forms. Namely, we can represent the matrix a as A = UDU* = UDU-!. Recall, that D is a diagonal matrix with eigenvalues of A on the diagonal, and U is the matrix of eigenvectors (we need to pick an orthogoml basis of eigenvectors). Then
D
= U*AU,
so in the variables
y= U-! x
the quadratic form has diagonal matrix. Let us analyse the geometric meaning of the orthogonal diagonalization. The columns of the orthogonal matrix Uform an orthonormal basis in IR n , let us call this basis S. The change of coordinate matrix [I]s B from this basis to the standard one is exactly U. We know that ' y = (Y!, Y2' ... , ynf = Ax, so the coordinates Y!'Y2' ... , Yn can be interpreted as coordinates ofthe vector x in the new basis u!' u2, ... , un' So, orthogonal diagonalization allows us to visualize very well the set Q[x] = I, or a similar one, as long as we can visualize it for diagonal matrices.
up u 2, ... , un
Example. Consider the quadratic form of two variables (i.e. quadratic form on IR 2 ), Q(x, y) = 2.x2 +2y2 + 2xy.
yf E
IR 2 satisfying Q(x, y) = 1.
= (~
~}
224
= (~
~)u*, where U = ~C
~(~ ~)~: D
or, equivalently
U' AU
The set {y : (Dy, y) = I} is the ellipse with half-axes 11.J3 and 1. Therefore the set
{x
E
IR 2
(Ax. x)
or, equivalently, the same ellipse, rotated 1t/4. Non-orthogonal diagonalization. Orthogonal diagonalization involves computing eigenvalues and eigenvectors, so it may be dicult to do without computers for n > 2. There is another way of diagonalization based on completion of squares, which is easier to do without computers. Let us again consider the quadratic form of two variables, Q[ x] = 2x{ + 2xlx2 + 2xi (it is the same quadratic form as in the above example, only here we call variables not x, y but x I' x 2 ). Since
2 ( xJ+"2X2
r.(-Jz, Jzr'
1 2
1)2
(2
1 2)
(note, that the first two terms coincide with the first two terms of Q), we get
2xJ +2XIX2 +2xI =2 xl +-X2
1)2
3 2 2
3 2 2
whereYI =x I
The same method can be applied to quadratic form of more than 2 variables. Let us consider, for example, a form Q[x] in ~3 ,
2 2 2
Q[ X] = Xl - 6XIX2 + 4XIX3 - 6X2X3 + 8X2 - 3X3 . Considering all terms involving the first variable xl (the first 3 terms in this case), let us pick a full square or a multiple of a full square which has exactly these terms (plus some other terms). Since
(Xl - 3X2
+ 2X3)
= xI
2 '
- 6XIX2
225
Note, that the expression -xi + 6x2x3 -7xi involves only variables x 2 and x 3. Since
-(X2 - 3X3)
2 2 =-(X2 2
222
we have
-X2 + 6X2X3 - 7X3 =-(X2 - 3X3) + 2x3 Thus we can write the quadratic fonn Q as Q[x](x)-3X2+ 2x3) -(x2- 3x3) + 2X3 =y) +2+23
,2 22222 2
where
Y) = x) - 3x2 + 2x3'Y2 = x 2 - 3x3'Y3 = x 3 There is another way of perfonning non-orthogonal diagonalization of a quadratic fonn. The idea is to perfonn row operations on the matrix a of the quadratic form. The difference with the row reduction (Gauss-Jordan elimination) is that after each row operation we need to perform the same column operation, the reason for that being that we want to make the matrix S*AS diagonal. Let us explain how everything works on an example. Suppose we want to diagonalize a quadratic form with matrix
-1 2 1. 311 We augment the matrix A by the identity matrix, and perform on the augmented matrix (All) row/column operations. After each row operation we have to perfonn on the matrix a the same column operation. We get
A
1 -1 3) (
~ - ~ ~ ~ ~ ~) + ~ (~
R)
-:
! ~ ~ ~1) ~
~
311001
31100
1 0 (3 4
001 ( 1 ( 00 0
~ ! : ~~) ~ [~ ~ ! ~ ~ ~1) ~
0 0 1 -3~ ,0 4 -8 -3 0 1 0
~~) ~ (~~ ~
1
-4R2
4 -8 -3 0
0 0 -24 -7 -4
~)-
0 -24 -7 -4
~IJ.
226
Note, that we perform column operations only on the left side of the augmented matrix. We get the diagonal D matrix on the left, and the matrix S* on the right, so D = S*AS,
-24 -7 -4 3 Let us explain why the method works. A row operation is a left multiplication by an elementary matrix. The corresponding column operation is the right multiplication by the transposed elementary matrix. Therefore, performing row operations E l ,E2, "., EN and the same column operations we transform the matrix A to * * * EN ... E2EIAEIE2 ... EN =EAE*.
As for the identity matrix in the right side, we performed only row operations on it, so the identity matrix is transformed to
EN'" E2El1
= E1 = E.
If we now denote E* = S we get that S* AS is a diagonal matrix, and the matrix E = S is the right half of the transformed augmented matrix. In the above example we got lucky, because we did not need to interchange two rows. This operation is a bit tricker to perform. It is quite simple if you know what to do, but it may be hard to guess the correct row operations. Let us consider, for example, a quadratic form with the matrix
A
= (~
~)
Ifwe want to diagonalize it by row and column operations, the simplest idea would be to interchange rows I and 2. But we also must to perform the same column operation, i.e. interchange columns 1 and 2, so we will end up with the same matrix. So, we need something more non-trivial. The identity
2XIX2
= i[(XI +X2)
122
-(Xl -X2) ]
-I
1112 1)
112
Non-orthogonal diagonalization gives us a simple description of a set Q[x] = 1 in a non-orthogonal basis. It is harder to visualize, then the representation given by the
227
orthogonal diagonalization. However, if we are not interested in the details, for example if it is sucient for us to know that the set is ellipsoid (or hyperboloid, etc), then the nonorthogonal diagonalization is an easier way to get the answer.
*'
S * DS = diag{s~Al,siA2, ... S~An} This transformation changes the diagonal entries of D. However, it does not change the signs of the diagonal entries. And this is always the case Namely, the famous Silvester's Law ofInertia states that: For a Hermitian matrix A (i.e. for a quadratic form Q[x] :::;; (Ax, x and any its diagonalization D = S*AS, the number of positive (negative, zero) diagonal entries of D depends only on A, but not on a particular choice of diagonalization. Here we of course assume that S is an invertible matrix, and D is a diagonal one. The idea of the proof ofth~ Silvester's Law of Inertia is to express the number of positive (negative, zero) diagonal entries of a diagonalization D = S*AS in terms of A, not involving S or D. We will need the following definition. Definition. Given an n x n symmetric matrix A = A * (a quadratic form Q[x] = (Ax, x) n on ]R.n) we call a subspace E c lR positive (resp. negative, resp. neutral) if (Ax, x) > 0 (resp. (Ax, x) < 0, resp. (Ax, x) = 0) for all x E E, x O. Sometimes, to emphasize the role of a we will say A-positive (A negative, A-neutral). Theorem. Let A be an n x n symmetric matrix, and let D = S*AS be its diagonalization by an invertible matrix S. Then the number ofpositive (resp. negative, resp. zero) diagonal entries of D coincides with the maximal dimension of an A-positive (resp. A - negative, resp. A - neutral) subspace. The above theorem says that ifr+ is the number of positive diagonal entries of D, then there exists an A - positive subspace E of dimension 1 +, but it is impossible to find a positive subspace E with dim E > r +. We will need the following lemma, which can be considered a particular case of the above theorem. Lemma. Let D be a diagonal matrix D = diag{Al' A 2 , ... , An}' Then the number of positive (resp. negative, resp. zero) diagonal entries of D coincides with the maximal dimension of an D - positive (resp. D - negative, resp. D - neutral) subspace. Proof By rearranging the standard basis in ]R.n (changing the numeration) we can
*'
228
always assume without loss of generality that the positive diagonal entries of D are the first r + diagonal entries. Consider the subspace E+ spanned by the first r-f coordinate vectors e l , e2, ... , er+. Clearly E+ is a D-positive subspace, and dimE+ = r+. Let us now show that for any other D-positive subspace E we have dim E ~ r +. Consider the orthogonal projection P = PE+' For a D - positive subspace E define an operator T: E ~ E+ by Tx = Px, V X E E. In other words, T is the restriction of the projection P : P is defined on the whole space, but we restricted its domain to E and target space to E+. We got an operator acting from E to E+, and we use a dierent letter to distinguish it from P. Note, that Ker T = {O}. Indeed, let for X = (xl' x 2, ... , xnf E E we have Tx = Px = O. Then, by the definition of P Xl = x 2 = ... = xr + = 0, . and therefore
Px = (xl' x2,
... , x r+,
... ,
xnf.
(Dx,x) =
L
k=r++l
for x
But X belongs to aD - positive subspace E, so the inequality (Dx, x) ~ 0 holds only = O. Let us now apply the Rank Theorem. First of all, rank T = dimRan T ~ dimE+ = r+ because Ran T c E+. By the Rank Theorem, dimKer T + rank T = dim E. But we just proved that Ker T= {O}, i.e. that dim Ker T= 0, so
dim E = rank T ~ dim E+ = r+. To prove the statement about negative entries, we just apply the above reasoning to the matrix -D. The case of zero entries is treated similarly, but even simpler. Proof Let D = S*AS be a diagonalization of A. Since
(Dx, x) = (S* ASx, x) = (ASx, Sx) it follows that for any D - positive subspace E, the subspace SE is an Apositive subspace. The same identity implies that for any A - positive subspace F the subspace
!)IF is D - positive. Since Sand !)l are invertible transformations, dimE = dim SE and dimF = dim !)l F. Therefore, for any D positive subspace E we can find an A - positive subspace (namely SE) of the same dimension, and vice versa: for any A - positive subspace F we can find a D - positive subspace (namely !)l F) of the same dimension. Therefore the maximal possible dimensions of a A - positive and a D - positive subspace coincide, and the theorem is proved. The case of negative and zero diagonal entries treated similarly.
229
'* o.
0 for all x.
Indefinite if it take both positive and negative values, i.e. if there exist vectors xl and x2 such that Q[xd > 0 and Q[x2] < O. Definition. A symmetric matrix A = A * is called positive definite (negative definite, etc.) ifthe corresponding quadratic form Q[x] =(Ax, x) is positive definite (negative definite, etc.). Theorem. Let A = A *. Then 1. A is positive definite i all eigenvalues of a are positive.
2. 3. 4.
A is positive semidefinite i all eigenvalues of a are non-negative. A is negative definite i all eigenvalues of a are negative. A is negative semidefinite i all eigenvalues of a are non-positive. A is indefinite i it has both positive and negative eigenvalues.
S.
Proof The proof follows trivially from the orthogonal diagonalization. Indeed, there is an orthonormal basis in which matrix of a is diagonal, and for diagonal matrices the theorem is trivial. Remark. Note, that to find whether a matrix (a quadratic form) is positive definite (negative definite, etc) one does not have to compute eigenvalues. By Silvester's Law of Inertia it is sucient to perform av arbitrary, not necessarily orthogonal diagonalization D = SAS and look at the diagonal entries of D. Silvester's criterion of positivity. It is an easy exercise to see that a 2 x 2 matrix
is positive definite if and only if a > 0 and det A = ac - b2 > 0 Indeed, if a > 0 and det A = ac-b 2 > 0, then c > 0, so trace A = a + c > O. So we know that if AI' A2 are eigenvalues of a then AI > 0 (det A > 0) and 1 + 2 = traceA > O. But that only possible if both eigenvalues are positive. So we have proved that conditions imply that A is positive definite. The opposite implication is quite simple. This result can be generalized to the case of n x n matrices. Namely, for a matrix A
"'2
230
A=
an,1 a n,2 an,n
A I =(a l1 ),A 2 = ,
a a) (a a
1,1 2,1
Theorem. (Silvester's Criterion of Positivit f?.JJ. m~ffix :J3~ ~ * is positive definite if and only if detA k > Ofor all k = 1,2, ... , n. First of all let us notice that if A> 0 then Ak > 0 also (can you explain why?). Therefore, since all eigenvalues of a positive definite matrix are positive, det Ak > 0 for all k. One can show that if det Ak > 0 \;j k then all eigenvalues ofA are positive by analyzing diagonalization of a quadratic form using row and column operations. The key here is the observation that if we perform row/column operations in natural order (i.e. first subtracting the first row/column from all other rows/columns, then subtracting the second row/columns from the rows/columns 3, 4, ... , n, and so on ... ), and if we are not doing any row interchanges, then we automatically diagonalize quadratic forms Ak as well. Namely, after we subtract first and second rows and columns, we get diagonalization of A 2 ; after we subtract the third row/column we get the diagonalization of A 2 , and so on. Since we are performing only row repla~ement we do not change the determinant. Moreover, since we are not performing row exchanges and performing the operations in the correct order, we preserve determinants of A k Therefore, the condition det Ak > 0 guarantees that each new entry in the diagonal is positive. Of course, one has to be sure that we can use only row replacements, and perform the operations in the correct order, i.e. that we do not encounter any pathological situation. If one analyzes the algorithm, one can see that the only bad situation that can happen is the situation where at some step we have zero in the pivot place. In other words, if after we subtracted the first k rows and columns and obtained a diagonalization of A k , the entry in the k + 1st row and k + 1st column is O. We will need the following characterizatipn of eigenvalues of a hermitian matrix. Minimax characterization of eigenvalues. Let us recall that the codimension of a subspace
(al,1
al,2
231
Theorem. (Minimax characterization of eigenvalues). Let A = A * be an n x n matrix. and let AI' 1.,2 ~ ... ~ An be its eigenvalues taken in the decreasing order. Then Ak = max min(Ax,x) = min max(Ax,x). & ~E F: ~E
dimE=k
11.<11=1
COLlInF=A~1
11'11=1
Let us explain in more details what the expressions like max min and minmax mean. To compute the first one, we need to consider all subspaces E of dimension k. For each such subspace E we consider the set of all x E E of norm 1, and find the minimum of (Ax, x) over all such x. Thus for each subspace we obtain a number, and we need to pick a subspace E such that the number is maximal. That is the maxmin. The min max is defined similarly. Remark. A sophisticated reader may notice a problem here: why do the maxima and minima exist? It is well known, that maximum and minimum have a nasty habit of not existing: for example, the functionJ(x) = x has neither maximum nor minimum on the open interval (0, 1). However, in this case maximum and minimum do exist. There are two possible explanations of the fact that (Ax, x) attains maximum and minimum. The first one requires some familiarity with basic notions of analysis: one should just say that the unit sphere in E, i.e. the set {x E E: II x II = I} is compact, and that a continuous function (Q[x] = (Ax, x) in our case) on a compact set attains its maximum and minimum. Another explanation will be to notice that the function Q[x] = (Ax, x), x E E is a quadratic form on E. It is not dicult to compute the matrix of this form in some orthonormal basis in E, but let us only note that this matrix is not A: it has to be a k x k matrix, where k= dimE. It is easy to see that for a quadratic form the maximum and minimum over a unit sphere is the maximal and minimal eigenvalues of its matrix. As for optimizing over all subspaces, we will prove below that the maximum and minimum do exist. Proof First of all, by picking an appropriate orthonormal basis, we can assume without loss of generality that the matrix A is diagonal, A = diag{A\, 1.,2' ... , An}' Pick subspaces E and F, dimE= k, codim F= k-l, i.e. dimE = n -k+ 1. Since dimE + dim F> n, there exists a non-zero vector Xo E En F. By normalizing it we can assume without loss of generality that II Xo II = 1. We can always arrange the eigenvalues in decreasing order, so let us assume that AI ~ 1.,2 ~ ... ~ An. Since x belongs to the both subspaces E and F -min(Ax,x):::; (Axo,xo):::; max(Ax,x).
xeE IIxll=1 xeF Ilxll=1
We did not assume anything except dimensions about the subs paces E and F, so the above inequality min(Ax,x) :::; max(Ax,x).
xeE IIxll=1 xeF Ilxll=1
232
Eo := span{el' e2,
... ,
Since for a self-adjoint matrix B, the maximum and minimum of (Bx, x) over the unit sphere {x : " x " = I} are the maximal and the minimal eigenvalue respectively (easy to check on diagonal matrices), we get that . min (Ax, x) = max(Ax,x) = A.k'
xeEo xeFo
... ,
en}'
IIxll=1
IIxll=1
IIxll=1
IIxll=1
and similarly, for any subspace F of codimension k - 1, max(Ax,x) ~ min (Ax, x) = A.k'
xeF xeEo
IIxll=1
IIxll=1
But on subspaces Eo and Fo both maximum and minimum are A.k , so min max = max min = A.k
= A * = {aj,k }~,k=I
be a self-adjoint
matrix, and let A = {aj,k }~;I=I be its submatrix of size (n - 1) x (n - 1). Let 1... 1' 1...2' ... , A.n and Ill' 112' ... , Il n-I be the eigenvalues ofA and A respectively, taken in decreasing order. Then
I.e.
A.k ~ Ilk ~ A.k+l,k = 1,2, ... ,n-1 n Proof. Let X c F be the subspace spanned by the first n - 1 basis vectors, eX =
span{e I , e 2, ... , en-I}' Since (Ax, x) = (Ax, x) for all x EX, Ilk = ma{( min(Ax,x).
xcX
dimE=k
IIxll=1
xeE
To get A.k we need to get maximum over the set of all subspaces E of P', dimE = k, i.e. take maximum over a bigger set (any subspace of X is a subspace of P'). Therefore Ilk ~ lk' (the maximum can only increase, if we increase the set). On the other hand, any subspace E c codimension in Therefore
1 - (k - 1) = n - k, so its codimension in Fn is k.
EcF n
mil}
EcX
dimE=n-k
max(Ax,x) ~ min
IIxll=1
xeE
max(Ax, x) = A.k+I
IIxll=1
xeE
dimE=n-k
233
Proof If A> 0, thenA k > 0 for k= 1,2, ... , n as well (can you explain why?). Since all eigenvalues of a positive definite matrix are positive, det Ak > 0 for all k = 1, 2, ... , n. 'Let us now prove the other implication. Let det Ak > 0 for all k. We will show, using induction in k, that all Ak (and so A = An) are positive definite. Clearly A I is positive definite (it is 1 x 1 matrix, so A I = det A I)' Assuming that A k_1 > 0 (and det Ak > 0) let us show that Ak is positive definite. Let AI' A2, ... , Ak and ~I' ~2' ... , ~k-I be eigenvalues of Ak and A k - I respectively. By Corollary
'k j
Since detA k = AI/"'2 ... Ak-I/"'k > 0, the last eigenvalue Ak must also be positive. Therefore, since all its eigenvalues are positive, the matrix Ak is positive definite.
Chapter 9
det(A - Ai)
= peA) = LCk Ak
k=O
k
then the result will be zero matrix. It is not clear why we get the same result if we just plug A instead of A in the determinant det(A - AI). Moreover, it is easy to see that with the exception of trivial case of 1 x 1 matrices we will get a dierent object. Namely, A - AI is zero matrix, and its determinant is just the number O. But peA) is a matrix, and the theorem claims that this matrix is the zero matrix. Thus we are comparing apples and oranges. Even though in both cases we got zero, these are dierent zeroes: he number zero and the zero matrix! Let us present another proof, which is based on some ideas from analysis. A "continuous" proof. The proof is based o,n several observations. First of all, the theorem is trivial for diagonal matrices, and so for matrices similar to diagonal (Le. for diagonalizable matrices). The second observation is that any matrix can be approximated (as close as we
235
want) by diagonalizable matrices. Since any operator has an upper triangular matrix in some orthonormal basis, we can assume without loss of generality that a is an upper triangular matrix. We can perturb diagonal entries of A (as little as we want), to make them all dierent, so the perturbed matrix A is diagonalizable (eigenvalues of a a triangular matrix are its diagonal entries, and by Corollary an n x n matrix with n distinct eigenvalues is diagonalizable). We can perturb the diagonal entries of A as little as we want, so Frobenius norm II A - A 112 is as small as we want. Therefore one can find a sequence of diagonalizable matrices Ak such that Ak -7 A as k -7 00 for example such that Ak - Ak -7 A as k -7 00). It can be shown that the characteristic polynomials pl)..) = det(Ak -IJ) converge to the characteristic polynomial pC)..) = det(A - A1) of A. Therefore peA) = lim Pk(Ak ) k-7 But as we just discussed above the Cayley-Hamilton Theorem is trivial for diagonalizable matrices, so piAk) = O. Therefore peA) = lim k-7oo 0 = O. This proof is intended for a reader who is comfortable with such ideas from analysis as continuity and convergence. Such a reader should be able to fill in all the details, and for him/her this proof should look extremely easy and natural. However, for others, who are not comfortable yet with these ideas, the proof definitely may look strange. It may even look like some kind of cheating, although, let me repeat that it is an absolutely correct and rigorous proof (modulo some standard facts in analysis). So, let us resent another, proof of the theorem which is one of the "standard" proofs from linear algebra textbooks. A "standard" proof We know, see Theorem, that any square matrix is unitary equivalent to an upper triangular one. Since for any polynomial p we have p(UA[jI) = Up(A)[jI, and the characteristic polynomials of unitarily equivalent matrices coincide, it is sucient to prove the theorem only for upper triangular matrices. So, let A be an upper triangular matrix. We know that diagonal entries of a triangular matrix coincide with it eigenvalues, so let AI' A 2, ... , An be eigenvalues of A ordered as they appear on the diagonal, so
00
*
A=
An The characteristic polynomial p(z) = det(A - z1) of A can be represented as p(z) = (A\ - Z)(A2 - z) ... (An - z) = (_l)n (z - AI)(Z - A 2) ... (z - An)' so
236
Ek := span{e l , e2, ... , ek}, where e l , e 2, ... , An is the standard basis in Since the matrix ofA is upper triangular, the subspaces Ek are so-called invariant subspaces of the operator A, i.e. AEk c Ek (meaning that Av E Ek for all v E Ek). Moreover, since for any v E Ek and any A (A - Al)v =Av - AV E Ek, because both Av and AV are in Ek. Thus (A - Al)Ek c E k, i.e. Ek is an invariant subspace of A - Al. We can say even more about the the subspace (A - A~Ek. Namely, (A - A~ek E span{e l , e2, ... , ek_I }, because only the first k - 1 entries of the kth column of the matrix of A - At! can be non-zero. On the other hanc!, for j < k we have (A - Ak)ej E Ej C Ek (because Ej is an invariant subspace of A - A~. Take any vector v E Ek. By the definition of Ele it can be represented as a linear combination of the vectors e 1, e2, ... , ek. Since all vectors e l , e2, ... , ele are transformed by A - At! to some vectors in E k _1' we can conclude that (A - A~V E Ek_ 1 \/v E Ek
en.
en
1, ... , 1 we
XI := (A - Anl)x E En-I' X2 := (A -In_ll)x I = (A -In_Il)(A -Inl) X E En-2' Xn := (A -1 2l)xn_ 1 = (A -1 2l) ... (A - I n_Il)(A -Inl)x EEl The last inclusion mean that xn = ae l But (A - AIl)e l = 0, so 0= (A - AIl)xn = (A - AIl)(A - A2l) ... (A - Anl)X. Therefor.:: p(A)x = 0 for all X E en , which means exactly that peA) = O.
Ie
We know that generally matrix multiplication is not commutative, i.e. generally AB BA so the order is essential. However
Aylj = AiAk = Ak+j,
and from here it is easy to show that for arbitrary polynomials p and q
237
p(A)q(A) = q(A)p(A) = R(A) where R(z) = p(z)q(z). That means that when dealing only with polynomials of an operator A, one does not need to worry about non-commutativity, and act like a is simply an independent (scalar) variable. In particular, if a polynomial p(z) can be represented as a product of monomials p(z) = a(z - z\)(z - z2) ... (z - zN)' where z \' z2' ... , z N are the roots of p, then peA) can be represented as peA) = a(A - n\l)(A - zNl) Spectral Mapping Theorem. Let us recall that the spectrum seA) of as quare matrix (an operator) A is the set of all eigenvalues of A (not counting multiplicities. Theorem (Spectral Mapping Theorem). For a square matrix a and an arbitrary polynomial p cr(p(A)) = p(cr(A)). In other words, ~ is an eigenvalue of peA) if and only if ~ = peA) for some eigenvalue of A. Note, that as stated, this theorem does not say anything about multiplicities of the eigenvalues. Remark. Note, that one inclusion is trivial. Namely, if is an eigenvalue of A, Ax = x for some x =1= 0, then Akx = Akx, and p(A)x = p(A)x, so peA) is an eigenvalue of peA). That means that the inclusion p(cr(A)) c cr(p(A)) is trivial. Ifwe consider a particular case ~ = o of the above theorem, we get the following corollary. Corollary. Let a be a square matrix with eigenvalues 1, 2, ... , n and let p be a polynomial. Then peA) is invertible if and only if peAk) =1= 'v'k = 1,2, ... n. Proof As it was discussed above, the inclusion p(cr(A)) c (P(A)) is trivial. To prove the opposite inclusion cr(p(A)) cp(cr(A)) take a point ~ E cr(p(A)). Denote q(z) = p(z) - ~, so q(A) = peA) - ~1. Since ~ E (P(A)) the operator q(A) = peA) - ~I is not invertible. Let us represent the polynomial q(z) as a product of monomials, q(z) = a(z - z\)(z - z2) ... (z - zN). Then, as it was discussed above in Section 2.1, we can represent q(A) = a(A - ztl)(A - z2l) ... (A - zNl). The operator q(A) is not invertible, so one of the terms A - z/ must be not invertible (because a product of invertible transformations is always invertible). That means zk E (A). On the other hand zk is a root of q, so o= q(zk) = p(zk) - ~ and therefore ~ = p(zk). So we have proved the inclusion s(P(A)) c p((A)).
Invariant Subspaces
Definition. Let A : V ~ V be an operator (linear transformation) in a vector space V. a subspace E of the vector space V is called an invariant subspace of the operator A (or, shortly, A - invariant) if AE c E, i.e. if
238
Av E E for all vEE. If E is A-invariant, then A2E = A(AE) cAE c E, i.e. E is A 2 - invariant. Similarly one can show (using induction, for example), that if AE c E. then Att: cE \tk~l
This implies that P(A)E c E for any polynomial p, i.e. that any A-invariant subspace E is an invariant subspace of peA). If E is an A-invariant subspace, then for all vEE the result Av also belongs to E. Therefore we can treat A as an operator acting on E, not on the whole space V. Formally, for an A-invariant subspace E we define the so-called restriction AlE: E ---7 E of A onto E by
(AIE)v = Av \tv E E.
Here we changed domain and target space of the operator, but the rule assigning value to the argument remains the same. We will need the following simple lemma
Lemma. Let p be a polynomial, and let E be an A-invariant subspace. Then p(AI E) = p(A)I E Proof The proof is trivial. If E1 ,E2, ... , Er a basis of A-invariant subspaces, and AEk := AIEk are the corresponding restrictions, then, since AEk = Att:k c Ek the operators Ak
act independently of each other (do not interact), and to analyse action of A we can analyse operators Ak separately. In particular, if we pick a basis in each subspace Ek and join them to get a basis in V then the operator a will have in this basis the following block-diagonal form
o
A=
o
(of course, here we have the correct ordering of the basis in V, first we take a basis in
E1,then in E2 and so on). Our goal now is to pick a basis of invariant subspaces E 1, E2, ... , Er such that the restrictions Ak have a simple structure. In this case we will get basis in
which the matrix of A has a simple structure. The eigenspaces Ker(A - AI) would be good candidates, because the restriction of a to the eigenspace Ker(A - AI) is simply At!. Unfortunately, as we know eigenspaces do not always form a basis (they form a basis if and only if A can be diagonalized. However, the so-called generalized eigenspaces will work.
239
Generalized Eigenspaces
Definition. A vector v is called a generalized eigenvector (corresponding to an eigenvalue) if (A - ,)J/v = 0 for some k ~ 1. The collection EA, of all generalized eigenvectors, together with 0 is called the generalized eigenspace (corresponding to the eigenvalue A. In other words one can represent the generalized eigenspace EA, as EA, = UKer(A-AI)k.
k~1
The sequence Ker(A - Ali, k = 1,2, 3, ... is an increasing sequence of subspaces, i.e.
Ker(A - AI)k c Ker(A - AI)k+ 1 Vk ~ k" . The representation does not look very simple, for it involves an infinite union. However, the sequence of the subspaces Ker(A - A.I)k stabilizes, i.e. Ker(A - Hi c Ker(A - 'M)k+ 1 V k ~ 1. , so, in fact one can take the finite union. To show that the sequence of kernels stabilizes, let us notice that if for finitedimensional subspaces E and F we have E
<;
F (symbol E
<;
E:;:. F), then dim E < dim F. Since dimKer(A - I)k ~ dim V < 00 , it cannot grow to infinity, so at some point Ker(A - I)k = Ker(A - II)k+ I. The rest follows from the lemma below. Lemma. Let for some k Ker(A - I)k = Ker(A - AI)k+ I. Then Ker(A _l)k+r = Ker(A - 'Al)k+r+l Vr ~ O. Proof Let v E Ker(A - I)k+r+l, i.e. (A - Ai)k+r+1v = O. Then w := (A - ll)r E Ker(A - ll)k+ 1. But we know that Ker(A _'M)k = Ker(A _'M)k+l so WE Ker(A _'M)k, which means (A _l)kw = O. Recalling the definition of w we get that (A - A.I)k+r v = (A - A.I)k w = 0 so V E Ker(A - Al)k+r. We proved that Ker(A _l)k+r+l c Ker(A _'M)k+r. The opposite inclusion is trivial. Definition. The number d = d(A) on which the sequence Ker(A - A.I)k stabilizes, i.e. the number d such that Ker(A _'M)d-I
C
oct
is called the depth of the eigenvalue A. It follows from the definition of the depth, that for the generalized eigenspace EA,
240
Now let us summarize, what we know about generalized eigenspaces. 1. E is an invariant subspace of A, AE, E. 2. If d(A) is the depth of the eigenvalue, then
= (AlE').. - A1n)d(')..) = 0.
{A}, because the operator AlE').. - AI)." is nilpotent, see 2, and the spectrum of nilpotent operator consists of one point 0,
Now we are ready to state the main result of this section. Let A : V ~ V.
Theorem. Let (A) consists ofr points AI' A 2, ... , A,. and let Ek : = Ek be the corresponding generalized eigenspaces. Then the system of subspace E J,E2, ... , Er is a basis ofsubspaces in V Remark. Ifwe join the bases in all generalized eigenspaces Ek, then by Theorem, we will get a basis in the whole space. In this basis the matrix of the operator a has the block diagonal form A = diag{Al'A 2, ... , Ar}, where Ak := AI Ek , Ek = E')..e' It is also easy to see,
that the operators Nk := Ak - AkIEk are nilpotent, Ntk
= 0.
Proof Let mk be the multiplicity of the eigenvalue Ak, so p(z) = the characteristic polynomial of A. Define pk(z) = p(Z)/(Z-Akt'k Lemma.
II::1 (z - Ak
)mk is
= . (z-
II hk
A m;
j)
(A - AkI)mk I Ek = 0, Proof There are 2 possible simple proofs. The first one is to notice that mk ;::: dk, where dk is the depth of the eigenvalue Ak and use the fact that (A - I\,k I ) k I Ek = (Ak - I\,k I Ek ) = 0, where Ak := A I Ek (property 2 of the generalized eigenspaces).
The second possibility is to notice that according to the Spectral Mapping Theorem, the operator PiA) I Ek = piAk) is invertible. By the Cayley-Hamilton Theorem.
'I
'I
mk
so
(Ak-AkIEk )
I11k
=p(Ak)Pk(Ak )
-1
=OPkCAk )
-1
=0.
241
q(z) = LPk(z)
k=1
Since Pk( j) = 0 for j ;j:. k and Pk (k) ;j:. 0, we can conclude that q(k) if:. 0 for all k. Therefore, by the Spectral Mapping Theorem, the operator B = q(A) is invertible. Note that BEk c Ek (any A - invariant subspace is also peA) - invariant). Since B is an invertible operator, dim(BEk) = dim E k, which together with BEk c Ek implies BEk = Ek. MUltiplying the last identity by S-I we get that S-IEk = Ek, i.e. that Ek is an invariant subspace of S-1. Note also, that it follows from that
piA) I Ej = 0 Vj;j:. k , because piA) I Ej = Pk(A) and piA) contains the factor (Aj -'A/Ej )mj = O. Define the operators Pk by Pk = S-lpiA ).
1.
2.
= 0 for} ;j:. k;
3. 4.
RanPk c Ek; Moreover, PkV = vVv E E k , so, in fact Ran Pk = Ek Proof Property 1 is trivial:
r '" r -1",
L.,.Pk =B
k=1
L.,.PkPPk(A)=B B=l.
k=1
-1
Indeed,Pk(A) contains the factor (A - 'A)mj , restriction of which to Ej is zero. Therefore piA) I Ej = 0 and thus P k I Ej = S-I piA) I Ej = O. To prove property 3, recall that according to Cayley-Hamilton Theorem peA) = O. Since we have for w = piA)v
(A -I'vkl) w = (A - Akl) pk(A)v = p(A)v = O. That means, any vector w in Ran piA) is annihilated by some power of (A - kJ), which by definition means that Ran Pk(A) c E k. To prove the last property, let us notice that it follows from that for v E Ek
'\ /Ilk /Ilk
242
r
which implies Pkv= g-lBv= v. Now we are ready to complete the proofofthe theorem. Take v E Vand define vk = Pkv. Then according to Statement 3 of the above lemma, vk E Ek , and by statement,
r
V=
LVk'
k=l
so v admits a representation as a linear combination. To show that this representation is unique, we can just note, that if v is represented as v = "" r vk' E Ek , then it follows from the Statements 2 and 4 of the lemma that L.Jk=l Pkv = Pivl + v2 + ...+ vr ) = Pkvk = v k Geometric Meaning of Algebraic Multiplicity
Proposition. Algebraic multiplicity of an eigenvalue equals to the dimension of the corresponding generalized eigenspace. Proof Ifwe joint bases in generalized eigenspaces Ek = EM to get a basis in the whole space, the matrix of a in any such basis has a block-diagonal form diag{AI.A2' ... ,A r }, where
Operators are nilpotent, so a(Nk) = {O}. Therefore, the spectrum of the operator Ak (recall that Ak =Nk - Ai) consists of one eigenvalue k of (algebraic) multiplicity nk = dimEk The multiplicity equals nk because an operator in a finite-dimensional space V has exactly dim V eigenvalues counting multiplicities, and Ak has only one eigenvalue. Note that we are free to pick bases in Ek , so let us pick them in such a way that the corresponding blocks Ak are upper triangular. Then
det(A - AI) =
But this means that the algebraic multiplicity of the eigenvalue Ak is nk = dimEj,i An important application. The following corollary is very important for dierential equations.
243
Corollary. Any operator A in V can be represented as A = D + N, where D is diagonalizable (i.e. diagonal in some basis) and N is nilpotent (NI1l = 0 for some m), and DN=ND. Proof As we discussed above, if we join the bases in Ek to get a basis in V, then in this basis A has the block diagonal form A = diag{A\,A2' ... ,A r }, where Ak :=A I Ek, Ek = Ek
The operators Nk := Ac AJ Ek are nilpotent, and the operator in the equation.
AJEr }
AkIEkNk = NkAkIEk (identity operator commutes with any operator), so the block diagonal operator N = diag{N\,N2, ... ,Nr } commutes with D,DN=ND. Therefore, defining N as the block diagonal operator N = diag{N1, N2, ... , Nr } we get the desired decomposition. This corollary allows us to compute functions of operators. Let us recall that if p is a polynomial of degree d, then p(a + x) can be computed with the help of Taylor's formula (a) i k=O k! This formula is an algebraic identity, meaning that for each polynomial p we can check that the formula is true using formal algebraic manipulations with a and x and not caring about their nature. Since operators D and N commute, DN=ND, the same rules as for usual (scalar) variables apply to them, and we can write (by plugging D instead of a and N instead of x d p(k)(D) k k! N p(A) p(D + N) p(a+x) =
Lp
(k)
=.to
Here, to compute the derivative p(k)(D) we first compute the kth derivative of the polynomial p(x) (using the usual rules from calculus), and then plug D instead of x. But since N is nilpotent,
N"' = 0
for some m, only first m terms can be non-zero, so m-l/k)(D) k p(A)=p(D+N)= ~ k! N.
244
In m i., much smaller than d, this formula makes computation of peA) much easier. The same approach works if p is not a polynomial, but an infinite power series. For general power series we have to be careful about convergence of all the series involved, so we cannot say that the formula is true for an arbitrary power series p(x). However, if the radius of convergence of the power series is 1, then everything works fine. In particular, if p(x) = eX, then, using the fact that (eXy = eX we get.
e
A
k=O k! k=O k! This formula has important applications in dierential equation. Note, that the fact that ND=DN is essential here!
= L -=-Nk = e D = L ..!..Nk
m-l D
m-l
Ek = Ek
to get a basis in the whole space, then the operator a has in this basis A block diagonal form diag{AI' A 2, ... , Ar} and operators Ak ca be represented as Ak = Ai +Nk, where Nk are nilpotent operators. In each generalized eigenspace Ek we want to pick up a basis such that the matrix of Ak in this basis has the simplest possible form. Since matrix (in any basis) of the identity operator is the identity matrix, we need to find a basis in which the nilpotent operator Nk has a simple form. Since we can deal with each Nk separately, we will need to consider the following problem: For a nilpotent operator A find a basis such that the matrix of a in this basis is simple. Let see, what does it mean for a matrix to have a simple form. It is easy to see that the matrix
o
o
1
is nilpotent. These matrices (together with 1 x 1 zero matrices) will be our "building blocks". Namely, we will show that for any nilpotent operator one can find a basis such that the
245
matrix of the operator in this basis has the block diagonal form diag{A l' A 2, ... , Ar}, where each A k is either a block of form or a 1 x I zero block. Let us see what we should be looking for. Suppose the matrix of an operator A has in a basis vI' v'), ... , V the form (4.1). Then p Av 1and AVk+ = vk , k = 1,2, ... , p - 1. I Thus we have to be looking for the chains of vectors vI' v2, ... , vp satisfying the above relations.
being the length ofthe cycle Ck. Assume that the initial
v~
,... ,v; are linearly independent. Then no vector belongs to two cycles, and
the union of all the vectors from all the cycles is a linearly independent. Proof Let n = PI + P2 + ... + Pr be the total number of vectors in all the cycles. We will use induction in n. Ifn = 1 the theorem is trivial. Let us now assume, that the theorem is true for all operators and for all collection of cycles, as long as the total number of vectors in all the cycles is strictly less than n. Without loss of generality we can assume that the vectors v~ span the whole space V, because, otherwise we can consider instead of the operator A its restriction onto the invariant subspace
v~
~j~Pk-l
annihilates any cycle of length 1. Therefore, we have finitely many cycles, and initial vectors ofthese cycles are linearly independent, so the induction hypothesis applies, and the vectors
246
kk=12 Vj . " ... ,
r, 1< - J ::; Pk -1
are linearly independent. Since these vectors also span Ran A, we have a basis there. Therefore, rank A = dim Ran A = n - r (we had n vectors, and we removed one vector vPk from each cycle Ck , k = 1, 2, ... , r, so we have n-r vectors in the basis v~ : k = 1, 2, ... , r, 1 ::; j ::; Pk -1 ). On the other hand
k
Ai I =0
for k = 1, 2, ... , r, and since these vectors are linearly independent dim Ker A ;;::: r. By the Rank Theorem. dimV= rankA + dimKerA = (n - r) + dimKer A ;;::: (n - r) + r = n so dim V;;::: n. On the other hand V is spanned by n vectors, therefore the vectors
Since the
Theorem implies that the union of these cycles is a linearly independent system. By the definition of the cycle we have v: E KerA, and we assumed that the initial vectors v: ' k = 1, 2, ... , r are linearly independent. Let us complete this system to a basis in KerA, i.e. let
247
tl2' ... ,
find vectors
U , Ul! ... ,
happen that the system v;,k =1,2, .. . ,r is already a basis in Ker A, in which case we put q = 0 and add nothing). The vector uj can be treated as a cycle of length !, so we have a collection of cycles CI ,C2 , ... ,Cr , u I ' u2, ... , uq , whose initial vectors are linearly independent. So, we can apply Theorem to get that the union of all these cycles is a linearly independent system. To show that it is a basis, let us count the dimensions. We know that the cycles C p C2, ... , Cr have dim Ran A = rank A vectors total. Each cycle Ck was obtained from Ck by adding 1 vector to it, so the total number of vectors in all the cycles We know that dim Ker A = r + q (because We added to the cycles Cl , C 2 , . .. , Cr additional q vectors, so we got rank A + r + q = rank A + dimKer A = dim V linearly independent vectors. But dim V linearly independent vectors is a basis. Definition. A basis consisting of a union of cycles of generalized eigenvectors of a nilpotent operator a (existence of which is guaranteed by the Theorem) is called a Jordan canonical basis for A. Note, that such basis is not unique. Corollary. Let A be a nilpotent operator. There exists a basis (a Jordan canonical basis) such that the matrix ofA in this basis is a block diagonal diag {A].A 2, ... ,A r }, where all Ak (except may be one) are blocks ofform, and one of the blocks Ak can be zero. The matrix of a in a Jordan canonical basis is called the Jordan canonicalform of the operator A. We will see later that the Jordan canonical formis unique, if we agree on how to order the blocks (i.e. on how to order the vectors in the basis). Proof According to Theorem one can find a basis consisting of a union of cycles of generalized eigenvectors. a cycle of size p gives rise to a p x p diagonal block, and a cycle of length 1correspond to a 1 x 1 zero block. We can join these 1x 1 zero blocks in one large zero block (because o-diagonal entries are 0). Dot diagrams. Uniqueness of the Jordan canonical form. There is a good way of visualizing Theorem and Corollary, the socalled dot diagrams. This methods also allows us to answer many natural questions, like "is the block diagonal representation given by Corollary unique?" Of course, if we treat this question literally, the answer is "no", for we always can change the order of the blocks. But, if we exclude such trivial possibilities, for example by agreeing on some order of blocks (say, if we put all non-zero blocks in decreasing order, and then put the zero block), is the representation unique, or not?
r .. VI' vI , ... VI 'Ul' U2" .. Uq IS a baSIS there).
12
Ck
is rank A + r.
--
248
1
0
1
0 0
1
0 0 0
1
0
o 0
Fig. Dot Diagram and Corresponding Jordan Canonical form of a Nilpotent Operator
To better understand the structure of nilpotent operators, let us draw the so-called dot diagram. Namely, suppose we have a basis, which is a union of cycles of generalized eigenvalues. Let us represent the basis by an array of dots, so that each column represents a cycle. The first row consists of initial vectors of cycles, and we arrange the columns (cycles) by their length, putting the longest one to the left. On the figure 1 we have the dot diagram of a nilpotent operator, as well as its Jordan canonical form. This dot diagram shows, that the basis has 1 cycle oflength 5, two cycles of length 3, and 3 cycles of length 1. The cycle of length 5 corresponds to the 5 x 5 block of the matrix, the cycles of length 3 correspond to two 3 non-zero blocks. Three cycles of length 1 correspond to three zero entries on the diagonal, which we join in the 3 x 3 zero block. Here we only giving the main diagonal of the matrix and the diagonal above it; all other entries of the matrix are zero. Ifwe agree on the ordering of the blocks, there is a one-to-one correspondence between dot diagrams and Jordan canonical forms (for nilpotent operators). So, the question about uniqueness of the Jordan canonical form is equivalent to the question about uniqueness of the dot diagram. To answer this question, let us analyse, how the operator A transforms the dot diagram. Since the operator A annihilates initial vectors of the cycles, and moves vector vH1 of a cycle to the vector vk, we can see that the operator a acts on its dot diagram by deleting the first (top) row of the diagram. The new dot diagram corresponds to a Jordan canonical basis in Ran A, and allows us to write down the Jordan canonical form for the restriction A IRan A.
249
Similarly, it is not hard to see that the operator Ak removes the first k rows of the dot diagram. Therefore, if for all k we know the dimensions dimKer(A"), we know the dot diagram of the operator A. Namely, the number of dots in the first row is dimKerA, the number of dots in the second row is dimKer(A2) - dimKer A, and the number of dots in the kth row is
dimKer(A") - dimKer(A k+1).
But this means that the dot diagram, which was initially defined using a Jordan canonical basis, does not depend on a particular choice of such a basis. Therefore, the dot diagram, is unique. This implies that if we agree on the order of the blocks, then the Jordan canonical form is unique. Computing a Jordan canonical basis. Let us say few words about computing a Jordan canonical basis for a nilpotent operator. Let p} be the largest integer such that ApI "* 0 (so APl+} = 0). That PI is the length of the longest cycle. Computing operators Ak, k= 1,2, ... , PI' and counting dimKer(A") we can construct the dot diagram of A. Now we want to put vectors instead of dots and find a basis which is a union of cycles. We start by finding the longest cycles (because we know the dot diagram, we know how many cycles should be there, and what is the length of each cycle). Consider a basis in the column space Ran(Apl). Name the vectors in this basis v:' v~ , ... , V(, these will be the initial vectors of the cycles.Then we find the end vectors of the cycles by solving the equations
A vpI =1' k = 1,2, ... ,rI
Pi k
k
Applying consecutively the operator a to the end vector V~I ' we get all the vectors v~ in the cycle. Thus, we have constructed all cycles of maximal length. Let P2 be the length of a maximal cycle among those that are left to find. Consider the subspace Ran(AP2), and let dim Ran(A P2 ) = r 2. Since Ran(A PI ) c Ran(AP2), we can complete the basis
v:, v~ ,... ,
PI k
V(
to a basis
V(+l , ... , V(
in Ran (A P2 ). Then
we find end vectors of the cycles C'1+ I , ... ,CI'.2 by solving (for V~2) the equations
A vp2 =vI, k=r} + l,r I +2, ... ,r2, thus constructing the cycles of length P2' Let P3 denote the length of a maximal cycle among ones left. Then, completing the
k
basis v:, v~ , ... , V( in Ker( A P2 ) to a basis in Ker( A P3 we construct the cycles of length P3' and so on.
250
One final remark: as we discussed above, if we know the dot diagram, we know the canonical form, so after we have found a Jordan canonical basis, we do not need to compute the matrix of a in this basis: we already know it.
Iv
0
1
Iv
Iv
1
Iv
where Iv is an eigenvalue ofA. Here we assume that the block of size 1 is just 'A. The block diagonal form from Theorem is called the Jordan canonical form of the operator A. The corresponding basis is called a Jordan canonical basis for an operator A. Proof Ifwe join bases in the generalized eigenspaces Ek = EM to get a basis in the whole space, the matrix of a in this basis has a block diagonal form diag {A I ,A2' ... ,A r }, where
The operators
Nk =A k - 'A/Ek are nilpotent, so by Theorem one can find a basis in Ek such that the matrix of Nk in this basis is the Jordan canonical form of Nk . To get the matrix of Ak in this basis one just puts k instead of 0 on the main diagonal. First of all let us recall that the computing of eigenvalues is the hardest part, but here we do not discuss this part, and assume that eigenvalues are already computed. For each eigenvalue we compute subspaces
Ker(A -
until the sequence of the subspaces stabilizes. In fact, since we have an increasing sequence of subspaces (Ker(A _'JJ)k c Ker(A - 1v1)k+I), then it is sucient only to keep track oftheir dimension (or ranks of the operators (A _'JJ)k. For an eigenvalue let m = m be the number where the sequence Ker(A _'JJ)k stabilizes, i.e. m satisfies dimKer(A _'JJ)m-l < dimKer(A - 'Al)m = dim Ker(A _'JJ)m+l. Then E" = Ker(A - 'Al)m is the generalized eigenspace corresponding to the eigenvalue. After we computed all the generalized eigenspaces there are two possible ways of
251
action. The first way is to find a basis in each generalized eigenspace, so the matrix of the operator a in this basis has the block-diagonal form diag{A1.A2, ... , Ar}, where
Nk = A k - ')..i are nilpotent, so applying the algorithm described in Section 4.4 we get the Jordan canonical representation for N k, and putting k instead of 0 on the main diagonal, we get the Jordan canonical representation for the block A k The advantage of this approach is that we are working with smaller blocks. But we need to find the matrix of the operator in a new basis, which involves inverting a matrix and matrix multiplication. Another way is to find a Jordan canonical basis in each of the generalized eigenspaces
Ak =AIEAk Then we can deal with each matrix Ak separately. The operators
by working directly with the operator A, without splitting it first into the blocks. Again, the algorithm works with a slight modification. Namely, when computing a Jordan
EAk
canonical basis for a generalized eigenspace EAk ' instead of considering subspaces Ran(Ak - ')..,/)1, which we would need to consider when working with the block Ak separately, we consider the subspaces (A - ')..,/)1 EAk
Chapter 10
Linear Transformations
Euclidean Linear Transformations
By a transformation from IR n into IR m , we mean a function of the type T: IR n -+ IR m ,
n with domain IR and codomain IR m , For every vector x E IRII, the vector T(x) called the image of x under the transformation T, and the set E
IR m is
R(I) = {T(x) : x E IRII}, of all images under T, is called the range of the transformation T, Remark. For our convenience later, we have chosen to use R(I) instead of the usual
T( IR II) to denote the range of the transformation T,
For every x = (xl' .. " Xn) E IR n , we can write T(x) = T(x l , .. " xn) = (YI' .. " Ym): Here, for every i = 1, .. ,' m, we have Yi = Tixl' .. " xn)' (1) where Ti : IR II
--7
A=(a~l
amI such that for every
we have where such that for every
a~nJ
amn
Linear Transformations
253
we have where
The matrix A is called the standard matrix for the linear transformation T. Remarks. (1) In other words, a transformation
and y
E JRm
and so the linear transformation T can be interpreted as multiplication of x the standard matrix A.
E]Rn
by
T:]R11 -7]R/11
Example. The linear transformation T:]Rs -7]R3 , defined by the equations y) = 2x) + 3x2 + 5x3 + 7x4 - 9xs; Y2 = 3x2 + 4x3 + 2xs; Y3 =x) + 3x3 -2x4 ; can be expressed in matrix form as
( ;~J=(~ ~ ~ ~ -:]
Y3
1 0 3 -2
254
Linear Transformations
1
(~J(~
so that
3 5
7
0
3 4
3 -2
-n
~(:JJ
1
Example. Suppose that A is the zero m x n matrix. The linear transformation T: lR n n ~ lR m , where T(x) = Ix for every x E lR , is the zero transformation from Rn into lRn. Clearly T(x) = 0 for every x E lR n Example. Suppose that I is the identity n x n matrix. The linear operator T: lR n - t lR m , where T(x) = Ix for every x E lR n , is the identity operator on Rn. Clearly T(x) = x for every
xE lRn.
lR m is a linear transformation, andthatfel' ... , eng is the standard basis for lR n. Then the standard matrix for T is given by
-t
Linear Operators on
]R
In this section, we consider the special case when n = T(xl,x 3)' and study linear operators on ]R2. For every x E ]R2 , we shall write x = (xl' x 2 ). Example. Consider reection across the x2-axis, so that T(x l , x 2) = (-xl' x 2 ). Clearly we have
A=(~l ~)
It is not dicult to see that the standard matrices for reection across the xl-axis and
A~(~ ~J)
and
A~(~ ~)
Also, the standard matrix for reection across the origin is given by
Linear Transformations
255
~(~I ~I}
Standard matrix
YI = -xI { Y2 =x2
Y\ =X\
We give a summary in the table below: Linear operator Equations Reection across x2-axis nYI Reection across X I-axis Reectio:1 across xl
= x2
= -Xl
= -X2
(-1 0)
0-1
Example. For orthogonal projection onto the xI-axis, we have T(x l , x 2) = (XI' 0), with standard matrix
A=(~ ~J
Similarly, the standard matrix for orthogonal projection onto the x2-axis is given by
A=(~ ~J
We give a summary in the table below: Linear operator Orthogonal projection onto X I-axis Orthogonal projection onto x2-axis Equations
YI {
Standard matrix
= xl
Example. For anticlockwise rotation by an angle e, we have T(x I' x 2) = (}II' Y2)' where Y\ + iY2 = (Xl + ix2)(cose + i sine );
Y2 = = Y 1_ { Y2 - x2
(1 0) (0 0)
1
and so -Sine) (XI) SIn e cose x2 It follows that the standard matrix is given by
Y\) ( Y2
A
= (c~se
= (cose
sine
-sin e) cose
256
Linear Transformations
We give a summary in the table below: Linear operator Anticlockwise rotation by angle e { {Y2
Equations
= (kxl' kx2),
A~(~ ~)
The operator is called a contraction if < k < 1 and a dilation if k > 1, and can be extended to negative values of k by noting that for k < 0, we have
This describes contraction or dilation by non-negative scalar -k followed by reection across the origin. We give a summary in the table below: Linear operator Equations Standard matrix Contraction or dilation by factor k
YI { Y2
= kxl = kx2
(ko O k)
Example. For expansion or compression in the xl-direction by positive factor k, we have T(xl' x 2) (kxl' x 2), with standard matrix
A=(~ ~)
This can be extended to negative values of by noting that for k < 0, we have
A=(~ ~).
We give summary in the Linear operator
tabl~
below: Equations
Yl = kxl { Y2 =x2
Standard matrix
Linear Transformations
257
Yl =xl { Y2 = kx2
Example. For shears in the xl-direction with factor k, we have T(x l , x 2) = (xI + kx2, x 2), with standard matrix
A=(~ ~).
For the case k = 1, we have the following.
(k= 1)
T (k= -I)
Similarly, for shears in the x2-direction with factor k, we have standard matrix
A=(~ ~).
We give a summary in the table below: Linear operator Equations Shear in x l-direction Shear in x 2-direction
{
Standard matrix
= xI +kx2 Y2 = x2 Yl = xl +kx2
Yl Y2 =x2
258
Linear Transformations
Example. Consider a linear operator T: ℝ² → ℝ² which consists of a reflection across the x₂-axis, followed by a shear in the x₁-direction with factor 3, and then a reflection across the x₁-axis. To find the standard matrix, consider the effect of T on the standard basis {e₁, e₂}: the vector e₁ = (1, 0) is mapped successively to (−1, 0), then to (−1, 0), then to (−1, 0), while e₂ = (0, 1) is mapped successively to (0, 1), then to (3, 1), then to (3, −1). It follows that the standard matrix for T is

A = \begin{pmatrix} -1 & 3 \\ 0 & -1 \end{pmatrix}.

Let us summarize the above and consider a few special cases. We have the following table of invertible linear operators, with k ≠ 0. Clearly, if A is the standard matrix for an invertible linear operator T, then the inverse matrix A⁻¹ is the standard matrix for the inverse linear operator T⁻¹.

Linear operator T                            Standard matrix A     Inverse matrix A⁻¹
Reflection across line x₁ = x₂               (0 1; 1 0)            (0 1; 1 0)
Expansion or compression in x₁-direction     (k 0; 0 1)            (k⁻¹ 0; 0 1)
Expansion or compression in x₂-direction     (1 0; 0 k)            (1 0; 0 k⁻¹)
Shear in x₁-direction                        (1 k; 0 1)            (1 −k; 0 1)
Shear in x₂-direction                        (1 0; k 1)            (1 0; −k 1)
Next, let us consider the question of elementary row operations on 2 × 2 matrices. It is not difficult to see that an elementary row operation performed on a 2 × 2 matrix A has the effect of multiplying the matrix A by some elementary matrix E to give the product EA. We have the following table.

Elementary row operation                       Elementary matrix E
Interchanging the two rows                     (0 1; 1 0)
Multiplying row 1 by non-zero factor k         (k 0; 0 1)
Multiplying row 2 by non-zero factor k         (1 0; 0 k)
Adding k times row 2 to row 1                  (1 k; 0 1)
Adding k times row 1 to row 2                  (1 0; k 1)
Now, we know that any invertible matrix A can be reduced to the identity matrix by a finite number of elementary row operations. In other words, there exist a finite number of elementary matrices E₁, ..., Eₛ of the types above, with various non-zero values of k, such that Eₛ ⋯ E₁A = I, so that

A = E₁⁻¹ ⋯ Eₛ⁻¹.
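The following short NumPy sketch, added here as an illustration, checks that performing an elementary row operation on a 2 × 2 matrix A is the same as multiplying A on the left by the corresponding elementary matrix E. The particular matrix A and factor k are our own test values.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [4.0, 3.0]])

    # Elementary matrix adding k times row 2 to row 1
    k = -0.5
    E = np.array([[1.0, k],
                  [0.0, 1.0]])

    # Perform the same row operation directly on a copy of A
    B = A.copy()
    B[0, :] += k * B[1, :]

    print(np.allclose(E @ A, B))   # True: the row operation equals left-multiplication by E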
Proposition. Suppose that the linear operator T: ℝ² → ℝ² has standard matrix A, where A is invertible. Then T is the product of a succession of finitely many reflections, expansions, compressions and shears.

In fact, we can prove the following result concerning images of straight lines.

Proposition. Suppose that the linear operator T: ℝ² → ℝ² has standard matrix A, where A is invertible. Then
(a) the image under T of a straight line is a straight line;
(b) the image under T of a straight line through the origin is a straight line through the origin; and
(c) the images under T of parallel straight lines are parallel straight lines.

Proof. Suppose that T(x₁, x₂) = (y₁, y₂). Since A is invertible, we have x = A⁻¹y, where

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} and y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.

The equation of a straight line is given by αx₁ + βx₂ = γ, or in matrix form by

(α  β) \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = (γ).

Hence

(α  β)A⁻¹ \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = (γ).

Let

(α′  β′) = (α  β)A⁻¹.

Then

(α′  β′) \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = (γ).

In other words, the image under T of the straight line αx₁ + βx₂ = γ is α′y₁ + β′y₂ = γ, clearly another straight line. This proves (a). To prove (b), note that straight lines through the origin correspond to γ = 0. To prove (c), note that parallel straight lines correspond to different values of γ for the same values of α and β.

Elementary Properties of Euclidean Linear Transformations

In this section, we establish a number of simple properties of euclidean linear transformations.
Proposition. Suppose that T₁: ℝⁿ → ℝᵐ and T₂: ℝᵐ → ℝᵏ are linear transformations. Then T = T₂∘T₁: ℝⁿ → ℝᵏ is also a linear transformation.

Proof. Since T₁ and T₂ are linear transformations, they have standard matrices A₁ and A₂ respectively. In other words, we have T₁(x) = A₁x for every x ∈ ℝⁿ and T₂(y) = A₂y for every y ∈ ℝᵐ. It follows that T(x) = T₂(T₁(x)) = A₂A₁x for every x ∈ ℝⁿ, so that T has standard matrix A₂A₁.

Example. Suppose that T₁: ℝ² → ℝ² is anticlockwise rotation by π/2 and T₂: ℝ² → ℝ² is orthogonal projection onto the x₁-axis. Then the respective standard matrices are

A₁ = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} and A₂ = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.

It follows that the standard matrices for T₂∘T₁ and T₁∘T₂ are respectively

A₂A₁ = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} and A₁A₂ = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.

Clearly A₂A₁ ≠ A₁A₂, so that T₂∘T₁ ≠ T₁∘T₂.
Example. Suppose that T₁: ℝ² → ℝ² is anticlockwise rotation by θ and T₂: ℝ² → ℝ² is anticlockwise rotation by φ. Then the respective standard matrices are

A₁ = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} and A₂ = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}.

It follows that the standard matrix for T₂∘T₁ is

A₂A₁ = \begin{pmatrix} \cos\phi\cos\theta - \sin\phi\sin\theta & -\cos\phi\sin\theta - \sin\phi\cos\theta \\ \sin\phi\cos\theta + \cos\phi\sin\theta & \cos\phi\cos\theta - \sin\phi\sin\theta \end{pmatrix} = \begin{pmatrix} \cos(\phi+\theta) & -\sin(\phi+\theta) \\ \sin(\phi+\theta) & \cos(\phi+\theta) \end{pmatrix}.
Hence T₂∘T₁ is anticlockwise rotation by the angle φ + θ.
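As a quick numerical illustration (not part of the original text), the following NumPy sketch checks that composing the two rotation matrices gives the rotation by φ + θ; the sample angles are our own.

    import numpy as np

    def rotation(angle):
        """Standard matrix for anticlockwise rotation by the given angle."""
        return np.array([[np.cos(angle), -np.sin(angle)],
                         [np.sin(angle),  np.cos(angle)]])

    theta, phi = 0.4, 1.1
    # The composition T2 . T1 has standard matrix A2 A1, which equals rotation by phi + theta
    print(np.allclose(rotation(phi) @ rotation(theta), rotation(phi + theta)))   # True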
Example. The reader should check that in ℝ², reflection across the x₁-axis followed by reflection across the x₂-axis gives reflection across the origin.

Linear transformations that map distinct vectors to distinct vectors are of special importance.

Definition. A linear transformation T: ℝⁿ → ℝᵐ is said to be one-to-one if for every x′, x″ ∈ ℝⁿ, we have x′ = x″ whenever T(x′) = T(x″).

Example. If we consider linear operators T: ℝ² → ℝ², then T is one-to-one precisely when the standard matrix A is invertible. To see this, suppose first of all that A is invertible. If T(x′) = T(x″), then Ax′ = Ax″. Multiplying on the left by A⁻¹, we obtain x′ = x″. Suppose next that A is not invertible. Then there exists x ∈ ℝ² such that x ≠ 0 and Ax = 0. On the other hand, we clearly have A0 = 0. It follows that T(x) = T(0), so that T is not one-to-one.

Proposition. Suppose that the linear operator T: ℝⁿ → ℝⁿ has standard matrix A. Then the following statements are equivalent:
(a) The matrix A is invertible.
(b) The linear operator T is one-to-one.
(c) The range of T is ℝⁿ; in other words, R(T) = ℝⁿ.

Proof. ((a) ⇒ (b)) Suppose that T(x′) = T(x″). Then Ax′ = Ax″. Multiplying on the left by A⁻¹ gives x′ = x″. ((b) ⇒ (a)) Suppose that T is one-to-one. Then the system Ax = 0 has unique solution x = 0 in ℝⁿ. It follows that A can be reduced by elementary row operations to the identity matrix I, and is therefore invertible.
((a) ⇒ (c)) For any y ∈ ℝⁿ, clearly x = A⁻¹y ∈ ℝⁿ satisfies T(x) = Ax = y, so that R(T) = ℝⁿ. ((c) ⇒ (a)) Suppose that {e₁, ..., eₙ} is the standard basis for ℝⁿ. Let x₁, ..., xₙ ∈ ℝⁿ satisfy T(xⱼ) = eⱼ, so that Axⱼ = eⱼ, for every j = 1, ..., n. Write C = (x₁ ... xₙ). Then AC = I, so that A is invertible.

Definition. Suppose that the linear operator T: ℝⁿ → ℝⁿ has standard matrix A, where A is invertible. Then the linear operator T⁻¹: ℝⁿ → ℝⁿ, defined by T⁻¹(x) = A⁻¹x for every x ∈ ℝⁿ, is called the inverse of the linear operator T.
Example. Consider the linear operator T: ℝ² → ℝ², defined by T(x) = Ax for every x ∈ ℝ², where

A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}.

Clearly A is invertible, with

A⁻¹ = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}.

Hence the inverse linear operator is T⁻¹: ℝ² → ℝ², defined by T⁻¹(x) = A⁻¹x for every x ∈ ℝ².
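The following NumPy sketch, added as an illustration, checks this example numerically: it computes A⁻¹ and confirms that T⁻¹(T(x)) = x. The test vector is our own choice.

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 2.0]])
    A_inv = np.linalg.inv(A)

    print(A_inv)                             # [[ 2. -1.], [-1.  1.]]

    x = np.array([3.0, -2.0])                # an arbitrary test vector
    print(np.allclose(A_inv @ (A @ x), x))   # True: T^{-1}(T(x)) = x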
Example. Suppose that T: ℝ² → ℝ² is anticlockwise rotation by angle θ. The reader should check that T⁻¹: ℝ² → ℝ² is anticlockwise rotation by angle 2π − θ.

Next, we study the linearity properties of euclidean linear transformations, which we shall use later to discuss linear transformations in arbitrary real vector spaces.

Proposition. A transformation T: ℝⁿ → ℝᵐ is linear if and only if the following two conditions are satisfied:
(a) For every u, v ∈ ℝⁿ, we have T(u + v) = T(u) + T(v).
(b) For every u ∈ ℝⁿ and c ∈ ℝ, we have T(cu) = cT(u).
Proof. Suppose first of all that T: ℝⁿ → ℝᵐ is a linear transformation. Let A be the standard matrix for T. Then for every u, v ∈ ℝⁿ and c ∈ ℝ, we have

T(u + v) = A(u + v) = Au + Av = T(u) + T(v)

and

T(cu) = A(cu) = c(Au) = cT(u).

Suppose now that (a) and (b) hold. To show that T is linear, we need to find a matrix A such that T(x) = Ax for every x ∈ ℝⁿ. Suppose that {e₁, ..., eₙ} is the standard basis for ℝⁿ. As suggested by Proposition 8A, we write

A = (T(e₁) ... T(eₙ)).

Then for every x = (x₁, ..., xₙ) ∈ ℝⁿ, we have

Ax = (T(e₁) ... T(eₙ)) \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x₁T(e₁) + ... + xₙT(eₙ).

Using (b) on each summand and then using (a) inductively, we obtain

Ax = T(x₁e₁) + ... + T(xₙeₙ) = T(x₁e₁ + ... + xₙeₙ) = T(x),

as required.

To conclude our study of euclidean linear transformations, we briefly mention the problem of eigenvalues and eigenvectors of euclidean linear operators.
Definition. Suppose that T: ℝⁿ → ℝⁿ is a linear operator. Then a real number λ ∈ ℝ is called an eigenvalue of T if there exists a non-zero vector x ∈ ℝⁿ such that T(x) = λx. This non-zero vector x ∈ ℝⁿ is called an eigenvector of T corresponding to the eigenvalue λ.

Remark. Note that the equation T(x) = λx is equivalent to the equation Ax = λx, where A is the standard matrix for T. It follows that there is no distinction between eigenvalues and eigenvectors of T and those of the standard matrix A. We therefore do not need to discuss this problem any further.
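As a small illustration (added here, not from the original text), eigenvalues and eigenvectors of the standard matrix, and hence of the operator, can be computed with NumPy; the matrix below is our own example.

    import numpy as np

    # Standard matrix of a linear operator on R^2 (an arbitrary example of ours)
    A = np.array([[ 5.0, -2.0],
                  [-2.0,  2.0]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)                    # eigenvalues of T and of A coincide

    # Check the defining relation A x = lambda x for the first pair
    lam, x = eigenvalues[0], eigenvectors[:, 0]
    print(np.allclose(A @ x, lam * x))    # True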
Example. Suppose that V is a finite dimensional vector space, with basis {w₁, ..., wₙ}. Define a transformation T: V → ℝⁿ as follows. For every u ∈ V, there exists a unique vector (β₁, ..., βₙ) ∈ ℝⁿ such that u = β₁w₁ + ... + βₙwₙ. We let T(u) = (β₁, ..., βₙ). In other words, the transformation T gives the coordinates of any vector u ∈ V with respect to the given basis {w₁, ..., wₙ}. Suppose now that v = λ₁w₁ + ... + λₙwₙ is another vector in V. Then u + v = (β₁ + λ₁)w₁ + ... + (βₙ + λₙ)wₙ, so that
T(u + v) = (β₁ + λ₁, ..., βₙ + λₙ) = (β₁, ..., βₙ) + (λ₁, ..., λₙ) = T(u) + T(v).
Also, if c ∈ ℝ, then cu = cβ₁w₁ + ... + cβₙwₙ, so that
T(cu) = (cβ₁, ..., cβₙ) = c(β₁, ..., βₙ) = cT(u).
Hence T is a linear transformation. We shall return to this in greater detail in the next section.

Example. Suppose that Pₙ denotes the vector space of all polynomials with real coefficients and degree at most n. Define a transformation T: Pₙ → Pₙ as follows. For every polynomial p = p₀ + p₁x + ... + pₙxⁿ in Pₙ, we let T(p) = pₙ + pₙ₋₁x + ... + p₀xⁿ. Suppose now that q = q₀ + q₁x + ... + qₙxⁿ is another polynomial in Pₙ. Then
p + q = (p₀ + q₀) + (p₁ + q₁)x + ... + (pₙ + qₙ)xⁿ,
so that
T(p + q) = (pₙ + qₙ) + (pₙ₋₁ + qₙ₋₁)x + ... + (p₀ + q₀)xⁿ = (pₙ + pₙ₋₁x + ... + p₀xⁿ) + (qₙ + qₙ₋₁x + ... + q₀xⁿ) = T(p) + T(q).
Also, for any c ∈ ℝ, we have cp = cp₀ + cp₁x + ... + cpₙxⁿ, so that
T(cp) = cpₙ + cpₙ₋₁x + ... + cp₀xⁿ = c(pₙ + pₙ₋₁x + ... + p₀xⁿ) = cT(p).
Hence T is a linear transformation.

Example. Let V denote the vector space of all real valued functions differentiable everywhere in ℝ, and let W denote the vector space of all real valued functions defined on ℝ. Consider the transformation T: V → W, where T(f) = f′ for every f ∈ V. It is easy to check from properties of derivatives that T is a linear transformation.

Example. Let V denote the vector space of all real valued functions that are Riemann integrable over the interval [0, 1]. Consider the transformation T: V → ℝ, where

T(f) = \int_0^1 f(x)\,dx
for every f ∈ V. It is easy to check from properties of the Riemann integral that T is a linear transformation.

Consider a linear transformation T: V → W from a finite dimensional real vector space V into a real vector space W. Suppose that {v₁, ..., vₙ} is a basis of V. Then every u ∈ V can be written uniquely in the form u = β₁v₁ + ... + βₙvₙ, where β₁, ..., βₙ ∈ ℝ. It follows that
T(u) = T(β₁v₁ + ... + βₙvₙ) = T(β₁v₁) + ... + T(βₙvₙ) = β₁T(v₁) + ... + βₙT(vₙ).
We have therefore proved the following generalization of the earlier Proposition.

Proposition. Suppose that T: V → W is a linear transformation from a finite dimensional real vector space V into a real vector space W. Suppose further that {v₁, ..., vₙ} is a basis of V. Then T is completely determined by T(v₁), ..., T(vₙ).

Example. Consider a linear transformation T: P₂ → ℝ, where T(1) = 1, T(x) = 2 and T(x²) = 3. Since {1, x, x²} is a basis of P₂, this linear transformation is completely determined. In particular, we have, for example, T(5 − 3x + 2x²) = 5T(1) − 3T(x) + 2T(x²) = 5.

Example. Consider a linear transformation T: ℝ⁴ → ℝ, where T(1, 0, 0, 0) = 1, T(1, 1, 0, 0) = 2, T(1, 1, 1, 0) = 3 and T(1, 1, 1, 1) = 4. Since {(1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)} is a basis of ℝ⁴, this linear transformation is completely determined. In particular, we have, for example,
T(6, 4, 3, 1) = T(2(1, 0, 0, 0) + (1, 1, 0, 0) + 2(1, 1, 1, 0) + (1, 1, 1, 1)) = 2T(1, 0, 0, 0) + T(1, 1, 0, 0) + 2T(1, 1, 1, 0) + T(1, 1, 1, 1) = 14.

We also have the following generalization of the earlier Proposition.

Proposition. Suppose that V, W, U are real vector spaces. Suppose further that T₁: V → W and T₂: W → U are linear transformations. Then T = T₂∘T₁: V → U is also a linear transformation.

Proof. Suppose that u, v ∈ V. Then
T(u + v) = T₂(T₁(u + v)) = T₂(T₁(u) + T₁(v)) = T₂(T₁(u)) + T₂(T₁(v)) = T(u) + T(v).
Also, if c ∈ ℝ, then T(cu) = T₂(T₁(cu)) = T₂(cT₁(u)) = cT₂(T₁(u)) = cT(u). Hence T is a linear transformation.
Change of Basis
Suppose that V is a real vector space, with basis B = {u₁, ..., uₙ}. Then every vector u ∈ V can be written uniquely as a linear combination u = β₁u₁ + ... + βₙuₙ, where β₁, ..., βₙ ∈ ℝ. It follows that the vector u can be identified with the vector (β₁, ..., βₙ) ∈ ℝⁿ.
The matrix

[u]_B = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix}

is called the coordinate matrix of u relative to the basis B = {u₁, ..., uₙ}.

Example. The vectors u₁ = (1, 2, 1, 0), u₂ = (3, 3, 3, 0), u₃ = (2, −10, 0, 0), u₄ = (−2, 1, −6, 2) are linearly independent in ℝ⁴, and so B = {u₁, u₂, u₃, u₄} is a basis of ℝ⁴. It follows that for any u = (x, y, z, w) ∈ ℝ⁴, we can write u = β₁u₁ + β₂u₂ + β₃u₃ + β₄u₄. In matrix notation, this becomes

\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} 1 & 3 & 2 & -2 \\ 2 & 3 & -10 & 1 \\ 1 & 3 & 0 & -6 \\ 0 & 0 & 0 & 2 \end{pmatrix} \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix},

so that

[u]_B = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 2 & -2 \\ 2 & 3 & -10 & 1 \\ 1 & 3 & 0 & -6 \\ 0 & 0 & 0 & 2 \end{pmatrix}^{-1} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}.
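In practice, the coordinate matrix [u]_B is found by solving this linear system. The following NumPy sketch, added here as an illustration, does this for the basis in the example above; the sample vector u is taken from a later example in this section.

    import numpy as np

    # Columns are the basis vectors u1, u2, u3, u4 from the example above
    U = np.array([[ 1.0,  3.0,   2.0, -2.0],
                  [ 2.0,  3.0, -10.0,  1.0],
                  [ 1.0,  3.0,   0.0, -6.0],
                  [ 0.0,  0.0,   0.0,  2.0]])

    u = np.array([6.0, -1.0, 2.0, 2.0])      # a sample vector (x, y, z, w)

    beta = np.linalg.solve(U, u)             # coordinate matrix [u]_B
    print(beta)                              # approximately [-10, 6, 0, 1]
    print(np.allclose(U @ beta, u))          # True: u = beta1*u1 + ... + beta4*u4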
Remark. Consider a function φ: V → ℝⁿ, where φ(u) = [u]_B for every u ∈ V. It is not difficult to see that this function gives rise to a one-to-one correspondence between the elements of V and the elements of ℝⁿ. Furthermore, note that [u + v]_B = [u]_B + [v]_B and [cu]_B = c[u]_B, so that φ(u + v) = φ(u) + φ(v) and φ(cu) = cφ(u) for every u, v ∈ V and c ∈ ℝ. Thus φ is a linear transformation, and preserves much of the structure of V. We also say that V is isomorphic to ℝⁿ. In practice, once we have made this identification between vectors and their coordinate matrices, we can basically forget about the basis B and imagine that we are working in ℝⁿ with the standard basis.

Clearly, if we change from one basis B = {u₁, ..., uₙ} to another basis C = {v₁, ..., vₙ} of V, then we also need to find a way of calculating [u]_C in terms of [u]_B for every vector u ∈ V. To do this, note that each of the vectors v₁, ..., vₙ can be written uniquely as a linear combination of the vectors u₁, ..., uₙ. Suppose that for i = 1, ..., n, we have vᵢ = a₁ᵢu₁ + ... + aₙᵢuₙ, where a₁ᵢ, ..., aₙᵢ ∈ ℝ, so that
[vᵢ]_B = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{ni} \end{pmatrix}.

Also, for every u ∈ V, we can write u = β₁u₁ + ... + βₙuₙ = γ₁v₁ + ... + γₙvₙ, where β₁, ..., βₙ, γ₁, ..., γₙ ∈ ℝ, so that

[u]_B = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} and [u]_C = \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{pmatrix}.

Clearly

u = γ₁v₁ + ... + γₙvₙ = γ₁(a₁₁u₁ + ... + aₙ₁uₙ) + ... + γₙ(a₁ₙu₁ + ... + aₙₙuₙ) = (γ₁a₁₁ + ... + γₙa₁ₙ)u₁ + ... + (γ₁aₙ₁ + ... + γₙaₙₙ)uₙ = β₁u₁ + ... + βₙuₙ.

Hence

\begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{pmatrix}.

We have proved the following result.

Proposition. Suppose that B = {u₁, ..., uₙ} and C = {v₁, ..., vₙ} are bases of a real vector space V. Then for every u ∈ V, we have [u]_B = P[u]_C, where the columns of the matrix P = ([v₁]_B ... [vₙ]_B) are precisely the coordinate matrices of the elements of C relative to the basis B.

Remark. Strictly speaking, the Proposition gives [u]_B in terms of [u]_C. However, note that the matrix P is invertible (why?), so that [u]_C = P⁻¹[u]_B.

Definition. The matrix P in the Proposition is sometimes called the transition matrix from the basis C to the basis B.
Example. We know that, with u₁, u₂, u₃, u₄ as in the earlier example and with

v₁ = u₁, v₂ = −2u₁ + u₂, v₃ = 11u₁ − 4u₂ + u₃, v₄ = −27u₁ + 11u₂ − 2u₃ + u₄,

both B = {u₁, u₂, u₃, u₄} and C = {v₁, v₂, v₃, v₄} are bases of ℝ⁴, so that

P = ([v₁]_B [v₂]_B [v₃]_B [v₄]_B) = \begin{pmatrix} 1 & -2 & 11 & -27 \\ 0 & 1 & -4 & 11 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Hence [u]_B = P[u]_C for every u ∈ ℝ⁴. It is also easy to check that

u₁ = v₁, u₂ = 2v₁ + v₂, u₃ = −3v₁ + 4v₂ + v₃, u₄ = −v₁ − 3v₂ + 2v₃ + v₄,

so that

Q = ([u₁]_C [u₂]_C [u₃]_C [u₄]_C) = \begin{pmatrix} 1 & 2 & -3 & -1 \\ 0 & 1 & 4 & -3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Hence [u]_C = Q[u]_B for every u ∈ ℝ⁴. Note that PQ = I. Now let u = (6, −1, 2, 2). We can check that u = v₁ + 3v₂ + 2v₃ + v₄, so that

[u]_C = \begin{pmatrix} 1 \\ 3 \\ 2 \\ 1 \end{pmatrix}.

Then

[u]_B = P[u]_C = \begin{pmatrix} 1 & -2 & 11 & -27 \\ 0 & 1 & -4 & 11 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \\ 2 \\ 1 \end{pmatrix} = \begin{pmatrix} -10 \\ 6 \\ 0 \\ 1 \end{pmatrix}.

Check that u = −10u₁ + 6u₂ + u₄.
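The following NumPy sketch, added here as an illustration, verifies the two transition matrices from this example: it checks that PQ = I and converts the coordinate matrix [u]_C into [u]_B.

    import numpy as np

    # Transition matrices from the example: P (from C to B) and Q (from B to C)
    P = np.array([[1.0, -2.0, 11.0, -27.0],
                  [0.0,  1.0, -4.0,  11.0],
                  [0.0,  0.0,  1.0,  -2.0],
                  [0.0,  0.0,  0.0,   1.0]])
    Q = np.array([[1.0, 2.0, -3.0, -1.0],
                  [0.0, 1.0,  4.0, -3.0],
                  [0.0, 0.0,  1.0,  2.0],
                  [0.0, 0.0,  0.0,  1.0]])

    print(np.allclose(P @ Q, np.eye(4)))     # True: Q = P^{-1}

    u_C = np.array([1.0, 3.0, 2.0, 1.0])     # [u]_C for u = (6, -1, 2, 2)
    u_B = P @ u_C
    print(u_B)                               # [-10.  6.  0.  1.]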
Example. Consider the vector space P₂. It is not too difficult to check that the polynomials

u₁ = 1 + x, u₂ = 1 + x², u₃ = x + x²

form a basis B = {u₁, u₂, u₃} of P₂. Consider the polynomial u = 1 + 4x − x². Then u = β₁u₁ + β₂u₂ + β₃u₃, where

1 + 4x − x² = β₁(1 + x) + β₂(1 + x²) + β₃(x + x²) = (β₁ + β₂) + (β₁ + β₃)x + (β₂ + β₃)x²,

so that β₁ + β₂ = 1, β₁ + β₃ = 4 and β₂ + β₃ = −1. Hence (β₁, β₂, β₃) = (3, −2, 1), and

[u]_B = \begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix}.

On the other hand, it is also not too difficult to check that the polynomials

v₁ = 1, v₂ = 1 + x, v₃ = 1 + x + x²

form a basis C = {v₁, v₂, v₃} of P₂, and that (γ₁, γ₂, γ₃) = (−3, 5, −1) satisfies u = γ₁v₁ + γ₂v₂ + γ₃v₃, so that

[u]_C = \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}.

Hence the transition matrix from the basis C to the basis B is

P = ([v₁]_B [v₂]_B [v₃]_B) = \begin{pmatrix} 1/2 & 1 & 1/2 \\ 1/2 & 0 & 1/2 \\ -1/2 & 0 & 1/2 \end{pmatrix},

and indeed

\begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1/2 & 1 & 1/2 \\ 1/2 & 0 & 1/2 \\ -1/2 & 0 & 1/2 \end{pmatrix} \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix},

confirming that [u]_B = P[u]_C.
Definition. Suppose that T: V → W is a linear transformation from a real vector space V into a real vector space W. Then the set ker(T) = {u ∈ V : T(u) = 0} is called the kernel of T, and the set R(T) = {T(u) : u ∈ V} is called the range of T.

Example. Suppose that T: V → V is the identity operator on V. Clearly we have ker(T) = {0} and R(T) = V.

Example. Suppose that T: ℝ² → ℝ² is orthogonal projection onto the x₁-axis. Then ker(T) is the x₂-axis, while R(T) is the x₁-axis.

Example. Suppose that T: ℝⁿ → ℝⁿ is one-to-one. Then ker(T) = {0} and R(T) = ℝⁿ.

Example. Consider the linear transformation T: V → W, where V denotes the vector space of all real valued functions differentiable everywhere in ℝ, where W denotes the space of all real valued functions defined on ℝ, and where T(f) = f′ for every f ∈ V. Then ker(T) is the set of all differentiable functions with derivative 0, and so is the set of all constant functions on ℝ.

Example. Consider the linear transformation T: V → ℝ, where V denotes the vector space of all real valued functions Riemann integrable over the interval [0, 1], and where

T(f) = \int_0^1 f(x)\,dx

for every f ∈ V. Then ker(T) is the set of all Riemann integrable functions on [0, 1] with zero mean, while R(T) = ℝ.

Proposition. Suppose that T: V → W is a linear transformation from a real vector space V into a real vector space W. Then ker(T) is a subspace of V, while R(T) is a subspace of W.

Proof. Since T(0) = 0, it follows that 0 ∈ ker(T) ⊆ V and 0 ∈ R(T) ⊆ W. For any u, v ∈ ker(T), we have T(u + v) = T(u) + T(v) = 0 + 0 = 0, so that u + v ∈ ker(T). Suppose further that c ∈ ℝ. Then T(cu) = cT(u) = c0 = 0, so that cu ∈ ker(T). Hence ker(T) is a subspace of V. Suppose next that w, z ∈ R(T). Then there exist u, v ∈ V such that T(u) = w and T(v) = z. Hence T(u + v) = T(u) + T(v) = w + z, so that w + z ∈ R(T). Suppose further that c ∈ ℝ. Then T(cu) = cT(u) = cw, so that cw ∈ R(T). Hence R(T) is a subspace of W.

To complete this section, we prove the following generalization of the Rank-nullity theorem.

Proposition. Suppose that T: V → W is a linear transformation from an n-dimensional real vector space V into a real vector space W. Then dim ker(T) + dim R(T) = n.
Proof. Suppose first of all that dim ker(T) = n. Then ker(T) = V, and so R(T) = {0}, and the result follows immediately. Suppose next that dim ker(T) = 0, so that ker(T) = {0}. If {v₁, ..., vₙ} is a basis of V, then it follows that T(v₁), ..., T(vₙ) are linearly independent in W, for otherwise there exist c₁, ..., cₙ ∈ ℝ, not all zero, such that c₁T(v₁) + ... + cₙT(vₙ) = 0, so that T(c₁v₁ + ... + cₙvₙ) = 0, a contradiction since c₁v₁ + ... + cₙvₙ ≠ 0. On the other hand, elements of R(T) are linear combinations of T(v₁), ..., T(vₙ). Hence dim R(T) = n, and the result again follows immediately. We may therefore assume that dim ker(T) = r, where 1 ≤ r < n. Let {v₁, ..., vᵣ} be a basis of ker(T). This basis can be extended to a basis {v₁, ..., vᵣ, vᵣ₊₁, ..., vₙ} of V. It suffices to show that

{T(vᵣ₊₁), ..., T(vₙ)}

is a basis of R(T). Suppose that u ∈ R(T). Then there exists w ∈ V such that u = T(w). We can write w = β₁v₁ + ... + βₙvₙ, where β₁, ..., βₙ ∈ ℝ, so that

u = T(w) = β₁T(v₁) + ... + βᵣT(vᵣ) + βᵣ₊₁T(vᵣ₊₁) + ... + βₙT(vₙ) = βᵣ₊₁T(vᵣ₊₁) + ... + βₙT(vₙ),

since T(v₁) = ... = T(vᵣ) = 0. It follows that {T(vᵣ₊₁), ..., T(vₙ)} spans R(T). It remains to prove that its elements are linearly independent. Suppose that cᵣ₊₁, ..., cₙ ∈ ℝ satisfy

cᵣ₊₁T(vᵣ₊₁) + ... + cₙT(vₙ) = 0.

We need to show that cᵣ₊₁ = ... = cₙ = 0. By linearity, it follows from this equation that T(cᵣ₊₁vᵣ₊₁ + ... + cₙvₙ) = 0, so that cᵣ₊₁vᵣ₊₁ + ... + cₙvₙ ∈ ker(T). Hence there exist c₁, ..., cᵣ ∈ ℝ such that

cᵣ₊₁vᵣ₊₁ + ... + cₙvₙ = c₁v₁ + ... + cᵣvᵣ,

so that

c₁v₁ + ... + cᵣvᵣ − cᵣ₊₁vᵣ₊₁ − ... − cₙvₙ = 0.

Since {v₁, ..., vₙ} is a basis of V, it follows that c₁ = ... = cᵣ = cᵣ₊₁ = ... = cₙ = 0.

Remark. We sometimes say that dim R(T) and dim ker(T) are respectively the rank and the nullity of the linear transformation T.
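For a euclidean linear transformation with standard matrix A, the rank and nullity can be checked numerically. The following NumPy sketch, added as an illustration with a matrix of our own choosing, confirms that rank plus nullity equals the dimension of the domain.

    import numpy as np

    # Standard matrix of a linear transformation T: R^4 -> R^3 (an example of ours)
    A = np.array([[1.0, 2.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0, 1.0],
                  [1.0, 3.0, 1.0, 2.0]])    # third row = first + second, so the rank is 2

    n = A.shape[1]
    rank = np.linalg.matrix_rank(A)          # dim R(T)
    nullity = n - rank                       # dim ker(T)
    print(rank, nullity, rank + nullity == n)   # 2 2 True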
Definition. Suppose that T: V → W is a linear transformation from a real vector space V into a real vector space W. Then T is said to be one-to-one if for every u′, u″ ∈ V, we have u′ = u″ whenever T(u′) = T(u″).

Proposition. Suppose that T: V → W is a linear transformation from a real vector space V into a real vector space W. Then T is one-to-one if and only if ker(T) = {0}.

Proof. (⇒) Clearly 0 ∈ ker(T). Suppose that ker(T) ≠ {0}. Then there exists a non-zero v ∈ ker(T). It follows that T(v) = T(0), and so T is not one-to-one. (⇐) Suppose that ker(T) = {0}. Given any u′, u″ ∈ V, we have T(u′) − T(u″) = T(u′ − u″) = 0 if and only if u′ − u″ = 0; in other words, if and only if u′ = u″.

We have the following generalization of the earlier Proposition.

Proposition. Suppose that T: V → V is a linear operator on a finite-dimensional real vector space V. Then the following statements are equivalent.
(a) The linear operator T is one-to-one.
(b) ker(T) = {0}.
(c) R(T) = V.

Proof. The equivalence of (a) and (b) is established by the previous Proposition. The equivalence of (b) and (c) follows from the Rank-nullity theorem.

Suppose that T: V → W is a one-to-one linear transformation from a real vector space V into a real vector space W. Then for every w ∈ R(T), there exists exactly one u ∈ V such that T(u) = w. We can therefore define a transformation T⁻¹: R(T) → V by writing T⁻¹(w) = u, where u ∈ V is the unique vector satisfying T(u) = w.

Proposition. Suppose that T: V → W is a one-to-one linear transformation from a real vector space V into a real vector space W. Then T⁻¹: R(T) → V is a linear transformation.

Proof. Suppose that w, z ∈ R(T). Then there exist u, v ∈ V such that T⁻¹(w) = u and T⁻¹(z) = v. It follows that T(u) = w and T(v) = z, so that T(u + v) = T(u) + T(v) = w + z,
whence T⁻¹(w + z) = u + v = T⁻¹(w) + T⁻¹(z). Suppose further that c ∈ ℝ. Then T(cu) = cw, so that T⁻¹(cw) = cu = cT⁻¹(w).

We also have the following result concerning compositions of linear transformations, which requires no further proof, in view of our knowledge concerning inverse functions.

Proposition. Suppose that V, W, U are real vector spaces. Suppose further that T₁: V → W and T₂: W → U are one-to-one linear transformations. Then
(a) The linear transformation T₂∘T₁: V → U is one-to-one, and
(b) (T₂∘T₁)⁻¹ = T₁⁻¹∘T₂⁻¹.
Suppose next that V is a finite dimensional real vector space, with basis B = {v₁, ..., vₙ}, so that for every v ∈ V, the matrix [v]_B is the coordinate matrix of v relative to the basis B. Consider now a transformation φ: V → ℝⁿ, where φ(v) = [v]_B for every v ∈ V. The proof of the following result is straightforward.

Proposition. Suppose that the real vector space V has basis B = {v₁, ..., vₙ}. Then the transformation φ: V → ℝⁿ, where φ(v) = [v]_B for every v ∈ V, is a one-to-one linear transformation, with range R(φ) = ℝⁿ. Furthermore, the inverse linear transformation φ⁻¹: ℝⁿ → V is also one-to-one, with range R(φ⁻¹) = V.

Suppose next that C = {w₁, ..., wₘ} is a basis of W. Then we can define a linear transformation ψ: W → ℝᵐ, where ψ(w) = [w]_C for every w ∈ W, in a similar way.
We now have the following diagram of linear transformations:

    V ----T----> W
    φ↓           ↓ψ
    ℝⁿ ----S----> ℝᵐ
Clearly the composition

S = ψ∘T∘φ⁻¹: ℝⁿ → ℝᵐ

is a euclidean linear transformation, and can therefore be described in terms of a standard matrix A. Our task is to determine this matrix A in terms of T and the bases B and C. We know from the earlier Proposition that

A = (S(e₁) ... S(eₙ)),

where {e₁, ..., eₙ} is the standard basis for ℝⁿ. For every j = 1, ..., n, we have

S(eⱼ) = ψ(T(φ⁻¹(eⱼ))) = ψ(T(vⱼ)) = [T(vⱼ)]_C.

It follows that
A = ([T(v₁)]_C ... [T(vₙ)]_C).

Definition. The matrix A given by this equation is called the matrix for the linear transformation T with respect to the bases B and C.

We now have the following diagram of linear transformations:

    V ----T----> W
    φ↓           ↓ψ
    ℝⁿ ----S----> ℝᵐ
Note that T = ψ⁻¹∘S∘φ: V → W. For every v ∈ V, we have the following chain:

v ↦ [v]_B ↦ A[v]_B ↦ ψ⁻¹(A[v]_B).

More precisely, if v ∈ V and

A[v]_B = \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_m \end{pmatrix},

say, then T(v) = ψ⁻¹(A[v]_B) = γ₁w₁ + ... + γₘwₘ. We have proved the following result.

Proposition. Suppose that T: V → W is a linear transformation from a real vector space V into a real vector space W. Suppose further that V and W are finite dimensional, with bases B and C respectively, and that A is the matrix for the linear transformation T with respect to the bases B and C. Then for every v ∈ V, we have T(v) = w, where w ∈ W is the unique vector satisfying [w]_C = A[v]_B.

Remark. In the special case when V = W, the linear transformation T: V → W is a linear operator on V. Of course, we may choose a basis B for the domain V of T and a basis C for the codomain V of T. In the case when T is the identity linear operator, we often choose B ≠ C, since this represents a change of basis. In the case when T is not the identity operator, we often choose B = C for the sake of convenience; we then say that A is the matrix for the linear operator T with respect to the basis B.

Example. Consider an operator T: P₃ → P₃ on the real vector space P₃ of all polynomials with real coefficients and degree at most 3, where for every polynomial p(x) in P₃, we have T(p(x)) = xp′(x), the product of x with the formal derivative p′(x) of p(x). The reader is invited to check that T is a linear operator. Now consider the basis B = {1, x, x², x³} of P₃. The matrix for T with respect to B is given by

A = ([T(1)]_B [T(x)]_B [T(x²)]_B [T(x³)]_B) = ([0]_B [x]_B [2x²]_B [3x³]_B)
= \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.

Suppose that p(x) = p₀ + 2x + 4x² + 3x³, where p₀ ∈ ℝ. Then
[p(x)]_B = \begin{pmatrix} p_0 \\ 2 \\ 4 \\ 3 \end{pmatrix} and A[p(x)]_B = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} p_0 \\ 2 \\ 4 \\ 3 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \\ 8 \\ 9 \end{pmatrix},
so that T(p(x)) = 2x + 8x² + 9x³. This can be easily verified by noting that T(p(x)) = xp′(x) = x(2 + 8x + 9x²) = 2x + 8x² + 9x³. In general, if p(x) = p₀ + p₁x + p₂x² + p₃x³, then

[p(x)]_B = \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} and A[p(x)]_B = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} 0 \\ p_1 \\ 2p_2 \\ 3p_3 \end{pmatrix},

so that T(p(x)) = p₁x + 2p₂x² + 3p₃x³. Observe that T(p(x)) = xp′(x) = x(p₁ + 2p₂x + 3p₃x²) = p₁x + 2p₂x² + 3p₃x³, verifying our result.
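As an illustration (not from the original text), the following NumPy sketch builds the matrix of this operator column by column, representing each polynomial by its coefficient vector with respect to B = {1, x, x², x³}. The helper function name is our own.

    import numpy as np

    def T(coeffs):
        """Apply T(p(x)) = x p'(x) to a polynomial given by its coefficient
        vector (p0, p1, p2, p3) with respect to the basis {1, x, x^2, x^3}."""
        p0, p1, p2, p3 = coeffs
        # p'(x) = p1 + 2 p2 x + 3 p3 x^2, so x p'(x) = p1 x + 2 p2 x^2 + 3 p3 x^3
        return np.array([0.0, p1, 2.0 * p2, 3.0 * p3])

    # Matrix for T with respect to B: columns are T applied to the basis vectors
    basis = np.eye(4)
    A = np.column_stack([T(basis[:, j]) for j in range(4)])
    print(A)          # zeros except for 0, 1, 2, 3 down the diagonal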
Example. Consider the linear operator T: ℝ² → ℝ², given by T(x₁, x₂) = (2x₁ + x₂, x₁ + 3x₂) for every (x₁, x₂) ∈ ℝ². Consider also the basis B = {(1, 0), (1, 1)} of ℝ². Since T(1, 0) = (2, 1) = (1, 0) + (1, 1) and T(1, 1) = (3, 4) = −(1, 0) + 4(1, 1), the matrix for T with respect to B is given by

A = ([T(1, 0)]_B [T(1, 1)]_B) = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix}.

Suppose that we wish to calculate T(3, 2). Note that [(3, 2)]_B = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, and so we have

A[(3, 2)]_B = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ 9 \end{pmatrix},

so that T(3, 2) = −(1, 0) + 9(1, 1) = (8, 9). This can be easily verified directly. In general, we have

[(x₁, x₂)]_B = \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} and A[(x₁, x₂)]_B = \begin{pmatrix} x_1 - 2x_2 \\ x_1 + 3x_2 \end{pmatrix},

so that T(x₁, x₂) = (x₁ − 2x₂)(1, 0) + (x₁ + 3x₂)(1, 1) = (2x₁ + x₂, x₁ + 3x₂).

Example. Suppose that T: ℝⁿ → ℝᵐ is a linear transformation. Suppose further that B and C are the standard bases for ℝⁿ and ℝᵐ respectively. Then the matrix for T with respect to B and C is given by

A = ([T(e₁)]_C ... [T(eₙ)]_C) = (T(e₁) ... T(eₙ)),

so it follows from the earlier Proposition that A is simply the standard matrix for T.
Suppose now that T₁: V → W and T₂: W → U are linear transformations, where the real vector spaces V, W, U are finite dimensional, with respective bases B = {v₁, ..., vₙ}, C = {w₁, ..., wₘ} and D = {u₁, ..., u_k}. We then have the following diagram of linear transformations:

    V ----T₁----> W ----T₂----> U
    φ↓            ψ↓            η↓
    ℝⁿ ----S₁----> ℝᵐ ----S₂----> ℝᵏ

Here η: U → ℝᵏ, where η(u) = [u]_D for every u ∈ U, is a linear transformation, and

S₁ = ψ∘T₁∘φ⁻¹: ℝⁿ → ℝᵐ and S₂ = η∘T₂∘ψ⁻¹: ℝᵐ → ℝᵏ

are euclidean linear transformations. Suppose that A₁ and A₂ are the standard matrices for S₁ and S₂ respectively, so that they are respectively the matrix for T₁ with respect to B and C and the matrix for T₂ with respect to C and D. Clearly

S₂∘S₁ = η∘T₂∘T₁∘φ⁻¹: ℝⁿ → ℝᵏ.

It follows that A₂A₁ is the standard matrix for S₂∘S₁, and so is the matrix for T₂∘T₁ with respect to the bases B and D. To summarize, we have the following result.

Proposition. Suppose that T₁: V → W and T₂: W → U are linear transformations, where the real vector spaces V, W, U are finite dimensional, with bases B, C, D respectively. Suppose further that A₁ is the matrix for the linear transformation T₁ with respect to the bases B and C, and that A₂ is the matrix for the linear transformation T₂ with respect to the bases C and D. Then A₂A₁ is the matrix for the linear transformation T₂∘T₁ with respect to the bases B and D.

Example. Consider the linear operator T₁: P₃ → P₃, where for every polynomial p(x) in P₃, we have T₁(p(x)) = xp′(x). We have already shown that the matrix for T₁ with respect to the basis B = {1, x, x², x³} of P₃ is given by
A₁ = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.
Consider next the linear operator T₂: P₃ → P₃, where for every polynomial q(x) = q₀ + q₁x + q₂x² + q₃x³ in P₃, we have

T₂(q(x)) = q(1 + x) = q₀ + q₁(1 + x) + q₂(1 + x)² + q₃(1 + x)³.

We have T₂(1) = 1, T₂(x) = 1 + x, T₂(x²) = 1 + 2x + x² and T₂(x³) = 1 + 3x + 3x² + x³, so that the matrix for T₂ with respect to B is given by

A₂ = ([T₂(1)]_B [T₂(x)]_B [T₂(x²)]_B [T₂(x³)]_B) = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Consider now the composition T = T₂∘T₁: P₃ → P₃. Let A denote the matrix for T with respect to B. By Proposition 8T, we have
A = A₂A₁ = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 0 & 1 & 4 & 9 \\ 0 & 0 & 2 & 9 \\ 0 & 0 & 0 & 3 \end{pmatrix}.

Hence, for p(x) = p₀ + p₁x + p₂x² + p₃x³,

A[p(x)]_B = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 0 & 1 & 4 & 9 \\ 0 & 0 & 2 & 9 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} p_1 + 2p_2 + 3p_3 \\ p_1 + 4p_2 + 9p_3 \\ 2p_2 + 9p_3 \\ 3p_3 \end{pmatrix},

so that T(p(x)) = (p₁ + 2p₂ + 3p₃) + (p₁ + 4p₂ + 9p₃)x + (2p₂ + 9p₃)x² + 3p₃x³. We can check this directly by noting that

T(p(x)) = T₂(T₁(p(x))) = T₂(p₁x + 2p₂x² + 3p₃x³) = p₁(1 + x) + 2p₂(1 + x)² + 3p₃(1 + x)³ = (p₁ + 2p₂ + 3p₃) + (p₁ + 4p₂ + 9p₃)x + (2p₂ + 9p₃)x² + 3p₃x³.
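The following NumPy sketch, added as an illustration, verifies the composition numerically: it multiplies the two matrices from this example and applies the product to a coefficient vector. The particular test coefficients are our own.

    import numpy as np

    A1 = np.array([[0, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 2, 0],
                   [0, 0, 0, 3]], dtype=float)   # matrix for T1(p(x)) = x p'(x)
    A2 = np.array([[1, 1, 1, 1],
                   [0, 1, 2, 3],
                   [0, 0, 1, 3],
                   [0, 0, 0, 1]], dtype=float)   # matrix for T2(q(x)) = q(1 + x)

    A = A2 @ A1                                  # matrix for T = T2 . T1
    print(A)                                     # [[0,1,2,3],[0,1,4,9],[0,0,2,9],[0,0,0,3]]

    p = np.array([1.0, 2.0, 4.0, 3.0])           # coefficients p0, p1, p2, p3 (our test values)
    print(A @ p)                                 # coefficients of T(p(x))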
Example. Consider the linear operator T: ℝ² → ℝ², given by T(x₁, x₂) = (2x₁ + x₂, x₁ + 3x₂) for every (x₁, x₂) ∈ ℝ². We have already shown that the matrix for T with respect to the basis B = {(1, 0), (1, 1)} of ℝ² is given by

A = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix}.

Consider the linear operator T² = T∘T: ℝ² → ℝ². By Proposition 8T, the matrix for T² with respect to B is given by

A² = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} = \begin{pmatrix} 0 & -5 \\ 5 & 15 \end{pmatrix}.

Note that

A²[(x₁, x₂)]_B = \begin{pmatrix} 0 & -5 \\ 5 & 15 \end{pmatrix} \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} = \begin{pmatrix} -5x_2 \\ 5x_1 + 10x_2 \end{pmatrix},

so that T²(x₁, x₂) = −5x₂(1, 0) + (5x₁ + 10x₂)(1, 1) = (5x₁ + 5x₂, 5x₁ + 10x₂). The reader is invited to check this directly.

A simple consequence of Propositions 8N and 8T is the following result concerning inverse linear transformations.

Proposition. Suppose that T: V → V is a linear operator on a finite dimensional real vector space V with basis B. Suppose further that A is the matrix for the linear operator T with respect to the basis B. Then T is one-to-one if and only if A is invertible. Furthermore, if T is one-to-one, then A⁻¹ is the matrix for the inverse linear operator T⁻¹: V → V with respect to the basis B.

Proof. Simply note that T is one-to-one if and only if the system
Ax = 0

has only the trivial solution x = 0. The last assertion follows easily from Proposition 8T, since if A′ denotes the matrix for the inverse linear operator T⁻¹ with respect to B, then we must have A′A = I, the matrix for the identity operator T⁻¹∘T with respect to B.

Example. Consider the linear operator T: P₃ → P₃, where for every q(x) = q₀ + q₁x + q₂x² + q₃x³ in P₃, we have T(q(x)) = q(1 + x) = q₀ + q₁(1 + x) + q₂(1 + x)² + q₃(1 + x)³. We have already shown that the matrix for T with respect to the basis B = {1, x, x², x³} is given by
A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
This matrix is invertible, so it follows that T is one-to-one. Furthermore, it can be checked that

A⁻¹ = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Hence, for p(x) = p₀ + p₁x + p₂x² + p₃x³, we have

A⁻¹[p(x)]_B = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} p_0 - p_1 + p_2 - p_3 \\ p_1 - 2p_2 + 3p_3 \\ p_2 - 3p_3 \\ p_3 \end{pmatrix},

so that

T⁻¹(p(x)) = (p₀ − p₁ + p₂ − p₃) + (p₁ − 2p₂ + 3p₃)x + (p₂ − 3p₃)x² + p₃x³ = p₀ + p₁(x − 1) + p₂(x² − 2x + 1) + p₃(x³ − 3x² + 3x − 1) = p₀ + p₁(x − 1) + p₂(x − 1)² + p₃(x − 1)³ = p(x − 1).
Change of Basis
Suppose that V is a finite dimensional real vector space, with one basis B = {v₁, ..., vₙ} and another basis B′ = {u₁, ..., uₙ}. Suppose that T: V → V is a linear operator on V. Let A denote the matrix for T with respect to the basis B, and let A′ denote the matrix for T with respect to the basis B′. If v ∈ V and T(v) = w, then [w]_B = A[v]_B and [w]_{B′} = A′[v]_{B′}. We wish to find the relationship between A′ and A. Recall that if

P = ([u₁]_B ... [uₙ]_B)

denotes the transition matrix from the basis B′ to the basis B, then [v]_B = P[v]_{B′} and [w]_B = P[w]_{B′}. Note that the matrix P can also be interpreted as the matrix for the identity operator I: V → V with respect to the bases B′ and B. It is easy to see that the matrix P is invertible, and that

P⁻¹ = ([v₁]_{B′} ... [vₙ]_{B′})

denotes the transition matrix from the basis B to the basis B′, and can also be interpreted as the matrix for the identity operator I: V → V with respect to the bases B and B′. We conclude that

[w]_{B′} = P⁻¹[w]_B = P⁻¹A[v]_B = P⁻¹AP[v]_{B′}.

Comparing this with [w]_{B′} = A′[v]_{B′}, we conclude that

P⁻¹AP = A′.

This implies that A = PA′P⁻¹.

Remark. We can use the notation A = [T]_B and A′ = [T]_{B′} to denote that A and A′ are the matrices for T with respect to the basis B and with respect to the basis B′ respectively. We can also write P = [I]_{B′,B} to denote that P is the transition matrix from the basis B′ to the basis B, so that P⁻¹ = [I]_{B,B′}. Then the two identities above become respectively

[I]_{B,B′}[T]_B[I]_{B′,B} = [T]_{B′} and [I]_{B′,B}[T]_{B′}[I]_{B,B′} = [T]_B.

We have proved the following result.

Proposition. Suppose that T: V → V is a linear operator on a finite dimensional real vector space V, with bases B = {v₁, ..., vₙ} and B′ = {u₁, ..., uₙ}. Suppose further that A and A′ are the matrices for T with respect to the basis B and with respect to the basis B′ respectively. Then P⁻¹AP = A′ and A = PA′P⁻¹, where P = ([u₁]_B ... [uₙ]_B) denotes the transition matrix from the basis B′ to the basis B.

Remarks. (1) We have the following picture. (2) The idea can be extended to the case of linear transformations T: V → W from a finite dimensional real vector space into another, with a change of basis in V and a change of basis in W.
    v -----------T----------> w
    [v]_{B′} ------A′-------> [w]_{B′}
    [v]_B --------A---------> [w]_B

(Here the coordinate matrices with respect to B′ and B are related by the transition matrix P: [v]_B = P[v]_{B′} and [w]_B = P[w]_{B′}.)
Example. Consider the vector space P₃ of all polynomials with real coefficients and degree at most 3, with bases B = {1, x, x², x³} and B′ = {1, 1 + x, 1 + x + x², 1 + x + x² + x³}. Consider also the linear operator T: P₃ → P₃, where for every polynomial p(x) = p₀ + p₁x + p₂x² + p₃x³, we have

T(p(x)) = (p₀ + p₁) + (p₁ + p₂)x + (p₂ + p₃)x² + (p₀ + p₃)x³.

Let A denote the matrix for T with respect to the basis B. Then T(1) = 1 + x³, T(x) = 1 + x, T(x²) = x + x² and T(x³) = x² + x³, and so

A = ([T(1)]_B [T(x)]_B [T(x²)]_B [T(x³)]_B) = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \end{pmatrix}.

Next, note that the transition matrix from the basis B′ to the basis B is given by

P = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

It can be checked that

P⁻¹ = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{pmatrix},

and so

A′ = P⁻¹AP = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ -1 & -1 & 0 & 0 \\ 1 & 1 & 1 & 2 \end{pmatrix}

is the matrix for T with respect to the basis B′. It follows that
T(1) = 1 − (1 + x + x²) + (1 + x + x² + x³) = 1 + x³,
T(1 + x) = 1 + (1 + x) − (1 + x + x²) + (1 + x + x² + x³) = 2 + x + x³,
T(1 + x + x²) = (1 + x) + (1 + x + x² + x³) = 2 + 2x + x² + x³,
T(1 + x + x² + x³) = 2(1 + x + x² + x³) = 2 + 2x + 2x² + 2x³.
These can be verified directly.
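As a check (added here as an illustration, not part of the original text), the change-of-basis computation A′ = P⁻¹AP for this example can be reproduced with NumPy.

    import numpy as np

    A = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [1, 0, 0, 1]], dtype=float)    # matrix for T with respect to B
    P = np.array([[1, 1, 1, 1],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]], dtype=float)    # transition matrix from B' to B

    A_prime = np.linalg.inv(P) @ A @ P           # matrix for T with respect to B'
    print(A_prime)
    # [[ 1.  1.  0.  0.]
    #  [ 0.  1.  1.  0.]
    #  [-1. -1.  0.  0.]
    #  [ 1.  1.  1.  2.]]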
Suppose that A and A′ are the matrices for a linear operator T with respect to two bases of V, so that A′ = P⁻¹AP for some invertible matrix P. Then A and A′ have the same eigenvalues, and the dimension of the eigenspace of A corresponding to an eigenvalue λ is equal to the dimension of the eigenspace of A′ corresponding to λ. We also state without proof the following result.

Proposition. Suppose that T: V → V is a linear operator on a finite dimensional real vector space V. Suppose further that A is the matrix for T with respect to a basis B of V. Then
(a) The eigenvalues of T are precisely the eigenvalues of A, and
(b) A vector u ∈ V is an eigenvector of T corresponding to an eigenvalue λ if and only if the coordinate matrix [u]_B is an eigenvector of A corresponding to the eigenvalue λ.
Suppose now that A is the matrix for a linear operator T: V → V on a finite dimensional real vector space V with respect to a basis B = {v₁, ..., vₙ}. If A can be diagonalized, then there exists an invertible matrix P such that

P⁻¹AP = D

is a diagonal matrix. Furthermore, the columns of P are eigenvectors of A, and so are the coordinate matrices of eigenvectors of T with respect to the basis B. In other words, if B′ = {u₁, ..., uₙ} is a basis of V consisting of eigenvectors of T, then

P = ([u₁]_B ... [uₙ]_B)

is the transition matrix from the basis B′ to the basis B. It follows that the matrix for T with respect to the basis B′ is given by

D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix},

where λ₁, ..., λₙ are the eigenvalues of T.

Example. Consider the vector space P₂ of all polynomials with real coefficients and degree at most 2, with basis B = {1, x, x²}. Consider also the linear operator T: P₂ → P₂, where for every polynomial p(x) = p₀ + p₁x + p₂x², we have

T(p(x)) = (5p₀ − 2p₁) + (−2p₀ + 6p₁ + 2p₂)x + (2p₁ + 7p₂)x².

Then T(1) = 5 − 2x, T(x) = −2 + 6x + 2x² and T(x²) = 2x + 7x², so that the matrix for T with respect to B is

A = \begin{pmatrix} 5 & -2 & 0 \\ -2 & 6 & 2 \\ 0 & 2 & 7 \end{pmatrix}.

It is a simple exercise to show that the matrix A has eigenvalues 3, 6, 9, with corresponding eigenvectors

x₁ = \begin{pmatrix} 2 \\ 2 \\ -1 \end{pmatrix}, x₂ = \begin{pmatrix} 2 \\ -1 \\ 2 \end{pmatrix}, x₃ = \begin{pmatrix} -1 \\ 2 \\ 2 \end{pmatrix},

so that writing

P = \begin{pmatrix} 2 & 2 & -1 \\ 2 & -1 & 2 \\ -1 & 2 & 2 \end{pmatrix}
and

D = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 9 \end{pmatrix},

we have P⁻¹AP = D. Now write B′ = {p₁(x), p₂(x), p₃(x)}, where [p₁(x)]_B = x₁, [p₂(x)]_B = x₂ and [p₃(x)]_B = x₃.
Then P is the transition matrix from the basis B′ to the basis B, and D is the matrix for T with respect to the basis B′. Clearly p₁(x) = 2 + 2x − x², p₂(x) = 2 − x + 2x² and p₃(x) = −1 + 2x + 2x². Note now that
T(p₁(x)) = T(2 + 2x − x²) = 6 + 6x − 3x² = 3p₁(x),
T(p₂(x)) = T(2 − x + 2x²) = 12 − 6x + 12x² = 6p₂(x),
T(p₃(x)) = T(−1 + 2x + 2x²) = −9 + 18x + 18x² = 9p₃(x).
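As a final illustration (added here, not part of the original text), the diagonalization in this example can be checked with NumPy: conjugating A by the eigenvector matrix P produces the diagonal matrix D, and np.linalg.eig returns the same eigenvalues.

    import numpy as np

    A = np.array([[ 5, -2, 0],
                  [-2,  6, 2],
                  [ 0,  2, 7]], dtype=float)      # matrix for T with respect to B
    P = np.array([[ 2,  2, -1],
                  [ 2, -1,  2],
                  [-1,  2,  2]], dtype=float)     # columns are the eigenvectors x1, x2, x3

    D = np.linalg.inv(P) @ A @ P
    print(np.round(D, 10))                        # diag(3, 6, 9)

    # np.linalg.eig returns the same eigenvalues (possibly in a different order)
    print(np.sort(np.linalg.eig(A)[0]))           # [3. 6. 9.]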