Math Basics
Calculus
Jean Gallier
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104, USA
e-mail: [email protected]
© Jean Gallier
May 9, 2015
Contents

1 Introduction

2 Vector Spaces, Bases, Linear Maps
2.1 Groups, Rings, and Fields
2.2 Vector Spaces
2.3 Linear Independence, Subspaces
2.4 Bases of a Vector Space
2.5 Linear Maps
2.6 Quotient Spaces
2.7 Summary

5 Determinants
5.1 Permutations, Signature of a Permutation
5.2 Alternating Multilinear Maps
5.3 Definition of a Determinant
5.4 Inverse Matrices and Determinants
5.5 Systems of Linear Equations and Determinants
5.6 Determinant of a Linear Map
5.7 The Cayley-Hamilton Theorem
5.8 Permanents
5.9 Further Readings

16.4 Summary

26 Topology
26.1 Metric Spaces and Normed Vector Spaces
26.2 Topological Spaces
26.3 Continuous Functions, Limits
26.4 Connected Sets
26.5 Compact Sets
26.6 Continuous Linear and Multilinear Maps
26.7 Normed Affine Spaces
26.8 Further Readings

27 A Detour On Fractals
27.1 Iterated Function Systems and Fractals

28 Differential Calculus
28.1 Directional Derivatives, Total Derivatives
28.2 Jacobian Matrices
28.3 The Implicit and The Inverse Function Theorems
28.4 Tangent Spaces and Differentials
28.5 Second-Order and Higher-Order Derivatives
28.6 Taylor's Formula, Faà di Bruno's Formula
28.7 Vector Fields, Covariant Derivatives, Lie Brackets
28.8 Further Readings
Chapter 1
Introduction
Chapter 2
Vector Spaces, Bases, Linear Maps
2.1 Groups, Rings, and Fields
In the following three chapters, the basic algebraic structures (groups, rings, fields, vector spaces) are reviewed, with a major emphasis on vector spaces. Basic notions of linear algebra such as vector spaces, subspaces, linear combinations, linear independence, bases, quotient spaces, linear maps, matrices, change of bases, direct sums, linear forms, dual spaces, hyperplanes, and the transpose of a linear map are reviewed.
The set R of real numbers has two operations + : R × R → R (addition) and ∗ : R × R → R (multiplication) satisfying properties that make R into an abelian group under +, and R − {0} = R∗ into an abelian group under ∗. Recall the definition of a group.

Definition 2.1. A group is a set G equipped with a binary operation · : G × G → G that associates an element a · b ∈ G to every pair of elements a, b ∈ G, and having the following properties: · is associative, has an identity element e ∈ G, and every element in G is invertible (w.r.t. ·). More explicitly, this means that the following equations hold for all a, b, c ∈ G:

(G1) a · (b · c) = (a · b) · c. (associativity);

(G2) a · e = e · a = a. (identity);

(G3) For every a ∈ G, there is some a⁻¹ ∈ G such that a · a⁻¹ = a⁻¹ · a = e. (inverse).
Example 2.1.

1. The set Z = {. . . , −n, . . . , −1, 0, 1, . . . , n, . . .} of integers is a group under addition, with identity element 0. However, Z∗ = Z − {0} is not a group under multiplication.

2. The set Q of rational numbers (fractions p/q with p, q ∈ Z and q ≠ 0) is a group under addition, with identity element 0. The set Q∗ = Q − {0} is also a group under multiplication, with identity element 1.

3. Given any nonempty set S, the set of bijections f : S → S, also called permutations of S, is a group under function composition (i.e., the multiplication of f and g is the composition g ∘ f), with identity element the identity function idS. This group is not abelian as soon as S has more than two elements.

4. The set of n × n invertible matrices with real (or complex) coefficients is a group under matrix multiplication, with identity element the identity matrix In. This group is called the general linear group and is usually denoted by GL(n, R) (or GL(n, C)).
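To make item 3 concrete, here is a minimal Python sketch (an added illustration, not part of the original text) that enumerates the permutations of a three-element set, composes them, and checks the group axioms exhaustively; the helper names compose and inverse are ad hoc choices for this example.

```python
from itertools import permutations

S = (0, 1, 2)
# Represent a permutation of S as a tuple p, where p[i] is the image of i.
G = list(permutations(S))

def compose(g, f):
    # (g o f)(i) = g(f(i))
    return tuple(g[f[i]] for i in S)

def inverse(f):
    inv = [0] * len(S)
    for i in S:
        inv[f[i]] = i
    return tuple(inv)

e = tuple(S)  # identity permutation
# Closure, identity, and inverses:
assert all(compose(g, f) in G for g in G for f in G)
assert all(compose(e, f) == f == compose(f, e) for f in G)
assert all(compose(inverse(f), f) == e for f in G)
# Associativity:
assert all(compose(h, compose(g, f)) == compose(compose(h, g), f)
           for f in G for g in G for h in G)
# Non-abelian: some pair does not commute.
assert any(compose(g, f) != compose(f, g) for f in G for g in G)
```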
It is customary to denote the operation of an abelian group G by +, in which case the inverse a⁻¹ of an element a ∈ G is denoted by −a.
The identity element of a group is unique. In fact, we can prove a more general fact:

Fact 1. If a binary operation · : M × M → M is associative and if e′ ∈ M is a left identity and e″ ∈ M is a right identity, which means that

e′ · a = a for all a ∈ M  (G2l)

and

a · e″ = a for all a ∈ M,  (G2r)

then e′ = e″.

Proof. If we let a = e″ in equation (G2l), we get

e′ · e″ = e″,

and if we let a = e′ in equation (G2r), we get

e′ · e″ = e′,

and thus

e′ = e′ · e″ = e″,

as claimed.
Fact 1 implies that the identity element of a monoid is unique, and since every group is a monoid, the identity element of a group is unique. Furthermore, every element in a group has a unique inverse. This is a consequence of a slightly more general fact:

Fact 2. In a monoid M with identity element e, if some element a ∈ M has some left inverse a′ ∈ M and some right inverse a″ ∈ M, which means that

a′ · a = e  (G3l)

and

a · a″ = e,  (G3r)

then a′ = a″.

Proof. Using (G3l) and the fact that e is an identity element, we have

(a′ · a) · a″ = e · a″ = a″.

Similarly, using (G3r) and the fact that e is an identity element, we have

a′ · (a · a″) = a′ · e = a′.

However, since M is a monoid, the operation · is associative, so

a′ = a′ · (a · a″) = (a′ · a) · a″ = a″,

as claimed.
Remark: Axioms (G2) and (G3) can be weakened a bit by requiring only (G2r) (the existence of a right identity) and (G3r) (the existence of a right inverse for every element) (or
(G2l) and (G3l)). It is a good exercise to prove that the group axioms (G2) and (G3) follow
from (G2r) and (G3r).
If a group G has a finite number n of elements, we say that G is a group of order n. If
G is infinite, we say that G has infinite order . The order of a group is usually denoted by
|G| (if G is finite).
Given a group G, for any two subsets R, S ⊆ G, we let

RS = {r · s | r ∈ R, s ∈ S}.

In particular, for any g ∈ G, if R = {g}, we write

gS = {g · s | s ∈ S},

and similarly, if S = {g}, we write

Rg = {r · g | r ∈ R}.
For any g ∈ G, define Lg, the left translation by g, by Lg(a) = ga, for all a ∈ G, and Rg, the right translation by g, by Rg(a) = ag, for all a ∈ G. Observe that Lg and Rg are bijections. We show this for Lg, the proof for Rg being similar.

If Lg(a) = Lg(b), then ga = gb, and multiplying on the left by g⁻¹, we get a = b, so Lg is injective. For any b ∈ G, we have Lg(g⁻¹b) = gg⁻¹b = b, so Lg is surjective. Therefore, Lg is bijective.
Definition 2.2. Given a group G, a subset H of G is a subgroup of G iff

(1) The identity element e of G also belongs to H (e ∈ H);

(2) For all h1, h2 ∈ H, we have h1h2 ∈ H;

(3) For all h ∈ H, we have h⁻¹ ∈ H.
Proof. If we apply the bijection Lg2⁻¹ to both g1H and g2H we get Lg2⁻¹(g1H) = g2⁻¹g1H and Lg2⁻¹(g2H) = H, so g1H = g2H iff g2⁻¹g1H = H. If g2⁻¹g1H = H, since 1 ∈ H, we get g2⁻¹g1 ∈ H. Conversely, if g2⁻¹g1 ∈ H, since H is a group, the left translation Lg2⁻¹g1 is a bijection of H, so g2⁻¹g1H = H. Thus, g1H = g2H iff g2⁻¹g1 ∈ H.

It follows that the equivalence class of an element g ∈ G is the coset gH (resp. Hg). Since Lg is a bijection between H and gH, the cosets gH all have the same cardinality. The map Lg⁻¹ ∘ Rg is a bijection between the left coset gH and the right coset Hg, so they also have the same cardinality. Since the distinct cosets gH form a partition of G, we obtain the following fact:
Proposition 2.3. (Lagrange) For any finite group G and any subgroup H of G, the order
h of H divides the order n of G.
The ratio n/h is denoted by (G : H) and is called the index of H in G. The index (G : H)
is the number of left (and right) cosets of H in G. Proposition 2.3 can be stated as
|G| = (G : H)|H|.
The set of left cosets of H in G (which, in general, is not a group) is denoted G/H.
The points of G/H are obtained by collapsing all the elements in a coset into a single
element.
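As a concrete illustration of cosets and Lagrange's theorem (an added sketch, not part of the original text; the subgroup chosen here is one arbitrary example), take G = Z/12Z under addition and the subgroup H generated by 4:

```python
n = 12
G = set(range(n))                      # Z/12Z under addition mod 12
H = {0, 4, 8}                          # subgroup generated by 4

# Left cosets g + H (addition is commutative, so left = right cosets).
cosets = {frozenset((g + h) % n for h in H) for g in G}

assert all(len(c) == len(H) for c in cosets)   # every coset has |H| elements
assert len(G) == len(cosets) * len(H)          # |G| = (G : H) |H|
print(sorted(sorted(c) for c in cosets))       # the (G : H) = 4 distinct cosets
```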
It is tempting to define a multiplication operation on left cosets (or right cosets) by setting

(g1H)(g2H) = (g1g2)H,

but this operation is not well defined in general, unless the subgroup H possesses a special property. This property is typical of the kernels of group homomorphisms, so we are led to

Definition 2.4. Given any two groups G and G′, a function ϕ : G → G′ is a homomorphism iff

ϕ(g1g2) = ϕ(g1)ϕ(g2), for all g1, g2 ∈ G.

Taking g1 = g2 = e (in G), we see that

ϕ(e) = e′,

and taking g1 = g and g2 = g⁻¹, we see that

ϕ(g⁻¹) = ϕ(g)⁻¹.

If ϕ : G → G′ and ψ : G′ → G″ are group homomorphisms, then ψ ∘ ϕ : G → G″ is also a homomorphism. If ϕ : G → G′ is a homomorphism of groups, and H ⊆ G, H′ ⊆ G′ are two subgroups, then it is easily checked that

Im H = ϕ(H) = {ϕ(g) | g ∈ H}
is a subgroup of G′, and that Ker ϕ = {g ∈ G | ϕ(g) = e′} is a subgroup of G. The kernel of a homomorphism has a stronger property: writing gHg⁻¹ = {ghg⁻¹ | h ∈ H}, a kernel H = Ker ϕ satisfies

gHg⁻¹ = H, for all g ∈ G,

which is equivalent to the condition

gHg⁻¹ ⊆ H, for all g ∈ G.  (∗)

Indeed, for H = Ker ϕ we have

ϕ(ghg⁻¹) = ϕ(g)ϕ(h)ϕ(g)⁻¹ = ϕ(g)ϕ(g)⁻¹ = e′,

for all h ∈ H = Ker ϕ and all g ∈ G. Thus, by definition of H = Ker ϕ, we have gHg⁻¹ ⊆ H. More generally, a subgroup N of G satisfying

gNg⁻¹ = N, for all g ∈ G,

is called a normal subgroup of G. This is denoted by N ◁ G.

Observe that if G is abelian, then every subgroup of G is normal.

If N is a normal subgroup of G, the equivalence relation induced by left cosets is the same as the equivalence induced by right cosets. Furthermore, this equivalence relation is a congruence, which means that: For all g1, g2, g1′, g2′ ∈ G,
A ring is a set A equipped with two operations + : A × A → A (called addition) and ∗ : A × A → A (called multiplication), together with an element 0 and an element 1, having the following properties for all a, b, c ∈ A:

a + (b + c) = (a + b) + c  (associativity of +)  (2.1)
a + b = b + a  (commutativity of +)  (2.2)
a + 0 = a  (zero)  (2.3)
a + (−a) = 0  (additive inverse)  (2.4)
a ∗ (b ∗ c) = (a ∗ b) ∗ c  (associativity of ∗)  (2.5)
a ∗ 1 = 1 ∗ a = a  (identity for ∗)  (2.6)
(a + b) ∗ c = (a ∗ c) + (b ∗ c)  (distributivity)  (2.7)
a ∗ (b + c) = (a ∗ b) + (a ∗ c)  (distributivity)  (2.8)

From these axioms one easily derives

a ∗ 0 = 0 ∗ a = 0  (2.9)
(−a) ∗ b = a ∗ (−b) = −(a ∗ b).  (2.10)

Note that (2.9) implies that if 1 = 0, then a = 0 for all a ∈ A, and thus, A = {0}. The ring A = {0} is called the trivial ring. A ring for which 1 ≠ 0 is called nontrivial. The multiplication a ∗ b of two elements a, b ∈ A is often denoted by ab.
Example 2.2.
The reader will easily check that this is an equivalence relation, and, moreover, it is compatible with respect to addition and multiplication, which means that if m1 ≡ n1 (mod p) and m2 ≡ n2 (mod p), then m1 + m2 ≡ n1 + n2 (mod p) and m1m2 ≡ n1n2 (mod p). Consequently, we can define an addition operation and a multiplication operation on the set of equivalence classes (mod p):

[m] + [n] = [m + n]

and

[m] · [n] = [mn].

Again, the reader will easily check that the ring axioms are satisfied, with [0] as zero and [1] as multiplicative unit. The resulting ring is denoted by Z/pZ.¹ Observe that if p is composite, then this ring has zero-divisors. For example, if p = 4, then we have

2 · 2 ≡ 0 (mod 4).

¹The notation Zp is sometimes used instead of Z/pZ, but it clashes with the notation for the p-adic integers, so we prefer not to use it.
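A quick computational check of this phenomenon (an added illustration, not from the original text): the following Python snippet lists the zero-divisors of Z/4Z and verifies that every nonzero class in Z/5Z has a multiplicative inverse, as expected since 5 is prime.

```python
def zero_divisors(p):
    # nonzero classes [m] with [m][n] = [0] for some nonzero [n]
    return sorted({m for m in range(1, p) for n in range(1, p) if (m * n) % p == 0})

def units(p):
    # nonzero classes [m] having a multiplicative inverse mod p
    return sorted({m for m in range(1, p) if any((m * n) % p == 1 for n in range(1, p))})

print(zero_divisors(4))  # [2], since 2 * 2 = 4 = 0 (mod 4)
print(units(5))          # [1, 2, 3, 4], every nonzero class is invertible
```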
Given a ring A and an element a ∈ A, for any integer n, define n · a by n · a = a + · · · + a (n summands) if n ≥ 0 (with 0 · a = 0), and n · a = −((−n) · a) if n < 0. Then the map h : Z → A given by

h(n) = n · 1A

is a ring homomorphism (where 1A is the multiplicative identity of A).
2. Given any real
Definition 2.8. A set K is a field if it is a ring and the following properties hold:

(F1) 0 ≠ 1;

(F2) K∗ = K − {0} is a group w.r.t. ∗ (i.e., every a ≠ 0 has a multiplicative inverse);

(F3) ∗ is commutative.

If ∗ is not commutative but (F1) and (F2) hold, we say that we have a skew field (or noncommutative field).

Note that we are assuming that the operation ∗ of a field is commutative. This convention is not universally adopted, but since ∗ will be commutative for most fields we will encounter, we may as well include this condition in the definition.
Example 2.5.

1. The rings Q, R, and C are fields.

2. The set of (formal) fractions f(X)/g(X) of polynomials f(X), g(X) ∈ R[X], where g(X) is not the null polynomial, is a field.

3. The ring C(]a, b[) of continuous functions f : ]a, b[ → R such that f(x) ≠ 0 for all x ∈ ]a, b[ is a field.

4. The ring Z/pZ is a field whenever p is prime.

A homomorphism h : K1 → K2 between two fields K1 and K2 is just a homomorphism between the rings K1 and K2. However, because K1∗ and K2∗ are groups under multiplication, a homomorphism of fields must be injective.

First, observe that for any x ≠ 0,

1 = h(1) = h(xx⁻¹) = h(x)h(x⁻¹)

and

1 = h(1) = h(x⁻¹x) = h(x⁻¹)h(x),

so h(x) ≠ 0 and

h(x⁻¹) = h(x)⁻¹.
2.2
Vector Spaces
For every n ≥ 1, let Rⁿ be the set of n-tuples x = (x1, . . . , xn). Addition can be extended to Rⁿ as follows:

(x1, . . . , xn) + (y1, . . . , yn) = (x1 + y1, . . . , xn + yn).

We can also define an operation · : R × Rⁿ → Rⁿ as follows:

λ · (x1, . . . , xn) = (λx1, . . . , λxn).

The resulting algebraic structure has some interesting properties, those of a vector space.

Before defining vector spaces, we need to discuss a strategic choice which, depending on how it is settled, may reduce or increase headaches in dealing with notions such as linear combinations and linear dependence (or independence). The issue has to do with using sets of vectors versus sequences of vectors.

Our experience tells us that it is preferable to use sequences of vectors; even better, indexed families of vectors. (We are not alone in having opted for sequences over sets, and we are in good company; for example, Artin [4], Axler [7], and Lang [67] use sequences. Nevertheless, some prominent authors such as Lax [71] use sets. We leave it to the reader to conduct a survey on this issue.)
The issue is that the binary operation + only tells us how to compute a1 + a2 for two elements of A, but it does not tell us what the sum of three or more elements should be. For example, how should a1 + a2 + a3 be defined?

What we have to do is to define a1 + a2 + a3 by using a sequence of steps each involving two elements, and there are two possible ways to do this: a1 + (a2 + a3) and (a1 + a2) + a3. If our operation + is not associative, these are different values. If it is associative, then a1 + (a2 + a3) = (a1 + a2) + a3, but then there are still six possible permutations of the indices 1, 2, 3, and if + is not commutative, these values are generally different. If our operation is commutative, then all six permutations have the same value. Thus, if + is associative and commutative, it seems intuitively clear that a sum of the form ∑_{i∈I} a_i does not depend on the order of the operations used to compute it.

This is indeed the case, but a rigorous proof requires induction, and such a proof is surprisingly involved. Readers may accept without proof the fact that sums of the form ∑_{i∈I} a_i are indeed well defined, and jump directly to Definition 2.9. For those who want to see the gory details, here we go.
First, we define sums ∑_{i∈I} a_i, where I is a finite sequence of distinct natural numbers, say I = (i1, . . . , im). If I = (i1, . . . , im) with m ≥ 2, we denote the sequence (i2, . . . , im) by I − {i1}. We proceed by induction on the size m of I. Let

∑_{i∈I} a_i = a_{i1},  if m = 1,

∑_{i∈I} a_i = a_{i1} + ( ∑_{i∈I−{i1}} a_i ),  if m > 1.

If the operation + is not associative, the grouping of the terms matters. For instance, in general

a1 + (a2 + (a3 + a4)) ≠ (a1 + a2) + (a3 + a4).

However, if the operation + is associative, the sum ∑_{i∈I} a_i should not depend on the grouping of the elements in I, as long as their order is preserved. For example, if I = (1, 2, 3, 4, 5), J1 = (1, 2), and J2 = (3, 4, 5), we expect that

∑_{i∈I} a_i = ( ∑_{j∈J1} a_j ) + ( ∑_{j∈J2} a_j ).
Proposition 2.5. Given any nonempty set A equipped with an associative binary operation + : A × A → A, for any nonempty finite sequence I of distinct natural numbers and for any partition of I into p nonempty sequences I_{k1}, . . . , I_{kp}, for some nonempty sequence K = (k1, . . . , kp) of distinct natural numbers such that ki < kj implies that α < β for all α ∈ I_{ki} and all β ∈ I_{kj}, for every sequence (a_α)_{α∈I} of elements in A, we have

∑_{α∈I} a_α = ∑_{k∈K} ( ∑_{α∈I_k} a_α ).
Proof. We proceed by induction on the size n of I. The base case n = 1 is clear. Next, assume n > 1. If p = 1, then I_{k1} = I and the formula is trivial, so assume that p ≥ 2 and write J = (k2, . . . , kp). There are two cases.

Case 1. The sequence I_{k1} has a single element, say β, which is the first element of I. In this case, write C for the sequence obtained from I by deleting its first element β. By definition,

∑_{α∈I} a_α = a_β + ( ∑_{α∈C} a_α ),

and

∑_{k∈K} ( ∑_{α∈I_k} a_α ) = a_β + ( ∑_{j∈J} ( ∑_{α∈I_j} a_α ) ).

Since |C| = n − 1, by the induction hypothesis,

∑_{α∈C} a_α = ∑_{j∈J} ( ∑_{α∈I_j} a_α ),

which yields our identity.

Case 2. The sequence I_{k1} has at least two elements. Let β be the first element of I (and thus of I_{k1}), let I′ be the sequence obtained from I by deleting β, let I′_{k1} be the sequence obtained from I_{k1} by deleting β, and keep I_{k2}, . . . , I_{kp} unchanged. Since |I′| = n − 1, by the induction hypothesis applied to I′ and its partition I′_{k1}, I_{k2}, . . . , I_{kp}, we have

∑_{α∈I′} a_α = ( ∑_{α∈I′_{k1}} a_α ) + ( ∑_{j∈J} ( ∑_{α∈I_j} a_α ) ).

If we add the left-hand side to a_β, by definition we get ∑_{α∈I} a_α. If we add the right-hand side to a_β, using associativity and the definition of an indexed sum, we get

a_β + ( ( ∑_{α∈I′_{k1}} a_α ) + ( ∑_{j∈J} ( ∑_{α∈I_j} a_α ) ) ) = ( a_β + ( ∑_{α∈I′_{k1}} a_α ) ) + ( ∑_{j∈J} ( ∑_{α∈I_j} a_α ) )
= ( ∑_{α∈I_{k1}} a_α ) + ( ∑_{j∈J} ( ∑_{α∈I_j} a_α ) )
= ∑_{k∈K} ( ∑_{α∈I_k} a_α ),

as claimed.
If I = (1, . . . , n), we also write ∑_{i=1}^{n} a_i instead of ∑_{i∈I} a_i. Since + is associative, Proposition 2.5 shows that the sum ∑_{i=1}^{n} a_i is independent of the grouping of its elements, which justifies the notation a1 + · · · + an (without any parentheses).

If we also assume that our associative binary operation on A is commutative, then we can show that the sum ∑_{i∈I} a_i does not depend on the ordering of the index set I.
Proposition 2.6. Given any nonempty set A equipped with an associative and commutative binary operation + : A × A → A, for any two nonempty finite sequences I and J of distinct natural numbers such that J is a permutation of I (in other words, the underlying sets of I and J are identical), for every sequence (a_α)_{α∈I} of elements in A, we have

∑_{α∈I} a_α = ∑_{α∈J} a_α.
Proof. We proceed by induction on the number p of elements in I. If p = 1, we have I = J and the claim is trivial. If p > 1, to simplify notation, assume that I = (1, . . . , p) and that J is a permutation (i1, . . . , ip) of I. First, assume that 2 ≤ i1 ≤ p − 1, let J′ be the sequence obtained from J by deleting i1, let I′ be the sequence obtained from I by deleting i1, and let P = (1, . . . , i1 − 1) and Q = (i1 + 1, . . . , p), so that I′ is the concatenation of P and Q. By the induction hypothesis applied to J′ and I′, and by Proposition 2.5 applied to I′ and its partition (P, Q), we have

∑_{α∈J′} a_α = ∑_{α∈I′} a_α = ( ∑_{i=1}^{i1−1} a_i ) + ( ∑_{i=i1+1}^{p} a_i ).

If we add the left-hand side to a_{i1}, by definition we get ∑_{α∈J} a_α. If we add the right-hand side to a_{i1}, we get

a_{i1} + ( ( ∑_{i=1}^{i1−1} a_i ) + ( ∑_{i=i1+1}^{p} a_i ) ).

Using associativity, we get

a_{i1} + ( ( ∑_{i=1}^{i1−1} a_i ) + ( ∑_{i=i1+1}^{p} a_i ) ) = ( a_{i1} + ( ∑_{i=1}^{i1−1} a_i ) ) + ( ∑_{i=i1+1}^{p} a_i ),

then using associativity and commutativity several times (more rigorously, using induction on i1 − 1), we get

( a_{i1} + ( ∑_{i=1}^{i1−1} a_i ) ) + ( ∑_{i=i1+1}^{p} a_i ) = ( ∑_{i=1}^{i1−1} a_i ) + a_{i1} + ( ∑_{i=i1+1}^{p} a_i ) = ∑_{i=1}^{p} a_i,

as claimed.

The cases where i1 = 1 or i1 = p are treated similarly, but in a simpler manner since either P = () or Q = () (where () denotes the empty sequence).
Having done all this, we can now make sense of sums of the form ∑_{i∈I} a_i, for any finite indexed set I and any family a = (a_i)_{i∈I} of elements in A, where A is a set equipped with a binary operation + which is associative and commutative.

Indeed, since I is finite, it is in bijection with the set {1, . . . , n} for some n ∈ N, and any total ordering ⪯ on I corresponds to a permutation I_⪯ of {1, . . . , n} (where we identify a permutation with its image). For any total ordering ⪯ on I, we define ∑_{i∈I,⪯} a_i as

∑_{i∈I,⪯} a_i = ∑_{j∈I_⪯} a_j.

For any two total orderings ⪯1 and ⪯2 on I, the sequences I_{⪯1} and I_{⪯2} are permutations of each other, so by Proposition 2.6 we have

∑_{i∈I,⪯1} a_i = ∑_{i∈I,⪯2} a_i.

Therefore, the sum ∑_{i∈I,⪯} a_i does not depend on the total ordering on I. We define the sum ∑_{i∈I} a_i as the common value ∑_{j∈I_⪯} a_j for all total orderings ⪯ of I.

Vector spaces are defined as follows.
Definition 2.9. Given a field K (with addition + and multiplication ∗), a vector space over K (or K-vector space) is a set E (of vectors) together with two operations + : E × E → E (called vector addition)² and · : K × E → E (called scalar multiplication) satisfying the following conditions for all α, β ∈ K and all u, v ∈ E:

(V0) E is an abelian group w.r.t. +, with identity element 0;³

(V1) α · (u + v) = (α · u) + (α · v);

(V2) (α + β) · u = (α · u) + (β · u);

(V3) (α ∗ β) · u = α · (β · u);

(V4) 1 · u = u.

From the axioms one deduces that α · 0 = 0 and α · (−u) = −(α · u) for all α ∈ K and u ∈ E. Moreover, if α ≠ 0, then α has a multiplicative inverse α⁻¹, and

α⁻¹ · (α · u) = (α⁻¹ ∗ α) · u = 1 · u = u,

so from α · u = 0, we get u = α⁻¹ · 0 = 0.

²The symbol + is overloaded, since it denotes both addition in the field K and addition of vectors in E. It is usually clear from the context which + is intended.

³The symbol 0 is also overloaded, since it represents both the zero in K (a scalar) and the identity element of E (the zero vector). Confusion rarely arises, but one may prefer using 0 for the zero vector.
For instance, one can keep the usual addition on Rⁿ but define a deficient scalar multiplication λ · (x1, . . . , xn) (for all (x1, . . . , xn) ∈ Rⁿ and all λ ∈ R) for which axioms (V0)-(V3) are all satisfied, but (V4) fails. Less trivial examples can be given using the notion of a basis, which has not been defined yet.
The field K itself can be viewed as a vector space over itself, addition of vectors being
addition in the field, and multiplication by a scalar being multiplication in the field.
Example 2.6.
1. The fields R and C are vector spaces over R.

2. The groups Rⁿ and Cⁿ are vector spaces over R, and Cⁿ is a vector space over C.

3. The ring R[X] of polynomials is a vector space over R, and C[X] is a vector space over R and C. The ring of n × n matrices Mn(R) is a vector space over R.

4. The ring C(]a, b[) of continuous functions f : ]a, b[ → R is a vector space over R.
Let E be a vector space. We would like to define the important notions of linear combination and linear independence. These notions can be defined for sets of vectors in E,
but it will turn out to be more convenient to define them for families (vi )i2I , where I is any
arbitrary index set.
2.3 Linear Independence, Subspaces
One of the most useful properties of vector spaces is that they possess bases. What this means is that in every vector space E, there is some set of vectors, {e1, . . . , en}, such that every vector v ∈ E can be written as a linear combination,

v = λ1 e1 + · · · + λn en,

of the ei, for some scalars λ1, . . . , λn ∈ K. Furthermore, the n-tuple (λ1, . . . , λn), as above, is unique.

This description is fine when E has a finite basis, {e1, . . . , en}, but this is not always the case! For example, the vector space of real polynomials, R[X], does not have a finite basis but instead it has an infinite basis, namely

1, X, X², . . . , Xⁿ, . . .

One might wonder if it is possible for a vector space to have bases of different sizes, or even to have a finite basis as well as an infinite basis. We will see later on that this is not possible; all bases of a vector space have the same number of elements (cardinality), which is called the dimension of the space. However, we have the following problem: If a vector space has an infinite basis, {e1, . . . , en, . . .}, should we allow linear combinations with infinitely many nonzero terms, such as

λ1 e1 + · · · + λn en + λn+1 en+1 + · · · ?

Such an infinite sum only makes sense as a limit.
But then, how do we define such limits? Well, we have to define some topology on our space, by means of a norm, a metric, or some other mechanism. This can indeed be done, and this is what Banach spaces and Hilbert spaces are all about, but it seems to require a lot of machinery.

A way to avoid limits is to restrict our attention to linear combinations involving only finitely many vectors. We may have an infinite supply of vectors, but we only form linear combinations involving finitely many nonzero coefficients. Technically, this can be done by introducing families of finite support. This gives us the ability to manipulate families of scalars indexed by some fixed infinite set and yet to treat these families as if they were finite.

With these motivations in mind, given a set A, recall that an I-indexed family (a_i)_{i∈I} of elements of A (for short, a family) is a function a : I → A, or equivalently a set of pairs {(i, a_i) | i ∈ I}. We agree that when I = ∅, (a_i)_{i∈I} = ∅. A family (a_i)_{i∈I} is finite if I is finite.

Remark: When considering a family (a_i)_{i∈I}, there is no reason to assume that I is ordered. The crucial point is that every element of the family is uniquely indexed by an element of I. Thus, unless specified otherwise, we do not assume that the elements of an index set are ordered.

If A is an abelian group (usually, when A is a ring or a vector space) with identity 0, we say that a family (a_i)_{i∈I} has finite support if a_i = 0 for all i ∈ I − J, where J is a finite subset of I (the support of the family).

Given two disjoint sets I and J, the union of two families (u_i)_{i∈I} and (v_j)_{j∈J}, denoted by (u_i)_{i∈I} ∪ (v_j)_{j∈J}, is the family (w_k)_{k∈(I∪J)} defined such that w_k = u_k if k ∈ I, and w_k = v_k if k ∈ J. Given a family (u_i)_{i∈I} and any element v, we denote by (u_i)_{i∈I} ∪_k (v) the family (w_i)_{i∈I∪{k}} defined such that w_i = u_i if i ∈ I, and w_k = v, where k is any index such that k ∉ I. Given a family (u_i)_{i∈I}, a subfamily of (u_i)_{i∈I} is a family (u_j)_{j∈J} where J is any subset of I.

In this chapter, unless specified otherwise, it is assumed that all families of scalars have finite support.
Definition 2.10. Let E be a vector space. A vector v ∈ E is a linear combination of a family (u_i)_{i∈I} of elements of E if there is a family (λ_i)_{i∈I} of scalars in K such that

v = ∑_{i∈I} λ_i u_i.

When I = ∅, we stipulate that v = 0. (By Proposition 2.6, sums of the form ∑_{i∈I} λ_i u_i are well defined.) We say that a family (u_i)_{i∈I} is linearly independent if for every family (λ_i)_{i∈I} of scalars in K,

∑_{i∈I} λ_i u_i = 0 implies that λ_i = 0 for all i ∈ I.

Equivalently, a family (u_i)_{i∈I} is linearly dependent if there is some family (λ_i)_{i∈I} of scalars in K such that

∑_{i∈I} λ_i u_i = 0 and λ_j ≠ 0 for some j ∈ I.

Observe that defining linear combinations for families of vectors rather than for sets of vectors has the advantage that the vectors being combined need not be distinct. For example, for I = {1, 2, 3} and the families (u, v, u) and (λ1, λ2, λ1), the linear combination

∑_{i∈I} λ_i u_i = λ1 u + λ2 v + λ1 u

makes sense. Using sets of vectors in the definition of a linear combination does not allow such linear combinations; this is too restrictive.

Unravelling Definition 2.10, a family (u_i)_{i∈I} is linearly dependent iff some u_j in the family can be expressed as a linear combination of the other vectors in the family. Indeed, there is some family (λ_i)_{i∈I} of scalars in K such that

∑_{i∈I} λ_i u_i = 0 and λ_j ≠ 0 for some j ∈ I,

which implies that

u_j = − λ_j⁻¹ ∑_{i∈(I−{j})} λ_i u_i.

Observe that one of the reasons for defining linear dependence for families of vectors rather than for sets of vectors is that our definition allows multiple occurrences of a vector. This is important because a matrix may contain identical columns, and we would like to say that these columns are linearly dependent. The definition of linear dependence for sets does not allow us to do that.
The above also shows that a family (u_i)_{i∈I} is linearly independent iff either I = ∅, or I consists of a single element i and u_i ≠ 0, or |I| ≥ 2 and no vector u_j in the family can be expressed as a linear combination of the other vectors in the family.

When I is nonempty, if the family (u_i)_{i∈I} is linearly independent, note that u_i ≠ 0 for all i ∈ I. Otherwise, if u_i = 0 for some i ∈ I, then we get a nontrivial linear dependence ∑_{i∈I} λ_i u_i = 0 by picking any nonzero λ_i and letting λ_k = 0 for all k ∈ I with k ≠ i, since λ_i 0 = 0. If |I| ≥ 2, we must also have u_i ≠ u_j for all i, j ∈ I with i ≠ j, since otherwise we get a nontrivial linear dependence by picking λ_i = λ and λ_j = −λ for any nonzero λ, and letting λ_k = 0 for all k ∈ I with k ≠ i, j.

Thus, the definition of linear independence implies that a nontrivial linearly independent family is actually a set. This explains why certain authors choose to define linear independence for sets of vectors. The problem with this approach is that linear dependence, which is the logical negation of linear independence, is then only defined for sets of vectors. However, as we pointed out earlier, it is really desirable to define linear dependence for families allowing multiple occurrences of the same vector.
Example 2.7.
1. Any two distinct scalars λ, μ ≠ 0 in K are linearly dependent.

2. In R³, the vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1) are linearly independent.

3. In R⁴, the vectors (1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), and (0, 0, 0, 1) are linearly independent.

4. In R², the vectors u = (1, 1), v = (0, 1) and w = (2, 3) are linearly dependent, since w = 2u + v.
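A quick numerical cross-check of items 2-4 (an added illustration, not part of the original text): linear independence of a finite family in Rⁿ can be tested by comparing the rank of the matrix whose rows are the vectors with the number of vectors.

```python
import numpy as np

def independent(vectors):
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

print(independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))                         # True
print(independent([(1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)]))  # True
print(independent([(1, 1), (0, 1), (2, 3)]))                                  # False: w = 2u + v
```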
Note that a family (u_i)_{i∈I} is linearly independent iff (u_j)_{j∈J} is linearly independent for every finite subset J of I (even when I = ∅). Indeed, when ∑_{i∈I} λ_i u_i = 0, the family (λ_i)_{i∈I} of scalars in K has finite support, and thus ∑_{i∈I} λ_i u_i = 0 really means that ∑_{j∈J} λ_j u_j = 0 for a finite subset J of I. When I is finite, we often assume that it is the set I = {1, 2, . . . , n}. In this case, we denote the family (u_i)_{i∈I} as (u1, . . . , un).
The notion of a subspace of a vector space is defined as follows.
Definition 2.11. Given a vector space E, a subset F of E is a linear subspace (or subspace) of E if F is nonempty and λu + μv ∈ F for all u, v ∈ F and all λ, μ ∈ K.

It is easy to see that a subspace F of E is indeed a vector space, since the restriction of + : E × E → E to F × F is indeed a function + : F × F → F, and the restriction of · : K × E → E to K × F is indeed a function · : K × F → F.

It is also easy to see that any intersection of subspaces is a subspace. Since F is nonempty, if we pick any vector u ∈ F and if we let λ = μ = 0, then λu + μu = 0u + 0u = 0, so every subspace contains the vector 0. For any nonempty finite index set I, one can show by induction on the cardinality of I that if (u_i)_{i∈I} is any family of vectors u_i ∈ F and (λ_i)_{i∈I} is any family of scalars, then ∑_{i∈I} λ_i u_i ∈ F.

The subspace {0} will be denoted by (0), or even 0 (with a mild abuse of notation).
Example 2.8.
1. In R², the set of vectors u = (x, y) such that

x + y = 0

is a subspace.

2. In R³, the set of vectors u = (x, y, z) such that

x + y + z = 0

is a subspace.

3. For any n ≥ 0, the set of polynomials f(X) ∈ R[X] of degree at most n is a subspace of R[X].

The set Span(S) of all (finite) linear combinations of vectors from a nonempty subset S of E is also a subspace of E. Indeed, if u = ∑_{i∈I} λ_i u_i and v = ∑_{j∈J} μ_j v_j are two linear combinations of vectors of S (with u_i = v_i for i ∈ I ∩ J), then

u + v = ∑_{i∈I−J} λ_i u_i + ∑_{i∈I∩J} (λ_i + μ_i) u_i + ∑_{j∈J−I} μ_j v_j,

which is a linear combination with index set I ∪ J, and thus u + v ∈ Span(S), which proves that Span(S) is a subspace.
One might wonder what happens if we add extra conditions to the coefficients involved in forming linear combinations. Here are three natural restrictions which turn out to be important (as usual, we assume that our index sets are finite):

(1) Consider combinations ∑_{i∈I} λ_i u_i for which

∑_{i∈I} λ_i = 1.

These are called affine combinations. One should realize that every linear combination ∑_{i∈I} λ_i u_i can be viewed as an affine combination. For example, if k is an index not in I, if we let J = I ∪ {k}, u_k = 0, and λ_k = 1 − ∑_{i∈I} λ_i, then ∑_{j∈J} λ_j u_j is an affine combination and

∑_{i∈I} λ_i u_i = ∑_{j∈J} λ_j u_j.

However, we get new spaces. For example, in R³, the set of all affine combinations of the three vectors e1 = (1, 0, 0), e2 = (0, 1, 0), and e3 = (0, 0, 1) is the plane passing through these three points. Since it does not contain 0 = (0, 0, 0), it is not a linear subspace.

(2) Consider combinations ∑_{i∈I} λ_i u_i for which

λ_i ≥ 0, for all i ∈ I.

These are called positive (or conic) combinations. It turns out that positive combinations of families of vectors are cones. They show up naturally in convex optimization.

(3) Consider combinations ∑_{i∈I} λ_i u_i for which

∑_{i∈I} λ_i = 1, and λ_i ≥ 0 for all i ∈ I.

These are called convex combinations. Given any finite family of vectors, the set of all convex combinations of these vectors is a convex polyhedron. Convex polyhedra play a very important role in convex optimization. (These three kinds of combinations are contrasted numerically in the sketch after this list.)
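The following short numpy check (an added illustration, not from the original text) contrasts the restrictions above: an affine combination of e1, e2, e3 lands on the plane x + y + z = 1, and a convex combination additionally stays inside the triangle spanned by these points.

```python
import numpy as np

e = np.eye(3)                      # e1, e2, e3 as rows
lam = np.array([0.2, 0.3, 0.5])    # nonnegative coefficients summing to 1

v_convex = lam @ e                 # convex (hence also affine) combination
print(v_convex, v_convex.sum())    # coordinates sum to 1: on the plane x + y + z = 1

mu = np.array([1.5, -0.2, -0.3])   # sums to 1 but not nonnegative
w = mu @ e                         # affine, not convex: outside the triangle
print(w, w.sum())                  # still on the plane x + y + z = 1
```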
2.4 Bases of a Vector Space

Given a vector space E and a family (v_i)_{i∈I} of vectors of E, the subset V of E consisting of the null vector 0 and of all linear combinations of (v_i)_{i∈I} is easily seen to be a subspace of E. Subspaces having such a generating family play an important role, and motivate the following definition.
Definition 2.12. Given a vector space E and a subspace V of E, a family (v_i)_{i∈I} of vectors v_i ∈ V spans V or generates V if for every v ∈ V, there is some family (λ_i)_{i∈I} of scalars in K such that

v = ∑_{i∈I} λ_i v_i.

We also say that the elements of (v_i)_{i∈I} are generators of V and that V is spanned by (v_i)_{i∈I}, or generated by (v_i)_{i∈I}. If a subspace V of E is generated by a finite family (v_i)_{i∈I}, we say that V is finitely generated. A family (u_i)_{i∈I} that spans V and is linearly independent is called a basis of V.
Example 2.9.
1. In R³, the vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1) form a basis.

2. The vectors (1, 1, 1, 1), (1, 1, −1, −1), (1, −1, 0, 0), (0, 0, 1, −1) form a basis of R⁴ known as the Haar basis. This basis and its generalization to dimension 2ⁿ are crucial in wavelet theory.

3. In the subspace of polynomials in R[X] of degree at most n, the polynomials 1, X, X², . . . , Xⁿ form a basis.

4. The Bernstein polynomials \binom{n}{k} (1 − X)^{n−k} X^k for k = 0, . . . , n, also form a basis of that space. These polynomials play a major role in the theory of spline curves.
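As a sanity check on item 2 (an added sketch, not in the original text), the matrix whose columns are the four Haar vectors has nonzero determinant, so the vectors are linearly independent and hence a basis of R⁴; solving a linear system then gives the coordinates of a vector over that basis.

```python
import numpy as np

# Columns are the Haar basis vectors of R^4.
W = np.array([[1,  1,  1,  0],
              [1,  1, -1,  0],
              [1, -1,  0,  1],
              [1, -1,  0, -1]], dtype=float)

print(np.linalg.det(W))          # nonzero, so the columns form a basis
v = np.array([6.0, 4.0, 5.0, 1.0])
c = np.linalg.solve(W, v)        # coordinates of v over the Haar basis
print(c, np.allclose(W @ c, v))  # reconstruction check
```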
It is a standard result of linear algebra that every vector space E has a basis, and that for any two bases (u_i)_{i∈I} and (v_j)_{j∈J}, I and J have the same cardinality. In particular, if E has a finite basis of n elements, every basis of E has n elements, and the integer n is called the dimension of the vector space E. We begin with a crucial lemma.
Lemma 2.8. Given a linearly independent family (u_i)_{i∈I} of elements of a vector space E, if v ∈ E is not a linear combination of (u_i)_{i∈I}, then the family (u_i)_{i∈I} ∪_k (v) obtained by adding v to the family (u_i)_{i∈I} is linearly independent (where k ∉ I).

Proof. Assume that μv + ∑_{i∈I} λ_i u_i = 0, for any family (λ_i)_{i∈I} of scalars in K. If μ ≠ 0, then μ has an inverse (because K is a field), and thus we have v = −∑_{i∈I} (μ⁻¹λ_i) u_i, showing that v is a linear combination of (u_i)_{i∈I} and contradicting the hypothesis. Thus, μ = 0. But then, we have ∑_{i∈I} λ_i u_i = 0, and since the family (u_i)_{i∈I} is linearly independent, we have λ_i = 0 for all i ∈ I.

The next theorem holds in general, but the proof is more sophisticated for vector spaces that do not have a finite set of generators. Thus, in this chapter, we only prove the theorem for finitely generated vector spaces.
Theorem 2.9. Given any finite family S = (u_i)_{i∈I} generating a vector space E and any linearly independent subfamily L = (u_j)_{j∈J} of S (where J ⊆ I), there is a basis B of E such that L ⊆ B ⊆ S.

Proof. Consider the set of linearly independent families B such that L ⊆ B ⊆ S. Since this set is nonempty and finite, it has some maximal element, say B = (u_h)_{h∈H}. We claim that B generates E. Indeed, if B does not generate E, then there is some u_p ∈ S that is not a linear combination of vectors in B (since S generates E), with p ∉ H. Then, by Lemma 2.8, the family B′ = (u_h)_{h∈H∪{p}} is linearly independent, and since L ⊆ B ⊂ B′ ⊆ S, this contradicts the maximality of B. Thus, B is a basis of E such that L ⊆ B ⊆ S.

Remark: Theorem 2.9 also holds for vector spaces that are not finitely generated. In this case, the problem is to guarantee the existence of a maximal linearly independent family B such that L ⊆ B ⊆ S. The existence of such a maximal family can be shown using Zorn's lemma; see Appendix 31 and the references given there.
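The proof of Theorem 2.9 is effectively a greedy procedure: keep adjoining vectors of S that are not linear combinations of what has been collected so far. Here is a minimal numpy sketch of that idea for families of vectors in Rⁿ (an added illustration, not part of the original text; the rank test is one convenient numerical way to detect linear combinations).

```python
import numpy as np

def extend_to_basis(L, S):
    """Grow the linearly independent family L, using vectors from S, into a basis
    of the subspace spanned by S (Theorem 2.9 with L contained in B contained in S)."""
    B = [np.asarray(v, dtype=float) for v in L]
    for v in S:
        v = np.asarray(v, dtype=float)
        candidate = B + [v]
        # Keep v only if it is not a linear combination of the current B.
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            B.append(v)
    return B

S = [(1, 1, 0), (2, 2, 0), (0, 1, 1), (1, 0, -1)]
L = [(1, 1, 0)]
print(extend_to_basis(L, S))   # a basis of Span(S) containing (1, 1, 0)
```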
A situation where the full generality of Theorem 2.9 is needed is the case of the vector space R over the field of coefficients Q. The numbers 1 and √2 are linearly independent over Q, so according to Theorem 2.9, the linearly independent family L = (1, √2) can be extended to a basis B of R. Since R is uncountable and Q is countable, such a basis must be uncountable!

Let (v_i)_{i∈I} be a family of vectors in E. We say that (v_i)_{i∈I} is a maximal linearly independent family of E if it is linearly independent, and if for any vector w ∈ E, the family (v_i)_{i∈I} ∪_k (w) obtained by adding w to the family (v_i)_{i∈I} is linearly dependent. We say that (v_i)_{i∈I} is a minimal generating family of E if it spans E, and if for any index p ∈ I, the family (v_i)_{i∈I−{p}} obtained by removing v_p from the family (v_i)_{i∈I} does not span E.
The following proposition giving useful properties characterizing a basis is an immediate
consequence of Theorem 2.9.
Proposition 2.10. Given a vector space E, for any family B = (v_i)_{i∈I} of vectors of E, the following properties are equivalent:
(1) B is a basis of E.
(2) B is a maximal linearly independent family of E.
(3) B is a minimal generating family of E.
Proof. We prove the equivalence of (1) and (2), leaving the equivalence of (1) and (3) as an exercise.

Assume (1). We claim that B is a maximal linearly independent family. If B is not a maximal linearly independent family, then there is some vector w ∈ E such that the family B′ obtained by adding w to B is linearly independent. However, since B is a basis of E, the vector w is a linear combination of the vectors in B, so B′ would be linearly dependent, a contradiction.
The key to comparing the sizes of bases is the replacement lemma. In its informal version, it says that if (u1, . . . , um) is a linearly independent family and every u_i is a linear combination of a family (v1, . . . , vn), then m ≤ n, and m of the vectors v_j can be replaced by (u1, . . . , um) without changing the subspace generated. The induction step goes as follows: assuming the result for (u1, . . . , um), write

u_{m+1} = ∑_{i=1}^{m} λ_i u_i + ∑_{j=m+1}^{n} λ_j v_j.

We claim that λ_j ≠ 0 for some j with m + 1 ≤ j ≤ n; otherwise we would have

u_{m+1} − ∑_{i=1}^{m} λ_i u_i = 0,

a nontrivial linear dependence of the u_i, which is impossible since (u1, . . . , u_{m+1}) are linearly independent. Renaming the indices if necessary, we may assume that λ_{m+1} ≠ 0, and we get

v_{m+1} = ∑_{i=1}^{m} (−λ_{m+1}⁻¹ λ_i) u_i + λ_{m+1}⁻¹ u_{m+1} + ∑_{j=m+2}^{n} (−λ_{m+1}⁻¹ λ_j) v_j.

Observe that the families (u1, . . . , um, v_{m+1}, . . . , v_n) and (u1, . . . , u_{m+1}, v_{m+2}, . . . , v_n) generate the same subspace, since u_{m+1} is a linear combination of (u1, . . . , um, v_{m+1}, . . . , v_n) and v_{m+1} is a linear combination of (u1, . . . , u_{m+1}, v_{m+2}, . . . , v_n). Since (u1, . . . , um, v_{m+1}, . . . , v_n) and (v1, . . . , v_n) generate the same subspace, we conclude that (u1, . . . , u_{m+1}, v_{m+2}, . . . , v_n) and (v1, . . . , v_n) generate the same subspace, which concludes the induction step.

For the sake of completeness, here is a more formal statement of the replacement lemma (and its proof).
For the sake of completeness, here is a more formal statement of the replacement lemma
(and its proof).
Proposition 2.12. (Replacement lemma, version 2) Given a vector space E, let (u_i)_{i∈I} be any finite linearly independent family in E, where |I| = m, and let (v_j)_{j∈J} be any finite family such that every u_i is a linear combination of (v_j)_{j∈J}, where |J| = n. Then, there exists a set L and an injection ρ : L → J (a relabeling function) such that L ∩ I = ∅, |L| = n − m, and the families (u_i)_{i∈I} ∪ (v_{ρ(l)})_{l∈L} and (v_j)_{j∈J} generate the same subspace of E. In particular, m ≤ n.
Proof. We proceed by induction on |I| = m. When m = 0, the family (u_i)_{i∈I} is empty, and the proposition holds trivially with L = J (ρ is the identity). Assume |I| = m + 1. Consider the linearly independent family (u_i)_{i∈(I−{p})}, where p is any member of I. By the induction hypothesis, there exists a set L and an injection ρ : L → J such that L ∩ (I − {p}) = ∅, |L| = n − m, and the families (u_i)_{i∈(I−{p})} ∪ (v_{ρ(l)})_{l∈L} and (v_j)_{j∈J} generate the same subspace of E. If p ∈ L, we can replace L by (L − {p}) ∪ {p′} where p′ does not belong to I ∪ L, and replace ρ by the injection ρ′ which agrees with ρ on L − {p} and such that ρ′(p′) = ρ(p). Thus, we can always assume that L ∩ I = ∅. Since u_p is a linear combination of (v_j)_{j∈J} and the families (u_i)_{i∈(I−{p})} ∪ (v_{ρ(l)})_{l∈L} and (v_j)_{j∈J} generate the same subspace of E, u_p is a linear combination of (u_i)_{i∈(I−{p})} ∪ (v_{ρ(l)})_{l∈L}. Let

u_p = ∑_{i∈(I−{p})} λ_i u_i + ∑_{l∈L} λ_l v_{ρ(l)}.  (1)

If λ_l = 0 for all l ∈ L, we have

∑_{i∈(I−{p})} λ_i u_i − u_p = 0,

contradicting the fact that (u_i)_{i∈I} is linearly independent. Thus, λ_q ≠ 0 for some q ∈ L, and we can write

v_{ρ(q)} = ∑_{i∈(I−{p})} (−λ_q⁻¹ λ_i) u_i + λ_q⁻¹ u_p + ∑_{l∈(L−{q})} (−λ_q⁻¹ λ_l) v_{ρ(l)}.  (2)

We claim that the families (u_i)_{i∈(I−{p})} ∪ (v_{ρ(l)})_{l∈L} and (u_i)_{i∈I} ∪ (v_{ρ(l)})_{l∈(L−{q})} generate the same subspace of E. Indeed, the second family is obtained from the first by replacing v_{ρ(q)} by u_p, and vice-versa, and u_p is a linear combination of (u_i)_{i∈(I−{p})} ∪ (v_{ρ(l)})_{l∈L}, by (1), and v_{ρ(q)} is a linear combination of (u_i)_{i∈I} ∪ (v_{ρ(l)})_{l∈(L−{q})}, by (2). Thus, the families (u_i)_{i∈I} ∪ (v_{ρ(l)})_{l∈(L−{q})} and (v_j)_{j∈J} generate the same subspace of E, and the proposition holds for L − {q} and the restriction of the injection ρ : L → J to L − {q}, since L ∩ I = ∅ and |L| = n − m imply that (L − {q}) ∩ I = ∅ and |L − {q}| = n − (m + 1).
The idea is that m of the vectors v_j can be replaced by the linearly independent u_i's in such a way that the same subspace is still generated. The purpose of the function ρ : L → J is to pick n − m elements j1, . . . , j_{n−m} of J and to relabel them l1, . . . , l_{n−m} in such a way that these new indices do not clash with the indices in I; this way, the vectors v_{j1}, . . . , v_{j_{n−m}} that survive (i.e., are not replaced) are relabeled v_{l1}, . . . , v_{l_{n−m}}, and the other m vectors v_j with j ∈ J − {j1, . . . , j_{n−m}} are replaced by the u_i. The index set of this new family is I ∪ L.
Actually, one can prove that Proposition 2.12 implies Theorem 2.9 when the vector space
is finitely generated. Putting Theorem 2.9 and Proposition 2.12 together, we obtain the
following fundamental theorem.
Theorem 2.13. Let E be a finitely generated vector space. Any family (u_i)_{i∈I} generating E contains a subfamily (u_j)_{j∈J} which is a basis of E. Any linearly independent family (u_i)_{i∈I} can be extended to a family (u_j)_{j∈J} which is a basis of E (with I ⊆ J). Furthermore, for every two bases (u_i)_{i∈I} and (v_j)_{j∈J} of E, we have |I| = |J| = n for some fixed integer n ≥ 0.

Proof. The first part follows immediately by applying Theorem 2.9 with L = ∅ and S = (u_i)_{i∈I}. For the second part, consider the family S′ = (u_i)_{i∈I} ∪ (v_h)_{h∈H}, where (v_h)_{h∈H} is any finite family generating E, and with I ∩ H = ∅. Then, apply Theorem 2.9 to L = (u_i)_{i∈I} and to S′. For the last statement, assume that (u_i)_{i∈I} and (v_j)_{j∈J} are bases of E. Since (u_i)_{i∈I} is linearly independent and (v_j)_{j∈J} spans E, Proposition 2.12 implies that |I| ≤ |J|. A symmetric argument yields |J| ≤ |I|.

Remark: Theorem 2.13 also holds for vector spaces that are not finitely generated. This can be shown as follows. Let (u_i)_{i∈I} be a basis of E, let (v_j)_{j∈J} be a generating family of E, and assume that I is infinite. For every j ∈ J, let L_j ⊆ I be the finite set

L_j = {i ∈ I | v_j = ∑_{i∈I} λ_i u_i, λ_i ≠ 0}.

Let L = ∪_{j∈J} L_j. By definition L ⊆ I, and since (u_i)_{i∈I} is a basis of E, we must have I = L, since otherwise (u_i)_{i∈L} would be another basis of E, and this would contradict the fact that (u_i)_{i∈I} is linearly independent. Furthermore, J must be infinite, since otherwise, because the L_j are finite, I would be finite. But then, since I = ∪_{j∈J} L_j with J infinite and the L_j finite, by a standard result of set theory, |I| ≤ |J|. If (v_j)_{j∈J} is also a basis, by a symmetric argument, we obtain |J| ≤ |I|, and thus, |I| = |J| for any two bases (u_i)_{i∈I} and (v_j)_{j∈J} of E.
When E is not finitely generated, we say that E is of infinite dimension. The dimension
of a vector space E is the common cardinality of all of its bases and is denoted by dim(E).
Clearly, if the field K itself is viewed as a vector space, then every family (a) where a ∈ K and a ≠ 0 is a basis. Thus dim(K) = 1. Note that dim({0}) = 0.
If E is a vector space, for any subspace U of E, if dim(U ) = 1, then U is called a line; if
dim(U ) = 2, then U is called a plane. If dim(U ) = k, then U is sometimes called a k-plane.
Let (u_i)_{i∈I} be a basis of a vector space E. For any vector v ∈ E, since the family (u_i)_{i∈I} generates E, there is a family (λ_i)_{i∈I} of scalars in K, such that

v = ∑_{i∈I} λ_i u_i.

A very important fact is that the family (λ_i)_{i∈I} is unique. Indeed, if v = ∑_{i∈I} λ_i u_i = ∑_{i∈I} μ_i u_i, then ∑_{i∈I} (λ_i − μ_i) u_i = 0, and since (u_i)_{i∈I} is linearly independent, we must have λ_i − μ_i = 0 for all i ∈ I, that is, λ_i = μ_i for all i ∈ I. The converse is shown by contradiction. If (u_i)_{i∈I} was linearly dependent, there would be a family (μ_i)_{i∈I} of scalars not all null such that

∑_{i∈I} μ_i u_i = 0

and μ_j ≠ 0 for some j ∈ I. But then,

v = ∑_{i∈I} λ_i u_i = ∑_{i∈I} λ_i u_i + ∑_{i∈I} μ_i u_i = ∑_{i∈I} (λ_i + μ_i) u_i,

with λ_j ≠ λ_j + μ_j since μ_j ≠ 0, contradicting the assumption that (λ_i)_{i∈I} is the unique family such that v = ∑_{i∈I} λ_i u_i.

If (u_i)_{i∈I} is a basis of a vector space E, for any vector v ∈ E, if (x_i)_{i∈I} is the unique family of scalars in K such that

v = ∑_{i∈I} x_i u_i,

each x_i is called the component (or coordinate) of index i of v with respect to the basis (u_i)_{i∈I}.
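To illustrate why linear independence is exactly what makes coordinates unique (an added example, not part of the original text), the dependent family (u1, u2, u1 + u2) in R² below represents the same vector with two different coefficient families, while a basis gives a unique representation.

```python
import numpy as np

u1, u2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
family = np.column_stack([u1, u2, u1 + u2])      # linearly dependent family

c1 = np.array([1.0, 2.0, 0.0])                   # v = 1*u1 + 2*u2
c2 = np.array([0.0, 1.0, 1.0])                   # v = 0*u1 + 1*u2 + 1*(u1 + u2)
print(family @ c1, family @ c2)                  # same vector, different coefficients

basis = np.column_stack([u1, u2])                # a basis of R^2
v = family @ c1
print(np.linalg.solve(basis, v))                 # the unique coordinates of v
```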
Given a field K and any (nonempty) set I, we can form a vector space K^(I) which, in some sense, is the standard vector space of dimension |I|.

Definition 2.13. Given a field K and any (nonempty) set I, let K^(I) be the subset of the cartesian product K^I consisting of all families (λ_i)_{i∈I} with finite support of scalars in K. We define addition and multiplication by a scalar as follows:

(λ_i)_{i∈I} + (μ_i)_{i∈I} = (λ_i + μ_i)_{i∈I},

and

λ · (μ_i)_{i∈I} = (λμ_i)_{i∈I}.

It is immediately verified that addition and multiplication by a scalar are well defined. Thus, K^(I) is a vector space. Furthermore, because families with finite support are considered, the family (e_i)_{i∈I} of vectors e_i, defined such that (e_i)_j = 0 if j ≠ i and (e_i)_i = 1, is clearly a basis of the vector space K^(I). When I = {1, . . . , n}, we denote K^(I) by Kⁿ. The function ι : I → K^(I), such that ι(i) = e_i for every i ∈ I, is clearly an injection.

When I is a finite set, K^(I) = K^I, but this is false when I is infinite. In fact, dim(K^(I)) = |I|, but dim(K^I) is strictly greater when I is infinite.
Many interesting mathematical structures are vector spaces. A very important example
is the set of linear maps between two vector spaces to be defined in the next section. Here
is an example that will prepare us for the vector space of linear maps.
Example 2.10. Let X be any nonempty set and let E be a vector space. The set of all functions f : X → E can be made into a vector space as follows: Given any two functions f : X → E and g : X → E, let (f + g) : X → E be defined such that

(f + g)(x) = f(x) + g(x)

for all x ∈ X, and for every λ ∈ K, let λf : X → E be defined such that

(λf)(x) = λf(x)

for all x ∈ X. The axioms of a vector space are easily verified. Now, let E = K, and let I be the set of all nonempty subsets of X. For every S ∈ I, let f_S : X → E be the function such that f_S(x) = 1 iff x ∈ S, and f_S(x) = 0 iff x ∉ S. We leave as an exercise to show that (f_S)_{S∈I} is linearly independent.
2.5 Linear Maps

A function between two vector spaces that preserves the vector space structure is called a homomorphism of vector spaces, or linear map. Linear maps formalize the concept of linearity of a function. In the rest of this section, we assume that all vector spaces are over a given field K (say R).

Definition 2.14. Given two vector spaces E and F, a linear map between E and F is a function f : E → F satisfying the following two conditions:

f(x + y) = f(x) + f(y)  for all x, y ∈ E;
f(λx) = λf(x)  for all λ ∈ K, x ∈ E.
Setting x = y = 0 in the first identity, we get f(0) = 0. The basic property of linear maps is that they transform linear combinations into linear combinations. Given a family (u_i)_{i∈I} of vectors in E, given any family (λ_i)_{i∈I} of scalars in K, we have

f( ∑_{i∈I} λ_i u_i ) = ∑_{i∈I} λ_i f(u_i).

The above identity is shown by induction on the size of the support of the family (λ_i u_i)_{i∈I}, using the properties of Definition 2.14.
Example 2.11.

1. The map f : R² → R² defined such that

x′ = x − y
y′ = x + y

is a linear map. The reader should check that it is the composition of a rotation by π/4 with a magnification of ratio √2 (this claim is verified numerically in the sketch after this example).

2. For any vector space E, the identity map id : E → E given by

id(u) = u for all u ∈ E

is a linear map. When we want to be more precise, we write id_E instead of id.

3. The map D : R[X] → R[X] defined such that

D(f(X)) = f′(X),

where f′(X) is the derivative of the polynomial f(X), is a linear map.

4. The map that sends a continuous function f ∈ C([a, b]) to the integral

∫_a^b f(t) dt,

where C([a, b]) is the set of continuous functions defined on the interval [a, b], is a linear map.

The map that sends a pair of continuous functions f, g ∈ C([a, b]) to

⟨f, g⟩ = ∫_a^b f(t)g(t) dt

is linear in each of the variables f, g. It also satisfies the properties ⟨f, g⟩ = ⟨g, f⟩ and ⟨f, f⟩ = 0 iff f = 0. It is an example of an inner product.
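The claim in item 1 can be checked numerically (an added sketch, not in the original text): the matrix of the map (x, y) ↦ (x − y, x + y) equals √2 times the rotation matrix by π/4.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])                      # matrix of (x, y) -> (x - y, x + y)

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation by pi/4

print(np.allclose(A, np.sqrt(2) * R))            # True: A = sqrt(2) * R
```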
Definition 2.15. Given a linear map f : E → F, we define its image (or range) Im f = f(E), as the set

Im f = {y ∈ F | (∃x ∈ E)(y = f(x))},

and its Kernel (or nullspace) Ker f = f⁻¹(0), as the set

Ker f = {x ∈ E | f(x) = 0}.
Proposition 2.16. Given any two vector spaces E and F, given any basis (u_i)_{i∈I} of E, given any other family of vectors (v_i)_{i∈I} in F, there is a unique linear map f : E → F such that f(u_i) = v_i for all i ∈ I. Furthermore, f is injective iff (v_i)_{i∈I} is linearly independent, and f is surjective iff (v_i)_{i∈I} generates F.

Proof. If such a linear map f : E → F exists, since (u_i)_{i∈I} is a basis of E, every vector x ∈ E can be written uniquely as a linear combination

x = ∑_{i∈I} x_i u_i,

and by linearity, we must have

f(x) = ∑_{i∈I} x_i f(u_i) = ∑_{i∈I} x_i v_i.

Define the function f : E → F by letting

f(x) = ∑_{i∈I} x_i v_i

for every x = ∑_{i∈I} x_i u_i. It is easy to verify that f is indeed linear, it is unique by the previous reasoning, and obviously, f(u_i) = v_i.

Now, assume that f is injective. Let (λ_i)_{i∈I} be any family of scalars, and assume that

∑_{i∈I} λ_i v_i = 0.

Since f(u_i) = v_i for every i ∈ I, we have

f( ∑_{i∈I} λ_i u_i ) = ∑_{i∈I} λ_i f(u_i) = ∑_{i∈I} λ_i v_i = 0.

Since f is injective and f(0) = 0, we get

∑_{i∈I} λ_i u_i = 0,

and since (u_i)_{i∈I} is a basis, we have λ_i = 0 for all i ∈ I, which shows that (v_i)_{i∈I} is linearly independent. Conversely, assume that (v_i)_{i∈I} is linearly independent. Since (u_i)_{i∈I} is a basis of E, every vector x ∈ E is a linear combination x = ∑_{i∈I} λ_i u_i of (u_i)_{i∈I}. If

f(x) = f( ∑_{i∈I} λ_i u_i ) = 0,

then

∑_{i∈I} λ_i v_i = ∑_{i∈I} λ_i f(u_i) = f( ∑_{i∈I} λ_i u_i ) = 0,

and λ_i = 0 for all i ∈ I because (v_i)_{i∈I} is linearly independent, which means that x = 0. Therefore, Ker f = 0, which implies that f is injective. The part where f is surjective is left as a simple exercise.
By the second part of Proposition 2.16, an injective linear map f : E → F sends a basis (u_i)_{i∈I} to a linearly independent family (f(u_i))_{i∈I} of F, which is also a basis when f is bijective. Also, when E and F have the same finite dimension n, (u_i)_{i∈I} is a basis of E, and f : E → F is injective, then (f(u_i))_{i∈I} is a basis of F (by Proposition 2.10).
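Proposition 2.16 is what lets us encode a linear map by a matrix: once a basis (u_i) of E is fixed, the map is determined by the vectors v_i = f(u_i), which become the columns of a matrix. A small numpy sketch of this construction (an added illustration, not part of the original text):

```python
import numpy as np

# Fix the standard basis u1, u2, u3 of R^3 and choose images v_i = f(u_i) in R^2.
v1, v2, v3 = [1.0, 0.0], [1.0, 1.0], [0.0, 2.0]
M = np.column_stack([v1, v2, v3])   # the matrix of f: its columns are the f(u_i)

x = np.array([2.0, -1.0, 3.0])      # coordinates of x over (u1, u2, u3)
print(M @ x)                        # f(x) = sum_i x_i f(u_i) = [1., 5.]
```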
We can now show that the vector space K^(I) of Definition 2.13 has a universal property that amounts to saying that K^(I) is the vector space freely generated by I. Recall that ι : I → K^(I), such that ι(i) = e_i for every i ∈ I, is an injection from I to K^(I).

Proposition 2.17. Given any set I, for any vector space F, and for any function f : I → F, there is a unique linear map f̄ : K^(I) → F, such that

f = f̄ ∘ ι

(so that the triangle formed by ι : I → K^(I), f : I → F, and f̄ : K^(I) → F commutes).

Proof. If such a linear map f̄ : K^(I) → F exists, since f = f̄ ∘ ι, we must have
If f(u_i) = v_i for all i ∈ I and f is linear, then for every x = ∑_{i∈I} x_i u_i we must have

f(x) = ∑_{i∈I} x_i f(u_i) = ∑_{i∈I} x_i v_i.

This shows that f is unique if it exists. Conversely, assume that (u_i)_{i∈I} does not generate E. Since F is nontrivial, there is some vector y ∈ F such that y ≠ 0. Since (u_i)_{i∈I} does not generate E, there is some vector w ∈ E that is not in the subspace generated by (u_i)_{i∈I}. By Theorem 2.13, there is a linearly independent subfamily (u_i)_{i∈I0} of (u_i)_{i∈I} generating the same subspace. Since by hypothesis, w ∈ E is not in the subspace generated by (u_i)_{i∈I0}, by Lemma 2.8 and by Theorem 2.13 again, there is a basis (e_j)_{j∈I0∪J} of E, such that e_i = u_i, for all i ∈ I0, and w = e_{j0}, for some j0 ∈ J. Letting (v_i)_{i∈I} be the family in F such that v_i = 0 for all i ∈ I, defining f : E → F to be the constant linear map with value 0, we have a linear map such that f(u_i) = 0 for all i ∈ I. By Proposition 2.16, there is a unique linear map g : E → F such that g(w) = y, and g(e_j) = 0, for all j ∈ (I0 ∪ J) − {j0}. By definition of the basis (e_j)_{j∈I0∪J} of E, we have g(u_i) = 0 for all i ∈ I, and since f ≠ g, this contradicts the fact that there is at most one such map.

(2) If the family (u_i)_{i∈I} is linearly independent, then by Theorem 2.13, (u_i)_{i∈I} can be extended to a basis of E, and the conclusion follows by Proposition 2.16. Conversely, assume that (u_i)_{i∈I} is linearly dependent. Then, there is some family (λ_i)_{i∈I} of scalars (not all zero) such that

∑_{i∈I} λ_i u_i = 0.

By the assumption, for any nonzero vector y ∈ F, for every i ∈ I, there is some linear map f_i : E → F, such that f_i(u_i) = y, and f_i(u_j) = 0, for j ∈ I − {i}. Then, we would get

0 = f_i( ∑_{i∈I} λ_i u_i ) = ∑_{i∈I} λ_i f_i(u_i) = λ_i y,

and since y ≠ 0, this implies λ_i = 0 for every i ∈ I, contradicting the assumption that the λ_i are not all zero.
A linear map f : E → F is an isomorphism iff there is a linear map g : F → E such that

g ∘ f = id_E and f ∘ g = id_F.  (∗)

Such a map g is unique: if g and h both satisfy g ∘ f = id_E, f ∘ g = id_F, h ∘ f = id_E, and f ∘ h = id_F, then

g = g ∘ id_F = g ∘ (f ∘ h) = (g ∘ f) ∘ h = id_E ∘ h = h.

The map g satisfying (∗) above is called the inverse of f and it is also denoted by f⁻¹.
Proposition 2.16 implies that if E and F are two vector spaces, (u_i)_{i∈I} is a basis of E, and f : E → F is a linear map which is an isomorphism, then the family (f(u_i))_{i∈I} is a basis of F.

One can verify that if f : E → F is a bijective linear map, then its inverse f⁻¹ : F → E is also a linear map, and thus f is an isomorphism.
(1) If f has a left inverse g, that is, if g is a linear map such that g ∘ f = id, then f is an isomorphism and f⁻¹ = g.

(2) If f has a right inverse h, that is, if h is a linear map such that f ∘ h = id, then f is an isomorphism and f⁻¹ = h.

Proof. (1) The equation g ∘ f = id implies that f is injective; this is a standard result about functions (if f(x) = f(y), then g(f(x)) = g(f(y)), which implies that x = y since g ∘ f = id). Let (u1, . . . , un) be any basis of E. By Proposition 2.16, since f is injective, (f(u1), . . . , f(un)) is linearly independent, and since E has dimension n, it is a basis of E (if (f(u1), . . . , f(un)) doesn't span E, then it can be extended to a basis of dimension strictly greater than n, contradicting Theorem 2.13). Then, f is bijective, and by a previous observation its inverse is a linear map. We also have

g = g ∘ id = g ∘ (f ∘ f⁻¹) = (g ∘ f) ∘ f⁻¹ = id ∘ f⁻¹ = f⁻¹.

(2) The equation f ∘ h = id implies that f is surjective; this is a standard result about functions (for any y ∈ E, we have f(h(y)) = y). Let (u1, . . . , un) be any basis of E. By Proposition 2.16, since f is surjective, (f(u1), . . . , f(un)) spans E, and since E has dimension n, it is a basis of E (if (f(u1), . . . , f(un)) is not linearly independent, then because it spans E, it contains a basis of dimension strictly smaller than n, contradicting Theorem 2.13). Then, f is bijective, and by a previous observation its inverse is a linear map. We also have

h = id ∘ h = (f⁻¹ ∘ f) ∘ h = f⁻¹ ∘ (f ∘ h) = f⁻¹ ∘ id = f⁻¹.
( f )(x) = f (x)
for all x 2 E. The point worth checking carefully is that f is indeed a linear map, which
uses the commutativity of in the field K. Indeed, we have
( f )(x) = f (x) = f (x) = f (x) = ( f )(x).
When E and F have finite dimensions, the vector space Hom(E, F ) also has finite dimension, as we shall see shortly. When E = F , a linear map f : E ! E is also called an
endomorphism. It is also important to note that composition confers to Hom(E, E) a ring
structure. Indeed, composition is an operation : Hom(E, E) Hom(E, E) ! Hom(E, E),
which is associative and has an identity idE , and the distributivity properties hold:
(g1 + g2 ) f = g1 f + g2 f ;
g (f1 + f2 ) = g f1 + g f2 .
The ring Hom(E, E) is an example of a noncommutative ring. It is easily seen that the
set of bijective linear maps f : E ! E is a group under composition. Bijective linear maps
are also called automorphisms. The group of automorphisms of E is called the general linear
group (of E), and it is denoted by GL(E), or by Aut(E), or when E = K n , by GL(n, K),
or even by GL(n).
Although in this book, we will not have many occasions to use quotient spaces, they are
fundamental in algebra. The next section may be omitted until needed.
2.6
Quotient Spaces
Let E be a vector space, and let M be any subspace of E. The subspace M induces a relation
M on E, defined as follows: For all u, v 2 E,
u M v i u
v 2 M.
(v1 + v2 ) = w1 + w2 ,
2.7. SUMMARY
49
2.7
Summary
The main concepts and results of this chapter are listed below:
Groups, rings and fields.
The notion of a vector space.
Families of vectors.
Linear combinations of vectors; linear dependence and linear independence of a family
of vectors.
Linear subspaces.
Spanning (or generating) family; generators, finitely generated subspace; basis of a
subspace.
Every linearly independent family can be extended to a basis (Theorem 2.9).
50
Chapter 3
Matrices and Linear Maps
3.1
Matrices
Proposition 2.16 shows that given two vector spaces E and F and a basis (uj )j2J of E,
every linear map f : E ! F is uniquely determined by the family (f (uj ))j2J of the images
under f of the vectors in the basis (uj )j2J . Thus, in particular, taking F = K (J) , we get an
isomorphism between any vector space E of dimension |J| and K (J) . If J = {1, . . . , n}, a
vector space E of dimension n is isomorphic to the vector space K n . If we also have a basis
(vi )i2I of F , then every vector f (uj ) can be written in a unique way as
f (uj ) =
ai j v i ,
i2I
where j 2 J, for a family of scalars (ai j )i2I . Thus, with respect to the two bases (uj )j2J
of E and (vi )i2I of F , the linear map f is completely determined by a possibly infinite
I J-matrix M (f ) = (ai j )i2I, j2J .
Remark: Note that we intentionally assigned the index set J to the basis (uj )j2J of E,
and the index I to the basis (vi )i2I of F , so that the rows of the matrix M (f ) associated
with f : E ! F are indexed by I, and the columns of the matrix M (f ) are indexed by J.
Obviously, this causes a mildly unpleasant reversal. If we had considered the bases (ui )i2I of
E and (vj )j2J of F , we would obtain a J I-matrix M (f ) = (aj i )j2J, i2I . No matter what
we do, there will be a reversal! We decided to stick to the bases (uj )j2J of E and (vi )i2I of
F , so that we get an I J-matrix M (f ), knowing that we may occasionally suer from this
decision!
When I and J are finite, and say, when |I| = m and |J| = n, the linear map f is
determined by the matrix M (f ) whose entries in the j-th column are the components of the
51
52
1
. . . a1 n
. . . a2 n C
C
.. C
..
.
. A
. . . am n
We will now show that when E and F have finite dimension, linear maps can be very
conveniently represented by matrices, and that composition of linear maps corresponds to
matrix multiplication. We will follow rather closely an elegant presentation method due to
Emil Artin.
Let E and F be two vector spaces, and assume that E has a finite basis (u1 , . . . , un ) and
that F has a finite basis (v1 , . . . , vm ). Recall that we have shown that every vector x 2 E
can be written in a unique way as
x = x 1 u1 + + x n un ,
and similarly every vector y 2 F can be written in a unique way as
y = y1 v1 + + ym vm .
Let f : E ! F be a linear map between E and F . Then, for every x = x1 u1 + + xn un in
E, by linearity, we have
f (x) = x1 f (u1 ) + + xn f (un ).
Let
or more concisely,
f (uj ) = a1 j v1 + + am j vm ,
f (uj ) =
m
X
ai j v i ,
i=1
for every j, 1 j n. This can be expressed by writing the coefficients a1j , a2j , . . . , amj of
f (uj ) over the basis (v1 , . . . , vm ), as the jth column of a matrix, as shown below:
v1
v2
..
.
vm
f (u1 ) f (u2 )
0
a11
a12
B a21
a22
B
B ..
..
@ .
.
am1 am2
. . . f (un )
1
. . . a1n
. . . a2n C
C
.. C
...
. A
. . . amn
Then, substituting the right-hand side of each f (uj ) into the expression for f (x), we get
f (x) = x1 (
m
X
i=1
ai 1 vi ) + + xn (
m
X
i=1
ai n vi ),
3.1. MATRICES
53
n
X
j=1
a1 j xj )v1 + + (
n
X
am j xj )vm .
j=1
n
X
ai j x j
(1)
j=1
for all i, 1 i m.
To make things more concrete, let us treat the case where n = 3 and m = 2. In this case,
f (u1 ) = a11 v1 + a21 v2
f (u2 ) = a12 v1 + a22 v2
f (u3 ) = a13 v1 + a23 v2 ,
v1 a11
a12
a13
,
v2 a21
a22
a23
and for any x = x1 u1 + x2 u2 + x3 u3 , we have
f (x) = f (x1 u1 + x2 u2 + x3 u3 )
= x1 f (u1 ) + x2 f (u2 ) + x3 f (u3 )
= x1 (a11 v1 + a21 v2 ) + x2 (a12 v1 + a22 v2 ) + x3 (a13 v1 + a23 v2 )
= (a11 x1 + a12 x2 + a13 x3 )v1 + (a21 x1 + a22 x2 + a23 x3 )v2 .
Consequently, since
y = y1 v1 + y2 v2 ,
we have
y1 = a11 x1 + a12 x2 + a13 x3
y2 = a21 x1 + a22 x2 + a23 x3 .
This agrees with the matrix equation
0 1
x1
y1
a11 a12 a13 @ A
x2 .
=
y2
a21 a22 a23
x3
54
Let E, F , and G, be three vectors spaces with respective bases (u1 , . . . , up ) for E,
(v1 , . . . , vn ) for F , and (w1 , . . . , wm ) for G. Let g : E ! F and f : F ! G be linear maps.
As explained earlier, g : E ! F is determined by the images of the basis vectors uj , and
f : F ! G is determined by the images of the basis vectors vk . We would like to understand
how f g : E ! G is determined by the images of the basis vectors uj .
Remark: Note that we are considering linear maps g : E ! F and f : F ! G, instead
of f : E ! F and g : F ! G, which yields the composition f g : E ! G instead of
g f : E ! G. Our perhaps unusual choice is motivated by the fact that if f is represented
by a matrix M (f ) = (ai k ) and g is represented by a matrix M (g) = (bk j ), then f g : E ! G
is represented by the product AB of the matrices A and B. If we had adopted the other
choice where f : E ! F and g : F ! G, then g f : E ! G would be represented by the
product BA. Personally, we find it easier to remember the formula for the entry in row i and
column of j of the product of two matrices when this product is written by AB, rather than
BA. Obviously, this is a matter of taste! We will have to live with our perhaps unorthodox
choice.
Thus, let
f (vk ) =
m
X
ai k w i ,
i=1
n
X
bk j v k ,
k=1
v1
v2
..
.
vn
. . . g(up )
1
. . . b1p
. . . b2p C
C
.. C
..
.
. A
. . . bnp
3.1. MATRICES
55
yk =
bk j x j
(2)
j=1
zi =
ai k y k
(3)
k=1
for all i, 1 i m. Then, if y = g(x) and z = f (y), we have z = f (g(x)), and in view of
(2) and (3), we have
zi =
=
n
X
ai k (
p
X
bk j xj )
j=1
k=1
p
n
XX
ai k b k j x j
k=1 j=1
=
=
p
n
X
X
ai k b k j x j
j=1 k=1
p
n
X
X
ai k bk j )xj .
j=1 k=1
n
X
ai k b k j ,
k=1
p
X
ci j xj
(4)
j=1
56
1jn
of scalars in
1
a1 n
a2 n C
C
.. C
. A
am n
In these last two cases, we usually omit the constant index 1 (first index in case of a row,
second index in case of a column). The set of all m n-matrices is denoted by Mm,n (K)
or Mm,n . An n n-matrix is called a square matrix of dimension n. The set of all square
matrices of dimension n is denoted by Mn (K), or Mn .
Remark: As defined, a matrix A = (ai j )1im, 1jn is a family, that is, a function from
{1, 2, . . . , m} {1, 2, . . . , n} to K. As such, there is no reason to assume an ordering on
the indices. Thus, the matrix A can be represented in many dierent ways as an array, by
adopting dierent orders for the rows or the columns. However, it is customary (and usually
convenient) to assume the natural ordering on the sets {1, 2, . . . , m} and {1, 2, . . . , n}, and
to represent A as an array according to this ordering of the rows and columns.
We also define some operations on matrices as follows.
Definition 3.2. Given two m n matrices A = (ai j ) and B = (bi j ), we define their sum
A + B as the matrix C = (ci j ) such that ci j = ai j + bi j ; that is,
0
a1 1
B a2 1
B
B ..
@ .
a1 2
a2 2
..
.
am 1 am 2
1 0
b1 1 b1 2
. . . a1 n
B b2 1 b2 2
. . . a2 n C
C B
.. C + B ..
..
..
.
. A @ .
.
. . . am n
bm 1 bm 2
1
. . . b1 n
. . . b2 n C
C
.. C
..
.
. A
. . . bm n
0
a1 1 + b 1 1
B a2 1 + b 2 1
B
=B
..
@
.
a1 2 + b1 2
a2 2 + b 2 2
..
.
...
...
...
a1 n + b 1 n
a2 n + b 2 n
..
.
am 1 + bm 1 am 2 + bm 2 . . . am n + bm n
C
C
C.
A
3.1. MATRICES
57
For any matrix A = (ai j ), we let A be the matrix ( ai j ). Given a scalar 2 K, we define
the matrix A as the matrix C = (ci j ) such that ci j = ai j ; that is
0
1 0
1
a1 1 a1 2 . . . a1 n
a1 1
a1 2 . . .
a1 n
B a2 1 a2 2 . . . a2 n C B a2 1
a2 2 . . .
a2 n C
B
C B
C
B ..
..
.. C = B ..
..
.. C .
.
.
.
.
@ .
.
.
.
. A @ .
.
. A
am 1 am 2 . . . am n
am 1 am 2 . . . am n
n
X
ai k b k j ,
k=1
= C shown below
1 0
b1 p
c1 1 c1 2
C
B
b 2 p C B c2 1 c2 2
.. C = B ..
..
. A @ .
.
bn p
cm 1 cm 2
1
. . . c1 p
. . . c2 p C
C
.. C
...
. A
. . . cm p
note that the entry of index i and j of the matrix AB obtained by multiplying the matrices
A and B can be identified with the product of the row matrix corresponding to the i-th row
of A with the column matrix corresponding to the j-column of B:
0 1
b1 j
n
B .. C X
(ai 1 ai n ) @ . A =
ai k b k j .
k=1
bn j
The square matrix In of dimension n containing 1 on the diagonal and 0 everywhere else
is called the identity matrix . It is denoted as
0
1
1 0 ... 0
B0 1 . . . 0 C
B
C
B .. .. . . .. C
@. .
. .A
0 0 ... 1
Given an m n matrix A = (ai j ), its transpose A> = (a>
j i ), is the n m-matrix such
that a>
=
a
,
for
all
i,
1
m,
and
all
j,
1
n.
ij
ji
58
The following observation will be useful later on when we discuss the SVD. Given any
m n matrix A and any n p matrix B, if we denote the columns of A by A1 , . . . , An and
the rows of B by B1 , . . . , Bn , then we have
AB = A1 B1 + + An Bn .
For every square matrix A of dimension n, it is immediately verified that AIn = In A = A.
If a matrix B such that AB = BA = In exists, then it is unique, and it is called the inverse
of A. The matrix B is also denoted by A 1 . An invertible matrix is also called a nonsingular
matrix, and a matrix that is not invertible is called a singular matrix.
Proposition 2.19 shows that if a square matrix A has a left inverse, that is a matrix B
such that BA = I, or a right inverse, that is a matrix C such that AC = I, then A is actually
invertible; so B = A 1 and C = A 1 . These facts also follow from Proposition 4.14.
It is immediately verified that the set Mm,n (K) of m n matrices is a vector space under
addition of matrices and multiplication of a matrix by a scalar. Consider the m n-matrices
Ei,j = (eh k ), defined such that ei j = 1, and eh k = 0, if h 6= i or k 6= j. It is clear that every
matrix A = (ai j ) 2 Mm,n (K) can be written in a unique way as
A=
m X
n
X
ai j Ei,j .
i=1 j=1
Thus, the family (Ei,j )1im,1jn is a basis of the vector space Mm,n (K), which has dimension mn.
Remark: Definition 3.1 and Definition 3.2 also make perfect sense when K is a (commutative) ring rather than a field. In this more general setting, the framework of vector spaces
is too narrow, but we can consider structures over a commutative ring A satisfying all the
axioms of Definition 2.9. Such structures are called modules. The theory of modules is
(much) more complicated than that of vector spaces. For example, modules do not always
have a basis, and other properties holding for vector spaces usually fail for modules. When
a module has a basis, it is called a free module. For example, when A is a commutative
ring, the structure An is a module such that the vectors ei , with (ei )i = 1 and (ei )j = 0 for
j 6= i, form a basis of An . Many properties of vector spaces still hold for An . Thus, An is a
free module. As another example, when A is a commutative ring, Mm,n (A) is a free module
with basis (Ei,j )1im,1jn . Polynomials over a commutative ring also form a free module
of infinite dimension.
Square matrices provide a natural example of a noncommutative ring with zero divisors.
Example 3.1. For example, letting A, B be the 2 2-matrices
1 0
0 0
A=
, B=
,
0 0
1 0
3.1. MATRICES
59
then
AB =
1 0
0 0
0 0
0 0
=
,
1 0
0 0
BA =
0 0
1 0
1 0
0 0
=
.
0 0
1 0
and
m
X
i=1
ai j v i ,
for every j, 1 j n.
The coefficients a1j , a2j , . . . , amj of f (uj ) over the basis (v1 , . . . , vm ) form the jth column of
the matrix M (f ) shown below:
v1
v2
..
.
vm
f (u1 ) f (u2 )
0
a11
a12
B a21
a22
B
B ..
..
@ .
.
am1 am2
. . . f (un )
1
. . . a1n
. . . a2n C
C
.. C .
..
.
. A
. . . amn
The matrix M (f ) associated with the linear map f : E ! F is called the matrix of f with
respect to the bases (u1 , . . . , un ) and (v1 , . . . , vm ). When E = F and the basis (v1 , . . . , vm )
is identical to the basis (u1 , . . . , un ) of E, the matrix M (f ) associated with f : E ! E (as
above) is called the matrix of f with respect to the base (u1 , . . . , un ).
Remark: As in the remark after Definition 3.1, there is no reason to assume that the vectors
in the bases (u1 , . . . , un ) and (v1 , . . . , vm ) are ordered in any particular way. However, it is
often convenient to assume the natural ordering. When this is so, authors sometimes refer
60
to the matrix M (f ) as the matrix of f with respect to the ordered bases (u1 , . . . , un ) and
(v1 , . . . , vm ).
Then, given a linear map f : E ! F represented by the matrix M (f ) = (ai j ) w.r.t. the
bases (u1 , . . . , un ) and (v1 , . . . , vm ), by equations (1) and the definition of matrix multiplication, the equation y = f (x) correspond to the matrix equation M (y) = M (f )M (x), that
is,
0 1 0
10 1
x1
y1
a1 1 . . . a1 n
B .. C B ..
C
B
.
.
..
.. A @ ... C
@ . A=@ .
A.
ym
am 1 . . . am n
xn
Recall that
0
a1 1 a1 2
B a2 1 a2 2
B
B ..
..
@ .
.
am 1 am 2
10 1
0
1
0
1
0
1
. . . a1 n
x1
a1 1
a1 2
a1 n
B C
B a2 1 C
B a2 2 C
B a2 n C
. . . a2 n C
C B x2 C
B
C
B
C
B
C
=
x
+
x
+
+
x
C
B
C
B
C
B
C
..
..
..
..
1
2
n B .. C .
...
A
@
A
@
A
@
A
@
.
.
.
.
. A
. . . am n
xn
am 1
am 2
am n
The above notation seems reasonable, but it has the slight disadvantage that in the
expression MU ,V (f )xU , the input argument xU which is fed to the matrix MU ,V (f ) does not
appear next to the subscript U in MU ,V (f ). We could have used the notation MV,U (f ), and
some people do that. But then, we find a bit confusing that V comes before U when f maps
from the space E with the basis U to the space F with the basis V. So, we prefer to use the
notation MU ,V (f ).
Be aware that other authors such as Meyer [80] use the notation [f ]U ,V , and others such
as Dummit and Foote [32] use the notation MUV (f ), instead of MU ,V (f ). This gets worse!
You may find the notation MVU (f ) (as in Lang [67]), or U [f ]V , or other strange notations.
Let us illustrate the representation of a linear map by a matrix in a concrete situation.
Let E be the vector space R[X]4 of polynomials of degree at most 4, let F be the vector
3.1. MATRICES
61
space R[X]3 of polynomials of degree at most 3, and let the linear map be the derivative
map d: that is,
d(P + Q) = dP + dQ
d( P ) = dP,
with 2 R. We choose (1, x, x2 , x3 , x4 ) as a basis of E and (1, x, x2 , x3 ) as a basis of F .
Then, the 4 5 matrix D associated with d is obtained by expressing the derivative dxi of
each basis vector for i = 0, 1, 2, 3, 4 over the basis (1, x, x2 , x3 ). We find
0
1
0 1 0 0 0
B0 0 2 0 0 C
C
D=B
@0 0 0 3 0 A .
0 0 0 0 4
Then, if P denotes the polynomial
P = 3x4
5x3 + x2
7x + 5,
we have
dP = 12x3
15x2 + 2x
7,
as expected! The kernel (nullspace) of d consists of the polynomials of degree 0, that is, the
constant polynomials. Therefore dim(Ker d) = 1, and from
dim(E) = dim(Ker d) + dim(Im d)
(see Theorem 4.11), we get dim(Im d) = 4 (since dim(E) = 5).
For fun, let us figure out the linear map from the vector space R[X]3 to the vector space
i
R[X]4 given by integration (finding
R the primitive, or anti-derivative) of x , for i = 0, 1, 2, 3).
The 5 4 matrix S representing with respect to the same bases as before is
0
1
0 0
0
0
B1 0
0
0 C
B
C
C.
0
1/2
0
0
S=B
B
C
@0 0 1/3 0 A
0 0
0 1/4
62
We verify that DS = I4 ,
0
0
B0
B
@0
0
1
0
0
0
0
2
0
0
0
0
3
0
1
0
0 0
0
0
0 B
1
C
1 0
0
0 C B
C
B
0C B
B0
0 1/2 0
0 C
C = @0
0A B
@0 0 1/3 0 A
4
0
0 0
0 1/4
1
0
1
0
0
1
0
0C
C,
0A
1
0
0
1
0
0
2
0
0
0
0
3
0
0
0
B0
B
0C
C = B0
0A B
@0
4
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
0
1
0
0C
C
0C
C,
0A
1
2 K, we have
(A + B)C = AC + BC
A(C + D) = AC + AD
( A)C = (AC)
A( C) = (AC),
so that matrix multiplication : Mm,n (K) Mn,p (K) ! Mm,p (K) is bilinear.
Proof. (1) Every m n matrix A = (ai j ) defines the function fA : K n ! K m given by
fA (x) = Ax,
for all x 2 K n . It is immediately verified that fA is linear and that the matrix M (fA )
representing fA over the canonical bases in K n and K m is equal to A. Then, formula (4)
proves that
M (fA fB ) = M (fA )M (fB ) = AB,
3.1. MATRICES
63
so we get
M ((fA fB ) fC ) = M (fA fB )M (fC ) = (AB)C
and
M (fA (fB
fC )) = M (fA )M (fB
fC ) = A(BC),
fB )
fC = fA
(fB
fC ),
64
where M (x) is the column vector associated with the vector x and M (g(x)) is the column
vector associated with g(x), as explained in Definition 3.3.
Thus, M : Hom(E, F ) ! Mn,p is an isomorphism of vector spaces, and when p = n
and the basis (v1 , . . . , vn ) is identical to the basis (u1 , . . . , up ), M : Hom(E, E) ! Mn is an
isomorphism of rings.
Proof. That M (g(x)) = M (g)M (x) was shown just before stating the proposition, using
identity (1). The identities M (g + h) = M (g) + M (h) and M ( g) = M (g) are straightforward, and M (f g) = M (f )M (g) follows from (4) and the definition of matrix multiplication.
The mapping M : Hom(E, F ) ! Mn,p is clearly injective, and since every matrix defines a
linear map, it is also surjective, and thus bijective. In view of the above identities, it is an
isomorphism (and similarly for M : Hom(E, E) ! Mn ).
In view of Proposition 3.2, it seems preferable to represent vectors from a vector space
of finite dimension as column vectors rather than row vectors. Thus, from now on, we will
denote vectors of Rn (or more generally, of K n ) as columm vectors.
It is important to observe that the isomorphism M : Hom(E, F ) ! Mn,p given by Proposition 3.2 depends on the choice of the bases (u1 , . . . , up ) and (v1 , . . . , vn ), and similarly for the
isomorphism M : Hom(E, E) ! Mn , which depends on the choice of the basis (u1 , . . . , un ).
Thus, it would be useful to know how a change of basis aects the representation of a linear
map f : E ! F as a matrix. The following simple proposition is needed.
Proposition 3.3. Let E be a vector space, and let (u1 , . . . , un ) be aPbasis of E. For every
family (v1 , . . . , vn ), let P = (ai j ) be the matrix defined such that vj = ni=1 ai j ui . The matrix
P is invertible i (v1 , . . . , vn ) is a basis of E.
Proof. Note that we have P = M (f ), the matrix associated with the unique linear map
f : E ! E such that f (ui ) = vi . By Proposition 2.16, f is bijective i (v1 , . . . , vn ) is a basis
of E. Furthermore, it is obvious that the identity matrix In is the matrix associated with the
identity id : E ! E w.r.t. any basis. If f is an isomorphism, then f f 1 = f 1 f = id, and
by Proposition 3.2, we get M (f )M (f 1 ) = M (f 1 )M (f ) = In , showing that P is invertible
and that M (f 1 ) = P 1 .
Proposition 3.3 suggests the following definition.
Definition 3.4. Given a vector space E of dimension n, for any two bases (u1 , . . . , un ) and
(v1 , . . . , vn ) of E, let P = (ai j ) be the invertible matrix defined such that
vj =
n
X
ai j u i ,
i=1
which is also the matrix of the identity id : E ! E with respect to the bases (v1 , . . . , vn ) and
(u1 , . . . , un ), in that order . Indeed, we express each id(vj ) = vj over the basis (u1 , . . . , un ).
3.1. MATRICES
65
The coefficients a1j , a2j , . . . , anj of vj over the basis (u1 , . . . , un ) form the jth column of the
matrix P shown below:
u1
u2
..
.
v1
v2 . . .
a11 a12
B a21 a22
B
B ..
..
@ .
.
un an1 an2
vn
1
. . . a1n
. . . a2n C
C
.. C .
..
. . A
. . . ann
The matrix P is called the change of basis matrix from (u1 , . . . , un ) to (v1 , . . . , vn ).
Clearly, the change of basis matrix from (v1 , . . . , vn ) to (u1 , . . . , un ) is P 1 . Since P =
(ai,j ) is the matrix of the identity id : E ! E with respect to the bases (v1 , . . . , vn ) and
(u1 , . . . , un ), given any vector x 2 E, if x = x1 u1 + + xn un over the basis (u1 , . . . , un ) and
x = x01 v1 + + x0n vn over the basis (v1 , . . . , vn ), from Proposition 3.2, we have
10 1
0 1 0
x1
a1 1 . . . a 1 n
x01
B .. C B .. . .
.. C B .. C ,
@ . A=@ .
.
. A@ . A
xn
an 1 . . . an n
x0n
showing that the old coordinates (xi ) of x (over (u1 , . . . , un )) are expressed in terms of the
new coordinates (x0i ) of x (over (v1 , . . . , vn )).
Now we face the painful task of assigning a good notation incorporating the bases
U = (u1 , . . . , un ) and V = (v1 , . . . , vn ) into the notation for the change of basis matrix from
U to V. Because the change of basis matrix from U to V is the matrix of the identity map
idE with respect to the bases V and U in that order , we could denote it by MV,U (id) (Meyer
[80] uses the notation [I]V,U ), which we abbreviate as
PV,U .
Note that
PU ,V = PV,U1 .
Then, if we write xU = (x1 , . . . , xn ) for the old coordinates of x with respect to the basis U
and xV = (x01 , . . . , x0n ) for the new coordinates of x with respect to the basis V, we have
xU = PV,U xV ,
xV = PV,U1 xU .
The above may look backward, but remember that the matrix MU ,V (f ) takes input
expressed over the basis U to output expressed over the basis V. Consequently, PV,U takes
input expressed over the basis V to output expressed over the basis U , and xU = PV,U xV
matches this point of view!
66
Beware that some authors (such as Artin [4]) define the change of basis matrix from U
to V as PU ,V = PV,U1 . Under this point of view, the old basis U is expressed in terms of
the new basis V. We find this a bit unnatural. Also, in practice, it seems that the new basis
is often expressed in terms of the old basis, rather than the other way around.
Since the matrix P = PV,U expresses the new basis (v1 , . . . , vn ) in terms of the old basis
(u1 , . . ., un ), we observe that the coordinates (xi ) of a vector x vary in the opposite direction
of the change of basis. For this reason, vectors are sometimes said to be contravariant.
However, this expression does not make sense! Indeed, a vector in an intrinsic quantity that
does not depend on a specific basis. What makes sense is that the coordinates of a vector
vary in a contravariant fashion.
Let us consider some concrete examples of change of bases.
Example 3.2. Let E = F = R2 , with u1 = (1, 0), u2 = (0, 1), v1 = (1, 1) and v2 = ( 1, 1).
The change of basis matrix P from the basis U = (u1 , u2 ) to the basis V = (v1 , v2 ) is
1
1
P =
1 1
and its inverse is
P
1/2 1/2
.
1/2 1/2
The old coordinates (x1 , x2 ) with respect to (u1 , u2 ) are expressed in terms of the new
coordinates (x01 , x02 ) with respect to (v1 , v2 ) by
0
x1
1
1
x1
=
,
x2
1 1
x02
and the new coordinates (x01 , x02 ) with respect to (v1 , v2 ) are expressed in terms of the old
coordinates (x1 , x2 ) with respect to (u1 , u2 ) by
0
x1
1/2 1/2
x1
=
.
0
x2
1/2 1/2
x2
Example 3.3. Let E = F = R[X]3 be the set of polynomials of degree at most 3,
and consider the bases U = (1, x, x2 , x3 ) and V = (B03 (x), B13 (x), B23 (x), B33 (x)), where
B03 (x), B13 (x), B23 (x), B33 (x) are the Bernstein polynomials of degree 3, given by
B03 (x) = (1
x)3
x)2 x
x)x2
B33 (x) = x3 .
0
0
3
3
1
0
0C
C.
0A
1
67
1
1 0
0 0
B1 1/3 0 0C
C
=B
@1 2/3 1/3 0A .
1 1
1 1
Therefore, the coordinates of the polynomial 2x3 x + 1 over the basis V are
10 1
0 1 0
1
1 0
0 0
1
B2/3C B1 1/3 0 0C B 1C
B C=B
CB C
@1/3A @1 2/3 1/3 0A @ 0 A ,
2
1 1
1 1
2
and so
2
1
x + 1 = B03 (x) + B13 (x) + B23 (x) + 2B33 (x).
3
3
Our next example is the Haar wavelets, a fundamental tool in signal processing.
2x3
3.2
1
1
1C
C
0A
0
1
0
B0C
C
w4 = B
@ 1 A.
1
Note that these vectors are pairwise orthogonal, so they are indeed linearly independent
(we will see this in a later chapter). Let W = {w1 , w2 , w3 , w4 } be the Haar basis, and let
U = {e1 , e2 , e3 , e4 } be the canonical basis of R4 . The change of basis matrix W = PW,U from
U to W is given by
0
1
1 1
1
0
B1 1
1 0C
C,
W =B
@1
1 0
1A
1
1 0
1
and we easily find that the inverse of W is given by
0
10
1
1/4 0
0
0
1 1
1
1
B 0 1/4 0
B
0 C
1
1C
C B1 1
C.
W 1=B
@ 0
0 1/2 0 A @1
1 0
0A
0
0
0 1/2
0 0
1
1
68
So, the vector v = (6, 4, 5, 1) over the basis U becomes c = (c1 , c2 , c3 , c4 ) over the Haar basis
W, with
0 1 0
10
c1
1/4 0
0
0
1
Bc2 C B 0 1/4 0
C
B
0 C B1
B C=B
@c 3 A @ 0
0 1/2 0 A @1
c4
0
0
0 1/2
0
1
1
1
0
1
1
0
1
10 1 0 1
1
6
4
C
B
C
B
1 C B4 C B1 C
C.
=
0 A @5 A @1 A
1
1
2
v1 + v2 + v3 + v4
4
is the overall average value of the signal v. The coefficient c1 corresponds to the background
of the image (or of the sound). Then, c2 gives the coarse details of v, whereas, c3 gives the
details in the first part of v, and c4 gives the details in the second half of v.
Reconstruction of the signal consists in computing v = W c. The trick for good compression is to throw away some of the coefficients of c (set them to zero), obtaining a compressed
signal b
c, and still retain enough crucial information so that the reconstructed signal vb = W b
c
looks almost as good as the original signal v. Thus, the steps are:
input v
! coefficients c = W
! compressed b
c
! compressed vb = W b
c.
It turns out that there is a faster way to find c = W 1 v, without actually using W
This has to do with the multiscale nature of Haar wavelets.
Given the original signal v = (6, 4, 5, 1) shown in Figure 3.1, we compute averages and
half dierences obtaining Figure 3.2. We get the coefficients c3 = 1 and c4 = 2. Then, again
we compute averages and half dierences obtaining Figure 3.3. We get the coefficients c1 = 4
and c2 = 1.
69
1
4
A pattern is begining to emerge. It looks like the second Haar basis vector w2 is the mother
of all the other basis vectors, except the first, whose purpose is to perform averaging. Indeed,
in general, given
w2 = (1, . . . , 1, 1, . . . , 1),
|
{z
}
2n
the other Haar basis vectors are obtained by a scaling and shifting process. Starting from
w2 , the scaling process generates the vectors
w3 , w5 , w9 , . . . , w2j +1 , . . . , w2n
1 +1
70
such that w2j+1 +1 is obtained from w2j +1 by forming two consecutive blocks of 1 and 1
of half the size of the blocks in w2j +1 , and setting all other entries to zero. Observe that
w2j +1 has 2j blocks of 2n j elements. The shifting process, consists in shifting the blocks of
1 and 1 in w2j +1 to the right by inserting a block of (k 1)2n j zeros from the left, with
0 j n 1 and 1 k 2j . Thus, we obtain the following formula for w2j +k :
8
1 i (k 1)2n j
>
>0
>
<1
(k 1)2n j + 1 i (k 1)2n j + 2n j 1
w2j +k (i) =
>
1 (k 1)2n j + 2n j 1 + 1 i k2n j
>
>
:
0
k2n j + 1 i 2n ,
with 0 j n
1 and 1 k 2j . Of course
w1 = (1, . . . , 1) .
| {z }
2n
The above formulae look a little better if we change our indexing slightly by letting k vary
from 0 to 2j 1 and using the index j instead of 2j . In this case, the Haar basis is denoted
by
w1 , h00 , h10 , h11 , h20 , h21 , h22 , h23 , . . . , hjk , . . . , hn2n 11 1 ,
and
with 0 j n
8
0
>
>
>
<1
hjk (i) =
>
1
>
>
:
0
1 and 0 k 2j
1 i k2n j
k2n j + 1 i k2n j + 2n j 1
k2n j + 2n j 1 + 1 i (k + 1)2n
(k + 1)2n j + 1 i 2n ,
1.
It turns out that there is a way to understand these formulae better if we interpret a
vector u = (u1 , . . . , um ) as a piecewise linear function over the interval [0, 1). We define the
function plf(u) such that
i
plf(u)(x) = ui ,
1
m
x<
i
, 1 i m.
m
In words, the function plf(u) has the value u1 on the interval [0, 1/m), the value u2 on
[1/m, 2/m), etc., and the value um on the interval [(m 1)/m, 1). For example, the piecewise
linear function associated with the vector
u = (2.4, 2.2, 2.15, 2.05, 6.8, 2.8, 1.1, 1.3)
is shown in Figure 3.4.
Then, each basis vector hjk corresponds to the function
j
k
= plf(hjk ).
71
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
j
k
= (2j x
k),
0jn
1, 0 k 2j
1.
The above formula makes it clear that kj is obtained from by scaling and shifting. The
function 00 = plf(w1 ) is the piecewise linear function with the constant value 1 on [0, 1), and
the functions kj together with '00 are known as the Haar wavelets.
72
Rather than using W 1 to convert a vector u to a vector c of coefficients over the Haar
basis, and the matrix W to reconstruct the vector u from its Haar coefficients c, we can use
faster algorithms that use averaging and dierencing.
If c is a vector of Haar coefficients of dimension 2n , we compute the sequence of vectors
u , u1 , . . ., un as follows:
0
u0 = c
uj+1 = uj
uj+1 (2i
1) = uj (i) + uj (2j + i)
uj (2j + i),
1) + cj+1 (2i))/2
1)
cj+1 (2i))/2,
We leave it as an exercise to implement the above programs in Matlab using two variables
u and c, and by building iteratively 2j . Here is an example of the conversion of a vector to
its Haar coefficients for n = 3.
Given the sequence u = (31, 29, 23, 17, 6, 8, 2, 4), we get the sequence
c3
c2
c1
c0
so c = (10, 15, 5, 2, 1, 3, 1, 1). Conversely, given c = (10, 15, 5, 2, 1, 3, 1, 1), we get the
sequence
u0
u1
u2
u3
= (10, 15, 5, 2, 1, 3, 1, 1)
= (25, 5, 5, 2, 1, 3, 1, 1)
= (30, 20, 7, 3, 1, 3, 1, 1)
= (31, 29, 23, 17, 6, 8, 2, 4),
73
There is another recursive method for constucting the Haar matrix Wn of dimension 2n
that makes it clearer why the above algorithms are indeed correct (which nobody seems to
prove!). If we split Wn into two 2n 2n 1 matrices, then the second matrix containing the
last 2n 1 columns of Wn has a very simple structure: it consists of the vector
(1, 1, 0, . . . , 0)
|
{z
}
2n
and 2n
1
1
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
1
1
0
0
1
0
0C
C
0C
C
0C
C.
0C
C
0C
C
1A
1
Then, we form the 2n 2n 2 matrix obtained by doubling each column of odd index, which
means replacing each such column by a column in which the block of 1 is doubled and the
block of 1 is doubled. In general, given a current matrix of dimension 2n 2j , we form a
2n 2j 1 matrix by doubling each column of odd index, which means that we replace each
such column by a column in which the block of 1 is doubled and the block of 1 is doubled.
We repeat this process n 1 times until we get the vector
(1, . . . , 1, 1, . . . , 1) .
|
{z
}
2n
The first vector is the averaging vector (1, . . . , 1). This process is illustrated below for n = 3:
| {z }
2n
0
B
B
B
B
B
B
B
B
B
B
@
1
0
1
B
1C
C
B
B
1C
C
B
B
1C
C (= B
C
B
1C
B
B
1C
C
B
A
@
1
1
1
1
1
1
0
0
0
0
1
0
0
B
0C
C
B
B
0C
C
B
B
0C
C (= B
C
B
1C
B
B
1C
C
B
A
@
1
1
1
1
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
1
1
0
0
1
0
0C
C
0C
C
0C
C
0C
C
0C
C
1A
1
74
1
B1
B
B1
B
B1
W3 = B
B1
B
B1
B
@1
1
1
1
1
1
1
1
1
1
1
1
1
1
0
0
0
0
0
0
0
0
1
1
1
1
1
1
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
1
1
0
0
1
0
0C
C
0C
C
0C
C.
0C
C
0C
C
1A
1
Observe that the right block (of size 2n 2n 1 ) shows clearly how the detail coefficients
in the second half of the vector c are added and subtracted to the entries in the first half of
the partially reconstructed vector after n 1 steps.
An important and attractive feature of the Haar basis is that it provides a multiresolution analysis of a signal. Indeed, given a signal u, if c = (c1 , . . . , c2n ) is the vector of its
Haar coefficients, the coefficients with low index give coarse information about u, and the
coefficients with high index represent fine information. For example, if u is an audio signal
corresponding to a Mozart concerto played by an orchestra, c1 corresponds to the background noise, c2 to the bass, c3 to the first cello, c4 to the second cello, c5 , c6 , c7 , c7 to the
violas, then the violins, etc. This multiresolution feature of wavelets can be exploited to
compress a signal, that is, to use fewer coefficients to represent it. Here is an example.
Consider the signal
u = (2.4, 2.2, 2.15, 2.05, 6.8, 2.8, 1.1, 1.3),
whose Haar transform is
c = (2, 0.2, 0.1, 3, 0.1, 0.05, 2, 0.1).
The piecewise-linear curves corresponding to u and c are shown in Figure 3.6. Since some of
the coefficients in c are small (smaller than or equal to 0.2) we can compress c by replacing
them by 0. We get
c2 = (2, 0, 0, 3, 0, 0, 2, 0),
and the reconstructed signal is
u2 = (2, 2, 2, 2, 7, 3, 1, 1).
The piecewise-linear curves corresponding to u2 and c2 are shown in Figure 3.7.
75
6
2.5
3
1.5
0
0.5
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.6
0.7
0.8
0.9
6
2.5
5
2
1.5
1
0.5
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.1
0.2
0.3
0.4
0.5
76
0.6
0.6
0.4
0.4
0.2
0.2
0
0
0.2
0.2
0.4
0.4
0.6
0.6
0.8
0.8
7
4
7
4
x 10
x 10
0.6
1
0.8
0.4
0.6
0.2
0.4
0.2
0
0
0.2
0.2
0.4
0.4
0.6
0.6
0.8
1
0.8
7
4
x 10
x 10
Figure 3.9: The compressed signal handel and its Haar transform
matrices (even rectangular) without any extra eort! This allows for the compression of
digital images. But first, we address the issue of normalization of the Haar coefficients. As
we observed earlier, the 2n 2n matrix Wn of Haar basis vectors has orthogonal columns,
but its columns do not have unit length. As a consequence, Wn> is not the inverse of Wn ,
but rather the matrix
Wn 1 = Dn Wn>
with Dn = diag 2
, |{z}
2 n,2
|
20
(n 1)
,2
{z
21
(n 1)
,2
} |
(n 2)
,...,2
{z
22
Hn = Wn Dn2
, . . . , 2 1, . . . , 2 1 .
}
|
{z
}
(n 2)
2n
21
22
77
1
2
1
,...,2 2 .
{z
}
2n
p
1) + cj+1 (2i))/ 2
p
1) cj+1 (2i))/ 2.
p
Note that things are now more symmetric, at the expense of a division by 2. However, for
long vectors, it turns out that these algorithms are numerically more stable.
p
Remark:pSome authors (for example, Stollnitz, Derose and Salesin [103]) rescale c by 1/ 2n
and u by 2n . This is because the norm of the basis functions kj is not equal to 1 (under
R1
the inner product hf, gi = 0 f (t)g(t)dt). The normalized basis functions are the functions
p
2j kj .
Let us now explain the 2D version of the Haar transform. We describe the version using
the matrix Wn , the method using Hn being identical (except that Hn 1 = Hn> , but this does
not hold for Wn 1 ). Given a 2m 2n matrix A, we can first convert the rows of A to their
Haar coefficients using the Haar transform Wn 1 , obtaining a matrix B, and then convert the
columns of B to their Haar coefficients, using the matrix Wm 1 . Because columns and rows
are exchanged in the first step,
B = A(Wn 1 )> ,
and in the second step C = Wm 1 B, thus, we have
C = Wm 1 A(Wn 1 )> = Dm Wm> AWn Dn .
In the other direction, given a matrix C of Haar coefficients, we reconstruct the matrix A
(the image) by first applying Wm to the columns of C, obtaining B, and then Wn> to the
rows of B. Therefore
A = Wm CWn> .
Of course, we dont actually have to invert Wm and Wn and perform matrix multiplications.
We just have to use our algorithms using averaging and dierencing. Here is an example.
78
is the 8 8 matrix
2
55
47
26
34
23
15
58
3
54
46
27
35
22
14
59
61
12
20
37
29
44
52
5
60
13
21
36
28
45
53
4
6
51
43
30
38
19
11
62
0
0
4
4
27
11
5
21
59.5
13.5
21.5
35.5
27.5
45.5
53.5
3.5
7
50
42
31
39
18
10
63
1
57
16C
C
24C
C
33C
C,
25C
C
48C
C
56A
1
0
0
0
0
4 4
4 4
25 23
9
7
7
9
23 25
1
0
0 C
C
4C
C
4C
C.
21C
C
5 C
C
11 A
27
As we can see, C has a more zero entries than A; it is a compressed version of A. We can
further compress C by setting to 0 all entries of absolute value at most 0.5. Then, we get
0
1
32.5 0 0 0 0
0
0
0
B 0 0 0 0 0
0
0
0 C
B
C
B 0 0 0 0 4
C
4
4
4
B
C
B 0 0 0 0 4
4 4
4C
B
C.
C2 = B
C
0
0
0
0
27
25
23
21
B
C
B 0 0 0 0
11 9
7 5 C
B
C
@ 0 0 0 0
5
7
9 11 A
0 0 0 0 21
23 25
27
61.5
11.5
19.5
37.5
29.5
43.5
51.5
5.5
5.5
51.5
43.5
29.5
37.5
19.5
11.5
61.5
7.5
49.5
41.5
31.5
39.5
17.5
9.5
63.5
1
57.5
15.5C
C
23.5C
C
33.5C
C,
25.5C
C
47.5C
C
55.5A
1.5
79
If we use the normalized matrices Hm and Hn , then the equations relating the image
80
A = Hm CHn> .
The Haar transform can also be used to send large images progressively over the internet.
Indeed, we can start sending the Haar coefficients of the matrix C starting from the coarsest
coefficients (the first column from top down, then the second column, etc.) and at the
receiving end we can start reconstructing the image as soon as we have received enough
data.
Observe that instead of performing all rounds of averaging and dierencing on each row
and each column, we can perform partial encoding (and decoding). For example, we can
perform a single round of averaging and dierencing for each row and each column. The
result is an image consisting of four subimages, where the top left quarter is a coarser version
of the original, and the rest (consisting of three pieces) contain the finest detail coefficients.
We can also perform two rounds of averaging and dierencing, or three rounds, etc. This
process is illustrated on the image shown in Figure 3.12. The result of performing one round,
two rounds, three rounds, and nine rounds of averaging is shown in Figure 3.13. Since our
images have size 512 512, nine rounds of averaging yields the Haar transform, displayed as
the image on the bottom right. The original image has completely disappeared! We leave it
as a fun exercise to modify the algorithms involving averaging and dierencing to perform
k rounds of averaging/dierencing. The reconstruction algorithm is a little tricky.
A nice and easily accessible account of wavelets and their uses in image processing and
computer graphics can be found in Stollnitz, Derose and Salesin [103]. A very detailed
account is given in Strang and and Nguyen [106], but this book assumes a fair amount of
background in signal processing.
We can find easily a basis of 2n 2n = 22n vectors wij (2n 2n matrices) for the linear
map that reconstructs an image from its Haar coefficients, in the sense that for any matrix
C of Haar coefficients, the image matrix A is given by
n
A=
2 X
2
X
cij wij .
i=1 j=1
C=
2 X
2
X
i=1 j=1
aij hij .
81
If the columns of W
We leave it as exercise to compute the bases (wij ) and (hij ) for n = 2, and to display the
corresponding images using the command imagesc.
82
Figure 3.13: Haar tranforms after one, two, three, and nine rounds of averaging
3.3
83
The eect of a change of bases on the representation of a linear map is described in the
following proposition.
Proposition 3.4. Let E and F be vector spaces, let U = (u1 , . . . , un ) and U 0 = (u01 , . . . , u0n )
0
be two bases of E, and let V = (v1 , . . . , vm ) and V 0 = (v10 , . . . , vm
) be two bases of F . Let
0
P = PU 0 ,U be the change of basis matrix from U to U , and let Q = PV 0 ,V be the change of
basis matrix from V to V 0 . For any linear map f : E ! F , let M (f ) = MU ,V (f ) be the matrix
associated to f w.r.t. the bases U and V, and let M 0 (f ) = MU 0 ,V 0 (f ) be the matrix associated
to f w.r.t. the bases U 0 and V 0 . We have
M 0 (f ) = Q 1 M (f )P,
or more explicitly
MU 0 ,V 0 (f ) = PV 01,V MU ,V (f )PU 0 ,U = PV,V 0 MU ,V (f )PU 0 ,U .
Proof. Since f : E ! F can be written as f = idF f idE , since P is the matrix of idE
w.r.t. the bases (u01 , . . . , u0n ) and (u1 , . . . , un ), and Q 1 is the matrix of idF w.r.t. the bases
0
(v1 , . . . , vm ) and (v10 , . . . , vm
), by Proposition 3.2, we have M 0 (f ) = Q 1 M (f )P .
As a corollary, we get the following result.
Corollary 3.5. Let E be a vector space, and let U = (u1 , . . . , un ) and U 0 = (u01 , . . . , u0n ) be
two bases of E. Let P = PU 0 ,U be the change of basis matrix from U to U 0 . For any linear
map f : E ! E, let M (f ) = MU (f ) be the matrix associated to f w.r.t. the basis U , and let
M 0 (f ) = MU 0 (f ) be the matrix associated to f w.r.t. the basis U 0 . We have
M 0 (f ) = P
M (f )P,
or more explicitly,
MU 0 (f ) = PU 01,U MU (f )PU 0 ,U = PU ,U 0 MU (f )PU 0 ,U .
Example 3.4. Let E = R2 , U = (e1 , e2 ) where e1 = (1, 0) and e2 = (0, 1) are the canonical
basis vectors, let V = (v1 , v2 ) = (e1 , e1 e2 ), and let
2 1
A=
.
0 1
The change of basis matrix P = PV,U from U to V is
1 1
P =
,
0
1
84
= P.
Therefore, in the basis V, the matrix representing the linear map f defined by A is
0
A =P
AP = P AP =
1
0
1
1
2 1
1
0 1
0
1
1
2 0
0 1
= D,
a diagonal matrix. Therefore, in the basis V, it is clear what the action of f is: it is a stretch
by a factor of 2 in the v1 direction and it is the identity in the v2 direction. Observe that v1
and v2 are not orthogonal.
What happened is that we diagonalized the matrix A. The diagonal entries 2 and 1 are
the eigenvalues of A (and f ) and v1 and v2 are corresponding eigenvectors. We will come
back to eigenvalues and eigenvectors later on.
The above example showed that the same linear map can be represented by dierent
matrices. This suggests making the following definition:
Definition 3.5. Two nn matrices A and B are said to be similar i there is some invertible
matrix P such that
B=P
AP.
It is easily checked that similarity is an equivalence relation. From our previous considerations, two n n matrices A and B are similar i they represent the same linear map with
respect to two dierent bases. The following surprising fact can be shown: Every square
matrix A is similar to its transpose A> . The proof requires advanced concepts than we will
not discuss in these notes (the Jordan form, or similarity invariants).
If U = (u1 , . . . , un ) and V = (v1 , . . . , vn ) are two bases of E, the change of basis matrix
P = PV,U
1
a11 a12 a1n
B a21 a22 a2n C
C
B
= B ..
..
.. C
.
.
@ .
.
.
. A
an1 an2 ann
from (u1 , . . . , un ) to (v1 , . . . , vn ) is the matrix whose jth column consists of the coordinates
of vj over the basis (u1 , . . . , un ), which means that
vj =
n
X
i=1
aij ui .
85
0 1
v1
B .. C
It is natural to extend the matrix notation and to express the vector @ . A in E n as the
vn
0 1
u1
B .. C
product of a matrix times the vector @ . A in E n , namely as
un
0 1 0
10 1
u1
v1
a11 a21 an1
B v2 C B a12 a22 an2 C B u2 C
B C B
CB C
B .. C = B ..
..
.. C B .. C ,
.
.
@.A @ .
.
.
. A@ . A
vn
a1n a2n ann
un
but notice that the matrix involved is not P , but its transpose P > .
un
that is,
vi =
n
X
aij uj ,
j=1
n
X
x i ui =
i=1
then
n
X
yk vk ,
k=1
1
0 1
x1
y1
B .. C
> B .. C
@ . A = A @ . A,
xn
and so
0 1
y1
B .. C
>
@ . A = (A )
yn
yn
1
x1
1 B .. C
@ . A.
xn
un
wn
vn
86
so
1
0 1
0 1
w1
u1
u1
B .. C
> > B .. C
> B .. C
@ . A = Q P @ . A = (P Q) @ . A ,
wn
un
un
which means that the change of basis matrix PW,U from U to W is P Q. This proves that
PW,U = PV,U PW,V .
3.4
Summary
The main concepts and results of this chapter are listed below:
The representation of linear maps by matrices.
The vector space of linear maps HomK (E, F ).
The vector space Mm,n (K) of m n matrices over the field K; The ring Mn (K) of
n n matrices over the field K.
Column vectors, row vectors.
Matrix operations: addition, scalar multiplication, multiplication.
The matrix representation mapping M : Hom(E, F ) ! Mn,p and the representation
isomorphism (Proposition 3.2).
Haar basis vectors and a glimpse at Haar wavelets.
Change of basis matrix and Proposition 3.4.
Chapter 4
Direct Sums, The Dual Space, Duality
4.1
Before considering linear forms and hyperplanes, we define the notion of direct sum and
prove some simple propositions.
There is a subtle point, which is that if we attempt to
`
define the direct sum E F of two vector spaces using the cartesian product E F , we
dont
ordered pairs, but we want
` quite get
` the right notion because elements of E F are `
E F = F E. Thus, we want to think of the elements of E F as unordrered pairs of
elements. It is possible to do so by considering the direct sum of a family (Ei )i2{1,2} , and
more generally of a family (Ei )i2I . For simplicity, we begin by considering the case where
I = {1, 2}.
Definition 4.1.
` Given a family (Ei )i2{1,2} of two vector spaces, we define the (external)
direct sum E1 E2 (or coproduct) of the family (Ei )i2{1,2} as the set
E1
with addition
`
`
We define the injections in1 : E1 ! E1 E2 and in2 : E2 ! E1 E2 as the linear maps
defined such that,
in1 (u) = {h1, ui, h2, 0i},
and
in2 (v) = {h1, 0i, h2, vi}.
87
88
a
E1 = {{h2, vi, h1, ui} | v 2 E2 , u 2 E1 } = E1
E2 .
`
Thus, every member {h1, ui, h2, vi} of E1 E2 can be viewed as an unordered pair consisting
of the two vectors u and v, tagged with the index 1 and 2, respectively.
`
Q
Remark: In fact, E1 E2 is just the product i2{1,2} Ei of the family (Ei )i2{1,2} .
E2
This is not to be confused with the cartesian product E1 E2 . The vector space E1 E2
is the set of all ordered pairs hu, vi, where u 2 E1 , and v 2 E2 , with addition and
multiplication by a scalar defined such that
`
Proposition 4.1. Given any two vector spaces, E1 and E2 , the set E1 E2 is a vector
space. For every`pair of linear maps, f : E1 ! G and g : E2 ! G, there is a unique linear
map, f + g : E1 E2 ! G, such that (f + g) in1 = f and (f + g) in2 = g, as in the
following diagram:
E1 PPP
PPP
PPPf
PPP
PPP
P'
`
f +g
/
E1 O E2
n7
n
n
n
nnn
in2
nnng
n
n
n
nnn
in1
E2
Proof. Define
(f + g)({h1, ui, h2, vi}) = f (u) + g(v),
for every u 2 E1 and v 2 E2 . It is immediately verified that f + g is the unique linear map
with the required properties.
`
We `
already noted that E1 `E2 is in bijection with E1 E2 . If we define the projections
1 : E1 E2 ! E1 and 2 : E1 E2 ! E2 , such that
1 ({h1, ui, h2, vi}) = u,
and
we have the following proposition.
89
Proposition 4.2. Given any two vector spaces, E1 and E2 , for every pair `
of linear maps,
f : D ! E1 and g : D ! E2 , there is a unique linear map, f g : D ! E1 E2 , such that
1 (f g) = f and 2 (f g) = g, as in the following diagram:
7 E1
nnn O
n
n
f nn
n
1
nnn
n
n
n
`
n
f g
n
/ E1
E2
PPP
PPP
PPP
P
2
g PPPP
PP(
E2
Proof. Define
(f g)(w) = {h1, f (w)i, h2, g(w)i},
for every w 2 D. It is immediately verified that f g is the unique linear map with the
required properties.
Remark: It is a peculiarity of linear algebra that direct sums and products of finite families
are isomorphic. However, this is no longer true for products and sums of infinite families.
When U, V are subspaces
of a vector space E, letting i1 : U ! E and i2 : V ! E be the
`
inclusion maps, if U V is isomomorphic to E under the map i1 +`i2 given by Proposition
4.1, we say that E is a direct
` sum of U and V , and we write E = U V (with a slight abuse
of notation, since E and U V are only isomorphic). It is also convenient to define the sum
U1 + + Up and the internal direct sum U1 Up of any number of subspaces of E.
Definition 4.2. Given p 2 vector spaces E1 , . . . , Ep , the product F = E1 Ep can
be made into a vector space by defining addition and scalar multiplication as follows:
(u1 , . . . , up ) + (v1 , . . . , vp ) = (u1 + v1 , . . . , up + vp )
(u1 , . . . , up ) = ( u1 , . . . , up ),
for all ui , vi 2 Ei and all 2 K. With the above addition and multiplication, the vector
space F = E1 Ep is called the direct product of the vector spaces E1 , . . . , Ep .
As a special case, when E1 = = Ep = K, we find again the vector space F = K p .
The projection maps pri : E1 Ep ! Ei given by
pri (u1 , . . . , up ) = ui
are clearly linear. Similarly, the maps ini : Ei ! E1 Ep given by
ini (ui ) = (0, . . . , 0, ui , 0, . . . , 0)
90
are injective and linear. If dim(Ei ) = ni and if (ei1 , . . . , eini ) is a basis of Ei for i = 1, . . . , p,
then it is easy to see that the n1 + + np vectors
(e11 , 0, . . . , 0),
..
.
(e1n1 , 0, . . . , 0),
..
.
...,
..
.
with ui 2 Ui for i = 1, . . . , p. It is clear that this map is linear, and so its image is a subspace
of E denoted by
U1 + + Up
and called the sum of the subspaces U1 , . . . , Up . It is immediately verified that U1 + + Up
is the smallest subspace of E containing U1 , . . . , Up . This also implies that U1 + + Up does
not depend on the order of the factors Ui ; in particular,
U1 + U2 = U2 + U1 .
if
If the map a is injective, then Ker a = 0, which means that if ui 2 Ui for i = 1, . . . , p and
u1 + + up = 0
Up .
91
u2 , u1 2 U1 , u2 2 U2 ,
A}
are subspaces of Mn , and that S(n) \ Skew(n) = (0). Observe that for any matrix A 2 Mn ,
the matrix H(A) = (A + A> )/2 is symmetric and the matrix S(A) = (A A> )/2 is skewsymmetric. Since
A + A> A A>
A = H(A) + S(A) =
+
,
2
2
we see that Mn = S(n) + Skew(n), and since S(n) \ Skew(n) = (0), we have the direct sum
Mn = S(n)
Skew(n).
Remark: The vector space Skew(n) of skew-symmetric matrices is also denoted by so(n).
It is the Lie algebra of the group SO(n).
Proposition 4.3 can be generalized to any p
2 subspaces at the expense of notation.
The proof of the following proposition is left as an exercise.
Proposition 4.4. Given any vector space E and any p
lowing properties are equivalent:
X
p
j=1,j6=i
Uj
= (0),
i = 1, . . . , p.
92
(3) We have
Ui \
X
i 1
Uj
j=1
= (0),
i = 2, . . . , p.
Up ,
we have
Proposition 4.5. If E is any vector space, for any (finite-dimensional) subspaces U1 , . . .,
Up of E, we have
dim(U1 Up ) = dim(U1 ) + + dim(Up ).
If E is a direct sum
E = U1
Up ,
Skew(n),
A + A>
,
2
A>
A
2
93
Up .
We also have the following proposition characterizing idempotent linear maps whose proof
is also left as an exercise.
Proposition 4.7. For every vector space E, if f : E ! E is an idempotent linear map, i.e.,
f f = f , then we have a direct sum
E = Ker f
Im f,
i2I
94
such that, (
i2I
i2I
`
Remark: When Ei = E, for all i 2 I, we denote i2I Ei by E (I) . In particular, when
Ei = K, for all i 2 I, we find the vector space K (I) of Definition 2.13.
We also have the following basic proposition about injective or surjective linear maps.
Proposition 4.9. Let E and F be vector spaces, and let f : E ! F be a linear map. If
f : E ! F is injective, then there is a surjective linear map r : F ! E called a retraction,
such that r f = idE . If f : E ! F is surjective, then there is an injective linear map
s : F ! E called a section, such that f s = idF .
Proof. Let (ui )i2I be a basis of E. Since f : E ! F is an injective linear map, by Proposition
2.16, (f (ui ))i2I is linearly independent in F . By Theorem 2.9, there is a basis (vj )j2J of F ,
where I J, and where vi = f (ui ), for all i 2 I. By Proposition 2.16, a linear map r : F ! E
can be defined such that r(vi ) = ui , for all i 2 I, and r(vj ) = w for all j 2 (J I), where w
is any given vector in E, say w = 0. Since r(f (ui )) = ui for all i 2 I, by Proposition 2.16,
we have r f = idE .
Now, assume that f : E ! F is surjective. Let (vj )j2J be a basis of F . Since f : E ! F
is surjective, for every vj 2 F , there is some uj 2 E such that f (uj ) = vj . Since (vj )j2J is a
basis of F , by Proposition 2.16, there is a unique linear map s : F ! E such that s(vj ) = uj .
Also, since f (s(vj )) = vj , by Proposition 2.16 (again), we must have f s = idF .
The converse of Proposition 4.9 is obvious. We now have the following fundamental
Proposition.
Proposition 4.10. Let E, F and G, be three vector spaces, f : E ! F an injective linear
map, g : F ! G a surjective linear map, and assume that Im f = Ker g. Then, the following
properties hold. (a) For any section s : G ! F of g, we have F = Ker g Im s, and the
linear map f + s : E G ! F is an isomorphism.1
(b) For any retraction r : F ! E of f , we have F = Im f
f
Eo
1
2
Fo
Ker r.2
95
s(g(u))) = g(u)
g(s(g(u))) = g(u)
g(u) = 0.
! G
! E
! F
! 0.
The property of a short exact sequence given by Proposition 4.10 is often described by saying
f
g
that 0 ! E ! F ! G ! 0 is a (short) split exact sequence.
As a corollary of Proposition 4.10, we have the following result.
Theorem 4.11. Let E and F be vector spaces, and let f : E ! F be a linear map. Then,
E is isomorphic to Ker f Im f , and thus,
dim(E) = dim(Ker f ) + dim(Im f ) = dim(Ker f ) + rk(f ).
Proof. Consider
Ker f
i
! E
f0
! Im f,
f0
96
Proposition 4.12. Given a vector space E, if U and V are any two subspaces of E, then
dim(U ) + dim(V ) = dim(U + V ) + dim(U \ V ),
an equation known as Grassmanns relation.
Proof. Recall that U + V is the image of the linear map
a: U V ! E
given by
a(u, v) = u + v,
and that we proved earlier that the kernel Ker a of a is isomorphic to U \ V . By Theorem
4.11,
dim(U V ) = dim(Ker a) + dim(Im a),
but dim(U V ) = dim(U ) + dim(V ), dim(Ker a) = dim(U \ V ), and Im a = U + V , so the
Grassmann relation holds.
The Grassmann relation can be very useful to figure out whether two subspace have a
nontrivial intersection in spaces of dimension > 3. For example, it is easy to see that in R5 ,
there are subspaces U and V with dim(U ) = 3 and dim(V ) = 2 such that U \ V = 0; for
example, let U be generated by the vectors (1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 0, 0), and V be
generated by the vectors (0, 0, 0, 1, 0) and (0, 0, 0, 0, 1). However, we claim that if dim(U ) = 3
and dim(V ) = 3, then dim(U \ V ) 1. Indeed, by the Grassmann relation, we have
dim(U ) + dim(V ) = dim(U + V ) + dim(U \ V ),
namely
3 + 3 = 6 = dim(U + V ) + dim(U \ V ),
dim(U \ V ) = n
2.
Here is a characterization of direct sums that follows directly from Theorem 4.11.
97
Up .
98
V and E = U
W , then there is
v 2 U.
v=u
and
w0
v 0 = u0 ,
(v + v 0 ) = (u + u0 ),
w
w
v=u
v = u,
99
The notion of rank of a linear map or of a matrix is an important one, both theoretically
and practically, since it is the key to the solvability of linear equations. Recall from Definition
2.15 that the rank rk(f ) of a linear map f : E ! F is the dimension dim(Im f ) of the image
subspace Im f of F .
We have the following simple proposition.
Proposition 4.16. Given a linear map f : E ! F , the following properties hold:
(i) rk(f ) = codim(Ker f ).
(ii) rk(f ) + dim(Ker f ) = dim(E).
(iii) rk(f ) min(dim(E), dim(F )).
Proof. Since by Proposition 4.11, dim(E) = dim(Ker f ) + dim(Im f ), and by definition,
rk(f ) = dim(Im f ), we have rk(f ) = codim(Ker f ). Since rk(f ) = dim(Im f ), (ii) follows
from dim(E) = dim(Ker f ) + dim(Im f ). As for (iii), since Im f is a subspace of F , we have
rk(f ) dim(F ), and since rk(f ) + dim(Ker f ) = dim(E), we have rk(f ) dim(E).
The rank of a matrix is defined as follows.
Definition 4.5. Given a m n-matrix A = (ai j ) over the field K, the rank rk(A) of the
matrix A is the maximum number of linearly independent columns of A (viewed as vectors
in K m ).
In view of Proposition 2.10, the rank of a matrix A is the dimension of the subspace of
K generated by the columns of A. Let E and F be two vector spaces, and let (u1 , . . . , un )
be a basis of E, and (v1 , . . . , vm ) a basis of F . Let f : E ! F be a linear map, and let M (f )
be its matrix w.r.t. the bases (u1 , . . . , un ) and (v1 , . . . , vm ). Since the rank rk(f ) of f is the
dimension of Im f , which is generated by (f (u1 ), . . . , f (un )), the rank of f is the maximum
number of linearly independent vectors in (f (u1 ), . . . , f (un )), which is equal to the number
of linearly independent columns of M (f ), since F and K m are isomorphic. Thus, we have
rk(f ) = rk(M (f )), for every matrix representing f .
m
We will see later, using duality, that the rank of a matrix A is also equal to the maximal
number of linearly independent rows of A.
If U is a hyperplane, then E = U V for some subspace V of dimension 1. However, a
subspace V of dimension 1 is generated by any nonzero vector v 2 V , and thus we denote
V by Kv, and we write E = U Kv. Clearly, v 2
/ U . Conversely, let x 2 E be a vector
such that x 2
/ U (and thus, x 6= 0). We claim that E = U Kx. Indeed, since U is a
hyperplane, we have E = U Kv for some v 2
/ U (with v 6= 0). Then, x 2 E can be written
in a unique way as x = u + v, where u 2 U , and since x 2
/ U , we must have 6= 0, and
1
thus, v =
u + 1 x. Since E = U Kv, this shows that E = U + Kx. Since x 2
/ U,
100
In the next section, we shall see that hyperplanes are precisely the Kernels of nonnull
linear maps f : E ! K, called linear forms.
4.2
We already observed that the field K itself is a vector space (over itself). The vector space
Hom(E, K) of linear maps from E to the field K, the linear forms, plays a particular role.
We take a quick look at the connection between E and Hom(E, K), its dual space. As we
will see shortly, every linear map f : E ! F gives rise to a linear map f > : F ! E , and it
turns out that in a suitable basis, the matrix of f > is the transpose of the matrix of f . Thus,
the notion of dual space provides a conceptual explanation of the phenomena associated with
transposition. But it does more, because it allows us to view subspaces as solutions of sets
of linear equations and vice-versa.
Consider the following set of two linear equations in R3 ,
x
x
y+z =0
y z = 0,
and let us find out what is their set V of common solutions (x, y, z) 2 R3 . By subtracting
the second equation from the first, we get 2z = 0, and by adding the two equations, we find
that 2(x y) = 0, so the set V of solutions is given by
y=x
z = 0.
This is a one dimensional subspace of R3 . Geometrically, this is the line of equation y = x
in the plane z = 0.
Now, why did we say that the above equations are linear? This is because, as functions
of (x, y, z), both maps f1 : (x, y, z) 7! x y + z and f2 : (x, y, z) 7! x y z are linear. The
set of all such linear functions from R3 to R is a vector space; we used this fact to form linear
combinations of the equations f1 and f2 . Observe that the dimension of the subspace V
is 1. The ambient space has dimension n = 3 and there are two independent equations
f1 , f2 , so it appears that the dimension dim(V ) of the subspace V defined by m independent
equations is
dim(V ) = n m,
which is indeed a general fact.
More generally, in Rn , a linear equation is determined by an n-tuple (a1 , . . . , an ) 2 Rn ,
and the solutions of this linear equation are given by the n-tuples (x1 , . . . , xn ) 2 Rn such
that
a1 x1 + + an xn = 0;
101
1 x1
+ +
n xn ,
where i = f (ui ) 2 K, for every i, 1 i n. Thus, with respect to the basis (u1 , . . . , un ),
f (x) is a linear combination of the coordinates of x, and we can view a linear form as a
linear equation, as discussed earlier.
Given a linear form u 2 E and a vector v 2 E, the result u (v) of applying u to v is
also denoted by hu , vi. This defines a binary operation h , i : E E ! K satisfying the
following properties:
hu1 + u2 , vi = hu1 , vi + hu2 , vi
hu , v1 + v2 i = hu , v1 i + hu , v2 i
h u , vi = hu , vi
hu , vi = hu , vi.
The above identities mean that h , i is a bilinear map, since it is linear in each argument.
It is often called the canonical pairing between E and E. In view of the above identities,
given any fixed vector v 2 E, the map evalv : E ! K (evaluation at v) defined such that
evalv (u ) = hu , vi = u (v) for every u 2 E
is a linear map from E to K, that is, evalv is a linear form in E . Again, from the above
identities, the map evalE : E ! E , defined such that
evalE (v) = evalv
for every v 2 E,
102
We shall see that the map evalE is injective, and that it is an isomorphism when E has finite
dimension.
We now formalize the notion of the set V 0 of linear equations vanishing on all vectors in
a given subspace V E, and the notion of the set U 0 of common solutions of a given set
U E of linear equations. The duality theorem (Theorem 4.17) shows that the dimensions
of V and V 0 , and the dimensions of U and U 0 , are related in a crucial way. It also shows that,
in finite dimension, the maps V 7! V 0 and U 7! U 0 are inverse bijections from subspaces of
E to subspaces of E .
Definition 4.7. Given a vector space E and its dual E , we say that a vector v 2 E and a
linear form u 2 E are orthogonal if hu , vi = 0. Given a subspace V of E and a subspace U
of E , we say that V and U are orthogonal if hu , vi = 0 for every u 2 U and every v 2 V .
Given a subset V of E (resp. a subset U of E ), the orthogonal V 0 of V is the subspace V 0
of E defined such that
V 0 = {u 2 E | hu , vi = 0, for every v 2 V }
(resp. the orthogonal U 0 of U is the subspace U 0 of E defined such that
U 0 = {v 2 E | hu , vi = 0, for every u 2 U }).
The subspace V 0 E is also called the annihilator of V . The subspace U 0 E
annihilated by U E does not have a special name. It seems reasonable to call it the
linear subspace (or linear variety) defined by U .
Informally, V 0 is the set of linear equations that vanish on V , and U 0 is the set of common
zeros of all linear equations in U .
We can also define V 0 by
V 0 = {u 2 E | V Ker u }
and U 0 by
U0 =
Ker u .
u 2U
Indeed, if V1 V2 E, then for any f 2 V20 we have f (v) = 0 for all v 2 V2 , and thus
f (v) = 0 for all v 2 V1 , so f 2 V10 . Similarly, if U1 U2 E , then for any v 2 U20 , we
have f (v) = 0 for all f 2 U2 , so f (v) = 0 for all f 2 U1 , which means that v 2 U10 .
Here are some examples. Let E = M2 (R), the space of real 2 2 matrices, and let V be
the subspace of M2 (R) spanned by the matrices
0 1
1 0
0 0
,
,
.
1 0
0 0
0 1
103
We check immediately that the subspace V consists of all matrices of the form
b a
,
a c
that is, all symmetric matrices. The matrices
a11 a12
a21 a22
in V satisfy the equation
a12
a21 = 0,
and all scalar multiples of these equations, so V 0 is the subspace of E spanned by the linear
form given by u (a11 , a12 , a21 , a22 ) = a12 a21 . We have
dim(V 0 ) = dim(E)
dim(V ) = 4
3 = 1.
The above example generalizes to E = Mn (R) for any n 1, but this time, consider the
space U of linear forms asserting that a matrix A is symmetric; these are the linear forms
spanned by the n(n 1)/2 equations
aij
aji = 0,
1 i < j n;
Note there are no constraints on diagonal entries, and half of the equations
aij
aji = 0,
1 i 6= j n
are redudant. It is easy to check that the equations (linear forms) for which i < j are linearly
independent. To be more precise, let U be the space of linear forms in E spanned by the
linear forms
uij (a11 , . . . , a1n , a21 , . . . , a2n , . . . , an1 , . . . , ann ) = aij
aji ,
1 i < j n.
Then, the set U 0 of common solutions of these equations is the space S(n) of symmetric
matrices. This space has dimension
n(n + 1)
= n2
2
n(n
1)
2
= aii ,
1i<jn
1 i n.
104
It is easy to see that these linear forms are linearly independent, so dim(U ) = n(n + 1)/2.
The space U 0 of matrices A 2 Mn (R) satifying all of the above equations is clearly the space
Skew(n) of skew-symmetric matrices. The dimension of U 0 is
n(n
1)
2
= n2
n(n + 1)
.
2
called the trace of A. The subspace U 0 of E consisting of all matrices A such that tr(A) = 0
is a space of dimension n2 1. We leave it as an exercise to find a basis of this space.
The dimension equations
dim(V ) + dim(V 0 ) = dim(E)
dim(U ) + dim(U 0 ) = dim(E)
are always true (if E is finite-dimensional). This is part of the duality theorem (Theorem
4.17).
In constrast with the previous examples, given a matrix A 2 Mn (R), the equations
asserting that A> A = I are not linear constraints. For example, for n = 2, we have
a211 + a221 = 1
a221 + a222 = 1
a11 a12 + a21 a22 = 0.
Remarks:
(1) The notation V 0 (resp. U 0 ) for the orthogonal of a subspace V of E (resp. a subspace
U of E ) is not universal. Other authors use the notation V ? (resp. U ? ). However,
the notation V ? is also used to denote the orthogonal complement of a subspace V
with respect to an inner product on a space E, in which case V ? is a subspace of E
and not a subspace of E (see Chapter 10). To avoid confusion, we prefer using the
notation V 0 .
(2) Since linear forms can be viewed as linear equations (at least in finite dimension), given
a subspace (or even a subset) U of E , we can define the set Z(U ) of common zeros of
the equations in U by
Z(U ) = {v 2 E | u (v) = 0, for all u 2 U }.
105
Of course Z(U ) = U 0 , but the notion Z(U ) can be generalized to more general kinds
of equations, namely polynomial equations. In this more general setting, U is a set of
polynomials in n variables with coefficients in K (where n = dim(E)). Sets of the form
Z(U ) are called algebraic varieties. Linear forms correspond to the special case where
homogeneous polynomials of degree 1 are considered.
If V is a subset of E, it is natural to associate with V the set of polynomials in
K[X1 , . . . , Xn ] that vanish on V . This set, usually denoted I(V ), has some special
properties that make it an ideal . If V is a linear subspace of E, it is natural to restrict
our attention to the space V 0 of linear forms that vanish on V , and in this case we
identify I(V ) and V 0 (although technically, I(V ) is no longer an ideal).
For any arbitrary set of polynomials U K[X1 , . . . , Xn ] (resp V E) the relationship
between I(Z(U )) and U (resp. Z(I(V )) and V ) is generally not simple, even though
we always have
U I(Z(U )) (resp. V Z(I(V ))).
However, when the field K is algebraically closed, then I(Z(U )) is equal to the radical
of the ideal U , a famous result due to Hilbert known as the Nullstellensatz (see Lang
[67] or Dummit and Foote [32]). The study of algebraic varieties is the main subject
of algebraic geometry, a beautiful but formidable subject. For a taste of algebraic
geometry, see Lang [67] or Dummit and Foote [32].
The duality theorem (Theorem 4.17) shows that the situation is much simpler if we
restrict our attention to linear subspaces; in this case
U = I(Z(U )) and V = Z(I(V )).
We claim that V V 00 for every subspace V of E, and that U U 00 for every subspace
U of E .
Indeed, for any v 2 V , to show that v 2 V 00 we need to prove that u (v) = 0 for all
u 2 V 0 . However, V 0 consists of all linear forms u such that u (y) = 0 for all y 2 V ; in
particular, since v 2 V , u (v) = 0 for all u 2 V 0 , as required.
Similarly, for any u 2 U , to show that u 2 U 00 we need to prove that u (v) = 0 for
all v 2 U 0 . However, U 0 consists of all vectors v such that f (v) = 0 for all f 2 U ; in
particular, since u 2 U , u (v) = 0 for all v 2 U 0 , as required.
106
Given a vector space E and any basis (ui )i2I for E, we can associate to each ui a linear
form ui 2 E , and the ui have some remarkable properties.
Definition 4.8. Given a vector space E and any basis (ui )i2I for E, by Proposition 2.16,
for every i 2 I, there is a unique linear form ui such that
1 if i = j
ui (uj ) =
0 if i 6= j,
for every j 2 I. The linear form ui is called the coordinate form of index i w.r.t. the basis
(ui )i2I .
Given an index set I, authors often define the so called Kronecker symbol
that
1 if i = j
ij =
0 if i 6= j,
for all i, j 2 I. Then, ui (uj ) =
i j,
such
i j.
The reason for the terminology coordinate form is as follows: If E has finite dimension
and if (u1 , . . . , un ) is a basis of E, for any vector
v=
1 u1
+ +
n un ,
we have
ui (v) = ui ( 1 u1 + + n un )
= 1 ui (u1 ) + + i ui (ui ) + +
= i,
n ui (un )
since ui (uj ) = i j . Therefore, ui is the linear function that returns the ith coordinate of a
vector expressed over the basis (u1 , . . . , un ).
Given a vector space E and a subspace U of E, by Theorem 2.9, every basis (ui )i2I of U
can be extended to a basis (uj )j2I[J of E, where I \ J = ;. We have the following important
theorem adapted from E. Artin [3] (Chapter 1).
Theorem 4.17. (Duality theorem) Let E be a vector space. The following properties hold:
(a) For every basis (ui )i2I of E, the family (ui )i2I of coordinate forms is linearly independent.
(b) For every subspace V of E, we have V 00 = V .
(c) For every subspace V of finite codimension m of E, for every subspace W of E such
that E = V W (where W is of finite dimension m), for every basis (ui )i2I of E such
that (u1 , . . . , um ) is a basis of W , the family (u1 , . . . , um ) is a basis of the orthogonal
V 0 of V in E , so that
dim(V 0 ) = codim(V ).
Furthermore, we have V 00 = V .
107
i ui
= 0,
i2I
for a family ( i )i2I (of scalars in K). Since ( i )i2I has finite support, there is a finite subset
J of I such that i = 0 for all i 2 I J, and we have
X
j uj = 0.
j2J
Applying the linear form j2J j uj to each uj (j 2 J), by Definition 4.8, since ui (uj ) = 1 if
i = j and 0 otherwise, we get j = 0 for all j 2 J, that is i = 0 for all i 2 I (by definition
of J as the support). Thus, (ui )i2I is linearly independent.
(b) Clearly, we have V V 00 . If V 6= V 00 , then let (ui )i2I[J be a basis of V 00 such that
(ui )i2I is a basis of V (where I \ J = ;). Since V 6= V 00 , uj0 2 V 00 for some j0 2 J (and
thus, j0 2
/ I). Since uj0 2 V 00 , uj0 is orthogonal to every linear form in V 0 . Now, we have
uj0 (ui ) = 0 for all i 2 I, and thus uj0 2 V 0 . However, uj0 (uj0 ) = 1, contradicting the fact
that uj0 is orthogonal to every linear form in V 0 . Thus, V = V 00 .
(c) Let J = I {1, . . . , m}. Every linear form f 2 V 0 is orthogonal to every uj , for
j 2 J, and thus, f (uj ) = 0, for all j 2 J. For such a linear form f 2 V 0 , let
g = f (u1 )u1 + + f (um )um .
for every v 2 E, is a linear map, and that its kernel Ker h is precisely U 0 . Then, by
Proposition 4.11,
E Ker (h) Im h = U 0 Im h,
108
When E is of finite dimension n and (u1 , . . . , un ) is a basis of E, by part (c), the family
(u1 , . . . , un ) is a basis of the dual space E , called the dual basis of (u1 , . . . , un ).
By part (c) and (d) of theorem 4.17, the maps V 7! V 0 and U 7! U 0 , where V is
a subspace of finite codimension of E and U is a subspace of finite dimension of E , are
inverse bijections. These maps set up a duality between subspaces of finite codimension of
E, and subspaces of finite dimension of E .
One should be careful that this bijection does not extend to subspaces of E of infinite
dimension.
When E is of infinite dimension, for every basis (ui )i2I of E, the family (ui )i2I of coordinate forms is never a basis of E . It is linearly independent, but it is too small to
generate E . For example, if E = R(N) , where N = {0, 1, 2, . . .}, the map f : E ! R that
sums the nonzero coordinates of a vector in E is a linear form, but it is easy to see that it
cannot be expressed as a linear combination of coordinate forms. As a consequence, when
E is of infinite dimension, E and E are not isomorphic.
Here is another example illustrating the power of Theorem 4.17. Let E = Mn (R), and
consider the equations asserting that the sum of the entries in every row of a matrix 2 Mn (R)
is equal to the same number. We have n 1 equations
n
X
(aij
ai+1j ) = 0,
j=1
1in
1,
and it is easy to see that they are linearly independent. Therefore, the space U of linear
forms in E spanned by the above linear forms (equations) has dimension n 1, and the
space U 0 of matrices sastisfying all these equations has dimension n2 n + 1. It is not so
obvious to find a basis for this space.
When E is of finite dimension n and (u1 , . . . , un ) is a basis of E, we noted that the family
(u1 , . . . , un ) is a basis of the dual space E (called the dual basis of (u1 , . . . , un )). Let us see
how the coordinates of a linear form ' over the dual basis (u1 , . . . , un ) vary under a change
of basis.
Let (u1 , . . . , un ) and (v1 , . . . , vn ) be two bases of E, and let P = (ai j ) be the change of
basis matrix from (u1 , . . . , un ) to (v1 , . . . , vn ), so that
vj =
n
X
i=1
ai j u i ,
109
ui =
bj i vj .
j=1
Since ui (uj ) =
ij
and vi (vj ) =
i j,
we get
vj (ui )
vj (
n
X
bk i vk ) = bj i ,
k=1
and thus
vj
n
X
bj i ui ,
i=1
and
ui =
n
X
ai j vj .
j=1
This means that the change of basis from the dual basis (u1 , . . . , un ) to the dual basis
(v1 , . . . , vn ) is (P 1 )> . Since
n
n
X
X
' =
' i ui =
'0i vi ,
i=1
we get
'0j
i=1
n
X
ai j ' i ,
i=1
so the new coordinates '0j are expressed in terms of the old coordinates 'i using the matrix
P > . If we use the row vectors ('1 , . . . , 'n ) and ('01 , . . . , '0n ), we have
('01 , . . . , '0n ) = ('1 , . . . , 'n )P.
Comparing with the change of basis
vj =
n
X
ai j u i ,
i=1
we note that this time, the coordinates ('i ) of the linear form ' change in the same direction
as the change of basis. For this reason, we say that the coordinates of linear forms are
covariant. By abuse of language, it is often said that linear forms are covariant, which
explains why the term covector is also used for a linear form.
Observe that if (e1 , . . . , en ) is a basis of the vector space E, then, as a linear map from
E to K, every linear form f 2 E is represented by a 1 n matrix, that is, by a row vector
( 1, . . . ,
n ),
110
with
Pn respect to the basis (e1 , . . . , en ) of E, and 1 of K, where f (ei ) = i . A vector u =
i=1 ui ei 2 E is represented by a n 1 matrix, that is, by a column vector
0 1
u1
B .. C
@ . A,
un
and the action of f on u, namely f (u), is represented by the matrix product
0 1
u1
B .. C
1
n @ . A = 1 u1 + + n un .
un
On the other hand, with respect to the dual basis (e1 , . . . , en ) of E , the linear form f is
represented by the column vector
0 1
1
B .. C
@ . A.
n
Remark: In many texts using tensors, vectors are often indexed with lower indices. If so, it
is more convenient to write the coordinates of a vector x over the basis (u1 , . . . , un ) as (xi ),
using an upper index, so that
n
X
x i ui ,
x=
i=1
vj =
n
X
aij ui
i=1
and
i
x =
n
X
aij x0j .
j=1
Dually, linear forms are indexed with upper indices. Then, it is more convenient to write the
coordinates of a covector ' over the dual basis (u 1 , . . . , u n ) as ('i ), using a lower index,
so that
n
X
' =
' i u i
i=1
u =
n
X
j=1
aij v j
n
X
111
aij 'i .
i=1
With these conventions, the index of summation appears once in upper position and once in
lower position, and the summation sign can be safely omitted, a trick due to Einstein. For
example, we can write
'0j = aij 'i
as an abbreviation for
'0j =
n
X
aij 'i .
i=1
For another example of the use of Einsteins notation, if the vectors (v1 , . . . , vn ) are linear
combinations of the vectors (u1 , . . . , un ), with
vi =
n
X
1 i n,
aij uj ,
j=1
1 i n.
Thus, in Einsteins notation, the n n matrix (aij ) is denoted by (aji ), a (1, 1)-tensor .
Beware that some authors view a matrix as a mapping between coordinates, in which
case the matrix (aij ) is denoted by (aij ).
We will now pin down the relationship between a vector space E and its bidual E .
Proposition 4.18. Let E be a vector space. The following properties hold:
(a) The linear map evalE : E ! E defined such that
evalE (v) = evalv
for all v 2 E,
112
If E is of finite dimension n, by Theorem 4.17, for every basis (u1 , . . . , un ), the family
the bidual E . This shows that dim(E) = dim(E ) = n, and since by part (a), we know
that evalE : E ! E is injective, in fact, evalE : E ! E is bijective (because an injective
map carries a linearly independent family to a linearly independent family, and in a vector
space of dimension n, a linearly independent family of n vectors is a basis, see Proposition
2.10).
(u1 , . . . , un )
When a vector space E has infinite dimension, E and its bidual E are never isomorphic.
113
Proof. The maps l' : E ! F and r' : F ! E are linear because a pairing is bilinear. If
l' (u) = 0 (the null form), then
l' (u)(v) = '(u, v) = 0 for every v 2 F ,
and since ' is nondegenerate, u = 0. Thus, l' : E ! F is injective. Similarly, r' : F ! E
is injective. When F has finite dimension n, we have seen that F and F have the same
dimension. Since l' : E ! F is injective, we have m = dim(E) dim(F ) = n. The same
argument applies to E, and thus n = dim(F ) dim(E) = m. But then, dim(E) = dim(F ),
and l' : E ! F and r' : F ! E are bijections.
When E has finite dimension, the nondegenerate pairing h , i : E E ! K yields
another proof of the existence of a natural isomorphism between E and E . Interesting
nondegenerate pairings arise in exterior algebra. We now show the relationship between
hyperplanes and linear forms.
4.3
Actually, Proposition 4.20 below follows from parts (c) and (d) of Theorem 4.17, but we feel
that it is also interesting to give a more direct proof.
Proposition 4.20. Let E be a vector space. The following properties hold:
(a) Given any nonnull linear form f 2 E , its kernel H = Ker f is a hyperplane.
(b) For any hyperplane H in E, there is a (nonnull) linear form f 2 E such that H =
Ker f .
(c) Given any hyperplane H in E and any (nonnull) linear form f 2 E such that H =
Ker f , for every linear form g 2 E , H = Ker g i g = f for some 6= 0 in K.
Proof. (a) If f 2 E is nonnull, there is some vector v0 2 E such that f (v0 ) 6= 0. Let
H = Ker f . For every v 2 E, we have
f (v)
f (v)
f v
v
=
f
(v)
f (v0 ) = f (v) f (v) = 0.
0
f (v0 )
f (v0 )
Thus,
v
f (v)
v0 = h 2 H,
f (v0 )
and
v =h+
f (v)
v0 ,
f (v0 )
114
(c) Let H be a hyperplane in E, and let f 2 E be any (nonnull) linear form such that
H = Ker f . Clearly, if g = f for some 6= 0, then H = Ker g . Conversely, assume that
H = Ker g for some nonnull linear form g . From (a), we have E = H Kv0 , for some v0
such that f (v0 ) 6= 0 and g (v0 ) 6= 0. Then, observe that
g
g (v0 )
f
f (v0 )
is a linear form that vanishes on H, since both f and g vanish on H, but also vanishes on
Kv0 . Thus, g = f , with
g (v0 )
=
.
f (v0 )
We leave as an exercise the fact that every subspace V 6= E of a vector space E, is the
intersection of all hyperplanes that contain V . We now consider the notion of transpose of
a linear map and of a matrix.
4.4
Given a linear map f : E ! F , it is possible to define a map f > : F ! E which has some
interesting properties.
Definition 4.10. Given a linear map f : E ! F , the transpose f > : F ! E of f is the
linear map defined such that
f > (v ) = v f,
for every v 2 F ,
/F
BB
BB
B v
f > (v ) B
K.
115
id>
E = idE .
Note the reversal of composition on the right-hand side of (g f )> = f > g > .
The equation (g f )> = f > g > implies the following useful proposition.
Proposition 4.21. If f : E ! F is any linear map, then the following properties hold:
(1) If f is injective, then f > is surjective.
f,
f >>
F O
evalE
evalF
F.
= hf > ('), ui
= h', f (u)i
= hevalF (f (u)), 'i
= h(evalF f )(u), 'i
= (evalF f )(u)('),
which proves that f >> evalE = evalF
f , as claimed.
116
If E and F are finite-dimensional, then evalE and then evalF are isomorphisms, so Proposition 4.22 shows that if we identify E with its bidual E and F with its bidual F then
(f > )> = f.
As a corollary of Proposition 4.22, if dim(E) is finite, then we have
Ker (f >> ) = evalE (Ker (f )).
Indeed, if E is finite-dimensional, the map evalE : E ! E is an isomorphism, so every
' 2 E is of the form ' = evalE (u) for some u 2 E, the map evalF : F ! F is injective,
and we have
f >> (') = 0 i
i
i
i
i
and
Proof. We have
hw , f (v)i = hf > (w ), vi,
for all v 2 E and all w 2 F , and thus, we have hw , f (v)i = 0 for every v 2 V , i.e.
w 2 f (V )0 , i hf > (w ), vi = 0 for every v 2 V , i f > (w ) 2 V 0 , i.e. w 2 (f > ) 1 (V 0 ),
proving that
f (V )0 = (f > ) 1 (V 0 ).
Since we already observed that E 0 = 0, letting V = E in the above identity, we obtain
that
Ker f > = (Im f )0 .
From the equation
hw , f (v)i = hf > (w ), vi,
117
we deduce that v 2 (Im f > )0 i hf > (w ), vi = 0 for all w 2 F i hw , f (v)i = 0 for all
w 2 F . Assume that v 2 (Im f > )0 . If we pick a basis (wi )i2I of F , then we have the linear
forms wi : F ! K such that wi (wj ) = ij , and since we must have hwi , f (v)i = 0 for all
i 2 I and (wi )i2I is a basis of F , we conclude that f (v) = 0, and thus v 2 Ker f (this is
because hwi , f (v)i is the coefficient of f (v) associated with the basis vector wi ). Conversely,
if v 2 Ker f , then hw , f (v)i = 0 for all w 2 F , so we conclude that v 2 (Im f > )0 .
Therefore, v 2 (Im f > )0 i v 2 Ker f ; that is,
Ker f = (Im f > )0 ,
as claimed.
The following proposition gives a natural interpretation of the dual (E/U ) of a quotient
space E/U .
Proposition 4.24. For any subspace U of a vector space E, if p : E ! E/U is the canonical
surjection onto E/U , then p> is injective and
Im(p> ) = U 0 = (Ker (p))0 .
Therefore, p> is a linear isomorphism between (E/U ) and U 0 .
Proof. Since p is surjective, by Proposition 4.21, the map p> is injective. Obviously, U =
Ker (p). Observe that Im(p> ) consists of all linear forms 2 E such that = ' p for
some ' 2 (E/U ) , and since Ker (p) = U , we have U Ker ( ). Conversely for any linear
form 2 E , if U Ker ( ), then factors through E/U as =
p as shown in the
following commutative diagram
p
E CC / E/U
CC
CC
CC
C!
K,
where
: E/U ! K is given by
(v) = (v),
v 2 E,
where v 2 E/U denotes the equivalence class of v 2 E. The map does not depend on the
representative chosen in the equivalence class v, since if v 0 = v, that is v 0 v = u 2 U , then
(v 0 ) = (v + u) = (v) + (u) = (v) + 0 = (v). Therefore, we have
Im(p> ) = {' p | ' 2 (E/U ) }
= { : E ! K | U Ker ( )}
= U 0,
which proves our result.
118
Proposition 4.24 yields another proof of part (b) of the duality theorem (theorem 4.17)
that does not involve the existence of bases (in infinite dimension).
Proposition 4.25. For any vector space E and any subspace V of E, we have V 00 = V .
Proof. We begin by observing that V 0 = V 000 . This is because, for any subspace U of E ,
we have U U 00 , so V 0 V 000 . Furthermore, V V 00 holds, and for any two subspaces
M, N of E, if M N then N 0 N 0 , so we get V 000 V 0 . Write V1 = V 00 , so that
V10 = V 000 = V 0 . We wish to prove that V1 = V .
Since V V1 = V 00 , the canonical projection p1 : E ! E/V1 factors as p1 = f
the diagram below,
E CC
p as in
/ E/V
CC
CC
f
p1 CCC
!
E/V1
where p : E ! E/V is the canonical projection onto E/V and f : E/V ! E/V1 is the
quotient map induced by p1 , with f (uE/V ) = p1 (u) = uE/V1 , for all u 2 E (since V V1 , if
u u0 = v 2 V , then u u0 = v 2 V1 , so p1 (u) = p1 (u0 )). Since p1 is surjective, so is f . We
wish to prove that f is actually an isomorphism, and for this, it is enough to show that f is
injective. By transposing all the maps, we get the commutative diagram
E dHo H
p>
(E/V )
HH
HH
HH
HH
p>
1
f>
(E/V1 ) ,
0
but by Proposition 4.24, the maps p> : (E/V ) ! V 0 and p>
1 : (E/V1 ) ! V1 are isomorphism, and since V 0 = V10 , we have the following diagram where both p> and p>
1 are
isomorphisms:
V 0 dHo H
p>
(E/V
)
O
HH
HH
HH
HH
p>
1
f>
(E/V1 ) .
p>
1 is an isomorphism. We claim that this implies that f is injective.
If f is not injective, then there is some x 2 E/V such that x 6= 0 and f (x) = 0, so
for every ' 2 (E/V1 ) , we have f > (')(x) = '(f (x)) = 0. However, there is linear form
2 (E/V ) such that (x) = 1, so 6= f > (') for all ' 2 (E/V1 ) , contradicting the fact
that f > is surjective. To find such a linear form , pick any supplement W of Kx in E/V , so
that E/V = Kx W (W is a hyperplane in E/V not containing x), and define to be zero
119
on W and 1 on x.3 Therefore, f is injective, and since we already know that it is surjective,
it is bijective. This means that the canonical map f : E/V ! E/V1 with V V1 is an
isomorphism, which implies that V = V1 = V 00 (otherwise, if v 2 V1 V , then p1 (v) = 0, so
f (p(v)) = p1 (v) = 0, but p(v) 6= 0 since v 2
/ V , and f is not injective).
The following theorem shows the relationship between the rank of f and the rank of f > .
Theorem 4.26. Given a linear map f : E ! F , the following properties hold.
(a) The dual (Im f ) of Im f is isomorphic to Im f > = f > (F ); that is,
(Im f ) Im f > .
(b) rk(f ) rk(f > ). If rk(f ) is finite, we have rk(f ) = rk(f > ).
Proof. (a) Consider the linear maps
E
! Im f
! F,
! F is injective, F
j>
! I is surjective. Since f = j
f > = (j
j>
! I is surjective, I
p>
! E is injective, and
p, we also have
(b) We already noted that part (a) of Theorem 4.17 shows that dim(E) dim(E ),
for every vector space E. Thus, dim(Im f ) dim((Im f ) ), which, by (a), shows that
rk(f ) rk(f > ). When dim(Im f ) is finite, we already observed that as a corollary of
Theorem 4.17, dim(Im f ) = dim((Im f ) ), and thus, by part (a) we have rk(f ) = rk(f > ).
If dim(F ) is finite, then there is also a simple proof of (b) that doesnt use the result of
part (a). By Theorem 4.17(c)
dim(Im f ) + dim((Im f )0 ) = dim(F ),
and by Theorem 4.11
dim(Ker f > ) + dim(Im f > ) = dim(F ).
3
Using Zorns lemma, we pick W maximal among all subspaces of E/V such that Kx \ W = (0); then,
E/V = Kx W .
120
Remarks:
1. If dim(E) is finite, following an argument of Dan Guralnik, we can also prove that
rk(f ) = rk(f > ) as follows.
We know from Proposition 4.23 applied to f > : F ! E that
Ker (f >> ) = (Im f > )0 ,
and we showed as a consequence of Proposition 4.22 that
Ker (f >> ) = evalE (Ker (f )).
It follows (since evalE is an isomorphism) that
dim((Im f > )0 ) = dim(Ker (f >> )) = dim(Ker (f )) = dim(E)
dim(Im f ),
and since
dim(Im f > ) + dim((Im f > )0 ) = dim(E),
we get
dim(Im f > ) = dim(Im f ).
2. As indicated by Dan Guralnik, if dim(E) is finite, the above result can be used to prove
that
Im f > = (Ker (f ))0 .
From
hf > ('), ui = h', f (u)i
for all ' 2 F and all u 2 E, we see that if u 2 Ker (f ), then hf > ('), ui = h', 0i = 0,
which means that f > (') 2 (Ker (f ))0 , and thus, Im f > (Ker (f ))0 . For the converse,
since dim(E) is finite, we have
dim((Ker (f ))0 ) = dim(E)
dim(Ker (f )) = dim(Im f ),
121
p,
>
j >.
Since p is surjective, p> is injective, since j is injective, j > is surjective, and since f is
>
>
bijective, f is also bijective. It follows that (E/Ker (f )) = Im(f
j > ), and we have
Im f > = Im p> .
Since p : E ! E/Ker (f ) is the canonical surjection, by Proposition 4.24 applied to U =
Ker (f ), we get
Im f > = Im p> = (Ker (f ))0 ,
as claimed.
122
>
>
f > (vi ) = a>
1 i u 1 + + aj i u j + + an i u n
>
>
over the basis (u1 , . . . , un ), which is just a>
j i = f (vi )(uj ) = hf (vi ), uj i. Since
We now can give a very short proof of the fact that the rank of a matrix is equal to the
rank of its transpose.
Proposition 4.29. Given a m n matrix A over a field K, we have rk(A) = rk(A> ).
Proof. The matrix A corresponds to a linear map f : K n ! K m , and by Theorem 4.26,
rk(f ) = rk(f > ). By Proposition 4.28, the linear map f > corresponds to A> . Since rk(A) =
rk(f ), and rk(A> ) = rk(f > ), we conclude that rk(A) = rk(A> ).
Thus, given an mn-matrix A, the maximum number of linearly independent columns is
equal to the maximum number of linearly independent rows. There are other ways of proving
this fact that do not involve the dual space, but instead some elementary transformations
on rows and columns.
Proposition 4.29 immediately yields the following criterion for determining the rank of a
matrix:
123
1
a11 a12
A = @a21 a22 A
a31 a32
a11 a12
a11 a12
a21 a22
a31 a32
a21 a22
a31 a32
is invertible. We will see in Chapter 5 that this is equivalent to the fact the determinant of
one of the above matrices is nonzero. This is not a very efficient way of finding the rank of
a matrix. We will see that there are better ways using various decompositions such as LU,
QR, or SVD.
4.5
124
(1) The column space of A, denoted by Im A or R(A); this is the subspace of Rm spanned
by the columns of A, which corresponds to the image Im f of f .
(2) The kernel or nullspace of A, denoted by Ker A or N (A); this is the subspace of Rn
consisting of all vectors x 2 Rn such that Ax = 0.
(3) The row space of A, denoted by Im A> or R(A> ); this is the subspace of Rn spanned
by the rows of A, or equivalently, spanned by the columns of A> , which corresponds
to the image Im f > of f > .
(4) The left kernel or left nullspace of A denoted by Ker A> or N (A> ); this is the kernel
(nullspace) of A> , the subspace of Rm consisting of all vectors y 2 Rm such that
A> y = 0, or equivalently, y > A = 0.
Recall that the dimension r of Im f , which is also equal to the dimension of the column
space Im A = R(A), is the rank of A (and f ). Then, some our previous results can be
reformulated as follows:
1. The column space R(A) of A has dimension r.
2. The nullspace N (A) of A has dimension n
r.
r.
The above statements constitute what Strang calls the Fundamental Theorem of Linear
Algebra, Part I (see Strang [105]).
The two statements
Ker f = (Im f > )0
Ker f > = (Im f )0
translate to
(1) The nullspace of A is the orthogonal of the row space of A.
(2) The left nullspace of A is the orthogonal of the column space of A.
The above statements constitute what Strang calls the Fundamental Theorem of Linear
Algebra, Part II (see Strang [105]).
Since vectors are represented by column vectors and linear forms by row vectors (over a
basis in E or F ), a vector x 2 Rn is orthogonal to a linear form y if
yx = 0.
4.6. SUMMARY
125
x 2 = b1
x 3 = b2
x 1 = b3
as below:
10 1 0 1
0
x1
b1
A
@
A
@
1
x 2 = b2 A ,
1
x3
b3
we see that the rows of the matrix A add up to 0. In fact, it is easy to convince ourselves that
the left nullspace of A is spanned by y = (1, 1, 1), and so the system is solvable i y > b = 0,
namely
b1 + b2 + b3 = 0.
Note that the above criterion can also be stated negatively as follows:
The equation Ax = b has no solution i there is some y 2 Rm such that A> y = 0 and
y > b 6= 0.
4.6
Summary
The main concepts and results of this chapter are listed below:
Direct products, sums, direct sums.
Projections.
126
(Proposition 4.23).
If F is finite-dimensional, then
rk(f ) = rk(f > ).
(Theorem 4.26).
4.6. SUMMARY
127
The matrix of the transpose map f > is equal to the transpose of the matrix of the map
f (Proposition 4.28).
For any m n matrix A,
rk(A) = rk(A> ).
128
Chapter 5
Determinants
5.1
This chapter contains a review of determinants and their use in linear algebra. We begin
with permutations and the signature of a permutation. Next, we define multilinear maps
and alternating multilinear maps. Determinants are introduced as alternating multilinear
maps taking the value 1 on the unit matrix (following Emil Artin). It is then shown how
to compute a determinant using the Laplace expansion formula, and the connection with
the usual definition is made. It is shown how determinants can be used to invert matrices
and to solve (at least in theory!) systems of linear equations (the Cramer formulae). The
determinant of a linear map is defined. We conclude by defining the characteristic polynomial
of a matrix (and of a linear map) and by proving the celebrated Cayley-Hamilton theorem
which states that every matrix is a zero of its characteristic polynomial (we give two proofs;
one computational, the other one more conceptual).
Determinants can be defined in several ways. For example, determinants can be defined
in a fancy way in terms of the exterior algebra (or alternating algebra) of a vector space.
We will follow a more algorithmic approach due to Emil Artin. No matter which approach
is followed, we need a few preliminaries about permutations on a finite set. We need to
show that every permutation on n elements is a product of transpositions, and that the
parity of the number of transpositions involved is an invariant of the permutation. Let
[n] = {1, 2 . . . , n}, where n 2 N, and n > 0.
Definition 5.1. A permutation on n elements is a bijection : [n] ! [n]. When n = 1, the
only function from [1] to [1] is the constant map: 1 7! 1. Thus, we will assume that n 2.
A transposition is a permutation : [n] ! [n] such that, for some i < j (with 1 i < j n),
(i) = j, (j) = i, and (k) = k, for all k 2 [n] {i, j}. In other words, a transposition
exchanges two distinct elements i, j 2 [n]. A cyclic permutation of order k (or k-cycle) is a
permutation : [n] ! [n] such that, for some sequence (i1 , i2 , . . . , ik ) of distinct elements of
[n] with 2 k n,
(i1 ) = i2 , (i2 ) = i3 , . . . , (ik 1 ) = ik , (ik ) = i1 ,
129
130
CHAPTER 5. DETERMINANTS
and (j) = j, for j 2 [n] {i1 , . . . , ik }. The set {i1 , . . . , ik } is called the domain of the cyclic
permutation, and the cyclic permutation is sometimes denoted by (i1 , i2 , . . . , ik ).
If is a transposition, clearly, = id. Also, a cyclic permutation of order 2 is a
transposition, and for a cyclic permutation of order k, we have k = id. Clearly, the
composition of two permutations is a permutation and every permutation has an inverse
which is also a permutation. Therefore, the set of permutations on [n] is a group often
denoted Sn . It is easy to show by induction that the group Sn has n! elements. We will
also use the terminology product of permutations (or transpositions), as a synonym for
composition of permutations.
The following proposition shows the importance of cyclic permutations and transpositions.
Proposition 5.1. For every n 2, for every permutation : [n] ! [n], there is a partition
of [n] into r subsets called the orbits of , with 1 r n, where each set J in this partition
is either a singleton {i}, or it is of the form
J = {i, (i), 2 (i), . . . , ri 1 (i)},
where ri is the smallest integer, such that, ri (i) = i and 2 ri n. If is not the identity,
then it can be written in a unique way (up to the order) as a composition = 1 . . .
s
of cyclic permutations with disjoint domains, where s is the number of orbits with at least
two elements. Every permutation : [n] ! [n] can be written as a nonempty composition of
transpositions.
Proof. Consider the relation R defined on [n] as follows: iR j i there is some k 1 such
that j = k (i). We claim that R is an equivalence relation. Transitivity is obvious. We
claim that for every i 2 [n], there is some least r (1 r n) such that r (i) = i.
Indeed, consider the following sequence of n + 1 elements:
hi, (i), 2 (i), . . . , n (i)i.
Since [n] only has n distinct elements, there are some h, k with 0 h < k n such that
h (i) = k (i),
and since is a bijection, this implies k h (i) = i, where 0 k h n. Thus, we proved
that there is some integer m 1 such that m (i) = i, so there is such a smallest integer r.
r
1)
( k (i)) = k(r
1)
(j).
131
Now, for every i 2 [n], the equivalence class (orbit) of i is a subset of [n], either the singleton
{i} or a set of the form
J = {i, (i), 2 (i), . . . , ri 1 (i)},
where ri is the smallest integer such that ri (i) = i and 2 ri n, and in the second case,
the restriction of to J induces a cyclic permutation i , and = 1 . . . s , where s is the
number of equivalence classes having at least two elements.
For the second part of the proposition, we proceed by induction on n. If n = 2, there are
exactly two permutations on [2], the transposition exchanging 1 and 2, and the identity.
However, id2 = 2 . Now, let n
3. If (n) = n, since by the induction hypothesis, the
restriction of to [n 1] can be written as a product of transpositions, itself can be
written as a product of transpositions. If (n) = k 6= n, letting be the transposition such
that (n) = k and (k) = n, it is clear that leaves n invariant, and by the induction
hypothesis, we have = m . . . 1 for some transpositions, and thus
=
a product of transpositions (since
m . . . 1 ,
= idn ).
Remark: When = idn is the identity permutation, we can agree that the composition of
0 transpositions is the identity. The second part of Proposition 5.1 shows that the transpositions generate the group of permutations Sn .
In writing a permutation as a composition = 1 . . .
s of cyclic permutations, it
is clear that the order of the i does not matter, since their domains are disjoint. Given
a permutation written as a product of transpositions, we now show that the parity of the
number of transpositions is an invariant.
Definition 5.2. For every n 2, since every permutation : [n] ! [n] defines a partition
of r subsets over which acts either as the identity or as a cyclic permutation, let (),
called the signature of , be defined by () = ( 1)n r , where r is the number of sets in the
partition.
If is a transposition exchanging i and j, it is clear that the partition associated with
consists of n 1 equivalence classes, the set {i, j}, and the n 2 singleton sets {k}, for
k 2 [n] {i, j}, and thus, ( ) = ( 1)n (n 1) = ( 1)1 = 1.
Proposition 5.2. For every n
sition , we have
) =
().
132
CHAPTER 5. DETERMINANTS
Proof. Assume that (i) = j and (j) = i, where i < j. There are two cases, depending
whether i and j are in the same equivalence class Jl of R , or if they are in distinct equivalence
classes. If i and j are in the same class Jl , then if
Jl = {i1 , . . . , ip , . . . iq , . . . ik },
where ip = i and iq = j, since
(( 1 (ip ))) = (ip ) = (i) = j = iq
and
((iq 1 )) = (iq ) = (j) = i = ip ,
it is clear that Jl splits into two subsets, one of which is {ip , . . . , iq 1 }, and thus, the number
of classes associated with is r + 1, and ( ) = ( 1)n r 1 = ( 1)n r = (). If i
and j are in distinct equivalence classes Jl and Jm , say
{i1 , . . . , ip , . . . ih }
and
{j1 , . . . , jq , . . . jk },
where ip = i and jq = j, since
(( 1 (ip ))) = (ip ) = (i) = j = jq
and
(( 1 (jq ))) = (jq ) = (j) = i = ip ,
we see that the classes Jl and Jm merge into a single class, and thus, the number of classes
associated with is r 1, and ( ) = ( 1)n r+1 = ( 1)n r = ().
Now, let = m
proposition, we have
...
1 for a transposition.
Remark: When = idn is the identity permutation, since we agreed that the composition
of 0 transpositions is the identity, it it still correct that ( 1)0 = (id) = +1. From the
proposition, it is immediate that ( 0 ) = ( 0 )(). In particular, since 1 = idn , we
get ( 1 ) = ().
We can now proceed with the definition of determinants.
5.2
133
First, we define multilinear maps, symmetric multilinear maps, and alternating multilinear
maps.
Remark: Most of the definitions and results presented in this section also hold when K is
a commutative ring, and when we consider modules over K (free modules, when bases are
needed).
Let E1 , . . . , En , and F , be vector spaces over a field K, where n
1.
Symmetric bilinear maps (and multilinear maps) play an important role in geometry
(inner products, quadratic forms), and in dierential calculus (partial derivatives).
A bilinear map is symmetric if f (u, v) = f (v, u), for all u, v 2 E.
Alternating multilinear maps satisfy the following simple but crucial properties.
Proposition 5.3. Let f : E . . . E ! F be an n-linear alternating map, with n
following properties hold:
(1)
f (. . . , xi , xi+1 , . . .) =
f (. . . , xi+1 , xi , . . .)
2. The
134
CHAPTER 5. DETERMINANTS
(2)
f (. . . , xi , . . . , xj , . . .) = 0,
where xi = xj , and 1 i < j n.
(3)
f (. . . , xi , . . . , xj , . . .) =
f (. . . , xj , . . . , xi , . . .),
where 1 i < j n.
(4)
f (. . . , xi , . . .) = f (. . . , xi + xj , . . .),
for any
2 K, and where i 6= j.
f (. . . , xi+1 , xi , . . .).
(2) If xi = xj and i and j are not adjacent, we can interchange xi and xi+1 , and then xi
and xi+2 , etc, until xi and xj become adjacent. By (1),
f (. . . , xi , . . . , xj , . . .) = f (. . . , xi , xj , . . .),
where = +1 or
135
It is then convenient to use the matrix notation to describe the eect of the linear map L(A),
as
0
1 0
10 1
L(A)1 (u)
a1 1 a1 2 . . . a 1 n
u1
B L(A)2 (u) C B a2 1 a2 2 . . . a2 n C B u2 C
B
C B
CB C
B
C = B ..
..
.. . .
.. C B .. C .
@
A @ .
.
.
.
. A@ . A
L(A)n (u)
an 1 an 2 . . . an n
un
a1 1 a1 2
B a2 1 a2 2
B
A = B ..
..
@ .
.
an 1 an 2
1
. . . a1 n
. . . a2 n C
C
.. C
..
.
. A
. . . an n
0 1
0 1
v1
u1
B v2 C
B u2 C
B C
B C
B .. C = A> B .. C .
@.A
@.A
vn
un
Then,
f (v1 , . . . , vn ) =
2Sn
136
CHAPTER 5. DETERMINANTS
The quantity
det(A) =
2Sn
()a(1) 1 a(n) n
is in fact the value of the determinant of A (which, as we shall see shortly, is also equal to the
determinant of A> ). However, working directly with the above definition is quite ackward,
and we will proceed via a slightly indirect route
5.3
Definition of a Determinant
Recall that the set of all square n n-matrices with coefficients in a field K is denoted by
Mn (K).
Definition 5.4. A determinant is defined as any map
D : Mn (K) ! K,
which, when viewed as a map on (K n )n , i.e., a map of the n columns of a matrix, is n-linear
alternating and such that D(In ) = 1 for the identity matrix In . Equivalently, we can consider
a vector space E of dimension n, some fixed basis (e1 , . . . , en ), and define
D : En ! K
as an n-linear alternating map such that D(e1 , . . . , en ) = 1.
First, we will show that such maps D exist, using an inductive definition that also gives
a recursive method for computing determinants. Actually, we will define a family (Dn )n 1
of (finite) sets of maps D : Mn (K) ! K. Second, we will show that determinants are in fact
uniquely defined, that is, we will show that each Dn consists of a single map. This will show
the equivalence of the direct definition det(A) of Lemma 5.4 with the inductive definition
D(A). Finally, we will prove some basic properties of determinants, using the uniqueness
theorem.
Given a matrix A 2 Mn (K), we denote its n columns by A1 , . . . , An .
Definition 5.5. For every n
inductively as follows:
When n = 1, D1 consists of the single map D such that, D(A) = a, where A = (a), with
a 2 K.
Assume that Dn 1 has been defined, where n 2. We define the set Dn as follows. For
every matrix A 2 Mn (K), let Ai j be the (n 1) (n 1)-matrix obtained from A = (ai j )
by deleting row i and column j. Then, Dn consists of all the maps D such that, for some i,
1 i n,
D(A) = ( 1)i+1 ai 1 D(Ai 1 ) + + ( 1)i+n ai n D(Ai n ),
where for every j, 1 j n, D(Ai j ) is the result of applying any D in Dn
to Ai j .
137
We confess that the use of the same letter D for the member of Dn being defined, and
for members of Dn 1 , may be slightly confusing. We considered using subscripts to
distinguish, but this seems to complicate things unnecessarily. One should not worry too
much anyway, since it will turn out that each Dn contains just one map.
Each ( 1)i+j D(Ai j ) is called the cofactor of ai j , and the inductive expression for D(A)
is called a Laplace expansion of D according to the i-th row . Given a matrix A 2 Mn (K),
each D(A) is called a determinant of A.
We can think of each member of Dn as an algorithm to evaluate the determinant of A.
The main point is that these algorithms, which recursively evaluate a determinant using all
possible Laplace row expansions, all yield the same result, det(A).
We will prove shortly that D(A) is uniquely defined (at the moment, it is not clear that
Dn consists of a single map). Assuming this fact, given a n n-matrix A = (ai j ),
0
1
a1 1 a1 2 . . . a 1 n
B a2 1 a2 2 . . . a 2 n C
B
C
A = B ..
.. . .
.. C
@ .
.
.
. A
an 1 an 2 . . . an n
its determinant is denoted by D(A) or det(A), or more explicitly by
a1 1 a1 2
a2 1 a2 2
det(A) = ..
..
.
.
an 1 an 2
. . . a1 n
. . . a2 n
..
...
.
. . . an n
a b
c d
D(A) = ad
2. When n = 3, if
bc.
0
1
a1 1 a1 2 a1 3
A = @a2 1 a2 2 a2 3 A
a3 1 a3 2 a3 3
138
CHAPTER 5. DETERMINANTS
expanding according to the first row, we have
D(A) = a1 1
a2 2 a2 3
a3 2 a3 3
a1 2
a2 1 a2 3
a
a
+ a1 3 2 1 2 2
a3 1 a3 3
a3 1 a3 2
that is,
D(A) = a1 1 (a2 2 a3 3
a3 2 a2 3 )
a1 2 (a2 1 a3 3
a3 1 a2 3 ) + a1 3 (a2 1 a3 2
a3 1 a2 2 ),
a1 1 a3 2 a2 3
a2 1 a1 2 a3 3
a3 1 a2 2 a1 3 .
( 1)i+k ai k D(Ai k ) = 0.
139
where the sum ranges over all permutations on {1, . . . , n}. As a consequence, Dn consists
of a single map for every n 1, and this map is given by the above explicit formula.
Proof. Consider the standard basis (e1 , . . . , en ) of K n , where (ei )i = 1 and (ei )j = 0, for
j 6= i. Then, each column Aj of A corresponds to a vector vj whose coordinates over the
basis (e1 , . . . , en ) are the components of Aj , that is, we can write
v 1 = a1 1 e 1 + + an 1 e n ,
...
v n = a1 n e 1 + + an n e n .
Since by Lemma 5.5, each D is a multilinear alternating map, by applying Lemma 5.4, we
get
X
D(A) = D(v1 , . . . , vn ) =
()a(1) 1 a(n) n D(e1 , . . . , en ),
2Sn
where the sum ranges over all permutations on {1, . . . , n}. But D(e1 , . . . , en ) = D(In ),
and by Lemma 5.5, we have D(In ) = 1. Thus,
X
D(A) =
()a(1) 1 a(n) n ,
2Sn
n un
|0
1, 1 i n}
140
CHAPTER 5. DETERMINANTS
2Sn
()a(1) 1 a(n) n ,
where the sum ranges over all permutations on {1, . . . , n}. Since a permutation is invertible,
every product
a(1) 1 a(n) n
can be rewritten as
a1
1 (1)
an
1 (n)
and since ( 1 ) = () and the sum is taken over all permutations on {1, . . . , n}, we have
X
X
()a(1) 1 a(n) n =
( )a1 (1) an (n) ,
2Sn
2Sn
where and
A useful consequence of Corollary 5.7 is that the determinant of a matrix is also a multilinear alternating map of its rows. This fact, combined with the fact that the determinant of
a matrix is a multilinear alternating map of its columns is often useful for finding short-cuts
in computing determinants. We illustrate this point on the following example which shows
up in polynomial interpolation.
Example 5.2. Consider the so-called Vandermonde determinant
V (x1 , . . . , xn ) =
1
x1
x21
..
.
xn1
We claim that
V (x1 , . . . , xn ) =
1
x2
x22
..
.
1
xn2
Y
...
...
...
..
.
1
1
xn
x2n .
..
.
. . . xnn
(xj
xi ),
1i<jn
141
etc, multiply row i 1 by x1 and subtract it from row i, until we reach row 1. We obtain
the following determinant:
1
0
V (x1 , . . . , xn ) = 0
..
.
1
x2 x1
x2 (x2 x1 )
..
.
0 xn2 2 (x2
...
...
...
...
1
xn x1
xn (xn x1 )
..
.
x1 ) . . . xnn 2 (xn
x1 )
Now, expanding this determinant according to the first column and using multilinearity,
we can factor (xi x1 ) from the column of index i 1 of the matrix obtained by deleting
the first row and the first column, and thus
V (x1 , . . . , xn ) = (x2
x1 )(x3
x1 ) (xn
x1 )V (x2 , . . . , xn ),
Then,
a1 1 a1 2
B a2 1 a2 2
B
A = B ..
..
@ .
.
an 1 an 2
1
. . . a1 n
. . . a2 n C
C
.. C
..
.
. A
. . . an n
0 1
0 1
v1
u1
B v2 C
B u2 C
B C
B C
B .. C = A B .. C .
@.A
@.A
vn
un
142
CHAPTER 5. DETERMINANTS
As a consequence, we get the very useful property that the determinant of a product of
matrices is the product of the determinants of these matrices.
Proposition 5.9. For any two n n-matrices A and B, we have det(AB) = det(A) det(B).
Proof. We use Proposition 5.8 as follows: let (e1 , . . . , en ) be the standard basis of K n , and
let
0 1
0 1
w1
e1
B w2 C
B e2 C
B C
B C
B .. C = AB B .. C .
@ . A
@.A
wn
en
Then, we get
det(v1 , . . . , vn ) = det(B),
and since
1
0 1
w1
v1
B w2 C
B v2 C
B C
B C
B .. C = A B .. C ,
@ . A
@.A
wn
we get
vn
It should be noted that all the results of this section, up to now, also holds when K is a
commutative ring, and not necessarily a field. We can now characterize when an nn-matrix
A is invertible in terms of its determinant det(A).
5.4
In the next two sections, K is a commutative ring and when needed, a field.
143
e = (bi j )
Definition 5.6. Let K be a commutative ring. Given a matrix A 2 Mn (K), let A
be the matrix defined such that
bi j = ( 1)i+j det(Aj i ),
bi j = ( 1)i+j det(Aj i ).
e is the transpose of the matrix of cofactors of elements of A.
Thus, A
We have the following proposition.
Proposition 5.10. Let K be a commutative ring. For every matrix A 2 Mn (K), we have
e = AA
e = det(A)In .
AA
e
= (det(A)) 1 A.
e = (bi j ) and AA
e = (ci j ), we know that the entry ci j in row i and column j of AA
e
Proof. If A
is
c i j = ai 1 b 1 j + + ai k b k j + + ai n b n j ,
which is equal to
If j 6= i, we can form the matrix A0 by replacing the j-th row of A by the i-th row of A.
Now, the matrix Aj k obtained by deleting row j and column k from A is equal to the matrix
A0j k obtained by deleting row j and column k from A0 , since A and A0 only dier by the j-th
row. Thus,
det(Aj k ) = det(A0j k ),
and we have
ci j = ai 1 ( 1)j+1 det(A0j 1 ) + + ai n ( 1)j+n det(A0j n ).
However, this is the expansion of det(A0 ) according to the j-th row, since the j-th row of A0
is equal to the i-th row of A, and since A0 has two identical rows i and j, because det is an
alternating map of the rows (see an earlier remark), we have det(A0 ) = 0. Thus, we have
shown that ci i = det(A), and ci j = 0, when j 6= i, and so
e = det(A)In .
AA
144
CHAPTER 5. DETERMINANTS
e that
It is also obvious from the definition of A,
f> .
e> = A
A
f> = A> A
e> = (AA)
e >,
det(A)In = A> A
e > = det(A)In ,
(AA)
which yields
since In> = In . This proves that
e = det(A)In ,
AA
e = AA
e = det(A)In .
AA
e Conversely, if A is
As a consequence, if det(A) is invertible, we have A 1 = (det(A)) 1 A.
1
invertible, from AA = In , by Proposition 5.9, we have det(A) det(A 1 ) = 1, and det(A) is
invertible.
When K is a field, an element a 2 K is invertible i a 6= 0. In this case, the second part
of the proposition can be stated as A is invertible i det(A) 6= 0. Note in passing that this
method of computing the inverse of a matrix is usually not practical.
We now consider some applications of determinants to linear independence and to solving
systems of linear equations. Although these results hold for matrices over an integral domain,
their proofs require more sophisticated methods (it is necessary to use the fraction field of
the integral domain, K). Therefore, we assume again that K is a field.
Let A be an n n-matrix, x a column vectors of variables, and b another column vector,
and let A1 , . . . , An denote the columns of A. Observe that the system of equation Ax = b,
0
10 1 0 1
a1 1 a1 2 . . . a 1 n
x1
b1
B a2 1 a2 2 . . . a 2 n C B x 2 C B b 2 C
B
CB C B C
B ..
.. . .
.. C B .. C = B .. C
@ .
.
.
. A@ . A @ . A
an 1 an 2 . . . an n
xn
bn
is equivalent to
x1 A1 + + xj Aj + + xn An = b,
145
Proof. First, assume that the columns A1 , . . . , An of A are linearly dependent. Then, there
are x1 , . . . , xn 2 K, such that
x1 A1 + + xj Aj + + xn An = 0,
det(A1 , . . . , x1 A1 + + xj Aj + + xn An , . . . , An ) = det(A1 , . . . , 0, . . . , An ) = 0,
where 0 occurs in the j-th position, by multilinearity, all terms containing two identical
columns Ak for k 6= j vanish, and we get
xj det(A1 , . . . , An ) = 0.
146
CHAPTER 5. DETERMINANTS
5.5
We now characterize when a system of linear equations of the form Ax = b has a unique
solution.
Proposition 5.13. Given an n n-matrix A over a field K, the following properties hold:
(1) For every column vector b, there is a unique column vector x such that Ax = b i the
only solution to Ax = 0 is the trivial vector x = 0, i det(A) 6= 0.
(2) If det(A) 6= 0, the unique solution of Ax = b is given by the expressions
xj =
det(A1 , . . . , Aj 1 , b, Aj+1 , . . . , An )
,
det(A1 , . . . , Aj 1 , Aj , Aj+1 , . . . , An )
5.6
147
M (f )P ) = det(P
5.7
We conclude this chapter with an interesting and important application of Proposition 5.10,
the CayleyHamilton theorem. The results of this section apply to matrices over any commutative ring K. First, we need the concept of the characteristic polynomial of a matrix.
Definition 5.8. If K is any commutative ring, for every n n matrix A 2 Mn (K), the
characteristic polynomial PA (X) of A is the determinant
PA (X) = det(XI
A).
148
CHAPTER 5. DETERMINANTS
a b
A=
,
c d
then
PA (X) =
a
c
b
X
= X2
(a + d)X + ad
bc.
We can substitute the matrix A for the variable X in the polynomial PA (X), obtaining a
matrix PA . If we write
PA (X) = X n + c1 X n 1 + + cn ,
then
PA = An + c1 An
+ + cn I.
+ + cn I = 0.
Proof. We can view the matrix B = XI A as a matrix with coefficients in the polynomial
e which is the transpose of the matrix of
ring K[X], and then we can form the matrix B
e is an (n 1) (n 1) determinant, and thus a
cofactors of elements of B. Each entry in B
e as
polynomial of degree a most n 1, so we can write B
e = X n 1 B0 + X n 2 B1 + + Bn 1 ,
B
X a
b
X
d
b
1
0
d
b
e=
B=
, B
=X
+
.
c
X d
c
X a
0 1
c
a
By Proposition 5.10, we have
e = det(B)I = PA (X)I.
BB
A)(X n 1 B0 + X n 2 B1 + + X n
j 1
Bj + + Bn 1 ),
149
with
e = X n D0 + X n 1 D1 + + X n j Dj + + Dn ,
BB
D0 = B0
D1 = B1 AB0
..
.
Dj = Bj ABj 1
..
.
Dn 1 = Bn 1 ABn
Dn = ABn 1 .
Since
PA (X)I = (X n + c1 X n
+ + cn )I,
the equality
X n D0 + X n 1 D1 + + Dn = (X n + c1 X n
+ + cn )I
is an equality between two matrices, so it 1requires that all corresponding entries are equal,
and since these are polynomials, the coefficients of these polynomials must be identical,
which is equivalent to the set of equations
I = B0
c1 I = B1 AB0
..
.
cj I = Bj ABj 1
..
.
cn 1 I = Bn 1 ABn
cn I = ABn 1 ,
for all j, with 1 j n 1. If we multiply the first equation by An , the last by I, and
generally the (j + 1)th by An j , when we add up all these new equations, we see that the
right-hand side adds up to 0, and we get our desired equation
An + c1 An
as claimed.
+ + cn I = 0,
150
CHAPTER 5. DETERMINANTS
As a concrete example, when n = 2, the matrix
a b
A=
c d
(a + d)A + (ad
bc)I = 0.
Most readers will probably find the proof of Theorem 5.15 rather clever but very mysterious and unmotivated. The conceptual difficulty is that we really need to understand how
polynomials in one variable act on vectors, in terms of the matrix A. This can be done
and yields a more natural proof. Actually, the reasoning is simpler and more general if we
free ourselves from matrices and instead consider a finite-dimensional vector space E and
some given linear map f : E ! E. Given any polynomial p(X) = a0 X n + a1 X n 1 + + an
with coefficients in the field K, we define the linear map p(f ) : E ! E by
p(f ) = a0 f n + a1 f n
where f k = f
+ + an id,
for every vector u 2 E. Then, we define a new kind of scalar multiplication : K[X]E ! E
by polynomials as follows: for every polynomial p(X) 2 K[X], for every u 2 E,
p(X) u = p(f )(u).
It is easy to verify that this is a good action, which means that
p (u + v) = p u + p v
(p + q) u = p u + q u
(pq) u = p (q u)
1 u = u,
for all p, q 2 K[X] and all u, v 2 E. With this new scalar multiplication, E is a K[X]-module.
If p = λ is a scalar in K (a polynomial of degree 0), then λ · u = λu, which means that K acts on E by scalar multiplication as before. If p(X) = X (the monomial X), then
X · u = f(u).
Now, if we pick a basis (e_1, . . . , e_n), if a polynomial p(X) ∈ K[X] has the property that
p(X) · e_i = 0, i = 1, . . . , n,
then this means that p(f)(e_i) = 0 for i = 1, . . . , n, which means that the linear map p(f) vanishes on E. We can also check, as we did in Section 5.2, that if A and B are two n × n matrices and if (u_1, . . . , u_n) are any n vectors, then

\[ A\left(B\begin{pmatrix} u_1\\ \vdots\\ u_n \end{pmatrix}\right) = (AB)\begin{pmatrix} u_1\\ \vdots\\ u_n \end{pmatrix}. \]

This suggests the plan of attack for our second proof of the Cayley–Hamilton theorem.
For simplicity, we prove the theorem for vector spaces over a field. The proof goes through
for a free module over a commutative ring.
Theorem 5.16. (Cayley–Hamilton) For every finite-dimensional vector space E over a field K, for every linear map f : E → E, for every basis (e_1, . . . , e_n), if A is the matrix of f over the basis (e_1, . . . , e_n) and if
P_A(X) = X^n + c_1X^{n−1} + · · · + c_n
is the characteristic polynomial of A, then
P_A(f) = f^n + c_1f^{n−1} + · · · + c_n id = 0.
Proof. Since the columns of A consist of the vectors f(e_j) expressed over the basis (e_1, . . . , e_n), we have
f(e_j) = Σ_{i=1}^{n} a_{ij} e_i, 1 ≤ j ≤ n.
Using our action of K[X] on E, the above equations can be expressed as
X · e_j = Σ_{i=1}^{n} a_{ij} · e_i, 1 ≤ j ≤ n,
which yields
Σ_{i=1}^{j−1} (−a_{ij}) · e_i + (X − a_{jj}) · e_j + Σ_{i=j+1}^{n} (−a_{ij}) · e_i = 0, 1 ≤ j ≤ n.
Observe that the transpose of XI − A shows up: the above system can be written in matrix form as

\[ \begin{pmatrix} X - a_{11} & -a_{21} & \cdots & -a_{n1}\\ -a_{12} & X - a_{22} & \cdots & -a_{n2}\\ \vdots & \vdots & \ddots & \vdots\\ -a_{1n} & -a_{2n} & \cdots & X - a_{nn} \end{pmatrix} \begin{pmatrix} e_1\\ e_2\\ \vdots\\ e_n \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ \vdots\\ 0 \end{pmatrix}. \]
If we let B = XI − A^⊤, then as in the previous proof, if B̃ is the transpose of the matrix of cofactors of B, we have
B̃B = det(B)I = det(XI − A^⊤)I = det(XI − A)I = P_A I.
Since the entries of B̃ are polynomials in K[X], our scalar multiplication makes sense, and using the matrix–vector identity noted above, multiplying the system on the left by B̃ yields

\[ \widetilde{B}B\begin{pmatrix} e_1\\ \vdots\\ e_n \end{pmatrix} = P_A I\begin{pmatrix} e_1\\ \vdots\\ e_n \end{pmatrix} = \begin{pmatrix} 0\\ \vdots\\ 0 \end{pmatrix}, \]

that is,
P_A · e_j = 0, j = 1, . . . , n,
which proves that P_A(f) = 0, as claimed.
If K is a field, the characteristic polynomial of a linear map is independent of the chosen basis: if A′ = P^{−1}AP for some invertible change-of-basis matrix P, then
det(XI − P^{−1}AP) = det(XP^{−1}IP − P^{−1}AP) = det(P^{−1}(XI − A)P) = det(P^{−1}) det(XI − A) det(P) = det(XI − A).
5.8 Permanents
Recall that the determinant of an n × n matrix A is given by
det(A) = Σ_{σ∈S_n} ε(σ) a_{σ(1) 1} · · · a_{σ(n) n}.
If we drop the sign ε(σ) of every permutation from the above formula, we obtain a quantity known as the permanent:
per(A) = Σ_{σ∈S_n} a_{σ(1) 1} · · · a_{σ(n) n}.
Permanents and determinants were investigated as early as 1812 by Cauchy. It is clear from
the above definition that the permanent is a multilinear and symmetric form. We also have
per(A) = per(A^⊤),
and the following unsigned version of the Laplace expansion formula:
per(A) = a_{i1} per(A_{i1}) + · · · + a_{ij} per(A_{ij}) + · · · + a_{in} per(A_{in}),
for i = 1, . . . , n. However, unlike determinants which have a clear geometric interpretation as
signed volumes, permanents do not have any natural geometric interpretation. Furthermore,
determinants can be evaluated efficiently, for example using the conversion to row reduced
echelon form, but computing the permanent is hard.
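For tiny matrices the definition can be applied directly. The following is a minimal Python sketch (the function name per is our own choice); it sums over all n! permutations and is therefore only usable for very small n, which is consistent with the hardness remark above.

from itertools import permutations

def per(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for j in range(n):
            prod *= A[sigma[j]][j]   # the factor a_{sigma(j) j}
        total += prod
    return total

# Example: the permanent of the all-ones 3 x 3 matrix is 3! = 6.
print(per([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))   # 6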
Permanents turn out to have various combinatorial interpretations. One of these is in
terms of perfect matchings of bipartite graphs which we now discuss.
Recall that a bipartite (undirected) graph G = (V, E) is a graph whose set of nodes V can be partitioned into two nonempty disjoint subsets V_1 and V_2, such that every edge e ∈ E has one endpoint in V_1 and one endpoint in V_2. An example of a bipartite graph with 14 nodes is shown in Figure 5.8; its nodes are partitioned into the two sets {x_1, x_2, x_3, x_4, x_5, x_6, x_7} and {y_1, y_2, y_3, y_4, y_5, y_6, y_7}.
Figure 5.8: A bipartite graph G with node sets {x_1, . . . , x_7} and {y_1, . . . , y_7}
A perfect matching of G is a set M of pairwise non-adjacent edges such that every node of G is incident to exactly one edge of M.

Figure 5.9: A perfect matching in a bipartite graph
Obviously, a perfect matching in a bipartite graph can exist only if its set of nodes has a partition in two blocks of equal size, say {x_1, . . . , x_m} and {y_1, . . . , y_m}. Then, there is a bijection between perfect matchings and bijections π : {x_1, . . . , x_m} → {y_1, . . . , y_m} such that π(x_i) = y_j iff there is an edge between x_i and y_j.

Now, every bipartite graph G with a partition of its nodes into two sets of equal size as above is represented by an m × m matrix A = (a_{ij}) such that a_{ij} = 1 iff there is an edge between x_i and y_j, and a_{ij} = 0 otherwise. Using the interpretation of perfect matchings as bijections π : {x_1, . . . , x_m} → {y_1, . . . , y_m}, we see that the permanent per(A) of the (0, 1)-matrix A representing the bipartite graph G counts the number of perfect matchings in G.
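For instance, reusing the per() function sketched above (our own helper, not from the text), one can count the perfect matchings of a small bipartite graph given by its (0, 1)-matrix; the graph below has exactly two perfect matchings.

# x_i is joined to y_j iff G[i][j] = 1; this graph has 2 perfect matchings.
G = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(per(G))   # 2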
In a famous paper published in 1979, Leslie Valiant proves that computing the permanent
is a #P-complete problem. Such problems are suspected to be intractable. It is known that
if a polynomial-time algorithm existed to solve a #P-complete problem, then we would have
P = N P , which is believed to be very unlikely.
Another combinatorial interpretation of the permanent can be given in terms of systems
of distinct representatives. Given a finite set S, let (A1 , . . . , An ) be any sequence of nonempty
subsets of S (not necessarily distinct). A system of distinct representatives (for short SDR)
of the sets A1 , . . . , An is a sequence of n distinct elements (a1 , . . . , an ), with ai 2 Ai for i =
1, . . . , n. The number of SDRs of a sequence of sets plays an important role in combinatorics.
Now, if S = {1, 2, . . . , n} and if we associate to any sequence (A1 , . . . , An ) of nonempty
subsets of S the matrix A = (aij ) defined such that aij = 1 if j 2 Ai and aij = 0 otherwise,
then the permanent per(A) counts the number of SDRs of the set A1 , . . . , An .
This interpretation of permanents in terms of SDRs can be used to prove bounds for the
permanents of various classes of matrices. Interested readers are referred to van Lint and
Wilson [113] (Chapters 11 and 12). In particular, a proof of a theorem known as the Van der Waerden conjecture is given in Chapter 12. This theorem states that for any n × n matrix A with nonnegative entries in which all row-sums and column-sums are 1 (doubly stochastic matrices), we have
per(A) ≥ n!/n^n,
with equality for the matrix in which all entries are equal to 1/n.
5.9 Further Readings

Thorough expositions of the material covered in Chapters 2–4 and 5 can be found in Strang [105, 104], Lax [71], Lang [67], Artin [4], Mac Lane and Birkhoff [73], Hoffman and Kunze [62], Bourbaki [14, 15], Van der Waerden [112], Serre [96], Horn and Johnson [57], and Bertin [12]. These notions of linear algebra are nicely put to use in classical geometry; see Berger [8, 9], Tisseron [109] and Dieudonné [28].
Chapter 6
Gaussian Elimination, LU-Factorization, Cholesky Factorization, Reduced Row Echelon Form

6.1 Motivating Example: Curve Interpolation
Curve interpolation is a problem that arises frequently in computer graphics and in robotics (path planning). There are many ways of tackling this problem, and in this section we will describe a solution using cubic splines. Such splines consist of cubic Bézier curves. They are often used because they are cheap to implement and give more flexibility than quadratic Bézier curves.

A cubic Bézier curve C(t) (in R^2 or R^3) is specified by a list of four control points (b_0, b_1, b_2, b_3) and is given parametrically by the equation
C(t) = (1 − t)^3 b_0 + 3(1 − t)^2 t b_1 + 3(1 − t)t^2 b_2 + t^3 b_3.
Clearly, C(0) = b_0, C(1) = b_3, and for t ∈ [0, 1], the point C(t) belongs to the convex hull of the control points b_0, b_1, b_2, b_3. The polynomials
(1 − t)^3, 3(1 − t)^2 t, 3(1 − t)t^2, t^3
are the Bernstein polynomials of degree 3.
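A minimal sketch of evaluating a cubic Bézier curve directly from the formula above (planar points given as (x, y) pairs; the function name and sample control points are illustrative assumptions):

def bezier3(b0, b1, b2, b3, t):
    s = 1.0 - t
    coeffs = (s**3, 3*s*s*t, 3*s*t*t, t**3)   # Bernstein polynomials of degree 3
    pts = (b0, b1, b2, b3)
    return tuple(sum(c * p[k] for c, p in zip(coeffs, pts)) for k in range(2))

print(bezier3((0, 0), (1, 2), (3, 2), (4, 0), 0.0))   # (0.0, 0.0) = b0
print(bezier3((0, 0), (1, 2), (3, 2), (4, 0), 0.5))   # a point inside the convex hull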
Figure 6.2: A Bézier curve with an inflexion point
The interpolation problem is the following: given N + 1 data points x_0, x_1, . . . , x_N, find a C^2 curve made of N cubic Bézier curves C_1, . . . , C_N such that C_i starts at x_{i−1} and ends at x_i. The construction uses auxiliary control points d_0, . . . , d_{N+1} (the de Boor control points), with d_0 = x_0 and d_{N+1} = x_N.
Figure 6.4: A C^2 cubic interpolation spline curve passing through the points x_0, x_1, x_2, x_3, x_4, x_5, x_6, x_7
It can be shown that d_1, . . . , d_{N−1} are given by the linear system

\[ \begin{pmatrix} \tfrac{7}{2} & 1 & & & \\ 1 & 4 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & 4 & 1\\ & & & 1 & \tfrac{7}{2} \end{pmatrix} \begin{pmatrix} d_1\\ d_2\\ \vdots\\ d_{N-2}\\ d_{N-1} \end{pmatrix} = \begin{pmatrix} 6x_1 - \tfrac{3}{2}d_0\\ 6x_2\\ \vdots\\ 6x_{N-2}\\ 6x_{N-1} - \tfrac{3}{2}d_N \end{pmatrix}. \]

We will show later that the above matrix is invertible because it is strictly diagonally dominant.
Once the above system is solved, the Bézier cubics C_1, . . ., C_N are determined as follows (we assume N ≥ 2): For 2 ≤ i ≤ N − 1, the control points (b^i_0, b^i_1, b^i_2, b^i_3) of C_i are given by
b^i_0 = x_{i−1}
b^i_1 = (2/3) d_{i−1} + (1/3) d_i
b^i_2 = (1/3) d_{i−1} + (2/3) d_i
b^i_3 = x_i.
The end curves C_1 and C_N use slightly different formulas; for example, the control points of C_N are given by
b^N_0 = x_{N−1}
b^N_1 = (1/2) d_{N−1} + (1/2) d_N
b^N_2 = d_N
b^N_3 = x_N.
We will now describe various methods for solving linear systems. Since the matrix of the
above system is tridiagonal, there are specialized methods which are more efficient than the
general methods. We will discuss a few of these methods.
6.2 Gaussian Elimination and LU-Factorization

Consider a linear system Ax = b, where A is an invertible n × n matrix. Two preliminary remarks are in order:

(1) One does not solve such a system by first computing A^{−1} and then the product A^{−1}b; this is far more expensive than solving the system directly.
(2) One does not solve (large) linear systems by computing determinants (using Cramer's formulae). This is because this method requires a number of additions (resp. multiplications) proportional to (n + 1)! (resp. (n + 2)!).
The key idea on which most direct methods (as opposed to iterative methods, that look for an approximation of the solution) are based is that if A is an upper-triangular matrix, which means that a_{ij} = 0 for 1 ≤ j < i ≤ n (resp. lower-triangular, which means that a_{ij} = 0 for 1 ≤ i < j ≤ n), then computing the solution x is trivial. Indeed, say A is an upper-triangular matrix

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1\,n-2} & a_{1\,n-1} & a_{1n}\\ 0 & a_{22} & \cdots & a_{2\,n-2} & a_{2\,n-1} & a_{2n}\\ 0 & 0 & \ddots & & \vdots & \vdots\\ \vdots & \vdots & & \ddots & \vdots & \vdots\\ 0 & 0 & \cdots & 0 & a_{n-1\,n-1} & a_{n-1\,n}\\ 0 & 0 & \cdots & 0 & 0 & a_{nn} \end{pmatrix}. \]

Then det(A) = a_{11}a_{22} · · · a_{nn} ≠ 0 iff all diagonal entries are nonzero, and in this case the solution of Ax = b is computed by back-substitution: first x_n = b_n/a_{nn}, then x_{n−1} = (b_{n−1} − a_{n−1\,n}x_n)/a_{n−1\,n−1}, and in general
x_k = (b_k − a_{k\,k+1}x_{k+1} − · · · − a_{kn}x_n)/a_{kk}.
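A minimal sketch of back-substitution for an upper-triangular system Ax = b with nonzero diagonal entries (plain Python lists; the example system is the triangular one obtained in the worked example just below):

def back_substitution(A, b):
    n = len(b)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(A[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - s) / A[k][k]
    return x

# 2x + y + z = 5, -8y - 2z = -12, z = 2  ->  x = 1, y = 1, z = 2
print(back_substitution([[2, 1, 1], [0, -8, -2], [0, 0, 1]], [5, -12, 2]))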
Gaussian elimination proceeds by eliminating variables one at a time. For example, eliminating the variable x from the second and third equations of a 3 × 3 system yields a new system such as
2x + y + z = 5
−8y − 2z = −12
8y + 3z = 14.
This time, we can eliminate the variable y from the third equation by adding the second equation to the third:
2x + y + z = 5
−8y − 2z = −12
z = 2.
This last system is upper-triangular. Using back-substitution, we find the solution: z = 2, y = 1, x = 1.
Observe that we have performed only row operations. The general method is to iteratively
eliminate variables using simple row operations (namely, adding or subtracting a multiple of
a row to another row of the matrix) while simultaneously applying these operations to the
vector b, to obtain a system, M Ax = M b, where M A is upper-triangular. Such a method is
called Gaussian elimination. However, one extra twist is needed for the method to work in
all cases: It may be necessary to permute rows, as illustrated by the following example:
x + y + z =1
x + y + 3z = 1
2x + 5y + 8z = 1.
In order to eliminate x from the second and third row, we subtract the first row from the second and we subtract twice the first row from the third:
x + y + z = 1
2z = 0
3y + 6z = −1.
Now, the trouble is that y does not occur in the second row; so, we can't eliminate y from the third row by adding or subtracting a multiple of the second row to it. The remedy is simple: permute the second and the third row! We get the system
x + y + z = 1
3y + 6z = −1
2z = 0,
which is already in triangular form. Another example where some permutations are needed is:
z = 1
−2x + 7y + 2z = 1
4x − 6y = −1.
First, we permute the first and the second equation, obtaining
−2x + 7y + 2z = 1
z = 1
4x − 6y = −1,
and then, we add twice the first row to the third, obtaining:
−2x + 7y + 2z = 1
z = 1
8y + 4z = 1.
Again, we permute the second and the third row, getting
−2x + 7y + 2z = 1
8y + 4z = 1
z = 1,
an upper-triangular system. Of course, in this example, z is already solved and we could
have eliminated it first, but for the general method, we need to proceed in a systematic
fashion.
We now describe the method of Gaussian Elimination applied to a linear system Ax = b,
where A is assumed to be invertible. We use the variable k to keep track of the stages of
elimination. Initially, k = 1.
(1) The first step is to pick some nonzero entry a_{i1} in the first column of A. Such an entry must exist, since A is invertible (otherwise, the first column of A would be the zero vector, and the columns of A would not be linearly independent; equivalently, we would have det(A) = 0). The actual choice of such an element has some impact on the numerical stability of the method, but this will be examined later. For the time being, we assume that some arbitrary choice is made. This chosen element is called the pivot of the elimination step and is denoted π_1 (so, in this first step, π_1 = a_{i1}).

(2) Next, we permute the row (i) corresponding to the pivot with the first row. Such a step is called pivoting. So, after this permutation, the first element of the first row is nonzero.

(3) We now eliminate the variable x_1 from all rows except the first by adding suitable multiples of the first row to these rows. More precisely, we add −a_{i1}/π_1 times the first row to the ith row for i = 2, . . . , n. At the end of this step, all entries in the first column are zero except the first.

(4) Increment k by 1. If k = n, stop. Otherwise, k < n, and then iteratively repeat steps (1), (2), (3) on the (n − k + 1) × (n − k + 1) subsystem obtained by deleting the first k − 1 rows and k − 1 columns from the current system.
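The following is a minimal Python sketch of steps (1)–(4) applied to the augmented matrix (A, b); as its pivoting rule it uses the partial-pivoting choice discussed later in this chapter (picking the entry of largest absolute value), which is one admissible choice among those allowed in step (1).

def gaussian_elimination(A, b):
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]           # augmented matrix (A, b)
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))    # choose a pivot in column k
        M[k], M[p] = M[p], M[k]                             # pivoting (row permutation)
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]                      # eliminate x_k from row i
    return [row[:n] for row in M], [row[n] for row in M]

U, c = gaussian_elimination([[1, 1, 1], [1, 1, 3], [2, 5, 8]], [1, 1, 1])
# U is upper-triangular; back-substitution on (U, c) now yields the solution.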
Let A_1 = A and let A_k = (a^k_{ij}) be the matrix obtained after k − 1 elimination steps (2 ≤ k ≤ n). Observe that the first k − 1 rows of A_k are no longer modified after the (k − 1)th step.
We will prove later that det(A_k) = ± det(A). Consequently, A_k is invertible. The fact that A_k is invertible iff A is invertible can also be shown without determinants from the fact that there is some invertible matrix M_k such that A_k = M_kA, as we will see shortly.

Since A_k is invertible, some entry a^k_{ik} with k ≤ i ≤ n is nonzero. Otherwise, the last n − k + 1 entries in the first k columns of A_k would be zero, and the first k columns of A_k would yield k vectors in R^{k−1}. But then, the first k columns of A_k would be linearly dependent and A_k would not be invertible, a contradiction.

So, one of the entries a^k_{ik} with k ≤ i ≤ n can be chosen as pivot, and we permute the kth row with the ith row, obtaining the matrix α^k = (α^k_{jl}). The new pivot is π_k = α^k_{kk}, and we zero the entries i = k + 1, . . . , n in column k by adding −α^k_{ik}/π_k times row k to row i. At the end of this step, we have A_{k+1}. Observe that the first k − 1 rows of A_k are identical to the first k − 1 rows of A_{k+1}.
It is easy to figure out what kind of matrices perform the elementary row operations
used during Gaussian elimination. The key point is that if A = PB, where A, B are m × n matrices and P is a square matrix of dimension m, and if (as usual) we denote the rows of A and B by A_1, . . . , A_m and B_1, . . . , B_m, then the formula
a_{ij} = Σ_{k=1}^{m} p_{ik} b_{kj}
giving the (i, j)th entry in A shows that the ith row of A is a linear combination of the rows of B:
A_i = p_{i1}B_1 + · · · + p_{im}B_m.
Therefore, multiplication of a matrix on the left by a square matrix performs row operations. Similarly, multiplication of a matrix on the right by a square matrix performs column operations.
The permutation of the kth row with the ith row is achieved by multiplying A on the left by the transposition matrix P(i, k), which is the matrix obtained from the identity matrix by permuting rows i and k; that is, P(i, k) has diagonal entries 1 except in positions (i, i) and (k, k), where they are 0, and it has the entries 1 in positions (i, k) and (k, i). Observe that det(P(i, k)) = −1; furthermore, P(i, k) is symmetric and P(i, k)^{−1} = P(i, k).
During the permutation step (2), if row k and row i need to be permuted, the matrix A
is multiplied on the left by the matrix Pk such that Pk = P (i, k), else we set Pk = I.
Adding β times row j to row i (with i ≠ j) is achieved by multiplying A on the left by the elementary matrix
E_{i,j;β} = I + βe_{ij},
where
(e_{ij})_{kl} = 1 if k = i and l = j, and (e_{ij})_{kl} = 0 if k ≠ i or l ≠ j;
that is, E_{i,j;β} is the identity matrix with one extra entry β in position (i, j). When i > j the entry β lies below the diagonal, and when i < j it lies above the diagonal. Observe that the inverse of E_{i,j;β} = I + βe_{ij} is E_{i,j;−β} = I − βe_{ij}, and that det(E_{i,j;β}) = 1. Therefore, during step 3 (the elimination step), the matrix A is multiplied on the left by a product E_k of matrices of the form E_{i,k;β_{i,k}}, with i > k.
Consequently, we see that
A_{k+1} = E_kP_kA_k,
and hence
A_k = E_{k−1}P_{k−1} · · · E_1P_1A.
This justifies the claim made earlier that A_k = M_kA for some invertible matrix M_k; we can pick
M_k = E_{k−1}P_{k−1} · · · E_1P_1,
a product of invertible matrices.

The fact that det(P(i, k)) = −1 and det(E_{i,j;β}) = 1 implies immediately the fact claimed above: we always have det(A_k) = ± det(A). Furthermore, since
A_k = E_{k−1}P_{k−1} · · · E_1P_1A
and since Gaussian elimination stops for k = n, the matrix
A_n = E_{n−1}P_{n−1} · · · E_2P_2E_1P_1A
is upper-triangular. If we let M = E_{n−1}P_{n−1} · · · E_2P_2E_1P_1, then MA = A_n, and M is given explicitly as the product E_{n−1}P_{n−1} · · · E_2P_2E_1P_1, but this expression is of no use. Indeed, what we need is M^{−1}; when no permutations are needed, it turns out that M^{−1} can be obtained immediately from the matrices E_k's, in fact, from their inverses, and no multiplications are necessary.
Proposition 6.2. Let A be an invertible n × n matrix. Then A has an LU-factorization A = LU iff every matrix A[1..k, 1..k] is invertible for k = 1, . . . , n. Furthermore, in this case Gaussian elimination requires no pivoting and
det(A[1..k, 1..k]) = π_1π_2 · · · π_k, k = 1, . . . , n,
where π_k is the pivot obtained after k − 1 elimination steps. Therefore, the kth pivot is given by
π_k = a_{11} = det(A[1..1, 1..1]) if k = 1, and π_k = det(A[1..k, 1..k]) / det(A[1..k−1, 1..k−1]) if k = 2, . . . , n.
Proof. First, assume that A = LU is an LU-factorization of A. We can write

\[ A = \begin{pmatrix} A[1..k,1..k] & A_2\\ A_3 & A_4 \end{pmatrix} = \begin{pmatrix} L_1 & 0\\ L_3 & L_4 \end{pmatrix}\begin{pmatrix} U_1 & U_2\\ 0 & U_4 \end{pmatrix} = \begin{pmatrix} L_1U_1 & L_1U_2\\ L_3U_1 & L_3U_2 + L_4U_4 \end{pmatrix}, \]

where L_1, L_4 are unit lower-triangular and U_1, U_4 are upper-triangular. Thus A[1..k, 1..k] = L_1U_1, and since A = LU with A invertible, L and U are invertible, so L_1 and U_1 are invertible (their diagonal entries are nonzero), and therefore A[1..k, 1..k] is invertible for k = 1, . . . , n.

Conversely, assume that A[1..k, 1..k] is invertible for k = 1, . . . , n. We prove by induction on k that the kth elimination step requires no pivoting. Assuming that no pivoting was necessary during the first k − 1 steps, we have
E_{k−1} · · · E_2E_1A = A_k,
where L = E_{k−1} · · · E_2E_1 is a unit lower-triangular matrix and A_k[1..k, 1..k] is upper-triangular, so that LA = A_k can be written as

\[ \begin{pmatrix} L_1 & 0\\ L_3 & L_4 \end{pmatrix}\begin{pmatrix} A[1..k,1..k] & A_2\\ A_3 & A_4 \end{pmatrix} = \begin{pmatrix} U_1 & B_2\\ 0 & B_4 \end{pmatrix}, \]

where L_1 is unit lower-triangular and U_1 is upper-triangular. But then,
L_1A[1..k, 1..k] = U_1,
where L_1 is invertible (in fact, det(L_1) = 1), and since by hypothesis A[1..k, 1..k] is invertible, U_1 is also invertible, which implies that (U_1)_{kk} ≠ 0, since U_1 is upper-triangular. Therefore, no pivoting is needed in step k, establishing the induction step. Since det(L_1) = 1, we also have
det(U_1) = det(L_1A[1..k, 1..k]) = det(L_1) det(A[1..k, 1..k]) = det(A[1..k, 1..k]),
and since U_1 is upper-triangular and has the pivots π_1, . . . , π_k on its diagonal, we get
det(A[1..k, 1..k]) = π_1π_2 · · · π_k, k = 1, . . . , n,
as claimed.
Remark: The use of determinants in the first part of the proof of Proposition 6.2 can be avoided if we use the fact that a triangular matrix is invertible iff all its diagonal entries are nonzero.

Corollary 6.3. (LU-Factorization) Let A be an invertible n × n matrix. If every matrix A[1..k, 1..k] is invertible for k = 1, . . . , n, then Gaussian elimination requires no pivoting and yields an LU-factorization A = LU.
Proof. We proved in Proposition 6.2 that in this case Gaussian elimination requires no pivoting. Then, since every elementary matrix E_{i,k;β} is lower-triangular (since we always arrange that the pivot π_k occurs above the rows that it operates on), since E_{i,k;β}^{−1} = E_{i,k;−β} and the E_k's are products of E_{i,k;β_{i,k}}'s, from
E_{n−1} · · · E_2E_1A = U,
we get A = E_1^{−1} · · · E_{n−1}^{−1}U, where L = E_1^{−1} · · · E_{n−1}^{−1} is a unit lower-triangular matrix, so that A = LU.
One of the main reasons why the existence of an LU -factorization for a matrix A is
interesting is that if we need to solve several linear systems Ax = b corresponding to the
same matrix A, we can do this cheaply by solving the two triangular systems
Lw = b,
and U x = w.
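The two triangular solves are cheap. A minimal sketch (assuming L is unit lower-triangular, as in the text, and U has nonzero diagonal; the small example matrices are illustrative):

def solve_lu(L, U, b):
    n = len(b)
    w = [0.0] * n
    for i in range(n):                     # forward substitution: L w = b
        w[i] = b[i] - sum(L[i][j] * w[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back-substitution: U x = w
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (w[i] - s) / U[i][i]
    return x

# A = [[2, 1], [6, 4]] = LU with L = [[1, 0], [3, 1]], U = [[2, 1], [0, 1]].
print(solve_lu([[1, 0], [3, 1]], [[2, 1], [0, 1]], [3, 10]))   # [1.0, 1.0]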
The following easy proposition shows that, in principle, A can be premultiplied by some
permutation matrix P , so that P A can be converted to upper-triangular form without using
any pivoting. Permutations are discussed in some detail in Section 20.3, but for now we
just need their definition. A permutation matrix is a square matrix that has a single 1 in
every row and every column and zeros everywhere else. It is shown in Section 20.3 that
every permutation matrix is a product of transposition matrices (the P (i, k)s), and that P
is invertible with inverse P > .
Proposition 6.4. Let A be an invertible n × n matrix. Then, there is some permutation matrix P so that PA[1..k, 1..k] is invertible for k = 1, . . . , n.

Proof. The case n = 1 is trivial, and so is the case n = 2 (we swap the rows if necessary). If n ≥ 3, we proceed by induction. Since A is invertible, its columns are linearly independent; in particular, its first n − 1 columns are also linearly independent. Delete the last column of A. Since the remaining n − 1 columns are linearly independent, there are also n − 1 linearly independent rows in the corresponding n × (n − 1) matrix. Thus, there is a permutation of these n rows so that the (n − 1) × (n − 1) matrix consisting of the first n − 1 rows is invertible. But then, there is a corresponding permutation matrix P_1, so that the first n − 1 rows and columns of P_1A form an invertible matrix A′. Applying the induction hypothesis to the (n − 1) × (n − 1) matrix A′, we see that there is some permutation matrix P_2 (leaving the nth row fixed), so that P_2P_1A[1..k, 1..k] is invertible for k = 1, . . . , n − 1. Since A is invertible in the first place and P_1 and P_2 are invertible, P_2P_1A is also invertible, and we are done.
Remark: One can also prove Proposition 6.4 using a clever reordering of the Gaussian elimination steps suggested by Trefethen and Bau [110] (Lecture 21). Indeed, we know that if A is invertible, then there are permutation matrices P_i and products of elementary matrices E_i, so that
A_n = E_{n−1}P_{n−1} · · · E_2P_2E_1P_1A,
where U = A_n is upper-triangular. For example, when n = 4, we have E_3P_3E_2P_2E_1P_1A = U. We can define new matrices E′_1, E′_2, E′_3 which are still products of elementary matrices so that we have
E′_3E′_2E′_1P_3P_2P_1A = U.
Indeed, if we let E′_3 = E_3, E′_2 = P_3E_2P_3^{−1}, and E′_1 = P_3P_2E_1P_2^{−1}P_3^{−1}, we easily verify that each E′_k is a product of elementary matrices and that
E′_3E′_2E′_1P_3P_2P_1 = E_3(P_3E_2P_3^{−1})(P_3P_2E_1P_2^{−1}P_3^{−1})P_3P_2P_1 = E_3P_3E_2P_2E_1P_1.
It can also be proved that E′_1, E′_2, E′_3 are lower triangular (see Theorem 6.5).
In general, we let
E′_k = P_{n−1} · · · P_{k+1}E_kP_{k+1}^{−1} · · · P_{n−1}^{−1},
and we have
E′_{n−1} · · · E′_1P_{n−1} · · · P_1A = U,
where each E′_k is a lower-triangular matrix.
Theorem 6.5. (PA = LU factorization) For every invertible n × n matrix A, the following hold:

(1) There is some permutation matrix P, some upper-triangular matrix U, and some unit lower-triangular matrix L, such that PA = LU. Furthermore, if P = I, then L and U are unique, and they are produced as a result of Gaussian elimination without pivoting.

(2) If E_{n−1} · · · E_1A = U is the result of Gaussian elimination without pivoting, then the kth column of L is the kth column of E_k^{−1}; that is, L is the unit lower-triangular matrix whose entry ℓ_{ik} (for i > k) is the multiplier used to zero the entry in position (i, k) during step k.

(3) If E_{n−1}P_{n−1} · · · E_1P_1A = U is the result of Gaussian elimination with pivoting, and if we define the matrices E_j^k (1 ≤ j ≤ n − 1, j ≤ k ≤ n − 1) by
E_j^j = E_j,  E_j^k = P_kE_j^{k−1}P_k for k = j + 1, . . . , n − 1,  and E_{n−1}^{n−1} = E_{n−1},
then
E_j^k = P_kP_{k−1} · · · P_{j+1}E_jP_{j+1} · · · P_{k−1}P_k,
U = E_{n−1}^{n−1} · · · E_1^{n−1}P_{n−1} · · · P_1A,
and if we set
P = P_{n−1} · · · P_1 and L = (E_1^{n−1})^{−1} · · · (E_{n−1}^{n−1})^{−1},
then
PA = LU.
Furthermore,
(E_j^k)^{−1} = I + ℰ_j^k, 1 ≤ j ≤ n − 1, j ≤ k ≤ n − 1,
where ℰ_j^k is a lower-triangular matrix whose only nonzero entries are the entries ℓ^k_{j+1 j}, . . . , ℓ^k_{n j} in column j below the diagonal, we have
E_j^k = I − ℰ_j^k,
and
ℰ_j^k = P_kℰ_j^{k−1}, 1 ≤ j ≤ n − 2, j + 1 ≤ k ≤ n − 1.

(4) Define the lower-triangular matrices Λ_k by Λ_1 = ℰ_1^1 and, for k = 2, . . . , n − 1,
Λ′_k = P_kΛ_{k−1},  Λ_k = (I + Λ′_k)E_k^{−1} − I,
so that the first k columns of Λ_k hold the multipliers of the first k elimination steps (with their rows permuted by the later pivoting steps) and the remaining columns of Λ_k are zero,
with P_k = I or P_k = P(k, i) for some i > k. This means that in assembling L, row k and row i of Λ_{k−1} need to be permuted when a pivoting step permuting row k and row i of A_k is required. Then
I + Λ_k = (E_1^k)^{−1} · · · (E_k^k)^{−1},
Λ_k = ℰ_1^k + · · · + ℰ_k^k,
for k = 1, . . . , n − 1, and therefore
L = I + Λ_{n−1}.
Proof. (1) The only part that has not been proved is the uniqueness part (when P = I). Assume that A is invertible and that A = L_1U_1 = L_2U_2, with L_1, L_2 unit lower-triangular and U_1, U_2 upper-triangular. Then, we have
L_2^{−1}L_1 = U_2U_1^{−1}.
However, it is obvious that L_2^{−1} is lower-triangular and that U_1^{−1} is upper-triangular, and so L_2^{−1}L_1 is lower-triangular and U_2U_1^{−1} is upper-triangular. Since the diagonal entries of L_1 and L_2 are 1, the above equality is only possible if U_2U_1^{−1} = I, that is, U_1 = U_2, and so L_1 = L_2.
(2) When P = I, we have L = E_1^{−1}E_2^{−1} · · · E_{n−1}^{−1}, where E_k is the product of the elementary matrices used during step k, so that
E_k^{−1} = I + ℰ_k,
where ℰ_k is the matrix whose only nonzero entries are the multipliers ℓ_{k+1 k}, . . . , ℓ_{n k} in column k below the diagonal. If we define L_k by

\[ L_k = \begin{pmatrix} 1 & 0 & \cdots & 0 & 0 & \cdots & 0\\ \ell_{21} & 1 & \cdots & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots\\ \ell_{k1} & \ell_{k2} & \cdots & 1 & 0 & \cdots & 0\\ \ell_{k+1\,1} & \ell_{k+1\,2} & \cdots & \ell_{k+1\,k} & 1 & \cdots & 0\\ \vdots & \vdots & & \vdots & \vdots & \ddots & \vdots\\ \ell_{n1} & \ell_{n2} & \cdots & \ell_{nk} & 0 & \cdots & 1 \end{pmatrix} \]

for k = 1, . . . , n − 1 (so that the first k columns of L_k carry the multipliers and its last n − k columns are those of the identity), then
L_k = L_{k−1}E_k^{−1}, 2 ≤ k ≤ n − 1,
because multiplication on the right by E_k^{−1} adds ℓ_{ik} times column i to column k (of the matrix L_{k−1}) with i > k, and column i of L_{k−1} has only the nonzero entry 1 as its ith element. Since
L_k = E_1^{−1} · · · E_k^{−1}, 1 ≤ k ≤ n − 1,
we conclude that L = L_{n−1}, proving our claim about the shape of L.
(3) First, we prove by induction on k that
A_{k+1} = E_k^k · · · E_1^kP_k · · · P_1A, k = 1, . . . , n − 2.
For k = 1, we have A_2 = E_1P_1A = E_1^1P_1A, since E_1^1 = E_1, so the base case holds. For the induction step, since
A_{k+1} = E_kP_kA_k,
the induction hypothesis yields
A_{k+1} = E_kP_kE_{k−1}^{k−1} · · · E_2^{k−1}E_1^{k−1}P_{k−1} · · · P_1A
        = E_kP_kE_{k−1}^{k−1}(P_kP_k) · · · (P_kP_k)E_2^{k−1}(P_kP_k)E_1^{k−1}(P_kP_k)P_{k−1} · · · P_1A
        = E_k(P_kE_{k−1}^{k−1}P_k) · · · (P_kE_2^{k−1}P_k)(P_kE_1^{k−1}P_k)P_kP_{k−1} · · · P_1A,
using P_kP_k = I. Observe that P_k has been moved to the right of the elimination steps. However, by definition,
E_j^k = P_kE_j^{k−1}P_k, j = 1, . . . , k − 1, and E_k^k = E_k,
so we get
A_{k+1} = E_k^kE_{k−1}^k · · · E_2^kE_1^kP_k · · · P_1A,
establishing the induction step. Since U = A_n = E_{n−1}P_{n−1}A_{n−1} and E_{n−1}^{n−1} = E_{n−1}, the same manipulation for k = n − 1 gives
U = E_{n−1}^{n−1} · · · E_1^{n−1}P_{n−1} · · · P_1A.
The identity
E_j^k = P_kP_{k−1} · · · P_{j+1}E_jP_{j+1} · · · P_{k−1}P_k
is clear, since for j = 1, . . . , n − 2 we have E_j^j = E_j and E_j^k = P_kE_j^{k−1}P_k for k = j + 1, . . . , n − 1, and E_{n−1}^{n−1} = E_{n−1}. Since P_k^{−1} = P_k, we also get
(E_j^k)^{−1} = P_k(E_j^{k−1})^{−1}P_k, j = 1, . . . , n − 2, k = j + 1, . . . , n − 1.
Since
(E_j^{k−1})^{−1} = I + ℰ_j^{k−1},
we therefore have
(E_j^k)^{−1} = I + P_kℰ_j^{k−1}P_k, 1 ≤ j ≤ n − 2, j + 1 ≤ k ≤ n − 1.
We now prove, for j = 1, . . . , n − 1 and k = j, . . . , n − 1, that ℰ_j^k is a matrix whose only nonzero entries are the entries ℓ^k_{j+1 j}, . . . , ℓ^k_{n j} in column j below the diagonal, and that
ℰ_j^k = P_kℰ_j^{k−1}, 1 ≤ j ≤ n − 2, j + 1 ≤ k ≤ n − 1.
The base case k = j holds, since (E_j^j)^{−1} = E_j^{−1} = I + ℰ_j^j, where ℰ_j^j carries the multipliers of step j in column j below the diagonal. For the induction step, we only need to consider the case where P_k = P(k, i) is a transposition, since the case where P_k = I is trivial. We have to figure out what P_kℰ_j^{k−1}P_k = P(k, i)ℰ_j^{k−1}P(k, i) is. However, the only nonzero entries of ℰ_j^{k−1} lie in column j, and j < k ≤ i, so multiplying ℰ_j^{k−1} on the right by P(k, i) permutes two of its zero columns; hence
ℰ_j^{k−1}P(k, i) = ℰ_j^{k−1},
and thus,
(E_j^k)^{−1} = I + P(k, i)ℰ_j^{k−1},
which shows that
ℰ_j^k = P(k, i)ℰ_j^{k−1}.
We also know that multiplying ℰ_j^{k−1} on the left by P(k, i) permutes rows i and k, which both lie below row j, so ℰ_j^k has the desired form, as claimed. Since all ℰ_j^k are strictly lower triangular, all (E_j^k)^{−1} = I + ℰ_j^k are lower triangular, so the product
L = (E_1^{n−1})^{−1} · · · (E_{n−1}^{n−1})^{−1}
is also lower triangular.
It remains to prove that
I + Λ_k = (E_1^k)^{−1} · · · (E_k^k)^{−1} and Λ_k = ℰ_1^k + · · · + ℰ_k^k, for k = 1, . . . , n − 1.
First observe that if j < j′, then ℰ_j^kℰ_{j′}^k = 0, because the nonzero entries of ℰ_j^k lie in column j while the nonzero rows of ℰ_{j′}^k have index greater than j′ > j. Expanding the product of the factors (E_j^k)^{−1} = I + ℰ_j^k and using this observation, we get
(E_1^k)^{−1} · · · (E_k^k)^{−1} = I + ℰ_1^k + · · · + ℰ_k^k.  (∗)
For k = 1 we have E_1^{−1} = I + ℰ_1^1 = I + Λ_1, so the base case holds. Assume inductively that
Λ_{k−1} = ℰ_1^{k−1} + · · · + ℰ_{k−1}^{k−1} and I + Λ_{k−1} = (E_1^{k−1})^{−1} · · · (E_{k−1}^{k−1})^{−1}.
Using (∗) for the first k − 1 factors and the identity ℰ_j^k = P_kℰ_j^{k−1}, we get
(E_1^k)^{−1} · · · (E_{k−1}^k)^{−1}(E_k^k)^{−1} = (I + ℰ_1^k + · · · + ℰ_{k−1}^k)E_k^{−1}
                                             = (I + P_k(ℰ_1^{k−1} + · · · + ℰ_{k−1}^{k−1}))E_k^{−1}
                                             = (I + P_kΛ_{k−1})E_k^{−1}.
However, by definition,
I + Λ_k = (I + P_kΛ_{k−1})E_k^{−1},
which proves that
I + Λ_k = (E_1^k)^{−1} · · · (E_{k−1}^k)^{−1}(E_k^k)^{−1},
and applying (∗) to the right-hand side yields Λ_k = ℰ_1^k + · · · + ℰ_k^k, finishing the induction step and the proof.
Part (3) of Theorem 6.5 shows the remarkable fact that in assembling the matrix L while performing Gaussian elimination with pivoting, the only change to the algorithm is to make the same transposition on the rows of L (really Λ_k, since the ones are not altered) that we make on the rows of A (really A_k) during a pivoting step involving row k and row i. We can also assemble P by starting with the identity matrix and applying to P the same row transpositions that we apply to A and Λ. Here is an example illustrating this method.
Starting from a 4 × 4 matrix A, we set P_0 = I_4, and we can also set Λ_0 = 0. The first step is to permute row 1 and row 2, using the pivot 4; we also apply this permutation to P_0. Next, we subtract 1/4 times row 1 from row 2, 1/2 times row 1 from row 3, and add 3/4 times row 1 to row 4, and start assembling Λ by recording the multipliers 1/4, 1/2, −3/4 in its first column. Next we permute row 2 and row 4, using the pivot 5; we apply this permutation to Λ and P as well, and after the corresponding elimination step we record the multiplier 1/5 in the second column of Λ. Finally, we permute row 3 and row 4, using the pivot 6, again applying the permutation to Λ and P, and perform the last elimination step. At the end of this process we obtain an upper-triangular matrix U, the unit lower-triangular matrix L = I + Λ (whose subdiagonal entries are the recorded multipliers, with their rows permuted along the way), and the permutation matrix P, and one checks that indeed PA = LU.
Note that if one is willing to overwrite the lower triangular part of the evolving matrix A, one can store the evolving Λ there, since these entries will eventually be zero anyway! There is also no need to save explicitly the permutation matrix P. One could instead record the permutation steps in an extra column (record the vector (π(1), . . . , π(n)) corresponding to the permutation applied to the rows). We let the reader write such a bold and space-efficient version of LU-decomposition!
As a corollary of Theorem 6.5(1), we can show the following result.
Proposition 6.6. If an invertible symmetric matrix A has an LU -decomposition, then A
has a factorization of the form
A = LDL> ,
where L is a lower-triangular matrix whose diagonal entries are equal to 1, and where D
consists of the pivots. Furthermore, such a decomposition is unique.
Proof. If A has an LU -factorization, then it has an LDU factorization
A = LDU,
The choice of pivot matters numerically. Consider the linear system
10^{−4}x + y = 1
x + y = 2.
If we use the pivot 10^{−4}, one elimination step gives
10^{−4}x + y = 1
(1 − 10^4)y = 2 − 10^4,
whose exact solution is
x = 10^4/(10^4 − 1), y = (10^4 − 2)/(10^4 − 1).
However, if roundoff takes place on the fourth digit, then 10^4 − 1 = 9999 and 10^4 − 2 = 9998 will both be rounded off to 9990, and then the solution is x = 0 and y = 1, very far from the exact solution where x ≈ 1 and y ≈ 1. The problem is that we picked a very small pivot. If instead we permute the equations, the pivot is 1, and after elimination, we get the system
x + y = 2
(1 − 10^{−4})y = 1 − 2 × 10^{−4}.
This time, 1 − 10^{−4} = 0.9999 and 1 − 2 × 10^{−4} = 0.9998 are rounded off to 0.999 and the solution is x = 1, y = 1, much closer to the exact solution.
To remedy this problem, one may use the strategy of partial pivoting. This consists of
choosing during step k (1 ≤ k ≤ n − 1) one of the entries a^k_{ik} such that
|a^k_{ik}| = max_{k ≤ p ≤ n} |a^k_{pk}|.
By maximizing the value of the pivot, we avoid dividing by undesirably small pivots.
(Recall that a matrix A is strictly column diagonally dominant iff
|a_{jj}| > Σ_{i=1, i≠j}^{n} |a_{ij}|, for j = 1, . . . , n,
and strictly row diagonally dominant iff
|a_{ii}| > Σ_{j=1, j≠i}^{n} |a_{ij}|, for i = 1, . . . , n.)
It has been known for a long time (before 1900, say by Hadamard) that if a matrix A
is strictly column diagonally dominant (resp. strictly row diagonally dominant), then it is
invertible. (This is a good exercise, try it!) It can also be shown that if A is strictly column
diagonally dominant, then Gaussian elimination with partial pivoting does not actually require pivoting (See Problem 21.6 in Trefethen and Bau [110], or Question 2.19 in Demmel
[27]).
Another strategy, called complete pivoting, consists in choosing some entry a^k_{ij}, where k ≤ i, j ≤ n, such that
|a^k_{ij}| = max_{k ≤ p,q ≤ n} |a^k_{pq}|.
However, in this method, if the chosen pivot is not in column k, it is also necessary to
permute columns. This is achieved by multiplying on the right by a permutation matrix.
However, complete pivoting tends to be too expensive in practice, and partial pivoting is the
method of choice.
A special case where the LU -factorization is particularly efficient is the case of tridiagonal
matrices, which we now consider.
6.3 Gaussian Elimination of Tridiagonal Matrices

Consider the tridiagonal matrix

\[ A = \begin{pmatrix} b_1 & c_1 & & & & \\ a_2 & b_2 & c_2 & & & \\ & a_3 & b_3 & c_3 & & \\ & & \ddots & \ddots & \ddots & \\ & & & a_{n-1} & b_{n-1} & c_{n-1}\\ & & & & a_n & b_n \end{pmatrix}. \]

Define the sequence
δ_0 = 1, δ_1 = b_1, δ_k = b_kδ_{k−1} − a_kc_{k−1}δ_{k−2}, 2 ≤ k ≤ n.

Proposition 6.7. If A is the tridiagonal matrix above, then δ_k = det(A[1..k, 1..k]) for k = 1, . . . , n.
Proof. By expanding det(A[1..k, 1..k]) with respect to its last row, the proposition follows
by induction on k.
Theorem 6.8. If A is the tridiagonal matrix above and δ_k ≠ 0 for k = 1, . . . , n, then A has the following LU-factorization:

\[ A = \begin{pmatrix} 1 & & & & \\ a_2\dfrac{\delta_0}{\delta_1} & 1 & & & \\ & a_3\dfrac{\delta_1}{\delta_2} & 1 & & \\ & & \ddots & \ddots & \\ & & & a_n\dfrac{\delta_{n-2}}{\delta_{n-1}} & 1 \end{pmatrix} \begin{pmatrix} \dfrac{\delta_1}{\delta_0} & c_1 & & & \\ & \dfrac{\delta_2}{\delta_1} & c_2 & & \\ & & \ddots & \ddots & \\ & & & \dfrac{\delta_{n-1}}{\delta_{n-2}} & c_{n-1}\\ & & & & \dfrac{\delta_n}{\delta_{n-1}} \end{pmatrix}. \]

Proof. Since δ_k = det(A[1..k, 1..k]) ≠ 0 for k = 1, . . . , n, it suffices to check that the product LU of the two matrices above equals A. Indeed,
(LU)_{11} = δ_1/δ_0 = b_1,
(LU)_{k k−1} = a_k(δ_{k−2}/δ_{k−1})(δ_{k−1}/δ_{k−2}) = a_k,  (LU)_{k−1 k} = c_{k−1},
and
(LU)_{kk} = (a_kc_{k−1}δ_{k−2} + δ_k)/δ_{k−1} = b_k, 2 ≤ k ≤ n,
since δ_k = b_kδ_{k−1} − a_kc_{k−1}δ_{k−2} for 2 ≤ k ≤ n.
It follows that there is a simple method to solve a linear system Ax = d where A is tridiagonal (and δ_k ≠ 0 for k = 1, . . . , n). Set
z_1 = c_1/b_1, z_k = c_kδ_{k−1}/δ_k for 2 ≤ k ≤ n − 1, z_n = δ_n/δ_{n−1} = b_n − a_nz_{n−1},
so that A can be factored as

\[ A = \begin{pmatrix} \dfrac{c_1}{z_1} & & & & \\ a_2 & \dfrac{c_2}{z_2} & & & \\ & a_3 & \ddots & & \\ & & \ddots & \dfrac{c_{n-1}}{z_{n-1}} & \\ & & & a_n & z_n \end{pmatrix} \begin{pmatrix} 1 & z_1 & & & \\ & 1 & z_2 & & \\ & & \ddots & \ddots & \\ & & & 1 & z_{n-1}\\ & & & & 1 \end{pmatrix}. \]

As a consequence, the system Ax = d can be solved by computing
z_1 = c_1/b_1, z_k = c_k/(b_k − a_kz_{k−1}) for k = 2, . . . , n − 1, z_n = b_n − a_nz_{n−1},
w_1 = d_1/b_1, w_k = (d_k − a_kw_{k−1})/(b_k − a_kz_{k−1}) for k = 2, . . . , n,
by forward substitution, and then computing
x_n = w_n, x_k = w_k − z_kx_{k+1}, k = n − 1, n − 2, . . . , 1,
by back-substitution, which solves the system Ux = w.
Remark: It can be verified that this requires 3(n 1) additions, 3(n 1) multiplications,
and 2n divisions, a total of 8n 6 operations, which is much less that the O(2n3 /3) required
by Gaussian elimination in general.
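A minimal sketch of this tridiagonal solver (often called the Thomas algorithm) in Python; a, b, c are the sub-, main and super-diagonals, with a[0] and c[n−1] unused, and the small diagonally dominant example is illustrative.

def solve_tridiagonal(a, b, c, d):
    n = len(b)
    z = [0.0] * n
    w = [0.0] * n
    z[0] = c[0] / b[0]
    w[0] = d[0] / b[0]
    for k in range(1, n):
        den = b[k] - a[k] * z[k - 1]          # equals z_n when k = n - 1
        z[k] = c[k] / den if k < n - 1 else 0.0
        w[k] = (d[k] - a[k] * w[k - 1]) / den
    x = [0.0] * n
    x[n - 1] = w[n - 1]
    for k in range(n - 2, -1, -1):            # back-substitution x_k = w_k - z_k x_{k+1}
        x[k] = w[k] - z[k] * x[k + 1]
    return x

# 4x1 + x2 = 5, x1 + 4x2 + x3 = 6, x2 + 4x3 = 5  ->  (1, 1, 1)
print(solve_tridiagonal([0, 1, 1], [4, 4, 4], [1, 1, 0], [5, 6, 5]))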
We now consider the special case of symmetric positive definite matrices (SPD matrices). Recall that an n × n symmetric matrix A is positive definite iff
x^⊤Ax > 0 for all x ∈ R^n with x ≠ 0.
Equivalently, A is symmetric positive definite iff all its eigenvalues are strictly positive. The following facts about a symmetric positive definite matrix A are easily established (some are left as an exercise):
(1) The matrix A is invertible. (Indeed, if Ax = 0, then x^⊤Ax = 0, which implies x = 0.)

(2) We have a_{ii} > 0 for i = 1, . . . , n. (Just observe that for x = e_i, the ith canonical basis vector of R^n, we have e_i^⊤Ae_i = a_{ii} > 0.)

(3) For every n × n invertible matrix Z, the matrix Z^⊤AZ is symmetric positive definite iff A is symmetric positive definite.

Next, we prove that a symmetric positive definite matrix has a special LU-factorization of the form A = BB^⊤, where B is a lower-triangular matrix whose diagonal elements are strictly positive. This is the Cholesky factorization.
6.4 SPD Matrices and the Cholesky Decomposition
First, we note that a symmetric positive definite matrix satisfies the condition of Proposition
6.2.
Proposition 6.9. If A is a symmetric positive definite matrix, then A[1..k, 1..k] is symmetric
positive definite, and thus invertible for k = 1, . . . , n.
Proof. Since A is symmetric, each A[1..k, 1..k] is also symmetric. If w 2 Rk , with 1 k n,
we let x 2 Rn be the vector with xi = wi for i = 1, . . . , k and xi = 0 for i = k + 1, . . . , n.
Now, since A is symmetric positive definite, we have x> Ax > 0 for all x 2 Rn with x 6= 0.
This holds in particular for all vectors x obtained from nonzero vectors w 2 Rk as defined
earlier, and clearly
x> Ax = w> A[1..k, 1..k] w,
which implies that A[1..k, 1..k] is positive definite. Thus, A[1..k, 1..k] is also invertible.
Proposition 6.9 can be strengthened as follows: a symmetric matrix A is positive definite iff det(A[1..k, 1..k]) > 0 for k = 1, . . . , n.

The above fact is known as Sylvester's criterion. We will prove it after establishing the Cholesky factorization.
Let A be an n × n symmetric positive definite matrix and write

\[ A = \begin{pmatrix} a_{11} & W^\top\\ W & C \end{pmatrix}, \]

where C is an (n − 1) × (n − 1) symmetric matrix and W is an (n − 1) × 1 matrix. Since A is symmetric positive definite, a_{11} > 0, and we can compute α = √a_{11}. The trick is that we can factor A uniquely as

\[ A = \begin{pmatrix} a_{11} & W^\top\\ W & C \end{pmatrix} = \begin{pmatrix} \alpha & 0\\ W/\alpha & I \end{pmatrix} \begin{pmatrix} 1 & 0\\ 0 & C - WW^\top/a_{11} \end{pmatrix} \begin{pmatrix} \alpha & W^\top/\alpha\\ 0 & I \end{pmatrix}, \]

i.e., as A = B_1A_1B_1^⊤, where B_1 is lower-triangular with positive diagonal entries. Thus, B_1 is invertible, and by fact (3) above, A_1 is also symmetric positive definite.
Theorem 6.10. (Cholesky factorization) Let A be a symmetric positive definite matrix. Then, there is some lower-triangular matrix B so that A = BB^⊤. Furthermore, B can be chosen so that its diagonal elements are strictly positive, in which case B is unique.

Proof. We proceed by induction on the dimension n of A. For n = 1 we must have a_{11} > 0, and if we let α = √a_{11} and B = (α), the theorem holds trivially. If n ≥ 2, as we explained above, we must have a_{11} > 0, and we can write

\[ A = \begin{pmatrix} a_{11} & W^\top\\ W & C \end{pmatrix} = \begin{pmatrix} \alpha & 0\\ W/\alpha & I \end{pmatrix} \begin{pmatrix} 1 & 0\\ 0 & C - WW^\top/a_{11} \end{pmatrix} \begin{pmatrix} \alpha & W^\top/\alpha\\ 0 & I \end{pmatrix} = B_1A_1B_1^\top, \]

where α = √a_{11}, the matrix B_1 is invertible and
A_1 = \begin{pmatrix} 1 & 0\\ 0 & C - WW^\top/a_{11} \end{pmatrix}
is symmetric positive definite. However, this implies that C − WW^⊤/a_{11} is also symmetric positive definite (consider x^⊤A_1x for every x ∈ R^n with x ≠ 0 and x_1 = 0). Thus, we can apply the induction hypothesis to C − WW^⊤/a_{11} (which is an (n − 1) × (n − 1) matrix), and we find a unique lower-triangular matrix L with positive diagonal entries so that
C − WW^⊤/a_{11} = LL^⊤.
But then, we get

\[ A = \begin{pmatrix} \alpha & 0\\ W/\alpha & I \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & C - WW^\top/a_{11} \end{pmatrix}\begin{pmatrix} \alpha & W^\top/\alpha\\ 0 & I \end{pmatrix} = \begin{pmatrix} \alpha & 0\\ W/\alpha & I \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & LL^\top \end{pmatrix}\begin{pmatrix} \alpha & W^\top/\alpha\\ 0 & I \end{pmatrix} \]
\[ \phantom{A} = \begin{pmatrix} \alpha & 0\\ W/\alpha & I \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & L \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & L^\top \end{pmatrix}\begin{pmatrix} \alpha & W^\top/\alpha\\ 0 & I \end{pmatrix} = \begin{pmatrix} \alpha & 0\\ W/\alpha & L \end{pmatrix}\begin{pmatrix} \alpha & W^\top/\alpha\\ 0 & L^\top \end{pmatrix}. \]

Therefore, if we let

\[ B = \begin{pmatrix} \alpha & 0\\ W/\alpha & L \end{pmatrix}, \]

we have a unique lower-triangular matrix with positive diagonal entries and A = BB^⊤.
The uniqueness of the Cholesky decomposition can also be established using the uniqueness of an LU-decomposition. Indeed, if A = B_1B_1^⊤ = B_2B_2^⊤ where B_1 and B_2 are lower-triangular with positive diagonal entries, if we let Δ_1 (resp. Δ_2) be the diagonal matrix consisting of the diagonal entries of B_1 (resp. B_2), so that (Δ_k)_{ii} = (B_k)_{ii} for k = 1, 2, then we have two LU-decompositions
A = (B_1Δ_1^{−1})(Δ_1B_1^⊤) = (B_2Δ_2^{−1})(Δ_2B_2^⊤),
with B_1Δ_1^{−1}, B_2Δ_2^{−1} unit lower-triangular and Δ_1B_1^⊤, Δ_2B_2^⊤ upper-triangular. By uniqueness (Theorem 6.5(1)), we have
B_1Δ_1^{−1} = B_2Δ_2^{−1}, Δ_1B_1^⊤ = Δ_2B_2^⊤,
and the second equation yields
B_1Δ_1 = B_2Δ_2.  (∗)
The diagonal entries of B_1Δ_1 are (B_1)_{ii}^2 and similarly the diagonal entries of B_2Δ_2 are (B_2)_{ii}^2, so the above equation implies that
(B_1)_{ii}^2 = (B_2)_{ii}^2, i = 1, . . . , n.
Since the diagonal entries of both B_1 and B_2 are assumed to be positive, we must have
(B_1)_{ii} = (B_2)_{ii}, i = 1, . . . , n;
that is, Δ_1 = Δ_2, and since Δ_1 is invertible, (∗) yields B_1 = B_2, proving uniqueness.
The proof of Theorem 6.10 immediately yields an algorithm for computing B from A. For j = 1, . . . , n,
b_{jj} = (a_{jj} − Σ_{k=1}^{j−1} b_{jk}^2)^{1/2},
and for i = j + 1, . . . , n (and j = 1, . . . , n − 1),
b_{ij} = (a_{ij} − Σ_{k=1}^{j−1} b_{ik}b_{jk}) / b_{jj}.
The above formulae are used to compute the jth column of B from top down, using the first j − 1 columns of B previously computed, and the matrix A.
The Cholesky factorization can be used to solve linear systems Ax = b where A is
symmetric positive definite: Solve the two systems Bw = b and B > x = w.
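A minimal Python sketch of the column-by-column formulas above (it assumes A is symmetric positive definite, so that the quantity under the square root is positive):

from math import sqrt

def cholesky(A):
    n = len(A)
    B = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(B[j][k] ** 2 for k in range(j))
        B[j][j] = sqrt(s)                                   # positive when A is SPD
        for i in range(j + 1, n):
            B[i][j] = (A[i][j] - sum(B[i][k] * B[j][k] for k in range(j))) / B[j][j]
    return B

# Example: A = [[4, 2], [2, 5]] gives B = [[2, 0], [1, 2]], and B B^T = A.
print(cholesky([[4, 2], [2, 5]]))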
Remark: It can be shown that this method requires n^3/6 + O(n^2) additions, n^3/6 + O(n^2) multiplications, n^2/2 + O(n) divisions, and O(n) square root extractions. Thus, the Cholesky method requires half of the number of operations required by Gaussian elimination (since Gaussian elimination requires n^3/3 + O(n^2) additions, n^3/3 + O(n^2) multiplications, and n^2/2 + O(n) divisions). It also requires half of the space (only B is needed, as opposed to both L and U). Furthermore, it can be shown that Cholesky's method is numerically stable (see Trefethen and Bau [110], Lecture 23).
Remark: If A = BB^⊤, where B is any invertible matrix, then A is symmetric positive definite.

Proof. Obviously, BB^⊤ is symmetric, and since B is invertible, B^⊤ is invertible, and from
x^⊤Ax = x^⊤BB^⊤x = (B^⊤x)^⊤B^⊤x,
it is clear that x^⊤Ax > 0 if x ≠ 0.
We now give three more criteria for a symmetric matrix to be positive definite.
Proposition 6.11. Let A be any n n symmetric matrix. The following conditions are
equivalent:
(a) A is positive definite.
(b) All principal minors of A are positive; that is, det(A[1..k, 1..k]) > 0 for k = 1, . . . , n (Sylvester's criterion).
(c) A has an LU -factorization and all pivots are positive.
(d) A has an LDL> -factorization and all pivots in D are positive.
Proof. By Proposition 6.9, if A is symmetric positive definite, then each matrix A[1..k, 1..k] is symmetric positive definite for k = 1, . . . , n. By the Cholesky decomposition, A[1..k, 1..k] = Q^⊤Q for some invertible matrix Q, so det(A[1..k, 1..k]) = det(Q)^2 > 0. This shows that (a) implies (b).

If det(A[1..k, 1..k]) > 0 for k = 1, . . . , n, then each A[1..k, 1..k] is invertible. By Proposition 6.2, the matrix A has an LU-factorization, and since the pivots π_k are given by
π_k = a_{11} = det(A[1..1, 1..1]) if k = 1, and π_k = det(A[1..k, 1..k]) / det(A[1..k−1, 1..k−1]) if k = 2, . . . , n,
we see that π_k > 0 for k = 1, . . . , n. Thus (b) implies (c).
Assume A has an LU -factorization and that the pivots are all positive. Since A is
symmetric, this implies that A has a factorization of the form
A = LDL> ,
with L lower-triangular with 1s on its diagonal, and where D is a diagonal matrix with
positive entries on the diagonal (the pivots). This shows that (c) implies (d).
Given a factorization A = LDL^⊤ with all pivots in D positive, if we form the diagonal matrix
√D = diag(√π_1, . . . , √π_n)
and if we let B = L√D, then we have
A = BB^⊤,
with B lower-triangular and invertible. By the remark before Proposition 6.11, A is positive definite. Hence, (d) implies (a).
Criterion (c) yields a simple computational test to check whether a symmetric matrix is
positive definite. There is one more criterion for a symmetric matrix to be positive definite:
its eigenvalues must be positive. We will have to learn about the spectral theorem for
symmetric matrices to establish this criterion.
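In practice, the standard computational test is to attempt a Cholesky factorization, which succeeds exactly when the matrix is symmetric positive definite. A minimal numpy sketch (np.linalg.cholesky raises LinAlgError on non-positive-definite input; the example matrices are illustrative assumptions):

import numpy as np

def is_spd(A):
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        return False
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_spd([[4, 2], [2, 5]]))   # True
print(is_spd([[1, 2], [2, 1]]))   # False (eigenvalues 3 and -1)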
For more on the stability analysis and efficient implementation methods of Gaussian
elimination, LU -factoring and Cholesky factoring, see Demmel [27], Trefethen and Bau [110],
Ciarlet [24], Golub and Van Loan [49], Meyer [80], Strang [104, 105], and Kincaid and Cheney
[63].
6.5 Reduced Row Echelon Form
Gaussian elimination described in Section 6.2 can also be applied to rectangular matrices.
This yields a method for determining whether a system Ax = b is solvable, and a description
of all the solutions when the system is solvable, for any rectangular m n matrix A.
It turns out that the discussion is simpler if we rescale all pivots to be 1, and for this we need a third kind of elementary matrix. For any λ ≠ 0, let E_{i,λ} be the n × n diagonal matrix
E_{i,λ} = diag(1, . . . , 1, λ, 1, . . . , 1),
with (E_{i,λ})_{ii} = λ. Note that E_{i,λ} = I + (λ − 1)e_{ii}, and that E_{i,λ} is invertible with E_{i,λ}^{−1} = E_{i,λ^{−1}}. Multiplying a matrix on the left by E_{i,λ} multiplies its ith row by λ.

Now, after k − 1 elimination steps, if the bottom portion
(a^k_{kk}, a^k_{k+1 k}, . . . , a^k_{mk})
of the kth column of the current matrix A_k is nonzero, so that a pivot π_k can be chosen, after a permutation of rows if necessary, we also divide row k by π_k to obtain the pivot 1, and not only do we zero all the entries i = k + 1, . . . , m in column k, but also all the entries i = 1, . . . , k − 1, so that the only nonzero entry in column k is a 1 in row k. These row operations are achieved by multiplication on the left by elementary matrices.

If a^k_{kk} = a^k_{k+1 k} = · · · = a^k_{mk} = 0, we move on to column k + 1.
The result is that after performing such elimination steps, we obtain a matrix that has a special shape known as a reduced row echelon matrix. Here is an example illustrating this process. Starting from the matrix

\[ A_1 = \begin{pmatrix} 1 & 0 & 2 & 1 & 5\\ 1 & 1 & 5 & 2 & 7\\ 1 & 2 & 8 & 4 & 12 \end{pmatrix} \]

we perform the following steps:

\[ A_1 \longrightarrow A_2 = \begin{pmatrix} 1 & 0 & 2 & 1 & 5\\ 0 & 1 & 3 & 1 & 2\\ 0 & 2 & 6 & 3 & 7 \end{pmatrix}, \]

by subtracting row 1 from row 2 and row 3;

\[ A_2 \longrightarrow \begin{pmatrix} 1 & 0 & 2 & 1 & 5\\ 0 & 2 & 6 & 3 & 7\\ 0 & 1 & 3 & 1 & 2 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 0 & 2 & 1 & 5\\ 0 & 1 & 3 & 3/2 & 7/2\\ 0 & 1 & 3 & 1 & 2 \end{pmatrix} \longrightarrow A_3 = \begin{pmatrix} 1 & 0 & 2 & 1 & 5\\ 0 & 1 & 3 & 3/2 & 7/2\\ 0 & 0 & 0 & -1/2 & -3/2 \end{pmatrix}, \]

after choosing the pivot 2 and permuting row 2 and row 3, dividing row 2 by 2, and subtracting row 2 from row 3;

\[ A_3 \longrightarrow \begin{pmatrix} 1 & 0 & 2 & 1 & 5\\ 0 & 1 & 3 & 3/2 & 7/2\\ 0 & 0 & 0 & 1 & 3 \end{pmatrix} \longrightarrow A_4 = \begin{pmatrix} 1 & 0 & 2 & 0 & 2\\ 0 & 1 & 3 & 0 & -1\\ 0 & 0 & 0 & 1 & 3 \end{pmatrix}, \]

after dividing row 3 by −1/2, subtracting row 3 from row 1, and subtracting (3/2) × row 3 from row 2.

It is clear that columns 1, 2 and 4 are linearly independent, that column 3 is a linear combination of columns 1 and 2, and that column 5 is a linear combination of columns 1, 2, 4.
In general, the sequence of steps leading to a reduced echelon matrix is not unique. For example, we could have chosen 1 instead of 2 as the second pivot in matrix A_2. Nevertheless, the reduced row echelon matrix obtained from any given matrix is unique; that is, it does not depend on the sequence of steps that are followed during the reduction process. This fact is not so easy to prove rigorously, but we will do it later.
If we want to solve a linear system of equations of the form Ax = b, we apply elementary row operations to both the matrix A and the right-hand side b. To do this conveniently, we form the augmented matrix (A, b), which is the m × (n + 1) matrix obtained by adding b as an extra column to the matrix A. For example, if

\[ A = \begin{pmatrix} 1 & 0 & 2 & 1\\ 1 & 1 & 5 & 2\\ 1 & 2 & 8 & 4 \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} 5\\ 7\\ 12 \end{pmatrix}, \]

then the augmented matrix is

\[ (A, b) = \begin{pmatrix} 1 & 0 & 2 & 1 & 5\\ 1 & 1 & 5 & 2 & 7\\ 1 & 2 & 8 & 4 & 12 \end{pmatrix}. \]
Now, for any matrix M, since
M(A, b) = (MA, Mb),
performing elementary row operations on (A, b) is equivalent to simultaneously performing these operations on A and b. For example, consider the system
x_1 + 2x_3 + x_4 = 5
x_1 + x_2 + 5x_3 + 2x_4 = 7
x_1 + 2x_2 + 8x_3 + 4x_4 = 12.
Its augmented matrix is the matrix (A, b) considered above, so the reduction steps applied to this matrix yield the system
x_1 + 2x_3 = 2
x_2 + 3x_3 = −1
x_4 = 3.
This reduced system has the same set of solutions as the original, and obviously x3 can be
chosen arbitrarily. Therefore, our system has infinitely many solutions given by
x_1 = 2 − 2x_3, x_2 = −1 − 3x_3, x_4 = 3,
where x3 is arbitrary.
The following proposition shows that the set of solutions of a system Ax = b is preserved
by any sequence of row operations.
Proposition 6.12. Given any m n matrix A and any vector b 2 Rm , for any sequence
of elementary row operations E1 , . . . , Ek , if P = Ek E1 and (A0 , b0 ) = P (A, b), then the
solutions of Ax = b are the same as the solutions of A0 x = b0 .
Proof. Since each elementary row operation Ei is invertible, so is P , and since (A0 , b0 ) =
P (A, b), then A0 = P A and b0 = P b. If x is a solution of the original system Ax = b, then
multiplying both sides by P we get P Ax = P b; that is, A0 x = b0 , so x is a solution of the
new system. Conversely, assume that x is a solution of the new system, that is A0 x = b0 .
Then, because A0 = P A, b0 = P B, and P is invertible, we get
Ax = P
A0 x = P
1 0
b = b,
193
The following proposition shows that every matrix can be converted to a reduced row
echelon form using row operations.
Recall that a matrix is a reduced row echelon matrix if (a) the first nonzero entry of every nonzero row, called a pivot, is equal to 1; (b) the pivot of every nonzero row occurs strictly to the right of the pivots of the rows above it, and the zero rows (if any) occur last; and (c) a pivot is the only nonzero entry of its column.

Proposition 6.13. Given any m × n matrix A, there is a sequence of row operations E_1, . . . , E_p such that if P = E_p · · · E_1, then U = PA is a reduced row echelon matrix; furthermore, since P is invertible, A and U have the same rank.

Proof. We proceed by induction on m. If A = 0 there is nothing to prove, so assume A ≠ 0 and let j be the index of the leftmost nonzero column of A. Pick a nonzero entry of column j as pivot, bring it to the first row by a row permutation, rescale the first row so that the pivot becomes 1, and subtract suitable multiples of the first row from the other rows so that the only nonzero entry in column j is the 1 in row 1. At the end of this process, we obtain a matrix of the form

\[ A_1 = \begin{pmatrix} 0 & 1 & B\\ 0 & 0 & D \end{pmatrix}. \]

By the induction hypothesis applied to D, there is a sequence of row operations converting D to a reduced row echelon matrix R′, and these operations convert A_1 to

\[ R = \begin{pmatrix} 0 & 1 & B\\ 0 & 0 & R' \end{pmatrix}. \]
Because R′ is a reduced row echelon matrix, the matrix R satisfies conditions (a) and (b) of the reduced row echelon form. Finally, the entries above all pivots in R′ can be cleared out by subtracting suitable multiples of the rows of R′ containing a pivot. The resulting matrix also satisfies condition (c), and the induction step is complete.
Remark: There is a Matlab function named rref that converts any matrix to its reduced
row echelon form.
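The same conversion is available in Python through sympy (a sketch; Matrix.rref() returns the reduced row echelon form together with the indices of the pivot columns, 0-based):

from sympy import Matrix

A = Matrix([[1, 0, 2, 1, 5],
            [1, 1, 5, 2, 7],
            [1, 2, 8, 4, 12]])
R, pivots = A.rref()
print(R)        # the reduced row echelon matrix computed in the example above
print(pivots)   # (0, 1, 3): columns 1, 2 and 4 contain the pivots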
If A is any matrix and if R is a reduced row echelon form of A, the second part of
Proposition 6.13 can be sharpened a little. Namely, the rank of A is equal to the number of
pivots in R.
This is because the structure of a reduced row echelon matrix makes it clear that its rank
is equal to the number of pivots.
Given a system of the form Ax = b, we can apply the reduction procedure to the augmented matrix (A, b) to obtain a reduced row echelon matrix (A′, b′) such that the system A′x = b′ has the same solutions as the original system Ax = b. The advantage of the reduced system A′x = b′ is that there is a simple test to check whether this system is solvable, and to find its solutions if it is solvable.

Indeed, if any row of the matrix A′ is zero and the corresponding entry in b′ is nonzero, then that entry is a pivot and we have the equation
0 = 1,
which means that the system A′x = b′ has no solution. On the other hand, if there is no pivot in b′, then for every row i in which b′_i ≠ 0, there is some column j in A′ where the entry on row i is 1 (a pivot). Consequently, we can assign arbitrary values to the variable x_k if column k does not contain a pivot, and then solve for the pivot variables.
For example, if we consider the reduced row echelon matrix

\[ (A', b') = \begin{pmatrix} 1 & 6 & 0 & 1 & 0\\ 0 & 0 & 1 & 2 & 0\\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \]

there is no solution to A′x = b′ because the third row asserts 0 = 1. On the other hand, the reduced system

\[ (A'', b'') = \begin{pmatrix} 1 & 6 & 0 & 1 & 0\\ 0 & 0 & 1 & 2 & 0\\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \]

has solutions. We can pick the variables x_2, x_4 corresponding to nonpivot columns arbitrarily, and then solve for x_3 (using the second equation) and x_1 (using the first equation).
The above reasoning proves the following theorem:

Theorem 6.15. Given any system Ax = b where A is an m × n matrix, if the augmented matrix (A, b) is a reduced row echelon matrix, then the system Ax = b has a solution iff there is no pivot in b. In that case, an arbitrary value can be assigned to the variable x_j if column j does not contain a pivot.

Nonpivot variables are often called free variables.

Putting Proposition 6.14 and Theorem 6.15 together, we obtain a criterion to decide whether a system Ax = b has a solution: convert the augmented system (A, b) to a reduced row echelon matrix (A′, b′) and check whether b′ has no pivot.
Remark: When writing a program implementing row reduction, we may stop when the last column of the matrix A is reached. In this case, the test whether the system Ax = b is solvable is that the row-reduced matrix A′ has no zero row of index i > r such that b′_i ≠ 0 (where r is the number of pivots, and b′ is the row-reduced right-hand side).
If we have a homogeneous system Ax = 0, which means that b = 0, of course x = 0 is
always a solution, but Theorem 6.15 implies that if the system Ax = 0 has more variables
than equations, then it has some nonzero solution (we call it a nontrivial solution).
Proposition 6.16. Given any homogeneous system Ax = 0 of m equations in n variables, if m < n, then there is a nonzero vector x ∈ R^n such that Ax = 0.

Proof. Convert the matrix A to a reduced row echelon matrix A′. We know that Ax = 0 iff A′x = 0. If r is the number of pivots of A′, we must have r ≤ m, so by Theorem 6.15 we may assign arbitrary values to the n − r > 0 nonpivot variables, and we get nontrivial solutions.
Theorem 6.15 can also be used to characterize when a square matrix is invertible. First,
note the following simple but important fact:
If a square n n matrix A is a row reduced echelon matrix, then either A is the identity
or the bottom row of A is zero.
Proposition 6.17. Let A be a square matrix of dimension n. The following conditions are
equivalent:
(a) The matrix A can be reduced to the identity by a sequence of elementary row operations.
(b) The matrix A is a product of elementary matrices.
(c) The matrix A is invertible.
(d) The system of homogeneous equations Ax = 0 has only the trivial solution x = 0.
Proof. First, we prove that (a) implies (b). If A can be reduced to the identity by a sequence of row operations E_1, . . . , E_p, this means that E_p · · · E_1A = I. Since each E_i is invertible, we get
A = E_1^{−1} · · · E_p^{−1},
where each E_i^{−1} is also an elementary row operation, so (b) holds. Now if (b) holds, since elementary row operations are invertible, A is invertible, and (c) holds. If A is invertible, we already observed that the homogeneous system Ax = 0 has only the trivial solution x = 0, because from Ax = 0 we get A^{−1}Ax = A^{−1}0; that is, x = 0. It remains to prove that (d) implies (a), and for this we prove the contrapositive: if (a) does not hold, then (d) does not hold.

Using our basic observation about reducing square matrices, if A does not reduce to the identity, then A reduces to a row echelon matrix A′ whose bottom row is zero. Say A′ = PA, where P is a product of elementary row operations. Because the bottom row of A′ is zero, the system A′x = 0 has at most n − 1 nontrivial equations, and by Proposition 6.16, this
system has a nonzero solution, and hence so does Ax = 0; thus (d) does not hold, proving the contrapositive.

Proposition 6.17 also yields a method for computing the inverse of an invertible matrix A: reduce A to the identity by elementary row operations, obtaining
E_p · · · E_1A = I.
Multiplying both sides on the right by A^{−1}, we get
A^{−1} = E_p · · · E_1.
From a practical point of view, we can build up the product E_p · · · E_1 by reducing to row echelon form the augmented n × 2n matrix (A, I_n) obtained by adding the n columns of the identity matrix to A. This is just another way of performing the Gauss–Jordan procedure. Here is an example: let us find the inverse of the matrix

\[ A = \begin{pmatrix} 5 & 4\\ 6 & 5 \end{pmatrix}. \]
We form the augmented matrix

\[ (A, I) = \begin{pmatrix} 5 & 4 & 1 & 0\\ 6 & 5 & 0 & 1 \end{pmatrix} \]

and apply elementary row operations to reduce A to the identity. For example,

\[ (A, I) = \begin{pmatrix} 5 & 4 & 1 & 0\\ 6 & 5 & 0 & 1 \end{pmatrix} \longrightarrow \begin{pmatrix} 5 & 4 & 1 & 0\\ 1 & 1 & -1 & 1 \end{pmatrix}, \]

by subtracting row 1 from row 2,

\[ \begin{pmatrix} 5 & 4 & 1 & 0\\ 1 & 1 & -1 & 1 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 0 & 5 & -4\\ 1 & 1 & -1 & 1 \end{pmatrix}, \]

by subtracting 4 times row 2 from row 1, and

\[ \begin{pmatrix} 1 & 0 & 5 & -4\\ 1 & 1 & -1 & 1 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 0 & 5 & -4\\ 0 & 1 & -6 & 5 \end{pmatrix} = (I, A^{-1}), \]

by subtracting row 1 from row 2. Thus

\[ A^{-1} = \begin{pmatrix} 5 & -4\\ -6 & 5 \end{pmatrix}. \]
Proposition 6.17 can also be used to give an elementary proof of the fact that if a square matrix A has a left inverse B (resp. a right inverse B), so that BA = I (resp. AB = I), then A is invertible and A^{−1} = B. This is an interesting exercise, try it!
For the sake of completeness, we prove that the reduced row echelon form of a matrix is
unique. The neat proof given below is borrowed and adapted from W. Kahan.
Proposition 6.18. Let A be any m × n matrix. If U and V are two reduced row echelon matrices obtained from A by applying two sequences of elementary row operations E_1, . . . , E_p and F_1, . . . , F_q, so that
U = E_p · · · E_1A and V = F_q · · · F_1A,
then U = V and E_p · · · E_1 = F_q · · · F_1. In other words, the reduced row echelon form of any matrix is unique.

Proof. Let
C = E_p · · · E_1F_1^{−1} · · · F_q^{−1},
so that
U = CV and V = C^{−1}U.
Since row operations are invertible, a column of U is zero iff the corresponding column of A is zero, and similarly for V. Therefore, we may simplify our task by striking out columns of zeros from U, V, and A, since they will have corresponding indices. We still use n to denote the number of columns of A. Observe that because U and V are reduced row echelon matrices with no zero columns, we must have u_1 = v_1 = ℓ_1 (where ℓ_k denotes the kth column of the identity matrix).

Claim. If U and V are reduced row echelon matrices without zero columns such that U = CV, then for all k ≥ 1, if k ≤ n, then ℓ_k occurs in U iff ℓ_k occurs in V, and if ℓ_k does occur in U, then

1. ℓ_k occurs for the same column index j_k in both U and V;
2. the first j_k columns of U and V match;
3. the subsequent columns in U and V (of index > j_k) whose elements beyond the kth all vanish also match;
4. the first k columns of C match the first k columns of I_n.

We prove this claim by induction on k.

For the base case k = 1, we already know that u_1 = v_1 = ℓ_1. We also have
c_1 = Cℓ_1 = Cv_1 = u_1 = ℓ_1.
The induction step is carried out along the same lines, and the claim implies that U = V, which proves the proposition.

We now take a closer look at the set of solutions of a linear system Ax = b. Let Z denote this set of solutions. If Z is nonempty and x_0, x_1 ∈ Z, then Ax_0 = Ax_1 = b, so
A(x_1 − x_0) = 0,
which means that x_1 − x_0 ∈ Ker(A). Therefore, Z ⊆ x_0 + Ker(A), where x_0 is a special solution of Ax = b. Conversely, if Ax_0 = b, then for any z ∈ Ker(A), we have Az = 0, and so
A(x_0 + z) = Ax_0 + Az = b + 0 = b,
which shows that x_0 + Ker(A) ⊆ Z. Therefore, Z = x_0 + Ker(A).
Given a linear system Ax = b, reduce the augmented matrix (A, b) to its reduced row echelon form (A′, b′). As we showed before, the system Ax = b has a solution iff b′ contains no pivot. Assume that this is the case. Then, if (A′, b′) has r pivots, which means that A′ has r pivots since b′ has no pivot, we know that the first r columns of I_m appear in A′.
We can permute the columns of A′ and renumber the variables in x correspondingly so that the first r columns of I_m match the first r columns of A′, and then our reduced echelon matrix is of the form (R, b′) with

\[ R = \begin{pmatrix} I_r & F\\ 0_{m-r,r} & 0_{m-r,n-r} \end{pmatrix} \quad\text{and}\quad b' = \begin{pmatrix} d\\ 0_{m-r} \end{pmatrix}, \]

where F is an r × (n − r) matrix and d ∈ R^r. Then, because

\[ \begin{pmatrix} I_r & F\\ 0_{m-r,r} & 0_{m-r,n-r} \end{pmatrix}\begin{pmatrix} d\\ 0_{n-r} \end{pmatrix} = \begin{pmatrix} d\\ 0_{m-r} \end{pmatrix} = b', \]

we see that

\[ x_0 = \begin{pmatrix} d\\ 0_{n-r} \end{pmatrix} \]

is a special solution of Rx = b′, and thus of Ax = b. In other words, we get a special solution by assigning the first r entries of b′ to the pivot variables and setting the nonpivot variables (the free variables) to zero.

We can also find a basis of the kernel (nullspace) of A using F. If x = (u, v) is in the kernel of A, with u ∈ R^r and v ∈ R^{n−r}, then x is also in the kernel of R, which means that Rx = 0; that is,

\[ \begin{pmatrix} I_r & F\\ 0_{m-r,r} & 0_{m-r,n-r} \end{pmatrix}\begin{pmatrix} u\\ v \end{pmatrix} = \begin{pmatrix} u + Fv\\ 0_{m-r} \end{pmatrix} = \begin{pmatrix} 0_r\\ 0_{m-r} \end{pmatrix}. \]

Therefore, u = −Fv, and Ker(A) consists of all vectors of the form

\[ \begin{pmatrix} -Fv\\ v \end{pmatrix} = \begin{pmatrix} -F\\ I_{n-r} \end{pmatrix}v, \]

for any arbitrary v ∈ R^{n−r}. It follows that the n − r columns of the matrix

\[ N = \begin{pmatrix} -F\\ I_{n-r} \end{pmatrix} \]
form a basis of the kernel of A. This is because N contains the identity matrix I_{n−r} as a submatrix, so the columns of N are linearly independent. In summary, if N^1, . . . , N^{n−r} are the columns of N, then the general solution of the equation Ax = b is given by

\[ x = \begin{pmatrix} d\\ 0_{n-r} \end{pmatrix} + x_{r+1}N^1 + \cdots + x_nN^{n-r}, \]

where x_{r+1}, . . . , x_n are the free variables; that is, the nonpivot variables.
In the general case where the columns corresponding to pivots are mixed with the columns corresponding to free variables, we find the special solution as follows. Let i_1 < · · · < i_r be the indices of the columns corresponding to pivots. Then, assign b′_k to the pivot variable x_{i_k} for k = 1, . . . , r, and set all other variables to 0. To find a basis of the kernel, we form the n − r vectors N^k obtained as follows. Let j_1 < · · · < j_{n−r} be the indices of the columns corresponding to free variables. For every column j_k corresponding to a free variable (1 ≤ k ≤ n − r), form the vector N^k defined so that the entries N^k_{i_1}, . . . , N^k_{i_r} are equal to the negatives of the first r entries in column j_k (flip the sign of these entries); let N^k_{j_k} = 1, and set all other entries to zero. The presence of the 1 in position j_k guarantees that N^1, . . . , N^{n−r} are linearly independent.
As an illustration of the above method, consider the problem of finding a basis of the subspace V of n × n matrices A ∈ M_n(R) satisfying the following properties:

1. The sum of the entries in every row has the same value (say c_1);

2. The sum of the entries in every column has the same value (say c_2).

It turns out that c_1 = c_2 and that the 2n − 2 equations corresponding to the above conditions are linearly independent. We leave the proof of these facts as an interesting exercise. By the duality theorem, the dimension of the space V of matrices satisfying the above equations is n^2 − (2n − 2). Let us consider the case n = 4. There are 6 equations, and the space V has dimension 10. The equations are
a_{11} + a_{12} + a_{13} + a_{14} − a_{21} − a_{22} − a_{23} − a_{24} = 0
a_{21} + a_{22} + a_{23} + a_{24} − a_{31} − a_{32} − a_{33} − a_{34} = 0
a_{31} + a_{32} + a_{33} + a_{34} − a_{41} − a_{42} − a_{43} − a_{44} = 0
a_{11} + a_{21} + a_{31} + a_{41} − a_{12} − a_{22} − a_{32} − a_{42} = 0
a_{12} + a_{22} + a_{32} + a_{42} − a_{13} − a_{23} − a_{33} − a_{43} = 0
a_{13} + a_{23} + a_{33} + a_{43} − a_{14} − a_{24} − a_{34} − a_{44} = 0,
and the corresponding 6 × 16 coefficient matrix, in the variables a_{11}, a_{12}, . . . , a_{44} listed row by row, is

\[ \begin{pmatrix}
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\
1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 & 0 & 0\\
0 & 1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 & 0\\
0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1
\end{pmatrix}. \]
Performing the reduction to row echelon form yields a 6 × 16 reduced row echelon matrix U with six pivots.
The list pivlist of indices of the pivot variables and the list freelist of indices of the free
variables is given by
pivlist = (1, 2, 3, 4, 5, 9),
freelist = (6, 7, 8, 10, 11, 12, 13, 14, 15, 16).
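The whole computation is easy to regenerate with sympy (a sketch under the assumption that the variables a_{11}, . . . , a_{44} are listed row by row, as above; index output is 0-based, so the pivots print as (0, 1, 2, 3, 4, 8)):

from sympy import Matrix, zeros

n = 4
rows = []
for i in range(n - 1):            # (sum of row i) - (sum of row i+1) = 0
    r = zeros(1, n * n)
    for j in range(n):
        r[i * n + j] = 1
        r[(i + 1) * n + j] = -1
    rows.append(r)
for j in range(n - 1):            # (sum of column j) - (sum of column j+1) = 0
    r = zeros(1, n * n)
    for i in range(n):
        r[i * n + j] = 1
        r[i * n + j + 1] = -1
    rows.append(r)

A = Matrix.vstack(*rows)
R, pivots = A.rref()
basis = A.nullspace()
print(pivots)                     # (0, 1, 2, 3, 4, 8), i.e. variables 1, 2, 3, 4, 5, 9
print(len(basis))                 # 10 = dimension of V
print(Matrix(4, 4, list(basis[0])))   # first basis vector, reshaped as a 4 x 4 matrix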
After applying the algorithm for finding a basis of the kernel described above, we obtain a 16 × 10 matrix BK whose columns form a basis of the solution space V.
The reader should check that in each column j of BK, the lowest 1 belongs to the row whose index is the jth element in freelist, and that in each column j of BK, the signs of the entries whose indices belong to pivlist are the flipped signs of the 6 entries in the column of U corresponding to the jth index in freelist. We can now read off from BK the 4 × 4 matrices that form a basis of V: every column of BK corresponds to a matrix whose rows have been concatenated. This yields 10 matrices M_1, . . . , M_10 forming a basis of V.
Recall that a magic square is a square matrix that satisfies the two conditions about
the sum of the entries in each row and in each column to be the same number, and also
the additional two constraints that the main descending and the main ascending diagonals
add up to this common number. Furthermore, the entries are also required to be positive
integers. For n = 4, the additional two equations are
a22 + a33 + a44
a41 + a32 + a23
a12
a11
a13
a12
a14 = 0
a13 = 0,
and the 8 equations stating that a matrix is a magic square are linearly independent. Again,
by running row elimination, we get a basis of the generalized magic squares whose entries
are not restricted to be positive integers. We find a basis of 8 matrices. For n = 3, we find
a basis of 3 matrices.
A magic square is said to be normal if its entries are precisely the integers 1, 2 . . . , n2 .
Then, since the sum of these entries is
1 + 2 + 3 + + n2 =
n2 (n2 + 1)
,
2
204
and since each row (and column) sums to the same number, this common value (the magic
sum) is
n(n2 + 1)
.
2
It is easy to see that there are no normal magic squares for n = 2. For n = 3, the magic sum
is 15, for n = 4, it is 34, and for n = 5, it is 65.
In the case n = 3, we have the additional condition that the rows and columns add up
to 15, so we end up with a solution parametrized by two numbers x1 , x2 ; namely,
0
1
x1 + x2 5 10 x2
10 x1
@20 2x1 x2
5
2x1 + x2 10A .
x1
x2
15 x1 x2
Thus, in order to find a normal magic square, we have the additional inequality constraints
x1 + x2
x1
x2
2x1 + x2
2x1 + x2
x1
x2
x1 + x2
>5
< 10
< 10
< 20
> 10
>0
>0
< 15,
and all 9 entries in the matrix must be distinct. After a tedious case analysis, we discover the
remarkable fact that there is a unique normal magic square (up to rotations and reflections):
0
1
2 7 6
@9 5 1 A .
4 3 8
It turns out that there are 880 dierent normal magic squares for n = 4, and 275, 305, 224
normal magic squares for n = 5 (up to rotations and reflections). Even for n = 4, it takes a
fair amount of work to enumerate them all! Finding the number of magic squares for n > 5
is an open problem!
Instead of performing elementary row operations on a matrix A, we can perform elementary columns operations, which means that we multiply A by elementary matrices on the
right. As elementary row and column operations, P (i, k), Ei,j; , Ei, perform the following
actions:
1. As a row operation, P (i, k) permutes row i and row k.
205
We can define the notion of a reduced column echelon matrix and show that every matrix
can be reduced to a unique reduced column echelon form. Now, given any m n matrix A,
if we first convert A to its reduced row echelon form R, it is easy to see that we can apply
elementary column operations that will reduce R to a matrix of the form
Ir
0r,n r
,
0m r,r 0m r,n r
where r is the number of pivots (obtained during the row reduction). Therefore, for every
m n matrix A, there exist two sequences of elementary matrices E1 , . . . , Ep and F1 , . . . , Fq ,
such that
Ir
0r,n r
Ep E1 AF1 Fq =
.
0m r,r 0m r,n r
The matrix on the right-hand side is called the rank normal form of A. Clearly, r is the
rank of A. It is easy to see that the rank normal form also yields a proof of the fact that A
and its transpose A> have the same rank.
6.6
In this section, we characterize the linear isomorphisms of a vector space E that leave every
vector in some hyperplane fixed. These maps turn out to be the linear maps that are
represented in some suitable basis by elementary matrices of the form Ei,j; (transvections)
or Ei, (dilatations). Furthermore, the transvections generate the group SL(E), and the
dilatations generate the group GL(E).
Let H be any hyperplane in E, and pick some (nonzero) vector v 2 E such that v 2
/ H,
so that
E = H Kv.
206
Assume that f : E ! E is a linear isomorphism such that f (u) = u for all u 2 H, and that
f is not the identity. We have
f (v) = h + v,
x = y + tv,
then
We have f (x) = x i (1
)y + th = 0 i
y=
Then, if we let w = h + (
x = y + tv =
h.
h + tv =
1))h, we have
(h + (
1)v) =
w,
207
x = th,
that is, f (x) x 2 Kh for all x 2 E. Assume that the hyperplane H is given as the kernel
of some linear form ', and let a = '(v). We have a 6= 0, since v 2
/ H. For any x 2 E, we
have
'(x a 1 '(x)v) = '(x) a 1 '(x)'(v) = '(x) '(x) = 0,
which shows that x
get
a 1 '(x)v = f (x a 1 '(x)v)
= f (x) a 1 '(x)f (v),
so
f (x) = x + '(x)(f (a 1 v)
a 1 v).
'(u) = 0.
a 1 v = h for some
()
208
For v =
u,
as claimed.
Therefore, we proved that every linear isomorphism of E that leaves every vector in some
hyperplane H fixed and has the property that f (x) x 2 H for all x 2 E is given by a map
',u as defined by equation (), where ' is some nonzero linear form defining H and u is
some vector in H. We have ',u = id i u = 0.
Definition 6.3. Given any hyperplane H in E, for any nonzero nonlinear form ' 2 E
defining H (which means that H = Ker (')) and any nonzero vector u 2 H, the linear map
',u given by
',u (x) = x + '(x)u, '(u) = 0,
for all x 2 E is called a transvection of hyperplane H and direction u. The map ',u leaves
every vector in H fixed, and f (x) x 2 Ku for all x 2 E.
The above arguments show the following result.
Proposition 6.20. Let f : E
that f (x) = x for all x 2 H,
vector u 2 E such that u 2
/H
otherwise, f is a dilatation of
u = t(h + (
1)v),
209
which is an elementary matrix of the form E1, . Conversely, it is clear that every elementary
matrix of the form Ei, with 6= 0, 1 is a dilatation.
Now, assume that f is a transvection of hyperplane H and direction u 2 H. Pick some
v2
/ H, and pick some basis (u, e3 , . . . , en ) of H, so that (v, u, e3 , . . . , en ) is a basis of E. Since
f (v) v 2 Ku, the matrix of f is of the form
0
1
1 0 0
B 1
0C
B
C
B ..
.. C ,
.
.
@.
. .A
0 0 1
which is an elementary matrix of the form E2,1; . Conversely, it is clear that every elementary
matrix of the form Ei,j; ( 6= 0) is a transvection.
The following proposition is an interesting exercise that requires good mastery of the
elementary row operations Ei,j; .
Proposition 6.22. Given any invertible n n matrix A, there is a matrix S such that
In 1 0
SA =
= En, ,
0
with = det(A), and where S is a product of elementary matrices of the form Ei,j; ; that
is, S is a composition of transvections.
Surprisingly, every transvection is the composition of two dilatations!
Proposition 6.23. If the field K is not of charateristic 2, then every transvection f of
hyperplane H can be written as f = d2 d1 , where d1 , d2 are dilatations of hyperplane H,
where the direction of d1 can be chosen arbitrarily.
Proof. Pick some dilalation d1 of hyperplane H and scale factor 6= 0, 1. Then, d2 = f d1 1
leaves every vector in H fixed, and det(d2 ) = 1 6= 1. By Proposition 6.21, the linear map
d2 is a dilatation of hyperplane H, and we have f = d2 d1 , as claimed.
Observe that in Proposition 6.23, we can pick = 1; that is, every transvection of
hyperplane H is the compositions of two symmetries about the hyperplane H, one of which
can be picked arbitrarily.
Remark: Proposition 6.23 holds as long as K 6= {0, 1}.
The following important result is now obtained.
Theorem 6.24. Let E be any finite-dimensional vector space over a field K of characteristic
not equal to 2. Then, the group SL(E) is generated by the transvections, and the group
GL(E) is generated by the dilatations.
210
Proof. Consider any f 2 SL(E), and let A be its matrix in any basis. By Proposition 6.22,
there is a matrix S such that
In 1 0
SA =
= En, ,
0
with = det(A), and where S is a product of elementary matrices of the form Ei,j; . Since
det(A) = 1, we have = 1, and the resulxt is proved. Otherwise, En, is a dilatation, S is a
product of transvections, and by Proposition 6.23, every transvection is the composition of
two dilatations, so the second result is also proved.
We conclude this section by proving that any two transvections are conjugate in GL(E).
Let ',u (u 6= 0) be a transvection and let g 2 GL(E) be any invertible linear map. We have
(g ',u g 1 )(x) = g(g 1 (x) + '(g 1 (x))u)
= x + '(g 1 (x))g(u).
Let us find the hyperplane determined by the linear form x 7! '(g 1 (x)). This is the set of
vectors x 2 E such that '(g 1 (x)) = 0, which holds i g 1 (x) 2 H i x 2 g(H). Therefore,
Ker (' g 1 ) = g(H) = H 0 , and we have g(u) 2 g(H) = H 0 , so g ',u g 1 is the transvection
of hyperplane H 0 = g(H) and direction u0 = g(u) (with u0 2 H 0 ).
Conversely, let ,u0 be some transvection (u0 6= 0). Pick some vector v, v 0 such that
'(v) = (v 0 ) = 1, so that
E = H Kv = H 0 v 0 .
There is a linear map g 2 GL(E) such that g(u) = u0 , g(v) = v 0 , and g(H) = H 0 . To
define g, pick a basis (v, u, e2 , . . . , en 1 ) where (u, e2 , . . . , en 1 ) is a basis of H and pick a
basis (v 0 , u0 , e02 , . . . , e0n 1 ) where (u0 , e02 , . . . , e0n 1 ) is a basis of H 0 ; then g is defined so that
g(v) = v 0 , g(u) = u0 , and g(ei ) = g(e0i ), for i = 2, . . . , n 1. If n = 2, then ei and e0i are
missing. Then, we have
(g ',u g 1 )(x) = x + '(g 1 (x))u0 .
Now, ' g 1 also determines the hyperplane H 0 = g(H), so we have ' g
nonzero in K. Since v 0 = g(v), we get
'(v) = ' g 1 (v 0 ) =
and since '(v) = (v 0 ) = 1, we must have
(v 0 ),
= 1. It follows that
,u0 (x).
for some
6.7. SUMMARY
211
Proposition 6.25. Let E be any finite-dimensional vector space. For every transvection
',u (u 6= 0) and every linear map g 2 GL(E), the map g ',u g 1 is the transvection
of hyperplane g(H) and direction g(u) (that is, g ',u g 1 = ' g 1 ,g(u) ). For every other
transvection ,u0 (u0 6= 0), there is some g 2 GL(E) such ,u0 = g ',u g 1 ; in other
words any two transvections (6= id) are conjugate in GL(E). Moreover, if n 3, then the
linear isomorphim g as above can be chosen so that g 2 SL(E).
Proof. We just need to prove that if n
3, then for any two transvections ',u and ,u0
0
(u, u 6= 0), there is some g 2 SL(E) such that ,u0 = g ',u g 1 . As before, we pick a basis
(v, u, e2 , . . . , en 1 ) where (u, e2 , . . . , en 1 ) is a basis of H, we pick a basis (v 0 , u0 , e02 , . . . , e0n 1 )
where (u0 , e02 , . . . , e0n 1 ) is a basis of H 0 , and we define g as the unique linear map such that
g(v) = v 0 , g(u) = u0 , and g(ei ) = e0i , for i = 1, . . . , n 1. But, in this case, both H and
H 0 = g(H) have dimension at least 2, so in any basis of H 0 including u0 , there is some basis
vector e02 independent of u0 , and we can rescale e02 in such a way that the matrix of g over
the two bases has determinant +1.
6.7
Summary
The main concepts and results of this chapter are listed below:
One does not solve (large) linear systems by computing determinants.
Upper-triangular (lower-triangular ) matrices.
Solving by back-substitution (forward-substitution).
Gaussian elimination.
Permuting rows.
The pivot of an elimination step; pivoting.
Transposition matrix ; elementary matrix .
The Gaussian elimination theorem (Theorem 6.1).
Gauss-Jordan factorization.
LU -factorization; Necessary and sufficient condition for the existence of an
LU -factorization (Proposition 6.2).
LDU -factorization.
P A = LU theorem (Theorem 6.5).
LDL> -factorization of a symmetric matrix.
212
Chapter 7
Vector Norms and Matrix Norms
7.1
In order to define how close two vectors or two matrices are, and in order to define the
convergence of sequences of vectors or matrices, we can use the notion of a norm. Recall
that R+ = {x 2 R | x 0}. Also recall
that
p if z = a + ib 2 C is a complex number, with
p
a, b 2 R, then z = a ib and |z| = zz = a2 + b2 (|z| is the modulus of z).
Definition 7.1. Let E be a vector space over a field K, where K is either the field R of
reals, or the field C of complex numbers. A norm on E is a function k k : E ! R+ , assigning
a nonnegative real number kuk to any vector u 2 E, and satisfying the following conditions
for all x, y, z 2 E:
(N1) kxk
0, and kxk = 0 i x = 0.
(positivity)
(N2) k xk = | | kxk.
(triangle inequality)
1, we obtain
k xk = k( 1)xk = |
1| kxk = kxk ;
y + yk kx
yk + kyk ,
kyk kx
yk .
xk = k (x
213
y)k = kx
yk ,
214
we also have
kyk
Therefore,
|kxk
kxk kx
kyk| kx
yk,
yk .
for all x, y 2 E.
()
Observe that setting = 0 in (N2), we deduce that k0k = 0 without assuming (N1).
Then, by setting y = 0 in (), we obtain
|kxk| kxk ,
for all x 2 E.
However, there may be nonzero vectors x 2 E such that kxk = 0. Let us give some
examples of normed vector spaces.
Example 7.1.
1. Let E = R, and kxk = |x|, the absolute value of x.
2. Let E = C, and kzk = |z|, the modulus of z.
3. Let E = Rn (or E = Cn ). There are three standard norms. For every (x1 , . . . , xn ) 2 E,
we have the norm kxk1 , defined such that,
kxk1 = |x1 | + + |xn |,
we have the Euclidean norm kxk2 , defined such that,
kxk2 = |x1 |2 + + |xn |2
1
2
1) by
215
2 R, if ,
q
p
+ .
p
q
0, then
()
To prove the above inequality, we use the fact that the exponential function t 7! et satisfies
the following convexity inequality:
ex+(1
)y
ex + (1
y)ey ,
> 0. If we replace by
1
1
1
1
e p p log + q q log ep log + eq log ,
p
q
which simplifies to
q
p
+ ,
p
q
as claimed.
We will now prove that for any two vectors u, v 2 E, we have
n
X
i=1
()
Since the above is trivial if u = 0 or v = 0, let us assume that u 6= 0 and v 6= 0. Then, the
inequality () with = |ui |/ kukp and = |vi |/ kvkq yields
|ui vi |
|ui |p
|vi |q
+
,
kukp kvkq
p kukpp q kukqq
for i = 1, . . . , n, and by summing up these inequalities, we get
n
X
i=1
216
as claimed. To finish the proof, we simply have to prove that property (N3) holds, since
(N1) and (N2) are clear. Now, for i = 1, . . . , n, we can write
(|ui | + |vi |)p = |ui |(|ui | + |vi |)p
n
X
i=1
n
X
i=1
X
n
i=1
(|ui | + |vi |)
which yields
(|ui | + |vi |)
1/p
(p 1)q
1/q
1)q = p, so we have
X
n
i=1
(|ui | + |vi |)
1/q
kukp + kvkp .
Since |ui + vi | |ui | + |vi |, the above implies the triangle inequality ku + vkp kukp + kvkp ,
as claimed.
For p > 1 and 1/p + 1/q = 1, the inequality
n
X
i=1
|ui vi |
X
n
i=1
|ui |
1/p X
n
i=1
|vi |
1/q
n
X
ui v i ,
i=1
n
X
i=1
|ui v i | =
n
X
i=1
|ui vi |,
217
(|ui + vi |)
1/p
X
n
i=1
|ui |
1/p
X
n
i=1
|vi |
1/q
It is very useful to observe that if we represent (as usual) u = (u1 , . . . , un ) and v = (v1 , . . . , vn )
(in Rn ) by column vectors, then their Euclidean inner product is given by
hu, vi = u> v = v > u,
and when u, v 2 Cn , their Hermitian inner product is given by
hu, vi = v u = u v.
In particular, when u = v, in the complex case we get
kuk22 = u u,
and in the real case, this becomes
kuk22 = u> u.
As convenient as these notations are, we still recommend that you do not abuse them; the
notation hu, vi is more intrinsic and still works when our vector space is infinite dimensional.
The following proposition is easy to show.
Proposition 7.2. The following inequalities hold for all x 2 Rn (or x 2 Cn ):
kxk1 kxk1 nkxk1 ,
p
kxk1 kxk2 nkxk1 ,
p
kxk2 kxk1 nkxk2 .
Proposition 7.2 is actually a special case of a very important result: in a finite-dimensional
vector space, any two norms are equivalent.
218
Definition 7.2. Given any (real or complex) vector space E, two norms k ka and k kb are
equivalent i there exists some positive reals C1 , C2 > 0, such that
kuka C1 kukb
and
Given any norm k k on a vector space of dimension n, for any basis (e1 , . . . , en ) of E,
observe that for any vector x = x1 e1 + + xn en , we have
kxk = kx1 e1 + + xn en k |x1 | ke1 k + + |xn | ken k C(|x1 | + + |xn |) = C kxk1 ,
with C = max1in kei k and
kxk1 = kx1 e1 + + xn en k = |x1 | + + |xn |.
The above implies that
| kuk
kvk | ku
vk C ku
vk1 ,
which means that the map u 7! kuk is continuous with respect to the norm k k1 .
Let S1n
= {x 2 E | kxk1 = 1}.
Now, S1n 1 is a closed and bounded subset of a finite-dimensional vector space, so by Heine
Borel (or equivalently, by BolzanoWeiertrass), S1n 1 is compact. On the other hand, it
is a well known result of analysis that any continuous real-valued function on a nonempty
compact set has a minimum and a maximum, and that they are achieved. Using these facts,
we can prove the following important theorem:
Theorem 7.3. If E is any real or complex vector space of finite dimension, then any two
norms on E are equivalent.
Proof. It is enough to prove that any norm k k is equivalent to the 1-norm. We already proved
that the function x 7! kxk is continuous with respect to the norm k k1 and we observed that
the unit sphere S1n 1 is compact. Now, we just recalled that because the function f : x 7! kxk
is continuous and because S1n 1 is compact, the function f has a minimum m and a maximum
M , and because kxk is never zero on S1n 1 , we must have m > 0. Consequently, we just
proved that if kxk1 = 1, then
0 < m kxk M,
so for any x 2 E with x 6= 0, we get
m kx/ kxk1 k M,
which implies
m kxk1 kxk M kxk1 .
Since the above inequality holds trivially if x = 0, we just proved that k k and k k1 are
equivalent, as claimed.
Next, we will consider norms on matrices.
7.2
219
Matrix Norms
For simplicity of exposition, we will consider the vector spaces Mn (R) and Mn (C) of square
n n matrices. Most results also hold for the spaces Mm,n (R) and Mm,n (C) of rectangular
m n matrices. Since n n matrices can be multiplied, the idea behind matrix norms is
that they should behave well with respect to matrix multiplication.
Definition 7.3. A matrix norm k k on the space of square n n matrices in Mn (K), with
K = R or K = C, is a norm on the vector space Mn (K), with the additional property called
submultiplicativity that
kABk kAk kBk ,
for all A, B 2 Mn (K). A norm on matrices satisfying the above property is often called a
submultiplicative matrix norm.
Since I 2 = I, from kIk = kI 2 k kIk2 , we get kIk
Before giving examples of matrix norms, we need to review some basic definitions about
matrices. Given any matrix A = (aij ) 2 Mm,n (C), the conjugate A of A is the matrix such
that
Aij = aij , 1 i m, 1 j n.
The transpose of A is the n m matrix A> such that
A>
ij = aji ,
1 i m, 1 j n.
AA = A A,
220
2 C is an
Au = u.
If is an eigenvalue of A, then the nonzero vectors u 2 Cn such that Au = u are called
eigenvectors of A associated with ; together with the zero vector, these eigenvectors form a
subspace of Cn denoted by E (A), and called the eigenspace associated with .
Remark: Note that Definition 7.4 requires an eigenvector to be nonzero. A somewhat
unfortunate consequence of this requirement is that the set of eigenvectors is not a subspace,
since the zero vector is missing! On the positive side, whenever eigenvectors are involved,
there is no need to say that they are nonzero. The fact that eigenvectors are nonzero is
implicitly used in all the arguments involving them, so it seems safer (but perhaps not as
elegant) to stituplate that eigenvectors should be nonzero.
If A is a square real matrix A 2 Mn (R), then we restrict Definition 7.4 to real eigenvalues
2 R and real eigenvectors. However, it should be noted that although every complex
matrix always has at least some complex eigenvalue, a real matrix may not have any real
eigenvalues. For example, the matrix
0
1
A=
1 0
221
has the complex eigenvalues i and i, but no real eigenvalues. Thus, typically, even for real
matrices, we consider complex eigenvalues.
i
i
i
i
A is not invertible i
det( I
Now, det( I
A) = 0.
tr(A)
n 1
+ + ( 1)n det(A).
Thus, we see that the eigenvalues of A are the zeros (also called roots) of the above polynomial. Since every complex polynomial of degree n has exactly n roots, counted with their
multiplicity, we have the following definition:
Definition 7.5. Given any square n n matrix A 2 Mn (C), the polynomial
det( I
A) =
tr(A)
n 1
+ + ( 1)n det(A)
and U 6= 0, we have kU k =
6 0, and get
(A) = | | kAk ,
as claimed.
222
Proposition 7.4 also holds for any real matrix norm k k on Mn (R) but the proof is more
subtle and requires the notion of induced norm. We prove it after giving Definition 7.7.
Now, it turns out that if A is a real n n symmetric matrix, then the eigenvalues of A
are all real and there is some orthogonal matrix Q such that
A = Qdiag( 1 , . . . ,
n )Q
>
where diag( 1 , . . . , n ) denotes the matrix whose only nonzero entries (if any) are its diagonal
entries, which are the (real) eigenvalues of A. Similarly, if A is a complex n n Hermitian
matrix, then the eigenvalues of A are all real and there is some unitary matrix U such that
A = U diag( 1 , . . . ,
n )U
where diag( 1 , . . . , n ) denotes the matrix whose only nonzero entries (if any) are its diagonal
entries, which are the (real) eigenvalues of A.
We now return to matrix norms. We begin with the so-called Frobenius norm, which is
2
just the norm k k2 on Cn , where the n n matrix A is viewed as the vector obtained by
concatenating together the rows (or the columns) of A. The reader should check that for
any n n complex matrix A = (aij ),
X
n
i,j=1
|aij |
1/2
p
p
tr(A A) = tr(AA ).
Definition 7.6. The Frobenius norm k kF is defined so that for every square n n matrix
A 2 Mn (C),
X
1/2
n
p
p
2
kAkF =
|aij |
= tr(AA ) = tr(A A).
i,j=1
The following proposition show that the Frobenius norm is a matrix norm satisfying other
nice properties.
Proposition 7.5. The Frobenius norm k kF on Mn (C) satisfies the following properties:
(1) It is a matrix norm; that is, kABkF kAkF kBkF , for all A, B 2 Mn (C).
(2) It is unitarily invariant, which means that for all unitary matrices U, V , we have
kAkF = kU AkF = kAV kF = kU AV kF .
(3)
p
p p
(A A) kAkF n (A A), for all A 2 Mn (C).
223
Proof. (1) The only property that requires a proof is the fact kABkF kAkF kBkF . This
follows from the CauchySchwarz inequality:
kABk2F
n
n
X
X
aik bkj
i,j=1 k=1
n X
n
X
i,j=1
h=1
X
n
i,h=1
|aih |
|aih |
X
n
k=1
X
n
k,j=1
|bkj |
|bkj |
= kAk2F kBk2F .
(2) We have
kAk2F = tr(A A) = tr(V V A A) = tr(V A AV ) = kAV k2F ,
and
kAk2F = tr(A A) = tr(A U U A) = kU Ak2F .
The identity
kAkF = kU AV kF
follows from the previous two.
(3) It is well known that the trace of a matrix is equal to the sum of its eigenvalues.
Furthermore, A A is symmetric positive semidefinite (which means that its eigenvalues are
nonnegative), so (A A) is the largest eigenvalue of A A and
(A A) tr(A A) n(A A),
which yields (3) by taking square roots.
Remark: The Frobenius norm is also known as the Hilbert-Schmidt norm or the Schur
norm. So many famous names associated with such a simple thing!
We now give another method for obtaining matrix norms using subordinate norms. First,
we need a proposition that shows that in a finite-dimensional space, the linear map induced
by a matrix is bounded, and thus continuous.
Proposition 7.6. For every norm k k on Cn (or Rn ), for every matrix A 2 Mn (C) (or
A 2 Mn (R)), there is a real constant CA 0, such that
kAuk CA kuk ,
for every vector u 2 Cn (or u 2 Rn if A is real).
224
x2Cn
x6=0
kAxk
CA .
kxk
x2Cn
x6=0
kAxk
= sup kAxk .
kxk
x2Cn
kxk=1
Similarly
sup
x2Rn
x6=0
kAxk
= sup kAxk .
kxk
x2Rn
kxk=1
x2Cn
x6=0
kAxk
= sup kAxk .
kxk
x2Cn
kxk=1
The function A 7! kAk is called the subordinate matrix norm or operator norm induced
by the norm k k.
225
It is easy to check that the function A 7! kAk is indeed a norm, and by definition, it
satisfies the property
kAxk kAk kxk , for all x 2 Cn .
A norm k k on Mn (C) satisfying the above property is said to be subordinate to the vector
norm k k on Cn . As a consequence of the above inequality, we have
kABxk kAk kBxk kAk kBk kxk ,
for all x 2 Cn , which implies that
kABk kAk kBk
Since the function x 7! kAxk is continuous (because | kAyk kAxk | kAy Axk
CA kx yk) and the unit sphere S n 1 = {x 2 Cn | kxk = 1} is compact, there is some
x 2 Cn such that kxk = 1 and
kAxk = kAk .
Equivalently, there is some x 2 Cn such that x 6= 0 and
kAxk = kAk kxk .
The definition of an operator norm also implies that
kIk = 1.
The above shows that the Frobenius norm is not a subordinate matrix norm (why?). The
notion of subordinate norm can be slightly generalized.
Definition 7.8. If K = R or K = C, for any norm k k on Mm,n (K), and for any two norms
k ka on K n and k kb on K m , we say that the norm k k is subordinate to the norms k ka and
k kb if
kAxkb kAk kxka for all A 2 Mm,n (K) and all x 2 K n .
Remark: For any norm k k on Cn , we can define the function k kR on Mn (R) by
kAkR = sup
x2Rn
x6=0
kAxk
= sup kAxk .
kxk
x2Rn
kxk=1
226
for all real matrices A 2 Mn (R). However, it is possible to construct vector norms k k on Cn
and real matrices A such that
kAkR < kAk .
In order to avoid this kind of difficulties, we define subordinate matrix norms over Mn (C).
Luckily, it turns out that kAkR = kAk for the vector norms, k k1 , k k2 , and k k1 .
We now prove Proposition 7.4 for real matrix norms.
Proposition 7.7. For any matrix norm k k on Mn (R) and for any square n n matrix
A 2 Mn (R), we have
(A) kAk .
Proof. We follow the proof in Denis Serres book [96]. If A is a real matrix, the problem is
that the eigenvectors associated with the eigenvalue of maximum modulus may be complex.
We use a trick based on the fact that for every matrix A (real or complex),
(Ak ) = ((A))k ,
which is left as an exercise (use Proposition 8.5 which shows that if ( 1 , . . . , n ) are the (not
necessarily distinct) eigenvalues of A, then ( k1 , . . . , kn ) are the eigenvalues of Ak , for k 1).
Pick any complex norm k kc on Cn and let k kc denote the corresponding induced norm
on matrices. The restriction of k kc to real matrices is a real norm that we also denote by
k kc . Now, by Theorem 7.3, since Mn (R) has finite dimension n2 , there is some constant
C > 0 so that
kAkc C kAk , for all A 2 Mn (R).
Furthermore, for every k 1 and for every real n n matrix A, by Proposition 7.4, (Ak )
Ak c , and because k k is a matrix norm, Ak kAkk , so we have
((A))k = (Ak ) Ak
for all k
C Ak C kAkk ,
1. It follows that
(A) C 1/k kAk ,
for all k
1.
However because C > 0, we have limk7!1 C 1/k = 1 (we have limk7!1 k1 log(C) = 0). Therefore, we conclude that
(A) kAk ,
as desired.
We now determine explicitly what are the subordinate matrix norms associated with the
vector norms k k1 , k k2 , and k k1 .
227
x2Cn
kxk1 =1
i=1
x2Cn
kxk1 =1
|aij |
n
X
j=1
|aij |
(A A) =
(AA ).
X X
i
aij uj
X
j
|uj |
|aij |
n
X
i=1
max
j
X
i
|aij | kuk1 ,
|aij |.
It remains to show that equality can be achieved. For this let j0 be some index such that
X
X
max
|aij | =
|aij0 |,
j
kAuk1 = max
i
aij uj
max
n
X
j=1
X
j
|aij |.
|aij | kuk1 ,
228
x2Cn
x x=1
Since the matrix A A is symmetric, it has real eigenvalues and it can be diagonalized with
respect to an orthogonal matrix. These facts can be used to prove that the function x 7!
x A Ax has a maximum on the sphere x x = 1 equal to the largest eigenvalue of A A,
namely, (A A). We postpone the proof until we discuss optimizing quadratic functions.
Therefore,
p
kAk2 = (A A).
Let use now prove that (A A) = (AA ). First, assume that (A A) > 0. In this case,
there is some eigenvector u (6= 0) such that
A Au = (A A)u,
and since (A A) > 0, we must have Au 6= 0. Since Au 6= 0,
AA (Au) = (A A)Au
which means that (A A) is an eigenvalue of AA , and thus
(A A) (AA ).
Because (A ) = A, by replacing A by A , we get
(AA ) (A A),
and so (A A) = (AA ).
If (A A) = 0, then we must have (AA ) = 0, since otherwise by the previous reasoning
we would have (A A) = (AA ) > 0. Hence, in all case
kAk22 = (A A) = (AA ) = kA k22 .
For any unitary matrices U and V , it is an easy exercise to prove that V A AV and A A
have the same eigenvalues, so
kAk22 = (A A) = (V A AV ) = kAV k22 ,
229
kAk22 = (A A) = (A U U A) = kU Ak22 .
Finally, if A is a normal matrix (AA = A A), it can be shown that there is some unitary
matrix U so that
A = U DU ,
where D = diag( 1 , . . . ,
n)
A A = (U DU ) U DU = U D U U DU = U D DU .
However, D D = diag(| 1 |2 , . . . , |
n|
(A A) = (D D) = max | i |2 = ((A))2 ,
i
which shows that the Frobenius norm is an upper bound on the spectral norm. The Frobenius
norm is much easier to compute than the spectal norm.
The reader will check that the above proof still holds if the matrix A is real, confirming
the fact that kAkR = kAk for the vector norms k k1 , k k2 , and k k1 . It is also easy to verify
that the proof goes through for rectangular matrices, with the same formulae. Similarly,
the Frobenius norm is also a norm on rectangular matrices. For these norms, whenever AB
makes sense, we have
kABk kAk kBk .
Remark: Let (E, k k) and (F, k k) be two normed vector spaces (for simplicity of notation,
we use the same symbol k k for the norms on E and F ; this should not cause any confusion).
Recall that a function f : E ! F is continuous if for every a 2 E, for every > 0, there is
some > 0 such that for all x 2 E,
if
kx
ak
then
kf (x)
f (a)k .
It is not hard to show that a linear map f : E ! F is continuous i there is some constant
C 0 such that
kf (x)k C kxk for all x 2 E.
If so, we say that f is bounded (or a linear bounded operator ). We let L(E; F ) denote the
set of all continuous (equivalently, bounded) linear maps from E to F . Then, we can define
the operator norm (or subordinate norm) k k on L(E; F ) as follows: for every f 2 L(E; F ),
kf k = sup
x2E
x6=0
kf (x)k
= sup kf (x)k ,
kxk
x2E
kxk=1
230
or equivalently by
kf k = inf{ 2 R | kf (x)k
It is not hard to show that the map f 7! kf k is a norm on L(E; F ) satisfying the property
kf (x)k kf k kxk
for all x 2 E, and that if f 2 L(E; F ) and g 2 L(F ; G), then
kg f k kgk kf k .
Operator norms play an important role in functional analysis, especially when the spaces E
and F are complete.
The following proposition will be needed when we deal with the condition number of a
matrix.
Proposition 7.9. Let k k be any matrix norm and let B be a matrix such that kBk < 1.
(1) If k k is a subordinate matrix norm, then the matrix I + B is invertible and
(I + B)
1
.
kBk
u, so
kuk = kBuk .
Recall that
kBuk kBk kuk
for every subordinate norm. Since kBk < 1, if u 6= 0, then
kBuk < kuk ,
which contradicts kuk = kBuk. Therefore, we must have u = 0, which proves that I + B is
injective, and thus bijective, i.e., invertible. Then, we have
(I + B)
+ B(I + B)
= (I + B)(I + B)
so we get
(I + B)
=I
B(I + B) 1 ,
= I,
231
which yields
1
(I + B)
1 + kBk (I + B)
and finally,
(I + B)
1
.
kBk
where
1, . . . ,
B0
B
B
T = B ...
B
@0
0
..
.
t12 t13
t23
2
.. . .
.
.
0
0
n 1
C
C
C
C,
C
tn 1 n A
n
t1n
t2n
..
.
,...,
n 1
t12
),
B0
B
B
(U D ) 1 A(U D ) = D 1 T D = B ...
B
@0
0
..
.
0
0
t13
t23
...
n 1
..
.
1
t1n
n 2
t2n C
C
.. C .
. C
C
tn 1 n A
n 1
232
for every B 2 Mn (C). Then it is easy to verify that the above function is the matrix norm
subordinate to the vector norm
v 7! (U D ) 1 v
Furthermore, for every > 0, we can pick
n
X
j=i+1
j i
so that
tij | ,
1in
1,
0 1
A=
,
0 0
for which (A) = 0 < kAk, since A 6= 0.
7.3
Unfortunately, there exist linear systems Ax = b whose solutions are not stable under small
perturbations of either b or A. For example, consider the system
0
10 1 0 1
10 7 8 7
x1
32
B 7 5 6 5 C Bx2 C B23C
B
CB C B C
@ 8 6 10 9 A @x3 A = @33A .
7 5 9 10
x4
31
The reader should check that it has the solution
right-hand side, obtaining the new system
0
10
10 7 8 7
x1 +
B 7 5 6 5 C Bx 2 +
B
CB
@ 8 6 10 9 A @x3 +
7 5 9 10
x4 +
the new solutions turns out to be x = (9.2, 12.6, 4.5, 1.1). In other words, a relative error
of the order 1/200 in the data (here, b) produces a relative error of the order 10/1 in the
solution, which represents an amplification of the relative error of the order 2000.
233
This time, the solution is x = ( 81, 137, 34, 22). Again, a small change in the data alters
the result rather drastically. Yet, the original system is symmetric, has determinant 1, and
has integer entries. The problem is that the matrix of the system is badly conditioned, a
concept that we will now explain.
Given an invertible matrix A, first, assume that we perturb b to b + b, and let us analyze
the change between the two exact solutions x and x + x of the two systems
Ax = b
A(x + x) = b + b.
We also assume that we have some norm k k and we use the subordinate matrix norm on
matrices. From
Ax = b
Ax + A x = b + b,
we get
x=A
b,
k bk
.
kbk
Now let us assume that A is perturbed to A + A, and let us analyze the change between
the exact solutions of the two systems
(A +
Ax = b
A)(x + x) = b.
234
It follows that
k xk A
k Ak kx +
xk ,
k Ak
.
kAk
Observe that the above reasoning is valid even if the matrix A + A is singular, as long
as x + x is a solution of the second system. Furthermore, if k Ak is small enough, it is
not unreasonable to expect that the ratio k xk / kx + xk is close to k xk / kxk. This will
be made more precise later.
In summary, for each of the two perturbations, we see that the relative error in the result
is bounded by the relative error in the data, multiplied the number kAk kA 1 k. In fact, this
factor turns out to be optimal and this suggests the following definition:
Definition 7.9. For any subordinate matrix norm k k, for any invertible matrix A, the
number
cond(A) = kAk A 1
is called the condition number of A relative to k k.
The condition number cond(A) measures the sensitivity of the linear system Ax = b to
variations in the data b and A; a feature referred to as the condition of the system. Thus,
when we says that a linear system is ill-conditioned , we mean that the condition number of
its matrix is large. We can sharpen the preceding analysis as follows:
Proposition 7.11. Let A be an invertible matrix and let x and x + x be the solutions of
the linear systems
Ax = b
A(x + x) = b + b.
If b 6= 0, then the inequality
k xk
k bk
cond(A)
kxk
kbk
holds and is the best possible. This means that for a given matrix A, there exist some vectors
b 6= 0 and b 6= 0 for which equality holds.
Proof. We already proved the inequality. Now, because k k is a subordinate matrix norm,
there exist some vectors x 6= 0 and b 6= 0 for which
A
b = A
k bk
and
235
(A +
x be the solutions of
Ax = b
A)(x + x) = b.
1
1
kA 1 k k Ak
Proof. The first inequality has already been proved. To show that equality can be achieved,
let w be any vector such that w 6= 0 and
A 1w = A
and let
kwk ,
x=
A 1w
x=w
b = (A + I)w
(A +
Ax = b
A)(x + x) = b
k xk = | | A 1 w = k Ak A
kx +
xk .
236
A A
1
A)
k Ak < 1,
A is invertible and
1
kA 1 Ak
1
1
kA 1 k k Ak
A(x +
x),
and by adding x to both sides and moving the right-hand side to the left-hand side yields
(I + A
A)(x +
x) = x,
and thus
x+
x = (I + A
A) 1 x,
which yields
x = ((I + A 1 A) 1 I)x = (I + A
= (I + A 1 A) 1 A 1 ( A)x.
A)
we get
k xk
which can be written as
A) 1 (I
(I + A
1
kA 1 k k Ak
A))x
kA 1 k k Ak
kxk ,
1 kA 1 k k Ak
k xk
k Ak
cond(A)
kxk
kAk
1
1
kA 1 k k Ak
A)(x + x) = b + b,
k xk
cond(A)
k Ak k bk
+
;
kxk
1 kA 1 k k Ak
kAk
kbk
237
see Demmel [27], Section 2.2 and Horn and Johnson [57], Section 5.8.
We now list some properties of condition numbers and figure out what cond(A) is in the
case of the spectral norm (the matrix norm induced by k k2 ). First, we need to introduce a
very important factorization of matrices, the singular value decomposition, for short, SVD.
It can be shown that given any n n matrix A 2 Mn (C), there exist two unitary matrices
U and V , and a real diagonal matrix = diag( 1 , . . . , n ), with 1
0,
2
n
such that
A = V U .
The nonnegative numbers
1, . . . ,
If A is a real matrix, the matrices U and V are orthogonal matrices. The factorization
A = V U implies that
A A = U 2 U
and AA = V 2 V ,
which shows that 12 , . . . , n2 are the eigenvalues of both A A and AA , that the columns of U
are corresponding eivenvectors for A A, and that the columns of V are corresponding eivenvectors for AA . In the case of a normal matrix if 1 , . . . , n are the (complex) eigenvalues
of A, then
1 i n.
i = | i |,
Proposition 7.13. For every invertible matrix A 2 Mn (C), the following properties hold:
(1)
cond(A) 1,
cond(A) = cond(A 1 )
cond(A) = cond(A) for all 2 C
{0}.
(2) If cond2 (A) denotes the condition number of A with respect to the spectral norm, then
1
cond2 (A) =
where
1, . . . ,
| 1|
,
| n|
n |.
238
(5) The condition number cond2 (A) is invariant under unitary transformations, which
means that
cond2 (A) = cond2 (U A) = cond2 (AV ),
for all unitary matrices U and V .
Proof. The properties in (1) are immediate consequences of the properties of subordinate
matrix norms. In particular, AA 1 = I implies
1 = kIk kAk A
= cond(A).
(2) We showed earlier that kAk22 = (A A), which is the square of the modulus of the largest
2
eigenvalue of A A. Since we just saw that the eigenvalues of A A are 12
n , where
1 , . . . , n are the singular values of A, we have
kAk2 =
Now, if A is invertible, then
of (A A) 1 are n 2
1
2
1.
1
2
and thus
cond2 (A) =
(3) This follows from the fact that kAk2 = (A) for a normal matrix.
AA = I, so (A A) = 1, and kAk2 =
p (4) If A is a unitary matrix,1 then A A = p
(A A) = 1. We also have kA k2 = kA k2 = (AA ) = 1, and thus cond(A) = 1.
(5) This follows immediately from the unitary invariance of the spectral norm.
Proposition 7.13 (4) shows that unitary and orthogonal transformations are very wellconditioned, and part (5) shows that unitary transformations preserve the condition number.
In order to compute cond2 (A), we need to compute the top and bottom singular values
of A, which may be hard. The inequality
p
kAk2 kAkF n kAk2 ,
may be useful in getting an approximation of cond2 (A) = kAk2 kA 1 k2 , if A
determined.
can be
239
Thus, if A is nearly singular, then there will be some orthonormal pair u, v such that Au
and Av are nearly parallel; the angle (A) will the be small and cot((A)/2)) will be large.
For more details, see Horn and Johnson [57] (Section 5.8 and Section 7.4).
It should be noted that in general (if A is not a normal matrix) a matrix could have
a very large condition number even if all its eigenvalues are identical! For example, if we
consider the n n matrix
0
1
1 2 0 0 ... 0 0
B0 1 2 0 . . . 0 0 C
B
C
B0 0 1 2 . . . 0 0 C
B
C
B
C
A = B ... ... . . . . . . . . . ... ... C ,
B
C
B0 0 . . . 0 1 2 0 C
B
C
@0 0 . . . 0 0 1 2 A
0 0 ... 0 0 0 1
it turns out that cond2 (A)
2n 1 .
A classical example of matrix with a very large condition number is the Hilbert matrix
H (n) , the n n matrix with
1
(n)
Hij =
.
i+j 1
For example, when n = 5,
H (5)
B1
B2
B1
=B
B3
B1
@4
1
5
1
2
1
3
1
4
1
5
1
6
1
3
1
4
1
5
1
6
1
7
1
4
1
5
1
6
1
7
1
8
11
5
1C
6C
C
1C
.
7C
1C
8A
1
9
10
B7
A=B
@8
7
1
7 8 7
5 6 5C
C,
6 10 9 A
5 9 10
240
which is a symmetric, positive, definite, matrix, it can be shown that its eigenvalues, which
in this case are also its singular values because A is SPD, are
1
30.2887 >
3.858 >
0.8431 >
0.01015,
so that
cond2 (A) =
1
4
2984.
The reader should check that for the perturbation of the right-hand side b used earlier, the
relative errors k xk /kxk and k xk /kxk satisfy the inequality
7.4
k xk
k bk
cond(A)
kxk
kbk
The problem of solving an inconsistent linear system Ax = b often arises in practice. This
is a system where b does not belong to the column space of A, usually with more equations
than variables. Thus, such a system has no solution. Yet, we would still like to solve such
a system, at least approximately.
Such systems often arise when trying to fit some data. For example, we may have a set
of 3D data points
{p1 , . . . , pn },
and we have reason to believe that these points are nearly coplanar. We would like to find
a plane that best fits our data points. Recall that the equation of a plane is
x + y + z + = 0,
with (, , ) 6= (0, 0, 0). Thus, every plane is either not parallel to the x-axis ( 6= 0) or not
parallel to the y-axis ( 6= 0) or not parallel to the z-axis ( 6= 0).
Say we have reasons to believe that the plane we are looking for is not parallel to the
z-axis. If we are wrong, in the least squares solution, one of the coefficients, , , will be
very large. If 6= 0, then we may assume that our plane is given by an equation of the form
z = ax + by + d,
and we would like this equation to be satisfied for all the pi s, which leads to a system of n
equations in 3 unknowns a, b, d, with pi = (xi , yi , zi );
ax1 + by1 + d = z1
..
..
.
.
axn + byn + d = zn .
241
However, if n is larger than 3, such a system generally has no solution. Since the above
system cant be solved exactly, we can try to find a solution (a, b, d) that minimizes the
least-squares error
n
X
(axi + byi + d zi )2 .
i=1
This is what Legendre and Gauss figured out in the early 1800s!
In general, given a linear system
Ax = b,
we solve the least squares problem: minimize kAx
bk22 .
This does not appear to be a linear problem, but we can use a trick to convert this
minimization problem into a linear program (which means a problem involving linear constraints).
Note that |x| = max{x, x}. So, by introducing new variables e1 , . . . , en , our minimization problem is equivalent to the linear program (LP):
minimize
subject to
e1 + + en
axi + byi + d zi ei
(axi + byi + d zi ) ei
1 i n.
|axi + byi + d
zi |,
1 i n.
242
For an optimal solution, we must have equality, since otherwise we could decrease some ei
and get an even better solution. Of course, we are no longer dealing with pure linear
algebra, since our constraints are inequalities.
We prefer not getting into linear programming right now, but the above example provides
a good reason to learn more about linear programming!
7.5
Summary
The main concepts and results of this chapter are listed below:
Norms and normed vector spaces.
The triangle inequality.
The Euclidean norm; the `p -norms.
Holders inequality; the CauchySchwarz inequality; Minkowskis inequality.
Hermitian inner product and Euclidean inner product.
Equivalent norms.
All norms on a finite-dimensional vector space are equivalent (Theorem 7.3).
Matrix norms.
Hermitian, symmetric and normal matrices. Orthogonal and unitary matrices.
The trace of a matrix.
Eigenvalues and eigenvectors of a matrix.
The characteristic polynomial of a matrix.
The spectral radius (A) of a matrix A.
The Frobenius norm.
The Frobenius norm is a unitarily invariant matrix norm.
Bounded linear maps.
Subordinate matrix norms.
Characterization of the subordinate matrix norms for the vector norms k k1 , k k2 , and
k k1 .
7.5. SUMMARY
243
244
Chapter 8
Eigenvectors and Eigenvalues
8.1
Given a finite-dimensional vector space E, let f : E ! E be any linear map. If, by luck,
there is a basis (e1 , . . . , en ) of E with respect to which f is represented by a diagonal matrix
0
1
0 ... 0
1
. . . .. C
B
.C
B0
2
D=B. .
C,
.
.. .. 0 A
@ ..
0 ... 0
n
then the action of f on E is very simple; in every direction ei , we have
f (ei ) =
i ei .
We can think of f as a transformation that stretches or shrinks space along the direction
e1 , . . . , en (at least if E is a real vector space). In terms of matrices, the above property
translates into the fact that there is an invertible matrix P and a diagonal matrix D such
that a matrix A can be factored as
A = P DP
When this happens, we say that f (or A) is diagonalizable, the i s are called the eigenvalues
of f , and the ei s are eigenvectors of f . For example, we will see that every symmetric matrix
can be diagonalized. Unfortunately, not every matrix can be diagonalized. For example, the
matrix
1 1
A1 =
0 1
cant be diagonalized. Sometimes, a matrix fails to be diagonalizable because its eigenvalues
do not belong to the field of coefficients, such as
0
1
A2 =
,
1 0
245
246
whose eigenvalues are i. This is not a serious problem because A2 can be diagonalized over
the complex numbers. However, A1 is a fatal case! Indeed, its eigenvalues are both 1 and
the problem is that A1 does not have enough eigenvectors to span E.
The next best thing is that there is a basis with respect to which f is represented by
an upper triangular matrix. In this case we say that f can be triangularized , or that f is
triangulable. As we will see in Section 8.2, if all the eigenvalues of f belong to the field of
coefficients K, then f can be triangularized. In particular, this is the case if K = C.
Now, an alternative to triangularization is to consider the representation of f with respect
to two bases (e1 , . . . , en ) and (f1 , . . . , fn ), rather than a single basis. In this case, if K = R
or K = C, it turns out that we can even pick these bases to be orthonormal , and we get a
diagonal matrix with nonnegative entries, such that
f (ei ) =
i fi ,
1 i n.
The nonzero i s are the singular values of f , and the corresponding representation is the
singular value decomposition, or SVD. The SVD plays a very important role in applications,
and will be considered in detail later.
In this section, we focus on the possibility of diagonalizing a linear map, and we introduce
the relevant concepts to do so. Given a vector space E over a field K, let I denote the identity
map on E.
Definition 8.1. Given any vector space E and any linear map f : E ! E, a scalar 2 K
is called an eigenvalue, or proper value, or characteristic value of f if there is some nonzero
vector u 2 E such that
f (u) = u.
Equivalently, is an eigenvalue of f if Ker ( I f ) is nontrivial (i.e., Ker ( I f ) 6= {0}).
A vector u 2 E is called an eigenvector, or proper vector, or characteristic vector of f if
u 6= 0 and if there is some 2 K such that
f (u) = u;
the scalar
is then an eigenvalue, and we say that u is an eigenvector associated with
. Given any eigenvalue 2 K, the nontrivial subspace Ker ( I f ) consists of all the
eigenvectors associated with together with the zero vector; this subspace is denoted by
E (f ), or E( , f ), or even by E , and is called the eigenspace associated with , or proper
subspace associated with .
Note that distinct eigenvectors may correspond to the same eigenvalue, but distinct
eigenvalues correspond to disjoint sets of eigenvectors.
Remark: As we emphasized in the remark following Definition 7.4, we require an eigenvector
to be nonzero. This requirement seems to have more benefits than inconvenients, even though
247
it may considered somewhat inelegant because the set of all eigenvectors associated with an
eigenvalue is not a subspace since the zero vector is excluded.
Let us now assume that E is of finite dimension n. The next proposition shows that the
eigenvalues of a linear map f : E ! E are the roots of a polynomial associated with f .
Proposition 8.1. Let E be any vector space of finite dimension n and let f be any linear
map f : E ! E. The eigenvalues of f are the roots (in K) of the polynomial
det( I
Proof. A scalar
that
f ).
i
( I
i ( I
f )(u) = 0
f ) = 0.
Definition 8.2. Given any vector space E of dimension n, for any linear map f : E ! E,
the polynomial Pf (X) = f (X) = det(XI f ) is called the characteristic polynomial of
f . For any square matrix A, the polynomial PA (X) = A (X) = det(XI A) is called the
characteristic polynomial of A.
Note that we already encountered the characteristic polynomial in Section 5.7; see Definition 5.8.
Given any basis (e1 , . . . , en ), if A = M (f ) is the matrix of f w.r.t. (e1 , . . . , en ), we can
compute the characteristic polynomial f (X) = det(XI f ) of f by expanding the following
determinant:
X a1 1
a1 2
...
a1 n
a2 1
X a2 2 . . .
a2 n
det(XI A) =
.
..
..
..
..
.
.
.
.
an 1
an 2
... X
an n
= det(XI
A) = X n
(a1 1 + + an n )X n
+ + ( 1)n det(A).
The sum tr(A) = a1 1 + + an n of the diagonal elements of A is called the trace of A. Since
we proved in Section 5.7 that the characteristic polynomial only depends on the linear map
f , the above shows that tr(A) has the same value for all matrices A representing f . Thus,
248
the trace of a linear map is well-defined; we have tr(f ) = tr(A) for any matrix A representing
f.
Remark: The characteristic polynomial of a linear map is sometimes defined as det(f XI).
Since
det(f XI) = ( 1)n det(XI f ),
this makes essentially no dierence but the version det(XI
that the coefficient of X n is +1.
If we write
A (X)
= det(XI
A) = X n
1 (A)X n
+ + ( 1)k k (A)X n
+ + ( 1)n n (A),
1, . . . ,
n,
A (X)
= det(XI
is
= det(XI
1 ) (X
A) = (X
n ),
1(
)X n
where
k( ) =
+ + ( 1)k
X
k(
)X n
+ + ( 1)n
n(
),
i,
I{1,...,n} i2I
|I|=k
i s.
) = k (A)
and, in particular, the product of the eigenvalues of f is equal to det(A) = det(f ), and the
sum of the eigenvalues of f is equal to the trace tr(A) = tr(f ), of f ; for the record,
tr(f ) =
det(f ) =
+ +
1 n,
1
249
where 1 , . . . , n are the eigenvalues of f (and A), where some of the i s may appear more
than once. In particular, f is not invertible i it admits 0 has an eigenvalue.
Remark: Depending on the field K, the characteristic polynomial A (X) = det(XI A)
may or may not have roots in K. This motivates considering algebraically closed fields,
which are fields K such that every polynomial with coefficients in K has all its root in K.
For example, over K = R, not every polynomial has real roots. If we consider the matrix
cos
sin
A=
,
sin
cos
then the characteristic polynomial det(XI A) has no real roots unless = k. However,
over the field C of complex numbers, every polynomial has roots. For example, the matrix
above has the roots cos i sin = ei .
It is possible to show that every linear map f over a complex vector space E must have
some (complex) eigenvalue without having recourse to determinants (and the characteristic
polynomial). Let n = dim(E), pick any nonzero vector u 2 E, and consider the sequence
u, f (u), f 2 (u), . . . , f n (u).
Since the above sequence has n + 1 vectors and E has dimension n, these vectors must be
linearly dependent, so there are some complex numbers c0 , . . . , cm , not all zero, such that
c0 f m (u) + c1 f m 1 (u) + + cm u = 0,
where m n is the largest integer such that the coefficient of f m (u) is nonzero (m must
exits since we have a nontrivial linear dependency). Now, because the field C is algebraically
closed, the polynomial
c0 X m + c1 X m 1 + + cm
can be written as a product of linear factors as
c0 X m + c1 X m
for some complex numbers
1, . . . ,
+ + cm = c0 (X
m
1 ) (X
m)
c0 f m (u) + c1 f m 1 (u) + + cm u = 0
is equivalent to
(f
1 I)
(f
m I)(u)
= 0.
i v;
m I) would be injective,
I
i must have a nontrivial
250
that is,
As nice as the above argument is, it does not provide a method for finding the eigenvalues
of f , and even if we prefer avoiding determinants as a much as possible, we are forced to
deal with the characteristic polynomial det(XI f ).
Definition 8.3. Let A be an n n matrix over a field K. Assume that all the roots of the
characteristic polynomial A (X) = det(XI A) of A belong to K, which means that we can
write
k1
km
det(XI A) = (X
1 ) (X
m) ,
where 1 , . . . , m 2 K are the distinct roots of det(XI A) and k1 + + km = n. The
integer ki is called the algebraic multiplicity of the eigenvalue i , and the dimension of the
eigenspace E i = Ker( i I A) is called the geometric multiplicity of i . We denote the
algebraic multiplicity of i by alg( i ), and its geometric multiplicity by geo( i ).
By definition, the sum of the algebraic multiplicities is equal to n, but the sum of the
geometric multiplicities can be strictly smaller.
Proposition 8.2. Let A be an nn matrix over a field K and assume that all the roots of the
characteristic polynomial A (X) = det(XI A) of A belong to K. For every eigenvalue i
of A, the geometric multiplicity of i is always less than or equal to its algebraic multiplicity,
that is,
geo( i ) alg( i ).
Proof. To see this, if ni is the dimension of the eigenspace E i associated with the eigenvalue
n
i , we can form a basis of K obtained by picking a basis of E i and completing this linearly
independent family to a basis of K n . With respect to this new basis, our matrix is of the
form
i I ni B
0
A =
0
D
and a simple determinant calculation shows that
det(XI
A) = det(XI
A0 ) = (X
i)
ni
det(XIn
ni
D).
ni
Therefore, (X
divides the characteristic polynomial of A0 , and thus, the characteristic
i)
polynomial of A. It follows that ni is less than or equal to the algebraic multiplicity of i .
251
i 1 ui 1
+ + k
i k ui k
= 0,
i1
i1 )uik
= 0,
which is a nontrivial linear dependency among a proper subfamily of (ui1 , . . . , uik ) since the
j are all distinct and the i are nonzero, a contradiction.
Thus, from Proposition 8.3, if 1 , . . . ,
(where m n), we have a direct sum
E
E=E
When
we say that f is diagonalizable (and similarly for any matrix associated with f ). Indeed,
picking a basis in each E i , we obtain a matrix which is a diagonal matrix consisting of the
eigenvalues, each i occurring a number of times equal to the dimension of E i . This happens
if the algebraic multiplicity and the geometric multiplicity of every eigenvalue are equal. In
particular, when the characteristic polynomial has n distinct roots, then f is diagonalizable.
It can also be shown that symmetric matrices have real eigenvalues and can be diagonalized.
For a negative example, we leave as exercise to show that the matrix
1 1
M=
0 1
cannot be diagonalized, even though 1 is an eigenvalue. The problem is that the eigenspace
of 1 only has dimension 1. The matrix
cos
sin
A=
sin
cos
cannot be diagonalized either, because it has no real eigenvalues, unless = k. However,
over the field of complex numbers, it can be diagonalized.
252
8.2
Unfortunately, not every linear map on a complex vector space can be diagonalized. The
next best thing is to triangularize, which means to find a basis over which the matrix has
zero entries below the main diagonal. Fortunately, such a basis always exist.
We say that a square matrix A is an upper triangular matrix if it has the following shape,
0
1
a1 1 a1 2 a1 3 . . . a1 n 1
a1 n
B 0 a2 2 a2 3 . . . a2 n 1
a2 n C
B
C
B 0
0 a3 3 . . . a3 n 1
a3 n C
B
C
B ..
..
.. . .
..
.. C ,
B .
.
.
.
.
. C
B
C
@ 0
0
0 . . . an 1 n 1 an 1 n A
0
0
0 ...
0
an n
Theorem 8.4. Given any finite dimensional vector space over a field K, for any linear map
f : E ! E, there is a basis (u1 , . . . , un ) with respect to which f is represented by an upper
triangular matrix (in Mn (K)) i all the eigenvalues of f belong to K. Equivalently, for every
n n matrix A 2 Mn (K), there is an invertible matrix P and an upper triangular matrix T
(both in Mn (K)) such that
A = PTP 1
i all the eigenvalues of A belong to K.
Proof. If there is a basis (u1 , . . . , un ) with respect to which f is represented by an upper
triangular matrix T in Mn (K), then since the eigenvalues of f are the diagonal entries of T ,
all the eigenvalues of f belong to K.
For the converse, we proceed by induction on the dimension n of E. For n = 1 the result
is obvious. If n > 1, since by assumption f has all its eigenvalue in K, pick some eigenvalue
1
1 2 K of f , and let u1 be some corresponding (nonzero) eigenvector. We can find n
vectors (v2 , . . . , vn ) such that (u1 , v2 , . . . , vn ) is a basis of E, and let F be the subspace of
dimension n 1 spanned by (v2 , . . . , vn ). In the basis (u1 , v2 . . . , vn ), the matrix of f is of
the form
0
1
1 a1 2 . . . a 1 n
B 0 a2 2 . . . a 2 n C
B
C
U = B ..
.. . .
.. C ,
@.
.
.
. A
0 an 2 . . . an n
since its first column contains the coordinates of 1 u1 over the basis (u1 , v2 , . . . , vn ). If we
let p : E ! F be the projection defined such that p(u1 ) = 0 and p(vi ) = vi when 2 i n,
the linear map g : F ! F defined as the restriction of p f to F is represented by the
(n 1) (n 1) matrix V = (ai j )2i,jn over the basis (v2 , . . . , vn ). We need to prove
253
that all the eigenvalues of g belong to K. However, since the first column of U has a single
nonzero entry, we get
U (X)
= det(XI
U ) = (X
1 ) det(XI
V ) = (X
1 ) V (X),
where U (X) is the characteristic polynomial of U and V (X) is the characteristic polynomial
of V . It follows that V (X) divides U (X), and since all the roots of U (X) are in K, all
the roots of V (X) are also in K. Consequently, we can apply the induction hypothesis, and
there is a basis (u2 , . . . , un ) of F such that g is represented by an upper triangular matrix
(bi j )1i,jn 1 . However,
E = Ku1 F,
and thus (u1 , . . . , un ) is a basis for E. Since p is the projection from E = Ku1
and g : F ! F is the restriction of p f to F , we have
f (u1 ) =
and
f (ui+1 ) = a1 i u1 +
F onto F
1 u1
i
X
bi j uj+1
j=1
For the matrix version, we assume that A is the matrix of f with respect to some basis,
Then, we just proved that there is a change of basis matrix P such that A = P T P 1 where
T is upper triangular.
If A = P T P 1 where T is upper triangular, note that the diagonal entries of T are the
eigenvalues 1 , . . . , n of A. Indeed, A and T have the same characteristic polynomial. Also,
if A is a real matrix whose eigenvalues are all real, then P can be chosen to real, and if A
is a rational matrix whose eigenvalues are all rational, then P can be chosen rational. Since
any polynomial over C has all its roots in C, Theorem 8.4 implies that every complex n n
matrix can be triangularized.
If
we obtain
A2 u = A(Au) = A( u) = Au =
u,
which shows that 2 is an eigenvalue of A2 for the eigenvector u. An obvious induction shows
that k is an eigenvalue of Ak for the eigenvector u, for all k
1. Now, if all eigenvalues
k
k
,
.
.
.
,
of
A
are
in
K,
it
follows
that
,
.
.
.
,
are
eigenvalues
of Ak . However, it is not
1
n
1
n
obvious that Ak does not have other eigenvalues. In fact, this cant happen, and this can be
proved using Theorem 8.4.
254
Proposition 8.5. Given any n × n matrix A ∈ M_n(K) with coefficients in a field K, if all
the eigenvalues λ_1, . . . , λ_n of A are in K, then for every polynomial q(X) ∈ K[X], the eigenvalues
of q(A) are exactly (q(λ_1), . . . , q(λ_n)).
Proof. By Theorem 8.4, there is an upper triangular matrix T and an invertible matrix P
(both in M_n(K)) such that
A = P T P^{−1}.
Since A and T are similar, they have the same eigenvalues (with the same multiplicities), so
the diagonal entries of T are the eigenvalues of A. Since
A^k = P T^k P^{−1}   for all k ≥ 1,
for any polynomial q(X) we get q(A) = P q(T) P^{−1}.
Furthermore, it is easy to check that q(T) is upper triangular and that its diagonal entries
are q(λ_1), . . . , q(λ_n), where λ_1, . . . , λ_n are the diagonal entries of T, namely the eigenvalues
of A. It follows that q(λ_1), . . . , q(λ_n) are the eigenvalues of q(A).
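The following small numpy sketch is not part of the original text; it simply checks Proposition 8.5 numerically on a hypothetical 2 × 2 example and the polynomial q(X) = X² − 3X + 2.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 5.0]])
q = lambda M: M @ M - 3 * M + 2 * np.eye(2)   # q applied to a matrix

eig_A = np.linalg.eigvals(A)
eig_qA = np.linalg.eigvals(q(A))

# Spectral mapping: eigenvalues of q(A) are q(lambda_i), up to reordering.
print(np.sort(eig_qA), np.sort(eig_A**2 - 3*eig_A + 2))
```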
If E is a Hermitian space (see Chapter 12), the proof of Theorem 8.4 can be easily adapted
to prove that there is an orthonormal basis (u_1, . . . , u_n) with respect to which the matrix of
f is upper triangular. This is usually known as Schur's lemma.
Theorem 8.6. (Schur decomposition) Given any linear map f : E → E over a complex
Hermitian space E, there is an orthonormal basis (u_1, . . . , u_n) with respect to which f is
represented by an upper triangular matrix. Equivalently, for every n × n matrix A ∈ M_n(C),
there is a unitary matrix U and an upper triangular matrix T such that
A = U T U*.
If A is real and if all its eigenvalues are real, then there is an orthogonal matrix Q and a
real upper triangular matrix T such that
A = Q T Q^⊤.
Proof. During the induction, we choose F to be the orthogonal complement of Cu1 and we
pick orthonormal bases (use Propositions 12.10 and 12.9). If E is a real Euclidean space
and if the eigenvalues of f are all real, the proof also goes through with real matrices (use
Propositions 10.9 and 10.8).
Using Theorem 8.6, we can derive the fact that if A is a Hermitian matrix, then there
is a unitary matrix U and a real diagonal matrix D such that A = U D U*. Indeed, since
A* = A, we get
U T U* = U T* U*,
which implies that T = T*. Since T is an upper triangular matrix, T* is a lower triangular
matrix, which implies that T is a real diagonal matrix. In fact, applying this result to a
(real) symmetric matrix A, we obtain the fact that all the eigenvalues of a symmetric matrix
are real, and by applying Theorem 8.6 again, we conclude that A = Q D Q^⊤, where Q is
orthogonal and D is a real diagonal matrix. We will also prove this in Chapter 13.
When A has complex eigenvalues, there is a version of Theorem 8.6 involving only real
matrices provided that we allow T to be block upper-triangular (the diagonal entries may
be 2 × 2 matrices or real entries).
Theorem 8.6 is not a very practical result but it is a useful theoretical result to cope
with matrices that cannot be diagonalized. For example, it can be used to prove that
every complex matrix is the limit of a sequence of diagonalizable matrices that have distinct
eigenvalues!
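As an illustration (not part of the original text), the Schur factorization of Theorem 8.6 is available in scipy; the short sketch below computes it for a random complex matrix and checks that the diagonal of T carries the eigenvalues.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

T, U = schur(A, output='complex')      # T upper triangular, U unitary, A = U T U*

print(np.allclose(U @ T @ U.conj().T, A))
print(np.allclose(np.sort_complex(np.diag(T)),
                  np.sort_complex(np.linalg.eigvals(A))))
```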
Remark: There is another way to prove Proposition 8.5 that does not use Theorem 8.4, but
instead uses the fact that given any field K, there is a field extension K̄ of K (K ⊆ K̄) such
that every polynomial q(X) = c_0 X^m + · · · + c_{m−1} X + c_m (of degree m ≥ 1) with coefficients
c_i ∈ K factors as
q(X) = c_0 (X − α_1) · · · (X − α_m),   α_i ∈ K̄, i = 1, . . . , m.
The field K̄ is called an algebraically closed field (and an algebraic closure of K).
Assume that all eigenvalues λ_1, . . . , λ_n of A belong to K. Let q(X) be any polynomial
(in K[X]) and let μ ∈ K̄ be any eigenvalue of q(A) (this means that μ is a zero of the
characteristic polynomial χ_{q(A)}(X) ∈ K[X] of q(A). Since K̄ is algebraically closed, χ_{q(A)}(X)
has all its roots in K̄). We claim that μ = q(λ_i) for some eigenvalue λ_i of A.
Proof. (After Lax [71], Chapter 6.) Since K̄ is algebraically closed, the polynomial μ − q(X)
factors as
μ − q(X) = c_0 (X − α_1) · · · (X − α_m),
with α_i ∈ K̄. Then we have
μI − q(A) = c_0 (A − α_1 I) · · · (A − α_m I),
and since the left-hand side is singular, so is the right-hand side, which implies that some
factor A − α_i I is singular. This means that α_i is an eigenvalue of A, say α_i = λ_i. As α_i = λ_i
is a zero of μ − q(X), we get
μ = q(λ_i),
which proves that μ is indeed of the form q(λ_i) for some eigenvalue λ_i of A.
8.3 Location of Eigenvalues
If A is an n × n complex (or real) matrix, it would be useful to know, even roughly, where
the eigenvalues of A are located in the complex plane C. The Gershgorin discs provide some
precise information about this.
Definition 8.4. For any complex n × n matrix A, for i = 1, . . . , n, let
R′_i(A) = ∑_{j=1, j≠i}^{n} |a_{i j}|
and let
G(A) = ⋃_{i=1}^{n} { z ∈ C | |z − a_{i i}| ≤ R′_i(A) }.
Each disc { z ∈ C | |z − a_{i i}| ≤ R′_i(A) } is called a Gershgorin disc and their union G(A) is
called the Gershgorin domain.
Although easy to prove, the following theorem is very useful:
Theorem 8.7. (Gershgorin's disc theorem) For any complex n × n matrix A, all the eigenvalues of A belong to the Gershgorin domain G(A). Furthermore, the following properties
hold:
(1) If A is strictly row diagonally dominant, that is,
|a_{i i}| > ∑_{j=1, j≠i}^{n} |a_{i j}|,   for i = 1, . . . , n,
then A is invertible.
(2) If A is strictly row diagonally dominant, and if a_{i i} > 0 for i = 1, . . . , n, then every
eigenvalue of A has a strictly positive real part.
Proof. Let λ be any eigenvalue of A and let u be a corresponding eigenvector (recall that we
must have u ≠ 0). Let k be an index such that
|u_k| = max_{1≤i≤n} |u_i|.
Since Au = λu, we have
(λ − a_{k k}) u_k = ∑_{j=1, j≠k}^{n} a_{k j} u_j,
which implies that
|λ − a_{k k}| |u_k| ≤ ∑_{j=1, j≠k}^{n} |a_{k j}| |u_j| ≤ |u_k| ∑_{j=1, j≠k}^{n} |a_{k j}|.
Since u ≠ 0, we have |u_k| ≠ 0, and it follows that
|λ − a_{k k}| ≤ ∑_{j=1, j≠k}^{n} |a_{k j}| = R′_k(A),
and thus
λ ∈ { z ∈ C | |z − a_{k k}| ≤ R′_k(A) } ⊆ G(A),
as claimed.
(1) Strict row diagonal dominance implies that 0 does not belong to any of the Gershgorin
discs, so all eigenvalues of A are nonzero, and A is invertible.
(2) If A is strictly row diagonally dominant and a_{i i} > 0 for i = 1, . . . , n, then each of the
Gershgorin discs lies strictly in the right half-plane, so every eigenvalue of A has a strictly
positive real part.
In particular, Theorem 8.7 implies that if a symmetric matrix is strictly row diagonally
dominant and has strictly positive diagonal entries, then it is positive definite. Theorem 8.7
is sometimes called the Gershgorin-Hadamard theorem.
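The following numpy sketch is not part of the original text; it computes the Gershgorin discs of a hypothetical strictly row diagonally dominant matrix and checks the conclusions of Theorem 8.7.

```python
import numpy as np

A = np.array([[10.0, 2.0, 3.0],
              [ 1.0, 8.0, 2.0],
              [ 2.0, 1.0, 9.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)    # R'_i(A)

eigs = np.linalg.eigvals(A)
print([np.any(np.abs(lam - centers) <= radii) for lam in eigs])  # each eigenvalue in some disc
print(np.all(eigs.real > 0))                                     # strictly positive real parts
```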
Since A and A^⊤ have the same eigenvalues (even for complex matrices) we also have a
version of Theorem 8.7 for the discs of radius
C′_j(A) = ∑_{i=1, i≠j}^{n} |a_{i j}|,
whose union is called the column Gershgorin domain.
Theorem 8.8. For any complex n × n matrix A, all the eigenvalues of A belong to the column
Gershgorin domain. Furthermore, the following properties hold:
(1) If A is strictly column diagonally dominant, that is,
|a_{j j}| > ∑_{i=1, i≠j}^{n} |a_{i j}|,   for j = 1, . . . , n,
then A is invertible.
(2) If A is strictly column diagonally dominant, and if a_{i i} > 0 for i = 1, . . . , n, then every
eigenvalue of A has a strictly positive real part.
There are refinements of Gershgorin's theorem and eigenvalue location results involving
other domains besides discs; for more on this subject, see Horn and Johnson [57], Sections
6.1 and 6.2.
Remark: Neither strict row diagonal dominance nor strict column diagonal dominance is
necessary for invertibility. Also, if we relax all strict inequalities to inequalities, then row
diagonal dominance (or column diagonal dominance) is not a sufficient condition for invertibility.
8.4 Summary
The main concepts and results of this chapter are listed below:
Diagonal matrix.
Eigenvalues, eigenvectors; the eigenspace associated with an eigenvalue.
The characteristic polynomial.
The trace.
Algebraic and geometric multiplicity.
Eigenspaces associated with distinct eigenvalues form a direct sum (Proposition 8.3).
Reduction of a matrix to an upper-triangular matrix.
Schur decomposition.
The Gershgorin discs can be used to locate the eigenvalues of a complex matrix; see
Theorems 8.7 and 8.8.
Chapter 9
Iterative Methods for Solving Linear
Systems
9.1 Convergence of Sequences of Vectors and Matrices
In Chapter 6 we have discussed some of the main methods for solving systems of linear
equations. These methods are direct methods, in the sense that they yield exact solutions
(assuming infinite precision!).
Another class of methods for solving linear systems consists in approximating solutions
using iterative methods. The basic idea is this: Given a linear system Ax = b (with A a
square invertible matrix), find another matrix B and a vector c, such that
1. The matrix I − B is invertible;
2. The unique solution x̃ of the system Ax = b is identical to the unique solution ũ of the
system u = Bu + c;
and then, starting from any given vector u_0, compute the sequence (u_k) given by
u_{k+1} = B u_k + c,   k ∈ N.
Under certain conditions (to be clarified soon), the sequence (u_k) converges to a limit ũ
which is the unique solution of u = Bu + c, and thus of Ax = b.
Consequently, it is important to find conditions that ensure the convergence of the above
sequences and to have tools to compare the rate of convergence of these sequences. Thus,
we begin with some general results about the convergence of sequences of vectors and matrices.
Let (E, ‖ ‖) be a normed vector space. Recall that a sequence (u_k) of vectors u_k ∈ E
converges to a limit u ∈ E, if for every ε > 0, there is some natural number N such that
‖u_k − u‖ ≤ ε,   for all k ≥ N.
We write
u = lim_{k→∞} u_k.
If E is a finite-dimensional vector space and dim(E) = n, we know from Theorem 7.3 that
any two norms are equivalent, and if we choose the norm ‖ ‖_∞, we see that the convergence
of the sequence of vectors u_k is equivalent to the convergence of the n sequences of scalars
formed by the components of these vectors (over any basis). The same property applies to
the finite-dimensional vector space M_{m,n}(K) of m × n matrices (with K = R or K = C),
which means that the convergence of a sequence of matrices A_k = (a_{ij}^{(k)}) is equivalent to the
convergence of the m × n sequences of scalars (a_{ij}^{(k)}), with i, j fixed (1 ≤ i ≤ m, 1 ≤ j ≤ n).
The first theorem below gives a necessary and sufficient condition for the sequence (B^k)
of powers of a matrix B to converge to the zero matrix. Recall that the spectral radius ρ(B)
of a matrix B is the maximum of the moduli |λ_i| of the eigenvalues of B.
Theorem 9.1. For any square matrix B, the following conditions are equivalent:
(1) lim_{k→∞} B^k = 0,
(2) lim_{k→∞} B^k v = 0, for all vectors v,
(3) ρ(B) < 1,
(4) ‖B‖ < 1, for some subordinate matrix norm ‖ ‖.
Proof. Assume (1) and let ‖ ‖ be a vector norm on E and ‖ ‖ be the corresponding matrix
norm. For every vector v ∈ E, because ‖ ‖ is a matrix norm, we have
‖B^k v‖ ≤ ‖B^k‖ ‖v‖,
and since lim_{k→∞} B^k = 0 means that lim_{k→∞} ‖B^k‖ = 0, we conclude that lim_{k→∞} ‖B^k v‖ = 0,
that is, lim_{k→∞} B^k v = 0. This proves that (1) implies (2).
Assume (2). If we had ρ(B) ≥ 1, then there would be some eigenvalue λ and some eigenvector
u ≠ 0 such that
Bu = λu,   |λ| = ρ(B) ≥ 1,
but then the sequence (B^k u) would not converge to 0, because B^k u = λ^k u and |λ^k| = |λ|^k ≥ 1.
It follows that (2) implies (3).
Assume that (3) holds, that is, ρ(B) < 1. By Proposition 7.10, we can find ε > 0 small
enough that ρ(B) + ε < 1, and a subordinate matrix norm ‖ ‖ such that
‖B‖ ≤ ρ(B) + ε,
which is (4). Finally, assume (4). Since ‖B^k‖ ≤ ‖B‖^k and ‖B‖ < 1, we get lim_{k→∞} ‖B^k‖ = 0,
so (4) implies (1).
Proposition 9.2. For every square matrix B and every matrix norm ‖ ‖, we have
lim_{k→∞} ‖B^k‖^{1/k} = ρ(B).
Proof. We know from Proposition 7.4 that ρ(B) ≤ ‖B‖, and since ρ(B) = (ρ(B^k))^{1/k}, we
deduce that
ρ(B) ≤ ‖B^k‖^{1/k}   for all k ≥ 1,
and so
ρ(B) ≤ lim_{k→∞} ‖B^k‖^{1/k}.
Now, let us prove that for every ε > 0, there is some integer N(ε) such that
‖B^k‖^{1/k} ≤ ρ(B) + ε   for all k ≥ N(ε),
which proves that
lim_{k→∞} ‖B^k‖^{1/k} ≤ ρ(B),
and our proposition. For any given ε > 0, consider the matrix
B_ε = B / (ρ(B) + ε).
Since ρ(B_ε) < 1, Theorem 9.1 implies that lim_{k→∞} B_ε^k = 0. Consequently, there is some
integer N(ε) such that for all k ≥ N(ε), we have
‖B_ε^k‖ = ‖B^k‖ / (ρ(B) + ε)^k ≤ 1,
which implies that ‖B^k‖^{1/k} ≤ ρ(B) + ε, as claimed.
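The short numpy sketch below (not from the original text) illustrates Proposition 9.2 on a hypothetical random matrix: the k-th roots of ‖B^k‖ approach ρ(B).

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5)) / 3.0

rho = max(abs(np.linalg.eigvals(B)))           # spectral radius rho(B)
for k in (10, 50, 200):
    Bk = np.linalg.matrix_power(B, k)
    print(k, np.linalg.norm(Bk, 2) ** (1.0 / k), rho)
```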
9.2 Convergence of Iterative Methods
Recall that iterative methods for solving a linear system Ax = b (with A invertible) consist
in finding some matrix B and some vector c, such that I − B is invertible, and the unique
solution x̃ of Ax = b is equal to the unique solution ũ of u = Bu + c. Then, starting from
any vector u_0, we compute the sequence (u_k) given by
u_{k+1} = B u_k + c,   k ∈ N,
and we say that the iterative method is convergent iff
lim_{k→∞} u_k = ũ,
for every initial vector u_0.
Here is a fundamental criterion for the convergence of any iterative method based on a
matrix B, called the matrix of the iterative method.
Theorem 9.3. Given a system u = Bu + c as above, where I − B is invertible, the following
statements are equivalent:
(1) The iterative method is convergent.
(2) ρ(B) < 1.
(3) ‖B‖ < 1, for some subordinate matrix norm ‖ ‖.
Proof. Define the vectors
e_k = u_k − ũ,
where ũ is the unique solution of the system u = Bu + c. Clearly, the iterative method is
convergent iff
lim_{k→∞} e_k = 0.
We claim that
e_k = B^k e_0,   k ≥ 0,
where e_0 = u_0 − ũ. This is proved by induction on k. The base case k = 0 is obvious. For the
induction step, observe that
u_{k+1} − ũ = B u_k + c − ũ,
and because ũ = B ũ + c and e_k = B^k e_0 (by the induction hypothesis), we obtain
u_{k+1} − ũ = B u_k − B ũ = B(u_k − ũ) = B e_k = B B^k e_0 = B^{k+1} e_0,
establishing the claim. Therefore, the iterative method converges iff lim_{k→∞} B^k e_0 = 0 for
every e_0, and the equivalence of (1), (2), and (3) follows from Theorem 9.1.
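As a quick illustration (not part of the original text), the sketch below iterates u_{k+1} = Bu_k + c for a hypothetical matrix B with ρ(B) < 1 and checks convergence to the fixed point, as guaranteed by Theorem 9.3.

```python
import numpy as np

B = np.array([[0.5, 0.2],
              [0.1, 0.3]])
c = np.array([1.0, 2.0])

u_tilde = np.linalg.solve(np.eye(2) - B, c)    # unique solution of u = Bu + c

u = np.zeros(2)
for k in range(50):
    u = B @ u + c

print(max(abs(np.linalg.eigvals(B))) < 1)      # rho(B) < 1
print(np.allclose(u, u_tilde))                 # the iterates reach the fixed point
```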
The next proposition is needed to compare the rate of convergence of iterative methods.
It shows that asymptotically, the error vector e_k = B^k e_0 behaves at worst like (ρ(B))^k.
Proposition 9.4. Let ‖ ‖ be any vector norm, let B be a matrix such that I − B is invertible,
and let ũ be the unique solution of u = Bu + c.
(1) If (u_k) is any sequence defined iteratively by
u_{k+1} = B u_k + c,   k ∈ N,
then
lim_{k→∞} [ sup_{‖u_0 − ũ‖=1} ‖u_k − ũ‖^{1/k} ] = ρ(B).
(2) Let B_1 and B_2 be two matrices such that I − B_1 and I − B_2 are invertible, assume
that both u = B_1 u + c_1 and u = B_2 u + c_2 have the same unique solution ũ, and consider any
two sequences (u_k) and (v_k) defined inductively by
u_{k+1} = B_1 u_k + c_1,
v_{k+1} = B_2 v_k + c_2,
with u_0 = v_0. If ρ(B_1) < ρ(B_2), then for any ε > 0, there is some integer N(ε), such that
for all k ≥ N(ε), we have
sup_{‖u_0 − ũ‖=1} [ ‖v_k − ũ‖ / ‖u_k − ũ‖ ]^{1/k} ≥ ρ(B_2) / (ρ(B_1) + ε).
Proof. Let ‖ ‖ also denote the subordinate matrix norm associated with the given vector norm.
Recall that
u_k − ũ = B^k e_0,
with e_0 = u_0 − ũ. For every k ∈ N, we have
sup_{‖u_0 − ũ‖=1} ‖u_k − ũ‖^{1/k} = sup_{‖e_0‖=1} ‖B^k e_0‖^{1/k} = ‖B^k‖^{1/k},
which implies (1), by Proposition 9.2. For (2), recall that
u_k − ũ = B_1^k e_0   and   v_k − ũ = B_2^k e_0,
with e_0 = u_0 − ũ = v_0 − ũ. Again, by Proposition 9.2, for every ε > 0, there is some natural
number N(ε) such that if k ≥ N(ε), then
sup_{‖e_0‖=1} ‖B_1^k e_0‖^{1/k} ≤ ρ(B_1) + ε.
Furthermore, sup_{‖e_0‖=1} ‖B_2^k e_0‖^{1/k} = ‖B_2^k‖^{1/k} ≥ ρ(B_2) for all k, and the statement of (2) follows.
9.3 Methods of Jacobi, Gauss-Seidel, and Relaxation
The methods described in this section are instances of the following scheme: Given a linear
system Ax = b, with A invertible, suppose we can write A in the form
A = M − N,
with M invertible, and easy to invert, which means that M is close to being a diagonal or
a triangular matrix (perhaps by blocks). Then, Au = b is equivalent to
M u = N u + b,
that is,
u = M^{−1} N u + M^{−1} b.
This system is of the form u = Bu + c, with B = M^{−1} N and c = M^{−1} b, and we have
I − B = I − M^{−1} N = M^{−1}(M − N) = M^{−1} A,
which shows that I − B = M^{−1} A is invertible. The iterative method associated with the
matrix B = M^{−1} N is given by
u_{k+1} = M^{−1} N u_k + M^{−1} b,   k ≥ 0,
starting from any arbitrary vector u_0. From a practical point of view, we do not invert M,
and instead we solve iteratively the systems
M u_{k+1} = N u_k + b,   k ≥ 0.
Various methods correspond to various ways of choosing M and N from A. The first two
methods choose M and N as disjoint submatrices of A, but the relaxation method allows
some overlapping of M and N.
To describe the various choices of M and N, it is convenient to write A in terms of three
submatrices D, E, F, as
A = D − E − F,
where the only nonzero entries in D are the diagonal entries of A, −E is the strictly lower-triangular
part of A, and −F is the strictly upper-triangular part of A. More explicitly, if
    A = [ a_{11}     a_{12}     a_{13}    · · ·  a_{1 n−1}     a_{1 n}    ]
        [ a_{21}     a_{22}     a_{23}    · · ·  a_{2 n−1}     a_{2 n}    ]
        [ a_{31}     a_{32}     a_{33}    · · ·  a_{3 n−1}     a_{3 n}    ]
        [   ⋮          ⋮          ⋮        ⋱        ⋮            ⋮        ]
        [ a_{n−1 1}  a_{n−1 2}  a_{n−1 3} · · ·  a_{n−1 n−1}   a_{n−1 n}  ]
        [ a_{n 1}    a_{n 2}    a_{n 3}   · · ·  a_{n n−1}     a_{n n}    ] ,
then
    D = diag(a_{11}, a_{22}, a_{33}, . . . , a_{n−1 n−1}, a_{n n}),
and
    E = [    0          0          0      · · ·      0        0 ]
        [ −a_{21}       0          0      · · ·      0        0 ]
        [ −a_{31}    −a_{32}       0      · · ·      0        0 ]
        [    ⋮          ⋮          ⋮        ⋱        ⋮        ⋮ ]
        [ −a_{n 1}   −a_{n 2}   −a_{n 3}  · · ·  −a_{n n−1}   0 ] ,

    F = [ 0   −a_{12}   −a_{13}   · · ·  −a_{1 n−1}    −a_{1 n}   ]
        [ 0      0      −a_{23}   · · ·  −a_{2 n−1}    −a_{2 n}   ]
        [ 0      0         0       ⋱         ⋮         −a_{3 n}   ]
        [ ⋮      ⋮         ⋮                  0       −a_{n−1 n}  ]
        [ 0      0         0      · · ·      0            0       ] .
In Jacobi's method, we assume that all diagonal entries in A are nonzero, and we pick
M = D,
N = E + F,
so that
B = M^{−1} N = D^{−1}(E + F) = I − D^{−1} A.
The matrix
J = I − D^{−1} A = D^{−1}(E + F)
is called Jacobi's matrix. The corresponding method, Jacobi's iterative method, computes the sequence (u_k) using the recurrence
u_{k+1} = D^{−1}(E + F) u_k + D^{−1} b,   k ≥ 0.
In terms of the entries of A, this recurrence is the system
a_{11} u_1^{k+1} = −a_{12} u_2^k − a_{13} u_3^k − · · · − a_{1 n} u_n^k + b_1
a_{22} u_2^{k+1} = −a_{21} u_1^k − a_{23} u_3^k − · · · − a_{2 n} u_n^k + b_2
   ⋮
a_{n−1 n−1} u_{n−1}^{k+1} = −a_{n−1 1} u_1^k − a_{n−1 2} u_2^k − · · · − a_{n−1 n−2} u_{n−2}^k − a_{n−1 n} u_n^k + b_{n−1}
a_{n n} u_n^{k+1} = −a_{n 1} u_1^k − a_{n 2} u_2^k − · · · − a_{n n−1} u_{n−1}^k + b_n.
Observe that we can try to speed up the method by using the new value u_1^{k+1} instead
of u_1^k in solving for u_2^{k+1} using the second equation, and more generally, use u_1^{k+1}, . . . , u_{i−1}^{k+1}
instead of u_1^k, . . . , u_{i−1}^k in solving for u_i^{k+1} in the ith equation. This observation leads to the
system
a_{11} u_1^{k+1} = −a_{12} u_2^k − a_{13} u_3^k − · · · − a_{1 n} u_n^k + b_1
a_{22} u_2^{k+1} = −a_{21} u_1^{k+1} − a_{23} u_3^k − · · · − a_{2 n} u_n^k + b_2
   ⋮
a_{n−1 n−1} u_{n−1}^{k+1} = −a_{n−1 1} u_1^{k+1} − a_{n−1 2} u_2^{k+1} − · · · − a_{n−1 n−2} u_{n−2}^{k+1} − a_{n−1 n} u_n^k + b_{n−1}
a_{n n} u_n^{k+1} = −a_{n 1} u_1^{k+1} − a_{n 2} u_2^{k+1} − · · · − a_{n n−1} u_{n−1}^{k+1} + b_n.
In matrix terms, this system is
D u_{k+1} = E u_{k+1} + F u_k + b.
Because D is invertible and E is strictly lower triangular, the matrix D − E is invertible, so the
system is equivalent to
u_{k+1} = (D − E)^{−1} F u_k + (D − E)^{−1} b,   k ≥ 0.
The above corresponds to choosing M and N to be
M = D − E,
N = F,
and the matrix B is given by
B = M^{−1} N = (D − E)^{−1} F.
Since M = D − E is invertible, we know that I − B = M^{−1} A is also invertible.
The method that we just described is the iterative method of Gauss-Seidel, and the
matrix B is called the matrix of Gauss-Seidel and denoted by L_1, with
L_1 = (D − E)^{−1} F.
One of the advantages of the method of Gauss-Seidel is that it requires only half of the
memory used by Jacobi's method, since we only need
u_1^{k+1}, . . . , u_{i−1}^{k+1}, u_{i+1}^k, . . . , u_n^k
to compute u_i^{k+1}. We also show that in certain important cases (for example, if A is a
tridiagonal matrix), the method of Gauss-Seidel converges faster than Jacobi's method (in
this case, they both converge or diverge simultaneously).
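The following sketch is not part of the original text; it is a minimal numpy implementation of the two splittings just described (M = D for Jacobi, M = D − E for Gauss-Seidel), run on a hypothetical strictly row diagonally dominant system.

```python
import numpy as np

def jacobi(A, b, iters=100):
    D = np.diag(np.diag(A))
    N = D - A                        # N = E + F
    u = np.zeros_like(b)
    for _ in range(iters):
        u = np.linalg.solve(D, N @ u + b)
    return u

def gauss_seidel(A, b, iters=100):
    M = np.tril(A)                   # M = D - E
    N = M - A                        # N = F
    u = np.zeros_like(b)
    for _ in range(iters):
        u = np.linalg.solve(M, N @ u + b)
    return u

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(jacobi(A, b), np.linalg.solve(A, b)))
print(np.allclose(gauss_seidel(A, b), np.linalg.solve(A, b)))
```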
The new ingredient in the relaxation method is to incorporate part of the matrix D into
N: we define M and N by
M = D/ω − E,
N = ((1 − ω)/ω) D + F,
where ω ≠ 0 is a real parameter to be suitably chosen. Actually, we show in Section 9.4 that
for the relaxation method to converge, we must have ω ∈ (0, 2). Note that the case ω = 1
corresponds to the method of Gauss-Seidel.
If we assume that all diagonal entries of D are nonzero, the matrix M is invertible. The
matrix B is denoted by L_ω and called the matrix of relaxation, with
L_ω = (D/ω − E)^{−1} ((1 − ω)/ω D + F) = (D − ωE)^{−1} ((1 − ω)D + ωF).
The number ω is called the parameter of relaxation. When ω > 1, the relaxation method is
known as successive overrelaxation, abbreviated as SOR.
At first glance, the relaxation matrix L_ω seems a lot more complicated than the Gauss-Seidel
matrix L_1, but the iterative system associated with the relaxation method is very
similar to the method of Gauss-Seidel, and is quite simple. Indeed, the system associated
with the relaxation method is given by
(D/ω − E) u_{k+1} = ((1 − ω)/ω D + F) u_k + b,
which is equivalent to
(D − ωE) u_{k+1} = ((1 − ω)D + ωF) u_k + ωb,
and can be written
D u_{k+1} = D u_k − ω(D u_k − E u_{k+1} − F u_k − b).
Explicitly, this is the system
a_{11} u_1^{k+1} = a_{11} u_1^k − ω(a_{11} u_1^k + a_{12} u_2^k + · · · + a_{1 n−1} u_{n−1}^k + a_{1 n} u_n^k − b_1)
a_{22} u_2^{k+1} = a_{22} u_2^k − ω(a_{21} u_1^{k+1} + a_{22} u_2^k + a_{23} u_3^k + · · · + a_{2 n−1} u_{n−1}^k + a_{2 n} u_n^k − b_2)
   ⋮
a_{n n} u_n^{k+1} = a_{n n} u_n^k − ω(a_{n 1} u_1^{k+1} + a_{n 2} u_2^{k+1} + · · · + a_{n n−2} u_{n−2}^{k+1} + a_{n n−1} u_{n−1}^{k+1} + a_{n n} u_n^k − b_n).
What remains to be done is to find conditions that ensure the convergence of the relaxation method (and the Gauss-Seidel method), that is:
1. Find conditions on ω, namely some interval I ⊆ R, so that ω ∈ I implies ρ(L_ω) < 1;
we will prove that ω ∈ (0, 2) is a necessary condition.
2. Find if there exists some optimal value ω_0 of ω ∈ I, so that
ρ(L_{ω_0}) = inf_{ω ∈ I} ρ(L_ω).
We will give partial answers to the above questions in the next section.
It is also possible to extend the methods of this section by using block decompositions of
the form A = D − E − F, where D, E, and F consist of blocks, and with D an invertible
block-diagonal matrix.
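Before moving on, here is a minimal sketch of the relaxation (SOR) iteration itself; it is not from the original text and simply applies the splitting M = D/ω − E, N = ((1 − ω)/ω)D + F to a small hypothetical system.

```python
import numpy as np

def sor(A, b, omega, iters=200):
    D = np.diag(np.diag(A))
    E = -(np.tril(A) - D)            # A = D - E - F
    F = -(np.triu(A) - D)
    M = D / omega - E
    N = (1.0 - omega) / omega * D + F
    u = np.zeros_like(b)
    for _ in range(iters):
        u = np.linalg.solve(M, N @ u + b)
    return u

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
# omega = 1 recovers Gauss-Seidel; omega slightly above 1 typically converges a bit faster here.
print(np.allclose(sor(A, b, 1.0), np.linalg.solve(A, b)))
print(np.allclose(sor(A, b, 1.1), np.linalg.solve(A, b)))
```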
9.4 Convergence of the Methods of Jacobi, Gauss-Seidel, and Relaxation
We begin with a general criterion for the convergence of an iterative method associated with
a (complex) Hermitian, positive, definite matrix, A = M − N. Next, we apply this result to
the relaxation method.
Proposition 9.5. Let A be any Hermitian, positive, definite matrix, written as
A = M − N,
with M invertible. Then, M* + N is Hermitian, and if it is positive, definite, then
ρ(M^{−1} N) < 1,
so the iterative method converges.
Proof. Since M = A + N and A is Hermitian, M* + N = A + N* + N is indeed Hermitian. Because A
is Hermitian, positive, definite, the map v ↦ (v* A v)^{1/2} is a vector norm ‖ ‖, and we prove that
‖M^{−1} N‖ = ‖I − M^{−1} A‖ < 1 for the corresponding subordinate matrix norm. By definition
‖I − M^{−1} A‖ = sup_{‖v‖=1} ‖v − M^{−1} A v‖,
so it suffices to show that ‖v − w‖ < 1 whenever ‖v‖ = 1, where w = M^{−1} A v. We have
‖v − w‖² = (v − w)* A (v − w)
         = ‖v‖² − v* A w − w* A v + w* A w
         = 1 − w* M* w − w* M w + w* A w
         = 1 − w* (M* + N) w.
Since M* + N is positive, definite, we have w*(M* + N) w > 0 whenever w ≠ 0, and w = M^{−1} A v ≠ 0
when v ≠ 0, so ‖v − M^{−1} A v‖ < 1 for every v with ‖v‖ = 1. By compactness of the unit sphere,
sup_{‖v‖=1} ‖v − M^{−1} A v‖ < 1,
which proves that ρ(M^{−1} N) ≤ ‖M^{−1} N‖ < 1, and the iterative method converges.
Theorem 9.6. If A is a Hermitian, positive, definite matrix (possibly written by blocks), then
the relaxation method converges for every ω ∈ (0, 2); in particular, the method of Gauss-Seidel
(ω = 1) converges.
Proof. The relaxation method corresponds to the splitting A = M − N with
M = D/ω − E,   N = ((1 − ω)/ω) D + F,
and since A is Hermitian we have D* = D and E* = F, so that
M* + N = D/ω − E* + ((1 − ω)/ω) D + F = ((2 − ω)/ω) D.
If D consists of the diagonal entries of A, then we know from Section 6.3 that these entries
are all positive, and since ω ∈ (0, 2), we see that the matrix ((2 − ω)/ω) D is positive definite.
If D consists of diagonal blocks of A, because A is positive, definite, by choosing vectors z
obtained by picking a nonzero vector for each block of D and padding with zeros, we see
that each block of D is positive, definite, and thus D itself is positive definite. Therefore, in
all cases, M* + N is positive, definite, and we conclude by using Proposition 9.5.
What happens if ω is allowed to be complex? The same computation as in the proof of
Theorem 9.6 yields
M* + N = D/ω̄ − E* + ((1 − ω)/ω) D + F = (1/ω̄ + 1/ω − 1) D.
But
1/ω̄ + 1/ω − 1 = (ω + ω̄ − ω̄ω)/(ω̄ω) = (1 − (ω − 1)(ω̄ − 1))/|ω|² = (1 − |ω − 1|²)/|ω|²,
so the relaxation method also converges for ω ∈ C, provided that
|ω − 1| < 1.
This condition is also necessary, because ρ(L_ω) ≥ |ω − 1| for every ω ≠ 0 (Proposition 9.7).
Therefore, the relaxation method (possibly by blocks) does not converge unless ω ∈ (0, 2). If
we allow ω to be complex, then we must have
|ω − 1| < 1
for the relaxation method to converge. To prove the lower bound ρ(L_ω) ≥ |ω − 1|, observe that
since the determinants of the triangular matrices D/ω − E and ((1 − ω)/ω) D + F only depend
on their diagonal parts, we have
det(L_ω) = det(D/ω − E)^{−1} det(((1 − ω)/ω) D + F) = (1 − ω)^n.
It follows that
ρ(L_ω) ≥ |λ_1 · · · λ_n|^{1/n} = |det(L_ω)|^{1/n} = |ω − 1|.
The proof is the same if ω ∈ C.
We now consider the case where A is a tridiagonal matrix, possibly by blocks. In this
case, we obtain precise results about the spectral radius of J and L_ω, and as a consequence,
about the convergence of these methods. We also obtain some information about the rate of
convergence of these methods. We begin with the case ω = 1, which is technically easier to
deal with. The following proposition gives us the precise relationship between the spectral
radii ρ(J) and ρ(L_1) of the Jacobi matrix and the Gauss-Seidel matrix.
Proposition 9.8. Let A be a tridiagonal matrix (possibly by blocks). If ρ(J) is the spectral
radius of the Jacobi matrix and ρ(L_1) is the spectral radius of the Gauss-Seidel matrix, then
we have
ρ(L_1) = (ρ(J))².
Consequently, the method of Jacobi and the method of Gauss-Seidel both converge or both
diverge simultaneously (even when A is tridiagonal by blocks); when they converge, the method
of Gauss-Seidel converges faster than Jacobi's method.
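The following numpy sketch is not part of the original text; it checks the relation ρ(L_1) = (ρ(J))² of Proposition 9.8 on a hypothetical tridiagonal matrix.

```python
import numpy as np

n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal example

D = np.diag(np.diag(A))
E = -(np.tril(A) - D)
F = -(np.triu(A) - D)

J  = np.linalg.solve(D, E + F)          # Jacobi matrix D^{-1}(E + F)
L1 = np.linalg.solve(D - E, F)          # Gauss-Seidel matrix (D - E)^{-1} F

rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(rho(L1), rho(J) ** 2)             # the two numbers agree up to roundoff
```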
Proof. We begin with a preliminary result. Let A(μ) be a tridiagonal matrix by blocks of the
form
    A(μ) = [ A_1      μ^{−1}C_1      0        · · ·       0        ]
           [ μB_1     A_2        μ^{−1}C_2    · · ·       0        ]
           [  ⋮         ⋱            ⋱          ⋱          ⋮        ]
           [  0       · · ·      μB_{p−2}   A_{p−1}   μ^{−1}C_{p−1} ]
           [  0       · · ·         0       μB_{p−1}     A_p       ] ;
then
    det(A(μ)) = det(A(1)),   μ ≠ 0
(this is easily checked by conjugating A(1) with a suitable block diagonal matrix).
Next, observe that if J is the Jacobi matrix and L_1 is the Gauss-Seidel matrix, then
    q_J(λ) = det(λD − E − F) = det(D) p_J(λ),
    q_{L_1}(λ) = det(λD − λE − F) = det(D − E) p_{L_1}(λ),
where p_J(λ) and p_{L_1}(λ) are the characteristic polynomials of J and L_1.
Since A is tridiagonal (or tridiagonal by blocks), using our preliminary result with μ = λ ≠ 0,
we get
    q_{L_1}(λ²) = det(λ²D − λ²E − F) = det(λ²D − λE − λF) = λ^n q_J(λ).
By continuity, the above equation also holds for λ = 0. It follows that the nonzero eigenvalues
of L_1 are exactly the squares of the nonzero eigenvalues of J, and therefore ρ(L_1) = (ρ(J))².
A finer analysis of the spectral radius ρ(L_ω) yields the following result (Proposition 9.9): if A
is tridiagonal (possibly by blocks) and all the eigenvalues of the Jacobi matrix J are real with
0 < ρ(J) < 1, then the relaxation method converges for all ω ∈ (0, 2), and there is a unique
optimal relaxation parameter
    ω_0 = 2 / (1 + √(1 − (ρ(J))²))
such that
    ρ(L_{ω_0}) = inf_{0<ω<2} ρ(L_ω) = ω_0 − 1.
Proof. The proof is very technical and can be found in Serre [96] and Ciarlet [24]. As in the
proof of the previous proposition, we begin by showing that the eigenvalues of the matrix
L_ω are the zeros of the polynomial
q_{L_ω}(λ) = det( ((λ + ω − 1)/ω) D − λE − F ) = det(D/ω − E) p_{L_ω}(λ),
where p_{L_ω}(λ) is the characteristic polynomial of L_ω. Then, using the preliminary fact from
Proposition 9.8, it is easy to show that
q_{L_ω}(λ²) = λ^n q_J( (λ² + ω − 1)/(λω) ),
for all λ ∈ C, with λ ≠ 0. This time, we cannot extend the above equation to λ = 0. This
leads us to consider the equation
(λ² + ω − 1)/(λω) = α,
which is equivalent to
λ² − αωλ + ω − 1 = 0,
for all λ ≠ 0. Since λ ≠ 0, the above equivalence does not hold for ω = 1, but this is not a
problem since the case ω = 1 has already been considered in the previous proposition. Then,
we can show the following:
1. For any σ ≠ 0, if σ is an eigenvalue of L_ω, then
(σ + ω − 1)/(σ^{1/2} ω)   and   −(σ + ω − 1)/(σ^{1/2} ω)
are eigenvalues of J.
2. For every α ≠ 0, if α and −α are eigenvalues of J, then μ_+(α, ω) and μ_−(α, ω) are
eigenvalues of L_ω, where μ_+(α, ω) and μ_−(α, ω) are the squares of the roots of the
equation
λ² − αωλ + ω − 1 = 0.
It follows that
ρ(L_ω) = max_{λ | p_J(λ)=0} { max( |μ_+(λ, ω)|, |μ_−(λ, ω)| ) },
and since we are assuming that J has real roots, we are led to study the function
M(λ, ω) = max{ |μ_+(λ, ω)|, |μ_−(λ, ω)| },
where λ ∈ R and ω ∈ (0, 2). Actually, because M(−λ, ω) = M(λ, ω), it is only necessary to
consider the case where λ ≥ 0.
Note that for λ ≠ 0, the roots of the equation
μ² − λωμ + ω − 1 = 0
are
( λω ± √(λ²ω² − 4ω + 4) ) / 2.
In turn, this leads to consider the roots of the equation
λ²ω² − 4ω + 4 = 0,
which are
2(1 ± √(1 − λ²)) / λ²,
for λ ≠ 0. Since
2(1 + √(1 − λ²))/λ² = 2(1 + √(1 − λ²))(1 − √(1 − λ²)) / (λ²(1 − √(1 − λ²))) = 2/(1 − √(1 − λ²)),
these roots are
ω_0(λ) = 2/(1 + √(1 − λ²)),   ω_1(λ) = 2/(1 − √(1 − λ²)).
Observe that the expression for ω_0(λ) is exactly the expression in the statement of our
proposition! The rest of the proof consists in analyzing the variations of the function M(λ, ω)
by considering various cases for λ. In the end, we find that the minimum of ρ(L_ω) is obtained
for ω_0(ρ(J)). The details are tedious and we omit them. The reader will find complete proofs
in Serre [96] and Ciarlet [24].
Combining the results of Theorem 9.6 and Proposition 9.9, we obtain the following result
which gives precise information about the spectral radii of the matrices J, L_1, and L_ω.
Proposition 9.10. Let A be a tridiagonal matrix (possibly by blocks) which is Hermitian,
positive, definite. Then, the methods of Jacobi, Gauss-Seidel, and relaxation, all converge
for ω ∈ (0, 2). There is a unique optimal relaxation parameter
ω_0 = 2 / (1 + √(1 − (ρ(J))²)),
such that
ρ(L_{ω_0}) = inf_{0<ω<2} ρ(L_ω) = ω_0 − 1.
Proof. In view of Proposition 9.9, it suffices to check that the eigenvalues of the Jacobi matrix
J are real. If Ju = λu with u ≠ 0, that is, (E + F)u = λDu, then since A = D − E − F we get
Au = Du − (E + F)u = (1 − λ)Du.
Consequently,
u* A u = (1 − λ) u* D u,
and since A and D are Hermitian, positive, definite, we have u*Au > 0 and u*Du > 0 if
u ≠ 0, which proves that λ ∈ R. The rest follows from Theorem 9.6 and Proposition 9.9.
Remark: It is preferable to overestimate rather than underestimate the relaxation parameter when the optimum relaxation parameter is not known exactly.
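The sketch below is not part of the original text; it numerically checks the formula for the optimal relaxation parameter on a hypothetical tridiagonal, symmetric, positive definite matrix.

```python
import numpy as np

n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

D = np.diag(np.diag(A))
E = -(np.tril(A) - D)
F = -(np.triu(A) - D)
rho = lambda M: max(abs(np.linalg.eigvals(M)))

rho_J = rho(np.linalg.solve(D, E + F))
omega0 = 2.0 / (1.0 + np.sqrt(1.0 - rho_J**2))   # optimal relaxation parameter

def rho_L(omega):
    M = D / omega - E
    N = (1.0 - omega) / omega * D + F
    return rho(np.linalg.solve(M, N))

print(omega0, rho_L(omega0), omega0 - 1.0)       # rho(L_{omega_0}) = omega_0 - 1
print(rho_L(1.0), rho_J**2)                      # rho(L_1) = rho(J)^2
```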
9.5 Summary
The main concepts and results of this chapter are listed below:
Iterative methods. Splitting A as A = M − N.
The matrices of the methods of Jacobi, J = D^{−1}(E + F), of Gauss-Seidel, L_1 = (D − E)^{−1}F,
and of relaxation, L_ω = (D − ωE)^{−1}((1 − ω)D + ωF).
A general convergence criterion when A = M − N is Hermitian, positive, definite and
M* + N is positive, definite (Proposition 9.5).
A sufficient condition for the convergence of the methods of Jacobi, Gauss-Seidel, and
relaxation. The Ostrowski-Reich theorem: A is symmetric, positive, definite, and
ω ∈ (0, 2).
A necessary condition for the convergence of the methods of Jacobi, Gauss-Seidel, and
relaxation: ω ∈ (0, 2).
The case of tridiagonal matrices (possibly by blocks). Simultaneous convergence or divergence of Jacobi's method and Gauss-Seidel's method, and comparison of the spectral
radii ρ(J) and ρ(L_1): ρ(L_1) = (ρ(J))².
The case of tridiagonal, Hermitian, positive, definite matrices (possibly by blocks).
The methods of Jacobi, Gauss-Seidel, and relaxation all converge.
In the above case, there is a unique optimal relaxation parameter ω_0 for which ρ(L_{ω_0}) <
ρ(L_1) = (ρ(J))² < ρ(J) (if ρ(J) ≠ 0).
Chapter 10
Euclidean Spaces
Rien n'est beau que le vrai. (Nothing is beautiful but the true.)
Hermann Minkowski
10.1 Inner Products, Euclidean Spaces
So far, the framework of vector spaces allows us to deal with ratios of vectors and linear
combinations, but there is no way to express the notion of length of a line segment or to talk
about orthogonality of vectors. A Euclidean structure allows us to deal with metric notions
such as orthogonality and length (or distance).
This chapter covers the bare bones of Euclidean geometry. Deeper aspects of Euclidean
geometry are investigated in Chapter 11. One of our main goals is to give the basic properties
of the transformations that preserve the Euclidean structure, rotations and reflections, since
they play an important role in practice. Euclidean geometry is the study of properties
invariant under certain affine maps called rigid motions. Rigid motions are the maps that
preserve the distance between points.
We begin by defining inner products and Euclidean spaces. The Cauchy-Schwarz inequality and the Minkowski inequality are shown. We define orthogonality of vectors and of
subspaces, orthogonal bases, and orthonormal bases. We prove that every finite-dimensional
Euclidean space has orthonormal bases. The first proof uses duality, and the second one
the Gram-Schmidt orthogonalization procedure. The QR-decomposition for invertible matrices is shown as an application of the Gram-Schmidt procedure. Linear isometries (also
called orthogonal transformations) are defined and studied briefly. We conclude with a short
section in which some applications of Euclidean geometry are sketched. One of the most
important applications, the method of least squares, is discussed in Chapter 17.
For a more detailed treatment of Euclidean geometry, see Berger [8, 9], Snapper and
Troyer [99], or any other book on geometry, such as Pedoe [88], Coxeter [26], Fresnel [40],
Tisseron [109], or Cagnac, Ramis, and Commeau [19]. Serious readers should consult Emil
277
278
Artin's famous book [3], which contains an in-depth study of the orthogonal group, as well
as other groups arising in geometry. It is still worth consulting some of the older classics,
such as Hadamard [53, 54] and Rouché and de Comberousse [89]. The first edition of [53]
was published in 1898, and finally reached its thirteenth edition in 1947! In this chapter it is
assumed that all vector spaces are defined over the field R of real numbers unless specified
otherwise (in a few cases, over the complex numbers C).
First, we define a Euclidean structure on a vector space. Technically, a Euclidean structure over a vector space E is provided by a symmetric bilinear form on the vector space
satisfying some extra properties. Recall that a bilinear form φ : E × E → R is definite if for
every u ∈ E, u ≠ 0 implies that φ(u, u) ≠ 0, and positive if for every u ∈ E, φ(u, u) ≥ 0.
Definition 10.1. A Euclidean space is a real vector space E equipped with a symmetric
bilinear form φ : E × E → R that is positive definite. More explicitly, φ : E × E → R satisfies
the following axioms:
φ(u_1 + u_2, v) = φ(u_1, v) + φ(u_2, v),
φ(u, v_1 + v_2) = φ(u, v_1) + φ(u, v_2),
φ(λu, v) = λφ(u, v),
φ(u, λv) = λφ(u, v),
φ(u, v) = φ(v, u),
u ≠ 0 implies that φ(u, u) > 0.
The real number φ(u, v) is also called the inner product (or scalar product) of u and v. We
also define the quadratic form associated with φ as the function Φ : E → R_+ such that
Φ(u) = φ(u, u),   for all u ∈ E.
Since φ is bilinear, we have φ(0, 0) = 0, and since it is positive definite, we have the
stronger fact that
φ(u, u) = 0   iff   u = 0,
that is,
Φ(u) = 0   iff   u = 0.
Given an inner product φ on E, we also denote φ(u, v) by
u · v   or   ⟨u, v⟩   or   (u|v),
and √Φ(u) by ‖u‖.
Example 10.1. The standard example of a Euclidean space is Rn , under the inner product
defined such that
(x_1, . . . , x_n) · (y_1, . . . , y_n) = x_1 y_1 + x_2 y_2 + · · · + x_n y_n.
This Euclidean space is denoted by En .
We leave as an easy exercise that ⟨ , ⟩ is indeed an inner product on C[a, b]. In the case
where a = −π and b = π (or a = 0 and b = 2π, this makes basically no difference), one
should compute
⟨sin px, sin qx⟩,   ⟨sin px, cos qx⟩,   and   ⟨cos px, cos qx⟩,
for all natural numbers p, q ≥ 1. The outcome of these calculations is what makes Fourier
analysis possible!
Example 10.4. Let E = M_n(R) be the vector space of real n × n matrices. If we view
a matrix A ∈ M_n(R) as a long column vector obtained by concatenating together its
columns, we can define the inner product of two matrices A, B ∈ M_n(R) as
⟨A, B⟩ = ∑_{i,j=1}^{n} a_{ij} b_{ij},
which can be conveniently written as
⟨A, B⟩ = tr(A^⊤ B) = tr(B^⊤ A).
Since this can be viewed as the Euclidean product on R^{n²}, it is an inner product on M_n(R).
The corresponding norm
‖A‖_F = √(tr(A^⊤ A))
is the Frobenius norm (see Section 7.2).
The inner product can be recovered from its quadratic form. Indeed, we have
Φ(u + v) = φ(u + v, u + v)
         = φ(u, u + v) + φ(v, u + v)
         = φ(u, u) + 2φ(u, v) + φ(v, v)
         = Φ(u) + 2φ(u, v) + Φ(v).
Thus, we have
φ(u, v) = ½ [ Φ(u + v) − Φ(u) − Φ(v) ].
We also say that φ is the polar form of Φ.
If E is finite-dimensional and if φ : E × E → R is a bilinear form on E, given any basis
(e_1, . . . , e_n) of E, we can write x = ∑_{i=1}^{n} x_i e_i and y = ∑_{j=1}^{n} y_j e_j, and we have
φ(x, y) = φ( ∑_{i=1}^{n} x_i e_i , ∑_{j=1}^{n} y_j e_j ) = ∑_{i,j=1}^{n} x_i y_j φ(e_i, e_j).
If we let G be the matrix G = ('(ei , ej )), and if x and y are the column vectors associated
with (x1 , . . . , xn ) and (y1 , . . . , yn ), then we can write
'(x, y) = x> Gy = y > G> x.
P
Note that we are committing an abuse of notation, since x = ni=1 xi ei is a vector in E, but
the column vector associated with (x1 , . . . , xn ) belongs to Rn . To avoid this minor abuse, we
could denote the column vector associated with (x1 , . . . , xn ) by x (and similarly y for the
column vector associated with (y_1, . . . , y_n)), in which case the correct expression for φ(x, y)
is
'(x, y) = x> Gy.
However, in view of the isomorphism between E and Rn , to keep notation as simple as
possible, we will use x and y instead of x and y.
Also observe that ' is symmetric i G = G> , and ' is positive definite i the matrix G
is positive definite, that is,
x> Gx > 0 for all x 2 Rn , x 6= 0.
The matrix G associated with an inner product is called the Gram matrix of the inner
product with respect to the basis (e1 , . . . , en ).
Conversely, if A is a symmetric positive definite n n matrix, it is easy to check that the
bilinear form
hx, yi = x> Ay
is an inner product. If we make a change of basis from the basis (e1 , . . . , en ) to the basis
(f1 , . . . , fn ), and if the change of basis matrix is P (where the jth column of P consists of
the coordinates of fj over the basis (e1 , . . . , en )), then with respect to coordinates x0 and y 0
over the basis (f1 , . . . , fn ), we have
hx, yi = x> Gy = x0> P > GP y 0 ,
so the matrix of our inner product over the basis (f1 , . . . , fn ) is P > GP . We summarize these
facts in the following proposition.
Proposition 10.1. Let E be a finite-dimensional vector space, and let (e1 , . . . , en ) be a basis
of E.
1. For any inner product h , i on E, if G = (hei , ej i) is the Gram matrix of the inner
product h , i w.r.t. the basis (e1 , . . . , en ), then G is symmetric positive definite.
2. For any change of basis matrix P , the Gram matrix of h , i with respect to the new
basis is P > GP .
3. If A is any n n symmetric positive definite matrix, then
hx, yi = x> Ay
is an inner product on E.
We will see later that a symmetric matrix is positive definite iff its eigenvalues are all
positive.
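The numpy sketch below is not part of the original text; it illustrates Proposition 10.1 on a hypothetical 2 × 2 example: the Gram matrix of ⟨x, y⟩ = x^⊤Ay transforms as G′ = P^⊤GP under a change of basis.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # symmetric positive definite
G = A                                 # Gram matrix w.r.t. the standard basis

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # change of basis matrix (columns = new basis vectors)
G_new = P.T @ G @ P                   # Gram matrix w.r.t. the new basis

xp, yp = np.array([1.0, 2.0]), np.array([3.0, -1.0])   # coordinates in the new basis
print(np.isclose((P @ xp) @ A @ (P @ yp), xp @ G_new @ yp))
```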
One of the very important properties of an inner product φ is that the map u ↦ √Φ(u)
is a norm.
Proposition 10.2. Let E be a Euclidean space with inner product φ, and let Φ be the
corresponding quadratic form. For all u, v ∈ E, we have the Cauchy-Schwarz inequality
φ(u, v)² ≤ Φ(u) Φ(v),
the equality holding iff u and v are linearly dependent.
We also have the Minkowski inequality
√Φ(u + v) ≤ √Φ(u) + √Φ(v),
the equality holding iff u and v are linearly dependent, where in addition if u ≠ 0 and v ≠ 0,
then u = λv for some λ > 0.
Proof. For any vectors u, v ∈ E, we define the function T : R → R such that
T(λ) = Φ(u + λv),
for all λ ∈ R. Since φ is positive definite, Φ is nonnegative, and thus T(λ) ≥ 0 for all λ ∈ R. If Φ(v) = 0,
then v = 0, and we also have φ(u, v) = 0. In this case, the Cauchy-Schwarz inequality is
trivial, and v = 0 and u are linearly dependent. Otherwise Φ(v) > 0, and the quadratic T(λ) = Φ(v)λ² + 2φ(u, v)λ + Φ(u)
cannot have distinct real roots, which means that its discriminant
Δ = 4(φ(u, v)² − Φ(u) Φ(v))
is zero or negative, which is exactly the Cauchy-Schwarz inequality
φ(u, v)² ≤ Φ(u) Φ(v).
The Minkowski inequality
√Φ(u + v) ≤ √Φ(u) + √Φ(v)
is equivalent to
Φ(u + v) ≤ Φ(u) + Φ(v) + 2√(Φ(u) Φ(v)).
However, we have shown that
2φ(u, v) = Φ(u + v) − Φ(u) − Φ(v),
and so the above inequality is equivalent to
φ(u, v) ≤ √(Φ(u) Φ(v)),
which is trivial when φ(u, v) ≤ 0, and follows from the Cauchy-Schwarz inequality when
φ(u, v) ≥ 0. Thus, the Minkowski inequality holds. Finally, assume that u ≠ 0 and v ≠ 0,
and that
√Φ(u + v) = √Φ(u) + √Φ(v).
When this is the case, we have
φ(u, v) = √(Φ(u) Φ(v)),
and we know from the discussion of the Cauchy-Schwarz inequality that the equality holds
iff u and v are linearly dependent. The Minkowski inequality is an equality when u or v is
null. Otherwise, if u ≠ 0 and v ≠ 0, then u = λv for some λ ≠ 0, and since
φ(u, v) = λ φ(v, v) = √(Φ(u) Φ(v)),
by positivity, we must have λ > 0.
The Minkowski inequality
√Φ(u + v) ≤ √Φ(u) + √Φ(v)
shows that the map u ↦ √Φ(u) satisfies the convexity inequality (also known as the triangle
inequality), condition (N3) of Definition 7.1, and since φ is bilinear and positive definite, it
also satisfies conditions (N1) and (N2) of Definition 7.1, and thus it is a norm on E. The
norm induced by φ is called the Euclidean norm induced by φ.
Note that the Cauchy-Schwarz inequality can be written as
|u · v| ≤ ‖u‖ ‖v‖,
and the Minkowski inequality as
‖u + v‖ ≤ ‖u‖ + ‖v‖.
Remark: One might wonder if every norm on a vector space is induced by some Euclidean
inner product. In general, this is false, but remarkably, there is a simple necessary and
sufficient condition, which is that the norm must satisfy the parallelogram law:
‖u + v‖² + ‖u − v‖² = 2(‖u‖² + ‖v‖²).
If ⟨ , ⟩ is an inner product, then we have
‖u + v‖² = ‖u‖² + ‖v‖² + 2⟨u, v⟩,
‖u − v‖² = ‖u‖² + ‖v‖² − 2⟨u, v⟩,
and by adding and subtracting these identities, we get the parallelogram law and the equation
⟨u, v⟩ = ¼ (‖u + v‖² − ‖u − v‖²),
which allows us to recover ⟨ , ⟩ from the norm.
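As a quick numerical illustration (not from the original text), the sketch below checks both the polarization identity and the parallelogram law for the standard inner product on R³.

```python
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([0.5, 4.0, -1.0])
norm = np.linalg.norm

inner_from_norm = 0.25 * (norm(u + v)**2 - norm(u - v)**2)
print(np.isclose(inner_from_norm, u @ v))                     # polarization identity
print(np.isclose(norm(u + v)**2 + norm(u - v)**2,
                 2 * (norm(u)**2 + norm(v)**2)))              # parallelogram law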
Conversely, if a norm ‖ ‖ satisfies the parallelogram law, one defines ⟨u, v⟩ by the formula above
and checks that it is an inner product; the delicate point is additivity in each argument.
Applying the parallelogram law to the pairs (x + z, y) and (y + z, x), we get
‖x + y + z‖² + ‖x − y + z‖² = 2(‖x + z‖² + ‖y‖²),
‖x + y + z‖² + ‖y − x + z‖² = 2(‖y + z‖² + ‖x‖²),
which yields
‖x + y + z‖² = 2(‖y + z‖² + ‖x‖²) − ‖y − x + z‖²,
‖x + y + z‖² = 2(‖x + z‖² + ‖y‖²) − ‖x − y + z‖²,
where the second formula is obtained by swapping x and y. Then by adding up these
equations, we get
‖x + y + z‖² = ‖y + z‖² + ‖x‖² + ‖x + z‖² + ‖y‖² − ½‖x − y + z‖² − ½‖y − x + z‖².
Replacing z by −z, we also get
‖x + y − z‖² = ‖y − z‖² + ‖x‖² + ‖x − z‖² + ‖y‖² − ½‖x − y − z‖² − ½‖y − x − z‖².
Since ‖x − y + z‖ = ‖−(x − y + z)‖ = ‖y − x − z‖ and ‖y − x + z‖ = ‖−(y − x + z)‖ =
‖x − y − z‖, by subtracting the last two equations, we get
⟨x + y, z⟩ = ¼ (‖x + y + z‖² − ‖x + y − z‖²)
           = ¼ (‖x + z‖² − ‖x − z‖²) + ¼ (‖y + z‖² − ‖y − z‖²)
           = ⟨x, z⟩ + ⟨y, z⟩,
as desired.
Proving that
⟨λx, y⟩ = λ⟨x, y⟩   for all λ ∈ R
is a little trickier; the strategy is to prove it first for λ ∈ Z, then to promote it to Q,
and finally to R by continuity. Since
⟨−u, v⟩ = ¼ (‖−u + v‖² − ‖−u − v‖²)
        = ¼ (‖u − v‖² − ‖u + v‖²)
        = −⟨u, v⟩,
the property holds for λ = −1. By additivity it holds for all λ ∈ N,
and since the above also holds for λ = −1, it holds for all λ ∈ Z. For λ = p/q with p, q ∈ Z
and q ≠ 0, we have
q⟨(p/q)u, v⟩ = ⟨pu, v⟩ = p⟨u, v⟩,
so the property holds for all λ ∈ Q.
To finish the proof, we use the fact that a norm is a continuous map x ↦ ‖x‖. Then, the
continuous function t ↦ (1/t)⟨tu, v⟩ defined on R − {0} agrees with ⟨u, v⟩ on Q − {0}, so it is
equal to ⟨u, v⟩ on R − {0}. The case λ = 0 is trivial, so we are done.
We now define orthogonality.
10.2 Orthogonality, Duality, Adjoint of a Linear Map
An inner product on a vector space gives the ability to define the notion of orthogonality.
Families of nonnull pairwise orthogonal vectors must be linearly independent. They are
called orthogonal families. In a vector space of finite dimension it is always possible to find
orthogonal bases. This is very useful theoretically and practically. Indeed, in an orthogonal
basis, finding the coordinates of a vector is very cheap: It takes an inner product. Fourier
series make crucial use of this fact. When E has finite dimension, we prove that the inner
product on E induces a natural isomorphism between E and its dual space E . This allows
us to define the adjoint of a linear map in an intrinsic fashion (i.e., independently of bases).
It is also possible to orthonormalize any basis (certainly when the dimension is finite). We
give two proofs, one using duality, the other more constructive using the GramSchmidt
orthonormalization procedure.
Definition 10.2. Given a Euclidean space E, any two vectors u, v ∈ E are orthogonal, or
perpendicular, if u · v = 0. Given a family (u_i)_{i∈I} of vectors in E, we say that (u_i)_{i∈I} is
orthogonal if u_i · u_j = 0 for all i, j ∈ I, where i ≠ j. We say that the family (u_i)_{i∈I} is
orthonormal if u_i · u_j = 0 for all i, j ∈ I, where i ≠ j, and ‖u_i‖ = u_i · u_i = 1, for all i ∈ I.
For any subset F of E, the set
F^⊥ = { v ∈ E | u · v = 0, for all u ∈ F },
of all vectors orthogonal to all vectors in F, is called the orthogonal complement of F.
Since the inner product is positive definite, u · u = 0 iff u = 0.
In the Euclidean space of continuous functions on [−π, π] with the inner product ⟨f, g⟩ = ∫_{−π}^{π} f(t)g(t) dt,
it is easily checked that
⟨sin px, sin qx⟩ = π if p = q, p, q ≥ 1, and 0 if p ≠ q, p, q ≥ 1,
⟨cos px, cos qx⟩ = π if p = q, p, q ≥ 1, and 0 if p ≠ q, p, q ≥ 0,
and
⟨sin px, cos qx⟩ = 0,   for all p ≥ 1 and q ≥ 0.
Recall that for an orthonormal basis (e_i)_{i∈I} we have
e_i · e_j = 1 if i = j,   and   0 if i ≠ j.
All this is true even for an infinite orthonormal (or orthogonal) basis (e_i)_{i∈I}. However, every
vector is then a finite linear combination
u = ∑_{i∈I} x_i e_i,
where the family of scalars (x_i)_{i∈I} has finite support, which means that x_i = 0 for all
i ∈ I − J, where J is a finite set. Thus, even though the family (sin px)_{p≥1} ∪ (cos qx)_{q≥0} is
orthogonal (it is not orthonormal, but becomes so if we divide every trigonometric function by
√π, and 1 by √(2π); we won't because it looks messy!), the fact that a function f ∈ C⁰[−π, π]
can be written as a Fourier series as
f(x) = a_0 + ∑_{k=1}^{∞} (a_k cos kx + b_k sin kx)
does not mean that (sin px)_{p≥1} ∪ (cos qx)_{q≥0} is a basis of this vector space of functions,
because in general, the families (a_k) and (b_k) do not have finite support! In order for this
infinite linear combination to make sense, it is necessary to prove that the partial sums
a_0 + ∑_{k=1}^{n} (a_k cos kx + b_k sin kx)
of the series converge to a limit when n goes to infinity. This requires a topology on the
space.
A very important property of Euclidean spaces of finite dimension is that the inner
product induces a canonical bijection (i.e., independent of the choice of bases) between the
vector space E and its dual E .
Given a Euclidean space E, for any vector u ∈ E, let φ_u : E → R be the map defined
such that
φ_u(v) = u · v,   for all v ∈ E.
Since the inner product is bilinear, the map φ_u is a linear form in E*. Thus, we have a
map ♭ : E → E*, defined such that
♭(u) = φ_u.
Theorem 10.5. Given a Euclidean space E, the map ♭ : E → E* defined such that
♭(u) = φ_u
is linear and injective. When E is also of finite dimension, the map ♭ : E → E* is a canonical
isomorphism.
Proof. That ♭ : E → E* is a linear map follows immediately from the fact that the inner
product is bilinear. If φ_u = φ_v, then φ_u(w) = φ_v(w) for all w ∈ E, which by definition of φ_u
means that
u · w = v · w
for all w ∈ E, which by bilinearity is equivalent to
(v − u) · w = 0
for all w ∈ E, which implies that u = v, since the inner product is positive definite. Thus,
♭ : E → E* is injective. Finally, when E is of finite dimension n, we know that E* is also of
dimension n, and then ♭ : E → E* is bijective.
The inverse of the isomorphism ♭ : E → E* is denoted by ♯ : E* → E.
As a consequence of Theorem 10.5, if E is a Euclidean space of finite dimension, every
linear form f 2 E corresponds to a unique u 2 E such that
f (v) = u v,
for every v 2 E. In particular, if f is not the null form, the kernel of f , which is a hyperplane
H, is precisely the set of vectors that are orthogonal to u.
Remarks:
(1) The musical map [ : E ! E is not surjective when E has infinite dimension. The
result can be salvaged by restricting our attention to continuous linear maps, and by
assuming that the vector space E is a Hilbert space (i.e., E is a complete normed vector
space w.r.t. the Euclidean norm). This is the famous little Riesz theorem (or Riesz
representation theorem).
289
(2) Theorem 10.5 still holds if the inner product on E is replaced by a nondegenerate
symmetric bilinear form φ. We say that a symmetric bilinear form φ : E × E → R is
nondegenerate if for every u ∈ E,
if φ(u, v) = 0 for all v ∈ E, then u = 0.
For example, the symmetric bilinear form on R⁴ defined such that
φ((x_1, x_2, x_3, x_4), (y_1, y_2, y_3, y_4)) = x_1 y_1 + x_2 y_2 + x_3 y_3 − x_4 y_4
is nondegenerate, but it is not positive definite.
Proposition 10.6. Given a Euclidean space E of finite dimension, for every linear map
f : E → E, there is a unique linear map f* : E → E such that
f*(u) · v = u · f(v),
for all u, v ∈ E. The map f* is called the adjoint of f (w.r.t. to the inner product).
Proof. For each u ∈ E, the map v ↦ u · f(v) is a linear form, so by Theorem 10.5 there is a
unique vector f*(u) ∈ E such that f*(u) · v = u · f(v) for all v ∈ E. Given u_1, u_2 ∈ E, since the
inner product is bilinear, we have
(u_1 + u_2) · f(v) = u_1 · f(v) + u_2 · f(v) = f*(u_1) · v + f*(u_2) · v,
for all v ∈ E, which shows that f*(u_1 + u_2) = f*(u_1) + f*(u_2), and similarly
(λu) · f(v) = λ(u · f(v)) = (λf*(u)) · v,
for all v ∈ E, which shows that f*(λu) = λf*(u). Thus, f* is linear, and it is unique by construction.
The linear maps f : E → E such that f^{−1} = f*, or equivalently
f* ∘ f = f ∘ f* = id,
also play an important role. They are linear isometries, or isometries. Rotations are special
kinds of isometries. Another important class of linear maps are the linear maps satisfying
the property
f* ∘ f = f ∘ f*,
called normal linear maps. We will see later on that normal maps can always be diagonalized
over orthonormal bases of eigenvectors, but this will require using a Hermitian inner product
(over C).
Given two Euclidean spaces E and F, where the inner product on E is denoted by ⟨ , ⟩_1
and the inner product on F is denoted by ⟨ , ⟩_2, given any linear map f : E → F, it is
immediately verified that the proof of Proposition 10.6 can be adapted to show that there
is a unique linear map f* : F → E such that
⟨f(u), v⟩_2 = ⟨u, f*(v)⟩_1
for all u ∈ E and all v ∈ F. The linear map f* is also called the adjoint of f.
Remark: Given any basis for E and any basis for F, it is possible to characterize the matrix
of the adjoint f* of f in terms of the matrix of f, and the symmetric matrices defining the
inner products. We will do so with respect to orthonormal bases. Also, since inner products
are symmetric, the adjoint f* of f is also characterized by
f(u) · v = u · f*(v),
for all u, v ∈ E.
We can also use Theorem 10.5 to show that any Euclidean space of finite dimension has
an orthonormal basis.
Proposition 10.7. Given any nontrivial Euclidean space E of finite dimension n ≥ 1, there
is an orthonormal basis (u_1, . . . , u_n) for E.
Proof. We proceed by induction on n. Pick any nonzero vector v ∈ E and let u_1 = v/‖v‖. If
n = 1, we are done. Otherwise, consider the linear form φ_{u_1} associated with u_1. Since u_1 ≠ 0, by Theorem 10.5, the linear
form φ_{u_1} is nonnull, and its kernel is a hyperplane H. Since φ_{u_1}(w) = 0 iff u_1 · w = 0,
the hyperplane H is the orthogonal complement of {u_1}. Furthermore, since u_1 ≠ 0 and
the inner product is positive definite, u_1 · u_1 ≠ 0, and thus u_1 ∉ H, which implies that
E = H ⊕ Ru_1. However, since E is of finite dimension n, the hyperplane H has dimension
n − 1, and by the induction hypothesis, we can find an orthonormal basis (u_2, . . . , u_n) for H.
Now, because H and the one dimensional space Ru_1 are orthogonal and E = H ⊕ Ru_1, it is
clear that (u_1, . . . , u_n) is an orthonormal basis for E.
There is a more constructive way of proving Proposition 10.7, using a procedure known as
the Gram-Schmidt orthonormalization procedure. Among other things, the Gram-Schmidt
orthonormalization procedure yields the QR-decomposition for matrices, an important tool
in numerical methods.
Proposition 10.8. Given any nontrivial Euclidean space E of finite dimension n ≥ 1, from
any basis (e_1, . . . , e_n) for E we can construct an orthonormal basis (u_1, . . . , u_n) for E, with
the property that for every k, 1 ≤ k ≤ n, the families (e_1, . . . , e_k) and (u_1, . . . , u_k) generate
the same subspace.
Proof. We proceed by induction on n. For n = 1, let
u_1 = e_1 / ‖e_1‖.
For n ≥ 2, we also let
u_1 = e_1 / ‖e_1‖,
and assuming that (u_1, . . . , u_k) is an orthonormal system that generates the same subspace
as (e_1, . . . , e_k), for every k with 1 ≤ k < n, we note that the vector
u′_{k+1} = e_{k+1} − ∑_{i=1}^{k} (e_{k+1} · u_i) u_i
is nonnull, since otherwise, because (u1 , . . . , uk ) and (e1 , . . . , ek ) generate the same subspace,
(e1 , . . . , ek+1 ) would be linearly dependent, which is absurd, since (e1 , . . ., en ) is a basis.
Thus, the norm of the vector u′_{k+1} being nonzero, we use the following construction of the
vectors u_k and u′_k:
u′_1 = e_1,   u_1 = u′_1 / ‖u′_1‖,
and for the inductive step
u′_{k+1} = e_{k+1} − ∑_{i=1}^{k} (e_{k+1} · u_i) u_i,   u_{k+1} = u′_{k+1} / ‖u′_{k+1}‖,
where 1 ≤ k ≤ n − 1. Since (u_1, . . . , u_k) is an orthonormal system, we have
u′_{k+1} · u_i = e_{k+1} · u_i − e_{k+1} · u_i = 0,
for all i with 1 ≤ i ≤ k. This shows that the family (u_1, . . . , u_{k+1}) is orthonormal, and since
(u_1, . . . , u_k) and (e_1, . . . , e_k) generate the same subspace, it is clear from the definition of
u_{k+1} that (u_1, . . . , u_{k+1}) and (e_1, . . . , e_{k+1}) generate the same subspace. This completes the
induction step and the proof of the proposition.
Note that u′_{k+1} is obtained by subtracting from e_{k+1} the projection of e_{k+1} itself onto the
orthonormal vectors u_1, . . . , u_k that have already been computed. Then, u′_{k+1} is normalized.
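The following short sketch is not part of the original text; it is a direct transcription of the construction in Proposition 10.8 (classical Gram-Schmidt) for a hypothetical basis of R³.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for e in vectors:
        # subtract the projection of e onto the orthonormal vectors found so far
        u_prime = e - sum((e @ u) * u for u in basis)
        basis.append(u_prime / np.linalg.norm(u_prime))
    return np.array(basis)

e = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
U = gram_schmidt(e)
print(np.allclose(U @ U.T, np.eye(3)))    # the rows of U form an orthonormal family
```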
Remarks:
(1) The QR-decomposition can now be obtained very easily, but we postpone this until
Section 10.5.
(2) We could compute u′_{k+1} using the formula
u′_{k+1} = e_{k+1} − ∑_{i=1}^{k} ( (e_{k+1} · u′_i) / ‖u′_i‖² ) u′_i,
and normalize the vectors u′_k at the end. This time, we are subtracting from e_{k+1}
the projection of e_{k+1} itself onto the orthogonal vectors u′_1, . . . , u′_k. This might be
preferable when writing a computer program.
(3) The proof of Proposition 10.8 also works for a countably infinite basis for E, producing
a countably infinite orthonormal basis.
Example 10.6. If we consider polynomials and the inner product
⟨f, g⟩ = ∫_{−1}^{1} f(t) g(t) dt,
applying the Gram-Schmidt orthonormalization procedure to the monomials 1, x, x², . . . yields
(up to scaling) the Legendre polynomials P_n(x), where
f_n(x) = (x² − 1)^n   and   P_n(x) = (1 / (2^n n!)) f_n^{(n)}(x),
and these polynomials satisfy the recurrence
(n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) − n P_{n−1}(x),   n ≥ 1.
Example 10.7. If we choose instead the inner product
⟨f, g⟩ = ∫_{−1}^{1} (1 − x²)^{−1/2} f(x) g(x) dx,
then the polynomials T_n(x) defined such that T_n(x) = cos(n arccos x), n ≥ 0,
(equivalently, with x = cos θ, we have T_n(cos θ) = cos(nθ)) are orthogonal with respect to
the above inner product. These polynomials are the Chebyshev polynomials. Their norm is
not equal to 1. Instead, we have
⟨T_n, T_n⟩ = π/2 if n > 0,   and   π if n = 0.
Using the identity (cos θ + i sin θ)^n = cos nθ + i sin nθ and the binomial formula, we obtain
the following expression for T_n(x):
T_n(x) = ∑_{k=0}^{⌊n/2⌋} C(n, 2k) (x² − 1)^k x^{n−2k},
and the Chebyshev polynomials satisfy the recurrence
T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x),   n ≥ 1.
The polynomial T_n has n distinct roots in the interval [−1, 1]. The Chebyshev polynomials
play an important role in approximation theory. They are used as an approximation to a
best polynomial approximation of a continuous function under the sup-norm (∞-norm).
The inner products of the last two examples are special cases of an inner product of the
form
⟨f, g⟩ = ∫_{−1}^{1} W(t) f(t) g(t) dt,
where W(t) is a weight function. If W is a nonzero continuous function such that W(x) ≥ 0
on (−1, 1), then the above bilinear form is indeed positive definite. Families of orthogonal
polynomials used in approximation theory and in physics arise by a suitable choice of the
weight function W. Besides the previous two examples, the Hermite polynomials correspond
to W(x) = e^{−x²}, the Laguerre polynomials to W(x) = e^{−x}, and the Jacobi polynomials
to W(x) = (1 − x)^α (1 + x)^β, with α, β > −1. Comprehensive treatments of orthogonal
polynomials can be found in Lebedev [72], Sansone [91], and Andrews, Askey and Roy [2].
As a consequence of Proposition 10.7 (or Proposition 10.8), given any Euclidean space
of finite dimension n, if (e_1, . . . , e_n) is an orthonormal basis for E, then for any two vectors
u = u_1 e_1 + · · · + u_n e_n and v = v_1 e_1 + · · · + v_n e_n, the inner product u · v is expressed as
u · v = (u_1 e_1 + · · · + u_n e_n) · (v_1 e_1 + · · · + v_n e_n) = ∑_{i=1}^{n} u_i v_i,
and the norm ‖u‖ as
‖u‖ = ‖u_1 e_1 + · · · + u_n e_n‖ = ( ∑_{i=1}^{n} u_i² )^{1/2}.
The fact that a Euclidean space always has an orthonormal basis implies that any Gram
matrix G can be written as
G = Q> Q,
for some invertible matrix Q. Indeed, we know that in a change of basis matrix, a Gram
matrix G becomes G0 = P > GP . If the basis corresponding to G0 is orthonormal, then G0 = I,
so G = (P 1 )> P 1 .
We can also prove the following proposition regarding orthogonal spaces.
Proposition 10.9. Given any nontrivial Euclidean space E of finite dimension n ≥ 1, for
any subspace F of dimension k, the orthogonal complement F^⊥ of F has dimension n − k,
and E = F ⊕ F^⊥. Furthermore, we have F^{⊥⊥} = F.
Proof. From Proposition 10.7, the subspace F has some orthonormal basis (u_1, . . . , u_k). This
linearly independent family (u_1, . . . , u_k) can be extended to a basis (u_1, . . . , u_k, v_{k+1}, . . . , v_n),
and by Proposition 10.8 it can be orthonormalized; the last n − k vectors of the resulting
orthonormal basis span F^⊥, which yields the result.
10.3 Linear Isometries (Orthogonal Transformations)
In this section we consider linear maps between Euclidean spaces that preserve the Euclidean
norm. These transformations, sometimes called rigid motions, play an important role in
geometry.
Definition 10.3. Given any two nontrivial Euclidean spaces E and F of the same finite
dimension n, a function f : E ! F is an orthogonal transformation, or a linear isometry, if
it is linear and
kf (u)k = kuk, for all u 2 E.
Remarks:
(1) A linear isometry is often defined as a linear map such that
kf (v)
f (u)k = kv
uk,
for all u, v 2 E. Since the map f is linear, the two definitions are equivalent. The
second definition just focuses on preserving the distance between vectors.
(2) Sometimes, a linear map satisfying the condition of Definition 10.3 is called a metric
map, and a linear isometry is defined as a bijective metric map.
An isometry (without the word linear) is sometimes defined as a function f : E ! F (not
necessarily linear) such that
kf (v) f (u)k = kv uk,
for all u, v 2 E, i.e., as a function that preserves the distance. This requirement turns out to
be very strong. Indeed, the next proposition shows that all these definitions are equivalent
when E and F are of finite dimension, and for functions such that f (0) = 0.
Proposition 10.10. Given any two nontrivial Euclidean spaces E and F of the same finite
dimension n, for every function f : E ! F , the following properties are equivalent:
(1) f is a linear map and ‖f(u)‖ = ‖u‖, for all u ∈ E;
(2) ‖f(v) − f(u)‖ = ‖v − u‖ for all u, v ∈ E, and f(0) = 0;
(3) f(u) · f(v) = u · v, for all u, v ∈ E.
Furthermore, such a map is bijective.
Proof. Clearly, (1) implies (2), since in (1) it is assumed that f is linear.
In order to prove that (2) implies (3), we first prove that if
‖f(v) − f(u)‖ = ‖v − u‖
for all u, v ∈ E, then for any vector τ ∈ E, the function g : E → F defined such that
g(u) = f(τ + u) − f(τ)
for all u ∈ E is a linear map such that g(0) = 0 and (3) holds. Clearly, g(0) = f(τ) − f(τ) = 0.
Note that from the hypothesis
‖f(v) − f(u)‖ = ‖v − u‖
for all u, v ∈ E, we conclude that
‖g(v) − g(u)‖ = ‖f(τ + v) − f(τ) − (f(τ + u) − f(τ))‖
             = ‖f(τ + v) − f(τ + u)‖
             = ‖τ + v − (τ + u)‖
             = ‖v − u‖.
Since g(0) = 0, by setting u = 0 in
‖g(v) − g(u)‖ = ‖v − u‖,
we get
‖g(v)‖ = ‖v‖
for all v ∈ E. In other words, g preserves both the distance and the norm.
To prove that g preserves the inner product, we use the simple fact that
2 u · v = ‖u‖² + ‖v‖² − ‖u − v‖²
for all u, v ∈ E. Then, since g preserves distance and norm, we have
2 g(u) · g(v) = ‖g(u)‖² + ‖g(v)‖² − ‖g(u) − g(v)‖²
             = ‖u‖² + ‖v‖² − ‖u − v‖²
             = 2 u · v,
and thus g(u) · g(v) = u · v, for all u, v ∈ E, which is (3). In particular, if f(0) = 0, by letting
τ = 0, we have g = f, and f preserves the scalar product, i.e., (3) holds.
Now assume that (3) holds. Since E is of finite dimension, we can pick an orthonormal basis (e_1, . . . , e_n) for E. Since f preserves inner products, (f(e_1), . . . , f(e_n)) is also
orthonormal, and since F also has dimension n, it is a basis of F. Then note that for any
u = u_1 e_1 + · · · + u_n e_n, we have
u_i = u · e_i,
for all i, 1 ≤ i ≤ n. Thus, we have
f(u) = ∑_{i=1}^{n} (f(u) · f(e_i)) f(e_i) = ∑_{i=1}^{n} (u · e_i) f(e_i) = ∑_{i=1}^{n} u_i f(e_i),
which shows that f is linear. Obviously, f preserves the Euclidean norm, and (3) implies
(1).
Finally, if f(u) = f(v), then by linearity f(v − u) = 0, so that ‖f(v − u)‖ = 0, and since
f preserves norms, we must have ‖v − u‖ = 0, and thus u = v. Thus, f is injective, and
since E and F have the same finite dimension, f is bijective.
Remarks:
(i) The dimension assumption is needed only to prove that (3) implies (1) when f is not
known to be linear, and to prove that f is surjective, but the proof shows that (1)
implies that f is injective.
(ii) The implication that (3) implies (1) holds if we also assume that f is surjective, even
if E has infinite dimension.
In (2), when f does not satisfy the condition f(0) = 0, the proof shows that f is an affine
map. Indeed, taking any vector τ as an origin, the map g is linear, and
f(τ + u) = f(τ) + g(u)   for all u ∈ E.
From section 19.7, this shows that f is affine with associated linear map g.
This fact is worth recording as the following proposition.
Proposition 10.11. Given any two nontrivial Euclidean spaces E and F of the same finite
dimension n, for every function f : E → F, if
‖f(v) − f(u)‖ = ‖v − u‖   for all u, v ∈ E,
then f is an affine map, and its associated linear map g is an isometry.
10.4 The Orthogonal Group, Orthogonal Matrices
In this section we explore some of the basic properties of the orthogonal group and of
orthogonal matrices.
Proposition 10.12. Let E be any Euclidean space of finite dimension n, and let f : E → E
be any linear map. The following properties hold:
(1) The linear map f : E → E is an isometry iff
f ∘ f* = f* ∘ f = id.
(2) For every orthonormal basis (e_1, . . . , e_n) of E, if the matrix of f is A, then the matrix
of f* is the transpose A^⊤ of A, and f is an isometry iff A satisfies the identities
A A^⊤ = A^⊤ A = I_n,
where I_n denotes the identity matrix of order n, iff the columns of A form an orthonormal basis of E, iff the rows of A form an orthonormal basis of E.
Proof. (1) The linear map f : E → E is an isometry iff
f(u) · f(v) = u · v,
for all u, v ∈ E, iff
f*(f(u)) · v = u · v,   that is,   (f*(f(u)) − u) · v = 0,
for all u, v ∈ E. Since the inner product is positive definite, we must have
f*(f(u)) − u = 0
for all u ∈ E, that is,
f* ∘ f = f ∘ f* = id.
The proof of Proposition 10.10 (3) also shows that if f is an isometry, then the image of an
orthonormal basis (u1 , . . . , un ) is an orthonormal basis. Students often ask why orthogonal
matrices are not called orthonormal matrices, since their columns (and rows) are orthonormal
bases! I have no good answer, but isometries do preserve orthogonality, and orthogonal
matrices correspond to isometries.
Recall that the determinant det(f ) of a linear map f : E ! E is independent of the
choice of a basis in E. Also, for every matrix A 2 Mn (R), we have det(A) = det(A> ), and
for any two n n matrices A and B, we have det(AB) = det(A) det(B). Then, if f is an
isometry, and A is its matrix with respect to any orthonormal basis, A A^⊤ = A^⊤ A = I_n
implies that det(A)² = 1, that is, either det(A) = +1, or det(A) = −1. It is also clear that
the isometries of a Euclidean space of dimension n form a group, and that the isometries of
determinant +1 form a subgroup. This leads to the following definition.
Definition 10.5. Given a Euclidean space E of dimension n, the set of isometries f : E → E
forms a subgroup of GL(E) denoted by O(E), or O(n) when E = Rⁿ, called the orthogonal
group (of E). For every isometry f, we have det(f) = ±1, where det(f) denotes the determinant of f. The isometries such that det(f) = +1 are called rotations, or proper isometries,
or proper orthogonal transformations, and they form a subgroup of the special linear group
SL(E) (and of O(E)), denoted by SO(E), or SO(n) when E = Rⁿ, called the special orthogonal group (of E). The isometries such that det(f) = −1 are called improper isometries,
or improper orthogonal transformations, or flip transformations.
As an immediate corollary of the GramSchmidt orthonormalization procedure, we obtain
the QR-decomposition for invertible matrices.
10.5 QR-Decomposition for Invertible Matrices
Now that we have the definition of an orthogonal matrix, we can explain how the Gram
Schmidt orthonormalization procedure immediately yields the QR-decomposition for matrices.
Proposition 10.13. Given any real n × n matrix A, if A is invertible, then there is an
orthogonal matrix Q and an upper triangular matrix R with positive diagonal entries such
that A = QR.
Proof. We can view the columns of A as vectors A¹, . . . , Aⁿ in Eⁿ. If A is invertible, then they
are linearly independent, and we can apply Proposition 10.8 to produce an orthonormal basis
using the Gram-Schmidt orthonormalization procedure. Recall that we construct vectors
Q^k and Q′^k as follows:
Q′^1 = A¹,   Q¹ = Q′^1 / ‖Q′^1‖,
and for the inductive step
Q′^{k+1} = A^{k+1} − ∑_{i=1}^{k} (A^{k+1} · Q^i) Q^i,   Q^{k+1} = Q′^{k+1} / ‖Q′^{k+1}‖,
where 1 ≤ k ≤ n − 1. If we express the vectors A^k in terms of the Q^i, we get the
triangular system
A¹ = ‖Q′^1‖ Q¹,
   ⋮
A^j = (A^j · Q¹) Q¹ + · · · + (A^j · Q^i) Q^i + · · · + ‖Q′^j‖ Q^j,
   ⋮
Aⁿ = (Aⁿ · Q¹) Q¹ + · · · + (Aⁿ · Q^{n−1}) Q^{n−1} + ‖Q′^n‖ Qⁿ.
Letting r_{k k} = ‖Q′^k‖ and r_{i j} = A^j · Q^i (with the remaining entries zero), if Q is the orthogonal
matrix with columns Q¹, . . . , Qⁿ and R = (r_{i j}) is the above upper triangular matrix, then
A = QR, and R has positive diagonal entries.
For example, consider the matrix
A = [ 0  0  5 ]
    [ 0  4  1 ]
    [ 1  1  1 ] .
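The sketch below is not part of the original text; it factors this 3 × 3 matrix with numpy. Note that numpy's qr routine uses Householder reflections rather than Gram-Schmidt, and the diagonal signs of R may differ from the convention in the text, but the factorization A = QR is of the same kind.

```python
import numpy as np

A = np.array([[0.0, 0.0, 5.0],
              [0.0, 4.0, 1.0],
              [1.0, 1.0, 1.0]])

Q, R = np.linalg.qr(A)
print(np.allclose(Q @ R, A))             # A = QR
print(np.allclose(Q.T @ Q, np.eye(3)))   # Q is orthogonal
print(R)                                 # R is upper triangular
```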
The QR-decomposition yields a rather efficient and numerically stable method for solving
systems of linear equations. Indeed, given a system Ax = b, where A is an n × n invertible
matrix, writing A = QR, since Q is orthogonal, we get
Rx = Q^⊤ b,
and since R is upper triangular, we can solve it by Gaussian elimination, by solving for the
last variable x_n first, substituting its value into the system, then solving for x_{n−1}, etc. The
QR-decomposition is also very useful in solving least squares problems (we will come back
to this later on), and for finding eigenvalues. It can be easily adapted to the case where A is
a rectangular m × n matrix with independent columns (thus, n ≤ m). In this case, Q is not
quite orthogonal. It is an m × n matrix whose columns are orthogonal, and R is an invertible
n × n upper triangular matrix with positive diagonal entries. For more on QR, see Strang
[104, 105], Golub and Van Loan [49], Demmel [27], Trefethen and Bau [110], or Serre [96].
It should also be said that the GramSchmidt orthonormalization procedure that we have
presented is not very stable numerically, and instead, one should use the modified Gram
0
Schmidt method . To compute Q k+1 , instead of projecting Ak+1 onto Q1 , . . . , Qk in a single
k+1
k+1
step, it is better to perform k projections. We compute Qk+1
as follows:
1 , Q2 , . . . , Q k
Qk+1
= Ak+1
1
Qk+1
= Qk+1
i+1
i
where 1 i k
method.
(Ak+1 Q1 ) Q1 ,
(Qk+1
Qi+1 ) Qi+1 ,
i
0
j=1
j=1
i=1
Moreover, equality holds i either A has a zero column in the left inequality or a zero row in
the right inequality, or A is orthogonal.
304
Proof. If det(A) = 0, then the inequality is trivial. In addition, if the righthand side is also
0, then either some column or some row is zero. If det(A) 6= 0, then we can factor A as
A = QR, with Q is orthogonal and R = (rij ) upper triangular with positive diagonal entries.
Then, since Q is orthogonal det(Q) = 1, so
Y
| det(A)| = | det(Q)| | det(R)| =
rjj .
j=1
a2ij
2
Aj 2
2
QRj 2
= R
j 2
i=1
n
X
rij2
2
rjj
,
i=1
n
Y
j=1
rjj
n
Y
j
2
j=1
n X
n
Y
j=1
i=1
a2ij
1/2
The other inequality is obtained by replacing A by A> . Finally, if det(A) 6= 0 and equality
holds, then we must have
rjj = Aj 2 , 1 j n,
which can only occur is R is orthogonal.
n
Y
aii .
i=1
j=1
305
2
>
i
However,
Pn 2 the diagonal entries aii of A = B B are precisely the square norms kB k2 =
j=1 bij , so by squaring (), we obtain
2
det(A) = det(B)
n X
n
Y
i=1
j=1
b2ij
n
Y
aii .
i=1
If det(A) 6= 0 and equality holds, then B must be orthogonal, which implies that B is a
diagonal matrix, and so is A.
We derived the second Hadamard inequality (Proposition 10.15) from the first (Proposition 10.14). We leave it as an exercise to prove that the first Hadamard inequality can be
deduced from the second Hadamard inequality.
10.6
Euclidean geometry has applications in computational geometry, in particular Voronoi diagrams and Delaunay triangulations. In turn, Voronoi diagrams have applications in motion
planning (see ORourke [87]).
Euclidean geometry also has applications to matrix analysis. Recall that a real n n
matrix A is symmetric if it is equal to its transpose A> . One of the most important properties
of symmetric matrices is that they have real eigenvalues and that they can be diagonalized
by an orthogonal matrix (see Chapter 13). This means that for every symmetric matrix A,
there is a diagonal matrix D and an orthogonal matrix P such that
A = P DP > .
Even though it is not always possible to diagonalize an arbitrary matrix, there are various
decompositions involving orthogonal matrices that are of great practical interest. For example, for every real matrix A, there is the QR-decomposition, which says that a real matrix
A can be expressed as
A = QR,
where Q is orthogonal and R is an upper triangular matrix. This can be obtained from the
GramSchmidt orthonormalization procedure, as we saw in Section 10.5, or better, using
Householder matrices, as shown in Section 11.2. There is also the polar decomposition,
which says that a real matrix A can be expressed as
A = QS,
where Q is orthogonal and S is symmetric positive semidefinite (which means that the eigenvalues of S are nonnegative). Such a decomposition is important in continuum mechanics
and in robotics, since it separates stretching from rotation. Finally, there is the wonderful
306
singular value decomposition, abbreviated as SVD, which says that a real matrix A can be
expressed as
A = V DU > ,
where U and V are orthogonal and D is a diagonal matrix with nonnegative entries (see
Chapter 16). This decomposition leads to the notion of pseudo-inverse, which has many
applications in engineering (least squares solutions, etc). For an excellent presentation of all
these notions, we highly recommend Strang [105, 104], Golub and Van Loan [49], Demmel
[27], Serre [96], and Trefethen and Bau [110].
The method of least squares, invented by Gauss and Legendre around 1800, is another
great application of Euclidean geometry. Roughly speaking, the method is used to solve
inconsistent linear systems Ax = b, where the number of equations is greater than the
number of variables. Since this is generally impossible, the method of least squares consists
in finding a solution x minimizing the Euclidean norm kAx bk2 , that is, the sum of the
squares of the errors. It turns out that there is always a unique solution x+ of smallest
norm minimizing kAx bk2 , and that it is a solution of the square system
A> Ax = A> b,
called the system of normal equations. The solution x+ can be found either by using the QRdecomposition in terms of Householder transformations, or by using the notion of pseudoinverse of a matrix. The pseudo-inverse can be computed using the SVD decomposition.
Least squares methods are used extensively in computer vision More details on the method
of least squares and pseudo-inverses can be found in Chapter 17.
10.7
Summary
The main concepts and results of this chapter are listed below:
Bilinear forms; positive definite bilinear forms.
inner products, scalar products, Euclidean spaces.
quadratic form associated with a bilinear form.
The Euclidean space En .
The polar form of a quadratic form.
Gram matrix associated with an inner product.
The CauchySchwarz inequality; the Minkowski inequality.
The parallelogram law .
10.7. SUMMARY
307
308
Chapter 11
QR-Decomposition for Arbitrary
Matrices
11.1
Orthogonal Reflections
Hyperplane reflections are represented by matrices called Householder matrices. These matrices play an important role in numerical methods, for instance for solving systems of linear
equations, solving least squares problems, for computing eigenvalues, and for transforming a
symmetric matrix into a tridiagonal matrix. We prove a simple geometric lemma that immediately yields the QR-decomposition of arbitrary matrices in terms of Householder matrices.
Orthogonal symmetries are a very important example of isometries. First let us review
the definition of projections. Given a vector space E, let F and G be subspaces of E that
form a direct sum E = F G. Since every u 2 E can be written uniquely as u = v + w,
where v 2 F and w 2 G, we can define the two projections pF : E ! F and pG : E ! G such
that pF (u) = v and pG (u) = w. It is immediately verified that pG and pF are linear maps,
and that p2F = pF , p2G = pG , pF pG = pG pF = 0, and pF + pG = id.
Definition 11.1. Given a vector space E, for any two subspaces F and G that form a direct
sum E = F G, the symmetry (or reflection) with respect to F and parallel to G is the
linear map s : E ! E defined such that
s(u) = 2pF (u)
u,
for every u 2 E.
Because pF + pG = id, note that we also have
s(u) = pF (u)
pG (u)
and
s(u) = u
309
2pG (u),
310
Definition 11.2. Let E be a Euclidean space of finite dimension n. For any two subspaces
F and G, if F and G form a direct sum E = F G and F and G are orthogonal, i.e.,
F = G? , the orthogonal symmetry (or reflection) with respect to F and parallel to G is the
linear map s : E ! E defined such that
s(u) = 2pF (u)
u,
pG (u),
Ip
0
0
In
.
p is even. In particular, when F is
311
with respect to the origin. When G is a plane, p = n 2, and det(s) = ( 1)2 = 1, so that a
flip about F is a rotation. In particular, when n = 3, F is a line, and a flip about the line
F is indeed a rotation of measure .
Remark: Given any two orthogonal subspaces F, G forming a direct sum E = F G, let
f be the symmetry with respect to F and parallel to G, and let g be the symmetry with
respect to G and parallel to F . We leave as an exercise to show that
f
g=g f =
id.
When F = H is a hyperplane, we can give an explicit formula for s(u) in terms of any
nonnull vector w orthogonal to H. Indeed, from
u = pH (u) + pG (u),
since pG (u) 2 G and G is spanned by w, which is orthogonal to H, we have
pG (u) = w
for some
2 R, and we get
u w = kwk2 ,
and thus
pG (u) =
(u w)
w.
kwk2
s(u) = u
2pG (u),
Since
we get
(u w)
w.
kwk2
Such reflections are represented by matrices called Householder matrices, and they play
an important role in numerical matrix analysis (see Kincaid and Cheney [63] or Ciarlet
[24]). Householder matrices are symmetric and orthogonal. It is easily checked that over an
orthonormal basis (e1 , . . . , en ), a hyperplane reflection about a hyperplane H orthogonal to
a nonnull vector w is represented by the matrix
s(u) = u
H = In
WW>
= In
kW k2
WW>
,
W >W
where W is the column vector of the coordinates of w over the basis (e1 , . . . , en ), and In is
the identity n n matrix. Since
pG (u) =
(u w)
w,
kwk2
312
WW>
,
W >W
and since pH + pG = id, the matrix representing pH is
In
WW>
.
W >W
These formulae can be used to derive a formula for a rotation of R3 , given the direction w
of its axis of rotation and given the angle of rotation.
The following fact is the key to the proof that every isometry can be decomposed as a
product of reflections.
Proposition 11.1. Let E be any nontrivial Euclidean space. For any two vectors u, v 2 E,
if kuk = kvk, then there is a hyperplane H such that the reflection s about H maps u to v,
and if u 6= v, then this reflection is unique.
Proof. If u = v, then any hyperplane containing u does the job. Otherwise, we must have
H = {v u}? , and by the above formula,
s(u) = u
(u (v u))
(v
k(v u)k2
u) = u +
2kuk2 2u v
(v
k(v u)k2
u),
and since
k(v
and kuk = kvk, we have
k(v
2u v
2u v,
If E is a complex vector space and the inner product is Hermitian, Proposition 11.1
is false. The problem is that the vector v u does not work unless the inner product
u v is real! The proposition can be salvaged enough to yield the QR-decomposition in terms
of Householder transformations; see Gallier [44].
We now show that hyperplane reflections can be used to obtain another proof of the
QR-decomposition.
11.2
First, we state the result geometrically. When translated in terms of Householder matrices,
we obtain the fact advertised earlier that every matrix (not necessarily invertible) has a
QR-decomposition.
313
Proposition 11.2. Let E be a nontrivial Euclidean space of dimension n. For any orthonormal basis (e1 , . . ., en ) and for any n-tuple of vectors (v1 , . . ., vn ), there is a sequence of n
isometries h1 , . . . , hn such that hi is a hyperplane reflection or the identity, and if (r1 , . . . , rn )
are the vectors given by
rj = hn h2 h1 (vj ),
then every rj is a linear combination of the vectors (e1 , . . . , ej ), 1 j n. Equivalently, the
matrix R whose columns are the components of the rj over the basis (e1 , . . . , en ) is an upper
triangular matrix. Furthermore, the hi can be chosen so that the diagonal entries of R are
nonnegative.
Proof. We proceed by induction on n. For n = 1, we have v1 = e1 for some 2 R. If
0, we let h1 = id, else if < 0, we let h1 = id, the reflection about the origin.
For n
If v1 = r1,1 e1 , we let h1 = id. Otherwise, there is a unique hyperplane reflection h1 such that
h1 (v1 ) = r1,1 e1 ,
defined such that
h1 (u) = u
for all u 2 E, where
(u w1 )
w1
kw1 k2
w1 = r1,1 e1
v1 .
The map h1 is the reflection about the hyperplane H1 orthogonal to the vector w1 = r1,1 e1
v1 . Letting
r1 = h1 (v1 ) = r1,1 e1 ,
it is obvious that r1 belongs to the subspace spanned by e1 , and r1,1 = kv1 k is nonnegative.
Next, assume that we have found k linear maps h1 , . . . , hk , hyperplane reflections or the
identity, where 1 k n 1, such that if (r1 , . . . , rk ) are the vectors given by
rj = h k
h2 h1 (vj ),
314
(u wk+1 )
wk+1
kwk+1 k2
u00k+1 .
The map hk+1 is the reflection about the hyperplane Hk+1 orthogonal to the vector wk+1 =
rk+1,k+1 ek+1 u00k+1 . However, since u00k+1 , ek+1 2 Uk00 and Uk0 is orthogonal to Uk00 , the subspace
Uk0 is contained in Hk+1 , and thus, the vectors (r1 , . . . , rk ) and u0k+1 , which belong to Uk0 , are
invariant under hk+1 . This proves that
hk+1 (uk+1 ) = hk+1 (u0k+1 ) + hk+1 (u00k+1 ) = u0k+1 + rk+1,k+1 ek+1
is a linear combination of (e1 , . . . , ek+1 ). Letting
rk+1 = hk+1 (uk+1 ) = u0k+1 + rk+1,k+1 ek+1 ,
since uk+1 = hk
is a linear combination of (e1 , . . . , ek+1 ). The coefficient of rk+1 over ek+1 is rk+1,k+1 = ku00k+1 k,
which is nonnegative. This concludes the induction step, and thus the proof.
Remarks:
(1) Since every hi is a hyperplane reflection or the identity,
= hn h2 h1
is an isometry.
(2) If we allow negative diagonal entries in R, the last isometry hn may be omitted.
315
u00k ,
316
Remarks:
(1) Letting
Ak+1 = Hk H2 H1 A,
1
B
..
.. .. .. .. C
B 0 ...
.
. . . .C
B
C
k+1
B 0 0 uk
C
B
C
B 0 0 0 uk+1 C
k+1
B
C,
Ak+1 = B
k+1
C
0
0
0
u
k+2
B
C
B .. .. ..
..
.. .. .. .. C
B. . .
.
. . . .C
B
C
k+1
@ 0 0 0 un 1 A
0 0 0 uk+1
n
where the (k + 1)th column of the matrix is the vector
uk+1 = hk
h2 h1 (vk+1 ),
and thus
k+1
u0k+1 = uk+1
1 , . . . , uk
and
k+1
k+1
u00k+1 = uk+1
.
k+1 , uk+2 , . . . , un
If the last n k 1 entries in column k + 1 are all zero, there is nothing to do, and we
let Hk+1 = I. Otherwise, we kill these n k 1 entries by multiplying Ak+1 on the
left by the Householder matrix Hk+1 sending
k+1
0, . . . , 0, uk+1
k+1 , . . . , un
k+1
where rk+1,k+1 = k(uk+1
k+1 , . . . , un )k.
(2) If A is invertible and the diagonal entries of R are positive, it can be shown that Q
and R are unique.
(3) If we allow negative diagonal entries in R, the matrix Hn may be omitted (Hn = I).
(4) The method allows the computation of the determinant of A. We have
det(A) = ( 1)m r1,1 rn,n ,
where m is the number of Householder matrices (not the identity) among the Hi .
11.3. SUMMARY
317
(5) The condition number of the matrix A is preserved (see Strang [105], Golub and Van
Loan [49], Trefethen and Bau [110], Kincaid and Cheney [63], or Ciarlet [24]). This is
very good for numerical stability.
(6) The method also applies to a rectangular m n matrix. In this case, R is also an
m n matrix (and it is upper triangular).
11.3
Summary
The main concepts and results of this chapter are listed below:
Symmetry (or reflection) with respect to F and parallel to G.
Orthogonal symmetry (or reflection) with respect to F and parallel to G; reflections,
flips.
Hyperplane reflections and Householder matrices.
A key fact about reflections (Proposition 11.1).
QR-decomposition in terms of Householder transformations (Theorem 11.3).
318
Chapter 12
Hermitian Spaces
12.1
In this chapter we generalize the basic results of Euclidean geometry presented in Chapter
10 to vector spaces over the complex numbers. Such a generalization is inevitable, and not
simply a luxury. For example, linear maps may not have real eigenvalues, but they always
have complex eigenvalues. Furthermore, some very important classes of linear maps can
be diagonalized if they are extended to the complexification of a real vector space. This
is the case for orthogonal matrices, and, more generally, normal matrices. Also, complex
vector spaces are often the natural framework in physics or engineering, and they are more
convenient for dealing with Fourier series. However, some complications arise due to complex
conjugation.
Recall that for any complex number z 2 C, if z = x + iy where x, y 2 R, we let <z = x,
the real part of z, and =z = y, the imaginary part of z. We also denote the conjugate of
z = x + iy by z = x iy, and the absolute value (or length, or modulus) of z by |z|. Recall
that |z|2 = zz = x2 + y 2 .
There are many natural situations where a map ' : E E ! C is linear in its first
argument and only semilinear in its second argument, which means that '(u, v) = '(u, v),
as opposed to '(u, v) = '(u, v). For example, the natural inner product to deal with
functions f : R ! C, especially Fourier series, is
Z
hf, gi =
f (x)g(x)dx,
which is semilinear (but not linear) in g. Thus, when generalizing a result from the real case
of a Euclidean space to the complex case, we always have to check very carefully that our
proofs do not rely on linearity in the second argument. Otherwise, we need to revise our
proofs, and sometimes the result is simply wrong!
319
320
2 C.
Remark: Instead of defining semilinear maps, we could have defined the vector space E as
the vector space with the same carrier set E whose addition is the same as that of E, but
whose multiplication by a complex number is given by
( , u) 7! u.
Then it is easy to check that a function f : E ! C is semilinear i f : E ! C is linear.
We can now define sesquilinear forms and Hermitian forms.
Definition 12.2. Given a complex vector space E, a function ' : E E ! C is a sesquilinear
form if it is linear in its first argument and semilinear in its second argument, which means
that
'(u1 + u2 , v) = '(u1 , v) + '(u2 , v),
'(u, v1 + v2 ) = '(u, v1 ) + '(u, v2 ),
'( u, v) = '(u, v),
'(u, v) = '(u, v),
for all u, v, u1 , u2 , v1 , v2 2 E, and all , 2 C. A function ' : E E ! C is a Hermitian
form if it is sesquilinear and if
'(v, u) = '(u, v)
for all all u, v 2 E.
Obviously, '(0, v) = '(u, 0) = 0. Also note that if ' : E E ! C is sesquilinear, we
have
'( u + v, u + v) = | |2 '(u, u) + '(u, v) + '(v, u) + ||2 '(v, v),
and if ' : E E ! C is Hermitian, we have
321
Note that restricted to real coefficients, a sesquilinear form is bilinear (we sometimes say
R-bilinear). The function : E ! C defined such that (u) = '(u, u) for all u 2 E is called
the quadratic form associated with '.
The standard example of a Hermitian form on Cn is the map ' defined such that
'((x1 , . . . , xn ), (y1 , . . . , yn )) = x1 y1 + x2 y2 + + xn yn .
This map is also positive definite, but before dealing with these issues, we show the following
useful proposition.
Proposition 12.1. Given a complex vector space E, the following properties hold:
(1) A sesquilinear form ' : E E ! C is a Hermitian form i '(u, u) 2 R for all u 2 E.
(2) If ' : E E ! C is a sesquilinear form, then
4'(u, v) = '(u + v, u + v) '(u v, u v)
+ i'(u + iv, u + iv) i'(u iv, u
iv),
and
2'(u, v) = (1 + i)('(u, u) + '(v, v))
'(u
v, u
v)
i'(u
iv, u
iv).
'(v, u)) = ,
i
2
and '(v, u) =
+i
,
2
322
Proposition 12.1 shows that a sesquilinear form is completely determined by the quadratic
form (u) = '(u, u), even if ' is not Hermitian. This is false for a real bilinear form, unless
it is symmetric. For example, the bilinear form ' : R2 R2 ! R defined such that
'((x1 , y1 ), (x2 , y2 )) = x1 y2
x2 y1
is not identically zero, and yet it is null on the diagonal. However, a real symmetric bilinear
form is indeed determined by its values on the diagonal, as we saw in Chapter 10.
As in the Euclidean case, Hermitian forms for which '(u, u)
1
X
xi yi
i=0
is well defined, and l2 is a Hermitian space under '. Actually, l2 is even a Hilbert space.
Example 12.3. Let Cpiece [a, b] be the set of piecewise bounded continuous functions
f : [a, b] ! C under the Hermitian form
Z b
hf, gi =
f (x)g(x)dx.
a
It is easy to check that this Hermitian form is positive, but it is not definite. Thus, under
this Hermitian form, Cpiece [a, b] is only a pre-Hilbert space.
323
Example 12.4. Let C[a, b] be the set of complex-valued continuous functions f : [a, b] ! C
under the Hermitian form
Z b
hf, gi =
f (x)g(x)dx.
a
It is easy to check that this Hermitian form is positive definite. Thus, C[a, b] is a Hermitian
space.
Example 12.5. Let E = Mn (C) be the vector space of complex n n matrices. If we
view a matrix A 2 Mn (C) as a long column vector obtained by concatenating together its
columns, we can define the Hermitian product of two matrices A, B 2 Mn (C) as
hA, Bi =
n
X
aij bij ,
i,j=1
Since this can be viewed as the standard Hermitian product on Cn , it is a Hermitian product
on Mn (C). The corresponding norm
p
kAkF = tr(A A)
is the Frobenius norm (see Section 7.2).
X
n
i=1
xi e i ,
n
X
j=1
yj e j
n
X
xi y j '(ei , ej ).
i,j=1
If we let G be the matrix G = ('(ei , ej )), and if x and y are the column vectors associated
with (x1 , . . . , xn ) and (y1 , . . . , yn ), then we can write
'(x, y) = x> G y = y G> x,
where y corresponds to (y 1 , . . . , y n ). As in SectionP
10.1, we are committing the slight abuse of
notation of letting x denote both the vector x = ni=1 xi ei and the column vector associated
with (x1 , . . . , xn ) (and similarly for y). The correct expression for '(x, y) is
324
Furthermore, observe that ' is Hermitian i G = G , and ' is positive definite i the
matrix G is positive definite, that is,
x> Gx > 0 for all x 2 Cn , x 6= 0.
The matrix G associated with a Hermitian product is called the Gram matrix of the Hermitian product with respect to the basis (e1 , . . . , en ).
Remark: To avoid the transposition in the expression for '(x, y) = y G> x, some authors
(such as Homan and Kunze [62]), define the Gram matrix G0 = (gij0 ) associated with h , i
so that (gij0 ) = ('(ej , ei )); that is, G0 = G> .
Conversely, if A is a Hermitian positive definite n n matrix, it is easy to check that the
Hermitian form
hx, yi = y Ax
is positive definite. If we make a change of basis from the basis (e1 , . . . , en ) to the basis
(f1 , . . . , fn ), and if the change of basis matrix is P (where the jth column of P consists of
the coordinates of fj over the basis (e1 , . . . , en )), then with respect to coordinates x0 and y 0
over the basis (f1 , . . . , fn ), we have
x> Gy = x0> P > GP y 0 ,
so the matrix of our inner product over the basis (f1 , . . . , fn ) is P > GP = (P ) GP . We
summarize these facts in the following proposition.
Proposition 12.2. Let E be a finite-dimensional vector space, and let (e1 , . . . , en ) be a basis
of E.
1. For any Hermitian inner product h , i on E, if G = (hei , ej i) is the Gram matrix of
the Hermitian product h , i w.r.t. the basis (e1 , . . . , en ), then G is Hermitian positive
definite.
2. For any change of basis matrix P , the Gram matrix of h , i with respect to the new
basis is (P ) GP .
3. If A is any n n Hermitian positive definite matrix, then
hx, yi = y Ax
is a Hermitian product on E.
We will see later that a Hermitian matrix is positive definite i its eigenvalues are all
positive.
The following result reminiscent of the first polarization identity of Proposition 12.1 can
be used to prove that two linear maps are identical.
325
Proposition 12.3. Given any Hermitian space E with Hermitian product h , i, for any
linear map f : E ! E, if hf (x), xi = 0 for all x 2 E, then f = 0.
Proof. Compute hf (x + y), x + yi and hf (x
y), x
yi:
hf (x
y), x
ihf (y), xi = 0,
for all x, y 2 E,
so we have
hf (x), yi + hf (y), xi = 0
hf (x), yi hf (y), xi = 0,
which implies that hf (x), yi = 0 for all x, y 2 E. Since h , i is positive definite, we have
f (x) = 0 for all x 2 E; that is, f = 0.
One should be careful not to apply Proposition 12.3 to a linear map on a real Euclidean
space, because it is false! The reader should find a counterexample.
The CauchySchwarz inequality and the Minkowski inequalities extend to pre-Hilbert
spaces and to Hermitian spaces.
Proposition 12.4. Let hE, 'i be a pre-Hilbert space with associated quadratic form
all u, v 2 E, we have the CauchySchwarz inequality
p
p
|'(u, v)|
(u)
(v).
. For
Furthermore, if hE, 'i is a Hermitian space, the equality holds i u and v are linearly dependent.
We also have the Minkowski inequality
p
p
p
(u + v)
(u) +
(v).
Furthermore, if hE, 'i is a Hermitian space, the equality holds i u and v are linearly dependent, where in addition, if u 6= 0 and v 6= 0, then u = v for some real such that
> 0.
326
(u + t0 ei v) = 0.
(u)
(v).
327
(u)
(v),
If ' is positive definite and u and v are linearly dependent, it is immediately verified that
we get an equality. Conversely, if equality holds in the Minkowski inequality, we must have
p
p
<('(u, v)) =
(u)
(v),
which implies that
|'(u, v)| =
(u)
(v),
uk < }.
If E has finite dimension, every linear map is continuous; see Chapter 7 (or Lang [69, 70],
Dixmier [29], or Schwartz [93, 94]). The CauchySchwarz inequality
|u v| kukkvk
328
If hE, 'i is only pre-Hilbertian, kuk is called a seminorm. In this case, the condition
kuk = 0 implies u = 0
is not necessarily true. However, the CauchySchwarz inequality shows that if kuk = 0, then
u v = 0 for all v 2 E.
Remark: As in the case of real vector spaces, a norm on a complex vector space is induced
by some positive definite Hermitian product h , i i it satisfies the parallelogram law :
ku + vk2 + ku
This time, the Hermitian product is recovered using the polarization identity from Proposition 12.1:
4hu, vi = ku + vk2 ku vk2 + i ku + ivk2 i ku ivk2 .
It is easy to check that hu, ui = kuk2 , and
hv, ui = hu, vi
hiu, vi = ihu, vi,
so it is enough to check linearity in the variable u, and only for real scalars. This is easily
done by applying the proof from Section 10.1 to the real and imaginary part of hu, vi; the
details are left as an exercise.
We will now basically mirror the presentation of Euclidean geometry given in Chapter
10 rather quickly, leaving out most proofs, except when they need to be seriously amended.
12.2
In this section we assume that we are dealing with Hermitian spaces. We denote the Hermitian inner product by u v or hu, vi. The concepts of orthogonality, orthogonal family of
vectors, orthonormal family of vectors, and orthogonal complement of a set of vectors are
unchanged from the Euclidean case (Definition 10.2).
For example, the set C[ , ] of continuous functions f : [ , ] ! C is a Hermitian
space under the product
Z
hf, gi =
f (x)g(x)dx,
ikx
)k2Z is orthogonal.
Proposition 10.3 and 10.4 hold without any changes. It is easy to show that
n
X
i=1
ui
n
X
i=1
kui k2 +
1i<jn
2<(ui uj ).
329
Analogously to the case of Euclidean spaces of finite dimension, the Hermitian product
induces a canonical bijection (i.e., independent of the choice of bases) between the vector
space E and the space E . This is one of the places where conjugation shows up, but in this
case, troubles are minor.
Given a Hermitian space E, for any vector u 2 E, let 'lu : E ! C be the map defined
such that
'lu (v) = u v, for all v 2 E.
Similarly, for any vector v 2 E, let 'rv : E ! C be the map defined such that
'rv (u) = u v,
for all u 2 E.
Since the Hermitian product is linear in its first argument u, the map 'rv is a linear form
in E , and since it is semilinear in its second argument v, the map 'lu is also a linear form
in E . Thus, we have two maps [l : E ! E and [r : E ! E , defined such that
[l (u) = 'lu ,
for all u 2 E
u) = 0 for all w 2 E,
which implies that u = v, since the Hermitian product is positive definite. Thus, [ : E ! E
is injective. Finally, when E is of finite dimension n, E is also of dimension n, and then
[ : E ! E is bijective. Since [ is semilinar, the map [ : E ! E is an isomorphism.
330
As a corollary of the isomorphism [ : E ! E , if E is a Hermitian space of finite dimension, then every linear form f 2 E corresponds to a unique v 2 E, such that
f (u) = u v,
for every u 2 E.
In particular, if f is not the null form, the kernel of f , which is a hyperplane H, is precisely
the set of vectors that are orthogonal to v.
Remarks:
1. The musical map [ : E ! E is not surjective when E has infinite dimension. This
result can be salvaged by restricting our attention to continuous linear maps, and by
assuming that the vector space E is a Hilbert space.
2. Diracs bra-ket notation. Dirac invented a notation widely used in quantum mechanics for denoting the linear form 'u = [(u) associated to the vector u 2 E via the
duality induced by a Hermitian inner product. Diracs proposal is to denote the vectors
u in E by |ui, and call them kets; the notation |ui is pronounced ket u. Given two
kets (vectors) |ui and |vi, their inner product is denoted by
hu|vi
(instead of |ui |vi). The notation hu|vi for the inner product of |ui and |vi anticipates
duality. Indeed, we define the dual (usually called adjoint) bra u of ket u, denoted by
hu|, as the linear form whose value on any ket v is given by the inner product, so
hu|(|vi) = hu|vi.
Thus, bra u = hu| is Diracs notation for our [(u). Since the map [ is semi-linear, we
have
h u| = hu|.
Using the bra-ket notation, given an orthonormal basis (|u1 i, . . . , |un i), ket v (a vector)
is written as
n
X
|vi =
hv|ui i|ui i,
i=1
n
X
i=1
hv|ui ihui | =
n
X
i=1
hui |vihui |
over the dual basis (hu1 |, . . . , hun |). As cute as it looks, we do not recommend using
the Dirac notation.
331
for every v 2 E.
Proposition 12.6. Given a Hermitian space E of finite dimension, for every linear map
f : E ! E there is a unique linear map f : E ! E such that
f (u) v = u f (v),
for all u, v 2 E. The map f is called the adjoint of f (w.r.t. to the Hermitian product).
Proof. Careful inspection of the proof of Proposition 10.6 reveals that it applies unchanged.
The only potential problem is in proving that f ( u) = f (u), but everything takes place
in the first argument of the Hermitian product, and there, we have linearity.
The fact that
vu=uv
Given two Hermitian spaces E and F , where the Hermitian product on E is denoted
by h , i1 and the Hermitian product on F is denoted by h , i2 , given any linear map
f : E ! F , it is immediately verified that the proof of Proposition 12.6 can be adapted to
show that there is a unique linear map f : F ! E such that
hf (u), vi2 = hu, f (v)i1
for all u 2 E and all v 2 F . The linear map f is also called the adjoint of f .
0 all x 2 E;
332
positive definite i
hf (x), xi > 0 all x 2 E, x 6= 0.
An interesting corollary of Proposition 12.3 is that a positive semidefinite linear map must
be self-adjoint. In fact, we can prove a slightly more general result.
Proposition 12.7. Given any finite-dimensional Hermitian space E with Hermitian product
h , i, for any linear map f : E ! E, if hf (x), xi 2 R for all x 2 E, then f is self-adjoint.
In particular, any positive semidefinite linear map f : E ! E is self-adjoint.
Proof. Since hf (x), xi 2 R for all x 2 E, we have
hf (x), xi = hf (x), xi
= hx, f (x)i
= hf (x), xi,
so we have
h(f
f )(x), xi = 0 all x 2 E,
f = 0.
1, there
n
X
i=1
ui v i ,
X
n
i=1
|ui |
1/2
333
The fact that a Hermitian space always has an orthonormal basis implies that any Gram
matrix G can be written as
G = Q Q,
for some invertible matrix Q. Indeed, we know that in a change of basis matrix, a Gram
matrix G becomes G0 = (P ) GP . If the basis corresponding to G0 is orthonormal, then
1
1
G0 = I, so G = (P ) P .
Proposition 10.9 also holds unchanged.
Proposition 12.10. Given any nontrivial Hermitian space E of finite dimension n 1, for
any subspace F of dimension k, the orthogonal complement F ? of F has dimension n k,
and E = F F ? . Furthermore, we have F ?? = F .
12.3
In this section we consider linear maps between Hermitian spaces that preserve the Hermitian
norm. All definitions given for Euclidean spaces in Section 10.3 extend to Hermitian spaces,
except that orthogonal transformations are called unitary transformation, but Proposition
10.10 extends only with a modified condition (2). Indeed, the old proof that (2) implies
(3) does not work, and the implication is in fact false! It can be repaired by strengthening
condition (2). For the sake of completeness, we state the Hermitian version of Definition
10.3.
Definition 12.4. Given any two nontrivial Hermitian spaces E and F of the same finite
dimension n, a function f : E ! F is a unitary transformation, or a linear isometry, if it is
linear and
kf (u)k = kuk, for all u 2 E.
Proposition 10.10 can be salvaged by strengthening condition (2).
Proposition 12.11. Given any two nontrivial Hermitian spaces E and F of the same finite
dimension n, for every function f : E ! F , the following properties are equivalent:
(1) f is a linear map and kf (u)k = kuk, for all u 2 E;
(2) kf (v)
f (u)k = kv
334
ku
vk2
iku
ivk2 .
Since f (iv) = if (v), we get f (0) = 0 by setting v = 0, so the function f preserves distance
and norm, and we get
2'(f (u), f (v)) = (1 + i)(kf (u)k2 + kf (v)k2 ) kf (u) f (v)k2
ikf (u) if (v)k2
= (1 + i)(kf (u)k2 + kf (v)k2 ) kf (u) f (v)k2
ikf (u) f (iv)k2
= (1 + i)(kuk2 + kvk2 ) ku vk2 iku ivk2
= 2'(u, v),
which shows that f preserves the Hermitian inner product, as desired. The rest of the proof
is unchanged.
Remarks:
(i) In the Euclidean case, we proved that the assumption
kf (v)
f (u)k = kv
(20 )
ku
vk2 .
In the Hermitian case the polarization identity involves the complex number i. In fact,
the implication (20 ) implies (3) is false in the Hermitian case! Conjugation z !
7 z
satisfies (20 ) since
|z2 z1 | = |z2 z1 | = |z2 z1 |,
and yet, it is not linear!
(ii) If we modify (2) by changing the second condition by now requiring that there be some
2 E such that
f ( + iu) = f ( ) + i(f ( + u) f ( ))
for all u 2 E, then the function g : E ! E defined such that
g(u) = f ( + u)
f ( )
satisfies the old conditions of (2), and the implications (2) ! (3) and (3) ! (1) prove
that g is linear, and thus that f is affine. In view of the first remark, some condition
involving i is needed on f , in addition to the fact that f is distance-preserving.
12.4
335
In this section, as a mirror image of our treatment of the isometries of a Euclidean space,
we explore some of the fundamental properties of the unitary group and of unitary matrices.
As an immediate corollary of the GramSchmidt orthonormalization procedure, we obtain
the QR-decomposition for invertible matrices. In the Hermitian framework, the matrix of
the adjoint of a linear map is not given by the transpose of the original matrix, but by its
conjugate.
Definition 12.5. Given a complex m n matrix A, the transpose A> of A is the n m
matrix A> = a>
i j defined such that
a>
i j = aj i ,
and the conjugate A of A is the m n matrix A = (bi j ) defined such that
b i j = ai j
for all i, j, 1 i m, 1 j n. The adjoint A of A is the matrix defined such that
A = (A> ) = A
>
Proposition 12.12. Let E be any Hermitian space of finite dimension n, and let f : E ! E
be any linear map. The following properties hold:
(1) The linear map f : E ! E is an isometry i
f
f = f f = id.
(2) For every orthonormal basis (e1 , . . . , en ) of E, if the matrix of f is A, then the matrix
of f is the adjoint A of A, and f is an isometry i A satisfies the identities
A A = A A = In ,
where In denotes the identity matrix of order n, i the columns of A form an orthonormal basis of E, i the rows of A form an orthonormal basis of E.
Proof. (1) The proof is identical to that of Proposition 10.12 (1).
(2) If (e1 , . . . , en ) is an orthonormal basis for E, let A = (ai j ) be the matrix of f , and let
B = (bi j ) be the matrix of f . Since f is characterized by
f (u) v = u f (v)
336
Remarks:
(1) The conditions A A = In , A A = In , and A 1 = A are equivalent. Given any two
orthonormal bases (u1 , . . . , un ) and (v1 , . . . , vn ), if P is the change of basis matrix from
(u1 , . . . , un ) to (v1 , . . . , vn ), it is easy to show that the matrix P is unitary. The proof
of Proposition 12.11 (3) also shows that if f is an isometry, then the image of an
orthonormal basis (u1 , . . . , un ) is an orthonormal basis.
(2) Using the explicit formula for the determinant, we see immediately that
det(A) = det(A).
If f is unitary and A is its matrix with respect to any orthonormal basis, from AA = I,
we get
det(AA ) = det(A) det(A ) = det(A)det(A> ) = det(A)det(A) = | det(A)|2 ,
and so | det(A)| = 1. It is clear that the isometries of a Hermitian space of dimension
n form a group, and that the isometries of determinant +1 form a subgroup.
This leads to the following definition.
337
j=1
j=1
i=1
Moreover, equality holds i either A has a zero column in the left inequality or a zero row in
the right inequality, or A is unitary.
We also have the following version of Proposition 10.15 for Hermitian matrices. The
proof of Proposition 10.15 goes through because the Cholesky decomposition for a Hermitian
positive definite A matrix holds in the form A = B B, where B is upper triangular with
positive diagonal entries. The details are left to the reader.
Proposition 12.15. (Hadamard) For any complex nn matrix A = (aij ), if A is Hermitian
positive semidefinite, then we have
det(A)
n
Y
aii .
i=1
338
Due to space limitations, we will not study the isometries of a Hermitian space in this
chapter. However, the reader will find such a study in the supplements on the web site (see
http://www.cis.upenn.edu/ejean/gbooks/geom2.html).
12.5
In this section, we assume that the field K is not a field of characteristic 2. Recall that a
linear map f : E ! E is an involution i f 2 = id, and is idempotent i f 2 = f . We know
from Proposition 4.7 that if f is idempotent, then
E = Im(f )
Ker (f ),
and that the restriction of f to its image is the identity. For this reason, a linear involution
is called a projection. The connection between involutions and projections is given by the
following simple proposition.
Proposition 12.16. For any linear map f : E ! E, we have f 2 = id i 12 (id f ) is a
projection i 12 (id + f ) is a projection; in this case, f is equal to the dierence of the two
projections 12 (id + f ) and 12 (id f ).
Proof. We have
so
We also have
so
Oviously, f = 12 (id + f )
Let U + = Ker ( 12 (id
1
(id
2
1
(id
2
f)
f)
2
1
(id
2
f ) i f 2 = id.
1
= (id + 2f + f 2 ),
4
1
= (id + f ) i f 2 = id.
2
2f + f 2 )
f ).
(id + f ) (id
which implies that
1
= (id
4
1
= (id
2
1
(id + f )
2
1
(id + f )
2
f ) = id
1
Im (id + f )
2
Ker
f 2 = id
1
(id
2
id = 0,
f) .
1
(id
2
339
f ) , then f (u) = u, so
1
1
(id + f )(u) = (u + u) = u,
2
2
and thus
Ker
Therefore,
1
(id
2
U = Ker
f)
1
(id
2
1
Im (id + f ) .
2
f)
1
= Im (id + f ) ,
2
and so, f (u) = u on U + and f (u) = u on U . The involutions of E that are unitary
transformations are characterized as follows.
Proposition 12.17. Let f 2 GL(E) be an involution. The following properties are equivalent:
(a) The map f is unitary; that is, f 2 U(E).
(b) The subspaces U = Im( 12 (id
= f . Since
A unitary involution is the identity on U + = Im( 12 (id + f )), and f (v) = v for all
v 2 U = Im( 12 (id f )). Furthermore, E is an orthogonal direct sum E = U + U 1 . We
say that f is an orthogonal reflection about U + . In the special case where U + is a hyperplane,
we say that f is a hyperplane reflection. We already studied hyperplane reflections in the
Euclidean case; see Chapter 11.
If f : E ! E is a projection (f 2 = f ), then
(id
so id
2f )2 = id
4f + 4f 2 = id
4f + 4f = id,
340
U = Ker
and
2f . Since id
1
(id
2
1
U = Im (id
2
= Ker (f )
= Im(f ),
g)
g)
g = 2f we have
Ax) = 0;
that is,
A> Ax = A> y.
The matrix A> A is invertible because A has full rank k, thus we get
x = (A> A) 1 A> y,
and so
P y = Ax = A(A> A) 1 A> y.
Therefore, the matrix P of the projection onto the subspace spanned by (a1 . . . , ak ) is given
by
P = A(A> A) 1 A> .
The reader should check that P 2 = P and P > = P .
12.6
341
Dual Norms
In the remark following the proof of Proposition 7.8, we explained that if (E, k k) and
(F, k k) are two normed vector spaces and if we let L(E; F ) denote the set of all continuous
(equivalently, bounded) linear maps from E to F , then, we can define the operator norm (or
subordinate norm) k k on L(E; F ) as follows: for every f 2 L(E; F ),
kf k = sup
x2E
x6=0
kf (x)k
= sup kf (x)k .
kxk
x2E
kxk=1
In particular, if F = C, then L(E; F ) = E 0 is the dual space of E, and we get the operator
norm denoted by k k given by
kf k = sup |f (x)|.
x2E
kxk=1
be the dual norm of k k (on E). If E is a real Euclidean space, then the dual norm is defined
by
kykD = sup hx, yi
x2E
kxk=1
for all y 2 E.
Beware that k k is generally not the Hermitian norm associated with the Hermitian innner
product. The dual norm shows up in convex programming; see Boyd and Vandenberghe [17],
Chapters 2, 3, 6, 9.
342
The fact that k kD is a norm follows from the fact that k k is a norm and can also be
checked directly. It is worth noting that the triangle inequality for k kD comes for free, in
the sense that it holds for any function p : E ! R. Indeed, we have
pD (x + y) = sup |hz, x + yi|
p(z)=1
p(z)=1
= p (x) + p (y).
If p : E ! R is a function such that
(1) p(x)
2 C;
(3) p is continuous, in the sense that for some basis (e1 , . . . , en ) of E, the function
(x1 , . . . , xn ) 7! p(x1 e1 + + xn en )
from Cn to R is continuous;
then we say that p is a pre-norm. Obviously, every norm is a pre-norm, but a pre-norm
may not satisfy the triangle inequality. However, we just showed that the dual norm of any
pre-norm is actually a norm.
Since E is finite dimensional, the unit sphere S n
there is some x0 2 S n 1 such that
= {x 2 E | kxk = 1} is compact, so
0, then
|he
so
with e
x0 , yi| = |e
hx0 , yi| = |e
kykD = = |he
ei | = ,
x0 , yi|,
x2E
kxk=1
343
which yields
kykD
1 = kyk1
kykD
2 = kyk2 .
It can also be shown that the dual of the spectral norm is the trace norm (or nuclear norm)
from Section 16.3. We close this section by stating the following duality theorem.
Theorem 12.20. If E is a finite-dimensional Hermitian space, then for any norm k k on
E, we have
kykDD = kyk
for all y 2 E.
kyk kykDD ,
for all y 2 E.
for all y 2 E.
Proofs of this fact can be found in Horn and Johnson [57] (Section 5.5), and in Serre [96]
(Chapter 7). The proof makes use of the fact that a nonempty, closed, convex set has a
344
supporting hyperplane through each of its boundary points, a result known as Minkowskis
lemma. This result is a consequence of the HahnBanach theorem; see Gallier [44]. We give
the proof in the case where E is a real Euclidean space. Some minor modifications have to
be made when dealing with complex vector spaces and are left as an exercise.
Since the unit ball B = {z 2 E | kzk 1} is closed and convex, the Minkowski lemma
says for every x such that kxk = 1, there is an affine map g, of the form
g(z) = hz, wi
hx, wi
with kwk = 1, such that g(x) = 0 and g(z) 0 for all z such that kzk 1. Then, it is clear
that
sup hz, wi = hx, wi,
kzk=1
and so
It follows that
kxkDD
hw/ kwkD , xi =
hx, wi
kwkD
= 1 = kxk
for all x such that kxk = 1. By homogeneity, this is true for all y 2 E, which completes the
proof in the real case. When E is a complex vector space, we have to view the unit ball B
as a closed convex set in R2n and we use the fact that there is real affine map of the form
g(z) = <hz, wi
<hx, wi
such that g(x) = 0 and g(z) 0 for all z with kzk = 1, so that kwkD = <hx, wi.
More details on dual norms and unitarily invariant norms can be found in Horn and
Johnson [57] (Chapters 5 and 7).
12.7
Summary
The main concepts and results of this chapter are listed below:
Semilinear maps.
Sesquilinear forms; Hermitian forms.
Quadratic form associated with a sesquilinear form.
Polarization identities.
Positive and positive definite Hermitian forms; pre-Hilbert spaces, Hermitian spaces.
Gram matrix associated with a Hermitian product.
12.7. SUMMARY
345
346
Chapter 13
Spectral Theorems in Euclidean and
Hermitian Spaces
13.1
Introduction
The goal of this chapter is to show that there are nice normal forms for symmetric matrices,
skew-symmetric matrices, orthogonal matrices, and normal matrices. The spectral theorem
for symmetric matrices states that symmetric matrices have real eigenvalues and that they
can be diagonalized over an orthonormal basis. The spectral theorem for Hermitian matrices
states that Hermitian matrices also have real eigenvalues and that they can be diagonalized
over a complex orthonormal basis. Normal real matrices can be block diagonalized over an
orthonormal basis with blocks having size at most two, and there are refinements of this
normal form for skew-symmetric and orthogonal matrices.
13.2
We begin by studying normal maps, to understand the structure of their eigenvalues and
eigenvectors. This section and the next two were inspired by Lang [67], Artin [4], Mac Lane
and Birkho [73], Berger [8], and Bertin [12].
Definition 13.1. Given a Euclidean space E, a linear map f : E ! E is normal if
f
f = f f.
f , and orthogonal
348
nice form: It is a block diagonal matrix in which the blocks are either one-dimensional
matrices (i.e., single entries) or two-dimensional matrices of the form
This normal form can be further refined if f is self-adjoint, skew-self-adjoint, or orthogonal. As a first step, we show that f and f have the same kernel when f is normal.
Proposition 13.1. Given a Euclidean space E, if f : E ! E is a normal linear map, then
Ker f = Ker f .
Proof. First, let us prove that
hf (u), f (v)i = hf (u), f (v)i
for all u, v 2 E. Since f is the adjoint of f and f
f = f f , we have
349
Definition 13.2. Given a real vector space E, let EC be the structure E E under the
addition operation
(u1 , u2 ) + (v1 , v2 ) = (u1 + v1 , u2 + v2 ),
and let multiplication by a complex scalar z = x + iy be defined such that
(x + iy) (u, v) = (xu
yv, yu + xv).
hu1 , v2 i).
It is easily verified that h , iC is indeed a Hermitian form that is positive definite, and
it is clear that h , iC agrees with h , i on real vectors. Then, given any linear map
f : E ! E, it is easily verified that the map fC defined such that
fC (u + iv) = f (u) + if (v)
for all u, v 2 E is the adjoint of fC w.r.t. h , iC .
Assuming again that E is a Hermitian space, observe that Proposition 13.1 also holds.
We deduce the following corollary.
350
Proposition 13.2. Given a Hermitian space E, for any normal linear map f : E ! E, we
have Ker (f ) \ Im(f ) = (0).
Proof. Assume v 2 Ker (f ) \ Im(f ) = (0), which means that v = f (u) for some u 2 E, and
f (v) = 0. By Proposition 13.1, Ker (f ) = Ker (f ), so f (v) = 0 implies that f (v) = 0.
Consequently,
0 = hf (v), ui
= hv, f (u)i
= hv, vi,
and thus, v = 0.
We also have the following crucial proposition relating the eigenvalues of f and f .
Proposition 13.3. Given a Hermitian space E, for any normal linear map f : E ! E, a
vector u is an eigenvector of f for the eigenvalue (in C) i u is an eigenvector of f for
the eigenvalue .
Proof. First, it is immediately verified that the adjoint of f
f
id is normal. Indeed,
(f
id) (f
id) = (f
id) (f
f
=f
=f
= (f
= (f
Applying Proposition 13.1 to f
(f
f
f
id) (f
id) (f
id is f
id. Furthermore,
id),
f +
id,
f+
id,
id),
id).
id)(u) = 0,
351
and
hf (u), vi = hu, f (v)i = hu, vi = hu, vi,
where the last identity holds because of the semilinearity in the second argument, and thus
hu, vi = hu, vi,
that is,
(
which implies that hu, vi = 0, since
)hu, vi = 0,
6= .
We can also show easily that the eigenvalues of a self-adjoint linear map are real.
Proposition 13.5. Given a Hermitian space E, all the eigenvalues of any self-adjoint linear
map f : E ! E are real.
Proof. Let z (in C) be an eigenvalue of f and let u be an eigenvector for z. We compute
hf (u), ui in two dierent ways. We have
hf (u), ui = hzu, ui = zhu, ui,
and since f = f , we also have
hf (u), ui = hu, f (u)i = hu, f (u)i = hu, zui = zhu, ui.
Thus,
zhu, ui = zhu, ui,
which implies that z = z, since u 6= 0, and z is indeed real.
There is also a version of Proposition 13.5 for a (real) Euclidean space E and a self-adjoint
map f : E ! E.
Proposition 13.6. Given a Euclidean space E, if f : E ! E is any self-adjoint linear map,
then every eigenvalue of fC is real and is actually an eigenvalue of f (which means that
there is some real eigenvector u 2 E such that f (u) = u). Therefore, all the eigenvalues of
f are real.
Proof. Let EC be the complexification of E, h , iC the complexification of the inner product
h , i on E, and fC : EC ! EC the complexification of f : E ! E. By definition of fC and
h , iC , if f is self-adjoint, we have
hfC (u1 + iv1 ), u2 + iv2 iC = hf (u1 ) + if (v1 ), u2 + iv2 iC
= hf (u1 ), u2 i + hf (v1 ), v2 i + i(hu2 , f (v1 )i
= hu1 , f (u2 )i + hv1 , f (v2 )i + i(hf (u2 ), v1 i
= hu1 + iv1 , f (u2 ) + if (v2 )iC
= hu1 + iv1 , fC (u2 + iv2 )iC ,
hf (u1 ), v2 i)
hu1 , f (v2 )i)
352
2 R.
353
f (W ? ) W ? .
v + i(u + v),
354
we have
f (u) = u
and f (v) = u + v,
iv) = (
i)(u
iv),
Proposition 13.9. Given a Euclidean space E, for any normal linear map f : E ! E, if
w = u + iv is an eigenvector of fC associated with the eigenvalue z = + i (where u, v 2 E
and , 2 R), if 6= 0 (i.e., z is not real) then hu, vi = 0 and hu, ui = hv, vi, which implies
that u and v are linearly independent, and if W is the subspace spanned by u and v, then
f (W ) = W and f (W ) = W . Furthermore, with respect to the (orthogonal) basis (u, v), the
restriction of f to W has the matrix
iviC = hu, ui
hv, vi + 2ihu, vi = 0.
Thus, we get hu, vi = 0 and hu, ui = hv, vi, and since u 6= 0 or v 6= 0, u and v are linearly
independent. Since
f (u) = u v and f (v) = u + v
and since by Proposition 13.3 u + iv is an eigenvector of fC for
f (u) = u + v
and f (v) =
i, we have
u + v,
355
The beginning of the proof of Proposition 13.9 actually shows that for every linear map
f : E ! E there is some subspace W such that f (W ) W , where W has dimension 1 or
2. In general, it doesnt seem possible to prove that W ? is invariant under f . However, this
happens when f is normal.
We can finally prove our first main theorem.
Theorem 13.10. (Main spectral theorem) Given a Euclidean space E of dimension n, for
every normal linear map f : E ! E, there is an orthonormal basis (e1 , . . . , en ) such that the
matrix of f w.r.t. this basis is a block diagonal matrix of the form
0
1
A1
...
B
C
A2 . . .
C
B
B ..
.. . .
.. C
@ .
. . A
.
. . . Ap
such that each block Aj is either a one-dimensional matrix (i.e., a real scalar) or a twodimensional matrix of the form
j
j
Aj =
,
j
j
where
j , j
2 R, with j > 0.
with respect to the basis (u/kuk, v/kvk). If < 0, we let 1 = , 1 = , e1 = u/kuk, and
e2 = v/kvk. If > 0, we let 1 = , 1 = , e1 = v/kvk, and e2 = u/kuk. In all cases, it
is easily verified that the matrix of the restriction of f to W w.r.t. the orthonormal basis
(e1 , e2 ) is
1
1
A1 =
,
1
1
356
2 C.
13.3
357
Theorem 13.12. Given a Euclidean space E of dimension n, for every self-adjoint linear
map f : E ! E, there is an orthonormal basis (e1 , . . . , en ) of eigenvectors of f such that the
matrix of f w.r.t. this basis is a diagonal matrix
0
1
...
1
B
C
2 ...
B
C
B ..
.. . .
.. C ,
@.
.
.
.A
... n
where
2 R.
Proof. We already proved this; see Theorem 13.6. However, it is instructive to give a more
direct method not involving the complexification of h , i and Proposition 13.5.
and f (v) = u + v,
we get
hf (u), vi = h u
v, vi = hu, vi
hv, vi
and
hu, f (v)i = hu, u + vi = hu, ui + hu, vi,
and thus we get
hu, vi
that is,
(hu, ui + hv, vi) = 0,
which implies = 0, since either u 6= 0 or v 6= 0. Therefore,
is a real eigenvalue of f .
Now, going back to the proof of Theorem 13.10, only the case where = 0 applies, and
the induction shows that all the blocks are one-dimensional.
Theorem 13.12 implies that if 1 , . . . ,
the eigenspace associated with i , then
E = E1
Ep ,
358
@
@
(X), . . . ,
(X)
@x1
@xn
where X is a column vector of size n. But since f is self-adjoint, A = A> , and thus
r (X) = 2AX.
The next step is to find the maximum of the function
Sn
on the sphere
X (Y
) = hr (X), Y i = 0
for some
359
0
j
Aj =
,
j
0
where j 2 R, with j > 0. In particular, the eigenvalues of fC are pure imaginary of the
form ij or 0.
Proof. The case where n = 1 is trivial. As in the proof of Theorem 13.10, fC has some
eigenvalue z = + i, where , 2 R. We claim that = 0. First, we show that
hf (w), wi = 0
for all w 2 E. Indeed, since f =
f , we get
hw, f (w)i =
hf (w), wi,
and f (v) = u + v,
we get
and
0 = hf (u), ui = h u
v, ui = hu, ui
hu, vi
= 0.
Then, going back to the proof of Theorem 13.10, unless = 0, the case where u and v
are orthogonal and span a subspace of dimension 2 applies, and the induction shows that all
the blocks are two-dimensional or reduced to 0.
360
Remark: One will note that if f is skew-self-adjoint, then ifC is self-adjoint w.r.t. h , iC .
By Proposition 13.5, the map ifC has real eigenvalues, which implies that the eigenvalues of
fC are pure imaginary or 0.
Finally, we consider orthogonal linear maps.
Theorem 13.14. Given a Euclidean space E of dimension n, for every orthogonal linear
map f : E ! E there is an orthonormal basis (e1 , . . . , en ) such that the matrix of f w.r.t.
this basis is a block diagonal matrix of the form
0
1
A1
...
B
C
A2 . . .
C
B
B ..
.. . .
.. C
@ .
.
.
. A
. . . Ap
such that each block Aj is either 1,
cos j
sin j
Aj =
sin j cos j
where 0 < j < . In particular, the eigenvalues of fC are of the form cos j i sin j , 1, or
1.
Proof. The case where n = 1 is trivial. As in the proof of Theorem 13.10, fC has some
eigenvalue z = + i, where , 2 R. It is immediately verified that f f = f f = id
implies that fC fC = fC fC = id, so the map fC is unitary. In fact, the eigenvalues of fC
have absolute value 1. Indeed, if z (in C) is an eigenvalue of fC , and u is an eigenvector for
z, we have
hfC (u), fC (u)i = hzu, zui = zzhu, ui
and
361
cos j
sin j
Aj =
sin j cos j
with 0 < j < .
The linear map f has an eigenspace E(1, f ) = Ker (f id) of dimension p for the eigenvalue 1, and an eigenspace E( 1, f ) = Ker (f + id) of dimension q for the eigenvalue 1. If
det(f ) = +1 (f is a rotation), the dimension q of E( 1, f ) must be even, and the entries in
Iq can be paired to form two-dimensional blocks, if we wish. In this case, every rotation
in SO(n) has a matrix of the form
0
1
A1 . . .
C
B .. . .
.
B .
C
. ..
B
C
@
A
. . . Am
...
In 2m
where the first m blocks Aj are of the form
cos j
Aj =
sin j
sin j
cos j
with 0 < j .
E( 1, f )
F1
Fr ,
and all the summands are pairwise orthogonal. Furthermore, the restriction ri of f to each
Fi is a rotation ri 6= id. Each 2D rotation ri can be written a the composition ri = s0i si
of two reflections si and s0i about lines in Fi (forming an angle i /2). We can extend si and
s0i to hyperplane reflections in E by making them the identity on Fi? . Then,
s0r
sr
s01 s1
362
sr
F1
Fr . But then,
s01 s1 ,
p reflections.
If
f = st s1 ,
for t reflections si , it is clear that
F =
t
\
i=1
E(1, si ) E(1, f ),
where E(1, si ) is the hyperplane defining the reflection si . By the Grassmann relation, if
we intersect t n hyperplanes, the dimension of their intersection is at least n t. Thus,
n t p, that is, t n p, and n p is the smallest number of reflections composing f .
As a corollary of Theorem 13.15, we obtain the following fact: If the dimension n of the
Euclidean space E is odd, then every rotation f 2 SO(E) admits 1 has an eigenvalue.
Proof. The characteristic polynomial det(XI f ) of f has odd degree n and has real coefficients, so it must have some real root . Since f is an isometry, its n eigenvalues are of the
form, +1, 1, and ei , with 0 < < , so = 1. Now, the eigenvalues ei appear in
conjugate pairs, and since n is odd, the number of real eigenvalues of f is odd. This implies
that +1 is an eigenvalue of f , since otherwise 1 would be the only real eigenvalue of f , and
since its multiplicity is odd, we would have det(f ) = 1, contradicting the fact that f is a
rotation.
When n = 3, we obtain the result due to Euler which says that every 3D rotation R has
an invariant axis D, and that restricted to the plane orthogonal to D, it is a 2D rotation.
Furthermore, if (a, b, c) is a unit vector defining the axis D of the rotation R and if the angle
of the rotation is , if B is the skew-symmetric matrix
0
1
0
c b
0
aA ,
B=@ c
b a
0
then it can be shown that
R = I + sin B + (1
cos )B 2 .
The theorems of this section and of the previous section can be immediately applied to
matrices.
13.4
363
symmetric if
skew-symmetric if
orthogonal if
A A> = A> A,
A> = A,
A> =
A,
A A> = A> A = In .
Recall from Proposition 10.12 that when E is a Euclidean space and (e1 , . . ., en ) is an
orthonormal basis for E, if A is the matrix of a linear map f : E ! E w.r.t. the basis
(e1 , . . . , en ), then A> is the matrix of the adjoint f of f . Consequently, a normal linear map
has a normal matrix, a self-adjoint linear map has a symmetric matrix, a skew-self-adjoint
linear map has a skew-symmetric matrix, and an orthogonal linear map has an orthogonal
matrix. Similarly, if E and F are Euclidean spaces, (u1 , . . . , un ) is an orthonormal basis for
E, and (v1 , . . . , vm ) is an orthonormal basis for F , if a linear map f : E ! F has the matrix
A w.r.t. the bases (u1 , . . . , un ) and (v1 , . . . , vm ), then its adjoint f has the matrix A> w.r.t.
the bases (v1 , . . . , vm ) and (u1 , . . . , un ).
Furthermore, if (u1 , . . . , un ) is another orthonormal basis for E and P is the change of
basis matrix whose columns are the components of the ui w.r.t. the basis (e1 , . . . , en ), then
P is orthogonal, and for any linear map f : E ! E, if A is the matrix of f w.r.t (e1 , . . . , en )
and B is the matrix of f w.r.t. (u1 , . . . , un ), then
B = P > AP.
As a consequence, Theorems 13.10 and 13.12–13.14 can be restated as follows.

Theorem 13.16. For every normal matrix A there is an orthogonal matrix P and a block diagonal matrix D such that A = P D P^⊤, where D is of the form
\[
D = \begin{pmatrix} D_1 & & & \\ & D_2 & & \\ & & \ddots & \\ & & & D_p \end{pmatrix}
\]
such that each block D_j is either a one-dimensional matrix (i.e., a real scalar) or a two-dimensional matrix of the form
\[
D_j = \begin{pmatrix} \lambda_j & -\mu_j \\ \mu_j & \lambda_j \end{pmatrix},
\]
where λ_j, μ_j ∈ R, with μ_j > 0.
Theorem 13.17. For every symmetric matrix A there is an orthogonal matrix P and a diagonal matrix D such that A = P D P^⊤, where D is of the form
\[
D = \begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{pmatrix},
\]
where λ_i ∈ R.
Theorem 13.18. For every skew-symmetric matrix A there is an orthogonal matrix P and a block diagonal matrix D such that A = P D P^⊤, where D is of the form
\[
D = \begin{pmatrix} D_1 & & & \\ & D_2 & & \\ & & \ddots & \\ & & & D_p \end{pmatrix}
\]
such that each block D_j is either 0 or a two-dimensional matrix of the form
\[
D_j = \begin{pmatrix} 0 & -\mu_j \\ \mu_j & 0 \end{pmatrix},
\]
where μ_j ∈ R, with μ_j > 0. In particular, the eigenvalues of A are pure imaginary of the form ±iμ_j, or 0.
Theorem 13.19. For every orthogonal matrix A there is an orthogonal matrix P and a block diagonal matrix D such that A = P D P^⊤, where D is of the form
\[
D = \begin{pmatrix} D_1 & & & \\ & D_2 & & \\ & & \ddots & \\ & & & D_p \end{pmatrix}
\]
such that each block D_j is either 1, −1, or a two-dimensional matrix of the form
\[
D_j = \begin{pmatrix} \cos\theta_j & -\sin\theta_j \\ \sin\theta_j & \cos\theta_j \end{pmatrix},
\]
where 0 < θ_j < π. In particular, the eigenvalues of A are of the form cos θ_j ± i sin θ_j, 1, or −1.
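As a quick sanity check of Theorem 13.17, the following Python sketch (using numpy; variable names ours) diagonalizes a random symmetric matrix with an orthogonal change of basis and verifies A = P D P^⊤.

    import numpy as np

    np.random.seed(0)
    M = np.random.randn(5, 5)
    A = (M + M.T) / 2                      # a symmetric matrix
    lam, P = np.linalg.eigh(A)             # real eigenvalues and orthonormal eigenvectors
    D = np.diag(lam)
    print(np.allclose(P.T @ P, np.eye(5))) # True: P is orthogonal
    print(np.allclose(A, P @ D @ P.T))     # True: A = P D P^T with D real diagonal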
We now consider complex matrices.
Definition 13.4. Given a complex m × n matrix A, the transpose A^⊤ of A is the n × m matrix A^⊤ = (a^⊤_{i j}) defined such that
\[
a^\top_{i\, j} = a_{j\, i}
\]
for all i, j, 1 ≤ i ≤ m, 1 ≤ j ≤ n. The conjugate A̅ of A is the m × n matrix A̅ = (b_{i j}) defined such that
\[
b_{i\, j} = \overline{a_{i\, j}}
\]
for all i, j, 1 ≤ i ≤ m, 1 ≤ j ≤ n. Given an m × n complex matrix A, the adjoint A^* of A is the matrix defined such that
\[
A^* = (\overline{A})^\top = \overline{(A^\top)}.
\]
A complex n × n matrix A is
  normal if A A^* = A^* A,
  Hermitian if A^* = A,
  skew-Hermitian if A^* = −A,
  unitary if A A^* = A^* A = I_n.
Recall from Proposition 12.12 that when E is a Hermitian space and (e1 , . . ., en ) is an
orthonormal basis for E, if A is the matrix of a linear map f : E ! E w.r.t. the basis
(e1 , . . . , en ), then A is the matrix of the adjoint f of f . Consequently, a normal linear map
has a normal matrix, a self-adjoint linear map has a Hermitian matrix, a skew-self-adjoint
linear map has a skew-Hermitian matrix, and a unitary linear map has a unitary matrix.
Similarly, if E and F are Hermitian spaces, (u1 , . . . , un ) is an orthonormal basis for E, and
(v1 , . . . , vm ) is an orthonormal basis for F , if a linear map f : E ! F has the matrix A w.r.t.
the bases (u1 , . . . , un ) and (v1 , . . . , vm ), then its adjoint f has the matrix A w.r.t. the bases
(v1 , . . . , vm ) and (u1 , . . . , un ).
Furthermore, if (u1 , . . . , un ) is another orthonormal basis for E and P is the change of
basis matrix whose columns are the components of the ui w.r.t. the basis (e1 , . . . , en ), then
P is unitary, and for any linear map f : E → E, if A is the matrix of f w.r.t. (e_1, . . . , e_n) and B is the matrix of f w.r.t. (u_1, . . . , u_n), then
\[
B = P^* A P.
\]
Theorem 13.11 can be restated in terms of matrices as follows. We can also say a little
more about eigenvalues (easy exercise left to the reader).
Theorem 13.20. For every complex normal matrix A there is a unitary matrix U and a diagonal matrix D such that A = U D U^*. Furthermore, if A is Hermitian, then D is a real matrix; if A is skew-Hermitian, then the entries in D are pure imaginary or null; and if A is unitary, then the entries in D have absolute value 1.
13.5 Conditioning of Eigenvalue Problems

The n × n matrix
\[
A = \begin{pmatrix}
0 & & & & \\
1 & 0 & & & \\
 & 1 & 0 & & \\
 & & \ddots & \ddots & \\
 & & & 1 & 0
\end{pmatrix}
\]
has the eigenvalue 0 with multiplicity n. However, if we perturb the top rightmost entry of A by ε, it is easy to see that the characteristic polynomial of the matrix
\[
A(\epsilon) = \begin{pmatrix}
0 & & & & \epsilon \\
1 & 0 & & & \\
 & 1 & 0 & & \\
 & & \ddots & \ddots & \\
 & & & 1 & 0
\end{pmatrix}
\]
is X^n − ε, so A(ε) has n distinct eigenvalues, each of modulus ε^{1/n}. Thus, a perturbation of order, say, 10^{−n} of A yields eigenvalues of order 1/10: the eigenvalue problem for a non-normal matrix can be very badly conditioned.
Proposition 13.21. Let A be a diagonalizable n × n matrix, P an invertible matrix, and D = diag(λ_1, . . . , λ_n) a diagonal matrix such that A = P D P^{−1}, and let ‖ ‖ be a matrix norm such that ‖diag(α_1, . . . , α_n)‖ = max_{1≤i≤n} |α_i| for every diagonal matrix. Then, for every perturbation matrix ΔA, if we write
\[
B_i = \{ z \in \mathbb{C} \mid |z - \lambda_i| \leq \mathrm{cond}(P)\, \|\Delta A\| \},
\]
for every eigenvalue λ of A + ΔA, we have
\[
\lambda \in \bigcup_{k=1}^{n} B_k.
\]
Proof. Let λ be any eigenvalue of the matrix A + ΔA. If λ = λ_j for some j, then the result is trivial. Thus, assume that λ ≠ λ_j for j = 1, . . . , n. In this case, the matrix D − λI is invertible (since its eigenvalues are λ_j − λ for j = 1, . . . , n), and we have
\[
P^{-1}(A + \Delta A - \lambda I)P = D - \lambda I + P^{-1}(\Delta A)P
= (D - \lambda I)\bigl(I + (D - \lambda I)^{-1} P^{-1}(\Delta A)P\bigr).
\]
Since λ is an eigenvalue of A + ΔA, the matrix A + ΔA − λI is singular, so the matrix
\[
I + (D - \lambda I)^{-1} P^{-1}(\Delta A)P
\]
must also be singular, which implies that
\[
1 \leq \|(D - \lambda I)^{-1} P^{-1}(\Delta A)P\| \leq \|(D - \lambda I)^{-1}\|\, \|P^{-1}\|\, \|\Delta A\|\, \|P\|.
\]
Now, (D − λI)^{−1} is a diagonal matrix with entries 1/(λ_j − λ), so by our assumption on the norm,
\[
\|(D - \lambda I)^{-1}\| = \frac{1}{\min_i(|\lambda_i - \lambda|)}.
\]
Since there is some index k for which min_i(|λ_i − λ|) = |λ_k − λ|, we obtain
\[
|\lambda_k - \lambda| \leq \|P^{-1}\|\, \|\Delta A\|\, \|P\| = \mathrm{cond}(P)\, \|\Delta A\|,
\]
that is, λ ∈ B_k, as desired.
Proposition 13.21 implies that for any diagonalizable matrix A, if we define κ(A) by
\[
\kappa(A) = \inf\{\mathrm{cond}(P) \mid P^{-1} A P = D\},
\]
then for every eigenvalue λ of A + ΔA, we have
\[
\lambda \in \bigcup_{k=1}^{n} \{ z \in \mathbb{C} \mid |z - \lambda_k| \leq \kappa(A)\, \|\Delta A\| \}.
\]
The number κ(A) is called the conditioning of A relative to the eigenvalue problem. If A is a normal matrix, since by Theorem 13.20, A can be diagonalized with respect to a unitary matrix U, and since for the spectral norm ‖U‖_2 = 1, we see that κ(A) = 1. Therefore, normal matrices are very well conditioned w.r.t. the eigenvalue problem. In fact, for every eigenvalue λ of A + ΔA (with A normal), we have
\[
\lambda \in \bigcup_{k=1}^{n} \{ z \in \mathbb{C} \mid |z - \lambda_k| \leq \|\Delta A\|_2 \}.
\]
If A and A + ΔA are both symmetric (or Hermitian), there are sharper results; see Proposition 13.27.

Note that the matrix A(ε) from the beginning of the section is not normal.
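The contrast between the normal and the non-normal case is easy to observe numerically. The following Python sketch (variable names ours) perturbs the matrix A(ε) above and a symmetric matrix by the same tiny amount and compares how far the eigenvalues move.

    import numpy as np

    n, eps = 10, 1e-10
    A = np.diag(np.ones(n - 1), -1)        # the nilpotent matrix A above
    A_eps = A.copy()
    A_eps[0, -1] = eps                     # perturb the top rightmost entry
    # eigenvalues of A are all 0, but those of A(eps) have modulus eps**(1/n) = 0.1
    print(np.max(np.abs(np.linalg.eigvals(A_eps))))

    S = A + A.T                            # a symmetric (hence normal) matrix
    dS = eps * np.eye(n)                   # a perturbation with ||dS||_2 = eps
    shift = np.max(np.abs(np.linalg.eigvalsh(S + dS) - np.linalg.eigvalsh(S)))
    print(shift <= eps + 1e-12)            # True: eigenvalues move by at most ||dS||_2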
13.6

A fact that is used frequently in optimization problems is that the eigenvalues of a symmetric matrix are characterized in terms of what is known as the Rayleigh ratio, defined by
\[
R(A)(x) = \frac{x^\top A x}{x^\top x}, \qquad x \in \mathbb{R}^n,\ x \neq 0.
\]
The following proposition is often used to prove the correctness of various optimization
or approximation problems (for example PCA).
Proposition 13.22. (Rayleigh–Ritz) If A is a symmetric n × n matrix with eigenvalues λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n and if (u_1, . . . , u_n) is any orthonormal basis of eigenvectors of A, where u_i is a unit eigenvector associated with λ_i, then
\[
\max_{x \neq 0} \frac{x^\top A x}{x^\top x} = \lambda_n
\]
(with the maximum attained for x = u_n), and
\[
\max_{x \neq 0,\, x \in \{u_{n-k+1}, \ldots, u_n\}^{\perp}} \frac{x^\top A x}{x^\top x} = \lambda_{n-k}
\]
(with the maximum attained for x = u_{n−k}), where 1 ≤ k ≤ n − 1. Equivalently, if V_k is the subspace spanned by (u_1, . . . , u_k), then
\[
\lambda_k = \max_{x \neq 0,\, x \in V_k} \frac{x^\top A x}{x^\top x}, \qquad k = 1, \ldots, n.
\]

Proof. First, observe that
\[
\max_{x \neq 0} \frac{x^\top A x}{x^\top x} = \max_x \{ x^\top A x \mid x^\top x = 1 \},
\]
and similarly,
\[
\max_{x \neq 0,\, x \in \{u_{n-k+1}, \ldots, u_n\}^{\perp}} \frac{x^\top A x}{x^\top x}
= \max_x \bigl\{ x^\top A x \mid (x \in \{u_{n-k+1}, \ldots, u_n\}^{\perp}) \wedge (x^\top x = 1) \bigr\}.
\]
Since A is a symmetric matrix, its eigenvalues are real and it can be diagonalized with respect to an orthonormal basis of eigenvectors, so let (u_1, . . . , u_n) be such a basis. If we write
\[
x = \sum_{i=1}^{n} x_i u_i,
\]
a simple computation shows that
\[
x^\top A x = \sum_{i=1}^{n} \lambda_i x_i^2.
\]
If x^⊤x = 1, then ∑_{i=1}^{n} x_i^2 = 1, and since we assumed that λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n, we get
\[
x^\top A x = \sum_{i=1}^{n} \lambda_i x_i^2 \leq \lambda_n \sum_{i=1}^{n} x_i^2 = \lambda_n.
\]
Thus,
\[
\max_x \{ x^\top A x \mid x^\top x = 1 \} \leq \lambda_n,
\]
and since this maximum is achieved for x = u_n, we conclude that
\[
\max_x \{ x^\top A x \mid x^\top x = 1 \} = \lambda_n.
\]
Next, observe that x ∈ {u_{n−k+1}, . . . , u_n}^⊥ and x^⊤x = 1 iff x_{n−k+1} = ⋯ = x_n = 0 and ∑_{i=1}^{n−k} x_i^2 = 1. Consequently, for such an x, we have
\[
x^\top A x = \sum_{i=1}^{n-k} \lambda_i x_i^2 \leq \lambda_{n-k} \sum_{i=1}^{n-k} x_i^2 = \lambda_{n-k}.
\]
Thus,
\[
\max_x \bigl\{ x^\top A x \mid (x \in \{u_{n-k+1}, \ldots, u_n\}^{\perp}) \wedge (x^\top x = 1) \bigr\} \leq \lambda_{n-k},
\]
and since this maximum is achieved for x = u_{n−k}, we conclude that
\[
\max_x \bigl\{ x^\top A x \mid (x \in \{u_{n-k+1}, \ldots, u_n\}^{\perp}) \wedge (x^\top x = 1) \bigr\} = \lambda_{n-k},
\]
as claimed.
For our purposes, we need the version of Proposition 13.22 applying to min instead of
max, whose proof is obtained by a trivial modification of the proof of Proposition 13.22.
Proposition 13.23. (Rayleigh–Ritz) If A is a symmetric n × n matrix with eigenvalues λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n and if (u_1, . . . , u_n) is any orthonormal basis of eigenvectors of A, where u_i is a unit eigenvector associated with λ_i, then
\[
\min_{x \neq 0} \frac{x^\top A x}{x^\top x} = \lambda_1
\]
(with the minimum attained for x = u_1), and
\[
\min_{x \neq 0,\, x \in \{u_1, \ldots, u_{i-1}\}^{\perp}} \frac{x^\top A x}{x^\top x} = \lambda_i
\]
(with the minimum attained for x = u_i), where 2 ≤ i ≤ n. Equivalently, if W_k denotes the subspace spanned by (u_k, . . . , u_n) (so that W_k = V_{k−1}^⊥, with V_0 = (0)), then
\[
\lambda_k = \min_{x \neq 0,\, x \in W_k} \frac{x^\top A x}{x^\top x}
= \min_{x \neq 0,\, x \in V_{k-1}^{\perp}} \frac{x^\top A x}{x^\top x}, \qquad k = 1, \ldots, n.
\]

Propositions 13.22 and 13.23 together are known as the Rayleigh–Ritz theorem.
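A quick numerical illustration of the Rayleigh–Ritz characterization (a sketch using numpy; the names are ours, not from the text):

    import numpy as np

    np.random.seed(1)
    M = np.random.randn(6, 6)
    A = (M + M.T) / 2
    lam, U = np.linalg.eigh(A)             # eigenvalues in nondecreasing order

    def rayleigh(A, x):
        return (x @ A @ x) / (x @ x)

    # The Rayleigh ratio of any nonzero vector lies between lambda_1 and lambda_n,
    # and it equals lambda_i exactly at the eigenvector u_i.
    xs = np.random.randn(1000, 6)
    ratios = np.array([rayleigh(A, x) for x in xs])
    print(lam[0] <= ratios.min() and ratios.max() <= lam[-1])   # True
    print(np.isclose(rayleigh(A, U[:, 2]), lam[2]))             # True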
As an application of Propositions 13.22 and 13.23, we prove a proposition which allows
us to compare the eigenvalues of two symmetric matrices A and B = R> AR, where R is a
rectangular matrix satisfying the equation R> R = I.
First, we need a definition. Given an n × n symmetric matrix A and an m × m symmetric matrix B, with m ≤ n, if λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n are the eigenvalues of A and μ_1 ≤ μ_2 ≤ ⋯ ≤ μ_m are the eigenvalues of B, then we say that the eigenvalues of B interlace the eigenvalues of A if
\[
\lambda_i \leq \mu_i \leq \lambda_{n-m+i}, \qquad i = 1, \ldots, m.
\]
Proposition 13.24. Let A be an n × n symmetric matrix, R an n × m matrix such that R^⊤R = I (with m ≤ n), and let B = R^⊤AR (an m × m symmetric matrix). The following properties hold:

(a) The eigenvalues of B interlace the eigenvalues of A.

(b) If λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n are the eigenvalues of A, if μ_1 ≤ μ_2 ≤ ⋯ ≤ μ_m are the eigenvalues of B, and if λ_i = μ_i, then there is an eigenvector v of B with eigenvalue μ_i such that Rv is an eigenvector of A with eigenvalue λ_i.

Proof. (a) Let (u_1, . . . , u_n) be an orthonormal basis of eigenvectors of A, where u_i is associated with λ_i, and let (v_1, . . . , v_m) be an orthonormal basis of eigenvectors of B, where v_i is associated with μ_i. Let U_j be the subspace spanned by (u_1, . . . , u_j) and let V_j be the subspace spanned by (v_1, . . . , v_j). For any i, the subspace V_i has dimension i and the subspace R^⊤U_{i−1} has dimension at most i − 1, so there is some nonzero vector v ∈ V_i ∩ (R^⊤U_{i−1})^⊥. Since
\[
(Rv)^\top u_j = v^\top R^\top u_j = 0, \qquad j = 1, \ldots, i - 1,
\]
we have Rv ∈ (U_{i−1})^⊥. By Proposition 13.23 and using the fact that R^⊤R = I, we have
\[
\lambda_i \leq \frac{(Rv)^\top A R v}{(Rv)^\top R v} = \frac{v^\top B v}{v^\top v}.
\]
On the other hand, by Proposition 13.22,
\[
\mu_i = \max_{x \neq 0,\, x \in \{v_{i+1}, \ldots, v_m\}^{\perp}} \frac{x^\top B x}{x^\top x}
= \max_{x \neq 0,\, x \in \{v_1, \ldots, v_i\}} \frac{x^\top B x}{x^\top x},
\]
so
\[
\frac{w^\top B w}{w^\top w} \leq \mu_i \qquad \text{for all } w \in V_i,
\]
and since v ∈ V_i, we deduce that λ_i ≤ μ_i, for i = 1, . . . , m. We can apply the same argument to the symmetric matrices −A and −B, to conclude that
\[
-\lambda_{n-m+i} \leq -\mu_i,
\]
that is,
\[
\mu_i \leq \lambda_{n-m+i}, \qquad i = 1, \ldots, m.
\]
Therefore,
\[
\lambda_i \leq \mu_i \leq \lambda_{n-m+i}, \qquad i = 1, \ldots, m,
\]
as desired.

(b) If λ_i = μ_i, then
\[
\lambda_i = \frac{(Rv)^\top A R v}{(Rv)^\top R v} = \frac{v^\top B v}{v^\top v} = \mu_i,
\]
so v must be an eigenvector for B and Rv must be an eigenvector for A, both for the eigenvalue λ_i = μ_i.
Proposition 13.24 immediately implies the Poincaré separation theorem. It can be used in situations, such as in quantum mechanics, where one has information about the inner products u_i^⊤ A u_j.
Proposition 13.25. (Poincaré separation theorem) Let A be an n × n symmetric (or Hermitian) matrix, let r be some integer with 1 ≤ r ≤ n, and let (u_1, . . . , u_r) be r orthonormal vectors. Let B = (u_i^⊤ A u_j) (an r × r matrix), let λ_1(A) ≤ ⋯ ≤ λ_n(A) be the eigenvalues of A and λ_1(B) ≤ ⋯ ≤ λ_r(B) be the eigenvalues of B; then we have
\[
\lambda_k(A) \leq \lambda_k(B) \leq \lambda_{k+n-r}(A), \qquad k = 1, \ldots, r.
\]
Observe that Proposition 13.24 implies that
\[
\lambda_1 + \cdots + \lambda_m \leq \mathrm{tr}(R^\top A R) \leq \lambda_{n-m+1} + \cdots + \lambda_n.
\]
If P_1 is the n × (n − 1) matrix obtained from the identity matrix by dropping its last column, we have P_1^⊤P_1 = I, and the matrix B = P_1^⊤AP_1 is the matrix obtained from A by deleting its last row and its last column. In this case, the interlacing result is
\[
\lambda_1 \leq \mu_1 \leq \lambda_2 \leq \mu_2 \leq \cdots \leq \mu_{n-2} \leq \lambda_{n-1} \leq \mu_{n-1} \leq \lambda_n,
\]
a genuine interlacing. We obtain similar results with the matrix P_{n−r} obtained by dropping the last n − r columns of the identity matrix and setting B = P_{n−r}^⊤ A P_{n−r} (B is the r × r matrix obtained from A by deleting its last n − r rows and columns). In this case, we have the following interlacing inequalities known as the Cauchy interlacing theorem:
\[
\lambda_k \leq \mu_k \leq \lambda_{k+n-r}, \qquad k = 1, \ldots, r. \tag{$*$}
\]
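The Cauchy interlacing theorem is easy to observe numerically. The following Python sketch (names ours) deletes the last row and column of a random symmetric matrix and checks the interlacing pattern (∗) with r = n − 1:

    import numpy as np

    np.random.seed(2)
    n = 7
    M = np.random.randn(n, n)
    A = (M + M.T) / 2
    lam = np.linalg.eigvalsh(A)            # lambda_1 <= ... <= lambda_n
    mu = np.linalg.eigvalsh(A[:-1, :-1])   # eigenvalues of the (n-1) x (n-1) principal submatrix
    # lambda_k <= mu_k <= lambda_{k+1} for k = 1, ..., n-1 (0-based indices below)
    print(all(lam[k] <= mu[k] <= lam[k + 1] for k in range(n - 1)))   # True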
Another useful tool to prove eigenvalue inequalities is the Courant–Fischer characterization of the eigenvalues of a symmetric matrix, also known as the Min-max (and Max-min) theorem.

Theorem 13.26. (Courant–Fischer) Let A be a symmetric n × n matrix with eigenvalues λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n and let (u_1, . . . , u_n) be any orthonormal basis of eigenvectors of A, where u_i is a unit eigenvector associated with λ_i. If 𝒱_k denotes the set of subspaces of R^n of dimension k, then
\[
\lambda_k = \max_{W \in \mathcal{V}_{n-k+1}}\; \min_{x \in W,\, x \neq 0} \frac{x^\top A x}{x^\top x}
= \min_{W \in \mathcal{V}_{k}}\; \max_{x \in W,\, x \neq 0} \frac{x^\top A x}{x^\top x}.
\]
Proof. Let us consider the second equality, the proof of the first equality being similar. Observe that the space V_k spanned by (u_1, . . . , u_k) has dimension k, and by Proposition 13.22, we have
\[
\lambda_k = \max_{x \neq 0,\, x \in V_k} \frac{x^\top A x}{x^\top x}
\geq \min_{W \in \mathcal{V}_k}\; \max_{x \in W,\, x \neq 0} \frac{x^\top A x}{x^\top x}.
\]
Therefore, we need to prove the reverse inequality; that is, we have to show that
\[
\lambda_k \leq \max_{x \neq 0,\, x \in W} \frac{x^\top A x}{x^\top x}, \qquad \text{for all } W \in \mathcal{V}_k.
\]
Now, for any W ∈ 𝒱_k, if we can prove that W ∩ V_{k−1}^⊥ ≠ (0), then for any nonzero v ∈ W ∩ V_{k−1}^⊥, by Proposition 13.23, we have
\[
\lambda_k = \min_{x \neq 0,\, x \in V_{k-1}^{\perp}} \frac{x^\top A x}{x^\top x}
\leq \frac{v^\top A v}{v^\top v}
\leq \max_{x \in W,\, x \neq 0} \frac{x^\top A x}{x^\top x}.
\]
It remains to prove that W ∩ V_{k−1}^⊥ ≠ (0). Since dim(V_{k−1}^⊥) = n − k + 1 and dim(W) = k, by the Grassmann relation,
\[
\dim(W) + \dim(V_{k-1}^{\perp}) = \dim(W \cap V_{k-1}^{\perp}) + \dim(W + V_{k-1}^{\perp}),
\]
and since dim(W + V_{k−1}^⊥) ≤ n, we get
\[
k + n - k + 1 \leq \dim(W \cap V_{k-1}^{\perp}) + n;
\]
that is, dim(W ∩ V_{k−1}^⊥) ≥ 1, which concludes the proof.
The Courant–Fischer theorem yields the following useful result about perturbing the eigenvalues of a symmetric matrix due to Hermann Weyl.

Proposition 13.27. Given two n × n symmetric matrices A and B = A + ΔA, if λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n are the eigenvalues of A and μ_1 ≤ μ_2 ≤ ⋯ ≤ μ_n are the eigenvalues of B, then
\[
|\mu_k - \lambda_k| \leq \rho(\Delta A) \leq \|\Delta A\|_2, \qquad k = 1, \ldots, n.
\]
Proof. Let 𝒱_k be defined as in the Courant–Fischer theorem and let V_k be the subspace spanned by the k eigenvectors associated with λ_1, . . . , λ_k. By the Courant–Fischer theorem applied to B, we have
\[
\mu_k = \min_{W \in \mathcal{V}_k}\; \max_{x \in W,\, x \neq 0} \frac{x^\top B x}{x^\top x}
\leq \max_{x \in V_k} \frac{x^\top B x}{x^\top x}
= \max_{x \in V_k} \left( \frac{x^\top A x}{x^\top x} + \frac{x^\top \Delta A\, x}{x^\top x} \right)
\leq \max_{x \in V_k} \frac{x^\top A x}{x^\top x} + \max_{x \in V_k} \frac{x^\top \Delta A\, x}{x^\top x}.
\]
By Proposition 13.22, we have
\[
\lambda_k = \max_{x \in V_k} \frac{x^\top A x}{x^\top x},
\]
so we obtain
\[
\mu_k \leq \max_{x \in V_k} \frac{x^\top A x}{x^\top x} + \max_{x \in V_k} \frac{x^\top \Delta A\, x}{x^\top x}
= \lambda_k + \max_{x \in V_k} \frac{x^\top \Delta A\, x}{x^\top x}
\leq \lambda_k + \max_{x \in \mathbb{R}^n} \frac{x^\top \Delta A\, x}{x^\top x}.
\]
Now, by Proposition 13.22 and Proposition 7.7, we have
\[
\max_{x \in \mathbb{R}^n} \frac{x^\top \Delta A\, x}{x^\top x} = \max_i \lambda_i(\Delta A) \leq \rho(\Delta A) \leq \|\Delta A\|_2,
\]
where λ_i(ΔA) denotes the ith eigenvalue of ΔA, which implies that
\[
\mu_k \leq \lambda_k + \rho(\Delta A) \leq \lambda_k + \|\Delta A\|_2.
\]
By exchanging the roles of A and B, we also have
\[
\lambda_k \leq \mu_k + \rho(\Delta A) \leq \mu_k + \|\Delta A\|_2,
\]
and thus,
\[
|\mu_k - \lambda_k| \leq \rho(\Delta A) \leq \|\Delta A\|_2, \qquad k = 1, \ldots, n,
\]
as claimed.
In fact, a stronger result holds: with the same notation,
\[
\sum_{k=1}^{n} (\lambda_k - \mu_k)^2 \leq \|\Delta A\|_F^2,
\]
where ‖ ‖_F is the Frobenius norm. However, the proof is significantly harder than the above proof; see Lax [71].
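A numerical check of the perturbation bound of Proposition 13.27 (a sketch; variable names ours):

    import numpy as np

    np.random.seed(3)
    n = 8
    M = np.random.randn(n, n)
    A = (M + M.T) / 2
    dM = 1e-3 * np.random.randn(n, n)
    dA = (dM + dM.T) / 2                       # a small symmetric perturbation
    lam = np.linalg.eigvalsh(A)
    mu = np.linalg.eigvalsh(A + dA)
    bound = np.linalg.norm(dA, 2)              # spectral norm ||dA||_2
    print(np.max(np.abs(mu - lam)) <= bound)   # True: |mu_k - lambda_k| <= ||dA||_2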
The Courant–Fischer theorem can also be used to prove some famous inequalities due to Hermann Weyl. Given two symmetric (or Hermitian) matrices A and B, let λ_i(A), λ_i(B), and λ_i(A + B) denote the ith eigenvalue of A, B, and A + B, respectively, arranged in nondecreasing order.

Proposition 13.28. (Weyl) Given two symmetric (or Hermitian) n × n matrices A and B, the following inequalities hold: for all i, j, k with 1 ≤ i, j, k ≤ n:

1. If i + j = k + 1, then
\[
\lambda_i(A) + \lambda_j(B) \leq \lambda_k(A + B).
\]

2. If i + j = k + n, then
\[
\lambda_k(A + B) \leq \lambda_i(A) + \lambda_j(B).
\]
Proof. Observe that the first set of inequalities is obtained from the second set by replacing A by −A and B by −B, so it is enough to prove the second set of inequalities. By the Courant–Fischer theorem, there is a subspace H of dimension n − k + 1 such that
\[
\lambda_k(A + B) = \min_{x \in H,\, x \neq 0} \frac{x^\top (A + B) x}{x^\top x}.
\]
Similarly, there exist a subspace F of dimension i and a subspace G of dimension j such that
\[
\lambda_i(A) = \max_{x \in F,\, x \neq 0} \frac{x^\top A x}{x^\top x}, \qquad
\lambda_j(B) = \max_{x \in G,\, x \neq 0} \frac{x^\top B x}{x^\top x}.
\]
We claim that F ∩ G ∩ H ≠ (0). To prove this, we use the Grassmann relation twice. First,
\[
\dim(F \cap G \cap H) = \dim(F) + \dim(G \cap H) - \dim(F + (G \cap H))
\geq \dim(F) + \dim(G \cap H) - n,
\]
and second,
\[
\dim(G \cap H) = \dim(G) + \dim(H) - \dim(G + H)
\geq \dim(G) + \dim(H) - n,
\]
so
\[
\dim(F \cap G \cap H) \geq \dim(F) + \dim(G) + \dim(H) - 2n.
\]
However,
\[
\dim(F) + \dim(G) + \dim(H) = i + j + n - k + 1,
\]
and i + j = k + n, so we have
\[
\dim(F \cap G \cap H) \geq i + j + n - k + 1 - 2n = k + n + n - k + 1 - 2n = 1,
\]
which shows that F ∩ G ∩ H ≠ (0). Then, for any unit vector z ∈ F ∩ G ∩ H, we have
\[
\lambda_k(A + B) \leq z^\top (A + B) z, \qquad
z^\top A z \leq \lambda_i(A), \qquad
z^\top B z \leq \lambda_j(B),
\]
and it follows that
\[
\lambda_k(A + B) \leq z^\top (A + B) z = z^\top A z + z^\top B z \leq \lambda_i(A) + \lambda_j(B),
\]
which establishes the second set of inequalities.

It follows that
\[
\lambda_1(A) + \lambda_1(B) \leq \lambda_1(A + B),
\]
which shows that λ_1 is concave, while
\[
\lambda_n(A + B) \leq \lambda_n(A) + \lambda_n(B),
\]
which shows that λ_n is convex.

If i = 1 and j = k, we obtain
\[
\lambda_1(A) + \lambda_k(B) \leq \lambda_k(A + B),
\]
and if i = k and j = n, we obtain
\[
\lambda_k(A + B) \leq \lambda_k(A) + \lambda_n(B),
\]
and combining the two inequalities we get
\[
\lambda_1(A) + \lambda_k(B) \leq \lambda_k(A + B) \leq \lambda_k(A) + \lambda_n(B), \qquad k = 1, \ldots, n.
\]
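Weyl's inequalities are also easy to test numerically. The following Python sketch (names ours) checks the first family of inequalities (i + j = k + 1) for random symmetric matrices; indices are 0-based in the code.

    import numpy as np

    np.random.seed(4)
    n = 6
    A = np.random.randn(n, n); A = (A + A.T) / 2
    B = np.random.randn(n, n); B = (B + B.T) / 2
    lA = np.linalg.eigvalsh(A)       # nondecreasing order
    lB = np.linalg.eigvalsh(B)
    lAB = np.linalg.eigvalsh(A + B)
    # 1-based: i + j = k + 1  <=>  0-based: i + j = k
    ok = all(lA[i] + lB[j] <= lAB[i + j] + 1e-12
             for i in range(n) for j in range(n) if i + j < n)
    print(ok)                        # True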
The reader is referred to Horn and Johnson [57] (Chapters 4 and 7) for a very complete
treatment of matrix inequalities and interlacing results, and also to Lax [71] and Serre [96].
We now have all the tools to present the important singular value decomposition (SVD)
and the polar form of a matrix. However, we prefer to first illustrate how the material of this
section can be used to discretize boundary value problems, and we give a brief introduction
to the finite elements method.
13.7 Summary
The main concepts and results of this chapter are listed below:
Normal linear maps, self-adjoint linear maps, skew-self-adjoint linear maps, and orthogonal linear maps.
Properties of the eigenvalues and eigenvectors of a normal linear map.
The complexification of a real vector space, of a linear map, and of a Euclidean inner
product.
The eigenvalues of a self-adjoint map in a Hermitian space are real .
The eigenvalues of a self-adjoint map in a Euclidean space are real .
Every self-adjoint linear map on a Euclidean space has an orthonormal basis of eigenvectors.
Every normal linear map on a Euclidean space can be block diagonalized (blocks of
size at most 2 2) with respect to an orthonormal basis of eigenvectors.
Every normal linear map on a Hermitian space can be diagonalized with respect to an
orthonormal basis of eigenvectors.
The spectral theorems for self-adjoint, skew-self-adjoint, and orthogonal linear maps
(on a Euclidean space).
The spectral theorems for normal, symmetric, skew-symmetric, and orthogonal (real)
matrices.
The spectral theorems for normal, Hermitian, skew-Hermitian, and unitary (complex)
matrices.
The conditioning of eigenvalue problems.
The Rayleigh ratio and the RayleighRitz theorem.
Interlacing inequalities and the Cauchy interlacing theorem.
The Poincare separation theorem.
The CourantFischer theorem.
Inequalities involving perturbations of the eigenvalues of a symmetric matrix.
The Weyl inequalities.
Chapter 14
Bilinear Forms and Their Geometries
14.1
Bilinear Forms
In this chapter, we study the structure of a K-vector space E endowed with a nondegenerate bilinear form φ : E × E → K (for any field K), which can be viewed as a kind of generalized inner product. Unlike the case of an inner product, there may be nonzero vectors u ∈ E such that φ(u, u) = 0, so the map u ↦ φ(u, u) can no longer be interpreted as a notion of square length (also, φ(u, u) may not be real and positive!). However, the notion of orthogonality survives: we say that u, v ∈ E are orthogonal iff φ(u, v) = 0. Under some additional conditions on φ, it is then possible to split E into orthogonal subspaces having some special properties. It turns out that the special cases where φ is symmetric (or Hermitian) or skew-symmetric (or skew-Hermitian) can be handled uniformly using a deep theorem due to Witt (the Witt decomposition theorem (1936)).

We begin with the very general situation of a bilinear form φ : E × F → K, where K is an arbitrary field, possibly of characteristic 2. Actually, even though at first glance this may appear to be an unnecessary abstraction, it turns out that this situation arises in attempting to prove properties of a bilinear map φ : E × E → K, because it may be necessary to restrict φ to different subspaces U and V of E. This general approach was pioneered by Chevalley [22], E. Artin [3], and Bourbaki [13]. The third source was a major source of inspiration, and many proofs are taken from it. Other useful references include Snapper and Troyer [99], Berger [9], Jacobson [59], Grove [52], Taylor [108], and Berndt [11].
Definition 14.1. Given two vector spaces E and F over a field K, a map φ : E × F → K is a bilinear form iff the following conditions hold: for all u, u_1, u_2 ∈ E, all v, v_1, v_2 ∈ F, and all λ ∈ K, we have
\[
\begin{aligned}
\varphi(u_1 + u_2, v) &= \varphi(u_1, v) + \varphi(u_2, v) \\
\varphi(u, v_1 + v_2) &= \varphi(u, v_1) + \varphi(u, v_2) \\
\varphi(\lambda u, v) &= \lambda \varphi(u, v) \\
\varphi(u, \lambda v) &= \lambda \varphi(u, v).
\end{aligned}
\]
A bilinear form as in Definition 14.1 is sometimes called a pairing. The first two conditions imply that φ(0, v) = φ(u, 0) = 0 for all u ∈ E and all v ∈ F.

If E = F, observe that
\[
\varphi(\lambda u + \mu v, \lambda u + \mu v)
= \lambda \varphi(u, \lambda u + \mu v) + \mu \varphi(v, \lambda u + \mu v)
= \lambda^2 \varphi(u, u) + \lambda\mu\, \varphi(u, v) + \lambda\mu\, \varphi(v, u) + \mu^2 \varphi(v, v).
\]
If we let λ = μ = 1, we get
\[
\varphi(u + v, u + v) = \varphi(u, u) + \varphi(u, v) + \varphi(v, u) + \varphi(v, v).
\]
If φ is symmetric, which means that φ(u, v) = φ(v, u) for all u, v ∈ E, then
\[
2\varphi(u, v) = \varphi(u + v, u + v) - \varphi(u, u) - \varphi(v, v).
\]
The function Φ : E → K defined such that Φ(u) = φ(u, u) for all u ∈ E is called the quadratic form associated with φ. If the field K is not of characteristic 2, then φ is completely determined by its quadratic form Φ. The symmetric bilinear form φ is called the polar form of Φ. This suggests the following definition.
Definition 14.2. A function Φ : E → K is a quadratic form on E iff the following conditions hold:

(1) We have Φ(λu) = λ²Φ(u), for all u ∈ E and all λ ∈ K.

(2) The map φ'(u, v) = Φ(u + v) − Φ(u) − Φ(v) is bilinear. Obviously, the map φ' is symmetric.

If the field K is not of characteristic 2, then φ = ½ φ' is the unique symmetric bilinear form such that φ(u, u) = Φ(u) for all u ∈ E. The bilinear form φ = ½ φ' is called the polar form of Φ. In this case, there is a bijection between the set of symmetric bilinear forms on E and the set of quadratic forms on E.
If K is a field of characteristic 2, then '0 is alternating, which means that
'0 (u, u) = 0 for all u 2 E.
Thus, Φ cannot be recovered from the symmetric bilinear form φ'. However, there is some (nonsymmetric) bilinear form ψ such that Φ(u) = ψ(u, u) for all u ∈ E. Thus, quadratic forms are more general than symmetric bilinear forms (except in characteristic ≠ 2).

In general, if K is a field of any characteristic, the identity
\[
\varphi(u + v, u + v) = \varphi(u, u) + \varphi(u, v) + \varphi(v, u) + \varphi(v, v)
\]
shows that if φ is alternating (which means that φ(u, u) = 0 for all u ∈ E), then φ(v, u) = −φ(u, v) for all u, v ∈ E; when φ(v, u) = −φ(u, v) for all u, v ∈ E, we say that φ is skew-symmetric. Conversely, if the field K is not of characteristic 2, then a skew-symmetric bilinear map is alternating, since φ(u, u) = −φ(u, u) implies φ(u, u) = 0.
An important consequence of bilinearity is that a pairing yields a linear map from E into F^* and a linear map from F into E^* (where E^* = Hom_K(E, K), the dual of E, is the set of linear maps from E to K, called linear forms).

Definition 14.3. Given a bilinear map φ : E × F → K, for every u ∈ E, let l_φ(u) be the linear form in F^* given by
\[
l_\varphi(u)(y) = \varphi(u, y) \quad \text{for all } y \in F,
\]
and for every v ∈ F, let r_φ(v) be the linear form in E^* given by
\[
r_\varphi(v)(x) = \varphi(x, v) \quad \text{for all } x \in E.
\]
Because φ is bilinear, the maps l_φ : E → F^* and r_φ : F → E^* are linear.
Definition 14.4. A bilinear map ' : E F ! K is said to be nondegenerate i the following
conditions hold:
(1) For every u 2 E, if '(u, v) = 0 for all v 2 F , then u = 0, and
(2) For every v 2 F , if '(u, v) = 0 for all u 2 E, then v = 0.
The following proposition shows the importance of l' and r' .
Proposition 14.1. Given a bilinear map ' : E F ! K, the following properties hold:
(a) The map l' is injective i property (1) of Definition 14.4 holds.
(b) The map r' is injective i property (2) of Definition 14.4 holds.
(c) The bilinear form φ is nondegenerate iff l_φ and r_φ are injective.

(d) If the bilinear form φ is nondegenerate and if E and F have finite dimensions, then dim(E) = dim(F), and l_φ : E → F^* and r_φ : F → E^* are linear isomorphisms.
Proof. (a) Assume that (1) of Definition 14.4 holds. If l' (u) = 0, then l' (u) is the linear
form whose value is 0 for all y; that is,
l' (u)(y) = '(u, y) = 0 for all y 2 F ,
and by (1) of Definition 14.4, we must have u = 0. Therefore, l' is injective. Conversely, if
l' is injective, and if
l' (u)(y) = '(u, y) = 0 for all y 2 F ,
then l' (u) is the zero form, and by injectivity of l' , we get u = 0; that is, (1) of Definition
14.4 holds.
(b) The proof is obtained by swapping the arguments of '.
(c) This follows from (a) and (b).
(d) If E and F are finite dimensional, then dim(E) = dim(E^*) and dim(F) = dim(F^*). Since φ is nondegenerate, l_φ : E → F^* and r_φ : F → E^* are injective, so dim(E) ≤ dim(F^*) = dim(F) and dim(F) ≤ dim(E^*) = dim(E), which implies that
\[
\dim(E) = \dim(F),
\]
and thus, l_φ : E → F^* and r_φ : F → E^* are bijective.
As a corollary of Proposition 14.1, we have the following characterization of a nondegenerate bilinear map. The proof is left as an exercise.
Proposition 14.2. Given a bilinear map ' : E F ! K, if E and F have the same finite
dimension, then the following properties are equivalent:
(1) The map l' is injective.
(2) The map l' is surjective.
(3) The map r' is injective.
(4) The map r' is surjective.
(5) The bilinear form ' is nondegenerate.
Observe that in terms of the canonical pairing between E and E given by
hf, ui = f (u),
f 2 E , u 2 E,
Proposition 14.3. Given a bilinear map φ : E × F → K, if φ is nondegenerate and E and F are finite-dimensional, then dim(E) = dim(F) = n, and for every basis (e_1, . . . , e_n) of E, there is a basis (f_1, . . . , f_n) of F such that φ(e_i, f_j) = δ_{i j}, for all i, j = 1, . . . , n.

Proof. Since φ is nondegenerate, by Proposition 14.1 we have dim(E) = dim(F) = n, and by Proposition 14.2, the linear map r_φ is bijective. Then, if (e_1^*, . . . , e_n^*) is the dual basis (in E^*) of the basis (e_1, . . . , e_n), the vectors (f_1, . . . , f_n) given by f_i = r_φ^{−1}(e_i^*) form a basis of F, and we have
\[
\varphi(e_i, f_j) = \langle r_\varphi(f_j), e_i \rangle = \langle e_j^*, e_i \rangle = \delta_{i j},
\]
as claimed.
This is because
1, and that e1 2
/ H.
Ke1 .
\[
\varphi\Bigl(\sum_{i=1}^{m} x_i e_i,\; \sum_{j=1}^{n} y_j f_j\Bigr) = \sum_{i=1}^{m} \sum_{j=1}^{n} x_i\, \varphi(e_i, f_j)\, y_j.
\]
This shows that φ is completely determined by the m × n matrix M = (φ(e_i, f_j)), and in matrix form, we have
\[
\varphi(x, y) = x^\top M y = y^\top M^\top x,
\]
where x and y are the column vectors associated with (x_1, . . . , x_m) ∈ K^m and (y_1, . . . , y_n) ∈ K^n. As in Section 10.1, we are committing the slight abuse of notation of letting x denote both the vector x = ∑_{i=1}^{m} x_i e_i and the column vector associated with (x_1, . . . , x_m) (and similarly for y). We call M the matrix of φ with respect to the bases (e_1, . . . , e_m) and (f_1, . . . , f_n).

If m = dim(E) = dim(F) = n, then it is easy to check that φ is nondegenerate iff M is invertible iff det(M) ≠ 0.
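To make the matrix representation concrete, here is a small Python sketch (not from the text) that evaluates a bilinear form on K = R from its matrix M and checks nondegeneracy via det(M):

    import numpy as np

    M = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 3.0]])          # matrix of phi w.r.t. chosen bases

    def phi(x, y):
        # phi(x, y) = x^T M y, with x, y given by their coordinates
        return x @ M @ y

    x = np.array([1.0, -1.0, 2.0])
    y = np.array([0.0, 3.0, 1.0])
    print(np.isclose(phi(x, y), y @ M.T @ x))   # True: x^T M y = y^T M^T x
    print(abs(np.linalg.det(M)) > 1e-12)        # True: phi is nondegenerate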
As we will see later, most bilinear forms that we will encounter are equivalent to one whose matrix is of the following form:

1. I_n, −I_n.

2. If p + q = n, with p, q ≥ 1,
\[
I_{p,q} = \begin{pmatrix} I_p & 0 \\ 0 & -I_q \end{pmatrix}.
\]

3. If n = 2m,
\[
J_{m,m} = \begin{pmatrix} 0 & I_m \\ -I_m & 0 \end{pmatrix}.
\]

4. If n = 2m,
\[
A_{m,m} = I_{m,m}\, J_{m,m} = \begin{pmatrix} 0 & I_m \\ I_m & 0 \end{pmatrix}.
\]
\[
\Phi\Bigl(\sum_{i=1}^{n} x_i e_i\Bigr) = \sum_{i=1}^{p} x_i^2 - \sum_{i=p+1}^{p+q} x_i^2,
\]
with 0 ≤ p, q and p + q ≤ n.
Proof. The first statement is a direct consequence of Theorem 14.4. If K = C, then every λ_i has a square root μ_i, and if we replace e_i by e_i/μ_i, we obtain the desired form.

If K = R, then there are two cases:

1. If λ_i > 0, let μ_i be a square root of λ_i, and replace e_i by e_i/μ_i.

2. If λ_i < 0, let μ_i be a square root of −λ_i, and replace e_i by e_i/μ_i.

In the nondegenerate case, the matrices corresponding to the complex and the real case are I_n, −I_n, and I_{p,q}. Observe that the second statement of Proposition 14.5 holds in any field in which every element has a square root. In the case K = R, we can show that (p, q) only depends on φ.
For any subspace U of E, we say that φ is positive definite on U iff φ(u, u) > 0 for all nonzero u ∈ U, and we say that φ is negative definite on U iff φ(u, u) < 0 for all nonzero u ∈ U. Then, let
\[
r = \max\{\dim(U) \mid U \subseteq E,\ \varphi \text{ is positive definite on } U\}
\]
and let
\[
s = \max\{\dim(U) \mid U \subseteq E,\ \varphi \text{ is negative definite on } U\}.
\]

Proposition 14.6. (Sylvester's inertia law) Given any symmetric bilinear form φ : E × E → R with dim(E) = n, for any basis (e_1, . . . , e_n) of E such that
\[
\Phi\Bigl(\sum_{i=1}^{n} x_i e_i\Bigr) = \sum_{i=1}^{p} x_i^2 - \sum_{i=p+1}^{p+q} x_i^2,
\]
with 0 ≤ p, q and p + q ≤ n, we have p = r and q = s.
Proof. If we let U be the subspace spanned by (e_1, . . . , e_p), then φ is positive definite on U, so r ≥ p. Similarly, if we let V be the subspace spanned by (e_{p+1}, . . . , e_{p+q}), then φ is negative definite on V, so s ≥ q.

Next, if W_1 is any subspace of maximum dimension such that φ is positive definite on W_1, and if we let V' be the subspace spanned by (e_{p+1}, . . . , e_n), then φ(u, u) ≤ 0 on V', so W_1 ∩ V' = (0), which implies that dim(W_1) + dim(V') ≤ n, and thus, r + n − p ≤ n; that is, r ≤ p. Similarly, if W_2 is any subspace of maximum dimension such that φ is negative definite on W_2, and if we let U' be the subspace spanned by (e_1, . . . , e_p, e_{p+q+1}, . . . , e_n), then φ(u, u) ≥ 0 on U', so W_2 ∩ U' = (0), which implies that s + n − q ≤ n; that is, s ≤ q. Therefore, p = r and q = s, as claimed.
These last two results can be generalized to ordered fields. For example, see Snapper and
Troyer [99], Artin [3], and Bourbaki [13].
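To illustrate Sylvester's inertia law numerically: since the spectral theorem provides a congruence taking a real symmetric matrix to a diagonal one, the signature (p, q) can be read off from the signs of its eigenvalues. A short Python sketch (names ours), assuming the form is given by a real symmetric matrix:

    import numpy as np

    def signature(M, tol=1e-12):
        # M is the symmetric matrix of a bilinear form on R^n
        eig = np.linalg.eigvalsh(M)
        p = int(np.sum(eig > tol))     # dimension of a maximal positive definite subspace
        q = int(np.sum(eig < -tol))    # dimension of a maximal negative definite subspace
        return p, q

    M = np.diag([3.0, 1.0, -2.0, 0.0])
    print(signature(M))                # (2, 1): p = 2, q = 1, and the form is degenerate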
14.2
Sesquilinear Forms
In this section, we assume that K is a field equipped with an involutive automorphism λ ↦ λ̄; that is, an automorphism of K such that
\[
\overline{\overline{\lambda}} = \lambda.
\]
If the automorphism λ ↦ λ̄ is the identity, then we are in the standard situation of a bilinear form. When K = C (the complex numbers), then we usually pick the automorphism of C to be conjugation; namely, the map
\[
a + ib \mapsto a - ib.
\]
Definition 14.5. Given two vector spaces E and F over a field K with an involutive automorphism λ ↦ λ̄, a map φ : E × F → K is a (right) sesquilinear form iff the following conditions hold: for all u, u_1, u_2 ∈ E, all v, v_1, v_2 ∈ F, and all λ ∈ K, we have
\[
\begin{aligned}
\varphi(u_1 + u_2, v) &= \varphi(u_1, v) + \varphi(u_2, v) \\
\varphi(u, v_1 + v_2) &= \varphi(u, v_1) + \varphi(u, v_2) \\
\varphi(\lambda u, v) &= \lambda \varphi(u, v) \\
\varphi(u, \lambda v) &= \overline{\lambda}\, \varphi(u, v).
\end{aligned}
\]
Again, φ(0, v) = φ(u, 0) = 0. If E = F, then we have
\[
\varphi(\lambda u + \mu v, \lambda u + \mu v)
= \lambda \varphi(u, \lambda u + \mu v) + \mu \varphi(v, \lambda u + \mu v)
= \lambda\overline{\lambda}\, \varphi(u, u) + \lambda\overline{\mu}\, \varphi(u, v) + \mu\overline{\lambda}\, \varphi(v, u) + \mu\overline{\mu}\, \varphi(v, v).
\]
If we let λ = μ = 1 and then λ = 1, μ = −1, we get
\[
\varphi(u + v, u + v) = \varphi(u, u) + \varphi(u, v) + \varphi(v, u) + \varphi(v, v)
\]
and
\[
\varphi(u - v, u - v) = \varphi(u, u) - \varphi(u, v) - \varphi(v, u) + \varphi(v, v),
\]
so by subtraction, we get
\[
2(\varphi(u, v) + \varphi(v, u)) = \varphi(u + v, u + v) - \varphi(u - v, u - v) \quad \text{for } u, v \in E.
\]
If we replace v by λv (where λ ≠ 0), we get
\[
2(\overline{\lambda}\varphi(u, v) + \lambda\varphi(v, u)) = \varphi(u + \lambda v, u + \lambda v) - \varphi(u - \lambda v, u - \lambda v),
\]
and by combining the above two equations, we get
\[
2(\overline{\lambda} - \lambda)\varphi(u, v)
= \varphi(u + \lambda v, u + \lambda v) - \varphi(u - \lambda v, u - \lambda v)
- \lambda\bigl(\varphi(u + v, u + v) - \varphi(u - v, u - v)\bigr).
\]
If the automorphism λ ↦ λ̄ is not the identity, then there is some λ ∈ K such that λ̄ − λ ≠ 0, and if K is not of characteristic 2, then we see that the sesquilinear form φ is completely determined by its restriction to the diagonal (that is, the set of values {φ(u, u) | u ∈ E}).

In the special case where K = C, we can pick λ = i, and we get
\[
4\varphi(u, v) = \varphi(u + v, u + v) - \varphi(u - v, u - v) + i\varphi(u + iv, u + iv) - i\varphi(u - iv, u - iv).
\]
Remark: If the automorphism λ ↦ λ̄ is the identity, then in general φ is not determined by its value on the diagonal, unless φ is symmetric.

In the sesquilinear setting, it turns out that the following two cases are of interest:

1. We have
\[
\varphi(v, u) = \overline{\varphi(u, v)}, \quad \text{for all } u, v \in E,
\]
in which case we say that φ is Hermitian. In the special case where K = C and the involutive automorphism is conjugation, we see that φ(u, u) ∈ R, for u ∈ E.

2. We have
\[
\varphi(v, u) = -\overline{\varphi(u, v)}, \quad \text{for all } u, v \in E,
\]
in which case we say that φ is skew-Hermitian.
Proof. We give the proof in the Hermitian case, the skew-Hermitian case being left as an exercise. Assume that φ is alternating. From the identity
\[
\varphi(u + v, u + v) = \varphi(u, u) + \varphi(u, v) + \overline{\varphi(u, v)} + \varphi(v, v),
\]
we get
\[
\varphi(u, v) = -\overline{\varphi(u, v)} \quad \text{for all } u, v \in E.
\]
Since φ is not the zero form, there exist some nonzero vectors u, v ∈ E such that φ(u, v) = 1. For any λ ∈ K, we have
\[
\lambda \varphi(u, v) = \varphi(\lambda u, v) = -\overline{\varphi(\lambda u, v)} = -\overline{\lambda}\, \overline{\varphi(u, v)} = \overline{\lambda}\, \varphi(u, v),
\]
and since φ(u, v) = 1, we get λ = λ̄ for all λ ∈ K, so the automorphism λ ↦ λ̄ is the identity.
The definition of the linear maps l_φ and r_φ requires a small twist due to the automorphism λ ↦ λ̄.

Definition 14.6. Given a vector space E over a field K with an involutive automorphism λ ↦ λ̄, we define the K-vector space E̅ as E with its abelian group structure, but with scalar multiplication given by
\[
(\lambda, u) \mapsto \overline{\lambda}\, u.
\]
Given two K-vector spaces E and F, a semilinear map f : E → F is a function such that for all u, v ∈ E and all λ ∈ K, we have
\[
f(u + v) = f(u) + f(v), \qquad f(\lambda u) = \overline{\lambda}\, f(u).
\]
Because λ̄̄ = λ, observe that a function f : E → F is semilinear iff it is a linear map f : E̅ → F. The K-vector spaces E and E̅ are isomorphic, since any basis (e_i)_{i∈I} of E is also a basis of E̅.
The maps l' and r' are defined as follows:
For every u 2 E, let l' (u) be the linear form in F defined so that
l' (u)(y) = '(u, y) for all y 2 F ,
and for every v 2 F , let r' (v) be the linear form in E defined so that
r' (v)(x) = '(x, v) for all x 2 E.
The reader should check that because we used '(u, y) in the definition of l' (u)(y), the
function l' (u) is indeed a linear form in F . It is also easy to check that l' is a linear
map l' : E ! F , and that r' is a linear map r' : F ! E (equivalently, l' : E ! F and
r' : F ! E are semilinear).
The notion of a nondegenerate sesquilinear form is identical to the notion for bilinear
forms. For the convenience of the reader, we repeat the definition.
Definition 14.7. A sesquilinear map ' : E F ! K is said to be nondegenerate i the
following conditions hold:
(1) For every u 2 E, if '(u, v) = 0 for all v 2 F , then u = 0, and
(2) For every v 2 F , if '(u, v) = 0 for all u 2 E, then v = 0.
Proposition 14.1 translates into the following proposition. The proof is left as an exercise.
Proposition 14.8. Given a sesquilinear map ' : E F ! K, the following properties hold:
(a) The map l' is injective i property (1) of Definition 14.7 holds.
(b) The map r' is injective i property (2) of Definition 14.7 holds.
(c) The sesquilinear form φ is nondegenerate iff l_φ and r_φ are injective.
(d) If the sesquillinear form ' is nondegenerate and if E and F have finite dimensions,
then dim(E) = dim(F ), and l' : E ! F and r' : F ! E are linear isomorphisms.
Propositions 14.2 and 14.3 also generalize to sesquilinear forms. We also have the following version of Theorem 14.4, whose proof is left as an exercise.
Theorem 14.9. Given any sesquilinear form ' : E E ! K with dim(E) = n, if ' is
Hermitian and K does not have characteristic 2, then there is a basis (e1 , . . . , en ) of E such
that '(ei , ej ) = 0, for all i 6= j.
As in Section 14.1, if E and F are finite-dimensional vector spaces and if (e1 , . . . , em ) is
a basis of E and (f1 , . . . , fn ) is a basis of F then the sesquilinearity of ' yields
\[
\varphi\Bigl(\sum_{i=1}^{m} x_i e_i,\; \sum_{j=1}^{n} y_j f_j\Bigr) = \sum_{i=1}^{m} \sum_{j=1}^{n} x_i\, \varphi(e_i, f_j)\, \overline{y}_j.
\]
This shows that φ is completely determined by the m × n matrix M = (φ(e_i, f_j)), and in matrix form, we have
\[
\varphi(x, y) = x^\top M \overline{y} = y^* M^\top x,
\]
where x and y are the column vectors associated with (x_1, . . . , x_m) ∈ K^m and (ȳ_1, . . . , ȳ_n) ∈ K^n, and y^* = ȳ^⊤. As earlier, we are committing the slight abuse of notation of letting x denote both the vector x = ∑_{i=1}^{m} x_i e_i and the column vector associated with (x_1, . . . , x_m) (and similarly for y). We call M the matrix of φ with respect to the bases (e_1, . . . , e_m) and (f_1, . . . , f_n).
If m = dim(E) = dim(F ) = n, then ' is nondegenerate i M is invertible i det(M ) 6= 0.
Observe that if ' is a Hermitian form (E = F ) and if K does not have characteristic 2,
then by Theorem 14.9, there is a basis of E with respect to which the matrix M representing
' is a diagonal matrix. If K = C, then these entries are real, and this allows us to classify
completely the Hermitian forms.
Proposition 14.10. Given any Hermitian form φ : E × E → C with dim(E) = n, there is a basis (e_1, . . . , e_n) of E such that
\[
\Phi\Bigl(\sum_{i=1}^{n} x_i e_i\Bigr) = \sum_{i=1}^{p} |x_i|^2 - \sum_{i=p+1}^{p+q} |x_i|^2,
\]
with 0 ≤ p, q and p + q ≤ n.
The proof of Proposition 14.10 is the same as the real case of Proposition 14.5. Sylvesters
inertia law (Proposition 14.6) also holds for Hermitian forms: p and q only depend on '.
14.3
Orthogonality
In this section, we assume that we are dealing with a sesquilinear form ' : E F ! K.
We allow the automorphism 7! to be the identity, in which case ' is a bilinear form.
This way, we can deal with properties shared by bilinear forms and sesquilinear forms in a
uniform fashion. Orthogonality is such a property.
Definition 14.8. Given a sesquilinear form ' : E F ! K, we say that two vectors u 2 E
and v 2 F are orthogonal (or conjugate) if '(u, v) = 0. Two subsets E 0 E and F 0 F
are orthogonal if '(u, v) = 0 for all u 2 E 0 and all v 2 F 0 . Given a subspace U of E, the
right orthogonal space of U , denoted U ? , is the subspace of F given by
U ? = {v 2 F | '(u, v) = 0 for all u 2 U },
and given a subspace V of F , the left orthogonal space of V , denoted V ? , is the subspace of
E given by
V ? = {u 2 E | '(u, v) = 0 for all v 2 V }.
When E and F are distinct, there is little chance of confusing the right orthogonal
subspace U ? of a subspace U of E and the left orthogonal subspace V ? of a subspace V of
F . However, if E = F , then '(u, v) = 0 does not necessarily imply that '(v, u) = 0, that is,
orthogonality is not necessarily symmetric. Thus, if both U and V are subsets of E, there
is a notational ambiguity if U = V . In this case, we may write U ?r for the right orthogonal
and U ?l for the left orthogonal.
The above discussion brings up the following point: When is orthogonality symmetric?
If ' is bilinear, it is shown in E. Artin [3] (and in Jacobson [59]) that orthogonality is
symmetric i either ' is symmetric or ' is alternating ('(u, u) = 0 for all u 2 E).
If ' is sesquilinear, the answer is more complicated. In addition to the previous two
cases, there is a third possibility:
'(u, v) = '(v, u) for all u, v 2 E,
where is some nonzero element in K. We say that ' is -Hermitian. Observe that
'(u, u) = '(u, u),
so if ' is not alternating, then '(u, u) 6= 0 for some u, and we must have = 1. The most
common cases are
1. = 1, in which case ' is Hermitian, and
2. =
7!
must be
1. Symmetric matrices ( = 1)
2. Skew-symmetric matrices ( =
1)
3. Hermitian matrices ( = 1)
4. Skew-Hermitian matrices ( =
1).
V (V ? )? .
For simplicity of notation, we write U ?? instead of (U ? )? (and V ?? instead of (V ? )? ).
Given any two subspaces U1 and U2 of E, if U1 U2 , then U2? U1? (and similarly for any
two subspaces V1 V2 of F ). As a consequence, it is easy to show that
U ? = U ??? ,
V ? = V ??? .
Proposition 14.11. For any sesquilinear form ' : E F ! K, the space E/F ? is finitedimensional i the space F/E ? is finite-dimensional; if so, dim(E/F ? ) = dim(F/E ? ).
Proof. Since the sesquilinear form ['] : (E/F ? ) (F/E ? ) ! K is nondegenerate, the maps
l['] : (E/F ? ) ! (F/E ? ) and r['] : (F/E ? ) ! (E/F ? ) are injective. If dim(E/F ? ) =
m, then dim(E/F ? ) = dim((E/F ? ) ), so by injectivity of r['] , we have dim(F/E ? ) =
dim((F/E ? )) m. A similar reasoning using the injectivity of l['] applies if dim(F/E ? ) = n,
and we get dim(E/F ? ) = dim((E/F ? )) n. Therefore, dim(E/F ? ) = m is finite i
dim(F/E ? ) = n is finite, in which case m = n.
If U is a subspace of a space E, recall that the codimension of U is the dimension of
E/U , which is also equal to the dimension of any subspace V such that E is a direct sum of
U and V (E = U V ).
Proposition 14.11 implies the following useful fact.
codim(W ),
Proposition 14.14. Let ' : E F ! K be any sesquilinear form. If ' has finite rank r,
then l' and r' have the same rank, which is equal to r.
Proof. Because for every u 2 E,
l' (u)(y) = '(u, y) for all y 2 F ,
and for every v 2 F ,
it is clear that the kernel of l' : E ! F is equal to F ? and that, the kernel of r' : F ! E is
equal to E ? . Therefore, rank(l' ) = dim(Im l' ) = dim(E/F ? ) = r, and similarly rank(l' ) =
dim(F/E ? ) = r.
Remark: If the sesquilinear form ' is represented by the matrix m n matrix M with
respect to the bases (e1 , . . . , em ) in E and (f1 , . . . , fn ) in F , it can be shown that the matrix
representing l' with respect to the bases (e1 , . . . , em ) and (f1 , . . . , fn ) is M , and that the
matrix representing r' with respect to the bases (f1 , . . . , fn ) and (e1 , . . . , em ) is M . It follows
that the rank of ' is equal to the rank of M .
14.4
Let E1 and E2 be two K-vector spaces, and let '1 : E1 E1 ! K be a sesquilinear form on E1
and '2 : E2 E2 ! K be a sesquilinear form on E2 . It is also possible to deal with the more
general situation where we have four vector spaces E1 , F1 , E2 , F2 and two sesquilinear forms
'1 : E1 F1 ! K and '2 : E2 F2 ! K, but we will leave this generalization as an exercise.
We also assume that l'1 and r'1 are bijective, which implies that that '1 is nondegenerate.
This is automatic if the space E1 is finite dimensional and '1 is nondegenerate.
Given any linear map f : E1 ! E2 , for any fixed u 2 E2 , we can consider the linear form
in E1 given by
x 7! '2 (f (x), u), x 2 E1 .
Since r'1 : E1 ! E1 is bijective, there is a unique y 2 E1 (because the vector spaces E1 and
E1 only dier by scalar multiplication), so that
'2 (f (x), u) = '1 (x, y),
for all x 2 E1 .
Thus, we get a function f l : E2 ! E1 . We claim that this function is a linear map. For any
v1 , v2 2 E2 , we have
'2 (f (x), v1 + v2 ) = '2 (f (x), v1 ) + '2 (f (x), v2 )
= '1 (x, f l (v1 )) + '1 (x, f l (v2 ))
= '1 (x, f l (v1 ) + f l (v2 ))
= '1 (x, f l (v1 + v2 )),
for all x 2 E1 . Since r'1 is injective, we conclude that
f l (v1 + v2 ) = f l (v1 ) + f l (v2 ).
For any
2 K, we have
'2 (f (x), v) = '2 (f (x), v)
= '1 (x, f l (v))
= '1 (x, f l (v))
= '1 (x, f l ( v)),
for all x 2 E1 .
The map f l is called the left adjoint of f , and the map f r is called the right adjoint of f .
If E1 and E2 are finite-dimensional with bases (e1 , . . . , em ) and (f1 , . . . , fn ), then we can
work out the matrices Al and Ar corresponding to the left adjoint f l and the right adjoint
f^r of f. Assuming that f is represented by the n × m matrix A, φ_1 is represented by the m × m matrix M_1, and φ_2 is represented by the n × n matrix M_2, we find that
\[
A^l = (\overline{M_1})^{-1} A^* \overline{M_2}, \qquad
A^r = (M_1^\top)^{-1} A^* M_2^\top.
\]
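A small numerical check of these formulas in the real bilinear case (so that all conjugations disappear and A^l = M_1^{-1} A^⊤ M_2): the Python sketch below, with names of our choosing, verifies that φ_2(f(x), u) = φ_1(x, f^l(u)) for random vectors.

    import numpy as np

    np.random.seed(5)
    m, n = 3, 4
    M1 = np.random.randn(m, m); M1 = M1 + M1.T + 5 * np.eye(m)   # symmetric, invertible
    M2 = np.random.randn(n, n); M2 = M2 + M2.T + 5 * np.eye(n)
    A = np.random.randn(n, m)                                     # matrix of f : E1 -> E2

    Al = np.linalg.inv(M1) @ A.T @ M2                             # matrix of the left adjoint

    x = np.random.randn(m)
    u = np.random.randn(n)
    phi1 = lambda a, b: a @ M1 @ b
    phi2 = lambda a, b: a @ M2 @ b
    print(np.isclose(phi2(A @ x, u), phi1(x, Al @ u)))            # True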
If '1 and '2 are symmetric bilinear forms, then f l = f r . This also holds if ' is
-Hermitian. Indeed, since
'2 (u, f (x)) = '1 (f r (u), x),
we get
'2 (f (x), u) = '1 (x, f r (u)),
and since
7!
is an involution, we get
'2 (f (x), u) = '1 (x, f r (u)).
g l ,
14.5
The notion of adjoint is a good tool to investigate the notion of isometry between spaces
equipped with sesquilinear forms. First, we define metric maps and isometries.
Definition 14.11. If (E1 , '1 ) and (E2 , '2 ) are two pairs of spaces and sesquilinear maps
'1 : E1 E2 ! K and '2 : E2 E2 ! K, a metric map from (E1 , '1 ) to (E2 , '2 ) is a linear
map f : E1 ! E2 such that
'1 (u, v) = '2 (f (u), f (v)) for all u, v 2 E1 .
We say that '1 and '2 are equivalent i there is a metric map f : E1 ! E2 which is bijective.
Such a metric map is called an isometry.
The problem of classifying sesquilinear forms up to equivalence is an important but very
difficult problem. Solving this problem depends intimately on properties of the field K, and
a complete answer is only known in a few cases. The problem is easily solved for K = R,
K = C. It is also solved for finite fields and for K = Q (the rationals), but the solution is
surprisingly involved!
It is hard to say anything interesting if '1 is degenerate and if the linear map f does not
have adjoints. The next few propositions make use of natural conditions on '1 that yield a
useful criterion for being a metric map.
Proposition 14.15. With the same assumptions as in Definition 14.10, if f : E1 ! E2 is a
bijective linear map, then we have
'1 (x, y) = '2 (f (x), f (y))
f 1 = f l = f r .
for all x, y 2 E1 i
Proof. We have
'1 (x, y) = '2 (f (x), f (y))
i
'1 (x, y) = '2 (f (x), f (y)) = '1 (x, f l (f (y))
i
'1 (x, (id
f l
f = id,
= f l . similarly,
'1 (x, y) = '2 (f (x), f (y))
i
'1 (x, y) = '2 (f (x), f (y)) = '1 (f r (f (x)), y)
i
'1 ((id
f r
f r
f = id,
for all x, y 2 E i
for all x, y 2 E,
()
then f is injective.
(2) If E is finite-dimensional and if ' is nondegenerate, then the linear maps f : E ! E
satisfying () form a group. The inverse of f is given by f 1 = f .
Proof. (1) If f (x) = 0, then
'(x, y) = '(f (x), f (y)) = '(0, f (y)) = 0 for all y 2 E.
Since l' is injective, we must have x = 0, and thus f is injective.
(2) If E is finite-dimensional, since a linear map satisfying () is injective, it is a bijection.
By Proposition 14.16, we have f 1 = f . We also have
'(f (x), f (y)) = '((f f )(x), y) = '(x, y) = '((f
which shows that f satisfies (). If '(f (x), f (y)) = '(x, y) for all x, y 2 E and '(g(x), g(y))
= '(x, y) for all x, y 2 E, then we have
'((g f )(x), (g f )(y)) = '(f (x), f (y)) = '(x, y) for all x, y 2 E.
Obviously, the identity map idE satisfies (). Therefore, the set of linear maps satisfying ()
is a group.
If ' is symmetric, then the group Isom(') is denoted O(') and called the orthogonal
group of '. If ' is alternating, then the group Isom(') is denoted Sp(') and called the
symplectic group of '. If ' is -Hermitian, then the group Isom(') is denoted U (') and
called the -unitary group of '. When = 1, we drop and just say unitary group.
If (e_1, . . . , e_n) is a basis of E, φ is represented by the n × n matrix M, and f is represented by the n × n matrix A, then we find that f ∈ Isom(φ) iff
\[
A^* M^\top A = M^\top \quad \text{iff} \quad A^\top M \overline{A} = M,
\]
and A^{−1} is given by
\[
A^{-1} = (M^\top)^{-1} A^* M^\top = (\overline{M})^{-1} A^* \overline{M}.
\]
More specifically, we define the following groups, using the matrices I_{p,q}, J_{m,m} and A_{m,m} defined at the end of Section 14.1.

(1) K = R. We have
\[
\begin{aligned}
\mathbf{O}(n) &= \{A \in \mathrm{Mat}_n(\mathbb{R}) \mid A^\top A = I_n\}, \\
\mathbf{Sp}(2n, \mathbb{R}) &= \{A \in \mathrm{Mat}_{2n}(\mathbb{R}) \mid A^\top J_{n,n} A = J_{n,n}\}, \\
\mathbf{SO}(n) &= \{A \in \mathrm{Mat}_n(\mathbb{R}) \mid A^\top A = I_n,\ \det(A) = 1\}.
\end{aligned}
\]
The group O(n) is the orthogonal group, Sp(2n, R) is the real symplectic group, and SO(n) is the special orthogonal group. We can also define the group
\[
\{A \in \mathrm{Mat}_{2n}(\mathbb{R}) \mid A^\top A_{n,n} A = A_{n,n}\},
\]
the isometry group of the form with matrix A_{n,n}.

(2) K = C. We have
\[
\begin{aligned}
\mathbf{U}(n) &= \{A \in \mathrm{Mat}_n(\mathbb{C}) \mid A^* A = I_n\}, \\
\mathbf{Sp}(2n, \mathbb{C}) &= \{A \in \mathrm{Mat}_{2n}(\mathbb{C}) \mid A^\top J_{n,n} A = J_{n,n}\}, \\
\mathbf{SU}(n) &= \{A \in \mathrm{Mat}_n(\mathbb{C}) \mid A^* A = I_n,\ \det(A) = 1\}.
\end{aligned}
\]
The group U(n) is the unitary group, Sp(2n, C) is the complex symplectic group, and SU(n) is the special unitary group.

It can be shown that if A ∈ Sp(2n, R) or if A ∈ Sp(2n, C), then det(A) = 1.
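These membership criteria are straightforward to check numerically. A Python sketch (names ours) tests a plane rotation against the conditions for O(2)/SO(2) and a shear against the condition A^⊤ J A = J defining Sp(2, R):

    import numpy as np

    t = 0.3
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    print(np.allclose(R.T @ R, np.eye(2)), np.isclose(np.linalg.det(R), 1.0))  # in SO(2)

    J = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])                  # J_{1,1}
    S = np.array([[1.0, 2.5],
                  [0.0, 1.0]])                   # a shear (determinant 1)
    print(np.allclose(S.T @ J @ S, J))           # True: S is in Sp(2, R)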
14.6
In this section, we deal with -Hermitian forms, ' : E E ! K. In general, E may have
subspaces U such that U \ U ? 6= (0), or worse, such that U U ? (that is, ' is zero on U ).
We will see that such subspaces play a crucial in the decomposition of E into orthogonal
subspaces.
Definition 14.13. Given an -Hermitian forms ' : E E ! K, a nonzero vector u 2 E is
said to be isotropic if '(u, u) = 0. It is convenient to consider 0 to be isotropic. Given any
subspace U of E, the subspace rad(U ) = U \ U ? is called the radical of U . We say that
(i) U is degenerate if rad(U ) 6= (0) (equivalently if there is some nonzero vector u 2 U
such that x 2 U ? ). Otherwise, we say that U is nondegenerate.
(ii) U is totally isotropic if U U ? (equivalently if the restriction of ' to U is zero).
By definition, the trivial subspace U = (0) (= {0}) is nondegenerate. Observe that a
subspace U is nondegenerate i the restriction of ' to U is nondegenerate. A degenerate
subspace is sometimes called an isotropic subspace. Other authors say that a subspace U
is isotropic if it contains some (nonzero) isotropic vector. A subspace which has no nonzero
isotropic vector is often called anisotropic. The space of all isotropic vectors is a cone often
called the light cone (a terminology coming from the theory of relativity). This is not to
be confused with the cone of silence (from Get Smart)! It should also be noted that some
authors (such as Serre) use the term isotropic instead of totally isotropic. The apparent lack
of standard terminology is almost as bad as in graph theory!
It is clear that any direct sum of pairwise orthogonal totally isotropic subspaces is totally isotropic. Thus, every totally isotropic subspace is contained in some maximal totally
isotropic subspace.
First, let us show that in order to sudy an -Hermitian form on a space E, it suffices to
restrict our attention to nondegenerate forms.
Proposition 14.18. Given an -Hermitian form ' : E E ! K on E, we have:
(a) If U and V are any two orthogonal subspaces of E, then
rad(U + V ) = rad(U ) + rad(V ).
(b) rad(rad(E)) = rad(E).
(c) If U is any subspace supplementary to rad(E), so that
E = rad(E)
U,
= U \ U? \ V ? + V \ U? \ V ?
= U \ U? + V \ V ?
= rad(U ) + rad(V ).
v0 , with
U ?.
(i) U is nondegenerate.
(ii) U ? is nondegenerate.
(iii) E = U
U ?.
Proof. By definition, rad(U ? ) = U ? \ U ?? , and since ' is nondegenerate and U is finitedimensional, U ?? = U , so rad(U ? ) = U ? \ U ?? = U \ U ? = rad(U ).
By Proposition 14.19, (i) implies (iii). If E = U U ? , then rad(U ) = U \ U ? = (0),
so U is nondegenerate and (iii) implies (i). Since rad(U ? ) = rad(U ), (iii) also implies (ii).
Now, if U ? is nondegenerate, we have U ? \ U ?? = (0), and since U U ?? , we get
U \ U ? U ?? \ U ? = (0),
which shows that U is nondegenerate, proving the implication (ii) =) (i).
If E is finite-dimensional, we have the following results.
Proposition 14.21. Given an -Hermitian form ' : E E ! K on a finite-dimensional
space E, if ' is nondegenerate, then for every subspace U of E we have
(i) dim(U ) + dim(U ? ) = dim(E).
(ii) U ?? = U .
Proof. (i) Since ' is nondegenerate and E is finite-dimensional, the semilinear map l' : E !
E is bijective. By transposition, the inclusion i : U ! E yields a surjection r : E ! U
(with r(f ) = f i for every f 2 E ; the map f i is the restriction of the linear form f to
U ). It follows that the semilinear map r l' : E ! U given by
(r l' )(x)(u) = '(x, u) x 2 E, u 2 U
is surjective, and its kernel is U ? . Thus, we have
dim(U ) + dim(U ? ) = dim(E),
and since dim(U ) = dim(U ) because U is finite-dimensional, we get
dim(U ) + dim(U ? ) = dim(U ) + dim(U ? ) = dim(E).
(ii) Applying the above formula to U ? , we deduce that dim(U ) = dim(U ?? ). Since
U U ?? , we must have U ?? = U .
\[
B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
\]
defines the alternating form given by
\[
\varphi((x_1, y_1), (x_2, y_2)) = x_1 y_2 - x_2 y_1.
\]
\[
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
\]
Proof. If u and v were linearly dependent, as u, v ≠ 0, we could write v = λu for some λ ≠ 0, but then, since φ is alternating, we would have
\[
\varphi(u, v) = \varphi(u, \lambda u) = \lambda \varphi(u, u) = 0,
\]
contradicting the fact that φ(u, v) ≠ 0.
Proposition 14.22 yields a plane spanned by two vectors u1 , v1 such that '(u1 , u1 ) =
'(v1 , v1 ) = 0 and '(u1 , v1 ) = 1. Such a plane is called a hyperbolic plane. If E is finitedimensional, we obtain the following theorem.
Theorem 14.23. Let ' : E E ! K be an alternating bilinear form on a space E of
finite dimension n. Then, there is a direct sum decomposition of E into pairwise orthogonal
subspaces
E = W1 Wr rad(E),
where each Wi is a hyperbolic plane and rad(E) = E ? . Therefore, there is a basis of E of
the form
(u1 , v1 , . . . , ur , vr , w1 , . . . , wn 2r ),
with respect to which the matrix representing φ is a block diagonal matrix M of the form
\[
M = \begin{pmatrix}
J & & & \\
& \ddots & & \\
& & J & \\
& & & 0_{n-2r}
\end{pmatrix},
\]
with
\[
J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
\]
Proof. If ' = 0, then E = E ? and we are done. Otherwise, there are two nonzero vectors
u, v 2 E such that '(u, v) 6= 0, so by Proposition 14.22, we obtain a hyperbolic plane W2
spanned by two vectors u1 , v1 such that '(u1 , v1 ) = 1. The subspace W1 is nondegenerate
(for example, det(J) = 1), so by Proposition 14.20, we get a direct sum
E = W1
W1? .
2r )
2r ),
This particularly simple matrix is often preferable, especially when dealing with the matrices
(symplectic matrices) representing the isometries of ' (in which case n = 2r).
We now return to the Witt decomposition. From now on, ' : EE ! K is an -Hermitian
form. The following assumption will be needed:
Property (T). For every u 2 E, there is some 2 K such that '(u, u) = + .
Property (T) is always satisfied if ' is alternating, or if K is of characteristic 6= 2 and
= 1, with = 12 '(u, u).
The following (bizarre) technical lemma will be needed.
Lemma 14.25. Let ' be an -Hermitian form on E and assume that ' satisfies property
(T). For any totally isotropic subspace U 6= (0) of E, for every x 2 E not orthogonal to U ,
and for every 2 K, there is some y 2 U so that
'(x + y, x + y) = + .
Proof. By property (T), we have '(x, x) = + for some 2 K. For any y 2 U , since '
is -Hermitian, '(y, x) = '(x, y), and since U is totally isotropic '(y, y) = 0, so we have
'(x + y, x + y) = '(x, x) + '(x, y) + '(y, x) + '(y, y)
=
+ + '(x, y) + '(x, y)
Since x is not orthogonal to U , the function y 7! '(x, y) + is not the constant function.
Consequently, this function takes the value for some y 2 U , which proves the lemma.
Definition 14.14. Let ' be an -Hermitian form on E. A Witt decomposition of E is a
triple (U, U 0 , W ), such that
(i) E = U
U0
W (a direct sum)
U 0.
U 0)
W.
As a warm up for Proposition 14.27, we prove an analog of Proposition 14.22 in the case
of a symmetric bilinear form.
Proposition 14.26. Let φ : E × E → K be a nondegenerate symmetric bilinear form with K a field of characteristic different from 2. For any nonzero isotropic vector u, there is another nonzero isotropic vector v such that φ(u, v) = 2, and u and v are linearly independent. In the basis (u, v/2), the restriction of φ to the plane spanned by u and v/2 is of the form
\[
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
\]
Proof. Since ' is nondegenerate, there is some nonzero vector z such that (rescaling z if
necessary) '(u, z) = 1. If
v = 2z '(z, z)u,
then since '(u, u) = 0 and '(u, z) = 1, note that
'(u, v) = '(u, 2z
'(z, z)'(u, u) = 2,
and
'(v, v) = '(2z '(z, z)u, 2z '(z, z)u)
= 4'(z, z) 4'(z, z)'(u, z) + '(z, z)2 '(u, u)
= 4'(z, z) 4'(z, z) = 0.
If u and z were linearly dependent, as u, z 6= 0, we could write z = u for some 6= 0, but
then, we would have
'(u, z) = '(u, u) = '(u, u) = 0,
contradicting the fact that '(u, z) 6= 0. Then u and v = 2z '(z, z)u are also linearly
independent, since otherwise z could be expressed as a multiple of u. The rest is obvious.
Proposition 14.26 yields a plane spanned by two vectors u1 , v1 such that '(u1 , u1 ) =
'(v1 , v1 ) = 0 and '(u1 , v1 ) = 1. Such a plane is called an Artinian plane. Proposition 14.26
also shows that nonzero isotropic vectors come in pair.
Remark: Some authors refer to the above plane as a hyperbolic plane. Berger (and others)
point out that this terminology is undesirable because the notion of hyperbolic plane already
exists in dierential geometry and refers to a very dierent object.
We leave it as an exercise to figure out that the group of isometries of the Artinian plane, the set of all 2 × 2 matrices A such that
\[
A^\top \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\]
consists of all matrices of the form
\[
\begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix}
\quad \text{or} \quad
\begin{pmatrix} 0 & \lambda \\ \lambda^{-1} & 0 \end{pmatrix},
\qquad \lambda \in K - \{0\}.
\]
In particular, if K = R, then this group, denoted O(1, 1), has four connected components.
The first step in showing the existence of a Witt decomposition is this.
Proposition 14.27. Let ' be an -Hermitian form on E, assume that ' is nondegenerate
and satisfies property (T), and let U be any totally isotropic subspace of E of finite dimension
dim(U ) = r.
(1) If U 0 is any totally isotropic subspace of dimension r and if U 0 \ U ? = (0), then U U 0
is nondegenerate, and for any basis (u1 , . . . , ur ) of U , there is a basis (u01 , . . . , u0r ) of U 0
such that '(ui , u0j ) = ij , for all i, j = 1, . . . , r.
(2) If W is any totally isotropic subspace of dimension at most r and if W \ U ? = (0),
then there exists a totally isotropic subspace U 0 with dim(U 0 ) = r such that W U 0
and U 0 \ U ? = (0).
Proof. (1) Let '0 be the restriction of ' to U U 0 . Since U 0 \ U ? = (0), for any v 2 U 0 ,
if '(u, v) = 0 for all u 2 U , then v = 0. Thus, '0 is nondegenerate (we only have to check
on the left since ' is -Hermitian). Then, the assertion about bases follows from the version
of Proposition 14.3 for sesquilinear forms. Since U is totally isotropic, U U ? , and since
U 0 \ U ? = (0), we must have U 0 \ U = (0), which show that we have a direct sum U U 0 .
It remains to prove that U + U 0 is nondegenerate. Observe that
H = (U + U 0 ) \ (U + U 0 )? = (U + U 0 ) \ U ? \ U 0? .
Since U is totally isotropic, U U ? , and since U 0 \ U ? = (0), we have
(U + U 0 ) \ U ? = (U \ U ? ) + (U 0 \ U ? ) = U + (0) = U,
Proposition 14.29. Any two -Hermitian neutral forms satisfying property (T) defined on
spaces of the same dimension are equivalent.
Note that under the conditions of Proposition 14.28, (U, U 0 , (U U 0 )? ) is a Witt decomposition for E. By Proposition 14.27(1), the block A in the matrix of ' is the identity
matrix.
The following proposition shows that every subspace U of E can be embedded into a
nondegenerate subspace.
Proposition 14.30. Let ' be an -Hermitian form on E which is nondegenerate and satisfies
property (T). For any subspace U of E of finite dimension, if we write
U =V
W,
for some orthogonal complement W of V = rad(U ), and if we let r = dim(rad(U )), then
there exists a totally isotropic subspace V 0 of dimension r such that V \ V 0 = (0), and
?
(V
V 0)
W = V 0 U is nondegenerate. Furthermore, any isometry f from U into
another space (E 0 , '0 ) where '0 is an -Hermitian form satisfying the same assumptions as
' can be extended to an isometry on (V
V 0)
W.
so (V
W =V0
U is nondegenerate.
We leave the second statement about extending f as an exercise (use the fact that f (U ) =
?
(U
W)
D.
W)
D and E = S
N , so E = S
(U
W)
D.
W in N . Then, N =
for all
0
Im
Im 0
(b) if ' is symmetric ( = +1 and = for all 2 K), then ' is represented by a
matrix of the form
0
1
0 Ir 0
@I r 0 0 A ,
0 0 P
where either n = 2r and P does not occur, or n > 2r and P is a definite symmetric
matrix.
7!
where either n = 2r and P does not occur, or n > 2r and P is a definite matrix
such that P = P .
Proof. Part (1) follows from Theorem 14.31. By Proposition 14.28, we obtain a totally
isotropic subspace U 0 of dimension r such that U \ U 0 = (0). By applying Theorem 14.31
to U1 = U and U2 = U 0 , we get U = W = (0), which proves (2). Part (3) is an immediate
consequence of (2).
As a consequence of Theorem 14.32, we make the following definition.
Definition 14.15. Let E be a vector space of finite dimension n, and let ' be an -Hermitian
form on E which is nondegenerate and satisfies property (T). The index (or Witt index )
of ', is the common dimension of all totally isotropic maximal subspaces of E. We have
2 n.
Neutral forms only exist if n is even, in which case, = n/2. Forms of index = 0
have no nonzero isotropic vectors. When K = R, this is satisfied by positive definite or
negative definite symmetric forms. When K = C, this is satisfied by positive definite or
negative definite Hermitian forms. The vector space of a neutral Hermitian form ( = +1) is
an Artinian space, and the vector space of a neutral alternating form is a hyperbolic space.
If the field K is algebraically closed, we can describe all nondegenerate quadratic forms.
Proposition 14.33. If K is algebraically closed and E has dimension n, then for every
nondegenerate quadratic form , there is a basis (e1 , . . . , en ) such that is given by
X
(Pm
n
xi xm+i
if n = 2m
xi ei = Pi=1
m
2
if n = 2m + 1.
i=1 xi xm+i + x2m+1
i 1
Proof. We work with the polar form ' of . Let U1 and U2 be some totally isotropic
subspaces such that U1 \ U2 = (0) given by Theorem 14.32, and let q be their common
dimension. Then, W = U = (0). Since we can pick bases (e1 , . . . eq ) in U1 and (eq+1 , . . . , e2q )
in U2 such that '(ei , ei+q ) = 0, for i, j = 1, . . . , q, it suffices to proves that dim(D) 1. If
x, y 2 D with x 6= 0, from the identity
(y
x) = (y)
'(x, y) +
(x)
and the fact that (x) 6= 0 since x 2 D and x 6= 0, we see that the equation (y
y) = 0
has at least one solution. Since (z) 6= 0 for every nonzero z 2 D, we get y = x, and thus
dim(D) 1, as claimed.
We also have the following proposition which has applications in number theory.
Proposition 14.34. If
is any nondegenerate quadratic form such that there is some
nonzero vector x 2 E with (x) = 0, then for every 2 K, there is some y 2 E such that
(y) = .
The proof is left as an exercise. We now turn to the Witt extension theorem.
14.7 Witt's Theorem

Witt's theorem was referred to as a "scandal" by Emil Artin. What he meant by this is
that one had to wait until 1936 (Witt [115]) to formulate and prove a theorem at once so
simple in its statement and underlying concepts, and so useful in various domains (geometry,
arithmetic of quadratic forms).1
Besides Witts original proof (Witt [115]), Chevalleys proof [22] seems to be the best
proof that applies to the symmetric as well as the skew-symmetric case. The proof in
Bourbaki [13] is based on Chevalleys proof, and so are a number of other proofs. This is
the one we follow (slightly reorganized). In the symmetric case, Serres exposition is hard to
beat (see Serre [97], Chapter IV).
Theorem 14.35. (Witt, 1936) Let E and E 0 be two finite-dimensional spaces respectively
equipped with two nondegenerate -Hermitan forms ' and '0 satisfying condition (T), and
assume that there is an isometry between (E, ') and (E 0 , '0 ). For any subspace U of E,
every injective metric linear map f from U into E 0 extends to an isometry from E to E 0 .
Proof. Since (E, ') and (E 0 , '0 ) are isometric, we may assume that E 0 = E and '0 = ' (if
h : E ! E 0 is an isometry, then h 1 f is an injective metric map from U into E. The
details are left to the reader). We begin with the following observation. If U1 and U2 are
two subspaces of E such that U1 \ U2 = (0) and if we have metric linear maps f1 : U1 ! E
and f2 : U2 ! E such that
'(f1 (u1 ), f2 (u2 )) = '(u1 , u2 ) for ui 2 Ui (i = 1, 2),
()
then the linear map f : U1 U2 ! E given by f (u1 + u2 ) = f1 (u1 ) + f2 (u2 ) extends f1 and
f2 and is metric. Indeed, since f1 and f2 are metric and using (), we have
'(f1 (u1 ) + f2 (u2 ), f1 (v1 ) + f2 (v2 )) = '(f1 (u1 ), f1 (v1 )) + '(f1 (u1 ), f2 (v2 ))
+ '(f2 (u2 ), f1 (v1 )) + '(f2 (u2 ), f2 (v2 ))
= '(u1 , v1 ) + '(u1 , v2 ) + '(u2 , v1 ) + '(u2 , v2 )
= '(u1 + u2 , v2 + v2 ).
Furthermore, if f1 and f2 are injective, then so if f .
We now proceed by induction on the dimension r of U . The case r = 0 is trivial. For
the induction step, r
1 so U 6= (0), and let H be any hyperplane in U . Let f : U ! E
be an injective metric linear map. By the induction hypothesis, the restriction f0 of f to H
extends to an isometry g0 of E. If g0 extends f , we are done. Otherwise, H is the subspace
of elements of U left fixed by g0 1 f . If the theorem holds in this situation, namely the
1
Curiously, some references to Witts paper claim its date of publication to be 1936, but others say 1937.
The answer to this mystery is that Volume 176 of Crelle Journal was published in four issues. The cover
page of volume 176 mentions the year 1937, but Witts paper is dated May 1936. This is not the only paper
of Witt appearing in this volume!
414
f (u), v),
that is
'(f (u), f (v)
v) = '(u
()
In this case, formula () show that f (U ) is not contained in D? (check this!). Consequently,
U \ D? = f (U ) \ D? = H.
We can pick V to be any supplement of H in D? , and the above formula shows that V \ U =
V \ f (U ) = (0). Since U V contains the hyperplane D? (since D? = H V and H U ),
and U V 6= D? (since U is not contained in D? and V D? ), we must have E = U V ,
and as we showed as a consequence of hypothesis (V), f can be extended to an isometry of
U V = E.
Case (b). U D? .
415
We show that case (b) can be reduced to the situation where U = D? and f is an
isometry of U . For this, we show that there exists a subspace V of D? , such that
D? = U
V = f (U )
V.
Kx,
f (U ) = H
Ky.
We claim that x + y 2
/ U . Otherwise, since y = x + y x, with x + y, x 2 U and since
y 2 f (U ), we would have y 2 U \ f (U ) = H, a contradiction. Similarly, x + y 2
/ f (U ). It
follows that
U + f (U ) = U K(x + y) = f (U ) K(x + y).
Now, pick W to be any supplement of U + f (U ) in D? so that D? = (U + f (U ))
let
V = K(x + y) + W.
W , and
V = f (U )
K(x + y)
W = (U + f (U ))
W = D? ,
()
This is because the linear form u 7! '(f 1 (u), v) (u 2 U ) is the restriction of a linear form
2 E , and since ' is nondegenerate, there is some (unique) v 0 2 E, such that
(x) = '(x, v 0 ) for all x 2 E,
416
We still need to pick y 2 D so that v1 = v 0 + y satisfies '(v1 , v1 ) = '(v, v). However, since
v2
/ U = D? , the vector v is not orthogonal D, and by lemma 14.25, there is some y 2 D
such that
'(v 0 + y, v 0 + y) = '(v, v).
Then, if we let v1 = v 0 + y, as we showed at the beginning of the proof, we can extend f
to a metric map g of U + Kv = E by setting g(v) = v1 . Since ' is nondegenerate, g is an
isometry.
The first corollary of Witts theorem is sometimes called the Witts cancellation theorem.
Theorem 14.36. (Witt Cancellation Theorem) Let (E1 , '1 ) and (E2 , '2 ) be two pairs of
finite-dimensional spaces and nondegenerate -Hermitian forms satisfying condition (T), and
assume that (E1 , '1 ) and (E2 , '2 ) are isometric. For any subspace U of E1 and any subspace
V of E2 , if there is an isometry f : U ! V , then there is an isometry g : U ? ! V ? .
Proof. If f : U ! V is an isometry between U and V , by Witts theorem (Theorem 14.36),
the linear map f extends to an isometry g between E1 and E2 . We claim that g maps U ?
into V ? . This is because if v 2 U ? , we have '1 (u, v) = 0 for all u 2 U , so
'2 (g(u), g(v)) = '1 (u, v) = 0 for all u 2 U ,
and since g is a bijection between U and V , we have g(U ) = V , so we see that g(v) is
orthogonal to V for every v 2 U ? ; that is, g(U ? ) V ? . Since g is a metric map and since
'1 is nondegenerate, the restriction of g to U ? is an isometry from U ? to V ? .
A pair (E, ') where E is finite-dimensional and ' is a nondegenerate -Hermitian form
is often called an -Hermitian space. When = 1 and ' is symmetric, we use the term
Euclidean space or quadratic space. When = 1 and ' is alternating, we use the term
symplectic space. When = 1 and the automorphism 7! is not the identity we use the
term Hermitian space, and when = 1, we use the term skew-Hermitian space.
We also have the following result showing that the group of isometries of an -Hermitian
space is transitive on totally isotropic subspaces of the same dimension.
Theorem 14.37. Let E be a finite-dimensional vector space and let ' be a nondegenerate
-Hermitian form on E satisfying condition (T). Then for any two totally isotropic subspaces
U and V of the same dimension, there is an isometry f 2 Isom(') such that f (U ) = V .
Furthermore, every linear automorphism of U is induced by an isometry of E.
417
Remark: Witts cancelation theorem can be used to define an equivalence relation on Hermitian spaces and to define a group structure on these equivalence classes. This way, we
obtain the Witt group, but we will not discuss it here.
14.8
Symplectic Groups
In this section, we are dealing with a nondegenerate alternating form ' on a vector space E
of dimension n. As we saw earlier, n must be even, say n = 2m. By Theorem 14.23, there
is a direct sum decomposition of E into pairwise orthogonal subspaces
E = W1
Wm ,
where each Wi is a hyperbolic plane. Each Wi has a basis (ui , vi ), with '(ui , ui ) = '(vi , vi ) =
0 and '(ui , vi ) = 1, for i = 1, . . . , m. In the basis
(u1 , . . . , um , v1 , . . . , vm ),
' is represented by the matrix
Jm,m =
0
Im
.
Im 0
The symplectic group Sp(2m, K) is the group of isometries of '. The maps in Sp(2m, K)
are called symplectic maps. With respect to the above basis, Sp(2m, K) is the group of
2m 2m matrices A such that
A> Jm,m A = Jm,m .
Matrices satisfying the above identity are called symplectic matrices. In this section, we show
that Sp(2m, K) is a subgroup of SL(2m, K) (that is, det(A) = +1 for all A 2 Sp(2m, K)),
and we show that Sp(2m, K) is generated by special linear maps called symplectic transvections.
First, we leave it as an easy exercise to show that Sp(2, K) = SL(2, K). The reader
should also prove that Sp(2m, K) has a subgroup isomorphic to GL(m, K).
Next we characterize the symplectic maps f that leave fixed every vector in some given
hyperplane H, that is,
f (v) = v for all v 2 H.
Since ' is nondegenerate, by Proposition 14.21, the orthogonal H ? of H is a line (that is,
dim(H ? ) = 1). For every u 2 E and every v 2 H, since f is an isometry and f (v) = v for
all v 2 H, we have
'(f (u)
418
which shows that f (u) u 2 H ? for all u 2 E. Therefore, f id is a linear map from E
into the line H ? whose kernel contains H, which means that there is some nonzero vector
w 2 H ? and some linear form such that
f (u) = u + (u)w,
u 2 E.
Since f is an isometry, we must have '(f (u), f (v)) = '(u, v) for all u, v 2 E, which means
that
'(u, v) = '(f (u), f (v))
= '(u + (u)w, v + (v)w)
= '(u, v) + (u)'(w, v) + (v)'(u, w) + (u) (v)'(w, w)
= '(u, v) + (u)'(w, v)
(v)'(w, u),
which yields
(u)'(w, v) = (v)'(w, u) for all u, v 2 E.
Since ' is nondegenerate, we can pick some v0 such that '(w, v0 ) 6= 0, and we get
(u)'(w, v0 ) = (v0 )'(w, u) for all u 2 E; that is,
(u) = '(w, u) for all u 2 E,
for some
for all u 2 E.
It is also clear that every f of the above form is a symplectic map. If = 0, then f = id.
Otherwise, if 6= 0, then f (u) = u i '(w, u) = 0 i u 2 (Kw)? = H, where H is a
hyperplane. Thus, f fixes every vector in the hyperplane H. Note that since ' is alternating,
'(w, w) = 0, which means that w 2 H.
In summary, we have characterized all the symplectic maps that leave every vector in
some hyperplane fixed, and we make the following definition.
for all u 2 E,
for some nonzero w 2 E and some 2 K. If 6= 0, the subspace of vectors left fixed by f
is the hyperplane H = (Kw)? . The map f is also denoted u, .
Observe that
u,
and u, = id i
u, = (u, /2 )2 .
u, = u,
6= 0, we have
Our next goal is to show that if u and v are any two nonzero vectors in E, then there is
a simple symplectic map f such that f (u) = v.
419
Proposition 14.38. Given any two nonzero vectors u, v 2 E, there is a symplectic map
f such that f (u) = v, and f is either a symplectic transvection, or the composition of two
symplectic transvections.
Proof. There are two cases.
Case 1 . '(u, v) 6= 0.
In this case, u 6= v, since '(u, u) = 0. Let us look for a symplectic transvection of the
form v u, . We want
v = u + '(v
u, u)(v
u) = u + '(v, u)(v
u),
which yields
( '(v, u)
Since '(u, v) 6= 0 and '(v, u) =
1)(v
u) = 0.
= '(v, u)
and v
u,
maps u to v.
Case 2 . '(u, v) = 0.
If u = v, use u,0 = id. Now, assume u 6= v. We claim that it is possible to pick some
w 2 E such that '(u, w) 6= 0 and '(v, w) 6= 0. Indeed, if (Ku)? = (Kv)? , then pick any
nonzero vector w not in the hyperplane (Ku)? . Othwerwise, (Ku)? and (Kv)? are two
distinct hyperplanes, so neither is contained in the other (they have the same dimension),
so pick any nonzero vector w1 such that w1 2 (Ku)? and w1 2
/ (Kv)? , and pick any
?
?
nonzero vector w2 such that w2 2 (Kv) and w2 2
/ (Ku) . If we let w = w1 + w2 , then
'(u, w) = '(u, w2 ) 6= 0, and '(v, w) = '(v, w1 ) 6= 0. From case 1, we have some symplectic
transvection w u, 1 such that w u, 1 (u) = w, and some symplectic transvection v w, 2 such
that v w, 2 (w) = v, so the composition v w, 2 w u, 1 maps u to v.
Next, we would like to extend Proposition 14.38 to two hyperbolic planes W1 and W2 .
Proposition 14.39. Given any two hyperbolic planes W1 and W2 given by bases (u1 , v1 ) and
(u2 , v2 ) (with '(ui , ui ) = '(vi , vi ) = 0 and '(ui , vi ) = 1, for i = 1, 2), there is a symplectic
map f such that f (u1 ) = u2 , f (v1 ) = v2 , and f is the composition of at most four symplectic
transvections.
Proof. From Proposition 14.38, we can map u1 to u2 , using a map f which is the composition
of at most two symplectic transvections. Say v3 = f (v1 ). We claim that there is a map g
such that g(u2 ) = u2 and g(v3 ) = v2 , and g is the composition of at most two symplectic
transvections. If so, g f maps the pair (u1 , v1 ) to the pair (u2 , v2 ), and g f consists of at
most four symplectic transvections. Thus, we need to prove the following claim:
Claim. If (u, v) and (u, v 0 ) are hyperbolic bases determining two hyperbolic planes, then
there is a symplectic map g such that g(u) = u, g(v) = v 0 , and g is the composition of at
most two symplectic transvections. There are two case.
Case 1 . '(v, v 0 ) 6= 0.
420
v,
(u) = u, and g = v0
v,
v,
(v) = v 0 . We also
Case 2 . '(v, v 0 ) = 0.
First, check that (u, u + v) is a also hyperbolic basis. Furthermore,
'(v, u + v) = '(v, u) + '(v, v) = '(v, u) =
1 6= 0.
v) = '(u, v 0 )
u v,
such that v0
'(u, u)
u v,
'(u, v) = 1
u v,
(u + v) = v 0 . Since
0
u,
1 = 0,
1
We will use Proposition 14.39 in an inductive argument to prove that the symplectic
transvections generate the symplectic group. First, make the following observation: If U is
a nondegenerate subspace of E, so that
E=U
U ?,
whose restriction
is a transvection of E.
Theorem 14.40. The symplectic group Sp(2m, K) is generated by the symplectic transvections. For every transvection f 2 Sp(2m, K), we have det(f ) = 1.
Proof. Let G be the subgroup of Sp(2m, K) generated by the tranvections. We need to
prove that G = Sp(2m, K). Let (u1 , v1 , . . . , um , vm ) be a symplectic basis of E, and let f 2
Sp(2m, K) be any symplectic map. Then, f maps (u1 , v1 , . . . , um , vm ) to another symplectic
0
basis (u01 , v10 , . . . , u0m , vm
). If we prove that there is some g 2 G such that g(ui ) = u0i and
g(vi ) = vi0 for i = 1, . . . , m, then f = g and G = Sp(2m, K).
We use induction on i to prove that there is some gi 2 G so that gi maps (u1 , v1 , . . . , ui , vi )
to (u01 , v10 , . . . , u0i , vi0 ).
The base case i = 1 follows from Proposition 14.39.
For the induction step, assume that we have some gi 2 G mapping (u1 , v1 , . . . , ui , vi )
00
00
to (u01 , v10 , . . . , u0i , vi0 ), and let (u00i+1 , vi+1
, . . . , u00m , vm
) be the image of (ui+1 , vi+1 , . . . , um , vm )
421
0
by gi . If U is the subspace spanned by (u01 , v10 , . . . , u0m , vm
), then each hyperbolic plane
0
0
0
00
00
Wi+k given by (ui+k , vi+k ) and each hyperbolic plane Wi+k given by (u00i+k , vi+k
) belongs to
?
U . Using the remark before the theorem and Proposition 14.39, we can find a transvec00
0
tion mapping Wi+1
onto Wi+1
and leaving every vector in U fixed. Then, gi maps
0
0
0
(u1 , v1 , . . . , ui+1 , vi+1 ) to (u1 , v1 , . . . , u0i+1 , vi+1
), establishing the induction step.
For the second statement, since we already proved that every transvection has a determinant equal to +1, this also holds for any composition of transvections in G, and since
G = Sp(2m, K), we are done.
It can also be shown that the center of Sp(2m, K) is reduced to the subgroup {id, id}.
The projective symplectic group PSp(2m, K) is the quotient group PSp(2m, K)/{id, id}.
All symplectic projective groups are simple, except PSp(2, F2 ), PSp(2, F3 ), and PSp(4, F2 ),
see Grove [52].
The orders of the symplectic groups over finite fields can be determined. For details, see
Artin [3], Jacobson [59] and Grove [52].
An interesting property of symplectic spaces is that the determinant of a skew-symmetric
matrix B is the square of some polynomial Pf(B) called the Pfaffian; see Jacobson [59] and
Artin [3]. We leave considerations of the Pfaffian to the exercises.
We now take a look at the orthogonal groups.
14.9
Orthogonal Groups
In this section, we are dealing with a nondegenerate symmetric bilinear from ' over a finitedimensional vector space E of dimension n over a field of characateristic not equal to 2.
Recall that the orthogonal group O(') is the group of isometries of '; that is, the group of
linear maps f : E ! E such that
'(f (u), f (v)) = '(u, v) for all u, v 2 E.
The elements of O(') are also called orthogonal transformations. If M is the matrix of ' in
any basis, then a matrix A represents an orthogonal transformation i
A> M A = M.
Since ' is nondegenerate, M is invertible, so we see that det(A) = 1. The subgroup
SO(') = {f 2 O(') | det(f ) = 1}
is called the special orthogonal group (of '), and its members are called rotations (or proper
orthogonal transformations). Isometries f 2 O(') such that det(f ) = 1 are called improper
orthogonal transformations, or sometimes reversions.
422
'(v, u)
u for all v 2 E.
'(u, u)
6= 0, we have
If we replace u by u with
u (v) = v
'(v, u)
u=v
'( u, u)
'(v, u)
2 '(u, u)
u=v
'(v, u)
u,
'(u, u)
which shows that u depends only on the line D, and thus only the hyperplane H. Therefore,
denote by H the linear map u determined as above by any nonzero vector u 2 H ? . Note
that if v 2 H, then
H (v) = v,
and if v 2 D, then
H (v) =
v.
1.
'(v, u)
u for all v 2 E
'(u, u)
is an involutive isometry of E called the reflection through (or about) the hyperplane H.
Remarks:
1. It can be shown that if f 2 O(') leaves every vector in some hyperplane H fixed, then
either f = id or f = H ; see Taylor [108] (Chapter 11). Thus, there is no analog to
symplectic transvections in the orthogonal group.
2. If K = R and ' is the usual Euclidean inner product, the matrices corresponding to
hyperplane reflections are called Householder matrices.
Our goal is to prove that O(') is generated by the hyperplane reflections. The following
proposition is needed.
423
Proposition 14.41. Let ' be a nondegenerate symmetric bilinear form on a vector space
E. For any two nonzero vectors u, v 2 E, if '(u, u) = '(v, v) and v u is nonisotropic,
then the hyperplane reflection H = v u maps u to v, with H = (K(v u))? .
Proof. Since v
u (u)
=u
=u
=u
u, v
u) 6= 0, and we have
'(u, v u)
(v u)
'(v u, v u)
'(u, v) '(u, u)
2
(v
'(v, v) 2'(u, v) + '(u, u)
2('(u, v) '(u, u))
(v u)
2('(u, u) 2'(u, v))
2
u)
= v,
which proves the proposition.
We can now obtain a cheap version of the CartanDieudonne theorem.
Theorem 14.42. (CartanDieudonne, weak form) Let ' be a nondegenerate symmetric
bilinear form on a K-vector space E of dimension n (char(K) 6= 2). Then, every isometry
f 2 O(') with f 6= id is the composition of at most 2n 1 hyperplane reflections.
Proof. We proceed by induction on n. For n = 0, this is trivial (since O(') = {id}).
erate, E = H (Ku)? , and since f (u) = u, we must have f (H) = H. The restriction f 0 of
of f to H is an isometry of H. By the induction hypothesis, we can write
f 0 = k0
10 ,
i0
3. We
1 .
f = k
Case 2 . f (u) =
u.
424
2n
u.
'(u, f (u))
'(u, u)
We also have
'(u, u) = '((f (u) + u (f (u) u))/2, (f (u) + u (f (u) u))/2)
1
1
= '(f (u) + u, f (u) + u) + '(f (u) u, f (u) u),
4
4
so f (u) + u and f (u)
If f (u)
= id, if g = f (u)
u (u)
is such that
= f (u),
f = f (u)
2n
f (u),
and since f2(u)+u = id, if g = f (u)+u f , then g(u) = u, and we are back to case (2). We
obtain
f = f (u) u k 1
where , f (u) u and the i are hyperplane reflections, with k 2n
2n 1 hyperplane reflections. This proves the induction step.
The bound 2n 1 is not optimal. The strong version of the CartanDieudonne theorem
says that at most n reflections are needed, but the proof is harder. Here is a neat proof due
to E. Artin (see [3], Chapter III, Section 4).
Case 1 remains unchanged. Case 2 is slightly dierent: f (u) u is not isotropic. Since
'(f (u) + u, f (u) u) = 0, as in the first subcase of Case (3), g = f (u) u f is such that
g(u) = u and we are back to Case 1. This only costs one more reflection.
The new (bad) case is:
425
Case 3 . f (u) u is nonzero and isotropic for all nonisotropic u 2 E. In this case, what
saves us is that E must be an Artinian space of dimension n = 2m and that f must be a
rotation (f 2 SO(')).
If we acccept this fact, then pick any hyperplane reflection . Then, since f is a rotation,
g = f is not a rotation because det(g) = det( ) det(f ) = ( 1)(+1) = 1, so g(u) u
is not isotropic for all nonisotropic u 2 E, we are back to Case 2, and using the induction
hypothesis, we get
f = k . . . , 1 ,
where each i is a hyperplane reflection, and k 2m. Since f is not a rotation, actually
k 2m 1, and then f = k . . . , 1 , the composition of at most k + 1 2m hyperplane
reflections.
Therefore, except for the fact that in Case 3, E must be an Artinian space of dimension
n = 2m and that f must be a rotation, which has not been proven yet, we proved the
following theorem.
Theorem 14.43. (CartanDieudonne, strong form) Let ' be a nondegenerate symmetric
bilinear form on a K-vector space E of dimension n (char(K) 6= 2). Then, every isometry
f 2 O(') with f 6= id is the composition of at most n hyperplane reflections.
To fill in the gap, we need two propositions.
Proposition 14.44. Let (E, ') be an Artinian space of dimension 2m, and let U be a totally
isotropic subspace of dimension m. For any isometry f 2 O('), we have det(f ) = 1 (f is a
rotation).
Proof. We know that we can find a basis (u1 , . . . , um , v1 , . . . , vm ) of E such (u1 , . . . , um ) is a
basis of U and ' is represented by the matrix
0 Im
.
Im 0
Since f (U ) = U , the matrix representing f is of the form
B C
A=
.
0 D
The condition A> Am,m A = Am,m translates as
>
B
0
0 Im
B C
0 Im
=
C > D>
Im 0
0 D
Im 0
that is,
B> 0
C > D>
0 D
B C
B>D
D> B C > D + D> C
0
0 Im
,
Im 0
426
0 1
,
1 0
we have '((x1 , x2 ), (x1 , x2 )) = 2x1 x2 , and the matrices representing isometries are of the
form
0
0
or
,
2 K {0}.
1
1
0
0
In the second case,
0
1
Let us now assume that n 3. Let y be some nonzero isotropic vector. Since n 3, the
orthogonal space (Ky)? has dimension at least 2, and we know that rad(Ky) = rad((Ky)? ),
which implies that (Ky)? contains some nonisotropic vector, say x. We have '(x, y) =
0, so '(x + y, x + y) = '(x, x) 6= 0, for = 1. Then, by hypothesis, the vectors
f (x) x, f (x+y) (x+y) = f (x) x+(f (y) y), and f (x y) (x y) = f (x) x (f (y) y)
are isotropic. The last two vectors can be written as f (x) x) + (f (y) y) with = 1,
so we have
0 = '(f (x)
x) + (f (y)
y, f (y)
y).
If we write the two equations corresponding to = 1, and then add them up, we get
'(f (y)
y, f (y)
y) = 0.
Therefore, we proved that f (u) u is isotropic for every u 2 E. If we let W = Im(f id),
then every vector in W is isotropic, and thus W is totally isotropic (recall that we assumed
427
u, v) = 0, and so
'(f (u)
u, v)
for all u 2 E. Since ' is nonsingular, this means that f (v) = v, for all v 2 W ? . However,
by hypothesis, no nonisotropic vector is left fixed, which implies that W ? is also totally
isotropic. In summary, we proved that W W ? and W ? W ?? = W , that is,
W = W ?.
Since, dim(W ) + dim(W ? ) = n, we conclude that W is a totally isotropic subspace of E
such that
dim(W ) = n/2.
By Proposition 14.27, the space E is an Artinian space of dimension n = 2m. Since W = W ?
and f (W ? ) = W ? , by Proposition 14.44, the isometry f is a rotation.
Remarks:
1. Another way to finish the proof of Proposition 14.45 is to prove that if f is an isometry,
then
Ker (f id) = (Im(f id))? .
After having proved that W = Im(f
Ker (f
id),
which implies that (f id)2 = 0. From this, we deduce that det(f ) = 1. For details,
see Jacobson [59] (Chapter 6, Section 6).
2. If f = Hk
s.
428
Proof. If g1 and g2 are two extensions of f such that det(g1 ) det(g2 ) = 1, then h = g1 1 g2
is an isometry such that det(h) = 1, and h leaves every vector of U fixed. Conversely, if h
is an isometry such that det(h) = 1, and h(u) = u for all u 2 U , then for any extesnion g1
of f , the map g2 = h g1 is another extension of f such that det(g2 ) = det(g1 ). Therefore,
we need to show that a map h as above exists.
If dim(U ) + dim(rad(U )) < dim(E), consider the nondegenerate completion U of U given
by Proposition 14.30. We know that dim(U ) = dim(U ) + dim(rad(U )) < n, and since U is
nondegenerate, we have
E=U
?
U ,
with U =
6 (0). Pick any isometry of U such that det( ) =
isometry h of E whose restriction to U is the identity.
If dim(U ) + dim(rad(U )) = dim(E) = n, then U = V
dim(U ) = dim(U ) + dim(rad(U )) = n, we have
E = U = (V
V 0)
1, and extend it to an
W,
Chapter 15
Variational Approximation of
Boundary-Value Problems;
Introduction to the Finite Elements
Method
15.1
Consider a beam of unit length supported at its ends in 0 and 1, stretched along its axis by
a force P , and subjected to a transverse load f (x)dx per element dx, as illustrated in Figure
15.1.
dx
f (x)dx
Figure 15.1: Vertical deflection of a beam
The bending moment u(x) at the absissa x is the solution of a boundary problem (BP)
of the form
u00 (x) + c(x)u(x) = f (x),
u(0) =
u(1) = ,
429
0<x<1
430
where c(x) = P/(EI(x)), where E is the Youngs modulus of the material of which the beam
is made and I(x) is the principal moment of inertia of the cross-section of the beam at the
abcissa x, and with = = 0. For this problem, we may assume that c(x)
0 for all
x 2 [0, 1].
Remark: The vertical deflection w(x) of the beam and the bending moment u(x) are related
by the equation
d2 w
u(x) = EI 2 .
dx
If we seek a solution u 2 C 2 ([0, 1]), that is, a function whose first and second derivatives
exist and are continuous, then it can be shown that the problem has a unique solution
(assuming c and f to be continuous functions on [0, 1]).
Except in very rare situations, this problem has no closed-form solution, so we are led to
seek approximations of the solutions.
One one way to proceed is to use the finite dierence method , where we discretize the
problem and replace derivatives by dierences. Another way is to use a variational approach.
In this approach, we follow a somewhat surprising path in which we come up with a so-called
weak formulation of the problem, by using a trick based on integrating by parts!
First, let us observe that we can always assume that = = 0, by looking for a solution
of the form u(x) ((1 x) + x). This turns out to be crucial when we integrate by parts.
There are a lot of subtle mathematical details involved to make what follows rigorous, but
here, we will take a relaxed approach.
First, we need to specify the space of weak solutions. This will be the vector space V of
continuous functions f on [0, 1], with f (0) = f (1) = 0, and which are piecewise continuously
dierentiable on [0, 1]. This means that there is a finite number of points x0 , . . . , xN +1 with
x0 = 0 and xN +1 = 1, such that f 0 (xi ) is undefined for i = 1, . . . , N , but otherwise f 0 is
defined and continuous on each interval (xi , xi+1 ) for i = 0, . . . , N .1 The space V becomes a
Euclidean vector space under the inner product
Z 1
hf, giV =
(f (x)g(x) + f 0 (x)g 0 (x))dx,
0
0<x<1
We also assume that f 0 (x) has a limit when x tends to a boundary of (xi , xi+1 ).
431
c(x)u(x)v(x)dx =
0
()
f (x)v(x)dx.
()
Now, the trick is to use integration by parts on the first term. Recall that
(u0 v)0 = u00 v + u0 v 0 ,
and to be careful about discontinuities, write
Z
00
u (x)v(x)dx =
0
N Z
X
i=0
xi+1
u00 (x)v(x)dx.
xi
xi+1
u0 (x)v 0 (x)dx
xi
xi+1
u0 (x)v 0 (x)dx
Z xi+1
0
u (xi )v(xi )
u0 (x)v 0 (x)dx.
xi
xi
It follows that
Z 1
N Z
X
00
u (x)v(x)dx =
0
xi+1
i=0 xi
N
X
u00 (x)v(x)dx
u (xi+1 )v(xi+1 )
u (xi )v(xi )
i=0
0
= u (1)v(1)
u (0)v(0)
xi+1
u (x)v (x)dx
xi
u0 (x)v 0 (x)dx.
However, the test function v satisfies the boundary conditions v(0) = v(1) = 0 (recall that
v 2 V ), so we get
Z 1
Z 1
00
u (x)v(x)dx =
u0 (x)v 0 (x)dx.
0
f (x)v(x)dx,
0
432
or
0 0
(u v + cuv)dx =
0
f vdx,
0
for all v 2 V.
()
(u0 v 0 + cuv)dx,
Then, () becomes
for all u, v 2 V ,
f (x)v(x)dx,
for all v 2 V .
a(u, v) = fe(v),
for all v 2 V.
1
J(v) = a(v, v)
2
fe(v) v 2 V.
where
a(u, v) =
and
fe(v) =
for all v 2 V,
(u0 v 0 + cuv)dx,
for all u, v 2 V ,
(WF)
f (x)v(x)dx,
0
for all v 2 V .
(2) If c(x)
0 for all x 2 [0, 1], then a function u 2 V is a solution of (WF) i u
minimizes J(v), that is,
J(u) = inf J(v),
v2V
with
Furthermore, u is unique.
1
J(v) = a(v, v)
2
fe(v)
v 2 V.
433
for all v 2 V.
(f 0 (x))2 dx,
for all v 2 V.
and so
kvk2V
since
1
2
1
0
((v 0 )2 + cv 2 )dx.
1
fe(v) + a(v, v),
2
Then, if u is a solution of (WF), we deduce that
J(u + v)
J(u) = a(u, v)
J(u + v)
since a(u, v)
1
J(u) = a(v, v)
2
1
kvkV
4
for all u, v 2 V .
0 for all v 2 V.
We also have
2
a(v, v) for all 2 R,
2
and so J(u + v) J(u) 0 for all 2 R. Consequently, if J achieves a minimum for u,
then a(u, v) = fe(v), which means that u is a solution of (WF).
J(u + v)
J(u) = (a(u, v)
f (v)) +
1
J(u) = a(v, v) for all v 2 V
2
we see that J(u + v) > J(u), so the minimum u is unique
J(u + v)
434
Theorem 15.1 shows that every solution u of our boundary problem (BP) is a solution
(in fact, unique) of the equation (WF).
The equation (WF) is called the weak form or variational equation associated with the
boundary problem. This idea to derive these equations is due to Ritz and Galerkin.
Now, the natural question is whether the variational equation (WF) has a solution, and
whether this solution, if it exists, is also a solution of the boundary problem (it must belong
to C 2 ([0, 1]), which is far from obvious). Then, (BP) and (WF) would be equivalent.
Some fancy tools of analysis can be used to prove these assertions. The first difficulty is
that the vector space V is not the right space of solutions, because in order for the variational
problem to have a solution, it must be complete. So, we must construct a completion of the
vector space V . This can be done and we get the Sobolev space H01 (0, 1). Then, the question
of the regularity of the weak solution can also be tackled.
We will not worry about all this. Instead, let us find approximations of the problem (WF).
Instead of using the infinite-dimensional vector space V , we consider finite-dimensional subspaces Va (with dim(Va ) = n) of V , and we consider the discrete problem:
Find a function u(a) 2 Va , such that
a(u(a) , v) = fe(v),
for all v 2 Va .
(DWF)
Since Va is finite dimensional (of dimension n), let us pick a basis of functions (w1 , . . . , wn )
in Va , so that every function u 2 Va can we written as
u = u1 w 1 + + u n w n .
Then, the equation (DWF) holds i
a(u, wj ) = fe(wj ),
j = 1, . . . , n,
1 j n.
1
Because a(v, v)
kvkVa , the bilinear form a is symmetric positive definite, and thus
2
the matrix (a(wi , wj )) is symmetric positive definite, and thus invertible. Therefore, (DWF)
has a solution given by a linear system!
and
bj = fe(wj ) =
f (x)wj (x)dx.
0
435
However, if the basis functions are simple enough, this can be done by hand. Otherwise,
numerical integration methods must be used, but there are some good ones.
Let us also remark that the proof of Theorem 15.1 also shows that the unique solution of
(DWF) is the unique minimizer of J over all functions in Va . It is also possible to compare
the approximate solution u(a) 2 Va with the exact solution u 2 V .
Theorem 15.2. Suppose c(x) 0 for all x 2 [0, 1]. For every finite-dimensional subspace
Va (dim(Va ) = n) of V , for every basis (w1 , . . . , wn ) of Va , the following properties hold:
(1) There is a unique function u(a) 2 Va such that
a(u(a) , v) = fe(v),
for all v 2 Va ,
(DWF)
(2) The unique solution u(a) 2 Va of (DWF) is the unique minimizer of J over Va , that is,
J(u(a) ) = inf J(v),
v2Va
We proved (1) and (2), but we will omit the proof of (3) which can be found in Ciarlet
[24].
Let us now give examples of the subspaces Va used in practice. They usually consist of
piecewise polynomial functions.
Pick an integer N
h=
1
,
N +1
i = 0, . . . , N + 1.
436
There are various ways to prove this. One way is to use the Bernstein basis, because
the kth derivative of a polynomial is given by a formula in terms of its control points. For
example, for m = 1, every degree 3 polynomial can be written as
P (x) = (1
x)3 b0 + 3(1
x)2 x b1 + 3(1
x)x2 b2 + x3 b3 ,
b0 )
b2 ).
Given P (0) and P (1), we determine b0 and b3 , and from P 0 (0) and P 0 (1), we determine b1
and b2 .
In general, for a polynomial of degree m written as
P (x) =
m
X
bj Bjm (x)
j=0
m
in terms of the Bernstein basis (B0m (x), . . . , Bm
(x)) with
m
m
Bj (x) =
(1 x)m j xj ,
j
(k)
(0) = m(m
1) (m
X
k
k
k + 1)
( 1)k
i
i=0
bi ,
(k)
(0) =
m(m
1) (m
(s r)k
k + 1)
X
k
k
( 1)k
i
i=0
bi ,
with a similar formula for P (k) (1). In our case, we set r = xi , s = xi+1 .
Now, if the 2m + 2 values
P (0), P (1) (0), . . . , P (m) (0), P (1), P (1) (1), . . . , P (m) (1)
437
are given, we obtain a triangular system that determines uniquely the 2m + 2 control points
b0 , . . . , b2m+1 .
Recall that C m ([0, 1]) denotes the set of C m functions f on [0, 1], which means that
f, f (1) , . . . , f (m) exist are are continuous on [0, 1].
We define the vector space VNm as the subspace of C m ([0, 1]) consisting of all functions f
such that
1. f (0) = f (1) = 0.
2. The restriction of f to [xi , xi+1 ] is a polynomial of degree 2m + 1, for i = 0, . . . , N .
Observe that the functions in VN0 are the piecewise affine functions f with f (0) = f (1) =
0; an example is shown in Figure 15.2.
ih
N
X
i=1
v(ih)wi (x),
x 2 [0, 1].
438
wi
1)h ih (i + 1)h
(i
1).
Going back to our problem (the bending of a beam), assuming that c and f are constant
functions, it is not hard to show that the linear system () becomes
0
2c 2
h
3
+ 6c h2
2+
B
B 1
B
1B
B
hB
B
B
@
...
10
1 + 6c h2
2+
2c 2
h
3
1 + 6c h2
...
1 + 6c h2
...
2+
2c 2
h
3
1 + 6c h2
1 + 6c h2
0 1
f
CB
C
B C
C B u2 C
Bf C
CB
C
B C
CB . C
B C
C B .. C = h B ... C .
CB
C
B C
CB
C
B C
C Bu
C
Bf C
N
1
A@
A
@ A
2c 2
2+ 3h
uN
f
u1
We can also find a basis of 2N + 2 cubic functions for VN1 consisting of functions with
small support. This basis consists of the N functions wi0 and of the N + 2 functions wi1
439
1
(x
h3
wi0 (x) =
1
((i + 1)h
h3
(i
x)2 (2x
(2i
2x),
1)h),
(i
1)h x ih,
ih x (i + 1)h,
1
(ih
h2
x)(x
(i
1)h)2 ,
x)2 (x
ih),
(i
1)h x ih,
and
wj1 (x) =
1
((i + 1)h
h2
ih x (i + 1)h,
N
X
v(ih)wi0 (x) +
i=1
N
+1
X
j=0
v 0 jih)wj1 (x),
x 2 [0, 1].
we find that if c = 0, the matrix A of the system () is tridiagonal by blocks, where the blocks
are 2 2, 2 1, or 1 2 matrices, and with single entries in the top left and bottom right
corner. A dierent order of the basis vectors would mess up the tridiagonal block structure
of A. We leave the details as an exercise.
Let us now take a quick look at a two-dimensional problem, the bending of an elastic
membrane.
440
wi0
w01
0
wj1
ih
jh
1
wN
+1
15.2
Consider an elastic membrane attached to a round contour whose projection on the (x1 , x2 )plane is the boundary of an open, connected, bounded region in the (x1 , x2 )-plane, as
illustrated in Figure 15.5. In other words, we view the membrane as a surface consisting of
the set of points (x, z) given by an equation of the form
z = u(x),
with x = (x1 , x2 ) 2 , where u : ! R is some sufficiently regular function, and we think
of u(x) as the vertical displacement of this membrane.
We assume that this membrane is under the action of a vertical force f (x)dx per surface
element in the horizontal plane (where is the tension of the membrane). The problem is
441
f (x)dx
g(y)
u(x)
x2
dx
x1
x2
x2 ,
where g : ! R represents the height of the contour of the membrane. We are looking for
a function u in C 2 () \ C 1 (). The operator is the Laplacian, and it is given by
u(x) =
@ 2u
@ 2u
(x)
+
(x).
@x21
@x22
This is an example of a boundary problem, since the solution u of the PDE must satisfy the
condition u(x) = g(x) on the boundary of the domain . The above equation is known as
Poissons equation, and when f = 0 as Laplaces equation.
It can be proved that if the data f, g and
a unique solution.
To get a weak formulation of the problem, first we have to make the boundary condition
homogeneous, which means that g(x) = 0 on . It turns out that g can be extended to the
whole of as some sufficiently smooth function b
h, so we can look for a solution of the form
u b
h, but for simplicity, let us assume that the contour of lies in a plane parallel to the
442
(where nRdenotes the outward pointing unit normal to the surface). Because v = 0 on , the
integral
drops out, and we get an equation of the form
a(u, v) = fe(v) for all v 2 V,
Z
@u @v
@u @v
a(u, v) =
+
dx
@x1 @x1 @x2 @x2
fe(v) =
f vdx.
We get the same equation as in section 15.2, but over a set of functions defined on a
two-dimensional domain. As before, we can choose a finite-dimensional subspace Va of V
and consider the discrete problem with respect to Va . Again, if we pick a basis (w1 , . . . , wn )
of Va , a vector u = u1 w1 + + un wn is a solution of the Weak Formulation of our problem
i u = (u1 , . . . , un ) is a solution of the linear system
Au = b,
with A = (a(wi , wj )) and b = (fe(wj )). However, the integrals that give the entries in A and
b are much more complicated.
An approach to deal with this problem is the method of finite elements. The idea is
to also discretize the boundary curve . If we assume that is a polygonal line, then we
can triangulate the domain , and then we consider spaces of functions which are piecewise
defined on the triangles of the triangulation of . The simplest functions are piecewise affine
and look like tents erected above groups of triangles. Again, we can define base functions
with small support, so that the matrix A is tridiagonal by blocks.
The finite element method is a vast subject and it is presented in many books of various
degrees of difficulty and obscurity. Let us simply state three important requirements of the
finite element method:
443
1. Good triangulations must be found. This in itself is a vast research topic. Delaunay
triangulations are good candidates.
2. Good spaces of functions must be found; typically piecewise polynomials and splines.
3. Good bases consisting of functions will small support must be found, so that integrals
can be easily computed and sparse banded matrices arise.
We now consider boundary problems where the solution varies with time.
15.3
Consider a homogeneous string (or rope) of constant cross-section, of length L, and stretched
(in a vertical plane) between its two ends which are assumed to be fixed and located along
the x-axis at x = 0 and at x = L.
@ 2u
(x, t) = f (x, t),
@x2
p
with c = /, where is the linear density of the string, known as the one-dimensional
wave equation.
444
Furthermore, the initial shape of the string is known at t = 0, as well as the distribution
of the initial velocities along the string; in other words, there are two functions ui,0 and ui,1
such that
u(x, 0) = ui,0 (x),
@u
(x, 0) = ui,1 (x),
@t
0 x L,
0 x L.
For example, if the string is simply released from its given starting position, we have ui,1 = 0.
Lastly, because the ends of the string are fixed, we must have
u(0, t) = u(L, t) = 0,
0.
@ 2u
(x, t) = 0,
@x2
Let us try our trick of multiplying by a test function v depending only on x, C 1 on [0, L],
and such that v(0) = v(L) = 0, and integrate by parts. We get the equation
Z L 2
Z L 2
@ u
@ u
2
(x, t)v(x)dx c
(x, t)v(x)dx = 0.
2
2
0 @t
0 @x
For the first term, we get
Z L 2
Z L 2
@ u
@
(x, t)v(x)dx =
[u(x, t)v(x)]dx
2
2
0 @t
0 @t
Z
d2 L
= 2
u(x, t)v(x)dx
dt 0
d2
= 2 hu, vi,
dt
445
where hu, vi is the inner product in L2 ([0, L]). The fact that it is legitimate to move @ 2 /@t2
outside of the integral needs to be justified rigorously, but we wont do it here.
For the second term, we get
Z L 2
@ u
(x, t)v(x)dx =
2
0 @x
@u
(x, t)v(x)
@x
x=L
+
x=0
L
0
@u
dv
(x, t) (x)dx,
@x
dx
for all v 2 V
and all t
0.
where, for every t 2 R+ , the functions u(x, t) and (v, t) belong to V . Actually, we have to
replace V by the subspace of the Sobolev space H01 (0, L) consisting of the functions such
that v(0) = v(L) = 0. Then, the weak formulation (variational formulation) of our problem
is this:
Find a function u 2 V such that
d2
hu, vi + a(u, v) = 0,
for all v 2 V and all t 0
dt2
u(x, 0) = ui,0 (x), 0 x L (intitial condition),
@u
(x, 0) = ui,1 (x), 0 x L (intitial condition).
@t
kuk2H 1
0
for all v 2 V
(Poincares inequality), which shows that a is positive definite on V . The above method is
known as the method of Rayleigh-Ritz .
A study of the above equation requires some sophisticated tools of analysis which go
far beyond the scope of these notes. Let us just say that there is a countable sequence of
solutions with separated variables of the form
kx
kct
kx
kct
(1)
(2)
uk = sin
cos
, uk = sin
sin
, k 2 N+ ,
L
L
L
L
446
called modes (or normal modes). Complete solutions of the problem are series obtained by
combining the normal modes, and they are of the form
u(x, t) =
1
X
sin
k=1
kx
L
kct
kct
Ak cos
+ Bk sin
,
L
L
where the coefficients Ak , Bk are determined from the Fourier series of ui,0 and ui,1 .
We now consider discrete approximations of our problem. As before, consider a finite
dimensional subspace Va of V and assume that we have approximations ua,0 and ua,1 of ui,0
and ui,1 . If we pick a basis (w1 , . . . , wn ) of Va , then we can write our unknown function
u(x, t) as
u(x, t) = u1 (t)w1 + + un (t)wn ,
where u1 , . . . , un are functions of t. Then, if we write u = (u1 , . . . , un ), the discrete version
of our problem is
A
d2 u
+ Ku = 0,
dt2
u(x, 0) = ua,0 (x),
@u
(x, 0) = ua,1 (x),
@t
0 x L,
0 x L,
where A = (hwi , wj i) and K = (a(wi , wj )) are two symmetric matrices, called the mass
matrix and the stiness matrix , respectively. In fact, because a and the inner product
h , i are positive definite, these matrices are also positive definite.
We have made some progress since we now have a system of ODEs, and we can solve it
by analogy with the scalar case. So, we look for solutions of the form U cos !t (or U sin !t),
where U is an n-dimensional vector. We find that we should have
(K
! 2 A)U cos !t = 0,
such that
KU = AU,
a problem known as a generalized eigenvalue problem, since the ordinary eigenvalue problem
for K is
KU = U.
447
i = 1, . . . , n,
are linearly independent and are solutions of the generalized eigenvalue problem; that is,
KUi = !i2 AUi ,
i = 1, . . . , n.
More is true. Because the vectors Y1 , . . . , Yn are orthonormal, and because Yi = L> Ui ,
from
(Yi )> Yj = ij ,
we get
(Ui )> LL> Uj =
ij ,
ij ,
1 i, j n,
1 i, j n.
448
n
X
Uik wk .
k=1
ij ,
which means that the functions (U 1 , . . . , U n ) form an orthonormal basis of Va for the inner
product a. The functions U i 2 Va are called modes (or modal vectors).
As a final step, let us look again for a solution of our discrete weak formulation of the
problem, this time expressing the unknown solution u(x, t) over the modal basis (U 1 , . . . , U n ),
say
n
X
u=
u
ej (t)U j ,
j=1
where each u
ej is a function of t. Because
u=
n
X
j=1
u
ej (t)U j =
n
X
j=1
u
ej (t)
n
X
Ujk wk
k=1
Pn
j=1
u=
n
X
yields
n
X
j=1
k=1
j=1
u
ej (t)Ujk
wk ,
u
ej Uj ,
n
n
X
X
u
ej (t)Ujk for k = 1, . . . , n, we see that
j=1
the equation
j = 1, . . . , n,
d2 u
+ Ku = 0
dt2
[(e
uj )00 + !j2 u
ej ]AUj = 0.
Since A is invertible and since (U1 , . . . , Un ) are linearly independent, the vectors (AU1 ,
. . . , AUn ) are linearly independent, and consequently we get the system of n ODEs
(e
uj )00 + !j2 u
ej = 0,
1 j n.
449
n
X
j=1
0 x L,
0 x L,
by expressing ua,0 and ua,1 on the modal basis (U 1 , . . . , U n ). Furthermore, the modal functions (U 1 , . . . , U n ) form an orthonormal basis of Va for the inner product a.
If we use the vector space VN0 of piecewise affine functions, we find that the matrices A
and K are familar! Indeed,
0
1
2
1 0
0
0
B 1 2
1 0
0C
C
1B
B .. . . . . . .
C
.
.
A= B .
.
.
. . C
C
hB
@0
0
1 2
1A
0
0
0
1 2
and
1
4 1 0 0 0
B1 4 1 0 0 C
C
hB
B .. . . . . . . .. C
K = B.
.
.
. .C .
C
6B
@0 0 1 4 1 A
0 0 0 1 4
To conclude this section, let us discuss briefly the wave equation for an elastic membrane,
as described in Section 15.2. This time, we look for a function u : R+ ! R satisfying
the following conditions:
1 @ 2u
(x, t)
c2 @t2
x 2 , t > 0,
450
Z
@u @v
@u @v
a(u, v) =
+
dx,
@x1 @x1 @x2 @x2
and
hu, vi =
uvdx.
d2 u
+ Ku = 0,
dt2
u(x, 0) = ua,0 (x),
@u
(x, 0) = ua,1 (x),
@t
x2 ,
x2 ,
where A = (hwi , wj i) and K = (a(wi , wj )) are two symmetric positive definite matrices.
In principle, the problem is solved, but, it may be difficult to find good spaces Va , good
triangulations of , and good bases of Va , to be able to compute the matrices A and K, and
to ensure that they are sparse.
Chapter 16
Singular Value Decomposition and
Polar Form
16.1
In this section, we assume that we are dealing with real Euclidean spaces. Let f : E ! E
be any linear map. In general, it may not be possible to diagonalize f . We show that every
linear map can be diagonalized if we are willing to use two orthonormal bases. This is the
celebrated singular value decomposition (SVD). A close cousin of the SVD is the polar form
of a linear map, which shows how a linear map can be decomposed into its purely rotational
component (perhaps with a flip) and its purely stretching part.
The key observation is that f f is self-adjoint, since
h(f f )(u), vi = hf (u), f (v)i = hu, (f f )(v)i.
Similarly, f
f is self-adjoint.
The fact that f f and f f are self-adjoint is very important, because it implies that
f
f and f f can be diagonalized and that they have real eigenvalues. In fact, these
eigenvalues are all nonnegative. Indeed, if u is an eigenvector of f f for the eigenvalue ,
then
h(f f )(u), ui = hf (u), f (u)i
and
and thus
452
The above considerations also apply to any linear map f : E ! F betwen two Euclidean
spaces (E, h , i1 ) and (F, h , i2 ). Recall that the adjoint f : F ! E of f is the unique
linear map f such that
hf (u), vi2 = hu, f (v)i1 ,
Then, f f and f f are self-adjoint (the proof is the same as in the previous case), and
the eigenvalues of f f and f f are nonnegative. If is an eigenvalue of f f and u (6= 0)
is a corresponding eigenvector, we have
h(f f )(u), ui1 = hf (u), f (u)i2 ,
and also
h(f f )(u), ui1 = hu, ui1 ,
so
hu, ui1 , = hf (u), f (u)i2 ,
which implies that
0. A similar proof applies to f f . The situation is even better,
since we will show shortly that f f and f f have the same nonzero eigenvalues.
Remark: Given any two linear maps f : E ! F and g : F ! E, where dim(E) = n and
dim(F ) = m, it can be shown that
m
det( In
g f) =
det( Im
g),
453
The wonderful thing about the singular value decomposition is that there exist two orthonormal bases (u1 , . . . , un ) and (v1 , . . . , vm ) such that, with respect to these bases, f is
a diagonal matrix consisting of the singular values of f , or 0. Thus, in some sense, f can
always be diagonalized with respect to two orthonormal bases. The SVD is also a useful tool
for solving overdetermined linear systems in the least squares sense and for data analysis, as
we show later on.
First, we show some useful relationships between the kernels and the images of f , f ,
f
f , and f f . Recall that if f : E ! F is a linear map, the image Im f of f is the
subspace f (E) of F , and the rank of f is the dimension dim(Im f ) of its image. Also recall
that (Theorem 4.11))
dim (Ker f ) + dim (Im f ) = dim (E),
Proof. To simplify the notation, we will denote the inner products on E and F by the same
symbol h , i (to avoid subscripts). If f (u) = 0, then (f f )(u) = f (f (u)) = f (0) = 0,
and so Ker f Ker (f f ). By definition of f , we have
hf (u), f (u)i = h(f f )(u), ui
for all u 2 E. If (f f )(u) = 0, since h , i is positive definite, we must have f (u) = 0,
and so Ker (f f ) Ker f . Therefore,
Ker f = Ker (f f ).
The proof that Ker f = Ker (f
f ) is similar.
By definition of f , we have
hf (u), vi = hu, f (v)i for all u 2 E and all v 2 F .
()
454
Let us explain why Ker f = (Im f )? , the proof of the other equation being similar.
Because the inner product is positive definite, for every u 2 E, we have
u 2 Ker f
i f (u) = 0
i hf (u), vi = 0 for all v,
by () i hu, f (v)i = 0 for all v,
i u 2 (Im f )? .
Since
dim(Im f ) = n
dim(Ker f )
and
dim(Im f ) = n
dim((Im f )? ),
from
Ker f = (Im f )?
we also have
dim(Ker f ) = dim((Im f )? ),
from which we obtain
dim(Im f ) = dim(Im f ).
Since
dim(Ker (f f )) + dim(Im (f f )) = dim(E),
Ker (f f ) = Ker f and Ker f = (Im f )? , we get
dim((Im f )? ) + dim(Im (f f )) = dim(E).
Since
dim((Im f )? ) + dim(Im f ) = dim(E),
we deduce that
dim(Im f ) = dim(Im (f f )).
A similar proof shows that
dim(Im f ) = dim(Im (f
Consequently, f , f , f f , and f
f )).
455
We will now prove that every square matrix has an SVD. Stronger results can be obtained
if we first consider the polar form and then derive the SVD from it (there are uniqueness
properties of the polar decomposition). For our purposes, uniqueness results are not as
important so we content ourselves with existence results, whose proofs are simpler. Readers
interested in a more general treatment are referred to [44].
The early history of the singular value decomposition is described in a fascinating paper
by Stewart [101]. The SVD is due to Beltrami and Camille Jordan independently (1873,
1874). Gauss is the grandfather of all this, for his work on least squares (1809, 1823)
(but Legendre also published a paper on least squares!). Then come Sylvester, Schmidt, and
Hermann Weyl. Sylvesters work was apparently opaque. He gave a computational method
to find an SVD. Schmidts work really has to do with integral equations and symmetric and
asymmetric kernels (1907). Weyls work has to do with perturbation theory (1912). Autonne
came up with the polar decomposition (1902, 1915). Eckart and Young extended SVD to
rectangular matrices (1936, 1939).
Theorem 16.2. (Singular value decomposition) For every real n n matrix A there are two
orthogonal matrices U and V and a diagonal matrix D such that A = V DU > , where D is of
the form
0
1
...
1
B
C
2 ...
B
C
D = B .. .. . .
.. C ,
@. .
. .A
...
where 1 , . . . , r are the singular values of f , i.e., the (positive) square roots of the nonzero
eigenvalues of A> A and A A> , and r+1 = = n = 0. The columns of U are eigenvectors
of A> A, and the columns of V are eigenvectors of A A> .
Proof. Since A> A is a symmetric matrix, in fact, a positive semidefinite matrix, there exists
an orthogonal matrix U such that
A> A = U D2 U > ,
with D = diag( 1 , . . . , r , 0, . . . , 0), where 12 , . . . , r2 are the nonzero eigenvalues of A> A,
and where r is the rank of A; that is, 1 , . . . , r are the singular values of A. It follows that
U > A> AU = (AU )> AU = D2 ,
and if we let fj be the jth column of AU for j = 1, . . . , n, then we have
hfi , fj i =
2
i ij ,
1 i, j r
and
r + 1 j n.
fj = 0,
If we define (v1 , . . . , vr ) by
vj =
1
j
fj ,
1 j r,
456
then we have
hvi , vj i =
ij ,
1 i, j r,
j hvi , vj i
j i,j ,
1 i n, 1 j r
AA> = V D2 V > ,
which shows that A> A and AA> have the same eigenvalues, that the columns of U are
eigenvectors of A> A, and that the columns of V are eigenvectors of AA> .
Theorem 16.2 suggests the following definition.
Definition 16.3. A triple (U, D, V ) such that A = V D U > , where U and V are orthogonal
and D is a diagonal matrix whose entries are nonnegative (it is positive semidefinite) is called
a singular value decomposition (SVD) of A.
The proof of Theorem 16.2 shows that there are two orthonormal bases (u1 , . . . , un ) and
(v1 , . . . , vn ), where (u1 , . . . , un ) are eigenvectors of A> A and (v1 , . . . , vn ) are eigenvectors
of AA> . Furthermore, (u1 , . . . , ur ) is an orthonormal basis of Im A> , (ur+1 , . . . , un ) is an
orthonormal basis of Ker A, (v1 , . . . , vr ) is an orthonormal basis of Im A, and (vr+1 , . . . , vn )
is an orthonormal basis of Ker A> .
Using a remark made in Chapter 3, if we denote the columns of U by u1 , . . . , un and the
columns of V by v1 , . . . , vn , then we can write
A = V D U> =
>
1 v 1 u1
+ +
>
r v r ur .
As a consequence, if r is a lot smaller than n (we write r n), we see that A can be
reconstructed from U and V using a much smaller number of elements. This idea will be
used to provide low-rank approximations of a matrix. The idea is to keep only the k top
singular values for some suitable k r for which k+1 , . . . r are very small.
Remarks:
457
0
1
B
1 1
A= B
2 @1
1
1
1
1
1
1
1
1
1
1
1
1C
C
1A
1
is both orthogonal and symmetric, and A = RS with R = A and S = I, which implies that
some of the eigenvalues of A are negative.
Remark: In the complex case, the polar decomposition states that for every complex n n
matrix A, there is some unitary matrix U and some positive semidefinite Hermitian matrix
H such that
A = U H.
458
1
0
0C
C
0C
C
.. C
.C
C
2 0C
C
1 2A
0 1
0
0
0
..
.
the positive square roots of the eigenvalues of the matrix B = A> A with
0
1
1 2 0 0 ... 0 0
B2 5 2 0 . . . 0 0C
B
C
B0 2 5 2 . . . 0 0C
B
C
B
C
B = B ... ... . . . . . . . . . ... ... C
B
C
B0 0 . . . 2 5 2 0C
B
C
@0 0 . . . 0 2 5 2A
0 0 ... 0 0 2 5
n,
which are
= cond2 (A)
2n 1 .
2
n of A are not unrelated, since
2
1
2
n
1, . . . ,
= det(A A) = | det(A)|2
n|
= | det(A)|,
| 1| |
n|
so we have
1
n.
and
for
k,
k = 1, . . . , n
1.
A proof of Theorem 16.3 can be found in Horn and Johnson [58], Chapter 3, Section
3.3, where more inequalities relating the eigenvalues and the singular values of a matrix are
given.
Theorem 16.2 can be easily extended to rectangular m n matrices, as we show in the
next section (for various versions of the SVD for rectangular matrices, see Strang [105] Golub
and Van Loan [49], Demmel [27], and Trefethen and Bau [110]).
16.2
460
Proof. As in the proof of Theorem 16.2, since A> A is symmetric positive semidefinite, there
exists an n n orthogonal matrix U such that
A> A = U 2 U > ,
2
i ij ,
1 i, j r
and
r + 1 j n.
fj = 0,
If we define (v1 , . . . , vr ) by
1
j
fj ,
1 j r,
hvi , vj i =
ij ,
1 i, j r,
vj =
then we have
j hvi , vj i
1 i m, 1 j r
j i,j ,
B
B.
C
B
. . . nC
D=
=B
C,
..
0m n
B
C
B 0 . ... 0 C
B.
.. . .
.C
B ..
. .. C
.
@
A
..
0 . ... 0
else if n
m, then we let
B
B
D = B ..
@.
..
.
...
...
...
..
.
...
1
0 ... 0
0 . . . 0C
C
C.
..
0 . 0A
0 ... 0
2
. . . , 0)U >
r , 0,
| {z }
n r
and
AA> = V DD> V > = V diag( 12 , . . . ,
2
. . . , 0)V > ,
r , 0,
| {z }
m r
which shows that A> A and AA> have the same nonzero eigenvalues, that the columns of U
are eigenvectors of A> A, and that the columns of V are eigenvectors of AA> .
A triple (U, D, V ) such that A = V D U > is called a singular value decomposition (SVD)
of A.
Even though the matrix D is an m n rectangular matrix, since its only nonzero entries
are on the descending diagonal, we still say that D is a diagonal matrix.
If we view A as the representation of a linear map f : E ! F , where dim(E) = n and
dim(F ) = m, the proof of Theorem 16.4 shows that there are two orthonormal bases (u1 , . . .,
un ) and (v1 , . . . , vm ) for E and F , respectively, where (u1 , . . . , un ) are eigenvectors of f f
and (v1 , . . . , vm ) are eigenvectors of f f . Furthermore, (u1 , . . . , ur ) is an orthonormal basis
of Im f , (ur+1 , . . . , un ) is an orthonormal basis of Ker f , (v1 , . . . , vr ) is an orthonormal basis
of Im f , and (vr+1 , . . . , vm ) is an orthonormal basis of Ker f .
The SVD of matrices can be used to define the pseudo-inverse of a rectangular matrix; we
will do so in Chapter 17. The reader may also consult Strang [105], Demmel [27], Trefethen
and Bau [110], and Golub and Van Loan [49].
One of the spectral theorems states that a symmetric matrix can be diagonalized by
an orthogonal matrix. There are several numerical methods to compute the eigenvalues
of a symmetric matrix A. One method consists in tridiagonalizing A, which means that
there exists some orthogonal matrix P and some symmetric tridiagonal matrix T such that
A = P T P > . In fact, this can be done using Householder transformations. It is then possible
to compute the eigenvalues of T using a bisection method based on Sturm sequences. One can
also use Jacobis method. For details, see Golub and Van Loan [49], Chapter 8, Demmel [27],
Trefethen and Bau [110], Lecture 26, or Ciarlet [24]. Computing the SVD of a matrix A is
more involved. Most methods begin by finding orthogonal matrices U and V and a bidiagonal
matrix B such that A = V BU > . This can also be done using Householder transformations.
Observe that B > B is symmetric tridiagonal. Thus, in principle, the previous method to
diagonalize a symmetric tridiagonal matrix can be applied. However, it is unwise to compute
462
B > B explicitly, and more subtle methods are used for this last step. Again, see Golub and
Van Loan [49], Chapter 8, Demmel [27], and Trefethen and Bau [110], Lecture 31.
The polar form has applications in continuum mechanics. Indeed, in any deformation it
is important to separate stretching from rotation. This is exactly what QS achieves. The
orthogonal part Q corresponds to rotation (perhaps with an additional reflection), and the
symmetric matrix S to stretching (or compression). The real eigenvalues 1 , . . . , r of S are
the stretch factors (or compression factors) (see Marsden and Hughes [76]). The fact that
S can be diagonalized by an orthogonal matrix corresponds to a natural choice of axes, the
principal axes.
The SVD has applications to data compression, for instance in image processing. The
idea is to retain only singular values whose magnitudes are significant enough. The SVD
can also be used to determine the rank of a matrix when other methods such as Gaussian
elimination produce very small pivots. One of the main applications of the SVD is the
computation of the pseudo-inverse. Pseudo-inverses are the key to the solution of various
optimization problems, in particular the method of least squares. This topic is discussed in
the next chapter (Chapter 17). Applications of the material of this chapter can be found
in Strang [105, 104]; Ciarlet [24]; Golub and Van Loan [49], which contains many other
references; Demmel [27]; and Trefethen and Bau [110].
16.3
The singular values of a matrix can be used to define various norms on matrices which
have found recent applications in quantum information theory and in spectral graph theory.
Following Horn and Johnson [58] (Section 3.4) we can make the following definitions:
Definition 16.5. For any matrix A 2 Mm,n (C), let q = min{m, n}, and if
the singular values of A, for any k with 1 k q, let
Nk (A) =
+ +
are
k,
p
1
+ +
p 1/p
,
k)
called the Ky Fan p-k-norm of A. When k = q, Nq;p is also called the Schatten p-norm.
Observe that when $k = 1$, $N_1(A) = \sigma_1$, and the Ky Fan norm $N_1$ is simply the spectral
norm from Chapter 7, which is the subordinate matrix norm associated with the Euclidean
norm. When $k = q$, the Ky Fan norm $N_q$ is given by
$$N_q(A) = \sigma_1 + \cdots + \sigma_q = \operatorname{tr}\big((A^* A)^{1/2}\big)$$
and is called the trace norm or nuclear norm. When $p = 2$ and $k = q$, the Ky Fan norm $N_{q;2}$
is given by
$$N_{q;2}(A) = (\sigma_1^2 + \cdots + \sigma_q^2)^{1/2} = \sqrt{\operatorname{tr}(A^* A)} = \|A\|_F,$$
which is the Frobenius norm of $A$.
It can be shown that Nk and Nk;p are unitarily invariant norms, and that when m = n,
they are matrix norms; see Horn and Johnson [58] (Section 3.4, Corollary 3.4.4 and Problem
3).
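As a quick illustration (not part of the original text), here is a small numpy sketch that evaluates these norms directly from the singular values; the test matrix is an arbitrary choice.

```python
import numpy as np

def ky_fan_norm(A, k, p=1):
    """N_{k;p}(A): the l^p norm of the k largest singular values of A.
    p = 1 gives the Ky Fan k-norm; k = min(m, n) gives the Schatten p-norm."""
    s = np.linalg.svd(A, compute_uv=False)    # singular values, in descending order
    return float(np.sum(s[:k] ** p) ** (1.0 / p))

A = np.arange(12.0).reshape(3, 4)
q = min(A.shape)
print(ky_fan_norm(A, 1))          # spectral norm,   equals np.linalg.norm(A, 2)
print(ky_fan_norm(A, q))          # nuclear norm,    equals np.linalg.norm(A, 'nuc')
print(ky_fan_norm(A, q, p=2))     # Frobenius norm,  equals np.linalg.norm(A, 'fro')
```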
16.4
Summary
The main concepts and results of this chapter are listed below:
For any linear map $f \colon E \to E$ on a Euclidean space $E$, the maps $f^* \circ f$ and $f \circ f^*$ are
self-adjoint and positive semidefinite.
The singular values of a linear map.
Positive semidefinite and positive definite self-adjoint maps.
Relationships between $\operatorname{Im} f$, $\operatorname{Ker} f$, $\operatorname{Im} f^*$, and $\operatorname{Ker} f^*$.
The singular value decomposition theorem for square matrices (Theorem 16.2).
The SVD of a matrix.
The polar decomposition of a matrix.
The Weyl inequalities.
The singular value decomposition theorem for $m \times n$ matrices (Theorem 16.4).
Ky Fan k-norms, Ky Fan p-k-norms, Schatten p-norms.
Chapter 17
Applications of SVD and
Pseudo-Inverses
Of all the principles that can be proposed for this purpose, I think there is none more
general, more exact, or easier to apply than the one we have used in the preceding
research, and which consists in making the sum of the squares of the errors a minimum.
By this means a kind of equilibrium is established among the errors which, preventing
the extremes from prevailing, is well suited to reveal the state of the system that comes
closest to the truth.
--Legendre, 1805, Nouvelles Methodes pour la determination des Orbites des Cometes
17.1 Least Squares Problems and the Pseudo-Inverse
This chapter presents several applications of SVD. The first one is the pseudo-inverse, which
plays a crucial role in solving linear systems by the method of least squares. The second
application is data compression. The third application is principal component analysis (PCA),
whose purpose is to identify patterns in data and understand the variance-covariance structure
of the data. The fourth application is the best affine approximation of a set of data, a
problem closely related to PCA.
The method of least squares is a way of solving an overdetermined system of linear
equations
Ax = b,
i.e., a system in which $A$ is a rectangular $m \times n$ matrix with more equations than unknowns
(when $m > n$).
(when m > n). Historically, the method of least squares was used by Gauss and Legendre
to solve problems in astronomy and geodesy. The method was first published by Legendre
in 1805 in a paper on methods for determining the orbits of comets. However, Gauss had
already used the method of least squares as early as 1801 to determine the orbit of the asteroid
Ceres, and he published a paper about it in 1810 after the discovery of the asteroid Pallas.
Incidentally, it is in that same paper that Gaussian elimination using pivots is introduced.
The reason why more equations than unknowns arise in such problems is that repeated
measurements are taken to minimize errors. This produces an overdetermined and often
inconsistent system of linear equations. For example, Gauss solved a system of eleven equations in six unknowns to determine the orbit of the asteroid Pallas. As a concrete illustration,
suppose that we observe the motion of a small object, assimilated to a point, in the plane.
From our observations, we suspect that this point moves along a straight line, say of equation
$y = dx + c$. Suppose that we observed the moving point at three different locations $(x_1, y_1)$,
(x2 , y2 ), and (x3 , y3 ). Then we should have
c + dx1 = y1 ,
c + dx2 = y2 ,
c + dx3 = y3 .
If there were no errors in our measurements, these equations would be compatible, and c
and d would be determined by only two of the equations. However, in the presence of errors,
the system may be inconsistent. Yet we would like to find c and d!
The idea of the method of least squares is to determine (c, d) such that it minimizes the
sum of the squares of the errors, namely,
$$(c + dx_1 - y_1)^2 + (c + dx_2 - y_2)^2 + (c + dx_3 - y_3)^2.$$
In general, for an overdetermined $m \times n$ system $Ax = b$, what Gauss and Legendre discovered
is that there are solutions $x$ minimizing
$$\|Ax - b\|_2^2$$
(where $\|u\|_2^2 = u_1^2 + \cdots + u_n^2$, the square of the Euclidean norm of the vector $u = (u_1, \ldots, u_n)$),
and that these solutions are given by the square $n \times n$ system
$$A^\top A x = A^\top b,$$
called the normal equations. Furthermore, when the columns of A are linearly independent,
it turns out that A> A is invertible, and so x is unique and given by
$$x = (A^\top A)^{-1} A^\top b.$$
Note that A> A is a symmetric matrix, one of the nice features of the normal equations of a
least squares problem. For instance, the normal equations for the above problem are
$$\begin{pmatrix} 3 & x_1 + x_2 + x_3 \\ x_1 + x_2 + x_3 & x_1^2 + x_2^2 + x_3^2 \end{pmatrix}
\begin{pmatrix} c \\ d \end{pmatrix}
=
\begin{pmatrix} y_1 + y_2 + y_3 \\ x_1 y_1 + x_2 y_2 + x_3 y_3 \end{pmatrix}.$$
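A minimal numpy sketch of this line-fitting example (the observation values below are made up for illustration), solving the normal equations directly and checking against a library least squares routine:

```python
import numpy as np

# made-up observations of the moving point
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.1, 2.9, 5.2])

A = np.column_stack([np.ones_like(x), x])     # rows (1, x_i), unknowns (c, d)

# normal equations  A^T A (c, d)^T = A^T y
c, d = np.linalg.solve(A.T @ A, A.T @ y)

# same answer from numpy's least squares solver
(c2, d2), *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose([c, d], [c2, d2])
```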
In fact, given any real m n matrix A, there is always a unique x+ of minimum norm
that minimizes kAx bk22 , even when the columns of A are linearly dependent. How do we
prove this, and how do we find x+ ?
The answer is given by Theorem 17.1, which asserts that every linear system $Ax = b$ has a
unique least squares solution $x^+$ of smallest norm. Geometry offers a nice argument. Interpret
$b$ as a point of $\mathbb{R}^m$ and let $U = \operatorname{Im} A$ be the column space of $A$, so that minimizing
$\|Ax - b\|_2^2$ amounts to finding the point of $U$ closest to $b$. Since $\mathbb{R}^m = U \oplus U^\perp$, every
$b \in \mathbb{R}^m$ can be written uniquely as $b = u + v$,
with $u \in U$ and $v \in U^\perp$, and the map $p_U \colon \mathbb{R}^m \to U$ sending $b$ to $u$ is the orthogonal projection
onto $U$. If we let $p = p_U(b) \in U$, then for any point $y \in U$, the vectors
$\overrightarrow{py} = y - p \in U$ and $\overrightarrow{bp} = p - b \in U^\perp$ are orthogonal, which implies that
$$\|\overrightarrow{by}\|_2^2 = \|\overrightarrow{bp}\|_2^2 + \|\overrightarrow{py}\|_2^2,$$
where $\overrightarrow{by} = y - b$. Thus, $p$ is indeed the unique point in $U$ that minimizes the distance from
$b$ to any point in $U$.
Thus, the problem has been reduced to proving that there is a unique x+ of minimum
norm such that Ax+ = p, with p = pU (b) 2 U , the orthogonal projection of b onto U . We
use the fact that
$$\mathbb{R}^n = \operatorname{Ker} A \oplus (\operatorname{Ker} A)^\perp.$$
Consequently, every x 2 Rn can be written uniquely as x = u + v, where u 2 Ker A and
v 2 (Ker A)? , and since u and v are orthogonal,
kxk22 = kuk22 + kvk22 .
Furthermore, since $u \in \operatorname{Ker} A$, we have $Au = 0$, and thus $Ax = p$ iff $Av = p$, which shows
that the solutions of Ax = p for which x has minimum norm must belong to (Ker A)? .
However, the restriction of $A$ to $(\operatorname{Ker} A)^\perp$ is injective. This is because if $Av_1 = Av_2$, where
$v_1, v_2 \in (\operatorname{Ker} A)^\perp$, then $A(v_2 - v_1) = 0$, which implies $v_2 - v_1 \in \operatorname{Ker} A$, and since $v_1, v_2 \in
(\operatorname{Ker} A)^\perp$, we also have $v_2 - v_1 \in (\operatorname{Ker} A)^\perp$, and consequently, $v_2 - v_1 = 0$. This shows that
there is a unique x+ of minimum norm such that Ax+ = p, and that x+ must belong to
(Ker A)? . By our previous reasoning, x+ is the unique vector of minimum norm minimizing
kAx bk22 .
The proof also shows that $x$ minimizes $\|Ax - b\|_2^2$ iff $\overrightarrow{pb} = b - Ax$ is orthogonal to $U$,
which can be expressed by saying that $b - Ax$ is orthogonal to every column of $A$. However,
this is equivalent to
$$A^\top (b - Ax) = 0, \quad \text{i.e.,} \quad A^\top A x = A^\top b.$$
Finally, it turns out that the minimum norm least squares solution x+ can be found in terms
of the pseudo-inverse A+ of A, which is itself obtained from any SVD of A.
Definition 17.1. Given any nonzero $m \times n$ matrix $A$ of rank $r$, if $A = V D U^\top$ is an SVD
of $A$ such that
$$D = \begin{pmatrix} \Lambda & 0_{r, n-r} \\ 0_{m-r, r} & 0_{m-r, n-r} \end{pmatrix},
\qquad \text{with } \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_r)$$
an $r \times r$ diagonal matrix consisting of the nonzero singular values of $A$, then if we let $D^+$ be
the $n \times m$ matrix
$$D^+ = \begin{pmatrix} \Lambda^{-1} & 0_{r, m-r} \\ 0_{n-r, r} & 0_{n-r, m-r} \end{pmatrix},
\qquad \text{with } \Lambda^{-1} = \operatorname{diag}(1/\lambda_1, \ldots, 1/\lambda_r),$$
the pseudo-inverse of $A$ is defined by
$$A^+ = U D^+ V^\top.$$
If A = 0m,n is the zero matrix, we set A+ = 0n,m . Observe that D+ is obtained from D by
inverting the nonzero diagonal entries of D, leaving all zeros in place, and then transposing
the matrix. The pseudo-inverse of a matrix is also known as the Moore-Penrose pseudo-inverse.
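A hedged numpy sketch of Definition 17.1 (note that numpy's svd returns the factors in the order $V$, singular values, $U^\top$ relative to the convention $A = V D U^\top$ used here; the rank tolerance is an arbitrary choice):

```python
import numpy as np

def pinv_from_svd(A, tol=1e-12):
    """Moore-Penrose pseudo-inverse built from an SVD, following Definition 17.1:
    invert the nonzero singular values, keep the zeros in place, and transpose."""
    V, s, Ut = np.linalg.svd(A, full_matrices=True)   # A = V @ D @ Ut
    D_plus = np.zeros((A.shape[1], A.shape[0]))
    r = int(np.sum(s > tol * s.max())) if s.size else 0   # numerical rank
    D_plus[:r, :r] = np.diag(1.0 / s[:r])
    return Ut.T @ D_plus @ V.T                        # A^+ = U D^+ V^T

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
assert np.allclose(pinv_from_svd(A), np.linalg.pinv(A))
```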
Actually, it seems that A+ depends on the specific choice of U and V in an SVD (U, D, V )
for A, but the next theorem shows that this is not so.
Theorem 17.2. The least squares solution of smallest norm of the linear system Ax = b,
where A is an m n matrix, is given by
x+ = A+ b = U D+ V > b.
Proof. First, assume that $A$ is a (rectangular) diagonal matrix $D$, as above. Then, since $x$
minimizes $\|Dx - b\|_2^2$ iff $Dx$ is the projection of $b$ onto the image subspace $F$ of $D$, it is fairly
obvious that $x^+ = D^+ b$. Otherwise, we can write
A = V DU > ,
where $U$ and $V$ are orthogonal. However, since $V$ is an isometry,
$$\|Ax - b\|_2 = \|V D U^\top x - b\|_2 = \|D U^\top x - V^\top b\|_2.$$
Letting $y = U^\top x$, we have $\|x\|_2 = \|y\|_2$, since $U$ is an isometry, and since $U$ is surjective,
$\|Ax - b\|_2$ is minimized iff $\|Dy - V^\top b\|_2$ is minimized, and we have shown that the least
squares solution of smallest norm is
$$y^+ = D^+ V^\top b.$$
Since y = U > x, with kxk2 = kyk2 , we get
x+ = U D+ V > b = A+ b.
Thus, the pseudo-inverse provides the optimal solution to the least squares problem.
By Proposition 17.2 and Theorem 17.1, A+ b is uniquely defined by every b, and thus A+
depends only on A.
When $A$ has full rank, the pseudo-inverse $A^+$ can be expressed as $A^+ = (A^\top A)^{-1} A^\top$
when $m \ge n$, and as $A^+ = A^\top (A A^\top)^{-1}$ when $n \ge m$. In the first case ($m \ge n$), observe that
$A^+ A = I$, so $A^+$ is a left inverse of $A$; in the second case ($n \ge m$), we have $A A^+ = I$, so $A^+$
is a right inverse of $A$.
Proof. If $m \ge n$ and $A$ has full rank $n$, then
$$A = V \begin{pmatrix} \Lambda \\ 0_{m-n, n} \end{pmatrix} U^\top,$$
where $\Lambda$ is an $n \times n$ invertible diagonal matrix. We find that
$$A^\top A = U \begin{pmatrix} \Lambda & 0_{n, m-n} \end{pmatrix} V^\top V \begin{pmatrix} \Lambda \\ 0_{m-n, n} \end{pmatrix} U^\top = U \Lambda^2 U^\top,$$
which yields
$$(A^\top A)^{-1} A^\top = U \Lambda^{-2} U^\top U \begin{pmatrix} \Lambda & 0_{n, m-n} \end{pmatrix} V^\top
= U \begin{pmatrix} \Lambda^{-1} & 0_{n, m-n} \end{pmatrix} V^\top = A^+.$$
Therefore, if $m \ge n$ and $A$ has full rank, then $A^+ = (A^\top A)^{-1} A^\top$.
If $n \ge m$ and $A$ has full rank $m$, then
$$A = V \begin{pmatrix} \Lambda & 0_{m, n-m} \end{pmatrix} U^\top,$$
where $\Lambda$ is an $m \times m$ invertible diagonal matrix. We find that
$$A A^\top = V \begin{pmatrix} \Lambda & 0_{m, n-m} \end{pmatrix} U^\top U \begin{pmatrix} \Lambda \\ 0_{n-m, m} \end{pmatrix} V^\top = V \Lambda^2 V^\top,$$
which yields
$$A^\top (A A^\top)^{-1} = U \begin{pmatrix} \Lambda \\ 0_{n-m, m} \end{pmatrix} V^\top V \Lambda^{-2} V^\top
= U \begin{pmatrix} \Lambda^{-1} \\ 0_{n-m, m} \end{pmatrix} V^\top = A^+.$$
Therefore, if $n \ge m$ and $A$ has full rank, then $A^+ = A^\top (A A^\top)^{-1}$.
If $A = V D U^\top$ is an SVD of a matrix $A$ of rank $r$, then
$$A A^+ = V D U^\top U D^+ V^\top = V D D^+ V^\top = V \begin{pmatrix} I_r & 0 \\ 0 & 0_{m-r} \end{pmatrix} V^\top$$
and
$$A^+ A = U D^+ V^\top V D U^\top = U D^+ D U^\top = U \begin{pmatrix} I_r & 0 \\ 0 & 0_{n-r} \end{pmatrix} U^\top.$$
We immediately get
(AA+ )2 = AA+ ,
(A+ A)2 = A+ A,
so both AA+ and A+ A are orthogonal projections (since they are both symmetric). We
claim that AA+ is the orthogonal projection onto the range of A and A+ A is the orthogonal
projection onto Ker(A)? = Im(A> ), the range of A> .
Obviously, we have $\operatorname{range}(AA^+) \subseteq \operatorname{range}(A)$, and for any $y = Ax \in \operatorname{range}(A)$, since
$A A^+ A = A$, we have
$$A A^+ y = A A^+ A x = A x = y,$$
so the image of $AA^+$ is indeed the range of $A$. It is also clear that $\operatorname{Ker}(A) \subseteq \operatorname{Ker}(A^+ A)$, and
since $A A^+ A = A$, we also have $\operatorname{Ker}(A^+ A) \subseteq \operatorname{Ker}(A)$, and so
$$\operatorname{Ker}(A^+ A) = \operatorname{Ker}(A).$$
Since A+ A is Hermitian, range(A+ A) = range((A+ A)> ) = Ker(A+ A)? = Ker(A)? , as
claimed.
It will also be useful to see that $\operatorname{range}(A) = \operatorname{range}(AA^+)$ consists of all vectors $y \in \mathbb{R}^m$
such that
$$V^\top y = \begin{pmatrix} z \\ 0 \end{pmatrix},$$
with $z \in \mathbb{R}^r$. Indeed, if $y = Ax$, then
$$V^\top y = V^\top A x = V^\top V D U^\top x = D U^\top x
= \begin{pmatrix} \Lambda & 0 \\ 0 & 0 \end{pmatrix} U^\top x = \begin{pmatrix} z \\ 0 \end{pmatrix}$$
for some $z \in \mathbb{R}^r$. Conversely, if $V^\top y = \begin{pmatrix} z \\ 0 \end{pmatrix}$ for some $z \in \mathbb{R}^r$, then $y = V \begin{pmatrix} z \\ 0 \end{pmatrix}$, and
$$A A^+ y = V \begin{pmatrix} I_r & 0 \\ 0 & 0_{m-r} \end{pmatrix} V^\top V \begin{pmatrix} z \\ 0 \end{pmatrix}
= V \begin{pmatrix} I_r & 0 \\ 0 & 0_{m-r} \end{pmatrix} \begin{pmatrix} z \\ 0 \end{pmatrix}
= V \begin{pmatrix} z \\ 0 \end{pmatrix} = y,$$
which shows that $y$ belongs to the range of $A$.
Similarly, we claim that $\operatorname{range}(A^+ A) = \operatorname{Ker}(A)^\perp$ consists of all vectors $y \in \mathbb{R}^n$ such that
$$U^\top y = \begin{pmatrix} z \\ 0 \end{pmatrix},$$
with $z \in \mathbb{R}^r$. If $y = A^+ A u$, then
$$y = A^+ A u = U \begin{pmatrix} I_r & 0 \\ 0 & 0_{n-r} \end{pmatrix} U^\top u = U \begin{pmatrix} z \\ 0 \end{pmatrix}$$
for some $z \in \mathbb{R}^r$, and conversely, if $U^\top y = \begin{pmatrix} z \\ 0 \end{pmatrix}$, then $y = U \begin{pmatrix} z \\ 0 \end{pmatrix}$, and
$$A^+ A\, U \begin{pmatrix} z \\ 0 \end{pmatrix}
= U \begin{pmatrix} I_r & 0 \\ 0 & 0_{n-r} \end{pmatrix} U^\top U \begin{pmatrix} z \\ 0 \end{pmatrix}
= U \begin{pmatrix} I_r & 0 \\ 0 & 0_{n-r} \end{pmatrix} \begin{pmatrix} z \\ 0 \end{pmatrix}
= U \begin{pmatrix} z \\ 0 \end{pmatrix} = y,$$
which shows that $y \in \operatorname{range}(A^+ A)$.
For a (real) normal $n \times n$ matrix $A$, the pseudo-inverse can be read off directly from a block
diagonalization of $A$. Recall that $A$ can be block diagonalized as $A = U \Lambda U^\top$, where $U$ is
orthogonal and $\Lambda$ is a real block diagonal matrix whose $2 \times 2$ blocks are of the form
$$B_j = \begin{pmatrix} \lambda_j & -\mu_j \\ \mu_j & \lambda_j \end{pmatrix}$$
and whose remaining diagonal entries are real scalars. Writing
$$\Lambda = \begin{pmatrix} \Lambda_r & 0 \\ 0 & 0 \end{pmatrix},$$
where $\Lambda_r$ has rank $r$, the pseudo-inverse of $\Lambda$ is
$$\Lambda^+ = \begin{pmatrix} \Lambda_r^{-1} & 0 \\ 0 & 0 \end{pmatrix},$$
and $A^+ = U \Lambda^+ U^\top$.
Proof sketch. Assume that $B_1, \ldots, B_p$ are the $2 \times 2$ blocks and that $\lambda_{2p+1}, \ldots, \lambda_n$ are the
scalar entries. The numbers $\lambda_j \pm i\mu_j$ and the $\lambda_{2p+k}$ are the eigenvalues of $A$. Let
$\rho_{2j-1} = \rho_{2j} = \sqrt{\lambda_j^2 + \mu_j^2}$ for $j = 1, \ldots, p$, let $\rho_{2p+j} = |\lambda_{2p+j}|$ for $j = 1, \ldots, n - 2p$,
and assume that the blocks are ordered so that $\rho_1 \ge \rho_2 \ge \cdots \ge \rho_n$. Then it is easy to see that
$$A A^\top = U \Lambda U^\top U \Lambda^\top U^\top = U \Lambda \Lambda^\top U^\top,
\qquad \Lambda \Lambda^\top = \operatorname{diag}(\rho_1^2, \ldots, \rho_n^2),$$
so the singular values $\sigma_1 \ge \cdots \ge \sigma_n$ of $A$, the positive square roots of the eigenvalues of $A A^\top$,
are such that $\sigma_j = \rho_j$ for $1 \le j \le n$. From this one obtains an SVD of $A$ whose orthogonal
factors are built from $U$ and the normalized blocks $\rho_{2j}^{-1} B_j$, and applying Definition 17.1 to
this SVD yields $A^+ = U \Lambda^+ U^\top$, with $\Lambda^+$ as above.
Therefore, the pseudo-inverse of a normal matrix can be computed directly from any block
diagonalization of A, as claimed.
Another popular approach to least squares uses a QR decomposition of $A$ obtained from
Householder matrices $H_1, \ldots, H_n$. If $A$ has independent columns and $A = QR$ with
$Q = H_1 \cdots H_n$ orthogonal and $R$ upper triangular, then
$$\|Ax - b\|_2 = \|Rx - H_n \cdots H_1 b\|_2,$$
so the least squares problem amounts to solving
$$Rx = H_n \cdots H_1 b$$
in the least squares sense. Writing
$$R = \begin{pmatrix} R_1 \\ 0_{m-n} \end{pmatrix}
\quad \text{and} \quad
H_n \cdots H_1 b = \begin{pmatrix} c \\ d \end{pmatrix},$$
where $R_1$ is an invertible upper triangular $n \times n$ matrix, the least squares solution is
$$x^+ = R_1^{-1} c.$$
Since $R_1$ is a triangular matrix, it is very easy to invert $R_1$.
The method of least squares is one of the most effective tools of the mathematical sciences.
There are entire books devoted to it. Readers are advised to consult Strang [105], Golub and
Van Loan [49], Demmel [27], and Trefethen and Bau [110], where extensions and applications
of least squares (such as weighted least squares and recursive least squares) are described.
Golub and Van Loan [49] also contains a very extensive bibliography, including a list of
books on least squares.
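A hedged sketch of the QR route just described, using numpy's Householder-based QR factorization on made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))        # full column rank, m > n
b = rng.standard_normal(8)

Q, R = np.linalg.qr(A)                 # reduced QR: A = Q R, R is 3 x 3 upper triangular
c = Q.T @ b                            # the rotated right-hand side (H_n ... H_1 b, top part)
x = np.linalg.solve(R, c)              # back substitution: x^+ = R^{-1} c

assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```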
17.2 Data Compression and SVD
Among the many applications of SVD, a very useful one is data compression, notably for
images. In order to make precise the notion of closeness of matrices, we use the notion of
matrix norm. This concept is defined in Chapter 7 and the reader may want to review it
before reading any further.
Given an $m \times n$ matrix $A$ of rank $r$, we would like to find a best approximation of $A$ by a
matrix $B$ of rank $k \le r$ (actually, $k < r$), so that $\|A - B\|_2$ (or $\|A - B\|_F$) is minimized.
Proposition 17.5. Let $A$ be an $m \times n$ matrix of rank $r$ and let $V D U^\top = A$ be an SVD for
$A$. Write $u_i$ for the columns of $U$, $v_i$ for the columns of $V$, and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p$ for the
singular values of $A$ ($p = \min(m, n)$). Then a matrix of rank $k < r$ closest to $A$ (in the $\|\,\|_2$
norm) is given by
$$A_k = \sum_{i=1}^{k} \sigma_i\, v_i u_i^\top = V\, \operatorname{diag}(\sigma_1, \ldots, \sigma_k, 0, \ldots, 0)\, U^\top$$
and $\|A - A_k\|_2 = \sigma_{k+1}$.
Proof. By construction, $A_k$ has rank $k$, and
$$A - A_k = \sum_{i=k+1}^{p} \sigma_i\, v_i u_i^\top = V\, \operatorname{diag}(0, \ldots, 0, \sigma_{k+1}, \ldots, \sigma_p)\, U^\top,$$
so $\|A - A_k\|_2 = \sigma_{k+1}$, since the spectral norm of a matrix is its largest singular value. It
remains to show that $\|A - B\|_2 \ge \sigma_{k+1}$ for every matrix $B$ of rank $k$. Since $\operatorname{Ker} B$ has
dimension $n - k$, it meets the $(k+1)$-dimensional subspace spanned by $u_1, \ldots, u_{k+1}$
nontrivially, so there is some unit vector $h$ in the intersection. Then $Bh = 0$, and since $h$ is a
combination of $u_1, \ldots, u_{k+1}$,
$$\|A - B\|_2^2 \ge \|(A - B)h\|_2^2 = \|Ah\|_2^2 = \|D U^\top h\|_2^2
\ge \sigma_{k+1}^2 \|U^\top h\|_2^2 = \sigma_{k+1}^2,$$
as claimed.
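A short numpy sketch of this truncation (the test matrix and the chosen rank are arbitrary):

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A in the spectral norm,
    obtained by truncating an SVD as in Proposition 17.5."""
    V, s, Ut = np.linalg.svd(A, full_matrices=False)
    return V[:, :k] @ np.diag(s[:k]) @ Ut[:k, :]

A = np.random.default_rng(2).standard_normal((6, 5))
s = np.linalg.svd(A, compute_uv=False)
A2 = best_rank_k(A, 2)
# the approximation error in the 2-norm is the (k+1)-st singular value
assert np.isclose(np.linalg.norm(A - A2, 2), s[2])
```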
17.3 Principal Components Analysis (PCA)
Suppose we have a set of data consisting of $n$ points $X_1, \ldots, X_n$, with each $X_i \in \mathbb{R}^d$ viewed
as a row vector. Think of the $X_i$'s as persons, say mathematicians, and of the $d$ coordinates
as measured features; in the example below the two features are the year of birth and the
length of the beard (in centimeters):

Name                    year   length
Carl Friedrich Gauss    1777      0
Camille Jordan          1838     12
Adrien-Marie Legendre   1752      0
Bernhard Riemann        1826     15
David Hilbert           1862      2
Henri Poincare          1854      5
Emmy Noether            1882      0
Karl Weierstrass        1815      0
Eugenio Beltrami        1835      2
Hermann Schwarz         1843     20
We usually form the n d matrix X whose ith row is Xi , with 1 i n. Then the
jth column is denoted by Cj (1 j d). It is sometimes called a feature vector , but this
terminology is far from being universally accepted. In fact, many people in computer vision
call the data points Xi feature vectors!
The purpose of principal components analysis, for short PCA, is to identify patterns in
data and understand the variancecovariance structure of the data. This is useful for the
following tasks:
1. Data reduction: Often much of the variability of the data can be accounted for by a
smaller number of principal components.
2. Interpretation: PCA can show relationships that were not previously suspected.
Given a vector (a sample of measurements) x = (x1 , . . . , xn ) 2 Rn , recall that the mean
(or average) x of x is given by
Pn
xi
x = i=1 .
n
We let $x - \bar{x}$ denote the centered data point
$$x - \bar{x} = (x_1 - \bar{x}, \ldots, x_n - \bar{x}).$$
In order to measure the spread of the $x_i$'s around the mean, we define the sample variance
(for short, variance) $\operatorname{var}(x)$ (or $s^2$) of the sample $x$ by
$$\operatorname{var}(x) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}.$$
There is a reason for using $n - 1$ instead of $n$: the above definition makes $\operatorname{var}(x)$ an
unbiased estimator of the variance of the random variable being sampled. However, we
don't need to worry about this here. Curious readers will find an explanation of these peculiar
definitions in Epstein [35] (Chapter 14, Section 14.5), or in any decent statistics book.
Given two vectors $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$, the sample covariance (for short,
covariance) of $x$ and $y$ is given by
$$\operatorname{cov}(x, y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n - 1}.$$
The covariance of x and y measures how x and y vary from the mean with respect to each
other . Obviously, cov(x, y) = cov(y, x) and cov(x, x) = var(x).
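A minimal numpy sketch checking these definitions on the two feature columns of the example above (note that numpy's np.cov uses the same $n-1$ normalization by default):

```python
import numpy as np

# year of birth and beard length from the table above
x = np.array([1777, 1838, 1752, 1826, 1862, 1854, 1882, 1815, 1835, 1843], dtype=float)
y = np.array([0, 12, 0, 15, 2, 5, 0, 0, 2, 20], dtype=float)

n = len(x)
var_x  = np.sum((x - x.mean()) ** 2) / (n - 1)                # sample variance
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)    # sample covariance

assert np.isclose(var_x, np.cov(x))
assert np.isclose(cov_xy, np.cov(x, y)[0, 1])
```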
Note that
$$\operatorname{cov}(x, y) = \frac{(x - \bar{x})^\top (y - \bar{y})}{n - 1}.$$
For PCA to be meaningful, the data must first be centered: we subtract from each data point
the centroid $\mu = \frac{1}{n}(X_1 + \cdots + X_n)$ of the $X_i$'s. In our example, $\mu = (\mu_1, \mu_2)$ with
$\mu_1 = 1828.4$ and $\mu_2 = 5.6$, and subtracting $\mu$ from each data point,
we get the centered data $X - \mu$:

Name                     year   length
Carl Friedrich Gauss    -51.4     -5.6
Camille Jordan            9.6      6.4
Adrien-Marie Legendre   -76.4     -5.6
Bernhard Riemann         -2.4      9.4
David Hilbert            33.6     -3.6
Henri Poincare           25.6     -0.6
Emmy Noether             53.6     -5.6
Karl Weierstrass        -13.4     -5.6
Eugenio Beltrami          6.6     -3.6
Hermann Schwarz          14.6     14.4
We can think of the vector Cj as representing the features of X in the direction ej (the
jth canonical basis vector in Rd , namely ej = (0, . . . , 1, . . . 0), with a 1 in the jth position).
If v 2 Rd is a unit vector, we wish to consider the projection of the data points X1 , . . . , Xn
onto the line spanned by v. Recall from Euclidean geometry that if x 2 Rd is any vector
and v 2 Rd is a unit vector, the projection of x onto the line spanned by v is
hx, viv.
Thus, with respect to the basis v, the projection of x has coordinate hx, vi. If x is represented
by a row vector and v by a column vector, then
hx, vi = xv.
Thus, if the data are centered at the centroid $\mu$, the centered projection $Y$ onto the line
spanned by the unit vector $v$ is given by
$$Y = v_1(C_1 - \mu_1) + \cdots + v_d(C_d - \mu_d) = (X - \mu)v,$$
and its variance is
$$\operatorname{var}(Y) = \frac{((X-\mu)v)^\top ((X-\mu)v)}{n-1}
= v^\top \frac{(X-\mu)^\top (X-\mu)}{n-1}\, v.$$
Similarly, if $Z = (X - \mu)w$ is the centered projection onto the line spanned by another unit
vector $w$, then
$$\operatorname{cov}(Y, Z) = v^\top \frac{(X-\mu)^\top (X-\mu)}{n-1}\, w.$$
The above suggests that we should move the origin to the centroid of the Xi s and consider
the matrix X of the centered data points Xi .
From now on, beware that we denote the columns of $X - \mu$ by $C_1, \ldots, C_d$ and that $Y$
denotes the centered point $Y = (X - \mu)v = \sum_{j=1}^{d} v_j C_j$, where $v$ is a unit vector.
Basic idea of PCA: The principal components of X are uncorrelated projections Y of the
data points X1 , . . ., Xn onto some directions v (where the vs are unit vectors) such that
var(Y ) is maximal.
This suggests the following definition:
Definition 17.2. Given an $n \times d$ matrix $X$ of data points $X_1, \ldots, X_n$, if $\mu$ is the centroid of
the $X_i$'s, then a first principal component of $X$ (first PC) is a centered point $Y_1 = (X - \mu)v_1$,
the projection of $X_1, \ldots, X_n$ onto a direction $v_1$ such that $\operatorname{var}(Y_1)$ is maximized, where $v_1$ is
a unit vector (recall that $Y_1 = (X - \mu)v_1$ is a linear combination of the $C_j$'s, the columns of
$X - \mu$).
More generally, if $Y_1, \ldots, Y_k$ are $k$ principal components of $X$ along some unit vectors
$v_1, \ldots, v_k$, where $1 \le k < d$, a $(k+1)$th principal component of $X$ ($(k+1)$th PC) is a centered
point $Y_{k+1} = (X - \mu)v_{k+1}$, the projection of $X_1, \ldots, X_n$ onto some direction $v_{k+1}$ such that
$\operatorname{var}(Y_{k+1})$ is maximized, subject to $\operatorname{cov}(Y_h, Y_{k+1}) = 0$ for all $h$ with $1 \le h \le k$, and where
$v_{k+1}$ is a unit vector (recall that $Y_h = (X - \mu)v_h$ is a linear combination of the $C_j$'s). The
$v_h$ are called principal directions.
The following proposition is the key to the main result about PCA:
Proposition 17.6. If $A$ is a symmetric $d \times d$ matrix with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d$ and
if $(u_1, \ldots, u_d)$ is any orthonormal basis of eigenvectors of $A$, where $u_i$ is a unit eigenvector
associated with $\lambda_i$, then
$$\max_{x \ne 0} \frac{x^\top A x}{x^\top x} = \lambda_1$$
(with the maximum attained for $x = u_1$) and
$$\max_{x \ne 0,\; x \in \{u_1, \ldots, u_k\}^\perp} \frac{x^\top A x}{x^\top x} = \lambda_{k+1}$$
(with the maximum attained for $x = u_{k+1}$), where $1 \le k \le d - 1$.
Proof. First, observe that
$$\max_{x \ne 0} \frac{x^\top A x}{x^\top x} = \max_{x} \{\, x^\top A x \mid x^\top x = 1 \,\},$$
and similarly,
$$\max_{x \ne 0,\; x \in \{u_1, \ldots, u_k\}^\perp} \frac{x^\top A x}{x^\top x}
= \max_{x} \{\, x^\top A x \mid (x \in \{u_1, \ldots, u_k\}^\perp) \wedge (x^\top x = 1) \,\}.$$
Since $A$ is a symmetric matrix, its eigenvalues are real and it can be diagonalized with respect
to an orthonormal basis of eigenvectors, so let $(u_1, \ldots, u_d)$ be such a basis. If we write
$$x = \sum_{i=1}^{d} x_i u_i,$$
a simple computation shows that
$$x^\top A x = \sum_{i=1}^{d} \lambda_i x_i^2.$$
If $x^\top x = 1$, then $\sum_{i=1}^{d} x_i^2 = 1$, and since $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d$, we get
$$x^\top A x = \sum_{i=1}^{d} \lambda_i x_i^2 \le \lambda_1 \sum_{i=1}^{d} x_i^2 = \lambda_1.$$
Thus,
$$\max_{x} \{\, x^\top A x \mid x^\top x = 1 \,\} \le \lambda_1,$$
and since this maximum is achieved for $x = u_1$ (whose coordinate vector is $e_1 = (1, 0, \ldots, 0)$),
we conclude that
$$\max_{x} \{\, x^\top A x \mid x^\top x = 1 \,\} = \lambda_1.$$
Next, observe that $x \in \{u_1, \ldots, u_k\}^\perp$ iff $x_1 = \cdots = x_k = 0$, so for such an $x$ with $x^\top x = 1$,
$$x^\top A x = \sum_{i=k+1}^{d} \lambda_i x_i^2 \le \lambda_{k+1} \sum_{i=k+1}^{d} x_i^2 = \lambda_{k+1}.$$
Thus,
$$\max_{x} \{\, x^\top A x \mid (x \in \{u_1, \ldots, u_k\}^\perp) \wedge (x^\top x = 1) \,\} \le \lambda_{k+1},$$
and since this maximum is achieved for $x = u_{k+1}$ (whose coordinate vector is $e_{k+1}$, with a 1
in position $k+1$), we conclude that
$$\max_{x} \{\, x^\top A x \mid (x \in \{u_1, \ldots, u_k\}^\perp) \wedge (x^\top x = 1) \,\} = \lambda_{k+1},$$
as claimed.
The quantity
$$\frac{x^\top A x}{x^\top x}$$
is known as the Rayleigh-Ritz ratio, and Proposition 17.6 is often known as part of the
Rayleigh-Ritz theorem.
Proposition 17.6 also holds if $A$ is a Hermitian matrix, provided that we replace $x^\top A x$ by $x^* A x$
and $x^\top x$ by $x^* x$. The proof is unchanged, since a Hermitian matrix has real eigenvalues
and can be diagonalized with respect to an orthonormal basis of eigenvectors (with respect to the
Hermitian inner product).
We then have the following fundamental result showing how the SVD of X yields the
PCs:
Theorem 17.7. (SVD yields PCA) Let $X$ be an $n \times d$ matrix of data points $X_1, \ldots, X_n$,
and let $\mu$ be the centroid of the $X_i$'s. If $X - \mu = V D U^\top$ is an SVD decomposition of $X - \mu$
and if the main diagonal of $D$ consists of the singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_d$, then the
centered points $Y_1, \ldots, Y_d$, where
$$Y_k = (X - \mu) u_k = \text{$k$th column of } V D$$
and $u_k$ is the $k$th column of $U$, are $d$ principal components of $X$. Furthermore,
$$\operatorname{var}(Y_k) = \frac{\sigma_k^2}{n-1}$$
and $\operatorname{cov}(Y_h, Y_k) = 0$ whenever $h \ne k$ and $1 \le k, h \le d$.

Proof. Recall that for any unit vector $v$, the centered projection of $X_1, \ldots, X_n$ onto the line
spanned by $v$ is $Y = (X - \mu)v$ and that its variance is
$$\operatorname{var}(Y) = v^\top \frac{(X-\mu)^\top (X-\mu)}{n-1}\, v.$$
Since $X - \mu = V D U^\top$, we get
$$\operatorname{var}(Y) = v^\top \frac{1}{n-1} (X-\mu)^\top (X-\mu)\, v
= v^\top \frac{1}{n-1} U D V^\top V D U^\top v
= v^\top U \frac{1}{n-1} D^2 U^\top v.$$
Similarly, if $Y = (X - \mu)v$ and $Z = (X - \mu)w$, then
$$\operatorname{cov}(Y, Z) = v^\top U \frac{1}{n-1} D^2 U^\top w.$$
Now, $U \frac{1}{n-1} D^2 U^\top$ is a symmetric matrix with eigenvalues $\frac{\sigma_1^2}{n-1} \ge \cdots \ge \frac{\sigma_d^2}{n-1}$, and the
columns of $U$ form an orthonormal basis of unit eigenvectors. By Proposition 17.6, $\operatorname{var}(Y)$ is
maximized over unit vectors $v$ for $v = u_1$, so $Y_1 = (X - \mu)u_1$ is a first principal component,
with $\operatorname{var}(Y_1) = \sigma_1^2/(n-1)$. Proceeding by induction, assume that $Y_1, \ldots, Y_k$ are principal
components along $u_1, \ldots, u_k$. The constraints $\operatorname{cov}(Y_h, Y_{k+1}) = 0$ for $1 \le h \le k$ amount to
$$v^\top U \frac{1}{n-1} D^2 U^\top u_h = 0, \qquad 1 \le h \le k,$$
which (for nonzero $\sigma_h$) holds iff $v$ is orthogonal to $u_1, \ldots, u_k$. By Proposition 17.6, the
maximum of $\operatorname{var}(Y)$ over such unit vectors is $\sigma_{k+1}^2/(n-1)$, attained for $v = u_{k+1}$,
and since the columns of $U$ form an orthonormal basis, $U^\top u_{k+1} = e_{k+1}$, and $Y_{k+1}$ is indeed
the $(k+1)$th column of $V D$, which completes the proof of the induction step.
The $d$ columns $u_1, \ldots, u_d$ of $U$ are usually called the principal directions of $X - \mu$ (and
$X$). We note that not only do we have $\operatorname{cov}(Y_h, Y_k) = 0$ whenever $h \ne k$, but the directions
$u_1, \ldots, u_d$ along which the data are projected are mutually orthogonal.
We know from our study of SVD that $\sigma_1^2, \ldots, \sigma_d^2$ are the eigenvalues of the symmetric
positive semidefinite matrix $(X - \mu)^\top (X - \mu)$ and that $u_1, \ldots, u_d$ are corresponding
eigenvectors. Numerically, it is preferable to use SVD on $X - \mu$ rather than to compute explicitly
$(X - \mu)^\top (X - \mu)$ and then diagonalize it. Indeed, the explicit computation of $A^\top A$ from
a matrix A can be numerically quite unstable, and good SVD algorithms avoid computing
A> A explicitly.
In general, since an SVD of X is not unique, the principal directions u1 , . . . , ud are not
unique. This can happen when a data set has some rotational symmetries, and in such a
case, PCA is not a very good method for analyzing the data set.
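A compact numpy sketch of Theorem 17.7 (a rough illustration, not a production routine; the function name and the example data are choices made here):

```python
import numpy as np

def pca(X):
    """Principal directions and principal components of the n x d data matrix X,
    computed from an SVD of the centered data, as in Theorem 17.7."""
    mu = X.mean(axis=0)                                 # centroid of the data points
    V, s, Ut = np.linalg.svd(X - mu, full_matrices=False)   # X - mu = V D U^T
    directions = Ut.T                                   # columns u_1, ..., u_d
    components = V * s                                  # columns of V D: the PCs Y_k
    variances = s ** 2 / (len(X) - 1)                   # var(Y_k) = sigma_k^2 / (n - 1)
    return mu, directions, components, variances

# the beard example from above
X = np.array([[1777, 0], [1838, 12], [1752, 0], [1826, 15], [1862, 2],
              [1854, 5], [1882, 0], [1815, 0], [1835, 2], [1843, 20]], float)
mu, U, Y, var = pca(X)
```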
17.4 Best Affine Approximation
A problem very close to PCA (and based on least squares) is to best approximate a data
set of $n$ points $X_1, \ldots, X_n$, with $X_i \in \mathbb{R}^d$, by a $p$-dimensional affine subspace $A$ of $\mathbb{R}^d$, with
$1 \le p \le d - 1$ (the terminology rank $d - p$ is also used).
First, consider $p = d - 1$. Then $A = A_1$ is an affine hyperplane (in $\mathbb{R}^d$), and it is given
by an equation of the form
$$a_1 x_1 + \cdots + a_d x_d + c = 0.$$
By best approximation, we mean that $(a_1, \ldots, a_d, c)$ solves the homogeneous linear system
$$\begin{pmatrix} x_{1\,1} & \cdots & x_{1\,d} & 1 \\ \vdots & \ddots & \vdots & \vdots \\ x_{n\,1} & \cdots & x_{n\,d} & 1 \end{pmatrix}
\begin{pmatrix} a_1 \\ \vdots \\ a_d \\ c \end{pmatrix}
=
\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}$$
in the least squares sense, subject to the condition that $a = (a_1, \ldots, a_d)$ is a unit vector, that
is, $a^\top a = 1$, where $X_i = (x_{i\,1}, \ldots, x_{i\,d})$.
If we form the symmetric matrix
$$\begin{pmatrix} x_{1\,1} & \cdots & x_{1\,d} & 1 \\ \vdots & \ddots & \vdots & \vdots \\ x_{n\,1} & \cdots & x_{n\,d} & 1 \end{pmatrix}^{\!\top}
\begin{pmatrix} x_{1\,1} & \cdots & x_{1\,d} & 1 \\ \vdots & \ddots & \vdots & \vdots \\ x_{n\,1} & \cdots & x_{n\,d} & 1 \end{pmatrix}$$
involved in the normal equations, we see that the bottom row (and last column) of that
matrix is
$$(n\mu_1 \;\; \cdots \;\; n\mu_d \;\; n),$$
where $n\mu_j = \sum_{i=1}^{n} x_{i\,j}$ is $n$ times the mean of the column $C_j$ of $X$.
Therefore, if $(a_1, \ldots, a_d, c)$ is a least squares solution, that is, a solution of the normal
equations, we must have
$$n\mu_1 a_1 + \cdots + n\mu_d a_d + nc = 0,$$
that is,
$$a_1 \mu_1 + \cdots + a_d \mu_d + c = 0,$$
which means that the hyperplane A1 must pass through the centroid of the data points
$X_1, \ldots, X_n$. Then we can rewrite the original system with respect to the centered data
$X_i - \mu$, and we find that the variable $c$ drops out and we get the system
$$(X - \mu)\, a = 0,$$
where $a = (a_1, \ldots, a_d)$.
Thus, we are looking for a unit vector $a$ solving $(X - \mu)a = 0$ in the least squares sense,
that is, some $a$ such that $a^\top a = 1$ minimizing
$$a^\top (X - \mu)^\top (X - \mu)\, a.$$
Compute some SVD $V D U^\top$ of $X - \mu$, where the main diagonal of $D$ consists of the singular
values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_d$ of $X - \mu$ arranged in descending order. Then
$$a^\top (X - \mu)^\top (X - \mu)\, a = a^\top U D^2 U^\top a,$$
so this quantity is minimized, subject to $a^\top a = 1$, when $a$ is the last column of $U$, a unit
eigenvector corresponding to the smallest eigenvalue $\sigma_d^2$ of $(X - \mu)^\top (X - \mu)$. Letting
$$U_{d-1} = \{\, u \in \mathbb{R}^d \mid \langle u, a \rangle = 0 \,\},$$
where $a$ is the last column in $U$ for some SVD $V D U^\top$ of $X - \mu$, we have shown that the
affine hyperplane $A_1 = \mu + U_{d-1}$ is a best approximation of the data set $X_1, \ldots, X_n$ in the
least squares sense.
It is easy to show that this hyperplane $A_1 = \mu + U_{d-1}$ minimizes the sum of the square
distances of each $X_i$ to its orthogonal projection onto $A_1$. Also, since $U_{d-1}$ is the orthogonal
complement of $a$, the last column of $U$, we see that $U_{d-1}$ is spanned by the first $d-1$ columns
of $U$, that is, the first $d-1$ principal directions of $X - \mu$.
All this can be generalized to a best $(d-k)$-dimensional affine subspace $A_k$ approximating
$X_1, \ldots, X_n$ in the least squares sense ($1 \le k \le d-1$). Such an affine subspace $A_k$ is cut out
by $k$ independent hyperplanes $H_i$ (with $1 \le i \le k$), each given by some equation
$$a_{i\,1} x_1 + \cdots + a_{i\,d} x_d + c_i = 0.$$
If we write $a_i = (a_{i\,1}, \ldots, a_{i\,d})$, to say that the $H_i$ are independent means that $a_1, \ldots, a_k$ are
linearly independent. In fact, we may assume that $a_1, \ldots, a_k$ form an orthonormal system.
Then, finding a best $(d-k)$-dimensional affine subspace $A_k$ amounts to solving the
homogeneous linear system
$$\begin{pmatrix} X & \mathbf{1} & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & X & \mathbf{1} \end{pmatrix}
\begin{pmatrix} a_1 \\ c_1 \\ \vdots \\ a_k \\ c_k \end{pmatrix}
=
\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}$$
in the least squares sense, subject to the conditions $a_i^\top a_j = \delta_{i\,j}$ for all $i, j$ with $1 \le i, j \le k$,
where the matrix of the system is block diagonal with $k$ diagonal blocks $(X, \mathbf{1})$, and $\mathbf{1}$
denotes the column vector $(1, \ldots, 1) \in \mathbb{R}^n$. Again, each hyperplane $H_i$ must pass through
the centroid $\mu$ of $X_1, \ldots, X_n$; after centering the data the constants $c_i$ drop out, and a least
squares solution is obtained by taking $a_1, \ldots, a_k$ to be the last $k$ columns of $U$ in an SVD
$V D U^\top = X - \mu$. The best $(d-k)$-dimensional affine approximation is then
$$A_k = \mu + U_{d-k},$$
where $U_{d-k}$ is the linear subspace spanned by the first $d-k$ principal directions of $X - \mu$, that
is, the first $d-k$ columns of $U$. Consequently, we get the following interesting interpretation
of PCA (actually, principal directions):
Theorem 17.8. Let $X$ be an $n \times d$ matrix of data points $X_1, \ldots, X_n$, and let $\mu$ be the centroid
of the $X_i$'s. If $X - \mu = V D U^\top$ is an SVD decomposition of $X - \mu$ and if the main diagonal
of $D$ consists of the singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_d$, then a best $(d-k)$-dimensional
affine approximation $A_k$ of $X_1, \ldots, X_n$ in the least squares sense is given by
$$A_k = \mu + U_{d-k},$$
where $U_{d-k}$ is the linear subspace spanned by the first $d-k$ columns of $U$, the first $d-k$
principal directions of $X - \mu$ ($1 \le k \le d-1$).
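A minimal numpy sketch of Theorem 17.8 (assuming $n \ge d$; the function name is a choice made here):

```python
import numpy as np

def best_affine_subspace(X, k):
    """Best (d-k)-dimensional affine approximation A_k = mu + U_{d-k} of the data
    in the least squares sense (Theorem 17.8). Returns the centroid and an
    orthonormal basis of the direction U_{d-k}, the first d-k principal directions."""
    n, d = X.shape
    mu = X.mean(axis=0)
    _, _, Ut = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Ut[: d - k].T          # columns span U_{d-k}

# best line (d = 2, k = 1) through the beard data used earlier
X = np.array([[1777, 0], [1838, 12], [1752, 0], [1826, 15], [1862, 2],
              [1854, 5], [1882, 0], [1815, 0], [1835, 2], [1843, 20]], float)
mu, basis = best_affine_subspace(X, 1)
```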
There are many applications of PCA to data compression, dimension reduction, and
pattern analysis. The basic idea is that in many cases, given a data set X1 , . . . , Xn , with
Xi 2 Rd , only a small subset of m < d of the features is needed to describe the data set
accurately.
If u1 , . . . , ud are the principal directions of X , then the first m projections of the data
(the first m principal components, i.e., the first m columns of V D) onto the first m principal
directions represent the data without much loss of information. Thus, instead of using the
original data points X1 , . . . , Xn , with Xi 2 Rd , we can use their projections onto the first m
principal directions Y1 , . . . , Ym , where Yi 2 Rm and m < d, obtaining a compressed version
of the original data set.
For example, PCA is used in computer vision for face recognition. Sirovitch and Kirby
(1987) seem to be the first to have had the idea of using PCA to compress facial images.
They introduced the term eigenpicture to refer to the principal directions, ui . However, an
explicit face recognition algorithm was given only later, by Turk and Pentland (1991). They
renamed eigenpictures as eigenfaces.
For details on the topic of eigenfaces, see Forsyth and Ponce [39] (Chapter 22, Section
22.3.2), where you will also find exact references to Turk and Pentland's papers.
Another interesting application of PCA is to the recognition of handwritten digits. Such
an application is described in Hastie, Tibshirani, and Friedman, [55] (Chapter 14, Section
14.5.1).
17.5
Summary
The main concepts and results of this chapter are listed below:
Least squares problems.
Existence of a least squares solution of smallest norm (Theorem 17.1).
The pseudo-inverse A+ of a matrix A.
The least squares solution of smallest norm is given by the pseudo-inverse (Theorem
17.2)
Projection properties of the pseudo-inverse.
The pseudo-inverse of a normal matrix.
The Penrose characterization of the pseudo-inverse.
Data compression and SVD.
Best approximation of rank < r of a matrix.
Principal component analysis.
Review of basic statistical concepts: mean, variance, covariance, covariance matrix .
Centered data, centroid .
The principal components (PCA).
The Rayleigh-Ritz theorem (Proposition 17.6).
The main theorem: SVD yields PCA (Theorem 17.7).
Best affine approximation.
SVD yields a best affine approximation (Theorem 17.8).
Face recognition, eigenfaces.
Chapter 18
Quadratic Optimization Problems
18.1 Quadratic Optimization: The Positive Definite Case
In this chapter, we consider two classes of quadratic optimization problems that appear
frequently in engineering and in computer science (especially in computer vision):
1. Minimizing
$$f(x) = \tfrac{1}{2} x^\top A x + x^\top b$$
over all $x \in \mathbb{R}^n$, or subject to linear or affine constraints.
2. Minimizing
$$f(x) = \tfrac{1}{2} x^\top A x + x^\top b$$
over the unit sphere.
Recall that a self-adjoint linear map $f \colon E \to E$ on a Euclidean space $E$ is positive definite iff
$\langle x, f(x) \rangle > 0$ for all $x \ne 0$, and positive semidefinite iff $\langle x, f(x) \rangle \ge 0$
for all $x \in E$; these conditions hold iff the eigenvalues of $f$ are all strictly positive, respectively all nonnegative.
Proof. (1) First, assume that f is positive definite. Recall that every self-adjoint linear map
has an orthonormal basis $(e_1, \ldots, e_n)$ of eigenvectors, and let $\lambda_1, \ldots, \lambda_n$ be the corresponding
eigenvalues. With respect to this basis, for every $x = x_1 e_1 + \cdots + x_n e_n \ne 0$, we have
$$\langle x, f(x) \rangle
= \Big\langle \sum_{i=1}^{n} x_i e_i,\, f\Big(\sum_{i=1}^{n} x_i e_i\Big) \Big\rangle
= \Big\langle \sum_{i=1}^{n} x_i e_i,\, \sum_{i=1}^{n} \lambda_i x_i e_i \Big\rangle
= \sum_{i=1}^{n} \lambda_i x_i^2.$$
Thus, if all the eigenvalues $\lambda_i$ are strictly positive, then $\langle x, f(x) \rangle > 0$ for all $x \ne 0$;
conversely, taking $x = e_i$ shows that $\lambda_i = \langle e_i, f(e_i) \rangle > 0$. The positive semidefinite case
is similar, with strict inequalities replaced by $\ge$.
We now return to the question of whether a quadratic function of the form
$$P(x) = \tfrac{1}{2} x^\top A x - x^\top b$$
has a global minimum when $A$ is symmetric positive definite.
Proposition 18.2. Given a quadratic function
$$P(x) = \tfrac{1}{2} x^\top A x - x^\top b,$$
if $A$ is symmetric positive definite, then $P(x)$ has a unique global minimum for the solution
of the linear system $Ax = b$. The minimum value of $P(x)$ is
$$P(A^{-1} b) = -\tfrac{1}{2}\, b^\top A^{-1} b.$$
Proof. Since $A$ is positive definite, it is invertible, since its eigenvalues are all strictly positive.
Let $x = A^{-1} b$, and compute $P(y) - P(x)$ for any $y \in \mathbb{R}^n$. Since $Ax = b$, we get
$$P(y) - P(x) = \tfrac{1}{2} y^\top A y - y^\top b - \tfrac{1}{2} x^\top A x + x^\top b
= \tfrac{1}{2} y^\top A y - y^\top A x + \tfrac{1}{2} x^\top A x
= \tfrac{1}{2} (y - x)^\top A (y - x).$$
Since $A$ is positive definite, the last expression is nonnegative, and it is zero iff $y = x$; thus
$P(y) \ge P(x)$ for all $y$, with equality only for $y = x = A^{-1} b$, and the minimum value is
$P(A^{-1} b) = -\tfrac{1}{2} b^\top A^{-1} b$.
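A tiny numpy check of Proposition 18.2 on an arbitrary positive definite matrix (the data are made up):

```python
import numpy as np

# P(x) = 1/2 x^T A x - x^T b  with A symmetric positive definite
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

P = lambda x: 0.5 * x @ A @ x - x @ b

x_star = np.linalg.solve(A, b)                # the unique global minimum, Ax = b
print(P(x_star), -0.5 * b @ x_star)           # both equal -1/2 b^T A^{-1} b

# any perturbation of the minimizer increases P
rng = np.random.default_rng(4)
for _ in range(5):
    assert P(x_star + 0.1 * rng.standard_normal(2)) >= P(x_star)
```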
Remarks:
(1) The quadratic function $P(x)$ is also given by
$$P(x) = \tfrac{1}{2} x^\top A x - b^\top x,$$
but the definition using x> b is more convenient for the proof of Proposition 18.2.
(2) If $P(x)$ contains a constant term $c \in \mathbb{R}$, so that
$$P(x) = \tfrac{1}{2} x^\top A x - x^\top b + c,$$
the proof of Proposition 18.2 still shows that $P(x)$ has a unique global minimum for
$x = A^{-1} b$, but the minimal value is
$$P(A^{-1} b) = -\tfrac{1}{2}\, b^\top A^{-1} b + c.$$
Thus, when the energy function $P(x)$ of a system is given by a quadratic function
$$P(x) = \tfrac{1}{2} x^\top A x - x^\top b,$$
where A is symmetric positive definite, finding the global minimum of P (x) is equivalent to
solving the linear system Ax = b. Sometimes, it is useful to recast a linear problem Ax = b
as a variational problem (finding the minimum of some energy function). However, very
often, a minimization problem comes with extra constraints that must be satisfied for all
admissible solutions. For instance, we may want to minimize the quadratic function
$$Q(y_1, y_2) = \tfrac{1}{2}(y_1^2 + y_2^2)$$
subject to the constraint
$$2y_1 - y_2 = 5.$$
The solution for which Q(y1 , y2 ) is minimum is no longer (y1 , y2 ) = (0, 0), but instead,
$(y_1, y_2) = (2, -1)$, as will be shown later.
Geometrically, the graph of the function defined by z = Q(y1 , y2 ) in R3 is a paraboloid
of revolution $P$ with axis of revolution $Oz$. The constraint
$$2y_1 - y_2 = 5$$
corresponds to the vertical plane $H$ parallel to the $z$-axis and containing the line of equation
$2y_1 - y_2 = 5$ in the $xy$-plane. Thus, the constrained minimum of $Q$ is located on the parabola
that is the intersection of the paraboloid P with the plane H.
A nice way to solve constrained minimization problems of the above kind is to use the
method of Lagrange multipliers. But first, let us define precisely what kind of minimization
problems we intend to solve.
Definition 18.3. The quadratic constrained minimization problem consists in minimizing a
quadratic function
$$Q(y) = \tfrac{1}{2} y^\top C^{-1} y - b^\top y$$
subject to the linear constraints
$$A^\top y = f,$$
where $C^{-1}$ is an $m \times m$ symmetric positive definite matrix, $A$ is an $m \times n$ matrix of rank $n$
(so that $m \ge n$), and where $b, y \in \mathbb{R}^m$ (viewed as column vectors), and $f \in \mathbb{R}^n$ (viewed as a
column vector).
The reason for using C 1 instead of C is that the constrained minimization problem has
an interpretation as a set of equilibrium equations in which the matrix that arises naturally
is $C$ (see Strang [104]). Since $C$ and $C^{-1}$ are both symmetric positive definite, this doesn't
make any difference, but it seems preferable to stick to Strang's notation.
The method of Lagrange consists in incorporating the $n$ constraints $A^\top y = f$ into the
quadratic function $Q(y)$ by introducing extra variables $\lambda = (\lambda_1, \ldots, \lambda_n)$ called Lagrange
multipliers, one for each constraint. We form the Lagrangian
$$L(y, \lambda) = Q(y) + \lambda^\top (A^\top y - f)
= \tfrac{1}{2} y^\top C^{-1} y - (b - A\lambda)^\top y - \lambda^\top f.$$
We shall prove that our constrained minimization problem has a unique solution given
by the system of linear equations
$$C^{-1} y + A\lambda = b, \qquad A^\top y = f.$$
From the first equation, $C^{-1} y + A\lambda = b$, we get
$$y = C(b - A\lambda),$$
and substituting into the second equation gives $A^\top C(b - A\lambda) = f$, that is,
$$A^\top C A\, \lambda = A^\top C b - f.$$
However, by a previous remark, since $C$ is symmetric positive definite and the columns of
$A$ are linearly independent, $A^\top C A$ is symmetric positive definite, and thus invertible. Note
that this way of solving the system requires solving for the Lagrange multipliers first.
Letting $e = b - A\lambda$, the system becomes
$$e = b - A\lambda, \qquad y = Ce, \qquad A^\top y = f.$$
The latter system is called the equilibrium equations by Strang [104]. Indeed, Strang shows
that the equilibrium equations of many physical systems can be put in the above form.
This includes spring-mass systems, electrical networks, and trusses, which are structures
built from elastic bars. In each case, y, e, b, C, , f , and K = A> CA have a physical
interpretation. The matrix $K = A^\top C A$ is usually called the stiffness matrix. Again, the
reader is referred to Strang [104].
In order to prove that our constrained minimization problem has a unique solution, we
proceed to prove that the constrained minimization of Q(y) subject to A> y = f is equivalent
to the unconstrained maximization of another function P ( ). We get P ( ) by minimizing
the Lagrangian L(y, ) treated as a function of y alone. Since C 1 is symmetric positive
definite and
1
>
L(y, ) = y > C 1 y (b A )> y
f,
2
by Proposition 18.2 the global minimum (with respect to y) of L(y, ) is obtained for the
solution y of
C 1y = b A ,
that is, when
y = C(b
A ),
1
(A
2
b)> C(A
b)
>
f.
Letting
1
P ( ) = (A
b)> C(A
b) + > f,
2
we claim that the solution of the constrained minimization of Q(y) subject to A> y = f
is equivalent to the unconstrained maximization of $-P(\lambda)$. Of course, since we minimized
$L(y, \lambda)$ with respect to $y$, we have
$$L(y, \lambda) \ge -P(\lambda)$$
for all $y$ and all $\lambda$. However, when the constraint $A^\top y = f$ holds, $L(y, \lambda) = Q(y)$, and thus
for any admissible $y$, which means that $A^\top y = f$, we have
$$\min_y Q(y) \ge \max_\lambda\, -P(\lambda).$$
In order to prove that the unique minimum of the constrained problem Q(y) subject to
$A^\top y = f$ is the unique maximum of $-P(\lambda)$, we compute $Q(y) + P(\lambda)$.
Proposition 18.3. The quadratic constrained minimization problem of Definition 18.3 has
a unique solution $(y, \lambda)$ given by the system
$$\begin{pmatrix} C^{-1} & A \\ A^\top & 0 \end{pmatrix}
\begin{pmatrix} y \\ \lambda \end{pmatrix}
=
\begin{pmatrix} b \\ f \end{pmatrix}.$$
Furthermore, the component $\lambda$ of the above solution is the unique value for which $-P(\lambda)$ is
maximum.
Proof. As we suggested earlier, let us compute $Q(y) + P(\lambda)$, assuming that the constraint
$A^\top y = f$ holds. Eliminating $f$, since $b^\top y = y^\top b$ and $\lambda^\top A^\top y = y^\top A\lambda$, we get
$$Q(y) + P(\lambda)
= \tfrac{1}{2} y^\top C^{-1} y - b^\top y + y^\top A\lambda + \tfrac{1}{2}(A\lambda - b)^\top C (A\lambda - b)
= \tfrac{1}{2} (C^{-1} y + A\lambda - b)^\top C (C^{-1} y + A\lambda - b).$$
Since $C$ is positive definite, the last expression is nonnegative, and it is null iff
$$C^{-1} y + A\lambda - b = 0,$$
that is,
$$C^{-1} y + A\lambda = b.$$
But then the unique constrained minimum of $Q(y)$ subject to $A^\top y = f$ is equal to the
unique maximum of $-P(\lambda)$ exactly when $A^\top y = f$ and $C^{-1} y + A\lambda = b$, which proves the
proposition.
Remarks:
(1) There is a form of duality going on in this situation. The constrained minimization
of Q(y) subject to A> y = f is called the primal problem, and the unconstrained
maximization of P ( ) is called the dual problem. Duality is the fact stated slightly
loosely as
$$\min_y Q(y) = \max_\lambda\, -P(\lambda).$$
Recalling that $e = b - A\lambda$, since
$$P(\lambda) = \tfrac{1}{2}(A\lambda - b)^\top C (A\lambda - b) + \lambda^\top f,$$
we can also write
$$-P(\lambda) = -\tfrac{1}{2} e^\top C e - \lambda^\top f.$$
This expression often represents the total potential energy of a system. Again, the
optimal solution is the one that minimizes the potential energy (and thus maximizes
$-P(\lambda)$).
(2) It is immediately verified that the equations of Proposition 18.3 are equivalent to the
equations stating that the partial derivatives of the Lagrangian $L(y, \lambda)$ are null:
$$\frac{\partial L}{\partial y_i} = 0, \quad i = 1, \ldots, m,
\qquad
\frac{\partial L}{\partial \lambda_j} = 0, \quad j = 1, \ldots, n.$$
Thus, going back to our example of minimizing $Q(y_1, y_2) = \tfrac{1}{2}(y_1^2 + y_2^2)$ subject to
$$2y_1 - y_2 = 5,$$
the Lagrangian is
$$L(y_1, y_2, \lambda) = \tfrac{1}{2}(y_1^2 + y_2^2) + \lambda(2y_1 - y_2 - 5),$$
and the equations stating that the Lagrangian has a saddle point are
$$y_1 + 2\lambda = 0, \qquad y_2 - \lambda = 0, \qquad 2y_1 - y_2 - 5 = 0,$$
whose solution is $(y_1, y_2, \lambda) = (2, -1, -1)$.
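A numpy sketch of this example, assembling and solving the saddle-point system of Proposition 18.3 (the block names below are choices made here):

```python
import numpy as np

# minimize 1/2 (y1^2 + y2^2)  subject to  2*y1 - y2 = 5
Cinv = np.eye(2)                 # C^{-1} in Definition 18.3
A = np.array([[2.0], [-1.0]])    # constraint  A^T y = f
b = np.zeros(2)
f = np.array([5.0])

# the saddle-point system of Proposition 18.3
K = np.block([[Cinv, A],
              [A.T, np.zeros((1, 1))]])
rhs = np.concatenate([b, f])
y1, y2, lam = np.linalg.solve(K, rhs)
print(y1, y2, lam)               # 2.0 -1.0 -1.0
```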
18.2 Quadratic Optimization: The General Case
In this section, we complete the study initiated in Section 18.1 and give necessary and
sufficient conditions for the quadratic function $\tfrac{1}{2} x^\top A x + x^\top b$ to have a global minimum. We
begin with the following simple fact:
Proposition 18.4. If $A$ is an invertible symmetric matrix, then the function
$$f(x) = \tfrac{1}{2} x^\top A x + x^\top b$$
has a minimum value iff $A \succ 0$, in which case this optimal value is obtained for a unique
value of $x$, namely $x = -A^{-1} b$, and with
$$f(-A^{-1} b) = -\tfrac{1}{2}\, b^\top A^{-1} b.$$
Proof. Observe that
$$f(x) = \tfrac{1}{2} x^\top A x + x^\top b
= \tfrac{1}{2}(x + A^{-1}b)^\top A (x + A^{-1}b) - \tfrac{1}{2}\, b^\top A^{-1} b.$$
If $A$ has some negative eigenvalue, say $-\lambda$ (with $\lambda > 0$), if we pick any eigenvector $u$ of $A$
associated with $-\lambda$, then for any $\alpha \in \mathbb{R}$ with $\alpha \ne 0$, if we let $x = \alpha u - A^{-1}b$, then since
$Au = -\lambda u$, we get
$$f(x) = \tfrac{1}{2}(x + A^{-1}b)^\top A (x + A^{-1}b) - \tfrac{1}{2}\, b^\top A^{-1} b
= \tfrac{1}{2} \alpha^2 u^\top A u - \tfrac{1}{2}\, b^\top A^{-1} b
= -\tfrac{1}{2} \alpha^2 \lambda \|u\|_2^2 - \tfrac{1}{2}\, b^\top A^{-1} b,$$
and since $\alpha$ can be made as large as we want and $\lambda > 0$, we see that $f$ has no minimum.
Consequently, in order for $f$ to have a minimum, we must have $A \succeq 0$, and since $A$ is
invertible, $A \succ 0$. In this case, since $(x + A^{-1}b)^\top A (x + A^{-1}b) \ge 0$, it is clear that the
minimum value of $f$ is achieved when $x + A^{-1}b = 0$, that is, $x = -A^{-1}b$.
Let us now consider the case of an arbitrary symmetric matrix $A$.
Let us now consider the case of an arbitrary symmetric matrix A.
Proposition 18.5. If $A$ is a symmetric matrix, then the function
$$f(x) = \tfrac{1}{2} x^\top A x + x^\top b$$
has a minimum value iff $A \succeq 0$ and $(I - AA^+)b = 0$, in which case this minimum value is
$$p^* = -\tfrac{1}{2}\, b^\top A^+ b.$$
Furthermore, if $A = U^\top \Sigma U$ with $\Sigma$ diagonal, then the optimal value is achieved by all
$x \in \mathbb{R}^n$ of the form
$$x = -A^+ b + U^\top \begin{pmatrix} 0 \\ z \end{pmatrix},$$
for any $z \in \mathbb{R}^{n-r}$, where $r$ is the rank of $A$.
Proof. The case that A is invertible is taken care of by Proposition 18.4, so we may assume
that A is singular. If A has rank r < n, then we can diagonalize A as
$$A = U^\top \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} U,$$
where $\Sigma_r$ is an $r \times r$ invertible diagonal matrix. Then
$$f(x) = \tfrac{1}{2} x^\top U^\top \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} U x + x^\top U^\top U b
= \tfrac{1}{2} (Ux)^\top \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} (Ux) + (Ux)^\top U b.$$
If we write
$$Ux = \begin{pmatrix} y \\ z \end{pmatrix} \quad \text{and} \quad Ub = \begin{pmatrix} c \\ d \end{pmatrix},$$
with $y, c \in \mathbb{R}^r$ and $z, d \in \mathbb{R}^{n-r}$, we get
$$f(x) = \tfrac{1}{2} y^\top \Sigma_r y + y^\top c + z^\top d.$$
For $y = 0$, we get $f(x) = z^\top d$, so if $d \ne 0$, the function $f$ has no minimum. Therefore, if $f$
has a minimum, then $d = 0$. However, $d = 0$ means that
$$Ub = \begin{pmatrix} c \\ 0 \end{pmatrix},$$
and we know from Section 17.1 that $b$ is in the range of $A$ (here, $V$ is $U^\top$), which is equivalent
to $(I - AA^+)b = 0$. If $d = 0$, then
$$f(x) = \tfrac{1}{2} y^\top \Sigma_r y + y^\top c,$$
and since $\Sigma_r$ is invertible, by Proposition 18.4, the function $f$ has a minimum iff $\Sigma_r \succ 0$,
which is equivalent to $A \succeq 0$. The minimum is then achieved for $y = -\Sigma_r^{-1} c$ and $z$
arbitrary, that is, for all $x$ with
$$Ux = \begin{pmatrix} -\Sigma_r^{-1} c \\ z \end{pmatrix} \quad \text{and} \quad Ub = \begin{pmatrix} c \\ 0 \end{pmatrix},$$
which yields
$$x = U^\top \begin{pmatrix} -\Sigma_r^{-1} c \\ 0 \end{pmatrix} + U^\top \begin{pmatrix} 0 \\ z \end{pmatrix}
= -A^+ b + U^\top \begin{pmatrix} 0 \\ z \end{pmatrix},$$
and the minimum value of $f$ is
$$f(x) = -\tfrac{1}{2}\, c^\top \Sigma_r^{-1} c = -\tfrac{1}{2}\, b^\top A^+ b.$$
The case in which we add either linear constraints of the form $C^\top x = 0$ or affine constraints
of the form $C^\top x = t$ (where $t \ne 0$) can be reduced to the unconstrained case using a
QR-decomposition of $C$ or $N$. Let us show how to do this for linear constraints of the form
$C^\top x = 0$.
If we use a QR decomposition of $C$, by permuting the columns, we may assume that
$$C = Q^\top \begin{pmatrix} R & S \\ 0 & 0 \end{pmatrix},$$
where $R$ is an $r \times r$ invertible upper triangular matrix and $S$ is an $r \times (m-r)$ matrix ($C$
has rank $r$). Then, if we let
$$x = Q^\top \begin{pmatrix} y \\ z \end{pmatrix},$$
where $y \in \mathbb{R}^r$ and $z \in \mathbb{R}^{n-r}$, then $C^\top x = 0$ becomes
$$\begin{pmatrix} R^\top & 0 \\ S^\top & 0 \end{pmatrix} Q x
= \begin{pmatrix} R^\top & 0 \\ S^\top & 0 \end{pmatrix} \begin{pmatrix} y \\ z \end{pmatrix} = 0,$$
which implies $y = 0$, since $R$ is invertible. Our problem becomes
$$\text{minimize} \quad \tfrac{1}{2} (y^\top, z^\top)\, Q A Q^\top \begin{pmatrix} y \\ z \end{pmatrix} + (y^\top, z^\top)\, Qb
\quad \text{subject to} \quad y = 0,\ y \in \mathbb{R}^r,\ z \in \mathbb{R}^{n-r}.$$
Thus, if we write
$$Q A Q^\top = \begin{pmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{pmatrix}
\quad \text{and} \quad
Qb = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix},
\qquad b_1 \in \mathbb{R}^r,\ b_2 \in \mathbb{R}^{n-r},$$
our problem reduces to
$$\text{minimize} \quad \tfrac{1}{2} z^\top G_{22} z + z^\top b_2, \qquad z \in \mathbb{R}^{n-r},$$
the problem solved in Proposition 18.5.
Affine constraints of the form $C^\top x = t$ (where $t \ne 0$) are handled similarly: with the same
QR decomposition, the constraint becomes
$$(R^\top, 0) \begin{pmatrix} y \\ z \end{pmatrix} = t,$$
which yields
$$R^\top y = t.$$
Since $R$ is invertible, we get $y = (R^\top)^{-1} t$, and then it is easy to see that our original problem
reduces to an unconstrained problem in terms of the matrix $P^\top A P$; the details are left as
an exercise.
18.3 Maximizing a Quadratic Function on the Unit Sphere
In this section we discuss various quadratic optimization problems mostly arising from computer vision (image segmentation and contour grouping). These problems can be reduced to
the following basic optimization problem: Given an $n \times n$ real symmetric matrix $A$,
$$\text{maximize} \quad x^\top A x \quad \text{subject to} \quad x^\top x = 1, \ x \in \mathbb{R}^n.$$
502
In view of Proposition 17.6, the maximum value of $x^\top A x$ on the unit sphere is equal
to the largest eigenvalue $\lambda_1$ of the matrix $A$, and it is achieved for any unit eigenvector $u_1$
associated with $\lambda_1$.
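A quick numpy check of this fact (the matrix is a random symmetric matrix; the tolerance is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                        # symmetric

w, V = np.linalg.eigh(A)                 # eigenvalues in ascending order
lam_max, u1 = w[-1], V[:, -1]            # largest eigenvalue and a unit eigenvector

# the maximum of x^T A x over the unit sphere is lam_max, attained at u1
x = rng.standard_normal(5)
x /= np.linalg.norm(x)
assert x @ A @ x <= lam_max + 1e-12
assert np.isclose(u1 @ A @ u1, lam_max)
```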
A variant of the above problem often encountered in computer vision consists in minimizing x> Ax on the ellipsoid given by an equation of the form
x> Bx = 1,
where B is a symmetric positive definite matrix. Since B is positive definite, it can be
diagonalized as
B = QDQ> ,
where Q is an orthogonal matrix and D is a diagonal matrix,
D = diag(d1 , . . . , dn ),
with $d_i > 0$ for $i = 1, \ldots, n$. If we define the matrices $B^{1/2}$ and $B^{-1/2}$ by
$$B^{1/2} = Q\, \operatorname{diag}\big(\sqrt{d_1}, \ldots, \sqrt{d_n}\big)\, Q^\top$$
and
$$B^{-1/2} = Q\, \operatorname{diag}\big(1/\sqrt{d_1}, \ldots, 1/\sqrt{d_n}\big)\, Q^\top,$$
it is clear that these matrices are symmetric, that $B^{-1/2} B B^{-1/2} = I$, and that $B^{1/2}$ and
$B^{-1/2}$ are mutual inverses. Then, if we make the change of variable
$$x = B^{-1/2} y,$$
the constraint $x^\top B x = 1$ becomes $y^\top y = 1$, and the problem
$$\text{minimize} \quad x^\top A x \quad \text{subject to} \quad x^\top B x = 1, \ x \in \mathbb{R}^n,$$
is equivalent to the problem
$$\text{minimize} \quad y^\top B^{-1/2} A B^{-1/2} y \quad \text{subject to} \quad y^\top y = 1, \ y \in \mathbb{R}^n,$$
where $y = B^{1/2} x$ and $B^{-1/2} A B^{-1/2}$ is symmetric.
The complex version of our basic optimization problem in which A is a Hermitian matrix
also arises in computer vision. Namely, given an n n complex Hermitian matrix A,
$$\text{maximize} \quad x^* A x \quad \text{subject to} \quad x^* x = 1, \ x \in \mathbb{C}^n.$$
503
Again by Proposition 17.6, the maximum value of $x^* A x$ on the unit sphere is equal to the
largest eigenvalue $\lambda_1$ of the matrix $A$ and it is achieved for any unit eigenvector $u_1$ associated
with $\lambda_1$.
It is worth pointing out that if $A$ is a skew-Hermitian matrix, that is, if $A^* = -A$, then
$x^* A x$ is pure imaginary or zero.
Next, we consider the problem of adding a linear constraint to the basic real problem:
$$\text{maximize} \quad x^\top A x \quad \text{subject to} \quad x^\top x = 1, \ C^\top x = 0, \ x \in \mathbb{R}^n.$$
Golub shows that the linear constraint C > x = 0 can be eliminated as follows: If we use
a QR decomposition of $C$, by permuting the columns, we may assume that
$$C = Q^\top \begin{pmatrix} R & S \\ 0 & 0 \end{pmatrix},$$
where $R$ is an $r \times r$ invertible upper triangular matrix and $S$ is an $r \times (p-r)$ matrix (assuming
$C$ has rank $r$). Then if we let
$$x = Q^\top \begin{pmatrix} y \\ z \end{pmatrix},$$
where $y \in \mathbb{R}^r$ and $z \in \mathbb{R}^{n-r}$, then $C^\top x = 0$ becomes
$$\begin{pmatrix} R^\top & 0 \\ S^\top & 0 \end{pmatrix} Q x
= \begin{pmatrix} R^\top & 0 \\ S^\top & 0 \end{pmatrix} \begin{pmatrix} y \\ z \end{pmatrix} = 0,$$
which implies $y = 0$. Our problem becomes
$$\text{minimize} \quad (y^\top, z^\top)\, Q A Q^\top \begin{pmatrix} y \\ z \end{pmatrix}
\quad \text{subject to} \quad z^\top z = 1,\ z \in \mathbb{R}^{n-r}, \quad y = 0,\ y \in \mathbb{R}^r.$$
Thus, if we write
$$Q A Q^\top = \begin{pmatrix} G_{11} & G_{12} \\ G_{12}^\top & G_{22} \end{pmatrix},$$
our problem becomes
$$\text{minimize} \quad z^\top G_{22} z \quad \text{subject to} \quad z^\top z = 1,\ z \in \mathbb{R}^{n-r}.$$
Observe that if we let
$$J = \begin{pmatrix} 0 & 0 \\ 0 & I_{n-r} \end{pmatrix},$$
then
$$J Q A Q^\top J = \begin{pmatrix} 0 & 0 \\ 0 & G_{22} \end{pmatrix},$$
and if we set
$$P = Q^\top J Q,$$
then
$$P A P = Q^\top J Q A Q^\top J Q.$$
Now, $Q^\top J Q A Q^\top J Q$ and $J Q A Q^\top J$ have the same eigenvalues, so $P A P$ and $J Q A Q^\top J$ also
have the same eigenvalues. It follows that the solutions of our optimization problem are
among the eigenvalues of $K = P A P$, and at least $r$ of those are $0$. Using the fact that $C C^+$
is the projection onto the range of $C$, where $C^+$ is the pseudo-inverse of $C$, it can also be
shown that
$$P = I - C C^+,$$
the projection onto the kernel of $C^\top$. In particular, when $n \ge p$ and $C$ has full rank (the
columns of $C$ are linearly independent), then we know that $C^+ = (C^\top C)^{-1} C^\top$ and
$$P = I - C (C^\top C)^{-1} C^\top.$$
This fact is used by Cour and Shi [25] and implicitly by Yu and Shi [116].
The problem of adding affine constraints of the form N > x = t, where t 6= 0, also comes
up in practice. At first glance, this problem may not seem harder than the linear problem in
which t = 0, but it is. This problem was extensively studied in a paper by Gander, Golub,
and von Matt [45] (1989).
Gander, Golub, and von Matt consider the following problem: Given an $(n+m) \times (n+m)$
real symmetric matrix $A$ (with $n > 0$), an $(n+m) \times m$ matrix $N$ with full rank, and a nonzero
vector $t \in \mathbb{R}^m$ with $\|(N^\top)^+ t\| < 1$ (where $(N^\top)^+$ denotes the pseudo-inverse of $N^\top$),
$$\text{minimize} \quad x^\top A x \quad \text{subject to} \quad x^\top x = 1, \ N^\top x = t, \ x \in \mathbb{R}^{n+m}.$$
The condition $\|(N^\top)^+ t\| < 1$ ensures that the problem has a solution and is not trivial.
The authors begin by proving that the affine constraint N > x = t can be eliminated. One
way to do so is to use a QR decomposition of N . If
R
N =P
,
0
where P is an orthogonal matrix and R is an m m invertible upper triangular matrix, then
if we observe that
x> Ax = x> P P > AP P > x,
N > x = (R> , 0)P > x = t,
x> x = x> P P > x = 1,
and if we write
>
P AP =
and
>
y
P x=
,
z
>
then we get
x> Ax = y > By + 2z > y + z > Cz,
R> y = t,
y > y + z > z = 1.
Thus
y = (R> ) 1 t,
506
and if we write
y>y > 0
s2 = 1
and
b = y,
we get the simplified problem
minimize
subject to
z > Cz + 2z > b
z > z = s2 , z 2 Rm .
18.4
Summary
The main concepts and results of this chapter are listed below:
Quadratic optimization problems; quadratic functions.
Symmetric positive definite and positive semidefinite matrices.
The positive semidefinite cone ordering.
Existence of a global minimum when A is symmetric positive definite.
Constrained quadratic optimization problems.
Lagrange multipliers; Lagrangian.
Primal and dual problems.
Quadratic optimization problems: the case of a symmetric invertible matrix A.
Quadratic optimization problems: the general case of a symmetric matrix A.
Adding linear constraints of the form C > x = 0.
Adding affine constraints of the form C > x = t, with t 6= 0.
Maximizing a quadratic function over the unit sphere.
Maximizing a quadratic function over an ellipsoid.
Maximizing a Hermitian quadratic form.
Adding linear constraints of the form C > x = 0.
Adding affine constraints of the form N > x = t, with t 6= 0.
Chapter 19
Basics of Affine Geometry
Algebra is but written geometry; geometry is but figured algebra.
--Sophie Germain
19.1
Affine Spaces
Geometrically, curves and surfaces are usually considered to be sets of points with some
special properties, living in a space consisting of points. Typically, one is also interested
in geometric properties invariant under certain transformations, for example, translations,
rotations, projections, etc. One could model the space of points as a vector space, but this is
not very satisfactory for a number of reasons. One reason is that the point corresponding to
the zero vector (0), called the origin, plays a special role, when there is really no reason to have
a privileged origin. Another reason is that certain notions, such as parallelism, are handled
in an awkward manner. But the deeper reason is that vector spaces and affine spaces really
have different geometries. The geometric properties of a vector space are invariant under
the group of bijective linear maps, whereas the geometric properties of an affine space are
invariant under the group of bijective affine maps, and these two groups are not isomorphic.
Roughly speaking, there are more affine maps than linear maps.
Affine spaces provide a better framework for doing geometry. In particular, it is possible
to deal with points, curves, surfaces, etc., in an intrinsic manner, that is, independently
of any specific choice of a coordinate system. As in physics, this is highly desirable to
really understand what is going on. Of course, coordinate systems have to be chosen to
finally carry out computations, but one should learn to resist the temptation to resort to
coordinate systems until it is really necessary.
Affine spaces are the right framework for dealing with motions, trajectories, and physical
forces, among other things. Thus, affine geometry is crucial to a clean presentation of
kinematics, dynamics, and other parts of physics (for example, elasticity). After all, a rigid
motion is an affine map, but not a linear map in general. Also, given an m n matrix A
and a vector b 2 Rm , the set U = {x 2 Rn | Ax = b} of solutions of the system Ax = b is an
[Figure 19.1: two points $a$ and $b$ and the vector $\overrightarrow{ab}$, drawn from an origin $O$.]
which means that the columns of P are the coordinates of the e0j over the basis (e1 , e2 , e3 ),
since
u1 e1 + u2 e2 + u3 e3 = u01 e01 + u02 e02 + u03 e03
and
v1 e1 + v2 e2 + v3 e3 = v10 e01 + v20 e02 + v30 e03 ,
it is easy to see that the coordinates (u1 , u2 , u3 ) and (v1 , v2 , v3 ) of u and v with respect to
the basis (e1 , e2 , e3 ) are given in terms of the coordinates (u01 , u02 , u03 ) and (v10 , v20 , v30 ) of u and
v with respect to the basis (e01 , e02 , e03 ) by the matrix equations
$$\begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} = P \begin{pmatrix} u'_1 \\ u'_2 \\ u'_3 \end{pmatrix}
\quad \text{and} \quad
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = P \begin{pmatrix} v'_1 \\ v'_2 \\ v'_3 \end{pmatrix}.$$
From the above, we get
$$\begin{pmatrix} u'_1 \\ u'_2 \\ u'_3 \end{pmatrix} = P^{-1} \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix}
\quad \text{and} \quad
\begin{pmatrix} v'_1 \\ v'_2 \\ v'_3 \end{pmatrix} = P^{-1} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix},$$
and by adding these equations, the coordinates of $u + v$ with respect to the basis $(e'_1, e'_2, e'_3)$ are
$$\begin{pmatrix} u'_1 + v'_1 \\ u'_2 + v'_2 \\ u'_3 + v'_3 \end{pmatrix}
= P^{-1} \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \\ u_3 + v_3 \end{pmatrix}.$$
Everything worked out because the change of basis does not involve a change of origin. On the
other hand, if we consider the change of frame from the frame $(O, (e_1, e_2, e_3))$ to the frame
$(\Omega, (e_1, e_2, e_3))$, where $\overrightarrow{O\Omega} = (\omega_1, \omega_2, \omega_3)$, given two points $a, b$ of coordinates $(a_1, a_2, a_3)$
and $(b_1, b_2, b_3)$ with respect to the frame $(O, (e_1, e_2, e_3))$ and of coordinates $(a'_1, a'_2, a'_3)$ and
$(b'_1, b'_2, b'_3)$ with respect to the frame $(\Omega, (e_1, e_2, e_3))$, since
$$(a'_1, a'_2, a'_3) = (a_1 - \omega_1,\ a_2 - \omega_2,\ a_3 - \omega_3)$$
and
$$(b'_1, b'_2, b'_3) = (b_1 - \omega_1,\ b_2 - \omega_2,\ b_3 - \omega_3),$$
the coordinates of $\lambda a + \mu b$ with respect to the frame $(\Omega, (e_1, e_2, e_3))$ are
$$(\lambda a_1 + \mu b_1 - (\lambda + \mu)\omega_1,\ \lambda a_2 + \mu b_2 - (\lambda + \mu)\omega_2,\ \lambda a_3 + \mu b_3 - (\lambda + \mu)\omega_3),$$
which are different from
$$(\lambda a_1 + \mu b_1 - \omega_1,\ \lambda a_2 + \mu b_2 - \omega_2,\ \lambda a_3 + \mu b_3 - \omega_3),$$
unless $\lambda + \mu = 1$.
Thus, we have discovered a major dierence between vectors and points: The notion of
linear combination of vectors is basis independent, but the notion of linear combination of
points is frame dependent. In order to salvage the notion of linear combination of points,
some restriction is needed: The scalar coefficients must add up to 1.
A clean way to handle the problem of frame invariance and to deal with points in a more
intrinsic manner is to make a clearer distinction between points and vectors. We duplicate
R3 into two copies, the first copy corresponding to points, where we forget the vector space
structure, and the second copy corresponding to free vectors, where the vector space structure
is important. Furthermore, we make explicit the important fact that the vector space R3 acts
on the set of points R3 : Given any point a = (a1 , a2 , a3 ) and any vector v = (v1 , v2 , v3 ),
we obtain the point
a + v = (a1 + v1 , a2 + v2 , a3 + v3 ),
which can be thought of as the result of translating a to b using the vector v. We can imagine
that v is placed such that its origin coincides with a and that its tip coincides with b. This
action + : R3 R3 ! R3 satisfies some crucial properties. For example,
a + 0 = a,
(a + u) + v = a + (u + v),
!
and for any two points a, b, there is a unique free vector ab such that
!
b = a + ab.
It turns out that the above properties, although trivial in the case of R3 , are all that is
needed to define the abstract notion of affine space (or affine structure). The basic idea is
!
to consider two (distinct) sets E and E , where E is a set of points (with no structure) and
!
E is a vector space (of free vectors) acting on the set E.
Did you say A fine space?
Intuitively, we can think of the elements of $\overrightarrow{E}$ as forces moving the points in $E$, considered
as physical particles. The effect of applying a force (free vector) $u \in \overrightarrow{E}$ to a point $a \in E$ is
a translation. By this, we mean that for every force $u \in \overrightarrow{E}$, the action of the force $u$ is to
move every point $a \in E$ to the point $a + u \in E$ obtained by the translation corresponding
to $u$ viewed as a vector. Since translations can be composed, it is natural that $\overrightarrow{E}$ is a vector
space.
For simplicity, it is assumed that all vector spaces under consideration are defined over
the field R of real numbers. Most of the definitions and results also hold for an arbitrary field
$K$, although some care is needed when dealing with fields of characteristic different from zero
(see the problems). It is also assumed that all families ( i )i2I of scalars have finite support.
Recall that a family ( i )i2I of scalars has finite support if i = 0 for all i 2 I J, where
J is a finite subset of I. Obviously, finite families of scalars have finite support, and for
simplicity, the reader may assume that all families of scalars are finite. The formal definition
of an affine space is as follows.
Definition 19.1. An affine space is either the degenerate space reduced to the empty set,
or a triple $\langle E, \overrightarrow{E}, + \rangle$ consisting of a nonempty set $E$ (of points), a vector space $\overrightarrow{E}$ (of
translations, or free vectors), and an action $+ \colon E \times \overrightarrow{E} \to E$, satisfying the following conditions.
(A1) $a + 0 = a$, for every $a \in E$.
(A2) $(a + u) + v = a + (u + v)$, for every $a \in E$, and every $u, v \in \overrightarrow{E}$.
(A3) For any two points $a, b \in E$, there is a unique $u \in \overrightarrow{E}$ such that $a + u = b$.
The unique vector $u \in \overrightarrow{E}$ such that $a + u = b$ is denoted by $\overrightarrow{ab}$, or sometimes by $\mathbf{ab}$, or
even by $b - a$. Thus, we also write
$$b = a + \overrightarrow{ab}$$
[Figure 19.2: intuitive picture of an affine space: points $a$, $b = a + u$, $c = a + w$ and free vectors $u$, $v$, $w$.]
For every $a \in E$, consider the mapping from $\overrightarrow{E}$ to $E$ given by
$$u \mapsto a + u,$$
where $u \in \overrightarrow{E}$, and consider the mapping from $E$ to $\overrightarrow{E}$ given by
$$b \mapsto \overrightarrow{ab},$$
where b 2 E. The composition of the first mapping with the second is
$$u \mapsto a + u \mapsto \overrightarrow{a\,(a+u)},$$
which, in view of (A3), yields $u$. The composition of the second with the first mapping is
$$b \mapsto \overrightarrow{ab} \mapsto a + \overrightarrow{ab},$$
which, in view of (A3), yields $b$. Thus, these compositions are the identity from $\overrightarrow{E}$ to $\overrightarrow{E}$
and the identity from $E$ to $E$, and the mappings are both bijections.
When we identify $E$ with $\overrightarrow{E}$ via the mapping $b \mapsto \overrightarrow{ab}$, we say that we consider $E$ as the
vector space obtained by taking $a$ as the origin in $E$, and we denote it by $E_a$. Because $E_a$ is
a vector space, to be consistent with our notational conventions we should use the notation
$\overrightarrow{E_a}$ (using an arrow), instead of $E_a$. However, for simplicity, we stick to the notation $E_a$.
Thus, an affine space $\langle E, \overrightarrow{E}, + \rangle$ is a way of defining a vector space structure on a set of
points $E$, without making a commitment to a fixed origin in $E$. Nevertheless, as soon as
we commit to an origin $a$ in $E$, we can view $E$ as the vector space $E_a$. However, we urge
the reader to think of $E$ as a physical set of points and of $\overrightarrow{E}$ as a set of forces acting on $E$,
rather than reducing $E$ to some isomorphic copy of $\mathbb{R}^n$. After all, points are points, and not
vectors! For notational simplicity, we will often denote an affine space $\langle E, \overrightarrow{E}, + \rangle$ by $(E, \overrightarrow{E})$,
or even by $E$. The vector space $\overrightarrow{E}$ is called the vector space associated with $E$.
One should be careful about the overloading of the addition symbol $+$. Addition
is well-defined on vectors, as in $u + v$; the translate $a + u$ of a point $a \in E$ by a
vector $u \in \overrightarrow{E}$ is also well-defined, but addition of points $a + b$ does not make sense. In
this respect, the notation $b - a$ for the unique vector $u$ such that $b = a + u$ is somewhat
confusing, since it suggests that points can be subtracted (but not added!).
Any vector space $\overrightarrow{E}$ has an affine space structure specified by choosing $E = \overrightarrow{E}$ and
letting $+$ be addition in the vector space $\overrightarrow{E}$. We will refer to the affine structure $\langle \overrightarrow{E}, \overrightarrow{E}, + \rangle$
on a vector space $\overrightarrow{E}$ as the canonical (or natural) affine structure on $\overrightarrow{E}$. In particular, the
vector space $\mathbb{R}^n$ can be viewed as the affine space $\langle \mathbb{R}^n, \mathbb{R}^n, + \rangle$; more generally,
if $K$ is any field, the affine space $\langle K^n, K^n, + \rangle$ is denoted by $\mathbb{A}^n_K$. In order to distinguish
between the double role played by members of $\mathbb{R}^n$, points and vectors, we will denote points
by row vectors, and vectors by column vectors. Thus, the action of the vector space $\mathbb{R}^n$ over
0 1
u1
B .. C
(a1 , . . . , an ) + @ . A = (a1 + u1 , . . . , an + un ).
un
We will also use the convention that if x = (x1 , . . . , xn ) 2 Rn , then the column vector
associated with x is denoted by x (in boldface notation). Abusing the notation slightly, if
a 2 Rn is a point, we also write a 2 An . The affine space An is called the real affine space of
dimension n. In most cases, we will consider n = 1, 2, 3.
19.2 Examples of Affine Spaces
Let us now give an example of an affine space that is not given as a vector space (at least, not
in an obvious fashion). Consider the subset L of A2 consisting of all points (x, y) satisfying
the equation
x + y 1 = 0.
The set $L$ is the line of slope $-1$ that passes through the points $(1, 0)$ and $(0, 1)$, shown in
Figure 19.3.
The line L can be made into an official affine space by defining the action + : L R ! L
of $\mathbb{R}$ on $L$ defined such that for every point $(x, 1 - x)$ on $L$ and any $u \in \mathbb{R}$,
$$(x,\ 1 - x) + u = (x + u,\ 1 - x - u).$$
It is immediately verified that this action makes L into an affine space. For example, for any
two points $a = (a_1, 1 - a_1)$ and $b = (b_1, 1 - b_1)$ on $L$, the unique (vector) $u \in \mathbb{R}$ such that
$b = a + u$ is $u = b_1 - a_1$. Note that the vector space $\mathbb{R}$ is isomorphic to the line of equation
x + y = 0 passing through the origin.
Similarly, consider the subset H of A3 consisting of all points (x, y, z) satisfying the
equation
$$x + y + z - 1 = 0.$$
The set H is the plane passing through the points (1, 0, 0), (0, 1, 0), and (0, 0, 1). The plane
$H$ can be made into an official affine space by defining the action $+ \colon H \times \mathbb{R}^2 \to H$ of $\mathbb{R}^2$ on
$H$ defined such that for every point $(x, y, 1 - x - y)$ on $H$ and any $\begin{pmatrix} u \\ v \end{pmatrix} \in \mathbb{R}^2$,
$$(x,\ y,\ 1 - x - y) + \begin{pmatrix} u \\ v \end{pmatrix} = (x + u,\ y + v,\ 1 - x - u - y - v).$$
For a slightly wilder example, consider the subset P of A3 consisting of all points (x, y, z)
satisfying the equation
$$x^2 + y^2 - z = 0.$$
The set P is a paraboloid of revolution, with axis Oz. The surface P can be made into an
official affine space by defining the action + : P R2 ! P of R2 on P defined such that for
every point $(x, y, x^2 + y^2)$ on $P$ and any $\begin{pmatrix} u \\ v \end{pmatrix} \in \mathbb{R}^2$,
$$(x,\ y,\ x^2 + y^2) + \begin{pmatrix} u \\ v \end{pmatrix} = (x + u,\ y + v,\ (x+u)^2 + (y+v)^2).$$
[Figure 19.4: points $a$, $b$, $c$ and the vectors $\overrightarrow{ab}$, $\overrightarrow{bc}$, $\overrightarrow{ac}$, illustrating Chasles's identity.]
19.3 Chasles's Identity
Given any three points $a, b, c \in E$, since $c = a + \overrightarrow{ac}$, $b = a + \overrightarrow{ab}$, and $c = b + \overrightarrow{bc}$, we get
$$c = b + \overrightarrow{bc} = (a + \overrightarrow{ab}) + \overrightarrow{bc} = a + (\overrightarrow{ab} + \overrightarrow{bc})$$
by (A2), and thus, by (A3),
$$\overrightarrow{ab} + \overrightarrow{bc} = \overrightarrow{ac},$$
19.4 Affine Combinations, Barycenters
Lemma 19.1. Given an affine space $E$, let $(a_i)_{i \in I}$ be a family of points in $E$, and let $(\lambda_i)_{i \in I}$
be a family of scalars. For any two points $a, b \in E$, the following properties hold:
(1) If $\sum_{i \in I} \lambda_i = 1$, then
$$a + \sum_{i \in I} \lambda_i \overrightarrow{a a_i} = b + \sum_{i \in I} \lambda_i \overrightarrow{b a_i}.$$
(2) If $\sum_{i \in I} \lambda_i = 0$, then
$$\sum_{i \in I} \lambda_i \overrightarrow{a a_i} = \sum_{i \in I} \lambda_i \overrightarrow{b a_i}.$$
Proof. (1) By Chasles's identity, $\overrightarrow{a a_i} = \overrightarrow{a b} + \overrightarrow{b a_i}$, so
$$a + \sum_{i \in I} \lambda_i \overrightarrow{a a_i}
= a + \sum_{i \in I} \lambda_i (\overrightarrow{a b} + \overrightarrow{b a_i})
= a + \Big(\sum_{i \in I} \lambda_i\Big) \overrightarrow{a b} + \sum_{i \in I} \lambda_i \overrightarrow{b a_i}
= a + \overrightarrow{a b} + \sum_{i \in I} \lambda_i \overrightarrow{b a_i}
= b + \sum_{i \in I} \lambda_i \overrightarrow{b a_i},$$
using $\sum_{i \in I} \lambda_i = 1$ and $b = a + \overrightarrow{a b}$.
(2) We also have
$$\sum_{i \in I} \lambda_i \overrightarrow{a a_i}
= \sum_{i \in I} \lambda_i (\overrightarrow{a b} + \overrightarrow{b a_i})
= \Big(\sum_{i \in I} \lambda_i\Big) \overrightarrow{a b} + \sum_{i \in I} \lambda_i \overrightarrow{b a_i}
= \sum_{i \in I} \lambda_i \overrightarrow{b a_i},$$
since $\sum_{i \in I} \lambda_i = 0$.
Thus, by Lemma 19.1, for any family of points $(a_i)_{i \in I}$ in $E$, for any family $(\lambda_i)_{i \in I}$ of
scalars such that $\sum_{i \in I} \lambda_i = 1$, the point
$$x = a + \sum_{i \in I} \lambda_i \overrightarrow{a a_i}$$
is independent of the choice of the origin $a \in E$. This property motivates the following
definition.
Definition 19.2. For any family of points $(a_i)_{i \in I}$ in $E$, for any family $(\lambda_i)_{i \in I}$ of scalars such
that $\sum_{i \in I} \lambda_i = 1$, and for any $a \in E$, the point
$$a + \sum_{i \in I} \lambda_i \overrightarrow{a a_i}$$
(which is independent of the choice of $a \in E$, by Lemma 19.1) is called the barycenter (or
barycentric combination, or affine combination) of the points $a_i$ assigned the weights $\lambda_i$, and
it is denoted by $\sum_{i \in I} \lambda_i a_i$.
Figure 19.5 illustrates the geometric construction of the barycenters $g_1$ and $g_2$ of the
weighted points $\big(a, \tfrac{1}{4}\big)$, $\big(b, \tfrac{1}{4}\big)$, and $\big(c, \tfrac{1}{2}\big)$, and $(a, -1)$, $(b, 1)$, and $(c, 1)$.
The point $g_1$ can be constructed geometrically as the middle of the segment joining $c$ to
the middle $\tfrac{1}{2}a + \tfrac{1}{2}b$ of the segment $(a, b)$, since
$$g_1 = \tfrac{1}{2}\Big(\tfrac{1}{2}a + \tfrac{1}{2}b\Big) + \tfrac{1}{2}c.$$
The point $g_2$ can be constructed geometrically as the point such that the middle $\tfrac{1}{2}b + \tfrac{1}{2}c$ of
the segment $(b, c)$ is the middle of the segment $(a, g_2)$, since
$$g_2 = -a + 2\Big(\tfrac{1}{2}b + \tfrac{1}{2}c\Big).$$
[Figure 19.5: the barycenters $g_1$ and $g_2$ of the weighted points $a$, $b$, $c$.]
Later on, we will see that a polynomial curve can be defined as a set of barycenters of a
fixed number of points. For example, let (a, b, c, d) be a sequence of points in A2 . Observe
that
$$(1-t)^3 + 3t(1-t)^2 + 3t^2(1-t) + t^3 = 1,$$
since the sum on the left-hand side is obtained by expanding $(t + (1-t))^3 = 1$ using the
binomial formula. Thus,
$$(1-t)^3\, a + 3t(1-t)^2\, b + 3t^2(1-t)\, c + t^3\, d$$
is a well-defined affine combination. Then, we can define the curve $F \colon \mathbb{A} \to \mathbb{A}^2$ such that
$$F(t) = (1-t)^3\, a + 3t(1-t)^2\, b + 3t^2(1-t)\, c + t^3\, d.$$
Such a curve is called a Bezier curve, and $(a, b, c, d)$ are called its control points. Note that
the curve passes through $a$ and $d$, but generally not through $b$ and $c$. It can be shown
that any point $F(t)$ on the curve can be constructed using an algorithm performing affine
interpolation steps (the de Casteljau algorithm); a small sketch is given below.
19.5
Affine Subspaces
521
!
Definition 19.3. Given an affine space E, E , + , a subset V of E is an affine subspace (of
!
P
E, E , + ) if for
P every family of weighted points ((ai , i ))i2I in V such that i2I i = 1,
the barycenter i2I i ai belongs to V .
An affine subspace is also called a flat by some authors. According to Definition 19.3,
the empty set is trivially an affine subspace, and every intersection of affine subspaces is an
affine subspace.
As an example, consider the subset U of R2 defined by
U = (x, y) 2 R2 | ax + by = c ,
i.e., the set of solutions of the equation
ax + by = c,
where it is assumed that a 6= 0 or b 6= 0. Given any m points (xi , yi ) 2 U and any m scalars
i such that 1 + + m = 1, we claim that
m
X
i (xi , yi )
i=1
2 U.
axi + byi = c,
and if we multiply both sides of this equation by i and add up the resulting m equations,
we get
m
m
X
X
( i axi + i byi ) =
i c,
i=1
and since
+ +
i=1
= 1, we get
!
m
m
X
X
a
x
+
b
i i
i=1
i yi
i=1
m
X
i=1
i xi ,
m
X
i=1
i yi
m
X
i=1
m
X
i=1
i (xi , yi )
c = c,
2 U.
522
U
!
U
i (xi , yi )
i=1
i,
!
2 U,
this time without any restriction on the i , since the right-hand side of the equation is
!
!
null. Thus, U is a subspace of R2 . In fact, U is one-dimensional, and it is just a usual line
in R2 . This line can be identified with a line passing through the origin of A2 , a line that is
parallel to the line U of equation ax + by = c, as illustrated in Figure 19.6.
Now, if (x0 , y0 ) is any point in U , we claim that
!
U = (x0 , y0 ) + U ,
where
! n
!o
(x0 , y0 ) + U = (x0 + u1 , y0 + u2 ) | (u1 , u2 ) 2 U .
!
!
First, (x0 , y0 ) + U U , since ax0 + by0 = c and au1 + bu2 = 0 for all (u1 , u2 ) 2 U . Second,
if (x, y) 2 U , then ax + by = c, and since we also have ax0 + by0 = c, by subtraction, we get
a(x
x0 ) + b(y
y0 ) = 0,
!
!
which shows that (x x0 , y y0 ) 2 U , and thus (x, y) 2 (x0 , y0 ) + U . Hence, we also have
!
!
U (x0 , y0 ) + U , and U = (x0 , y0 ) + U .
The above example shows that the affine line U defined by the equation
ax + by = c
523
!
is obtained by translating the parallel line U of equation
ax + by = 0
passing through the origin. In fact, given any point (x0 , y0 ) 2 U ,
!
U = (x0 , y0 ) + U .
More generally, it is easy to prove the following fact. Given any m n matrix A and any
vector b 2 Rm , the subset U of Rn defined by
U = {x 2 Rn | Ax = b}
is an affine subspace of An .
Actually, observe that Ax = b should really be written as Ax> = b, to be consistent with
our convention that points are represented by row vectors. We can also use the boldface
notation for column vectors, in which case the equation is written as Ax = b. For the sake of
minimizing the amount of notation, we stick to the simpler (yet incorrect) notation Ax = b.
If we consider the corresponding homogeneous equation Ax = 0, the set
!
U = {x 2 Rn | Ax = 0}
is a subspace of Rn , and for any x0 2 U , we have
!
U = x0 + U .
n
X
i aai
i=1
(a1 , 1 ), . . . , (an , n ), a, 1
since
n
X
i=1
i+ 1
n
X
i=1
n
X
i=1
= 1.
! !
!
Given any point a 2 E and any subset V of E , let a + V denote the following subset of E:
! n
!o
a+ V = a+v |v 2 V .
524
!
E
!
V
a
!
V =a+ V
!
Figure 19.7: An affine subspace V and its direction V
!
Lemma 19.2. Let E, E , + be an affine space.
(1) A nonempty subset V of E is an affine subspace i for every point a 2 V , the set
!
!|x2V}
Va = {ax
!
!
is a subspace of E . Consequently, V = a + Va . Furthermore,
!
! | x, y 2 V }
V = {xy
!
! !
!
is a subspace of E and Va = V for all a 2 E. Thus, V = a + V .
! !
!
(2) For any subspace V of E and for any a 2 E, the set V = a + V is an affine subspace.
Proof. The proof is straightforward, and is omitted. It is also given in Gallier [43].
!
In particular, when E is the natural affine space associated with a vector space E , Lemma
!
!
!
19.2 shows that every affine subspace of E is of the form u + U , for a subspace U of E .
!
The subspaces of E are the affine subspaces of E that contain 0.
!
The subspace V associated with an affine subspace V is called the direction of V . It is
!
!
!
also clear that the map + : V V ! V induced by + : E E ! E confers to V, V , + an
affine structure. Figure 19.7 illustrates the notion of affine subspace.
!
By the dimension of the subspace V , we mean the dimension of V .
An affine subspace of dimension 1 is called a line, and an affine subspace of dimension 2
is called a plane.
An affine subspace of codimension 1 is called a hyperplane (recall that a subspace F of
a vector space E has codimension 1 i there is some subspace G of dimension 1 such that
E = F G, the direct sum of F and G, see Strang [105] or Lang [67]).
525
We say that two affine subspaces U and V are parallel if their directions are identical.
!
!
!
!
Equivalently, since U = V , we have U = a + U and V = b + U for any a 2 U and any
!
b 2 V , and thus V is obtained from U by the translation ab.
In general, when we talk about n points a1 , . . . , an , we mean the sequence (a1 , . . . , an ),
and not the set {a1 , . . . , an } (the ai s need not be distinct).
!
By Lemma 19.2, a line is specified by a point a 2 E and a nonzero vector v 2 E , i.e., a
line is the set of all points of the form a + v, for 2 R.
!
We say that three points a, b, c are collinear if the vectors ab and !
ac are linearly dependent. If two of the points a, b, c are distinct, say a 6= b, then there is a unique 2 R such
!
!
ac
that !
ac = ab, and we define the ratio !
= .
ab
!
A plane is specified by a point a 2 E and two linearly independent vectors u, v 2 E , i.e.,
a plane is the set of all points of the form a + u + v, for , 2 R.
!
!
We say that four points a, b, c, d are coplanar if the vectors ab, !
ac, and ad are linearly
dependent. Hyperplanes will be characterized a little later.
!
Lemma 19.3. Given
an
affine
space
P
PE, E , + , for any family (ai )i2I of points in E, the
set V of barycenters i2I i ai (where i2I i = 1) is the smallest affine subspace containing
(ai )i2I .
P
Proof. If (ai )i2I is empty, then V = ;, because of the condition i2I i = 1. If (ai )i2I is
nonempty, then
P the smallest affine subspace containing (ai )i2I must contain the set V of
barycenters i2I i ai , and thus, it is enough to show that V is closed under affine combinations, which is immediately verified.
Given a nonempty subset S of E, the smallest affine subspace of E generated by S is
often denoted by hSi. For example, a line specified by two distinct points a and b is denoted
by ha, bi, or even (a, b), and similarly for planes, etc.
Remarks:
(1) Since it can be shown that the barycenter of n weighted points can be obtained by
repeated computations of barycenters of two weighted points, a nonempty subset V
of E is an affine subspace i for every two points a, b 2 V , the set V contains all
barycentric combinations of a and b. If V contains at least two points, then V is an
affine subspace i for any two distinct points a, b 2 V , the set V contains the line
determined by a and b, that is, the set of all points (1
)a + b, 2 R.
(2) This result still holds if the field K has at least three distinct elements, but the proof
is trickier!
19.6
Corresponding to the notion of linear independence in vector spaces, we have the notion of
affine independence. Given a family (ai )i2I of points in an affine space E, we will reduce the
526
notion of (affine) independence of these points to the (linear) independence of the families
(ai!
aj )j2(I {i}) of vectors obtained by choosing any ai as an origin. First, the following lemma
shows that it is sufficient to consider only one of these families.
!
Lemma 19.4. Given an affine space E, E , + , let (ai )i2I be a family of points in E. If the
family (a !
a)
is linearly independent for some i 2 I, then (a !
a)
is linearly
i j j2(I {i})
i j j2(I {i})
Since
ak!
aj = ak!
ai + ai!
aj ,
we have
X
! =
j ak aj
j2(I {k})
j2(I {k})
!
j ak ai +
j2(I {k})
!+
j ak ai
j2(I {k})
!
j ai aj
j2(I {i,k})
j ai aj ,
j2(I {i,k})
!
j ai aj
j2(I {i,k})
and thus
j ai aj ,
j2(I {k})
j2(I {k})
ai!
ak ,
ai!
ak = 0.
{i})
j
Definition 19.4 is reasonable, because by Lemma 19.4, the independence of the family
!
(ai aj )j2(I {i}) does not depend on the choice of ai . A crucial property of linearly independent
vectors (u1 , . . . , um ) is that if a vector v is a linear combination
v=
m
X
i ui
i=1
527
!
E
E
a2
a0!
a2
a0
a1
a0!
a1
Lemma 19.5 suggests the notion of affine frame. Affine frames are the affine analogues
!
of bases in vector spaces. Let E, E , + be a nonempty affine space, and let (a0 , . . . , am )
be a family of m + 1 points in E. The family (a0 , . . . , am ) determines the family of m
!
vectors (a0!
a1 , . . . , a0 a!
m ) in E . Conversely, given a point a0 in E and a family of m vectors
!
(u1 , . . . , um ) in E , we obtain the family of m + 1 points (a0 , . . . , am ) in E, where ai = a0 + ui ,
1 i m.
Thus, for any m 1, it is equivalent to consider a family of m + 1 points (a0 , . . . , am ) in
!
E, and a pair (a0 , (u1 , . . . , um )), where the ui are vectors in E . Figure 19.8 illustrates the
notion of affine independence.
Remark: The above observation also applies to infinite families (ai )i2I of points in E and
!
families (ui )i2I {0} of vectors in E , provided that the index set I contains 0.
!
!
When (a0!
a1 , . . . , a0 a!
m ) is a basis of E then, for every x 2 E, since x = a0 + a0 x, there
is a unique family (x1 , . . . , xm ) of scalars such that
x = a0 + x1 a0!
a1 + + xm a0 a!
m.
The scalars (x1 , . . . , xm ) may be considered as coordinates with respect to
(a0 , (a0!
a1 , . . . , a0 a!
m )). Since
!
m
m
m
X
X
X
!
x=a +
x a a i x = 1
x a +
xa,
0
i 0 i
i=1
i=1
i i
i=1
528
x=
i ai
i=0
P
Pm
with m
i=0 i = 1, and where 0 = 1
i=1 xi , and i = xi for 1 i m. The scalars
( 0 , . . . , m ) are also certain kinds of coordinates with respect to (a0 , . . . , am ). All this is
summarized in the following definition.
!
Definition 19.5. Given an affine space E, E , + , an affine frame with origin a0 is a family
(a0 , . . . , am ) of m + 1 points in E such that the list of vectors (a0!
a1 , . . . , a0 a!
m ) is a basis of
!
!
!
E . The pair (a0 , (a0 a1 , . . . , a0 am )) is also called an affine frame with origin a0 . Then, every
x 2 E can be expressed as
x = a0 + x1 a0!
a1 + + xm a0 a!
m
for a unique family (x1 , . . . , xm ) of scalars, called the coordinates of x w.r.t. the affine frame
(a0 , (a0!
a1 , . . ., a0 a!
m )). Furthermore, every x 2 E can be written as
x=
0 a0
+ +
m am
Proof. By Lemma 19.5, the family (ai )i2I is affinely dependent i the family of vectors
(ai!
aj )j2(I {i}) is linearly dependent for some i 2 I. For any i 2 I, the family (ai!
aj )j2(I {i})
is linearly dependent i there is a family ( j )j2(I {i}) such that j 6= 0 for some j, and such
that
X
!
j ai aj = 0.
j2(I {i})
j (xaj
j2(I {i})
j2(I {i})
!
j xaj
!)
xa
i
X
j2(I {i})
!,
xa
i
529
a2
a0
a0
a1
a3
a0
a1
a0
a2
a1
P
P
!
and letting i =
j2(I {i}) j , we get
i2I i xai = 0, with
i2I i = 0 and j 6= 0 for
some
P j 2 I. The converse is obvious by setting x = ai for some i such that i 6= 0, since
i2I i = 0 implies that j 6= 0, for some j 6= i.
Even though Lemma 19.6 is rather dull, it is one of the key ingredients in the proof of
beautiful and deep theorems about convex sets, such as Caratheodorys theorem, Radons
theorem, and Hellys theorem.
!
A family of two points (a, b) in E is affinely independent i ab 6= 0, i a 6= b. If a 6= b, the
affine subspace generated by a and b is the set of all points (1
)a + b, which is the unique
line passing through a and b. A family of three points (a, b, c) in E is affinely independent
!
i ab and !
ac are linearly independent, which means that a, b, and c are not on the same line
(they are not collinear). In this case, the affine subspace generated by (a, b, c) is the set of all
points (1
)a + b + c, which is the unique plane containing a, b, and c. A family of
!
!
four points (a, b, c, d) in E is affinely independent i ab, !
ac, and ad are linearly independent,
which means that a, b, c, and d are not in the same plane (they are not coplanar). In this
case, a, b, c, and d are the vertices of a tetrahedron. Figure 19.9 shows affine frames and
their convex hulls for |I| = 0, 1, 2, 3.
Given n+1 affinely independent points (a0 , . . . , an ) in E, we can consider the set of points
0 ( i 2 R). Such affine combinations are
0 a0 + + n an , where 0 + + n = 1 and i
called convex combinations. This set is called the convex hull of (a0 , . . . , an ) (or n-simplex
530
spanned by (a0 , . . . , an )). When n = 1, we get the segment between a0 and a1 , including
a0 and a1 . When n = 2, we get the interior of the triangle whose vertices are a0 , a1 , a2 ,
including boundary points (the edges). When n = 3, we get the interior of the tetrahedron
whose vertices are a0 , a1 , a2 , a3 , including boundary points (faces and edges). The set
{a + a !
a + + a !
a | where 0 1 ( 2 R)}
0
1 0 1
n 0 n
Points are not vectors! The following example illustrates why treating points as
vectors may cause problems. Let a, b, c be three affinely independent points in A3 .
Any point x in the plane (a, b, c) can be expressed as
x=
0a
1b
2 c,
However, there is a problem when the origin of the coordinate system belongs to the plane
(a, b, c), since in this case, the matrix is not invertible! What we should really be doing is to
solve the system
!
!
!
!
0 Oa + 1 Ob + 2 Oc = Ox,
where O is any point not in the plane (a, b, c). An alternative is to use certain well-chosen
cross products.
It can be shown that barycentric coordinates correspond to various ratios of areas and
volumes; see the problems.
19.7
Affine Maps
Corresponding to linear maps we have the notion of an affine map. An affine map is defined
as a map preserving affine combinations.
!
!
Definition 19.6. Given two affine spaces E, E , + and E 0 , E 0 , +0 , a functionPf : E ! E 0
is an affine map i for every family ((ai , i ))i2I of weighted points in E such that i2I i = 1,
we have
!
X
X
f
=
i ai
i f (ai ).
i2I
i2I
531
and
i (b
i2I
+ h(vi )) = b +
i2I
we have
i2I
i b(b
i2I
i (a
+ vi )
i2I
X
!
+ h(vi )) = b +
i h(vi ),
i2I
= f a+
i vi
i2I
= b+h
= b+
=
i vi
i2I
i h(vi )
i2I
i (b
+ h(vi ))
i2I
i f (a
+ vi ).
i2I
i2I
i (a
+ vi ) = a +
i2I
and
X
i2I
i (b
i vi
i2I
+ h(vi )) = b +
i h(vi ).
i2I
x1
1 2
x1
3
7!
+
x2
0 1
x2
1
532
a0
a
c0
b0
x1
1 1
x1
3
7!
+
x2
1 3
x2
0
is an affine map. Since we can write
p
p
1 1
2/2
= 2
1 3
2/2
2/2
1
2
p
,
0 1
2/2
!
! !
!
for every v 2 E is a linear map f : E ! E 0 . Indeed, we can write
a + v = (a + v) + (1
)a,
533
c0
d0
c
b0
a0
! and also
)aa,
a + u + v = (a + u) + (a + v)
!
!
since a + u + v = a + a(a + u) + a(a + v)
f (a + v) = f (a + v) + (1
If we recall
P that x =
(with i2I i = 1) i
i2I
i ai
)f (a).
! X
bx =
i2I
we get
a,
i bai
i ))i2I
of weighted points
for every b 2 E,
!
!
f (a)f (a + v) = f (a)f (a + v) + (1
!
!
showing that f ( v) = f (v). We also have
!
!
)f (a)f (a) = f (a)f (a + v),
f (a + u + v) = f (a + u) + f (a + v)
f (a),
534
b + v = (a + v)
f (a) + f (b),
! ! !
or f (a)f (x) = f (ax),
for all a, x 2 E. Lemma 19.7 shows that for any affine map f : E ! E 0 , there are points
!
! !
a 2 E, b 2 E 0 , and a unique linear map f : E ! E 0 , such that
!
f (a + v) = b + f (v),
!
!
for all v 2 E (just let b = f (a), for any a 2 E). Affine maps for which f is the identity
!
map are called translations. Indeed, if f = id,
! !
! !
! = x + xa
! + af (a)
f (x) = f (a) + f (ax)
= f (a) + ax
+ ax
!
!
! + af (a) xa
! = x + af (a),
= x + xa
and so
!
!
xf (x) = af (a),
!
which shows that f is the translation induced by the vector af (a) (which does not depend
on a).
Since an affine map preserves barycenters, and since an affine subspace V is closed under
barycentric combinations, the image f (V ) of V is an affine subspace in E 0 . So, for example,
the image of a line is a point or a line, and the image of a plane is either a point, a line, or
a plane.
535
It is easily verified that the composition of two affine maps is an affine map. Also, given
affine maps f : E ! E 0 and g : E 0 ! E 00 , we have
!
!
g(f (a + v)) = g f (a) + f (v) = g(f (a)) + !
g f (v) ,
!
!
which shows that g f = !
g f . It is easy to show that an affine map f : E ! E 0 is injective
!
!
! !
! !
i f : E ! E 0 is injective, and that f : E ! E 0 is surjective i f : E ! E 0 is surjective.
!
! !
An affine map f : E ! E 0 is constant i f : E ! E 0 is the null (constant) linear map equal
!
to 0 for all v 2 E .
If E is an affine space of dimension m and (a0 , a1 , . . . , am ) is an affine frame for E, then
for any other affine space F and for any sequence (b0 , b1 , . . . , bm ) of m + 1 points in F , there
is a unique affine map f : E ! F such that f (ai ) = bi , for 0 i m. Indeed, f must be
such that
f ( 0 a0 + + m am ) = 0 b 0 + + m b m ,
where 0 + + m = 1, and this defines a unique affine map on all of E, since (a0 , a1 , . . . , am )
is an affine frame for E.
Using affine frames, affine maps can be represented in terms of matrices. We explain how
an affine map f : E ! E is represented with respect to a frame (a0 , . . . , an ) in E, the more
general case where an affine map f : E ! F is represented with respect to two affine frames
(a0 , . . . , an ) in E and (b0 , . . . , bm ) in F being analogous. Since
!
f (a0 + x) = f (a0 ) + f (x)
!
for all x 2 E , we have
!
! !
a0 f (a0 + x) = a0 f (a0 ) + f (x).
!
!
Since x, a0 f (a0 ), and a0 f (a0 + x), can be expressed as
x = x1 a0!
a1 + + x n a0 !
an ,
!
!
!
a0 f (a0 ) = b1 a0 a1 + + bn a0 an ,
!
a0 f (a0 + x) = y1 a0!
a1 + + y n a0 !
an ,
!
if A = (ai j ) is the n n matrix of the linear map f over the basis (a0!
a1 , . . . , a0 !
an ), letting x,
y, and b denote the column vectors of components (x1 , . . . , xn ), (y1 , . . . , yn ), and (b1 , . . . , bn ),
!
! !
a0 f (a0 + x) = a0 f (a0 ) + f (x)
is equivalent to
y = Ax + b.
Note that b 6= 0 unless f (a0 ) = a0 . Thus, f is generally not a linear transformation, unless it
has a fixed point, i.e., there is a point a0 such that f (a0 ) = a0 . The vector b is the translation
536
part of the affine map. Affine maps do not always have a fixed point. Obviously, nonnull
translations have no fixed point. A less trivial example is given by the affine map
x1
1 0
x1
1
7!
+
.
x2
0
1
x2
0
This map is a reflection about the x-axis followed by a translation along the x-axis. The
affine map
p
x1
1
3
x1
1
7! p
+
x2
x2
1
3/4 1/4
can also be written as
x1
2 0
p1/2
7!
x2
0 1/2
3/2
3/2
1/2
x1
1
+
x2
1
which shows that it is the composition of a rotation of angle /3, followed by a stretch (by a
factor of 2 along the x-axis, and by a factor of 12 along the y-axis), followed by a translation.
It is easy to show that this affine map has a unique fixed point. On the other hand, the
affine map
x1
8/5
6/5
x1
1
7!
+
x2
3/10 2/5
x2
1
has no fixed point, even though
8/5
3/10
6/5
2/5
2 0
0 1/2
4/5
3/5
3/5
,
4/5
and the second matrix is a rotation of angle such that cos = 45 and sin = 35 . For more
on fixed points of affine maps, see the problems.
There is a useful trick to convert the equation y = Ax + b into what looks like a linear
equation. The trick is to consider an (n + 1) (n + 1) matrix. We add 1 as the (n + 1)th
component to the vectors x, y, and b, and form the (n + 1) (n + 1) matrix
A b
0 1
so that y = Ax + b is equivalent to
y
A b
x
=
.
1
0 1
1
This trick is very useful in kinematics and dynamics, where A is a rotation matrix. Such
affine maps are called rigid motions.
If f : E ! E 0 is a bijective affine map, given any three collinear points a, b, c in E,
with a 6= b, where, say, c = (1
)a + b, since f preserves barycenters, we have f (c) =
537
(1
)f (a) + f (b), which shows that f (a), f (b), f (c) are collinear in E 0 . There is a converse
to this property, which is simpler to state when the ground field is K = R. The converse
states that given any bijective function f : E ! E 0 between two real affine spaces of the
same dimension n
2, if f maps any three collinear points to collinear points, then f is
affine. The proof is rather long (see Berger [8] or Samuel [90]).
Given three collinear points a, b, c, where a 6= c, we have b = (1
)a + c for some
unique , and we define the ratio of the sequence a, b, c, as
!
ab
ratio(a, b, c) =
= !,
(1
)
bc
provided that 6= 1, i.e., b 6= c. When b = c, we agree that ratio(a, b, c) = 1. We warn our
!
ba
readers that other authors define the ratio of a, b, c as ratio(a, b, c) = !
. Since affine maps
bc
preserve barycenters, it is clear that affine maps preserve the ratio of three points.
19.8
Affine Groups
We now take a quick look at the bijective affine maps. Given an affine space E, the set of
affine bijections f : E ! E is clearly a group, called the affine group of E, and denoted by
!
GA(E). Recall that the group of bijective linear maps of the vector space E is denoted by
!
!
!
GL( E ). Then, the map f 7! f defines a group homomorphism L : GA(E) ! GL( E ).
The kernel of this map is the set of translations on E.
The subset of all linear maps of the form id!
, where 2 R {0}, is a subgroup
E
!
538
b0
b
c
c0
Ha, = Ha, .
id!
,
E
Proof. The proof is straightforward, and is omitted. It is also given in Gallier [43].
!
Clearly, if f = id!
, the affine map f is a translation. Thus, the group of affine
E
dilatations DIL(E) is the disjoint union of the translations and of the dilatations of ratio
6= 0, 1. Affine dilatations can be given a purely geometric characterization.
Another point worth mentioning is that affine bijections preserve the ratio of volumes of
!
parallelotopes. Indeed, given any basis B = (u1 , . . . , um ) of the vector space E associated
with the affine space E, given any m + 1 affinely independent points (a0 , . . . , am ), we can
compute the determinant detB (a0!
a1 , . . . , a0 a!
m ) w.r.t. the basis B. For any bijective affine
map f : E ! E, since
!
!
!
!
!
detB f (a0!
a1 ), . . . , f (a0 a!
m ) = det f detB (a0 a1 , . . . , a0 am )
539
!
and the determinant of a linear map is intrinsic (i.e., depends only on f , and not on the
particular basis B), we conclude that the ratio
!
!
detB f (a0!
a1 ), . . . , f (a0 a!
)
m
!
= det f
!
!
det (a a , . . . , a a )
B
0 1
0 m
19.9
In this section we state and prove three fundamental results of affine geometry. Roughly
speaking, affine geometry is the study of properties invariant under affine bijections. We now
prove one of the oldest and most basic results of affine geometry, the theorem of Thales.
Lemma 19.9. Given any affine space E, if H1 , H2 , H3 are any three distinct parallel hyperplanes, and A and B are any two lines not parallel to Hi , letting ai = Hi \A and bi = Hi \B,
then the following ratios are equal:
!
a1!
a3
b1 b3
= ! = .
a1!
a2
b1 b2
Conversely, for any point d on the line A, if
!
a1 d
!
a1 a
2
= , then d = a3 .
Proof. Figure 19.13 illustrates the theorem of Thales. We sketch a proof, leaving the details
!
as an exercise. Since H1 , H2 , H3 are parallel, they have the same direction H , a hyperplane
!
! !
in E . Let u 2 E H be any nonnull vector such that A = a1 +Ru. Since A is not parallel to
! !
!
H, we have E = H Ru, and thus we can define the linear map p : E ! Ru, the projection
!
on Ru parallel to H . This linear map induces an affine map f : E ! A, by defining f such
that
f (b1 + w) = a1 + p(w),
!
!
for all w 2 E . Clearly, f (b1 ) = a1 , and since H1 , H2 , H3 all have direction H , we also have
f (b2 ) = a2 and f (b3 ) = a3 . Since f is affine, it preserves ratios, and thus
!
a1!
a3
b1 b3
= !.
a1!
a2
b1 b2
The converse is immediate.
540
a1
b1
H1
H2
a2
a3
b2
b3
H3
A
541
We also have the following simple lemma, whose proof is left as an easy exercise.
Lemma 19.10. Given any affine space E, given any two distinct points a, b 2 E, and for
any affine dilatation f dierent from the identity, if a0 = f (a), D = ha, bi is the line passing
through a and b, and D0 is the line parallel to D and passing through a0 , the following are
equivalent:
(i) b0 = f (b);
(ii) If f is a translation, then b0 is the intersection of D0 with the line parallel to ha, a0 i
passing through b;
If f is a dilatation of center c, then b0 = D0 \ hc, bi.
The first case is the parallelogram law, and the second case follows easily from Thales
theorem.
We are now ready to prove two classical results of affine geometry, Pappuss theorem and
Desarguess theorem. Actually, these results are theorems of projective geometry, and we
are stating affine versions of these important results. There are stronger versions that are
best proved using projective geometry.
Lemma 19.11. Given any affine plane E, any two distinct lines D and D0 , then for any
distinct points a, b, c on D and a0 , b0 , c0 on D0 , if a, b, c, a0 , b0 , c0 are distinct from the intersection of D and D0 (if D and D0 intersect) and if the lines ha, b0 i and ha0 , bi are parallel,
and the lines hb, c0 i and hb0 , ci are parallel, then the lines ha, c0 i and ha0 , ci are parallel.
Proof. Pappuss theorem is illustrated in Figure 19.14. If D and D0 are not parallel, let d
be their intersection. Let f be the dilatation of center d such that f (a) = b, and let g be the
dilatation of center d such that g(b) = c. Since the lines ha, b0 i and ha0 , bi are parallel, and
the lines hb, c0 i and hb0 , ci are parallel, by Lemma 19.10 we have a0 = f (b0 ) and b0 = g(c0 ).
However, we observed that dilatations with the same center commute, and thus f g = g f ,
and thus, letting h = g f , we get c = h(a) and a0 = h(c0 ). Again, by Lemma 19.10, the
lines ha, c0 i and ha0 , ci are parallel. If D and D0 are parallel, we use translations instead of
dilatations.
There is a converse to Pappuss theorem, which yields a fancier version of Pappuss
theorem, but it is easier to prove it using projective geometry. It should be noted that
in axiomatic presentations of projective geometry, Pappuss theorem is equivalent to the
commutativity of the ground field K (in the present case, K = R). We now prove an affine
version of Desarguess theorem.
Lemma 19.12. Given any affine space E, and given any two triangles (a, b, c) and (a0 , b0 , c0 ),
where a, b, c, a0 , b0 , c0 are all distinct, if ha, bi and ha0 , b0 i are parallel and hb, ci and hb0 , c0 i are
parallel, then ha, ci and ha0 , c0 i are parallel i the lines ha, a0 i, hb, b0 i, and hc, c0 i are either
parallel or concurrent (i.e., intersect in a common point).
542
b
a
c0
b0
a0
D0
543
a0
b0
b
c
c0
19.10
Affine Hyperplanes
We now consider affine forms and affine hyperplanes. In Section 19.5 we observed that the
set L of solutions of an equation
ax + by = c
is an affine subspace of A2 of dimension 1, in fact, a line (provided that a and b are not both
null). It would be equally easy to show that the set P of solutions of an equation
ax + by + cz = d
is an affine subspace of A3 of dimension 2, in fact, a plane (provided that a, b, c are not all
null). More generally, the set H of solutions of an equation
1 x1
+ +
m xm
+ +
m xm
1 x1
+ +
m xm
for all (x1 , . . . , xm ) 2 Rm . It is immediately verified that this map is affine, and the set H of
solutions of the equation
1 x1 + + m xm =
544
is the null set, or kernel, of the affine map f : Am ! R, in the sense that
H=f
where x = (x1 , . . . , xm ).
Thus, it is interesting to consider affine forms, which are just affine maps f : E ! R
from an affine space to R. Unlike linear forms f , for which Ker f is never empty (since it
always contains the vector 0), it is possible that f 1 (0) = ; for an affine form f . Given an
affine map f : E ! R, we also denote f 1 (0) by Ker f , and we call it the kernel of f . Recall
that an (affine) hyperplane is an affine subspace of codimension 1. The relationship between
affine hyperplanes and affine forms is given by the following lemma.
Lemma 19.13. Let E be an affine space. The following properties hold:
(a) Given any nonconstant affine form f : E ! R, its kernel H = Ker f is a hyperplane.
(b) For any hyperplane H in E, there is a nonconstant affine form f : E ! R such that
H = Ker f . For any other affine form g : E ! R such that H = Ker g, there is some
2 R such that g = f (with 6= 0).
(c) Given any hyperplane H in E and any (nonconstant) affine form f : E ! R such that
H = Ker f , every hyperplane H 0 parallel to H is defined by a nonconstant affine form
g such that g(a) = f (a)
, for all a 2 E and some 2 R.
Proof. The proof is straightforward, and is omitted. It is also given in Gallier [43].
When E is of dimension n, given an affine frame (a0 , (u1 , . . . , un )) of E with origin
a0 , recall from Definition 19.5 that every point of E can be expressed uniquely as x =
a0 + x1 u1 + + xn un , where (x1 , . . . , xn ) are the coordinates of x with respect to the affine
frame (a0 , (u1 , . . . , un )).
Also recall that every linear form f is such that f (x) = 1 x1 + + n xn , for every
x = x1 u1 + + xn un and some 1 , . . . , n 2 R. Since an affine form f : E ! R satisfies the
!
property f (a0 + x) = f (a0 ) + f (x), denoting f (a0 + x) by f (x1 , . . . , xn ), we see that we have
f (x1 , . . . , xn ) =
1 x1
+ +
n xn
+ ,
+ +
n xn
+ = 0.
19.11
545
In this section we take a closer look at the intersection of affine subspaces. This subsection
can be omitted at first reading.
First, we need a result of linear algebra. Given a vector space E and any two subspaces M
and N , there are several interesting linear maps. We have the canonical injections i : M !
M +N and j : N ! M +N , the canonical injections in1 : M ! M N and in2 : N ! M N ,
and thus, injections f : M \N ! M N and g : M \N ! M N , where f is the composition
of the inclusion map from M \ N to M with in1 , and g is the composition of the inclusion
map from M \ N to N with in2 . Then, we have the maps f + g : M \ N ! M N , and
i j : M N ! M + N.
Lemma 19.14. Given a vector space E and any two subspaces M and N , with the definitions
above,
f +g
i j
0 ! M \N ! M N ! M +N ! 0
is a short exact sequence, which means that f + g is injective, i j is surjective, and that
Im (f + g) = Ker (i j). As a consequence, we have the Grassmann relation
dim(M ) + dim(N ) = dim(M + N ) + dim (M \ N ).
Proof. It is obvious that i j is surjective and that f + g is injective. Assume that (i
j)(u + v) = 0, where u 2 M , and v 2 N . Then, i(u) = j(v), and thus, by definition of i and
j, there is some w 2 M \ N , such that i(u) = j(v) = w 2 M \ N . By definition of f and
g, u = f (w) and v = g(w), and thus Im (f + g) = Ker (i j), as desired. The second part
of the lemma follows from standard results of linear algebra (see Artin [4], Strang [105], or
Lang [67]).
We now prove a simple lemma about the intersection of affine subspaces.
Lemma 19.15. Given any affine space E, for any two nonempty affine subspaces M and
N , the following facts hold:
! ! !
(1) M \ N 6= ; i ab 2 M + N for some a 2 M and some b 2 N .
! ! !
(2) M \ N consists of a single point i ab 2 M + N for some a 2 M and some b 2 N ,
! !
and M \ N = {0}.
!
! ! !
(3) If S is the least affine subspace containing M and N , then S = M + N + K ab (the
!
vector space E is defined over the field K).
Proof. (1) Pick any a 2 M and any b 2 N , which is possible, since M and N are nonempty.
!
!
! | x 2 M } and !
Since M = {ax
N = { by | y 2 N }, if M \ N 6= ;, for any c 2 M \ N we have
! ! !
! !
! ! !
!
ab = ac bc, with !
ac 2 M and bc 2 N , and thus, ab 2 M + N . Conversely, assume that
546
! ! !
!
!+!
ab 2 M + N for some a 2 M and some b 2 N . Then ab = ax
by, for some x 2 M and
some y 2 N . But we also have
! ! ! !
ab = ax + xy + yb,
!
!
! + yb by, that is, xy
! = 2!
and thus we get 0 = xy
by. Thus, b is the middle of the segment
!
!
[x, y], and since yx = 2 yb, x = 2b y is the barycenter of the weighted points (b, 2) and
(y, 1). Thus x also belongs to N , since N being an affine subspace, it is closed under
barycenters. Thus, x 2 M \ N , and M \ N 6= ;.
(2) Note that in general, if M \ N 6= ;, then
! ! !
M \ N = M \ N,
because
!
!
!
!
! !
M \ N = { ab | a, b 2 M \ N } = { ab | a, b 2 M } \ { ab | a, b 2 N } = M \ N .
!
Since M \ N = c + M \ N for any c 2 M \ N , we have
! !
M \ N = c + M \ N for any c 2 M \ N .
! !
From this it follows that if M \N 6= ;, then M \N consists of a single point i M \ N = {0}.
This fact together with what we proved in (1) proves (2).
(3) This is left as an easy exercise.
Remarks:
! ! !
(1) The proof of Lemma 19.15 shows that if M \ N 6= ;, then ab 2 M + N for all a 2 M
and all b 2 N .
!
(2) Lemma 19.15 implies that for any two nonempty affine subspaces M and N , if E =
! !
! !
! ! !
M N , then M \ N consists of a single point. Indeed, if E = M N , then ab 2 E
! !
for all a 2 M and all b 2 N , and since M \ N = {0}, the result follows from part (2)
of the lemma.
We can now state the following lemma.
Lemma 19.16. Given an affine space E and any two nonempty affine subspaces M and N ,
if S is the least affine subspace containing M and N , then the following properties hold:
(1) If M \ N = ;, then
! !
dim(M ) + dim(N ) < dim(E) + dim(M + N )
and
dim(S) = dim(M ) + dim(N ) + 1
(2) If M \ N 6= ;, then
! !
dim(M \ N ).
dim(M \ N ).
Proof. The proof is not difficult, using Lemma 19.15 and Lemma 19.14, but we leave it as
an exercise.
19.12. PROBLEMS
19.12
547
Problems
Problem 19.1. Given a triangle (a, b, c), give a geometric construction of the barycenter of
the weighted points (a, 14 ), (b, 14 ), and (c, 12 ). Give a geometric construction of the barycenter
of the weighted points (a, 32 ), (b, 32 ), and (c, 2).
Problem 19.2. Given a tetrahedron (a, b, c, d) and any two distinct points x, y 2 {a, b, c, d},
let let mx,y be the middle of the edge (x, y). Prove that the barycenter g of the weighted points
(a, 14 ), (b, 14 ), (c, 14 ), and (d, 14 ) is the common intersection of the line segments (ma,b , mc,d ),
(ma,c , mb,d ), and (ma,d , mb,c ). Show that if gd is the barycenter of the weighted points
(a, 13 ), (b, 13 ), (c, 13 ), then g is the barycenter of (d, 14 ) and (gd , 34 ).
!
Problem 19.3. Let E be a nonempty set, and E a vector space and assume that there is a
!
!
function : E E ! E , such that if we denote (a, b) by ab, the following properties hold:
! !
(1) ab + bc = !
ac, for all a, b, c 2 E;
(2) For every a 2 E, the map
is a bijection.
Let
a:
!
E ! E defined such that for every b 2 E,
a (b)
!
= ab,
!
!
E ! E be the inverse of a : E ! E .
!
Prove that the function + : E E ! E defined such that
a:
a+u=
a (u)
!
!
for all a 2 E and all u 2 E makes (E, E , +) into an affine space.
!
Note. We showed in the text that an affine space (E, E , +) satisfies the properties stated
above. Thus, we obtain an equivalent characterization of affine spaces.
Problem 19.4. Given any three points a, b, c in the affine plane A2 , letting (a1 , a2 ), (b1 , b2 ),
and (c1 , c2 ) be the coordinates of a, b, c, with respect to the standard affine frame for A2 ,
prove that a, b, c are collinear i
a1 b 1 c 1
a2 b2 c2 = 0,
1 1 1
i.e., the determinant is null.
Letting (a0 , a1 , a2 ), (b0 , b1 , b2 ), and (c0 , c1 , c2 ) be the barycentric coordinates of a, b, c with
respect to the standard affine frame for A2 , prove that a, b, c are collinear i
a0 b 0 c 0
a1 b1 c1 = 0.
a2 b 2 c 2
548
Given any four points a, b, c, d in the affine space A3 , letting (a1 , a2 , a3 ), (b1 , b2 , b3 ), (c1 , c2 , c3 ),
and (d1 , d2 , d3 ) be the coordinates of a, b, c, d, with respect to the standard affine frame for
A3 , prove that a, b, c, d are coplanar i
a1 b1 c1 d1
a2 b2 c2 d2
= 0,
a3 b 3 c 3 d 3
1 1 1 1
i.e., the determinant is null.
Letting (a0 , a1 , a2 , a3 ), (b0 , b1 , b2 , b3 ), (c0 , c1 , c2 , c3 ), and (d0 , d1 , d2 , d3 ) be the barycentric
coordinates of a, b, c, d, with respect to the standard affine frame for A3 , prove that a, b, c, d
are coplanar i
a0 b 0 c 0 d 0
a1 b 1 c 1 d 1
= 0.
a2 b 2 c 2 d 2
a3 b 3 c 3 d 3
Problem 19.5. The function f : A ! A3 given by
t 7! (t, t2 , t3 )
defines what is called a twisted cubic curve. Given any four pairwise distinct values t1 , t2 , t3 , t4 ,
prove that the points f (t1 ), f (t2 ), f (t3 ), and f (t4 ) are not coplanar.
Hint. Have you heard of the Vandermonde determinant?
Problem 19.6. For any two distinct points a, b 2 A2 of barycentric coordinates (a0 , a1 , a2 )
and (b0 , b1 , b2 ) with respect to any given affine frame (O, i, j), show that the equation of the
line ha, bi determined by a and b is
a0 b 0 x
a1 b1 y = 0,
a2 b 2 z
or, equivalently,
(a1 b2
a2 b1 )x + (a2 b0
a0 b2 )y + (a0 b1
a1 b0 )z = 0,
where (x, y, z) are the barycentric coordinates of the generic point on the line ha, bi.
Prove that the equation of a line in barycentric coordinates is of the form
ux + vy + wz = 0,
where u 6= v or v 6= w or u 6= w. Show that two equations
ux + vy + wz = 0 and u0 x + v 0 y + w0 z = 0
19.12. PROBLEMS
549
represent the same line in barycentric coordinates i (u0 , v 0 , w0 ) = (u, v, w) for some 2 R
(with 6= 0).
A triple (u, v, w) where u 6= v or v 6= w or u =
6 w is called a system of tangential
coordinates of the line defined by the equation
ux + vy + wz = 0.
Problem 19.7. Given two lines D and D0 in A2 defined by tangential coordinates (u, v, w)
and (u0 , v 0 , w0 ) (as defined in Problem 19.6), let
u v w
d = u0 v 0 w0 = vw0
1 1 1
wv 0 + wu0
uw0 + uv 0
vu0 .
(a) Prove that D and D0 have a unique intersection point i d 6= 0, and that when it
exists, the barycentric coordinates of this intersection point are
1
(vw0
d
wv 0 , wu0
uw0 , uv 0
vu0 ).
(b) Letting (O, i, j) be any affine frame for A2 , recall that when x + y + z = 0, for any
point a, the vector
!
!
!
xaO + y ai + z aj
is independent of a and equal to
!
!
y Oi + z Oj = (y, z).
The triple (x, y, z) such that x + y + z = 0 is called the barycentric coordinates of the vector
!
!
y Oi + z Oj w.r.t. the affine frame (O, i, j).
Given any affine frame (O, i, j), prove that for u 6= v or v 6= w or u 6= w, the line of
equation
ux + vy + wz = 0
in barycentric coordinates (x, y, z) (where x + y + z = 1) has for direction the set of vectors
of barycentric coordinates (x, y, z) such that
ux + vy + wz = 0
(where x + y + z = 0).
Prove that D and D0 are parallel i d = 0. In this case, if D 6= D0 , show that the common
direction of D and D0 is defined by the vector of barycentric coordinates
(vw0
wv 0 , wu0
uw0 , uv 0
vu0 ).
(c) Given three lines D, D0 , and D00 , at least two of which are distinct and defined by
tangential coordinates (u, v, w), (u0 , v 0 , w0 ), and (u00 , v 00 , w00 ), prove that D, D0 , and D00 are
parallel or have a unique intersection point i
550
u v w
u0 v 0 w0 = 0.
u00 v 00 w00
Problem 19.8. Let (A, B, C) be a triangle in A2 . Let M, N, P be three points respectively
on the lines BC, CA, and AB, of barycentric coordinates (0, m0 , m00 ), (n, 0, n00 ), and (p, p0 , 0),
w.r.t. the affine frame (A, B, C).
(a) Assuming that M 6= C, N 6= A, and P 6= B, i.e., m0 n00 p 6= 0, show that
! !
MB NC
! !
MC NA
!
PA
!=
PB
m00 np0
.
m0 n00 p
!
PA
! = 1.
PB
(c) Prove Cevas theorem: The lines AM, BN, CP have a unique intersection point or
are parallel i
m00 np0 m0 n00 p = 0.
When M 6= C, N 6= A, and P 6= B, this is equivalent to
! !
MB NC
! !
MC NA
!
PA
!=
PB
1.
Problem 19.9. This problem uses notions and results from Problems 19.6 and 19.7. In view
of (a) and (b) of Problem 19.7, it is natural to extend the notion of barycentric coordinates
of a point in A2 as follows. Given any affine frame (a, b, c) in A2 , we will say that the
barycentric coordinates (x, y, z) of a point M , where x + y + z = 1, are the normalized
barycentric coordinates of M . Then, any triple (x, y, z) such that x + y + z 6= 0 is also called
a system of barycentric coordinates for the point of normalized barycentric coordinates
1
(x, y, z).
x+y+z
With this convention, the intersection of the two lines D and D0 is either a point or a vector,
in both cases of barycentric coordinates
(vw0
wv 0 , wu0
uw0 , uv 0
vu0 ).
19.12. PROBLEMS
551
When the above is a vector, we can think of it as a point at infinity (in the direction of the
line defined by that vector).
Let (D0 , D00 ), (D1 , D10 ), and (D2 , D20 ) be three pairs of six distinct lines, such that the
four lines belonging to any union of two of the above pairs are neither parallel nor concurrent
(have a common intersection point). If D0 and D00 have a unique intersection point, let M be
this point, and if D0 and D00 are parallel, let M denote a nonnull vector defining the common
direction of D0 and D00 . In either case, let (m, m0 , m00 ) be the barycentric coordinates of M ,
as explained at the beginning of the problem. We call M the intersection of D0 and D00 .
Similarly, define N = (n, n0 , n00 ) as the intersection of D1 and D10 , and P = (p, p0 , p00 ) as the
intersection of D2 and D20 .
Prove that
m n p
m0 n0 p0 = 0
m00 n00 p00
i either
(i) (D0 , D00 ), (D1 , D10 ), and (D2 , D20 ) are pairs of parallel lines; or
(ii) the lines of some pair (Di , Di0 ) are parallel, each pair (Dj , Dj0 ) (with j 6= i) has a unique
intersection point, and these two intersection points are distinct and determine a line
parallel to the lines of the pair (Di , Di0 ); or
(iii) each pair (Di , Di0 ) (i = 0, 1, 2) has a unique intersection point, and these points M, N, P
are distinct and collinear.
Problem 19.10. Prove the following version of Desarguess theorem. Let A, B, C, A0 , B 0 , C 0
be six distinct points of A2 . If no three of these points are collinear, then the lines AA0 , BB 0 ,
and CC 0 are parallel or collinear i the intersection points M, N, P (in the sense of Problem
19.7) of the pairs of lines (BC, B 0 C 0 ), (CA, C 0 A0 ), and (AB, A0 B 0 ) are collinear in the sense
of Problem 19.9.
Problem 19.11. Prove the following version of Pappuss theorem. Let D and D0 be distinct
lines, and let A, B, C and A0 , B 0 , C 0 be distinct points respectively on D and D0 . If these
points are all distinct from the intersection of D and D0 (if it exists), then the intersection
points (in the sense of Problem 19.7) of the pairs of lines (BC 0 , CB 0 ), (CA0 , AC 0 ), and
(AB 0 , BA0 ) are collinear in the sense of Problem 19.9.
Problem 19.12. The purpose of this problem is to prove Pascals theorem for the nondegenerate conics. In the affine plane A2 , a conic is the set of points of coordinates (x, y) such
that
x2 + y 2 + 2 xy + 2 x + 2 y + = 0,
where 6= 0 or
6= 0 or
x
A @y A = 0.
(x, y, 1) @
552
@
B=
A,
1
1 0 0
C = @0 1 0 A ,
1 1 1
0 1
x
@
X = yA .
z
(a) Letting A = C > BC, prove that the equation of the conic becomes
X > AX = 0.
Prove that A is symmetric, that det(A) = det(B), and that X > AX is homogeneous of degree
2. The equation X > AX = 0 is called the homogeneous equation of the conic.
We say that a conic of homogeneous equation X > AX = 0 is nondegenerate if det(A) 6= 0,
and degenerate if det(A) = 0. Show that this condition does not depend on the choice of the
affine frame.
(b) Given an affine frame (A, B, C), prove that any conic passing through A, B, C has
an equation of the form
ayz + bxz + cxy = 0.
Prove that a conic containing more than one point is degenerate i it contains three distinct
collinear points. In this case, the conic is the union of two lines.
(c) Prove Pascals theorem. Given any six distinct points A, B, C, A0 , B 0 , C 0 , if no three of
the above points are collinear, then a nondegenerate conic passes through these six points i
the intersection points M, N, P (in the sense of Problem 19.7) of the pairs of lines (BC 0 , CB 0 ),
(CA0 , AC 0 ) and (AB 0 , BA0 ) are collinear in the sense of Problem 19.9.
Hint. Use the affine frame (A, B, C), and let (a, a0 , a00 ), (b, b0 , b00 ), and (c, c0 , c00 ) be the
barycentric coordinates of A0 , B 0 , C 0 respectively, and show that M, N, P have barycentric
coordinates
(bc, cb0 , c00 b), (c0 a, c0 a0 , c00 a0 ), (ab00 , a00 b0 , a00 b00 ).
Problem 19.13. The centroid of a triangle (a, b, c) is the barycenter of (a, 13 ), (b, 13 ), (c, 13 ).
If an affine map takes the vertices of triangle 1 = {(0, 0), (6, 0), (0, 9)} to the vertices of
triangle 2 = {(1, 1), (5, 4), (3, 1)}, does it also take the centroid of 1 to the centroid of
2 ? Justify your answer.
Problem 19.14. Let E be an affine space over R, and let (a1 , . . . , an ) be any n 3 points
in E. Let ( 1 , . . . , n ) be any n scalars in R, with 1 + + n = 1. Show that there must
be some i, 1 i n, such that i 6= 1. To simplify the notation, assume that 1 6= 1. Show
that the barycenter 1 a1 + + n an can be obtained by first determining the barycenter b
of the n 1 points a2 , . . . , an assigned some appropriate weights, and then the barycenter of
19.12. PROBLEMS
553
a1 and b assigned the weights 1 and 2 + + n . From this, show that the barycenter of
any n 3 points can be determined by repeated computations of barycenters of two points.
Deduce from the above that a nonempty subset V of E is an affine subspace i whenever V
contains any two points x, y 2 V , then V contains the entire line (1
)x + y, 2 R.
Problem 19.15. Assume that K is a field such that 2 = 1 + 1 6= 0, and let E be an affine
space over K. In the case where 1 + + n = 1 and i = 1, for 1 i n and n 3,
show that the barycenter a1 + a2 + + an can still be computed by repeated computations
of barycenters of two points.
Finally, assume that the field K contains at least three elements (thus, there is some
2 K such that 6= 0 and 6= 1, but 2 = 1 + 1 = 0 is possible). Prove that the barycenter
of any n
3 points can be determined by repeated computations of barycenters of two
points. Prove that a nonempty subset V of E is an affine subspace i whenever V contains
any two points x, y 2 V , then V contains the entire line (1
)x + y, 2 K.
Hint. When 2 = 0, 1 + + n = 1 and i = 1, for 1 i n, show that n must be
odd, and that the problem reduces to computing the barycenter of three points in two steps
involving two barycenters. Since there is some 2 K such that 6= 0 and 6= 1, note that
1 and (1 ) 1 both exist, and use the fact that
1
+
= 1.
1 1
Problem 19.16. (i) Let (a, b, c) be three points in A2 , and assume that (a, b, c) are not
collinear. For any point x 2 A2 , if x = 0 a + 1 b + 2 c, where ( 0 , 1 , 2 ) are the barycentric
coordinates of x with respect to (a, b, c), show that
! !
det(xb, bc)
=
,
!
det( ab, !
ac)
! !
det(ax,
ac)
=
! ! ,
det( ab, ac)
! !
det( ab, ax)
=
.
!
det( ab, !
ac)
Conclude that 0 , 1 , 2 are certain signed ratios of the areas of the triangles (a, b, c), (x, a, b),
(x, a, c), and (x, b, c).
(ii) Let (a, b, c) be three points in A3 , and assume that (a, b, c) are not collinear. For any
point x in the plane determined by (a, b, c), if x = 0 a + 1 b + 2 c, where ( 0 , 1 , 2 ) are the
barycentric coordinates of x with respect to (a, b, c), show that
! !
xb bc
=!
,
ab !
ac
!!
ax
ac
= !
,
!
ab ac
! !
ab ax
= !
.
ab !
ac
Given any point O not in the plane of the triangle (a, b, c), prove that
! ! !
det(Oa, Ox, Oc)
=
! ! ! ,
det(Oa, Ob, Oc)
! ! !
det(Oa, Ob, Ox)
=
! ! ! ,
det(Oa, Ob, Oc)
554
and
0
! ! !
det(Ox, Ob, Oc)
=
! ! ! .
det(Oa, Ob, Oc)
(iii) Let (a, b, c, d) be four points in A3 , and assume that (a, b, c, d) are not coplanar. For
any point x 2 A3 , if x = 0 a + 1 b + 2 c + 3 d, where ( 0 , 1 , 2 , 3 ) are the barycentric
coordinates of x with respect to (a, b, c, d), show that
!
! !
det(ax,
ac, ad)
=
!
! ,
det( ab, !
ac, ad)
! ! !
det( ab, ax,
ad)
=
! ! ! ,
det( ab, ac, ad)
! ! !
det(xb, bc, bd)
=
!
! .
det( ab, !
ac, ad)
and
!
!
det( ab, !
ac, ax)
=
!
! ,
det( ab, !
ac, ad)
Conclude that 0 , 1 , 2 , 3 are certain signed ratios of the volumes of the five tetrahedra
(a, b, c, d), (x, a, b, c), (x, a, b, d), (x, a, c, d), and (x, b, c, d).
(iv) Let (a0 , . . . , am ) be m+1 points in Am , and assume that they are affinely independent.
For any point x 2 Am , if x = 0 a0 + + m am , where ( 0 , . . . , m ) are the barycentric
coordinates of x with respect to (a0 , . . . , am ), show that
i
!
!
det(a0!
a1 , . . . , a0 ai!1 , a!
0 x, a0 ai+1 , . . . , a0 am )
!, . . . , a a!)
det(a0!
a1 , . . . , a0 ai!1 , a0!
ai , a0 ai+1
0 m
!, a !
!
det(xa
1 1 a2 , . . . , a1 am )
=
.
det(a0!
a1 , . . . , a0!
ai , . . . , a0 a!
m)
Conclude that i is the signed ratio of the volumes of the simplexes (a0 , . . ., x, . . . am ) and
(a0 , . . . , ai , . . . am ), where 0 i m.
Problem 19.17. With respect to the standard affine frame for the plane A2 , consider the
three geometric transformations f1 , f2 , f3 defined by
p
p
p
1
3
3
3
1
3
0
0
x =
x
y+ , y =
x
y+
,
4
4
4
4
4
4
p
p
p
1
3
3
3
1
3
0
0
x =
x+
y
, y =
x
y+
,
4
4
4p
4
4
4
1
1
3
x0 =
x, y 0 = y +
.
2
2
2
(a) Prove that these maps are affine. Can you describe geometrically what their action
is (rotation, translation, scaling)?
19.12. PROBLEMS
555
(b) Given any polygonal line L, define the following sequence of polygonal lines:
S0 = L,
Sn+1 = f1 (Sn ) [ f2 (Sn ) [ f3 (Sn ).
Construct S1 starting from the line segment L = (( 1, 0), (1, 0)).
Can you figure out what Sn looks like in general? (You may want to write a computer
program.) Do you think that Sn has a limit?
Problem 19.18. In the plane A2 , with respect to the standard affine frame, a point of
coordinates (x, y) can be represented as the complex number z = x + iy. Consider the set
of geometric transformations of the form
z 7! az + b,
where a, b are complex numbers such that a 6= 0.
(a) Prove that these maps are affine. Describe what these maps do geometrically.
(b) Prove that the above set of maps is a group under composition.
(c) Consider the set of geometric transformations of the form
z 7! az + b or z 7! az + b,
where a, b are complex numbers such that a 6= 0, and where z = x iy if z = x + iy.
Describe what these maps do geometrically. Prove that these maps are affine and that this
set of maps is a group under composition.
Problem 19.19. Given a group G, a subgroup H of G is called a normal subgroup of G i
xHx 1 = H for all x 2 G (where xHx 1 = {xhx 1 | h 2 H}).
(i) Given any two subgroups H and K of a group G, let
HK = {hk | h 2 H, k 2 K}.
Prove that every x 2 HK can be written in a unique way as x = hk for h 2 H and k 2 K
i H \ K = {1}, where 1 is the identity element of G.
(ii) If H and K are subgroups of G, and H is a normal subgroup of G, prove that HK
is a subgroup of G. Furthermore, if G = HK and H \ K = {1}, prove that G is isomorphic
to H K under the multiplication operation
(h1 , k1 ) (h2 , k2 ) = (h1 k1 h2 k1 1 , k1 k2 ).
When G = HK, where H, K are subgroups of G, H is a normal subgroup of G, and
H \ K = {1}, we say that G is the semidirect product of H and K.
!
(iii) Let (E, E ) be an affine space. Recall that the affine group of E, denoted by GA(E),
!
!
is the set of affine bijections of E, and that the linear group of E , denoted by GL( E ), is
!
!
the group of bijective linear maps of E . The map f 7! f defines a group homomorphism
556
!
L : GA(E) ! GL( E ), and the kernel of this map is the set of translations on E, denoted
as T (E). Prove that T (E) is a normal subgroup of GA(E).
(iv) For any a 2 E, let
GAa (E) = {f 2 GA(E) | f (a) = a},
the set of affine bijections leaving a fixed. Prove that that GAa (E) is a subgroup of GA(E),
!
and that GAa (E) is isomorphic to GL( E ). Prove that GA(E) is isomorphic to the direct
product of T (E) and GAa (E).
!
Hint. Note that if u = f (a)a and tu is the translation associated with the vector u, then
tu f 2 GAa (E) (where the translation tu is defined such that tu (a) = a+u for every a 2 E).
(v) Given a group G, let Aut(G) denote the set of homomorphisms f : G ! G. Prove
that the set Aut(G) is a group under composition (called the group of automorphisms of G).
Given any two groups H and K and a homomorphism : K ! Aut(H), we define H K
as the set H K under the multiplication operation
(h1 , k1 ) (h2 , k2 ) = (h1 (k1 )(h2 ), k1 k2 ).
Prove that H K is a group.
Hint. The inverse of (h, k) is ((k 1 )(h 1 ), k 1 ).
Prove that the group H K is the semidirect product of the subgroups
{(h, 1) | h 2 H} and {(1, k) | k 2 K}. The group H K is also called the semidirect
product of H and K relative to .
Note. It is natural to identify {(h, 1) | h 2 H} with H and {(1, k) | k 2 K} with K.
If G is the semidirect product of two subgroups H and K as defined in (ii), prove that
the map : K ! Aut(H) defined by conjugation such that
(k)(h) = khk
19.12. PROBLEMS
557
Problem 19.20. The purpose of this problem is to study certain affine maps of A2 .
(1) Consider affine maps of the form
x1
cos
sin
x1
b
7!
+ 1 .
x2
sin cos
x2
b2
Prove that such maps have a unique fixed point c if 6= 2k, for all integers k. Show that
these are rotations of center c, which means that with respect to a frame with origin c (the
unique fixed point), these affine maps are represented by rotation matrices.
(2) Consider affine maps of the form
x1
cos
sin
x1
b
7!
+ 1 .
x2
sin cos
x2
b2
Prove that such maps have a unique fixed point i ( + ) cos 6= 1 + . Prove that if
= 1 and > 0, there is some angle for which either there is no fixed point, or there are
infinitely many fixed points.
(3) Prove that the affine map
x1
8/5
6/5
x1
1
7!
+
x2
3/10 2/5
x2
1
has no fixed point.
(4) Prove that an arbitrary affine map
x1
a1 a2
x1
b
7!
+ 1
x2
a3 a4
x2
b2
has a unique fixed point i the matrix
a1 1
a2
a3
a4 1
is invertible.
!
Problem 19.21. Let (E, E ) be any affine space of finite dimension. For every affine map
f : E ! E, let Fix(f ) = {a 2 E | f (a) = a} be the set of fixed points of f .
(i) Prove that if Fix(f ) 6= ;, then Fix(f ) is an affine subspace of E such that for every
b 2 Fix(f ),
!
Fix(f ) = b + Ker ( f
id).
(ii) Prove that Fix(f ) contains a unique fixed point i
!
!
Ker ( f
id) = {0}, i.e., f (u) = u i u = 0.
Hint. Show that
! !
! ! !
f (a) a = f () + f (a)
for any two points , a 2 E.
!
a,
558
!
!
Problem 19.22. Given two affine spaces (E, E ) and (F, F ), let A(E, F ) be the set of all
affine maps f : E ! F .
!
!
(i) Prove that the set A(E, F ) (viewing F as an affine space) is a vector space under
the operations f + g and f defined such that
(f + g)(a) = f (a) + g(a),
( f )(a) = f (a),
for all a 2 E.
(ii) Define an action
!
+ : A(E, F ) A(E, F ) ! A(E, F )
!
of A(E, F ) on A(E, F ) as follows: For every a 2 E, every f 2 A(E, F ), and every h 2
!
A(E, F ),
(f + h)(a) = f (a) + h(a).
!
Prove that (A(E, F ), A(E, F ), +) is an affine space.
!
Hint. Show that for any two affine maps f, g 2 A(E, F ), the map f g defined such that
!
!
f g(a) = f (a)g(a)
!
!
!
(for every a 2 E) is affine, and thus f g 2 A(E, F ). Furthermore, f g is the unique map in
!
A(E, F ) such that
!
f + f g = g.
!
!
!
(iii) If E has dimension m and F has dimension n, prove that A(E, F ) has dimension
n + mn = n(m + 1).
Problem 19.23. Let (c1 , . . . , cn ) be n 3 points in Am (where m 2). Investigate whether
there is a closed polygon with n vertices (a1 , . . . , an ) such that ci is the middle of the edge
(ai , ai+1 ) for every i with 1 i n 1, and cn is the middle of the edge (an , a0 ).
Hint. The parity (odd or even) of n plays an important role. When n is odd, there is a
unique solution, and when n is even, there are no solutions or infinitely many solutions.
Clarify under which conditions there are infinitely many solutions.
Problem 19.24. Given an affine space E of dimension n and an affine frame (a0 , . . . , an ) for
E, let f : E ! E and g : E ! E be two affine maps represented by the two (n + 1) (n + 1)
matrices
A b
B c
and
0 1
0 1
w.r.t. the frame (a0 , . . . , an ). We also say that f and g are represented by (A, b) and (B, c).
19.12. PROBLEMS
(1) Prove that the composition f
559
g is represented by the matrix
AB Ac + b
.
0
1
A
A 1b
.
0
1
is
A
A> b
.
0
1
Furthermore, denoting the columns of A by A1 , . . . , An , prove that the vector A> b is the
column vector of components
(A1 b, . . . , An b)
(where denotes the standard inner product of vectors).
(3) Given two affine frames (a0 , . . . , an ) and (a00 , . . . , a0n ) for E, any affine map f : E ! E
has a matrix representation (A, b) w.r.t. (a0 , . . . , an ) and (a00 , . . . , a0n ) defined such that
!
!
!
b = a00 f (a0 ) is expressed over the basis (a00 a01 , . . . , a00 a0n ), and ai j is the ith coefficient of
!
!
f (a0!
aj ) over the basis (a00 a01 , . . . , a00 a0n ). Given any three frames (a0 , . . . , an ), (a00 , . . . , a0n ),
and (a000 , . . . , a00n ), for any two affine maps f : E ! E and g : E ! E, if f has the matrix
representation (A, b) w.r.t. (a0 , . . . , an ) and (a00 , . . . , a0n ) and g has the matrix representation
(B, c) w.r.t. (a00 , . . . , a0n ) and (a000 , . . . , a00n ), prove that g f has the matrix representation
(B, c)(A, b) w.r.t. (a0 , . . . , an ) and (a000 , . . . , a00n ).
(4) Given two affine frames (a0 , . . . , an ) and (a00 , . . . , a0n ) for E, there is a unique affine
map h : E ! E such that h(ai ) = a0i for i = 0, . . . , n, and we let (P, !) be its associated
!
matrix representation with respect to the frame (a0 , . . . , an ). Note that ! = a0 a00 , and that
!
pi j is the ith coefficient of a00 a0j over the basis (a0!
a1 , . . . , a0 !
an ). Observe that (P, !) is also
0
the matrix representation of idE w.r.t. the frames (a0 , . . . , a0n ) and (a0 , . . . , an ), in that
order. For any affine map f : E ! E, if f has the matrix representation (A, b) over the
frame (a0 , . . . , an ) and the matrix representation (A0 , b0 ) over the frame (a00 , . . . , a0n ), prove
that
(A0 , b0 ) = (P, !) 1 (A, b)(P, !).
Given any two affine maps f : E ! E and g : E ! E, where f is invertible, for any affine
frame (a0 , . . . , an ) for E, if (a00 , . . . , a0n ) is the affine frame image of (a0 , . . . , an ) under f (i.e.,
f (ai ) = a0i for i = 0, . . . , n), letting (A, b) be the matrix representation of f w.r.t. the frame
(a0 , . . . , an ) and (B, c) be the matrix representation of g w.r.t. the frame (a00 , . . . , a0n ) (not
560
the frame (a0 , . . . , an )), prove that g f is represented by the matrix (A, b)(B, c) w.r.t. the
frame (a0 , . . . , an ).
Remark: Note that this is the opposite of what happens if f and g are both represented
by matrices w.r.t. the fixed frame (a0 , . . . , an ), where g f is represented by the matrix
(B, c)(A, b). The frame (a00 , . . . , a0n ) can be viewed as a moving frame. The above has
applications in robotics, for example to rotation matrices expressed in terms of Euler angles,
or roll, pitch, and yaw.
Problem 19.25. (a) Let E be a vector space, and let U and V be two subspaces of E so
that they form a direct sum E = U V . Recall that this means that every vector x 2 E can
be written as x = u + v, for some unique u 2 U and some unique v 2 V . Define the function
pU : E ! U (resp. pV : E ! V ) so that pU (x) = u (resp. pV (x) = v), where x = u + v, as
explained above. Check that that pU and pV are linear.
(b) Now assume that E is an affine space (nontrivial), and let U and V be affine subspaces
! ! !
!
!
such that E = U
V . Pick any 2 V , and define qU : E ! U (resp. qV : E ! V , with
2 U ) so that
!
!
qU (a) = p!
(a) (resp. qV (a) = p!
(a)),
U
V
for every a 2 E.
Prove that qU does not depend on the choice of 2 V (resp. qV does not depend on the
choice of 2 U ). Define the map pU : E ! U (resp. pV : E ! V ) so that
pU (a) = a
qU (a)),
for every a 2 E.
(a0 , . . . , an ),
Chapter 20
Polynomials, Ideals and PIDs
20.1
Multisets
This chapter contains a review of polynomials and their basic properties. First, multisets
are defined. Polynomials in one variable are defined next. The notion of a polynomial
function in one argument is defined. Polynomials in several variable are defined, and so is
the notion of a polynomial function in several arguments. The Euclidean division algorithm is
presented, and the main consequences of its existence are derived. Ideals are defined, and the
characterization of greatest common divisors of polynomials in one variables (gcds) in terms
of ideals is shown. We also prove the Bezout identity. Next, we consider the factorization of
polynomials in one variables into irreducible factors. The unique factorization of polynomials
in one variable into irreducible factors is shown. Roots of polynomials and their multiplicity
are defined. It is shown that a nonnull polynomial in one variable and of degree m over an
integral domain has at most m roots. The chapter ends with a brief treatment of polynomial
interpolation: Lagrange, Newton, and Hermite interpolants are introduced.
In this chapter, it is assumed that all rings considered are commutative. Recall that a
(commutative) ring A is an integral domain (or an entire ring) if 1 6= 0, and if ab = 0, then
either a = 0 or b = 0, for all a, b 2 A. This second condition is equivalent to saying that if
a 6= 0 and b 6= 0, then ab 6= 0. Also, recall that a 6= 0 is not a zero divisor if ab 6= 0 whenever
b 6= 0. Observe that a field is an integral domain.
Our goal is to define polynomials in one or more indeterminates (or variables) X1 , . . . , Xn ,
with coefficients in a ring A. This can be done in several ways, and we choose a definition
that has the advantage of extending immediately from one to several variables. First, we
need to review the notion of a (finite) multiset.
Definition 20.1. Given a set I, a (finite) multiset over I is any function M : I ! N such
that M (i) 6= 0 for finitely many i 2 I. The multiset M such that M (i) = 0 for all i 2 I is
the empty multiset, and it is denoted by 0. If M (i) = k 6= 0, we say that i is a member of
M of multiplicity k. The union M1 + M2 of two multisets M1 and M2 is defined such that
(M1 + M2 )(i) = M1 (i) + M2 (i), for every i 2 I. If I is finite, say I = {1, . . . , n}, the multiset
561
562
Intuitively, the order of the elements of a multiset is irrelevant, but the multiplicity of
each element is relevant, contrary to sets. Every i 2 I is identified with the multiset Mi such
that Mi (i) = 1 and Mi (j) = 0 for j 6= i. When I = {1}, the set N(1) of multisets k 1 can be
identified with N and {1} . We will denote k 1 simply by k.
However, beware that when n 2, the set N(n) of multisets cannot be identified with the
set of strings in {1, . . . , n} , because multiset union is commutative, but concatenation
of strings in {1, . . . , n} is not commutative when n
2. This is because in a multiset
k1 1 + + kn n, the order is irrelevant, whereas in a string, the order is relevant. For
example, 2 1 + 3 2 = 3 2 + 2 1, but 11222 6= 22211, as strings over {1, 2}.
Nevertherless, N(n) and the set Nn of ordered n-tuples under component-wise addition
are isomorphic under the map
k1 1 + + kn n 7! (k1 , . . . , kn ).
20.2
Polynomials
2 A,
20.2. POLYNOMIALS
563
ai b j .
i+j=k
We define the polynomial ek such that ek (k) = 1 and ek (i) = 0 for i 6= k. We also denote
e0 by 1 when k = 0. Given a polynomial P , the ak = P (k) 2 A are called the coefficients
of P . If P is not the null polynomial, there is a greatest n 0 such that an 6= 0 (and thus,
ak = 0 for all k > n) called the degree of P and denoted by deg(P ). Then, P is written
uniquely as
P = a0 e 0 + a1 e 1 + + an e n .
1.
There is an injection of A into PA (1) given by the map a 7! a1 (recall that 1 denotes e0 ).
There is also an injection of N into PA (1) given by the map k 7! ek . Observe that ek = ek1
(with e01 = e0 = 1). In order to alleviate the notation, we often denote e1 by X, and we call
X a variable (or indeterminate). Then, ek = ek1 is denoted by X k . Adopting this notation,
given a nonnull polynomial P of degree n, if P (k) = ak , P is denoted by
P = a0 + a1 X + + an X n ,
or by
P = an X n + an 1 X n
+ + a0 ,
if this is more convenient (the order of the terms does not matter anyway). Sometimes, it
will also be convenient to write a polynomial as
P = a0 X n + a1 X n
+ + an .
The set PA (1) is also denoted by A[X] and a polynomial P may be denoted by P (X).
In denoting polynomials, we will use both upper-case and lower-case letters, usually, P, Q,
R, S, p, q, r, s, but also f, g, h, etc., if needed (as long as no ambiguities arise).
Given a nonnull polynomial P of degree n, the nonnull coefficient an is called the leading
coefficient of P . The coefficient a0 is called the constant term of P . A polynomial of the
form ak X k is called a monomial . We say that ak X k occurs in P if ak 6= 0. A nonzero
polynomial P of degree n is called a monic polynomial (or unitary polynomial, or monic) if
an = 1, where an is its leading coefficient, and such a polynomial can be written as
P = X n + an 1 X n
+ + a0
or
P = X n + a1 X n
+ + an .
The choice of the variable X to denote e1 is standard practice, but there is nothing special
about X. We could have chosen Y , Z, or any other symbol, as long as no ambiguities
arise.
564
Formally, the definition of PA (1) has nothing to do with X. The reason for using X is
simply convenience. Indeed, it is more convenient to write a polynomial as P = a0 + a1 X +
+ an X n rather than as P = a0 e0 + a1 e1 + + an en .
We have the following simple but crucial proposition.
Proposition 20.1. Given two nonnull polynomials P (X) = a0 +a1 X + +am X m of degree
m and Q(X) = b0 + b1 X + + bn X n of degree n, if either am or bn is not a zero divisor,
then am bn 6= 0, and thus, P Q 6= 0 and
deg(P Q) = deg(P ) + deg(Q).
In particular, if A is an integral domain, then A[X] is an integral domain.
Proof. Since the coefficient of X m+n in P Q is am bn , and since we assumed that either am or
an is not a zero divisor, we have am bn 6= 0, and thus, P Q 6= 0 and
deg(P Q) = deg(P ) + deg(Q).
Then, it is obvious that A[X] is an integral domain.
It is easily verified that A[X] is a commutative ring, with multiplicative identity 1X 0 = 1.
It is also easily verified that A[X] satisfies all the conditions of Definition 2.9, but A[X] is
not a vector space, since A is not necessarily a field.
A structure satisfying the axioms of Definition 2.9 when K is a ring (and not necessarily a
field) is called a module. As we mentioned in Section 4.2, we will not study modules because
they fail to have some of the nice properties that vector spaces have, and thus, they are
harder to study. For example, there are modules that do not have a basis.
However, when the ring A is a field, A[X] is a vector space. But even when A is just a
ring, the family of polynomials (X k )k2N is a basis of A[X], since every polynomial P (X) can
be written in a unique way as P (X) = a0 + a1 X + + an X n (with P (X) = 0 when P (X)
is the null polynomial). Thus, A[X] is a free module.
Next, we want to define the notion of evaluating a polynomial P (X) at some 2 A. For
this, we need a proposition.
Proposition 20.2. Let A, B be two rings and let h : A ! B be a ring homomorphism.
For any 2 B, there is a unique ring homomorphism ' : A[X] ! B extending h such that
'(X) = , as in the following diagram (where we denote by h+ the map h+ : A[{X} ! B
such that (h + )(a) = h(a) for all a 2 A and (h + )(X) = ):
/
A[X]
LL
LL
LL
'
LL
LL
h+
%
A [ {X}
20.2. POLYNOMIALS
565
Proof. Let '(0) = 0, and for every nonull polynomial P (X) = a0 + a1 X + + an X n , let
'(P (X)) = h(a0 ) + h(a1 ) + + h(an )
It is easily verified that ' is the unique homomorphism ' : A[X] ! B extending h such that
'(X) = .
Taking A = B in Proposition 20.2 and h : A ! A the identity, for every 2 A, there
is a unique homomorphism ' : A[X] ! A such that ' (X) = , and for every polynomial
P (X), we write ' (P (X)) as P ( ) and we call P ( ) the value of P (X) at X = . Thus, we
can define a function PA : A ! A such that PA ( ) = P ( ), for all 2 A. This function is
called the polynomial function induced by P .
More generally, PB can be defined for any (commutative) ring B such that A B. In
general, it is possible that PA = QA for distinct polynomials P, Q. We will see shortly
conditions for which the map P 7! PA is injective. In particular, this is true for A = R (in
general, any infinite integral domain). We now define polynomials in n variables.
Definition 20.3. Given n
1 and a ring A, the set PA (n) of polynomials over A in n
variables is the set of functions P : N(n) ! A such that P (k1 , . . . , kn ) 6= 0 for finitely many
(k1 , . . . , kn ) 2 N(n) . The polynomial such that P (k1 , . . . , kn ) = 0 for all (k1 , . . . , kn ) is
the null (or zero) polynomial and it is denoted by 0. We define addition of polynomials,
multiplication by a scalar, and multiplication of polynomials, as follows: Given any three
polynomials P, Q, R 2 PA (n), letting a(k1 ,...,kn ) = P (k1 , . . . , kn ), b(k1 ,...,kn ) = Q(k1 , . . . , kn ),
c(k1 ,...,kn ) = R(k1 , . . . , kn ), for every (k1 , . . . , kn ) 2 N(n) , we define R = P + Q such that
c(k1 ,...,kn ) = a(k1 ,...,kn ) + b(k1 ,...,kn ) ,
R = P , where
2 A, such that
c(k1 ,...,kn ) = a(k1 ,...,kn ) ,
For every (k1 , . . . , kn ) 2 N(n) , we let e(k1 ,...,kn ) be the polynomial such that
e(k1 ,...,kn ) (k1 , . . . , kn ) = 1 and e(k1 ,...,kn ) (h1 , . . . , hn ) = 0,
for (h1 , . . . , hn ) 6= (k1 , . . . , kn ). We also denote e(0,...,0) by 1. Given a polynomial P , the
a(k1 ,...,kn ) = P (k1 , . . . , kn ) 2 A, are called the coefficients of P . If P is not the null polynomial,
there is a greatest d
0 such that a(k1 ,...,kn ) 6= 0 for some (k1 , . . . , kn ) 2 N(n) , with d =
k1 + + kn , called the total degree of P and denoted by deg(P ). Then, P is written
uniquely as
X
P =
a(k1 ,...,kn ) e(k1 ,...,kn ) .
(k1 ,...,kn )2N(n)
1.
566
There is an injection of A into PA (n) given by the map a 7! a1 (where 1 denotes e(0,...,0) ).
There is also an injection of N(n) into PA (n) given by the map (h1 , . . . , hn ) 7! e(h1 ,...,hn ) . Note
that e(h1 ,...,hn ) e(k1 ,...,kn ) = e(h1 +k1 ,...,hn +kn ) . In order to alleviate the notation, let X1 , . . . , Xn
be n distinct variables and denote e(0,...,0,1,0...,0) , where 1 occurs in the position i, by Xi
(where 1 i n). With this convention, in view of e(h1 ,...,hn ) e(k1 ,...,kn ) = e(h1 +k1 ,...,hn +kn ) , the
polynomial e(k1 ,...,kn ) is denoted by X1k1 Xnkn (with e(0,...,0) = X10 Xn0 = 1) and it is called
a primitive monomial . Then, P is also written as
X
P =
a(k1 ,...,kn ) X1k1 Xnkn .
(k1 ,...,kn )2N(n)
a monomial a(k1 ,...,kn ) X1k1 Xnkn occurs in the polynomial P if a(k1 ,...,kn ) 6= 0.
A polynomial
P =
is homogeneous of degree d if
deg(X1k1 Xnkn ) = d,
for every monomial a(k1 ,...,kn ) X1k1 Xnkn occurring in P . If P is a polynomial of total degree
d, it is clear that P can be written uniquely as
P = P (0) + P (1) + + P (d) ,
where P (i) is the sum of all monomials of degree i occurring in P , where 0 i d.
20.2. POLYNOMIALS
567
/ A[X1 , . . . , Xn ]
TTTT
TTTT
'
TTTT
TTTT
h+
)
let
'(P (X1 , . . . , Xn )) =
h(a(k1 ,...,kn ) )
k1
1
kn
n .
It is easily verified that ' is the unique homomorphism ' : A[X1 , . . . , Xn ] ! B extending h
such that '(Xi ) = i .
Taking A = B in Proposition 20.3 and h : A ! A the identity, for every 1 , . . . , n 2 A,
there is a unique homomorphism ' : A[X1 , . . . , Xn ] ! A such that '(Xi ) = i , and for
every polynomial P (X1 , . . . , Xn ), we write '(P (X1 , . . . , Xn )) as P ( 1 , . . . , n ) and we call
P ( 1 , . . . , n ) the value of P (X1 , . . . , Xn ) at X1 = 1 , . . . , Xn = n . Thus, we can define a
function PA : An ! A such that PA ( 1 , . . . , n ) = P ( 1 , . . . , n ), for all 1 , . . . , n 2 A. This
function is called the polynomial function induced by P .
More generally, PB can be defined for any (commutative) ring B such that A B. As
in the case of a single variable, it is possible that PA = QA for distinct polynomials P, Q.
We will see shortly that the map P 7! PA is injective when A = R (in general, any infinite
integral domain).
568
P
k1
kn
Given any nonnull polynomial P (X1 , . . . , Xn ) =
(k1 ,...,kn )2N(n) a(k1 ,...,kn ) X1 Xn in
A[X1 , . . . , Xn ], where n 2, P (X1 , . . . , Xn ) can be uniquely written as
X
P (X1 , . . . , Xn ) =
Qkn (X1 , . . . , Xn 1 )Xnkn ,
20.3
We know that every natural number n 2 can be written uniquely as a product of powers of
prime numbers and that prime numbers play a very important role in arithmetic. It would be
nice if every polynomial could be expressed (uniquely) as a product of irreducible factors.
This is indeed the case for polynomials over a field. The fact that there is a division algorithm
for the natural numbers is essential for obtaining many of the arithmetical properties of the
natural numbers. As we shall see next, there is also a division algorithm for polynomials in
A[X], when A is a field.
Proposition 20.4. Let A be a ring, let f (X), g(X) 2 A[X] be two polynomials of degree
m = deg(f ) and n = deg(g) with f (X) 6= 0, and assume that the leading coefficient am of
f (X) is invertible. Then, there exist unique polynomials q(X) and r(X) in A[X] such that
g = fq + r
and
+ + a0 ,
and
g = bn X n + bn 1 X n
+ + b0 .
If n < m, then let q = 0 and r = g. Since deg(g) < deg(f ) and r = g, we have deg(r) <
deg(f ).
569
1, since n
m, note that
bn am1 X n
(am X m + am 1 X m
+ + a0 )
is a polynomial of degree deg(g1 ) < n, since the terms bn X n and bn am1 X n m am X m of degree
n cancel out. Now, since deg(g1 ) < n, by the induction hypothesis, we can find q1 and r
such that
g1 = f q1 + r and deg(r) < deg(f ) = m,
and thus,
g1 (X) = g(X)
bn am1 X n
+ q1 (X), we get
q2 ) = r 2
r1 .
r1 ) = deg(f (q1
q2 )) = deg(f ) + deg(q2
q1 ),
and so, deg(r2 r1 ) deg(f ), which contradicts the fact that deg(r1 ) < deg(f ) and deg(r2 ) <
deg(f ). Thus, q1 = q2 , and then also r1 = r2 .
It should be noted that the proof of Proposition 20.4 actually provides an algorithm for
finding the quotient q and the remainder r of the division of g by f . This algorithm is
called the Euclidean algorithm, or division algorithm. Note that the division of g by f is
always possible when f is a monic polynomial, since 1 is invertible. Also, when A is a field,
am 6= 0 is always invertible, and thus, the division can always be performed. We say that f
divides g when r = 0 in the result of the division g = f q + r. We now draw some important
consequences of the existence of the Euclidean algorithm.
570
20.4
a 2 I.
Note that every a 2 A divides 0. However, it is customary to say that a is a zero divisor
i ac = 0 for some c 6= 0. With this convention, 0 is a zero divisor unless A = {0} (the
trivial ring), and A is an integral domain i 0 is the only zero divisor in A.
Given a, b 2 A with a, b 6= 0, if (a) = (b) then there exist c, d 2 A such that a = bc and
b = ad. From this, we get a = adc and b = bcd, that is, a(1 dc) = 0 and b(1 cd) = 0. If A
is an integral domain, we get dc = 1 and cd = 1, that is, c is invertible with inverse d. Thus,
when A is an integral domain, we have b = ad, with d invertible. The converse is obvious, if
b = ad with d invertible, then (a) = (b).
As a summary, if A is an integral domain, for any a, b 2 A with a, b 6= 0, we have (a) = (b)
i there exists some invertible d 2 A such that b = ad. An invertible element u 2 A is also
called a unit.
571
1}
is easily seen to be an ideal, and in fact, it is the smallest ideal containing J. It is usually
denoted by (J).
Ideals play a very important role in the study of rings. They tend to show up everywhere.
For example, they arise naturally from homomorphisms.
Proposition 20.5. Given any ring homomorphism h : A ! B, the kernel Ker h = {a 2 A |
h(a) = 0} of h is an ideal.
Proof. Given a, b 2 A, we have a, b 2 Ker h i h(a) = h(b) = 0, and since h is a homomorphism, we get
h(b a) = h(b) h(a) = 0,
and
h(ax) = h(a)h(x) = 0
for all x 2 A, which shows that Ker h is an ideal.
There is a sort of converse property. Given a ring A and an ideal I A, we can define
the quotient ring A/I, and there is a surjective homomorphism : A ! A/I whose kernel
is precisely I.
Proposition 20.6. Given any ring A and any ideal I A, the equivalence relation I
defined by a I b i b a 2 I is a congruence, which means that if a1 I b1 and a2 I b2 ,
then
1. a1 + a2 I b1 + b2 , and
2. a1 a2 I b1 b2 .
Then, the set A/I of equivalence classes modulo I is a ring under the operations
[a] + [b] = [a + b]
[a][b] = [ab].
The map : A ! A/I such that (a) = [a] is a surjective homomorphism whose kernel is
precisely I.
572
a1 )b2 = b1 b2
a1 b 2 2 I
(b2
a2 )a1 = a1 b2
a1 a2 2 I.
a1 2 I
and
Since I is an ideal, and thus, an additive group, we get
b1 b2
a1 a2 2 I,
573
(2) Assume that I is a maximal ideal. As in (1), A/I is not the trivial ring (0). Let
[a] 6= 0 in A/I. We need to prove that [a] has a multiplicative inverse. Since [a] 6= 0, we
have a 2
/ I. Let Ia be the ideal generated by I and a. We have
I Ia
and I 6= Ia ,
since a 2
/ I, and since I is maximal, this implies that
Ia = A.
However, we know that
Ia = {ax + h | x 2 A, h 2 I},
and thus, there is some x 2 A so that
ax + h = 1,
which proves that [a][x] = 1, as desired.
Conversely, assume that A/I is a field. Again, since A/I is not the trivial ring, I 6= A.
Let J be any proper ideal such that I J, and assume that I 6= J. Thus, there is some
j 2 J I, and since Ker = I, we have (j) 6= 0. Since A/I is a field and is surjective,
there is some k 2 A so that (j)(k) = 1, which implies that
jk
1=i
i 2 J, showing
Observe that a ring A is an integral domain i (0) is a prime ideal. This is an example
of a prime ideal which is not a maximal ideal, as immediately seen in A = Z, where (p) is a
maximal ideal for every prime number p.
A less obvious example of a prime ideal which is not a maximal ideal, is the ideal (X) in
the ring of polynomials Z[X]. Indeed, (X, 2) is also a prime ideal, but (X) is properly
contained in (X, 2).
574
Definition 20.5. An integral domain in which every ideal is a principal ideal is called a
principal ring or principal ideal domain, for short, a PID.
6= 0 in K
(2) For every nonnull ideal I in K[X], there is a unique monic polynomial f 2 K[X] such
that I = (f ).
Proof. (1) If (f ) = (g), there are some nonzero polynomials q1 , q2 2 K[X] such that g = f q1
and f = gq2 . Thus, we have f = f q1 q2 , which implies f (1 q1 q2 ) = 0. Since K is a
field, by Proposition 20.1, K[X] has no zero divisor, and since we assumed f 6= 0, we must
have q1 q2 = 1. However, if either q1 or q2 is not a constant, by Proposition 20.1 again,
deg(q1 q2 ) = deg(q1 ) + deg(q2 )
1, contradicting q1 q2 = 1, since deg(1) = 0. Thus, both
q1 , q2 2 K {0}, and (1) holds with = q1 . In the other direction, it is obvious that g = f
implies that (f ) = (g).
(2) Since we are assuming that I is not the null ideal, there is some polynomial of smallest
degree in I, and since K is a field, by suitable multiplication by a scalar, we can make sure
that this polynomial is monic. Thus, let f be a monic polynomial of smallest degree in I.
By (ID2), it is clear that (f ) I. Now, let g 2 I. Using the Euclidean algorithm, there
exist unique q, r 2 K[X] such that
g = qf + r
575
Note that f and g are relatively prime i all of their gcds are constants (scalars in K),
or equivalently, if f, g have no divisor q of degree deg(q) 1.
In particular, note that f and g are relatively prime when f is a nonzero constant
polynomial (a scalar 6= 0 in K) and g is any nonzero polynomial.
We can characterize gcds of polynomials as follows.
Proposition 20.10. Let K be a field and let f, g 2 K[X] be any two nonzero polynomials.
For every polynomial d 2 K[X], the following properties are equivalent:
(1) The polynomial d is a gcd of f and g.
(2) The polynomial d divides f and g and there exist u, v 2 K[X] such that
d = uf + vg.
(3) The ideals (f ), (g), and (d) satisfy the equation
(d) = (f ) + (g).
In addition, d 6= 0, and d is unique up to multiplication by a nonzero scalar in K.
Proof. Given any two nonzero polynomials u, v 2 K[X], observe that u divides v i (v) (u).
Now, (2) can be restated as (f ) (d), (g) (d), and d 2 (f ) + (g), which is equivalent to
(d) = (f ) + (g), namely (3).
If (2) holds, since d = uf + vg, whenever h 2 K[X] divides f and g, then h divides d,
and d is a gcd of f and g.
576
Assume that d is a gcd of f and g. Then, since d divides f and d divides g, we have
(f ) (d) and (g) (d), and thus (f ) + (g) (d), and (f ) + (g) is nonempty since f and
g are nonzero. By Proposition 20.9, there exists a monic polynomial d1 2 K[X] such that
(d1 ) = (f ) + (g). Then, d1 divides both f and g, and since d is a gcd of f and g, then d1
divides d, which shows that (d) (d1 ) = (f ) + (g). Consequently, (f ) + (g) = (d), and (3)
holds.
Since (d) = (f ) + (g) and f and g are nonzero, the last part of the proposition is
obvious.
As a consequence of Proposition 20.10, two nonzero polynomials f, g 2 K[X] are relatively prime i there exist u, v 2 K[X] such that
uf + vg = 1.
The identity
d = uf + vg
of part (2) of Proposition 20.10 is often called the Bezout identity.
We derive more useful consequences of Proposition 20.10.
Proposition 20.11. Let K be a field and let f, g 2 K[X] be any two nonzero polynomials.
For every gcd d 2 K[X] of f and g, the following properties hold:
(1) For every nonzero polynomial q 2 K[X], the polynomial dq is a gcd of f q and gq.
(2) For every nonzero polynomial q 2 K[X], if q divides f and g, then d/q is a gcd of f /q
and g/q.
Proof. (1) By Proposition 20.10 (2), d divides f and g, and there exist u, v 2 K[X], such
that
d = uf + vg.
Then, dq divides f q and gq, and
dq = uf q + vgq.
By Proposition 20.10 (2), dq is a gcd of f q and gq. The proof of (2) is similar.
The following proposition is used often.
Proposition 20.12. (Euclids proposition) Let K be a field and let f, g, h 2 K[X] be any
nonzero polynomials. If f divides gh and f is relatively prime to g, then f divides h.
Proof. From Proposition 20.10, f and g are relatively prime i there exist some polynomials
u, v 2 K[X] such that
uf + vg = 1.
Then, we have
uf h + vgh = h,
and since f divides gh, it divides both uf h and vgh, and so, f divides h.
577
Proposition 20.13. Let K be a field and let f, g1 , . . . , gm 2 K[X] be some nonzero polynomials. If f and gi are relatively prime for all i, 1 i m, then f and g1 gm are relatively
prime.
Proof. We proceed by induction on m. The case m = 1 is trivial. Let h = g2 gm . By the
induction hypothesis, f and h are relatively prime. Let d be a gcd of f and g1 h. We claim
that d is relatively prime to g1 . Otherwise, d and g1 would have some nonconstant gcd d1
which would divide both f and g1 , contradicting the fact that f and g1 are relatively prime.
Now, by Proposition 20.12, since d divides g1 h and d and g1 are relatively prime, d divides
h = g2 gm . But then, d is a divisor of f and h, and since f and h are relatively prime, d
must be a constant, and f and g1 gm are relatively prime.
Definition 20.6 is generalized to any finite number of polynomials as follows.
Definition 20.7. Given any nonzero polynomials f1 , . . . , fn 2 K[X], where n 2, a polynomial d 2 K[X] is a greatest common divisor of f1 , . . . , fn (for short, a gcd of f1 , . . . , fn )
if d divides each fi and whenever h 2 K[X] divides each fi , then h divides d. We say that
f1 , . . . , fn are relatively prime if 1 is a gcd of f1 , . . . , fn .
It is easily shown that Proposition 20.10 can be generalized to any finite number of
polynomials, and similarly for its relevant corollaries. The details are left as an exercise.
Proposition 20.14. Let K be a field and let f1 , . . . , fn 2 K[X] be any n
2 nonzero
polynomials. For every polynomial d 2 K[X], the following properties are equivalent:
(1) The polynomial d is a gcd of f1 , . . . , fn .
(2) The polynomial d divides each fi and there exist u1 , . . . , un 2 K[X] such that
d = u 1 f 1 + + un f n .
(3) The ideals (fi ), and (d) satisfy the equation
(d) = (f1 ) + + (fn ).
In addition, d 6= 0, and d is unique up to multiplication by a nonzero scalar in K.
As a consequence of Proposition 20.14, some polynomials f1 , . . . , fn 2 K[X] are relatively
prime i there exist u1 , . . . , un 2 K[X] such that
u1 f1 + + un fn = 1.
The identity
u1 f 1 + + un f n = 1
578
20.5
X + 1).
579
p and g are relatively prime, and if not, because p is irreducible, we have d = p for some
6= 0 in K, and thus, p divides g. As a consequence, if p, q 2 K[X] are both irreducible,
then either p and q are relatively prime, or p = q for some 6= 0 in K. In particular, if
p, q 2 K[X] are both irreducible monic polynomials and p 6= q, then p and q are relatively
prime.
We now prove the (unique) factorization of polynomials into irreducible factors.
Theorem 20.16. Given any field K, for every nonzero polynomial
f = ad X d + ad 1 X d
of degree d = deg(f )
+ + a0
where the pi 2 K[X] are distinct irreducible monic polynomials, the ki are (not necessarily
distinct) integers, and m 1, ki 1.
Proof. First, we prove the existence of such a factorization by induction on d = deg(f ).
Clearly, it is enough to prove the result for monic polynomials f of degree d = deg(f ) 1.
If d = 1, then f = X + a0 , which is an irreducible monic polynomial.
Assume d 2, and assume the induction hypothesis for all monic polynomials of degree
< d. Consider the set S of all monic polynomials g such that deg(g)
1 and g divides
f . Since f 2 S, the set S is nonempty, and thus, S contains some monic polynomial p1 of
minimal degree. Since deg(p1 ) 1, the monic polynomial p1 must be irreducible. Otherwise
we would have p1 = g1 g2 , for some monic polynomials g1 , g2 such that deg(p1 ) > deg(g1 ) 1
and deg(p1 ) > deg(g2 )
1, and since p1 divide f , then g1 would divide f , contradicting
the minimality of the degree of p1 . Thus, we have f = p1 q, for some irreducible monic
polynomial p1 , with q also monic. Since deg(p1 ) 1, we have deg(q) < deg(f ), and we can
apply the induction hypothesis to q. Thus, we obtain a factorization of the desired form.
We now prove uniqueness. Assume that
f = ad pk11 pkmm ,
and
Thus, we have
f = ad q1h1 qnhn .
ad pk11 pkmm = ad q1h1 qnhn .
580
and since q1 and the pi are irreducible monic, we must have m = 1 and p1 = q1 .
If h1 + + hn
1, we have
pk11 pkmm = q1 q,
with
q = q1h1
qnhn ,
pkmm = q1h1
qnhn ,
581
for some q, r 2 A, with '(r) < '(b). Since b 2 I and I is an ideal, we also have bq 2 I,
and since a, bq 2 I and I is an ideal, then r 2 I with '(r) < '(b) = m, contradicting the
minimality of m. Thus, r = 0 and a 2 (b). But then,
I (b),
and since b 2 I, we get
I = (b),
and A is a PID.
As a corollary of Proposition 20.17, the ring Z is a Euclidean domain (using the function
'(a) = |a|) and thus, a PID. If K is a field, the function ' on K[X] defined such that
'(f ) =
0
if f = 0,
deg(f ) + 1 if f 6= 0,
582
p
Observe
that when d 2 (mod 4) or d 3 (mod 4), the ring of integers of Q( d) is equal to
p
Z[ d]. For more on quadratic fields and their rings of integers, see Stark [100] (Chapter 8)
or Niven, Zuckerman
and Montgomery [86] (Chapter 9). It can be shown that the rings of
p
integers, Z[
d], where d = 19, 43, 67, 163, are PIDs, but not Euclidean rings.
p
Actually the rings of integers of Q( d) that are Euclidean domains are completely determined but the proof is quite difficult. It turns out that there are twenty one such rings corresponding to the integers: 11, 7, 3, 2, 1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57
and 73, see Stark [100] (Chapter 8).
It is possible to characterize a larger class of rings (in terms of ideals), factorial rings (or
unique factorization domains), for which unique factorization holds (see Section 21.1). We
now consider zeros (or roots) of polynomials.
20.6
Roots of Polynomials
)h but not by (X
)h+1 .
583
)h g and g() 6= 0.
Proof. Assume (1). Then, we have f = (X )h g for some g 2 A[X]. If we had g() = 0,
by Proposition 20.18, g would be divisible by (X ), and then f would be divisible by
(X )h+1 , contradicting (1).
Assume (2), that is, f = (X )h g and g() 6= 0. If f is divisible by (X
we have f = (X )h+1 g1 , for some g1 2 A[X]. Then, we have
(X
)h g = (X
)h+1 , then
)h+1 g1 ,
and thus
(X
)h (g
(X
)g1 ) = 0,
h + k. If A is an integral
584
)h f1 (X) + (X
)k
)k g1 (X) = (X
)h (f1 (X) + (X
)k
g1 (X)),
f (X)g(X) = (X
Proposition 20.21. Let A be an integral domain. Let f be any nonnull polynomial f 2 A[X]
and let 1 , . . . , m 2 A be m
1 distinct roots of f of respective multiplicities k1 , . . . , km .
Then, we have
f (X) = (X 1 )k1 (X m )km g(X),
where g 2 A[X] and g(i ) 6= 0 for all i, 1 i m.
1 )k1 (m
which implies that g1 (m ) = 0. Now, by Proposition 20.20 (2), since m is not a root of the
polynomial (X 1 )k1 (X m 1 )km 1 and since A is an integral domain, m must be a
root of multiplicity km of g1 , which means that
g1 (X) = (X
m )km g(X),
1 )k1 (X
m )km g(X),
585
if n
0 (with 0 a = 0) and
na=
( n) a
is injective.
The fields Q, R, and C are of characteristic zero. In fact, it is easy to see that every
field of characteristic zero contains a subfield isomorphic to Q. Thus, finite fields cant be of
characteristic zero.
Remark: If a field is not of characteristic zero, it is not hard to show that its characteristic,
that is, the smallest n 2 such that n 1 = 0, is a prime number p. The characteristic p of K
is the generator of the principal ideal pZ, the kernel of the homomorphism : Z ! K. Thus,
every finite field is of characteristic some prime p. Infinite fields of nonzero characteristic
also exist.
586
1.
The following standard properties of derivatives are recalled without proof (prove them
as an exercise).
Given any two polynomials, f, g 2 A[X], we have
(f + g)0 = f 0 + g 0 ,
(f g)0 = f 0 g + f g 0 .
For example, if f = (X
)k g and k
f 0 = k(X
1, we have
)k 1 g + (X
)k g 0 .
We can now give a criterion for the existence of simple roots. The first proposition holds for
any ring.
Proposition 20.24. Let A be any ring. For every nonnull polynomial f 2 A[X], 2 A is
a simple root of f i is a root of f and is not a root of f 0 .
Proof. Since 2 A is a root of f , we have f = (X )g for some g 2 A[X]. Now, is a
simple root of f i g() 6= 0. However, we have f 0 = g + (X )g 0 , and so f 0 () = g().
Thus, is a simple root of f i f 0 () 6= 0.
We can improve the previous proposition as follows.
Proposition 20.25. Let A be any ring. For every nonnull polynomial f 2 A[X], let 2 A
be a root of multiplicity k 1 of f . Then, is a root of multiplicity at least k 1 of f 0 . If
A is a field of characteristic zero, then is a root of multiplicity k 1 of f 0 .
587
)k 1 g + (X
)k g 0 = (X
)k 1 (kg + (X
)g 0 ),
it is clear that the multiplicity of w.r.t. f 0 is at least k 1. Now, (kg+(X )g 0 )() = kg(),
and if A is of characteristic zero, since g() 6= 0, then kg() 6= 0. Thus, is a root of
multiplicity k 1 of f 0 .
As a consequence, we obtain the following test for the existence of a root of multiplicity
k for a polynomial f :
Given a field K of characteristic zero, for any nonnull polynomial f 2 K[X], any 2 K
is a root of multiplicity k 1 of f i is a root of f, D1 f, D2 f, . . . , Dk 1 f , but not a root of
Dk f .
We can now return to polynomial functions and tie up some loose ends. Given a ring A,
recall that every polynomial f 2 A[X1 , . . . , Xn ] induces a function fA : An ! A defined such
that fA (1 , . . . , n ) = f (1 , . . . , n ), for every (1 , . . . , n ) 2 An . We now give a sufficient
condition for the mapping f 7! fA to be injective.
Proposition 20.26. Let A be an integral domain. For every polynomial f 2 A[X1 , . . . , Xn ],
if A1 , . . . , An are n infinite subsets of A such that f (1 , . . . , n ) = 0 for all (1 , . . . , n ) 2
A1 An , then f = 0, i.e., f is the null polynomial. As a consequence, if A is an infinite
integral domain, then the map f 7! fA is injective.
Proof. We proceed by induction on n. Assume n = 1. If f 2 A[X1 ] is nonnull, let m = deg(f )
be its degree. Since A1 is infinite and f (1 ) = 0 for all 1 2 A1 , then f has an infinite number
of roots. But since f is of degree m, this contradicts Theorem 20.22. Thus, f = 0.
If n
+ + g0 ,
588
f = 0,
V (g1 gm ), then f = 0.
Remark: Letting U (g) = A V (g), the identity V (g1 )[ [V (gm ) = V (g1 gm ) translates
to U (g1 ) \ \ U (gm ) = U (g1 gm ). This suggests to define a topology on A whose basis
of open sets consists of the sets U (g). In this topology (called the Zariski topology), the
sets of the form V (g) are closed sets. Also, when g1 , . . . , gm 2 A[X1 , . . . , Xn ] and n
2,
understanding the structure of the closed sets of the form V (g1 )\ \V (gm ) is quite difficult,
and it is the object of algebraic geometry (at least, its classical part).
When f 2 A[X1 , . . . , Xn ] and n 2, one should not apply Proposition 20.26 abusively.
For example, let
f (X, Y ) = X 2 + Y 2 1,
considered as a polynomial in R[X, Y ]. Since R is an infinite field and since
1 t2 2t
(1 t2 )2
(2t)2
f
,
=
+
1 = 0,
1 + t2 1 + t2
(1 + t2 )2 (1 + t2 )2
HERMITE)
589
for every t 2 R, it would be tempting to say that f = 0. But whats wrong with the above
reasoning is that there are no two infinite subsets R1 , R2 of R such that f (1 , 2 ) = 0 for
all (1 , 2 ) 2 R2 . For every 1 2 R, there are at most two 2 2 R such that f (1 , 2 ) = 0.
What the example shows though, is that a nonnull polynomial f 2 A[X1 , . . . , Xn ] where
n 2 can have an infinite number of zeros. This is in contrast with nonnull polynomials in
one variables over an infinite field (which have a number of roots bounded by their degree).
We now look at polynomial interpolation.
20.7
Let K be a field. First, we consider the following interpolation problem: Given a sequence
(1 , . . . , m+1 ) of pairwise distinct scalars in K and any sequence ( 1 , . . . , m+1 ) of scalars
in K, where the j are not necessarily distinct, find a polynomial P (X) of degree m such
that
P (1 ) = 1 , . . . , P (m+1 ) = m+1 .
First, observe that if such a polynomial exists, then it is unique. Indeed, this is a
consequence of Proposition 20.23. Thus, we just have to find any polynomial of degree m.
Consider the following so-called Lagrange polynomials:
Li (X) =
(X
(i
1 ) (X
1 ) (i
i 1 )(X
i 1 )(i
i+1 ) (X
i+1 ) (i
m+1 )
.
m+1 )
Note that L(i ) = 1 and that L(j ) = 0, for all j 6= i. But then,
P (X) =
1 L1
+ +
m+1 Lm+1
+ +
= 0,
setting X to i , we would get i = 0. Thus, the Li are linearly independent, and by the
previous argument, they are a set of generators. We we call (L1 , . . . , Lm+1 ) the Lagrange
basis (of order m + 1).
It is known from numerical analysis that from a computational point of view, the Lagrange
basis is not very good. Newton proposed another solution, the method of divided dierences.
Consider the polynomial P (X) of degree m, called the Newton interpolant,
P (X) =
1 (X
1 ) +
2 (X
1 )(X
2 ) + +
m (X
1 )(X
2 ) (X
m ).
590
Q(X) = P (X),
Q(1 , . . . , i , X) = i + i+1 (X
Q(1 , . . . , m , X) = m .
i+1 ) + +
m (X
i+1 ) (X
m ),
= Q(1 ),
= Q(1 , . . . , i , i+1 ),
= Q(1 , . . . , m , m+1 ).
The expression Q(1 , 2 , . . . , i+1 ) is called the i-th dierence quotient. Then, we can
compute the i in terms of 1 = P (1 ), . . . , m+1 = P (m+1 ), using the inductive formulae
for the Q(1 , . . . , i , X) given above, initializing the Q(i ) such that Q(i ) = i .
The above method is called the method of divided dierences and it is due to Newton.
An astute observation may be used to optimize the computation. Observe that if Pi (X)
is the polynomial of degree i taking the values 1 , . . . , i+1 at the points 1 , . . . , i+1 , then
the coefficient of X i in Pi (X) is Q(1 , 2 , . . . , i+1 ), which is the value of i in the Newton
interpolant
Pi (X) =
1 (X
1 ) +
2 (X
1 )(X
2 ) + +
i (X
1 )(X
2 ) (X
i ).
As a consequence, Q(1 , 2 , . . . , i+1 ) does not depend on the specific ordering of the j
and there are better ways of computing it. For example, Q(1 , 2 , . . . , i+1 ) can be computed
HERMITE)
591
using
Q(1 , . . . , i+1 ) =
Q(2 , . . . , i+1 )
i+1
Q(1 , . . . , i )
.
1
Then, the computation can be arranged into a triangular array reminiscent of Pascals
triangle, as follows:
Initially, Q(j ) =
j,
1 j m + 1, and
Q(1 )
Q(1 , 2 )
Q(2 )
Q(1 , 2 , 3 )
Q(2 , 3 )
Q(3 )
Q(3 , 4 )
Q(4 )
...
...
...
Q(2 , 3 , 4 )
...
In this computation, each successive column is obtained by forming the dierence quotients of the preceeding column according to the formula
Q(k , . . . , i+k ) =
The
Observe that if we performed the above computation starting with a polynomial Q(X)
of degree m, we could extend it by considering new given points m+2 , m+3 , etc. Then,
from what we saw above, the (m + 1)th column consists of m in the expression of Q(X) as
a Newton interpolant and the (m + 2)th column consists of zeros. Such divided dierences
are used in numerical analysis.
Newtons method can be used to compute the value P () at some of the interpolant
P (X) taking the values 1 , . . . , m+1 for the (distinct) arguments 1 , . . . , m+1 . We also
mention that inductive methods for computing P () without first computing the coefficients
of the Newton interpolant exist, for example, Aitkens method. For this method, the reader
is referred to Farin [36].
It has been observed that Lagrange interpolants oscillate quite badly as their degree
increases, and thus, this makes them undesirable as a stable method for interpolation. A
standard example due to Runge, is the function
f (x) =
1
,
1 + x2
in the interval [ 5, +5]. Assuming a uniform distribution of points on the curve in the
interval [ 5, +5], as the degree of the Lagrange interpolant increases, the interpolant shows
592
wilder and wilder oscillations around the points x = 5 and x = +5. This phenomenon
becomes quite noticeable beginning for degree 14, and gets worse and worse. For degree 22,
things are quite bad! Equivalently, one may consider the function
f (x) =
1
,
1 + 25x2
0
...
P (m+1 ) = m+1
,
1
1
...
D P (m+1 ) = m+1 ,
...
i
Di P (1 ) = 1i , . . .
Di P (m+1 ) = m+1
,
...
nm+1
Dn1 P (1 ) = 1n1 , . . . Dnm+1 P (m+1 ) = m+1
.
Note that the above equations constitute n + 1 constraints, and thus, we can expect that
there is a unique polynomial of degree n satisfying the above problem. This is indeed the
case and such a polynomial is called a Hermite polynomial . We call the above problem the
Hermite interpolation problem.
Proposition 20.28. The Hermite interpolation problem has a unique solution of degree n,
where n = n1 + + nm+1 + m.
Proof. First, we prove that the Hermite interpolation problem has at most one solution.
Assume that P and Q are two distinct solutions of degree n. Then, by Proposition 20.25
and the criterion following it, P Q has among its roots 1 of multiplicity at least n1 + 1, . . .,
m+1 of multiplicity at least nm+1 + 1. However, by Theorem 20.22, we should have
n1 + 1 + + nm+1 + 1 = n1 + + nm+1 + m + 1 n,
which is a contradiction, since n = n1 + + nm+1 + m. Thus, P = Q. We are left with
proving the existence of a Hermite interpolant. A quick way to do so is to use Proposition
5.13, which tells us that given a square matrix A over a field K, the following properties
hold:
HERMITE)
593
For every column vector B, there is a unique column vector X such that AX = B i the
only solution to AX = 0 is the trivial vector X = 0 i D(A) 6= 0.
j ) + b)Lj DLj ,
2DLj (j )(X
j ))L2j ,
Hj1 = (X
j )L2j .
= 2X 3 3X 2 + 1,
= 2X 3 + 3X 2 ,
= X 3 2X 2 + X,
= X 3 X 2.
3X 2 + 1) + m0 (X 3
2X 2 + X) + m1 (X 3
X 2 ) + x1 ( 2X 3 + 3X 2 ).
594
3t2 + 1) + (b
a)m0 (t3
2t2 + t) + (b
where
X
b
t=
Observe the presence of the extra factor (b
be false otherwise!
a)m1 (t3
t2 ) + x1 ( 2t3 + 3t2 ),
a
.
a
a) in front of m0 and m1 , the formula would
j )2 + bi (X
j ) + ci )L3j ,
3 2
0
2
2
Hj (X) = 6(DLj (j ))
D Lj (j ) (X j )
3DLj (j )(X j ) + 1 L3j (X),
2
1
2
Hj (X) = 9(DLj (j )) (X j )2 3DLj (j )(X j ) L3j (X),
1
Hj2 (X) = (X
2
j )2 L3j (X).
Going back to the general problem, it seems to us that a kind of Newton interpolant will
be more manageable. Let
P00 (X) = 1,
Pj0 (X) = (X
P0i (X) = (X
Pji (X) = (X
1 )n1 +1 (X
j )nj +1 , 1 j m
1 )n1 +1 (X
j )nj +1 (X
1 )n1 +1 (X
m )nm +1 (X
1 )i (X
1jm
Pmi (X) = (X
2 )n2 +1 (X
m+1 )nm+1 +1 , 1 i n1 ,
j+1 )i (X
1, 1 i nj+1 ,
and let
j+2 )nj+2 +1 (X
m+1 )nm+1 +1 ,
m+1 )i , 1 i nm+1 ,
j=m,i=nj+1
P (X) =
i i
j Pj (X).
j=0,i=0
We can think of P (X) as a generalized Newton interpolant. We can compute the derivatives Dk Pji , for 1 k nj+1 , and if we look for the Hermite basis polynomials Hji (X) such
that Di Hji (j ) = 1 and Dk Hji (l ) = 0, for k 6= i or l 6= j, 1 j, l m + 1, 0 i, k nj ,
we find that we have to solve triangular systems of linear equations. Thus, as in the simple
case n1 = . . . = nm+1 = 0, we can solve successively for the ij . Obviously, the computations
are quite formidable and we leave such considerations for further study.
Chapter 21
UFDs, Noetherian Rings, Hilberts
Basis Theorem
21.1
We saw in Section 20.5 that if K is a field, then every nonnull polynomial in K[X] can
be factored as a product of irreducible factors, and that such a factorization is essentially
unique. The same property holds for the ring K[X1 , . . . , Xn ] where n
2, but a dierent
proof is needed.
The reason why unique factorization holds for K[X1 , . . . , Xn ] is that if A is an integral
domain for which unique factorization holds in some suitable sense, then the property of
unique factorization lifts to the polynomial ring A[X]. Such rings are called factorial rings,
or unique factorization domains. The first step if to define the notion of irreducible element
in an integral domain, and then to define a factorial ring. If will turn out that in a factorial
ring, any nonnull element a is irreducible (or prime) i the principal ideal (a) is a prime
ideal.
Recall that given a ring A, a unit is any invertible element (w.r.t. multiplication). The
set of units of A is denoted by A . It is a multiplicative subgroup of A, with identity 1. Also,
given a, b 2 A, recall that a divides b if b = ac for some c 2 A; equivalently, a divides b i
(b) (a). Any nonzero a 2 A is divisible by any unit u, since a = u(u 1 a). The relation a
divides b, often denoted by a | b, is reflexive and transitive, and thus, a preorder on A {0}.
Definition 21.1. Let A be an integral domain. Some element a 2 A is irreducible if a 6= 0,
a2
/ A (a is not a unit), and whenever a = bc, then either b or c is a unit (where b, c 2 A).
Equivalently, a 2 A is reducible if a = 0, or a 2 A (a is a unit), or a = bc where b, c 2
/ A
(a, b are both noninvertible) and b, c 6= 0.
Observe that if a 2 A is irreducible and u 2 A is a unit, then ua is also irreducible.
Generally, if a 2 A, a 6= 0, and u is a unit, then a and ua are said to be associated . This
is the equivalence relation on nonnull elements of A induced by the divisibility preorder.
595
596
is a unit.
It should be noted that the converse of Proposition 21.1 is generally false. However, it
holds for factorial rings, defined next.
Definition 21.2. A factorial ring or unique factorization domain (UFD) (or unique factorization ring) is an integral domain A such that the following two properties hold:
(1) For every nonnull a 2 A, if a 2
/ A (a is not a unit), then a can be factored as a product
a = a1 am
where each ai 2 A is irreducible (m
1).
597
p
There are integral domains that are not UFDs. For example,
the subring Z[
5] of C
p
consisting of the complex numbers of the form a + bi 5 where a, b 2 Z is not a UFD.
Indeed, we have
p
p
9 = 3 3 = (2 + i 5)(2 i 5),
p
p
and it can be shown that 3, 2 + i 5, and 2p i 5 are irreducible, and that the units are 1.
The uniqueness condition (2) fails and Z[
5] is not a UFD.
p
Remark: For d 2 Z with d < 0, it is known that the ring of integers of Q( d) is a UFD i d
is one of the nine primes, d = 1, 2, 3, 7, 11, 19, 43, 67 and 163. This is a hard
theorem that was conjectured by Gauss but not proved until 1966, independently by Stark
and Baker. Heegner had published a proof of this result in 1952 but there was some doubt
about its validity. After finding his proof, Stark reexamined Heegners proof and concluded
that it was essentially correct after all. In sharp contrast, when
p d is a positive integer, the
problem of determining which of the rings of integers
p of Q( d) are UFDs, is still open. It
d] is a UFD i d = 1 or d = 2. If
can also be shown that
if
d
<
0,
then
the
ring
Z[
p
d 1 (mod 4), then Z[ d] is never a UFD. For more details about these remarkable results,
see Stark [100] (Chapter 8).
Proposition 21.2. Let A be an integral domain satisfying condition (1) in Definition 21.2.
Then, condition (2) in Definition 21.2 is equivalent to the following condition:
(20 ) If a 2 A is irreducible and a divides the product bc, where b, c 2 A and b, c 6= 0, then
either a divides b or a divides c.
Proof. First, assume that (2) holds. Let bc = ad, where d 2 A, d 6= 0. If b is a unit, then
c = adb 1 ,
and c is divisible by a. A similar argument applies to c. Thus, we may assume that b and c
are not units. In view of (1), we can write
b = p1 pm
where pi 2 A is irreducible. Since bc = ad, a is irreducible, and b, c are not units, d cannot
be a unit. In view of (1), we can write
d = q1 qr ,
where qi 2 A is irreducible. Thus,
p1 pm pm+1 pm+n = aq1 qr ,
where all the factors involved are irreducible. By (2), we must have
a = ui 0 p i 0
598
a = a1 am = b 1 b n ,
where ai 2 A and bj 2 A are irreducible. Without loss of generality, we may assume that
m n. We proceed by induction on m. If m = 1,
a1 = b 1 b n ,
and since a1 is irreducible, u = b1 bi 1 bi+1 bn must be a unit for some i, 1 i n. Thus,
(2) holds with n = 1 and a1 = bi u. Assume that m > 1 and that the induction hypothesis
holds for m 1. Since
a1 a2 am = b 1 b n ,
a1 divides b1 bn , and in view of (20 ), a1 divides some bj . Since a1 and bj are irreducible,
we must have bj = uj a1 , where uj 2 A is a unit. Since A is an integral domain,
a1 a2 am = b1 bj 1 uj a1 bj+1 bn
implies that
a2 am = (uj b1 ) bj 1 bj+1 bn ,
and by the induction hypothesis, m 1 = n 1 and ai = vi b (i) for some units vi 2 A and
some bijection between {2, . . . , m} and {1, . . . , j 1, j + 1, . . . , m}. However, the bijection
extends to a permutation of {1, . . . , m} by letting (1) = j, and the result holds by
letting v1 = uj 1 .
As a corollary of Proposition 21.2. we get the converse of Proposition 21.1.
Proposition 21.3. Let A be a factorial ring. For any a 2 A with a 6= 0, the principal ideal
(a) is a prime ideal i a is irreducible.
Proof. In view of Proposition 21.1, we just have to prove that if a 2 A is irreducible, then the
principal ideal (a) is a prime ideal. Indeed, if bc 2 (a), then a divides bc, and by Proposition
21.2, property (20 ) implies that either a divides b or a divides c, that is, either b 2 (a) or
c 2 (a), which means that (a) is prime.
Because Proposition 21.3 holds, in a UFD, an irreducible element is often called a prime.
In a UFD A, every nonzero element a 2 A that is not a unit can be expressed as a
product a = a1 an of irreducible elements ai , and by property (2), the number n of factors
only depends on a, that is, it is the same for all factorizations into irreducible factors. We
agree that this number is 0 for a unit.
Remark: If A is a UFD, we can state the factorization properties so that they also applies
to units:
599
0).
600
Proof. Assume that f (X) = ag(X), for some g(X) 2 A[X]. Since a 6= 0 and A is an
integral ring, f (X) and g(X) have the same degree m, and since for every i (0 i m)
the coefficient of X i in f (X) is equal to the coefficient of X i in ag(x), we have fi = agi , and
whenever fi 6= 0, we see that a divides fi .
Lemma 21.5. (Gausss lemma) Let A be a UFD. For any a 2 A, if a is irreducible and a
divides the product f (X)g(X) of two polynomials f (X), g(X) 2 A[X], then either a divides
f (X) or a divides g(X).
Proof. Let f (X) = fm X m + + fi X i + + f0 and g(X) = gn X n + + gj X j + + g0 .
Assume that a divides neither f (X) nor g(X). By the (easy) converse of Proposition 21.4,
there is some i (0 i m) such that a does not divide fi , and there is some j (0 j n)
such that a does not divide gj . Pick i and j minimal such that a does not divide fi and a
does not divide gj . The coefficient ci+j of X i+j in f (X)g(X) is
ci+j = f0 gi+j + f1 gi+j
+ + fi gj + + fi+j g0
(letting fh = 0 if h > m and gk = 0 if k > n). From the choice of i and j, a cannot divide
fi gj , since a being irreducible, by (20 ) of Proposition 21.2, a would divide fi or gj . However,
by the choice of i and j, a divides every other nonnull term in the sum for ci+j , and since a
is irreducible and divides f (X)g(X), by Proposition 21.4, a divides ci+j , which implies that
a divides fi gj , a contradiction. Thus, either a divides f (X) or a divides g(X).
As a corollary, we get the following proposition.
Proposition 21.6. Let A be a UFD. For any a 2 A, a 6= 0, if a divides the product
f (X)g(X) of two polynomials f (X), g(X) 2 A[X] and f (X) is irreducible and of degree at
least 1, then a divides g(X).
Proof. The Proposition is trivial is a is a unit. Otherwise, a = a1 am where ai 2 A is
irreducible. Using induction and applying Lemma 21.5, we conclude that a divides g(X).
We now show that Lemma 21.5 also applies to the case where a is an irreducible polynomial. This requires a little excursion involving the fraction field F of A.
Remark: If A is a UFD, it is possible to prove the uniqueness condition (2) for A[X] directly
without using the fraction field of A, see Malliavin [75], Chapter 3.
Given an integral domain A, we can construct a field F such that every element of F
is of the form a/b, where a, b 2 A, b 6= 0, using essentially the method for constructing the
field Q of rational numbers from the ring Z of integers.
Proposition 21.7. Let A be an integral domain.
(1) There is a field F and an injective ring homomorphism i : A ! F such that every
element of F is of the form i(a)i(b) 1 , where a, b 2 A, b 6= 0.
601
(2) For every field K and every injective ring homomorphism h : A ! K, there is a
(unique) field homomorphism b
h : F ! K such that
for all a, b 2 A, b 6= 0.
b
h(i(a)i(b) 1 ) = h(a)h(b)
i ab0 = a0 b,
= h(a0 )h(b0 ) 1 .
b
h(a/b) = b
h(i(a)i(b) 1 ) = h(a)h(b)
602
The field F given by Proposition 21.7 is called the fraction field of A, and it is denoted
by Frac(A).
In particular, given an integral domain A, since A[X1 , . . . , Xn ] is also an integral domain, we can form the fraction field of the polynomial ring A[X1 , . . . , Xn ], denoted by
F (X1 , . . . , Xn ), where F = Frac(A) is the fraction field of A. It is also called the field
of rational functions over F , although the terminology is a bit misleading, since elements of
F (X1 , . . . , Xn ) only define functions when the dominator is nonnull.
We now have the following crucial lemma which shows that if a polynomial f (X) is
reducible over F [X] where F is the fraction field of A, then f (X) is already reducible over
A[X].
Lemma 21.8. Let A be a UFD and let F be the fraction field of A. For any nonnull
polynomial f (X) 2 A[X] of degree m, if f (X) is not the product of two polynomials of
degree strictly smaller than m, then f (X) is irreducible in F [X].
Proof. Assume that f (X) is reducible in F [X] and that f (X) is neither null nor a unit.
Then,
f (X) = G(X)H(X),
where G(X), H(X) 2 F [X] are polynomials of degree p, q
1. Let a be the product of
the denominators of the coefficients of G(X), and b the product of the denominators of
the coefficients of H(X). Then, a, b 6= 0, g1 (X) = aG(X) 2 A[X] has degree p
1,
h1 (X) = bH(X) 2 A[X] has degree q 1, and
abf (X) = g1 (X)h1 (X).
Let c = ab. If c is a unit, then f (X) is also reducible in A[X]. Otherwise, c = c1 cn ,
where ci 2 A is irreducible. We now use induction on n to prove that
f (X) = g(X)h(X),
for some polynomials g(X) 2 A[X] of degree p
1.
603
1. By
f (X) = g(X)h(X),
for some polynomials g(X) 2 A[X] of degree p
showing that f (X) is reducible in A[X].
1,
1 and
604
605
606
(an )
n 1
(an ) = (a)
n 1
for some a 2 A. However, there must be some n such that a 2 (an ), and thus,
(an ) (a) (an ),
and the chain stabilizes at (an ). As a consequence, for any ideal (d) such that
(an ) (d)
607
and (an ) 6= (d), d has the desired factorization. Observe that an is not irreducible, since
(an ) 2 S, and thus,
an = bc
for some b, c 2 A, where neither b nor c is a unit. Then,
(an ) (b) and (an ) (c).
If (an ) = (b), then b = an u for some u 2 A, and then
an = an uc,
so that
1 = uc,
since A is an integral domain, and thus, c is a unit, a contradiction. Thus, (an ) 6= (b), and
similarly, (an ) 6= (c). But then, both b and c factor as products of irreducible elements and
so does an = bc, a contradiction. This implies that S = ;.
To prove the uniqueness of factorizations, we use Proposition 21.2. Assume that a is
irreducible and that a divides bc. If a does not divide b, by a previous remark, a and b are
relatively prime, and by Proposition 21.11, there are some x, y 2 A such that
ax + by = 1.
Thus,
acx + bcy = c,
and since a divides bc, we see that a must divide c, as desired.
Thus, we get another justification of the fact that Z is a UFD and that if K is a field,
then K[X] is a UFD.
It should also be noted that in a UFD, gcds of nonnull elements always exist. Indeed,
this is trivial if a or b is a unit, and otherwise, we can write
a = p1 pm
and b = q1 qn
where pi , qj 2 A are irreducible, and the product of the common factors of a and b is a gcd
of a and b (it is 1 is there are no common factors).
We conclude this section on UFDs by proving a proposition characterizing when a UFD
is a PID. The proof is nontrivial and makes use of Zorns lemma (several times).
Proposition 21.13. Let A be a ring that is a UFD, and not a field. Then, A is a PID i
every nonzero prime ideal is maximal.
608
Proof. Assume that A is a PID that is not a field. Consider any nonzero prime ideal, (p),
and pick any proper ideal A in A such that
(p) A.
Since A is a PID, the ideal A is a principal ideal, so A = (q), and since A is a proper nonzero
ideal, q 6= 0 and q is not a unit. Since
(p) (q),
q divides p, and we have p = qp1 for some p1 2 A. Now, by Proposition 21.1, since p 6= 0
and (p) is a prime ideal, p is irreducible. But then, since p = qp1 and p is irreducible, p1
must be a unit (since q is not a unit), which implies that
(p) = (q);
that is, (p) is a maximal ideal.
Conversely, let us assume that every nonzero prime ideal is maximal. First, we prove that
every prime ideal is principal. This is obvious for (0). If A is a nonzero prime ideal, then,
by hypothesis, it is maximal. Since A 6= (0), there is some nonzero element a 2 A. Since A
is maximal, a is not a unit, and since A is a UFD, there is a factorization a = a1 an of a
into irreducible elements. Since A is prime, we have ai 2 A for some i. Now, by Proposition
21.3, since ai is irreducible, the ideal (ai ) is prime, and so, by hypothesis, (ai ) is maximal.
Since (ai ) A and (ai ) is maximal, we get A = (ai ).
Next, assume that A is not a PID. Define the set, F, by
F = {A | A A,
Since A is not a PID, the set F is nonempty. Also, the reader will easily check that every
chain in F is bounded. Then, by Zorns lemma (Lemma 31.1), the set F has some maximal
element, A. Clearly, A 6= (0) is a proper ideal (since A = (1)), and A is not prime, since we
just showed that prime ideals are principal. Then, by Theorem 31.3, there is some maximal
ideal, M, so that A M. However, a maximal ideal is prime, and we have shown that a
prime ideal is principal. Thus,
A (p),
for some p 2 A that is not a unit. Moreover, by Proposition 21.1, the element p is irreducible.
Define
B = {a 2 A | pa 2 A}.
609
Observe that the above proof shows that Proposition 21.13 also holds under the assumption that every prime ideal is principal.
21.2
In this section, which is a bit of an interlude, we prove a basic result about quotients of
commutative rings by products of ideals that are pairwise relatively prime. This result has
applications in number theory and in the structure theorem for finitely generated modules
over a PID, which will be presented later.
Given two ideals a and b of a ring A, we define the ideal ab as the set of all finite sums
of the form
a1 b1 + + ak bk , ai 2 a, bi 2 b.
The reader should check that ab is indeed an ideal. Observe that ab a and ab b, so that
ab a \ b.
In general, equality does not hold. However if
a + b = A,
then we have
ab = a \ b.
This is because there is some a 2 a and some b 2 b such that
a + b = 1,
so for every x 2 a \ b, we have
x = xa + xb,
which shows that x 2 ab. Ideals a and b of A that satisfy the condition a + b = A are
sometimes said to be comaximal .
We define the homomorphism ' : A ! A/a A/b by
'(x) = (xa , xb ),
where xa is the equivalence class of x modulo a (resp. xb is the equivalence class of x modulo
b). Recall that the ideal a defines the equivalence relation a on A given by
x a y
i x
y 2 a,
and that A/a is the quotient ring of equivalence classes xa , where x 2 A, and similarly for
A/b. Sometimes, we also write x y (mod a) for x a y.
610
(mod a)
(mod b).
a)y a y
ay a y,
x b az b (1
b)z b z
bz b z,
and similarly
which shows that x = az + by works.
Theorem 21.14 can be generalized to any (finite) number of ideals.
Theorem 21.15. (Chinese Remainder Theorem) Given a commutative ring A, let a1 , . . . , an
be any n
2 ideals of A such that ai + aj = A for all i 6= j. Then, the homomorphism
: A/a1 an ! A/a1 A/an is an isomorphism.
Proof. The map : A/a1 \ \ an ! A/a1 A/an is induced by the homomorphism
' : A ! A/a1 A/an given by
'(x) = (xa1 , . . . , xan ).
Clearly, Ker (') = a1 \ \ an , so is well-defined and injective. We need to prove that
a1 \ \ an = a1 an
611
a1 + a2 an = A.
a1 \ a2 \ \ an = a1 \ (a2 an ) = a1 a2 an .
Let us now prove that is surjective by induction. The case n = 2 is Theorem 21.14. Let
x1 , . . . , xn be any n 3 elements of A. First, applying Theorem 21.14 to a1 and a2 an ,
we can find y1 2 A such that
y1 1
y1 0
(mod a1 )
(mod a2 an ).
By the induction hypothesis, we can find y2 , . . . , yn 2 A such that for all i, j with 2 i, j n,
yi 1
yi 0
(mod ai )
(mod aj ),
j 6= i.
We claim that
x = x1 y1 + x2 y2 + + xn yn
(mod ai ),
(mod ai ),
i = 2, . . . , n
(mod ai ),
i = 2, . . . , n.
()
612
For i = 1, we get
x x1
(mod a1 ),
therefore
x xi
(mod ai ),
i = 1, . . . , n.
proving surjectivity.
The classical version of the Chinese Remainder Theorem is the case where A = Z and
where the ideals ai are defined by n pairwise relatively prime integers m1 , . . . , mn . By the
Bezout identity, since mi and mj are relatively prime whenever i 6= j, there exist some
ui , uj 2 Z such that ui mi + uj mj = 1, and so mi Z + mj Z = Z. In this case, we get an
isomorphism
n
Y
Z/(m1 mn )Z
Z/mi Z.
i=1
Z/pri i Z.
In the previous situation where the integers m1 , . . . , mn are pairwise relatively prime, if
we write m = m1 mn and m0i = m/mi for i = 1 . . . , n, then mi and m0i are relatively
prime, and so m0i has an inverse modulo mi . If ti is such an inverse, so that
m0i ti 1
(mod mi ),
(mod mi ),
i = 1, . . . , n.
Theorem 21.15 can be used to characterize rings isomorphic to finite products of quotient
rings. Such rings play a role in the structure theorem for torsion modules over a PID.
Given n rings A1 , . . . , An , recall that the product ring A = A1 An is the ring in
which addition and multiplication are defined componenwise. That is,
(a1 , . . . , an ) + (b1 , . . . , bn ) = (a1 + b1 , . . . , an + bn )
(a1 , . . . , an ) (b1 , . . . , bn ) = (a1 b1 , . . . , an bn ).
613
The additive identity is 0A = (0, . . . , 0) and the multiplicative identity is 1A = (1, . . . , 1).
Then, for i = 1, . . . , n, we can define the element ei 2 A as follows:
ei = (0, . . . , 0, 1, 0, . . . , 0),
where the 1 occurs in position i. Observe that the following properties hold for all i, j =
1, . . . , n:
e2i = ei
ei ej = 0, i 6= j
e 1 + + e n = 1A .
Also, for any element a = (a1 , . . . , an ) 2 A, we have
ei a = (0, . . . , 0, ai , 0, . . . , 1) = pri (a),
where pri is the projection of A onto Ai . As a consequence
Ker (pri ) = (1A
ei )A.
ei )A, for i, j = 1, . . . , n.
614
Proof. Assume (a). Since we have an isomorphism A A/b1 A/bn , we may identify
A with A/b1 A/bn , and bi with Ker (pri ). Then, e1 , . . . , en are the elements defined
just before Definition 21.3. As noted, bi = Ker (pri ) = (1A ei )A. This proves (b).
Assume (b). Since bi = (1A ei )A and A is a ring with unit 1A , we have 1A ei 2 bi
for i = 1, . . . , n. For all i 6= j, we also have ei (1A ej ) = ei ei ej = ei , so (because bj is an
ideal), ei 2 bj , and thus, 1A = 1A ei + ei 2 bi + bj , which shows that bi + bj = A for all
i 6= j. Furthermore, for any xi 2 A, with 1 i n, we have
Y
Y
n
n
n
Y
xi (1A ei ) =
xi
(1A ei )
i=1
i=1
Y
n
i=1
n
X
xi (1A
i=1
ei )
i=1
= 0,
The equivalence of (c) and (d) follows from the proof of Theorem 21.15.
The fact that (c) implies (a) is an immediate consequence of Theorem 21.15.
21.3
Given a (commutative) ring A (with unit element 1), an ideal A A is said to be finitely
generated if there exists a finite set {a1 , . . . , an } of elements from A so that
A = (a1 , . . . , an ) = { 1 a1 + +
n an
2 A, 1 i n}.
n)
=0
for all polynomials Pi (X1 , . . . , Xn ) in some given family, P = (Pi )i2I . However, it is clear
that
V (P) = V (A),
where A is the ideal generated by P. Then, Hilberts basis theorem says that V (A) is actually
defined by a finite number of polynomials (any set of generators of A), even if P is infinite.
The property that every ideal in a ring is finitely generated is equivalent to other natural
properties, one of which is the so-called ascending chain condition, abbreviated a.c.c. Before
proving Hilberts basis theorem, we explore the equivalence of these conditions.
615
Definition 21.4. Let A be a commutative ring with unit 1. We say that A satisfies the
ascending chain condition, for short, the a.c.c, if for every ascending chain of ideals
A1 A2 Ai ,
there is some integer n
1 so that
Ai = An
for all i
n + 1.
We say that A satisfies the maximum condition if every nonempty collection C of ideals in
A has a maximal element, i.e., there is some ideal A 2 C which is not contained in any other
ideal in C.
Proposition 21.17. A ring A satisfies the a.c.c if and only if it satisfies the maximum
condition.
Proof. Suppose that A does not satisfy the a.c.c. Then, there is an infinite strictly ascending
sequence of ideals
A1 A2 Ai ,
and the collection C = {Ai } has no maximal element.
Conversely, assume that A satisfies the a.c.c. Let C be a nonempty collection of ideals
Since C is nonempty, we may pick some ideal A1 in C. If A1 is not maximal, then there is
some ideal A2 in C so that
A1 A2 .
Using this process, if C has no maximal element, we can define by induction an infinite
strictly increasing sequence
A1 A2 Ai .
However, the a.c.c. implies that such a sequence cannot exist. Therefore, C has a maximal
element.
Having shown that the a.c.c. condition is equivalent to the maximal condition, we now
prove that the a.c.c. condition is equivalent to the fact that every ideal is finitely generated.
Proposition 21.18. A ring A satisfies the a.c.c if and only if every ideal is finitely generated.
Proof. Assume that every ideal is finitely generated. Consider an ascending sequence of
ideals
A1 A2 Ai .
S
Observe that A = i Ai is also an ideal. By hypothesis, A has a finite generating set
{a1 , . . . , an }. By definition of A, each ai belongs to some Aji , and since the Ai form an
ascending chain, there is some m so that ai 2 Am for i = 1, . . . , n. But then,
Ai = Am
616
for all i
Conversely, assume that the a.c.c. holds. Let A be any ideal in A and consider the family
C of subideals of A that are finitely generated. The family C is nonempty, since (0) is a
subideal of A. By Proposition 21.17, the family C has some maximal element, say B. For
any a 2 A, the ideal B + (a) (where B + (a) = {b + a | b 2 B, 2 A}) is also finitely
generated (since B is finitely generated), and by maximality, we have
B = B + (a).
So, we get a 2 B for all a 2 A, and thus, A = B, and A is finitely generated.
Definition 21.5. A commutative ring A (with unit 1) is called noetherian if it satisfies the
a.c.c. condition. A noetherian domain is a noetherian ring that is also a domain.
By Proposition 21.17 and Proposition 21.18, a noetherian ring can also be defined as a
ring that either satisfies the maximal property or such that every ideal is finitely generated.
The proof of Hilberts basis theorem will make use the following lemma:
Lemma 21.19. Let A be a (commutative) ring. For every ideal A in A[X], for every i 0,
let Li (A) denote the set of elements of A consisting of 0 and of the coefficients of X i in all
the polynomials f (X) 2 A which are of degree i. Then, the Li (A)s form an ascending chain
of ideals in A. Furthermore, if B is any ideal of A[X] so that A B and if Li (A) = Li (B)
for all i 0, then A = B.
Proof. That Li (A) is an ideal and that Li (A) Li+1 (A) follows from the fact that if f (X) 2
A and g(X) 2 A, then f (X) + g(X), f (X), and Xf (X) all belong to A. Now, let g(X) be
any polynomial in B, and assume that g(X) has degree n. Since Ln (A) = Ln (B), there is
some polynomial fn (X) in A, of degree n, so that g(X) fn (X) is of degree at most n 1.
Now, since A B, the polynomial g(X) fn (X) belongs to B. Using this process, we can
define by induction a sequence of polynomials fn+i (X) 2 A, so that each fn+i (X) is either
zero or has degree n i, and
g(X)
is of degree at most n
thus, g(X) 2 A.
We now prove Hilberts basis theorem. The proof is substantially Hilberts original proof.
A slightly shorter proof can be given but it is not as transparent as Hilberts proof (see the
remark just after the proof of Theorem 21.20, and Zariski and Samuel [117], Chapter IV,
Section 1, Theorem 1).
Theorem 21.20. (Hilberts basis theorem) If A is a noetherian ring, then A[X] is also a
noetherian ring.
617
Proof. Let A be any ideal in A[X], and denote by L the set of elements of A consisting of 0
and of all the coefficients of the highest degree terms of all the polynomials in A. Observe
that
[
L=
Li (A).
i
n
X
i fi (X)X
d di
i=1
where di is the degree of fi (X). Now, g(X) g1 (X) is a polynomial in A of degree at most
d 1. By repeating this procedure, we get a sequence of polynomials gi (X) in B, having
strictly decreasing degrees, and such that the polynomial
g(X)
1 ad1
and we define
g1 (X) =
+ +
nd
X
nd adnd ,
i fdi (X)X
d di
i=1
where di is the degree of fdi (X). Then, g(X) g1 (X) is a polynomial in A of degree at most
d 1, and by repeating this procedure at most q times, we get an element of A of degree 0,
and the latter is a linear combination of the f0i s. This proves that every polynomial in A
of degree at most q 1 is a combination of the polynomials fij (X), for 0 i q 1 and
1 j ni . Therefore, A is generated by the fk (X)s and the fij (X)s, a finite number of
polynomials.
618
Remark: Only a small part of Lemma 21.19 was used in the above proof, namely, the fact
that Li (A) is an ideal. A shorter proof of Theorem 21.21 making full use of Lemma 21.19
can be given as follows:
Proof. (Second proof) Let (Ai )i 1 be an ascending sequence of ideals in A[X]. Consider
the doubly indexed family (Li (Aj )) of ideals in A. Since A is noetherian, by the maximal
property, this family has a maximal element Lp (Aq ). Since the Li (Aj )s form an ascending
sequence when either i or j is fixed, we have Li (Aj ) = Lp (Aq ) for all i and j with i p and
j q, and thus, Li (Aq ) = Li (Aj ) for all i and j with i p and j q. On the other hand,
for any fixed i, the a.c.c. shows that there exists some integer n(i) so that Li (Aj ) = Li (An(i) )
for all j
n(i). Since Li (Aq ) = Li (Aj ) when i
p and j
q, we may take n(i) = q if
i p. This shows that there is some n0 so that n(i) n0 for all i 0, and thus, we have
Li (Aj ) = Li (An(0) ) for every i and for every j n(0). By Lemma 21.19, we get Aj = An(0)
for every j n(0), establishing the fact that A[X] satisfies the a.c.c.
Using induction, we immediately obtain the following important result.
Corollary 21.21. If A is a noetherian ring, then A[X1 , . . . , Xn ] is also a noetherian ring.
Since a field K is obviously noetherian (since it has only two ideals, (0) and K), we also
have:
Corollary 21.22. If K is a field, then K[X1 , . . . , Xn ] is a noetherian ring.
21.4
Futher Readings
The material of this Chapter is thoroughly covered in Lang [67], Artin [4], Mac Lane and
Birkho [73], Bourbaki [14, 15], Malliavin [75], Zariski and Samuel [117], and Van Der
Waerden [112].
Chapter 22
Annihilating Polynomials and the
Primary Decomposition
22.1
where f k = f
+ + ad id,
+ + ad 1 A + ad I = 0.
It is clear that Ann(A) is a nonzero ideal and its unique monic generator is called the minimal
polynomial of A. We check immediately that if Q is an invertible matrix, then A and Q 1 AQ
have the same minimal polynomial. Also, if A is the matrix of f with respect to some basis,
then f and A have the same minimal polynomial.
The zeros (in K) of the minimal polynomial of f and the eigenvalues of f (in K) are
intimately related.
Proposition 22.1. Let f : E ! E be a linear map on some finite-dimensional vector space
E. Then, 2 K is a zero of the minimal polynomial mf (X) of f i is an eigenvalue of f
i is a zero of f (X). Therefore, the minimal and the characteristic polynomials have the
same zeros (in K), except for multiplicities.
Proof. First, assume that m( ) = 0 (with 2 K, and writing m instead of mf ). If so, using
polynomial division, m can be factored as
m = (X
)q,
with deg(q) < deg(m). Since m is the minimal polynomial, q(f ) 6= 0, so there is some
nonzero vector v 2 E such that u = q(f )(v) 6= 0. But then, because m is the minimal
polynomial,
0 = m(f )(v)
= (f
id)(q(f )(v))
= (f
id)(u),
which shows that
is an eigenvalue of f .
621
m = (X
k ).
For this, we just have to show that m annihilates f . However, for any eigenvector u of f ,
one of the linear maps f
i id sends u to 0, so
m(f )(u) = (f
1 id)
(f
k id)(u)
= 0.
22.2
In this section, we prove that if the minimal polynomial mf of a linear map f is of the form
mf = (X
1 ) (X
k)
B C
A=
,
0 D
where B is a k k matrix, D is a (n k) (n
Then
det(XI A) = det(XI
k) matrix, and C is a k (n
B) det(XI
D),
k) matrix.
B Ci
i
A =
,
0 Di
for some k (n k) matrix Ci . It follows that any polynomial which annihilates A also
annihilates B and D. So, the minimal polynomial of B divides the minimal polynomial of
A.
For the next step, there are at least two ways to proceed. We can use an old-fashion argument using Lagrange interpolants, or use a slight generalization of the notion of annihilator.
We pick the second method because it illustrates nicely the power of principal ideals.
What we need is the notion of conductor (also called transporter).
Definition 22.2. Let f : E ! E be a linear map on a finite-dimensional vector space E, let
W be an invariant subspace of f , and let u be any vector in E. The set Sf (u, W ) consisting
of all polynomials q 2 K[X] such that q(f )(u) 2 W is called the f -conductor of u into W .
Observe that the minimal polynomial mf of f always belongs to Sf (u, W ), so this is a
nontrivial set. Also, if W = (0), then Sf (u, (0)) is just the annihilator of f . The crucial
property of Sf (u, W ) is that it is an ideal.
Proposition 22.3. If W is an invariant subspace for f , then for each u 2 E, the f -conductor
Sf (u, W ) is an ideal in K[X].
We leave the proof as a simple exercise, using the fact that if W invariant under f , then
W is invariant under every polynomial q(f ) in f .
Since Sf (u, W ) is an ideal, it is generated by a unique monic polynomial q of smallest
degree, and because the minimal polynomial mf of f is in Sf (u, W ), the polynomial q divides
m.
Proposition 22.4. Let f : E ! E be a linear map on a finite-dimensional space E, and
assume that the minimal polynomial m of f is of the form
m = (X
1)
r1
(X
k)
rk
of f .
623
Proof. Observe that (a) and (b) together assert that the f -conductor of u into W is a
polynomial of the form X
i . Pick any vector v 2 E not in W , and let g be the conductor
of v into W . Since g divides m and v 2
/ W , the polynomial g is not a constant, and thus it
is of the form
s1
sk
g = (X
1 ) (X
k) ,
with at least some si > 0. Choose some index j such that sj > 0. Then X
of g, so we can write
g = (X
j )q.
is a factor
By definition of g, the vector u = q(f )(v) cannot be in W , since otherwise g would not be
of minimal degree. However,
(f
j id)(u)
= (f
j id)(q(f )(v))
= g(f )(v)
1, . . . ,
1 ) (X
k ),
Proof. We already showed in Section 22.2 that if f is diagonalizable, then its minimal polynomial is of the above form (where 1 , . . . , k are the distinct eigenvalues of f ).
For the converse, let W be the subspace spanned by all the eigenvectors of f . If W 6= E,
since W is invariant under f , by Proposition 22.4, there is some vector u 2
/ W such that for
some j , we have
(f
j id)(u) 2 W.
Let v = (f
j id)(u)
where f (wi ) =
h, we have
i wi
i ),
k )wk ,
which shows that h(f )(v) 2 W for every polynomial h. We can write
m = (X
j )q
q( j ) = p(X
j)
for some polynomial p. We know that p(f )(v) 2 W , and since m is the minimal polynomial
of f , we have
0 = m(f )(u) = (f
j id)(q(f )(u)),
which implies that q(f )(u) 2 W (either q(f )(u) = 0, or it is an eigenvector associated with
j ). However,
q(f )(u) q( j )u = p(f )((f
j id)(u)) = p(f )(v),
and since p(f )(v) 2 W and q(f )(u) 2 W , we conclude that q( j )u 2 W . But, u 2
/ W , which
implies that q( j ) = 0, so j is a double root of m, a contradiction. Therefore, we must have
W = E.
Remark: Proposition 22.4 can be used to give a quick proof of Theorem 8.4.
Using Theorem 22.5, we can give a short proof about commuting diagonalizable linear
maps. If F is a family of linear maps on a vector space E, we say that F is a commuting
family i f g = g f for all f, g 2 F.
Proposition 22.6. Let F be a finite commuting family of diagonalizable linear maps on a
vector space E. There exists a basis of E such that every linear map in F is represented in
that basis by a diagonal matrix.
Proof. We proceed by induction on n = dim(E). If n = 1, there is nothing to prove. If
n > 1, there are two cases. If all linear maps in F are of the form id for some
2
K, then the proposition holds trivially. In the second case, let f 2 F be some linear
map in F which is not a scalar multiple of the identity. In this case, f has at least two
distinct eigenvalues 1 , . . . , k , and because f is diagonalizable, E is the direct sum of the
corresponding eigenspaces E 1 , . . . , E k . For every index i, the eigenspace E i is invariant
under f and under every other linear map g in F, since for any g 2 F and any u 2 E i ,
because f and g commute, we have
f (g(u)) = g(f (u)) = g( i u) =
i g(u)
625
Remark: Proposition 22.6 also holds for infinite commuting familes F of diagonalizable
linear maps, because E being finite dimensional, there is a finite subfamily of linearly independent linear maps in F spanning F.
There is also an analogous result for commuting families of linear maps represented by
upper triangular matrices. To prove this, we need the following proposition.
Proposition 22.7. Let F be a nonempty finite commuting family of triangulable linear maps
on a finite-dimensional vector space E. Let W be a proper subspace of E which is invariant
under F. Then there exists a vector u 2 E such that:
1. u 2
/ W.
2. For every f 2 F, the vector f (u) belongs to the subspace W
u.
Ku spanned by W and
2. (fi
Consider the base case r = 1. Since f1 is triangulable, its eigenvalues all belong to K
since they are the diagonal entries of the triangular matrix associated with f1 (this is the
easy direction of Theorem 8.4), so the minimal polynomial of f1 is of the form
m = (X
where the eigenvalues
22.4.
1, . . . ,
1)
r1
(X
k)
rk
Next, assume that r 2 and that the induction hypothesis holds for f1 , . . . , fr 1 . Thus,
there is a vector ur 1 2 E such that
1. ur
2. (fi
2
/ W.
i id)(ur 1 ) 2 W for i = 1, . . . , r
Let
Vr
= {w 2 E | (fi
i id)(w) 2 W, i = 1, . . . , r
1}.
i id)(v)),
1ir
1.
i id)(v)) 2
1)
r1
(X
k)
rk
We begin by applying Proposition 22.7 to the subspace W0 = (0) to get u1 so that for all
f 2 F,
f (u1 ) = 1f u1 .
For the induction step, since Wi invariant under F, we apply Proposition 22.7 to the subspace
Wi , to get ui+1 2 E such that
1. ui+1 2
/ Wi .
2. For every f 2 F, the vector f (ui+1 ) belong to the subspace spanned by Wi and ui+1 .
Condition (1) implies that (u1 , . . . , ui , ui+1 ) is linearly independent, and condition (2) means
that for every f 2 F,
f (ui+1 ) = af1i+1 u1 + + afi+1i+1 ui+1 ,
for some afi+1j 2 K, establishing the induction step. After n steps, each f 2 F is represented
by an upper triangular matrix.
627
Observe that if F consists of a single linear map f and if the minimal polynomial of f is
of the form
r1
rk
m = (X
1 ) (X
k) ,
with all i 2 K, using Proposition 22.4 instead of Proposition 22.7, the proof of Proposition
22.8 yields another proof of Theorem 8.4.
22.3
E k,
but in general there are not enough eigenvectors to span E. What if we generalize the notion
of eigenvector and look for (nonzero) vectors u such that
( id
f )r (u) = 0,
for some r
1?
i;
1)
r1
(X
k)
rk
f ) ri ,
then
E = W1
Wk .
This result is very nice but seems to require that the eigenvalues of f all belong to K.
Actually, it is a special case of a more general result involving the factorization of the
minimal polynomial m into its irreducible monic factors (See Theorem 20.16),
m = pr11 prkk ,
where the pi are distinct irreducible monic polynomials over K.
Theorem 22.9. (Primary Decomposition Theorem) Let f : E ! E be a linear map on the
finite-dimensional vector space E over the field K. Write the minimal polynomial m of f as
m = pr11 prkk ,
where the pi are distinct irreducible monic polynomials over K, and the ri are positive integers. Let
Wi = Ker (pri i (f )), i = 1, . . . , k.
Then
Wk .
Proof. The trick is to construct projections i using the polynomials pj j so that the range
of i is equal to Wi . Let
Y r
gi = m/pri i =
pj j .
j6=i
Note that
pri i gi = m.
Since p1 , . . . , pk are irreducible and distinct, they are relatively prime. Then, using Proposition 20.13, it is easy to show that g1 , . . . , gk are relatively prime. Otherwise, some irreducible
polynomial p would divide all of g1 , . . . , gk , so by Proposition 20.13 it would be equal to one
of the irreducible factors pi . But, that pi is missing from gi , a contradiction. Therefore, by
Proposition 20.14, there exist some polynomials h1 , . . . , hk such that
g1 h1 + + gk hk = 1.
Let qi = gi hi and let i = qi (f ) = gi (f )hi (f ). We have
q1 + + qk = 1,
and since m divides qi qj for i 6= j, we get
1 + + k = id
i j = 0,
i 6= j.
(We implicitly used the fact that if p, q are two polynomials, the linear maps p(f ) q(f )
and q(f ) p(f ) are the same since p(f ) and q(f ) are polynomials in the powers of f , which
commute.) Composing the first equation with i and using the second equation, we get
i2 = i .
Therefore, the i are projections, and E is the direct sum of the images of the i . Indeed,
every u 2 E can be expressed as
u = 1 (u) + + k (u).
Also, if
then by applying i we get
1 (u) + + k (u) = 0,
0 = i2 (u) = i (u),
i = 1, . . . k.
629
j 6= i.
u 2 E,
But then, qgi is divisible by the minimal polynomial m = pri i gi of f , and since pri i and gi are
relatively prime, by Euclids Proposition, pri i must divide q. This finishes the proof that the
minimal polynomial of fi is pri i , which is (c).
If all the eigenvalues of f belong to the field K, we obtain the following result.
Theorem 22.10. (Primary Decomposition Theorem, Version 2) Let f : E ! E be a linear map on the finite-dimensional vector space E over the field K. If all the eigenvalues
1 , . . . , k of f belong to K, write
m = (X
r1
(X
k)
n1
(X
k)
1)
rk
= (X
1)
nk
f ) ri ,
i = 1, . . . , k.
Wk .
i)
ri
Proof. Parts (a), (b) and (d) have already been proved in Theorem 22.10, so it remains to
prove (c). Since Wi is invariant under f , let fi be the restriction of f to Wi . The characteristic
polynomial fi of fi divides (f ), and since (f ) has all its roots in K, so does i (f ). By
Theorem 8.4, there is a basis of Wi in which fi is represented by an upper triangular matrix,
and since ( i id f )ri = 0, the diagonal entries of this matrix are equal to i . Consequently,
fi
and since
fi
= (X
i)
dim(Wi )
i = 1, . . . , k.
Because E is the direct sum of the Wi , we have dim(W1 ) + + dim(Wk ) = n, and since
n1 + + nk = n, we must have
dim(Wi ) = ni ,
i = 1, . . . , k,
proving (c).
Definition 22.3. If 2 K is an eigenvalue of f , we define a generalized eigenvector of f as
a nonzero vector u 2 E such that
( id
The index of
f )r (u) = 0,
for some r
1.
1 such that
f )r = Ker ( id
f )r+1 .
1. By Theorem 22.10(d), if
Another important consequence of Theorem 22.10 is that f can be written as the sum of
a diagonalizable and a nilpotent linear map (which commute). If we write
D=
1 1
+ +
k k ,
where i is the projection from E onto the subspace Wi defined in the proof of Theorem
22.9, since
1 + + k = id,
631
we have
f = f 1 + + f k ,
and so we get
f
D = (f
1 id)1
+ + (f
k id)k .
1 id)
1 + + (f
k id)
k id)
D, using
k .
N r = 0.
A linear map g : E ! E is said to be nilpotent if there is some positive integer r such
that g r = 0.
Since N is a polynomial in f , it commutes with f , and thus with D. From
D=
1 1
+ +
k k ,
and
1 + + k = id,
we see that
D
i id
= 1 1 + + k k
i (1 + + k )
=( 1
i )1 + + ( i 1
i )i 1 + (
i+1
i )i+1
+ + (
i )k .
Since the projections j with j 6= i vanish on Wi , the above equation implies that D
i id
vanishes on Wi and that (D
id)(W
)
W
,
and
thus
that
the
minimal
polynomial
of
D
j
i
i
is
(X
1 ) (X
k ).
Since the i are distinct, by Theorem 22.5, the linear map D is diagonalizable, so we have
shown that when all the eigenvalues of f belong to K, there exist a diagonalizable linear
map D and a nilpotent linear map N , such that
f =D+N
DN = N D,
and N and D are polynomials in f .
A decomposition of f as above is called a Jordan decomposition. In fact, we can prove
more: The maps D and N are uniquely determined by f .
D0 = N 0
N,
and D, D0 , N, N 0 commute with one another. Since D and D0 are both diagonalizable and
commute, by Proposition 22.6, they are simultaneousy diagonalizable, so D D0 is diagonalizable. Since N and N 0 commute, by the binomial formula, for any r 1,
(N
r
N) =
( 1)
(N 0 )r j N j .
j
j=0
r
r
X
Since both N and N 0 are nilpotent, we have N r1 = 0 and (N 0 )r2 = 0, for some r1 , r2 > 0, so
for r r1 + r2 , the right-hand side of the above expression is zero, which shows that N 0 N
is nilpotent. (In fact, it is easy that r1 = r2 = n works). It follows that D D0 = N 0 N
is both diagonalizable and nilpotent. Clearly, the minimal polynomial of a nilpotent linear
map is of the form X r for some r > 0 (and r dim(E)). But D D0 is diagonalizable, so
its minimal polynomial has simple roots, which means that r = 1. Therefore, the minimal
polynomial of D D0 is X, which says that D D0 = 0, and then N = N 0 .
If K is an algebraically closed field, then Theorem 22.11 holds. This is the case when
K = C. This theorem reduces the study of linear maps (from E to itself) to the study of
nilpotent operators. There is a special normal form for such operators which is discussed in
the next section.
22.4
633
This section is devoted to a normal form for nilpotent maps. We follow Godements exposition [47]. Let f : E ! E be a nilpotent linear map on a finite-dimensional vector space over
a field K, and assume that f is not the zero map. Then, there is a smallest positive integer
r 1 such f r 6= 0 and f r+1 = 0. Clearly, the polynomial X r+1 annihilates f , and it is the
minimal polynomial of f since f r 6= 0. It follows that r + 1 n = dim(E). Let us define
the subspaces Ni by
Ni = Ker (f i ), i 0.
Note that N0 = (0), N1 = Ker (f ), and Nr+1 = E. Also, it is obvious that
Ni Ni+1 ,
0.
Proposition 22.12. Given a nilpotent linear map f with f r 6= 0 and f r+1 = 0 as above, the
inclusions in the following sequence are strict:
(0) = N0 N1 Nr Nr+1 = E.
Proof. We proceed by contradiction. Assume that Ni = Ni+1 for some i with 0 i r.
Since f r+1 = 0, for every u 2 E, we have
0 = f r+1 (u) = f i+1 (f r i (u)),
which shows that f r i (u) 2 Ni+1 . Since Ni = Ni+1 , we get f r i (u) 2 Ni , and thus f r (u) = 0.
Since this holds for all u 2 E, we see that f r = 0, a contradiction.
Proposition 22.13. Given a nilpotent linear map f with f r 6= 0 and f r+1 = 0, for any
integer i with 1 i r, for any subspace U of E, if U \ Ni = (0), then f (U ) \ Ni 1 = (0),
and the restriction of f to U is an isomorphism onto f (U ).
Proof. Pick v 2 f (U ) \ Ni 1 . We have v = f (u) for some u 2 U and f i 1 (v) = 0, which
means that f i (u) = 0. Then, u 2 U \ Ni , so u = 0 since U \ Ni = (0), and v = f (u) = 0.
Therefore, f (U ) \ Ni 1 = (0). The restriction of f to U is obviously surjective on f (U ).
Suppose that f (u) = 0 for some u 2 U . Then u 2 U \ N1 U \ Ni = (0) (since i 1), so
u = 0, which proves that f is also injective on U .
Proposition 22.14. Given a nilpotent linear map f with f r 6= 0 and f r+1 = 0, there exists
a sequence of subspace U1 , . . . , Ur+1 of E with the following properties:
(1) Ni = Ni
Ui , for i = 1, . . . , r + 1.
Ur+1 .
and f (Ur+1 ) Ur .
Ur
Ui ,
= Ni
Ui
and f (Ui ) Ui 1 .
The fact that f is an injection from Ui into Ui 1 follows from Proposition 22.13. Therefore,
the induction step is proved. The construction stops when i = 1.
Because N0 = (0) and Nr+1 = E, we see that E is the direct sum of the Ui :
E = U1
Ur+1 ,
635
Lr+1
i=1
Ui . Then, we define
r+1
er+1
1 , . . . , enr+1
with
eij
= f (eij ),
j = 1 . . . , ni .
e
B 1
nr+1
B .
..
B ..
.
B
B .
..
@ ..
.
1
1
e1 enr+1
j = 1, . . . , n1 .
ernr+1 +1
..
.
1
ernr+1
+1
..
.
..
.
1
enr+1 +1
ernr
..
.
1
ernr 1 ernr +1
..
..
.
.
..
..
.
.
1
1
enr enr +1
ernr 11
..
.
..
.
1
e nr 1
e1n1
C
C
C
C
C
C
C
C
C
C
C
C
C
A
Finally, we define the basis (e1 , . . . , en ) by listing each column of the above matrix from
the bottom-up, starting with column one, then column two, etc. This means that we list
the vectors eij in the following order:
For j = 1, . . . , nr+1 , list e1j , . . . , er+1
;
j
In general, for i = r, . . . , 1,
for j = ni+1 + 1, . . . , ni , list e1j , . . . , eij .
Then, because f (e1j ) = 0 and eij
= f (eij ) for i
2, either
f (ei ) = 0 or f (ei ) = ei 1 ,
which proves the theorem.
k)
0
C
..
..
A,
.
.
Jr m (
m)
0
0
0
0
0
0
0
0
0
0
0
0
0 0
1
0
0C
C
0C
C
0C
C.
0C
C
0C
C
1A
Theorem 22.16. (Jordan form) Let E be a vector space of dimension n over a field K and
let f : E ! E be a linear map. The following properties are equivalent:
(1) The eigenvalues of f all belong to K (i.e. the roots of the characteristic polynomial
all belong to K).
(2) There is a basis of E in which the matrix of f is a Jordan matrix.
L
Proof. Assume (1). First we apply Theorem 22.10, and we get a direct sum E = kj=1 Wk ,
such that the restriction of gi = f
j id to Wi is nilpotent. By Theorem 22.15, there is a
basis of Wi such that the matrix of the restriction of gi is of the form
0
1
0 1 0 0 0
B0 0 2 0 0 C
B
C
C
B .. .. ..
.
.
.
.
.
.
Gi = B . . .
C,
.
.
.
B
C
@0 0 0 0 n i A
0 0 0 0 0
637
Chapter 23
Tensor Algebras, Symmetric Algebras
and Exterior Algebras
23.1
Tensors Products
We begin by defining tensor products of vector spaces over a field and then we investigate
some basic properties of these tensors, in particular the existence of bases and duality. After
this, we investigate special kinds of tensors, namely symmetric tensors and skew-symmetric
tensors. Tensor products of modules over a commutative ring with identity will be discussed
very briefly. They show up naturally when we consider the space of sections of a tensor
product of vector bundles.
Given a linear map f : E ! F , we know that if we have a basis (ui )i2I for E, then f
is completely determined by its values f (ui ) on the basis vectors. For a multilinear map
f : E n ! F , we dont know if there is such a nice property but it would certainly be very
useful.
In many respects, tensor products allow us to define multilinear maps in terms of their
action on a suitable basis. The crucial idea is to linearize, that is, to create a new vector space
E n such that the multilinear map f : E n ! F is turned into a linear map f : E n ! F
which is equivalent to f in a strong sense. If in addition, f is symmetric, then we can define
a symmetric tensor power Symn (E), and every symmetric multilinear map f : E n ! F is
turned into a linear map f : Symn (E) ! F which is equivalent to f in a strong
Vn sense.
Similarly, if f is alternating, then we can define a skew-symmetric tensor
(E), and
V power
every alternating multilinear map is turned into a linear map f^ : n (E) ! F which is
equivalent to f in a strong sense.
Tensor products can be defined in various ways, some more abstract than others. We
tried to stay down to earth, without excess!
Let K be a given field, and let E1 , . . . , En be n 2 given vector spaces. For any vector
space F , recall that a map f : E1 En ! F is multilinear i it is linear in each of its
639
640
2 K, for i = 1 . . . , n.
The set of multilinear maps as above forms a vector space denoted L(E1 , . . . , En ; F ) or
Hom(E1 , . . . , En ; F ). When n = 1, we have the vector space of linear maps L(E, F ) (also
denoted Hom(E, F )). (To be very precise, we write HomK (E1 , . . . , En ; F ) and HomK (E, F ).)
As usual, the dual space E of E is defined by E = Hom(E, K).
Before proceeding any further, we recall a basic fact about pairings. We will use this fact
to deal with dual spaces of tensors.
Definition 23.1. Given two vector spaces E and F , a map h , i : E F ! K is a
nondegenerate pairing i it is bilinear and i hu, vi = 0 for all v 2 F implies u = 0, and
hu, vi = 0 for all u 2 E implies v = 0. A nondegenerate pairing induces two linear maps
' : E ! F and : F ! E defined by
'(u)(y) = hu, yi
(v)(x) = hx, vi,
for all u, x 2 E and all v, y 2 F .
Proposition 23.1. For every nondegenerate pairing h , i : E F ! K, the induced maps
' : E ! F and : F ! E are linear and injective. Furthermore, if E and F are finite
dimensional, then ' : E ! F and : F ! E are bijective.
Proof. The maps ' : E ! F and : F ! E are linear because u, v 7! hu, vi is bilinear.
Assume that '(u) = 0. This means that '(u)(y) = hu, yi = 0 for all y 2 F , and as our
pairing is nondegenerate, we must have u = 0. Similarly, is injective. If E and F are finite
dimensional, then dim(E) = dim(E ) and dim(F ) = dim(F ). However, the injectivity of '
and implies that that dim(E) dim(F ) and dim(F ) dim(E ). Consequently dim(E)
dim(F ) and dim(F ) dim(E), so dim(E) = dim(F ). Therefore, dim(E) = dim(F ) and '
is bijective (and similarly dim(F ) = dim(E ) and is bijective).
Proposition 23.1 shows that when E and F are finite dimensional, a nondegenerate pairing
induces canonical isomorphims ' : E ! F and : F ! E ; that is, isomorphisms that do
not depend on the choice of bases. An important special case is the case where E = F and
we have an inner product (a symmetric, positive definite bilinear form) on E.
Remark: When we use the term canonical isomorphism, we mean that such an isomorphism is defined independently of any choice of bases. For example, if E is a finite dimensional vector space and (e1 , . . . , en ) is any basis of E, we have the dual basis (e1 , . . . , en ) of
641
u (ei ) = hu, ei i =
so we get
[
u =
X
n
n
X
uj e j , e i
j=1
n
X
j=1
!i ei ,
uj hej , ei i =
with !i =
i=1
n
X
n
X
gij uj ,
j=1
gij uj .
j=1
If we P
use the convention that coordinates of vectors are written using superscripts
n
i
(u = P
i=1 u ei ) and coordinates of one-forms (covectors) are written using subscripts
n
(! =
indices. The
i=1 !i ei ), then the map [ has the eect of lowering (flattening!)
Pn
inverse
of
[
is
denoted
]
:
E
!
E.
If
we
write
!
2
E
as
!
=
!
e
and
! ] 2 E as
i=1 i i
P
! ] = nj=1 (! ] )j ej , since
!i = !(ei ) = h! ] , ei i =
n
X
(! ] )j gij ,
j=1
we get
] i
(! ) =
n
X
1 i n,
g ij !j ,
j=1
ij
for all u, v 2 E.
n
X
k=1
g ik ek ,
642
u, v 2 E,
is clearly bilinear. It is also clear that the above defines a linear map from Hom(E, E) to
Hom(E, E; K). This map is injective, because if f (u, v) = 0 for all u, v 2 E, as h , i is
an inner product, we get g(u) = 0 for all u 2 E. Furthermore, both spaces Hom(E, E) and
Hom(E, E; K) have the same dimension, so our linear map is an isomorphism.
If (e1 , . . . , en ) is an orthonormal basis of E, then we check immediately that the trace of
a linear map g (which is independent of the choice of a basis) is given by
tr(g) =
n
X
i=1
hg(ei ), ei i,
n
X
f (ei , ei ),
i=1
for any orthonormal basis (e1 , . . . , en ) of E. We can also check directly that the above
expression is independent of the choice of an orthonormal basis.
We will also need the following Proposition to show that various families are linearly
independent.
643
Proposition 23.3. Let E and F be two nontrivial vector spaces and let (ui )i2I be any family
of vectors ui 2 E. The family (ui )i2I is linearly independent i for every family (vi )i2I of
vectors vi 2 F , there is some linear map f : E ! F so that f (ui ) = vi for all i 2 I.
Proof. Left as an exercise.
First, we define tensor products, and then we prove their existence and uniqueness up to
isomorphism.
Definition 23.2. A tensor product of n
2 vector spaces E1 , . . . , En is a vector space T
together with a multilinear map ' : E1 En ! T , such that for every vector space F
and for every multilinear map f : E1 En ! F , there is a unique linear map f : T ! F
with
f (u1 , . . . , un ) = f ('(u1 , . . . , un )),
for all u1 2 E1 , . . . , un 2 En , or for short
f = f '.
Equivalently, there is a unique linear map f such that the following diagram commutes:
E1 N En
'
/ T
NNN
NNN
f
NNN
f
N&
First, we show that any two tensor products (T1 , '1 ) and (T2 , '2 ) for E1 , . . . , En , are
isomorphic.
Proposition 23.4. Given any two tensor products (T1 , '1 ) and (T2 , '2 ) for E1 , . . . , En , there
is an isomorphism h : T1 ! T2 such that
'2 = h '1 .
Proof. Focusing on (T1 , '1 ), we have a multilinear map '2 : E1 En ! T2 , and thus
there is a unique linear map ('2 ) : T1 ! T2 with
'2 = ('2 ) '1 .
Similarly, focusing now on on (T2 , '2 ), we have a multilinear map '1 : E1 En ! T1 ,
and thus there is a unique linear map ('1 ) : T2 ! T1 with
'1 = ('1 ) '2 .
644
645
i 2 I. Furthermore, it is easy to show that for any vector space F , and for any function
f : I ! F , there is a unique linear map f : K (I) ! F such that f = f , as in the following
diagram:
I CC / K (I)
CC
CC
f
C
f CC
!
(I)
This shows that K is the free vector space generated by I. Now, apply this construction
to the cartesian product I = E1 En , obtaining the free vector space M = K (I) on
I = E1 En . Since every ei is uniquely associated with some n-tuple i = (u1 , . . . , un ) 2
E1 En , we denote ei by (u1 , . . . , un ).
Next, let N be the subspace of M generated by the vectors of the following type:
(u1 , . . . , ui + vi , . . . , un ) (u1 , . . . , ui , . . . , un )
(u1 , . . . , ui , . . . , un )
(u1 , . . . , ui , . . . , un ).
(u1 , . . . , vi , . . . , un ),
E1 REn
Because f is multilinear, note that we must have f (w) = 0 for every w 2 N . But then,
f : M ! F induces a linear map h : M/N ! F such that
f = h ,
646
by defining h([z]) = f (z) for every z 2 M , where [z] denotes the equivalence class in M/N
of z 2 M :
E1 SEn / K (E1 En ) /N
SSS
SSS
SSS
h
SSS
SSS
f
S)
Indeed, the fact that f vanishes on N insures that h is well defined on M/N , and it is clearly
linear by definition. However, we showed that such a linear map h is unique, and thus it
agrees with the linear map f defined by
f (u1 un ) = f (u1 , . . . , un )
on the generators of E1 En .
What is important about Theorem 23.5 is not so much the construction itself but the
fact that it produces a tensor product with the universal mapping property with respect to
multilinear maps. Indeed, Theorem 23.5 yields a canonical isomorphism
L(E1 En , F )
= L(E1 , . . . , En ; F )
between the vector space of linear maps L(E1 En , F ), and the vector space of multilinear maps L(E1 , . . . , En ; F ), via the linear map
' defined by
h 7! h ',
where h 2 L(E1 En , F ). Indeed, h ' is clearly multilinear, and since by Theorem
23.5, for every multilinear map f 2 L(E1 , . . . , En ; F ), there is a unique linear map f 2
L(E1 En , F ) such that f = f ', the map
' is bijective. As a matter of fact,
its inverse is the map
f 7! f .
Using the Hom notation, the above canonical isomorphism is written
Hom(E1 En , F )
= Hom(E1 , . . . , En ; F ).
Remarks:
(1) To be very precise, since the tensor product depends on the field K, we should subscript
the symbol with K and write
E1 K K En .
However, we often omit the subscript K unless confusion may arise.
647
(2) For F = K, the base field, we obtain a canonical isomorphism between the vector
space L(E1 En , K), and the vector space of multilinear forms L(E1 , . . . , En ; K).
However, L(E1 En , K) is the dual space (E1 En ) , and thus the vector space
of multilinear forms L(E1 , . . . , En ; K) is canonically isomorphic to (E1 En ) .
We write
L(E1 , . . . , En ; K)
= (E1 En ) .
The fact that the map ' : E1 En ! E1 En is multilinear, can also be
expressed as follows:
u1 (vi + wi ) un = (u1 vi un ) + (u1 wi un ),
u1 ( ui ) un = (u1 ui un ).
Of course, this is just what we wanted! Tensors in E1 En are also called n-tensors,
and tensors of the form u1 un , where ui 2 Ei are called simple (or indecomposable)
n-tensors. Those n-tensors that are not simple are often called compound n-tensors.
Not only do tensor products act on spaces, but they also act on linear maps (they are
functors). Given two linear maps f : E ! E 0 and g : F ! F 0 , we can define h : E F !
E 0 F 0 by
h(u, v) = f (u) g(v).
It is immediately verified that h is bilinear, and thus it induces a unique linear map
f g : E F ! E0 F 0
such that
(f g)(u v) = f (u) g(u).
If we also have linear maps f 0 : E 0 ! E 00 and g 0 : F 0 ! F 00 , we can easily verify that
the linear maps (f 0 f ) (g 0 g) and (f 0 g 0 ) (f g) agree on all vectors of the form
u v 2 E F . Since these vectors generate E F , we conclude that
(f 0 f ) (g 0 g) = (f 0 g 0 ) (f g).
The generalization to the tensor product f1 fn of n
is immediate, and left to the reader.
23.2
3 linear maps fi : Ei ! Fi
648
for some family of scalars (vjk )j2Ik . Let F be any nontrivial vector space. We show that for
every family
(wi1 ,...,in )(i1 ,...,in )2I1 ...In ,
of vectors in F , there is some linear map h : E1 En ! F such that
h(u1i1 unin ) = wi1 ,...,in .
Then, by Proposition 23.3, it follows that
(u1i1 unin )(i1 ,...,in )2I1 ...In
is linearly independent. However, since (uki )i2Ik is a basis for Ek , the u1i1 unin also
generate E1 En , and thus, they form a basis of E1 En .
We define the function f : E1 En ! F as follows:
X
X
X
f(
vj11 u1j1 , . . . ,
vjnn unjn ) =
vj11 vjnn wj1 ,...,jn .
j1 2I1
jn 2In
i1 ,...,in
23.3
649
Proposition 23.7. Given 3 vector spaces E, F, G, there exists unique canonical isomorphisms
(1) E F ' F E
(2) (E F ) G ' E (F G) ' E F G
(3) (E
F ) G ' (E G)
(F G)
(4) K E ' E
such that respectively
(a) u v 7! v u
(b) (u v) w 7! u (v w) 7! u v w
(c) (u, v) w 7! (u w, v w)
(d)
u 7! u.
Proof. These isomorphisms are proved using the universal mapping property of tensor products. We illustrate the proof method on (2). Fix some w 2 G. The map
(u, v) 7! u v w
from E F to E F G is bilinear, and thus there is a linear map fw : E F ! E F G
such that fw (u v) = u v w.
Next, consider the map
(z, w) 7! fw (z),
from (E F ) G into E F G. It is easily seen to be bilinear, and thus it induces a
linear map
f : (E F ) G ! E F G
such that f ((u v) w) = u v w.
Also consider the map
(u, v, w) 7! (u v) w
from E F G to (E F ) G. It is trilinear, and thus there is a linear map
g : E F G ! (E F ) G
such that g(u v w) = (u v) w. Clearly, f g and g f are identity maps, and thus
f and g are isomorphisms. The other cases are similar.
650
Indeed, any bilinear map f : E F ! G gives the linear map '(f ) 2 Hom(E, Hom(F, G)),
where '(f )(u) is the linear map in Hom(F, G) given by
'(f )(u)(v) = f (u, v).
Conversely, given a linear map g 2 Hom(E, Hom(F, G)), we get the bilinear map (g) given
by
(g)(u, v) = g(u)(v),
and it is clear that ' and
corollary:
Proposition 23.8. For any three vector spaces, E, F, G, we have the canonical isomorphism
Hom(E F, G)
= Hom(E, Hom(F, G)).
23.4
In this section, all vector spaces are assumed to have finite dimension. Let us now see how
tensor products behave under duality. For this, we define a pairing between E1 En and
E1 En as follows: For any fixed (v1 , . . . , vn ) 2 E1 En , we have the multilinear
map
lv1 ,...,vn : (u1 , . . . , un ) 7! v1 (u1 ) vn (un )
from E1 En to K. The map lv1 ,...,vn extends uniquely to a linear map
Lv1 ,...,vn : E1 En ! K. We also have the multilinear map
(v1 , . . . , vn ) 7! Lv1 ,...,vn
from E1 En to Hom(E1 En , K), which extends to a linear map L from
E1 En to Hom(E1 En , K). However, in view of the isomorphism
Hom(U V, W )
= Hom(U, Hom(V, W )),
we can view L as a linear map
L : (E1 En ) (E1 En ) ! K,
which corresponds to a bilinear map
(E1 En ) (E1 En ) ! K,
651
(u f )(x) = u (x)f.
Proposition 23.9. If E and F are vector spaces, then the following properties hold:
(1) The linear map : E F ! Hom(E, F ) is injective.
(2) If E is finite-dimensional, then : E F ! Hom(E, F ) is a canonical isomorphism.
(3) If F is finite-dimensional, then : E F ! Hom(E, F ) is a canonical isomorphism.
Proof. (1) Let (ei )i2I be a basis of E and let (fj )j2J be a basis of F . Then, we know that
(ei fj )i2I,j2J is a basis of E F . To prove that is injective, let us show that its kernel
is reduced to (0). For any vector
X
!=
ij ei fj
i2I 0 ,j2J 0
in E F , with I 0 and J 0 some finite sets, assume that (!) = 0. This means that for
every x 2 E, we have (!)(x) = 0; that is,
X
X X
( ij ei fj )(x) =
ij ei (x) fj = 0.
i2I 0 ,j2J 0
j2J 0
i2I
652
for all x 2 E.
ij ei (x) = 0,
i2I 0
But, then (ei )i2I 0 would be linearly dependent, contradicting the fact that (ei )i2I is a basis
of E , so we must have
ij
= 0,
f1 + +
en
fn ))(x) =
n
X
i=1
( (ei
fi ))(x) =
n
X
ei (x)fi .
i=1
23.5
653
Tensor Algebras
is also denoted as
m
O
or V m
and is called the m-th tensor power of V (with V 1 = V , and V 0 = K). We can pack all
the tensor powers of V into the big vector space
M
T (V ) =
V m ,
m 0
also denoted T (V ) to avoid confusion with the tangent bundle. This is an interesting object
because we can define a multiplication operation on it which makes it into an algebra called
the tensor algebra of V . When V is of finite dimension n, this space corresponds to the
algebra of polynomials with coefficients in K in n noncommuting variables.
Let us recall the definition of an algebra over a field. Let K denote any (commutative)
field, although for our purposes, we may assume that K = R (and occasionally, K = C).
Since we will only be dealing with associative algebras with a multiplicative unit, we only
define algebras of this kind.
Definition 23.3. Given a field K, a K-algebra is a K-vector space A together with a bilinear
operation : A A ! A, called multiplication, which makes A into a ring with unity 1 (or
1A , when we want to be very precise). This means that is associative and that there is
a multiplicative identity element 1 so that 1 a = a 1 = a, for all a 2 A. Given two
K-algebras A and B, a K-algebra homomorphism h : A ! B is a linear map that is also a
ring homomorphism, with h(1A ) = 1B .
For example, the ring Mn (K) of all n n matrices over a field K is a K-algebra.
654
where vi 2 V ni and the ni are natural numbers with ni 6= nj if i 6= j, to define multiplication in T (V ), using bilinearity, it is enough to define multiplication operations
: V m V n ! V (m+n) , which, using the isomorphisms V n
= n (V n ), yield multiplication operations : m (V m ) n (V n ) ! m+n (V (m+n) ). More precisely, we use the
canonical isomorphism
V m V n
= V (m+n)
which defines a bilinear operation
V m V n ! V (m+n) ,
V
V}
= V (m+n) ,
| {z
n
which can be shown using methods similar to those used to proved associativity. Of course,
the multiplication V m V n ! V (m+n) is defined so that
(v1 vm ) (w1 wn ) = v1 vm w1 wn .
(This has to be made rigorous by using isomorphisms involving the associativity of tensor
products, for details, see see Atiyah and Macdonald [5].)
Remark: It is important to note that multiplication in T (V ) is not commutative. Also, in
all rigor, the unit 1 of T (V ) is not equal to 1, the unit of the field K. However, in view
of the injection 0 : K ! T (V ), for the sake of notational simplicity, we will denote 1 by 1.
More generally, in view of the injections n : V n ! T (V ), we identify elements of V n with
their images in T (V ).
The algebra T (V ) satisfies a universal mapping property which shows that it is unique
up to isomorphism. For simplicity of notation, let i : V ! T (V ) be the natural injection of
V into T (V ).
Proposition 23.10. Given any K-algebra A, for any linear map f : V ! A, there is a
unique K-algebra homomorphism f : T (V ) ! A so that
f =f
i,
655
k 0
and the multiplication behaves well w.r.t. the grading, i.e., : V m V n ! V (m+n) .
Generally, a K-algebra E is said to be a graded algebra i there is a sequence of subspaces
E n E such that
M
E=
E n,
k 0
(with E 0 = K) and the multiplication respects the grading; that is, : E m E n ! E m+n .
Elements in E n are called homogeneous elements of rank (or degree) n.
In dierential geometry and in physics it is necessary to consider slightly more general
tensors.
Definition 23.4. Given a vector space V , for any pair of nonnegative integers (r, s), the
tensor space T r,s (V ) of type (r, s) is the tensor product
T r,s (V ) = V r (V )s = V
V} V
V } ,
| {z
| {z
r
656
(T r,s (V ))
= Hom(V r , (V )s ; K).
For finite dimensional vector spaces, the duality of Section 23.4 is also easily extended to the
tensor spaces T r,s (V ). We define the pairing
T r,s (V ) T r,s (V ) ! K
as follows: If
and
then
u = u1 ur vr+1
vr+s
2 T r,s (V ),
(v , u) = v1 (u1 ) vr+s
(ur+s ).
The tradition in classical tensor notation is to use lower indices on vectors and upper
indices on linear forms and in accordance to Einstein summation convention (or Einstein
notation) the position of the indices on the coefficients is reversed. Einstein summation
convention is to assume that a summation is performed for all values of every index that
appears simultaneously once as an upper index and once as a lower index. According to this
convention, the tensor above is written
j1
js
r
= aij11,...,i
,...,js ei1 eir e e .
657
1,s 1
(V ), with 1 i r
ci,j (u1 ur v1 vs )
As
c1,1 (ei ej ) =
we get
c1,1 (h) =
n
X
i,j ,
aii = tr(h),
i=1
where tr(h) is the trace of h, where h is viewed as the linear map given by the matrix, (aij ).
Actually, since c1,1 is defined independently of any basis, c1,1 provides an intrinsic definition
of the trace of a linear map h 2 Hom(V, V ).
Remark: Using the Einstein summation convention, if
j1
js
r
= aij11,...,i
,...,js ei1 eir e e ,
then
i ,...,i
1 ,i,ik+1 ...,ir
1 ,i,jl+1 ,...,js
j1
js
c
j
ei1 ec
ik e ir e e l e .
If E and F are two K-algebras, we know that their tensor product E F exists as a
vector space. We can make E F into an algebra as well. Indeed, we have the multilinear
map
EF EF !EF
658
given by (a, b, c, d) 7! (ac) (bd), where ac is the product of a and c in E and bd is the
product of b and d in F . By the universal mapping property, we get a linear map,
EF EF
! E F.
23.6
Our goal is to come up with a notion of tensor product that will allow us to treat symmetric
multilinear maps as linear maps. First, note that we have to restrict ourselves to a single
vector space E, rather then n vector spaces E1 , . . . , En , so that symmetry makes sense.
Recall that a multilinear map f : E n ! F is symmetric i
f (u
(1) , . . . , u (n) )
= f (u1 , . . . , un ),
for all ui 2 E and all permutations, : {1, . . . , n} ! {1, . . . , n}. The group of permutations
on {1, . . . , n} (the symmetric group) is denoted Sn . The vector space of all symmetric
multilinear maps f : E n ! F is denoted by Sn (E; F ). Note that S1 (E; F ) = Hom(E, F ).
We could proceed directly as in Theorem 23.5 and construct symmetric tensor products
from scratch. However, since we already have the notion of a tensor product, there is a more
economical method. First, we define symmetric tensor powers.
Definition 23.6. An n-th symmetric tensor power of a vector space E, where n 1, is a
vector space S together with a symmetric multilinear map ' : E n ! S such that, for every
vector space F and for every symmetric multilinear map f : E n ! F , there is a unique linear
map f : S ! F , with
f (u1 , . . . , un ) = f ('(u1 , . . . , un )),
659
f =f
'.
Equivalently, there is a unique linear map f such that the following diagram commutes:
E nC
'
/ S
CC
CC
f
C
f CC!
First, we show that any two symmetric n-th tensor powers (S1 , '1 ) and (S2 , '2 ) for E
are isomorphic.
Proposition 23.11. Given any two symmetric n-th tensor powers (S1 , '1 ) and (S2 , '2 ) for
E, there is an isomorphism h : S1 ! S2 such that
'2 = h '1 .
Proof. Replace tensor product by n-th symmetric tensor power in the proof of Proposition
23.4.
We now give a construction that produces a symmetric n-th tensor power of a vector
space E.
Theorem 23.12. Given a vector space E, a symmetric n-th tensor power (Symn (E), ')
for E can be constructed (n
1). Furthermore, denoting '(u1 , . . . , un ) as u1 un ,
the symmetric tensor power Symn (E) is generated by the vectors u1
un , where
u1 , . . . , un 2 E, and for every symmetric multilinear map f : E n ! F , the unique linear
map f : Symn (E) ! F such that f = f
' is defined by
f (u1
on the generators u1
un ) = f (u1 , . . . , un )
un of Symn (E).
Proof. The tensor power E n is too big, and thus we define an appropriate quotient. Let C
be the subspace of E n generated by the vectors of the form
u1 un
for all ui 2 E, and all permutations
space (E n )/C does the job.
(1)
(n) ,
Let p : E n ! (E n )/C be the quotient map. Let ' : E n ! (E n )/C be the map
(u1 , . . . , un ) 7! p(u1 un ),
660
/ E n
FF
FF
f
F
f FF
#
E n FF
E n JJ
(E n )/C
JJ
JJ
JJ
h
f JJJ
%
un generate
which shows that h and f agree. Thus, Symn (E) = (E n )/C and ' constitute a symmetric
n-th tensor power of E.
Again, the actual construction is not important. What is important is that the symmetric
n-th power has the universal mapping property with respect to symmetric multilinear maps.
Remark: The notation for the commutative multiplication of symmetric tensor powers is
not standard. Another notation commonly used is . We often abbreviate symmetric tensor
n
power as symmetric power. The symmetric power Symn (E) is also denoted
Jn Sym E or
S(E). To be consistent with the use of , we could have used the notation
E. Clearly,
1
0
u1
2 Sn .
wi
un ),
661
g(v).
It is immediately verified that h is symmetric bilinear, and thus it induces a unique linear
map
f g : Sym2 (E) ! Sym2 (E 0 ),
such that
(f
g)(u
v) = f (u)
g(u).
(g 0 g) = (f 0
g 0 ) (f
g).
fn of n
3 linear maps
662
23.7
The vectors u1 un , where u1 , . . . , un 2 E, generate Symn (E), but they are not linearly
independent. We will prove a version of Proposition 23.6 for symmetric tensor powers. For
this, recall that a (finite) multiset over a set I is a function M : I ! N, such that M (i) 6= 0
for finitely many i 2 I, and that the set of all multisets over I is denoted as N(I) . We let
(I)
dom(M ) = {i 2
PI | M (i) 6= 0}, which is a finite
Pset. Then, for
P any multiset M 2 N , note
that the sum
i2I M (i) makes sense, since
i2I M (i) =
i2dom(M ) M (i), and dom(M )
(I)
is finite. For every multiset M 2 N , for any n
2, we define the set JM of functions
: {1, . . . , n} ! dom(M ), as follows:
X
JM = { | : {1, . . . , n} ! dom(M ), | 1 (i)| = M (i), i 2 dom(M ),
M (i) = n}.
i2I
In other words, if i2I M (i) = n and dom(M ) = {i1 , . . . , ik },1 any function 2 JM specifies
a sequence of length n, consisting of M (i1 ) occurrences of i1 , M (i2 ) occurrences of i2 , . . .,
M (ik ) occurrences of ik . Intuitively, any defines a permutation of the sequence (of length
n)
hi1 , . . . , i1 , i2 , . . . , i2 , . . . , ik , . . . , ik i.
| {z } | {z }
| {z }
M (i1 )
Given any k
M (i2 )
M (ik )
{z
u
|
as u k .
u}
M (i1 )
M (ik )
ui 1
ui k
P
M 2N(I) ,
i2I
Proof. The proof is very similar to that of Proposition 23.6. For any nontrivial vector space
F , for any family of vectors
(wM )M 2N(I) , Pi2I M (i)=n ,
we show the existence
of a symmetric multilinear map h : Symn (E) ! F , such that for every
P
M 2 N(I) with i2I M (i) = n, we have
h(ui1
M (i1 )
M (ik )
ui k
) = wM ,
663
X
X
X
1 1
n n
1
n
f(
v j1 uj 1 , . . . ,
v j n uj n ) =
v(1) v(n) wM .
j1 2I
jn 2I
(I)
P M 2N
i2I M (i)=n
2JM
It is not difficult to verify that f is symmetric and multilinear. By the universal mapping
property of the symmetric tensor product, the linear map f : Symn (E) ! F such that
f =f
', is the desired map h. Then, by Proposition 23.3, it follows that the family
M (i1 )
M (ik )
ui 1
ui k
P
M 2N(I) ,
i2I
is linearly independent. Using the commutativity of , we can also show that these vectors
generate Symn (E), and thus, they form a basis for Symn (E). The details are left as an
exercise.
As a consequence, when I is finite, say of size p = dim(E), the dimension of Symn (E) is
the number of finite multisets (j1 , . . . , jp ), such that j1 + + jp = n, jk 0. We leave as
an exercise to show that this number is p+nn 1 . Thus, if dim(E) = p, then the dimension of
Symn (E) is p+nn 1 . Compare with the dimension of E n , which is pn . In particular, when
p = 2, the dimension of Symn (E) is n + 1. This can also be seen directly.
Remark: The number
p+n 1
n
P M 2N
i2I M (i)=n
{i1 ,...,ik }=dom(M )
This looks like a homogeneous polynomial of total degree n, where the monomials of total
degree n are the symmetric tensors
ui 1
M (i1 )
M (ik )
ui k
in the indeterminates ui , where i 2 I (recall that M (i1 ) + + M (ik ) = n). Again, this
is not a coincidence. Polynomials can be defined in terms of symmetric tensors.
664
23.8
We can show the following property of the symmetric tensor product, using the proof technique of Proposition 23.7:
n
Sym (E
23.9
F)
=
n
M
k=0
In this section, all vector spaces are assumed to have finite dimension. We define a nondegenerate pairing Symn (E ) Symn (E) ! K as follows: Consider the multilinear map
(E )n E n ! K
given by
(v1 , . . . , vn , u1 , . . . , un ) 7!
2Sn
Note that the expression on the right-hand side is almost the determinant det(vj (ui )),
except that the sign sgn( ) is missing (where sgn( ) is the signature of the permutation ;
that is, the parity of the number of transpositions into which can be factored). Such an
expression is called a permanent.
vj .
It is easily checked that this expression is symmetric w.r.t. the ui s and also w.r.t. the
For any fixed (v1 , . . . , vn ) 2 (E )n , we get a symmetric multinear map
X
lv1 ,...,vn : (u1 , . . . , un ) 7!
v (1) (u1 ) v (n) (un )
2Sn
from E n to K. The map lv1 ,...,vn extends uniquely to a linear map Lv1 ,...,vn : Symn (E) ! K.
Now, we also have the symmetric multilinear map
(v1 , . . . , vn ) 7! Lv1 ,...,vn
from (E )n to Hom(Symn (E), K), which extends to a linear map L from Symn (E ) to
Hom(Symn (E), K). However, in view of the isomorphism
Hom(U V, W )
= Hom(U, Hom(V, W )),
we can view L as a linear map
L : Symn (E ) Symn (E) ! K,
which corresponds to a bilinear map
Symn (E ) Symn (E) ! K.
665
Now, this pairing in nondegenerate. This can be shown using bases and we leave it as
an exercise to the reader (see Knapp [64], Appendix A). Therefore, we get a canonical
isomorphism
(Symn (E))
= Symn (E ).
Since we also have an isomorphism
(Symn (E))
= Sn (E, K),
we get a canonical isomorphism
: Symn (E )
= Sn (E, K)
which allows us to interpret symmetric tensors over E as symmetric multilinear maps.
Remark: The isomorphism : Symn (E )
= Sn (E, K) discussed above can be described
explicity as the linear extension of the map given by
X
(v1 vn )(u1 , . . . , un ) =
v (1) (u1 ) v (n) (un ).
2Sn
(1)
(n) .
for all
2 Sn
1 X
n! 2S
(u1 un ) =
1 X
u
n! 2S
n
(1)
(n) .
As the right hand side is clearly symmetric, we get a linear map : Symn (E) ! E n .
Clearly, (Symn (E)) is the set of symmetrized tensors in E n . If we consider the map
666
S = S. Therefore, S is a projection,
Ker S.
It turns out that Ker S = E n \ I = Ker , where I is the two-sided ideal of T (E) generated
by all tensors of the form u v v u 2 E 2 (for example, see Knapp [64], Appendix A).
Therefore, is injective,
E n = (Symn (E))
(E n \ I) = (Symn (E))
Ker ,
and the symmetric tensor power Symn (E) is naturally embedded into E n .
23.10
Symmetric Algebras
As in the case of tensors, we can pack together all the symmetric powers Symn (V ) into an
algebra
M
Sym(V ) =
Symm (V ),
m 0
called the symmetric tensor algebra of V . We could adapt what we did in Section 23.5 for
general tensor powers to symmetric tensors but since we already have the algebra T (V ),
we can proceed faster. If I is the two-sided ideal generated by all tensors of the form
u v v u 2 V 2 , we set
Sym (V ) = T (V )/I.
Then, Sym (V ) automatically inherits a multiplication operation which is commutative, and
since T (V ) is graded, that is
M
T (V ) =
V m ,
m 0
we have
Sym (V ) =
m 0
V m /(I \ V m ).
Symm (V )
= V m /(I \ V m ),
so
Sym (V )
= Sym(V ).
667
Proposition 23.14. Given any commutative K-algebra A, for any linear map f : V ! A,
there is a unique K-algebra homomorphism f : Sym(V ) ! A so that
f =f
i,
V HH / Sym(V )
HH
HH
H
f
f HHH
$
A
vn )(u1 , . . . , un ) =
2Sn
! Sn (E, K)
Symm+n (E )
/
Sm (E, K) Sn (E, K)
The answer is yes! The solution is to define this multiplication such that, for f 2 Sm (E, K)
and g 2 Sn (E, K),
(f g)(u1 , . . . , um+n ) =
f (u
2shue(m,n)
where shue(m, n) consists of all (m, n)-shues; that is, permutations of {1, . . . m + n}
such that (1) < < (m) and (m + 1) < < (m + n). We urge the reader to check
this fact.
Another useful canonical isomorphim (of K-algebras) is
Sym(E
F)
= Sym(E) Sym(F ).
668
23.11
(1) , . . . , u (n) )
= sgn( )f (u1 , . . . , un ),
2 Sn , and all ui 2 E.
For n = 1, we agree that every linear map f : E ! F is alternating. The vector space of
all multilinear alternating maps f : E n ! F is denoted Altn (E; F ). Note that Alt1 (E; F ) =
Hom(E, F ). The following basic proposition shows the relationship between alternation and
skew-symmetry.
Proposition 23.15. Let f : E n ! F be a multilinear map. If f is alternating, then the
following properties hold:
(1) For all i, with 1 i n
1,
f (. . . , ui , ui+1 , . . .) =
(2) For every permutation
f (. . . , ui+1 , ui , . . .).
2 Sn ,
f (u
(1) , . . . , u (n) )
= sgn( )f (u1 , . . . , un ).
669
f (. . . , ui+1 , ui , . . .).
(ii) Clearly, the symmetric group, Sn , acts on Altn (E; F ) on the left, via
f (u1 , . . . , un ) = f (u
(1) , . . . , u (n) ).
(1) , . . . , u (n) )
= 0.
However, by (ii),
f (u1 , . . . , un ) = sgn( )f (u
(1) , . . . , u (n) )
= 0.
n
X
aij ui ,
i=1
then
f (v1 , . . . , vn ) =
2Sn
sgn( ) a
(1),1
(n),n
1 j n,
!
670
f = f^ '.
Equivalently, there is a unique linear map f^ such that the following diagram commutes:
E nC
'
/ A
CC
CC
f^
C
f CC!
First, we show that any two n-th exterior tensor powers (A1 , '1 ) and (A2 , '2 ) for E are
isomorphic.
Proposition 23.17. Given any two n-th exterior tensor powers (A1 , '1 ) and (A2 , '2 ) for
E, there is an isomorphism h : A1 ! A2 such that
'2 = h '1 .
Proof. Replace tensor product by n exterior tensor power in the proof of Proposition 23.4.
We now give a construction that produces an n-th exterior tensor power of a vector space
E.
V
Theorem 23.18. Given a vector space E, an n-th exterior tensor power ( n (E), ') for E
can be constructed
Vn (n 1). Furthermore, denoting '(u1 , . . . , un ) as u1 ^ ^un , the exterior
tensor power
(E) is generated by the vectors u1 ^ ^ un , where u1 , . . .V
, un 2 E, and for
n
every alternating multilinear map f : E ! F , the unique linear map f^ : n (E) ! F such
that f = f^ ' is defined by
f^ (u1 ^ ^ un ) = f (u1 , . . . , un )
on the gxenerators u1 ^ ^ un of
Vn
(E).
671
Proof sketch. We can give a quick proof using the tensor algebra T (E). let Ia be the twosided ideal of T (E) generated by all tensors of the form u u 2 E 2 . Then, let
n
^
(E) = E n /(Ia \ E n )
Vn
n
and let be the projection
(E). If we let u1 ^ ^ un = (u1 un ), it
Vn : E !
is easy to check that ( (E), ^) satisfies the conditions of Theorem 23.18.
Remark: We can also define
^
(E) = T (E)/Ia =
n
M^
(E),
n 0
the exterior algebra of E. This is the skew-symmetric counterpart of Sym(E), and we will
study it a little later.
V
V
For simplicity of notation, we may writeV n E for n (E). We also abbreviate V
exterior
0
tensor power as exterior power. Clearly, 1 (E)
(E) =
= E, and it is convenient to set
K.
V
The fact that the map ' : E n ! n (E) is alternating and multinear can also be expressed
as follows:
u1 ^ ^ (ui + vi ) ^ ^ un = (u1 ^ ^ ui ^ ^ un )
+ (u1 ^ ^ vi ^ ^ un ),
u1 ^ ^ ( ui ) ^ ^ un = (u1 ^ ^ ui ^ ^ un ),
u (1) ^ ^ u (n) = sgn( ) u1 ^ ^ un ,
for all
2 Sn .
n
^
(E), F )
= Altn (E; F )
V
between the vector space of linear maps Hom( n (E), F ) and the vector space of alternating
multilinear maps Altn (E; F ), via the linear map
' defined by
where h 2 Hom(
Vn
h 7! h ',
(E), F ). In particular, when F = K, we get a canonical isomorphism
!
n
^
(E)
= Altn (E; K).
672
V
Tensors 2 n (E) are called alternating n-tensors or alternating tensors of degree n
and we write deg() = n. Tensors of the form u1 ^ ^ un , where ui 2 E, are called simple
(or decomposable) alternating n-tensors. Those alternating n-tensors that areVnot simple are
often called compound alternating
Simple tensors u1 ^ ^ un 2 n (E) are also
Vn n-tensors.
called n-vectors and tensors in
(E ) are often called (alternating) n-forms.
V
Given two linear maps f : E ! E 0 and g : E ! E 0 , we can define h : E E ! 2 (E 0 ) by
h(u, v) = f (u) ^ g(v).
It is immediately verified that h is alternating bilinear, and thus it induces a unique linear
map
2
2
^
^
f ^ g:
(E) ! (E 0 )
such that
23.12
3 linear maps fi : E ! E 0
Let E be any vector space. For any basis (ui )i2 for E, we assume that some total ordering
on has been chosen. Call the pair ((ui )i2 , ) an ordered basis. Then, for any nonempty
finite subset I , let
u I = ui 1 ^ ^ u i m ,
where I = {i1 , . . . , im }, with i1 < < im .
V
Since n (E) is generated by the tensors of the form v1 ^ ^ vn , with
V vi 2 E, in view of
skew-symmetry, it is clear that the tensors uI , with |I| = n, generate n (E). Actually, they
form a basis.
Proposition 23.19. Given any vector
V space E, if E has finite
V dimension d = dim(E), then
for all n > d, the exterior power n (E) is trivial; that is n (E) = (0). If n d or if E
is
Vninfinite dimensional, then for every ordered basis ((ui )i2 , ), the family (uI ) is basis of
(E), where I ranges over finite nonempty subsets of of size |I| = n.
673
Proof.
First, assume that E has finite dimension d = dim(E) and that n > d. We know that
Vn
(E) is generated by the tensors of the form v1 ^ ^ vn , with vi 2 E. If u1 , . . . , ud is a
basis of E, as every vi is a linear combination of the uj , when we expand v1 ^ ^ vn using
multilinearity, we get a linear combination of the form
X
v1 ^ ^ vn =
(j1 ,...,jn ) uj1 ^ ^ ujn ,
(j1 ,...,jn )
where each (j1 , . . . , jn ) is some sequence of integers jk 2 {1, . . . , d}. As n > d, each sequence
(j1 , . . . , jn ) must contain two identical
Vn elements. By alternation, uj1 ^ ^ ujn = 0, and so
v1 ^ ^ vn = 0. It follows that
(E) = (0).
ij .
For any nonempty subset I = {i1 , . . . , in } with i1 < < in , let lI be the map given by
lI (v1 , . . . , vn ) = det(uij (vk )),
for all vk 2 E. As lI is alternating multilinear, it induces a linear map LI :
Observe that for any nonempty finite subset J with |J| = n, we have
1 if I = J
LI (uJ ) =
0 if I 6= J.
Vn
(E) ! K.
where the above sum is finite and involves nonempty finite subset I with |I| = n, for
every such I, when we apply LI we get
I
= 0,
n
^
n
(E)) =
,
d
674
Vn
(E)) = 0.
V0
(V ) = K.
Proposition 23.20. For any vector space E, the vectors u1 , . . . , un 2 E are linearly independent i u1 ^ ^ un 6= 0.
Proof. If u1 ^ ^ un 6= 0, then u1 , . . . , un must be linearly independent. Otherwise, some
ui would be a linear combination of the other uj s (with j 6= i), and then, as in the proof
of Proposition 23.19, u1 ^ ^ un would be a linear combination of wedges in which two
vectors are identical, and thus zero.
Conversely, assume that u1 , . . . , un are linearly independent. Then, we have the linear
forms ui 2 E such that
ui (uj ) = i,j
1 i, j n.
V
As in the proof of Proposition 23.19, we have a linear map Lu1 ,...,un : n (E) ! K, given by
for all v1 ^ ^ vn 2
Vn
(E). As,
Lu1 ,...,un (u1 ^ ^ un ) = 1,
we conclude that u1 ^ ^ un 6= 0.
Proposition 23.20 shows that, geometrically, every nonzero wedge u1 ^ ^un corresponds
to some oriented version of an n-dimensional subspace of E.
23.13
We can show the following property of the exterior tensor product, using the proof technique
of Proposition 23.7:
n
n ^
k
n^k
^
M
(E F ) =
(E)
(F ).
k=0
23.14
675
given by
X
(v1 , . . . , vn , u1 , . . . , un ) 7!
2Sn
It is easily checked that this expression is alternating w.r.t. the ui s and also w.r.t. the vj .
For any fixed (v1 , . . . , vn ) 2 (E )n , we get an alternating multinear map
lv1 ,...,vn : (u1 , . . . , un ) 7! det(vj (ui ))
from E n to K. By the argument used in the symmetric case, we get a bilinear map
n
^
(E )
n
^
(E) ! K.
Now, this pairing in nondegenerate. This can be shown using bases and we leave it as an
exercise to the reader. Therefore, we get a canonical isomorphism
(
n
^
(E))
=
n
^
n
^
(E ).
(E))
= Altn (E; K),
n
^
(E )
= Altn (E; K),
676
with the factor n!1 added in front of the determinant. Each version has its its own merits
and inconvenients. Morita [83] uses 0 because it is more convenient than when dealing
with characteristic classes. On the other hand, when using 0 , some extra factor is needed
in defining the wedge operation of alternating multilinear forms (see Section 23.15) and for
exterior dierentiation. The version is the one adopted by Warner [114], Knapp [64],
Fulton and Harris [42], and Cartan [20, 21].
by
v 2 F .
Consequently, we have
f > (v )(u) = v (f (u)),
For any p
1, the map
(u1 , . . . , up ) 7! f (u1 ) ^ ^ f (up )
V
V
V
V
from E n to p F is multilinear alternating, so it induces a linear map p f : p E ! p F
defined on generators by
Combining
by
Vp
p
^
>
Vp
f> :
Vp
F !
Vp
E defined on generators
p
^
>
!2
p
^
F , u1 , . . . , up 2 E.
p
^
as claimed.
677
V
The map p f > is often denoted f , although this is an ambiguous notationVsince p is
dropped. Proposition 23.21 gives us the behavior of f under the identification of p E and
Altp (E; K) via the isomorphism .
Vn
n
As in the case of symmetric powers, the map
from
E
to
(E) given by (u1 , . . . , un ) 7!
Vn
n
u1 ^ ^ un yields a surjection
: E
!
(E). Now, this map has some section, so
Vn
n
there is some injection :
(E) ! E with = id. If our field K has characteristic 0,
then there is a special section having a natural definition involving an antisymmetrization
process.
Recall that we have a left action of the symmetric group Sn on E n . The tensors z 2 E n
such that
z = sgn( ) z, for all
2 Sn
are called antisymmetrized tensors. We define the map : E n ! E n by
1 X
sgn( ) u (1) u (n) .
(u1 , . . . , un ) =
n! 2S
n
Vn
As the right
hand
side
is
clearly
an
alternating
map,
we
get
a
linear
map
:
(E) ! E n .
Vn
Clearly, ( (E)) is the set of antisymmetrized tensors in E n . If we consider the map
A = : E n ! E n , it is easy to check that A A = A. Therefore, A is a projection,
and by linear algebra, we know that
E
= A(E
Ker A = (
n
^
(A))
Ker A.
It turns out that Ker A = E n \ Ia = Ker , where Ia is the two-sided ideal of T (E)
generated by all tensors of the form u u 2 E 2 (for example, see Knapp [64], Appendix
A). Therefore, is injective,
E
= (
23.15
n
^
(E))
Vn
(E
\ I) = (
n
^
(E))
Ker ,
Exterior Algebras
As in the case of symmetric tensors, we can pack together all the exterior powers
an algebra
m
^
M^
(V ) =
(V ),
Vn
(V ) into
m 0
called the exterior algebra (or Grassmann algebra) of V . We mimic the procedure used
for symmetric powers. If Ia is the two-sided ideal generated by all tensors of the form
u u 2 V 2 , we set
^
(V ) = T (V )/Ia .
678
V
Then, (V ) automatically inherits a multiplication operation, called wedge product, and
since T (V ) is graded, that is
M
T (V ) =
V m ,
m 0
we have
(V ) =
m 0
so
V m /(Ia \ V m ).
(V )
= V m /(Ia \ V m ),
(V )
=
(V ).
Vm
(V ) =
d ^
m
M
(V ),
m=0
d
m
(V ) has dimension
, we deduce that
^
dim( (V )) = 2d = 2dim(V ) .
The multiplication, ^ :
precise sense:
Vm
(V )
Vn
Vm
(V ) !
Vm+n
(V ) and all
Vn
(V ), we have
^ = ( 1)mn ^ .
The above discussion suggests that it might be useful to know whenVan alternating tensor
is simple, that is, decomposable. It can be shown that for tensors 2 2 (V ), ^ = 0 i
is simple. A general criterion for decomposability can be given in terms of some operations
known as left hook and right hook (also called interior products); see Section 23.17.
V
It is easy to see that (V ) satisfies the following universal mapping property:
679
Proposition 23.23. Given any K-algebra A, for any linear map fV: V ! A, if (f (v))2 = 0
for all v 2 V , then there is a unique K-algebra homomorphism f : (V ) ! A so that
f =f
i,
(V )
FF
FF
f
F
f FF
"
Vn
Vm
(E )
Vn
(E ) !
Vm+n
(E ). The fol-
Can we define a multiplication Altm (E; K) Altn (E; K) ! Altm+n (E; K) directly on
alternating multilinear forms, so that the following diagram commutes:
Vm
(E )
Vn
(E )
(E )
Vm+n
As in the symmetric case, the answer is yes! The solution is to define this multiplication
such that, for f 2 Altm (E; K) and g 2 Altn (E; K),
(f ^ g)(u1 , . . . , um+n ) =
sgn( ) f (u
2shue(m,n)
where shue(m, n) consists of all (m, n)-shues; that is, permutations of {1, . . . m + n}
such that (1) < < (m) and (m+1) < < (m+n). For example, when m = n = 1,
we have
(f ^ g)(u, v) = f (u)g(v) g(u)f (v).
When m = 1 and n
2, check that
(f ^ g)(u1 , . . . , um+1 ) =
m+1
X
i=1
where the hat over the argument ui means that it should be omitted.
680
Altn (E; K)
n 0
V
is an algebra under the above multiplication, and this algebra is isomorphic to (E ). For
the record, we state
V
Proposition 23.24. When E is finite dimensional, the maps : n (E ) ! Altn (E; K)
induced by the linear extensions of the maps given by
(v1 ^ ^ vn )(u1 , . . . , un ) = det(vj (ui ))
V
yield a canonical isomorphism of algebras : (E ) ! Alt(E), where the multiplication in
Alt(E) is defined by the maps ^ : Altm (E; K) Altn (E; K) ! Altm+n (E; K), with
X
(f ^ g)(u1 , . . . , um+n ) =
sgn( ) f (u (1) , . . . , u (m) )g(u (m+1) , . . . , u (m+n) ),
2shue(m,n)
of {1, . . . m + n}
V
Remark: The algebra (E) is a graded algebra. Given two graded algebras E and F , we
b F , where E
b F is equal to E F as a vector space,
can make a new tensor product E
but with a skew-commutative multiplication given by
(a b) ^ (c d) = ( 1)deg(b)deg(c) (ac) (bd),
23.16
n^k
V,
681
V
It is easy to show that if (e1 , . . . , en ) is an orthonormal basis of V , then the basis of k V
consisting
V of the eI (where I = {i1 , . . . , ik }, with 1 i1 < < ik n) is anorthonormal
basis of k V . Since the inner product on V induces an inner product V
on V (recall that
]
]
k
^
V !
n^k
V,
called the Hodge -operator , as follows: For any choice of a positively oriented orthonormal
basis (e1 , . . . , en ) of V , set
(e1 ^ ^ ek ) = ek+1 ^ ^ en .
(1) = e1 ^ ^ en
(e1 ^ ^ en ) = 1.
It is easy to see that the definition of does not depend on the choice of positively oriented
orthonormal basis.
Vk
V
The Hodge
V ! n k V induces aVlinear bijection
V
V -operators :
V
: (V ) ! (V ). We also have Hodge -operators : k V ! n k V .
The following proposition is easy to show:
Proposition 23.25. If V is any oriented vector space of dimension n, for every k with
0 k n, we have
(i) = ( id)k(n
k)
Vk
V.
1
(1) = p
v1 ^ ^ vn
det(hvi , vj i)
682
23.17
In this section, all vector spaces are assumed to have finite dimension. Say dim(E) = n.
Using our nonsingular pairing
h , i:
defined on generators by
p
^
p
^
E !K
(1 p n)
p
^
p+q
p+q
x:
p
^
E !
q
^
q
^
(left hook)
(right hook),
as well as the versions obtained by replacing E by E and E by E. We begin with the left
interior product or left hook, y.
V
Let u 2 p E. For any q such that p + q n, multiplication on the right by u is a linear
map
q
p+q
^
^
^R (u) :
E !
E
given by
where v 2
Vq
v 7! v ^ u
E. The transpose of ^R (u) yields a linear map
p+q
t
(^R (u)) : (
E)
!(
q
^
E) ,
V
Vp+q
V
Vq
which, using the isomorphisms ( p+q E)
E and ( q E)
E , can be viewed
=
=
as a map
p+q
q
^
^
t
(^R (u)) :
E !
E
given by
where z 2
Vp+q
E .
We denote z ^R (u) by
z 7! z ^R (u),
u y z.
683
for all u 2
Vp
Vq
E, v 2
E and z 2
Vp+q
E .
(u ^ v) y z = u y (v y z ),
so y defines a left action
y:
p
^
p+q
E !
q
^
E .
k
^
p
^
F)
=
k
^
F ,
p+q
E !
for all u 2
Vp
q
^
E.
E , v 2
Vq
E and z 2
Vp+q
E.
Vp In order to proceed any further, we need some combinatorial properties of the basis of
E constructed from a basis (e1 , . . . , en ) of E. Recall that for any (nonempty) subset
I {1, . . . , n}, we let
e I = e i1 ^ ^ e ip ,
where I = {i1 , . . . , ip } with i1 < < ip . We also let e; = 1.
Given any two subsets H, L {1, . . . , n}, let
0
if H \ L 6= ;,
H,L =
( 1) if H \ L = ;,
where
= |{(h, l) | (h, l) 2 H L, h > l}|.
Proposition 23.26. For any basis (e1 , . . . , en ) of E the following properties hold:
(1) If H \ L = ;, |H| = h, and |L| = l, then
H,L L,H = ( 1)hl .
684
y:
we have
p+q
E !
q
^
E ,
eH y eL = 0 if H 6 L
eH y eL = L H,H eL H if H L.
Similar formulae hold for y :
have the
Vp
Vp+q
Vq
Vs
p
^
p+q
q
^
E ,
u y (x ^ y ) = ( 1)s (u y x ) ^ y + x ^ (u y y ),
E .
Proof. We can prove the above identity assuming that x and y are of the form eI and eJ
using Proposition 23.26, but this is rather tedious. There is also a proof involving determinants; see Warner [114], Chapter 2.
Thus, y is almost an anti-derivation, except that the sign ( 1)s is applied to the wrong
factor.
It is also possible to define a right interior product or right hook x, using multiplication
on the left rather than multiplication on the right. Then, x defines a right action
p+q
x:
such that
hz , u ^ vi = hz x u, vi,
p
^
for all u 2
x:
p
^
E !
Vp
q
^
E, v 2
q
^
Vq
E,
E, and z 2
Vp+q
E .
685
such that
hu ^ v , zi = hv , z x u i,
Since the left hook y :
Vp
hu y z , vi = hz , v ^ ui,
for all u 2
Vp+q
E !
for all u 2
x:
by
hz x u, vi = hz , u ^ vi,
Vq
Vp
p
^
for all u 2
Vp
E , and z 2
E is defined by
E, v 2
E !
Vp
Vq
E , v 2
q
^
Vq
E and z 2
Vp+q
Vp+q
E.
E ,
E, v 2
Vq
E, and z 2
Vp+q
E ,
u y z = ( 1)pq z x u,
where u 2
Vp
E and z 2
Vp+q
E .
Using the above property and Proposition 23.27, we get the following version of Proposition 23.27 for the right hook:
Proposition 23.28. For the right hook
p+q
x:
for every u 2 E, we have
where x 2
Vr
p
^
E !
q
^
E ,
(x ^ y ) x u = (x x u) ^ y + ( 1)r x ^ (y x u),
E .
Thus, x is an anti-derivation.
For u 2 E, the right hook z x u is also denoted i(u)z , and called insertion operator or
interior
V product. This operator plays an important role in dierential geometry. If we view
z 2 n+1 (E ) as an alternating multilinear map in Altn+1 (E; K), then i(u)z 2 Altn (E; K)
is given by
(i(u)z )(v1 , . . . , vn ) = z (u, v1 , . . . , vn ).
Note that certain authors, such as Shafarevitch [98], denote our right hook z x u (which
is also the right hook in Bourbaki [14] and Fulton and Harris [42]) by u y z .
686
Vp
Vn p
Using
the
two
versions
of
y,
we
can
define
linear
maps
:
E
!
E and
Vp
Vn p
:
E !
E. For any basis (e1 , . . . , en ) of E, if we let M = {1, . . . , n}, e = e1 ^ ^en ,
and e = e1 ^ ^ en , then
(u) = u y e and (v) = v y e,
V
V
for all u 2 p E and all v 2 p E . The following proposition is easily shown.
V
V
V
V
Proposition 23.29. The linear maps : p E ! n p E and : p E ! n p E are
isomorphims. The isomorphisms
and map decomposable vectors to decomposable vectors.
Vp
Vp
Furthermore, if z 2
E is decomposable, thenVh (z), ziV= 0, and similarly
for
z
2
V
V E .
If (e01 , . . . , e0n ) is any other basis of E and 0 : p E ! n p E and 0 : p E ! n p E
are the corresponding isomorphisms, then 0 =
and 0 = 1 for some nonzero 2 .
Proof. Using Proposition 23.26, for any subset J {1, . . . , n} = M such that |J| = p, we
have
(eJ ) = eJ y e = M J,J eM J and (eJ ) = eJ y e = M J,J eM J .
Thus,
(eJ ) = M
A similar result holds for
J,J J,M J eJ
= ( 1)p(n
p)
eJ .
p)
id and
= ( 1)p(n p) id.
V
Thus, and are isomorphisms. If z 2 p E is decomposable, then z = u1 ^ ^ up where
u1 , . . . , up are linearly independent since z 6= 0, and we can pick a basis of E of the form
(u1 , . . . , un ). Then, the above formulae show that
(z) = up+1 ^ ^ un .
Clearly
h (z), zi = 0.
V
If (e01 , . . . , e0n ) is any other basis of E, because m E has dimension 1, we have
e01 ^ ^ e0n = e1 ^ ^ en
We are now ready to tackle the problem of finding criteria for decomposability. We need
a few preliminary results.
Vp
Proposition
23.30.
Given
z
2
E with z 6= 0, the smallest vector space W E such
V
that z 2 p W is generated by the vectors of the form
V
u y z,
with u 2 p 1 E .
687
V
Proof. First, let W be any subspace such that z 2 p (E) and let
P(e1 , . . . , er , er+1 , . . . , en ) be
= 1.
It follows that
eI y z = eI y (z 0 + w ^ z 00 ) = eI y z 0 + eI y (w ^ z 00 ) = eI y z 0 + w,
with eI y z 0 2 W 0 , which shows that eIVy z 2
/ W 0 . Therefore, W is indeed generated by the
p 1
as u y z for some u 2
E , and since (u y z) ^ z = 0 for all u 2
E , we get
ej ^ z = 0 for j = 1, . . . , n.
P
By wedging z = I I eI with each ej , as n > p, we deduce
contradiction. Therefore, n = p and z is decomposable.
= 0 for all I, so z = 0, a
688
are
(eH y z) ^ z = 0
Vp
1. Since (eH y z) ^ z 2
eJ ((eH y z) ^ z) = 0
for all H, J {1, . . . , n}, with |H| = p
with |I| = |I 0 | = p, we can show that
Vp+1
E, this is equivalent to
eJ ((eH y eI ) ^ eI 0 ) = 0,
unless there is some i 2 {1, . . . , n} such that
I
In this case,
H = {i},
eJ (eH y eH[{i} ) ^ eJ
{i}
I 0 = {i}.
= {i},H {i},J
{i} .
If we let
i,J,H = {i},H {i},J
{i} ,
we have i,J,H = +1 if the parity of the number of j 2 J such that j < i is the same as the
parity of the number of h 2 H such that h < i, and i,J,H = 1 otherwise.
Finally, we obtain the following criterion in terms of quadratic equations (Pl
uckers equations) for the decomposability of an alternating tensor:
P
V
Proposition 23.32. (Grassmann-Pl
uckers Equations) For z = I I eI 2 p E, the conditions for z 6= 0 to be decomposable are
X
i,J,H H[{i} J {i} = 0,
i2J H
1 and |J| = p + 1.
Using these criteria, it is a good exercise to prove that if dim(E) = n, then every tensor
V
n 1
(E) is decomposable. This can also be shown directly.
It should be noted that the equations given by Proposition 23.32 are not independent.
For example, when dim(E) = n = 4 and p = 2, these equations reduce to the single equation
12 34
13 24
14 23
= 0.
When the field K is the field of complex numbers, this is the homogeneous equation of a
quadric in CP5 known as the Klein quadric. The points on this quadric are in one-to-one
correspondence with the lines in CP3 .
23.18
689
Hom(
n
^
(E), F )
= (
n
^
(E))
=
Alt (E; F )
=
n
n
^
n
^
^
n
n
^
(E), F )
(E)) F
(E )
(E ) F.
Note
F may have infinite dimension. This isomorphism allows us to view the tensors in
Vn that
Vn
i=1
(E ). We also let
^
(E; F ) =
n
M ^
n=0
(E )
F =
(E) F.
: F G ! H, then
m+n
^ :
(E ) F
(E ) G
!
(E ) H
by
( f ) ^ ( g) = ( ^ ) (f, g).
As in Section 23.15 (following H. Cartan [21]) we can also define a multiplication
^ : Altm (E; F ) Altm (E; G) ! Altm+n (E; H)
690
directly on alternating multilinear maps as follows: For f 2 Altm (E; F ) and g 2 Altn (E; G),
(f ^ g)(u1 , . . . , um+n ) =
sgn( ) (f (u
2shue(m,n)
of {1, . . . m + n}
In general, not much can be said about ^ , unless has some additional properties. In
particular, ^ is generally not associative. We also have the map
!
n
^
:
(E ) F ! Altn (E; F )
defined on generators by
((v1 ^ ^ vn ) a)(u1 , . . . , un ) = (det(vj (ui ))a.
Proposition 23.33. The map
:
n
^
(E )
! Altn (E; F )
vector spaces,
Vn F, G, H, and any bilinear map : F G ! H, for all ! 2 ( (E )) F and
all 2 ( (E )) G,
( ^ ) = () ^ ( ).
V
Proof. Since we already know that ( nV(E ))F and Altn (E; F ) are isomorphic, it is enough
to show that maps some basis of ( n (E )) F to linearly independent elements. Pick
some bases (e1 , . . . , ep ) in E and (fj )j2J in F V
. Then, we know that the vectors eI fj , where
I {1, . . . , p} and |I| = n, form a basis of ( n (E )) F . If we have a linear dependence
X
I,j (eI
I,j
fj ) = 0,
applying the above combination to each (ei1 , . . . , ein ) (I = {i1 , . . . , in }, i1 < < in ), we
get the linear combination
X
I,j fj = 0,
j
and by linear independence of the fj s, we get I,j = 0 for all I and all j. Therefore, the
(eI fj ) are linearly independent, and we are done. The second part of the proposition is
easily checked (a simple computation).
691
A special case of interest is the case where F = G = H is a Lie algebra and (a,P
b) = [a, b]
is the LieP
bracket of F . In this case, using a basis (f1 , . . . , fr ) of F , if we write ! = i i fi
and = j j fj , we have
X
[!, ] =
i ^ j [fi , fj ].
i,j
Consequently,
[, !] = ( 1)mn+1 [!, ].
The following proposition will be useful in dealing with vector-valued dierential forms:
V
Proposition 23.34. If (e1 , . . . , ep ) is any basis of E, then every element ! 2 ( n (E )) F
can be written in a unique way as
X
!=
eI fI ,
fI 2 F,
I
where the
eI
V
Proof. Since, Vby Proposition 23.19, the eI form a basis of n (E ), elements of the form
eI f span ( n (E )) F . Now, if we apply (!) to (ei1 , . . . , ein ), where I = {i1 , . . . , in }
{1, . . . , p}, we get
(!)(ei1 , . . . , ein ) = (eI fI )(ei1 , . . . , ein ) = fI .
(u1 ^ ^ un f ) = (u1 ^ ^ un ) f.
Proposition 23.35. If (e1 , . . . , ep ) is any basis of E, then every element ! 2 Altn (E; F )
can be written in a unique way as
X
!=
eI fI ,
fI 2 F,
I
692
23.19
Let so(2n) denote the vector space (actually, Lie algebra) of 2n 2n real skew-symmetric
matrices. It is well-known that every matrix A 2 so(2n) can be written as
A = P DP > ,
where P is an orthogonal matrix and where D is a block diagonal matrix
0
1
D1
B
C
D2
B
C
D=B
C
..
@
A
.
Dn
consisting of 2 2 blocks of the form
Di =
0
ai
ai
.
0
For a proof, see Horn and Johnson [57] (Corollary 2.5.14), Gantmacher [46] (Chapter IX),
or Gallier [44] (Chapter 11).
Since det(Di ) = a2i and det(A) = det(P DP > ) = det(D) = det(D1 ) det(Dn ), we get
det(A) = (a1 an )2 .
The Pfaffian is a polynomial function Pf(A) in skew-symmetric 2n 2n matrices A (a
polynomial in (2n 1)n variables) such that
Pf(A)2 = det(A),
and for every arbitrary matrix B,
Pf(BAB > ) = Pf(A) det(B).
The Pfaffian shows up in the definition of the Euler class of a vector bundle. There is a
simple way to define the Pfaffian using some exterior algebra. Let (e1 , . . . , e2n ) be any basis
of R2n . For any matrix A 2 so(2n), let
X
!(A) =
aij ei ^ ej ,
i<j
Vn
Definition 23.9. For every skew symmetric matrix A 2 so(2n), the Pfaffian polynomial or
Pfaffian, is the degree n polynomial Pf(A) defined by
n
^
693
Clearly, Pf(A) is independent of the basis chosen. If A is the block diagonal matrix D,
a simple calculation shows that
!(D) =
and that
n
^
and so
(a1 e1 ^ e2 + a2 e3 ^ e4 + + an e2n
^ e2n )
i,j
>
k,l
k bki ek ,
k,l
el ^ ek , we get
= 2!(BAB > ).
Consequently,
Now,
n
^
=2
n
^
= C f1 ^ f2 ^ ^ f2n ,
for some C 2 R. If B is singular, then the fi are linearly dependent, which implies that
f1 ^ f2 ^ ^ f2n = 0, in which case
Pf(BAB > ) = 0,
694
= 2n n! Pf(A) f1 ^ f2 ^ ^ f2n .
n
^
n
^
as claimed.
Remark: It can be shown that the polynomial Pf(A) is the unique polynomial with integer
coefficients such that Pf(A)2 = det(A) and Pf(diag(S, . . . , S)) = +1, where
0 1
S=
;
1 0
see Milnor and Stashe [82] (Appendix C, Lemma 9). There is also an explicit formula for
Pf(A), namely:
n
Y
1 X
Pf(A) = n
sgn( )
a (2i 1) (2i) .
2 n! 2S
i=1
2n
Beware, some authors use a dierent sign convention and require the Pfaffian to have
the value +1 on the matrix diag(S 0 , . . . , S 0 ), where
0
1
0
S =
.
1 0
For example, if R2n is equipped with an inner product h , i, then some authors define !(A)
as
X
!(A) =
hAei , ej i ei ^ ej ,
i<j
where A = (aij ). But then, hAei , ej i = aji and not aij , and this Pfaffian takes the value +1
on the matrix diag(S 0 , . . . , S 0 ). This version of the Pfaffian diers from our version by the
factor ( 1)n . In this respect, Madsen and Tornehave [74] seem to have an incorrect sign in
Proposition B6 of Appendix C.
695
We will also need another property of Pfaffians. Recall that the ring Mn (C) of n n
matrices over C is embedded in the ring M2n (R) of 2n 2n matrices with real coefficients,
using the injective homomorphism that maps every entry z = a + ib 2 C to the 2 2 matrix
a
b
.
b a
If A 2 Mn (C), let AR 2 M2n (R) denote the real matrix obtained by the above process.
>
Observe that every skew Hermitian matrix A 2 u(n) (i.e., with A = A = A) yields a
matrix AR 2 so(2n).
Proposition 23.37. For every skew Hermitian matrix A 2 u(n), we have
Pf(AR ) = in det(A).
Proof. It is well-known that a skew Hermitian matrix can be diagonalized with respect to a
unitary matrix U and that the eigenvalues are pure imaginary or zero, so we can write
A = U diag(ia1 , . . . , ian )U ,
for some reals aj 2 R. Consequently, we get
AR = UR diag(D1 , . . . , Dn )UR> ,
where
Dj =
and
0
aj
aj
0
as claimed.
Madsen and Tornehave [74] state Proposition 23.37 using the factor ( i)n , which is
wrong.
696
Chapter 24
Introduction to Modules; Modules
over a PID
24.1
In this chapter, we introduce modules over a commutative ring (with unity). After a quick
overview of fundamental concepts such as free modules, torsion modules, and some basic
results about them, we focus on finitely generated modules over a PID and we prove the
structure theorems for this class of modules (invariant factors and elementary divisors). Our
main goal is not to give a comprehensive exposition of modules, but instead to apply the
structure theorem to the K[X]-module Ef defined by a linear map f acting on a finitedimensional vector space E, and to obtain several normal forms for f , including the rational
canonical form.
A module is the generalization of a vector space E over a field K obtained replacing
the field K by a commutative ring A (with unity 1). Although formally, the definition is
the same, the fact that some nonzero elements of A are not invertible has some serious
conequences. For example, it is possible that u = 0 for some nonzero 2 A and some
nonzero u 2 E, and a module may no longer have a basis.
For the sake of completeness, we give the definition of a module, although it is the same
as Definition 2.9 with the field K replaced by a ring A. In this chapter, all rings under
consideration are assumed to be commutative and to have an identity element 1.
Definition 24.1. Given a ring A, a (left) module over A (or A-module) is a set M (of vectors)
together with two operations + : M M ! M (called vector addition),1 and : A M ! M
(called scalar multiplication) satisfying the following conditions for all , 2 A and all
u, v 2 M ;
(M0) M is an abelian group w.r.t. +, with identity element 0;
1
The symbol + is overloaded, since it denotes both addition in the ring A and addition of vectors in M .
It is usually clear from the context which + is intended.
697
698
(M1) (u + v) = ( u) + ( v);
(M2) ( + ) u = ( u) + ( u);
(M3) ( ) u = ( u);
(M4) 1 u = u.
n>0
nu=
( n) u,
n < 0.
All definitions from Section 2.3, linear combinations, linear independence and linear
dependence, subspaces renamed as submodules, apply unchanged to modules. Proposition
2.7 also holds for the module spanned by a set of vectors. The definition of a basis (Definition
2.12) also applies to modules, but the only result from Section 2.4 that holds for modules
is Proposition 2.14. Unfortunately, it is longer true that every module has a basis. For
example, for any nonzero integer m 2 Z, the Z-module Z/mZ has no basis. Similarly, Q,
as a Z-module, has no basis. In fact, any two distinct nonzero elements p1 /q1 and p2 /q2 are
linearly dependent, since
p1
p2
(p2 q1 )
(p1 q2 )
= 0.
q1
q2
Definition 2.13 can be generalized to rings and yields free modules.
Definition 24.2. Given a commutative ring A and any (nonempty) set I, let A(I) be the
subset of the cartesian product AI consisting of all families ( i )i2I with finite support of
scalars in A.2 We define addition and multiplication by a scalar as follows:
( i )i2I + (i )i2I = (
2
+ i )i2I ,
699
and
(i )i2I = ( i )i2I .
It is immediately verified that addition and multiplication by a scalar are well defined.
Thus, A(I) is a module. Furthermore, because families with finite support are considered, the
family (ei )i2I of vectors ei , defined such that (ei )j = 0 if j 6= i and (ei )i = 1, is clearly a basis
of the module A(I) . When I = {1, . . . , n}, we denote A(I) by An . The function : I ! A(I) ,
such that (i) = ei for every i 2 I, is clearly an injection.
Definition 24.3. An A-module M is free i it has a basis.
The module A(I) is a free module.
All definitions from Section 2.5 apply to modules, linear maps, kernel, image, except the
definition of rank, which has to be defined dierently. Propositions 2.15, 2.16, 2.17, and
2.18 hold for modules. However, the other propositions do not generalize to modules. The
definition of an isomorphism generalizes to modules. As a consequence, a module is free i
it is isomorphic to a module of the form A(I) .
Section 2.6 generalizes to modules. Given a submodule N of a module M , we can define
the quotient module M/N .
If a is an ideal in A and if M is an A-module, we define aM as the set of finite sums of
the form
a1 m1 + + ak mk , ai 2 a, mi 2 M.
It is immediately verified that aM is a submodule of M .
Interestingly, the part of Theorem 2.13 that asserts that any two bases of a vector space
have the same cardinality holds for modules. One way to prove this fact is to pass to a
vector space by a quotient process.
Theorem 24.1. For any free module M , any two bases of M have the same cardinality.
Proof sketch. We give the argument for finite bases, but it also holds for infinite bases. The
trick is to pick any maximal ideal m in A (whose existence is guaranteed by Theorem 31.3).
Then, A/m is a field, and M/mM can be made into a vector space over A/m; we leave the
details as an exercise. If (u1 , . . . , un ) is a basis of M , then it is easy to see that the image of
this basis is a basis of the vector space M/mM . By Theorem 2.13, the number n of elements
in any basis of M/mM is an invariant, so any two bases of M must have the same number
of elements.
The common number of elements in any basis of a free module is called the dimension
(or rank ) of the free module.
One should realize that the notion of linear independence in a module is a little tricky.
According to the definition, the one-element sequence (u) consisting of a single nonzero
700
vector is linearly independent if for all 2 A, if u = 0 then = 0. However, there are free
modules that contain nonzero vectors that are not linearly independent! For example, the
ring A = Z/6Z viewed as a module over itself has the basis (1), but the zero-divisors, such
as 2 or 4, are not linearly independent. Using language introduced in Definition 24.4, a free
module may have torsion elements. There are also nonfree modules such that every nonzero
vector is linearly independent, such as Q over Z.
All definitions from Section 3.1 about matrices apply to free modules, and so do all the
proposition. Similarly, all definitions from Section 4.1 about direct sums and direct products
apply to modules. All propositions that do not involve extending bases still hold. The
important proposition 4.10 survives in the following form.
Proposition 24.2. Let f : E ! F be a surjective linear between two A-modules with F a
free module. Given any basis (v1 , . . . , vr ) of F , for any r vectors u1 , . . . , ur 2 E such that
f (ui ) = vi for i = 1, . . . , r, the vectors (u1 , . . . , ur ) are linearly independent and the module
E is the direct sum
E = Ker (f ) U,
where U is the free submodule of E spanned by the basis (u1 , . . . , ur ).
Proof. Pick any w 2 E, write f (w) over the basis (v1 , . . . , vr ) as f (w) = a1 v1 + + ar vr ,
and let u = a1 u1 + + ar ur . Observe that
f (w
Therefore, h = w
E = Ker (f ) + U .
u) = f (w) f (u)
= a1 v 1 + + ar v r
= a1 v 1 + + ar v r
= 0.
701
Um ,
Z/2Z
where Z and Z/2Z are view as Z-modules, but (1, 0) and (0, 1) are not linearly independent,
since
2(1, 0) + 2(0, 1) = (0, 0).
A useful fact is that every module is a quotient of some free module. Indeed, if M is
an A-module, pick any spanning set I for M (such a set exists, for example, I = M ), and
consider the unique homomorphism ' : A(I) ! M extending the identity function from I to
itself. Then we have an isomorphism A(I) /Ker (') M .
In particular, if M is finitely generated, we can pick I to be a finite set of generators, in
which case we get an isomorphism An /Ker (') M , for some natural number n. A finitely
generated module is sometimes called a module of finite type.
The case n = 1 is of particular interest. A module M is said to be cyclic if it is generated
by a single element. In this case M = Ax, for some x 2 M . We have the linear map
mx : A ! M given by a 7! ax for every a 2 A, and it is obviously surjective since M = Ax.
Since the kernel a = Ker (mx ) of mx is an ideal in A, we get an isomorphism A/a Ax.
Conversely, for any ideal a of A, if M = A/a, we see that M is generated by the image x of
1 in M , so M is a cyclic module.
The ideal a = Ker (mx ) is the set of all a 2 A such that ax = 0. This is called the
annihilator of x, and it is the special case of the following more general situation.
Definition 24.4. If M is any A-module, for any subset S of M , the set of all a 2 A such
that ax = 0 for all x 2 S is called the annihilator of S, and it is denoted by Ann(S). If
S = {x}, we write Ann(x) instead of Ann({x}). A nonzero element x 2 M is called a torsion
element i Ann(x) 6= (0). The set consisting of all torsion elements in M and 0 is denoted
by Mtor .
It is immediately verified that Ann(S) is an ideal of A, and by definition,
Mtor = {x 2 M | (9a 2 A, a 6= 0)(ax = 0)}.
702
If a ring has zero divisors, then the set of all torsion elements in an A-module M may not
be a submodule of M . For example, if M = A = Z/6Z, then Mtor = {2, 3, 4}, but 3 + 4 = 1
is not a torsion element. Also, a free module may not be torsion-free because there may be
torsion elements, as the example of Z/6Z as a free module over itself shows.
However, if A is an integral domain, then a free module is torsion-free and Mtor is a
submodule of M . (Recall that an integral domain is commutative).
Proposition 24.3. If A is an integral domain, then for any A-module M , the set Mtor of
torsion elements in M is a submodule of M .
Proof. If x, y 2 M are torsion elements (x, y 6= 0), then there exist some nonzero elements
a, b 2 A such that ax = 0 and by = 0. Since A is an integral domain, ab 6= 0, and then for
all , 2 A, we have
ab( x + y) = b ax + aby = 0.
Therefore, Mtor is a submodule of M .
The module Mtor is called the torsion submodule of M . If Mtor = (0), then we say that
M is torsion-free, and if M = Mtor , then we say that M is a torsion module.
If M is not finitely generated, then it is possible that Mtor 6= 0, yet the annihilator of
Mtor is reduced to 0 (find an example). However, if M is finitely generated, this cannot
happen, since if x1 , . . . , xn generate M and if a1 , . . . , an annihilate x1 , . . . , xn , then a1 an
annihilates every element of M .
Proposition 24.4. If A is an integral domain, then for any A-module M , the quotient
module M/Mtor is torsion free.
Proof. Let x be an element of M/Mtor and assume that ax = 0 for some a 6= 0 in A. This
means that ax 2 Mtor , so there is some b 6= 0 in A such that bax = 0. Since a, b 6= 0 and A
is an integral domain, ba 6= 0, so x 2 Mtor , which means that x = 0.
If A is an integral domain and if F is a free A-module with basis (u1 , . . . , un ), then F
can be embedded in a K-vector space FK isomorphic to K n , where K = Frac(A) is the
fraction field of A. Similarly, any submodule M of F is embedded into a subspace MK of
FK . Note that any linearly independent vectors (u1 , . . . , um ) in the A-module M remain
linearly independent in the vector space MK , because any linear dependence over K is of
the form
a1
am
u1 + +
um = 0
b1
bm
for some ai , bi 2 A, with b1 bm 6= 0, so if we multiply by b1 bm 6= 0, we get a linear dependence in the A-module M . Then, we see that the maximum number of linearly
independent vectors in the A-module M is at most n. The maximum number of linearly
independent vectors in a finitely generated submodule of a free module (over an integral
domain) is called the rank of the module M . If (u1 , . . . , um ) are linearly independent where
703
m is the rank of m, then for every nonzero v 2 M , there are some a, a1 , . . . , am 2 A, not all
zero, such that
av = a1 u1 + + am um .
We must have a 6= 0, since otherwise, linear independence of the ui would imply that
a1 = = am = 0, contradicting the fact that a, a1 , . . . , am 2 A are not all zero.
w = v1 + ar+1 ur+1 2 M.
For any x 2 Mr+1 , there is some v 2 Au1 Aur and some a 2 A such that x = v +aur+1 .
Then, a 2 ar+1 A, so there is some b 2 A such that a = bar+1 . As a consequence
x
bw = v
bv1 2 Mr ,
704
and so x = x
bw + bw with x
bv1
contradicting the fact that (u1 , . . . , ur+1 ) are linearly independent. Therefore,
Mr+1 = Mr
Aw,
We can also prove that a finitely generated torsion-free module over a PID is actually
free. We will give another proof of this fact later, but the following proof is instructive.
Proposition 24.6. If A is a PID and if M is a finitely generated module which is torsionfree, then M is free.
Proof. Let (y1 , . . . , yn ) be some generators for M , and let (u1 , . . . , um ) be a maximal subsequence of (y1 , . . . , yn ) which is linearly independent. If m = n, we are done. Otherwise,
due to the maximality of m, for i = 1, . . . , n, there is some ai 6= 0 such that such that
ai yi can be expressed as a linear combination of (u1 , . . . , um ). If we let a = a1 . . . an , then
a1 . . . an yi 2 Au1 Aum for i = 1, . . . , n, which shows that
aM Au1
Aum .
705
Theorem 24.7. Let M be a finitely generated module over a PID. Then M/Mtor is free,
and there exit a free submodule F of M such that M is the direct sum
M = Mtor
F.
F = Mtor
F.
24.2
Since modules are generally not free, it is natural to look for techniques for dealing with
nonfree modules. The hint is that if M is an A-module and if (ui )i2I is any set of generators
for M , then we know that there is a surjective homomorphism ' : A(I) ! M from the free
module A(I) generated by I onto M . Furthermore M is isomorphic to A(I) /Ker ('). Then,
we can pick a set of generators (vj )j2J for Ker ('), and again there is a surjective map
: A(J) ! Ker (') from the free module A(J) generated by J onto Ker ('). The map can
be viewed a linear map from A(J) to A(I) , we have
Im( ) = Ker ('),
and ' is surjective. Note that M is isomorphic to A(I) /Im( ). In such a situation we say
that we have an exact sequence and this is denoted by the diagram
A(J)
A(I)
/
'
M
/
/ 0.
A(I)
'
M
/
706
2. ' is surjective.
Consequently, M is isomorphic to A(I) /Im( ). If I and J are both finite, we say that this is
a finite presentation of M .
Observe that in the case of a finite presentation, I and J are finite, and if |J| = n and
|I| = m, then is a linear map : An ! Am , so it is given by some m n matrix R with
coefficients in A called the presentation matrix of M . Every column Rj of R may thought
of as a relation
aj1 e1 + + ajm em = 0
2
1
R=
,
1 2
presenting the module M , then we have the relations
2e1 + e2 = 0
e1 + 2e2 = 0.
From the first equation, we get e2 =
It follows that the generator e2 can be eliminated and M is generated by the single generator
e1 satisfying the relation
5e1 = 0,
which shows that M Z/5Z.
The above example shows that many dierent matrices can present the same module.
Here are some useful rules for manipulating a relation matrix without changing the isomorphism class of the module M it presents.
707
708
2 1 6
.
0 2 4
2 1 6
.
4 0
8
After deleting column 2 and row 1, we get
4
8 .
709
Proof. (1) Pick any w 2 E, write f (w) over the generators (v1 , . . . , vr ) of Im(f ) as f (w) =
a1 v1 + + ar vr , and let u = a1 u1 + + ar ur . Observe that
f (w
u) = f (w) f (u)
= a1 v 1 + + ar v r
= a1 v 1 + + ar v r
= 0.
710
where D is a diagonal matrix. It follows from Proposition 24.8 that every finitely generated
module M over a PID has a presentation with m generators and r relations of the form
i ei = 0,
where i 6= 0 and 1 | 2 | | r , which shows that M is isomorphic to the direct sum
M Am
A/(1 A)
A/(r A).
24.3
It is possible to define tensor products of modules over a ring, just as in Section 23.1, and the
results of this section continue to hold. The results of Section 23.3 also continue to hold since
they are based on the universal mapping property. However, the results of Section 23.2 on
bases generally fail, except for free modules. Similarly, the results of Section 23.4 on duality
generally fail. Tensor algebras can be defined for modules, as in Section 23.5. Symmetric
tensor and alternating tensors can be defined for modules but again, results involving bases
generally fail.
Tensor products of modules have some unexpected properties. For example, if p and q
are relatively prime integers, then
Z/pZ Z Z/qZ = (0).
This is because, by Bezouts identity, there are a, b 2 Z such that
ap + bq = 1,
711
Q.
Given any A-module, M , we let M = HomA (M, A) be its dual . We have the following
proposition:
Proposition 24.11. For any finitely-generated projective A-modules, P , and any A-module,
Q, we have the isomorphisms:
P
HomA (P, Q)
= P
= P A Q.
Proof sketch. We only consider the second isomorphism. Since P is projective, we have some
A-modules, P1 , F , with
P P1 = F,
where F is some free module. Now, we know that for any A-modules, U, V, W , we have
Y
HomA (U V, W )
HomA (V, W )
= HomA (U, W )
= HomA (U, W ) HomA (V, W ),
so
P1
= F ,
HomA (P, Q)
HomA (P1 , Q)
= HomA (F, Q).
By tensoring with Q and using the fact that tensor distributes w.r.t. coproducts, we get
(P A Q)
(P1 Q)
= (P
P1 ) A Q
= F A Q.
Now, the proof of Proposition 23.9 goes through because F is free and finitely generated, so
: (P A Q)
(P1 Q)
= F A Q ! HomA (F, Q)
= HomA (P, Q)
HomA (P1 , Q)
712
u 2 P , f 2 Q, x 2 P.
Mi
M
j2I
Mj
(i,j)2IJ
(Mi Nj ).
Avn .
Avn ) (M Av1 )
(M Avn ).
(M A) M
M = M n,
as claimed.
Proposition 24.13 also holds for an infinite basis (vj )j2J of N . Obviously, a version of
Proposition 24.13 also holds if M is free and N is arbitrary.
The next proposition will be also be needed.
713
Proposition 24.14. Given any A-module M and any ideal a in A, there is an isomorphism
(A/a) A M M/aM
given by the map (a u) 7! au (mod aM ), for all a 2 A/a and all u 2 M .
Sketch of proof. Consider the map ' : (A/a) M ! M/aM given by
'(a, u) = au
(mod aM )
for all a 2 A/a and all u 2 M . It is immediately checked that ' is well-defined because au
(mod aM ) does not depend on the representative a 2 A chosen in the equivalence class a,
and ' is bilinear. Therefore, ' induces a linear map ' : (A/a) M ! M/aM , such that
'(a u) = au (mod aM ). We also define the map : M ! (A/a) M by
(u) = 1 u.
Since aM is generated by vectors of the form au with a 2 a and u 2 M , and since
(au) = 1 au = a u = 0 u = 0,
we see that aM Ker ( ), so
24.4
The need to extend the ring of scalars arises, in particular when dealing with eigenvalues.
First, we need to define how to restrict scalar multiplication to a subring. The situation is
that we have two rings A and B, a B-module M , and a ring homomorphism : A ! B. The
special case that arises often is that A is a subring of B (B could be a field) and is the
inclusion map. Then, we can make M into an A-module by defining the scalar multiplication
: A M ! M as follows:
a x = (a)x,
714
The map is bilinear so it induces a linear map : (B) A M ! (B) A M such that
(
x) = (
) x.
for all
then it is easy to check that the axioms M1, M2, M3, M4 hold. Let us check M2 and M3.
We have
1+ 2
x) = ( 1 + 2 ) 0 x
= ( 1 0 + 2 0) x
= 1 0x+ 2 0x
= 1 ( 0 x) + 2 ( 0 x)
and
1 2
x) = 1 2 0 x
= 1 ( 2 0 x)
= 1 ( 2 ( 0 x)).
x) = (
) x,
' = f,
or equivalently,
f (1 x) = f (x),
for all x 2 M .
715
'M
(M )
/
'N
(N )
f.
716
free, any basis (u1 , . . . , un ) of M becomes the basis ('(u1 ), . . . , '(un )) of (M ); but A/m is
a field, so the dimension n is uniquely determined. This argument also applies to an infinite
basis (ui )i2I . Observe that by Proposition 24.14, we have an isomorphism
(M ) = (A/m) A M M/mM,
so M/mM is a vector space over the field A/m, which is the argument used in Theorem 24.1.
Proposition 24.18. Given a ring homomomorphism : A ! B, for any two A-modules M
and N , there is a unique isomorphism
(M ) B (N ) (M A N ),
such that (1 u) (1 v) 7! 1 (u v), for all u 2 M and all v 2 N .
The proof uses identities from Proposition 23.7. It is not hard but it requires a little
gymnastic; a good exercise for the reader.
24.5
We saw in Section 5.7 that given a linear map f : E ! E from a K-vector space E into itself,
we can define a scalar multiplication : K[X] E ! E that makes E into a K]X]-module.
If E is finite-dimensional, this K[X]-module denoted by Ef is a torsion module, and the
main results of this chapter yield important direct sum decompositions of E into subspaces
invariant under f .
Recall that given any polynomial p(X) = a0 X n + a1 X n
the field K, we define the linear map p(f ) : E ! E by
p(f ) = a0 f n + a1 f n
where f k = f
+ + an with coefficients in
+ + an id,
717
It is easy to verify that this scalar multiplication satisfies the axioms M1, M2, M3, M4:
p (u + v) = p u + p v
(p + q) u = p u + q u
(pq) u = p (q u)
1 u = u,
for all p, q 2 K[X] and all u, v 2 E. Thus, with this new scalar multiplication, E is a
K[X]-module denoted by Ef .
If p =
which means that K acts on E by scalar multiplication as before. If p(X) = X (the monomial
X), then
X u = f (u).
Since K is a field, the ring K[X] is a PID.
If E is finite-dimensional, say of dimension n, since K is a subring of K[X] and since E is
finitely generated over K, the K[X]-module Ef is finitely generated over K[X]. Furthermore,
Ef is a torsion module. This follows from the Cayley-Hamilton Theorem (Theorem 5.16),
but this can also be shown in an elementary fashion as follows. The space Hom(E, E) of
linear maps of E into itself is a vector space of dimension n2 , therefore the n2 + 1 linear maps
id, f, f 2 , . . . , f n
are linearly dependent, which yields a nonzero polynomial q such that q(f ) = 0.
We can now translate notions defined for modules into notions for endomorphisms of
vector spaces.
1. To say that U is a submodule of Ef means that U is a subspace of E invariant under
f ; that is, f (U ) U .
2. To say that V is a cyclic submodule of Ef means that there is some vector u 2 V , such
that V is spanned by (u, f (u), . . . , f k (u), . . .). If E has finite dimension n, then V is
spanned by (u, f (u), . . . , f k (u)) for some k n 1. We say that V is a cyclic subspace
for f with generator u. Sometimes, V is denoted by Z(u; f ).
3. To say that the ideal a = (p(X)) (with p(X) a monic polynomial) is the annihilator
of the submodule V means that p(f )(u) = 0 for all u 2 V , and we call p the minimal
polynomial of V .
718
+ + a1 X + a0 .
Then, there is some vector u such that (u, f (u), . . . , f k (u)) span Ef , and because q is
the minimal polynomial of Ef , we must have k = n 1. The fact that q(f ) = 0 implies
that
f n (u) = a0 u a1 f (u) an 1 f n 1 (u),
and so f is represented by the following matrix known as the companion matrix of
q(X):
0
1
0 0 0 0
a0
B1 0 0 0
a1 C
B
C
B0 1 0 0
C
a
2
B.
C
.
.
U = B. .. .. .. .
C.
.
.
.
.
.
.
.
B
C
B
C
@0 0 0 . . . 0
an 2 A
0 0 0 1
an 1
It is an easy exercise to prove that the characteristic polynomial
back q(X):
U (X) = q(X).
U (X)
of U gives
We will need the following proposition to characterize when two linear maps are similar.
Proposition 24.19. Let f : E ! E and f 0 : E 0 ! E 0 be two linear maps over the vector
spaces E and E 0 . A linear map g : E ! E 0 can be viewed as a linear map between the
K[X]-modules Ef and Ef 0 i
g f = f 0 g.
Proof. First, suppose g is K[X]-linear. Then, we have
g(p f u) = p f 0 g(u)
for all p 2 K[X] and all u 2 E, so for p = X we get
g(p f u) = g(X f u) = g(f (u))
and
p f 0 g(u) = X f 0 g(u) = f 0 (g(u)),
for all n
1.
719
Indeed, we have
g f n+1 = g f n f
= f 0n g f
= f 0n f 0 g
= f 0n+1 g,
establishing the induction step. It follows that for any polynomial p(X) =
have
g(p(X) f u) = g
=
n
X
ak f (u)
k=0
n
X
Pn
k=0
ak X k , we
ak g f k (u)
k=0
n
X
ak f 0k
g(u)
k=0
n
X
ak f
0k
k=0
(g(u))
= p(X) f 0 g(u),
so, g is indeed K[X]-linear.
Definition 24.7. We say that the linear maps f : E ! E and f 0 : E 0 ! E 0 are similar i
there is an isomorphism g : E ! E 0 such that
f0 = g f
g 1,
or equivalently,
g f = f 0 g.
Then, Proposition 24.19 shows the following fact:
Proposition 24.20. With notation of Proposition 24.19, two linear maps f and f 0 are
similar i g is an isomorphism between Ef and Ef0 0 .
Later on, we will see that the isomorphism of finitely generated torsion modules can be
characterized in terms of invariant factors, and this will be translated into a characterization of similarity of linear maps in terms of so-called similarity invariants. If f and f 0 are
represented by matrices A and A0 over bases of E and E 0 , then f and f 0 are similar i the
matrices A and A0 are similar (there is an invertible matrix P such that A0 = P AP 1 ).
Similar matrices (and endomorphisms) have the same characteristic polynomial.
720
It turns out that there is a useful relationship between Ef and the module K[X] K E.
Observe that the map : K[X] E ! E given by
p u = p(f )(u)
is K-bilinear, so it yields a K-linear map
(p u) = p u = p(f )(u).
We know from Section 24.4 that K[X] K E is a K[X]-module (obtained from the inclusion
K K[X]), which we will denote by E[X]. Since E is a vector space, E[X] is a free
K[X]-module, and if (u1 , . . . , un ) is a basis of E, then (1 u1 , . . . , 1 un ) is a basis of E[X].
The free K[X]-module E[X] is not as complicated as it looks. Over the basis
(1 u1 , . . . , 1 un ), every element z 2 E[X] can be written uniquely as
z = p1 (1 u1 ) + + pn (1 un ) = p1 u1 + + pn un ,
where p1 , . . . , pn are polynomials in K[X]. For notational simplicity, we may write
z = p1 u1 + + pn un ,
where p1 , . . . , pn are viewed as coefficients in K[X]. With this notation, we see that E[X] is
isomorphic to (K[X])n , which is easy to understand.
Observe that
is K[X]-linear, because
(q(p u)) = ((qp) u)
= (qp) u
= q(f )(p(f )(u))
= q (p(f )(u))
= q (p u).
721
where we used the fact that f p(f ) = p(f )f because p(f ) is a polynomial in f . By Proposition
24.16, the linear map f : E ! E induces a K[X]-linear map f : E[X] ! E[X] such that
f (p u) = p f (u).
Observe that we have
f ( (p u)) = f (p(f )(u)) = p(f )(f (u))
and
so we get
f =f
()
(p u) = (Xp) u
Observe that
= X1E[X]
p f (u).
f , which we abbreviate as X1
It should be noted that everything we did in Section 24.5 applies to modules over a
commutative ring A, except for the statements that assume that A[X] is a PID. So, if M
is an A-module, we can define the A[X]-modules Mf and M [X] = A[X] A M , except that
Mf is generally not a torsion module, and all the results showed above hold. Then, we have
the following remarkable result.
Theorem 24.21. (The Characteristic Sequence) Let A be a ring and let E be an A-module.
The following sequence of A[X]-linear maps is exact:
0
/
E[X]
/
E[X]
/
Ef
/
0.
This means that is injective, is surjective, and that Im( ) = Ker ( ). As a consequence,
Ef is isomorphic to the quotient of E[X] by Im(X1 f ).
Proof. Because (1 u) = u for all u 2 E, the map
is surjective. We have
(X(p u)) = X (p u)
= f ( (p u)),
which shows that
X1 = f
f,
722
(X1
X1
f)
f
f = 0,
Since the monomials X k form a basis of A[X], by Proposition 24.13 (with the roles of M
and N exchanged), every z 2 E[X] = A[X] A E has a unique expression as
X
z=
X k uk ,
k
X
k
X
k
X
k
X
k
X k uk
10
X uk
(X k uk
1 f k (uk ))
(X k (1 uk )
(X k 1
f (uk )
f (1 uk ))
f )(1 uk ).
(f
so we can write
k
X 1
f = (X1
f)
X
k 1
j=0
(X1) f
k j 1
f)
XX
k 1
(X1) f
k j 1
j=0
723
(1 uk ) ,
k
(z) =
X uk
k
= (X1
=
f)
X
k
k+1
X uk
(uk
f (uk+1 )),
f (uk+1 ) = 0,
for all k.
Since (uk ) has finite support, there is a largest k, say m + 1 so that um+1 = 0, and then from
uk = f (uk+1 ),
we deduce that uk = 0 for all k. Therefore, z = 0, and
is injective.
= det(X1
f ).
Note that to have a correct definition, we need to define the determinant of a linear map
allowing the indeterminate X as a scalar, and this is what the definition of M [X] achieves
(among other things). Theorem 24.21 can be used to quick a short proof of the CayleyHamilton Theorem, see Bourbaki [14] (Chapter III, Section 8, Proposition 20). Proposition
5.10 is still the crucial ingredient of the proof.
We now develop the theory necessary to understand the structure of finitely generated
modules over a PID.
724
24.6
We begin by considering modules over a product ring obtained from a direct decomposition,
as in Definition 21.3. In this section and the next, we closely follow Bourbaki [15] (Chapter
VII). Let A be a commutative ring and let (b1 , . . . , bn ) be ideals in A such that there is
an isomorphism A A/b1 A/bn . From Theorem 21.16 part (b), there exist some
elements e1 , . . . , en of A such that
e2i = ei
ei ej = 0, i 6= j
e 1 + + e n = 1A ,
and bi = (1A
ei )A, for i, j = 1, . . . , n.
x 2 M.
The map pi is clearly linear, and because of the properties satisfied by the ei s, we have
p2i = pi
pi pj = 0, i 6= j
p1 + + pn = id.
This shows that the pi are projections, and by Proposition 4.6 (which also holds for modules),
we have a direct sum
M = p1 (M )
pn (M ) = e1 M
en M.
725
M (pnr r ).
i x,
for some
2 A,
Proof. First, observe that since M () is annihilated by , we can view M () as a A/()module. By the Chinese Remainder Theorem (Theorem 21.15) applied to the ideals (upn1 1 ) =
(pn1 1 ), (pn2 2 ), . . . , (pnr r ), we have an isomorphism
A/() A/(pn1 1 ) A/(pnr r ).
Since we also have isomorphisms
A/(pni i ) (A/())/((pni i )/()),
726
M () = N1
Nr ,
for all a 2 A
{0},
Recall that if M is a torsion module over a ring A which is an integral domain, then
every finite set of elements x1 , . . . , xn in M is annihilated by a = a1 an , where each ai
annihilates xi .
Since A is a PID, we can pick a set P of irreducible elements of A such that every nonzero
nonunit of A has a unique factorization up to a unit. Then, we have the following structure
theorem for torsion modules which holds even for modules that are not finitely generated.
Theorem 24.24. (Primary Decomposition Theorem) Let M be a torsion-module over a
PID. For every irreducible element p 2 P , let Mp be the submodule of M annihilated by
some power of p. Then, M is the (possibly infinite) direct sum
M
M=
Mp .
p2P
M (pnr r ).
X
p2P
xp ,
xp 2 M p ,
727
xp =
p2P
yp
p2P
for all p 2 P , with only finitely many xp and yp nonzero, then xp and yp are annihilated by
some common nonzero element a 2 A, so xp , yp 2 M (a). By Proposition 24.23, we must
have xp = yp for all p, which proves that we have a direct sum.
It is clear that if p and p0 are two irreducible elements such that p = up0 for some unit u,
then Mp = Mp0 . Therefore, Mp only depends on the ideal (p).
Definition 24.9. Given a torsion-module M over a PID, the modules Mp associated with
irreducible elements in P are called the p-primary components of M .
The p-primary components of a torsion module uniquely determine the module, as shown
by the next proposition.
Proposition 24.25. Two torsion modules M and N over a PID are isomorphic i for
every every irreducible element p 2 P , the p-primary components Mp and Np of M and N
are isomorphic.
Proof. Let f : M ! N be an isomorphism. For any p 2 P , we have x 2 Mp i pk x = 0 for
some k 1, so
0 = f (pk x) = pk f (x),
which shows that f (x) 2 Np . Therefore, f restricts to a linear map f | Mp from Mp to
Np . Since f is an isomorphism, we also have a linear map f 1 : M ! N , and our previous
reasoning shows that f 1 restricts to a linear map f 1 | Np from Np to Mp . But, f | Mp and
f 1 | Np are mutual inverses, so Mp and Np are isomorphic.
Conversely,
if Mp NL
p for all p 2 P , by Theorem 24.24, we get an isomorphism between
L
M = p2P Mp and N = p2P Np .
In view of Proposition 24.25, the direct sum of Theorem 24.24 in terms of its p-primary
components is called the canonical primary decomposition of M .
If M is a finitely generated torsion-module, then Theorem 24.24 takes the following form.
Theorem 24.26. (Primary Decomposition Theorem for finitely generated torsion modules)
Let M be a finitely generated torsion-module over a PID A. If Ann(M ) = (a) and if a =
upn1 1 pnr r is a factorization of a into prime factors, then M is the finite direct sum
M=
r
M
M (pni i ).
i=1
i x,
for some
2 A.
728
N 0 i, by
N,
pa)x,
pa 6= 0. Observe that
Since p(1 ap) 6= 0, x is a torsion element, and thus M is a torsion module. The above
argument shows that
p(1 ap)x = 0,
729
for all n
1.
24.7
There are several ways of obtaining the decomposition of a finitely generated module as a
direct sum of cyclic modules. One way to proceed is to first use the Primary Decomposition
Theorem and then to show how each primary module Mp is the direct sum of cyclic modules of
the form A/(pn ). This is the approach followed by Lang [67] (Chapter III, section 7), among
others. We prefer to use a proposition that produces a particular basis for a submodule of
a finitely generated free module, because it yields more information. This is the approach
followed in Dummitt and Foote [32] (Chapter 12) and Bourbaki [15] (Chapter VII). The
proof that we present is due to Pierre Samuel.
Proposition 24.30. Let F be a finitely generated free module over a PID A, and let M be
any submodule of F . Then, M is a free module and there is a basis (e1 , ..., en ) of F , some
q n, and some nonzero elements a1 , . . . , aq 2 A, such that (a1 e1 , . . . , aq eq ) is a basis of M
and ai divides ai+1 for all i, with 1 i q 1.
Proof. The proposition is trivial when M = {0}, thus assume that M is nontrivial. Pick some
basis (u1 , . . . , un ) for F . Let L(F, A) be the set of linear forms on F . For any f 2 L(F, A),
it is immediately verified that f (M ) is an ideal in A. Thus, f (M ) = ah A, for some ah 2 A,
since every ideal in A is a principal ideal. Since A is a PID, any nonempty family of ideals
in A has a maximal element, so let f be a linear map such that ah A is a maximal ideal in A.
Let i : F ! A be the i-th projection, i.e., i is defined such that i (x1 u1 + + xn un ) = xi .
730
It is clear that i is a linear map, and since M is nontrivial, one of the i (M ) is nontrivial,
and ah 6= 0. There is some e0 2 M such that f (e0 ) = ah .
We claim that, for every g 2 L(F, A), the element ah 2 A divides g(e0 ).
Indeed, if d is the gcd of ah and g(e0 ), by the Bezout identity, we can write
d = rah + sg(e0 ),
for some r, s 2 A, and thus
d = rf (e0 ) + sg(e0 ) = (rf + sg)(e0 ).
However, rf + sg 2 L(F, A), and thus,
ah A dA (rf + sg)(M ),
since d divides ah , and by maximality of ah A, we must have ah A = dA, which implies that
d = ah , and thus, ah divides g(e0 ). In particular, ah divides each i (e0 ) and let i (e0 ) = ah bi ,
with bi 2 A.
Let e = b1 u1 + + bn un . Note that
(0)
and
M = Ae0
with e0 = ah e.
(M \ f
(0)),
f (x)e),
and since f (e) = 1, we have f (x f (x)e) = f (x) f (x)f (e) = f (x) f (x) = 0. Thus,
F = Ae + f 1 (0). Similarly, for any x 2 M , we have f (x) = rah , for some r 2 A, and thus,
x = f (x)e + (x
f (x)e) = rah e + (x
f (x)e) = re0 + (x
f (x)e = x
f (x)e),
rah e = x
re0 2 M , since
To prove that we have a direct sum, it is enough to prove that Ae \ f 1 (0) = {0}. For
any x = re 2 Ae, if f (x) = 0, then f (re) = rf (e) = r = 0, since f (e) = 1 and, thus, x = 0.
Therefore, the sums are direct sums.
731
We can now prove that M is a free module by induction on the size, q, of a maximal
linearly independent family for M .
If q = 0, the result is trivial. Otherwise, since
M = Ae0
(M \ f
(0)),
it is clear that M \ f 1 (0) is a submodule of F and that every maximal linearly independent
family in M \ f 1 (0) has at most q 1 elements. By the induction hypothesis, M \ f 1 (0)
is a free module, and by adding e0 to a basis of M \ f 1 (0), we obtain a basis for M , since
the sum is direct.
The second part is shown by induction on the dimension n of F .
The case n = 0 is trivial. Otherwise, since
F = Ae
(0),
and since, by the previous argument, f 1 (0) is also free, f 1 (0) has dimension n 1. By
the induction hypothesis applied to its submodule M \ f 1 (0), there is a basis (e2 , . . . , en )
of f 1 (0), some q n, and some nonzero elements a2 , . . . , aq 2 A, such that, (a2 e2 , . . . , aq eq )
is a basis of M \ f 1 (0), and ai divides ai+1 for all i, with 2 i q 1. Let e1 = e, and
a1 = ah , as above. It is clear that (e1 , . . . , en ) is a basis of F , and that that (a1 e1 , . . . , aq eq )
is a basis of M , since the sums are direct, and e0 = a1 e1 = ah e. It remains to show that a1
divides a2 . Consider the linear map g : F ! A such that g(e1 ) = g(e2 ) = 1, and g(ei ) = 0,
for all i, with 3 i n. We have ah = a1 = g(a1 e1 ) = g(e0 ) 2 g(M ), and thus ah A g(M ).
Since ah A is maximal, we must have g(M ) = ah A = a1 A. Since a2 = g(a2 e2 ) 2 g(M ), we
have a2 2 a1 A, which shows that a1 divides a2 .
We need the following basic proposition.
Proposition 24.31. For any commutative ring A, if F is a free A-module and if (e1 , . . . , en )
is a basis of F , for any elements a1 , . . . , an 2 A, there is an isomorphism
Proof. Let
F/(Aa1 e1
Aan en ) (A/a1 A)
: F ! A/(a1 A)
(A/an A).
(x1 e1 + + xn en ) = (x1 , . . . , xn ),
where xi is the equivalence class of xi in A/ai A. The map is clearly surjective, and its
kernel consists of all vectors x1 e1 + + xn en such that xi 2 ai A, for i = 1, . . . , n, which
means that
Ker ( ) = Aa1 e1 Aan en .
Since M/Ker ( ) is isomorphic to Im( ), we get the desired isomorphism.
732
We can now prove the existence part of the structure theorem for finitely generated
modules over a PID.
Theorem 24.32. Let M be a finitely generated nontrivial A-module, where A a PID. Then,
M is isomorphic to a direct sum of cyclic modules
M A/a1
A/am ,
(A/ar+1
A/am ),
A/an A.
Whenever ai is unit, the factor A/ai A = (0), so we can weed out the units. Let r = n q,
and let s 2 N be the smallest index such that as+1 is not a unit. Note that s = 0 means that
there are no units. Also, as M 6= (0), s < n. Then,
M An /Ker (') A/as+1 A
Let m = r + q
s=n
A/an A.
r=n q
where as+1 | as+2 | | aq are nonzero and nonunits and aq+1 = = an = 0, so we define
the m ideals ai as follows:
(
(0)
if 1 i r
ai =
ar+q+1 i A if r + 1 i m.
With these definitions, the ideals ai are proper ideals and we have
ai ai+1 ,
i = 1, . . . , m
1.
733
The natural number r is called the free rank or Betti number of the module M . The
generators 1 , . . . , m of the ideals a1 , . . . , am (defined up to a unit) are often called the
invariant factors of M (in the notation of Theorem 24.32, the generators of the ideals
a1 , . . . , am are denoted by aq , . . . , as+1 , s q).
As corollaries of Theorem 24.32, we obtain again the following facts established in Section
24.1:
1. A finitely generated module over a PID is the direct sum of its torsion module and a
free module.
2. A finitely generated torsion-free module over a PID is free.
It turns out that the ideals a1 a2 am 6= A are uniquely determined by the
module M . Uniqueness proofs found in most books tend to be intricate and not very intuitive.
The shortest proof that we are aware of is from Bourbaki [15] (Chapter VII, Section 4), and
uses wedge products.
The following preliminary results are needed.
Proposition 24.33. If A is a commutative ring and if a1 , . . . , am are ideals of A, then there
is an isomorphism
A/a1 A/am A/(a1 + + am ).
Sketch of proof. We proceed by induction on m. For m = 2, we define the map
' : A/a1 A/a2 ! A/(a1 + a2 ) by
'(a, b) = ab
(mod a1 + a2 ).
(mod a1 + a2 ).
It is also clear that this map is bilinear, so it induces a linear map ' : A/a1 A/a2 !
A/(a1 + a2 ) such that '(a b) = ab (mod a1 + a2 ).
Next, observe that any arbitrary tensor
a1 b 1 + + an b n
in A/a1 A/a2 can be rewritten as
1 (a1 b1 + + an bn ),
which is of the form 1 s, with s 2 A. We can use this fact to show that ' is injective and
surjective, and thus an isomorphism.
734
= 1 (a + b)
=1a+1b
=a1+1b
= 0 + 0 = 0,
since a 2 a1 and b 2 a2 , which proves injectivity.
Recall that the exterior algebra of an A-module M is defined by
^
M=
k
M^
(M ).
k 0
i=1
A proof can be found in Bourbaki [14] (Chapter III, Section 7, No 7, Proposition 10).
Proposition 24.35. Let A be a commutative ring and let a1 , . . . , an be n ideals of A. If the
module M is the direct sum of n cyclic modules
M = A/a1
A/an ,
V
then for every p > 0, the exterior power p M is isomorphic to the direct sum of the modules
A/aH , where H ranges over all subsets H {1, . . . , n} with p elements, and with
X
aH =
ah .
h2H
Proof. If ui is the image of 1 in A/ai , then A/ai is equal to Aui . By Proposition 24.34, we
have
n ^
^
O
M
(Aui ).
i=1
We also have
(Aui ) =
k
M^
k 0
(Aui ) A
Aui ,
735
H{1,...,n}
H={k1 ,...,kp }
(Auk1 ) (Aukp ).
p
^
A/aH ,
H{1,...,n}
|H|=p
as claimed.
When the ideals ai form a chain of inclusions a1 an , we get the following
remarkable result.
Proposition 24.36. Let A be a commutative ring and let a1 , . . . , an be n ideals of A such
that a1 a2 an . If the module M is the direct sum of n cyclic modules
M = A/a1
A/an ,
Vp
M.
Proof. With the notation of Proposition 24.35, we have aH = amax(H) , where max(H) is the
greatest element in the set H. Since max(H) p for any subset with p elements and since
max(H) = p when H = {1, . . . , p}, we see that
\
ap =
aH .
H{1,...,n}
|H|=p
A/aH
H{1,...,n}
|H|=p
Vp
736
A/am A/a01
A/a0n ,
A/am ,
(A/ar+1
A/am ),
A/aq /A.
737
6= 0)( x 2 M )},
which shows that M 0 /M is the torsion module of F/M . Therefore, M 0 is uniquely determined. Since
M = Aa1 e1 Aaq eq ,
by Proposition 24.31 we have an isomorphism
M 0 /M A/a1 A
A/aq A.
Now, it is possible that the first s elements ai are units, in which case A/ai A = (0), so we
can eliminate such factors and we get
M 0 /M A/as+1 A
A/aq A,
738
739
Proposition 24.42. If X is an m n matrix of rank r over a PID A, then there exist some
invertible n n matrix P , some invertible m m matrix Q, and a m n matrix D of the
form
0
1
1 0 0 0 0
B 0 2 0 0 0C
B.
.. . .
.. ..
.. C
B.
C
.
.
.
.
.
.C
B
B
C
D = B 0 0 r 0 0C
B
C
B 0 0 0 0 0C
B.
..
. . .
.C
@ ..
. .. .. . . .. A
0 0 0 0 0
for some nonzero i 2 A, such that
(1) 1 | 2 | | r ,
(2) X = QDP
, and
for some invertible matrices, P and Q. Then, Proposition 24.42 implies the following fact.
Proposition 24.43. Two m n matrices X and Y are equivalent i they have the same
invariant factors.
If X is the matrix of a linear map f : F ! F 0 with respect to some basis (u1 , . . . , un )
of F and some basis (u01 , . . . , u0m ) of F 0 , then the columns of X are the coordinates of the
f (uj ) over the u0i , where the f (uj ) generate f (F ), so Proposition 24.40 applies and yields
the following result:
Proposition 24.44. If X is a m n matrix or rank r over a PID A, and if 1 A, . . . , r A
are its invariant factors, then 1 is a gcd of the entries in X, and for k = 2, . . . , r, the
product 1 k is a gcd of all k k minors of X.
There are algorithms for converting a matrix X over a PID to the form X = QDP 1
as described in Proposition 24.42. For Euclidean domains, this can be achieved by using
the elementary row and column operations P (i, k), Ei,j; , and Ei, described in Chapter 6,
where we require the scalar used in Ei, to be a unit. For an arbitrary PID, another kind
of elementary matrix (containing some 2 2 submatrix in addition to diagonal entries) is
needed. These procedures involve computing gcds and use the Bezout identity to mimic
740
division. Such methods are presented in Serre [96], Jacobson [59], and Van Der Waerden
[112], and sketched in Artin [4]. We describe and justify several of these methods in Section
25.4.
From Section 24.2, we know that a submodule of a finitely generated module over a PID
is finitely presented. Therefore, in Proposition 24.39, the submodule M of the free module
F is finitely presented by some matrix R with a number of rows equal to the dimension
of F . Using Theorem 25.17, the matrix R can be diagonalized as R = QDP 1 where D
is a diagonal matrix. Then, the columns of Q form a basis (e1 , . . . , en ) of F , and since
RP = QD, the nonzero columns of RP form the basis (a1 e1 , . . . , aq eq ) of M . When the ring
A is a Euclidean domain, Theorem 25.14 shows that P and Q are products of elementary
row and column operations. In particular, when A = Z, in which cases our Z-modules are
abelian groups, we can find P and Q using Euclidean division.
In this case, a finitely generated submodule M of Zn is called a lattice. It is given as the
set of integral linear combinations of a finite set of integral vectors.
Here is an example taken from Artin [4] (Chapter 12, Section 4). Let F be the free
Z-module Z2 , and let M be the lattice generated by the columns of the matrix
2
1
R=
.
1 2
The columns (u1 , u2 ) of R are linearly independent, but they are not a basis of Z2 . For
example, in order to obtain e1 as a linear combination of these columns, we would need to
solve the linear system
2x y = 1
x + 2y = 0.
From the second equation, we get x =
But, y =
1 0
2
1
1 1
1 0
=
,
3 1
1 2
1 2
0 5
so R = QDP
2
1
1
2
1 0
3 1
1 0
2
0 5
1
D=
1
,
1
with
Q=
1 0
,
3 1
1 0
,
0 5
P =
1 1
.
1 2
741
The new basis (u01 , u02 ) for Z2 consists of the columns of Q and the new basis for M consists
of the columns (u01 , 5u02 ) of QD, where
1 0
QD =
.
3 5
A picture of the lattice and its generators (u1 , u2 ) and of the same lattice with the new basis
(u01 , 5u02 ) is shown in Figure 24.1, where the lattice points are displayed as stars.
*
*
*
*
*
*
*
*
*
*
*
*
The invariant factor decomposition of a finitely generated module M over a PID A given
by Theorem 24.38 says that
Mtor A/ar+1
A/am ,
a direct sum of cyclic modules, with (0) 6= ar+1 am 6= A. Using the Chinese
Remainder Theorem (Theorem 21.15), we can further decompose each module A/i A into
a direct sum of modules of the form A/pn A, where p is a prime in A.
Theorem 24.45. (Elementary Divisors Decomposition) Let M be a finitely generated nontrivial A-module, where A a PID. Then, M is isomorphic to the direct sum Ar Mtor , where
Ar is a free module and where the torsion module Mtor is a direct sum of cyclic modules of
n
the form A/pi i,j , for some primes p1 , . . . , pt 2 A and some positive integers ni,j , such that
for each i = 1, . . . , t, there is a sequence of integers
1 ni,1 , . . . , ni,1 < ni,2 , . . . , ni,2 < < ni,si , . . . , ni,si ,
|
{z
} |
{z
}
|
{z
}
mi,1
mi,2
mi,si
742
with si
1, and where ni,j occurs mi,j
1 times, for j = 1, . . . , si . Furthermore, the
irreducible elements pi and the integers r, t, ni,j , si , mi,j are uniquely determined.
Proof. By Theorem 24.38, we already know that M Ar
determined, and where
Mtor A/ar+1 A/am ,
a direct sum of cyclic modules, with (0) 6= ar+1 am 6= A. Then, each ai is a principal
ideal of the form i A, where i 6= 0 and i is not a unit. Using the Chinese Remainder
Theorem (Theorem 21.15), if we factor i into prime factors as
i = upk11 pkhh ,
with kj
1, we get an isomorphism
A/i A A/pk11 A
A/pkhh .
This implies that Mtor is the direct sum of modules of the form A/pi i,j , for some primes
pi 2 A.
To prove uniqueness, observe that the pi -primary component of Mtor is the direct sum
n
ni,si
(A/pi
A)mi,si ,
and these are uniquely determined. Since ni,1 < < ni,si , we have
ni,si
pi
A pi i,1 A 6= A,
Proposition 24.37 implies that the irreducible elements pi and ni,j , si , and mi,j are unique.
In view of Theorem 24.45, we make the following definition.
Definition 24.11. Given a finitely generated module M over a PID A as in Theorem 24.45,
n
the ideals pi i,j A are called the elementary divisors of M , and the mi,j are their multiplicities.
The ideal (0) is also considered to be an elementary divisor and r is its multiplicity.
Remark: Theorem 24.45 shows how the elementary divisors are obtained from the invariant
factors: the elementary divisors are the prime power factors of the invariant factors.
Conversely, we can get the invariant factors from the elementary divisors. We may assume
that M is a torsion module. Let
m = max {mi,1 + + mi,si },
1it
and construct the t m matrix C = (cij ) whose ith row is the sequence
ni,si , . . . , ni,si , . . . , ni,2 , . . . , ni,2 , ni,1 , . . . , ni,1 , 0, . . . , 0,
|
{z
}
|
{z
} |
{z
}
mi,si
mi,2
mi,1
743
padded with 0s if necessary to make it of length m. Then, the jth invariant factor is
c
From a computational point of view, finding the elementary divisors is usually practically
impossible, because it requires factoring. For example, if A = K[X] where K is a field, such
as K = R or K = C, factoring amounts to finding the roots of a polynomial, but by Galois
theory, in general, this is not algorithmically doable. On the other hand, the invariant factors
can be computed using elementary row and column operations.
It can also be shown that A and the modules of the form A/pn A are indecomposable
(with n > 0). A module M is said to be indecomposable if M cannot be written as a direct
sum of two nonzero proper submodules of M . For a proof, see Bourbaki [15] (Chapter VII,
Section 4, No. 8, Proposition 8). Theorem 24.45 shows that a finitely generated module over
a PID is a direct sum of indecomposable modules.
We will now apply the structure theorems for finitely generated (torsion) modules to the
K[X]-module Ef associated with an endomorphism f on a vector space E.
744
Chapter 25
The Rational Canonical Form and
Other Normal Forms
25.1
Let E be a finite-dimensional vector space over a field K, and let f : E ! E be an endomorphism of E. We know from Section 24.5 that there is a K[X]-module Ef associated with f ,
and that Mf is a finitely generated torsion module over the PID K[X]. In this chapter, we
show how Theorems from Sections 24.6 and 24.7 yield important results about the structure
of the linear map f .
Recall that the annihilator of a subspace V is an ideal (p) uniquely defined by a monic
polynomial p called the minimal polynomial of V .
Our first result is obtained by translating the primary decomposition theorem, Theorem
24.26. It is not too surprising that we obtain again Theorem 22.9!
Theorem 25.1. (Primary Decomposition Theorem) Let f : E ! E be a linear map on the
finite-dimensional vector space E over the field K. Write the minimal polynomial m of f as
m = pr11 prkk ,
where the pi are distinct irreducible monic polynomials over K, and the ri are positive integers. Let
Wi = Ker (pi (f )ri ), i = 1, . . . , k.
Then
(a) E = W1
Wk .
(b) Each Wi is invariant under f and the projection from W onto Wi is given by a polynomial in f .
(c) The minimal polynomial of the restriction f | Wi of f to Wi is pri i .
745
746
K[X]/(pm )
of m n cyclic modules, where the pj are uniquely determined monic polynomials of degree
at least 1, such that
pm | pm 1 | | p1 .
Each cyclic module K[X]/(pi ) is isomorphic to a cyclic subspace for f , say Vi , whose minimal
polynomial is pi .
It is customary to renumber the polynomials pi as follows. The n polynomials q1 , . . . , qn
are defined by:
(
1
if 1 j n m
qj (X) =
pn j+1 (X) if n m + 1 j n.
Then, we see that
q1 | q2 | | qn ,
where the first n
En ,
En
of cyclic subspaces Ei = Z(ui ; f ) for f , such that the minimal polynomial of the restriction
of f to Ei is qi . The polynomials qi satisfying the above conditions are unique, and qn is the
minimal polynomial of f .
In view of translation point (4) at the beginning of Section 24.5, we know that over the
basis
(ui , f (ui ), . . . , f ni 1 (ui ))
747
of the cyclic subspace Ei = Z(ui ; f ), with ni = deg(qi ), the matrix of the restriction of f to
Ei is the companion matrix of pi (X), of the form
0
1
0 0 0 0
a0
B1 0 0 0
a1 C
B
C
C
B0 1 0 0
a
2
B.
C
.
.
.
B. ... ... ... .
.. C
.
B.
C
B
C
@0 0 0 . . . 0
an i 2 A
0 0 0 1
an i 1
If we put all these bases together, we obtain a block matrix whose blocks are of the above
form. Therefore, we proved the following result.
Theorem 25.3. (Rational Canonical Form, First Version) Let f : E ! E be an endomorphism on a K-vector space of dimension n. There exist n monic polynomials q1 , . . . , qn 2
K[X] such that
q1 | q2 | | qn ,
with q1 = = qn
the form
B
B
B
X=B
B
@
An
m+1
0
..
.
0
0
An
m+2
..
.
0
0
..
.
0
0
..
.
An
0
0
0
..
.
C
C
C
C,
C
0A
An
where each Ai is the companion matrix of qi . The polynomials qi satisfying the above conditions are unique, and qn is the minimal polynomial of f .
Definition 25.1. A matrix X as in Theorem 25.3 is called a matrix in rational form. The
polynomials q1 , . . . , qn arising in Theorems 25.2 and 25.3 are called the similarity invariants
(or invariant factors) of f .
Theorem 25.3 shows that every matrix is similar to a matrix in rational form. Such a
matrix is unique.
By Proposition 24.20, two linear maps f and f 0 are similar i there is an isomorphism
between Ef and Ef0 0 , and thus by the uniqueness part of Theorem 24.38, i they have the
same similarity invariants q1 , . . . , qn .
Proposition 25.4. If E and E 0 are two finite-dimensional vector spaces and if f : E ! E
and f 0 : E 0 ! E 0 are two linear maps, then f and f 0 are similar i they have the same
similarity invariants.
The eect of extending the fied K to a field L is the object of the next proposition.
748
Proposition 25.5. Let f : E ! E be a linear map on a K-vector space E, and let (q1 , . . . , qn )
be the similarity invariants of f . If L is a field extension of K (which means that K L),
and if E(L) = L K E is the vector space obtained by extending the scalars, and f(L) = 1L f
the linear map of E(L) induced by f , then the similarity invariants of f(L) are (q1 , . . . , qn )
viewed as polynomials in L[X].
Proof. We know that Ef is isomorphic to the direct sum
Ef K[X]/(q1 K[X])
K[X]/(qn K[X]),
so by tensoring with L[X] and using Propositions 24.12 and 23.7, we get
L[X] K[X] Ef L[X] K[X] (K[X]/(q1 K[X]) K[X]/(qn K[X]))
L[X] K[X] (K[X]/(q1 K[X])) L[X] K[X] (K[X]/(qn K[X]))
(K[X]/(q1 K[X])) K[X] L[X] (K[X]/(qn K[X])) K[X] L[X].
However, by Proposition 24.14, we have isomorphisms
(K[X]/(qi K[X])) K[X] L[X] L[X]/(qi L[X]),
so we get
L[X] K[X] Ef L[X]/(q1 L[X])
L[X]/(qn L[X]).
Since Ef is a K[X]-module, the L[X] module L[X] K[X] Ef is the module obtained from
Ef by the ring extension K[X] L[X], and since f is a K[X]-linear map of Ef , it becomes
f(L[X]) on L[X] K[X] Ef , which is the same as f(L) viewed as an L-linear map of the space
E(L) = L K E, so L[X] K[X] Ef is actually isomorphic to E(L)f(L) , and we have
E(L)f(L) L[X]/(q1 L[X])
L[X]/(qn L[X]),
749
/ E[X]
/
E[X]
/ Ef
/
K[X]/(qn K[X]),
f ) K[X]/(p1 K[X])
K[X]/(pm K[X]),
where p1 , . . . , pm are the invariant factors of Im(X1 f ) with respect to E[X]. Since E[X]
E[X]/Im(X1 f ), by the uniqueness part of Theorem 24.38 and because the polynomials
are monic, we must have m = n and pi = qi , for i = 1, . . . , n. Therefore, we proved the
following crucial fact:
Proposition 25.7. For any linear map f : E ! E over a K-vector space E of dimension n,
the similarity invariants of f are equal to the invariant factors of Im(X1 f ) with respect
to E[X].
Proposition 25.7 is the key to computing the similarity invariants of a linear map. This
can be done using a procedure to convert XI U to its Smith normal form. Propositions
25.7 and 24.44 yield the following result.
Proposition 25.8. For any linear map f : E ! E over a K-vector space E of dimension n,
if (q1 , . . . , qn ) are the similarity invariants of f , for any matrix U representing f with respect
to any basis, then for k = 1, . . . , n the product
dk (X) = q1 (X) qk (X)
is the gcd of the k k-minors of the matrix XI
Note that the matrix XI
polynomial f (X) = det(XI
U.
Proposition 25.9. For any linear map f : E ! E over a K-vector space E of dimension
n, if (q1 , . . . , qn ) are the similarity invariants of f , then the following properties hold:
(1) If
f (X)
= q1 (X) qn (X).
750
(2) The minimal polynomial m(X) = qn (X) of f divides the characteristic polynomial
f X) of f .
(3) The characteristic polynomial
f X)
divides m(X)n .
25.2
Let us now translate the Elementary Divisors Decomposition Theorem, Theorem 24.45, in
terms of Ef . We obtain the following result.
Theorem 25.10. (Cyclic Decomposition Theorem, Second Version) Let f : E ! E be an
endomorphism on a K-vector space of dimension n. Then, E is the direct sum of of cyclic
n
subspaces Ej = Z(uj ; f ) for f , such that the minimal polynomial of Ej is of the form pi i,j ,
for some irreducible monic polynomials p1 , . . . , pt 2 K[X] and some positive integers ni,j ,
such that for each i = 1, . . . , t, there is a sequence of integers
1 ni,1 , . . . , ni,1 < ni,2 , . . . , ni,2 < < ni,si , . . . , ni,si ,
|
{z
} |
{z
}
|
{z
}
mi,1
mi,2
mi,si
with si 1, and where ni,j occurs mi,j 1 times, for j = 1, . . . , si . Furthermore, the monic
polynomials pi and the integers r, t, ni,j , si , mi,j are uniquely determined.
P
Note that there are =
mi,j cyclic subspaces Z(uj ; f ). Using bases for the cyclic
subspaces Z(uj ; f ) as in Theorem 25.3, we get the following theorem.
Theorem 25.11. (Rational Canonical Form, Second Version) Let f : E ! E be an endomorphism on a K-vector space of dimension n. There exist t distinct irreducible monic
polynomials p1 , . . . , pt 2 K[X] and some positive integers ni,j , such that for each i = 1, . . . , t,
there is a sequence of integers
1 ni,1 , . . . , ni,1 < ni,2 , . . . , ni,2 < < ni,si , . . . , ni,si ,
|
{z
} |
{z
}
|
{z
}
mi,1
mi,2
mi,si
751
with si 1, and where ni,j occurs mi,j 1 times, for j = 1, . . . , si , and there is a basis of E
such that the matrix X of f is a block matrix of the form
0
1
A1 0
0
0
B 0 A2
0
0C
B
C
B ..
C
.
.
.
.
.
.
.
.
X=B .
C,
.
.
.
.
B
C
@ 0 0 A 1 0 A
0 0
0
A
P
n
where each Aj is the companion matrix of some pi i,j , and =
mi,j . The monic polynomials p1 , . . . , pt and the integers r, t, ni,j , si , mi,j are uniquely determined
n
The polynomials pi i,j are called the elementary divisors of f (and X). These polynomials
are factors of the minimal polynomial.
As we pointed earlier, unlike the similarity invariants, the elementary divisors may change
when we pass to a field extension.
We will now consider the special case where all the irreducible polynomials pi are of the
form X
i ; that is, when are the eigenvalues of f belong to K. In this case, we find again
the Jordan form.
25.3
In this section, we assume that all the roots of the minimal polynomial of f belong to K.
This will be the case if K is algebraically closed. The irreducible polynomials pi of Theorem
25.10 are the polynomials X
i , for the distinct eigenvalues i of f . Then, each cyclic
subspace Z(uj ; f ) has a minimal polynomial of the form (X
)m , for some eigenvalue of
f and some m
1. It turns out that by choosing a suitable basis for the cyclic subspace
Z(uj ; f ), the matrix of the restriction of f to Z(uj ; f ) is a Jordan block.
Proposition 25.12. Let E be a finite-dimensional K-vector space and let f : E ! E be a
linear map. If E is a cyclic K[X]-module and if (X
)n is the minimal polynomial of f ,
then there is a basis of E of the form
((f
id)n 1 (u), (f
id)n 2 (u), . . . , (f
id)(u), u),
for some u 2 E. With respect to this basis, the matrix of f is the Jordan block
0
1
1 0 0
B0
C
B . . 1 0. C
B. . ... ... .C
.C.
Jn ( ) = B . .
B
C
.
. . 1A
@0 0 0
0 0 0
752
)n annihilates E, we get
p(f )(u) = r(f )(u),
which means that every vector of the form p(f )(u) with p(X) of degree
linear combination of u, f (u), . . . , f n 2 (u), f n 1 (u).
n is actually a
id)n 2 (u)(f
id)(u), . . . , (f
id)n 1 (u)
id)n 1 (u) + a1 (f
id)n 2 (u) + + an 2 (f
id)(u) + an 1 u = 0,
)n
+ a1 (X
)n
+ + an 2 (X
) + an
id)n 1 (u), (f
id)n 2 (u), . . . , (f
id)(u), u),
id)n 1 (u), (f
id + id, as (f
id)n 2 (u), . . . , (f
id)(u), u).
id)n 1 (u)) = (f
id)n (u) + (f
id)k (u)) = (f
id)k+1 (u) + (f
id)n 1 (u) = (f
id)n 1 (u)
and
f ((f
id)k (u),
0kn
2.
But this means precisely that the matrix of f in this basis is the Jordan block Jn ( ).
753
Combining Theorem 25.11 and Proposition 25.12, we obtain a strong version of the
Jordan form.
Theorem 25.13. (Jordan Canonical Form) Let E be finite-dimensional K-vector space.
The following properties are equivalent:
(1) The eigenvalues of f all belong to K.
(2) There is a basis of E in which the matrix of f is upper (or lower) triangular.
(3) There exist a basis of E in which the matrix A of f is Jordan matrix. Furthermore, the
number of Jordan blocks Jr ( ) appearing in A, for fixed r and , is uniquely determined
by f .
Proof. The implication (1) =) (3) follows from Theorem 25.11 and Proposition 25.12. The
implications (3) =) (2) and (2) =) (1) are trivial.
Compared to Theorem 22.16, the new ingredient is the uniqueness assertion in (3), which
is not so easy to prove.
Observe that the minimal polynomial of f is the least common multiple of the polynomials
(X
)r associated with the Jordan blocks Jr ( ) appearing in A, and the characteristic
polynomial of A is the product of these polynomials.
We now return to the problem of computing eectively the similarity invariants of a
matrix A. By Proposition 25.7, this is equivalent to computing the invariant factors of
XI A. In principle, this can be done using Proposition 24.42. A procedure to do this
eectively for the ring A = K[X] is to convert XI A to its Smith normal form. This will
also yield the rational canonical form for A.
25.4
The Smith normal form is the special case of Proposition 24.42 applied to the PID K[X]
where K is a field, but it also says that the matrices P and Q are products of elementary
matrices. It turns out that such a result holds for any Euclidean ring, and the proof is
basically the same.
Recall from Definition 20.9 that a Euclidean ring is an integral domain A such that there
exists a function : A ! N with the following property: For all a, b 2 A with b 6= 0, there
are some q, r 2 A such that
a = bq + r
and
754
Theorem 25.14. If M is an m n matrix over a Euclidean ring A, then there exist some
invertible n n matrix P and some invertible m m matrix Q, where P and Q are products
of elementary matrices, and a m n matrix D of the form
0
1
1 0 0 0 0
B 0 2 0 0 0C
B.
.. . .
.. ..
.. C
B.
C
.
.
.
.
.
.C
B
B
C
D = B 0 0 r 0 0C
B
C
B 0 0 0 0 0C
B.
..
. . .
.C
@ ..
. .. .. . . .. A
0 0 0 0 0
for some nonzero i 2 A, such that
(1) 1 | 2 | | r , and
(2) M = QDP
Proof. We follow Jacobsons proof [59] (Chapter 3, Theorem 3.8). We proceed by induction
on m + n.
If m = n = 1, let P = (1) and Q = (1).
For the induction step, if M = 0, let P = In and Q = Im . If M 6= 0, the stategy is to
apply a sequence of elementary transformations that converts M to a matrix of the form
0
1
1 0 0
B0
C
B
C
0
M = B ..
C
@.
A
Y
0
0
1
a11 a12 a1n
B 0 a22 a2n C
B
C
B ..
..
.. C .
.
.
@ .
.
.
. A
0 am2 amn
755
with
a11
B0
B
B ..
@ .
0
0
a22
..
.
..
.
am2
1
0
a2n C
C
.. C
. A
amn
(b) If there is some entry a1k in the first row such that a11 does not divide a1k , then pick
such an entry (say, with the smallest index j such that (a1j ) is minimal), and divide a1k by
a11 ; that is, find bk and b1k such that
a1k = a11 bk + b1k ,
with
756
go to Step 4.
a11 0
B0
B
M = B ..
@ .
Y
0
1
C
C
C
A
(ii) If Step 2b ruined column 1 which now contains some nonzero entry below a11 , go
back to Step 2a.
We perform a sequence of alternating steps between Step 2a and Step 2b. Because the
-value of the (1, 1)-entry strictly decreases whenever we reenter Step 2a and Step 2b, such
a sequence must terminate with a matrix of the form
0
1
a11 0 0
B0
C
B
C
M = B ..
C
@ .
A
Y
0
Step 4 . If a11 divides all entries in Y , stop.
Otherwise, there is some column, say j, such that a11 does not divide some entry aij , so
add the jth column to the first column. This yields a matrix of the form
0
1
a11 0 0
B b2j
C
B
C
M = B ..
C
@ .
A
Y
bmj
where the ith entry in column 1 is nonzero, so go back to Step 2a,
Again, since the -value of the (1, 1)-entry strictly decreases whenever we reenter Step
2a and Step 2b, such a sequence must terminate with a matrix of the form
0
1
1 0 0
B0
C
B
C
0
M = B ..
C
@.
A
Y
0
757
..
.
..
.
0
..
.
0
..
.
0
..
.
1 0 0
0 q1 0
0 0 q2
.. .. ..
. . .
0 0 0
1
0
.. C
. C
C
0 C
C
0 C
C
0 C
. C
..
. .. A
qm
..
.
1, such that
758
(1) q1 | q2 | | qm ,
(2) q1 , . . . qm are the similarity invariants of A, and
(3) XI
A = QDP
The matrix D in Theorem 25.16 is often called Smith normal form of A, even though
this is confusing terminology since D is really the Smith normal form of XI A.
Of course, we know from previous work that in Theorem 25.15, the 1 , . . . , r are unique,
and that in Theorem 25.16, the q1 , . . . , qm are unique. This can also be proved using some
simple properties of minors, but we leave it as an exercise (for help, look at Jacobson [59],
Chapter 3, Theorem 3.9).
The rational canonical form of A can also be obtained from Q 1 and D, but first, let
us consider the generalization of Theorem 25.15 to PIDs that are not necessarily Euclidean
rings.
We need to find a norm that assigns a natural number (a) to any nonzero element
of a PID A, in such a way that (a) decreases whenever we return to Step 2a and Step 2b.
Since a PID is a UFD, we use the number
(a) = k1 + + kr
of prime factors in the factorization of a nonunit element
a = upk11 pkr r ,
and we set
(u) = 0
if u is a unit.
We cant divide anymore, but we can find gcds and use Bezout to mimic division. The
key ingredient is this: for any two nonzero elements a, b 2 A, if a does not divide b then let
d 6= 0 be a gcd of a and b. By Bezout, there exist x, y 2 A such that
ax + by = d.
We can also write a = td and b =
implies that
sy = 1,
t
s
x s
1 0
=
,
y x
y t
0 1
sdy = d, which
759
which shows that both matrices on the left of the equation are invertible, and so is the
transpose of the second one,
x y
s t
(they all have determinant 1). We also have
as + bt = tds
so
x y
s t
and
a
d
=
b
0
a b
sdt = 0,
x s
y t
= d 0 .
Because a does not divide b, their gcd d has strictly fewer prime factors than a, so
(d) < (a).
Using matrices of the form
with xt
x
Bs
B
B0
B
B0
B
B ..
@.
0
y
t
0
0
..
.
0
0
1
0
..
.
0
0
0
1
..
.
...
1
0
0C
C
0C
C
0C
C
.. C
.A
1
Theorem 25.17. If M is an m n matrix over a PID A, then there exist some invertible
n n matrix P and some invertible m m matrix Q, where P and Q are products of
elementary matrices and matrices of the form
0
x
Bs
B
B0
B
B0
B
B ..
@.
0
y
t
0
0
..
.
0
0
1
0
..
.
0
0
0
1
..
.
..
.
1
0
0C
C
0C
C
0C
C
.. C
.A
1
760
with xt
D of the form
0 0
2 0
.. . .
.
. ..
.
0 r
0 0
..
.
. ..
0 0
0
0
..
.
0
0
.. . .
.
.
0
1
0
0C
.. C
C
.C
C
0C
C
0C
.. C
.A
0
(1) 1 | 2 | | r , and
(2) M = QDP
Proof sketch. In Step 2a, if a11 does not divide ak1 , then first permute row 2 and row k (if
k 6= 2). Then, if we write a = a11 and b = ak1 , if d is a gcd of a and b and if x, y, s, t are
determined as explained above, multiply on the left by the matrix
0
1
x y 0 0 0
B s t 0 0 0C
B
C
B 0 0 1 0 0C
B
C
B 0 0 0 1 0C
B
C
B .. .. .. .. . . .. C
@. . . .
. .A
0 0 0 1
to obtain a matrix of the form
d
B 0
B
B a31
B
B ..
@ .
a12
a22
a32
..
.
am1 am2
1
a1n
a2n C
C
a3n C
C
.. C
. A
. . . amn
..
.
In Step 2b, if a11 does not divide a1k , then first permute column 2 and column k (if
k 6= 2). Then, if we write a = a11 and b = a1k , if d is a gcd of a and b and if x, y, s, t are
determined as explained above, multiply on the right by the matrix
0
1
x s 0 0 0
B y t 0 0 0C
B
C
B 0 0 1 0 0C
B
C
B 0 0 0 1 0C
B
C
B .. .. .. .. . . .. C
@. . . .
. .A
0 0 0 1
d
B a21
B
B ..
@ .
761
0
a22
..
.
a13
a23
..
.
1
a1n
a2n C
C
.. C
. A
..
.
with (d) < (a11 ). Then, go back to Step 2b. The other steps remain the same. Whenever
we return to Step 2a or Step 2b, the -value of the (1, 1)-entry strictly decreases, so the
whole procedure terminates.
We conclude this section by explaining how the rational canonical form of a matrix A
can be obtained from the canonical form QDP 1 of XI A.
Let f : E ! E be a linear map over a K-vector space of dimension n. Recall from
Theorem 24.21 (see Section 24.5) that as a K[X]-module, Ef is the image of the free module
E[X] by the map : E[X] ! Ef , where E[X] consists of all linear combinations of the form
p1 e 1 + + pn e n ,
where (e1 , . . . , en ) is a basis of E and p1 , . . . , pn 2 K[X] are polynomials, and
is given by
The matrix A is the representation of a linear map f over the canonical basis (e1 , . . . , en )
of E = K n , and and XI A is the matrix of
with respect to the basis (e1 , . . . , en )
(over K[X]). What Theorem 25.16 tells us is that there are K[X]-bases (u1 , . . . , un ) and
(v1 , . . . , vn ) of Ef with respect to which the matrix of is D. Then
(ui ) = vi , i = 1, . . . , n m,
(un m+i ) = qi vn m+i , i = 1, . . . , m,
and because Im( ) = Ker ( ), this implies that
(vi ) = 0,
Consequently, w1 = (vn
and we have
m+1 ), . . . , wm
i = 1, . . . , n
m.
Mf = K[X]w1
K[X]wm ,
m+i ))
= (qi vn
m+i )
= qi (vn
m+i )
= qi w i ,
762
so as a K-vector space, the cyclic subspace Z(wi ; f ) = K[X]wi has qi as annihilator, and by
a remark from Section 24.5, it has the basis (over K)
(wi , f (wi ), . . . , f ni 1 (wi )),
ni = deg(qi ).
Furthermore, over this basis, the restriction of f to Z(wi ; f ) is represented by the companion
matrix of qi . By putting all these bases together, we obtain a block matrix which is the
canonical rational form of f (and A).
Now, XI A = QDP 1 is the matrix of with respect to the canonical basis (e1 , . . . , en )
(over K[X]), and D is the matrix of with respect to the bases (u1 , . . . , un ) and (v1 , . . . , vn )
(over K[X]), which tells us that the columns of Q consist of the coordinates (in K[X]) of the
basis vectors (v1 , . . . , vn ) with respect to the basis (e1 , . . . , en ). Therefore, the coordinates (in
K) of the vectors (w1 , . . . , wm ) spanning Ef over K[X], where wi = (vn m+i ), are obtained
by substituting the matrix A for X in the coordinates of the columns vectors of Q, and
evaluating the resulting expressions.
Since
D = Q 1 (XI
A)P,
the matrix D is obtained from A by a sequence of elementary row operations whose product
is Q 1 and a sequence of elementary column operations whose product is P . Therefore, to
compute the vectors w1 , . . . , wm from A, we simply have to figure out how to construct Q
from the sequence of elementary row operations that yield Q 1 . The trick is to use column
operations to gather a product of row operations in reverse order.
Indeed, if Q
= Ek E2 E1 ,
then
Q = E1 1 E2 1 Ek 1 .
Now, row operations operate on the left and column operations operate on the right, so
the product E1 1 E2 1 Ek 1 can be computed from left to right as a sequence of column
operations.
Let us review the meaning of the elementary row and column operations P (i, k), Ei,j; ,
and Ei, .
1. As a row operation, P (i, k) permutes row i and row k.
2. As a column operation, P (i, k) permutes column i and column k.
3. The inverse of P (i, k) is P (i, k) itself.
4. As a row operation, Ei,j; adds
763
times column i to column j (note the switch in
Given a square matrix A (over K), the row and column operations applied to XI A in
converting it to its Smith normal form may involve coefficients that are polynomials and it
is necessary to explain what is the action of an operation Ei,j; in this case. If the coefficient
in Ei,j; is a polynomial over K, as a row operation, the action of Ei,j; on a matrix X is
to multiply the jth row of M by the matrix (A) obtained by substituting the matrix A for
X and then to add the resulting vector to row i. Similarly, as a column operation, the action
of Ei,j; on a matrix X is to multiply the ith column of M by the matrix (A) obtained
by substituting the matrix A for X and then to add the resulting vector to column j. An
algorithm to compute the rational canonical form of a matrix can now be given. We apply
the elementary column operations Ei 1 for i = 1, . . . k, starting with the identity matrix.
Algorithm for Converting an n n matrix to Rational Canonical Form
While applying elementary row and column operations to compute the Smith normal
form D of XI A, keep track of the row operations and perform the following steps:
1. Let P 0 = In , and for every elementary row operation E do the following:
(a) If E = P (i, k), permute column i and column k of P 0 .
(b) If E = Ei,j; , multiply the ith column of P 0 by the matrix (A) obtained by
substituting the matrix A for X, and then subtract the resulting vector from
column j.
(c) If E = Ei, where
2. When step (1) terminates, the first n m columns of P 0 are zero and the last m are
linearly independent. For i = 1, . . . , m, multiply the (n m + i)th column wi of P 0
successively by I, A1 , A2 , Ani 1 , where ni is the degree of the polynomial qi (appearing
in D), and form the n n matrix P consisting of the vectors
w1 , Aw1 , . . . , An1 1 w1 , w2 , Aw2 , . . . , An2 1 w2 , . . . , wm , Awm , . . . , Anm 1 wm .
Then, P
764
Here is an example taken from Dummit and Foote [32] (Chapter 12, Section 12.2). Let
A be the matrix
0
1
1 2
4 4
B2
1 4
8C
C.
A=B
@1 0
1
2A
0 1
2 3
One should check that the following sequence of row and column operations produces the
Smith normal form D of XI A:
row P (1, 3) row E1,
row P (2, 4) rowE2,
with
1
1
1
B0
D=B
@0
0
(X 1)
(X+1)
column E1,3;X
column E2,3;2
column E1,4;2
column E2,4;X 3 ,
1
0
0
C
1
0
0
C.
2
A
0 (X 1)
0
2
0
0
(X 1)
Then, applying Step 1 of the above algorithm, we get the sequence of column operations:
0
1
0
1
0
1
0 0 1 0
1 0 0 0
0 0 1 0
B0 1 0 0 C
B0 1 0 0 C
B 0 1 0 0C
E1, 1
E2,1, 2
P (1,3)
B
C
B
C
B
C
!
!
!
@0 0 1 0 A
@1 0 0 0 A
@ 1 0 0 0A
0 0 0 1
0 0 0 1
0 0 0 1
0
1
0
1
0
1
0 0 1 0
0 0 1 0
0 0 1 0
B 2 1 0 0C E3,1,A I B0 1 0 0C
B0 0 0 1 C
E2, 1
P (2,4)
B
C
B
C
B
C
!
!
!
@ 1 0 0 0A
@0 0 0 0 A
@0 0 0 0 A
0 0 0 1
0 0 0 1
0 1 0 0
0
1
0
1
0
1
0 0 1 0
0
2 1 0
0 0 1 0
B0 0 0 1C E3,2, 2
B0 0 0 1C E4,2;A+I B0 0 0 1C
0
B
C
B
C
B
C
!
!
@0 0 0 0 A
@0 0 0 0 A
@0 0 0 0 A = P .
0
1 0 0
0
1 0 0
0 0 0 0
Step 2 of the algorithm yields the vectors
0 1
0 1 0 1
1
1
1
B0 C
B0 C B2 C
B C, AB C = B C,
@0 A
@0 A @1 A
0
0
0
so we get
1
B0
P =B
@0
0
0 1
0
B1 C
B C,
@0 A
0
1
2
1
0
0
1
0
0
0 1 0
0
B1 C B
C B
AB
@0 A = @
0
1
2
1C
C.
0A
1
1
2
1C
C,
0A
1
765
0
1
B0
=B
@0
0
0
0
1
0
1
2
0C
C,
1A
1
1
1
2
0
1
2
0
0
0
0
0
1
1
0
0C
C.
1A
2
766
Chapter 26
Topology
26.1
This chapter contains a review of basic topological concepts. First, metric spaces are defined.
Next, normed vector spaces are defined. Closed and open sets are defined, and their basic
properties are stated. The general concept of a topological space is defined. The closure and
the interior of a subset are defined. The subspace topology and the product topology are
defined. Continuous maps and homeomorphisms are defined. Limits of seqences are defined.
Continuous linear maps and multilinear maps are defined and studied briefly. The chapter
ends with the definition of a normed affine space.
Most spaces considered in this book have a topological structure given by a metric or a
norm, and we first review these notions. We begin with metric spaces. Recall that R+ =
{x 2 R | x 0}.
Definition 26.1. A metric space is a set E together with a function d : E E ! R+ ,
called a metric, or distance, assigning a nonnegative real number d(x, y) to any two points
x, y 2 E, and satisfying the following conditions for all x, y, z 2 E:
(D1) d(x, y) = d(y, x).
(symmetry)
(D2) d(x, y)
(positivity)
0, and d(x, y) = 0 i x = y.
(triangle inequality)
Geometrically, condition (D3) expresses the fact that in a triangle with vertices x, y, z,
the length of any side is bounded by the sum of the lengths of the other two sides. From
(D3), we immediately get
|d(x, y) d(y, z)| d(x, z).
Let us give some examples of metric spaces. Recall that the absolute value |x| of a real
number x 2 R is defined such
p that |x| = x if x 0, |x| = x if x < 0, and for a complex
number x = a + ib, by |x| = a2 + b2 .
767
768
Example 26.1.
1. Let E = R, and d(x, y) = |x
natural metric on R.
y1 |2 + + |xn
yn | 2
1
2
(closed interval)
(open interval)
We will need to define the notion of proximity in order to define convergence of limits
and continuity of functions. For this, we introduce some standard small neighborhoods.
Definition 26.2. Given a metric space E with metric d, for every a 2 E, for every 2 R,
with > 0, the set
B(a, ) = {x 2 E | d(a, x) }
is called the closed ball of center a and radius , the set
B0 (a, ) = {x 2 E | d(a, x) < }
is called the open ball of center a and radius , and the set
S(a, ) = {x 2 E | d(a, x) = }
is called the sphere of center a and radius . It should be noted that is finite (i.e., not
+1). A subset X of a metric space E is bounded if there is a closed ball B(a, ) such that
X B(a, ).
Clearly, B(a, ) = B0 (a, ) [ S(a, ).
769
Example 26.2.
1. In E = R with the distance |x
interval ]a , a + [.
2. In E = R2 with the Euclidean metric, an open ball of center a and radius is the set
of points inside the disk of center a and radius , excluding the boundary points on
the circle.
3. In E = R3 with the Euclidean metric, an open ball of center a and radius is the set
of points inside the sphere of center a and radius , excluding the boundary points on
the sphere.
One should be aware that intuition can be misleading in forming a geometric image of a
closed (or open) ball. For example, if d is the discrete metric, a closed ball of center a and
radius < 1 consists only of its center a, and a closed ball of center a and radius
1
consists of the entire space!
If E = [a, b], and d(x, y) = |x y|, as in Example 26.1, an open ball B0 (a, ), with
< b a, is in fact the interval [a, a + [, which is closed on the left.
We now consider a very important special case of metric spaces, normed vector spaces.
Normed vector spaces have already been defined in Chapter 7 (Definition 7.1) but for the
readers convenience we repeat the definition.
Definition 26.3. Let E be a vector space over a field K, where K is either the field R of
reals, or the field C of complex numbers. A norm on E is a function k k : E ! R+ , assigning
a nonnegative real number kuk to any vector u 2 E, and satisfying the following conditions
for all x, y, z 2 E:
(N1) kxk
0, and kxk = 0 i x = 0.
(positivity)
(N2) k xk = | | kxk.
(scaling)
(triangle inequality)
kyk| kx
yk.
yk,
770
it is easily seen that d is a metric. Thus, every normed vector space is immediately a metric
space. Note that the metric associated with a norm is invariant under translation, that is,
d(x + u, y + u) = d(x, y).
For this reason, we can restrict ourselves to open or closed balls of center 0.
Examples of normed vector spaces were given in Example 7.1. We repeat the most
important examples.
Example 26.3. Let E = Rn (or E = Cn ). There are three standard norms. For every
(x1 , . . . , xn ) 2 E, we have the norm kxk1 , defined such that,
kxk1 = |x1 | + + |xn |,
we have the Euclidean norm kxk2 , defined such that,
kxk2 = |x1 |2 + + |xn |2
1
2
1) by
771
is an open set. In fact, it is possible to find a metric for which such open n-cubes are open
balls! Similarly, we can define the closed n-cube
{(x1 , . . . , xn ) 2 E | ai xi bi , 1 i n},
which is a closed set.
The open sets satisfy some important properties that lead to the definition of a topological
space.
Proposition 26.1. Given a metric space E with metric d, the family O of all open sets
defined in Definition 26.4 satisfies the following properties:
(O1) For every finite family (Ui )1in of sets Ui 2 O, we have U1 \ \ Un 2 O, i.e., O is
closed under finite intersections.
S
(O2) For every arbitrary family (Ui )i2I of sets Ui 2 O, we have i2I Ui 2 O, i.e., O is closed
under arbitrary unions.
(O3) ; 2 O, and E 2 O, i.e., ; and E belong to O.
Furthermore, for any two distinct points a 6= b in E, there exist two open sets Ua and Ub
such that, a 2 Ua , b 2 Ub , and Ua \ Ub = ;.
Proof. It is straightforward. For the last point, letting = d(a, b)/3 (in fact = d(a, b)/2
works too), we can pick Ua = B0 (a, ) and Ub = B0 (b, ). By the triangle inequality, we
must have Ua \ Ub = ;.
The above proposition leads to the very general concept of a topological space.
One should be careful that, in general, the family of open sets is not closed under infinite intersections. For example, in $\mathbb{R}$ under the metric $|x - y|$, letting $U_n = \left]-1/n, +1/n\right[$, each $U_n$ is open, but $\bigcap_n U_n = \{0\}$, which is not open.
26.2 Topological Spaces
Given a topological space $(E, \mathcal{O})$, given any subset $A$ of $E$, since $E \in \mathcal{O}$ and $E$ is a closed set, the family $\mathcal{C}_A = \{F \mid A \subseteq F,\ F \text{ a closed set}\}$ of closed sets containing $A$ is nonempty, and since any arbitrary intersection of closed sets is a closed set, the intersection $\bigcap \mathcal{C}_A$ of the sets in the family $\mathcal{C}_A$ is the smallest closed set containing $A$. By a similar reasoning, the union of all the open subsets contained in $A$ is the largest open set contained in $A$.
Definition 26.6. Given a topological space $(E, \mathcal{O})$, given any subset $A$ of $E$, the smallest closed set containing $A$ is denoted by $\overline{A}$, and is called the closure, or adherence, of $A$. A subset $A$ of $E$ is dense in $E$ if $\overline{A} = E$. The largest open set contained in $A$ is denoted by $\overset{\circ}{A}$, and is called the interior of $A$. The set $\operatorname{Fr} A = \overline{A} \cap \overline{E - A}$ is called the boundary (or frontier) of $A$. We also denote the boundary of $A$ by $\partial A$.
$$\{(x_1, \ldots, x_n) \in E \mid a_i \le x_i \le b_i,\ 1 \le i \le n\}.$$
One should realize that every open set $U \in \mathcal{O}$ which is entirely contained in $A$ is also in the family $\mathcal{U}$, but $\mathcal{U}$ may contain open sets that are not in $\mathcal{O}$. For example, if $E = \mathbb{R}$ with $|x - y|$, and $A = [a, b]$, then sets of the form $[a, c[$, with $a < c < b$, belong to $\mathcal{U}$, but they are not open sets for $\mathbb{R}$ under $|x - y|$. However, there is agreement in the following situation.
Proposition 26.4. Given a topological space (E, O), given any subset A of E, if U is the
subspace topology, then the following properties hold.
(1) If A is an open set A 2 O, then every open set U 2 U is an open set U 2 O.
(2) If A is a closed set in E, then every closed set w.r.t. the subspace topology is a closed
set w.r.t. O.
Proof. Left as an exercise.
The concept of product topology is also useful. We have the following proposition.
Proposition 26.5. Given $n$ topological spaces $(E_i, \mathcal{O}_i)$, let $\mathcal{B}$ be the family of subsets of $E_1 \times \cdots \times E_n$ defined as follows:
$$\mathcal{B} = \{U_1 \times \cdots \times U_n \mid U_i \in \mathcal{O}_i,\ 1 \le i \le n\},$$
and let $\mathcal{P}$ be the family consisting of arbitrary unions of sets in $\mathcal{B}$, including $\emptyset$. Then, $\mathcal{P}$ is a topology on $E_1 \times \cdots \times E_n$.
$$\|(x_1, \ldots, x_n)\|_2 = \left( \|x_1\|_1^2 + \cdots + \|x_n\|_n^2 \right)^{1/2}.$$
The following proposition gives useful criteria for determining whether a family of open
subsets is a basis of a topological space.
Proposition 26.6. Given a topological space $(E, \mathcal{O})$ and a family $\mathcal{B}$ of open subsets in $\mathcal{O}$, the following properties hold:
(1) The family $\mathcal{B}$ is a basis for the topology $\mathcal{O}$ iff for every open set $U \in \mathcal{O}$ and every $x \in U$, there is some $B \in \mathcal{B}$ such that $x \in B$ and $B \subseteq U$.
(2) The family $\mathcal{B}$ is a basis for the topology $\mathcal{O}$ iff
(a) For every $x \in E$, there is some $B \in \mathcal{B}$ such that $x \in B$.
(b) For any two open subsets $B_1, B_2 \in \mathcal{B}$, for every $x \in E$, if $x \in B_1 \cap B_2$, then there is some $B_3 \in \mathcal{B}$ such that $x \in B_3$ and $B_3 \subseteq B_1 \cap B_2$.
26.3 Continuous Functions, Limits
Definition 26.10. Let $(E, \mathcal{O}_E)$ and $(F, \mathcal{O}_F)$ be topological spaces, and let $f \colon E \to F$ be a function. For every $a \in E$, we say that $f$ is continuous at $a$ if for every open set $V \in \mathcal{O}_F$ containing $f(a)$, there is some open set $U \in \mathcal{O}_E$ containing $a$ such that $f(U) \subseteq V$. We say that $f$ is continuous if it is continuous at every $a \in E$.
Define a neighborhood of $a \in E$ as any subset $N$ of $E$ containing some open set $O \in \mathcal{O}$ such that $a \in O$. Now, if $f$ is continuous at $a$ and $N$ is any neighborhood of $f(a)$, there is some open set $V \subseteq N$ containing $f(a)$, and since $f$ is continuous at $a$, there is some open set $U$ containing $a$ such that $f(U) \subseteq V$. Since $V \subseteq N$, the open set $U$ is a subset of $f^{-1}(N)$ containing $a$, and $f^{-1}(N)$ is a neighborhood of $a$. Conversely, if $f^{-1}(N)$ is a neighborhood of $a$ whenever $N$ is any neighborhood of $f(a)$, it is immediate that $f$ is continuous at $a$. It is easy to see that Definition 26.10 is equivalent to the following statements.
Proposition 26.7. Let $(E, \mathcal{O}_E)$ and $(F, \mathcal{O}_F)$ be topological spaces, and let $f \colon E \to F$ be a function. For every $a \in E$, the function $f$ is continuous at $a \in E$ iff for every neighborhood $N$ of $f(a) \in F$, the set $f^{-1}(N)$ is a neighborhood of $a$. The function $f$ is continuous on $E$ iff $f^{-1}(V)$ is an open set in $\mathcal{O}_E$ for every open set $V \in \mathcal{O}_F$.
If $E$ and $F$ are metric spaces defined by metrics $d_1$ and $d_2$, we can show easily that $f$ is continuous at $a$ iff
for every $\epsilon > 0$, there is some $\delta > 0$ such that, for every $x \in E$,
if $d_1(a, x) \le \delta$, then $d_2(f(a), f(x)) \le \epsilon$.
Similarly, if $E$ and $F$ are normed vector spaces defined by norms $\| \cdot \|_1$ and $\| \cdot \|_2$, we can show easily that $f$ is continuous at $a$ iff
for every $\epsilon > 0$, there is some $\delta > 0$ such that, for every $x \in E$,
if $\|x - a\|_1 \le \delta$, then $\|f(x) - f(a)\|_2 \le \epsilon$.
It is worth noting that continuity is a topological notion, in the sense that equivalent
metrics (or equivalent norms) define exactly the same notion of continuity.
If $(E, \mathcal{O}_E)$ and $(F, \mathcal{O}_F)$ are topological spaces, and $f \colon E \to F$ is a function, for every nonempty subset $A \subseteq E$, we say that $f$ is continuous on $A$ if the restriction of $f$ to $A$ is continuous with respect to $(A, \mathcal{U})$ and $(F, \mathcal{O}_F)$, where $\mathcal{U}$ is the subspace topology induced by $\mathcal{O}_E$ on $A$.
Given a product $E_1 \times \cdots \times E_n$ of topological spaces, as usual, we let $\pi_i \colon E_1 \times \cdots \times E_n \to E_i$ be the projection function such that $\pi_i(x_1, \ldots, x_n) = x_i$. It is immediately verified that each $\pi_i$ is continuous.
Given a topological space (E, O), we say that a point a 2 E is isolated if {a} is an open
set in O. Then, if (E, OE ) and (F, OF ) are topological spaces, any function f : E ! F is
continuous at every isolated point a 2 E. In the discrete topology, every point is isolated.
In a nontrivial normed vector space $(E, \| \cdot \|)$ (with $E \ne \{0\}$), no point is isolated. To show this, we show that every open ball $B_0(u, \rho)$ contains some vectors different from $u$. Indeed, since $E$ is nontrivial, there is some $v \in E$ such that $v \ne 0$, and thus $\lambda = \|v\| > 0$ (by (N1)). Let
$$w = u + \frac{\rho}{\lambda + 1}\, v.$$
Since $v \ne 0$ and $\rho > 0$, we have $w \ne u$. Then,
$$\|w - u\| = \left\| \frac{\rho}{\lambda + 1}\, v \right\| = \frac{\rho \lambda}{\lambda + 1} < \rho,$$
which shows that $\|w - u\| < \rho$, for $w \ne u$.
$$f(x, y) = \frac{xy}{x^2 + y^2}.$$
The function $f$ is continuous on $\mathbb{R} \times \mathbb{R} - \{(0, 0)\}$, but on the line $y = mx$, with $m \ne 0$, we have $f(x, y) = \frac{m}{1 + m^2} \ne 0$, and thus, on this line, $f(x, y)$ does not approach $0$ when $(x, y)$ approaches $(0, 0)$.
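A quick numerical sketch (ours, not from the text) makes this visible: evaluating $f$ along the line $y = mx$ as $x$ shrinks, the value never moves from $m/(1 + m^2)$.

def f(x, y):
    return x * y / (x**2 + y**2)

m = 2.0
for x in (1.0, 0.1, 0.001, 1e-6):
    print(x, f(x, m * x))      # always 0.4, no matter how small x is
print(m / (1 + m**2))          # 0.4 = m / (1 + m**2)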
The following proposition is useful for showing that real-valued functions are continuous.
Proposition 26.9. If $E$ is a topological space, and $(\mathbb{R}, |x - y|)$ the reals under the standard topology, for any two functions $f \colon E \to \mathbb{R}$ and $g \colon E \to \mathbb{R}$, for any $a \in E$, for any $\lambda \in \mathbb{R}$, if $f$ and $g$ are continuous at $a$, then $f + g$, $\lambda f$, $f \cdot g$ are continuous at $a$, and $f / g$ is continuous at $a$ if $g(a) \ne 0$.
Proof. Left as an exercise.
Using Proposition 26.9, we can show easily that every real polynomial function is continuous.
The notion of isomorphism of topological spaces is defined as follows.
Definition 26.11. Let $(E, \mathcal{O}_E)$ and $(F, \mathcal{O}_F)$ be topological spaces, and let $f \colon E \to F$ be a function. We say that $f$ is a homeomorphism between $E$ and $F$ if $f$ is bijective, and both $f \colon E \to F$ and $f^{-1} \colon F \to E$ are continuous.
One should be careful that a bijective continuous function $f \colon E \to F$ is not necessarily a homeomorphism. For example, if $E = \mathbb{R}$ with the discrete topology, and $F = \mathbb{R}$ with the standard topology, the identity is not a homeomorphism. Another interesting example involving a parametric curve is given below. Let $L \colon \mathbb{R} \to \mathbb{R}^2$ be the function defined such that
$$L_1(t) = \frac{t(1 + t^2)}{1 + t^4}, \qquad L_2(t) = \frac{t(1 - t^2)}{1 + t^4}.$$
If we think of $(x(t), y(t)) = (L_1(t), L_2(t))$ as a geometric point in $\mathbb{R}^2$, the set of points $(x(t), y(t))$ obtained by letting $t$ vary in $\mathbb{R}$ from $-\infty$ to $+\infty$ defines a curve having the shape of a figure eight, with self-intersection at the origin, called the lemniscate of Bernoulli. The map $L$ is continuous, and in fact bijective, but its inverse $L^{-1}$ is not continuous. Indeed, when we approach the origin on the branch of the curve in the upper left quadrant (i.e., points such that $x \le 0$, $y \ge 0$), then $t$ goes to $-\infty$, and when we approach the origin on the branch of the curve in the lower right quadrant (i.e., points such that $x \ge 0$, $y \le 0$), then $t$ goes to $+\infty$.
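The behaviour near the origin can be sampled numerically; the short Python sketch below (our illustration) shows that points of the curve close to $(0, 0)$ come from parameter values that are far apart, which is exactly why $L^{-1}$ cannot be continuous there.

def L(t):
    d = 1.0 + t**4
    return (t * (1 + t**2) / d, t * (1 - t**2) / d)

for t in (10.0, 100.0, -10.0, -100.0):
    x, y = L(t)
    print(t, x, y)
# As t -> +infinity the points approach (0, 0) with x > 0, y < 0 (lower right branch);
# as t -> -infinity they approach (0, 0) with x < 0, y > 0 (upper left branch).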
We also review the concept of limit of a sequence. Given any set $E$, a sequence is any function $x \colon \mathbb{N} \to E$, usually denoted by $(x_n)_{n \in \mathbb{N}}$, or $(x_n)_{n \ge 0}$, or even by $(x_n)$.
Definition 26.12. Given a topological space $(E, \mathcal{O})$, we say that a sequence $(x_n)_{n \in \mathbb{N}}$ converges to some $a \in E$ if for every open set $U$ containing $a$, there is some $n_0 \ge 0$ such that $x_n \in U$ for all $n \ge n_0$. We also say that $a$ is a limit of $(x_n)_{n \in \mathbb{N}}$.
When $E$ is a metric space with metric $d$, it is easy to show that this is equivalent to the fact that
for every $\epsilon > 0$, there is some $n_0 \ge 0$ such that $d(x_n, a) \le \epsilon$ for all $n \ge n_0$.
When $E$ is a normed vector space with norm $\| \cdot \|$, it is easy to show that this is equivalent to the fact that
for every $\epsilon > 0$, there is some $n_0 \ge 0$ such that $\|x_n - a\| \le \epsilon$ for all $n \ge n_0$.
The following proposition shows the importance of the Hausdorff separation axiom.
Proposition 26.10. Given a topological space $(E, \mathcal{O})$, if the Hausdorff separation axiom holds, then every sequence has at most one limit.
Proof. Left as an exercise.
It is worth noting that the notion of limit is topological, in the sense that a sequence converges to a limit $b$ iff it converges to the same limit $b$ in any equivalent metric (and similarly for equivalent norms).
We still need one more concept of limit for functions.
Definition 26.13. Let $(E, \mathcal{O}_E)$ and $(F, \mathcal{O}_F)$ be topological spaces, let $A$ be some nonempty subset of $E$, and let $f \colon A \to F$ be a function. For any $a \in \overline{A}$ and any $b \in F$, we say that $f(x)$ approaches $b$ as $x$ approaches $a$ with values in $A$ if for every open set $V \in \mathcal{O}_F$ containing $b$, there is some open set $U \in \mathcal{O}_E$ containing $a$ such that $f(U \cap A) \subseteq V$. This is denoted by
$$\lim_{x \to a,\, x \in A} f(x) = b.$$
First, note that by Proposition 26.2, since $a \in \overline{A}$, for every open set $U$ containing $a$, we have $U \cap A \ne \emptyset$, and the definition is nontrivial. Also, even if $a \in A$, the value $f(a)$ of $f$ at $a$ plays no role in this definition. When $E$ and $F$ are metric spaces with metrics $d_1$ and $d_2$, it can be shown easily that the definition can be stated as follows:
For every $\epsilon > 0$, there is some $\delta > 0$ such that, for every $x \in A$,
if $d_1(x, a) \le \delta$, then $d_2(f(x), b) \le \epsilon$.
When $E$ and $F$ are normed vector spaces with norms $\| \cdot \|_1$ and $\| \cdot \|_2$, it can be shown easily that the definition can be stated as follows:
For every $\epsilon > 0$, there is some $\delta > 0$ such that, for every $x \in A$,
if $\|x - a\|_1 \le \delta$, then $\|f(x) - b\|_2 \le \epsilon$.
We have the following result relating continuity at a point and the previous notion.
Proposition 26.11. Let $(E, \mathcal{O}_E)$ and $(F, \mathcal{O}_F)$ be two topological spaces, and let $f \colon E \to F$ be a function. For any $a \in E$, the function $f$ is continuous at $a$ iff $f(x)$ approaches $f(a)$ when $x$ approaches $a$ (with values in $E$).
Proof. Left as a trivial exercise.
Another important proposition relating the notion of convergence of a sequence to continuity is stated without proof.
Proposition 26.12. Let (E, OE ) and (F, OF ) be two topological spaces, and let f : E ! F
be a function.
(1) If f is continuous, then for every sequence (xn )n2N in E, if (xn ) converges to a, then
(f (xn )) converges to f (a).
(2) If E is a metric space, and (f (xn )) converges to f (a) whenever (xn ) converges to a,
for every sequence (xn )n2N in E, then f is continuous.
A special case of Definition 26.13 will be used when $E$ and $F$ are (nontrivial) normed vector spaces with norms $\| \cdot \|_1$ and $\| \cdot \|_2$. Let $U$ be any nonempty open subset of $E$. We showed earlier that $E$ has no isolated points and that every set $\{v\}$ is closed, for every $v \in E$. Since $E$ is nontrivial, for every $v \in U$, there is a nontrivial open ball contained in $U$ (an open ball not reduced to its center). Then, for every $v \in U$, $A = U - \{v\}$ is open and nonempty, and clearly, $v \in \overline{A}$. For any $v \in U$, if $f(x)$ approaches $b$ when $x$ approaches $v$ with values in $A = U - \{v\}$, we say that $f(x)$ approaches $b$ when $x$ approaches $v$ with values $\ne v$ in $U$. This is denoted by
$$\lim_{x \to v,\, x \in U,\, x \ne v} f(x) = b.$$
Remark: Variations of the above case show up in the following case: $E = \mathbb{R}$, and $F$ is some arbitrary topological space. Let $A$ be some nonempty subset of $\mathbb{R}$, and let $f \colon A \to F$ be some function. For any $a \in A$, we say that $f$ is continuous on the right at $a$ if
$$\lim_{x \to a,\, x \in A \cap [a, +\infty[} f(x) = f(a).$$
Let us consider another variation. Let $A$ be some nonempty subset of $\mathbb{R}$, and let $f \colon A \to F$ be some function. For any $a \in A$, we say that $f$ has a discontinuity of the first kind at $a$ if
$$\lim_{x \to a,\, x \in A \cap\, ]-\infty, a[} f(x) = f(a_-)$$
and
$$\lim_{x \to a,\, x \in A \cap\, ]a, +\infty[} f(x) = f(a_+)$$
26.4 Connected Sets
We claim that a locally constant function is continuous. In fact, we will prove that $f^{-1}(V)$ is open for every subset $V \subseteq Y$ (not just for an open set $V$). It is enough to show that $f^{-1}(y)$ is open for every $y \in Y$, since for every subset $V \subseteq Y$,
$$f^{-1}(V) = \bigcup_{y \in V} f^{-1}(y),$$
and open sets are closed under arbitrary unions. However, either $f^{-1}(y) = \emptyset$ if $y \in Y - f(X)$, or $f$ is constant on $U = f^{-1}(y)$ if $y \in f(X)$ (with value $y$), and since $f$ is locally constant, for every $x \in U$, there is some open set $W \subseteq X$ such that $x \in W$ and $f$ is constant on $W$, which implies that $f(w) = y$ for all $w \in W$ and thus, that $W \subseteq U$, showing that $U$ is a union of open sets and thus, is open. The following proposition shows that a space is connected iff every locally constant function is constant:
Proposition 26.13. A topological space is connected iff every locally constant function is constant.
Proof. First, assume that $X$ is connected. Let $f \colon X \to Y$ be a locally constant function to some space $Y$ and assume that $f$ is not constant. Pick any $y \in f(X)$. Since $f$ is not constant, $U_1 = f^{-1}(y) \ne X$, and of course, $U_1 \ne \emptyset$. We proved just before Proposition 26.13 that $f^{-1}(V)$ is open for every subset $V \subseteq Y$, and thus $U_1 = f^{-1}(y) = f^{-1}(\{y\})$ and $U_2 = f^{-1}(Y - \{y\})$ are both open and nonempty; clearly $X = U_1 \cup U_2$ and $U_1$ and $U_2$ are disjoint. This contradicts the fact that $X$ is connected, so $f$ must be constant.
Assume that every locally constant function $f \colon X \to Y$, to a Hausdorff space $Y$, is constant. If $X$ is not connected, we can write $X = U_1 \cup U_2$, where both $U_1, U_2$ are open, disjoint, and nonempty. We can define the function $f \colon X \to \mathbb{R}$ such that $f(x) = 1$ on $U_1$ and $f(x) = 0$ on $U_2$. Since $U_1$ and $U_2$ are open, the function $f$ is locally constant, and yet not constant, a contradiction.
The following standard proposition characterizing the connected subsets of R can be
found in most topology texts (for example, Munkres [84], Schwartz [93]). For the sake of
completeness, we give a proof.
Proposition 26.14. A subset of the real line, $\mathbb{R}$, is connected iff it is an interval, i.e., of the form $[a, b]$, or $]a, b]$ where $a = -\infty$ is possible, or $[a, b[$ where $b = +\infty$ is possible, or $]a, b[$ where $a = -\infty$ or $b = +\infty$ is possible.
Proof. Assume that $A$ is a connected nonempty subset of $\mathbb{R}$. The cases where $A = \emptyset$ or $A$ consists of a single point are trivial. We show that whenever $a, b \in A$, $a < b$, then the entire interval $[a, b]$ is a subset of $A$. Indeed, if this was not the case, there would be some $c \in\, ]a, b[$ such that $c \notin A$, and then we could write $A = (\,]-\infty, c[\, \cap A) \cup (\,]c, +\infty[\, \cap A)$, where $]-\infty, c[\, \cap A$ and $]c, +\infty[\, \cap A$ are nonempty and disjoint open subsets of $A$, contradicting the fact that $A$ is connected. It follows easily that $A$ must be an interval.
Conversely, we show that an interval $I$ must be connected. Let $A$ be any nonempty subset of $I$ which is both open and closed in $I$. We show that $I = A$. Fix any $x \in A$ and consider the set $R_x$ of all $y$ such that $[x, y] \subseteq A$. If the set $R_x$ is unbounded, then $R_x = [x, +\infty[$. Otherwise, if this set is bounded, let $b$ be its least upper bound. We claim that $b$ is the right boundary of the interval $I$. Because $A$ is closed in $I$, unless $I$ is open on the right and $b$ is its right boundary, we must have $b \in A$. In the first case, $A \cap [x, b[\, = I \cap [x, b[\, = [x, b[$. In the second case, because $A$ is also open in $I$, unless $b$ is the right boundary of the interval $I$ (closed on the right), there is some open set $]b - \eta, b + \eta[$ contained in $A$, which implies that $[x, b + \eta/2] \subseteq A$, contradicting the fact that $b$ is the least upper bound of the set $R_x$. Thus, $b$ must be the right boundary of the interval $I$ (closed on the right). A similar argument applies to the set $L_y$ of all $x$ such that $[x, y] \subseteq A$: either $L_y$ is unbounded, or its greatest lower bound $a$ is the left boundary of $I$ (open or closed on the left). In all cases, we showed that $A = I$, and the interval must be connected.
A characterization of the connected subsets of $\mathbb{R}^n$ is harder and requires the notion of arcwise connectedness. One of the most important properties of connected sets is that they are preserved by continuous maps.
Proposition 26.15. Given any continuous map $f \colon E \to F$, if $A \subseteq E$ is connected, then $f(A)$ is connected.
Proof. If $f(A)$ is not connected, then there exist some nonempty open sets $U, V$ in $F$ such that $f(A) \cap U$ and $f(A) \cap V$ are nonempty and disjoint, and
$$f(A) = (f(A) \cap U) \cup (f(A) \cap V).$$
Then $f^{-1}(U)$ and $f^{-1}(V)$ are nonempty and open since $f$ is continuous, and
$$A = (A \cap f^{-1}(U)) \cup (A \cap f^{-1}(V)),$$
with $A \cap f^{-1}(U)$ and $A \cap f^{-1}(V)$ nonempty, disjoint, and open in $A$, contradicting the fact that $A$ is connected.
where $A_i \cap U$ and $A_i \cap V$ are disjoint, since $A_i \subseteq A$ and $A \cap U$ and $A \cap V$ are disjoint. Since $A_i$ is connected, either $A_i \cap U = \emptyset$ or $A_i \cap V = \emptyset$. This implies that either $A_i \subseteq A \cap U$ or $A_i \subseteq A \cap V$. However, by assumption, $A_i \cap A_j \ne \emptyset$ for all $i, j \in I$, and thus, either both $A_i \subseteq A \cap U$ and $A_j \subseteq A \cap U$, or both $A_i \subseteq A \cap V$ and $A_j \subseteq A \cap V$, since $A \cap U$ and $A \cap V$ are disjoint. Thus, we conclude that either $A_i \subseteq A \cap U$ for all $i \in I$, or $A_i \subseteq A \cap V$ for all $i \in I$. But this proves that either
$$A = \bigcup_{i \in I} A_i \subseteq A \cap U,$$
or
$$A = \bigcup_{i \in I} A_i \subseteq A \cap V,$$
contradicting the fact that both $A \cap U$ and $A \cap V$ are disjoint and nonempty. Thus, $A$ must be connected.
In particular, the above lemma applies when the connected sets in a family (Ai )i2I have
a point in common.
Lemma 26.17. If $A$ is a connected subset of a topological space $E$, then for every subset $B$ such that $A \subseteq B \subseteq \overline{A}$, where $\overline{A}$ is the closure of $A$ in $E$, the set $B$ is connected.
Proof. If $B$ is not connected, then there are two nonempty open subsets $U, V$ of $E$ such that $B \cap U$ and $B \cap V$ are disjoint and nonempty, and
$$B = (B \cap U) \cup (B \cap V).$$
Since $A \subseteq B$, the above implies that
$$A = (A \cap U) \cup (A \cap V),$$
and since $A$ is connected, either $A \cap U = \emptyset$, or $A \cap V = \emptyset$. Without loss of generality, assume that $A \cap V = \emptyset$, which implies that $A \subseteq A \cap U \subseteq B \cap U$. However, $B \cap U$ is closed in the subspace topology for $B$, and since $B \subseteq \overline{A}$ and $\overline{A}$ is closed in $E$, the closure of $A$ in $B$ w.r.t. the subspace topology of $B$ is clearly $B \cap \overline{A} = B$, which implies that $B \subseteq B \cap U$ (since the closure is the smallest closed set containing the given set). Thus, $B \cap V = \emptyset$, a contradiction.
In particular, Lemma 26.17 shows that if $A$ is a connected subset, then its closure $\overline{A}$ is also connected. We are now ready to introduce the connected components of a space.
Definition 26.15. Given a topological space $(E, \mathcal{O})$, we say that two points $a, b \in E$ are connected if there is some connected subset $A$ of $E$ such that $a \in A$ and $b \in A$.
Proposition 26.19 shows that in a locally connected space, the connected open sets form a
basis for the topology. It is easily seen that Rn is locally connected. Another very important
property of surfaces and more generally, manifolds, is to be arcwise connected. The intuition
is that any two points can be joined by a continuous arc of curve. This is formalized as
follows.
Definition 26.17. Given a topological space $(E, \mathcal{O})$, an arc (or path) is a continuous map $\gamma \colon [a, b] \to E$, where $[a, b]$ is a closed interval of the real line $\mathbb{R}$. The point $\gamma(a)$ is the initial point of the arc and the point $\gamma(b)$ is the terminal point of the arc. We say that $\gamma$ is an arc joining $\gamma(a)$ and $\gamma(b)$. An arc is a closed curve if $\gamma(a) = \gamma(b)$. The set $\gamma([a, b])$ is the trace of the arc $\gamma$.
could be
$$\delta(t) = \begin{cases} \gamma(2t) & \text{if } 0 \le t \le 1/2; \\ \gamma'(2t - 1) & \text{if } 1/2 \le t \le 1. \end{cases}$$
The inverse,
a neighborhood of $b$). Thus, $b$ can be joined to every point $c \in U$ by an arc, and since by the definition of $F_a$ there is an arc from $a$ to $b$, the composition of these two arcs yields an arc from $a$ to $c$, which shows that $c \in F_a$. But then $U \subseteq F_a$ and thus, $F_a$ is open. Now assume that $b$ is in the complement of $F_a$. As in the previous case, there is some arcwise connected neighborhood $U$ containing $b$. Thus, every point $c \in U$ can be joined to $b$ by an arc. If there was an arc joining $a$ to $c$, we would get an arc from $a$ to $b$, contradicting the fact that $b$ is in the complement of $F_a$. Thus, every point $c \in U$ is in the complement of $F_a$, which shows that $U$ is contained in the complement of $F_a$, and thus, that the complement of $F_a$ is open. Consequently, we have shown that $F_a$ is both open and closed, and since it is nonempty, we must have $E = F_a$, which shows that $E$ is arcwise connected.
If E is locally arcwise connected, the above argument shows that the connected components of E are arcwise connected.
It is not true that a connected space is arcwise connected. For example, the space consisting of the graph of the function
$$f(x) = \sin(1/x),$$
where $x > 0$, together with the portion of the $y$-axis for which $-1 \le y \le 1$, is connected, but not arcwise connected.
A trivial modification of the proof of Theorem 26.20 shows that in a normed vector
space, E, a connected open set is arcwise connected by polygonal lines (i.e., arcs consisting
of line segments). This is because in every open ball, any two points are connected by a line
segment. Furthermore, if E is finite dimensional, these polygonal lines can be forced to be
parallel to basis vectors.
We now consider compactness.
26.5 Compact Sets
The property of compactness is very important in topology and analysis. We provide a quick review geared towards the study of surfaces and for details, we refer the reader to Munkres [84], Schwartz [93]. In this section, we will need to assume that the topological spaces are Hausdorff spaces. This is not a luxury, as many of the results are false otherwise.
There are various equivalent ways of defining compactness. For our purposes, the most
convenient way involves the notion of open cover.
Definition 26.20. Given a topological space $E$, for any subset $A$ of $E$, an open cover $(U_i)_{i \in I}$ of $A$ is a family of open subsets of $E$ such that $A \subseteq \bigcup_{i \in I} U_i$. An open subcover of an open cover $(U_i)_{i \in I}$ of $A$ is any subfamily $(U_j)_{j \in J}$ which is an open cover of $A$, with $J \subseteq I$. An open cover $(U_i)_{i \in I}$ of $A$ is finite if $I$ is finite. The topological space $E$ is compact if it is Hausdorff and for every open cover $(U_i)_{i \in I}$ of $E$, there is a finite open subcover $(U_j)_{j \in J}$ of $E$. Given any subset $A$ of $E$, we say that $A$ is compact if it is compact with respect to the subspace topology. We say that $A$ is relatively compact if its closure $\overline{A}$ is compact.
Another equivalent and useful characterization can be given in terms of families having the finite intersection property. A family $(F_i)_{i \in I}$ of sets has the finite intersection property if $\bigcap_{j \in J} F_j \ne \emptyset$ for every finite subset $J$ of $I$. We have the following proposition:
subcover, splitting $[a_n, b_n]$ into two equal intervals, we know that at least one of the two has no finite open subcover and we denote such a bad interval by $[a_{n+1}, b_{n+1}]$. The sequence $(a_n)$ is nondecreasing and bounded from above by $b$, and thus, by a fundamental property of the real line, it converges to its least upper bound $\alpha$. Similarly, the sequence $(b_n)$ is nonincreasing and bounded from below by $a$ and thus, it converges to its greatest lower bound $\beta$. Since $[a_n, b_n]$ has length $(b - a)/2^n$, we must have $\alpha = \beta$. However, the common limit $\alpha = \beta$ of the sequences $(a_n)$ and $(b_n)$ must belong to some open set $U_i$ of the open cover and since $U_i$ is open, it must contain some interval $[c, d]$ containing $\alpha$. Then, because $\alpha$ is the common limit of the sequences $(a_n)$ and $(b_n)$, there is some $N$ such that the intervals $[a_n, b_n]$ are all contained in the interval $[c, d]$ for all $n \ge N$, which contradicts the fact that none of the intervals $[a_n, b_n]$ has a finite open subcover. Thus, $[a, b]$ is indeed compact.
The argument of Proposition 26.22 can be adapted to show that in $\mathbb{R}^m$, every closed set of the form $[a_1, b_1] \times \cdots \times [a_m, b_m]$ is compact. At every stage, we need to divide into $2^m$ subpieces instead of $2$.
The following two propositions give very important properties of the compact sets, and they only hold for Hausdorff spaces:
Proposition 26.23. Given a topological Hausdorff space $E$, for every compact subset $A$ and every point $b$ not in $A$, there exist disjoint open sets $U$ and $V$ such that $A \subseteq U$ and $b \in V$. As a consequence, every compact subset is closed.
Proof. Since $E$ is Hausdorff, for every $a \in A$, there are some disjoint open sets $U_a$ and $V_a$ containing $a$ and $b$ respectively. Thus, the family $(U_a)_{a \in A}$ forms an open cover of $A$. Since $A$ is compact, there is a finite open subcover $(U_j)_{j \in J}$ of $A$, where $J \subseteq A$, and then $\bigcup_{j \in J} U_j$ is an open set containing $A$ disjoint from the open set $\bigcap_{j \in J} V_j$ containing $b$. This shows that every point $b$ in the complement of $A$ belongs to some open set in this complement and thus, that the complement is open, i.e., that $A$ is closed.
Actually, the proof of Proposition 26.23 can be used to show the following useful property:
Proposition 26.24. Given a topological Hausdorff space $E$, for every pair of compact disjoint subsets $A$ and $B$, there exist disjoint open sets $U$ and $V$ such that $A \subseteq U$ and $B \subseteq V$.
Proof. We repeat the argument of Proposition 26.23 with $B$ playing the role of $b$ and use Proposition 26.23 to find disjoint open sets $U_a$ containing $a \in A$ and $V_a$ containing $B$.
The following proposition shows that in a compact topological space, every closed set is
compact:
Proposition 26.25. Given a compact topological space, E, every closed set is compact.
Proof. Since $A$ is closed, $E - A$ is open, and from any open cover $(U_i)_{i \in I}$ of $A$, we can form an open cover of $E$ by adding $E - A$ to $(U_i)_{i \in I}$; since $E$ is compact, a finite subcover $(U_j)_{j \in J} \cup \{E - A\}$ of $E$ can be extracted such that $(U_j)_{j \in J}$ is a finite subcover of $A$.
Remark: Proposition 26.25 also holds for quasi-compact spaces, i.e., the Hausdorff separation property is not needed.
Putting Proposition 26.24 and Proposition 26.25 together, we note that if $X$ is compact, then for every pair of disjoint closed sets $A$ and $B$, there exist disjoint open sets $U$ and $V$ such that $A \subseteq U$ and $B \subseteq V$. We say that $X$ is a normal space.
Proposition 26.26. Given a compact topological space $E$, for every $a \in E$, for every neighborhood $V$ of $a$, there exists a compact neighborhood $U$ of $a$ such that $U \subseteq V$.
Proof. Since $V$ is a neighborhood of $a$, there is some open subset $O$ of $V$ containing $a$. Then the complement $K = E - O$ of $O$ is closed, and since $E$ is compact, by Proposition 26.25, $K$ is compact. Now, if we consider the family of all closed sets of the form $K \cap F$, where $F$ is any closed neighborhood of $a$, since $a \notin K$, this family has an empty intersection and thus, there is a finite number of closed neighborhoods $F_1, \ldots, F_n$ of $a$ such that $K \cap F_1 \cap \cdots \cap F_n = \emptyset$. Then, $U = F_1 \cap \cdots \cap F_n$ is a compact neighborhood of $a$ contained in $O \subseteq V$.
It can be shown that in a normed vector space of finite dimension, a subset is compact iff it is closed and bounded. For $\mathbb{R}^n$, the proof is simple.
In a normed vector space of infinite dimension, there are closed and bounded sets that
are not compact!
More could be said about compactness in metric spaces but we will only need the notion
of Lebesgue number, which will be discussed a little later. Another crucial property of
compactness is that it is preserved under continuity.
Proposition 26.27. Let $E$ be a topological space and let $F$ be a topological Hausdorff space. For every compact subset $A$ of $E$, for every continuous map $f \colon E \to F$, the subspace $f(A)$ is compact.
Proof. Let $(U_i)_{i \in I}$ be an open cover of $f(A)$. We claim that $(f^{-1}(U_i))_{i \in I}$ is an open cover of $A$, which is easily checked. Since $A$ is compact, there is a finite open subcover $(f^{-1}(U_j))_{j \in J}$ of $A$, and thus, $(U_j)_{j \in J}$ is an open subcover of $f(A)$.
As a corollary of Proposition 26.27, if $E$ is compact, $F$ is Hausdorff, and $f \colon E \to F$ is continuous and bijective, then $f$ is a homeomorphism. Indeed, it is enough to show that $f^{-1}$ is continuous, which is equivalent to showing that $f$ maps closed sets to closed sets. However, closed sets are compact and Proposition 26.27 shows that compact sets are mapped to compact sets, which, by Proposition 26.23, are closed.
Definition 26.22. Let $(E, \mathcal{O})$ be a locally compact space. Let $\omega$ be any point not in $E$, and let $E_\omega = E \cup \{\omega\}$. Define the family $\mathcal{O}_\omega$ as follows:
$$\mathcal{O}_\omega = \mathcal{O} \cup \{(E - K) \cup \{\omega\} \mid K \text{ compact in } E\}.$$
The pair $(E_\omega, \mathcal{O}_\omega)$ is called the Alexandroff compactification (or one-point compactification) of $(E, \mathcal{O})$.
The following theorem shows that $(E_\omega, \mathcal{O}_\omega)$ is indeed a topological space, and that it is compact.
Theorem 26.29. Let $E$ be a locally compact topological space. The Alexandroff compactification $E_\omega$ of $E$ is a compact space such that $E$ is a subspace of $E_\omega$ and, if $E$ is not compact, then $\overline{E} = E_\omega$.
Proof. The verification that $\mathcal{O}_\omega$ is a family of open sets is not difficult but a bit tedious. Details can be found in Munkres [84] or Schwartz [93]. Let us show that $E_\omega$ is compact. For every open cover $(U_i)_{i \in I}$ of $E_\omega$, since $\omega$ must be covered, there is some $U_{i_0}$ of the form
$$U_{i_0} = (E - K_0) \cup \{\omega\},$$
where $K_0$ is compact in $E$. Consider the family $(V_i)_{i \in I}$ defined such that
$$V_i = U_i \quad \text{if } U_i \in \mathcal{O}, \qquad V_i = E - K \quad \text{if } U_i = (E - K) \cup \{\omega\},$$
where $K$ is compact in $E$. Then, because each $K$ is compact and thus closed in $E$ (since $E$ is Hausdorff), $E - K$ is open, and every $V_i$ is an open subset of $E$. Furthermore, the family $(V_i)_{i \in (I - \{i_0\})}$ is an open cover of $K_0$. Since $K_0$ is compact, there is a finite open subcover $(V_j)_{j \in J}$ of $K_0$, and thus, $(U_j)_{j \in J \cup \{i_0\}}$ is a finite open cover of $E_\omega$.
Let us show that $E_\omega$ is Hausdorff. Given any two points $a, b \in E_\omega$, if both $a, b \in E$, since $E$ is Hausdorff and every open set in $\mathcal{O}$ is an open set in $\mathcal{O}_\omega$, there exist disjoint open sets $U, V$ (in $\mathcal{O}$) such that $a \in U$ and $b \in V$. If $b = \omega$, since $E$ is locally compact, there is some compact set $K$ containing an open set $U$ containing $a$, and then $U$ and $V = (E - K) \cup \{\omega\}$ are disjoint open sets (in $\mathcal{O}_\omega$) such that $a \in U$ and $b \in V$.
The space $E$ is a subspace of $E_\omega$ because for every open set $U$ in $\mathcal{O}_\omega$, either $U \in \mathcal{O}$ and $E \cap U = U$ is open in $E$, or $U = (E - K) \cup \{\omega\}$, where $K$ is compact in $E$, and thus, $U \cap E = E - K$, which is open in $E$, since $K$ is compact in $E$ and thus closed (since $E$ is Hausdorff). Finally, if $E$ is not compact, for every compact subset $K$ of $E$, $E - K$ is nonempty and thus, for every open set $U = (E - K) \cup \{\omega\}$ containing $\omega$, we have $U \cap E \ne \emptyset$, which shows that $\omega \in \overline{E}$ and thus, that $\overline{E} = E_\omega$.
Finally, in studying surfaces and manifolds, an important property is the existence of a countable basis for the topology. Indeed, this property guarantees the existence of triangulations of surfaces, a crucial property.
Conversely, assume that $E$ is compact, and let $(x_n)$ be any sequence. If $l \in E$ is not an accumulation point of the sequence, then there is some open set $U_l$ such that $l \in U_l$ and $x_n \in U_l$ for only finitely many $n$. Thus, if $(x_n)$ does not have any accumulation point, the family $(U_l)_{l \in E}$ is an open cover of $E$ and since $E$ is compact, it has some finite open subcover $(U_l)_{l \in J}$, where $J$ is a finite subset of $E$. But every $U_l$ with $l \in J$ is such that $x_n \in U_l$ for only finitely many $n$, and since $J$ is finite, $x_n \in \bigcup_{l \in J} U_l$ for only finitely many $n$, which contradicts the fact that $(U_l)_{l \in J}$ is an open cover of $E$, and thus contains all the $x_n$. Thus, $(x_n)$ has some accumulation point.
Remark: It should be noted that the proof showing that if E is compact, then every sequence has some accumulation point, holds for any arbitrary compact space (the proof does
not use a countable basis for the topology). The converse also holds for metric spaces. We
will prove this converse since it is a major property of metric spaces.
Given a metric space in which every sequence has some accumulation point, we first prove
the existence of a Lebesgue number .
Lemma 26.32. Given a metric space $E$, if every sequence $(x_n)$ has an accumulation point, then for every open cover $(U_i)_{i \in I}$ of $E$, there is some $\delta > 0$ (a Lebesgue number for $(U_i)_{i \in I}$) such that, for every open ball $B_0(a, \epsilon)$ of radius $\epsilon \le \delta$, there is some open subset $U_i$ such that $B_0(a, \epsilon) \subseteq U_i$.
Proof. If there was no $\delta$ with the above property, then, for every natural number $n$, there would be some open ball $B_0(a_n, 1/n)$ which is not contained in any open set $U_i$ of the open cover $(U_i)_{i \in I}$. However, the sequence $(a_n)$ has some accumulation point $a$, and since $(U_i)_{i \in I}$ is an open cover of $E$, there is some $U_i$ such that $a \in U_i$. Since $U_i$ is open, there is some open ball of center $a$ and radius $\epsilon$ contained in $U_i$. Now, since $a$ is an accumulation point of the sequence $(a_n)$, every open set containing $a$ contains $a_n$ for infinitely many $n$ and thus, there is some $n$ large enough so that
$$1/n \le \epsilon/2 \quad \text{and} \quad a_n \in B_0(a, \epsilon/2),$$
which implies that
$$B_0(a_n, 1/n) \subseteq B_0(a, \epsilon) \subseteq U_i,$$
a contradiction.
By a previous remark, since the proof of Proposition 26.31 implies that in a compact
topological space, every sequence has some accumulation point, by Lemma 26.32, in a compact metric space, every open cover has a Lebesgue number. This fact can be used to prove
another important property of compact metric spaces, the uniform continuity theorem.
Definition 26.25. Given two metric spaces $(E, d_E)$ and $(F, d_F)$, a function $f \colon E \to F$ is uniformly continuous if for every $\epsilon > 0$, there is some $\delta > 0$ such that, for all $a, b \in E$,
if $d_E(a, b) \le \delta$, then $d_F(f(a), f(b)) \le \epsilon$.
A metric space satisfying the condition of Lemma 26.34 is sometimes called precompact (or totally bounded). We now obtain the Weierstrass–Bolzano property.
Theorem 26.35. A metric space $E$ is compact iff every sequence $(x_n)$ has an accumulation point.
Proof. We already observed that the proof of Proposition 26.31 shows that for any compact space (not necessarily metric), every sequence $(x_n)$ has an accumulation point. Conversely, let $E$ be a metric space, and assume that every sequence $(x_n)$ has an accumulation point. Given any open cover $(U_i)_{i \in I}$ of $E$, we must find a finite open subcover of $E$. By Lemma 26.32, there is some $\delta > 0$ (a Lebesgue number for $(U_i)_{i \in I}$) such that, for every open ball $B_0(a, \epsilon)$ of radius $\epsilon \le \delta$, there is some open subset $U_j$ such that $B_0(a, \epsilon) \subseteq U_j$. By Lemma 26.34, for every $\delta > 0$, there is a finite open cover $B_0(a_0, \delta) \cup \cdots \cup B_0(a_n, \delta)$ of $E$ by open balls of radius $\delta$. But from the previous statement, every open ball $B_0(a_i, \delta)$ is contained in some open set $U_{j_i}$, and thus, $\{U_{j_1}, \ldots, U_{j_n}\}$ is an open cover of $E$.
Another very useful characterization of compact metric spaces is obtained in terms of
Cauchy sequences. Such a characterization is quite useful in fractal geometry (and elsewhere). First, recall the definition of a Cauchy sequence and of a complete metric space.
Definition 26.26. Given a metric space $(E, d)$, a sequence $(x_n)_{n \in \mathbb{N}}$ in $E$ is a Cauchy sequence if the following condition holds: for every $\epsilon > 0$, there is some $p \ge 0$ such that, for all $m, n \ge p$, we have $d(x_m, x_n) \le \epsilon$.
If every Cauchy sequence in $(E, d)$ converges, we say that $(E, d)$ is a complete metric space.
First, let us show the following proposition:
Proposition 26.36. Given a metric space $E$, if a Cauchy sequence $(x_n)$ has some accumulation point $a$, then $a$ is the limit of the sequence $(x_n)$.
Proof. Since $(x_n)$ is a Cauchy sequence, for every $\epsilon > 0$, there is some $p \ge 0$ such that, for all $m, n \ge p$, we have $d(x_m, x_n) \le \epsilon/2$. Since $a$ is an accumulation point for $(x_n)$, for infinitely many $n$, we have $d(x_n, a) \le \epsilon/2$, and thus, for at least some $n \ge p$, we have $d(x_n, a) \le \epsilon/2$. Then, for all $m \ge p$,
$$d(x_m, a) \le d(x_m, x_n) + d(x_n, a) \le \epsilon,$$
which shows that $a$ is the limit of the sequence $(x_n)$.
Recall that a metric space is precompact (or totally bounded) if for every $\epsilon > 0$, there is a finite open cover $B_0(a_0, \epsilon) \cup \cdots \cup B_0(a_n, \epsilon)$ of $E$ by open balls of radius $\epsilon$. We can now prove the following theorem.
Theorem 26.37. A metric space $E$ is compact iff it is precompact and complete.
Proof. Let $E$ be compact. For every $\epsilon > 0$, the family of all open balls of radius $\epsilon$ is an open cover for $E$ and since $E$ is compact, there is a finite subcover $B_0(a_0, \epsilon) \cup \cdots \cup B_0(a_n, \epsilon)$ of $E$ by open balls of radius $\epsilon$. Thus, $E$ is precompact. Since $E$ is compact, by Theorem 26.35, every sequence $(x_n)$ has some accumulation point. Thus, every Cauchy sequence $(x_n)$ has some accumulation point $a$, and, by Proposition 26.36, $a$ is the limit of $(x_n)$. Thus, $E$ is complete.
Now, assume that $E$ is precompact and complete. We prove that every sequence $(x_n)$ has an accumulation point. By the other direction of Theorem 26.35, this shows that $E$ is compact. Given any sequence $(x_n)$, we construct a Cauchy subsequence $(y_n)$ of $(x_n)$ as follows: Since $E$ is precompact, letting $\epsilon = 1$, there exists a finite cover $\mathcal{U}_1$ of $E$ by open balls of radius $1$. Thus, some open ball $B_o^1$ in the cover $\mathcal{U}_1$ contains infinitely many elements from the sequence $(x_n)$. Let $y_0$ be any element of $(x_n)$ in $B_o^1$. By induction, assume that a sequence of open balls $(B_o^i)_{1 \le i \le m}$ has been defined, such that every ball $B_o^i$ has radius $\frac{1}{2^i}$, contains infinitely many elements from the sequence $(x_n)$, and contains some $y_i$ from $(x_n)$ such that
$$d(y_i, y_{i+1}) \le \frac{1}{2^i},$$
for all $i$, $0 \le i \le m - 1$. Then, letting $\epsilon = \frac{1}{2^{m+1}}$, because $E$ is precompact, there is some finite cover $\mathcal{U}_{m+1}$ of $E$ by open balls of radius $\epsilon$ and thus, of the open ball $B_o^m$. Thus, some open ball $B_o^{m+1}$ in the cover $\mathcal{U}_{m+1}$ contains infinitely many elements from the sequence $(x_n)$, and we let $y_{m+1}$ be any element of $(x_n)$ in $B_o^{m+1}$. Thus, we have defined by induction a sequence $(y_n)$ which is a subsequence of $(x_n)$ and such that
$$d(y_i, y_{i+1}) \le \frac{1}{2^i},$$
for all $i$. However, for all $m, n \ge 1$, we have
$$d(y_m, y_n) \le \sum_{i=m}^{n} \frac{1}{2^i} \le \frac{1}{2^{m-1}},$$
and thus, $(y_n)$ is a Cauchy sequence. Since $E$ is complete, the sequence $(y_n)$ has a limit, and since it is a subsequence of $(x_n)$, the sequence $(x_n)$ has some accumulation point.
If $(E, d)$ is a nonempty complete metric space, every map $f \colon E \to E$ for which there is some $k$ such that $0 \le k < 1$ and
$$d(f(x), f(y)) \le k\, d(x, y)$$
for all $x, y \in E$, has the very important property that it has a unique fixed point, that is, there is a unique $a \in E$ such that $f(a) = a$. A map as above is called a contraction mapping. Furthermore, the fixed point of a contraction mapping can be computed as the limit of a fast converging sequence.
The fixed point property of contraction mappings is used to show some important theorems of analysis, such as the implicit function theorem and the existence of solutions to certain differential equations. It can also be used to show the existence of fractal sets defined in terms of iterated function systems. Since the proof is quite simple, we prove the fixed point property of contraction mappings. First, observe that a contraction mapping is (uniformly) continuous.
Proposition 26.38. If $(E, d)$ is a nonempty complete metric space, every contraction mapping $f \colon E \to E$ has a unique fixed point. Furthermore, for every $x_0 \in E$, defining the sequence $(x_n)$ such that $x_{n+1} = f(x_n)$, the sequence $(x_n)$ converges to the unique fixed point of $f$.
Proof. First, we prove that $f$ has at most one fixed point. Indeed, if $f(a) = a$ and $f(b) = b$, since
$$d(a, b) = d(f(a), f(b)) \le k\, d(a, b)$$
and $0 \le k < 1$, we must have $d(a, b) = 0$, that is, $a = b$.
Next, observe that $d(x_{n+1}, x_n) \le k^n\, d(x_1, x_0)$. Thus, we have
$$d(x_{n+p}, x_n) \le d(x_{n+p}, x_{n+p-1}) + d(x_{n+p-1}, x_{n+p-2}) + \cdots + d(x_{n+1}, x_n) \le (k^{p-1} + k^{p-2} + \cdots + k + 1)\, k^n\, d(x_1, x_0) \le \frac{k^n}{1 - k}\, d(x_1, x_0).$$
We conclude that $d(x_{n+p}, x_n)$ converges to $0$ when $n$ goes to infinity, which shows that $(x_n)$ is a Cauchy sequence. Since $E$ is complete, the sequence $(x_n)$ has a limit $a$. Since $f$ is continuous, the sequence $(f(x_n))$ converges to $f(a)$. But $x_{n+1} = f(x_n)$ converges to $a$ and so $f(a) = a$, the unique fixed point of $f$.
Note that no matter how the starting point $x_0$ of the sequence $(x_n)$ is chosen, $(x_n)$ converges to the unique fixed point of $f$. Also, the convergence is fast, since
$$d(x_n, a) \le \frac{k^n}{1 - k}\, d(x_1, x_0).$$
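As a concrete sketch (our own illustration, not from the text), the map $f(x) = x/2 + 1$ on $\mathbb{R}$ is a contraction with $k = 1/2$ and fixed point $a = 2$; iterating it from any starting point exhibits the geometric error bound above.

def f(x):
    return 0.5 * x + 1.0      # a contraction on R with k = 1/2 and fixed point a = 2

k, a = 0.5, 2.0
x0 = 10.0
x1 = f(x0)
d1 = abs(x1 - x0)             # d(x_1, x_0)
x, n = x1, 1
while n < 20:
    # bound from the proposition: d(x_n, a) <= k**n / (1 - k) * d(x_1, x_0)
    assert abs(x - a) <= k**n / (1 - k) * d1 + 1e-12
    x, n = f(x), n + 1
print(x)                      # converges quickly to the fixed point 2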
The Hausdorff distance between compact subsets of a metric space provides a very nice illustration of some of the theorems on complete and compact metric spaces just presented.
Definition 26.27. Given a metric space $(X, d)$, for any subset $A \subseteq X$, for any $\epsilon \ge 0$, define the $\epsilon$-hull of $A$ as the set
$$V_\epsilon(A) = \{x \in X \mid \exists a \in A,\ d(a, x) \le \epsilon\}.$$
Given any two nonempty bounded subsets $A, B$ of $X$, define $D(A, B)$, the Hausdorff distance between $A$ and $B$, by
$$D(A, B) = \inf\{\epsilon \ge 0 \mid A \subseteq V_\epsilon(B) \text{ and } B \subseteq V_\epsilon(A)\}.$$
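For finite subsets of a metric space (which are compact), the Hausdorff distance can be computed directly; the following sketch is our own illustration and uses the equivalent formula $D(A, B) = \max\big(\sup_{a \in A} \inf_{b \in B} d(a, b),\ \sup_{b \in B} \inf_{a \in A} d(a, b)\big)$ for two finite subsets of the plane.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def hausdorff(A, B):
    # max of the two directed distances between the finite sets A and B
    d_ab = max(min(dist(a, b) for b in B) for a in A)
    d_ba = max(min(dist(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (0.0, 3.0)]
print(hausdorff(A, B))   # 3.0: the smallest eps with A in V_eps(B) and B in V_eps(A)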
Note that since we are considering nonempty bounded subsets, D(A, B) is well defined
(i.e., not infinite). However, D is not necessarily a distance function. It is a distance function
if we restrict our attention to nonempty compact subsets of X (actually, it is also a metric on
closed and bounded subsets). We let K(X) denote the set of all nonempty compact subsets
of X. The remarkable fact is that D is a distance on K(X) and that if X is complete or
compact, then so is K(X). The following theorem is taken from Edgar [33].
Theorem 26.39. If $(X, d)$ is a metric space, then the Hausdorff distance $D$ on the set $\mathcal{K}(X)$ of nonempty compact subsets of $X$ is a distance. If $(X, d)$ is complete, then $(\mathcal{K}(X), D)$ is complete, and if $(X, d)$ is compact, then $(\mathcal{K}(X), D)$ is compact.
Proof. Since (nonempty) compact sets are bounded, $D(A, B)$ is well defined. Clearly, $D$ is symmetric. Assume that $D(A, B) = 0$. Then, for every $\epsilon > 0$, $A \subseteq V_\epsilon(B)$, which means that for every $a \in A$, there is some $b \in B$ such that $d(a, b) \le \epsilon$, and thus, that $A \subseteq \overline{B}$. Since $B$ is closed, $\overline{B} = B$, and we have $A \subseteq B$. Similarly, $B \subseteq A$, and thus, $A = B$. Clearly, if $A = B$, we have $D(A, B) = 0$. It remains to prove the triangle inequality. If $B \subseteq V_{\epsilon_1}(A)$ and $C \subseteq V_{\epsilon_2}(B)$, then
$$V_{\epsilon_2}(B) \subseteq V_{\epsilon_2}(V_{\epsilon_1}(A)),$$
and since
$$V_{\epsilon_2}(V_{\epsilon_1}(A)) \subseteq V_{\epsilon_1 + \epsilon_2}(A),$$
we get
$$C \subseteq V_{\epsilon_2}(B) \subseteq V_{\epsilon_1 + \epsilon_2}(A).$$
Similarly, we can prove that
$$A \subseteq V_{\epsilon_1 + \epsilon_2}(C),$$
and thus, the triangle inequality follows.
Next, we need to prove that if $(X, d)$ is complete, then $(\mathcal{K}(X), D)$ is also complete. First, we show that if $(A_n)$ is a sequence of nonempty compact sets converging to a nonempty compact set $A$ in the Hausdorff metric, then
$$A = \{x \in X \mid \text{there is a sequence } (x_n), \text{ with } x_n \in A_n, \text{ converging to } x\}.$$
Indeed, if $(x_n)$ is a sequence with $x_n \in A_n$ converging to $x$ and $(A_n)$ converges to $A$, then, for every $\epsilon > 0$, there is some $x_n$ such that $d(x_n, x) \le \epsilon/2$ and there is some $a_n \in A$ such that
26.6 Continuous Linear and Multilinear Maps
If E and F are normed vector spaces, we first characterize when a linear map f : E ! F is
continuous.
Proposition 26.41. Given two normed vector spaces $E$ and $F$, for any linear map $f \colon E \to F$, the following conditions are equivalent:
(1) The function $f$ is continuous at $0$.
(2) There is a constant $k \ge 0$ such that $\|f(u)\| \le k$, for every $u \in E$ with $\|u\| \le 1$.
(3) There is a constant $k \ge 0$ such that $\|f(u)\| \le k \|u\|$, for every $u \in E$.
(4) The function $f$ is continuous at every point of $E$.
Proof. Assume that (2) holds. If $u = 0$, then by linearity, $f(0) = 0$, and thus $\|f(0)\| \le k\|0\|$ holds trivially for all $k \ge 0$. If $u \ne 0$, then $\|u\| > 0$, and since
$$\left\| \frac{u}{\|u\|} \right\| = 1,$$
we have
$$\left\| f\!\left( \frac{u}{\|u\|} \right) \right\| \le k,$$
which implies that
$$\|f(u)\| \le k \|u\|.$$
Thus, (3) holds.
If (3) holds, then for all $u, v \in E$, we have
$$\|f(v) - f(u)\| = \|f(v - u)\| \le k \|v - u\|.$$
If $k = 0$, then $f$ is the zero function, and continuity is obvious. Otherwise, if $k > 0$, for every $\epsilon > 0$, if $\|v - u\| \le \frac{\epsilon}{k}$, then $\|f(v - u)\| \le \epsilon$, which shows continuity at every $u \in E$. Finally, it is obvious that (4) implies (1).
Among other things, Proposition 26.41 shows that a linear map is continuous iff the image of the unit (closed) ball is bounded. If $E$ and $F$ are normed vector spaces, the set of all continuous linear maps $f \colon E \to F$ is denoted by $\mathcal{L}(E; F)$.
Using Proposition 26.41, we can define a norm on $\mathcal{L}(E; F)$ which makes it into a normed vector space. This definition has already been given in Chapter 7 (Definition 7.7) but for the reader's convenience, we repeat it here.
Definition 26.28. Given two normed vector spaces $E$ and $F$, for every continuous linear map $f \colon E \to F$, we define the norm $\|f\|$ of $f$ as
$$\|f\| = \min\{k \ge 0 \mid \|f(x)\| \le k\|x\|, \text{ for all } x \in E\} = \max\{\|f(x)\| \mid \|x\| \le 1\}.$$
From Definition 26.28, for every continuous linear map $f \in \mathcal{L}(E; F)$, we have
$$\|f(x)\| \le \|f\| \, \|x\|,$$
for every $x \in E$. It is easy to verify that $\mathcal{L}(E; F)$ is a normed vector space under the norm of Definition 26.28. Furthermore, if $E, F, G$ are normed vector spaces, and $f \colon E \to F$ and $g \colon F \to G$ are continuous linear maps, we have
$$\|g \circ f\| \le \|g\| \, \|f\|.$$
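As an illustration of ours (not from the text), for a linear map on $\mathbb{R}^n$ given by a matrix and the Euclidean norm, the norm of Definition 26.28 is the largest singular value; the sketch below, which assumes NumPy is available, estimates it by sampling unit vectors and compares with numpy.linalg.norm.

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # the linear map f(x) = A x on R^2

# Estimate ||f|| = max{ ||A x|| : ||x|| <= 1 } by sampling many unit vectors.
best = 0.0
for _ in range(10000):
    x = rng.normal(size=2)
    x /= np.linalg.norm(x)
    best = max(best, np.linalg.norm(A @ x))

print(best)                          # close to (and never above) the true operator norm
print(np.linalg.norm(A, ord=2))      # the exact operator norm (largest singular value)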
We can now show that when $E = \mathbb{R}^n$ or $E = \mathbb{C}^n$, with any of the norms $\| \cdot \|_1$, $\| \cdot \|_2$, or $\| \cdot \|_\infty$, then every linear map $f \colon E \to F$ is continuous.
Proposition 26.42. If $E = \mathbb{R}^n$ or $E = \mathbb{C}^n$, with any of the norms $\| \cdot \|_1$, $\| \cdot \|_2$, or $\| \cdot \|_\infty$, and $F$ is any normed vector space, then every linear map $f \colon E \to F$ is continuous.
Proof. Let $(e_1, \ldots, e_n)$ be the standard basis of $\mathbb{R}^n$ (a similar proof applies to $\mathbb{C}^n$). In view of Proposition 7.2, it is enough to prove the proposition for the norm
$$\|x\|_\infty = \max\{|x_i| \mid 1 \le i \le n\}.$$
We have
$$\|f(v) - f(u)\| = \Big\| f\Big( \sum_{1 \le i \le n} (v_i - u_i) e_i \Big) \Big\| = \Big\| \sum_{1 \le i \le n} (v_i - u_i) f(e_i) \Big\|,$$
and so,
$$\|f(v) - f(u)\| \le \Big( \sum_{1 \le i \le n} \|f(e_i)\| \Big) \max_{1 \le i \le n} |v_i - u_i| = \Big( \sum_{1 \le i \le n} \|f(e_i)\| \Big) \|v - u\|_\infty.$$
By the argument used in Proposition 26.41 to prove that (3) implies (4), $f$ is continuous.
Actually, we proved in Theorem 7.3 that if E is a vector space of finite dimension, then
any two norms are equivalent, so that they define the same topology. This fact together with
Proposition 26.42 prove the following:
Theorem 26.43. If E is a vector space of finite dimension (over R or C), then all norms
are equivalent (define the same topology). Furthermore, for any normed vector space F ,
every linear map f : E ! F is continuous.
We leave as an exercise to show that this is indeed a norm. Let $F = \mathbb{R}$, and let $f \colon E \to F$ be the map defined such that $f(P(X)) = P(3)$. It is clear that $f$ is linear. Consider the sequence of polynomials
$$P_n(X) = \left( \frac{X}{2} \right)^{\!n}.$$
It is clear that $\|P_n\| = \frac{1}{2^n}$, and thus, the sequence $P_n$ has the null polynomial as a limit. However, we have
$$f(P_n(X)) = P_n(3) = \left( \frac{3}{2} \right)^{\!n},$$
and the sequence $f(P_n(X))$ diverges to $+\infty$. Consequently, in view of Proposition 26.12 (1), $f$ is not continuous.
We now consider the continuity of multilinear maps. We treat explicitly bilinear maps,
the general case being a straightforward extension.
Proposition 26.44. Given normed vector spaces $E$, $F$ and $G$, for any bilinear map $f \colon E \times F \to G$, the following conditions are equivalent:
(1) The function $f$ is continuous at $\langle 0, 0 \rangle$.
(2) There is a constant $k \ge 0$ such that $\|f(u, v)\| \le k$, for all $u, v$ such that $\|u\|, \|v\| \le 1$.
(3) There is a constant $k \ge 0$ such that $\|f(u, v)\| \le k \|u\| \|v\|$, for all $u \in E$ and $v \in F$.
(4) The function $f$ is continuous at every point of $E \times F$.
Proof. It is similar to that of Proposition 26.41, with a small subtlety in proving that (3) implies (4), namely that two different $\delta$'s that are not independent are needed.
If $E$, $F$, and $G$ are normed vector spaces, we denote the set of all continuous bilinear maps $f \colon E \times F \to G$ by $\mathcal{L}_2(E, F; G)$. Using Proposition 26.44, we can define a norm on $\mathcal{L}_2(E, F; G)$ which makes it into a normed vector space.
Definition 26.29. Given normed vector spaces $E$, $F$, and $G$, for every continuous bilinear map $f \colon E \times F \to G$, we define the norm $\|f\|$ of $f$ as
$$\|f\| = \min\{k \ge 0 \mid \|f(x, y)\| \le k\|x\|\|y\|, \text{ for all } x \in E,\, y \in F\} = \max\{\|f(x, y)\| \mid \|x\|, \|y\| \le 1\}.$$
From Definition 26.29, for every continuous bilinear map $f \in \mathcal{L}_2(E, F; G)$, we have
$$\|f(x, y)\| \le \|f\| \, \|x\| \, \|y\|,$$
for all $x \in E$ and $y \in F$. It is easy to verify that $\mathcal{L}_2(E, F; G)$ is a normed vector space under the norm of Definition 26.29.
Given a bilinear map $f \colon E \times F \to G$, for every $u \in E$, we obtain a linear map denoted $f u \colon F \to G$, defined such that $f u(v) = f(u, v)$. Furthermore, since
$$\|f(x, y)\| \le \|f\| \, \|x\| \, \|y\|,$$
it is clear that $f u$ is continuous. We can then consider the map $\varphi \colon E \to \mathcal{L}(F; G)$, defined such that $\varphi(u) = f u$, for any $u \in E$, or equivalently, such that
$$\varphi(u)(v) = f(u, v).$$
Actually, it is easy to show that $\varphi$ is linear and continuous, and that $\|\varphi\| = \|f\|$. Thus, $f \mapsto \varphi$ defines a map from $\mathcal{L}_2(E, F; G)$ to $\mathcal{L}(E; \mathcal{L}(F; G))$. We can also go back from $\mathcal{L}(E; \mathcal{L}(F; G))$ to $\mathcal{L}_2(E, F; G)$. We summarize all this in the following proposition.
Proposition 26.45. Let $E, F, G$ be three normed vector spaces. The map $f \mapsto \varphi$, from $\mathcal{L}_2(E, F; G)$ to $\mathcal{L}(E; \mathcal{L}(F; G))$, defined such that, for every $f \in \mathcal{L}_2(E, F; G)$,
$$\varphi(u)(v) = f(u, v),$$
is an isomorphism of vector spaces, and furthermore, $\|\varphi\| = \|f\|$.
As a corollary of Proposition 26.45, we get the following proposition which will be useful
when we define second-order derivatives.
Proposition 26.46. Let $E, F$ be normed vector spaces. The map $\mathrm{app}$ from $\mathcal{L}(E; F) \times E$ to $F$, defined such that, for every $f \in \mathcal{L}(E; F)$, for every $u \in E$,
$$\mathrm{app}(f, u) = f(u),$$
is a continuous bilinear map.
Remark: If $E$ and $F$ are nontrivial, it can be shown that $\|\mathrm{app}\| = 1$. It can also be shown that composition
$$\circ \colon \mathcal{L}(E; F) \times \mathcal{L}(F; G) \to \mathcal{L}(E; G)$$
is bilinear and continuous.
The above propositions and definition generalize to arbitrary $n$-multilinear maps, with $n \ge 2$. Proposition 26.44 extends in the obvious way to any $n$-multilinear map $f \colon E_1 \times \cdots \times E_n \to F$, but condition (3) becomes:
There is a constant $k \ge 0$ such that
$$\|f(u_1, \ldots, u_n)\| \le k \|u_1\| \cdots \|u_n\|, \quad \text{for all } u_1 \in E_1, \ldots, u_n \in E_n.$$
26.7 Normed Affine Spaces
For geometric applications, we will need to consider affine spaces $(E, \vec{E})$ where the associated space of translations $\vec{E}$ is a vector space equipped with a norm.
Definition 26.31. Given an affine space $(E, \vec{E})$, where the space of translations $\vec{E}$ is a vector space over $\mathbb{R}$ or $\mathbb{C}$, we say that $(E, \vec{E})$ is a normed affine space if $\vec{E}$ is a normed vector space with norm $\| \cdot \|$.
Given a normed affine space, there is a natural metric on $E$ itself, defined such that
$$d(a, b) = \| \overrightarrow{ab} \|.$$
Observe that this metric is invariant under translation, that is,
$$d(a + u, b + u) = d(a, b).$$
Also, for every fixed $a \in E$ and
26.8 Further Readings
A thorough treatment of general topology can be found in Munkres [84, 85], Dixmier [29],
Lang [70], Schwartz [93, 92], Bredon [18], and the classic, Seifert and Threlfall [95].
Chapter 27
A Detour On Fractals
27.1 Iterated Function Systems and Fractals
A pleasant application of the Hausdorff distance and of the fixed point theorem for contracting mappings is a method for defining a class of self-similar fractals. For this, we can use iterated function systems.
Definition 27.1. Given a metric space $(X, d)$, an iterated function system, for short, an ifs, is a finite sequence of functions $(f_1, \ldots, f_n)$, where each $f_i \colon X \to X$ is a contracting mapping. A nonempty compact subset $K$ of $X$ is an invariant set (or attractor) for the ifs $(f_1, \ldots, f_n)$ if
$$K = f_1(K) \cup \cdots \cup f_n(K).$$
The major result about ifs's is the following:
Theorem 27.1. If $(X, d)$ is a nonempty complete metric space, then every iterated function system $(f_1, \ldots, f_n)$ has a unique invariant set $A$, which is a nonempty compact subset of $X$. Furthermore, for every nonempty compact subset $A_0$ of $X$, this invariant set $A$ is the limit of the sequence $(A_m)$, where $A_{m+1} = f_1(A_m) \cup \cdots \cup f_n(A_m)$.
Proof. Since $X$ is complete, by Theorem 26.39, the space $(\mathcal{K}(X), D)$ is a complete metric space. The theorem will follow from Proposition 26.38 if we can show that the map $F \colon \mathcal{K}(X) \to \mathcal{K}(X)$, defined such that
$$F(K) = f_1(K) \cup \cdots \cup f_n(K),$$
for every nonempty compact set $K$, is a contracting mapping. Let $A, B$ be any two nonempty compact subsets of $X$ and consider any $\eta \ge D(A, B)$. Since each $f_i \colon X \to X$ is a contracting mapping, there is some $\lambda_i$, with $0 \le \lambda_i < 1$, such that
$$d(f_i(a), f_i(b)) \le \lambda_i\, d(a, b),$$
for all $a, b \in X$. Let $\lambda = \max\{\lambda_1, \ldots, \lambda_n\}$. We claim that
$$D(F(A), F(B)) \le \lambda\, D(A, B).$$
For any $x \in F(A) = f_1(A) \cup \cdots \cup f_n(A)$, there is some $a_i \in A$ such that $x = f_i(a_i)$, and since $\eta \ge D(A, B)$, there is some $b_i \in B$ such that $d(a_i, b_i) \le \eta$, so that
$$d(x, f_i(b_i)) = d(f_i(a_i), f_i(b_i)) \le \lambda_i\, d(a_i, b_i) \le \lambda \eta.$$
This shows that $F(A) \subseteq V_{\lambda\eta}(F(B))$. Similarly, we can prove that $F(B) \subseteq V_{\lambda\eta}(F(A))$, and since this holds for all $\eta \ge D(A, B)$, we get $D(F(A), F(B)) \le \lambda\, D(A, B)$, where $\lambda = \max\{\lambda_1, \ldots, \lambda_n\}$. Since $0 \le \lambda_i < 1$ for all $i$, we have $0 \le \lambda < 1$, and $F$ is indeed a contracting mapping.
Theorem 27.1 justifies the existence of many familiar self-similar fractals. One of the
best known fractals is the Sierpinski gasket.
Example 27.1. Consider an equilateral triangle with vertices $a, b, c$, and let $f_1, f_2, f_3$ be the dilatations of centers $a, b, c$ and ratio $1/2$. The Sierpinski gasket is the invariant set of the ifs $(f_1, f_2, f_3)$. The dilatations $f_1, f_2, f_3$ can be defined explicitly as follows, assuming that $a = (-1/2, 0)$, $b = (1/2, 0)$, and $c = (0, \sqrt{3}/2)$. The contractions $f_1, f_2, f_3$ are specified by
$$x' = \frac{1}{2}x - \frac{1}{4}, \qquad y' = \frac{1}{2}y,$$
$$x' = \frac{1}{2}x + \frac{1}{4}, \qquad y' = \frac{1}{2}y,$$
and
$$x' = \frac{1}{2}x, \qquad y' = \frac{1}{2}y + \frac{\sqrt{3}}{4}.$$
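The invariant set can be approximated numerically. The sketch below (our own illustration) applies the three contractions above, chosen at random, to a single point, a variant sometimes called the "chaos game"; by Theorem 27.1 the iterates accumulate on the same attractor, the Sierpinski gasket.

import random, math

def f1(p): x, y = p; return (0.5 * x - 0.25, 0.5 * y)
def f2(p): x, y = p; return (0.5 * x + 0.25, 0.5 * y)
def f3(p): x, y = p; return (0.5 * x, 0.5 * y + math.sqrt(3) / 4)

random.seed(0)
p = (0.0, 0.0)
points = []
for i in range(20000):
    p = random.choice((f1, f2, f3))(p)
    if i > 100:                  # discard a few initial iterates
        points.append(p)
# 'points' now samples the attractor of the ifs (f1, f2, f3).
print(len(points), points[-1])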
x0 =
y0
x0 =
y0
x0 =
y0 =
1
x
2
1
x
2
1
y,
2
1
y,
2
1
y,
2
1
y + 1.
2
It can be shown that for any number of iterations, the polygon does not cross itself. This
means that no edge is traversed twice and that if a point is traversed twice, then this point
is the endpoint of some edge. The result of 13 iterations, starting with the line segment
((0, 0), (0, 1)), is shown in Figure 27.4.
The Heighway dragon turns out to fill a closed and bounded set. It can also be shown
that the plane can be tiled with copies of the Heighway dragon.
Another well known example is the Koch curve.
x0 =
y0
x
y0 =
x0 =
y0 =
2
,
3
p
1
3
1
x
y
,
6p
6
6
p
3
1
3
x+ y+
,
6
6
6
p
1
3
1
x+
y+ ,
6p 6
6p
3
1
3
x+ y+
,
6
6
6
1
2
x+ ,
3
3
1
=
y.
3
x0 =
y0
x0 =
y0
1
1
x+ ,
2
2
1
=
y + 1,
2
x0 =
y0
1
y + 1,
2
1
1
=
x+ ,
2
2
x0 =
y0
1
y 1,
2
1
1
=
x+ .
2
2
x0 =
y0
Chapter 28
Differential Calculus
28.1 Directional Derivatives, Total Derivatives
This chapter contains a review of basic notions of differential calculus. First, we review the definition of the derivative of a function $f \colon \mathbb{R} \to \mathbb{R}$. Next, we define directional derivatives and the total derivative of a function $f \colon E \to F$ between normed affine spaces. Basic properties of derivatives are shown, including the chain rule. We show how derivatives are represented by Jacobian matrices. The mean value theorem is stated, as well as the implicit function theorem and the inverse function theorem. Diffeomorphisms and local diffeomorphisms are defined. Tangent spaces are defined. Higher-order derivatives are defined, as well as the Hessian. Schwarz's lemma (about the commutativity of partials) is stated. Several versions of Taylor's formula are stated, and a famous formula due to Faà di Bruno is given.
We first review the notion of the derivative of a real-valued function whose domain is an open subset of $\mathbb{R}$.
Let $f \colon A \to \mathbb{R}$, where $A$ is a nonempty open subset of $\mathbb{R}$, and consider any $a \in A$. The main idea behind the concept of the derivative of $f$ at $a$, denoted by $f'(a)$, is that locally around $a$ (that is, in some small open set $U \subseteq A$ containing $a$), the function $f$ is approximated linearly by the map
$$x \mapsto f(a) + f'(a)(x - a).$$
Part of the difficulty in extending this idea to more complex spaces is to give an adequate
notion of linear approximation. Of course, we will use linear maps! Let us now review the
formal definition of the derivative of a real-valued function.
Definition 28.1. Let $A$ be any nonempty open subset of $\mathbb{R}$, and let $a \in A$. For any function $f \colon A \to \mathbb{R}$, the derivative of $f$ at $a \in A$ is the limit (if it exists)
$$\lim_{h \to 0,\, h \in U} \frac{f(a + h) - f(a)}{h},$$
where $U = \{h \in \mathbb{R} \mid a + h \in A,\ h \ne 0\}$. This limit is denoted by $f'(a)$, or $Df(a)$, or $\frac{df}{dx}(a)$. If $f'(a)$ exists for every $a \in A$, we say that $f$ is differentiable on $A$. In this case, the map $a \mapsto f'(a)$ is denoted by $f'$, or $Df$, or $\frac{df}{dx}$.
Note that since $A$ is assumed to be open, $A - \{a\}$ is also open, and since the function $h \mapsto a + h$ is continuous and $U$ is the inverse image of $A - \{a\}$ under this function, $U$ is indeed open and the definition makes sense.
We can also define $f'(a)$ as follows: there is some function $\epsilon$, such that
$$f(a + h) = f(a) + f'(a) \cdot h + \epsilon(h) h,$$
whenever $a + h \in A$, where $\epsilon(h)$ is defined for all $h$ such that $a + h \in A$, and
$$\lim_{h \to 0,\, h \in U} \epsilon(h) = 0.$$
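As a quick numeric sketch (ours, not the text's): for $f(x) = x^3$ at $a = 1$ we have $f'(a) = 3$, and the function $\epsilon(h) = \frac{f(a+h) - f(a)}{h} - f'(a)$ indeed tends to $0$ with $h$.

def f(x):
    return x**3

a, fprime = 1.0, 3.0
for h in (0.1, 0.01, 0.001):
    eps = (f(a + h) - f(a)) / h - fprime    # eps(h) as in the characterization above
    print(h, eps)                           # eps(h) -> 0 as h -> 0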
Remark: We can also define the notion of derivative of $f$ at $a$ on the left, and derivative of $f$ at $a$ on the right. For example, we say that the derivative of $f$ at $a$ on the left is the limit $f'(a_-)$ (if it exists)
$$\lim_{h \to 0,\, h \in U} \frac{f(a + h) - f(a)}{h},$$
where $U = \{h \in \mathbb{R} \mid a + h \in A,\ h < 0\}$.
notationally more pleasant to denote $\overrightarrow{f(a) f(a+h)}$ by $f(a + h) - f(a)$. Thus, in the rest of this chapter, the vector $\overrightarrow{ab}$ will be denoted by $b - a$. But now, how do we define the quotient by a vector? Well, we don't!
A first possibility is to consider the directional derivative with respect to a vector $u \ne 0$ in $\vec{E}$. We can consider the vector $f(a + tu) - f(a)$, where $t \in \mathbb{R}$ (or $t \in \mathbb{C}$). Now,
$$\frac{f(a + tu) - f(a)}{t}$$
makes sense. The idea is that in $E$, the points of the form $a + tu$ for $t$ in some small interval $[-\epsilon, +\epsilon]$ in $\mathbb{R}$ (or $\mathbb{C}$) form a line segment $[r, s]$ in $A$ containing $a$, and that the image of this line segment defines a small curve segment on $f(A)$. This curve segment is defined by the map $t \mapsto f(a + tu)$, from $[r, s]$ to $F$, and the directional derivative $D_u f(a)$ defines the direction of the tangent line at $a$ to this curve. This leads us to the following definition.
Definition 28.2. Let $E$ and $F$ be two normed affine spaces, let $A$ be a nonempty open subset of $E$, and let $f \colon A \to F$ be any function. For any $a \in A$, for any $u \ne 0$ in $\vec{E}$, the directional derivative of $f$ at $a$ w.r.t. the vector $u$, denoted by $D_u f(a)$, is the limit (if it exists)
$$\lim_{t \to 0,\, t \in U} \frac{f(a + tu) - f(a)}{t},$$
where $U = \{t \in \mathbb{R} \mid a + tu \in A,\ t \ne 0\}$ (or $U = \{t \in \mathbb{C} \mid a + tu \in A,\ t \ne 0\}$).
Since the map $t \mapsto a + tu$ is continuous, and since $A - \{a\}$ is open, the inverse image $U$ of $A - \{a\}$ under the above map is open, and the definition of the limit in Definition 28.2 makes sense.
Remark: Since the notion of limit is purely topological, the existence and value of a directional derivative is independent of the choice of norms in $E$ and $F$, as long as they are equivalent norms.
The directional derivative is sometimes called the Gâteaux derivative.
In the special case where $E = \mathbb{R}$ and $F = \mathbb{R}$, and we let $u = 1$ (i.e., the real number $1$, viewed as a vector), it is immediately verified that $D_1 f(a) = f'(a)$, in the sense of Definition 28.1. When $E = \mathbb{R}$ (or $E = \mathbb{C}$) and $F$ is any normed vector space, the derivative $D_1 f(a)$, also denoted by $f'(a)$, provides a suitable generalization of the notion of derivative.
However, when $E$ has dimension $\ge 2$, directional derivatives present a serious problem, which is that their definition is not sufficiently uniform. Indeed, there is no reason to believe that the directional derivatives w.r.t. all nonnull vectors $u$ share something in common. As a consequence, a function can have all directional derivatives at $a$, and yet not be continuous at $a$. Two functions may have all directional derivatives in some open sets, and yet their composition may not. Thus, we introduce a more uniform notion.
Definition 28.3. Let E and F be two normed affine spaces, let A be a nonempty open subset
of E, and let f : A → F be any function. For any a ∈ A, we say that f is differentiable at
a ∈ A if there is a continuous linear map L : →E → →F and a function ε, such that

    f(a + h) = f(a) + L(h) + ε(h)‖h‖

for every a + h ∈ A, where ε(h) is defined for every h such that a + h ∈ A and

    lim_{h→0, h∈U} ε(h) = 0,

where U = {h ∈ →E | a + h ∈ A, h ≠ 0}. The linear map L is denoted by Df(a), or Df_a, or
df(a), or df_a, or f′(a), and it is called the Fréchet derivative, or derivative, or total derivative,
or total differential, or differential, of f at a.

Since the map h ↦ a + h from →E to E is continuous, and since A is open in E, the
inverse image U of A − {a} under the above map is open in →E, and it makes sense to say that

    lim_{h→0, h∈U} ε(h) = 0.

Note that for h ∈ U the function ε is uniquely determined, since

    ε(h) = (f(a + h) − f(a) − L(h)) / ‖h‖,

and that the value ε(0) plays absolutely no role in this definition. The condition for f to be
differentiable at a amounts to the fact that

    lim_{h→0} ‖f(a + h) − f(a) − L(h)‖ / ‖h‖ = 0

as h ≠ 0 approaches 0, when a + h ∈ A. However, it does no harm to assume that ε(0) = 0,
and we will assume this from now on.
Note that if f is differentiable at a, then for every u ≠ 0,

    (f(a + tu) − f(a)) / t = L(u) + ε(tu)(|t|/t)‖u‖,

and the limit when t ≠ 0 approaches 0 is indeed D_u f(a).

The uniqueness of L follows from Proposition 28.1. Also, when E is of finite dimension, it
is easily shown that every linear map is continuous, and this assumption is then redundant.

It is important to note that the derivative Df(a) of f at a is a continuous linear map
from the vector space →E to the vector space →F, and not a function from the affine space E
to the affine space F.
As an example, consider the map f : M_n(R) → M_n(R) given by

    f(A) = A⊤A − I,

where M_n(R) is equipped with any matrix norm, since they are all equivalent; for example,
pick the Frobenius norm ‖A‖_F = (tr(A⊤A))^{1/2}. We claim that

    Df(A)(H) = A⊤H + H⊤A,  for all A, H ∈ M_n(R).

We have

    f(A + H) − f(A) − (A⊤H + H⊤A) = (A + H)⊤(A + H) − I − (A⊤A − I) − A⊤H − H⊤A = H⊤H.

It follows that

    ε(H) = (f(A + H) − f(A) − (A⊤H + H⊤A)) / ‖H‖ = H⊤H / ‖H‖,

and since

    ‖H⊤H‖ / ‖H‖ ≤ ‖H⊤‖‖H‖ / ‖H‖ = ‖H⊤‖ = ‖H‖,

we conclude that

    lim_{H→0} ε(H) = 0,

so indeed Df(A)(H) = A⊤H + H⊤A.
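A quick numerical sanity check of this claim (an added illustration, not from the original text) can be done with a difference quotient on random matrices.

import numpy as np

def f(A):
    # f(A) = A^T A - I, as in the example above.
    return A.T @ A - np.eye(A.shape[0])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = rng.standard_normal((4, 4))
t = 1e-7
numeric = (f(A + t * H) - f(A)) / t        # directional difference quotient
analytic = A.T @ H + H.T @ A               # claimed derivative Df(A)(H)
print(np.max(np.abs(numeric - analytic)))  # small, of order t * ||H||^2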
called the derivative of f on A, and also denoted by df. Recall that L(→E; →F) denotes the
vector space of all continuous linear maps from →E to →F.

When E is of finite dimension n, for any frame (a₀, (u₁, . . . , uₙ)) of E, where (u₁, . . . , uₙ)
is a basis of →E, we can define the directional derivatives with respect to the vectors in the
basis (u₁, . . . , uₙ) (actually, we can also do it for an infinite frame). This way, we obtain the
definition of partial derivatives, as follows.
Definition 28.4. For any two normed affine spaces E and F, if E is of finite dimension
n, for every frame (a₀, (u₁, . . . , uₙ)) for E, for every a ∈ E, for every function f : E → F,
the directional derivatives D_{u_j} f(a) (if they exist) are called the partial derivatives of f with
respect to the frame (a₀, (u₁, . . . , uₙ)). The partial derivative D_{u_j} f(a) is also denoted by
∂_j f(a), or ∂f/∂x_j (a).

The notation ∂f/∂x_j (a) for a partial derivative, although customary and going back to
Leibniz, is a logical obscenity. Indeed, the variable x_j really has nothing to do with the
formal definition. This is just another of these situations where tradition is just too hard to
overthrow!
Proof. Straightforward.
We now state the very useful chain rule.
Theorem 28.5. Given three normed affine spaces E, F, and G, let A be an open set in
E, and let B be an open set in F. For any functions f : A → F and g : B → G, such that
f(A) ⊆ B, for any a ∈ A, if Df(a) exists and Dg(f(a)) exists, then D(g ∘ f)(a) exists, and

    D(g ∘ f)(a) = Dg(f(a)) ∘ Df(a).
Proof. It is not difficult, but more involved than the previous two.
Theorem 28.5 has many interesting consequences. We mention two corollaries.

Proposition 28.6. Given three normed affine spaces E, F, and G, for any open subset A in
E, for any a ∈ A, let f : A → F be such that Df(a) exists, and let g : F → G be a continuous
affine map. Then D(g ∘ f)(a) exists, and

    D(g ∘ f)(a) = →g ∘ Df(a),

where →g is the linear map associated with the affine map g.

Proposition 28.7. Given two normed affine spaces E and F, let A be some open subset in
E, let B be some open subset in F, let f : A → B be a bijection from A to B, and assume
that Df exists on A and that Df⁻¹ exists on B. Then, for every a ∈ A,

    Df⁻¹(f(a)) = (Df(a))⁻¹.

Proposition 28.7 has the remarkable consequence that the two vector spaces →E and →F
have the same dimension. In other words, a local property, the existence of a bijection f
between an open set A of E and an open set B of F, such that f is differentiable on A and
f⁻¹ is differentiable on B, implies a global property, that the two vector spaces →E and →F
have the same dimension.
We now consider the situation where the normed affine space F is a finite direct sum
F = (F₁, b₁) ⊕ · · · ⊕ (F_m, b_m).

Proposition 28.8. Given normed affine spaces E and F = (F₁, b₁) ⊕ · · · ⊕ (F_m, b_m), given
any open subset A of E, for any a ∈ A, for any function f : A → F, letting f = (f₁, . . . , f_m),
Df(a) exists iff every Df_i(a) exists, and

    Df(a) = in₁ ∘ Df₁(a) + · · · + in_m ∘ Df_m(a).

Proof. Observe that f(a + h) − f(a) = (f(a + h) − b) − (f(a) − b), where b = (b₁, . . . , b_m),
and thus, as far as dealing with derivatives, Df(a) is equal to Df_b(a), where f_b : E → →F is
defined such that f_b(x) = f(x) − b, for every x ∈ E. Thus, we can work with the vector space
→F instead of the affine space F. The proposition is then a simple application of Theorem 28.5.
822
In the special case where F is a normed affine space of finite dimension m, for any frame
!
(b0 , (v1 , . . . , vm )) of F , where (v1 , . . . , vm ) is a basis of F , every point x 2 F can be expressed
uniquely as
x = b0 + x 1 v 1 + + x m v m ,
where (x1 , . . . , xm ) 2 K m , the coordinates of x in the frame (b0 , (v1 , . . . , vm )) (where K = R
or K = C). Thus, letting Fi be the standard normed affine space K with its natural
structure, we note that F is isomorphic to the direct sum F = (K, 0) (K, 0). Then,
every function f : E ! F is represented by m functions (f1 , . . . , fm ), where fi : E ! K
(where K = R or K = C), and
f (x) = b0 + f1 (x)v1 + + fm (x)vm ,
for every x 2 E. The following proposition is an immediate corollary of Proposition 28.8.
Proposition 28.9. For any two normed affine spaces E and F , if F is of finite dimension
!
m, for any frame (b0 , (v1 , . . . , vm )) of F , where (v1 , . . . , vm ) is a basis of F , for every a 2 E,
a function f : E ! F is dierentiable at a i each fi is dierentiable at a, and
!
for every u 2 E .
We now consider the situation where E is a finite direct sum. Given a normed affine
space E = (E1 , a1 ) (En , an ) and a normed affine space F , given any open subset A
of E, for any c = (c1 , . . . , cn ) 2 A, we define the continuous functions icj : Ej ! E, such that
icj (x) = (c1 , . . . , cj 1 , x, cj+1 , . . . , cn ).
For any function f : A ! F , we have functions f icj : Ej ! F , defined on (icj ) 1 (A), which
contains cj . If D(f icj )(cj ) exists, we call it the partial derivative of f w.r.t. its jth argument,
! !
at c. We also denote this derivative by Dj f (c). Note that Dj f (c) 2 L(Ej ; F ).
This notion is a generalization of the notion defined in Definition 28.4. In fact, when
E is of dimension n, and a frame (a0 , (u1 , . . . , un )) has been chosen, we can write E =
(E1 , a1 ) (En , an ), for some obvious (Ej , aj ) (as explained just after Proposition 28.8),
and then
Dj f (c)( uj ) = @j f (c),
and the two notions are consistent.
The definition of icj and of Dj f (c) also makes sense for a finite product E1 En of
affine spaces Ei . We will use freely the notation @j f (c) instead of Dj f (c).
The notion @j f (c) introduced in Definition 28.4 is really that of the vector derivative,
whereas Dj f (c) is the corresponding linear map. Although perhaps confusing, we identify
the two notions. The following proposition holds.
28.2 Jacobian Matrices

If both E and F are of finite dimension, for any frame (a₀, (u₁, . . . , uₙ)) of E and any frame
(b₀, (v₁, . . . , v_m)) of F, every function f : E → F is determined by m functions f_i : E → R
(or f_i : E → C), where

    f(x) = b₀ + f₁(x)v₁ + · · · + f_m(x)v_m,

for every x ∈ E. From Proposition 28.1, we have

    Df(a)(u_j) = D_{u_j} f(a) = ∂_j f(a),

and from Proposition 28.9, we have

    Df(a)(u_j) = Df₁(a)(u_j)v₁ + · · · + Df_i(a)(u_j)v_i + · · · + Df_m(a)(u_j)v_m,

that is,

    Df(a)(u_j) = ∂_j f₁(a)v₁ + · · · + ∂_j f_i(a)v_i + · · · + ∂_j f_m(a)v_m.

Since the j-th column of the m×n matrix representing Df(a) w.r.t. the bases (u₁, . . . , uₙ)
and (v₁, . . . , v_m) is equal to the components of the vector Df(a)(u_j) over the basis (v₁, . . . , v_m),
the linear map Df(a) is determined by the m×n matrix J(f)(a) = (∂_j f_i(a)) (or J(f)(a) = (∂f_i/∂x_j(a))):

    J(f)(a) = \begin{pmatrix}
    \partial_1 f_1(a) & \partial_2 f_1(a) & \cdots & \partial_n f_1(a) \\
    \partial_1 f_2(a) & \partial_2 f_2(a) & \cdots & \partial_n f_2(a) \\
    \vdots & \vdots & \ddots & \vdots \\
    \partial_1 f_m(a) & \partial_2 f_m(a) & \cdots & \partial_n f_m(a)
    \end{pmatrix}
or

    J(f)(a) = \begin{pmatrix}
    \frac{\partial f_1}{\partial x_1}(a) & \frac{\partial f_1}{\partial x_2}(a) & \cdots & \frac{\partial f_1}{\partial x_n}(a) \\
    \frac{\partial f_2}{\partial x_1}(a) & \frac{\partial f_2}{\partial x_2}(a) & \cdots & \frac{\partial f_2}{\partial x_n}(a) \\
    \vdots & \vdots & \ddots & \vdots \\
    \frac{\partial f_m}{\partial x_1}(a) & \frac{\partial f_m}{\partial x_2}(a) & \cdots & \frac{\partial f_m}{\partial x_n}(a)
    \end{pmatrix}.

For example, for the map (r, θ) ↦ (r cos θ, r sin θ), the Jacobian matrix at (r, θ) is

    \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}.
In the special case where f : R³ → R, the Jacobian matrix of f at a is the row vector

    J(f)(a) = ( ∂f/∂x (a)   ∂f/∂y (a)   ∂f/∂z (a) ).

More generally, when f : Rⁿ → R, the Jacobian matrix at a ∈ Rⁿ is the row vector

    J(f)(a) = ( ∂f/∂x₁ (a)   · · ·   ∂f/∂xₙ (a) ).

Its transpose is a column vector called the gradient of f at a, denoted by grad f(a) or ∇f(a).
Then, given any v ∈ Rⁿ, note that

    Df(a)(v) = ∂f/∂x₁ (a) v₁ + · · · + ∂f/∂xₙ (a) vₙ = grad f(a) · v,
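As an added numerical illustration, the Jacobian matrix can be approximated column by column with difference quotients; the sketch below compares the finite-difference Jacobian of the polar-coordinates map with the closed form given above.

import numpy as np

def numerical_jacobian(f, a, t=1e-6):
    # Column j approximates Df(a)(e_j), i.e. the partial derivative w.r.t. x_j.
    a = np.asarray(a, dtype=float)
    cols = []
    for j in range(a.size):
        e = np.zeros_like(a)
        e[j] = 1.0
        cols.append((f(a + t * e) - f(a - t * e)) / (2 * t))
    return np.column_stack(cols)

polar = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
r, theta = 2.0, 0.7
exact = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])
print(np.max(np.abs(numerical_jacobian(polar, [r, theta]) - exact)))   # tiny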
Letting b = f(a) and h = g ∘ f, the chain rule in matrix form says that the Jacobian matrix
of h at a is the product of the Jacobian matrices:

    J(h)(a) = J(g)(b)\, J(f)(a) =
    \begin{pmatrix}
    \frac{\partial g_1}{\partial y_1}(b) & \cdots & \frac{\partial g_1}{\partial y_n}(b) \\
    \vdots & \ddots & \vdots \\
    \frac{\partial g_m}{\partial y_1}(b) & \cdots & \frac{\partial g_m}{\partial y_n}(b)
    \end{pmatrix}
    \begin{pmatrix}
    \frac{\partial f_1}{\partial x_1}(a) & \cdots & \frac{\partial f_1}{\partial x_p}(a) \\
    \vdots & \ddots & \vdots \\
    \frac{\partial f_n}{\partial x_1}(a) & \cdots & \frac{\partial f_n}{\partial x_p}(a)
    \end{pmatrix}.

Thus, we have the familiar formula

    \frac{\partial h_i}{\partial x_j}(a) = \sum_{k=1}^{n} \frac{\partial g_i}{\partial y_k}(b)\, \frac{\partial f_k}{\partial x_j}(a).
Given two normed affine spaces E and F of finite dimension, given an open subset A of
E, if a function f : A → F is differentiable at a ∈ A, then its Jacobian matrix is well defined.

One should be warned that the converse is false. There are functions such that all the
partial derivatives exist at some a ∈ A, but yet, the function is not differentiable at a,
and not even continuous at a. For example, consider the function f : R² → R, defined such
that f(0, 0) = 0, and

    f(x, y) = x²y / (x⁴ + y²)   if (x, y) ≠ (0, 0).

For any u ≠ 0, letting u = (h, k), we have

    (f(0 + tu) − f(0)) / t = h²k / (t²h⁴ + k²),

so that

    D_u f(0, 0) = h²/k  if k ≠ 0,  and  D_u f(0, 0) = 0  if k = 0.

Thus, D_u f(0, 0) exists for all u ≠ 0. On the other hand, if Df(0, 0) existed, it would be
a linear map Df(0, 0) : R² → R represented by a row matrix (λ μ), and we would have
D_u f(0, 0) = Df(0, 0)(u) = λh + μk, but the explicit formula for D_u f(0, 0) is not linear. As
a matter of fact, the function f is not continuous at (0, 0). For example, on the parabola
y = x², f(x, y) = 1/2, and when we approach the origin on this parabola, the limit is 1/2, when
in fact, f(0, 0) = 0.
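A small numerical sketch (added here for illustration) makes the failure visible: every directional difference quotient at the origin converges, yet the values of f along the parabola y = x² stay at 1/2.

def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x**2 * y / (x**4 + y**2)

# Directional difference quotients at the origin converge (u = (1, 1) gives h^2/k = 1).
for t in [1e-2, 1e-4, 1e-6]:
    print(f(t * 1.0, t * 1.0) / t)     # tends to 1

# But along the parabola y = x^2 the function is constantly 1/2, so f is not continuous at (0, 0).
for x in [1e-1, 1e-3, 1e-5]:
    print(f(x, x**2))                  # always 0.5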
However, there are sufficient conditions on the partial derivatives for Df (a) to exist,
namely, continuity of the partial derivatives.
If f is differentiable on A, then f defines a function Df : A → L(→E; →F). It turns out
that the continuity of the partial derivatives on A is a necessary and sufficient condition for
Df to exist and to be continuous on A.

If f : [a, b] → R is a function which is continuous on [a, b] and differentiable on ]a, b[, then
there is some c with a < c < b such that

    f(b) − f(a) = (b − a)f′(c).

This result is known as the mean value theorem and is a generalization of Rolle's theorem,
which corresponds to the case where f(a) = f(b).

Unfortunately, the mean value theorem fails for vector-valued functions. For example,
the function f : [0, 2π] → R² given by

    f(t) = (cos t, sin t)

is such that f(2π) − f(0) = (0, 0), yet its derivative f′(t) = (−sin t, cos t) does not vanish in
]0, 2π[.
if

    ‖Df(x)‖ ≤ M  for all x ∈ ]a, a + h[,

for some M ≥ 0, then

    ‖f(a + h) − f(a)‖ ≤ M‖h‖.

As a corollary, if L : →E → →F is a continuous linear map, then

    ‖f(a + h) − f(a) − L(h)‖ ≤ M‖h‖,

where M = sup_{x∈]a,a+h[} ‖Df(x) − L‖.
The above lemma is sometimes called the mean value theorem. Lemma 28.11 can be
used to show the following important result.

Theorem 28.12. Given two normed affine spaces E and F, where E is of finite dimension
n, and where (a₀, (u₁, . . . , uₙ)) is a frame of E, given any open subset A of E, given any
function f : A → F, the derivative Df : A → L(→E; →F) is defined and continuous on A iff
every partial derivative ∂_j f (or ∂f/∂x_j) is defined and continuous on A, for all j, 1 ≤ j ≤ n.
As a corollary, if F is of finite dimension m, and (b₀, (v₁, . . . , v_m)) is a frame of F, the
derivative Df : A → L(→E; →F) is defined and continuous on A iff every partial derivative
∂_j f_i (or ∂f_i/∂x_j) is defined and continuous on A, for all i, j, 1 ≤ i ≤ m, 1 ≤ j ≤ n.

Theorem 28.12 gives a necessary and sufficient condition for the existence and continuity
of the derivative of a function on an open set. It should be noted that a more general version
of Theorem 28.12 holds, assuming that E = (E₁, a₁) ⊕ · · · ⊕ (E_n, a_n), or E = E₁ × · · · × E_n,
and using the more general partial derivatives D_j f introduced before Proposition 28.10.

Definition 28.6. Given two normed affine spaces E and F, and an open subset A of E, we
say that a function f : A → F is of class C⁰ on A, or a C⁰-function on A, if f is continuous
on A. We say that f : A → F is of class C¹ on A, or a C¹-function on A, if Df exists and is
continuous on A.

Since the existence of the derivative on an open set implies continuity, a C¹-function
is of course a C⁰-function. Theorem 28.12 gives a necessary and sufficient condition for a
function f to be a C¹-function (when E is of finite dimension). It is easy to show that the
composition of C¹-functions (on appropriate open sets) is a C¹-function.
28.3 The Implicit and the Inverse Function Theorems

Given three normed affine spaces E, F, and G, given a function f : E × F → G, given any
c ∈ G, it may happen that the equation

    f(x, y) = c

has the property that, for some open sets A ⊆ E and B ⊆ F, there is a function g : A → B,
such that

    f(x, g(x)) = c,

for all x ∈ A. Such a situation is usually very rare, but if some solution (a, b) ∈ E × F
such that f(a, b) = c is known, under certain conditions, for some small open sets A ⊆ E
containing a and B ⊆ F containing b, the existence of a unique g : A → B, such that

    f(x, g(x)) = c,

for all x ∈ A, can be shown. Under certain conditions, it can also be shown that g is
continuous and differentiable. Such a theorem, known as the implicit function theorem, can
be shown. We state a version of this result below, following Schwartz [94]. The proof (see
Schwartz [94]) is fairly involved, and uses a fixed-point theorem for contracting mappings in
complete metric spaces. Other proofs can be found in Lang [69] and Cartan [20].
Theorem 28.13. Let E, F, and G be normed affine spaces, let Ω be an open subset of
E × F, let f : Ω → G be a function defined on Ω, let (a, b) ∈ Ω, let c ∈ G, and assume that
f(a, b) = c. If the following assumptions hold:

(1) The function f : Ω → G is continuous on Ω;

(2) F is a complete normed affine space (and so is G);

(3) ∂f/∂y (x, y) exists for every (x, y) ∈ Ω, and ∂f/∂y : Ω → L(→F; →G) is continuous;

(4) ∂f/∂y (a, b) is a bijection of →F onto →G, and (∂f/∂y (a, b))⁻¹ ∈ L(→G; →F);

then the following properties hold:

(a) There exist some open subset A ⊆ E containing a and some open subset B ⊆ F
containing b, such that A × B ⊆ Ω, and for every x ∈ A, the equation f(x, y) = c has
a single solution y = g(x), and thus, there is a unique function g : A → B such that
f(x, g(x)) = c, for all x ∈ A;

(b) The function g : A → B is continuous.

If we also assume that

(5) The derivative Df(a, b) exists;

then

(c) The derivative Dg(a) exists, and

    Dg(a) = −(∂f/∂y (a, b))⁻¹ ∘ ∂f/∂x (a, b);

and if in addition

(6) ∂f/∂x : Ω → L(→E; →G) is also continuous (and thus, in view of (3), f is C¹ on Ω);

then

(d) The derivative Dg : A → L(→E; →F) is continuous, and

    Dg(x) = −(∂f/∂y (x, g(x)))⁻¹ ∘ ∂f/∂x (x, g(x)),

for all x ∈ A.
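For a concrete sanity check (not part of the original text), take f(x, y) = x² + y² − 1 with c = 0 near (a, b) = (0.6, 0.8), where g(x) = √(1 − x²); the sketch below compares the formula of part (d) with a direct numerical derivative of g.

import numpy as np

# f(x, y) = x^2 + y^2 - 1; near (0.6, 0.8) the equation f(x, y) = 0 defines y = g(x) = sqrt(1 - x^2).
df_dx = lambda x, y: 2 * x
df_dy = lambda x, y: 2 * y
g = lambda x: np.sqrt(1.0 - x**2)

x0 = 0.6
y0 = g(x0)
formula = -(1.0 / df_dy(x0, y0)) * df_dx(x0, y0)   # Dg(x) = -(df/dy)^{-1} df/dx
h = 1e-6
numeric = (g(x0 + h) - g(x0 - h)) / (2 * h)        # central difference on g itself
print(formula, numeric)                            # both ≈ -0.75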
The implicit function theorem plays an important role in the calculus of variations. We
now consider another very important notion, that of a (local) diffeomorphism.

Definition 28.7. Given two topological spaces E and F, and an open subset A of E, we
say that a function f : A → F is a local homeomorphism from A to F if for every a ∈ A,
there is an open set U ⊆ A containing a and an open set V containing f(a) such that f is a
homeomorphism from U to V = f(U). If B is an open subset of F, we say that f : A → F
is a (global) homeomorphism from A to B if f is a homeomorphism from A to B = f(A).
If E and F are normed affine spaces, we say that f : A → F is a local diffeomorphism from
A to F if for every a ∈ A, there is an open set U ⊆ A containing a and an open set V
containing f(a) such that f is a bijection from U to V, f is a C¹-function on U, and f⁻¹
is a C¹-function on V = f(U). We say that f : A → F is a (global) diffeomorphism from A
to B if f is a homeomorphism from A to B = f(A), f is a C¹-function on A, and f⁻¹ is a
C¹-function on B.

Note that a local diffeomorphism is a local homeomorphism. Also, as a consequence of
Proposition 28.7, if f is a diffeomorphism on A, then Df(a) is a linear isomorphism for every
a ∈ A. The following theorem can be shown. In fact, there is a fairly simple proof using
Theorem 28.13; see Schwartz [94], Lang [69] and Cartan [20].
Theorem 28.14. Let E and F be complete normed affine spaces, let A be an open subset
of E, and let f : A → F be a C¹-function on A. The following properties hold:

(1) For every a ∈ A, if Df(a) is a linear isomorphism (which means that both Df(a)
and (Df(a))⁻¹ are linear and continuous),¹ then there exist some open subset U ⊆ A
containing a, and some open subset V of F containing f(a), such that f is a diffeomorphism
from U to V = f(U). Furthermore,

    Df⁻¹(f(a)) = (Df(a))⁻¹.

For every neighborhood N of a, its image f(N) is a neighborhood of f(a), and for every
open ball U ⊆ A of center a, its image f(U) contains some open ball of center f(a).

(2) If Df(a) is invertible for every a ∈ A, then B = f(A) is an open subset of F, and
f is a local diffeomorphism from A to B. Furthermore, if f is injective, then f is a
diffeomorphism from A to B.

¹ Actually, since E and F are Banach spaces, by the Open Mapping Theorem, it is sufficient to assume
that Df(a) is continuous and bijective; see Lang [69].

Part (1) of Theorem 28.14 is often referred to as the (local) inverse function theorem.
It plays an important role in the study of manifolds and (ordinary) differential equations.

If E and F are both of finite dimension, and some frames have been chosen, the invertibility
of Df(a) is equivalent to the fact that the Jacobian determinant det(J(f)(a))
is nonnull. The case where Df(a) is just injective or just surjective is also important for
defining manifolds, using implicit definitions.

Definition 28.8. Let E and F be normed affine spaces, where E and F are of finite dimension
(or both E and F are complete), and let A be an open subset of E. For any a ∈ A, a
C¹-function f : A → F is an immersion at a if Df(a) is injective. A C¹-function f : A → F
is a submersion at a if Df(a) is surjective. A C¹-function f : A → F is an immersion on A
(resp. a submersion on A) if Df(a) is injective (resp. surjective) for every a ∈ A.
The following results can be shown.

Proposition 28.15. Let A be an open subset of Rⁿ, and let f : A → R^m be a function.
For every a ∈ A, f : A → R^m is a submersion at a iff there exists an open subset U of A
containing a, an open subset W ⊆ R^{n−m}, and a diffeomorphism φ : U → f(U) × W, such that

    f = π₁ ∘ φ,

where π₁ : f(U) × W → f(U) is the first projection. Furthermore, the image of every open
subset of A under f is an open subset of F. (The same result holds for Cⁿ and C^m.)
Proposition 28.16. Let A be an open subset of Rⁿ, and let f : A → R^m be a function.
For every a ∈ A, f : A → R^m is an immersion at a iff there exists an open subset U of
A containing a, an open subset V containing f(a) such that f(U) ⊆ V, an open subset W
containing 0 such that W ⊆ R^{m−n}, and a diffeomorphism φ : V → U × W, such that

    φ ∘ f = in₁,

where in₁ : U → U × W is the injection map such that in₁(u) = (u, 0), or equivalently,

    (φ ∘ f)(x₁, . . . , xₙ) = (x₁, . . . , xₙ, 0, . . . , 0).
28.4 Tangent Spaces and Differentials

… the set of points (a + u, f(a) + v) with v = Df(a)(u) is an affine variety T_a(Γ) of E × F,
defined by the equation

    y = f(a) + Df(a)(x − a),

where Γ is the graph of f.

If E = R² and F = R, the tangent plane at (a, b, c) to the surface of equation z = f(x, y)
is defined by the equation

    z = c + ∂f/∂x (a, b)(x − a) + ∂f/∂y (a, b)(y − b).

If E = R and F = R², the tangent line at (a, b, c) to the curve of equations y = g(x),
z = h(x), is defined by the equations

    y = b + Dg(a)(x − a),
    z = c + Dh(a)(x − a).

Thus, derivatives and partial derivatives have the desired intended geometric interpretation
as tangent spaces. Of course, in order to deal with this topic properly, we really would
have to go deeper into the study of (differential) manifolds.
We now briefly consider second-order and higher-order derivatives.
28.5 Second-Order and Higher-Order Derivatives
Given two normed affine spaces E and F, and some open subset A of E, if Df(a) is defined
for every a ∈ A, then we have a mapping Df : A → L(→E; →F). Since L(→E; →F) is a normed
vector space, if Df exists on an open subset U of A containing a, we can consider taking
the derivative of Df at some a ∈ A. If D(Df)(a) exists for every a ∈ A, we get a mapping
D²f : A → L(→E; L(→E; →F)), where D²f(a) = D(Df)(a), for every a ∈ A. If D²f(a) exists,
then for every u ∈ →E,

    D²f(a)(u) = D(Df)(a)(u) = D_u(Df)(a) ∈ L(→E; →F).

Recall from Proposition 26.46 that the map app from L(→E; →F) × →E to →F, defined such
that for every L ∈ L(→E; →F), for every v ∈ →E,

    app(L, v) = L(v),

is a continuous bilinear map. Thus, in particular, given a fixed v ∈ →E, the linear map
app_v : L(→E; →F) → →F, defined such that app_v(L) = L(v), is a continuous map.

Also recall from Proposition 28.6 that if h : A → G is a function such that Dh(a) exists,
and k : G → H is a continuous linear map, then D(k ∘ h)(a) exists, and

    k(Dh(a)(u)) = D(k ∘ h)(a)(u),

that is,

    k(D_u h(a)) = D_u(k ∘ h)(a).

Applying these two facts to h = Df and to k = app_v, we have

    D_u(Df)(a)(v) = D_u(app_v ∘ Df)(a).

But (app_v ∘ Df)(x) = Df(x)(v) = D_v f(x), for every x ∈ A, that is, app_v ∘ Df = D_v f on A.
So we have

    D_u(Df)(a)(v) = D_u(D_v f)(a),
Then, the above discussion can be summarized by saying that when D²f(a) is defined,
we have

    D²f(a)(u, v) = D_u D_v f(a).

When E has finite dimension and (a₀, (e₁, . . . , eₙ)) is a frame for E, we denote D_{e_j} D_{e_i} f(a)
by ∂²f/∂x_i∂x_j (a), when i ≠ j, and we denote D_{e_i} D_{e_i} f(a) by ∂²f/∂x_i² (a).
The following important lemma attributed to Schwarz can be shown, using Lemma 28.11.
Given a bilinear map f : →E × →E → →F, recall that f is symmetric if

    f(u, v) = f(v, u),

for all u, v ∈ →E.

Lemma 28.18. (Schwarz's lemma) Given two normed affine spaces E and F, given any
open subset A of E, given any f : A → F, for every a ∈ A, if D²f(a) exists, then D²f(a) ∈
L₂(→E, →E; →F) is a continuous symmetric bilinear map. As a corollary, if E is of finite
dimension n, and (a₀, (e₁, . . . , eₙ)) is a frame for E, we have

    ∂²f/∂x_i∂x_j (a) = ∂²f/∂x_j∂x_i (a).
Remark: There is a variation of the above lemma which does not assume the existence of
D²f(a), but instead assumes that D_u D_v f and D_v D_u f exist on an open subset containing a
and are continuous at a, and concludes that D_u D_v f(a) = D_v D_u f(a). This is just a different
result, which does not imply Lemma 28.18 and is not a consequence of Lemma 28.18.

When E = R², the mere existence of ∂²f/∂x∂y (a) and ∂²f/∂y∂x (a) is not sufficient to
ensure the existence of D²f(a).
When E is of finite dimension n and (a₀, (e₁, . . . , eₙ)) is a frame for E, if D²f(a) exists,
then for every u = u₁e₁ + · · · + uₙeₙ and v = v₁e₁ + · · · + vₙeₙ in →E, since D²f(a) is a
symmetric bilinear form, we have

    D^2 f(a)(u, v) = \sum_{i=1, j=1}^{n} u_i v_j \frac{\partial^2 f}{\partial x_i \partial x_j}(a),
which can be written in matrix form as

    D^2 f(a)(u, v) = U^\top
    \begin{pmatrix}
    \frac{\partial^2 f}{\partial x_1^2}(a) & \frac{\partial^2 f}{\partial x_1 \partial x_2}(a) & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n}(a) \\
    \frac{\partial^2 f}{\partial x_1 \partial x_2}(a) & \frac{\partial^2 f}{\partial x_2^2}(a) & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n}(a) \\
    \vdots & \vdots & \ddots & \vdots \\
    \frac{\partial^2 f}{\partial x_1 \partial x_n}(a) & \frac{\partial^2 f}{\partial x_2 \partial x_n}(a) & \cdots & \frac{\partial^2 f}{\partial x_n^2}(a)
    \end{pmatrix}
    V,

where U is the column matrix representing u, and V is the column matrix representing v,
over the frame (a₀, (e₁, . . . , eₙ)).
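To make the symmetry statement of Schwarz's lemma concrete (an added illustration, with an arbitrarily chosen smooth function), the Hessian can be approximated with second-order difference quotients and checked to be symmetric.

import numpy as np

def numerical_hessian(f, a, t=1e-4):
    # Entry (i, j) approximates d^2 f / dx_i dx_j (a) with a central second difference.
    a = np.asarray(a, dtype=float)
    n = a.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = t
            ej = np.zeros(n); ej[j] = t
            H[i, j] = (f(a + ei + ej) - f(a + ei - ej)
                       - f(a - ei + ej) + f(a - ei - ej)) / (4 * t * t)
    return H

f = lambda p: np.sin(p[0] * p[1]) + p[0]**3 * p[1]    # smooth illustrative choice
H = numerical_hessian(f, [0.5, 1.2])
print(np.max(np.abs(H - H.T)))                        # symmetry: essentially zero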
The above symmetric matrix is called the Hessian of f at a. If F itself is of finite
dimension, and (b₀, (v₁, . . . , v_m)) is a frame for F, then f = (f₁, . . . , f_m), and each component
D²f(a)_i(u, v) of D²f(a)(u, v) (1 ≤ i ≤ m) can be written as

    D^2 f(a)_i(u, v) = U^\top
    \begin{pmatrix}
    \frac{\partial^2 f_i}{\partial x_1^2}(a) & \frac{\partial^2 f_i}{\partial x_1 \partial x_2}(a) & \cdots & \frac{\partial^2 f_i}{\partial x_1 \partial x_n}(a) \\
    \frac{\partial^2 f_i}{\partial x_1 \partial x_2}(a) & \frac{\partial^2 f_i}{\partial x_2^2}(a) & \cdots & \frac{\partial^2 f_i}{\partial x_2 \partial x_n}(a) \\
    \vdots & \vdots & \ddots & \vdots \\
    \frac{\partial^2 f_i}{\partial x_1 \partial x_n}(a) & \frac{\partial^2 f_i}{\partial x_2 \partial x_n}(a) & \cdots & \frac{\partial^2 f_i}{\partial x_n^2}(a)
    \end{pmatrix}
    V.

Thus, we could describe the vector D²f(a)(u, v) in terms of an mn×mn matrix consisting
of m diagonal blocks, which are the above Hessians, and the row matrix (U⊤, . . . , U⊤) (m
times) and the column matrix consisting of m copies of V.
We now indicate briefly how higher-order derivatives are defined. Let m ≥ 2. Given
a function f : A → F as before, for any a ∈ A, if the derivatives Dⁱf exist on A for all
i, 1 ≤ i ≤ m − 1, by induction, D^{m−1}f can be considered to be a continuous function
D^{m−1}f : A → L_{m−1}(→E^{m−1}; →F), and we define

    D^m f(a) = D(D^{m−1} f)(a).

Then, D^m f(a) can be identified with a continuous m-multilinear map in L_m(→E^m; →F). We
can then show (as we did before) that if D^m f(a) is defined, then

    D^m f(a)(u₁, . . . , u_m) = D_{u₁} . . . D_{u_m} f(a).

When E is of finite dimension n and (a₀, (e₁, . . . , eₙ)) is a frame for E, if D^m f(a) exists,
for every j₁, . . . , j_m ∈ {1, . . . , n}, we denote D_{e_{j_m}} . . . D_{e_{j_1}} f(a) by

    \frac{\partial^m f}{\partial x_{j_1} \cdots \partial x_{j_m}}(a).

Given an m-multilinear map f ∈ L_m(→E^m; →F), recall that f is symmetric if

    f(u_{π(1)}, . . . , u_{π(m)}) = f(u₁, . . . , u_m),

for all u₁, . . . , u_m ∈ →E and all permutations π on {1, . . . , m}. Then, the following generalization
of Schwarz's lemma holds.

Lemma 28.19. Given two normed affine spaces E and F, given any open subset A of E,
given any f : A → F, for every a ∈ A, for every m ≥ 1, if D^m f(a) exists, then D^m f(a) ∈
L_m(→E^m; →F) is a continuous symmetric m-multilinear map. As a corollary, if E is of finite
dimension n, and (a₀, (e₁, . . . , eₙ)) is a frame for E, we have

    \frac{\partial^m f}{\partial x_{j_1} \cdots \partial x_{j_m}}(a) = \frac{\partial^m f}{\partial x_{j_{\pi(1)}} \cdots \partial x_{j_{\pi(m)}}}(a),

for every j₁, . . . , j_m ∈ {1, . . . , n}, and for every permutation π on {1, . . . , m}.
If E is of finite dimension n, and (a₀, (e₁, . . . , eₙ)) is a frame for E, D^m f(a) is a symmetric
m-multilinear map, and we have

    D^m f(a)(u_1, \ldots, u_m) = \sum_{j} u_{1, j_1} \cdots u_{m, j_m} \frac{\partial^m f}{\partial x_{j_1} \cdots \partial x_{j_m}}(a),

where j ranges over all functions j : {1, . . . , m} → {1, . . . , n}, for any m vectors

    u_j = u_{j,1} e₁ + · · · + u_{j,n} eₙ.
The concept of C¹-function is generalized to the concept of C^m-function, and Theorem
28.12 can also be generalized.

Definition 28.11. Given two normed affine spaces E and F, and an open subset A of E,
for any m ≥ 1, we say that a function f : A → F is of class C^m on A, or a C^m-function on
A, if D^k f exists and is continuous on A for every k, 1 ≤ k ≤ m. We say that f : A → F
is of class C^∞ on A, or a C^∞-function on A, if D^k f exists and is continuous on A for every
k ≥ 1. A C^∞-function (on A) is also called a smooth function (on A). A C^m-diffeomorphism
f : A → B between A and B (where A is an open subset of E and B is an open subset
of F) is a bijection between A and B = f(A), such that both f : A → B and its inverse
f⁻¹ : B → A are C^m-functions.

Equivalently, f is a C^m-function on A if f is a C¹-function on A and Df is a C^{m−1}-function
on A.

We have the following theorem giving a necessary and sufficient condition for f to be a
C^m-function on A. A generalization to the case where E = (E₁, a₁) ⊕ · · · ⊕ (E_n, a_n) also
holds.

Theorem 28.20. Given two normed affine spaces E and F, where E is of finite dimension
n, and where (a₀, (u₁, . . . , uₙ)) is a frame of E, given any open subset A of E, given any
function f : A → F, for any m ≥ 1, f is a C^m-function on A iff every partial derivative
D_{u_{j_k}} . . . D_{u_{j_1}} f (or ∂^k f/∂x_{j_1} · · · ∂x_{j_k} (a)) is defined and continuous on A, for all
k, 1 ≤ k ≤ m, and all j₁, . . . , j_k ∈ {1, . . . , n}.
Recall that

    D^m f(a)(u_1, \ldots, u_m) = \sum_{j} u_{1, j_1} \cdots u_{m, j_m} \frac{\partial^m f}{\partial x_{j_1} \cdots \partial x_{j_m}}(a),

where j ranges over all functions j : {1, . . . , m} → {1, . . . , n}, for any m vectors

    u_j = u_{j,1} e₁ + · · · + u_{j,n} eₙ.

We can then group the various occurrences of ∂x_{j_k} corresponding to the same variable x_{j_k},
and this leads to the notation

    \left(\frac{\partial}{\partial x_1}\right)^{\alpha_1} \left(\frac{\partial}{\partial x_2}\right)^{\alpha_2} \cdots \left(\frac{\partial}{\partial x_n}\right)^{\alpha_n} f(a),

where α₁ + α₂ + · · · + αₙ = m. If we denote (α₁, . . . , αₙ) simply by α, then we denote

    \left(\frac{\partial}{\partial x_1}\right)^{\alpha_1} \cdots \left(\frac{\partial}{\partial x_n}\right)^{\alpha_n} f

by ∂^α f, or (∂/∂x)^α f.
28.6 Taylor's Formula, Faà di Bruno's Formula
We discuss, without proofs, several versions of Taylor's formula. The hypotheses required in
each version become increasingly stronger. The first version can be viewed as a generalization
of the notion of derivative. Given an m-linear map f : →E^m → →F, for any vector h ∈ →E, we
abbreviate

    f(h, . . . , h)   (with h repeated m times)

by f(h^m). The version of Taylor's formula given next is sometimes referred to as the formula
of Taylor–Young.
Theorem 28.21. (Taylor–Young) Given two normed affine spaces E and F, for any open
subset A ⊆ E, for any function f : A → F, for any a ∈ A, if D^k f exists in A for all k,
1 ≤ k ≤ m − 1, and if D^m f(a) exists, then we have

    f(a + h) = f(a) + \frac{1}{1!} D^1 f(a)(h) + \cdots + \frac{1}{m!} D^m f(a)(h^m) + \|h\|^m \epsilon(h),

for any h such that a + h ∈ A, and where lim_{h→0, h≠0} ε(h) = 0.
The next version, Theorem 28.22, asserts that if

    ‖D^{m+1} f(x)‖ ≤ M  for all x ∈ ]a, a + h[,

for some M ≥ 0, then

    \Bigl\| f(a+h) - f(a) - \Bigl( \frac{1}{1!} D^1 f(a)(h) + \cdots + \frac{1}{m!} D^m f(a)(h^m) \Bigr) \Bigr\| \le M \frac{\|h\|^{m+1}}{(m+1)!}.

As a corollary, if L : →E^{m+1} → →F is a continuous (m + 1)-linear map, then

    \Bigl\| f(a+h) - f(a) - \Bigl( \frac{1}{1!} D^1 f(a)(h) + \cdots + \frac{1}{m!} D^m f(a)(h^m) + \frac{L(h^{m+1})}{(m+1)!} \Bigr) \Bigr\| \le M \frac{\|h\|^{m+1}}{(m+1)!},

where M = sup_{x∈]a,a+h[} ‖D^{m+1} f(x) − L‖.
The above theorem is sometimes stated under the slightly stronger assumption that f is
a C^m-function on A. If f : A → R is a real-valued function, Theorem 28.22 can be refined a
little bit. This version is often called the formula of Taylor–MacLaurin.

Theorem 28.23. (Taylor–MacLaurin) Let E be a normed affine space, let A be an open
subset of E, and let f : A → R be a real-valued function on A. Given any a ∈ A and any
h ≠ 0 in →E, if the closed segment [a, a + h] is contained in A, if D^k f exists in A for all k,
1 ≤ k ≤ m, and D^{m+1} f(x) exists at every point x of the open segment ]a, a + h[, then there
is some θ ∈ R, with 0 < θ < 1, such that

    f(a + h) = f(a) + \frac{1}{1!} D^1 f(a)(h) + \cdots + \frac{1}{m!} D^m f(a)(h^m) + \frac{1}{(m+1)!} D^{m+1} f(a + \theta h)(h^{m+1}).
We also mention, for "mathematical culture," a version with integral remainder, in the
case of a real-valued function. This is usually called Taylor's formula with integral remainder.

Theorem 28.24. (Taylor's formula with integral remainder) Let E be a normed affine space,
let A be an open subset of E, and let f : A → R be a real-valued function on A. Given any
a ∈ A and any h ≠ 0 in →E, if the closed segment [a, a + h] is contained in A, and if f is a
C^{m+1}-function on A, then we have

    f(a + h) = f(a) + \frac{1}{1!} D^1 f(a)(h) + \cdots + \frac{1}{m!} D^m f(a)(h^m) + \int_0^1 \frac{(1 - t)^m}{m!} D^{m+1} f(a + th)(h^{m+1})\, dt.
The advantage of the above formula is that it gives an explicit remainder. We now
examine briefly the situation where E is of finite dimension n, and (a₀, (e₁, . . . , eₙ)) is a
frame for E. In this case, we get a more explicit expression for the sum

    \sum_{k=0}^{m} \frac{1}{k!} D^k f(a)(h^k)

involved in Taylor's formula. If h = h₁e₁ + · · · + hₙeₙ, then we have

    \frac{1}{k!} D^k f(a)(h^k) = \sum_{k_1 + \cdots + k_n = k} \frac{h_1^{k_1} \cdots h_n^{k_n}}{k_1! \cdots k_n!}\, \frac{\partial^{k_1}}{\partial x_1^{k_1}} \cdots \frac{\partial^{k_n}}{\partial x_n^{k_n}} f(a),

which, using the abbreviated notation introduced at the end of Section 28.5, can also be
written as

    \sum_{k=0}^{m} \frac{1}{k!} D^k f(a)(h^k) = \sum_{|\alpha| \le m} \frac{h^\alpha}{\alpha!}\, \partial^\alpha f(a).
The advantage of the above notation is that it is the same as the notation used when
n = 1, i.e., when E = R (or E = C). Indeed, in this case, the Taylor–MacLaurin formula
reads as:

    f(a + h) = f(a) + \frac{h}{1!} D^1 f(a) + \cdots + \frac{h^m}{m!} D^m f(a) + \frac{h^{m+1}}{(m+1)!} D^{m+1} f(a + \theta h),

for some θ ∈ R, with 0 < θ < 1, where D^k f(a) is the value of the k-th derivative of f at
a (and thus, as we have already said several times, this is the kth-order vector derivative,
which is just a scalar, since F = R).

In the above formula, the assumptions are that f : [a, a + h] → R is a C^m-function on
[a, a + h], and that D^{m+1} f(x) exists for every x ∈ ]a, a + h[.
Taylor's formula is useful to study the local properties of curves and surfaces. In the case
of a curve, we consider a function f : [r, s] → F from a closed interval [r, s] of R to some
affine space F; the derivatives D^k f(a)(h^k) correspond to vectors h^k D^k f(a), where D^k f(a) is
the kth vector derivative of f at a (which is really D^k f(a)(1, . . . , 1)), and for any a ∈ ]r, s[,
Theorem 28.21 yields the following formula:

    f(a + h) = f(a) + \frac{h}{1!} D^1 f(a) + \cdots + \frac{h^m}{m!} D^m f(a) + h^m \epsilon(h),

for any h such that a + h ∈ ]r, s[, and where lim_{h→0, h≠0} ε(h) = 0.
In the case of functions f : Rⁿ → R, it is convenient to have formulae for the Taylor–Young
formula and the Taylor–MacLaurin formula in terms of the gradient and the Hessian.
Recall that the gradient ∇f(a) of f at a ∈ Rⁿ is the column vector

    \nabla f(a) = \begin{pmatrix} \frac{\partial f}{\partial x_1}(a) \\ \frac{\partial f}{\partial x_2}(a) \\ \vdots \\ \frac{\partial f}{\partial x_n}(a) \end{pmatrix},

and that

    f′(a)(u) = Df(a)(u) = ∇f(a) · u,

for any u ∈ Rⁿ (where · means inner product). The Hessian matrix ∇²f(a) of f at a ∈ Rⁿ
is the symmetric matrix

    \nabla^2 f(a) = \begin{pmatrix}
    \frac{\partial^2 f}{\partial x_1^2}(a) & \frac{\partial^2 f}{\partial x_1 \partial x_2}(a) & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n}(a) \\
    \frac{\partial^2 f}{\partial x_1 \partial x_2}(a) & \frac{\partial^2 f}{\partial x_2^2}(a) & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n}(a) \\
    \vdots & \vdots & \ddots & \vdots \\
    \frac{\partial^2 f}{\partial x_1 \partial x_n}(a) & \frac{\partial^2 f}{\partial x_2 \partial x_n}(a) & \cdots & \frac{\partial^2 f}{\partial x_n^2}(a)
    \end{pmatrix},

and we have

    D²f(a)(u, v) = u⊤ ∇²f(a) v = u · ∇²f(a)v,

for all u, v ∈ Rⁿ. Then, we have the following three formulations of the formula of Taylor–Young
of order 2:
    f(a + h) = f(a) + Df(a)(h) + ½ D²f(a)(h, h) + ‖h‖² ε(h),
    f(a + h) = f(a) + ∇f(a) · h + ½ (h · ∇²f(a)h) + (h · h) ε(h),
    f(a + h) = f(a) + (∇f(a))⊤ h + ½ (h⊤ ∇²f(a) h) + (h⊤ h) ε(h),

with lim_{h→0} ε(h) = 0.
One should keep in mind that only the first formula is intrinsic (i.e., does not depend on
the choice of a basis), whereas the other two depend on the basis and the inner product chosen
on Rⁿ. As an exercise, the reader should write similar formulae for the Taylor–MacLaurin
formula of order 2.
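The following small numerical sketch (an addition for illustration, with an arbitrarily chosen f) compares f(a + h) with the order-2 Taylor–Young approximation f(a) + ∇f(a)·h + ½ h⊤∇²f(a)h for a small h.

import numpy as np

f = lambda x, y: np.exp(x) * np.sin(y)             # illustrative choice
a = np.array([0.3, 1.1])
grad = np.array([np.exp(a[0]) * np.sin(a[1]),
                 np.exp(a[0]) * np.cos(a[1])])
hess = np.array([[np.exp(a[0]) * np.sin(a[1]),  np.exp(a[0]) * np.cos(a[1])],
                 [np.exp(a[0]) * np.cos(a[1]), -np.exp(a[0]) * np.sin(a[1])]])
h = np.array([1e-2, -2e-2])
taylor2 = f(*a) + grad @ h + 0.5 * h @ hess @ h
print(abs(f(*(a + h)) - taylor2))                  # error is O(||h||^3)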
Another application of Taylor's formula is the derivation of a formula which gives the m-th
derivative of the composition of two functions, usually known as Faà di Bruno's formula.
This formula is useful when dealing with geometric continuity of spline curves and surfaces.
Proposition 28.25. Given any normed affine space E, for any function f : R → R and any
function g : R → E, for any a ∈ R, letting b = f(a), f^{(i)}(a) = Dⁱf(a), and g^{(i)}(b) = Dⁱg(b),
for any m ≥ 1, if f^{(i)}(a) and g^{(i)}(b) exist for all i, 1 ≤ i ≤ m, then (g ∘ f)^{(m)}(a) = D^m(g ∘ f)(a)
exists and is given by the following formula:

    (g \circ f)^{(m)}(a) = \sum_{0 \le j \le m} \; \sum_{\substack{i_1 + i_2 + \cdots + i_m = j \\ i_1 + 2 i_2 + \cdots + m i_m = m \\ i_1, i_2, \ldots, i_m \ge 0}} \frac{m!}{i_1! \cdots i_m!}\, g^{(j)}(b) \left(\frac{f^{(1)}(a)}{1!}\right)^{i_1} \cdots \left(\frac{f^{(m)}(a)}{m!}\right)^{i_m}.
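For m = 2 the formula reduces to (g ∘ f)″(a) = g″(b) f′(a)² + g′(b) f″(a); the sketch below (added for illustration, with arbitrarily chosen f and g) checks this against a second difference quotient.

import numpy as np

f = lambda x: np.sin(x)                  # illustrative inner function
g = lambda y: np.exp(2 * y)              # illustrative outer function
a = 0.4
b = f(a)
f1, f2 = np.cos(a), -np.sin(a)           # f'(a), f''(a)
g1, g2 = 2 * np.exp(2 * b), 4 * np.exp(2 * b)   # g'(b), g''(b)

faa_di_bruno = g2 * f1**2 + g1 * f2      # m = 2 case of Proposition 28.25

h = 1e-4
gf = lambda x: g(f(x))
numeric = (gf(a + h) - 2 * gf(a) + gf(a - h)) / h**2   # second difference quotient
print(faa_di_bruno, numeric)             # agree to roughly 4 decimal places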
28.7 Vector Fields, Covariant Derivatives, Lie Brackets
In this section, we briefly consider vector fields and covariant derivatives of vector fields.
Such derivatives play an important role in continuum mechanics. Given a normed affine
space (E, →E), a vector field over (E, →E) is a function X : E → →E. Intuitively, a vector field
assigns a vector to every point in E. Such vectors could be forces, velocities, accelerations,
etc.

Given two vector fields X, Y defined on some open subset Ω of E, for every point a ∈ Ω,
we would like to define the derivative of X with respect to Y at a. This is a type of directional
derivative that gives the variation of X as we move along Y, and we denote it by D_Y X(a).
The derivative D_Y X(a) is defined as follows.

Definition 28.12. Let (E, →E) be a normed affine space. Given any open subset Ω of E,
given any two vector fields X and Y defined over Ω, for any a ∈ Ω, the covariant derivative
(or Lie derivative) of X w.r.t. the vector field Y at a, denoted by D_Y X(a), is the limit (if it
exists)

    D_Y X(a) = lim_{t→0, t∈U} (X(a + tY(a)) − X(a)) / t,

where U = {t ∈ R | a + tY(a) ∈ Ω, t ≠ 0}.
If Y is a constant vector field, it is immediately verified that the map

    X ↦ D_Y X(a)

is a linear map called the derivative of the vector field X, and denoted by DX(a). If
f : E → R is a function, we define D_Y f(a) as the limit (if it exists)

    lim_{t→0, t∈U} (f(a + tY(a)) − f(a)) / t,

where U = {t ∈ R | a + tY(a) ∈ Ω, t ≠ 0}.
Proposition 28.26. The covariant derivative D_Y X(a) satisfies the following properties:

    D_{Y₁+Y₂} X(a) = D_{Y₁} X(a) + D_{Y₂} X(a),
    D_{fY} X(a) = f(a) D_Y X(a),
    D_Y (X₁ + X₂)(a) = D_Y X₁(a) + D_Y X₂(a),
    D_Y (fX)(a) = D_Y f(a) X(a) + f(a) D_Y X(a),

where X, Y, X₁, X₂, Y₁, Y₂ are smooth vector fields over Ω, and f : E → R is a smooth function.

In differential geometry, the above properties are taken as the axioms of affine connections,
in order to define covariant derivatives of vector fields over manifolds. In many cases,
the vector field Y is the tangent field of some smooth curve γ : ]−η, η[ → E. If so, the
following proposition holds.
Proposition 28.27. Given a smooth curve γ : ]−η, η[ → E, letting Y be the vector field
defined on γ(]−η, η[) such that

    Y(γ(u)) = dγ/dt (u),

for any vector field X defined on γ(]−η, η[), we have

    D_Y X(a) = d/dt (X(γ(t)))(0),

where a = γ(0).
The derivative D_Y X(a) is thus the derivative of the vector field X along the curve γ, and
it is called the covariant derivative of X along γ.

Given an affine frame (O, (u₁, . . . , uₙ)) for (E, →E), it is easily seen that the covariant
derivative D_Y X(a) is expressed as follows:

    D_Y X(a) = \sum_{i=1}^{n} \sum_{j=1}^{n} Y_j \frac{\partial X_i}{\partial x_j}(a)\, e_i.

Generally, D_Y X(a) ≠ D_X Y(a). The quantity

    [X, Y] = D_X Y − D_Y X

is called the Lie bracket of the vector fields X and Y. The Lie bracket plays an important
role in differential geometry. In terms of coordinates,

    [X, Y] = \sum_{i=1}^{n} \sum_{j=1}^{n} \Bigl( X_j \frac{\partial Y_i}{\partial x_j} - Y_j \frac{\partial X_i}{\partial x_j} \Bigr) e_i.
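As a small added check of the coordinate formula, for linear vector fields X(p) = Ap and Y(p) = Bp it gives [X, Y](p) = (BA − AB)p; the sketch below (an illustration, not from the original text) verifies this against the limit definition of the covariant derivatives.

import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
X = lambda p: A @ p
Y = lambda p: B @ p
p = rng.standard_normal(3)
t = 1e-6

DXY = (Y(p + t * X(p)) - Y(p)) / t        # D_X Y(p), by the limit definition
DYX = (X(p + t * Y(p)) - X(p)) / t        # D_Y X(p)
bracket_numeric = DXY - DYX               # [X, Y](p) = D_X Y(p) - D_Y X(p)
bracket_formula = (B @ A - A @ B) @ p     # coordinate formula for linear fields
print(np.max(np.abs(bracket_numeric - bracket_formula)))   # ~0 up to rounding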
28.8 Further Readings
A thorough treatment of differential calculus can be found in Munkres [85], Lang [70],
Schwartz [94], Cartan [20], and Avez [6]. The techniques of differential calculus have many
applications, especially to the geometry of curves and surfaces and to differential geometry
in general. For this, we recommend do Carmo [30, 31] (two beautiful classics on the subject),
Kreyszig [65], Stoker [102], Gray [50], Berger and Gostiaux [10], Milnor [81], Lang [68],
Warner [114] and Choquet-Bruhat [23].
Chapter 29

Extrema of Real-Valued Functions

29.1 Local Extrema and Lagrange Multipliers
In either case, we say that J has a local extremum (or relative extremum) at u. We say that
J has a strict local minimum (resp. strict local maximum) at the point u ∈ E if there is
some open subset W ⊆ E containing u such that

    J(u) < J(w)  for all w ∈ W − {u}

(resp.

    J(u) > J(w)  for all w ∈ W − {u}).
By abuse of language, we often say that the point u itself is a local minimum or a
local maximum, even though, strictly speaking, this does not make sense.

We begin with a well-known necessary condition for a local extremum.
Proposition 29.1. Let E be a normed vector space and let J : Ω → R be a function, with Ω
some open subset of E. If the function J has a local extremum at some point u ∈ Ω and
if J is differentiable at u, then

    dJ(u) = J′(u) = 0.

Proof. Pick any v ∈ E. Since Ω is open, for t small enough we have u + tv ∈ Ω, so there is
an open interval I ⊆ R such that the function φ given by

    φ(t) = J(u + tv)

for all t ∈ I is well-defined. By applying the chain rule, we see that φ is differentiable at
t = 0, and we get

    φ′(0) = dJ_u(v).

Without loss of generality, assume that u is a local minimum. Then we have

    φ′(0) = lim_{t→0⁻} (φ(t) − φ(0)) / t ≤ 0
and
    φ′(0) = lim_{t→0⁺} (φ(t) − φ(0)) / t ≥ 0,

which shows that φ′(0) = dJ_u(v) = 0. As v ∈ E is arbitrary, we conclude that dJ_u = 0.
It is important to note that the fact that Ω is open is crucial. For example, if J is the
identity function on [0, 1], then dJ(x) = 1 for all x ∈ [0, 1], even though J has a minimum at
x = 0 and a maximum at x = 1. Also, if E = Rⁿ, then the condition dJ(u) = 0 is equivalent
to the system

    ∂J/∂x₁ (u₁, . . . , uₙ) = 0
    . . .
    ∂J/∂xₙ (u₁, . . . , uₙ) = 0.

In many practical situations, we need to look for local extrema of a function J under
additional constraints. This situation can be formalized conveniently as follows: We have a
function J : Ω → R defined on some open subset Ω of a normed vector space, but we also
have some subset U of Ω, and we are looking for the local extrema of J with respect to the
set U. Note that in most cases, U is not open. In fact, U is usually closed.
and ∂φ/∂x₂ (u₁, u₂) is an isomorphism in L(E₂; E₂). The implicit function theorem then yields,
locally, a function g with x₂ = g(x₁) on the constraint set, with

    dg(u₁) = −(∂φ/∂x₂ (u))⁻¹ ∘ ∂φ/∂x₁ (u),

so that, letting G(x₁) = J(x₁, g(x₁)),

    dG(u₁) = ∂J/∂x₁ (u) + ∂J/∂x₂ (u) ∘ dg(u₁)
           = ∂J/∂x₁ (u) − ∂J/∂x₂ (u) ∘ (∂φ/∂x₂ (u))⁻¹ ∘ ∂φ/∂x₁ (u).

If we let

    Λ(u) = −∂J/∂x₂ (u) ∘ (∂φ/∂x₂ (u))⁻¹,

then, since dG(u₁) = 0 at a constrained local extremum, we get

    dJ(u) = ∂J/∂x₁ (u) + ∂J/∂x₂ (u)
          = −Λ(u) ∘ (∂φ/∂x₁ (u) + ∂φ/∂x₂ (u))
          = −Λ(u) ∘ dφ(u),

that is, dJ(u) + Λ(u) ∘ dφ(u) = 0.
and let u ∈ U be a point such that the derivatives dφᵢ(u) ∈ L(Rⁿ; R) are linearly independent;
equivalently, assume that the m × n matrix (∂φᵢ/∂x_j)(u) has rank m. If J : Ω → R is a
function which is differentiable at u ∈ U and if J has a local constrained extremum at u,
then there exist m numbers λᵢ(u) ∈ R, uniquely defined, such that

    dJ(u) + λ₁(u) dφ₁(u) + · · · + λ_m(u) dφ_m(u) = 0;

equivalently,

    ∇J(u) + λ₁(u) ∇φ₁(u) + · · · + λ_m(u) ∇φ_m(u) = 0.

Proof. The linear independence of the m linear forms dφᵢ(u) is equivalent to the fact that
the m × n matrix A = (∂φᵢ/∂x_j)(u) has rank m. By reordering the columns, we may
assume that the first m columns are linearly independent. If we let φ : Ω → R^m be the
function defined by

    φ(v) = (φ₁(v), . . . , φ_m(v))

for all v ∈ Ω, then we see that ∂φ/∂x₂ (u) is invertible and both ∂φ/∂x₂ (u) and its inverse
are continuous, so that Theorem 29.2 applies, and there is some (continuous) linear form
Λ(u) ∈ L(R^m; R) such that

    dJ(u) + Λ(u) ∘ dφ(u) = 0.

However, Λ(u) is defined by some m-tuple (λ₁(u), . . . , λ_m(u)) ∈ R^m, and in view of the
definition of φ, the above equation is equivalent to

    dJ(u) + λ₁(u) dφ₁(u) + · · · + λ_m(u) dφ_m(u) = 0.

The uniqueness of the λᵢ(u) is a consequence of the linear independence of the dφᵢ(u).

The numbers λᵢ(u) involved in Theorem 29.3 are called the Lagrange multipliers associated
with the constrained extremum u (again, with some minor abuse of language). The
linear independence of the linear forms dφᵢ(u) is equivalent to the fact that the Jacobian
matrix (∂φᵢ/∂x_j)(u) of φ = (φ₁, . . . , φ_m) at u has rank m. If m = 1, the linear independence
of the dφᵢ(u) reduces to the condition ∇φ₁(u) ≠ 0.
A fruitful way to reformulate the use of Lagrange multipliers is to introduce the notion
of the Lagrangian associated with our constrained extremum problem. This is the function
L : Ω × R^m → R given by

    L(v, λ) = J(v) + λ₁ φ₁(v) + · · · + λ_m φ_m(v),

with λ = (λ₁, . . . , λ_m). Then, there is some λ = (λ₁, . . . , λ_m) and some u ∈ U such that

    dJ(u) + λ₁ dφ₁(u) + · · · + λ_m dφ_m(u) = 0

if and only if

    dL(u, λ) = 0,

or equivalently

    ∇L(u, λ) = 0;

that is, iff (u, λ) is a critical point of the Lagrangian L.

Indeed, dL(u, λ) = 0 is equivalent to

    ∂L/∂v (u, λ) = 0,
    ∂L/∂λ₁ (u, λ) = 0,
    . . .
    ∂L/∂λ_m (u, λ) = 0,

and since

    ∂L/∂v (u, λ) = dJ(u) + λ₁ dφ₁(u) + · · · + λ_m dφ_m(u)

and

    ∂L/∂λᵢ (u, λ) = φᵢ(u),

we get

    dJ(u) + λ₁ dφ₁(u) + · · · + λ_m dφ_m(u) = 0

and

    φ₁(u) = · · · = φ_m(u) = 0,

that is, u ∈ U.
If we write out explicitly the condition

    dJ(u) + λ₁ dφ₁(u) + · · · + λ_m dφ_m(u) = 0,

we get the n × m system

    ∂J/∂x₁ (u) + λ₁ ∂φ₁/∂x₁ (u) + · · · + λ_m ∂φ_m/∂x₁ (u) = 0
    . . .
    ∂J/∂xₙ (u) + λ₁ ∂φ₁/∂xₙ (u) + · · · + λ_m ∂φ_m/∂xₙ (u) = 0,

and it is important to note that the matrix of this system is the transpose of the Jacobian
matrix of φ at u. If we write Jac(φ)(u) = (∂φᵢ/∂x_j)(u) for the Jacobian matrix of φ (at
u), then the above system is written in matrix form as

    ∇J(u) + (Jac(φ)(u))⊤ λ = 0,

where λ is viewed as the column vector (λ₁, . . . , λ_m)⊤.

Remark: If the Jacobian matrix Jac(φ)(v) = (∂φᵢ/∂x_j)(v) has rank m for all v ∈ U
(which is equivalent to the linear independence of the linear forms dφᵢ(v)), then we say that
0 ∈ R^m is a regular value of φ. In this case, it is known that

    U = {v ∈ Ω | φ(v) = 0}

is a smooth submanifold of dimension n − m, and that

    T_v U = {w ∈ Rⁿ | dφᵢ(v)(w) = 0, 1 ≤ i ≤ m} = \bigcap_{i=1}^{m} \mathrm{Ker}\, d\varphi_i(v)

is the tangent space to U at v (a vector space of dimension n − m).
The set Z(J) = {v ∈ Ω | J(v) = J(u)} (the level set of level J(u)) is a hypersurface in Ω,
and if dJ(u) ≠ 0, the zero locus of dJ(u) is the tangent space T_u Z(J) to Z(J) at u (a vector
space of dimension n − 1), where

    T_u Z(J) = {w ∈ Rⁿ | dJ(u)(w) = 0}.

Consequently, Theorem 29.3 asserts that

    T_u U ⊆ T_u Z(J);

this is a geometric condition.
The beauty of the Lagrangian is that the constraints {φᵢ(v) = 0} have been incorporated
into the function L(v, λ), and that the necessary condition for the existence of a constrained
local extremum of J is reduced to the necessary condition for the existence of a local extremum
of the unconstrained L.

However, one should be careful to check that the assumptions of Theorem 29.3 are
satisfied (in particular, the linear independence of the linear forms dφᵢ). For example, let
J : R³ → R be given by

    J(x, y, z) = x + y + z²

and g : R³ → R by

    g(x, y, z) = x² + y².

If we applied the method blindly, at a constrained local extremum (0, 0, z) there would be
some λ such that

    ∂J/∂x (0, 0, z) = λ ∂g/∂x (0, 0, z),   ∂J/∂y (0, 0, z) = λ ∂g/∂y (0, 0, z),   ∂J/∂z (0, 0, z) = λ ∂g/∂z (0, 0, z).

Since

    ∂g/∂x (x, y, z) = 2x,   ∂g/∂y (x, y, z) = 2y,   ∂g/∂z (0, 0, z) = 0,

the partial derivatives above all vanish for x = y = 0, so at a local extremum we should also
have

    ∂J/∂x (0, 0, z) = 0,   ∂J/∂y (0, 0, z) = 0,   ∂J/∂z (0, 0, z) = 0,

but this is absurd since

    ∂J/∂x (x, y, z) = 1,   ∂J/∂y (x, y, z) = 1,   ∂J/∂z (x, y, z) = 2z.

The reader should enjoy finding the reason for the flaw in the argument.
One should also keep in mind that Theorem 29.3 gives only a necessary condition. The
(u, λ) may not correspond to local extrema! Thus, it is always necessary to analyze the local
behavior of J near a critical point u. This is generally difficult, but in the case where J is
affine or quadratic and the constraints are affine or quadratic, this is possible (although not
always easy).

Let us apply the above method to the following example in which E₁ = R, E₂ = R,
Ω = R², and

    J(x₁, x₂) = x₂,
    φ(x₁, x₂) = x₁² + x₂² − 1.

Observe that

    U = {(x₁, x₂) ∈ R² | x₁² + x₂² = 1}

is the unit circle, and since

    ∇φ(x₁, x₂) = (2x₁, 2x₂),

it is clear that ∇φ(x₁, x₂) ≠ 0 for every point (x₁, x₂) on the unit circle. If we form the
Lagrangian

    L(x₁, x₂, λ) = x₂ + λ(x₁² + x₂² − 1),

Theorem 29.3 says that a necessary condition for J to have a constrained local extremum is
that ∇L(x₁, x₂, λ) = 0, so the following equations must hold:

    2λx₁ = 0
    1 + 2λx₂ = 0
    x₁² + x₂² = 1.

The second equation implies that λ ≠ 0, and then the first yields x₁ = 0, so the third yields
x₂ = ±1, and we get two solutions:

    λ = 1/2,   (x₁, x₂) = (0, −1),
and
    λ = −1/2,   (x₁′, x₂′) = (0, 1).

We can check immediately that the first solution is a minimum and the second is a maximum.
The reader should look for a geometric interpretation of this problem.
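As an added numerical cross-check of this example, a generic constrained solver recovers the minimizer (0, −1); the sketch below uses SciPy's SLSQP method (assuming SciPy is available), with the circle as an equality constraint.

import numpy as np
from scipy.optimize import minimize

J = lambda x: x[1]                                   # objective J(x1, x2) = x2
circle = {'type': 'eq', 'fun': lambda x: x[0]**2 + x[1]**2 - 1.0}
res = minimize(J, x0=np.array([0.5, 0.5]), method='SLSQP', constraints=[circle])
print(res.x)                                          # ≈ (0, -1), the constrained minimum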
Let us now consider the case in which J is a quadratic function of the form

    J(v) = ½ v⊤Av − v⊤b,

where A is an n × n symmetric matrix, b ∈ Rⁿ, and the constraints are given by a linear
system Cv = d, where C is an m × n matrix with m < n and d ∈ R^m. We also assume that
C has rank m. In this case, the function φ is given by

    φ(v) = (Cv − d)⊤,

because we view φ(v) as a row vector (and v as a column vector), and since

    dφ(v)(w) = (Cw)⊤,

the condition that the Jacobian matrix of φ at u have rank m is satisfied. The Lagrangian
of this problem is

    L(v, λ) = ½ v⊤Av − v⊤b + (Cv − d)⊤λ = ½ v⊤Av − v⊤b + λ⊤(Cv − d),

where λ is viewed as a column vector, and

    \nabla L(v, \lambda) = \begin{pmatrix} Av - b + C^\top \lambda \\ Cv - d \end{pmatrix}.

Therefore, the necessary condition for constrained local extrema is

    Av + C⊤λ = b
    Cv = d,

which can be expressed in matrix form as

    \begin{pmatrix} A & C^\top \\ C & 0 \end{pmatrix} \begin{pmatrix} v \\ \lambda \end{pmatrix} = \begin{pmatrix} b \\ d \end{pmatrix},

where the matrix of the system is a symmetric matrix. We should not be surprised to find
the system of Section 18, except for some renaming of the matrices and vectors involved.
As we know from Section 18.2, the function J has a minimum iff A is positive definite, so
in general, if A is only a symmetric matrix, the critical points of the Lagrangian do not
correspond to extrema of J.
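A short numerical sketch (added for illustration) assembles and solves this block system with NumPy for a randomly generated positive definite A, and checks that both equations hold at the solution.

import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # positive definite, so the critical point is the constrained minimum
b = rng.standard_normal(n)
C = rng.standard_normal((m, n))          # full row rank with probability 1
d = rng.standard_normal(m)

K = np.block([[A, C.T], [C, np.zeros((m, m))]])
rhs = np.concatenate([b, d])
sol = np.linalg.solve(K, rhs)
v, lam = sol[:n], sol[n:]
print(np.allclose(C @ v, d))             # constraint Cv = d holds
print(np.allclose(A @ v + C.T @ lam, b)) # stationarity Av + C^T lam = b holds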
We now investigate conditions for the existence of extrema involving the second derivative
of J.
29.2 Using Second Derivatives to Find Extrema

For the sake of brevity, we consider only the case of local minima; analogous results are
obtained for local maxima (replace J by −J, since max_u J(u) = −min_u −J(u)). We begin
with a necessary condition for an unconstrained local minimum.
Proposition 29.4. Let E be a normed vector space and let J : Ω → R be a function, with Ω
some open subset of E. If the function J is differentiable in Ω, if J has a second derivative
D²J(u) at some point u ∈ Ω, and if J has a local minimum at u, then

    D²J(u)(w, w) ≥ 0  for all w ∈ E.

Proof. Pick any nonzero vector w ∈ E. Since Ω is open, for t small enough, u + tw ∈ Ω and
J(u + tw) ≥ J(u), so there is some open interval I ⊆ R such that

    u + tw ∈ Ω  and  J(u + tw) ≥ J(u)

for all t ∈ I. Using the Taylor–Young formula and the fact that we must have dJ(u) = 0
since J has a local minimum at u, we get

    0 ≤ J(u + tw) − J(u) = (t²/2) D²J(u)(w, w) + t²‖w‖² ε(tw),  with lim_{t→0} ε(tw) = 0,

which implies that D²J(u)(w, w) ≥ 0. Since the argument holds for all w ∈ E (trivially if
w = 0), the proposition is proved.

One should be cautioned that there is no converse to the previous proposition. For example,
the function f : x ↦ x³ has no local minimum at 0, yet df(0) = 0 and D²f(0)(u, v) = 0.
Similarly, the reader should check that the function f : R² → R given by

    f(x, y) = x² − 3y³

has no local minimum at (0, 0); yet df(0, 0) = 0 and D²f(0, 0)(u, v) = 2u² ≥ 0.
When E = Rⁿ, Proposition 29.4 says that a necessary condition for having a local
minimum is that the Hessian ∇²J(u) be positive semidefinite (it is always symmetric).

We now give sufficient conditions for the existence of a local minimum.

Theorem 29.5. Let E be a normed vector space, let J : Ω → R be a function with Ω some
open subset of E, and assume that J is differentiable in Ω and that dJ(u) = 0 at some point
u ∈ Ω. The following properties hold:

(1) If D²J(u) exists and if there is some number α ∈ R such that α > 0 and

    D²J(u)(w, w) ≥ α‖w‖²  for all w ∈ E,

then J has a strict local minimum at u.

(2) If D²J(v) exists for all v ∈ Ω and if there is a ball B ⊆ Ω centered at u such that

    D²J(v)(w, w) ≥ 0  for all v ∈ B and all w ∈ E,

then J has a local minimum at u.

Proof. (1) Using the formula of Taylor–Young, for every vector w small enough, we can write

    J(u + w) − J(u) = ½ D²J(u)(w, w) + ‖w‖² ε(w) ≥ (½α + ε(w)) ‖w‖²,

with lim_{w→0} ε(w) = 0. Consequently if we pick r > 0 small enough that |ε(w)| < α/2 for all w
with ‖w‖ < r, then J(u + w) > J(u) for all u + w ∈ B, where B is the open ball of center u
and radius r. This proves that J has a local strict minimum at u.

(2) The formula of Taylor–MacLaurin shows that for all u + w ∈ B, we have

    J(u + w) = J(u) + ½ D²J(v)(w, w) ≥ J(u),

for some v on the open segment ]u, u + w[.

Proposition 29.6. Given any symmetric positive definite n × n matrix A, there is some
α > 0 such that

    x⊤Ax ≥ α‖x‖²,  for all x ∈ Rⁿ.
We can combine Theorem 29.5 and Proposition 29.6 to obtain a useful sufficient condition
for the existence of a strict local minimum. First let us introduce some terminology.

Given a function J : Ω → R as before, say that a point u ∈ Ω is a nondegenerate critical
point if dJ(u) = 0 and if the Hessian matrix ∇²J(u) is invertible.

Proposition 29.7. Let J : Ω → R be a function defined on some open subset Ω ⊆ Rⁿ. If
J is differentiable in Ω and if some point u ∈ Ω is a nondegenerate critical point such that
∇²J(u) is positive definite, then J has a strict local minimum at u.

Remark: It is possible to generalize Proposition 29.7 to infinite-dimensional spaces by finding
a suitable generalization of the notion of a nondegenerate critical point. Firstly, we
assume that E is a Banach space (a complete normed vector space). Then, we define the
dual E′ of E as the set of continuous linear forms on E, so that E′ = L(E; R). Following
Lang, we use the notation E′ for the space of continuous linear forms to avoid confusion
with the space E* = Hom(E, R) of all linear maps from E to R. A continuous bilinear map
φ : E × E → R in L₂(E, E; R) yields a map Φ from E to E′ given by

    Φ(u) = φ_u,

where φ_u ∈ E′ is the linear form defined by

    φ_u(v) = φ(u, v).

It is easy to check that φ_u is continuous and that the map Φ is continuous. Then, we say
that φ is nondegenerate iff Φ : E → E′ is an isomorphism of Banach spaces, which means
that Φ is invertible and that both Φ and Φ⁻¹ are continuous linear maps. Given a function
J : Ω → R differentiable on Ω as before (where Ω is an open subset of E), if D²J(u) exists
for some u ∈ Ω, we say that u is a nondegenerate critical point if dJ(u) = 0 and if D²J(u) is
nondegenerate. Of course, D²J(u) is positive definite if D²J(u)(w, w) > 0 for all w ∈ E − {0}.

Using the above definition, Proposition 29.6 can be generalized to a nondegenerate positive
definite bilinear form (on a Banach space) and Theorem 29.7 can also be generalized to
the situation where J : Ω → R is defined on an open subset of a Banach space. For details
and proofs, see Cartan [20] (Part I Chapter 8) and Avez [6] (Chapter 8 and Chapter 10).

In the next section, we make use of convexity; both on the domain and on the function
J itself.
29.3 Using Convexity to Find Extrema

Definition 29.3. Given any real vector space E, we say that a subset C of E is convex if
either C = ∅ or if for every pair of points u, v ∈ C,

    (1 − λ)u + λv ∈ C  for all λ ∈ R such that 0 ≤ λ ≤ 1.

A function f : C → R defined on a convex subset C is convex (on C) if for every pair of
points u, v ∈ C,

    f((1 − λ)u + λv) ≤ (1 − λ)f(u) + λf(v)  for all λ ∈ R such that 0 ≤ λ ≤ 1;

the function f is strictly convex (on C) if for every pair of distinct points u, v ∈ C (u ≠ v),

    f((1 − λ)u + λv) < (1 − λ)f(u) + λf(v)  for all λ ∈ R such that 0 < λ < 1.

Given any two points u, v ∈ E, the line segment [u, v] is the set

    [u, v] = {(1 − λ)u + λv ∈ E | λ ∈ R, 0 ≤ λ ≤ 1}.

Halfspaces, that is, sets of the form {x ∈ E | h(x) ≤ c} for some affine form h and some c ∈ R,
are convex. Any intersection of halfspaces is convex. More generally, any intersection of
convex sets is convex.

Linear forms are convex functions (but not strictly convex). Any norm ‖·‖ : E → R₊ is
a convex function. The max function,

    max(x₁, . . . , xₙ) = max{x₁, . . . , xₙ},

is convex on Rⁿ. The exponential x ↦ e^{cx} is strictly convex for any c ≠ 0 (c ∈ R).
The logarithm function is concave on R₊ − {0}, and the log-determinant function log det is
concave on the set of symmetric positive definite matrices. This function plays an important
role in convex optimization. An excellent exposition of convexity and its applications to
optimization can be found in Boyd [17].

Here is a necessary condition for a function to have a local minimum with respect to a
convex subset U.
Theorem 29.8. (Necessary condition for a local minimum on a convex subset) Let J : Ω → R
be a function defined on some open subset Ω of a normed vector space E and let U ⊆ Ω be
a nonempty convex subset. Given any u ∈ U, if dJ(u) exists and if J has a local minimum
in u with respect to U, then

    dJ(u)(v − u) ≥ 0  for all v ∈ U.

Proof. Let v = u + w be an arbitrary point in U. Since U is convex, u + tw ∈ U for all t
with 0 ≤ t ≤ 1, and since J has a local minimum at u with respect to U, the Taylor–Young
formula yields, for t > 0 small enough,

    0 ≤ J(u + tw) − J(u) = t dJ(u)(w) + t‖w‖ ε(tw),

so we get

    dJ(u)(w) + ‖w‖ ε(tw) ≥ 0.

The above implies that dJ(u)(w) ≥ 0, because otherwise we could pick t > 0 small enough
so that

    dJ(u)(w) + ‖w‖ ε(tw) < 0,

a contradiction. Since the argument holds for all v = u + w ∈ U, the theorem is proved.

Observe that the convexity of U is a substitute for the use of Lagrange multipliers, but
we now have to deal with an inequality instead of an equality.

Consider the special case where U is a subspace of E. In this case, since u ∈ U we have
2u ∈ U, and for any u + w ∈ U, we must have 2u − (u + w) = u − w ∈ U. The previous
theorem implies that dJ(u)(w) ≥ 0 and dJ(u)(−w) ≥ 0, that is, dJ(u)(w) = 0. Since the
argument holds for all w ∈ U (because U is a subspace, if u, w ∈ U, then u + w ∈ U), we
conclude that

    dJ(u)(w) = 0  for all w ∈ U.
We will now characterize convex functions when they have a first derivative or a second
derivative.

Proposition 29.9. (Convexity and first derivative) Let f : Ω → R be a function differentiable
on some open subset Ω of a normed vector space E and let U ⊆ Ω be a nonempty convex
subset.

(1) The function f is convex on U iff

    f(v) ≥ f(u) + df(u)(v − u)  for all u, v ∈ U.

(2) The function f is strictly convex on U iff

    f(v) > f(u) + df(u)(v − u)  for all u, v ∈ U with u ≠ v.

Proof. Let u, v ∈ U be any two distinct points and pick λ ∈ R with 0 < λ < 1. If the
function f is convex, then

    f((1 − λ)u + λv) ≤ (1 − λ)f(u) + λf(v),

which yields

    (f((1 − λ)u + λv) − f(u)) / λ ≤ f(v) − f(u).

It follows that

    df(u)(v − u) = lim_{λ→0⁺} (f((1 − λ)u + λv) − f(u)) / λ ≤ f(v) − f(u).

If f is strictly convex, the above reasoning does not work, because a strict inequality is not
necessarily preserved by passing to the limit. We have recourse to the following trick: For
any ω such that 0 < ω < 1, observe that

    (1 − λ)u + λv = u + λ(v − u) = (1 − λ/ω)u + (λ/ω)(u + ω(v − u)).

If we assume that 0 < λ ≤ ω, the convexity of f shows that

    f(u + λ(v − u)) ≤ (1 − λ/ω)f(u) + (λ/ω)f(u + ω(v − u)),

which implies that

    (f(u + λ(v − u)) − f(u)) / λ ≤ (f(u + ω(v − u)) − f(u)) / ω.

Since 0 < ω < 1 and f is strictly convex,

    f(u + ω(v − u)) = f((1 − ω)u + ωv) < (1 − ω)f(u) + ωf(v),

which implies that

    (f(u + ω(v − u)) − f(u)) / ω < f(v) − f(u),

and thus, passing to the limit as λ tends to 0,

    df(u)(v − u) ≤ (f(u + ω(v − u)) − f(u)) / ω < f(v) − f(u).
For the converse, assume that

    f(v) ≥ f(u) + df(u)(v − u)  for all u, v ∈ U.

For any two distinct points u, v ∈ U and any λ with 0 < λ < 1, let w = v + λ(u − v) =
(1 − λ)v + λu. Applying the assumed inequality at the point w, we get

    f(v) ≥ f(w) − λ df(w)(u − v)
and
    f(u) ≥ f(w) + (1 − λ) df(w)(u − v).

Multiplying the first inequality by 1 − λ, the second by λ, and adding them, we get

    (1 − λ)f(v) + λf(u) ≥ f(w) = f((1 − λ)v + λu),

which shows that f is convex on U. The strictly convex case is handled similarly, with strict
inequalities.

Proposition 29.10. (Convexity and second derivative) Let f : Ω → R be a function twice
differentiable on some open subset Ω of a normed vector space E and let U ⊆ Ω be a nonempty
convex subset.

(1) The function f is convex on U iff

    D²f(u)(v − u, v − u) ≥ 0  for all u, v ∈ U.

(2) If

    D²f(u)(v − u, v − u) > 0  for all u, v ∈ U with u ≠ v,

then f is strictly convex on U.
Proof. First, assume that the inequality in condition (1) is satisfied. For any two distinct
points u, v ∈ U, the formula of Taylor–MacLaurin yields

    f(v) − f(u) − df(u)(v − u) = ½ D²f(w)(v − u, v − u),

for some w = (1 − θ)u + θv = u + θ(v − u) with 0 < θ < 1, so that v − u = (v − w)/(1 − θ) and

    f(v) − f(u) − df(u)(v − u) = (1/(2(1 − θ)²)) D²f(w)(v − w, v − w).

Since D²f(w)(v − w, v − w) ≥ 0 by hypothesis, we conclude that f is convex by applying
Proposition 29.9(1).

Similarly, if (2) holds, the above reasoning and Proposition 29.9(2) imply that f is strictly
convex.

To prove the necessary condition in (1), define g : Ω → R by

    g(v) = f(v) − df(u)(v),

where u ∈ U is any point considered fixed. If f is convex, since

    g(v) − g(u) = f(v) − f(u) − df(u)(v − u),

Proposition 29.9 implies that f(v) − f(u) − df(u)(v − u) ≥ 0, which implies that g has a
minimum at u with respect to all v ∈ U. Therefore, we have dg(u) = 0. Observe that g is
twice differentiable in Ω and D²g(u) = D²f(u), so the formula of Taylor–Young yields, for
every v = u + w ∈ U and all t with 0 ≤ t ≤ 1,

    0 ≤ g(u + tw) − g(u) = (t²/2) D²g(u)(w, w) + ‖tw‖² ε(tw)
                        = (t²/2) (D²g(u)(w, w) + 2‖w‖² ε(tw)),

with lim_{t→0} ε(tw) = 0, and for t small enough, we must have D²g(u)(w, w) ≥ 0, as claimed.
The converse of Theorem 29.10 (2) is false as we see by considering the function f given by f(x) = x⁴. On the other hand, if f is a quadratic function of the form
f(u) = (1/2) u⊤Au − u⊤b,
where A is a symmetric matrix, we know that
df(u)(v) = v⊤(Au − b),
so
f(v) − f(u) − df(u)(v − u) = (1/2) v⊤Av − v⊤b − (1/2) u⊤Au + u⊤b − (v − u)⊤(Au − b)
                           = (1/2) v⊤Av − (1/2) u⊤Au − (v − u)⊤Au
                           = (1/2) v⊤Av + (1/2) u⊤Au − v⊤Au
                           = (1/2) (v − u)⊤A(v − u).
Therefore, Theorem 29.9 implies that if A is positive semidefinite, then f is convex, and if A is positive definite, then f is strictly convex. The converse follows by Theorem 29.10.
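The identity f(v) − f(u) − df(u)(v − u) = (1/2)(v − u)⊤A(v − u) is easy to check numerically. Below is a small Python/NumPy sketch (an illustration added here, not part of the text); the particular matrix A, vector b, and test points are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T                        # symmetric positive semidefinite
b = rng.standard_normal(n)

def f(x):
    # quadratic function f(x) = (1/2) x^T A x - x^T b
    return 0.5 * x @ A @ x - x @ b

def df(u, v):
    # derivative of f at u applied to v: df(u)(v) = v^T (A u - b)
    return v @ (A @ u - b)

u, v = rng.standard_normal(n), rng.standard_normal(n)
lhs = f(v) - f(u) - df(u, v - u)
rhs = 0.5 * (v - u) @ A @ (v - u)
assert abs(lhs - rhs) < 1e-10      # the two sides agree up to rounding
assert rhs >= 0                    # A is positive semidefinite, so f is convex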
We conclude this section by applying our previous theorems to convex functions defined
on convex subsets. In this case, local minima (resp. local maxima) are global minima (resp.
global maxima).
Definition 29.4. Let f : E → ℝ be any function defined on some normed vector space (or more generally, any set). For any u ∈ E, we say that f has a minimum in u (resp. maximum in u) if
f(u) ≤ f(v)   (resp. f(u) ≥ f(v))   for all v ∈ E.
We say that f has a strict minimum in u (resp. strict maximum in u) if
f(u) < f(v)   (resp. f(u) > f(v))   for all v ∈ E − {u}.
If U ⊆ E, we say that f has a minimum in u (resp. strict minimum in u) with respect to U if
f(u) ≤ f(v)   for all v ∈ U   (resp. f(u) < f(v)   for all v ∈ U − {u}),
and similarly for a maximum in u (resp. strict maximum in u) with respect to U with ≤ changed to ≥ and < to >.
Sometimes, we say global maximum (or minimum) to stress that a maximum (or a minimum) is not simply a local maximum (or minimum).
Theorem 29.11. Given any normed vector space E, let U be any nonempty convex subset of E.
(1) For any convex function J : U → ℝ, for any u ∈ U, if J has a local minimum at u in U, then J has a (global) minimum at u in U.
(2) Any strictly convex function J : U → ℝ has at most one minimum (in U), and if it does, then it is a strict minimum (in U).
(3) Let J : Ω → ℝ be any function defined on some open subset Ω of E with U ⊆ Ω and assume that J is convex on U. For any point u ∈ U, if dJ(u) exists, then J has a minimum in u with respect to U iff
dJ(u)(v − u) ≥ 0   for all v ∈ U.
(4) If the convex subset U in (3) is open, then the above condition is equivalent to
dJ(u) = 0.
Proof. (1) Let v = u + w be any arbitrary point in U. Since J is convex, for all t with 0 ≤ t ≤ 1, we have
J(u + tw) = J(u + t(v − u)) ≤ (1 − t)J(u) + tJ(v),
which yields
J(u + tw) − J(u) ≤ t(J(v) − J(u)).
Because J has a local minimum in u, there is some t₀ with 0 < t₀ < 1 such that
0 ≤ J(u + t₀w) − J(u),
which implies that J(v) − J(u) ≥ 0.

(2) If J is strictly convex, the above reasoning with w ≠ 0 shows that there is some t₀ with 0 < t₀ < 1 such that
0 ≤ J(u + t₀w) − J(u) < t₀(J(v) − J(u)),
which shows that u is a strict global minimum (in U), and thus that it is unique.
(3) We already know from Theorem 29.8 that the condition dJ(u)(v − u) ≥ 0 for all v ∈ U is necessary (even if J is not convex). Conversely, because J is convex, careful inspection of the proof of part (1) of Proposition 29.9 shows that only the fact that dJ(u) exists is needed to prove that
J(v) − J(u) ≥ dJ(u)(v − u)   for all v ∈ U,
and if
dJ(u)(v − u) ≥ 0   for all v ∈ U,
then
J(v) − J(u) ≥ 0   for all v ∈ U,
as claimed.
(4) If U is open, then for every u ∈ U we can find an open ball B centered at u of radius ε small enough so that B ⊆ U. Then, for any w ≠ 0 such that ‖w‖ < ε, we have both v = u + w ∈ B and v′ = u − w ∈ B, so condition (3) implies that
dJ(u)(w) ≥ 0   and   dJ(u)(−w) ≥ 0,
which yields
dJ(u)(w) = 0.
Since the above holds for all w ≠ 0 such that ‖w‖ < ε and since dJ(u) is linear, we leave it to the reader to fill in the details of the proof that dJ(u) = 0.
Theorem 29.11 can be used to rederive the fact that the least squares solutions of a linear system Ax = b (where A is an m × n matrix) are given by the normal equation
A⊤Ax = A⊤b.
For this, we consider the quadratic function
J(v) = (1/2)‖Av − b‖₂² − (1/2)‖b‖₂²,
and our least squares problem is equivalent to finding the minima of J on ℝⁿ. A computation reveals that
J(v) = (1/2) v⊤A⊤Av − v⊤A⊤b,
and so
dJ(u) = A⊤Au − A⊤b.
Since A⊤A is positive semidefinite, the function J is convex, and Theorem 29.11(4) implies that the minima of J are the solutions of the equation
A⊤Au − A⊤b = 0.
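As a concrete check, the minimizer obtained from the normal equation A⊤Ax = A⊤b agrees with the output of a standard least squares solver. The following Python/NumPy sketch is illustrative only (the random A and b are arbitrary, and A is assumed to have full column rank so that A⊤A is invertible).

import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 3
A = rng.standard_normal((m, n))    # has full column rank with probability 1
b = rng.standard_normal(m)

# Solve the normal equation A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Compare with NumPy's least squares routine
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(x_normal, x_lstsq)
print("least squares solution:", x_normal)

In practice one usually prefers a QR or SVD based solver to forming A⊤A explicitly, since the normal equations square the condition number of the problem.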
The considerations in this chapter reveal the need to find methods for finding the zeros of the derivative map
dJ : Ω → E′,
where Ω is some open subset of a normed vector space E and E′ is the space of all continuous linear forms on E (a subspace of the dual E∗). Generalizations of Newton's method yield such methods and they are the object of the next chapter.
29.4 Summary
The main concepts and results of this chapter are listed below:
Local minimum, local maximum, local extremum, strict local minimum, strict local
maximum.
Necessary condition for a local extremum involving the derivative; critical point.
Local minimum with respect to a subset U , local maximum with respect to a subset U ,
local extremum with respect to a subset U .
Constrained local extremum.
Necessary condition for a constrained extremum.
Necessary condition for a constrained extremum in terms of Lagrange multipliers.
Lagrangian.
Critical points of a Lagrangian.
Necessary condition of an unconstrained local minimum involving the second-order
derivative.
Sufficient condition for a local minimum involving the second-order derivative.
Chapter 30
Newton's Method and its Generalizations
30.1
In the previous chapter we saw that the problem of finding the extrema of a function J : Ω → ℝ leads to the problem of finding the zeros of its derivative J′ : Ω → E′, where E′ = L(E; ℝ) is the set of continuous linear functions from E to ℝ; that is, the dual of E, as defined in the Remark after Proposition 29.7.
This leads us to consider the problem in a more general form, namely: Given a function f : Ω → Y from an open subset Ω of a normed vector space X to a normed vector space Y, find
(i) Sufficient conditions which guarantee the existence of a zero of the function f; that is, an element a ∈ Ω such that f(a) = 0.
(ii) An algorithm for approximating such an a, that is, a sequence (x_k) of points of Ω whose limit is a.
When X = Y = ℝ, we can use Newton's method. We pick some initial element x₀ ∈ ℝ close enough to a zero a of f, and we define the sequence (x_k) by
x_{k+1} = x_k − f(x_k)/f′(x_k),
for all k ≥ 0, provided that f′(x_k) ≠ 0. The idea is to define x_{k+1} as the intersection of the x-axis with the tangent line to the graph of the function x ↦ f(x) at the point (x_k, f(x_k)). Indeed, the equation of this tangent line is
y − f(x_k) = f′(x_k)(x − x_k),
and its intersection with the x-axis is obtained for y = 0, which yields
x = x_k − f(x_k)/f′(x_k),
as claimed.
For example, if α > 0 and f(x) = x² − α, Newton's method yields the sequence
x_{k+1} = (1/2)(x_k + α/x_k)
to compute the square root √α of α. It can be shown that the method converges to √α for any x₀ > 0. Actually, the method also converges when x₀ < 0! Find out what is the limit.
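Here is a minimal Python sketch of this iteration (an illustration added here, not the text's algorithm; the stopping tolerance and the starting points are arbitrary choices). Running it with a negative x₀ answers the question above: the iterates converge to −√α.

def newton_sqrt(alpha, x0, tol=1e-12, max_iter=100):
    # Newton's method for f(x) = x**2 - alpha: x_{k+1} = (x_k + alpha / x_k) / 2
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + alpha / x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(newton_sqrt(2.0, 1.0))    # approximately  1.41421356...
print(newton_sqrt(2.0, -1.0))   # approximately -1.41421356... (negative starting point)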
The case of a real function suggests the following method for finding the zeros of a function f : Ω → Y, with Ω ⊆ X: given a starting point x₀ ∈ Ω, the sequence (x_k) is defined by
x_{k+1} = x_k − (f′(x_k))⁻¹(f(x_k))
for all k ≥ 0. For this to make sense, every iterate x_k must belong to Ω and every derivative f′(x_k) must be an isomorphism between X and Y. These are rather demanding conditions but there are sufficient conditions that guarantee that they are met. Another practical issue is that it may be very costly to compute (f′(x_k))⁻¹ at every iteration step. In the next section, we investigate generalizations of Newton's method which address the issues that we just discussed.
30.2
When X = Y = ℝⁿ, writing J(f)(x_k) for the Jacobian matrix of f at x_k, the iteration can be written as
x_{k+1} = x_k − (J(f)(x_k))⁻¹ f(x_k),   k ≥ 0.
In general, it is very costly to compute J(f)(x_k) at each iteration and then to solve the corresponding linear system. If the method converges, the consecutive vectors x_k should differ only a little, as also the corresponding matrices J(f)(x_k). Thus, we are led to a variant of Newton's method which consists in keeping the same matrix for p consecutive steps (where p is some fixed integer ≥ 2):
x_{k+1} = x_k − (f′(x₀))⁻¹(f(x_k)),        0 ≤ k ≤ p − 1,
x_{k+1} = x_k − (f′(x_p))⁻¹(f(x_k)),       p ≤ k ≤ 2p − 1,
  . . .
x_{k+1} = x_k − (f′(x_{rp}))⁻¹(f(x_k)),    rp ≤ k ≤ (r + 1)p − 1,
  . . .
It is also possible to set p = ∞, that is, to use the same matrix f′(x₀) for all iterations, which leads to iterations of the form
x_{k+1} = x_k − (f′(x₀))⁻¹(f(x_k)),   k ≥ 0,
or even to replace f′(x₀) by a fixed invertible matrix A₀:
x_{k+1} = x_k − A₀⁻¹ f(x_k),   k ≥ 0.
In the last two cases, if possible, we use an LU factorization of f′(x₀) or A₀ to speed up the method. In some cases, it may even be possible to set A₀ = I.
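The following Python sketch illustrates the variant that keeps the same Jacobian, and hence the same LU factorization, for p consecutive steps (sometimes called a simplified or chord Newton method). It is only an illustration added here, not the text's algorithm verbatim; the test system, the parameter p, and the tolerances are arbitrary choices, and SciPy is used for the LU factorization.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def simplified_newton(f, jac, x0, p=3, tol=1e-10, max_iter=100):
    # Newton iteration that refreshes (and refactors) the Jacobian only every p steps
    x = np.asarray(x0, dtype=float)
    lu_piv = None
    for k in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x, k
        if k % p == 0:
            lu_piv = lu_factor(jac(x))     # LU factorization reused for the next p steps
        x = x - lu_solve(lu_piv, fx)
    return x, max_iter

# Example: solve x0^2 + x1^2 = 4 and x0 * x1 = 1
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])
root, iters = simplified_newton(f, jac, [2.0, 0.3])
print(root, iters)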
The above considerations lead us to the definition of a generalized Newton method, as in Ciarlet [24] (Chapter 7). Recall that a linear map f ∈ L(E; F) is called an isomorphism iff f is continuous, bijective, and f⁻¹ is also continuous.

Definition 30.1. If X and Y are two normed vector spaces and if f : Ω → Y is a function from some open subset Ω of X, a generalized Newton method for finding zeros of f consists of
(1) A sequence of families (A_k(x)) of linear isomorphisms from X to Y, for all x ∈ Ω and all integers k ≥ 0;
(2) Some starting point x₀ ∈ Ω;
(3) A sequence (x_k) of points of Ω defined by
x_{k+1} = x_k − (A_k(x_ℓ))⁻¹(f(x_k)),   k ≥ 0,
where for every integer k ≥ 0, the index ℓ is some integer with 0 ≤ ℓ ≤ k.
The previous examples correspond to the following choices of ℓ:
ℓ = k,
ℓ = min{rp, k}, if rp ≤ k ≤ (r + 1)p − 1, r ≥ 0,
ℓ = 0.
The following theorem gives sufficient conditions guaranteeing that the sequence (x_k) constructed by a generalized Newton method converges to a zero of f.

Theorem 30.1. Let X be a Banach space, let f : Ω → Y be differentiable on the open subset Ω ⊆ X, and assume that there are constants r, M, β > 0 such that if we let
B = {x ∈ X | ‖x − x₀‖ ≤ r} ⊆ Ω,
then
(1) sup_{k≥0} sup_{x∈B} ‖A_k(x)⁻¹‖_{L(Y;X)} ≤ M,
(2) β < 1 and
sup_{k≥0} sup_{x,x′∈B} ‖f′(x) − A_k(x′)‖_{L(X;Y)} ≤ β/M,
(3) ‖f(x₀)‖ ≤ (r/M)(1 − β).
Then the sequence (x_k) defined by
x_{k+1} = x_k − (A_k(x_ℓ))⁻¹(f(x_k)),   0 ≤ ℓ ≤ k,
is entirely contained within B and converges to a zero a of f, which is the only zero of f in B. Furthermore, the convergence is geometric, which means that
‖x_k − a‖ ≤ β^k ‖x₁ − x₀‖/(1 − β).
A proof of Theorem 30.1 can be found in Ciarlet [24] (Section 7.5). It is not really difficult
but quite technical.
If we assume that we already know that some element a ∈ Ω is a zero of f, the next theorem gives sufficient conditions for a special version of a generalized Newton method to converge. For this special method, the linear isomorphisms A_k(x) are independent of x ∈ Ω.
Theorem 30.2. Let X be a Banach space, and let f : Ω → Y be differentiable on the open subset Ω ⊆ X. If a ∈ Ω is a point such that f(a) = 0, if f′(a) is a linear isomorphism, and if there is some λ with 0 < λ < 1/2 such that
sup_{k≥0} ‖A_k − f′(a)‖_{L(X;Y)} ≤ λ/‖(f′(a))⁻¹‖_{L(Y;X)},
then there is a closed ball B of center a such that for every x₀ ∈ B, the sequence (x_k) defined by
x_{k+1} = x_k − A_k⁻¹(f(x_k)),   k ≥ 0,
is entirely contained within B and converges to a, which is the only zero of f in B. Furthermore, the convergence is geometric, which means that
‖x_k − a‖ ≤ β^k ‖x₀ − a‖,
for some β < 1.
A proof of Theorem 30.2 can also be found in Ciarlet [24] (Section 7.5).
For the sake of completeness, we state a version of the Newton-Kantorovich theorem, which corresponds to the case where A_k(x) = f′(x). In this instance, a stronger result can be obtained, especially regarding upper bounds, and we state a version due to Gragg and Tapia which appears in Problem 7.5-4 of Ciarlet [24].
Theorem 30.3. (Newton-Kantorovich) Let X be a Banach space, and let f : Ω → Y be differentiable on the open subset Ω ⊆ X. Assume that there exist three positive constants λ, μ, ν and a point x₀ ∈ Ω such that
0 < λμν ≤ 1/2,
and if we let
ρ⁻ = (1 − √(1 − 2λμν))/(μν),
ρ⁺ = (1 + √(1 − 2λμν))/(μν),
B = {x ∈ X | ‖x − x₀‖ < ρ⁻},
Ω⁺ = {x ∈ Ω | ‖x − x₀‖ < ρ⁺},
then B ⊆ Ω and
‖(f′(x₀))⁻¹‖_{L(Y;X)} ≤ μ,
‖(f′(x₀))⁻¹ f(x₀)‖ ≤ λ,
sup_{x,y∈Ω⁺} ‖f′(x) − f′(y)‖_{L(X;Y)} ≤ ν ‖x − y‖.
Then, f′(x) is an isomorphism of L(X; Y) for all x ∈ B, and the sequence defined by
x_{k+1} = x_k − (f′(x_k))⁻¹ f(x_k),   k ≥ 0,
is entirely contained within the ball B and converges to a zero a of f which is the only zero of f in Ω⁺. Finally, if we write θ = ρ⁻/ρ⁺, then we have the following bounds:
‖x_k − a‖ ≤ (2√(1 − 2λμν)/(λμν)) (θ^{2^k}/(1 − θ^{2^k})) ‖x₁ − x₀‖   if λμν < 1/2,
‖x_k − a‖ ≤ ‖x₁ − x₀‖/2^{k−1}   if λμν = 1/2,
and
2‖x_{k+1} − x_k‖ / (1 + (1 + 4θ^{2^k}(1 + θ^{2^k})^{−2})^{1/2}) ≤ ‖x_k − a‖ ≤ θ^{2^{k−1}} ‖x_k − x_{k−1}‖.
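The doubling exponents θ^{2^k} in these bounds reflect the quadratic convergence of Newton's method: near a simple zero, the error is roughly squared at each step, so the number of correct digits roughly doubles. A small Python illustration on f(x) = x² − 2 (added here for illustration only; the function and starting point are arbitrary choices):

import math

def newton(f, fprime, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - f(xs[-1]) / fprime(xs[-1]))
    return xs

f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
a = math.sqrt(2.0)

for k, x in enumerate(newton(f, fp, 3.0, 6)):
    print(k, abs(x - a))   # the error is roughly squared at each step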
We can now specialize Theorems 30.1 and 30.2 to the search for zeros of the derivative J′ : Ω → E′ of a function J : Ω → ℝ, with Ω ⊆ E. The second derivative J″ of J is a continuous bilinear form J″ : E × E → ℝ, but it is convenient to view it as a linear map in L(E, E′); the continuous linear form J″(u) is given by J″(u)(v) = J″(u, v). In our next theorem, we assume that the A_k(x) are isomorphisms in L(E, E′).
Theorem 30.4. Let E be a Banach space, let J : Ω → ℝ be twice differentiable on the open subset Ω ⊆ E, and assume that there are constants r, M, β > 0 such that if we let
B = {x ∈ E | ‖x − x₀‖ ≤ r} ⊆ Ω,
then
(1) sup_{k≥0} sup_{x∈B} ‖A_k(x)⁻¹‖_{L(E′;E)} ≤ M,
(2) β < 1 and
sup_{k≥0} sup_{x,x′∈B} ‖J″(x) − A_k(x′)‖_{L(E;E′)} ≤ β/M,
(3) ‖J′(x₀)‖ ≤ (r/M)(1 − β).
Then the sequence (x_k) defined by
x_{k+1} = x_k − A_k(x_ℓ)⁻¹(J′(x_k)),   0 ≤ ℓ ≤ k,
is entirely contained within B and converges to a zero a of J′, which is the only zero of J′ in B. Furthermore, the convergence is geometric, which means that
‖x_k − a‖ ≤ β^k ‖x₁ − x₀‖/(1 − β).
In the next theorem, we assume that the A_k(x) are isomorphisms in L(E, E′) that are independent of x ∈ Ω.
Theorem 30.5. Let E be a Banach space, and let J : Ω → ℝ be twice differentiable on the open subset Ω ⊆ E. If a ∈ Ω is a point such that J′(a) = 0, if J″(a) is a linear isomorphism, and if there is some λ with 0 < λ < 1/2 such that
sup_{k≥0} ‖A_k − J″(a)‖_{L(E;E′)} ≤ λ/‖(J″(a))⁻¹‖_{L(E′;E)},
then there is a closed ball B of center a such that for every x₀ ∈ B, the sequence (x_k) defined by
x_{k+1} = x_k − A_k⁻¹(J′(x_k)),   k ≥ 0,
is entirely contained within B and converges to a, which is the only zero of J′ in B. Furthermore, the convergence is geometric, which means that
‖x_k − a‖ ≤ β^k ‖x₀ − a‖,
for some β < 1.
When E = ℝⁿ, the Newton method given by Theorem 30.4 yields an iteration step of the form
x_{k+1} = x_k − A_k(x_ℓ)⁻¹ ∇J(x_k),   0 ≤ ℓ ≤ k,
and the method given by Theorem 30.5 yields an iteration step of the form
x_{k+1} = x_k − A_k⁻¹ ∇J(x_k),   k ≥ 0,
where ∇J(x_k) denotes the gradient of J at x_k.
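To make the specialization to E = ℝⁿ concrete, here is a small Python/NumPy sketch of the pure Newton step for minimizing a function J, using its gradient and Hessian (that is, taking A_k(x_ℓ) to be the Hessian at the current point). It is only an illustration added here; the test function (a smooth, strictly convex function) and the tolerances are arbitrary choices.

import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    # Pure Newton iteration for a zero of the gradient: x_{k+1} = x_k - hess(x_k)^{-1} grad(x_k)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return x, k
        x = x - np.linalg.solve(hess(x), g)
    return x, max_iter

# Test function J(x) = x1^2 + 2*x2^2 + exp(x1 + x2), which is smooth and strictly convex
grad = lambda x: np.array([2 * x[0] + np.exp(x[0] + x[1]),
                           4 * x[1] + np.exp(x[0] + x[1])])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]]) + np.exp(x[0] + x[1]) * np.ones((2, 2))

x_star, iters = newton_minimize(grad, hess, [1.0, 1.0])
print(x_star, iters)   # the unique minimizer, reached in a few iterations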
As remarked in [24] (Section 7.5), generalized Newton methods have a very wide range of applicability. For example, various versions of gradient descent methods can be viewed as instances of Newton's method.
Newton's method also plays an important role in convex optimization, in particular, interior-point methods. A variant of Newton's method dealing with equality constraints has been developed. We refer the reader to Boyd and Vandenberghe [17], Chapters 10 and 11, for a comprehensive exposition of these topics.
30.3 Summary
The main concepts and results of this chapter are listed below:
Newton's method for functions f : ℝ → ℝ.
Generalized Newton methods.
The Newton-Kantorovich theorem.
Chapter 31
Appendix: Zorn's Lemma; Some Applications
31.1
Zorn's lemma is a particularly useful form of the axiom of choice, especially for algebraic applications. Readers who want to learn more about Zorn's lemma and its applications to algebra should consult either Lang [67], Appendix 2, §2 (pp. 878-884) and Chapter III, §5 (pp. 139-140), or Artin [4], Appendix 1 (pp. 588-589). For the logical ramifications of Zorn's lemma and its equivalence with the axiom of choice, one should consult Schwartz [93], (Vol. 1), Chapter I, §6, or a text on set theory such as Enderton [34], Suppes [107], or Kuratowski and Mostowski [66].
Recall that a partial order ≤ on a set S is a binary relation on S that is reflexive, transitive, and antisymmetric. A pair (S, ≤), where ≤ is a partial order on S, is called a partially ordered set or poset. Given a poset (S, ≤), a subset C of S is totally ordered or a chain if for every pair of elements x, y ∈ C, either x ≤ y or y ≤ x. The empty set is trivially a chain. A subset P (empty or not) of S is bounded if there is some b ∈ S so that x ≤ b for all x ∈ P. Observe that the empty subset of S is bounded if and only if S is nonempty. A maximal element of P is an element m ∈ P so that m ≤ x implies that m = x, for all x ∈ P. Zorn's lemma can be stated as follows:

Lemma 31.1. Given a partially ordered set (S, ≤), if every chain is bounded, then S has a maximal element.
Proof. See any of Schwartz [93], Enderton [34], Suppes [107], or Kuratowski and Mostowski
[66].
Remark: As we noted, the hypothesis of Zorn's lemma implies that S is nonempty (since
the empty set must be bounded). A partially ordered set such that every chain is bounded
is sometimes called inductive.
We now give some applications of Zorn's lemma.
31.2
Using Zorn's lemma, we can prove that Theorem 2.9 holds for arbitrary vector spaces, and not just for finitely generated vector spaces, as promised in Chapter 2.

Theorem 31.2. Given any family S = (u_i)_{i∈I} generating a vector space E and any linearly independent subfamily L = (u_j)_{j∈J} of S (where J ⊆ I), there is a basis B of E such that L ⊆ B ⊆ S.
Proof. Consider the set 𝓛 of linearly independent families B such that L ⊆ B ⊆ S. Since L ∈ 𝓛, this set is nonempty. We claim that 𝓛 is inductive. Consider any chain (B_l)_{l∈Λ} of linearly independent families B_l in 𝓛, and look at B = ⋃_{l∈Λ} B_l. The family B is of the form B = (v_h)_{h∈H}, for some index set H, and it must be linearly independent. Indeed, if this was not true, there would be some family (λ_h)_{h∈H} of scalars, of finite support, so that
∑_{h∈H} λ_h v_h = 0,
where not all λ_h are zero. Since B = ⋃_{l∈Λ} B_l and only finitely many λ_h are nonzero, there is a finite subset F of Λ, so that v_h ∈ B_{f_h} iff λ_h ≠ 0. But (B_l)_{l∈Λ} is a chain, and if we let f = max{f_h | f_h ∈ F}, then v_h ∈ B_f, for all v_h for which λ_h ≠ 0. Thus,
∑_{h∈H} λ_h v_h = 0
would be a nontrivial linear dependence among elements of B_f, contradicting the linear independence of B_f. Hence B ∈ 𝓛 and B is an upper bound of the chain, so 𝓛 is inductive. By Zorn's lemma (Lemma 31.1), 𝓛 has a maximal element, say B, with L ⊆ B ⊆ S. By maximality, every u_i in S is a linear combination of elements of B (otherwise B ∪ {u_i} would be a strictly larger member of 𝓛), and since S generates E, the family B generates E; being linearly independent, B is a basis of E.
31.3
Let A be a commutative ring with identity element. Recall that an ideal 𝔄 in A is a proper ideal if 𝔄 ≠ A. The following theorem holds:

Theorem 31.3. Given any proper ideal 𝔄 ⊆ A, there is a maximal ideal 𝔅 containing 𝔄.

Proof. Let I be the set of all proper ideals 𝔅 in A that contain 𝔄. The set I is nonempty, since 𝔄 ∈ I. We claim that I is inductive. Consider any chain (𝔄_i)_{i∈I} of ideals 𝔄_i in I. One can easily check that 𝔅 = ⋃_{i∈I} 𝔄_i is an ideal. Furthermore, 𝔅 is a proper ideal, since otherwise, the identity element 1 would belong to 𝔅 = A, and so, we would have 1 ∈ 𝔄_i for some i, which would imply 𝔄_i = A, a contradiction. Also, 𝔅 is obviously an upper bound for all the 𝔄_i's. By Zorn's lemma (Lemma 31.1), the set I has a maximal element, say 𝔅, and 𝔅 is a maximal ideal containing 𝔄.
Bibliography
[1] Lars V. Ahlfors and Leo Sario. Riemann Surfaces. Princeton Math. Series, No. 2.
Princeton University Press, 1960.
[2] George E. Andrews, Richard Askey, and Ranjan Roy. Special Functions. Cambridge
University Press, first edition, 2000.
[3] Emil Artin. Geometric Algebra. Wiley Interscience, first edition, 1957.
[4] Michael Artin. Algebra. Prentice Hall, first edition, 1991.
[5] M. F. Atiyah and I. G. Macdonald. Introduction to Commutative Algebra. Addison
Wesley, third edition, 1969.
[6] A. Avez. Calcul Différentiel. Masson, first edition, 1991.
[7] Sheldon Axler. Linear Algebra Done Right. Undergraduate Texts in Mathematics.
Springer Verlag, second edition, 2004.
[8] Marcel Berger. Geometrie 1. Nathan, 1990. English edition: Geometry 1, Universitext,
Springer Verlag.
[9] Marcel Berger. Geometrie 2. Nathan, 1990. English edition: Geometry 2, Universitext,
Springer Verlag.
[10] Marcel Berger and Bernard Gostiaux. Géométrie différentielle: variétés, courbes et
surfaces. Collection Mathematiques. Puf, second edition, 1992. English edition: Differential geometry, manifolds, curves, and surfaces, GTM No. 115, Springer Verlag.
[11] Rolf Berndt. An Introduction to Symplectic Geometry. Graduate Studies in Mathematics, Vol. 26. AMS, first edition, 2001.
[12] J.E. Bertin. Algèbre linéaire et géométrie classique. Masson, first edition, 1981.
[13] Nicolas Bourbaki. Algèbre, Chapitre 9. Éléments de Mathématiques. Hermann, 1968.
[14] Nicolas Bourbaki. Algèbre, Chapitres 1-3. Éléments de Mathématiques. Hermann,
1970.
[15] Nicolas Bourbaki. Algèbre, Chapitres 4-7. Éléments de Mathématiques. Masson, 1981.
[16] Nicolas Bourbaki. Espaces Vectoriels Topologiques. Éléments de Mathématiques. Masson, 1981.
[17] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University
Press, first edition, 2004.
[18] Glen E Bredon. Topology and Geometry. GTM No. 139. Springer Verlag, first edition,
1993.
[19] G. Cagnac, E. Ramis, and J. Commeau. Mathematiques Speciales, Vol. 3, Geometrie.
Masson, 1965.
[20] Henri Cartan. Cours de Calcul Différentiel. Collection Méthodes. Hermann, 1990.
[21] Henri Cartan. Differential Forms. Dover, first edition, 2006.
[22] Claude Chevalley. The Algebraic Theory of Spinors and Clifford Algebras. Collected
Works, Vol. 2. Springer, first edition, 1997.
[23] Yvonne Choquet-Bruhat, Cecile DeWitt-Morette, and Margaret Dillard-Bleick. Analysis, Manifolds, and Physics, Part I: Basics. North-Holland, first edition, 1982.
[24] P.G. Ciarlet. Introduction to Numerical Matrix Analysis and Optimization. Cambridge
University Press, first edition, 1989. French edition: Masson, 1994.
[25] Timothée Cour and Jianbo Shi. Solving Markov random fields with spectral relaxation.
In Marina Meila and Xiaotong Shen, editors, Artificial Intelligence and Statistics. Society for Artificial Intelligence and Statistics, 2007.
[26] H.S.M. Coxeter. Introduction to Geometry. Wiley, second edition, 1989.
[27] James W. Demmel. Applied Numerical Linear Algebra. SIAM Publications, first edition, 1997.
[28] Jean Dieudonné. Algèbre Linéaire et Géométrie Élémentaire. Hermann, second edition,
1965.
[29] Jacques Dixmier. General Topology. UTM. Springer Verlag, first edition, 1984.
[30] Manfredo P. do Carmo. Differential Geometry of Curves and Surfaces. Prentice Hall,
1976.
[31] Manfredo P. do Carmo. Riemannian Geometry. Birkhauser, second edition, 1992.
[32] David S. Dummit and Richard M. Foote. Abstract Algebra. Wiley, second edition,
1999.
[33] Gerald A. Edgar. Measure, Topology, and Fractal Geometry. Undergraduate Texts in
Mathematics. Springer Verlag, first edition, 1992.
[34] Herbert B. Enderton. Elements of Set Theory. Academic Press, 1997.
[35] Charles L. Epstein. Introduction to the Mathematics of Medical Imaging. SIAM, second
edition, 2007.
[36] Gerald Farin. Curves and Surfaces for CAGD. Academic Press, fourth edition, 1998.
[37] Olivier Faugeras. Three-Dimensional Computer Vision, A geometric Viewpoint. the
MIT Press, first edition, 1996.
[38] James Foley, Andries van Dam, Steven Feiner, and John Hughes. Computer Graphics.
Principles and Practice. Addison-Wesley, second edition, 1993.
[39] David A. Forsyth and Jean Ponce. Computer Vision: A Modern Approach. Prentice
Hall, first edition, 2002.
[40] Jean Fresnel. Methodes Modernes En Geometrie. Hermann, first edition, 1998.
[41] William Fulton. Algebraic Topology, A first course. GTM No. 153. Springer Verlag,
first edition, 1995.
[42] William Fulton and Joe Harris. Representation Theory, A first course. GTM No. 129.
Springer Verlag, first edition, 1991.
[43] Jean H. Gallier. Curves and Surfaces In Geometric Modeling: Theory And Algorithms.
Morgan Kaufmann, 1999.
[44] Jean H. Gallier. Geometric Methods and Applications, For Computer Science and
Engineering. TAM, Vol. 38. Springer, second edition, 2011.
[45] Walter Gander, Gene H. Golub, and Urs von Matt. A constrained eigenvalue problem.
Linear Algebra and its Applications, 114/115:815839, 1989.
[46] F.R. Gantmacher. The Theory of Matrices, Vol. I. AMS Chelsea, first edition, 1977.
[47] Roger Godement. Cours d'Algèbre. Hermann, first edition, 1963.
[48] Gene H. Golub. Some modified eigenvalue problems. SIAM Review, 15(2):318334,
1973.
[49] H. Golub, Gene and F. Van Loan, Charles. Matrix Computations. The Johns Hopkins
University Press, third edition, 1996.
[50] A. Gray. Modern Differential Geometry of Curves and Surfaces. CRC Press, second
edition, 1997.
[51] Donald T. Greenwood. Principles of Dynamics. Prentice Hall, second edition, 1988.
[52] Larry C. Grove. Classical Groups and Geometric Algebra. Graduate Studies in Mathematics, Vol. 39. AMS, first edition, 2002.
[53] Jacques Hadamard. Leçons de Géométrie Élémentaire. I Géométrie Plane. Armand
Colin, thirteenth edition, 1947.
[54] Jacques Hadamard. Leçons de Géométrie Élémentaire. II Géométrie dans l'Espace.
Armand Colin, eighth edition, 1949.
[55] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical
Learning: Data Mining, Inference, and Prediction. Springer, second edition, 2009.
[56] D. Hilbert and S. Cohn-Vossen. Geometry and the Imagination. Chelsea Publishing
Co., 1952.
[57] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press,
first edition, 1990.
[58] Roger A. Horn and Charles R. Johnson. Topics in Matrix Analysis. Cambridge University Press, first edition, 1994.
[59] Nathan Jacobson. Basic Algebra I. Freeman, second edition, 1985.
[60] Ramesh Jain, Rangachar Katsuri, and Brian G. Schunck. Machine Vision. McGrawHill, first edition, 1995.
[61] Jürgen Jost. Riemannian Geometry and Geometric Analysis. Universitext. Springer
Verlag, fourth edition, 2005.
[62] Hoffman Kenneth and Kunze Ray. Linear Algebra. Prentice Hall, second edition, 1971.
[63] D. Kincaid and W. Cheney. Numerical Analysis. Brooks/Cole Publishing, second
edition, 1996.
[64] Anthony W. Knapp. Lie Groups Beyond an Introduction. Progress in Mathematics,
Vol. 140. Birkhauser, second edition, 2002.
[65] Erwin Kreyszig. Differential Geometry. Dover, first edition, 1991.
[66] K. Kuratowski and A. Mostowski. Set Theory. Studies in Logic, Vol. 86. Elsevier,
1976.
[67] Serge Lang. Algebra. Addison Wesley, third edition, 1993.
[68] Serge Lang. Differential and Riemannian Manifolds. GTM No. 160. Springer Verlag,
third edition, 1995.
[69] Serge Lang. Real and Functional Analysis. GTM 142. Springer Verlag, third edition,
1996.
[70] Serge Lang. Undergraduate Analysis. UTM. Springer Verlag, second edition, 1997.
[71] Peter Lax. Linear Algebra and Its Applications. Wiley, second edition, 2007.
[72] N. N. Lebedev. Special Functions and Their Applications. Dover, first edition, 1972.
[73] Saunders Mac Lane and Garrett Birkhoff. Algebra. Macmillan, first edition, 1967.
[74] Ib Madsen and Jorgen Tornehave. From Calculus to Cohomology. De Rham Cohomology and Characteristic Classes. Cambridge University Press, first edition, 1998.
[75] M.-P. Malliavin. Algèbre Commutative. Applications en Géométrie et Théorie des
Nombres. Masson, first edition, 1985.
[76] Jerrold E. Marsden and J.R. Hughes, Thomas. Mathematical Foundations of Elasticity.
Dover, first edition, 1994.
[77] William S. Massey. Algebraic Topology: An Introduction. GTM No. 56. Springer
Verlag, second edition, 1987.
[78] William S. Massey. A Basic Course in Algebraic Topology. GTM No. 127. Springer
Verlag, first edition, 1991.
[79] Dimitris N. Metaxas. Physics-Based Deformable Models. Kluwer Academic Publishers,
first edition, 1997.
[80] Carl D. Meyer. Matrix Analysis and Applied Linear Algebra. SIAM, first edition, 2000.
[81] John W. Milnor. Topology from the Differentiable Viewpoint. The University Press of
Virginia, second edition, 1969.
[82] John W. Milnor and James D. Stasheff. Characteristic Classes. Annals of Math. Series,
No. 76. Princeton University Press, first edition, 1974.
[83] Shigeyuki Morita. Geometry of Differential Forms. Translations of Mathematical
Monographs No 201. AMS, first edition, 2001.
[84] James R. Munkres. Topology, a First Course. Prentice Hall, first edition, 1975.
[85] James R. Munkres. Analysis on Manifolds. Addison Wesley, 1991.
[86] Ivan Niven, Herbert S. Zuckerman, and Hugh L. Montgomery. An Introduction to the
Theory of Numbers. Wiley, fifth edition, 1991.