Lecture Notes in Geometry 2023-2024
13 January 2025
♣ Preface ♣
I freely confess that I never had a taste for study or research either in physics or
geometry except in so far as they could serve as a means of arriving at some sort of
knowledge of the proximate causes...for the good and convenience of life, in maintaining
health, in the practice of some art...having observed that a good part of the arts is based
on geometry, among others the cutting of stone in architecture, that of sundials, and that
of perspective in particular.
(Girard Desargues)
♣ Table of Contents ♣
Preface
1 Vector spaces
1.1 Number systems and fields
1.2 Vector spaces
1.3 Examples of vector spaces
1.4 Linear independence, spanning and bases of vector spaces
1.4.1 Linear dependence and independence
1.5 Bases of vector spaces
1.5.1 Existence of a basis
1.6 Subspaces
2 Linear transformations
2.1 Definition and examples
2.2 Kernels and images
2.3 Injection, surjection and bijection
2.4 Rank and nullity
2.5 Operations on linear maps
2.6 Linear transformations and matrices
2.6.1 Setting up the correspondence
2.6.2 The correspondence between operations on linear maps and matrices
2.7 Trace of an endomorphism
2.7.1 Basic properties
♣ Chapter One ♣
Vector spaces
Example 1.1. N and Z are not fields, but Q, R and C are all fields.
For V to be called a vector space, the following axioms must be satisfied for all α, β ∈ K
and all u, v, w ∈ V.
1. u + v = v + u Commutative law
2. u + (v + w) = (u + v) + w Associative law
3. There is an element 0V ∈ V such that u + 0V = 0V + u = u Additive identity
4. For each element u ∈ V there exists an element −u ∈ V such that u + (−u) =
(−u) + u = 0V Additive inverse
5. 1v = v Multiplicative identity
6. α(u + v) = αu + αv Distributive law
7. (α + β)v = αv + βv Distributive law
8. (αβ)v = α(βv) Associative law
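As a quick computational illustration (not a proof), the following minimal sketch, assuming numpy is available, spot-checks several of these axioms for V = R2 over K = R:

    import numpy as np

    u, v, w = np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.0, 5.0])
    alpha, beta = 2.0, -3.0

    print(np.allclose(u + v, v + u))                               # axiom 1: commutativity
    print(np.allclose(u + (v + w), (u + v) + w))                   # axiom 2: associativity
    print(np.allclose(alpha * (u + v), alpha * u + alpha * v))     # axiom 6: distributivity
    print(np.allclose((alpha + beta) * v, alpha * v + beta * v))   # axiom 7: distributivity
    print(np.allclose((alpha * beta) * v, alpha * (beta * v)))     # axiom 8: associativity

Every line prints True; of course, a finite check like this illustrates the axioms but does not prove them.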
Elements of the field K will be called scalars. Note that we will use letters like v to
denote vectors. The zero vector in V will be written as 0V , or usually just 0. This is
different from the zero scalar 0 = 0K ∈ K.
For nearly all results in this course, there is no loss in assuming that K is the field R
of real numbers. So you may assume this if you find it helpful to do so. Just occasionally,
we will need to assume K = C the field of complex numbers. However, it is important to
note that nearly all arguments in Linear Algebra use only the axioms for a field and so
are valid for any field, which is why we shall use a general field K for most of the course.
Then K[x]≤n , the set of polynomials over K of degree at most n, is also a vector space
over K; in fact it is a subspace of K[x]. Note that the polynomials of degree exactly n
do not form a vector space. (Why not?)
4. Let K = R and let V be the set of n−times differentiable functions f : R → R which
are solutions of the differential equation
λ0 d^n f/dx^n + λ1 d^(n−1) f/dx^(n−1) + ... + λn−1 df/dx + λn f = 0
for fixed λ0 , λ1 , ..., λn ∈ R. Then V is a vector space over R, for if f (x) and g(x) are
both solutions of this equation, then so are f (x) + g(x) and αf (x) for all α ∈ R (see the
sketch following these examples).
5. The previous example is a space of functions. There are many such examples that are
important in Analysis. For example, the set C^k ((0, 1), R), consisting of all functions
f : (0, 1) → R such that the kth derivative f^(k) exists and is continuous, is a
vector space over R with the usual pointwise definitions of addition and scalar
multiplication of functions.
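To make example 4 concrete, here is a minimal sketch, assuming sympy is available, for the particular equation f'' + f = 0 (that is, λ0 = 1, λ1 = 0, λ2 = 1): two solutions are sin x and cos x, and a linear combination of them is again a solution.

    import sympy as sp

    x = sp.symbols('x')
    f, g = sp.sin(x), sp.cos(x)   # both solve f'' + f = 0
    h = f + 2 * g                 # a linear combination of solutions
    # Substitute h into the differential equation; the result simplifies to 0
    print(sp.simplify(sp.diff(h, x, 2) + h))   # prints 0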
We shall be assuming the following additional simple properties of vectors and scalars
from now on. They can all be deduced from the axioms (and it is a useful exercise to do
so).
(i) α0V = 0V for all α ∈ K
(ii) 0v = 0V for all v ∈ V (here the first 0 is the zero scalar 0K )
(iii) −(αv) = (−α)v = α(−v) for all α ∈ K and v ∈ V.
(iv) if αv = 0V then α = 0 or v = 0V .
The vectors v1 , v2 , ..., vn ∈ V are said to be linearly dependent if there exist scalars
α1 , α2 , ..., αn ∈ K, not all zero, such that
α1 v1 + α2 v2 + ... + αn vn = 0.
v1 , v2 , ..., vn are said to be linearly independent if they are not linearly dependent. In
other words, they are linearly independent if the only scalars α1 , α2 , ..., αn ∈ K that satisfy
the above equation are α1 = 0, α2 = 0, ..., αn = 0.
Example 1.2. (a) Let V = R2 , v1 = (1, 3), v2 = (2, 5). Show that the vectors v1 , v2 are
linearly independent.
(b) Let V = Q2 , v1 = (1, 3), v2 = (2, 6). Show that the vectors v1 , v2 are linearly dependent.
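Both parts of Example 1.2 can be checked numerically: the vectors are linearly independent exactly when the matrix having them as columns has full rank. A minimal sketch, assuming numpy is available:

    import numpy as np

    # (a) v1 = (1, 3), v2 = (2, 5): rank 2, so linearly independent
    print(np.linalg.matrix_rank(np.column_stack([[1, 3], [2, 5]])))   # 2
    # (b) v1 = (1, 3), v2 = (2, 6): v2 = 2*v1, rank 1, so linearly dependent
    print(np.linalg.matrix_rank(np.column_stack([[1, 3], [2, 6]])))   # 1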
Lemma 1.4.1. v1 , v2 , ..., vn ∈ V are linearly dependent if and only if either v1 = 0 or,
for some r, vr is a linear combination of v1 , v2 , ..., vr−1 .
Definition 1.5. The vectors v1 , v2 , ..., vn in V form a basis of V if they are linearly
independent and span V.
Proposition 1. The vectors v1 , v2 , ..., vn form a basis of V if and only if every v ∈ V can
be written uniquely as v = α1 v1 + α2 v2 + ... + αn vn ; that is, the coefficients α1 , α2 , ..., αn
are uniquely determined by the vector v.
Definition 1.6. The scalars α1 , α2 , ..., αn in the statement of the proposition are called
the coordinates of v with respect to the basis v1 , v2 , ..., vn .
With respect to a different basis, v will have different coordinates. Indeed, a basis for
a vector space can be thought of as a choice of a system of coordinates.
A vector space with a finite basis is called finite-dimensional. In fact, nearly all of this
course will be about finite-dimensional spaces, but it is important to remember that these
are not the only examples.
Theorem 1 (The basis theorem). Suppose that v1 , v2 , ..., vm and w1 , w2 , ..., wn are both
bases of the vector space V . Then m = n. In other words, all finite bases of V contain
the same number of vectors.
Definition 1.7. The number n of vectors in a basis of the finite-dimensional vector space
V is called the dimension of V and we write dim(V ) = n.
Example 1.4. Find a basis and the dimension of the set of complex numbers C, regarded
as a vector space over C, and then over R.
Lemma 1.5.1. Suppose that the vectors v1 , v2 , ..., vn , w span V and that w is a linear
combination of v1 , v2 , ..., vn . Then v1 , v2 , ..., vn span V.
There is an important process, which we shall call sifting, which can be applied to any
sequence of vectors v1 , v2 , ..., vn in a vector space V , as follows. We consider each vector
vi in turn. If it is zero, or a linear combination of the preceding vectors v1 , ..., vi−1 , then
we remove it from the list. (A computational sketch of sifting follows the worked example below.)
1 = α1 + α2 ; 1 = α1 ; 0 = α1 .
The second and third of these equations contradict each other, and so there is no
solution. Hence v7 is not a linear combination of v2 , v4 , and it stays. Finally, we need to
try
0 = α1 + α2 + α3 ; 0 = α1 + α3 ; 1 = α1 .
and solving these in the normal way, we find a solution α1 = 1, α2 = 0, α3 = −1. Thus we
delete v8 and we are left with just v2 , v4 , v7 . Of course, the vectors that are removed during
the sifting process depend very much on the order of the list of vectors. For example, if
v8 had come at the beginning of the list rather than at the end, then we would have kept
it.
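As promised above, sifting is easy to mechanise. A minimal sketch, assuming numpy is available (the function name sift is ours): a vector is kept exactly when it is not a linear combination of the vectors kept before it, i.e. when it strictly increases the rank of the kept list.

    import numpy as np

    def sift(vectors):
        """Return the subsequence left after sifting: each vector is kept
        only if it is not a linear combination of the vectors kept so far."""
        kept = []
        for v in vectors:
            trial = kept + [np.asarray(v, dtype=float)]
            # rank equals the number of rows exactly when the rows are independent
            if np.linalg.matrix_rank(np.array(trial)) == len(trial):
                kept.append(trial[-1])
        return kept

    # Example: the zero vector and a scalar multiple are removed
    print(sift([[0, 0], [1, 2], [2, 4], [0, 1]]))   # keeps (1, 2) and (0, 1)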
The idea of sifting allows us to prove the following theorem, stating that every finite
sequence of vectors which spans a vector space V actually contains a basis for V .
Theorem 2. Suppose that the vectors v1 , ..., vr span the vector space V . Then there is
a subsequence of v1 , ..., vr which forms a basis of V.
The theorem tells us that any vector space with a finite spanning set is finite-dimensional,
and indeed the spanning set contains a basis. We now prove the dual result: any
linearly independent set is contained in a basis.
Theorem 3. Let V be a vector space over K which has a finite spanning set, and suppose
that the vectors v1 , ..., vr are linearly independent in V . Then we can extend the sequence
to a basis v1 , ..., vn of V , where n ≥ r.
Example 1.6. The vectors v1 = (1, 2, 0, 2), v2 = (0, 1, 0, 2) are linearly independent in
R4 . Let us extend them to a basis of R4 . The easiest thing is to append the standard basis
of R4 , giving the combined list of vectors
v1 , v2 , w1 = (1, 0, 0, 0), w2 = (0, 1, 0, 0), w3 = (0, 0, 1, 0), w4 = (0, 0, 0, 1),
which we shall sift. We find that (1, 0, 0, 0) = α1 (1, 2, 0, 2) + α2 (0, 1, 0, 2) has no solution,
so w1 stays. However, w2 = v1 − v2 − w1 so w2 is deleted. It is clear that w3 is not a linear
combination of v1 , v2 , w1 , because all of those have a 0 in their third component. Hence
w3 remains. Now we have four linearly independent vectors, so we must have a basis at this
stage, and we can stop the sifting early. The resulting basis is v1 , v2 , w1 , w3 , that is,
(1, 2, 0, 2), (0, 1, 0, 2), (1, 0, 0, 0), (0, 0, 1, 0).
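As a numerical sanity check of Example 1.6 (a sketch assuming numpy is available): four vectors form a basis of R4 exactly when the matrix having them as rows has nonzero determinant.

    import numpy as np

    B = np.array([[1, 2, 0, 2],    # v1
                  [0, 1, 0, 2],    # v2
                  [1, 0, 0, 0],    # w1
                  [0, 0, 1, 0]])   # w3
    print(np.linalg.det(B))        # -2.0: nonzero, so these four vectors form a basis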
Proposition 2 (The exchange lemma). Suppose that vectors v1 , ..., vn span V and that
vectors w1 , ..., wm ∈ V are linearly independent. Then m ≤ n.
Corollary 1. Let V be a vector space of dimension n over K. Then any n vectors which
span V form a basis of V , and no n − 1 vectors can span V.
Corollary 2. Let V be a vector space of dimension n over K. Then any n linearly inde-
pendent vectors form a basis of V and no n + 1 vectors can be linearly independent.
1.6 Subspaces
Definition 1.8. Let W be a subset of V . Then W is a subspace of V if
i. W is non-empty: the zero vector 0V ∈ W ;
ii. W is closed under addition: for every u, v ∈ W , u + v ∈ W ;
iii. W is closed under scalar multiplication: for every v ∈ W and α ∈ K, αv ∈ W.
Equivalently, W is a non-empty subset such that for every u, v ∈ W and α, β ∈ K, αu + βv ∈ W.
A subspace W is itself a vector space over K under the operations of vector addition and
scalar multiplication in V . Notice that all vector space axioms of W hold automatically.
(They are inherited from V .)
For any vector space V , V is always a subspace of itself. Subspaces other than V are
sometimes called proper subspaces. We also always have a subspace {0} consisting of the
zero vector alone. This is called the trivial subspace, and its dimension is 0, because the
empty set is a basis of it. Intersecting two subspaces gives a third
subspace:
Proposition 3. If W1 and W2 are subspaces of V , then W1 ∩ W2 is also a subspace of V.
Warning! It is not necessarily true that W1 ∪ W2 is a subspace, as the following example
shows: in V = R2 , let W1 = {(α, 0) | α ∈ R} and W2 = {(0, β) | β ∈ R}. Then (1, 0) and
(0, 1) both lie in W1 ∪ W2 , but their sum (1, 1) does not, so W1 ∪ W2 is not closed under
addition.
Note that any subspace of V that contains W1 and W2 has to contain all vectors of
the form u + v for u ∈ W1 , v ∈ W2 . This motivates the following definition.
Definition 1.9. The sum of two subspaces W1 and W2 of V is W1 + W2 = {u + v | u ∈ W1 , v ∈ W2 }.
Another way to form subspaces is to take linear combinations of some given vectors:
Proposition 5. Let v1 , ..., vn be vectors in the vector space V . Then the set of all linear
combinations α1 v1 + α2 v2 + ... + αn vn of v1 , ..., vn forms a subspace of V.
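Proposition 5 suggests a computational question: given v1 , v2 , decide whether a vector w lies in their span. A minimal sketch, assuming numpy is available (the vectors are our own illustrative choice):

    import numpy as np

    v1, v2 = np.array([1.0, 2.0, 0.0]), np.array([0.0, 1.0, 1.0])
    A = np.column_stack([v1, v2])     # the columns span the subspace
    w = np.array([2.0, 5.0, 1.0])     # candidate vector; here w = 2*v1 + v2

    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    print(coeffs, np.allclose(A @ coeffs, w))   # [2. 1.] True, so w is in the span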
♣ Chapter Two ♣
Linear transformations
When you study sets, the notion of function is extremely important. There is little to
say about a single isolated set, while functions allow you to link different sets. Similarly,
in Linear Algebra, a single isolated vector space is not the end of the story. We have to
connect different vector spaces by functions. However, a function that pays no attention
to the vector space operations may be of little value.
One of the most useful properties of linear maps is that, if we know how a linear map
U → V acts on a basis of U, then we know how it acts on the whole of U.
Proposition 7 (Linear maps are uniquely determined by their action on a basis). Let
U, V be vector spaces over K, let u1 , ..., un be a basis of U and let v1 , ..., vn be any sequence
of n vectors in V . Then there is a unique linear map T : U → V with T (ui ) = vi for
1 ≤ i ≤ n.
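To see Proposition 7 concretely, here is a minimal sketch assuming numpy is available (the names T_e1 etc. are ours): we pick images in R2 for the standard basis of R3, and then T of any vector is forced by linearity; the matrix whose columns are the chosen images computes T.

    import numpy as np

    # Choose images in R^2 for the standard basis e1, e2, e3 of R^3 (any choice works)
    T_e1, T_e2, T_e3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
    A = np.column_stack([T_e1, T_e2, T_e3])   # the matrix of T in these bases

    u = np.array([2.0, -1.0, 3.0])            # u = 2e1 - e2 + 3e3
    print(A @ u)                              # T(u) = 2T(e1) - T(e2) + 3T(e3) = [5. 2.]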
Definition 2.2. Let T : U → V be a linear map. The image of T is the set
im(T ) = {T (u) | u ∈ U }, and the kernel of T is the set
ker(T ) = {u ∈ U | T (u) = 0V }.
Example 2.3. Find im(f ) and ker(f ) of the linear map f : R3 −→ R2 defined by
f (x, y, z) = (−2x , y + 3z).
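For Example 2.3, the kernel and image can be read off as the nullspace and column space of the matrix of f . A minimal sketch, assuming sympy is available:

    import sympy as sp

    A = sp.Matrix([[-2, 0, 0],
                   [ 0, 1, 3]])    # matrix of f(x, y, z) = (-2x, y + 3z)
    print(A.nullspace())           # ker(f) is spanned by (0, -3, 1)
    print(A.columnspace())         # two independent columns, so im(f) is all of R^2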
Theorem 5 (The rank-nullity theorem). Let U, V be vector spaces over K with U finite-
dimensional, and let T : U → V be a linear map. Then
rank(T ) + nullity(T ) = dim(U ),
where rank(T ) = dim(im(T )) and nullity(T ) = dim(ker(T )).
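For the map f of Example 2.3 the theorem is easy to check numerically; a minimal sketch, assuming numpy is available:

    import numpy as np

    A = np.array([[-2.0, 0.0, 0.0],
                  [ 0.0, 1.0, 3.0]])      # the map of Example 2.3, with U = R^3
    rank = np.linalg.matrix_rank(A)
    nullity = A.shape[1] - rank
    print(rank, nullity, rank + nullity)  # 2 1 3, and 3 = dim(U)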
2.6 Linear transformations and matrices
Let e1 , ..., en be a basis of U and f1 , ..., fm a basis of V . The matrix of T with respect
to these bases is the m × n matrix A = (αi,j ) whose entries are determined by
T (ej ) = α1,j f1 + α2,j f2 + ... + αm,j fm for 1 ≤ j ≤ n.
But suppose we chose different bases, say e1 = (1, 1, 1), e2 = (0, 1, 1), e3 = (1, 0, 1),
and f1 = (0, 1), f2 = (1, 0). Then we have T (e1 ) = (1, 1) = f1 + f2 , T (e2 ) = (0, 1) =
f1 , T (e3 ) = (1, 0) = f2 , and the matrix is
1 1 0
1 0 1
2. This time we take the differentiation map T from R[x]≤n to R[x]≤n−1 . Then, with
respect to the bases 1, x, x2 , ..., xn and 1, x, x2 , ..., xn−1 of R[x]≤n and R[x]≤n−1 ,
respectively, the matrix of T is
0 1 0 0 ··· 0   0
0 0 2 0 ··· 0   0
0 0 0 3 ··· 0   0
⋮ ⋮ ⋮ ⋮  ⋱  ⋮   ⋮
0 0 0 0 ··· n−1 0
0 0 0 0 ··· 0   n
In the same basis of K[x]≤n and the basis 1 of K, the evaluation map Eα : K[x]≤n → K
given by Eα (f ) = f (α) satisfies Eα (xn ) = αn . The matrix of Eα is the 1 × (n + 1) row vector
(1 α α2 · · · αn−1 αn ).
(A computational check of the differentiation matrix in example 2 appears after these examples.)
3. T : V → V is the identity map. Notice that U = V in this example. Provided that
we choose the same basis for U and V , then the matrix of T is the n × n identity
matrix In .
4. T : U → V is the zero map. The matrix of T is the m × n zero matrix Om,n ,
regardless of what bases we choose. (The coordinates of the zero vector are all zero
in any basis.)
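The differentiation matrix in example 2 is easy to verify computationally. A minimal sketch, assuming numpy is available (n = 3 is our choice): the matrix has n rows and n + 1 columns, with entry k in row k, column k + 1.

    import numpy as np

    n = 3
    D = np.zeros((n, n + 1))       # matrix of d/dx : R[x]_{<=3} -> R[x]_{<=2}
    for k in range(1, n + 1):
        D[k - 1, k] = k            # d/dx (x^k) = k x^(k-1)

    p = np.array([5.0, 4.0, 3.0, 2.0])   # coordinates of 5 + 4x + 3x^2 + 2x^3
    print(D @ p)                         # [4. 6. 6.], i.e. 4 + 6x + 6x^2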
2. tr(A) = tr(AT ).
This follows immediately from the fact that transposing a square matrix does not
affect elements along the main diagonal.
3. If A and B are m × n and n × m real or complex matrices, respectively, then
tr(AB) = tr(BA).
The matrices in the trace of a product can be switched without changing the result.
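This property is easy to spot-check numerically. A minimal sketch, assuming numpy is available:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 3))       # a random 2 x 3 matrix
    B = rng.standard_normal((3, 2))       # a random 3 x 2 matrix
    print(np.trace(A @ B), np.trace(B @ A))              # equal up to rounding
    print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True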