
MAT 1032 Affine and Euclidean Geometry

Level 1 Engineer (ENG 1)

Lecture note and Exercises

Dr. AZEBAZE Laurian


Ph.D. in Mathematics, University of Yaounde I
[email protected]

13 January 2025
♣ Preface ♣

I freely confess that I never had a taste for study or research either in physics or
geometry except in so far as they could serve as a means of arriving at some sort of
knowledge of the proximate causes...for the good and convenience of life, in maintaining
health, in the practice of some art...having observed that a good part of the arts is based
on geometry, among others the cutting of stone in architecture, that of sundials, and that
of perspective in particular.

Gérard Desargues (1591-1661)

♣ Table of Contents ♣

Preface

1 Vector spaces
1.1 Number systems and fields
1.2 Vector spaces
1.3 Examples of vector spaces
1.4 Linear independence, spanning and bases of vector spaces
1.4.1 Linear dependence and independence
1.5 Bases of vector spaces
1.5.1 Existence of a basis
1.6 Subspaces

2 Linear transformations
2.1 Definition and examples
2.2 Kernels and images
2.3 Injection, surjection and bijection
2.4 Rank and nullity
2.5 Operations on linear maps
2.6 Linear transformations and matrices
2.6.1 Setting up the correspondence
2.6.2 The correspondence between operations on linear maps and matrices
2.7 Trace of an endomorphism
2.7.1 Basic properties

♣ Chapter One ♣

Vector spaces

1.1 Number systems and fields


We introduce the number systems most commonly used in mathematics.

1. The natural numbers N = {1, 2, 3, 4, ...}.
In N, addition is possible but not subtraction ; e.g. 2 − 3 ∉ N.
2. The integers Z = {..., −2, −1, 0, 1, 2, ...}. In Z, addition, subtraction and
multiplication are always possible, but not division ; e.g. 2/3 ∉ Z.
3. The rational numbers Q = {p/q | p, q ∈ Z, q ≠ 0}. In Q, addition, subtraction,
multiplication and division (except by zero) are all possible. However, √2 ∉ Q.
4. The real numbers R. These are the numbers which can be expressed as decimals.
The rational numbers are those with finite or recurring decimals. In R, addition,
subtraction, multiplication and division (except by zero) are still possible, and all
positive numbers have square roots, but √−1 ∉ R.
5. The complex numbers C = {x + iy | x, y ∈ R} where i² = −1. In C, addition,
subtraction, multiplication and division (except by zero) are still possible, and all
numbers have square roots. In fact all polynomial equations with coefficients in C
have solutions in C.

Roughly speaking, S is a field if addition, subtraction, multiplication and division


(except by zero) are all possible in S. We shall always use the letter K for a general field.

Example 1.1. N and Z are not fields, but Q, R and C are all fields.

1.2 Vector spaces


Definition 1.1 (vector space). A vector space over a field K is a set V which has two
basic operations, addition and scalar multiplication, satisfying certain requirements. Thus
for every pair u, v ∈ V, u + v ∈ V is defined, and for every α ∈ K , αv ∈ V is defined.


For V to be called a vector space, the following axioms must be satisfied for all α, β ∈ K
and all u, v, w ∈ V.

1. u + v = v + u Commutative law
2. u + (v + w) = (u + v) + w Associative law
3. There is an element 0V ∈ V such that u + 0V = 0V + u = u Additive identity
4. For each element u ∈ V there exists an element −u ∈ V such that u + (−u) =
(−u) + u = 0V Additive inverse
5. 1v = v Multiplicative identity
6. α(u + v) = αu + αv Distributive law
7. (α + β)v = αv + βv Distributive law
8. (αβ)v = α(βv) Associative law

Elements of the field K will be called scalars. Note that we will use letters like v to
denote vectors. The zero vector in V will be written as 0V , or usually just 0. This is
different from the zero scalar 0 = 0K ∈ K.
For nearly all results in this course, there is no loss in assuming that K is the field R
of real numbers. So you may assume this if you find it helpful to do so. Just occasionally,
we will need to assume K = C the field of complex numbers. However, it is important to
note that nearly all arguments in Linear Algebra use only the axioms for a field and so
are valid for any field, which is why we shall use a general field K for most of the course.
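As a quick sanity check, the eight axioms can be verified numerically for sample vectors in R², taking K = R. The sketch below is illustrative only; the helper names `add` and `smul` are ours, not from the lecture note.

```python
import random

# Vectors in R^2 are modelled as tuples; "add" and "smul" are the two
# basic vector-space operations (names chosen here for illustration).

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def smul(alpha, v):
    return tuple(alpha * a for a in v)

random.seed(0)
u, v, w = [(random.randint(-5, 5), random.randint(-5, 5)) for _ in range(3)]
alpha, beta = 2, -3
zero = (0, 0)

assert add(u, v) == add(v, u)                                         # axiom 1
assert add(u, add(v, w)) == add(add(u, v), w)                         # axiom 2
assert add(u, zero) == u                                              # axiom 3
assert add(u, smul(-1, u)) == zero                                    # axiom 4
assert smul(1, v) == v                                                # axiom 5
assert smul(alpha, add(u, v)) == add(smul(alpha, u), smul(alpha, v))  # axiom 6
assert smul(alpha + beta, v) == add(smul(alpha, v), smul(beta, v))    # axiom 7
assert smul(alpha * beta, v) == smul(alpha, smul(beta, v))            # axiom 8
print("all eight axioms hold for these sample vectors")
```

Of course, passing for sample vectors is not a proof; the axioms hold for all of R² because they reduce componentwise to arithmetic in R.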

1.3 Examples of vector spaces


1. Kn = {(α1 , α2 , ..., αn ) | αi ∈ K}. This is the space of row vectors. Addition and
scalar multiplication are defined by the obvious rules :

(α1 , α2 , ..., αn ) + (β1 , β2 , ..., βn ) = (α1 + β1 , α2 + β2 , ..., αn + βn ); (1.1)


λ(α1 , α2 , ..., αn ) = (λα1 , λα2 , ..., λαn ). (1.2)

The most familiar examples are R2 = {(x, y) | x, y ∈ R} and


R3 = {(x, y, z) | x, y, z ∈ R}
which we can think of geometrically as the points in ordinary 2− and 3−dimensional
space, equipped with a coordinate system. Vectors in R2 and R3 can also be thought
of as directed lines joining the origin to the points with coordinates (x, y) or (x, y, z).

GEOMETRY (ENG 1 semester 2) ©ISJ 2023-2024 [email protected]



Addition of vectors is then given by the parallelogram law.

Note that K1 is essentially the same as K itself.


2. Let K[x] be the set of polynomials in an indeterminate x with coefficients in the field
K. That is,

K[x] = {α0 + α1 x + α2 x2 + ... + αn xn + ... | αi ∈ K}.

Then K[x] is a vector space over K.


3. Let K[x]≤n be the set of polynomials over K of degree at most n, for some n ≥ 0.
That is,
K[x]≤n = {α0 + α1 x + α2 x2 + ... + αn xn | αi ∈ K}.

Then K[x]≤n is also a vector space over K; in fact it is a subspace of K[x]. Note that
the polynomials of degree exactly n do not form a vector space. (Why not ?)
4. Let K = R and let V be the set of n−times differentiable functions f : R → R which
are solutions of the differential equation
λ0 dⁿf/dxⁿ + λ1 dⁿ⁻¹f/dxⁿ⁻¹ + · · · + λn−1 df/dx + λn f = 0
for fixed λ0 , λ1 , ..., λn ∈ R. Then V is a vector space over R, for if f (x) and g(x) are
both solutions of this equation, then so are f (x) + g(x) and αf (x) for all α ∈ R.
5. The previous example is a space of functions. There are many such examples that are
important in Analysis. For example, the set C k ((0; 1); R), consisting of all functions
f : (0; 1) → R such that the kth derivative f (k) exists and is continuous, is a
vector space over R with the usual pointwise definitions of addition and scalar
multiplication of functions.


We shall be assuming the following additional simple properties of vectors and scalars
from now on. They can all be deduced from the axioms (and it is a useful exercise to do
so).
(i) α0V = 0V for all α ∈ K;
(ii) 0v = 0V for all v ∈ V ;
(iii) −(αv) = (−α)v = α(−v) for all α ∈ K and v ∈ V ;
(iv) if αv = 0V then α = 0 or v = 0V .

1.4 Linear independence, spanning and bases of vector spaces

1.4.1 Linear dependence and independence


Definition 1.2. Let V be a vector space over the field K. The vectors v1 , v2 , ..., vn are said
to be linearly dependent if there exist scalars α1 , α2 , ..., αn ∈ K, not all zero, such that

α1 v1 + α2 v2 + ... + αn vn = 0.

v1 , v2 , ..., vn are said to be linearly independent if they are not linearly dependent. In
other words, they are linearly independent if the only scalars α1 , α2 , ..., αn ∈ K that satisfy
the above equation are α1 = 0, α2 = 0, ..., αn = 0.

Definition 1.3. Vectors of the form α1 v1 + α2 v2 + ... + αn vn for α1 , α2 , ..., αn ∈ K
are called linear combinations of v1 , v2 , ..., vn .

Example 1.2. (a) Let V = R2 , v1 = (1, 3), v2 = (2, 5). Show that the vectors v1 , v2 are
linearly independent.
(b) Let V = Q2 , v1 = (1, 3), v2 = (2, 6). Show that the vectors v1 , v2 are linearly dependent.
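Example 1.2 can be checked numerically: v1, ..., vn are linearly independent exactly when the matrix with rows v1, ..., vn has rank n. The sketch below uses NumPy's `matrix_rank`; the helper name `independent` is ours.

```python
import numpy as np

# Vectors are independent iff the matrix whose rows are those vectors
# has full row rank. Exact for these small integer examples.

def independent(*vectors):
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

print(independent((1, 3), (2, 5)))   # True: part (a)
print(independent((1, 3), (2, 6)))   # False: part (b), since v2 = 2*v1
```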

Lemma 1.4.1. v1 , v2 , ..., vn ∈ V are linearly dependent if and only if either v1 = 0 or,
for some r, vr is a linear combination of v1 , v2 , ..., vr−1 .

1.5 Bases of vector spaces


Definition 1.4 (Spanning vectors). The vectors v1 , v2 , ..., vn in V span V if every vector
v ∈ V is a linear combination of v1 , v2 , ..., vn . That is v = α1 v1 + α2 v2 + ... + αn vn .

Definition 1.5. The vectors v1 , v2 , ..., vn in V form a basis of V if they are linearly
independent and span V.


Proposition 1. The vectors v1 , v2 , ..., vn form a basis of V if and only if every v ∈ V can
be written uniquely as v = α1 v1 + α2 v2 + ... + αn vn ; that is, the coefficients α1 , α2 , ..., αn
are uniquely determined by the vector v.

Definition 1.6. The scalars α1 , α2 , ..., αn in the statement of the proposition are called
the coordinates of v with respect to the basis v1 , v2 , ..., vn .

With respect to a different basis, v will have different coordinates. Indeed, a basis for
a vector space can be thought of as a choice of a system of coordinates.

Example 1.3. Here are some examples of bases of vector spaces.


1. (1; 0) and (0; 1) form a basis of K2 , because any (x; y) ∈ K2 can be written as x(1; 0) +
y(0; 1), and this expression is clearly unique.
2. More generally, (1; 0; 0), (0; 1; 0), (0; 0; 1) form a basis of K3 ,
(1; 0; 0; 0), (0; 1; 0; 0), (0; 0; 1; 0), (0; 0; 0; 1) form a basis of K4 and so on. This is called
the standard basis of Kn for n ∈ N. (To be precise, the standard basis of Kn is
v1 , v2 , ..., vn , where vi is the vector with a 1 in the ith position and a 0 in all other
positions.)
3. There are many other bases of Kn . For example u = (1; 0), v = (1; 1) form a basis
of K2 , because (α1 ; α2 ) = (α1 − α2 )u + α2 v, and this expression is unique. In fact,
any two non-zero vectors such that one is not a scalar multiple of the other form a
basis for K2 .
4. As we have defined a basis, it has to consist of a finite number of vectors. Not every
vector space has a finite basis. For example, let K[x] be the space of polynomials in x
with coefficients in K. The infinite sequence of vectors 1, x, x2 , x3 , ..., xn , ... is a basis
of K[x].
5. A basis of K[x]≤n , the set of polynomials over K of degree at most n, is the sequence
of vectors 1, x, x2 , x3 , ..., xn .

A vector space with a finite basis is called finite-dimensional. In fact, nearly all of this
course will be about finite-dimensional spaces, but it is important to remember that these
are not the only examples.

Theorem 1 (The basis theorem). Suppose that v1 , v2 , ..., vn and w1 , w2 , ..., wm are both
bases of the vector space V . Then m = n. In other words, all finite bases of V contain
the same number of vectors.

Definition 1.7. The number n of vectors in a basis of the finite-dimensional vector space
V is called the dimension of V and we write dim(V ) = n.


Thus, as we might expect, Kn has dimension n. K[x] is infinite-dimensional, but the


space K[x]≤n of polynomials of degree at most n has basis 1, x, x2 , ..., xn , so its dimension
is n + 1 (not n).

Note that the dimension of V depends on the field K.

Example 1.4. Find the basis and the dimension of the set of complex numbers C over C,
and over R.

Lemma 1.5.1. Suppose that the vectors v1 , v2 , ..., vn , w span V and that w is a linear
combination of v1 , v2 , ..., vn . Then v1 , v2 , ..., vn span V.

There is an important process, which we shall call sifting, which can be applied to any
sequence of vectors v1 , v2 , ..., vn in a vector space V , as follows. We consider each vector
vi in turn. If it is zero, or a linear combination of the preceding vectors v1 , ..., vi−1 , then
we remove it from the list.

Example 1.5. Let us sift the following sequence of vectors in R3 .

v1 = (0, 0, 0) v2 = (1, 1, 1) v3 = (2, 2, 2) v4 = (1, 0, 0)

v5 = (3, 2, 2) v6 = (0, 0, 0) v7 = (1, 1, 0) v8 = (0, 0, 1)

v1 = 0R3 , so we remove it. v2 is non-zero so it stays. v3 = 2v2 so it is removed. v4 is clearly


not a linear combination of v2 , so it stays. We have to decide next whether v5 is a linear
combination of v2 , v4 . If so, then (3, 2, 2) = α1 (1, 1, 1)+α2 (1, 0, 0), which (fairly obviously)
has the solution α1 = 2, α2 = 1, so remove v5 . Then v6 = 0 so that is removed too. Next
we try v7 = (1, 1, 0) = α1 (1, 1, 1) + α2 (1, 0, 0), and looking at the three components, this
reduces to the three equations

1 = α1 + α2 ; 1 = α1 ; 0 = α1 .

The second and third of these equations contradict each other, and so there is no
solution. Hence v7 is not a linear combination of v2 , v4 , and it stays. Finally, we need to
try

v8 = (0, 0, 1) = α1 (1, 1, 1) + α2 (1, 0, 0) + α3 (1, 1, 0)

leading to the three equations

0 = α1 + α2 + α3 0 = α1 + α3 1 = α1

and solving these in the normal way, we find a solution α1 = 1, α2 = 0, α3 = −1. Thus we
delete v8 and we are left with just v2 , v4 , v7 . Of course, the vectors that are removed during


the sifting process depend very much on the order of the list of vectors. For example, if
v8 had come at the beginning of the list rather than at the end, then we would have kept
it.
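The sifting process above can be sketched in a few lines: keep each vector only if adding it increases the rank of the kept set, i.e. only if it is not zero and not a linear combination of the vectors already kept. The function name `sift` and the rank-based dependence test are our choices, not the lecture's.

```python
import numpy as np

# Sifting: keep v only when the kept vectors together with v are still
# linearly independent (detected by a rank increase).

def sift(vectors):
    kept = []
    for v in vectors:
        candidate = kept + [v]
        A = np.array(candidate, dtype=float)
        if np.linalg.matrix_rank(A) == len(candidate):
            kept.append(v)
    return kept

vs = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (1, 0, 0),
      (3, 2, 2), (0, 0, 0), (1, 1, 0), (0, 0, 1)]
print(sift(vs))   # [(1, 1, 1), (1, 0, 0), (1, 1, 0)], i.e. v2, v4, v7
```

Reordering the input list changes which vectors survive, exactly as noted in the text.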

The idea of sifting allows us to prove the following theorem, stating that every finite
sequence of vectors which spans a vector space V actually contains a basis for V .

Theorem 2. Suppose that the vectors v1 , ..., vr span the vector space V . Then there is
a subsequence of v1 , ..., vr which forms a basis of V.

The theorem tells us that any vector space with a finite spanning set is finite dimen-
sional, and indeed the spanning set contains a basis. We now prove the dual result : any
linearly independent set is contained in a basis.

Theorem 3. Let V be a vector space over K which has a finite spanning set, and suppose
that the vectors v1 , ..., vr are linearly independent in V . Then we can extend the sequence
to a basis v1 , ..., vn of V , where n ≥ r.

Example 1.6. The vectors v1 = (1, 2, 0, 2), v2 = (0, 1, 0, 2) are linearly independent in
R4 . Let us extend them to a basis of R4 . The easiest thing is to append the standard basis
of R4 , giving the combined list of vectors

v1 = (1, 2, 0, 2), v2 = (0, 1, 0, 2), w1 = (1, 0, 0, 0),

w2 = (0, 1, 0, 0), w3 = (0, 0, 1, 0), w4 = (0, 0, 0, 1),

which we shall sift. We find that (1, 0, 0, 0) = α1 (1, 2, 0, 2) + α2 (0, 1, 0, 2) has no solution,
so w1 stays. However, w2 = v1 − v2 − w1 so w2 is deleted. It is clear that w3 is not a linear
combination of v1 , v2 , w1 , because all of those have a 0 in their third component. Hence
w3 remains. Now we have four linearly independent vectors, so we must have a basis at this
stage, and we can stop the sifting early. The resulting basis is

v1 = (1, 2, 0, 2), v2 = (0, 1, 0, 2), w1 = (1, 0, 0, 0), w3 = (0, 0, 1, 0).

Proposition 2 (The exchange lemma). Suppose that vectors v1 , ..., vn span V and that
vectors w1 , ..., wm ∈ V are linearly independent. Then m ≤ n.

Corollary 1. Let V be a vector space of dimension n over K. Then any n vectors which
span V form a basis of V , and no n − 1 vectors can span V.

Corollary 2. Let V be a vector space of dimension n over K. Then any n linearly inde-
pendent vectors form a basis of V and no n + 1 vectors can be linearly independent.


1.5.1 Existence of a basis


Corollary 3. If a non-trivial vector space V is spanned by a finite number of vectors,
then it has a basis.

1.6 Subspaces
Definition 1.8. Let W be a subset of V . Then W is a subspace of V if
i. W is non-empty : the zero vector 0V ∈ W ;
ii. W is closed under addition : for every u, v ∈ W, u + v ∈ W ;
iii. W is closed under scalar multiplication : for every v ∈ W and α ∈ K, αv ∈ W.

Conditions (ii) and (iii) can be replaced with a single condition :

for every u, v ∈ W and α, β ∈ K, αu + βv ∈ W.

A subspace W is itself a vector space over K under the operations of vector addition and
scalar multiplication in V . Notice that all vector space axioms of W hold automatically.
(They are inherited from V .)

Example 1.7. 1. Show that W = {(x, y) ∈ R2 | y − 2x = 0} is a subspace of R2 .


2. Show that the following subsets of R2 are not subspaces :
W1 = {(x, y) ∈ R2 | x + y = 2}; W2 = {(x, y) ∈ R2 | x = 0 or y = 0}; and
W3 = {(x, y) ∈ R2 | x ≥ 0 or y ≥ 0}.

For any vector space V , V is always a subspace of itself. Subspaces other than V are
sometimes called proper subspaces. We also always have a subspace {0} consisting of the
zero vector alone. This is called the trivial subspace, and its dimension is 0, because it
has no linearly independent sets of vectors at all. Intersecting two subspaces gives a third
subspace :

Proposition 3. If W1 and W2 are subspaces of V then so is W1 ∩ W2 .

Warning ! It is not necessarily true that W1 ∪W2 is a subspace, as the following example
shows.

Example 1.8. Let V = R2 , let W1 = {(x, 0) | x ∈ R} and W2 = {(0, y) | y ∈ R}. Then


W1 , W2 are subspaces of V , but W1 ∪ W2 is not a subspace, because (1, 0), (0, 1) ∈ W1 ∪ W2 ,
but (1, 0) + (0, 1) = (1, 1) ∉ W1 ∪ W2 .

Note that any subspace of V that contains W1 and W2 has to contain all vectors of
the form u + v for u ∈ W1 , v ∈ W2 . This motivates the following definition.


Definition 1.9. Let W1 , W2 be subspaces of the vector space V . Then W1 + W2 is defined


to be the set of vectors v ∈ V such that v = w1 + w2 for some w1 ∈ W1 , w2 ∈ W2 . Or, if
you prefer, W1 + W2 = {w1 + w2 | w1 ∈ W1 , w2 ∈ W2 }.

Do not confuse W1 + W2 with W1 ∪ W2 .

Proposition 4. If W1 , W2 are subspaces of V then so is W1 +W2 . In fact, it is the smallest


subspace that contains both W1 and W2 .

Theorem 4. Let V be a finite-dimensional vector space, and let W1 , W2 be subspaces of


V . Then
dim(W1 + W2 ) = dim(W1 ) + dim(W2 ) − dim(W1 ∩ W2 ).

Exercise 1. Let F = {(x, y, z) ∈ R3 | x = 0}, G = {(x, y, z) ∈ R3 | y = 0} be two
subspaces of R3 .
1. Find a basis of F , G and F ∩ G, as well as the dimensions of F , G and F + G.
2. Describe the set F + G.
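Exercise 1 can be checked numerically against Theorem 4. Here we read off bases directly: F has basis (0,1,0), (0,0,1); G has basis (1,0,0), (0,0,1); and F ∩ G = {x = y = 0} has basis (0,0,1). A NumPy sketch (variable names are ours):

```python
import numpy as np

# dim(F), dim(G), dim(F+G) measured via matrix_rank; dim(F ∩ G) = 1 is
# read off directly, since F ∩ G is spanned by (0, 0, 1).

F_basis = np.array([[0., 1., 0.], [0., 0., 1.]])   # spans {x = 0}
G_basis = np.array([[1., 0., 0.], [0., 0., 1.]])   # spans {y = 0}

dim_F = np.linalg.matrix_rank(F_basis)                          # 2
dim_G = np.linalg.matrix_rank(G_basis)                          # 2
dim_sum = np.linalg.matrix_rank(np.vstack([F_basis, G_basis]))  # dim(F+G) = 3
dim_int = 1                                                     # dim(F ∩ G)

assert dim_sum == dim_F + dim_G - dim_int    # Theorem 4: 3 = 2 + 2 - 1
print(dim_F, dim_G, dim_sum)                 # 2 2 3, so F + G = R^3
```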

Another way to form subspaces is to take linear combinations of some given vectors :

Proposition 5. Let v1 , ..., vn be vectors in the vector space V . Then the set of all linear
combinations α1 v1 + α2 v2 + ... + αn vn of v1 , ..., vn forms a subspace of V.

Definition 1.10. Two subspaces W1 , W2 of V are called complementary if W1 ∩ W2 = {0}


and W1 + W2 = V.

Proposition 6. Let W1 , W2 be subspaces of V. Then W1 , W2 are complementary subspaces


if and only if each vector v ∈ V can be written uniquely as v = w1 + w2 with w1 ∈ W1
and w2 ∈ W2 .

Example 1.9. Show that the following are complementary subspaces.

1. W1 = {(x, 0) ∈ R2 | x ∈ R} and W2 = {(0, y) ∈ R2 | y ∈ R}.


2. W1 = {(x, 0, 0) ∈ R3 | x ∈ R} and W2 = {(0, y, z) ∈ R3 | y, z ∈ R}.
3. W1 = {(x, x) ∈ R2 | x ∈ R} and W2 = {(−x, x) ∈ R2 | x ∈ R}.
4. F = {(x, y, z) ∈ R3 | x − y − z = 0} and G = {(x, y, z) ∈ R3 | y = z = 0}.



♣ Chapter Two ♣

Linear transformations

When you study sets, the notion of function is extremely important. There is little to
say about a single isolated set, while functions allow you to link different sets. Similarly,
in Linear Algebra, a single isolated vector space is not the end of the story. We have to
connect different vector spaces by functions. However, a function having little regard to
the vector space operations may be of little value.

2.1 Definition and examples


Definition 2.1. Let U, V be two vector spaces over the same field K. A linear
transformation or linear map f from U to V is a function f : U → V such that
(i) f (u1 + u2 ) = f (u1 ) + f (u2 ) for all u1 , u2 ∈ U ;
(ii) f (αu) = αf (u) for all α ∈ K and u ∈ U.
Notice that the two conditions for linearity are equivalent to a single condition

f (αu + βv) = αf (u) + βf (v) for all α, β ∈ K and u, v ∈ U.

Example 2.1. Show that f : R3 −→ R2 defined by f (x, y, z) = (−2x , y + 3z) is a linear


map.

First let us state a couple of easy consequences of the definition :

Lemma 2.1.1. Let f : U → V be a linear map. Then


i. f (0U ) = 0V ;
ii. f (−u) = −f (u) for all u ∈ U.

Example 2.2. Many familiar geometrical transformations, such as projections, rotations,


reflections and magnifications are linear maps. Note, however, that a non-trivial translation
is not a linear map, because it does not satisfy f (0U ) = 0V .

One of the most useful properties of linear maps is that, if we know how a linear map
U → V acts on a basis of U, then we know how it acts on the whole of U.


Proposition 7 (Linear maps are uniquely determined by their action on a basis). Let
U, V be vector spaces over K, let u1 , ..., un be a basis of U and let v1 , ..., vn be any sequence
of n vectors in V . Then there is a unique linear map T : U → V with T (ui ) = vi for
1 ≤ i ≤ n.

2.2 Kernels and images


To any linear map U → V, we can associate a subspace of U and a subspace of V .

Definition 2.2. Let T : U → V be a linear map. The image of T is the set defined by

im(T ) = {v ∈ V | v = T (u) for some u ∈ U }.

Definition 2.3. The kernel of T is the set defined by

ker(T ) = {u ∈ U | T (u) = 0V }.

Example 2.3. Find im(f ) and ker(f ) of the linear map f : R3 −→ R2 defined by
f (x, y, z) = (−2x, y + 3z).
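Example 2.3 can be explored numerically. With respect to the standard bases, f has the matrix A below; im(f) has dimension rank(A), and ker(f) can be extracted from the singular value decomposition. A NumPy sketch (the SVD-based null-space extraction is our choice of method):

```python
import numpy as np

# f(x, y, z) = (-2x, y + 3z) as a matrix acting on column vectors.
A = np.array([[-2.0, 0.0, 0.0],
              [ 0.0, 1.0, 3.0]])

rank = np.linalg.matrix_rank(A)          # 2, so im(f) is all of R^2
_, s, Vt = np.linalg.svd(A)              # Vt is 3 x 3
null_space = Vt[rank:]                   # rows spanning ker(f)

print(rank)                              # 2
assert np.allclose(A @ null_space.T, 0)  # null-space vectors map to 0
# ker(f) is 1-dimensional, spanned by a multiple of (0, -3, 1)
assert np.allclose(np.cross(null_space[0], [0., -3., 1.]), 0)
```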

Proposition 8. Let T : U → V be a linear map. Then


— im(T ) is a subspace of V.
— ker(T ) is a subspace of U.

Proof .............

2.3 Injection, surjection and bijection


Definition 2.4. Let U and V be two vector spaces and T : U −→ V be a linear map.
— T is injective if for every u, v ∈ U , T (u) = T (v) implies u = v.
— T is surjective if for every v ∈ V there exists u ∈ U such that T (u) = v.
— T is bijective if for every v ∈ V there exists a unique u ∈ U such that T (u) = v.

Proposition 9. Let U and V be two vector spaces and f : U −→ V be a linear map.

— f is injective if and only if ker(f ) = {0U }.
— f is surjective if and only if im(f ) = V.
— f is bijective if and only if ker(f ) = {0U } and im(f ) = V.


2.4 Rank and nullity


The dimensions of the kernel and image of a linear map contain important information
about it, and are related to each other.

Definition 2.5. Let T : U → V be a linear map.


— dim(im(T )) is called the rank of T ;
— dim(ker(T )) is called the nullity of T.

Theorem 5 (The rank-nullity theorem). Let U, V be vector spaces over K with U finite-
dimensional, and let T : U → V be a linear map. Then

rank(T ) + nullity(T ) = dim(U ).
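The theorem can be sanity-checked numerically on random matrices. In the sketch below, the helper `rank_and_nullity` is ours: rank is the number of nonzero singular values, and nullity is measured separately as the number of right-singular directions that the map sends to zero.

```python
import numpy as np

def rank_and_nullity(A):
    # Rank: count of nonzero singular values. Nullity: count of
    # right-singular vectors v with A v = 0 (rows of Vt).
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.count_nonzero(s > 1e-10))
    nullity = int(sum(np.linalg.norm(A @ v) <= 1e-10 for v in Vt))
    return rank, nullity

rng = np.random.default_rng(0)
for _ in range(50):
    m, n = rng.integers(1, 6, size=2)
    A = rng.integers(-3, 4, size=(m, n)).astype(float)
    rank, nullity = rank_and_nullity(A)
    assert rank + nullity == n           # Theorem 5: dim(U) = n

print("rank-nullity verified on 50 random matrices")
```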

Corollary 4. Let T : U → V be a linear map, and suppose that


dim(U ) = dim(V ) = n. Then the following properties of T are equivalent :
— T is surjective ;
— rank(T ) = n;
— nullity(T ) = 0;
— T is injective ;
— T is bijective.

2.5 Operations on linear maps


We can define the operations of addition, scalar multiplication and composition on
linear maps. Let T1 : U → V and T2 : U → V be two linear maps, and let α ∈ K be a
scalar.
• (Addition of linear maps). We define a map T1 + T2 : U → V by the rule
(T1 + T2 )(u) = T1 (u) + T2 (u) for u ∈ U.
• (Scalar multiplication of linear maps). We define a map αT1 : U → V by the
rule (αT1 )(u) = αT1 (u) for u ∈ U.
• (Composition of linear maps). If instead T1 : U → V and T2 : V → W , we define
a map T2 ◦ T1 : U → W by (T2 ◦ T1 )(u) = T2 (T1 (u)) for u ∈ U.
In particular, we define T 2 = T ◦ T and T i+1 = T i ◦ T for i ≥ 2. It is routine to
check that T1 + T2 , αT1 and T2 ◦ T1 are themselves all linear maps (but you should
do it !).


2.6 Linear transformations and matrices


We shall see in this section that, for a fixed choice of bases, there is a very natural one-to-one
correspondence between linear maps and matrices. This is perhaps the most important
idea in linear algebra, because it enables us to deduce properties of matrices from those
of linear maps, and vice-versa. It also explains why we multiply matrices in the way we
do.

2.6.1 Setting up the correspondence


Let T : U → V be a linear map, where dim(U ) = n; dim(V ) = m. Suppose that we
are given a basis e1 , ..., en of U and a basis f1 , ..., fm of V . Now, for 1 ≤ j ≤ n, the vector
T (ej ) lies in V , so T (ej ) can be written uniquely as a linear combination of f1 , ..., fm . Let

T (e1 ) = α1,1 f1 + α2,1 f2 + · · · + αm,1 fm (2.1)
T (e2 ) = α1,2 f1 + α2,2 f2 + · · · + αm,2 fm (2.2)
... (2.3)
T (en ) = α1,n f1 + α2,n f2 + · · · + αm,n fm (2.4)

where the coefficients αi,j ∈ K (for 1 ≤ i ≤ m; 1 ≤ j ≤ n) are uniquely determined.


Putting it more compactly, we define scalars αi,j by

T (ej ) = ∑_{i=1}^{m} αi,j fi for 1 ≤ j ≤ n.

The coefficients αi,j form an m × n matrix


 
        ( α1,1 α1,2 · · · α1,n )
    A = ( α2,1 α2,2 · · · α2,n )
        (  ...  ...  ...  ...  )
        ( αm,1 αm,2 · · · αm,n )
over K. Then A is called the matrix of the linear map T with respect to the chosen
bases of U and V . In general, different choices of bases give different matrices. Notice the
role of the individual columns of A : the jth column of A consists of the coordinates of
T (ej ) with respect to the basis f1 , ..., fm of V.

Example 2.4. Once again, we consider our previous examples

1. T : R3 → R2 , T (x, y, z) = (x, y). Usually, we choose the standard bases of Km and Kn ,


which in this case are e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1) and f1 = (1, 0), f2 =


(0, 1). We have T (e1 ) = f1 , T (e2 ) = f2 , T (e3 ) = 0, and the matrix is


( 1 0 0 )
( 0 1 0 )

But suppose we chose different bases, say e1 = (1, 1, 1), e2 = (0, 1, 1), e3 = (1, 0, 1),
and f1 = (0, 1), f2 = (1, 0). Then we have T (e1 ) = (1, 1) = f1 + f2 , T (e2 ) = (0, 1) =
f1 , T (e3 ) = (1, 0) = f2 , and the matrix is
( 1 1 0 )
( 1 0 1 )

2. This time we take the differentiation map T from R[x]≤n to R[x]≤n−1 . Then, with
respect to the bases 1, x, x2 , ..., xn and 1, x, x2 , ..., xn−1 of R[x]≤n and R[x]≤n−1 , res-
pectively, the matrix of T is
 
( 0 1 0 0 · · · 0   0 )
( 0 0 2 0 · · · 0   0 )
( 0 0 0 3 · · · 0   0 )
( ... ... ... ...  ... )
( 0 0 0 0 · · · n−1 0 )
( 0 0 0 0 · · · 0   n )

In the same basis of K[x]≤n and the basis 1 of K, the evaluation map Eα (which sends
a polynomial p to p(α), so that Eα (xn ) = αn ) has matrix (1 α α2 · · · αn−1 αn ).
3. T : V → V is the identity map. Notice that U = V in this example. Provided that
we choose the same basis for U and V , then the matrix of T is the n × n identity
matrix In .
4. T : U → V is the zero map. The matrix of T is the m × n zero matrix 0m,n ,
regardless of what bases we choose. (The coordinates of the zero vector are all zero
in any basis.)
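The non-standard-basis computation in example 1 can be reproduced mechanically: column j of A holds the coordinates of T(ej) in the f-basis, found by solving a linear system. A NumPy sketch (variable names are ours):

```python
import numpy as np

# Matrix of T(x, y, z) = (x, y) with respect to the bases
# e1=(1,1,1), e2=(0,1,1), e3=(1,0,1) of R^3 and f1=(0,1), f2=(1,0) of R^2.

T = lambda v: np.array([v[0], v[1]], dtype=float)

E = [np.array([1., 1., 1.]), np.array([0., 1., 1.]), np.array([1., 0., 1.])]
F = np.column_stack([np.array([0., 1.]), np.array([1., 0.])])  # f-basis as columns

# Column j: coordinates of T(e_j) in the f-basis, via F * coords = T(e_j).
A = np.column_stack([np.linalg.solve(F, T(e)) for e in E])
print(A)   # [[1. 1. 0.]
           #  [1. 0. 1.]], matching the matrix computed in the text
```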

2.6.2 The correspondence between operations on linear maps and matrices
Let U, V and W be vector spaces over the same field K, let dim(U ) = n, dim(V ) =
m, dim(W ) = l, and choose fixed bases e1 , ..., en of U and f1 , ..., fm of V , and g1 , ..., gl
of W . All matrices of linear maps between these spaces will be written with respect to
these bases.


Proposition 10.
1. Let T1 , T2 : U → V be linear maps with m × n matrices A, B respectively. Then
the matrix of T1 + T2 is A + B.
2. Let T : U → V be a linear map with m × n matrix A and let λ ∈ K be a scalar.
Then the matrix of λT is λA.
Composition of linear maps corresponds to matrix multiplication. This time the cor-
respondence is less obvious, and we state it as a theorem.
Theorem 6. Let T1 : V → W be a linear map with l × m matrix A = (αi,j ) and let
T2 : U → V be a linear map with m × n matrix B = (βi,j ). Then the matrix of the
composite map T1 ◦ T2 : U → W is AB.

2.7 Trace of an endomorphism


The trace of a square matrix A, denoted tr(A), is defined to be the sum of the elements
on the main diagonal of A. The trace is only defined for a square (n × n) matrix. For the
(n × n) square matrix

        ( α1,1 α1,2 · · · α1,n )
    A = ( α2,1 α2,2 · · · α2,n )
        (  ...  ...  ...  ...  )
        ( αn,1 αn,2 · · · αn,n )

the trace is defined as

    tr(A) = ∑_{i=1}^{n} αi,i = α1,1 + α2,2 + ... + αn,n .

2.7.1 Basic properties


1. The trace is a linear mapping. That is,

tr(A + B) = tr(A) + tr(B) and tr(λA) = λ tr(A)

for all square matrices A, B of the same size and every scalar λ.


2. A matrix and its transpose have the same trace

tr(A) = tr(AT ).

This follows immediately from the fact that transposing a square matrix does not
affect elements along the main diagonal.
3. If A and B are m × n and n × m real or complex matrices, respectively, then

tr(AB) = tr(BA).

The matrices in the trace of a product can be switched without changing the result.
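The three properties above can be checked numerically on random matrices, including the non-square case of property 3. A NumPy sketch (matrix sizes chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)   # square pair
B = rng.integers(-5, 6, size=(4, 4)).astype(float)
C = rng.integers(-5, 6, size=(3, 5)).astype(float)   # non-square pair:
D = rng.integers(-5, 6, size=(5, 3)).astype(float)   # CD is 3x3, DC is 5x5

assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))  # linearity
assert np.isclose(np.trace(2.5 * A), 2.5 * np.trace(A))
assert np.isclose(np.trace(A), np.trace(A.T))                  # transpose
assert np.isclose(np.trace(C @ D), np.trace(D @ C))            # tr(AB) = tr(BA)
print("all trace identities verified")
```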
