Linear Algebra Notes
SCHOOL OF MATHEMATICS
UNIVERSITY OF NAIROBI
Preface
The work presented in this book is the result of teaching my students for close to two decades. Much of the material can be found in other excellent books written by eminent authors. The approach used in the book is student-friendly and can be followed by anyone familiar with the early years of undergraduate study. The material presented is sufficient for an introductory single-semester course in linear algebra. A number of theorems have merely been stated, to allow ease of reading.
A Wafula
TABLE OF CONTENTS
Chapter I    Vector Spaces
Chapter II   Vector Subspaces
Chapter III  Linear Dependence, Bases and Dimension
Chapter IV   Systems of Linear Equations
Chapter V    Linear Transformations
CHAPTER I
VECTOR SPACES
We discuss a basic structure of linear algebra that forms the foundation for the solution of many problems in diverse fields, namely the vector space.
Definition:
A vector space V is a non-empty set of elements called vectors, on which are defined a concept of equality of vectors and two operations called vector addition and scalar multiplication, with the following properties.
i. a + b is in V for all vectors a, b in V (closure under addition).
ii. ka is in V for all a in V and all k in R (closure under scalar multiplication).
iii. a + (b + c) = (a + b) + c for all vectors a, b, c in V (associative law).
iv. There exists a vector called zero and written 0 such that 0 + a = a for all a in V.
v. For every vector a in V there exists a vector called minus a and denoted by -a such that a + (-a) = 0.
vi. a + b = b + a for all vectors a and b in V.
vii. k(a + b) = ka + kb for all a and b in V and k in R.
viii. (p + q)a = pa + qa for all vectors a in V and scalars p and q in R.
ix. (pq)a = p(qa) for all scalars p and q in R and every vector a in V.
x. 1a = a for all a in V.
Note:
One can verify that the remaining conditions for a vector space are satisfied in each of the examples below.
Examples
2. Consider R^n, the space of all column vectors with n real entries, where addition in R^n is performed according to corresponding entries, namely
(x_1, x_2, ..., x_n)^T + (y_1, y_2, ..., y_n)^T = (x_1 + y_1, x_2 + y_2, ..., x_n + y_n)^T.
3. Let M(2) be the space of all 2x2 matrices, where vector addition is performed entrywise:
[a_11 a_12; a_21 a_22] + [b_11 b_12; b_21 b_22] = [a_11+b_11 a_12+b_12; a_21+b_21 a_22+b_22] for all such matrices.
The space P_n, consisting of all polynomials in x of degree n or less, is a vector space where addition and scalar multiplication are defined as follows:
(a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0) + (b_n x^n + b_{n-1} x^{n-1} + ... + b_1 x + b_0) = (a_n + b_n) x^n + (a_{n-1} + b_{n-1}) x^{n-1} + ... + (a_1 + b_1) x + (a_0 + b_0)
and
k(a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0) = k a_n x^n + k a_{n-1} x^{n-1} + ... + k a_1 x + k a_0.
4. Let C[0,1] be the space of all continuous real-valued functions on the interval [0,1]. Addition and scalar multiplication are defined as follows:
(f + g)(x) = f(x) + g(x)
(kf)(x) = k f(x) for all k in R and x in [0,1].
That the functions so defined are continuous is well established in calculus.
The zero in C[0,1] is the constant zero function. C[0,1] is an example of an infinite-dimensional vector space.
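Several of these axioms can be spot-checked numerically. The sketch below is a minimal illustration in Python, assuming NumPy is available; the vectors and scalars are arbitrary choices.

import numpy as np

# Spot-check a few vector space axioms in R^3 with arbitrary vectors and scalars.
a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.5, -1.0])
c = np.array([-2.0, 1.0, 0.0])
p, q = 2.0, -3.0

print(np.allclose(a + (b + c), (a + b) + c))    # associative law (iii)
print(np.allclose(a + b, b + a))                # commutative law (vi)
print(np.allclose(p * (a + b), p * a + p * b))  # axiom (vii)
print(np.allclose((p + q) * a, p * a + q * a))  # axiom (viii)
print(np.allclose((p * q) * a, p * (q * a)))    # axiom (ix)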
In Euclidean space we have a concept of geometry. A vital tool in this vector space that can be used to investigate its geometry is the dot product.
Definition
For x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) in R^n, the dot product is defined by x.y = x_1 y_1 + x_2 y_2 + ... + x_n y_n.
Definition
Two vectors x and y in R^n are said to be orthogonal if x.y = 0.
Definition
In R^n we define the magnitude of a vector x (the norm) to be ||x|| = sqrt(x.x) = sqrt(x_1^2 + x_2^2 + ... + x_n^2).
From the definition of the norm we identify the following properties: ||x|| >= 0, with ||x|| = 0 if and only if x = 0, and ||kx|| = |k| ||x|| for all k in R.
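The dot product and the norm are easy to experiment with numerically. A minimal Python sketch, assuming NumPy is available; the two vectors are arbitrary choices.

import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, -1.0, 0.0])

print(x.dot(y))               # x.y = 1*2 + 2*(-1) + 2*0 = 0, so x and y are orthogonal
print(np.linalg.norm(x))      # ||x|| = sqrt(1 + 4 + 4) = 3
print(np.linalg.norm(2 * x))  # ||2x|| = 2 ||x|| = 6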
Examples
3. Given that ||x + 2y||^2 = 4 and ||2x - 3y||^2 = 9, find ||x|| and ||y|| if x and y are orthogonal.
4. Prove the parallelogram law, namely that ||x + y||^2 + ||x - y||^2 = 2||x||^2 + 2||y||^2 (see the expansion sketched after this list).
5. Prove that the diagonals of a rhombus intersect at right angles.
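For item 4, the expansion can be sketched using only the properties of the dot product introduced above:

||x + y||^2 = (x + y).(x + y) = ||x||^2 + 2 x.y + ||y||^2
||x - y||^2 = (x - y).(x - y) = ||x||^2 - 2 x.y + ||y||^2

Adding the two identities, the cross terms cancel and we obtain ||x + y||^2 + ||x - y||^2 = 2||x||^2 + 2||y||^2. A similar dot product argument works for item 5.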
Theorem
The zero vector in a vector space V is unique.
Proof
Suppose 0 and 0' are both zero vectors. Then 0 + 0' = 0' (since 0 is a zero). However, 0 + 0' = 0' + 0 = 0 (since 0' is a zero). Hence 0' = 0.
Theorem
0a = 0 for all a in V.
Proof
It is clear that 0 + 0 = 0. So
0a = (0 + 0)a = 0a + 0a.
Adding -(0a) to both sides gives 0a = 0.
Theorem
Proof
Now,
Theorem
(-1)a = -a for all a in V.
Proof
a + (-1)a = (1 + (-1))a = 0a = 0, hence (-1)a = -a.
Theorem
For each a in V the additive inverse -a is unique.
Proof
Suppose b and b' both satisfy a + b = 0 and a + b' = 0. Then
b = b + 0 = b + (a + b') = (b + a) + b' = 0 + b' = b'.
Therefore the additive inverse is unique.
Theorem
If a = b then a + c = b + c for every c in V.
Proof
If a = b then (a + c) - (b + c) = (a - b) + (c - c) = 0 + 0 = 0.
Hence a + c = b + c.
[Figure: triangle OAB with OA = a, OB = b and AB = b - a; θ is the angle between a and b.]
In the diagram θ is the angle between the vectors a and b. Applying the cosine rule in triangle OAB we obtain
||b - a||^2 = ||a||^2 + ||b||^2 - 2 ||a|| ||b|| cos θ.
But ||b - a||^2 = (b - a).(b - a) = ||a||^2 + ||b||^2 - 2 a.b.
Hence
||a||^2 + ||b||^2 - 2 a.b = ||a||^2 + ||b||^2 - 2 ||a|| ||b|| cos θ, i.e. a.b = ||a|| ||b|| cos θ.
If a and b are parallel, let b = ka; then a.b = k ||a||^2 while ||a|| ||b|| = |k| ||a||^2, i.e. cos θ = ±1, so θ = 0 or 180 degrees. This agrees well with the orientation of parallel vectors.
If a and b are perpendicular then θ = 90° and cos θ = 0, so a.b = 0. On the other hand, if a and b are non-zero vectors with a.b = 0, then cos θ = 0. Hence θ = 90°.
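The relation a.b = ||a|| ||b|| cos θ can be used to compute the angle between two given vectors. A minimal Python sketch, assuming NumPy is available; the vectors are arbitrary choices.

import numpy as np

a = np.array([3.0, 0.0, 4.0])
b = np.array([0.0, 5.0, 5.0])
cos_theta = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.degrees(np.arccos(cos_theta))
print(cos_theta, theta)   # cosine of the angle, and the angle in degrees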
Examples
2. Prove that ||x + y|| <= ||x|| + ||y|| for all vectors x, y in R^n.
Proof
||x + y||^2 = (x + y).(x + y) = ||x||^2 + 2 x.y + ||y||^2
= ||x||^2 + 2 ||x|| ||y|| cos θ + ||y||^2 <= ||x||^2 + 2 ||x|| ||y|| + ||y||^2 = (||x|| + ||y||)^2.
The result follows on taking square roots of both sides. This relationship is called the triangle inequality. It alludes to the fact that in a triangle the longest side cannot exceed the sum of the other two sides.
[Figure: the line through the points with position vectors a and b; the direction of the line is b - a.]
The vector equation of the line through the points with position vectors a and b is r = a + t(b - a), where t is a scalar.
Example
1. Write the vector equation of the line joining (0,1,-1) and (3,0,2) and hence find the parametric equations of the line.
Solution
The direction of the line is (3,0,2) - (0,1,-1) = (3,-1,3), so the vector equation is r = (0,1,-1) + t(3,-1,3). The parametric equations are x = 3t, y = 1 - t, z = -1 + 3t.
2. Find the vector equation and the parametric equations of the line joining two given points.
Solution
From the vector equation we obtain the parametric equations as before.
Equation of a plane
A plane with normal vector n = (a, b, c) consists of all points with position vector r satisfying n.r = d for some constant d; in Cartesian form this reads ax + by + cz = d.
Question
1. Find the point of intersection of the line joining the two given points with the given plane.
Question
2. Determine the acute angle between the two given planes.
Distance of a plane from the origin:
Let A be the nearest point on the plane ax + by + cz = d to the origin; then A is located on the normal to the plane. Thus A has position vector t(a, b, c) for some scalar t. Substituting in the equation of the plane we obtain t(a^2 + b^2 + c^2) = d, from which we find that t = d / (a^2 + b^2 + c^2). Therefore the shortest distance of the plane from the origin is
|OA| = |t| sqrt(a^2 + b^2 + c^2) = |d| / sqrt(a^2 + b^2 + c^2).
Distance of a plane from a given point:
Let be the nearest point from a given point on the plane then the A is located on the
normal to the plane passing through Q. We make the transformation .So that the new
origin shifts to .
The equation of the plane becomes
Which on rearrangement becomes .
Now from the previous description the distance
.
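As a quick numerical check of this formula, a minimal Python sketch (assuming NumPy; the plane and the point are arbitrary illustrative choices):

import numpy as np

# Distance from the point q to the plane a*x + b*y + c*z = d,
# using |d - n.q| / ||n|| with normal n = (a, b, c).
n = np.array([1.0, 2.0, 2.0])    # normal vector of the plane
d = 6.0                          # right-hand side of the plane equation
q = np.array([3.0, -1.0, 4.0])   # the given point
print(abs(d - n.dot(q)) / np.linalg.norm(n))   # 1.0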
CHAPTER II
VECTOR SUBSPACES
Definition:
Given a vector space V, a non-empty subset W of V is called a vector subspace of V if W is a vector
space with respect to vector addition and scalar multiplication in V.
Theorem
W is a vector subspace of V if and only if
i. a + b is in W for all a, b in W, and
ii. ka is in W for all a in W and k in R.
Proof
Suppose that W is a vector subspace of V. Then W is a vector space in its own right, hence (i) and (ii) hold.
Conversely, suppose that (i) and (ii) are satisfied in W. We need to show that the remaining conditions of a vector space hold in W. Since W is non-empty there is some a in W; then (-1)a = -a is in W by (ii), and a + (-a) = 0 is in W by (i). The remaining axioms hold in W because they already hold in V.
Examples
Let W = {(a, b) in R^2 : a - 2b = 0}; then W is a subspace of R^2. Indeed, let (a, b) and (x, y) be in W, so that a - 2b = 0 and x - 2y = 0. Now
(a + x) - 2(b + y) = (a - 2b) + (x - 2y) = 0 + 0 = 0.
So (a, b) + (x, y) is in W. Furthermore, ka - 2(kb) = k(a - 2b) = k(0) = 0. Hence k(a, b) is in W, and W is a subspace of R^2.
d) is not a subspace of R^2.
e) W = {(a, b) in R^2 : b = a^2} is not a subspace of R^2.
Note that (1, 1) is in W but 2(1, 1) = (2, 2) is not in W, since 2 ≠ 2^2.
Linear combinations
In this section we discuss a method of creating subspaces from a finite set of vectors.
Definition
A vector x = c_1 x_1 + c_2 x_2 + c_3 x_3 + ... + c_n x_n, where c_1, c_2, c_3, ..., c_n are scalars and x_1, x_2, ..., x_n are vectors in a vector space V, is called a linear combination of the x_i's.
Remark
If the x_i's are orthogonal, the scalars can be found as follows. From
x = c_1 x_1 + c_2 x_2 + c_3 x_3 + ... + c_n x_n,
we dot both sides with x_1 to obtain
x_1.x = c_1 (x_1.x_1) + c_2 (x_2.x_1) + ... + c_n (x_n.x_1) = c_1 (x_1.x_1) + 0 + ... + 0 = c_1 (x_1.x_1),
i.e. x_1.x = c_1 ||x_1||^2. For x_1 ≠ 0 we have that c_1 = (x_1.x) / ||x_1||^2. Similarly, c_i = (x_i.x) / ||x_i||^2 for each i.
Example 1
Verify that the two given vectors are orthogonal and hence write the given vector as a linear combination of the two vectors.
Solution
. So and are orthogonal.
Let
).
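The computation in this kind of example can be reproduced numerically. In the Python sketch below (assuming NumPy), the vectors x1, x2 and x are illustrative choices standing in for those of the example.

import numpy as np

x1 = np.array([1.0, 2.0])
x2 = np.array([-2.0, 1.0])
x  = np.array([3.0, 4.0])
print(x1.dot(x2))                           # 0.0, so x1 and x2 are orthogonal
c1 = x1.dot(x) / x1.dot(x1)                 # c1 = (x1.x) / ||x1||^2
c2 = x2.dot(x) / x2.dot(x2)                 # c2 = (x2.x) / ||x2||^2
print(np.allclose(c1 * x1 + c2 * x2, x))    # True: x = c1*x1 + c2*x2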
Example 2
Proof
Suppose that
Question
2. Verify that the three given vectors are mutually orthogonal and hence write the given vector as a linear combination of the three vectors.
Definition
The subset W of all linear combinations of x_1, x_2, ..., x_n is called the linear span of x_1, x_2, ..., x_n. We write
W = span{x_1, x_2, ..., x_n}
for the linear span of x_1, x_2, ..., x_n.
Theorem
The linear span of a set of vectors x_1, x_2, ..., x_n is a vector subspace of the vector space V.
Proof
Let u = c_1 x_1 + c_2 x_2 + ... + c_n x_n and v = c'_1 x_1 + c'_2 x_2 + ... + c'_n x_n be elements of W = span{x_1, x_2, ..., x_n}. Then
u + v = (c_1 + c'_1) x_1 + (c_2 + c'_2) x_2 + ... + (c_n + c'_n) x_n,
which is again a linear combination of x_1, x_2, ..., x_n and hence lies in W. Note that we have obtained the above by repeated use of the associative and commutative laws. Finally, ku = (k c_1) x_1 + (k c_2) x_2 + ... + (k c_n) x_n is also in W. Hence W is a subspace of V.
CHAPTER III
LINEAR DEPENDENCE, BASES AND DIMENSION
The objective in this chapter is to obtain the least number of vectors that can generate (span) a vector space.
Definition
A set of vectors x_1, x_2, ..., x_n in a vector space V is said to be linearly dependent if there exist scalars c_1, c_2, ..., c_n, not all zero, such that c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0.
Example
Note that any set of vectors which contains the zero vector is linearly dependent, since 0 = 0x_1 + 0x_2 + ... + 0x_n, i.e. the zero vector is a linear combination of the remaining vectors.
Theorem
A set of vectors x_1, x_2, ..., x_n in a vector space V is linearly dependent if and only if at least one of the vectors is a linear combination of the others.
Proof
Suppose at least one of the vectors is a linear combination of the others. We can assume x_1 is a linear combination of the others; if not, we re-arrange the set so that x_1 is a linear combination of the rest:
x_1 = c_2 x_2 + c_3 x_3 + ... + c_n x_n.
So
0 = (-1)x_1 + c_2 x_2 + c_3 x_3 + ... + c_n x_n,
and since the coefficient of x_1 is -1 ≠ 0, the set is linearly dependent.
Conversely, suppose c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0 and not all the c_i's are zero. Assume that c_1 ≠ 0; if c_1 = 0, re-arrange the equation so that c_1 ≠ 0. Then
c_2 x_2 + c_3 x_3 + ... + c_n x_n = -c_1 x_1,
i.e. x_1 = -(c_2/c_1) x_2 - (c_3/c_1) x_3 - ... - (c_n/c_1) x_n, so x_1 is a linear combination of the others.
LINEAR INDEPENDENCE
Definition
A set of vectors x_1, x_2, ..., x_n in a vector space V is said to be linearly independent if it is not linearly dependent.
Theorem
The vectors x_1, x_2, ..., x_n are linearly independent if and only if the equation c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0 has one and only one solution, namely c_1 = c_2 = ... = c_n = 0.
Proof
Suppose that x_1, x_2, ..., x_n are linearly independent and c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0. If any of the scalars c_i is non-zero then x_1, x_2, ..., x_n are linearly dependent, contradicting the initial assumption. Hence c_1 = c_2 = ... = c_n = 0 is the only solution.
Conversely, suppose that the equation c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0 has one and only one solution c_1 = c_2 = ... = c_n = 0. We must show that the x_i's are independent. Suppose instead that the x_i's are linearly dependent. Then there exist scalars c_1, c_2, ..., c_n, not all zero, such that c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0. This contradicts the initial assumption that all such scalars are zero.
Examples
1. Show that
Proof
,
Proof
Suppose that
2. Prove that the two given functions are linearly independent in the space of continuous functions.
Proof
…….i
Differentiating
….ii
……iii
…..iv
But , hence
Questions
4. Prove that if x_1, x_2, ..., x_n are non-zero orthogonal vectors, then they are linearly independent.
5 Prove that if are linearly independent in a vector space V then are also linearly
independent.
We introduce matrices as useful tools of the trade, as will be evident in the subsequent topics.
Matrix multiplication is computed by taking dot products of the rows of the first matrix with the columns of the second matrix. More precisely, the dot product of the i-th row of A with the j-th column of B yields the (i, j) entry of AB. Matrix multiplication is only possible when the number of columns of the first matrix is the same as the number of rows of the second matrix.
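A small numerical illustration of this rule (Python with NumPy; the matrices are arbitrary choices):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])            # 2x3
B = np.array([[ 7.0,  8.0],
              [ 9.0, 10.0],
              [11.0, 12.0]])               # 3x2
AB = A @ B                                 # 2x2 product
print(AB)
print(np.isclose(A[0, :].dot(B[:, 1]), AB[0, 1]))  # the (1,2) entry is row 1 of A dotted with column 2 of B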
Echelon form
A matrix A is said to be in echelon form if, for each non-zero row, the first non-zero element has only zeros below it (unless the row is the last row in the matrix).
Example
Each matrix can be reduced to echelon form by use of elementary row operations
Example
Solution
Replace the second row with the sum of a suitable multiple of the first row and the second row, and replace the third row with the sum of a suitable multiple of the first row and the third row. Finally, replace the third row with the sum of a suitable multiple of the second row and the third row.
3 Show that the rows of the matrix below are linearly independent
Theorem
The nonzero rows of a matrix when reduced to echelon form are linearly independent
To test a set of vectors for linear independence, form the matrix whose rows are the given vectors and reduce it to echelon form. If the echelon form does not contain a zero row, the vectors are linearly independent. If on the other hand the echelon form does contain a zero row, then the vectors are linearly dependent.
Example
Solution
The echelon form does not contain a zero row, so the three vectors are linearly independent.
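The same test can be run numerically. In the sketch below (Python with NumPy) the three row vectors are illustrative choices: the rows are linearly independent exactly when the rank of the matrix equals the number of rows.

import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
print(np.linalg.matrix_rank(M) == M.shape[0])   # True: the rows are independent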
2) Show that the vectors are linearly dependent and show clearly how one of the vectors is a
linear combination of the other vectors
Solution
Let
The echelon form equivalent to A has a zero row so the original vectors are linearly dependent.
Furthermore,
Hence,
Clearly,
3) Show that the given polynomials are linearly dependent and clearly indicate how one of the polynomials is a linear combination of the other polynomials.
4) Prove that any three non-zero vectors in R^2 are linearly dependent.
Theorem
Given that the vectors x_1, x_2, ..., x_n are linearly independent and x = c_1 x_1 + c_2 x_2 + ... + c_n x_n, this representation of x is unique.
Proof
Suppose also that x = c'_1 x_1 + c'_2 x_2 + ... + c'_n x_n. Subtracting the two representations gives (c_1 - c'_1) x_1 + (c_2 - c'_2) x_2 + ... + (c_n - c'_n) x_n = 0. Since the x_i's are linearly independent, c_i - c'_i = 0 for each i. Hence c_i = c'_i for each i, and the representation is unique.
Definition
A set of vectors x_1, x_2, ..., x_n is called a finite basis of a vector space V if x_1, x_2, ..., x_n are linearly independent and generate (span) the vector space V, i.e. span{x_1, x_2, ..., x_n} = V.
Definition
A vector space V is said to be finite dimensional if V has a finite basis. Otherwise, V is infinite
dimensional.
Theorem
Every vector space V that is different from the trivial vector space has a basis
Theorem
A set of vectors {x_1, x_2, ..., x_n} is a basis of V if and only if {x_1, x_2, ..., x_n} is a maximal linearly independent set in V.
Remark
This means that the vectors x_1, x_2, ..., x_n are linearly independent and, if a new element is introduced, the resulting set is linearly dependent.
Proof
Suppose {x_1, x_2, ..., x_n} is a finite basis of the vector space V. We show that the set is maximal linearly independent. Let x be in V. Since x_1, x_2, ..., x_n form a basis, x is a linear combination of x_1, x_2, ..., x_n; hence {x_1, x_2, ..., x_n, x} is a linearly dependent set. Hence no strictly larger set is linearly independent, so {x_1, x_2, ..., x_n} is maximal linearly independent.
Conversely
Suppose that {x_1, x_2, ..., x_n} is maximal linearly independent. We must show that {x_1, x_2, ..., x_n} is a basis. Let x be in V. Then {x, x_1, x_2, ..., x_n} is a linearly dependent set, by maximality. It must be that x is a linear combination of x_1, x_2, ..., x_n, for otherwise the larger set would be linearly independent. Hence span{x_1, x_2, ..., x_n} = V and the set is a basis.
Theorem
A set of vectors {x_1, x_2, ..., x_n} is a basis for a vector space V if and only if {x_1, x_2, ..., x_n} is a minimal generating (spanning) set of vectors for the vector space V.
Remark
The set {x_1, x_2, ..., x_n} generates the vector space V, but if any one of the vectors is deleted from the list, the remaining vectors do not generate V.
Theorem
If a vector space V has a finite basis then each basis of V has the same number of elements
Example
Let
So the given vectors span R^3.
Definition
If V has a finite basis and n is the number of elements in a basis of V, then the dimension of V is n.
Definition
Example
The dimension of R^4 is 4: in R^4 the standard basis vectors e_1 = (1,0,0,0), e_2 = (0,1,0,0), e_3 = (0,0,1,0), e_4 = (0,0,0,1) are linearly independent and span R^4.
i.
ii.
iii.
Solution
a,bR
The given vectors generate W1; since they are orthogonal, they are linearly independent and hence form a basis for W1.
The given vectors are linearly independent, hence they form a basis for the subspace; its dimension is two.
Hence the given vectors are linearly independent and form a basis for the subspace.
CHAPTER IV
SYSTEMS OF LINEAR EQUATIONS
A system of equations of the type
a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m
is called a linear system of m equations in n variables.
The matrix A = (a_ij) of coefficients is called the coefficient matrix, and the vector b = (b_1, b_2, ..., b_m)^T is called the constant matrix. The system can now be summarized in a vector equation of the type Ax = b, where a solution x to the system is a vector in R^n.
So, to solve the system we note that we obtain an equivalent system by any of the following operations: interchanging two equations; multiplying an equation by a non-zero constant; adding a multiple of one equation to another equation.
These operations are called elementary row operations, similar to those defined on matrices. The technique is called the Gaussian elimination method. It involves applying a sequence of row operations to reduce the number of variables in each subsequent equation, and is illustrated below with the system
(i)   x + 2y + 3z = 3
(ii)  2x + 4y + 5z = 0
(iii) x - 3y - z = 5
Solution
We apply elementary operations on the system to obtain an equivalent system where each subsequent equation has fewer variables. Twice equation (i) minus equation (ii) gives z = 6, and equation (i) minus equation (iii) gives 5y + 4z = -2, so y = -26/5; then from equation (i), x = 3 - 2y - 3z = -23/5.
The same steps can be achieved by use of row operations on the augmented matrix [A | b]. The objective is to reduce the augmented matrix to echelon form and then recover a simpler system:
[ 1  2  3 |  3 ]
[ 2  4  5 |  0 ]
[ 1 -3 -1 |  5 ]

Applying 2R1 - R2 -> R2 and R1 - R3 -> R3:

[ 1  2  3 |  3 ]
[ 0  0  1 |  6 ]
[ 0  5  4 | -2 ]

Interchanging R2 and R3:

[ 1  2  3 |  3 ]
[ 0  5  4 | -2 ]
[ 0  0  1 |  6 ]

From the reduced form we obtain the same equations as were found by the earlier method.
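The solution can also be checked directly. A minimal Python sketch (assuming NumPy, and using the system as reconstructed above):

import numpy as np

A = np.array([[1.0,  2.0,  3.0],
              [2.0,  4.0,  5.0],
              [1.0, -3.0, -1.0]])
b = np.array([3.0, 0.0, 5.0])
print(np.linalg.solve(A, b))   # [-4.6, -5.2, 6.0], i.e. x = -23/5, y = -26/5, z = 6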
Question
Solution
Let then
Question
Question
1.
2.
Solution
From the echelon form we recapture the equations. From the equations we obtain an equation of the form 0 = c with c ≠ 0. This is a contradiction; hence the system is inconsistent.
Remark
A linear system either has a unique solution, or infinitely many solutions, or is inconsistent.
The method above can be used to solve completely a system which has a unique solution. The augmented matrix is reduced to the form [I | c], from which the equivalent system is x_1 = c_1, x_2 = c_2, ..., x_n = c_n. Hence the solution can be read off directly.
Question
Continuing the reduction of the earlier augmented matrix,

[ 1  2  3 |  3 ]
[ 0  5  4 | -2 ]
[ 0  0  1 |  6 ]

Applying R2 - 4R3 -> R2' and R1 - 3R3 -> R1':

[ 1  2  0 | -15 ]
[ 0  5  0 | -26 ]
[ 0  0  1 |   6 ]

Applying (1/5)R2' -> R2'' and then R1' - 2R2'' -> R1'':

[ 1  0  0 | -23/5 ]
[ 0  1  0 | -26/5 ]
[ 0  0  1 |   6   ]

leading to the same solution, namely x = -23/5, y = -26/5, z = 6. This is the same result we obtained earlier. Of course the technique involves many more steps, hence one needs care to avoid errors.
Question
Solution
Definition
If A is a matrix, the number of non-zero rows when A is reduced to echelon form is called the rank of the matrix.
Example
Solution
Rank (A)=2
The rank of A is 2, since the reduced matrix equivalent to A has two non-zero rows.
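Since the matrix of this example is not shown above, the sketch below uses an arbitrary 3x3 matrix whose third row is the sum of the first two, so its rank is 2 (Python with NumPy):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [1.0, 3.0, 7.0]])   # third row = first row + second row
print(np.linalg.matrix_rank(A))   # 2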
A square matrix A is said to be a non-singular matrix if there exists a square matrix B such that AB = BA = I; the matrix B is called the inverse of A and is written A^(-1).
i)
ii)
iii)
Proof
The method of reduction to echelon form can be used to find the inverse of a square matrix. Actually we solve the equation AX = I. The augmented matrix [A | I] is reduced to the form [I | B]; then X = B, so the matrix B is the inverse of A.
Example
Solution
Hence
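Since the matrix of this example is not reproduced above, the sketch below illustrates the same idea numerically with an arbitrary 2x2 matrix (Python with NumPy):

import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = np.linalg.inv(A)
print(B)                               # [[ 3. -1.], [-5.  2.]]
print(np.allclose(A @ B, np.eye(2)))   # True: AB = I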
Question
Definition
A square matrix A can be written in the form A = LU, where L is a lower triangular matrix and U is an upper triangular matrix (provided no row interchanges are needed in the reduction). This is called the LU decomposition of the matrix A.
The matrix U can be found by reducing A to echelon form; then L can be found by solving the matrix equation LU = A.
Solution
So let
Hence
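A numerical illustration (Python with SciPy; the matrix is an arbitrary choice, and SciPy's routine also returns a permutation matrix P because it may reorder rows for stability, so the factorisation it computes is A = P L U):

import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
P, L, U = lu(A)
print(L)                          # lower triangular with unit diagonal
print(U)                          # upper triangular
print(np.allclose(P @ L @ U, A))  # True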
CHAPTER V
LINEAR TRANSFORMATIONS
Definition
A mapping T from a linear space V to a linear space W is called a linear transformation if
T(ax + by) = aT(x) + bT(y)
for all scalars a, b and all vectors x, y in V.
Remark
The scalar multiplication and vector addition on the left-hand side are performed in V, while those on the right-hand side are performed in W.
Theorem
i.
ii.
iii.
Set Then
Set then
Example
Proof
Proof
However
Example
Proof
Note that T(0) = 0.
Example
Proof
Definition
Let T be a linear transformation of the linear space V into the linear space W. Then the kernel of T is the set Ker(T) = {x in V : T(x) = 0}, and the image of V under T is the set Im(T) = {T(x) : x in V}.
Theorem
i. Ker(T) is a subspace of V.
ii. The image of V under T is a subspace of W.
Proof of (i)
Let x, y be in Ker(T) and k in R. Then T(x + y) = T(x) + T(y) = 0 + 0 = 0, hence x + y is in Ker(T). So, T(kx) = kT(x) = k0 = 0, hence kx is in Ker(T). Therefore Ker(T) is a subspace of V.
Examples
2. Determine the dimensions of the kernel and of the image space of the transformation below:
Theorem
Proof
If V is a finite-dimensional vector space then to each linear transformation we associate a matrix; conversely, a matrix represents a linear transformation of a finite-dimensional space.
Let {e_1, e_2, ..., e_n} be a basis for the vector space V and let T be a linear transformation of V into W, where W has basis {f_1, f_2, ..., f_m}. Each vector x in V can be written uniquely as a linear combination of the basis vectors, i.e. x = x_1 e_1 + x_2 e_2 + ... + x_n e_n. Hence,
T(x) = x_1 T(e_1) + x_2 T(e_2) + ... + x_n T(e_n),
and each T(e_j) can in turn be written as a linear combination of f_1, f_2, ..., f_m; these coefficients form the j-th column of the matrix A that represents T.
Note that the basis vectors can be written as column vectors, namely e_1 = (1, 0, ..., 0)^T, e_2 = (0, 1, ..., 0)^T, and so on, where it is understood that the entries are coordinates with respect to the chosen basis. So A e_1 is the first column of A, A e_2 is the second column, and so on. It is clear that the columns of the matrix are precisely the images of the basis vectors.
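A small Python sketch of this construction (the transformation T here is an arbitrary illustrative choice): the columns of the representing matrix are obtained by applying T to the standard basis vectors.

import numpy as np

def T(v):
    # Illustrative linear transformation from R^2 to R^3: T(x, y) = (x + y, x - y, 2y).
    x, y = v
    return np.array([x + y, x - y, 2 * y])

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])   # columns are the images of the basis vectors
v = np.array([3.0, -2.0])
print(np.allclose(A @ v, T(v)))       # True: the matrix A reproduces T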
Example
Solution
Solution
Obviously, a change of basis will result in a different matrix representation of a linear function F.
Solution
(i) We first find the image of the basis vectors under T as follows
(ii) Now
And
4. Let D be the differentiation transformation defined on the space V of all polynomials of degree three or less. Find the matrix for D in each of the two given bases (i) and (ii).
5. Let V be the Euclidean space and let the linear transformation T be represented by the given matrix in the standard basis. (i) Write the transformation T in Cartesian form. (ii) Hence find the matrix for T using the given basis.
Proof
Proof
The identity mapping is a linear transformation that is both one to one and onto.
Definition If T is a one-to-one, onto mapping of V into W, then the mapping T^(-1) from W to V, defined by T^(-1)(y) = x whenever T(x) = y, is called the inverse function of T.
Theorem
If T is a linear transformation that admits an inverse, then the inverse is a linear transformation.
Proof
Let w_1, w_2 be elements of W such that there exist v_1, v_2 in V with T(v_1) = w_1 and T(v_2) = w_2. Then, for scalars a and b, from the linearity of T we have T(a v_1 + b v_2) = a w_1 + b w_2, so T^(-1)(a w_1 + b w_2) = a v_1 + b v_2 = a T^(-1)(w_1) + b T^(-1)(w_2).
Definition A vector space V is said to be isomorphic to a vector space W if there exists an invertible
linear transformation T from V onto W
Theorem
If x_1, x_2, ..., x_n are linearly independent vectors of V and T is a one-to-one linear map of V into W, then T(x_1), T(x_2), ..., T(x_n) are linearly independent in W.
Proof
Suppose c_1 T(x_1) + c_2 T(x_2) + ... + c_n T(x_n) = 0. By linearity, T(c_1 x_1 + c_2 x_2 + ... + c_n x_n) = 0, so c_1 x_1 + c_2 x_2 + ... + c_n x_n lies in the kernel of T. Now the kernel of T consists of the zero vector alone (T is one-to-one), so it must be that c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0. Since the x_i's are linearly independent, it follows that c_1 = c_2 = ... = c_n = 0.
Remark: It follows from the above result that isomorphic spaces have the same dimension.
Corollary
Every n-dimensional vector space V is isomorphic to R^n.
Proof
Let {e_1, e_2, ..., e_n} be a basis of V. The mapping T that sends x = x_1 e_1 + x_2 e_2 + ... + x_n e_n to the coordinate vector (x_1, x_2, ..., x_n)^T in R^n is an isomorphism. The fact that T is linear can be easily verified. Also, note that if T(x) = 0 then every coordinate x_i is zero, so x = 0; hence T is one-to-one, and it is clearly onto.