
UNIVERSITY OF NAIROBI

SCHOOL OF MATHEMATICS

A FIRST COURSE IN LINEAR ALGEBRA

(Dr. Arthur Wafula)


Preface

The work presented in this book is the result of teaching my students for close to two decades. Much
of the material can be found in other excellent books written by eminent authors. The approach
used in the book is student friendly and can be followed by anyone in the early years of
undergraduate study. The material presented is sufficient for an introductory single-semester
course in linear algebra. A number of theorems are stated without proof to allow ease of reading.

A Wafula


TABLE OF CONTENTS

Chapter I     Vector spaces

Chapter II    Vector subspaces

Chapter III   Bases of vector spaces

Chapter IV    Linear systems of equations

Chapter V     Linear transformations


CHAPTER I

VECTOR SPACES

We discuss a basic structure in linear algebra that forms the foundation for the solution of
problems in many diverse fields, namely the vector space.

Definition:

A vector space V is a non-empty set of elements called vectors on which are defined a concept of
equality of vectors and two operations, called vector addition and scalar multiplication, with the
following properties.

i.    If a, b ∈ V then a + b ∈ V. (Closure of vector addition.)
ii.   ka ∈ V for all a ∈ V and k ∈ R. (Closure of scalar multiplication.)
iii.  (a + b) + c = a + (b + c) for all vectors a, b, c ∈ V. (Associative law.)
iv.   There exists a vector called zero, written 0, such that 0 + a = a for all a ∈ V.
v.    For every vector a ∈ V there exists a vector called minus a, denoted by -a, such that a + (-a) = 0.
vi.   a + b = b + a for all vectors a and b in V.
vii.  k(a + b) = ka + kb for all a and b in V and k in R.
viii. (p + q)a = pa + qa for all vectors a in V and scalars p and q in R.
ix.   (pq)a = p(qa) for all scalars p and q in R and vectors a in V.
x.    1a = a for all a in V.

Examples of vector spaces

1. The space R² of all column vectors x with two real entries x1, x2, where addition and scalar
multiplication are performed entry by entry. One can verify that the conditions for a vector space
are satisfied.
2. Consider Rⁿ, the space of all column vectors with n real entries, where addition in Rⁿ is
performed according to corresponding entries, namely

(x1, x2, …, xn) + (y1, y2, …, yn) = (x1 + y1, x2 + y2, …, xn + yn),

and scalar multiplication is given by k(x1, x2, …, xn) = (kx1, kx2, …, kxn) for any k in R.

The zero in Rⁿ is the vector with all entries zero. One can verify that Rⁿ is a vector space. It is
called the Euclidean vector space. (A short computational illustration follows these examples.)

3. Let M(2) be the space of all 2×2 matrices, where vector addition is performed entry by entry and
a scalar multiplies every entry. The zero vector is the 2×2 matrix all of whose entries are zero.
One can verify that M(2) is a vector space.

The space Pn consisting of all polynomials in x of degree n or less is a vector space, where
addition and scalar multiplication are defined as follows:

(an x^n + an-1 x^(n-1) + … + a1 x + a0) + (bn x^n + bn-1 x^(n-1) + … + b1 x + b0)
  = (an + bn) x^n + (an-1 + bn-1) x^(n-1) + … + (a1 + b1) x + (a0 + b0)

and

k(an x^n + an-1 x^(n-1) + … + a1 x + a0) = kan x^n + kan-1 x^(n-1) + … + ka1 x + ka0.

The zero in Pn is the zero polynomial, so Pn is a vector space.

4. Let C[0,1] be the space of all continuous real-valued functions on the interval [0,1]. Addition
and scalar multiplication are defined as follows:

(f + g)(x) = f(x) + g(x) and (kf)(x) = kf(x) for all k ∈ R and x ∈ [0,1].

That the above functions are continuous is well established in calculus.

The zero in C[0,1] is the constant zero function. C[0,1] is an example of an infinite-dimensional
vector space.
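As an aside not found in the original notes, the componentwise operations of the Euclidean space Rⁿ
from Example 2 are easy to experiment with on a computer. The sketch below assumes Python with
numpy; the particular vectors and the scalar k are arbitrary illustrative choices.

```python
import numpy as np

# Vectors in R^3 represented as numpy arrays (an illustrative choice, not part of the notes).
a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.0, -1.0])
k = 2.5

# Componentwise vector addition and scalar multiplication.
print(a + b)        # [ 5. -2.  2.]
print(k * a)        # [ 2.5 -5.   7.5]

# A few vector space axioms checked numerically for these vectors.
print(np.allclose(a + b, b + a))                  # commutativity
print(np.allclose(k * (a + b), k * a + k * b))    # distributivity over vector addition
print(np.allclose(a + (-a), np.zeros(3)))         # additive inverse
```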

In the Euclidean space we have a concept of geometry. A vital tool in this vector space that can be
used to investigate its geometry is called the dot product.

Definition

If x, y ∈ Rⁿ the dot product (scalar product) of x and y is defined by:

x·y = x1y1 + x2y2 + … + xnyn


Definition

The vectors x and y are said to be orthogonal (perpendicular) if x·y = 0.

Properties of the dot product

1. x·x ≥ 0, and x·x = 0 if and only if x = 0.
2. x·y = y·x for all x and y.
3. (x + y)·z = x·z + y·z for all x, y and z.
4. x·(y + z) = x·y + x·z for all x, y, z ∈ Rⁿ.
5. (kx)·y = k(x·y) = x·(ky) for all scalars k and vectors x and y.

Definition


In Rⁿ we define the magnitude of a vector x (the norm) to be ‖x‖ = √(x·x) = √(x1² + x2² + … + xn²).

Properties of the norm

From the definition of the norm we identify the properties of the norm as follows:

(i)   ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0.
(ii)  ‖kx‖ = |k| ‖x‖ for all x in Rⁿ and k in R.
(iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all vectors x and y in Rⁿ (the triangle inequality, proved below).
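A quick computational illustration of the dot product, orthogonality and the norm (our own addition,
assuming numpy; the vectors and the scalar are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, -2.0])
y = np.array([2.0, -1.0, 0.0])

print(np.dot(x, y))                                      # 0.0, so x and y are orthogonal
print(np.linalg.norm(x))                                 # 3.0, since sqrt(1 + 4 + 4) = 3
print(np.isclose(np.linalg.norm(x)**2, np.dot(x, x)))    # ||x||^2 = x.x

# Properties (ii) and (iii) of the norm, checked for these particular vectors.
k = -4.0
print(np.isclose(np.linalg.norm(k * x), abs(k) * np.linalg.norm(x)))
print(np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))
```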

Examples

1. Given that x and y are orthogonal vectors, simplify the expression (x + 2y)·(3x + 2y).

Solution

(x + 2y)·(3x + 2y) = x·(3x + 2y) + 2y·(3x + 2y) = 3x·x + 2x·y + 6y·x + 4y·y
                   = 3‖x‖² + 8x·y + 4‖y‖² = 3‖x‖² + 4‖y‖²,

since x·y = y·x = 0 because the vectors are orthogonal. (A numerical check of identities of this
kind is given after this list of examples.)

2. Given that ‖x + 2y‖² = 4 and ‖2x - 3y‖² = 9, find ‖x‖ and ‖y‖ if x and y are orthogonal.

3. Prove the parallelogram law, namely ‖x + y‖² + ‖x - y‖² = 2‖x‖² + 2‖y‖².

4. Prove that the diagonals of a rhombus intersect at right angles.


5. Show that in a right-angled triangle whose longest side is c, it must be that c² = a² + b².
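As a numerical sanity check of identities of this kind (our own addition; the orthogonal vectors
below are arbitrary choices), Example 1 and the parallelogram law can be verified directly:

```python
import numpy as np

# Two orthogonal vectors in R^2, chosen for illustration.
x = np.array([3.0, 0.0])
y = np.array([0.0, 2.0])
assert np.isclose(np.dot(x, y), 0.0)

# Example 1: (x + 2y).(3x + 2y) equals 3||x||^2 + 4||y||^2 when x.y = 0.
lhs = np.dot(x + 2 * y, 3 * x + 2 * y)
rhs = 3 * np.dot(x, x) + 4 * np.dot(y, y)
print(np.isclose(lhs, rhs))   # True

# Parallelogram law: ||x + y||^2 + ||x - y||^2 = 2||x||^2 + 2||y||^2 (holds for any x, y).
lhs = np.linalg.norm(x + y) ** 2 + np.linalg.norm(x - y) ** 2
rhs = 2 * np.linalg.norm(x) ** 2 + 2 * np.linalg.norm(y) ** 2
print(np.isclose(lhs, rhs))   # True
```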

Theorem

The zero in a vector space is unique

Proof

Let 0 and 0' be zeroes in a vector space V and consider 0 + 0'. Then

0 + 0' = 0' (since 0 is a zero).

However,

0 + 0' = 0 (since 0' is a zero). Hence 0 = 0', and so the zero is unique in V.

Theorem

0a 0

For all

Proof

It is clear that

0  0 0 .So

0a  0  0  a 0a  0a
.

And from the uniqueness of zero we must infer that

0a 0 .

Theorem

k0 = 0 for all k ∈ R.

Proof

Now, 0 + 0 = 0 and so k0 = k(0 + 0) = k0 + k0. Consequently, from the uniqueness of 0 it must be
that k0 = 0.

Theorem

-a = (-1)a for all vectors a ∈ V.

Proof

a + (-1)a = (1 + (-1))a = 0a = 0.

From the definition of -a in V, we have that

(-1)a = -a.

Theorem

The vector -a is unique.

Proof

Suppose that b and b' are both minuses of a. Then

b = b + 0 = b + (a + b')

  = (b + a) + b'

  = 0 + b'

  = b'.

Therefore -a is unique.

Theorem

If a b if and only if a  c b  c for all a, b,c  V

Proof

If a b then

 a  c    b  c  (a  c)  (a  c) 0
9

Downloaded by Solomon Wamutu ([email protected])


lOMoARcPSD|46912607

SCHOOL OF MATHEMATICS

Hence

a  c b  c for all a, b,c  V


Conversely, if ,

then

a=a  0 a  (c+-c)=(a+c) +-c=(b+c)-c=b+(c+-c)=b+0=b

The angle between vectors

In this section we apply the geometry of a triangle to derive a remarkable relationship between the
dot product and the angle between two vectors.

[Figure: triangle OAB, with OA = a, OB = b and AB = b - a.]

In the diagram θ is the angle between the vectors a and b. Apply the cosine rule in triangle OAB to
obtain

‖b - a‖² = ‖a‖² + ‖b‖² - 2‖a‖‖b‖ cos θ.

But

‖b - a‖² = (b - a)·(b - a) = b·b - b·a - a·b + a·a = ‖b‖² - 2a·b + ‖a‖².

Hence

‖a‖² + ‖b‖² - 2a·b = ‖a‖² + ‖b‖² - 2‖a‖‖b‖ cos θ, i.e. a·b = ‖a‖‖b‖ cos θ, so that

cos θ = (a·b) / (‖a‖‖b‖).

Note that if a and b are parallel vectors then b = ka for some scalar k, so

cos θ = (a·ka) / (‖a‖‖ka‖) = k‖a‖² / (|k|‖a‖²) = ±1, i.e. θ = 0 or 180 degrees. This agrees well
with the orientation of parallel vectors.

If a and b are perpendicular then θ = 90° and cos θ = 0, so a·b = 0. On the other hand, if a·b = 0
then cos θ = 0 and hence θ = 90°.
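The relation cos θ = (a·b)/(‖a‖‖b‖) translates directly into code. The following sketch is our own
illustration (assuming numpy); angle_between is a made-up helper name and the vectors are arbitrary.

```python
import numpy as np

def angle_between(a, b):
    """Angle between two non-zero vectors, in degrees, via cos(theta) = a.b / (||a|| ||b||)."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against tiny floating-point overshoot outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(angle_between(np.array([1.0, 0.0]), np.array([0.0, 1.0])))    # 90.0
print(angle_between(np.array([1.0, 1.0]), np.array([2.0, 2.0])))    # 0.0 (parallel)
print(angle_between(np.array([1.0, 0.0]), np.array([-3.0, 0.0])))   # 180.0
```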

Example

1. Determine the acute angle between two given vectors, using cos θ = (a·b)/(‖a‖‖b‖).
2. Prove that ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all vectors x, y in Rⁿ.

Proof

‖x + y‖² = (x + y)·(x + y) = ‖x‖² + 2x·y + ‖y‖²
         = ‖x‖² + 2‖x‖‖y‖ cos θ + ‖y‖² ≤ ‖x‖² + 2‖x‖‖y‖ + ‖y‖² = (‖x‖ + ‖y‖)².

The result follows on taking square roots of both sides. This relationship is called the triangle
inequality. It alludes to the fact that in a triangle the length of any side cannot exceed the sum
of the lengths of the other two sides.

The equation of a straight line

[Figure: points A and B with position vectors a and b; the direction of the line is b - a.]

If P, with position vector r, is a general point on the line through A and B, the equation of the
line is given by

r = a + t(b - a), where t is a scalar.

Example

1. Write the vector equation of the line joining (0, 1, -1) and (3, 0, 2) and hence find the
parametric equations of the line.

Solution

r = (0, 1, -1) + t[(3, 0, 2) - (0, 1, -1)] = (0, 1, -1) + t(3, -1, 3)

is the vector equation of the line.

The parametric equations of the line are x = 3t, y = 1 - t, z = -1 + 3t.

2. Find the vector equation and parametric equations of the line joining two given points, using
r = a + t(b - a) as above.
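A small sketch (our own, assuming numpy) of the same computation as Example 1, treating the points
as arrays; point_on_line is an illustrative helper name.

```python
import numpy as np

# Line through A(0, 1, -1) and B(3, 0, 2): r(t) = a + t(b - a).
a = np.array([0.0, 1.0, -1.0])
b = np.array([3.0, 0.0, 2.0])
direction = b - a                      # (3, -1, 3)

def point_on_line(t):
    """Position vector of the point on the line with parameter t."""
    return a + t * direction

print(point_on_line(0.0))   # A itself: [ 0.  1. -1.]
print(point_on_line(1.0))   # B itself: [ 3.  0.  2.]
print(point_on_line(0.5))   # midpoint: [ 1.5  0.5  0.5]
```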

Equation of a plane

The equation of a plane in R³ is of the form ax + by + cz = k, where a, b, c and k are constants.

The vector n = (a, b, c) is the normal to the plane. To see this fact, note that if (x0, y0, z0) is
a particular point on the plane and (x, y, z) is a general point on the same plane, then the vector
joining them lies in the plane, so n·(x - x0, y - y0, z - z0) = 0. On further expansion we have that
ax + by + cz = ax0 + by0 + cz0. Consequently, ax + by + cz = k with k = ax0 + by0 + cz0.

Remarks

The equation ax + by + cz = 0 is the equation of a plane through the origin.

When b = c = 0 we have the equation of a plane parallel to the yz-plane.
When a = c = 0 we have the equation of a plane parallel to the xz-plane.
Finally, when a = b = 0 we have the equation of a plane parallel to the xy-plane.

Question
1. Find the point of intersection of the line joining two given points with a given plane.

Question
2. Determine the acute angle between two given planes.

Distance of a plane from the origin:
Let A be the nearest point from the origin on the plane ax + by + cz = k; then A is located on the
normal to the plane, so A = t(a, b, c) for some scalar t.
Substituting in the equation of the plane we obtain
t(a² + b² + c²) = k.
From which we find that
t = k / (a² + b² + c²).
Therefore the shortest distance of the plane from the origin is
‖A‖ = |t| √(a² + b² + c²) = |k| / √(a² + b² + c²).

Distance of a plane from a given point:
Let A be the nearest point on the plane from a given point Q = (x0, y0, z0); then A is located on
the normal to the plane passing through Q. We make the transformation x' = x - x0, y' = y - y0,
z' = z - z0, so that the new origin shifts to Q.
The equation of the plane becomes a(x' + x0) + b(y' + y0) + c(z' + z0) = k,
which on rearrangement becomes ax' + by' + cz' = k - (ax0 + by0 + cz0).
Now from the previous description the distance is
|k - (ax0 + by0 + cz0)| / √(a² + b² + c²).
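The distance formulas above reduce to a one-line computation. The sketch below is our own
illustration (assuming numpy); the plane 2x + y - 2z = 6 and the test points are arbitrary examples.

```python
import numpy as np

def distance_point_to_plane(n, k, q):
    """Distance from point q to the plane n.x = k, i.e. |n.q - k| / ||n||."""
    n = np.asarray(n, dtype=float)
    q = np.asarray(q, dtype=float)
    return abs(np.dot(n, q) - k) / np.linalg.norm(n)

# Plane 2x + y - 2z = 6, with normal n = (2, 1, -2), so ||n|| = 3.
n, k = [2.0, 1.0, -2.0], 6.0
print(distance_point_to_plane(n, k, [0.0, 0.0, 0.0]))    # 2.0, distance from the origin
print(distance_point_to_plane(n, k, [1.0, 2.0, -1.0]))   # 0.0, the point lies on the plane
```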

CHAPTER II

VECTOR SUBSPACES


Definition:
Given a vector space V, a non-empty subset W of V is called a vector subspace of V if W is a vector
space with respect to vector addition and scalar multiplication in V.
Theorem
W is a vector subspace of V if and only if

i.  a + b ∈ W for all a, b ∈ W
ii. ka ∈ W for all a ∈ W and k ∈ R

Proof

Suppose that W is a vector subspace of V; then W is a vector space in its own right, hence (i) and
(ii) apply.

Conversely

Suppose that (i) and (ii) are satisfied in W. Then we need to show that the eight remaining
conditions of a vector space apply in W.

Since W ≠ ∅ there is some a ∈ W.

Now 0a = 0 ∈ W by (ii), and then

(-1)a = -a ∈ W, also by (ii).

All the remaining six conditions apply in W since W is a subset of V.

Examples

a) Let V be a vector space; then W = {0} is a vector subspace of V.

b) Let V be a vector space and W = V; then W is a vector subspace of V.

c) Let V = R² and W = {(a, b) ∈ R² : a + 2b = 0}. Then W is a vector subspace of R².

Let (a, b), (x, y) ∈ W; then

a + 2b = 0 and x + 2y = 0. Now

(a + x) + 2(b + y) = (a + 2b) + (x + 2y) = 0 + 0 = 0,

so (a, b) + (x, y) = (a + x, b + y) ∈ W.

Furthermore,

ka + 2(kb) = k(a + 2b) = k·0 = 0,

so k(a, b) ∈ W. Hence W is a vector subspace of R².

d) A subset of R² that fails either closure condition is not a subspace of R².

e) W = {(a, b) ∈ R² : b = a²} is not a subspace of R².
Note that (1, 1) ∈ W but 2(1, 1) = (2, 2) ∉ W, since 2 ≠ 2².

Linear combinations
In this section we discuss a method of creating subspaces from a finite set of vectors.

Definition
x c1x1  c 2 x 2  c3x 3  c n x n where c1,c 2 ,c3 c n are scalars
A vector
x1, x 2 .x n
are vectors in a vector space V is called a linear combination of the x i ’s

Remark
If the xi's are orthogonal, the scalars can be found as follows. From

x = c1x1 + c2x2 + c3x3 + … + cnxn

we dot both sides with x1 to obtain

x1·x = c1(x1·x1) + c2(x2·x1) + … + cn(xn·x1) = c1(x1·x1) + 0 = c1‖x1‖².

For x1 ≠ 0 we have ‖x1‖² ≠ 0. Hence c1 = (x·x1)/‖x1‖².
Similarly, ci = (x·xi)/‖xi‖² for each i.
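The formula ci = (x·xi)/‖xi‖² is straightforward to apply numerically. The sketch below is our own
illustration (assuming numpy); the orthogonal set and the coefficients are arbitrary choices.

```python
import numpy as np

# An orthogonal set in R^3 (chosen for illustration) and a target vector x.
x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([1.0, -1.0, 0.0])
x3 = np.array([0.0, 0.0, 2.0])
x = 2 * x1 - 3 * x2 + 0.5 * x3          # so the true coefficients are 2, -3, 0.5

# c_i = (x . x_i) / ||x_i||^2, valid because the x_i are mutually orthogonal.
coeffs = [np.dot(x, xi) / np.dot(xi, xi) for xi in (x1, x2, x3)]
print(coeffs)                            # [2.0, -3.0, 0.5]
```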

Example 1

Verify that two given vectors are orthogonal and hence write a given vector as a linear combination
of the two vectors, using the formula ci = (x·xi)/‖xi‖² above.

Example 2

Show that a given vector need not be a linear combination of two given vectors: equating
coefficients can lead to an inconsistent system (an absurdity such as 1 = 0), in which case no such
linear combination exists.

Question

Verify that three given vectors are orthogonal and hence write a given vector as a linear
combination of the three vectors.

Definition

The subset W of all linear combinations of x1, x2, …, xn is called the linear span of x1, x2, …, xn.

We write

W = span{x1, x2, …, xn} for the linear span of x1, x2, …, xn.

Theorem

The linear span of a set of vectors x1, x2, …, xn is a vector subspace of the vector space V.

Proof


Let c1x1 + c2x2 + … + cnxn and c'1x1 + c'2x2 + … + c'nxn be elements of W.

Then

(c1x1 + c2x2 + … + cnxn) + (c'1x1 + c'2x2 + … + c'nxn)

= (c1 + c'1)x1 + (c2 + c'2)x2 + … + (cn + c'n)xn ∈ W.

Note that we have obtained the above by repeated use of the associative and commutative laws.

Finally,

k(c1x1 + c2x2 + … + cnxn) = kc1x1 + kc2x2 + … + kcnxn ∈ W.

The subset W of all linear combinations of the vectors x1, x2, …, xn is called the linear subspace
generated by x1, x2, …, xn.


CHAPTER III

BASES OF VECTOR SPACES

The objective in this chapter is to obtain the least number of vectors that can generate (span) a
vector space

Definition


A set of vectors x1, x2, …, xn in a vector space V is said to be linearly dependent if there exists
a vector in the set {x1, x2, …, xn} which is a linear combination of the other vectors.

Example

Any set of vectors which contains the zero vector is linearly dependent, since
0 = 0x1 + 0x2 + … + 0xn expresses 0 as a linear combination of the other vectors.

Theorem

A set of vectors x1, x2, …, xn in a vector space V is linearly dependent if and only if there exist
scalars c1, c2, …, cn, not all zero, such that c1x1 + c2x2 + … + cnxn = 0.

Proof

Suppose that x1, x2, …, xn are linearly dependent.

At least one of the vectors is a linear combination of the others. We can assume x1 is a linear
combination of the others; if not, we re-arrange the set so that x1 is a linear combination of the
rest:

x1 = c2x2 + c3x3 + … + cnxn.

So

0 = -x1 + c2x2 + c3x3 + … + cnxn,

where the first scalar is -1 ≠ 0.

Conversely,

suppose c1x1 + c2x2 + … + cnxn = 0 and not all the ci's are zero. Assume that c1 ≠ 0 (if c1 = 0,
re-arrange the equation so that c1 ≠ 0). Then

c2x2 + c3x3 + … + cnxn = -c1x1,

i.e. x1 = -(c2/c1)x2 - (c3/c1)x3 - … - (cn/c1)xn.


Hence x1 is a linear combination of the other vectors, and so {x1, x2, …, xn} is a linearly
dependent set.

LINEAR INDEPENDENCE

Definition

x , x ,.x n is said to be linearly independent if none of the vectors is a linear


A set of vectors 1 2
combination of the other vectors

Theorem

The vectors x1, x2, …, xn are linearly independent if and only if the equation
c1x1 + c2x2 + … + cnxn = 0 has one and only one solution, c1 = c2 = … = cn = 0.

Proof

Suppose that x1, x2, …, xn are linearly independent and c1x1 + c2x2 + … + cnxn = 0. If any of the
scalars ci were non-zero then x1, x2, …, xn would be linearly dependent, contradicting the initial
assumption; so it must be that c1 = c2 = … = cn = 0.

Conversely

Suppose that the equation c1x1 + c2x2 + … + cnxn = 0 has one and only one solution,
c1 = c2 = … = cn = 0. We must show that the xi's are independent. Suppose instead that the xi's are
linearly dependent. Then there exist scalars c1, c2, …, cn, not all zero, such that
c1x1 + c2x2 + … + cnxn = 0. This contradicts the initial assumption that all such scalars are zero.

Examples

1. Prove that a given set of vectors is linearly independent in R³: suppose that
c1x1 + c2x2 + c3x3 = 0 and deduce, by comparing entries, that c1 = c2 = c3 = 0.

2. Prove that the functions cos x and sin x are linearly independent in the space of continuous
functions.

Proof

Assume that, for some scalars c1 and c2,

c1 cos x + c2 sin x = 0 for all x.  …(i)

Differentiating,

-c1 sin x + c2 cos x = 0.  …(ii)

We multiply (i) by sin x and (ii) by cos x to obtain

c1 cos x sin x + c2 sin²x = 0  …(iii)

-c1 sin x cos x + c2 cos²x = 0.  …(iv)

Adding (iii) and (iv), thereby eliminating c1, we obtain c2(sin²x + cos²x) = 0. But
sin²x + cos²x = 1, hence c2 = 0.

From (i), c1 cos x = 0 (and cos x is not always zero). So it must be that c1 = 0.


Questions

3. Prove that the given polynomials are linearly independent in the vector space P3(x).

4. Prove that the given functions are linearly independent in the given space.

5. Prove that if x1, x2, …, xn are non-zero orthogonal vectors, then they are linearly independent.

6. Prove that if a given set of vectors is linearly independent in a vector space V, then the
related set of vectors built from them is also linearly independent.

Matrices and Matrix operations

We introduce matrices as useful tools of the trade, as will be evident in the subsequent topics.

A matrix is a rectangular arrangement of numbers. We add matrices according to corresponding
entries; for this reason addition is meaningful only for matrices of the same order.

Matrix multiplication is computed by taking the dot product of the rows of the first matrix with the
columns of the second matrix. More precisely, the dot product of the i-th row of A with the j-th
column of B yields the (i, j) entry of AB. Matrix multiplication is only possible when the number of
columns of the first matrix is the same as the number of rows of the second matrix.
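A minimal illustration of this row-times-column rule (our own addition, assuming numpy; the matrices
are arbitrary):

```python
import numpy as np

# The (i, j) entry of AB is the dot product of row i of A with column j of B.
A = np.array([[1, 2],
              [3, 4]])          # 2x2
B = np.array([[5, 6, 7],
              [8, 9, 10]])      # 2x3, so AB is 2x3

print(A @ B)                    # [[21 24 27], [47 54 61]]
print(np.dot(A[0], B[:, 1]))    # 24, the (0, 1) entry computed directly
```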

Elementary row operations

The following are called elementary row operations on a Matrix

i. Interchanging any two rows


ii. Multiplying a row of a Matrix by a non -zero scalar
iii. Adding a scalar multiple of a row to another row

A matrix A is row equivalent to B if B is obtained from A by a finite sequence of elementary row
operations.

Echelon form

A matrix A is said to be in echelon form if, for each non-zero row, the first non-zero element has
only zeros below it, unless the row is the last row in the matrix.

The echelon form is one of the canonical forms we shall investigate.


Every matrix can be reduced to echelon form by use of elementary row operations.

Example

1. Reduce a given matrix to echelon form.

Solution

Replace the second row with the sum of a suitable multiple of the first row and the second row, and
replace the third row with the sum of a suitable multiple of the first row and the third row, so
that the entries below the first pivot become zero. Then replace the third row with the sum of a
suitable multiple of the second row and the third row. The final matrix is in echelon form.

2 Reduce the matrix below to echelon form

3 Show that the rows of the matrix below are linearly independent

Theorem

The nonzero rows of a matrix when reduced to echelon form are linearly independent

The above theorem can be used to provide an alternative method of investigating independence of
vectors in Rⁿ. The vectors are written as the rows of a matrix, and then an equivalent matrix in
echelon form is obtained. If the echelon form does not contain a zero row, the vectors are linearly
independent. If on the other hand the echelon form does contain a zero row, then the vectors are
linearly dependent.
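In code, the same test is usually phrased through the rank of the matrix whose rows are the given
vectors: the rows are linearly independent exactly when the rank equals the number of rows. A sketch
of this (our own, assuming numpy; rows_independent is a made-up helper):

```python
import numpy as np

def rows_independent(vectors):
    """True if the given vectors (taken as rows of a matrix) are linearly independent."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == A.shape[0]

print(rows_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # True
print(rows_independent([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))   # False: row 2 = 2 * row 1
```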

Example

1) Investigate given vectors for linear independence.

Solution

Let the vectors be the rows of a matrix A and reduce A to echelon form. The echelon form does not
contain a zero row, so the vectors are linearly independent.

2) Show that given vectors are linearly dependent and show clearly how one of the vectors is a
linear combination of the other vectors.

Solution

Write the vectors as the rows of a matrix A and reduce to echelon form. The echelon form equivalent
to A has a zero row, so the original vectors are linearly dependent; keeping track of the row
operations shows how one of the vectors is a linear combination of the others.

3) Show that given polynomials are linearly dependent and clearly indicate how one of the
polynomials is a linear combination of the other polynomials.

4) Prove that any three non-zero vectors in R² are linearly dependent.

Theorem

Given that the vectors x1, x2, …, xn are linearly independent and x = c1x1 + c2x2 + … + cnxn, this
representation is unique.

Proof


x c1 x1  c 2 x2 .  c n xn and x c1/ x1 +c2/ x2 cn/ xn


Suppose that

Then subtracting both sides we obtain

0 (c1  c1/ ) x1 +(c1  c2/ ) x2  (c1  cn/ ) xn


.

Since the xi’s are linearly independent we must have that:

0 (c1  c1/ ) (c1  c2/ )  (c1  cn/ )


.

Hence

c1 c1/ ,c1 c2/ ,c1 cn/


.

Definition

A set of vectors x1, x2, …, xn is called a finite basis of a vector space V if x1, x2, …, xn are
linearly independent and generate (span) the vector space V, i.e. span{x1, x2, …, xn} = V.

Definition

A vector space V is said to be finite dimensional if V has a finite basis. Otherwise, V is infinite
dimensional.

Theorem

Every vector space V that is different from the trivial vector space has a basis

Theorem

A set of vectors {x1, x2, …, xn} is a basis of V if and only if the set {x1, x2, …, xn} is a maximal
linearly independent set in V.

Remark

This means that the vectors x1, x2, …, xn are linearly independent, and if a new element is
introduced the resulting set is linearly dependent.

Proof


Suppose {x1, x2, …, xn} is a finite basis of the vector space V. We show that the set is a maximal
linearly independent set.

Let x ∈ V. Since x1, x2, …, xn are a basis, x is a linear combination of x1, x2, …, xn; hence
{x1, x2, …, xn, x} is a linearly dependent set. Therefore

{x1, x2, …, xn} is maximal linearly independent.

Conversely

Suppose that {x1, x2, …, xn} is maximal linearly independent. We must show that {x1, x2, …, xn} is a
basis for V. It is sufficient to show that {x1, x2, …, xn} is a generating set for V.

Let x ∈ V. Then {x, x1, x2, …, xn} is a linearly dependent set, so x must be a linear combination of
x1, x2, …, xn. Hence {x1, x2, …, xn} is a basis for V.

Theorem

A set of vectors {x1, x2, …, xn} is a basis for a vector space V if and only if {x1, x2, …, xn} is a
minimal generating (spanning) set of vectors for the vector space V.

Remark

The set {x1, x2, …, xn} generates the vector space V, but if any one of the vectors is deleted from
the list, the remaining vectors do not generate V.

Theorem

If a vector space V has a finite basis then each basis of V has the same number of elements

Example

The vectors are a basis for R3

Note are orthogonal vectors in R3 hence they are linearly independent

Let

So span R3


Definition

If V has a finite basis and n is the number of elements in a basis of V, then the dimension of V
is n.

Definition

If {x1, x2, …, xn} is a finite basis for a vector space V and x = c1x1 + c2x2 + … + cnxn, then we
define the matrix (column vector) representation of x to be the column of coefficients
(c1, c2, …, cn).

Example

The dimension of R⁴ is 4.

In R⁴ the vectors e1 = (1, 0, 0, 0), e2 = (0, 1, 0, 0), e3 = (0, 0, 1, 0), e4 = (0, 0, 0, 1) form
the standard basis, and so R⁴ has dimension 4.

Determine the dimension of given subspaces of R³.

Solution

In each case, exhibit vectors that generate the subspace and are linearly independent; they then
form a basis, and the number of such vectors is the dimension. For the subspaces considered here,
each is generated by two linearly independent vectors, so each has dimension two.


CHAPTER IV

LINEAR SYSTEMS OF EQUATIONS

A system of equations of the type

a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
  ⋮
am1x1 + am2x2 + … + amnxn = bm

is called a linear system of m equations in n variables. This system of m equations in n unknowns
can be written as a matrix equation Ax = b, where the matrix A = (aij) of coefficients is called the
coefficient matrix and the vector b = (b1, b2, …, bm) is called the constant matrix. The system is
now summarized in the vector equation Ax = b, where a solution x is a vector in Rⁿ.

So, to solve the system we note that we obtain an equivalent system by any of the following
operations:

i. Interchanging two equations


ii. Multiply an equation by a non-zero scalar
iii. Add a scalar multiple of one equation to another equation

These operations correspond to the elementary row operations defined earlier on matrices. The
technique is called the Gaussian elimination method. It involves applying a sequence of row
operations to reduce the number of variables in each subsequent equation, and is illustrated below.

Gauss elimination method

Solve the system

x - 2y + 3z = 3        (i)
2x - 4y + 5z = 0       (ii)
x + 3y - z = 5         (iii)

Solution

We apply elementary operations on the system to obtain an equivalent system in which each
subsequent equation has fewer variables.

Taking 2(i) - (ii) gives z = 6, and taking (i) - (iii) gives -5y + 4z = -2.

Interchanging these two equations, the system becomes

x - 2y + 3z = 3
-5y + 4z = -2
z = 6.

Substituting back up the equations gives

z = 6,   y = 26/5,   x = -23/5.

The same steps can be achieved by use of row operations on the augmented matrix [A | b]. The
objective is to reduce the augmented matrix to echelon form and then recover a simpler system:

[ 1  -2   3 |  3 ]
[ 2  -4   5 |  0 ]
[ 1   3  -1 |  5 ]

Applying 2R1 - R2 -> R2 and R1 - R3 -> R3 gives

[ 1  -2   3 |  3 ]
[ 0   0   1 |  6 ]
[ 0  -5   4 | -2 ]

and interchanging R2 and R3 gives the echelon form

[ 1  -2   3 |  3 ]
[ 0  -5   4 | -2 ]
[ 0   0   1 |  6 ]

From the reduced form we recover the same equations as were found by the earlier method.
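As a check (our own addition, assuming numpy), the same system can be solved numerically,
confirming the values found above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system solved above.
A = np.array([[1.0, -2.0, 3.0],
              [2.0, -4.0, 5.0],
              [1.0,  3.0, -1.0]])
b = np.array([3.0, 0.0, 5.0])

solution = np.linalg.solve(A, b)
print(solution)                              # [-4.6  5.2  6. ], i.e. x = -23/5, y = 26/5, z = 6
print(np.allclose(A @ solution, b))          # True
```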


Question

Solve a system whose echelon form leaves a free variable.

Solution

Reduce the augmented matrix to echelon form. One of the variables, say z = λ, may be chosen freely;
substituting back, the remaining variables are expressed in terms of λ. Here λ is an arbitrary
value, leading to an infinity of solutions.

Question

Solve the system below

Question

Find the solutions of the following systems of equations

1.

2.

Solve the system below (an inconsistent example).

Solution

Reduce the augmented matrix to echelon form and recapture the equations from the echelon form. One
of the resulting equations reads 0 = c for some non-zero constant c. This is a contradiction; hence
the system is inconsistent.

Remark
A linear system either has a unique solution, or an infinity of solutions, or is inconsistent.

The Gauss-Jordan reduction method

The method above can be extended to solve a system which has a unique solution. The augmented
matrix is reduced to the form [I | c], where I is the identity matrix; the equivalent system is then
simply x = c, so the solution can be read off directly.

Question

Solve the same system by the Gauss-Jordan method.

Recall that the system when reduced to echelon form is

[ 1  -2   3 |  3 ]
[ 0  -5   4 | -2 ]
[ 0   0   1 |  6 ]

Applying R2 - 4R3 -> R2 and R1 - 3R3 -> R1 gives

[ 1  -2   0 | -15 ]
[ 0  -5   0 | -26 ]
[ 0   0   1 |   6 ]

and then applying -(1/5)R2 -> R2 and R1 + 2R2 -> R1 gives

[ 1   0   0 | -23/5 ]
[ 0   1   0 |  26/5 ]
[ 0   0   1 |    6  ]

leading to the same solution, namely x = -23/5, y = 26/5, z = 6. This is the same result we obtained
earlier. Of course the technique involves more steps, hence one needs care to avoid errors.

Question

Solve the matrix equation AX = B for given matrices A and B.

Solution

We use the Gauss-Jordan method to find the matrix X: reduce the augmented matrix [A | B] until the
left block becomes the identity; the right block is then X.

Definition
If A is a matrix, the number of non-zero rows when it is reduced to echelon form is called the rank
of the matrix.
Example

Find the rank of the matrix below

Solution

Rank (A)=2

Find the rank of the matrix below


The rank of A is 2 since the reduced matrix equivalent to A has two non-zero rows.

A linear system of the type Ax = b has no solution if

Rank(A) ≠ Rank of the augmented matrix [A | b].
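This rank criterion is easy to automate. The sketch below is our own illustration (assuming numpy);
is_consistent is a made-up helper and the two small systems are arbitrary examples.

```python
import numpy as np

def is_consistent(A, b):
    """A linear system Ax = b has a solution iff rank(A) = rank([A | b])."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    augmented = np.hstack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

# x + y = 1, 2x + 2y = 2 is consistent; x + y = 1, x + y = 3 is not.
print(is_consistent([[1, 1], [2, 2]], [1, 2]))   # True
print(is_consistent([[1, 1], [1, 1]], [1, 3]))   # False
```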

A square matrix A is said to be a non-singular matrix if there exists a square matrix B such that
AB = BA = I.

The matrix B is called the inverse of A, written A⁻¹.

Theorem. If A and B are non-singular matrices then AB is non-singular and (AB)⁻¹ = B⁻¹A⁻¹.

Proof

(AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AA⁻¹ = I, and similarly (B⁻¹A⁻¹)(AB) = I. Hence, from the definition,
AB is non-singular and its inverse is B⁻¹A⁻¹.

The method of reduction to echelon form can be used to find the inverse of a square matrix. In
effect we solve the equation AX = I. The augmented matrix [A | I] is reduced to the form [I | B];
then B = A⁻¹.
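A compact sketch of this [A | I] reduction (our own illustration, assuming numpy and a non-singular
input; inverse_by_row_reduction is a made-up helper, and in practice numpy.linalg.inv does the job
directly):

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Invert a non-singular A by reducing the augmented matrix [A | I] to [I | A^-1]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])
    for i in range(n):
        # Swap in the row with the largest pivot in column i (simple partial pivoting).
        pivot = i + np.argmax(np.abs(aug[i:, i]))
        aug[[i, pivot]] = aug[[pivot, i]]
        aug[i] = aug[i] / aug[i, i]                   # scale the pivot row
        for j in range(n):
            if j != i:
                aug[j] = aug[j] - aug[j, i] * aug[i]  # clear the rest of the column
    return aug[:, n:]

A = np.array([[2.0, 1.0], [1.0, 1.0]])
print(inverse_by_row_reduction(A))                                   # [[ 1. -1.], [-1.  2.]]
print(np.allclose(inverse_by_row_reduction(A), np.linalg.inv(A)))    # True
```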

Example

Find the inverse of the matrix below

Solution

The augmented matrix [A | I] is reduced until the left block becomes the identity; the right block
is then the required inverse A⁻¹.

Question

1 Find the inverse of the matrix below

2 Solve the matrix equation AX = B for given matrices A and B.

Definition

A square matrix A can often be written in the form A = LU, where L is a lower triangular matrix and
U is an upper triangular matrix. This is called the LU decomposition of the matrix A.

The matrix U can be found by reducing A to echelon form; then L can be found by solving the matrix
equation A = LU.
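For computation, scipy provides an LU factorization of the form A = PLU, where P is a permutation
matrix accounting for row interchanges. The sketch below is our own illustration on an arbitrary
matrix (assuming numpy and scipy are available).

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# scipy returns P, L, U with A = P @ L @ U; P is the identity when no row swaps are needed.
P, L, U = lu(A)
print(L)                              # unit lower triangular
print(U)                              # upper triangular
print(np.allclose(P @ L @ U, A))      # True
```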

3 Write the matrix below in the form A = LU.

Solution

Let U be the echelon form of A, and let L be a lower triangular matrix with unknown entries below
the diagonal. Then we solve the equation A = LU for those entries, from which L is determined.


CHAPTER V

LINEAR TRANSFORMATIONS

Definition

A mapping T from a linear space V to a linear space W is called a linear transformation if

T(αx + βy) = αT(x) + βT(y) for all scalars α, β and all vectors x, y ∈ V.

Remark

The scalar multiplication and vector addition on the left hand side are performed in V, while those
on the right hand side are performed in W.

Theorem

If T is a linear transformation of the vector space V into the vector space W, then

i.   T(0) = 0
ii.  T(-x) = -T(x) for all x ∈ V
iii. T(x - y) = T(x) - T(y) for all x, y ∈ V.

Proof

Set α = β = 0 in the definition; then T(0) = T(0x + 0y) = 0T(x) + 0T(y) = 0.

Set α = -1, β = 0; then T(-x) = -T(x). Part (iii) follows by taking α = 1 and β = -1.

Examples

1. Show that a given mapping is not a linear transformation.

2. Prove that a given mapping T is a linear transformation.

Proof

Let α, β be scalars and x, y vectors. Compute T(αx + βy) and compare it with αT(x) + βT(y); since
the two agree for all choices, T is a linear transformation.

3. Show that a given mapping T is not a linear transformation.

Proof

Note that T(0) ≠ 0; hence T is not a linear transformation.

4. Prove that a given mapping T is not a linear transformation by exhibiting vectors x and y for
which T(x + y) ≠ T(x) + T(y).

Definition

Let T be a linear transformation of the linear space V into the linear space W. Then

i.  The kernel of T is defined as Ker T = {x ∈ V : T(x) = 0}.

ii. The image of V under T is T(V) = {T(x) : x ∈ V}.

Theorem

i.  Ker T is a subspace of V.
ii. The image of V under T is a subspace of W.

Proof of (i)

Let x, y ∈ Ker T. Then

T(x + y) = T(x) + T(y) = 0 + 0 = 0.

Hence x + y ∈ Ker T.

Finally, let x ∈ Ker T and k be a scalar. Then

T(kx) = kT(x) = k0 = 0,

hence kx ∈ Ker T. So Ker T is a subspace of V.

Prove part ii of the above theorem as an exercise

Examples

1 Determine the dimension of the kernel of the transformation below:

2 Determine the dimension of the Kernel and image space of the transformation below:

Theorem

A linear transformation T from V into W is injective if and only if the kernel of T is {0}.

Proof

If T is injective then T is one to one. But T(0) = 0, so 0 is the only vector mapped to 0. So the
kernel of T is {0}.

Conversely, suppose that the kernel of T is {0} and that T(x) = T(y). Then
T(x - y) = T(x) - T(y) = 0, so x - y is in the kernel. Hence x - y = 0 and so x = y.
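When T is given by a matrix A (so T(x) = Ax), the dimension of the image is the rank of A, and the
dimension of the kernel is n - rank(A) by the rank-nullity relation (a standard fact not proved in
these notes). A sketch (our own, assuming numpy; the matrix is an arbitrary example):

```python
import numpy as np

# T(x) = A x as a map from R^3 to R^2, for an arbitrary illustrative matrix A.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # the second row is twice the first

n = A.shape[1]                          # dimension of the domain
rank = np.linalg.matrix_rank(A)         # dimension of the image of T

print("dim image  =", rank)             # 1
print("dim kernel =", n - rank)         # 2
print("T injective:", n - rank == 0)    # False, since the kernel is not {0}
```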

LINEAR TRANSFORMATIONS AND THEIR MATRICES

If V is a finite dimensional vector space, then to each linear transformation we associate a matrix.
Conversely, a matrix represents a linear transformation of a finite dimensional space.

Let {v1, v2, …, vn} be a basis for the vector space V and let T be a linear transformation of V into
W, where a basis of W is also fixed.

Each vector x ∈ V can be written uniquely as a linear combination of the basis vectors,

i.e. x = c1v1 + c2v2 + … + cnvn.

Hence,

T(x) = c1T(v1) + c2T(v2) + … + cnT(vn).

An inspection of the above expression confirms that the coordinate column of T(x) is a matrix
product A[x], where A is the matrix whose j-th column is the coordinate column of T(vj), and
[x] = (c1, c2, …, cn) is the coordinate column of x. Note that the basis vectors themselves can be
written as column vectors, namely [v1] = (1, 0, …, 0), [v2] = (0, 1, 0, …, 0), and so on. It is
clear that the columns of the matrix A are precisely the images of the basis vectors.
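This observation gives a direct recipe in code: apply T to each standard basis vector and use the
resulting columns as the columns of the matrix. The sketch below is our own illustration (assuming
numpy); the map T(x, y, z) = (x + y, y - z) is an arbitrary example.

```python
import numpy as np

def T(v):
    """An illustrative linear map from R^3 to R^2: T(x, y, z) = (x + y, y - z)."""
    x, y, z = v
    return np.array([x + y, y - z])

# The columns of the matrix of T (in the standard bases) are the images of e1, e2, e3.
basis = np.eye(3)
A = np.column_stack([T(e) for e in basis])
print(A)                     # [[ 1.  1.  0.], [ 0.  1. -1.]]

v = np.array([2.0, 3.0, 4.0])
print(T(v), A @ v)           # both give [ 5. -1.]
```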

Example

1. Find the matrix of a given transformation T by computing the images of the standard basis
vectors; these images are the columns of the matrix.

2. Find the matrix of transformation if T is defined by an explicit formula in the standard basis.

Obviously, a change of basis will result in a different matrix representation of a linear
transformation.

3. Write the matrix representation of a given transformation in (i) the standard basis and (ii)
another given basis.



Solution

(i) We first find the images of the standard basis vectors under T; these images, written as
columns, give the matrix.

(ii) We then express the images of the new basis vectors in terms of the new basis; the resulting
coordinate columns give the matrix with respect to that basis.

4 Let D be the differentiation linear transformation defined on the space V of all polynomials of
degree three or less. Find the matrix for D in (i) a given basis and (ii) a second given basis.

5 Let V be a Euclidean space and let the linear transformation T be represented by a given matrix
in the standard basis. (i) Write the transformation T in Cartesian form. (ii) Hence find the matrix
for T using another given basis.

If T is a linear transformation of a linear space V into a linear space W and S is a linear
transformation of W into a linear space X, then we obtain a composite map ST of V into X.

ST is defined by (ST)(x) = S(T(x)) for all x in V.

Theorem. If T and S are linear transformations then ST is a linear transformation.

Proof

(ST)(αx + βy) = S(T(αx + βy)) = S(αT(x) + βT(y)) = αS(T(x)) + βS(T(y)) = α(ST)(x) + β(ST)(y).

Theorem. A linear transformation T is a one to one mapping if and only if Ker T = {0}.

Proof

Suppose that Ker T = {0} and T(x) = T(y). Then T(x - y) = 0, so it must be that x - y = 0, leading
to x = y.


Conversely, let T be one to one and let x ∈ Ker T. Then T(x) = 0 = T(0), since T(0) = 0. So x = 0
and Ker T = {0}.

Definition. The mapping I : V → V defined by I(x) = x is called the identity mapping.

The identity mapping is a linear transformation that is both one to one and onto.

Definition. If T : V → W is a one to one, onto mapping, then the mapping S : W → V defined so that
S(y) = x whenever T(x) = y is called the inverse function of T.

Note that S(T(x)) = x and T(S(y)) = y.

We write T⁻¹ for the inverse function of T.

Theorem

If T is a linear transformation that admits an inverse, then the inverse is a linear transformation.

Proof

Let u, v be elements of W such that there exist x, y ∈ V with T(x) = u and T(y) = v. Then, from the
linearity of T, we have T(αx + βy) = αu + βv.

Hence, from the definition of the inverse, T⁻¹(αu + βv) = αx + βy = αT⁻¹(u) + βT⁻¹(v).

Definition A vector space V is said to be isomorphic to a vector space W if there exists an invertible
linear transformation T from V onto W

Theorem

If x1, x2, …, xn are linearly independent vectors of V and T is a one to one linear map of V into W,
then T(x1), T(x2), …, T(xn) are linearly independent in W.

Proof

Suppose that

c1T(x1) + c2T(x2) + … + cnT(xn) = 0.

Then from the linearity of T,

c1T(x1) + c2T(x2) + … + cnT(xn) = T(c1x1 + c2x2 + … + cnxn) = 0.

Now, since the kernel of T consists of the zero vector alone, it must be that

c1x1 + c2x2 + … + cnxn = 0.

Since the vectors are linearly independent in V we must have that

c1 = c2 = … = cn = 0.


Remark. It follows from the above result that Isomorphic spaces have the same dimension

Corollary.

Each finite dimensional space V of dimension n is isomorphic to Rⁿ.

Proof

Let {x1, x2, …, xn} be a basis for the vector space V; then each vector x in V is of the form

x = c1x1 + c2x2 + … + cnxn.

The mapping T defined by T(x) = (c1, c2, …, cn)

is an isomorphism. The fact that T is linear can easily be verified. Also, note that if T(x) = 0,
then (c1, c2, …, cn) = 0. Hence c1 = c2 = … = cn = 0, and so the kernel of T is {0}.

Theorem. Rⁿ is not isomorphic to Rᵐ for n ≠ m.
