

ELEMENTARY
LINEAR ALGEBRA
MUHAMMAD USMAN HAMID (0323 – 6032785)



PREFACE
Linear Algebra is the study of vectors and linear transformations. The main
objective of this course is to help students learn, in a rigorous manner, the tools
and methods essential for studying the solution spaces of problems in mathematics,
engineering, the natural sciences, and the social sciences, and to develop the
mathematical skills needed to apply these methods to problems arising within their
field of study and to various real-world problems.

Course Contents:

 System of Linear Equations: Representation in matrix form, matrices,


operations on matrices, echelon and reduced echelon form, inverse of a
matrix (by elementary row operations), solution of linear system, Gauss-
Jordan method, Gaussian elimination.
 Vector Spaces: Definition and examples, subspaces. Linear combination and
spanning set. Linearly Independent sets. Finitely generated vector spaces.
Bases and dimension of a vector space. Operations on subspaces,
Intersections, sums and direct sums of subspaces. Quotient Spaces.
 Inner product Spaces: Definition and examples. Properties, Projection.
Cauchy inequality. Orthogonal and orthonormal basis. Gram Schmidt
Process.
 Determinants: Permutations of order two and three and definitions of
determinants of the same order. Computing of determinants. Definition of
higher order determinants. Properties. Expansion of determinants.
 Diagonalization, eigenvalues and eigenvectors.
 Linear mappings: Definition and examples. Kernel and image of a linear
mapping. Rank and nullity. Reflections, projections, and homotheties.
Change of basis. Theorem of Hamilton-Cayley.

Recommended Books:

 Curtis C. W., Linear Algebra: An Introductory Approach
 Apostol T., Multi-Variable Calculus and Linear Algebra
 Anton H., Rorres C., Elementary Linear Algebra: Applications Version
 Karamat Hussain, Linear Algebra
 Lipschutz S., Linear Algebra (Schaum's Outline Series)


For video lectures, visit "Learning with Usman Hamid" on YouTube,
visit the Facebook page "mathwath",
or contact: 0323 – 6032785


Chapter # 1

SYSTEMS OF LINEAR EQUATIONS


Systems of linear equations play an important and motivating role in the subject of
linear algebra. In fact, many problems in linear algebra reduce to finding the
solution of a system of linear equations. Thus, the techniques introduced in this
chapter will be applicable to abstract ideas introduced later. On the other hand,
some of the abstract results will give us new insights into the structure and
properties of systems of linear equations. All our systems of linear equations
involve scalars as both coefficients and constants, and such scalars may come from
any number field F. There is almost no loss in generality if the reader assumes that
all our scalars are real numbers — that is, that they come from the real field R.

Linear Equation:

A linear equation in n unknowns x_1, x_2, ..., x_n is an equation that can be put in the
standard form

a_1 x_1 + a_2 x_2 + ... + a_n x_n = b

where a_1, a_2, ..., a_n and b are constants. The constant a_i is called the coefficient
of x_i, and b is called the constant term of the equation.

Solutions of Linear Equation:

A solution of the linear equation a_1 x_1 + a_2 x_2 + ... + a_n x_n = b is a list of values
for the unknowns, say x_1 = k_1, x_2 = k_2, ..., x_n = k_n or, equivalently, a vector
u = (k_1, k_2, ..., k_n) in R^n, such that the following statement (obtained by
substituting k_i for x_i in the equation) is true:

a_1 k_1 + a_2 k_2 + ... + a_n k_n = b

In such a case we say that u satisfies the equation.
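As a quick illustration, here is a minimal Python sketch (ours, not the text's) that checks whether a vector u satisfies a linear equation:

```python
# A minimal sketch: checking whether u = (k1, ..., kn) satisfies
# the linear equation a1*x1 + a2*x2 + ... + an*xn = b.

def satisfies(coeffs, u, b):
    """Return True if u is a solution of sum(ai * xi) = b."""
    return sum(a * k for a, k in zip(coeffs, u)) == b

# The equation x + 2y - 3z = 6 with u = (5, 2, 1): 5 + 4 - 3 = 6.
print(satisfies([1, 2, -3], (5, 2, 1), 6))   # True
print(satisfies([1, 2, -3], (1, 2, 3), 6))   # False
```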

Remark:

The equation a_1 x_1 + a_2 x_2 + ... + a_n x_n = b implicitly assumes there is an
ordering of the unknowns. In order to avoid subscripts, we will usually use x, y for two
unknowns; x, y, z for three unknowns; and x, y, z, t for four unknowns.

Visit us @ Youtube: “Learning With Usman Hamid”


Page |4

Example: Consider the following linear equation in three unknowns x, y, z:

x + 2y - 3z = 6

We note that x = 5, y = 2, z = 1, or, equivalently, the vector u = (5, 2, 1) is a
solution of the equation. That is, 5 + 2(2) - 3(1) = 6.

On the other hand, (1, 2, 3) is not a solution, because on substitution, we do not
get a true statement: 1 + 2(2) - 3(3) = -4 ≠ 6

System of Linear Equations

A system of linear equations is a list of linear equations with the same unknowns.
In particular, a system of ‗m‘ linear equations L1, L2,..., Lm in ‗n‘ unknowns
can be put in the standard form

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
..........................................
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

where the a_ij and b_i are constants. The number a_ij is the coefficient of the unknown
x_j in the equation L_i, and the number b_i is the constant term of the equation L_i.

 The system is called an m x n system. It is called a square system if m = n,
that is, if the number of equations is equal to the number of unknowns.
 The system is said to be homogeneous if all the constant terms are zero,
that is, if b_1 = b_2 = ..... = b_m = 0. Otherwise the system is said to be
nonhomogeneous.
 A solution (or a particular solution) of the system (above) is a list of values
for the unknowns or, equivalently, a vector u in R^n, which is a solution of
each of the equations in the system. The set of all solutions of the system is
called the solution set or the general solution of the system.
 A finite set of linear equations is called a system of linear equations or,
more briefly, a linear system. The variables are called unknowns.
 A linear equation does not involve any products or roots of variables. All
variables occur only to the first power and do not appear as arguments of
trigonometric, logarithmic, or exponential functions.

Visit us @ Youtube: “Learning With Usman Hamid”


Page |5

EXAMPLES OF LINEAR AND NON-LINEAR EQUATIONS

 x + 3y = 7 : linear
 x + 3y^2 = 4 : not linear (second power of a variable)
 ax + by = c : linear for constants a, b, c
 x_1 - 2x_2 - 3x_3 + x_4 = 0 : linear
 (1/2)x - y + 3z = -1 : linear
 x_1 + x_2 + ... + x_n = 1 : linear
 3x + 2y - xy = 5 : not linear (product of variables)
 sin x + y = 0 : not linear (trigonometric function of a variable)
 e^x - 2y = 4 : not linear (exponential function of a variable)
 √x + 2y = 1 : not linear (root of a variable)
 √2 x + y = 1 : linear (the root is of a constant, not of a variable)
 log x + 5y = 2 : not linear (logarithmic function of a variable)
 x = -7y + 3z : linear
 x^(-1) + 3y = 5 : not linear (negative power of a variable)
 x^2 + y^2 = 4 : not linear
 π x - √5 y = e : linear (π, √5, e are constants)
 y = sin(π/3) x : linear (the sine is of a constant)
 (x + y)/z = 2 : not linear
 x_1 x_2 = 1 : not linear
 7x_1 - 2x_2 = 0 : linear
 3^x + y = 4 : not linear

TRY OTHERS ALSO!!!!!!!!

Visit us @ Youtube: “Learning With Usman Hamid”


Page |6

Example: Consider the following system of linear equations:

x_1 + x_2 + 4x_3 + 3x_4 = 5
2x_1 + 3x_2 + x_3 - 2x_4 = 1
x_1 + 2x_2 - 5x_3 + 4x_4 = 3

It is a 3 x 4 system because it has three equations in four unknowns. Determine
whether (a) u = (-8, 6, 1, 1) and (b) v = (-10, 5, 1, 2) are solutions of the
system.

Solution:

(a) Substitute the values of u in each equation, obtaining

(-8) + (6) + 4(1) + 3(1) = 5

2(-8) + 3(6) + (1) - 2(1) = 1

(-8) + 2(6) - 5(1) + 4(1) = 3

Yes, u is a solution of the system because it is a solution of each equation.

(b) Substitute the values of v into each successive equation, obtaining

(-10) + (5) + 4(1) + 3(2) = 5

2(-10) + 3(5) + (1) - 2(2) = -8 ≠ 1

No, v is not a solution of the system, because it is not a solution of the second
equation. (We do not need to substitute v into the third equation.)

Consistent and Inconsistent Systems:

A system of linear equations is said to be consistent if it has one or more
solutions, and it is said to be inconsistent if it has no solution.

Underdetermined: A system of linear equations is considered underdetermined if
there are fewer equations than unknowns.

Overdetermined: A system of linear equations is considered overdetermined if
there are more equations than unknowns.

Visit us @ Youtube: “Learning With Usman Hamid”


Page |7

PRACTICE:

1. Consider the following system of linear equations:

Determine whether given 3 – tuples are solutions of the system?

(a) ( )
(b) ( )
(c) ( )
(d) . /
(e) ( )

2. Consider the following system of linear equations:

Determine whether given 3 – tuples are solutions of the system?

a) . /
b) . /
c) ( )
d) . /
e) . /

Visit us @ Youtube: “Learning With Usman Hamid”


Page |8

If the field F of scalars is infinite, such as when F is the real field R or the complex
field C, then we have the following important result.

Result: Suppose the field F is infinite. Then any system of linear equations has

(i) a unique solution, (ii) no solution, or (iii) an infinite number of solutions.

[Diagram] A SYSTEM OF LINEAR EQUATIONS is either CONSISTENT (with either a
UNIQUE SOLUTION or an INFINITE NUMBER OF SOLUTIONS) or INCONSISTENT (NO SOLUTION).

Remark:

Linear systems in two unknowns arise in connection with intersections of lines.

 The lines may be parallel and distinct, in which case there is no intersection
and consequently no solution.
 The lines may intersect at only one point, in which case the system has
exactly one solution.
 The lines may coincide, in which case there are infinitely many points of
intersection (the points on the common line) and consequently infinitely
many solutions. (In such a system, the equations are the same up to constant
multiples.)

Visit us @ Youtube: “Learning With Usman Hamid”


Page |9

Example (A Linear System with one Solution):

Solve the following system of linear equations:

x - y = 1 ………..(i)

2x + y = 6 ………..(ii)

Solution:
Multiplying (i) with -2 gives -2x + 2y = -2.

Adding this to (ii): 3y = 4, so y = 4/3.

Substituting in (i): x = 1 + 4/3 = 7/3.

(x, y) = (7/3, 4/3). Geometrically this means that the lines represented by the
equations in the system intersect at the single point (7/3, 4/3).

Example (A Linear System with No Solution):

Solve the following system of linear equations:

x + y = 4 ………..(i)

3x + 3y = 6 ………..(ii)

Solution:

Multiplying (i) with -3 gives -3x - 3y = -12.

Adding this to (ii): 0 = -6.

The result is contradictory, so the given system has no solution. Geometrically this
means that the lines are parallel and distinct, in which case there is no
intersection and consequently no solution.

Visit us @ Youtube: “Learning With Usman Hamid”


P a g e | 10

Example (A Linear System with Infinitely many Solutions):

Solve the following system of linear equations:

4x - 2y = 1 ………..(i)    16x - 8y = 4 ………..(ii)

Solution:
Multiplying (i) with -4 gives -16x + 8y = -4.

Adding this to (ii): 0 = 0.

The equation 0 = 0 does not impose any restriction on 'x' and 'y' and hence can be
omitted. Thus the solutions of the system are those values of 'x' and 'y' that satisfy
the single equation 4x - 2y = 1.

Geometrically this means that the lines corresponding to the two equations
in the original system coincide. And this system will have infinitely many
solutions.

How to Find Few Solutions of Such System?

 Find the value of 'x' from the common equation.
 Put y = t, 't' being a parameter (an arbitrary value instead of an actual value).
 Replace in the given system.
 Use values of 't' upon your taste and get different answers.
 We may apply the same procedure with the roles of 'x' and 'y' interchanged.
(A small sketch of this recipe is given below.)
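The following Python sketch (ours, not the text's) applies this recipe to the equation 4x - 2y = 1 used in the surrounding examples:

```python
# A small sketch of the parameter recipe above for 4x - 2y = 1:
# solve for x, put y = t, and generate as many solutions as desired.

def solutions(ts):
    """Yield solutions (x, y) of 4x - 2y = 1 with y = t as the parameter."""
    for t in ts:
        yield ((1 + 2 * t) / 4, t)   # x = 1/4 + t/2

for point in solutions([0, 1, -1, 0.5]):
    print(point)   # (0.25, 0), (0.75, 1), (-0.25, -1), (0.5, 0.5)
```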

Example: We want to find different solutions for the following problem using a
parametric equation (an arbitrary equation using a parameter instead of an actual value).

4x - 2y = 1 ………..(i)    16x - 8y = 4 ………..(ii)

Solution:

x = (1 + 2y)/4, and put y = t

x = 1/4 + t/2 , y = t

For t = 0: (1/4, 0); for t = 1: (3/4, 1); for t = -1: (-1/4, -1).

Visit us @ Youtube: “Learning With Usman Hamid”


P a g e | 11

Example: We want to find different solutions for the following problem using
parametric equations.

x - y = 2 ………..(i)

2x - 2y = 4 ………..(ii)

3x - 3y = 6 ………..(iii)

Solution:

Since all the above equations have the same graph (each is a constant multiple of the
first), the system will have infinitely many solutions. We will solve it using
parametric equations.

In all the above equations we have the parallel form x - y = 2.

x = 2 + y, and put y = t

(x, y) = (2 + t, t)

For t = 0: (2, 0); for t = 1: (3, 1); for t = -1: (1, -1).

{(2, 0), (3, 1), (1, -1), ...}

TRY OTHERS ALSO!!!!!!!!

General Solution:

If a linear system has infinitely many solutions, then a set of parametric equations
from which all solutions can be obtained by assigning numerical values to the
parameters is called a General Solution of the system.


PRACTICE:

1. In each part, solve the linear system, if possible, and use the result to
determine whether the lines represented by the equations in the system have
zero, one, or infinitely many points of intersection. If there is a single point
of intersection, give its coordinates, and if there are infinitely many, find
parametric equations for them.
a) and
b) and
c) and

2. In each part use parametric equations to describe the solution set of linear
equations.
a) b)
c) d)
e) g)
f) h)

3. In each part use parametric equations to describe the infinitely many


solutions of linear equations.
a) and
b) , and
c) and
d) , and


Matrices:

A matrix A over a field F or, simply, a matrix A (when F is implicit) is a
rectangular array of scalars, usually presented in the following form:

A = [ a_11  a_12  ...  a_1n ]
    [ a_21  a_22  ...  a_2n ]
    [ ...................  ]
    [ a_m1  a_m2  ...  a_mn ]

The numbers a_ij in the array are called the entries in the matrix.

Augmented and Coefficient Matrices of a System

Consider the general system of m equations in n unknowns:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
...........................................
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

Such a system has associated with it the following two matrices:

M = [ a_11 ... a_1n  b_1 ]          A = [ a_11 ... a_1n ]
    [ .................. ]   and        [ .............. ]
    [ a_m1 ... a_mn  b_m ]              [ a_m1 ... a_mn ]

The first matrix M is called the augmented matrix of the system, and the second
matrix A is called the coefficient matrix.

The coefficient matrix A is simply the matrix of coefficients, which is the
augmented matrix M without the last column of constants. Some texts write
M = [A | B] to emphasize the two parts of M, where B denotes the column vector
of constants.
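The following is a small NumPy sketch (an illustration of ours, not part of the text) that forms the augmented matrix M = [A | B]:

```python
# Forming the augmented matrix M = [A | B] of a system from its
# coefficient matrix A and column of constants B.
import numpy as np

A = np.array([[1, 2, -3],
              [2, -1, 1]])     # coefficient matrix
B = np.array([[4], [1]])       # column vector of constants
M = np.hstack([A, B])          # augmented matrix [A | B]
print(M)                       # [[ 1  2 -3  4]
                               #  [ 2 -1  1  1]]
```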


Example:

Consider the system

x + 2y - 3z = 4
2x - y + z = 1

Then M = [ 1   2  -3  4 ]
         [ 2  -1   1  1 ]

and A = [ 1   2  -3 ]
        [ 2  -1   1 ]

Example:

Consider

M = [ 1  -2   5 ]
    [ 3   4  -1 ]

Then the corresponding system of equations in the unknowns x, y is as follows;

x - 2y = 5
3x + 4y = -1

PRACTICE:

1. In each part, find a linear system in the unknowns ….. that


corresponds to the given augmented matrix.

a) [ ]

b) [ ]

c) 0 1

d) [ ]

2. In each part, find the augmented matrix for the given linear system.
a) , ,
b) ,
c) , ,
d) , ,
e)

f)


Degenerate Linear Equations: A linear equation is said to be degenerate if
all the coefficients are zero, that is, if it has the form

0x_1 + 0x_2 + ... + 0x_n = b

The solution of such an equation depends only on the value of the constant b.
Specifically,

(i) If b ≠ 0, then the equation has no solution.

(ii) If b = 0, then every vector u = (k_1, k_2, ..., k_n) in R^n is a solution.

The following theorem applies.

Theorem: Let L be a system of linear equations that contains a degenerate
equation, say with constant term b.

(i) If b ≠ 0, then the system has no solution.

(ii) If b = 0, then the degenerate equation may be deleted from the system without
changing the solution set of the system.

Part (i) comes from the fact that the degenerate equation has no solution, so the
system has no solution.

Part (ii) comes from the fact that every element in Rn is a solution of the
degenerate equation.

Leading Unknown in a Non-degenerate Linear Equation

Let ‗L‘ be a non-degenerate linear equation. This means one or more of the
coefficients of L are not zero. By the leading unknown of L, we mean the first
unknown in L with a nonzero coefficient.

For example, x_3 and y are the leading unknowns, respectively, in the equations

0x_1 + 0x_2 + 5x_3 + 6x_4 = 8 and 0x + 3y - 4z = 5

We frequently omit terms with zero coefficients, so the above equations would be
written as 5x_3 + 6x_4 = 8 and 3y - 4z = 5.

In such a case, the leading unknown appears first.


Linear Combination of System of Linear equations

Consider the system of m linear equations in n unknowns given above. Let L be the
linear equation obtained by multiplying the m equations by constants
c_1, c_2, ..., c_m respectively, and then adding the resulting equations.
Specifically, let L be the following linear equation:

(c_1 a_11 + ... + c_m a_m1) x_1 + ... + (c_1 a_1n + ... + c_m a_mn) x_n = c_1 b_1 + ... + c_m b_m

Then L is called a linear combination of the equations in the system.

EXAMPLE: Let L1, L2, L3 denote, respectively, the three equations in the system

x_1 + x_2 + 4x_3 + 3x_4 = 5
2x_1 + 3x_2 + x_3 - 2x_4 = 1
x_1 + 2x_2 - 5x_3 + 4x_4 = 3

Let L be the equation obtained by multiplying L1, L2, L3 by 3, -2, 4, respectively,
and then adding. Namely,

3L1:  3x_1 + 3x_2 + 12x_3 + 9x_4 = 15
-2L2: -4x_1 - 6x_2 - 2x_3 + 4x_4 = -2
4L3:  4x_1 + 8x_2 - 20x_3 + 16x_4 = 12

Then the sum L will be 3x_1 + 5x_2 - 10x_3 + 29x_4 = 25.

Then L is a linear combination of L1, L2, L3. As expected, the solution
u = (-8, 6, 1, 1) of the system is also a solution of L. That is, substituting 'u' in L,
we obtain a true statement:

3(-8) + 5(6) - 10(1) + 29(1) = 25

The following theorem holds.

Theorem: Two systems of linear equations have the same solutions if and only if
each equation in each system is a linear combination of the equations in the other
system.


PRACTICE: Show that if two non-degenerate linear equations have the same solution
set, then the two equations are identical up to a non-zero constant multiple.

Equivalent Systems: Two systems of linear equations are said to be equivalent


if they have the same solutions.

Elementary Operations (Elementary Row Operations)

The basic method for solving a linear system is to perform algebraic operations on
the system that do not alter the solution set and that produce a succession of
increasingly simpler system, until a point is reached where it can be ascertained
whether the system is consistent, and if so, what its solutions are. Typically the
algebraic operations are:

1. Multiply an equation through by a non – zero constant.


2. Interchange two equations.
3. Add a constant times one equation to another.

Since the rows (horizontal lines) of an augmented matrix correspond to the


equations in the associated system, these three operations correspond to the
following operations on the rows of the augmented matrix;

1. Multiply a row through by a non-zero constant.
2. Interchange two rows.
3. Add a constant times one row to another; that is, replace a row by the sum
of itself and a multiple of another row.

These are called Elementary Row Operations on a matrix.
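The following Python sketch (ours; the function names are illustrative, not from the text) implements the three elementary row operations on a NumPy array:

```python
# The three elementary row operations, acting in place on a NumPy array.
import numpy as np

def scale(A, i, c):            # 1. multiply row i by a non-zero constant c
    A[i] = c * A[i]

def swap(A, i, j):             # 2. interchange rows i and j
    A[[i, j]] = A[[j, i]]

def add_multiple(A, i, j, c):  # 3. add c times row j to row i
    A[i] = A[i] + c * A[j]

M = np.array([[0., 2., 4.],
              [1., 3., 5.]])
swap(M, 0, 1)                  # bring a non-zero entry to the top
scale(M, 1, 0.5)               # make the second pivot a leading 1
add_multiple(M, 0, 1, -3.0)    # clear the entry above the second pivot
print(M)                       # [[ 1.  0. -1.]
                               #  [ 0.  1.  2.]]
```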

The main property of the above elementary operations is contained in the


following theorem.


Theorem: Suppose a system M of linear equations is obtained from a system L
of linear equations by a finite sequence of elementary operations. Then M and L
have the same solutions.

Remark: Sometimes (say to avoid fractions when all the given scalars are
integers) we may apply step 1 and 3 in one step.

EXAMPLE: In the left column we solve a system of linear equations by


operating on the equations in the system, and in the right column we solve the
system by operation on the rows of the augmented matrix.

[ ]

[ ]

[ ]


[ ]

[ ]

[ ]


[ ]

. / . / . / . /

[ ]

Thus, the solution is obtained, and we may write it in ordered-triple form as (x, y, z).


PRACTICE:

1. Find a single elementary row operation that will create a ‗1‘ in the upper left
corner of the given augmented matrices and will not create any fractions in
its first rows.

a) [ ]

b) [ ]

c) [ ]

d) [ ]

2. Find all values of ‗k‘ for which the given augmented matrices correspond to
a consistent linear system.
a) 0 1

b) 0 1

c) 0 1

d) 0 1


SMALL SQUARE SYSTEMS OF LINEAR EQUATIONS

This section considers the special case of one equation in one unknown, and
two equations in two unknowns. These simple systems are treated separately
because their solution sets can be described geometrically, and their properties
motivate the general case.

Linear Equation in One Unknown

Theorem: Consider the linear equation ax = b.

(i) If a ≠ 0, then x = b/a is a unique solution of ax = b.

(ii) If a = 0, but b ≠ 0, then ax = b has no solution.

(iii) If a = 0 and b = 0, then every scalar 'k' is a solution of ax = b.

Proof:

P-I: Consider the linear equation ax = b………(i)

If a ≠ 0, then the scalar b/a exists. Substituting it in (i): a(b/a) = b, hence b/a is a solution.

To prove the solution is unique, consider that k is another solution of (i), so that
ak = b. Then k = b/a, and the solution is unique.

P-II: Consider the linear equation ax = b………(i)

If a = 0, then for any scalar 'k' we have ak = 0. Now if b ≠ 0 then
ak ≠ b. Then 'k' will not be a solution of (i), and the result is proved.

P-III: Consider the linear equation ax = b………(i)

If a = 0, then for any scalar 'k' we have ak = 0. Now if b = 0 then
ak = b. Then every 'k' will be a solution of (i), and the result is proved.


Example: Solve the following equations, each involving only one unknown.

a) 4x - 1 = x + 5
b) 2x - 5 - x = x + 3
c) 4 + x - 3 = 2x + 1 - x

Solution:

(a) Rewrite the equation in standard form, obtaining 3x = 6. Then x = 2 is the
unique solution [Theorem (i)].

(b) Rewrite the equation in standard form, obtaining 0x = 8. The equation has no
solution [Theorem (ii)].

(c) Rewrite the equation in standard form, obtaining 0x = 0. Then every scalar k
is a solution [Theorem (iii)].

PRACTICE: Solve the followings;

a)
b)
c)
d)

System of Two Linear Equations in Two Unknowns (2 2 System)

Consider a system of two non-degenerate linear equations in two unknowns 'x'
and 'y', which can be put in the standard form

A_1 x + B_1 y = C_1
A_2 x + B_2 y = C_2

Because the equations are non-degenerate, A_1 and B_1 are not both zero, and A_2 and
B_2 are not both zero.

The general solution of the system (above) belongs to one of three types as
indicated in Figure. If R is the field of scalars, then the graph of each equation is a
line in the plane R2 and the three types may be described geometrically as pictured
in Figure below.


Specifically,

(1) The system has exactly one solution.

Here the two lines intersect in one point [Fig.(a)]. This occurs when the lines have
distinct slopes or, equivalently, when the coefficients of 'x' and 'y' are not
proportional:

A_1/A_2 ≠ B_1/B_2 or, equivalently, A_1 B_2 - A_2 B_1 ≠ 0

(2) The system has no solution.

Here the two lines are parallel [Fig.(b)]. This occurs when the lines have the same
slopes but different 'y' intercepts, or when

A_1/A_2 = B_1/B_2 ≠ C_1/C_2

(3) The system has an infinite number of solutions.

Here the two lines coincide [Fig.(c)]. This occurs when the lines have the same
slopes and same 'y' intercepts, or when the coefficients and constants are
proportional,

A_1/A_2 = B_1/B_2 = C_1/C_2


Remark: The following expression and its value is called a determinant of order
two:

| A_1  B_1 |
| A_2  B_2 |  =  A_1 B_2 - A_2 B_1

Thus, the system A_1 x + B_1 y = C_1, A_2 x + B_2 y = C_2 has a unique solution if and
only if the determinant of its coefficients is not zero. (We show later that this
statement is true for any square system of linear equations.)

Example: Solve the following system.

x + 2y = 4
2x + 3y = 5

Solution:

Eliminating 'x' by operating -2(i) + (ii): -y = -3, so y = 3.

Substituting y = 3 in x + 2y = 4 we get x = -2.

Thus the pair (-2, 3) is the unique solution.

Example: Solve the following system.

x + 2y = 4
2x + 4y = 5

Solution:

Eliminating 'x' by operating -2(i) + (ii): 0x + 0y = -3.

This is a degenerate equation with a non-zero constant. Hence, this equation
and the system have no solution. Geometrically, the lines corresponding to the
equations are parallel.


Example: Solve the following system.

x + 2y = 4
2x + 4y = 8

Solution:

Eliminating 'x' by operating -2(i) + (ii): 0x + 0y = 0.

This is a degenerate equation where the constant term is also zero. Thus the system
has infinite solutions, which correspond to the solutions of either equation.
Geometrically, the lines corresponding to the equations coincide.

To find the general solution; we have

x = 4 - 2y, and put y = t

(x, y) = (4 - 2t, t), t any scalar.

PRACTICE: Solve the followings;

a)
b)
c)
d)


Example: Consider the following system.

x + ay = 4
ax + 9y = b

a) For which values of 'a' does the system have a unique solution?
b) Find those pairs of values (a, b) for which the system has more than one
solution.

Solution:

a) Eliminating 'x' by operating -a(i) + (ii): (9 - a^2) y = b - 4a.
Then the system has a unique solution if and only if the coefficient of 'y' is
not zero, i.e. 9 - a^2 ≠ 0, or a ≠ ±3.
b) From the same elimination, (9 - a^2) y = b - 4a, the system has more than one
solution if both sides are zero. Thus 9 - a^2 = 0 and b - 4a = 0.

When a = 3, then b = 4a gives b = 12.

When a = -3, then b = 4a gives b = -12.

Thus (3, 12) and (-3, -12) are the pairs for which the system has more than one
solution.

PRACTICE:

Under what conditions on ‗a‘ and ‗b‘ will the following linear system have no
solutions, one solution, infinitely many solution. Or, for what value of ‗a‘ the
system have unique solution. And for what pair of (a,b) does each system have
more than one solution?

a) and
b) and
c) and
d) and
e) , , ( )
f) , , ( )


Systems in Triangular and Echelon Forms

The main method for solving systems of linear equations, Gaussian elimination, is
treated in the next Section.

Here we consider two simple types of systems of linear equations: systems


in triangular form and the more general systems in echelon form.

Triangular Form

Consider the following system of linear equations, which is in triangular form:

2x_1 + 3x_2 - x_3 + x_4 = 9
      5x_2 + x_3 - 2x_4 = 5
             x_3 - 2x_4 = -5
                   2x_4 = 8

That is, the first unknown x_1 is the leading unknown in the first equation, the
second unknown x_2 is the leading unknown in the second equation, and so on.

Definition: A system in which the first unknown x_1 is the leading unknown in
the first equation, the second unknown x_2 is the leading unknown in the second
equation, and so on, is called a Triangular system.

(An example is given above.)

Thus, in particular, the system is square and each leading unknown is directly to
the right of the leading unknown in the preceding equation. Such a triangular
system always has a unique solution, which may be obtained by back-substitution.

That is,

(1) First solve the last equation for the last unknown to get x_4 = 8/2 = 4.

(2) Then substitute this value in the next-to-last equation, and solve for the
next-to-last unknown as follows:

x_3 - 2(4) = -5 or x_3 = 3

(3) Now substitute x_3 = 3 and x_4 = 4 in the second equation, and solve for the
second unknown as follows:

5x_2 + (3) - 2(4) = 5 or x_2 = 2

(4) Finally, substitute x_2 = 2, x_3 = 3, and x_4 = 4 in the first equation, and solve
for the first unknown as follows:

2x_1 + 3(2) - (3) + (4) = 9 or x_1 = 1

Thus, x_1 = 1, x_2 = 2, x_3 = 3, and x_4 = 4, or, equivalently, the vector
u = (1, 2, 3, 4) is the unique solution of the system.

Remark: There is an alternative form for back-substitution (which will be used
when solving a system using the matrix format). Namely, after first finding the
value of the last unknown, we substitute this value for the last unknown in all the
preceding equations before solving for the next-to-last unknown. This yields a
triangular system with one less equation and one less unknown. For example, in
the above triangular system, we substitute x_4 = 4 in all the preceding equations to
obtain the triangular system

2x_1 + 3x_2 - x_3 = 5
      5x_2 + x_3 = 13
             x_3 = 3

We then repeat the process using the new last equation. And so on.

PRACTICE: Solve the following Triangular system: (by Back substitution)


Pivoting:

Changing the order of equations is called pivoting. It has two types.

1. Partial pivoting 2. Total pivoting

Partial pivoting:

In partial pivoting we interchange rows where the pivotal element is zero.

In Partial Pivoting, if the pivotal coefficient a_ii happens to be zero
or near to zero, the i-th column elements are searched for the numerically largest
element. If the j-th row (j > i) contains this element, then we interchange the i-th
equation with the j-th equation and proceed with the elimination. This process is
continued whenever pivotal coefficients become zero during elimination.

Total pivoting:

In Full (complete, total) pivoting we interchange rows as well as columns.

In Total Pivoting we look for the absolutely largest coefficient in the
entire system and start the elimination with the corresponding variable, using this
coefficient as the pivotal coefficient (this may change both row and column).
Similarly in the further steps. It is more complicated than Partial Pivoting;
Partial Pivoting is preferred for hand calculation.

Why is Pivoting important?

Because pivoting can make the difference between nonsense and a perfect result:
dividing by a zero (or very small) pivot breaks the elimination or greatly
amplifies round-off error.

Pivotal coefficient:

For elimination methods (Gaussian elimination, Gauss-Jordan elimination), the
coefficient of the first unknown in the first equation is called the Pivotal
Coefficient.


Back substitution:

The analogous algorithm for an upper triangular system Ux = y of the form

[ u_11  u_12  ...  u_1n ] [ x_1 ]   [ y_1 ]
[  0    u_22  ...  u_2n ] [ x_2 ] = [ y_2 ]
[ ..................... ] [ ... ]   [ ... ]
[  0     0    ...  u_nn ] [ x_n ]   [ y_n ]

is called Back Substitution.

The solution x_i is computed by

x_n = y_n / u_nn,   x_i = ( y_i - Σ_{j=i+1}^{n} u_ij x_j ) / u_ii,   i = n-1, ..., 1

Forward substitution:

The analogous algorithm for a lower triangular system Lx = y of the form

[ l_11   0    ...   0   ] [ x_1 ]   [ y_1 ]
[ l_21  l_22  ...   0   ] [ x_2 ] = [ y_2 ]
[ ..................... ] [ ... ]   [ ... ]
[ l_n1  l_n2  ...  l_nn ] [ x_n ]   [ y_n ]

is called Forward Substitution.

The solution x_i is computed by

x_1 = y_1 / l_11,   x_i = ( y_i - Σ_{j=1}^{i-1} l_ij x_j ) / l_ii,   i = 2, ..., n
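The following Python sketch (ours, not the text's) implements back substitution exactly as in the formula above; it assumes every diagonal entry u_ii is non-zero, and forward substitution is entirely analogous:

```python
# Back substitution for an upper triangular system Ux = y.

def back_substitution(U, y):
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]       # x_i = (y_i - sum) / u_ii
    return x

# 2x + 3y - z = 5;  4y + z = 6;  2z = 4  ->  z = 2, y = 1, x = 2
U = [[2, 3, -1], [0, 4, 1], [0, 0, 2]]
print(back_substitution(U, [5, 6, 4]))    # [2.0, 1.0, 2.0]
```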


Echelon Form, Pivot and Free Variables

The following system of linear equations is said to be in echelon form:

2x_1 + 6x_2 - x_3 + 4x_4 - 2x_5 = 15
             x_3 + 2x_4 + 2x_5 = 5
                   3x_4 - 9x_5 = 6

That is, no equation is degenerate and the leading unknown in each equation other
than the first is to the right of the leading unknown in the preceding equation. The
leading unknowns in the system, x_1, x_3, x_4, are called pivot variables, and the
other unknowns, x_2 and x_5, are called free variables. The positions in which the
leading coefficients occur are called the pivot positions, and their columns the
pivot columns.

Generally speaking, an echelon system or a system in echelon form has the
following form:

a_11 x_1 + a_12 x_2 + ............ + a_1n x_n = b_1
          a_2j2 x_j2 + ......... + a_2n x_n = b_2
          ..................................
              a_rjr x_jr + ... + a_rn x_n = b_r

where 1 < j_2 < .... < j_r and a_11, a_2j2, ....., a_rjr are not zero. The pivot variables are
x_1, x_j2, ..., x_jr. Note that r ≤ n.

The solution set of any echelon system is described in the following theorem

Theorem: Consider a system of linear equations in echelon form, say with 'r'
equations in 'n' unknowns. There are two cases:
(i) r = n. That is, there are as many equations as unknowns (triangular form). Then
the system has a unique solution.
(ii) r < n. That is, there are more unknowns than equations. Then we can arbitrarily
assign values to the n - r free variables and solve uniquely for the 'r' pivot
variables, obtaining a solution of the system.


Suppose an echelon system contains more unknowns than equations. Assuming the
field F is infinite, the system has an infinite number of solutions, because each of
the free variables may be assigned any scalar.
The general solution of a system with free variables may be described in
either of two equivalent ways. One description is called the "Parametric Form"
of the solution, and the other description is called the "Free-Variable Form."

Parametric Form

Consider again the echelon system

2x_1 + 6x_2 - x_3 + 4x_4 - 2x_5 = 15
             x_3 + 2x_4 + 2x_5 = 5
                   3x_4 - 9x_5 = 6

Assign arbitrary values, called parameters, to the free variables x_2 and x_5, say
x_2 = a and x_5 = b, and then use back-substitution to obtain values for the pivot
variables x_1, x_3, x_4 in terms of the parameters 'a' and 'b'. Specifically,

(1) Substitute x_5 = b in the last equation, and solve for x_4:

3x_4 - 9b = 6, so x_4 = 2 + 3b

(2) Substitute x_4 = 2 + 3b and x_5 = b into the second equation, and solve for x_3:

x_3 + 2(2 + 3b) + 2b = 5, so x_3 = 1 - 8b

(3) Substitute x_2 = a, x_3 = 1 - 8b, x_4 = 2 + 3b, x_5 = b into the first equation,
and solve for x_1:

2x_1 + 6a - (1 - 8b) + 4(2 + 3b) - 2b = 15, so x_1 = 4 - 3a - 9b

Accordingly, the general solution in parametric form is

x_1 = 4 - 3a - 9b; x_2 = a; x_3 = 1 - 8b; x_4 = 2 + 3b; x_5 = b; or, equivalently,
u = (4 - 3a - 9b, a, 1 - 8b, 2 + 3b, b) where a and b are arbitrary numbers.


Free-Variable Form

Consider again the same echelon system.

Use back-substitution to solve for the pivot variables x_1, x_3, x_4 directly in terms of
the free variables x_2 and x_5.

That is, the last equation gives x_4 = 2 + 3x_5.

Substitution in the second equation yields x_3 = 1 - 8x_5,

and then substitution in the first equation yields x_1 = 4 - 3x_2 - 9x_5.

Accordingly, x_1 = 4 - 3x_2 - 9x_5; x_2 = free variable;

x_3 = 1 - 8x_5; x_4 = 2 + 3x_5; x_5 = free variable

or, equivalently, u = (4 - 3x_2 - 9x_5, x_2, 1 - 8x_5, 2 + 3x_5, x_5) is the free-
variable form for the general solution of the system.

We emphasize that there is no difference between the above two forms of
the general solution, and the use of one or the other to represent the general
solution is simply a matter of taste.

Remark: A particular solution of the above system can be found by assigning any
values to the free variables and then solving for the pivot variables by back-
substitution. For example, setting x_2 = 1 and x_5 = 1, we obtain

x_4 = 2 + 3(1) = 5; x_3 = 1 - 8(1) = -7; x_1 = 4 - 3(1) - 9(1) = -8

Thus, u = (-8, 1, -7, 5, 1) is the particular solution corresponding to
x_2 = 1 and x_5 = 1.


PRACTICE:

1. Determine the ‗Pivot‖ and ‗Free variables‘ in each of the followings;

2. Solve using parametric form as well as free variable form assigning Pivot.


Unique Solution:

If the reduced row echelon form of the augmented matrix of a system has a leading 1
in every column except the last, then the system has a unique solution.

An example is given as follows;

Suppose that the augmented matrix for a linear system in the unknowns x_1, x_2, x_3
has been reduced by elementary row operations to

[ 1  0  0   3 ]
[ 0  1  0  -1 ]
[ 0  0  1   2 ]

This matrix is in reduced row echelon form and corresponds to the equations

x_1 = 3, x_2 = -1, x_3 = 2

Thus the system has a unique solution, namely x_1 = 3, x_2 = -1, x_3 = 2.


PRACTICE:

Below are in three unknowns (also in row reduced form discussed later) solve the
system.

a) [ ]

b) [ ]

c) [ ]

d) [ ]

e) [ ]

f) [ ]

g) [ ]

h) [ ]

i) [ ]

j) [ ]

k) [ ]


Echelon Form of a Matrix:

A matrix is said to be in echelon form if it has the following structure;

i. All the non-zero rows precede the zero rows.
ii. The first non-zero element in each row is 1.
iii. The number of zeros preceding the first non-zero element 1 in each
row should be greater than in its previous row.

For example followings are in echelon form.

[ ] [ ] [ ]

Reduced Echelon Form of a Matrix:

A matrix is said to be in reduced echelon form if it has the following structure;

i. The matrix should be in echelon form.
ii. If the first non-zero element 1 in the i-th row of the matrix lies in the j-th
column, then all other elements in the j-th column are zero.

For example followings are in reduced echelon form.

[ ] [ ] [ ] [ ]

Remark about echelon forms:

i. Every matrix has a unique reduced row echelon form.


ii. Row echelon forms are not unique.
iii. Although row echelon forms are not unique, the reduced row echelon form
and all row echelon forms of a matrix A have the same number of zero rows,
and the leading 1‘s always occur in the same positions.


Row Echelon Form of a Matrix:

A matrix is said to be in row echelon form if it has the following structure;

i. If a row does not consist entirely of zeros, then the first non-zero number
in the row is a 1. We call this a leading 1.
ii. If there are any rows that consist entirely of zeros, then they are grouped
together at the bottom of the matrix.
iii. In any two successive rows that do not consist entirely of zeros, the leading
1 in the lower row occurs farther to the right than the leading 1 in the higher
row.

For example followings are in row echelon form.

[ ] [ ] [ ]

Row Reduced Echelon Form of a Matrix:

A matrix is said to be in row reduced echelon form if it has the following structure;

i. If a row does not consist entirely of zeros, then the first non-zero number
in the row is a 1. We call this a leading 1.
ii. If there are any rows that consist entirely of zeros, then they are grouped
together at the bottom of the matrix.
iii. In any two successive rows that do not consist entirely of zeros, the leading
1 in the lower row occurs farther to the right than the leading 1 in the higher
row.
iv. Each column that contains a leading 1 has zeros everywhere else in that
column.

For example followings are in row reduced echelon form.

[ ] [ ] 0 1 [ ]


Remark: A matrix in reduced row echelon form is of necessity in row echelon


form, but not conversely (a matrix in echelon form may not be in reduced row
echelon form).

For example followings are in echelon form but not reduced row echelon form.

[ ] [ ] [ ]

Remark (difference): As we saw in the previous examples, a matrix in row echelon
form has zeros below each leading 1, whereas a matrix in reduced row echelon form
has zeros below and above each leading 1. Thus, with any real numbers substituted
for the *'s, all matrices of the following types are in row echelon form;

[ ] [ ] [ ]

[ ]
All matrices of the following types are in row reduced echelon form;

[ ] [ ] [ ]

[ ]


PRACTICE:

Determine whether the matrix is in row echelon form, reduced row echelon form,
both or neither.

i. [ ] iii. [ ]

iv. 0 1
ii. [ ]

vi. [ ]
v. [ ]

vii. 0 1
ix. [ ]

viii. [ ]
x. [ ]

xi. [ ]
xiii. [ ]

xii. [ ]
xiv. 0 1


Gaussian Elimination

The main method for solving the general system of linear equations is called
Gaussian elimination. It essentially consists of two parts:

Part A. (Forward Elimination) Step-by-step reduction of the system yielding either


a degenerate equation with no solution (which indicates the system has no
solution) or an equivalent simpler system in triangular or echelon form.

Part B. (Backward Elimination) Step-by-step back-substitution to find the solution


of the simpler system.

Gaussian Elimination steps (Procedure):

i. In this method we reduce the augmented matrix into echelon form. In this
way, the value of last variable is calculated.
ii. Then by backward substitution, the values of remaining unknown can be
calculated.
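As an illustration of this procedure (a sketch of ours, not the text's own program), the following Python function performs forward elimination with partial pivoting and then back substitution:

```python
# Gaussian elimination with partial pivoting on the augmented matrix,
# followed by back substitution. Assumes a square, nonsingular system.
import numpy as np

def gauss_solve(A, b):
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = len(M)
    for i in range(n):
        p = i + np.argmax(np.abs(M[i:, i]))    # partial pivoting: largest pivot
        M[[i, p]] = M[[p, i]]                  # swap the pivot row to the top
        for r in range(i + 1, n):
            M[r] -= (M[r, i] / M[i, i]) * M[i] # eliminate below the pivot
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution
        x[i] = (M[i, -1] - M[i, i+1:n] @ x[i+1:]) / M[i, i]
    return x

# x + y + 2z = 9;  2x + 4y - 3z = 1;  3x + 6y - 5z = 0
print(gauss_solve([[1, 1, 2], [2, 4, -3], [3, 6, -5]], [9, 1, 0]))  # [1. 2. 3.]
```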

Example: Solve the linear system with the following augmented matrix using the Gaussian elimination method.

[ ]

Solution: Firstly we reduce the given matrix in echelon form.

Step – I: locate the left most column that does not consist entirely of zeros.

[ ]

Step – II: interchange the top row with another row, if necessary, to bring a non –
zero entry to the top of the column found in step – I.

[ ]


Step – III: if the entry that is now at the top of the column found in step – I is ‗a‘,
multiply the first row by ‗1/a‘ in order to introduce a leading 1.

[ ]

Step – IV: add suitable multiples of the top row to the rows below so that all
entries below the leading 1 become zero.

[ ]

Step – V: now cover the top row in the matrix and begin again with step – I
applied to the submatrix that remains or remaining rows. Continue in this way until
the entire matrix is in row echelon form.

[ ]

[ ]

[ ]

Hence the above matrix is in row echelon form.

Thus the corresponding system can be read off.


Solving for the leading variables in terms of the free variables, and then
expressing the general solution of the system parametrically by assigning
the free variables arbitrary values 'r' and 's' respectively, yields our
required solution.

PRACTICE:

1. Solve the linear system by Gauss‘s Elimination method.


i.

ii.

iii.

iv.

2. Find two different row echelon forms of 0 1


Gauss Jordan Elimination

Procedure:

i. In this method we reduce the augmented matrix into reduced echelon form.
In this way, the value of last variable is calculated.
ii. Then by backward substitution, the values of remaining unknown can be
calculated.
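A quick sketch using SymPy (our choice of tool; SymPy is not mentioned in the text): its rref() method carries a matrix to reduced row echelon form exactly as the steps below describe.

```python
# Exact reduced row echelon form of an augmented matrix [A | b] with SymPy.
from sympy import Matrix

M = Matrix([[1, 1, 2, 9],
            [2, 4, -3, 1],
            [3, 6, -5, 0]])
R, pivots = M.rref()
print(R)        # Matrix([[1, 0, 0, 1], [0, 1, 0, 2], [0, 0, 1, 3]])
print(pivots)   # (0, 1, 2) -- the pivot columns
```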

Example: Solve the linear system with the following augmented matrix using the Gauss-Jordan elimination method.

[ ]

Solution: Firstly we reduce the given matrix in reduced echelon form.

Step – I: locate the left most column that does not consist entirely of zeros.

[ ]

Step – II: interchange the top row with another row, if necessary, to bring a non –
zero entry to the top of the column found in step – I.

[ ]

Step – III: if the entry that is now at the top of the column found in step – I is ‗a‘,
multiply the first row by ‗1/a‘ in order to introduce a leading 1.

[ ]

Step – IV: add suitable multiples of the top row to the rows below so that all
entries below the leading 1 become zero.

[ ]


Step – V: now cover the top row in the matrix and begin again with step – I
applied to the submatrix that remains or remaining rows. Continue in this way until
the entire matrix is in row echelon form.

[ ]

[ ]

[ ]

Step – VI: Beginning with the last non – zero row and working upward, add
suitable multiples of each row to the rows above to introduce zeros above the
leading 1‘s.

[ ]

[ ]

[ ]

Hence the above matrix is in row reduced echelon form, and the corresponding system can be read off.

Solving for the leading variables in terms of the free variables gives the general solution.


Finally, we express the general solution of the system parametrically by assigning
the free variables arbitrary values 'r' and 's' respectively. This yields our
required solution.

Example:

Solve the linear system by Gauss‘s Jordan Elimination method.

Solution:

[ ]

[ ]

[ ]

[ ]


[ ]

Echelon form.
[ ]

[ ]

reduced echelon form.


[ ]

Thus the corresponding system can be read off from this matrix.

Solving for the leading variables in terms of the free variables, and then
expressing the general solution parametrically by assigning the free variables
arbitrary values 'r', 's' and 't' respectively, yields our required solution.

☼ Find reduced row echelon forms of [ ] without introducing

fractions at any intermediate stages.


PRACTICE:

1. Solve the linear system by Gauss‘s Jordan Elimination method.

i. ii.

iii. iv.

2. Solve the following system for x, y and z. using any method.

Example: Solve the linear system by the Gaussian elimination method, or show that the
system has no solution.

[ ]

Solution:

[ ] [ ]

[ ]

The matrix is now in echelon form. The third row of the echelon matrix
corresponds to a degenerate equation 0x_1 + 0x_2 + 0x_3 = b with b ≠ 0, which has
no solution; thus the system has no solution.


Homogeneous system of linear equations

A system of linear equations is said to be Homogeneous if the constant terms are
all zeros; i.e. the system has the form;

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = 0
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = 0
.........................................
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = 0

Remark:

 Every homogeneous system of linear equations is consistent, because all such
systems have x_1 = 0, x_2 = 0, ....., x_n = 0 as a solution. This solution is
called the Trivial Solution. If there are other solutions, they are called
nontrivial solutions.
 Because a homogeneous linear system always has the trivial solution, there
are only two possibilities for its solutions:
i. The system has only the trivial solution.
ii. The system has infinitely many solutions in addition to the trivial
solution.

Example:

Using Gauss Jordan‘s Elimination method matrix [ ]

can be converted into row reduced echelon form as [ ]

Thus the corresponding system can be read off, and solving for the leading
variables in terms of the free variables yields the general solution.


PRACTICE:

1) Solve the linear system by any method, Gauss elimination method or Gauss‘s
Jordan Elimination method
i.

ii.

iii.

iv.

v.

vi.

vii.


viii.

ix.

x.
find

2) Solve the following systems where ‗a‘ , ‗b‘ and ‗c‘ are constants.
a) , ,
b) ,

Remark: A homogeneous system AX = 0 with more unknowns than equations


has a nonzero solution.

Nonhomogeneous and Associated Homogeneous Systems

Let AX = B be a nonhomogeneous system of linear equations. Then AX = 0 is


called the associated homogeneous system. For example,

x + 2y = 5        x + 2y = 0
3x - y = 1        3x - y = 0

show a nonhomogeneous system and its associated homogeneous system.

Free variable theorem for Homogeneous system:

If a homogeneous linear system has 'n' unknowns, and if the reduced row echelon
form of its augmented matrix has 'r' non-zero rows, then the system has n - r
free variables.
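A small SymPy sketch (ours, not the text's) checking the free-variable count n - r, where r is the rank of the coefficient matrix:

```python
# n - r free variables for a homogeneous system Ax = 0:
# r = number of non-zero rows of the reduced row echelon form = rank(A).
from sympy import Matrix

A = Matrix([[1, 2, -1, 0],
            [2, 4, 0, 2],
            [3, 6, -1, 2]])   # n = 4 unknowns
r = A.rank()
print(r, A.cols - r)          # rank r = 2, so 4 - 2 = 2 free variables
```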


Chapter #2

MATRICES AND MATRIX OPERATIONS


Matrices:

A matrix A over a field K or, simply, a matrix A (when K is implicit) is a
rectangular array of scalars, usually presented in the following form:

A = [ a_11  a_12  ...  a_1n ]
    [ a_21  a_22  ...  a_2n ]
    [ ...................  ]
    [ a_m1  a_m2  ...  a_mn ]

The numbers a_ij in the array are called the entries in the matrix.

Note that

 The element a_ij, called the ij-entry or ij-element, appears in row i and column
j. We frequently denote such a matrix by simply writing A = [a_ij].
 A matrix with 'm' rows and 'n' columns is called an m-by-n matrix,
written m x n. The pair of numbers m and n is called the size of the matrix.
 Two matrices A and B are equal, written A = B, if they have the same size
and if corresponding elements are equal. Thus, the equality of two m x n
matrices is equivalent to a system of mn equalities, one for each
corresponding pair of elements.
 A matrix with only one row is called a row matrix or row vector, and a
matrix with only one column is called a column matrix or column vector.
 A matrix whose entries are all zero is called a zero matrix and will usually
be denoted by 0 or ⃗ .
 Matrices whose entries are all real numbers are called real matrices and are
said to be matrices over R.
 Analogously, matrices whose entries are all complex numbers are called
complex matrices and are said to be matrices over C. This text will be
mainly concerned with such real and complex matrices.


Square matrix: A square matrix is a matrix with the same number of rows as
columns. An n x n square matrix is said to be of order 'n' and is sometimes called
an n-square matrix.

Diagonal and Trace:

Let A = [a_ij] be an n-square matrix. The diagonal or main diagonal of A consists
of the elements with the same subscripts, that is, a_11, a_22, ..., a_nn.

The trace of A, written as tr(A), is the sum of the diagonal elements.

Namely, tr(A) = a_11 + a_22 + ... + a_nn

For example, let A = [ 1  2  3 ]
                     [ 4  5  6 ]
                     [ 7  8  9 ]

Then diagonal(A) = {1, 5, 9} and tr(A) = 1 + 5 + 9 = 15.

Remark: (Prove Yourself)

Suppose A = [a_ij] and B = [b_ij] are n-square matrices and 'k' is a scalar. Then

1) tr(A + B) = tr(A) + tr(B)    3) tr(A^T) = tr(A)
2) tr(kA) = k tr(A)             4) tr(AB) = tr(BA)
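A quick numerical check of these four properties (a sketch of ours, using NumPy with random integer matrices):

```python
# Numerical check of the four trace properties for random 3x3 matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (3, 3))
B = rng.integers(-5, 5, (3, 3))
k = 7
print(np.trace(A + B) == np.trace(A) + np.trace(B))   # True
print(np.trace(k * A) == k * np.trace(A))             # True
print(np.trace(A.T) == np.trace(A))                   # True
print(np.trace(A @ B) == np.trace(B @ A))             # True
```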

Diagonal Matrix: A square matrix D = [d_ij] is diagonal if its non-diagonal entries
are all zero. Such a matrix is sometimes denoted by D = diag(d_1, d_2, ..., d_n),
where some or all of the d_i may be zero.

For example,

[ 3  0  0 ]
[ 0 -7  0 ]      and      [ 4   0 ]
[ 0  0  2 ]               [ 0  -5 ]


Addition and Subtraction of matrices:

If A and B are matrices of the same size, then the sum A + B is the matrix obtained by
adding the entries of B to the corresponding entries of A, and the difference A - B
is the matrix obtained by subtracting the entries of B from the corresponding
entries of A.

For example, if

A = [ 2  1  0 ]        B = [ 1  3  1 ]
    [ 3 -1  4 ]  and       [ 0  2  5 ]

then A + B = [ 3  4  1 ]           A - B = [ 1  -2  -1 ]
             [ 3  1  9 ]   and             [ 3  -3  -1 ]

Remark: the addition and subtraction of two matrices are defined only if both
matrices have the same size.

e.g. A = [ 2  1  0 ]  and  B = [ 1  3 ]  are not defined for the above operations.
         [ 3 -1  4 ]           [ 0  2 ]

Scalar Multiplication:

If A is any matrix and c is any scalar, then the product cA is the matrix obtained by
multiplying each entry of the matrix A by c. the matrix cA is said to be a Scalar
Multiple of A.

For example

[ ] and [ ]

Then ( ) [ ] and [ ]


Matrix Multiplication (Entry method):

If A is an m x r matrix and B is an r x n matrix, then the product AB is the m x n
matrix whose entries are determined as follows;

To find the entry in the row ‗i‘ and column ‗j‘ of AB, single out row ‗i‘ from
the matrix A and column ‗j‘ from the matrix B. multiply the corresponding entries
from the row and column together, and then add up the resulting products.

For example, if A = [ 1  2  4 ]  and  B = [ 4   1  4  3 ]
                    [ 2  6  0 ]           [ 0  -1  3  1 ]
                                          [ 2   7  5  2 ]

then

AB = [ (1)(4)+(2)(0)+(4)(2)  (1)(1)+(2)(-1)+(4)(7)  (1)(4)+(2)(3)+(4)(5)  (1)(3)+(2)(1)+(4)(2) ]
     [ (2)(4)+(6)(0)+(0)(2)  (2)(1)+(6)(-1)+(0)(7)  (2)(4)+(6)(3)+(0)(5)  (2)(3)+(6)(1)+(0)(2) ]

AB = [ 12  27  30  13 ]
     [  8  -4  26  12 ]

Row Column Rule (General Product definition):

If A = [a_ij] is an m x r matrix and B = [b_ij] is an r x n matrix, then the product
AB is the m x n matrix given as follows;

(AB)_ij = a_i1 b_1j + a_i2 b_2j + ... + a_ir b_rj

Remark: the multiplication of two matrices is defined only if the number of columns
of the 1st matrix equals the number of rows of the 2nd matrix.
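The row-column rule translates directly into code. A minimal Python sketch (ours), checked against the example above:

```python
# Row-column rule: entry (i, j) of AB is a_i1*b_1j + ... + a_ir*b_rj.

def matmul(A, B):
    m, r, n = len(A), len(B), len(B[0])
    assert len(A[0]) == r, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(r)) for j in range(n)]
            for i in range(m)]

A = [[1, 2, 4], [2, 6, 0]]
B = [[4, 1, 4, 3], [0, -1, 3, 1], [2, 7, 5, 2]]
print(matmul(A, B))   # [[12, 27, 30, 13], [8, -4, 26, 12]]
```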

Transpose of a Matrix: If A = [a_ij] is an m x n matrix, then the transpose of A,
denoted by A^T, is defined to be the n x m matrix that results by interchanging the
rows and columns of A; i.e. (A^T)_ij = (A)_ji

Like A = [ 1  2  3 ]  then  A^T = [ 1  4 ]
         [ 4  5  6 ]              [ 2  5 ]
                                  [ 3  6 ]


PRACTICE:

1. Suppose that A, B, C, D and E are matrices with the following sizes;

( ) ( ) ( ) ( ) ( )

In each part, determine whether the given matrix expression is defined. For those
that are defined, give the size of the resulting matrix.

i. v. ix.
ii. vi. ( ) x. ( )
iii. vii. xi.
iv. ( ) viii. xii.

2. Use the following matrices to compute the indicated expression if it is


defined.

[ ] 0 1 0 1 [ ]

[ ]

i. v. ix. ( )
ii. vi. x. ( )
iii. vii. ( ) xi. ( )
iv. viii. ( )
xii. xvi. xix. ( )
xiii. xx. ( )
xvii.
xiv. ( )
xviii.
xv.
xxi. ( )
xxii. ( )
xxiii. ( )


Matrix form of a linear System (Already discussed)

Consider the general system of m equations in n unknowns:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
...........................................
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

Such a system has associated with it the single matrix equation AX = B:

[ a_11 ... a_1n ] [ x_1 ]   [ b_1 ]
[ .............. ] [ ... ] = [ ... ]
[ a_m1 ... a_mn ] [ x_n ]   [ b_m ]

Also

M = [ a_11 ... a_1n  b_1 ]          A = [ a_11 ... a_1n ]
    [ .................. ]   and        [ .............. ]
    [ a_m1 ... a_mn  b_m ]              [ a_m1 ... a_mn ]

The first matrix M is called the augmented matrix of the system, and the second
matrix A is called the coefficient matrix.


PRACTICE:

1. Express the given linear system as a single matrix equation


a) c)

b)
d)

2. Express the matrix equation as a system of linear equation.

a) [ ][ ] [ ] b) [ ][ ] [ ]

c) [ ][ ] [ ]

d) [ ][ ] [ ]

3. Solve the matrices for ‗a‘, ‗b‘, ‗c‘ and ‗d‘.


a) 0 1 0 1

b) 0 1 0 1

4. Solve the matrices for ‗k‘ if any that satisfy the equation.

a) , -[ ][ ]

b) , -[ ][ ]


Partitioned Matrices

A matrix can be subdivided or Partitioned into smaller matrices by inserting


horizontal and vertical rules between selected rows and columns.

For example, the following are three possible partitions of a general

matrix [ ].

The first is a partition of A into four submatrices

[ ] [ ]

The second is a partition of A into its row vectors

[ ] [ ]

The third is a partition of A into its column vectors

[ ] , -


Row method

If the product AB is defined, then the i-th row of AB can be computed directly from
the i-th row of A:

row_i(AB) = [row_i(A)] B

For example, if A = [ 1  2  4 ]  and  B = [ 4   1  4  3 ]
                    [ 2  6  0 ]           [ 0  -1  3  1 ]
                                          [ 2   7  5  2 ]

then the second row of AB is

[ 2  6  0 ] B = [ (2)(4)+(6)(0)+(0)(2)  (2)(1)+(6)(-1)+(0)(7)  (2)(4)+(6)(3)+(0)(5)  (2)(3)+(6)(1)+(0)(2) ]
              = [ 8  -4  26  12 ]

Column method

Similarly, the j-th column of AB can be computed directly from the j-th column of B:

col_j(AB) = A [col_j(B)]

For example, the third column of AB is

A [ 4 ]   [ 30 ]
  [ 3 ] = [ 26 ]
  [ 5 ]

PRACTICE:

Use the following matrices and either the row method or column method, as
appropriate, to find the indicated row or column.

[ ] and [ ]

i. The first row of


ii. The third row of
iii. The second column of
iv. The first column of
v. The third row of
vi. The third column of
vii. The first column of
viii. The third column of
ix. The second row of
x. The first column of
xi. The third column of
xii. The first row of


Linear combination of matrices:

If A_1, A_2, ..., A_n are matrices of the same size and c_1, c_2, ..., c_n are scalars, then an
expression of the form

c_1 A_1 + c_2 A_2 + ... + c_n A_n

is called a linear combination of the matrices A_1, A_2, ..., A_n with coefficients
c_1, c_2, ..., c_n.

General form: If A is an m x n matrix with columns a_1, a_2, ..., a_n and
x = (x_1, x_2, ..., x_n) is a column of scalars,

then

Ax = x_1 a_1 + x_2 a_2 + ... + x_n a_n

That is, the product Ax is a linear combination of the columns of A with the
entries of x as coefficients.

Matrix product as a Linear combination:

For example, the product

[ -1  3   2 ] [  2 ]   [  1 ]
[  1  2  -3 ] [ -1 ] = [ -9 ]
[  2  1  -2 ] [  3 ]   [ -3 ]

can be written as the linear combination

2 (-1, 1, 2)^T - 1 (3, 2, 1)^T + 3 (2, -3, -2)^T = (1, -9, -3)^T

Column of a product as a Linear combination:

Since AB = [ 12  27  30  13 ]  for A = [ 1  2  4 ]  and B as before,
           [  8  -4  26  12 ]          [ 2  6  0 ]

each column of AB is a linear combination of the columns of A:

(12, 8)^T  = 4 (1, 2)^T + 0 (2, 6)^T + 2 (4, 0)^T
(27, -4)^T = 1 (1, 2)^T - 1 (2, 6)^T + 7 (4, 0)^T
(30, 26)^T = 4 (1, 2)^T + 3 (2, 6)^T + 5 (4, 0)^T
(13, 12)^T = 3 (1, 2)^T + 1 (2, 6)^T + 2 (4, 0)^T


PRACTICE: Use the following matrices and linear combination, to find the
indicated operations.

[ ] and [ ]

i. Each column vector of as a linear combination of the column vector of

ii. Each column vector of as a linear combination of the column vector of

iii. Each column vector of as a linear combination of the column vector of

iv. Each column vector of as a linear combination of the column vector of

Column Row expansion

Suppose that an m x r matrix A is partitioned into its 'r' column vectors
c_1, c_2, ..., c_r (each of size m x 1) and an r x n matrix B is partitioned into its 'r'
row vectors r_1, r_2, ..., r_r (each of size 1 x n). Each term in the sum

c_1 r_1 + c_2 r_2 + ... + c_r r_r

has size m x n, so the sum itself is an m x n matrix.

So from the above discussion we write the column-row expansion of AB as follows;

AB = c_1 r_1 + c_2 r_2 + ... + c_r r_r

Question: Find the column-row expansion of the product

[ 1  2 ] [ 5  6 ]
[ 3  4 ] [ 7  8 ]

Solution: using the columns of A and the rows of B as follows;

AB = [ 1 ] [ 5  6 ] + [ 2 ] [ 7  8 ]
     [ 3 ]            [ 4 ]

   = [  5   6 ] + [ 14  16 ] = [ 19  22 ]
     [ 15  18 ]   [ 28  32 ]   [ 43  50 ]
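A NumPy sketch (ours, not the text's) verifying the column-row expansion on the earlier 2 x 3 and 3 x 4 matrices:

```python
# Column-row expansion: AB equals the sum of the outer products
# (column j of A)(row j of B).
import numpy as np

A = np.array([[1, 2, 4], [2, 6, 0]])
B = np.array([[4, 1, 4, 3], [0, -1, 3, 1], [2, 7, 5, 2]])
expansion = sum(np.outer(A[:, j], B[j, :]) for j in range(A.shape[1]))
print(np.array_equal(expansion, A @ B))   # True
```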


PRACTICE:

Use the column row expansion of AB to express this product as a sum of matrices.

1) 0 1 and 0 1

2) 0 1 and 0 1

3) 0 1 and [ ]

4) 0 1 and [ ]

Square Root of a Matrix:

A matrix B is said to be a square root of a matrix A if BB = A.

1) Find two square roots of 0 1

2) How many different square roots can you find of 0 1?


3) Do you think that every matrix has at least one square root? Explain
your reasoning.

Properties of matrix arithmetic (addition and scalar multiplication):

i. A + B = B + A (Commutative law for matrix addition)
ii. A + (B + C) = (A + B) + C (Associative law for matrix addition)
iii. A(BC) = (AB)C (Associative law for matrix multiplication)
iv. A(B + C) = AB + AC (Left distributive law)
v. (B + C)A = BA + CA (Right distributive law)
vi. A(B - C) = AB - AC        x. (a + b)C = aC + bC
vii. (B - C)A = BA - CA       xi. (a - b)C = aC - bC
viii. a(B + C) = aB + aC      xii. a(bC) = (ab)C
ix. a(B - C) = aB - aC        xiii. a(BC) = (aB)C = B(aC)


Question: Verify the associative law for multiplication, A(BC) = (AB)C, for the following matrices

[ ] 0 1 0 1

Solution:

( ) [ ] .0 10 1/ [ ]0 1

( ) [ ]

( ) ([ ]0 1+ 0 1 [ ]0 1

( ) [ ]

Hence it is verified that A(BC) = (AB)C.

PRACTICE:

Verify the law given in following lines for given matrices and scalars.

0 1 0 1 0 1

i. Associative law for matrix addition


ii. Associative law for matrix multiplication
iii. Left distributive law ix. ( ) ( ) ( )
iv. ( ) x. ( )
v. ( ) xi. ( )
vi. ( ) xii. ( )
vii. ( ) xiii. ( )
viii. ( ) ( )


Important Remark:

In matrix arithmetic, the equality of AB and BA can fail for three reasons;

i. AB may be defined and BA may not be defined (for example, if A is 2 x 3
and B is 3 x 4).
ii. AB and BA may both be defined, but they may have different sizes (for
example, if A is 2 x 3 and B is 3 x 2).
iii. AB and BA may both be defined and have the same size, but the two
products may be different. (as given below)

Example: for the given matrices

A = [ -1  0 ]        B = [ 1  2 ]
    [  2  3 ]            [ 3  0 ]

AB = [ -1  -2 ]        BA = [  3  6 ]
     [ 11   4 ]   and       [ -3  0 ]

Clearly AB ≠ BA.

Zero Matrix: A matrix whose entries are all zero is called a zero matrix.

For example: [ 0  0 ]      [ 0 ]      [ 0  0  0 ]
             [ 0  0 ]      [ 0 ]

Remark:

i. We will denote a zero matrix by O.
ii. If we want to mention the size, we write O_{m x n}.
iii. 'O' plays the same role in matrix equations as the number '0' in the
numerical equations, i.e. A + O = O + A = A.
iv. A - A = O                        vi. OA = O and AO = O
v. O - A = -A                        vii. If A + B = A, then B = O.
viii. If AB = AC and A ≠ O, it does not in general follow that B = C (the
cancellation law does not hold for matrices).
ix. If AB = O, it does not in general follow that A = O or B = O.


Failure of the Cancellation law: for the given matrices

A = [ 0  1 ]    B = [ 1  1 ]    C = [ 2  5 ]    we have AB = AC = [ 3  4 ]
    [ 0  2 ]        [ 3  4 ]        [ 3  4 ]                      [ 6  8 ]

Although AB = AC, cancelling A from both sides of AB = AC does not lead to
the statement B = C, since B ≠ C.

Failure of Zero Product with non-zero factors:

For the given matrices A = [ 0  1 ]    B = [ 3  7 ]
                           [ 0  2 ]        [ 0  0 ]

we have AB = O, but also A ≠ O and B ≠ O.

Identity Matrix: The n-square identity or unit matrix, denoted by I_n, or simply
I, is the n-square matrix with 1's on the diagonal and 0's elsewhere. The identity
matrix I is similar to the scalar 1 in that, for any n-square matrix A, AI = IA = A.

For example: [ 1  0 ]      [ 1  0  0 ]      [ 1 ]
             [ 0  1 ]      [ 0  1  0 ]
                           [ 0  0  1 ]

Remark:

i. We will denote an identity matrix by I.
ii. If we want to mention the size, we write I_n.
iii. I_m A = A for any m x n matrix A.
iv. A I_n = A for any m x n matrix A.
v. If R is the reduced row echelon form of an n x n matrix A, then either R
has a row of zeros or R is the identity matrix I_n.
vi. I^T = I and I^{-1} = I.
vii. I^k = I for every positive integer k.

PRACTICE:

1. Compute the given operation using A and B both.


0 1 0 1
i. and
ii. and
2. Show that the matrix 0 1 satisfies the equation
( ) ( )
3. A square matrix A is said to be idempotent if A^2 = A.

Then show that if A is idempotent, then so is I - A.

Scalar Matrices:

For any scalar k, the matrix kI that contains k's on the diagonal and 0's elsewhere
is called the scalar matrix corresponding to the scalar k. Observe that
(kI)A = k(IA) = kA. That is, multiplying a matrix A by the scalar matrix kI is
equivalent to multiplying A by the scalar k.

Invertible (Nonsingular) Matrices:


A square matrix A is said to be invertible or nonsingular if there exists a
matrix B such that AB = BA = I, where I is the identity matrix. If no such matrix
B can be found, then A is said to be Singular.

Example: let A = [  2  -5 ]    B = [ 3  5 ]
                 [ -1   3 ]        [ 1  2 ]

Then AB = [  2  -5 ] [ 3  5 ] = [ 1  0 ]
          [ -1   3 ] [ 1  2 ]   [ 0  1 ]

Also BA = [ 3  5 ] [  2  -5 ] = [ 1  0 ]
          [ 1  2 ] [ -1   3 ]   [ 0  1 ]

Remark:

i. If B and C are both inverses of a matrix A, then B = C.
ii. An invertible matrix has exactly one inverse, denoted by A^{-1}.
iii. AA^{-1} = A^{-1}A = I.
iv. (AB)^{-1} = B^{-1}A^{-1} for invertible matrices of the same size.


Theorem: The matrix A = [ a  b ]  is invertible if and only if ad - bc ≠ 0; in
                        [ c  d ]

this case A^{-1} = (1 / (ad - bc)) [  d  -b ]
                                   [ -c   a ]

Inverse of a Matrix: Let A = [ a  b ; c  d ] be an arbitrary 2 x 2 matrix with
|A| = ad - bc. Then its inverse can be defined as follows;

A^{-1} = (1 / |A|) [  d  -b ]
                   [ -c   a ]

In other words, when |A| ≠ 0, the inverse of a 2 x 2 matrix A may be obtained
from A as follows:
(1) Interchange the two elements on the diagonal.
(2) Take the negatives of the other two elements.

(3) Multiply the resulting matrix by 1/|A| or, equivalently, divide each element by |A|.
In case |A| = 0, the matrix A is not invertible.

Example: Let A = [2 3; 4 5] and B = [1 2; 2 4]; then find A^{-1} and B^{-1}.

|A| = 10 − 12 = −2 ≠ 0, so A^{-1} = (1/−2)[5 −3; −4 2] = [−5/2 3/2; 2 −1]

For B, since |B| = 4 − 4 = 0, B is not invertible.
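The 2 × 2 formula is easy to check numerically. Below is a minimal Python/NumPy sketch (the matrix A is the one from the example above, chosen for illustration); it builds the inverse from the adjugate formula and compares it with numpy.linalg.inv.

```python
import numpy as np

def inverse_2x2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (|M| = 0)")
    # interchange diagonal entries, negate the others, divide by det
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2.0, 3.0], [4.0, 5.0]])
print(inverse_2x2(A))       # [[-2.5  1.5] [ 2.  -1. ]]
print(np.linalg.inv(A))     # same result
```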

PRACTICE: Compute the inverse of the following matrices.

i. 0 1 iii. 0 1

ii. 0 1 iv. 0 1

v. 0 1

( ) ( )
vi. [ ]
( ) ( )


Solution of a Linear System by Matrix Inversion:

- Consider a system of two equations, ax + by = p and cx + dy = q.

- Write the system as the matrix equation [a b; c d][x; y] = [p; q], i.e. AX = B.

- If we assume that the coefficient matrix A is invertible (i.e. ad − bc ≠ 0), then we can
multiply through on the left by the inverse and rewrite the equation as follows:
A^{-1}(AX) = A^{-1}B

- After simplification, since A^{-1}A = I, this gives IX = A^{-1}B.

- After simplification, X = A^{-1}B.

- From the entries of A^{-1}B we obtain the values of x and y.
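A quick NumPy sketch of this method (the coefficients are illustrative): solve AX = B by computing X = A^{-1}B. In practice numpy.linalg.solve is preferred numerically, so both are shown.

```python
import numpy as np

# illustrative system: x + 2y = 5, 3x + 4y = 11
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([5.0, 11.0])

X = np.linalg.inv(A) @ B      # X = A^{-1} B
print(X)                      # [1. 2.]  -> x = 1, y = 2

print(np.linalg.solve(A, B))  # same answer, better numerics
```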

PRACTICE: Find the unique solution of given linear system.

i. and
ii. and
iii. and
iv. and

Theorem: If A and B are invertible matrices with the same size, then AB is
invertible and (AB)^{-1} = B^{-1}A^{-1}.

Proof: Consider (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I

Similarly (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I

Then (AB)(B^{-1}A^{-1}) = (B^{-1}A^{-1})(AB) = I, so (AB)^{-1} = B^{-1}A^{-1}

In general, (A1A2⋯Ak)^{-1} = Ak^{-1}⋯A2^{-1}A1^{-1}


Polynomial Matrix:

If A is a square matrix, say n × n, and if p(x) = a0 + a1x + a2x² + ⋯ + amx^m is any

polynomial, then we define the matrix p(A) to be

p(A) = a0I + a1A + a2A² + ⋯ + amA^m

where I is the n × n identity matrix. The above expression is called a matrix

polynomial in A.

Example:

Let A = [−1 2; 0 3] and p(x) = x² − 2x − 3; then find p(A).

Solution: The given polynomial is p(x) = x² − 2x − 3, so

p(A) = A² − 2A − 3I = [−1 2; 0 3]² − 2[−1 2; 0 3] − 3[1 0; 0 1]

p(A) = [1 4; 0 9] − [−2 4; 0 6] − [3 0; 0 3] = [0 0; 0 0] after solving.

Remark:
If p(x) = g(x)h(x), then for a square matrix A we can write p(A) = g(A)h(A).
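A short NumPy sketch evaluating the matrix polynomial from the example (the function name and coefficient convention are just for illustration):

```python
import numpy as np

def poly_at_matrix(coeffs, A):
    """Evaluate p(A) = a0*I + a1*A + ... + am*A^m.
    coeffs = [a0, a1, ..., am]."""
    n = A.shape[0]
    result = np.zeros_like(A, dtype=float)
    power = np.eye(n)            # A^0 = I
    for a in coeffs:
        result += a * power
        power = power @ A        # next power of A
    return result

A = np.array([[-1.0, 2.0], [0.0, 3.0]])
print(poly_at_matrix([-3, -2, 1], A))   # p(x) = x^2 - 2x - 3 -> zero matrix
```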

PRACTICE:

1) Compute ( ) for the given matrix A and the following polynomials;


( ) ( ) ( )
0 1 and 0 1
2) Verify the statement ( ) ( ) ( ) for the stated matrix A and given
polynomials;
( ) ( ) ( )
i. 0 1

ii. 0 1


Properties of Exponents:

If A is invertible and n is a non-negative integer, then

i. A^{-1} is invertible and (A^{-1})^{-1} = A.
ii. A^n is invertible and (A^n)^{-1} = (A^{-1})^n = A^{-n}.
iii. kA is invertible for any non-zero scalar k, and (kA)^{-1} = (1/k)A^{-1}.

Example: Let A = [1 2; 1 3], so that A^{-1} = [3 −2; −1 1]. Then

(A^{-1})³ = [3 −2; −1 1][3 −2; −1 1][3 −2; −1 1] = [41 −30; −15 11] …….(i)

Also A³ = [1 2; 1 3][1 2; 1 3][1 2; 1 3] = [11 30; 15 41]

Then (A³)^{-1} = (1/|A³|)[41 −30; −15 11]
with |A³| = (11)(41) − (30)(15) = 1, so
(A³)^{-1} = [41 −30; −15 11] …….(ii)

Thus from (i) and (ii), (A³)^{-1} = (A^{-1})³.
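These exponent rules are easy to sanity-check numerically; a minimal sketch using the matrix above:

```python
import numpy as np

A = np.array([[1.0, 2.0], [1.0, 3.0]])
inv = np.linalg.inv(A)

# (A^3)^{-1} should equal (A^{-1})^3
lhs = np.linalg.inv(np.linalg.matrix_power(A, 3))
rhs = np.linalg.matrix_power(inv, 3)
print(np.allclose(lhs, rhs))                        # True

# (kA)^{-1} should equal (1/k) A^{-1}
k = 5.0
print(np.allclose(np.linalg.inv(k * A), inv / k))   # True
```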

Theorem:

If A is an invertible matrix, then A^T is also invertible and (A^T)^{-1} = (A^{-1})^T.

Proof: Since we know that (AB)^T = B^T A^T,

A^T(A^{-1})^T = (A^{-1}A)^T = I^T = I. Also (A^{-1})^T A^T = (AA^{-1})^T = I^T = I.

Thus A^T(A^{-1})^T = (A^{-1})^T A^T = I.

This implies (A^T)^{-1} = (A^{-1})^T.

REMEMBER: (AB)^T = B^T A^T and [A1A2⋯Ak]^T = Ak^T ⋯ A2^T A1^T.


PRACTICE (MIXED):

1) Compute then given operation for following matrices;


i. 0 1 iii. 0 1

ii. 0 1 iv. 0 1

a) ( ) ( )
b) ( )
c) ( )
d) ( )

2) Use the given information to find A


i. ( ) 0 1

ii. ( ) 0 1

iii. ( ) 0 1

iv. 0 1

3) Compute and using A and B both.


0 1 0 1

4) A square matrix A is said to be idempotent if A² = A.

Show that if A is idempotent, then I − 2A is invertible and is its own

inverse.

5) Determine whether given matrices are invertible, and if so, find the inverse.

(Hint: solve AX = I for X by equating corresponding entries on the two sides)

[ ] [ ]

6) Give an example of matrices such that ( )( )


Elementary Matrices: Let e denote an elementary row operation and let e(A)

denote the result of applying the operation e to a matrix A. Now let E be the
matrix obtained by applying e to the identity matrix I; that is, E = e(I). Then E is
called the elementary matrix corresponding to the elementary row operation e.
Note that E is always a square matrix.

OR A matrix E is called an elementary matrix if it can be obtained from

an identity matrix by performing a single elementary row operation.

Examples: Below are four elementary matrices and the operations that produce
them:

[1 0; 0 −3]  Multiply the 2nd row of I_2 by −3.

[1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0]  Interchange the 2nd and 4th rows of I_4.

[1 0 3; 0 1 0; 0 0 1]  Add 3 times the 3rd row of I_3 to the 1st row.

[1 0 0; 0 1 0; 0 0 1]  Multiply the 1st row of I_3 by 1.

PRACTICE: Determine whether the given matrix is elementary.

i. 0 1
v. [ ]
ii. 0 1
vi. [ ]
iii. [ ]
vii. [ ]

iv. [ ]


Row Operations by Matrix Multiplication:

If the elementary matrix E results from performing a certain row operation

on I_m and if A is an m × n matrix, then the product EA is the matrix that results
when this same row operation is performed on A.

Example: Consider the matrix A = [1 0 2 3; 2 −1 3 6; 1 4 4 0] and consider the

elementary matrix E = [1 0 0; 0 1 0; 3 0 1], which results from adding 3 times the 1st row of I_3

to the 3rd row.

Then the product EA = [1 0 0; 0 1 0; 3 0 1][1 0 2 3; 2 −1 3 6; 1 4 4 0] = [1 0 2 3; 2 −1 3 6; 4 4 10 9], which is

precisely the matrix that results when we add 3 times the 1st row of A to the 3rd row.
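A small NumPy sketch of this fact, using the matrices from the example: build E by applying the row operation to the identity, then check that EA equals A with the same operation applied.

```python
import numpy as np

A = np.array([[1, 0, 2, 3],
              [2, -1, 3, 6],
              [1, 4, 4, 0]])

E = np.eye(3)
E[2, :] += 3 * E[0, :]        # add 3 * row 1 to row 3 of I_3

B = A.astype(float).copy()
B[2, :] += 3 * B[0, :]        # same row operation applied to A

print(np.allclose(E @ A, B))  # True: EA performs the row operation
```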

Remark: We know from discussion that if E is an elementary matrix that results


from performing an elementary row operation on an identity matrix I, then there is
a second elementary row operation, which when applied to E produces I back
again. Following table lists these operations. The operations on the right side of the
table are called the inverse operations of the corresponding operations on the left.

Row operation on I that produces E | Row operation on E that reproduces I

Multiply row i by c ≠ 0 | Multiply row i by 1/c
Interchange rows i and j | Interchange rows i and j
Add c times row i to row j | Add −c times row i to row j

Example: Obtain an elementary matrix, then restore it to an identity matrix.

[1 0; 0 1] → [1 0; 0 7] (multiply 2nd row by 7) → [1 0; 0 1] (multiply 2nd row by 1/7)

[1 0; 0 1] → [0 1; 1 0] (interchange rows 1 and 2) → [1 0; 0 1] (interchange rows 1 and 2)

[1 0; 0 1] → [1 5; 0 1] (add 5 times row 2 to row 1) → [1 0; 0 1] (add −5 times row 2 to row 1)


PRACTICE:

1) Find a row operation and the corresponding elementary matrix that will
restore the given elementary matrix to the identity matrix.
i. 0 1
vi. [ ]

ii. [ ]
vii. [ ]
iii. [ ]

viii.
iv. [ ]
[ ]

v. 0 1

2) Find a row operation corresponding to elementary matrix E and verify that


the product EA results from applying the row operation to A.
i. 0 1 and 0 1

ii. [ ] and [ ]

iii. [ ] and [ ]

iv. 0 1 and 0 1

v. [ ] and [ ]

vi. [ ] and [ ]


3) Use the following matrices and find an elementary matrix E that satisfies the
stated equation;

[ ] , [ ] , [ ]

[ ] , [ ]

i. iv. vii.
ii. v. viii.
iii. vi.

Remark:

The next theorem is a key result about the invertibility of elementary matrices. It will be a
building block for many results that follow.

Theorem:

Every elementary matrix is invertible, and the inverse is also an elementary matrix.

Proof:

If E is an elementary matrix, then E results by performing some row operation on I.

Let E0 be the matrix that results when the inverse of this operation is performed on
I. Applying the result ―If the elementary matrix E results from performing a certain
row operation on I_m and if A is an m × n matrix, then the product EA is the matrix
that results when this same row operation is performed on A‖, and using the fact
that inverse row operations cancel the effect of each other, it follows that
E0E = I and EE0 = I.

Thus the elementary matrix E0 is the inverse of E, i.e. E^{-1} = E0.


Row Equivalence:

A matrix A is said to be row equivalent to a matrix B, written A ~ B, if B can be

obtained from A by a sequence of elementary row operations. In the case that B is
also an echelon matrix, B is called an echelon form of A.

Echelon Matrices:

A matrix A is called an echelon matrix, or is said to be in echelon form, if the


following two conditions hold (where a leading nonzero element of a row of A is
the first nonzero element in the row):

i. All zero rows, if any, are at the bottom of the matrix.


ii. Each leading nonzero entry in a row is to the right of the leading nonzero
entry in the preceding row.

That is, A = [a_ij] is an echelon matrix if there exist nonzero entries

a_{1j1}, a_{2j2}, …, a_{rjr}, where j1 < j2 < ⋯ < jr,

with the property that a_ij = 0 for (i) i ≤ r, j < j_i, and (ii) i > r.

The entries a_{1j1}, a_{2j2}, …, a_{rjr}, which are the leading nonzero elements in their
respective rows, are called the pivots of the echelon matrix.

Example:

The following is an echelon matrix whose pivots have been bracketed:

[0 (2) 3 4 5 9 0 7; 0 0 0 (3) 1 4 1 2; 0 0 0 0 0 (5) 7 2; 0 0 0 0 0 0 (8) 6; 0 0 0 0 0 0 0 0]

Observe that the pivots are in columns C2, C4, C6, C7, and each is to the right of the
one above. Using the above notation, the pivots are a_{1j1} = 2, a_{2j2} = 3, a_{3j3} = 5, a_{4j4} = 8,
where j1 = 2, j2 = 4, j3 = 6, j4 = 7. Here r = 4.


Row Canonical Form (Row Reduced Form)

A matrix A is said to be in row canonical form (or row-reduced echelon form) if it


is an echelon matrix— that is, if it satisfies the following properties

1) All zero rows, if any, are at the bottom of the matrix.


2) Each leading nonzero entry in a row is to the right of the leading nonzero
entry in the preceding row.
3) Each pivot (leading nonzero entry) is equal to 1.
4) Each pivot is the only nonzero entry in its column.

The major difference between an echelon matrix and a matrix in row canonical
form is that in an echelon matrix there must be zeros below the pivots [Properties
(1) and (2)], but in a matrix in row canonical form, each pivot must also equal 1
[Property (3)] and there must also be zeros above the pivots [Property (4)].

The zero matrix O of any size and the identity matrix I_n of any size are
important special examples of matrices in row canonical form.

Example: The following are echelon matrices whose pivots have been bracketed:

[(2) 3 2 0 4 5 −6; 0 0 (7) 1 −3 2 0; 0 0 0 0 0 (6) 2; 0 0 0 0 0 0 0]

[(1) 2 3; 0 0 (1); 0 0 0]

[0 (1) 3 0 0 4; 0 0 0 (1) 0 −3; 0 0 0 0 (1) 2]

The third matrix is also an example of a matrix in row canonical form. The second
matrix is not in row canonical form, because it does not satisfy property (4); that is,
there is a nonzero entry above the second pivot in the third column. The first
matrix is not in row canonical form, because it satisfies neither property (3) nor
property (4); that is, some pivots are not equal to 1 and there are nonzero entries
above the pivots.


The following are two basic results on row equivalence.

Theorem: Suppose A = [a_ij] and B = [b_ij] are row equivalent echelon matrices

with respective pivot entries a_{1j1}, …, a_{rjr} and b_{1k1}, …, b_{sks}. Then
A and B have the same number of nonzero rows, that is, r = s, and the pivot entries
are in the same positions, that is, j1 = k1, j2 = k2, …, jr = kr.

Theorem:

Every matrix A is row equivalent to a unique matrix in row canonical form.

Equivalent Theorem:

If A is an n × n matrix, then the following statements are equivalent, that is, all
true or all false:

(a) A is invertible.
(b) Ax = 0 has only the trivial solution.
(c) The reduced row echelon form of A is I_n.
(d) A is expressible as a product of elementary matrices.

Proof: We will prove the equivalence by establishing the chain of implications

(a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (a)

(a) ⇒ (b): Assume A is invertible and let x0 be any solution of Ax = 0.

Then A^{-1}(Ax0) = A^{-1}0, which gives Ix0 = 0, i.e. x0 = 0.

Thus Ax = 0 has only the trivial solution.

(b) ⇒ (c): Assume Ax = 0 has only the trivial solution, and write Ax = 0 in equation
form as follows:

a11x1 + a12x2 + ⋯ + a1nxn = 0
a21x1 + a22x2 + ⋯ + a2nxn = 0
⋮
an1x1 + an2x2 + ⋯ + annxn = 0  ………… (1)


If we solve the above system by Gauss–Jordan elimination, then the system of
equations corresponding to the reduced row echelon form of the augmented matrix
will be of the following form:

x1 = 0, x2 = 0, …, xn = 0  ……………….. (2)

Thus the augmented matrix [A | 0] for (1) can be reduced to

the augmented matrix [I_n | 0]

for (2) by a sequence of elementary row operations. If we disregard the last column
(all zeros) in each of these matrices, we can conclude that the reduced row echelon
form of A is I_n.

(c) ⇒ (d): Assume that the reduced row echelon form of A is I_n, so that A can be

reduced to I_n by a finite sequence of elementary row operations.

By the theorem ―If the elementary matrix E results from performing a certain

row operation on I_m and if A is an m × n matrix, then the product EA is the matrix
that results when this same row operation is performed on A‖, each of these
operations can be accomplished by multiplying on the left by an appropriate
elementary matrix. Thus we can find elementary matrices E1, E2, …, Ek such that
Ek ⋯ E2E1A = I_n  ……………(3)

Also, by the theorem ―Every elementary matrix is invertible and the inverse is also an

elementary matrix‖, E1, E2, …, Ek are invertible. Then, multiplying both sides
of equation (3) on the left successively by Ek^{-1}, …, E2^{-1}, E1^{-1}, we obtain

A = E1^{-1}E2^{-1} ⋯ Ek^{-1}I_n = E1^{-1}E2^{-1} ⋯ Ek^{-1}

Thus this equation expresses A as a product of elementary matrices.


(d) ⇒ (a): If A is a product of elementary matrices, then from the theorems ―If A is

invertible and n is a non-negative integer, then

i. A^{-1} is invertible and (A^{-1})^{-1} = A
ii. A^n is invertible and (A^n)^{-1} = (A^{-1})^n
iii. kA is invertible for any non-zero scalar k, and (kA)^{-1} = (1/k)A^{-1}‖

and ―Every elementary matrix is invertible and the inverse is also an elementary

matrix‖,

the matrix A is a product of invertible matrices and hence is invertible.

A Method for Inverting Matrices (Inversion Algorithm)

To find the inverse of an invertible matrix A:

- Find a sequence of elementary row operations that reduces A to the identity.

- Then perform that same sequence of operations on I to obtain A^{-1}.
- For this we will change [A | I] to [I | A^{-1}].

Example: Find the inverse of A = [1 2 3; 2 5 3; 1 0 8].

Solution: [A | I] = [1 2 3 | 1 0 0; 2 5 3 | 0 1 0; 1 0 8 | 0 0 1]

Add −2 times row 1 to row 2, and −1 times row 1 to row 3:
[1 2 3 | 1 0 0; 0 1 −3 | −2 1 0; 0 −2 5 | −1 0 1]

Add 2 times row 2 to row 3:
[1 2 3 | 1 0 0; 0 1 −3 | −2 1 0; 0 0 −1 | −5 2 1]

Multiply row 3 by −1:
[1 2 3 | 1 0 0; 0 1 −3 | −2 1 0; 0 0 1 | 5 −2 −1]

Add 3 times row 3 to row 2, add −3 times row 3 to row 1, then add −2 times row 2 to row 1:
[1 0 0 | −40 16 9; 0 1 0 | 13 −5 −3; 0 0 1 | 5 −2 −1]

Thus A^{-1} = [−40 16 9; 13 −5 −3; 5 −2 −1].
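Below is a minimal Python sketch of the inversion algorithm (pure NumPy; partial pivoting is omitted for brevity, so this is a teaching sketch rather than production code): reduce [A | I] to [I | A^{-1}].

```python
import numpy as np

def invert_gauss_jordan(A):
    """Reduce [A | I] to [I | A_inv] with elementary row operations."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # the matrix [A | I]
    for i in range(n):
        if M[i, i] == 0:                          # simple pivot check
            raise ValueError("zero pivot; matrix may not be invertible")
        M[i] = M[i] / M[i, i]                     # make pivot 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]            # clear column i elsewhere
    return M[:, n:]                               # right half is A^{-1}

A = np.array([[1, 2, 3], [2, 5, 3], [1, 0, 8]])
print(invert_gauss_jordan(A))  # [[-40. 16. 9.] [13. -5. -3.] [5. -2. -1.]]
```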


Remark:

Often it will not be known in advance whether a given matrix A is invertible.

However, by the Equivalence Theorem (parts (a), (c)):

A is invertible iff the reduced row echelon form of A is I_n.

So if A is not invertible, it will be impossible to reduce A to I_n by elementary row operations. This will be

signaled by a row of zeros appearing on the left side of the partition at some stage of
the inversion algorithm. If this occurs, then you can stop the computations and
conclude that A is not invertible.

Example:

Find the inverse of A = [1 6 4; 2 4 −1; −1 2 5].

Solution:

[A | I] = [1 6 4 | 1 0 0; 2 4 −1 | 0 1 0; −1 2 5 | 0 0 1]

Add −2 times row 1 to row 2, and add row 1 to row 3:
[1 6 4 | 1 0 0; 0 −8 −9 | −2 1 0; 0 8 9 | 1 0 1]

Add row 2 to row 3:
[1 6 4 | 1 0 0; 0 −8 −9 | −2 1 0; 0 0 0 | −1 1 1]

Since we obtain a row of zeros on the left side, A is not invertible.


PRACTICE:

1. Find inverse of given matrices, if exists.

i. 0 1 iii. 0 1

ii. 0 1 iv. 0 1

v. [ ]
vii.

[ ]
vi. [ ]

viii.

[ ]

ix. [ ]
xi. [ ]

√ √
x. [ √ √ ]

xii. [ ]

xiii. [ ]

xiv. [ ]


2. Find the inverse of each of the following matrices, where


and k are all non – zeros.

i. [ ] iii. [ ]

ii. [ ] iv. [ ]

3..Find all values of ‗c‘, if any, for which the given matrix is invertible.

i. [ ] ii. [ ]

4..Express the matrix and its inverse as products of elementary matrices.

i. 0 1 ii. 0 1

iii. [ ] iv. [ ]

5.. Show that the matrices A and B are row equivalent by finding a sequence of
elementary row operations that produces B from A, and then use that result to find
a matrix C such that CA = B.


Diagonal Matrix:

A square matrix in which all the entries off the main diagonal are zero is called a Diagonal Matrix. Examples are given as follows:

[2 0; 0 −5],  [1 0 0; 0 1 0; 0 0 1],  [6 0 0 0; 0 −4 0 0; 0 0 0 0; 0 0 0 8],  [0 0; 0 0]

Remark:

- A general diagonal matrix D can be written as

D = [d1 0 ⋯ 0; 0 d2 ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ dn]

- A diagonal matrix is invertible if and only if all of its diagonal entries are
non-zero. In this case the inverse is

D^{-1} = [1/d1 0 ⋯ 0; 0 1/d2 ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ 1/dn]

- If D is the diagonal matrix above and k is a positive integer, then

D^k = [d1^k 0 ⋯ 0; 0 d2^k ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ dn^k]

Is Null (Zero) matrix a diagonal matrix? Or why Null (Zero) matrix a


diagonal matrix?

A diagonal matrix is one in which all non-diagonal entries are zero; the entries on
the main diagonal may or may not be zero. A zero square matrix clearly satisfies this
condition. Hence, a zero square matrix is upper and lower triangular as well as a diagonal matrix.


Example: If D = [1 0 0; 0 −3 0; 0 0 2], then

D^{-1} = [1 0 0; 0 −1/3 0; 0 0 1/2],  D² = [1 0 0; 0 9 0; 0 0 4],  D^{-2} = [1 0 0; 0 1/9 0; 0 0 1/4]
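A quick NumPy sketch of these facts (values taken from the example above):

```python
import numpy as np

d = np.array([1.0, -3.0, 2.0])
D = np.diag(d)                      # build the diagonal matrix

print(np.linalg.inv(D))             # equals np.diag(1 / d)
print(np.linalg.matrix_power(D, 2)) # equals np.diag(d ** 2)
print(np.allclose(np.linalg.inv(D), np.diag(1 / d)))  # True
```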

Triangular Matrix: A square matrix A = [a_ij] that is either upper triangular

or lower triangular is called a triangular matrix.

For example: [1 0 0; 4 2 0; 5 6 3]

Lower Triangular Matrix: A matrix having only zeros above the diagonal is
called a lower triangular matrix.

(Or)

A matrix L is lower triangular if its entries satisfy l_ij = 0 for i < j,

i.e. L = [l11 0 ⋯ 0; l21 l22 ⋯ 0; ⋮ ⋮ ⋱ ⋮; ln1 ln2 ⋯ lnn]

Upper Triangular Matrix: A matrix having only zeros below the diagonal is
called an upper triangular matrix.

(Or)

A matrix U is upper triangular if its entries satisfy u_ij = 0 for i > j,

i.e. U = [u11 u12 ⋯ u1n; 0 u22 ⋯ u2n; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ unn]

REMARKS:

- A square matrix A = [a_ij] is upper triangular iff all entries to the left of the
main diagonal in each row are zero, i.e. a_ij = 0 if i > j.
- A square matrix A = [a_ij] is lower triangular iff all entries to the right of the
main diagonal in each row are zero, i.e. a_ij = 0 if i < j.
- A square matrix A = [a_ij] is upper triangular iff the i-th row starts with at
least i − 1 zeros for every i.
- A square matrix A = [a_ij] is lower triangular iff the j-th column starts with
at least j − 1 zeros for every j.

PRACTICE:

1. Classify the matrix as upper triangular, lower triangular, or diagonal, and decide
by inspection whether the matrix is invertible. (Recall that: diagonal matrix is both
upper and lower triangular, so there may be more than one answer in some parts.)

i. 0 1 ii. 0 1

iii. [ ] iv. [ ]

v. 0 1 vi. 0 1

vii. [ ] viii. [ ]


2.. Find the product by inspection.

i. [ ][ ]

ii. 0 1[ ]

iii. [ ][ ]

iv. [ ][ ][ ]

3.. Find and (where ‗k‘ is any integer) by inspection.

i. 0 1
ii. [ ]

iii. iv. [ ]

[ ]

4.. Compute the product by inspection.

i. [ ][ ][ ]

ii. [ ][ ][ ]

5..Compute the indicated quantity.

i. 0 1

ii. 0 1


6.. Multiplying by diagonal matrices compute the product by inspection.

i. [ ][ ]

ii. [ ][ ]

iii. [ ]0 1

iv. [ ][ ]

7... Determine by inspection whether the matrix is invertible.

i. [ ] ii. [ ]

iii. [ ] iv. [ ]

8… Find the diagonal entries of AB by inspection.

i. [ ] and [ ]

ii. [ ] and [ ]


Symmetric Matrices:

A matrix A is symmetric if A^T = A. Equivalently, A = [a_ij] is symmetric if

symmetric elements (mirror elements with respect to the diagonal) are equal, that
is, if each a_ij = a_ji.

Examples: [1 2; 2 3],  [1 −3 5; −3 6 7; 5 7 −8],  [4 0 0; 0 −2 0; 0 0 9]

Skew-Symmetric Matrices:

A matrix A is skew-symmetric if A^T = −A or, equivalently, if each a_ij = −a_ji.

Clearly, the diagonal elements of such a matrix must be zero, because a_ii = −a_ii
implies a_ii = 0.

(Note that a matrix A must be square if A^T = A or A^T = −A.)

Remark:

- The product of two symmetric matrices is symmetric iff the matrices

commute.
- If A is an invertible symmetric matrix, then A^{-1} is symmetric.
- If A is invertible, then AA^T and A^T A are also invertible.

Theorem:

If A and B are symmetric matrices with the same size, and if k is any scalar, then

a) A^T is symmetric.
b) A + B and A − B are symmetric.
c) kA is symmetric.
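A short NumPy sketch checking these closure properties on a randomly generated example:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(-5, 5, size=(3, 3)).astype(float)
A = M + M.T                      # a symmetric matrix
B = rng.integers(-5, 5, size=(3, 3)).astype(float)
B = B + B.T                      # another symmetric matrix

def is_symmetric(X):
    return np.allclose(X, X.T)

print(is_symmetric(A + B))       # True
print(is_symmetric(3.0 * A))     # True
print(is_symmetric(A @ B))       # generally False: A and B need not commute
```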


PRACTICE:

1) Find all values of unknown constant(s) for which A is symmetric.


i. 0 1 ii. 0 1

iii. [ ]

2) Find all values of ‗x‘ for which A is invertible.

i. [ ]

ii.

[ ]

3) Find a diagonal matrix that satisfies the given conditions.

i. [ ]

ii. [ ]

4) Let A be symmetric matrix, then


i. Show that is symmetric.
ii. Show that is symmetric.
5) Find an upper triangular matrix that satisfies 0 1

6) Find all values of a,b,c and d for which A is skew symmetric.

[ ]


Orthogonal Matrices:

A real matrix A is orthogonal if A^T = A^{-1}; that is, if AA^T = A^T A = I. Thus, A

must necessarily be square and invertible.

Example:

A = (1/9)[1 8 −4; 4 −4 −7; 8 1 4]

Remark (discussed later):

Vectors u1, u2, …, um in R^n are said to form an orthonormal set of vectors if

the vectors are unit vectors and are orthogonal to each other, i.e.

u_i · u_j = δ_ij = { 1 if i = j; 0 if i ≠ j }, where δ_ij is known as the Kronecker delta function.

Theorem: Let A be a real matrix; then the following are equivalent:

a) A is orthogonal.
b) The rows of A form an orthonormal set.
c) The columns of A form an orthonormal set.
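A minimal NumPy check of the theorem on the example matrix above: the matrix is orthogonal, so its rows (and columns) are orthonormal.

```python
import numpy as np

A = np.array([[1, 8, -4],
              [4, -4, -7],
              [8, 1, 4]]) / 9.0

print(np.allclose(A @ A.T, np.eye(3)))   # True: A A^T = I
print(np.allclose(A.T @ A, np.eye(3)))   # True: A^T A = I
# rows dotted with themselves give 1, with each other give 0:
print(np.round(A @ A.T, 10))
```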

Normal Matrices:

A real matrix A is normal if it commutes with its transpose A^T — that is, if

AA^T = A^T A. If A is symmetric, orthogonal, or skew-symmetric, then A is normal.

There are also other normal matrices.

Example: Let A = [6 −3; 3 6]. Then

AA^T = [6 −3; 3 6][6 3; −3 6] = [45 0; 0 45]

A^T A = [6 3; −3 6][6 −3; 3 6] = [45 0; 0 45]

Since AA^T = A^T A, the matrix A is normal.


COMPLEX MATRICES

A matrix with complex entries is called a complex matrix. Since we know that
z = a + bi is a complex number, z̄ = a − bi is its conjugate. Then
the conjugate of a complex matrix A, written Ā, is the matrix obtained from A by
taking the conjugate of each entry in A; i.e. if A = [a_ij], then Ā = [ā_ij].

Remark:

- The two operations of transpose and conjugation commute for any complex
matrix.
- The special notation A^H is used for the conjugate transpose of A, i.e.
A^H = (Ā)^T = (A^T)‾.
- If A is real, then A^H = A^T. (Some authors use A* instead of A^H.)

Examples:

Let A = [2 + 8i, 5 − 3i, 4 − 7i; 6i, 1 − 4i, 3 + 2i]; then A^H = [2 − 8i, −6i; 5 + 3i, 1 + 4i; 4 + 7i, 3 − 2i]

Hermitian Matrix:

A complex matrix A is said to be Hermitian if A^H = A.

Remember:

A = [a_ij] is Hermitian iff symmetric elements are conjugate, i.e. if each a_ij = ā_ji,

in which case each diagonal element must be real.

Example: Let A = [3, 1 − 2i, 4 + 7i; 1 + 2i, −4, −2i; 4 − 7i, 2i, 5]

Clearly the diagonal elements of A are real, and the symmetric elements 1 − 2i and
1 + 2i, 4 + 7i and 4 − 7i, −2i and 2i are conjugate. Thus A is Hermitian.


Skew-Hermitian Matrix:

A complex matrix A is said to be skew-Hermitian if A^H = −A.

Unitary Matrix:

A complex matrix A is said to be unitary if A^H A = AA^H = I, i.e. A^H = A^{-1}.

Example: Let A = (1/2)[1 + i, 1 − i; 1 − i, 1 + i]

Clearly AA^H = A^H A = I; this yields that A is a unitary matrix.

Normal Matrices:

A complex matrix A is normal if it commutes with its conjugate transpose A^H — that is, if AA^H = A^H A.

Example: Let A = [1 i; i 1]

Clearly AA^H = A^H A = 2I; this yields that A is a normal matrix.

Note: When a matrix A is real, Hermitian is the same as symmetric, and unitary
is the same as orthogonal.
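A brief NumPy sketch checking the unitary and normal examples above (conj().T gives the conjugate transpose A^H):

```python
import numpy as np

A = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])
AH = A.conj().T                         # conjugate transpose A^H
print(np.allclose(A @ AH, np.eye(2)))   # True: A is unitary

N = np.array([[1, 1j],
              [1j, 1]])
NH = N.conj().T
print(np.allclose(N @ NH, NH @ N))      # True: N is normal
```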

PRACTICE:

1) Find where (a) 0 1 (b) [ ]

2) Show that [ ] is unitary.

3) Determine which of the following matrices are normal;


(a) 0 1 (b) 0 1


BLOCK MATRICES

Using a system of horizontal and vertical (dashed) lines, we can partition a matrix
A into submatrices called blocks (or cells) of A. Clearly a given matrix may be
divided into blocks in different ways. For example,

A = [1 −2 0 1 3; 2 3 5 7 −2; 3 1 4 5 9] may, for instance, be partitioned after the second or after the third column, and after the first or the second row.

The convenience of the partition of matrices, say A and B, into blocks is that the
result of operations on A and B can be obtained by carrying out the computation
with the blocks, just as if they were the actual elements of the matrices. This is
illustrated below, where the notation A = [A_ij] will be used for a block matrix A
with blocks A_ij.

Suppose that A = [A_ij] and B = [B_ij] are block matrices with the same

numbers of row and column blocks, and suppose that corresponding blocks have
the same size. Then adding the corresponding blocks of A and B also adds the
corresponding elements of A and B, and multiplying each block of A by a scalar
k multiplies each element of A by k. Thus,

A + B = [A_ij + B_ij]  and  kA = [kA_ij]

The case of matrix multiplication is less obvious, but still true. That is, suppose
that U = [U_ik] and V = [V_kj] are block matrices such that the number of columns
of each block U_ik is equal to the number of rows of each block V_kj

(thus, each product U_ikV_kj is defined). Then

UV = [W_ij], where W_ij = U_i1V_1j + U_i2V_2j + ⋯ = Σ_k U_ikV_kj

Square Block Matrices

Let M be a block matrix. Then M is called a square block matrix if

(i) M is a square matrix.

(ii) The blocks form a square matrix.

(iii) The diagonal blocks are also square matrices.

The latter two conditions will occur if and only if there are the same numbers of
horizontal and vertical lines and they are placed symmetrically.

Consider the following two block matrices:

[ ] [ ]

The block matrix A is not a square block matrix, because the second and third
diagonal blocks are not square. On the other hand, the block matrix B is a square
block matrix.

Block Diagonal Matrices

Let M = [A_ij] be a square block matrix such that the non-diagonal blocks are all
zero matrices; that is, A_ij = 0 when i ≠ j. Then M is called a block diagonal
matrix. We sometimes denote such a block diagonal matrix by writing

M = diag(A11, A22, …, Arr)  or  M = A11 ⊕ A22 ⊕ ⋯ ⊕ Arr

The importance of block diagonal matrices is that the algebra of the block matrix
is frequently reduced to the algebra of the individual blocks. Specifically, suppose
f(x) is a polynomial and M is the above block diagonal matrix. Then f(M) is a
block diagonal matrix, and f(M) = diag(f(A11), f(A22), …, f(Arr)).

Also, M is invertible if and only if each A_ii is invertible, and, in such a case,

M^{-1} is a block diagonal matrix, and M^{-1} = diag(A11^{-1}, A22^{-1}, …, Arr^{-1}).

Analogously, a square block matrix is called a block upper triangular matrix if the
blocks below the diagonal are zero matrices and a block lower triangular matrix if
the blocks above the diagonal are zero matrices.
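A small NumPy sketch of the block-diagonal inverse rule (the blocks are chosen purely for illustration):

```python
import numpy as np

A11 = np.array([[1.0, 2.0], [3.0, 4.0]])
A22 = np.array([[2.0]])

# assemble M = diag(A11, A22) by placing blocks on the diagonal
M = np.zeros((3, 3))
M[:2, :2] = A11
M[2:, 2:] = A22

Minv = np.zeros((3, 3))
Minv[:2, :2] = np.linalg.inv(A11)   # invert each diagonal block
Minv[2:, 2:] = np.linalg.inv(A22)

print(np.allclose(np.linalg.inv(M), Minv))  # True
```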

Example: Determine which of the following square block matrices are upper
diagonal, lower diagonal, or diagonal:

[ ] [ ] [ ]

[ ]

(a) A is upper triangular because the block below the diagonal is a zero block.
(b) B is lower triangular because all blocks above the diagonal are zero blocks.
(c) C is diagonal because the blocks above and below the diagonal are zero blocks.
(d) D is neither upper triangular nor lower triangular. Also, no other partitioning of
D will make it into either a block upper triangular matrix or a block lower
triangular matrix.
Periodic Matrix: A square matrix A is said to be a periodic matrix of period k,
where k is the least positive integer such that A^{k+1} = A.
Idempotent Matrix: A square matrix A is said to be an idempotent matrix if A² = A.
Nilpotent Matrix: A square matrix A is said to be a nilpotent matrix of index k,
where k is the least positive integer such that A^k = O.
Involutory Matrix: A square matrix A is said to be an involutory matrix if A² = I.


Rank of a Matrix: The rank of a matrix A, written rank A, is equal to the

number of pivots in an echelon form of A. It is equal to the number of non-zero
rows in its echelon form.
The rank is a very important property of a matrix and, depending on the
context in which the matrix is used, it can be defined in many different ways. Of
course, all the definitions lead to the same number.
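A quick NumPy sketch (the sample matrix is illustrative; its second row is a multiple of the first, so the rank is 2 rather than 3):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],    # 2 * row 1, contributes nothing new
              [0, 1, 1]])

print(np.linalg.matrix_rank(A))   # 2
```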


Canonical Form of a Matrix


Chapter # 3

VECTOR SPACES
This chapter introduces the underlying structure of linear algebra that of a finite
dimensional vector space. Where vector space is a collection of objects, called
vectors, which may be added together and multiplied by numbers, called scalars.

Vectors: Many physical quantities, such as temperature and speed, possess only
‗‗magnitude.‘‘ These quantities can be represented by real numbers and are called
scalars. On the other hand, there are also quantities, such as force and velocity, that
possess both ‗‗magnitude‘‘ and ‗‗direction.‘‘ These quantities, which can be
represented by arrows having appropriate lengths and directions and emanating
(originating) from some given reference point O, are called vectors. The tail of the
arrow is called initial point of the vector and the tip the terminal point of the
vector.

Remark: Mathematically, we identify the vector v with its terminal point (a, b, c) and write

v = (a, b, c). Moreover, we call the ordered triple (a, b, c) of real numbers a point
or vector depending upon its interpretation. We generalize this notion and call an
n-tuple (a1, a2, …, an) of real numbers a vector. However, special notation may be
used for the vectors in R3, called spatial vectors.

Vectors in Rn: The set of all n-tuples of real numbers, denoted by Rn, is called
n-space. A particular n-tuple in Rn, say u = (a1, a2, …, an), is called a point or
vector. The numbers a_i are called the coordinates, components, entries or elements
of u. Moreover, when discussing the space Rn:

- We use the term scalar for the elements of R.

- Two vectors, u and v, are equal, written u = v, if they have the same number
of components and if the corresponding components are equal. Although the
vectors (1, 2, 3) and (2, 3, 1) contain the same three numbers, these vectors
are not equal because corresponding entries are not equal.
- The vector (0, 0, …, 0) whose entries are all 0 is called the zero vector and
is usually denoted by 0.


VECTOR ADDITION AND SCALAR MULTIPLICATION

Vector addition: Consider two vectors u and v in Rn, say

u = (a1, a2, …, an) and v = (b1, b2, …, bn). Then their sum, written u + v, is the
vector obtained by adding corresponding components from u and v. That is,
u + v = (a1 + b1, a2 + b2, …, an + bn)

Scalar Product: The scalar product or, simply, product, of the vector v by a real
number k, written kv, is the vector obtained by multiplying each component of v
by k. That is, kv = (kb1, kb2, …, kbn)

- Observe that u + v and kv are also vectors in Rn.

- The sum of vectors with different numbers of components is not defined.
- Negatives and subtraction are defined in Rn as follows:
−v = (−1)v and u − v = u + (−v). The vector −v is called the negative of
v, and u − v is called the difference of u and v.
- Linear combination of vectors: Suppose we are given vectors u1, u2, …, um
in Rn and scalars k1, k2, …, km in R. We can multiply the
vectors by the corresponding scalars and then add the resultant scalar
products to form the vector

v = k1u1 + k2u2 + ⋯ + kmum

Such a vector v is called a linear combination of the vectors u1, u2, …, um.
Vectors in R3 (Spatial Vectors), ijk Notation

Vectors in R3, called spatial vectors, appear in many applications, especially in

physics. In fact, a special notation is frequently used for such vectors as follows:

i = (1, 0, 0) denotes the unit vector in the x direction;

j = (0, 1, 0) denotes the unit vector in the y direction;

k = (0, 0, 1) denotes the unit vector in the z direction.

Then any vector u = (a, b, c) in R3 can be expressed uniquely in the form

u = (a, b, c) = ai + bj + ck


n-Space: If n is a positive integer, then an ordered n-tuple is a

sequence of n real numbers (a1, a2, …, an). The set of all ordered n-tuples is
called n-space and is denoted by Rn.

Finding the components of vectors:

If a vector in 2-space or 3-space is positioned with its initial point at the origin
of a rectangular coordinate system, then the vector is completely determined by the
coordinates of its terminal point. We call these coordinates the components of
vector v relative to the coordinate system.

If v = P1P2→ denotes the vector with initial point P1(x1, y1, z1) and terminal

point P2(x2, y2, z2), then the components of the vector are given by the formula
v = P1P2→ = (x2 − x1, y2 − y1, z2 − z1)

Example: The components of the vector v = P1P2→ with initial point P1(2, −1, 4)

and terminal point P2(7, 5, −8) are

v = (7 − 2, 5 − (−1), −8 − 4) = (5, 6, −12)
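A one-glance NumPy sketch of the component formula, using the points from the example:

```python
import numpy as np

P1 = np.array([2, -1, 4])
P2 = np.array([7, 5, -8])

v = P2 - P1          # components of the vector from P1 to P2
print(v)             # [ 5  6 -12]
```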

PRACTICE:

1) Find the components of the vector ⃗⃗⃗⃗⃗⃗⃗⃗


i. ( ) ( )
ii. ( ) ( )
iii. ( ) ( )
iv. ( ) ( )

2) Let ⃗ ( ) ( ) and ⃗⃗ ( ) then find the components of


i. ⃗ ⃗⃗
ii. ⃗
iii. (⃗ ⃗⃗ )
iv. (⃗ ⃗⃗ )
3) Let ⃗ ( ) ( ) find scalar ‗a‘ and ‗b‘ so that
⃗ ( )


Vector Space:

Let V be a nonempty set with two operations:

(i) Vector Addition: This assigns to any u, v ∈ V a sum u + v ∈ V.

(ii) Scalar Multiplication: This assigns to any u ∈ V, k ∈ K a product ku ∈ V.

Then V is called a vector space (over the field K) if the following axioms hold for
any vectors u, v, w ∈ V:

- u + v = v + u
- u + (v + w) = (u + v) + w
- There is a vector in V, denoted by 0 and called the zero vector, such that, for
any u ∈ V: u + 0 = 0 + u = u
- For each u ∈ V, there is a vector in V, denoted by −u and called the negative
of u, such that u + (−u) = (−u) + u = 0
- If k is any scalar and u is in V, then ku is in V.
- k(u + v) = ku + kv for u, v ∈ V and k ∈ K
- (a + b)u = au + bu for u ∈ V and a, b ∈ K
- (ab)u = a(bu) for u ∈ V and a, b ∈ K
- 1u = u, for the unit scalar 1 in K.

The above axioms naturally split into two sets (as indicated by the labeling of the
axioms). The first four are concerned only with the additive structure of V and can
be summarized by saying V is a commutative group (Abelian group) under
addition. This means

- Any sum v1 + v2 + ⋯ + vm of vectors requires no parentheses and does not

depend on the order of the summands.
- The zero vector 0 is unique, and the negative −u of a vector u is unique.
- (Cancellation Law) If u + w = v + w, then u = v.

Also, subtraction in V is defined by u − v = u + (−v), where −v is the unique

negative of v. On the other hand, the remaining four axioms are concerned with the
''action'' of the field K of scalars on the vector space V.


To Show that a Set with Two Operations is a Vector Space

- Identify the set V of objects that will become vectors.

- Identify the addition and scalar multiplication operations on V.
- Verify the remaining axioms.

EXAMPLES OF VECTOR SPACES

This section lists important examples of vector spaces that will be used throughout
the text.

The Zero Vector Space:

Let V = {0} and define 0 + 0 = 0, also k0 = 0 for all scalars k. Then the given space V
will be a vector space, called the zero vector space.

Space Kn

Let K be an arbitrary field. The notation Kn is frequently used to denote the set of
all n-tuples of elements in K. Here Kn is a vector space over K using the following
operations:

(i) Vector Addition:

(a1, a2, …, an) + (b1, b2, …, bn) = (a1 + b1, a2 + b2, …, an + bn)

(ii) Scalar Multiplication:

k(a1, a2, …, an) = (ka1, ka2, …, kan)

The zero vector in Kn is the n-tuple of zeros, 0 = (0, 0, …, 0), and the negative of

a vector is defined by −(a1, a2, …, an) = (−a1, −a2, …, −an)


Space Rn

We have to show that Rn is a vector space. Since we know that

Rn = {(a1, a2, …, an) : a_i ∈ R},

let us define addition and scalar multiplication of n-tuples as follows, for u, v ∈ Rn and k ∈ R:

u + v = (a1, …, an) + (b1, …, bn) = (a1 + b1, …, an + bn)

and ku = k(a1, …, an) = (ka1, …, kan)

Firstly we will show that Rn is an Abelian group.

Closure Law: that is, for u, v ∈ Rn we will have u + v ∈ Rn.

Let u = (a1, …, an), v = (b1, …, bn); then

u + v = (a1 + b1, …, an + bn) ∈ Rn

Closure law holds.

Associative Law: that is, for u, v, w ∈ Rn we will have u + (v + w) = (u + v) + w.

Let u = (a1, …, an), v = (b1, …, bn), w = (c1, …, cn); then

u + (v + w) = (a1, …, an) + [(b1, …, bn) + (c1, …, cn)]
= (a1, …, an) + (b1 + c1, …, bn + cn)
= (a1 + (b1 + c1), …, an + (bn + cn))
= ((a1 + b1) + c1, …, (an + bn) + cn)
= (a1 + b1, …, an + bn) + (c1, …, cn)
= [(a1, …, an) + (b1, …, bn)] + (c1, …, cn) = (u + v) + w

Associative law holds.

Identity Law: that is, for u ∈ Rn we will have u + 0 = 0 + u = u.

Let u = (a1, …, an) and 0 = (0, …, 0); then

u + 0 = (a1 + 0, …, an + 0) = (a1, …, an) = u

0 + u = (0 + a1, …, 0 + an) = (a1, …, an) = u

Identity law holds.

Inverse Law: that is, for u ∈ Rn we will have u + (−u) = (−u) + u = 0.

Let u = (a1, …, an) and −u = (−a1, …, −an); then

u + (−u) = (a1 − a1, …, an − an) = (0, …, 0) = 0

(−u) + u = (−a1 + a1, …, −an + an) = (0, …, 0) = 0

Inverse law holds.

Commutative Law: that is, for u, v ∈ Rn we will have u + v = v + u.

Let u = (a1, …, an), v = (b1, …, bn); then

u + v = (a1 + b1, …, an + bn) = (b1 + a1, …, bn + an) = v + u

Commutative law holds.

Hence the given space is an Abelian group under addition.

Now we will show the scalar multiplication properties.

- If k is any scalar and u is in V, then ku is in V.

Let k be any scalar and u = (a1, …, an); then

ku = k(a1, …, an) = (ka1, …, kan) ∈ Rn

- k(u + v) = ku + kv for u, v ∈ Rn and k ∈ R:
k(u + v) = k(a1 + b1, …, an + bn) = (k(a1 + b1), …, k(an + bn))
= (ka1 + kb1, …, kan + kbn) = (ka1, …, kan) + (kb1, …, kbn) = ku + kv

- (a + b)u = au + bu for u ∈ Rn and a, b ∈ R:
(a + b)u = ((a + b)a1, …, (a + b)an) = (aa1 + ba1, …, aan + ban)
= (aa1, …, aan) + (ba1, …, ban) = au + bu

- (ab)u = a(bu) for u ∈ Rn and a, b ∈ R:
(ab)u = ((ab)a1, …, (ab)an) = (a(ba1), …, a(ban)) = a(ba1, …, ban) = a(bu)

- 1u = u, for the unit scalar 1 in K:

1u = 1(a1, …, an) = (1a1, …, 1an) = (a1, …, an) = u

Hence all the above conditions show that Rn is a vector space.


Matrix Space M_{m,n}

The notation M_{m,n}, or simply M, will be used to denote the set of all m × n

matrices with entries in a field K. Then M_{m,n} is a vector space over K with respect
to the usual operations of matrix addition and scalar multiplication of matrices.

We will prove it as follows. Since we know that

M_{m,n} = {[a_ij] : a_ij ∈ K},

let us define addition and scalar multiplication for u, v ∈ M_{m,n} and k ∈ K:

u + v = [a_ij] + [b_ij] = [a_ij + b_ij]

and ku = k[a_ij] = [ka_ij]

Firstly we will show that M_{m,n} is an Abelian group.

Closure Law: that is, for u, v ∈ M_{m,n} we will have u + v ∈ M_{m,n}.

Let u = [a_ij], v = [b_ij]; then

u + v = [a_ij] + [b_ij] = [a_ij + b_ij] ∈ M_{m,n}

Closure law holds.

Associative Law: that is, for u, v, w ∈ M_{m,n} we will have u + (v + w) = (u + v) + w.

Let u = [a_ij], v = [b_ij], w = [c_ij]; then

u + (v + w) = [a_ij] + [b_ij + c_ij] = [a_ij + (b_ij + c_ij)] = [(a_ij + b_ij) + c_ij]

= [a_ij + b_ij] + [c_ij] = (u + v) + w. Associative law holds.

Identity Law:

that is, for u ∈ M_{m,n} we will have u + O = O + u = u.

Let u = [a_ij] and O = [0]; then

u + O = [a_ij + 0] = [a_ij] = u and O + u = [0 + a_ij] = [a_ij] = u

Identity law holds.

Inverse Law:

that is, for u ∈ M_{m,n} we will have u + (−u) = (−u) + u = O.

Let u = [a_ij] and −u = [−a_ij]; then

u + (−u) = [a_ij + (−a_ij)] = [0] = O and (−u) + u = [−a_ij + a_ij] = [0] = O

Inverse law holds.

Commutative Law:

that is, for u, v ∈ M_{m,n} we will have u + v = v + u.

Let u = [a_ij], v = [b_ij]; then

u + v = [a_ij + b_ij] = [b_ij + a_ij] = v + u

Commutative law holds.

Hence the given space is an Abelian group under addition.

Now we will show the scalar multiplication properties.

- If k is any scalar and u is in V, then ku is in V.

Let k be any scalar and u = [a_ij]; then

ku = k[a_ij] = [ka_ij] ∈ M_{m,n}

- k(u + v) = ku + kv for u, v ∈ M_{m,n} and k ∈ K:
k(u + v) = k[a_ij + b_ij] = [k(a_ij + b_ij)] = [ka_ij + kb_ij] = k[a_ij] + k[b_ij] = ku + kv

- (a + b)u = au + bu for u ∈ M_{m,n} and a, b ∈ K:
(a + b)u = [(a + b)a_ij] = [aa_ij + ba_ij] = [aa_ij] + [ba_ij] = au + bu

- (ab)u = a(bu) for u ∈ M_{m,n} and a, b ∈ K:
(ab)u = [(ab)a_ij] = [a(ba_ij)] = a[ba_ij] = a(bu)

- 1u = u, for the unit scalar 1 in K:

1u = 1[a_ij] = [1a_ij] = [a_ij] = u

Hence all the above conditions show that M_{m,n} is a vector space.


The Vector Space F(−∞, ∞) of Real-Valued Functions Defined on (−∞, ∞)

We will prove it as follows. Since we know that

F(−∞, ∞) = {f : f is a real-valued function defined for every x ∈ (−∞, ∞)},

let us define addition and scalar multiplication for f, g ∈ F(−∞, ∞) and k ∈ R:

(f + g)(x) = f(x) + g(x)

and (kf)(x) = kf(x)

Firstly we will show that F(−∞, ∞) is an Abelian group.

Closure Law: that is, for f, g ∈ F(−∞, ∞) we will have f + g ∈ F(−∞, ∞).

Let f, g ∈ F(−∞, ∞); then

(f + g)(x) = f(x) + g(x) is again a real-valued function on (−∞, ∞).

Closure law holds.

Associative Law: that is, for f, g, h ∈ F(−∞, ∞) we will have f + (g + h) = (f + g) + h.

Let f, g, h ∈ F(−∞, ∞); then

[f + (g + h)](x) = f(x) + [g(x) + h(x)] = [f(x) + g(x)] + h(x) = [(f + g) + h](x)

So f + (g + h) = (f + g) + h. Associative law holds.

Identity Law:

that is, for f ∈ F(−∞, ∞) we will have f + 0 = 0 + f = f, where 0 is the zero function.

Let f ∈ F(−∞, ∞); then

(f + 0)(x) = f(x) + 0(x) = f(x) and (0 + f)(x) = 0(x) + f(x) = f(x)

Identity law holds.

Inverse Law: that is, for f ∈ F(−∞, ∞) we will have f + (−f) = (−f) + f = 0.

Let f ∈ F(−∞, ∞); then

[f + (−f)](x) = f(x) − f(x) = 0 = 0(x)

[(−f) + f](x) = −f(x) + f(x) = 0 = 0(x). Inverse law holds.

Commutative Law: that is, for f, g ∈ F(−∞, ∞) we will have f + g = g + f.

Let f, g ∈ F(−∞, ∞); then

(f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x), so f + g = g + f.

Commutative law holds.

Hence the given space is an Abelian group under addition.

Now we will show the scalar multiplication properties.

- If k is any scalar and f is in V, then kf is in V.

Let k be any scalar and f ∈ F(−∞, ∞); then (kf)(x) = kf(x) is again a real-valued function.

- k(f + g) = kf + kg for f, g ∈ F(−∞, ∞) and k ∈ R:
[k(f + g)](x) = k[(f + g)(x)] = k[f(x) + g(x)] = kf(x) + kg(x) = (kf + kg)(x)

- (a + b)f = af + bf for f ∈ F(−∞, ∞) and a, b ∈ R:
[(a + b)f](x) = (a + b)f(x) = af(x) + bf(x) = (af + bf)(x)

- (ab)f = a(bf) for f ∈ F(−∞, ∞) and a, b ∈ R:
[(ab)f](x) = (ab)f(x) = a[bf(x)] = [a(bf)](x)

- 1f = f, for the unit scalar 1 in K:

(1f)(x) = 1·f(x) = f(x)
Hence all the above conditions show that F(−∞, ∞) is a vector space.


Polynomial Space P(t)

Let P(t) denote the set of all polynomials of the form

p(t) = a0 + a1t + a2t² + ⋯ + ast^s (s = 0, 1, 2, …)

where the coefficients a_i belong to a field K. Then P(t) is a vector space over K

using the following operations:

(i) Vector Addition: Here p(t) + q(t) in P(t) is the usual operation of addition of

polynomials.

(ii) Scalar Multiplication: Here kp(t) in P(t) is the usual operation of the product
of a scalar k and a polynomial p(t).

The zero polynomial 0 is the zero vector in P(t).

Polynomial Space P_n(t)

Let P_n(t) denote the set of all polynomials p(t) over a field K, where the degree of
p(t) is less than or equal to n; that is,

p(t) = a0 + a1t + ⋯ + ast^s, where s ≤ n.

Then P_n(t) is a vector space over K with respect to the usual operations of

addition of polynomials and of multiplication of a polynomial by a constant (just
like the vector space P(t) above). We include the zero polynomial 0 as an element
of P_n(t), even though its degree is undefined.

Fields and Subfields

Suppose a field E is an extension of a field K; that is, suppose E is a field that


contains K as a subfield. Then E may be viewed as a vector space over K using the
following operations:

(i) Vector Addition: Here u + v in E is the usual addition in E.

(ii) Scalar Multiplication: Here ku in E, where k ∈ K and u ∈ E, is the usual product

of k and u as elements of E.
That is, the eight axioms of a vector space are satisfied by E and its subfield K with
respect to the above two operations.


A Set that is Not a Vector Space

Show that V = R² is not a vector space under the operations

u + v = (u1 + v1, u2 + v2) and ku = (ku1, 0)

Since the addition operation is the standard one, the additive axioms all hold; the
failure is in the scalar multiplication axioms.

Let us consider u = (u1, u2) with u2 ≠ 0. Then

1u = 1(u1, u2) = (1·u1, 0) = (u1, 0)

but the actual value should be 1u = u = (u1, u2).

So 1u ≠ u, and the axiom 1u = u is not satisfied under the given operation.

Hence V = R² is not a vector space under the given operations.

Some Standard Operations:

- For the set of real numbers R:

u + v (usual addition) and ku (usual multiplication).
- For the set R² of ordered pairs of real numbers:
u + v = (u1 + v1, u2 + v2) and ku = (ku1, ku2).
- For the set Rn of n-tuples of real numbers:

u + v = (u1 + v1, u2 + v2, …, un + vn)

and ku = (ku1, ku2, …, kun)


PRACTICE:

1) Show that V = {(a1, a2, a3, …) : a_i ∈ R} is a vector space;

that is, show that the space of infinite sequences of real numbers is a vector space.
2) Show that V = R⁺, the set of positive real numbers, is a vector space under the
operations defined as u + v = uv and ku = u^k.
3) Show that is not a vector space under the operation
⃗ ( ) and ⃗ ( )
4) Show that is a vector space under the
standard operation of addition and scalar multiplication.
5) Let Show that with
defined operations ( ) and ⃗ ( )
i. Show that not a vector space.
ii. Compute and for ( ) ( )
6) Show that is a vector space or not under
the operation of addition and scalar multiplication as follows
( ) and ⃗ ( )
i. Compute and for ( ) ( )
ii. Show that ( ) ⃗
iii. Show that ( ) ⃗
7) Show that the set of all pairs of real numbers of the form ( ) is a vector
space or not with standard operations on .
8) Show that the set of all pairs of real numbers of the form ( ) is a vector
space or not with the operations ( ) ( ) ( )
and ( ) ( )
9) Show that the set of all pairs of real numbers of the form ( ) with
is a vector space or not with standard operations on .
10) Show that the set of all n – tuples of the real numbers of the form
( ) is a vector space or not with standard operations on .
11) Show that the set of all triples of the real numbers is a vector space or
not with standard vector addition but with scalar multiplication defined by
( ) ( )


12) Show that is a vector space.


Or show that space of matrices is a vector space.
13) Show that set of all invertible matrices with the standard matrix
addition and scalar multiplication is a vector space or not.
14) Show that set of all non singular matrices with the standard
matrix addition and scalar multiplication is a vector space or not.
15) Show that set of all matrices of the form 0 1 with the
standard matrix addition and scalar multiplication is a vector space or not.
16) Show that set of all real matrices of the form 0 1 with the
standard matrix addition and scalar multiplication is a vector space or not.

17) Show that Function Space , - i.e. set of all function of X into K (an
arbitrary field) is vector space.
Or show that , - * + is a vector space.
18) Show that
, - * , -+ is a vector
space.
19) Show that , - * , - ( ) ( )+ is a vector
space.
20) Show that the set of all real valued functions defined everywhere on
the real line and such that ( ) is a vector space or not with the
operations ( )( ) ( ) ( ) And ( )( ) ( )
21) Show that the set of polynomials of the form is a vector
space or not with the operation
( ) ( ) ( ) ( )
And ( ) ( ) ( )


Theorem: Let V be a vector space over a field K.

i. For any scalar k ∈ K: k0 = 0.

ii. For 0 ∈ K and any vector u ∈ V: 0u = 0.
iii. If ku = 0, where k ∈ K and u ∈ V, then k = 0 or u = 0.
iv. For any k ∈ K and any u ∈ V: (−k)u = k(−u) = −(ku).
And in particular (−1)u = −u.

Proof:

Part i: For any scalar k ∈ K: k0 = 0.

Since we know that 0 + 0 = 0, therefore k0 = k(0 + 0) = k0 + k0.

Adding −(k0) on both sides gives k0 = 0.

Part ii: For 0 ∈ K and any vector u ∈ V: 0u = 0.

We can write 0 + 0 = 0 in K, so

0u = (0 + 0)u = 0u + 0u; adding −(0u) on both sides gives 0u = 0.

Part iii: If ku = 0, where k ∈ K and u ∈ V, then k = 0 or u = 0.

Suppose ku = 0 and k ≠ 0; then there exists a scalar k^{-1} such that k^{-1}k = 1.

Thus u = 1u = (k^{-1}k)u = k^{-1}(ku) = k^{-1}0 = 0.

Part iv: For any k ∈ K and any u ∈ V: (−k)u = k(−u) = −(ku),

and in particular (−1)u = −u.

Using u + (−u) = 0 and k + (−k) = 0:

0 = k0 = k[u + (−u)] = ku + k(−u), so k(−u) = −(ku);

and 0 = 0u = [k + (−k)]u = ku + (−k)u, so (−k)u = −(ku).

Thus (−k)u = k(−u) = −(ku), and for k = 1 we get (−1)u = −u.


Linear Combinations

Let V be a vector space over a field K. A vector v in V is a linear combination of

vectors u1, u2, …, um in V if there exist scalars k1, k2, …, km in K such that
v = k1u1 + k2u2 + ⋯ + kmum

Alternatively, v is a linear combination of u1, u2, …, um if there is a solution

to the vector equation v = x1u1 + x2u2 + ⋯ + xmum, where x1, x2, …, xm are
unknown scalars. These scalars are called the coefficients of the linear combination.

Example: (Linear Combinations in Rn)

Suppose we want to express v = (3, 7, −4) in R3 as a linear combination of the

vectors u1 = (1, 2, 3); u2 = (2, 3, 7); u3 = (3, 5, 6)

We seek scalars x, y, z such that v = xu1 + yu2 + zu3; that is,

[3; 7; −4] = x[1; 2; 3] + y[2; 3; 7] + z[3; 5; 6]

or x + 2y + 3z = 3, 2x + 3y + 5z = 7, 3x + 7y + 6z = −4

(For notational convenience, we have written the vectors in R3 as columns, because

it is then easier to find the equivalent system of linear equations.) Reducing the
system to echelon form yields

x + 2y + 3z = 3, −y − z = 1, y − 3z = −13

And then x + 2y + 3z = 3, −y − z = 1, −4z = −12

Back-substitution yields the solution x = 2, y = −4, z = 3.

Thus v = 2u1 − 4u2 + 3u3.
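The coefficient search is just a linear system; a NumPy sketch using the vectors from the example (the u's placed as columns of A):

```python
import numpy as np

u1 = np.array([1, 2, 3])
u2 = np.array([2, 3, 7])
u3 = np.array([3, 5, 6])
v  = np.array([3, 7, -4])

A = np.column_stack([u1, u2, u3])   # columns are the u's
coeffs = np.linalg.solve(A, v)      # solve A [x, y, z]^T = v
print(coeffs)                       # [ 2. -4.  3.]
```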


Remark:

Generally speaking, the question of expressing a given vector v in Kn as a linear

combination of vectors u1, u2, …, um in Kn is equivalent to solving a system
AX = B of linear equations, where v is the column B of constants, and the u's are
the columns of the coefficient matrix A. Such a system may have a unique solution
(as above), many solutions, or no solution. The last case—no solution—means that
v cannot be written as a linear combination of the u's.

Example:

Suppose that we have the vectors u = (1, 2, −1) and v = (6, 4, 2) in R3. Show that

w = (9, 2, 7) is a linear combination of u and v, and w′ = (4, −1, 8) is not a linear
combination of u and v.

Solution: In order for w to be a linear combination of u and v, there must be

scalars k1, k2 such that w = k1u + k2v; that is,

(9, 2, 7) = k1(1, 2, −1) + k2(6, 4, 2)

Equating corresponding components gives

k1 + 6k2 = 9, 2k1 + 4k2 = 2, −k1 + 2k2 = 7

Solving the system using Gaussian elimination yields k1 = −3, k2 = 2.

So w = −3u + 2v; hence w is a linear combination of u and v.

Similarly, in order for w′ to be a linear combination of u and v, there must

be scalars k1, k2 such that w′ = k1u + k2v; that is,

(4, −1, 8) = k1(1, 2, −1) + k2(6, 4, 2)

Equating corresponding components gives

k1 + 6k2 = 4, 2k1 + 4k2 = −1, −k1 + 2k2 = 8

Solving the system of equations, we notice that it is inconsistent, so no such

k1, k2 exist. Consequently, w′ is not a linear combination of u and v.


Example: (Linear Combinations in P(t))

Suppose we want to express the polynomial v = 3t² + 5t − 5 as a linear

combination of the polynomials

p1 = t² + 2t + 1, p2 = 2t² + 5t + 4, p3 = t² + 3t + 6

We seek scalars x, y, z such that v = xp1 + yp2 + zp3; that is,

3t² + 5t − 5 = x(t² + 2t + 1) + y(2t² + 5t + 4) + z(t² + 3t + 6) …….(i)

There are two ways to proceed from here.

(1) Expand the right-hand side of (i), obtaining:

3t² + 5t − 5 = (x + 2y + z)t² + (2x + 5y + 3z)t + (x + 4y + 6z)

Set coefficients of the same powers of t equal to each other, and reduce the
system to echelon form:

x + 2y + z = 3, 2x + 5y + 3z = 5 and x + 4y + 6z = −5

Or x + 2y + z = 3, y + z = −1 and 2y + 5z = −8

Or x + 2y + z = 3, y + z = −1 and 3z = −6

The system is in triangular form and has a solution. Back-substitution yields the
solution x = 3, y = 1, z = −2. Thus, v = 3p1 + p2 − 2p3.

(2) The equation (i) is actually an identity in the variable t; that is, the equation
holds for any value of t. We can obtain three equations in the unknowns x, y, z by
setting t equal to any three values.

For example, set t = 0 in (i) to obtain: x + 4y + 6z = −5

Set t = 1 in (i) to obtain: 4x + 11y + 10z = 3

Set t = −1 in (i) to obtain: y + 4z = −7

Reducing this system to echelon form and solving by back-substitution again

yields the solution x = 3, y = 1, z = −2.

Thus (again), v = 3p1 + p2 − 2p3.


PRACTICE:

1) Which of the following are linear combination of ⃗ ( ) and


( )?

i. ( ) ii. ( ) iii. ( )

2) Express the following as linear combination of ⃗ ( )


( ) and ⃗⃗ ( )

i. ( ) ii. ( ) iii. ( )

3) Which of the following are linear combination of 0 1,

0 1 and 0 1?

i. 0 1

ii. 0 1

iii. 0 1

4) For what value of ‗k‘ will the vector ( ) in R3be a linear combination
of the vectors ( )( )

5) In each part express the vector as a linear combination of


, ,

i.
ii.
iii.
iv.

6) Let V be a vector space over a field K. Then show that for every
and : we have ( )


Subspaces

Let V be a vector space over a field K and let W be a subset of V. Then W is a


subspace of V if W is itself a vector space over K with respect to the operations of
vector addition and scalar multiplication on V.

The way in which one shows that any set W is a vector space is to show that W
satisfies the eight axioms of a vector space. However, if W is a subset of a vector
space V, then some of the axioms automatically hold in W, because they already
hold in V. Simple criteria for identifying subspaces follow.

Theorem: Suppose W is a subset of a vector space V. Then W is a subspace of V

iff the following two conditions hold:

a) The zero vector 0 belongs to W.

b) For every u, v ∈ W and k ∈ K:
(i) The sum u + v ∈ W.
(ii) The multiple ku ∈ W.

Property (i) in (b) states that W is closed under vector addition, and property (ii) in
(b) states that W is closed under scalar multiplication. Both properties may be
combined into the following equivalent single statement:

(b′) For every u, v ∈ W and a, b ∈ K, the linear combination au + bv ∈ W.

Now let V be any vector space. Then V automatically contains two subspaces: the
set {0} consisting of the zero vector alone and the whole space V itself. These are
sometimes called the trivial subspaces of V. This means that every vector space has
at least two subspaces.

The Zero Subspace:

If V is any vector space and W = {0} is the subset of V that contains the zero
vector only, then W is closed under addition and scalar multiplication,

since 0 + 0 = 0 and k0 = 0 for any scalar k.

Then we call W the zero subspace of V.


Theorem: If S = {w1, w2, …, wr} is a non-empty set of vectors in a vector space

V, then the set W of all possible linear combinations of the vectors in S is a
subspace of V.

Proof: Let W be the set of all possible linear combinations of the vectors in S.

We must show that W is closed under addition and scalar multiplication.

To prove closure under addition, let u, v ∈ W, say

u = c1w1 + c2w2 + ⋯ + crwr and v = k1w1 + k2w2 + ⋯ + krwr

Then their sum can be written as

u + v = (c1 + k1)w1 + (c2 + k2)w2 + ⋯ + (cr + kr)wr

which is a linear combination of the vectors in S.

To prove closure under scalar multiplication, let u ∈ W, say

u = c1w1 + c2w2 + ⋯ + crwr; then ku = (kc1)w1 + (kc2)w2 + ⋯ + (kcr)wr

which is a linear combination of the vectors in S.

Then W is closed under scalar multiplication.

Hence W is a subspace of V.

Theorem: If S = {w1, w2, …, wr} is a non-empty set of vectors in a vector space V,

and if the set W of all possible linear combinations of the vectors in S is a subspace
of V, then W is the smallest subspace of V that contains all of the vectors in S, in
the sense that any other subspace that contains those vectors contains W.

Proof: Let W′ be any subspace of V that contains all of the vectors in S. Since W′
is closed under addition and scalar multiplication, it contains all linear
combinations of the vectors in S and hence contains W.


Theorem: Suppose W is a subset of a vector space V. Then W is a subspace of V

iff the following two conditions hold:

For every u, v ∈ W and k ∈ K:

i. The sum u + v ∈ W.
ii. The multiple ku ∈ W.

Proof:

Suppose that W is a subspace of V. Then by the definition of a subspace, W is a vector

space over the field K, and hence the given conditions must hold.

Conversely:

Suppose that W is a non-empty subset of V such that the conditions (i) and (ii)
are satisfied. Then we have to show that W is a subspace of V.

For this, let u ∈ W (such a u exists since W is non-empty); then 0 = 0u ∈ W and −u = (−1)u ∈ W by (ii).

So for u, v ∈ W, u + v ∈ W by (i),

which shows that W is closed under addition.

Now W ⊆ V, and because V is an Abelian group under addition, W is also an

Abelian group under addition.

Also, for k ∈ K and u ∈ W, ku ∈ W by (ii).

The remaining four conditions of scalar multiplication hold in W because they

hold in V. Therefore W is a subspace of V.


Theorem: Suppose W is a non-empty subset of a vector space V. Then W is a

subspace of V if and only if for every u, v ∈ W and a, b ∈ K, the
linear combination au + bv ∈ W.

Proof: Suppose that W is a subspace of V. Then by the definition of a subspace, W

is a vector space over the field K, and hence the given condition must hold.

Conversely: Suppose that W is a non-empty subset of V such that the condition
is satisfied. Then we have to show that W is a subspace of V.

For this, let u, v ∈ W. Then according to the condition, au + bv ∈ W.

Now take a = b = 1, so that u + v ∈ W; and take b = 0, so that au ∈ W.

This means that for every u, v ∈ W and a ∈ K, we have u + v ∈ W and au ∈ W.

Thus by the theorem ―W is a subspace of V iff the following two conditions hold:

for every u, v ∈ W and k ∈ K, the sum u + v ∈ W and the multiple ku ∈ W‖,

W is a subspace of V.

Theorem:

Suppose U and W are subspaces of a vector space V. Then show that U ∩ W is also
a subspace of V.

Proof: Suppose that u, v ∈ U ∩ W and a, b ∈ K.

Then u, v ∈ U and also u, v ∈ W.

Hence au + bv ∈ U and also au + bv ∈ W, U and W being subspaces.

Hence au + bv ∈ U ∩ W for u, v ∈ U ∩ W and a, b ∈ K.

This implies U ∩ W is a subspace of V.


Theorem:

Show that the intersection of any number of subspaces of a vector space V is a

subspace of V.

Proof: Suppose that {W_i : i ∈ I} is any subcollection of subspaces of a vector

space V over the field K. Then we have to show that ∩W_i is also a subspace of V.

For this, suppose that u, v ∈ ∩W_i and a, b ∈ K.

Then u, v ∈ W_i for all i ∈ I.

Hence au + bv ∈ W_i for each i, each W_i being a subspace.

Thus au + bv ∈ ∩W_i. This implies ∩W_i is a subspace of V.

Theorem:

Suppose U and W are subspaces of a vector space V. Then U + W is also a

subspace of V containing both U and W. Further, U + W is the smallest subspace of
V containing both U and W.

Proof: Given that U and W are subspaces of a vector space V, we can define

U + W = {u + w : u ∈ U, w ∈ W}

We will prove that U + W is also a subspace of V.

For this, suppose that x, y ∈ U + W and a, b ∈ K,

where x = u1 + w1 and y = u2 + w2 with u1, u2 ∈ U and w1, w2 ∈ W.

Then au1 + bu2 ∈ U and aw1 + bw2 ∈ W, U and W being subspaces.

Now ax + by = a(u1 + w1) + b(u2 + w2)

= (au1 + bu2) + (aw1 + bw2)

∈ U + W

This implies U + W is a subspace of V.


Next we will prove that U + W contains both U and W,

i.e. U ⊆ U + W and W ⊆ U + W.

Since u = u + 0 ∈ U + W for all u ∈ U,

U ⊆ U + W, and similarly W ⊆ U + W.

Hence U + W is a subspace of V containing both U and W.

Now

we will prove that U + W is the smallest subspace of V containing both U and W.

Let W′ be any subspace of V containing both U and W.

Then for every u ∈ U, w ∈ W we have u, w ∈ W′, so that u + w ∈ W′;

but U + W consists precisely of such sums, so U + W ⊆ W′.

Hence U + W is the smallest subspace of V containing both U and W.

The only way to learn mathematics is to do mathematics.


Examples of nontrivial subspaces follow.

Lines through the origin are subspaces of R² and of R³:

If W is a line through the origin of either R² or R³, then adding two vectors on the
line or multiplying a vector on the line by a scalar produces another vector on the
line, so W is closed under addition and scalar multiplication. Hence it is a subspace.

Planes through the origin are subspaces of R³:

If u and v are vectors in a plane W through the origin of R³, then it is evident

geometrically that u + v and ku also lie in the same plane W for any scalar k.
Thus W is closed under addition and scalar multiplication.

A list of subspaces of R² and of R³:

Subspaces of R²: {0}; lines through the origin; R².
Subspaces of R³: {0}; lines through the origin; planes through the origin; R³.


A subset of R² that is not a subspace of R²:

Consider W = {(x, y) : x ≥ 0, y ≥ 0} in R². This is not a subspace of R², because

it is not closed under scalar multiplication.

For example: v = (1, 1) ∈ W, but (−1)v = (−1, −1) ∉ W.

Example: Consider the vector space V = R³ and let U consist of all vectors in R³
whose entries are equal; that is, U = {(a, b, c) : a = b = c}

For example, (1, 1, 1), (−3, −3, −3), (7, 7, 7), (−2, −2, −2) are vectors in

U. Geometrically, U is the line through the origin O and the point (1, 1, 1), as
shown in the figure. Clearly 0 = (0, 0, 0) belongs to U, because all entries in 0 are
equal. Further, suppose u and v are arbitrary vectors in U, say u = (a, a, a) and
v = (b, b, b).

Then, for any scalar k ∈ R, the following are also vectors in U:

u + v = (a + b, a + b, a + b) and ku = (ka, ka, ka)

Thus, U is a subspace of R³.


Example: Consider the vector space V = R³ and let W be any plane in R³

passing through the origin, as pictured in the figure. Then 0 = (0, 0, 0) belongs to W,

because we assumed W passes through the origin O. Further, suppose u and v are
vectors in W. Then u and v may be viewed as arrows in the plane W emanating
from the origin O, as in the figure. The sum u + v and any multiple ku of u also lie in
the plane W. Thus, W is a subspace of R³.

PRACTICE:

1) Use the subspace criteria to determine which of the following are subspaces of
R³:
a) All vectors of the form ( )
b) All vectors of the form ( )
c) All vectors of the form ( ) where
d) All vectors of the form ( ) where
e) All vectors of the form ( )
f) All vectors of the form ( ) where
g) All vectors of the form ( ) where
h) All vectors of the form ( ) where
and
i) All vectors of the form ( ) where are rationals
j) All vectors of the form ( ) where
k) All vectors of the form ( ) where

2) Show that set of rational numbers Q is not a subspace of R.


3) The union of any number of subspaces need not to be a subspace. Prove!


Subspaces of M_{nn}

We know that the sum of two symmetric matrices is symmetric and

that a scalar multiple of a symmetric matrix is symmetric. Thus the set of n × n
symmetric matrices is closed under addition and scalar multiplication and
hence is a subspace of M_{nn}.

Similarly, the sets of upper triangular matrices, lower triangular matrices and
diagonal matrices are subspaces of M_{nn}.

Let V = M_{nn}, the vector space of n × n matrices. Let W1 be the subset of all

(upper) triangular matrices; then W1 is a subspace of V, because W1 contains the
zero matrix 0 and W1 is closed under matrix addition and scalar multiplication; that
is, the sum and scalar multiple of such triangular matrices are also triangular.

A subset of M_{nn} that is not a Subspace

The set W of invertible n × n matrices is not a subspace of M_{nn}, failing on two

counts:

- It is not closed under addition.

- It is not closed under scalar multiplication.

We will illustrate this with an example.

Let us consider the two invertible matrices U = [1 2; 2 5] and V = [−1 −2; −2 −5] in M_{22}. Then the

matrix U + V is the zero matrix and hence is not invertible, and 0U has a
column of zeros, so it also is not invertible.

Remark:

The set of matrices whose determinant is zero is also not a subspace, since it is not closed under addition.


PRACTICE:

Use the subspace criteria to determine which of the following are subspaces of M_{nn}:

a) The set of all diagonal matrices.


b) The set of all matrices A such that ( )
c) The set of all matrices A such that ( )
d) The set of all symmetric matrices.
e) The set of all matrices A such that
f) The set of all matrices A for which has only the trivial
solution.
g) The set of all matrices A such that for some fixed
matrices B.

The Subspace C(−∞, ∞)

Since we know from a theorem in calculus that "a sum of continuous functions is continuous and a constant times a continuous function is continuous", rephrased in vector language, the set of continuous functions on (−∞, ∞) is a subspace of F(−∞, ∞). We will denote this subspace by C(−∞, ∞).

Remark:

 A function with a continuous derivative is said to be continuously differentiable.
 The sum of two continuously differentiable functions is continuously differentiable, and a constant times a continuously differentiable function is continuously differentiable.
 Functions that are continuously differentiable on (−∞, ∞) form a subspace of F(−∞, ∞).
 We will denote this subspace by C¹(−∞, ∞), where the superscript 1 emphasizes that the first derivatives are continuous.
 We will denote by Cᵐ(−∞, ∞) and C∞(−∞, ∞) the subspaces of functions with m continuous derivatives and with continuous derivatives of all orders, respectively.


The Subspace of all Polynomials:

Since we know that a polynomial is a function that can be expressed in the form p(x) = c₀ + c₁x + ⋯ + cₙxⁿ, where c₀, c₁, …, cₙ are constants. Also we know that "the sum of two polynomials is a polynomial and a constant times a polynomial is a polynomial." Thus the set W of all polynomials is closed under addition and scalar multiplication and hence is a subspace of C(−∞, ∞). We will denote it by P∞.

Degree of Polynomial:
The highest power of the variable that occurs with a non-zero coefficient.
For example, the polynomial p(x) = c₀ + c₁x + ⋯ + cₙxⁿ with cₙ ≠ 0 has degree n.

The Subspace of all Polynomials of degree n:

It is not true that the set W of polynomials of fixed positive degree n is a subspace of C(−∞, ∞), because that set is not closed under addition.

For example, the polynomials 1 + 2x + 3x² and 5 + 7x − 3x² both have degree 2, but their sum 6 + 9x has degree 1.

But for each non-negative integer n, the polynomials of degree n or less do form a subspace of C(−∞, ∞). We will denote it by Pₙ.
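A quick numeric illustration of the failure above (a sketch, storing each polynomial as its coefficient list c₀, c₁, c₂, … and using the polynomials of the reconstructed example):

# p1 = 1 + 2x + 3x^2 and p2 = 5 + 7x - 3x^2 both have degree 2
p1 = [1, 2, 3]
p2 = [5, 7, -3]

# coefficient-wise sum: the x^2 terms cancel, so the sum 6 + 9x has degree 1
s = [a + b for a, b in zip(p1, p2)]
print(s)                                          # [6, 9, 0]
degree = max(i for i, c in enumerate(s) if c != 0)
print(degree)                                     # 1: "degree exactly 2" is not closed under addition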

Example:

Let V = P(t), the vector space of all polynomials in t. Then the space Pₙ(t) of polynomials of degree at most n may be viewed as a subspace of P(t). Let Q(t) be the collection of polynomials with only even powers of t. For example, the following are polynomials in Q(t):

p₁ = 3 + 4t² − 5t⁶ and p₂ = 6 − 7t⁴ + 9t⁶ + 3t¹²

(We assume that any constant k = kt⁰ is an even power of t.) Then Q(t) is a subspace of P(t).


Remark: It is proved in calculus that polynomials are continuous functions and have continuous derivatives of all orders on (−∞, ∞); thus it follows that P∞ is not only a subspace of C(−∞, ∞) but is also a subspace of C∞(−∞, ∞).

All the spaces discussed previously are nested as follows:

P∞ ⊂ C∞(−∞, ∞) ⊂ ⋯ ⊂ C¹(−∞, ∞) ⊂ C(−∞, ∞) ⊂ F(−∞, ∞)

PRACTICE:

1) (Calculus Required) Show that followings set of functions are


subspaces of ( )?
i. All functions in ( ) that satisfy ( )
ii. All functions in ( ) that satisfy ( )
iii. All functions in ( ) that satisfy ( ) ( )
iv. All polynomials of degree 2.
v. All continuous functions on ( )
vi. All differentiable functions on ( )
vii. All differentiable functions on ( ) that satisfy

2) (Calculus Required) Show that the set of continuous functions f = f(x) on [a, b] such that ∫ₐᵇ f(x) dx = 0 is a subspace of C[a, b].


3) Use subspace criteria to determine which of the following are subspaces of


?
i. All polynomials for which
ii. All polynomials for which
iii. All polynomials in which are rational
iv. All polynomials in which are real numbers.

4) Use subspace criteria to determine which of the following are subspaces of


?
i. All sequences v in of the form ( )
ii. All sequences v in of the form ( )
iii. All sequences v in of the form ( )
iv. All sequences v in whose components are zero from some point
on.

5) Let V be a vector space of functions f : R → R. Show that W is a subspace of V where;
i. W = {f ∈ V : f(1) = 0}, all functions whose value at 1 is 0.
ii. W = {f ∈ V : f(3) = f(1)}, all functions assigning the same value to 3 and 1.
iii. W = {f ∈ V : f(−x) = −f(x)}, all odd functions.


Spanning Sets
Let V be a vector space over K. Vectors u₁, u₂, …, uₘ in V are said to span V or to form a spanning set of V if every v in V is a linear combination of the vectors u₁, u₂, …, uₘ. That is, if there exist scalars a₁, a₂, …, aₘ in K such that
v = a₁u₁ + a₂u₂ + ⋯ + aₘuₘ

Or: if S = {w₁, w₂, …, wᵣ} is a non-empty set of vectors in a vector space V, then the subspace W of V that consists of all possible linear combinations of the vectors in S is called the subspace of V generated by S, and we say that the vectors w₁, w₂, …, wᵣ span W. We denote this subspace as
W = span{w₁, w₂, …, wᵣ} or W = span(S)
Or: let S be a non-empty subset of a vector space V. Then the set of all linear combinations of finite numbers of elements of S is called the linear span of S and is denoted by ⟨S⟩, span(S), or [S].
Remarks:
 Suppose u₁, u₂, …, uₘ span V. Then, for any vector w, the set w, u₁, u₂, …, uₘ also spans V.
 Suppose u₁, u₂, …, uₘ span V and suppose uₖ is a linear combination of some of the other u's. Then the u's without uₖ also span V.
 Suppose u₁, u₂, …, uₘ span V and suppose one of the u's is the zero vector. Then the u's without the zero vector also span V.
 If S = {u₁, u₂, …, uₘ} and S′ = {w₁, w₂, …, wᵣ} are non-empty sets of vectors in a vector space V, then span{u₁, …, uₘ} = span{w₁, …, wᵣ} iff each vector in S is a linear combination of those in S′, and each vector in S′ is a linear combination of those in S.
Theorem: If S and T are subsets of V with S ⊆ T, then ⟨S⟩ ⊆ ⟨T⟩.
Proof: Let v ∈ ⟨S⟩. Then v = a₁u₁ + a₂u₂ + ⋯ + aₘuₘ, a linear combination of vectors u₁, …, uₘ in S.
Since S ⊆ T, each uᵢ also belongs to T, so v is a linear combination of vectors in T. This implies v ∈ ⟨T⟩.
Hence ⟨S⟩ ⊆ ⟨T⟩.


Theorem:

Let S be a non-empty set of vectors in a vector space V over a field K. Then ⟨S⟩ is a subspace of V containing S, and it is the smallest subspace of V containing S.

Proof:

Let u, v ∈ ⟨S⟩, say u = ∑ aᵢuᵢ and v = ∑ bⱼvⱼ, where uᵢ, vⱼ ∈ S and aᵢ, bⱼ ∈ K. Then for any α, β ∈ K,

αu + βv = α(∑ aᵢuᵢ) + β(∑ bⱼvⱼ) = ∑ (αaᵢ)uᵢ + ∑ (βbⱼ)vⱼ

which shows that αu + βv is a linear combination of vectors in S.

So αu + βv ∈ ⟨S⟩.

Hence ⟨S⟩ is a subspace of V; moreover, each u ∈ S satisfies u = 1·u ∈ ⟨S⟩, so S ⊆ ⟨S⟩.

Now we prove that ⟨S⟩ is the smallest subspace of V containing S.

If W is any other subspace of V containing S, then it contains all vectors of the form ∑ aᵢuᵢ where uᵢ ∈ S and aᵢ ∈ K.

⟹ ⟨S⟩ ⊆ W

Thus ⟨S⟩ is the smallest subspace of V containing S.
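In Kⁿ, membership in ⟨S⟩ can be tested computationally: v is a linear combination of the vectors in S exactly when appending v as a column does not raise the rank. A small sketch (not from the original text; numpy assumed):

import numpy as np

def in_span(S, v):
    # v lies in the span of the vectors in S iff rank is unchanged by appending v
    M = np.column_stack(S)
    return np.linalg.matrix_rank(np.column_stack([M, v])) == np.linalg.matrix_rank(M)

u1, u2 = np.array([1, 1, 1]), np.array([1, 1, 0])
print(in_span([u1, u2], np.array([3, 3, 2])))   # True: 2*u1 + 1*u2
print(in_span([u1, u2], np.array([0, 1, 0])))   # False: not in the plane spanned by u1, u2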


The standard unit vectors span Rⁿ

Since we know that the standard unit vectors are as follows;

e₁ = (1, 0, 0, …, 0), e₂ = (0, 1, 0, …, 0), ………., eₙ = (0, 0, 0, …, 1)

These vectors span Rⁿ, since every vector v = (v₁, v₂, …, vₙ) in Rⁿ can be expressed as v = v₁e₁ + v₂e₂ + ⋯ + vₙeₙ, which is a linear combination of e₁, e₂, …, eₙ.

Example: Consider v = (a, b, c) in R³. Then it can be written as a linear combination of the standard unit vectors in R³ as follows;

(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = a î + b ĵ + c k̂

Example: Consider the vector space V = R³, then show that î = (1, 0, 0), ĵ = (0, 1, 0), k̂ = (0, 0, 1) span any vector (a, b, c) in R³.
Solution: Since we know that "the standard unit vectors span Rⁿ", we can write (a, b, c) as a linear combination of standard unit vectors in R³:
(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = a î + b ĵ + c k̂

Example: Consider the vector space V = R³; then we claim that the following vectors also form a spanning set of R³:
w₁ = (1, 1, 1), w₂ = (1, 1, 0), w₃ = (1, 0, 0)
Specifically, if v = (a, b, c) is any vector in R³, then
v = c w₁ + (b − c) w₂ + (a − b) w₃
For example, v = (5, −6, 2) = 2w₁ − 8w₂ + 11w₃.

Example: Consider the vector space V = R³. One can show that u = (2, 7, 8) cannot be written as a linear combination of the vectors
w₁ = (1, 2, 3), w₂ = (1, 3, 5), w₃ = (1, 5, 9)
Accordingly, w₁, w₂, w₃ do not span R³.


Testing for spanning:

Determine whether the vectors v₁ = (1, 1, 2), v₂ = (1, 0, 1), v₃ = (2, 1, 3) span the vector space R³.

Solution:
We must determine whether an arbitrary vector b = (b₁, b₂, b₃) in R³ can be expressed as a linear combination b = k₁v₁ + k₂v₂ + k₃v₃ of the vectors v₁, v₂, v₃.
Expressing the above equation in terms of components gives
(b₁, b₂, b₃) = k₁(1, 1, 2) + k₂(1, 0, 1) + k₃(2, 1, 3)
or
k₁ + k₂ + 2k₃ = b₁
k₁ + k₃ = b₂
2k₁ + k₂ + 3k₃ = b₃

This system is consistent for every b if and only if its coefficient matrix

A = [1 1 2; 1 0 1; 2 1 3]

has a non-zero determinant.

But this is not the case here, since det(A) = 0.

So v₁, v₂, v₃ do not span R³.
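The determinant test in this example can be reproduced in a few lines (a sketch, not from the original text; numpy assumed):

import numpy as np

# columns are v1, v2, v3 from the example above
A = np.array([[1, 1, 2],
              [1, 0, 1],
              [2, 1, 3]], dtype=float)

print(np.linalg.det(A))          # 0.0 (up to rounding): the vectors do not span R^3
print(np.linalg.matrix_rank(A))  # 2: they span only a plane through the origin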


A geometric view of spanning in R² and R³

a) If v is a non-zero vector in R² or R³ that has its initial point at the origin, then span{v}, which is the set of all scalar multiples of v, is the line through the origin determined by v.

b) If v₁ and v₂ are non-zero, non-collinear vectors in R³ that have their initial points at the origin, then span{v₁, v₂}, which consists of all linear combinations of v₁ and v₂, is the plane through the origin determined by these two vectors.


PRACTICE:

1) In each part determine whether the vectors span R3


a) ( ) ( ) ( )
b) ( ) ( ) ( )

2) Suppose ( ) ( ) ( ) then which of


the following vectors are in span * +
a) ( )
b) ( )
c) ( )
d) ( )

3) Show that the yz-plane W = {(0, b, c) : b, c ∈ R} in R³ is spanned by u = (0, 1, 1) and v = (0, 2, −1).
4) Find an equation of the subspace W of R3 generated by
*( )( )+
5) Show that the complex numbers and generate the vector
space C over R.
6) Let ( ) ( ) ( ) and
( ) ( ) then show that
* + * +
7) Show that ( ) ( ) ( ) span R3
8) Find conditions on so that ( ) in R3 belongs to
( ) where ( ) ( ) ( )
9) Let S and T be subsets of a vector space V. then show that
i. 〈 〉 〈 〉 〈 〉 but equality does not hold.
ii. 〈 〉 〈 〉 〈 〉 but equality does not hold.
iii. 〈 〉 〈 〉 〈 〉 〈 〉
iv. 〈〈 〉〉 〈 〉
10) Suppose * + is linearly independent subset
of V then show that 〈 〉 〈 〉 * +. That is * + { } * +


A spanning set for Pₙ(t)

Consider the vector space Pₙ(t) consisting of all polynomials of degree ≤ n.

(a) Clearly every polynomial in Pₙ(t) can be expressed as a linear combination of the polynomials 1, t, t², …, tⁿ.
Thus, these powers of t (where 1 = t⁰) form a spanning set for Pₙ(t).
We can denote this by writing Pₙ(t) = span{1, t, t², …, tⁿ}

(b) One can also show that, for any scalar c, the following powers of t − c, that is 1, (t − c), (t − c)², …, (t − c)ⁿ (where (t − c)⁰ = 1), also form a spanning set for Pₙ(t).

We can denote this by writing Pₙ(t) = span{1, (t − c), (t − c)², …, (t − c)ⁿ}

(c) Consider the vector space M = M₂,₂ consisting of all 2 × 2 matrices, and consider the following four matrices in M:

E₁₁ = [1 0; 0 0], E₁₂ = [0 1; 0 0], E₂₁ = [0 0; 1 0], E₂₂ = [0 0; 0 1]

Then clearly any matrix A in M can be written as a linear combination of the four matrices. For example,

A = [a b; c d] = aE₁₁ + bE₁₂ + cE₂₁ + dE₂₂

Accordingly, the four matrices E₁₁, E₁₂, E₂₁, E₂₂ span M.

PRACTICE:

1) Show that a vector space ( ) of real polynomials cannot be spanned


by a finite number of polynomials.
2) Determine whether the following polynomials span
i.


Solution Space of a Homogeneous System

Consider a system AX = B of m linear equations in n unknowns. Then every solution u may be viewed as a vector in Kⁿ. Thus, the solution set of such a system is a subset of Kⁿ. Now suppose the system is homogeneous; that is, suppose the system has the form AX = 0. Let W be its solution set. Because A0 = 0, the zero vector 0 ∈ W. Moreover, suppose u and v belong to W. Then u and v are solutions of AX = 0, or, in other words, Au = 0 and Av = 0. Therefore, for any scalars a and b, we have

A(au + bv) = aAu + bAv = a0 + b0 = 0

Thus, au + bv belongs to W, because it is a solution of AX = 0. Accordingly, W is a subspace of Kⁿ.

We state the above result formally.

Theorem: The solution set W of a homogeneous linear system AX = 0 of m equations in n unknowns is a subspace of Kⁿ (Rⁿ), known as the Null Space of A.

Proof: Let W be the solution set of the system. The set W is not empty, because it contains at least the trivial solution X = 0.

To show W is a subspace, let u, v be vectors in W. Since these vectors are solutions of AX = 0, we have Au = 0 and Av = 0.

This implies A(u + v) = Au + Av = 0 + 0 = 0, which means W is closed under addition.

Similarly, if k is any scalar, then A(ku) = k(Au) = k0 = 0, which means W is closed under scalar multiplication. Hence W is a subspace of Kⁿ (Rⁿ).

Because the solution set of a homogeneous system in n unknowns is actually a subspace of Kⁿ (Rⁿ), we will generally refer to it as the solution space of the system.

Remark:

We emphasize that the solution set of a non-homogeneous system AX = B with B ≠ 0 is not a subspace of Kⁿ. In fact, the zero vector 0 does not belong to its solution set.


Example: Solve the system by any method and then give a geometric description of the solution set.

[1 −2 3; 2 −4 6; 3 −6 9][x; y; z] = [0; 0; 0]

Solution: The solutions are x = 2s − 3t, y = s, z = t,

from which it follows that x = 2y − 3z, or x − 2y + 3z = 0.
This is the equation of the plane through the origin that has n = (1, −2, 3) as a normal.

Example: Solve the system by any method and then give a geometric description of the solution set.

[1 −2 3; −3 7 −8; −2 4 −6][x; y; z] = [0; 0; 0]

Solution: The solutions are x = −5t, y = −t, z = t,

which are parametric equations for the line through the origin that is parallel to the vector v = (−5, −1, 1).

Example: Solve the system by any method and then give a geometric description of the solution set.

[1 −2 3; −3 7 −8; 4 1 2][x; y; z] = [0; 0; 0]

Solution: The only solution is x = 0, y = 0, z = 0,

so the solution space consists of the single point {0}.

Example: Solve the system by any method and then give a geometric description of the solution set.

[0 0 0; 0 0 0; 0 0 0][x; y; z] = [0; 0; 0]

Solution: This linear system is satisfied by all real values of x, y, z, so the solution space is all of R³.
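These solution spaces can be double-checked with a computer algebra system; a sketch using sympy (assumed available), with the coefficient matrices of the first and third examples as reconstructed above:

from sympy import Matrix

# example (plane): its null space has two basis vectors -> a plane through the origin
A = Matrix([[1, -2, 3], [2, -4, 6], [3, -6, 9]])
print(A.nullspace())   # two vectors spanning the plane x - 2y + 3z = 0

# example (trivial solution only): empty null space basis -> the origin only
C = Matrix([[1, -2, 3], [-3, 7, -8], [4, 1, 2]])
print(C.nullspace())   # []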


Remark (Non-homogeneous systems):

Whereas the solution set of every homogeneous system of m equations in n unknowns is a subspace of Kⁿ (Rⁿ), it is never true that the solution set of a non-homogeneous system of m equations in n unknowns is a subspace of Kⁿ (Rⁿ). There are two possible scenarios;

i. The system may not have any solutions at all.

ii. If there are solutions, then the solution set will not be closed either under addition or under scalar multiplication.

PRACTICE:

1) Show that the solutions vectors of a consistent non – homogeneous system


of ‗m‘ linear equations in ‗n‘ unknowns do not form a subspace of

2) Determine whether the solution space of the system is a line through


the origin, a plane through the origin, or the origin only. If it is a plane, find
an equation for it. If it is a line, find parametric equation for it.

i. [ ]

ii. [ ]

iii. [ ]

iv. [ ]


Linear Dependence and Independence

Let V be a vector space over a field K. The following defines the notion of linear
dependence and independence of vectors over K. (One usually suppresses
mentioning K when the field is understood.) This concept plays an essential role in
the theory of linear algebra and in mathematics in general.

Definition: We say that the vectors u₁, u₂, …, uₘ in V are linearly dependent if there exist scalars a₁, a₂, …, aₘ in K, not all of them 0, such that

a₁u₁ + a₂u₂ + ⋯ + aₘuₘ = 0

On the other hand, we say that the vectors u₁, u₂, …, uₘ in V are linearly independent if the only scalars a₁, a₂, …, aₘ in K satisfying a₁u₁ + a₂u₂ + ⋯ + aₘuₘ = 0 are a₁ = a₂ = ⋯ = aₘ = 0.

Another definition: If S = {v₁, v₂, …, vᵣ} is a set of two or more vectors in a vector space V, then S is said to be a linearly independent set if no vector in S can be expressed as a linear combination of the others.

A set that is not linearly independent is said to be linearly dependent.

Remark:

 A set S = {u₁, u₂, …, uₘ} of vectors in V is linearly dependent or independent according to whether the vectors u₁, u₂, …, uₘ are linearly dependent or independent.
 An infinite set S of vectors is linearly dependent or independent according to whether there do or do not exist vectors u₁, u₂, …, uₖ in S that are linearly dependent.
 Warning: The set S = {u₁, u₂, …, uₘ} above represents a list or, in other words, a finite sequence of vectors where the vectors are ordered and repetition is permitted.
 Suppose 0 is one of the vectors u₁, u₂, …, uₘ, say u₁ = 0. Then the vectors must be linearly dependent, because we have the following linear combination where the coefficient of u₁ is 1 ≠ 0:
1·u₁ + 0u₂ + ⋯ + 0uₘ = 1·0 + 0 + ⋯ + 0 = 0

 Suppose v is a nonzero vector. Then v, by itself, is linearly independent, because kv = 0, v ≠ 0 implies k = 0.
This implies a single non-zero vector is always linearly independent.
 Suppose two of the vectors u₁, u₂, …, uₘ are equal, or one is a scalar multiple of the other, say u₁ = ku₂. Then the vectors must be linearly dependent, because we have the following linear combination where the coefficient of u₁ is 1 ≠ 0:
u₁ − ku₂ + 0u₃ + ⋯ + 0uₘ = 0
 Two vectors v₁ and v₂ are linearly dependent if and only if one of them is a multiple of the other.
 If any two vectors out of u₁, …, uₘ are equal, say u₁ = u₂, then the vectors are linearly dependent, because
u₁ − u₂ + 0u₃ + ⋯ + 0uₘ = 0 (coefficients 1 and −1, not all zero)
 If the set {u₁, …, uₘ} is linearly independent, then any rearrangement of the vectors is also linearly independent.
 If a set S of vectors is linearly independent, then any subset of S is linearly independent. Alternatively, if S contains a linearly dependent subset, then S is linearly dependent.
 A set S = {v₁, v₂, …, vᵣ} in a vector space V is said to be a linearly independent set iff the only coefficients satisfying the vector equation k₁v₁ + k₂v₂ + ⋯ + kᵣvᵣ = 0 are k₁ = 0, k₂ = 0, …, kᵣ = 0.
 Let V be a vector space over a field F and S = {v₁, …, vₙ} a set of vectors in V. If S is linearly independent, then any subset of S is also linearly independent.
 Let V be a vector space over a field F and S = {v₁, …, vₙ} a set of vectors in V. If a subset of S is linearly dependent, then S itself is linearly dependent (any superset of a linearly dependent set is linearly dependent).
 A finite set that contains 0 is linearly dependent.
 A set with exactly one vector is linearly independent iff that vector is not 0.
 A set with exactly two vectors is linearly independent iff neither vector is a scalar multiple of the other.
 If {v₁, v₂, …, vₘ} is a linearly independent set of vectors, then so is every non-empty subset, such as {v₁}, {v₂}, {v₁, v₂}, {v₁, v₃}, …, and {v₁, v₂, …, vₘ₋₁}.


Theorem:

A set S = {v₁, v₂, …, vᵣ} in a vector space V is a linearly independent set iff the only coefficients satisfying the vector equation

k₁v₁ + k₂v₂ + ⋯ + kᵣvᵣ = 0

are k₁ = 0, k₂ = 0, …, kᵣ = 0.

Proof:

Suppose that S = {v₁, v₂, …, vᵣ} is linearly independent. Then we will show that if the equation k₁v₁ + k₂v₂ + ⋯ + kᵣvᵣ = 0 could be satisfied with coefficients that are not all zero, then at least one of the vectors in S must be expressible as a linear combination of the others, thereby contradicting the assumption of linear independence.

To be specific, suppose that k₁ ≠ 0. Then we can rewrite the above equation as

v₁ = (−k₂/k₁)v₂ + ⋯ + (−kᵣ/k₁)vᵣ

which expresses v₁ as a linear combination of the other vectors in S.

Conversely: We must show that if the only coefficients satisfying

k₁v₁ + k₂v₂ + ⋯ + kᵣvᵣ = 0 are k₁ = 0, k₂ = 0, …, kᵣ = 0, then the vectors in S must be linearly independent. But if this were true of the coefficients and the vectors were not linearly independent, then at least one of them would be expressible as a linear combination of the others, say

v₁ = c₂v₂ + ⋯ + cᵣvᵣ, i.e. (1)v₁ + (−c₂)v₂ + ⋯ + (−cᵣ)vᵣ = 0

But this contradicts our assumption that k₁v₁ + k₂v₂ + ⋯ + kᵣvᵣ = 0 can only be satisfied by coefficients that are all zero (here the coefficient of v₁ is 1).

Thus the vectors in S must be linearly independent.

Theorem (Statement only): Suppose {v₁, v₂, …, vₙ} spans V, and suppose {w₁, w₂, …, wₘ} is linearly independent. Then m ≤ n, and V is spanned by a set of the form
{w₁, w₂, …, wₘ, vᵢ₁, vᵢ₂, …, vᵢ₍ₙ₋ₘ₎}

Thus, in particular, n + 1 or more vectors in V are linearly dependent.


Theorem:

Let V be a vector space over a field F and S = {v₁, v₂, …, vₙ} a set of vectors in V. If S is linearly independent, then any subset of S is also linearly independent.

Proof:

Here S = {v₁, v₂, …, vₙ}, and let T = {v₁, v₂, …, vₘ}, m ≤ n, be a subset of S.

Let a₁v₁ + a₂v₂ + ⋯ + aₘvₘ = 0, where a₁, …, aₘ are scalars.

We may write a₁v₁ + ⋯ + aₘvₘ + 0vₘ₊₁ + ⋯ + 0vₙ = 0

But S = {v₁, …, vₙ} is linearly independent, so a₁ = a₂ = ⋯ = aₘ = 0.

Hence T = {v₁, …, vₘ} is linearly independent.

Theorem:

Let V be a vector space over a field F and S = {v₁, v₂, …, vₘ} a linearly dependent set of vectors in V. Then any set in V containing S (any superset of S) is also linearly dependent.

Proof:

Here S = {v₁, v₂, …, vₘ}, and let T = {v₁, …, vₘ, vₘ₊₁, …, vₙ} be a set containing S.

Since S = {v₁, …, vₘ} is linearly dependent,

a₁v₁ + a₂v₂ + ⋯ + aₘvₘ = 0, where aᵢ ≠ 0 for some i

Now a₁v₁ + ⋯ + aₘvₘ + 0vₘ₊₁ + ⋯ + 0vₙ = 0, where aᵢ ≠ 0 for some i

Then T is linearly dependent.


Theorem:

A set S = {v₁, v₂, …, vₙ} of n vectors (n ≥ 2) in a vector space V is linearly dependent iff at least one of the vectors in S is a linear combination of the remaining vectors of the set.

Proof: Suppose the set S = {v₁, v₂, …, vₙ} is linearly dependent. Then there exist scalars a₁, a₂, …, aₙ, at least one of them, say aₖ, non-zero, such that

a₁v₁ + a₂v₂ + ⋯ + aₖvₖ + ⋯ + aₙvₙ = 0

Or aₖvₖ = −a₁v₁ − ⋯ − aₖ₋₁vₖ₋₁ − aₖ₊₁vₖ₊₁ − ⋯ − aₙvₙ

Or vₖ = (−a₁/aₖ)v₁ + ⋯ + (−aₖ₋₁/aₖ)vₖ₋₁ + (−aₖ₊₁/aₖ)vₖ₊₁ + ⋯ + (−aₙ/aₖ)vₙ

which shows that vₖ is a linear combination of the remaining vectors of the set.

Conversely:

Suppose that some vector vₖ of the given set is a linear combination of the remaining vectors of the set, i.e.

vₖ = b₁v₁ + ⋯ + bₖ₋₁vₖ₋₁ + bₖ₊₁vₖ₊₁ + ⋯ + bₙvₙ

Then the above equation can be written as

b₁v₁ + ⋯ + bₖ₋₁vₖ₋₁ + (−1)vₖ + bₖ₊₁vₖ₊₁ + ⋯ + bₙvₙ = 0

Here there is at least one coefficient, namely −1 of vₖ, which is non-zero, and so {v₁, v₂, …, vₙ} is linearly dependent.


Theorem:

A set S = {v₁, v₂, …, vₙ} in a vector space V is linearly dependent iff some one of the vectors, say vₖ, is a linear combination of the vectors preceding it.

Proof: Suppose the set S = {v₁, v₂, …, vₙ} is linearly dependent. Then there exist scalars a₁, a₂, …, aₙ, at least one of them non-zero, such that

a₁v₁ + a₂v₂ + ⋯ + aₙvₙ = 0 ………..(i)

Let aₖ be the last non-zero scalar in (i); then the terms aₖ₊₁vₖ₊₁, …, aₙvₙ are all zeros.

So the equation (i) becomes

a₁v₁ + a₂v₂ + ⋯ + aₖvₖ = 0, where aₖ ≠ 0

Or vₖ = (−a₁/aₖ)v₁ + (−a₂/aₖ)v₂ + ⋯ + (−aₖ₋₁/aₖ)vₖ₋₁

which shows that vₖ is a linear combination of the vectors preceding it.

Conversely:

Suppose that in S = {v₁, v₂, …, vₙ}, some one of the vectors, say vₖ, is a linear combination of the vectors preceding it:

vₖ = b₁v₁ + b₂v₂ + ⋯ + bₖ₋₁vₖ₋₁

Then the above equation can be written as

b₁v₁ + b₂v₂ + ⋯ + bₖ₋₁vₖ₋₁ + (−1)vₖ + 0vₖ₊₁ + ⋯ + 0vₙ = 0

Here there is at least one coefficient, namely −1 of vₖ, which is non-zero, and so S = {v₁, v₂, …, vₙ} is linearly dependent.


Theorem:

Let S = {v₁, v₂, …, vᵣ} be a set of vectors in Rⁿ. If r > n, then S is linearly dependent.

Proof: Suppose that

v₁ = (v₁₁, v₁₂, …, v₁ₙ)

v₂ = (v₂₁, v₂₂, …, v₂ₙ)

⋮

vᵣ = (vᵣ₁, vᵣ₂, …, vᵣₙ)

and consider the equation k₁v₁ + k₂v₂ + ⋯ + kᵣvᵣ = 0.

If we express both sides of this equation in terms of components and then equate the corresponding components, we obtain the system

v₁₁k₁ + v₂₁k₂ + ⋯ + vᵣ₁kᵣ = 0
v₁₂k₁ + v₂₂k₂ + ⋯ + vᵣ₂kᵣ = 0
⋮
v₁ₙk₁ + v₂ₙk₂ + ⋯ + vᵣₙkᵣ = 0

This is a homogeneous system of n equations in the r unknowns k₁, …, kᵣ, and r > n.

It follows from the theorem "a homogeneous linear system with more unknowns than equations has infinitely many solutions" that the system has non-trivial solutions. Therefore S = {v₁, v₂, …, vᵣ} is linearly dependent.
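The theorem is easy to illustrate numerically: any four vectors in R³ give a homogeneous system whose 3 × 4 coefficient matrix has rank at most 3, so a non-trivial solution always exists. A sketch (not from the original text; numpy assumed):

import numpy as np

rng = np.random.default_rng(0)
vectors = rng.integers(-5, 6, size=(4, 3))   # four arbitrary vectors in R^3 (rows)

# columns of A are the vectors; rank <= 3 < 4 unknowns, so dependence is forced
A = vectors.T
print(np.linalg.matrix_rank(A))   # at most 3: k1 v1 + ... + k4 v4 = 0 has a non-trivial solution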


Linear independence of the Standard unit vectors in Rⁿ

In Rⁿ the most basic linearly independent set is the set of standard unit vectors

e₁ = (1, 0, 0, …, 0), e₂ = (0, 1, 0, …, 0), ………., eₙ = (0, 0, 0, …, 1)

Now we consider the standard unit vectors in R³:

î = (1, 0, 0), ĵ = (0, 1, 0), k̂ = (0, 0, 1)

To prove linear independence we must show that the only coefficients satisfying the vector equation k₁î + k₂ĵ + k₃k̂ = 0 are k₁ = 0, k₂ = 0, k₃ = 0.
But this becomes evident by writing the equation in its component form:
(k₁, k₂, k₃) = (0, 0, 0)

Linear independence in R³

Determine whether the vectors v₁ = (1, −2, 3), v₂ = (5, 6, −1), v₃ = (3, 2, 1) are linearly independent or linearly dependent in R³.

Solution: Consider k₁v₁ + k₂v₂ + k₃v₃ = 0.

Rewriting in component form: k₁(1, −2, 3) + k₂(5, 6, −1) + k₃(3, 2, 1) = (0, 0, 0)

Equating corresponding components on the two sides yields the homogeneous linear system

k₁ + 5k₂ + 3k₃ = 0
−2k₁ + 6k₂ + 2k₃ = 0
3k₁ − k₂ + k₃ = 0

After solving the system we get k₁ = −½t, k₂ = −½t, k₃ = t.

This shows that the system has non-trivial solutions and hence that the vectors are linearly dependent.
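The same conclusion follows from the rank (or determinant) of the matrix whose columns are v₁, v₂, v₃; a sketch (not from the original text; numpy assumed, using the vectors as reconstructed above):

import numpy as np

A = np.array([[1, 5, 3],
              [-2, 6, 2],
              [3, -1, 1]], dtype=float)   # columns are v1, v2, v3

print(round(np.linalg.det(A), 10))   # 0.0: the columns are linearly dependent
print(np.linalg.matrix_rank(A))      # 2: indeed v3 = (1/2)v1 + (1/2)v2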


Linear independence in R⁴

Determine whether the vectors

v₁ = (1, 2, 2, −1), v₂ = (4, 9, 9, −4), v₃ = (5, 8, 9, −5) are linearly independent or linearly dependent in R⁴.

Solution: Consider k₁v₁ + k₂v₂ + k₃v₃ = 0.

Rewriting in component form:

k₁(1, 2, 2, −1) + k₂(4, 9, 9, −4) + k₃(5, 8, 9, −5) = (0, 0, 0, 0)

Equating corresponding components on the two sides yields the homogeneous linear system

k₁ + 4k₂ + 5k₃ = 0
2k₁ + 9k₂ + 8k₃ = 0
2k₁ + 9k₂ + 9k₃ = 0
−k₁ − 4k₂ − 5k₃ = 0

After solving the system we get k₁ = 0, k₂ = 0, k₃ = 0.

This shows that the system has only the trivial solution and hence that the vectors are linearly independent.

Example:

Let u = (1, 1, 0), v = (1, 3, 2), w = (4, 9, 5). Then u, v, w are linearly dependent, because
3u + 5v − 2w = 3(1, 1, 0) + 5(1, 3, 2) − 2(4, 9, 5) = (0, 0, 0)

Example: We show that the vectors u = (6, 2, 3, 4), v = (0, 5, −3, 1), w = (0, 0, 7, −2) are linearly independent. We form the vector equation xu + yv + zw = 0, where x, y, z are unknown scalars. This yields

x[6; 2; 3; 4] + y[0; 5; −3; 1] + z[0; 0; 7; −2] = [0; 0; 0; 0]


or 6x = 0, 2x + 5y = 0, 3x − 3y + 7z = 0, 4x + y − 2z = 0

Back-substitution yields x = 0, then y = 0, then z = 0. We have shown that

xu + yv + zw = 0 implies x = 0, y = 0, z = 0

Accordingly, u, v, w are linearly independent.

Linear Dependence in R³

Linear dependence in the vector space R³ can be described geometrically as follows:

(a) Any two vectors u and v in R³ are linearly dependent (not independent) if and only if they lie on the same line through the origin O, as shown in Fig.

(b) Any three vectors u, v, w in R³ are linearly dependent (not independent) if and only if they lie in the same plane through the origin O, as shown in Fig.


Practice:

1) In each part, determine whether the vectors are linearly independent or are
linearly dependent in

a) ( )( )( )
b) ( )( )( )( )

2) In each part, determine whether the vectors are linearly independent or are
linearly dependent in

a) ( )( )( )
b) ( )( )( )( )
c) ( )( )( )
d) ( )( )( )

3) Express each vector in ( )( )( ) as a linear


combination of other.
4) For which real value of do the following vectors form a linearly dependent
set in ?
. / . / . /

Or determine so that the above vectors are linearly dependent in .

5) Show that the vectors ( )( ) in C2 are linearly dependent


over C but linearly independent over R.
6) Show that the vectors ( √ √ )( √ ) in R2 are linearly
dependent over R but linearly independent over Q.
7) Show that the vectors ( )( ) in C2 are linearly dependent over
the complex field C but linearly independent over the real field R.
8) Suppose that are linearly independent vectors.
Prove that is linearly independent.
9) Show that for any vectors in a vector space V, the vectors
form a linearly dependent set.
10) Under what conditions is a set with one vector linearly independent?


Linear independence for Polynomials

Consider the vector space Pₙ consisting of all polynomials of degree ≤ n. Show that the polynomials 1, x, x², …, xⁿ form a linearly independent set in Pₙ.

Solution: Let p₀ = 1, p₁ = x, p₂ = x², …, pₙ = xⁿ

and consider a₀p₀ + a₁p₁ + a₂p₂ + ⋯ + aₙpₙ = 0

Equivalently, a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ = 0 for all x in (−∞, ∞)

Since we know that a non-zero polynomial of degree n has at most n distinct roots, all coefficients in the above expression must be zero, for otherwise the left side of the equation would be a non-zero polynomial with infinitely many roots. Thus the above equation has only the trivial solution a₀ = a₁ = ⋯ = aₙ = 0.

This implies {1, x, x², …, xⁿ} is a linearly independent set.

Example:

Determine whether the polynomials

p₁ = 1 − x, p₂ = 5 + 3x − 2x², p₃ = 1 + 3x − x²

are linearly dependent or linearly independent in P₂.

Solution: Consider k₁p₁ + k₂p₂ + k₃p₃ = 0.

Equivalently, k₁(1 − x) + k₂(5 + 3x − 2x²) + k₃(1 + 3x − x²) = 0

(k₁ + 5k₂ + k₃) + (−k₁ + 3k₂ + 3k₃)x + (−2k₂ − k₃)x² = 0

Since this equation must be satisfied by all x in (−∞, ∞), each coefficient must be zero. Thus the linear independence or linear dependence of the given polynomials hinges on whether the following linear system has a non-trivial solution;

k₁ + 5k₂ + k₃ = 0
−k₁ + 3k₂ + 3k₃ = 0
−2k₂ − k₃ = 0

After solving we will get k₁ = 3t, k₂ = −t, k₃ = 2t, a non-trivial solution for t ≠ 0 (for instance, 3p₁ − p₂ + 2p₃ = 0). Hence the given polynomials form a linearly dependent set.
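For polynomials, linear dependence reduces to dependence of their coefficient vectors; a sketch (not from the original text; numpy assumed, coefficients listed in the order 1, x, x² for the example above):

import numpy as np

# p1 = 1 - x, p2 = 5 + 3x - 2x^2, p3 = 1 + 3x - x^2 as coefficient columns
A = np.array([[1, 5, 1],
              [-1, 3, 3],
              [0, -2, -1]], dtype=float)

print(round(np.linalg.det(A), 10))  # 0.0: the polynomials are linearly dependent
# indeed 3*p1 - p2 + 2*p3 = 0, matching the non-trivial solution found above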


Example: Determine whether the polynomials

Are linearly dependent or linearly independent in

Solution: Consider

( ) ( ) ( )

( ) ( ) ( )

( )

……………(i)

……………(ii)

……………(iii)

……………(iv)

After solving we will get . As follows;

Implies

Putting these values in (iii) and (iv) we see equations are not satisfied. They are
satisfied only when .

Hence given polynomials are linearly independent.

Remember: In P₂, every set with more than three vectors is linearly dependent (because dim P₂ = 3).

Practice: In each part, determine whether the vectors are linearly independent
or are linearly dependent in

a)
b)


Linear independence of functions:

Example: Let V be the real vector space of all functions defined from R into R. Determine whether the given vectors f(t) = sin t, g(t) = eᵗ, h(t) = t² are linearly independent or linearly dependent in V.

Solution: Consider xf + yg + zh = 0, where x, y, z are unknown scalars.

This implies x sin t + y eᵗ + z t² = 0 for every value of t.

Thus in this equation we choose appropriate values of t to easily get x = 0, y = 0, z = 0. For example:

i. Substitute t = 0 to obtain x(0) + y(1) + z(0) = 0, or y = 0.
ii. Substitute t = π to obtain x(0) + y(eᵖⁱ) + z(π²) = 0; with y = 0 this gives z = 0.
iii. Substitute t = π/2 to obtain x(1) + y(e^(π/2)) + z(π²/4) = 0; with y = z = 0 this gives x = 0.

We have shown xf + yg + zh = 0 implies x = 0, y = 0, z = 0.

Thus the given vectors are linearly independent.

Wronskian of Functions:

If f₁ = f₁(x), f₂ = f₂(x), …, fₙ = fₙ(x) are functions that are n − 1 times differentiable on the interval (−∞, ∞), then the determinant

W(x) = | f₁(x)        f₂(x)        ⋯  fₙ(x)
        f₁′(x)       f₂′(x)       ⋯  fₙ′(x)
        ⋮             ⋮                ⋮
        f₁⁽ⁿ⁻¹⁾(x)  f₂⁽ⁿ⁻¹⁾(x)  ⋯  fₙ⁽ⁿ⁻¹⁾(x) |

is called the Wronskian of f₁(x), f₂(x), …, fₙ(x).

Remember:

Sometimes linear dependence of functions can be deduced from known identities. However, it is relatively rare that linear independence or dependence of functions can be ascertained by algebraic or trigonometric methods. To make matters worse, there is no general method for doing that either.


Theorem:

If the functions f₁, f₂, …, fₙ have n − 1 continuous derivatives on the interval (−∞, ∞), and if the Wronskian of these functions is not identically zero on (−∞, ∞), then these functions form a linearly independent set of vectors in C⁽ⁿ⁻¹⁾(−∞, ∞). The converse of this theorem is false: functions can be linearly independent even though their Wronskian is identically zero.

Example:

Use the Wronskian to show that f₁ = x and f₂ = sin x are linearly independent vectors in C∞(−∞, ∞).

Solution:

The Wronskian of the given functions is as follows;

W(x) = | x  sin x
        1  cos x | = x cos x − sin x

Consider W(π/2) = (π/2) cos(π/2) − sin(π/2) = −1 ≠ 0.

The Wronskian is therefore not identically zero on the interval (−∞, ∞); thus, the functions are linearly independent.

Example:

Use the Wronskian to show that f₁ = 1, f₂ = eˣ, f₃ = e²ˣ are linearly independent vectors in C∞(−∞, ∞).

Solution:

The Wronskian of the given functions is as follows;

W(x) = | 1  eˣ   e²ˣ
        0  eˣ   2e²ˣ
        0  eˣ   4e²ˣ | = 2e³ˣ

This function is not identically zero on the interval (−∞, ∞) (indeed, it is never zero); thus, the functions are linearly independent.
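Wronskians are convenient to compute symbolically; a sketch (not from the original text; the sympy library is assumed), building the matrix of derivatives directly from the definition above:

from sympy import symbols, sin, exp, Matrix, simplify, S

x = symbols('x')

def wronskian(funcs):
    # row i of the matrix holds the i-th derivatives of the functions
    n = len(funcs)
    M = Matrix(n, n, lambda i, j: funcs[j].diff(x, i))
    return simplify(M.det())

print(wronskian([x, sin(x)]))               # x*cos(x) - sin(x)
print(wronskian([S(1), exp(x), exp(2*x)]))  # 2*exp(3*x)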


Linear Dependence and Echelon Matrices

Consider the following echelon matrix A, whose pivots have been circled:
A = [0 (2) 3 4 5 6 7]
    [0  0 (4) 3 2 3 4]
    [0  0  0 0 (7) 8 9]
    [0  0  0 0  0 0 (6)]

(the entries shown are illustrative; the pivots, marked here in parentheses, lie in columns 2, 3, 5, and 7 of rows R1, R2, R3, R4 respectively)

Observe that the rows R2, R3, R4 have 0‘s in the second column below the nonzero
pivot in R1, and hence any linear combination of R2, R3, R4 must have 0 as its
second entry. Thus, R1 cannot be a linear combination of the rows below it.
Similarly, the rows R3 and R4 have 0‘s in the third column below the nonzero pivot
in R2, and hence R2 cannot be a linear combination of the rows below it. Finally,
R3 cannot be a multiple of R4, because R4 has a 0 in the fifth column below the
nonzero pivot in R3. Viewing the nonzero rows from the bottom up, R4, R3, R2, R1,
no row is a linear combination of the preceding rows. Thus, the rows are linearly
independent. The argument used with the above echelon matrix A can be used for
the nonzero rows of any echelon matrix. Thus, we have the following very useful
result.

THEOREM:

The nonzero rows of a matrix in echelon form are linearly independent.


Practice:

1) Let V be the real vector space of all functions defined from R into R. Determine whether the given vectors are linearly independent or linearly dependent in V.
i.
ii.
iii.
iv.
v.
2) By using appropriate identities, where required, determine which of the
following sets of vectors in ( ) are linearly dependent.
i.
ii.
iii. ( )
iv.

3) Use the Wronskian to show that given functions are linearly independent
vectors in ( ).
i.
ii.
iii.
iv.
v.
vi.

4) Using the technique of casting out vectors which are linear combination of
others, find a linearly independent subset of the given set spanning the same
subspace;
i. *( )( )( )( )+ in R3
ii. * + in the space of all functions from R to R
iii. * + in the space of all polynomials


Basis

A set S = {v₁, v₂, …, vₙ} of vectors is a basis of a finite dimensional vector space V if it has the following two properties:

i. S is linearly independent.
ii. S spans V.

Or

A set S = {v₁, v₂, …, vₙ} of vectors is a basis of a finite dimensional vector space V if every v ∈ V can be written uniquely as a linear combination of the basis vectors.

Examples of Bases

This subsection presents important examples of bases of some of the main vector spaces appearing in this text.

Usual or the Standard basis for Rⁿ

In Rⁿ the most basic linearly independent set is the set of standard unit vectors

e₁ = (1, 0, 0, …, 0), e₂ = (0, 1, 0, …, 0), ………., eₙ = (0, 0, 0, …, 1)

Thus they form a basis for Rⁿ that we call the Standard basis for Rⁿ.

For example, any vector v = (a₁, a₂, …, aₙ) in Rⁿ can be written as a linear combination of the above vectors, i.e.

v = a₁e₁ + a₂e₂ + ⋯ + aₙeₙ

In particular we consider the standard unit vectors in R³:

î = (1, 0, 0), ĵ = (0, 1, 0), k̂ = (0, 0, 1)

and we call {î, ĵ, k̂} the Standard basis for R³.

Remark:

 The number of elements in a basis of a vector space V over F is called the dimension of V. It is denoted by dim(V).
 For an n-dimensional vector space V, every set S = {v₁, v₂, …, vₙ} of n linearly independent vectors forms a basis for V.


Example: Show that the vectors ( ) ( ) ( )


form a basis for R3

Solution: We must show that these vectors are linearly independent and span R3
To prove linear independence we must show that the vector equation
has only the trivial solution.

Then ( ) ( ) ( )

By equating corresponding components on the two sides, we get a linear system;

…………..(i)

…………..(ii)

…………..(iii)

From (ii) and (iii)

Putting these values in equation (i), we see equation (i) is not satisfied. It is
satisfied only when k = 0. i.e. Hence given vectors
( ) ( ) ( ) are linearly independent.

Since the dimension of R³ is 3 and the number of linearly independent vectors found above is also 3, the given vectors form a basis for R³.

Practice:

1) Show that the given vectors may or may not form a basis for R2 or R3.
i. ( ) ( ) ( )
ii. ( ) ( )
iii. ( ) ( ) ( )
iv. ( ) ( ) ( )
v. ( ) ( ) ( )
2) In words explain why the matrices ( ) ( ) ( ) and ( )( )
2 3
are not basis for R and R respectively.


Usual or the Standard basis for Pₙ(t)

Consider the vector space Pₙ(t) consisting of all polynomials of degree n or less. Then the set S = {1, t, t², …, tⁿ} is a basis for Pₙ(t).

For this we must show that the given polynomials in S are linearly independent and span Pₙ(t).

Clearly every polynomial p = a₀ + a₁t + a₂t² + ⋯ + aₙtⁿ in Pₙ(t) can be expressed as a linear combination of the polynomials 1, t, t², …, tⁿ. Thus, these powers of t (where 1 = t⁰) form a spanning set for Pₙ(t). We can denote this by writing Pₙ(t) = span{1, t, t², …, tⁿ}

Now we have to show that the polynomials 1, t, t², …, tⁿ form a linearly independent set in Pₙ(t).

For this let a₀·1 + a₁t + a₂t² + ⋯ + aₙtⁿ = 0

and consider this equation for all t in (−∞, ∞).

Since we know that a non-zero polynomial of degree n has at most n distinct roots, all coefficients in the above expression must be zero, for otherwise the left side of the equation would be a non-zero polynomial with infinitely many roots. Thus the above equation has only the trivial solution a₀ = a₁ = ⋯ = aₙ = 0. This implies {1, t, t², …, tⁿ} is a linearly independent set.

Thus the set S = {1, t, t², …, tⁿ} is a basis for Pₙ(t).

Practice:

1) Show that the given polynomials may or may not form a basis for or .
i.
ii. Hermite Polynomials
iii. Laguerre Polynomials
2) Show that * + is not a basis. Find a basis for vector space
V spanned by these polynomials.
3) In words explain why the polynomials are not basis for


Usual or the Standard basis for M₂,₂

Show that the following matrices form a basis of the vector space M₂,₂ of all 2 × 2 matrices over K:

E₁₁ = [1 0; 0 0], E₁₂ = [0 1; 0 0], E₂₁ = [0 0; 1 0], E₂₂ = [0 0; 0 1]

Solution:

We must show that the given matrices are linearly independent and span M₂,₂.

To prove linear independence we must show that the equation aE₁₁ + bE₁₂ + cE₂₁ + dE₂₂ = 0 has only the trivial solution, where 0 is the 2 × 2 zero matrix.

Consider aE₁₁ + bE₁₂ + cE₂₁ + dE₂₂ = 0

a[1 0; 0 0] + b[0 1; 0 0] + c[0 0; 1 0] + d[0 0; 0 1] = [0 0; 0 0]

[a b; c d] = [0 0; 0 0]

The above equation has only the trivial solution, i.e. a = b = c = d = 0.

The given matrices are linearly independent.

To prove the matrices span M₂,₂,

consider any A = [a b; c d] in M₂,₂. Then

[a b; c d] = a[1 0; 0 0] + b[0 1; 0 0] + c[0 0; 1 0] + d[0 0; 0 1]

i.e. A = aE₁₁ + bE₁₂ + cE₂₁ + dE₂₂, so the given matrices span M₂,₂.

This shows that E₁₁ = [1 0; 0 0], E₁₂ = [0 1; 0 0], E₂₁ = [0 0; 1 0], E₂₂ = [0 0; 0 1]

form a basis of the vector space M₂,₂ of all 2 × 2 matrices over K.


Remark:

 Vector space M₂,₃ of all 2 × 3 matrices: The following six matrices form a basis of the vector space M₂,₃ of all 2 × 3 matrices over K:

[1 0 0; 0 0 0], [0 1 0; 0 0 0], [0 0 1; 0 0 0],
[0 0 0; 1 0 0], [0 0 0; 0 1 0], [0 0 0; 0 0 1]

 More generally, in the vector space M_r,s of all r × s matrices, let E_ij be the matrix with ij-entry 1 and 0's elsewhere. Then all such matrices form a basis of M_r,s, called the usual or standard basis of M_r,s.

Practice:

1) Show that the given matrices may or may not form a basis for .
i. 0 1 0 1 0 1 0 1

ii. 0 1 0 1 0 1 0 1

iii. 0 1 0 1 0 1 0 1

2) In words explain why the matrices 0 1 0 1 0 1 0 1 are


not basis for


Theorem (Uniqueness of Basis Representation):

If S = {v₁, v₂, …, vₙ} is a basis for a vector space V, then every vector v in V can be expressed in the form v = c₁v₁ + c₂v₂ + ⋯ + cₙvₙ in exactly one way.

Or: let S = {v₁, v₂, …, vₙ} in V be linearly independent; then each v of span(S) is uniquely expressible in terms of S.

Proof:

Since S spans V, it follows from the definition of a spanning set that every vector in V is expressible as a linear combination of the vectors in S. To see that there is only one way to express a vector as a linear combination of the vectors in S, suppose that some vector v can be written as

v = c₁v₁ + c₂v₂ + ⋯ + cₙvₙ ……………..(i)

and also as

v = k₁v₁ + k₂v₂ + ⋯ + kₙvₙ ……………..(ii)

Subtracting the second equation from the first;

0 = (c₁ − k₁)v₁ + (c₂ − k₂)v₂ + ⋯ + (cₙ − kₙ)vₙ

Since the right side of the equation is a linear combination of vectors in S, and S is linearly independent, then

c₁ − k₁ = 0, c₂ − k₂ = 0, …, cₙ − kₙ = 0

That is, c₁ = k₁, c₂ = k₂, …, cₙ = kₙ.

Thus the two expressions for v are the same (unique).
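Concretely, finding the unique coefficients means solving a linear system whose columns are the basis vectors; a sketch (not from the original text; numpy assumed), reusing the spanning set w₁ = (1, 1, 1), w₂ = (1, 1, 0), w₃ = (1, 0, 0) from the earlier example:

import numpy as np

# columns of B are the basis vectors; the coordinates of v solve B c = v uniquely
B = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
v = np.array([5, -6, 2], dtype=float)

c = np.linalg.solve(B, v)
print(c)    # [ 2. -8. 11.] -> v = 2*w1 - 8*w2 + 11*w3, and in no other way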


Theorem:

Any finite dimensional vector space contains a basis.

Proof:

Let V be a finite dimensional vector space; then V is the linear span of some finite set. Let S = {v₁, v₂, …, vₙ} be a finite spanning set of V. In case v₁, v₂, …, vₙ are linearly independent, they form a basis of V and the proof is complete.

Suppose v₁, v₂, …, vₙ are not linearly independent, i.e. they are linearly dependent, so one of the vectors, say vₖ, is a linear combination of the preceding vectors. We drop this vector from the set and obtain a set of n − 1 vectors v₁, …, vₖ₋₁, vₖ₊₁, …, vₙ. Then clearly any linear combination of v₁, …, vₙ is also a linear combination of the remaining vectors, so S₁ = S − {vₖ} is also a spanning set for V. Continuing in this way, we arrive at a linearly independent spanning set {u₁, u₂, …, uₘ} such that m ≤ n, and so it forms a basis for V.

Thus every finite dimensional vector space contains a basis.

Theorem:

Let V be a vector space of finite dimension n. Then any n + 1 or more vectors in V are linearly dependent.

Proof:

Suppose B = {v₁, v₂, …, vₙ} is a basis of V. Because B spans V, then by the lemma

"Suppose {v₁, …, vₙ} spans V, and suppose {w₁, …, wₘ} is linearly independent. Then m ≤ n, and V is spanned by a set of the form {w₁, …, wₘ, vᵢ₁, …, vᵢ₍ₙ₋ₘ₎}"

any linearly independent set in V has at most n elements.

Thus, in particular, n + 1 or more vectors in V are linearly dependent.


Theorem:

Let V be a vector space of finite dimension n. Then any linearly independent set S = {u₁, u₂, …, uₙ} with n elements is a basis of V.

Proof:

Suppose B = {v₁, v₂, …, vₙ} is a basis of V. Then by the lemma

"Suppose {v₁, …, vₙ} spans V, and suppose {w₁, …, wₘ} is linearly independent. Then m ≤ n, and V is spanned by a set of the form {w₁, …, wₘ, vᵢ₁, …, vᵢ₍ₙ₋ₘ₎}"

elements from B can be adjoined to S to form a spanning set of V with n elements. Because S already has n elements, S itself is a spanning set of V.

Thus S is a basis of V.

Theorem:

Let V be a vector space of finite dimension n. Then any spanning set T = {u₁, u₂, …, uₙ} of V with n elements is a basis of V.

Proof:

Suppose B = {v₁, v₂, …, vₙ} is a basis of V, and suppose T = {u₁, u₂, …, uₙ} is linearly dependent. Then some uₖ is a linear combination of the preceding vectors. By the problem "if {u₁, …, uₙ} spans V, then for any w ∈ V the set {w, u₁, …, uₙ} is linearly dependent and spans V, and if uₖ is a linear combination of the preceding vectors, then the set without uₖ still spans V",

V is spanned by the vectors in T without uₖ, and there are n − 1 of them. By the lemma

"Suppose {v₁, …, vₙ} spans V, and suppose {w₁, …, wₘ} is linearly independent. Then m ≤ n, and V is spanned by a set of the form {w₁, …, wₘ, vᵢ₁, …, vᵢ₍ₙ₋ₘ₎}"

the independent set B cannot have more than n − 1 elements. This contradicts the fact that B has n elements.

Thus T is linearly independent, and hence T is a basis of V.


Theorem: Suppose S spans a vector space V. Then any maximal set of linearly independent vectors in S forms a basis of V.

Proof: Suppose {v₁, …, vₘ} is a maximal linearly independent subset of S, and suppose w ∈ S. Accordingly, {v₁, …, vₘ, w} is linearly dependent (by maximality). No vᵢ can be a linear combination of preceding vectors, hence w is a linear combination of the vᵢ. Thus w ∈ span(v₁, …, vₘ), and hence S ⊆ span(v₁, …, vₘ).

This leads to V = span(S) ⊆ span(v₁, …, vₘ) ⊆ V.

Thus {v₁, …, vₘ} spans V, and as it is linearly independent, it is a basis of V.

Theorem: Suppose S spans a vector space V, and suppose one deletes from S every vector that is a linear combination of preceding vectors in S. Then the remaining vectors form a basis of V.

Proof: The remaining vectors form a maximal linearly independent subset of S; hence, by the theorem "Suppose S spans a vector space V. Then any maximal set of linearly independent vectors in S forms a basis of V", it is a basis of V.

Theorem: Let V be a vector space of finite dimension and let S = {u₁, u₂, …, uᵣ} be a set of linearly independent vectors in V. Then S is part of a basis of V; that is, S may be extended to a basis of V.

Proof: Suppose B = {w₁, w₂, …, wₙ} is a basis of V. Then B spans V, and hence V is spanned by S ∪ B = {u₁, …, uᵣ, w₁, …, wₙ}. By the theorem
"Suppose S spans a vector space V. Then: any maximal set of linearly independent vectors in S forms a basis of V; and if one deletes from S every vector that is a linear combination of preceding vectors in S, then the remaining vectors form a basis of V",
we can delete from S ∪ B each vector that is a linear combination of preceding vectors to obtain a basis B′ for V. Because S is linearly independent, no uₖ is a linear combination of preceding vectors. Thus B′ contains every vector in S, and S is part of the basis B′ for V.


Examples:

(a) The following four vectors in R⁴ form a matrix in echelon form:

(1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)

Thus, the vectors are linearly independent, and, because dim R⁴ = 4, the four vectors form a basis of R⁴.

(b) The following n + 1 polynomials in Pₙ(t) are of increasing degree:

1, t − 1, (t − 1)², …, (t − 1)ⁿ

Therefore, no polynomial is a linear combination of preceding polynomials; hence, the polynomials are linearly independent. Furthermore, they form a basis of Pₙ(t), because dim Pₙ(t) = n + 1.

(c) Consider any four vectors in R³, say u₁, u₂, u₃, u₄.

By Theorem "Let V be a vector space of finite dimension n. Then any n + 1 or more vectors in V are linearly dependent", the four vectors must be linearly dependent, because they come from the three-dimensional vector space R³.


Dimension

The number of elements (vectors) in a basis of a vector space V over F is called the dimension of V. It is denoted by dim(V).

Engineers often use the term degrees of freedom as a synonym for dimension.

Remark:

 The vector space {0} is defined to have dimension 0. It is known as the zero vector space.
 The simplest of all vector spaces is the zero vector space V = {0}. This space is finite dimensional because it is spanned by the vector 0. Since {0} is not a linearly independent set, V = {0} has no basis. However, we will find it useful to define the empty set to be a basis for this vector space.
 Suppose a vector space V does not have a finite basis. Then V is said to be of infinite dimension or to be infinite-dimensional.
 dim(Rⁿ) = n (the standard basis has n vectors)
 Example: {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is the standard basis set of R³ and dim(R³) = 3.
 dim(Pₙ) = n + 1 (the standard basis has n + 1 vectors)
 dim(M_m,n) = mn (the standard basis has mn vectors)
 Rⁿ, Pₙ, M_m,n are finite dimensional vector spaces, while F(−∞, ∞), C(−∞, ∞), Cᵐ(−∞, ∞), C∞(−∞, ∞) are infinite dimensional vector spaces.
 dim[span{v₁, …, vᵣ}] = r when {v₁, …, vᵣ} is linearly independent; it means the dimension of the space spanned by a linearly independent set of vectors is equal to the number of vectors in that set.
 For two finite dimensional subspaces U and W of a vector space V over a field F we have dim(U + W) = dim(U) + dim(W) − dim(U ∩ W).
 For two finite dimensional subspaces U and W of a vector space V over a field F with U ∩ W = {0} and U + W = U ⊕ W, we have dim(U ⊕ W) = dim(U) + dim(W).


Keep in mind:

 Let V be a vector space such that one basis has m elements and another basis has n elements. Then m = n.
 Every basis of a finite dimensional vector space has the same number of elements (vectors).
 Let V be an n-dimensional vector space, and let S = {v₁, v₂, …, vₙ} be any basis. Then if a set in V has more than n vectors, it is linearly dependent.
 Let V be an n-dimensional vector space, and let S = {v₁, v₂, …, vₙ} be any basis. Then if a set in V has fewer than n vectors, it does not span V.
 (Plus Theorem) Let S be a non-empty set of vectors in a vector space V. If S is a linearly independent set, and if v is a vector in V that is outside of span(S), then the set S ∪ {v} that results by inserting v into S is still linearly independent.
 (Minus Theorem) Let S be a non-empty set of vectors in a vector space V. If v is a vector in S that is expressible as a linear combination of other vectors in S, and if S − {v} denotes the set obtained by removing v from S, then S and S − {v} span the same space; that is, span(S) = span(S − {v}).
 Let V be an n-dimensional vector space, and let S be a set in V with exactly n vectors. Then S is a basis for V if and only if S spans V or S is linearly independent.
 Let S be a finite set of vectors in a finite dimensional vector space V. If S spans V but is not a basis for V, then S can be reduced to a basis for V by removing appropriate vectors from S. (This theorem tells us that every spanning set for a subspace is either a basis for that subspace or has a basis as a subset.)
 Let S be a finite set of vectors in a finite dimensional vector space V. If S is a linearly independent set that is not already a basis for V, then S can be enlarged to a basis for V by inserting appropriate vectors into S. (This theorem tells us that every linearly independent set in a subspace is either a basis for that subspace or can be extended to a basis for it.)
 If W is a subspace of a finite dimensional vector space V, then W is finite dimensional, dim(W) ≤ dim(V), and W = V if and only if dim(W) = dim(V).


An infinite dimensional vector space

Show that P∞ is an infinite dimensional vector space, as it has no finite spanning set.

Solution:

If we consider an arbitrary finite spanning set, say S = {p₁, p₂, …, pᵣ}, then the degrees of the polynomials in S would have a maximum value, say n, and this in turn would imply that any linear combination of the polynomials in S would have degree at most n. Thus there would be no way to express the polynomial xⁿ⁺¹ as a linear combination of the polynomials in S, contradicting the fact that the vectors in S span P∞.

Example: Let W be a subspace of the real space R³. Note that dim R³ = 3. The following cases apply:

(a) If dim W = 0, then W = {0}, a point.
(b) If dim W = 1, then W is a line through the origin 0.
(c) If dim W = 2, then W is a plane through the origin 0.
(d) If dim W = 3, then W is the entire space R³.

Example: Find a basis and dimension of the subspace W of R³ where

W = {(a, b, c) : a + b + c = 0}

Solution: Note that W ≠ R³ because, for example, (1, 2, 3) ∉ W; thus dim(W) < 3. Note that u₁ = (1, 0, −1), u₂ = (0, 1, −1) are two independent vectors in W; thus dim(W) = 2, and so both vectors u₁ = (1, 0, −1), u₂ = (0, 1, −1) form a basis of W.

Example: Find a basis and dimension of the subspace W of R³ where

W = {(a, b, c) : a = b = c}

Solution: The vector u = (1, 1, 1) ∈ W, and any vector w ∈ W has the form
w = (k, k, k) = k(1, 1, 1) = ku. Hence u spans W, {u} is a basis, and dim(W) = 1.


Example:

Find a basis and dimension of the solution space of the homogeneous system

x₁ + 3x₂ − 2x₃ + 2x₅ = 0
2x₁ + 6x₂ − 5x₃ − 2x₄ + 4x₅ − 3x₆ = 0
5x₃ + 10x₄ + 15x₆ = 0
2x₁ + 6x₂ + 8x₄ + 4x₅ + 18x₆ = 0

Solution:

Using the Gauss–Jordan elimination method, the coefficient matrix

[1 3 −2 0 2 0; 2 6 −5 −2 4 −3; 0 0 5 10 0 15; 2 6 0 8 4 18]

can be converted into reduced row echelon form as

[1 3 0 4 2 0; 0 0 1 2 0 0; 0 0 0 0 0 1; 0 0 0 0 0 0]

Thus the corresponding system is

x₁ + 3x₂ + 4x₄ + 2x₅ = 0, x₃ + 2x₄ = 0, x₆ = 0

With free variables x₂ = r, x₄ = s, x₅ = t, these yield

x₁ = −3r − 4s − 2t, x₂ = r, x₃ = −2s, x₄ = s, x₅ = t, x₆ = 0

which can be written in vector form as

(x₁, x₂, x₃, x₄, x₅, x₆) = (−3r − 4s − 2t, r, −2s, s, t, 0)

or alternatively as

(x₁, x₂, x₃, x₄, x₅, x₆) = r(−3, 1, 0, 0, 0, 0) + s(−4, 0, −2, 1, 0, 0) + t(−2, 0, 0, 0, 1, 0)

This shows that the vectors

v₁ = (−3, 1, 0, 0, 0, 0), v₂ = (−4, 0, −2, 1, 0, 0), v₃ = (−2, 0, 0, 0, 1, 0)

span the solution space and are linearly independent (check!). Thus the solution space has dimension 3, and {v₁, v₂, v₃} is a basis.
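The basis just found can be verified mechanically with sympy (assumed available), using the coefficient matrix of the system as reconstructed above:

from sympy import Matrix

A = Matrix([[1, 3, -2, 0, 2, 0],
            [2, 6, -5, -2, 4, -3],
            [0, 0, 5, 10, 0, 15],
            [2, 6, 0, 8, 4, 18]])

basis = A.nullspace()
print(len(basis))     # 3: the solution space has dimension 3
for v in basis:
    print(v.T)        # the three basis vectors (up to scaling)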


Example: (Applying the Plus/Minus Theorem)

Show that p₁ = 1 − x², p₂ = 2 − x², p₃ = x³ are linearly independent vectors.

Solution: The set S = {p₁, p₂} is linearly independent, since neither vector in S is a scalar multiple of the other. Since the vector p₃ cannot be expressed as a linear combination of the vectors in S (every such combination has degree at most 2, whereas p₃ has degree 3), it can be adjoined to S to produce a linearly independent set S ∪ {p₃} = {p₁, p₂, p₃}.

Example: (Bases by inspection)

Explain why the vectors v₁ = (−3, 7) and v₂ = (5, 5) form a basis for R².

Solution: Since neither vector is a scalar multiple of the other, the two vectors form a linearly independent set in the two-dimensional space R², and hence they form a basis by the theorem "Let V be an n-dimensional vector space, and let S be a set in V with exactly n vectors. Then S is a basis for V if and only if S spans V or S is linearly independent."

Example: (Bases by inspection): Explain why the vectors v₁ = (2, 0, 1), v₂ = (4, 0, 7), and v₃ = (−1, 1, 4) form a basis for R³.

Solution: The vectors v₁ and v₂ form a linearly independent set in the xz-plane (neither is a scalar multiple of the other). The vector v₃ is outside of the xz-plane, so the set {v₁, v₂, v₃} is also linearly independent. Since R³ is three-dimensional, the theorem "Let V be an n-dimensional vector space, and let S be a set in V with exactly n vectors. Then S is a basis for V if and only if S spans V or S is linearly independent" implies that {v₁, v₂, v₃} is a basis for R³.

Example: Determine a basis for a plane subspace W of R³ given by an equation of the form x = ay + bz (a plane through the origin).

Solution: In the equation x = ay + bz of W, y and z are free variables. The solutions in vector form can then be written as

(x, y, z) = (ay + bz, y, z) = y(a, 1, 0) + z(b, 0, 1)

Thus the given plane is spanned by the vectors (a, 1, 0) and (b, 0, 1), and as neither vector is a multiple of the other, the set {(a, 1, 0), (b, 0, 1)} is linearly independent. Hence basis(W) = {(a, 1, 0), (b, 0, 1)}, and the dimension of the subspace is 2.


Practice:

1) Determine whether ( )( )( ) form a basis of R4. If not,


find the dimension of the subspace they span.

2) Find a basis and dimension of the subspace W of R4 where


i. All vectors of the form ( )
ii. All vectors of the form ( ) where and

iii. *( ) +
iv. *( ) +

3) Find a basis and dimension of the solution space of the homogeneous system.
i.

ii.

iii.

iv.

v.

vi.


4) Determine a basis for the subspace of R3 and state its dimension.


i. *( ) +
ii. *( ) +
iii.
iv.
v.
vi.
5) Find the dimension of each of the following vector spaces:
i. The vector space of all diagonal matrices
ii. The vector space of all symmetric matrices
iii. The vector space of all upper triangular matrices
6) Find the dimension of the subspace of P3 consisting of all polynomials
for which
7) Show that the set W of all polynomials in P2 such that ( ) is a subspace
of P2. Then make a conjecture about the dimension of W, and conform your
conjecture by finding a basis for W.
8) Find a standard basis vector for R3 that can be added to the set * + to
3
produce a basis for R
i. ( ) ( )
ii. ( ) ( )
9) Find a standard basis vector for R4 that can be added to the set * + to
4
produce a basis for R where ( ) ( )
10) Let * + be a basis for a vector space V. Show that * + is
also a basis, where
11) The vectors ( ) ( ) are linearly independent.
Enlarge * + to a basis for R 3

12) The vectors ( ) ( ) are linearly independent.


Enlarge * + to a basis for R4
13) Find a basis for the subspace of R3 that is spanned by the vectors;

( ) ( ) ( ) ( )

14) Find a basis for the subspace of R4 that is spanned by the vectors;

( ) ( ) ( ) ( )


The following is a fundamental result in linear algebra.

Theorem (proof omitted): Let V be a vector space such that one basis has m elements and another basis has n elements. Then m = n.

(A vector space V is said to be of finite dimension n, or n-dimensional, written dim V = n, if V has a basis with n elements. The theorem tells us that all bases of V have the same number of elements, so this definition is well defined.)

Theorem:

Every basis of a finite dimensional vector space has the same number of elements (vectors).

Proof:

Let a vector space V over a field F have two bases B₁ and B₂ with m and n elements, respectively. Since B₁ spans V and B₂ is a linearly independent subset of V, B₂ cannot have more than m elements,

i.e. n ≤ m ……………(i)

Now since B₂ spans V and B₁ is a linearly independent subset of V, B₁ cannot have more than n elements,

i.e. m ≤ n ……………(ii)

From (i) and (ii), m = n.

Hence the theorem.


Theorem:

Let V be an n-dimensional vector space, and let S = {v₁, v₂, …, vₙ} be any basis. Then if a set in V has more than n vectors, it is linearly dependent.

Proof:

Let S′ = {w₁, w₂, …, wₘ} be any set of m vectors in V, where m > n. We want to show that S′ is linearly dependent. Since S = {v₁, v₂, …, vₙ} is a basis, each wᵢ can be expressed as a linear combination of the vectors in S, say

w₁ = a₁₁v₁ + a₂₁v₂ + ⋯ + aₙ₁vₙ
w₂ = a₁₂v₁ + a₂₂v₂ + ⋯ + aₙ₂vₙ
⋮
wₘ = a₁ₘv₁ + a₂ₘv₂ + ⋯ + aₙₘvₙ ………………(i)

To show that S′ is linearly dependent, we must find scalars k₁, k₂, …, kₘ, not all zero, such that

k₁w₁ + k₂w₂ + ⋯ + kₘwₘ = 0 ………………(ii)

We can write equation (i) in partitioned (matrix) form as

[w₁ w₂ ⋯ wₘ] = [v₁ v₂ ⋯ vₙ] A ………………(iii)

where A = [aᵢⱼ] is the n × m matrix of the coefficients above.

Since m > n, the linear system

A [k₁; k₂; …; kₘ] = [0; 0; …; 0] ………………(iv)

has more unknowns than equations and hence has a non-trivial solution (k₁, k₂, …, kₘ) ≠ (0, 0, …, 0).

Creating a column vector k from this solution and multiplying both sides of (iii) on the right by this vector yields


[w₁ w₂ ⋯ wₘ] k = [v₁ v₂ ⋯ vₙ] A k

By (iv), A k = 0, so this simplifies to

[w₁ w₂ ⋯ wₘ] k = [v₁ v₂ ⋯ vₙ] 0 = 0

which we can write as k₁w₁ + k₂w₂ + ⋯ + kₘwₘ = 0

Since the scalar coefficients in this equation are not all zero, we have proved that S′ = {w₁, w₂, …, wₘ} is linearly dependent.

Theorem:

Let S be a non-empty set of vectors in a vector space V. If S is a linearly independent set, and if v is a vector in V that is outside of span(S), then the set S ∪ {v} that results by inserting v into S is still linearly independent.

Proof:

Assume that S = {v₁, v₂, …, vᵣ} is a linearly independent set of vectors in V, and v is a vector in V that is outside of span(S). To show that S ∪ {v} = {v₁, v₂, …, vᵣ, v} is a linearly independent set, we must show that the only scalars that satisfy

k₁v₁ + k₂v₂ + ⋯ + kᵣvᵣ + kᵣ₊₁v = 0

are k₁ = k₂ = ⋯ = kᵣ = kᵣ₊₁ = 0.

But it must be true that kᵣ₊₁ = 0, for otherwise we could solve for v as a linear combination of v₁, v₂, …, vᵣ, contradicting the assumption that v is outside of span(S). Thus the equation simplifies to

k₁v₁ + k₂v₂ + ⋯ + kᵣvᵣ = 0

which, by the linear independence of {v₁, v₂, …, vᵣ}, implies that k₁ = k₂ = ⋯ = kᵣ = 0.

Hence the theorem.


Theorem:

Let S be a non-empty set of vectors in a vector space V. If v is a vector in S that is expressible as a linear combination of other vectors in S, and if S − {v} denotes the set obtained by removing v from S, then S and S − {v} span the same space; that is, span(S) = span(S − {v}).

Proof:

Assume that S = {v₁, v₂, …, vᵣ} is a set of vectors in V, and (to be specific) suppose that vᵣ is a linear combination of v₁, v₂, …, vᵣ₋₁, say

vᵣ = c₁v₁ + c₂v₂ + ⋯ + cᵣ₋₁vᵣ₋₁

We want to show that if vᵣ is removed from S, then the remaining set of vectors {v₁, v₂, …, vᵣ₋₁} still spans span(S); that is, we must show that every vector w in span(S) is expressible as a linear combination of {v₁, v₂, …, vᵣ₋₁}. But if w is in span(S), then w is expressible in the form

w = k₁v₁ + k₂v₂ + ⋯ + kᵣ₋₁vᵣ₋₁ + kᵣvᵣ

or, on substituting the expression for vᵣ above,

w = k₁v₁ + ⋯ + kᵣ₋₁vᵣ₋₁ + kᵣ(c₁v₁ + ⋯ + cᵣ₋₁vᵣ₋₁) = (k₁ + kᵣc₁)v₁ + ⋯ + (kᵣ₋₁ + kᵣcᵣ₋₁)vᵣ₋₁

which expresses w as a linear combination of {v₁, v₂, …, vᵣ₋₁}. Hence the theorem.


In general, to show that a set of vectors {v₁, v₂, …, vₙ} is a basis for a vector space V, one must show that the vectors are linearly independent and span V. However, if we happen to know that V has dimension n (so that {v₁, v₂, …, vₙ} contains the right number of vectors for a basis), then it suffices to check either linear independence or spanning; the remaining condition will hold automatically. This is the content of a theorem given at the end of this section; first we record a result on subspaces.

Theorem:

Let W be a subspace of an n-dimensional vector space V. Then dim W ≤ n. In particular, if dim W = n, then W = V.

Proof:

Because V is of dimension n, any n + 1 or more vectors are linearly dependent. Furthermore, because a basis of W consists of linearly independent vectors, it cannot contain more than n elements. Accordingly, dim W ≤ n. In particular, if {w₁, w₂, …, wₙ} is a basis of W, then because it is an independent set with n elements, it is also a basis of V. Thus W = V when dim W = n.
Quotient Space

Let W be a subspace of V, i.e. W ≤ V. Then V/W = {v + W : v ∈ V}, where each coset v + W is a subset of V for v ∈ V. The cosets form a vector space over the same field as V with respect to the operations defined as;

i. (u + W) + (v + W) = (u + v) + W, where u, v ∈ V
ii. k(v + W) = kv + W, where k is a scalar

Remember:

i. W = 0 + W is the additive identity of V/W, i.e. W is the zero vector of V/W

ii. u + W = v + W ⟺ u − v ∈ W
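Cosets can be handled computationally through the membership test u + W = v + W ⟺ u − v ∈ W; a sketch (not from the original text; numpy assumed), with W represented by a matrix whose columns span it:

import numpy as np

def same_coset(u, v, W):
    # u + W == v + W iff u - v lies in the column space of W
    d = (u - v).reshape(-1, 1)
    return np.linalg.matrix_rank(np.hstack([W, d])) == np.linalg.matrix_rank(W)

W = np.array([[1.0], [1.0], [1.0]])      # W = span{(1,1,1)}, a line in R^3
u = np.array([2.0, 3.0, 4.0])
v = np.array([0.0, 1.0, 2.0])            # u - v = (2, 2, 2) lies in W

print(same_coset(u, v, W))               # True: u and v determine the same coset
print(same_coset(u, np.zeros(3), W))     # False: u + W is not the zero coset W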


Examples:

i. Let W be the subspace of R³ spanned by the vector (1, 1, 1); that is, W = span{(1, 1, 1)} = {t(1, 1, 1) : t ∈ R}. Then W is the straight line through the origin and the point (1, 1, 1). For any vector v = (a, b, c) ∈ R³ we can regard the coset v + W as the set of vectors obtained by adding the vector v = (a, b, c) to each vector of W. This coset is therefore the set of all vectors on the line through the point (a, b, c) parallel to the line W. Hence R³/W is the collection of lines parallel to W.

ii. Let W = span{(1, 0, 0), (0, 1, 0)}; then W is the set of all vectors in the xy-plane, and the cosets v + W are the planes parallel to the xy-plane. Thus the quotient space R³/W is the collection of planes parallel to the xy-plane.

iii. Let W = span{(0, 0, 1)}, the z-axis. Then for any vector v = (a, b, c) we have
(a, b, c) + W = (a, b, 0) + W, since (0, 0, c) ∈ W, and therefore
(a, b, c) + W = [a(1, 0, 0) + b(0, 1, 0)] + W = a[(1, 0, 0) + W] + b[(0, 1, 0) + W]
The cosets (1, 0, 0) + W and (0, 1, 0) + W therefore span R³/W; they are also independent, and hence they form a basis of R³/W.

iv. The set P₁ is a subspace of P₄; form the quotient space P₄/P₁. For this we consider any p(x) = a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴ ∈ P₄.
Then p(x) + P₁ = (a₂x² + a₃x³ + a₄x⁴) + P₁, since a₀ + a₁x ∈ P₁.
So (x² + P₁), (x³ + P₁), (x⁴ + P₁) span P₄/P₁.

Moreover, these are linearly independent, so a basis for P₄/P₁ is
{x² + P₁, x³ + P₁, x⁴ + P₁}

v. Let {w₁, …, wᵣ} be a basis for a subspace W of V, and extend it to a basis {w₁, …, wᵣ, v₁, …, vₛ} of V. Then
{v₁ + W, …, vₛ + W} is a basis of V/W.


Theorem: Dimension of Quotient Space:

If W is a subspace of a finite dimensional vector space V, then

i. W is finite dimensional
ii. dim(W) ≤ dim(V)
iii. dim(V/W) = dim(V) − dim(W)

Proof:

i. Since dim(W) ≤ dim(V), W has a finite basis, say with n elements. Then by the theorem "Let V be a vector space such that one basis has m elements and another basis has n elements, then m = n", all bases of W have the same finite number of elements, and hence W is finite dimensional.
ii. W is finite dimensional, so it has a basis S = {w₁, …, wₘ}. Either S is also a basis for V or it is not. If so, then dim(V) = m, which means that dim(W) = dim(V). If not, then because S is a linearly independent set it can be enlarged to a basis for V by the theorem "Let S be a finite set of vectors in a finite dimensional vector space V; if S is a linearly independent set that is not already a basis for V, then S can be enlarged to a basis for V by inserting appropriate vectors into S".

But this implies that dim(W) < dim(V). So we have shown that dim(W) ≤ dim(V) in all cases.

iii. Let {w₁, w₂, …, wₘ} be a basis of W with dim(W) = m, and let
{w₁, …, wₘ, v₁, …, vₖ} be an extended basis of V with dim(V) = n = m + k.
Then we have to prove that {v₁ + W, v₂ + W, …, vₖ + W} is a basis of V/W.

For this, firstly we will show that {v₁ + W, …, vₖ + W} is linearly independent. Consider;

c₁(v₁ + W) + c₂(v₂ + W) + ⋯ + cₖ(vₖ + W) = W (the zero vector of V/W)

(c₁v₁ + c₂v₂ + ⋯ + cₖvₖ) + W = W

⟹ c₁v₁ + c₂v₂ + ⋯ + cₖvₖ ∈ W


c₁v₁ + c₂v₂ + ⋯ + cₛvₛ ∈ W

implies c₁v₁ + c₂v₂ + ⋯ + cₛvₛ is a linear combination of w₁, w₂, …, wᵣ.

Since {w₁, …, wᵣ, v₁, …, vₛ} is an extended basis of V, the vectors

w₁, …, wᵣ, v₁, …, vₛ are linearly independent.

This implies c₁ = c₂ = ⋯ = cₛ = 0.

Thus {v₁ + W, …, vₛ + W} is linearly independent.

Now we will show that {v₁ + W, …, vₛ + W} spans V/W. Consider;

Let v + W be any element of V/W. Here v ∈ V can be expressed as a linear combination of
w₁, …, wᵣ, v₁, …, vₛ, i.e.

v = a₁w₁ + ⋯ + aᵣwᵣ + b₁v₁ + ⋯ + bₛvₛ

v + W = (a₁w₁ + ⋯ + aᵣwᵣ + b₁v₁ + ⋯ + bₛvₛ) + W

v + W = b₁(v₁ + W) + b₂(v₂ + W) + ⋯ + bₛ(vₛ + W), since a₁w₁ + ⋯ + aᵣwᵣ ∈ W

This shows that {v₁ + W, …, vₛ + W} spans V/W.

Thus {v₁ + W, …, vₛ + W} is a basis of V/W, and dim(V/W) = s = (r + s) − r.

Hence dim(V/W) = dim(V) − dim(W)
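
For instance, taking V = R³ and W the xy-plane (so dim V = 3 and dim W = 2), the theorem gives dim(R³/W) = 3 − 2 = 1: the cosets are the planes parallel to the xy-plane, and the single coset (0, 0, 1) + W already forms a basis of R³/W.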


Theorem: If W is a subspace of a finite dimensional vector space V, then

i. W is finite dimensional
ii. dim(W) ≤ dim(V)
iii. dim(W) = dim(V) if and only if W = V

Proof:

i. Since dim(W) ≤ dim(V) < ∞, suppose W has a basis with n elements.

Then by the theorem “Let W be a vector space such that one basis has m
elements and another basis has n elements, then m = n,” all bases of
W have the same number of elements, and hence W is finite dimensional.
ii. If W is finite dimensional, then it has a basis S = {w₁, w₂, …, wₘ}. Either S
is also a basis for V or it is not. If so, then dim(V) = m, which means
that dim(W) = dim(V). If not, then because S is a linearly independent
set it can be enlarged to a basis for V by the theorem “Let S be a finite set of
vectors in a finite dimensional vector space V then if S is linearly
independent set that is not already a basis for V, then S can be
enlarged to a basis for V by inserting appropriate vectors into S”

But this implies that dim(W) < dim(V). So we have shown that

dim(W) ≤ dim(V) in all cases.

iii. Assume that dim(W) = dim(V) and that S = {w₁, w₂, …, wₙ} is a basis

for W. If S is not also a basis for V, then being linearly independent, S
can be extended to a basis for V by the theorem “Let S be a finite set of
vectors in a finite dimensional vector space V then if S is linearly
independent set that is not already a basis for V, then S can be
enlarged to a basis for V by inserting appropriate vectors into S”.
But this would mean that dim(W) < dim(V), which contradicts our
hypothesis. Thus S must also be a basis for V, which means that W = V.
The converse is obvious.


Theorem:

Let V be an n-dimensional vector space, and let S be a set in V with exactly n
vectors. Then S is a basis for V if and only if S spans V or S is linearly
independent.

Proof:

Assume that S has exactly n vectors and spans V. To prove that S is a basis, we
must show that S is a linearly independent set. But if this is not so, then some vector
v in S is a linear combination of the remaining vectors. If we remove this vector
from S, then it follows from the theorem;

“Let S be a non – empty set of vectors in a vector space V then if v is a

vector in S that is expressible as a linear combination of other vectors in S,
and if S − {v} denotes the set obtained by removing v from S, then S and
S − {v} span the same space; that is, span(S) = span(S − {v})”

that the remaining set of vectors still spans V. But this is impossible, since the
theorem “Let V be an n – dimensional vector space, and let {v₁, v₂, …, vₙ} be
any basis then if a set in V has fewer than ‘n’ vectors, then it does not span V”
states that no set with fewer than n vectors can span an n-dimensional vector
space. Thus S is linearly independent.

Assume that S has exactly n vectors and is a linearly independent set. To

prove that S is a basis, we must show that S spans V. But if this is not so, then
there is some vector v in V that is not in span(S). If we insert this vector into S,
then it follows from the theorem “Let S be a non – empty set of vectors in a vector
space V then if S is linearly independent set, and if v is a vector in V that is
outside of span(S), then the set S ∪ {v} that results by inserting v into S is still
linearly independent” that this set of vectors is still linearly independent.
But this is impossible, since the theorem

“Let V be an n – dimensional vector space, and let {v₁, v₂, …, vₙ} be any basis

then if a set in V has more than ‘n’ vectors, then it is linearly dependent”

states that no set with more than n vectors in an n-dimensional vector space can
be linearly independent. Thus S spans V.


Theorem:

Let S be a finite set of vectors in a finite dimensional vector space V; then if S

spans V but is not a basis for V, then S can be reduced to a basis for V by
removing appropriate vectors from S. (This theorem tells us that every spanning set
for a subspace is either a basis for that subspace or has a basis as a subset.)

Proof:

If S is a set of vectors that spans V but is not a basis for V, then S is a linearly
dependent set. Thus some vector v in S is expressible as a linear combination of the
other vectors in S. By the theorem;

“Let S be a non – empty set of vectors in a vector space V then if v is a vector

in S that is expressible as a linear combination of other vectors in S, and if
S − {v} denotes the set obtained by removing v from S, then S and S − {v}
span the same space; that is, span(S) = span(S − {v})”

we can remove v from S, and the resulting set S′ will still span V. If S′ is linearly
independent, then S′ is a basis for V, and we are done. If S′ is linearly dependent,
then we can remove some appropriate vector from S′ to produce a set S″ that still
spans V. We can continue removing vectors in this way until we finally arrive at a
set of vectors in S that is linearly independent and spans V. This subset of S is a
basis for V.


Theorem:

Let S be a finite set of vectors in a finite dimensional vector space V; then if S is a

linearly independent set that is not already a basis for V, then S can be enlarged to
a basis for V by inserting appropriate vectors into S. (This theorem tells us that every
linearly independent set in a subspace is either a basis for that subspace or can be
extended to a basis for it.)

Proof:

Suppose that dim(V) = n. If S is a linearly independent set that is not already a

basis for V, then S fails to span V, so there is some vector v in V that is not in
span(S). By the theorem “Let S be a non – empty set of vectors in a vector
space V then if S is linearly independent set, and if v is a vector in V that is
outside of span(S), then the set S ∪ {v} that results by inserting v into S is still
linearly independent,” we can insert v into S, and the resulting set S′ will still be
linearly independent. If S′ spans V, then S′ is a basis for V, and we are finished. If
S′ does not span V, then we insert an appropriate vector into S′ to produce a set S″
that is still linearly independent. We can continue inserting vectors in this way
until we reach a set with n linearly independent vectors in V. This set will be a basis
for V by the theorem “Let V be an n – dimensional vector space, and let S be a set
in V with exactly ‘n’ vectors. Then S is a basis for V if and only if S spans V or
S is linearly independent.”


Sums

Let U and W be subsets of a vector space V. The sum of U and W, written U + W,

consists of all sums u + w where u ∈ U and w ∈ W. That is,

U + W = {u + w : u ∈ U, w ∈ W}

Now suppose U and W are subspaces of V. Then one can easily show that U + W
is a subspace of V. Recall that U ∩ W is also a subspace of V. The following
theorem relates the dimensions of these subspaces.

Theorem:

Suppose U and W are finite-dimensional subspaces of a vector space V. Then U + W is finite dimensional and

dim(U + W) = dim(U) + dim(W) − dim(U ∩ W)

Proof:

Suppose {v₁, …, vᵣ} is a basis of U ∩ W, {v₁, …, vᵣ, u₁, …, uₛ} is a
basis of U, and {v₁, …, vᵣ, w₁, …, wₜ} is a basis of W.

Then we have to show that B = {v₁, …, vᵣ, u₁, …, uₛ, w₁, …, wₜ} is a

basis of U + W.

Firstly we will show the linear independence condition. For this consider;

a₁v₁ + ⋯ + aᵣvᵣ + b₁u₁ + ⋯ + bₛuₛ + c₁w₁ + ⋯ + cₜwₜ = 0 ……………..(i)

Then c₁w₁ + ⋯ + cₜwₜ = −(a₁v₁ + ⋯ + aᵣvᵣ + b₁u₁ + ⋯ + bₛuₛ)

Since the LHS of this equation is in W, so the RHS also will be in W,

i.e. c₁w₁ + ⋯ + cₜwₜ ∈ W

also c₁w₁ + ⋯ + cₜwₜ ∈ U

therefore c₁w₁ + ⋯ + cₜwₜ ∈ U ∩ W

as {v₁, …, vᵣ} is a basis of U ∩ W, then for some scalars d₁, …, dᵣ we have

c₁w₁ + ⋯ + cₜwₜ = d₁v₁ + ⋯ + dᵣvᵣ

Since {v₁, …, vᵣ, w₁, …, wₜ} is a basis of W, it is linearly independent, and the last equation forces c₁ = ⋯ = cₜ = 0 (and d₁ = ⋯ = dᵣ = 0).

So that (i) becomes a₁v₁ + ⋯ + aᵣvᵣ + b₁u₁ + ⋯ + bₛuₛ = 0

But {v₁, …, vᵣ, u₁, …, uₛ} is a basis of U, hence linearly independent,

i.e. each a₁ = ⋯ = aᵣ = 0 and b₁ = ⋯ = bₛ = 0.

Hence B = {v₁, …, vᵣ, u₁, …, uₛ, w₁, …, wₜ} is linearly independent.

Now we have to show the spanning condition. For this suppose v ∈ U + W,

i.e. v = u + w with u ∈ U and w ∈ W. Also we know that {v₁, …, vᵣ, u₁, …, uₛ} is a basis of

U and {v₁, …, vᵣ, w₁, …, wₜ} is a basis of W.

Then u = a₁v₁ + ⋯ + aᵣvᵣ + b₁u₁ + ⋯ + bₛuₛ

And w = a₁′v₁ + ⋯ + aᵣ′vᵣ + c₁w₁ + ⋯ + cₜwₜ

Adding both we get

v = u + w = (a₁ + a₁′)v₁ + ⋯ + (aᵣ + aᵣ′)vᵣ + b₁u₁ + ⋯ + bₛuₛ + c₁w₁ + ⋯ + cₜwₜ

This implies that B = {v₁, …, vᵣ, u₁, …, uₛ, w₁, …, wₜ} spans U + W.

From both conditions we conclude that B = {v₁, …, vᵣ, u₁, …, uₛ, w₁, …, wₜ}

is a basis of U + W.

Therefore U + W is finite dimensional. And

dim(U + W) = r + s + t = (r + s) + (r + t) − r

dim(U + W) = dim(U) + dim(W) − dim(U ∩ W)


Example: Let V = M2×2, the vector space of 2 × 2 matrices. Let U consist of those

matrices whose second row is zero, and let W consist of those matrices whose
second column is zero. Then

U = { [a b; 0 0] }, W = { [a 0; c 0] }, U + W = { [a b; c 0] }, U ∩ W = { [a 0; 0 0] }
That is, U + W consists of those matrices whose lower right entry is 0, and U ∩ W
consists of those matrices whose second row and second column are zero.
Note that dim(U) = 2, dim(W) = 2, dim(U ∩ W) = 1. Also, dim(U + W) = 3,
which is expected from the theorem. That is,

dim(U + W) = dim(U) + dim(W) − dim(U ∩ W) = 2 + 2 − 1 = 3
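
A quick numerical check of this dimension theorem is possible by stacking spanning vectors as matrix columns: dim(U + W) is the rank of all the columns taken together, and dim(U ∩ W) then falls out of the formula. A minimal Python sketch (the subspaces chosen here, the xy- and yz-planes of R³, are illustrative and not from the text):

import numpy as np

A = np.array([[1, 0], [0, 1], [0, 0]])  # columns span U (the xy-plane)
B = np.array([[0, 0], [1, 0], [0, 1]])  # columns span W (the yz-plane)

dim_U = np.linalg.matrix_rank(A)                    # 2
dim_W = np.linalg.matrix_rank(B)                    # 2
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))  # dim(U + W) = 3
dim_int = dim_U + dim_W - dim_sum                   # dim(U ∩ W) = 1 (the y-axis)
print(dim_U, dim_W, dim_sum, dim_int)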

Practice:

1. Give an example of a vector space V and its subspace W such that V, W and
V/W are all infinite dimensional.
2. Let U₁, U₂ be subspaces of a vector space V. Then show that;
i. U₁ + U₂ is a subspace of V
ii. U₁ and U₂ are contained in U₁ + U₂
iii. U₁ + U₂ is the smallest subspace containing U₁ and U₂,
i.e. U₁ + U₂ = span(U₁ ∪ U₂)
iv. W + W = W for any subspace W of V

3. Consider the following subspaces of R⁵;

U = span{( ), ( ), ( )}
W = span{( ), ( ), ( )}

Find a basis and the dimension of (a) U + W (b) U ∩ W

4. Suppose U and W are distinct four-dimensional subspaces of a vector space

V, where dim V = 6. Find the possible dimensions of U ∩ W.

5. Let U, W be the subsets of all upper and lower triangular matrices of M2×2;

show that both are subspaces of M2×2,
find dim(U), dim(W), dim(U ∩ W),
and also verify dim(U + W) = dim(U) + dim(W) − dim(U ∩ W)


Direct Sums

The vector space V is said to be the direct sum of its subspaces U and W, denoted
by V = U ⊕ W, if every v ∈ V can be written in one and only one way as v = u + w
where u ∈ U and w ∈ W.

The following theorem characterizes such a decomposition.

Theorem:

The vector space V is the direct sum of its subspaces U and W if and only if:

(i) V = U + W, (ii) U ∩ W = {0}.

Proof: Suppose V = U ⊕ W; then every v ∈ V can be uniquely written in the form

v = u + w where u ∈ U and w ∈ W. Thus, in particular, V = U + W.

Now suppose v ∈ U ∩ W; then

(i) v = v + 0 where v ∈ U and 0 ∈ W.

(ii) v = 0 + v where 0 ∈ U and v ∈ W.

Because such a sum must be unique, v = 0. Thus V = U + W and U ∩ W = {0}.

On the other hand, suppose that V = U + W and U ∩ W = {0}. Let v ∈ V;

because V = U + W, there exist u ∈ U and w ∈ W such that v = u + w. We need
to show that such a sum is unique. Suppose also that v = u′ + w′ where u′ ∈ U
and w′ ∈ W; then

u + w = u′ + w′ and so u − u′ = w′ − w

But u − u′ ∈ U and w′ − w ∈ W; then by U ∩ W = {0},

u − u′ = 0 and w′ − w = 0, and so u = u′ and w = w′.

Thus such a sum for v ∈ V is unique, and V = U ⊕ W.


Example: Consider the vector space V = R³.

(a) Let U be the xy-plane and let W be the yz-plane; that is,

U = {(a, b, 0) : a, b ∈ R} and W = {(0, b, c) : b, c ∈ R}

Then R³ = U + W, because every vector in R³ is the sum of a vector in U and a

vector in W. However, R³ is not the direct sum of U and W, because such sums
are not unique. For example,

(1, 2, 3) = (1, 2, 0) + (0, 0, 3) and also (1, 2, 3) = (1, 0, 0) + (0, 2, 3)

(b) Let U be the xy-plane and let W be the z-axis; that is,

U = {(a, b, 0) : a, b ∈ R} and W = {(0, 0, c) : c ∈ R}

Now any vector (a, b, c) ∈ R³ can be written as the sum of a vector in U and a

vector in W in one and only one way:

(a, b, c) = (a, b, 0) + (0, 0, c)

Accordingly, R³ is the direct sum of U and W; that is, R³ = U ⊕ W
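
Whether a sum is direct can be tested numerically as well: by the theorem above, V = U ⊕ W exactly when the spanning columns of U and W together have rank equal to rank(A) + rank(B) and to dim V, which forces U ∩ W = {0}. A minimal sketch for part (b) above (column choices illustrative):

import numpy as np

A = np.array([[1, 0], [0, 1], [0, 0]])  # columns span U, the xy-plane
B = np.array([[0], [0], [1]])           # column spans W, the z-axis

r = np.linalg.matrix_rank
is_direct = r(np.hstack([A, B])) == r(A) + r(B) == 3
print(is_direct)  # True: R^3 = U ⊕ W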

General Direct Sums

The notion of a direct sum is extended to more than one factor in the obvious way.
That is, V is the direct sum of subspaces W₁, W₂, …, Wᵣ, written

V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ Wᵣ, if every vector v ∈ V can be written in one and only

one way as v = w₁ + w₂ + ⋯ + wᵣ where w₁ ∈ W₁, w₂ ∈ W₂, …, wᵣ ∈ Wᵣ

Theorem:

Suppose V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ Wᵣ. Also, for each k, suppose Sₖ is a linearly

independent subset of Wₖ. Then

(a) The union S = S₁ ∪ S₂ ∪ ⋯ ∪ Sᵣ is linearly independent in V.

(b) If each Sₖ is a basis of Wₖ, then S₁ ∪ S₂ ∪ ⋯ ∪ Sᵣ is a basis of V.

(c) dim(V) = dim(W₁) + dim(W₂) + ⋯ + dim(Wᵣ)

Theorem: (For two factors)

Suppose V = W₁ ⊕ W₂. Suppose S₁ = {u₁, …, uₘ} and S₂ = {w₁, …, wₙ} are

linearly independent subsets of W₁, W₂ respectively. Then

(a) The union S = S₁ ∪ S₂ is linearly independent in V.

(b) If S₁ = {u₁, …, uₘ} and S₂ = {w₁, …, wₙ} are bases of W₁, W₂ respectively,

then S₁ ∪ S₂ is a basis of V.

(c) dim(V) = dim(W₁) + dim(W₂)

Proof:

a) Suppose that a₁u₁ + ⋯ + aₘuₘ + b₁w₁ + ⋯ + bₙwₙ = 0,
where aᵢ, bⱼ are scalars; then
0 = (a₁u₁ + ⋯ + aₘuₘ) + (b₁w₁ + ⋯ + bₙwₙ)
where a₁u₁ + ⋯ + aₘuₘ ∈ W₁ and b₁w₁ + ⋯ + bₙwₙ ∈ W₂.
Because such a sum for 0 is unique, this leads to
a₁u₁ + ⋯ + aₘuₘ = 0 and b₁w₁ + ⋯ + bₙwₙ = 0.
Because S₁ is linearly independent, each aᵢ = 0, and because S₂ is linearly independent,
each bⱼ = 0; therefore S = S₁ ∪ S₂ is linearly independent.
b) By the theorem “Suppose V = W₁ ⊕ W₂. Suppose S₁ = {u₁, …, uₘ} and
S₂ = {w₁, …, wₙ} are linearly independent subsets of W₁, W₂. Then the
union S₁ ∪ S₂ is linearly independent in V,” S₁ ∪ S₂ is linearly independent, and
by the problem “Suppose that U, W are subspaces of a vector space V and that
{uᵢ} spans U and {wⱼ} spans W. This shows that {uᵢ} ∪ {wⱼ} spans U + W,”
S₁ ∪ S₂ spans V = W₁ + W₂. Thus S₁ ∪ S₂ is a basis of V.
c) Since S₁ ∪ S₂ is a basis of V by part (b),
dim(V) = m + n = dim(W₁) + dim(W₂) follows directly.

Theorem: (Just read): Suppose V = W₁ + W₂ + ⋯ + Wᵣ.

And dim(V) = ∑ dim(Wᵢ); then V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ Wᵣ


Practice:

1. Consider the following subspaces of R³;

U = {(a, b, c) : a = b = c}, W = {(0, b, c)} (W is the yz-plane)
Show that R³ = U ⊕ W
2. Suppose that U, W are subspaces of a vector space V and that {uᵢ}
spans U and {wⱼ} spans W. Then show that {uᵢ} ∪ {wⱼ} spans U + W.

Cardinality of a vector space V(F):

Let dim V(F) = n and B = {v₁, v₂, …, vₙ} be any basis of V(F); then the cardinality of the

vector space is given as |V(F)| = pⁿ, where p = |F| is any prime.

Number of Bases of a vector space V(F):

Let dim V(F) = n; then;
i. Number of distinct bases = (pⁿ − 1)(pⁿ − p)(pⁿ − p²) ⋯ (pⁿ − pⁿ⁻¹) / n!
ii. Number of ordered bases = (pⁿ − 1)(pⁿ − p)(pⁿ − p²) ⋯ (pⁿ − pⁿ⁻¹)

Number of subspaces of a vector space V(F):

Let V(F) be a vector space and W be a subspace of V(F). Let dim V(F) = n and

dim W = r; then
i. Number of ordered bases of W = (pʳ − 1)(pʳ − p)(pʳ − p²) ⋯ (pʳ − pʳ⁻¹)
ii. Number of ways of selecting r linearly independent vectors from V
= (pⁿ − 1)(pⁿ − p)(pⁿ − p²) ⋯ (pⁿ − pʳ⁻¹)

iii. Number of r-dimensional subspaces of V(F)
= [(pⁿ − 1)(pⁿ − p)(pⁿ − p²) ⋯ (pⁿ − pʳ⁻¹)] / [(pʳ − 1)(pʳ − p)(pʳ − p²) ⋯ (pʳ − pʳ⁻¹)]
where 0 ≤ r ≤ n
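
As a quick check of these formulas, take p = 2 and n = 2, i.e. V = (Z₂)²: the cardinality is |V| = 2² = 4; the number of ordered bases is (4 − 1)(4 − 2) = 6, so the number of distinct bases is 6/2! = 3; and the number of one-dimensional subspaces is (2² − 1)/(2¹ − 1) = 3, namely the spans of (1, 0), (0, 1) and (1, 1).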


Coordinates

If S = {v₁, v₂, …, vₙ} is a basis for a vector space V, and

v = c₁v₁ + c₂v₂ + ⋯ + cₙvₙ is the expression for a vector v in terms of the basis
S, then the scalars c₁, c₂, …, cₙ are called the coordinates of v relative to the
basis S.

The vector (c₁, c₂, …, cₙ) in Rⁿ constructed from these coordinates is called the

coordinate vector of v relative to S; it is denoted by (v)ₛ = (c₁, c₂, …, cₙ)

Remark:

The above n scalars also form the coordinate column vector
[v]ₛ = [c₁, c₂, …, cₙ]ᵀ of v relative to S. The choice of the column vector rather than the
row vector to represent v depends on the context in which it is used.

Coordinates relative to the standard basis for Rⁿ

In the special case where V = Rⁿ and S is the standard basis, the coordinate vector
(v)ₛ and the vector v are the same; that is, (v)ₛ = v

For example, in R³ the representation of a vector v = (a, b, c) as a linear

combination of the vectors in the standard basis S = {e₁, e₂, e₃} is
v = a e₁ + b e₂ + c e₃, so the coordinate vector relative to this basis is (v)ₛ = (a, b, c), which is the same
as the vector v.

Example:

Consider real space R³. The following vectors form a basis S of R³:

u₁ = ( ); u₂ = ( ); u₃ = ( )

The coordinates of v = ( ) relative to the basis S are obtained as follows.

Set v = xu₁ + yu₂ + zu₃; that is, set v as a linear combination of the basis

vectors using unknown scalars x, y, z. This yields

[ ] [ ] [ ] [ ]


The equivalent system of linear equations is as follows:

The solution of the system is . Thus,

And so ( ) ( )
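
Computationally, finding (v)ₛ amounts to solving one linear system: place the basis vectors in the columns of a matrix and solve for the coefficients. A minimal Python sketch, where the basis and the vector are illustrative stand-ins rather than the example's actual numbers:

import numpy as np

# Columns are the basis vectors u1, u2, u3 (illustrative)
U = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]]).T
v = np.array([5.0, 3.0, 4.0])

coords = np.linalg.solve(U, v)  # coordinates of v relative to S
print(coords)                   # [ 4. -1.  2.], i.e. v = 4u1 - 1u2 + 2u3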

Practice:

1) Find the coordinate vectors of w relative to the basis * + for R2


i. ( ) ( ) ( )
ii. ( ) ( ) ( )
iii. ( ) ( ) ( )
iv. ( ) ( ) ( )
2) Find the coordinate vectors of v relative to the basis * + for R3
i. ( ) ( ) ( ) ( )
ii. ( ) ( ) ( ) ( )
iii. ( ) ( ) ( ) ( )

3) Relative to the basis * + *( )( )+ of R2 , find the coordinate


vector of ( )


Coordinates relative to the standard basis for Pₙ

In the special case where V = Pₙ, the given formula for

p = a₀ + a₁x + ⋯ + aₙxⁿ expresses the polynomial p as a linear

combination of the standard basis vectors S = {1, x, x², …, xⁿ}. Thus the
coordinate vector for p relative to S is (p)ₛ = (a₀, a₁, …, aₙ)

Example: Consider the vector space P2 of polynomials of degree . The


polynomials , , ( )

form a basis S of P2 . The coordinate vector ( )of relative to S


is obtained as follows.

Set using unknown scalars a,b,c, and simplify:

( ) ( ) ( )

( ) ( )

Then set the coefficients of the same powers of ‗x‘ equal to each other to obtain the
system , ,

The solution of the system is . Thus,

and hence; ( ) ( )

Practice: Find the coordinate vectors of p relative to the basis * +


for P2 , P3 and P4

i. , , ,
ii. , , ,
iii. , , ,
iv. , , ,
v. , , , ,

vi. , , , ,


Coordinates relative to the standard basis for M22

In the special case where V = M22, the representation of a vector

B = [a b; c d] can be expressed as the linear combination of the standard basis

vectors in matrix form as follows;

[a b; c d] = a[1 0; 0 0] + b[0 1; 0 0] + c[0 0; 1 0] + d[0 0; 0 1]

So the coordinate vector of B relative to S is (B)ₛ = (a, b, c, d)

Example:

Consider the vector space M22 . The matrices


0 1 0 1 0 1 0 1

form a basis S of M22 . The coordinate vector ( )of 0 1 relative to S is


obtained as follows.

Set using unknown scalars a,b,c,d and simplify:

0 1 0 1 0 1 0 1 0 1

0 1 0 1

Then set the coefficients of the same powers of ‗x‘ equal to each other to obtain the
system , , ,

The solution of the system is . Thus,

and hence; ( ) ( )
(Note that the coordinate vectors of A is a vector in R4, because )


Practice:

1) Find the coordinate vectors of A relative to the basis * + for

i. 0 1 0 1 0 1 0 1 0 1

ii. 0 1 0 1 0 1 0 1 0 1

2) Consider the coordinate vector ( ) [ ]find B if S is the basis in

0 1 0 1 0 1 0 1


Geometrical interpretation of the coordinates of a vector relative to a basis S:

There is a geometrical interpretation of the coordinates of a vector relative to a

basis S for the real space Rⁿ, which we illustrate using the basis S of R³ from the
earlier example, where the following vectors form a basis S of R³:

u₁ = ( ); u₂ = ( ); u₃ = ( )

and the coordinates of v = ( ) relative to the basis S are (v)ₛ = ( ). First

consider the space R³ with the usual x, y, z axes. Then the basis vectors determine
a new coordinate system of R³, say with x′, y′, z′ axes, as shown in the following
figure. That is;

i. The x′ axis is in the direction of u₁ with unit length ‖u₁‖

ii. The y′ axis is in the direction of u₂ with unit length ‖u₂‖
iii. The z′ axis is in the direction of u₃ with unit length ‖u₃‖

Then each vector v = (a, b, c), or equivalently the point P(a, b, c) in R³, will have

new coordinates with respect to the new x′, y′, z′ axes. These new coordinates are
precisely (v)ₛ, the coordinates of v with respect to the basis S. Thus, as shown in the
example, the coordinates of the point P with respect to the new axes form the vector
(v)ₛ.


Change of Basis

A basis that is suitable for one problem may not be suitable for another, so it is a
common process in the study of vector spaces to change from one basis to another.
Because a basis is the vector space generalization of a coordinate system, changing
bases is akin to changing coordinate axes in R2 and R3. In this section we shall
study problems related to change of basis.

In applications it is common to work with more than one coordinate system,


and in such cases it is usually necessary to know the relationships between the
coordinates of a fixed point or vector in the various coordinate systems. Since a
basis is the vector space generalization of a coordinate system, we are led to
consider the following problem.

Change-of-Basis Problem

If we change the basis for a vector space V from some old basis B to some new
basis , how is the old coordinate vector ( ) of a vector v related to the new
coordinate vector ( ) ?

Keep in mind: For simplicity, we will solve this problem for two-dimensional
spaces. The solution for n-dimensional spaces is similar and is left for the reader.
Let B = {u₁, u₂} and B′ = {u₁′, u₂′} be the old and new bases, respectively. We
will need the coordinate vectors for the new basis vectors relative to the old basis.
Suppose they are

[u₁′]_B = [a; b] and [u₂′]_B = [c; d]

That is, u₁′ = au₁ + bu₂ and u₂′ = cu₁ + du₂ ………….(i)

Now let v be any vector in V, and let [v]_B′ = [k₁; k₂] be the new coordinate vector,
so that v = k₁u₁′ + k₂u₂′ ………….(ii)

In order to find the old coordinates of v, we must express v in terms of the old basis
B. To do this, we substitute (i) into (ii). This yields

v = k₁(au₁ + bu₂) + k₂(cu₁ + du₂) or v = (k₁a + k₂c)u₁ + (k₁b + k₂d)u₂


Thus the old coordinate vector for v is [v]_B = [k₁a + k₂c; k₁b + k₂d]

which can be written as [v]_B = [a c; b d][k₁; k₂] = [a c; b d][v]_B′

This equation states that the old coordinate vector results when we multiply the
new coordinate vector [v]_B′ on the left by the matrix P = [a c; b d]

The columns of this matrix are the coordinates of the new basis vectors relative to
the old basis. Thus we have the following solution of the change-of-basis problem.

Solution of the Change-of-Basis Problem:

If we change the basis for a vector space V from the old basis B = {u₁, u₂, …, uₙ}
to the new basis B′ = {u₁′, u₂′, …, uₙ′}, then the old coordinate vector [v]_B of a
vector v is related to the new coordinate vector [v]_B′ of the same vector v by the
equation [v]_B = P[v]_B′, where the columns of P are the coordinate vectors of the
new basis vectors relative to the old basis; that is, the column vectors of P are
[u₁′]_B, [u₂′]_B, …, [uₙ′]_B

Transition Matrices

The matrix P is called the transition matrix from B′ to B; it can be expressed in

terms of its column vectors as P = [ [u₁′]_B | [u₂′]_B | ⋯ | [uₙ′]_B ]

Similarly, the matrix Q is called the transition matrix from B to B′; it can be

expressed in terms of its column vectors as
Q = [ [u₁]_B′ | [u₂]_B′ | ⋯ | [uₙ]_B′ ]

Example: Finding a Transition Matrix

Consider the bases B = {u₁, u₂} and B′ = {u₁′, u₂′} for R², where

u₁ = ( ), u₂ = ( ) and u₁′ = ( ), u₂′ = ( )

(a) Find the transition matrix from B′ to B.

(b) Find the transition matrix from B to B′.
(c) Find [v]_B if [v]_B′ = [ ]


Solution (a)

First we must find the coordinate vectors for the new basis vectors u₁′ and u₂′
relative to the old basis B. By inspection,

[u₁′]_B = [ ] and [u₂′]_B = [ ]

Thus the transition matrix from B′ to B is P = [ ]

Solution (b)

First we must find the coordinate vectors for the old basis vectors u₁ and u₂
relative to the new basis B′. By inspection,

[u₁]_B′ = [ ] and [u₂]_B′ = [ ]

Thus the transition matrix from B to B′ is Q = [ ]

Solution (c)

Using [v]_B = P[v]_B′ and the transition matrix P yields

[v]_B = P[v]_B′ = [ ]
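
Both transition matrices can also be computed mechanically: if the basis vectors are stacked as columns of matrices M_B and M_B′, then the transition matrix from B′ to B is M_B⁻¹M_B′ (computed with a solve rather than an explicit inverse). A Python sketch with illustrative bases, not necessarily those of the example:

import numpy as np

MB  = np.array([[1.0, 0.0], [0.0, 1.0]])  # columns u1, u2 (here the standard basis)
MBp = np.array([[1.0, 2.0], [1.0, 1.0]])  # columns u1', u2'

P = np.linalg.solve(MB, MBp)  # transition matrix from B' to B
Q = np.linalg.solve(MBp, MB)  # transition matrix from B to B'
print(np.allclose(P @ Q, np.eye(2)))  # True: Q is the inverse of P

vBp = np.array([-3.0, 5.0])   # a coordinate vector relative to B'
print(P @ vBp)                # [7. 2.] = the same vector's coordinates relative to B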

Remark:

If we multiply the transition matrix from B′ to B obtained in the previous Example (a)

and the transition matrix from B to B′ obtained in Example (b), we find

PQ = [ ][ ] = I

which shows that PQ = I; that is, P is invertible and its inverse

is Q. Thus we have the following theorem.


Theorem: If P is the transition matrix from a basis B′ to a basis B for a finite-

dimensional vector space V, then P is invertible, and P⁻¹ is the transition matrix
from B to B′.

Proof: Let Q be the transition matrix from B to B′. We shall show that

PQ = I and thus conclude that Q = P⁻¹ to complete the

proof.

Assume that B = {u₁, u₂, …, uₙ} and suppose that PQ = [cᵢⱼ]

Since [x]_B = P[x]_B′ and [x]_B′ = Q[x]_B for all x in V. Multiplying the second

equation through on the left by P and substituting into the first gives [x]_B = PQ[x]_B
for all x in V.

Letting x = u₁ in [x]_B = PQ[x]_B gives

[1; 0; …; 0] = PQ[1; 0; …; 0], so the first column of PQ is [1; 0; …; 0]

Similarly, successively substituting x = u₂, …, uₙ in [x]_B = PQ[x]_B yields the
remaining columns of the identity matrix.

Therefore, PQ = I.

To summarize, if P is the transition matrix from a basis B′ to a basis B, then for

every vector v, the following relationships hold:

[v]_B = P[v]_B′

And [v]_B′ = P⁻¹[v]_B


An efficient method for computing transition matrices for Rⁿ

Or a procedure to compute the transition matrix from B to B′

i. Form the matrix [B′ | B]

ii. Use elementary row operations to reduce the matrix in the previous step to
reduced row echelon form
iii. The resulting matrix will be [I | P]
iv. Extract the transition matrix P from the right side of [I | P]

This procedure is captured in the following diagram

[ new basis | old basis ] → row operations → [ I | transition from old to new ]

Example: Revisited

Consider the bases B = {u₁, u₂} and B′ = {u₁′, u₂′} for R², where

u₁ = ( ), u₂ = ( ) and u₁′ = ( ), u₂′ = ( )

(a) Find the transition matrix from B′ to B.

(b) Find the transition matrix from B to B′.

Solution (a): Here B′ is the old basis and B is the new basis. So,

[new basis | old basis] = [ | ]

Since the left side is already the identity matrix, no reduction is needed. We see by
inspection that the transition matrix from B′ to B is P = [ ]

Solution (b): Here B is the old basis and B′ is the new basis. So,

[new basis | old basis] = [ | ]
By reducing this matrix, so that the left side becomes the identity, we obtain
[I | P] = [ | ]

So the transition matrix from B to B′ is P = [ ]
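
The row-reduction recipe is easy to automate with a symbolic rref; this sketch (using sympy, with illustrative bases) forms [new basis | old basis] and reads the transition matrix off the right block:

from sympy import Matrix

new = Matrix([[1, 2], [1, 1]])  # columns: new basis vectors
old = Matrix([[1, 0], [0, 1]])  # columns: old basis vectors

aug, _ = Matrix.hstack(new, old).rref()  # row reduce [new | old] to [I | P]
P = aug[:, 2:]                           # transition matrix from old to new
print(P)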


Transition to the standard Basis for Rⁿ

Let B = {u₁, u₂, …, uₙ} be any basis for the vector space Rⁿ and let

S = {e₁, e₂, …, eₙ} be the standard basis for Rⁿ. If the vectors in these bases are
written in column form, then P = [u₁ | u₂ | ⋯ | uₙ] is the transition matrix from B to S

It follows from this theorem that if A = [u₁ | u₂ | ⋯ | uₙ] is any invertible n × n

matrix, then A can be viewed as the transition matrix from the basis {u₁, u₂, …, uₙ}

for Rⁿ to the standard basis for Rⁿ. Thus, for example, the matrix A = [ ]

which is invertible, is the transition matrix from the basis

u₁ = ( ), u₂ = ( ), u₃ = ( ) to the basis

e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1)

Practice:

1. Consider the bases B = {u₁, u₂} and B′ = {u₁′, u₂′} for R², where

u₁ = [ ], u₂ = [ ] and u₁′ = [ ], u₂′ = [ ]

(a) Find the transition matrix from B′ to B.

(b) Find the transition matrix from B to B′.
(c) Find [w]_B if [w]_B′ = [ ], and use [v]_B = P[v]_B′ to compute [w]_B
(d) Check your work by computing [w]_B directly.

2. Consider the bases B = {u₁, u₂} and B′ = {u₁′, u₂′} for R², where

u₁ = [ ], u₂ = [ ] and u₁′ = [ ], u₂′ = [ ]

(a) Find the transition matrix from B′ to B.

(b) Find the transition matrix from B to B′.
(c) Find [w]_B if [w]_B′ = [ ], and use [v]_B = P[v]_B′ to compute [w]_B
(d) Check your work by computing [w]_B directly.

3. Consider the bases B = {u₁, u₂, u₃} and B′ = {u₁′, u₂′, u₃′} for R³, where

u₁ = [ ], u₂ = [ ], u₃ = [ ] and u₁′ = [ ], u₂′ = [ ], u₃′ = [ ]

(a) Find the transition matrix from B to B′.

(b) Find [w]_B′ if [w]_B = [ ], and use [v]_B′ = P[v]_B to compute [w]_B′

(c) Check your work by computing [w]_B′ directly.

4. Consider the bases B = {u₁, u₂, u₃} and B′ = {u₁′, u₂′, u₃′} for R³, where

u₁ = [ ], u₂ = [ ], u₃ = [ ] and u₁′ = [ ], u₂′ = [ ], u₃′ = [ ]

(a) Find the transition matrix from B to B′.

(b) Find [w]_B′ if [w]_B = [ ], and use [v]_B′ = P[v]_B to compute [w]_B′

(c) Check your work by computing [w]_B′ directly.

5. Consider the bases B = {p₁, p₂} and B′ = {q₁, q₂} for P₁

(a) Find the transition matrix from B to B′.
(b) Find [p]_B if [p]_B′ = [ ], and use [v]_B = P[v]_B′ to
compute [p]_B

6. Consider the bases B = {f₁, f₂} and B′ = {g₁, g₂} for P₁, where the polynomials are given

(a) Find the transition matrix from B′ to B.

(b) Find the transition matrix from B to B′.
(c) Find [p]_B if [p]_B′ = [ ], and use [v]_B = P[v]_B′ to compute
[p]_B


Row Space, Column Space and Null Space

In this section we shall study three important vector spaces that are associated with
matrices. Our work here will provide us with a deeper understanding of the
relationships between the solutions of a linear system of equations and properties
of its coefficient matrix.

Row vectors and Column vectors

For an m × n matrix A = [aᵢⱼ],

the vectors r₁ = [a₁₁ a₁₂ … a₁ₙ]

r₂ = [a₂₁ a₂₂ … a₂ₙ]

⋮

rₘ = [aₘ₁ aₘ₂ … aₘₙ]

in Rⁿ formed from the rows of A are called the row vectors of A, and the vectors in
Rᵐ formed from the columns of A are called the column vectors of A.

Example: Row and Column Vectors in a Matrix

Let A = [ ]

The row vectors of A are r₁ = [ ], r₂ = [ ]

And the column vectors of A are c₁ = [ ], c₂ = [ ], c₃ = [ ]


Definition

 If A is an m × n matrix, then the subspace of Rⁿ spanned by the row

vectors of A is called the row space of A
 If A is an m × n matrix, then the subspace of Rᵐ spanned by the column
vectors of A is called the column space of A.
 The solution space of the homogeneous system of equations Ax = 0, which
is a subspace of Rⁿ, is called the null space of A.

In this section and the next we shall be concerned with the following two general
questions:

i. What relationships exist between the solutions of a linear system Ax = b

and the row space, column space, and null space of the coefficient matrix
A?
ii. What relationships exist among the row space, column space, and null
space of a matrix?

To investigate the first of these questions, we could use the following theorem,

Theorem: A system of linear equations Ax = b is consistent if and only if b is in

the column space of A.

Example: A Vector b in the Column Space of A

Let Ax = b be the linear system [ ][x] = [ ]

Show that b is in the column space of A, and express b as a linear combination of

the column vectors of A.

Solution: Solving the system by Gaussian elimination yields the values of x₁, x₂, x₃.

Since the system is consistent, b is in the column space of A. Moreover, from the
fact that “a linear system Ax = b of ‘m’ equations in ‘n’ unknowns can be
written as x₁c₁ + x₂c₂ + ⋯ + xₙcₙ = b” and the solution obtained, it follows
that


b = x₁c₁ + x₂c₂ + ⋯ + xₙcₙ

The next theorem establishes a fundamental relationship between the solutions of a

nonhomogeneous linear system Ax = b and those of the corresponding
homogeneous linear system Ax = 0 with the same coefficient matrix.

Theorem: If x₀ denotes any single solution of a consistent linear system Ax = b,

and if v₁, v₂, …, vₖ form a basis for the null space of A, that is, the
solution space of the homogeneous system Ax = 0, then every solution of Ax = b can be
expressed in the form x = x₀ + c₁v₁ + c₂v₂ + ⋯ + cₖvₖ and, conversely, for
all choices of scalars c₁, c₂, …, cₖ, the vector x in this formula is a solution of
Ax = b.

Proof: Assume that x₀ is any fixed solution of Ax = b and that x is an arbitrary

solution. Then Ax₀ = b and Ax = b

Subtracting these equations yields Ax − Ax₀ = 0 or A(x − x₀) = 0, which

shows that x − x₀ is a solution of the homogeneous system Ax = 0. Since
{v₁, v₂, …, vₖ} is a basis for the solution space of this system, we can express
x − x₀ as a linear combination of these vectors, say x − x₀ = c₁v₁ + c₂v₂ + ⋯ + cₖvₖ

Thus, x = x₀ + c₁v₁ + c₂v₂ + ⋯ + cₖvₖ

which proves the first part of the theorem.

Conversely:

For all choices of the scalars c₁, c₂, …, cₖ in x = x₀ + c₁v₁ + c₂v₂ + ⋯ + cₖvₖ,

we have Ax = A(x₀ + c₁v₁ + c₂v₂ + ⋯ + cₖvₖ)

Or Ax = Ax₀ + c₁(Av₁) + c₂(Av₂) + ⋯ + cₖ(Avₖ) = b + 0 + 0 + ⋯ + 0 = b

But x₀ is a solution of the nonhomogeneous system, and v₁, v₂, …, vₖ are

solutions of the homogeneous system, so the last equation implies that Ax = b,
which shows that x is a solution of Ax = b.
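
Numerically, one particular solution plus a null-space basis is enough to write down every solution. A minimal sketch using least squares for x₀ and scipy's orthonormal null-space basis (the system here is illustrative and underdetermined):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0]])  # rank 1, so the null space is 2-dimensional
b = np.array([3.0, 6.0])         # consistent right-hand side

x0, *_ = np.linalg.lstsq(A, b, rcond=None)  # a particular solution of Ax = b
N = null_space(A)                           # columns form a basis of the null space

c = np.array([1.0, -2.0])     # arbitrary scalars c1, c2
x = x0 + N @ c                # the general solution is x0 + N c
print(np.allclose(A @ x, b))  # True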


General and Particular Solutions

There is some terminology associated with the formula x = x₀ + c₁v₁ + c₂v₂ + ⋯ + cₖvₖ.

The vector x₀ is called a particular solution of Ax = b.

The expression x₀ + c₁v₁ + ⋯ + cₖvₖ is called the general solution of Ax = b,

and the expression c₁v₁ + ⋯ + cₖvₖ is called the general solution of Ax = 0. With this terminology,
the formula states that the general solution of Ax = b
is the sum of any particular solution of Ax = b and the general solution
of Ax = 0.

For linear systems with two or three unknowns, the theorem (above) has a nice
geometric interpretation in R² and R³. For example, consider the case where
Ax = 0 and Ax = b are linear systems with two unknowns. The solutions of
Ax = 0 form a subspace of R² and hence constitute a line through the origin, the
origin only, or all of R². From the theorem (above), the solutions of Ax = b can be
obtained by adding any particular solution of Ax = b, say x₀, to the solutions of
Ax = 0. Assuming that x₀ is positioned with its initial point at the origin, this has
the geometric effect of translating the solution space of Ax = 0, so that the point
at the origin is moved to the tip of x₀ (Figure). This means that the solution
vectors of Ax = b form a line through the tip of x₀, the point at the tip of x₀, or
all of R². Similarly, for linear systems with three unknowns, the solutions of
Ax = b constitute a plane through the tip of any particular solution x₀, a line
through the tip of x₀, the point at the tip of x₀, or all of R³.

Adding x₀ to each vector x in the solution space of Ax = 0 translates the solution space.


Example: General Solution of a Linear System

Consider the nonhomogeneous linear system

…………..(1)

and obtained

, , , , ,

This result can be written in vector form as

…………..(2)

[ ] [ ] [ ] [ ] [ ] [ ]

which is the general solution of (1). The vector x₀ in (2) is a particular solution of (1); the

linear combination x in (2) is the general solution of the homogeneous system Ax = 0.


Bases for Row Spaces, Column Spaces, and Null spaces

We first developed elementary row operations for the purpose of solving linear
systems, and we know from that work that performing an elementary row
operation on an augmented matrix does not change the solution set of the
corresponding linear system. It follows that applying an elementary row operation
to a matrix A does not change the solution set of the corresponding linear system
Ax = 0, or, stated another way, it does not change the null space of A. Thus we
have the following theorem.

Elementary row operations do not change the nullspace of a matrix.

Example: Basis for Null Space

Find a basis for the null space of A = [ ]

Solution: The null space of A is the solution space of the homogeneous system Ax = 0,

which has the basis

v₁ = [ ], v₂ = [ ]

The following theorem is a companion to previous theorem.

Elementary row operations do not change the row space of a matrix.


Example: Basis for Null Space

Find a basis for the null space of

A = [ ]

Solution:

The null space of A is the solution space of the homogeneous system Ax = 0,

which has the basis

[ ] [ ] [ ]
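
With a symbolic toolkit the same basis can be read off directly; a short sketch with an illustrative matrix (not the one above):

from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 3]])
basis = A.nullspace()  # list of column vectors spanning the null space of A
for v in basis:
    print(v.T)         # prints [-2, 1, 0, 0] and [-1, 0, -3, 1]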

Theorem:

If A and B are row equivalent matrices, then

(a) A given set of column vectors of A is linearly independent if and only if the
corresponding column vectors of B are linearly independent.
(b) A given set of column vectors of A forms a basis for the column space of A
if and only if the corresponding column vectors of B form a basis for the
column space of B.

The following theorem makes it possible to find bases for the row and column
spaces of a matrix in row-echelon form by inspection.

Theorem:

If a matrix R is in row-echelon form, then the row vectors with the leading 1's
(the nonzero row vectors) form a basis for the row space of R, and the column
vectors with the leading 1's of the row vectors form a basis for the column space
of R.


Example: Bases for Row and Column Spaces

The matrix [ ] is in row-echelon form. From Theorem

If a matrix R is in row-echelon form, then the row vectors with the leading 1's
(the nonzero row vectors) form a basis for the row space of R, and the column
vectors with the leading 1's of the row vectors form a basis for the column space
of R.

the vectors , - , - and


, - form a basis for the row space of R, and the vectors

[ ] [ ] [ ] form a basis for the column space of R

Example: Bases for Row and Column Spaces

Find bases for the row and column spaces of

[ ]

Solution: Since elementary row operations do not change the row space of a
matrix, we can find a basis for the row space of A by finding a basis for the row
space of any row-echelon form of A. Reducing A to row-echelon form, we obtain

[ ]

By the theorem “If a matrix R is in row-echelon form, then the row vectors with the
leading 1's (the nonzero row vectors) form a basis for the row space of R, and the
column vectors with the leading 1's of the row vectors form a basis for the column

space of R”, the nonzero row vectors of R form a basis for the row space of R and
hence form a basis for the row space of A.

These basis vectors are

, -

, -

, -

Keeping in mind that A and R may have different column spaces, we cannot find a
basis for the column space of A directly from the column vectors of R. However, it
follows from theorem

If A and B are row equivalent matrices, then a given set of column vectors of A
forms a basis for the column space of A if and only if the corresponding column
vectors of B form a basis for the column space of B.

that if we can find a set of column vectors of R that forms a basis for the column
space of R, then the corresponding column vectors of A will form a basis for the
column space of A.

The first, third, and fifth columns of R contain the leading 1's of the row vectors,
so

[ ] [ ] [ ]

form a basis for the column space of R; thus the corresponding column vectors of
A—namely,

[ ] [ ] [ ]

form a basis for the column space of A.
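
This pivot-column method is one short computation in practice: row reduce, read the pivot positions, and take those columns of the original matrix. A sketch with an illustrative matrix:

from sympy import Matrix

A = Matrix([[1, 2, 2],
            [2, 4, 5],
            [3, 6, 7]])
R, pivots = A.rref()
print(pivots)                      # (0, 2): the first and third columns are pivots
basis = [A[:, j] for j in pivots]  # basis for the column space of A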


Example: Basis for a Vector Space Using Row Operations

Find a basis for the space spanned by the vectors

( ) ( ) ( ) ( )

Solution: Except for a variation in notation, the space spanned by these vectors

is the row space of the matrix [ ]

Reducing this matrix to row-echelon form, we obtain [ ]

The nonzero row vectors in this matrix are

⃗⃗ ( ) ⃗⃗ ( ) ⃗⃗ ( )

These vectors form a basis for the row space and consequently form a basis for the
subspace of R5 spanned by and .

Example: Basis for the Row Space of a Matrix

Find a basis for the row space of [ ] consisting entirely

of row vectors from A.

Solution: We will transpose A, thereby converting the row space of A into the
column space of AT ; then we will find a basis for the column space of AT ; and
then we will transpose again to convert column vectors back to row vectors.

Transposing A yields

[ ]


Reducing this matrix to row-echelon form yields

[ ]

The first, second, and fourth columns contain the leading 1's, so the corresponding
column vectors in form a basis for the column space of ; these are

[ ] [ ] [ ]

Transposing again and adjusting the notation appropriately yields the basis vectors

, - , -

And , - for the row space of A.

Example: Basis and Linear Combinations

(a) Find a subset of the vectors ( ) ( )

( ) ( ) ( ) that forms a basis for the


subspace of R4 spanned by these vectors.

(b) Express each vector not in the basis as a linear combination of the basis
vectors.

Solution (a)

We begin by constructing a matrix that has , as its column

vectors: [ ]

The first part of our problem can be solved by finding a basis for the column space
of this matrix. Reducing the matrix to reduced row-echelon form and denoting the
column vectors of the resulting matrix by ⃗⃗ ⃗⃗ ⃗⃗ ⃗⃗ and ⃗⃗ yields


[ ]

The leading 1's occur in columns 1, 2, and 4, so by the theorem,

“If a matrix R is in row-echelon form, then the row vectors with the leading 1's
(the nonzero row vectors) form a basis for the row space of R, and the column
vectors with the leading 1's of the row vectors form a basis for the column space
of R,”

* ⃗⃗ ⃗⃗ ⃗⃗ + is a basis for the column space of [ ], and

consequently, * + is a basis for the column space of [ ].

Solution (b)

We shall start by expressing ⃗⃗ and ⃗⃗ as linear combinations of the basis vectors


⃗⃗ ⃗⃗ ⃗⃗ . The simplest way of doing this is to express ⃗⃗ and ⃗⃗ in terms of
basis vectors with smaller subscripts. Thus we shall express ⃗⃗ as a linear
combination of ⃗⃗ and ⃗⃗ , and we shall express ⃗⃗ as a linear combination of

⃗⃗ ⃗⃗ and ⃗⃗ . By inspection of [ ], these linear combinations

are ⃗⃗ ⃗⃗ ⃗⃗ and ⃗⃗ ⃗⃗ ⃗⃗ ⃗⃗

We call these the dependency equations. The corresponding relationships in

[ ] are and


The procedure illustrated in the preceding example is sufficiently important that we

shall summarize the steps:

Basis for the space spanned by a set of vectors:

Given a set of vectors S = {v₁, v₂, …, vₖ} in Rⁿ, the following procedure

produces a subset of these vectors that forms a basis for span(S) and expresses
those vectors of S that are not in the basis as linear combinations of the basis
vectors.

Step 1. Form the matrix A having v₁, v₂, …, vₖ as its column vectors.

Step 2. Reduce the matrix A to its reduced row-echelon form R, and let
w₁, w₂, …, wₖ be the column vectors of R.

Step 3. Identify the columns that contain the leading 1's in R. The corresponding
column vectors of A are the basis vectors for span(S).

Step 4. Express each column vector of R that does not contain a leading 1 as a
linear combination of preceding column vectors that do contain leading 1's. (You
will be able to do this by inspection.) This yields a set of dependency equations
involving the column vectors of R. The corresponding equations for the column
vectors of A express the vectors that are not in the basis as linear combinations of
the basis vectors.
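
The four steps translate directly into a short routine; a sympy-based sketch (with an illustrative input) that returns the pivot indices and the dependency coefficients of each non-basis column:

from sympy import Matrix

def span_basis(vectors):
    """Steps 1-4: column matrix, rref, pivot columns, dependency equations."""
    A = Matrix.hstack(*[Matrix(v) for v in vectors])
    R, pivots = A.rref()
    deps = {j: R[:len(pivots), j] for j in range(A.cols) if j not in pivots}
    return pivots, deps  # deps[j] holds the coefficients of vj over the basis

pivots, deps = span_basis([(1, 2, 3), (2, 4, 6), (0, 1, 1)])
print(pivots, deps)  # (0, 2) and {1: Matrix([[2], [0]])}, i.e. v2 = 2*v1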


Exercise:

1. Express the product Ax as a linear combination of the column vectors of A.


a) 0 10 1

b) [ ][ ]

c) [ ][ ]

d) 0 1[ ]

2. Determine whether b is in the column space of A, and if so, express b as a


linear combination of the column vector of A.

a) [ ] [ ]

b) [ ] [ ]

c) [ ] [ ]

d) [ ] [ ]

3. Suppose that is a solution of a non –


homogeneous linear system and that the solution set of the
homogeneous system is given by the formulas;

a) Find a vector form of the general solution of


b) Find a vector form of the general solution of


4. Suppose that is a solution of a non –


homogeneous linear system and that the solution set of the
homogeneous system is given by the formulas;

a) Find a vector form of the general solution of


b) Find a vector form of the general solution of

5. Find a vector form of the general solution of and then use that result
to find a vector form of the general solution of .
a)

b)

c)

d)

6. Find basis for the null space and row space of A.

a) [ ] b) [ ]

c) [ ]

d) [ ]


7. Here are matrices in row echelon form are given. By inspection, find a basis
for the row space and for the column space of those matrices.

a) [ ]
b) [ ]
c) [ ]
d) [ ]

8. Find the basis for row space as well as column space by row reduction.

[ ]

9. Find a basis for the row space of [ ] consisting

entirely of row vectors from A.

10.Find a basis for the subspace of R4 that is spanned by the given vectors.
a) ( )( )( )( )( )( )
b) ( )( )( )
c) ( )( )( )( )

11.Find a subset of the given vectors that forms a basis for the space spanned by
those vectors, and then express each vector that is not in the basis as a linear
combination of the basis vectors.
a) ( ) ( ) ( )
( ) ( )
b) ( ) ( ) ( ) ( )
c) ( ) ( ) ( )
( ) ( )


12.Find a basis for the row space of [ ] consisting

entirely of row vectors from A.

13.Construct a matrix whose null space consists of all linear combinations of

the vectors [ ] and [ ]

14.Find a matrix whose null space is

a) A point b) A line c) A space

15.Find a matrix whose null space is the line .


16.Describe the null space of the following matrices;
a) 0 1

b) 0 1

c) 0 1

d) 0 1


Chapter # 4

Inner product spaces


In this chapter we will generalize the ideas of length, angle, distance and
orthogonality. We will also discuss various applications of these ideas.

Norm of Vector:

Consider a vector v in Rⁿ; then the norm, length or size of the vector is the non-negative

square root of v·v, denoted by ‖v‖ and defined as follows;

‖v‖ = √(v·v) = √(v₁² + v₂² + ⋯ + vₙ²) for v = (v₁, v₂, …, vₙ)

Note:

a) ‖v‖ ≥ 0
b) ‖v‖ = 0 if and only if v = 0
c) ‖kv‖ = |k|‖v‖ for any scalar k
d) a vector is said to be a unit vector if ‖v‖ = 1, or equivalently v·v = 1

e) for any non-zero vector v in Rⁿ the vector v̂ = v/‖v‖ is the unit vector in the
same direction as v, and the process of finding v̂ is called normalizing v

Example:

 Consider ( ) in then ‖ ‖ √( ) √
 Also ( ) in
Then ‖ ‖ √ ( ) ( ) √

Example (Normalizing a vector):

Consider v = ( ). Then ‖v‖ = √(v·v) = √( )

Then clearly v̂ = v/‖v‖ = ( )
Thus v̂ is the unit vector that has the same direction as v
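
A numeric check of normalization (with an illustrative vector):

import numpy as np

v = np.array([3.0, 4.0])
n = np.linalg.norm(v)  # 5.0
v_hat = v / n          # unit vector in the direction of v
print(n, np.linalg.norm(v_hat))  # 5.0 1.0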


PRACTICE:

Find the norm of and a unit vector that is oppositely directed to

i. ( )
ii. ( )
iii. ( )
iv. ( )

Find ̂ then change its sign.

The Standard Unit Vectors in Rⁿ

Vectors given as follows are called Standard Unit Vectors in Rⁿ

e₁ = (1, 0, 0, …, 0), e₂ = (0, 1, 0, …, 0), …, eₙ = (0, 0, 0, …, 1)

Note:

f) î = (1, 0), ĵ = (0, 1) are called standard unit vectors in R²

g) î = (1, 0, 0), ĵ = (0, 1, 0), k̂ = (0, 0, 1) are called Standard unit vectors in R³
h) Every vector v = (v₁, v₂, …, vₙ) can be expressed as the linear
combination of standard unit vectors. e.g.
v = (v₁, v₂, …, vₙ) = v₁e₁ + v₂e₂ + ⋯ + vₙeₙ
Like
(2, −3, 4) = 2(1, 0, 0) − 3(0, 1, 0) + 4(0, 0, 1) = 2î − 3ĵ + 4k̂
Also
(7, 3, −4, 5) = 7(1, 0, 0, 0) + 3(0, 1, 0, 0) − 4(0, 0, 1, 0) + 5(0, 0, 0, 1)
= 7e₁ + 3e₂ − 4e₃ + 5e₄

Distance between vectors:

Consider two non-zero vectors u and v in Rⁿ, say u = (u₁, u₂, …, uₙ) and

v = (v₁, v₂, …, vₙ); then the distance between them is given as follows;

d(u, v) = ‖u − v‖ = √((u₁ − v₁)² + (u₂ − v₂)² + ⋯ + (uₙ − vₙ)²)


Example: if u = ( ) and v = ( ) then

d(u, v) = ‖u − v‖ = √((u₁ − v₁)² + (u₂ − v₂)² + (u₃ − v₃)² + (u₄ − v₄)²)

d(u, v) = √( ) = √( )

PRACTICE:

1) Evaluate the given expression with ⃗ ( ) , ( ) and


⃗⃗ ( )
i. ‖ ⃗ ‖
ii. ‖ ⃗ ‖ ‖ ‖
iii. ‖ ⃗ ‖
iv. ‖ ⃗ ⃗⃗ ‖
v. ‖ ⃗ ⃗⃗ ‖
vi. ‖ ⃗ ‖
vii. ‖ ‖ ‖ ‖
viii. ‖ ⃗ ‖ ‖ ‖

2) Evaluate the given expression with ⃗ ( ), ( ) and


⃗⃗ ( )
i. ‖ ⃗ ⃗⃗ ‖
ii. ‖ ⃗ ‖ ‖ ‖ ‖ ⃗⃗ ‖
iii. ‖ ⃗ ‖‖ ‖
iv. ‖ ⃗ ‖ ‖ ‖ ‖ ⃗⃗ ‖
v. ‖‖ ⃗ ‖ ⃗⃗ ‖

3) Let ( ). Find all scalars ‗k‘ such that ‖ ‖


4) Let ( ). Find all scalars ‗k‘ such that ‖ ‖


Dot Product:

Consider two vectors u and v in Rⁿ, say u = (u₁, u₂, …, uₙ) and

v = (v₁, v₂, …, vₙ); then their dot product or scalar product is given as follows;

u·v = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ

Both vectors u, v are said to be Orthogonal (perpendicular) if their inner product

vanishes, i.e. u·v = 0

Examples:

 If u = ( ) and v = ( ) then u·v = ( )
 If u = ( ) and v = ( ) then u·v = 0

PRACTICE:

1) Find ⃗ , ⃗ ⃗ , for the followings;


i. ⃗ ( ) and ( )
ii. ⃗ ( ) and ( )
iii. ⃗ ( ) and ( )
iv. ⃗ ( ) and ( )

2) Suppose that a vector in the xy – plane has a length of 9 units and points in
a direction that is counterclockwise from the positive x – axis, and a
vector ⃗ in the plane has a length of 5 units and points in the positive y –
direction . find ⃗

Algebraic Properties of the Dot Product:

If u, v and w are vectors in Rⁿ and if k is a scalar, then

i. u·v = v·u (symmetry property)
ii. u·(v + w) = u·v + u·w (distributive property)
iii. k(u·v) = (ku)·v (homogeneity property)
iv. v·v ≥ 0 and v·v = 0 if and only if v = 0 (positivity property)


Proof of Algebraic Properties of the Dot Product:

If u = (u₁, …, uₙ), v = (v₁, …, vₙ) and w = (w₁, …, wₙ) are the

vectors in Rⁿ and if k is a scalar, then

i. u·v = v·u (symmetry property)
Proof:
u·v = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ = v₁u₁ + v₂u₂ + ⋯ + vₙuₙ = v·u

ii. u·(v + w) = u·v + u·w (distributive property)

Proof:
u·(v + w)
= (u₁, …, uₙ)·[(v₁, …, vₙ) + (w₁, …, wₙ)]
= (u₁, …, uₙ)·(v₁ + w₁, …, vₙ + wₙ)
= u₁(v₁ + w₁) + ⋯ + uₙ(vₙ + wₙ)

= (u₁v₁ + ⋯ + uₙvₙ) + (u₁w₁ + ⋯ + uₙwₙ)

= u·v + u·w

iii. k(u·v) = (ku)·v (homogeneity property)

Proof:
k(u·v) = k[u₁v₁ + u₂v₂ + ⋯ + uₙvₙ]
= (ku₁)v₁ + (ku₂)v₂ + ⋯ + (kuₙ)vₙ
= (ku)·v

iv. v·v ≥ 0 and v·v = 0 if and only if v = 0 (positivity property)

Proof:
v·v = v₁² + v₂² + ⋯ + vₙ²
= ‖v‖² ≥ 0, since each vᵢ² ≥ 0
Obviously v·v = 0 if and only if v₁ = v₂ = ⋯ = vₙ = 0, i.e. v = 0


Theorem: If u, v and w are vectors in Rⁿ and if k is a scalar, then

i. 0·v = v·0 = 0
ii. u·(v − w) = u·v − u·w
iii. k(u·v) = u·(kv)

Proof

If u = (u₁, …, uₙ), v = (v₁, …, vₙ) and w = (w₁, …, wₙ) are the

vectors in Rⁿ and if k is a scalar, then

i. 0·v = v·0 = 0
Proof:
0·v = (0, 0, …, 0)·(v₁, v₂, …, vₙ)
= 0v₁ + 0v₂ + ⋯ + 0vₙ = 0
v·0 = (v₁, v₂, …, vₙ)·(0, 0, …, 0) = 0

Hence 0·v = v·0 = 0

ii. u·(v − w) = u·v − u·w
Proof:
u·(v − w)
= (u₁, …, uₙ)·[(v₁, …, vₙ) − (w₁, …, wₙ)]
= (u₁, …, uₙ)·(v₁ − w₁, …, vₙ − wₙ)
= u₁(v₁ − w₁) + ⋯ + uₙ(vₙ − wₙ)

= (u₁v₁ + ⋯ + uₙvₙ) − (u₁w₁ + ⋯ + uₙwₙ)

= u·v − u·w

iii. k(u·v) = u·(kv)
Proof:
k(u·v) = k[u₁v₁ + u₂v₂ + ⋯ + uₙvₙ]
= u₁(kv₁) + u₂(kv₂) + ⋯ + uₙ(kvₙ)
= u·(kv)


Example: Calculate (u + 2v)·(3u − v) using the algebraic properties.

(u + 2v)·(3u − v) = u·(3u − v) + 2v·(3u − v)

= 3(u·u) − (u·v) + 6(v·u) − 2(v·v)

= 3‖u‖² − (u·v) + 6(u·v) − 2‖v‖²

= 3‖u‖² + 5(u·v) − 2‖v‖²

Dot Product (by using norm):

Consider two vectors u and v in R² or R³, say u = (u₁, u₂, u₃) and

v = (v₁, v₂, v₃), and let θ be the angle between u and v; then their dot product,
inner product (Euclidean inner product) or scalar product is given as follows;

u·v = ‖u‖‖v‖ cos θ

Examples:

If u = ( ) and v = ( ), and θ is the angle between them,

then u·v = ‖u‖‖v‖ cos θ = √( ) √( ) cos θ

Angles between vectors:

Consider two non-zero vectors u and v in Rⁿ, say u = (u₁, u₂, …, uₙ) and

v = (v₁, v₂, …, vₙ); then the angle between them is given as follows;

cos θ = (u·v)/(‖u‖‖v‖), i.e. θ = cos⁻¹[(u·v)/(‖u‖‖v‖)] where 0 ≤ θ ≤ π

Remark:

 θ is acute if u·v > 0
 θ is obtuse if u·v < 0
 θ = π/2 if u·v = 0


PRACTICE:

Find the Euclidean distance ‖ ⃗ ‖ between ⃗ and and the cosine of the angles
between those vectors. State whether that angle is acute, obtuse or

i. ⃗ ( ) and ( )
ii. ⃗ ( ) and ( )
iii. ⃗ ( ) and ( )
iv. ⃗ ( ) and ( )

Cauchy Schwarz Inequality (Proved Later):

Consider two non-zero vectors u and v in Rⁿ, say u = (u₁, u₂, …, uₙ)

and v = (v₁, v₂, …, vₙ); then the Cauchy Schwarz inequality will be as follows;

|u·v| ≤ ‖u‖‖v‖

And in terms of components this will be written as;

|u₁v₁ + u₂v₂ + ⋯ + uₙvₙ| ≤ (u₁² + ⋯ + uₙ²)^(1/2) (v₁² + ⋯ + vₙ²)^(1/2)

Examples: If u = ( ) and v = ( )

then |u·v| = | |

and ‖u‖ = √( ) and ‖v‖ = √( ), so ‖u‖‖v‖ = √( )√( ) = √( )

Clearly |u·v| ≤ ‖u‖‖v‖, as the inequality is satisfied.
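
The inequality is easy to stress-test numerically over many random pairs (a sketch):

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v = rng.standard_normal(4), rng.standard_normal(4)
    assert abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12
print("Cauchy Schwarz held in all trials")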

PRACTICE: Verify that the Cauchy Schwarz Inequality holds.

i. ⃗ ( ) and ( )
ii. ⃗ ( ) and ( )
iii. ⃗ ( ) and ( )


Triangle Inequality for Vectors:

If u and v are in Rⁿ, then the triangle inequality will be as follows;

‖u + v‖ ≤ ‖u‖ + ‖v‖

Proof: Consider ‖u + v‖² = (u + v)·(u + v) = (u·u) + 2(u·v) + (v·v)

‖u + v‖² = ‖u‖² + 2(u·v) + ‖v‖² ≤ ‖u‖² + 2|u·v| + ‖v‖²

using the property of absolute value,

‖u + v‖² ≤ ‖u‖² + 2‖u‖‖v‖ + ‖v‖², using the Cauchy Schwarz inequality,

‖u + v‖² ≤ (‖u‖ + ‖v‖)² ⟹ ‖u + v‖ ≤ ‖u‖ + ‖v‖

Triangle Inequality for Distances:

If u, v and w are in Rⁿ, then the triangle inequality will be as follows;

d(u, v) ≤ d(u, w) + d(w, v)

Proof: Since we know that d(u, v) = ‖u − v‖ = ‖u − w + w − v‖

d(u, v) = ‖(u − w) + (w − v)‖ ≤ ‖u − w‖ + ‖w − v‖ = d(u, w) + d(w, v)

d(u, v) ≤ d(u, w) + d(w, v)

Parallelogram equation (law) for Vectors:

If u and v are in Rⁿ, then the parallelogram law will be as follows;

‖u + v‖² + ‖u − v‖² = 2(‖u‖² + ‖v‖²)

Proof:

Consider ‖u + v‖² + ‖u − v‖² = (u + v)·(u + v) + (u − v)·(u − v)

= (u·u) + 2(u·v) + (v·v) + (u·u) − 2(u·v) + (v·v)

= 2(u·u) + 2(v·v) = 2‖u‖² + 2‖v‖²

‖u + v‖² + ‖u − v‖² = 2(‖u‖² + ‖v‖²)


Theorem: If u and v are in Rⁿ, then u·v = ¼‖u + v‖² − ¼‖u − v‖²

Proof: Consider ‖u + v‖² = (u + v)·(u + v) = (u·u) + 2(u·v) + (v·v)

‖u + v‖² = ‖u‖² + 2(u·v) + ‖v‖²

Also ‖u − v‖² = (u − v)·(u − v) = (u·u) − 2(u·v) + (v·v)

‖u − v‖² = ‖u‖² − 2(u·v) + ‖v‖²

Then clearly ¼‖u + v‖² − ¼‖u − v‖² = u·v

Orthogonal (Perpendicular) vectors:

Consider two non-zero vectors u and v in Rⁿ, say u = (u₁, u₂, …, uₙ)

and v = (v₁, v₂, …, vₙ); then these vectors are said to be orthogonal if u·v = 0

We will also agree that the zero vector in Rⁿ is orthogonal to every vector in Rⁿ

Example: Show that u = (−2, 3, 1, 4) and v = (1, 2, 0, −1) are orthogonal in R⁴

Solution:

u·v = (−2)(1) + (3)(2) + (1)(0) + (4)(−1) = −2 + 6 + 0 − 4 = 0

Thus the given vectors are orthogonal vectors in R⁴

Example: Let S = { î, ĵ, k̂ } be the set of standard unit vectors in R³. Show that each
ordered pair of vectors in S is orthogonal.

Solution: We have to show that î·ĵ = 0, î·k̂ = 0, ĵ·k̂ = 0

î·ĵ = (1)(0) + (0)(1) + (0)(0) = 0

î·k̂ = (1)(0) + (0)(0) + (0)(1) = 0

ĵ·k̂ = (0)(0) + (1)(0) + (0)(1) = 0

Thus the result.

We may also show ĵ·î = 0, k̂·î = 0, k̂·ĵ = 0


PRACTICE:

Determine whether the vectors are orthogonal or not.

i. ⃗ ( ) and ( )
ii. ⃗ ( ) and ( )
iii. ⃗ ( ) and ( )
iv. ⃗ ( ) and ( )
v. ⃗ ( ) and ( )
vi. ⃗ ( ) and ( )
vii. ⃗ and ( )
viii. ⃗ ( ) and ( )

Projection of vectors onto (along) another vector:

Consider two non-zero vectors u and v in Rⁿ, say u = (u₁, u₂, …, uₙ)

and v = (v₁, v₂, …, vₙ); then the projection of vector u onto (along) a non-zero
vector v is given as follows;

proj_v u = [(u·v)/‖v‖²] v = [(u·v)/(v·v)] v (vector component of u onto v)

This is also called the orthogonal projection of u on v

Projection of vectors orthogonal to another vector:

Consider two non-zero vectors u and v in Rⁿ, say u = (u₁, u₂, …, uₙ)

and v = (v₁, v₂, …, vₙ); then the projection of vector u orthogonal to a non-zero
vector v is given as follows;

u − proj_v u = u − [(u·v)/‖v‖²] v (vector component of u orthogonal to v)


Example: Let u = ( ) and v = ( ). Find the vector component of u

along v and the vector component of u orthogonal to v.

Solution: We have u = ( ) and v = ( )

u·v = ( )

‖v‖² = ( )

Then the vector component of u along (onto) v is as follows;

proj_v u = [(u·v)/‖v‖²] v = ( )

Also the vector component of u orthogonal to v is as follows;

u − proj_v u = u − [(u·v)/‖v‖²] v = ( )
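
The decomposition is two lines of code, and orthogonality can be confirmed with a dot product (illustrative vectors):

import numpy as np

u = np.array([2.0, -1.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

proj = (u @ v) / (v @ v) * v  # component of u along v
perp = u - proj               # component of u orthogonal to v
print(np.isclose(perp @ v, 0.0))    # True
print(np.allclose(proj + perp, u))  # True: the two components recover u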

PRACTICE:

1) Find ‖ ⃗ ⃗ ‖ for given vectors.

i. ⃗ ( ) and ( )
ii. ⃗ ( ) and ( )
iii. ⃗ ( ) and ( )
iv. ⃗ ( ) and ( )

2) Find the vector component of ⃗ along and the vector component of ⃗


orthogonal to .
i. ⃗ ( ) and ( )
ii. ⃗ ( ) and ( )
iii. ⃗ ( ) and ( )
iv. ⃗ ( ) and ( )
v. ⃗ ( ) and ( )


Theorem of Pythagoras in Rⁿ

If u and v are orthogonal vectors in Rⁿ with the Euclidean inner product, then

‖u + v‖² = ‖u‖² + ‖v‖²

Proof: Since u and v are orthogonal, therefore u·v = 0

‖u + v‖² = (u + v)·(u + v) = (u·u) + 2(u·v) + (v·v)

‖u + v‖² = ‖u‖² + 2(u·v) + ‖v‖² = ‖u‖² + ‖v‖², since u·v = 0

Example:

Consider u = ( ) and v = ( ); then verify the Pythagoras Theorem.

Solution: Given that u = ( ) and v = ( )

Then u·v = 0, also ‖u + v‖² = ( ) and ‖u‖² + ‖v‖² = ( )

Clearly ‖u + v‖² = ‖u‖² + ‖v‖²

Cross Product:

Consider two vectors u and v in R³, say u = (u₁, u₂, u₃) and v = (v₁, v₂, v₃); then

their cross product, outer product or vector product is given as follows;

u × v = det[ î ĵ k̂ ; u₁ u₂ u₃ ; v₁ v₂ v₃ ]
= |u₂ u₃; v₂ v₃| î − |u₁ u₃; v₁ v₃| ĵ + |u₁ u₂; v₁ v₂| k̂
= (u₂v₃ − u₃v₂)î − (u₁v₃ − u₃v₁)ĵ + (u₁v₂ − u₂v₁)k̂

Both vectors u, v are said to be parallel if their cross product vanishes, i.e. u × v = 0

Example: Consider u = ( ) and v = ( ); then their cross product will

be as follows

u × v = det[ î ĵ k̂ ; u₁ u₂ u₃ ; v₁ v₂ v₃ ] = ( )î + ( )ĵ + ( )k̂


Scalar Triple Product:

Consider three vectors u, v and w in R³, say u = (u₁, u₂, u₃), v = (v₁, v₂, v₃) and

w = (w₁, w₂, w₃); then their scalar triple product is given as follows;

u · (v × w) = det[ u₁ u₂ u₃ ; v₁ v₂ v₃ ; w₁ w₂ w₃ ]

Example: Consider u = ( ), v = ( ) and w = ( ); then their

scalar triple product will be as follows

u · (v × w) = det[ u₁ u₂ u₃ ; v₁ v₂ v₃ ; w₁ w₂ w₃ ] = ( )

Keep in mind: u · (v × w) = w · (u × v) = v · (w × u)
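
Both the determinant formula and the cyclic identity can be verified directly (illustrative vectors):

import numpy as np

u = np.array([3.0, -2.0, -5.0])
v = np.array([1.0, 4.0, -4.0])
w = np.array([0.0, 3.0, 2.0])

t1 = u @ np.cross(v, w)                  # u · (v × w)
t2 = np.linalg.det(np.array([u, v, w]))  # 3 × 3 determinant with rows u, v, w
print(np.isclose(t1, t2))                  # True
print(np.isclose(t1, w @ np.cross(u, v)))  # True: u·(v×w) = w·(u×v)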

PRACTICE:

1. Let ⃗ ( ) ( ) ⃗⃗ ( ) then find the indicated


operations.
i. ⃗⃗ vii. ⃗
ii. ⃗⃗ viii. (⃗ )
iii. ( ⃗ ) ⃗⃗ ix. ⃗ ( ⃗⃗ )
iv. ( ⃗⃗ ) x. ⃗⃗ ( ⃗⃗ )
v. xi. ⃗⃗ ⃗⃗
vi. ( ⃗ ⃗⃗ ) ( ⃗ ⃗⃗ ) xii. ( ⃗) ( ⃗)


Field: A non-empty set F is called a field if

 F is Abelian group under addition.


 F – {0} is Abelian group under multiplication.
 Distributive law holds in F.

Note: Elements of a Field called Scalars.

Inner Product Spaces

Let V be a real vector space. Suppose to each pair of vectors u, v ∈ V there is
assigned a real number, denoted by ⟨u, v⟩. This function is called a (real) inner
product on V if it satisfies the following axioms:

i. (Linear Property): ⟨au₁ + bu₂, v⟩ = a⟨u₁, v⟩ + b⟨u₂, v⟩.
ii. (Symmetric Property): ⟨u, v⟩ = ⟨v, u⟩.
iii. (Positive Definite Property): ⟨u, u⟩ ≥ 0; and ⟨u, u⟩ = 0 if and only if
u = 0.

The vector space V with an inner product is called a (real) inner product space.

Axiom (i) states that an inner product function is linear in the first position. Using
(i) and the symmetry axiom (ii), we obtain

⟨u, cv₁ + dv₂⟩ = ⟨cv₁ + dv₂, u⟩ = c⟨v₁, u⟩ + d⟨v₂, u⟩ = c⟨u, v₁⟩ + d⟨u, v₂⟩

That is, the inner product function is also linear in its second position. Combining
these two properties and using induction yields the following general formula:

⟨∑ᵢ aᵢuᵢ, ∑ⱼ bⱼvⱼ⟩ = ∑ᵢ ∑ⱼ aᵢbⱼ⟨uᵢ, vⱼ⟩

That is, an inner product of linear combinations of vectors is equal to a linear

combination of the inner products of the vectors.

We may define the above axioms as follows

i. (Additivity Axiom): ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩.
ii. (Symmetry Axiom): ⟨u, v⟩ = ⟨v, u⟩.
iii. (Homogeneity Axiom): ⟨ku, v⟩ = k⟨u, v⟩.
iv. (Positivity Axiom): ⟨v, v⟩ ≥ 0; and ⟨v, v⟩ = 0 if and only if v = 0.


Because the axioms for a real inner product space are based on properties of the dot
product, these inner product space axioms will be satisfied automatically if we
define the inner product of two vectors u and v in Rⁿ to be

⟨u, v⟩ = u·v = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ

This inner product is commonly called the Euclidean Inner Product (or the
Standard Inner Product) on Rⁿ, to distinguish it from other possible inner
products that might be defined on Rⁿ. We call Rⁿ with the Euclidean Inner Product
Euclidean n-space.

Example:

Let V be a real inner product space. Then, by linearity,

⟨au₁ + bu₂, v⟩ = a⟨u₁, v⟩ + b⟨u₂, v⟩
⟨u, cv₁ + dv₂⟩ = c⟨u, v₁⟩ + d⟨u, v₂⟩

⟨au + bv, cu + dv⟩ = ac⟨u, u⟩ + ad⟨u, v⟩ + bc⟨v, u⟩ + bd⟨v, v⟩

= ac⟨u, u⟩ + (ad + bc)⟨u, v⟩ + bd⟨v, v⟩

Observe that in the last equation we have used the symmetry property that
⟨u, v⟩ = ⟨v, u⟩.

Norm (length) of a Vector

By the axiom “⟨u, u⟩ ≥ 0; and ⟨u, u⟩ = 0 if and only if u = 0,” ⟨u, u⟩ is

nonnegative for any vector u. Thus, its positive square root exists. We use the
notation ‖u‖ = √⟨u, u⟩. This nonnegative number is called the norm or length
of u. The relation ‖u‖² = ⟨u, u⟩ will be used frequently. Also remember a vector
of norm 1 is called a unit vector.

Distance between two Vectors

For this we use the notation d(u, v) = ‖u − v‖ = √⟨u − v, u − v⟩


Examples of Inner Product Spaces

This section lists the main examples of inner product spaces used in this text.

Although the Euclidean inner product is the most important inner product on
n
R . However, there are various applications in which it is desirable to modify the
Euclidean inner product by weighting its terms differently. More precisely, if
are positive real numbers, which we shall call weights, and if
⃗ ( ) and ( ) are vectors in Rn , then it can be
shown that the formula 〈 ⃗ 〉 defines an
n
inner product on R ; it is called the weighted Euclidean inner product with
weights .

Example: Weighted Euclidean Inner Product

Let ⃗ ( ) and ( ) be vectors in R2. Verify that the weighted


Euclidean inner product 〈 ⃗ 〉 satisfies the four inner product
axioms.

Solution

Axiom 1: If u and v are interchanged in this equation, the right side remains the
same. Therefore, 〈u, v〉 = 〈v, u〉.

Axiom 2: If w = (w₁, w₂), then

〈u + v, w〉 = 3(u₁ + v₁)w₁ + 2(u₂ + v₂)w₂

〈u + v, w〉 = (3u₁w₁ + 2u₂w₂) + (3v₁w₁ + 2v₂w₂)

〈u + v, w〉 = 〈u, w〉 + 〈v, w〉

Axiom 3: 〈ku, v〉 = 3(ku₁)v₁ + 2(ku₂)v₂ = k(3u₁v₁ + 2u₂v₂) = k〈u, v〉

Axiom 4: 〈v, v〉 = 3v₁v₁ + 2v₂v₂ = 3v₁² + 2v₂² ≥ 0; and 〈v, v〉 = 0 if and
only if v₁ = v₂ = 0, that is, if and only if v = 0.
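
As a quick numeric spot-check of the same four axioms (the weights 3 and 2 are as reconstructed in this example), the sketch below tests them on random vectors; this illustrates the algebra but does not replace the proof above.

```python
# Spot-check symmetry, additivity, homogeneity, and positivity for
# <u, v> = 3*u1*v1 + 2*u2*v2 on randomly chosen vectors in R^2.
import random

def ip(u, v):
    return 3 * u[0] * v[0] + 2 * u[1] * v[1]

random.seed(1)
rand_vec = lambda: (random.uniform(-5, 5), random.uniform(-5, 5))
u, v, w = rand_vec(), rand_vec(), rand_vec()
k = random.uniform(-5, 5)

assert abs(ip(u, v) - ip(v, u)) < 1e-12                 # symmetry
s = (u[0] + v[0], u[1] + v[1])
assert abs(ip(s, w) - (ip(u, w) + ip(v, w))) < 1e-12    # additivity
ku = (k * u[0], k * u[1])
assert abs(ip(ku, v) - k * ip(u, v)) < 1e-12            # homogeneity
assert ip(v, v) >= 0 and ip((0, 0), (0, 0)) == 0        # positivity
print("all four axioms hold on this sample")
```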


Example: Using a Weighted Euclidean Inner Product

It is important to keep in mind that norm and distance depend on the inner product
being used. If the inner product is changed, then the norms and distances between
vectors also change. For example, for the vectors u = (1, 0) and v = (0, 1) in R²
with the Euclidean inner product, we have

‖u‖ = 1 and d(u, v) = ‖u − v‖ = ‖(1, −1)‖ = √(1² + (−1)²) = √2

However, if we change to the weighted Euclidean inner product
〈u, v〉 = 3u₁v₁ + 2u₂v₂, then we obtain

‖u‖ = √〈u, u〉 = √(3(1)(1) + 2(0)(0)) = √3

d(u, v) = ‖u − v‖ = √〈u − v, u − v〉 = √〈(1, −1), (1, −1)〉

d(u, v) = √(3(1)(1) + 2(−1)(−1)) = √5
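
The short sketch below replays this computation (with the weights 3 and 2 as reconstructed above), making the dependence of norm and distance on the inner product explicit.

```python
# The same two vectors have different norms and distances under the
# Euclidean and the weighted Euclidean inner products.
import math

def euclid(u, v):
    return u[0] * v[0] + u[1] * v[1]

def weighted(u, v):
    return 3 * u[0] * v[0] + 2 * u[1] * v[1]

u, v = (1, 0), (0, 1)
diff = (u[0] - v[0], u[1] - v[1])

print(math.sqrt(euclid(u, u)))        # ||u|| = 1.0
print(math.sqrt(euclid(diff, diff)))  # d(u, v) = sqrt(2)
print(math.sqrt(weighted(u, u)))        # ||u|| = sqrt(3)
print(math.sqrt(weighted(diff, diff)))  # d(u, v) = sqrt(5)
```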

Practice:

1. Let R² have the weighted Euclidean inner product 〈u, v〉 = … and let
u = (…), v = (…), w = (…), and k = … be as given; then compute the
stated quantities:
a) 〈u, v〉
b) 〈kv, w〉
c) 〈u + v, w〉
d) ‖v‖
e) d(u, v)
f) ‖u − kv‖

2. Let R² have the weighted Euclidean inner product 〈u, v〉 = … and let
u = (…), v = (…), w = (…), and k = … be as given; then compute the
stated quantities:
a) 〈u, v〉
b) 〈kv, w〉
c) 〈u + v, w〉
d) ‖v‖
e) d(u, v)
f) ‖u − kv‖


Euclidean n-Space Rⁿ

Consider the vector space Rⁿ. The dot product or scalar product in Rⁿ is defined by

u · v = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ

This function defines an inner product on Rⁿ.

The norm ‖u‖ of the vector u = (u₁, u₂, …, uₙ) in this space is as follows:

‖u‖ = √(u · u) = √(u₁² + u₂² + ⋯ + uₙ²)

On the other hand, by the Pythagorean theorem, the distance from the origin O in
R³ to a point P(a, b, c) is given by √(a² + b² + c²). This is precisely the same
as the above-defined norm of the vector v = (a, b, c) in R³. Because the
Pythagorean theorem is a consequence of the axioms of Euclidean geometry, the
vector space Rⁿ with the above inner product and norm is called Euclidean n-
space. Although there are many ways to define an inner product on Rⁿ, we shall
assume this inner product unless otherwise stated or implied. It is called the usual
(or standard) inner product on Rⁿ.

Remark: Frequently the vectors in Rⁿ will be represented by column vectors, that
is, by n × 1 column matrices. In such a case, the formula 〈u, v〉 = uᵀv defines the
usual inner product on Rⁿ.

Example: Let u, v, w be the given vectors in R⁴.

(a) Show 〈au + bv, w〉 = a〈u, w〉 + b〈v, w〉 for the given scalars a and b.

By definition, 〈u, w〉 and 〈v, w〉 are computed directly from the dot product
formula. Forming au + bv componentwise and computing 〈au + bv, w〉 then yields
the same number as a〈u, w〉 + b〈v, w〉, as linearity requires.

(b) Normalize u and v:

Since ‖u‖ = √〈u, u〉 and ‖v‖ = √〈v, v〉, we normalize u and v to obtain the
following unit vectors in the directions of u and v, respectively:

û = u/‖u‖ and v̂ = v/‖v‖
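
As a concrete sketch of both parts, the vectors and scalars below are hypothetical stand-ins (the example's own values are not reproduced here); the code replays part (a)'s linearity check and part (b)'s normalization.

```python
# Linearity check and normalization in R^4 with numpy.
import numpy as np

u = np.array([1.0, 2.0, 0.0, -1.0])   # hypothetical
v = np.array([2.0, -1.0, 1.0, 3.0])   # hypothetical
w = np.array([0.0, 1.0, -2.0, 1.0])   # hypothetical
a, b = 3.0, -2.0                       # hypothetical scalars

# (a) <a*u + b*v, w> equals a*<u, w> + b*<v, w>
lhs = np.dot(a * u + b * v, w)
rhs = a * np.dot(u, w) + b * np.dot(v, w)
assert abs(lhs - rhs) < 1e-12

# (b) unit vectors in the directions of u and v
u_hat = u / np.linalg.norm(u)
v_hat = v / np.linalg.norm(v)
print(np.linalg.norm(u_hat), np.linalg.norm(v_hat))  # both ~ 1.0
```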


Unit Circles and Spheres in Inner Product Spaces

If V is an inner product space, then the set of points in V that satisfy ‖u‖ = 1 is
called the unit sphere or sometimes the unit circle in V. In R² and R³ these are the
points that lie 1 unit away from the origin.

Example: Unusual Unit Circles in R²

(a) Sketch the unit circle in an xy-coordinate system in R² using the Euclidean
inner product 〈u, v〉 = u₁v₁ + u₂v₂.
(b) Sketch the unit circle in an xy-coordinate system in R² using a weighted
Euclidean inner product 〈u, v〉 = w₁u₁v₁ + w₂u₂v₂ with positive weights
w₁ ≠ w₂.

Solution (a)

If u = (x, y), then ‖u‖ = √〈u, u〉 = √(x² + y²), so the equation of the unit circle
is √(x² + y²) = 1, or, on squaring both sides, x² + y² = 1.

As expected, the graph of this equation is a circle of radius 1 centered at the origin.

Solution (b)

If u = (x, y), then ‖u‖ = √〈u, u〉 = √(w₁x² + w₂y²), so the equation of the unit
circle is √(w₁x² + w₂y²) = 1, or, on squaring both sides, w₁x² + w₂y² = 1. The
graph of this equation is an ellipse, so the unit "circle" is not circular with respect
to this inner product.
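
The sketch below traces points of such a weighted unit "circle" by rescaling direction vectors to weighted norm 1; the weights 3 and 2 are illustrative choices, not values from the example.

```python
# Trace points on the curve w1*x^2 + w2*y^2 = 1 by normalizing
# direction vectors under the weighted norm.
import math

w1, w2 = 3.0, 2.0   # illustrative positive weights

def weighted_norm(x, y):
    return math.sqrt(w1 * x * x + w2 * y * y)

points = []
for k in range(8):
    t = 2 * math.pi * k / 8
    x, y = math.cos(t), math.sin(t)
    r = weighted_norm(x, y)
    points.append((x / r, y / r))   # rescaled onto the unit "circle"

# Every rescaled point has weighted norm 1; together they trace an ellipse.
assert all(abs(weighted_norm(x, y) - 1) < 1e-12 for x, y in points)
print(points[:3])
```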


Practice:

1. Compute the stated quantities using the inner product on R² generated by the
given matrix, for the given vectors u, v, w and scalar k:
a) 〈u, v〉
b) 〈kv, w〉
c) 〈u + v, w〉
d) ‖v‖
e) d(u, v)
f) ‖u − kv‖
2. Find ‖u‖ and d(u, v) relative to the weighted Euclidean inner product
〈u, v〉 = … on R²:
a) u = (…) and v = (…)
b) u = (…) and v = (…)
3. Sketch the unit circle in R² using the given inner products:
a) 〈u, v〉 = …
b) 〈u, v〉 = …


The Standard Inner Product on Pₙ

If p = a₀ + a₁x + ⋯ + aₙxⁿ and q = b₀ + b₁x + ⋯ + bₙxⁿ are
polynomials in Pₙ, then the following formula defines an inner product on Pₙ that
we call the standard inner product on Pₙ:

〈p, q〉 = a₀b₀ + a₁b₁ + ⋯ + aₙbₙ

The norm of a polynomial p relative to this inner product is

‖p‖ = √〈p, p〉 = √(a₀² + a₁² + ⋯ + aₙ²)
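
A minimal sketch of this inner product, representing each polynomial by its coefficient list (a₀, a₁, …, aₙ); the two polynomials are illustrative.

```python
# Standard inner product on P_n via coefficient vectors.
import math

def std_inner(p, q):
    # <p, q> = a0*b0 + a1*b1 + ... + an*bn
    return sum(a * b for a, b in zip(p, q))

def std_norm(p):
    return math.sqrt(std_inner(p, p))

p = [1, -2, 3]   # 1 - 2x + 3x^2  (illustrative)
q = [4, 0, -1]   # 4 - x^2        (illustrative)

print(std_inner(p, q))   # 1*4 + (-2)*0 + 3*(-1) = 1
print(std_norm(p))       # sqrt(1 + 4 + 9) = sqrt(14)
```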

The Evaluation Inner Product on Pₙ

If p = p(x) = a₀ + a₁x + ⋯ + aₙxⁿ and q = q(x) = b₀ + b₁x + ⋯ + bₙxⁿ are
polynomials in Pₙ, and if x₀, x₁, …, xₙ are distinct real numbers (called sample
points), then the formula

〈p, q〉 = p(x₀)q(x₀) + p(x₁)q(x₁) + ⋯ + p(xₙ)q(xₙ)

defines an inner product on Pₙ called the evaluation inner product at x₀, x₁, …, xₙ.
Algebraically this can be viewed as the dot product in Rⁿ⁺¹ of the (n + 1)-tuples
(p(x₀), p(x₁), …, p(xₙ)) and (q(x₀), q(x₁), …, q(xₙ)), and hence the first three
inner product axioms follow from properties of the dot product.

The fourth inner product axiom follows from the fact that

〈p, p〉 = [p(x₀)]² + [p(x₁)]² + ⋯ + [p(xₙ)]² ≥ 0

with equality holding if and only if p(x₀) = p(x₁) = ⋯ = p(xₙ) = 0.

But a nonzero polynomial of degree ‗n‘ or less can have at most ‗n‘ distinct
roots, so it must be that p = 0, which proves that the fourth inner product axiom
holds.

The norm of a polynomial p relative to the evaluation inner product is

‖p‖ = √〈p, p〉 = √([p(x₀)]² + [p(x₁)]² + ⋯ + [p(xₙ)]²)

Visit us @ Youtube: “Learning With Usman Hamid”


P a g e | 256

Example

Let P₂ have the evaluation inner product at the sample points x₀ = −2, x₁ = 0,
x₂ = 2; then compute 〈p, q〉 and ‖p‖ for the polynomials

p = p(x) = x² and q = q(x) = 1 + x

Solution

〈p, q〉 = p(x₀)q(x₀) + p(x₁)q(x₁) + p(x₂)q(x₂)

〈p, q〉 = p(−2)q(−2) + p(0)q(0) + p(2)q(2) = (4)(−1) + (0)(1) + (4)(3) = 8

‖p‖ = √([p(−2)]² + [p(0)]² + [p(2)]²) = √(16 + 0 + 16) = √32 = 4√2
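
The short sketch below replays this computation with the sample points and polynomials as reconstructed above.

```python
# Evaluation inner product at sample points -2, 0, 2.
import math

def eval_inner(p, q, points):
    return sum(p(x) * q(x) for x in points)

def eval_norm(p, points):
    return math.sqrt(eval_inner(p, p, points))

pts = [-2, 0, 2]
p = lambda x: x ** 2
q = lambda x: 1 + x

print(eval_inner(p, q, pts))   # (4)(-1) + (0)(1) + (4)(3) = 8
print(eval_norm(p, pts))       # sqrt(16 + 0 + 16) = sqrt(32) = 4*sqrt(2)
```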

Practice:

1. Find the standard inner product on P₂ of the given polynomials:
a) p = … and q = …
b) p = … and q = …
2. In the following exercise, a sequence of sample points is given. Use the
evaluation inner product on P₂ at those sample points to find 〈p, q〉 for the
given polynomials p and q:
a) sample points …
b) sample points …
3. Find d(p, q) and ‖p‖ relative to the evaluation inner product on P₂ at the
stated sample points:
a) p = … and q = …
b) p = … and q = …
4. Find d(p, q) and ‖p‖ relative to the evaluation inner product on P₂ at the
stated sample points:
a) …
b) …


Function Space C[a, b] and Polynomial Space P(t)

The notation C[a, b] is used to denote the vector space of all continuous
functions on the closed interval [a, b], that is, for a ≤ t ≤ b. The following
defines an inner product on C[a, b], where f(t) and g(t) are functions in
C[a, b]:

〈f, g〉 = ∫ₐᵇ f(t)g(t) dt, and it is called the usual inner product on C[a, b].

The vector space P(t) of all polynomials is a subspace of C[a, b] for any
interval [a, b], and hence, the above is also an inner product on P(t).

Example

Show that 〈f, g〉 = ∫ₐᵇ f(t)g(t) dt defines an inner product on C[a, b].

Solution: Axiom 1: 〈f, g〉 = ∫ₐᵇ f(t)g(t) dt = ∫ₐᵇ g(t)f(t) dt = 〈g, f〉

Axiom 2: 〈f + g, h〉 = ∫ₐᵇ (f(t) + g(t))h(t) dt

〈f + g, h〉 = ∫ₐᵇ f(t)h(t) dt + ∫ₐᵇ g(t)h(t) dt = 〈f, h〉 + 〈g, h〉

Axiom 3: 〈kf, g〉 = ∫ₐᵇ kf(t)g(t) dt = k ∫ₐᵇ f(t)g(t) dt = k〈f, g〉

Axiom 4: 〈f, f〉 = ∫ₐᵇ f(t)f(t) dt = ∫ₐᵇ f(t)² dt ≥ 0

And 〈f, f〉 = 0 if and only if f = 0
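
A small sketch of this integral inner product using sympy for exact integration; the interval [0, 1] and the two functions are illustrative choices.

```python
# Integral inner product <f, g> = integral of f(t)*g(t) over [a, b].
import sympy as sp

t = sp.symbols('t')

def inner(f, g, a, b):
    return sp.integrate(f * g, (t, a, b))

f = t          # illustrative
g = t ** 2     # illustrative

print(inner(f, g, 0, 1))           # integral of t^3 over [0, 1] = 1/4
print(sp.sqrt(inner(f, f, 0, 1)))  # ||f|| = sqrt(1/3)
```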

Norm of a vector in C[a, b]

If C[a, b] has the inner product 〈f, g〉 = ∫ₐᵇ f(t)g(t) dt, then the norm of a
function f = f(t) relative to this inner product is

‖f‖ = √〈f, f〉 = √(∫ₐᵇ f(t)f(t) dt) = √(∫ₐᵇ f(t)² dt)

And the unit sphere in this space consists of all functions f in C[a, b] that
satisfy the equation ∫ₐᵇ f(t)² dt = 1


Remember, the following define other norms on C[a, b]:

 ‖f‖₁ = ∫ₐᵇ |f(t)| dt, geometrically the area between the graph of f and the
t-axis; the corresponding distance function is d₁(f, g) = ∫ₐᵇ |f(t) − g(t)| dt,
the area between the graphs of f and g.

 ‖f‖∞ = max |f(t)| over a ≤ t ≤ b, the maximum distance between the graph
of f and the t-axis; the corresponding distance function is
d∞(f, g) = max |f(t) − g(t)|, the maximum distance between the graphs of
f and g.
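
A rough numeric sketch of these two norms follows, assuming the interval [0, 1] and an illustrative function; both values are grid approximations, not exact integrals.

```python
# Approximate the area norm with a Riemann sum and the max norm by sampling.
N = 100_000
h = 1.0 / N

f = lambda t: t - 0.5   # illustrative function on [0, 1]

ts = [k * h for k in range(N)]
area_norm = sum(abs(f(t)) for t in ts) * h   # ~ integral of |f| = 1/4
max_norm = max(abs(f(t)) for t in ts)        # ~ max |f(t)| = 1/2

print(area_norm, max_norm)
```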


Example:

Consider f(t) = 3t − 5 and g(t) = t² in the polynomial space P(t) with inner
product 〈f, g〉 = ∫₀¹ f(t)g(t) dt; then find 〈f, g〉, ‖f‖, and ‖g‖.

Solution: Using the defined inner product,

〈f, g〉 = ∫₀¹ f(t)g(t) dt = ∫₀¹ (3t − 5)(t²) dt = ∫₀¹ (3t³ − 5t²) dt

〈f, g〉 = [3t⁴/4 − 5t³/3]₀¹ = 3/4 − 5/3 = −11/12

‖f‖² = 〈f, f〉 = ∫₀¹ f(t)f(t) dt = ∫₀¹ (3t − 5)² dt = ∫₀¹ (9t² − 30t + 25) dt

‖f‖² = [3t³ − 15t² + 25t]₀¹ = 3 − 15 + 25 = 13

‖g‖² = 〈g, g〉 = ∫₀¹ g(t)g(t) dt = ∫₀¹ t⁴ dt = [t⁵/5]₀¹ = 1/5

Then clearly ‖f‖ = √13 and ‖g‖ = √(1/5) = √5/5
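
The values in this example (with f and g as reconstructed above) can be checked symbolically:

```python
# Verify <f, g> = -11/12, ||f|| = sqrt(13), ||g|| = sqrt(5)/5 with sympy.
import sympy as sp

t = sp.symbols('t')
f, g = 3 * t - 5, t ** 2

ip = lambda u, v: sp.integrate(u * v, (t, 0, 1))

print(ip(f, g))           # -11/12
print(sp.sqrt(ip(f, f)))  # sqrt(13)
print(sp.sqrt(ip(g, g)))  # sqrt(5)/5
```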

Practice:

1. Let the vector space P₂ have the inner product 〈p, q〉 = ∫ p(x)q(x) dx over
the given interval; then find the following for the given polynomials p and q:
a) 〈p, q〉
b) ‖p‖
c) ‖q‖
d) d(p, q)
2. Let the vector space P₂ have the inner product 〈p, q〉 = ∫ p(x)q(x) dx over
the given interval; then find the following for the given polynomials p and q:
a) 〈p, q〉
b) ‖p‖
c) ‖q‖
d) d(p, q)
3. Use the inner product 〈f, g〉 = ∫ f(x)g(x) dx over the given interval to
compute 〈f, g〉:
a) f = … and g = …
b) f = … and g = …
