
Engineering Mathematics

This document provides an overview of systems of linear equations and matrices. It discusses: 1) How systems of linear equations can be organized into matrices and solved using operations like Gaussian elimination that manipulate the rows of an augmented matrix. 2) Gaussian elimination involves elementary row operations like multiplying rows by constants, interchanging rows, and adding multiples of rows to reduce a system of linear equations to reduced row echelon form. 3) A matrix is in reduced row echelon form if it has leading 1s, zeros above the leading 1s, and each leading 1 is further right than the one above. This form uniquely determines the solution to the linear system.


ENGINEERING MATHEMATICS H/O 1

1. SYSTEMS OF LINEAR EQUATIONS & MATRICES


Information in science & mathematics is often organized into rows and columns to form rectangular arrays called matrices. Operations on matrices are important in developing computer programs to solve systems of linear equations, because computers are well suited to manipulating arrays of numerical information.

1.1 Introduction to systems of linear equations
A linear equation in the variables x & y is an equation of the form

    a1 x + a2 y = b

where a1, a2 & b are constants & a1 & a2 are not both zero. The variables are sometimes called unknowns. Note that all variables occur only to the 1st power & do not appear as arguments of trigonometric, logarithmic, or exponential functions.

Examples of linear equations:  x + 3y = 7,   y = (1/2)x + 3z + 1.
Examples of non-linear equations:  x + 3√y = 5,   y = sin x.

A solution of a linear equation

    a1 x1 + a2 x2 + ... + an xn = b

is a sequence of n numbers s1, s2, ..., sn such that the equation is satisfied when we substitute x1 = s1, x2 = s2, ..., xn = sn. The set of all solutions of the equation is called its solution set, or sometimes the general solution of the equation.

EXAMPLE: Find the solution set of (a) 4x - 2y = 1,  (b) x1 - 4x2 + 7x3 = 5.

SOLUTION (a): Let x = t, where t is a parameter (an arbitrary number); then y = 2t - 1/2. Each value of t yields values for x & y in the solution set. E.g. if t = 3, then x = 3 and

    y = 2 X 3 - 1/2 = 11/2

Hence x = 3, y = 11/2 is a numerical solution of 4x - 2y = 1 for t = 3. Alternatively, let y = t; then x = (1/2)t + 1/4. This parametrization yields the same solution set as above.

SOLUTION (b): We assign arbitrary values to any two variables & solve for the 3rd variable, i.e. let s = x2, t = x3 & solve for x1. We obtain

    x1 = 5 + 4s - 7t,  x2 = s,  x3 = t.
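The parametric solution of (b) can be checked numerically. A small sketch (the function name `solution` is ours, not from the handout):

```python
# Check the general solution x1 = 5 + 4s - 7t, x2 = s, x3 = t
# of the equation x1 - 4x2 + 7x3 = 5 for several parameter values.

def solution(s, t):
    """Return (x1, x2, x3) for the parameters s and t."""
    return (5 + 4 * s - 7 * t, s, t)

for s in (-2, 0, 3):
    for t in (-1, 0, 5):
        x1, x2, x3 = solution(s, t)
        # every choice of s and t must satisfy the original equation
        assert x1 - 4 * x2 + 7 * x3 == 5
```

Whatever values of s and t are chosen, substituting the resulting (x1, x2, x3) back into the equation gives 5, which is what it means for the formulas to be a general solution.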

LINEAR SYSTEMS
A finite set of linear equations in the variables x1, x2, ..., xn is called a system of linear equations, or a linear system. For example

    4x1 - x2 + 3x3 = -1
    3x1 + x2 + 9x3 = -4

has the solution x1 = 1, x2 = 2, x3 = -1, since these values satisfy both equations.

A solution must satisfy every equation in the system, not just one of them. A system of equations that has no solutions is said to be inconsistent; if there is at least one solution, the system is called consistent. For a system of two equations in x and y, each equation is a line (L1 and L2) and there are three possibilities:

(a) L1 and L2 are parallel: no solution.
(b) L1 and L2 intersect at exactly one point: exactly one solution.
(c) L1 and L2 coincide: infinitely many points of intersection, and therefore infinitely many solutions.

Augmented matrices: A matrix is a rectangular array of numbers. A system of linear equations can be abbreviated by its augmented matrix, which has the form

    [ a11  a12  ...  a1n | b1 ]
    [ a21  a22  ...  a2n | b2 ]
    [  .    .          . |  . ]
    [ am1  am2  ...  amn | bm ]

Remark: when constructing an augmented matrix, the unknowns must be written in the same order in each equation, and the constants must be on the right.

1.2 Gaussian Elimination
The basic method for solving a system of linear equations is to replace the given system by a new system that has the same solution set but which is easier to solve. Since the rows of an augmented matrix correspond to the equations in the associated system, the following operations are used to eliminate the unknowns systematically:
1) Multiply a row through by a non-zero constant.
2) Interchange two rows.
3) Add a multiple of one row to another row.
These are called elementary row operations. Example:

Solve the system of linear equations below by operating on the equations in the system (the corresponding augmented matrix is shown after each step).

    x  +  y + 2z = 9            [ 1  1   2 |  9 ]
    2x + 4y - 3z = 1            [ 2  4  -3 |  1 ]
    3x + 6y - 5z = 0            [ 3  6  -5 |  0 ]

Add -2 times the 1st equation to the 2nd to obtain:

    x  +  y + 2z =  9           [ 1  1   2 |   9 ]
         2y - 7z = -17          [ 0  2  -7 | -17 ]
    3x + 6y - 5z =  0           [ 3  6  -5 |   0 ]

Add -3 times the 1st equation to the 3rd to obtain:

    x +  y +  2z =  9           [ 1  1    2 |   9 ]
        2y -  7z = -17          [ 0  2   -7 | -17 ]
        3y - 11z = -27          [ 0  3  -11 | -27 ]

Multiply the 2nd equation by 1/2:

    x + y +     2z =  9         [ 1  1    2   |    9  ]
        y - (7/2)z = -17/2      [ 0  1  -7/2  | -17/2 ]
        3y -  11z  = -27        [ 0  3  -11   |  -27  ]

Add -3 X the 2nd equation to the 3rd:

    x + y +     2z =  9         [ 1  1    2   |    9  ]
        y - (7/2)z = -17/2      [ 0  1  -7/2  | -17/2 ]
           -(1/2)z = -3/2       [ 0  0  -1/2  |  -3/2 ]

Multiply the 3rd equation by -2:

    x + y +     2z =  9         [ 1  1    2   |    9  ]
        y - (7/2)z = -17/2      [ 0  1  -7/2  | -17/2 ]
                 z =  3         [ 0  0    1   |    3  ]

Add -1 X the 2nd equation to the 1st:

    x +    (11/2)z =  35/2      [ 1  0  11/2  |  35/2 ]
        y - (7/2)z = -17/2      [ 0  1  -7/2  | -17/2 ]
                 z =   3        [ 0  0    1   |    3  ]

Add -11/2 X the 3rd equation to the 1st and 7/2 X the 3rd equation to the 2nd:

    x = 1                       [ 1  0  0 | 1 ]
    y = 2                       [ 0  1  0 | 2 ]
    z = 3                       [ 0  0  1 | 3 ]

The solution of the system is x = 1, y = 2, z = 3.
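The row operations above can be sketched as a short program. This is an illustrative sketch, not part of the handout: the name `rref` is ours, and exact rational arithmetic (`Fraction`) is used so the leading 1s and the fractions such as -7/2 come out exactly.

```python
# Gauss-Jordan elimination on a matrix given as a list of rows.
from fractions import Fraction

def rref(matrix):
    """Return the reduced row-echelon form of a matrix."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # find a row with a non-zero entry in this column and swap it up
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        # scale the pivot row to introduce a leading 1
        m[pivot_row] = [x / m[pivot_row][col] for x in m[pivot_row]]
        # clear every other entry in the pivot column
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

# augmented matrix of the worked example above
example = [[1, 1, 2, 9], [2, 4, -3, 1], [3, 6, -5, 0]]
print(rref(example))  # last column gives x = 1, y = 2, z = 3
```

Running this on the augmented matrix of the example reproduces the final matrix of the elimination, with the solution in the last column.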

This is an example of a matrix that is in reduced row-echelon form. To be in this form, a matrix must have the following properties:
(1) If a row does not consist entirely of zeros, then the first non-zero number in the row is a 1. We call this a leading 1.
(2) If there are any rows that consist entirely of zeros, then they are grouped together at the bottom of the matrix.
(3) In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs further to the right than the leading 1 in the higher row.
(4) Each column that contains a leading 1 has zeros everywhere else.
A matrix with the first three properties is said to be in row-echelon form.
Examples: (a) matrices in reduced row-echelon form:

    [ 1  0  0   4 ]    [ 1  0  0 ]    [ 0  1  -2  0  1 ]
    [ 0  1  0   7 ]    [ 0  1  0 ]    [ 0  0   0  1  3 ]
    [ 0  0  1  -1 ]    [ 0  0  1 ]    [ 0  0   0  0  0 ]
                                      [ 0  0   0  0  0 ]

and a matrix in row-echelon form (but not reduced row-echelon form):

    [ 1  4  -3  7 ]
    [ 0  1   6  2 ]
    [ 0  0   1  5 ]

(b) Example: Suppose that the augmented matrix for a system of linear equations has been reduced by row operations to the reduced row-echelon form below. Solve the system.

    [ 1  0  0  4 | -1 ]
    [ 0  1  0  2 |  6 ]
    [ 0  0  1  3 |  2 ]

Solution: The corresponding system of equations is

    x1           + 4 x4 = -1
         x2      + 2 x4 =  6
              x3 + 3 x4 =  2

x1, x2 and x3 are called leading variables, and x4 is a free variable. Solving for the leading variables gives

    x1 = -1 - 4 x4
    x2 =  6 - 2 x4
    x3 =  2 - 3 x4

Let x4 = t (an arbitrary value); this then determines the values of the leading variables. Thus there are infinitely many solutions, and the general solution is given by the formulas

    x1 = -1 - 4t,   x2 = 6 - 2t,   x3 = 2 - 3t,   x4 = t.

Elimination method
This method can be used to reduce any matrix to reduced row-echelon form. Example:
    [ 0  0  -2   0   7  12 ]
    [ 2  4 -10   6  12  28 ]
    [ 2  4  -5   6  -5  -1 ]

Step 1: locate the leftmost column that does not consist entirely of zeros (here, the first column).

Step 2: interchange the top row with another row, if necessary, to bring a non-zero entry to the top of the column found in step 1.

    [ 2  4 -10   6  12  28 ]   the 1st and 2nd rows were interchanged
    [ 0  0  -2   0   7  12 ]
    [ 2  4  -5   6  -5  -1 ]

Step 3: if the entry that is now at the top of the column found in step 1 is a, multiply the first row by 1/a in order to introduce a leading 1.

    [ 1  2  -5   3   6  14 ]   the 1st row was multiplied by 1/2
    [ 0  0  -2   0   7  12 ]
    [ 2  4  -5   6  -5  -1 ]

Step 4: add suitable multiples of the top row to the rows below so that all entries below the leading 1 become zeros.

    [ 1  2  -5   3    6   14 ]   -2 X the 1st row was added to the 3rd
    [ 0  0  -2   0    7   12 ]
    [ 0  0   5   0  -17  -29 ]

Step 5: now cover the top row of the matrix and begin again with step 1 applied to the submatrix that remains (its leftmost non-zero column is the third). Continue in this way until the entire matrix is in row-echelon form.

    [ 1  2  -5   3    6   14 ]
    [ 0  0   1   0  -7/2  -6 ]   the 1st row of the submatrix was multiplied by -1/2 to introduce a leading 1
    [ 0  0   5   0  -17  -29 ]

    [ 1  2  -5   3    6   14 ]
    [ 0  0   1   0  -7/2  -6 ]   -5 X the 1st row of the submatrix was added to the 2nd row of the submatrix
    [ 0  0   0   0   1/2    1 ]

    [ 1  2  -5   3    6   14 ]
    [ 0  0   1   0  -7/2  -6 ]   the top row of the new submatrix was covered; its 1st (and only) row
    [ 0  0   0   0    1     2 ]   was multiplied by 2 to introduce a leading 1

The entire matrix is now in row-echelon form. To find the reduced row-echelon form we need the following additional step.
Step 6: beginning with the last non-zero row and working upward, add suitable multiples of each row to the rows above to introduce zeros above the leading 1s.

    [ 1  2  -5   3   6  14 ]
    [ 0  0   1   0   0   1 ]   7/2 X the 3rd row was added to the 2nd row
    [ 0  0   0   0   1   2 ]

    [ 1  2  -5   3   0   2 ]   -6 X the 3rd row was added to the 1st row
    [ 0  0   1   0   0   1 ]
    [ 0  0   0   0   1   2 ]

    [ 1  2   0   3   0   7 ]   5 X the 2nd row was added to the 1st row
    [ 0  0   1   0   0   1 ]
    [ 0  0   0   0   1   2 ]

The last matrix is in reduced row-echelon form. If only the first five steps are used, the procedure produces a row-echelon form and is called Gaussian elimination. Carrying the procedure through the sixth step, producing a matrix in reduced row-echelon form, is called Gauss-Jordan elimination.
Remark: every matrix has a unique reduced row-echelon form; that is, one will arrive at the same reduced row-echelon form for a given matrix no matter how the row operations are varied. In contrast, a row-echelon form of a given matrix is not unique: different sequences of row operations can produce different row-echelon forms.
Example: solve by Gauss-Jordan elimination.
     x1 + 3 x2 - 2 x3          + 2 x5          =  0
    2 x1 + 6 x2 - 5 x3 -  2 x4 + 4 x5 -  3 x6  = -1
                 5 x3 + 10 x4          + 15 x6 =  5
    2 x1 + 6 x2        +  8 x4 + 4 x5 + 18 x6  =  6

Solution: The augmented matrix of the system is

    [ 1  3  -2   0  2   0 |  0 ]
    [ 2  6  -5  -2  4  -3 | -1 ]
    [ 0  0   5  10  0  15 |  5 ]
    [ 2  6   0   8  4  18 |  6 ]

Step 1: adding -2 times the first row to the second and fourth rows gives (the affected rows are the second and fourth)

    [ 1  3  -2   0  2   0 |  0 ]
    [ 0  0  -1  -2  0  -3 | -1 ]
    [ 0  0   5  10  0  15 |  5 ]
    [ 0  0   4   8  0  18 |  6 ]

Step 2: multiplying the 2nd row by -1 and then adding -5 X the new second row to the 3rd row and -4 X the new second row to the 4th row gives

    [ 1  3  -2  0  2  0 | 0 ]
    [ 0  0   1  2  0  3 | 1 ]
    [ 0  0   0  0  0  0 | 0 ]
    [ 0  0   0  0  0  6 | 2 ]

Step 3: interchanging the 3rd row and the 4th row and then multiplying the third row of the resulting matrix by 1/6 gives the row-echelon form

    [ 1  3  -2  0  2  0 |  0  ]
    [ 0  0   1  2  0  3 |  1  ]
    [ 0  0   0  0  0  1 | 1/3 ]
    [ 0  0   0  0  0  0 |  0  ]

Step 4: adding -3 times the third row to the second row and then adding 2 X the second row of the resulting matrix to the first row yields the reduced row-echelon form

    [ 1  3  0  4  2  0 |  0  ]
    [ 0  0  1  2  0  0 |  0  ]
    [ 0  0  0  0  0  1 | 1/3 ]
    [ 0  0  0  0  0  0 |  0  ]

The corresponding system of equations is

    x1 + 3 x2      + 4 x4 + 2 x5 = 0
                x3 + 2 x4        = 0
                              x6 = 1/3

Solving for the leading variables:

    x1 = -3 x2 - 4 x4 - 2 x5
    x3 = -2 x4
    x6 = 1/3

If x2 = r, x4 = s, x5 = t, the general solution is given by the formulas

    x1 = -3r - 4s - 2t,  x2 = r,  x3 = -2s,  x4 = s,  x5 = t,  x6 = 1/3.

Back-substitution
It is sometimes preferable to solve a system of linear equations by using Gaussian elimination to bring the augmented matrix into row-echelon form without continuing all the way to the reduced row-echelon form. When this is done, the corresponding system of equations can be solved by a technique called back-substitution. Example:

    [ 1  3  -2  0  2  0 |  0  ]
    [ 0  0   1  2  0  3 |  1  ]
    [ 0  0   0  0  0  1 | 1/3 ]
    [ 0  0   0  0  0  0 |  0  ]

To solve the corresponding system of equations

    x1 + 3 x2 - 2 x3        + 2 x5        = 0
                x3 + 2 x4        + 3 x6   = 1
                                      x6  = 1/3

Step 1: solve the equations for the leading variables.

    x1 = -3 x2 + 2 x3 - 2 x5
    x3 = 1 - 2 x4 - 3 x6
    x6 = 1/3

Step 2: beginning with the bottom equation and working upward, substitute each equation into all the equations above it. Substituting x6 = 1/3 into the second equation yields

    x1 = -3 x2 + 2 x3 - 2 x5
    x3 = -2 x4
    x6 = 1/3

Substituting x3 = -2 x4 into the first equation yields

    x1 = -3 x2 - 4 x4 - 2 x5
    x3 = -2 x4
    x6 = 1/3

Step 3: assign arbitrary values to the free variables, if any. With x2 = r, x4 = s, & x5 = t, the general solution is given by the formulas

    x1 = -3r - 4s - 2t,  x2 = r,  x3 = -2s,  x4 = s,  x5 = t,  x6 = 1/3.

Note: The arbitrary values that are assigned to the free variables are called parameters. Read further examples on Gaussian elimination.
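Step 2 above, for the square upper-triangular case, can be sketched in code (the names `back_substitute`, `U` and `c` are ours):

```python
# Back-substitution for a square upper-triangular system U x = c.
def back_substitute(U, c):
    n = len(c)
    x = [0.0] * n
    # work upward from the bottom equation
    for i in range(n - 1, -1, -1):
        # subtract the already-known terms, then divide by the diagonal entry
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

# Row-echelon system from the first worked example:
# x + y + 2z = 9,  y - (7/2)z = -17/2,  z = 3.
U = [[1, 1, 2], [0, 1, -3.5], [0, 0, 1]]
c = [9, -8.5, 3]
print(back_substitute(U, c))  # [1.0, 2.0, 3.0]
```

This recovers x = 1, y = 2, z = 3 from the row-echelon form alone, without carrying the elimination through to reduced row-echelon form.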

Homogeneous linear systems
A system of linear equations is said to be homogeneous if the constant terms are all zero; that is, the system has the form

    a11 x1 + a12 x2 + ... + a1n xn = 0
    a21 x1 + a22 x2 + ... + a2n xn = 0
       .        .              .     .
    am1 x1 + am2 x2 + ... + amn xn = 0

Every homogeneous system of linear equations is consistent, since all such systems have x1 = 0, x2 = 0, ..., xn = 0 as a solution. This solution is called the trivial solution; if there are other solutions, they are called non-trivial solutions. Because a homogeneous linear system always has the trivial solution, there are only two possibilities for its solutions:

(i) The system has only the trivial solution. For two lines a1 x + b1 y = 0 and a2 x + b2 y = 0 (a1, b1 not both zero; a2, b2 not both zero), this is the case where the lines intersect only at the origin.
(ii) The system has infinitely many solutions in addition to the trivial solution. For the two lines above, this is the case where the lines coincide.

Case (ii) always occurs when the system involves more unknowns than equations.
Example (Gauss-Jordan elimination): solve the following homogeneous linear system of equations using Gauss-Jordan elimination.

    2 x1 + 2 x2 -   x3          +  x5 = 0
     -x1 -   x2 + 2 x3 - 3 x4   +  x5 = 0
      x1 +   x2 - 2 x3          -  x5 = 0
                    x3 +   x4   +  x5 = 0

SOLUTION: The augmented matrix for the system is

    [  2   2  -1   0   1 | 0 ]
    [ -1  -1   2  -3   1 | 0 ]
    [  1   1  -2   0  -1 | 0 ]
    [  0   0   1   1   1 | 0 ]

Reducing this matrix to reduced row-echelon form gives

    [ 1  1  0  0  1 | 0 ]
    [ 0  0  1  0  1 | 0 ]
    [ 0  0  0  1  0 | 0 ]
    [ 0  0  0  0  0 | 0 ]

The corresponding system of equations is

    x1 + x2      + x5 = 0
              x3 + x5 = 0
                   x4 = 0

Solving for the leading variables yields

    x1 = -x2 - x5
    x3 = -x5
    x4 = 0

Thus, the general solution is

    x1 = -s - t,  x2 = s,  x3 = -t,  x4 = 0,  x5 = t.

Note that the trivial solution is obtained when s = t = 0.
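As a quick check (a sketch, assuming the sign-restored coefficients of the example system above), the general solution satisfies every equation for any choice of s and t:

```python
# General solution of the homogeneous example:
# x1 = -s - t, x2 = s, x3 = -t, x4 = 0, x5 = t.
def solution(s, t):
    return (-s - t, s, -t, 0, t)

for s in (0, 1, -4):
    for t in (0, 2, 7):
        x1, x2, x3, x4, x5 = solution(s, t)
        # each equation of the homogeneous system must equal 0
        assert 2 * x1 + 2 * x2 - x3 + x5 == 0
        assert -x1 - x2 + 2 * x3 - 3 * x4 + x5 == 0
        assert x1 + x2 - 2 * x3 - x5 == 0
        assert x3 + x4 + x5 == 0
```

Setting s = t = 0 in `solution` gives the trivial solution (0, 0, 0, 0, 0), as noted above.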

MATRICES & MATRIX OPERATIONS
By definition, a matrix is a rectangular array of numbers. The numbers in the array are called the entries of the matrix. The size of a matrix is determined by the number of rows & columns it contains. Rows are the horizontal lines (index i); columns are the vertical lines (index j). A matrix with only one column is called a column matrix (or column vector); a matrix with only one row is called a row matrix (or row vector).
Note: capital letters denote matrices; lowercase letters denote numerical quantities (scalars). A general 3 × 4 matrix is written as

    A = [ a11  a12  a13  a14 ]
        [ a21  a22  a23  a24 ]
        [ a31  a32  a33  a34 ]

and a general m × n matrix as

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [  .    .          . ]
        [ am1  am2  ...  amn ]

This can be written as [aij]m×n when the size is required to be known, or simply [aij] when the size is not emphasized. Also (A)ij = aij. For

    A = [ 2  3 ]
        [ 7  0 ]

we have (A)11 = 2, (A)12 = 3, (A)21 = 7, (A)22 = 0.

A matrix with n rows & n columns is called a square matrix of order n, & a11, a22, ..., ann are said to be on the main diagonal of A:

    [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]    main diagonal: a11, a22, ..., ann
    [  .    .          . ]
    [ an1  an2  ...  ann ]

Operations on matrices: Two matrices are defined to be equal if they have the same size and their corresponding entries are equal. If A = [aij] and B = [bij] have the same size, then A = B if and only if (A)ij = (B)ij, or equivalently aij = bij for all i and j.

Matrices of different sizes cannot be added or subtracted. Example: consider

    A = [  2  1  0  3 ]        B = [ -4  3   5   1 ]
        [ -1  0  2  4 ]            [  2  2   0  -1 ]
        [  4 -2  7  0 ]            [  3  2  -4   5 ]

Then

    A + B = [ -2  4  5  4 ]    and    A - B = [  6  -2  -5   2 ]
            [  1  2  2  3 ]                   [ -3  -2   2   5 ]
            [  7  0  3  5 ]                   [  1  -4  11  -5 ]

If A is any matrix and c is any scalar, then the product cA is the matrix obtained by multiplying each entry of A by c; the matrix cA is said to be a scalar multiple of A. Example (scalar multiples and linear combinations): if

    A = [ 2  3  4 ]    B = [  0  2   7 ]    C = [ 9  -6   3 ]
        [ 1  3  1 ]        [ -1  3  -5 ]        [ 3   0  12 ]

then

    2A = [ 4  6  8 ]    and    2A - B + (1/3)C = [ 7  2   2 ]
         [ 2  6  2 ]                             [ 4  3  11 ]

This is the linear combination of A, B and C with scalar coefficients 2, -1 and 1/3.

Multiplying matrices
If A is an m × r matrix and B is an r × n matrix, then the product AB is an m × n matrix. If the number of columns of A does not equal the number of rows of B, the product is undefined:

      A      B
    m × r  r × n      the inner sizes must be the same; the end product is m × n.

Example: consider

    A = [ 1  2  4 ]        B = [ 4   1  4  3 ]
        [ 2  6  0 ]            [ 0  -1  3  1 ]
                               [ 2   7  5  2 ]

Then

    AB = [ 12  27  30  13 ]
         [  8  -4  26  12 ]
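The row-by-column rule behind this product can be written out directly. An illustrative sketch (the helper name `matmul` is ours):

```python
# m x r times r x n product: entry (i, j) is the dot product of
# row i of A with column j of B.
def matmul(A, B):
    m, r, n = len(A), len(B), len(B[0])
    assert all(len(row) == r for row in A), "inner sizes must match"
    return [[sum(A[i][k] * B[k][j] for k in range(r)) for j in range(n)]
            for i in range(m)]

A = [[1, 2, 4], [2, 6, 0]]
B = [[4, 1, 4, 3], [0, -1, 3, 1], [2, 7, 5, 2]]
print(matmul(A, B))  # [[12, 27, 30, 13], [8, -4, 26, 12]]
```

A 2 × 3 matrix times a 3 × 4 matrix gives a 2 × 4 result, as the size rule predicts; swapping the arguments trips the size assertion, since the product in the other order is undefined.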

Partitioned matrices and linear combinations
A matrix can be subdivided or partitioned into smaller matrices by inserting horizontal and vertical rules between selected rows and columns. For example, the matrix

    A = [ a  d  g ]
        [ b  e  h ]
        [ c  f  k ]

can be partitioned in different ways: into four blocks,

    A = [ A11  A12 ]
        [ A21  A22 ]

or into its row vectors r1, r2, r3, or into its column vectors c1, c2, c3. Partitioning shows that a product Ax (A times a column vector x) can be written as a linear combination of the column vectors of A with coefficients taken from x; likewise, a row vector times A is a linear combination of the row vectors of A.

With a matrix A of coefficients, a column x of variables, and a column b of constants, a system of m equations in n unknowns can be represented by the single matrix equation Ax = b, where A is the coefficient matrix of the system. The augmented matrix for the system is obtained by adjoining b to A as the last column; thus the augmented matrix is

    [A | b] = [ a11  a12  ...  a1n | b1 ]
              [ a21  a22  ...  a2n | b2 ]
              [  .    .          . |  . ]
              [ am1  am2  ...  amn | bm ]

Transpose of a matrix
If A is any m × n matrix, then the transpose of A, denoted A^T, is defined to be the n × m matrix that results from interchanging the rows & columns of A; that is, the first column of A^T is the first row of A, the second column of A^T is the second row of A, & so forth. For example,

    A = [ a11  a12  a13  a14 ]    gives    A^T = [ a11  a21  a31 ]
        [ a21  a22  a23  a24 ]                   [ a12  a22  a32 ]
        [ a31  a32  a33  a34 ]                   [ a13  a23  a33 ]
                                                 [ a14  a24  a34 ]

Trace of a matrix
If A is a square matrix, then the trace of A, denoted tr(A), is defined to be the sum of the entries on the main diagonal of A. The trace of A is undefined if A is not a square matrix. Example: consider

    B = [ -1   2   7   0 ]
        [  3   5  -8   4 ]
        [  1   2   7  -3 ]
        [  4  -2   1   0 ]

Find (i) tr(B)  (ii) B^T. Here tr(B) = -1 + 5 + 7 + 0 = 11.

1.4 INVERSES; RULES OF MATRIX ARITHMETIC

(a) Properties of matrix operations
For real numbers a & b, ab = ba, which is called the commutative law for multiplication. For matrices, AB ≠ BA in general; indeed, one of the products may be defined while the other is not. For example, if A is a 2 × 3 matrix & B is a 3 × 4 matrix, then AB is defined & BA is undefined.
Other properties include the associative & distributive laws.
(b) Zero matrices
(c) Identity matrices
(d) Properties of inverses
An invertible matrix has exactly one inverse.
Theorem (finding the inverse of an invertible 2 × 2 matrix): the matrix

    A = [ a  b ]
        [ c  d ]

is invertible if ad - bc ≠ 0, in which case the inverse is given by the formula

    A^-1 = 1/(ad - bc) [  d  -b ]  =  [  d/(ad - bc)  -b/(ad - bc) ]
                       [ -c   a ]     [ -c/(ad - bc)   a/(ad - bc) ]

H/W: prove that AA^-1 = I2 & A^-1 A = I2.
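The 2 × 2 formula translates directly into code. A sketch (the example matrix [[1, 2], [-1, 3]] is our own, not from the handout; exact fractions keep the check AA^-1 = I2 exact):

```python
# Inverse of a 2 x 2 matrix A = [[a, b], [c, d]] via
# A^-1 = (1/(ad - bc)) [[d, -b], [-c, a]].
from fractions import Fraction

def inverse_2x2(a, b, c, d):
    det = a * d - b * c
    assert det != 0, "ad - bc = 0: the matrix is not invertible"
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

A = [[1, 2], [-1, 3]]            # hypothetical example; ad - bc = 3 + 2 = 5
inv = inverse_2x2(1, 2, -1, 3)
# verify A * A^-1 = I2
prod = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```

The assertion on `det` mirrors the hypothesis of the theorem: the formula only applies when ad - bc ≠ 0.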

Theorem: If A & B are invertible matrices of the same size, then AB is invertible &

    (AB)^-1 = B^-1 A^-1

Proof: show that (AB)(B^-1 A^-1) = (B^-1 A^-1)(AB) = I, & illustrate using an example.

Note: A product of any number of invertible matrices is invertible, & the inverse of the product is the product of the inverses in the reverse order.

(e) Powers of a matrix
If A is a square matrix, then we define the non-negative integer powers of A to be

    A^0 = I,    A^n = AA...A  (n factors, n > 0).

Also, if A is invertible, then we define the negative integer powers to be

    A^-n = (A^-1)^n = A^-1 A^-1 ... A^-1  (n factors).

(i) Theorem (laws of exponents):
1. If A is a square matrix & r & s are integers, then

    A^r A^s = A^(r+s),    (A^r)^s = A^(rs).

2. If A is an invertible matrix, then:
   (a) A^-1 is invertible & (A^-1)^-1 = A.
   (b) A^n is invertible & (A^n)^-1 = (A^-1)^n for n = 0, 1, 2, ...
   (c) For any non-zero scalar k, the matrix kA is invertible & (kA)^-1 = (1/k) A^-1.

Example: consider

    A = [ 1  2 ]
        [ 1  3 ]

Find A^-1 and A^3.

Properties of the transpose
Theorem: If the sizes of the matrices are such that the stated operations can be performed, then:
(i) (A^T)^T = A.
(ii) (A + B)^T = A^T + B^T & (A - B)^T = A^T - B^T.
(iii) (kA)^T = kA^T, where k is any scalar.
(iv) (AB)^T = B^T A^T.

If A is an invertible matrix, then A^T is also invertible & (A^T)^-1 = (A^-1)^T.

ELEMENTARY MATRICES AND A METHOD FOR FINDING A^-1
DEFINITION: An n × n matrix is called an elementary matrix if it can be obtained from the n × n identity matrix In by performing a single elementary row operation.
Example (elementary matrices & row operations): listed below are four elementary matrices & the operations that produce them.

    [ 1  0 ]     multiply the second row of I2 by -3
    [ 0 -3 ]

    [ 1  0  0  0 ]
    [ 0  0  0  1 ]   interchange the second & fourth rows of I4
    [ 0  0  1  0 ]
    [ 0  1  0  0 ]

    [ 1  0  3 ]
    [ 0  1  0 ]      add 3 times the third row of I3 to the first row
    [ 0  0  1 ]

    [ 1  0  0 ]
    [ 0  1  0 ]      multiply the first row of I3 by 1
    [ 0  0  1 ]

When a matrix A is multiplied on the left by an elementary matrix E, the effect is to perform an elementary row operation on A.
Theorem (row operations by matrix multiplication): If the elementary matrix E results from performing a certain row operation on Im, and if A is an m × n matrix, then the product EA is the matrix that results when this same row operation is performed on A.
Example (using elementary matrices): consider the matrix

    A = [ 1   0  2  3 ]
        [ 2  -1  3  6 ]
        [ 1   4  4  0 ]

and the elementary matrix

    E = [ 1  0  0 ]
        [ 0  1  0 ]
        [ 3  0  1 ]

which results from adding 3 times the 1st row of I3 to the 3rd row. The product is

    EA = [ 1   0   2  3 ]
         [ 2  -1   3  6 ]
         [ 4   4  10  9 ]

which is precisely the same matrix that results when we add 3 times the first row of A to the third row.

If an elementary row operation is applied to an identity matrix I to produce an elementary matrix E, then there is a second row operation that, when applied to E, produces I back again. For example, if E is obtained by multiplying the i-th row of I by a non-zero constant c, then I can be recovered if the i-th row of E is multiplied by 1/c; see the table below. The operations on the right are called the inverse operations of the corresponding operations on the left.

    Row operation on I that produces E    |  Row operation on E that reproduces I
    (a) Multiply row i by c ≠ 0           |  Multiply row i by 1/c
    (b) Interchange rows i & j            |  Interchange rows i & j
    (c) Add c times row i to row j        |  Add -c times row i to row j

Example (row operations and inverse row operations): in each of the following, an elementary row operation is applied to the 2 × 2 identity matrix to obtain an elementary matrix E; then E is restored to the identity matrix by applying the inverse row operation.

    [ 1  0 ]  multiply the      [ 1  0 ]  multiply the       [ 1  0 ]
    [ 0  1 ]  2nd row by 7 ->   [ 0  7 ]  2nd row by 1/7 ->  [ 0  1 ]

    [ 1  0 ]  interchange the   [ 0  1 ]  interchange the    [ 1  0 ]
    [ 0  1 ]  1st & 2nd rows -> [ 1  0 ]  1st & 2nd rows ->  [ 0  1 ]

    [ 1  0 ]  add 5 times the       [ 1  5 ]  add -5 times the      [ 1  0 ]
    [ 0  1 ]  2nd row to the 1st -> [ 0  1 ]  2nd row to the 1st -> [ 0  1 ]

Theorem: Every elementary matrix is invertible, and the inverse is also an elementary matrix.
A method for inverting matrices: an invertible matrix A can be written as a product of elementary matrices.
Proof: Assume that the reduced row-echelon form of A is In, so that A can be reduced to In by a finite sequence of elementary row operations. Each of these operations can be accomplished by multiplying on the left by an appropriate elementary matrix; that is, there are elementary matrices E1, E2, ..., Ek such that

    Ek ... E2 E1 A = In                                   (a)

E1, E2, ..., Ek are invertible. Multiplying both sides of equation (a) on the left successively by Ek^-1, ..., E2^-1, E1^-1, we obtain

    A = E1^-1 E2^-1 ... Ek^-1 In = E1^-1 E2^-1 ... Ek^-1  (b)

Equation (b) expresses A as a product of elementary matrices. Multiplying equation (a) on the right by A^-1 yields

    A^-1 = Ek ... E2 E1 In                                (c)

This shows that A^-1 can be obtained by multiplying In successively on the left by the elementary matrices E1, E2, ..., Ek. Since each multiplication on the left by one of these elementary matrices performs a row operation, it follows, by comparing equations (a) & (c), that the sequence of row operations that reduces A to In will reduce In to A^-1.

Note: To find the inverse of an invertible matrix A, we must find a sequence of elementary row operations that reduces A to the identity & then perform this same sequence of operations on In to obtain A^-1.

Example (using row operations to find A^-1): find the inverse of

    A = [ 1  2  3 ]
        [ 2  5  3 ]
        [ 1  0  8 ]

Solution: We want to reduce A to the identity matrix by row operations & simultaneously apply these operations to I to produce A^-1. To accomplish this we adjoin the identity matrix to the right side of A, producing a matrix of the form [A | I]. We then apply row operations to this matrix until the left side is reduced to I; these operations will convert the right side to A^-1, so that the final matrix has the form [I | A^-1]. Therefore:

    [ 1  2  3 | 1  0  0 ]
    [ 2  5  3 | 0  1  0 ]
    [ 1  0  8 | 0  0  1 ]

    [ 1   2   3 |  1  0  0 ]
    [ 0   1  -3 | -2  1  0 ]   we added -2 X the 1st row to the 2nd & -1 X the 1st row to the 3rd
    [ 0  -2   5 | -1  0  1 ]

    [ 1  2   3 |  1  0  0 ]
    [ 0  1  -3 | -2  1  0 ]   we added 2 X the 2nd row to the 3rd
    [ 0  0  -1 | -5  2  1 ]

    [ 1  2   3 |  1   0   0 ]
    [ 0  1  -3 | -2   1   0 ]   we multiplied the 3rd row by -1
    [ 0  0   1 |  5  -2  -1 ]

    [ 1  2  0 | -14   6   3 ]
    [ 0  1  0 |  13  -5  -3 ]   we added 3 X the 3rd row to the 2nd & -3 X the 3rd row to the 1st
    [ 0  0  1 |   5  -2  -1 ]

    [ 1  0  0 | -40  16   9 ]
    [ 0  1  0 |  13  -5  -3 ]   we added -2 X the 2nd row to the 1st
    [ 0  0  1 |   5  -2  -1 ]

Thus

    A^-1 = [ -40  16   9 ]
           [  13  -5  -3 ]
           [   5  -2  -1 ]

Often it will not be known in advance whether a given matrix is invertible. If an n × n matrix A is not invertible, then it cannot be reduced to In by elementary row operations, i.e. the reduced row-echelon form of A has at least one row of zeros. Thus, if the procedure of the last example is attempted on a matrix that is not invertible, then at some point in the computations a row of zeros will occur on the left side. It can then be concluded that the given matrix is not invertible, and the computations can be stopped.
Example (showing that a matrix is not invertible): consider the matrix

    A = [  1  6   4 ]
        [  2  4  -1 ]
        [ -1  2   5 ]

Applying the procedure of the above example:

    [  1  6   4 | 1  0  0 ]
    [  2  4  -1 | 0  1  0 ]
    [ -1  2   5 | 0  0  1 ]

    [ 1   6   4 |  1  0  0 ]
    [ 0  -8  -9 | -2  1  0 ]   we added -2 X the 1st row to the 2nd & added the 1st row to the 3rd
    [ 0   8   9 |  1  0  1 ]

    [ 1   6   4 |  1  0  0 ]
    [ 0  -8  -9 | -2  1  0 ]   we added the 2nd row to the 3rd
    [ 0   0   0 | -1  1  1 ]
Since we have obtained a row of zeros on the left side, A is not invertible.

Solving linear systems by matrix inversion:
We have looked at two methods for solving linear systems: Gaussian elimination & Gauss-Jordan elimination. Another method:
Theorem: If A is an invertible n × n matrix, then for each n × 1 matrix b, the system of equations Ax = b has exactly one solution, namely x = A^-1 b.
Proof: Since A(A^-1 b) = b, it follows that x = A^-1 b is a solution of Ax = b. To show that this is the only solution, assume that x0 is an arbitrary solution; then Ax0 = b, and multiplying both sides on the left by A^-1 gives x0 = A^-1 b.
Example (solution of a linear system using A^-1): consider the system of linear equations

     x1 + 2 x2 + 3 x3 =  5
    2 x1 + 5 x2 + 3 x3 =  3
     x1         + 8 x3 = 17.

In matrix form this system can be written as Ax = b, where

    A = [ 1  2  3 ]      x = [ x1 ]      b = [  5 ]
        [ 2  5  3 ]          [ x2 ]          [  3 ]
        [ 1  0  8 ]          [ x3 ]          [ 17 ]

In the previous example we found the inverse of this coefficient matrix, so

    x = A^-1 b = [ -40  16   9 ] [  5 ]   [  1 ]
                 [  13  -5  -3 ] [  3 ] = [ -1 ]
                 [   5  -2  -1 ] [ 17 ]   [  2 ]

or x1 = 1, x2 = -1, x3 = 2.
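A quick numerical check of x = A^-1 b for this example (a sketch; plain Python lists stand in for matrices, and the sign-restored inverse from the earlier example is assumed):

```python
# Multiply the inverse [-40 16 9; 13 -5 -3; 5 -2 -1] by b = (5, 3, 17).
A_inv = [[-40, 16, 9], [13, -5, -3], [5, -2, -1]]
b = [5, 3, 17]
x = [sum(A_inv[i][j] * b[j] for j in range(3)) for i in range(3)]
print(x)  # [1, -1, 2]

# these values satisfy the original system
x1, x2, x3 = x
assert x1 + 2 * x2 + 3 * x3 == 5
assert 2 * x1 + 5 * x2 + 3 * x3 == 3
assert x1 + 8 * x3 == 17
```

Substituting the computed x back into each equation of the system confirms the theorem: the inversion method produces the unique solution.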

NB: the method used in the example above applies only when the system has as many equations as unknowns & the coefficient matrix is invertible.

2. DETERMINANTS:
If

    A = [ a  b ]
        [ c  d ]

then the determinant of A is

    det(A) = ad - bc      (for a square matrix)

and ad - bc ≠ 0 is required for the matrix to be invertible, with

    A^-1 = 1/det(A) [  d  -b ]
                    [ -c   a ]

the inverse of A. Read about permutations and determinants.
Evaluating determinants by row reduction
Theorem: Let A be a square matrix. (a) If A has a row of zeros or a column of zeros, then det(A) = 0. (b) det(A) = det(A^T).
Theorem (triangular matrices): If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then det(A) is the product of the entries on the main diagonal of the matrix, i.e. det(A) = a11 a22 ... ann.

Look for the proof. For example, for the lower triangular matrix

    A = [ a11   0    0    0  ]
        [ a21  a22   0    0  ]
        [ a31  a32  a33   0  ]
        [ a41  a42  a43  a44 ]

det(A) = a11 a22 a33 a44.

Example (determinant of an upper triangular matrix):

    A = [ 2   7  -3  8  3 ]
        [ 0  -3   7  5  1 ]
        [ 0   0   6  7  6 ]
        [ 0   0   0  9  8 ]
        [ 0   0   0  0  4 ]

    det(A) = (2)(-3)(6)(9)(4) = -1296.

Elementary row operations
Theorem: Let A be an n × n matrix.
(a) If B is the matrix that results when a single row or single column of A is multiplied by a scalar k, then det(B) = k det(A).
(b) If B is the matrix that results when two rows or two columns of A are interchanged, then det(B) = -det(A).
(c) If B is the matrix that results when a multiple of one row of A is added to another row, or a multiple of one column to another column, then det(B) = det(A).
Example (illustrating with 3 × 3 determinants):

    Relationship                                            Operation

    | ka11  ka12  ka13 |       | a11  a12  a13 |
(a) | a21   a22   a23  |  =  k | a21  a22  a23 |            The first row of A is multiplied by k:
    | a31   a32   a33  |       | a31  a32  a33 |            det(B) = k det(A)

    | a21  a22  a23 |        | a11  a12  a13 |
(b) | a11  a12  a13 |  =  -  | a21  a22  a23 |              The first & second rows of A are interchanged:
    | a31  a32  a33 |        | a31  a32  a33 |              det(B) = -det(A)

    | a11+ka21  a12+ka22  a13+ka23 |     | a11  a12  a13 |
(c) | a21       a22       a23      |  =  | a21  a22  a23 |  A multiple of the second row of A is added
    | a31       a32       a33      |     | a31  a32  a33 |  to the first row: det(B) = det(A)

Verifying the equation in part (c): note that

    det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 - a13 a22 a31 - a12 a21 a33 - a11 a23 a32.

(This is the familiar expansion of a 3 × 3 determinant: the + terms come from the left-to-right diagonals and the - terms from the right-to-left diagonals when the first two columns are recopied to the right of the matrix; a 2 × 2 determinant works the same way with its single pair of diagonals.)

Verification:

    det(B) = (a11 + ka21) a22 a33 + (a12 + ka22) a23 a31 + (a13 + ka23) a21 a32
             - (a13 + ka23) a22 a31 - (a12 + ka22) a21 a33 - (a11 + ka21) a23 a32
           = det(A) + k (a21 a22 a33 + a22 a23 a31 + a23 a21 a32 - a23 a22 a31 - a22 a21 a33 - a21 a23 a32)
           = det(A) + 0 = det(A).

Provide proof for a & b. Elementary matrices: An Elementary matrix results from performing single elementary row operation on an identity matrix; thus if we let A = In from det(B) = Kdet(A); such that det(A) = det(In) = 1,then the matrix B is an elementary matrix. Theorem Let E be an n X n elementary matrix (a). If E results from multiplying a row of In by k, then det(E) = K. (b). If E results from interchanging two rows of In, then det(E) = -1 (c). If E results from adding a multiple of one row of In to another, then det(E) = 1. Example: (Determinants of Elementary matrices) The following determinants of elementary matrices are evaluated by inspection.
1 0 0 0 0 3 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0

= 3,

= 1

The sec. row of The first and last rows I4 were interchanged. I4 was multiplied by 3
1 0 0 0 0 1 0 0 0 0 1 0 7 0 0 1 =1.

7 times the last row of In was added to the first row. Matrices with Proportional Rows or Columns: If a square matrix A has proportional rows, then a row of zeros can be introduced by adding a suitable multiple of one of the rows to the other and similarly for columns. But

adding a multiple of one row or column to another does not change the determinant for det(A) = 0. Theorem If A is a square matrix with two proportional rows or two proportional columns, then det(A) = 0. Example The example below illustrates the introduction of a row of zeros when there are two proportional rows.
1 2 3 1 3 6 9 1 2 4 1 4 4 8 5 8 1 0 3 1 3 0 9 1 2 0 1 4 4 0 = 0. 5 8

The 2 nd row is 2 X the 1st , so we added 2 X the 1st row to the 2 nd to int roduce a row of zeros.

Each of the following matrices has two proportional rows or columns; thus each has a determinant of zero.
    | -1  4 |     |  1  -2  7 |     |  3  -1    4   -5 |
    | -2  8 | ,   | -4   8  5 | ,   |  6  -2    5    2 |
                  |  2  -4  3 |     |  5   8    1    4 |
                                    | -9   3  -12   15 |

Evaluating Determinants by Row Reduction: The method employed is to reduce the given matrix to upper triangular form by elementary row operations, then compute the determinant of the upper triangular matrix (an easy computation), then relate that determinant to that of the original matrix. 1. Example: (using row reduction to evaluate a det.) Evaluate det(A) where
        | 0   1  5 |
    A = | 3  -6  9 |
        | 2   6  1 |

Solution: Reduce A to row-echelon form (which is upper triangular), keeping track of how each elementary row operation affects the determinant:

    det(A) = | 0   1  5 |
             | 3  -6  9 |
             | 2   6  1 |

           = - | 3  -6  9 |          The first and sec. rows of A
               | 0   1  5 |          were interchanged.
               | 2   6  1 |

           = -3 | 1  -2  3 |         A common factor of 3 from the first row
                | 0   1  5 |         was taken through the determinant sign.
                | 2   6  1 |

           = -3 | 1  -2   3 |        -2 X the 1st row was added to the 3rd row.
                | 0   1   5 |
                | 0  10  -5 |

           = -3 | 1  -2   3 |        -10 X the 2nd row was added to the 3rd row.
                | 0   1   5 |
                | 0   0 -55 |

           = (-3)(-55) | 1  -2  3 |  A common factor of -55 from the last row
                       | 0   1  5 |  was taken through the determinant sign.
                       | 0   0  1 |

           = (-3)(-55)(1) = 165
NB. The method of row reduction is well suited for computer evaluation of determinants because it is systematic and easily programmed. 2. Example: (Using column operations to evaluate a determinant) Compute the det of
        | 1  0  0   3 |
    A = | 2  7  0   6 |
        | 0  6  3   0 |
        | 7  3  1  -5 |

Solution: The det could be computed as above by using elementary row operations to reduce A to row-echelon form, but we can put A in lower triangular form in one step by adding -3 X the first column to the fourth to obtain

    det(A) = det | 1  0  0    0 |  = (1)(7)(3)(-26) = -546
                 | 2  7  0    0 |
                 | 0  6  3    0 |
                 | 7  3  1  -26 |
This example points out the utility of keeping an eye open for column operations that can shorten computations. Properties of the Determinant Function: Read more on them & proofs.
1. det(KA) = K^n det(A) for an n X n matrix A. E.g.,

    | Ka11  Ka12  Ka13 |         | a11  a12  a13 |
    | Ka21  Ka22  Ka23 |  = K^3  | a21  a22  a23 |
    | Ka31  Ka32  Ka33 |         | a31  a32  a33 |

2. det(A + B) ≠ det(A) + det(B) in general.

3. det(AB) = det(A) det(B), & det(ABC) = det(A) det(B) det(C).

4. det(A^-1) = 1 / det(A).
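These four properties can be spot-checked numerically. A small sketch, assuming numpy is available; the two matrices are hypothetical examples chosen here (not from the notes), with A invertible since det(A) = 1:

```python
import numpy as np

A = np.array([[1., 2., 3.], [0., 1., 4.], [5., 6., 0.]])   # det(A) = 1
B = np.array([[2., 0., 1.], [1., 3., 0.], [0., 1., 1.]])   # det(B) = 7
k, n = 2.0, 3

# 1. det(kA) = k^n det(A)
print(np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))              # True
# 2. det(A + B) is NOT det(A) + det(B) in general
print(np.isclose(np.linalg.det(A + B),
                 np.linalg.det(A) + np.linalg.det(B)))                        # False
# 3. det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
# 4. det(A^-1) = 1 / det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A)))    # True
```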

Linear Systems of the Form Ax = λx: Many applications of linear algebra are concerned with systems of n linear equations in n unknowns that are expressed in the form

    Ax = λx,    (i)

where λ is a scalar. Such a system can be rewritten as λx - Ax = 0 and then, by inserting an identity matrix & factoring, as

    (λI - A)x = 0.    (ii)

Example: The linear system

    x1 + 3x2 = λx1
    4x1 + 2x2 = λx2

can be written in the form

    | 1  3 | |x1|  =  λ |x1| ,    i.e.  Ax = λx  with  A = | 1  3 |  &  x = |x1| .
    | 4  2 | |x2|       |x2|                               | 4  2 |         |x2|

The system can be rewritten as

    λ |x1| - | 1  3 | |x1|  =  |0|
      |x2|   | 4  2 | |x2|     |0|

or

    λ | 1  0 | |x1| - | 1  3 | |x1|  =  |0|
      | 0  1 | |x2|   | 4  2 | |x2|     |0|

or

    | λ-1   -3 | |x1|  =  |0| ,    so that    λI - A = | λ-1   -3 | .
    | -4   λ-2 | |x2|     |0|                          | -4   λ-2 |
The primary problem of interest for linear systems of the form in equation (ii) is to determine those values of λ for which the system has a nontrivial solution; such a value of λ is called a characteristic value or an eigenvalue of A. The German prefix eigen can be translated as "proper", which stems from the older literature, where eigenvalues were known as proper values; they were also called latent roots. If λ is an eigenvalue of A, then the nontrivial solutions of equation (ii) are called the eigenvectors of A corresponding to λ. The system (λI - A)x = 0 has a nontrivial solution if & only if
    det(λI - A) = 0.    (iii)

This is called the characteristic eqn of A; the eigenvalues of A can be found by solving this equation for λ. Example: (Calculating the eigenvalues & eigenvectors) The characteristic equation of A is

    det(λI - A) = | λ-1   -3 |  = 0,    or    λ^2 - 3λ - 10 = 0.
                  | -4   λ-2 |

The factored form of this equation is (λ + 2)(λ - 5) = 0, so the eigenvalues of A are

    λ = -2  &  λ = 5.

By defn,

    x = |x1|
        |x2|

is an eigenvector of A if & only if x is a nontrivial solution of (λI - A)x = 0, i.e. of

    | λ-1   -3 | |x1|  =  |0| .    (iv)
    | -4   λ-2 | |x2|     |0|

If λ = -2, then eqn (iv) becomes

    | -3  -3 | |x1|  =  |0| .
    | -4  -4 | |x2|     |0|

Solving this system yields (verify) x1 = -t, x2 = t, so the eigenvectors corresponding to λ = -2 are the nonzero solutions of the form

    x = |x1|  =  |-t| .
        |x2|     | t|

Again from (iv), the eigenvectors of A corresponding to λ = 5 are the nontrivial solutions of

    |  4  -3 | |x1|  =  |0| .
    | -4   3 | |x2|     |0|

We leave it for the reader to solve this system & show that the eigenvectors of A corresponding to λ = 5 are the nonzero solutions of the form

    x = | (3/4)t | .
        |    t   |
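The hand computation above can be checked with numpy (a sketch; numpy.linalg.eig returns unit-length eigenvectors, so they are scalar multiples of the (-t, t) and ((3/4)t, t) families found by hand):

```python
import numpy as np

A = np.array([[1., 3.],
              [4., 2.]])

# Roots of the characteristic equation det(lambda*I - A) = lambda^2 - 3*lambda - 10.
w, v = np.linalg.eig(A)
print(np.sort(w.real).round(6))       # [-2.  5.]

# Each column of v satisfies A x = lambda x for the matching eigenvalue.
for lam, x in zip(w, v.T):
    assert np.allclose(A @ x, lam * x)
```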
COFACTOR EXPANSION; CRAMER'S RULE. Minors and cofactors

From above, for

        | a11  a12  a13 |
    A = | a21  a22  a23 |
        | a31  a32  a33 |

we have

    det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 - a13 a22 a31 - a12 a21 a33 - a11 a23 a32.    (i)

Rearranging terms & factoring, equation (i) can be rewritten as

    det(A) = a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) + a13 (a21 a32 - a22 a31).    (ii)

The quantities in brackets multiplying a11, a12 & a13 represent determinants as follows:

    for a11:  M11 = | a22  a23 |
                    | a32  a33 |

    for a12:  M12 = | a21  a23 |
                    | a31  a33 |

    for a13:  M13 = | a21  a22 |
                    | a31  a32 |

Defn: If A is a square matrix, then the minor of entry aij is denoted by Mij & is defined to be the determinant of the submatrix that remains after the i-th row & j-th column are deleted from A. The number (-1)^(i+j) Mij is denoted by Cij and is called the cofactor of entry aij. Example: (Finding minors & cofactors)
let

        | 3  1  -4 |
    A = | 2  5   6 |
        | 1  4   8 |

The minor of entry a11 is

    M11 = | 5  6 |  = 40 - 24 = 16.
          | 4  8 |

The cofactor of a11 is C11 = (-1)^(1+1) M11 = M11 = 16. Similarly, the minor of entry a32 is

    M32 = | 3  -4 |  = 18 - (-8) = 26.
          | 2   6 |

The cofactor of a32 is C32 = (-1)^(3+2) M32 = -M32 = -26.

Notice that the cofactor & the minor of an element aij differ only in sign, that is, Cij = ±Mij. A quick way of determining whether to use + or - is to use the fact that the sign relating Cij & Mij is in the i-th row & j-th column of the checkerboard array

    + - + - + ...
    - + - + - ...
    + - + - + ...
    - + - + - ...
    . . . . .

For example, C11 = M11, C21 = -M21, C12 = -M12, C22 = M22, & so on. Cofactor Expansions: From equation (ii),
    det(A) = a11 M11 + a12 (-M12) + a13 M13 = a11 C11 + a12 C12 + a13 C13.    (iii)

Equation (iii) shows that the det of A can be computed by multiplying the entries in the first row of A by their corresponding cofactors & adding the resulting products. This method of evaluating det A is called cofactor expansion along the first row of A. Example: (cofactor expansion along the first row)

let

        |  3   1   0 |
    A = | -2  -4   3 | .
        |  5   4  -2 |

Evaluate det(A) by cofactor expansion along the 1st row of A.

Soln: From eqn (iii),

    det(A) = |  3   1   0 |
             | -2  -4   3 |
             |  5   4  -2 |

           = 3 | -4   3 |  - 1 | -2   3 |  + 0 | -2  -4 |
               |  4  -2 |      |  5  -2 |      |  5   4 |

           = 3(-4) - (1)(-11) + 0 = -1.

By rearranging the terms in equation (i) in various ways, it is possible to obtain other formulas like equation (iii). There should be no trouble checking that all of the following are correct:

    det(A) = a11 C11 + a12 C12 + a13 C13
           = a11 C11 + a21 C21 + a31 C31
           = a21 C21 + a22 C22 + a23 C23
           = a12 C12 + a22 C22 + a32 C32
           = a31 C31 + a32 C32 + a33 C33
           = a13 C13 + a23 C23 + a33 C33    (iv)

Notice that in each equation the entries & cofactors all come from the same row or column of A. These equations are called the cofactor expansions of det(A). Theorem: (Expansions by cofactors) The determinant of an n X n matrix A can be computed by multiplying the entries in any row (or column) by their cofactors & adding the resulting products; i.e., for each 1 ≤ i ≤ n & 1 ≤ j ≤ n,

    det(A) = a1j C1j + a2j C2j + ... + anj Cnj    (cofactor expansion along the j-th column)

and

    det(A) = ai1 Ci1 + ai2 Ci2 + ... + ain Cin    (cofactor expansion along the i-th row)

Example: (Cofactor expansion along the first column) Let A be the matrix as in the example above. From equation (iv),

    det(A) = |  3   1   0 |
             | -2  -4   3 |
             |  5   4  -2 |

           = 3 | -4   3 |  - (-2) | 1   0 |  + 5 |  1  0 |
               |  4  -2 |         | 4  -2 |      | -4  3 |

           = 3(-4) - (-2)(-2) + 5(3) = -1.
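The expansion theorem translates directly into a short recursive routine. This sketch always expands along the first row; it is exponential-time, so it is only for illustration on the small matrices in these notes (row reduction is what you would use in practice):

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_1j: delete row 1 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # Cofactor C_1j = (-1)^(1+j) M_1j  (j is 0-based here).
        total += A[0][j] * (-1) ** j * det_cofactor(minor)
    return total

# The matrix from the two examples above: det(A) = -1.
A = [[3, 1, 0],
     [-2, -4, 3],
     [5, 4, -2]]
print(det_cofactor(A))   # -1
```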

Note: The best strategy for evaluating a determinant by cofactor expansion is to expand along a row or a column having the largest number of zeros. Cofactor expansion & row or column operations can sometimes be used in combination to provide an effective method for evaluating determinants. Example: Row operations & cofactor expansion. Evaluate det (A) where
        | 3  5  -2  6 |
    A = | 1  2  -1  1 |
        | 2  4   1  5 |
        | 3  7   5  3 |

Solution: By adding suitable multiples of the second row to the remaining rows, we obtain

    det(A) = | 0  -1   1  3 |
             | 1   2  -1  1 |
             | 0   0   3  3 |
             | 0   1   8  0 |

           = - | -1  1  3 |      Cofactor expansion along the 1st column.
               |  0  3  3 |
               |  1  8  0 |

           = - | -1  1  3 |      We added the 1st row to the 3rd row.
               |  0  3  3 |
               |  0  9  3 |

           = -(-1) | 3  3 |      Cofactor expansion along the 1st column.
                   | 9  3 |

           = (3)(3) - (3)(9) = -18.

Adjoint of a Matrix: In a cofactor expansion we compute det(A) by multiplying the entries in a row or a column by their cofactors and adding the resulting products. It turns out that if one multiplies the entries in any row by the corresponding cofactors from a different row, the sum of these products is always zero. Example: (Entries and cofactors from different rows) let

        | a11  a12  a13 |
    A = | a21  a22  a23 |
        | a31  a32  a33 |

Consider the quantity

    a11 C31 + a12 C32 + a13 C33,

in which the entries of the first row are multiplied by the cofactors of the corresponding entries of the third row. Construct a new matrix A' by replacing the third row of A with another copy of the first row:

         | a11  a12  a13 |
    A' = | a21  a22  a23 |
         | a11  a12  a13 |

Since A' has two identical rows,

    det(A') = 0.    (a)

The cofactors C'31, C'32 & C'33 of the third-row entries of A' do not involve the third row, so

    C'31 = C31,  C'32 = C32,  C'33 = C33.

Evaluating det(A') by cofactor expansion along the third row gives

    det(A') = a11 C'31 + a12 C'32 + a13 C'33 = a11 C31 + a12 C32 + a13 C33.    (b)

From (a) & (b),

    a11 C31 + a12 C32 + a13 C33 = 0.

Definition: If A is any n X n matrix and Cij is the cofactor of aij, then the matrix

    | C11  C12  ...  C1n |
    | C21  C22  ...  C2n |
    |  .    .         .  |
    | Cn1  Cn2  ...  Cnn |

is called the matrix of cofactors from A. The transpose of this matrix is called the adjoint of A and is denoted by adj(A). Example: (Adjoint of a 3 X 3 matrix)

let

        | 3   2  -1 |
    A = | 1   6   3 |
        | 2  -4   0 |

The cofactors of A are

    C11 = 12    C12 = 6     C13 = -16
    C21 = 4     C22 = 2     C23 = 16
    C31 = 12    C32 = -10   C33 = 16

so that the matrix of cofactors is

    | 12    6  -16 |
    |  4    2   16 |
    | 12  -10   16 |

and the adjoint of A is

             |  12   4   12 |
    adj(A) = |   6   2  -10 | .
             | -16  16   16 |

Theorem: (Inverse of a matrix using its adjoint) If A is an invertible matrix, then

    A^-1 = (1/det(A)) adj(A).    (i)

Example: (Using the adjoint to find an inverse matrix) Use equation (i) above to find the inverse of

        | 3   2  -1 |
    A = | 1   6   3 |
        | 2  -4   0 |

Soln: det(A) = 64. Thus,

    A^-1 = (1/det(A)) adj(A) = (1/64) |  12   4   12 |  =  |  12/64   4/64   12/64 |
                                      |   6   2  -10 |     |   6/64   2/64  -10/64 |
                                      | -16  16   16 |     | -16/64  16/64   16/64 |

CRAMER'S RULE: Cramer's rule is of marginal interest for computational purposes, but it is useful for studying the mathematical properties of a solution without the need for solving the system. Theorem: (Cramer's rule) If Ax = b is a system of n linear equations in n unknowns such that det(A) ≠ 0, then the system has a unique solution. The solution is

    x1 = det(A1)/det(A),   x2 = det(A2)/det(A),   ...,   xn = det(An)/det(A),

where Aj is the matrix obtained by replacing the entries in the j-th column of A by the entries of the matrix

        | b1 |
    b = | b2 |
        | .. |
        | bn |
Read about the proof of this theorem. Example: (Using Cramer's rule to solve a linear system) Use Cramer's rule to solve

     x1        + 2x3 = 6
    -3x1 + 4x2 + 6x3 = 30
     -x1 - 2x2 + 3x3 = 8

Soln:

        |  1   0  2 |         |  6   0  2 |
    A = | -3   4  6 | ,  A1 = | 30   4  6 | ,
        | -1  -2  3 |         |  8  -2  3 |

         |  1   6  2 |         |  1   0   6 |
    A2 = | -3  30  6 | ,  A3 = | -3   4  30 |
         | -1   8  3 |         | -1  -2   8 |

Therefore,

    x1 = det(A1)/det(A) = -40/44 = -10/11,
    x2 = det(A2)/det(A) = 72/44 = 18/11,
    x3 = det(A3)/det(A) = 152/44 = 38/11.
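Cramer's rule is only a few lines of code. A sketch assuming numpy is available (cramer is a hypothetical helper name), verified on the system just solved:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule (requires det(A) != 0)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                      # replace the j-th column of A by b
        x[j] = np.linalg.det(Aj) / d
    return x

A = [[1, 0, 2], [-3, 4, 6], [-1, -2, 3]]
b = [6, 30, 8]
x = cramer(A, b)
print(x)                                  # approximately -10/11, 18/11, 38/11
assert np.allclose(x, [-10/11, 18/11, 38/11])
```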

NOTE: To solve a system of n equations in n unknowns by Cramer's rule, it is necessary to evaluate n + 1 determinants of n X n matrices. For systems with more than three equations, Gaussian elimination is far more efficient, since it is only necessary to reduce one n X (n + 1) augmented matrix. However, Cramer's rule does give a formula for the solution whenever the det of the coefficient matrix is nonzero. ASSIGNMENT

GENERAL VECTOR SPACES (INTRODUCTION TO LINEAR ALGEBRA)
Real Vector Spaces: Vector Space Axioms. The following definition consists of ten axioms (rules of the game, no proof). Defn: Let V be an arbitrary non-empty set of objects on which two operations are defined, addition and multiplication by scalars (numbers). By addition we mean a rule for associating with each pair of objects u and v in V an object u + v, called the sum of u and v; by scalar multiplication we mean a rule for associating with each scalar k and each object u in V an object ku, called the scalar multiple of u by k. If the following axioms are satisfied by all objects u, v, w in V and all scalars k and L, then we call V a vector space and we call the objects in V vectors.
1. If u and v are objects in V, then u + v is in V.
2. u + v = v + u
3. u + (v + w) = (u + v) + w
4. There is an object 0 (zero) in V, called a zero vector for V, such that 0 + u = u + 0 = u for all u in V.
5. For each u in V, there is an object -u in V, called a negative of u, such that u + (-u) = (-u) + u = 0.
6. If k is any scalar and u is any object in V, then ku is in V.
7. k(u + v) = ku + kv
8. (k + L)u = ku + Lu
9. k(Lu) = (kL)u
10. 1u = u.
Remark: Depending on the application, scalars may be real numbers or complex numbers. Vector spaces in which the scalars are complex numbers are called complex vector spaces, and those in which the scalars must be real are called real vector spaces. Examples of vector spaces:

1) R^n is a vector space. The set V = R^n with the standard operations of addition and scalar multiplication is a vector space. The three most important special cases of R^n are R (the real numbers), R^2 (the vectors in the plane), and R^3 (the vectors in 3-space).
2) A vector space of 2 X 2 matrices. Show that the set V of all 2 X 2 matrices with real entries is a vector space if vector addition is defined to be matrix addition and vector scalar multiplication is defined to be matrix scalar multiplication.
Solution: In this example it is convenient to verify the axioms in the following order: 1, 6, 2, 3, 7, 8, 9, 4, 5, & 10. Let

    u = | u11  u12 |   and   v = | v11  v12 |
        | u21  u22 |             | v21  v22 |

To prove axiom 1, we must show that u + v is a 2 X 2 matrix. Since

    u + v = | u11  u12 | + | v11  v12 |  =  | u11 + v11   u12 + v12 |
            | u21  u22 |   | v21  v22 |     | u21 + v21   u22 + v22 |

u + v is a 2 X 2 matrix and hence an object in V. Similarly, axiom 6 holds because for any real no. k we have

    ku = k | u11  u12 |  =  | ku11  ku12 |
           | u21  u22 |     | ku21  ku22 |

so that ku is a 2 X 2 matrix and consequently is an object in V. Axiom 2:

    u + v = | u11  u12 | + | v11  v12 |  =  | v11  v12 | + | u11  u12 |  =  v + u.
            | u21  u22 |   | v21  v22 |     | v21  v22 |   | u21  u22 |


Read: similarly, axiom 3 follows from the associative law for matrix addition, and axioms 7, 8, 9 follow from parts (h), (j) and (l), respectively, of the properties of matrix arithmetic. To prove axiom 4, we must find an object 0 in V such that 0 + u = u + 0 = u for all u in V. This can be done by defining 0 to be

    0 = | 0  0 |
        | 0  0 |

With this definition,

    0 + u = | 0  0 | + | u11  u12 |  =  | u11  u12 |  =  u,
            | 0  0 |   | u21  u22 |     | u21  u22 |

and similarly u + 0 = u. To prove axiom 5, we must show that each object u in V has a negative -u such that u + (-u) = 0 and (-u) + u = 0. This can be done by defining the -ve of u to be

    -u = | -u11  -u12 |
         | -u21  -u22 |

With this definition,

    u + (-u) = | u11  u12 | + | -u11  -u12 |  =  | 0  0 |  =  0,
               | u21  u22 |   | -u21  -u22 |     | 0  0 |

and similarly (-u) + u = 0. Finally, axiom 10 is a simple computation:

    1u = 1 | u11  u12 |  =  | u11  u12 |  =  u.
           | u21  u22 |     | u21  u22 |
3) A vector space of m X n matrices. The arguments in the above example can be adapted to show that the set V of all m X n matrices with real entries, together with the operations of matrix addition and scalar multiplication, is a vector space. The m X n zero matrix is the zero vector 0, and if u is a matrix in V, then the matrix -u is the -ve of the vector u. We denote this vector space by the symbol Mmn.
4) A set that is not a vector space. Let V = R^2 and define addition and scalar multiplication operations as follows: If u = (u1, u2) and v = (v1, v2), then define

    u + v = (u1 + v1, u2 + v2)

And if k is any real number, then define

    ku = (ku1, 0).

E.g., if u = (2, 4), v = (-3, 5), and k = 7, then

    u + v = (2 + (-3), 4 + 5) = (-1, 9)
    ku = 7u = (7 * 2, 0) = (14, 0)

The addition operation is the standard addition operation on R^2, but the scalar multiplication operation is not the standard scalar multiplication. There are values of u for which axiom 10 fails to hold; e.g., if u = (u1, u2) is such that u2 ≠ 0, then

    1u = 1(u1, u2) = (1 * u1, 0) = (u1, 0) ≠ u.
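The failure of axiom 10 is easy to demonstrate concretely; a minimal sketch, where scalar_mul is a hypothetical helper modelling the operation ku = (k*u1, 0):

```python
def scalar_mul(k, u):
    """The nonstandard scalar multiplication from the example: ku = (k*u1, 0)."""
    return (k * u[0], 0)

u = (3, 7)                     # any u with u2 != 0 exposes the failure
print(scalar_mul(1, u))        # (3, 0), which is not u = (3, 7)
assert scalar_mul(1, u) != u   # axiom 10 (1u = u) fails
```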
Thus V is not a vector space with the stated operations. Every plane through the origin is a vector space (Example 6). SUBSPACES: Defn: A subset W of a vector space V is called a subspace of V if W is itself a vector space under the addition and scalar multiplication defined on V. Theorem: If W is a set of one or more vectors from a vector space V, then W is a subspace of V if and only if the following conditions hold: (a) If u and v are vectors in W, then u + v is in W. (b) If k is any scalar and u is any vector in W, then ku is in W. Examples: 1) Show that a line through the origin of R^3 is a subspace of R^3. Solution: Let W be a line through the origin of R^3. It is evident geometrically that the sum of two vectors on this line also lies on the line, and that a scalar multiple of a vector on the line is on the line as well.

Thus, W is closed under addition and scalar multiplication, so it is a subspace of R^3. 2) A subset of R^2 that is not a subspace. Let W be the set of all points (x, y) in R^2 such that x ≥ 0 & y ≥ 0. These are the points in the first quadrant. The set W is not a subspace of R^2 since it is not closed under scalar multiplication; e.g., v = (1, 1) lies in W, but its -ve (-1)v = -v = (-1, -1) does not.

Every non-zero vector space V has at least two subspaces: V itself is a subspace, and the set {0} consisting of just the zero vector in V is a subspace called the zero subspace. The following list shows the subspaces of R^2 & R^3:

Subspaces of R^2:
- {0}
- Lines through the origin
- R^2

Subspaces of R^3:
- {0}
- Lines through the origin
- Planes through the origin
- R^3

3) Subspaces of Mnn The sum of two symmetric matrices is symmetric, and a scalar multiple of a symmetric matrix is symmetric. Thus, the set of n X n symmetric matrices is a subspace of the vector space Mnn of all n X n matrices. Similarly, the set of n X n upper triangular matrices, the set of n X n lower triangular matrices, and the set of n X n diagonal matrices all form subspaces of Mnn, since each of these sets is closed under addition and scalar multiplication. 4) Solution spaces that are subspaces of R 3 Consider the linear system.

    (a) | 1  -2  3 | |x|   |0|        (b) |  1  -2   3 | |x|   |0|
        | 2  -4  6 | |y| = |0|            | -3   7  -8 | |y| = |0|
        | 3  -6  9 | |z|   |0|            | -2   4  -6 | |z|   |0|

    (c) |  1  -2   3 | |x|   |0|      (d) | 0  0  0 | |x|   |0|
        | -3   7  -8 | |y| = |0|          | 0  0  0 | |y| = |0|
        |  4   1   2 | |z|   |0|          | 0  0  0 | |z|   |0|

Each of these systems has three unknowns, so the solutions form subspaces of R^3. Geometrically, this means that each solution space must be a line through the origin, a plane through the origin, the origin only, or all of R^3. We shall now verify that this is so (leaving it to the student to solve the systems). Solutions: (a) The solutions are x = 2s - 3t, y = s, z = t, from which it follows that x = 2y - 3z, or x - 2y + 3z = 0. This is the equation of the plane through the origin with n = (1, -2, 3) as a normal vector. (b) The solutions are x = -5t, y = -t, z = t, which are parametric equations for the line through the origin parallel to the vector v = (-5, -1, 1). (c) The solution is x = 0, y = 0, z = 0, so the solution space is the origin only, that is, {0}. (d) The solutions are x = r, y = s, z = t, where r, s, t have arbitrary values, so the solution space is all of R^3. Defn: A vector w is called a linear combination of the vectors v1, v2, ---, vr if it can be expressed in the form
    w = k1 v1 + k2 v2 + --- + kr vr,

where k1, k2, ---, kr are scalars.

N.B. If r = 1, then the equation in the preceding definition reduces to w = k1 v1; that is, w is a linear combination of a single vector v1 if it is a scalar multiple of v1. 5) Vectors in R^3 are linear combinations of i, j and k. Every vector v = (a, b, c) in R^3 is expressible as a linear combination of the standard basis vectors i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1), since

v = (a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = ai + bj + ck. 6) Checking a linear combination. Consider the vectors u = (1, 2, -1) and v = (6, 4, 2) in R^3. Show that w = (9, 2, 7) is a linear combination of u and v and that w' = (4, -1, 8) is not a linear combination of u and v. Solution: Thus, v1, v2, & v3 form a linearly dependent set if this system has a nontrivial solution, or a linearly independent set if it has only the trivial solution. Solving this equation yields k1 = -t, k2 = -t, k3 = t. Thus, the system has nontrivial solutions and v1, v2, & v3 form a linearly dependent set. Alternatively, we could show the existence of nontrivial solutions without solving the system by showing that the coefficient matrix has determinant zero and consequently is not invertible. (verify)

Geometric Interpretation of Linear Independence Linear independence has some useful geometric interpretations in R2 & R3. (i) In R2 & R3, a set of two vectors is linearly independent if and only vectors do notice on the same line when they are placed with their initial points at the origin. Fig. below.

[Figure: (a) Linearly dependent, (b) Linearly dependent, (c) Linearly independent]

This follows from the fact that two vectors are linearly independent if and only if neither vector is a scalar multiple of the other. Geometrically, this is equivalent to stating that the vectors do not lie on the same line when they are positioned with their initial points at the origin. (ii) In R3, a set of three vectors is linearly independent if and only if the vectors do not lie in the same plane when they are placed with their initial points at the origin. Fig. below.

[Figure: (a) Linearly dependent, (b) Linearly dependent, (c) Linearly independent]

This result follows from the fact that three vectors are linearly independent if and only if none of the vectors is a linear combination of the other two. Geometrically, this is equivalent to stating that none of the vectors lies in the same plane as the other two, or alternatively, that the three vectors do not lie in a common plane when they are positioned with their initial points at the origin. (Why?) BASES AND DIMENSION: Defn: If V is any vector space and S = {v1, v2, ---, vn} is a set of vectors in V, then S is called a basis for V if the following two conditions hold: (a) S is linearly independent; (b) S spans V. Theorem: If S = {v1, v2, ---, vn} is a basis for a vector space V, then every vector v in V can be expressed in the form v = c1v1 + c2v2 + --- + cnvn in exactly one way. Proof: Since S spans V, it follows from the definition of a spanning set that every vector in V is expressible as a linear combination of the vectors in S. Suppose that some vector v can be written as

    v = c1v1 + c2v2 + --- + cnvn    (i)

and

    v = k1v1 + k2v2 + --- + knvn.    (ii)

Equation (i) - equation (ii) gives

    0 = (c1 - k1)v1 + (c2 - k2)v2 + --- + (cn - kn)vn.

Since the right side of this equation is a linear combination of vectors in S, the linear independence of S implies that

    c1 - k1 = 0,  c2 - k2 = 0,  ---,  cn - kn = 0,

i.e., c1 = k1, c2 = k2, ---, cn = kn.

Thus, the two expressions for v are the same. Examples: 1) If i = (1, 0, 0), j = (0, 1, 0), & k = (0, 0, 1), then S = {i, j, k} is a linearly independent set in R^3. This set also spans R^3, since any vector v = (a, b, c) in R^3 can be written as

    v = (a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = ai + bj + ck.    (i)

Thus, S is a basis for R^3; it is called the standard basis for R^3. Looking at the coefficients of i, j and k in equation (i), it follows that the coordinates of v relative to the standard basis are a, b, and c, so (v)s = (a, b, c). Comparing this to equation (i), we see that

    v = (v)s.

This equation states that the components of a vector v relative to a rectangular xyz coordinate system and the coordinates of v relative to the standard basis are the same; thus, the coordinate system and the basis produce precisely the same one-to-one correspondence between points in 3-space and ordered triples of real numbers.

2) Demonstrating that a set of vectors is a basis

Let v1 = (1, 2, 1), v2 = (2, 9, 0) & v3 = (3, 3, 4). Show that the S = {v1, v2, v3} is a basis for R3. Solution: To show that the set S spans R3, we must show that an arbitrary vector b = (b1, b2, b3) can be expressed as a linear combination. b = c1v1 + c2v2 + c3v3; of the vectors in S. expressing this equation in terms of components gives

( b1 , b2 , b3 ) = c1 (1,2,1) + c2 ( 2,9,0) + c3 ( 3,3,4)

or (b1, b2, b3) = (c1 + 2c2 + 3c3, 2c1 + 9c2 + 3c3, c1 + 4c3), or, on equating corresponding components,

    c1  + 2c2 + 3c3 = b1
    2c1 + 9c2 + 3c3 = b2    (ii)
    c1        + 4c3 = b3

Thus, to show that S spans R3, we must demonstrate that system equation (ii) has a solution for all choices of b = (b1, b2, b3). To prove that S is linearly independent, we must show that the only solution of c1v1 + c2 v2 + c3v3 = 0 ( iii ) is c1 = c2 = c3 = 0. As above, if equation (iii) is expressed in terms of components, the verification of independence reduces to showing that the homogeneous system
    c1  + 2c2 + 3c3 = 0
    2c1 + 9c2 + 3c3 = 0    (iv)
    c1        + 4c3 = 0

has only the trivial solution. Observe that systems (ii) and (iv) have the same coefficient matrix. Thus, we can simultaneously prove that S is linearly independent and spans R^3 by demonstrating that the matrix of coefficients of systems (ii) and (iv) is invertible. For

        | 1  2  3 |
    A = | 2  9  3 |
        | 1  0  4 |

we find det(A) = -1 ≠ 0, & so S is a basis for R^3.
3) Representing a vector using two bases. Let S = {v1, v2, v3} be the basis for R^3 in the preceding example. (a) Find the coordinate vector of v = (5, -1, 9) with respect to S. (b) Find the vector v in R^3 whose coordinate vector with respect to the basis S is (v)s = (-1, 3, 2). Solutions: (a) We must find scalars c1, c2, c3 such that v = c1v1 + c2v2 + c3v3, or in terms of components, (5, -1, 9) = c1(1, 2, 1) + c2(2, 9, 0) + c3(3, 3, 4). Equating corresponding components gives

    c1  + 2c2 + 3c3 = 5
    2c1 + 9c2 + 3c3 = -1
    c1        + 4c3 = 9
Solving this system, we obtain c1 = 1, c2 = -1, c3 = 2. Therefore, (V)s = (1, -1, 2) (b) Using the definition of the coordinate vector (V)s, we obtain
    v = (-1)v1 + 3v2 + 2v3 = (-1)(1, 2, 1) + 3(2, 9, 0) + 2(3, 3, 4) = (11, 31, 7)
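Both parts of this example amount to one matrix computation: with the basis vectors as the columns of a matrix P, part (a) solves P c = v for the coordinates c, and part (b) multiplies P by a given coordinate vector. A sketch assuming numpy is available:

```python
import numpy as np

# Basis S = {v1, v2, v3} as the columns of P, so that P c = v.
P = np.column_stack([(1, 2, 1), (2, 9, 0), (3, 3, 4)]).astype(float)

# (a) coordinate vector of v = (5, -1, 9) with respect to S
c = np.linalg.solve(P, [5, -1, 9])
print(c.round(6))              # [ 1. -1.  2.]

# (b) the vector whose coordinate vector with respect to S is (-1, 3, 2)
v = P @ np.array([-1., 3., 2.])
print(v)                       # [11. 31.  7.]
```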

4) Standard basis for Mmn. Let

    M1 = | 1  0 | ,  M2 = | 0  1 | ,  M3 = | 0  0 | ,  M4 = | 0  0 |
         | 0  0 |         | 0  0 |         | 1  0 |         | 0  1 |

The set S = {M1, M2, M3, M4} is a basis for the vector space M22 of 2 x 2 matrices. To see that S spans M22, note that an arbitrary vector (matrix)

    | a  b |
    | c  d |

can be written as

    | a  b |  =  a | 1  0 | + b | 0  1 | + c | 0  0 | + d | 0  0 |  =  aM1 + bM2 + cM3 + dM4.
    | c  d |       | 0  0 |     | 0  0 |     | 1  0 |     | 0  1 |

To see that S is linearly independent, assume that aM1 + bM2 + cM3 + dM4 = 0, that is,

    a | 1  0 | + b | 0  1 | + c | 0  0 | + d | 0  0 |  =  | 0  0 |
      | 0  0 |     | 0  0 |     | 1  0 |     | 0  1 |     | 0  0 |

It follows that

    | a  b |  =  | 0  0 |
    | c  d |     | 0  0 |

Thus a = b = c = d = 0, so S is linearly independent.
The basis S in this example is called the standard basis for M22. More generally, the standard basis for Mmn consists of the mn different matrices with a single 1 and zeros for the remaining entries. 5) Basis for Subspace span (S) If S = {v1, v2, ---, vr} is a linearly independent set in a vector space V, then S is a basis for the subspace span (S) since the set S spans span (S). Defn: A nonzero vector space V is called finite-dimensional if it contains a finite set of vectors {v1, v2, ---, vn} that forms a basis. If no such set exists, V is called infinite-dimensional. In addition, regard the zero vector space to be finite-dimensional. Theorem

All bases for a finite-dimensional vector space have the same number of vectors. Defn: The dimension of a finite-dimensional vector space V, denoted by dim(V), is defined to be the number of vectors in a basis for V. In addition, we define the zero vector space to have dimension zero. Note: From here on, we follow a common convention of regarding the empty set to be a basis for the zero vector space. This is consistent with the preceding definition, since the empty set has no vectors and the zero vector space has dimension zero. Dimensions of some vector spaces: dim(R^n) = n (the standard basis has n vectors); dim(Pn) = n + 1 (the standard basis has n + 1 vectors); dim(Mmn) = mn (the standard basis has mn vectors). 6) Dimension of a solution space. Determine a basis for, and the dimension of, the solution space of the homogeneous system
    2x1 + 2x2 -  x3       + x5 = 0
    -x1 -  x2 + 2x3 - 3x4 + x5 = 0
     x1 +  x2 - 2x3       - x5 = 0
                  x3 + x4 + x5 = 0

soln: The general soln of the given system is

    x1 = -s - t,  x2 = s,  x3 = -t,  x4 = 0,  x5 = t.

Therefore, the solution vectors can be written as

    |x1|   |-s - t|   |-s|   |-t|      |-1|      |-1|
    |x2|   |   s  |   | s|   | 0|      | 1|      | 0|
    |x3| = |  -t  | = | 0| + |-t| =  s | 0| +  t |-1|
    |x4|   |   0  |   | 0|   | 0|      | 0|      | 0|
    |x5|   |   t  |   | 0|   | t|      | 0|      | 1|

which shows that the vectors

    v1 = (-1, 1, 0, 0, 0)  &  v2 = (-1, 0, -1, 0, 1)

span the solution space. Since they are linearly independent (verify), {v1, v2} is a basis, and the solution space is two-dimensional.
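A computer algebra system finds such a basis directly. A sketch using sympy (assumed to be available); the basis it returns spans the same two-dimensional space as v1 & v2, though the individual vectors may be scaled differently:

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous system above.
A = Matrix([[ 2,  2, -1,  0,  1],
            [-1, -1,  2, -3,  1],
            [ 1,  1, -2,  0, -1],
            [ 0,  0,  1,  1,  1]])

basis = A.nullspace()          # basis for the solution (null) space
print(len(basis))              # 2 -- the solution space is two-dimensional
for v in basis:
    assert A * v == Matrix.zeros(4, 1)   # each basis vector solves Ax = 0
```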

7) Checking for a Basis (a) Show that v1 = (-3, 7) & v2 = (5, 5) form a basis for R2 by inspection. (b) Show that v1 = (2, 0, -1), v2 = (4, 0, 7), v3 = (-1, 1, 4) form a basis for R3 by inspection. Solutions:

(a) Since neither vector is a scalar multiple of the other, the two vectors form a linearly independent set in the two-dimensional space R^2, & hence form a basis. (b) The vectors v1 & v2 form a linearly independent set in the xz-plane (neither is a scalar multiple of the other), and v3 does not lie in the xz-plane, so the set {v1, v2, v3} is also linearly independent. Since R^3 is 3-dimensional, by the theorem below, {v1, v2, v3} is a basis for R^3. Theorem: If V is an n-dimensional vector space and if S is a set in V with exactly n vectors, then S is a basis for V if either S spans V or S is linearly independent. Row space, column space and Null space: Example
    A = | 2   1  0 |
        | 3  -1  4 |

The row vectors of A are

    r1 = [2  1  0]  &  r2 = [3  -1  4],

and the column vectors of A are

    c1 = | 2 | ,   c2 = |  1 | ,   c3 = | 0 | .
         | 3 |          | -1 |          | 4 |
Defn: If A is an m X n matrix, then the subspace of R^n spanned by the row vectors of A is called the row space of A, and the subspace of R^m spanned by the column vectors is called the column space of A. The solution space of the homogeneous system of equations Ax = 0, which is a subspace of R^n, is called the null space of A. Example: (A vector b in the column space of A) Let Ax = b be the linear system

    | -1  3   2 | |x1|   |  1 |
    |  1  2  -3 | |x2| = | -9 |
    |  2  1  -2 | |x3|   | -3 |

Show that b is in the column space of A, and express b as a linear combination of the column vectors of A. Solution: Solving the system by Gaussian elimination yields (verify) x1 = 2, x2 = -1, x3 = 3. Since the system is consistent, b is in the column space of A. A linear system Ax = b of m equations in n unknowns can be written as

    x1 c1 + x2 c2 + ... + xn cn = b,    (i)

where c1, ..., cn are the column vectors of A. From eqn (i), it follows that

    2 | -1 |  -  | 3 |  +  3 |  2 |  =  |  1 |
      |  1 |     | 2 |       | -3 |     | -9 |
      |  2 |     | 1 |       | -2 |     | -3 |

Basis for Row spaces, column spaces, & Null spaces: From previous work, performing an elementary row operation on an augmented matrix does not change the solution set of the corresponding linear system. It follows that

applying an elementary row operation to a matrix A does not change the solution set of the corresponding linear system Ax = 0, or, stated another way, it does not change the null space of A. Thus, we have the following theorem. Theorem: Elementary row operations do not change the null space of a matrix. Example: (Basis for the null space) Find a basis for the null space of

        |  2   2  -1   0   1 |
    A = | -1  -1   2  -3   1 |
        |  1   1  -2   0  -1 |
        |  0   0   1   1   1 |

Solution: The null space of A is the solution space of the homogeneous system

    2x1 + 2x2 -  x3       + x5 = 0
    -x1 -  x2 + 2x3 - 3x4 + x5 = 0
     x1 +  x2 - 2x3       - x5 = 0
                  x3 + x4 + x5 = 0

From example above


    v1 = (-1, 1, 0, 0, 0)  &  v2 = (-1, 0, -1, 0, 1)  form a basis for this space.

Theorems: 1) Elementary row operations do not change the row space of a matrix. 2) If A and B are row-equivalent matrices, then: (a) A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent. (b) A given set of column vectors of A forms a basis for the column space of A if and only if the corresponding column vectors of B form a basis for the column space of B. 3) If a matrix R is in row-echelon form, then the row vectors with the leading 1s (i.e. the nonzero row vectors) form a basis for the row space of R, and the columns containing the leading 1s of the row vectors form a basis for the column space of R. Examples: 1) Basis for row & column spaces. The matrix
    [1  -2  5  0  3]
R = [0   1  3  0  0]
    [0   0  0  1  0]
    [0   0  0  0  0]

is in row-echelon form. The vectors

r1 = [1  -2  5  0  3]
r2 = [0   1  3  0  0]
r3 = [0   0  0  1  0]

form a basis for the row space of R, and the vectors

     [1]        [-2]        [0]
c1 = [0],  c2 = [ 1],  c4 = [0]
     [0]        [ 0]        [1]
     [0]        [ 0]        [0]

form a basis for the column space of R.
2) Basis for Row and column spaces Find bases for the row and column spaces of
    [ 1  -3   4  -2   5   4]
A = [ 2  -6   9  -1   8   2]
    [ 2  -6   9  -1   9   7]
    [-1   3  -4   2  -5  -4]
Solution: Since elementary row operations do not change the row space of a matrix, we can find a basis for the row space of A by finding a basis for the row space of any row-echelon form of A. Reducing A to row-echelon form, we obtain (verify)
    [1  -3   4  -2   5   4]
R = [0   0   1   3  -2  -6]
    [0   0   0   0   1   5]
    [0   0   0   0   0   0]

By Theorem 3 above, the nonzero row vectors of R form a basis for the row space of R, and hence form a basis for the row space of A. These basis vectors are

r1 = [1  -3  4  -2   5   4]
r2 = [0   0  1   3  -2  -6]
r3 = [0   0  0   0   1   5]
* Keeping in mind that A & R may have different column spaces, we cannot find a basis for the column space of A directly from the column vectors of R. However, it follows from Theorem 2(b) that if we can find a set of column vectors of R that forms a basis for the column space of R, then the corresponding column vectors of A will form a basis for the column space of A. The 1st, 3rd and 5th columns of R contain the leading 1s of the row vectors, so the vectors
      [1]         [4]         [ 5]
c1' = [0],  c3' = [1],  c5' = [-2]
      [0]         [0]         [ 1]
      [0]         [0]         [ 0]

form a basis for the column space of R; thus the corresponding column vectors of A, namely

     [ 1]        [ 4]        [ 5]
c1 = [ 2],  c3 = [ 9],  c5 = [ 8]
     [ 2]        [ 9]        [ 9]
     [-1]        [-4]        [-5]

form a basis for the column space of A.

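The pivot-column argument above can be sketched with SymPy: the pivot positions of the reduced form identify which columns of A itself to keep.

```python
import sympy as sp

A = sp.Matrix([[ 1, -3,  4, -2,  5,  4],
               [ 2, -6,  9, -1,  8,  2],
               [ 2, -6,  9, -1,  9,  7],
               [-1,  3, -4,  2, -5, -4]])

rref, pivots = A.rref()
print(pivots)  # (0, 2, 4): the 1st, 3rd and 5th columns carry leading 1s

# Basis for the column space: the pivot columns of A itself (not of rref).
col_basis = [A.col(j) for j in pivots]
print([list(c) for c in col_basis])
```

Note the basis vectors are read off from A, not from the reduced matrix, exactly as the remark above warns.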
END OF H/O 1
ENGINEERING MATHEMATICS 1   H/O 2
3) Basis for a vector space using row operations
Find a basis for the space spanned by the vectors
v1 = (1, -2, 0, 0, 3), v2 = (2, -5, -3, -2, 6), v3 = (0, 5, 15, 10, 0), v4 = (2, 6, 18, 8, 6).
Solution: Except for a variation in notation, the space spanned by these vectors is the row space of the matrix
[1  -2   0   0  3]
[2  -5  -3  -2  6]
[0   5  15  10  0]
[2   6  18   8  6]

Reducing this matrix to row-echelon form, we obtain

[1  -2  0  0  3]
[0   1  3  2  0]
[0   0  1  1  0]
[0   0  0  0  0]
The nonzero row vectors in this matrix are w1 = (1, -2, 0, 0, 3), w2 = (0, 1, 3, 2, 0), w3 = (0, 0, 1, 1, 0). These vectors form a basis for the row space and consequently form a basis for the subspace of R^5 spanned by v1, v2, v3 & v4.
4) Basis for the row space of a matrix
Find a basis for the row space of
    [1  -2   0   0  3]
A = [2  -5  -3  -2  6]
    [0   5  15  10  0]
    [2   6  18   8  6]

consisting entirely of row vectors from A.
Solution: Transpose A, thereby converting the row space of A into the column space of A^T; then find a basis for the column space of A^T, and transpose again to convert the column vectors back to row vectors.

      [ 1   2   0   2]
      [-2  -5   5   6]
A^T = [ 0  -3  15  18]
      [ 0  -2  10   8]
      [ 3   6   0   6]

Reducing this matrix to row-echelon form yields

[1  2   0    2]
[0  1  -5  -10]
[0  0   0    1]
[0  0   0    0]
[0  0   0    0]

The 1st, 2nd and 4th columns contain the leading 1s, so the corresponding column vectors of A^T form a basis for the column space of A^T; these are

     [ 1]        [ 2]        [ 2]
     [-2]        [-5]        [ 6]
c1 = [ 0],  c2 = [-3],  c4 = [18]
     [ 0]        [-2]        [ 8]
     [ 3]        [ 6]        [ 6]
Transposing again and adjusting the notation appropriately yields the basis vectors
r1 = [1  -2  0  0  3],  r2 = [2  -5  -3  -2  6]  &  r4 = [2  6  18  8  6]
for the row space of A.

RANK AND NULLITY:
1) Theorem: If A is any matrix, then the row space and column space of A have the same dimension.
Defn: The common dimension of the row space and column space of a matrix A is called the rank of A and is denoted by rank(A); the dimension of the null space of A is called the nullity of A and is denoted by nullity(A).
2) Theorem: If A is any matrix, then rank(A) = rank(A^T).
3) Theorem (Dimension theorem for matrices): If A is a matrix with n columns, then rank(A) + nullity(A) = n.
4) Theorem: If A is an m x n matrix, then:
(a) rank(A) = the number of leading variables in the solution of Ax = 0.
(b) nullity(A) = the number of parameters in the general solution of Ax = 0.
1) Example: The sum of rank and nullity
The matrix
    [-1   2  0   4   5  -3]
A = [ 3  -7  2   0   1   4]
    [ 2  -5  2   4   6   1]
    [ 4  -9  2  -4  -4   7]
NB: 1. rank(A) = number of leading variables; 2. nullity(A) = number of free variables.
The reduced row-echelon form of A is

[1  0  -4  -28  -37  13]
[0  1  -2  -12  -16   5]        (i)
[0  0   0    0    0   0]
[0  0   0    0    0   0]

Since there are two nonzero rows (i.e., two leading 1s), the row space and column space are both two-dimensional, so rank(A) = 2.
Nullity: To find the dimension of the solution space of the linear system Ax = 0, we solve the system by reducing the augmented matrix to reduced row-echelon form. The resulting matrix will be identical to (i), except with an additional last column of zeros, and the corresponding system of equations will be
x1 - 4x3 - 28x4 - 37x5 + 13x6 = 0
x2 - 2x3 - 12x4 - 16x5 +  5x6 = 0        (ii)

Or, on solving for the leading variables,

x1 = 4x3 + 28x4 + 37x5 - 13x6
x2 = 2x3 + 12x4 + 16x5 -  5x6

It follows that the general solution of the system is

x1 = 4r + 28s + 37t - 13u
x2 = 2r + 12s + 16t -  5u
x3 = r
x4 = s
x5 = t
x6 = u

or, equivalently,

[x1]     [4]     [28]     [37]     [-13]
[x2]     [2]     [12]     [16]     [ -5]
[x3] = r [1] + s [ 0] + t [ 0] + u [  0]        (iii)
[x4]     [0]     [ 1]     [ 0]     [  0]
[x5]     [0]     [ 0]     [ 1]     [  0]
[x6]     [0]     [ 0]     [ 0]     [  1]

The 4 vectors on the right side of equation (iii) form a basis for the solution space, so nullity(A) = 4. Since A has 6 columns, rank(A) + nullity(A) = 6, which is consistent with rank(A) = 2 and nullity(A) = 4.
2) Example: Number of parameters in a general solution
Find the number of parameters in the general solution of Ax = 0 if A is a 5 x 7 matrix of rank 3.
Solution:

From rank(A) + nullity(A) = n, with n = 7 and rank(A) = 3, we get
nullity(A) = 7 - 3 = 4.
Thus, there are 4 parameters.

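The rank-nullity bookkeeping in the first example can be checked directly; a short SymPy sketch with the matrix from that example:

```python
import sympy as sp

A = sp.Matrix([[-1,  2, 0,  4,  5, -3],
               [ 3, -7, 2,  0,  1,  4],
               [ 2, -5, 2,  4,  6,  1],
               [ 4, -9, 2, -4, -4,  7]])

r = A.rank()
nullity = A.cols - r          # dimension theorem: rank + nullity = n
print(r, nullity)             # 2 4

# The nullspace basis has exactly nullity(A) vectors.
print(len(A.nullspace()))     # 4
```

The four null-space vectors returned here correspond to the parameters r, s, t, u in equation (iii).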
ORTHONORMAL BASES; GRAM-SCHMIDT PROCESS; QR-DECOMPOSITION:
Defn: A set of vectors in an inner product space is called an orthogonal set if all pairs of distinct vectors in the set are orthogonal. An orthogonal set in which each vector has norm 1 is called orthonormal.
An inner product on a real vector space V is a function that associates a real number <u, v> with each pair of vectors u and v in V in such a way that the following axioms are satisfied for all vectors u, v & w in V and all scalars k:
1) <u, v> = <v, u>                              [symmetry axiom]
2) <u + v, w> = <u, w> + <v, w>                 [additivity axiom]
3) <ku, v> = k<u, v>                            [homogeneity axiom]
4) <v, v> >= 0, and <v, v> = 0 if and only if v = 0.    [positivity axiom]
A real vector space with an inner product is called a real inner product space. Read more about inner products.

ORTHOGONAL MATRICES; CHANGE OF BASIS
Defn: A square matrix A with the property A^-1 = A^T is said to be an orthogonal matrix. It follows that a square matrix A is orthogonal if and only if
A A^T = A^T A = I        (i)

or, equivalently, A A^T = I or A^T A = I.
1) Example: (a 3 x 3 orthogonal matrix)
The matrix

    [ 3/7  2/7   6/7]
A = [-6/7  3/7   2/7]
    [ 2/7  6/7  -3/7]

is orthogonal, since

        [3/7  -6/7   2/7] [ 3/7  2/7   6/7]   [1  0  0]
A^T A = [2/7   3/7   6/7] [-6/7  3/7   2/7] = [0  1  0]
        [6/7   2/7  -3/7] [ 2/7  6/7  -3/7]   [0  0  1]

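The orthogonality check above is easy to reproduce numerically; a minimal NumPy sketch:

```python
import numpy as np

A = np.array([[ 3.0,  2.0,  6.0],
              [-6.0,  3.0,  2.0],
              [ 2.0,  6.0, -3.0]]) / 7.0

print(np.allclose(A.T @ A, np.eye(3)))      # True: A^T A = I
print(np.allclose(np.linalg.inv(A), A.T))   # True: A^-1 = A^T
```

Both prints confirm the two equivalent characterizations of an orthogonal matrix given in (i).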
2) Example: (A rotation matrix is orthogonal)
The standard matrix for the counterclockwise rotation of R^2 through an angle θ is

A = [cos θ   -sin θ]
    [sin θ    cos θ]

This matrix is orthogonal for every choice of θ, since

A^T A = [ cos θ   sin θ] [cos θ   -sin θ] = [1  0]
        [-sin θ   cos θ] [sin θ    cos θ]   [0  1]

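The identity A^T A = I holds for any angle; a quick sketch (0.7 is an arbitrary test angle):

```python
import numpy as np

theta = 0.7  # any angle gives an orthogonal matrix
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A.T @ A, np.eye(2)))  # True
```

The check works because cos²θ + sin²θ = 1 and the cross terms cancel, exactly as in the product above.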
Change of Basis:
Qn: If we change the basis for a vector space V from some old basis B to some new basis B', how is the old coordinate matrix [v]B of a vector v related to the new coordinate matrix [v]B'?
For simplicity, we will solve this problem for two-dimensional spaces; the solution for n-dimensional spaces is similar and is left for the reader. Let B = {u1, u2} and B' = {u1', u2'} be the old and new bases, respectively. We will need the coordinate matrices for the new basis vectors relative to the old basis. Suppose they are

         [a]              [c]
[u1']B = [b]  &  [u2']B = [d]        (i)

That is,

u1' = a u1 + b u2
u2' = c u1 + d u2        (ii)

Let v be any vector in V, and let

        [k1]
[v]B' = [k2]        (iii)

be the new coordinate matrix, so that

v = k1 u1' + k2 u2'        (iv)

In order to find the old coordinates of v, we must express v in terms of the old basis B; hence, substituting equation (ii) into equation (iv) yields

v = k1(a u1 + b u2) + k2(c u1 + d u2)

or

v = (k1 a + k2 c) u1 + (k1 b + k2 d) u2

Thus, the old coordinate matrix for v is

       [k1 a + k2 c]
[v]B = [k1 b + k2 d]

This can be written as

       [a  c] [k1]                                    [a  c]
[v]B = [b  d] [k2]   or, from equation (iii),  [v]B = [b  d] [v]B'

This equation states that the old coordinate matrix [v]B results when we multiply the new coordinate matrix [v]B' on the left by the matrix

    [a  c]
P = [b  d]

The columns of this matrix are the coordinates of the new basis vectors relative to the old basis (equation (i)).

In general, [v]B = P [v]B'        (v)

where the columns of P are [u1']B, [u2']B, ..., [un']B.

Transition matrices: The matrix P is called the transition matrix from B' to B; it can be expressed in terms of its column vectors as

P = [ [u1']B   [u2']B   ...   [un']B ]        (vi)
Examples: (finding a transition matrix)
Consider the bases B = {u1, u2} and B' = {u1', u2'} for R^2, where
u1 = (1, 0); u2 = (0, 1); u1' = (1, 1); u2' = (2, 1).
(a) Find the transition matrix from B' to B.
(b) Use equation (v) to find [v]B if

        [-3]
[v]B' = [ 5]

Solution: (a) Begin by finding the coordinate matrices for the new basis vectors u1' and u2' relative to the old basis B. By inspection,

u1' = u1 + u2
u2' = 2u1 + u2

so that

         [1]             [2]
[u1']B = [1]  & [u2']B = [1]

Thus, the transition matrix from B' to B is

    [1  2]
P = [1  1]

(b) Using equation (v) and the transition matrix in part (a),

                [1  2] [-3]   [7]
[v]B = P[v]B' = [1  1] [ 5] = [2]

NOTE: To obtain the transition matrix from B to B', simply change the point of view and regard B' as the old basis and B as the new basis. As usual, the columns of the transition matrix will be the coordinates of the new basis vectors relative to the old basis. By equating corresponding components and solving the resulting linear system, the student should be able to show that

u1 = -u1' + u2'
u2 = 2u1' - u2'

so that

          [-1]             [ 2]
[u1]B' =  [ 1]  & [u2]B' = [-1]

Thus, the transition matrix from B to B' is

    [-1   2]
Q = [ 1  -1]

If we multiply the transition matrix from B' to B obtained in the example above by the transition matrix from B to B' obtained in this note, we find

     [1  2] [-1   2]   [1  0]
PQ = [1  1] [ 1  -1] = [0  1] = I,  which shows that Q = P^-1.

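The whole example can be sketched numerically; note that the inverse of P recovers Q, as the note concludes:

```python
import numpy as np

# Columns of P: the old-basis coordinates of the new basis vectors.
P = np.array([[1.0, 2.0],
              [1.0, 1.0]])
v_new = np.array([-3.0, 5.0])   # [v]_B'

v_old = P @ v_new               # [v]_B, by equation (v)
print(v_old)                    # [7. 2.]

Q = np.linalg.inv(P)            # transition matrix from B to B'
print(np.allclose(Q, [[-1.0, 2.0], [1.0, -1.0]]))  # True
```

Computing Q as P^-1 avoids solving the two linear systems by hand, at the cost of hiding the coordinate interpretation of its columns.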
MORE ABOUT EIGENVALUES AND EIGENVECTORS:
1) Example: Eigenvalues of a 3 x 3 matrix
Find the eigenvalues of

    [0    1  0]
A = [0    0  1]
    [4  -17  8]

Solution: The characteristic polynomial of A is

                  [ λ  -1    0 ]
det(λI - A) = det [ 0   λ   -1 ] = λ^3 - 8λ^2 + 17λ - 4
                  [-4  17  λ-8 ]

The eigenvalues of A must therefore satisfy the cubic equation

λ^3 - 8λ^2 + 17λ - 4 = 0        (i)

For a general polynomial equation λ^n + c1 λ^(n-1) + ... + cn = 0, any integer solution must be a divisor of the constant term cn. Hence the only possible integer solutions of equation (i) are the divisors of -4, i.e.
±1, ±2, ±4.

Successively substituting these values in equation (i) shows that λ = 4 is an integer solution, so λ - 4 is a factor of the left side of equation (i). Dividing λ^3 - 8λ^2 + 17λ - 4 by λ - 4 shows that equation (i) can be written as

(λ - 4)(λ^2 - 4λ + 1) = 0

Thus, the remaining solutions of equation (i) satisfy the quadratic equation λ^2 - 4λ + 1 = 0. The eigenvalues of A are therefore λ = 4, λ = 2 + √3 & λ = 2 - √3.
2) Example: Eigenvalues of an upper triangular matrix
Find the eigenvalues of the upper triangular matrix

    [a11  a12  a13  a14]
A = [ 0   a22  a23  a24]
    [ 0    0   a33  a34]
    [ 0    0    0   a44]

Solution: Recalling that the determinant of a triangular matrix is the product of the entries on the main diagonal, we obtain

                  [λ-a11  -a12    -a13    -a14 ]
det(λI - A) = det [  0    λ-a22   -a23    -a24 ]
                  [  0      0     λ-a33   -a34 ]
                  [  0      0       0    λ-a44 ]

            = (λ - a11)(λ - a22)(λ - a33)(λ - a44)

Thus, the characteristic equation is

(λ - a11)(λ - a22)(λ - a33)(λ - a44) = 0

and the eigenvalues are λ = a11, λ = a22, λ = a33, λ = a44, which are precisely the diagonal entries of A.
Theorem: If A is an n x n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries on the main diagonal of A.
Example: (Eigenvalues of a lower triangular matrix)
By inspection, the eigenvalues of the lower triangular matrix

    [1/2    0     0 ]
A = [-1    2/3    0 ]
    [ 5    -8   -1/4]

are λ = 1/2, λ = 2/3 & λ = -1/4.
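The characteristic polynomial and eigenvalues of the 3 x 3 example above can be checked numerically; a NumPy sketch:

```python
import numpy as np

A = np.array([[0.0,   1.0, 0.0],
              [0.0,   0.0, 1.0],
              [4.0, -17.0, 8.0]])

# Coefficients of det(lambda*I - A), highest power first.
print(np.poly(A))               # approx [ 1. -8. 17. -4.]

eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)                     # approx [2 - sqrt(3), 2 + sqrt(3), 4]
```

`np.poly` reproduces the cubic λ^3 - 8λ^2 + 17λ - 4, and the computed eigenvalues match 4 and 2 ± √3 to floating-point accuracy.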
Finding Bases for Eigenspaces: The eigenvectors of A corresponding to an eigenvalue λ are the nonzero vectors x that satisfy Ax = λx. Equivalently, the eigenvectors corresponding to λ are the nonzero vectors in the solution space of (λI - A)x = 0. We call this solution space the eigenspace of A corresponding to λ.
Example: Bases for eigenspaces
Find bases for the eigenspaces of
    [0  0  -2]
A = [1  2   1]
    [1  0   3]

Solution: The characteristic equation of matrix A is

λ^3 - 5λ^2 + 8λ - 4 = 0, or, in factored form, (λ - 1)(λ - 2)^2 = 0 (verify);

thus the eigenvalues of A are λ = 1 & λ = 2, so there are two eigenspaces of A.

By definition,

    [x1]
x = [x2]
    [x3]

is an eigenvector of A corresponding to λ if and only if x is a nontrivial solution of (λI - A)x = 0, i.e., of

[ λ    0    2 ] [x1]   [0]
[-1  λ-2   -1 ] [x2] = [0]        (a)
[-1    0  λ-3 ] [x3]   [0]

If λ = 2, then equation (a) becomes

[ 2  0   2] [x1]   [0]
[-1  0  -1] [x2] = [0]
[-1  0  -1] [x3]   [0]

Solving this system yields (verify) x1 = -s, x2 = t, x3 = s. Thus, the eigenvectors of A corresponding to λ = 2 are the nonzero vectors of the form

    [-s]   [-s]   [0]     [-1]     [0]
x = [ t] = [ 0] + [t] = s [ 0] + t [1]
    [ s]   [ s]   [0]     [ 1]     [0]

Since

[-1]      [0]
[ 0]  &   [1]
[ 1]      [0]

are linearly independent, these vectors form a basis for the eigenspace corresponding to λ = 2.
If λ = 1, then equation (a) becomes

[ 1   0   2] [x1]   [0]
[-1  -1  -1] [x2] = [0]
[-1   0  -2] [x3]   [0]

Solving this system yields (verify) x1 = -2s, x2 = s, x3 = s. Thus, the eigenvectors corresponding to λ = 1 are the nonzero vectors of the form

[-2s]     [-2]           [-2]
[  s] = s [ 1],  so that [ 1]
[  s]     [ 1]           [ 1]

is a basis for the eigenspace corresponding to λ = 1.

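Both eigenspace bases can be recovered in one call with SymPy; a minimal sketch:

```python
import sympy as sp

A = sp.Matrix([[0, 0, -2],
               [1, 2,  1],
               [1, 0,  3]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, basis vectors).
for eigval, alg_mult, vecs in A.eigenvects():
    print(eigval, [list(v) for v in vecs])
# lambda = 1 -> [-2, 1, 1];  lambda = 2 -> [0, 1, 0] and [-1, 0, 1]
```

For λ = 2 the routine returns two basis vectors (geometric multiplicity 2), matching the hand computation above.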
Diagonalization:
Defn: A square matrix A is called diagonalizable if there is an invertible matrix P such that P^-1AP is a diagonal matrix; the matrix P is said to diagonalize A.
Theorem: If A is an n x n matrix, then the following are equivalent.
(a) A is diagonalizable.
(b) A has n linearly independent eigenvectors.

Procedure for diagonalizing a matrix:
Step 1. Find n linearly independent eigenvectors of A, say p1, p2, ..., pn.
Step 2. Form the matrix P having p1, p2, ..., pn as its column vectors.
Step 3. The matrix P^-1AP will then be diagonal with λ1, λ2, ..., λn as its successive diagonal entries, where λi is the eigenvalue corresponding to pi, for i = 1, 2, ..., n.

1) Example: (finding a matrix P that diagonalizes a matrix A)
Find a matrix P that diagonalizes

    [0  0  -2]
A = [1  2   1]
    [1  0   3]

Solution: From the example above, the characteristic equation of A is (λ - 1)(λ - 2)^2 = 0, and A has the following bases for the eigenspaces:

              [-1]        [0]                  [-2]
λ = 2:   p1 = [ 0],  p2 = [1];    λ = 1:  p3 = [ 1]
              [ 1]        [0]                  [ 1]

There are 3 basis vectors in total, so the matrix A is diagonalizable and

    [-1  0  -2]
P = [ 0  1   1]
    [ 1  0   1]

diagonalizes A (verify that

         [2  0  0]
P^-1AP = [0  2  0]  ).
         [0  0  1]

There is no preferred order for the columns of P. Since the i-th diagonal entry of P^-1AP is the eigenvalue for the i-th column vector of P, changing the order of the columns of P just changes the order of the eigenvalues on the diagonal of P^-1AP. Thus, had we written
    [-1  -2  0]
P = [ 0   1  1]
    [ 1   1  0]

in place of the P above, we would have obtained

         [2  0  0]
P^-1AP = [0  1  0]
         [0  0  2]

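The three-step procedure is easy to verify numerically; here P is assembled from the eigenspace basis vectors p1, p2, p3 found above:

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])

# Columns of P: the eigenspace basis vectors p1, p2, p3 found above.
P = np.array([[-1.0, 0.0, -2.0],
              [ 0.0, 1.0,  1.0],
              [ 1.0, 0.0,  1.0]])

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))          # diag(2, 2, 1)
```

Reordering the columns of P permutes the diagonal of D accordingly, as the remark above explains.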
2) Example: A matrix that is not diagonalizable
Find a matrix P that diagonalizes

    [ 1  0  0]
A = [ 1  2  0]
    [-3  5  2]

Solution: The characteristic polynomial of A is

                  [λ-1    0     0 ]
det(λI - A) = det [ -1  λ-2     0 ] = (λ - 1)(λ - 2)^2
                  [  3   -5   λ-2 ]

so the characteristic equation is (λ - 1)(λ - 2)^2 = 0. Thus, the eigenvalues of A are λ = 1 & λ = 2. (Student: show that bases for the eigenspaces are

              [ 1/8]                 [0]
λ = 1:   p1 = [-1/8];   λ = 2:  p2 = [0]  )
              [  1 ]                 [1]

Since A is a 3 x 3 matrix and there are only two basis vectors in total, A is not diagonalizable.
Theorem: If an n x n matrix A has n distinct eigenvalues, then A is diagonalizable.
Example: Previously, we had the matrix
    [0    1  0]
A = [0    0  1]
    [4  -17  8]

which has 3 distinct eigenvalues, λ = 4, λ = 2 + √3 & λ = 2 - √3. Therefore, A is diagonalizable; further,

         [4      0        0   ]
P^-1AP = [0   2 + √3      0   ]
         [0      0     2 - √3 ]

for some invertible matrix P. If desired, the matrix P can be found using the method shown in example 1 above.
END
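As a closing check on the two diagonalization results above, SymPy can test diagonalizability directly; a sketch:

```python
import sympy as sp

# The matrix from example 2 above: only two independent eigenvectors.
B = sp.Matrix([[ 1, 0, 0],
               [ 1, 2, 0],
               [-3, 5, 2]])
print(B.is_diagonalizable())        # False

# The earlier matrix with three distinct eigenvalues.
A = sp.Matrix([[0,   1, 0],
               [0,   0, 1],
               [4, -17, 8]])
P, D = A.diagonalize()
diag = [D[i, i] for i in range(3)]
print(diag)   # 4, 2 - sqrt(3), 2 + sqrt(3), in some order
```

`diagonalize()` raises an error on B, so `is_diagonalizable()` is the safe query there; for A it returns both P and the diagonal matrix D predicted by the distinct-eigenvalues theorem.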

det(A) = a11(a22 a33 - a23 a32) - a12(a21 a33 - a23 a31) + a13(a21 a32 - a22 a31)        (ii)

The cofactors of a11, a12 & a13 involve the following minors (determinants):

for a11:   M11 = | a22  a23 |
                 | a32  a33 |

for a12:   M12 = | a21  a23 |
                 | a31  a33 |

for a13:   M13 = | a21  a22 |
                 | a31  a32 |
