Vector Space


Study Materials for 1st Year B.Tech Students
Paper Name: Mathematics    Paper Code: M201    Teacher Name: Rahuldeb Das

Lecture 1 Definition
DEFINITION (Vector Space) A vector space over a field F, denoted V(F), is a non-empty set V satisfying the following axioms:

1. VECTOR ADDITION: To every pair u, v in V there corresponds a unique element u + v in V such that
   1. u + v = v + u (commutative law);
   2. (u + v) + w = u + (v + w) (associative law);
   3. there is a unique element 0 in V (the zero vector) such that u + 0 = u for every u in V (called the additive identity);
   4. for every u in V there is a unique element -u in V such that u + (-u) = 0 (called the additive inverse).
   The map (u, v) -> u + v is called VECTOR ADDITION.

2. SCALAR MULTIPLICATION: For each u in V and α in F there corresponds a unique element αu in V such that
   1. α(βu) = (αβ)u for every α, β in F and u in V;
   2. 1·u = u for every u in V, where 1 is the unit element of F.

3. DISTRIBUTIVE LAWS, RELATING VECTOR ADDITION WITH SCALAR MULTIPLICATION: For any α, β in F and u, v in V, the following distributive laws hold:
   1. α(u + v) = αu + αv;
   2. (α + β)u = αu + βu.

Note: the number 0 is the zero element of F, whereas the 0 in the axioms above is the zero vector of V.

Remark The elements of F are called SCALARS, and those of V are called VECTORS. If F is the set of real numbers, the vector space is called a REAL VECTOR SPACE; if F is the set of complex numbers, it is called a COMPLEX VECTOR SPACE. We may sometimes write V in place of V(F) if F is understood from the context.

THEOREM Let V be a vector space over F. Then
1. u + v = u implies v = 0;
2. αu = 0 if and only if either α = 0 or u is the zero vector;
3. (-1)u = -u for every u in V.

Proof.
Proof of Part 1. By Axiom 1.4 there exists -u in V such that u + (-u) = 0. Hence u + v = u is equivalent to
-u + (u + v) = -u + u, i.e. (-u + u) + v = 0, i.e. v = 0.

Proof of Part 2. As 0 = 0 + 0, using the distributive law, we have
0·u = (0 + 0)u = 0·u + 0·u.
Thus, for any u in V, the first part implies 0·u = 0. In the same way,
α·0 = α(0 + 0) = α·0 + α·0.
Hence, using the first part, one has α·0 = 0 for any α in F. Now suppose αu = 0. If α = 0 then the proof is over. Therefore, let us assume α ≠ 0 (note that α is a real or complex number, hence 1/α exists), and then
u = 1·u = ((1/α)α)u = (1/α)(αu) = (1/α)·0 = 0.
Thus we have shown that if α ≠ 0 and αu = 0, then u = 0.

Proof of Part 3. We have 0 = 0·u = (1 + (-1))u = u + (-1)u, and hence (-1)u = -u.

Examples
1. The set R of real numbers, with the usual addition and multiplication, forms a vector space over R.
2. Consider the set R^2 = {(x1, x2) : x1, x2 in R}. For (x1, x2), (y1, y2) in R^2 and α in R, define
(x1, x2) + (y1, y2) = (x1 + y1, x2 + y2) and α(x1, x2) = (αx1, αx2).
Then R^2 is a real vector space.
3. Let R^n be the set of n-tuples of real numbers. For u = (u1, ..., un) and v = (v1, ..., vn) in R^n and α in R, we define
u + v = (u1 + v1, ..., un + vn) and αu = (αu1, ..., αun)
(called component-wise or coordinate-wise operations). Then R^n is a real vector space with addition and scalar multiplication defined as above. This vector space is called the real vector space of n-tuples.
4. Let V = R+ (the set of positive real numbers). This is NOT A VECTOR SPACE under the usual operations of addition and scalar multiplication (why?). We now define a new vector addition and scalar multiplication as
u ⊕ v = uv and α ⊙ u = u^α for all u, v in R+ and α in R.
Then R+ is a real vector space with 1 as its zero vector.
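The modified operations on the positive reals can be spot-checked numerically. The sketch below (not part of the notes; helper names are assumptions) samples a few values and verifies each axiom for u ⊕ v = uv and α ⊙ u = u^α, with 1 playing the role of the zero vector.

```python
import math

def add(u, v):      # "vector addition" on the positive reals: u (+) v = u*v
    return u * v

def smul(a, u):     # "scalar multiplication": a (.) u = u**a
    return u ** a

zero = 1.0          # additive identity: u (+) 1 = u

u, v, w = 2.0, 5.0, 0.5
a, b = 3.0, -2.0

assert math.isclose(add(u, v), add(v, u))                      # commutative law
assert math.isclose(add(add(u, v), w), add(u, add(v, w)))      # associative law
assert math.isclose(add(u, zero), u)                           # additive identity
assert math.isclose(add(u, smul(-1.0, u)), zero)               # inverse of u is u^(-1)
assert math.isclose(smul(a, smul(b, u)), smul(a * b, u))       # α(βu) = (αβ)u
assert math.isclose(smul(1.0, u), u)                           # 1·u = u
assert math.isclose(smul(a, add(u, v)), add(smul(a, u), smul(a, v)))  # α(u+v)
assert math.isclose(smul(a + b, u), add(smul(a, u), smul(b, u)))      # (α+β)u
print("all vector-space axioms hold for the sampled values")
```

A numeric check like this is only evidence for the sampled points, not a proof; the proof is the algebra of exponents (u^a u^b = u^(a+b), (u^b)^a = u^(ab)).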

Subspaces
DEFINITION (Vector Subspace) Let S be a NON-EMPTY SUBSET of a vector space V. S is said to be a subspace of V if αu + βv ∈ S whenever α, β ∈ F and u, v ∈ S, where the vector addition and scalar multiplication are the same as those of V.

Remark Any subspace is a vector space in its own right with respect to the vector addition and scalar multiplication that is defined for V.

EXAMPLES
1. Let V be a vector space. Then the set {0} consisting of the zero vector, and V itself, are vector subspaces of V. These are called trivial subspaces.
2. Let S = {(x, y, z) ∈ R^3 : x + y + z = 0}. Then S is a subspace of R^3 (S is a plane in R^3 passing through the origin).
3. Let S = {(x, y, z) ∈ R^3 : x + y + z = 3}. Then S is not a subspace of R^3 (S is again a plane in R^3, but it doesn't pass through the origin).
4. Let S = {(x, y, z) ∈ R^3 : z = x}. Then S is a subspace of R^3.
5. The vector space R+ of Example 4 above is not a subspace of the real vector space R, since its vector addition and scalar multiplication are not those of R.
6. Consider the set R of real numbers as a subset of the complex vector space C over C. For the scalar i and the vector 1 ∈ R we have i·1 = i ∉ R; hence R is not a subspace of the complex vector space C. But if we think of C as a subset of the real vector space R^2 (component-wise addition and scalar multiplication), then R is a subspace.
7. Check that S = {(x, y) ∈ R^2 : y = mx}, for a fixed m ∈ R, is a subspace of R^2 (S represents a line passing through (0, 0), the zero vector of R^2). The reason is the following: the condition y = mx implies that for any scalar α and any (x, mx), (x', mx') ∈ S,
α(x, mx) + (x', mx') = (αx + x', m(αx + x')) ∈ S.
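The subspace criterion can be sampled numerically. The sketch below (helper names are assumptions, not from the notes) checks closure under αu + βv for a plane through the origin and for a shifted plane: the first stays closed, the second does not.

```python
import itertools

def in_plane_origin(p):   # x + y + z = 0  (passes through the origin)
    x, y, z = p
    return abs(x + y + z) < 1e-9

def in_plane_shifted(p):  # x + y + z = 3  (does not pass through the origin)
    x, y, z = p
    return abs(x + y + z - 3) < 1e-9

def closed_under_combinations(member, points, scalars):
    """Check that a*u + b*v stays in the set for sampled u, v and scalars a, b."""
    for u, v in itertools.product(points, repeat=2):
        for a, b in itertools.product(scalars, repeat=2):
            w = tuple(a * ui + b * vi for ui, vi in zip(u, v))
            if not member(w):
                return False
    return True

pts0 = [(1.0, -1.0, 0.0), (0.0, 2.0, -2.0), (3.0, -1.0, -2.0)]  # on x+y+z=0
pts3 = [(1.0, 1.0, 1.0), (3.0, 0.0, 0.0), (0.0, 1.0, 2.0)]      # on x+y+z=3
scalars = [-2.0, 0.5, 1.0, 3.0]

assert closed_under_combinations(in_plane_origin, pts0, scalars)
assert not closed_under_combinations(in_plane_shifted, pts3, scalars)
print("plane through origin: closed; shifted plane: not closed")
```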

Assignment
1. Let V be the set of all real numbers. Define an addition ⊕ and a scalar multiplication ⊙ on V (different from the usual operations), and prove that V is a real vector space with respect to the operations defined above.
2. Which of the following are correct statements?
   1. Let S be a non-empty subset of a vector space V containing the zero vector. Then S is a vector subspace of V.
   2. Let V be a vector space. Then the set of all scalar multiples of a fixed vector forms a vector subspace of V.
   3. Let V be a real vector space. Then every plane in V is a subspace of the real vector space V.

Multiple Choice Questions:
1. If, in a vector space, c·α = 0 (c a scalar, α a vector), then
   a) c = 0 and α = 0        b) only c = 0
   c) only α = 0             d) c = 0 or α = 0
2. If S and T are two subspaces of a vector space V, then which one of the following is also a subspace of V?
   a) S ∪ T                  b) S ∩ T
   c) S − T                  d) T − S
3. If a finite set has 5 elements, then its power set has
   a) 5! elements            b) 2^5 elements
   c) 10 elements            d) None of these

Lecture 2 Linear Combinations


DEFINITION (Linear Span) Let V be a vector space and let S = {u1, u2, ..., un} be a non-empty subset of V. The linear span of S is the set defined by
L(S) = {α1·u1 + α2·u2 + ... + αn·un : αi ∈ F, 1 ≤ i ≤ n}.
If S is an empty set, we define L(S) = {0}.

EXAMPLES
1. Is a given vector w a linear combination of vectors u1, ..., un? Solution: We want to find scalars α1, ..., αn such that
α1·u1 + ... + αn·un = w.   (3.1.1)
This is a system of linear equations in the unknowns αi; w is a linear combination of u1, ..., un exactly when the system is consistent.
2. Verify that a vector w is not a linear combination of the vectors u1, ..., un when the system (3.1.1) is inconsistent.
3. The linear span of S over F is the smallest subspace of V containing S (see the theorem below).
4. S ⊂ L(S), as ui = 1·ui ∈ L(S) for each i.

LEMMA (Linear Span is a Subspace) Let V be a vector space and let S be a non-empty subset of V. Then L(S) is a subspace of V.

Proof. By definition, S ⊂ L(S), and hence L(S) is a non-empty subset of V. Let u, v ∈ L(S). Then there exist vectors u1, ..., un in S and scalars α1, ..., αn, β1, ..., βn such that u = α1·u1 + ... + αn·un and v = β1·u1 + ... + βn·un. Hence, for any scalars a and b,
a·u + b·v = (a·α1 + b·β1)u1 + ... + (a·αn + b·βn)un ∈ L(S).
Thus, L(S) is a vector subspace of V.

Remark Let W be a subspace of a vector space V. If S ⊂ W, then L(S) ⊂ W, as W is a vector space in its own right.

THEOREM Let S be a non-empty subset of a vector space V. Then L(S) is the smallest subspace of V containing S.

Proof. For every u ∈ S, u = 1·u ∈ L(S), and therefore S ⊂ L(S). To show that L(S) is the smallest subspace of V containing S, consider any subspace W of V containing S. Then by the remark above, L(S) ⊂ W, and hence the result follows.

DEFINITION Let A be an m × n matrix with real entries. Then using the rows a1, ..., am and the columns b1, ..., bn of A, we define
1. the row space of A, denoted Row(A) = L(a1, ..., am);
2. the column space of A, denoted Col(A) = L(b1, ..., bn);
3. the null space of A, which consists of all x such that Ax = 0;
4. the range of A, Range(A) = {Ax : x ∈ R^n}.

Note that the "column space" of a matrix A consists of all vectors b such that the system Ax = b has a solution. Hence, Col(A) = Range(A).

LEMMA Let A be an m × n matrix. Suppose B = EA for some elementary matrix E. Then Row(A) = Row(B).

THEOREM Let A be an m × n matrix with real entries. Then
1. Row(A) is a subspace of R^n;
2. the non-zero row vectors of a matrix in row-reduced form form a basis for the row space. Hence dim(Row(A)) equals the number of non-zero rows in the row-reduced form of A.

Proof. Part 1 can be easily proved. For part 2, let B be the row-reduced form of A, with non-zero rows b1, ..., br. Then B = Ek...E2E1A for some elementary matrices E1, E2, ..., Ek. A repeated application of the lemma implies Row(A) = Row(B). That is, if the rows of A are a1, ..., am, then L(a1, ..., am) = L(b1, ..., br). Hence the required result follows.
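The row reduction used in the proof can be carried out mechanically. The sketch below (helper names are assumptions, not from the notes) row-reduces a matrix with exact rational arithmetic; the non-zero rows of the result span the row space, and their number is the rank.

```python
from fractions import Fraction

def row_reduce(rows):
    """Return the row-reduced form of a matrix given as a list of rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a non-zero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        piv = m[pivot_row][col]
        m[pivot_row] = [x / piv for x in m[pivot_row]]   # scale pivot to 1
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:        # clear the column
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

def rank(rows):
    return sum(1 for row in row_reduce(rows) if any(x != 0 for x in row))

A = [[1, 2, 1], [2, 1, 4], [3, 3, 5]]    # third row = first row + second row
assert rank(A) == 2                      # so the row space has dimension 2
assert rank([[1, 0], [0, 1]]) == 2
print("rank checks passed")
```

Exact `Fraction` arithmetic avoids the floating-point pitfall of treating a tiny round-off residue as a non-zero pivot.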

Linear Independence
DEFINITION (Linear Independence and Dependence) Let S = {u1, u2, ..., un} be any non-empty subset of V. If there exist scalars α1, ..., αn, NOT ALL ZERO, such that
α1·u1 + α2·u2 + ... + αn·un = 0,
then the set S is called a linearly dependent set. Otherwise, the set S is called linearly independent.

EXAMPLES
1. Let S = {(1, 2, 1), (2, 1, 4), (3, 3, 5)}. Then check that
1·(1, 2, 1) + 1·(2, 1, 4) + (−1)·(3, 3, 5) = (0, 0, 0).
Since (α1, α2, α3) = (1, 1, −1) is a non-zero solution of equation (1) below, the set S is a linearly dependent subset of R^3.
2. Let S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Suppose there exist scalars α1, α2, α3 such that
α1(1, 0, 0) + α2(0, 1, 0) + α3(0, 0, 1) = (0, 0, 0).
Then check that in this case we necessarily have α1 = α2 = α3 = 0, which shows that the set S is a linearly independent subset of R^3.

In other words, if S = {u1, ..., un} is a non-empty subset of a vector space V, then to check whether the set S is linearly dependent or independent, one needs to consider the equation

α1·u1 + α2·u2 + ... + αn·un = 0.   (1)

In case α1 = α2 = ... = αn = 0 is THE ONLY SOLUTION of (1), the set S becomes a linearly independent subset of V. Otherwise, the set S becomes a linearly dependent subset of V.

PROPOSITION Let V be a vector space.
1. The zero vector cannot belong to a linearly independent set.
2. If S is a linearly independent subset of V, then every subset of S is also linearly independent.
3. If S is a linearly dependent subset of V, then every set containing S is also linearly dependent.

Proof. We give the proof of the first part. The reader is required to supply the proof of the other parts. Let S = {0, u2, ..., un} be a set containing the zero vector. Then for any scalar α ≠ 0,
α·0 + 0·u2 + ... + 0·un = 0.
Hence, for the system (1) we have the non-zero solution (α, 0, ..., 0), NOT ALL of whose entries are zero. Therefore, the set S is linearly dependent.

THEOREM Let S = {u1, ..., un} be a linearly independent subset of a vector space V. Suppose there exists a vector v ∈ V such that the set S ∪ {v} is linearly dependent. Then v is a linear combination of u1, ..., un.

Proof. Since the set S ∪ {v} is linearly dependent, there exist scalars β, α1, ..., αn, NOT ALL ZERO, such that

β·v + α1·u1 + ... + αn·un = 0.   (2)

CLAIM: β ≠ 0. Let, if possible, β = 0. Then equation (2) gives α1·u1 + ... + αn·un = 0 with not all αi zero. Hence, by the definition of linear independence, the set S is linearly dependent, which is contradictory to our hypothesis. Thus β ≠ 0, and we get
v = −(α1/β)u1 − ... − (αn/β)un.
Note that v ∈ L(u1, ..., un), and hence the result follows.

We now state two important corollaries of the above theorem. We don't give their proofs as they are easy consequences of the above theorem.

COROLLARY Let S = {u1, ..., un} be a linearly dependent subset of a vector space V. Then there exists a smallest k, 2 ≤ k ≤ n, such that uk ∈ L(u1, ..., u(k−1)).

COROLLARY Let S = {u1, ..., un} be a linearly independent subset of a vector space V. Suppose there exists a vector v ∈ V such that v ∉ L(S). Then the set S ∪ {v} is also a linearly independent subset of V.
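Equation (1) can be tested mechanically: a set of n vectors is linearly independent exactly when the matrix whose rows are those vectors has rank n. A sketch (helper names assumed, not from the notes):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    if not rows:
        return 0
    m = [[Fraction(x) for x in r] for r in rows]
    rk = 0
    for col in range(len(m[0])):
        pr = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[rk], m[pr] = m[pr], m[rk]
        for r in range(rk + 1, len(m)):
            f = m[r][col] / m[rk][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def linearly_independent(vectors):
    """(1) has only the trivial solution iff the rank equals the vector count."""
    return rank(vectors) == len(vectors)

assert not linearly_independent([[1, 2, 1], [2, 1, 4], [3, 3, 5]])  # dependent
assert linearly_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]])      # independent
print("independence checks passed")
```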

Assignment
1. Show that any two row-equivalent matrices have the same row space. Give examples to show that the column spaces of two row-equivalent matrices need not be the same.
2. Find all the vector subspaces of R^2.
3. Find the conditions on the real numbers a, b, c so that the set {(x, y, z) ∈ R^3 : ax + by + cz = 0} is a subspace of R^3.
4. Show that the set of all polynomials of degree at most n is a subspace of the vector space of all polynomials. Further show that the set of all polynomials of degree exactly n is not a subspace.
5. Consider the vector space R+ of Example 4, Lecture 1. Determine all its vector subspaces.
6. Let S and T be two subspaces of a vector space V. Show that S ∩ T is a subspace of V. Also show that S ∪ T need not be a subspace of V. When is S ∪ T a subspace of V?
7. Let S and T be two subspaces of a vector space V. Define S + T = {s + t : s ∈ S, t ∈ T}. Show that S + T is a subspace of V. Also show that L(S ∪ T) = S + T.
8. Let u be a fixed non-zero vector of R^3. Determine all vectors v such that the set {u, v} is a linearly independent subset of R^3.
9. Consider the vector space R^2. Do there exist choices for vectors u, v and w such that the set {u, v, w} is a linearly independent subset of R^2?
10. If none of the elements appearing along the principal diagonal of a lower triangular matrix is zero, show that the row vectors are linearly independent in R^n. The same is true for column vectors.
11. Determine whether or not the vector (1, 1, 1) is a linear combination of the vectors (1, 0, 1) and (0, 1, 0) in R^3.
12. Show that {(1, 0, 0), (1, 1, 0), (1, 1, 1)} is a linearly independent set in R^3.
13. Show that, in general, if {u, v, w} is a linearly independent set, then {u, u + v, u + v + w} is also a linearly independent set.
14. In R^3, give an example of vectors u, v and w such that {u, v, w} is linearly dependent but any set of two vectors from {u, v, w} is linearly independent.
15. What is the maximum number of linearly independent vectors in R^n?

Multiple Choice Questions


1. If u and v are two vectors of a vector space, then indicate which one of the following is not a linear combination of u and v:
   a) 3u − 4v        b) (1/2)u + v
   c) None           d) u·v
2. The maximum number of linearly independent vectors among the four vectors (1, 2, 3, 4), (0, 4, 3, 1), (0, 0, 7, 1) and (0, 0, 0, 0) is
   a) 4              b) 3
   c) 2              d) 1
3. The maximum number of linearly independent vectors in V = {(x1, x2, x3, x4, x5) : xi ∈ R} is
   a) 4              b) 3
   c) 5              d) None

Lecture 3

Bases
DEFINITION (Basis of a Vector Space)
1. A non-empty subset B of a vector space V is called a basis of V if
   1. B is a linearly independent set, and
   2. L(B) = V, i.e., every vector in V can be expressed as a linear combination of the elements of B.
2. A vector in B is called a basis vector.

Remark Let B = {v1, v2, ..., vn} be a basis of a vector space V. Then any v ∈ V is a unique linear combination of the basis vectors. Observe that if there exist scalars α1, ..., αn and β1, ..., βn such that
v = α1·v1 + ... + αn·vn and v = β1·v1 + ... + βn·vn,
then 0 = v − v = (α1 − β1)v1 + ... + (αn − βn)vn. But the set B is linearly independent, and therefore the scalars αi − βi must all be equal to zero. Hence αi = βi for 1 ≤ i ≤ n, and we have the uniqueness.

By convention, the linear span of an empty set is {0}. Hence, the empty set is a basis of the vector space {0}.

EXAMPLES
1. Check that the sets {(1, 0), (0, 1)}, {(1, 1), (1, −1)} and {(2, 1), (1, 2)} are all bases of R^2.
2. For 1 ≤ i ≤ n, let ei = (0, ..., 0, 1, 0, ..., 0) be the n-tuple with 1 in the i-th place and 0 elsewhere. Then the set B = {e1, e2, ..., en} forms a basis of R^n, called the standard basis of R^n. That is, if n = 3, then the set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} forms the standard basis of R^3.
3. Let V = {(x, y, z) ∈ R^3 : x + y − z = 0} be a vector subspace of R^3. A basis of V can be obtained by the following method: the condition x + y − z = 0 is equivalent to z = x + y. We replace the value of z with x + y to get
(x, y, z) = (x, y, x + y) = x(1, 0, 1) + y(0, 1, 1).
Hence, {(1, 0, 1), (0, 1, 1)} forms a basis of V.
4. Let V = C, the set of complex numbers, regarded as a complex vector space. Note that any element a + ib of C can be written as (a + ib)·1. Hence, a basis of the complex vector space C is {1}.
5. Let V = C, regarded as a real vector space. Any element a + ib is expressible as a·1 + b·i. Hence a basis of the real vector space C is {1, i}. Observe that i is a vector in C, but in the real vector space C the scalar multiple i·v is not defined, as i is not a real scalar.
6. Recall the vector space P(R) of all polynomials with real coefficients. A basis of this vector space is the set {1, x, x^2, ...}. This basis has an infinite number of vectors, as the degree of the polynomial can be any positive integer.

DEFINITION (Finite Dimensional Vector Space) A vector space V is said to be finite dimensional if there exists a basis consisting of a finite number of elements. Otherwise, the vector space V is called infinite dimensional.

Remark We can use the above results to obtain a basis of any finite dimensional vector space V as follows:
Step 1: Choose a non-zero vector, say, u1 ∈ V. Then the set {u1} is linearly independent.
Step 2: If L(u1) = V, we have got a basis of V. Else there exists a vector, say, u2 ∈ V such that u2 ∉ L(u1). Then by the Corollary of Lecture 2, the set {u1, u2} is linearly independent.
Step 3: If L(u1, u2) = V, then {u1, u2} is a basis of V. Else there exists a vector, say, u3 ∈ V such that u3 ∉ L(u1, u2). So, by the Corollary, the set {u1, u2, u3} is linearly independent.
At the i-th step, either L(u1, ..., ui) = V, in which case we have {u1, ..., ui} as a basis of V, or else we choose a vector, say, u(i+1) ∈ V such that u(i+1) ∉ L(u1, ..., ui). Therefore, by the Corollary, the set {u1, ..., u(i+1)} is linearly independent.
This process will finally end, as V is a finite dimensional vector space.
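The step-by-step construction above can be sketched in code (helper names assumed, not from the notes): a vector v lies outside L(B) exactly when appending it to B increases the rank, so we keep exactly those vectors.

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    if not rows:
        return 0
    m = [[Fraction(x) for x in r] for r in rows]
    rk = 0
    for col in range(len(m[0])):
        pr = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[rk], m[pr] = m[pr], m[rk]
        for r in range(rk + 1, len(m)):
            f = m[r][col] / m[rk][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def extract_basis(vectors):
    """Greedily keep each vector not in the span of those kept so far."""
    basis = []
    for v in vectors:
        if any(x != 0 for x in v) and rank(basis + [v]) > rank(basis):
            basis.append(v)
    return basis

S = [[1, 1, 0], [2, 2, 0], [0, 1, 1], [1, 2, 1], [1, 0, 0]]
B = extract_basis(S)
assert B == [[1, 1, 0], [0, 1, 1], [1, 0, 0]]   # the dependent vectors dropped
assert len(B) == 3                              # so L(S) = R^3
print("basis:", B)
```

Here (2, 2, 0) is a multiple of (1, 1, 0) and (1, 2, 1) = (1, 1, 0) + (0, 1, 1), so both are skipped, mirroring the "else choose a vector outside the span" step.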

Assignment
1. Let S be a subset of a vector space V. Suppose L(S) = V but S is not a linearly independent set. Then prove that each vector in V can be expressed in more than one way as a linear combination of vectors from S.
2. Show that the set {(1, 0, 1), (1, 1, 0), (0, 1, 1)} is a basis of R^3.
3. Let A be a matrix of rank r. Then show that the r non-zero rows in the row-reduced echelon form of A are linearly independent and they form a basis of the row space of A.

Multiple Choice Questions

1. The value of k for which the two vectors (k, 6) and (2, k) form a basis of V2 is
   a) 2√3            b) −2√3
   c) any value      d) any value except ±2√3
2. The vectors (1, 2, 3) and (4, −2, 7)
   a) are linearly independent   b) are linearly dependent
   c) form a basis of V3         d) None of these
3. The dimension of the subspace {(x1, x2, x3) : xi ∈ R and 2x1 + x2 − x3 = 0} is
   a) 2              b) 1
   c) 3              d) None of these

Lecture 4
Basic Properties
DEFINITION (Linear Transformation) Let V and W be vector spaces over F. A map T : V → W is called a linear transformation if
T(αu + βv) = αT(u) + βT(v) for all α, β ∈ F and u, v ∈ V.

We now give a few examples of linear transformations.

EXAMPLES
1. Define T : V → W by T(v) = 0 for all v ∈ V. Then T is a linear transformation.
2. Verify that the maps given below from R^n to R are linear transformations. Let x = (x1, ..., xn) ∈ R^n.
   1. Define T(x) = x1.
   2. For any fixed i, 1 ≤ i ≤ n, define T(x) = xi.
   3. For a fixed vector a = (a1, ..., an) ∈ R^n, define T(x) = a1·x1 + ... + an·xn. Note that the previous two examples can be obtained by assigning particular values to the vector a.
3. Define T : R^2 → R^3 by T(x, y) = (x + y, 2x − y, x + 3y). Then T is a linear transformation with T(1, 0) = (1, 2, 1) and T(0, 1) = (1, −1, 3).

DEFINITION (Zero Transformation) Let V and W be vector spaces and let T : V → W be the map defined by T(v) = 0 for every v ∈ V. Then T is a linear transformation. Such a linear transformation is called the zero transformation and is denoted by 0.

DEFINITION (Identity Transformation) Let V be a vector space and let T : V → V be the map defined by T(v) = v for every v ∈ V. Then T is a linear transformation. Such a linear transformation is called the identity transformation and is denoted by I.

THEOREM Let T : V → W be a linear transformation and let B = {u1, u2, ..., un} be an ordered basis of V. Then the linear transformation T is determined by the vectors T(u1), ..., T(un). In other words, for any v ∈ V, T(v) is a linear combination of the vectors T(u1), ..., T(un).

Proof. Since B is a basis of V, for any v ∈ V there exist scalars α1, ..., αn such that v = α1·u1 + ... + αn·un. So, by the definition of a linear transformation,
T(v) = T(α1·u1 + ... + αn·un) = α1·T(u1) + ... + αn·T(un).
Observe that, given v ∈ V, we know the scalars α1, ..., αn. Therefore, to know T(v), we just need to know the vectors T(u1), ..., T(un). That is, for every v ∈ V, T(v) is determined by the coordinates (α1, ..., αn) of v with respect to the ordered basis B and the vectors T(u1), ..., T(un) ∈ W.

DEFINITION (Inverse Linear Transformation) Let T : V → W be a linear transformation. If the map T is one-one and onto, then the map T^(-1) : W → V defined by T^(-1)(w) = v whenever T(v) = w is called the inverse of the linear transformation T.

EXAMPLES
1. Define T : R^2 → R^2 by T(x, y) = (x + y, x − y). Then T^(-1) : R^2 → R^2 is defined by T^(-1)(x, y) = ((x + y)/2, (x − y)/2). Note that
T(T^(-1)(x, y)) = T((x + y)/2, (x − y)/2) = ((x + y)/2 + (x − y)/2, (x + y)/2 − (x − y)/2) = (x, y).
Hence, the map T ∘ T^(-1) is the identity transformation. Verify that T^(-1) ∘ T is also the identity transformation. Thus, the map T^(-1) is indeed the inverse of the linear transformation T.
2. Recall the vector space Pn(R) of polynomials of degree at most n. Define T : R^(n+1) → Pn(R) by
T(a1, a2, ..., a(n+1)) = a1 + a2·x + ... + a(n+1)·x^n for (a1, ..., a(n+1)) ∈ R^(n+1).
Then T^(-1) : Pn(R) → R^(n+1) is defined as
T^(-1)(a1 + a2·x + ... + a(n+1)·x^n) = (a1, a2, ..., a(n+1)).
Verify that T ∘ T^(-1) and T^(-1) ∘ T are identity transformations. Hence, conclude that the map T^(-1) is indeed the inverse of the linear transformation T.
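The theorem above has a direct computational reading: storing the images of the basis vectors is enough to evaluate T anywhere. A sketch (the example map T(x, y) = (x + y, x − y) and helper names are assumptions, not from the notes):

```python
def apply(columns, v):
    """T(v) = v1*T(e1) + ... + vn*T(en): combine the stored basis images."""
    n = len(v)
    return tuple(sum(v[j] * columns[j][i] for j in range(n))
                 for i in range(len(columns[0])))

# T(x, y) = (x + y, x - y): images of the standard basis vectors
T_cols = [(1, 1), (1, -1)]           # T(e1) = (1, 1), T(e2) = (1, -1)
# Its inverse S(x, y) = ((x + y) / 2, (x - y) / 2)
S_cols = [(0.5, 0.5), (0.5, -0.5)]   # S(e1), S(e2)

v = (3, 5)
assert apply(T_cols, v) == (8, -2)                      # T(3, 5) = (3+5, 3-5)
assert apply(S_cols, apply(T_cols, v)) == (3.0, 5.0)    # S o T is the identity
print("T determined by basis images; S inverts T")
```

This is exactly how the matrix of a linear transformation arises: the stored images are its columns.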

Assignment
1. Which of the following are linear transformations? Justify your answers.
   1. Let T : R^2 → R^2 with T(x, y) = (x + y, 1 + y).
   2. Let T : R^2 → R^3 with T(x, y) = (x, x + y, y).
   3. Let T : R^2 → R with T(x, y) = xy.
   4. Let T : R^3 → R^2 with T(x, y, z) = (|x|, y + z).

Lecture 5
Rank-Nullity Theorem
DEFINITION (Range and Null Space) Let V and W be finite dimensional vector spaces over the same set of scalars and let T : V → W be a linear transformation. We define
1. R(T) = {T(v) : v ∈ V}, and
2. N(T) = {v ∈ V : T(v) = 0}.

We now prove some results associated with the above definitions.

PROPOSITION Let V and W be finite dimensional vector spaces and let T : V → W be a linear transformation. Suppose that {v1, ..., vn} is an ordered basis of V. Then
1. 1. R(T) is a subspace of W;
   2. R(T) = L(T(v1), ..., T(vn));
   3. dim(R(T)) ≤ dim(W).
2. 1. N(T) is a subspace of V;
   2. dim(N(T)) ≤ dim(V).
3. T is one-one if and only if N(T) = {0} is the zero subspace of V.
4. {T(v1), ..., T(vn)} is a basis of R(T) if and only if T is one-one.

Proof. The results about R(T) and N(T) can be easily proved. We thus leave the proof for the readers. We now prove part 3.
Let T be one-one. We need to show that N(T) = {0}. Let u ∈ N(T). Then by definition, T(u) = 0. Also, for any linear transformation, T(0) = 0. Thus T(u) = T(0), and T one-one implies u = 0. That is, N(T) = {0}.
Now let N(T) = {0}. We need to show that T is one-one. So, let us assume that T(u) = T(v) for some u, v ∈ V. Then, by linearity of T, T(u − v) = 0. This implies u − v ∈ N(T) = {0}. This in turn implies u = v. Hence, T is one-one.
The other parts can be similarly proved.

Remark
1. The space R(T) is called the RANGE SPACE of T, and N(T) is called the NULL SPACE of T.
2. We write ρ(T) = dim(R(T)) and ν(T) = dim(N(T)).
3. ρ(T) is called the rank of the linear transformation T, and ν(T) is called the nullity of T.

EXAMPLE Determine the range and null space of the linear transformation T : R^3 → R^4 defined by
T(x, y, z) = (x − y + z, y − z, x, 2x − 5y + 5z).
Solution: By definition,
R(T) = {(x − y + z, y − z, x, 2x − 5y + 5z) : x, y, z ∈ R} = L((1, 0, 1, 2), (−1, 1, 0, −5), (1, −1, 0, 5)).
We therefore have
R(T) = L((1, 0, 1, 2), (−1, 1, 0, −5)), as (1, −1, 0, 5) = −(−1, 1, 0, −5).
Also, by definition,
N(T) = {(x, y, z) : x − y + z = 0, y − z = 0, x = 0, 2x − 5y + 5z = 0} = {(0, y, y) : y ∈ R} = L((0, 1, 1)).

THEOREM (Rank-Nullity Theorem) Let T : V → W be a linear transformation and let V be a finite dimensional vector space. Then
dim(R(T)) + dim(N(T)) = dim(V),
or equivalently, ρ(T) + ν(T) = dim(V).

Proof. Let dim(V) = n and dim(N(T)) = r. Suppose {u1, ..., ur} is a basis of N(T). Since {u1, ..., ur} is a linearly independent set in V, we can extend it to form a basis of V. So, there exist vectors u(r+1), ..., un such that {u1, ..., ur, u(r+1), ..., un} is a basis of V. Therefore,
R(T) = L(T(u1), ..., T(un)) = L(T(u(r+1)), ..., T(un)),
since T(ui) = 0 for 1 ≤ i ≤ r.
We now prove that the set {T(u(r+1)), ..., T(un)} is linearly independent. Suppose the set is not linearly independent. Then there exist scalars α(r+1), ..., αn, not all zero, such that
α(r+1)·T(u(r+1)) + ... + αn·T(un) = 0.
That is, T(α(r+1)·u(r+1) + ... + αn·un) = 0. So, by definition of N(T),
α(r+1)·u(r+1) + ... + αn·un ∈ N(T).
Hence, there exist scalars β1, ..., βr such that
α(r+1)·u(r+1) + ... + αn·un = β1·u1 + ... + βr·ur.
That is,
β1·u1 + ... + βr·ur − α(r+1)·u(r+1) − ... − αn·un = 0.
But the set {u1, ..., un} is a basis of V and so linearly independent. Thus, by the definition of linear independence, all the scalars are zero; in particular, α(r+1) = ... = αn = 0, a contradiction. In other words, we have shown that {T(u(r+1)), ..., T(un)} is a basis of R(T). Hence,
dim(R(T)) + dim(N(T)) = (n − r) + r = n = dim(V).

COROLLARY Let T : V → V be a linear transformation on a finite dimensional vector space V. Then
T is one-one ⟺ T is onto ⟺ T is invertible.
Proof. By the Proposition, T is one-one if and only if N(T) = {0}. By the rank-nullity theorem, N(T) = {0} is equivalent to the condition dim(R(T)) = dim(V), or equivalently, T is onto.
By definition, T is invertible if T is one-one and onto. But we have shown that T is one-one if and only if T is onto. Thus, we have the last equivalent condition.

Remark Let V be a finite dimensional vector space and let T : V → V be a linear transformation. If either T is one-one or T is onto, then T is invertible.

The following are some of the consequences of the rank-nullity theorem. The proof is left as an exercise for the reader.

COROLLARY The following are equivalent for an m × n real matrix A and a positive integer r:
1. rank(A) = r.
2. There exist exactly r rows of A that are linearly independent.
3. There exist exactly r columns of A that are linearly independent.
4. There is an r × r submatrix of A with non-zero determinant, and every (r + 1) × (r + 1) submatrix of A has zero determinant.
5. The dimension of the range space of A is r.
6. There is a subset of R^m consisting of exactly r linearly independent vectors b such that the system Ax = b for x ∈ R^n is consistent.
7. The dimension of the null space of A is n − r.
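The theorem can be spot-checked numerically. In this sketch (the example map T(x, y, z) = (x − y + z, y − z, x, 2x − 5y + 5z) and the helpers are assumptions, not part of the notes), the matrix of T has rank 2 and a null-space vector found by hand confirms nullity 1, so rank plus nullity is 3.

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    if not rows:
        return 0
    m = [[Fraction(x) for x in r] for r in rows]
    rk = 0
    for col in range(len(m[0])):
        pr = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[rk], m[pr] = m[pr], m[rk]
        for r in range(rk + 1, len(m)):
            f = m[r][col] / m[rk][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def apply(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# Matrix of T(x, y, z) = (x - y + z, y - z, x, 2x - 5y + 5z)
T = [[1, -1, 1],
     [0,  1, -1],
     [1,  0,  0],
     [2, -5,  5]]

rho = rank(T)                          # dim R(T)
assert rho == 2
assert apply(T, [0, 1, 1]) == [0, 0, 0, 0]   # (0, 1, 1) spans N(T), so nu = 1
nu = 1
assert rho + nu == 3                   # rank-nullity: rho(T) + nu(T) = dim R^3
print(f"rank = {rho}, nullity = {nu}, rank + nullity = 3")
```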

Assignment
1. Let T : V → W be a linear transformation and let {T(v1), ..., T(vn)} be linearly independent in W. Prove that {v1, ..., vn} ⊂ V is linearly independent.
2. Let T : R^2 → R^2 be defined by T(x, y) = (x, 0). Then the vectors (1, 0) and (0, 1) are linearly independent, whereas T(1, 0) = (1, 0) and T(0, 1) = (0, 0) are linearly dependent.
3. Is there a linear transformation T : R^3 → R^2 such that T(1, 0, 0) = (1, 1), T(0, 1, 0) = (0, 1) and T(1, 1, 0) = (2, 3)? Justify.
4. Let T : R^3 → R^3 be a linear transformation given by its values on the standard basis.
   1. Find T(x, y, z) for an arbitrary (x, y, z) ∈ R^3.
   2. Find R(T) and N(T). Also calculate ρ(T) and ν(T).
   3. Find the matrix of the linear transformation T with respect to the standard basis.
5. Let T : V → W be a linear transformation.
   1. If V is finite dimensional, then show that the null space and the range space of T are also finite dimensional.
   2. If V and W are both finite dimensional, then show that
      1. if dim(V) < dim(W), then T is not onto;
      2. if dim(V) > dim(W), then T is not one-one.
6. Let A be an m × n real matrix. Then
   1. if n > m, the system Ax = 0 has infinitely many solutions;
   2. if n < m, there exists a non-zero vector b = (b1, ..., bm)^t such that the system Ax = b does not have any solution.
7. Let V be a vector space of dimension n and let B = (v1, ..., vn) be an ordered basis of V. Suppose u1, ..., un ∈ V. Put A = [[u1]_B, ..., [un]_B], the matrix whose columns are the coordinate vectors of u1, ..., un with respect to B. Then prove that {u1, ..., un} is a basis of V if and only if the matrix A is invertible.
8. Let A be an m × n matrix. Prove that the row rank of A equals the column rank of A.

Lecture 6
Definition
In R^n, given two vectors u = (u1, ..., un) and v = (v1, ..., vn), we know the inner product u·v = u1·v1 + ... + un·vn. Note that for any u, v, w ∈ R^n and α ∈ R, this inner product satisfies the conditions
(u + αv)·w = u·w + α(v·w),  u·v = v·u,  and  u·u ≥ 0,
and u·u = 0 if and only if u = 0. Thus, we are motivated to define an inner product on an arbitrary vector space.

DEFINITION (Inner Product) Let V be a vector space over F. An inner product over V, denoted by ⟨ , ⟩, is a map ⟨ , ⟩ : V × V → F such that for u, v, w ∈ V and a, b ∈ F:
1. ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩;
2. ⟨u, v⟩ equals the complex conjugate of ⟨v, u⟩, for all u, v ∈ V; and
3. ⟨u, u⟩ ≥ 0 for all u ∈ V, and equality holds if and only if u = 0.

DEFINITION (Inner Product Space) Let V be a vector space with an inner product ⟨ , ⟩. Then (V, ⟨ , ⟩) is called an inner product space, in short denoted by IPS.

EXAMPLES The first two examples given below are called the STANDARD INNER PRODUCT or the DOT PRODUCT on R^n and C^n respectively.
1. Let V = R^n be the real vector space of dimension n. Given two vectors u = (u1, ..., un) and v = (v1, ..., vn) of V, we define
⟨u, v⟩ = u1·v1 + ... + un·vn.
Verify that ⟨ , ⟩ is an inner product.
2. Let V = C^n be a complex vector space of dimension n. Then for u = (u1, ..., un) and v = (v1, ..., vn) in C^n, check that
⟨u, v⟩ = u1·conj(v1) + ... + un·conj(vn)
is an inner product, where conj denotes the complex conjugate.
3. Let V = R^2 and let A = [[4, −1], [−1, 2]]. Define ⟨u, v⟩ = uAv^t. Check that ⟨ , ⟩ is an inner product. Hint: note that
⟨u, u⟩ = 4u1^2 − 2u1u2 + 2u2^2 = 3u1^2 + (u1 − u2)^2 + u2^2 ≥ 0.
4. Let V be the set of continuous real-valued functions on [−1, 1], and define ⟨f, g⟩ to be the integral of f(t)g(t) over [−1, 1]. Show that ⟨ , ⟩ is an inner product on V.
5. Consider the real vector space R^2. In this example, we define three products that satisfy two conditions out of the three conditions for an inner product. Hence the three products are not inner products.
   1. Define ⟨u, v⟩ = u1·v1. Then it is easy to verify that the third condition is not valid (⟨u, u⟩ = 0 for the non-zero vector u = (0, 1)), whereas the first two conditions are valid.
   2. Define ⟨u, v⟩ = u1^2·v1^2 + u2^2·v2^2. Then it is easy to verify that the first condition is not valid, whereas the second and third conditions are valid.
   3. Define ⟨u, v⟩ = u1·v1 + u1·v2 + 2u2·v2. Then it is easy to verify that the second condition is not valid (the product is not symmetric), whereas the first and third conditions are valid.

DEFINITION (Length/Norm of a Vector) For u ∈ V, we define the length (norm) of u, denoted ‖u‖, by
‖u‖ = sqrt(⟨u, u⟩), the positive square root.

A very useful and fundamental inequality concerning the inner product is due to Cauchy and Schwarz. The next theorem gives the statement and a proof of this inequality.

THEOREM (Cauchy-Schwarz inequality) Let V be an inner product space. Then for any u, v ∈ V,
|⟨u, v⟩| ≤ ‖u‖ ‖v‖.
The equality holds if and only if the vectors u and v are linearly dependent. Further, if u ≠ 0, then equality holds exactly when v = (⟨v, u⟩/‖u‖^2) u.

DEFINITION 5.1.8 (Angle between two vectors) Let V be a real vector space. Then for every non-zero u, v ∈ V, by the Cauchy-Schwarz inequality, we have
−1 ≤ ⟨u, v⟩ / (‖u‖ ‖v‖) ≤ 1.
We know that cos : [0, π] → [−1, 1] is a one-one and onto function. Therefore, for every real number r with −1 ≤ r ≤ 1, there exists a unique θ, 0 ≤ θ ≤ π, such that cos θ = r.
1. The real number θ with 0 ≤ θ ≤ π and satisfying cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖) is called the angle between the two vectors u and v in V.
2. The vectors u and v in V are said to be orthogonal if ⟨u, v⟩ = 0.
3. A set of vectors {u1, ..., un} is called mutually orthogonal if ⟨ui, uj⟩ = 0 for all i ≠ j, 1 ≤ i, j ≤ n.

DEFINITION (Orthogonal Complement) Let S be a subspace of a vector space V with inner product ⟨ , ⟩. Then the subspace
S⊥ = {v ∈ V : ⟨v, s⟩ = 0 for every s ∈ S}
is called the orthogonal complement of S in V.

THEOREM Let V be an inner product space and let {u1, ..., un} be a set of non-zero, mutually orthogonal vectors of V.
1. Then the set {u1, ..., un} is linearly independent.
2. ‖α1·u1 + ... + αn·un‖^2 = |α1|^2 ‖u1‖^2 + ... + |αn|^2 ‖un‖^2.
3. Let dim(V) = n. Then for any v ∈ V,
v = (⟨v, u1⟩/‖u1‖^2) u1 + ... + (⟨v, un⟩/‖un‖^2) un.
In particular, ⟨v, ui⟩ = 0 for all 1 ≤ i ≤ n if and only if v = 0.

DEFINITION (Orthonormal Set) Let V be an inner product space. A set {v1, ..., vn} of non-zero, mutually orthogonal vectors in V is called an orthonormal set if ‖vi‖ = 1 for 1 ≤ i ≤ n. If the set {v1, ..., vn} is also a basis of V, then it is called an orthonormal basis of V.

EXAMPLE
1. Consider the vector space R^2 with the standard inner product. Then the standard ordered basis {(1, 0), (0, 1)} is an orthonormal set. Also, the basis {(1/√2)(1, 1), (1/√2)(1, −1)} is an orthonormal set.
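With the standard inner product these notions are easy to compute. A sketch (helper names assumed, not from the notes) that computes angles via cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖) and spot-checks the Cauchy-Schwarz inequality:

```python
import math

def inner(u, v):
    """Standard inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(inner(u, u))

def angle(u, v):
    """Angle in [0, pi] between non-zero vectors u and v."""
    return math.acos(inner(u, v) / (norm(u) * norm(v)))

u = (1.0, 0.0)
assert math.isclose(angle(u, (1.0, 1.0)), math.pi / 4)      # 45 degrees
assert math.isclose(angle((1, 0), (0, 1)), math.pi / 2)     # orthogonal vectors

# Cauchy-Schwarz: |<u, w>| <= ||u|| ||w|| on a few sampled vectors
for w in [(3.0, -1.0), (2.0, 5.0), (-1.0, -1.0)]:
    assert abs(inner(u, w)) <= norm(u) * norm(w) + 1e-12

# equality holds when the vectors are linearly dependent:
assert math.isclose(abs(inner(u, (2.0, 0.0))), norm(u) * norm((2.0, 0.0)))
print("Cauchy-Schwarz and angle checks passed")
```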

Assignment
1. Recall the inner product ⟨u, v⟩ = uAv^t on R^2 from Example 3, for u = (u1, u2) and v = (v1, v2).
   1. Find the angle between the vectors (1, 0) and (0, 1).
   2. Let u = (1, 0). Find a non-zero vector v ∈ R^2 such that ⟨v, u⟩ = 0.
   3. Find two vectors u, v ∈ R^2 such that ‖u‖ = ‖v‖ = 1 and ⟨u, v⟩ = 0.
2. Find an inner product in R^2 such that the following conditions hold:
‖(1, 2)‖ = ‖(2, −1)‖ = 1 and ⟨(1, 2), (2, −1)⟩ = 0.
[Hint: Consider a symmetric matrix A = [[a, b], [b, c]]. Define ⟨u, v⟩ = uAv^t and solve a system of equations for the unknowns a, b, c.]
3. Let u = (1, 1, 1) ∈ R^3. Find vectors v and w such that {u, v, w} is a mutually orthogonal set with respect to the standard inner product.
4. Let S be a subspace of a finite dimensional inner product space V. Prove that V = S + S⊥ and S ∩ S⊥ = {0}.
5. Let V be the real vector space of all continuous functions with domain [−1, 1]. Then show that V is an inner product space with inner product ⟨f, g⟩ given by the integral of f(t)g(t) over [−1, 1]. For different values of m and n, find the angle between the functions cos(mπt) and sin(nπt).
6. Let V be an inner product space. Prove that
‖u + v‖ ≤ ‖u‖ + ‖v‖ for every u, v ∈ V.
This inequality is called the TRIANGLE INEQUALITY.
7. Let a1, ..., an ∈ R. Use the Cauchy-Schwarz inequality to prove that
(a1 + ... + an)^2 ≤ n(a1^2 + ... + an^2).
When does the equality hold?

Lecture 7
Definitions
In this chapter, the linear transformations are from a given finite dimensional vector space V to itself. Observe that in this case, the matrix of the linear transformation is a square matrix. So, in this chapter, all the matrices are square matrices, and a vector means x = (x1, ..., xn)^t for some positive integer n.

Let A be a matrix of order n. In general, we ask the question: for what values of λ ∈ F does there exist a non-zero vector x such that

Ax = λx?   (1)

Here, F stands for either R (with x in the vector space R^n over R) or C (with x in C^n over C). Equation (1) is equivalent to the equation
(A − λI)x = 0.
This system of linear equations has a non-zero solution if
det(A − λI) = 0.
So, to solve (1), we are forced to choose those values of λ ∈ F for which det(A − λI) = 0. Observe that det(A − λI) is a polynomial in λ of degree n. We are therefore led to the following definition.

DEFINITION (Characteristic Polynomial) Let A be a matrix of order n. The polynomial det(A − λI) is called the characteristic polynomial of A and is denoted by p(λ). The equation p(λ) = 0 is called the characteristic equation of A. If λ ∈ F is a solution of the characteristic equation p(λ) = 0, then λ is called a characteristic value of A.

Some books use the term EIGEN VALUE in place of characteristic value.

THEOREM Let A be an n × n matrix over F. Suppose λ0 ∈ F is a root of the characteristic equation. Then there exists a non-zero vector v such that Av = λ0·v.
Proof. Since λ0 is a root of the characteristic equation, det(A − λ0·I) = 0. This shows that the matrix A − λ0·I is singular, and therefore the linear system
(A − λ0·I)x = 0
has a non-zero solution.

Remark Observe that the linear system Ax = λx has the solution x = 0 for every λ ∈ F. So, we consider only those x that are non-zero and are solutions of the linear system Ax = λx.

DEFINITION (Eigen value and Eigenvector) If the linear system Ax = λx has a non-zero solution x for some λ ∈ F, then
1. λ ∈ F is called an eigen value of A;
2. the non-zero vector x is called an eigenvector corresponding to the eigen value λ; and
3. the tuple (λ, x) is called an eigen pair.

Remark To understand the difference between a characteristic value and an eigen value, we give the following example. Consider the matrix
A = [[0, 1], [−1, 0]].
Then the characteristic polynomial of A is p(λ) = λ^2 + 1. Given the matrix A, recall the linear transformation T : F^2 → F^2 defined by T(x) = Ax.
1. If F = C, that is, if A is considered a COMPLEX matrix, then the roots of p(λ) = 0 in C are ±i. So, A has (i, (1, i)^t) and (−i, (1, −i)^t) as eigen pairs.
2. If F = R, that is, if A is considered a REAL matrix, then p(λ) = 0 has no solution in R. Therefore, if F = R, then A has no eigen value, but it has ±i as characteristic values.

Remark Note that if (λ, x) is an eigen pair for an n × n matrix A, then for any non-zero c ∈ F, (λ, cx) is also an eigen pair for A. Similarly, if x1, ..., xr are eigenvectors of A corresponding to the eigen value λ, then for any non-zero c1, ..., cr ∈ F, it is easily seen that if c1·x1 + ... + cr·xr ≠ 0, then c1·x1 + ... + cr·xr is also an eigenvector of A corresponding to the eigen value λ. Hence, when we talk of eigenvectors corresponding to an eigen value λ, we mean linearly independent eigenvectors.
Suppose λ0 ∈ F is a root of the characteristic equation det(A − λ0·I) = 0. Then A − λ0·I is singular. Suppose rank(A − λ0·I) = r < n. Then by the Corollary of the rank-nullity theorem, the linear system (A − λ0·I)x = 0 has n − r linearly independent solutions. That is, A has n − r linearly independent eigenvectors corresponding to the eigen value λ0 whenever rank(A − λ0·I) = r < n.

EXAMPLES
1. Let A = [[1, 1], [0, 1]]. Then p(λ) = (1 − λ)^2. Hence, the characteristic equation has roots 1, 1. That is, 1 is a repeated eigen value. Now check that the equation (A − I)x = 0 for x = (x1, x2)^t is equivalent to the equation x2 = 0. And this has the solution x = (x1, 0)^t. Hence, from the above remark, (1, 0)^t is a representative for the eigenvector. Therefore, here we have two eigen values 1, 1 but only one eigenvector.
2. Let A = [[1, 0], [0, 1]]. Then p(λ) = (1 − λ)^2. The characteristic equation has roots 1, 1. Here, the matrix that we have is A − I = 0, the zero matrix, and we know that every non-zero x ∈ R^2 is a solution of 0·x = 0. Hence, we can choose any two linearly independent vectors x, y from R^2 to get (1, x) and (1, y) as the two eigen pairs. In general, if x1, ..., xn are linearly independent vectors in R^n, then (1, x1), ..., (1, xn) are eigen pairs for the n × n identity matrix.
3. Let A = [[1, 2], [2, 1]]. Then p(λ) = (λ − 3)(λ + 1). The characteristic equation has roots 3, −1. Now check that the eigen pairs are (3, (1, 1)^t) and (−1, (1, −1)^t). In this case, we have two distinct eigen values and the corresponding eigenvectors are also linearly independent. The reader is required to prove the linear independence of the two eigenvectors.
4. Let A = [[1, −1], [1, 1]]. Then p(λ) = λ^2 − 2λ + 2. The characteristic equation has roots 1 + i and 1 − i. Hence, over R the matrix A has no eigen value. Over C, the reader is required to show that the eigen pairs are (1 + i, (1, −i)^t) and (1 − i, (1, i)^t).
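For 2 × 2 matrices the characteristic roots can be computed directly from the trace and the determinant, since det(A − λI) = λ^2 − tr(A)·λ + det(A). A sketch (helper name assumed, not from the notes), checked against the matrices used in the examples above:

```python
import cmath

def char_roots_2x2(A):
    """Roots of det(A - lambda*I) = lambda^2 - tr*lambda + det, over C."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    roots = [(tr + disc) / 2, (tr - disc) / 2]
    return sorted(roots, key=lambda z: (z.real, z.imag))

assert char_roots_2x2([[1, 1], [0, 1]]) == [1, 1]     # repeated root 1
assert char_roots_2x2([[1, 2], [2, 1]]) == [-1, 3]    # distinct roots -1, 3
assert char_roots_2x2([[1, -1], [1, 1]]) == [1 - 1j, 1 + 1j]  # complex pair
print("characteristic roots match the examples")
```

Note that the repeated root in the first case says nothing about the number of independent eigenvectors; that requires looking at rank(A − λI) as in the remark above.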

Assignment
1. Find the eigen values of a triangular matrix.
2. Find eigen pairs over C for each of the following matrices: [[1, 0], [0, 1]], [[0, 1], [−1, 0]] and [[1, 1], [1, 1]].
3. Prove that the matrices A and A^t have the same set of eigen values. Construct a 2 × 2 matrix A such that the eigenvectors of A and A^t are different.
4. Let A be a matrix such that A^2 = A (A is called an idempotent matrix). Then prove that its eigen values are either 0 or 1 or both.

Lecture 8
THEOREM 6.1.11 Let A be an n × n matrix with eigen values λ1, ..., λn, not necessarily distinct. Then
det(A) = λ1·λ2·...·λn and tr(A) = λ1 + λ2 + ... + λn.

EXERCISE
1. Let A be a skew-symmetric matrix of odd order. Then prove that 0 is an eigen value of A.
2. Let B be an orthogonal matrix such that det(B) = −1. Then prove that there exists a non-zero vector v such that Bv = −v.

Let A be an n × n matrix. Then in the proof of the above theorem, we observed that the characteristic equation det(A − λI) = 0 is a polynomial equation of degree n in λ. Also, it has the form
λ^n + a(n−1)·λ^(n−1) + ... + a1·λ + a0 = 0
for some numbers a0, a1, ..., a(n−1) ∈ F. Note that, in the expression det(A − λI) = 0, λ is an element of F. Thus, we can only substitute λ by elements of F. It turns out that the expression
A^n + a(n−1)·A^(n−1) + ... + a1·A + a0·I
is an element of the set of n × n matrices, and the identity
A^n + a(n−1)·A^(n−1) + ... + a1·A + a0·I = 0
holds true as a matrix identity. This is a celebrated theorem called the Cayley-Hamilton Theorem. We state this theorem without proof and give some implications.

THEOREM (Cayley-Hamilton Theorem) Let A be a square matrix of order n. Then A satisfies its characteristic equation. That is,
A^n + a(n−1)·A^(n−1) + ... + a1·A + a0·I = 0
holds true as a matrix identity.

Some of the implications of the Cayley-Hamilton Theorem are as follows.

Remark
1. Let A = [[0, 1], [0, 0]]. Then its characteristic polynomial is p(λ) = λ^2, and for the function f(x) = x we have f(λ) = 0 for each eigen value λ of A. Also, f(A) = A ≠ 0. This shows that the condition "f(λ) = 0 for each eigen value λ of A" does not imply that f(A) = 0.
2. Suppose we are given a square matrix A of order n and we are interested in calculating A^m, where m is large compared to n. Then we can use the division algorithm to find numbers α0, α1, ..., α(n−1) and a polynomial f(λ) such that
λ^m = f(λ)(λ^n + a(n−1)·λ^(n−1) + ... + a1·λ + a0) + α0 + α1·λ + ... + α(n−1)·λ^(n−1).
Hence, by the Cayley-Hamilton Theorem,
A^m = α0·I + α1·A + ... + α(n−1)·A^(n−1).
3. Let A be a non-singular matrix of order n. Then a0 = ±det(A) ≠ 0, and hence
A^(-1) = −(1/a0)(A^(n−1) + a(n−1)·A^(n−2) + ... + a1·I).
This matrix identity can be used to calculate the inverse. Note that A^(-1) (as an element of the vector space of all n × n matrices) is a linear combination of the matrices I, A, ..., A^(n−1).

EXERCISE Find the inverse of a given non-singular matrix by using the Cayley-Hamilton Theorem.

THEOREM If λ1, ..., λk are distinct eigen values of a matrix A with corresponding eigenvectors x1, ..., xk, then the set {x1, ..., xk} is linearly independent.

Proof. The proof is by induction on the number of eigen values. The result is obviously true if k = 1, as the corresponding eigenvector is non-zero and we know that any set containing exactly one non-zero vector is linearly independent.
Let the result be true for any set of m − 1 eigen pairs. We prove the result for m. We consider the equation

c1·x1 + c2·x2 + ... + cm·xm = 0   (1)

for the unknowns c1, ..., cm. We have

0 = A·0 = A(c1·x1 + ... + cm·xm) = c1·λ1·x1 + ... + cm·λm·xm.   (2)

Multiplying equation (1) by λm and subtracting it from equation (2), we get
c1(λ1 − λm)x1 + c2(λ2 − λm)x2 + ... + c(m−1)(λ(m−1) − λm)x(m−1) = 0.
This is an equation in the m − 1 eigenvectors x1, ..., x(m−1). So, by the induction hypothesis, we have ci(λi − λm) = 0 for 1 ≤ i ≤ m − 1. But the eigen values are distinct, which implies λi − λm ≠ 0 for 1 ≤ i ≤ m − 1. Hence ci = 0 for 1 ≤ i ≤ m − 1. Also, xm ≠ 0, and therefore (1) gives cm·xm = 0, so cm = 0.
We therefore get ci = 0 for all i. Thus, we have the required result. We are thus led to the following important corollary.

COROLLARY The eigenvectors corresponding to distinct eigen values of an n × n matrix A are linearly independent.
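For a 2 × 2 matrix the characteristic polynomial is λ^2 − tr(A)·λ + det(A), so the Cayley-Hamilton identity and the resulting inverse formula A^(-1) = (tr(A)·I − A)/det(A) can be checked directly. A sketch (helper names assumed, not from the notes):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cayley_hamilton_residual(A):
    """A^2 - tr(A)*A + det(A)*I; the theorem says this is the zero matrix."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    A2 = matmul(A, A)
    return [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
             for j in range(2)] for i in range(2)]

def inverse_via_ch(A):
    """Multiply the identity by A^(-1): A - tr*I + det*A^(-1) = 0."""
    tr = A[0][0] + A[1][1]
    det = Fraction(A[0][0] * A[1][1] - A[0][1] * A[1][0])
    return [[(tr * (1 if i == j else 0) - A[i][j]) / det
             for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
assert cayley_hamilton_residual(A) == [[0, 0], [0, 0]]
Ainv = inverse_via_ch(A)
assert matmul(A, Ainv) == [[1, 0], [0, 1]]
print("Cayley-Hamilton verified; inverse:", Ainv)
```

This is Remark 3 specialised to n = 2: the inverse is a linear combination of I and A.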

Assignment
1. For an n × n matrix A, prove the following.
   1. A and A^t have the same set of eigen values.
   2. If λ is an eigen value of an invertible matrix A, then 1/λ is an eigen value of A^(-1).
   3. If λ is an eigen value of A, then λ^k is an eigen value of A^k for any positive integer k.
   4. If A and B are n × n matrices with A nonsingular, then AB and BA have the same set of eigen values.
   In each case, what can you say about the eigenvectors?
2. Let A and B be matrices having the same characteristic polynomial.
   1. Do A and B have the same set of eigen values?
   2. Give examples to show that the matrices A and B need not be similar.
3. Let (λ, x) be an eigen pair for a matrix A, and let (μ, x) be an eigen pair for another matrix B.
   1. Then prove that (λ + μ, x) is an eigen pair for the matrix A + B.
   2. Give an example to show that if λ and μ are respectively the eigen values of A and B, then λ + μ need not be an eigen value of A + B.

Lecture 9
Diagonalisation

DEFINITION (Matrix Diagonalisation) A matrix A is said to be diagonalisable if there exists a non-singular matrix P such that P^(-1)AP is a diagonal matrix.

Remark Let A be an n × n diagonalisable matrix with eigen values λ1, ..., λn. By definition, A is similar to a diagonal matrix D, and D = diag(λ1, ..., λn). Observe that this holds as similar matrices have the same set of eigen values and the eigen values of a diagonal matrix are its diagonal entries.

EXAMPLE Let A = [[0, 1], [−1, 0]]. Then we have the following:
1. Let F = R. Then A has no real eigen value and hence doesn't have eigenvectors that are vectors in R^2. Hence, there does not exist any non-singular 2 × 2 real matrix P such that P^(-1)AP is a diagonal matrix.
2. In case F = C, the two complex eigen values of A are i and −i, and the corresponding eigenvectors are (1, i)^t and (1, −i)^t respectively. Also, {(1, i)^t, (1, −i)^t} can be taken as a basis of C^2. Define a 2 × 2 complex matrix by
U = [[1, 1], [i, −i]].
Then U^(-1)AU = diag(i, −i).

THEOREM Let A be an n × n matrix. Then A is diagonalisable if and only if A has n linearly independent eigenvectors.

COROLLARY 6.2.5 Let A be an n × n matrix. Suppose that the eigen values of A are distinct. Then A is diagonalisable.

EXAMPLES
1. Let A = [[2, 1], [0, 2]]. Then p(λ) = (2 − λ)^2. Hence, A has eigen values 2, 2. It is easily seen that (2, (1, 0)^t) is the only eigen pair. That is, the matrix A has exactly one eigenvector corresponding to the repeated eigen value 2. Hence, by the theorem above, the matrix A is not diagonalisable.
2. Let A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]. Then p(λ) = (4 − λ)(1 − λ)^2. It can be easily verified that A has eigen values 4, 1, 1, that (1, 1, 1)^t corresponds to the eigen value 4, and that (1, −1, 0)^t and (1, 0, −1)^t correspond to the eigen value 1.
Note that the set {(1, −1, 0)^t, (1, 0, −1)^t}, consisting of eigenvectors corresponding to the eigen value 1, is not orthogonal. This set can be replaced by the orthogonal set {(1, −1, 0)^t, (1, 1, −2)^t}, which still consists of eigenvectors corresponding to the eigen value 1, as (1, 1, −2) = 2(1, 0, −1) − (1, −1, 0). Also, the set {(1, 1, 1)^t, (1, −1, 0)^t, (1, 1, −2)^t} forms a basis of R^3. So, by the theorem above, the matrix A is diagonalisable. Also, if
U = [(1/√3)(1, 1, 1)^t, (1/√2)(1, −1, 0)^t, (1/√6)(1, 1, −2)^t]
is the corresponding unitary matrix, then U^(-1)AU = diag(4, 1, 1).
Observe that the matrix A is a symmetric matrix, and here the eigenvectors are mutually orthogonal. In general, for any n × n real symmetric matrix A, there always exist n eigenvectors and they are mutually orthogonal. This result will be proved later.
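As a sketch, take the symmetric matrix A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]] (an assumed example; helper names are not from the notes) and check its eigen pairs and the mutual orthogonality of the chosen eigenvectors:

```python
def apply(A, v):
    """Matrix-vector product A v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def inner(u, v):
    """Standard inner product on R^3."""
    return sum(a * b for a, b in zip(u, v))

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
pairs = [(4, [1, 1, 1]), (1, [1, -1, 0]), (1, [1, 1, -2])]

for lam, v in pairs:
    assert apply(A, v) == [lam * x for x in v]   # A v = lambda v

vecs = [v for _, v in pairs]
for i in range(3):
    for j in range(i + 1, 3):
        assert inner(vecs[i], vecs[j]) == 0      # mutually orthogonal
print("eigen pairs verified; eigenvectors mutually orthogonal")
```

Three independent eigenvectors confirm, per the theorem, that this 3 × 3 matrix is diagonalisable; normalising them gives the columns of the unitary U.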

Assignment
1. By finding the eigen values of the following matrices, justify whether or not A = PDP^(-1) for some real non-singular matrix P and a real diagonal matrix D:
[[cos θ, −sin θ], [sin θ, cos θ]] and [[cos θ, sin θ], [sin θ, −cos θ]], for any θ with 0 ≤ θ ≤ 2π.
2. Are the two matrices [[1, 1], [0, 1]] and [[1, 0], [1, 1]] diagonalisable?
3. Find the eigen values and eigenvectors of the n × n matrix A = (aij), where aij = 1 if i ≠ j and aij = 0 otherwise.
4. Let A be an n × n matrix and B an m × m matrix. Suppose C = [[A, 0], [0, B]]. Then show that C is diagonalisable if and only if both A and B are diagonalisable.
5. Let T : R^5 → R^5 be a linear transformation with rank(T − I) = 3 and
N(T) = {(x1, x2, x3, x4, x5) ∈ R^5 : x1 + x4 + x5 = 0, x2 + x3 = 0}.
Then
   1. determine the eigen values of T;
   2. find the number of linearly independent eigenvectors corresponding to each eigen value;
   3. is T diagonalisable? Justify your answer.
6. Let A be a non-zero square matrix such that A^2 = 0. Show that A cannot be diagonalised.
7. Are the matrices [[0, 1], [−1, 0]] and [[2, 1], [1, 2]] diagonalisable over R?
