
Linear Algebra, Infinite Dimensional Spaces, and MAPLE


This course will be chiefly concerned with linear operators on Hilbert spaces. We intend to present a model, a paradigm, for how a linear transformation on an inner-product space might be constructed. This paradigm will not model all such linear mappings; to model them all would require an understanding of measure and integration beyond what a beginning science or engineering student might know. To set the pattern for this paradigm, we first recall some linear algebra. We recall, review, and re-examine the finite dimensional setting. In that setting there is the Jordan Canonical Form. We present this decomposition for matrices from a viewpoint alternate to the traditional one. The advantage of the representation presented here is conceptual: it sets a pattern in concepts, instead of in "form." We think of projections and eigenvalues, and we turn to nilpotent matrices when projections and eigenvalues are not enough. This is how we begin.

Section 1: A Decomposition for Matrices


Definition A projection is a transformation P from E to E such that P^2 = P. Note that some texts also require that P be non-expansive in the sense that |Px| ≤ |x|. An example of a projection that is not non-expansive is P(x,y) = (x+y, 0).

Definition A nilpotent transformation is a linear transformation N from E to E for which there is an integer k such that N^k = 0.

Examples
$$\begin{pmatrix} 3 & 0 & -2 \\ -1 & 1 & 1 \\ 3 & 0 & -2 \end{pmatrix}$$
is a projection from R^3 to R^3, and
$$\begin{pmatrix} -1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & 1 & 1 \end{pmatrix}$$
is a nilpotent function from R^3 to R^3.
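A quick machine check of these two examples (a minimal sketch in Python with NumPy; the names P and N are ours) confirms the defining identities:

    import numpy as np

    # The first example: a projection satisfies P @ P == P.
    P = np.array([[ 3, 0, -2],
                  [-1, 1,  1],
                  [ 3, 0, -2]])
    print(np.array_equal(P @ P, P))                 # True

    # The second example: a nilpotent matrix; here N @ N == 0 (k = 2).
    N = np.array([[-1, 1, 1],
                  [ 0, 0, 0],
                  [-1, 1, 1]])
    print(np.array_equal(N @ N, np.zeros_like(N)))  # True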

Theorem 1 (Spectral Resolution for A) If A is a linear function from R^n to R^n, then there are sequences $\{\lambda_i\}_{i=1}^{k}$, $\{P_i\}_{i=1}^{k}$, and $\{N_i\}_{i=1}^{k}$ such that each $\lambda_i$ is a number and
(a) $P_i$ is a projection,
(b) $N_i$ is nilpotent,
(c) $P_i P_j = 0$ if $i \ne j$,
(d) $N_i P_j = 0$ if $i \ne j$,
(e) $N_i P_i = N_i$,
(f) $I = \sum_{i=1}^{k} P_i$, and
(g) $A = \sum_{i=1}^{k} [\lambda_i P_i + N_i]$.
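Although the proof below constructs the P_i and N_i by hand, they can also be read off a Jordan form produced by a computer algebra system. The following sketch does this in Python with SymPy rather than MAPLE; the example matrix and all variable names are ours, chosen only for illustration, and the last line checks property (g).

    from sympy import Matrix, eye

    A = Matrix([[1, 0, 0],
                [0, 2, 1],
                [0, 0, 2]])

    S, J = A.jordan_form()                   # A = S * J * S**(-1)
    diag = [J[i, i] for i in range(J.rows)]  # eigenvalues along the Jordan diagonal

    resolution, seen = [], []
    for lam in diag:
        if lam in seen:
            continue
        seen.append(lam)
        # E selects the Jordan columns belonging to the eigenvalue lam.
        E = Matrix.zeros(J.rows, J.rows)
        for i, d in enumerate(diag):
            if d == lam:
                E[i, i] = 1
        P = S * E * S.inv()                  # the projection for lam
        N = P * (A - lam * eye(A.rows))      # its nilpotent part
        resolution.append((lam, P, N))

    # Property (g): A is the sum of lam*P + N over the resolution.
    total = Matrix.zeros(A.rows, A.rows)
    for lam, P, N in resolution:
        total += lam * P + N
    print(total == A)                        # True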

Outline of Proof: We assume the Cayley-Hamilton Theorem, which states that if A is an n x n matrix and D(λ) is the polynomial in λ defined by D(λ) = det(λI - A), then the polynomial in A obtained by making the substitution λ = A satisfies D(A) = 0. To construct the proof of the theorem, first factor D(λ) as
$$D(\lambda) = \prod_{p=1}^{k} (\lambda - \lambda_p)^{m_p},$$
where the λ_p's are the zeros of D with multiplicity m_p. Now form a partial fraction decomposition of 1/D(λ): let a_1, a_2, ..., a_k be functions such that
$$\frac{1}{D(\lambda)} = \sum_{p=1}^{k} \frac{a_p(\lambda)}{(\lambda - \lambda_p)^{m_p}}.$$

Two Examples
$$A_1 = \begin{pmatrix} -1 & -2 \\ 3 & 4 \end{pmatrix} \quad\text{and}\quad A_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix},$$
so D_1(λ) = (λ-1)(λ-2) and D_2(λ) = (λ-1)(λ-2)^2, with
$$\frac{1}{(\lambda-1)(\lambda-2)} = \frac{-1}{\lambda-1} + \frac{1}{\lambda-2}
\quad\text{and}\quad
\frac{1}{(\lambda-1)(\lambda-2)^2} = \frac{1}{\lambda-1} + \frac{-\lambda+3}{(\lambda-2)^2}.$$

If
$$q_p(\lambda) = \prod_{i \ne p} (\lambda - \lambda_i)^{m_i},$$
then
$$1 = \sum_{p=1}^{k} a_p(\lambda)\, q_p(\lambda), \quad\text{so that}\quad I = \sum_{p=1}^{k} a_p(A)\, q_p(A).$$

Two Examples Continued
$$1 = -1\,(\lambda-2) + (\lambda-1) = (2-\lambda) + (\lambda-1)
\quad\text{and}\quad
1 = (\lambda-2)^2 + (-\lambda+3)(\lambda-1),$$
so that
$$I = \begin{pmatrix} 3 & 2 \\ -3 & -2 \end{pmatrix} + \begin{pmatrix} -2 & -2 \\ 3 & 3 \end{pmatrix}
\quad\text{and}\quad
I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Claim 1: Using the Cayley-Hamilton Theorem, if P_j = a_j(A) q_j(A), then P_i P_j = 0 for i ≠ j.
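These identities are easy to check numerically. A minimal sketch (Python with NumPy; the names P1, P2, Q1, Q2 are ours) forms the matrices a_p(A) q_p(A) for both examples and confirms that they are projections which sum to the identity and annihilate each other:

    import numpy as np

    I2, I3 = np.eye(2), np.eye(3)

    # First example: A1 with D(lambda) = (lambda - 1)(lambda - 2).
    A1 = np.array([[-1., -2.],
                   [ 3.,  4.]])
    P1 = 2*I2 - A1      # a_1(A) q_1(A) = -(A - 2I)
    P2 = A1 - I2        # a_2(A) q_2(A) = (A - I)
    print(np.allclose(P1 @ P1, P1), np.allclose(P2 @ P2, P2))  # True True
    print(np.allclose(P1 @ P2, 0), np.allclose(P1 + P2, I2))   # True True

    # Second example: A2 with D(lambda) = (lambda - 1)(lambda - 2)^2.
    A2 = np.array([[1., 0., 0.],
                   [0., 2., 1.],
                   [0., 0., 2.]])
    Q1 = (A2 - 2*I3) @ (A2 - 2*I3)   # a_1(A) q_1(A) = (A - 2I)^2
    Q2 = (3*I3 - A2) @ (A2 - I3)     # a_2(A) q_2(A) = (-A + 3I)(A - I)
    print(np.allclose(Q1 @ Q1, Q1), np.allclose(Q2 @ Q2, Q2))  # True True
    print(np.allclose(Q1 @ Q2, 0), np.allclose(Q1 + Q2, I3))   # True True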

Claim 2: P_j is a projection, since
$$P_j = P_j I = \sum_{i=1}^{k} P_j P_i = P_j^2.$$

Claim 3: By the Cayley-Hamilton Theorem, if N_i = a_i(A) q_i(A)(A - λ_i I) = P_i (A - λ_i I), then N_i^{m_i} = 0.

Claim 4: N_i P_i = N_i and N_i P_j = 0 for i ≠ j. To see this, note that N_i P_j = P_j N_i = P_j P_i (A - λ_i I) = 0 and N_i P_i = P_i P_i (A - λ_i I) = N_i.

Finally, since I = Σ_i P_i, then
$$A = \sum_i P_i A = \sum_i P_i (\lambda_i I + A - \lambda_i I) = \sum_i [\lambda_i P_i + N_i].$$

Two Examples Finished
$$\begin{pmatrix} -1 & -2 \\ 3 & 4 \end{pmatrix} = 1 \begin{pmatrix} 3 & 2 \\ -3 & -2 \end{pmatrix} + 2 \begin{pmatrix} -2 & -2 \\ 3 & 3 \end{pmatrix}$$
and
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix} = 1 \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} + 2 \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$

Remarks
(1) The sequence $\{\lambda_i\}_{i=1}^{k}$ is the sequence of eigenvalues. If x is in R^n and
$$v_i = P_i N_i^{m_i - 1}(x),$$
then v_i is an eigenvector for A in the sense that λ_i v_i = A v_i.
(2) For the nilpotent part, m_i ≤ n. In fact, N_i^n must be zero for each i.

Assignment Get the spectral resolution (or Jordan Canonical Form) for the matrices:
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \begin{pmatrix} -3 & 2 \\ 1 & -2 \end{pmatrix}, \quad \begin{pmatrix} 0 & -1 \\ 1 & 2 \end{pmatrix}, \quad\text{and}\quad \begin{pmatrix} -5 & 1 & 3 \\ 1 & -2 & -1 \\ -4 & 1 & 2 \end{pmatrix}.$$
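One way to check an answer to this assignment is to verify the properties of Theorem 1 numerically. As an illustration, here is a minimal sketch (Python with NumPy; the names follow the worked example for A_2 above) that checks the nilpotent parts, property (g), and the eigenvector of Remark (1):

    import numpy as np

    A2 = np.array([[1., 0., 0.],
                   [0., 2., 1.],
                   [0., 0., 2.]])
    I3 = np.eye(3)

    P1 = (A2 - 2*I3) @ (A2 - 2*I3)   # projection for eigenvalue 1
    P2 = (3*I3 - A2) @ (A2 - I3)     # projection for eigenvalue 2
    N1 = P1 @ (A2 - 1*I3)            # nilpotent part for eigenvalue 1 (zero here)
    N2 = P2 @ (A2 - 2*I3)            # nilpotent part for eigenvalue 2

    print(np.allclose(N1, 0))                      # True
    print(np.allclose(N2 @ N2, 0))                 # True: N2^2 = 0, so m_2 <= 2
    print(np.allclose(1*P1 + N1 + 2*P2 + N2, A2))  # True: property (g)

    # Remark (1): v_2 = P_2 N_2^(m_2 - 1) x is an eigenvector for eigenvalue 2.
    x = np.array([1., 1., 1.])
    v2 = P2 @ N2 @ x
    print(np.allclose(A2 @ v2, 2*v2))              # True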

Section 2: Exp(tA)
Often in differential equations -- both in ordinary and in partial differential equations -- one can conceive of the problem as having this form: Z' = AZ, Z(0) = c. If one can make sense of exp(tA), then surely exp(tA)c should provide the solution to a differential equation of the form suggested. Finite dimensional linear systems beg for such an understanding. More importantly, this understanding gives direction for the analysis of stability questions in linear and nonlinear differential equations. Here is a review of the linear, finite dimensional case.

Theorem 2 If P is a projection, t is a number, and x is in {E, <·,·>}, then the sequence
$$S(n) = \sum_{i=1}^{n} \frac{t^i P^i}{i!}(x) = \sum_{i=1}^{n} \frac{t^i}{i!} P(x)$$
converges in E.

Recall that whatever norm is used for R^n, if A is an n x n matrix, then there is a number B such that if x is in R^n, then |Ax| ≤ B|x|. Moreover, the least such B is denoted ||A||.

Definition
$$\exp(tA) = \sum_{i=0}^{\infty} \frac{t^i A^i}{i!}.$$

Corollary 3 If P is a projection, then exp(tP)(x) = (I - P)(x) + e^t P(x).

Corollary 4 Suppose that P and Q are projections, PQ = QP = 0, and t is a number. Then exp(tP + tQ) = e^t P + e^t Q + (I - P - Q).

Suggestion of Proof: With the suppositions for P and Q, P + Q is also a projection. Thus, the previous result applies.

Observation If N is nilpotent of order m, then
$$\exp(tN) = \sum_{i=0}^{m-1} \frac{t^i N^i}{i!}.$$

Theorem 5 If A is a linear transformation from R^n to R^n and
$$A = \sum_{i=1}^{k} [\lambda_i P_i + N_i]$$
is as in Theorem 1, then

$$\exp(tA) = \sum_{i=1}^{k} \exp(\lambda_i t)\, P_i \left( \sum_{j=0}^{m_i - 1} \frac{t^j N_i^j}{j!} \right).$$

Suggestion of Proof: Suppose that λ is a number. Then exp(tA) = exp(λt) exp(t[A - λI]). Suppose that A = Σ_i [λ_i P_i + N_i]. Then
$$\begin{aligned}
P_i \exp(tA) &= e^{\lambda_i t} P_i \sum_{p=0}^{\infty} \frac{t^p}{p!} (A - \lambda_i I)^p \\
&= e^{\lambda_i t} P_i \sum_{p=0}^{\infty} \frac{t^p}{p!} P_i (A - \lambda_i I)^p \qquad \text{since } P_i = P_i^2 \\
&= e^{\lambda_i t} P_i \sum_{p=0}^{\infty} \frac{t^p}{p!} N_i^p.
\end{aligned}$$
Thus,
$$\exp(tA) = I \exp(tA) = \sum_i P_i \exp(tA) = \sum_i e^{\lambda_i t} P_i \sum_{p=0}^{m_i - 1} \frac{t^p}{p!} N_i^p.$$
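Theorem 5 is also easy to check against a direct computation of the matrix exponential. A minimal sketch (Python with NumPy and SciPy; scipy.linalg.expm computes the matrix exponential, and the resolution of A_2 from Section 1 is reused):

    import numpy as np
    from scipy.linalg import expm

    A2 = np.array([[1., 0., 0.],
                   [0., 2., 1.],
                   [0., 0., 2.]])
    I3 = np.eye(3)

    # Spectral resolution of A2 from Section 1 (eigenvalues 1 and 2, m_1 = 1, m_2 = 2).
    P1 = (A2 - 2*I3) @ (A2 - 2*I3)
    P2 = (3*I3 - A2) @ (A2 - I3)
    N2 = P2 @ (A2 - 2*I3)

    t = 0.7
    # Theorem 5: exp(tA) = sum_i exp(lambda_i t) P_i (sum_j t^j N_i^j / j!).
    via_resolution = np.exp(1*t) * P1 + np.exp(2*t) * (P2 + t * N2)
    print(np.allclose(expm(t * A2), via_resolution))   # True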

Assignment Solve Z' = AZ, Z(0) = c, where A is any of the matrices in the previous assignment.
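A solution obtained this way can be checked numerically. One possible sketch (Python with NumPy and SciPy; the matrix A_1 from the worked examples, the initial condition, and the step size are ours, chosen only for illustration) compares Z(t) = exp(tA)c with a finite-difference estimate of Z':

    import numpy as np
    from scipy.linalg import expm

    # A1 from the worked examples in Section 1.
    A = np.array([[-1., -2.],
                  [ 3.,  4.]])
    c = np.array([1., 0.])

    def Z(t):
        """Candidate solution Z(t) = exp(tA) c."""
        return expm(t * A) @ c

    # Check Z' = AZ at t = 0.5 with a centered difference, and Z(0) = c.
    t, h = 0.5, 1e-6
    approx_derivative = (Z(t + h) - Z(t - h)) / (2 * h)
    print(np.allclose(approx_derivative, A @ Z(t), atol=1e-5))  # True, up to discretization error
    print(np.allclose(Z(0.0), c))                               # True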
