Linear Systems Lect5

- The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic equation. This means that any power of a matrix A can be expressed as a linear combination of the identity matrix and lower powers of A.
- Any function with a convergent power series expansion can also be expressed as a linear combination of the identity matrix and powers of A. This allows functions of matrices to be evaluated.
- For time-varying systems described by a matrix A(t), the solution is of the form e^{∫A(τ)dτ} if A(t) commutes with itself at different times. This holds, for example, if A(t) is a scalar function times a constant matrix.


EL 625 Lecture 5

Cayley-Hamilton Theorem:

Every square matrix satisfies its own characteristic equation.

If the characteristic equation is

p(λ) = det(λI − A) = λ^n + a_{n−1}λ^{n−1} + a_{n−2}λ^{n−2} + … + a_1λ + a_0

then, from the Cayley-Hamilton theorem,

p(A) = A^n + a_{n−1}A^{n−1} + a_{n−2}A^{n−2} + … + a_1A + a_0I = 0

Thus, A^n can be expressed as a linear combination of the matrices A^i, i = 0, 1, 2, …, n−1 (A^0 = I):

A^n = −Σ_{i=0}^{n−1} a_i A^i

A^{n+1} = −Σ_{i=0}^{n−1} a_i A^{i+1} = Σ_{i=0}^{n−1} b_i A^i

A^{n+1} can be expressed as a linear combination of A, A^2, …, A^n.


But, since A^n itself can be expressed as a linear combination of I, A, A^2, …, A^{n−1}, A^{n+1} can also be expressed as a linear combination of I, A, A^2, …, A^{n−1}.

Any power of A can be written as a linear combination of I, A, A^2, …, A^{n−1}.

Any function which has a convergent power series expansion can be expressed as a linear combination of I, A, A^2, …, A^{n−1}:

f(A) = Σ_{j=0}^{n−1} γ_j A^j
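The power-reduction argument above is easy to check numerically. The sketch below (not part of the original notes; it assumes NumPy is available and uses an arbitrary 2×2 matrix) verifies that p(A) = 0 and that A^2 and A^3 reduce to combinations of I and A:

```python
import numpy as np

# Illustrative 2x2 matrix (an assumed example, not prescribed by the notes)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# Characteristic polynomial p(s) = s^2 + a1 s + a0;
# np.poly returns the coefficients [1, a1, a0]
coeffs = np.poly(A)
a1, a0 = coeffs[1], coeffs[2]

# Cayley-Hamilton: p(A) = A^2 + a1 A + a0 I = 0
p_of_A = A @ A + a1 * A + a0 * np.eye(2)
assert np.allclose(p_of_A, 0)

# Hence A^2 is a linear combination of I and A:
A2 = -a1 * A - a0 * np.eye(2)
assert np.allclose(A2, A @ A)

# A^3 = A * A^2 = -a1 A^2 - a0 A, which again reduces to I and A
A3 = -a1 * A2 - a0 * A
assert np.allclose(A3, np.linalg.matrix_power(A, 3))
```

The same reduction applies recursively to any higher power, matching the argument for A^{n+1} above.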

If the matrix A can be diagonalized via a similarity transformation T, i.e. T^{−1}AT = Λ, then

T^{−1} f(A) T = Σ_{j=0}^{n−1} γ_j T^{−1}A^jT = Σ_{j=0}^{n−1} γ_j (T^{−1}AT)^j = Σ_{j=0}^{n−1} γ_j Λ^j = f(Λ)

where

f(Λ) = diag( f(λ_1), f(λ_2), …, f(λ_n) )

Since

f(Λ) = diag( Σ_{j=0}^{n−1} γ_j λ_1^j,  Σ_{j=0}^{n−1} γ_j λ_2^j,  …,  Σ_{j=0}^{n−1} γ_j λ_n^j ),

equating diagonal entries gives

f(λ_i) = Σ_{j=0}^{n−1} γ_j λ_i^j    for i = 1, 2, …, n.

Example:

A = [  0   1 ]
    [ −2  −3 ]

The eigenvalues are λ_1 = −1 and λ_2 = −2.

f(A) = e^{At}
f(λ) = e^{λt}

e^{At} = α_1 A + α_0 I
e^{λt} = α_1 λ + α_0

At λ = λ_1 = −1:  e^{−t} = −α_1 + α_0
At λ = λ_2 = −2:  e^{−2t} = −2α_1 + α_0

Solving,

α_1 = e^{−t} − e^{−2t}
α_0 = 2e^{−t} − e^{−2t}

Thus,

e^{At} = (e^{−t} − e^{−2t}) [  0   1 ]  +  (2e^{−t} − e^{−2t}) [ 1  0 ]
                            [ −2  −3 ]                         [ 0  1 ]

       = [  2e^{−t} − e^{−2t}        e^{−t} − e^{−2t}  ]
         [ −2e^{−t} + 2e^{−2t}     −e^{−t} + 2e^{−2t}  ]
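As a numerical sanity check (an addition to the notes, assuming SciPy is available), the closed-form e^{At} above can be compared against scipy.linalg.expm at an arbitrary time:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.7  # arbitrary test time

# Closed form derived via Cayley-Hamilton above
eAt_closed = np.array([
    [ 2*np.exp(-t) - np.exp(-2*t),      np.exp(-t) - np.exp(-2*t)],
    [-2*np.exp(-t) + 2*np.exp(-2*t),   -np.exp(-t) + 2*np.exp(-2*t)],
])

assert np.allclose(eAt_closed, expm(A * t))
```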

If the matrix A has repeated eigenvalues, the coefficients γ_j are found from the derivative conditions

d^k f(λ)/dλ^k |_{λ=λ_i} = d^k/dλ^k [ Σ_{j=0}^{n−1} γ_j λ^j ] |_{λ=λ_i}

Example:

A = [ −1   1 ]
    [  0  −1 ]

This matrix has a repeated eigenvalue at −1.

f(λ) = e^{λt}
df(λ)/dλ = t e^{λt}

e^{At} = α_1 A + α_0 I
e^{λt} = α_1 λ + α_0

At λ = −1:  e^{−t} = −α_1 + α_0
Derivative condition:  t e^{λt}|_{λ=−1} = t e^{−t} = α_1

Solving,

α_1 = t e^{−t}
α_0 = e^{−t} + t e^{−t}

Thus,

e^{At} = t e^{−t} [ −1   1 ]  +  (e^{−t} + t e^{−t}) [ 1  0 ]
                  [  0  −1 ]                          [ 0  1 ]

       = [ e^{−t}   t e^{−t} ]
         [ 0        e^{−t}   ]
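The repeated-eigenvalue result can be checked the same way (again an added sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import expm

# Matrix with a repeated eigenvalue at -1
A = np.array([[-1.0, 1.0], [0.0, -1.0]])
t = 1.3  # arbitrary test time

# Closed form derived via the derivative condition above
eAt_closed = np.exp(-t) * np.array([[1.0, t], [0.0, 1.0]])

assert np.allclose(eAt_closed, expm(A * t))
```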

Time-varying systems:

A first-order example:  ẋ(t) = a(t)x(t)  with  x(t_0) = x_0.

dx(t)/x(t) = a(t) dt   ⟹   x(t) = e^{∫_{t_0}^{t} a(τ)dτ} x_0
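The scalar closed form can be verified by numerical integration. This sketch is an addition to the notes; the choice a(t) = sin(t) is an arbitrary illustrative assumption:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed example: a(t) = sin(t), x(0) = 2
a = np.sin
t0, x0, tf = 0.0, 2.0, 3.0

# Integrate xdot = a(t) x numerically
sol = solve_ivp(lambda t, x: a(t) * x, (t0, tf), [x0],
                rtol=1e-10, atol=1e-12)
x_numeric = sol.y[0, -1]

# Closed form: x(t) = exp(integral of a from t0 to t) * x0;
# here the integral of sin from 0 to tf is 1 - cos(tf)
x_closed = np.exp(1.0 - np.cos(tf)) * x0

assert np.isclose(x_numeric, x_closed, rtol=1e-6)
```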

Is the solution to a general nth-order unforced system of the same form,

x(t) = e^{∫_{t_0}^{t} A(τ)dτ} x_0,   i.e.   Φ(t, t_0) = e^{∫_{t_0}^{t} A(τ)dτ} ?

Yes, but only if

A(t) ∫_{t_0}^{t} A(τ)dτ = ( ∫_{t_0}^{t} A(τ)dτ ) A(t)

Proof: Let F(t, t_0) = ∫_{t_0}^{t} A(τ)dτ. By assumption,

A(t)F(t, t_0) = F(t, t_0)A(t)

This implies

F(t, t_0)A(t)F(t, t_0) = F^2(t, t_0)A(t)
⟹ A(t)F^2(t, t_0) = F^2(t, t_0)A(t)

and in general, A(t)F^r(t, t_0) = F^r(t, t_0)A(t) for any positive integer r.

Consider

x(t) = e^{F(t, t_0)} x_0 = [ I + F(t, t_0) + F^2(t, t_0)/2! + … ] x_0

Differentiating term by term, and using dF(t, t_0)/dt = A(t) together with the commutativity of A(t) with powers of F(t, t_0),

ẋ(t) = [ A(t) + F(t, t_0)A(t) + (F^2(t, t_0)/2!)A(t) + … ] x_0
     = [ I + F(t, t_0) + F^2(t, t_0)/2! + … ] A(t) x_0
     = A(t) [ I + F(t, t_0) + F^2(t, t_0)/2! + … ] x_0    (commutativity of A(t) and F(t, t_0))
     = A(t) x(t)

so x(t) = e^{F(t, t_0)} x_0 satisfies the state equation.

Theorem: A(t)F(t, t_0) = F(t, t_0)A(t) if and only if A(t)A(τ) = A(τ)A(t).

Proof:

A(t)A(τ) = A(τ)A(t)
⟹ A(t) ∫_{t_0}^{t} A(τ)dτ = ( ∫_{t_0}^{t} A(τ)dτ ) A(t)
⟹ A(t)F(t, t_0) = F(t, t_0)A(t)

Conversely, if A(t)F(t, t_0) = F(t, t_0)A(t), differentiating both sides with respect to t_0 (noting ∂F(t, t_0)/∂t_0 = −A(t_0)) gives

A(t) ∂F(t, t_0)/∂t_0 = ( ∂F(t, t_0)/∂t_0 ) A(t)
⟹ A(t)A(t_0) = A(t_0)A(t)

Since t_0 is arbitrary, A(t)A(τ) = A(τ)A(t).

If A(t) = α(t)A, where α(t) is a scalar function of time and A is a constant matrix, then

A(t)A(τ) = α(t)α(τ)A^2 = A(τ)A(t)

Thus,

Φ(t, t_0) = e^{∫_{t_0}^{t} α(τ)dτ A}

Example:

A(t) = [  0    t  ]  = tA,    where  A = [  0   1 ]
       [ −2t  −3t ]                      [ −2  −3 ]

Here α(t) = t, so Φ(t, t_0) = e^{∫_{t_0}^{t} τ dτ A}.

∫_{t_0}^{t} τ dτ = (1/2)(t^2 − t_0^2)

Thus,

Φ(t, t_0) = e^{(1/2)(t^2 − t_0^2) A}

The matrix A has the eigenvalues λ_1 = −1, λ_2 = −2.

Using the method of trial functions, Z_10 and Z_20 can be evaluated as

Z_10 = [  2   1 ] ,    Z_20 = [ −1  −1 ]
       [ −2  −1 ]             [  2   2 ]

With f(λ) = e^{(1/2)(t^2 − t_0^2)λ},

f(A) = e^{−(1/2)(t^2 − t_0^2)} Z_10 + e^{−(t^2 − t_0^2)} Z_20

     = [  2e^{−(1/2)(t^2 − t_0^2)} − e^{−(t^2 − t_0^2)}      e^{−(1/2)(t^2 − t_0^2)} − e^{−(t^2 − t_0^2)}  ]
       [ −2e^{−(1/2)(t^2 − t_0^2)} + 2e^{−(t^2 − t_0^2)}    −e^{−(1/2)(t^2 − t_0^2)} + 2e^{−(t^2 − t_0^2)} ]

Or, using the Cayley-Hamilton theorem:

f(A) = α_1 A + α_0 I
f(λ) = α_1 λ + α_0

At λ_1 = −1:  e^{−(1/2)(t^2 − t_0^2)} = −α_1 + α_0
At λ_2 = −2:  e^{−(t^2 − t_0^2)} = −2α_1 + α_0

Solving,

α_1 = e^{−(1/2)(t^2 − t_0^2)} − e^{−(t^2 − t_0^2)}
α_0 = 2e^{−(1/2)(t^2 − t_0^2)} − e^{−(t^2 − t_0^2)}

Thus,

f(A) = α_1 [  0   1 ]  +  α_0 [ 1  0 ]
           [ −2  −3 ]         [ 0  1 ]

     = [  2e^{−(1/2)(t^2 − t_0^2)} − e^{−(t^2 − t_0^2)}      e^{−(1/2)(t^2 − t_0^2)} − e^{−(t^2 − t_0^2)}  ]
       [ −2e^{−(1/2)(t^2 − t_0^2)} + 2e^{−(t^2 − t_0^2)}    −e^{−(1/2)(t^2 − t_0^2)} + 2e^{−(t^2 − t_0^2)} ]
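The claim that Φ(t, t_0) = e^{(1/2)(t^2 − t_0^2)A} really is the state-transition matrix can be cross-checked by integrating ẋ = tAx directly. This sketch is an addition to the notes, assuming SciPy is available; the times t_0 and t are arbitrary:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t0, tf = 0.5, 1.5  # arbitrary initial and final times

# Closed form: Phi(t, t0) = exp((1/2)(t^2 - t0^2) A)
Phi_closed = expm(0.5 * (tf**2 - t0**2) * A)

# Cross-check: integrate xdot = t A x from each unit initial condition
def rhs(t, x):
    return t * (A @ x)

for x0 in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    sol = solve_ivp(rhs, (t0, tf), x0, rtol=1e-10, atol=1e-12)
    assert np.allclose(sol.y[:, -1], Phi_closed @ x0, atol=1e-6)
```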

Example:

A(t) = [ 1   t^2   t^4 ]
       [ 0   1     t^2 ]
       [ 0   0     1   ]

It can easily be verified that

A(t)A(τ) = A(τ)A(t) = [ 1   t^2 + τ^2   t^4 + t^2τ^2 + τ^4 ]
                      [ 0   1           t^2 + τ^2          ]
                      [ 0   0           1                  ]

Thus,

Φ(t, t_0) = e^{F(t, t_0)}

where

F(t, t_0) = ∫_{t_0}^{t} A(τ)dτ = [ t − t_0    (1/3)(t^3 − t_0^3)    (1/5)(t^5 − t_0^5) ]
                                 [ 0          t − t_0               (1/3)(t^3 − t_0^3) ]
                                 [ 0          0                     t − t_0            ]

The matrix F(t, t_0) has a repeated eigenvalue of multiplicity 3 at λ_1(t) = (t − t_0).

Using the method of trial functions,

f(F) = f(λ_1)Z_10 + f′(λ)|_{λ=λ_1} Z_11 + f″(λ)|_{λ=λ_1} Z_12

Choosing f_1(λ) = (λ + 1 − λ_1):

f_1(F) = F + (1 − λ_1)I
       = f_1(λ_1)Z_10 + f_1′(λ)|_{λ=λ_1} Z_11 + f_1″(λ)|_{λ=λ_1} Z_12
       = Z_10 + Z_11

⟹ Z_10 + Z_11 = F + (1 − λ_1)I

Choosing f_2(λ) = λ^2 − 2λ_1λ + λ_1^2:

f_2(F) = F^2 − 2λ_1F + λ_1^2 I
       = f_2(λ_1)Z_10 + f_2′(λ)|_{λ=λ_1} Z_11 + f_2″(λ)|_{λ=λ_1} Z_12
       = 2Z_12

⟹ Z_12 = (1/2)F^2 − λ_1F + (1/2)λ_1^2 I

With f(λ) = e^λ,

f(F) = e^F = e^{λ_1}(Z_10 + Z_11 + Z_12)
     = e^{λ_1}[ F + (1 − λ_1)I + (1/2)F^2 − λ_1F + (1/2)λ_1^2 I ]
     = e^{λ_1}[ (1/2)F^2 + (1 − λ_1)F + ((1/2)λ_1^2 − λ_1 + 1)I ]

Evaluating, with λ_1 = t − t_0,

e^F = e^{t − t_0} [ 1    (1/3)(t^3 − t_0^3)    (1/5)(t^5 − t_0^5) + (1/18)(t^3 − t_0^3)^2 ]
                  [ 0    1                     (1/3)(t^3 − t_0^3)                         ]
                  [ 0    0                     1                                          ]
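The 3×3 closed form for e^F can be verified numerically at an arbitrary (t, t_0) pair (an added sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import expm

t, t0 = 1.2, 0.4  # arbitrary test times
a = (t**3 - t0**3) / 3.0
b = (t**5 - t0**5) / 5.0

F = np.array([[t - t0, a,      b],
              [0.0,    t - t0, a],
              [0.0,    0.0,    t - t0]])

# Closed form: note a^2 / 2 = (1/18)(t^3 - t0^3)^2
eF_closed = np.exp(t - t0) * np.array([
    [1.0, a,   b + a**2 / 2.0],
    [0.0, 1.0, a],
    [0.0, 0.0, 1.0],
])

assert np.allclose(eF_closed, expm(F))
```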
