Chapter 5: Functions of Vectors and Matrices

$\langle y, Ax \rangle \;(= y^T A x)$: "Bilinear Form"

$\langle x, Ax \rangle \;(= x^T A x)$: "Quadratic Form"

Note that because

$$x^T A x = (x^T A x)^T = x^T A^T x,$$

$$x^T A x = \tfrac{1}{2}\left(x^T A x + x^T A^T x\right) = x^T \left(\frac{A + A^T}{2}\right) x,$$
any quadratic form can be written as a quadratic form with a symmetric A-matrix. We therefore treat all quadratic forms as if they contained symmetric matrices.
DEFINITIONS: Let $Q = x^T A x$.

1. Q (or A) is positive definite iff $\langle x, Ax \rangle > 0$ for all $x \neq 0$.

2. Q (or A) is positive semidefinite iff $\langle x, Ax \rangle \geq 0$ for all $x \neq 0$.

3. Q (or A) is negative definite iff $\langle x, Ax \rangle < 0$ for all $x \neq 0$.

4. Q (or A) is negative semidefinite iff $\langle x, Ax \rangle \leq 0$ for all $x \neq 0$.

5. Q (or A) is indefinite iff $\langle x, Ax \rangle > 0$ for some $x \neq 0$, and $\langle x, Ax \rangle < 0$ for other $x \neq 0$.

Tests for definiteness of matrix A in terms of its eigenvalues:

Matrix A is . . .            If the real parts of the eigenvalues $\lambda_i$ of A are . . .

1. Positive definite         all $> 0$
2. Positive semidefinite     all $\geq 0$
3. Negative definite         all $< 0$
4. Negative semidefinite     all $\leq 0$
5. Indefinite                some $\mathrm{Re}(\lambda_i) > 0$, some $\mathrm{Re}(\lambda_i) < 0$

See book for tests involving leading principal minors.
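
As a quick numerical illustration (a sketch of mine, not part of the notes; the helper name classify_definiteness is made up), these eigenvalue tests map directly onto NumPy, where numpy.linalg.eigvalsh returns the real eigenvalues of a symmetric matrix:

```python
import numpy as np

def classify_definiteness(A, tol=1e-12):
    """Classify a quadratic form x^T A x by the signs of the eigenvalues."""
    # Symmetrize first: x^T A x = x^T ((A + A^T)/2) x, as shown above.
    S = (A + A.T) / 2
    lam = np.linalg.eigvalsh(S)  # real eigenvalues, ascending order
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam >= -tol):
        return "positive semidefinite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam <= tol):
        return "negative semidefinite"
    return "indefinite"

print(classify_definiteness(np.array([[2.0, 1.0], [1.0, 2.0]])))   # positive definite
print(classify_definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])))  # indefinite
```

The tolerance guards against round-off misclassifying a semidefinite matrix as indefinite.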


We need to consider functions of matrices before we can solve the state equations in the time domain.

Applying a function $f(A)$ to a matrix A is NOT the same thing as applying the function to the matrix entries element-by-element.

First, define matrix powers:

$$A \cdot A = A^2, \ \text{etc.}$$
$$A^0 = I$$
$$A^m A^n = A^{m+n}$$
$$(A^m)^n = A^{mn}$$
$$(A^{-1})^n = A^{-n}$$
Matrix Polynomials:

                   Matrix Form                                      Scalar Form
polynomial form:   $P(A) = c_m A^m + \cdots + c_1 A + c_0 I$        $P(x) = c_m x^m + \cdots + c_1 x + c_0$
factored form:     $P(A) = c (A - a_1 I) \cdots (A - a_m I)$        $P(x) = c (x - a_1) \cdots (x - a_m)$
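
A small sketch (the helper names are mine) showing that the two matrix forms agree, using NumPy and Horner's rule for the polynomial form:

```python
import numpy as np

def matrix_poly(coeffs, A):
    """P(A) = c_m A^m + ... + c_1 A + c_0 I, coeffs given highest degree first."""
    P = np.zeros_like(A, dtype=float)
    for c in coeffs:
        P = P @ A + c * np.eye(A.shape[0])  # Horner step; c_0 multiplies I
    return P

def matrix_poly_factored(c, roots, A):
    """P(A) = c (A - a_1 I) ... (A - a_m I)."""
    P = c * np.eye(A.shape[0])
    for a in roots:
        P = P @ (A - a * np.eye(A.shape[0]))
    return P

A = np.array([[3.0, 1.0], [1.0, 2.0]])
# P(x) = x^2 - 5x + 6 = (x - 2)(x - 3), so the two forms must agree:
print(np.allclose(matrix_poly([1, -5, 6], A),
                  matrix_poly_factored(1, [2, 3], A)))  # True
```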
Convergence of Polynomial Series:

Theorem: Let A be an n x n matrix whose eigenvalues are λi .


If the infinite series

$$\sigma(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_k x^k + \cdots = \sum_{k=0}^{\infty} a_k x^k$$

converges for all $x = \lambda_i$, then the series

$$\sigma(A) = a_0 I + a_1 A + a_2 A^2 + \cdots + a_k A^k + \cdots = \sum_{k=0}^{\infty} a_k A^k$$

converges. This will be important when we want the Taylor series expansions of a function of a matrix.

Theorem: If $f(z)$ is any function (not necessarily a polynomial) whose derivative exists for all $z$ within a circle of the complex plane in which all eigenvalues of matrix A lie, then $f(A)$ can be written as a convergent power series.
Example: Find $\frac{d}{dt}\left(e^{At}\right)$.

$$e^{At} = I + At + \frac{A^2 t^2}{2!} + \cdots$$

$$\frac{d\,e^{At}}{dt} = A + \frac{2 A^2 t}{2!} + \frac{3 A^3 t^2}{3!} + \cdots = A\left(I + At + \frac{A^2 t^2}{2!} + \cdots\right) = A e^{At} \;(= e^{At} A)$$
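
A quick numerical check of this derivative identity (my sketch; it assumes SciPy is available, scipy.linalg.expm is SciPy's matrix exponential, and the test matrix, t, and step size are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-3.0, 1.0], [0.0, -2.0]])
t, h = 0.7, 1e-6

# Central-difference approximation of d(e^{At})/dt at time t
numeric = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)

print(np.allclose(numeric, A @ expm(A * t), atol=1e-5))  # True
print(np.allclose(A @ expm(A * t), expm(A * t) @ A))     # A and e^{At} commute
```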

Also note that:

$$\sin(A) = A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots$$

$$\cos(A) = I - \frac{A^2}{2!} + \frac{A^4}{4!} - \cdots$$

. . . etc., the same as for the expansions of scalar functions.
A much more useful theorem:

Theorem: Let $g(\lambda)$ be a polynomial of degree n-1 and $f(\lambda)$ be ANY function of $\lambda$. If $f(\lambda) = g(\lambda)$ for all eigenvalues of A ("on the spectrum of A"), then $f(A) = g(A)$ (for A itself).

Implication: We can define the matrix version of a non-polynomial scalar function using a matrix polynomial, if the two functions agree on the spectrum of the matrix!

1 2 
Example: Let A=  
0 2 

"spectrum of A"={eigenvalues(A)}= σ( A ) ={1, 2}


Let g( λ ) be our n-1 order polynomial: g( λ ) = α 0 + α 1λ

Now suppose we are asked to find f ( A) = A5

f ( λ ) = λ5

So we set f ( λ ) = g ( λ ) for λ = {1,2}


$$1^5 = 1 = \alpha_0 + \alpha_1 \cdot 1$$
$$2^5 = 32 = \alpha_0 + \alpha_1 \cdot 2$$

Solving for $\alpha_0, \alpha_1$: $\alpha_0 = -30$, $\alpha_1 = 31$.

NOTE: If we had repeated eigenvalues, these equations would not be independent. We could instead use the equation AND its derivatives.
Using this result:

$$A^5 = -30 I + 31 A = \begin{bmatrix} 1 & 62 \\ 0 & 32 \end{bmatrix}$$

an alternative way of calculating $A^5$.
− 3 1 
Example: Let A =  0 − 2  Find a closed-form solution
 
for sin( A ). (Can't use Taylor series) λ1 = −3, λ2 = −2
(goes on forever)
See This is similar to an earlier example. Because n=2,
previous any analytic function of A can be written as a first
theorem order matrix polynomial, so
n-1 order sin ( A ) = α I + α A
0 1

Evaluate this expression on the spectrum of A:

$$\sin(-3) = \alpha_0 + \alpha_1(-3)$$
$$\sin(-2) = \alpha_0 + \alpha_1(-2)$$

Solving, $\alpha_1 = -0.768$ and $\alpha_0 = -2.45$, so

$$\sin(A) = \alpha_0 I + \alpha_1 A = \begin{bmatrix} \sin(-3) & \sin(-2) - \sin(-3) \\ 0 & \sin(-2) \end{bmatrix}$$
If A had repeated eigenvalues, the two equations

$$\sin(-3) = \alpha_0 + \alpha_1(-3)$$
$$\sin(-2) = \alpha_0 + \alpha_1(-2)$$

would be linearly dependent and have no unique solution. Then we could use one of them and use a derivative for the other:

$$\frac{d}{d\lambda} \sin(\lambda) = \frac{d}{d\lambda}(\alpha_0 + \alpha_1 \lambda)$$

so

$$\cos(\lambda) = \alpha_1,$$

evaluated at the repeated eigenvalue. This would be the second independent equation.
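
To illustrate the derivative trick, here is a hedged sketch using a matrix of my own choosing with the repeated eigenvalue $\lambda = -2$, so that $\cos(-2) = \alpha_1$ supplies the second equation:

```python
import numpy as np
from scipy.linalg import sinm

A = np.array([[-2.0, 1.0], [0.0, -2.0]])  # repeated eigenvalue lambda = -2

# Spectrum equation:    sin(-2) = alpha_0 + alpha_1 * (-2)
# Derivative equation:  cos(-2) = alpha_1
alpha1 = np.cos(-2.0)
alpha0 = np.sin(-2.0) + 2.0 * alpha1

sinA = alpha0 * np.eye(2) + alpha1 * A
print(np.allclose(sinA, sinm(A)))         # True
```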


Cayley-Hamilton Theorem: Let a system have characteristic polynomial

$$|A - \lambda I| = \phi(\lambda)$$

Then

$$\phi(A) = 0$$

That is, every matrix satisfies its own characteristic polynomial.

This theorem, together with the previous one, implies that we never need to consider polynomials of a matrix of order higher than n-1 (!!)
Example (reduction of matrix polynomials to degree n-1 or less): Let

$$A = \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}$$

and find $P(A) = A^4 + 3A^3 + 2A^2 + A + I$.

The characteristic equation is

$$\Delta(\lambda) = \lambda^2 - 5\lambda + 5 = 0$$

so from Cayley-Hamilton,

$$A^2 = 5A - 5I$$
$$A^3 = A^2 A = (5A - 5I)A = 5A^2 - 5A = 5(5A - 5I) - 5A$$
$$A^4 = A^2 A^2 = (5A - 5I)^2 = 25A^2 - 50A + 25I = 25(5A - 5I) - 50A + 25I$$

Now P(A) will contain no powers of A higher than 1 (= n-1).
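
A numerical check of the reduction (a sketch; carrying the algebra above one more step gives $P(A) = 146A - 184I$):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
I = np.eye(2)
mp = np.linalg.matrix_power

# Cayley-Hamilton: phi(A) = A^2 - 5A + 5I = 0
print(np.allclose(A @ A - 5 * A + 5 * I, 0))  # True

# A^3 = 20A - 25I and A^4 = 75A - 100I, so P(A) = 146A - 184I
direct  = mp(A, 4) + 3 * mp(A, 3) + 2 * mp(A, 2) + A + I
reduced = 146 * A - 184 * I
print(np.allclose(direct, reduced))           # True
```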
Some examples of what these theorems allow us to do:
Example: Suppose the characteristic polynomial of a system is

$$\phi(\lambda) = \lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1 \lambda + c_0 = 0$$

so

$$\phi(A) = A^n + c_{n-1}A^{n-1} + \cdots + c_1 A + c_0 I = 0$$

Noting that $c_0$ is equal (up to sign) to the product of all the eigenvalues, we know it is nonzero iff matrix A is nonsingular (no zero eigenvalues), i.e., A is invertible. Multiply the above equation through by $A^{-1}$ to get:

$$A^{n-1} + c_{n-1} A^{n-2} + \cdots + c_1 I + c_0 A^{-1} = 0$$

Solving,

$$A^{-1} = -\frac{1}{c_0}\left[A^{n-1} + c_{n-1} A^{n-2} + \cdots + c_1 I\right]$$

an easy way for a computer to find the inverse.
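
A sketch of this "easy way" in NumPy (the helper name is mine; np.poly returns the coefficients of the characteristic polynomial of a square matrix, highest power first):

```python
import numpy as np

def inverse_via_cayley_hamilton(A):
    """A^{-1} = -(1/c0) [A^{n-1} + c_{n-1} A^{n-2} + ... + c_1 I]."""
    n = A.shape[0]
    c = np.poly(A)                # [1, c_{n-1}, ..., c_1, c_0]
    B = np.eye(n)
    for ck in c[1:n]:             # Horner accumulation of the bracket
        B = B @ A + ck * np.eye(n)
    return -B / c[n]              # divide by c_0 (nonzero iff A invertible)

A = np.array([[3.0, 1.0], [1.0, 2.0]])
print(np.allclose(inverse_via_cayley_hamilton(A), np.linalg.inv(A)))  # True
```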
Definition: The minimal polynomial of a square matrix A is the lowest-degree monic polynomial $\phi_m(\lambda)$ which satisfies

$$\phi_m(A) = 0$$

Minimality affects only the powers of repeated factors in the characteristic polynomial. For example, if

$$\phi(\lambda) = (\lambda - \lambda_1)^{m_1} (\lambda - \lambda_2)^{m_2} \cdots (\lambda - \lambda_p)^{m_p},$$

then

$$\phi_m(\lambda) = (\lambda - \lambda_1)^{\eta_1} (\lambda - \lambda_2)^{\eta_2} \cdots (\lambda - \lambda_p)^{\eta_p}$$

where $\eta_i \leq m_i$. Note that $\eta_i$ is not necessarily 1; rather, it is the index of the eigenvalue $\lambda_i$: the size of the largest Jordan block for $\lambda_i$, which determines how many ones appear on the superdiagonal of the Jordan form.
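
A small SymPy sketch (the 3x3 matrix is my own example): its characteristic polynomial is $(\lambda - 2)^3$, but its largest Jordan block for $\lambda = 2$ is 2x2, so the index is $\eta_1 = 2$ and the minimal polynomial is $(\lambda - 2)^2$:

```python
from sympy import Matrix, eye, factor, symbols

lam = symbols('lambda')
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 2]])

print(factor(A.charpoly(lam).as_expr()))  # (lambda - 2)**3, so m1 = 3

P, J = A.jordan_form()
print(J)  # one 2x2 Jordan block and one 1x1 block for lambda = 2

# Verify the minimal polynomial (lambda - 2)**2: eta_1 = 2 < m1 = 3
N = A - 2 * eye(3)
print(N != Matrix.zeros(3, 3), N**2 == Matrix.zeros(3, 3))  # True True
```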
Another important example of this technique will be in the computation of the matrix exponential

$$e^{At}$$

We will see in the next chapter how important this matrix will be in the solution of the state variable equations for a system.
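
As a preview (my sketch), $e^{At}$ for the earlier example matrix can be found with the same spectral technique; scipy.linalg.expm provides the reference value:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-3.0, 1.0], [0.0, -2.0]])  # eigenvalues -3 and -2
t = 0.5

# Match e^{lambda t} = alpha_0 + alpha_1 * lambda on the spectrum:
#   alpha_1 = e^{-2t} - e^{-3t},  alpha_0 = 3 e^{-2t} - 2 e^{-3t}
alpha1 = np.exp(-2 * t) - np.exp(-3 * t)
alpha0 = 3 * np.exp(-2 * t) - 2 * np.exp(-3 * t)

eAt = alpha0 * np.eye(2) + alpha1 * A
print(np.allclose(eAt, expm(A * t)))      # True
```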
