

10.4 Matrix Exponential


The problem
x′ (t) = Ax(t), x(0) = x0

has a unique solution, according to the Picard-Lindelöf theorem. Solve
the problem n times, when x0 equals a column of the identity matrix,
and write w1 (t), . . . , wn (t) for the n solutions so obtained. Define the
matrix exponential by packaging these n solutions into a matrix:

eAt ≡ aug(w1 (t), . . . , wn (t)).

By construction, any possible solution of x′ = Ax can be uniquely
expressed in terms of the matrix exponential eAt by the formula

x(t) = eAt x(0).
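
As a quick illustration (an example added here, not part of the original text), take
A = diag(1, 2). Solving x′ = Ax with x(0) equal to the two columns of the identity
matrix gives w1(t) = (e^{t}, 0)^T and w2(t) = (0, e^{2t})^T, so

    e^{At} = aug(w1(t), w2(t)) = [ e^{t}  0 ; 0  e^{2t} ]   (matrix rows separated by semicolons),

and x(t) = e^{At} x(0) reproduces the familiar general solution x1 = x1(0) e^{t}, x2 = x2(0) e^{2t}.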

Matrix Exponential Identities


Announced here and proved below are various formulae and identities
for the matrix exponential eAt :
(e^{At})′ = A e^{At}                          Columns satisfy x′ = Ax.

e^{0} = I                                     Where 0 is the zero matrix.

B e^{At} = e^{At} B                           If AB = BA.

e^{At} e^{Bt} = e^{(A+B)t}                    If AB = BA.

e^{At} e^{As} = e^{A(t+s)}                    At and As commute.

(e^{At})^{−1} = e^{−At}                       Equivalently, e^{At} e^{−At} = I.

e^{At} = r1(t)P1 + · · · + rn(t)Pn            Putzer's spectral formula, proved below.

e^{At} = e^{λ1 t} I + ((e^{λ1 t} − e^{λ2 t})/(λ1 − λ2)) (A − λ1 I)
                                              A is 2 × 2, λ1 ≠ λ2 real.

e^{At} = e^{λ1 t} I + t e^{λ1 t} (A − λ1 I)   A is 2 × 2, λ1 = λ2 real.

e^{At} = e^{at} cos(bt) I + (e^{at} sin(bt)/b) (A − aI)
                                              A is 2 × 2, λ1 = a + ib, λ2 = a − ib, b > 0.

e^{At} = Σ_{n=0}^{∞} A^n t^n/n!               Picard series, proved below.

e^{At} = P^{−1} e^{Jt} P                      Jordan form J = P A P^{−1}.
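
The identities above can be spot-checked numerically. The sketch below is an addition,
not part of the original text; it assumes NumPy and SciPy are available and uses
scipy.linalg.expm as the reference value of e^{At}, with an arbitrary test matrix and times.

    # Numerical spot-check of a few identities, using scipy.linalg.expm for e^{At}.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    t, s = 0.4, 0.9

    print(np.allclose(expm(A * 0), np.eye(2)))                    # e^0 = I
    print(np.allclose(expm(A*t) @ expm(A*s), expm(A*(t+s))))      # e^{At} e^{As} = e^{A(t+s)}
    print(np.allclose(expm(A*t) @ expm(-A*t), np.eye(2)))         # (e^{At})^{-1} = e^{-At}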



Putzer’s Spectral Formula


The spectral formula of Putzer applies to a system x′ = Ax to find the
general solution, using matrices P1 , . . . , Pn constructed from A and the
eigenvalues λ1 , . . . , λn of A, matrix multiplication, and the solution r(t)
of the first order n × n initial value problem
 
          [ λ1   0    0   · · ·   0    0  ]                [ 1 ]
          [ 1    λ2   0   · · ·   0    0  ]                [ 0 ]
r′(t) =   [ 0    1    λ3  · · ·   0    0  ]  r(t),  r(0) = [ ⋮ ]
          [              ⋱  ⋱             ]                [   ]
          [ 0    0    0   · · ·   1    λn ]                [ 0 ]
The system is solved by first order scalar methods and back-substitution.
We will derive the formula separately for the 2 × 2 case (the one used
most often) and the n × n case.
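
For instance (a worked step added here for clarity), when n = 2 the system reads
r1′ = λ1 r1, r1(0) = 1 and r2′ = λ2 r2 + r1, r2(0) = 0. The first equation gives
r1 = e^{λ1 t}; applying the integrating factor e^{−λ2 t} to the second gives

    ( e^{−λ2 t} r2 )′ = e^{(λ1 − λ2) t},   hence   r2(t) = (e^{λ1 t} − e^{λ2 t})/(λ1 − λ2)   for λ1 ≠ λ2,

and r2(t) = t e^{λ1 t} when λ1 = λ2.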

Putzer’s Spectral Formula for a 2 × 2 matrix A


The general solution of x′ = Ax is given by the formula
x(t) = (r1 (t)P1 + r2 (t)P2 ) x(0),
where r1 , r2 , P1 , P2 are defined as follows.
The eigenvalues r = λ1 , λ2 are the two roots of the quadratic equation
det(A − rI) = 0.
Define 2 × 2 matrices P1 , P2 by the formulae
P1 = I, P2 = A − λ1 I.
The functions r1 (t), r2 (t) are defined by the differential system
r1′ = λ1 r1 , r1 (0) = 1,

r2′ = λ2 r2 + r1 , r2 (0) = 0.
Proof: The Cayley-Hamilton formula (A − λ1 I)(A − λ2 I) = 0 is valid for
any 2 × 2 matrix A and the two roots r = λ1 , λ2 of the determinant equality
det(A − rI) = 0. The Cayley-Hamilton formula is the same as (A − λ2 I)P2 = 0,
which implies the identity AP2 = λ2 P2 . Compute as follows.
x′ (t) = (r1′ (t)P1 + r2′ (t)P2 ) x(0)
= (λ1 r1 (t)P1 + r1 (t)P2 + λ2 r2 (t)P2 ) x(0)
= (r1 (t)A + λ2 r2 (t)P2 ) x(0)
= (r1 (t)A + r2 (t)AP2 ) x(0)
= A (r1 (t)I + r2 (t)P2 ) x(0)
= Ax(t).
This proves that x(t) is a solution. Because Φ(t) ≡ r1 (t)P1 + r2 (t)P2 satisfies
Φ(0) = I, then any possible solution of x′ = Ax can be represented by the given
formula. The proof is complete.
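
A worked example (added here for illustration; not part of the original text). Take
A = [ 1  1 ; 0  2 ], so λ1 = 1, λ2 = 2, P1 = I and P2 = A − λ1 I = [ 0  1 ; 0  1 ].
The system r1′ = r1, r1(0) = 1 and r2′ = 2 r2 + r1, r2(0) = 0 has solution
r1 = e^{t}, r2 = e^{2t} − e^{t}, so

    e^{At} = r1(t) P1 + r2(t) P2 = [ e^{t}   e^{2t} − e^{t} ; 0   e^{2t} ],

which equals I at t = 0 and satisfies (e^{At})′ = A e^{At}.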

Real Distinct Eigenvalues. Suppose A is 2×2 having real distinct


eigenvalues λ1 , λ2 and x(0) is real. Then

r1 = e^{λ1 t},      r2 = (e^{λ1 t} − e^{λ2 t})/(λ1 − λ2)

and

x(t) = ( e^{λ1 t} I + ((e^{λ1 t} − e^{λ2 t})/(λ1 − λ2)) (A − λ1 I) ) x(0).

The matrix exponential formula for real distinct eigenvalues:

e^{At} = e^{λ1 t} I + ((e^{λ1 t} − e^{λ2 t})/(λ1 − λ2)) (A − λ1 I).

Real Equal Eigenvalues. Suppose A is 2 × 2 having real equal


eigenvalues λ1 = λ2 and x(0) is real. Then r1 = e^{λ1 t}, r2 = t e^{λ1 t} and

x(t) = ( e^{λ1 t} I + t e^{λ1 t} (A − λ1 I) ) x(0).

The matrix exponential formula for real equal eigenvalues:

e^{At} = e^{λ1 t} I + t e^{λ1 t} (A − λ1 I).
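
For instance (an example added here, not in the original text), A = [ 2  1 ; 0  2 ] has
λ1 = λ2 = 2 and A − 2I = [ 0  1 ; 0  0 ], so

    e^{At} = e^{2t} I + t e^{2t} (A − 2I) = e^{2t} [ 1  t ; 0  1 ].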

Complex Eigenvalues. Suppose A is 2 × 2 having complex eigen-


values λ1 = a + bi with b > 0 and λ2 = a − bi. If x(0) is real, then a
real solution is obtained by taking the real part of the spectral formula.
This formula is formally identical to the case of real distinct eigenvalues.
Then

Re(x(t)) = ( Re(r1(t)) I + Re( r2(t)(A − λ1 I) ) ) x(0)

         = ( Re(e^{(a+ib)t}) I + Re( e^{at} (sin(bt)/b) (A − (a + ib)I) ) ) x(0)

         = ( e^{at} cos(bt) I + e^{at} (sin(bt)/b) (A − aI) ) x(0).

The matrix exponential formula for complex conjugate eigenvalues:

e^{At} = e^{at} cos(bt) I + (e^{at} sin(bt)/b) (A − aI).
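
For instance (an example added here, not in the original text), A = [ 0  1 ; −1  0 ] has
eigenvalues ±i, so a = 0, b = 1 and

    e^{At} = cos(t) I + sin(t) A = [ cos t   sin t ; −sin t   cos t ],

the rotation matrix that reappears in Theorem 12 below.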

How to Remember Putzer’s Formula for a 2 × 2 Matrix A.


The expressions

(1)      e^{At} = r1(t) I + r2(t) (A − λ1 I),      r1(t) = e^{λ1 t},      r2(t) = (e^{λ1 t} − e^{λ2 t})/(λ1 − λ2)

are enough to generate all three formulae. The fraction r2(t) is a difference
quotient with limit t e^{λ1 t} as λ2 → λ1, therefore the formula includes
the case λ1 = λ2 by limiting. If λ1 = a + ib and λ2 = a − ib with b > 0, then
the fraction r2 is already real, because it has for z = e^{λ1 t} and w = λ1 the
form

    r2(t) = (z − z̄)/(w − w̄) = e^{at} sin(bt)/b.
Taking real parts of expression (1) then gives the complex case formula
for eAt .
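
The limit claim can be verified directly (a step added here for completeness): with t and
λ1 held fixed, the fraction is the difference quotient of the function λ ↦ e^{λ t} between
λ1 and λ2, so

    lim_{λ2 → λ1} (e^{λ1 t} − e^{λ2 t})/(λ1 − λ2) = d/dλ [ e^{λ t} ] |_{λ = λ1} = t e^{λ1 t}.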

Putzer’s Spectral Formula for an n × n Matrix A


The general solution of x′ = Ax is given by the formula

x(t) = (r1 (t)P1 + r2 (t)P2 + · · · + rn (t)Pn ) x(0),

where r1 , r2 , . . . , rn , P1 , P2 , . . . , Pn are defined as follows.


The eigenvalues r = λ1 , . . . , λn are the roots of the polynomial equation

det(A − rI) = 0.

Define n × n matrices P1 , . . . , Pn by the formulae

P1 = I, Pk = (A − λk−1 I)Pk−1 , k = 2, . . . , n.
More succinctly, Pk = Π_{j=1}^{k−1} (A − λj I). The functions r1(t), . . . , rn(t) are
defined by the differential system

r1′ = λ1 r1 , r1 (0) = 1,
r2′ = λ2 r2 + r1 , r2 (0) = 0,
..
.
rn′ = λn rn + rn−1 , rn (0) = 0.
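
The n × n construction can also be exercised numerically. The sketch below is an addition,
not part of the original text; the helper name putzer_expm is illustrative only. It assumes
a real symmetric matrix A (so the eigenvalues are real) and integrates the r-system with
scipy.integrate.solve_ivp instead of solving it by hand, then compares against scipy.linalg.expm.

    # Sketch of Putzer's n x n formula, checked against scipy.linalg.expm.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.linalg import expm

    def putzer_expm(A, t):
        n = A.shape[0]
        lam = np.linalg.eigvalsh(A)                  # real eigenvalues of symmetric A
        P = [np.eye(n)]                              # P1 = I
        for k in range(1, n):
            P.append((A - lam[k-1] * np.eye(n)) @ P[-1])   # Pk = (A - lam_{k-1} I) P_{k-1}
        def rhs(_, r):                               # r1' = lam1 r1,  rk' = lamk rk + r_{k-1}
            dr = lam * r
            dr[1:] += r[:-1]
            return dr
        r0 = np.zeros(n); r0[0] = 1.0                # r(0) = (1, 0, ..., 0)
        r = solve_ivp(rhs, (0.0, t), r0, rtol=1e-10, atol=1e-12).y[:, -1]
        return sum(rk * Pk for rk, Pk in zip(r, P))

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    print(np.allclose(putzer_expm(A, 0.7), expm(0.7 * A)))   # expect True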

Proof: The Cayley-Hamilton formula (A − λ1 I) · · · (A − λn I) = 0 is valid for


any n × n matrix A and the n roots r = λ1 , . . . , λn of the determinant equality
det(A − rI) = 0. Two facts will be used: (1) The Cayley-Hamilton formula
implies APn = λn Pn ; (2) The definition of Pk implies λk Pk + Pk+1 = APk for
1 ≤ k ≤ n − 1. Compute as follows.

x′(t) = ( r1′(t)P1 + · · · + rn′(t)Pn ) x(0)                                              [1]

      = ( Σ_{k=1}^{n} λ_k r_k(t) P_k + Σ_{k=2}^{n} r_{k−1}(t) P_k ) x(0)                  [2]

      = ( Σ_{k=1}^{n−1} λ_k r_k(t) P_k + r_n(t) λ_n P_n + Σ_{k=1}^{n−1} r_k(t) P_{k+1} ) x(0)   [3]

      = ( Σ_{k=1}^{n−1} r_k(t) ( λ_k P_k + P_{k+1} ) + r_n(t) λ_n P_n ) x(0)              [4]

      = ( Σ_{k=1}^{n−1} r_k(t) A P_k + r_n(t) A P_n ) x(0)                                [5]

      = A ( Σ_{k=1}^{n} r_k(t) P_k ) x(0)                                                 [6]

      = A x(t).                                                                           [7]

Details: [1] Differentiate the formula for x(t). [2] Use the differential equations
for r1, . . . , rn. [3] Split off the last term from the first sum, then re-index the
last sum. [4] Combine the two sums. [5] Use the recursion for Pk and the
Cayley-Hamilton formula (A − λn I)Pn = 0. [6] Factor out A on the left.
[7] Apply the definition of x(t).
This proves that x(t) is a solution. Because Φ(t) ≡ Σ_{k=1}^{n} rk(t) Pk satisfies
Φ(0) = I, then any possible solution of x′ = Ax can be so represented. The
proof is complete.

Proofs of Matrix Exponential Properties


Verify (e^{At})′ = A e^{At}. Let x0 denote a column of the identity matrix. Define
x(t) = e^{At} x0. Then

    (e^{At} x0)′ = x′(t)
                 = A x(t)
                 = A e^{At} x0.

Because this identity holds for all columns of the identity matrix, then (e^{At})′ and
A e^{At} have identical columns, hence we have proved the identity (e^{At})′ = A e^{At}.

Verify AB = BA implies BeAt = eAt B. Define w1 (t) = eAt Bw0 and


w2 (t) = BeAt w0 . Calculate w1′ (t) = Aw1 (t) and w2′ (t) = BAeAt w0 =
ABeAt w0 = Aw2 (t), due to BA = AB. Because w1 (0) = w2 (0) = w0 , then the
uniqueness assertion of the Picard-Lindelöf theorem implies that w1 (t) = w2 (t).
Because w0 is any vector, then eAt B = BeAt . The proof is complete.

Verify eAt eBt = e(A+B)t . Let x0 be a column of the identity matrix. Define
x(t) = eAt eBt x0 and y(t) = e(A+B)t x0 . We must show that x(t) = y(t) for
all t. Define u(t) = eBt x0 . We will apply the result eAt B = BeAt , valid for
BA = AB. The details:
    x′(t) = ( e^{At} u(t) )′
          = A e^{At} u(t) + e^{At} u′(t)
          = A x(t) + e^{At} B u(t)
          = A x(t) + B e^{At} u(t)
          = (A + B) x(t).

We also know that y′ (t) = (A + B)y(t) and since x(0) = y(0) = x0 , then the
Picard-Lindelöf theorem implies that x(t) = y(t) for all t. This completes the
proof.
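
A small numerical illustration of the commutation hypothesis (an addition, not from the text):
with B a polynomial in A the product identity holds, while a matrix C that does not commute
with A breaks it. The check uses scipy.linalg.expm; the matrices are arbitrary test cases.

    # The identity e^{At} e^{Bt} = e^{(A+B)t} requires AB = BA.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 2.0], [0.0, 3.0]])
    B = A @ A                                  # commutes with A
    C = np.array([[0.0, 1.0], [0.0, 0.0]])     # does not commute with A
    t = 0.5

    print(np.allclose(expm(A*t) @ expm(B*t), expm((A+B)*t)))   # True
    print(np.allclose(expm(A*t) @ expm(C*t), expm((A+C)*t)))   # False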

Verify eAt eAs = eA(t+s) . Let t be a variable and consider s fixed. Define
x(t) = eAt eAs x0 and y(t) = eA(t+s) x0 . Then x(0) = y(0) and both satisfy the
differential equation u′ (t) = Au(t). By the uniqueness in the Picard-Lindelöf
theorem, x(t) = y(t), which implies eAt eAs = eA(t+s) . The proof is complete.

Verify e^{At} = Σ_{n=0}^{∞} A^n t^n/n!. The idea of the proof is to apply Picard iteration.
By definition, the columns of eAt are vector solutions w1 (t), . . . , wn (t) whose
values at t = 0 are the corresponding columns of the n × n identity matrix.
According to the theory of Picard iterates, a particular iterate is defined by
    y_{n+1}(t) = y0 + ∫_0^t A y_n(r) dr,      n ≥ 0.

The vector y0 equals some column of the identity matrix. The Picard iterates
can be found explicitly, as follows.
    y1(t) = y0 + ∫_0^t A y0 dr
          = ( I + At ) y0,

    y2(t) = y0 + ∫_0^t A y1(r) dr
          = y0 + ∫_0^t A ( I + Ar ) y0 dr
          = ( I + At + A^2 t^2/2 ) y0,
          ...
    yn(t) = ( I + At + A^2 t^2/2 + · · · + A^n t^n/n! ) y0.

The Picard-Lindelöf theorem implies that for y0 = column k of the identity


matrix,
lim yn (t) = wk (t).
n→∞

This being valid for each index k, then the columns of the matrix sum
    Σ_{m=0}^{N} A^m t^m/m!

converge as N → ∞ to w1 (t), . . . , wn (t). This implies the matrix identity



    e^{At} = Σ_{n=0}^{∞} A^n t^n/n!.

The proof is complete.
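
As a numerical sanity check (an addition, not from the text; the helper name series_expm is
illustrative only), the partial sums of the series can be compared against scipy.linalg.expm;
the error shrinks rapidly as the truncation order N grows.

    # Partial sums of the Picard/Taylor series versus scipy.linalg.expm.
    import numpy as np
    from scipy.linalg import expm

    def series_expm(A, t, N):
        S = np.zeros_like(A)
        term = np.eye(A.shape[0])            # n = 0 term: I
        for n in range(N + 1):
            S = S + term
            term = term @ A * t / (n + 1)    # next term: A^{n+1} t^{n+1} / (n+1)!
        return S

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    for N in (2, 5, 10, 20):
        print(N, np.max(np.abs(series_expm(A, 1.0, N) - expm(A))))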

Theorem 12 (Special Formulas for eAt )

For real or complex constants λ1, . . . , λn,

    e^{diag(λ1 ,...,λn ) t} = diag( e^{λ1 t}, . . . , e^{λn t} ).

For real a and b,

    exp( t [ a  b ; −b  a ] ) = e^{at} [ cos bt  sin bt ; −sin bt  cos bt ].

Theorem 13 (Computing eJt for J Triangular)


If J is an upper triangular matrix, then a column u(t) of eJt can be computed
by solving the system u′ (t) = Ju(t), u(0) = v, where v is the correspond-
ing column of the identity matrix. This problem can always be solved by
first-order scalar methods of growth-decay theory and the integrating factor
method.
Theorem 14 (Block Diagonal Matrix)
If A = diag(B1 , . . . , Bk ) and each of B1 , . . . , Bk is a square matrix, then
 
e^{At} = diag( e^{B1 t}, . . . , e^{Bk t} ).
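
A quick numerical check of Theorem 14 (an addition, not from the text), using
scipy.linalg.block_diag to assemble the block-diagonal matrices and scipy.linalg.expm
as the reference exponential:

    # e^{At} for block-diagonal A equals the block-diagonal of the block exponentials.
    import numpy as np
    from scipy.linalg import block_diag, expm

    B1 = np.array([[1.0, 2.0], [0.0, 1.0]])
    B2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    A = block_diag(B1, B2)
    t = 0.3
    print(np.allclose(expm(A * t), block_diag(expm(B1 * t), expm(B2 * t))))  # True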
