
Mathematical Methods

in Electrical Engineering
Lecture 𝟎𝟒 – Diagonalization

Dr. Elie Abou Diwan


Eigenvalues and Eigenvectors
DEFINITION:
Let A be an n × n matrix. A complex number λ is called an eigenvalue of A if there exists a
nonzero vector v ∈ ℂⁿ such that Av = λv.
The vector v is called an eigenvector corresponding to λ.
The eigenspace corresponding to λ is the set of all vectors satisfying Av = λv.

THEOREM:
Let A be an n × n matrix. The characteristic polynomial of A, denoted by p(λ), is the polynomial
defined by p(λ) = det(A − λIₙ).
The eigenvalues of A are the roots of its characteristic polynomial, i.e., the roots of the equation
det(A − λIₙ) = 0.
Eigenvalues and Eigenvectors (cont.)
Proof:
If λ is an eigenvalue, then there exists a vector v ≠ 0 such that Av = λv.
⇒ There exists a vector v ≠ 0 such that Av − λIₙv = 0.
⇒ There exists a vector v ≠ 0 such that (A − λIₙ)v = 0.
⇒ There exists a vector v ≠ 0 such that Bv = 0, where B = A − λIₙ (a homogeneous linear system).
If B were invertible, the only solution to the homogeneous linear system Bv = 0 would be the trivial
solution v = 0. But here v ≠ 0, so B is not invertible.
Consequently, det(B) = det(A − λIₙ) = 0.

THEOREM:
The eigenspace 𝑆λ is the set of all solutions of the system (𝐴 − λ𝐼𝑛 )𝑣 = 0.
Eigenvalues and Eigenvectors (cont.)
EXAMPLE 1:
Let A = [0 0 −1; 1 2 −2; 1 0 0] (rows separated by semicolons). Find the eigenvalues and bases
for the eigenspaces of A.

To find the eigenvalues of A, we solve the equation det(A − λI₃) = 0.

A − λI₃ = [0 0 −1; 1 2 −2; 1 0 0] − λ[1 0 0; 0 1 0; 0 0 1] = [−λ 0 −1; 1 2−λ −2; 1 0 −λ]

Expanding the determinant along the first column:
det(A − λI₃) = −λ(2 − λ)(−λ) + 1·(2 − λ) = (2 − λ)(λ² + 1)

det(A − λI₃) = 0 ⇒ (2 − λ)(λ² + 1) = 0
2 − λ = 0 ⇒ λ = 2
λ² + 1 = 0 ⇒ λ = ±i
Finally, λ₁ = 2, λ₂ = i, λ₃ = −i are the eigenvalues of A.
Eigenvalues and Eigenvectors (cont.)
Eigenvector corresponding to eigenvalue λ₁ = 2:
(A − λ₁I₃)v = 0
⇒ [−2 0 −1; 1 0 −2; 1 0 −2][x; y; z] = [0; 0; 0]
⇒ x = 0, y = y, z = 0
⇒ v = [x; y; z] = [0; y; 0] = y[0; 1; 0]
Therefore, v₁ = [0; 1; 0] is a basis for the eigenspace S₁.
Eigenvalues and Eigenvectors (cont.)
Eigenvector corresponding to eigenvalue λ₂ = i:
(A − λ₂I₃)v = 0
⇒ [−i 0 −1; 1 2−i −2; 1 0 −i][x; y; z] = [0; 0; 0]
⇒ x = iz, y = z, z = z
⇒ v = [x; y; z] = [iz; z; z] = z[i; 1; 1]
Therefore, v₂ = [i; 1; 1] is a basis for the eigenspace S₂.
Eigenvalues and Eigenvectors (cont.)
Eigenvector corresponding to eigenvalue λ₃ = −i:
(A − λ₃I₃)v = 0
⇒ [i 0 −1; 1 2+i −2; 1 0 i][x; y; z] = [0; 0; 0]
⇒ x = −iz, y = z, z = z
⇒ v = [x; y; z] = [−iz; z; z] = z[−i; 1; 1]
Therefore, v₃ = [−i; 1; 1] is a basis for the eigenspace S₃.

Remark: 𝑣3 = 𝑣2∗ since λ3 = 𝜆∗2
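The eigenpairs of Example 1 can be checked numerically. A minimal sketch with NumPy (`numpy.linalg.eig` returns unit-norm eigenvectors, so its columns differ from the hand-computed bases only by a scalar factor):

```python
import numpy as np

# Matrix from Example 1.
A = np.array([[0, 0, -1],
              [1, 2, -2],
              [1, 0, 0]], dtype=complex)

# eig returns the eigenvalues and unit-norm eigenvectors (columns of V).
eigvals, V = np.linalg.eig(A)

# The eigenvalues should be 2, i and -i (in some order).
expected = {2 + 0j, 1j, -1j}
assert all(any(np.isclose(lam, e) for e in expected) for lam in eigvals)

# Each column of V satisfies A v = lambda v.
for lam, v in zip(eigvals, V.T):
    assert np.allclose(A @ v, lam * v)
```

As a sanity check, the product of the eigenvalues equals det(A) = 2·i·(−i) = 2 and their sum equals the trace of A, which is 2.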


Eigenvalues and Eigenvectors (cont.)
EXAMPLE 2:
A is an n × n matrix whose eigenvalues are λ₁, λ₂, …, λₙ. Find the eigenvalues of the matrix αA
(α ≠ 0).

The eigenvalues of the matrix αA are the roots of the equation det(αA − λ′Iₙ) = 0.
⇒ det(α(A − (λ′/α)Iₙ)) = 0
⇒ αⁿ det(A − (λ′/α)Iₙ) = 0   because det(αB) = αⁿ det(B)
Since λ is an eigenvalue of A, det(A − λIₙ) = 0.
By identification:
λ = λ′/α ⇒ λ′ = αλ
Consequently, the eigenvalues of the matrix αA are αλ₁, αλ₂, …, αλₙ. (For α = 0, αA is the zero
matrix, whose eigenvalues are all 0 = αλᵢ, so the conclusion still holds.)
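This scaling property is easy to confirm numerically. A small sketch (the random 4 × 4 matrix and the factor α = 3 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
alpha = 3.0

# sort_complex orders by real part, then imaginary part; multiplying by a
# positive real alpha preserves that order, so the two lists line up.
lam_A = np.sort_complex(np.linalg.eigvals(A))
lam_aA = np.sort_complex(np.linalg.eigvals(alpha * A))

# Eigenvalues of alpha*A are alpha times the eigenvalues of A.
assert np.allclose(lam_aA, alpha * lam_A)
```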
Eigenvalues and Eigenvectors (cont.)
EXAMPLE 3:
Let A be an n × n matrix with eigenvalues λ₁, λ₂, …, λₙ. Show that A⁻¹ exists if and only if
λ₁, λ₂, …, λₙ ≠ 0.

Since this is an "if and only if" statement, we must prove both implications. That is:
• If A⁻¹ exists, then λ₁, λ₂, …, λₙ ≠ 0
• If λ₁, λ₂, …, λₙ ≠ 0, then A⁻¹ exists

(⇒) If A⁻¹ exists, then det(A) ≠ 0
⇒ det(A − 0Iₙ) ≠ 0
⇒ λ = 0 cannot be an eigenvalue (because if 0 were an eigenvalue, we would have
det(A − 0Iₙ) = 0). Consequently, λ₁, λ₂, …, λₙ ≠ 0.

(⇐) If λ is an eigenvalue, then det(A − λIₙ) = 0. For λ = 0, det(A − 0Iₙ) = det(A) = 0, so A is
not invertible in this case. Consequently, if λ₁, λ₂, …, λₙ ≠ 0, then A is invertible.

Remark: A is not invertible if and only if 0 is an eigenvalue of A.
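The remark can be illustrated numerically. A sketch with two hand-picked matrices (one singular, one invertible; both are arbitrary choices):

```python
import numpy as np

# A singular matrix: its second row is twice the first, so det(A) = 0
# and 0 must appear among the eigenvalues.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.isclose(np.linalg.det(A), 0)
assert any(np.isclose(lam, 0) for lam in np.linalg.eigvals(A))

# An invertible (upper triangular) matrix has no zero eigenvalue.
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])
assert not np.isclose(np.linalg.det(B), 0)
assert all(not np.isclose(lam, 0) for lam in np.linalg.eigvals(B))
```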


Similar Matrices & Diagonalization
DEFINITION:
Two n × n matrices A and B are said to be similar if there exists an invertible matrix P such that
B = P⁻¹AP. (A is similar to B ⇔ A ~ B ⇔ P⁻¹AP = B)

Remark: Two similar matrices have the same eigenvalues.

Proof:
Let A and B be two similar matrices. So there exists an invertible matrix P such that B = P⁻¹AP.
If λ is an eigenvalue of B, then det(B − λIₙ) = 0
⇒ det(P⁻¹AP − λIₙ) = 0
⇒ det(P⁻¹AP − λP⁻¹IₙP) = 0
(λP⁻¹IₙP = λP⁻¹P = λIₙ where P⁻¹P = Iₙ because P is invertible)
⇒ det(P⁻¹AP − P⁻¹(λIₙ)P) = 0
⇒ det(P⁻¹(A − λIₙ)P) = 0
Similar Matrices & Diagonalization (cont.)
⇒ det(P⁻¹) det(A − λIₙ) det(P) = 0   since det(AB) = det(A) det(B)
⇒ (1/det(P)) det(A − λIₙ) det(P) = 0   since P is invertible, det(P⁻¹) = 1/det(P)
⇒ det(A − λIₙ) = 0
⇒ λ is an eigenvalue of A.
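A numerical spot-check of this remark (the random matrices are arbitrary; a randomly drawn P is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))      # generically invertible
B = np.linalg.inv(P) @ A @ P         # B = P^-1 A P, similar to A

# Similar matrices have the same eigenvalues (up to ordering).
lam_A = np.sort_complex(np.linalg.eigvals(A))
lam_B = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(lam_A, lam_B)
```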

DEFINITION:
An n × n matrix A is said to be diagonalizable if it is similar to a diagonal matrix B. In other words,
if matrix A is similar to a diagonal matrix B, then there exists an invertible matrix P such that
P⁻¹AP = B ⇔ A = PBP⁻¹
Similar Matrices & Diagonalization (cont.)
Why is this useful?
Suppose you wanted to find A³.
If A is diagonalizable, then A³ = (PBP⁻¹)³ where B is a diagonal matrix and P is an invertible
matrix.
⇒ A³ = (PBP⁻¹)(PBP⁻¹)(PBP⁻¹) = PB(P⁻¹P)B(P⁻¹P)BP⁻¹ = PBIₙBIₙBP⁻¹ = PB³P⁻¹
In general, Aᵏ = PBᵏP⁻¹
Moreover, powers of diagonal matrices are easy to compute. For example, if B = [7 0 0; 0 −2 0; 0 0 3],
then B³ = [7³ 0 0; 0 (−2)³ 0; 0 0 3³].
This means that finding 𝐴𝑘 involves only two matrix multiplications instead of the 𝑘 matrix
multiplications that would be necessary to multiply 𝐴 by itself 𝑘 times.
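A sketch of this shortcut in NumPy (the matrix is the one from Example 1 below, whose eigenvalues are −7 and 3):

```python
import numpy as np

A = np.array([[13.0, -8.0],
              [25.0, -17.0]])          # diagonalizable

lam, P = np.linalg.eig(A)              # columns of P are eigenvectors
k = 5

# Two matrix multiplications plus a cheap elementwise diagonal power...
Ak_diag = P @ np.diag(lam**k) @ np.linalg.inv(P)

# ...agree with repeated multiplication of A by itself.
Ak_direct = np.linalg.matrix_power(A, k)
assert np.allclose(Ak_diag, Ak_direct)
```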
Similar Matrices & Diagonalization (cont.)
REMARK:
The main diagonal entries of a diagonal matrix are its eigenvalues. We can use this property to
find the eigenvalues of any matrix similar to a diagonal one, because two similar matrices have the
same eigenvalues.

THEOREM:
If 𝐴 is an 𝑛 × 𝑛 matrix and 𝐴 has 𝑛 linearly independent eigenvectors, then 𝐴 is diagonalizable
(similar to a diagonal matrix 𝐵).
Similar Matrices & Diagonalization (cont.)
Proof:
Suppose that the n × n matrix A has eigenvalues λ₁, λ₂, …, λₙ with corresponding linearly
independent eigenvectors v₁, v₂, …, vₙ.

Let B be the diagonal matrix diag(λ₁, λ₂, …, λₙ), i.e., the matrix with λ₁, λ₂, …, λₙ on its main
diagonal and zeros elsewhere.

Let P be the matrix (pᵢⱼ) whose columns are the eigenvectors: P = [v₁ v₂ v₃ … vₙ]
Similar Matrices & Diagonalization (cont.)
Matrix 𝑃 is invertible because its columns form a linearly independent set.

Furthermore:
AP = A[v₁ v₂ v₃ … vₙ] = [Av₁ Av₂ Av₃ … Avₙ]
Since Avᵢ = λᵢvᵢ:
AP = [λ₁v₁ λ₂v₂ λ₃v₃ … λₙvₙ]

On the other hand:

PB = [v₁ v₂ v₃ … vₙ] diag(λ₁, λ₂, …, λₙ) = [λ₁v₁ λ₂v₂ λ₃v₃ … λₙvₙ]
Similar Matrices & Diagonalization (cont.)
Consequently, AP = PB
⇒ APP⁻¹ = PBP⁻¹
⇒ AIₙ = PBP⁻¹ where PP⁻¹ = Iₙ because P is invertible
⇒ A = PBP⁻¹
Thus, matrix A is diagonalizable because there exists an invertible matrix P such that A = PBP⁻¹
where B is a diagonal matrix.

Remark: This proof shows us how, when an n × n matrix A has n linearly independent
eigenvectors, to find both a diagonal matrix B to which A is similar and an invertible matrix P for
which A = PBP⁻¹. We state this as a corollary.
Similar Matrices & Diagonalization (cont.)
COROLLARY:
If A is an n × n matrix and A has n linearly independent eigenvectors v₁, v₂, …, vₙ with
corresponding eigenvalues λ₁, λ₂, …, λₙ, then A = PBP⁻¹ where B is the diagonal matrix
diag(λ₁, λ₂, …, λₙ) and P is the invertible matrix (pᵢⱼ) = [v₁ v₂ v₃ … vₙ] whose columns are the
eigenvectors.
Similar Matrices & Diagonalization (cont.)
REMARKS:
REMARK 1: If B = diag(λ₁, λ₂, …, λₙ), then Bᵐ = diag(λ₁ᵐ, λ₂ᵐ, …, λₙᵐ)

REMARK 2: If A is diagonalizable, then Aᵐ = PBᵐP⁻¹


Similar Matrices & Diagonalization (cont.)
EXAMPLE 1:
Show that matrix A = [13 −8; 25 −17] is diagonalizable.

First, we find the eigenvalues of A:

det(A − λI₂) = 0 ⇒ |13−λ −8; 25 −17−λ| = 0 ⇒ (13 − λ)(−17 − λ) + 200 = 0
⇒ λ² + 4λ − 21 = 0
Δ′ = b′² − ac = 4 − 1·(−21) = 25   (reduced discriminant, with a = 1 and b′ = b/2 = 2)
⇒ λ₁ = (−b′ − √Δ′)/a = −7 and λ₂ = (−b′ + √Δ′)/a = 3

Finally, λ₁ = −7 and λ₂ = 3 are the eigenvalues of A.


Similar Matrices & Diagonalization (cont.)
Eigenvector corresponding to eigenvalue λ₁ = −7:
(A − λ₁I₂)v = 0
⇒ [20 −8; 25 −10][x; y] = [0; 0]
⇒ x = (2/5)y, y = y
⇒ v = [x; y] = [(2/5)y; y] = y[2/5; 1]

Eigenvector v₁ = [2/5; 1]
Similar Matrices & Diagonalization (cont.)
Eigenvector corresponding to eigenvalue λ₂ = 3:
(A − λ₂I₂)v = 0
⇒ [10 −8; 25 −20][x; y] = [0; 0]
⇒ x = (4/5)y, y = y
⇒ v = [x; y] = [(4/5)y; y] = y[4/5; 1]

Eigenvector v₂ = [4/5; 1]

Finally, since the vectors [2/5; 1] and [4/5; 1] are linearly independent, we conclude that A = PBP⁻¹
where B is the diagonal matrix [−7 0; 0 3] and P is the invertible matrix [2/5 4/5; 1 1].
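The conclusion of Example 1 can be verified directly with the matrices found above:

```python
import numpy as np

A = np.array([[13.0, -8.0],
              [25.0, -17.0]])
P = np.array([[2/5, 4/5],
              [1.0, 1.0]])
B = np.diag([-7.0, 3.0])

# P^-1 A P recovers the diagonal matrix of eigenvalues...
assert np.allclose(np.linalg.inv(P) @ A @ P, B)
# ...and equivalently A = P B P^-1.
assert np.allclose(P @ B @ np.linalg.inv(P), A)
```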
Similar Matrices & Diagonalization (cont.)
EXAMPLE 2:
Let A = [0 0 −1; 1 2 −2; 1 0 0]. Find Aⁿ.

STEP 1: Find the eigenvalues and the corresponding eigenvectors (this is the matrix of Example 1
in the previous section):

Eigenvalue: 2, eigenvector: [0; 1; 0]
Eigenvalue: i, eigenvector: [i; 1; 1]
Eigenvalue: −i, eigenvector: [−i; 1; 1]
Similar Matrices & Diagonalization (cont.)
STEP 2: Check if [0; 1; 0], [i; 1; 1] and [−i; 1; 1] are linearly independent:

|0 i −i; 1 1 1; 0 1 1| = −2i ≠ 0 ⇒ [0; 1; 0], [i; 1; 1] and [−i; 1; 1] are linearly independent.

STEP 3: Find the diagonal matrix B and the invertible matrix P:

Since the vectors [0; 1; 0], [i; 1; 1] and [−i; 1; 1] are linearly independent, we conclude that
A = PBP⁻¹ where B is the diagonal matrix [2 0 0; 0 i 0; 0 0 −i] and P is the invertible matrix
[0 i −i; 1 1 1; 0 1 1].
Similar Matrices & Diagonalization (cont.)
STEP 4: Find P⁻¹:

P⁻¹ = adj(P)/det(P) = Cᵗ/det(P), where C is the cofactor matrix:
C = [C₁₁ C₁₂ C₁₃; C₂₁ C₂₂ C₂₃; C₃₁ C₃₂ C₃₃] = [0 −1 1; −2i 0 0; 2i −i −i]

Cᵗ = [0 −2i 2i; −1 0 −i; 1 0 −i]

det(P) = |0 i −i; 1 1 1; 0 1 1| = −2i

P⁻¹ = [0 1 −1; −i/2 0 1/2; i/2 0 1/2]
Similar Matrices & Diagonalization (cont.)
STEP 5: Find Aⁿ:

Aⁿ = PBⁿP⁻¹ = [0 i −i; 1 1 1; 0 1 1] [2ⁿ 0 0; 0 iⁿ 0; 0 0 (−i)ⁿ] [0 1 −1; −i/2 0 1/2; i/2 0 1/2]

= [0 i·iⁿ −i·(−i)ⁿ; 2ⁿ iⁿ (−i)ⁿ; 0 iⁿ (−i)ⁿ] [0 1 −1; −i/2 0 1/2; i/2 0 1/2]

Aⁿ = [ (iⁿ + (−i)ⁿ)/2        0     (i·iⁿ − i·(−i)ⁿ)/2 ;
       (−i·iⁿ + i·(−i)ⁿ)/2   2ⁿ    −2ⁿ + (iⁿ + (−i)ⁿ)/2 ;
       (−i·iⁿ + i·(−i)ⁿ)/2   0     (iⁿ + (−i)ⁿ)/2 ]
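The closed form for Aⁿ can be checked against repeated multiplication for several values of n:

```python
import numpy as np

A = np.array([[0, 0, -1],
              [1, 2, -2],
              [1, 0, 0]], dtype=complex)

def A_power(n):
    """Closed form A^n = P B^n P^-1 derived in Step 5."""
    a = (1j**n + (-1j)**n) / 2
    b = (1j * 1j**n - 1j * (-1j)**n) / 2
    c = (-1j * 1j**n + 1j * (-1j)**n) / 2
    return np.array([[a,    0,    b],
                     [c, 2**n, -2**n + a],
                     [c,    0,    a]])

# Agrees with direct matrix powers (n = 0 gives the identity).
for n in range(6):
    assert np.allclose(A_power(n), np.linalg.matrix_power(A, n))
```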
Cayley-Hamilton Theorem
THEOREM:
Suppose A is an n × n matrix. The Cayley-Hamilton theorem states that A satisfies its own
characteristic polynomial. In other words, if p(λ) = det(A − λIₙ) is the characteristic polynomial
of A, then p(A) = 0.

EXAMPLE:
Let A = [2 5; 2 −1]. Use the Cayley-Hamilton theorem to find A⁻¹.

p(λ) = det(A − λI₂) = |2−λ 5; 2 −1−λ| = λ² − λ − 12

By the Cayley-Hamilton theorem, p(A) = 0
⇒ A² − A − 12I₂ = 0 ⇒ A² − A = 12I₂ ⇒ A(A − I₂) = 12I₂ ⇒ A·(A − I₂)/12 = I₂

⇒ A⁻¹ = (A − I₂)/12 = (1/12)[1 5; 2 −2]
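Both the theorem and the resulting inverse are easy to verify numerically for this example:

```python
import numpy as np

A = np.array([[2.0, 5.0],
              [2.0, -1.0]])
I = np.eye(2)

# A satisfies its own characteristic polynomial p(lambda) = lambda^2 - lambda - 12.
assert np.allclose(A @ A - A - 12 * I, 0)

# Hence A^-1 = (A - I)/12.
A_inv = (A - I) / 12
assert np.allclose(A @ A_inv, I)
assert np.allclose(A_inv, np.linalg.inv(A))
```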
Matrix Differential Equation
SYSTEM OF LINEAR DIFFERENTIAL EQUATIONS (LDE):
Consider the system
x′₁ = a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ
x′₂ = a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ
⋮
x′ₙ = aₙ₁x₁ + aₙ₂x₂ + ⋯ + aₙₙxₙ

Let x = [x₁; x₂; …; xₙ] and x′ = [x′₁; x′₂; …; x′ₙ]. Then the system can be written in the form
x′ = Ax, where A = (aᵢⱼ) is the n × n coefficient matrix [a₁₁ a₁₂ … a₁ₙ; a₂₁ a₂₂ … a₂ₙ; … ;
aₙ₁ aₙ₂ … aₙₙ].
Matrix Differential Equation (cont.)
THEOREM:
Let v be an eigenvector corresponding to an eigenvalue λ of matrix A. Then x(t) = e^(λt)v is a
solution to the system x′ = Ax.

Proof:
x(t) = e^(λt)v ⇒ x′(t) = λe^(λt)v
Ax = A(e^(λt)v) = e^(λt)Av = e^(λt)λv = λe^(λt)v = x′

COROLLARY:
If A has n linearly independent eigenvectors v₁, …, vₙ, then the vector functions
x₁ = e^(λ₁t)v₁, …, xₙ = e^(λₙt)vₙ are n linearly independent solutions, and the general solution to
the system is x = C₁x₁ + ⋯ + Cₙxₙ where C₁, …, Cₙ are constants of integration.
Matrix Differential Equation (cont.)
EXAMPLE:
Use the method of eigenvalues and eigenvectors to solve the system of linear differential equations
x′₁ = 6x₁ + 5x₂
x′₂ = x₁ + 2x₂

x′ = Ax ⇔ [x′₁; x′₂] = [6 5; 1 2][x₁; x₂]
Eigenvalues of A:
det(A − λI₂) = 0 ⇔ |6−λ 5; 1 2−λ| = 0 ⇒ (6 − λ)(2 − λ) − 5 = 0 ⇒ λ² − 8λ + 7 = 0
Δ′ = b′² − ac = 16 − 1·7 = 9   (with a = 1 and b′ = b/2 = −4)
⇒ λ₁ = (−b′ − √Δ′)/a = 1 and λ₂ = (−b′ + √Δ′)/a = 7
Finally, λ₁ = 1 and λ₂ = 7 are the eigenvalues of A.
Matrix Differential Equation (cont.)
Eigenvector corresponding to eigenvalue λ₁ = 1:
(A − λ₁I₂)v = 0
⇒ [5 5; 1 1][x; y] = [0; 0]
⇒ x = −y, y = y
⇒ v = [x; y] = [−y; y] = y[−1; 1]

Eigenvector v₁ = [−1; 1]
Matrix Differential Equation (cont.)
Eigenvector corresponding to eigenvalue λ₂ = 7:
(A − λ₂I₂)v = 0
⇒ [−1 5; 1 −5][x; y] = [0; 0]
⇒ x = 5y, y = y
⇒ v = [x; y] = [5y; y] = y[5; 1]

Eigenvector v₂ = [5; 1]

The general solution to the original system is x = [x₁; x₂] = C₁e^t[−1; 1] + C₂e^(7t)[5; 1]
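The general solution can be checked by comparing its exact derivative against Ax at sample times (the constants C₁ = 2, C₂ = −3 are arbitrary choices):

```python
import numpy as np

A = np.array([[6.0, 5.0],
              [1.0, 2.0]])
v1, v2 = np.array([-1.0, 1.0]), np.array([5.0, 1.0])
C1, C2 = 2.0, -3.0          # arbitrary constants of integration

# x(t) and its exact derivative from the eigenvalue solution.
def x(t):
    return C1 * np.exp(t) * v1 + C2 * np.exp(7 * t) * v2

def x_prime(t):
    return C1 * np.exp(t) * v1 + 7 * C2 * np.exp(7 * t) * v2

# x' = A x holds at every sample time.
for t in np.linspace(0.0, 1.0, 5):
    assert np.allclose(x_prime(t), A @ x(t))
```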
Matrix Differential Equation (cont.)
REMARK:
If the n × n matrix A has n linearly independent eigenvectors v₁, …, vₙ, then A is diagonalizable. In
other words, there exists an invertible matrix P such that B = P⁻¹AP, or equivalently A = PBP⁻¹,
where B is a diagonal matrix.
Since x′ = Ax, we have x′ = PBP⁻¹x
⇒ P⁻¹x′ = P⁻¹PBP⁻¹x
⇒ P⁻¹x′ = IₙBP⁻¹x = BP⁻¹x
Change of variable: let y = P⁻¹x; then y′ = P⁻¹x′.
Consequently, y′ = By. This equation can be used to find y.
Finally, the general solution of the system x′ = Ax is x = Py.
(y = P⁻¹x ⇒ Py = PP⁻¹x ⇒ Py = Iₙx ⇒ Py = x)
Matrix Differential Equation (cont.)
EXAMPLE:
Use the diagonalization technique to solve the system of linear differential equations
x′₁ = 3x₁ − 5x₂
x′₂ = x₁ − x₂

x′ = Ax ⇔ [x′₁; x′₂] = [3 −5; 1 −1][x₁; x₂]
Eigenvalues of A:
det(A − λI₂) = 0 ⇔ |3−λ −5; 1 −1−λ| = 0 ⇒ (3 − λ)(−1 − λ) + 5 = 0 ⇒ λ² − 2λ + 2 = 0
Δ′ = b′² − ac = 1 − 1·2 = −1 = i²   (with a = 1 and b′ = b/2 = −1)
⇒ λ₁ = (−b′ − √Δ′)/a = 1 − i and λ₂ = (−b′ + √Δ′)/a = 1 + i

Finally, λ₁ = 1 − i and λ₂ = 1 + i are the eigenvalues of A.


Matrix Differential Equation (cont.)
Eigenvector corresponding to eigenvalue λ₁ = 1 − i:
(A − λ₁I₂)v = 0
⇒ [2+i −5; 1 −2+i][x; y] = [0; 0]
⇒ x = (2 − i)y, y = y
⇒ v = [x; y] = [(2 − i)y; y] = y[2−i; 1]

Eigenvector v₁ = [2−i; 1]
Matrix Differential Equation (cont.)
The eigenvector corresponding to eigenvalue λ₂ = 1 + i is v₂ = [2+i; 1] because λ₁ and λ₂ are
complex conjugates.

Diagonalization of A:
P = [2+i 2−i; 1 1], B = [1+i 0; 0 1−i], and
P⁻¹ = (1/(2i))[1 −(2−i); −1 2+i] = [−i/2 1/2+i; i/2 1/2−i]

Now solve y′ = By:
[y′₁; y′₂] = [1+i 0; 0 1−i][y₁; y₂]

⇒ y′₁ = (1 + i)y₁ and y′₂ = (1 − i)y₂
⇒ y₁(t) = C₁e^((1+i)t) and y₂(t) = C₂e^((1−i)t)
Matrix Differential Equation (cont.)
Consequently:

y = [y₁; y₂] = [C₁e^((1+i)t); C₂e^((1−i)t)]

But x = Py, then:

x = [2+i 2−i; 1 1][C₁e^((1+i)t); C₂e^((1−i)t)]
  = [(2+i)C₁e^((1+i)t) + (2−i)C₂e^((1−i)t); C₁e^((1+i)t) + C₂e^((1−i)t)]

Remark: We can use the Euler identity e^(it) = cos t + i sin t to write the solution in terms of
cos t and sin t.
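A numerical check of this complex diagonalization. The choice C₂ = conj(C₁), which makes the solution real-valued, is an added illustration (not stated on the slide) of what rewriting via Euler's identity makes explicit:

```python
import numpy as np

A = np.array([[3.0, -5.0],
              [1.0, -1.0]])
P = np.array([[2 + 1j, 2 - 1j],
              [1, 1]])
B = np.diag([1 + 1j, 1 - 1j])

# The complex eigenvector matrix diagonalizes the real matrix A.
assert np.allclose(np.linalg.inv(P) @ A @ P, B)

# With conjugate constants C2 = conj(C1), x(t) = P y(t) is real-valued.
C1 = 1.0 - 2.0j
C2 = np.conj(C1)
t = 0.7
y = np.array([C1 * np.exp((1 + 1j) * t), C2 * np.exp((1 - 1j) * t)])
x = P @ y
assert np.allclose(x.imag, 0)
```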
