ChBE6500 HW3
Homework 3
Written Portion
General guidance: These problems are designed to help you learn to work with and appreciate LU decomposition and eigenvalue analysis. Several questions involve theorems and properties that you are not expected to memorize for the exams. Helpful hints are at the end of the document. If similar problems appear on the exam, you can expect to receive similar hints.
1) For a mystery coefficient matrix [A], MATLAB's "lu" command returns the following matrices, such that the linear system of the form [A][x] = [b] becomes [L][U][x] = [P][b]:

[L] = [1 0 0; 0.1429 1 0; 0.5714 0.5 1]
[U] = [7 8 10; 0 0.8571 1.5714; 0 0 -0.5]
[P] = [0 0 1; 1 0 0; 0 1 0]

Assuming [b] = [7; 8; 9], determine the solution to this linear system of equations. You should not need to calculate [A] at any point in this calculation.
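For reference, once [L], [U], and [P] are in hand, the two-triangular-solve pattern looks roughly like the sketch below (the backslash operator performs the forward and back substitution here; you could equally write those loops out yourself):

% Sketch: solving L*U*x = P*b without ever forming A.
L = [1 0 0; 0.1429 1 0; 0.5714 0.5 1];
U = [7 8 10; 0 0.8571 1.5714; 0 0 -0.5];
P = [0 0 1; 1 0 0; 0 1 0];
b = [7; 8; 9];
y = L \ (P*b);   % forward substitution: solve L*y = P*b
x = U \ y;       % back substitution: solve U*x = y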
Where A = [2 0 0; 0 1 1; 0 1 1]
3) Consider the matrix A = [1 -1 2; -2 5 0; r 3 4].
a) Find a value of r such that there are 3 distinct real eigenvalues. Hint: this is not easy to do. I suggest you plot the characteristic polynomial for different values of r and find a value of r that yields 3 real roots (see the plotting sketch below).
b) Using this value of r, find the corresponding eigenvectors.
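One hedged way to follow the hint: pick a trial value of r, plot det(A - λI) over a range of λ, and look for three real crossings. The value r = -1 below is only a placeholder to start the sweep, not the answer.

% Sketch: plot the characteristic polynomial for one trial value of r.
r = -1;                                     % placeholder trial value, not the answer
A = [1 -1 2; -2 5 0; r 3 4];
lam = linspace(-10, 10, 1000);
p = arrayfun(@(s) det(A - s*eye(3)), lam);  % det(A - lambda*I) at each lambda
plot(lam, p); yline(0); grid on
xlabel('\lambda'); ylabel('det(A - \lambda I)')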
4) If a matrix isn't symmetric and repeated eigenvalues don't give distinct eigenvectors, then the matrix is known to be defective and can't be diagonalized (or decoupled, when speaking about linear differential equations). However, it can be nearly diagonalized by putting the matrix in what's called a Jordan form, J:

J = [λ 1; 0 λ] for a 2x2 matrix with repeated eigenvalue λ

Consider the matrix A = [3 2; 0 3].
a) Find all the eigenvalues and eigenvectors.
b) State the Jordan form of A.
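A quick numerical way to see the defectiveness, sketched below: eig returns the repeated eigenvalue twice, but the eigenvector matrix it produces is rank-deficient, so there is no full set of independent eigenvectors.

% Sketch: a defective matrix yields linearly dependent eigenvectors.
A = [3 2; 0 3];
[V, D] = eig(A);
disp(diag(D))   % the eigenvalue 3, repeated
disp(rank(V))   % rank 1 < 2, so A cannot be diagonalized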
5) In class, you learned about how the matrix exponential e^(At) is directly analogous to the scalar exponential e^(at). If A doesn't have repeated eigenvalues, e^(At) can be computed by the following relationship:

e^(At) = S e^(Λt) S^(-1),

where S is the matrix whose columns are eigenvectors of A and e^(Λt) is the diagonal matrix of exponentials of each eigenvalue times t, as described by the following:

e^(Λt) = [e^(λ1 t) 0 ... 0; 0 e^(λ2 t) ... 0; ... ; 0 0 ... e^(λn t)]
Consider the matrix A = [1 2; 0 3].
a) Find the eigenvalues and eigenvectors of A.
b) Compute e^A (assume t = 1) using diagonalization and brute force (direct Taylor expansion of A without diagonalization). Determine how many terms of the Taylor expansion are necessary to achieve less than a 1% error for every matrix element. You need not perform this calculation by hand: use MATLAB and write down the results (a starting sketch follows this problem).
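As a rough starting point for part (b), both routes are sketched below under the stated assumption t = 1; the 1% stopping test and the term count are left for you to add.

% Sketch: e^A via diagonalization vs. a truncated Taylor series.
A = [1 2; 0 3];
[S, Lam] = eig(A);                        % columns of S are eigenvectors of A
E_diag = S * diag(exp(diag(Lam))) / S;    % S * e^Lambda * S^(-1)

E_taylor = eye(2);                        % Taylor series: sum of A^k / k!
term = eye(2);
for k = 1:20                              % fixed term count; replace with your 1% error test
    term = term * A / k;
    E_taylor = E_taylor + term;
end
disp(E_diag); disp(E_taylor)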
MATLAB Portion
6) I (Anant) searched the internet to find an algorithm to perform LU decomposition with partial pivoting, but was
disappointed. So I did some playing, which I would like you to replicate.
a) Write a function to perform naive Gaussian elimination in order to produce an LU decomposition for a square matrix of any size. This process can be summarized by representing each Gaussian elimination operation with an operator matrix:

[En]...[E1][A] = [U]  =>  [A] = [E1]^(-1)...[En]^(-1) [U] = [L][U],  where [L] = [E1]^(-1)...[En]^(-1)
You can calculate the matrices [Ei] by applying the appropriate Gaussian elimination step to the identity matrix, as sketched below. Use this function to output the LU decomposition for the matrix

[1 4 -2 10; 10 87 13 3; -8 2 90 2; 3 38 29 49]
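A minimal sketch of one way to structure such a function follows (the name naiveLU is my own choice, not a required interface): each [Ei] is built from the identity, and its inverse is accumulated into [L] column by column.

% Sketch: naive LU via Gaussian-elimination operator matrices.
function [L, U] = naiveLU(A)
    n = size(A, 1);
    L = eye(n);
    U = A;
    for j = 1:n-1
        E = eye(n);                           % build E_j from the identity
        E(j+1:n, j) = -U(j+1:n, j) / U(j, j); % elimination multipliers below the pivot
        U = E * U;                            % apply the elimination step
        L(j+1:n, j) = -E(j+1:n, j);           % accumulate [E1]^(-1)...[En]^(-1)
    end
end

A call like naiveLU([1 4 -2 10; 10 87 13 3; -8 2 90 2; 3 38 29 49]) would then produce the factors requested in part (a).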
b) If Gaussian elimination with partial pivoting is applied to the matrix from part (a), [1 4 -2 10; 10 87 13 3; -8 2 90 2; 3 38 29 49], then pivoting will be required for every step. In general, this process can be represented as follows:
[En][Pn]...[E1][P1][A] = [U],

where the matrices [Pi] correspond to row swaps or the identity matrix (if a row swap is not necessary). The problem with this representation is that the product [En][Pn]...[E1][P1] or its inverse is not a lower triangular matrix. I have not proven this rigorously, but it looks like one can achieve an LU decomposition of the same form as MATLAB's built-in "lu" function as follows:
i) Perform Gaussian elimination with partial pivoting. Keep track of the pivot matrices in order to define [P] = [Pn]...[P1].
ii) Interestingly, the matrix [P][A] seems to encode all the necessary row swaps even if applied before any of the Gaussian elimination steps. Thus, it is possible to compute [L] using one of the following methods:

Method 1: Naive Gaussian elimination with no pivots necessary can be performed on [P][A]. However, a different set of steps will be employed:

[Fn]...[F1][P][A] = [U]

The LU decomposition can be expressed as [P][A] = [F1]^(-1)...[Fn]^(-1) [U] = [L][U], which matches the format of MATLAB's lu function. [L] can be computed using [L] = [F1]^(-1)...[Fn]^(-1).
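A hedged sketch of Method 1 follows: one elimination pass whose only job is to discover the pivot order and accumulate [P], then a pivot-free factorization of [P][A] (reusing a naive routine like the naiveLU sketch from part (a); pivotLU is again just a suggested name).

% Sketch of Method 1: find P first, then factor P*A with no pivoting.
function [L, U, P] = pivotLU(A)
    n = size(A, 1);
    P = eye(n);
    R = A;                                  % working copy, eliminated as we go
    for j = 1:n-1
        [~, p] = max(abs(R(j:n, j)));       % partial pivot: largest magnitude in column j
        p = p + j - 1;
        R([j p], :) = R([p j], :);          % swap rows in the working copy
        P([j p], :) = P([p j], :);          % accumulate [P] = [Pn]...[P1]
        R(j+1:n, :) = R(j+1:n, :) - (R(j+1:n, j) / R(j, j)) * R(j, :);
    end
    [L, U] = naiveLU(P * A);                % naive elimination now needs no pivots
end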
7) If A is an n×n matrix, we define the trace of A as the sum of the diagonal elements: Tr(A) = a11 + a22 + ... + ann. The trace is also equal to the sum of the eigenvalues of matrix A. For the following square matrices A and B, write MATLAB code that verifies that the trace is the sum of the eigenvalues and shows that the following statements are also true. Do not use the built-in trace function in MATLAB, but you may use the eig function.
A = [1 -1 2 3 0; -1 3 -4 0 -1; 2 -4 0 5 4; 3 0 5 -1 2; 0 -1 4 -2 4]
B = [2 0 1 -1 -4; 0 1 -1 3 -2; 1 -1 0 2 1; -1 3 2 -2 4; -4 -2 1 4 7]
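A minimal sketch of the verification for one of the matrices (the same loop applies to B; eigenvalues of a real matrix come in conjugate pairs, so the imaginary parts of their sum cancel to roundoff):

% Sketch: trace as a diagonal sum (no built-in trace) vs. sum of eigenvalues.
A = [ 1 -1  2  3  0; -1  3 -4  0 -1;  2 -4  0  5  4;
      3  0  5 -1  2;  0 -1  4 -2  4];
tr = 0;
for i = 1:size(A, 1)
    tr = tr + A(i, i);             % a11 + a22 + ... + ann
end
fprintf('trace = %g, sum of eigenvalues = %g\n', tr, real(sum(eig(A))))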
8) Write MATLAB code that outputs whether a random square matrix of any size is completely diagonalizable. Remember that the following matrices are completely diagonalizable:
a) Self-adjoint (symmetric)
b) Non-self-adjoint with distinct eigenvalues
c) Non-self-adjoint with repeated eigenvalues but distinct eigenvectors
Test your code using the following matrices. Use built-in functionality to calculate eigenvectors and eigenvalues.
This is a self-adjoint matrix:

[M1] = [5.28 4.73 2.04 2.18 4.68; 4.73 5.10 2.12 2.47 4.08; 2.04 2.12 3.16 3.9 1.42; 2.18 2.47 3.9 4.9 1.35; 4.68 4.08 1.42 1.35 4.43]
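One hedged way to frame the check (isDiagonalizable is a suggested name, not a required interface): a matrix is completely diagonalizable exactly when eig returns n linearly independent eigenvectors, which can be tested through the rank of the eigenvector matrix.

% Sketch: diagonalizable  <=>  n independent eigenvectors.
function tf = isDiagonalizable(M)
    [V, ~] = eig(M);                 % built-in eigenvector computation
    tf = rank(V) == size(M, 1);      % full-rank V means a complete eigenvector set
end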
Something to think about: How did Anant generate matrices with the eigenvalues he desired? Answer: using similarity transforms.
Helpful Hints

General Rule for Eigenvalues and Eigenvectors: The number of eigenvalues must always equal the dimension of matrix A. Also, the number of eigenvectors (whether distinct or generalized) must also equal the dimension of matrix A.
Checking for Stability: If all the eigenvalues of A are negative and/or zero, the system is stable. If any eigenvalue is positive, the system is unstable no matter how many negative eigenvalues there are. If all eigenvalues are zero, the system is neutrally stable. Remember to check only the real parts of the eigenvalues; the imaginary parts play no role in stability.
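A minimal sketch of that test, for a given square matrix A:

% Sketch: stability is read off the real parts of the eigenvalues only.
lam = real(eig(A));
if any(lam > 0)
    disp('unstable')
elseif all(lam == 0)
    disp('neutrally stable')
else
    disp('stable')
end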
Diagonalization (Similarity Transform): A = QΛQ^(-1), where Q is a matrix whose columns are the orthonormal eigenvectors of A and Λ is a diagonal matrix whose diagonal elements are the eigenvalues of A. For a self-adjoint (symmetric) matrix:
1) Q^(-1) = Q^T
2) All eigenvalues are real
Decoupling: In decoupling, each differential equation depends on its own dynamics and not any other. For instance, one differential equation depends on x1 but doesn't depend on x2 or x3. To perform decoupling, first rewrite A as a similarity transformation like the one above. Then, recognize that the transformation between the coupled and decoupled differential equations can be defined by the following relationship:

x = Qx',

where x' is the decoupled form of x. By multiplying each side of the differential equation by Q^T and using the properties of the self-adjoint matrix, the equations decouple in the following form:

dx'/dt = Λx'
https://ocw.mit.edu/courses/mathematics/18-03sc-differential-equations-fall-2011/unit-iv-first-order-systems/matrix-methods-eigenvalues-and-normal-modes/MIT18_03SCF11_s33_8text.pdf
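To make the decoupling step concrete, here is a tiny sketch with a placeholder symmetric matrix (any symmetric A would do):

% Sketch: decoupling dx/dt = A*x for a self-adjoint A.
A = [2 1; 1 2];        % placeholder symmetric matrix
[Q, Lam] = eig(A);     % Q has orthonormal columns since A is symmetric
disp(Q' * A * Q)       % recovers the diagonal Lam, so dx'/dt = Lam*x' in x' = Q'*x coordinates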