MAT 461/561: 3.3 Special Matrices

James V. Lambers

February 10 & 12, 2020

Announcements
Homework 1 graded (for those who turned it in :P); see grades and comments in the Canvas gradebook
Practice Midterm Exam: on Lectures page, with solution
Two parts: Part 1 has 5 “regular” problems; Part 2 is a “Lightning Round” of conceptual questions, like the “Concept Check” (∼10 questions)
Wednesday, Feb 19: review

Special Matrices
For a general system Ax = b, with A n × n and invertible, we can ALWAYS use Gaussian elimination with pivoting (LU decomposition), followed by forward and back substitution. But is it the best way to go?

Banded Matrices
A banded matrix is a matrix whose nonzero entries all occupy a “band” around the main diagonal.
• A has upper bandwidth p if a_{ij} = 0 whenever j − i > p.
• A has lower bandwidth q if a_{ij} = 0 whenever i − j > q.
• The bandwidth of A (the number of diagonals that may contain nonzero entries) is p + q + 1.
• We say A is banded if its bandwidth is less than 2n − 1.
• If p = q = 1, then A is tridiagonal; if p = q = 2, then A is pentadiagonal.
• If q = 0 and p = 1, then A is upper bidiagonal.
What happens when we perform Gaussian elimination (no pivoting) on a banded matrix?
Example with lower bandwidth 1, upper bandwidth 2:

A = \begin{pmatrix} \times & \times & \times & 0 & 0 \\ \times & \times & \times & \times & 0 \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & \times \end{pmatrix}

Eliminate a_{21}:

A = \begin{pmatrix} \times & \times & \times & 0 & 0 \\ 0 & \times & \times & \times & 0 \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & \times \end{pmatrix}
Eliminate a_{32}:

A = \begin{pmatrix} \times & \times & \times & 0 & 0 \\ 0 & \times & \times & \times & 0 \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & \times \end{pmatrix}
Eliminate a_{43} and a_{54}:

A = \begin{pmatrix} \times & \times & \times & 0 & 0 \\ 0 & \times & \times & \times & 0 \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & \times \\ 0 & 0 & 0 & 0 & \times \end{pmatrix}

• In each column, we only need to eliminate q entries (the lower bandwidth), so in the LU decomposition, L has lower bandwidth q.

• Each row operation only affects p entries to the right of the pivot (the upper bandwidth), so U has upper bandwidth p.

• Total amount of flops: n − 1 columns, at most q entries to eliminate per column, so O(nq) row operations, each costing O(p) flops, so the total work is O(npq). A minimal sketch of this banded elimination is shown below.
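
To make the O(npq) count concrete, here is a minimal Matlab sketch of banded Gaussian elimination without pivoting (the function name banded_ge and the in-place storage of the multipliers are my own choices, not from the lecture):

function A = banded_ge(A, p, q)
% Banded Gaussian elimination, no pivoting (a sketch; assumes all
% pivots are nonzero). A is n x n with upper bandwidth p, lower bandwidth q.
n = size(A, 1);
for j = 1:n-1
    for i = j+1:min(j+q, n)         % at most q entries to eliminate per column
        A(i,j) = A(i,j)/A(j,j);     % multiplier m_ij, stored in place of a_ij
        for k = j+1:min(j+p, n)     % each row operation touches only p entries
            A(i,k) = A(i,k) - A(i,j)*A(j,k);
        end
    end
end
end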

So how much does Gaussian elimination simplify for a tridiagonal matrix (p = q = 1)?


Gaussian Elimination:

for j = 1, 2, . . . , n − 1 do
  m_{j+1,j} = a_{j+1,j}/a_{jj}
  a_{j+1,j+1} = a_{j+1,j+1} − m_{j+1,j} a_{j,j+1}
end for

Forward Substitution:

y_1 = b_1
for i = 2, 3, . . . , n do
  y_i = b_i − ℓ_{i,i−1} y_{i−1}
end for

Back Substitution:

x_n = y_n/u_{nn}
for i = n − 1, n − 2, . . . , 1 do
  x_i = (y_i − u_{i,i+1} x_{i+1})/u_{ii}
end for

Banded matrices should NOT be stored as 2-D arrays (that would waste storage on zero entries). Instead, we can store them as vectors, one per diagonal. Tridiagonal case: use ℓ_i for the subdiagonal, u_i for the superdiagonal, and d_i for the main diagonal.
Vector storage:
Gaussian Elimination:

for j = 1, 2, . . . , n − 1 do
  ℓ_j = ℓ_j/d_j
  d_{j+1} = d_{j+1} − ℓ_j u_j
end for

Forward Substitution:

y_1 = b_1
for i = 2, 3, . . . , n do
  y_i = b_i − ℓ_{i−1} y_{i−1}
end for

Back Substitution:

x_n = y_n/d_n
for i = n − 1, n − 2, . . . , 1 do
  x_i = (y_i − u_i x_{i+1})/d_i
end for
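
Putting the three stages together, here is a minimal Matlab sketch of the tridiagonal solver in vector storage (the function name tridiag_solve is my own; no pivoting or zero-pivot checks are performed):

function x = tridiag_solve(l, d, u, b)
% Solve Ax = b for tridiagonal A stored as vectors: subdiagonal l
% (length n-1), main diagonal d (length n), superdiagonal u (length n-1).
n = length(d);
for j = 1:n-1                    % Gaussian elimination
    l(j) = l(j)/d(j);
    d(j+1) = d(j+1) - l(j)*u(j);
end
y = zeros(n, 1);                 % forward substitution: Ly = b
y(1) = b(1);
for i = 2:n
    y(i) = b(i) - l(i-1)*y(i-1);
end
x = zeros(n, 1);                 % back substitution: Ux = y
x(n) = y(n)/d(n);
for i = n-1:-1:1
    x(i) = (y(i) - u(i)*x(i+1))/d(i);
end
end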

Problem: pivoting! It can destroy the banded structure, leading to what is called “fill-in” (the introduction of nonzero entries outside the band). We can try other pivoting strategies, such as requiring |m_{ij}| ≤ L for some bound L instead of |m_{ij}| ≤ 1.

Symmetric Matrices
A real n × n matrix A is symmetric if A = A^T. Example:

A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{pmatrix}

In A = LU, L is unit lower triangular and U is upper triangular. Let U = DM^T, where D is diagonal with d_{ii} = u_{ii} and M is unit lower triangular; specifically, M^T = D^{-1}U. Therefore we have

A = LDM^T,

where both L and M are unit lower triangular. Now let A be symmetric. Then

A^T = MDL^T = LDM^T = A.

Therefore

DL^T M^{-T} = M^{-1}LD.

Because (AB)^T = B^T A^T, this becomes

D(M^{-1}L)^T = M^{-1}LD.

The left side is upper triangular and the right side is lower triangular, so M^{-1}L must be diagonal, because D is. Since M^{-1}L is also unit lower triangular, M^{-1}L = I, so M = L and A = LDL^T.
To solve Ax = b with A symmetric, can compute A = LDL^T, and then solve:
1. Ly = b (forward substitution)
2. Dz = y (divisions)
3. L^T x = z (back substitution)
It’s better to use A = LDL^T than A = LU when A is symmetric because the symmetry leads to the algorithm taking only about (1/3)n^3 flops!
What about pivoting? If A is symmetric, PA is not! However, PAP^T is symmetric, and we have PAP^T = LDL^T.
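
As a sketch, this three-step solve can be carried out with Matlab’s built-in ldl, which computes the pivoted factorization P'AP = LDL' (note that Matlab’s D may contain 2 × 2 diagonal blocks rather than being strictly diagonal):

[L, D, P] = ldl(A);   % P'*A*P = L*D*L'
y = L \ (P'*b);       % forward substitution
z = D \ y;            % (block-)diagonal solve
x = P*(L' \ z);       % back substitution, then undo the permutation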

Symmetric Positive Definite Matrices


A symmetric matrix A is positive definite (or we say A is “SPD”) if

x^T A x > 0

for all x ≠ 0. Equivalently, all eigenvalues of A are strictly positive (note: the eigenvalues of any symmetric matrix are real). Note: if Ax = λx, where x ≠ 0 and λ is a scalar, then x is an eigenvector of A and λ is an eigenvalue. det(A) is the product of all eigenvalues, and

tr(A) = \sum_{i=1}^{n} a_{ii}

is the sum of all eigenvalues.
Properties of SPD matrices:
• det(A) > 0
• a_{ii} > 0
• The largest entry of A (in absolute value) must lie on the diagonal.
• A^{-1} is also SPD
How can we check whether A is SPD? We can check the principal minors of A, which are the determinants of the leading principal submatrices of A. Recall that

A_k = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{k1} & \cdots & \cdots & a_{kk} \end{pmatrix}

are the leading principal submatrices of A, for k = 1, 2, . . . , n. The principal minors of A are M_k = det(A_k). Then A is SPD if and only if M_k > 0 for k = 1, 2, . . . , n. Recall that if M_k ≠ 0 for each k, the LU decomposition exists (no pivoting needed).
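
A direct (though inefficient) way to apply this test in Matlab is to compute each principal minor; a minimal sketch:

n = size(A, 1);
is_spd = true;
for k = 1:n
    if det(A(1:k, 1:k)) <= 0   % M_k = det(A_k) must be strictly positive
        is_spd = false;
        break;
    end
end

In practice, attempting a Cholesky factorization (below) is a much cheaper test.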

If A is SPD, we can compute A = LDL^T with no need to pivot, but we can also compute the Cholesky Factorization

A = GG^T,

where G is lower triangular and has positive diagonal entries. It exists if and only if A is SPD. To solve Ax = b, perform: A = GG^T, then Gy = b (forward substitution), then G^T x = y (back substitution). Cost: about (1/3)n^3 flops.
Note that if A is SPD,

A = GG^T = LDL^T,

and therefore G = LD^{1/2}. For this reason, LDL^T is called “square-root-free Cholesky”. Note that in some texts, A = R^T R, where R = G^T is upper triangular. Matlab: use the chol function.
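
For example, a sketch of using chol to test for SPD and solve Ax = b (Matlab returns the upper triangular factor R with A = R'R; the second output p is zero exactly when A is SPD):

[R, p] = chol(A);   % A = R'*R, with R = G^T upper triangular
if p == 0           % p > 0 means A is not SPD
    y = R' \ b;     % forward substitution: Gy = b
    x = R \ y;      % back substitution: G^T x = y
end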
Example:

A = \begin{pmatrix} 9 & -3 & 3 & 9 \\ -3 & 17 & -1 & -7 \\ 3 & -1 & 17 & 15 \\ 9 & -7 & 15 & 44 \end{pmatrix} = \begin{pmatrix} g_{11} & 0 & 0 & 0 \\ g_{21} & g_{22} & 0 & 0 \\ g_{31} & g_{32} & g_{33} & 0 \\ g_{41} & g_{42} & g_{43} & g_{44} \end{pmatrix} \begin{pmatrix} g_{11} & g_{21} & g_{31} & g_{41} \\ 0 & g_{22} & g_{32} & g_{42} \\ 0 & 0 & g_{33} & g_{43} \\ 0 & 0 & 0 & g_{44} \end{pmatrix} = GG^T

Filled in:

A = \begin{pmatrix} 9 & -3 & 3 & 9 \\ -3 & 17 & -1 & -7 \\ 3 & -1 & 17 & 15 \\ 9 & -7 & 15 & 44 \end{pmatrix} = \begin{pmatrix} 3 & 0 & 0 & 0 \\ -1 & 4 & 0 & 0 \\ 1 & 0 & 4 & 0 \\ 3 & -1 & 3 & 5 \end{pmatrix} \begin{pmatrix} 3 & -1 & 1 & 3 \\ 0 & 4 & 0 & -1 \\ 0 & 0 & 4 & 3 \\ 0 & 0 & 0 & 5 \end{pmatrix} = GG^T
Then we have

a_{11} = 9 = g_{11}^2  ⟹  g_{11} = \sqrt{a_{11}} = 3
a_{21} = −3 = g_{11} g_{21}  ⟹  g_{21} = a_{21}/g_{11} = −1
a_{31} = 3 = g_{11} g_{31}  ⟹  g_{31} = a_{31}/g_{11} = 1
a_{41} = 9 = g_{11} g_{41}  ⟹  g_{41} = a_{41}/g_{11} = 3
a_{22} = 17 = g_{21}^2 + g_{22}^2  ⟹  g_{22} = \sqrt{a_{22} − g_{21}^2} = \sqrt{17 − (−1)^2} = 4
a_{32} = −1 = g_{31} g_{21} + g_{32} g_{22}  ⟹  g_{32} = (a_{32} − g_{31} g_{21})/g_{22} = (−1 − (1)(−1))/4 = 0
a_{42} = −7 = g_{41} g_{21} + g_{42} g_{22}  ⟹  g_{42} = (a_{42} − g_{41} g_{21})/g_{22} = (−7 − 3(−1))/4 = −1
a_{33} = 17 = g_{31}^2 + g_{32}^2 + g_{33}^2  ⟹  g_{33} = \sqrt{a_{33} − g_{31}^2 − g_{32}^2} = \sqrt{17 − 1^2 − 0^2} = 4
a_{43} = 15 = g_{41} g_{31} + g_{42} g_{32} + g_{43} g_{33}  ⟹  g_{43} = (a_{43} − g_{41} g_{31} − g_{42} g_{32})/g_{33} = (15 − 3(1) − 0)/4 = 3
a_{44} = 44 = g_{41}^2 + g_{42}^2 + g_{43}^2 + g_{44}^2  ⟹  g_{44} = \sqrt{a_{44} − g_{41}^2 − g_{42}^2 − g_{43}^2} = \sqrt{44 − 3^2 − (−1)^2 − 3^2} = 5

These formulas can also be obtained from the matrix product formula:

a_{ij} = \sum_{k=1}^{n} g_{ik} (G^T)_{kj} = \sum_{k=1}^{\min(i,j)} g_{ik} g_{jk}.
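
Solving the product formula for the diagonal entry g_{jj}, and then for the entries g_{ij}, i > j, one column at a time, gives the general Cholesky algorithm; a minimal Matlab sketch (the name mychol is my own; no SPD check is performed):

function G = mychol(A)
% Cholesky factorization A = G*G' for SPD A (a sketch).
n = size(A, 1);
G = zeros(n);
for j = 1:n
    % a_jj = sum_{k<j} g_jk^2 + g_jj^2
    G(j,j) = sqrt(A(j,j) - G(j,1:j-1)*G(j,1:j-1)');
    for i = j+1:n
        % a_ij = sum_{k<j} g_ik*g_jk + g_ij*g_jj
        G(i,j) = (A(i,j) - G(i,1:j-1)*G(j,1:j-1)')/G(j,j);
    end
end
end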

Homework Questions
3.2.11 and 3.2.20: when returning L and U stored together in A, you can check the result with:

>> U=triu(A)
>> L=eye(size(A))+tril(A,-1)
>> L*U-A

Instead of storing a permutation matrix P, create a vector p = 1:n and then swap elements of p whenever swapping rows of A. Then, to solve Ly = Pb, use b(p) in place of Pb.
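
A minimal sketch of this bookkeeping inside the partial-pivoting loop (variable names are my own):

p = 1:n;                           % identity permutation
for j = 1:n-1
    [~, r] = max(abs(A(j:n, j)));  % choose the pivot row
    r = r + j - 1;
    A([j r], :) = A([r j], :);     % swap rows of A
    p([j r]) = p([r j]);           % record the same swap in p
    % ... elimination step as usual ...
end
% Afterwards, b(p) plays the role of P*b when solving Ly = Pb.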
