Topic 6 - Summary

This document discusses various applications of matrices, including solving linear equations using matrix inversion and Gaussian elimination. It also covers LU factorization, least-squares estimation, eigenvalues and eigenvectors, diagonalization of symmetric matrices, and singular value decomposition. Each method provides a systematic approach to handle different types of matrix-related problems in linear algebra.

Topic 6 - Matrix applications

Definitions and some properties of matrices were introduced last year. This
topic covers some further applications of matrices.

Matrix solution of linear equations


If we have a system of n linear equations in n unknowns, and the coefficient
matrix is invertible, then the unique solution can be determined using matrices.
For example, given the three equations:

\[
\begin{aligned}
a_{11}x + a_{12}y + a_{13}z &= b_1 \\
a_{21}x + a_{22}y + a_{23}z &= b_2 \\
a_{31}x + a_{32}y + a_{33}z &= b_3
\end{aligned}
\]

This can be written as


Ax = b
where
\[
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}, \qquad
x = \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \qquad
b = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}.
\]
The solution x can be found by inverting A

x = A−1 b.
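
As a minimal sketch (not part of the original notes, with made-up numbers), the following NumPy snippet solves a 3 × 3 system both by explicit inversion, x = A^{-1} b, and with the library solver np.linalg.solve, which is preferred in practice because it avoids forming the inverse explicitly.

```python
import numpy as np

# Coefficient matrix A and right-hand side b (made-up example values)
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0]])
b = np.array([3.0, 13.0, 4.0])

# Solution via explicit inversion, x = A^{-1} b
x_inv = np.linalg.inv(A) @ b

# Preferred in practice: solve the system directly without forming A^{-1}
x_solve = np.linalg.solve(A, b)

print(np.allclose(x_inv, x_solve))   # the two approaches agree
print(np.allclose(A @ x_solve, b))   # check that Ax = b is satisfied
```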

Gaussian elimination
For larger matrices, inversion is difficult to do without a computer. Gaussian
elimination solves the system Ax = b by reducing it to a far simpler system
by manipulating the equations, i.e. the rows of the matrix system. Consider
\[
\begin{pmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p \end{pmatrix}
\begin{pmatrix} w \\ x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} q \\ r \\ s \\ t \end{pmatrix}
\]

If we can eliminate, i.e. make equal to zero, the lower-triangular elements, then
this system is trivial to solve. We systematically eliminate the elements e, i, j,
m, n and o. To eliminate e, subtract the first row multiplied by e/a from the
second row to get
    
\[
\begin{pmatrix} a & b & c & d \\ 0 & f - be/a & g - ce/a & h - de/a \\ i & j & k & l \\ m & n & o & p \end{pmatrix}
\begin{pmatrix} w \\ x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} q \\ r - qe/a \\ s \\ t \end{pmatrix}.
\]
Note that the row operation changes only the coefficients and the right-hand
side; the vector of unknowns is unchanged.
Do the same process, sequentially, to all other lower-triangular elements to leave
the system as
\[
\begin{pmatrix} A & B & C & D \\ 0 & F & G & H \\ 0 & 0 & K & L \\ 0 & 0 & 0 & P \end{pmatrix}
\begin{pmatrix} W \\ X \\ Y \\ Z \end{pmatrix}
=
\begin{pmatrix} Q \\ R \\ S \\ T \end{pmatrix}.
\]
Working from row 4 back up to row 1, i.e. by back-substitution, the values of
W, X, Y and Z can be easily calculated.
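
As an illustrative sketch under the assumption that no pivoting is needed (i.e. no zero appears on the diagonal during elimination), the NumPy function below, here called gauss_solve, mirrors the procedure above: forward elimination of the lower-triangular elements followed by back-substitution.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination (no pivoting) and back-substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: zero the lower-triangular elements column by column
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]       # e.g. e/a in the first step above
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # Back-substitution: work from the last row up to the first
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [1.0, 3.0, 2.0], [1.0, 0.0, 1.0]])
b = np.array([3.0, 13.0, 4.0])
print(gauss_solve(A, b), np.linalg.solve(A, b))  # the two results agree
```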

LU factorisation
Square matrices can often be factorised into the product of a lower and an upper
triangular matrix, which is useful for solving systems of linear equations. If we
factorise A = LU then the system of equations Ax = b becomes LUx = b,
which may be solved in two stages. First solve Ly = b for y, then solve Ux = y
for x. The LU factorisation can be found by setting the diagonal elements of
either L or U to one and solving algebraically.
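
A hedged sketch (not from the notes) of the Doolittle variant, in which the diagonal elements of L are set to one; the helper lu_doolittle is a name introduced here for illustration, and no pivoting is performed.

```python
import numpy as np

def lu_doolittle(A):
    """Factorise A = LU with ones on the diagonal of L (Doolittle form, no pivoting)."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):                      # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):                  # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[2.0, 1.0, -1.0], [1.0, 3.0, 2.0], [1.0, 0.0, 1.0]])
b = np.array([3.0, 13.0, 4.0])

L, U = lu_doolittle(A)
y = np.linalg.solve(L, b)   # stage 1: solve Ly = b
x = np.linalg.solve(U, y)   # stage 2: solve Ux = y
print(np.allclose(L @ U, A), np.allclose(A @ x, b))
```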

Least-squares estimation
Least-squares estimation is a useful technique for finding an approximate solution
to an overdetermined system of equations, i.e. a system in which there are more
equations than unknowns. Suppose we have a system

Ax = b̃

where A is an M × N matrix, x is an N × 1 vector and b̃ is an M × 1 vector,
with M > N. The least-squares estimate of x is

x̃ = (A^T A)^{-1} A^T b̃
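
A minimal sketch with made-up data (not from the notes), comparing the normal-equations formula above with NumPy's built-in least-squares routine np.linalg.lstsq:

```python
import numpy as np

# Overdetermined system: M = 5 equations, N = 2 unknowns (made-up data)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

# Normal equations: x = (A^T A)^{-1} A^T b
x_normal = np.linalg.inv(A.T @ A) @ A.T @ b

# Library routine (numerically more robust than forming A^T A explicitly)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_normal, x_lstsq)   # the two estimates agree
```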

Eigenvectors and Eigenvalues


An eigenvalue λ and an eigenvector v of a square matrix A are the non-trivial,
i.e. v ≠ 0, solutions of the equation

Av = λv

A non-trivial solution for v exists only when the matrix A − λI is singular, so
the eigenvalues λ are found by solving the characteristic equation

|A − λI| = 0

Once we know the eigenvalues λ, we solve the equation Av = λv for v to find
the corresponding eigenvectors. Since the solution v is not unique (any scalar
multiple of an eigenvector is also an eigenvector), we can simply set one
component of v to 1 and solve for the other components.
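
A short illustrative sketch (not part of the notes) using np.linalg.eig to compute eigenvalues and eigenvectors and to check that Av = λv holds for each pair; the example matrix is made up.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigenvalues and eigenvectors; the eigenvectors are the columns of V
eigvals, V = np.linalg.eig(A)

for lam, v in zip(eigvals, V.T):
    print(lam, v, np.allclose(A @ v, lam * v))   # check Av = λv for each pair
```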

Eigenvalues and eigenvectors of symmetric matrices
The eigenvalues and eigenvectors of symmetric matrices have some additional
properties:

1. All the eigenvalues are real.

2. Where the eigenvalues are different, the corresponding eigenvectors are orthogonal.

3. For an eigenvalue that is repeated twice, its possible eigenvectors are found
within a plane.

4. For a symmetric matrix A, the matrices A^T A, A A^T and A^k all have
the same eigenvectors as A. The eigenvalues of A^T A and A A^T are the
squares of the eigenvalues of A, while the eigenvalues of A^k are those of
A raised to the power k.
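
An illustrative check of properties 1 and 2 (not from the notes), using np.linalg.eigh, which is specialised for symmetric matrices and returns real eigenvalues and orthonormal eigenvectors; the matrix S below is a made-up example.

```python
import numpy as np

# A made-up symmetric matrix
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigvals, V = np.linalg.eigh(S)   # eigh assumes a symmetric (Hermitian) input

print(eigvals)                            # property 1: all eigenvalues are real
print(np.allclose(V.T @ V, np.eye(3)))    # property 2: eigenvectors are mutually orthogonal
```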

Diagonalisation of symmetric matrices


A symmetric matrix A may be written as

A = P D P^{-1}

where P is a matrix whose columns are eigenvectors of A and D is a diagonal
matrix of the corresponding eigenvalues. Note that the order in which the
eigenvectors are placed in P must match the order in which the corresponding
eigenvalues are placed in D. The process is reversible, and thus D = P^{-1} A P,
which is a useful result for solving systems of equations. If we normalise the
eigenvectors, then the diagonalisation simplifies to A = P D P^T, since
P^{-1} = P^T in this case.
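
A brief sketch (illustrative only, with a made-up matrix) verifying A = P D P^T for a symmetric matrix, using the orthonormal eigenvectors returned by np.linalg.eigh:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric example

eigvals, P = np.linalg.eigh(A)       # columns of P are orthonormal eigenvectors
D = np.diag(eigvals)

print(np.allclose(A, P @ D @ P.T))   # A = P D P^T
print(np.allclose(D, P.T @ A @ P))   # D = P^{-1} A P, with P^{-1} = P^T here
```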

Singular value decomposition


The singular value decomposition (SVD) can be used to factorise any matrix.
An M × N matrix A can be factorised into

A = U S V^T

where U is an M × M orthogonal matrix, V is an N × N orthogonal matrix
and S is an M × N diagonal matrix whose non-zero entries are the singular
values of A.
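
A final sketch (not from the notes) computing the SVD with NumPy; note that np.linalg.svd returns the singular values as a 1-D array and the matrix V^T rather than V, so S has to be assembled into the M × N diagonal form by hand. The example matrix is made up.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])            # M = 3, N = 2 (made-up values)

U, s, Vt = np.linalg.svd(A)           # U is 3x3, s holds the singular values, Vt is 2x2

S = np.zeros(A.shape)                 # assemble the 3x2 "diagonal" matrix S
S[:len(s), :len(s)] = np.diag(s)

print(np.allclose(A, U @ S @ Vt))          # A = U S V^T
print(np.allclose(U.T @ U, np.eye(3)))     # U is orthogonal
```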
