Matrix Decomposition - Wikipedia
Example
For instance, when solving a system of linear equations Ax = b, the matrix A can be
decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower
triangular matrix L and an upper triangular matrix U. The systems Ly = b and Ux = y
require fewer additions and multiplications to solve, compared with the original
system Ax = b, though one might require significantly more digits in inexact arithmetic such
as floating point.
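As a minimal illustration of this workflow (not part of the article, assuming SciPy is available): the matrix is factorized once, and each right-hand side is then handled by two cheap triangular solves.

```python
# Sketch: solving Ax = b through an LU factorization with SciPy.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

lu, piv = lu_factor(A)      # one-time O(n^3) factorization (with pivoting)
x = lu_solve((lu, piv), b)  # O(n^2) forward/back substitution Ly = Pb, Ux = y

assert np.allclose(A @ x, b)
```

Reusing the stored factors for many right-hand sides is where the savings over repeated Gaussian elimination come from.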
LU decomposition
Traditionally applicable to: square matrix A, although rectangular matrices can also be
factorized.[1][nb 1]
Decomposition: A = LU, where L is lower triangular and U is upper triangular.
Related: the LDU decomposition is A = LDU, where L is lower triangular with ones on the
diagonal, U is upper triangular with ones on the diagonal, and D is a diagonal matrix.
Comments: The LUP and LU decompositions are useful in solving an n-by-n system of linear
equations Ax = b. These decompositions summarize the process of Gaussian elimination in
matrix form. Matrix P represents any row interchanges carried out in the process of Gaussian
elimination. If Gaussian elimination produces the row echelon form without requiring any row
interchanges, then P = I, so an LU decomposition exists.
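A small sketch (assuming SciPy) of inspecting the explicit P, L, U factors, using a matrix whose first pivot is zero so a row interchange is forced:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 1.0],
              [2.0, 1.0]])   # zero pivot in position (0,0) forces a row swap

P, L, U = lu(A)              # factorization A = P @ L @ U
assert np.allclose(P @ L @ U, A)
assert np.allclose(np.tril(L), L)   # L is lower triangular (unit diagonal)
assert np.allclose(np.triu(U), U)   # U is upper triangular
assert not np.allclose(P, np.eye(2))  # P != I: an interchange was needed
```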
LU reduction
Block LU decomposition
Rank factorization
Applicable to: m-by-n matrix A of rank r
Decomposition: A = CF, where C is an m-by-r full column rank matrix and F is an r-by-n full
row rank matrix
Comment: The rank factorization can be used to compute the Moore–Penrose pseudoinverse
of A,[2] which one can apply to obtain all solutions of the linear system Ax = b.
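The pseudoinverse formula from [2], A+ = F*(FF*)^(-1)(C*C)^(-1)C*, can be checked numerically. The matrices C and F below are a hypothetical example chosen for illustration:

```python
import numpy as np

# Build a rank-2 matrix A (3x3) directly from its rank factorization A = CF.
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # 3x2, full column rank
F = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])     # 2x3, full row rank
A = C @ F

# Moore-Penrose pseudoinverse from the factors (real case):
# A+ = F^T (F F^T)^{-1} (C^T C)^{-1} C^T
A_pinv = F.T @ np.linalg.inv(F @ F.T) @ np.linalg.inv(C.T @ C) @ C.T

assert np.allclose(A_pinv, np.linalg.pinv(A))
```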
Cholesky decomposition
Applicable to: square, Hermitian, positive definite matrix A
Decomposition: A = LL*, where L is lower triangular with real positive diagonal entries
Comment: if the matrix is Hermitian and positive semi-definite, then it has a decomposition
of the form A = LL* if the diagonal entries of L are allowed to be zero
Comment: An alternative is the LDL decomposition, which can avoid extracting square roots.
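A short sketch (assuming SciPy) contrasting the two: `cholesky` extracts square roots for the diagonal of L, while `ldl` keeps them inside the diagonal factor D.

```python
import numpy as np
from scipy.linalg import cholesky, ldl

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])          # symmetric positive definite

L = cholesky(A, lower=True)         # A = L L^T; diagonal of L needs square roots
assert np.allclose(L @ L.T, A)

Lu, D, perm = ldl(A)                # A = L D L^T; square-root-free variant
assert np.allclose(Lu @ D @ Lu.T, A)
```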
QR decomposition
Applicable to: m-by-n matrix A with linearly independent columns
Decomposition: A = QR, where Q is an orthogonal (or unitary) matrix and R is upper triangular
Uniqueness: In general it is not unique, but if A is of full rank, then there exists a single R that
has all positive diagonal elements. If A is square, Q is also unique.
Comment: The QR decomposition provides an effective way to solve the system of equations
Ax = b. The fact that Q is orthogonal means that Q^T Q = I, so that Ax = b is equivalent
to Rx = Q^T b, which is very easy to solve since R is triangular.
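The two-step solve just described can be sketched with NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

Q, R = np.linalg.qr(A)              # A = QR, Q orthogonal, R upper triangular
x = solve_triangular(R, Q.T @ b)    # Rx = Q^T b by back-substitution

assert np.allclose(A @ x, b)
```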
RRQR factorization
Interpolative decomposition
Eigendecomposition
Applicable to: square matrix A with linearly independent eigenvectors (not necessarily distinct
eigenvalues).
Decomposition: A = VDV^(-1), where D is a diagonal matrix formed from the eigenvalues of A,
and the columns of V are the corresponding eigenvectors of A.
Existence: An n-by-n matrix A always has n (complex) eigenvalues, which can be ordered (in
more than one way) to form an n-by-n diagonal matrix D and a corresponding matrix of
nonzero columns V that satisfies the eigenvalue equation AV = VD. V is invertible if and
only if the n eigenvectors are linearly independent (i.e., each eigenvalue has geometric
multiplicity equal to its algebraic multiplicity). A sufficient (but not necessary) condition for
this to happen is that all the eigenvalues are different (in this case geometric and algebraic
multiplicity are both equal to 1).
Comment: One can always normalize the eigenvectors to have length one (see the definition of
the eigenvalue equation)
Comment: For any real symmetric matrix A, the eigendecomposition always exists and can be
written as A = VDV^T, where both D and V are real-valued and V is orthogonal.
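For the real symmetric case, a minimal NumPy sketch of A = VDV^T:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # real symmetric

w, V = np.linalg.eigh(A)             # eigenvalues w, orthonormal eigenvectors in V

assert np.allclose(V @ np.diag(w) @ V.T, A)   # A = V D V^T
assert np.allclose(V.T @ V, np.eye(2))        # V is orthogonal, so V^{-1} = V^T
```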
Jordan decomposition
Comment: the Jordan normal form generalizes the eigendecomposition to cases where there
are repeated eigenvalues and A cannot be diagonalized; the Jordan–Chevalley decomposition
does this without choosing a basis.
Schur decomposition
Decomposition: A = UTU*, where U is a unitary matrix, U* is its conjugate transpose, and T is
an upper triangular matrix called the (complex) Schur form.
Comment: if A is a normal matrix, then T is diagonal and the Schur decomposition coincides
with the spectral decomposition.
Real Schur decomposition
Decomposition: This is a version of Schur decomposition where V and S only contain real
numbers. One can always write A = VSV^T, where V is a real orthogonal matrix, V^T is the
transpose of V, and S is a block upper triangular matrix called the real Schur form. The blocks
on the diagonal of S are of size 1×1 (in which case they represent real eigenvalues) or 2×2 (in
which case they are derived from complex conjugate eigenvalue pairs).
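A sketch of the real Schur form with SciPy; the example matrix is arbitrary (random), and the quasi-triangular structure means only the first subdiagonal may be nonzero:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))     # arbitrary real square matrix

S, V = schur(A, output='real')      # A = V S V^T, V orthogonal, S real Schur form

assert np.allclose(V @ S @ V.T, A)
assert np.allclose(V.T @ V, np.eye(4))
assert np.allclose(np.tril(S, -2), 0)  # everything below the first subdiagonal is zero
```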
QZ decomposition
Applicable to: square matrices A and B
Comment: there are two versions of this decomposition: complex and real.
Decomposition (complex version): A = QSZ* and B = QTZ*, where Q and Z are unitary, and S
and T are upper triangular.
Comment: in the complex QZ decomposition, the ratios of the diagonal elements of S to the
corresponding diagonal elements of T, λ_i = S_ii / T_ii, are the generalized eigenvalues that
solve the generalized eigenvalue problem Av = λBv (where λ is an unknown scalar and v is
an unknown nonzero vector).
Takagi's factorization
Applicable to: square, complex, symmetric matrix A.
Decomposition: A = VDV^T, where V is unitary and D is a real nonnegative diagonal matrix.
Comment: The diagonal elements of D are the nonnegative square roots of the eigenvalues of
AA*.
Comment: This is not a special case of the eigendecomposition (see above), which uses V^(-1)
instead of V^T. Moreover, if A is not real, it is not Hermitian and the form using V* also does
not apply.
Singular value decomposition
Decomposition: A = UDV*, where D is a nonnegative diagonal matrix, and U and V are unitary
matrices.
Comment: Like the eigendecomposition above, the singular value decomposition involves
finding basis directions along which matrix multiplication is equivalent to scalar multiplication,
but it has greater generality since the matrix under consideration need not be square.
Uniqueness: the singular values of A are always uniquely determined; U and V need not be
unique in general.
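A minimal NumPy sketch with a non-square matrix, showing that the singular values come out uniquely determined even though the sign of one column was flipped:

```python
import numpy as np

A = np.array([[1.0,  0.0],
              [0.0, -2.0],
              [0.0,  0.0]])          # 3x2: the matrix need not be square

U, s, Vt = np.linalg.svd(A)
assert np.allclose(s, [2.0, 1.0])    # singular values: nonnegative, sorted, unique

Sigma = np.zeros_like(A)             # rebuild the 3x2 diagonal factor
np.fill_diagonal(Sigma, s)
assert np.allclose(U @ Sigma @ Vt, A)
```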
Scale-invariant decompositions
Refers to variants of existing matrix decompositions, such as the SVD, that are invariant with
respect to diagonal scaling.
Comment: Is analogous to the SVD except that the diagonal elements of S are invariant with
respect to left and/or right multiplication of A by arbitrary nonsingular diagonal matrices, as
opposed to the standard SVD for which the singular values are invariant with respect to left
and/or right multiplication of A by arbitrary unitary matrices.
Comment: Is an alternative to the standard SVD when invariance is required with respect to
diagonal rather than unitary transformations of A.
Uniqueness: The scale-invariant singular values of A (given by the diagonal elements of S) are
always uniquely determined. Diagonal matrices D and E, and unitary U and V, are not
necessarily unique in general.
Comment: U and V matrices are not the same as those from the SVD.
Other decompositions
Polar decomposition
Applicable to: any square complex matrix A.
Decomposition: A = UP, where U is unitary and P is a positive semidefinite Hermitian matrix.
Uniqueness: P is always unique and equal to (A*A)^(1/2) (which is always Hermitian and positive
semidefinite). If A is invertible, then U is unique.
Comment: Since any Hermitian matrix admits a spectral decomposition with a unitary matrix,
P can be written as P = VDV*. Since P is positive semidefinite, all elements in D are non-
negative. Since the product of two unitary matrices is unitary, taking W = UV one can write
A = U(VDV*) = WDV*, which is the singular value decomposition. Hence, the
existence of the polar decomposition is equivalent to the existence of the singular value
decomposition.
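A sketch (assuming SciPy) checking the polar factors and the identity P = (A*A)^(1/2) for a small real matrix:

```python
import numpy as np
from scipy.linalg import polar, sqrtm

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

U, P = polar(A)                         # right polar decomposition A = U P

assert np.allclose(U @ P, A)
assert np.allclose(U.T @ U, np.eye(2))  # U is orthogonal (real unitary)
assert np.allclose(P, P.T)              # P is symmetric (Hermitian)
assert np.allclose(P, sqrtm(A.T @ A))   # P = (A* A)^{1/2}
```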
Mostow's decomposition
Sectoral decomposition
Applicable to: square, complex matrix A with numerical range contained in the sector
S_α = {re^(iθ) ∈ C : r > 0, |θ| ≤ α < π/2}.
Matrix square root
Decomposition: A = BB, not unique in general.
In the case of positive semidefinite A, there is a unique positive semidefinite B such that
A = B*B = BB.
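For a positive semidefinite matrix, the unique positive semidefinite square root can be sketched with SciPy:

```python
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[4.0, 0.0],
              [0.0, 9.0]])             # positive semidefinite

B = sqrtm(A)                           # the unique PSD square root

assert np.allclose(B @ B, A)
assert np.allclose(B, [[2.0, 0.0], [0.0, 3.0]])
```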
Generalizations
There exist analogues of the SVD, QR, LU and Cholesky factorizations for quasimatrices and
cmatrices or continuous matrices.[13] A ‘quasimatrix’ is, like a matrix, a rectangular scheme
whose elements are indexed, but one discrete index is replaced by a continuous index. Likewise,
a ‘cmatrix’ is continuous in both indices. As an example of a cmatrix, one can think of the kernel
of an integral operator.
These factorizations are based on early work by Fredholm (1903), Hilbert (1904) and Schmidt
(1907). For an account, and a translation to English of the seminal papers, see Stewart (2011).
See also
Matrix splitting
References
Notes
1. If a non-square matrix is used, however, then the matrix U will have the same rectangular shape as
the original matrix A; in that case U is more properly called the 'row echelon form' of A. Other than
this, there are no differences in LU factorization for square and non-square matrices.
Citations
2. Piziak, R.; Odell, P. L. (1 June 1999). "Full Rank Factorization of Matrices". Mathematics Magazine. 72
(3): 193. doi:10.2307/2690882 (https://doi.org/10.2307%2F2690882) . JSTOR 2690882 (https://ww
w.jstor.org/stable/2690882) .
3. Uhlmann, J.K. (2018), "A Generalized Matrix Inverse that is Consistent with Respect to Diagonal
Transformations", SIAM Journal on Matrix Analysis and Applications, 39 (2): 781–800,
doi:10.1137/17M113890X (https://doi.org/10.1137%2F17M113890X)
4. Uhlmann, J.K. (2018), "A Rank-Preserving Generalized Matrix Inverse for Consistency with Respect to
Similarity", IEEE Control Systems Letters, arXiv:1804.07334 (https://arxiv.org/abs/1804.07334) ,
doi:10.1109/LCSYS.2018.2854240 (https://doi.org/10.1109%2FLCSYS.2018.2854240) , ISSN 2475-
1456 (https://www.worldcat.org/issn/2475-1456)
8. Mostow, G. D. (1955), Some new decomposition theorems for semi-simple groups (https://archive.org/d
etails/liealgebrasandli029541mbp) , Mem. Amer. Math. Soc., vol. 14, American Mathematical Society,
pp. 31–54
9. Nielsen, Frank; Bhatia, Rajendra (2012). Matrix Information Geometry. Springer. p. 224. arXiv:1007.4402
(https://arxiv.org/abs/1007.4402) . doi:10.1007/978-3-642-30232-9 (https://doi.org/10.1007%2F978-
3-642-30232-9) . ISBN 9783642302329.
10. Zhang, Fuzhen (30 June 2014). "A matrix decomposition and its applications" (https://zenodo.org/reco
rd/851661/files/article.pdf) (PDF). Linear and Multilinear Algebra. 63 (10): 2033–2042.
doi:10.1080/03081087.2014.933219 (https://doi.org/10.1080%2F03081087.2014.933219) .
11. Drury, S.W. (November 2013). "Fischer determinantal inequalities and Highamʼs Conjecture" (https://do
i.org/10.1016%2Fj.laa.2013.08.031) . Linear Algebra and Its Applications. 439 (10): 3129–3133.
doi:10.1016/j.laa.2013.08.031 (https://doi.org/10.1016%2Fj.laa.2013.08.031) .
12. Idel, Martin; Soto Gaona, Sebastián; Wolf, Michael M. (2017-07-15). "Perturbation bounds for
Williamson's symplectic normal form". Linear Algebra and Its Applications. 525: 45–58.
arXiv:1609.01338 (https://arxiv.org/abs/1609.01338) . doi:10.1016/j.laa.2017.03.013 (https://doi.or
g/10.1016%2Fj.laa.2017.03.013) .
Bibliography
Choudhury, Dipa; Horn, Roger A. (April 1987). "A Complex Orthogonal-Symmetric Analog of the
Polar Decomposition". SIAM Journal on Algebraic and Discrete Methods. 8 (2): 219–225.
doi:10.1137/0608019 (https://doi.org/10.1137%2F0608019) .
Fredholm, I. (1903), "Sur une classe d'équations fonctionnelles", Acta Mathematica (in French),
27: 365–390, doi:10.1007/bf02421317 (https://doi.org/10.1007%2Fbf02421317)
Horn, Roger A.; Merino, Dennis I. (January 1995). "Contragredient equivalence: A canonical
form and some applications" (https://doi.org/10.1016%2F0024-3795%2893%2900056-6) .
Linear Algebra and Its Applications. 214: 43–92. doi:10.1016/0024-3795(93)00056-6 (https://d
oi.org/10.1016%2F0024-3795%2893%2900056-6) .
Jun, Lu (2021), Numerical matrix decomposition and its modern applications: A rigorous first
course (https://arxiv.org/abs/2107.02579) , arXiv:2107.02579 (https://arxiv.org/abs/2107.0
2579) , retrieved 2021-11-17
External links
GraphLab (https://web.archive.org/web/20110314171151/http://www.graphlab.ml.cmu.edu/p
mf.html) GraphLab collaborative filtering library, large scale parallel implementation of matrix
decomposition methods (in C++) for multicore.