Diagram summarizing relationships between matrix classes and common matrix factorizations
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.
For example, when solving a system of linear equations Ax = b, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The systems L(Ux) = b and Ux = L^(-1)b require fewer additions and multiplications to solve, compared with the original system Ax = b, though one might require significantly more digits in inexact arithmetic such as floating point.
Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upper triangular matrix. The system Q(Rx) = b is solved by Rx = Q^T b = c, and the system Rx = c is solved by back substitution. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition is numerically stable.
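As a concrete illustration of the two solution paths sketched above, the following minimal example (assuming NumPy and SciPy are available; the matrix and right-hand side are arbitrary) solves Ax = b once via an LU factorization and once via a QR factorization, and checks both against a direct solve.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, qr, solve_triangular

# Arbitrary example system Ax = b (any nonsingular A would do).
A = np.array([[4.0, 3.0, 2.0],
              [2.0, 1.0, 3.0],
              [3.0, 2.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

# LU path: factor once, then solve two triangular systems.
lu_piv = lu_factor(A)                  # compact PA = LU factorization
x_lu = lu_solve(lu_piv, b)

# QR path: A = QR, so Rx = Q^T b is solved by back substitution.
Q, R = qr(A)
x_qr = solve_triangular(R, Q.T @ b)

x_ref = np.linalg.solve(A, b)
assert np.allclose(x_lu, x_ref) and np.allclose(x_qr, x_ref)
```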
Decompositions related to solving systems of linear equations
Existence: An LUP decomposition exists for any square matrix A. When P is an identity matrix, the LUP decomposition reduces to the LU decomposition.
Comments: The LUP and LU decompositions are useful in solving an n-by-n system of linear equations. These decompositions summarize the process of Gaussian elimination in matrix form. Matrix P represents any row interchanges carried out in the process of Gaussian elimination. If Gaussian elimination produces the row echelon form without requiring any row interchanges, then P = I, so an LU decomposition exists.
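A minimal sketch of the LUP factorization itself, assuming SciPy; the example matrix is arbitrary and chosen so that a row interchange is actually required:

```python
import numpy as np
from scipy.linalg import lu

# A zero in the top-left position forces a row interchange (P != I).
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [3.0, 0.0, 2.0]])

P, L, U = lu(A)                      # SciPy returns A = P @ L @ U
assert np.allclose(A, P @ L @ U)
assert np.allclose(L, np.tril(L))    # L is unit lower triangular
assert np.allclose(U, np.triu(U))    # U is upper triangular
```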
Decomposition: A = C*C, where C is upper triangular with real positive diagonal entries
Comment: if the matrix A is Hermitian and positive semi-definite, then it has a decomposition of the form A = C*C if the diagonal entries of C are allowed to be zero
Uniqueness: for positive definite matrices the Cholesky decomposition is unique. However, it is not unique in the positive semi-definite case.
Comment: if A is real and symmetric, C has all real elements
Comment: An alternative is the LDL decomposition, which can avoid extracting square roots.
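A short sketch, assuming SciPy and an arbitrarily constructed symmetric positive definite matrix, computing both the Cholesky factor and the square-root-free LDL factorization mentioned above:

```python
import numpy as np
from scipy.linalg import cholesky, ldl

# Arbitrary symmetric positive definite matrix (B B^T + 4I is SPD).
B = np.random.default_rng(0).standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)

C = cholesky(A)                  # upper triangular, A = C^T C (real case)
assert np.allclose(A, C.T @ C)

L, D, perm = ldl(A)              # A = L D L^T, no square roots needed
assert np.allclose(A, L @ D @ L.T)
```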
Uniqueness: In general it is not unique, but if A is of full rank, then there exists a single R that has all positive diagonal elements. If A is square, Q is also unique.
Comment: The QR decomposition provides an effective way to solve the system of equations Ax = b. The fact that Q is orthogonal means that Q^T Q = I, so that Ax = b is equivalent to Rx = Q^T b, which is very easy to solve since R is triangular.
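To make concrete why a triangular system is easy to solve, here is a minimal back-substitution loop in plain NumPy (the example matrix and right-hand side are arbitrary; R and c are obtained from a QR factorization as described above):

```python
import numpy as np

def back_substitution(R, c):
    """Solve Rx = c for an upper triangular, nonsingular R."""
    n = R.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                          # last unknown first
        x[i] = (c[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x

# Arbitrary example: factor A, then solve Ax = b via Rx = Q^T b.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
Q, R = np.linalg.qr(A)
x = back_substitution(R, Q.T @ b)
assert np.allclose(A @ x, b)
```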
Existence: An n-by-n matrix A always has n (complex) eigenvalues, which can be ordered (in more than one way) to form an n-by-n diagonal matrix D and a corresponding matrix of nonzero columns V that satisfies the eigenvalue equation AV = VD. V is invertible if and only if the n eigenvectors are linearly independent (that is, each eigenvalue has geometric multiplicity equal to its algebraic multiplicity). A sufficient (but not necessary) condition for this to happen is that all the eigenvalues are different (in this case geometric and algebraic multiplicity are equal to 1)
Comment: One can always normalize the eigenvectors to have length one (see the definition of the eigenvalue equation)
Comment: Every normal matrix A (that is, a matrix for which AA* = A*A, where A* is the conjugate transpose of A) can be eigendecomposed. For a normal matrix A (and only for a normal matrix), the eigenvectors can also be made orthonormal (VV* = I) and the eigendecomposition reads as A = VDV*. In particular all unitary, Hermitian, or skew-Hermitian (in the real-valued case, all orthogonal, symmetric, or skew-symmetric, respectively) matrices are normal and therefore possess this property.
Comment: For any real symmetric matrix A, the eigendecomposition always exists and can be written as A = VDV^T, where both D and V are real-valued.
Comment: The eigendecomposition is useful for understanding the solution of a system of linear ordinary differential equations or linear difference equations. For example, the difference equation x_(t+1) = Ax_t starting from the initial condition x_0 = c is solved by x_t = A^t c, which is equivalent to x_t = VD^tV^(-1)c, where V and D are the matrices formed from the eigenvectors and eigenvalues of A. Since D is diagonal, raising it to power t just involves raising each element on the diagonal to the power t. This is much easier to do and understand than raising A to power t, since A is usually not diagonal.
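A small sketch of this idea, assuming NumPy and an arbitrary diagonalizable example matrix: A is raised to the power t by raising only the diagonal entries of D.

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])      # arbitrary diagonalizable matrix
c = np.array([1.0, 0.0])        # initial condition x_0
t = 20

# Eigendecomposition A = V D V^(-1)
evals, V = np.linalg.eig(A)
D_t = np.diag(evals ** t)       # raise each eigenvalue to the power t

x_t = V @ D_t @ np.linalg.inv(V) @ c
assert np.allclose(x_t, np.linalg.matrix_power(A, t) @ c)
```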
Comment: the Jordan normal form generalizes the eigendecomposition to cases where there are repeated eigenvalues and A cannot be diagonalized; the Jordan–Chevalley decomposition does this without choosing a basis.
Decomposition: This is a version of the Schur decomposition where V and S only contain real numbers. One can always write A = VSV^T, where V is a real orthogonal matrix, V^T is the transpose of V, and S is a block upper triangular matrix called the real Schur form. The blocks on the diagonal of S are of size 1×1 (in which case they represent real eigenvalues) or 2×2 (in which case they are derived from complex conjugate eigenvalue pairs).
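A minimal sketch, assuming SciPy, computing the real Schur form of an arbitrary matrix whose complex conjugate eigenvalue pair shows up as a 2×2 diagonal block:

```python
import numpy as np
from scipy.linalg import schur

# Rotation-like matrix: two of its eigenvalues form a complex conjugate pair.
A = np.array([[0.0, -1.0, 0.5],
              [1.0,  0.0, 0.2],
              [0.0,  0.0, 2.0]])

S, V = schur(A, output='real')      # A = V S V^T, with V real orthogonal
assert np.allclose(A, V @ S @ V.T)
assert np.allclose(V @ V.T, np.eye(3))
# S is block upper triangular: a 2x2 block for the pair ±i, a 1x1 block for 2.0
```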
Comment: in the complex QZ decomposition, the ratios of the diagonal elements of S to the corresponding diagonal elements of T, λ_i = S_ii/T_ii, are the generalized eigenvalues that solve the generalized eigenvalue problem Av = λBv (where λ is an unknown scalar and v is an unknown nonzero vector).
Decomposition (real version): A = QSZ^T and B = QTZ^T, where A, B, Q, Z, S, and T are matrices containing real numbers only. In this case Q and Z are orthogonal matrices, the T superscript represents transposition, and S and T are block upper triangular matrices. The blocks on the diagonal of S and T are of size 1×1 or 2×2.
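A short sketch, assuming SciPy and arbitrary example matrices, computing a complex QZ decomposition and reading the generalized eigenvalues off the diagonals of S and T:

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Complex QZ: A = Q S Z*, B = Q T Z*, with S and T upper triangular.
S, T, Q, Z = qz(A, B, output='complex')
assert np.allclose(A, Q @ S @ Z.conj().T)
assert np.allclose(B, Q @ T @ Z.conj().T)

# Generalized eigenvalues of Av = lambda*Bv are the diagonal ratios S_ii/T_ii.
lam = np.diag(S) / np.diag(T)
for l in lam:                        # each lambda makes A - lambda*B singular
    assert abs(np.linalg.det(A - l * B)) < 1e-8
```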
Comment: The diagonal elements of D are the nonnegative square roots of the eigenvalues of AA*.
Comment: V may be complex even if A is real.
Comment: This is not a special case of the eigendecomposition (see above), which uses V^(-1) instead of V^T. Moreover, if A is not real, it is not Hermitian and the form using V* also does not apply.
Decomposition: A = UDV*, where D is a nonnegative diagonal matrix, and U and V satisfy U*U = I, V*V = I. Here V* is the conjugate transpose of V (or simply the transpose, if V contains real numbers only), and I denotes the identity matrix (of some dimension).
Comment: The diagonal elements of D are called the singular values of A.
Comment: Like the eigendecomposition above, the singular value decomposition involves finding basis directions along which matrix multiplication is equivalent to scalar multiplication, but it has greater generality since the matrix under consideration need not be square.
Uniqueness: the singular values of A are always uniquely determined. U and V need not be unique in general.
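A brief sketch with NumPy and an arbitrary rectangular example, illustrating the decomposition and the fact that the singular values are the nonnegative square roots of the eigenvalues of A*A:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))                    # need not be square

U, s, Vh = np.linalg.svd(A, full_matrices=False)   # A = U diag(s) V*
assert np.allclose(A, U @ np.diag(s) @ Vh)
assert np.allclose(U.T @ U, np.eye(3))             # orthonormal columns
assert np.allclose(Vh @ Vh.T, np.eye(3))

# Singular values = nonnegative square roots of the eigenvalues of A* A.
eigs = np.linalg.eigvalsh(A.T @ A)
assert np.allclose(np.sort(s), np.sqrt(np.sort(eigs)))
```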
Refers to variants of existing matrix decompositions, such as the SVD, that are invariant with respect to diagonal scaling.
Applicable to: m-by-n matrix A.
Unit-Scale-Invariant Singular-Value Decomposition: A = DUSV*E, where S is a unique nonnegative diagonal matrix of scale-invariant singular values, U and V are unitary matrices, V* is the conjugate transpose of V, and D and E are positive diagonal matrices.
Comment: Is analogous to the SVD except that the diagonal elements of S are invariant with respect to left and/or right multiplication of A by arbitrary nonsingular diagonal matrices, as opposed to the standard SVD for which the singular values are invariant with respect to left and/or right multiplication of A by arbitrary unitary matrices.
Comment: Is an alternative to the standard SVD when invariance is required with respect to diagonal rather than unitary transformations of A.
Uniqueness: The scale-invariant singular values of A (given by the diagonal elements of S) are always uniquely determined. The diagonal matrices D and E, and the unitary U and V, are not necessarily unique in general.
Comment: U and V matrices are not the same as those from the SVD.
Analogous scale-invariant decompositions can be derived from other matrix decompositions; for example, to obtain scale-invariant eigenvalues.[3][4]
Uniqueness: P is always unique and equal to (A*A)^(1/2) (which is always Hermitian and positive semidefinite). If A is invertible, then U is unique.
Comment: Since any Hermitian matrix admits a spectral decomposition with a unitary matrix, P can be written as P = VDV*. Since P is positive semidefinite, all elements in D are non-negative. Since the product of two unitary matrices is unitary, taking W = UV one can write A = U(VDV*) = WDV*, which is the singular value decomposition. Hence, the existence of the polar decomposition is equivalent to the existence of the singular value decomposition.
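A short sketch, assuming SciPy and an arbitrary example matrix, building the polar factors directly from an SVD as described above and comparing them with scipy.linalg.polar:

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))

# Build the (right) polar decomposition A = U P from the SVD A = W D V*.
W, d, Vh = np.linalg.svd(A)
U = W @ Vh                          # unitary (orthogonal here) factor W V*
P = Vh.T @ np.diag(d) @ Vh          # Hermitian positive semidefinite V D V*
assert np.allclose(A, U @ P)
assert np.allclose(U @ U.T, np.eye(3))

# SciPy's polar() yields the same factors (up to numerical precision).
U2, P2 = polar(A, side='right')
assert np.allclose(U, U2) and np.allclose(P, P2)
```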
There exist analogues of the SVD, QR, LU and Cholesky factorizations for quasimatrices and cmatrices or continuous matrices.[13] A "quasimatrix" is, like a matrix, a rectangular scheme whose elements are indexed, but one discrete index is replaced by a continuous index. Likewise, a "cmatrix" is continuous in both indices. As an example of a cmatrix, one can think of the kernel of an integral operator.
^ If a non-square matrix is used, however, then the matrix U will have the same rectangular shape as the original matrix A. Calling the matrix U upper triangular would therefore be incorrect; the correct statement is that U is the 'row echelon form' of A. Other than this, there are no differences in LU factorization for square and non-square matrices.
^ Lay, David C. (2016). Linear algebra and its applications. Steven R. Lay, Judith McDonald (Fifth Global ed.). Harlow. p. 142. ISBN 978-1-292-09223-2. OCLC 920463015.
^ Piziak, R.; Odell, P. L. (1 June 1999). "Full Rank Factorization of Matrices". Mathematics Magazine. 72 (3): 193. doi:10.2307/2690882. JSTOR 2690882.
^ Uhlmann, J. K. (2018). "A Generalized Matrix Inverse that is Consistent with Respect to Diagonal Transformations". SIAM Journal on Matrix Analysis and Applications. 39 (2): 781–800. doi:10.1137/17M113890X.
^ Idel, Martin; Soto Gaona, Sebastián; Wolf, Michael M. (2017-07-15). "Perturbation bounds for Williamson's symplectic normal form". Linear Algebra and Its Applications. 525: 45–58. arXiv:1609.01338. doi:10.1016/j.laa.2017.03.013. S2CID 119578994.
Choudhury, Dipa; Horn, Roger A. (April 1987). "A Complex Orthogonal-Symmetric Analog of the Polar Decomposition". SIAM Journal on Algebraic and Discrete Methods. 8 (2): 219–225. doi:10.1137/0608019.