If a real n×n matrix A is interpreted as a linear transformation of n-dimensional space R^n, the polar decomposition A = UP separates it into a rotation or reflection U of R^n and a scaling P of the space along a set of n orthogonal axes.
The polar decomposition of a square matrix A always exists. If A is invertible, the decomposition is unique, and the factor P will be positive-definite. In that case, A can be written uniquely in the form A = U e^{X}, where U is unitary and X is the unique self-adjoint logarithm of the matrix P.[2] This decomposition is useful in computing the fundamental group of (matrix) Lie groups.[3]
The polar decomposition can also be defined as A = P′U, where P′ = UPU^{-1} = (AA*)^{1/2} is a positive-semidefinite Hermitian matrix with the same eigenvalues as P but, in general, different eigenvectors.
The definition may be extended to rectangular matrices by requiring U to be a semi-unitary matrix and P to be a positive-semidefinite Hermitian matrix. The decomposition always exists, and P is always unique. The matrix U is unique if and only if A has full rank.[4]
A real square m×m matrix A can be interpreted as the linear transformation of R^m that takes a column vector x to Ax. Then, in the polar decomposition A = RP, the factor R is an m×m real orthogonal matrix. The polar decomposition can then be seen as expressing the linear transformation defined by A as a scaling of the space R^m along each eigenvector e_i of P by a scale factor σ_i (the action of P), followed by a rotation of R^m (the action of R).
Alternatively, the decomposition A = P′R expresses the transformation defined by A as a rotation (R) followed by a scaling (P′) along certain orthogonal directions. The scale factors are the same, but the directions are different.
The polar decomposition of the complex conjugate of A is given by A̅ = U̅ P̅. Note that det A = det U · det P = e^{iθ} r gives the corresponding polar decomposition of the determinant of A, since det U = e^{iθ} and det P = r = |det A|. In particular, if A has determinant 1, then both U and P have determinant 1.
The positive-semidefinite matrix P is always unique, even if A is singular, and is denoted as P = (A*A)^{1/2}, where A* denotes the conjugate transpose of A. The uniqueness of P ensures that this expression is well-defined. The uniqueness is guaranteed by the fact that A*A is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root.[5] If A is invertible, then P is positive-definite, thus also invertible, and the matrix U is uniquely determined by U = AP^{-1}.
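As a concrete illustration of these formulas, the following is a minimal NumPy/SciPy sketch (the example matrix and function name are ours, not from the text) that forms P = (A*A)^{1/2} with a matrix square root and then recovers U = AP^{-1}; it assumes A is square, invertible, and reasonably well-conditioned, since P is inverted explicitly.

```python
import numpy as np
from scipy.linalg import sqrtm

def polar_right(A):
    """Right polar decomposition A = U P of an invertible square matrix A."""
    P = sqrtm(A.conj().T @ A)      # P = (A* A)^{1/2}, Hermitian positive-definite
    U = A @ np.linalg.inv(P)       # U = A P^{-1}, unitary when A is invertible
    return U, P

A = np.array([[1.0, 2.0], [0.0, 3.0]])
U, P = polar_right(A)
print(np.allclose(U @ P, A))                    # True: A = U P
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U is unitary
```

For singular or badly conditioned matrices, the SVD-based construction described next is the more robust route.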
In terms of the singular value decomposition (SVD) of A, A = WΣV*, one has P = VΣV* and U = WV*, where U, V, and W are unitary matrices (orthogonal if the field is the reals) and Σ is a diagonal matrix with non-negative entries. This confirms that P is positive-semidefinite and U is unitary. Thus, the existence of the SVD is equivalent to the existence of the polar decomposition.
One can also decompose A in the form A = P′U. Here U is the same as before, and P′ is given by P′ = UPU^{-1} = (AA*)^{1/2} = WΣW*. This is known as the left polar decomposition, whereas the previous decomposition is known as the right polar decomposition. The left polar decomposition is also known as the reverse polar decomposition.
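The same relations give a short, SVD-based sketch (again with our own function name and example matrix): np.linalg.svd supplies W, Σ, and V*, from which the unitary factor and both Hermitian factors follow directly.

```python
import numpy as np

def polar_via_svd(A):
    """Right (A = U P) and left (A = P' U) polar factors from the SVD A = W Sigma V*."""
    W, s, Vh = np.linalg.svd(A)          # Vh is V*
    U = W @ Vh                           # unitary factor, shared by both forms
    P = Vh.conj().T @ np.diag(s) @ Vh    # P  = V Sigma V*
    Pp = W @ np.diag(s) @ W.conj().T     # P' = W Sigma W*
    return U, P, Pp

A = np.array([[0.0, -1.0], [2.0, 0.0]])
U, P, Pp = polar_via_svd(A)
print(np.allclose(U @ P, A), np.allclose(Pp @ U, A))   # True True
```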
The polar decomposition of a square invertible real matrix A is of the form A = |A|R, where |A| = (AA^T)^{1/2} is a positive-definite matrix and R = |A|^{-1}A is an orthogonal matrix.
If A is normal, then it is unitarily equivalent to a diagonal matrix: A = VΛV* for some unitary matrix V and some diagonal matrix Λ. This makes the derivation of its polar decomposition particularly straightforward, as we can then write

A = V Φ_Λ |Λ| V* = (V Φ_Λ V*)(V |Λ| V*),

where |Λ| is the matrix of the absolute values of the diagonal entries of Λ, and Φ_Λ is a diagonal matrix containing the phases of the elements of Λ; that is, (Φ_Λ)_{ii} = Λ_{ii}/|Λ_{ii}| when Λ_{ii} ≠ 0, and (Φ_Λ)_{ii} = 0 when Λ_{ii} = 0.

The polar decomposition is thus A = UP, with U = V Φ_Λ V* and P = V |Λ| V* diagonal in the eigenbasis of A and having eigenvalues equal to the phases and absolute values of those of A, respectively.
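For a normal matrix this recipe needs only one eigendecomposition. The sketch below (the example matrix is our own) builds U from the phases and P from the absolute values of the eigenvalues, exactly as described above.

```python
import numpy as np

# A normal example matrix (A A* = A* A) with eigendecomposition A = V Lambda V*.
A = np.array([[0.0, -2.0], [2.0, 0.0]])
lam, V = np.linalg.eig(A)                 # eigenvalues are +-2i; V is unitary here
Phi = np.diag(lam / np.abs(lam))          # Phi_Lambda: phases (all eigenvalues nonzero)
absL = np.diag(np.abs(lam))               # |Lambda|: absolute values
U = V @ Phi @ V.conj().T
P = V @ absL @ V.conj().T
print(np.allclose(U @ P, A))                     # True: A = U P
print(np.allclose(U @ U.conj().T, np.eye(2)))    # True: U is unitary
```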
From the singular-value decomposition, it can be shown that a matrix A is invertible if and only if A*A (equivalently, AA*) is. Moreover, this is true if and only if the eigenvalues of A*A are all nonzero.[6]
In this case, the polar decomposition is directly obtained by writing

A = A(A*A)^{-1/2}(A*A)^{1/2},

and observing that A(A*A)^{-1/2} is unitary. To see this, we can exploit the spectral decomposition A*A = VDV* to write

A(A*A)^{-1/2} = A V D^{-1/2} V*.

In this expression, V* is unitary because V is. To show that A V D^{-1/2} is also unitary, we can use the SVD to write A = W D^{1/2} V*, so that

A V D^{-1/2} = W D^{1/2} V* V D^{-1/2} = W,

where again W is unitary by construction.

Yet another way to directly show the unitarity of A(A*A)^{-1/2} is to note that, writing the SVD of A in terms of rank-1 matrices as A = Σ_k s_k w_k v_k*, where s_k are the (strictly positive) singular values of A, we have

A(A*A)^{-1/2} = (Σ_j s_j w_j v_j*)(Σ_k s_k^{-1} v_k v_k*) = Σ_k w_k v_k*,

which directly implies the unitarity of A(A*A)^{-1/2}, because a matrix is unitary if and only if its singular values all have absolute value 1.
Note how, from the above construction, it follows that the unitary matrix U in the polar decomposition of an invertible matrix is uniquely defined.
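These identities are easy to check numerically. The short sketch below (using a random invertible matrix of our own, not one from the text) confirms that A(A*A)^{-1/2} coincides with the unitary factor WV* read off from the SVD.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # invertible (generically)
U_direct = A @ np.linalg.inv(sqrtm(A.conj().T @ A))   # A (A*A)^{-1/2}
W, s, Vh = np.linalg.svd(A)
print(np.allclose(U_direct, W @ Vh))                  # True: same unique unitary factor
```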
The SVD of a square matrix A reads A = W D^{1/2} V*, with W and V unitary matrices and D^{1/2} a diagonal, positive semi-definite matrix. By simply inserting an additional pair of W's or V's, we obtain the two forms of the polar decomposition of A:

A = W D^{1/2} V* = (W D^{1/2} W*)(W V*) = (W V*)(V D^{1/2} V*),

that is, A = P′U = UP with U = WV*, P = V D^{1/2} V*, and P′ = W D^{1/2} W*. More generally, if A is an n×m rectangular matrix, its SVD can be written as A = W D^{1/2} V*, where now W and V are isometries with dimensions n×r and m×r, respectively, with r ≡ rank(A), and D^{1/2} is again a diagonal positive semi-definite square matrix with dimensions r×r. We can now apply the same reasoning used in the above equation to write A = P′U = UP, but now U ≡ WV* is not in general unitary. Nonetheless, U has the same support and range as A, and it satisfies U*U = VV* and UU* = WW*. This makes U into an isometry when its action is restricted onto the support of A; that is, U is a partial isometry.
As an explicit example of this more general case, consider the SVD of the following matrix (rows separated by semicolons):

A ≡ [1, 1; 2, −2; 0, 0] = W D^{1/2} V*,  with  W = [0, 1; 1, 0; 0, 0],  D^{1/2} = [2√2, 0; 0, √2],  V = (1/√2) [1, 1; −1, 1].

We then have

W V* = (1/√2) [1, 1; 1, −1; 0, 0],

which is an isometry, but not unitary. On the other hand, if we consider the decomposition of

A ≡ [1, 0, 0; 0, 2, 0],

we find

W V* = [1, 0, 0; 0, 1, 0],

which is a partial isometry (but not an isometry).
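Both cases can be verified directly with NumPy; the sketch below reuses the matrices of the example and checks the isometry and partial-isometry properties of U = WV* computed from the reduced SVD.

```python
import numpy as np

# First example: U = W V* is an isometry (U*U = I) but not unitary (not square).
A = np.array([[1.0, 1.0], [2.0, -2.0], [0.0, 0.0]])
W, s, Vh = np.linalg.svd(A, full_matrices=False)   # reduced SVD: W is 3x2
U = W @ Vh
print(np.allclose(U.conj().T @ U, np.eye(2)))      # True: isometry

# Second example: U is a partial isometry (U U* U = U) but not an isometry.
B = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
W, s, Vh = np.linalg.svd(B, full_matrices=False)
U = W @ Vh
print(np.allclose(U @ U.conj().T @ U, U))          # True: partial isometry
print(np.allclose(U.conj().T @ U, np.eye(3)))      # False: not an isometry
```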
The polar decomposition for matrices generalizes as follows: if A is a bounded linear operator, then there is a unique factorization of A as a product A = UP, where U is a partial isometry, P is a non-negative self-adjoint operator, and the initial space of U is the closure of the range of P.
The operator U must be weakened to a partial isometry, rather than unitary, because of the following issue. If A is the one-sided shift on ℓ²(N), then |A| = (A*A)^{1/2} = I. So if A = U|A|, U must be A, which is not unitary.
The existence of a polar decomposition is a consequence of Douglas' lemma:
Lemma. If A, B are bounded operators on a Hilbert space H, and A*A ≤ B*B, then there exists a contraction C such that A = CB. Furthermore, C is unique if ker(B*) ⊂ ker(C).
The operator C can be defined by C(Bh) := Ah for all h in H, extended by continuity to the closure of ran(B), and by zero on its orthogonal complement, so that C is defined on all of H. The lemma then follows since A*A ≤ B*B implies ker(B) ⊂ ker(A).
In particular, if A*A = B*B, then C is a partial isometry, which is unique if ker(B*) ⊂ ker(C). In general, for any bounded operator A,

A*A = (A*A)^{1/2} (A*A)^{1/2},

where (A*A)^{1/2} is the unique positive square root of A*A given by the usual functional calculus. So by the lemma, we have

A = U (A*A)^{1/2}

for some partial isometry U, which is unique if ker(A) ⊂ ker(U). Take P to be (A*A)^{1/2} and one obtains the polar decomposition A = UP. Notice that an analogous argument can be used to show A = P′U′, where P′ is positive and U′ a partial isometry.
When H is finite-dimensional, U can be extended to a unitary operator; this is not true in general (see the example above). Alternatively, the polar decomposition can be shown using the operator version of the singular value decomposition.
If A is a closed, densely defined unbounded operator between complex Hilbert spaces, then it still has a (unique) polar decomposition A = U|A|, where |A| is a (possibly unbounded) non-negative self-adjoint operator with the same domain as A, and U is a partial isometry vanishing on the orthogonal complement of the range ran(|A|).
The proof uses the same lemma as above, which goes through for unbounded operators in general. If dom(A*A) = dom(B*B) and A*Ah = B*Bh for all h ∈ dom(A*A), then there exists a partial isometry U such that A = UB. U is unique if ran(B)^⊥ ⊂ ker(U). The operator A being closed and densely defined ensures that the operator A*A is self-adjoint (with dense domain) and therefore allows one to define (A*A)^{1/2}. Applying the lemma gives the polar decomposition.
If an unbounded operator A is affiliated to a von Neumann algebra M, and A = UP is its polar decomposition, then U is in M and so is the spectral projection of P, 1_B(P), for any Borel set B in [0, ∞).
The polar decomposition of quaternions H with orthonormal basis quaternions 1, i, j, k depends on the unit 2-dimensional sphere of square roots of minus one, known as right versors. Given any r on this sphere and an angle −π < a ≤ π, the versor e^{ar} = cos a + r sin a is on the unit 3-sphere of H. For a = 0 and a = π, the versor is 1 or −1, regardless of which r is selected. The norm t of a quaternion q is the Euclidean distance from the origin to q. When a quaternion is not just a real number, then there is a unique polar decomposition

q = t e^{ar}.

Here r, a, t are all uniquely determined such that r is a right versor (r² = −1), a satisfies 0 < a < π, and t > 0.
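A small Python sketch of this decomposition (the function name and the 4-tuple representation are our own) takes a non-real quaternion q = w + xi + yj + zk and returns the norm t, the angle a, and the right versor r with q = t(cos a + r sin a).

```python
import math

def quaternion_polar(w, x, y, z):
    """Polar form q = t (cos a + r sin a) of a non-real quaternion q = w + xi + yj + zk."""
    t = math.sqrt(w*w + x*x + y*y + z*z)   # norm: Euclidean distance from the origin
    v = math.sqrt(x*x + y*y + z*z)         # length of the pure (imaginary) part
    if v == 0:
        raise ValueError("q is real: r is not determined")
    a = math.atan2(v, w)                   # angle a with 0 < a < pi
    r = (x / v, y / v, z / v)              # right versor, given by its (i, j, k) components
    return t, a, r

# Example: q = 1 + i + j + k gives t = 2, a = pi/3, r = (i + j + k)/sqrt(3).
print(quaternion_polar(1.0, 1.0, 1.0, 1.0))
```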
In the Cartesian plane, alternative planar ring decompositions arise as follows:
If x ≠ 0, then z = x(1 + ε(y/x)) is a polar decomposition of a dual number z = x + yε, where ε² = 0; i.e., ε is nilpotent. In this polar decomposition, the unit circle has been replaced by the line x = 1, the polar angle by the slope y/x, and the radius x is negative in the left half-plane.
If x² ≠ y², then the unit hyperbola x² − y² = 1 and its conjugate x² − y² = −1 can be used to form a polar decomposition based on the branch of the unit hyperbola through (1, 0). This branch is parametrized by the hyperbolic angle a and is written

e^{aj} = cosh a + j sinh a,

where j² = +1 and the arithmetic[7] of split-complex numbers is used. The branch through (−1, 0) is traced by −e^{aj}. Since the operation of multiplying by j reflects a point across the line y = x, the conjugate hyperbola has branches traced by je^{aj} or −je^{aj}. Therefore a point in one of the quadrants has a polar decomposition in one of the forms

z = ρe^{aj},  z = −ρe^{aj},  z = ρje^{aj},  z = −ρje^{aj},  with ρ = √|x² − y²| > 0.

The set {1, −1, j, −j} has products that make it isomorphic to the Klein four-group. Evidently polar decomposition in this case involves an element from that group.
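For the branch through (1, 0), the decomposition can be sketched in a few lines of Python (the function and its restriction to the quadrant x > |y| are our own simplification); points in the other quadrants pick up the extra factor −1, j, or −j noted above.

```python
import math

def split_complex_polar(x, y):
    """Polar form z = rho (cosh a + j sinh a) of z = x + yj with j^2 = +1, valid for x > |y|."""
    if x <= abs(y):
        raise ValueError("this sketch only handles the quadrant x > |y|")
    rho = math.sqrt(x*x - y*y)
    a = math.atanh(y / x)                  # hyperbolic angle
    return rho, a

rho, a = split_complex_polar(5.0, 3.0)
print(rho * math.cosh(a), rho * math.sinh(a))   # recovers (5.0, 3.0)
```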
Polar decomposition of an element of the algebra M(2, R) of 2 × 2 real matrices uses these alternative planar decompositions, since any planar subalgebra is isomorphic to dual numbers, split-complex numbers, or ordinary complex numbers.
Numerical determination of the matrix polar decomposition
To compute an approximation of the polar decomposition A = UP, usually the unitary factor U is approximated.[8][9] The iteration is based on Heron's method for the square root of 1 and computes, starting from U_0 = A, the sequence

U_{k+1} = (U_k + (U_k*)^{-1}) / 2,   k = 0, 1, 2, …
The combination of inversion and Hermitian conjugation is chosen so that, in the singular value decomposition, the unitary factors remain the same and the iteration reduces to Heron's method on the singular values.
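A minimal NumPy sketch of this basic iteration (our own function name and stopping rule, without the refinements described below) assumes A is square and invertible:

```python
import numpy as np

def polar_unitary_newton(A, tol=1e-12, max_iter=100):
    """Approximate the unitary factor of A by U_{k+1} = (U_k + (U_k*)^{-1}) / 2, U_0 = A."""
    U = np.asarray(A, dtype=complex)
    for _ in range(max_iter):
        U_next = 0.5 * (U + np.linalg.inv(U).conj().T)
        if np.linalg.norm(U_next - U) <= tol * np.linalg.norm(U_next):
            U = U_next
            break
        U = U_next
    P = U.conj().T @ A            # Hermitian factor recovered from P = U* A
    return U, P
```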
This basic iteration may be refined to speed up the process:
In every step, or at regular intervals, the range of the singular values of U_k is estimated, and the matrix is then rescaled to γU_k to center the singular values around 1. The scaling factor γ is computed using matrix norms of the matrix and its inverse. Examples of such scale estimates are:
γ = ( ||U_k^{-1}||_1 ||U_k^{-1}||_∞ / ( ||U_k||_1 ||U_k||_∞ ) )^{1/4}, using the row-sum and column-sum matrix norms, or

γ = ( ||U_k^{-1}||_F / ||U_k||_F )^{1/2}, using the Frobenius norm.

Including the scale factor, the iteration is now

U_{k+1} = (γ_k U_k + ((γ_k U_k)*)^{-1}) / 2 = (γ_k U_k + γ_k^{-1} (U_k*)^{-1}) / 2,   k = 0, 1, 2, …
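For illustration, the first of these estimates can be folded into a single accelerated step as follows (a sketch under the same assumptions as above; np.linalg.norm with ord 1 and ord inf gives the column-sum and row-sum norms):

```python
import numpy as np

def scaled_newton_step(U):
    """One Newton step with the (1, inf)-norm scale estimate gamma applied to U first."""
    Uinv = np.linalg.inv(U)
    gamma = (np.linalg.norm(Uinv, 1) * np.linalg.norm(Uinv, np.inf)
             / (np.linalg.norm(U, 1) * np.linalg.norm(U, np.inf))) ** 0.25
    V = gamma * U                                  # rescale to center the singular values at 1
    return 0.5 * (V + np.linalg.inv(V).conj().T)   # then the usual Newton update
```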
The QR decomposition can be used in a preparation step to reduce a singular matrix A to a smaller regular matrix, and inside every step to speed up the computation of the inverse.
Heron's method for computing roots of x² − 1 = 0 can be replaced by higher-order methods, for instance based on Halley's method of third order, resulting in

U_{k+1} = U_k (3I + U_k*U_k)(I + 3 U_k*U_k)^{-1},   k = 0, 1, 2, …,

starting from U_0 = A. This iteration can again be combined with rescaling. This particular formula has the benefit that it is also applicable to singular or rectangular matrices A.
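A hedged sketch of this third-order iteration (our own function name and stopping rule): only I + 3U_k*U_k is inverted, which is why the same code also runs on singular or rectangular input.

```python
import numpy as np

def polar_unitary_halley(A, tol=1e-12, max_iter=100):
    """Iterate U_{k+1} = U_k (3I + U_k* U_k)(I + 3 U_k* U_k)^{-1} from U_0 = A."""
    U = np.asarray(A, dtype=complex)
    I = np.eye(A.shape[1])
    for _ in range(max_iter):
        H = U.conj().T @ U
        U_next = U @ (3 * I + H) @ np.linalg.inv(I + 3 * H)
        if np.linalg.norm(U_next - U) <= tol * np.linalg.norm(U_next):
            U = U_next
            break
        U = U_next
    return U
```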
^ Higham, Nicholas J.; Schreiber, Robert S. (1990). "Fast polar decomposition of an arbitrary matrix". SIAM J. Sci. Stat. Comput. 11 (4). Philadelphia, PA, USA: Society for Industrial and Applied Mathematics: 648–655. CiteSeerX 10.1.1.111.9239. doi:10.1137/0911038. ISSN 0196-5204. S2CID 14268409.
^ Higham, Nicholas J. (1986). "Computing the polar decomposition with applications". SIAM J. Sci. Stat. Comput. 7 (4). Philadelphia, PA, USA: Society for Industrial and Applied Mathematics: 1160–1174. CiteSeerX 10.1.1.137.7354. doi:10.1137/0907079. ISSN 0196-5204.
^ Byers, Ralph; Xu, Hongguo (2008). "A New Scaling for Newton's Iteration for the Polar Decomposition and its Backward Stability". SIAM J. Matrix Anal. Appl. 30 (2). Philadelphia, PA, USA: Society for Industrial and Applied Mathematics: 822–843. CiteSeerX 10.1.1.378.6737. doi:10.1137/070699895. ISSN 0895-4798.
Hall, Brian C. (2015). Lie Groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Vol. 222 (2nd ed.). Springer. ISBN 978-3319134666.