In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$$

for some scalar λ. Then λ is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.
This yields an equation for the eigenvalues

$$p(\lambda) = \det\left(\mathbf{A} - \lambda\mathbf{I}\right) = 0.$$

We call p(λ) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have N_λ distinct solutions, where 1 ≤ N_λ ≤ N. The set of solutions, that is, the eigenvalues, is called the spectrum of A.[1][2][3]
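As a numerical illustration, the eigenvalues of a small matrix can be obtained as the roots of its characteristic polynomial. A minimal NumPy sketch (the matrix shown is an arbitrary example; as noted later in the article, this route is not how eigenvalues of large matrices are computed in practice):

```python
import numpy as np

# An arbitrary small example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)        # coefficients of the characteristic polynomial
eigenvalues = np.roots(coeffs)  # eigenvalues are its roots
print(eigenvalues)         # [3. 1.]
```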
If the field of scalars is algebraically closed, then we can factor p as

$$p(\lambda) = \left(\lambda - \lambda_1\right)^{n_1}\left(\lambda - \lambda_2\right)^{n_2}\cdots\left(\lambda - \lambda_k\right)^{n_k} = 0.$$

The integer n_i is termed the algebraic multiplicity of eigenvalue λ_i. The algebraic multiplicities sum to N:

$$\sum_{i=1}^{k} n_i = N.$$
For each eigenvalue λ_i, we have a specific eigenvalue equation

$$\left(\mathbf{A} - \lambda_i\mathbf{I}\right)\mathbf{v} = 0.$$

There will be 1 ≤ m_i ≤ n_i linearly independent solutions to each eigenvalue equation. The linear combinations of the m_i solutions (except the one which gives the zero vector) are the eigenvectors associated with the eigenvalue λ_i. The integer m_i is termed the geometric multiplicity of λ_i. It is important to keep in mind that the algebraic multiplicity n_i and geometric multiplicity m_i may or may not be equal, but we always have m_i ≤ n_i. The simplest case is of course when m_i = n_i = 1. The total number of linearly independent eigenvectors, N_v, can be calculated by summing the geometric multiplicities

$$\sum_{i=1}^{k} m_i = N_{\mathbf{v}}.$$
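The geometric multiplicity is the dimension of the nullspace of A − λ_i I, which can be computed numerically as n minus the matrix rank. A minimal NumPy sketch (the helper name and example matrix are illustrative assumptions):

```python
import numpy as np

def geometric_multiplicity(A, lam, tol=1e-9):
    # Dimension of the nullspace of (A - lam*I):
    # n minus the numerical rank of (A - lam*I).
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)

# A shear-like matrix: the eigenvalue 2 has algebraic multiplicity 2
# but only one linearly independent eigenvector.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
print(geometric_multiplicity(A, 2.0))  # 1
```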
The eigenvectors can be indexed by eigenvalues, using a double index, with v_ij being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index v_k, with k = 1, 2, ..., N_v.
Let A be a square n × n matrix with n linearly independent eigenvectors q_i (where i = 1, ..., n). Then A can be factored as

$$\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1},$$

where Q is the square n × n matrix whose ith column is the eigenvector q_i of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, Λ_ii = λ_i. Note that only diagonalizable matrices can be factorized in this way. For example, the defective matrix $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ (which is a shear matrix) cannot be diagonalized.
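A minimal NumPy sketch of this factorization (the example matrix matches the worked example later in this article):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix Q whose
# columns are the corresponding (normalized) eigenvectors.
eigvals, Q = np.linalg.eig(A)
Lam = np.diag(eigvals)

# Reconstruct A = Q @ Lam @ Q^{-1}.
A_reconstructed = Q @ Lam @ np.linalg.inv(Q)
print(np.allclose(A, A_reconstructed))  # True
```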
The n eigenvectors q_i are usually normalized, but they don't have to be. A non-normalized set of n eigenvectors, v_i, can also be used as the columns of Q. That can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of Q⁻¹. If one of the eigenvalues λ_i has multiple linearly independent eigenvectors (that is, the geometric multiplicity of λ_i is greater than 1), then these eigenvectors for this eigenvalue λ_i can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that if A is a normal matrix, then by the spectral theorem, it's always possible to diagonalize A in an orthonormal basis {q_i}.
The decomposition can be derived from the fundamental property of eigenvectors:

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$$
$$\mathbf{A}\mathbf{Q} = \mathbf{Q}\mathbf{\Lambda}$$
$$\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}.$$

The linearly independent eigenvectors q_i with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products Ax, for x ∈ Cⁿ, which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix A. The number of linearly independent eigenvectors q_i with nonzero eigenvalues is equal to the rank of the matrix A, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space.
The linearly independent eigenvectors q_i with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation A.
The 2 × 2 real matrix

$$\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix}$$

may be decomposed into a diagonal matrix through multiplication of a non-singular matrix

$$\mathbf{Q} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \quad a, b, c, d \in \mathbb{R}.$$
Then

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix},$$

for some real diagonal matrix $\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}$.
Multiplying both sides of the equation on the left by Q:

$$\begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}.$$

The above equation can be decomposed into two simultaneous equations:

$$\begin{cases} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = \begin{bmatrix} ax \\ cx \end{bmatrix} \\[6pt] \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = \begin{bmatrix} by \\ dy \end{bmatrix} \end{cases}$$

Factoring out the eigenvalues x and y:

$$\begin{cases} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = x \begin{bmatrix} a \\ c \end{bmatrix} \\[6pt] \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = y \begin{bmatrix} b \\ d \end{bmatrix} \end{cases}$$

Letting

$$\mathbf{a} = \begin{bmatrix} a \\ c \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b \\ d \end{bmatrix},$$

this gives us two vector equations:

$$\begin{cases} \mathbf{A}\mathbf{a} = x\mathbf{a} \\ \mathbf{A}\mathbf{b} = y\mathbf{b} \end{cases}$$

And can be represented by a single vector equation involving two solutions as eigenvalues:

$$\mathbf{A}\mathbf{u} = \lambda\mathbf{u},$$

where λ represents the two eigenvalues x and y, and u represents the vectors a and b.
Shifting λu to the left hand side and factoring u out:

$$\left(\mathbf{A} - \lambda\mathbf{I}\right)\mathbf{u} = \mathbf{0}.$$

Since Q is non-singular, it is essential that u is nonzero. Therefore,

$$\det\left(\mathbf{A} - \lambda\mathbf{I}\right) = 0.$$

Thus

$$(1 - \lambda)(3 - \lambda) = 0,$$

giving us the solutions of the eigenvalues for the matrix A as λ = 1 or λ = 3, and the resulting diagonal matrix from the eigendecomposition of A is thus $\begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}$.
Putting the solutions back into the above simultaneous equations:

$$\begin{cases} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = 1 \begin{bmatrix} a \\ c \end{bmatrix} \\[6pt] \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = 3 \begin{bmatrix} b \\ d \end{bmatrix} \end{cases}$$
Solving the equations, we have a = −2c and b = 0, with c, d ∈ ℝ. Thus the matrix Q required for the eigendecomposition of A is

$$\mathbf{Q} = \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix}, \quad c, d \in \mathbb{R}\setminus\{0\},$$

that is:

$$\begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix}^{-1} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}, \quad c, d \in \mathbb{R}\setminus\{0\}.$$

The exclusion of the number 0 from the set of real numbers, $\mathbb{R}\setminus\{0\}$, is necessary to ensure that the matrix Q is non-singular.
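A short NumPy check of this result, which also illustrates that non-normalized eigenvectors work (the particular values c = 2, d = 5 are arbitrary nonzero choices):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

# Any nonzero c and d give a valid (non-normalized) eigenvector matrix.
c, d = 2.0, 5.0
Q = np.array([[-2 * c, 0.0],
              [c,      d]])

Lam = np.linalg.inv(Q) @ A @ Q
print(np.round(Lam, 10))  # [[1. 0.], [0. 3.]]
```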
If a matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is invertible and its inverse is given by

$$\mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}.$$

If A is a symmetric matrix, since Q is formed from the eigenvectors of A, Q is guaranteed to be an orthogonal matrix, therefore Q⁻¹ = Qᵀ. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate:

$$\left[\mathbf{\Lambda}^{-1}\right]_{ii} = \frac{1}{\lambda_i}.$$
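A minimal NumPy sketch of this inverse formula for a symmetric matrix (the example matrix is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # symmetric, eigenvalues 3 and 1

eigvals, Q = np.linalg.eigh(A)    # Q is orthogonal for symmetric A
Lam_inv = np.diag(1.0 / eigvals)  # invert the diagonal entrywise

A_inv = Q @ Lam_inv @ Q.T         # Q^{-1} = Q^T here
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```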
When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large. Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.[4]
Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See also Tikhonov regularization as a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise.
The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution.
The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found.
The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems).
If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues:[5]

$$\min \left| \nabla^2 \lambda_{\mathrm{s}} \right|,$$

where the eigenvalues are subscripted with an s to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
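A schematic NumPy sketch of the truncation mitigation and of the Laplacian heuristic above (the function names, the choice of a second difference as the discrete Laplacian, and the thresholding logic are illustrative assumptions, not standard library routines):

```python
import numpy as np

def truncated_inverse(A, k):
    # Inverse of a symmetric matrix that keeps only the k
    # largest-magnitude eigenvalues; the rest contribute nothing
    # (the first mitigation described above, schematically).
    eigvals, Q = np.linalg.eigh(A)
    order = np.argsort(np.abs(eigvals))[::-1]
    inv_vals = np.zeros_like(eigvals)
    inv_vals[order[:k]] = 1.0 / eigvals[order[:k]]
    return Q @ np.diag(inv_vals) @ Q.T

def reliable_cutoff(eigvals):
    # Second difference (a discrete Laplacian) of the rank-sorted
    # eigenvalues; its minimizer marks the lowest reliable eigenvalue.
    s = np.sort(np.abs(eigvals))[::-1]
    lap = np.abs(np.diff(s, n=2))
    return int(np.argmin(lap))
```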
The eigendecomposition allows for much easier computation of power series of matrices. If f(x) is given by

$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots,$$

then we know that

$$f\!\left(\mathbf{A}\right) = \mathbf{Q}\,f\!\left(\mathbf{\Lambda}\right)\mathbf{Q}^{-1}.$$

Because Λ is a diagonal matrix, functions of Λ are very easy to calculate:

$$\left[f\left(\mathbf{\Lambda}\right)\right]_{ii} = f\left(\lambda_i\right).$$
The off-diagonal elements of f(Λ) are zero; that is, f(Λ) is also a diagonal matrix. Therefore, calculating f(A) reduces to just calculating the function on each of the eigenvalues.
A similar technique works more generally with the holomorphic functional calculus, using

$$\mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}$$

from above. Once again, we find that

$$\left[f\left(\mathbf{\Lambda}\right)\right]_{ii} = f\left(\lambda_i\right).$$
Examples:

$$\mathbf{A}^2 = \left(\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\right)\left(\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\right) = \mathbf{Q}\mathbf{\Lambda}^2\mathbf{Q}^{-1}$$
$$\mathbf{A}^n = \mathbf{Q}\mathbf{\Lambda}^n\mathbf{Q}^{-1}$$
$$\exp \mathbf{A} = \mathbf{Q} \exp\left(\mathbf{\Lambda}\right) \mathbf{Q}^{-1}$$

which are examples for the functions f(x) = x², f(x) = xⁿ, f(x) = exp x. Furthermore, exp A is the matrix exponential.
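A minimal NumPy/SciPy sketch of computing a matrix function through the eigendecomposition (the example matrix is arbitrary; SciPy's expm is used only as an independent check):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, Q = np.linalg.eig(A)

# f(A) = Q f(Lambda) Q^{-1}: apply the scalar function
# to each eigenvalue on the diagonal.
exp_A = Q @ np.diag(np.exp(eigvals)) @ np.linalg.inv(Q)
print(np.allclose(exp_A, expm(A)))  # True
```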
Spectral matrices are matrices that possess distinct eigenvalues and a complete set of eigenvectors. This property makes them fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition. The decomposition reveals fundamental insights into the matrix's structure and behavior, particularly in fields such as quantum mechanics, signal processing, and numerical analysis.[6]
A complex-valued square matrix A is normal (meaning A*A = AA*, where A* is the conjugate transpose) if and only if it can be decomposed as

$$\mathbf{A} = \mathbf{U}\mathbf{\Lambda}\mathbf{U}^*,$$

where U is a unitary matrix (meaning U* = U⁻¹) and Λ = diag(λ₁, ..., λₙ) is a diagonal matrix.[7] The columns u₁, ..., uₙ of U form an orthonormal basis and are eigenvectors of A with corresponding eigenvalues λ₁, ..., λₙ.[8]
For example, consider the 2 × 2 normal (indeed, real symmetric) matrix

$$\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}.$$

The eigenvalues are λ₁ = 3 and λ₂ = −1.

The (normalized) eigenvectors corresponding to these eigenvalues are

$$\mathbf{u}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix} \quad\text{and}\quad \mathbf{u}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$

The diagonalization is A = UΛU*, where

$$\mathbf{U} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \quad\text{and}\quad \mathbf{\Lambda} = \begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix}.$$

The verification is

$$\mathbf{U}\mathbf{\Lambda}\mathbf{U}^* = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} = \mathbf{A}.$$
This example illustrates the process of diagonalizing a normal matrix: finding its eigenvalues and eigenvectors, forming the unitary matrix U and the diagonal matrix Λ, and verifying the decomposition.

As a special case, for every n × n real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. Thus a real symmetric matrix A can be decomposed as

$$\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{\mathsf{T}},$$

where Q is an orthogonal matrix whose columns are the real, orthonormal eigenvectors of A, and Λ is a diagonal matrix whose entries are the eigenvalues of A.[9]
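A minimal NumPy sketch using np.linalg.eigh, which is designed for symmetric/Hermitian input (the example matrix repeats the normal-matrix example above):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])   # real symmetric

eigvals, Q = np.linalg.eigh(A)   # real eigenvalues, ascending order
print(eigvals)                   # [-1.  3.]
print(np.allclose(Q @ Q.T, np.eye(2)))             # columns are orthonormal
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, A))  # A = Q Lambda Q^T
```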
Diagonalizable matrices can be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed as

$$\mathbf{A} = \mathbf{P}\mathbf{D}\mathbf{P}^{-1},$$

where P is a matrix whose columns are eigenvectors of A, and D is a diagonal matrix consisting of the corresponding eigenvalues of A.[8]
Positive definite matrices are matrices for which all eigenvalues are positive. They can be decomposed as A = LL* using the Cholesky decomposition, where L is a lower triangular matrix.[10]
Unitary matrices satisfy UUᵀ = I (real case) or UU* = I (complex case), where Uᵀ denotes the transpose and U* denotes the conjugate transpose. They diagonalize using unitary transformations.[8]
Hermitian matrices satisfy A = A*, where A* denotes the conjugate transpose. They can be diagonalized using unitary or orthogonal matrices.[8]
Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.
In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply using nth roots. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative.
Iterative numerical algorithms for approximating roots of polynomials exist, such as Newton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.[11]
A simple and accurate iterative method is the power method: a random vector v is chosen and a sequence of unit vectors is computed as

$$\frac{\mathbf{A}\mathbf{v}}{\left\|\mathbf{A}\mathbf{v}\right\|}, \quad \frac{\mathbf{A}^2\mathbf{v}}{\left\|\mathbf{A}^2\mathbf{v}\right\|}, \quad \frac{\mathbf{A}^3\mathbf{v}}{\left\|\mathbf{A}^3\mathbf{v}\right\|}, \quad \ldots$$
This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that v has a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the page rank of documents in their search engine.[12] Also, the power method is the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at the span of all the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis of Arnoldi iteration.[11] Alternatively, the important QR algorithm is also based on a subtle transformation of a power method.[11]
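A minimal NumPy sketch of the power method (the function name, the fixed iteration count, and the example matrix are illustrative choices; the eigenvalue estimate uses the Rayleigh quotient discussed below):

```python
import numpy as np

def power_method(A, num_iters=1000, seed=0):
    # Iterate v <- A v / ||A v||; converges (almost surely) to an
    # eigenvector of the eigenvalue of greatest magnitude.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    # The Rayleigh quotient gives the corresponding eigenvalue estimate.
    lam = v @ A @ v / (v @ v)
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_method(A)
print(lam)   # ~3.0, the dominant eigenvalue
```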
Once the eigenvalues are computed, the eigenvectors could be calculated by solving the equation

$$\left(\mathbf{A} - \lambda_i\mathbf{I}\right)\mathbf{v}_{i,j} = 0$$

using Gaussian elimination or any other method for solving matrix equations.
However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. In power iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by the Rayleigh quotient of the eigenvector).[11] In the QR algorithm for a Hermitian matrix (or any normal matrix), the orthonormal eigenvectors are obtained as a product of the Q matrices from the steps in the algorithm.[11] (For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a backsubstitution procedure.[13]) For Hermitian matrices, the divide-and-conquer eigenvalue algorithm is more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.[11]
Recall that the geometric multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of λI − A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix (λI − A)^k for any sufficiently large k. That is, it is the space of generalized eigenvectors (first sense), where a generalized eigenvector is any vector which eventually becomes 0 if λI − A is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.
This usage should not be confused with the generalized eigenvalue problem described below.
A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{v}^*.$$

For example, in coherent electromagnetic scattering theory, the linear transformation A represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.
A generalized eigenvalue problem (second sense) is the problem of finding a (nonzero) vector v that obeys

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{B}\mathbf{v},$$

where A and B are matrices. If v obeys this equation, with some λ, then we call v the generalized eigenvector of A and B (in the second sense), and λ is called the generalized eigenvalue of A and B (in the second sense) which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation:

$$\det\left(\mathbf{A} - \lambda\mathbf{B}\right) = 0.$$
If n linearly independent vectors {v₁, …, vₙ} can be found, such that for every i ∈ {1, …, n}, Avᵢ = λᵢBvᵢ, then we define the matrices P and D such that

$$\mathbf{P} = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \end{bmatrix}, \qquad (\mathbf{D})_{ij} = \begin{cases} \lambda_i, & \text{if } i = j \\ 0, & \text{otherwise.} \end{cases}$$

Then the following equality holds:

$$\mathbf{A} = \mathbf{B}\mathbf{P}\mathbf{D}\mathbf{P}^{-1}.$$

And the proof is

$$\mathbf{A}\mathbf{P} = \begin{bmatrix} \mathbf{A}\mathbf{v}_1 & \cdots & \mathbf{A}\mathbf{v}_n \end{bmatrix} = \begin{bmatrix} \lambda_1\mathbf{B}\mathbf{v}_1 & \cdots & \lambda_n\mathbf{B}\mathbf{v}_n \end{bmatrix} = \begin{bmatrix} \mathbf{B}\mathbf{v}_1 & \cdots & \mathbf{B}\mathbf{v}_n \end{bmatrix}\mathbf{D} = \mathbf{B}\mathbf{P}\mathbf{D}.$$
And sinceP is invertible, we multiply the equation from the right by its inverse, finishing the proof.
The set of matrices of the form A − λB, where λ is a complex number, is called a pencil; the term matrix pencil can also refer to the pair (A, B) of matrices.[14]
If B is invertible, then the original problem can be written in the form

$$\mathbf{B}^{-1}\mathbf{A}\mathbf{v} = \lambda\mathbf{v},$$

which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important if A and B are Hermitian matrices, since in this case B⁻¹A is not generally Hermitian and important properties of the solution are no longer apparent.
If A and B are both symmetric or Hermitian, and B is also a positive-definite matrix, the eigenvalues λᵢ are real and eigenvectors v₁ and v₂ with distinct eigenvalues are B-orthogonal (v₁*Bv₂ = 0).[15] In this case, eigenvectors can be chosen so that the matrix P defined above satisfies

$$\mathbf{P}^*\mathbf{B}\mathbf{P} = \mathbf{I} \quad\text{or}\quad \mathbf{P}\mathbf{P}^*\mathbf{B} = \mathbf{I},$$

and there exists a basis of generalized eigenvectors (it is not a defective problem).[14] This case is sometimes called a Hermitian definite pencil or definite pencil.[14]
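A minimal SciPy sketch of such a Hermitian definite pencil (the example matrices are arbitrary illustrative choices): scipy.linalg.eigh solves the generalized problem directly, without forming B⁻¹A, and returns B-orthonormal eigenvectors.

```python
import numpy as np
from scipy.linalg import eigh

# A symmetric, B symmetric positive-definite: a definite pencil.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Solves A v = lambda B v; eigenvalues are real for a definite pencil.
eigvals, P = eigh(A, B)
print(eigvals)                              # real generalized eigenvalues
print(np.allclose(P.T @ B @ P, np.eye(2)))  # eigenvectors are B-orthonormal
```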