Eigenvalues and eigenvectors


In linear algebra, an eigenvector (/ˈaɪɡən-/ EYE-gən-) or characteristic vector is a (nonzero) vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector v of a linear transformation T is scaled by a constant factor λ when the linear transformation is applied to it: Tv = λv. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor λ (possibly a negative or complex number).

Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed.[1]

The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system.

Matrices

For an n × n matrix A and a nonzero n-vector v, if multiplying A by v (denoted Av) simply scales v by a factor λ, where λ is a scalar, then v is called an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as Av = λv.[2]

Given an n-dimensional vector space and a choice of basis, there is a direct correspondence between linear transformations from the vector space into itself and n-by-n square matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language of matrices.[3][4]
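As a concrete illustration, the defining relation Av = λv can be checked numerically; the following is a minimal sketch, assuming Python with NumPy, using a small symmetric matrix chosen purely for illustration.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Check the defining relation A v = lambda v for each eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)  # the eigenvalues 3 and 1, in whatever order eig returns them
```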

Overview

Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'.[5][6] Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.

In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation

$$T(\mathbf{v}) = \lambda \mathbf{v},$$

referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.

In this shear mapping the red arrow changes direction, but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping because it does not change direction, and since its length is unchanged, its eigenvalue is 1.
A 2 × 2 real and symmetric matrix representing a stretching and shearing of the plane. The eigenvectors of the matrix (red lines) are the two special directions such that every point on them will just slide on them.

The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either.

Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as

$$\frac{d}{dx} e^{\lambda x} = \lambda e^{\lambda x}.$$

Alternatively, the linear transformation could take the form of an n × n matrix, in which case the eigenvectors are n × 1 matrices. If the linear transformation is expressed in the form of an n × n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication

$$A\mathbf{v} = \lambda \mathbf{v},$$

where the eigenvector v is an n × 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it.

Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them:

  • The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.[7][8]
  • The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue.[9]
  • If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis.

History


Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.

In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes.[a] Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.[10]

In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[11] Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation.[b]

Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur).[12] Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues.[11] This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices.[13]

Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[11] and Alfred Clebsch found the corresponding result for skew-symmetric matrices.[13] Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability.[11]

In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.[14] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[15]

At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[16] He was the first to use the German word eigen, which means "own",[6] to denote eigenvalues and eigenvectors in 1904,[c] though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.[17]

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis[18] and Vera Kublanovskaya[19] in 1961.[20][21]

Eigenvalues and eigenvectors of matrices

See also: Euclidean vector and Matrix (mathematics)

Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[22][23] Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices,[3][4] which is especially common in numerical and computational applications.[24]

Matrix A acts by stretching the vector x, not changing its direction, so x is an eigenvector of A.

Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors

$$\mathbf{x} = \begin{bmatrix} 1 \\ -3 \\ 4 \end{bmatrix} \quad \text{and} \quad \mathbf{y} = \begin{bmatrix} -20 \\ 60 \\ -80 \end{bmatrix}.$$

These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that x = λy.

In this case, λ = −1/20.

Now consider the linear transformation of n-dimensional vectors defined by an n-by-n matrix A,

$$A\mathbf{v} = \mathbf{w},$$

or

$$\begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix}$$

where, for each row,

$$w_i = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n = \sum_{j=1}^{n} A_{ij} v_j.$$

If it occurs that v and w are scalar multiples, that is if

$$A\mathbf{v} = \mathbf{w} = \lambda \mathbf{v}, \qquad (1)$$

then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A.

Equation (1) can be stated equivalently as

$$(A - \lambda I)\mathbf{v} = \mathbf{0}, \qquad (2)$$

where I is the n-by-n identity matrix and 0 is the zero vector.

Eigenvalues and the characteristic polynomial

Main article: Characteristic polynomial

Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation

$$\det(A - \lambda I) = 0. \qquad (3)$$

Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)^n λ^n. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A.

The characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, has at most n complex number roots, which can be found by factoring the characteristic polynomial, or numerically by root finding. The characteristic polynomial can be factored into the product of n linear terms,

$$\det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda), \qquad (4)$$

where the complex numbers λ1, λ2, ..., λn, each of which is an eigenvalue, may not all be distinct. (The number of times an eigenvalue appears is known as its algebraic multiplicity.)

As a brief example, which is described in more detail in the examples section later, consider the matrix

$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.$$

Taking the determinant of (A − λI), the characteristic polynomial of A is

$$\det(A - \lambda I) = \begin{vmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} = 3 - 4\lambda + \lambda^2.$$

Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue λ can be found by solving for the components of v in the equation (A − λI)v = 0. In this example, the eigenvectors are any nonzero scalar multiples of

$$\mathbf{v}_{\lambda=1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \qquad \mathbf{v}_{\lambda=3} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
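For readers who want to reproduce this small computation symbolically, a minimal sketch follows; it assumes Python with SymPy, one convenient choice of computer algebra system.

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [1, 2]])
lam = sp.symbols('lambda')

# Characteristic polynomial det(A - lambda*I); factoring exposes the roots 1 and 3.
p = (A - lam * sp.eye(2)).det()
print(sp.factor(p))  # (lambda - 1)*(lambda - 3), up to ordering

# eigenvects() returns (eigenvalue, algebraic multiplicity, basis of the eigenspace).
for value, multiplicity, basis in A.eigenvects():
    print(value, multiplicity, basis)
```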

If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers.

The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.

Spectrum of a matrix

The spectrum of a matrix is the list of eigenvalues, repeated according to multiplicity; in an alternative notation, the set of eigenvalues with their multiplicities.

An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix.

Algebraic multiplicity

Let λi be an eigenvalue of an n-by-n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)^k evenly divides that polynomial.[9][25][26]

Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,

$$\det(A - \lambda I) = (\lambda_1 - \lambda)^{\mu_A(\lambda_1)} (\lambda_2 - \lambda)^{\mu_A(\lambda_2)} \cdots (\lambda_d - \lambda)^{\mu_A(\lambda_d)}.$$

If d = n then the right-hand side is the product of n linear terms, and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as

$$1 \leq \mu_A(\lambda_i) \leq n, \qquad \mu_A = \sum_{i=1}^{d} \mu_A(\lambda_i) = n.$$

If μA(λi) = 1, then λi is said to be a simple eigenvalue.[26] If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue.

Eigenspaces, geometric multiplicity, and the eigenbasis for matrices

Given a particular eigenvalue λ of the n × n matrix A, define the set E to be all vectors v that satisfy equation (2),

$$E = \left\{ \mathbf{v} : (A - \lambda I)\mathbf{v} = \mathbf{0} \right\}.$$

On one hand, this set is precisely the kernel or nullspace of the matrix A − λI. On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of A − λI. The space E is called the eigenspace or characteristic space of A associated with λ.[27][9] In general λ is a complex number and the eigenvectors are complex n × 1 matrices (column vectors). Because every nullspace is a linear subspace of the domain, E is a linear subspace of ℂⁿ.

Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then u + v ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, αv ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.

The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γA(λ). Because E is also the nullspace of A − λI, the geometric multiplicity of λ is the dimension of the nullspace of A − λI, also called the nullity of A − λI. This quantity is related to the size and rank of A − λI by the equation

$$\gamma_A(\lambda) = n - \operatorname{rank}(A - \lambda I).$$

Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n:

$$1 \leq \gamma_A(\lambda) \leq \mu_A(\lambda) \leq n.$$
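As a small numerical illustration, the rank formula above gives the geometric multiplicity directly. A minimal sketch, assuming Python with NumPy, using a 2 × 2 Jordan block, the standard example of an eigenvalue whose geometric multiplicity is smaller than its algebraic multiplicity:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])   # only eigenvalue: 2, with algebraic multiplicity 2
n = A.shape[0]
lam = 2.0

# gamma_A(lambda) = n - rank(A - lambda*I)
geometric_multiplicity = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(geometric_multiplicity)  # 1, strictly less than the algebraic multiplicity
```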

To prove the inequality γA(λ) ≤ μA(λ), let B = A − λI, where λ is a fixed complex number, and note that the eigenspace associated with λ is the nullspace of B. Let the dimension of that eigenspace be k = γA(λ). This means that the last k rows of the echelon form of B are zero. Thus, there is an invertible matrix E coming from Gauss–Jordan reduction, such that

$$EB = \begin{bmatrix} * & * \\ \mathbf{0}_{k \times (n-k)} & \mathbf{0}_{k \times k} \end{bmatrix}.$$

Therefore the last k rows of EB − tE are (−t) times the last k rows of E. Therefore the polynomial t^k evenly divides the polynomial det(EB − tE), because of basic properties of determinants (homogeneity). On the other hand, det(EB − tE) = det E · det(B − tI) = pA(t + λ) det E, so (t − λ)^k divides pA(t), and so the algebraic multiplicity of λ is at least k.

Suppose A has d ≤ n distinct eigenvalues λ1, ..., λd, where the geometric multiplicity of λi is γA(λi). The total geometric multiplicity of A,

$$\gamma_A = \sum_{i=1}^{d} \gamma_A(\lambda_i), \qquad d \leq \gamma_A \leq n,$$

is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If γA = n, then the eigenspaces of A together span the whole space, a basis can be formed from n linearly independent eigenvectors of A, and A is diagonalizable.

Additional properties

Let A be an arbitrary n × n matrix of complex numbers with eigenvalues λ1, ..., λn. Each eigenvalue appears μA(λi) times in this list, where μA(λi) is the eigenvalue's algebraic multiplicity. Among the properties relating this matrix to its eigenvalues: the trace of A, the sum of its diagonal entries, equals the sum of its eigenvalues, and the determinant of A equals the product of its eigenvalues.
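A quick numerical check of the trace and determinant identities just mentioned; a minimal sketch, assuming Python with NumPy and an arbitrary illustrative matrix.

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
eigenvalues = np.linalg.eigvals(A)

# The trace equals the sum of the eigenvalues (counted with multiplicity) ...
assert np.isclose(np.trace(A), eigenvalues.sum())
# ... and the determinant equals their product.
assert np.isclose(np.linalg.det(A), eigenvalues.prod())
```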

Left and right eigenvectors

See also: left and right (algebra)

Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n × n matrix A in the defining equation, equation (1),

$$A\mathbf{v} = \lambda \mathbf{v}.$$

The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix A. In this formulation, the defining equation is

$$\mathbf{u} A = \kappa \mathbf{u},$$

where κ is a scalar and u is a 1 × n matrix. Any row vector u satisfying this equation is called a left eigenvector of A and κ is its associated eigenvalue. Taking the transpose of this equation,

$$A^{\mathsf{T}} \mathbf{u}^{\mathsf{T}} = \kappa \mathbf{u}^{\mathsf{T}}.$$

Comparing this equation to equation (1), it follows immediately that a left eigenvector of A is the same as the transpose of a right eigenvector of A^T, with the same eigenvalue. Furthermore, since the characteristic polynomial of A^T is the same as the characteristic polynomial of A, the left and right eigenvectors of A are associated with the same eigenvalues.
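In practice, left eigenvectors can therefore be obtained by computing right eigenvectors of the transpose, exactly as the transposed equation above suggests. A minimal sketch, assuming Python with NumPy:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

# Right eigenvectors of A^T correspond to left eigenvectors of A,
# and the eigenvalues are the same as those of A.
kappas, U = np.linalg.eig(A.T)

for kappa, u in zip(kappas, U.T):
    # u is a left eigenvector (as a 1-D array): u A = kappa u
    assert np.allclose(u @ A, kappa * u)

print(kappas)  # the eigenvalues 6 and 1, in whatever order eig returns them
```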

Diagonalization and the eigendecomposition

Main article: Eigendecomposition of a matrix

Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,

$$Q = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \end{bmatrix}.$$

Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,

$$AQ = \begin{bmatrix} \lambda_1 \mathbf{v}_1 & \lambda_2 \mathbf{v}_2 & \cdots & \lambda_n \mathbf{v}_n \end{bmatrix}.$$

With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then

$$AQ = Q\Lambda.$$

Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q⁻¹,

$$A = Q \Lambda Q^{-1},$$

or by instead left multiplying both sides by Q⁻¹,

$$Q^{-1} A Q = \Lambda.$$

A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.

Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P⁻¹AP is some diagonal matrix D. Left multiplying both sides by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.
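A short numerical sketch of the eigendecomposition, assuming Python with NumPy; the matrix is illustrative and diagonalizable.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of Q are eigenvectors; Lambda holds the eigenvalues on its diagonal.
eigenvalues, Q = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)

# A = Q Lambda Q^{-1}  and  Q^{-1} A Q = Lambda
assert np.allclose(A, Q @ Lambda @ np.linalg.inv(Q))
assert np.allclose(np.linalg.inv(Q) @ A @ Q, Lambda)
```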

A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.

Variational characterization

Main article: Min-max theorem

In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H is the maximum value of the quadratic form x^T H x / x^T x. A value of x that realizes that maximum is an eigenvector.

Matrix examples


Two-dimensional matrix example

The transformation matrix A = [2 1; 1 2] preserves the direction of magenta vectors parallel to vλ=1 = [1 −1]^T and blue vectors parallel to vλ=3 = [1 1]^T. The red vectors are not parallel to either eigenvector, so their directions are changed by the transformation. The lengths of the magenta vectors are unchanged after the transformation (due to their eigenvalue of 1), while blue vectors are three times the length of the original (due to their eigenvalue of 3).

Consider the matrix

$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.$$

The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues.

Taking the determinant to find the characteristic polynomial of A,

$$\det(A - \lambda I) = \left| \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right| = \begin{vmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} = 3 - 4\lambda + \lambda^2 = (\lambda - 3)(\lambda - 1).$$

Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of A.

For λ = 1, equation (2) becomes

$$(A - I)\mathbf{v}_{\lambda=1} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad 1 v_1 + 1 v_2 = 0.$$

Any nonzero vector with v1 = −v2 solves this equation. Therefore,

$$\mathbf{v}_{\lambda=1} = \begin{bmatrix} v_1 \\ -v_1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector.

For λ = 3, equation (2) becomes

$$(A - 3I)\mathbf{v}_{\lambda=3} = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad -1 v_1 + 1 v_2 = 0; \quad 1 v_1 - 1 v_2 = 0.$$

Any nonzero vector with v1 = v2 solves this equation. Therefore,

$$\mathbf{v}_{\lambda=3} = \begin{bmatrix} v_1 \\ v_1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector.

Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ = 1 and λ = 3, respectively.

Three-dimensional matrix example

Consider the matrix

$$A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix}.$$

The characteristic polynomial of A is

$$\det(A - \lambda I) = \left| \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \right| = \begin{vmatrix} 2 - \lambda & 0 & 0 \\ 0 & 3 - \lambda & 4 \\ 0 & 4 & 9 - \lambda \end{vmatrix} = (2 - \lambda)\bigl[(3 - \lambda)(9 - \lambda) - 16\bigr] = -\lambda^3 + 14\lambda^2 - 35\lambda + 22.$$

The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors [1 0 0]^T, [0 −2 1]^T, and [0 1 2]^T, or any nonzero multiple thereof.

Three-dimensional matrix example with complex eigenvalues

Consider the cyclic permutation matrix

$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}.$$

This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ³, whose roots are

$$\lambda_1 = 1, \qquad \lambda_2 = -\frac{1}{2} + i\frac{\sqrt{3}}{2}, \qquad \lambda_3 = \lambda_2^{*} = -\frac{1}{2} - i\frac{\sqrt{3}}{2},$$

where i is an imaginary unit with i² = −1.

For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example,

$$A \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix} = \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix} = 1 \cdot \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix}.$$

For the complex conjugate pair of imaginary eigenvalues,

$$\lambda_2 \lambda_3 = 1, \qquad \lambda_2^2 = \lambda_3, \qquad \lambda_3^2 = \lambda_2.$$

Then

$$A \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix} = \begin{bmatrix} \lambda_2 \\ \lambda_3 \\ 1 \end{bmatrix} = \lambda_2 \cdot \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix}, \qquad\text{and}\qquad A \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix} = \begin{bmatrix} \lambda_3 \\ \lambda_2 \\ 1 \end{bmatrix} = \lambda_3 \cdot \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix}.$$

Therefore, the other two eigenvectors of A are complex and are vλ2 = [1 λ2 λ3]^T and vλ3 = [1 λ3 λ2]^T with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair,

$$\mathbf{v}_{\lambda_2} = \mathbf{v}_{\lambda_3}^{*}.$$

Diagonal matrix example

Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}.$$

The characteristic polynomial of A is

$$\det(A - \lambda I) = (1 - \lambda)(2 - \lambda)(3 - \lambda),$$

which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.

Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,

$$\mathbf{v}_{\lambda_1} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_2} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_3} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},$$

respectively, as well as scalar multiples of these vectors.

Triangular matrix example

A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.

Consider the lower triangular matrix,

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 2 & 3 & 3 \end{bmatrix}.$$

The characteristic polynomial of A is

$$\det(A - \lambda I) = (1 - \lambda)(2 - \lambda)(3 - \lambda),$$

which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.

These eigenvalues correspond to the eigenvectors,

$$\mathbf{v}_{\lambda_1} = \begin{bmatrix} 1 \\ -1 \\ \tfrac{1}{2} \end{bmatrix}, \qquad \mathbf{v}_{\lambda_2} = \begin{bmatrix} 0 \\ 1 \\ -3 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_3} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},$$

respectively, as well as scalar multiples of these vectors.

Matrix with repeated eigenvalues example

As in the previous example, the lower triangular matrix

$$A = \begin{bmatrix} 2 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 1 & 3 \end{bmatrix}$$

has a characteristic polynomial that is the product of its diagonal elements,

$$\det(A - \lambda I) = \begin{vmatrix} 2 - \lambda & 0 & 0 & 0 \\ 1 & 2 - \lambda & 0 & 0 \\ 0 & 1 & 3 - \lambda & 0 \\ 0 & 0 & 1 & 3 - \lambda \end{vmatrix} = (2 - \lambda)^2 (3 - \lambda)^2.$$

The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words, they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A.

On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [0 1 −1 1]^T and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [0 0 0 1]^T. The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in an earlier section.
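To reproduce these multiplicities symbolically, one option is SymPy (an illustrative choice; any computer algebra system would do), whose eigenvects() reports each eigenvalue with its algebraic multiplicity and a basis of its eigenspace:

```python
import sympy as sp

A = sp.Matrix([[2, 0, 0, 0],
               [1, 2, 0, 0],
               [0, 1, 3, 0],
               [0, 0, 1, 3]])

# For each eigenvalue: (value, algebraic multiplicity, eigenspace basis).
# The geometric multiplicity is the number of basis vectors.
for value, algebraic, basis in A.eigenvects():
    print(value, algebraic, len(basis))   # both eigenvalues: algebraic 2, geometric 1
```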

Eigenvector-eigenvalue identity

For a Hermitian matrix A, the norm squared of the α-th component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix,

$$|v_{i\alpha}|^2 = \frac{\prod_{k} \bigl(\lambda_i(A) - \lambda_k(A_\alpha)\bigr)}{\prod_{k \neq i} \bigl(\lambda_i(A) - \lambda_k(A)\bigr)},$$

where A_α is the submatrix formed by removing the α-th row and column from the original matrix.[33][34][35] This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature.[34][36]

Eigenvalues and eigenfunctions of differential operators

Main article: Eigenfunction

The definitions of eigenvalue and eigenvectors of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞(ℝ) of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation

$$D f(t) = \lambda f(t).$$

The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.

Derivative operator example

Consider the derivative operator d/dt with eigenvalue equation

$$\frac{d}{dt} f(t) = \lambda f(t).$$

This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function

$$f(t) = f(0) e^{\lambda t},$$

is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant.

General definition

The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,

$$T : V \to V.$$

We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that

$$T(\mathbf{v}) = \lambda \mathbf{v}. \qquad (5)$$

This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.[37][38]

Eigenspaces, geometric multiplicity, and the eigenbasis

Given an eigenvalue λ, consider the set

$$E = \left\{ \mathbf{v} : T(\mathbf{v}) = \lambda \mathbf{v} \right\},$$

which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.[39] It is the kernel of the linear transformation T − λI.

By definition of a linear transformation,

$$T(\mathbf{x} + \mathbf{y}) = T(\mathbf{x}) + T(\mathbf{y}), \qquad T(\alpha \mathbf{x}) = \alpha T(\mathbf{x}),$$

for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then

$$T(\mathbf{u} + \mathbf{v}) = \lambda (\mathbf{u} + \mathbf{v}), \qquad T(\alpha \mathbf{v}) = \lambda (\alpha \mathbf{v}).$$

So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.[40] If that subspace has dimension 1, it is sometimes called an eigenline.[41]

The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[42] By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector.

The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.[d]

Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.

Spectral theory

Main article: Spectral theory

If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)⁻¹ does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.

For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.

Associative algebras and representation theory

Main article: Weight (representation theory)

One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation, an associative algebra acting on a module. The study of such actions is the field of representation theory.

The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.

A Hecke eigensheaf is a tensor-multiple of itself and is considered in the Langlands correspondence.

Dynamic equations

The simplest difference equations have the form

$$x_t = a_1 x_{t-1} + a_2 x_{t-2} + \cdots + a_k x_{t-k}.$$

The solution of this equation for x in terms of t is found by using its characteristic equation

$$\lambda^k - a_1 \lambda^{k-1} - a_2 \lambda^{k-2} - \cdots - a_{k-1} \lambda - a_k = 0,$$

which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k − 1 equations x_{t−1} = x_{t−1}, ..., x_{t−k+1} = x_{t−k+1}, giving a k-dimensional system of the first order in the stacked variable vector [x_t ⋯ x_{t−k+1}] in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ1, ..., λk, for use in the solution equation

$$x_t = c_1 \lambda_1^t + \cdots + c_k \lambda_k^t.$$
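As a concrete illustration, the Fibonacci recurrence x_t = x_{t−1} + x_{t−2} can be stacked in exactly this way; the eigenvalues of the resulting companion matrix are the characteristic roots. A minimal sketch, assuming Python with NumPy:

```python
import numpy as np

# Companion (stacked) matrix of the recurrence x_t = 1*x_{t-1} + 1*x_{t-2}:
# [x_t, x_{t-1}] = C @ [x_{t-1}, x_{t-2}]
C = np.array([[1.0, 1.0],
              [1.0, 0.0]])

roots = np.linalg.eigvals(C)
print(np.sort(roots))  # roughly [-0.618, 1.618]; the larger root is the golden ratio

# The characteristic roots satisfy lambda^2 - lambda - 1 = 0.
for lam in roots:
    assert np.isclose(lam**2 - lam - 1.0, 0.0)
```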

A similar procedure is used for solving a differential equation of the form

$$\frac{d^k x}{dt^k} + a_{k-1} \frac{d^{k-1} x}{dt^{k-1}} + \cdots + a_1 \frac{dx}{dt} + a_0 x = 0.$$

Calculation

Main article: Eigenvalue algorithm

The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice.

Classical method


The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetic such as floating-point.

Eigenvalues

The eigenvalues of a matrix A can be determined by finding the roots of the characteristic polynomial. This is easy for 2 × 2 matrices, but the difficulty increases rapidly with the size of the matrix.

In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy.[43] However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial).[43] Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an n × n matrix is a sum of n! different products.[e]

Explicit algebraic formulas for the roots of a polynomial exist only if the degree n is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree n is the characteristic polynomial of some companion matrix of order n.) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical.

Eigenvectors

Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, which becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix

$$A = \begin{bmatrix} 4 & 1 \\ 6 & 3 \end{bmatrix}$$

we can find its eigenvectors by solving the equation Av = 6v, that is

$$\begin{bmatrix} 4 & 1 \\ 6 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 6 \cdot \begin{bmatrix} x \\ y \end{bmatrix}.$$

This matrix equation is equivalent to two linear equations

$$\begin{cases} 4x + y = 6x \\ 6x + 3y = 6y \end{cases} \qquad\text{that is,}\qquad \begin{cases} -2x + y = 0 \\ 6x - 3y = 0 \end{cases}$$

Both equations reduce to the single linear equation y = 2x. Therefore, any vector of the form [a 2a]^T, for any nonzero real number a, is an eigenvector of A with eigenvalue λ = 6.

The matrix A above has another eigenvalue λ = 1. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of 3x + y = 0, that is, any vector of the form [b −3b]^T, for any nonzero real number b.
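The same computation can be phrased as finding the nullspace of A − λI. A minimal symbolic sketch, assuming Python with SymPy (an illustrative choice of tool):

```python
import sympy as sp

A = sp.Matrix([[4, 1],
               [6, 3]])

# Eigenvectors for the eigenvalue 6 are the nonzero vectors in the nullspace of A - 6I.
print((A - 6 * sp.eye(2)).nullspace())  # a basis vector proportional to [1, 2]^T

# Likewise for the other eigenvalue, 1:
print((A - sp.eye(2)).nullspace())      # a basis vector proportional to [1, -3]^T
```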

Simple iterative methods

Main article: Power iteration

The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by (A − μI)⁻¹; this causes it to converge to an eigenvector of the eigenvalue closest to μ ∈ ℂ.

If v is (a good approximation of) an eigenvector of A, then the corresponding eigenvalue can be computed as

$$\lambda = \frac{\mathbf{v}^{*} A \mathbf{v}}{\mathbf{v}^{*} \mathbf{v}},$$

where v* denotes the conjugate transpose of v.
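A minimal sketch of this power iteration, assuming Python with NumPy; the fixed iteration count, the random starting vector, and the example matrix are all illustrative choices.

```python
import numpy as np

def power_iteration(A, num_iters=1000):
    # Repeatedly apply A to a starting vector; for typical matrices and starting
    # vectors the result converges toward an eigenvector of the eigenvalue with
    # the largest absolute value.
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)      # normalize to keep the entries a reasonable size
    lam = (v.conj() @ A @ v) / (v.conj() @ v)   # Rayleigh quotient for the eigenvalue
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
print(lam)   # approximately 3, the dominant eigenvalue of this matrix
```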

Modern methods

Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961.[43] Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm.[citation needed] For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[43]

Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed.

Applications

Geometric transformations

Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes. The following list presents some example transformations in the plane along with their 2 × 2 matrices, eigenvalues, and eigenvectors.

Eigenvalues of geometric transformations:

Scaling (homothety): matrix $\begin{bmatrix} k & 0 \\ 0 & k \end{bmatrix}$; characteristic polynomial (λ − k)²; eigenvalues λ1 = λ2 = k; algebraic multiplicity μ1 = 2; geometric multiplicity γ1 = 2; eigenvectors: all nonzero vectors.

Unequal scaling: matrix $\begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix}$; characteristic polynomial (λ − k1)(λ − k2); eigenvalues λ1 = k1, λ2 = k2; algebraic multiplicities μ1 = μ2 = 1; geometric multiplicities γ1 = γ2 = 1; eigenvectors u1 = [1 0]^T, u2 = [0 1]^T.

Rotation by angle θ: matrix $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$; characteristic polynomial λ² − 2cos(θ)λ + 1; eigenvalues λ1 = e^{iθ} = cos θ + i sin θ and λ2 = e^{−iθ} = cos θ − i sin θ; algebraic multiplicities μ1 = μ2 = 1; geometric multiplicities γ1 = γ2 = 1; eigenvectors u1 = [1 −i]^T, u2 = [1 +i]^T.

Horizontal shear: matrix $\begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}$; characteristic polynomial (λ − 1)²; eigenvalues λ1 = λ2 = 1; algebraic multiplicity μ1 = 2; geometric multiplicity γ1 = 1; eigenvector u1 = [1 0]^T.

Hyperbolic rotation: matrix $\begin{bmatrix} \cosh\varphi & \sinh\varphi \\ \sinh\varphi & \cosh\varphi \end{bmatrix}$; characteristic polynomial λ² − 2cosh(φ)λ + 1; eigenvalues λ1 = e^{φ} = cosh φ + sinh φ and λ2 = e^{−φ} = cosh φ − sinh φ; algebraic multiplicities μ1 = μ2 = 1; geometric multiplicities γ1 = γ2 = 1; eigenvectors u1 = [1 1]^T, u2 = [1 −1]^T.

The characteristic equation for a rotation is a quadratic equation with discriminant D = −4(sin θ)², which is a negative number whenever θ is not an integer multiple of π (180°). Therefore, except for these special cases, the two eigenvalues are complex numbers, cos θ ± i sin θ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.

A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.

Principal component analysis

PCA of the multivariate Gaussian distribution centered at (1, 3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) covariance matrix scaled by the square root of the corresponding eigenvalue. Just as in the one-dimensional case, the square root is taken because the standard deviation is more readily visualized than the variance.
Main article: Principal component analysis
See also: Positive semidefinite matrix and Factor analysis

The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
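A minimal PCA sketch via the eigendecomposition of a sample covariance matrix, assuming Python with NumPy; the synthetic data and its covariance are placeholder values chosen only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic correlated 2-D data (illustrative only).
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[3.0, 2.0], [2.0, 2.0]],
                            size=500)

# Sample covariance matrix of the centered data.
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)

# For a symmetric PSD matrix, eigh returns ascending eigenvalues and an
# orthonormal set of eigenvectors (the principal directions).
variances, directions = np.linalg.eigh(C)
order = np.argsort(variances)[::-1]   # largest explained variance first
print(variances[order])               # variance explained by each principal component
print(directions[:, order])           # columns are the principal components
```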

Graphs

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either D − A (sometimes called the combinatorial Laplacian) or I − D^{−1/2} A D^{−1/2} (sometimes called the normalized Laplacian), where D is a diagonal matrix with Dii equal to the degree of vertex vi, and in D^{−1/2}, the i-th diagonal entry is 1/√deg(vi). The k-th principal eigenvector of a graph is defined as either the eigenvector corresponding to the k-th largest or k-th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
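As a small illustration of the combinatorial Laplacian D − A, its spectrum for a path graph on four vertices can be computed directly. A minimal sketch, assuming Python with NumPy; the smallest Laplacian eigenvalue is always 0, and the number of zero eigenvalues equals the number of connected components.

```python
import numpy as np

# Adjacency matrix of a path graph on 4 vertices: 1 - 2 - 3 - 4 (illustrative).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial Laplacian

# The Laplacian is symmetric positive semidefinite, so eigh is appropriate.
eigenvalues, eigenvectors = np.linalg.eigh(L)
print(np.round(eigenvalues, 3))   # approximately [0., 0.586, 2., 3.414]
```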

Markov chains


A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state.
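The steady state is the eigenvector of the transition matrix for the eigenvalue 1 (a left eigenvector when the matrix is row-stochastic), normalized so its entries sum to one. A minimal sketch, assuming Python with NumPy and a two-state chain with placeholder transition probabilities:

```python
import numpy as np

# Row-stochastic transition matrix (illustrative values).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The stationary distribution pi satisfies pi P = pi, i.e. it is a right
# eigenvector of P^T for the eigenvalue 1, rescaled to sum to one.
eigenvalues, vectors = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigenvalues - 1.0))
pi = np.real(vectors[:, i])
pi = pi / pi.sum()

print(pi)                       # approximately [0.833, 0.167]
assert np.allclose(pi @ P, pi)
```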

Vibration analysis

Mode shape of a tuning fork at eigenfrequency 440.09 Hz
Main article: Vibration

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by

$$m\ddot{x} + kx = 0 \qquad\text{or}\qquad m\ddot{x} = -kx.$$

That is, acceleration is proportional to position (i.e., we expect x to be sinusoidal in time).

In n dimensions, m becomes a mass matrix and k a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem

$$kx = \omega^2 m x,$$

where ω² is the eigenvalue and ω is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k alone. Furthermore, damped vibration, governed by

$$m\ddot{x} + c\dot{x} + kx = 0,$$

leads to a so-called quadratic eigenvalue problem,

$$\left(\omega^2 m + \omega c + k\right) x = 0.$$

This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.
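A minimal numerical sketch of the undamped generalized eigenvalue problem kx = ω²mx, assuming Python with NumPy and SciPy; the 2-degree-of-freedom mass and stiffness matrices are placeholder values.

```python
import numpy as np
from scipy.linalg import eigh

m = np.array([[2.0, 0.0],
              [0.0, 1.0]])        # mass matrix (illustrative)
k = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])      # stiffness matrix (illustrative)

# scipy.linalg.eigh(a, b) solves the generalized problem a x = lambda b x
# for symmetric a and symmetric positive definite b; here lambda = omega^2.
omega_squared, modes = eigh(k, m)
frequencies = np.sqrt(omega_squared)   # natural angular frequencies
print(frequencies)
print(modes)                           # columns are the corresponding mode shapes
```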

The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems.

Tensor of moment of inertia

In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.

Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.

Schrödinger equation

The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n = 1, 2, 3, ...) and angular momentum (increasing across: s, p, d, ...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.

An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: {\displaystyle H\psi _{E}=E\psi _{E}} where the Hamiltonian H is a second-order differential operator and the wavefunction ψE is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψE within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψE and H can be represented as a one-dimensional array (i.e., a vector) and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.
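
A minimal sketch of such a matrix representation (assuming numpy; a finite-difference grid and a harmonic potential are chosen purely for illustration, in units where ħ = m = 1):

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + V(x) with V(x) = x^2 / 2 on a uniform grid.
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Three-point finite-difference second derivative gives a tridiagonal matrix.
main = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The smallest eigenvalues approximate the bound-state energies 0.5, 1.5, 2.5, ...
energies, wavefunctions = np.linalg.eigh(H)
print(energies[:4])
```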

The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by |ΨE⟩. In this notation, the Schrödinger equation is {\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle } where |ΨE⟩ is an eigenstate of H and E represents the eigenvalue. H is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above H|ΨE⟩ is understood to be the vector obtained by application of the transformation H to |ΨE⟩.

Wave transport


Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, coherent wave transport through the system is ultimately a deterministic process which can be described by a field transmission matrix t.[44][45] The eigenvectors of the transmission operator t†t form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways along which waves can travel through the system. The eigenvalues, τ, of t†t correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution, with τmax = 1 and τmin = 0.[45] Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels.[46]
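
Computationally, the eigenchannel decomposition is an eigendecomposition of t†t (equivalently, a singular value decomposition of t). A sketch assuming numpy, with a random complex matrix standing in for a measured transmission matrix (a plain Gaussian matrix will not reproduce the bimodal distribution of a real diffusive medium):

```python
import numpy as np

# Hypothetical field transmission matrix t relating N input to N output modes.
rng = np.random.default_rng(0)
N = 64
t = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# Eigenvectors of t^dagger t are the eigenchannel input wavefronts;
# the eigenvalues tau are the corresponding intensity transmittances.
tau, inputs = np.linalg.eigh(t.conj().T @ t)

print(tau.min(), tau.max())   # nonnegative transmittances of this random model
```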

Molecular orbitals


In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.
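
As a schematic illustration (assuming scipy; the 2×2 Fock and overlap matrices below are invented, and a real self-consistent field calculation would rebuild F from the resulting orbitals and iterate), a single Roothaan step is the generalized symmetric eigenproblem F C = S C ε:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative only: a tiny hypothetical Fock matrix F and overlap matrix S
# in a non-orthogonal basis (not taken from any real calculation).
F = np.array([[-2.5, -1.0],
              [-1.0, -1.5]])
S = np.array([[ 1.0,  0.4],
              [ 0.4,  1.0]])

# Roothaan equations F C = S C epsilon: generalized symmetric eigenproblem.
epsilon, C = eigh(F, S)

print(epsilon)   # orbital energies (eigenvalues)
print(C)         # orbital coefficients (eigenvectors, as columns)
```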

Geology and glaciology


In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast's fabric can be summarized in 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can then be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram.[47][48] A stereographic projection projects three-dimensional space onto a two-dimensional plane. One type of stereographic projection is the Wulff net, which is commonly used in crystallography to create stereograms.[49]

The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered v1, v2, v3 by their eigenvalues E1 ≥ E2 ≥ E3;[50] v1 then is the primary orientation/dip of the clast, v2 is the secondary, and v3 is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric. If E1 = E2 = E3, the fabric is said to be isotropic. If E1 = E2 > E3, the fabric is said to be planar. If E1 > E2 > E3, the fabric is said to be linear.[51]
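
A sketch of the computation (assuming numpy, with a handful of made-up clast long-axis unit vectors): form the orientation tensor and read off E1 ≥ E2 ≥ E3 and v1, v2, v3.

```python
import numpy as np

# Hypothetical clast long-axis directions, one unit vector per clast.
axes = np.array([[0.9,  0.1, 0.42],
                 [0.8, -0.2, 0.57],
                 [0.7,  0.3, 0.65]])
axes /= np.linalg.norm(axes, axis=1, keepdims=True)

# Orientation tensor: average of the outer products of the unit vectors.
T = axes.T @ axes / len(axes)

# Eigenvalues E1 >= E2 >= E3 and eigenvectors v1, v2, v3 summarize the fabric.
E, V = np.linalg.eigh(T)
E, V = E[::-1], V[:, ::-1]     # reorder to descending eigenvalues
print(E)                        # fabric strengths E1, E2, E3
print(V[:, 0])                  # v1: primary clast orientation
```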

Basic reproduction number

Main article: Basic reproduction number

The basic reproduction number (R0) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R0 is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, tG, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time tG has passed. The value R0 is then the largest eigenvalue of the next generation matrix.[52][53]
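
For example (a sketch assuming numpy; the 2×2 next generation matrix for two population groups is hypothetical), R0 is the spectral radius of the next generation matrix:

```python
import numpy as np

# Hypothetical next-generation matrix for two groups: K[i, j] is the expected
# number of new infections in group i caused by one infectious case in group j.
K = np.array([[1.2, 0.5],
              [0.3, 0.8]])

# R0 is the largest eigenvalue (spectral radius) of K.
R0 = np.abs(np.linalg.eigvals(K)).max()
print(R0)   # here about 1.44, i.e. the infection would spread
```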

Eigenfaces

Eigenfaces as examples of eigenvectors
Main article: Eigenface

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel.[54] The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been conducted on eigen vision systems for determining hand gestures.
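
A compact sketch of the eigenface computation (assuming numpy; random pixel data stands in for an actual face dataset, and the SVD is used so the full pixel-by-pixel covariance matrix never has to be formed explicitly):

```python
import numpy as np

# Stand-in data: each "face" is a flattened vector of pixel brightnesses.
rng = np.random.default_rng(0)
n_images, h, w = 200, 32, 32
faces = rng.random((n_images, h * w))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The right singular vectors of the centered data are the eigenvectors of the
# covariance matrix of the pixel vectors, i.e. the eigenfaces.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt                                # rows are eigenfaces
weights = centered @ eigenfaces[:20].T         # each face as 20 coefficients

print(eigenfaces.shape, weights.shape)
```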

Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.

See also


Notes

  1. ^Note:
    • In 1751, Leonhard Euler proved that any body has a principal axis of rotation: Leonhard Euler (presented: October 1751; published: 1760)"Du mouvement d'un corps solide quelconque lorsqu'il tourne autour d'un axe mobile" (On the movement of any solid body while it rotates around a moving axis),Histoire de l'Académie royale des sciences et des belles lettres de Berlin, pp. 176–227.On p. 212, Euler proves that any body contains a principal axis of rotation:"Théorem. 44. De quelque figure que soit le corps, on y peut toujours assigner un tel axe, qui passe par son centre de gravité, autour duquel le corps peut tourner librement & d'un mouvement uniforme." (Theorem. 44. Whatever be the shape of the body, one can always assign to it such an axis, which passes through its center of gravity, around which it can rotate freely and with a uniform motion.)
    • In 1755,Johann Andreas Segner proved that any body has three principal axes of rotation: Johann Andreas Segner,Specimen theoriae turbinum [Essay on the theory of tops (i.e., rotating bodies)] ( Halle ("Halae"), (Germany): Gebauer, 1755). (https://books.google.com/books?id=29 p. xxviiii [29]), Segner derives a third-degree equation int, which proves that a body has three principal axes of rotation. He then states (on the same page):"Non autem repugnat tres esse eiusmodi positiones plani HM, quia in aequatione cubica radices tres esse possunt, et tres tangentis t valores." (However, it is not inconsistent [that there] be three such positions of the plane HM, because in cubic equations, [there] can be three roots, and three values of the tangent t.)
    • The relevant passage of Segner's work was discussed briefly byArthur Cayley. See: A. Cayley (1862) "Report on the progress of the solution of certain special problems of dynamics,"Report of the Thirty-second meeting of the British Association for the Advancement of Science; held at Cambridge in October 1862,32: 184–252; see especiallypp. 225–226.
  2. ^Kline 1972, pp. 807–808 Augustin Cauchy (1839) "Mémoire sur l'intégration des équations linéaires" (Memoir on the integration of linear equations),Comptes rendus,8: 827–830, 845–865, 889–907, 931–937.From p. 827:"On sait d'ailleurs qu'en suivant la méthode de Lagrange, on obtient pour valeur générale de la variable prinicipale une fonction dans laquelle entrent avec la variable principale les racines d'une certaine équation que j'appellerai l'équation caractéristique, le degré de cette équation étant précisément l'order de l'équation différentielle qu'il s'agit d'intégrer." (One knows, moreover, that by following Lagrange's method, one obtains for the general value of the principal variable a function in which there appear, together with the principal variable, the roots of a certain equation that I will call the "characteristic equation", the degree of this equation being precisely the order of the differential equation that must be integrated.)
  3. ^See:
    • David Hilbert (1904)"Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen. (Erste Mitteilung)" (Fundamentals of a general theory of linear integral equations. (First report)),Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse (News of the Philosophical Society at Göttingen, mathematical-physical section), pp. 49–91.From p. 51:"Insbesondere in dieser ersten Mitteilung gelange ich zu Formeln, die die Entwickelung einer willkürlichen Funktion nach gewissen ausgezeichneten Funktionen, die ich 'Eigenfunktionen' nenne, liefern: ..." (In particular, in this first report I arrive at formulas that provide the [series] development of an arbitrary function in terms of some distinctive functions, which I calleigenfunctions: ... ) Later on the same page:"Dieser Erfolg ist wesentlich durch den Umstand bedingt, daß ich nicht, wie es bisher geschah, in erster Linie auf den Beweis für die Existenz der Eigenwerte ausgehe, ... " (This success is mainly attributable to the fact that I do not, as it has happened until now, first of all aim at a proof of the existence of eigenvalues...)
    • For the origin and evolution of the terms eigenvalue, characteristic value, etc., see:Earliest Known Uses of Some of the Words of Mathematics (E)
  4. ^For a proof of this lemma, seeRoman 2008, p. 186, Theorem 8.2;Shilov 1977, p. 109;Hefferon 2001, p. 364; andBeezer 2006, p. 469, Theorem EDELI.
  5. ^By doing Gaussian elimination over formal power series truncated to n terms it is possible to get away with O(n⁴) operations, but that does not take combinatorial explosion into account.

Citations

  1. ^Burden & Faires 1993, p. 401.
  2. ^Strang, Gilbert. "6: Eigenvalues and Eigenvectors".Introduction to Linear Algebra(PDF) (5 ed.). Wellesley-Cambridge Press.
  3. ^abHerstein 1964, pp. 228, 229.
  4. ^abNering 1970, p. 38.
  5. ^Betteridge 1965.
  6. ^ab"Eigenvector and Eigenvalue".www.mathsisfun.com. Retrieved19 August 2020.
  7. ^Press et al. 2007, p. 536.
  8. ^Wolfram.com: Eigenvector.
  9. ^abcNering 1970, p. 107.
  10. ^Hawkins 1975, §2.
  11. ^abcdHawkins 1975, §3.
  12. ^Kline 1972, p. 673.
  13. ^abKline 1972, pp. 807–808.
  14. ^Kline 1972, pp. 715–716.
  15. ^Kline 1972, pp. 706–707.
  16. ^Kline 1972, p. 1063.
  17. ^Aldrich 2006.
  18. ^Francis 1961, pp. 265–271.
  19. ^Kublanovskaya 1962.
  20. ^Golub & Van Loan 1996, §7.3.
  21. ^Meyer 2000, §7.3.
  22. ^Cornell University Department of Mathematics (2016)Lower-Level Courses for Freshmen and SophomoresArchived 7 April 2018 at theWayback Machine. Accessed on 2016-03-27.
  23. ^University of Michigan Mathematics (2016)Math Course CatalogueArchived 2015-11-01 at theWayback Machine. Accessed on 2016-03-27.
  24. ^Press et al. 2007, p. 38.
  25. ^Fraleigh 1976, p. 358.
  26. ^abGolub & Van Loan 1996, p. 316.
  27. ^Anton 1987, pp. 305, 307.
  28. ^abBeauregard & Fraleigh 1973, p. 307.
  29. ^Herstein 1964, p. 272.
  30. ^Nering 1970, pp. 115–116.
  31. ^Herstein 1964, p. 290.
  32. ^Nering 1970, p. 116.
  33. ^Wolchover 2019.
  34. ^abDenton et al. 2022.
  35. ^Van Mieghem 2014.
  36. ^Van Mieghem 2024.
  37. ^Korn & Korn 2000, Section 14.3.5a.
  38. ^Friedberg, Insel & Spence 1989, p. 217.
  39. ^Roman 2008, p. 186 §8.
  40. ^Nering 1970, p. 107;Shilov 1977, p. 109.
  41. ^Lipschutz & Lipson 2002, p. 111.
  42. ^Nering 1970, p. 107;Golub & Van Loan 1996, p. 316;Roman 2008, p. 189 §8.
  43. ^abcdTrefethen & Bau 1997.
  44. ^Vellekoop & Mosk 2007, pp. 2309–2311.
  45. ^abRotter & Gigan 2017, p. 15005.
  46. ^Bender et al. 2020.
  47. ^Graham & Midgley 2000, pp. 1473–1477.
  48. ^Sneed & Folk 1958, pp. 114–150.
  49. ^Knox-Robinson & Gardoll 1998, p. 243.
  50. ^Busche, Christian; Schiller, Beate."Endogene Geologie - Ruhr-Universität Bochum".www.ruhr-uni-bochum.de.
  51. ^Benn & Evans 2004, pp. 103–107.
  52. ^Diekmann, Heesterbeek & Metz 1990, pp. 365–382.
  53. ^Heesterbeek & Diekmann 2000.
  54. ^Xirouhakis, Votsis & Delopoulus 1999.

Sources


Further reading


External links

The Wikibook Linear Algebra has a page on the topic of: Eigenvalues and Eigenvectors

Wikiversity uses introductory physics to introduce Eigenvalues and eigenvectors
