Spectral theorem

From Wikipedia, the free encyclopedia
Result about when a matrix can be diagonalized

In linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.

Examples of operators to which the spectral theorem applies are self-adjoint operators or, more generally, normal operators on Hilbert spaces.

The spectral theorem also provides a canonical decomposition, called the spectral decomposition, of the underlying vector space on which the operator acts.

Augustin-Louis Cauchy proved the spectral theorem for symmetric matrices, i.e., that every real, symmetric matrix is diagonalizable. In addition, Cauchy was the first to be systematic about determinants.[1][2] The spectral theorem as generalized by John von Neumann is today perhaps the most important result of operator theory.

This article mainly focuses on the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.

Finite-dimensional case


Hermitian maps and Hermitian matrices


We begin by considering a Hermitian matrix on $\mathbb{C}^{n}$ (but the following discussion will be adaptable to the more restrictive case of symmetric matrices on $\mathbb{R}^{n}$). We consider a Hermitian map A on a finite-dimensional complex inner product space V endowed with a positive definite sesquilinear inner product $\langle \cdot ,\cdot \rangle$. The Hermitian condition on A means that for all $x, y \in V$, $$\langle Ax,y\rangle =\langle x,Ay\rangle .$$

An equivalent condition is that A* = A, where A* is the Hermitian conjugate of A. In the case that A is identified with a Hermitian matrix, the matrix of A* is equal to its conjugate transpose. (If A is a real matrix, then this is equivalent to A^T = A, that is, A is a symmetric matrix.)

This condition implies that all eigenvalues of a Hermitian map are real: to see this, it is enough to apply it to the case when x = y is an eigenvector. (Recall that an eigenvector of a linear map A is a non-zero vector v such that Av = λv for some scalar λ. The value λ is the corresponding eigenvalue. Moreover, the eigenvalues are roots of the characteristic polynomial.)

Theorem. If A is Hermitian on V, then there exists an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue of A is real.

We provide a sketch of a proof for the case where the underlying field of scalars is the complex numbers.

By the fundamental theorem of algebra, applied to the characteristic polynomial of A, there is at least one complex eigenvalue $\lambda_1$ and corresponding eigenvector $v_1$, which must by definition be non-zero. Then since $$\lambda _{1}\langle v_{1},v_{1}\rangle =\langle A(v_{1}),v_{1}\rangle =\langle v_{1},A(v_{1})\rangle ={\bar {\lambda }}_{1}\langle v_{1},v_{1}\rangle ,$$ we find that $\lambda_1$ is real. Now consider the space $\mathcal{K}^{n-1}=\operatorname{span}(v_{1})^{\perp }$, the orthogonal complement of $v_1$. By Hermiticity, $\mathcal{K}^{n-1}$ is an invariant subspace of A. To see that, consider any $k\in \mathcal{K}^{n-1}$, so that $\langle k,v_{1}\rangle =0$ by definition of $\mathcal{K}^{n-1}$. To satisfy invariance, we need to check that $A(k)\in \mathcal{K}^{n-1}$. This is true because $\langle A(k),v_{1}\rangle =\langle k,A(v_{1})\rangle =\langle k,\lambda _{1}v_{1}\rangle =0$. Applying the same argument to $\mathcal{K}^{n-1}$ shows that A has at least one real eigenvalue $\lambda _{2}$ and a corresponding eigenvector $v_{2}\in \mathcal{K}^{n-1}$ orthogonal to $v_1$. This can be used to build another invariant subspace $\mathcal{K}^{n-2}=\operatorname{span}(\{v_{1},v_{2}\})^{\perp }$. Finite induction then finishes the proof.

The matrix representation of A in a basis of eigenvectors is diagonal, and by construction the proof gives a basis of mutually orthogonal eigenvectors; by choosing them to be unit vectors one obtains an orthonormal basis of eigenvectors. A can be written as a linear combination of pairwise orthogonal projections, called its spectral decomposition. Let $$V_{\lambda }=\{v\in V:Av=\lambda v\}$$ be the eigenspace corresponding to an eigenvalue $\lambda$. Note that the definition does not depend on any choice of specific eigenvectors. In general, V is the orthogonal direct sum of the spaces $V_{\lambda }$ where $\lambda$ ranges over the spectrum of A.
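The decomposition can be checked numerically. A minimal sketch with NumPy (the 2×2 Hermitian matrix is illustrative, not taken from the article): `eigh` returns real eigenvalues and an orthonormal eigenbasis, from which A is reassembled as a sum of orthogonal projections.

```python
import numpy as np

# An illustrative 2x2 Hermitian matrix (A* = A).
A = np.array([[2.0, 1.0j], [-1.0j, 3.0]])

# eigh is specialized for Hermitian matrices: it returns real eigenvalues
# and an orthonormal basis of eigenvectors (the columns of U).
eigvals, U = np.linalg.eigh(A)

# Spectral decomposition: A = sum_k lambda_k * (projection onto v_k).
A_rebuilt = sum(lam * np.outer(U[:, k], U[:, k].conj())
                for k, lam in enumerate(eigvals))

assert np.allclose(A, A_rebuilt)               # decomposition recovers A
assert np.allclose(U.conj().T @ U, np.eye(2))  # eigenbasis is orthonormal
assert np.all(np.isreal(eigvals))              # Hermitian => real spectrum
```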

When the matrix being decomposed is Hermitian, the spectral decomposition is a special case of the Schur decomposition (see the proof in the case of normal matrices below).

Spectral decomposition and the singular value decomposition


The spectral decomposition is a special case of the singular value decomposition, which states that any matrix $A\in \mathbb{C}^{m\times n}$ can be expressed as $A=U\Sigma V^{*}$, where $U\in \mathbb{C}^{m\times m}$ and $V\in \mathbb{C}^{n\times n}$ are unitary matrices and $\Sigma \in \mathbb{R}^{m\times n}$ is a diagonal matrix. The diagonal entries of $\Sigma$ are uniquely determined by A and are known as the singular values of A. If A is Hermitian, then $A^{*}=A$ and $V\Sigma U^{*}=U\Sigma V^{*}$, which implies $U=V$.
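A hedged numerical check of the relationship (a random Hermitian positive semidefinite matrix is used so that eigenvalues are nonnegative and no sign subtleties arise): the singular values of a Hermitian matrix are the absolute values of its eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B.conj().T @ B            # Hermitian and positive semidefinite

U, s, Vh = np.linalg.svd(A)
eigvals = np.linalg.eigvalsh(A)

# For a Hermitian matrix the singular values are |eigenvalues|
# (here the eigenvalues are already nonnegative, since A is PSD).
assert np.allclose(np.sort(s), np.sort(np.abs(eigvals)))

# The SVD factors reassemble A.
assert np.allclose(U @ np.diag(s) @ Vh, A)
```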

Normal matrices

Main article: Normal matrix

The spectral theorem extends to a more general class of matrices. Let A be an operator on a finite-dimensional inner product space. A is said to be normal if A*A = AA*.

One can show that A is normal if and only if it is unitarily diagonalizable, using the Schur decomposition: any matrix can be written as A = UTU*, where U is unitary and T is upper triangular. If A is normal, then TT* = T*T, so T must be diagonal, since a normal upper triangular matrix is diagonal (see normal matrix). The converse is obvious.

In other words, A is normal if and only if there exists a unitary matrix U such that $$A=UDU^{*},$$ where D is a diagonal matrix. Then, the entries of the diagonal of D are the eigenvalues of A. The column vectors of U are the eigenvectors of A and they are orthonormal. Unlike the Hermitian case, the entries of D need not be real.
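A minimal sketch with NumPy: a rotation matrix is normal but not Hermitian, and it is unitarily diagonalized with genuinely complex diagonal entries (the angle is illustrative).

```python
import numpy as np

# A rotation matrix is normal (A A* = A* A) but not symmetric/Hermitian.
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # normality

# For a normal matrix with distinct eigenvalues, eig returns eigenvectors
# that are orthonormal (up to roundoff), so A = U D U* with U unitary.
eigvals, U = np.linalg.eig(A)
D = np.diag(eigvals)

assert np.allclose(U @ D @ U.conj().T, A)
assert np.allclose(U.conj().T @ U, np.eye(2))

# The eigenvalues e^{+-i theta} are genuinely complex.
assert np.allclose(np.sort(eigvals.imag),
                   np.sort([-np.sin(theta), np.sin(theta)]))
```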

Compact self-adjoint operators

See also: Compact operator on Hilbert space § Spectral theorem

In the more general setting of Hilbert spaces, which may have an infinite dimension, the statement of the spectral theorem for compact self-adjoint operators is virtually the same as in the finite-dimensional case.

Theorem. Suppose A is a compact self-adjoint operator on a (real or complex) Hilbert space V. Then there is an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue is real.

As for Hermitian matrices, the key point is to prove the existence of at least one nonzero eigenvector. One cannot rely on determinants to show existence of eigenvalues, but one can use a maximization argument analogous to the variational characterization of eigenvalues.

If the compactness assumption is removed, then it is not true that every self-adjoint operator has eigenvectors. For example, the multiplication operator $M_{x}$ on $L^{2}([0,1])$ which takes each $\psi (x)\in L^{2}([0,1])$ to $x\psi (x)$ is bounded and self-adjoint, but has no eigenvectors. However, its spectrum, suitably defined, is still equal to $[0,1]$; see spectrum of a bounded operator.
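A finite-dimensional caricature of this example (grid size illustrative): discretizing $M_x$ on n sample points gives a diagonal matrix whose eigenvalues are exactly the grid points, which fill out $[0,1]$ as n grows, mirroring the fact that the continuum operator has spectrum $[0,1]$ yet no eigenvectors in $L^2$.

```python
import numpy as np

# Discretize (M_x psi)(x) = x * psi(x) on [0, 1]: sampling at n grid
# points turns it into a diagonal matrix whose eigenvalues are exactly
# the grid points. As n grows these fill out [0, 1], the spectrum of
# the (eigenvector-free) continuum operator.
n = 1000
x = np.linspace(0.0, 1.0, n)
M = np.diag(x)

eigvals = np.linalg.eigvalsh(M)
assert np.allclose(np.sort(eigvals), x)            # spectrum = grid points
assert 0.0 <= eigvals.min() and eigvals.max() <= 1.0
```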

Bounded self-adjoint operators

See also: Eigenfunction and Self-adjoint operator § Spectral theorem

Possible absence of eigenvectors


The next generalization we consider is that of bounded self-adjoint operators on a Hilbert space. Such operators may have no eigenvectors: for instance, let A be the operator of multiplication by t on $L^{2}([0,1])$, that is,[3] $$[Af](t)=tf(t).$$

This operator does not have any eigenvectors in $L^{2}([0,1])$, though it does have eigenvectors in a larger space. Namely, the distribution $f(t)=\delta (t-t_{0})$, where $\delta$ is the Dirac delta function, is an eigenvector when construed in an appropriate sense. The Dirac delta function is, however, not a function in the classical sense and does not lie in the Hilbert space $L^2[0,1]$. Thus, the delta-functions are "generalized eigenvectors" of A but not eigenvectors in the usual sense.

Spectral subspaces and projection-valued measures


In the absence of (true) eigenvectors, one can look for a "spectral subspace" consisting of almost eigenvectors, i.e., a closed subspace $V_{E}$ of V associated with a Borel set $E\subset \sigma (A)$ in the spectrum of A. This subspace can be thought of as the closed span of generalized eigenvectors for A with eigenvalues in E.[4] In the above example, where $[Af](t)=tf(t)$, we might consider the subspace of functions supported on a small interval $[a,a+\varepsilon ]$ inside $[0,1]$. This space is invariant under A, and for any $f$ in this subspace, $Af$ is very close to $af$. Each subspace, in turn, is encoded by the associated projection operator, and the collection of all the subspaces is then represented by a projection-valued measure.

One formulation of the spectral theorem expresses the operator A as an integral of the coordinate function over the operator's spectrum $\sigma (A)$ with respect to a projection-valued measure:[5] $$A=\int _{\sigma (A)}\lambda \,d\pi (\lambda ).$$ When the self-adjoint operator in question is compact, this version of the spectral theorem reduces to something similar to the finite-dimensional spectral theorem above, except that the operator is expressed as a finite or countably infinite linear combination of projections; that is, the measure consists only of atoms.

Multiplication operator version


An alternative formulation of the spectral theorem says that every bounded self-adjoint operator is unitarily equivalent to a multiplication operator, a relatively simple type of operator.

Theorem.[6] Let A be a bounded self-adjoint operator on a Hilbert space V. Then there is a measure space $(X,\Sigma ,\mu )$, a real-valued essentially bounded measurable function $\lambda$ on X, and a unitary operator $U:V\to L^{2}(X,\mu )$ such that $$U^{*}TU=A,$$ where T is the multiplication operator $$[Tf](x)=\lambda (x)f(x)$$ and $\|T\|=\|\lambda \|_{\infty }$.

Multiplication operators are a direct generalization of diagonal matrices. A finite-dimensional Hermitian vector space V may be coordinatized as the space of functions $f:B\to \mathbb{C}$ from a basis B to the complex numbers, so that the B-coordinates of a vector are the values of the corresponding function $f$. The finite-dimensional spectral theorem for a self-adjoint operator $A:V\to V$ states that there exists an orthonormal basis of eigenvectors B, so that the inner product becomes the dot product with respect to the B-coordinates: thus V is isomorphic to $L^{2}(B,\mu )$ for the discrete unit measure $\mu$ on B. Also, A is unitarily equivalent to the multiplication operator $[Tf](v)=\lambda (v)f(v)$, where $\lambda (v)$ is the eigenvalue of $v\in B$: that is, A multiplies each B-coordinate by the corresponding eigenvalue $\lambda (v)$, the action of a diagonal matrix. Finally, the operator norm $\|A\|=\|T\|$ is equal to the magnitude of the largest eigenvalue, $\|\lambda \|_{\infty }$.
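In finite dimensions this unitary equivalence is just diagonalization; a short NumPy sketch (the symmetric matrix is illustrative):

```python
import numpy as np

# An illustrative real symmetric (hence self-adjoint) matrix.
A = np.array([[2.0, 1.0], [1.0, 2.0]])

# Diagonalization: the unitary U of eigenvectors carries A to the
# "multiplication" operator T = diag(lambda), i.e. A = U T U*.
lams, U = np.linalg.eigh(A)
T = np.diag(lams)

assert np.allclose(U @ T @ U.conj().T, A)

# The operator norm of A equals the largest |eigenvalue|, matching
# ||T|| = ||lambda||_inf for a multiplication operator.
assert np.isclose(np.linalg.norm(A, 2), np.abs(lams).max())
```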

The spectral theorem is the beginning of the vast research area of functional analysis called operator theory; see also spectral measure.

There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now $\lambda$ may be complex-valued.

Direct integrals


There is also a formulation of the spectral theorem in terms of direct integrals. It is similar to the multiplication-operator formulation, but more canonical.

Let A be a bounded self-adjoint operator and let $\sigma (A)$ be the spectrum of A. The direct-integral formulation of the spectral theorem associates two quantities to A: first, a measure $\mu$ on $\sigma (A)$, and second, a family of Hilbert spaces $\{H_{\lambda }\},\ \lambda \in \sigma (A)$. We then form the direct integral Hilbert space $$\int _{\mathbf {R} }^{\oplus }H_{\lambda }\,d\mu (\lambda ).$$ The elements of this space are functions (or "sections") $s(\lambda ),\ \lambda \in \sigma (A)$, such that $s(\lambda )\in H_{\lambda }$ for all $\lambda$. The direct-integral version of the spectral theorem may be expressed as follows:[7]

Theorem. If A is a bounded self-adjoint operator, then A is unitarily equivalent to the "multiplication by $\lambda$" operator on $$\int _{\mathbf {R} }^{\oplus }H_{\lambda }\,d\mu (\lambda )$$ for some measure $\mu$ and some family $\{H_{\lambda }\}$ of Hilbert spaces. The measure $\mu$ is uniquely determined by A up to measure-theoretic equivalence; that is, any two measures associated to the same A have the same sets of measure zero. The dimensions of the Hilbert spaces $H_{\lambda }$ are uniquely determined by A up to a set of $\mu$-measure zero.

The spaces $H_{\lambda }$ can be thought of as something like "eigenspaces" for A. Note, however, that unless the one-element set $\{\lambda \}$ has positive measure, the space $H_{\lambda }$ is not actually a subspace of the direct integral. Thus, the $H_{\lambda }$'s should be thought of as "generalized eigenspaces": their elements are "eigenvectors" that do not actually belong to the Hilbert space.

Although both the multiplication-operator and direct-integral formulations of the spectral theorem express a self-adjoint operator as unitarily equivalent to a multiplication operator, the direct-integral approach is more canonical. First, the set over which the direct integral takes place (the spectrum of the operator) is canonical. Second, the function we are multiplying by is canonical in the direct-integral approach: simply the function $\lambda \mapsto \lambda$.

Cyclic vectors and simple spectrum


A vector $\varphi$ is called a cyclic vector for A if the vectors $\varphi ,A\varphi ,A^{2}\varphi ,\ldots$ span a dense subspace of the Hilbert space. Suppose A is a bounded self-adjoint operator for which a cyclic vector exists. In that case, there is no distinction between the direct-integral and multiplication-operator formulations of the spectral theorem. Indeed, in that case, there is a measure $\mu$ on the spectrum $\sigma (A)$ of A such that A is unitarily equivalent to the "multiplication by $\lambda$" operator on $L^{2}(\sigma (A),\mu )$.[8] This result represents A simultaneously as a multiplication operator and as a direct integral, since $L^{2}(\sigma (A),\mu )$ is just a direct integral in which each Hilbert space $H_{\lambda }$ is just $\mathbb{C}$.

Not every bounded self-adjoint operator admits a cyclic vector; indeed, by the uniqueness in the direct integral decomposition, this can occur only when all the $H_{\lambda }$'s have dimension one. When this happens, we say that A has "simple spectrum" in the sense of spectral multiplicity theory. That is, a bounded self-adjoint operator that admits a cyclic vector should be thought of as the infinite-dimensional generalization of a self-adjoint matrix with distinct eigenvalues (i.e., each eigenvalue has multiplicity one).
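For self-adjoint matrices this can be tested directly: a cyclic vector exists exactly when the eigenvalues are distinct, which one can check via the rank of the Krylov matrix $[\varphi, A\varphi, A^2\varphi, \ldots]$. A hedged sketch (the helper `has_cyclic_vector` is ours, not a library function):

```python
import numpy as np

def has_cyclic_vector(A, phi):
    """True if [phi, A phi, A^2 phi, ...] spans the whole space."""
    n = A.shape[0]
    K = np.column_stack([np.linalg.matrix_power(A, k) @ phi
                         for k in range(n)])
    return np.linalg.matrix_rank(K) == n

distinct = np.diag([1.0, 2.0, 3.0])   # simple spectrum
repeated = np.diag([1.0, 1.0, 2.0])   # eigenvalue 1 has multiplicity 2
phi = np.array([1.0, 1.0, 1.0])

assert has_cyclic_vector(distinct, phi)
# With a repeated eigenvalue, no vector can be cyclic.
assert not has_cyclic_vector(repeated, phi)
```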

Although not every A admits a cyclic vector, it is easy to see that we can decompose the Hilbert space as a direct sum of invariant subspaces on which A has a cyclic vector. This observation is the key to the proofs of the multiplication-operator and direct-integral forms of the spectral theorem.

Functional calculus


One important application of the spectral theorem (in whatever form) is the idea of defining a functional calculus. That is, given a function $f$ defined on the spectrum of A, we wish to define an operator $f(A)$. If $f$ is simply a positive power, $f(x)=x^{n}$, then $f(A)$ is just the $n$-th power of A, $A^{n}$. The interesting cases are where $f$ is a nonpolynomial function such as a square root or an exponential. Either of the versions of the spectral theorem provides such a functional calculus.[9] In the direct-integral version, for example, $f(A)$ acts as the "multiplication by $f$" operator in the direct integral: $$[f(A)s](\lambda )=f(\lambda )s(\lambda ).$$ That is to say, each space $H_{\lambda }$ in the direct integral is a (generalized) eigenspace for $f(A)$ with eigenvalue $f(\lambda )$.

Unbounded self-adjoint operators


Many important linear operators which occur in analysis, such as differential operators, are unbounded. There is also a spectral theorem for self-adjoint operators that applies in these cases. To give an example, every constant-coefficient differential operator is unitarily equivalent to a multiplication operator. Indeed, the unitary operator that implements this equivalence is the Fourier transform; the multiplication operator is a type of Fourier multiplier.
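A sketch of this for the operator $d/dx$ on a periodic grid: the discrete Fourier transform carries differentiation to multiplication by $ik$, the standard spectral-derivative recipe (grid size and test function are illustrative).

```python
import numpy as np

# Differentiate sin(x) on a 2*pi-periodic grid by multiplying in Fourier
# space: the FFT is the unitary that turns d/dx into multiplication by i*k.
n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers for period 2*pi

f = np.sin(x)
df = np.fft.ifft(1j * k * np.fft.fft(f)).real

# Spectral differentiation reproduces the exact derivative cos(x).
assert np.allclose(df, np.cos(x), atol=1e-10)
```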

In general, the spectral theorem for self-adjoint operators may take several equivalent forms.[10] Notably, all of the formulations given in the previous section for bounded self-adjoint operators—the projection-valued measure version, the multiplication-operator version, and the direct-integral version—continue to hold for unbounded self-adjoint operators, with small technical modifications to deal with domain issues. Specifically, the only reason the multiplication operator A on $L^{2}([0,1])$ is bounded is the choice of the domain $[0,1]$. The same operator on, e.g., $L^{2}(\mathbb{R} )$ would be unbounded.

The notion of "generalized eigenvectors" naturally extends to unbounded self-adjoint operators, as they are characterized as non-normalizable eigenvectors. Contrary to the case of almost eigenvectors, however, the eigenvalues can be real or complex and, even if they are real, do not necessarily belong to the spectrum. However, for self-adjoint operators there always exists a real subset of "generalized eigenvalues" such that the corresponding set of eigenvectors is complete.[11]


Notes

  1. ^ Hawkins, Thomas (1975). "Cauchy and the spectral theory of matrices". Historia Mathematica. 2: 1–29. doi:10.1016/0315-0860(75)90032-4.
  2. ^ A Short History of Operator Theory by Evans M. Harrell II
  3. ^ Hall 2013, Section 6.1
  4. ^ Hall 2013, Theorem 7.2.1
  5. ^ Hall 2013, Theorem 7.12
  6. ^ Hall 2013, Theorem 7.20
  7. ^ Hall 2013, Theorem 7.19
  8. ^ Hall 2013, Lemma 8.11
  9. ^ E.g., Hall 2013, Definition 7.13
  10. ^ See Section 10.1 of Hall 2013
  11. ^ de la Madrid Modino 2001, pp. 95–97.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Spectral_theorem&oldid=1314199357"