Min-max theorem

From Wikipedia, the free encyclopedia
Variational characterization of eigenvalues of compact Hermitian operators on Hilbert spaces
Not to be confused with Minimax theorem.
"Variational theorem" redirects here; not to be confused with variational principle.

In linear algebra and functional analysis, the min-max theorem, or variational theorem, or Courant–Fischer–Weyl min-max principle, is a result that gives a variational characterization of eigenvalues of compact Hermitian operators on Hilbert spaces. It can be viewed as the starting point of many results of similar nature.

This article first discusses the finite-dimensional case and its applications before considering compact operators on infinite-dimensional Hilbert spaces. We will see that for compact operators, the proof of the main theorem uses essentially the same idea as in the finite-dimensional argument.

In the case that the operator is non-Hermitian, the theorem provides an equivalent characterization of the associated singular values. The min-max theorem can be extended to self-adjoint operators that are bounded below.

Matrices


Let A be an n × n Hermitian matrix. As with many other variational results on eigenvalues, one considers the Rayleigh–Ritz quotient RA : Cn \ {0} → R defined by

{\displaystyle R_{A}(x)={\frac {(Ax,x)}{(x,x)}}}

where (⋅, ⋅) denotes the Euclidean inner product on Cn. Equivalently, the Rayleigh–Ritz quotient can be replaced by

{\displaystyle f(x)=(Ax,x),\;\|x\|=1.}

The Rayleigh quotient of an eigenvector v is its associated eigenvalue λ, because RA(v) = (λv, v)/(v, v) = λ. For a Hermitian matrix A, the range of the continuous functions RA(x) and f(x) is a compact interval [a, b] of the real line. The maximum b and the minimum a are the largest and smallest eigenvalue of A, respectively. The min-max theorem is a refinement of this fact.
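These facts are easy to check numerically. The sketch below (a minimal example assuming NumPy; the Hermitian matrix is random test data, not taken from the article) samples the Rayleigh quotient and verifies that its values fill the interval between the smallest and largest eigenvalue, with the endpoints attained at eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4x4 Hermitian matrix (illustrative data).
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2

def rayleigh(A, x):
    """Rayleigh-Ritz quotient (Ax, x) / (x, x); real for Hermitian A."""
    return (x.conj() @ A @ x).real / (x.conj() @ x).real

w, V = np.linalg.eigh(A)   # eigenvalues ascending, columns of V orthonormal

# Sample the quotient at many random vectors: every value lies in
# [lambda_min, lambda_max].
samples = [rayleigh(A, rng.standard_normal(4) + 1j * rng.standard_normal(4))
           for _ in range(1000)]
assert min(samples) >= w[0] - 1e-12
assert max(samples) <= w[-1] + 1e-12

# The endpoints are attained exactly at the extreme eigenvectors.
assert np.isclose(rayleigh(A, V[:, 0]), w[0])
assert np.isclose(rayleigh(A, V[:, -1]), w[-1])
```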

Min-max theorem


Let A be Hermitian on an inner product space V with dimension n, with spectrum ordered in descending order λ1 ≥ ... ≥ λn.

Let v1, ..., vn be the corresponding unit-length, orthogonal eigenvectors.

Reverse the spectrum ordering, so that ξ1 = λn, ..., ξn = λ1.

(Poincaré’s inequality) Let M be a subspace of V with dimension k. Then there exist unit vectors x, y ∈ M such that

⟨x, Ax⟩ ≤ λk and ⟨y, Ay⟩ ≥ ξk.

Proof

Part 2 is a corollary of part 1, applied to −A.

M is a k-dimensional subspace, and N := span(vk, ..., vn) has dimension n − k + 1; since k + (n − k + 1) > n, the two subspaces must intersect in at least a line.

Take a unit vector x ∈ M ∩ N. That is the vector we need.

Since x ∈ N, we can write x = akvk + ⋯ + anvn, and since |ak|² + ⋯ + |an|² = 1, we find ⟨x, Ax⟩ = |ak|²λk + ⋯ + |an|²λn ≤ λk.

(min-max theorem)

{\displaystyle {\begin{aligned}\lambda _{k}&=\max _{\begin{array}{c}{\mathcal {M}}\subset V\\\operatorname {dim} ({\mathcal {M}})=k\end{array}}\min _{\begin{array}{c}x\in {\mathcal {M}}\\\|x\|=1\end{array}}\langle x,Ax\rangle \\&=\min _{\begin{array}{c}{\mathcal {M}}\subset V\\\operatorname {dim} ({\mathcal {M}})=n-k+1\end{array}}\max _{\begin{array}{c}x\in {\mathcal {M}}\\\|x\|=1\end{array}}\langle x,Ax\rangle {\text{. }}\end{aligned}}}

Proof

Part 2 is a corollary of part 1, applied to −A.

By Poincaré’s inequality, λk is an upper bound to the right side.

By setting M = span(v1, ..., vk), the upper bound is achieved.
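The Courant–Fischer characterization can be verified numerically. In the sketch below (assuming NumPy; the symmetric matrix is random example data), the minimum of the Rayleigh quotient over a subspace with orthonormal basis Q is the smallest eigenvalue of the projected matrix QᵀAQ; the span of the top-k eigenvectors attains λk, and no other k-dimensional subspace does better.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
X = rng.standard_normal((n, n))
A = (X + X.T) / 2                       # random real symmetric matrix

w, V = np.linalg.eigh(A)                # ascending
lam = w[::-1]                           # descending: lam[0] >= ... >= lam[n-1]

def min_rayleigh_on(A, Q):
    """Min of <x, Ax> over unit x in the column span of Q
    (Q has orthonormal columns): smallest eigenvalue of Q^T A Q."""
    return np.linalg.eigvalsh(Q.T @ A @ Q)[0]

for k in range(1, n + 1):
    # The optimal subspace: span of the top-k eigenvectors.
    Q_opt = V[:, ::-1][:, :k]
    assert np.isclose(min_rayleigh_on(A, Q_opt), lam[k - 1])
    # Poincare's inequality: a random k-dim subspace does no better.
    for _ in range(50):
        Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
        assert min_rayleigh_on(A, Q) <= lam[k - 1] + 1e-10
```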

Define the partial trace trV(A) to be the trace of the projection of A to V. It is equal to Σi vi*Avi, where {vi} is an orthonormal basis of V.

(Wielandt minimax formula[1]: 44) Let 1 ≤ i1 < ⋯ < ik ≤ n be integers. Define a partial flag to be a nested collection V1 ⊂ ⋯ ⊂ Vk of subspaces of Cn such that dim(Vj) = ij for all 1 ≤ j ≤ k.

Define the associated Schubert variety X(V1, …, Vk) to be the collection of all k-dimensional subspaces W such that dim(W ∩ Vj) ≥ j for all j. Then

{\displaystyle \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)=\sup _{V_{1},\ldots ,V_{k}}\inf _{W\in X\left(V_{1},\ldots ,V_{k}\right)}tr_{W}(A)}

Proof

The ≤ case.

Let Vj = span(e1, …, eij). For any W ∈ X(V1, …, Vk), it remains to show that

{\displaystyle \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)\leq tr_{W}(A)}

To show this, we construct an orthonormal set of vectors v1, …, vk such that vj ∈ Vj ∩ W. Then

{\displaystyle tr_{W}(A)=\sum _{j}\langle v_{j},Av_{j}\rangle \geq \sum _{j}\lambda _{i_{j}}(A),}

since each vj lies in span(e1, …, eij), on which the Rayleigh quotient is at least λij(A).

Since dim(V1 ∩ W) ≥ 1, we pick any unit v1 ∈ V1 ∩ W. Next, since dim(V2 ∩ W) ≥ 2, we pick any unit v2 ∈ V2 ∩ W that is perpendicular to v1, and so on.

The ≥ case.

For any such sequence of subspaces Vi, we must find some W ∈ X(V1, …, Vk) such that

{\displaystyle \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)\geq tr_{W}(A)}

Now we prove this by induction on n.

The n = 1 case is the Courant–Fischer theorem. Assume now n ≥ 2.

If i1 ≥ 2, then we can apply induction. Let E = span(ei1, …, en). We construct a partial flag within E from the intersection of E with V1, …, Vk.

We begin by picking an (ik − (i1 − 1))-dimensional subspace Wk′ ⊂ E ∩ Vk, which exists by counting dimensions. This has codimension (i1 − 1) within Vk.

Then we go down by one space, picking an (ik−1 − (i1 − 1))-dimensional subspace Wk−1′ ⊂ Wk′ ∩ Vk−1. This still exists, and so on down to W1′. Now, since dim(E) ≤ n − 1, the induction hypothesis applies: there exists some W ∈ X(W1′, …, Wk′) such that

{\displaystyle \lambda _{i_{1}-(i_{1}-1)}(A|E)+\cdots +\lambda _{i_{k}-(i_{1}-1)}(A|E)\geq tr_{W}(A)}

Now λij−(i1−1)(A|E) is the (ij − (i1 − 1))-th eigenvalue of A orthogonally projected down to E. By the Cauchy interlacing theorem, λij−(i1−1)(A|E) ≤ λij(A). Since X(W1′, …, Wk′) ⊂ X(V1, …, Vk), we are done.

If i1 = 1, then we perform a similar construction. Let E = span(e2, …, en). If Vk ⊂ E, then we can induct. Otherwise, we construct a partial flag sequence W2′, …, Wk′ within E. By induction, there exists some W′ ∈ X(W2′, …, Wk′) ⊂ X(V2, …, Vk) such that

{\displaystyle \lambda _{i_{2}-1}(A|E)+\cdots +\lambda _{i_{k}-1}(A|E)\geq tr_{W'}(A)}

Since E is spanned by the eigenvectors e2, …, en, the restriction A|E has eigenvalues λ2(A) ≥ ⋯ ≥ λn(A), so λij−1(A|E) = λij(A), and thus

{\displaystyle \lambda _{i_{2}}(A)+\cdots +\lambda _{i_{k}}(A)\geq tr_{W'}(A)}

It remains to find some unit vector v such that W′ ⊕ ⟨v⟩ ∈ X(V1, …, Vk).

If V1 ⊄ W′, then any v ∈ V1 ∖ W′ would work. Otherwise, if V2 ⊄ W′, then any v ∈ V2 ∖ W′ would work, and so on. If none of these work, then it means Vk ⊂ E, a contradiction.

This has some corollaries:[1]: 44 

(Extremal partial trace)

{\displaystyle \lambda _{1}(A)+\dots +\lambda _{k}(A)=\sup _{\operatorname {dim} (V)=k}tr_{V}(A)}

{\displaystyle \xi _{1}(A)+\dots +\xi _{k}(A)=\inf _{\operatorname {dim} (V)=k}tr_{V}(A)}

(Corollary) The sum λ1(A) + ⋯ + λk(A) is a convex function of A, and ξ1(A) + ⋯ + ξk(A) is concave.

(Schur–Horn inequality) For any indices 1 ≤ i1 < ⋯ < ik ≤ n,

{\displaystyle \xi _{1}(A)+\dots +\xi _{k}(A)\leq a_{i_{1},i_{1}}+\dots +a_{i_{k},i_{k}}\leq \lambda _{1}(A)+\dots +\lambda _{k}(A)}

Equivalently, this states that the diagonal vector of A is majorized by its eigenspectrum.
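The Schur–Horn inequality is also easy to test numerically. The sketch below (assuming NumPy; the symmetric matrix is random example data) checks that every sum of k diagonal entries lies between the sum of the k smallest and the sum of the k largest eigenvalues.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n = 5
X = rng.standard_normal((n, n))
A = (X + X.T) / 2                      # random real symmetric matrix

w = np.linalg.eigvalsh(A)              # ascending: xi_1 <= ... <= xi_n
d = np.diag(A)

for k in range(1, n + 1):
    lo = w[:k].sum()                   # xi_1 + ... + xi_k
    hi = w[-k:].sum()                  # lambda_1 + ... + lambda_k
    # Every choice of k diagonal entries is sandwiched in between.
    for idx in combinations(range(n), k):
        s = d[list(idx)].sum()
        assert lo - 1e-10 <= s <= hi + 1e-10
```

For k = n both bounds collapse to the trace, since the sum of the eigenvalues equals the sum of the diagonal entries.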

(Schatten-norm Hölder inequality) Given Hermitian A, B and a Hölder pair 1/p + 1/q = 1,

{\displaystyle |\operatorname {tr} (AB)|\leq \|A\|_{S^{p}}\|B\|_{S^{q}}}

Proof

WLOG, B is diagonalized; then we need to show

{\displaystyle \left|\sum _{i}B_{ii}A_{ii}\right|\leq \|A\|_{S^{p}}\|(B_{ii})\|_{l^{q}}}

By the standard Hölder inequality, it suffices to show

{\displaystyle \|(A_{ii})\|_{l^{p}}\leq \|A\|_{S^{p}}}

By the Schur–Horn inequality, the diagonal of A is majorized by the eigenspectrum of A, and since the map f(x1, …, xn) = ‖x‖p is symmetric and convex, it is Schur-convex. This gives ‖(Aii)‖lp ≤ ‖(λi(A))‖lp = ‖A‖Sp, as required.
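A quick numerical check of the Schatten-norm Hölder inequality (assuming NumPy; the Hermitian matrices are random example data, and the Schatten norm is computed as the ℓp norm of the singular values):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

def herm(X):
    """Hermitian part of a square matrix."""
    return (X + X.conj().T) / 2

A = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
B = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

def schatten(M, p):
    """Schatten p-norm: the l^p norm of the singular values of M."""
    return np.linalg.norm(np.linalg.svd(M, compute_uv=False), p)

# Check |tr(AB)| <= ||A||_{S^p} ||B||_{S^q} for several Hölder pairs.
for p, q in [(1, np.inf), (2, 2), (3, 1.5), (np.inf, 1)]:
    lhs = abs(np.trace(A @ B))
    assert lhs <= schatten(A, p) * schatten(B, q) + 1e-10
```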

Counterexample in the non-Hermitian case


Let N be the nilpotent matrix

{\displaystyle {\begin{bmatrix}0&1\\0&0\end{bmatrix}}.}

Define the Rayleigh quotient RN(x) exactly as above in the Hermitian case. Then it is easy to see that the only eigenvalue of N is zero, while the maximum value of the Rayleigh quotient is 1/2, attained at x = (1, 1)/√2. That is, the maximum value of the Rayleigh quotient is larger than the maximum eigenvalue.
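This counterexample is small enough to check directly (a minimal sketch assuming NumPy):

```python
import numpy as np

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# The only eigenvalue of the nilpotent matrix N is 0 ...
assert np.allclose(np.linalg.eigvals(N), 0)

# ... yet the Rayleigh quotient reaches 1/2 at x = (1, 1)/sqrt(2),
# exceeding the largest eigenvalue.
x = np.array([1.0, 1.0]) / np.sqrt(2)
assert np.isclose(x @ N @ x, 0.5)
```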

Applications


Min-max principle for singular values


The singular values {σk} of a square matrix M are the square roots of the eigenvalues of M*M (equivalently MM*). An immediate consequence of the first equality in the min-max theorem is:

{\displaystyle \sigma _{k}^{\downarrow }=\max _{S:\dim(S)=k}\min _{x\in S,\|x\|=1}(M^{*}Mx,x)^{\frac {1}{2}}=\max _{S:\dim(S)=k}\min _{x\in S,\|x\|=1}\|Mx\|.}

Similarly,

{\displaystyle \sigma _{k}^{\downarrow }=\min _{S:\dim(S)=n-k+1}\max _{x\in S,\|x\|=1}\|Mx\|.}

Here σk↓ denotes the kth entry in the decreasing sequence of the singular values, so that σ1↓ ≥ σ2↓ ≥ ⋯.
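The singular-value characterization can be checked numerically as well. In the sketch below (assuming NumPy; the matrix is random example data), the span of the top-k right singular vectors attains the max-min value σk↓, computed as the square root of the smallest eigenvalue of M*M projected to that subspace.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
M = rng.standard_normal((n, n))        # random square matrix

U, s, Vt = np.linalg.svd(M)            # s is in decreasing order

# Singular values are the square roots of the eigenvalues of M^T M.
assert np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(M.T @ M)))

# sigma_k = max over k-dim S of min ||Mx|| over unit x in S;
# the span of the top-k right singular vectors attains it.
for k in range(1, n + 1):
    Q = Vt[:k].T                       # orthonormal basis of the optimal S
    min_norm = np.sqrt(np.linalg.eigvalsh(Q.T @ (M.T @ M) @ Q)[0])
    assert np.isclose(min_norm, s[k - 1])
```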

Cauchy interlacing theorem

Main article: Poincaré separation theorem

Let A be a symmetric n × n matrix. The m × m matrix B, where m ≤ n, is called a compression of A if there exists an orthogonal projection P onto a subspace of dimension m such that PAP* = B. The Cauchy interlacing theorem states:

Theorem. If the eigenvalues of A are α1 ≤ … ≤ αn, and those of B are β1 ≤ … ≤ βm, then for all j ≤ m,

{\displaystyle \alpha _{j}\leq \beta _{j}\leq \alpha _{n-m+j}.}

This can be proven using the min-max principle. Let βi have corresponding eigenvector bi, and let Sj be the j-dimensional subspace Sj = span{b1, …, bj}. Then

{\displaystyle \beta _{j}=\max _{x\in S_{j},\|x\|=1}(Bx,x)=\max _{x\in S_{j},\|x\|=1}(PAP^{*}x,x)\geq \min _{S_{j}}\max _{x\in S_{j},\|x\|=1}(A(P^{*}x),P^{*}x)=\alpha _{j}.}

According to the first part of the min-max theorem, αj ≤ βj. On the other hand, if we define Sm−j+1 = span{bj, …, bm}, then

{\displaystyle \beta _{j}=\min _{x\in S_{m-j+1},\|x\|=1}(Bx,x)=\min _{x\in S_{m-j+1},\|x\|=1}(PAP^{*}x,x)=\min _{x\in S_{m-j+1},\|x\|=1}(A(P^{*}x),P^{*}x)\leq \alpha _{n-m+j},}

where the last inequality is given by the second part of min-max.

When n − m = 1, we have αj ≤ βj ≤ αj+1, hence the name interlacing theorem.
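Interlacing is straightforward to verify numerically. The sketch below (assuming NumPy; the matrices are random example data) builds a compression by conjugating with a matrix P whose orthonormal columns span a random m-dimensional subspace, and checks αj ≤ βj ≤ αn−m+j.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 7, 4
X = rng.standard_normal((n, n))
A = (X + X.T) / 2                      # random real symmetric matrix

# A compression of A: B = P^T A P, where the columns of P form an
# orthonormal basis of a random m-dimensional subspace.
P, _ = np.linalg.qr(rng.standard_normal((n, m)))
B = P.T @ A @ P                        # m x m

alpha = np.linalg.eigvalsh(A)          # ascending
beta = np.linalg.eigvalsh(B)           # ascending

# Interlacing: alpha_j <= beta_j <= alpha_{n-m+j}  (0-indexed here).
for j in range(m):
    assert alpha[j] - 1e-10 <= beta[j] <= alpha[n - m + j] + 1e-10
```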

Lidskii's inequality

Main article: Trace class § Lidskii's theorem

(Lidskii inequality) If 1 ≤ i1 < ⋯ < ik ≤ n, then

{\displaystyle {\begin{aligned}&\lambda _{i_{1}}(A+B)+\cdots +\lambda _{i_{k}}(A+B)\\&\quad \leq \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)+\lambda _{1}(B)+\cdots +\lambda _{k}(B)\end{aligned}}}

{\displaystyle {\begin{aligned}&\lambda _{i_{1}}(A+B)+\cdots +\lambda _{i_{k}}(A+B)\\&\quad \geq \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)+\xi _{1}(B)+\cdots +\xi _{k}(B)\end{aligned}}}

Proof

Note that

{\displaystyle \sum _{i}\lambda _{i}(A+B)=\operatorname {tr} (A+B)=\sum _{i}\lambda _{i}(A)+\lambda _{i}(B).}

Combined with the Lidskii inequality, this shows that λ(A+B) − λ(A) ⪯ λ(B), where ⪯ means majorization. By the Schur convexity theorem, we then have:

(p-Wielandt–Hoffman inequality) ‖λ(A+B) − λ(A)‖ℓp ≤ ‖B‖Sp, where ‖·‖Sp stands for the p-Schatten norm.
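The p-Wielandt–Hoffman inequality can be tested numerically. In the sketch below (assuming NumPy; the symmetric matrices are random example data), the Schatten norm of the Hermitian perturbation B is the ℓp norm of its eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 6

def sym(X):
    """Symmetric part of a square matrix."""
    return (X + X.T) / 2

A = sym(rng.standard_normal((n, n)))
B = sym(rng.standard_normal((n, n)))

def lam(M):
    """Spectrum in descending order."""
    return np.linalg.eigvalsh(M)[::-1]

# ||lambda(A+B) - lambda(A)||_{l^p} <= ||B||_{S^p} for several p;
# for Hermitian B the Schatten norm is the l^p norm of its eigenvalues.
for p in [1, 2, 3, np.inf]:
    lhs = np.linalg.norm(lam(A + B) - lam(A), p)
    rhs = np.linalg.norm(np.linalg.eigvalsh(B), p)
    assert lhs <= rhs + 1e-10
```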

Compact operators


Let A be a compact, Hermitian operator on a Hilbert space H. Recall that the non-zero spectrum of such an operator consists of real eigenvalues with finite multiplicities whose only possible cluster point is zero. If A has infinitely many positive eigenvalues, they accumulate at zero. In this case, we list the positive eigenvalues of A as

{\displaystyle \cdots \leq \lambda _{k}\leq \cdots \leq \lambda _{1},}

where entries are repeated with multiplicity, as in the matrix case. (To emphasize that the sequence is decreasing, we may write λk = λk↓.) We now apply the same reasoning as in the matrix case. Letting Sk ⊂ H be a k-dimensional subspace, we can obtain the following theorem.

Theorem (Min-Max). Let A be a compact, self-adjoint operator on a Hilbert space H, whose positive eigenvalues are listed in decreasing order ⋯ ≤ λk ≤ ⋯ ≤ λ1. Then:

{\displaystyle {\begin{aligned}\max _{S_{k}}\min _{x\in S_{k},\|x\|=1}(Ax,x)&=\lambda _{k}^{\downarrow },\\\min _{S_{k-1}}\max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)&=\lambda _{k}^{\downarrow }.\end{aligned}}}

A similar pair of equalities holds for the negative eigenvalues.

Proof

Let S′ be the closure of the linear span S′ = span{uk, uk+1, …}, where ui denotes an eigenvector corresponding to λi. The subspace S′ has codimension k − 1. By the same dimension-count argument as in the matrix case, S′ ∩ Sk has positive dimension. So there exists x ∈ S′ ∩ Sk with ‖x‖ = 1. Since it is an element of S′, such an x necessarily satisfies

{\displaystyle (Ax,x)\leq \lambda _{k}.}

Therefore, for all Sk

{\displaystyle \inf _{x\in S_{k},\|x\|=1}(Ax,x)\leq \lambda _{k}}

But A is compact, therefore the function f(x) = (Ax, x) is weakly continuous. Furthermore, any bounded set in H is weakly compact. This lets us replace the infimum by a minimum:

{\displaystyle \min _{x\in S_{k},\|x\|=1}(Ax,x)\leq \lambda _{k}.}

So

{\displaystyle \sup _{S_{k}}\min _{x\in S_{k},\|x\|=1}(Ax,x)\leq \lambda _{k}.}

Because equality is achieved when Sk = span{u1, …, uk},

{\displaystyle \max _{S_{k}}\min _{x\in S_{k},\|x\|=1}(Ax,x)=\lambda _{k}.}

This is the first part of min-max theorem for compact self-adjoint operators.

Analogously, consider now a (k − 1)-dimensional subspace Sk−1, whose orthogonal complement is denoted by Sk−1⊥. If S′ = span{u1, …, uk},

{\displaystyle S'\cap S_{k-1}^{\perp }\neq \{0\}.}

So

{\displaystyle \exists x\in S_{k-1}^{\perp },\,\|x\|=1,\ (Ax,x)\geq \lambda _{k}.}

This implies

{\displaystyle \max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)\geq \lambda _{k}}

where the compactness of A was applied. Indexing the above by the collection of (k − 1)-dimensional subspaces gives

{\displaystyle \inf _{S_{k-1}}\max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)\geq \lambda _{k}.}

Pick Sk−1 = span{u1, …, uk−1}, and we deduce

{\displaystyle \min _{S_{k-1}}\max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)=\lambda _{k}.}

Self-adjoint operators


The min-max theorem also applies to (possibly unbounded) self-adjoint operators.[2][3] Recall that the essential spectrum is the spectrum without isolated eigenvalues of finite multiplicity. Sometimes there are eigenvalues below the essential spectrum, and we would like to approximate these eigenvalues and the corresponding eigenfunctions.

Theorem (Min-Max). Let A be self-adjoint, and let E1 ≤ E2 ≤ E3 ≤ ⋯ be the eigenvalues of A below the essential spectrum. Then

{\displaystyle E_{n}=\min _{\psi _{1},\ldots ,\psi _{n}}\max\{\langle \psi ,A\psi \rangle :\psi \in \operatorname {span} (\psi _{1},\ldots ,\psi _{n}),\,\|\psi \|=1\}.}

If we only have N eigenvalues and hence run out of eigenvalues, then we let En := inf σess(A) (the bottom of the essential spectrum) for n > N, and the above statement holds after replacing min-max with inf-sup.

Theorem (Max-Min). Let A be self-adjoint, and let E1 ≤ E2 ≤ E3 ≤ ⋯ be the eigenvalues of A below the essential spectrum. Then

{\displaystyle E_{n}=\max _{\psi _{1},\ldots ,\psi _{n-1}}\min\{\langle \psi ,A\psi \rangle :\psi \perp \psi _{1},\ldots ,\psi _{n-1},\,\|\psi \|=1\}.}

If we only have N eigenvalues and hence run out of eigenvalues, then we let En := inf σess(A) (the bottom of the essential spectrum) for n > N, and the above statement holds after replacing max-min with sup-inf.

The proofs[2][3] use the following results about self-adjoint operators:

Theorem. Let A be self-adjoint. Then (A − E) ≥ 0 for E ∈ R if and only if σ(A) ⊆ [E, ∞).[2]: 77
Theorem. If A is self-adjoint, then

{\displaystyle \inf \sigma (A)=\inf _{\psi \in {\mathfrak {D}}(A),\|\psi \|=1}\langle \psi ,A\psi \rangle }

and

{\displaystyle \sup \sigma (A)=\sup _{\psi \in {\mathfrak {D}}(A),\|\psi \|=1}\langle \psi ,A\psi \rangle }.[2]: 77
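For intuition, the variational formula can be checked on a discretized Schrödinger operator. The sketch below (assuming NumPy; the grid size and domain are arbitrary choices, not from the article) approximates H = −d²/dx² + x², the harmonic oscillator, whose lowest eigenvalues are 1, 3, 5, …, and verifies that any n-dimensional trial space gives an upper bound on En, as the min-max theorem asserts.

```python
import numpy as np

# Finite-difference discretization of H = -d^2/dx^2 + x^2 on [-8, 8]
# with (effectively) Dirichlet boundary conditions.
N = 800
x = np.linspace(-8.0, 8.0, N)
h = x[1] - x[0]
main = 2.0 / h**2 + x**2
off = -np.ones(N - 1) / h**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)              # ascending
# The lowest eigenvalues approximate 1, 3, 5 (error O(h^2)).
assert np.allclose(E[:3], [1.0, 3.0, 5.0], atol=0.01)

# Min-max: E_n is the minimum over n-dimensional trial spaces of the
# maximal Rayleigh quotient, so any trial space gives an upper bound.
rng = np.random.default_rng(7)
for ntrial in range(1, 4):
    Q, _ = np.linalg.qr(rng.standard_normal((N, ntrial)))
    upper = np.linalg.eigvalsh(Q.T @ H @ Q)[-1]   # max Rayleigh on span(Q)
    assert upper >= E[ntrial - 1] - 1e-8
```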


References

  1. ^ a b Tao, Terence (2012). Topics in Random Matrix Theory. Graduate Studies in Mathematics. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-7430-1.
  2. ^ a b c d G. Teschl, Mathematical Methods in Quantum Mechanics (GSM 99). https://www.mat.univie.ac.at/~gerald/ftp/book-schroe/schroe.pdf
  3. ^ a b Lieb; Loss (2001). Analysis. GSM. Vol. 14 (2nd ed.). Providence: American Mathematical Society. ISBN 0-8218-2783-9.
