Perron–Frobenius theorem

From Wikipedia, the free encyclopedia
Theory in linear algebra

In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude, that this eigenvalue is real, and that the corresponding eigenvector can be chosen to have strictly positive components; it also asserts a similar statement for certain classes of non-negative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem,[1] Hawkins–Simon condition[2]); to demography (Leslie population age distribution model);[3] to social networks (DeGroot learning process); to Internet search engines (PageRank);[4] and even to ranking of American football teams.[5] The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.[6][7]

Statement


Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers A^k as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to Oskar Perron (1907) and concerned positive matrices. Later, Georg Frobenius (1912) found their extension to certain classes of non-negative matrices.

Positive matrices


Let A = (a_ij) be an n × n positive matrix: a_ij > 0 for 1 ≤ i, j ≤ n. Then the following statements hold.

  1. There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue (also called the leading eigenvalue, principal eigenvalue or dominant eigenvalue), such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value: |λ| < r. Thus, the spectral radius ρ(A) is equal to r. If the matrix coefficients are algebraic, this implies that the eigenvalue is a Perron number.
  2. The Perron–Frobenius eigenvalue is simple: r is a simple root of the characteristic polynomial of A. Consequently, the eigenspace associated to r is one-dimensional. (The same is true for the left eigenspace, i.e., the eigenspace for A^T, the transpose of A.)
  3. There exists an eigenvector v = (v_1, ..., v_n)^T of A with eigenvalue r such that all components of v are positive: A v = r v, v_i > 0 for 1 ≤ i ≤ n. (Respectively, there exists a positive left eigenvector w: w^T A = r w^T, w_i > 0.) It is known in the literature under many variations as the Perron vector, Perron eigenvector, Perron–Frobenius eigenvector, leading eigenvector, principal eigenvector or dominant eigenvector.
  4. There are no other positive (moreover non-negative) eigenvectors except positive multiples of v (respectively, left eigenvectors except positive multiples of w), i.e., all other eigenvectors must have at least one negative or non-real component.
  5. lim_{k→∞} A^k / r^k = v w^T, where the left and right eigenvectors for A are normalized so that w^T v = 1. Moreover, the matrix v w^T is the projection onto the eigenspace corresponding to r. This projection is called the Perron projection.
  6. Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]_i / x_i taken over all those i such that x_i ≠ 0. Then f is a real valued function whose maximum over all non-negative non-zero vectors x is the Perron–Frobenius eigenvalue.
  7. A "min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors x, let g(x) be the maximum value of [Ax]_i / x_i taken over i. Then g is a real valued function whose minimum over all strictly positive vectors x is the Perron–Frobenius eigenvalue.
  8. Birkhoff–Varga formula: Let x and y be strictly positive vectors. Then[8]
     r = sup_{x>0} inf_{y>0} (y^T A x)/(y^T x) = inf_{x>0} sup_{y>0} (y^T A x)/(y^T x) = inf_{x>0} sup_{y>0} Σ_{i,j=1}^{n} y_i a_ij x_j / Σ_{i=1}^{n} y_i x_i.
  9. Donsker–Varadhan–Friedland formula: Let p be a probability vector and x a strictly positive vector. Then[9][10]
     r = sup_p inf_{x>0} Σ_{i=1}^{n} p_i [Ax]_i / x_i.
  10. Fiedler formula:[11]
      r = sup_{z>0} inf_{x>0, y>0, x∘y=z} (y^T A x)/(y^T x) = sup_{z>0} inf_{x>0, y>0, x∘y=z} Σ_{i,j=1}^{n} y_i a_ij x_j / Σ_{i=1}^{n} y_i x_i.
  11. The Perron–Frobenius eigenvalue satisfies the inequalities min_i Σ_j a_ij ≤ r ≤ max_i Σ_j a_ij.

All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1–7 can be found in Meyer,[12] chapter 8, claims 8.2.11–15 (page 667) and exercises 8.2.5, 7, 9 (pages 668–669).

The left and right eigenvectors w and v are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector v sums to one, while w^T v = 1.
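These statements are easy to check numerically. The following sketch (an illustration only; the 2 × 2 example matrix is arbitrary) uses NumPy to compute the Perron root and eigenvector of a positive matrix and verifies properties 1, 3 and 11:

```python
import numpy as np

# An arbitrary positive example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# The Perron root r is the eigenvalue of largest modulus.
eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(np.abs(eigvals)))
r = eigvals[i].real

# Right Perron eigenvector, rescaled to be positive and sum to 1
# (the "stochastic" normalization mentioned above).
v = eigvecs[:, i].real
v = v / v.sum()

assert abs(r - (5 + np.sqrt(5)) / 2) < 1e-9            # r for this matrix
assert (v > 0).all()                                   # property 3
assert A.sum(axis=1).min() <= r <= A.sum(axis=1).max() # property 11
```

The row-sum bounds of property 11 give 3 ≤ r ≤ 4 here, consistent with r = (5 + √5)/2 ≈ 3.618.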

Non-negative matrices


There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than or equal, in absolute value, to all other eigenvalues.[13][14] However, for the example A = [[0, 1], [1, 0]], the maximum eigenvalue r = 1 has the same absolute value as the other eigenvalue −1; while for A = [[0, 1], [0, 0]], the maximum eigenvalue is r = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive.
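Both counterexamples can be confirmed numerically (an illustration only):

```python
import numpy as np

# Irreducible but periodic (period 2): both eigenvalues have modulus 1,
# so the maximal eigenvalue r = 1 is not strictly dominant.
A1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
ev1 = np.sort(np.linalg.eigvals(A1).real)
assert np.allclose(ev1, [-1.0, 1.0])

# Reducible and nilpotent: r = 0 is a double root of the characteristic
# polynomial, and the eigenvector (1, 0) is not strictly positive.
A2 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
ev2 = np.linalg.eigvals(A2)
assert np.allclose(ev2, 0.0)
```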

However, Frobenius found a special subclass of non-negative matrices, irreducible matrices, for which a non-trivial generalization is possible. For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form ωr, where r is a real strictly positive eigenvalue, and ω ranges over the complex h-th roots of 1 for some positive integer h called the period of the matrix. The eigenvector corresponding to r has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below.

Classification of matrices


Let A be an n × n square matrix over a field F. The matrix A is irreducible if any of the following equivalent properties holds.

Definition 1: A does not have non-trivial invariant coordinate subspaces. Here a non-trivial coordinate subspace means a linear subspace spanned by any proper subset of standard basis vectors of F^n. More explicitly, for any linear subspace spanned by standard basis vectors e_i1, ..., e_ik, 0 < k < n, its image under the action of A is not contained in the same subspace.

Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P:

P A P^{-1} ≠ [[E, F], [O, G]],

where E and G are non-trivial (i.e. of size greater than zero) square matrices.

Definition 3: One can associate with a matrix A a certain directed graph G_A. It has n vertices labeled 1, ..., n, and there is an edge from vertex i to vertex j precisely when a_ij ≠ 0. Then the matrix A is irreducible if and only if its associated graph G_A is strongly connected.

If F is the field of real or complex numbers, then we also have the following condition.

Definition 4: The group representation of (ℝ, +) on ℝ^n or (ℂ, +) on ℂ^n given by t ↦ exp(tA) has no non-trivial invariant coordinate subspaces. (By comparison, this would be an irreducible representation if there were no non-trivial invariant subspaces at all, not only considering coordinate subspaces.)

A matrix is reducible if it is not irreducible.
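Definition 3 suggests a direct algorithmic test for irreducibility: check that the directed graph G_A is strongly connected. A minimal sketch (the function name is my own; reachability is computed by repeatedly squaring the adjacency matrix with self-loops added):

```python
import numpy as np

def is_irreducible(A):
    """Check irreducibility of a square matrix via Definition 3:
    the directed graph G_A must be strongly connected."""
    n = A.shape[0]
    # 0/1 adjacency with self-loops, so reach[i, j] == 1 eventually means
    # "j is reachable from i in at most n - 1 steps".
    reach = ((A != 0) | np.eye(n, dtype=bool)).astype(int)
    for _ in range(n):                       # n squarings reach length >= n - 1
        reach = (reach @ reach > 0).astype(int)
    return bool((reach > 0).all())

assert is_irreducible(np.array([[0, 1], [1, 0]]))      # a 2-cycle: connected
assert not is_irreducible(np.array([[0, 1], [0, 0]]))  # no edge from 2 back to 1
```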

A real matrix A is primitive if it is non-negative and its m-th power is positive for some natural number m (i.e. all entries of A^m are positive).

Let A be real and non-negative. Fix an index i and define the period of index i to be the greatest common divisor of all natural numbers m such that (A^m)_ii > 0. When A is irreducible, the period of every index is the same and is called the period of A. In fact, when A is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in G_A (see Kitchens,[15] page 16). The period is also called the index of imprimitivity (Meyer,[12] page 674) or the order of cyclicity. If the period is 1, A is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices.
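The period as defined here can be computed directly as a gcd of return times, at least for small matrices. A minimal sketch (the helper name and the power cutoff 2n² are my own choices; for an irreducible matrix every index gives the same gcd, so index 0 suffices):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(A, max_power=None):
    """gcd of all m with (A^m)_00 > 0, for an irreducible non-negative A."""
    n = A.shape[0]
    max_power = max_power or 2 * n * n
    returns = []
    M = np.eye(n)
    for m in range(1, max_power + 1):
        M = M @ A
        if M[0, 0] > 0:
            returns.append(m)
    return reduce(gcd, returns)

# The 2-cycle has period 2; adding a self-loop makes it aperiodic (primitive).
assert period(np.array([[0.0, 1.0], [1.0, 0.0]])) == 2
assert period(np.array([[1.0, 1.0], [1.0, 0.0]])) == 1
```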

All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period.

Results for non-negative matrices were first obtained by Frobenius in 1912.

Perron–Frobenius theorem for irreducible non-negative matrices


Let A be an irreducible non-negative N × N matrix with period h and spectral radius ρ(A) = r. Then the following statements hold.

  • There exists a permutation matrix P such that

    P A P^{-1} = [[O,   A_1, O,   O,  ..., O      ],
                  [O,   O,   A_2, O,  ..., O      ],
                  [...,                           ],
                  [O,   O,   O,   O,  ..., A_{h-1}],
                  [A_h, O,   O,   O,  ..., O      ]],

    where O denotes a zero matrix and the blocks along the main diagonal are square matrices.

  • The Perron–Frobenius eigenvalue satisfies the inequalities min_i Σ_j a_ij ≤ r ≤ max_i Σ_j a_ij.

The example A = [[0, 0, 1], [0, 0, 1], [1, 1, 0]] shows that the (square) zero matrices along the diagonal may be of different sizes, the blocks A_j need not be square, and h need not divide N.
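Numerically, this example has spectral radius r = √2 and period h = 2, and its peripheral eigenvalues are exactly r times the second roots of unity (an illustration only):

```python
import numpy as np

A = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

eigvals = np.linalg.eigvals(A)
r = max(abs(eigvals))                 # spectral radius, here sqrt(2)

# Eigenvalues of maximal modulus: there are h = 2 of them, +r and -r,
# i.e. r times the two square roots of unity.
peripheral = sorted(ev.real for ev in eigvals if abs(abs(ev) - r) < 1e-9)

assert abs(r - np.sqrt(2)) < 1e-9
assert np.allclose(peripheral, [-np.sqrt(2), np.sqrt(2)])
```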

Further properties


Let A be an irreducible non-negative matrix. Then:

  1. (I + A)^{n−1} is a positive matrix (Meyer,[12] claim 8.3.5, p. 672). For a non-negative A, this is also a sufficient condition for irreducibility.[16]
  2. Wielandt's theorem.[17] If |B| < A, then ρ(B) ≤ ρ(A). If equality holds (i.e. if μ = ρ(A)e^{iφ} is an eigenvalue of B), then B = e^{iφ} D A D^{−1} for some diagonal unitary matrix D (i.e. the diagonal elements of D equal e^{iθ_l}, the off-diagonal elements are zero).[18]
  3. If some power A^q is reducible, then it is completely reducible, i.e. for some permutation matrix P we have P A^q P^{−1} = diag(A_1, A_2, ..., A_d) (block diagonal), where the A_i are irreducible matrices having the same maximal eigenvalue. The number d of these matrices is the greatest common divisor of q and h, where h is the period of A.[19]
  4. If c(x) = x^n + c_{k_1} x^{n−k_1} + c_{k_2} x^{n−k_2} + ... + c_{k_s} x^{n−k_s} is the characteristic polynomial of A in which only the non-zero terms are listed, then the period of A equals the greatest common divisor of k_1, k_2, ..., k_s.[20]
  5. Cesàro averages: lim_{k→∞} (1/k) Σ_{i=0}^{k} A^i / r^i = v w^T, where the left and right eigenvectors for A are normalized so that w^T v = 1. Moreover, the matrix v w^T is the spectral projection corresponding to r, the Perron projection.[21]
  6. Let r be the Perron–Frobenius eigenvalue; then the adjoint (adjugate) matrix of (r − A) is positive.[22]
  7. If A has at least one non-zero diagonal element, then A is primitive.[23]
  8. If 0 ≤ A < B, then r_A ≤ r_B. Moreover, if B is irreducible, then the inequality is strict: r_A < r_B.

A matrix A is primitive provided it is non-negative and A^m is positive for some m, and hence A^k is positive for all k ≥ m. To check primitivity, one needs a bound on how large the minimal such m can be, depending on the size of A:[24]

  • If A is a non-negative primitive matrix of size n, then A^{n² − 2n + 2} is positive. Moreover, this is the best possible result, since for the matrix M below, the power M^k is not positive for every k < n² − 2n + 2, since (M^{n² − 2n + 1})_{1,1} = 0.

    M = [[0, 1, 0, 0, ..., 0],
         [0, 0, 1, 0, ..., 0],
         [0, 0, 0, 1, ..., 0],
         ...,
         [0, 0, 0, 0, ..., 1],
         [1, 1, 0, 0, ..., 0]]
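For n = 4 the bound n² − 2n + 2 = 10 can be confirmed directly (a numerical illustration):

```python
import numpy as np

n = 4
# Wielandt's extremal matrix M: the cycle 1 -> 2 -> ... -> n -> 1
# plus one extra edge n -> 2.
M = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    M[i, i + 1] = 1
M[n - 1, 0] = 1
M[n - 1, 1] = 1

k = n * n - 2 * n + 2                 # the exponent n^2 - 2n + 2, here 10
assert (np.linalg.matrix_power(M, k) > 0).all()   # M^k is positive
assert np.linalg.matrix_power(M, k - 1)[0, 0] == 0  # but (M^(k-1))_{1,1} = 0
```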

Applications


Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The following examples only scratch the surface of its vast application domain.

Non-negative matrices


The Perron–Frobenius theorem does not apply directly to general non-negative matrices. Nevertheless, any reducible square matrix A may be written in upper-triangular block form (known as the normal form of a reducible matrix)[25]

PAP^{−1} = [[B_1, *,   *, ..., *  ],
            [0,   B_2, *, ..., *  ],
            ...,
            [0,   0,   0, ..., *  ],
            [0,   0,   0, ..., B_h]]

where P is a permutation matrix and each B_i is a square matrix that is either irreducible or zero. Now if A is non-negative then so too is each block of PAP^{−1}; moreover, the spectrum of A is just the union of the spectra of the B_i.

The invertibility of A can also be studied. The inverse of PAP^{−1} (if it exists) must have diagonal blocks of the form B_i^{−1}, so if any B_i isn't invertible then neither is PAP^{−1} nor A. Conversely, let D be the block-diagonal matrix corresponding to PAP^{−1}, in other words PAP^{−1} with the asterisks zeroed out. If each B_i is invertible then so is D, and D^{−1}(PAP^{−1}) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if N^k = 0, the inverse of 1 − N is 1 + N + N^2 + ... + N^{k−1}), so PAP^{−1} and A are both invertible.

Therefore, many of the spectral properties of A may be deduced by applying the theorem to the irreducible B_i. For example, the Perron root is the maximum of the ρ(B_i). While there will still be eigenvectors with non-negative components, it is quite possible that none of these will be positive.

Stochastic matrices


A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible.

If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. It might not be the only eigenvalue on the unit circle, and the associated eigenspace can be multi-dimensional. If A is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal.
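For instance (an arbitrary illustrative two-state chain), one can check that ρ(A) = 1 with the all-ones right eigenvector, and, since this example is also primitive, that the Perron projection lim A^k has identical rows (each equal to the stationary distribution):

```python
import numpy as np

# An arbitrary irreducible (indeed primitive) row-stochastic matrix.
A = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# The all-ones vector is a right eigenvector with eigenvalue 1 = rho(A).
assert np.allclose(A @ np.ones(2), np.ones(2))
assert abs(max(abs(np.linalg.eigvals(A))) - 1.0) < 1e-12

# The Perron projection as the limit of A^k: every row equals the
# stationary distribution, here (0.8, 0.2).
P = np.linalg.matrix_power(A, 200)
assert np.allclose(P[0], P[1])
assert np.allclose(P[0], [0.8, 0.2])
```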

Algebraic graph theory


The theorem has particular use in algebraic graph theory. The "underlying graph" of a non-negative n-square matrix is the graph with vertices numbered 1, ..., n and arc ij if and only if A_ij ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible.[26][27]

Finite Markov chains


The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type).

Compact operators

Main article: Krein–Rutman theorem

More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology.[28]

Proof methods


A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950). He used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work.[29] Another proof is based on the spectral theory,[30] from which part of the arguments are borrowed.

Perron root is strictly maximal eigenvalue for positive (and primitive) matrices


If A is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue r (Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues, hence r is the spectral radius of A.

This statement does not hold for general non-negative irreducible matrices, which have h eigenvalues with the same absolute value as r, where h is the period of A.

Proof for positive matrices


Let A be a positive matrix, and assume that its spectral radius ρ(A) = 1 (otherwise consider A/ρ(A)). Hence there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less than or equal to 1 in absolute value. Suppose that another eigenvalue λ ≠ 1 also lies on the unit circle. Then there exists a positive integer m such that A^m is a positive matrix and the real part of λ^m is negative. Let ε be half the smallest diagonal entry of A^m and set T = A^m − εI, which is yet another positive matrix. Moreover, if Ax = λx then A^m x = λ^m x, thus λ^m − ε is an eigenvalue of T. Because of the choice of m, this point lies outside the unit disk; consequently ρ(T) > 1. On the other hand, all the entries of T are positive and less than or equal to those of A^m, so by Gelfand's formula ρ(T) ≤ ρ(A^m) ≤ ρ(A)^m = 1. This contradiction means that λ = 1 and there can be no other eigenvalues on the unit circle.

Exactly the same arguments can be applied to the case of primitive matrices; we just need the following simple lemma, which clarifies the properties of primitive matrices.

Lemma


Given a non-negative A, assume there exists m such that A^m is positive; then A^{m+1}, A^{m+2}, A^{m+3}, ... are all positive.

A^{m+1} = A·A^m, so it can have a zero element only if some row of A is entirely zero, but in that case the same row of A^m would be zero, contradicting the positivity of A^m.

Applying the same arguments as above to primitive matrices proves the main claim.

Power method and the positive eigenpair


For a positive (or more generally irreducible non-negative) matrix A the dominant eigenvector is real and strictly positive (for non-negative A, respectively, non-negative).

This can be established using the power method, which states that for a sufficiently generic (in the sense below) matrix A the sequence of vectors b_{k+1} = A b_k / |A b_k| converges to the eigenvector with the maximum eigenvalue. (The initial vector b_0 can be chosen arbitrarily except for some measure-zero set.) Starting with a non-negative vector b_0 produces a sequence of non-negative vectors b_k. Hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector for A, proving the assertion. The corresponding eigenvalue is non-negative.

The proof requires two additional arguments. First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one. The previous section's argument guarantees this.

Second, we must ensure strict positivity of all components of the eigenvector in the case of irreducible matrices. This follows from the following fact, which is of independent interest:

Lemma: given a positive (or more generally irreducible non-negative) matrix A and a non-negative eigenvector v for A, v is necessarily strictly positive and the corresponding eigenvalue is also strictly positive.

Proof. One of the definitions of irreducibility for non-negative matrices is that for all indices i, j there exists m such that (A^m)_ij is strictly positive. Given a non-negative eigenvector v with at least one component, say the i-th, strictly positive, the corresponding eigenvalue is strictly positive: indeed, take n such that (A^n)_ii > 0; then r^n v_i = (A^n v)_i ≥ (A^n)_ii v_i > 0. Hence r is strictly positive. For strict positivity of the eigenvector: given any index j, take m such that (A^m)_ji > 0; then r^m v_j = (A^m v)_j ≥ (A^m)_ji v_i > 0, hence v_j is strictly positive, i.e., the eigenvector is strictly positive.
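The power iteration described above can be sketched in a few lines (a minimal implementation with a fixed iteration count and an arbitrary positive example matrix; the eigenvalue is estimated by a Rayleigh quotient, which is accurate here because the example is symmetric):

```python
import numpy as np

def power_method(A, iterations=200):
    """Approximate the dominant eigenpair (r, v) of A by power iteration.
    Starting from a non-negative vector keeps every iterate non-negative,
    so the limit (the Perron eigenvector) is non-negative as well."""
    b = np.ones(A.shape[0])          # non-negative starting vector
    for _ in range(iterations):
        b = A @ b
        b = b / np.linalg.norm(b)
    r = b @ A @ b                    # Rayleigh quotient estimate of r
    return r, b

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # positive example matrix
r, v = power_method(A)
assert abs(r - (5 + np.sqrt(5)) / 2) < 1e-9   # Perron root of this example
assert (v > 0).all()                          # strictly positive eigenvector
```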

Multiplicity one


This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix. Hence the eigenspace associated to the Perron–Frobenius eigenvalue r is one-dimensional. The arguments here are close to those in Meyer.[12]

Suppose there is a strictly positive eigenvector v corresponding to r and another eigenvector w with the same eigenvalue. (The vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − r has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Take the maximal possible α such that u = v − αw is non-negative; then one of the components of u is zero (otherwise α would not be maximal). The vector u is an eigenvector. It is non-negative, and by the lemma described in the previous section non-negativity implies strict positivity for any eigenvector. On the other hand, as above, at least one component of u is zero. The contradiction implies that w does not exist.

Claim: There are no Jordan blocks corresponding to the Perron–Frobenius eigenvalue r, nor to any other eigenvalue with the same absolute value.

If there is a Jordan block, then the infinity norm ‖(A/r)^k‖_∞ tends to infinity for k → ∞, but that contradicts the existence of the positive eigenvector.

Assume r = 1 (otherwise consider A/r). Let v be a Perron–Frobenius strictly positive eigenvector, so Av = v; then:

‖v‖_∞ = ‖A^k v‖_∞ ≥ ‖A^k‖_∞ min_i(v_i),  ⇒  ‖A^k‖_∞ ≤ ‖v‖_∞ / min_i(v_i).

So ‖A^k‖_∞ is bounded for all k. This gives another proof that there are no eigenvalues of absolute value greater than the Perron–Frobenius one. It also contradicts the existence of a Jordan block for any eigenvalue of absolute value equal to 1 (in particular for the Perron–Frobenius one), because the existence of a Jordan block implies that ‖A^k‖_∞ is unbounded. For a two-by-two matrix:

J^k = [[λ, 1], [0, λ]]^k = [[λ^k, kλ^{k−1}], [0, λ^k]],

hence ‖J^k‖_∞ = k + 1 (for |λ| = 1), so it tends to infinity as k does. Since J^k = C^{−1} A^k C, we have ‖A^k‖ ≥ ‖J^k‖ / (‖C^{−1}‖ ‖C‖), so it also tends to infinity. The resulting contradiction implies that there are no Jordan blocks for the corresponding eigenvalues.

Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial. In the case of non-primitive matrices, there exist other eigenvalues with the same absolute value as r. The same claim is true for them, but requires more work.

No other non-negative eigenvectors


Given a positive (or more generally irreducible non-negative) matrix A, the Perron–Frobenius eigenvector is the only (up to multiplication by a constant) non-negative eigenvector for A.

Other eigenvectors must contain negative or complex components: eigenvectors for different eigenvalues are orthogonal in some sense, and two positive eigenvectors cannot be orthogonal, so they must correspond to the same eigenvalue; but the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional.

Assume there exists an eigenpair (λ, y) for A such that the vector y is positive, and let (r, x) be given, where x is the left Perron–Frobenius eigenvector for A (i.e. an eigenvector for A^T). Then r x^T y = (x^T A) y = x^T (A y) = λ x^T y; also x^T y > 0, so one has r = λ. Since the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, the non-negative eigenvector y is a multiple of the Perron–Frobenius one.[31]

Collatz–Wielandt formula


Given a positive (or more generally irreducible non-negative) matrix A, one defines the function f on the set of all non-negative non-zero vectors x such that f(x) is the minimum value of [Ax]_i / x_i taken over all those i such that x_i ≠ 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue r.

For the proof we denote the maximum of f by the value R. The proof requires showing R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, we consider an arbitrary non-negative vector x and let ξ = f(x). The definition of f gives 0 ≤ ξx ≤ Ax (componentwise). Now we use the positive left eigenvector w for A for the Perron–Frobenius eigenvalue r; then ξ w^T x = w^T (ξx) ≤ w^T (Ax) = (w^T A) x = r w^T x. Hence f(x) = ξ ≤ r, which implies R ≤ r.[32]
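The quantities appearing in the proof are easy to experiment with. The sketch below (illustrative; `f` implements the definition above) checks that f(x) ≤ r on random non-negative vectors and that the Perron eigenvector attains the maximum:

```python
import numpy as np

def f(A, x):
    """Collatz–Wielandt function: min over i with x_i != 0 of (Ax)_i / x_i."""
    Ax = A @ x
    mask = x != 0
    return (Ax[mask] / x[mask]).min()

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                    # positive example matrix
r = max(np.linalg.eigvals(A).real)            # Perron root, (5 + sqrt(5))/2

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.random(2)                         # random non-negative vectors
    assert f(A, x) <= r + 1e-12               # f never exceeds r

# The maximum r is attained at the Perron eigenvector v.
vals, vecs = np.linalg.eig(A)
v = np.abs(vecs[:, int(np.argmax(vals.real))].real)
assert abs(f(A, v) - r) < 1e-9
```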

Perron projection as a limit: A^k / r^k


LetA be a positive (or more generally, primitive) matrix, and letr be its Perron–Frobenius eigenvalue.

  1. There exists a limit A^k/r^k for k → ∞; denote it by P.
  2. P is a projection operator: P^2 = P, which commutes with A: AP = PA.
  3. The image of P is one-dimensional and spanned by the Perron–Frobenius eigenvector v (respectively for P^T, by the Perron–Frobenius eigenvector w for A^T).
  4. P = v w^T, where v, w are normalized such that w^T v = 1.
  5. Hence P is a positive operator.

Hence P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices.

Actually the claims above (except claim 5) are valid for any matrix M such that there exists an eigenvalue r which is strictly greater than the other eigenvalues in absolute value and is a simple root of the characteristic polynomial. (These requirements hold for primitive matrices as above.)

If M is diagonalizable, M is conjugate to a diagonal matrix with eigenvalues r_1, ..., r_n on the diagonal (denote r_1 = r). The matrix M^k/r^k will be conjugate to the diagonal matrix with entries 1, (r_2/r)^k, ..., (r_n/r)^k, which tends to the diagonal matrix with entries 1, 0, 0, ..., 0 as k → ∞, so the limit exists. The same method works for general M (without assuming that M is diagonalizable).

The projection and commutativity properties are elementary corollaries of the definition: M·M^k/r^k = M^k/r^k·M; P^2 = lim M^{2k}/r^{2k} = P. The third fact is also elementary: M(Pu) = M lim (M^k/r^k) u = lim r·(M^{k+1}/r^{k+1}) u, so taking the limit yields M(Pu) = r(Pu); hence the image of P lies in the r-eigenspace for M, which is one-dimensional by the assumptions.

Denote by v the r-eigenvector for M (and by w the one for M^T). The columns of P are multiples of v, because the image of P is spanned by it; respectively, the rows are multiples of w^T. So P takes the form a·(v w^T) for some a, and hence its trace equals a·(w^T v). The trace of a projector equals the dimension of its image, which was proved above to be at most one-dimensional; from the definition one sees that P acts identically on the r-eigenvector for M, so the image is exactly one-dimensional. Thus choosing w^T v = 1 implies P = v w^T.
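The claims can be illustrated numerically. For the primitive matrix below (my own example, with r = 3, right eigenvector v = (1, 1) and left eigenvector w = (3, 2)), A^k/r^k indeed converges to v w^T:

```python
import numpy as np

# A primitive non-negative example: Av = 3v with v = (1, 1),
# and wA = 3w with w = (3, 2).
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])
r = 3.0
v = np.array([1.0, 1.0])
w = np.array([3.0, 2.0])
w = w / (w @ v)                            # normalize so that w^T v = 1

P = np.linalg.matrix_power(A, 50) / r**50  # claim 1: the limit exists
assert np.allclose(P, np.outer(v, w))      # claim 4: P = v w^T
assert np.allclose(P @ P, P)               # claim 2: P is a projection
assert np.allclose(A @ P, P @ A)           # claim 2: P commutes with A
assert (P > 0).all()                       # claim 5: P is positive
```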

Inequalities for Perron–Frobenius eigenvalue


For any non-negative matrix A its Perron–Frobenius eigenvalue r satisfies the inequality

r ≤ max_i Σ_j a_ij.

This is not specific to non-negative matrices: for any matrix A with an eigenvalue λ it is true that |λ| ≤ max_i Σ_j |a_ij|. This is an immediate corollary of the Gershgorin circle theorem. However, another proof is more direct:

Any matrix induced norm satisfies the inequality ‖A‖ ≥ |λ| for any eigenvalue λ, because if x is a corresponding eigenvector, then ‖A‖ ≥ |Ax|/|x| = |λx|/|x| = |λ|. The infinity norm of a matrix is the maximum of row sums: ‖A‖_∞ = max_{1≤i≤m} Σ_{j=1}^{n} |a_ij|. Hence the desired inequality is exactly ‖A‖_∞ ≥ |λ| applied to the non-negative matrix A.

Another inequality is:

min_i Σ_j a_ij ≤ r.

This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that A is positive (not just non-negative), there exists a positive eigenvector w such that Aw = rw and the smallest component of w (say w_i) is 1. Then r = (Aw)_i ≥ the sum of the numbers in row i of A. Thus the minimum row sum gives a lower bound for r, and this observation can be extended to all non-negative matrices by continuity.

Another way to argue it is via the Collatz–Wielandt formula: one takes the vector x = (1, 1, ..., 1) and immediately obtains the inequality.

Further proofs


Perron projection


The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection, and it enjoys the following property:

The Perron projection of an irreducible non-negative square matrix is a positive matrix.

Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if A is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also, if P is its Perron projection then AP = PA = ρ(A)P, so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover, if Ax = λx then PAx = λPx = ρ(A)Px, which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). If A is a primitive matrix with ρ(A) = 1 then it can be decomposed as P ⊕ (1 − P)A so that A^n = P + (1 − P)A^n. As n increases the second of these terms decays to zero, leaving P as the limit of A^n as n → ∞.

The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates, then the Perron projection is just wv/vw. The spectral projections aren't neatly blocked as in the Jordan form: here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality, which is what facilitates the decomposition.

Peripheral projection


The analysis when A is irreducible and non-negative is broadly similar. The Perron projection is still positive, but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 − P)A decaying as in the primitive case whenever ρ(A) = 1. So we consider the peripheral projection, which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal.
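This property can be observed numerically. A sketch using an arbitrary irreducible matrix of period 2 (an illustrative choice, not an example from the article), summing the spectral projections over the peripheral eigenvalues:

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])   # irreducible, period 2; spectrum sqrt(2), -sqrt(2), 0

vals, vecs = np.linalg.eig(A)
rho = max(abs(vals))

# Peripheral projection = sum of the spectral projections over all
# eigenvalues of modulus rho(A).
Vinv = np.linalg.inv(vecs)
P = sum(np.outer(vecs[:, k], Vinv[k, :])
        for k in range(len(vals)) if abs(abs(vals[k]) - rho) < 1e-9).real

assert (P >= -1e-9).all()       # non-negative
assert (np.diag(P) > 0).all()   # positive diagonal
```

Here the peripheral spectrum is {√2, −√2}, and the resulting projection is non-negative with a strictly positive diagonal, as the statement predicts.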

Cyclicity


Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection then the matrix R = AP = PA is non-negative and irreducible, R^h = P, and the cyclic group P, R, R^2, ..., R^(h−1) represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula {\displaystyle \scriptstyle h^{-1}\sum _{1}^{h}\lambda ^{-k}R^{k}}. All of these projections (including the Perron projection) have the same positive diagonal; moreover, choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A, so the difference between A^n and R^n is A^n − R^n = (1 − P)A^n, representing the transients of A^n, which eventually decay to zero. P may be computed as the limit of A^(nh) as n → ∞.
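The projection formula can be exercised on the simplest cyclic case, a 3-cycle permutation matrix, for which ρ(A) = 1, h = 3 and P = I (so R = A). An illustrative sketch, not an example from the article:

```python
import numpy as np

h = 3
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)   # cyclic permutation: A^3 = I

lam = np.exp(2j * np.pi / h)               # a peripheral eigenvalue of A

# Spectral projection at lam: h^{-1} * sum_{k=1..h} lam^{-k} R^k, R = A.
Q = sum(lam**(-k) * np.linalg.matrix_power(A, k) for k in range(1, h + 1)) / h

assert np.allclose(A @ Q, lam * Q)               # projects onto the lam-eigenspace
assert np.allclose(Q @ Q, Q)                     # idempotent
assert np.allclose(abs(Q), np.ones((3, 3)) / 3)  # moduli give the Perron projection
```

The last assertion illustrates the remark above: taking the modulus of every entry of any of these projections yields the Perron projection, here the rank-one matrix with all entries 1/3.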

Counterexamples


The matrices L = {\displaystyle \left({\begin{smallmatrix}1&0&0\\1&0&0\\1&1&1\end{smallmatrix}}\right)}, P = {\displaystyle \left({\begin{smallmatrix}1&0&0\\1&0&0\\\!-1&1&1\end{smallmatrix}}\right)}, T = {\displaystyle \left({\begin{smallmatrix}0&1&1\\1&0&1\\1&1&0\end{smallmatrix}}\right)}, M = {\displaystyle \left({\begin{smallmatrix}0&1&0&0&0\\1&0&0&0&0\\0&0&0&1&0\\0&0&0&0&1\\0&0&1&0&0\end{smallmatrix}}\right)} provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of L are both equal to P; thus, when the original matrix is reducible, the projections may lose non-negativity and there is no chance of expressing them as limits of its powers. The matrix T is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive, but this example demonstrates that the converse is false. M is an example of a matrix with several missing spectral teeth. If ω = e^(iπ/3) then ω^6 = 1 and the eigenvalues of M are {1, ω^2, ω^3 = −1, ω^4} with a dimension-2 eigenspace for +1, so ω and ω^5 are both absent. More precisely, since M is block-diagonal cyclic, the eigenvalues are {1, −1} for the first block and {1, ω^2, ω^4} for the lower one.[citation needed]
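The claim about M's spectrum is easy to verify numerically; a sketch with NumPy:

```python
import numpy as np

M = np.array([[0, 1, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 1, 0, 0]], dtype=float)

w = np.exp(1j * np.pi / 3)        # omega, a primitive 6th root of unity
vals = np.linalg.eigvals(M)

# {1, -1} from the 2x2 swap block, {1, omega^2, omega^4} from the 3-cycle.
assert (abs(vals - 1) < 1e-8).sum() == 2    # dimension-2 eigenspace for +1
assert (abs(vals + 1) < 1e-8).sum() == 1    # omega^3 = -1
assert (abs(vals - w**2) < 1e-8).sum() == 1
assert (abs(vals - w**4) < 1e-8).sum() == 1
assert (abs(vals - w) > 1e-6).all()         # omega is absent...
assert (abs(vals - w**5) > 1e-6).all()      # ...and so is omega^5
```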

Terminology


A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms strictly positive and positive to mean > 0 and ≥ 0 respectively. In this article positive means > 0 and non-negative means ≥ 0. Another vexed area concerns decomposability and reducibility: irreducible is an overloaded term. For avoidance of doubt, a non-zero non-negative square matrix A such that 1 + A is primitive is sometimes said to be connected. Then irreducible non-negative square matrices and connected matrices are synonymous.[33]

The non-negative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a stochastic eigenvector.

Perron–Frobenius eigenvalue and dominant eigenvalue are alternative names for the Perron root. Spectral projections are also known as spectral projectors and spectral idempotents. The period is sometimes referred to as the index of imprimitivity or the order of cyclicity.


Notes

  1. ^ Bowles, Samuel (1981). "Technical change and the profit rate: a simple proof of the Okishio theorem". Cambridge Journal of Economics. 5 (2): 183–186. doi:10.1093/oxfordjournals.cje.a035479. ISSN 0309-166X.
  2. ^ Meyer 2000, § 8.3.6, p. 681.
  3. ^ Meyer 2000, § 8.3.7, p. 683.
  4. ^ Langville & Meyer 2006, § 15.2, p. 167: Langville, Amy N.; Meyer, Carl D. (2006). Google's PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press. ISBN 978-0691122021.
  5. ^ Keener 1993, p. 80.
  6. ^ Landau, Edmund (1895), "Zur relativen Wertbemessung der Turnierresultaten", Deutsches Wochenschach, XI: 366–369.
  7. ^ Landau, Edmund (1915), "Über Preisverteilung bei Spielturnieren", Zeitschrift für Mathematik und Physik, 63: 192–202.
  8. ^ Birkhoff, Garrett; Varga, Richard S. (1958). "Reactor criticality and nonnegative matrices". Journal of the Society for Industrial and Applied Mathematics. 6 (4): 354–377.
  9. ^ Donsker, M. D.; Varadhan, S. S. (1975). "On a variational formula for the principal eigenvalue for operators with maximum principle". Proceedings of the National Academy of Sciences. 72 (3): 780–783.
  10. ^ Friedland, S. (1981). "Convex spectral functions". Linear and Multilinear Algebra. 9 (4): 299–316.
  11. ^ Fiedler, Miroslav; Johnson, Charles R.; Markham, Thomas L.; Neumann, Michael (1985). "A Trace Inequality for M-matrices and the Symmetrizability of a Real Matrix by a Positive Diagonal Matrix". Linear Algebra and Its Applications. 71: 81–94. doi:10.1016/0024-3795(85)90237-X.
  12. ^ a b c d Meyer 2000, ch. 8, p. 665.
  13. ^ Meyer 2000, § 8.3, p. 670.
  14. ^ Gantmacher 2000, ch. XIII.3, theorem 3, p. 66.
  15. ^ Kitchens, Bruce (1998), Symbolic Dynamics: One-sided, Two-sided and Countable State Markov Shifts, Springer, ISBN 9783540627388.
  16. ^ Minc, Henryk (1988). Nonnegative Matrices. New York: John Wiley & Sons. p. 6 [Corollary 2.2]. ISBN 0-471-83966-3.
  17. ^ Gradshtein, Izrailʹ Solomonovich (2014). Table of Integrals, Series, and Products. Elsevier. ISBN 978-0-12-384934-2. OCLC 922964628.
  18. ^ Meyer 2000, claim 8.3.11, p. 675.
  19. ^ Gantmacher 2000, § XIII.5, theorem 9.
  20. ^ Meyer 2000, p. 679.
  21. ^ Meyer 2000, example 8.3.2, p. 677.
  22. ^ Gantmacher 2000, § XIII.2.2, p. 62.
  23. ^ Meyer 2000, example 8.3.3, p. 678.
  24. ^ Meyer 2000, example 8.3.4, p. 679 and exercise 8.3.9, p. 685.
  25. ^ Varga 2002, § 2.43, p. 51.
  26. ^ Brualdi, Richard A.; Ryser, Herbert J. (1992). Combinatorial Matrix Theory. Cambridge: Cambridge University Press. ISBN 978-0-521-32265-2.
  27. ^ Brualdi, Richard A.; Cvetkovic, Dragos (2009). A Combinatorial Approach to Matrix Theory and Its Applications. Boca Raton, FL: CRC Press. ISBN 978-1-4200-8223-4.
  28. ^ Mackey, Michael C. (1992). Time's Arrow: The Origins of Thermodynamic Behaviour. New York: Springer-Verlag. ISBN 978-0-387-97702-7.
  29. ^ Gantmacher 2000, § XIII.2.2, p. 54.
  30. ^ Smith, Roger (2006). "A Spectral Theoretic Proof of Perron–Frobenius". Mathematical Proceedings of the Royal Irish Academy. pp. 29–35. doi:10.3318/PRIA.2002.102.1.29.
  31. ^ Meyer 2000, claim 8.2.10, p. 666.
  32. ^ Meyer 2000, ch. 8, p. 666.
  33. ^ For surveys of results on irreducibility, see Olga Taussky-Todd and Richard A. Brualdi.

References


Further reading

  • Abraham Berman,Robert J. Plemmons,Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM.ISBN 0-89871-321-8.
  • Chris Godsil andGordon Royle,Algebraic Graph Theory, Springer, 2001.
  • A. Graham, Nonnegative Matrices and Applicable Topics in Linear Algebra, John Wiley & Sons, New York, 1987.
  • R. A. Horn and C.R. Johnson,Matrix Analysis, Cambridge University Press, 1990
  • Bas Lemmens and Roger Nussbaum,Nonlinear Perron-Frobenius Theory, Cambridge Tracts in Mathematics 189, Cambridge Univ. Press, 2012.
  • S. P. Meyn and R. L. Tweedie,Markov Chains and Stochastic Stability London: Springer-Verlag, 1993.ISBN 0-387-19832-6 (2nd edition, Cambridge University Press, 2009)
  • Seneta, E.Non-negative matrices and Markov chains. 2nd rev. ed., 1981, XVI, 288 p., Softcover Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973)ISBN 978-0-387-29765-1
  • Suprunenko, D.A. (2001) [1994],"Perron–Frobenius theorem",Encyclopedia of Mathematics,EMS Press (The claim thatAj has ordern/h at the end of the statement of the theorem is incorrect.)
  • Varga, Richard S. (2002),Matrix Iterative Analysis (2nd ed.), Springer-Verlag.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Perron–Frobenius_theorem&oldid=1297033140"