
Random matrix

From Wikipedia, the free encyclopedia
Matrix-valued random variable

In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms,[1][2] the thermal conductivity of a lattice, or the emergence of quantum chaos,[3] can be modeled mathematically as problems concerning large, random matrices.

History


Random matrix theory first gained attention beyond the mathematics literature in the context of nuclear physics. Experiments by Enrico Fermi and others demonstrated evidence that individual nucleons cannot be approximated as moving independently, leading Niels Bohr to formulate the idea of a compound nucleus. Because there was no knowledge of direct nucleon-nucleon interactions, Eugene Wigner and Leonard Eisenbud approximated that the nuclear Hamiltonian could be modeled as a random matrix. For larger atoms, the distribution of the energy eigenvalues of the Hamiltonian could be computed in order to approximate scattering cross sections by invoking the Wishart distribution.[4]

Applications


Physics


In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms.[1][2] Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution.[5] In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation.

In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.[3]

In quantum optics, transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the boson sampling model).[6] Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is, beam splitters and phase shifters).[7]

Mathematical statistics and numerical analysis


In multivariate statistics, random matrices were introduced by John Wishart, who sought to estimate covariance matrices of large samples.[8] Chernoff-, Bernstein-, and Hoeffding-type inequalities can typically be strengthened when applied to the maximal eigenvalue (i.e. the eigenvalue of largest magnitude) of a finite sum of random Hermitian matrices.[9] Random matrix theory is used to study the spectral properties of random matrices—such as sample covariance matrices—which is of particular interest in high-dimensional statistics. Random matrix theory has also seen applications in neural networks[10] and deep learning, with recent work using random matrices to show that hyper-parameter tunings can be cheaply transferred between large neural networks without the need for re-training.[11]

In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine[12] to describe computation errors in operations such as matrix multiplication. Although random entries are traditional "generic" inputs to an algorithm, the concentration of measure associated with random matrix distributions implies that random matrices will not test large portions of an algorithm's input space.[13]

Number theory


In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modeled by the distribution of eigenvalues of certain random matrices.[14] The connection was first discovered by Hugh Montgomery and Freeman Dyson. It is connected to the Hilbert–Pólya conjecture.

Free probability


The relation of free probability to random matrices[15] is a key reason for the wide use of free probability in other subjects. Voiculescu introduced the concept of freeness around 1983 in an operator-algebraic context; at the beginning there was no relation at all to random matrices. This connection was only revealed later, in 1991, by Voiculescu,[16] who was motivated by the fact that the limit distribution he had found in his free central limit theorem had appeared before as Wigner's semicircle law in the random matrix context.

Computational neuroscience


In the field of computational neuroscience, random matrices are increasingly used to model the network of synaptic connections between neurons in the brain. Dynamical models of neuronal networks with a random connectivity matrix were shown to exhibit a phase transition to chaos[17] when the variance of the synaptic weights crosses a critical value, in the limit of infinite system size. Results on random matrices have also shown that the dynamics of random-matrix models are insensitive to the mean connection strength. Instead, the stability of fluctuations depends on the variation in connection strength,[18][19] and the time to synchrony depends on the network topology.[20][21]

In the analysis of massive data such as fMRI, random matrix theory has been applied to perform dimension reduction. When applying an algorithm such as PCA, it is important to be able to select the number of significant components. Multiple criteria for selecting components exist (based on explained variance, Kaiser's method, eigenvalues, etc.). Random matrix theory contributes here the Marchenko–Pastur distribution, which gives the theoretical upper and lower limits of the eigenvalues of a covariance matrix computed from purely random data. A covariance matrix modeled this way serves as the null hypothesis, allowing one to find the eigenvalues (and their eigenvectors) that deviate from the theoretical random range. The components thus singled out form the reduced dimensional space (see examples in fMRI[22][23]).

Optimal control


In optimal control theory, the evolution of n state variables through time depends at any time on their own values and on the values of k control variables. With linear evolution, matrices of coefficients appear in the state equation (equation of evolution). In some problems the values of the parameters in these matrices are not known with certainty, in which case there are random matrices in the state equation and the problem is known as one of stochastic control.[24]: ch. 13 [25] A key result in the case of linear-quadratic control with stochastic matrices is that the certainty equivalence principle does not apply: while in the absence of multiplier uncertainty (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, the optimal policy may differ if the state equation contains random coefficients.

Computational mechanics


In computational mechanics, epistemic uncertainties arising from a lack of knowledge about the physics of the modeled system give rise to mathematical operators, associated with the computational model, that are deficient in a certain sense: such operators lack certain properties linked to unmodeled physics. When such operators are discretized to perform computational simulations, their accuracy is limited by the missing physics. To compensate for this deficiency of the mathematical operator, it is not enough to make the model parameters random; it is necessary to consider a mathematical operator that is itself random and can thus generate families of computational models, in the hope that one of these captures the missing physics. Random matrices have been used in this sense,[26] with applications in vibroacoustics, wave propagation, materials science, fluid mechanics, heat transfer, etc.

Engineering


Random matrix theory can be applied to electrical and communications engineering research efforts to study, model, and develop massive multiple-input multiple-output (MIMO) radio systems.[citation needed]

Types


Gaussian ensembles

Main article: Gaussian ensemble

The most commonly studied random matrix distributions are the Gaussian ensembles: GOE, GUE and GSE. They are often denoted by their Dyson index, β = 1 for GOE, β = 2 for GUE, and β = 4 for GSE. This index counts the number of real components per matrix element.

Definitions


The Gaussian unitary ensemble $\mathrm{GUE}(n)$ is described by the Gaussian measure with density

$$\frac{1}{Z_{\mathrm{GUE}(n)}}\, e^{-\frac{n}{2} \mathrm{tr}\, H^2}$$

on the space of $n \times n$ Hermitian matrices $H = (H_{ij})_{i,j=1}^n$. Here

$$Z_{\mathrm{GUE}(n)} = 2^{n/2} \left( \frac{\pi}{n} \right)^{\frac{1}{2} n^2}$$

is a normalization constant, chosen so that the integral of the density is equal to one. The term unitary refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.

The Gaussian orthogonal ensemble $\mathrm{GOE}(n)$ is described by the Gaussian measure with density

$$\frac{1}{Z_{\mathrm{GOE}(n)}}\, e^{-\frac{n}{4} \mathrm{tr}\, H^2}$$

on the space of $n \times n$ real symmetric matrices $H = (H_{ij})_{i,j=1}^n$. Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry. Equivalently, it is generated by $H = (G + G^T)/\sqrt{2n}$, where $G$ is an $n \times n$ matrix with IID samples from the standard normal distribution.

The Gaussian symplectic ensemble $\mathrm{GSE}(n)$ is described by the Gaussian measure with density

$$\frac{1}{Z_{\mathrm{GSE}(n)}}\, e^{-n\, \mathrm{tr}\, H^2}$$

on the space of $n \times n$ Hermitian quaternionic matrices, i.e. symmetric square matrices composed of quaternions, $H = (H_{ij})_{i,j=1}^n$. Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.

Basic properties


Point correlation functions. The ensembles as defined here have Gaussian distributed matrix elements with mean $\langle H_{ij} \rangle = 0$ and two-point correlations given by

$$\langle H_{ij} H^*_{mn} \rangle = \langle H_{ij} H_{nm} \rangle = \frac{1}{n} \delta_{im} \delta_{jn} + \frac{2 - \beta}{n \beta} \delta_{in} \delta_{jm},$$

from which all higher correlations follow by Isserlis' theorem.

The moment generating function for the GOE is

$$\mathbb{E}\left[ e^{\mathrm{tr}(VH)} \right] = e^{\frac{1}{4n} \| V + V^T \|_F^2},$$

where $\|\cdot\|_F$ is the Frobenius norm.

Spectral distribution

Figure: spectral density of GOE/GUE/GSE for $N = 2^0, 2^1, \ldots, 2^5$, normalized so that the distributions converge to the semicircle distribution. The number of "humps" equals $N$.

The joint probability density for the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of GUE/GOE/GSE is given by

$$\frac{1}{Z_{\beta,n}} \prod_{k=1}^n e^{-\frac{\beta}{4} \lambda_k^2} \prod_{i<j} \left| \lambda_j - \lambda_i \right|^\beta, \qquad (1)$$

where $Z_{\beta,n}$ is a normalization constant which can be explicitly computed; see Selberg integral. In the case of GUE ($\beta = 2$), formula (1) describes a determinantal point process. Eigenvalues repel, as the joint probability density has a zero (of $\beta$th order) at coinciding eigenvalues $\lambda_j = \lambda_i$, and $Z_{2,n} = (2\pi)^{n/2} \prod_{k=1}^n k!$.

More succinctly,

$$\frac{1}{Z_{\beta,n}}\, e^{-\frac{\beta}{4} \|\lambda\|_2^2}\, |\Delta_n(\lambda)|^\beta,$$

where $\Delta_n$ is the Vandermonde determinant.
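The repulsion is easy to observe numerically. The sketch below (ours, not from the article) samples many $2 \times 2$ GUE matrices, for which the normalized spacing density is the β = 2 Wigner surmise $p(s) = \frac{32}{\pi^2} s^2 e^{-4s^2/\pi}$, and checks the histogram against it; note the quadratic vanishing at $s = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
gaps = []
for _ in range(20000):
    A = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    H = (A + A.conj().T) / np.sqrt(2)
    lam = np.linalg.eigvalsh(H)
    gaps.append(lam[1] - lam[0])
s = np.array(gaps) / np.mean(gaps)           # normalize to unit mean spacing
hist, edges = np.histogram(s, bins=40, range=(0, 4), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
surmise = (32 / np.pi**2) * mid**2 * np.exp(-4 * mid**2 / np.pi)
print(np.max(np.abs(hist - surmise)))        # small deviation from the surmise
```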

The distributions of the largest eigenvalue for the GOE and the GUE are explicitly solvable.[27] After appropriate shifting and scaling, they converge to the Tracy–Widom distribution.

The spectrum, divided by $\sqrt{N\sigma^2}$, converges in distribution to the semicircular distribution on the interval $[-2, +2]$: $\rho(x) = \frac{1}{2\pi} \sqrt{4 - x^2}$. Here $\sigma^2$ is the variance of the off-diagonal entries; the variance of the diagonal entries does not matter.
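As a quick numerical illustration (our sketch), one can histogram the rescaled spectrum of a single large symmetric Gaussian matrix and compare it with $\rho(x) = \frac{1}{2\pi}\sqrt{4 - x^2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
G = rng.standard_normal((N, N))
H = (G + G.T) / np.sqrt(2)                  # off-diagonal variance sigma^2 = 1
lam = np.linalg.eigvalsh(H) / np.sqrt(N)    # rescale by sqrt(N * sigma^2)
hist, edges = np.histogram(lam, bins=50, range=(-2, 2), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
semicircle = np.sqrt(4 - mid**2) / (2 * np.pi)
print(np.max(np.abs(hist - semicircle)))    # small for large N
```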

Wishart matrices

Main article: Wishart distribution

Wishart matrices are $n \times n$ random matrices of the form $H = XX^*$, where $X$ is an $n \times m$ random matrix ($m \geq n$) with independent entries, and $X^*$ is its conjugate transpose. In the important special case considered by Wishart, the entries of $X$ are identically distributed Gaussian random variables (either real or complex).

The limit of the empirical spectral measure of Wishart matrices was found[28] by Vladimir Marchenko and Leonid Pastur.
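A numerical sketch (ours) of the Marchenko–Pastur limit for real Gaussian entries, using the standard form of the limiting density for aspect ratio $\lambda = n/m \leq 1$ and unit-variance entries, supported on $[(1-\sqrt{\lambda})^2, (1+\sqrt{\lambda})^2]$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 1000, 4000                            # aspect ratio lam = n/m = 0.25
X = rng.standard_normal((n, m))              # unit-variance entries
W = X @ X.T / m                              # sample covariance (Wishart) matrix
lam_ratio = n / m
a, b = (1 - np.sqrt(lam_ratio))**2, (1 + np.sqrt(lam_ratio))**2
eig = np.linalg.eigvalsh(W)
hist, edges = np.histogram(eig, bins=50, range=(a, b), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
mp = np.sqrt((b - mid) * (mid - a)) / (2 * np.pi * lam_ratio * mid)
print(np.max(np.abs(hist - mp)))             # small for large n, m
```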

Random band matrix

Main article: Random band matrix

Random band matrices are random matrices in which all entries outside a certain band around the diagonal are zero.[29] They can be used as rough models of systems of interacting particles arranged in a grid, in which each particle interacts only with its neighbors; this is an improvement on the mean-field model.[29]

In one dimension, this means that $H_{ij} = 0$ if $|i - j| > W$, where $W$ is the band width. Physically, this means that particles $i$ and $j$ do not interact when their separation exceeds $W$. In more than one dimension, $i$ and $j$ are no longer integers but vectors with integer components, and $H_{ij} = 0$ if $|i - j|_{L^1} > W$, where $|\cdot|_{L^1}$ denotes the taxicab distance between the two locations. Moreover, $H_{ij} = H_{ji}$ for all $i, j$, and the nonzero values of $H_{ij}$ have variances $\sigma_{ij}^2$ of the same order of magnitude, normalized such that $\sum_j \sigma_{ij}^2 = 1$ for each value of $i$.[29]
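A minimal construction of such a one-dimensional Gaussian band matrix (our sketch; interior rows have $2W+1$ nonzero entries, so taking each variance equal to $1/(2W+1)$ satisfies the normalization up to boundary effects):

```python
import numpy as np

def random_band_matrix(n, W, rng=None):
    """Symmetric Gaussian band matrix: H_ij = 0 for |i - j| > W,
    nonzero entries with variance 1/(2W + 1)."""
    rng = rng or np.random.default_rng()
    G = rng.standard_normal((n, n)) / np.sqrt(2 * W + 1)
    H = np.triu(G) + np.triu(G, 1).T          # symmetrize: H_ij = H_ji
    i, j = np.indices((n, n))
    H[np.abs(i - j) > W] = 0.0                # kill entries outside the band
    return H

H = random_band_matrix(1000, W=20)
print(np.count_nonzero(H[500]))               # 2W + 1 = 41 in the interior
```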

Random unitary matrices

Main article: Circular ensembles

Non-Hermitian random matrices

Main article: Circular law

Spectral theory


The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.[30]

Empirical spectral measure


The empirical spectral measure $\mu_H$ of $H$ is defined by

$$\mu_H(A) = \frac{1}{n}\, \# \left\{ \text{eigenvalues of } H \text{ in } A \right\} = N_{\mathbf{1}_A, H}, \qquad A \subset \mathbb{R},$$

or, more succinctly, if $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $H$,

$$\mu_H(d\lambda) = \frac{1}{n} \sum_i \delta_{\lambda_i}(d\lambda).$$
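In code, the empirical spectral measure of an interval is simply the fraction of eigenvalues that land in it (a trivial sketch, ours):

```python
import numpy as np

def empirical_spectral_measure(H, a, b):
    """mu_H([a, b]) = (# eigenvalues of H in [a, b]) / n."""
    lam = np.linalg.eigvalsh(H)
    return np.mean((lam >= a) & (lam <= b))
```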

Usually, the limit of $\mu_H$ is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted $N(\lambda)$. If the integrated density of states is differentiable, its derivative is called the density of states and is denoted $\rho(\lambda)$.

Types of convergence


Given a matrix ensemble, we say that its spectral measures converge weakly to $\rho$ if for any measurable set $A$ the ensemble average converges:

$$\lim_{n \to \infty} \mathbb{E}_H[\mu_H(A)] = \rho(A).$$

The convergence is weakly almost sure if, sampling $H_1, H_2, H_3, \dots$ independently from the ensemble, with probability 1

$$\lim_{n \to \infty} \mu_{H_n}(A) = \rho(A)$$

for any measurable set $A$.

In another sense, weak almost sure convergence means that we sample $H_1, H_2, H_3, \dots$, not independently, but by "growing" them (a stochastic process); then, with probability 1, $\lim_{n \to \infty} \mu_{H_n}(A) = \rho(A)$ for any measurable set $A$.

For example, we can "grow" a sequence of matrices from the Gaussian ensemble as follows:
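One standard coupling (a sketch under our conventions): fix a single infinite array of i.i.d. standard complex Gaussians and let $H_n$ be the rescaled Hermitization of its top-left $n \times n$ corner. Each $H_n$ is then marginally $\mathrm{GUE}(n)$, and the whole nested sequence lives on one probability space.

```python
import numpy as np

rng = np.random.default_rng(3)
N_max = 1024
# One fixed array of i.i.d. standard complex Gaussians, shared by all n.
A = (rng.standard_normal((N_max, N_max))
     + 1j * rng.standard_normal((N_max, N_max))) / np.sqrt(2)

def grown_gue(n):
    """H_n = rescaled top-left n-by-n corner; marginally GUE(n) for each n."""
    An = A[:n, :n]
    return (An + An.conj().T) / np.sqrt(2 * n)

for n in (4, 16, 64, 256, 1024):
    H = grown_gue(n)                          # nested samples, one randomness
    print(n, np.linalg.eigvalsh(H).max())     # top eigenvalue -> 2 as n grows
```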

Note that a generic matrix ensemble does not admit such a growth construction, but most common ones, including the three Gaussian ensembles, do.

Global regime


In the global regime, one is interested in the distribution of linear statistics of the form $N_{f,H} = n^{-1}\, \mathrm{tr}\, f(H)$.

The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see Wigner semicircle distribution and Wigner surmise. As far as sample covariance matrices are concerned, a theory was developed by Marčenko and Pastur.[28][31]

The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.[32]

Fluctuations


For the linear statistics $N_{f,H} = n^{-1} \sum_j f(\lambda_j)$, one is also interested in the fluctuations about $\int f(\lambda)\, dN(\lambda)$. For many classes of random matrices, a central limit theorem of the form

$$\frac{N_{f,H} - \int f(\lambda)\, dN(\lambda)}{\sigma_{f,n}} \overset{D}{\longrightarrow} N(0, 1)$$

is known.[33][34]

The variational problem for the unitary ensembles


Consider the measure

$$d\mu_N(\lambda) = \frac{1}{\widetilde{Z}_N}\, e^{-H_N(\lambda)}\, d\lambda, \qquad H_N(\lambda) = -\sum_{j \neq k} \ln|\lambda_j - \lambda_k| + N \sum_{j=1}^N Q(\lambda_j),$$

where $Q$ is the potential of the ensemble, and let $\nu$ be the empirical spectral measure.

We can rewrite $H_N(\lambda)$ in terms of $\nu$ as

$$H_N(\lambda) = N^2 \left[ -\iint_{x \neq y} \ln|x-y|\, d\nu(x)\, d\nu(y) + \int Q(x)\, d\nu(x) \right],$$

so the probability measure takes the form

$$d\mu_N(\lambda) = \frac{1}{\widetilde{Z}_N}\, e^{-N^2 I_Q(\nu)}\, d\lambda,$$

where $I_Q(\nu)$ is the functional inside the square brackets above.

Now let

$$M_1(\mathbb{R}) = \left\{ \nu : \nu \geq 0,\ \int_{\mathbb{R}} d\nu = 1 \right\}$$

be the space of one-dimensional probability measures, and consider the minimization problem

$$E_Q = \inf_{\nu \in M_1(\mathbb{R})} \left[ -\iint_{x \neq y} \ln|x-y|\, d\nu(x)\, d\nu(y) + \int Q(x)\, d\nu(x) \right].$$

There exists a unique equilibrium measure $\nu_Q$ attaining $E_Q$; it is characterized by the Euler–Lagrange variational conditions, for some real constant $l$:

$$2 \int_{\mathbb{R}} \log|x-y|\, d\nu(y) - Q(x) = l, \quad x \in J,$$
$$2 \int_{\mathbb{R}} \log|x-y|\, d\nu(y) - Q(x) \leq l, \quad x \in \mathbb{R} \setminus J,$$

where $J = \bigcup_{j=1}^q [a_j, b_j]$ is the support of the measure. Define

$$q(x) = -\left( \frac{Q'(x)}{2} \right)^2 + \int \frac{Q'(x) - Q'(y)}{x - y}\, d\nu_Q(y).$$

The equilibrium measure $\nu_Q$ then has the Radon–Nikodym density

$$\frac{d\nu_Q(x)}{dx} = \frac{1}{\pi} \sqrt{q(x)}.$$[35]
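As a worked example (a standard computation, included here for concreteness), take the quadratic potential $Q(x) = x^2$. Then $Q'(x) = 2x$, so $\left( \frac{Q'(x)}{2} \right)^2 = x^2$, while $\frac{Q'(x) - Q'(y)}{x - y} = 2$ for all $x \neq y$, whence $q(x) = 2 - x^2$ independently of $\nu_Q$. The equilibrium measure is therefore the semicircle law

$$\frac{d\nu_Q(x)}{dx} = \frac{1}{\pi} \sqrt{2 - x^2}, \qquad x \in J = [-\sqrt{2}, \sqrt{2}],$$

and indeed $\int_{-\sqrt{2}}^{\sqrt{2}} \frac{1}{\pi} \sqrt{2 - x^2}\, dx = 1$.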

Mesoscopic regime


The typical statement of the Wigner semicircle law is equivalent to the following: for each fixed interval $[\lambda_0 - \Delta\lambda, \lambda_0 + \Delta\lambda]$ centered at a point $\lambda_0$, as $N$, the dimension of the Gaussian ensemble, increases, the proportion of eigenvalues falling within the interval converges to $\int_{\lambda_0 - \Delta\lambda}^{\lambda_0 + \Delta\lambda} \rho(t)\, dt$, where $\rho(t)$ is the density of the semicircular distribution.[36][37]

If $\Delta\lambda$ is allowed to decrease as $N$ increases, we obtain strictly stronger theorems, named "local laws" or results in the "mesoscopic regime".

The mesoscopic regime is intermediate between the local and the global one: in it, one is interested in the limit distribution of eigenvalues in a set that shrinks to zero, but slowly enough that the number of eigenvalues inside tends to infinity.

For example, the Ginibre ensemble has a mesoscopic law: for any sequence of shrinking disks inside the unit disk with areas $A_n = O(n^{-1+\epsilon})$, the conditional distribution of the spectrum inside the disks also converges to a uniform distribution. That is, if we cut out the shrinking disks along with the spectrum falling inside them, and then scale the disks up to unit area, we see the spectra converging to a flat distribution in the disks.[37]

Local regime


In the local regime, one is interested in the limit distribution of eigenvalues in a set that shrinks so fast that the number of eigenvalues inside remains $O(1)$.

Typically this means the study of spacings between eigenvalues and, more generally, of the joint distribution of eigenvalues in an interval of length of order $1/n$. One distinguishes between bulk statistics, pertaining to intervals inside the support of the limiting spectral measure, and edge statistics, pertaining to intervals near the boundary of the support.

Bulk statistics


Formally, fix $\lambda_0$ in the interior of the support of $N(\lambda)$. Then consider the point process

$$\Xi(\lambda_0) = \sum_j \delta\big( \cdot - n \rho(\lambda_0)(\lambda_j - \lambda_0) \big),$$

where $\lambda_j$ are the eigenvalues of the random matrix.

The point process $\Xi(\lambda_0)$ captures the statistical properties of eigenvalues in the vicinity of $\lambda_0$. For the Gaussian ensembles, the limit of $\Xi(\lambda_0)$ is known;[5] for the GUE it is a determinantal point process with the kernel

$$K(x, y) = \frac{\sin \pi (x - y)}{\pi (x - y)}$$

(the sine kernel).

The universality principle postulates that the limit of $\Xi(\lambda_0)$ as $n \to \infty$ should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on $\lambda_0$). Rigorous proofs of universality are known for invariant matrix ensembles[38][39] and Wigner matrices.[40][41]

Edge statistics

Main article: Tracy–Widom distribution

One example of edge statistics is the Tracy–Widom distribution.

As another example, consider the Ginibre ensemble, which can be real or complex. The real Ginibre ensemble has i.i.d. standard Gaussian entries $\mathcal{N}(0, 1)$, and the complex Ginibre ensemble has i.i.d. standard complex Gaussian entries $\mathcal{N}(0, 1/2) + i\, \mathcal{N}(0, 1/2)$.

Now let $G_n$ be sampled from the real or complex ensemble, and let $\rho(G_n)$ be the absolute value of its maximal eigenvalue: $\rho(G_n) := \max_j |\lambda_j|$. We have the following theorem for the edge statistics:[42]

Edge statistics of the Ginibre ensemble. For $G_n$ and $\rho(G_n)$ as above, with probability one,

$$\lim_{n \to \infty} \frac{1}{\sqrt{n}}\, \rho(G_n) = 1.$$

Moreover, if $\gamma_n = \log\left( \frac{n}{2\pi} \right) - 2 \log(\log(n))$ and

$$Y_n := \sqrt{4 n \gamma_n} \left( \frac{1}{\sqrt{n}}\, \rho(G_n) - 1 - \sqrt{\frac{\gamma_n}{4n}} \right),$$

then $Y_n$ converges in distribution to the Gumbel law, i.e., the probability measure on $\mathbb{R}$ with cumulative distribution function $F_{\mathrm{Gum}}(x) = e^{-e^{-x}}$.

This theorem refines the circular law of the Ginibre ensemble. In words, the circular law says that the spectrum of $\frac{1}{\sqrt{n}} G_n$ almost surely falls uniformly on the unit disc, while the edge statistics theorem states that the radius of the almost-unit-disk is about $1 - \sqrt{\frac{\gamma_n}{4n}}$ and fluctuates on a scale of $\frac{1}{\sqrt{4 n \gamma_n}}$, according to the Gumbel law.
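A quick simulation of this edge behaviour (our sketch; $n$ is modest, so finite-size corrections are visible):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 400, 200
radii = []
for _ in range(trials):
    # Complex Ginibre: i.i.d. N(0, 1/2) + i N(0, 1/2) entries.
    G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    radii.append(np.abs(np.linalg.eigvals(G)).max() / np.sqrt(n))
gamma = np.log(n / (2 * np.pi)) - 2 * np.log(np.log(n))
Y = np.sqrt(4 * n * gamma) * (np.array(radii) - 1 - np.sqrt(gamma / (4 * n)))
print(np.mean(radii))                        # close to 1 (circular law edge)
print(np.mean(Y <= 0), np.exp(-1))           # Gumbel: P(Y <= 0) = e^{-1}
```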

Spectral rigidity


The phenomenon of spectral rigidity states that the eigenvalues of most commonly used matrix ensembles tend to be more uniformly distributed than they would be if they were sampled independently at random. That is, they clump together less than a Poisson point process would. The phenomenon is also called eigenvalue rigidity or level repulsion.

More quantitatively, suppose that a matrix ensemble has limiting spectral density measure $\mu$. Fix some subset $S$ such that $0 < \mu(S) < 1$. This is the proportion of eigenvalues that falls within $S$ in the limit of large $N$, so the expected number of eigenvalues falling within $S$ is $N\mu(S)$. For $N$ points sampled completely independently of each other, the actual number would be $N\mu(S) + O\big( \sqrt{N\mu(S)(1 - \mu(S))} \big)$, since $\sqrt{N\mu(S)(1 - \mu(S))}$ is the standard deviation of the number of independent points falling within $S$. Conversely, if the points were completely rigid, the actual number would equal $N\mu(S)$ without fluctuation. It turns out that in many matrix ensembles the number of points falling within $S$ is $N\mu(S) + O(\sqrt{\ln N})$: not completely rigid, but very close to it.[43][44] Spectral rigidity has been numerically observed in the zeros of the Riemann zeta function.[45]
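A numerical check of this rigidity (our sketch): count the eigenvalues of rescaled GUE matrices falling in a fixed bulk interval, and compare the fluctuation of the count with the $\sqrt{N\mu(S)(1 - \mu(S))}$ scale of independent points.

```python
import numpy as np

rng = np.random.default_rng(5)
N, trials = 400, 100
counts = []
for _ in range(trials):
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    H = (A + A.conj().T) / np.sqrt(2 * N)     # GUE, spectrum ~ [-2, 2]
    lam = np.linalg.eigvalsh(H)
    counts.append(np.sum((lam > -1) & (lam < 1)))   # bulk interval S = (-1, 1)
counts = np.array(counts)
mu_S = counts.mean() / N                      # ~ 0.61 under the semicircle law
print(counts.std())                           # O(sqrt(log N)): a few eigenvalues
print(np.sqrt(N * mu_S * (1 - mu_S)))         # i.i.d. scale: much larger
```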

Correlation functions


The joint probability density of the eigenvalues of $n \times n$ random Hermitian matrices $M \in \mathbf{H}^{n \times n}$, with partition functions of the form

$$Z_n = \int_{M \in \mathbf{H}^{n \times n}} d\mu_0(M)\, e^{-\mathrm{tr}\, V(M)},$$

where

$$V(x) := \sum_{j=1}^{\infty} v_j x^j$$

and $d\mu_0(M)$ is the standard Lebesgue measure on the space $\mathbf{H}^{n \times n}$ of Hermitian $n \times n$ matrices, is given by

$$p_{n,V}(x_1, \dots, x_n) = \frac{1}{Z_{n,V}} \prod_{i<j} (x_i - x_j)^2\, e^{-\sum_i V(x_i)}.$$

The $k$-point correlation functions (or marginal distributions) are defined as

$$R^{(k)}_{n,V}(x_1, \dots, x_k) = \frac{n!}{(n-k)!} \int_{\mathbb{R}} dx_{k+1} \cdots \int_{\mathbb{R}} dx_n\, p_{n,V}(x_1, x_2, \dots, x_n),$$

which are symmetric functions of their variables. In particular, the one-point correlation function, or density of states, is

$$R^{(1)}_{n,V}(x_1) = n \int_{\mathbb{R}} dx_2 \cdots \int_{\mathbb{R}} dx_n\, p_{n,V}(x_1, x_2, \dots, x_n).$$

Its integral over a Borel set $B \subset \mathbb{R}$ gives the expected number of eigenvalues contained in $B$:

$$\int_B R^{(1)}_{n,V}(x)\, dx = \mathbb{E}\left( \#\{ \text{eigenvalues in } B \} \right).$$

The following result expresses these correlation functions as determinants of the matrices formed from evaluating the appropriate integral kernel at the pairs $(x_i, x_j)$ of points appearing within the correlator.

Theorem (Dyson–Mehta). For any $k$ with $1 \leq k \leq n$, the $k$-point correlation function $R^{(k)}_{n,V}$ can be written as a determinant

$$R^{(k)}_{n,V}(x_1, x_2, \dots, x_k) = \det_{1 \leq i,j \leq k} \left( K_{n,V}(x_i, x_j) \right),$$

where $K_{n,V}(x, y)$ is the $n$th Christoffel–Darboux kernel

$$K_{n,V}(x, y) := \sum_{k=0}^{n-1} \psi_k(x)\, \psi_k(y)$$

associated to $V$, written in terms of the quasipolynomials

$$\psi_k(x) = \frac{1}{\sqrt{h_k}}\, p_k(x)\, e^{-V(x)/2},$$

where $\{p_k(x)\}_{k \in \mathbb{N}}$ is a complete sequence of monic polynomials of the degrees indicated, satisfying the orthogonality conditions

$$\int_{\mathbb{R}} \psi_j(x)\, \psi_k(x)\, dx = \delta_{jk}.$$
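For the Gaussian case $V(x) = x^2$, the $\psi_k$ are the orthonormal Hermite functions, and the kernel can be built numerically from their three-term recurrence. In the sketch below (ours), the kernel diagonal $K_{n,V}(x, x) = R^{(1)}_{n,V}(x)$ integrates to $n$ and oscillates around a semicircle of radius $\sqrt{2n}$:

```python
import numpy as np

def hermite_functions(x, n):
    """Orthonormal Hermite functions phi_0, ..., phi_{n-1} on the grid x,
    for the weight e^{-x^2} (i.e. potential V(x) = x^2)."""
    phi = np.zeros((n, len(x)))
    phi[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if n > 1:
        phi[1] = np.sqrt(2.0) * x * phi[0]
    for k in range(1, n - 1):
        phi[k + 1] = (np.sqrt(2.0 / (k + 1)) * x * phi[k]
                      - np.sqrt(k / (k + 1.0)) * phi[k - 1])
    return phi

n = 50
x = np.linspace(-12, 12, 2001)
phi = hermite_functions(x, n)
density = (phi ** 2).sum(axis=0)              # K_n(x, x) = sum_k phi_k(x)^2
print((density * (x[1] - x[0])).sum())        # integrates to n = 50
print(density[1000], np.sqrt(2 * n) / np.pi)  # center value vs semicircle height
```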

Generalizations


Wigner matrices are random Hermitian matrices $H_n = (H_n(i,j))_{i,j=1}^n$ such that the entries

$$\left\{ H_n(i,j),\ 1 \leq i \leq j \leq n \right\}$$

above the main diagonal are independent random variables with zero mean and identical second moments.

The Gaussian ensembles can be extended to $\beta \neq 1, 2, 4$ using the Dumitriu–Edelman tridiagonal trick; the resulting models are called beta ensembles.[46]
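A sketch of the tridiagonal construction (ours; in this normalization the joint eigenvalue density is proportional to $|\Delta(\lambda)|^\beta e^{-\|\lambda\|^2/2}$): the matrix is symmetric tridiagonal with $N(0,1)$ diagonal entries and off-diagonal entries $\chi_{\beta(n-k)}/\sqrt{2}$, $k = 1, \dots, n-1$.

```python
import numpy as np

def beta_hermite_tridiag(n, beta, rng=None):
    """Dumitriu-Edelman tridiagonal model for the beta-Hermite ensemble.
    Diagonal: N(0, 1); k-th off-diagonal entry: chi_{beta*(n-k)} / sqrt(2)."""
    rng = rng or np.random.default_rng()
    diag = rng.standard_normal(n)
    dof = beta * np.arange(n - 1, 0, -1)      # beta*(n-1), ..., beta
    off = np.sqrt(rng.chisquare(dof)) / np.sqrt(2)
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

H = beta_hermite_tridiag(1000, beta=3.5)      # any beta > 0, not just 1, 2, 4
lam = np.linalg.eigvalsh(H)
# Rescaled by sqrt(beta * n), the spectrum approaches [-sqrt(2), sqrt(2)].
print(lam.min() / np.sqrt(3.5 * 1000), lam.max() / np.sqrt(3.5 * 1000))
```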

Invariant matrix ensembles are random Hermitian matrices with density on the space of real symmetric / Hermitian / quaternionic Hermitian matrices of the form

$$\frac{1}{Z_n}\, e^{-n\, \mathrm{tr}\, V(H)},$$

where the function $V$ is called the potential.

The Gaussian ensembles are the only common special cases of these two classes of random matrices. This is a consequence of a theorem by Porter and Rosenzweig.[47][48]

Heavy-tailed distributions generalize to random matrices as heavy-tailed matrix ensembles.[49]

Selected bibliography


Books

  • Mehta, M. L. (2004). Random Matrices. Amsterdam: Elsevier/Academic Press. ISBN 0-12-088409-7.
  • Deift, Percy; Gioev, Dimitri (2009). Random Matrix Theory: Invariant Ensembles and Universality. Courant Lecture Notes in Mathematics. New York: Courant Institute of Mathematical Sciences; Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-4737-4.
  • Forrester, Peter (2010). Log-Gases and Random Matrices. London Mathematical Society Monographs. Princeton: Princeton University Press. ISBN 978-0-691-12829-0.
  • Anderson, G. W.; Guionnet, A.; Zeitouni, O. (2010). An Introduction to Random Matrices. Cambridge: Cambridge University Press. ISBN 978-0-521-19452-5.
  • Bai, Zhidong; Silverstein, Jack W. (2010). Spectral Analysis of Large Dimensional Random Matrices. Springer Series in Statistics (2nd ed.). New York; London: Springer. doi:10.1007/978-1-4419-0661-8. ISBN 978-1-4419-0660-1. ISSN 0172-7397.
  • Akemann, G.; Baik, J.; Di Francesco, P. (2011). The Oxford Handbook of Random Matrix Theory. Oxford: Oxford University Press. ISBN 978-0-19-957400-1.
  • Tao, Terence (2012). Topics in Random Matrix Theory. Graduate Studies in Mathematics. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-7430-1.
  • Potters, Marc; Bouchaud, Jean-Philippe (2020). A First Course in Random Matrix Theory: for Physicists, Engineers and Data Scientists. Cambridge: Cambridge University Press. doi:10.1017/9781108768900. ISBN 978-1-108-76890-0.


Historic works

  • Wishart, John (1928). "The generalised product moment distribution in samples from a normal multivariate population". Biometrika. 20A (1–2): 32–52.
  • von Neumann, John; Goldstine, Herman H. (1947). "Numerical inverting of matrices of high order". Bulletin of the American Mathematical Society. 53 (11): 1021–1099.
  • Pastur, Leonid A. (1973). "Spectra of random self-adjoint operators". Russian Mathematical Surveys. 28 (1): 1–67.
  • Edelman, Alan; Rao, N. Raj (2005). "Random matrix theory". Acta Numerica. 14: 233–297.

References

  1. ^ a b Wigner, Eugene P. (1955). "Characteristic Vectors of Bordered Matrices With Infinite Dimensions". Annals of Mathematics. 62 (3): 548–564. Bibcode:1955AnMat..62..548W. doi:10.2307/1970079. ISSN 0003-486X. JSTOR 1970079.
  2. ^ a b Block, R. C.; Good, W. M.; Harvey, J. A.; Schmitt, H. W.; Trammell, G. T., eds. (1957). Conference on Neutron Physics by Time-of-Flight, Held at Gatlinburg, Tennessee, November 1 and 2, 1956 (Report ORNL-2309). Oak Ridge, Tennessee: Oak Ridge National Laboratory. doi:10.2172/4319287. OSTI 4319287.
  3. ^ a b Bohigas, O.; Giannoni, M. J.; Schmit, C. (1984). "Characterization of Chaotic Quantum Spectra and Universality of Level Fluctuation Laws". Phys. Rev. Lett. 52 (1): 1–4. Bibcode:1984PhRvL..52....1B. doi:10.1103/PhysRevLett.52.1.
  4. ^ Bohigas, Oriol; Weidenmüller, Hans (2015). "History – an overview". In Akemann, Gernot; Baik, Jinho; Di Francesco, Philippe (eds.). The Oxford Handbook of Random Matrix Theory. pp. 15–40. doi:10.1093/oxfordhb/9780198744191.013.2. ISBN 978-0-19-874419-1.
  5. ^ a b Mehta 2004
  6. ^ Aaronson, Scott; Arkhipov, Alex (2013). "The computational complexity of linear optics". Theory of Computing. 9: 143–252. doi:10.4086/toc.2013.v009a004.
  7. ^ Russell, Nicholas; Chakhmakhchyan, Levon; O'Brien, Jeremy; Laing, Anthony (2017). "Direct dialling of Haar random unitary matrices". New J. Phys. 19 (3): 033007. arXiv:1506.06220. Bibcode:2017NJPh...19c3007R. doi:10.1088/1367-2630/aa60ed. S2CID 46915633.
  8. ^ Wishart 1928
  9. ^ Tropp, J. (2011). "User-Friendly Tail Bounds for Sums of Random Matrices". Foundations of Computational Mathematics. 12 (4): 389–434. arXiv:1004.4389. doi:10.1007/s10208-011-9099-z. S2CID 17735965.
  10. ^ Pennington, Jeffrey; Bahri, Yasaman (2017). "Geometry of Neural Network Loss Surfaces via Random Matrix Theory". ICML'17: Proceedings of the 34th International Conference on Machine Learning. 70. S2CID 39515197.
  11. ^ Yang, Greg (2022). "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer". arXiv:2203.03466v2 [cs.LG].
  12. ^ von Neumann & Goldstine 1947
  13. ^ Edelman & Rao 2005
  14. ^ Keating, Jon (1993). "The Riemann zeta-function and quantum chaology". Proc. Internat. School of Phys. Enrico Fermi. CXIX: 145–185. doi:10.1016/b978-0-444-81588-0.50008-0. ISBN 978-0-444-81588-0.
  15. ^ Mingo, James A.; Speicher, Roland (2017). Free Probability and Random Matrices. Fields Institute Monographs, Vol. 35. New York: Springer.
  16. ^ Voiculescu, Dan (1991). "Limit laws for random matrices and free products". Inventiones Mathematicae. 104 (1): 201–220.
  17. ^ Sompolinsky, H.; Crisanti, A.; Sommers, H. (1988). "Chaos in Random Neural Networks". Physical Review Letters. 61 (3): 259–262. Bibcode:1988PhRvL..61..259S. doi:10.1103/PhysRevLett.61.259. PMID 10039285. S2CID 16967637.
  18. ^ Rajan, Kanaka; Abbott, L. (2006). "Eigenvalue Spectra of Random Matrices for Neural Networks". Physical Review Letters. 97 (18): 188104. Bibcode:2006PhRvL..97r8104R. doi:10.1103/PhysRevLett.97.188104. PMID 17155583.
  19. ^ Wainrib, Gilles; Touboul, Jonathan (2013). "Topological and Dynamical Complexity of Random Neural Networks". Physical Review Letters. 110 (11): 118101. arXiv:1210.5082. Bibcode:2013PhRvL.110k8101W. doi:10.1103/PhysRevLett.110.118101. PMID 25166580. S2CID 1188555.
  20. ^ Timme, Marc; Wolf, Fred; Geisel, Theo (2004). "Topological Speed Limits to Network Synchronization". Physical Review Letters. 92 (7): 074101. arXiv:cond-mat/0306512. Bibcode:2004PhRvL..92g4101T. doi:10.1103/PhysRevLett.92.074101. PMID 14995853. S2CID 5765956.
  21. ^ Muir, Dylan; Mrsic-Flogel, Thomas (2015). "Eigenspectrum bounds for semirandom matrices with modular and spatial structure for neural networks" (PDF). Phys. Rev. E. 91 (4): 042808. Bibcode:2015PhRvE..91d2808M. doi:10.1103/PhysRevE.91.042808. PMID 25974548.
  22. ^ Vergani, Alberto A.; Martinelli, Samuele; Binaghi, Elisabetta (2019). "Resting state fMRI analysis using unsupervised learning algorithms". Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 8 (3). Taylor & Francis: 2168–1171. doi:10.1080/21681163.2019.1636413.
  23. ^ Burda, Z.; Kornelsen, J.; Nowak, M. A.; Porebski, B.; Sboto-Frankenstein, U.; Tomanek, B.; Tyburczyk, J. (2013). "Collective Correlations of Brodmann Areas fMRI Study with RMT-Denoising". Acta Physica Polonica B. 44 (6): 1243. arXiv:1306.3825. Bibcode:2013AcPPB..44.1243B. doi:10.5506/APhysPolB.44.1243.
  24. ^ Chow, Gregory P. (1976). Analysis and Control of Dynamic Economic Systems. New York: Wiley. ISBN 0-471-15616-7.
  25. ^ Turnovsky, Stephen (1974). "The stability properties of optimal economic policies". American Economic Review. 64 (1): 136–148. JSTOR 1814888.
  26. ^ Soize, C. (2005). "Random matrix theory for modeling uncertainties in computational mechanics" (PDF). Computer Methods in Applied Mechanics and Engineering. 194 (12–16): 1333–1366. Bibcode:2005CMAME.194.1333S. doi:10.1016/j.cma.2004.06.038. ISSN 1879-2138. S2CID 58929758.
  27. ^ Chiani, M. (2014). "Distribution of the largest eigenvalue for real Wishart and Gaussian random matrices and a simple approximation for the Tracy–Widom distribution". Journal of Multivariate Analysis. 129: 69–81. arXiv:1209.3394. doi:10.1016/j.jmva.2014.04.002. S2CID 15889291.
  28. ^ a b Marčenko, V. A.; Pastur, L. A. (1967). "Distribution of eigenvalues for some sets of random matrices". Mathematics of the USSR-Sbornik. 1 (4): 457–483. Bibcode:1967SbMat...1..457M. doi:10.1070/SM1967v001n04ABEH001994.
  29. ^ a b c Bourgade, Paul (2018). "Random band matrices". Proceedings of the International Congress of Mathematicians (ICM 2018). World Scientific. pp. 2759–2783. arXiv:1807.03031. doi:10.1142/9789813272880_0159. ISBN 978-981-327-287-3.
  30. ^ Meckes, Elizabeth (2021). "The Eigenvalues of Random Matrices". arXiv:2101.02928 [math.PR].
  31. ^ Pastur 1973
  32. ^ Pastur, L.; Shcherbina, M. (1995). "On the Statistical Mechanics Approach in the Random Matrix Theory: Integrated Density of States". J. Stat. Phys. 79 (3–4): 585–611. Bibcode:1995JSP....79..585D. doi:10.1007/BF02184872. S2CID 120731790.
  33. ^ Johansson, K. (1998). "On fluctuations of eigenvalues of random Hermitian matrices". Duke Math. J. 91 (1): 151–204. doi:10.1215/S0012-7094-98-09108-6.
  34. ^ Pastur, L. A. (2005). "A simple approach to the global regime of Gaussian ensembles of random matrices". Ukrainian Math. J. 57 (6): 936–966. doi:10.1007/s11253-005-0241-4. S2CID 121531907.
  35. ^ Harnad, John (2013). Random Matrices, Random Processes and Integrable Systems. Springer. pp. 263–266. ISBN 978-1-4614-2877-0.
  36. ^ Erdős, László; Schlein, Benjamin; Yau, Horng-Tzer (2009). "Local Semicircle Law and Complete Delocalization for Wigner Random Matrices". Communications in Mathematical Physics. 287 (2): 641–655. arXiv:0803.0542. Bibcode:2009CMaPh.287..641E. doi:10.1007/s00220-008-0636-9. ISSN 0010-3616.
  37. ^ a b Bourgade, Paul; Yau, Horng-Tzer; Yin, Jun (2014). "Local circular law for random matrices". Probability Theory and Related Fields. 159 (3): 545–595. arXiv:1206.1449. doi:10.1007/s00440-013-0514-z. ISSN 1432-2064.
  38. ^ Pastur, L.; Shcherbina, M. (1997). "Universality of the local eigenvalue statistics for a class of unitary invariant random matrix ensembles". Journal of Statistical Physics. 86 (1–2): 109–147. Bibcode:1997JSP....86..109P. doi:10.1007/BF02180200. S2CID 15117770.
  39. ^ Deift, P.; Kriecherbauer, T.; McLaughlin, K. T.-R.; Venakides, S.; Zhou, X. (1997). "Asymptotics for polynomials orthogonal with respect to varying exponential weights". International Mathematics Research Notices. 1997 (16): 759–782. doi:10.1155/S1073792897000500.
  40. ^ Erdős, L.; Péché, S.; Ramírez, J. A.; Schlein, B.; Yau, H. T. (2010). "Bulk universality for Wigner matrices". Communications on Pure and Applied Mathematics. 63 (7): 895–925. arXiv:0905.4176. doi:10.1002/cpa.20317.
  41. ^ Tao, Terence; Vu, Van H. (2010). "Random matrices: universality of local eigenvalue statistics up to the edge". Communications in Mathematical Physics. 298 (2): 549–572. arXiv:0908.1982. Bibcode:2010CMaPh.298..549T. doi:10.1007/s00220-010-1044-5. S2CID 16594369.
  42. ^ Rider, B. (2003). "A limit theorem at the edge of a non-Hermitian random matrix ensemble". Journal of Physics A: Mathematical and General. 36 (12): 3401–3409. Bibcode:2003JPhA...36.3401R. doi:10.1088/0305-4470/36/12/331. ISSN 0305-4470.
  43. ^ Meckes, Elizabeth (2021). The Eigenvalues of Random Matrices. arXiv:2101.02928.
  44. ^ Erdős, László; Yau, Horng-Tzer (2017). A Dynamical Approach to Random Matrix Theory. Courant Lecture Notes in Mathematics. New York: Courant Institute of Mathematical Sciences; Providence, R.I.: American Mathematical Society. ISBN 978-1-4704-3648-3.
  45. ^ Berry, Michael Victor (1985). "Semiclassical theory of spectral rigidity". Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences. 400 (1819): 229–251. doi:10.1098/rspa.1985.0078.
  46. ^ Dumitriu, Ioana; Edelman, Alan (2002). "Matrix models for beta ensembles". Journal of Mathematical Physics. 43 (11): 5830–5847. arXiv:math-ph/0206043. Bibcode:2002JMP....43.5830D. doi:10.1063/1.1507823.
  47. ^ Porter, C. E.; Rosenzweig, N. (1960). "Statistical Properties of Atomic and Nuclear Spectra". Ann. Acad. Sci. Fennicae. Ser. A VI. 44. OSTI 4147616.
  48. ^ Livan, Giacomo; Novaes, Marcel; Vivo, Pierpaolo (2018). "Classified Material". Introduction to Random Matrices: Theory and Practice. SpringerBriefs in Mathematical Physics, vol. 26. Cham: Springer. pp. 15–21. doi:10.1007/978-3-319-70885-0_3. ISBN 978-3-319-70885-0.
  49. ^ Burda, Zdzislaw; Jurkiewicz, Jerzy (2015). "Heavy-tailed random matrices". In Akemann, Gernot; Baik, Jinho; Di Francesco, Philippe (eds.). The Oxford Handbook of Random Matrix Theory. Oxford University Press. pp. 270–289. doi:10.1093/oxfordhb/9780198744191.013.13. ISBN 978-0-19-874419-1.
