Computational complexity of matrix multiplication

From Wikipedia, the free encyclopedia
Algorithmic runtime requirements for matrix multiplication

Unsolved problem in computer science: What is the fastest algorithm for matrix multiplication?

In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so finding the fastest algorithm for matrix multiplication is of major practical relevance.

Directly applying the mathematical definition of matrix multiplication gives an algorithm that requires n^3 field operations to multiply two n × n matrices over that field (Θ(n^3) in big O notation). Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication".[1] The optimal number of field operations needed to multiply two square n × n matrices up to constant factors is still unknown. This is a major open question in theoretical computer science.

As of January 2024, the best bound on the asymptotic complexity of a matrix multiplication algorithm is O(n^2.371339).[2] However, this and similar improvements to Strassen are not used in practice, because they are galactic algorithms: the constant coefficient hidden by the big O notation is so large that they are only worthwhile for matrices that are too large to handle on present-day computers.[3][4]

Simple algorithms


If A, B are two n × n matrices over a field, then their product AB is also an n × n matrix over that field, defined entrywise as

(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}.

Schoolbook algorithm

For implementation techniques (in particular parallel and distributed algorithms), see Matrix multiplication algorithm.

The simplest approach to computing the product of two n × n matrices A and B is to compute the arithmetic expressions coming from the definition of matrix multiplication. In pseudocode:

input A and B, both n by n matrices
initialize C to be an n by n matrix of all zeros
for i from 1 to n:
    for j from 1 to n:
        for k from 1 to n:
            C[i][j] = C[i][j] + A[i][k] * B[k][j]
output C (as A*B)

This algorithm requires n^3 multiplications and n^3 − n^2 additions of scalars to compute the product of two square n × n matrices. Its computational complexity is therefore O(n^3), in a model of computation where field operations (addition and multiplication) take constant time (in practice, this is the case for floating point numbers, but not necessarily for integers).
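The pseudocode above translates directly into, for example, Python. This is a plain illustration of the schoolbook algorithm, not tied to any particular library:

```python
def matmul(A, B):
    """Schoolbook product of two n-by-n matrices given as lists of lists.

    Uses exactly n^3 scalar multiplications, matching the O(n^3) bound above.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

For example, matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]].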

Strassen's algorithm

Main article: Strassen algorithm

Strassen's algorithm improves on naive matrix multiplication through a divide-and-conquer approach. The key observation is that multiplying two 2 × 2 matrices can be done with only seven multiplications, instead of the usual eight (at the expense of several additional addition and subtraction operations). This means that, treating the input n × n matrices as block 2 × 2 matrices, the task of multiplying two n × n matrices can be reduced to seven subproblems of multiplying two n/2 × n/2 matrices. Applying this recursively gives an algorithm needing O(n^{log_2 7}) ≈ O(n^2.807) field operations.
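For illustration, the recursion can be sketched in pure Python. This minimal sketch assumes n is a power of two; the seven products follow Strassen's original construction, and the helper names are ours:

```python
def add(X, Y):
    """Entrywise sum of two equally sized matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    """Entrywise difference of two equally sized matrices."""
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply two n-by-n matrices (n a power of two) using seven
    recursive half-size products instead of eight."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split both inputs into 2x2 block form.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # Strassen's seven products.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Recombine into the four quadrants of the result.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [l + r for l, r in zip(C11, C12)]
    bot = [l + r for l, r in zip(C21, C22)]
    return top + bot
```

A production implementation would switch to the schoolbook algorithm below some cutoff size, since the recursion's bookkeeping dominates for small blocks.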

Unlike algorithms with faster asymptotic complexity, Strassen's algorithm is used in practice. Its numerical stability is reduced compared to the naive algorithm,[5] but it is faster in cases where n > 100 or so[6] and appears in several libraries, such as BLAS.[7] Fast matrix multiplication algorithms cannot achieve component-wise stability, but some can be shown to exhibit norm-wise stability.[8] It is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue.

Matrix multiplication exponent

[Figure: improvement of estimates of the exponent ω over time for the computational complexity of matrix multiplication O(n^ω), with a closeup of 1990–2024.]
Timeline of matrix multiplication exponent

Year   Bound on ω   Authors
1969   2.8074       Strassen[1]
1978   2.796        Pan[9]
1979   2.780        Bini, Capovani, Romani[10]
1981   2.522        Schönhage[11]
1981   2.517        Romani[12]
1981   2.496        Coppersmith, Winograd[13]
1986   2.479        Strassen[14]
1990   2.3755       Coppersmith, Winograd[15]
2010   2.3737       Stothers[16]
2012   2.3729       Williams[17][18]
2014   2.3728639    Le Gall[19]
2020   2.3728596    Alman, Williams[20][21]
2022   2.371866     Duan, Wu, Zhou[22]
2024   2.371552     Williams, Xu, Xu, Zhou[23]
2024   2.371339     Alman, Duan, Williams, Xu, Xu, Zhou[2]

The matrix multiplication exponent, usually denoted ω, is the smallest real number for which any two n × n matrices over a field can be multiplied together using n^{ω + o(1)} field operations. This notation is commonly used in algorithms research, so that algorithms using matrix multiplication as a subroutine have bounds on running time that can update as bounds on ω improve.

Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that 2 ≤ ω ≤ 3. Whether ω = 2 is a major open question in theoretical computer science, and there is a line of research developing matrix multiplication algorithms to get improved bounds on ω.

All recent algorithms in this line of research use the laser method, a generalization of the Coppersmith–Winograd algorithm, which was given by Don Coppersmith and Shmuel Winograd in 1990 and was the best matrix multiplication algorithm until 2010.[24] The conceptual idea of these algorithms is similar to Strassen's algorithm: a method is devised for multiplying two k × k matrices with fewer than k^3 multiplications, and this technique is applied recursively. The laser method has limitations to its power: Ambainis, Filmus and Le Gall prove that it cannot be used to show that ω < 2.3725 by analyzing higher and higher tensor powers of a certain identity of Coppersmith and Winograd, nor that ω < 2.3078 for a wide class of variants of this approach.[25] In 2022, Duan, Wu and Zhou devised a variant breaking the first of the two barriers with ω < 2.37188.[22] They do so by identifying a source of potential optimization in the laser method termed combination loss, for which they compensate using an asymmetric version of the hashing method in the Coppersmith–Winograd algorithm.

Nonetheless, the above are classical examples of galactic algorithms. By contrast, Strassen's algorithm of 1969 and Pan's algorithm of 1978, whose respective exponents are slightly above and below 2.78, have constant coefficients that make them feasible in practice.[26]

Group theory reformulation of matrix multiplication algorithms


Henry Cohn, Robert Kleinberg, Balázs Szegedy and Chris Umans put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different group-theoretic context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the triple product property (TPP). They also give conjectures that, if true, would imply that there are matrix multiplication algorithms with essentially quadratic complexity, and hence that the optimal exponent of matrix multiplication is 2, which most researchers believe is indeed the case.[4] One such conjecture is that families of wreath products of Abelian groups with symmetric groups realise families of subset triples with a simultaneous version of the TPP.[27][28] Several of their conjectures have since been disproven by Blasiak, Cohn, Church, Grochow, Naslund, Sawin, and Umans using the slice rank method.[29] Further, Alon, Shpilka and Umans have shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the sunflower conjecture,[30] which in turn is related to the cap set problem.[29]

Lower bounds for ω


There is a trivial lower bound of ω ≥ 2: since any algorithm for multiplying two n × n matrices has to process all 2n^2 entries, there is a trivial asymptotic lower bound of Ω(n^2) operations for any matrix multiplication algorithm. Thus 2 ≤ ω < 2.37188. It is unknown whether ω > 2. The best known lower bound for matrix-multiplication complexity is Ω(n^2 log n), for bounded-coefficient arithmetic circuits over the real or complex numbers, and is due to Ran Raz.[31]

It is known that, under the model of computation typically studied, there is no matrix multiplication algorithm that uses precisely O(n^ω) operations; there must be an additional factor of n^{o(1)}.[13]

Rectangular matrix multiplication


Similar techniques also apply to rectangular matrix multiplication. The central object of study is ω(k), the smallest c such that one can multiply a matrix of size n × ⌈n^k⌉ with a matrix of size ⌈n^k⌉ × n with O(n^{c + o(1)}) arithmetic operations. A result in algebraic complexity states that multiplying matrices of size n × ⌈n^k⌉ and ⌈n^k⌉ × n requires the same number of arithmetic operations as multiplying matrices of size n × ⌈n^k⌉ and n × n and of size n × n and n × ⌈n^k⌉, so this encompasses the complexity of rectangular matrix multiplication.[32] This generalizes the square matrix multiplication exponent, since ω(1) = ω.

Since the output of the matrix multiplication problem has size n^2, we have ω(k) ≥ 2 for all values of k. If one can prove for some values of k between 0 and 1 that ω(k) ≤ 2, then such a result shows that ω(k) = 2 for those k. The largest k such that ω(k) = 2 is known as the dual matrix multiplication exponent, usually denoted α. α is referred to as the "dual" because showing that α = 1 is equivalent to showing that ω = 2. Like the matrix multiplication exponent, the dual matrix multiplication exponent sometimes appears in the complexity of algorithms in numerical linear algebra and optimization.[33]

The first bound on α is by Coppersmith in 1982, who showed that α > 0.17227.[34] The current best peer-reviewed bound on α is α ≥ 0.321334, given by Williams, Xu, Xu, and Zhou.[23]

Related problems

Further information: Computational complexity of mathematical operations § Matrix algebra

Problems that have the same asymptotic complexity as matrix multiplication include determinant, matrix inversion, and Gaussian elimination (see next section). Problems with complexity expressible in terms of ω include the characteristic polynomial, eigenvalues (but not eigenvectors), Hermite normal form, and Smith normal form.[citation needed]

Matrix inversion, determinant and Gaussian elimination


In his 1969 paper, where he proved the complexity O(n^{log_2 7}) ≈ O(n^2.807) for matrix multiplication, Strassen also proved that matrix inversion, determinant and Gaussian elimination have, up to a multiplicative constant, the same computational complexity as matrix multiplication. The proof does not make any assumptions about the matrix multiplication algorithm that is used, except that its complexity is O(n^ω) for some ω ≥ 2.

The starting point of Strassen's proof is using block matrix multiplication. Specifically, a matrix of even dimension 2n × 2n may be partitioned into four n × n blocks

\begin{bmatrix} A & B \\ C & D \end{bmatrix}.

Under this form, its inverse is

\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1}B(D - CA^{-1}B)^{-1}CA^{-1} & -A^{-1}B(D - CA^{-1}B)^{-1} \\ -(D - CA^{-1}B)^{-1}CA^{-1} & (D - CA^{-1}B)^{-1} \end{bmatrix},

provided that A and D − CA^{-1}B are invertible.
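The structure of this reduction can be sketched directly in code. The following is a minimal pure-Python illustration (the helper names madd, msub, mmul and block_inverse are ours, not from any library); it assumes the dimension is a power of two and that every block that must be inverted is invertible, and it uses schoolbook multiplication for the six products, so it shows the shape of the reduction rather than achieving the O(n^ω) bound:

```python
def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def msub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def block_inverse(M):
    """Invert an n-by-n matrix (n a power of two) via two recursive
    inversions and six multiplications of half-size blocks."""
    n = len(M)
    if n == 1:
        return [[1.0 / M[0][0]]]
    h = n // 2
    A = [row[:h] for row in M[:h]]; B = [row[h:] for row in M[:h]]
    C = [row[:h] for row in M[h:]]; D = [row[h:] for row in M[h:]]
    Ainv = block_inverse(A)                 # first recursive inversion
    AinvB = mmul(Ainv, B)                   # product 1
    S = msub(D, mmul(C, AinvB))             # product 2: Schur complement
    Sinv = block_inverse(S)                 # second recursive inversion
    CAinv = mmul(C, Ainv)                   # product 3
    TR0 = mmul(AinvB, Sinv)                 # product 4
    TL = madd(Ainv, mmul(TR0, CAinv))       # product 5
    BL0 = mmul(Sinv, CAinv)                 # product 6
    TR = [[-x for x in row] for row in TR0]
    BL = [[-x for x in row] for row in BL0]
    top = [l + r for l, r in zip(TL, TR)]
    bot = [l + r for l, r in zip(BL, Sinv)]
    return top + bot
```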

Thus, the inverse of a 2n × 2n matrix may be computed with two inversions, six multiplications and four additions or additive inverses of n × n matrices. It follows that, denoting respectively by I(n), M(n) and A(n) = n^2 the number of operations needed for inverting, multiplying and adding n × n matrices, one has

I(2n) ≤ 2I(n) + 6M(n) + 4A(n).

If n = 2^k, one may apply this formula recursively:

I(2^k) ≤ 2I(2^{k-1}) + 6M(2^{k-1}) + 4A(2^{k-1})
       ≤ 2^2 I(2^{k-2}) + 6(M(2^{k-1}) + 2M(2^{k-2})) + 4(A(2^{k-1}) + 2A(2^{k-2}))
       ⋮

If M(n) ≤ cn^ω and α = 2^ω ≥ 4, one gets eventually

I(2^k) ≤ 2^k I(1) + 6c(α^{k-1} + 2α^{k-2} + ⋯ + 2^{k-1}α^0) + k 2^{k+1}
       ≤ 2^k + 6c (α^k − 2^k)/(α − 2) + k 2^{k+1}
       ≤ d (2^k)^ω

for some constant d.

For matrices whose dimension is not a power of two, the same complexity is reached by increasing the dimension of the matrix to a power of two, by padding the matrix with rows and columns whose entries are 1 on the diagonal and 0 elsewhere.
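A minimal sketch of this padding step (the function name is ours): the original matrix is embedded in the top-left corner, and the new diagonal entries are set to 1, so the padded matrix is invertible exactly when the original is and its determinant is unchanged.

```python
def pad_to_power_of_two(M):
    """Embed an n-by-n matrix in the top-left corner of an m-by-m matrix,
    where m is the next power of two, with 1s on the rest of the diagonal."""
    n = len(M)
    m = 1
    while m < n:
        m *= 2
    P = [[0.0] * m for _ in range(m)]
    for i in range(n):
        for j in range(n):
            P[i][j] = M[i][j]
    for i in range(n, m):
        P[i][i] = 1.0  # identity block: preserves invertibility and determinant
    return P
```

The inverse of the original matrix is then read off the top-left n × n block of the padded inverse.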

This proves the asserted complexity for matrices such that all submatrices that have to be inverted are indeed invertible. This complexity is thus proved for almost all matrices, as a matrix with randomly chosen entries is invertible with probability one.

The same argument applies to LU decomposition, as, if the matrix A is invertible, the equality

\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & 0 \\ CA^{-1} & I \end{bmatrix} \begin{bmatrix} A & B \\ 0 & D - CA^{-1}B \end{bmatrix}

defines a block LU decomposition that may be applied recursively to A and D − CA^{-1}B, for getting eventually a true LU decomposition of the original matrix.

The argument applies also for the determinant, since it results from the block LU decomposition that

\det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A)\det(D - CA^{-1}B).

Minimizing number of multiplications


Related to the problem of minimizing the number of arithmetic operations is minimizing the number of multiplications, which is typically a more costly operation than addition. An O(n^ω) algorithm for matrix multiplication necessarily uses only O(n^ω) multiplication operations, but these algorithms are impractical. Improving on the naive n^3 multiplications of schoolbook multiplication, multiplication of 4 × 4 matrices over Z/2Z can be done with 47 multiplications,[35] and 3 × 3 matrix multiplication over a commutative ring can be done in 21 multiplications[36][37] (23 if non-commutative[38]). A lower bound on the number of multiplications needed is 2mn + 2n − m − 2 (for multiplication of n × m matrices with m × n matrices using the substitution method, m ≥ n ≥ 3), which means the n = 3 case requires at least 19 multiplications and the n = 4 case at least 34.[39] For n = 2, seven multiplications with 15 additions is optimal, compared to only four additions when eight multiplications are used.[40][41]


References

  1. ^ Volker Strassen (Aug 1969). "Gaussian elimination is not optimal". Numerische Mathematik. 13 (4): 354–356. doi:10.1007/BF02165411. S2CID 121656251.
  2. ^ Alman, Josh; Duan, Ran; Williams, Virginia Vassilevska; Xu, Yinzhan; Xu, Zixuan; Zhou, Renfei (2024). "More Asymmetry Yields Faster Matrix Multiplication". arXiv:2404.16349 [cs.DS].
  3. ^ Iliopoulos, Costas S. (1989). "Worst-case complexity bounds on algorithms for computing the canonical structure of finite abelian groups and the Hermite and Smith normal forms of an integer matrix" (PDF). SIAM Journal on Computing. 18 (4): 658–669. CiteSeerX 10.1.1.531.9309. doi:10.1137/0218045. MR 1004789. Archived from the original (PDF) on 2014-03-05. Retrieved 2015-01-16. "The Coppersmith–Winograd algorithm is not practical, due to the very large hidden constant in the upper bound on the number of multiplications required."
  4. ^ Robinson, Sara (November 2005). "Toward an Optimal Algorithm for Matrix Multiplication" (PDF). SIAM News. 38 (9). "Even if someone manages to prove one of the conjectures—thereby demonstrating that ω = 2—the wreath product approach is unlikely to be applicable to the large matrix problems that arise in practice. [...] the input matrices must be astronomically large for the difference in time to be apparent."
  5. ^ Miller, Webb (1975). "Computational complexity and numerical stability". SIAM Journal on Computing. 4 (2): 97–107. CiteSeerX 10.1.1.148.9947. doi:10.1137/0204009.
  6. ^ Skiena, Steven (2012). "Sorting and Searching". The Algorithm Design Manual. Springer. pp. 45–46, 401–403. doi:10.1007/978-1-84800-070-4_4. ISBN 978-1-84800-069-8.
  7. ^ Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (2007). Numerical Recipes: The Art of Scientific Computing (3rd ed.). Cambridge University Press. p. 108. ISBN 978-0-521-88068-8.
  8. ^ Ballard, Grey; Benson, Austin R.; Druinsky, Alex; Lipshitz, Benjamin; Schwartz, Oded (2016). "Improving the numerical stability of fast matrix multiplication". SIAM Journal on Matrix Analysis and Applications. 37 (4): 1382–1418. arXiv:1507.00687. doi:10.1137/15M1032168. S2CID 2853388.
  9. ^ Victor Yakovlevich Pan (Oct 1978). "Strassen's Algorithm is not Optimal: Trilinear Technique of Aggregating, Uniting and Canceling for Constructing Fast Algorithms for Matrix Operations". Proc. 19th FOCS. pp. 166–176. doi:10.1109/SFCS.1978.34. S2CID 14348408.
  10. ^ Dario Andrea Bini; Milvio Capovani; Francesco Romani; Grazia Lotti (Jun 1979). "O(n^2.7799) complexity for n × n approximate matrix multiplication". Information Processing Letters. 8 (5): 234–235. doi:10.1016/0020-0190(79)90113-3.
  11. ^ A. Schönhage (1981). "Partial and total matrix multiplication". SIAM Journal on Computing. 10 (3): 434–455. doi:10.1137/0210032.
  12. ^ Francesco Romani (1982). "Some properties of disjoint sums of tensors related to matrix multiplication". SIAM Journal on Computing. 11 (2): 263–267. doi:10.1137/0211020.
  13. ^ D. Coppersmith; S. Winograd (1981). "On the asymptotic complexity of matrix multiplication". Proc. 22nd Annual Symposium on Foundations of Computer Science (FOCS). pp. 82–90. doi:10.1109/SFCS.1981.27. S2CID 206558664.
  14. ^ Volker Strassen (Oct 1986). "The asymptotic spectrum of tensors and the exponent of matrix multiplication". Proc. 27th Ann. Symp. on Foundation of Computer Science (FOCS). pp. 49–54. doi:10.1109/SFCS.1986.52. ISBN 0-8186-0740-8. S2CID 15077423.
  15. ^ D. Coppersmith; S. Winograd (Mar 1990). "Matrix multiplication via arithmetic progressions". Journal of Symbolic Computation. 9 (3): 251–280. doi:10.1016/S0747-7171(08)80013-2.
  16. ^ Stothers, Andrew James (2010). On the complexity of matrix multiplication (Ph.D. thesis). University of Edinburgh.
  17. ^ Virginia Vassilevska Williams (2012). "Multiplying Matrices Faster than Coppersmith-Winograd". In Howard J. Karloff; Toniann Pitassi (eds.). Proc. 44th Symposium on Theory of Computing (STOC). ACM. pp. 887–898. doi:10.1145/2213977.2214056. ISBN 978-1-4503-1245-5. S2CID 14350287.
  18. ^ Williams, Virginia Vassilevska. Multiplying matrices in O(n^2.373) time (PDF) (Technical Report). Stanford University.
  19. ^ Le Gall, François (2014). "Algebraic complexity theory and matrix multiplication". In Katsusuke Nabeshima (ed.). Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation - ISSAC '14. pp. 296–303. arXiv:1401.7714. Bibcode:2014arXiv1401.7714L. doi:10.1145/2608628.2627493. ISBN 978-1-4503-2501-1. S2CID 2597483.
  20. ^ Alman, Josh; Williams, Virginia Vassilevska (2024). "A Refined Laser Method and Faster Matrix Multiplication". Theoretics. 11261. arXiv:2010.05846. doi:10.46298/theoretics.24.21.
  21. ^ Hartnett, Kevin (23 March 2021). "Matrix Multiplication Inches Closer to Mythic Goal". Quanta Magazine. Retrieved 2021-04-01.
  22. ^ Duan, Ran; Wu, Hongxun; Zhou, Renfei (2022). "Faster Matrix Multiplication via Asymmetric Hashing". arXiv:2210.10173 [cs.DS].
  23. ^ Vassilevska Williams, Virginia; Xu, Yinzhan; Xu, Zixuan; Zhou, Renfei. "New Bounds for Matrix Multiplication: from Alpha to Omega". Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). pp. 3792–3835. arXiv:2307.07970. doi:10.1137/1.9781611977912.134.
  24. ^ Coppersmith, Don; Winograd, Shmuel (1990). "Matrix multiplication via arithmetic progressions" (PDF). Journal of Symbolic Computation. 9 (3): 251. doi:10.1016/S0747-7171(08)80013-2.
  25. ^ Ambainis, Andris; Filmus, Yuval; Le Gall, François (2015-06-14). "Fast Matrix Multiplication". Proceedings of the forty-seventh annual ACM symposium on Theory of Computing. STOC '15. Portland, Oregon, USA: Association for Computing Machinery. pp. 585–593. arXiv:1411.5414. doi:10.1145/2746539.2746554. ISBN 978-1-4503-3536-2. S2CID 8332797.
  26. ^ Laderman, Julian; Pan, Victor; Sha, Xuan-He (1992). "On practical algorithms for accelerated matrix multiplication". Linear Algebra and Its Applications. 162–164: 557–588. doi:10.1016/0024-3795(92)90393-O.
  27. ^ Cohn, H.; Kleinberg, R.; Szegedy, B.; Umans, C. (2005). "Group-theoretic Algorithms for Matrix Multiplication". 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05). p. 379. arXiv:math/0511460. doi:10.1109/SFCS.2005.39. ISBN 0-7695-2468-0. S2CID 41278294.
  28. ^ Cohn, Henry; Umans, Chris (2003). "A Group-theoretic Approach to Fast Matrix Multiplication". Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 11–14 October 2003. IEEE Computer Society. pp. 438–449. arXiv:math.GR/0307321. doi:10.1109/SFCS.2003.1238217. ISBN 0-7695-2040-5. S2CID 5890100.
  29. ^ Blasiak, J.; Cohn, H.; Church, T.; Grochow, J.; Naslund, E.; Sawin, W.; Umans, C. (2017). "On cap sets and the group-theoretic approach to matrix multiplication". Discrete Analysis. p. 1245. doi:10.19086/da.1245. S2CID 9687868.
  30. ^ Alon, N.; Shpilka, A.; Umans, C. (April 2011). "On Sunflowers and Matrix Multiplication". Electronic Colloquium on Computational Complexity. TR11-067.
  31. ^ Raz, Ran (2002). "On the complexity of matrix product". Proceedings of the thirty-fourth annual ACM symposium on Theory of computing. pp. 144–151. doi:10.1145/509907.509932. ISBN 1581134959. S2CID 9582328.
  32. ^ Le Gall, François; Urrutia, Florent (2018). "Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor". In Czumaj, Artur (ed.). Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7–10, 2018. Society for Industrial and Applied Mathematics. pp. 1029–1046. arXiv:1708.05622. doi:10.1137/1.9781611975031.67. ISBN 978-1-61197-503-1.
  33. ^ Cohen, Michael B.; Lee, Yin Tat; Song, Zhao (2021-01-05). "Solving Linear Programs in the Current Matrix Multiplication Time". Journal of the ACM. 68 (1): 3:1–3:39. arXiv:1810.07896. doi:10.1145/3424305. ISSN 0004-5411. S2CID 231955576.
  34. ^ Coppersmith, D. (1982-08-01). "Rapid Multiplication of Rectangular Matrices". SIAM Journal on Computing. 11 (3): 467–471. doi:10.1137/0211037. ISSN 0097-5397.
  35. ^ See Extended Data Fig. 1: "Algorithm for multiplying 4 × 4 matrices in modular arithmetic (Z_2) with 47 multiplications" in Fawzi, A.; Balog, M.; Huang, A.; Hubert, T.; Romera-Paredes, B.; Barekatain, M.; Novikov, A.; Ruiz, F. J. R.; Schrittwieser, J.; Swirszcz, G.; Silver, D.; Hassabis, D.; Kohli, P. (2022). "Discovering faster matrix multiplication algorithms with reinforcement learning". Nature. 610 (7930): 47–53. Bibcode:2022Natur.610...47F. doi:10.1038/s41586-022-05172-4. PMC 9534758. PMID 36198780.
  36. ^ Rosowski, Andreas (2023). "Fast commutative matrix algorithms". Journal of Symbolic Computation. 114: 302–321. arXiv:1904.07683. doi:10.1016/j.jsc.2022.05.002. MR 4433063.
  37. ^ Makarov, O. M. (1986). "An algorithm for multiplying 3×3 matrices". Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki. 26 (2): 293–294. Retrieved 5 October 2022.
    Also in Makarov, O. M. (1986). "An algorithm for multiplying 3×3 matrices". USSR Computational Mathematics and Mathematical Physics. 26: 179–180. doi:10.1016/0041-5553(86)90203-X.
  38. ^ Laderman, Julian D. (1976). "A noncommutative algorithm for multiplying 3×3 matrices using 23 multiplications". Bulletin of the American Mathematical Society. 82 (1): 126–128. doi:10.1090/S0002-9904-1976-13988-2. ISSN 0002-9904.
  39. ^ Bläser, Markus (February 2003). "On the complexity of the multiplication of matrices of small formats". Journal of Complexity. 19 (1): 43–60. doi:10.1016/S0885-064X(02)00007-9.
  40. ^ Winograd, S. (1971-10-01). "On multiplication of 2 × 2 matrices". Linear Algebra and Its Applications. 4 (4): 381–388. doi:10.1016/0024-3795(71)90009-7. ISSN 0024-3795.
  41. ^ Probert, R. L. (1973). On the complexity of matrix multiplication. University of Waterloo. OCLC 1124200063.
