Gaussian elimination

From Wikipedia, the free encyclopedia
Algorithm for solving systems of linear equations
Animation of Gaussian elimination: the red row eliminates the following rows; the green rows change their order.

In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of row-wise operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855).

To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:

  • swapping two rows,
  • multiplying a row by a nonzero number, and
  • adding a multiple of one row to another row.

Using these operations, a matrix can always be transformed into reduced row echelon form: each nonzero row is above every zero row, each nonzero row has leftmost nonzero entry equal to 1, the columns containing these leading 1s have all other entries 0, and the leading 1 in each nonzero row is to the right of the leading 1 in the previous row. This final form is unique; in other words, it is independent of the sequence of row operations used. For example, in the following sequence of row operations (where two elementary operations on different rows are done at the first and third steps), the third and fourth matrices are the ones in row echelon form, and the final matrix is the unique reduced row echelon form.

$$\begin{bmatrix}1&3&1&9\\1&1&-1&1\\3&11&5&35\end{bmatrix}\to{\begin{bmatrix}1&3&1&9\\0&-2&-2&-8\\0&2&2&8\end{bmatrix}}\to{\begin{bmatrix}1&3&1&9\\0&-2&-2&-8\\0&0&0&0\end{bmatrix}}\to{\begin{bmatrix}1&0&-2&-3\\0&1&1&4\\0&0&0&0\end{bmatrix}}$$

Using row operations to convert a matrix into reduced row echelon form is sometimes called Gauss–Jordan elimination. In this case, the term Gaussian elimination refers to the process until it has reached its upper triangular (unreduced) row echelon form. For computational reasons, when solving systems of linear equations, it is sometimes preferable to stop row operations before the matrix is completely reduced.

Definitions and example of algorithm


The process of row reduction makes use of elementary row operations, and can be divided into two parts. The first part (sometimes called forward elimination) reduces a given system to row echelon form, from which one can tell whether there are no solutions, a unique solution, or infinitely many solutions. The second part (sometimes called back substitution) continues to use row operations until the solution is found; in other words, it puts the matrix into reduced row echelon form.

Another point of view, which turns out to be very useful to analyze the algorithm, is that row reduction produces a matrix decomposition of the original matrix. The elementary row operations may be viewed as the multiplication on the left of the original matrix by elementary matrices. Alternatively, a sequence of elementary operations that reduces a single row may be viewed as multiplication by a Frobenius matrix. Then the first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix.
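
For concreteness, here is a minimal Python sketch of this viewpoint (no pivoting, and assuming all pivots encountered are nonzero; the name lu_no_pivot is illustrative, not standard): recording the forward-elimination multipliers in a lower triangular matrix L yields, together with the resulting upper triangular matrix U, a factorization with LU = A.

# Minimal sketch: forward elimination without pivoting; storing the
# multipliers in L gives an LU decomposition (assumes nonzero pivots).
def lu_no_pivot(A):
    n = len(A)
    U = [row[:] for row in A]               # working copy; becomes U
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]           # multiplier that clears U[i][k]
            L[i][k] = m                     # the multiplier is stored in L
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
L, U = lu_no_pivot(A)
for i in range(3):                          # check that L times U reproduces A
    for j in range(3):
        assert abs(sum(L[i][r] * U[r][j] for r in range(3)) - A[i][j]) < 1e-12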

Row operations


There are three types of elementary row operations which may be performed on the rows of a matrix:

  1. Interchanging two rows.
  2. Multiplying a row by a non-zero scalar.
  3. Adding a scalar multiple of one row to another.

If the matrix is associated to a system of linear equations, then these operations do not change the solution set. Therefore, if one's goal is to solve a system of linear equations, then using these row operations could make the problem easier.
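
As an illustration, here is a small Python sketch of the three operations on a matrix stored as a list of rows; the function names are illustrative only, and each operation mutates the matrix in place.

# Type 1: interchange rows i and j.
def swap_rows(A, i, j):
    A[i], A[j] = A[j], A[i]

# Type 2: multiply row i by a nonzero scalar c.
def scale_row(A, i, c):
    assert c != 0
    A[i] = [c * x for x in A[i]]

# Type 3: add c times row src to row dst.
def add_multiple_of_row(A, src, dst, c):
    A[dst] = [x + c * y for x, y in zip(A[dst], A[src])]

A = [[2, 1, -1, 8], [-3, -1, 2, -11], [-2, 1, 2, -3]]
add_multiple_of_row(A, 0, 1, 1.5)   # the operation written L2 + (3/2)L1 -> L2
print(A[1])                         # [0.0, 0.5, 0.5, 1.0]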

Echelon form

Main article: Row echelon form

For each row in a matrix, if the row does not consist of only zeros, then the leftmost nonzero entry is called the leading entry (or pivot) of that row. If two leading entries are in the same column, then a row operation of type 3 can be used to make one of those entries zero. Then by using the row swapping operation, one can always order the rows so that for every non-zero row, the leading entry is to the right of the leading entry of the row above. If this is the case, the matrix is said to be in row echelon form. In this form, the lower left part of the matrix contains only zeros, and all of the zero rows are below the non-zero rows. The word "echelon" is used here because one can roughly think of the rows being ranked by their size, with the largest being at the top and the smallest being at the bottom.

For example, the following matrix is in row echelon form, and its leading entries are shown in red:

$$\begin{bmatrix}0&\color{red}{\mathbf{2}}&1&-1\\0&0&\color{red}{\mathbf{3}}&1\\0&0&0&0\end{bmatrix}.$$

It is in echelon form because the zero row is at the bottom and the leading entry of the second row (in the third column) is to the right of the leading entry of the first row (in the second column).

A matrix is said to be in reduced row echelon form if furthermore all of the leading entries are equal to 1 (which can be achieved by using the elementary row operation of type 2), and in every column containing a leading entry, all of the other entries in that column are zero (which can be achieved by using elementary row operations of type 3).
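
These conditions are mechanical enough to check by program. The following Python sketch tests the row echelon condition described above (leading entries move strictly to the right, zero rows last); the helper names are illustrative.

def leading_index(row):
    # Column index of the leftmost nonzero entry, or None for a zero row.
    for j, x in enumerate(row):
        if x != 0:
            return j
    return None

def is_row_echelon(A):
    prev = -1
    seen_zero_row = False
    for row in A:
        lead = leading_index(row)
        if lead is None:
            seen_zero_row = True            # all later rows must also be zero
        elif seen_zero_row or lead <= prev:
            return False
        else:
            prev = lead
    return True

print(is_row_echelon([[0, 2, 1, -1], [0, 0, 3, 1], [0, 0, 0, 0]]))  # True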

Example of the algorithm


Suppose the goal is to find and describe the set of solutions to the following system of linear equations:

$$\begin{aligned}2x+y-z&=8\qquad(L_{1})\\-3x-y+2z&=-11\qquad(L_{2})\\-2x+y+2z&=-3\qquad(L_{3})\end{aligned}$$

The steps below show the row reduction process applied simultaneously to the system of equations and its associated augmented matrix. In practice, one does not usually deal with the systems in terms of equations, but instead makes use of the augmented matrix, which is more suitable for computer manipulations. The row reduction procedure may be summarized as follows: eliminate $x$ from all equations below $L_1$, and then eliminate $y$ from all equations below $L_2$. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.

Original system:
$$\begin{aligned}2x+y-z&=8\\-3x-y+2z&=-11\\-2x+y+2z&=-3\end{aligned}\qquad\left[\begin{array}{rrr|r}2&1&-1&8\\-3&-1&2&-11\\-2&1&2&-3\end{array}\right]$$

Step 1 ($L_2+\tfrac{3}{2}L_1\to L_2$, $L_3+L_1\to L_3$):
$$\begin{aligned}2x+y-z&=8\\ \tfrac{1}{2}y+\tfrac{1}{2}z&=1\\ 2y+z&=5\end{aligned}\qquad\left[\begin{array}{rrr|r}2&1&-1&8\\0&\frac{1}{2}&\frac{1}{2}&1\\0&2&1&5\end{array}\right]$$

Step 2 ($L_3-4L_2\to L_3$):
$$\begin{aligned}2x+y-z&=8\\ \tfrac{1}{2}y+\tfrac{1}{2}z&=1\\ -z&=1\end{aligned}\qquad\left[\begin{array}{rrr|r}2&1&-1&8\\0&\frac{1}{2}&\frac{1}{2}&1\\0&0&-1&1\end{array}\right]$$

The matrix is now in row echelon form (also called triangular form).

Step 3 ($L_1-L_3\to L_1$, $L_2+\tfrac{1}{2}L_3\to L_2$):
$$\begin{aligned}2x+y&=7\\ \tfrac{1}{2}y&=\tfrac{3}{2}\\ -z&=1\end{aligned}\qquad\left[\begin{array}{rrr|r}2&1&0&7\\0&\frac{1}{2}&0&\frac{3}{2}\\0&0&-1&1\end{array}\right]$$

Step 4 ($2L_2\to L_2$, $-L_3\to L_3$):
$$\begin{aligned}2x+y&=7\\ y&=3\\ z&=-1\end{aligned}\qquad\left[\begin{array}{rrr|r}2&1&0&7\\0&1&0&3\\0&0&1&-1\end{array}\right]$$

Step 5 ($L_1-L_2\to L_1$, then $\tfrac{1}{2}L_1\to L_1$):
$$\begin{aligned}x&=2\\ y&=3\\ z&=-1\end{aligned}\qquad\left[\begin{array}{rrr|r}1&0&0&2\\0&1&0&3\\0&0&1&-1\end{array}\right]$$

The matrix is now in reduced row echelon form.

At each step, the row operations shown in parentheses are the ones just performed. For the first step, $x$ is eliminated from $L_2$ by adding $\tfrac{3}{2}L_1$ to $L_2$. Next, $x$ is eliminated from $L_3$ by adding $L_1$ to $L_3$. These row operations are written as
$$\begin{aligned}L_2+\tfrac{3}{2}L_1&\to L_2,\\L_3+L_1&\to L_3.\end{aligned}$$

Once $y$ is also eliminated from the third row, the result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. From a computational point of view, it is faster to solve the variables in reverse order, a process known as back-substitution. One sees the solution is $z=-1$, $y=3$, and $x=2$. In particular, there is a unique solution to the original system of equations in this case.

Instead of stopping once the matrix is in row echelon form, one could continue until the matrix is in reduced row echelon form, as is done above. The process of row reducing until the matrix is reduced is sometimes referred to as Gauss–Jordan elimination, to distinguish it from stopping after reaching echelon form.
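
The two phases can be condensed into a short routine. The following Python sketch runs forward elimination on the augmented matrix and then back-substitution; it does no pivoting and assumes nonzero pivots, which suffices for this example but not for general input.

def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]    # augmented matrix [A | b]
    for k in range(n):                              # forward elimination
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                  # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(solve([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))
# [2.0, 3.0, -1.0]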

History


The method of Gaussian elimination appears – albeit without proof – in the Chinese mathematical text Chapter Eight: Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 AD, but parts of it were written as early as approximately 150 BC.[1][2][3] It was commented on by Liu Hui in the 3rd century.

According to Grcar,[4] the solution of linear equations by elimination was invented independently in several Eurasian cultures, starting in antiquity; in Europe, definite examples of the procedure were published by the late Renaissance (in the 1550s). It is quite possible that mathematicians then regarded the procedure as elementary and in no need of explanation for professionals, so we may never learn its detailed history, except that by that time it was practiced in at least several places in Europe.

The method in Europe stems from the notes of Isaac Newton.[4][5] In 1669–1670, Newton wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as Arithmetica Universalis in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century. Carl Friedrich Gauss in 1810 devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least-squares problems.[6] The algorithm that is taught in high school was named for Gauss only in the 1950s as a result of confusion over the history of the subject.[7]

Some authors use the term Gaussian elimination to refer only to the procedure until the matrix is in echelon form, and use the term Gauss–Jordan elimination to refer to the procedure which ends in reduced echelon form. The name is used because it is a variation of Gaussian elimination as described by Wilhelm Jordan in 1888. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently.[8]

Applications


Historically, the first application of the row reduction method was solving systems of linear equations. Below are some other important applications of the algorithm.

Computing determinants


To explain how Gaussian elimination allows the computation of the determinant of a square matrix, we have to recall how the elementary row operations change the determinant:

  • Swapping two rows multiplies the determinant by −1.
  • Multiplying a row by a nonzero scalar multiplies the determinant by the same scalar.
  • Adding to one row a scalar multiple of another does not change the determinant.

If Gaussian elimination applied to a square matrix A produces a row echelon matrix B, let d be the product of the scalars by which the determinant has been multiplied, using the above rules. Then the determinant of A is the quotient by d of the product of the elements of the diagonal of B:

$$\det(A)=\frac{\prod\operatorname{diag}(B)}{d}.$$
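
A Python sketch of this rule, using only row swaps and type-3 operations, so that d is just (−1) raised to the number of swaps (the example matrix is the one from the matrix-inversion section below):

def det(A):
    n = len(A)
    B = [row[:] for row in A]
    d = 1.0                                 # product of scalars applied so far
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(B[i][k]))   # pivot row
        if B[p][k] == 0:
            return 0.0                      # no pivot available: det(A) = 0
        if p != k:
            B[k], B[p] = B[p], B[k]
            d = -d                          # a swap multiplies det by -1
        for i in range(k + 1, n):
            f = B[i][k] / B[k][k]
            for j in range(k, n):
                B[i][j] -= f * B[k][j]      # type-3 op: determinant unchanged
    prod = 1.0
    for k in range(n):
        prod *= B[k][k]
    return prod / d

print(det([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]))
# 4.0 (up to floating-point rounding)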

Computationally, for an n × n matrix, this method needs only $O(n^3)$ arithmetic operations, while using the Leibniz formula for determinants requires $n\,n!$ operations (the number of summands in the formula times the number of multiplications in each summand), and recursive Laplace expansion requires $O(n\,2^n)$ operations if the sub-determinants are memoized so that each is computed only once (the number of operations in a linear combination times the number of sub-determinants to compute, which are determined by their columns). Even on the fastest computers, these two methods are impractical, or almost so, for n above 20.

Finding the inverse of a matrix

See also: Invertible matrix

A variant of Gaussian elimination called Gauss–Jordan elimination can be used for finding the inverse of a matrix, if it exists. If A is an n × n square matrix, then one can use row reduction to compute its inverse matrix, if it exists. First, the n × n identity matrix is augmented to the right of A, forming an n × 2n block matrix [A | I]. Now through application of elementary row operations, find the reduced echelon form of this n × 2n matrix. The matrix A is invertible if and only if the left block can be reduced to the identity matrix I; in this case the right block of the final matrix is $A^{-1}$. If the algorithm is unable to reduce the left block to I, then A is not invertible.

For example, consider the following matrix:
$$A=\begin{bmatrix}2&-1&0\\-1&2&-1\\0&-1&2\end{bmatrix}.$$

To find the inverse of this matrix, one takes the following matrix augmented by the identity and row-reduces it as a 3 × 6 matrix:
$$[A|I]=\left[\begin{array}{ccc|ccc}2&-1&0&1&0&0\\-1&2&-1&0&1&0\\0&-1&2&0&0&1\end{array}\right].$$

By performing row operations, one can check that the reduced row echelon form of this augmented matrix is
$$[I|B]=\left[\begin{array}{rrr|rrr}1&0&0&\frac{3}{4}&\frac{1}{2}&\frac{1}{4}\\0&1&0&\frac{1}{2}&1&\frac{1}{2}\\0&0&1&\frac{1}{4}&\frac{1}{2}&\frac{3}{4}\end{array}\right].$$

One can think of each row operation as the left product by an elementary matrix. Denoting by $B$ the product of these elementary matrices, we showed, on the left, that $BA=I$, and therefore $B=A^{-1}$. On the right, we kept a record of $BI=B$, which we know is the inverse desired. This procedure for finding the inverse works for square matrices of any size.
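
A Python sketch of the whole procedure, with partial pivoting added for robustness (the example above happens to reduce without any swaps, but general input may need them):

def inverse(A):
    n = len(A)
    # Augment A with the identity: M = [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # partial pivoting
        if M[p][k] == 0:
            raise ValueError("matrix is not invertible")
        M[k], M[p] = M[p], M[k]
        pivot = M[k][k]
        M[k] = [x / pivot for x in M[k]]     # make the leading entry 1
        for i in range(n):
            if i != k and M[i][k] != 0:      # clear the rest of column k
                f = M[i][k]
                M[i] = [x - f * y for x, y in zip(M[i], M[k])]
    return [row[n:] for row in M]            # right block is the inverse

print(inverse([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]))
# [[0.75, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.75]] (up to rounding)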

Computing ranks and bases


The Gaussian elimination algorithm can be applied to any m × n matrix A. In this way, for example, some 6 × 9 matrices can be transformed to a matrix that has a row echelon form like

$$T=\begin{bmatrix}a&*&*&*&*&*&*&*&*\\0&0&b&*&*&*&*&*&*\\0&0&0&c&*&*&*&*&*\\0&0&0&0&0&0&d&*&*\\0&0&0&0&0&0&0&0&e\\0&0&0&0&0&0&0&0&0\end{bmatrix},$$

where the stars are arbitrary entries, and a, b, c, d, e are nonzero entries. This echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 nonzero rows in T; the vector space spanned by the columns of A has a basis consisting of its columns 1, 3, 4, 7 and 9 (the columns with a, b, c, d, e in T), and the stars show how the other columns of A can be written as linear combinations of the basis columns.

All of this applies also to the reduced row echelon form, which is a particular row echelon form.
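
The rank and the pivot columns can be read off programmatically. Here is a hedged Python sketch (the tolerance parameter is a floating-point implementation detail, not part of the mathematical statement, and column indices are reported 0-based):

def rank_and_pivot_columns(A, tol=1e-12):
    M = [row[:] for row in A]
    m, n = len(M), len(M[0])
    pivots, h = [], 0                       # h = current pivot row
    for k in range(n):                      # scan columns left to right
        p = max(range(h, m), key=lambda i: abs(M[i][k]))
        if abs(M[p][k]) <= tol:
            continue                        # no pivot in this column
        M[h], M[p] = M[p], M[h]
        for i in range(h + 1, m):
            f = M[i][k] / M[h][k]
            for j in range(k, n):
                M[i][j] -= f * M[h][j]
        pivots.append(k)
        h += 1
        if h == m:
            break
    return len(pivots), pivots

A = [[1, 3, 1, 9], [1, 1, -1, 1], [3, 11, 5, 35]]
print(rank_and_pivot_columns(A))            # (2, [0, 1]): rank 2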

Computational efficiency


The number of arithmetic operations required to perform row reduction is one way of measuring the algorithm's computational efficiency. For example, to solve a system of n equations for n unknowns by performing row operations on the matrix until it is in echelon form, and then solving for each unknown in reverse order, requires $n(n+1)/2$ divisions, $(2n^3+3n^2-5n)/6$ multiplications, and $(2n^3+3n^2-5n)/6$ subtractions,[9] for a total of approximately $2n^3/3$ operations. Thus it has an arithmetic complexity (time complexity, where each arithmetic operation takes a unit of time, independently of the size of the inputs) of $O(n^3)$.

This complexity is a good measure of the time needed for the whole computation when the time for each arithmetic operation is approximately constant. This is the case when the coefficients are represented by floating-point numbers or when they belong to a finite field. If the coefficients are integers or rational numbers exactly represented, the intermediate entries can grow exponentially large, so the bit complexity is exponential.[10] However, the Bareiss algorithm is a variant of Gaussian elimination that avoids this exponential growth of the intermediate entries; with the same arithmetic complexity of $O(n^3)$, it has a bit complexity of $O(n^5)$, and therefore has strongly-polynomial time complexity.

Gaussian elimination and its variants can be used on computers for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using iterative methods. Specific methods exist for systems whose coefficients follow a regular pattern (see System of linear equations).

Bareiss algorithm

Main article: Bareiss algorithm

The first strongly-polynomial time algorithm for Gaussian elimination was published by Jack Edmonds in 1967.[11]: 37  Independently, and almost simultaneously, Erwin Bareiss discovered another algorithm, based on the following remark, which applies to a division-free variant of Gaussian elimination.

In standard Gaussian elimination, one subtracts from each row $R_i$ below the pivot row $R_k$ a multiple of $R_k$ by $r_{i,k}/r_{k,k}$, where $r_{i,k}$ and $r_{k,k}$ are the entries in the pivot column of $R_i$ and $R_k$, respectively.

The Bareiss variant consists, instead, of replacing $R_i$ with $\frac{r_{k,k}R_i-r_{i,k}R_k}{r_{k-1,k-1}}$. This produces a row echelon form that has the same zero entries as with the standard Gaussian elimination.

Bareiss' main remark is that each matrix entry generated by this variant is the determinant of a submatrix of the original matrix.

In particular, if one starts with integer entries, the divisions occurring in the algorithm are exact divisions resulting in integers. Therefore, all intermediate and final entries are integers. Moreover, Hadamard's inequality provides an upper bound on the absolute values of the intermediate and final entries, and thus a bit complexity of $\tilde{O}(n^5)$, using soft O notation.
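
A minimal Python sketch of the fraction-free recurrence on an integer matrix, assuming no pivoting is needed (that is, every leading principal minor is nonzero); by Bareiss' remark, the integer division // is exact at every step, and the last diagonal entry ends up equal to det(A).

def bareiss(A):
    M = [row[:] for row in A]
    n = len(M)
    prev = 1                                 # plays the role of r_{k-1,k-1}
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Exact integer division by the previous pivot.
                M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M

M = bareiss([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
print(M[2][2])                               # 4, the determinant of the input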

Moreover, as an upper bound on the size of the final entries is known, a complexity of $\tilde{O}(n^4)$ can be obtained with modular computation followed either by Chinese remaindering or Hensel lifting.

As a corollary, the following problems can be solved in strongly polynomial time with the same bit complexity:[11]: 40 

  • Testing whether m given rational vectors are linearly independent
  • Computing the determinant of a rational matrix
  • Computing a solution of a rational equation system Ax = b
  • Computing the inverse matrix of a nonsingular rational matrix
  • Computing the rank of a rational matrix

Numeric instability


One possible problem is numerical instability, caused by the possibility of dividing by very small numbers. If, for example, the leading coefficient of one of the rows is very close to zero, then to row-reduce the matrix, one would need to divide by that number. This means that any error which existed for the number that was close to zero would be amplified. Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable when using partial pivoting, even though there are examples of stable matrices for which it is unstable.[12]
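
A small demonstration of this amplification in double precision, on a 2 × 2 system chosen purely for illustration (the numbers are not from any source; the exact solution is approximately x = y = 1):

def solve2(a, b, pivot):
    # Eliminate x from the second equation, optionally swapping rows first
    # so that the larger entry of the first column is used as the pivot.
    A = [row[:] for row in a]
    v = b[:]
    if pivot and abs(A[1][0]) > abs(A[0][0]):
        A[0], A[1] = A[1], A[0]
        v[0], v[1] = v[1], v[0]
    f = A[1][0] / A[0][0]
    A[1][1] -= f * A[0][1]
    v[1] -= f * v[0]
    y = v[1] / A[1][1]                       # back-substitution
    x = (v[0] - A[0][1] * y) / A[0][0]
    return x, y

A = [[1e-20, 1.0], [1.0, 1.0]]
b = [1.0, 2.0]
print(solve2(A, b, pivot=False))             # (0.0, 1.0): x is completely lost
print(solve2(A, b, pivot=True))              # (1.0, 1.0): correct to rounding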

Generalizations


Gaussian elimination can be performed over any field, not just the real numbers.

Buchberger's algorithm is a generalization of Gaussian elimination to systems of polynomial equations. This generalization depends heavily on the notion of a monomial order. The choice of an ordering on the variables is already implicit in Gaussian elimination, manifesting as the choice to work from left to right when selecting pivot positions.

Computing the rank of a tensor of order greater than 2 is NP-hard.[13] Therefore, if P ≠ NP, there cannot be a polynomial time analog of Gaussian elimination for higher-order tensors (matrices are array representations of order-2 tensors).

Pseudocode


As explained above, Gaussian elimination transforms a given m × n matrix A into a matrix in row echelon form.

In the following pseudocode, A[i, j] denotes the entry of the matrix A in row i and column j with the indices starting from 1. The transformation is performed in place, meaning that the original matrix is lost, being eventually replaced by its row echelon form.

h := 1 /* Initialization of the pivot row */
k := 1 /* Initialization of the pivot column */

while h ≤ m and k ≤ n:
    /* Find the k-th pivot: */
    i_max := argmax (i = h ... m, abs(A[i, k]))
    if A[i_max, k] = 0:
        /* No pivot in this column, pass to next column */
        k := k + 1
    else:
        swap rows(h, i_max)
        /* Do for all rows below pivot: */
        for i = h + 1 ... m:
            f := A[i, k] / A[h, k]
            /* Fill with zeros the lower part of pivot column: */
            A[i, k] := 0
            /* Do for all remaining elements in current row: */
            for j = k + 1 ... n:
                A[i, j] := A[i, j] - A[h, j] * f
        /* Increase pivot row and column */
        h := h + 1
        k := k + 1

This algorithm differs slightly from the one discussed earlier, by choosing a pivot with the largest absolute value. Such partial pivoting may be required if, at the pivot place, the entry of the matrix is zero. In any case, choosing the largest possible absolute value of the pivot improves the numerical stability of the algorithm when floating point is used for representing numbers.[14]

Upon completion of this procedure the matrix will be in row echelon form and the corresponding system may be solved by back substitution.
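
A direct Python transcription of the pseudocode (with 0-based indices) might look as follows; it reduces the matrix in place, exactly as described above.

def row_echelon(A):
    m, n = len(A), len(A[0])
    h, k = 0, 0                              # pivot row and pivot column
    while h < m and k < n:
        # Find the k-th pivot: the row with the largest |entry| in column k.
        i_max = max(range(h, m), key=lambda i: abs(A[i][k]))
        if A[i_max][k] == 0:
            k += 1                           # no pivot here; go to next column
        else:
            A[h], A[i_max] = A[i_max], A[h]  # swap rows h and i_max
            for i in range(h + 1, m):        # for all rows below the pivot
                f = A[i][k] / A[h][k]
                A[i][k] = 0.0                # zero out the pivot column entry
                for j in range(k + 1, n):
                    A[i][j] -= A[h][j] * f
            h += 1
            k += 1

A = [[2.0, 1.0, -1.0, 8.0], [-3.0, -1.0, 2.0, -11.0], [-2.0, 1.0, 2.0, -3.0]]
row_echelon(A)
for row in A:
    print(row)                               # a row echelon form of A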


References

  1. ^"DOCUMENTA MATHEMATICA, Vol. Extra Volume: Optimization Stories (2012), 9-14".www.emis.de. Retrieved2022-12-02.
  2. ^Calinger 1999, pp. 234–236
  3. ^Timothy Gowers; June Barrow-Green; Imre Leader (8 September 2008).The Princeton Companion to Mathematics. Princeton University Press. p. 607.ISBN 978-0-691-11880-2.
  4. ^abGrcar 2011a, pp. 169–172
  5. ^Grcar 2011b, pp. 783–785
  6. ^Lauritzen, p. 3
  7. ^Grcar 2011b, p. 789
  8. ^Althoen, Steven C.; McLaughlin, Renate (1987), "Gauss–Jordan reduction: a brief history",The American Mathematical Monthly,94 (2), Mathematical Association of America:130–142,doi:10.2307/2322413,ISSN 0002-9890,JSTOR 2322413
  9. ^Farebrother 1988, p. 12
  10. ^Fang, Xin Gui; Havas, George (1997)."On the worst-case complexity of integer Gaussian elimination".Proceedings of the 1997 international symposium on Symbolic and algebraic computation. ISSAC '97. Kihei, Maui, Hawaii, United States: ACM. pp. 28–31.doi:10.1145/258726.258740.ISBN 0-89791-875-4.
  11. ^abGrötschel, Martin;Lovász, László;Schrijver, Alexander (1993),Geometric algorithms and combinatorial optimization, Algorithms and Combinatorics, vol. 2 (2nd ed.), Springer-Verlag, Berlin,doi:10.1007/978-3-642-78240-4,ISBN 978-3-642-78242-8,MR 1261419
  12. ^Golub & Van Loan 1996, §3.4.6
  13. ^Hillar, Christopher;Lim, Lek-Heng (2009-11-07). "Most tensor problems are NP-hard".arXiv:0911.1393 [cs.CC].
  14. ^Kurgalin, Sergei; Borzunov, Sergei (2021).Algebra and Geometry with Python. Cham.doi:10.1007/978-3-030-61541-3.ISBN 978-3-030-61540-6.{{cite book}}: CS1 maint: location missing publisher (link)


External links

The Wikibook Linear Algebra has a page on the topic of: Gaussian elimination