Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.
Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences are as vast as the applications of continuous mathematics. It is often a fundamental part of engineering and computational science problems, such as image and signal processing, telecommunication, computational finance, materials science simulations, structural biology, data mining, bioinformatics, and fluid dynamics. Matrix methods are particularly used in finite difference methods, finite element methods, and the modeling of differential equations. Noting the broad applications of numerical linear algebra, Lloyd N. Trefethen and David Bau, III argue that it is "as fundamental to the mathematical sciences as calculus and differential equations",[1]: x even though it is a comparatively small field.[2] Because many properties of matrices and vectors also apply to functions and operators, numerical linear algebra can also be viewed as a type of functional analysis which has a particular emphasis on practical algorithms.[1]: ix
Common problems in numerical linear algebra include obtaining matrix decompositions like the singular value decomposition, the QR factorization, the LU factorization, or the eigendecomposition, which can then be used to answer common linear algebraic problems like solving linear systems of equations, locating eigenvalues, or least squares optimisation. Numerical linear algebra's central goal of developing algorithms that do not introduce errors when applied to real data on a finite precision computer is often achieved by iterative methods rather than direct ones.
Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations.[2] The first serious attempt to minimize computer error in the application of algorithms to real data was John von Neumann and Herman Goldstine's work in 1947.[3] The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems.[2]
For many problems in applied linear algebra, it is useful to adopt the perspective of a matrix as being a concatenation of column vectors. For example, when solving the linear system Ax = b, rather than understanding x as the product of A⁻¹ with b, it is helpful to think of x as the vector of coefficients in the linear expansion of b in the basis formed by the columns of A.[1]: 8 Thinking of matrices as a concatenation of columns is also a practical approach for the purposes of matrix algorithms. This is because matrix algorithms frequently contain two nested loops: one over the columns of a matrix A, and another over the rows of A. For example, for an m × n matrix A and vectors x and y of lengths n and m, we could use the column partitioning perspective to compute y := Ax + y as
    for q = 1:n
        for p = 1:m
            y(p) = A(p,q)*x(q) + y(p);   % accumulate column q's contribution to y
        end
    end
The singular value decomposition of a matrix A is A = UΣV*, where U and V are unitary and Σ is diagonal. The diagonal entries of Σ are called the singular values of A. Because singular values are the square roots of the eigenvalues of A*A, there is a tight connection between the singular value decomposition and eigenvalue decompositions. This means that most methods for computing the singular value decomposition are similar to eigenvalue methods;[1]: 36 perhaps the most common method involves Householder procedures.[1]: 253
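As a minimal illustration, MATLAB's built-in svd returns the three factors; the matrix below is an arbitrary example chosen only to show the connection to the eigenvalues of A*A:

    A = [1 2; 3 4; 5 6];                 % arbitrary 3-by-2 example matrix
    [U, S, V] = svd(A);                  % A = U*S*V', with U, V unitary and S diagonal
    norm(A - U*S*V')                     % reconstruction error near machine precision
    diag(S)                              % the singular values of A
    sort(sqrt(eig(A'*A)), 'descend')     % square roots of the eigenvalues of A'*A agree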
The QR factorization of an m × n matrix A consists of an m × m matrix Q and an m × n matrix R so that A = QR, where Q is orthogonal and R is upper triangular.[1]: 50 [4]: 223 The two main algorithms for computing QR factorizations are the Gram–Schmidt process and the Householder transformation. The QR factorization is often used to solve linear least-squares problems, and eigenvalue problems (by way of the iterative QR algorithm).
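A short sketch using MATLAB's built-in qr, with an arbitrary random matrix standing in for real data:

    A = randn(5, 3);        % arbitrary example matrix
    [Q, R] = qr(A);         % full factorization: Q is 5-by-5, R is 5-by-3
    norm(Q'*Q - eye(5))     % Q is orthogonal, so Q'*Q = I up to rounding
    norm(A - Q*R)           % the factors reproduce A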
An LU factorization of a matrix A consists of a lower triangular matrix L and an upper triangular matrix U so that A = LU. The matrix U is found by an upper triangularization procedure which involves left-multiplying A by a series of matrices M₁, …, Mₙ₋₁ to form the product Mₙ₋₁ ⋯ M₁A = U, so that equivalently L = M₁⁻¹ ⋯ Mₙ₋₁⁻¹.[1]: 147 [4]: 96
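In MATLAB this factorization is available as the built-in lu, which in practice adds row pivoting; a minimal example with arbitrary data:

    A = [2 1 1; 4 3 3; 8 7 9];   % arbitrary example matrix
    [L, U, P] = lu(A);           % LU with partial pivoting: P*A = L*U
    norm(P*A - L*U)              % the factors reproduce the row-permuted matrix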
The eigenvalue decomposition of a matrix A is A = XΛX⁻¹, where the columns of X are the eigenvectors of A, and Λ is a diagonal matrix the diagonal entries of which are the corresponding eigenvalues of A.[1]: 33 There is no direct method for finding the eigenvalue decomposition of an arbitrary matrix. Because the eigenvalues of a matrix are the roots of its characteristic polynomial, and it is not possible to write a program that finds the exact roots of an arbitrary polynomial in finite time, any general eigenvalue solver must necessarily be iterative.[1]: 192
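MATLAB's built-in eig computes the decomposition (iteratively, behind the scenes); a minimal example with an arbitrary matrix:

    A = [2 0; 1 3];     % arbitrary example matrix
    [X, D] = eig(A);    % columns of X are eigenvectors, diag(D) the eigenvalues
    norm(A*X - X*D)     % A*X = X*D up to rounding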
From the numerical linear algebra perspective, Gaussian elimination is a procedure for factoring a matrix A into its LU factorization, which Gaussian elimination accomplishes by left-multiplying A by a succession of matrices L₁, …, Lₘ₋₁ so that Lₘ₋₁ ⋯ L₁A = U is upper triangular, where L = L₁⁻¹ ⋯ Lₘ₋₁⁻¹ is lower triangular.[1]: 148 Naive programs for Gaussian elimination are notoriously unstable, and produce huge errors when applied to matrices with many significant digits.[2] The simplest solution is to introduce pivoting, which produces a modified Gaussian elimination algorithm that is stable.[1]: 151
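A minimal MATLAB sketch of the pivoted algorithm, for illustration only (the function name gauss_pp is invented here; the built-in lu should be preferred in practice):

    function [L, U, P] = gauss_pp(A)
        % Gaussian elimination with partial pivoting, so that P*A = L*U.
        m = size(A, 1);
        L = eye(m); P = eye(m); U = A;
        for k = 1:m-1
            [~, r] = max(abs(U(k:m, k)));       % largest pivot in column k...
            r = r + k - 1;                      % ...indexed from row k
            U([k r], k:m) = U([r k], k:m);      % swap rows of U
            L([k r], 1:k-1) = L([r k], 1:k-1);  % keep earlier multipliers consistent
            P([k r], :) = P([r k], :);          % record the permutation
            for p = k+1:m
                L(p, k) = U(p, k) / U(k, k);    % multiplier that zeroes U(p,k)
                U(p, k:m) = U(p, k:m) - L(p, k)*U(k, k:m);
            end
        end
    end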
Numerical linear algebra characteristically approaches matrices as a concatenation of column vectors. In order to solve the linear system Ax = b, the traditional algebraic approach is to understand x as the product of A⁻¹ with b. Numerical linear algebra instead interprets x as the vector of coefficients of the linear expansion of b in the basis formed by the columns of A.[1]: 8
Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix A and the vectors x and b, which may make one factorization much easier to obtain than others. If A = QR is a QR factorization of A, then equivalently Rx = Q*b. This is as easy to compute as a matrix factorization.[1]: 54 If A = XΛX⁻¹ is an eigendecomposition of A, and we seek to find b so that b = Ax, with b′ = X⁻¹b and x′ = X⁻¹x, then we have b′ = Λx′.[1]: 33 This is closely related to the solution to the linear system using the singular value decomposition, because the singular values of a normal matrix are the absolute values of its eigenvalues, and in general are the square roots of the eigenvalues of the Gram matrix A*A. And if A = LU is an LU factorization of A, then Ax = b can be solved using the triangular systems Ly = b and Ux = y.[1]: 147 [4]: 99
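For instance, the LU route amounts to two triangular solves, sketched here in MATLAB with arbitrary data:

    A = [4 3; 6 3]; b = [10; 12];   % arbitrary example system
    [L, U, P] = lu(A);              % P*A = L*U with partial pivoting
    y = L \ (P*b);                  % forward substitution: L*y = P*b
    x = U \ y;                      % back substitution: U*x = y
    norm(A*x - b)                   % residual near machine precision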
Matrix decompositions suggest a number of ways to minimize the norm of the residual r = b − Ax, as in the regression problem. The QR approach solves this problem by computing the reduced QR factorization A = Q̂R̂ and rearranging to obtain R̂x = Q̂*b. This upper triangular system can then be solved for x. The SVD also suggests an algorithm for obtaining linear least squares. By computing the reduced SVD decomposition A = ÛΣ̂V* and then computing the vector Û*b, we reduce the least squares problem to a simple diagonal system.[1]: 84 The fact that least squares solutions can be produced by the QR and SVD factorizations means that, in addition to the classical normal equations method for solving least squares problems, these problems can also be solved by methods that include the Gram–Schmidt algorithm and Householder methods.
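Both routes can be sketched in a few lines of MATLAB with arbitrary random data; the two solutions agree up to rounding:

    A = randn(8, 3); b = randn(8, 1);   % arbitrary overdetermined system
    [Q, R] = qr(A, 0);                  % reduced QR factorization
    x_qr = R \ (Q'*b);                  % triangular solve of R*x = Q'*b
    [U, S, V] = svd(A, 0);              % reduced SVD
    x_svd = V * (S \ (U'*b));           % diagonal solve in the SVD basis
    norm(x_qr - x_svd)                  % both give the least squares solution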
Suppose that a problem is a function f : X → Y, where X is a normed vector space of data and Y is a normed vector space of solutions. For some data point x ∈ X, the problem is said to be ill-conditioned if a small perturbation in x produces a large change in the value of f(x). We can quantify this by defining a condition number which represents how well-conditioned a problem is, defined as

    κ̂ = lim_{δ→0} sup_{‖δx‖≤δ} ‖δf‖ / ‖δx‖.
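For the problem of solving a linear system, the relevant condition number is that of the matrix, available in MATLAB as the built-in cond; the Hilbert matrix is a classic ill-conditioned example:

    cond(eye(5))      % identity: perfectly conditioned, condition number 1
    cond(hilb(10))    % 10-by-10 Hilbert matrix: roughly 1e13, severely ill-conditioned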
Instability is the tendency of computer algorithms, which depend on floating-point arithmetic, to produce results that differ dramatically from the exact mathematical solution to a problem. When a matrix contains real data with many significant digits, many algorithms for solving problems like linear systems of equations or least squares optimisation may produce highly inaccurate results. Creating stable algorithms for ill-conditioned problems is a central concern in numerical linear algebra. One example is that the stability of Householder triangularization makes it a particularly robust solution method for linear systems, whereas the instability of the normal equations method for solving least squares problems is a reason to favour matrix decomposition methods like using the singular value decomposition. Some matrix decomposition methods may be unstable, but have straightforward modifications that make them stable; one example is the unstable Gram–Schmidt process, which can easily be changed to produce the stable modified Gram–Schmidt.[1]: 140 Another classical problem in numerical linear algebra is the finding that Gaussian elimination is unstable, but becomes stable with the introduction of pivoting.
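A minimal sketch of modified Gram–Schmidt in MATLAB (the function name mgs is invented here for illustration); the stabilizing change is that each projection is subtracted from the working columns immediately, rather than all at once against the original columns:

    function [Q, R] = mgs(A)
        % Modified Gram-Schmidt: a reduced QR factorization A = Q*R.
        [m, n] = size(A);
        Q = zeros(m, n); R = zeros(n, n);
        V = A;                                       % working copy of the columns
        for k = 1:n
            R(k, k) = norm(V(:, k));
            Q(:, k) = V(:, k) / R(k, k);             % normalize column k
            for j = k+1:n
                R(k, j) = Q(:, k)' * V(:, j);        % project against q_k...
                V(:, j) = V(:, j) - R(k, j)*Q(:, k); % ...and subtract immediately
            end
        end
    end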
There are two reasons that iterative algorithms are an important part of numerical linear algebra. First, many important numerical problems have no direct solution; in order to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach. Second, noniterative algorithms for an arbitrary m × m matrix require O(m³) time, which is a surprisingly high floor given that matrices contain only m² numbers. Iterative approaches can take advantage of several features of some matrices to reduce this time. For example, when a matrix is sparse, an iterative algorithm can skip many of the steps that a direct approach would necessarily follow, even if those steps are redundant given a highly structured matrix.
The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices starting in a low dimension space and moving to successively higher dimensions. When A is symmetric and we wish to solve the linear problem Ax = b, the classical iterative approach is the conjugate gradient method. If A is not symmetric, then examples of iterative solutions to the linear problem are the generalized minimal residual method and CGN. If A is symmetric, then to solve the eigenvalue and eigenvector problem we can use the Lanczos algorithm, and if A is non-symmetric, then we can use Arnoldi iteration.
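MATLAB exposes several of these as built-ins; a sketch on a sparse model problem (note that conjugate gradients, as implemented in pcg, additionally assumes the matrix is positive definite):

    n = 1000;
    A = gallery('tridiag', n, -1, 2, -1);   % sparse symmetric positive definite model problem
    b = ones(n, 1);
    x = pcg(A, b, 1e-10, 500);              % conjugate gradient method
    % For non-symmetric A, gmres plays the analogous role, and eigs
    % computes a few eigenvalues by Lanczos/Arnoldi-type iteration.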
Several programming languages use numerical linear algebra optimisation techniques and are designed to implement numerical linear algebra algorithms. These languages include MATLAB, Analytica, Maple, and Mathematica. Other programming languages which are not explicitly designed for numerical linear algebra have libraries that provide numerical linear algebra routines and optimisation; C and Fortran have packages like Basic Linear Algebra Subprograms and LAPACK, Python has the library NumPy, and Perl has the Perl Data Language. Many numerical linear algebra commands in R rely on these more fundamental libraries like LAPACK.[5] More libraries can be found in the List of numerical libraries.