Example of a sparse matrix
The above sparse matrix contains only 9 non-zero elements and 26 zero elements; its sparsity is 74% and its density is 26%.

In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero.[1] There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse, but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense.[1] The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is sometimes referred to as the sparsity of the matrix.
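The following sketch (Python with NumPy assumed; the small matrix is illustrative, not the one pictured above) computes sparsity and density directly from the element counts:

```python
import numpy as np

# An illustrative 4 x 4 matrix with 4 non-zero elements.
A = np.array([
    [5, 0, 0, 0],
    [0, 8, 0, 0],
    [0, 0, 3, 0],
    [0, 6, 0, 0],
])

total = A.size                        # m * n = 16 elements in total
nonzero = np.count_nonzero(A)         # 4 non-zero elements
sparsity = (total - nonzero) / total  # fraction of zero-valued elements
density = nonzero / total             # fraction of non-zero elements
print(sparsity, density)              # 0.75 0.25
```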
Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory and numerical analysis, which typically have a low density of significant data or connections. Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations.
When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices,[2] as they are common in the machine learning field.[3] Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices, as processing and memory are wasted on the zeros. Sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.
A band matrix is a special class of sparse matrix where the non-zero elements are concentrated near the main diagonal. A band matrix is characterised by its lower and upper bandwidths, which refer to the number of diagonals below and above (respectively) the main diagonal between which all of the non-zero entries are contained.
Formally, the lower bandwidth of a matrix A is the smallest number p such that the entry a_{i,j} vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest number p such that a_{i,j} = 0 whenever i < j − p (Golub & Van Loan 1996, §1.2.1). For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3. Notice that zeros are represented with dots for clarity.
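A short sketch (Python with NumPy assumed; bandwidths is a hypothetical helper, not a library function) that computes the lower and upper bandwidth directly from this definition:

```python
import numpy as np

def bandwidths(A):
    """Lower and upper bandwidth of A, per the definition above."""
    rows, cols = np.nonzero(A)
    if rows.size == 0:
        return 0, 0
    lower = max(0, int(np.max(rows - cols)))  # largest i - j over non-zero entries
    upper = max(0, int(np.max(cols - rows)))  # largest j - i over non-zero entries
    return lower, upper

# A tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1.
T = np.diag([1, 2, 3, 4]) + np.diag([5, 6, 7], k=1) + np.diag([8, 9, 10], k=-1)
print(bandwidths(T))  # (1, 1)
```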
Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; alternatively, one can sometimes apply dense-matrix algorithms and gain efficiency simply by looping over a reduced number of indices.
By rearranging the rows and columns of a matrix A it may be possible to obtain a matrix A′ with a lower bandwidth. A number of algorithms are designed for bandwidth minimization.
A diagonal matrix is the extreme case of a banded matrix, with zero upper and lower bandwidth. A diagonal matrix can be stored efficiently by storing just the entries in the main diagonal as a one-dimensional array, so a diagonal n × n matrix requires only n entries in memory.
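A minimal sketch of this storage scheme (Python with NumPy assumed; diag_get is a hypothetical accessor used only for illustration):

```python
import numpy as np

d = np.array([4.0, 1.0, 3.0])  # the n diagonal entries, stored as a one-dimensional array

def diag_get(d, i, j):
    """Element a[i, j] of the diagonal matrix represented by d."""
    return d[i] if i == j else 0.0

print(diag_get(d, 1, 1))  # 1.0
print(diag_get(d, 0, 2))  # 0.0
```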
A symmetric sparse matrix arises as the adjacency matrix of an undirected graph; it can be stored efficiently as an adjacency list.
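For illustration (Python assumed), a symmetric 0/1 matrix read as the adjacency matrix of an undirected graph can be condensed into an adjacency list:

```python
# Symmetric adjacency matrix of a small undirected graph.
adj_matrix = [
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
]

# Adjacency list: only the neighbours of each vertex are stored.
adjacency_list = {i: {j for j, v in enumerate(row) if v}
                  for i, row in enumerate(adj_matrix)}
print(adjacency_list)  # {0: {1, 3}, 1: {0, 2}, 2: {1}, 3: {0}}
```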
A block-diagonal matrix consists of sub-matrices along its diagonal blocks. A block-diagonal matrix A has the form

\[
A = \begin{pmatrix}
A_1 & & & \\
 & A_2 & & \\
 & & \ddots & \\
 & & & A_n
\end{pmatrix},
\]

where A_k is a square matrix for all k = 1, ..., n.
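As a sketch (Python with SciPy assumed), such a matrix can be assembled from its diagonal blocks with scipy.sparse.block_diag:

```python
import numpy as np
from scipy.sparse import block_diag

A1 = np.array([[1, 2], [3, 4]])
A2 = np.array([[5]])
A3 = np.array([[6, 7], [8, 9]])

# 5 x 5 block-diagonal matrix with A1, A2, A3 along the diagonal.
A = block_diag((A1, A2, A3), format="csr")
print(A.toarray())
```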
The fill-in of a matrix consists of those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.
Other methods than the Cholesky decomposition are also in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can differ between methods, and symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky decomposition to compute worst-case fill-in.
Both iterative and direct methods exist for sparse matrix solving.
Iterative methods, such as the conjugate gradient method and GMRES, utilize fast computations of matrix-vector products Ax, where the matrix A is sparse. The use of preconditioners can significantly accelerate convergence of such iterative methods.
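A minimal sketch (Python with SciPy assumed; the one-dimensional Poisson matrix and the Jacobi preconditioner are illustrative choices) of solving a sparse system with the conjugate gradient method:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 100
# Sparse symmetric positive-definite tridiagonal matrix (1-D Poisson operator).
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner, supplied as a linear operator.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = cg(A, b, M=M)
print(info)                       # 0 indicates convergence
print(np.linalg.norm(A @ x - b))  # residual norm is small
```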
A matrix is typically stored as a two-dimensional array. Each entry in the array represents an element a_{i,j} of the matrix and is accessed by the two indices i and j. Conventionally, i is the row index, numbered from top to bottom, and j is the column index, numbered from left to right. For an m × n matrix, the amount of memory required to store the matrix in this format is proportional to m × n (disregarding the fact that the dimensions of the matrix also need to be stored).
In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously.
Formats can be divided into two groups: those that support efficient modification, such as DOK (dictionary of keys), LIL (list of lists) and COO (coordinate list), which are typically used to construct the matrices; and those that support efficient access and matrix operations, such as CSR (compressed sparse row) and CSC (compressed sparse column).
DOK consists of a dictionary that maps (row, column) pairs to the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.[4]
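Conceptually (Python assumed; this is a plain-dictionary sketch of the idea, not a particular library's implementation):

```python
# DOK: a dictionary mapping (row, column) pairs to values; missing keys are zero.
dok = {}
dok[(0, 0)] = 5   # entries can be inserted in any order
dok[(3, 1)] = 6
dok[(1, 1)] = 8
dok[(2, 2)] = 3

def get(dok, i, j):
    return dok.get((i, j), 0)  # absent entries read as zero

print(get(dok, 1, 1))  # 8
print(get(dok, 0, 3))  # 0
```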
LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format good for incremental matrix construction.[5]
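A rough sketch of the idea (Python assumed; lil_set is a hypothetical helper that keeps each row sorted by column index):

```python
import bisect

# One list per row; each entry is a (column, value) pair kept sorted by column.
rows = [[(0, 5)], [(1, 8)], [(2, 3)], [(1, 6)]]

def lil_set(rows, i, j, v):
    """Insert or update element (i, j), keeping row i sorted by column index."""
    row = rows[i]
    cols = [c for c, _ in row]
    k = bisect.bisect_left(cols, j)
    if k < len(row) and row[k][0] == j:
        row[k] = (j, v)
    else:
        row.insert(k, (j, v))

lil_set(rows, 3, 3, 7)
print(rows[3])  # [(1, 6), (3, 7)]
```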
COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. This is another format that is good for incremental matrix construction.[6]
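A small sketch (Python with SciPy assumed) that builds a matrix from (row, column, value) triples in COO form and then converts it to CSR for processing, as suggested above:

```python
import numpy as np
from scipy.sparse import coo_matrix

row = np.array([0, 1, 2, 3])
col = np.array([0, 1, 2, 1])
val = np.array([5, 8, 3, 6])

A = coo_matrix((val, (row, col)), shape=(4, 4))
A_csr = A.tocsr()          # convert to a format better suited to arithmetic
print(A_csr.toarray())
```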
The compressed sparse row (CSR) or compressed row storage (CRS) or Yale format represents a matrix M by three (one-dimensional) arrays, that respectively contain the nonzero values, the extents of rows, and the column indices. It is similar to COO, but compresses the row indices, hence the name. This format allows fast row access and matrix-vector multiplications (Mx). The CSR format has been in use since at least the mid-1960s, with the first complete description appearing in 1967.[7]
The CSR format stores a sparse m × n matrix M in row form using three (one-dimensional) arrays (V, COL_INDEX, ROW_INDEX). Let NNZ denote the number of nonzero entries in M. (Note that zero-based indices shall be used here.) The arrays V and COL_INDEX are of length NNZ and contain the non-zero values and the column indices of those values, respectively, while ROW_INDEX has length m + 1 and encodes the index in V and COL_INDEX where each row starts.
For example, the matrix

\[
\begin{pmatrix}
5 & 0 & 0 & 0 \\
0 & 8 & 0 & 0 \\
0 & 0 & 3 & 0 \\
0 & 6 & 0 & 0
\end{pmatrix}
\]

is a 4 × 4 matrix with 4 nonzero elements, hence
V         = [ 5 8 3 6 ]
COL_INDEX = [ 0 1 2 1 ]
ROW_INDEX = [ 0 1 2 3 4 ]
assuming a zero-indexed language.
To extract a row, we first define:
row_start = ROW_INDEX[row]
row_end = ROW_INDEX[row + 1]
Then we take slices from V and COL_INDEX starting at row_start and ending at row_end.
To extract row 1 (the second row) of this matrix we set row_start = 1 and row_end = 2. Then we take the slices V[1:2] = [8] and COL_INDEX[1:2] = [1]. We now know that in row 1 we have one element at column 1 with value 8.
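The extraction procedure above can be written as a short sketch (Python assumed; csr_row is a hypothetical helper operating on the three arrays of this example):

```python
V = [5, 8, 3, 6]
COL_INDEX = [0, 1, 2, 1]
ROW_INDEX = [0, 1, 2, 3, 4]

def csr_row(row):
    """Return the (column, value) pairs stored in the given row."""
    row_start = ROW_INDEX[row]
    row_end = ROW_INDEX[row + 1]
    return list(zip(COL_INDEX[row_start:row_end], V[row_start:row_end]))

print(csr_row(1))  # [(1, 8)] -> row 1 has one element, value 8 in column 1
```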
In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only when NNZ < (m (n − 1) − 1) / 2.
Another example, the matrix

\[
\begin{pmatrix}
10 & 20 & 0 & 0 & 0 & 0 \\
0 & 30 & 0 & 40 & 0 & 0 \\
0 & 0 & 50 & 60 & 70 & 0 \\
0 & 0 & 0 & 0 & 0 & 80
\end{pmatrix}
\]

is a 4 × 6 matrix (24 entries) with 8 nonzero elements, so
V         = [ 10 20 30 40 50 60 70 80 ]
COL_INDEX = [ 0 1 1 3 2 3 4 5 ]
ROW_INDEX = [ 0 2 4 7 8 ]
The whole is stored as 21 entries: 8 in V, 8 in COL_INDEX, and 5 in ROW_INDEX.
ROW_INDEX splits the array V into rows: (10, 20) (30, 40) (50, 60, 70) (80), indicating the index of V (and COL_INDEX) where each row starts and ends; COL_INDEX aligns the values with their columns: (10, 20, ...) (0, 30, 0, 40, ...) (0, 0, 50, 60, 70, 0) (0, 0, 0, 0, 0, 80). Note that in this format, the first value of ROW_INDEX is always zero and the last is always NNZ, so they are in some sense redundant (although in programming languages where the array length needs to be explicitly stored, NNZ would not be redundant). Nonetheless, this does avoid the need to handle an exceptional case when computing the length of each row, as it guarantees the formula ROW_INDEX[i + 1] − ROW_INDEX[i] works for any row i. Moreover, the memory cost of this redundant storage is likely insignificant for a sufficiently large matrix.
The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format combines ROW_INDEX and COL_INDEX into a single array and handles the diagonal of the matrix separately.[9]
For logical adjacency matrices, the data array can be omitted, as the existence of an entry in the row array is sufficient to model a binary adjacency relation.
It is likely known as the Yale format because it was proposed in the 1977 Yale Sparse Matrix Package report from the Department of Computer Science at Yale University.[10]
CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. For example, CSC is (val, row_ind, col_ptr), where val is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind is the row indices corresponding to the values; and col_ptr is the list of val indices where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, column slicing, and matrix-vector products. This is the traditional format for specifying a sparse matrix in MATLAB (via the sparse function).
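For illustration (Python with SciPy assumed), SciPy's csc_matrix exposes the three CSC arrays as the attributes data (val), indices (row_ind) and indptr (col_ptr):

```python
import numpy as np
from scipy.sparse import csc_matrix

A = csc_matrix(np.array([
    [5, 0, 0, 0],
    [0, 8, 0, 0],
    [0, 0, 3, 0],
    [0, 6, 0, 0],
]))

print(A.data)     # [5 8 6 3]    values read top-to-bottom, then left-to-right
print(A.indices)  # [0 1 3 2]    row index of each stored value
print(A.indptr)   # [0 1 3 4 4]  where each column starts in data / indices
```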
Many software libraries support sparse matrices, and provide solvers for sparse matrix equations. The following are open-source:
The term sparse matrix was possibly coined by Harry Markowitz, who initiated some pioneering work but then left the field.[11]
scipy.sparse.dok_matrix
scipy.sparse.lil_matrix
scipy.sparse.coo_matrix