Sparse Matrix
In numerical analysis and computer science, a sparse matrix or sparse array is a matrix in which most of the elements are zero. By contrast, if most of the elements are nonzero, then the matrix is considered dense. The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is called the sparsity of the matrix (which is equal to 1 minus the density of the matrix).

[Figure: Example of a sparse matrix. The matrix shown contains only 9 nonzero elements, with 26 zero elements. Its sparsity is 74%, and its density is 26%.]
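As a quick numeric check of the figures in the caption, sparsity and density follow directly from the zero count; here is a minimal NumPy sketch (the 5 × 7 layout below is an arbitrary stand-in, not the matrix from the figure):

```python
import numpy as np

# An arbitrary 5 x 7 matrix with 9 nonzero and 26 zero elements,
# matching the counts quoted in the caption above.
M = np.zeros((5, 7))
M[0, 0] = M[1, 2] = M[1, 3] = M[2, 4] = M[2, 5] = 1.0
M[3, 1] = M[3, 6] = M[4, 0] = M[4, 3] = 1.0

total = M.size                    # m * n = 35
zeros = np.count_nonzero(M == 0)  # 26
sparsity = zeros / total          # 26 / 35, about 74%
density = 1 - sparsity            # 9 / 35, about 26%
print(f"sparsity = {sparsity:.0%}, density = {density:.0%}")
```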
Conceptually, sparsity corresponds to systems that are loosely coupled. Consider a
line of balls connected by springs from one to the next: this is a sparse system as
only adjacent balls are coupled. By contrast, if the same line of balls had springs
connecting each ball to all other balls, the system would correspond to a dense
matrix. The concept of sparsity is useful in combinatorics and application areas such
as network theory, which have a low density of significant data or connections.
Contents
Storing a sparse matrix
Dictionary of keys (DOK)
List of lists (LIL)
Coordinate list (COO)
Compressed sparse row (CSR, CRS or Yale format)
Compressed sparse column (CSC or CCS)
Special structure
Banded
Diagonal
Symmetric
Reducing fill-in
Solving sparse matrix equations
Software
History
See also
Notes
References
Further reading
Storing a sparse matrix

In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic dense-array approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously.
Formats can be divided into two groups:

Those that support efficient modification, such as DOK (Dictionary of keys), LIL (List of lists), or COO (Coordinate list). These are typically used to construct the matrices.
Those that support efficient access and matrix operations, such as CSR (Compressed Sparse Row) or CSC (Compressed Sparse Column). A sketch of the construct-then-convert workflow follows this list.
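A hedged sketch of this two-stage workflow, using the SciPy classes cited in the notes (the 4 × 4 matrix chosen here is the same one worked through in the CSR example below):

```python
import scipy.sparse as sp

# Build in DOK (dictionary of keys): cheap to change entry by entry.
m = sp.dok_matrix((4, 4))
m[1, 0] = 5
m[1, 1] = 8
m[2, 2] = 3
m[3, 1] = 6

# Convert once construction is done: CSR supports fast row access
# and matrix-vector products.
csr = m.tocsr()
print(csr.data)     # [5. 8. 3. 6.]  -- the A array of the CSR format
print(csr.indptr)   # [0 0 2 3 4]    -- the IA array
print(csr.indices)  # [0 1 2 1]      -- the JA array
```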
Compressed sparse row (CSR, CRS or Yale format)

The CSR format stores a sparse m × n matrix M in row form using three (one-dimensional) arrays (A, IA, JA). Let NNZ denote the number of nonzero entries in M. (Note that zero-based indices shall be used here.)

The array A is of length NNZ and holds all the nonzero entries of M in left-to-right top-to-bottom ("row-major") order.
The array IA is of length m + 1. It is defined by this recursive definition:

IA[0] = 0
IA[i] = IA[i − 1] + (number of nonzero elements on the (i − 1)-th row in the original matrix)
Thus, the first m elements of IA store the index into A of the first nonzero element in each row of M, and the last element IA[m] stores NNZ, the number of elements in A, which can also be thought of as the index in A of the first element of a phantom row just beyond the end of the matrix M. The values of the i-th row of the original matrix are read from the elements A[IA[i]] to A[IA[i + 1] − 1] (inclusive on both ends), i.e. from the start of one row to the last index just before the start of the next.[5]

The third array, JA, contains the column index in M of each element of A and hence is of length NNZ as well.
[Figure: Illustration of row-major order compared to column-major order.]

For example, the matrix

( 0 0 0 0 )
( 5 8 0 0 )
( 0 0 3 0 )
( 0 6 0 0 )

is a 4 × 4 matrix with 4 nonzero elements, hence

A  = [ 5 8 3 6 ]
IA = [ 0 0 2 3 4 ]
JA = [ 0 1 2 1 ]

So, in array JA, the element "5" from A has column index 0, "8" and "6" have index 1, and element "3" has index 2.
In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only when NNZ < (m(n − 1) − 1) / 2. As another example, the matrix

( 10 20  0  0  0  0 )
(  0 30  0 40  0  0 )
(  0  0 50 60 70  0 )
(  0  0  0  0  0 80 )

is a 4 × 6 matrix (24 entries) with 8 nonzero elements, so

A  = [ 10 20 30 40 50 60 70 80 ]
IA = [ 0 2 4 7 8 ]
JA = [ 0 1 1 3 2 3 4 5 ]
IA splits the array A into rows: (10, 20) (30, 40) (50, 60, 70) (80);
JA aligns values in columns: (10, 20, ...) (0, 30, 0, 40, ...) (0, 0, 50, 60, 70, 0) (0, 0, 0, 0, 0, 80).
Note that in this format, the first value of IA is always zero and the last is always NNZ , so they are in some sense redundant
(although in programming languages where the array length needs to be explicitly stored, NNZ would not be redundant).
Nonetheless, this does avoid the need to handle an exceptional case when computing the length of each row, as it guarantees the
formula IA[i + 1] − IA[i] works for any row i. Moreover, the memory cost of this redundant storage is likely insignificant for a
sufficiently large matrix.
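As a minimal pure-Python sketch of this layout, the (A, IA, JA) arrays of the second example can be expanded back into dense rows using exactly the IA[i + 1] − IA[i] rule:

```python
# CSR arrays of the 4 x 6 example above.
A  = [10, 20, 30, 40, 50, 60, 70, 80]
IA = [0, 2, 4, 7, 8]
JA = [0, 1, 1, 3, 2, 3, 4, 5]
n_cols = 6

dense = []
for i in range(len(IA) - 1):           # len(IA) - 1 == number of rows
    row = [0] * n_cols
    for k in range(IA[i], IA[i + 1]):  # the slice of A/JA owned by row i
        row[JA[k]] = A[k]
    dense.append(row)

for row in dense:
    print(row)
# [10, 20, 0, 0, 0, 0]
# [0, 30, 0, 40, 0, 0]
# [0, 0, 50, 60, 70, 0]
# [0, 0, 0, 0, 0, 80]
```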
The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format achieves a further compression by combining IA and JA into a single array.[6]

For logical adjacency matrices, the data array can be omitted, as the existence of an entry in the row array is sufficient to model a binary adjacency relation.
Special structure
Banded
An important special type of sparse matrices is the band matrix, defined as follows. The lower bandwidth of a matrix A is the smallest number p such that the entry a_{i,j} vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest number p such that a_{i,j} = 0 whenever i < j − p (Golub & Van Loan 1996, §1.2.1). For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3; zeros are represented with dots and nonzero entries with X for clarity:

( X X X X . . . )
( X X X X X . . )
( X X X X X X . )
( X X X X X X X )
( . X X X X X X )
( . . X X X X X )
( . . . X X X X )
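A small sketch that computes both quantities straight from the definition (bandwidths is a hypothetical helper, not a library routine):

```python
import numpy as np

def bandwidths(M):
    """Return (lower, upper) bandwidth per the definition above,
    assuming M has at least one nonzero entry."""
    i, j = np.nonzero(M)
    lower = int(max(np.max(i - j), 0))  # smallest p with M[i,j] = 0 whenever i > j + p
    upper = int(max(np.max(j - i), 0))  # smallest p with M[i,j] = 0 whenever i < j - p
    return lower, upper

# A tridiagonal matrix has lower and upper bandwidth 1.
T = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
print(bandwidths(T))  # (1, 1)
```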
Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler
algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping
over a reduced number of indices.
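The last point can be made concrete with a sketch (band_matvec is a hypothetical helper; p and q are the lower and upper bandwidths): a matrix–vector product that touches only the band instead of all n² entries:

```python
import numpy as np

def band_matvec(M, x, p, q):
    """Compute y = M @ x assuming all nonzeros of M lie within
    lower bandwidth p and upper bandwidth q."""
    n = M.shape[0]
    y = np.zeros(n)
    for i in range(n):
        # Only columns j with i - p <= j <= i + q can be nonzero.
        lo, hi = max(0, i - p), min(n, i + q + 1)
        y[i] = M[i, lo:hi] @ x[lo:hi]
    return y
```

For a tridiagonal matrix (p = q = 1) this reads at most 3n entries rather than n².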
By rearranging the rows and columns of a matrix A it may be possible to obtain a matrix A′ with a lower bandwidth. A number of algorithms are designed for bandwidth minimization.
Diagonal
A very efficient structure for an extreme case of band matrices, the diagonal matrix, is to store just the entries in the main diagonal as
a one-dimensional array, so a diagonal n × n matrix requires only n entries.
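A minimal NumPy sketch of this representation: the length-n array stands in for the whole matrix, and multiplication by a vector reduces to elementwise scaling:

```python
import numpy as np

d = np.array([2.0, -1.0, 4.0])  # represents the 3 x 3 matrix diag(d)
x = np.array([1.0, 1.0, 1.0])

y = d * x                       # same result as np.diag(d) @ x, in O(n)
print(y)                        # [ 2. -1.  4.]
```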
Symmetric
A symmetric sparse matrix arises as the adjacency matrix of an undirected graph; it can be stored efficiently as an adjacency list.
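One common realization, sketched here as an assumption rather than a prescribed layout, keeps a neighbor set per vertex:

```python
# Adjacency-list storage for the adjacency matrix of an undirected
# graph on vertices 0..3: entry (i, j) is nonzero iff j is in adj[i].
adj = {
    0: {1, 2},
    1: {0},
    2: {0, 3},
    3: {2},
}

# Symmetry of the matrix corresponds to: j in adj[i] <=> i in adj[j].
assert all(i in adj[j] for i, neighbors in adj.items() for j in neighbors)
```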
Reducing fill-in
The fill-in of a matrix consists of those entries that change from an initial zero to a nonzero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.
Methods other than the Cholesky decomposition are also in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can differ between methods, and symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst-case fill-in.
Solving sparse matrix equations

Iterative methods, such as the conjugate gradient method and GMRES, utilize fast computations of matrix–vector products, where the matrix is sparse. The use of preconditioners can significantly accelerate the convergence of such iterative methods.
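A minimal SciPy sketch of this approach (the tridiagonal system below is an arbitrary symmetric positive-definite example, not taken from the text):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Sparse symmetric positive-definite system: 1-D Poisson stencil.
n = 1000
main = 2.0 * np.ones(n)
off = -np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# Each CG iteration costs roughly one sparse matrix-vector product.
x, info = cg(A, b)
print(info)                       # 0 indicates convergence
print(np.linalg.norm(A @ x - b))  # residual norm should be small
```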
Software
Several software libraries support sparse matrices and provide solvers for sparse matrix equations; many of these libraries are open-source.
History
The term sparse matrix was possibly coined by Harry Markowitz, who triggered some pioneering work but then left the field.[7]
See also
Matrix representation
Pareto principle
Ragged matrix
Skyline matrix
Sparse graph code
Sparse file
Harwell-Boeing file format
Matrix Market exchange formats
Notes
1. See scipy.sparse.dok_matrix (http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.dok_matrix.html)
2. See scipy.sparse.lil_matrix (http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.lil_matrix.html)
3. See scipy.sparse.coo_matrix (http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.coo_matrix.html)
4. Buluç, Aydın; Fineman, Jeremy T.; Frigo, Matteo; Gilbert, John R.; Leiserson, Charles E. (2009). Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks (http://gauss.cs.ucsb.edu/~aydin/csb2009.pdf) (PDF). ACM Symp. on Parallelism in Algorithms and Architectures. CiteSeerX 10.1.1.211.5256 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.211.5256).
5. netlib.org (http://netlib.org/linalg/html_templates/node91.html)
6. Bank, Randolph E.; Douglas, Craig C. (1993), "Sparse Matrix Multiplication Package (SMMP)" (http://www.mgnet.org/~douglas/Preprints/pub0034.pdf) (PDF), Advances in Computational Mathematics, 1
7. pp. 9–10 in Oral history interview with Harry M. Markowitz (http://purl.umn.edu/107467)
References
Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins. ISBN 978-0-8018-5414-9.
Stoer, Josef; Bulirsch, Roland (2002). Introduction to Numerical Analysis (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-95452-3.
Tewarson, Reginald P. (May 1973). Sparse Matrices (Part of the Mathematics in Science & Engineering series). Academic Press Inc. (This book, by a professor at the State University of New York at Stony Brook, was the first book exclusively dedicated to Sparse Matrices. Graduate courses using this as a textbook were offered at that University in the early 1980s).
Bank, Randolph E.; Douglas, Craig C. "Sparse Matrix Multiplication Package" (PDF).
Pissanetzky, Sergio (1984). Sparse Matrix Technology. Academic Press.
Snay, Richard A. (1976). "Reducing the profile of sparse symmetric matrices". Bulletin Géodésique. 50 (4): 341. doi:10.1007/BF02521587. Also NOAA Technical Memorandum NOS NGS-4, National Geodetic Survey, Rockville, MD.
Further reading
Gibbs, Norman E.; Poole, William G.; Stockmeyer, Paul K. (1976). "A comparison of several bandwidth and profile reduction algorithms". ACM Transactions on Mathematical Software. 2 (4): 322–330. doi:10.1145/355705.355707.
Gilbert, John R.; Moler, Cleve; Schreiber, Robert (1992). "Sparse matrices in MATLAB: Design and Implementation". SIAM Journal on Matrix Analysis and Applications. 13 (1): 333–356. CiteSeerX 10.1.1.470.1054. doi:10.1137/0613024.
Sparse Matrix Algorithms Research at the University of Florida, containing the UF sparse matrix collection.
SMALL project: an EU-funded project on sparse models, algorithms and dictionary learning for large-scale data.