Sparse matrix

In the mathematical subfield of numerical analysis, a sparse matrix is a matrix populated primarily with zeros.

Sparsity is a concept, useful in combinatorics and in application areas such as network theory, describing a low density of significant data or connections. The concept is amenable to quantitative reasoning, and it is also noticeable in everyday life.

Huge sparse matrices often appear in science and engineering when solving problems based on linear models.

When storing and manipulating sparse matrices on a computer, it is often necessary to modify the standard algorithms to take advantage of the sparse structure of the matrix. Sparse data is by its nature easily compressed, which can yield enormous savings in memory usage. More importantly, manipulating huge sparse matrices with the standard algorithms may be impossible due to their sheer size. What counts as huge depends on the hardware and on the computer programs available to manipulate the matrix.

Definitions

Given a sparse N×M matrix A, the row bandwidth for the n-th row is defined as

:<math>b_n(\mathbf{A}) := \min_{1 \le m \le M} \lbrace m \mid a_{n, m} \neq 0 \rbrace</math>

The bandwidth for the matrix is defined as

:<math>B(\mathbf{A}) := \max_{1 \le n \le N} b_n(\mathbf{A})</math>

Example

A bitmap image having only two colors, with one of them dominant (say, a file that stores a handwritten signature), can be encoded as a sparse matrix that contains only the row and column numbers of the pixels with the non-dominant color.
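As an illustration, the following Python sketch (the function name and representation are illustrative, not from any standard library) encodes such a two-color bitmap, given as a list of rows of 0s and 1s with 0 the dominant color, as a list of (row, column) pairs:

def encode_bitmap(bitmap):
    """Encode a two-color bitmap (0 = dominant color, 1 = non-dominant)
    as a list of (row, column) pairs for the non-dominant pixels."""
    return [(i, j)
            for i, row in enumerate(bitmap)
            for j, pixel in enumerate(row)
            if pixel != 0]

# A 4x6 "signature" stroke: only 5 of the 24 entries need to be stored.
bitmap = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 1],
]
print(encode_bitmap(bitmap))  # [(1, 1), (1, 2), (2, 3), (2, 4), (3, 5)]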

Storing a sparse matrix

The naive data structure for a matrix is a two-dimensional array. Each entry in the array represents an element a_{i,j} of the matrix and can be accessed by the two indices i and j. For an n×m matrix, at least (n*m)/8 bytes are needed to represent the matrix when assuming 1 bit for each entry.
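For example, under that 1-bit-per-entry assumption, a 100,000×100,000 matrix already needs (100,000*100,000)/8 = 1.25×10^9 bytes, about 1.25 GB, even if only a tiny fraction of its entries is significant.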

A sparse matrix contains many (often mostly) zero entries. The basic idea when storing sparse matrices is to store only the non-zero entries, as opposed to storing all entries. Depending on the number and distribution of the non-zero entries, different data structures can be used that yield huge savings in memory when compared to the naive approach.
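A minimal Python sketch of this idea (the class name and interface are illustrative only) is a dictionary keyed by (row, column) pairs that stores only the non-zero entries and reads back zero for everything else:

class DOKMatrix:
    """Dictionary-of-keys sparse matrix: only non-zero entries are stored."""
    def __init__(self, n, m):
        self.shape = (n, m)
        self.data = {}                     # maps (i, j) -> non-zero value

    def __setitem__(self, key, value):
        if value != 0:
            self.data[key] = value
        else:
            self.data.pop(key, None)       # writing a zero removes the entry

    def __getitem__(self, key):
        return self.data.get(key, 0)       # absent entries read as zero

A = DOKMatrix(1000, 1000)
A[3, 7] = 4.5
print(A[3, 7], A[0, 0])                    # 4.5 0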

One example of such a sparse matrix format is the (old) Yale Sparse Matrix Format [1]. It stores an initial sparse N×N matrix M in row form using three arrays: A, IA, and JA. NZ denotes the number of non-zero entries in matrix M. The array A is of length NZ and holds all non-zero entries of M. The array IA stores at IA(i) the position of the first element of row i in the sparse array A; the length of row i is thus IA(i+1) - IA(i), so IA needs to be of length N + 1. The array JA stores the column index of each element A(j) and is of length NZ.
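The following Python sketch builds these three arrays from a dense matrix, using zero-based indexing rather than the one-based A(i), IA(i) notation above (the function name is illustrative):

def to_yale(M):
    """Build the (old) Yale format arrays A, IA, JA for a dense matrix M,
    given as a list of rows. Zero-based indexing is used throughout."""
    A, IA, JA = [], [0], []
    for row in M:
        for j, value in enumerate(row):
            if value != 0:
                A.append(value)            # non-zero entries, row by row
                JA.append(j)               # column index of each entry of A
        IA.append(len(A))                  # row i occupies A[IA[i]:IA[i+1]]
    return A, IA, JA

M = [[1, 2, 0],
     [0, 0, 3],
     [4, 0, 5]]
A, IA, JA = to_yale(M)
print(A)   # [1, 2, 3, 4, 5]
print(IA)  # [0, 2, 3, 5]
print(JA)  # [0, 1, 2, 0, 2]

The length of row i is then IA[i+1] - IA[i], matching the description above.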

Diagonal matrix

A very efficient structure for a diagonal matrix is to store just the entries of the main diagonal as a one-dimensional array: for an n×n matrix, only n/8 bytes are needed when assuming 1 bit for each entry.
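A sketch of this scheme (illustrative names, zero-based indices):

class DiagonalMatrix:
    """An n x n diagonal matrix stored as a one-dimensional array
    holding only the main diagonal."""
    def __init__(self, diagonal):
        self.diagonal = list(diagonal)

    def __getitem__(self, key):
        i, j = key
        return self.diagonal[i] if i == j else 0   # off-diagonal entries are zero

D = DiagonalMatrix([2, 5, 7])
print(D[1, 1], D[0, 2])                            # 5 0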

Reducing the bandwidth

The Cuthill-McKee algorithm can be used to reduce the bandwidth of a sparse symmetric matrix. There are, however, matrices for which the reverse Cuthill-McKee algorithm performs better.
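A minimal sketch of the (reverse) Cuthill-McKee ordering for a symmetric sparsity pattern, given as an adjacency mapping (illustrative; production implementations choose starting vertices more carefully):

from collections import deque

def cuthill_mckee(adjacency, reverse=True):
    """Return a (reverse) Cuthill-McKee ordering of the vertices.
    adjacency maps each vertex to the set of its neighbors, i.e. the
    off-diagonal non-zero pattern of a symmetric matrix."""
    degree = {v: len(nbrs) for v, nbrs in adjacency.items()}
    visited, order = set(), []
    # Handle every connected component, starting at a minimum-degree vertex.
    for start in sorted(adjacency, key=degree.get):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            # Enqueue unvisited neighbors in order of increasing degree.
            for w in sorted(adjacency[v] - visited, key=degree.get):
                visited.add(w)
                queue.append(w)
    return list(reversed(order)) if reverse else order

# Arrow-shaped pattern: vertex 0 is coupled to every other vertex.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(cuthill_mckee(adj))  # e.g. [4, 3, 2, 0, 1]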

The National Geodetic Survey (NGS) uses Dr. Richard Snay's "Banker's" algorithm, because it performs better on the realistic sparse matrices encountered in geodesy work.

There are many other methods in use.

Reducing the fill-in

The fill-in of a matrix consists of those entries which change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns of the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.
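The worst-case fill-in of a symmetric pattern can be computed symbolically by playing the "elimination game" on its graph, sketched below (illustrative; real symbolic factorization codes use far more efficient data structures):

def symbolic_fill_in(adjacency, order):
    """Eliminate vertices in the given order; eliminating a vertex connects
    all of its remaining neighbors pairwise, and every edge added this way
    corresponds to a fill-in entry of the Cholesky factor."""
    adj = {v: set(nbrs) for v, nbrs in adjacency.items()}   # work on a copy
    fill = set()
    for v in order:
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u in nbrs:                     # connect remaining neighbors
            for w in nbrs:                 # pairwise; new edges are fill-in
                if u < w and w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
                    fill.add((u, w))
    return fill

# Arrow pattern: eliminating the hub first causes maximal fill-in,
# eliminating it last causes none.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(len(symbolic_fill_in(adj, [0, 1, 2, 3, 4])))   # 6
print(len(symbolic_fill_in(adj, [4, 3, 2, 1, 0])))   # 0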

Methods other than the Cholesky decomposition are also in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can differ from method to method, and symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst-case fill-in.

References

  • Randolph E. Bank and Craig C. Douglas, "Sparse Matrix Multiplication Package" [1]
  • Sergio Pissanetzky (1984). Sparse Matrix Technology. Academic Press.

Further reading

  • Norman E. Gibbs, William G. Poole, Jr., and Paul K. Stockmeyer (1976). "A comparison of several bandwidth and profile reduction algorithms". ACM Transactions on Mathematical Software 2 (4): 322–330.
  • Sparse Matrix Algorithms Research at the University of Florida, containing the UF sparse matrix collection.