Diagonally dominant matrix


In mathematics, a matrix is said to be diagonally dominant if, in every row of the matrix, the magnitude of the diagonal entry is larger than or equal to the sum of the magnitudes of all the other (non-diagonal) entries in that row, and if, in at least one row, the magnitude of the diagonal entry is strictly larger than that sum. More precisely, the matrix A is diagonally dominant if

    |a_{ii}| \ge \sum_{j \ne i} |a_{ij}| \quad \text{for all } i,

with strict inequality for at least one i, where a_{ij} denotes the entry in the ith row and jth column. If the strict inequality (>) holds for all rows (all values of i), the matrix is called strictly diagonally dominant.
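
As a concrete check, the definition translates directly into a few lines of NumPy; the following is a minimal sketch (the function name and the strict flag are illustrative choices, not a standard library API):

    import numpy as np

    def is_diagonally_dominant(A, strict=False):
        """Row diagonal dominance test following the definition above.

        strict=False: |a_ii| >= sum of the other row magnitudes in every
        row, with strict inequality in at least one row.
        strict=True:  strict inequality in every row.
        """
        A = np.asarray(A, dtype=float)
        diag = np.abs(np.diag(A))
        off = np.abs(A).sum(axis=1) - diag  # off-diagonal row sums of magnitudes
        if strict:
            return bool(np.all(diag > off))
        return bool(np.all(diag >= off) and np.any(diag > off))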

Examples

The matrix

    A = \begin{bmatrix} 3 & -2 & 1 \\ 1 & -3 & 2 \\ -1 & 2 & 4 \end{bmatrix}

gives

    |a_{11}| \ge |a_{12}| + |a_{13}| \quad \text{since} \quad |3| \ge |{-2}| + |1|
    |a_{22}| \ge |a_{21}| + |a_{23}| \quad \text{since} \quad |{-3}| \ge |1| + |2|
    |a_{33}| \ge |a_{31}| + |a_{32}| \quad \text{since} \quad |4| \ge |{-1}| + |2|

Because the magnitude of each diagonal element is greater than or equal to the sum of the magnitudes of the other elements in its row, A is diagonally dominant.


The matrix

    B = \begin{bmatrix} -2 & 2 & 1 \\ 1 & 3 & 2 \\ 1 & -2 & 0 \end{bmatrix}

gives

    |b_{11}| < |b_{12}| + |b_{13}| \quad \text{since} \quad |{-2}| < |2| + |1|
    |b_{22}| \ge |b_{21}| + |b_{23}| \quad \text{since} \quad |3| \ge |1| + |2|
    |b_{33}| < |b_{31}| + |b_{32}| \quad \text{since} \quad |0| < |1| + |{-2}|

Because |b_{11}| and |b_{33}| are less than the sums of the magnitudes of the other elements in their respective rows, B is not diagonally dominant.


The matrix

    C = \begin{bmatrix} -4 & 2 & 1 \\ 1 & 6 & 2 \\ 1 & -2 & 5 \end{bmatrix}

gives

    |c_{11}| > |c_{12}| + |c_{13}| \quad \text{since} \quad |{-4}| > |2| + |1|
    |c_{22}| > |c_{21}| + |c_{23}| \quad \text{since} \quad |6| > |1| + |2|
    |c_{33}| > |c_{31}| + |c_{32}| \quad \text{since} \quad |5| > |1| + |{-2}|

Because the magnitude of each diagonal element is strictly greater than the sum of the magnitudes of the other elements in its row, C is strictly diagonally dominant.
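
Applying the sketch from the definition section to these three matrices reproduces the classifications derived above:

    A = [[ 3, -2, 1], [ 1, -3, 2], [-1,  2, 4]]
    B = [[-2,  2, 1], [ 1,  3, 2], [ 1, -2, 0]]
    C = [[-4,  2, 1], [ 1,  6, 2], [ 1, -2, 5]]

    print(is_diagonally_dominant(A))               # True
    print(is_diagonally_dominant(B))               # False
    print(is_diagonally_dominant(C, strict=True))  # True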

Variations

The definition in the first paragraph sums entries across rows. It is therefore sometimes called row diagonal dominance. If one changes the definition to sum down columns, this is called column diagonal dominance. Matrix C in the example section is both row diagonally dominant and column diagonally dominant.
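
Since summing down the columns of a matrix is the same as summing across the rows of its transpose, the row test sketched in the definition section can be reused for the column version (again an illustrative helper, not a standard name):

    import numpy as np

    def is_column_diagonally_dominant(A, strict=False):
        # Column dominance of A is row dominance of A's transpose.
        return is_diagonally_dominant(np.asarray(A).T, strict=strict)

    print(is_column_diagonally_dominant(C, strict=True))  # True for C above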

The definition in the first paragraph requires the weak inequality (≥) in every row, with strictness in at least one. If the strict inequality (>) is required in every row, this is the strict diagonal dominance defined above; if only the weak inequality (≥) is required, this is called weak diagonal dominance. The unqualified term diagonal dominance can mean either strict or weak diagonal dominance, depending on the context.[1]

If an irreducible matrix is weakly diagonally dominant and strictly diagonally dominant in at least one row (or column), then the matrix is said to be irreducibly diagonally dominant.

Applications and properties

By the Gershgorin circle theorem, a strictly (or irreducibly) diagonally dominant matrix is non-singular. This result is known as the Levy–Desplanques theorem.[2]
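
In outline, the argument runs as follows (a standard derivation, restated in the notation of the lead): every eigenvalue \lambda of A lies in some Gershgorin disc centred at a diagonal entry, so for some row i

    |\lambda - a_{ii}| \le R_i := \sum_{j \ne i} |a_{ij}|
    \qquad\Longrightarrow\qquad
    |\lambda| \ge |a_{ii}| - |\lambda - a_{ii}| \ge |a_{ii}| - R_i > 0,

where the final inequality is exactly strict diagonal dominance. Hence 0 is never an eigenvalue, and A is non-singular.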

A Hermitian diagonally dominant matrix with real non-negative diagonal entries is positive semi-definite. If the Hermitian symmetry requirement is dropped, such a matrix is not necessarily positive semi-definite; however, the real parts of its eigenvalues are non-negative.
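
A quick numerical illustration (a hand-picked example, not taken from the article): the symmetric tridiagonal matrix below is diagonally dominant with a non-negative diagonal, and its computed eigenvalues are indeed all non-negative:

    import numpy as np

    # Real symmetric (hence Hermitian), weakly diagonally dominant,
    # with non-negative diagonal entries.
    H = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

    print(np.linalg.eigvalsh(H))  # approx. [0.586, 2.0, 3.414], all >= 0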

No (partial) pivoting is necessary for a strictly column diagonally dominant matrix when performing Gaussian elimination (LU factorization).
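
The sketch below runs Doolittle elimination with no row exchanges on the strictly column diagonally dominant matrix C from the Examples section; column dominance keeps every pivot non-zero (illustrative code, not a production routine):

    import numpy as np

    def lu_no_pivot(A):
        """Doolittle LU factorization without pivoting."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        L, U = np.eye(n), A.copy()
        for k in range(n - 1):
            for i in range(k + 1, n):
                L[i, k] = U[i, k] / U[k, k]    # pivot U[k, k] is non-zero here
                U[i, k:] -= L[i, k] * U[k, k:]
        return L, np.triu(U)                   # drop rounding noise below the diagonal

    C = np.array([[-4.0, 2.0, 1.0], [1.0, 6.0, 2.0], [1.0, -2.0, 5.0]])
    L, U = lu_no_pivot(C)
    print(np.allclose(L @ U, C))               # True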

The Jacobi and Gauss–Seidel methods for solving a linear system converge if the matrix is strictly (or irreducibly) diagonally dominant.
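
For example, a bare-bones Jacobi iteration (a sketch; the iteration count is arbitrary) converges on the strictly diagonally dominant matrix C from the Examples section:

    import numpy as np

    def jacobi(A, b, iterations=50):
        """Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k)."""
        A = np.asarray(A, dtype=float)
        D = np.diag(A)              # diagonal entries of A
        R = A - np.diagflat(D)      # off-diagonal part of A
        x = np.zeros_like(b, dtype=float)
        for _ in range(iterations):
            x = (b - R @ x) / D
        return x

    C = np.array([[-4.0, 2.0, 1.0], [1.0, 6.0, 2.0], [1.0, -2.0, 5.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(np.allclose(jacobi(C, b), np.linalg.solve(C, b)))  # True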

Many matrices that arise in finite element methods are diagonally dominant.

A slight variation on the idea of diagonal dominance is used to prove that the pairing on diagrams without loops in the Temperley–Lieb algebra is nondegenerate.[3] For a matrix with polynomial entries, one sensible definition of diagonal dominance is that the highest power of the variable appearing in each row appears only on the diagonal. (The evaluations of such a matrix at large values of the variable are diagonally dominant in the above sense.)

Notes

  1. ^ For instance, Horn and Johnson (1985, p. 349) use it to mean weak diagonal dominance.
  2. ^ Horn and Johnson, Thm 6.1.10. This result has been independently rediscovered dozens of times. A few notable ones are Lévy (1881), Desplanques (1886), Minkowski (1900), Hadamard (1903), Schur, Markov (1908), Rohrbach (1931), Gershgorin (1931), Artin (1932), Ostrowski (1937), and Furtwängler (1936). For a history of this "recurring theorem" see: Taussky, Olga (1949). "A recurring theorem on determinants". American Mathematical Monthly. 56: 672–676. doi:10.2307/2305561. Another useful history is in: Schneider, Hans (1977). "Olga Taussky-Todd's influence on matrix theory and matrix theorists". Linear and Multilinear Algebra. 5 (3): 197–224. doi:10.1080/03081087708817197.
  3. ^ K.H. Ko and L. Smolinski (1991). "A combinatorial matrix in 3-manifold theory". Pacific J. Math. 149: 319–336.

References

  • Gene H. Golub & Charles F. Van Loan. Matrix Computations, 1996. ISBN 0-8018-5414-8
  • Roger A. Horn & Charles R. Johnson. Matrix Analysis, Cambridge University Press, 1985. ISBN 0-521-38632-2 (paperback).
  • Kaw, Autar (2008), Introduction to Matrix Algebra (1st ed.), www.autarkaw.com, ISBN 0-615-25126-4.