# M-matrix

In mathematics, especially linear algebra, an M-matrix is a Z-matrix with eigenvalues whose real parts are nonnegative. The set of non-singular M-matrices is a subset of the class of P-matrices, and also of the class of inverse-positive matrices (i.e. matrices with inverses belonging to the class of positive matrices).[1] The name M-matrix was seemingly originally chosen by Alexander Ostrowski in reference to Hermann Minkowski, who proved that if a Z-matrix has all of its row sums positive, then the determinant of that matrix is positive.[2]

## Characterizations

An M-matrix is commonly defined as follows:

Definition: Let A be an n × n real Z-matrix. That is, A = (aᵢⱼ) where aᵢⱼ ≤ 0 for all i ≠ j, 1 ≤ i, j ≤ n. Then matrix A is also an M-matrix if it can be expressed in the form A = sI − B, where B = (bᵢⱼ) with bᵢⱼ ≥ 0 for all 1 ≤ i, j ≤ n, where s is at least as large as the maximum of the moduli of the eigenvalues of B, and I is an identity matrix.

For the non-singularity of A, according to the Perron–Frobenius theorem, it must be the case that s > ρ(B). Also, for a non-singular M-matrix, the diagonal elements aᵢᵢ of A must be positive. Here we will further characterize only the class of non-singular M-matrices.
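The definition above lends itself to a direct numerical check. The following is a minimal NumPy sketch (the function name and the sample matrices are illustrative, not from the source): since any s ≥ maxᵢ aᵢᵢ makes B = sI − A nonnegative for a Z-matrix, it suffices to take s as the largest diagonal entry and compare it with ρ(B).

```python
import numpy as np

def is_nonsingular_m_matrix(A, tol=1e-12):
    """Test A = sI - B with B >= 0 and s > rho(B), taking s = max a_ii."""
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):        # off-diagonal entries must be <= 0 (Z-matrix)
        return False
    s = np.max(np.diag(A))            # any s >= max_i a_ii makes B nonnegative
    B = s * np.eye(len(A)) - A
    rho = np.max(np.abs(np.linalg.eigvals(B)))  # spectral radius of B
    return bool(s > rho + tol)

# Tridiagonal matrix from discretizing -u'': a classic non-singular M-matrix.
A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
print(is_nonsingular_m_matrix(A))                      # True
print(is_nonsingular_m_matrix(np.array([[1., -1.],
                                        [-1., 1.]])))  # False: s = rho(B), singular
```

The second matrix is a Z-matrix but has zero row sums, so s = ρ(B) and the strict inequality fails: it is an M-matrix by the broad definition, but a singular one.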

Many statements that are equivalent to this definition of non-singular M-matrices are known, and any one of these statements can serve as a starting definition of a non-singular M-matrix.[3] For example, Plemmons lists 40 such equivalences.[4] These characterizations have been categorized by Plemmons in terms of their relations to the properties of: (1) positivity of principal minors, (2) inverse-positivity and splittings, (3) stability, and (4) semipositivity and diagonal dominance. It makes sense to categorize the properties in this way because the statements within a particular group are related to each other even when matrix A is an arbitrary matrix, and not necessarily a Z-matrix. Here we mention a few characterizations from each category.

## Equivalences

Below, ≥ denotes the element-wise order (not the usual positive semidefinite order on matrices). That is, for any real matrices A, B of size m × n, we write A ≥ B (or A > B) if aᵢⱼ ≥ bᵢⱼ (or aᵢⱼ > bᵢⱼ) for all i, j.

Let A be an n × n real Z-matrix. Then the following statements are equivalent to A being a non-singular M-matrix:

### Positivity of Principal Minors

• All the principal minors of A are positive. That is, the determinant of each submatrix of A obtained by deleting a set, possibly empty, of corresponding rows and columns of A is positive.
• A + D is non-singular for each nonnegative diagonal matrix D.
• Every real eigenvalue of A is positive.
• All the leading principal minors of A are positive.
• There exist lower and upper triangular matrices L and U respectively, with positive diagonals, such that A = LU.
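The minor-based criteria above are easy to test numerically. A hedged NumPy sketch (the helper name and sample matrix are illustrative): for a Z-matrix, positivity of the n leading principal minors alone already suffices, per the equivalence above, so only n determinants are needed.

```python
import numpy as np

def leading_principal_minors(A):
    """Determinants of the top-left k x k submatrices, k = 1..n."""
    A = np.asarray(A, dtype=float)
    return [np.linalg.det(A[:k, :k]) for k in range(1, len(A) + 1)]

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
minors = leading_principal_minors(A)
print(minors)                          # approximately [2.0, 3.0, 4.0]
print(all(m > 0 for m in minors))      # True: consistent with A being an M-matrix
```

For the n × n tridiagonal matrix with 2 on the diagonal and −1 off it, the k-th leading minor is k + 1, so all are positive, as the check confirms.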

### Inverse-Positivity and Splittings

• A is inverse-positive. That is, A⁻¹ exists and A⁻¹ ≥ 0.
• A is monotone. That is, Ax ≥ 0 implies x ≥ 0.
• A has a convergent regular splitting. That is, A has a representation A = M − N, where M⁻¹ ≥ 0 and N ≥ 0, with M⁻¹N convergent. That is, ρ(M⁻¹N) < 1.
• There exist inverse-positive matrices M₁ and M₂ with M₁ ≤ A ≤ M₂.
• Every regular splitting of A is convergent.
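The regular-splitting equivalence can be illustrated with the classical Jacobi splitting M = diag(A), which is regular for a Z-matrix with positive diagonal. A minimal NumPy sketch (the sample matrix and iteration count are illustrative):

```python
import numpy as np

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])

# Jacobi splitting A = M - N: M^-1 >= 0 and N >= 0, so it is regular.
M = np.diag(np.diag(A))
N = M - A                          # negated off-diagonal part, entrywise >= 0
G = np.linalg.inv(M) @ N           # iteration matrix M^-1 N
rho = np.max(np.abs(np.linalg.eigvals(G)))
print(rho)                         # about 0.707 < 1: the splitting is convergent

# Convergence means the fixed-point iteration x <- M^-1 (N x + b) solves Ax = b.
b = np.ones(3)
x = np.zeros(3)
for _ in range(200):
    x = np.linalg.solve(M, N @ x + b)
print(np.allclose(A @ x, b))       # True
```

Since A is a non-singular M-matrix, the last equivalence in the list guarantees that any other regular splitting (e.g. Gauss–Seidel's) would converge as well.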

### Stability

• There exists a positive diagonal matrix D such that AD + DAᵀ is positive definite.
• A is positive stable. That is, the real part of each eigenvalue of A is positive.
• There exists a symmetric positive definite matrix W such that AW + WAᵀ is positive definite.
• A + I is non-singular, and G = (A + I)⁻¹(A − I) is convergent.
• A + I is non-singular, and for G = (A + I)⁻¹(A − I), there exists a positive definite symmetric matrix W such that W − GᵀWG is positive definite.
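Positive stability and the Lyapunov-type criterion can both be verified numerically. A hedged NumPy sketch (the sample matrix is illustrative; because it is symmetric, the trivial choice D = I already works, whereas a general M-matrix may require a nontrivial diagonal D):

```python
import numpy as np
from numpy.linalg import eigvals, eigvalsh

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])

# Positive stability: every eigenvalue of A has positive real part.
print(np.all(eigvals(A).real > 0))   # True: eigenvalues are 2 - sqrt(2), 2, 2 + sqrt(2)

# Lyapunov-type criterion with D = I: for this symmetric A, AD + DA^T = 2A,
# so positive definiteness reduces to A itself being positive definite.
D = np.eye(3)
S = A @ D + D @ A.T
print(np.all(eigvalsh(S) > 0))       # True: S is positive definite
```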

### Semipositivity and Diagonal Dominance

• A is semi-positive. That is, there exists x > 0 with Ax > 0.
• There exists x ≥ 0 with Ax > 0.
• There exists a positive diagonal matrix D such that AD has all positive row sums.
• A has all positive diagonal elements, and there exists a positive diagonal matrix D such that AD is strictly diagonally dominant.
• A has all positive diagonal elements, and there exists a positive diagonal matrix D such that D⁻¹AD is strictly diagonally dominant.
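For a non-singular M-matrix, a semipositivity witness is easy to construct: since A⁻¹ ≥ 0, the vector x = A⁻¹e with e = (1, …, 1)ᵀ satisfies x > 0 and Ax = e > 0, and D = diag(x) then makes AD strictly diagonally dominant. A minimal NumPy sketch (the sample matrix is illustrative):

```python
import numpy as np

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])

# Semipositivity witness: x = A^-1 e is positive and A x = e > 0.
e = np.ones(3)
x = np.linalg.solve(A, e)
print(x)                           # [1.5, 2.0, 1.5] -> entrywise positive
print(A @ x)                       # [1.0, 1.0, 1.0] -> entrywise positive

# D = diag(x) makes AD strictly diagonally dominant: row i of AD sums to
# (Ax)_i = 1 > 0, and its off-diagonal entries are <= 0, so the diagonal
# entry strictly exceeds the sum of the off-diagonal moduli.
D = np.diag(x)
AD = A @ D
off_sums = np.sum(np.abs(AD), axis=1) - np.abs(np.diag(AD))
print(np.all(np.abs(np.diag(AD)) > off_sums))   # True
```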

## Applications

The primary contributions to M-matrix theory have come mainly from mathematicians and economists. In mathematics, M-matrices are used to establish bounds on eigenvalues and to establish convergence criteria for iterative methods for the solution of large sparse systems of linear equations. M-matrices arise naturally in some discretizations of differential operators, such as the Laplacian, and as such are well studied in scientific computing. M-matrices also occur in the study of solutions to the linear complementarity problem: linear complementarity problems arise in linear and quadratic programming, in computational mechanics, and in the problem of finding the equilibrium points of a bimatrix game. Lastly, M-matrices occur in the study of finite Markov chains in probability theory and operations research, for example in queueing theory.

Meanwhile, economists have studied M-matrices in connection with gross substitutability, the stability of a general equilibrium, and Leontief's input-output analysis in economic systems. The condition of positivity of all principal minors is also known as the Hawkins–Simon condition in the economic literature.[5]

In engineering, M-matrices occur in problems of feedback control in control theory and are related to Hurwitz matrices. In computational biology, M-matrices occur in the study of population dynamics.
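As a small sketch of the scientific-computing connection mentioned above (the grid size is illustrative): the standard second-difference discretization of −d²/dx² with Dirichlet boundary conditions yields a tridiagonal Z-matrix whose inverse is entrywise positive, i.e. an inverse-positive M-matrix, which is one reason such discretizations obey a discrete maximum principle.

```python
import numpy as np

# 1-D discrete Laplacian (Dirichlet boundaries): 2 on the diagonal, -1 off it.
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# A is a Z-matrix (off-diagonals <= 0) with an entrywise positive inverse,
# so it is a non-singular, inverse-positive M-matrix.
A_inv = np.linalg.inv(A)
print(np.all(A_inv > 0))                      # True
print(np.all(np.linalg.eigvals(A).real > 0))  # True: A is positive stable
```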