Minor (linear algebra)

From Wikipedia, the free encyclopedia
This article is about a concept in linear algebra. For the unrelated concept of "minor" in graph theory, see Minor (graph theory).

In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows or columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices.

Definition and illustration

First minors

If A is a square matrix, then the minor of the entry in the i-th row and j-th column (also called the (i,j) minor, or a first minor[1]) is the determinant of the submatrix formed by deleting the i-th row and j-th column. This number is often denoted M_{i,j}. The (i,j) cofactor is obtained by multiplying the minor by (-1)^{i+j}.

To illustrate these definitions, consider the following 3 × 3 matrix:

\mathbf A = \begin{bmatrix}
\,\,\,1 & 4 & 7 \\
\,\,\,3 & 0 & 5 \\
-1 & 9 & \!11 \\
\end{bmatrix}

To compute the minor M23 and the cofactor C23, we find the determinant of the above matrix with row 2 and column 3 removed.

 M_{23} = \det \begin{bmatrix}
\,\,1 & 4 & \Box\, \\
\,\Box & \Box & \Box\, \\
-1 & 9 & \Box\, \\
\end{bmatrix}= \det \begin{bmatrix}
\,\,\,1 & 4\, \\
-1 & 9\, \\
\end{bmatrix} = 9-(-4) = 13

So the cofactor of the (2,3) entry is

\ C_{23} = (-1)^{2+3}(M_{23}) = -13.
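The computation above can be reproduced in a short pure-Python sketch (the helper names `det`, `minor`, and `cofactor` are illustrative, not a standard API):

```python
def det(M):
    # Determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor(M, i, j):
    # First minor M_ij: delete row i and column j (1-based, as in the text).
    sub = [row[:j - 1] + row[j:] for k, row in enumerate(M) if k != i - 1]
    return det(sub)

def cofactor(M, i, j):
    # Cofactor C_ij = (-1)^(i+j) * M_ij.
    return (-1) ** (i + j) * minor(M, i, j)

A = [[1, 4, 7],
     [3, 0, 5],
     [-1, 9, 11]]

print(minor(A, 2, 3), cofactor(A, 2, 3))  # 13 -13
```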

General definition

Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A is the determinant of a k × k matrix obtained from A by deleting m − k rows and n − k columns. For such a matrix there are a total of {m \choose k} \cdot {n \choose k} minors of size k × k.
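As a sketch of this count, one can enumerate every k × k minor by choosing k rows and k columns (the names `det` and `k_minors` are illustrative):

```python
from itertools import combinations
from math import comb

def det(M):
    # Determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def k_minors(A, k):
    # One minor per choice of k rows (out of m) and k columns (out of n).
    m, n = len(A), len(A[0])
    return {(rows, cols): det([[A[r][c] for c in cols] for r in rows])
            for rows in combinations(range(m), k)
            for cols in combinations(range(n), k)}

A = [[1, 4, 7],
     [3, 0, 5]]  # m = 2, n = 3
minors = k_minors(A, 2)
print(len(minors), comb(2, 2) * comb(3, 2))  # 3 3
```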


The complement, B_{ijk...,pqr...}, of a minor, M_{ijk...,pqr...}, of a square matrix A is formed by the determinant of the matrix A from which all the rows (i, j, k, ...) and columns (p, q, r, ...) associated with M_{ijk...,pqr...} have been removed. The complement of the first minor of an element a_{ij} is merely that element.[2]

Applications of minors and cofactors

Cofactor expansion of the determinant

Main article: Laplace expansion

The cofactors feature prominently in Laplace's formula for the expansion of determinants, a method of computing larger determinants in terms of smaller ones. Given the n\times n matrix (a_{ij}), the determinant of A (denoted det(A)) can be written as the sum of the entries of any row or column multiplied by their cofactors. The cofactor expansion along the j-th column gives:

\ \det(\mathbf A) = a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j} + \cdots + a_{nj}C_{nj} = \sum_{i=1}^{n} a_{ij} C_{ij}

The cofactor expansion along the i-th row gives:

\ \det(\mathbf A) = a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3} + \cdots + a_{in}C_{in} = \sum_{j=1}^{n} a_{ij} C_{ij}
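Both expansions can be sketched in pure Python (function names are illustrative; the code uses 0-based indices):

```python
def minor_det(M, i, j):
    # Determinant of M with row i and column j removed (0-based).
    sub = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
    return expand_row(sub, 0)

def expand_row(M, i):
    # Cofactor expansion along row i: sum_j a_ij * (-1)^(i+j) * M_ij.
    if len(M) == 1:
        return M[0][0]
    return sum(M[i][j] * (-1) ** (i + j) * minor_det(M, i, j) for j in range(len(M)))

def expand_col(M, j):
    # Cofactor expansion along column j: sum_i a_ij * (-1)^(i+j) * M_ij.
    if len(M) == 1:
        return M[0][0]
    return sum(M[i][j] * (-1) ** (i + j) * minor_det(M, i, j) for i in range(len(M)))

A = [[1, 4, 7],
     [3, 0, 5],
     [-1, 9, 11]]
# Every row and every column yields the same determinant.
print([expand_row(A, i) for i in range(3)], [expand_col(A, j) for j in range(3)])
```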

Inverse of a matrix

Main article: Cramer's rule

One can write down the inverse of an invertible matrix in terms of its cofactors, using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors):

\mathbf C=\begin{bmatrix}
    C_{11}  & C_{12} & \cdots &   C_{1n}   \\
    C_{21}  & C_{22} & \cdots &   C_{2n}   \\
  \vdots & \vdots & \ddots & \vdots \\
    C_{n1}  & C_{n2} & \cdots &  C_{nn}
\end{bmatrix}

Then the inverse of A is the transpose of the cofactor matrix times the inverse of the determinant of A:

\mathbf A^{-1} = \frac{1}{\operatorname{det}(\mathbf A)} \mathbf C^\mathsf{T}.

The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A.
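The adjugate formula above can be sketched in exact arithmetic with `Fraction` entries (the name `inverse_via_adjugate` is illustrative):

```python
from fractions import Fraction

def det(M):
    # Determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def inverse_via_adjugate(M):
    # A^{-1} = (1/det A) * C^T, where C is the cofactor matrix.
    n = len(M)
    d = Fraction(det(M))
    C = [[(-1) ** (i + j) * det([row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i])
          for j in range(n)] for i in range(n)]
    return [[C[j][i] / d for j in range(n)] for i in range(n)]  # transpose of C, over det A

A = [[1, 4, 7],
     [3, 0, 5],
     [-1, 9, 11]]
Ainv = inverse_via_adjugate(A)
# A * Ainv should be the identity matrix.
I = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(I == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # True
```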

The above formula can be generalized as follows:

[\mathbf A^{-1}]_{I,J} = (-1)^{\sum_{i\in I} i + \sum_{j\in J} j}\,\frac{[\mathbf A]_{J',I'}}{\det \mathbf A},

where I', J' denote the subsets of indices complementary to I, J, and the sign factor accounts for the positions of the selected rows and columns. A simple proof can be given using the wedge product. Indeed,

[\mathbf A^{-1}]_{I,J}(e_1\wedge\ldots \wedge e_n) = \pm(\mathbf A^{-1}e_{j_1})\wedge \ldots \wedge(\mathbf A^{-1}e_{j_k})\wedge e_{i'_1}\wedge\ldots \wedge e_{i'_{n-k}},

where e_1,\ldots,e_n are the basis vectors and the sign comes from reordering the wedged basis vectors into increasing order. Acting by \mathbf A on both sides, one gets

[\mathbf A^{-1}]_{I,J}\det \mathbf A\, (e_1\wedge\ldots \wedge e_n) = \pm(e_{j_1})\wedge \ldots \wedge(e_{j_k})\wedge (\mathbf A e_{i'_1})\wedge\ldots \wedge (\mathbf A e_{i'_{n-k}})= \pm[\mathbf A]_{J',I'}(e_1\wedge\ldots \wedge e_n).
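The identity, with the sign factor written out, can be checked numerically; the sketch below verifies every 2 × 2 minor of A^{-1} for a sample matrix (helper names are illustrative; the code uses 0-based index sums, which have the same parity as the 1-based sums because |I| = |J|):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    if not M:
        return Fraction(1)  # the empty determinant is 1 by convention
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def sub(A, rows, cols):
    return [[A[r][c] for c in cols] for r in rows]

def inverse(A):
    # Adjugate over determinant, in exact arithmetic.
    n, d = len(A), det(A)
    return [[(-1) ** (i + j) * det(sub(A, [r for r in range(n) if r != j],
                                       [c for c in range(n) if c != i])) / d
             for j in range(n)] for i in range(n)]

A = [[Fraction(v) for v in row] for row in [[2, 1, 0], [1, 3, 1], [0, 1, 2]]]
Ainv, n, k = inverse(A), 3, 2
ok = all(
    det(sub(Ainv, I, J)) ==
    (-1) ** (sum(I) + sum(J)) *  # same parity as the 1-based index sums
    det(sub(A, [r for r in range(n) if r not in J],   # rows J'
               [c for c in range(n) if c not in I]))  # columns I'
    / det(A)
    for I in combinations(range(n), k) for J in combinations(range(n), k))
print(ok)  # True
```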

Other applications

If an m × n matrix with real entries (or entries from any other field) has rank r, then there exists at least one non-zero r × r minor, while all larger minors are zero.
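This characterization gives a (very inefficient, but illustrative) way to compute the rank: find the largest k for which some k × k minor is non-zero (the name `rank_via_minors` is illustrative):

```python
from itertools import combinations

def det(M):
    # Determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def rank_via_minors(A):
    # Largest k such that some k-by-k minor is non-zero (0 for the zero matrix).
    m, n = len(A), len(A[0])
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[r][c] for c in cols] for r in rows]) != 0:
                    return k
    return 0

A = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]
print(rank_via_minors(A))  # 2: row 2 is twice row 1, so the single 3x3 minor vanishes
```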

We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1,...,m} with k elements, and J is a subset of {1,...,n} with k elements, then we write [\mathbf A]_{I,J} for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J.

  • If I = J, then [\mathbf A]_{I,J} is called a principal minor.
  • If the matrix that corresponds to a principal minor is a square upper-left submatrix of the larger matrix (i.e., it consists of the matrix elements in rows and columns from 1 to k), then the principal minor is called a leading principal minor. For an n × n square matrix, there are n leading principal minors.
  • For Hermitian matrices, the leading principal minors can be used to test for positive definiteness and the principal minors can be used to test for positive semidefiniteness. See Sylvester's criterion for more details.
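For real symmetric matrices, Sylvester's criterion can be sketched directly from the leading principal minors (function names are illustrative):

```python
def det(M):
    # Determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def leading_principal_minors(A):
    # k-th leading principal minor: determinant of the upper-left k x k submatrix.
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

def is_positive_definite(A):
    # Sylvester's criterion (real symmetric A): all leading principal minors positive.
    return all(m > 0 for m in leading_principal_minors(A))

A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]
print(leading_principal_minors(A), is_positive_definite(A))  # [2, 3, 4] True
```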

Both the formula for ordinary matrix multiplication and the Cauchy-Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1,...,m} with k elements and J is a subset of {1,...,p} with k elements. Then

[\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K} [\mathbf{B}]_{K,J}\,

where the sum extends over all subsets K of {1,...,n} with k elements. This formula is a straightforward extension of the Cauchy-Binet formula.
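The identity can be verified numerically for a small example (helper names `minor_sub` and `matmul` are illustrative):

```python
from itertools import combinations

def det(M):
    # Determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor_sub(A, rows, cols):
    # Minor of A given by the chosen rows and columns (0-based index tuples).
    return det([[A[r][c] for c in cols] for r in rows])

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 0], [3, 1, 4]]    # 2 x 3
B = [[1, 0], [2, 1], [0, 3]]  # 3 x 2
I, J, k = (0, 1), (0, 1), 2
lhs = minor_sub(matmul(A, B), I, J)
# Sum over all k-element subsets K of the inner index set {0, 1, 2}.
rhs = sum(minor_sub(A, I, K) * minor_sub(B, K, J) for K in combinations(range(3), k))
print(lhs, rhs)  # 55 55
```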

Multilinear algebra approach

A more systematic, algebraic treatment of the minor concept is given in multilinear algebra, using the wedge product: the k × k minors of a matrix are the entries of the matrix of its k-th exterior power map.

If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix

\begin{bmatrix}
1 & 4 \\
3 & \!\!-1 \\
2 & 1 \\
\end{bmatrix}

are −13 (from the first two rows), −7 (from the first and last row), and 5 (from the last two rows). Now consider the wedge product

(\mathbf{e}_1 + 3\mathbf{e}_2 +2\mathbf{e}_3)\wedge(4\mathbf{e}_1-\mathbf{e}_2+\mathbf{e}_3)

where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and

\mathbf{e}_i\wedge \mathbf{e}_i = 0


\mathbf{e}_i\wedge \mathbf{e}_j = - \mathbf{e}_j\wedge \mathbf{e}_i,

we can simplify this expression to

 -13 \mathbf{e}_1\wedge \mathbf{e}_2 -7 \mathbf{e}_1\wedge \mathbf{e}_3 +5 \mathbf{e}_2\wedge \mathbf{e}_3

where the coefficients agree with the minors computed earlier.
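The same coefficients can be read off computationally: the coefficient of e_i ∧ e_j in u ∧ v is u_i v_j − u_j v_i, i.e. a 2 × 2 minor (the name `wedge2` is illustrative):

```python
from itertools import combinations

def wedge2(u, v):
    # Coefficients of u ∧ v on the basis e_i ∧ e_j (i < j): the 2x2 minors.
    n = len(u)
    return {(i, j): u[i] * v[j] - u[j] * v[i] for i, j in combinations(range(n), 2)}

u, v = [1, 3, 2], [4, -1, 1]  # the two columns of the 3 x 2 matrix above
print(wedge2(u, v))  # {(0, 1): -13, (0, 2): -7, (1, 2): 5}
```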

A remark about different notations

In some books,[3] the term adjunct is used instead of cofactor. Moreover, it is denoted A_{ij} and defined in the same way as the cofactor:

\mathbf{A}_{ij} = (-1)^{i+j} \mathbf{M}_{ij}

In this notation, the inverse matrix is written:

\mathbf{A}^{-1} = \frac{1}{\det(A)}\begin{bmatrix}
    A_{11}  & A_{21} & \cdots &   A_{n1}   \\
    A_{12}  & A_{22} & \cdots &   A_{n2}   \\
  \vdots & \vdots & \ddots & \vdots \\
    A_{1n}  & A_{2n} & \cdots &  A_{nn}
\end{bmatrix}

Keep in mind that adjunct is not the same as adjugate or adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator.

References
  1. ^ Burnside, William Snow & Panton, Arthur William (1886). Theory of Equations: with an Introduction to the Theory of Binary Algebraic Form.
  2. ^ Jeffreys, Bertha (1999). Methods of Mathematical Physics, p. 135. Cambridge University Press. ISBN 0-521-66402-0.
  3. ^ Gantmacher, Felix (1953). Theory of Matrices (1st ed., original language Russian). Moscow: State Publishing House of Technical and Theoretical Literature, p. 491.
