Minor (linear algebra)
- This article is about a concept in linear algebra. For the unrelated concept of "minor" in graph theory, see minor (graph theory).
In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns.
Detailed definition
Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A is the determinant of a k × k matrix obtained from A by deleting m − k rows and n − k columns.
Since there are
- $\binom{m}{k}$ (read "m choose k")
ways to choose k rows from the m rows, and there are
- $\binom{n}{k}$
ways to choose k columns from the n columns, there are a total of
- $\binom{m}{k} \cdot \binom{n}{k}$
minors of size k × k.
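As a concrete check of this count, here is a minimal Python sketch (using NumPy and itertools; the 4 × 3 matrix and the helper name all_minors are arbitrary illustrations, not part of the definition) that enumerates every k × k minor and verifies that their number equals $\binom{m}{k}\binom{n}{k}$.

```python
from itertools import combinations
from math import comb

import numpy as np

def all_minors(A, k):
    """Return every k x k minor of A as a dict keyed by (rows, cols)."""
    m, n = A.shape
    return {
        (rows, cols): np.linalg.det(A[np.ix_(rows, cols)])
        for rows in combinations(range(m), k)
        for cols in combinations(range(n), k)
    }

A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0],
              [2.0, 6.0, 8.0]])   # an arbitrary 4 x 3 example matrix

k = 2
minors = all_minors(A, k)
m, n = A.shape
assert len(minors) == comb(m, k) * comb(n, k)   # 6 * 3 = 18 minors of size 2 x 2
```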
Cofactors
The (i, j) minor of an n × n square matrix A, often denoted $M_{ij}$, is the determinant of the (n − 1) × (n − 1) matrix formed by removing the ith row and the jth column of A.
The cofactor $C_{ij}$ of A is just $(-1)^{i+j}$ times the corresponding minor:
- $C_{ij} = (-1)^{i+j} M_{ij}$
The transpose of the matrix C of cofactors is called the adjugate matrix.
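These definitions translate directly into a short sketch (Python with NumPy; the helper names minor, cofactor_matrix and adjugate are illustrative, not standard library functions): delete the ith row and jth column for $M_{ij}$, attach the sign $(-1)^{i+j}$ for the cofactor, and transpose the cofactor matrix for the adjugate.

```python
import numpy as np

def minor(A, i, j):
    """Minor M_ij: determinant of A with row i and column j removed (0-based indices)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor_matrix(A):
    """Matrix of cofactors C_ij = (-1)**(i + j) * M_ij."""
    n = A.shape[0]
    return np.array([[(-1) ** (i + j) * minor(A, i, j) for j in range(n)]
                     for i in range(n)])

def adjugate(A):
    """Adjugate: the transpose of the cofactor matrix."""
    return cofactor_matrix(A).T
```

For an invertible square matrix A, adjugate(A) @ A equals det(A) times the identity matrix, which is why the adjugate appears in the inverse formula discussed under Applications below.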
Example
For example, given the matrix
- $A = \begin{pmatrix} 1 & 4 & 7 \\ 3 & 0 & 5 \\ -1 & 9 & 11 \end{pmatrix},$
suppose we wish to find the cofactor $C_{23}$. The minor $M_{23}$ is the determinant of the above matrix with row 2 and column 3 removed:
- $M_{23} = \begin{vmatrix} 1 & 4 \\ -1 & 9 \end{vmatrix} = 9 - (-4) = 13,$
where the vertical bars around the matrix indicate that the determinant should be taken. Thus,
- $C_{23} = (-1)^{2+3} M_{23} = -13.$
Applications
The cofactors feature prominently in Laplace's formula for the expansion of determinants. If all the cofactors of a square matrix A are collected to form a new matrix of the same size and then transposed, one obtains the adjugate of A, which is useful in calculating the inverse of small matrices.
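The following sketch illustrates both points (Python with NumPy; det_laplace and adjugate are illustrative helper names, and the 3 × 3 matrix is an arbitrary example): the determinant is expanded along the first row, and the inverse is then computed as adj(A)/det(A).

```python
import numpy as np

def det_laplace(A):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    return sum((-1) ** j * A[0, j]
               * det_laplace(np.delete(np.delete(A, 0, axis=0), j, axis=1))
               for j in range(n))

def adjugate(A):
    """Transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.array([[(-1) ** (i + j)
                   * det_laplace(np.delete(np.delete(A, i, axis=0), j, axis=1))
                   for j in range(n)] for i in range(n)])
    return C.T

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

A_inv = adjugate(A) / det_laplace(A)        # inverse via adj(A) / det(A)
assert np.allclose(A_inv @ A, np.eye(3))
assert np.isclose(det_laplace(A), np.linalg.det(A))
```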
If an m × n matrix with real entries (or entries from any other field) has rank r, then there exists at least one non-zero r × r minor, while all larger minors are zero.
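A brute-force illustration of this statement (Python with NumPy; the matrix, the tolerance and the helper name rank_via_minors are arbitrary choices) searches for the largest size r at which some r × r minor is non-zero and compares it with the rank.

```python
from itertools import combinations

import numpy as np

def rank_via_minors(A, tol=1e-9):
    """Largest k such that some k x k minor of A is non-zero (within tol)."""
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k
    return 0

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # second row = 2 * first row, so the rank drops to 2
              [1.0, 0.0, 1.0]])

assert rank_via_minors(A) == np.linalg.matrix_rank(A) == 2
```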
We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1,...,m} with k elements and J is a subset of {1,...,n} with k elements, then we write $[A]_{I,J}$ for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J.
- If I = J, then $[A]_{I,J}$ is called a principal minor.
- If the submatrix that corresponds to a principal minor is the square upper-left part of the larger matrix (i.e., it consists of the matrix elements in rows and columns from 1 to k), then the principal minor is called a leading principal minor. An n × n square matrix has n leading principal minors.
- For Hermitian matrices, the principal minors can be used to test for positive definiteness.
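For the last point, the test on a real symmetric (hence Hermitian) matrix is Sylvester's criterion: the matrix is positive definite exactly when all of its leading principal minors are positive. A minimal sketch (Python with NumPy; the example matrix and the helper name are arbitrary):

```python
import numpy as np

def leading_principal_minors(A):
    """The n leading principal minors det(A[:k, :k]) for k = 1..n."""
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])   # a symmetric (hence Hermitian) example matrix

sylvester = all(d > 0 for d in leading_principal_minors(A))
eig_test = np.all(np.linalg.eigvalsh(A) > 0)
assert sylvester and eig_test      # both tests agree: A is positive definite
```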
Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1,...,m} with k elements and J is a subset of {1,...,p} with k elements. Then
- $[AB]_{I,J} = \sum_{K} [A]_{I,K} [B]_{K,J},$
where the sum extends over all subsets K of {1,...,n} with k elements. This formula is a straightforward corollary of the Cauchy–Binet formula.
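A quick numerical check of this identity (Python with NumPy; the random matrices and the index sets I and J are arbitrary, and indices are 0-based as in NumPy):

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 5)).astype(float)   # m x n
B = rng.integers(-3, 4, size=(5, 3)).astype(float)   # n x p

I, J, k = (0, 2, 3), (0, 1, 2), 3                    # row / column index sets with |I| = |J| = k

lhs = np.linalg.det((A @ B)[np.ix_(I, J)])           # the minor [AB]_{I,J}
rhs = sum(np.linalg.det(A[np.ix_(I, K)]) * np.linalg.det(B[np.ix_(K, J)])
          for K in combinations(range(A.shape[1]), k))
assert np.isclose(lhs, rhs)
```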
Multilinear algebra approach
A more systematic, algebraic treatment of the minor concept is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the kth exterior power map.
If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix
- $\begin{pmatrix} 1 & 4 \\ 3 & -1 \\ 2 & 1 \end{pmatrix}$
are −13 (from the first two rows), −7 (from the first and last row), and 5 (from the last two rows). Now consider the wedge product
- $(\mathbf{e}_1 + 3\mathbf{e}_2 + 2\mathbf{e}_3) \wedge (4\mathbf{e}_1 - \mathbf{e}_2 + \mathbf{e}_3),$
where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and
- $\mathbf{e}_i \wedge \mathbf{e}_i = 0$
and
- $\mathbf{e}_i \wedge \mathbf{e}_j = -\,\mathbf{e}_j \wedge \mathbf{e}_i,$
we can simplify this expression to
- $-13\,\mathbf{e}_1 \wedge \mathbf{e}_2 - 7\,\mathbf{e}_1 \wedge \mathbf{e}_3 + 5\,\mathbf{e}_2 \wedge \mathbf{e}_3,$
where the coefficients agree with the minors computed earlier.
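The same computation as a sketch (Python with NumPy, using the 3 × 2 matrix from the example above; the variable names are illustrative): for each pair of rows i < j, the coefficient of $\mathbf{e}_i \wedge \mathbf{e}_j$ in the wedge of the two columns is $x_i y_j - x_j y_i$, which is exactly the 2 × 2 minor taken from those rows.

```python
from itertools import combinations

import numpy as np

M = np.array([[1.0, 4.0],
              [3.0, -1.0],
              [2.0, 1.0]])        # columns: x = e1 + 3 e2 + 2 e3,  y = 4 e1 - e2 + e3

x, y = M[:, 0], M[:, 1]
wedge = {(i, j): x[i] * y[j] - x[j] * y[i]        # coefficient of e_i ^ e_j in x ^ y
         for i, j in combinations(range(3), 2)}

# The coefficients are the 2 x 2 minors taken from the corresponding pairs of rows.
assert np.isclose(wedge[(0, 1)], -13)
assert np.isclose(wedge[(0, 2)], -7)
assert np.isclose(wedge[(1, 2)], 5)
```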