# Permanent

In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. Like the determinant, the permanent is a polynomial in the entries of the matrix. Both the permanent and the determinant are special cases of a more general function of a matrix called the immanant.

## Definition

The permanent of an $n \times n$ matrix $A = (a_{i,j})$ is defined as

$\operatorname{perm}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^n a_{i,\sigma(i)}.$

The sum here extends over all elements σ of the symmetric group $S_n$; i.e. over all permutations of the numbers 1, 2, ..., n.

For example,

$\operatorname{perm}\begin{pmatrix}a&b \\ c&d\end{pmatrix}=ad+bc,$

and

$\operatorname{perm}\begin{pmatrix}a&b&c \\ d&e&f \\ g&h&i \end{pmatrix}=aei + bfg + cdh + ceg + bdi + afh.$

The definition of the permanent of A differs from that of the determinant of A in that the signatures of the permutations are not taken into account.
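The definition can be evaluated directly, at a cost of O(n · n!) operations; a minimal Python sketch (the helper name `perm` is our choice, not from the text):

```python
from itertools import permutations
from math import prod

def perm(A):
    """Permanent straight from the definition: sum over all n! permutations
    sigma of the product of entries A[i][sigma(i)]."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# The 2x2 case ad + bc:
print(perm([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

This brute-force form is only practical for small n; the Computation section gives Ryser's faster inclusion–exclusion formula.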

The permanent of a matrix A is denoted per A, perm A, or Per A, sometimes with parentheses around the argument. In his monograph, Minc (1984) uses Per(A) for the permanent of rectangular matrices, and uses per(A) when A is a square matrix. Muir (1882) uses the notation $\overset{+}{|}\quad \overset{+}{|}$.

The word permanent originated with Cauchy in 1812 as “fonctions symétriques permanentes” for a related type of function,[1] and was used by Muir (1882) in the modern, more specific sense.[2]

## Properties and applications

If one views the permanent as a map that takes n vectors as arguments, then it is a multilinear map and it is symmetric (meaning that any order of the vectors results in the same permanent). Furthermore, given a square matrix $A = (a_{ij})$ of order n, we have:[3]

• perm(A) is invariant under arbitrary permutations of the rows and/or columns of A. This property may be written symbolically as perm(A) = perm(PAQ) for any appropriately sized permutation matrices P and Q,
• multiplying any single row or column of A by a scalar s changes perm(A) to s⋅perm(A),
• perm(A) is invariant under transposition, that is, $\operatorname{perm}(A) = \operatorname{perm}(A^{\mathsf{T}})$.

If $A = (a_{ij})$ and $B=(b_{ij})$ are square matrices of order n then,[4]

$\operatorname{perm}(A + B) = \sum_{s,t} \operatorname{perm} (a_{ij})_{i \in s, j \in t} \operatorname{perm} (b_{ij})_{i \in \bar{s}, j \in \bar{t}},$

where s and t are subsets of the same size of {1,2,...,n} and $\bar{s}, \bar{t}$ are their respective complements in that set.

On the other hand, the basic multiplicative property of determinants is not valid for permanents.[5] A simple example shows that this is so.

$4 = \operatorname{perm} \left ( \begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix} \right )\operatorname{perm} \left ( \begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix} \right ) \neq \operatorname{perm}\left ( \left ( \begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix} \right ) \left ( \begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix} \right ) \right ) = \operatorname{perm} \left ( \begin{matrix} 2 & 2 \\ 2 & 2 \end{matrix} \right )= 8.$
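The failure of multiplicativity is easy to check mechanically; a short sketch (the helper names `perm` and `matmul` are ours):

```python
from itertools import permutations
from math import prod

def perm(A):
    """Permanent by brute force over all permutations."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def matmul(A, B):
    """Ordinary matrix product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1], [1, 1]]
print(perm(A) * perm(A))   # 4
print(perm(matmul(A, A)))  # perm([[2, 2], [2, 2]]) = 8
```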

A formula similar to Laplace's expansion of a determinant along a row, column or diagonal is also valid for the permanent;[6] all signs must be ignored for the permanent. For example, expanding along the first column,

$\operatorname{perm} \left ( \begin{matrix} 1 & 1 & 1 & 1\\2 & 1 & 0 & 0\\3 & 0 & 1 & 0\\4 & 0 & 0 & 1 \end{matrix} \right ) = 1 \cdot \operatorname{perm} \left(\begin{matrix}1&0&0\\0&1&0\\0&0&1\end{matrix}\right) + 2\cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\0&1&0\\0&0&1\end{matrix}\right) +3\cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\1&0&0\\0&0&1\end{matrix}\right) + 4 \cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}\right)= 1(1) + 2(1) + 3(1) + 4(1) = 10,$

while expanding along the last row gives,

$\operatorname{perm} \left ( \begin{matrix} 1 & 1 & 1 & 1\\2 & 1 & 0 & 0\\3 & 0 & 1 & 0\\4 & 0 & 0 & 1 \end{matrix} \right ) = 4 \cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}\right) + 0\cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\2&0&0\\3&1&0\end{matrix}\right) +0\cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\2&1&0\\3&0&0\end{matrix}\right) + 1 \cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\2&1&0\\3&0&1\end{matrix}\right)= 4(1) + 0 + 0 + 1(6) = 10.$
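The sign-free expansion translates into a short recursive routine; a sketch expanding along the first column (the function name `perm_expand` is ours):

```python
def perm_expand(A):
    """Permanent by sign-free Laplace-style expansion along the first column:
    perm(A) = sum_i A[i][0] * perm(minor with row i and column 0 removed)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for i in range(n):
        minor = [row[1:] for k, row in enumerate(A) if k != i]
        total += A[i][0] * perm_expand(minor)
    return total

M = [[1, 1, 1, 1],
     [2, 1, 0, 0],
     [3, 0, 1, 0],
     [4, 0, 0, 1]]
print(perm_expand(M))  # 10, matching both expansions above
```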

Unlike the determinant, the permanent has no easy geometrical interpretation; it is mainly used in combinatorics and in treating boson Green's functions in quantum field theory. However, it has two graph-theoretic interpretations: as the sum of weights of cycle covers of a directed graph, and as the sum of weights of perfect matchings in a bipartite graph.

### Cycle covers

Any square matrix $A = (a_{ij})$ can be viewed as the adjacency matrix of a weighted directed graph, with $a_{ij}$ representing the weight of the arc from vertex i to vertex j. A cycle cover of a weighted directed graph is a collection of vertex-disjoint directed cycles in the digraph that covers all vertices in the graph. Thus, each vertex i in the digraph has a unique "successor" $\sigma(i)$ in the cycle cover, and $\sigma$ is a permutation on $\{1,2,\dots,n\}$ where n is the number of vertices in the digraph. Conversely, any permutation $\sigma$ on $\{1,2,\dots,n\}$ corresponds to a cycle cover in which there is an arc from vertex i to vertex $\sigma(i)$ for each i.

If the weight of a cycle-cover is defined to be the product of the weights of the arcs in each cycle, then

$\operatorname{Weight}(\sigma) = \prod_{i=1}^n a_{i,\sigma(i)}.$

The permanent of an $n \times n$ matrix A is defined as

$\operatorname{perm}(A)=\sum_\sigma \prod_{i=1}^{n} a_{i,\sigma(i)}$

where $\sigma$ is a permutation over $\{1,2,\dots,n\}$. Thus the permanent of A is equal to the sum of the weights of all cycle-covers of the digraph.

### Perfect matchings

A square matrix $A = (a_{ij})$ can also be viewed as the adjacency matrix of a bipartite graph which has vertices $x_1, x_2, \dots, x_n$ on one side and $y_1, y_2, \dots, y_n$ on the other side, with $a_{ij}$ representing the weight of the edge from vertex $x_i$ to vertex $y_j$. If the weight of a perfect matching $\sigma$ that matches $x_i$ to $y_{\sigma(i)}$ is defined to be the product of the weights of the edges in the matching, then

$\operatorname{Weight}(\sigma) = \prod_{i=1}^n a_{i,\sigma(i)}.$

Thus the permanent of A is equal to the sum of the weights of all perfect matchings of the graph.

## Permanents of (0,1) matrices

The permanents of matrices that only have 0 and 1 as entries are often the answers to certain counting questions involving the structures that the matrices represent. This is particularly true of adjacency matrices in graph theory and incidence matrices of symmetric block designs.

In an unweighted, directed, simple graph (a digraph), if we set each $a_{ij}$ to 1 when there is an arc from vertex i to vertex j, then the adjacency matrix has 0-1 entries and every cycle cover has weight 1. Thus the permanent of a (0,1)-matrix equals the number of cycle covers of the corresponding directed graph.

For an unweighted bipartite graph, if we set $a_{i,j} = 1$ if there is an edge between the vertices $x_i$ and $y_j$ and $a_{i,j} = 0$ otherwise, then each perfect matching has weight 1. Thus the number of perfect matchings in the graph is equal to the permanent of its biadjacency matrix A.[7]
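As a concrete illustration (the example graph is ours): a 6-cycle, viewed as a bipartite graph on vertices $x_1, x_2, x_3$ and $y_1, y_2, y_3$, has exactly two perfect matchings, and the permanent of its biadjacency matrix agrees.

```python
from itertools import permutations
from math import prod

def count_matchings(B):
    """Number of perfect matchings of a bipartite graph, computed as the
    permanent of its (0,1) biadjacency matrix B."""
    n = len(B)
    return sum(prod(B[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# 6-cycle x1-y1-x2-y2-x3-y3-x1: edges (x1,y1),(x2,y1),(x2,y2),(x3,y2),(x3,y3),(x1,y3)
C6 = [[1, 0, 1],
      [1, 1, 0],
      [0, 1, 1]]
print(count_matchings(C6))  # 2
```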

Let Ω(n,k) be the class of all (0,1)-matrices of order n with each row and column sum equal to k. Every matrix A in this class has perm(A) > 0.[8] The incidence matrices of projective planes are in the class $\Omega(n^2 + n + 1, n + 1)$ for n an integer > 1. The permanents corresponding to the smallest projective planes have been calculated. For n = 2, 3, and 4 the values are 24, 3852 and 18,534,400 respectively.[8] Let Z be the incidence matrix of the projective plane with n = 2, the Fano plane. Remarkably, perm(Z) = 24 = |det(Z)|, the absolute value of the determinant of Z. This is a consequence of Z being a circulant matrix and the theorem:[9]

If A is a circulant matrix in the class Ω(n,k), then perm(A) > |det(A)| when k > 3, and perm(A) = |det(A)| when k = 3. Furthermore, when k = 3, by permuting rows and columns, A can be put into the form of a direct sum of e copies of the matrix Z and consequently, n = 7e and perm(A) = $24^e$.

Permanents can also be used to calculate the number of permutations with restricted (prohibited) positions. For the standard n-set {1,2,...,n}, let $A = (a_{ij})$ be the (0,1)-matrix where $a_{ij} = 1$ if i → j is allowed in a permutation and $a_{ij} = 0$ otherwise. Then perm(A) counts the number of permutations of the n-set which satisfy all the restrictions.[10] Two well known special cases of this are the solution of the derangement problem (the number of permutations with no fixed points) given by:

$\operatorname{perm}(J - I) = \operatorname{perm}\left (\begin{matrix} 0 & 1 & 1 & \dots & 1 \\ 1 & 0 & 1 & \dots & 1 \\ 1 & 1 & 0 & \dots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & \dots & 0 \end{matrix} \right) = n! \sum_{i=0}^n \frac{(-1)^i}{i!},$

where J is the all 1's matrix and I is the identity matrix, each of order n, and the ménage problem, whose solution (the number of seatings of n couples) is $2 \cdot n!$ times the permanent

$\operatorname{perm}(J - I - I') = \operatorname{perm}\left (\begin{matrix} 0 & 0 & 1 & \dots & 1 \\ 1 & 0 & 0 & \dots & 1 \\ 1 & 1 & 0 & \dots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 1 & 1 & \dots & 0 \end{matrix} \right) = \sum_{k=0}^n (-1)^k \frac{2n}{2n-k} {2n-k\choose k} (n-k)!,$

where I' is the (0,1)-matrix whose only non-zero entries are in positions (i, i + 1) for 1 ≤ i ≤ n − 1 and in position (n, 1).
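Both special cases can be verified numerically for small n; a sketch (helper names ours) that builds J − I and J − I − I′, evaluates their permanents by brute force, and compares against the closed-form sums:

```python
from itertools import permutations
from math import prod, comb, factorial

def perm(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def derangement_matrix(n):
    """J - I: zeros on the diagonal, ones elsewhere."""
    return [[0 if i == j else 1 for j in range(n)] for i in range(n)]

def menage_matrix(n):
    """J - I - I': zeros on the diagonal, the superdiagonal, and at (n, 1)."""
    A = [[1] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 0
        A[i][(i + 1) % n] = 0
    return A

def touchard(n):
    """Touchard's sum for perm(J - I - I'); the division is always exact."""
    return sum((-1) ** k * 2 * n * comb(2 * n - k, k) * factorial(n - k)
               // (2 * n - k) for k in range(n + 1))

# Derangement numbers D_n = n! * sum_{i=0}^{n} (-1)^i / i!
for n in range(2, 7):
    closed = sum((-1) ** i * factorial(n) // factorial(i) for i in range(n + 1))
    assert perm(derangement_matrix(n)) == closed

print([perm(menage_matrix(n)) for n in range(3, 7)])  # [1, 2, 13, 80]
assert all(perm(menage_matrix(n)) == touchard(n) for n in range(3, 7))
```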

The following result was conjectured by H. Minc in 1967[11] and proved by L. M. Brégman in 1973.[12]

Theorem: Let A be an n × n (0,1)-matrix with $r_i$ ones in row i, 1 ≤ i ≤ n. Then

$\operatorname{perm} A \leq \prod_{i=1}^n (r_i)!^{1/r_i}.$

## Van der Waerden's conjecture

In 1926 Van der Waerden conjectured that the minimum permanent among all n × n doubly stochastic matrices is $n!/n^n$, achieved by the matrix for which all entries are equal to $1/n$.[13] Proofs of this conjecture were published in 1980 by B. Gyires[14] and in 1981 by G. P. Egorychev[15] and D. I. Falikman;[16] Egorychev's proof is an application of the Alexandrov–Fenchel inequality.[17] For this work, Egorychev and Falikman won the Fulkerson Prize in 1982.[18]
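A quick numerical check that the minimizing matrix, with every entry equal to $1/n$, has permanent $n!/n^n$ (helper name ours):

```python
from itertools import permutations
from math import prod, factorial, isclose

def perm(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# Every entry 1/n: each of the n! terms contributes (1/n)^n
for n in range(2, 6):
    J_over_n = [[1.0 / n] * n for _ in range(n)]
    assert isclose(perm(J_over_n), factorial(n) / n ** n)
print(factorial(4) / 4 ** 4)  # 0.09375
```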

## Computation

Computing permanents naïvely from the definition is computationally infeasible even for relatively small matrices. One of the fastest known algorithms is due to Herbert John Ryser (Ryser (1963, p. 27)). Ryser's method is based on an inclusion–exclusion formula that can be given[19] as follows: Let $A_k$ be obtained from A by deleting k columns, let $P(A_k)$ be the product of the row-sums of $A_k$, and let $\Sigma_k$ be the sum of the values of $P(A_k)$ over all possible $A_k$. Then

$\operatorname{perm}(A)=\sum_{k=0}^{n-1} (-1)^{k}\Sigma_k.$

It may be rewritten in terms of the matrix entries as follows:

$\operatorname{perm} (A) = (-1)^n \sum_{S\subseteq\{1,\dots,n\}} (-1)^{|S|} \prod_{i=1}^n \sum_{j\in S} a_{ij}.$
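The entrywise form can be implemented directly; a sketch (the function name `ryser_perm` is ours) that enumerates column subsets explicitly, giving O(2^n · n^2) time rather than the O(2^n · n) achievable with Gray-code ordering of the subsets:

```python
from itertools import combinations
from math import prod

def ryser_perm(A):
    """Ryser's inclusion-exclusion formula:
    perm(A) = (-1)^n * sum over subsets S of columns of
              (-1)^|S| * prod_i (sum of row i restricted to S)."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):       # the empty subset contributes 0
        for S in combinations(range(n), r):
            rowsums = prod(sum(A[i][j] for j in S) for i in range(n))
            total += (-1) ** (n - r) * rowsums
    return total

print(ryser_perm([[1, 2], [3, 4]]))  # 10
```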

The permanent is believed to be more difficult to compute than the determinant. While the determinant can be computed in polynomial time by Gaussian elimination, Gaussian elimination cannot be used to compute the permanent. Moreover, computing the permanent of a (0,1)-matrix is #P-complete. Thus, if the permanent can be computed in polynomial time by any method, then FP = #P, which is an even stronger statement than P = NP. When the entries of A are nonnegative, however, the permanent can be computed approximately in probabilistic polynomial time, up to an error of εM, where M is the value of the permanent and ε > 0 is arbitrary.[20]

## MacMahon's Master Theorem

Another way to view permanents is via multivariate generating functions. Let $A = (a_{ij})$ be a square matrix of order n. Consider the multivariate generating function:

$F(x_1,x_2,\dots,x_n) = \prod_{i=1}^n \left ( \sum_{j=1}^n a_{ij} x_j \right ) = \left ( \sum_{j=1}^n a_{1j} x_j \right ) \left ( \sum_{j=1}^n a_{2j} x_j \right ) \cdots \left ( \sum_{j=1}^n a_{nj} x_j \right ).$

The coefficient of $x_1 x_2 \dots x_n$ in $F(x_1,x_2,\dots,x_n)$ is perm(A).[21]
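The coefficient extraction can be sketched with a dictionary-based multivariate polynomial, multiplying the n linear forms one at a time (the representation and function name are ours):

```python
from collections import defaultdict

def perm_via_generating_function(A):
    """Expand F = prod_i (sum_j a_ij x_j) and read off the coefficient of
    x_1 x_2 ... x_n.  Monomials are stored as tuples of exponents."""
    n = len(A)
    poly = {(0,) * n: 1}            # the constant polynomial 1
    for i in range(n):              # multiply by the i-th linear form
        new = defaultdict(int)
        for mono, c in poly.items():
            for j in range(n):
                if A[i][j]:
                    e = list(mono)
                    e[j] += 1
                    new[tuple(e)] += c * A[i][j]
        poly = dict(new)
    return poly.get((1,) * n, 0)

print(perm_via_generating_function([[1, 2], [3, 4]]))  # 10
```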

As a generalization, for any sequence of n non-negative integers $s_1,s_2,\dots,s_n$, define:

$\operatorname{perm}^{(s_1,s_2,\dots,s_n)}(A) := \text{ coefficient of }x_1^{s_1} x_2^{s_2} \cdots x_n^{s_n} \text{ in }\left ( \sum_{j=1}^n a_{1j} x_j \right )^{s_1} \left ( \sum_{j=1}^n a_{2j} x_j \right )^{s_2} \cdots \left ( \sum_{j=1}^n a_{nj} x_j \right )^{s_n}.$

MacMahon's Master Theorem relating permanents and determinants is:[22]

$\operatorname{perm}^{(s_1,s_2,\dots,s_n)}(A) = \text{ coefficient of }x_1^{s_1} x_2^{s_2} \cdots x_n^{s_n} \text{ in } \frac{1}{\operatorname{Det}(I - XA)},$

where I is the order n identity matrix and X is the diagonal matrix with diagonal $[x_1,x_2,\dots,x_n].$

## Permanents of rectangular matrices

The permanent function can be generalized to apply to non-square matrices. Indeed, several authors make this the definition of a permanent and consider the restriction to square matrices a special case.[23] Specifically, for an m × n matrix $A = (a_{ij})$ with m ≤ n, define

$\operatorname{perm} (A) = \sum_{\sigma \in \operatorname{P}(n,m)} a_{1 \sigma(1)} a_{2 \sigma(2)} \ldots a_{m \sigma(m)}$

where P(n,m) is the set of all m-permutations of the n-set {1,2,...,n}.[24]
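The rectangular definition maps directly onto r-length permutations of the column indices; a sketch (the function name `perm_rect` is ours):

```python
from itertools import permutations
from math import prod

def perm_rect(A):
    """Permanent of an m x n matrix with m <= n: sum over all
    m-permutations sigma of {1..n} of the product a_{1 sigma(1)} ... a_{m sigma(m)}."""
    m, n = len(A), len(A[0])
    return sum(prod(A[i][s[i]] for i in range(m))
               for s in permutations(range(n), m))

print(perm_rect([[1, 1, 0],
                 [0, 1, 1]]))  # 3
```

For a square matrix this reduces to the usual permanent, since the m-permutations of an n-set with m = n are exactly the permutations.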

Ryser's computational result for permanents also generalizes. If A is an m × n matrix with m ≤ n, let $A_k$ be obtained from A by deleting k columns, let $P(A_k)$ be the product of the row-sums of $A_k$, and let $\sigma_k$ be the sum of the values of $P(A_k)$ over all possible $A_k$. Then

$\operatorname{perm}(A)=\sum_{k=0}^{m-1} (-1)^{k}\binom{n-m+k}{k}\sigma_{n-m+k}.$[25]

### Systems of distinct representatives

The generalization of the definition of a permanent to non-square matrices allows the concept to be used in a more natural way in some applications. For instance:

Let $S_1, S_2, \dots, S_m$ be subsets (not necessarily distinct) of an n-set with m ≤ n. The incidence matrix of this collection of subsets is an m × n (0,1)-matrix A. The number of systems of distinct representatives (SDRs) of this collection is perm(A).[26]
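A small sketch of this application (the helper name and example sets are ours): for the subsets {1, 2} and {2, 3} of {1, 2, 3}, the SDRs are (1, 2), (1, 3), and (2, 3), matching the permanent of the 2 × 3 incidence matrix.

```python
from itertools import permutations
from math import prod

def count_sdrs(sets, n):
    """Number of systems of distinct representatives of a list of subsets
    of {1..n}, via the permanent of the m x n incidence matrix."""
    A = [[1 if j + 1 in S else 0 for j in range(n)] for S in sets]
    m = len(A)
    return sum(prod(A[i][s[i]] for i in range(m))
               for s in permutations(range(n), m))

print(count_sdrs([{1, 2}, {2, 3}], 3))  # 3
```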

## Notes

1. ^ Cauchy, A. L. (1815), "Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et de signes contraires par suite des transpositions opérées entre les variables qu’elles renferment.", Journal de l'École Polytechnique 10: 91–169
2. ^ van Lint & Wilson 2001, p. 108
3. ^ Ryser 1963, pp. 25–26
4. ^ Percus 1971, p. 2
5. ^ Ryser 1963, p. 26
6. ^ Percus 1971, p. 12
7. ^ Dexter Kozen. The Design and Analysis of Algorithms. Springer-Verlag, New York, 1991. ISBN 978-0-387-97687-7; pp. 141–142
8. ^ a b Ryser 1963, p. 124
9. ^ Ryser 1963, p. 125
10. ^ Percus 1971, p. 12
11. ^ Minc, H. (1967), "An inequality for permanents of (0,1) matrices", Journal of Combinatorial Theory 2: 321–326
12. ^ van Lint & Wilson 2001, p. 101
13. ^ van der Waerden, B. L. (1926), "Aufgabe 45", Jber. Deutsch. Math.-Verein. 35: 117.
14. ^ Gyires, B. (1980), "The common source of several inequalities concerning doubly stochastic matrices", Publicationes Mathematicae Institutum Mathematicum Universitatis Debreceniensis 27 (3-4): 291–304, MR 604006.
15. ^ Egoryčev, G. P. (1980), Reshenie problemy van-der-Vardena dlya permanentov (in Russian), Krasnoyarsk: Akad. Nauk SSSR Sibirsk. Otdel. Inst. Fiz., p. 12, MR 602332. Egorychev, G. P. (1981), "Proof of the van der Waerden conjecture for permanents", Akademiya Nauk SSSR (in Russian) 22 (6): 65–71, 225, MR 638007. Egorychev, G. P. (1981), "The solution of van der Waerden's problem for permanents", Advances in Mathematics 42 (3): 299–305, doi:10.1016/0001-8708(81)90044-X, MR 642395.
16. ^ Falikman, D. I. (1981), "Proof of the van der Waerden conjecture on the permanent of a doubly stochastic matrix", Akademiya Nauk Soyuza SSR (in Russian) 29 (6): 931–938, 957, MR 625097.
17. ^ Brualdi (2006) p.487
18. ^ Fulkerson Prize, Mathematical Optimization Society, retrieved 2012-08-19.
19. ^
20. ^ Jerrum, M.; Sinclair, A.; Vigoda, E. (2004), "A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries", Journal of the ACM 51: 671–697, doi:10.1145/1008731.1008738
21. ^ Percus 1971, p. 14
22. ^ Percus 1971, p. 17
23. ^ In particular, Minc (1984) and Ryser (1963) do this.
24. ^ Ryser 1963, p. 25
25. ^ Ryser 1963, p. 26
26. ^ Ryser 1963, p. 54

## References

• Brualdi, Richard A. (2006). Combinatorial matrix classes. Encyclopedia of Mathematics and Its Applications 108. Cambridge: Cambridge University Press. ISBN 0-521-86565-4. Zbl 1106.05001.
• Minc, Henryk (1978). Permanents. Encyclopedia of Mathematics and its Applications 6. With a foreword by Marvin Marcus. Reading, MA: Addison–Wesley. ISSN 0953-4806. OCLC 3980645. Zbl 0401.15005.
• Muir, Thomas; William H. Metzler. (1960) [1882]. A Treatise on the Theory of Determinants. New York: Dover. OCLC 535903.
• Percus, J.K. (1971), Combinatorial Methods, Applied Mathematical Sciences #4, New York: Springer-Verlag, ISBN 0-387-90027-6
• Ryser, Herbert John (1963), Combinatorial Mathematics, The Carus Mathematical Monographs #14, The Mathematical Association of America
• van Lint, J.H.; Wilson, R.M. (2001), A Course in Combinatorics, Cambridge University Press, ISBN 0521422604