# Commuting matrices

In linear algebra, two matrices ${\displaystyle A}$ and ${\displaystyle B}$ are said to commute if ${\displaystyle AB=BA}$, or equivalently, if their commutator ${\displaystyle [A,B]=AB-BA}$ is zero. A set of matrices ${\displaystyle A_{1},\ldots ,A_{k}}$ is said to commute if its elements commute pairwise, meaning that every pair of matrices in the set commutes.
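These definitions are easy to check numerically. The following sketch (using NumPy; the matrices are arbitrary illustrative choices, not from the article) computes the commutator of a non-commuting pair, and shows that a matrix always commutes with its own powers:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

# [A, B] = AB - BA; here it is nonzero, so A and B do not commute
commutator = A @ B - B @ A
A_B_commute = np.array_equal(A @ B, B @ A)

# Any matrix trivially commutes with its own powers, e.g. A^2
A2 = A @ A
A_A2_commute = np.array_equal(A @ A2, A2 @ A)
```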

## Characterizations and properties

• Commuting matrices preserve each other's eigenspaces.[1] As a consequence, commuting matrices over an algebraically closed field are simultaneously triangularizable, that is, there are bases over which they are both upper triangular. In other words, if ${\displaystyle A_{1},\ldots ,A_{k}}$ commute, there exists a similarity matrix ${\displaystyle P}$ such that ${\displaystyle P^{-1}A_{i}P}$ is upper triangular for all ${\displaystyle i\in \{1,\ldots ,k\}}$. The converse is not necessarily true, as the following counterexample shows:

${\displaystyle {\begin{bmatrix}1&2\\0&3\end{bmatrix}}{\begin{bmatrix}1&1\\0&1\end{bmatrix}}={\begin{bmatrix}1&3\\0&3\end{bmatrix}}\neq {\begin{bmatrix}1&5\\0&3\end{bmatrix}}={\begin{bmatrix}1&1\\0&1\end{bmatrix}}{\begin{bmatrix}1&2\\0&3\end{bmatrix}}}$

However, simultaneous triangularizability does follow from a condition weaker than commutativity: if the square of the commutator of two matrices is zero, i.e. ${\displaystyle [A,B]^{2}=0}$, then ${\displaystyle A}$ and ${\displaystyle B}$ are simultaneously triangularizable.[2]
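The counterexample above can be reproduced numerically; this NumPy sketch shows two matrices that are each already upper triangular (hence trivially simultaneously triangularizable) yet do not commute:

```python
import numpy as np

T1 = np.array([[1, 2], [0, 3]])
T2 = np.array([[1, 1], [0, 1]])

# Both factors are upper triangular, yet the two products differ:
prod_12 = T1 @ T2   # [[1, 3], [0, 3]]
prod_21 = T2 @ T1   # [[1, 5], [0, 3]]
```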
• If matrices ${\displaystyle A}$ and ${\displaystyle B}$ are simultaneously diagonalizable, that is, there exists a similarity matrix ${\displaystyle P}$ such that ${\displaystyle P^{-1}AP}$ and ${\displaystyle P^{-1}BP}$ are both diagonal, then ${\displaystyle A}$ and ${\displaystyle B}$ commute. The converse is not necessarily true, since one of the matrices may fail to be diagonalizable, e.g.:

${\displaystyle {\begin{bmatrix}0&1\\0&0\end{bmatrix}}{\begin{bmatrix}1&0\\0&1\end{bmatrix}}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\end{bmatrix}}{\text{ but }}{\begin{bmatrix}0&1\\0&0\end{bmatrix}}{\text{ is not diagonalizable.}}}$

If, however, both matrices are diagonalizable, then they can be simultaneously diagonalized.
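For two commuting symmetric (hence diagonalizable) matrices, an orthonormal eigenbasis of one with distinct eigenvalues also diagonalizes the other. A minimal NumPy sketch with illustrative matrices (in the degenerate-eigenvalue case a common eigenbasis still exists, but finding it requires more care than shown here):

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])
B = np.array([[3., 1.], [1., 3.]])
assert np.allclose(A @ B, B @ A)   # A and B commute

# A has distinct eigenvalues, so its orthonormal eigenbasis P
# simultaneously diagonalizes B as well.
_, P = np.linalg.eigh(A)
DA = P.T @ A @ P   # diagonal
DB = P.T @ B @ P   # also (numerically) diagonal
```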
• If one of the matrices has the property that its minimal polynomial coincides with its characteristic polynomial (i.e., it has the maximal degree), which happens in particular whenever the characteristic polynomial has only simple roots, then the other matrix can be written as a polynomial in the first.
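To illustrate, take a matrix ${\displaystyle A}$ with distinct eigenvalues, so that its minimal and characteristic polynomials coincide; any matrix commuting with it is then a polynomial in ${\displaystyle A}$. A NumPy sketch with hand-picked illustrative matrices:

```python
import numpy as np

A = np.array([[1., 1.], [0., 2.]])   # distinct eigenvalues 1 and 2
B = np.array([[5., 3.], [0., 8.]])   # chosen so that AB = BA

assert np.allclose(A @ B, B @ A)

# B is a polynomial in A, namely p(A) = 2I + 3A
p_of_A = 2 * np.eye(2) + 3 * A
```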
• As a direct consequence of simultaneous triangularizability, the eigenvalues of two commuting complex matrices A, B with their algebraic multiplicities (the multisets of roots of their characteristic polynomials) can be matched up as ${\displaystyle \alpha _{i}\leftrightarrow \beta _{i}}$ in such a way that the multiset of eigenvalues of any polynomial ${\displaystyle P(A,B)}$ in the two matrices is the multiset of the values ${\displaystyle P(\alpha _{i},\beta _{i})}$. This theorem is due to Frobenius.[3]
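Frobenius's theorem can be illustrated numerically. In this NumPy sketch the two matrices are built to commute by sharing an eigenbasis (an illustrative construction, not the general case); with the matching ${\displaystyle 1\leftrightarrow 10}$, ${\displaystyle 2\leftrightarrow 20}$, the eigenvalues of ${\displaystyle A+B}$ are the sums and those of ${\displaystyle AB}$ are the products:

```python
import numpy as np

# Commuting A and B built from a shared eigenbasis P:
# A has eigenvalues 1, 2 and B has eigenvalues 10, 20.
P = np.array([[1., 1.], [0., 1.]])
Pinv = np.linalg.inv(P)
A = P @ np.diag([1., 2.]) @ Pinv
B = P @ np.diag([10., 20.]) @ Pinv
assert np.allclose(A @ B, B @ A)

# Eigenvalues of A + B are 1+10 and 2+20; of AB, 1*10 and 2*20.
sum_eigs = np.sort(np.linalg.eigvals(A + B).real)
prod_eigs = np.sort(np.linalg.eigvals(A @ B).real)
```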
• Two Hermitian matrices commute if their eigenspaces coincide. In particular, two Hermitian matrices without multiple eigenvalues commute if they share the same set of eigenvectors. This follows by considering the eigenvalue decompositions of both matrices. Let ${\displaystyle A}$ and ${\displaystyle B}$ be two Hermitian matrices. ${\displaystyle A}$ and ${\displaystyle B}$ have common eigenspaces when they can be written as ${\displaystyle A=U\Lambda _{1}U^{\dagger }}$ and ${\displaystyle B=U\Lambda _{2}U^{\dagger }}$. It then follows that
${\displaystyle AB=U\Lambda _{1}U^{\dagger }U\Lambda _{2}U^{\dagger }=U\Lambda _{1}\Lambda _{2}U^{\dagger }=U\Lambda _{2}\Lambda _{1}U^{\dagger }=U\Lambda _{2}U^{\dagger }U\Lambda _{1}U^{\dagger }=BA.}$
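This computation can be reproduced directly. The NumPy sketch below constructs two Hermitian matrices from a shared unitary eigenbasis ${\displaystyle U}$ (the particular ${\displaystyle U}$ and eigenvalues are illustrative choices) and checks that they commute:

```python
import numpy as np

# A shared unitary eigenbasis U (columns are common eigenvectors)
U = np.array([[1., 1.], [1j, -1j]]) / np.sqrt(2)
assert np.allclose(U @ U.conj().T, np.eye(2))   # U is unitary

Lam1 = np.diag([1., 2.])   # real eigenvalues of A
Lam2 = np.diag([5., 7.])   # real eigenvalues of B
A = U @ Lam1 @ U.conj().T
B = U @ Lam2 @ U.conj().T

# Both matrices are Hermitian, and they commute
hermitian = np.allclose(A, A.conj().T) and np.allclose(B, B.conj().T)
commute = np.allclose(A @ B, B @ A)
```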
• The property of two matrices commuting is not transitive: A matrix ${\displaystyle A}$ may commute with both ${\displaystyle B}$ and ${\displaystyle C}$, and still ${\displaystyle B}$ and ${\displaystyle C}$ may fail to commute with each other. As an example, the unit matrix commutes with all matrices, but not all matrices commute with one another. If the set of matrices considered is restricted to Hermitian matrices without multiple eigenvalues, then commutativity is transitive, as a consequence of the characterization in terms of eigenvectors.
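The failure of transitivity is easy to exhibit; in this NumPy sketch the unit matrix commutes with two matrices that do not commute with each other:

```python
import numpy as np

I = np.eye(2)
B = np.array([[0., 1.], [0., 0.]])
C = np.array([[0., 0.], [1., 0.]])

# The unit matrix commutes with both B and C ...
assert np.array_equal(I @ B, B @ I)
assert np.array_equal(I @ C, C @ I)

# ... yet B and C do not commute with each other
BC = B @ C   # [[1, 0], [0, 0]]
CB = C @ B   # [[0, 0], [0, 1]]
```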

## Examples

• The unit matrix commutes with all matrices.
• Every diagonal matrix commutes with all other diagonal matrices.[4]
• Scalar matrices commute with every other matrix.[5]
• Jordan blocks commute with upper triangular matrices that are constant along each diagonal, i.e. upper triangular Toeplitz matrices.
• If the product of two symmetric matrices is symmetric, then they must commute: since ${\displaystyle (AB)^{\mathsf {T}}=B^{\mathsf {T}}A^{\mathsf {T}}=BA}$, the product ${\displaystyle AB}$ is symmetric exactly when ${\displaystyle AB=BA}$.
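The last example can be illustrated numerically. This NumPy sketch (with hand-picked illustrative matrices) shows a symmetric pair whose product is symmetric and which therefore commutes, and a symmetric pair whose product is not symmetric and which does not:

```python
import numpy as np

# Two symmetric matrices whose product is symmetric: they commute.
A = np.array([[2., 1.], [1., 2.]])
B = np.array([[3., 1.], [1., 3.]])
AB = A @ B                       # [[7, 5], [5, 7]]
assert np.allclose(AB, AB.T)     # the product is symmetric
assert np.allclose(AB, B @ A)    # hence A and B commute

# Two symmetric matrices whose product is NOT symmetric: they don't.
C = np.diag([1., 2.])
D = np.array([[0., 1.], [1., 0.]])
CD = C @ D                       # [[0, 1], [2, 0]]
sym = np.allclose(CD, CD.T)
commute = np.allclose(CD, D @ C)
```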

## History

The notion of commuting matrices was introduced by Cayley in his memoir on the theory of matrices, which also provided the first axiomatization of matrices. The first significant result proved about them was the above theorem of Frobenius in 1878.[6]

## References

1. ^ Horn, Roger A.; Johnson, Charles R. (2012). Matrix Analysis. Cambridge University Press. p. 70. ISBN 9780521839402.
2. ^ Horn, Roger A.; Johnson, Charles R. (2012). Matrix Analysis. Cambridge University Press. p. 127. ISBN 9780521839402.
3. ^ Frobenius, G. (1877). "Ueber lineare Substitutionen und bilineare Formen". Journal für die reine und angewandte Mathematik. 84: 1-63.
4. ^ "Do diagonal matrices always commute?". Stack Exchange. Retrieved August 4, 2018.
5. ^ "Scalar matrices". Wikipedia. Retrieved August 4, 2018.
6. ^ Drazin, M. (1951), "Some Generalizations of Matrix Commutativity", Proceedings of the London Mathematical Society, 3, 1 (1): 222–231, doi:10.1112/plms/s3-1.1.222