Commutation matrix

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Bengski68 (talk | contribs) at 19:48, 15 January 2022 (Separated content into a Properties section; added information relating to quantum information theory).

In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix K(m,n) is the nm × mn matrix which, for any m × n matrix A, transforms vec(A) into vec(A^T):

K(m,n) vec(A) = vec(A^T).

Here vec(A) is the mn × 1 column vector obtained by stacking the columns of A on top of one another:

vec(A) = (A_{1,1}, …, A_{m,1}, A_{1,2}, …, A_{m,2}, …, A_{1,n}, …, A_{m,n})^T,

where A = [A_{i,j}].

In the context of quantum information theory, the commutation matrix is sometimes referred to as the swap matrix or swap operator.[1]
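As a concrete illustration of the definition, the following Python sketch (using NumPy; the function name and the reshape-based construction are this example's own, not a standard API) builds K(m,n) by permuting the rows of an identity matrix and checks the defining property on a rectangular matrix:

```python
import numpy as np

def commutation_matrix(m, n):
    """Build K(m,n), the mn x mn permutation matrix satisfying
    K(m,n) @ vec(A) = vec(A.T) for any m x n matrix A,
    where vec stacks the columns of a matrix."""
    # indices[i, j] is the position of A[i, j] within vec(A).
    indices = np.arange(m * n).reshape(m, n, order="F")
    # Reading those positions off in vec(A.T) order gives the permutation.
    perm = indices.T.reshape(-1, order="F")
    # Row k of K picks out the vec(A) entry that lands at position k of vec(A.T).
    return np.eye(m * n)[perm, :]

A = np.arange(6).reshape(2, 3)             # a 2 x 3 example matrix
vecA = A.reshape(-1, order="F")            # column-stacking vectorization
K = commutation_matrix(2, 3)
print(np.array_equal(K @ vecA, A.T.reshape(-1, order="F")))  # True
```

Here `order="F"` makes `reshape` read and write entries column by column, which is exactly the column-stacking vec operation used in the definition.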

Properties

  • Replacing A with A^T in the definition of the commutation matrix shows that K(n,m) K(m,n) vec(A) = vec(A) for every A, so K(n,m) = (K(m,n))^{-1}; since K(m,n) is a permutation matrix, its inverse equals its transpose, and hence K(m,n) = (K(n,m))^T. Therefore, in the special case m = n, the commutation matrix is an involution and symmetric.
  • The main use of the commutation matrix, and the source of its name, is to commute the Kronecker product: for every m × n matrix A and every r × q matrix B,

K(r,m) (A ⊗ B) K(n,q) = B ⊗ A.

This property is often used in developing the higher order statistics of Wishart covariance matrices.[2]

  • The case n = q = 1 of the above equation states that for any column vectors v, w of sizes m, r respectively,

K(r,m) (v ⊗ w) = w ⊗ v.

This property is the reason that this matrix is referred to as the "swap operator" in the context of quantum information theory.

  • An explicit form for the commutation matrix is as follows: if e_{r,j} denotes the j-th canonical vector of dimension r (i.e. the vector with 1 in the j-th coordinate and 0 elsewhere) then

K(m,n) = Σ_{i=1}^{m} Σ_{j=1}^{n} (e_{m,i} e_{n,j}^T) ⊗ (e_{n,j} e_{m,i}^T).
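The properties above can be checked numerically. The sketch below (Python with NumPy; the helper and function names are illustrative, not a standard API) builds K(m,n) from the explicit double-sum form and verifies both the Kronecker-commuting identity and the vector swap property:

```python
import numpy as np

def e(r, j):
    """j-th canonical basis vector of dimension r (0-based j), as a column."""
    v = np.zeros((r, 1))
    v[j, 0] = 1.0
    return v

def commutation_matrix_explicit(m, n):
    """K(m,n) from the explicit form:
    sum over i, j of (e_{m,i} e_{n,j}^T) kron (e_{n,j} e_{m,i}^T)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K += np.kron(e(m, i) @ e(n, j).T, e(n, j) @ e(m, i).T)
    return K

# Check K(r,m) (A kron B) K(n,q) = B kron A on random matrices.
m, n, r, q = 2, 3, 4, 5
A = np.random.randn(m, n)
B = np.random.randn(r, q)
lhs = commutation_matrix_explicit(r, m) @ np.kron(A, B) @ commutation_matrix_explicit(n, q)
print(np.allclose(lhs, np.kron(B, A)))  # True

# Check the swap property K(r,m) (v kron w) = w kron v on column vectors.
v, w = np.random.randn(m, 1), np.random.randn(r, 1)
print(np.allclose(commutation_matrix_explicit(r, m) @ np.kron(v, w), np.kron(w, v)))  # True
```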

Pseudocode

For both square and rectangular matrices of m rows and n columns, the commutation matrix can be generated by the pseudocode below, which is similar to a construction posted on Stack Exchange[3]; it can be verified directly against the definition, though it is presented there without proof.

K = zeros(m*n, m*n)
for i = 1 to m
  for j = 1 to n
    % A(i,j) is entry i + m*(j-1) of vec(A) and entry j + n*(i-1) of vec(A')
    K(j + n*(i - 1), i + m*(j - 1)) = 1
  end
end
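The same entry-by-entry construction can be written in 0-based Python with NumPy (a sketch; the function name is this example's own) and verified directly against the definition on a rectangular matrix:

```python
import numpy as np

def commutation_matrix_entrywise(m, n):
    """Entry-by-entry construction of K(m,n): the entry A[i, j] sits at
    position i + m*j of vec(A) and at position j + n*i of vec(A.T),
    so K(m,n) has a 1 at (j + n*i, i + m*j) for every i, j (0-based)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[j + n * i, i + m * j] = 1
    return K

A = np.arange(12).reshape(3, 4)            # a 3 x 4 rectangular test matrix
vec = lambda X: X.reshape(-1, order="F")   # column-stacking vectorization
print(np.array_equal(commutation_matrix_entrywise(3, 4) @ vec(A), vec(A.T)))  # True
```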


Example

Let M be a 2×2 square matrix:

M = [ a  b
      c  d ].

Its two possible vectorizations are

vec(M) = (a, c, b, d)^T  and  vec(M^T) = (a, b, c, d)^T.

K(2,2) is the 4×4 square matrix that transforms vec(M) into vec(M^T); the pseudocode above yields

K(2,2) = [ 1  0  0  0
           0  0  1  0
           0  1  0  0
           0  0  0  1 ],

giving the expected result

K(2,2) vec(M) = (a, b, c, d)^T = vec(M^T).
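In Python with NumPy, the 2×2 case can be reproduced as follows (the numeric matrix M is an illustrative choice):

```python
import numpy as np

# K(2,2): a 4 x 4 permutation matrix that swaps the two middle
# entries of vec(M) = (M11, M21, M12, M22)^T.
K22 = np.array([[1, 0, 0, 0],
                [0, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 1]])

M = np.array([[1, 2],
              [3, 4]])            # a concrete 2 x 2 matrix
vecM = M.reshape(-1, order="F")   # column-stacking: (1, 3, 2, 4)
print(K22 @ vecM)                 # [1 2 3 4], i.e. vec(M.T)
```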

References

  1. ^ Watrous, John (2018). The Theory of Quantum Information. Cambridge University Press. p. 94.
  2. ^ von Rosen, Dietrich (1988). "Moments for the Inverted Wishart Distribution". Scand. J. Stat. 15: 97–109.
  3. ^ "Kronecker product and the commutation matrix". Stack Exchange. 2013.
  • Jan R. Magnus and Heinz Neudecker (1988), Matrix Differential Calculus with Applications in Statistics and Econometrics, Wiley.