For the Kronecker product of representations of symmetric groups, see Kronecker coefficient.
In mathematics, the Kronecker product, denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor product with respect to a standard choice of basis. The Kronecker product should not be confused with the usual matrix multiplication, which is an entirely different operation.
The Kronecker product is named after Leopold Kronecker, even though there is little evidence that he was the first to define and use it. Indeed, in the past the Kronecker product was sometimes called the Zehfuss matrix, after Johann Georg Zehfuss who in 1858 described the matrix operation we now know as the Kronecker product.[1]
Definition
If A is an m × n matrix and B is a p × q matrix, then the Kronecker product A ⊗ B is the mp × nq block matrix whose (i, j)-th block is aijB:

A ⊗ B = ⎡ a11B ⋯ a1nB ⎤
        ⎢  ⋮    ⋱   ⋮  ⎥
        ⎣ am1B ⋯ amnB ⎦

more explicitly, expanding each block aijB into its p × q entries shows that every entry of A ⊗ B is a product aij bkℓ of an entry of A with an entry of B. More compactly, the entry of A ⊗ B in row p(i − 1) + k and column q(j − 1) + ℓ is aij bkℓ.
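This block structure is easy to verify numerically. The following is a minimal sketch using NumPy's np.kron; the matrices A and B are arbitrary illustrative examples:

```python
import numpy as np

# Illustrative inputs (not from the article's own example).
A = np.array([[1, 2],
              [3, 4]])          # 2 x 2
B = np.array([[0, 5],
              [6, 7]])          # 2 x 2

K = np.kron(A, B)               # the 4 x 4 block matrix A ⊗ B

# Block (i, j) of A ⊗ B equals a_ij * B:
assert np.array_equal(K[:2, :2], A[0, 0] * B)
assert np.array_equal(K[:2, 2:], A[0, 1] * B)

# Entry-wise (0-indexed): K[i*p + k, j*q + l] == A[i, j] * B[k, l]
m, n = A.shape
p, q = B.shape
for i in range(m):
    for j in range(n):
        for k in range(p):
            for l in range(q):
                assert K[i*p + k, j*q + l] == A[i, j] * B[k, l]
```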
If A and B represent linear transformations V1 → W1 and V2 → W2, respectively, then A ⊗ B represents the tensor product of the two maps, V1 ⊗ V2 → W1 ⊗ W2.
Non-commutative:
In general, A ⊗ B and B ⊗ A are different matrices. However, A ⊗ B and B ⊗ A are permutation equivalent, meaning that there exist permutation matrices P and Q (so-called commutation matrices) such that[2]

B ⊗ A = P (A ⊗ B) Q.
If A and B are square matrices, then A ⊗ B and B ⊗ A are even permutation similar, meaning that we can take P = Qᵀ.
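The permutation equivalence can be checked numerically: in NumPy the required row and column reorderings (perfect shuffles) can be expressed with reshape and transpose. A sketch, with arbitrary random matrices as illustrative inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 9, size=(2, 3))   # m x n
B = rng.integers(0, 9, size=(4, 2))   # p x q
m, n = A.shape
p, q = B.shape

K = np.kron(A, B)                     # mp x nq

# Reordering rows by the shuffle (i, k) -> (k, i) and columns by
# (j, l) -> (l, j) turns A ⊗ B into B ⊗ A; these reorderings are
# exactly left/right multiplication by permutation matrices P and Q.
shuffled = K.reshape(m, p, n, q).transpose(1, 0, 3, 2).reshape(p*m, q*n)
assert np.array_equal(shuffled, np.kron(B, A))
```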
The mixed-product property and the inverse of a Kronecker product:
If A, B, C and D are matrices of such size that one can form the matrix products AC and BD, then

(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
This is called the mixed-product property, because it mixes the ordinary matrix product and the Kronecker product. It follows that A ⊗ B is invertible if and only if both A and B are invertible, in which case the inverse is given by

(A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹.
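Both identities can be verified numerically. A sketch using NumPy, with random matrices as illustrative inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3)); C = rng.standard_normal((3, 2))
B = rng.standard_normal((4, 2)); D = rng.standard_normal((2, 4))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)

# Inverse: (A ⊗ B)^(-1) = A^(-1) ⊗ B^(-1) for invertible square A, B.
# Adding a multiple of the identity keeps the examples well-conditioned.
A2 = rng.standard_normal((3, 3)) + 3 * np.eye(3)
B2 = rng.standard_normal((2, 2)) + 3 * np.eye(2)
assert np.allclose(np.linalg.inv(np.kron(A2, B2)),
                   np.kron(np.linalg.inv(A2), np.linalg.inv(B2)))
```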
In the language of category theory, the mixed-product property of the Kronecker product (and of the more general tensor product) shows that the category MatF of matrices over a field F is in fact a monoidal category: its objects are the natural numbers n, its morphisms n → m are the n × m matrices with entries in F, composition is given by matrix multiplication, the identity arrows are simply the n × n identity matrices In, and the tensor product is given by the Kronecker product.
MatF is a concrete skeleton category for the equivalent category FinVectF of finite-dimensional vector spaces over F, whose objects are such finite-dimensional vector spaces V, whose arrows are F-linear maps L : V → W, and whose identity arrows are the identity maps of the spaces. The equivalence of categories amounts to simultaneously choosing a basis in every finite-dimensional vector space V over F; a matrix's entries then represent a linear map with respect to the chosen bases, and likewise the Kronecker product is the representation of the tensor product in the chosen bases.
Determinant:
Let A be an n × n matrix and let B be an m × m matrix. Then

|A ⊗ B| = |A|ᵐ |B|ⁿ.
The exponent in |A| is the order of B and the exponent in |B| is the order of A.
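A quick numerical check of the determinant formula, assuming NumPy and random example matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
A = rng.standard_normal((n, n))   # n x n
B = rng.standard_normal((m, m))   # m x m

# det(A ⊗ B) = det(A)^m * det(B)^n  (each exponent is the other factor's order)
lhs = np.linalg.det(np.kron(A, B))
rhs = np.linalg.det(A)**m * np.linalg.det(B)**n
assert np.allclose(lhs, rhs)
```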
Kronecker sum and exponentiation
If A is n × n, B is m × m and Ik denotes the k × k identity matrix then we can define what is sometimes called the Kronecker sum, ⊕, by

A ⊕ B = A ⊗ Im + In ⊗ B.
Note that this is different from the direct sum of two matrices. This operation is related to the tensor product on Lie algebras.
We have the following formula for the matrix exponential, which is useful in some numerical evaluations:[4]

exp(A ⊕ B) = exp(A) ⊗ exp(B).
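A sketch of the Kronecker sum and of the exponential identity, assuming NumPy and SciPy (scipy.linalg.expm computes the matrix exponential); the matrices are arbitrary random examples:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(3)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

# Kronecker sum: A ⊕ B = A ⊗ I_m + I_n ⊗ B  (an nm x nm matrix)
ksum = np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

# exp(A ⊕ B) = exp(A) ⊗ exp(B); the two summands commute, which is
# what makes the exponential factor this way.
assert np.allclose(expm(ksum), np.kron(expm(A), expm(B)))
```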
Kronecker sums appear naturally in physics when considering ensembles of non-interacting systems.[citation needed] Let Hi be the Hamiltonian of the ith such system. Then the total Hamiltonian of the ensemble is the Kronecker sum of the individual Hamiltonians,

HTot = H1 ⊕ H2 ⊕ ⋯.
Suppose that A and B are square matrices of size n and m respectively. Let λ1, ..., λn be the eigenvalues of A and μ1, ..., μm be those of B (listed according to multiplicity). Then the eigenvalues of A ⊗ B are

λi μj,  i = 1, ..., n,  j = 1, ..., m.
It follows that the trace and determinant of a Kronecker product are given by

tr(A ⊗ B) = tr(A) tr(B)  and  det(A ⊗ B) = (det A)ᵐ (det B)ⁿ.
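A numerical check of the spectrum statement and its consequences, assuming NumPy; the matrices are arbitrary random examples:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

lam = np.linalg.eigvals(A)   # eigenvalues of A
mu = np.linalg.eigvals(B)    # eigenvalues of B
eig = np.linalg.eigvals(np.kron(A, B))

# Every product lambda_i * mu_j is an eigenvalue of A ⊗ B:
for z in np.outer(lam, mu).ravel():
    assert np.min(np.abs(eig - z)) < 1e-8

# Trace is multiplicative; the determinant carries the swapped orders:
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A)**m * np.linalg.det(B)**n)
```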
The Kronecker product of matrices corresponds to the abstract tensor product of linear maps. Specifically, if the vector spaces V, W, X, and Y have bases {v1, ..., vm}, {w1, ..., wn}, {x1, ..., xd}, and {y1, ..., ye}, respectively, and if the matrices A and B represent the linear transformations S : V → X and T : W → Y, respectively, in the appropriate bases, then the matrix A ⊗ B represents the tensor product of the two maps, S ⊗ T : V ⊗ W → X ⊗ Y, with respect to the basis {v1 ⊗ w1, v1 ⊗ w2, ..., v2 ⊗ w1, ..., vm ⊗ wn} of V ⊗ W and the similarly defined basis of X ⊗ Y, with the property that (A ⊗ B)(vi ⊗ wj) = (Avi) ⊗ (Bwj), where i and j are integers in the proper range.[5]
When V and W are Lie algebras, and S : V → V and T : W → W are Lie algebra homomorphisms, the Kronecker sum of A and B represents the induced Lie algebra homomorphism V ⊗ W → V ⊗ W.
The Kronecker product can be used to get a convenient representation for some matrix equations. Consider for instance the equation AXB = C, where A, B and C are given matrices and the matrix X is the unknown. We can rewrite this equation as

(Bᵀ ⊗ A) vec(X) = vec(C).
Here, vec(X) denotes the vectorization of the matrix X formed by stacking the columns of X into a single column vector.
It now follows from the properties of the Kronecker product that the equation AXB = C has a unique solution if and only if A and B are nonsingular (Horn & Johnson 1991, Lemma 4.3.1).
If X is row-ordered into the column vector x then AXB can also be written as (Jain 1989, 2.8 Block Matrices and Kronecker Products) (A ⊗ Bᵀ)x.
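Both vectorization identities, and the resulting way of solving AXB = C, can be sketched with NumPy. Here vec is column stacking, implemented with a Fortran-order reshape; the matrices are arbitrary random examples:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((3, 4))
C = A @ X @ B

# Column-stacking vec: vec(AXB) = (B^T ⊗ A) vec(X)
vec = lambda M: M.reshape(-1, order='F')   # stack the columns of M
assert np.allclose(np.kron(B.T, A) @ vec(X), vec(C))

# Solving AXB = C for X via the Kronecker form (A, B nonsingular):
x = np.linalg.solve(np.kron(B.T, A), vec(C))
assert np.allclose(x.reshape(X.shape, order='F'), X)

# Row-ordered variant: stacking rows instead gives (A ⊗ B^T) x
assert np.allclose(np.kron(A, B.T) @ X.ravel(), C.ravel())
```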
Two related matrix operations are the Tracy–Singh and Khatri–Rao products, which operate on partitioned matrices. Let the m × n matrix A be partitioned into the mi × nj blocks Aij and the p × q matrix B into the pk × qℓ blocks Bkℓ, with of course Σi mi = m, Σj nj = n, Σk pk = p and Σℓ qℓ = q. The Tracy–Singh product is then defined as

A ∘ B = (Aij ∘ B)ij = ((Aij ⊗ Bkℓ)kℓ)ij,
which means that the (ij)-th subblock of the mp × nq product A ∘ B is the mi p × nj q matrix Aij ∘ B, of which the (kℓ)-th subblock equals the mi pk × nj qℓ matrix Aij ⊗ Bkℓ. Essentially the Tracy–Singh product is the pairwise Kronecker product for each pair of partitions in the two matrices.
For example, if A and B are both 2 × 2 partitioned matrices, the Tracy–Singh product takes the Kronecker product of each block of A with each block of B and places the result in the corresponding subblock position.
The Khatri–Rao product is defined blockwise as A ∗ B = (Aij ⊗ Bij)ij, in which the ij-th block is the mipi × njqj sized Kronecker product of the corresponding blocks of A and B, assuming the number of row and column partitions of both matrices is equal. The size of the product is then (Σi mipi) × (Σj njqj). Proceeding with the same matrices as the previous example we obtain:
This is a submatrix of the Tracy–Singh product of the two matrices (each partition in this example is a partition in a corner of the Tracy–Singh product).
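The blockwise Tracy–Singh definition can be sketched in NumPy; tracy_singh and its split-point arguments are illustrative names, not a standard library API:

```python
import numpy as np

def tracy_singh(A, B, a_row_splits, a_col_splits, b_row_splits, b_col_splits):
    """Blockwise Kronecker (Tracy–Singh) product of two partitioned matrices.

    The split lists give the indices at which rows/columns are cut into blocks
    (empty lists mean the whole matrix is a single block).
    """
    a_blocks = [np.hsplit(r, a_col_splits) for r in np.vsplit(A, a_row_splits)]
    b_blocks = [np.hsplit(r, b_col_splits) for r in np.vsplit(B, b_row_splits)]
    # Subblock (i, j) of A ∘ B is itself the block matrix (A_ij ⊗ B_kl)_kl.
    return np.block([[np.block([[np.kron(Aij, Bkl) for Bkl in brow]
                                for brow in b_blocks])
                      for Aij in arow]
                     for arow in a_blocks])

rng = np.random.default_rng(6)
A = rng.integers(0, 9, size=(2, 3))
B = rng.integers(0, 9, size=(3, 2))

# With trivial partitions (one block each), A ∘ B reduces to A ⊗ B:
assert np.array_equal(tracy_singh(A, B, [], [], [], []), np.kron(A, B))

# With any partition, A ∘ B is a rearrangement of the entries of A ⊗ B:
TS = tracy_singh(A, B, [1], [1], [1], [1])
assert np.array_equal(np.sort(TS.ravel()), np.sort(np.kron(A, B).ravel()))
```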
A column-wise Kronecker product of two matrices may also be called the Khatri–Rao product. This product assumes the partitions of the matrices are their columns. In this case m1 = m, p1 = p, n = q and for each j: nj = qj = 1. The resulting product is an mp × n matrix of which each column is the Kronecker product of the corresponding columns of A and B. Using the matrices from the previous examples with the columns partitioned:
so that:
This column-wise version of the Khatri–Rao product is useful in linear algebra approaches to data analytical processing.[11]
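A minimal sketch of the column-wise Khatri–Rao product in NumPy; khatri_rao here is our own illustrative helper (recent SciPy versions also provide scipy.linalg.khatri_rao):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri–Rao product: column j is kron(A[:, j], B[:, j])."""
    assert A.shape[1] == B.shape[1], "same number of columns required"
    return np.column_stack([np.kron(A[:, j], B[:, j])
                            for j in range(A.shape[1])])

# Illustrative inputs (not the article's lost example matrices).
A = np.array([[1, 2],
              [3, 4]])          # 2 x 2
B = np.array([[1, 0],
              [0, 1],
              [2, 3]])          # 3 x 2

KR = khatri_rao(A, B)
assert KR.shape == (6, 2)       # mp x n
assert np.array_equal(KR[:, 0], np.kron(A[:, 0], B[:, 0]))
```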
^ H. V. Henderson; S. R. Searle (1980). "The vec-permutation matrix, the vec operator and Kronecker products: A review". Linear and Multilinear Algebra. 9 (4): 271–288. doi:10.1080/03081088108817379.
^ J. W. Brewer (1969). "A Note on Kronecker Matrix Products and Matrix Equation Systems". SIAM Journal on Applied Mathematics. 17 (3): 603–606. doi:10.1137/0117057.
^ Dummit, David S.; Foote, Richard M. (1999). Abstract Algebra (2nd ed.). New York: John Wiley and Sons. pp. 401–402. ISBN 0-471-36857-1.
^ Tracy, D. S.; Singh, R. P. (1972). "A New Matrix Product and Its Applications in Matrix Differentiation". Statistica Neerlandica. 26 (4): 143–157. doi:10.1111/j.1467-9574.1972.tb00199.x.
^ Liu, S. (1999). "Matrix Results on the Khatri–Rao and Tracy–Singh Products". Linear Algebra and its Applications. 289 (1–3): 267–277. doi:10.1016/S0024-3795(98)10209-4.
^ Zhang, X.; Yang, Z.; Cao, C. (2002). "Inequalities involving Khatri–Rao products of positive semi-definite matrices". Applied Mathematics E-Notes. 2: 117–124.