# Matrix multiplication


In mathematics, matrix multiplication is a binary operation that takes a pair of matrices and produces another matrix. The term may refer to a number of different ways to multiply matrices, but it most commonly refers to the matrix product.[1][2]

This article will use the following notational conventions. Matrices are represented by capital letters in bold, vectors in lowercase bold, and entries of vectors and matrices are italic (since they are scalars).

## Scalar multiplication

The simplest form of multiplication associated with matrices is scalar multiplication.

### General definition

Left scalar multiplication

The left multiplication of a matrix A with a scalar λ gives another matrix λA of the same size as A. The entries of λA are given by

$(\lambda \mathbf{A})_{ij} = \lambda A_{ij}$

explicitly:

$\lambda \mathbf{A} = \lambda \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1m} \\ A_{21} & A_{22} & \cdots & A_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nm} \\ \end{pmatrix} = \begin{pmatrix} \lambda A_{11} & \lambda A_{12} & \cdots & \lambda A_{1m} \\ \lambda A_{21} & \lambda A_{22} & \cdots & \lambda A_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda A_{n1} & \lambda A_{n2} & \cdots & \lambda A_{nm} \\ \end{pmatrix}.$
Right scalar multiplication

Similarly, the right multiplication of a matrix A with a scalar λ is defined to be

$(A\lambda)_{ij} = A_{ij} \lambda.$

When the underlying ring is commutative, for example, the real or complex number field, these two multiplications are the same, and are simply called scalar multiplication. However, for matrices over a more general ring that need not be commutative, such as the quaternions, they may not be equal.

### Examples

For a real scalar and matrix:

\begin{align} & \lambda = 2, \quad \mathbf{A} =\begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}, \\ & 2 \mathbf{A} = 2 \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} = \begin{pmatrix} 2 \!\cdot\! a & 2 \!\cdot\! b \\ 2 \!\cdot\! c & 2 \!\cdot\! d \\ \end{pmatrix} = \begin{pmatrix} a \!\cdot\! 2 & b \!\cdot\! 2 \\ c \!\cdot\! 2 & d \!\cdot\! 2 \\ \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}2= \mathbf{A}2. \end{align}

For quaternion scalars and matrices:

$i\begin{pmatrix} i & 0 \\ 0 & j \\ \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & k \\ \end{pmatrix} \ne \begin{pmatrix} -1 & 0 \\ 0 & -k \\ \end{pmatrix} = \begin{pmatrix} i & 0 \\ 0 & j \\ \end{pmatrix}i.$
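This noncommutativity can be checked numerically. The following minimal sketch represents quaternions by their standard embedding as 2×2 complex matrices (an assumption made for self-containment, not part of the definition above), so that left and right scalar multiplication of the quaternionic matrix above can be compared directly:

```python
import numpy as np

# Sketch only: quaternions embedded as 2x2 complex matrices,
# with 1 -> I, i -> diag(1j, -1j), j -> [[0, 1], [-1, 0]], k = i*j.
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)

zero = np.zeros((2, 2), dtype=complex)
A = np.block([[qi, zero], [zero, qj]])   # the quaternionic matrix ((i, 0), (0, j))

# Entrywise scalar multiplication by i: each 2x2 block is multiplied
# by the embedding of i, on the left or on the right.
left  = np.kron(np.eye(2), qi) @ A       # (i * A_rs) in every entry
right = A @ np.kron(np.eye(2), qi)       # (A_rs * i) in every entry

print(np.allclose(left, right))          # False, matching i*j = k but j*i = -k
```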

## Matrix product (two matrices)

Assume two matrices are to be multiplied (the generalization to any number is discussed below). If A is an n×m matrix and B is an m×p matrix, their product AB is an n×p matrix; it is defined only when the number of columns of A equals the number of rows of B (here, m).

### General definition

When multiplying matrices, the entries of row i in the first matrix are multiplied by the corresponding entries of column j in the second matrix, and the products are summed to obtain entry ij of the final matrix. Each entry of the product may be computed one at a time.

For two matrices

$\mathbf{A}=\begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1m} \\ A_{21} & A_{22} & \cdots & A_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nm} \\ \end{pmatrix},\quad\mathbf{B}=\begin{pmatrix} B_{11} & B_{12} & \cdots & B_{1p} \\ B_{21} & B_{22} & \cdots & B_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ B_{m1} & B_{m2} & \cdots & B_{mp} \\ \end{pmatrix}$

(where necessarily the number of columns in A equals the number of rows in B equals m) the matrix product AB is defined by[3][4]

$\mathbf{A}\mathbf{B} =\begin{pmatrix} (AB)_{11} & (AB)_{12} & \cdots & (AB)_{1p} \\ (AB)_{21} & (AB)_{22} & \cdots & (AB)_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ (AB)_{n1} & (AB)_{n2} & \cdots & (AB)_{np} \\ \end{pmatrix}$

(with no multiplication signs or dots) where AB has entries defined by

$(AB)_{ij} = \sum_{k=1}^m A_{ik}B_{kj}.$

Treating the rows and columns in each matrix as row and column vectors respectively, this entry is also their vector dot product:

$\mathbf{a}_i=\begin{pmatrix} A_{i1} & A_{i2} & \cdots & A_{im} \end{pmatrix}\,, \quad \mathbf{b}_j=\begin{pmatrix} B_{1j} \\ B_{2j} \\ \vdots \\ B_{mj} \end{pmatrix}, \quad (AB)_{ij} = \mathbf{a}_i \cdot \mathbf{b}_j.$

(See below for further details.) Usually the entries are numbers or expressions, but they can even be matrices themselves (see block matrix), in which case the matrix product is calculated in exactly the same way.
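As a concrete (if naive) rendering of this definition, here is a minimal Python sketch that computes each entry $(AB)_{ij}$ directly from the sum above; the matrices are plain nested lists, an assumption made for self-containment:

```python
def matmul(A, B):
    """Product of an n x m matrix A and an m x p matrix B, as nested lists."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "columns of A must equal rows of B"
    # (AB)_ij = sum over k of A_ik * B_kj
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[0, 1], [1, 0]]))  # [[2, 1], [4, 3]]
```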

### Illustration

The following diagram illustrates the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B.

$\overset{4\times 2 \text{ matrix}}{\begin{bmatrix} \color{BrickRed} a_{11} & \color{BrickRed} a_{12} \\ \cdot & \cdot \\ \color{BurntOrange} a_{31} & \color{BurntOrange} a_{32} \\ \cdot & \cdot \\ \end{bmatrix}} \overset{2\times 3\text{ matrix}}{\begin{bmatrix} \cdot & \color{RedViolet}b_{12} & \color{Violet}b_{13} \\ \cdot & \color{RedViolet}b_{22} & \color{Violet}b_{23} \\ \end{bmatrix}} = \overset{4\times 3\text{ matrix}}{\begin{bmatrix} \cdot & x_{12} & \cdot \\ \cdot & \cdot & \cdot \\ \cdot & \cdot & x_{33} \\ \cdot & \cdot & \cdot \\ \end{bmatrix}}$

The values at the marked intersections are:

\begin{align} x_{12} &= ({\color{BrickRed}a_{11}}, {\color{BrickRed}a_{12}})\cdot({\color{RedViolet}b_{12}}, {\color{RedViolet}b_{22}}) &= {\color{BrickRed}a_{11}}{\color{RedViolet}b_{12}} + {\color{BrickRed}a_{12}}{\color{RedViolet}b_{22}} \\ x_{33} &= ({\color{BurntOrange}a_{31}}, {\color{BurntOrange}a_{32}})\cdot({\color{Violet}b_{13}}, {\color{Violet}b_{23}}) &= {\color{BurntOrange}a_{31}}{\color{Violet}b_{13}} + {\color{BurntOrange}a_{32}}{\color{Violet}b_{23}}. \\ \end{align}

### Examples

Square matrix and column vector
$\mathbf{A} = \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} x \\ y \\ \end{pmatrix}$

their matrix product is:

$\mathbf{AB} = \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} \begin{pmatrix} x \\ y \\ \end{pmatrix} =\begin{pmatrix} ax + by \\ cx + dy \\ \end{pmatrix}$

whereas BA is not defined.

The product of a square matrix and a column matrix arises naturally in linear algebra, for example in solving linear equations and representing linear transformations. By choosing a, b, c, d in A appropriately, A can represent a variety of transformations such as rotations, scalings, reflections, and shears of a geometric shape in space.

Square matrices
$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \\ \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}$

their matrix products are:

$\mathbf{AB} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \\ \end{pmatrix} \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} = \begin{pmatrix} 1 \times a + 2\times c & 1\times b + 2\times d \\ 3 \times a + 4 \times c & 3 \times b + 4\times d \\ \end{pmatrix}=\begin{pmatrix} a + 2c & b + 2d \\ 3a + 4c & 3b + 4d \\ \end{pmatrix}$

and

$\mathbf{BA} = \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \\ \end{pmatrix} = \begin{pmatrix} a \times 1 + b\times 3 & a \times 2 + b \times 4 \\ c \times 1 + d \times 3 & c \times 2 + d \times 4 \\ \end{pmatrix}=\begin{pmatrix} a + 3b & 2a + 4b \\ c + 3d & 2c + 4d \\ \end{pmatrix}.$

Multiplying square matrices which represent linear transformations corresponds to the composite transformation (see below for details).

## Properties of matrix multiplication

### General

In analogy with numbers (elements of a field), matrices satisfy the following general properties, although one subtlety arises from the nature of matrix multiplication.

#### All matrices

1. Not commutative:
In general:
$\mathbf{A}\mathbf{B} \neq \mathbf{B}\mathbf{A}$
because AB and BA may not be simultaneously defined, and even if they are, they may still not be equal. This is contrary to ordinary multiplication of numbers. To specify the ordering of matrix multiplication in words: "pre-multiply (or left multiply) A by B" means BA, while "post-multiply (or right multiply) A by C" means AC. As long as the entries of the matrix come from a ring that has an identity, and n > 1, there is a pair of n×n noncommuting matrices over the ring. A notable exception is that the identity matrix (or any scalar multiple of it) commutes with every square matrix.
2. Associative:
$\mathbf{A}(\mathbf{BC}) = (\mathbf{AB})\mathbf{C}$
3. Distributive over matrix addition:
$\mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{AB} + \mathbf{AC}, \quad (\mathbf{A} + \mathbf{B} )\mathbf{C} = \mathbf{AC} + \mathbf{BC}$
4. Scalar multiplication is compatible with matrix multiplication:
$\lambda(\mathbf{AB}) = (\lambda \mathbf{A})\mathbf{B}$ and $(\mathbf{A} \mathbf{B})\lambda=\mathbf{A}(\mathbf{B}\lambda )$
where λ is a scalar. If the entries of the matrix are real or complex numbers (or from any other commutative ring), then all four quantities are equal. More generally, all four are equal if λ belongs to the center of the ring of entries of the matrix, because in this case λX = Xλ for all matrices X.
5. Transpose:
$(\mathbf{AB})^\mathrm{T} = \mathbf{B}^\mathrm{T}\mathbf{A}^\mathrm{T}$
where T denotes the transpose, the interchange of row i with column i in a matrix. This identity holds for any matrices over a commutative ring, but not for all rings in general. Note that A and B are reversed.
6. Hermitian conjugate: If A and B have complex entries, then
$(\mathbf{AB})^\dagger = \mathbf{B}^\dagger\mathbf{A}^\dagger$
where $\dagger$ denotes the Hermitian conjugate of a matrix (complex conjugate and transposed).
7. Traces: The trace of a product AB is independent of the order of A and B:
$\mathrm{tr}(\mathbf{AB}) = \mathrm{tr}(\mathbf{BA})$

#### Square matrices only

1. Identity element: If A is a square matrix, then
$\mathbf{AI} = \mathbf{IA} = \mathbf{A}$
where I is the identity matrix of the same order.
2. Inverse matrix: If A is a square matrix, there may be an inverse matrix A−1 of A such that
$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$
If such a matrix exists, then A is an invertible matrix; if not, A is a singular matrix.
3. Determinants: The determinant of a product AB is the product of the determinants of the square matrices A and B (the determinant is defined only when the underlying ring is commutative):
$\det(\mathbf{AB}) = \det(\mathbf{A})\det(\mathbf{B})$
Since det(A) and det(B) are just numbers and so commute, det(AB) = det(A)det(B) = det(B)det(A) = det(BA), even when AB ≠ BA.
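The identities above lend themselves to quick numerical spot checks. The following minimal sketch, assuming NumPy and two arbitrarily chosen example matrices, verifies noncommutativity, transpose reversal, trace invariance, the determinant rule, and the inverse property:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, -1.0]])

print(np.allclose(A @ B, B @ A))                        # False: not commutative
print(np.allclose((A @ B).T, B.T @ A.T))                # True: (AB)^T = B^T A^T
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))     # True: tr(AB) = tr(BA)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # True: det(AB) = det(A)det(B)
print(np.allclose(A @ np.linalg.inv(A), np.eye(2)))     # True: A A^{-1} = I
```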

### Linear transformations

Matrices offer a concise way of representing linear transformations between vector spaces, and matrix multiplication corresponds to the composition of linear transformations. The matrix product of two matrices can be defined when their entries belong to the same ring, and hence can be added and multiplied.

Let U, V, and W be vector spaces over the same field with given bases, let S: V → W and T: U → V be linear transformations, and let ST: U → W be their composition.

Suppose that A, B, and C are the matrices representing the transformations S, T, and ST, respectively, with respect to the given bases.

Then AB = C, that is, the matrix of the composition (or the product) of linear transformations is the product of their matrices with respect to the given bases.
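As a concrete illustration (a sketch assuming NumPy, with plane rotations as the linear maps): composing a rotation by φ with a rotation by θ is again a linear map, and its matrix is the product of the two rotation matrices, which here equals rotation by θ + φ.

```python
import numpy as np

def rotation(theta):
    """Matrix of the linear map rotating R^2 by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta, phi = 0.7, 0.4
# Matrix of the composition = product of the matrices.
print(np.allclose(rotation(theta) @ rotation(phi), rotation(theta + phi)))  # True
```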

## Matrix product (any number)

### Chain multiplication

Matrix multiplication can be extended to the case of more than two matrices, provided that for each sequential pair, their dimensions match.

#### General definition

The product of N matrices A1, A2, ..., AN with sizes n0×n1, n1×n2, ..., nN−1×nN is the n0×nN matrix:

$\prod_{i=1}^N \mathbf{A}_i = \mathbf{A}_1\mathbf{A}_2\cdots\mathbf{A}_N.$

The properties above continue to hold, as long as the ordering of the matrices is not changed.

#### Examples

If A, B, C, and D are respectively m×p, p×q, q×r, and r×n matrices, then there are 5 ways of grouping them without changing their order, and

$\mathbf{ABCD} = ((\mathbf{AB})\mathbf{C})\mathbf{D}=(\mathbf{A}(\mathbf{BC}))\mathbf{D}=\mathbf{A}((\mathbf{BC})\mathbf{D})=\mathbf{A}(\mathbf{B}(\mathbf{CD}))=(\mathbf{AB})(\mathbf{CD})$

is an m×n matrix.
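Although all five groupings give the same matrix, their arithmetic costs can differ enormously, which motivates the classic matrix chain ordering problem. Below is a sketch of the standard dynamic program; the function name and the convention that `dims` lists n0, n1, ..., nN (so that matrix Ai has size dims[i-1]×dims[i]) are this sketch's own.

```python
def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications to compute A_1 ... A_N,
    where A_i has size dims[i-1] x dims[i]."""
    n = len(dims) - 1                       # number of matrices
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # length of the subchain A_i ... A_j
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(cost[i][k] + cost[k + 1][j]
                             + dims[i - 1] * dims[k] * dims[j]
                             for k in range(i, j))
    return cost[1][n]

# A (10x100), B (100x5), C (5x50): (AB)C costs 7500, A(BC) costs 75000.
print(matrix_chain_cost([10, 100, 5, 50]))  # 7500
```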

### Powers of matrices

Square matrices can be multiplied by themselves repeatedly in the same way as ordinary numbers, because they always have the same number of rows and columns. This repeated multiplication can be described as a power of the matrix, a special case of the ordinary matrix product. By contrast, rectangular matrices do not have the same number of rows and columns, so they can never be raised to a power. An n×n matrix A raised to a positive integer power k is defined as

$\mathbf{A}^k = \underbrace{\mathbf{A}\mathbf{A}\cdots\mathbf{A}}_{k\text{ times}}$

and the following identities hold, where λ is a scalar:

Zero power
$\mathbf{A}^0 = \mathbf{I}$

where I is the identity matrix. This parallels the zeroth power of a number, which equals unity.

Scalar multiplication
$( \lambda \mathbf{A} )^k = \lambda^k\mathbf{A}^k$
Determinant
$\det(\mathbf{A}^k) = \det(\mathbf{A})^k$

The naive way to compute a matrix power is to multiply the result by the matrix A, k times, starting from the identity matrix, just as in the scalar case. This can be improved using exponentiation by squaring, a method commonly used for scalars. For diagonalizable matrices, an even better method is to use the eigenvalue decomposition of A. Another method, based on the Cayley–Hamilton theorem, finds an identity using the matrices' characteristic polynomial, producing a more effective equation for $\mathbf{A}^k$ in which a scalar is raised to the required power, rather than an entire matrix.
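A minimal sketch of exponentiation by squaring for matrices, assuming NumPy (NumPy's own `numpy.linalg.matrix_power` performs a similar computation):

```python
import numpy as np

def matrix_power(A, k):
    """Compute A^k for a square matrix A and integer k >= 0 by repeated squaring."""
    result = np.eye(A.shape[0], dtype=A.dtype)  # A^0 = I
    base = A.copy()
    while k > 0:
        if k & 1:                # multiply in the current power of A
            result = result @ base
        base = base @ base       # square: A, A^2, A^4, ...
        k >>= 1
    return result

A = np.array([[1, 1], [1, 0]])   # powers of this matrix generate Fibonacci numbers
print(matrix_power(A, 10))       # [[89 55], [55 34]]
```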

### Powers of diagonal matrices

A special case is the power of a diagonal matrix A.

Since the product of diagonal matrices amounts to simply multiplying corresponding diagonal elements together, the kth power of a diagonal matrix A has its diagonal entries raised to the kth power. Explicitly:

$\mathbf{A}^k = \begin{pmatrix} A_{11} & 0 & \cdots & 0 \\ 0 & A_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{nn} \end{pmatrix}^k = \begin{pmatrix} A_{11}^k & 0 & \cdots & 0 \\ 0 & A_{22}^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{nn}^k \end{pmatrix}$

meaning it is easy to raise a diagonal matrix to a power. When raising an arbitrary matrix (not necessarily a diagonal matrix) to a power, it is often helpful to exploit this property by diagonalizing the matrix first.
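A sketch of the diagonalization approach, assuming NumPy and a diagonalizable matrix: compute A = P D P⁻¹, raise the diagonal entries of D to the kth power, and change basis back.

```python
import numpy as np

def power_by_diagonalization(A, k):
    """A^k via eigendecomposition; assumes A is diagonalizable."""
    eigenvalues, P = np.linalg.eig(A)   # A = P diag(eigenvalues) P^{-1}
    D_k = np.diag(eigenvalues ** k)     # kth power acts entrywise on the diagonal
    return P @ D_k @ np.linalg.inv(P)

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # symmetric, hence diagonalizable
print(np.allclose(power_by_diagonalization(A, 5),
                  np.linalg.matrix_power(A, 5)))  # True
```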

## The inner and outer products

Given two column vectors a and b, the Euclidean inner product and outer product are the simplest special cases of the matrix product, by transposing the column vectors into row vectors.[5]

The inner product

is a column vector multiplied on the left by a row vector:

$\mathbf{a}\cdot \mathbf{b} = \mathbf{a}^\mathrm{T}\mathbf{b}$

More explicitly,

$\mathbf{a}\cdot \mathbf{b} = \begin{pmatrix}a_1 & a_2 & \cdots & a_n\end{pmatrix} \begin{pmatrix}b_1 \\ b_2 \\ \vdots \\ b_n\end{pmatrix} = a_1b_1+a_2b_2+\cdots+a_nb_n = \sum_{i=1}^n a_ib_i$
The outer product

is a row vector multiplied on the left by a column vector:

$\mathbf{a}\otimes \mathbf{b} = \mathbf{a}\mathbf{b}^\mathrm{T}$

where

$\mathbf{a}\mathbf{b}^\mathrm{T} = \begin{pmatrix}a_1 \\ a_2 \\ \vdots \\ a_n\end{pmatrix} \begin{pmatrix}b_1 & b_2 & \cdots & b_n\end{pmatrix} = \begin{pmatrix} a_1 b_1 & a_1 b_2 & \cdots & a_1 b_n \\ a_2 b_1 & a_2 b_2 & \cdots & a_2 b_n \\ \vdots & \vdots & \ddots & \vdots \\ a_n b_1 & a_n b_2 & \cdots & a_n b_n \\ \end{pmatrix}.$
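In NumPy terms (a sketch; the column vectors are represented as n×1 arrays so that both products are literally matrix products):

```python
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])        # column vectors, shape (3, 1)
b = np.array([[4.0], [5.0], [6.0]])

inner = a.T @ b                            # 1x1 matrix: [[32.]]
outer = a @ b.T                            # 3x3 matrix of products a_i * b_j

print(inner)
print(np.allclose(outer, np.outer(a, b)))  # True
```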

Matrix product (in terms of inner product)

Suppose that the first n×m matrix A is decomposed into its row vectors ai, and the second m×p matrix B into its column vectors bi:[1]

$\mathbf{A} = \begin{pmatrix} A_{1 1} & A_{1 2} & \cdots & A_{1 m} \\ A_{2 1} & A_{2 2} & \cdots & A_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n 1} & A_{n 2} & \cdots & A_{n m} \end{pmatrix} = \begin{pmatrix} \mathbf{a}_1 \\ \mathbf{a}_2 \\ \vdots \\ \mathbf{a}_n \end{pmatrix},\quad \mathbf{B} = \begin{pmatrix} B_{1 1} & B_{1 2} & \cdots & B_{1 p} \\ B_{2 1} & B_{2 2} & \cdots & B_{2 p} \\ \vdots & \vdots & \ddots & \vdots \\ B_{m 1} & B_{m 2} & \cdots & B_{m p} \end{pmatrix} = \begin{pmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \cdots & \mathbf{b}_p \end{pmatrix}$

where

$\mathbf{a}_i = \begin{pmatrix}A_{i1} & A_{i2} & \cdots & A_{im} \end{pmatrix},\quad \mathbf{b}_i = \begin{pmatrix}B_{1i} & B_{2i} & \cdots & B_{mi}\end{pmatrix}^\mathrm{T}$

Then the entries defined in the introduction are given by the dot products:

$\mathbf{AB} = \begin{pmatrix} \mathbf{a}_1 \\ \mathbf{a}_2 \\ \vdots \\ \mathbf{a}_n \end{pmatrix} \begin{pmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \dots & \mathbf{b}_p \end{pmatrix} = \begin{pmatrix} (\mathbf{a}_1 \cdot \mathbf{b}_1) & (\mathbf{a}_1 \cdot \mathbf{b}_2) & \dots & (\mathbf{a}_1 \cdot \mathbf{b}_p) \\ (\mathbf{a}_2 \cdot \mathbf{b}_1) & (\mathbf{a}_2 \cdot \mathbf{b}_2) & \dots & (\mathbf{a}_2 \cdot \mathbf{b}_p) \\ \vdots & \vdots & \ddots & \vdots \\ (\mathbf{a}_n \cdot \mathbf{b}_1) & (\mathbf{a}_n \cdot \mathbf{b}_2) & \dots & (\mathbf{a}_n \cdot \mathbf{b}_p) \end{pmatrix}$

It is also possible to express a matrix product in terms of concatenations of products of matrices and row or column vectors:

$\mathbf{AB} = \begin{pmatrix} \mathbf{a}_1 \\ \mathbf{a}_2 \\ \vdots \\ \mathbf{a}_n \end{pmatrix} \begin{pmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \dots & \mathbf{b}_p \end{pmatrix} = \begin{pmatrix} \mathbf{A}\mathbf{b}_1 & \mathbf{A}\mathbf{b}_2 & \dots & \mathbf{A}\mathbf{b}_p \end{pmatrix} = \begin{pmatrix} \mathbf{a}_1\mathbf{B} \\ \mathbf{a}_2\mathbf{B}\\ \vdots\\ \mathbf{a}_n\mathbf{B} \end{pmatrix}$

These decompositions are particularly useful for matrices that are envisioned as concatenations of particular types of row vectors or column vectors, e.g. orthogonal matrices (whose rows and columns are unit vectors orthogonal to each other) and Markov matrices (whose rows or columns sum to 1).

Matrix product (in terms of outer product)

An alternative method results when the decomposition is done the other way around, i.e. the first matrix A is decomposed into column vectors ai and the second matrix B into row vectors bi:

$\mathbf{AB} = \begin{pmatrix} \mathbf{\bar a}_1 & \mathbf{\bar a}_2 & \cdots & \mathbf{\bar a}_m \end{pmatrix} \begin{pmatrix} \mathbf{\bar b}_1 \\ \mathbf{\bar b}_2 \\ \vdots \\ \mathbf{\bar b}_m \end{pmatrix} = \mathbf{\bar a}_1 \otimes \mathbf{\bar b}_1 + \mathbf{\bar a}_2 \otimes \mathbf{\bar b}_2 + \cdots + \mathbf{\bar a}_m \otimes \mathbf{\bar b}_m = \sum_{i=1}^m \mathbf{\bar a}_i \otimes \mathbf{\bar b}_i$

where this time

$\mathbf{\bar a}_i = \begin{pmatrix}A_{1i} & A_{2i} & \cdots & A_{ni} \end{pmatrix}^\mathrm{T},\quad \mathbf{\bar b}_i = \begin{pmatrix}B_{i1} & B_{i2} & \cdots & B_{ip}\end{pmatrix}$

This method emphasizes the effect of individual column/row pairs on the result, which is a useful point of view with e.g. covariance matrices, where each such pair corresponds to the effect of a single sample point.
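A sketch verifying this decomposition numerically, assuming NumPy: summing the outer products of the columns of A with the corresponding rows of B recovers AB.

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # the 3x3 matrix from the example above
B = np.arange(1.0, 7.0).reshape(3, 2)    # a numeric stand-in for ((a,d),(b,e),(c,f))

# Sum over i of (column i of A) outer (row i of B).
outer_sum = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))
print(np.allclose(outer_sum, A @ B))     # True
```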

### Examples

Suppose

$\mathbf{A} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{pmatrix},\quad \mathbf{B}=\begin{pmatrix} a & d \\ b & e \\ c & f \\ \end{pmatrix}$

using the inner product approach:

$\begin{pmatrix} {\color{BrickRed}1} & {\color{BurntOrange}2} & {\color{Violet}3} \\ {\color{BrickRed}4} & {\color{BurntOrange}5} & {\color{Violet}6} \\ {\color{BrickRed}7} & {\color{BurntOrange}8} & {\color{Violet}9} \\ \end{pmatrix} \begin{pmatrix} {\color{BrickRed}a} & {\color{BrickRed}d} \\ {\color{BurntOrange}b} & {\color{BurntOrange}e} \\ {\color{Violet}c} & {\color{Violet}f} \\ \end{pmatrix} = \begin{pmatrix} {\color{BrickRed}1a} + {\color{BurntOrange}2b} + {\color{Violet}3c} & {\color{BrickRed}1d} + {\color{BurntOrange}2e} + {\color{Violet}3f} \\ {\color{BrickRed}4a} + {\color{BurntOrange}5b} + {\color{Violet}6c} & {\color{BrickRed}4d} + {\color{BurntOrange}5e} + {\color{Violet}6f} \\ {\color{BrickRed}7a} + {\color{BurntOrange}8b} + {\color{Violet}9c} & {\color{BrickRed}7d} + {\color{BurntOrange}8e} + {\color{Violet}9f} \\ \end{pmatrix}$

while the outer product approach gives:

$\begin{pmatrix} {\color{BrickRed}1} & {\color{BurntOrange}2} & {\color{Violet}3} \\ {\color{BrickRed}4} & {\color{BurntOrange}5} & {\color{Violet}6} \\ {\color{BrickRed}7} & {\color{BurntOrange}8} & {\color{Violet}9} \\ \end{pmatrix} \begin{pmatrix} {\color{BrickRed}a} & {\color{BrickRed}d} \\ {\color{BurntOrange}b} & {\color{BurntOrange}e} \\ {\color{Violet}c} & {\color{Violet}f} \\ \end{pmatrix} = \begin{pmatrix} {\color{BrickRed}1a} & {\color{BrickRed}1d} \\ {\color{BrickRed}4a} & {\color{BrickRed}4d} \\ {\color{BrickRed}7a} & {\color{BrickRed}7d} \\ \end{pmatrix}+ \begin{pmatrix} {\color{BurntOrange}2b} & {\color{BurntOrange}2e} \\ {\color{BurntOrange}5b} & {\color{BurntOrange}5e} \\ {\color{BurntOrange}8b} & {\color{BurntOrange}8e} \\ \end{pmatrix}+ \begin{pmatrix} {\color{Violet}3c} & {\color{Violet}3f} \\ {\color{Violet}6c} & {\color{Violet}6f} \\ {\color{Violet}9c} & {\color{Violet}9f} \\ \end{pmatrix}.$

## Algorithms for efficient matrix multiplication

Unsolved problem in computer science: What is the fastest algorithm for matrix multiplication?

The best known bound on the matrix multiplication exponent ω over time.

The running time of square matrix multiplication, if carried out naively, is $O( n^3 )$. The running time for multiplying rectangular matrices (one m×p matrix with one p×n matrix) is O(mnp); however, more efficient algorithms exist, such as Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". It is based on a way of multiplying two 2×2 matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with cost $O( n^{\log_{2}7}) \approx O(n^{2.807})$. Strassen's algorithm is more complex, and its numerical stability is reduced compared to the naive algorithm.[6] Nevertheless, it appears in several libraries, such as BLAS, where it is significantly more efficient for matrices with dimensions n > 100,[7] and is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue.
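A sketch of Strassen's recursion, restricted (for brevity) to matrices whose dimension is a power of two; practical implementations pad the inputs or switch to the naive method below a size cutoff.

```python
import numpy as np

def strassen(A, B):
    """Strassen's algorithm for n x n matrices with n a power of two (sketch)."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    # Reassemble the quadrants of the product.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
print(np.allclose(strassen(A, B), A @ B))  # True
```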

The $O( n^k )$ algorithm with the lowest known exponent k is a generalization of the Coppersmith–Winograd algorithm, with an asymptotic complexity of $O(n^{2.3727})$, due to Vassilevska Williams.[8] This algorithm, and the Coppersmith–Winograd algorithm on which it is based, are similar to Strassen's algorithm: a way is devised for multiplying two k×k matrices with fewer than $k^3$ multiplications, and this technique is applied recursively. However, the constant coefficient hidden by the Big O notation is so large that these algorithms are only worthwhile for matrices that are too large to handle on present-day computers.[9]

Since any algorithm for multiplying two n×n matrices has to process all $2n^2$ entries, there is an asymptotic lower bound of $\Omega (n^2)$ operations. Raz (2002) proved a lower bound of $\Omega(n^2 \log n)$ for bounded-coefficient arithmetic circuits over the real or complex numbers.

Cohn et al. (2003, 2005) put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different group-theoretic context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the triple product property (TPP). They show that if families of wreath products of Abelian groups with symmetric groups realise families of subset triples with a simultaneous version of the TPP, then there are matrix multiplication algorithms with essentially quadratic complexity. Most researchers believe that this is indeed the case.[10] However, Alon, Shpilka and Umans have recently shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the sunflower conjecture.[11]

Because of the nature of matrix operations and the layout of matrices in memory, it is typically possible to gain substantial performance through parallelization and vectorization. Consequently, some algorithms with lower time complexity on paper may incur hidden costs on real machines.

Freivalds' algorithm is a simple Monte Carlo algorithm that, given matrices $A$, $B$, and $C$, verifies in $\Theta(n^2)$ time whether $AB=C$.
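A sketch of Freivalds' check, assuming NumPy: multiply both sides by a random 0/1 vector, which costs only three matrix-vector products per trial; an incorrect product is accepted with probability at most 1/2 per trial.

```python
import numpy as np

def freivalds(A, B, C, trials=20):
    """Randomized check that AB = C; a wrong C passes with prob. <= 2^-trials."""
    n = C.shape[0]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1))   # random 0/1 vector
        # Compare A(Br) with Cr: three O(n^2) matrix-vector products.
        if not np.allclose(A @ (B @ r), C @ r):
            return False
    return True

A, B = np.random.rand(3, 3), np.random.rand(3, 3)
print(freivalds(A, B, A @ B))      # True
print(freivalds(A, B, A @ B + 1))  # False (with high probability)
```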

Block matrix multiplication. In the 2D algorithm, each processor is responsible for one submatrix of C. In the 3D algorithm, every pair of submatrices from A and B that is multiplied is assigned to one processor.

### Communication-avoiding and distributed algorithms

On modern architectures with hierarchical memory, the cost of loading and storing input matrix elements tends to dominate the cost of arithmetic. On a single machine this is the amount of data transferred between RAM and cache, while on a distributed memory multi-node machine it is the amount transferred between nodes; in either case it is called the communication bandwidth. The naive algorithm using three nested loops uses $\Omega(n^3)$ communication bandwidth.

Cannon's algorithm, also known as the 2D algorithm, partitions each input matrix into a block matrix whose elements are submatrices of size $\sqrt{M/3}$ by $\sqrt{M/3}$, where M is the size of fast memory.[12] The naive algorithm is then used over the block matrices, computing products of submatrices entirely in fast memory. This reduces communication bandwidth to $O(n^3/\sqrt{M})$, which is asymptotically optimal (for algorithms performing $\Omega(n^3)$ computation).[13][14]
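A serial, single-machine sketch of the blocking idea behind such algorithms (on one machine it improves cache reuse; the tile size `block`, a parameter of this sketch, stands in for $\sqrt{M/3}$):

```python
import numpy as np

def blocked_matmul(A, B, block=2):
    """Multiply square matrices tile by tile, keeping each tile product in fast memory."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                # One submatrix product, accumulated into the C tile.
                C[i:i+block, j:j+block] += (A[i:i+block, k:k+block]
                                            @ B[k:k+block, j:j+block])
    return C

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
print(np.allclose(blocked_matmul(A, B), A @ B))  # True
```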

In a distributed setting with p processors arranged in a $\sqrt{p}$ by $\sqrt{p}$ 2D mesh, one submatrix of the result can be assigned to each processor, and the product can be computed with each processor transmitting $O(n^2/\sqrt{p})$ words, which is asymptotically optimal assuming that each node stores the minimum $O(n^2/p)$ elements.[14] This can be improved by the 3D algorithm, which arranges the processors in a 3D cube mesh, assigning every product of two input submatrices to a single processor. The result submatrices are then generated by performing a reduction over each row.[15] This algorithm transmits $O(n^2/p^{2/3})$ words per processor, which is asymptotically optimal.[14] However, this requires replicating each input matrix element $p^{1/3}$ times, and so requires a factor of $p^{1/3}$ more memory than is needed to store the inputs. This algorithm can be combined with Strassen's algorithm to further reduce runtime.[15] "2.5D" algorithms provide a continuous tradeoff between memory usage and communication bandwidth.[16]

## Other forms of multiplication

There are other ways to multiply two matrices, some of them in fact simpler than the definition above.

For two matrices of the same dimensions, there is the Hadamard product, also known as the element-wise product, pointwise product, entrywise product, or Schur product.[17] For two matrices A and B of the same dimensions, the Hadamard product A ∘ B is a matrix of the same dimensions, with elements given by

$(A \circ B)_{ij} = (A)_{ij}(B)_{ij}$

explicitly:

$\mathbf{A} \circ \mathbf{B} = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1m} \\ A_{21} & A_{22} & \cdots & A_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nm} \\ \end{pmatrix}\circ\begin{pmatrix} B_{11} & B_{12} & \cdots & B_{1m} \\ B_{21} & B_{22} & \cdots & B_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ B_{n1} & B_{n2} & \cdots & B_{nm} \\ \end{pmatrix} =\begin{pmatrix} A_{11}B_{11} & A_{12}B_{12} & \cdots & A_{1m}B_{1m} \\ A_{21}B_{21} & A_{22}B_{22} & \cdots & A_{2m}B_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1}B_{n1} & A_{n2}B_{n2} & \cdots & A_{nm}B_{nm} \\ \end{pmatrix}$

Due to its characteristic entrywise procedure, this operation is identical to multiplying many ordinary numbers (mn of them) all at once; hence the Hadamard product is commutative, associative and distributive over addition, and is a principal submatrix of the Kronecker product. It appears in lossy compression algorithms such as JPEG.

### Frobenius product

The Frobenius inner product, sometimes denoted A : B, is the component-wise inner product of two matrices as though they were vectors. It is also the sum of the entries of the Hadamard product. Explicitly,

$\mathbf{A}:\mathbf{B}=\sum_i\sum_j A_{ij} B_{ij} = \mathrm{tr}(\mathbf{A}^\mathrm{T} \mathbf{B}) = \mathrm{tr}(\mathbf{A} \mathbf{B}^\mathrm{T}),$

where "tr" denotes the trace of a matrix. This inner product induces the Frobenius norm.

### Kronecker product

For two matrices A and B of any dimensions, m×n and p×q respectively (no constraints relate the dimensions), the Kronecker product, denoted A ⊗ B, is a matrix of dimensions mp×nq, with elements given by

$(A \otimes B)_{ij} = (A)_{ij}\mathbf{B}$

explicitly:

$\mathbf{A} \otimes \mathbf{B} = \begin{pmatrix} A_{11}\mathbf{B} & A_{12}\mathbf{B} & \cdots & A_{1n}\mathbf{B} \\ A_{21}\mathbf{B} & A_{22}\mathbf{B} & \cdots & A_{2n}\mathbf{B} \\ \vdots & \vdots & \ddots & \vdots \\ A_{m1}\mathbf{B} & A_{m2}\mathbf{B} & \cdots & A_{mn}\mathbf{B} \\ \end{pmatrix}.$

This is a special case of the more general tensor product, applied to matrices.
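In NumPy (a sketch), `np.kron` computes this product; note the mp×nq shape:

```python
import numpy as np

A = np.array([[1, 2]])          # 1x2
B = np.array([[0, 1], [1, 0]])  # 2x2

K = np.kron(A, B)               # each A_ij is replaced by the block A_ij * B
print(K.shape)                  # (2, 4)
print(K)                        # [[0 1 0 2], [1 0 2 0]]
```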

## Notes

1. Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3.
2. McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, ISBN 0-07-051400-3.
3. Linear Algebra (4th Edition), S. Lipschutz, M. Lipson, Schaum's Outlines, McGraw Hill (USA), 2009, ISBN 978-0-07-154352-1.
4. Mathematical Methods for Physics and Engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3.
5. Mathematical Methods for Physics and Engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3.
6. Miller, Webb (1975), "Computational complexity and numerical stability", SIAM News 4: 97–107.
7. Press 2007, p. 108.
8. Virginia Vassilevska Williams, "Breaking the Coppersmith–Winograd barrier". The original algorithm was presented by Don Coppersmith and Shmuel Winograd in 1990 and has an asymptotic complexity of O(n^2.376).
9. Robinson, Sara (2005), "Toward an Optimal Algorithm for Matrix Multiplication", SIAM News 38 (9).
10. Robinson, 2005.
11. Alon, Shpilka, Umans, "On Sunflowers and Matrix Multiplication".
12. Lynn Elliot Cannon, A Cellular Computer to Implement the Kalman Filter Algorithm, Ph.D. thesis, Montana State University, 14 July 1969.
13. Hong, J.W.; Kung, H.T. (1981). "I/O complexity: The red-blue pebble game". STOC '81: Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing: 326–333.
14. Irony, Dror; Toledo, Sivan; Tiskin, Alexander (September 2004). "Communication lower bounds for distributed-memory matrix multiplication". J. Parallel Distrib. Comput. 64 (9): 1017–1026. doi:10.1016/j.jpdc.2004.03.021.
15. Agarwal, R.C.; Balle, S.M.; Gustavson, F.G.; Joshi, M.; Palkar, P. (September 1995). "A three-dimensional approach to parallel matrix multiplication". IBM J. Res. Dev. 39 (5): 575–582. doi:10.1147/rd.395.0575.
16. Solomonik, Edgar; Demmel, James (2011). "Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms". Proceedings of the 17th International Conference on Parallel Processing, Part II: 90–109.
17. Horn & Johnson 1985, Ch. 5.