# Multilinear map

In linear algebra, a multilinear map is a function of several variables that is linear separately in each variable. More precisely, a multilinear map is a function

$f\colon V_{1}\times \cdots \times V_{n}\to W{\text{,}}$ where $V_{1},\ldots ,V_{n}$ and $W$ are vector spaces (or modules over a commutative ring), with the following property: for each $i$ , if all of the variables but $v_{i}$ are held constant, then $f(v_{1},\ldots ,v_{n})$ is a linear function of $v_{i}$ .
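
As a minimal numerical sketch of this defining property (the function `f` and the helpers `add` and `scale` below are illustrative choices, not any standard API), one can check that the dot product on $\mathbb {R} ^{2}$ is linear in each argument separately:

```python
# A minimal sketch: the dot product on R^2 is a bilinear map into the
# scalars, i.e. linear in each argument when the other is held fixed.

def f(v, w):
    """Dot product on R^2 (a 2-linear map into the scalars)."""
    return v[0] * w[0] + v[1] * w[1]

def add(v, w):
    """Componentwise vector addition in R^2."""
    return (v[0] + w[0], v[1] + w[1])

def scale(c, v):
    """Scalar multiplication in R^2."""
    return (c * v[0], c * v[1])

v, v2, w = (1.0, 2.0), (3.0, -1.0), (4.0, 5.0)
c = 2.5

# Linearity in the first argument, with the second held constant:
assert f(add(v, v2), w) == f(v, w) + f(v2, w)
assert f(scale(c, v), w) == c * f(v, w)
# Linearity in the second argument, with the first held constant:
assert f(w, add(v, v2)) == f(w, v) + f(w, v2)
assert f(w, scale(c, v)) == c * f(w, v)
```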

A multilinear map of one variable is a linear map, and of two variables is a bilinear map. More generally, a multilinear map of k variables is called a k-linear map. If the codomain of a multilinear map is the field of scalars, it is called a multilinear form. Multilinear maps and multilinear forms are fundamental objects of study in multilinear algebra.

If all variables belong to the same space, one can consider symmetric, antisymmetric and alternating $k$-linear maps. The latter two coincide if the underlying ring (or field) has a characteristic different from two; otherwise the former two coincide.
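
The relationship between these notions can be seen directly. An alternating bilinear map $f$ vanishes whenever its two arguments are equal, so expanding $f(v+w,v+w)=0$ by bilinearity gives

```latex
0 = f(v+w,\, v+w)
  = \underbrace{f(v,v)}_{=\,0} + f(v,w) + f(w,v) + \underbrace{f(w,w)}_{=\,0},
\qquad \text{so} \quad f(v,w) = -f(w,v),
```

i.e. every alternating map is antisymmetric. Conversely, if $f$ is antisymmetric then $f(v,v)=-f(v,v)$, so $2f(v,v)=0$; when the characteristic is different from two, this forces $f(v,v)=0$, and the two notions agree.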

## Examples

• Any bilinear map is a multilinear map. For example, any inner product on a real vector space is a multilinear map, as is the cross product of vectors in $\mathbb {R} ^{3}$ . (An inner product on a complex vector space is sesquilinear rather than bilinear.)
• The determinant of a matrix is an alternating multilinear function of the columns (or rows) of a square matrix.
• If $F\colon \mathbb {R} ^{m}\to \mathbb {R} ^{n}$ is a $C^{k}$ function, then the $k$ th derivative of $F$ at each point $p$ in its domain can be viewed as a symmetric $k$ -linear function $D^{k}F(p)\colon \mathbb {R} ^{m}\times \cdots \times \mathbb {R} ^{m}\to \mathbb {R} ^{n}$ .
• The tensor-to-vector projection in multilinear subspace learning is a multilinear map as well.
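
As a sketch of the determinant example (the function `det2` and the test rows below are ad hoc choices for illustration), the 2×2 determinant can be checked numerically to be multilinear and alternating in the rows:

```python
# A sketch: the 2x2 determinant, viewed as a function of the two rows,
# is linear in each row and vanishes whenever the rows are equal.

def det2(r1, r2):
    """Determinant of the 2x2 matrix with rows r1 and r2."""
    return r1[0] * r2[1] - r1[1] * r2[0]

r1, r1b, r2 = (1.0, 2.0), (3.0, 4.0), (5.0, 6.0)
c = 3.0

# Multilinearity: linear in the first row while the second is held fixed.
combined = (c * r1[0] + r1b[0], c * r1[1] + r1b[1])
assert det2(combined, r2) == c * det2(r1, r2) + det2(r1b, r2)

# Alternating: zero whenever two rows coincide...
assert det2(r1, r1) == 0.0
# ...which implies antisymmetry: swapping the rows flips the sign.
assert det2(r1, r2) == -det2(r2, r1)
```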

## Coordinate representation

Let

$f\colon V_{1}\times \cdots \times V_{n}\to W{\text{,}}$ be a multilinear map between finite-dimensional vector spaces, where $V_{i}\!$ has dimension $d_{i}\!$ , and $W\!$ has dimension $d\!$ . If we choose a basis $\{{\textbf {e}}_{i1},\ldots ,{\textbf {e}}_{id_{i}}\}$ for each $V_{i}\!$ and a basis $\{{\textbf {b}}_{1},\ldots ,{\textbf {b}}_{d}\}$ for $W\!$ (using bold for vectors), then we can define a collection of scalars $A_{j_{1}\cdots j_{n}}^{k}$ by

$f({\textbf {e}}_{1j_{1}},\ldots ,{\textbf {e}}_{nj_{n}})=A_{j_{1}\cdots j_{n}}^{1}\,{\textbf {b}}_{1}+\cdots +A_{j_{1}\cdots j_{n}}^{d}\,{\textbf {b}}_{d}.$

Then the scalars $\{A_{j_{1}\cdots j_{n}}^{k}\mid 1\leq j_{i}\leq d_{i},1\leq k\leq d\}$ completely determine the multilinear function $f\!$ . In particular, if

${\textbf {v}}_{i}=\sum _{j=1}^{d_{i}}v_{ij}{\textbf {e}}_{ij}\!$ for $1\leq i\leq n\!$ , then

$f({\textbf {v}}_{1},\ldots ,{\textbf {v}}_{n})=\sum _{j_{1}=1}^{d_{1}}\cdots \sum _{j_{n}=1}^{d_{n}}\sum _{k=1}^{d}A_{j_{1}\cdots j_{n}}^{k}v_{1j_{1}}\cdots v_{nj_{n}}{\textbf {b}}_{k}.$

## Example

Consider a trilinear function

$f\colon \mathbb {R} ^{2}\times \mathbb {R} ^{2}\times \mathbb {R} ^{2}\to \mathbb {R} ,$ where $V_{i}=\mathbb {R} ^{2}$ and $d_{i}=2$ for $i=1,2,3$ , and $W=\mathbb {R} $ with $d=1$ .

A basis for each $V_{i}$ is $\{{\textbf {e}}_{i1},\ldots ,{\textbf {e}}_{id_{i}}\}=\{{\textbf {e}}_{1},{\textbf {e}}_{2}\}=\{(1,0),(0,1)\}.$

Let

$f({\textbf {e}}_{1i},{\textbf {e}}_{2j},{\textbf {e}}_{3k})=f({\textbf {e}}_{i},{\textbf {e}}_{j},{\textbf {e}}_{k})=A_{ijk},$ where $i,j,k\in \{1,2\}$ . In other words, the constant $A_{ijk}$ is a function value at one of the eight possible triples of basis vectors (since there are two choices for each of the three $V_{i}$ ), namely:

$({\textbf {e}}_{1},{\textbf {e}}_{1},{\textbf {e}}_{1}),({\textbf {e}}_{1},{\textbf {e}}_{1},{\textbf {e}}_{2}),({\textbf {e}}_{1},{\textbf {e}}_{2},{\textbf {e}}_{1}),({\textbf {e}}_{1},{\textbf {e}}_{2},{\textbf {e}}_{2}),({\textbf {e}}_{2},{\textbf {e}}_{1},{\textbf {e}}_{1}),({\textbf {e}}_{2},{\textbf {e}}_{1},{\textbf {e}}_{2}),({\textbf {e}}_{2},{\textbf {e}}_{2},{\textbf {e}}_{1}),({\textbf {e}}_{2},{\textbf {e}}_{2},{\textbf {e}}_{2}).$

Each vector ${\textbf {v}}_{i}\in V_{i}=\mathbb {R} ^{2}$ can be expressed as a linear combination of the basis vectors

${\textbf {v}}_{i}=\sum _{j=1}^{2}v_{ij}{\textbf {e}}_{ij}=v_{i1}\,{\textbf {e}}_{1}+v_{i2}\,{\textbf {e}}_{2}=v_{i1}(1,0)+v_{i2}(0,1).$

The function value at an arbitrary collection of three vectors ${\textbf {v}}_{i}\in \mathbb {R} ^{2}$ can be expressed as

$f({\textbf {v}}_{1},{\textbf {v}}_{2},{\textbf {v}}_{3})=\sum _{i=1}^{2}\sum _{j=1}^{2}\sum _{k=1}^{2}A_{ijk}v_{1i}v_{2j}v_{3k}.$

Or, in expanded form,

{\begin{aligned}f((a,b),(c,d),(e,f))&=ace\,f({\textbf {e}}_{1},{\textbf {e}}_{1},{\textbf {e}}_{1})+acf\,f({\textbf {e}}_{1},{\textbf {e}}_{1},{\textbf {e}}_{2})+ade\,f({\textbf {e}}_{1},{\textbf {e}}_{2},{\textbf {e}}_{1})+adf\,f({\textbf {e}}_{1},{\textbf {e}}_{2},{\textbf {e}}_{2})\\&\quad +bce\,f({\textbf {e}}_{2},{\textbf {e}}_{1},{\textbf {e}}_{1})+bcf\,f({\textbf {e}}_{2},{\textbf {e}}_{1},{\textbf {e}}_{2})+bde\,f({\textbf {e}}_{2},{\textbf {e}}_{2},{\textbf {e}}_{1})+bdf\,f({\textbf {e}}_{2},{\textbf {e}}_{2},{\textbf {e}}_{2}).\end{aligned}}

## Relation to tensor products

There is a natural one-to-one correspondence between multilinear maps

$f\colon V_{1}\times \cdots \times V_{n}\to W{\text{,}}$ and linear maps

$F\colon V_{1}\otimes \cdots \otimes V_{n}\to W{\text{,}}$ where $V_{1}\otimes \cdots \otimes V_{n}\!$ denotes the tensor product of $V_{1},\ldots ,V_{n}$ . The relation between the functions $f\!$ and $F\!$ is given by the formula

$F(v_{1}\otimes \cdots \otimes v_{n})=f(v_{1},\ldots ,v_{n}).$

## Multilinear functions on n×n matrices

One can consider multilinear functions on an n×n matrix over a commutative ring K with identity as functions of the rows (or equivalently the columns) of the matrix. Let A be such a matrix and $a_{i}$ , $1\leq i\leq n$ , be the rows of A. Then the multilinear function D can be written as

$D(A)=D(a_{1},\ldots ,a_{n}),$ satisfying

$D(a_{1},\ldots ,ca_{i}+a_{i}',\ldots ,a_{n})=cD(a_{1},\ldots ,a_{i},\ldots ,a_{n})+D(a_{1},\ldots ,a_{i}',\ldots ,a_{n}).$

If we let ${\hat {e}}_{j}$ represent the jth row of the identity matrix, we can express each row $a_{i}$ as the sum

$a_{i}=\sum _{j=1}^{n}A(i,j){\hat {e}}_{j}.$

Using the multilinearity of D, we can rewrite D(A) as

$D(A)=D\left(\sum _{j=1}^{n}A(1,j){\hat {e}}_{j},a_{2},\ldots ,a_{n}\right)=\sum _{j=1}^{n}A(1,j)D({\hat {e}}_{j},a_{2},\ldots ,a_{n}).$

Continuing this substitution for each $a_{i}$ we get, for $1\leq k_{i}\leq n$ ,

$D(A)=\sum _{1\leq k_{i}\leq n}A(1,k_{1})A(2,k_{2})\dots A(n,k_{n})D({\hat {e}}_{k_{1}},\dots ,{\hat {e}}_{k_{n}}),$ where, since in our case $1\leq i\leq n$ ,

$\sum _{1\leq k_{i}\leq n}=\sum _{1\leq k_{1}\leq n}\ldots \sum _{1\leq k_{i}\leq n}\ldots \sum _{1\leq k_{n}\leq n}$ is a series of nested summations.

Therefore, D(A) is uniquely determined by how D operates on ${\hat {e}}_{k_{1}},\dots ,{\hat {e}}_{k_{n}}$ .
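
The expansion above can be sketched in code for n = 2 (the function `expand_D` and the dictionary `D_on_basis` are illustrative names, not a standard API): once the values of D on tuples of identity-matrix rows are fixed, D is determined on every matrix, and the alternating choice with D(I) = 1 reproduces the determinant.

```python
# A sketch for n = 2: evaluate a multilinear row-function D on a matrix A
# from its values on rows of the identity matrix, using
#   D(A) = sum over (k1, k2) of A[0][k1] * A[1][k2] * D(e_k1, e_k2).
from itertools import product

def expand_D(A, D_on_basis):
    n = len(A)
    total = 0.0
    for ks in product(range(n), repeat=n):
        coeff = 1.0
        for row, k in enumerate(ks):
            coeff *= A[row][k]  # the factor A(row, k) from the expansion
        total += coeff * D_on_basis[ks]
    return total

# The alternating choice with D(I) = 1 (zero on repeated rows, sign flip
# on a swap) pins D down to the 2x2 determinant.
D_on_basis = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): -1.0, (1, 1): 0.0}

A = [[1.0, 2.0], [3.0, 4.0]]
assert expand_D(A, D_on_basis) == A[0][0] * A[1][1] - A[0][1] * A[1][0]
```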

## Example

In the case of 2×2 matrices we get

$D(A)=A_{1,1}A_{2,1}D({\hat {e}}_{1},{\hat {e}}_{1})+A_{1,1}A_{2,2}D({\hat {e}}_{1},{\hat {e}}_{2})+A_{1,2}A_{2,1}D({\hat {e}}_{2},{\hat {e}}_{1})+A_{1,2}A_{2,2}D({\hat {e}}_{2},{\hat {e}}_{2}),$ where ${\hat {e}}_{1}=[1,0]$ and ${\hat {e}}_{2}=[0,1]$ . If we restrict $D$ to be an alternating function, then $D({\hat {e}}_{1},{\hat {e}}_{1})=D({\hat {e}}_{2},{\hat {e}}_{2})=0$ and $D({\hat {e}}_{2},{\hat {e}}_{1})=-D({\hat {e}}_{1},{\hat {e}}_{2})=-D(I)$ . Letting $D(I)=1$ , we get the determinant function on 2×2 matrices:

$D(A)=A_{1,1}A_{2,2}-A_{1,2}A_{2,1}.$

## Properties

• A multilinear map has a value of zero whenever one of its arguments is zero.
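
This follows from homogeneity in the relevant argument: writing the zero vector as $0=0\cdot w$ for any vector $w$ in that space,

```latex
f(v_1,\ldots,0,\ldots,v_n)
  = f(v_1,\ldots,0\cdot w,\ldots,v_n)
  = 0\cdot f(v_1,\ldots,w,\ldots,v_n)
  = 0.
```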