# Tensor product

In mathematics, the tensor product ${\displaystyle V\otimes W}$ of two vector spaces V and W (over the same field) is a vector space to which is associated a bilinear map ${\displaystyle V\times W\rightarrow V\otimes W}$ that maps a pair ${\displaystyle (v,w),\ v\in V,w\in W}$ to an element of ${\displaystyle V\otimes W}$ denoted ${\displaystyle v\otimes w}$.

An element of the form ${\displaystyle v\otimes w}$ is called the tensor product of v and w. An element of ${\displaystyle V\otimes W}$ is a tensor, and the tensor product of two vectors is sometimes called an elementary tensor or a decomposable tensor. The elementary tensors span ${\displaystyle V\otimes W}$ in the sense that every element of ${\displaystyle V\otimes W}$ is a sum of elementary tensors. If bases are given for V and W, a basis of ${\displaystyle V\otimes W}$ is formed by all tensor products of a basis element of V and a basis element of W.

The tensor product of two vector spaces captures the properties of all bilinear maps in the sense that a bilinear map from ${\displaystyle V\times W}$ into another vector space Z factors uniquely through a linear map ${\displaystyle V\otimes W\to Z}$ (see Universal property).

Tensor products are used in many application areas, including physics and engineering. For example, in general relativity, the gravitational field is described through the metric tensor, which is a tensor field: one tensor at each point of the space-time manifold, each belonging to the tensor product of the cotangent space at that point with itself.

## Definitions and constructions

The tensor product of two vector spaces is a vector space that is defined up to an isomorphism. There are several equivalent ways to define it. Most consist of defining explicitly a vector space that is called a tensor product, and, generally, the equivalence proof results almost immediately from the basic properties of the vector spaces that are so defined.

The tensor product can also be defined through a universal property; see § Universal property, below. As for every universal property, all objects that satisfy the property are isomorphic through a unique isomorphism that is compatible with the universal property. When this definition is used, the other definitions may be viewed as constructions of objects satisfying the universal property and as proofs that there are objects satisfying the universal property, that is that tensor products exist.

### From bases

Let V and W be two vector spaces over a field F, with respective bases ${\displaystyle B_{V}}$ and ${\displaystyle B_{W}}$.

The tensor product ${\displaystyle V\otimes W}$ of V and W is a vector space that has as a basis the set of all ${\displaystyle v\otimes w}$ with ${\displaystyle v\in B_{V}}$ and ${\displaystyle w\in B_{W}}$. This definition can be formalized in the following way (this formalization is rarely used in practice, as the preceding informal definition is generally sufficient): ${\displaystyle V\otimes W}$ is the set of the functions from the Cartesian product ${\displaystyle B_{V}\times B_{W}}$ to F that have a finite number of nonzero values. The pointwise operations make ${\displaystyle V\otimes W}$ a vector space. The function that maps ${\displaystyle (v,w)}$ to 1 and the other elements of ${\displaystyle B_{V}\times B_{W}}$ to 0 is denoted ${\displaystyle v\otimes w}$.

The set ${\displaystyle \{v\otimes w\mid v\in B_{V},w\in B_{W}\}}$ is then straightforwardly a basis of ${\displaystyle V\otimes W}$, which is called the tensor product of the bases ${\displaystyle B_{V}}$ and ${\displaystyle B_{W}}$.
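For finite-dimensional spaces this basis construction can be checked numerically. The following sketch (a NumPy illustration, modeling each elementary tensor as the Kronecker product of coordinate vectors) verifies that the 2 · 3 = 6 products of basis vectors are linearly independent:

```python
import numpy as np

# Standard bases of V = R^2 and W = R^3; each elementary tensor e_i (x) f_j
# is modeled by the Kronecker product of the coordinate vectors.
E = np.eye(2)   # basis of V
F = np.eye(3)   # basis of W
basis = [np.kron(E[i], F[j]) for i in range(2) for j in range(3)]

B = np.column_stack(basis)                # 6 x 6 matrix whose columns are the tensors
assert B.shape == (6, 6)
assert np.linalg.matrix_rank(B) == 6      # the 6 = 2 * 3 tensors form a basis
```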

We can equivalently define ${\displaystyle V\otimes W}$ to be the set of bilinear forms on ${\displaystyle V\times W}$ that are nonzero at only a finite number of elements of ${\displaystyle B_{V}\times B_{W}}$. To see this, given ${\displaystyle (x,y)\in V\times W}$ and a bilinear form ${\displaystyle B:V\times W\to F}$, we can decompose ${\displaystyle x}$ and ${\displaystyle y}$ in the bases ${\displaystyle B_{V}}$ and ${\displaystyle B_{W}}$ as:

${\displaystyle x=\sum _{v\in B_{V}}x_{v}\,v\quad {\text{and}}\quad y=\sum _{w\in B_{W}}y_{w}\,w,}$
where only a finite number of ${\displaystyle x_{v}}$'s and ${\displaystyle y_{w}}$'s are nonzero, and find by the bilinearity of ${\displaystyle B}$ that:
${\displaystyle B(x,y)=\sum _{v\in B_{V}}\sum _{w\in B_{W}}x_{v}y_{w}\,B(v,w)}$

Hence, we see that the value of ${\displaystyle B}$ for any ${\displaystyle (x,y)\in V\times W}$ is uniquely and totally determined by the values that it takes on ${\displaystyle B_{V}\times B_{W}}$. This lets us extend the maps ${\displaystyle v\otimes w}$ defined on ${\displaystyle B_{V}\times B_{W}}$ as before into bilinear maps ${\displaystyle v\otimes w:V\times W\to F}$, by letting:

${\displaystyle (v\otimes w)(x,y):=\sum _{v'\in B_{V}}\sum _{w'\in B_{W}}x_{v'}y_{w'}\,(v\otimes w)(v',w')=x_{v}\,y_{w}.}$

Then we can express any bilinear form ${\displaystyle B}$ as a (potentially infinite) formal linear combination of the ${\displaystyle v\otimes w}$ maps according to:

${\displaystyle B=\sum _{v\in B_{V}}\sum _{w\in B_{W}}B(v,w)(v\otimes w)}$
making these maps similar to a Schauder basis for the vector space ${\displaystyle {\text{Hom}}(V,W;F)}$ of all bilinear forms on ${\displaystyle V\times W}$. To instead have it be a proper Hamel basis, it only remains to add the requirement that ${\displaystyle B}$ is nonzero at only a finite number of elements of ${\displaystyle B_{V}\times B_{W}}$, and consider the subspace of such maps instead.

In either construction, the tensor product of two vectors is defined from their decomposition on the bases. More precisely, taking the basis decompositions of ${\displaystyle x\in V}$ and ${\displaystyle y\in W}$ as before:

${\displaystyle {\begin{aligned}x\otimes y&={\biggl (}\sum _{v\in B_{V}}x_{v}\,v{\biggr )}\otimes {\biggl (}\sum _{w\in B_{W}}y_{w}\,w{\biggr )}\\[5mu]&=\sum _{v\in B_{V}}\sum _{w\in B_{W}}x_{v}y_{w}\,v\otimes w.\end{aligned}}}$

This definition is quite clearly derived from the coefficients of ${\displaystyle B(v,w)}$ in the expansion by bilinearity of ${\displaystyle B(x,y)}$ using the bases ${\displaystyle B_{V}}$ and ${\displaystyle B_{W}}$, as done above. It is then straightforward to verify that with this definition, the map ${\displaystyle {\otimes }:(x,y)\mapsto x\otimes y}$ is a bilinear map from ${\displaystyle V\times W}$ to ${\displaystyle V\otimes W}$ satisfying the universal property that any construction of the tensor product satisfies (see below).

If arranged into a rectangular array, the coordinate vector of ${\displaystyle x\otimes y}$ is the outer product of the coordinate vectors of ${\displaystyle x}$ and ${\displaystyle y}$. Therefore, the tensor product is a generalization of the outer product, that is, an abstraction of it beyond coordinate vectors.
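Concretely, in coordinates this is the familiar outer product; a minimal NumPy check (the vectors below are arbitrary illustrations):

```python
import numpy as np

x = np.array([1.0, 2.0])          # coordinates of x in V
y = np.array([3.0, 4.0, 5.0])     # coordinates of y in W
outer = np.outer(x, y)            # rectangular array of coordinates of x (x) y

# Flattening the outer product row by row gives the Kronecker coordinate
# vector of x (x) y in the tensor-product basis.
assert np.array_equal(outer.reshape(-1), np.kron(x, y))
```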

A limitation of this definition of the tensor product is that, if one changes bases, a different tensor product is defined. However, expressing the elements of one pair of bases in terms of the other defines a canonical isomorphism between the two tensor products, which allows identifying them. Also, unlike the two alternative definitions that follow, this definition cannot be extended into a definition of the tensor product of modules over a ring.

### As a quotient space

A construction of the tensor product that is basis independent can be obtained in the following way.

Let V and W be two vector spaces over a field F.

One considers first a vector space L that has the Cartesian product ${\displaystyle V\times W}$ as a basis. That is, the basis elements of L are the pairs ${\displaystyle (v,w)}$ with ${\displaystyle v\in V}$ and ${\displaystyle w\in W}$. To get such a vector space, one can define it as the vector space of the functions ${\displaystyle V\times W\to F}$ that have a finite number of nonzero values, identifying ${\displaystyle (v,w)}$ with the function that takes the value 1 on ${\displaystyle (v,w)}$ and 0 otherwise.

Let R be the linear subspace of L that is spanned by the relations that the tensor product must satisfy. More precisely, R is spanned by the elements of one of the forms:

${\displaystyle {\begin{aligned}(v_{1}+v_{2},w)&-(v_{1},w)-(v_{2},w),\\(v,w_{1}+w_{2})&-(v,w_{1})-(v,w_{2}),\\(sv,w)&-s(v,w),\\(v,sw)&-s(v,w),\end{aligned}}}$

where ${\displaystyle v,v_{1},v_{2}\in V}$, ${\displaystyle w,w_{1},w_{2}\in W}$ and ${\displaystyle s\in F}$.

Then, the tensor product is defined as the quotient space:

${\displaystyle V\otimes W=L/R,}$

and the image of ${\displaystyle (v,w)}$ in this quotient is denoted ${\displaystyle v\otimes w}$.

It is straightforward to prove that the result of this construction satisfies the universal property considered below. (A very similar construction can be used to define the tensor product of modules.)

### Universal property

In this section, the universal property satisfied by the tensor product is described. As for every universal property, two objects that satisfy the property are related by a unique isomorphism. It follows that this is a (non-constructive) way to define the tensor product of two vector spaces. In this context, the preceding constructions of tensor products may be viewed as proofs of existence of the tensor product so defined.

A consequence of this approach is that every property of the tensor product can be deduced from the universal property, and that, in practice, one may forget the method that has been used to prove its existence.

The "universal-property definition" of the tensor product of two vector spaces is the following (recall that a bilinear map is a function that is separately linear in each of its arguments):

The tensor product of two vector spaces V and W is a vector space denoted as ${\displaystyle V\otimes W}$, together with a bilinear map ${\displaystyle {\otimes }:(v,w)\mapsto v\otimes w}$ from ${\displaystyle V\times W}$ to ${\displaystyle V\otimes W}$, such that, for every bilinear map ${\displaystyle h:V\times W\to Z}$, there is a unique linear map ${\displaystyle {\tilde {h}}:V\otimes W\to Z}$, such that ${\displaystyle h={\tilde {h}}\circ {\otimes }}$ (that is, ${\displaystyle h(v,w)={\tilde {h}}(v\otimes w)}$ for every ${\displaystyle v\in V}$ and ${\displaystyle w\in W}$).
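For finite-dimensional real spaces, the factorization in the universal property can be sketched numerically, identifying ${\displaystyle V\otimes W}$ with ${\displaystyle \mathbb {R} ^{mn}}$ via Kronecker coordinates. In this sketch (the matrix and vectors are arbitrary illustrations) a bilinear map ${\displaystyle h(v,w)=v^{\mathsf {T}}Mw}$ factors as ${\displaystyle {\tilde {h}}\circ {\otimes }}$, where ${\displaystyle {\tilde {h}}}$ has coefficient vector given by flattening M:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4))       # matrix of a bilinear map h(v, w) = v^T M w
v = rng.standard_normal(3)
w = rng.standard_normal(4)

h = v @ M @ w                         # the bilinear map evaluated directly
h_tilde = M.reshape(-1)               # the induced linear map on V (x) W = R^12

# h = h_tilde composed with (x), where (x) is the Kronecker product of coordinates
assert np.isclose(h, h_tilde @ np.kron(v, w))
```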

### Linearly disjoint

Like the universal property above, the following characterization may also be used to determine whether or not a given vector space and given bilinear map form a tensor product.[1]

Theorem — Let ${\displaystyle X,Y}$, and ${\displaystyle Z}$ be complex vector spaces and let ${\displaystyle T:X\times Y\to Z}$ be a bilinear map. Then ${\displaystyle (Z,T)}$ is a tensor product of ${\displaystyle X}$ and ${\displaystyle Y}$ if and only if[1] the image of ${\displaystyle T}$ spans all of ${\displaystyle Z}$ (that is, ${\displaystyle \operatorname {span} \;T(X\times Y)=Z}$), and also ${\displaystyle X}$ and ${\displaystyle Y}$ are ${\displaystyle T}$-linearly disjoint, which by definition means that for all positive integers ${\displaystyle n}$ and all elements ${\displaystyle x_{1},\ldots ,x_{n}\in X}$ and ${\displaystyle y_{1},\ldots ,y_{n}\in Y}$ such that ${\displaystyle \sum _{i=1}^{n}T\left(x_{i},y_{i}\right)=0}$,

1. if all ${\displaystyle x_{1},\ldots ,x_{n}}$ are linearly independent then all ${\displaystyle y_{i}}$ are ${\displaystyle 0}$, and
2. if all ${\displaystyle y_{1},\ldots ,y_{n}}$ are linearly independent then all ${\displaystyle x_{i}}$ are ${\displaystyle 0}$.

Equivalently, ${\displaystyle X}$ and ${\displaystyle Y}$ are ${\displaystyle T}$-linearly disjoint if and only if for all linearly independent sequences ${\displaystyle x_{1},\ldots ,x_{m}}$ in ${\displaystyle X}$ and all linearly independent sequences ${\displaystyle y_{1},\ldots ,y_{n}}$ in ${\displaystyle Y}$, the vectors ${\displaystyle \left\{T\left(x_{i},y_{j}\right):1\leq i\leq m,1\leq j\leq n\right\}}$ are linearly independent.

For example, it follows immediately that if ${\displaystyle m}$ and ${\displaystyle n}$ are positive integers then ${\displaystyle Z:=\mathbb {C} ^{mn}}$ and the bilinear map ${\displaystyle T:\mathbb {C} ^{m}\times \mathbb {C} ^{n}\to \mathbb {C} ^{mn}}$ defined by sending ${\displaystyle (x,y)=\left(\left(x_{1},\ldots ,x_{m}\right),\left(y_{1},\ldots ,y_{n}\right)\right)}$ to ${\displaystyle \left(x_{i}y_{j}\right)_{\stackrel {i=1,\ldots ,m}{j=1,\ldots ,n}}}$ form a tensor product of ${\displaystyle X:=\mathbb {C} ^{m}}$ and ${\displaystyle Y:=\mathbb {C} ^{n}}$.[2] Often, this map ${\displaystyle T}$ will be denoted by ${\displaystyle \,\otimes \,}$ so that ${\displaystyle x\otimes y\;:=\;T(x,y)}$ denotes this bilinear map's value at ${\displaystyle (x,y)\in X\times Y}$.

As another example, suppose that ${\displaystyle \mathbb {C} ^{S}}$ is the vector space of all complex-valued functions on a set ${\displaystyle S}$ with addition and scalar multiplication defined pointwise (meaning that ${\displaystyle f+g}$ is the map ${\displaystyle s\mapsto f(s)+g(s)}$ and ${\displaystyle cf}$ is the map ${\displaystyle s\mapsto cf(s)}$). Let ${\displaystyle S}$ and ${\displaystyle T}$ be any sets and for any ${\displaystyle f\in \mathbb {C} ^{S}}$ and ${\displaystyle g\in \mathbb {C} ^{T}}$, let ${\displaystyle f\otimes g\in \mathbb {C} ^{S\times T}}$ denote the function defined by ${\displaystyle (s,t)\mapsto f(s)g(t)}$. If ${\displaystyle X\subseteq \mathbb {C} ^{S}}$ and ${\displaystyle Y\subseteq \mathbb {C} ^{T}}$ are vector subspaces then the vector subspace ${\displaystyle Z:=\operatorname {span} \left\{f\otimes g:f\in X,g\in Y\right\}}$ of ${\displaystyle \mathbb {C} ^{S\times T}}$ together with the bilinear map:

${\displaystyle {\begin{alignedat}{4}\;&&X\times Y&&\;\to \;&Z\\[0.3ex]&&(f,g)&&\;\mapsto \;&f\otimes g\\\end{alignedat}}}$
form a tensor product of ${\displaystyle X}$ and ${\displaystyle Y}$.[2]
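This function-space tensor product is easy to model directly: the tensor of two functions is the function of two arguments defined pointwise. A short Python sketch (the particular functions are arbitrary illustrations):

```python
# f (x) g as a function on S x T, defined pointwise by (s, t) -> f(s) g(t).
def tensor(f, g):
    return lambda s, t: f(s) * g(t)

f = lambda s: s + 1          # an element of C^S (here S = numbers)
g = lambda t: 2 * t          # an element of C^T
h = tensor(f, g)
assert h(3, 5) == (3 + 1) * (2 * 5)   # = 40
```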

## Properties

### Dimension

If V and W are vector spaces of finite dimension, then ${\displaystyle V\otimes W}$ is finite-dimensional, and its dimension is the product of the dimensions of V and W.

This results from the fact that a basis of ${\displaystyle V\otimes W}$ is formed by taking all tensor products of a basis element of V and a basis element of W.

### Associativity

The tensor product is associative in the sense that, given three vector spaces ${\displaystyle U,V,W}$, there is a canonical isomorphism:

${\displaystyle (U\otimes V)\otimes W\cong U\otimes (V\otimes W),}$

that maps ${\displaystyle (u\otimes v)\otimes w}$ to ${\displaystyle u\otimes (v\otimes w)}$.

This allows omitting parentheses in the tensor product of more than two vector spaces or vectors.
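In Kronecker coordinates the canonical associativity isomorphism is literally the identity, which a one-line NumPy check makes visible (the vectors are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal(2)
v = rng.standard_normal(3)
w = rng.standard_normal(4)

# (u (x) v) (x) w and u (x) (v (x) w) have the same coordinate vector in R^24.
assert np.allclose(np.kron(np.kron(u, v), w), np.kron(u, np.kron(v, w)))
```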

### Commutativity as vector space operation

The tensor product of two vector spaces ${\displaystyle V}$ and ${\displaystyle W}$ is commutative in the sense that there is a canonical isomorphism:

${\displaystyle V\otimes W\cong W\otimes V,}$

that maps ${\displaystyle v\otimes w}$ to ${\displaystyle w\otimes v}$.

On the other hand, even when ${\displaystyle V=W}$, the tensor product of vectors is not commutative; that is, ${\displaystyle v\otimes w\neq w\otimes v}$ in general.

The map ${\displaystyle x\otimes y\mapsto y\otimes x}$ from ${\displaystyle V\otimes V}$ to itself induces a linear automorphism that is called a braiding map. More generally and as usual (see tensor algebra), let ${\displaystyle V^{\otimes n}}$ denote the tensor product of n copies of the vector space V. For every permutation s of the first n positive integers, the map:

${\displaystyle x_{1}\otimes \cdots \otimes x_{n}\mapsto x_{s(1)}\otimes \cdots \otimes x_{s(n)}}$

induces a linear automorphism of ${\displaystyle V^{\otimes n}}$, which is called a braiding map.
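In coordinates, the braiding map on ${\displaystyle V\otimes V}$ simply transposes the two tensor axes; a minimal NumPy sketch with deliberately non-proportional vectors:

```python
import numpy as np

v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0])
T = np.multiply.outer(v, w)       # coordinate array of v (x) w in V (x) V

# The braiding map x (x) y -> y (x) x transposes the two axes.
assert np.allclose(T.T, np.multiply.outer(w, v))
assert not np.allclose(T, T.T)    # v (x) w != w (x) v in general
```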

## Tensor product of linear maps

Given a linear map ${\displaystyle f:U\to V}$, and a vector space W, the tensor product:

${\displaystyle f\otimes W:U\otimes W\to V\otimes W}$

is the unique linear map such that:

${\displaystyle (f\otimes W)(u\otimes w)=f(u)\otimes w.}$

The tensor product ${\displaystyle W\otimes f}$ is defined similarly.

Given two linear maps ${\displaystyle f:U\to V}$ and ${\displaystyle g:W\to Z}$, their tensor product:

${\displaystyle f\otimes g:U\otimes W\to V\otimes Z}$

is the unique linear map that satisfies:

${\displaystyle (f\otimes g)(u\otimes w)=f(u)\otimes g(w).}$

One has:

${\displaystyle f\otimes g=(f\otimes Z)\circ (U\otimes g)=(V\otimes g)\circ (f\otimes W).}$

In terms of category theory, this means that the tensor product is a bifunctor from the category of vector spaces to itself.[3]

If f and g are both injective or surjective, then the same is true for all the linear maps defined above. In particular, the tensor product with a vector space is an exact functor; this means that every exact sequence is mapped to an exact sequence (tensor products of modules do not transform injections into injections, but they are right exact functors).

By choosing bases of all vector spaces involved, the linear maps f and g can be represented by matrices. Then, depending on how the tensor ${\displaystyle v\otimes w}$ is vectorized, the matrix describing the tensor product ${\displaystyle f\otimes g}$ is the Kronecker product of the two matrices. For example, if U, V, W, and Z above are all two-dimensional and bases have been fixed for all of them, and f and g are given by the matrices:

${\displaystyle A={\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}},\qquad B={\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}},}$
respectively, then the tensor product of these two matrices is:
${\displaystyle {\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}}\otimes {\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}={\begin{bmatrix}a_{1,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{1,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\[3pt]a_{2,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{2,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\\end{bmatrix}}={\begin{bmatrix}a_{1,1}b_{1,1}&a_{1,1}b_{1,2}&a_{1,2}b_{1,1}&a_{1,2}b_{1,2}\\a_{1,1}b_{2,1}&a_{1,1}b_{2,2}&a_{1,2}b_{2,1}&a_{1,2}b_{2,2}\\a_{2,1}b_{1,1}&a_{2,1}b_{1,2}&a_{2,2}b_{1,1}&a_{2,2}b_{1,2}\\a_{2,1}b_{2,1}&a_{2,1}b_{2,2}&a_{2,2}b_{2,1}&a_{2,2}b_{2,2}\\\end{bmatrix}}.}$

The resultant rank is at most 4, and thus the resultant dimension is 4. Note that rank here denotes the tensor rank, i.e., the number of requisite indices (while the matrix rank counts the number of degrees of freedom in the resulting array). Moreover, ${\displaystyle \operatorname {Tr} (A\otimes B)=\operatorname {Tr} A\times \operatorname {Tr} B}$.
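The Kronecker-product description can be verified numerically. The following NumPy sketch (the matrices and vectors are arbitrary illustrations) checks the defining property, the composition identity, and the trace identity:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 2))      # matrix of f
B = rng.standard_normal((2, 2))      # matrix of g
u = rng.standard_normal(2)
w = rng.standard_normal(2)

# (f (x) g)(u (x) w) = f(u) (x) g(w): the Kronecker matrix acts on Kronecker vectors.
assert np.allclose(np.kron(A, B) @ np.kron(u, w), np.kron(A @ u, B @ w))

# f (x) g = (f (x) Z) o (U (x) g): tensoring with an identity factor is kron with I.
I = np.eye(2)
assert np.allclose(np.kron(A, B), np.kron(A, I) @ np.kron(I, B))

# Tr(A (x) B) = Tr(A) Tr(B).
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))
```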

A dyadic product is the special case of the tensor product between two vectors of the same dimension.

## General tensors

For non-negative integers r and s a type ${\displaystyle (r,s)}$ tensor on a vector space V is an element of:

${\displaystyle T_{s}^{r}(V)=\underbrace {V\otimes \cdots \otimes V} _{r}\otimes \underbrace {V^{*}\otimes \cdots \otimes V^{*}} _{s}=V^{\otimes r}\otimes \left(V^{*}\right)^{\otimes s}.}$
Here ${\displaystyle V^{*}}$ is the dual vector space (which consists of all linear maps f from V to the ground field K).

There is a product map, called the (tensor) product of tensors:[4]

${\displaystyle T_{s}^{r}(V)\otimes _{K}T_{s'}^{r'}(V)\to T_{s+s'}^{r+r'}(V).}$

It is defined by grouping all occurring "factors" V together: writing ${\displaystyle v_{i}}$ for an element of V and ${\displaystyle f_{i}}$ for an element of the dual space:

${\displaystyle (v_{1}\otimes f_{1})\otimes (v'_{1})=v_{1}\otimes v'_{1}\otimes f_{1}.}$

If V is finite dimensional, then picking a basis of V and the corresponding dual basis of ${\displaystyle V^{*}}$ naturally induces a basis of ${\displaystyle T_{s}^{r}(V)}$ (this basis is described in the article on Kronecker products). In terms of these bases, the components of a (tensor) product of two (or more) tensors can be computed. For example, if F and G are two covariant tensors of orders m and n respectively (i.e. ${\displaystyle F\in T_{m}^{0}}$ and ${\displaystyle G\in T_{n}^{0}}$), then the components of their tensor product are given by:[5]

${\displaystyle (F\otimes G)_{i_{1}i_{2}\cdots i_{m+n}}=F_{i_{1}i_{2}\cdots i_{m}}G_{i_{m+1}i_{m+2}\cdots i_{m+n}}.}$
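This componentwise product is exactly NumPy's generalized outer product; a small sketch with arbitrary illustrative tensors of orders 2 and 1:

```python
import numpy as np

rng = np.random.default_rng(4)
F = rng.standard_normal((2, 3))      # covariant tensor of order 2
G = rng.standard_normal(4)           # covariant tensor of order 1

FG = np.multiply.outer(F, G)         # components (F (x) G)_{ijk} = F_{ij} G_k
assert FG.shape == (2, 3, 4)
assert np.isclose(FG[1, 2, 3], F[1, 2] * G[3])
```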

Thus, the components of the tensor product of two tensors are the ordinary product of the components of each tensor. Another example: let U be a tensor of type (1, 1) with components ${\displaystyle U_{\beta }^{\alpha }}$, and let V be a tensor of type ${\displaystyle (1,0)}$ with components ${\displaystyle V^{\gamma }}$. Then:

${\displaystyle \left(U\otimes V\right)^{\alpha }{}_{\beta }{}^{\gamma }=U^{\alpha }{}_{\beta }V^{\gamma }}$
and:
${\displaystyle (V\otimes U)^{\mu \nu }{}_{\sigma }=V^{\mu }U^{\nu }{}_{\sigma }.}$

Tensors equipped with their product operation form an algebra, called the tensor algebra.

### Evaluation map and tensor contraction

For tensors of type (1, 1) there is a canonical evaluation map:

${\displaystyle V\otimes V^{*}\to K}$
defined by its action on pure tensors:
${\displaystyle v\otimes f\mapsto f(v).}$

More generally, for tensors of type ${\displaystyle (r,s)}$, with r, s > 0, there is a map, called tensor contraction:

${\displaystyle T_{s}^{r}(V)\to T_{s-1}^{r-1}(V).}$
(The copies of ${\displaystyle V}$ and ${\displaystyle V^{*}}$ on which this map is to be applied must be specified.)
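In coordinates the evaluation map on a (1, 1) tensor is the contraction of its two indices, i.e., the trace of the corresponding array; a minimal NumPy sketch with arbitrary illustrative data:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])        # an element of V
f = np.array([0.5, -1.0, 2.0])       # an element of V*, written as a covector

T = np.multiply.outer(v, f)          # the (1,1) tensor v (x) f
# The evaluation map v (x) f -> f(v) contracts the upper index with the lower,
# which in components is the trace of the array.
assert np.isclose(np.trace(T), f @ v)
```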

On the other hand, if ${\displaystyle V}$ is finite-dimensional, there is a canonical map in the other direction (called the coevaluation map):

${\displaystyle {\begin{cases}K\to V\otimes V^{*}\\\lambda \mapsto \sum _{i}\lambda v_{i}\otimes v_{i}^{*}\end{cases}}}$
where ${\displaystyle v_{1},\ldots ,v_{n}}$ is any basis of ${\displaystyle V}$, and ${\displaystyle v_{i}^{*}}$ is its dual basis. This map does not depend on the choice of basis.[6]
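The basis independence of the coevaluation map can be checked directly: under the identification of ${\displaystyle V\otimes V^{*}}$ with matrices, ${\displaystyle \sum _{i}v_{i}\otimes v_{i}^{*}}$ is the identity for any basis. A NumPy sketch with a randomly chosen (almost surely invertible) basis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
B = rng.standard_normal((n, n))      # columns: an arbitrary basis v_1, ..., v_n
Bstar = np.linalg.inv(B).T           # columns: the dual basis (v_i^*(v_j) = delta_ij)

# Coevaluation of 1: sum_i v_i (x) v_i^*, written as a matrix via the
# identification V (x) V* = End(V).  The result is the identity, for any basis.
C = sum(np.outer(B[:, i], Bstar[:, i]) for i in range(n))
assert np.allclose(C, np.eye(n))
```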

The interplay of evaluation and coevaluation can be used to characterize finite-dimensional vector spaces without referring to bases.[7]

The tensor product ${\displaystyle T_{s}^{r}(V)}$ may be naturally viewed as a module for the Lie algebra ${\displaystyle \mathrm {End} (V)}$ by means of the diagonal action: for simplicity let us assume ${\displaystyle r=s=1}$, then, for each ${\displaystyle u\in \mathrm {End} (V)}$,

${\displaystyle u(a\otimes b)=u(a)\otimes b-a\otimes u^{*}(b),}$
where ${\displaystyle u^{*}\in \mathrm {End} \left(V^{*}\right)}$ is the transpose of u, that is, in terms of the obvious pairing on ${\displaystyle V\otimes V^{*}}$,
${\displaystyle \langle u(a),b\rangle =\langle a,u^{*}(b)\rangle .}$

There is a canonical isomorphism ${\displaystyle T_{1}^{1}(V)\to \mathrm {End} (V)}$ given by:

${\displaystyle (a\otimes b)(x)=\langle x,b\rangle a.}$

Under this isomorphism, every u in ${\displaystyle \mathrm {End} (V)}$ may be first viewed as an endomorphism of ${\displaystyle T_{1}^{1}(V)}$ and then viewed as an endomorphism of ${\displaystyle \mathrm {End} (V)}$. In fact it is the adjoint representation ad(u) of ${\displaystyle \mathrm {End} (V)}$.

## Linear maps as tensors

Given two finite dimensional vector spaces U, V over the same field K, denote the dual space of U as U*, and the K-vector space of all linear maps from U to V as Hom(U,V). There is an isomorphism:

${\displaystyle U^{*}\otimes V\cong \mathrm {Hom} (U,V),}$
defined by an action of the pure tensor ${\displaystyle f\otimes v\in U^{*}\otimes V}$ on an element of ${\displaystyle U}$,
${\displaystyle (f\otimes v)(u)=f(u)v.}$

Its "inverse" can be defined using a basis ${\displaystyle \{u_{i}\}}$ and its dual basis ${\displaystyle \{u_{i}^{*}\}}$ as in the section "Evaluation map and tensor contraction" above:

${\displaystyle {\begin{cases}\mathrm {Hom} (U,V)\to U^{*}\otimes V\\F\mapsto \sum _{i}u_{i}^{*}\otimes F(u_{i}).\end{cases}}}$
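The two maps really are mutually inverse, and the inverse does not depend on the basis chosen. A NumPy sketch (the map and basis are arbitrary illustrations; each pure tensor ${\displaystyle f\otimes v}$ corresponds to the rank-one matrix ${\displaystyle vf^{\mathsf {T}}}$):

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.standard_normal((3, 2))      # a linear map F: U -> V, dim U = 2, dim V = 3
B = rng.standard_normal((2, 2))      # columns: an arbitrary basis u_1, u_2 of U
Bstar = np.linalg.inv(B).T           # columns: the dual basis u_1^*, u_2^*

# F -> sum_i u_i^* (x) F(u_i); each pure tensor f (x) v acts as u -> f(u) v,
# i.e. corresponds to the rank-one matrix v f^T.  Summing recovers F exactly.
recon = sum(np.outer(F @ B[:, i], Bstar[:, i]) for i in range(2))
assert np.allclose(recon, F)
```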

This result implies:

${\displaystyle \dim(U\otimes V)=\dim(U)\dim(V),}$
which automatically gives the important fact that ${\displaystyle \{u_{i}\otimes v_{j}\}}$ forms a basis of ${\displaystyle U\otimes V}$ where ${\displaystyle \{u_{i}\},\{v_{j}\}}$ are bases of U and V.

Furthermore, given three vector spaces U, V, W the tensor product is linked to the vector space of all linear maps, as follows:

${\displaystyle \mathrm {Hom} (U\otimes V,W)\cong \mathrm {Hom} (U,\mathrm {Hom} (V,W)).}$
This is an example of adjoint functors: the tensor product is "left adjoint" to Hom.

## Tensor products of modules over a ring

The tensor product of two modules A and B over a commutative ring R is defined in exactly the same way as the tensor product of vector spaces over a field:

${\displaystyle A\otimes _{R}B:=F(A\times B)/G,}$
where now ${\displaystyle F(A\times B)}$ is the free R-module generated by the Cartesian product, and G is the R-module generated by the same relations as above.

More generally, the tensor product can be defined even if the ring is non-commutative. In this case A has to be a right-R-module and B is a left-R-module, and instead of the last two relations above, the relation:

${\displaystyle (ar,b)\sim (a,rb)}$
is imposed. If R is non-commutative, this is no longer an R-module, but just an abelian group.

The universal property also carries over, slightly modified: the map ${\displaystyle \varphi :A\times B\to A\otimes _{R}B}$ defined by ${\displaystyle (a,b)\mapsto a\otimes b}$ is a middle linear map (referred to as "the canonical middle linear map"[8]); that is, it satisfies:[9]

${\displaystyle {\begin{aligned}\varphi (a+a',b)&=\varphi (a,b)+\varphi (a',b)\\\varphi (a,b+b')&=\varphi (a,b)+\varphi (a,b')\\\varphi (ar,b)&=\varphi (a,rb)\end{aligned}}}$

The first two properties make φ a bilinear map of the abelian group ${\displaystyle A\times B}$. For any middle linear map ${\displaystyle \psi }$ of ${\displaystyle A\times B}$, a unique group homomorphism f of ${\displaystyle A\otimes _{R}B}$ satisfies ${\displaystyle \psi =f\circ \varphi }$, and this property determines ${\displaystyle \varphi }$ up to group isomorphism. See the main article for details.

### Tensor product of modules over a non-commutative ring

Let A be a right R-module and B be a left R-module. Then the tensor product of A and B is an abelian group defined by:

${\displaystyle A\otimes _{R}B:=F(A\times B)/G}$
where ${\displaystyle F(A\times B)}$ is a free abelian group over ${\displaystyle A\times B}$ and G is the subgroup of ${\displaystyle F(A\times B)}$ generated by relations:
${\displaystyle {\begin{aligned}&\forall a,a_{1},a_{2}\in A,\forall b,b_{1},b_{2}\in B,{\text{ for all }}r\in R:\\&(a_{1},b)+(a_{2},b)-(a_{1}+a_{2},b),\\&(a,b_{1})+(a,b_{2})-(a,b_{1}+b_{2}),\\&(ar,b)-(a,rb).\\\end{aligned}}}$

The universal property can be stated as follows. Let G be an abelian group with a map ${\displaystyle q:A\times B\to G}$ that is bilinear, in the sense that:

${\displaystyle {\begin{aligned}q(a_{1}+a_{2},b)&=q(a_{1},b)+q(a_{2},b),\\q(a,b_{1}+b_{2})&=q(a,b_{1})+q(a,b_{2}),\\q(ar,b)&=q(a,rb).\end{aligned}}}$

Then there is a unique map ${\displaystyle {\overline {q}}:A\otimes B\to G}$ such that ${\displaystyle {\overline {q}}(a\otimes b)=q(a,b)}$ for all ${\displaystyle a\in A}$ and ${\displaystyle b\in B}$.

Furthermore, we can give ${\displaystyle A\otimes _{R}B}$ a module structure under some extra conditions:

1. If A is a (S,R)-bimodule, then ${\displaystyle A\otimes _{R}B}$ is a left S-module, where ${\displaystyle s(a\otimes b):=(sa)\otimes b}$.
2. If B is a (R,S)-bimodule, then ${\displaystyle A\otimes _{R}B}$ is a right S-module, where ${\displaystyle (a\otimes b)s:=a\otimes (bs)}$.
3. If A is a (S,R)-bimodule and B is a (R,T)-bimodule, then ${\displaystyle A\otimes _{R}B}$ is a (S,T)-bimodule, where the left and right actions are defined in the same way as the previous two examples.
4. If R is a commutative ring, then A and B are (R,R)-bimodules where ${\displaystyle ra:=ar}$ and ${\displaystyle br:=rb}$. By 3), we can conclude ${\displaystyle A\otimes _{R}B}$ is a (R,R)-bimodule.

### Computing the tensor product

For vector spaces, the tensor product ${\displaystyle V\otimes W}$ is quickly computed since bases of V and W immediately determine a basis of ${\displaystyle V\otimes W}$, as was mentioned above. For modules over a general (commutative) ring, not every module is free. For example, Z/nZ is not a free abelian group (Z-module). The tensor product with Z/nZ is given by:

${\displaystyle M\otimes _{\mathbf {Z} }\mathbf {Z} /n\mathbf {Z} =M/nM.}$
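As a toy sanity check of this formula (with the illustrative choice M = Z/6Z and n = 4, so that ${\displaystyle \mathbf {Z} /6\otimes \mathbf {Z} /4\cong \mathbf {Z} /\gcd(6,4)}$):

```python
from math import gcd

m, n = 6, 4
M = set(range(m))                     # the Z-module M = Z/6Z
nM = {(n * x) % m for x in M}         # the submodule nM = {0, 2, 4}

# M (x)_Z Z/nZ = M/nM, so here Z/6 (x) Z/4 has order |M| / |nM| = gcd(6, 4) = 2.
assert len(M) // len(nM) == gcd(m, n) == 2
```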

More generally, given a presentation of some R-module M, that is, a number of generators ${\displaystyle m_{i}\in M,i\in I}$ together with relations:

${\displaystyle \sum _{i\in I}a_{ji}m_{i}=0,\qquad j\in J,\quad a_{ji}\in R,}$
the tensor product can be computed as the following cokernel:
${\displaystyle M\otimes _{R}N=\operatorname {coker} \left(N^{J}\to N^{I}\right)}$

Here ${\displaystyle N^{J}=\oplus _{j\in J}N}$, and the map ${\displaystyle N^{J}\to N^{I}}$ is determined by sending some ${\displaystyle n\in N}$ in the jth copy of ${\displaystyle N^{J}}$ to ${\displaystyle (a_{ji}n)_{i\in I}}$ (in ${\displaystyle N^{I}}$). Colloquially, this may be rephrased by saying that a presentation of M gives rise to a presentation of ${\displaystyle M\otimes _{R}N}$. This is referred to by saying that the tensor product is a right exact functor. It is not in general left exact, that is, given an injective map of R-modules ${\displaystyle M_{1}\to M_{2}}$, the tensor product:

${\displaystyle M_{1}\otimes _{R}N\to M_{2}\otimes _{R}N}$
is not usually injective. For example, tensoring the (injective) map given by multiplication with n, n : ZZ with Z/nZ yields the zero map 0 : Z/nZZ/nZ, which is not injective. Higher Tor functors measure the defect of the tensor product being not left exact. All higher Tor functors are assembled in the derived tensor product.

## Tensor product of algebras

Let R be a commutative ring. The tensor product of R-modules applies, in particular, if A and B are R-algebras. In this case, the tensor product ${\displaystyle A\otimes _{R}B}$ is an R-algebra itself by putting:

${\displaystyle (a_{1}\otimes b_{1})\cdot (a_{2}\otimes b_{2})=(a_{1}\cdot a_{2})\otimes (b_{1}\cdot b_{2}).}$
For example:
${\displaystyle R[x]\otimes _{R}R[y]\cong R[x,y].}$
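Under this isomorphism a pure tensor ${\displaystyle p\otimes q}$ corresponds to the product ${\displaystyle p(x)q(y)}$; a SymPy sketch (the particular polynomials are arbitrary illustrations):

```python
import sympy as sp

x, y = sp.symbols('x y')
p = 1 + 2*x        # an element of R[x]
q = 3 + y**2       # an element of R[y]

# Under R[x] (x) R[y] = R[x, y], the pure tensor p (x) q maps to p(x) q(y).
prod = sp.expand(p * q)
assert prod == 3 + y**2 + 6*x + 2*x*y**2
```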

A particular example is when A and B are fields containing a common subfield R. The tensor product of fields is closely related to Galois theory: if, say, A = R[x] / f(x), where f is some irreducible polynomial with coefficients in R, the tensor product can be calculated as:

${\displaystyle A\otimes _{R}B\cong B[x]/f(x)}$
where now f is interpreted as the same polynomial, but with its coefficients regarded as elements of B. In the larger field B, the polynomial may become reducible, which brings in Galois theory. For example, if A = B is a Galois extension of R, then:
${\displaystyle A\otimes _{R}A\cong A[x]/f(x)}$
is isomorphic, as an A-algebra, to ${\displaystyle A^{\operatorname {deg} (f)}}$.
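For a concrete instance, take R = Q and A = B = Q(i) = Q[x]/(x² + 1), a Galois extension of degree 2. Over B the polynomial splits into linear factors, so ${\displaystyle A\otimes _{\mathbb {Q} }A\cong A[x]/(f)\cong A\times A}$. A SymPy sketch of the key factorization step:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 + 1                         # irreducible over Q; A = Q[x]/(f) = Q(i)

# Over B = Q(i) the same polynomial splits into linear factors, so
# A (x)_Q A = A[x]/(f) = A x A by the Chinese remainder theorem.
factored = sp.factor(f, gaussian=True)   # factors over the Gaussian rationals
assert sp.expand(factored) == f          # same polynomial
assert factored.is_Mul                   # but now split into linear factors
```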

## Eigenconfigurations of tensors

Square matrices ${\displaystyle A}$ with entries in a field ${\displaystyle K}$ represent linear maps of vector spaces, say ${\displaystyle K^{n}\to K^{n}}$, and thus linear maps ${\displaystyle \psi :\mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}}$ of projective spaces over ${\displaystyle K}$. If ${\displaystyle A}$ is nonsingular then ${\displaystyle \psi }$ is well-defined everywhere, and the eigenvectors of ${\displaystyle A}$ correspond to the fixed points of ${\displaystyle \psi }$. The eigenconfiguration of ${\displaystyle A}$ consists of ${\displaystyle n}$ points in ${\displaystyle \mathbb {P} ^{n-1}}$, provided ${\displaystyle A}$ is generic and ${\displaystyle K}$ is algebraically closed. The fixed points of nonlinear maps are the eigenvectors of tensors. Let ${\displaystyle A=(a_{i_{1}i_{2}\cdots i_{d}})}$ be a ${\displaystyle d}$-dimensional tensor of format ${\displaystyle n\times n\times \cdots \times n}$ with entries ${\displaystyle (a_{i_{1}i_{2}\cdots i_{d}})}$ lying in an algebraically closed field ${\displaystyle K}$ of characteristic zero. Such a tensor ${\displaystyle A\in (K^{n})^{\otimes d}}$ defines polynomial maps ${\displaystyle K^{n}\to K^{n}}$ and ${\displaystyle \mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}}$ with coordinates:

${\displaystyle \psi _{i}(x_{1},\ldots ,x_{n})=\sum _{j_{2}=1}^{n}\sum _{j_{3}=1}^{n}\cdots \sum _{j_{d}=1}^{n}a_{ij_{2}j_{3}\cdots j_{d}}x_{j_{2}}x_{j_{3}}\cdots x_{j_{d}}\;\;{\mbox{for }}i=1,\ldots ,n}$

Thus each of the ${\displaystyle n}$ coordinates of ${\displaystyle \psi }$ is a homogeneous polynomial ${\displaystyle \psi _{i}}$ of degree ${\displaystyle d-1}$ in ${\displaystyle \mathbf {x} =\left(x_{1},\ldots ,x_{n}\right)}$. The eigenvectors of ${\displaystyle A}$ are the solutions of the constraint:

${\displaystyle {\mbox{rank}}{\begin{pmatrix}x_{1}&x_{2}&\cdots &x_{n}\\\psi _{1}(\mathbf {x} )&\psi _{2}(\mathbf {x} )&\cdots &\psi _{n}(\mathbf {x} )\end{pmatrix}}\leq 1}$
and the eigenconfiguration is given by the variety of the ${\displaystyle 2\times 2}$ minors of this matrix.[10]
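A small numerical sketch (NumPy; the tensor and helper names are illustrative) for ${\displaystyle d=3}$, ${\displaystyle n=2}$: ${\displaystyle \psi }$ is computed by contracting ${\displaystyle A}$ against ${\displaystyle x}$ twice, and the rank condition reduces to the vanishing of the single 2×2 minor:

```python
import numpy as np

def psi(A, x):
    """Coordinates psi_i(x) = sum_{j,k} a_{ijk} x_j x_k of the map defined by A (d = 3)."""
    return np.einsum('ijk,j,k->i', A, x, x)

# A diagonal tensor: a_{iii} = 1, all other entries 0.  Then psi(x) = (x_1^2, x_2^2),
# and the standard basis vectors are fixed points of the projective map.
A = np.zeros((2, 2, 2))
A[0, 0, 0] = A[1, 1, 1] = 1.0

x = np.array([1.0, 0.0])          # candidate eigenvector
M = np.vstack([x, psi(A, x)])     # stack x and psi(x) as the rows of a 2 x n matrix
minor = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]   # the single 2x2 minor
assert abs(minor) < 1e-12          # rank <= 1: x is an eigenvector of A
```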

## Other examples of tensor products

### Topological tensor products

Hilbert spaces generalize finite-dimensional vector spaces to arbitrary dimensions. There is an analogous operation, also called the "tensor product," that makes the category of Hilbert spaces into a symmetric monoidal category. It is essentially constructed as the metric space completion of the algebraic tensor product discussed above. However, it does not satisfy the obvious analogue of the universal property defining tensor products;[11] the morphisms for that property must be restricted to Hilbert–Schmidt operators.[12]

In situations where the imposition of an inner product is inappropriate, one can still attempt to complete the algebraic tensor product, as a topological tensor product. However, such a construction is no longer uniquely specified: in many cases, there are multiple natural topologies on the algebraic tensor product.

### Tensor product of graded vector spaces

Some vector spaces can be decomposed into direct sums of subspaces. In such cases, the tensor product of two spaces can be decomposed into sums of products of the subspaces (in analogy to the way that multiplication distributes over addition).
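In the graded case this distributivity means ${\displaystyle \dim(V\otimes W)_{n}=\sum _{i+j=n}\dim V_{i}\cdot \dim W_{j}}$, so graded dimensions multiply like polynomial coefficients. A minimal plain-Python sketch (the function name is illustrative):

```python
# Graded dimensions of a tensor product of graded vector spaces: given the
# lists of dimensions dim V_i and dim W_j, the dimensions of (V (x) W)_n are
# obtained by convolving the two lists, exactly like polynomial multiplication.
def graded_tensor_dims(dv, dw):
    out = [0] * (len(dv) + len(dw) - 1)
    for i, a in enumerate(dv):
        for j, b in enumerate(dw):
            out[i + j] += a * b     # V_i (x) W_j lands in degree i + j
    return out

# dim V = (1, 2) and dim W = (1, 3, 1) give dim (V (x) W) = (1, 5, 7, 2).
assert graded_tensor_dims([1, 2], [1, 3, 1]) == [1, 5, 7, 2]
```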

### Tensor product of representations

Vector spaces endowed with an additional multiplicative structure are called algebras. The tensor product of such algebras is described by the Littlewood–Richardson rule.

### Tensor product of multilinear forms

Given two multilinear forms ${\displaystyle f(x_{1},\dots ,x_{k})}$ and ${\displaystyle g(x_{1},\dots ,x_{m})}$ on a vector space ${\displaystyle V}$ over the field ${\displaystyle K}$ their tensor product is the multilinear form:

${\displaystyle (f\otimes g)(x_{1},\dots ,x_{k+m})=f(x_{1},\dots ,x_{k})g(x_{k+1},\dots ,x_{k+m}).}$[13]

This is a special case of the product of tensors if they are seen as multilinear maps (see also tensors as multilinear maps). Thus the components of the tensor product of multilinear forms can be computed by the Kronecker product.
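For bilinear forms this can be checked numerically: if ${\displaystyle f(x,y)=x^{\mathsf {T}}Fy}$ and ${\displaystyle g(x,y)=x^{\mathsf {T}}Gy}$, the component matrix of ${\displaystyle f\otimes g}$ is the Kronecker product ${\displaystyle F\otimes G}$. An illustrative NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((2, 2))          # components of the bilinear form f
G = rng.standard_normal((2, 2))          # components of the bilinear form g
x1, x2, x3, x4 = rng.standard_normal((4, 2))

# Defining property: (f (x) g)(x1, x2, x3, x4) = f(x1, x2) * g(x3, x4).
lhs = (x1 @ F @ x2) * (x3 @ G @ x4)
# Same value computed from the Kronecker product of the component matrices.
rhs = np.kron(x1, x3) @ np.kron(F, G) @ np.kron(x2, x4)
assert np.isclose(lhs, rhs)
```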

### Tensor product of graphs

Although it is called the "tensor product", the tensor product of graphs is not a tensor product in the above sense; it is in fact the category-theoretic product in the category of graphs and graph homomorphisms. It is, however, the Kronecker tensor product of the adjacency matrices of the graphs; compare the section on the tensor product of linear maps above.

### Monoidal categories

The most general setting for the tensor product is the monoidal category. It captures the algebraic essence of tensoring, without making any specific reference to what is being tensored. Thus, all tensor products can be expressed as an application of the monoidal category to some particular setting, acting on some particular objects.

## Quotient algebras

A number of important subspaces of the tensor algebra can be constructed as quotients: these include the exterior algebra, the symmetric algebra, the Clifford algebra, the Weyl algebra, and the universal enveloping algebra in general.

The exterior algebra is constructed from the exterior product. Given a vector space V, the exterior product ${\displaystyle V\wedge V}$ is defined as:

${\displaystyle V\wedge V:=V\otimes V{\big /}\{v\otimes v\mid v\in V\}.}$

When the underlying field of V does not have characteristic 2, then this definition is equivalent to:

${\displaystyle V\wedge V:=V\otimes V{\big /}{\bigl \{}v_{1}\otimes v_{2}+v_{2}\otimes v_{1}\mid (v_{1},v_{2})\in V^{2}{\bigr \}}.}$

The image of ${\displaystyle v_{1}\otimes v_{2}}$ in the exterior product is usually denoted ${\displaystyle v_{1}\wedge v_{2}}$ and satisfies, by construction, ${\displaystyle v_{1}\wedge v_{2}=-v_{2}\wedge v_{1}}$. Similar constructions are possible for ${\displaystyle V\otimes \dots \otimes V}$ (n factors), giving rise to ${\displaystyle \Lambda ^{n}V}$, the nth exterior power of V. The latter notion is the basis of differential n-forms.

The symmetric algebra is constructed in a similar manner, from the symmetric product:

${\displaystyle V\odot V:=V\otimes V{\big /}{\bigl \{}v_{1}\otimes v_{2}-v_{2}\otimes v_{1}\mid (v_{1},v_{2})\in V^{2}{\bigr \}}.}$

More generally:

${\displaystyle \operatorname {Sym} ^{n}V:=\underbrace {V\otimes \dots \otimes V} _{n}{\big /}(\dots \otimes v_{i}\otimes v_{i+1}\otimes \dots -\dots \otimes v_{i+1}\otimes v_{i}\otimes \dots )}$

That is, in the symmetric algebra two adjacent vectors (and therefore all of them) can be interchanged. The resulting objects are called symmetric tensors.
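In characteristic zero, the images of ${\displaystyle v_{1}\otimes v_{2}}$ in the two quotients above can be represented inside ${\displaystyle V\otimes V}$ by the antisymmetric and symmetric parts of its component matrix. A minimal NumPy sketch:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, -1.0, 4.0])
t = np.outer(v, w)                 # components of v (x) w

wedge = (t - t.T) / 2              # representative of v ^ w (characteristic != 2)
sym = (t + t.T) / 2                # representative of the symmetric product

assert np.allclose(wedge, -wedge.T)                         # v ^ w = -(w ^ v)
assert np.allclose(np.outer(v, v) - np.outer(v, v).T, 0)    # v ^ v = 0
assert np.allclose(sym, sym.T)                              # symmetric tensors are invariant under the swap
```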

## Tensor product in programming

### Array programming languages

Array programming languages may have this pattern built in. For example, in APL the tensor product is expressed as ∘.× (for example A ∘.× B or A ∘.× B ∘.× C). In J the tensor product is the dyadic form of */ (for example a */ b or a */ b */ c).

J's treatment also allows the representation of some tensor fields, as a and b may be functions instead of constants. This product of two functions is a derived function, and if a and b are differentiable, then a */ b is differentiable.

However, these kinds of notation are not universally present in array languages. Other array languages may require explicit treatment of indices (for example, MATLAB), and/or may not support higher-order functions such as the Jacobian derivative (for example, Fortran/APL).
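For comparison, an analogous sketch in NumPy (a library rather than an array language in the APL sense), where the outer product np.multiply.outer plays the role of the tensor product:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([10, 20, 30])
c = np.array([1, -1])

ab = np.multiply.outer(a, b)       # analogue of the APL/J outer product: shape (2, 3)
abc = np.multiply.outer(ab, c)     # iterated product of three vectors: shape (2, 3, 2)

assert ab.shape == (2, 3) and ab[1, 2] == 60       # 2 * 30
assert abc.shape == (2, 3, 2) and abc[1, 2, 1] == -60
```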

## Notes

1. ^ a b Trèves 2006, pp. 403–404.
2. ^ a b Trèves 2006, p. 407.
3. ^ Hazewinkel, Michiel; Gubareni, Nadezhda Mikhaĭlovna; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Springer. p. 100. ISBN 978-1-4020-2690-4.
4. ^ Bourbaki (1989), p. 244 defines the usage "tensor product of x and y", elements of the respective modules.
5. ^ Analogous formulas also hold for contravariant tensors, as well as tensors of mixed variance. Although in many cases such as when there is an inner product defined, the distinction is irrelevant.
6. ^ "The Coevaluation on Vector Spaces". The Unapologetic Mathematician. 2008-11-13. Archived from the original on 2017-02-02. Retrieved 2017-01-26.
7. ^
8. ^ Hungerford, Thomas W. (1974). Algebra. Springer. ISBN 0-387-90518-9.
9. ^ Chen, Jungkai Alfred (Spring 2004), "Tensor product" (PDF), Advanced Algebra II (lecture notes), National Taiwan University, archived (PDF) from the original on 2016-03-04{{citation}}: CS1 maint: location missing publisher (link)
10. ^ Abo, H.; Seigal, A.; Sturmfels, B. (2015). "Eigenconfigurations of Tensors". arXiv:1505.05729 [math.AG].
11. ^ Garrett, Paul (July 22, 2010). "Non-existence of tensor products of Hilbert spaces" (PDF).
12. ^ Kadison, Richard V.; Ringrose, John R. (1997). Fundamentals of the theory of operator algebras. Graduate Studies in Mathematics. Vol. I. Providence, R.I.: American Mathematical Society. Thm. 2.6.4. ISBN 978-0-8218-0819-1. MR 1468229.
13. ^ Tu, L. W. (2010). An Introduction to Manifolds. Universitext. Springer. p. 25. ISBN 978-1-4419-7399-3.