# Tensor product


In mathematics, the tensor product V ⊗ W of two vector spaces V and W (over the same field) is itself a vector space, together with an operation of bilinear composition, denoted by ⊗, from ordered pairs in the Cartesian product V × W into V ⊗ W, in a way that generalizes the outer product. The tensor product of V and W is the vector space generated by the symbols v ⊗ w, with v ∈ V and w ∈ W, in which the relations of bilinearity are imposed for the product operation ⊗, and no other relations are assumed to hold. The tensor product space is thus the "freest" (or most general) such vector space, in the sense of having the fewest constraints.

The tensor product of (finite dimensional) vector spaces has dimension equal to the product of the dimensions of the two factors:

${\displaystyle \dim(V\otimes W)=\dim V\times \dim W.}$

In particular, this distinguishes the tensor product from the direct sum vector space, whose dimension is the sum of the dimensions of the two summands:

${\displaystyle \dim(V\oplus W)=\dim V+\dim W.}$

More generally, the tensor product can be extended to other categories of mathematical objects in addition to vector spaces, such as matrices, tensors, algebras, topological vector spaces, and modules. In each such case the tensor product is characterized by a similar universal property: it is the freest bilinear operation. The general concept of a "tensor product" is captured by monoidal categories; that is, the class of all things that have a tensor product is a monoidal category.

## Tensor product of vector spaces

The tensor product of two vector spaces V and W over a field K is another vector space over K. It is denoted V ⊗K W, or V ⊗ W when the underlying field K is understood.

If ${\displaystyle V}$ has a basis ${\displaystyle e_{1},\dots ,e_{m}}$ and ${\displaystyle W}$ has a basis ${\displaystyle f_{1},\dots ,f_{n}}$, then the tensor product ${\displaystyle V\otimes W}$ can be taken to be a vector space spanned by a basis consisting of all pair-wise products of elements from the two bases; each such basis element of ${\displaystyle V\otimes W}$ is denoted ${\displaystyle e_{i}\otimes f_{j}}$. For any vectors ${\displaystyle v=\sum \nolimits _{i}v_{i}e_{i}\in V}$ and ${\displaystyle w=\sum \nolimits _{j}w_{j}f_{j}\in W,}$ there is a corresponding product vector ${\displaystyle v\otimes w}$ in ${\displaystyle V\otimes W}$ given by ${\displaystyle \sum \nolimits _{ij}v_{i}w_{j}(e_{i}\otimes f_{j})\in V\otimes W.}$ This product operation ${\displaystyle \otimes :V\times W\rightarrow V\otimes W}$ is quickly verified to be bilinear.

As an example, letting ${\displaystyle V=W=\mathbb {R} ^{3}}$ (considered as a vector space over the field of real numbers) and considering the standard basis set ${\displaystyle \{{\hat {x}},{\hat {y}},{\hat {z}}\}}$ for each, the tensor product ${\displaystyle V\otimes W}$ is spanned by the nine basis vectors ${\displaystyle \{{\hat {x}}\otimes {\hat {x}},{\hat {x}}\otimes {\hat {y}},{\hat {x}}\otimes {\hat {z}},{\hat {y}}\otimes {\hat {x}},{\hat {y}}\otimes {\hat {y}},{\hat {y}}\otimes {\hat {z}},{\hat {z}}\otimes {\hat {x}},{\hat {z}}\otimes {\hat {y}},{\hat {z}}\otimes {\hat {z}}\},}$ and is isomorphic to ${\displaystyle \mathbb {R} ^{9}.}$ For vectors ${\displaystyle v=(1,2,3),w=(1,0,0)\in \mathbb {R} ^{3},}$ the tensor product ${\displaystyle v\otimes w={\hat {x}}\otimes {\hat {x}}+2{\hat {y}}\otimes {\hat {x}}+3{\hat {z}}\otimes {\hat {x}}.}$
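This coordinate description can be checked numerically: the coefficients of v ⊗ w in the product basis are exactly the outer product of the coordinate vectors. A minimal NumPy sketch:

```python
import numpy as np

v = np.array([1, 2, 3])
w = np.array([1, 0, 0])

# Coefficients of v ⊗ w in the basis e_i ⊗ f_j are the products v_i * w_j:
# row i, column j  <->  coefficient of e_i ⊗ f_j.
vw = np.outer(v, w)
print(vw)  # [[1 0 0]
           #  [2 0 0]
           #  [3 0 0]]   i.e. x̂⊗x̂ + 2 ŷ⊗x̂ + 3 ẑ⊗x̂
```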

The above definition relies on a choice of basis, which cannot be done canonically for a generic vector space. However, any two choices of basis lead to isomorphic tensor product spaces (cf. the universal property described below). Alternatively, the tensor product may be defined in an expressly basis-independent manner as a quotient space of a free vector space over V × W. This approach is described below.

### The free vector space

The definition of ⊗ requires the notion of the free vector space F(S) on some set S, a vector space whose basis is indexed by S. F(S) is defined as the set of all functions g from S to a given field K that have finite support; i.e., g is identically zero outside some finite subset of S. It is a vector space over K with the usual addition and scalar multiplication of functions. It has a basis parameterized by S. Indeed, for each s in S we define[1]

${\displaystyle {\begin{cases}\delta _{s}:S\to K\\\delta _{s}(t)={\begin{cases}1&t=s\\0&t\neq s\end{cases}}\end{cases}}}$

Then {δs | s ∈ S} is a basis for F(S), since each element g of F(S) can be uniquely written as a linear combination of δs, and because of the restriction that g has finite support, this linear combination consists of finitely many terms. Because of this explicit expression, an element of F(S) is often called a formal sum of symbols in S.

By construction, the (possibly infinite) dimension of the vector space F(S) equals the cardinality of the set S.

### Definition

Let us first consider a special case: let us say V, W are free vector spaces for the sets S, T respectively. That is, V = F(S), W = F(T). In this special case, the tensor product is defined as F(S) ⊗ F(T) = F(S × T). In most typical cases, any vector space can be immediately understood as the free vector space for some set, so this definition suffices. However, there is also an explicit way of constructing the tensor product directly from V, W, without appeal to S, T.

In general, given two vector spaces V and W over a field K, the tensor product U of V and W, denoted as U = V ⊗ W, is defined as the vector space whose elements and operations are constructed as follows:

From the Cartesian product V × W, the free vector space F(V × W) over K is formed. The vectors of V ⊗ W are then defined to be the equivalence classes of the congruence generated by the following relations on F(V × W):

${\displaystyle {\begin{aligned}&\forall v,v_{1},v_{2}\in V,\forall w,w_{1},w_{2}\in W,\forall c\in K:\\&(v_{1},w)+(v_{2},w)\sim (v_{1}+v_{2},w),\\&(v,w_{1})+(v,w_{2})\sim (v,w_{1}+w_{2}),\\&c(v,w)\sim (cv,w),\\&c(v,w)\sim (v,cw).\end{aligned}}}$

The operations of V ⊗ W, i.e. the maps of vector addition + : U × U → U and scalar multiplication ⋅ : K × U → U, are defined to be the respective operations +F and ⋅F from F(V × W), acting on any representatives

${\displaystyle {\tilde {u}}_{1},{\tilde {u}}_{2}}$

of the involved equivalence classes, and outputting the equivalence class of the result:

${\displaystyle {\tilde {u}}_{1}\in u_{1},{\tilde {u}}_{2}\in u_{2}\Rightarrow (+):(u_{1},u_{2})\mapsto [{\tilde {u}}_{1}+_{F}{\tilde {u}}_{2}]}$
${\displaystyle {\tilde {u}}_{1}\in u_{1}\Rightarrow (\cdot ):(c,u_{1})\mapsto [c\cdot _{F}{\tilde {u}}_{1}]}$

The result can be proven to be independent of which representatives of the involved classes have been chosen. In other words, the operations are well-defined.

In other words, the tensor product V ⊗ W is defined as the quotient space F(V × W)/N, where N is the subspace of F(V × W) consisting of the equivalence class of the zero element, N = [∅], ∅ ∈ F(V × W), under the equivalence relation above. In this way, because it is a quotient of the free vector space by the subspace generated by the relations, it is the freest such vector space.[2][3] For this reason, the tensor product ${\displaystyle V\otimes W}$ can also be characterised by a universal property.

The following expression explicitly gives the subspace N:[4]

${\displaystyle {\begin{aligned}N=\operatorname {span} (\{u\in F(V\times W)\,|\,&\exists v,v_{1},v_{2}\in V,\exists w,w_{1},w_{2}\in W,\exists c\in K:\\&u=(v_{1},w)+(v_{2},w)-(v_{1}+v_{2},w)\lor \\&u=(v,w_{1})+(v,w_{2})-(v,w_{1}+w_{2})\lor \\&u=c(v,w)-(cv,w)\lor \\&u=c(v,w)-(v,cw)\}).\end{aligned}}}$

In the quotient, where N is mapped to the zero vector, the following equalities,

${\displaystyle {\begin{aligned}(v_{1},w)+(v_{2},w)&=(v_{1}+v_{2},w),\\(v,w_{1})+(v,w_{2})&=(v,w_{1}+w_{2}),\\c(v,w)&=(cv,w),\\c(v,w)&=(v,cw)\end{aligned}}}$

all hold (unlike in F(V × W)), which is exactly what is desired. In these latter expressions, the (v1, w), etc., are images in the quotient of vectors in the free product under the quotient map. Usually, some other notation is employed for them, see below.

### Notation

Elements of V ⊗ W are often referred to as tensors, although this term refers to many other related concepts as well.[5] If v belongs to V and w belongs to W, then the equivalence class of (v, w) is denoted by v ⊗ w, which is called the tensor product of v with w. In physics and engineering, this use of the "⊗" symbol refers specifically to the outer product operation; the result of the outer product v ⊗ w is one of the standard ways of representing the equivalence class v ⊗ w.[6] An element of V ⊗ W that can be written in the form v ⊗ w is called a pure or simple tensor. In general, an element of the tensor product space is not a pure tensor, but rather a finite linear combination of pure tensors. For example, if v1 and v2 are linearly independent, and w1 and w2 are also linearly independent, then v1 ⊗ w1 + v2 ⊗ w2 cannot be written as a pure tensor. The number of simple tensors required to express an element of a tensor product is called the tensor rank (not to be confused with tensor order, which is the number of spaces one has taken the product of, in this case 2; in notation, the number of indices), and for linear operators or matrices, thought of as (1, 1) tensors (elements of the space V ⊗ V∗), it agrees with matrix rank.

### Dimension

Given bases {vi} and {wj} for V and W respectively, the tensors {vi ⊗ wj} form a basis for V ⊗ W. Therefore, if V and W are finite-dimensional, the dimension of the tensor product is the product of dimensions of the original spaces; for instance Rm ⊗ Rn is isomorphic to Rmn.

### Tensor product of linear maps

The tensor product also operates on linear maps between vector spaces. Specifically, given two linear maps S : VX and T : WY between vector spaces, the tensor product of the two linear maps S and T is a linear map

${\displaystyle S\otimes T:V\otimes W\to X\otimes Y}$

defined by

${\displaystyle (S\otimes T)(v\otimes w)=S(v)\otimes T(w).}$

In this way, the tensor product becomes a bifunctor from the category of vector spaces to itself, covariant in both arguments.[7]

If S and T are both injective, surjective, or continuous then S ⊗ T is, respectively, injective, surjective, or continuous.

By choosing bases of all vector spaces involved, the linear maps S and T can be represented by matrices. Then, the matrix describing the tensor product S ⊗ T is the Kronecker product of the two matrices. For example, if V, X, W, and Y above are all two-dimensional and bases have been fixed for all of them, and S and T are given by the matrices

${\displaystyle {\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}},\qquad {\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}},}$

respectively, then the tensor product of these two matrices is

${\displaystyle {\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}}\otimes {\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}={\begin{bmatrix}a_{1,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{1,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\&\\a_{2,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{2,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\\end{bmatrix}}={\begin{bmatrix}a_{1,1}b_{1,1}&a_{1,1}b_{1,2}&a_{1,2}b_{1,1}&a_{1,2}b_{1,2}\\a_{1,1}b_{2,1}&a_{1,1}b_{2,2}&a_{1,2}b_{2,1}&a_{1,2}b_{2,2}\\a_{2,1}b_{1,1}&a_{2,1}b_{1,2}&a_{2,2}b_{1,1}&a_{2,2}b_{1,2}\\a_{2,1}b_{2,1}&a_{2,1}b_{2,2}&a_{2,2}b_{2,1}&a_{2,2}b_{2,2}\\\end{bmatrix}}.}$

The resultant rank is at most 4, and thus the resultant dimension is 4. Here rank denotes the tensor rank (number of requisite indices), while the matrix rank counts the number of degrees of freedom in the resulting array.
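The defining property (S ⊗ T)(v ⊗ w) = S(v) ⊗ T(w) corresponds, on Kronecker matrices, to the mixed-product property, which can be checked numerically; a small NumPy sketch with arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.integers(-5, 5, (2, 2))
T = rng.integers(-5, 5, (2, 2))
v = rng.integers(-5, 5, 2)
w = rng.integers(-5, 5, 2)

# Identify v ⊗ w with the Kronecker product of the coordinate vectors.
lhs = np.kron(S, T) @ np.kron(v, w)   # (S ⊗ T)(v ⊗ w)
rhs = np.kron(S @ v, T @ w)           # S(v) ⊗ T(w)
assert np.array_equal(lhs, rhs)
```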

A dyadic product is the special case of the tensor product between two vectors of the same dimension.

### Universal property

This commutative diagram presents the universal property of tensor product. Here ${\displaystyle \varphi }$ and ${\displaystyle h}$ are bilinear, whereas ${\displaystyle {\tilde {h}}}$ is linear.

In the context of vector spaces, the tensor product ${\displaystyle V\otimes W}$ and the associated bilinear map ${\displaystyle \varphi :V\times W\to V\otimes W}$ are characterized up to isomorphism by a universal property regarding bilinear maps. (Recall that a bilinear map is a function that is separately linear in each of its arguments.) Informally, ${\displaystyle \varphi }$ is the most general bilinear map out of ${\displaystyle V\times W}$.

The vector space ${\displaystyle V\otimes W}$ and the associated bilinear map ${\displaystyle \varphi :V\times W\to V\otimes W}$ have the property that any bilinear map ${\displaystyle h:V\times W\to Z}$ from ${\displaystyle V\times W}$ to any vector space ${\displaystyle Z}$ factors through ${\displaystyle \varphi }$ uniquely. By saying "${\displaystyle h}$ factors through ${\displaystyle \varphi }$ uniquely," we mean that there is a unique linear map ${\displaystyle {\tilde {h}}:V\otimes W\to Z}$ such that ${\displaystyle h={\tilde {h}}\circ \varphi }$.

This characterization can simplify proofs about the tensor product. For example, the tensor product is symmetric, meaning there is a canonical isomorphism:

${\displaystyle V\otimes W\cong W\otimes V.}$

To construct, say, a map from ${\displaystyle V\otimes W}$ to ${\displaystyle W\otimes V}$, it suffices to give a bilinear map ${\displaystyle h:V\times W\to W\otimes V}$ that maps ${\displaystyle (v,w)}$ to ${\displaystyle w\otimes v}$. Then the universal property of ${\displaystyle V\otimes W}$ means ${\displaystyle h}$ factors into a map ${\displaystyle {\tilde {h}}:V\otimes W\to W\otimes V}$. A map ${\displaystyle g:W\times V\to V\otimes W}$ in the opposite direction is similarly defined, and one checks that the two linear maps ${\displaystyle {\tilde {h}}:V\otimes W\to W\otimes V}$ and ${\displaystyle {\tilde {g}}:W\otimes V\to V\otimes W}$ are inverse to one another, by again using their universal properties.

Similar reasoning can be used to show that the tensor product is associative, that is, there are natural isomorphisms

${\displaystyle V_{1}\otimes (V_{2}\otimes V_{3})\cong (V_{1}\otimes V_{2})\otimes V_{3}.}$

Therefore, it is customary to omit the parentheses and write V1V2V3.
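In coordinates, associativity is reflected in the associativity of the Kronecker product of matrices; a brief NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))

# Associativity of the tensor product, seen on Kronecker matrices:
# (A ⊗ B) ⊗ C and A ⊗ (B ⊗ C) give the same 8×8 matrix.
assert np.allclose(np.kron(np.kron(A, B), C), np.kron(A, np.kron(B, C)))
```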

The category of vector spaces with tensor product is an example of a symmetric monoidal category.

The universal-property definition of a tensor product is valid in more categories than just the category of vector spaces. Instead of using multilinear (bilinear) maps, the general tensor product definition uses multimorphisms.[8]

### Tensor powers and braiding

Let n be a non-negative integer. The nth tensor power of the vector space V is the n-fold tensor product of V with itself. That is

${\displaystyle V^{\otimes n}\;{\overset {\mathrm {def} }{=}}\;\underbrace {V\otimes \cdots \otimes V} _{n}.}$

A permutation σ of the set {1, 2, ..., n} determines a mapping of the nth Cartesian power of V as follows:

${\displaystyle {\begin{cases}\sigma \colon V^{n}\to V^{n}\\\sigma (v_{1},v_{2},\cdots ,v_{n})=\left(v_{\sigma (1)},v_{\sigma (2)},\cdots ,v_{\sigma (n)}\right)\end{cases}}}$

Let

${\displaystyle \varphi \colon V^{n}\to V^{\otimes n}}$

be the natural multilinear embedding of the Cartesian power of V into the tensor power of V. Then, by the universal property, there is a unique isomorphism

${\displaystyle \tau _{\sigma }\colon V^{\otimes n}\to V^{\otimes n}}$

such that

${\displaystyle \varphi \circ \sigma =\tau _{\sigma }\circ \varphi .}$

The isomorphism τσ is called the braiding map associated to the permutation σ.
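On coordinate arrays, the braiding maps act by permuting axes. A minimal NumPy sketch for n = 2, where τσ for the transposition σ = (1 2) is the matrix transpose:

```python
import numpy as np

rng = np.random.default_rng(2)
v, w = rng.standard_normal(3), rng.standard_normal(3)

# Represent v ⊗ w ∈ V ⊗ V as the 3×3 array of its coordinates.
t = np.multiply.outer(v, w)

# The braiding for σ = (1 2) sends v ⊗ w to w ⊗ v; on coordinate
# arrays this is the axis permutation (here: matrix transpose).
assert np.allclose(t.T, np.multiply.outer(w, v))
```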

## Product of tensors

For non-negative integers r and s a type (r,s) tensor on a vector space V is an element of

${\displaystyle T_{s}^{r}(V)=\underbrace {V\otimes \dots \otimes V} _{r}\otimes \underbrace {V^{*}\otimes \dots \otimes V^{*}} _{s}=V^{\otimes r}\otimes V^{*\otimes s}.}$

Here V∗ is the dual vector space (which consists of all linear maps f from V to the ground field K).

There is a product map, called the (tensor) product of tensors[9]

${\displaystyle T_{s}^{r}(V)\otimes _{K}T_{s'}^{r'}(V)\to T_{s+s'}^{r+r'}(V).}$

It is defined by grouping all occurring "factors" V together: writing vi for an element of V and fi for elements of the dual space,

${\displaystyle (v_{1}\otimes f_{1})\otimes (v'_{1})=v_{1}\otimes v'_{1}\otimes f_{1}.}$

Picking a basis of V and the corresponding dual basis of V∗ naturally induces a basis for ${\displaystyle T_{s}^{r}(V)}$ (this basis is described in the article on Kronecker products). In terms of these bases, the components of a (tensor) product of two (or more) tensors can be computed. For example, if F and G are two covariant tensors of rank m and n respectively (i.e. ${\displaystyle F\in T_{m}^{0}}$ and ${\displaystyle G\in T_{n}^{0}}$), then the components of their tensor product are given by

${\displaystyle (F\otimes G)_{i_{1}i_{2}\ldots i_{m+n}}=F_{i_{1}i_{2}\ldots i_{m}}G_{i_{m+1}i_{m+2}i_{m+3}\ldots i_{m+n}}.}$

Thus, the components of the tensor product of two tensors are the ordinary product of the components of each tensor.[10] Another example: let U be a tensor of type (1, 1) with components Uαβ, and let V be a tensor of type (1, 0) with components Vγ. Then

${\displaystyle U^{\alpha }{}_{\beta }V^{\gamma }=(U\otimes V)^{\alpha }{}_{\beta }{}^{\gamma }}$

and

${\displaystyle V^{\mu }U^{\nu }{}_{\sigma }=(V\otimes U)^{\mu \nu }{}_{\sigma }.}$
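The component formula can be verified directly; a NumPy sketch with a hypothetical covariant rank-2 tensor F and rank-1 tensor G, chosen at random:

```python
import numpy as np

rng = np.random.default_rng(3)
F = rng.standard_normal((3, 3))   # covariant rank-2 tensor, components F_{i1 i2}
G = rng.standard_normal(3)        # covariant rank-1 tensor, components G_{i3}

# (F ⊗ G)_{i1 i2 i3} = F_{i1 i2} * G_{i3}: plain products of components.
FG = np.multiply.outer(F, G)
assert np.isclose(FG[1, 2, 0], F[1, 2] * G[0])
```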

## Relation to dual space

A particular example is the tensor product of some vector space V with its dual vector space V∗ (which consists of all linear maps f from V to the ground field K). In this case, there is a canonical evaluation map

${\displaystyle V\otimes V^{*}\to K}$

which on elementary tensors is defined by

${\displaystyle v\otimes f\mapsto f(v).}$

The resulting map

${\displaystyle T_{s}^{r}(V)\to T_{s-1}^{r-1}(V)}$

is called tensor contraction (for r, s > 0).
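For a (1, 1) tensor identified with a matrix, the evaluation map v ⊗ f ↦ f(v) is exactly the matrix trace; a brief NumPy sketch:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
f = np.array([4.0, 0.0, -1.0])   # a covector, acting as f(x) = f · x

# The (1,1) tensor v ⊗ f corresponds to the rank-one matrix outer(v, f);
# evaluating (contracting) it gives f(v), which is the trace of that matrix.
A = np.outer(v, f)
assert np.isclose(np.trace(A), f @ v)
```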

On the other hand, if V is finite-dimensional, there is a canonical map in the other direction (called the coevaluation map)

${\displaystyle K\to V\otimes V^{*},\lambda \mapsto \sum _{i}\lambda v_{i}\otimes v_{i}^{*}.}$

where v1, ..., vn is any basis of V, and vi∗ is its dual basis. Surprisingly, this map does not depend on our choice of basis.[11]

The interplay of evaluation and coevaluation map can be used to characterize finite-dimensional vector spaces without referring to bases.[12]

### Tensor product vs. Hom

Given two finite-dimensional vector spaces U, V, and denoting the dual space of U by U∗, we have the following relation:

${\displaystyle U^{*}\otimes V\cong \mathrm {Hom} (U,V),}$

An isomorphism ${\displaystyle \alpha :U^{*}\otimes V\rightarrow \mathrm {Hom} (U,V)}$ can be defined by its action on pure tensors:

${\displaystyle u^{*}\otimes v\mapsto (u^{*}\otimes v)(u)=u^{*}(u)v,}$

Its "inverse" can be defined in a similar manner as above (Relation to dual space), using the dual basis ${\displaystyle \{u_{i}^{*}\}}$:

${\displaystyle \mathrm {Hom} (U,V)\to U^{*}\otimes V,\quad f(\cdot )\mapsto \sum _{i}u_{i}^{*}\otimes f(u_{i}).}$
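In coordinates, this inverse reconstructs the matrix of f as a sum of rank-one matrices; a NumPy sketch using the standard basis of U:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 2))   # a linear map f : U → V, dim U = 2, dim V = 3

# f ↦ Σ_i u_i* ⊗ f(u_i): with the standard basis e_i of U, f(e_i) is
# column i of A, and u_i* ⊗ f(e_i) is the rank-one matrix outer(A[:, i], e_i).
# Their sum recovers A.
recon = sum(np.outer(A[:, i], np.eye(2)[i]) for i in range(2))
assert np.allclose(recon, A)
```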

This result implies

${\displaystyle \dim(U\otimes V)=\dim(U)\dim(V)}$

which automatically gives the important fact that ${\displaystyle \{u_{i}\otimes v_{j}\}}$ forms a basis for ${\displaystyle U\otimes V}$ where ${\displaystyle \{u_{i}\},\{v_{j}\}}$ are bases of U and V.

Furthermore, given three vector spaces U, V, W the tensor product is linked to the vector space of all linear maps, as follows:

${\displaystyle \mathrm {Hom} (U\otimes V,W)\cong \mathrm {Hom} (U,\mathrm {Hom} (V,W)).}$

Here Hom(-,-) denotes the K-vector space of all linear maps. This is an example of adjoint functors: the tensor product is "left adjoint" to Hom.

The tensor ${\displaystyle \scriptstyle T_{s}^{r}(V)}$ may be naturally viewed as a module for the Lie algebra End(V) by means of the diagonal action: for simplicity let us assume r = s = 1, then, for each u ∈ End(V),

${\displaystyle u(a\otimes b)=u(a)\otimes b-a\otimes u^{*}(b),}$

where u∗ in End(V∗) is the transpose of u, that is, in terms of the obvious pairing on V ⊗ V∗,

${\displaystyle \langle u(a),b\rangle =\langle a,u^{*}(b)\rangle }$.

There is a canonical isomorphism ${\displaystyle \scriptstyle T_{1}^{1}(V)\rightarrow \mathrm {End} (V)}$ given by

${\displaystyle (a\otimes b)(x)=\langle x,b\rangle a.}$

Under this isomorphism, every u in End(V) may be first viewed as an endomorphism of ${\displaystyle \scriptstyle T_{1}^{1}(V)}$ and then viewed as an endomorphism of End(V). In fact it is the adjoint representation ad(u) of End(V).

## Tensor products of modules over a ring

The tensor product of two modules A and B over a commutative ring R is defined in exactly the same way as the tensor product of vector spaces over a field:

${\displaystyle A\otimes _{R}B:=F(A\times B)/G}$

where now F(A × B) is the free R-module generated by the cartesian product and G is the R-module generated by the same relations as above.

More generally, the tensor product can be defined even if the ring is non-commutative (ab ≠ ba). In this case A has to be a right R-module and B a left R-module, and instead of the last two relations above, the relation

${\displaystyle (ar,b)-(a,rb)}$

is imposed. If R is non-commutative, this is no longer an R-module, but just an abelian group.

The universal property also carries over, slightly modified: the map φ : A × B → A ⊗R B defined by (a, b) ↦ a ⊗ b is a middle linear map (referred to as "the canonical middle linear map"[13]); that is, it satisfies:[14]

${\displaystyle {\begin{aligned}\phi (a+a',b)=\phi (a,b)+\phi (a',b)\\\phi (a,b+b')=\phi (a,b)+\phi (a,b')\\\phi (ar,b)=\phi (a,rb)\end{aligned}}}$

The first two properties make φ a bilinear map of the abelian group A × B. For any middle linear map ψ of A × B, a unique group homomorphism f of A ⊗R B satisfies ψ = f ∘ φ, and this property determines ${\displaystyle \phi }$ up to group isomorphism. See the main article for details.

### Computing the tensor product

For vector spaces, the tensor product V ⊗ W is quickly computed since bases of V and W immediately determine a basis of V ⊗ W, as was mentioned above. For modules over a general (commutative) ring, not every module is free. For example, Z/nZ is not a free abelian group (= Z-module). The tensor product with Z/nZ is given by

${\displaystyle M\otimes _{\mathbf {Z} }\mathbf {Z} /n\mathbf {Z} =M/nM.}$
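As an illustration of this formula, taking M = Z/mZ recovers the classical isomorphism Z/mZ ⊗Z Z/nZ ≅ Z/gcd(m, n)Z; a small Python sketch verifying the order of M/nM by enumeration:

```python
from math import gcd

# For M = Z/mZ the formula gives M ⊗ Z/nZ = (Z/mZ)/n(Z/mZ);
# the order of this quotient equals gcd(m, n).
def order_of_quotient(m, n):
    nM = {n * x % m for x in range(m)}   # the subgroup nM of Z/mZ
    return m // len(nM)                  # |M / nM|

assert order_of_quotient(4, 6) == gcd(4, 6) == 2
assert all(order_of_quotient(m, n) == gcd(m, n)
           for m in range(1, 12) for n in range(1, 12))
```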

More generally, given a presentation of some R-module M, that is, a number of generators mi ∈ M, i ∈ I, together with relations ${\displaystyle \sum _{i\in I}a_{ji}m_{i}=0,\quad j\in J,}$ with ${\displaystyle a_{ji}\in R}$, the tensor product can be computed as the following cokernel:

${\displaystyle M\otimes _{R}N=\operatorname {coker} (N^{J}\rightarrow N^{I})}$

Here NJ := ⨁jJ N and the map is determined by sending some nN in the jth copy of NJ to ajin (in NI). Colloquially, this may be rephrased by saying that a presentation of M gives rise to a presentation of MR N. This is referred to by saying that the tensor product is a right exact functor. It is not in general left exact, that is, given an injective map of R-modules M1M2, the tensor product

${\displaystyle M_{1}\otimes _{R}N\to M_{2}\otimes _{R}N}$

is not usually injective. For example, tensoring the (injective) map given by multiplication with n, n : Z → Z, with Z/nZ yields the zero map 0 : Z/nZ → Z/nZ, which is not injective. Higher Tor functors measure the defect of the tensor product not being left exact. All higher Tor functors are assembled in the derived tensor product.

## Tensor product of algebras

Let R be a commutative ring. The tensor product of R-modules applies, in particular, if A and B are R-algebras. In this case, the tensor product AR B is an R-algebra itself by putting

${\displaystyle (a_{1}\otimes b_{1})\cdot (a_{2}\otimes b_{2})=(a_{1}\cdot a_{2})\otimes (b_{1}\cdot b_{2}).}$

For example,

${\displaystyle R[x]\otimes _{R}R[y]\cong R[x,y].}$

A particular example is when A and B are fields containing a common subfield R. The tensor product of fields is closely related to Galois theory: if, say, A = R[x] / f(x), where f is some irreducible polynomial with coefficients in R, the tensor product can be calculated as

${\displaystyle A\otimes _{R}B\cong B[x]/f(x)}$

where now f is interpreted as the same polynomial, but with its coefficients regarded as elements of B. In the larger field B, the polynomial may become reducible, which brings in Galois theory. For example, if A = B is a Galois extension of R, then

${\displaystyle A\otimes _{R}A\cong A[x]/f(x)}$

is isomorphic (as an A-algebra) to ${\displaystyle A^{\deg(f)}}$.
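For instance, with R the real numbers and A = B = R[x]/(x² + 1) ≅ C, the polynomial x² + 1 splits over C into (x − i)(x + i), so C ⊗R C ≅ C × C ≅ C². A small numerical sketch, using NumPy's root finder purely as an illustration of the splitting:

```python
import numpy as np

# x² + 1 is irreducible over R but splits over C: its roots there are ±i,
# so C[x]/(x² + 1) ≅ C[x]/(x - i) × C[x]/(x + i) ≅ C², matching deg(f) = 2.
roots = np.roots([1, 0, 1])   # roots of x² + 1
assert np.allclose(sorted(roots, key=lambda z: z.imag), [-1j, 1j])
```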

## Eigenconfigurations of tensors

Square matrices A with entries in a field K represent linear maps of vector spaces, say ${\displaystyle K^{n}\to K^{n}}$, and thus linear maps ${\displaystyle \psi :\mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}}$ of projective spaces over ${\displaystyle K}$. If A is nonsingular then ${\displaystyle \psi }$ is well-defined everywhere, and the eigenvectors of ${\displaystyle A}$ correspond to the fixed points of ${\displaystyle \psi }$. The eigenconfiguration of A consists of ${\displaystyle n}$ points in ${\displaystyle \mathbb {P} ^{n-1}}$, provided ${\displaystyle A}$ is generic and K is algebraically closed. The fixed points of nonlinear maps are the eigenvectors of tensors. Let ${\displaystyle A=(a_{i_{1}i_{2}\cdots i_{d}})}$ be a ${\displaystyle d}$-dimensional tensor of format ${\displaystyle n\times n\times \cdots \times n}$ with entries ${\displaystyle (a_{i_{1}i_{2}\cdots i_{d}})}$ lying in an algebraically closed field ${\displaystyle K}$ of characteristic zero. Such a tensor ${\displaystyle A\in (K^{n})^{\otimes d}}$ defines polynomial maps ${\displaystyle K^{n}\to K^{n}}$ and ${\displaystyle \mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}}$ with coordinates

${\displaystyle \psi _{i}(x_{1},...,x_{n})=\sum _{j_{2}=1}^{n}\sum _{j_{3}=1}^{n}\cdots \sum _{j_{d}=1}^{n}a_{ij_{2}j_{3}\cdots j_{d}}x_{j_{2}}x_{j_{3}}\cdots x_{j_{d}}\;\;{\mbox{for }}i=1,...,n}$

Thus each of the ${\displaystyle n}$ coordinates of ${\displaystyle \psi }$ is a homogeneous polynomial ${\displaystyle \psi _{i}}$ of degree ${\displaystyle d-1}$ in ${\displaystyle \mathbf {x} =(x_{1},...,x_{n})}$. The eigenvectors of ${\displaystyle A}$ are the solutions of the constraint

${\displaystyle {\mbox{rank}}{\begin{pmatrix}x_{1}&x_{2}&\cdots &x_{n}\\\psi _{1}(\mathbf {x} )&\psi _{2}(\mathbf {x} )&\cdots &\psi _{n}(\mathbf {x} )\end{pmatrix}}\leq 1}$

and the eigenconfiguration is given by the variety of the ${\displaystyle 2\times 2}$ minors of this matrix.[15]
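A small NumPy sketch of this setup, with a hypothetical 2×2×2 tensor chosen so the eigenconfiguration can be found by hand:

```python
import numpy as np

# A 2×2×2 tensor (d = 3, n = 2) with a_{111} = a_{222} = 1, all other
# entries 0, giving ψ(x) = (x₁², x₂²).  Its eigenvectors solve the minor
# condition x₁ψ₂(x) − x₂ψ₁(x) = x₁x₂(x₂ − x₁) = 0: the three points
# (1:0), (0:1) and (1:1) in P¹.
a = np.zeros((2, 2, 2))
a[0, 0, 0] = a[1, 1, 1] = 1.0

def psi(x):
    # ψ_i(x) = Σ_{j,k} a_{ijk} x_j x_k
    return np.einsum('ijk,j,k->i', a, x, x)

for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    x = np.array(x)
    p = psi(x)
    # the 2×2 minor of [[x₁, x₂], [ψ₁, ψ₂]] vanishes at an eigenvector
    assert np.isclose(x[0] * p[1] - x[1] * p[0], 0.0)
```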

## Other examples of tensor products


### Tensor product of multilinear forms

Given two multilinear forms ${\displaystyle \scriptstyle f(x_{1},\dots ,x_{k})}$ and ${\displaystyle \scriptstyle g(x_{1},\dots ,x_{m})}$ on a vector space ${\displaystyle V}$ over the field ${\displaystyle K}$ their tensor product is the multilinear form

${\displaystyle (f\otimes g)(x_{1},\dots ,x_{k+m})=f(x_{1},\dots ,x_{k})g(x_{k+1},\dots ,x_{k+m}).}$[16]

This is a special case of the product of tensors if they are seen as multilinear maps (see also tensors as multilinear maps). Thus the components of the tensor product of multilinear forms can be computed by the Kronecker product.

### Tensor product of graphs

Though it is called the "tensor product", the tensor product of graphs is not a tensor product in the above sense; it is instead the category-theoretic product in the category of graphs and graph homomorphisms. However, the adjacency matrix of the tensor product of two graphs is the Kronecker product of their adjacency matrices. Compare also the section Tensor product of linear maps above.

### Monoidal categories

A general context for tensor product is that of a monoidal category.

## Applications

### Exterior and symmetric algebra

Two notable constructions in linear algebra can be obtained as quotients of the tensor product: the exterior algebra and the symmetric algebra. For example, given a vector space V, the exterior product

${\displaystyle V\wedge V}$

is defined as

${\displaystyle V\otimes V/(v\otimes v{\text{ for all }}v\in V).}$

Note that when the underlying field of V does not have characteristic 2, then this definition is equivalent to

${\displaystyle V\otimes V/(v_{1}\otimes v_{2}+v_{2}\otimes v_{1}{\text{ for all }}v_{1},v_{2}\in V).}$

The image of ${\displaystyle v_{1}\otimes v_{2}}$ in the exterior product is usually denoted ${\displaystyle v_{1}\wedge v_{2}}$ and satisfies, by construction, ${\displaystyle v_{1}\wedge v_{2}=-v_{2}\wedge v_{1}}$. Similar constructions are possible for ${\displaystyle V\otimes \dots \otimes V}$ (n factors), giving rise to ${\displaystyle \Lambda ^{n}V}$, the nth exterior power of V. The latter notion is the basis of differential n-forms.

The symmetric algebra is constructed in a similar manner:

${\displaystyle \operatorname {Sym} ^{n}V:=\underbrace {V\otimes \dots \otimes V} _{n}/(\dots \otimes v_{i}\otimes v_{i+1}\otimes \dots -\dots \otimes v_{i+1}\otimes v_{i}\otimes \dots )}$

That is, in the symmetric algebra two adjacent vectors (and therefore all of them) can be interchanged. The resulting objects are called symmetric tensors.

## Tensor product in programming

### Array programming languages

Array programming languages may have this pattern built in. For example, in APL the tensor product is expressed as ○.× (for example A ○.× B or A ○.× B ○.× C). In J the tensor product is the dyadic form of */ (for example a */ b or a */ b */ c).

Note that J's treatment also allows the representation of some tensor fields, as a and b may be functions instead of constants. This product of two functions is a derived function, and if a and b are differentiable, then a */ b is differentiable.

However, these kinds of notation are not universally present in array languages. Other array languages may require explicit treatment of indices (for example, MATLAB), and/or may not support higher-order functions such as the Jacobian derivative (for example, Fortran/APL).
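In NumPy, for instance, the analogous operation is available as `np.multiply.outer` (the `outer` method of any ufunc) or as `np.tensordot` with `axes=0`; a brief sketch:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([10, 20, 30])

# NumPy's analogue of APL's A ∘.× B and J's a */ b:
t1 = np.multiply.outer(a, b)
t2 = np.tensordot(a, b, axes=0)   # same result via tensordot
assert t1.shape == (2, 3)
assert np.array_equal(t1, t2)
```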

## Notes

1. ^ In a fancier language, δs is Dirac's delta function with point mass at s when S is viewed as a discrete space.
2. ^ Conrad, Keith. Tensor products. University of Connecticut, lecture notes.
3. ^ Eisenbud, David, Commutative algebra with a view to algebraic geometry, Springer
4. ^ Lee, J. M. (2003), Introduction to Smooth manifolds, Springer Graduate Texts in Mathematics, 218, ISBN 0-387-95448-1
5. ^
6. ^ This is similar to how the engineering use of "${\displaystyle {\pmod {n}}}$" specifically returns the remainder, one of the many elements of the ${\displaystyle {\pmod {n}}}$ equivalence class.
7. ^ Hazewinkel, Michiel; Gubareni, Nadezhda Mikhaĭlovna; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Springer. p. 100. ISBN 978-1-4020-2690-4.
8. ^
9. ^ Bourbaki (1989), p. 244 defines the usage "tensor product of x and y", elements of the respective modules.
10. ^ Analogous formulas also hold for contravariant tensors, as well as tensors of mixed variance. Although in many cases such as when there is an inner product defined, the distinction is irrelevant.
11. ^ "The Coevaluation on Vector Spaces". The Unapologetic Mathematician. 2008-11-13. Retrieved 2017-01-26.
12. ^
13. ^ Hungerford, Thomas W. (1974). Algebra. Springer. ISBN 0-387-90518-9.
14. ^ Chen, Jungkai Alfred (Spring 2004), "Tensor product" (PDF), Advanced Algebra II (lecture notes), National Taiwan University
15. ^ Abo, H.; Seigal, A.; Sturmfels, B. arXiv:1505.05729 [math.AG].
16. ^ Tu, L. W. (2010). An Introduction to Manifolds. Universitext. Springer. p. 25. ISBN 978-1-4419-7399-3.