# Ring of polynomial functions


In mathematics, the ring of polynomial functions on a vector space V over an infinite field k gives a coordinate-free analog of a polynomial ring. It is denoted by k[V]. If V has finite dimension and is viewed as an algebraic variety, then k[V] is precisely the coordinate ring of V.

The explicit definition of the ring can be given as follows. If ${\displaystyle k[t_{1},\dots ,t_{n}]}$ is a polynomial ring, then we can view ${\displaystyle t_{i}}$ as coordinate functions on ${\displaystyle k^{n}}$; i.e., ${\displaystyle t_{i}(x)=x_{i}}$ when ${\displaystyle x=(x_{1},\dots ,x_{n}).}$ This suggests the following: given a vector space V, let k[V] be the subring of the ring of all functions ${\displaystyle V\to k}$ generated by the dual space ${\displaystyle V^{*}}$. If we fix a basis for V and write ${\displaystyle t_{i}}$ for its dual basis, then k[V] consists of polynomials in the ${\displaystyle t_{i}}$; it is a polynomial ring.
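The coordinate functions can be made concrete with a minimal sketch in Python for ${\displaystyle V=k^{2}}$, with k the rationals modeled by exact integer arithmetic; the names `t`, `t1`, `t2`, and `f` below are illustrative, not from the article:

```python
def t(i):
    """The i-th coordinate functional on k^n (0-indexed): t_i(x) = x_i."""
    return lambda x: x[i]

t1, t2 = t(0), t(1)

def f(x):
    """An element of k[V]: a polynomial in the coordinate functions,
    here f = t1^2 + 3*t1*t2."""
    return t1(x)**2 + 3 * t1(x) * t2(x)

print(f((2, 5)))  # 2**2 + 3*2*5 = 34
```

Evaluating `f` at a vector simply evaluates the corresponding polynomial at that vector's coordinates, which is the sense in which k[V] is a ring of functions.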

In applications, one also defines k[V] when V is defined over some subfield of k (e.g., k is the field of complex numbers and V is a real vector space). The same definition still applies.

## Symmetric multilinear maps

Let k be an infinite field of characteristic zero (or, at least, of very large characteristic) and V a finite-dimensional vector space.

Let ${\displaystyle S^{q}(V)}$ denote the vector space of multilinear functionals ${\displaystyle \textstyle \lambda :\prod _{1}^{q}V\to k}$ that are symmetric: ${\displaystyle \lambda (v_{1},\dots ,v_{q})}$ is unchanged under every permutation of the ${\displaystyle v_{i}}$.

Any λ in ${\displaystyle S^{q}(V)}$ gives rise to a homogeneous polynomial function f of degree q: we just let ${\displaystyle f(v)=\lambda (v,\dots ,v).}$ To see that f is a polynomial function, choose a basis ${\displaystyle e_{i},\,1\leq i\leq n}$ of V and ${\displaystyle t_{i}}$ its dual. Then

${\displaystyle \lambda (v_{1},\dots ,v_{q})=\sum _{i_{1},\dots ,i_{q}=1}^{n}\lambda (e_{i_{1}},\dots ,e_{i_{q}})t_{i_{1}}(v_{1})\cdots t_{i_{q}}(v_{q})}$,

which implies that f is a polynomial in the ${\displaystyle t_{i}}$.
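The construction ${\displaystyle \lambda \mapsto f}$ can be sketched for q = 2: a symmetric bilinear functional, encoded by a symmetric matrix, yields a homogeneous polynomial function of degree 2. This is a minimal illustration; the names `A`, `lam`, and `phi_lam` are ours:

```python
# A symmetric bilinear functional on k^2, encoded by a symmetric matrix A.
A = [[1, 2],
     [2, 3]]

def lam(u, v):
    """lambda(u, v) = sum_ij A[i][j] * u_i * v_j; symmetric since A is."""
    return sum(A[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

def phi_lam(v):
    """The homogeneous degree-2 polynomial function v -> lambda(v, v)."""
    return lam(v, v)

u, v = (1, 4), (2, -1)
print(lam(u, v) == lam(v, u))                      # True: symmetry
print(phi_lam((2, 8)) == 2**2 * phi_lam((1, 4)))   # True: homogeneous of degree 2
```

Expanding `phi_lam` in the dual basis gives exactly the formula above: the coefficients of the monomials ${\displaystyle t_{i}t_{j}}$ are the entries ${\displaystyle \lambda (e_{i},e_{j})}$.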

Thus, there is a well-defined linear map:

${\displaystyle \phi :S^{q}(V)\to k[V]_{q},\,\phi (\lambda )(v)=\lambda (v,\cdots ,v).}$

We show it is an isomorphism. Choosing a basis as before, any homogeneous polynomial function f of degree q can be written as:

${\displaystyle f=\sum _{i_{1},\dots ,i_{q}=1}^{n}a_{i_{1}\cdots i_{q}}t_{i_{1}}\cdots t_{i_{q}}}$

where ${\displaystyle a_{i_{1}\cdots i_{q}}}$ are symmetric in ${\displaystyle i_{1},\dots ,i_{q}}$. Let

${\displaystyle \psi (f)(v_{1},\dots ,v_{q})=\sum _{i_{1},\cdots ,i_{q}=1}^{n}a_{i_{1}\cdots i_{q}}t_{i_{1}}(v_{1})\cdots t_{i_{q}}(v_{q}).}$

Clearly, φ ∘ ψ is the identity; in particular, φ is surjective. To see φ is injective, suppose φ(λ) = 0. Consider

${\displaystyle \phi (\lambda )(t_{1}v_{1}+\cdots +t_{q}v_{q})=\lambda (t_{1}v_{1}+\cdots +t_{q}v_{q},\dots ,t_{1}v_{1}+\cdots +t_{q}v_{q})}$,

which is zero for all scalars ${\displaystyle t_{1},\dots ,t_{q}}$. The coefficient of ${\displaystyle t_{1}t_{2}\cdots t_{q}}$ in the above expression is q! times λ(v1, …, vq); since k has characteristic zero, q! is invertible in k, and it follows that λ = 0.

Note: φ is independent of a choice of basis; so the above proof shows that ψ is also independent of the choice of basis, a fact that is not a priori obvious.

Example: A symmetric bilinear functional gives rise to a quadratic form in a unique way, and every quadratic form arises in this way.
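For q = 2 the inverse ψ is the classical polarization identity: assuming the characteristic of k is not 2, the symmetric bilinear functional is recovered from the quadratic form by ${\displaystyle \lambda (u,v)={\tfrac {1}{2}}(f(u+v)-f(u)-f(v))}$. A minimal numeric check in Python (the names `q` and `psi_q` are ours):

```python
from fractions import Fraction

def q(v):
    """A quadratic form on k^2: q(v) = v1^2 + 6*v1*v2 + 2*v2^2."""
    return v[0]**2 + 6*v[0]*v[1] + 2*v[1]**2

def psi_q(u, v):
    """Recover the symmetric bilinear functional from q by polarization."""
    s = (u[0] + v[0], u[1] + v[1])
    return Fraction(q(s) - q(u) - q(v), 2)

u, v = (1, 2), (3, -1)
print(psi_q(u, v) == psi_q(v, u))   # True: psi(q) is symmetric
print(psi_q(v, v) == q(v))          # True: phi(psi(q)) = q
```

The exact division by 2 is where the hypothesis on the characteristic enters; over a field of characteristic 2 the polarization identity fails.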

## Taylor series expansion

Given a smooth function, locally one can obtain its partial derivatives from its Taylor series expansion and, conversely, one can recover the function from the series expansion. This fact continues to hold for polynomial functions on a vector space. If f is in k[V], then we write: for x, y in V,

${\displaystyle f(x+y)=\sum _{n=0}^{\infty }g_{n}(x,y)}$

where gn(x, y) are homogeneous of degree n in y and only finitely many of them are nonzero. We then let

${\displaystyle (P_{y}f)(x)=g_{1}(x,y),}$

resulting in the linear endomorphism Py of k[V]. It is called the polarization operator. We then have, as promised:

Theorem — For each f in k[V] and x, y in V,

${\displaystyle f(x+y)=\sum _{n=0}^{\infty }{1 \over n!}P_{y}^{n}f(x)}$.

Proof: We first note that (Py f) (x) is the coefficient of t in f(x + t y); in other words, since g0(x, y) = g0(x, 0) = f(x),

${\displaystyle P_{y}f(x)=\left.{d \over dt}\right|_{t=0}f(x+ty)}$

where the right-hand side is, by definition,

${\displaystyle \left.{f(x+ty)-f(x) \over t}\right|_{t=0}.}$

The theorem follows from this. For example, for n = 2, we have:

${\displaystyle P_{y}^{2}f(x)=\left.{\partial \over \partial t_{1}}\right|_{t_{1}=0}P_{y}f(x+t_{1}y)=\left.{\partial \over \partial t_{1}}\right|_{t_{1}=0}\left.{\partial \over \partial t_{2}}\right|_{t_{2}=0}f(x+(t_{1}+t_{2})y)=2!g_{2}(x,y).}$

The general case is similar. ${\displaystyle \square }$
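The theorem can be checked exactly for a concrete polynomial. The sketch below takes ${\displaystyle V=k^{1}}$ with ${\displaystyle f(v)=v^{3}}$ and extracts ${\displaystyle (P_{y}g)(x)}$, the coefficient of t in ${\displaystyle g(x+ty)}$, by an exact finite-difference formula (valid for degree at most 4); all names are ours, and exact rational arithmetic stands in for an infinite field of characteristic zero:

```python
from fractions import Fraction
from math import factorial

def f(v):
    """A concrete element of k[V] for V = k^1: f(v) = v**3."""
    return v**3

def P(y, g):
    """Polarization operator P_y: (P_y g)(x) is the coefficient of t in g(x + t*y).
    For h(t) = g(x + t*y) of degree <= 4, the linear coefficient is
    (8*(h(1) - h(-1)) - (h(2) - h(-2))) / 12, exactly."""
    def Pg(x):
        h = lambda t: g(x + t * y)
        return (8 * (h(1) - h(-1)) - (h(2) - h(-2))) / 12
    return Pg

x, y = Fraction(2), Fraction(5)

# f(x + y) = sum_n (1/n!) * (P_y^n f)(x); for cubic f the sum stops at n = 3.
g, total = f, Fraction(0)
for n in range(4):
    total += Fraction(1, factorial(n)) * g(x)
    g = P(y, g)

print(total == f(x + y))  # True: both sides equal (2 + 5)^3 = 343
```

Each application of `P` returns another polynomial function of x, so the operator can be iterated, mirroring the powers ${\displaystyle P_{y}^{n}}$ in the theorem.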

## Operator product algebra

When the polynomials are valued not in a field k but in some algebra, one may define additional structure. Thus, for example, one may consider the ring of functions over GL(n,m), instead of over k = GL(1,m). In this case, one may impose an additional axiom.

The operator product algebra is an associative algebra of the form

${\displaystyle A^{i}(x)B^{j}(y)=\sum _{k}f_{k}^{ij}(x,y,z)C^{k}(z)}$

The structure constants ${\displaystyle f_{k}^{ij}(x,y,z)}$ are required to be single-valued functions, rather than sections of some vector bundle. The fields (or operators) ${\displaystyle A^{i}(x)}$ are required to span the ring of functions. In practical calculations, it is usually required that the sums be analytic within some radius of convergence; typically with a radius of convergence of ${\displaystyle |x-y|}$. Thus, the ring of functions can be taken to be the ring of polynomial functions.

The above can be considered to be an additional requirement imposed on the ring; it is sometimes called the bootstrap. In physics, a special case of the operator product algebra is known as the operator product expansion.