# Ring of polynomial functions

In mathematics, the ring of polynomial functions on a vector space V over a field k gives a coordinate-free analog of a polynomial ring. It is denoted by k[V]. If V has finite dimension and is viewed as an algebraic variety, then k[V] is precisely the coordinate ring of V.

The explicit definition of the ring can be given as follows. If ${\displaystyle k[t_{1},\dots ,t_{n}]}$ is a polynomial ring, then we can view ${\displaystyle t_{i}}$ as coordinate functions on ${\displaystyle k^{n}}$; i.e., ${\displaystyle t_{i}(x)=x_{i}}$ when ${\displaystyle x=(x_{1},\dots ,x_{n}).}$ This suggests the following: given a vector space V, let k[V] be the ring generated by the dual space ${\displaystyle V^{*}}$, which is a subring of the ring of all functions ${\displaystyle V\to k}$. If we fix a basis for V and write ${\displaystyle t_{i}}$ for its dual basis, then k[V] consists of polynomials in ${\displaystyle t_{i}}$.

If k is infinite, then k[V] is the symmetric algebra of the dual space ${\displaystyle V^{*}}$.

In applications, one also defines k[V] when V is defined over some subfield of k (e.g., k is the field of complex numbers and V is a real vector space). The same definition still applies.

Throughout the article, for simplicity, the base field k is assumed to be infinite.

## Relation with polynomial ring

Let ${\displaystyle A=K[x]}$ be the ring of all polynomials over a field K and B the ring of all polynomial functions in one variable over K. Both A and B are algebras over K, with the usual addition and multiplication of polynomials and of functions. We can map each ${\displaystyle f}$ in A to ${\displaystyle {\hat {f}}}$ in B by the rule ${\displaystyle {\hat {f}}(t)=f(t)}$. A routine check shows that the mapping ${\displaystyle f\mapsto {\hat {f}}}$ is a homomorphism of the algebras A and B. This homomorphism is an isomorphism if and only if K is an infinite field. For example, if K is a finite field, let ${\displaystyle p(x)=\prod \limits _{t\in K}(x-t)}$. Then p is a nonzero polynomial in K[x]; however, ${\displaystyle p(t)=0}$ for every t in K, so ${\displaystyle {\hat {p}}=0}$ is the zero function and our homomorphism is not an isomorphism (and, in fact, the algebras are not isomorphic at all, since the algebra of polynomials is infinite while that of polynomial functions is finite).
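The finite-field failure can be checked directly; a minimal sketch in Python, taking K to be the three-element field ${\displaystyle \mathbb {F} _{3}=\{0,1,2\}}$ with arithmetic mod 3 (the choice of field is ours, for illustration):

```python
# Counterexample over the finite field K = F_3 = {0, 1, 2} (arithmetic mod 3).
# p(x) = (x - 0)(x - 1)(x - 2) is a nonzero polynomial of degree 3,
# yet its associated function p-hat on K is identically zero.
K = range(3)

def p(t):
    # Evaluate p at t, reducing mod 3.
    return ((t - 0) * (t - 1) * (t - 2)) % 3

values = [p(t) for t in K]
print(values)  # [0, 0, 0]: p-hat is the zero function
assert values == [0, 0, 0]
```

Each factor of p kills one element of K, so the product vanishes everywhere on K even though the polynomial itself is nonzero.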

If K is infinite, choose a polynomial f such that ${\displaystyle {\hat {f}}=0}$; we want to show that this implies ${\displaystyle f=0}$. Let ${\displaystyle \deg f=n}$ and let ${\displaystyle t_{0},t_{1},\dots ,t_{n}}$ be n + 1 distinct elements of K. Then ${\displaystyle f(t_{i})=0}$ for ${\displaystyle 0\leq i\leq n}$, and by Lagrange interpolation (a polynomial of degree at most n is determined by its values at n + 1 distinct points) we have ${\displaystyle f=0}$. Hence the mapping ${\displaystyle f\mapsto {\hat {f}}}$ is injective. Since this mapping is clearly surjective, it is bijective and thus an algebra isomorphism of A and B.
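The interpolation step can be made concrete; a small sketch over K = ℚ using exact rational arithmetic (the sample polynomial and the interpolation nodes are arbitrary choices for illustration):

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial of degree <= n passing
    through the n + 1 given (t_i, f(t_i)) points."""
    total = Fraction(0)
    for i, (ti, fi) in enumerate(points):
        term = Fraction(fi)
        for j, (tj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - tj, ti - tj)
        total += term
    return total

# A degree-2 polynomial is determined by its values at 3 distinct points:
f = lambda t: 2 * t * t - 3 * t + 1
points = [(t, f(t)) for t in (0, 1, 2)]
assert all(lagrange_eval(points, x) == f(x) for x in range(-5, 6))

# If those 3 values are all zero, interpolation forces the zero polynomial:
zeros = [(t, 0) for t in (0, 1, 2)]
assert all(lagrange_eval(zeros, x) == 0 for x in range(-5, 6))
```

The second assertion is exactly the injectivity argument: a polynomial of degree at most n vanishing at n + 1 distinct points must be the zero polynomial.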

## Symmetric multilinear maps

Let k be an infinite field of characteristic zero (or at least very large) and V a finite-dimensional vector space.

Let ${\displaystyle S^{q}(V)}$ denote the vector space of multilinear functionals ${\displaystyle \textstyle \lambda :\prod _{1}^{q}V\to k}$ that are symmetric, i.e., such that ${\displaystyle \lambda (v_{1},\dots ,v_{q})}$ is unchanged under any permutation of the ${\displaystyle v_{i}}$'s.

Any λ in ${\displaystyle S^{q}(V)}$ gives rise to a homogeneous polynomial function f of degree q: we just let ${\displaystyle f(v)=\lambda (v,\dots ,v).}$ To see that f is a polynomial function, choose a basis ${\displaystyle e_{i},\,1\leq i\leq n}$ of V and ${\displaystyle t_{i}}$ its dual. Then

${\displaystyle \lambda (v_{1},\dots ,v_{q})=\sum _{i_{1},\dots ,i_{q}=1}^{n}\lambda (e_{i_{1}},\dots ,e_{i_{q}})t_{i_{1}}(v_{1})\cdots t_{i_{q}}(v_{q})}$,

which shows that f is a polynomial in the ti's.
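For q = 2 this construction can be checked directly; a sketch on V = k², with the symmetric bilinear functional given by an arbitrarily chosen symmetric matrix of values ${\displaystyle \lambda (e_{i},e_{j})}$:

```python
# lam(u, v) = sum_{i,j} a[i][j] * t_i(u) * t_j(v), where a[i][j] = lam(e_i, e_j)
# is a symmetric matrix and t_i(v) = v[i] are the coordinate (dual basis)
# functions on k^2. The matrix entries below are an arbitrary example.
a = [[1, 2],
     [2, 5]]  # symmetric: a[i][j] == a[j][i]

def lam(u, v):
    return sum(a[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

def f(v):
    # The associated homogeneous polynomial function of degree 2:
    # f(v) = lam(v, v) = t_1^2 + 4 t_1 t_2 + 5 t_2^2 evaluated at v.
    return lam(v, v)

v = (3, -1)
assert f(v) == v[0] ** 2 + 4 * v[0] * v[1] + 5 * v[1] ** 2
```

Expanding lam(v, v) in coordinates is exactly the displayed formula with ${\displaystyle v_{1}=\dots =v_{q}=v}$, which is why f is a polynomial in the coordinate functions.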

Thus, there is a well-defined linear map:

${\displaystyle \phi :S^{q}(V)\to k[V]_{q},\,\phi (\lambda )(v)=\lambda (v,\dots ,v).}$

We show it is an isomorphism. Choosing a basis as before, any homogeneous polynomial function f of degree q can be written as:

${\displaystyle f=\sum _{i_{1},\dots ,i_{q}=1}^{n}a_{i_{1}\cdots i_{q}}t_{i_{1}}\cdots t_{i_{q}}}$

where ${\displaystyle a_{i_{1}\cdots i_{q}}}$ are symmetric in ${\displaystyle i_{1},\dots ,i_{q}}$. Let

${\displaystyle \psi (f)(v_{1},\dots ,v_{q})=\sum _{i_{1},\dots ,i_{q}=1}^{n}a_{i_{1}\cdots i_{q}}t_{i_{1}}(v_{1})\cdots t_{i_{q}}(v_{q}).}$

Clearly, φ ∘ ψ is the identity; in particular, φ is surjective. To see that φ is injective, suppose φ(λ) = 0. For scalars ${\displaystyle t_{1},\dots ,t_{q}}$ in k, consider

${\displaystyle \phi (\lambda )(t_{1}v_{1}+\cdots +t_{q}v_{q})=\lambda (t_{1}v_{1}+\cdots +t_{q}v_{q},\dots ,t_{1}v_{1}+\cdots +t_{q}v_{q})}$,

which is zero. By multilinearity, the coefficient of ${\displaystyle t_{1}t_{2}\cdots t_{q}}$ in the above expression is q! times λ(v1, …, vq); since q! is nonzero in k, it follows that λ = 0.

Note: φ is independent of the choice of basis; so the above proof shows that ψ is also independent of the basis, a fact that is not a priori obvious.

Example: A symmetric bilinear functional gives rise to a quadratic form in a unique way, and every quadratic form arises in this way: this is the case q = 2 of the above.
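For q = 2 the inverse ψ is the classical polarization identity ${\displaystyle \lambda (u,v)={\tfrac {1}{2}}(f(u+v)-f(u)-f(v))}$; a sketch with an arbitrarily chosen quadratic form on k²:

```python
from fractions import Fraction

def f(v):
    # An arbitrary quadratic form on k^2 (the coefficients are a free choice).
    return v[0] ** 2 + 3 * v[0] * v[1] + 7 * v[1] ** 2

def psi_f(u, v):
    # Polarization identity: the symmetric bilinear functional lam
    # with f(v) = lam(v, v).
    w = (u[0] + v[0], u[1] + v[1])
    return Fraction(f(w) - f(u) - f(v), 2)

u, v = (1, 2), (3, -1)
# psi_f is symmetric, and phi after psi is the identity: lam(v, v) = f(v).
assert psi_f(u, v) == psi_f(v, u)
assert psi_f(u, u) == f(u)
assert psi_f(v, v) == f(v)
```

The division by 2 is where the characteristic assumption enters: over a field of characteristic 2 the quadratic form does not determine the symmetric bilinear functional.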

## Taylor series expansion

Given a smooth function, locally one can obtain its partial derivatives from its Taylor series expansion and, conversely, one can recover the function from the series expansion. This fact continues to hold for polynomial functions on a vector space. If f is in k[V], then for x, y in V we write:

${\displaystyle f(x+y)=\sum _{n=0}^{\infty }g_{n}(x,y)}$

where gn(x, y) are homogeneous of degree n in y and only finitely many of them are nonzero. We then let

${\displaystyle (P_{y}f)(x)=g_{1}(x,y),}$

This defines a linear endomorphism Py of k[V], called the polarization operator. We then have, as promised:

Theorem — For each f in k[V] and x, y in V,

${\displaystyle f(x+y)=\sum _{n=0}^{\infty }{1 \over n!}P_{y}^{n}f(x)}$.

Proof: We first note that (Py f) (x) is the coefficient of t in f(x + t y); in other words, since g0(x, y) = g0(x, 0) = f(x),

${\displaystyle P_{y}f(x)=\left.{d \over dt}\right|_{t=0}f(x+ty)}$

where the right-hand side is, by definition,

${\displaystyle \left.{f(x+ty)-f(x) \over t}\right|_{t=0}.}$

The theorem follows from this. For example, for n = 2, we have:

${\displaystyle P_{y}^{2}f(x)=\left.{\partial \over \partial t_{1}}\right|_{t_{1}=0}P_{y}f(x+t_{1}y)=\left.{\partial \over \partial t_{1}}\right|_{t_{1}=0}\left.{\partial \over \partial t_{2}}\right|_{t_{2}=0}f(x+(t_{1}+t_{2})y)=2!g_{2}(x,y).}$

The general case is similar. ${\displaystyle \square }$
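The theorem can be verified for a concrete polynomial by implementing Py as an exact directional derivative; the sketch below uses dual numbers (an implementation device of ours, not part of the text) with the arbitrarily chosen ${\displaystyle f(v)=v_{1}^{2}v_{2}}$ on k²:

```python
from fractions import Fraction
from math import factorial

class Dual:
    """Dual number a + b*eps with eps^2 = 0; gives exact derivatives of
    polynomials. Nesting Duals (a, b themselves Dual) yields iterated
    directional derivatives, i.e. iterated applications of P_y."""
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def polarize(f, y):
    """P_y f: v -> d/dt f(v + t*y) at t = 0."""
    def Pf(v):
        out = f([Dual(vi, yi) for vi, yi in zip(v, y)])
        return out.b if isinstance(out, Dual) else 0
    return Pf

f = lambda v: v[0] * v[0] * v[1]     # f in k[V], homogeneous of degree 3
x, y = [2, 3], [1, 5]

# f(x + y) = sum_n (1/n!) (P_y^n f)(x); deg f = 3, so terms with n > 3 vanish.
total, h = Fraction(0), f
for n in range(4):
    total += Fraction(h(x), factorial(n))
    h = polarize(h, y)
assert total == f([x[0] + y[0], x[1] + y[1]])
```

Here the loop accumulates f(x), (Py f)(x), (Py² f)(x)/2!, (Py³ f)(x)/3!, and the assertion checks that their sum equals f(x + y), as the theorem states.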

## Operator product algebra

When the polynomials are valued not in a field k but in some algebra, one may define additional structure. Thus, for example, one may consider the ring of functions valued in GL(n,m), instead of in k = GL(1,m). In this case, one may impose an additional axiom.

The operator product algebra is an associative algebra of the form

${\displaystyle A^{i}(x)B^{j}(y)=\sum _{k}f_{k}^{ij}(x,y,z)C^{k}(z)}$

The structure constants ${\displaystyle f_{k}^{ij}(x,y,z)}$ are required to be single-valued functions, rather than sections of some vector bundle. The fields (or operators) ${\displaystyle A^{i}(x)}$ are required to span the ring of functions. In practical calculations, it is usually required that the sums be analytic within some radius of convergence, typically of radius ${\displaystyle |x-y|}$. Thus, the ring of functions can be taken to be the ring of polynomial functions.

The above can be considered to be an additional requirement imposed on the ring; it is sometimes called the bootstrap. In physics, a special case of the operator product algebra is known as the operator product expansion.