# Vanish at infinity

In mathematics, a function on a normed vector space is said to vanish at infinity if

$f(x)\to 0$ as $\|x\|\to \infty.$

For example, the function

$f(x)=\frac{1}{x^2+1}$

defined on the real line vanishes at infinity.
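As a quick numerical illustration (not part of the definition), the values of this $f$ shrink monotonically as $|x|$ grows, approaching $0$:

```python
# Illustrative check that f(x) = 1/(x^2 + 1) vanishes at infinity:
# its values decrease toward 0 as |x| grows.

def f(x):
    return 1.0 / (x * x + 1.0)

# Evaluate at x = 1, 10, 100, 1000, 10000: the sequence is strictly
# decreasing and becomes arbitrarily small.
values = [f(10.0 ** k) for k in range(5)]
assert all(values[i] > values[i + 1] for i in range(len(values) - 1))
assert f(1000.0) < 1e-5
```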

More generally, a function $f$ on a locally compact space (which may not have a norm) vanishes at infinity if, given any positive number $\epsilon$, there is a compact subset $K$ such that

$\|f(x)\| < \epsilon$

whenever the point $x$ lies outside of $K$. For a given locally compact space $\Omega$, the set of such functions

$f:\Omega\rightarrow\mathbb{F}$

(where $\mathbb{F}$ is either the field $\mathbb{R}$ of real numbers or the field $\mathbb{C}$ of complex numbers) forms an $\mathbb{F}$-vector space with respect to pointwise scalar multiplication and addition, often denoted $C_{0}(\Omega)$.
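For the running example $f(x) = 1/(x^2+1)$ on the real line, the compact set in the definition can be written down explicitly: given $\epsilon \in (0,1)$, the set $K = [-R, R]$ with $R = \sqrt{1/\epsilon - 1}$ works, since $|x| > R$ implies $x^2 + 1 > 1/\epsilon$ and hence $|f(x)| < \epsilon$. A sketch of this computation (the helper name `compact_set_radius` is ours, for illustration only):

```python
import math

# For f(x) = 1/(x^2 + 1), compute the radius R of a compact set
# K = [-R, R] outside of which |f(x)| < eps, as in the definition of
# "vanishing at infinity" on a locally compact space.

def f(x):
    return 1.0 / (x * x + 1.0)

def compact_set_radius(eps):
    """Radius R with |f(x)| < eps whenever |x| > R (requires 0 < eps < 1)."""
    return math.sqrt(1.0 / eps - 1.0)

eps = 1e-3
R = compact_set_radius(eps)
# Sample points just outside K = [-R, R]: every value falls below eps.
assert all(f(x) < eps for x in (R + 0.01, -(R + 0.01), 10.0 * R))
```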

Both of these notions correspond to the intuitive idea of adding a point "at infinity" and requiring the values of the function to get arbitrarily close to zero as we approach it. In many cases this can be made precise via the one-point (Alexandroff) compactification, which adjoins a single point at infinity to the space.

## Rapidly decreasing

Refining the concept, one can look more closely at the rate at which functions vanish at infinity. One of the basic intuitions of mathematical analysis is that the Fourier transform interchanges smoothness conditions with conditions on the rate of vanishing at infinity. The rapidly decreasing test functions of tempered distribution theory are smooth functions that are

$f(x)=o\left(|x|^{-N}\right)$

for all $N$, as $|x|\to\infty$, and such that all their partial derivatives satisfy that condition, too. This condition is set up to be self-dual under the Fourier transform, so that the corresponding distribution theory of tempered distributions has the same good property.
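A standard example of such a rapidly decreasing function is the Gaussian $g(x) = e^{-x^2}$: for every $N$, the product $|x|^N e^{-x^2}$ tends to $0$, and each derivative of $g$ is a polynomial times the Gaussian, so it decays just as fast. A numerical sketch for one choice of $N$:

```python
import math

# Numerical sketch: the Gaussian g(x) = exp(-x^2) is rapidly decreasing.
# For every N, |x|^N * g(x) -> 0 as |x| -> infinity; here we spot-check
# N = 10 at a few increasingly large arguments.

def gaussian(x):
    return math.exp(-x * x)

N = 10
samples = [abs(x) ** N * gaussian(x) for x in (5.0, 10.0, 20.0)]
# The products shrink despite the growing polynomial factor...
assert samples[0] > samples[1] > samples[2]
# ...and are already astronomically small at x = 20.
assert samples[2] < 1e-100
```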