Homogeneous distribution

Not to be confused with the uniform distribution.

In mathematics, a homogeneous distribution is a distribution S on Euclidean space Rn or Rn \ {0} that is homogeneous in the sense that, roughly speaking,

$S(tx) = t^m S(x)\,$

for all t > 0.

More precisely, let $\mu_t : x\mapsto tx$ be the scalar multiplication operator on Rn. A distribution S on Rn or Rn \ {0} is homogeneous of degree m provided that

$S[t^{-n}\varphi\circ\mu_{1/t}] = t^mS[\varphi]$

for all positive real t and all test functions φ. The additional factor of $t^{-n}$ is needed to reproduce the usual notion of homogeneity for locally integrable functions, and arises from the Jacobian of the change of variables. The number m can be real or complex.
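For a locally integrable homogeneous function, the defining identity can be verified by direct integration. A small numerical sketch (assuming NumPy and SciPy; the choice f(x) = x², so m = 2, and the Gaussian test function are illustrative):

```python
# Direct check of the homogeneity identity S[t^{-n} φ∘μ_{1/t}] = t^m S[φ]
# for n = 1, S = f(x) = x^2 (locally integrable, homogeneous of degree
# m = 2) and a Gaussian test function φ; the names are illustrative.
import numpy as np
from scipy.integrate import quad

f = lambda x: x**2                  # homogeneous of degree m = 2
phi = lambda x: np.exp(-x**2)       # test function

def pair(g):
    """<f, g> = integral of f(x) g(x) over the real line."""
    val, _ = quad(lambda x: f(x) * g(x), -np.inf, np.inf)
    return val

t, m, n = 3.0, 2, 1
lhs = pair(lambda x: t**(-n) * phi(x / t))   # S[t^{-n} φ(·/t)]
rhs = t**m * pair(phi)                       # t^m S[φ]
print(lhs, rhs)                    # both equal 9·√π/2 ≈ 7.976
```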

It can be a non-trivial problem to extend a given homogeneous distribution from Rn \ {0} to a distribution on Rn, although this is necessary for many of the techniques of Fourier analysis, in particular the Fourier transform, to be brought to bear. In most cases, however, such an extension exists, although it may not be unique.

Properties

If S is a homogeneous distribution on Rn \ {0} of degree α, then the weak first partial derivative of S

$\frac{\partial S}{\partial x_i}$

has degree α−1. Furthermore, a version of Euler's homogeneous function theorem holds: a distribution S is homogeneous of degree α if and only if

$\sum_{i=1}^n x_i\frac{\partial S}{\partial x_i} = \alpha S.$
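In weak form, the Euler relation in one dimension says that −S[(xφ)′] = αS[φ] for every test function φ. A numerical sketch with S = x_+^{1/2}, so α = 1/2 (the test function and the SciPy quadrature are illustrative choices):

```python
# Weak-form check of Euler's relation x S' = α S for S = x_+^{1/2}
# (α = 1/2): distributionally <x S', φ> = -<S, (xφ)'>, so the claim is
#   -∫_0^∞ x^α (φ(x) + x φ'(x)) dx = α ∫_0^∞ x^α φ(x) dx.
import numpy as np
from scipy.integrate import quad

alpha = 0.5
phi = lambda x: np.exp(-x**2)
dphi = lambda x: -2.0 * x * np.exp(-x**2)

lhs, _ = quad(lambda x: -x**alpha * (phi(x) + x * dphi(x)), 0, np.inf)
rhs, _ = quad(lambda x: alpha * x**alpha * phi(x), 0, np.inf)
print(lhs, rhs)                    # the two sides agree
```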

One dimension

A complete classification of homogeneous distributions in one dimension is possible. The homogeneous distributions on R \ {0} are given by various power functions. In addition to the power functions, homogeneous distributions on R include the Dirac delta function and its derivatives.

The Dirac delta function is homogeneous of degree −1. Intuitively,

$\int_{\mathbb{R}} \delta(tx)\varphi(x)\,dx = \int_{\mathbb{R}} \delta(y)\varphi(y/t)\,\frac{dy}{t} = t^{-1}\varphi(0)$

by making a change of variables y = tx in the "integral". Moreover, the kth weak derivative of the delta function δ(k) is homogeneous of degree −k−1. These distributions all have support consisting only of the origin: when localized over R \ {0}, these distributions are all identically zero.
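The degree of δ^{(k)} can be confirmed symbolically from the pairing δ^{(k)}[ψ] = (−1)^k ψ^{(k)}(0): substituting ψ(x) = t^{−1}φ(x/t) produces the factor t^{−k−1}. A sketch assuming SymPy, with an arbitrary concrete test function:

```python
# Symbolic check that δ^{(k)} is homogeneous of degree -k-1.  Using the
# pairing δ^{(k)}[ψ] = (-1)^k ψ^{(k)}(0) with ψ(x) = t^{-1} φ(x/t),
# the result should be t^{-k-1} δ^{(k)}[φ].  The test function below is
# an arbitrary illustrative choice.
import sympy as sp

x, t = sp.symbols('x t', positive=True)
phi = sp.exp(-x**2) * (1 + x)**3

for k in range(4):
    pair = lambda f, k=k: (-1)**k * sp.diff(f, x, k).subs(x, 0)
    lhs = pair(phi.subs(x, x / t) / t)   # δ^{(k)}[t^{-1} φ(·/t)]
    rhs = t**(-k - 1) * pair(phi)        # t^{-k-1} δ^{(k)}[φ]
    assert sp.simplify(lhs - rhs) == 0
print("delta^(k) has degree -k-1 for k = 0, 1, 2, 3")
```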

$x_+^\alpha$

In one dimension, the function

$x_+^\alpha = \begin{cases}x^\alpha&\text{if }x>0\\ 0&\text{otherwise}\end{cases}$

is locally integrable on R \ {0}, and thus defines a distribution. The distribution is homogeneous of degree α. Similarly $x_-^\alpha = (-x)_+^\alpha$ and $|x|^\alpha = x_+^\alpha + x_-^\alpha$ are homogeneous distributions of degree α.

However, each of these functions is locally integrable on all of R only when Re(α) > −1. Although the function $x_+^\alpha$ naively defined by the above formula fails to be locally integrable for Re α ≤ −1, the mapping

$\alpha\mapsto x_+^\alpha$

is a holomorphic function from the half-plane Re α > −1 to the topological vector space of tempered distributions. It admits a unique meromorphic extension, with simple poles at the negative integers α = −1, −2, .... The extension is homogeneous of degree α, provided α is not a negative integer: on the one hand, the relation

$x_+^\alpha[\varphi\circ\mu_t] = t^{-\alpha-1}x_+^\alpha[\varphi]$

holds and is holomorphic in α for Re α > 0. On the other hand, both sides extend meromorphically in α, and so remain equal throughout the domain of definition.
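The meromorphic extension can be realized concretely by subtracting a Taylor polynomial of the test function on (0, 1), which isolates the poles in explicit terms (the regularization used by Gel'fand and Shilov; one of several equivalent conventions). A numerical sketch assuming SciPy, with φ(x) = e^{−x}, for which x_+^α[φ] = Γ(α + 1) when Re α > −1, so the extension must reproduce the continuation of Γ(α + 1):

```python
# Meromorphic extension of x_+^α by Taylor subtraction on (0, 1):
# for -2 < Re α with α ≠ -1,
#   x_+^α[φ] = ∫_0^1 x^α (φ(x) - φ(0)) dx + ∫_1^∞ x^α φ(x) dx + φ(0)/(α+1),
# and the pole at α = -1 is explicit in the last term.  With φ(x) = e^{-x}
# the result must agree with the continuation of Γ(α+1).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

phi = lambda x: np.exp(-x)

def x_plus(alpha):
    a, _ = quad(lambda x: x**alpha * (phi(x) - phi(0.0)), 0, 1)
    b, _ = quad(lambda x: x**alpha * phi(x), 1, np.inf)
    return a + b + phi(0.0) / (alpha + 1)

alpha = -1.5
print(x_plus(alpha), gamma(alpha + 1))  # both ≈ Γ(-1/2) = -2√π ≈ -3.545
```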

Throughout the domain of definition, $x_+^\alpha$ also satisfies the following properties:

• $\frac{d}{dx} x_+^\alpha = \alpha x_+^{\alpha-1}$
• $x x_+^\alpha = x_+^{\alpha+1}$

Other extensions

There are several distinct ways to extend the definition of power functions to homogeneous distributions on R at the negative integers.

$\chi_+^\alpha$

The poles in $x_+^\alpha$ at the negative integers can be removed by renormalizing. Put

$\chi_+^\alpha = \frac{x_+^\alpha}{\Gamma(1+\alpha)}.$

This is an entire function of α. At the negative integers,

$\chi_+^{-k} = \delta^{(k-1)}.$

The distributions $\chi_+^\alpha$ have the properties

• $\frac{d}{dx} \chi_+^\alpha = \chi_+^{\alpha-1}$
• $x \chi_+^\alpha = \alpha\chi_+^{\alpha+1}.$
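The identity χ_+^{−k} = δ^{(k−1)} can be sampled numerically by letting α approach −k through non-integer values; any realization of the meromorphic extension of x_+^α gives the same limit. A sketch for k = 2, assuming SciPy and the Taylor-subtraction regularization (the test function is an illustrative choice), where the limit should be δ′[φ] = −φ′(0):

```python
# Numerical sketch: χ_+^α[φ] = x_+^α[φ] / Γ(α+1) approaches
# δ'[φ] = -φ'(0) as α → -2.  Here x_+^α is realized by Taylor
# subtraction on (0, 1), valid for -3 < Re α, α ∉ {-1, -2}.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

phi = lambda x: np.exp(-(x - 1.0)**2)
phi0 = phi(0.0)                    # φ(0)  = 1/e
dphi0 = 2.0 * np.exp(-1.0)         # φ'(0) = 2/e

def x_plus(alpha):
    a, _ = quad(lambda x: x**alpha * (phi(x) - phi0 - x * dphi0), 0, 1)
    b, _ = quad(lambda x: x**alpha * phi(x), 1, np.inf)
    return a + b + phi0 / (alpha + 1) + dphi0 / (alpha + 2)

alpha = -2.0 + 1e-6                # approach the negative integer -2
chi = x_plus(alpha) / gamma(alpha + 1)
print(chi, -dphi0)                 # both ≈ -2/e ≈ -0.7358
```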
$\underline{x}^{-k}$

A second approach is to define the distribution $\underline{x}^{-k}$, for k = 1, 2, ..., by

$\underline{x}^{-k} = \frac{(-1)^{k-1}}{(k-1)!}\frac{d^k}{dx^k}\log |x|.$

These clearly retain the original properties of power functions:

• $\frac{d}{dx} \underline{x}^{-k} = -k \underline{x}^{-k-1}$
• $x \underline{x}^{-k} = \underline{x}^{-k+1},\quad\text{if }k>1.$

These distributions are also characterized by their action on test functions:

$\underline{x}^{-k}[\varphi] = \int_{-\infty}^\infty \frac{\varphi(x) - \sum_{j=0}^{k-1}x^j\varphi^{(j)}(0)/j!}{x^k}\,dx,$

where the integral is understood as a symmetric limit $\lim_{R\to\infty}\int_{-R}^R$ (a principal value at infinity), so that the non-decaying term of the subtracted Taylor polynomial cancels,

and so generalize the Cauchy principal value distribution of 1/x that arises in the Hilbert transform.
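That the two descriptions of $\underline{x}^{-k}$ agree can be checked numerically in the simplest case k = 1, where $\underline{x}^{-1} = \frac{d}{dx}\log|x|$ is the principal value of 1/x. A sketch assuming SciPy (the non-even test function is an illustrative choice, so the principal value is nonzero):

```python
# Two realizations of x̲^{-1} = (d/dx) log|x| agree on a test function φ:
#   (a) -<log|x|, φ'>                   (derivative pairing)
#   (b) ∫ (φ(x) - φ(0)) / x dx          (Taylor-subtracted form, k = 1)
# Integral (b) is taken over a symmetric interval (-R, R): the
# subtracted constant contributes an odd tail that cancels by symmetry.
import numpy as np
from scipy.integrate import quad

phi = lambda x: np.exp(-(x - 1.0)**2)          # non-even, so p.v. ≠ 0
dphi = lambda x: -2.0 * (x - 1.0) * np.exp(-(x - 1.0)**2)
phi0 = phi(0.0)
R = 50.0                                       # symmetric cutoff

# (a) derivative pairing, split at the log singularity
a_neg, _ = quad(lambda x: -np.log(abs(x)) * dphi(x), -np.inf, 0)
a_pos, _ = quad(lambda x: -np.log(abs(x)) * dphi(x), 0, np.inf)
pairing_a = a_neg + a_pos

# (b) subtracted form, split at the removable singularity x = 0
b_neg, _ = quad(lambda x: (phi(x) - phi0) / x, -R, 0)
b_pos, _ = quad(lambda x: (phi(x) - phi0) / x, 0, R)
pairing_b = b_neg + b_pos
print(pairing_a, pairing_b)        # the two pairings agree
```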

$(x \pm i0)^\alpha$

Another homogeneous distribution is given by the distributional limit

$(x + i0)^\alpha = \lim_{\epsilon\downarrow 0} (x+i\epsilon)^\alpha.$

That is, acting on test functions

$(x + i0)^\alpha[\varphi] = \lim_{\epsilon\downarrow 0} \int_{\mathbb{R}} (x+i\epsilon)^\alpha\varphi(x)\,dx.$

The branch of the logarithm is chosen to be single-valued in the upper half-plane and to agree with the natural log along the positive real axis. As the limit of entire functions, (x + i0)α[φ] is an entire function of α. Similarly,

$(x-i0)^\alpha = \lim_{\epsilon\downarrow 0} (x-i\epsilon)^\alpha$

is also a well-defined distribution for all α.

When Re α > 0,

$(x\pm i0)^\alpha = x_+^\alpha + e^{\pm i\pi \alpha}x_-^\alpha,$

which then holds by analytic continuation whenever α is not a negative integer. By the permanence of functional relations,

$\frac{d}{dx} (x\pm i0)^\alpha = \alpha(x\pm i0)^{\alpha-1}.$

At a negative integer α = −k, the simple poles of the two terms on the right cancel, yielding a well-defined distribution on R:

$(x\pm i0)^{-k}=x_+^{-k} + (-1)^kx_-^{-k}\pm(-1)^k\pi i \frac{\delta^{(k-1)}}{(k-1)!},$

where $x_\pm^{-k}$ denote the finite parts of the meromorphic extensions at α = −k. The average of the two distributions agrees with $\underline{x}^{-k}$:

$\frac{(x + i0)^{-k}+ (x- i0)^{-k}}{2} = \underline{x}^{-k}.$

The difference of the two distributions is a multiple of the delta function:

$(x + i0)^{-k} - (x- i0)^{-k} = 2\pi i\,(-1)^k \frac{\delta^{(k-1)}}{(k-1)!},$

which is known as the Plemelj jump relation.
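The case k = 1 of the jump relation is the classical Sokhotski–Plemelj formula, $(x + i0)^{-1} - (x - i0)^{-1} = -2\pi i\delta$, which can be sampled numerically at small ε. A sketch assuming SciPy (the Gaussian test function and the value of ε are illustrative):

```python
# Numerical sketch of the k = 1 jump (Sokhotski-Plemelj):
#   ∫ φ(x)/(x + iε) dx - ∫ φ(x)/(x - iε) dx  →  -2πi φ(0)  as ε ↓ 0.
# Since 1/(x ± iε) = (x ∓ iε)/(x² + ε²), real and imaginary parts are
# integrated separately; the interval is split at x = 0, where the
# Lorentzian ε/(x² + ε²) is sharply peaked.
import numpy as np
from scipy.integrate import quad

phi = lambda x: np.exp(-x**2)
eps = 1e-3

def pair(sign):
    f_re = lambda x: phi(x) * x / (x**2 + eps**2)
    f_im = lambda x: -sign * eps * phi(x) / (x**2 + eps**2)
    re = quad(f_re, -np.inf, 0)[0] + quad(f_re, 0, np.inf)[0]
    im = quad(f_im, -np.inf, 0)[0] + quad(f_im, 0, np.inf)[0]
    return re + 1j * im

jump = pair(+1) - pair(-1)
print(jump, -2j * np.pi * phi(0.0))    # jump ≈ -2πi
```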

Classification

The following classification theorem holds (Gel'fand & Shilov 1966, §3.11). Let S be a distribution homogeneous of degree α on R \ {0}. Then $S = a x_+^\alpha + b x_-^\alpha$ for some constants a, b. Any distribution S on R homogeneous of degree α ≠ −1, −2, ... is of this form as well. As a result, every homogeneous distribution of degree α ≠ −1, −2, ... on R \ {0} extends to R.

Finally, the homogeneous distributions on R of degree −k, with k a positive integer, are all of the form:

$a\underline{x}^{-k} + b\delta^{(k-1)}.$

Higher dimensions

Homogeneous distributions on Rn \ {0}, the Euclidean space with the origin deleted, are always of the form

$S(x) = f(x/|x|)|x|^\lambda\,$

(1)

where f is a distribution on the unit sphere Sn−1. The number λ, which is the degree of the homogeneous distribution S, may be real or complex.

Any homogeneous distribution of the form (1) on Rn \ {0} extends uniquely to a homogeneous distribution on Rn provided Re λ > −n. In fact, an analytic continuation argument similar to the one-dimensional case extends this to all λ ≠ −n, −n−1, ....
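The scaling behaviour of the form (1) can be checked numerically. A sketch in R² with f ≡ 1 and λ = −1 (assuming SciPy; a radial Gaussian test function reduces the pairing to a one-dimensional radial integral with measure 2πr dr):

```python
# Scaling check for S(x) = |x|^λ on R², i.e. the form (1) with f ≡ 1 on
# the unit circle.  For a radial test function φ(|x|) the pairing
# reduces to a 1-D integral with measure 2π r dr, and
#   <S, t^{-2} φ(·/t)> = t^λ <S, φ>.
import numpy as np
from scipy.integrate import quad

lam, n, t = -1.0, 2, 2.5
phi = lambda r: np.exp(-r**2)      # radial Gaussian test function

def pair(g):
    # <|x|^λ, g> = 2π ∫_0^∞ r^λ g(r) r dr for radial g
    val, _ = quad(lambda r: 2 * np.pi * r**lam * g(r) * r, 0, np.inf)
    return val

lhs = pair(lambda r: t**(-n) * phi(r / t))
rhs = t**lam * pair(phi)
print(lhs, rhs)                    # both equal π^{3/2}/t for λ = -1
```

Since Re λ = −1 > −n = −2 here, the distribution also extends uniquely across the origin, consistent with the statement above.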

References

• Gel'fand, I.M.; Shilov, G.E. (1966), Generalized functions 1, Academic Press.
• Hörmander, L. (1976), Linear Partial Differential Operators, Volume 1, Springer-Verlag, ISBN 978-3-540-00662-6.
• Taylor, Michael (1996), Partial differential equations, vol. 1, Springer-Verlag.