In signal processing of multidimensional signals, for example in computer vision, the intrinsic dimension of the signal describes how many variables are needed to represent the signal. For a signal of N variables, its intrinsic dimension M satisfies 0 ≤ M ≤ N.
Usually the intrinsic dimension of a signal relates to variables defined in a Cartesian coordinate system. In general, however, it is also possible to describe the concept for non-Cartesian coordinates, for example, using polar coordinates.
Let f(x1, x2) be a two-variable function (or signal) which is of the form
- f(x1,x2) = g(x1)
for some one-variable function g which is not constant. This means that f varies, in accordance with g, along the first coordinate, while it is constant along the second. Only the value of one variable, namely the first, is needed to determine the value of f. Hence f is a two-variable function, but its intrinsic dimension is one.
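This can be illustrated numerically; the sketch below (plain NumPy, with an arbitrary illustrative choice of g) evaluates such an f on a grid and checks that it is constant along the second coordinate:

```python
import numpy as np

# g is an arbitrary non-constant one-variable function (illustrative choice).
def g(t):
    return np.cos(t)

# f is a two-variable function, but it only ever uses its first argument.
def f(x1, x2):
    return g(x1)

x1, x2 = np.meshgrid(np.linspace(0, 2 * np.pi, 64),
                     np.linspace(0, 2 * np.pi, 64), indexing="ij")
values = f(x1, x2)

# With indexing="ij", x1 varies along axis 0, so f is constant along axis 1:
assert np.allclose(values, values[:, :1])
```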
A slightly more complicated example is
- f(x1,x2) = g(x1 + x2)
f is still intrinsically one-dimensional, which can be seen by making the variable transformation
- x1 + x2 = y1
- x1 - x2 = y2
- f((y1 + y2) / 2,(y1 - y2) / 2) = g(y1)
Since the variation in f can be described by the single variable y1 its intrinsic dimension is one.
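The variable transformation above can be verified numerically; a small NumPy sketch, with an arbitrary non-constant g:

```python
import numpy as np

def g(t):
    return np.sin(t)          # any non-constant one-variable function

def f(x1, x2):
    return g(x1 + x2)

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(1000), rng.standard_normal(1000)

# Variable transformation: y1 = x1 + x2, y2 = x1 - x2.
y1, y2 = x1 + x2, x1 - x2

# Inverting it, x1 = (y1 + y2)/2 and x2 = (y1 - y2)/2,
# and f depends on the single variable y1 only:
assert np.allclose(f((y1 + y2) / 2, (y1 - y2) / 2), g(y1))
```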
For the case that f is constant, its intrinsic dimension is zero, since no variable is needed to describe its variation. For the general case, when the intrinsic dimension of the two-variable function f is neither zero nor one, it is two.
In the literature, functions which are of intrinsic dimension zero, one, or two are sometimes referred to as i0D, i1D or i2D, respectively.
For an N-variable function f, the set of variables can be represented as an N-dimensional vector x:
- f=f(x) where x=(x1, x2, ..., xN)
If, for some M-variable function g and some M × N matrix A, it is the case that
- for all x; f(x)=g(Ax),
- M is the smallest number for which the above relation between f and g can be found,
then the intrinsic dimension of f is M.
The intrinsic dimension is a characterization of f; it is not an unambiguous characterization of g or of A. That is, if the above relation is satisfied for some f, g, and A, it must also be satisfied for the same f with g′ and A′ given by
- g′(y) = g(By)
- A′ = B⁻¹A
where B is any non-singular M × M matrix, since
- f(x) = g(Ax) = g(B B⁻¹ A x) = g′(A′x)
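This ambiguity can be checked numerically; the sketch below picks arbitrary A, B, and g (all illustrative) and confirms that the derived pair represents the same f:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))          # M = 2, N = 3 (illustrative sizes)
B = rng.standard_normal((2, 2))          # a random, almost surely non-singular, 2x2 matrix

def g(y):                                 # hypothetical 2-variable function
    return np.sin(y[0]) * np.cos(y[1])

def g_prime(y):                           # g'(y) = g(By)
    return g(B @ y)

A_prime = np.linalg.inv(B) @ A            # A' = B^-1 A

x = rng.standard_normal(3)
# g'(A'x) = g(B B^-1 A x) = g(Ax), so both pairs represent the same f:
assert np.isclose(g(A @ x), g_prime(A_prime @ x))
```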
The Fourier transform of functions of low intrinsic dimension
An N-variable function which has intrinsic dimension M < N has a characteristic Fourier transform. Intuitively, since this type of function is constant along one or several dimensions, its Fourier transform must appear like an impulse (the Fourier transform of a constant) along the same dimensions in the frequency domain.
A simple example
Let f be a two-variable function which is i1D. This means that there exists a normalized vector n in R2 and a one-variable function g such that
- f(x) = g(nᵀx)
for all x in R2. If F is the Fourier transform of f (both are two-variable functions) it must be the case that
- F(u) = G(nᵀu) · δ(mᵀu)
Here G is the Fourier transform of g (both are one-variable functions), δ is the Dirac impulse function and m is a normalized vector in R2 perpendicular to n. This means that F vanishes everywhere except on a line which passes through the origin of the frequency domain and is parallel to n (the condition mᵀu = 0 is satisfied exactly by the multiples of n). Along this line F varies according to G.
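The concentration of the spectrum on a line can be observed with a discrete Fourier transform; a NumPy sketch, using the i1D image f(x1, x2) = g(x1 + x2) with a periodic g chosen (illustratively) so that the impulses fall on exact DFT bins:

```python
import numpy as np

N = 64
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# i1D image: f varies only along n = (1, 1)/sqrt(2), with a periodic g
# (3 cycles across the grid so the DFT impulses land on exact bins).
f = np.cos(2 * np.pi * 3 * (i + j) / N)

F = np.fft.fft2(f)
mag = np.abs(F)

# The spectrum vanishes except on the line u1 = u2 through the origin,
# which is parallel to n (and perpendicular to m):
on_line = np.eye(N, dtype=bool)
assert np.allclose(mag[~on_line], 0, atol=1e-6)

# Along that line the spectrum follows G: here two impulses at +/-3 cycles.
assert np.argmax(np.diag(mag)) in (3, N - 3)
```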
The general case
Let f be an N-variable function which has intrinsic dimension M, that is, there exists an M-variable function g and M × N matrix A such that
- f(x)=g(Ax) for all x.
Its Fourier transform F can then be described as follows:
- F vanishes everywhere except on a subspace of dimension M
- The subspace is spanned by the rows of the matrix A
- Within this subspace, F varies according to G, the Fourier transform of g
The type of intrinsic dimension described above assumes that a linear transformation is applied to the coordinates of the N-variable function f to produce the M variables which are necessary to represent every value of f. This means that f is constant along lines, planes, or hyperplanes, depending on N and M.
In the general case, f has intrinsic dimension M if there exist M functions a1, a2, ..., aM and an M-variable function g such that
- f(x) = g(a1(x),a2(x),...,aM(x)) for all x
- M is the smallest number of functions which allows the above transformation
A simple example is transforming a 2-variable function f to polar coordinates:
- f(x1,x2) = g(√(x1² + x2²)), f is i1D and is constant along any circle centered at the origin
- f(x1,x2) = g(arctan(x2 / x1)), f is i1D and is constant along all rays from the origin
For the general case, a simple description of either the point sets for which f is constant or its Fourier transform is usually not possible.
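The first polar-coordinate example can be checked numerically; a short NumPy sketch with an arbitrary illustrative g:

```python
import numpy as np

def g(t):
    return np.exp(-t)                     # arbitrary one-variable function

def f(x1, x2):
    return g(np.hypot(x1, x2))            # g applied to the radius

# Sample many points on one circle of radius r: f is constant there,
# so a single variable (the radius) determines its value.
theta = np.linspace(0, 2 * np.pi, 100)
r = 2.5
values = f(r * np.cos(theta), r * np.sin(theta))
assert np.allclose(values, g(r))
```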
Applications and history
The case of a two-variable signal which is i1D appears frequently in computer vision and image processing and captures the idea of local image regions which contain lines or edges. The analysis of such regions has a long history, but it was not until a more formal and theoretical treatment of such operations began that the concept of intrinsic dimension was established, even though the name has varied.
For example, the concept which here is referred to as an image neighborhood of intrinsic dimension 1 or i1D neighborhood is called 1-dimensional by Knutsson (1982), linear symmetric by Bigün & Granlund (1987) and simple neighborhood in Granlund & Knutsson (1995).
The term intrinsic dimension was coined by Bennett (1965).
- Hans Knutsson (1982). Filtering and reconstruction in image processing. Linköping Studies in Science and Technology, Dissertation No 88, Linköping University, Sweden.
- Josef Bigün; Gösta H. Granlund (1987). "Optimal orientation detection of linear symmetry". Proceedings of the International Conference on Computer Vision. pp. 433–438.
- Gösta H. Granlund; Hans Knutsson (1995). Signal Processing in Computer Vision. Kluwer Academic Publishers. ISBN 978-1-4757-2377-9.
- Bennett, R. (June 1965). "Representation and analysis of signals—Part XXI: The intrinsic dimensionality of signal collections". Rep. 163. Baltimore, MD: The Johns Hopkins University.