# Frame (linear algebra)


In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal.[1] Frames are used in error detection and correction, in the design and analysis of filter banks, and more generally in applied mathematics, computer science, and engineering.[2]

## Definition and motivation

### Motivating example: computing a basis from a linearly dependent set

Suppose we have a set of vectors $\{\mathbf{e}_k\}$ in the vector space V and we want to express an arbitrary element $\mathbf{v} \in V$ as a linear combination of the vectors $\{\mathbf{e}_{k}\}$, that is, we want to find coefficients $c_k$ such that

$\mathbf{v} = \sum_k c_k \mathbf{e}_k$

If the set $\{ \mathbf{e}_{k} \}$ does not span $V$, then such coefficients do not exist for every such $\mathbf{v}$. If $\{ \mathbf{e}_{k} \}$ spans $V$ and also is linearly independent, this set forms a basis of $V$, and the coefficients $c_{k}$ are uniquely determined by $\mathbf{v}$. If, however, $\{\mathbf{e}_{k}\}$ spans $V$ but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if $V$ is of infinite dimension.

Given that $\{\mathbf{e}_k\}$ spans $V$ and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan:

1. Removing vectors arbitrarily may cause the set to stop spanning $V$ before it becomes linearly independent.
2. Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become infeasible in practice if the set is large or infinite.
3. In some applications, it may be an advantage to use more vectors than necessary to represent $\mathbf{v}$. This means that we want to find the coefficients $c_k$ without removing elements in $\{\mathbf{e}_k\}$. The coefficients $c_k$ will no longer be uniquely determined by $\mathbf{v}$. Therefore, the vector $\mathbf{v}$ can be represented as a linear combination of $\{\mathbf{e}_{k}\}$ in more than one way.
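The non-uniqueness in point 3 can be seen in a tiny numerical sketch. The three unit vectors below (an illustrative example, not taken from the source) span $\mathbb{R}^2$ but sum to zero, so adding a constant to every coefficient leaves the represented vector unchanged:

```python
import math

# Three unit vectors in R^2; they span R^2 but are linearly
# dependent, since e1 + e2 + e3 = 0.
e = [(1.0, 0.0),
     (-0.5, math.sqrt(3) / 2),
     (-0.5, -math.sqrt(3) / 2)]

def combine(coeffs):
    """Return sum_k c_k e_k as a 2-vector."""
    return tuple(sum(c * vec[i] for c, vec in zip(coeffs, e)) for i in range(2))

v1 = combine([1.0, 0.0, 0.0])  # one representation of (1, 0)
v2 = combine([2.0, 1.0, 1.0])  # every coefficient shifted by 1: same vector
print(v1, v2)
```

Because $\mathbf{e}_1 + \mathbf{e}_2 + \mathbf{e}_3 = \mathbf{0}$, the coefficient lists `[1, 0, 0]` and `[2, 1, 1]` represent the same vector, illustrating why the coefficients $c_k$ are no longer uniquely determined.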

### Formal definition

Let $V$ be an inner product space and $\{\mathbf{e}_k\}_{k \in \mathbb{N}}$ be a set of vectors in $V$. These vectors satisfy the frame condition if there are positive real numbers A and B such that $A \leq B$ and, for each $\mathbf{v}$ in $V$,

$A \left\| \mathbf{v} \right\| ^2 \leq \sum_{k \in \mathbb{N}} \left| \langle \mathbf{v}, \mathbf{e}_k \rangle \right| ^2 \leq B \left\| \mathbf{v} \right\| ^2 .$

A set of vectors that satisfies the frame condition is a frame for the vector space.[3]

The numbers A and B are called the lower and upper frame bounds, respectively.[3] The frame bounds are not unique because numbers less than A and greater than B are also valid frame bounds. The optimal lower bound is the supremum of all lower bounds and the optimal upper bound is the infimum of all upper bounds.

A frame is overcomplete (or redundant) if it is not a basis for the vector space.
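For a finite set of vectors in finite dimensions, the optimal frame bounds can be computed directly: they are the smallest and largest eigenvalues of the matrix $\mathbf{S} = \sum_k \mathbf{e}_k \mathbf{e}_k^{T}$ (introduced below as the frame operator). A minimal sketch for an illustrative frame in $\mathbb{R}^2$:

```python
import math

# An overcomplete set in R^2; its optimal frame bounds are the extreme
# eigenvalues of the 2x2 matrix S = sum_k e_k e_k^T.
vectors = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

# Assemble the symmetric matrix S entrywise.
s11 = sum(x * x for x, y in vectors)
s12 = sum(x * y for x, y in vectors)
s22 = sum(y * y for x, y in vectors)

# Eigenvalues of the symmetric 2x2 matrix [[s11, s12], [s12, s22]].
tr, det = s11 + s22, s11 * s22 - s12 * s12
disc = math.sqrt(tr * tr - 4 * det)
A, B = (tr - disc) / 2, (tr + disc) / 2  # optimal lower/upper frame bounds

print(A, B)  # both positive, so the set is a frame
```

Here $\mathbf{S} = \begin{pmatrix}2 & 1\\ 1 & 2\end{pmatrix}$, giving optimal bounds $A = 1$ and $B = 3$; since $A > 0$, the set is a frame.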

## History

Because of the various mathematical components surrounding frames, frame theory has roots in harmonic and functional analysis, operator theory, linear algebra, and matrix theory.[4]

The Fourier transform has been used for over a century as a way of decomposing and expanding signals. However, the Fourier transform masks key information regarding the moment of emission and the duration of a signal. In 1946, Dennis Gabor addressed this with a technique that simultaneously reduced noise, provided resiliency, and allowed quantization while encapsulating important signal characteristics.[1] This discovery marked the first concerted effort towards frame theory.

The frame condition was first described by Richard Duffin and Albert Charles Schaeffer in a 1952 article on nonharmonic Fourier series as a way of computing the coefficients in a linear combination of the vectors of a linearly dependent spanning set (in their terminology, a "Hilbert space frame").[5] In the 1980s, Stéphane Mallat, Ingrid Daubechies, and Yves Meyer used frames to analyze wavelets. Today frames are associated with wavelets, signal and image processing, and data compression.

## Relation to bases

A frame satisfies a generalization of Parseval's identity, namely the frame condition, while still maintaining norm equivalence between a signal and its sequence of coefficients.

If the set $\{\mathbf{e}_k\}$ is a frame of V, it spans V. Otherwise there would exist at least one non-zero $\mathbf{v} \in V$ which would be orthogonal to all $\mathbf{e}_k$. If we insert $\mathbf{v}$ into the frame condition, we obtain

$A \left\| \mathbf{v} \right\| ^2 \leq 0 \leq B \left\| \mathbf{v} \right\| ^{2} ;$

therefore $A \left\| \mathbf{v} \right\|^2 \leq 0$; since $\mathbf{v} \neq \mathbf{0}$ implies $\left\| \mathbf{v} \right\| > 0$, this forces $A \leq 0$, contradicting the assumption that the lower frame bound $A$ is positive.

Spanning $V$ is not, however, a sufficient condition for a set of vectors to be a frame. As an example, consider $V = \mathbb{R}^2$ with the dot product, and the infinite set $\{\mathbf{e}_k\}$ given by

$\left\{ (1,0) , \, (0,1), \, \left(0,\frac{1}{\sqrt{2}}\right) , \, \left(0,\frac{1}{\sqrt{3}}\right), \dotsc \right\}.$

This set spans V but since $\sum_k \left| \langle \mathbf{e}_k , (0,1)\rangle \right| ^2 = 0 + 1 + \frac{1}{2} + \frac{1}{3} +\dotsb = \infty$, we cannot choose a finite upper frame bound B. Consequently, the set $\{\mathbf{e}_k\}$ is not a frame.
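The divergence is easy to check numerically: the squared inner products with $(0,1)$ are $0, 1, \tfrac{1}{2}, \tfrac{1}{3}, \dotsc$, so the partial sums are harmonic numbers and grow without bound. A short sketch:

```python
# For the set {(1,0), (0,1), (0,1/sqrt(2)), (0,1/sqrt(3)), ...},
# the squared inner products with (0,1) are 0, 1, 1/2, 1/3, ...,
# so the partial sums form the harmonic series, which diverges.
def partial_sum(n):
    """Sum of |<e_k, (0,1)>|^2 over the first n + 1 vectors of the set."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, partial_sum(n))  # grows roughly like ln(n): no finite bound B
```

Since the partial sums grow like $\ln n$, no finite upper frame bound $B$ can dominate them.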

## Applications

In signal processing, each vector is interpreted as a signal. In this interpretation, a vector expressed as a linear combination of the frame vectors is a redundant signal. Compared with a representation built strictly from a family of linearly independent elementary signals, a frame can yield a simpler, sparser representation of a signal.[6] Frames therefore provide robustness: because the same vector can be produced in several ways, signals can be encoded in various ways, which facilitates fault tolerance and resilience to loss of signal. Finally, redundancy can be used to mitigate noise, which is relevant to the restoration, enhancement, and reconstruction of signals.

In signal processing, it is common to assume the vector space is a Hilbert space.

## Special cases

A frame is a tight frame if A = B; in other words, the frame satisfies a generalized version of Parseval's identity. For example, the union of k orthonormal bases of a vector space is a tight frame with A = B = k. A tight frame is a Parseval frame (sometimes called a normalized frame) if A = B = 1. Each orthonormal basis is a Parseval frame, but the converse is not always true.
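Tightness can be verified by checking that the frame operator is a multiple of the identity. As an illustrative example (not from the source), three unit vectors in $\mathbb{R}^2$ at mutual angles of 120° form a tight frame with $A = B = 3/2$:

```python
import math

# Three unit vectors at 120 degrees in R^2. Their frame operator
# S = sum_k e_k e_k^T equals (3/2) * I, so the frame is tight
# with A = B = 3/2.
e = [(1.0, 0.0),
     (-0.5, math.sqrt(3) / 2),
     (-0.5, -math.sqrt(3) / 2)]

s11 = sum(x * x for x, y in e)
s12 = sum(x * y for x, y in e)
s22 = sum(y * y for x, y in e)
print(s11, s12, s22)  # diagonal entries 1.5, off-diagonal 0
```

Rescaling each vector by $\sqrt{2/3}$ turns this tight frame into a Parseval frame ($A = B = 1$) that is still not a basis, showing the converse mentioned above can fail.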

A frame is an equal norm frame (sometimes called a uniform frame or a normalized frame) if there is a constant c such that $\|e_i\| = c$ for each i. An equal norm frame is a unit norm frame if c = 1. A unit norm Parseval frame is an orthonormal basis; such a frame satisfies Parseval's identity.

A frame is an equiangular frame if there is a constant c such that $| \langle e_i, e_j \rangle | = c$ for each distinct i and j.

A frame is an exact frame if no proper subset of the frame spans the inner product space. Each basis for an inner product space is an exact frame for the space (so a basis is a special case of a frame).

## Generalizations

A Bessel sequence is a set of vectors that satisfies only the upper bound of the frame condition.

## Dual frames

The frame condition entails the existence of a set of dual frame vectors $\{ \mathbf{\tilde{e}}_{k} \}$ with the property that

$\mathbf{v} = \sum_k \langle \mathbf{v} , \mathbf{\tilde{e}}_k \rangle \mathbf{e}_k = \sum_k \langle \mathbf{v} , \mathbf{e}_k \rangle \mathbf{\tilde{e}}_k$

for any $\mathbf{v} \in V$. This implies that a frame together with its dual frame has the same property as a basis and its dual basis in terms of reconstructing a vector from scalar products.

In order to construct a dual frame, we first need the linear mapping $\mathbf{S} : V \rightarrow V$, called the frame operator, defined as

$\mathbf{S} \mathbf{v} = \sum_{k} \langle \mathbf{v} , \mathbf{e}_{k} \rangle \mathbf{e}_{k}$.

From this definition of $\mathbf{S}$ and linearity in the first argument of the inner product,

$\langle \mathbf{S} \mathbf{v} , \mathbf{v} \rangle = \sum_k \left| \langle \mathbf{v} , \mathbf{e}_k \rangle \right| ^2 ,$

which, when substituted in the frame condition inequality, yields

$A \left\| \mathbf{v} \right\| ^2 \leq \langle \mathbf{S} \mathbf{v} , \mathbf{v} \rangle \leq B \left\| \mathbf{v} \right\| ^2 ,$

for each $\mathbf{v} \in V$.

The frame operator $\mathbf{S}$ is self-adjoint, positive definite, and has positive upper and lower bounds. The inverse $\mathbf{S}^{-1}$ of $\mathbf{S}$ exists and it, too, is self-adjoint, positive definite, and has positive upper and lower bounds.

The dual frame is defined by mapping each element of the frame with $\mathbf{S}^{-1}$:

$\tilde{\mathbf{e}}_{k} = \mathbf{S}^{-1} \mathbf{e}_{k}$

To see that this makes sense, let $\mathbf{v}$ be an element of $V$ and let

$\mathbf{u} = \sum_{k} \langle \mathbf{v} , \mathbf{e}_{k} \rangle \tilde{\mathbf{e}}_{k}$.

Thus

$\mathbf{u} = \sum_{k} \langle \mathbf{v} , \mathbf{e}_{k} \rangle ( \mathbf{S}^{-1} \mathbf{e}_{k} ) = \mathbf{S}^{-1} \left ( \sum_{k} \langle \mathbf{v} , \mathbf{e}_{k} \rangle \mathbf{e}_{k} \right ) = \mathbf{S}^{-1} \mathbf{S} \mathbf{v} = \mathbf{v}$,

which proves that

$\mathbf{v} = \sum_{k} \langle \mathbf{v} , \mathbf{e}_{k} \rangle \tilde{\mathbf{e}}_{k}$.

Alternatively, we can let

$\mathbf{u} = \sum_{k} \langle \mathbf{v} , \tilde{\mathbf{e}}_{k} \rangle \mathbf{e}_{k}$.

By inserting the above definition of $\tilde{\mathbf{e}}_{k}$ and applying the properties of $\mathbf{S}$ and its inverse,

$\mathbf{u} = \sum_{k} \langle \mathbf{v} , \mathbf{S}^{-1} \mathbf{e}_{k} \rangle \mathbf{e}_{k} = \sum_{k} \langle \mathbf{S}^{-1} \mathbf{v} , \mathbf{e}_{k} \rangle \mathbf{e}_{k} = \mathbf{S} (\mathbf{S}^{-1} \mathbf{v}) = \mathbf{v}$

which shows that

$\mathbf{v} = \sum_{k} \langle \mathbf{v} , \tilde{\mathbf{e}}_{k} \rangle \mathbf{e}_{k}$.

The numbers $\langle \mathbf{v} , \tilde{\mathbf{e}}_{k} \rangle$ are called frame coefficients. This derivation of a dual frame is a summary of Section 3 in the article by Duffin and Schaeffer.[5] They use the term conjugate frame for what here is called a dual frame.

The dual frame $\{\tilde{\mathbf{e}}_{k}\}$ is called the canonical dual of $\{\mathbf{e}_{k}\}$ because it acts similarly as a dual basis to a basis.
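The construction above can be sketched numerically for a small finite frame (the vectors below are illustrative, not from the source): build the frame operator $\mathbf{S}$, apply $\mathbf{S}^{-1}$ to each frame vector to get the canonical dual, and verify the reconstruction formula $\mathbf{v} = \sum_k \langle \mathbf{v}, \tilde{\mathbf{e}}_k \rangle \mathbf{e}_k$:

```python
# Canonical dual of a finite frame in R^2: invert the 2x2 frame
# operator S and apply S^{-1} to each frame vector, then verify the
# reconstruction formula from the text.
e = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

# Frame operator S = sum_k e_k e_k^T, assembled entrywise.
s11 = sum(x * x for x, y in e)
s12 = sum(x * y for x, y in e)
s22 = sum(y * y for x, y in e)
det = s11 * s22 - s12 * s12

def s_inv(v):
    """Apply S^{-1}, the inverse of [[s11, s12], [s12, s22]], to a 2-vector."""
    x, y = v
    return ((s22 * x - s12 * y) / det, (-s12 * x + s11 * y) / det)

dual = [s_inv(vec) for vec in e]  # canonical dual frame e_k~ = S^{-1} e_k

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

v = (2.0, -3.0)
# Reconstruction: v = sum_k <v, e_k~> e_k.
recon = (sum(dot(v, d) * f[0] for d, f in zip(dual, e)),
         sum(dot(v, d) * f[1] for d, f in zip(dual, e)))
print(recon)  # recovers (2.0, -3.0) up to rounding
```

Here $\mathbf{S} = \begin{pmatrix}2 & 1\\ 1 & 2\end{pmatrix}$ and the frame coefficients $\langle \mathbf{v}, \tilde{\mathbf{e}}_k \rangle$ reproduce $\mathbf{v}$ exactly, as the derivation above guarantees.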

When the frame $\{\mathbf{e}_{k}\}$ is overcomplete, a vector $\mathbf{v}$ can be written as a linear combination of $\{\mathbf{e}_{k}\}$ in more than one way. That is, there are different choices of coefficients $\{c_{k}\}$ such that $\mathbf{v} = \sum_{k} c_{k} \mathbf{e}_{k}$. This allows us some freedom for the choice of coefficients $\{c_{k}\}$ other than $\langle \mathbf{v} , \tilde{\mathbf{e}}_{k} \rangle$. It is necessary that the frame $\{\mathbf{e}_{k}\}$ is overcomplete for other such coefficients $\{c_{k}\}$ to exist. If so, then there exist frames $\{\mathbf{g}_{k}\} \neq \{\tilde{\mathbf{e}}_{k}\}$ for which

$\mathbf{v} = \sum_{k} \langle \mathbf{v} , \mathbf{g}_{k} \rangle \mathbf{e}_{k}$

for all $\mathbf{v} \in V$. We call $\{\mathbf{g}_{k}\}$ a dual frame of $\{\mathbf{e}_{k}\}$.