Hankel matrix

In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant, e.g.:

${\displaystyle \qquad {\begin{bmatrix}a&b&c&d&e\\b&c&d&e&f\\c&d&e&f&g\\d&e&f&g&h\\e&f&g&h&i\\\end{bmatrix}}.}$

More generally, a Hankel matrix is any ${\displaystyle n\times n}$ matrix ${\displaystyle A}$ of the form

${\displaystyle A={\begin{bmatrix}a_{0}&a_{1}&a_{2}&\ldots &\ldots &a_{n-1}\\a_{1}&a_{2}&&&&\vdots \\a_{2}&&&&&\vdots \\\vdots &&&&&a_{2n-4}\\\vdots &&&&a_{2n-4}&a_{2n-3}\\a_{n-1}&\ldots &\ldots &a_{2n-4}&a_{2n-3}&a_{2n-2}\end{bmatrix}}.}$

In terms of the components, if the ${\displaystyle i,j}$ element of ${\displaystyle A}$ is denoted with ${\displaystyle A_{ij}}$, and assuming ${\displaystyle i\leq j}$, then we have ${\displaystyle A_{i,j}=A_{i+k,j-k}}$ for all ${\displaystyle k=0,...,j-i.}$
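This component identity can be checked numerically. The following sketch (assuming NumPy is available) builds an ${\displaystyle n\times n}$ Hankel matrix from a sequence ${\displaystyle a_{0},\dots ,a_{2n-2}}$ and verifies that each entry depends only on ${\displaystyle i+j}$:

```python
import numpy as np

n = 5
a = np.arange(2 * n - 1)        # a_0, ..., a_{2n-2}

# Hankel matrix: entry (i, j) is a_{i+j}, so each anti-diagonal is constant
A = np.array([[a[i + j] for j in range(n)] for i in range(n)])

# A_{i,j} = A_{i+k,j-k} for k = 0, ..., j-i: walk one anti-diagonal
assert A[0, 3] == A[1, 2] == A[2, 1] == A[3, 0]

# A square Hankel matrix is symmetric
assert (A == A.T).all()
```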

Properties

• Any Hankel matrix is symmetric.
• Let ${\displaystyle J_{n}}$ be the ${\displaystyle n\times n}$ exchange matrix. If ${\displaystyle H}$ is an ${\displaystyle m\times n}$ Hankel matrix, then ${\displaystyle H=TJ_{n}}$, where ${\displaystyle T}$ is an ${\displaystyle m\times n}$ Toeplitz matrix.
• If ${\displaystyle T}$ is real symmetric, then ${\displaystyle H=TJ_{n}}$ will have the same eigenvalues as ${\displaystyle T}$ up to sign.[1]
• The Hilbert matrix is an example of a Hankel matrix.
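The relationship to Toeplitz matrices via the exchange matrix can be verified numerically; in the following sketch (assuming NumPy; the sequence is arbitrary), multiplying a Hankel matrix by ${\displaystyle J_{n}}$ reverses its columns and yields a Toeplitz matrix:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
c = rng.standard_normal(2 * n - 1)   # the 2n-1 free parameters of a Hankel matrix

# Hankel matrix H with H[i, j] = c[i + j]
H = np.array([[c[i + j] for j in range(n)] for i in range(n)])

# Exchange matrix J_n (the anti-identity); note J_n is its own inverse
J = np.fliplr(np.eye(n))

# T = H J_n reverses the columns of H and is Toeplitz:
# constant along ordinary diagonals
T = H @ J
for i in range(n - 1):
    for j in range(n - 1):
        assert np.isclose(T[i, j], T[i + 1, j + 1])

# Recover H = T J_n, matching the stated property
assert np.allclose(H, T @ J)
```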

Relation to formal Laurent series

Hankel matrices are closely related to formal Laurent series.[2] In fact, such a series ${\displaystyle f(z)=\sum _{n=-\infty }^{N}a_{n}z^{n}}$ gives rise to a linear map, referred to as a Hankel operator

${\displaystyle H_{f}:\mathbf {C} [z]\to z^{-1}\mathbf {C} [[z^{-1}]],}$

which takes a polynomial ${\displaystyle g\in \mathbf {C} [z]}$ and sends it to the product ${\displaystyle fg}$, but discards all powers of ${\displaystyle z}$ with a non-negative exponent, so as to give an element in ${\displaystyle z^{-1}\mathbf {C} [[z^{-1}]]}$, the formal power series with strictly negative exponents. The map ${\displaystyle H_{f}}$ is in a natural way ${\displaystyle \mathbf {C} [z]}$-linear, and its matrix with respect to the elements ${\displaystyle 1,z,z^{2},\dots \in \mathbf {C} [z]}$ and ${\displaystyle z^{-1},z^{-2},\dots \in z^{-1}\mathbf {C} [[z^{-1}]]}$ is the Hankel matrix

${\displaystyle {\begin{bmatrix}a_{1}&a_{2}&\ldots \\a_{2}&a_{3}&\ldots \\a_{3}&a_{4}&\ldots \\\vdots \end{bmatrix}}.}$

Any Hankel matrix arises in such a way. A theorem due to Kronecker says that the rank of this matrix is finite precisely if ${\displaystyle f}$ is a rational function, i.e., a fraction of two polynomials ${\displaystyle f={\frac {p(z)}{q(z)}}}$.
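Kronecker's theorem can be illustrated numerically. The Fibonacci numbers are the Laurent coefficients at infinity of the rational function ${\displaystyle 1/(z^{2}-z-1)}$, whose denominator has degree 2, so any Hankel matrix built from them has rank at most 2. A sketch assuming NumPy:

```python
import numpy as np

# Fibonacci numbers: Laurent coefficients at infinity of the rational
# function f(z) = 1/(z^2 - z - 1), whose denominator has degree 2
fib = [1, 1]
while len(fib) < 19:
    fib.append(fib[-1] + fib[-2])

# 10x10 Hankel matrix built from the sequence
H = np.array([[fib[i + j] for j in range(10)] for i in range(10)], dtype=float)

# Kronecker: the rank stays at deg(denominator) = 2, however large H grows,
# because the recurrence F_{n+2} = F_{n+1} + F_n makes each row a sum of two others
print(np.linalg.matrix_rank(H))   # 2
```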

Hankel operator

A Hankel operator on a Hilbert space is one whose matrix is a (possibly infinite) Hankel matrix with respect to an orthonormal basis. As indicated above, a Hankel matrix is a matrix with constant values along its antidiagonals: a Hankel matrix ${\displaystyle A=(A_{i,j})_{i,j\geq 1}}$ must satisfy ${\displaystyle A_{i,j}=\alpha _{i+j}}$ for some sequence ${\displaystyle (\alpha _{n})}$, so that every entry ${\displaystyle A_{i,j}}$ depends only on ${\displaystyle i+j}$.

Given a Hankel matrix ${\displaystyle A}$ built from the sequence ${\displaystyle \alpha }$ in this way, the corresponding Hankel operator ${\displaystyle H_{\alpha }}$ is defined by ${\displaystyle H_{\alpha }(u)=Au}$.

We are often interested in Hankel operators ${\displaystyle H_{\alpha }:\ell ^{2}\left(\mathbb {Z} ^{+}\cup \{0\}\right)\rightarrow \ell ^{2}\left(\mathbb {Z} ^{+}\cup \{0\}\right)}$ over the Hilbert space ${\displaystyle \ell ^{2}\left(\mathbb {Z} ^{+}\cup \{0\}\right)}$, the space of square-summable one-sided complex sequences. For any ${\displaystyle u\in \ell ^{2}\left(\mathbb {Z} ^{+}\cup \{0\}\right)}$, we have

${\displaystyle \|u\|_{\ell ^{2}}^{2}=\sum _{n=0}^{\infty }\left|u_{n}\right|^{2}.}$

We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.

Note that the matrix ${\displaystyle A}$ does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. If we also require that the approximation itself be a Hankel matrix, the best such structured approximation is characterized by AAK theory.
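For a finite Hankel matrix, the SVD-based approximation can be sketched as follows (assuming NumPy; the sequence is arbitrary). By the Eckart–Young theorem, the truncated SVD is the best rank-${\displaystyle k}$ approximation in the spectral norm, though the result is generally not itself Hankel:

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
a = rng.standard_normal(2 * n - 1)
A = np.array([[a[i + j] for j in range(n)] for i in range(n)])

# Truncated SVD: best rank-k approximation in the spectral norm
U, s, Vt = np.linalg.svd(A)
k = 3
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: the spectral-norm error equals the (k+1)-th singular value
err = np.linalg.norm(A - A_k, 2)
assert np.isclose(err, s[k])

# A_k is generally NOT Hankel; the best approximation within the Hankel
# class is the subject of AAK theory
```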

The determinant of a Hankel matrix is called a catalecticant.

Hankel matrix transform

The Hankel matrix transform, or simply Hankel transform, produces the sequence of the determinants of the Hankel matrices formed from the given sequence. Namely, the sequence ${\displaystyle \{h_{n}\}_{n\geq 0}}$ is the Hankel transform of the sequence ${\displaystyle \{b_{n}\}_{n\geq 0}}$ when

${\displaystyle h_{n}=\det(b_{i+j-2})_{1\leq i,j\leq n+1}.}$

The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes

${\displaystyle c_{n}=\sum _{k=0}^{n}{n \choose k}b_{k}}$

as the binomial transform of the sequence ${\displaystyle \{b_{n}\}}$, then one has

${\displaystyle \det(b_{i+j-2})_{1\leq i,j\leq n+1}=\det(c_{i+j-2})_{1\leq i,j\leq n+1}.}$
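This invariance can be checked numerically. The Catalan numbers are a classical example: their Hankel transform is the all-ones sequence, and by the invariance above the binomial transform of the Catalan numbers has the same Hankel transform. A sketch assuming NumPy:

```python
import numpy as np
from math import comb

# b_n: Catalan numbers, whose Hankel transform is the all-ones sequence
b = [comb(2 * n, n) // (n + 1) for n in range(12)]

# Binomial transform c_n = sum_k C(n, k) b_k
c = [sum(comb(n, k) * b[k] for k in range(n + 1)) for n in range(12)]

def hankel_det(seq, n):
    """det of the (n+1)x(n+1) Hankel matrix (seq[i+j]) for 0 <= i, j <= n."""
    M = np.array([[seq[i + j] for j in range(n + 1)]
                  for i in range(n + 1)], dtype=float)
    return np.linalg.det(M)

for n in range(4):
    # Invariance under the binomial transform; for Catalan numbers both are 1
    assert round(hankel_det(b, n)) == round(hankel_det(c, n)) == 1
```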

Applications of Hankel matrices

Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired.[3] The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization.[4] The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
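A minimal sketch of such an SVD-based realization (a Ho–Kalman-style construction; the state-space matrices below are hypothetical, chosen only to generate example output data): the Hankel matrix of Markov parameters ${\displaystyle h_{k}=CA^{k}B}$ factors into observability and controllability factors, from which A, B, and C are recovered.

```python
import numpy as np

# Hypothetical 2-state system, used only to generate Markov parameters
A_true = np.array([[0.5, 0.2], [0.0, 0.3]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])

# Markov parameters h_k = C A^k B (the "output data")
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
     for k in range(10)]

# Hankel matrix of Markov parameters and its one-step shift
H0 = np.array([[h[i + j] for j in range(4)] for i in range(4)])
H1 = np.array([[h[i + j + 1] for j in range(4)] for i in range(4)])

# Rank-r SVD of H0 gives observability- and controllability-like factors
r = 2
U, s, Vt = np.linalg.svd(H0)
sq = np.sqrt(s[:r])
O = U[:, :r] * sq             # observability factor (columns scaled)
Ctr = (Vt[:r, :].T * sq).T    # controllability factor (rows scaled)

# Recovered realization, in balanced coordinates
A_est = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(Ctr)
B_est = Ctr[:, :1]
C_est = O[:1, :]

# The recovered model reproduces the Markov parameters
h_est = [(C_est @ np.linalg.matrix_power(A_est, k) @ B_est).item()
         for k in range(10)]
assert np.allclose(h, h_est)
```

The recovered (A, B, C) agree with the original system only up to a similarity transform, which is all a realization determines.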

Method of moments for polynomial distributions

The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.[5]
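As an illustrative sketch (the quadratic density model and its weights here are hypothetical, not taken from the cited work): if a density on [0, 1] is modeled as ${\displaystyle p(x)=\sum _{j}w_{j}x^{j}}$, its moments are ${\displaystyle m_{i}=\int _{0}^{1}x^{i}p(x)\,dx=\sum _{j}w_{j}/(i+j+1)}$, so the coefficient matrix ${\displaystyle 1/(i+j+1)}$ is the Hilbert matrix, itself a Hankel matrix, and inverting it recovers the weights.

```python
import numpy as np

# Hypothetical density on [0, 1]: p(x) = 0.5 + 0.5 x + 0.75 x^2
# (chosen so that it integrates to 1)
w_true = np.array([0.5, 0.5, 0.75])

# Moments m_i = \int_0^1 x^i p(x) dx = sum_j w_j / (i + j + 1):
# the coefficient matrix is the Hilbert matrix, a Hankel matrix
H = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])
m = H @ w_true

# Method of moments: invert the Hankel system to recover the weights
w_est = np.linalg.solve(H, m)
assert np.allclose(w_est, w_true)
```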