# Transfer matrix

This article is about the transfer matrix in wavelet theory. For the transfer matrix in control systems, see Transfer function matrix. For the transfer matrix method in statistical physics, see Transfer-matrix method. For the transfer matrix method in optics, see Transfer-matrix method (optics).

In applied mathematics, the transfer matrix is a formulation in terms of a block-Toeplitz matrix of the two-scale equation, which characterizes refinable functions. Refinable functions play an important role in wavelet theory and finite element theory.
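For context, the two-scale equation mentioned here can be written, in one common normalization (conventions differ by a factor of ${\displaystyle 2}$ in front of the sum), as

${\displaystyle \varphi (x)=\sum _{k=a}^{b}h_{k}\cdot \varphi (2\cdot x-k),}$

so the mask ${\displaystyle h=(h_{a},\dots ,h_{b})}$ is the coefficient sequence of the refinable function ${\displaystyle \varphi }$.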

For a mask ${\displaystyle h}$, that is, a vector with component indexes from ${\displaystyle a}$ to ${\displaystyle b}$, the transfer matrix of ${\displaystyle h}$, here denoted by ${\displaystyle T_{h}}$, is defined as

${\displaystyle (T_{h})_{j,k}=h_{2\cdot j-k}.}$

More explicitly,

${\displaystyle T_{h}={\begin{pmatrix}h_{a}&&&&&\\h_{a+2}&h_{a+1}&h_{a}&&&\\h_{a+4}&h_{a+3}&h_{a+2}&h_{a+1}&h_{a}&\\\ddots &\ddots &\ddots &\ddots &\ddots &\ddots \\&h_{b}&h_{b-1}&h_{b-2}&h_{b-3}&h_{b-4}\\&&&h_{b}&h_{b-1}&h_{b-2}\\&&&&&h_{b}\end{pmatrix}}.}$

The effect of ${\displaystyle T_{h}}$ can be expressed in terms of the downsampling operator "${\displaystyle \downarrow }$":

${\displaystyle T_{h}\cdot x=(h*x)\downarrow 2.}$
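As a concrete illustration, the following minimal pure-Python sketch (not part of the article; the mask values and helper names are chosen only for demonstration) builds ${\displaystyle T_{h}}$ from a mask with index range ${\displaystyle a,\dots ,b}$ and checks the downsampling identity ${\displaystyle T_{h}\cdot x=(h*x)\downarrow 2}$:

```python
def transfer_matrix(h, a):
    """Transfer matrix of the mask h, whose first component index is a."""
    b = a + len(h) - 1
    # (T_h)_{j,k} = h_{2j-k} for j, k = a, ..., b (zero outside the support)
    return [[h[2 * j - k - a] if a <= 2 * j - k <= b else 0
             for k in range(a, b + 1)]
            for j in range(a, b + 1)]

def convolve(f, g):
    """Full linear convolution of two coefficient lists."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

# Example mask h = (h_0, ..., h_3), i.e. a = 0, b = 3
h, a = [1, 3, 3, 1], 0
x = [2, -1, 4, 5]          # a vector with the same index range a..b

T = transfer_matrix(h, a)
Tx = [sum(T[j][k] * x[k] for k in range(len(x))) for j in range(len(x))]

# h*x has indexes 2a..2b; keeping the even indexes 2a, 2a+2, ..., 2b
# reproduces T_h·x.
downsampled = convolve(h, x)[0::2]
print(Tx == downsampled)   # → True
```

Note how the rows of `T` reproduce the banded pattern of the displayed matrix: the row for index ${\displaystyle a+1}$ is ${\displaystyle (h_{a+2},h_{a+1},h_{a},0,\dots )}$.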

## Properties

• ${\displaystyle T_{h}\cdot x=T_{x}\cdot h}$.
• Dropping the first and the last column and gathering the odd-indexed columns on the left and the even-indexed columns on the right yields a transposed Sylvester matrix.
• The determinant of a transfer matrix is essentially a resultant.
More precisely:
Let ${\displaystyle h_{\mathrm {e} }}$ be the even-indexed coefficients of ${\displaystyle h}$ (${\displaystyle (h_{\mathrm {e} })_{k}=h_{2k}}$) and let ${\displaystyle h_{\mathrm {o} }}$ be the odd-indexed coefficients of ${\displaystyle h}$ (${\displaystyle (h_{\mathrm {o} })_{k}=h_{2k+1}}$).
Then ${\displaystyle \det T_{h}=(-1)^{\lfloor {\frac {b-a+1}{4}}\rfloor }\cdot h_{a}\cdot h_{b}\cdot \mathrm {res} (h_{\mathrm {e} },h_{\mathrm {o} })}$, where ${\displaystyle \mathrm {res} }$ is the resultant.
This connection allows for fast computation using the Euclidean algorithm.
• For the trace of the transfer matrix of convolved masks it holds that
${\displaystyle \mathrm {tr} ~T_{g*h}=\mathrm {tr} ~T_{g}\cdot \mathrm {tr} ~T_{h}}$.
• For the determinant of the transfer matrix of convolved masks it holds that
${\displaystyle \det T_{g*h}=\det T_{g}\cdot \det T_{h}\cdot \mathrm {res} (g_{-},h)}$
where ${\displaystyle g_{-}}$ denotes the mask with alternating signs, i.e. ${\displaystyle (g_{-})_{k}=(-1)^{k}\cdot g_{k}}$.
• If ${\displaystyle T_{h}\cdot x=0}$, then ${\displaystyle T_{g*h}\cdot (g_{-}*x)=0}$.
This makes the determinant property above concrete. From the determinant property one knows that ${\displaystyle T_{g*h}}$ is singular whenever ${\displaystyle T_{h}}$ is singular. This property also shows how vectors from the null space of ${\displaystyle T_{h}}$ can be converted into null space vectors of ${\displaystyle T_{g*h}}$.
• If ${\displaystyle x}$ is an eigenvector of ${\displaystyle T_{h}}$ with respect to the eigenvalue ${\displaystyle \lambda }$, i.e.
${\displaystyle T_{h}\cdot x=\lambda \cdot x}$,
then ${\displaystyle x*(1,-1)}$ is an eigenvector of ${\displaystyle T_{h*(1,1)}}$ with respect to the same eigenvalue, i.e.
${\displaystyle T_{h*(1,1)}\cdot (x*(1,-1))=\lambda \cdot (x*(1,-1))}$.
• Let ${\displaystyle \lambda _{a},\dots ,\lambda _{b}}$ be the eigenvalues of ${\displaystyle T_{h}}$, which implies ${\displaystyle \lambda _{a}+\dots +\lambda _{b}=\mathrm {tr} ~T_{h}}$ and more generally ${\displaystyle \lambda _{a}^{n}+\dots +\lambda _{b}^{n}=\mathrm {tr} (T_{h}^{n})}$. This sum is useful for estimating the spectral radius of ${\displaystyle T_{h}}$. There is an alternative way to compute the sum of eigenvalue powers, which is faster for small ${\displaystyle n}$.
Let ${\displaystyle C_{n}h}$ be the periodization of ${\displaystyle h}$ with respect to the period ${\displaystyle 2^{n}-1}$. That is, ${\displaystyle C_{n}h}$ is a circular filter, which means that the component indexes are residue classes with respect to the modulus ${\displaystyle 2^{n}-1}$. Then with the upsampling operator ${\displaystyle \uparrow }$ it holds that
${\displaystyle \mathrm {tr} (T_{h}^{n})=\left(C_{n}h*(C_{n}h\uparrow 2)*(C_{n}h\uparrow 2^{2})*\cdots *(C_{n}h\uparrow 2^{n-1})\right)_{[0]_{2^{n}-1}}}$
In fact, not ${\displaystyle n-1}$ convolutions are necessary, but only about ${\displaystyle 2\cdot \log _{2}n}$ of them, when applying the strategy of efficient computation of powers (exponentiation by squaring). The approach can be sped up further using the fast Fourier transform.
• From the previous statement we can derive an estimate of the spectral radius ${\displaystyle \varrho (T_{h})}$. It holds that
${\displaystyle \varrho (T_{h})\geq {\frac {a}{\sqrt {\#h}}}\geq {\frac {1}{\sqrt {3\cdot \#h}}}}$
where ${\displaystyle \#h}$ is the size of the filter and ${\displaystyle a=\Vert C_{2}h\Vert _{2}}$. If all eigenvalues are real, it is also true that
${\displaystyle \varrho (T_{h})\leq a}$.
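Several of the algebraic properties above can be checked numerically. The following pure-Python sketch (not part of the article; masks and helper names are chosen for demonstration, and all masks start at index ${\displaystyle a=0}$) verifies the commutativity ${\displaystyle T_{h}\cdot x=T_{x}\cdot h}$, the multiplicativity of the trace under convolution, and the identity ${\displaystyle T_{h*(1,1)}\cdot (x*(1,-1))=(T_{h}\cdot x)*(1,-1)}$, which holds for every ${\displaystyle x}$ and, applied to an eigenvector, gives exactly the eigenvector property:

```python
def transfer_matrix(h):
    """(T_h)_{j,k} = h_{2j-k} for j, k = 0, ..., len(h)-1."""
    n = len(h)
    return [[h[2 * j - k] if 0 <= 2 * j - k < n else 0 for k in range(n)]
            for j in range(n)]

def convolve(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

def matvec(m, v):
    return [sum(row[k] * v[k] for k in range(len(v))) for row in m]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

h = [1, 3, 3, 1]
x = [2, -1, 4, 5]
g = [1, 1]

# T_h·x = T_x·h (both masks share the index range 0..3)
assert matvec(transfer_matrix(h), x) == matvec(transfer_matrix(x), h)

# tr T_{g*h} = tr T_g · tr T_h (the trace is just the sum of the mask)
assert trace(transfer_matrix(convolve(g, h))) == \
       trace(transfer_matrix(g)) * trace(transfer_matrix(h))

# T_{h*(1,1)}·(x*(1,-1)) = (T_h·x)*(1,-1): if x is an eigenvector of T_h,
# the right side is λ·(x*(1,-1)), which is the eigenvector property.
lhs = matvec(transfer_matrix(convolve(h, [1, 1])), convolve(x, [1, -1]))
rhs = convolve(matvec(transfer_matrix(h), x), [1, -1])
assert lhs == rhs
print(lhs)   # → [2, 5, 19, -21, -5]
```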
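The trace formula with the periodized mask can also be sketched in a few lines. The following pure-Python example (not from the article; mask values and helper names are illustrative, masks start at index ${\displaystyle 0}$) computes ${\displaystyle \mathrm {tr} (T_{h}^{n})}$ once directly from the matrix power and once from one component of a product of circularly upsampled periodizations modulo ${\displaystyle N=2^{n}-1}$:

```python
def periodize(h, N):
    """C_n h: fold the mask into residue classes modulo N."""
    c = [0] * N
    for i, v in enumerate(h):
        c[i % N] += v
    return c

def circ_upsample(c, s):
    """Circular upsampling: move the component at residue i to s*i mod N."""
    N = len(c)
    out = [0] * N
    for i, v in enumerate(c):
        out[(s * i) % N] += v
    return out

def circ_convolve(f, g):
    N = len(f)
    out = [0] * N
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[(i + j) % N] += fi * gj
    return out

def trace_power(h, n):
    """tr(T_h^n) via the periodization formula."""
    N = 2 ** n - 1
    c = periodize(h, N)
    prod = c
    for t in range(1, n):
        prod = circ_convolve(prod, circ_upsample(c, 2 ** t))
    return prod[0]

# Direct computation for comparison: build T_h and take tr(T_h^3).
def transfer_matrix(h):
    n = len(h)
    return [[h[2 * j - k] if 0 <= 2 * j - k < n else 0 for k in range(n)]
            for j in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

h = [1, 3, 3, 1]
T = transfer_matrix(h)
P = T
for _ in range(2):          # P = T^3
    P = matmul(P, T)
direct = sum(P[i][i] for i in range(len(h)))
print(direct, trace_power(h, 3))   # → 74 74
```

For this mask the eigenvalues of ${\displaystyle T_{h}}$ are ${\displaystyle 1,1,2,4}$, so both computations return ${\displaystyle 1+1+8+64=74}$.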

## References

• Strang, Gilbert (1996). "Eigenvalues of ${\displaystyle (\downarrow 2){H}}$ and convergence of the cascade algorithm". IEEE Transactions on Signal Processing. 44: 233–238.