# Alternant matrix

In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix.

Generally, if $f_{1},f_{2},\dots ,f_{n}$ are functions from a set $X$ to a field $F$, and $\alpha _{1},\alpha _{2},\dots ,\alpha _{m}\in X$, then the alternant matrix has size $m\times n$ and is defined by

$M={\begin{bmatrix}f_{1}(\alpha _{1})&f_{2}(\alpha _{1})&\dots &f_{n}(\alpha _{1})\\f_{1}(\alpha _{2})&f_{2}(\alpha _{2})&\dots &f_{n}(\alpha _{2})\\f_{1}(\alpha _{3})&f_{2}(\alpha _{3})&\dots &f_{n}(\alpha _{3})\\\vdots &\vdots &\ddots &\vdots \\f_{1}(\alpha _{m})&f_{2}(\alpha _{m})&\dots &f_{n}(\alpha _{m})\\\end{bmatrix}}$ or, more compactly, $M_{ij}=f_{j}(\alpha _{i})$ . (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which $f_{j}(\alpha )=\alpha ^{j-1}$ , and Moore matrices, for which $f_{j}(\alpha )=\alpha ^{q^{j-1}}$ .
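The definition $M_{ij}=f_{j}(\alpha _{i})$ translates directly into code. The following is a minimal sketch in Python (using exact rational arithmetic, with a hypothetical helper name `alternant`) that builds an alternant matrix and recovers the Vandermonde matrix as the special case $f_{j}(\alpha )=\alpha ^{j-1}$:

```python
from fractions import Fraction

def alternant(funcs, points):
    """Alternant matrix M with M[i][j] = funcs[j](points[i])."""
    return [[f(a) for f in funcs] for a in points]

# Vandermonde as a special case: f_j(x) = x**(j-1)
# (k=k pins each power inside its own lambda).
funcs = [lambda x, k=k: x**k for k in range(3)]
points = [Fraction(2), Fraction(3), Fraction(5)]
V = alternant(funcs, points)
# Each row i is [1, alpha_i, alpha_i**2].
```
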

## Properties

• The alternant can be used to check the linear independence of the functions $f_{1},f_{2},\dots f_{n}$ in function space. For example, let $f_{1}(x)=\sin(x),f_{2}(x)=\cos(x)$ and choose $\alpha _{1}=0,\alpha _{2}=\pi /2$ . Then the alternant is the matrix ${\begin{bmatrix}0&1\\1&0\end{bmatrix}}$ and the alternant determinant is $-1\neq 0$ . Therefore M is invertible and the vectors $\{\sin(x),\cos(x)\}$ form a basis for their spanning set: in particular, $\sin(x)$ and $\cos(x)$ are linearly independent.
• Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let $f_{1}(x)=\sin(x),f_{2}(x)=\cos(x)$ and choose $\alpha _{1}=0,\alpha _{2}=\pi$ . Then the alternant is ${\begin{bmatrix}0&1\\0&-1\end{bmatrix}}$ and the alternant determinant is 0, but we have already seen that $\sin(x)$ and $\cos(x)$ are linearly independent.
• Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers A and B for which ${\frac {A}{x+1}}+{\frac {B}{x+2}}={\frac {1}{(x+1)(x+2)}}.$ Choosing $f_{1}(x)={\frac {1}{x+1}},f_{2}(x)={\frac {1}{x+2}},f_{3}(x)={\frac {1}{(x+1)(x+2)}}$ and $(\alpha _{1},\alpha _{2},\alpha _{3})=(1,2,3)$ , we obtain the alternant ${\begin{bmatrix}1/2&1/3&1/6\\1/3&1/4&1/12\\1/4&1/5&1/20\end{bmatrix}}\sim {\begin{bmatrix}1&0&1\\0&1&-1\\0&0&0\end{bmatrix}}.$ Therefore $(1,-1,-1)$ is in the nullspace of the matrix: that is, $f_{1}-f_{2}-f_{3}=0$ . Moving $f_{3}$ to the other side of the equation gives the partial fraction decomposition $A=1,B=-1$ .
• If $n=m$ and $\alpha _{i}=\alpha _{j}$ for some $i\neq j$ , then the alternant determinant is zero (as a row is repeated).
• If $n=m$ and the functions $f_{j}(x)$ are all polynomials, then $(\alpha _{j}-\alpha _{i})$ divides the alternant determinant for all $1\leq i<j\leq m$ . In particular, if V is a Vandermonde matrix, then $\prod _{i<j}(\alpha _{j}-\alpha _{i})=\det V$ divides such polynomial alternant determinants. The ratio ${\frac {\det M}{\det V}}$ is therefore a polynomial in $\alpha _{1},\dots ,\alpha _{m}$ called the bialternant. The Schur polynomial $s_{(\lambda _{1},\dots ,\lambda _{n})}$ is classically defined as the bialternant of the polynomials $f_{j}(x)=x^{\lambda _{j}}$ .
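The partial-fraction bullet above can be checked numerically. The sketch below (a minimal illustration, not a general nullspace solver) builds the same $3\times 3$ alternant over exact rationals, verifies that the candidate vector $(1,-1,-1)$ read off from the row reduction annihilates every row, and confirms the resulting identity $f_{1}-f_{2}-f_{3}=0$ at a point outside the sample set:

```python
from fractions import Fraction

# The three functions from the partial-fractions example.
f1 = lambda x: Fraction(1, x + 1)
f2 = lambda x: Fraction(1, x + 2)
f3 = lambda x: Fraction(1, (x + 1) * (x + 2))

points = [1, 2, 3]
M = [[f(a) for f in (f1, f2, f3)] for a in points]  # M[i][j] = f_j(alpha_i)

# Candidate nullspace vector obtained from the row reduction in the text.
v = (1, -1, -1)
residuals = [sum(c * m for c, m in zip(v, row)) for row in M]
# Every residual is 0, so f1 - f2 - f3 vanishes at the sample points.

# Since the dependence is known to exist, it holds identically;
# spot-check at a point not among the alphas:
assert f1(10) - f2(10) - f3(10) == 0
```
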