# Vandermonde matrix

In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row, i.e., an m × n matrix

${\displaystyle V={\begin{bmatrix}1&\alpha _{1}&\alpha _{1}^{2}&\dots &\alpha _{1}^{n-1}\\1&\alpha _{2}&\alpha _{2}^{2}&\dots &\alpha _{2}^{n-1}\\1&\alpha _{3}&\alpha _{3}^{2}&\dots &\alpha _{3}^{n-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\alpha _{m}&\alpha _{m}^{2}&\dots &\alpha _{m}^{n-1}\end{bmatrix}},}$

or

${\displaystyle V_{i,j}=\alpha _{i}^{j-1}\,}$

for all indices i and j.[1] (Some authors use the transpose of the above matrix.)

The determinant of a square Vandermonde matrix (where m = n) can be expressed as

${\displaystyle \det(V)=\prod _{1\leq i<j\leq n}(\alpha _{j}-\alpha _{i}).}$

This is called the Vandermonde determinant or Vandermonde polynomial. If all the numbers ${\displaystyle \alpha _{i}}$ are distinct, then it is non-zero.

The Vandermonde determinant is sometimes called the discriminant, although many sources, including this article, refer to the discriminant as the square of this determinant. Note that the Vandermonde determinant is alternating in the entries, meaning that permuting the ${\displaystyle \alpha _{i}}$ by an odd permutation changes the sign, while permuting them by an even permutation does not change the value of the determinant. It thus depends on the order, while its square (the discriminant) does not depend on the order.
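The product formula can be checked numerically; the following sketch uses NumPy's `np.vander`, with arbitrarily chosen distinct sample points:

```python
# Check det(V) = prod_{i<j} (alpha_j - alpha_i) on arbitrary sample points.
from itertools import combinations
from math import prod

import numpy as np

alpha = [2.0, -1.0, 0.5, 3.0]          # distinct points (arbitrary choice)
V = np.vander(alpha, increasing=True)  # rows [1, a, a^2, a^3]
det_numeric = np.linalg.det(V)
det_product = prod(aj - ai for ai, aj in combinations(alpha, 2))
```

Since the points are distinct, every factor is non-zero and both computations agree on a non-zero value.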

When two or more ${\displaystyle \alpha _{i}}$ are equal, the corresponding polynomial interpolation problem (see below) is underdetermined. In that case one may use a generalization called confluent Vandermonde matrices, which makes the matrix non-singular while retaining most properties. If ${\displaystyle \alpha _{i}=\alpha _{i+1}=\cdots =\alpha _{i+k}}$ and ${\displaystyle \alpha _{i}\neq \alpha _{i-1}}$, then the (i + k)th row is given by

${\displaystyle V_{i+k,j}={\begin{cases}0,&{\text{if }}j\leq k;\\{\frac {(j-1)!}{(j-k-1)!}}\alpha _{i}^{j-k-1},&{\text{if }}j>k.\end{cases}}}$

The above formula for confluent Vandermonde matrices can be readily derived by letting two parameters ${\displaystyle \alpha _{i}}$ and ${\displaystyle \alpha _{j}}$ go arbitrarily close to each other. In the limit, the difference of the rows corresponding to ${\displaystyle \alpha _{i}}$ and ${\displaystyle \alpha _{j}}$, divided by ${\displaystyle \alpha _{j}-\alpha _{i}}$, yields the above equation (for k = 1). Similarly, the cases k > 1 are obtained from higher-order differences. Consequently, the confluent rows are derivatives of the original Vandermonde row.
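The derivative interpretation can be checked directly. The sketch below (illustrative code with arbitrary parameter values) implements the closed form above; for k = 0 it reproduces the ordinary Vandermonde row, and for k = 1 the derivative row [0, 1, 2α, 3α², …].

```python
# Confluent Vandermonde row: the k-th derivative of [x^0, x^1, ..., x^(n-1)]
# evaluated at x = a, via the closed form (j-1)!/(j-k-1)! * a^(j-k-1).
from math import factorial

def confluent_row(a, n, k):
    """Row of d^k/dx^k [1, x, ..., x^(n-1)] at x = a (column index j is 1-based)."""
    row = []
    for j in range(1, n + 1):
        if j <= k:
            row.append(0.0)  # x^(j-1) differentiates to zero after j-1 steps
        else:
            row.append(factorial(j - 1) / factorial(j - k - 1) * a ** (j - k - 1))
    return row
```

For example, `confluent_row(2.0, 5, 1)` gives the first-derivative row at α = 2.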

## Properties

In the case of a square Vandermonde matrix, the Leibniz formula for the determinant gives

${\displaystyle \det(V)=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\prod _{i=1}^{n}\alpha _{i}^{\sigma (i)-1},}$

where ${\displaystyle S_{n}}$ denotes the set of permutations of ${\displaystyle \{1,\ldots ,n\}}$, and ${\displaystyle \operatorname {sgn}(\sigma )}$ denotes the signature of the permutation ${\displaystyle \sigma }$. This determinant factors as

${\displaystyle \sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\prod _{i=1}^{n}\alpha _{i}^{\sigma (i)-1}=\prod _{1\leq i<j\leq n}(\alpha _{j}-\alpha _{i}).}$

Each factor ${\displaystyle \alpha _{j}-\alpha _{i}}$ must divide the determinant, because the determinant is an alternating polynomial in the n variables and so vanishes whenever ${\displaystyle \alpha _{i}=\alpha _{j}}$ (the matrix then has two equal rows). It also follows that the Vandermonde determinant divides any other alternating polynomial; the quotient is a symmetric polynomial.

If m ≤ n, then the matrix V has maximum rank (m) if and only if all αi are distinct. A square Vandermonde matrix is thus invertible if and only if the αi are distinct; an explicit formula for the inverse is known.[2][3][4]

## Applications

The Vandermonde matrix evaluates a polynomial at a set of points; formally, it transforms coefficients of a polynomial ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n-1}x^{n-1}}$ to the values the polynomial takes at the points ${\displaystyle \alpha _{i}.}$ The non-vanishing of the Vandermonde determinant for distinct points ${\displaystyle \alpha _{i}}$ shows that, for distinct points, the map from coefficients to values at those points is a one-to-one correspondence, and thus that the polynomial interpolation problem is solvable with unique solution; this result is called the unisolvence theorem.

Vandermonde matrices are thus useful in polynomial interpolation, since solving the system of linear equations Vu = y for u with V an m × n Vandermonde matrix is equivalent to finding the coefficients uj of the polynomial(s)

${\displaystyle P(x)=\sum _{j=0}^{n-1}u_{j}x^{j}}$

of degree ≤ n − 1 which has (have) the property

${\displaystyle P(\alpha _{i})=y_{i}\quad {\text{for }}i=1,\ldots ,m.\,}$

The Vandermonde matrix can be inverted in terms of Lagrange basis polynomials:[5] each column is the coefficients of the Lagrange basis polynomial, with terms in increasing order going down. The resulting solution to the interpolation problem is called the Lagrange polynomial.
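As a minimal illustration (using NumPy; the sample points and values below are arbitrary test data), solving ${\displaystyle Vu=y}$ recovers the coefficients of the unique interpolating polynomial of degree < n:

```python
# Polynomial interpolation by solving the Vandermonde system V u = y.
import numpy as np

alpha = np.array([0.0, 1.0, 2.0, 3.0])  # distinct interpolation points
y = np.array([1.0, 3.0, 11.0, 31.0])    # values of 1 + x + x^3 at alpha

V = np.vander(alpha, increasing=True)   # V[i, j] = alpha[i]**j
u = np.linalg.solve(V, y)               # coefficients in increasing degree
```

Here `u` comes out as the coefficient vector of 1 + x + x³, confirming that distinct points make the system uniquely solvable.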

The Vandermonde determinant plays a central role in the Frobenius formula, which gives the character of conjugacy classes of representations of the symmetric group.[6]

When the values ${\displaystyle \alpha _{k}}$ range over the powers of an element of a finite field, the determinant has a number of interesting properties, used, for example, in proving the properties of a BCH code.

Confluent Vandermonde matrices are used in Hermite interpolation.

A commonly known special Vandermonde matrix is the discrete Fourier transform matrix (DFT matrix), where the numbers αi are chosen to be the m different mth roots of unity.
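This identity can be checked directly. The sketch below uses NumPy and the sign convention ${\displaystyle \omega =e^{-2\pi i/m}}$, which matches `np.fft.fft`; the input vector is arbitrary.

```python
# The DFT matrix is the Vandermonde matrix of the m-th roots of unity:
# V[i, j] = omega**(i*j), so V @ a computes the DFT of a.
import numpy as np

m = 8
omega = np.exp(-2j * np.pi / m)                        # primitive m-th root of unity
V = np.vander(omega ** np.arange(m), increasing=True)  # V[i, j] = omega**(i*j)
a = np.arange(float(m))                                # arbitrary input vector
dft_via_vandermonde = V @ a
```

The result agrees (up to floating-point error) with the FFT of the same vector.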

The Vandermonde matrix diagonalizes the companion matrix.
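Concretely, writing the companion matrix C so that its last column holds the negated coefficients of the monic polynomial (one common convention), the Vandermonde matrix V of the roots satisfies V C = D V with D the diagonal matrix of roots. A NumPy sketch with an arbitrarily chosen cubic:

```python
# For p(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6, the Vandermonde matrix
# of the roots satisfies V C = D V, i.e. V diagonalizes the companion matrix.
import numpy as np

roots = np.array([1.0, 2.0, 3.0])
# Companion matrix of x^3 + c2 x^2 + c1 x + c0 with c0 = -6, c1 = 11, c2 = -6;
# last column is [-c0, -c1, -c2].
C = np.array([[0.0, 0.0, 6.0],
              [1.0, 0.0, -11.0],
              [0.0, 1.0, 6.0]])
V = np.vander(roots, increasing=True)  # rows [1, r, r^2]
D = np.diag(roots)
```

Each row [1, λ, λ²] of V is a left eigenvector of C with eigenvalue λ, since p(λ) = 0 lets λ³ be rewritten in terms of lower powers.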

The Vandermonde matrix is used in some forms of Reed–Solomon error correction codes.
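One way to see the connection is that Reed–Solomon encoding evaluates the message polynomial at distinct field elements, i.e. applies a Vandermonde matrix over a finite field, so any n surviving symbols determine the polynomial. The following toy sketch (not a production codec; the field GF(7), the evaluation points, and the message are all illustrative choices) recovers a codeword from erasures by interpolation:

```python
# Toy Reed-Solomon over GF(7): encoding is Vandermonde evaluation of the
# message polynomial; any 3 surviving symbols recover a degree < 3 polynomial.
P = 7  # prime modulus: arithmetic is in the field GF(7)

def encode(msg, points):
    """Evaluate the message polynomial (coefficients msg) at each point mod P."""
    return [sum(c * pow(x, j, P) for j, c in enumerate(msg)) % P for x in points]

def interpolate(xs, ys, x):
    """Lagrange interpolation over GF(P): value at x of the polynomial through (xs, ys)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P  # modular inverse of den
    return total

msg = [2, 3, 1]               # coefficients of 2 + 3x + x^2 (degree < 3)
points = [1, 2, 3, 4, 5]      # 5 distinct evaluation points: tolerates 2 erasures
codeword = encode(msg, points)

# Drop two symbols; any 3 surviving (point, value) pairs determine the polynomial.
survivors = [0, 2, 4]
xs = [points[i] for i in survivors]
ys = [codeword[i] for i in survivors]
recovered = [interpolate(xs, ys, x) for x in points]
```

Because the 3 × 3 Vandermonde matrix of any three distinct points is invertible over GF(7), the erased symbols are recovered exactly.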