# Lagrange polynomial

*Figure: for four points ((−9, 5), (−4, 2), (−1, −2), (7, 9)), the (cubic) interpolation polynomial $L(x)$ (dashed, black) is the sum of the scaled basis polynomials $y_{0}\ell _{0}(x)$, $y_{1}\ell _{1}(x)$, $y_{2}\ell _{2}(x)$ and $y_{3}\ell _{3}(x)$. The interpolation polynomial passes through all four control points, and each scaled basis polynomial passes through its respective control point and is 0 where $x$ corresponds to the other three control points.*

In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data.

Given a data set of coordinate pairs $(x_{j},y_{j})$ with $0\leq j\leq k,$ the $x_{j}$ are called nodes and the $y_{j}$ are called values. The Lagrange polynomial $L(x)$ has degree ${\textstyle \leq k}$ and assumes each value at the corresponding node, $L(x_{j})=y_{j}.$ Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler.

Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration and Shamir's secret sharing scheme in cryptography.

For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.

## Definition

Given a set of $k+1$ nodes $\{x_{0},x_{1},\ldots ,x_{k}\}$, which must all be distinct ($x_{j}\neq x_{m}$ for indices $j\neq m$), the Lagrange basis for polynomials of degree $\leq k$ for those nodes is the set of polynomials $\{\ell _{0}(x),\ell _{1}(x),\ldots ,\ell _{k}(x)\}$, each of degree $k$, which take the values $\ell _{j}(x_{m})=0$ if $m\neq j$ and $\ell _{j}(x_{j})=1$. Using the Kronecker delta this can be written $\ell _{j}(x_{m})=\delta _{jm}$. Each basis polynomial can be explicitly described by the product:

$$\begin{aligned}\ell _{j}(x)&={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}\cdot {\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{k})}{(x_{j}-x_{k})}}\\[10mu]&=\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}}.\end{aligned}$$

Notice that the numerator $\prod _{m\neq j}(x-x_{m})$ has $k$ roots, at the nodes $\{x_{m}\}_{m\neq j}$, while the denominator $\prod _{m\neq j}(x_{j}-x_{m})$ scales the resulting polynomial so that $\ell _{j}(x_{j})=1$. The Lagrange interpolating polynomial for those nodes through the corresponding values $\{y_{0},y_{1},\ldots ,y_{k}\}$ is the linear combination:

$$L(x)=\sum _{j=0}^{k}y_{j}\ell _{j}(x).$$

Each basis polynomial has degree $k$, so the sum $L(x)$ has degree $\leq k$, and it interpolates the data because

$$L(x_{m})=\sum _{j=0}^{k}y_{j}\ell _{j}(x_{m})=\sum _{j=0}^{k}y_{j}\delta _{mj}=y_{m}.$$

The interpolating polynomial is unique. Proof: assume some polynomial $M(x)$ of degree $\leq k$ interpolates the data. Then the difference $M(x)-L(x)$ is zero at the $k+1$ distinct nodes $\{x_{0},x_{1},\ldots ,x_{k}\}$. But the only polynomial of degree $\leq k$ with more than $k$ roots is the constant zero function, so $M(x)-L(x)=0$, i.e. $M(x)=L(x)$.

## Barycentric form

Each Lagrange basis polynomial ${\textstyle \ell _{j}(x)}$ can be rewritten as the product of three parts, a function ${\textstyle \ell (x)=\prod _{m}(x-x_{m})}$ common to every basis polynomial, a node-specific constant ${\textstyle w_{j}=\prod _{m\neq j}(x_{j}-x_{m})^{-1}}$ (called the barycentric weight), and a part representing the displacement from ${\textstyle x_{j}}$ to ${\textstyle x}$ :

$$\ell _{j}(x)=\ell (x){\frac {w_{j}}{x-x_{j}}}.$$

By factoring $\ell (x)$ out from the sum, we can write the Lagrange polynomial in the so-called first barycentric form:

$$L(x)=\ell (x)\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}y_{j}.$$

If the weights $w_{j}$ have been pre-computed, evaluation requires only ${\mathcal {O}}(k)$ operations, compared to ${\mathcal {O}}(k^{2})$ for evaluating each Lagrange basis polynomial $\ell _{j}(x)$ individually.

The barycentric interpolation formula can also easily be updated to incorporate a new node $x_{k+1}$ by dividing each of the $w_{j}$ , $j=0\dots k$ by $(x_{j}-x_{k+1})$ and constructing the new $w_{k+1}$ as above.
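The weight precomputation, the first-form evaluation, and the incremental node update described above can be sketched in Python (the function names here are ours, not standard):

```python
def barycentric_weights(xs):
    """w_j = 1 / prod_{m != j} (x_j - x_m); O(k^2) work, done once."""
    ws = []
    for j, xj in enumerate(xs):
        p = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                p *= xj - xm
        ws.append(1.0 / p)
    return ws

def first_form(xs, ws, ys, x):
    """Evaluate L(x) = l(x) * sum_j (w_j / (x - x_j)) * y_j; O(k) per point."""
    ell = 1.0
    for xm in xs:
        ell *= x - xm
    return ell * sum(w / (x - xj) * y for xj, w, y in zip(xs, ws, ys))

def add_node(xs, ws, x_new):
    """Incorporate a new node: divide each w_j by (x_j - x_new), append w_{k+1}."""
    ws = [w / (xj - x_new) for xj, w in zip(xs, ws)]
    p = 1.0
    for xj in xs:
        p *= x_new - xj
    return xs + [x_new], ws + [1.0 / p]
```

For instance, with nodes $(1, 2, 3)$ and values $(1, 4, 9)$ (samples of $x^{2}$), `first_form` returns $6.25$ at $x=2.5$, and updating the weights with `add_node` agrees with recomputing them from scratch.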

For any $x$, $\sum _{j=0}^{k}\ell _{j}(x)=1$, because the constant function $g(x)=1$ is the unique polynomial of degree $\leq k$ interpolating the data $\{(x_{0},1),(x_{1},1),\ldots ,(x_{k},1)\}$. We can thus further simplify the barycentric formula by dividing $L(x)$ by $g(x)$:

$$\begin{aligned}L(x)&=\ell (x)\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}y_{j}{\Bigg /}\ell (x)\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}\\[10mu]&=\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}y_{j}{\Bigg /}\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}.\end{aligned}$$

This is called the second form, or true form, of the barycentric interpolation formula.

This second form has advantages in computation cost and accuracy. It avoids evaluating $\ell (x)$. The work to compute each term $w_{j}/(x-x_{j})$ in the denominator has already been done in computing ${\bigl (}w_{j}/(x-x_{j}){\bigr )}y_{j}$ for the numerator, so summing the denominator costs only $k$ further additions. And for evaluation points $x$ close to one of the nodes $x_{j}$, catastrophic cancellation would ordinarily be a problem for the value $x-x_{j}$; however, this quantity appears in both numerator and denominator, and the two occurrences cancel, leaving good relative accuracy in the final result.
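A minimal Python sketch of the second form, reusing each term $w_{j}/(x-x_{j})$ for both sums, and with the customary safeguard of returning $y_{j}$ exactly when $x$ coincides with a node (the names are ours):

```python
import math

def true_form(xs, ys):
    """Return an evaluator for L using the second (true) barycentric form."""
    n = len(xs)
    # Precompute barycentric weights w_j = 1 / prod_{m != j} (x_j - x_m).
    ws = [1.0 / math.prod(xs[j] - xs[m] for m in range(n) if m != j)
          for j in range(n)]

    def L(x):
        num = den = 0.0
        for xj, wj, yj in zip(xs, ws, ys):
            if x == xj:        # the formula would give inf/inf here
                return yj
            t = wj / (x - xj)  # each term serves numerator and denominator
            num += t * yj
            den += t
        return num / den

    return L
```

With nodes $(1, 2, 3)$ and values $(1, 4, 9)$, the evaluator reproduces $x^{2}$ between the nodes and returns the stored value exactly at a node.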

Using this formula to evaluate $L(x)$ at one of the nodes $x_{j}$ results in the indeterminate form $\infty \,y_{j}/\infty$; computer implementations must replace such results by $L(x_{j})=y_{j}$.

## A perspective from linear algebra

Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using the standard monomial basis for our interpolation polynomial $L(x)=\sum _{j=0}^{k}x^{j}m_{j}$, we must invert the Vandermonde matrix $(x_{i})^{j}$ to solve $L(x_{i})=y_{i}$ for the coefficients $m_{j}$ of $L(x)$. By choosing a better basis, the Lagrange basis $L(x)=\sum _{j=0}^{k}\ell _{j}(x)y_{j}$, we merely get the identity matrix $\delta _{ij}$, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.
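To make the contrast concrete, here is a small NumPy illustration (the data and variable names are ours): in the monomial basis the coefficients require solving a Vandermonde system, while in the Lagrange basis the matrix $\ell _{j}(x_{i})=\delta _{ij}$ is the identity, so the coefficients are simply the values $y_{j}$.

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0])
ys = np.array([1.0, 4.0, 9.0])      # samples of f(x) = x^2

# Monomial basis: solve the Vandermonde system V m = y for coefficients m_j.
V = np.vander(xs, increasing=True)  # V[i, j] = xs[i] ** j
m = np.linalg.solve(V, ys)          # coefficients of 1, x, x^2

# Lagrange basis: the matrix l_j(x_i) = delta_ij is the identity, so no
# system needs to be solved; the coefficients are the values ys themselves.
```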

This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by the linear polynomials $x-x_{m}$.

Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.

## Examples

### Example 1

We wish to interpolate $f(x)=x^{2}$ over the domain $1\leq x\leq 3$, given these three points:

$$\begin{aligned}x_{0}&=1&&&f(x_{0})&=1\\x_{1}&=2&&&f(x_{1})&=4\\x_{2}&=3&&&f(x_{2})&=9.\end{aligned}$$

The interpolating polynomial is:

$$\begin{aligned}L(x)&={1}\cdot {x-2 \over 1-2}\cdot {x-3 \over 1-3}+{4}\cdot {x-1 \over 2-1}\cdot {x-3 \over 2-3}+{9}\cdot {x-1 \over 3-1}\cdot {x-2 \over 3-2}\\[10pt]&=x^{2}.\end{aligned}$$

### Example 2

We wish to interpolate $f(x)=x^{3}$ over the domain $1\leq x\leq 4$, given these four points:

$$\begin{aligned}x_{0}&=1&&&f(x_{0})&=1\\x_{1}&=2&&&f(x_{1})&=8\\x_{2}&=3&&&f(x_{2})&=27\\x_{3}&=4&&&f(x_{3})&=64.\end{aligned}$$

The interpolating polynomial is:

$$\begin{aligned}L(x)&={1}\cdot {x-2 \over 1-2}\cdot {x-3 \over 1-3}\cdot {x-4 \over 1-4}+{8}\cdot {x-1 \over 2-1}\cdot {x-3 \over 2-3}\cdot {x-4 \over 2-4}+{27}\cdot {x-1 \over 3-1}\cdot {x-2 \over 3-2}\cdot {x-4 \over 3-4}+{64}\cdot {x-1 \over 4-1}\cdot {x-2 \over 4-2}\cdot {x-3 \over 4-3}\\[8pt]&=x^{3}.\end{aligned}$$
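Both examples can be checked numerically with a direct implementation of the definition (a sketch; the function name is ours):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate L(x) = sum_j y_j * l_j(x) directly from the definition."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        ell = 1.0                      # basis polynomial l_j evaluated at x
        for m, xm in enumerate(xs):
            if m != j:
                ell *= (x - xm) / (xj - xm)
        total += yj * ell
    return total

# Example 1: three samples of x^2 reproduce x^2 on the whole domain;
# Example 2: four samples of x^3 reproduce x^3.
for x in (1.0, 1.7, 2.5, 2.9):
    assert abs(lagrange_eval([1, 2, 3], [1, 4, 9], x) - x**2) < 1e-9
for x in (1.0, 2.2, 3.5, 4.0):
    assert abs(lagrange_eval([1, 2, 3, 4], [1, 8, 27, 64], x) - x**3) < 1e-9
```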