# Lagrange polynomial

*Figure: for four points ((−9, 5), (−4, 2), (−1, −2), (7, 9)), the (cubic) interpolation polynomial L(x) (dashed, black) is the sum of the scaled basis polynomials $y_0\ell_0(x)$, $y_1\ell_1(x)$, $y_2\ell_2(x)$ and $y_3\ell_3(x)$. The interpolation polynomial passes through all four control points, and each scaled basis polynomial passes through its respective control point and is 0 where x corresponds to the other three control points.*

In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of points $(x_{j},y_{j})$ with no two $x_{j}$ values equal, the Lagrange polynomial is the polynomial of lowest degree that assumes at each value $x_{j}$ the corresponding value $y_{j}$ , so that the functions coincide at each point.

Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler.

Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration and Shamir's secret sharing scheme in cryptography.

Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation. As changing the points $x_{j}$ requires recalculating the entire interpolant, it is often easier to use Newton polynomials instead.

## Definition

*Figure: the Lagrange basis functions of 1st, 2nd, and 3rd order on a bi-unit domain. Linear combinations of Lagrange basis functions are used to construct Lagrange interpolating polynomials. Lagrange basis functions are commonly used in finite element analysis as the bases for the element shape functions, and it is common to use a bi-unit domain as the natural space for the finite element's definition.*

Given a set of k + 1 data points

$(x_{0},y_{0}),\ldots ,(x_{j},y_{j}),\ldots ,(x_{k},y_{k})$

where no two $x_{j}$ are the same, the interpolation polynomial in the Lagrange form is a linear combination

$L(x):=\sum _{j=0}^{k}y_{j}\ell _{j}(x)$

of Lagrange basis polynomials

$\ell _{j}(x):=\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}}={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{k})}{(x_{j}-x_{k})}},$

where $0\leq j\leq k$. Note that, by the initial assumption that no two $x_{j}$ are the same, $x_{j}-x_{m}\neq 0$ whenever $m\neq j$, so this expression is always well-defined. The reason pairs $x_{i}=x_{j}$ with $y_{i}\neq y_{j}$ are not allowed is that no interpolation function $L$ with $y_{i}=L(x_{i})$ would exist; a function can take only one value for each argument $x_{i}$. On the other hand, if also $y_{i}=y_{j}$, then those two points would actually be one single point.
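As an illustrative sketch (not part of the article), the product formula above can be evaluated directly; the function name and the choice of nodes (taken from the figure) are assumptions for the example:

```python
def lagrange_basis(xs, j, x):
    """Evaluate the Lagrange basis polynomial ell_j(x) for distinct nodes xs.

    Implements the product formula: prod over m != j of (x - x_m)/(x_j - x_m).
    """
    result = 1.0
    for m, xm in enumerate(xs):
        if m != j:
            result *= (x - xm) / (xs[j] - xm)
    return result

xs = [-9, -4, -1, 7]               # the four nodes from the figure
print(lagrange_basis(xs, 0, -9))   # 1.0  (ell_0 at its own node)
print(lagrange_basis(xs, 0, -4))   # 0.0  (ell_0 at another node)
```

Note that at the node $x_j$ itself every factor is exactly 1, and at any other node one factor vanishes, matching the two identities derived below.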

For all $i\neq j$ , $\ell _{j}(x)$ includes the term $(x-x_{i})$ in the numerator, so the whole product will be zero at $x=x_{i}$ :

$\forall ({j\neq i}):\ell _{j}(x_{i})=\prod _{m\neq j}{\frac {x_{i}-x_{m}}{x_{j}-x_{m}}}={\frac {(x_{i}-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x_{i}-x_{i})}{(x_{j}-x_{i})}}\cdots {\frac {(x_{i}-x_{k})}{(x_{j}-x_{k})}}=0.$

On the other hand,

$\ell _{j}(x_{j}):=\prod _{m\neq j}{\frac {x_{j}-x_{m}}{x_{j}-x_{m}}}=1.$

In other words, all basis polynomials are zero at $x=x_{j}$, except $\ell _{j}(x)$, for which $\ell _{j}(x_{j})=1$, because it lacks the $(x-x_{j})$ factor.

It follows that $y_{j}\ell _{j}(x_{j})=y_{j}$ , so at each point $x_{j}$ , $L(x_{j})=y_{j}+0+0+\dots +0=y_{j}$ , showing that $L$ interpolates the function exactly.

## Proof

The function $L(x)$ being sought is a polynomial in $x$ of the least degree that interpolates the given data set; that is, it assumes the value $y_{j}$ at the corresponding $x_{j}$ for all data points $j$:

$L(x_{j})=y_{j}\qquad j=0,\ldots ,k.$

Observe that:

• Each $\ell _{j}(x)$ is a product of $k$ linear factors in $x$, hence a polynomial of degree $k$, so $L(x)$ (a linear combination of these degree-$k$ polynomials) must be a polynomial of degree at most $k$.
• $\ell _{j}(x_{i})=\prod _{\begin{smallmatrix}m=0\\m\neq j\end{smallmatrix}}^{k}{\frac {x_{i}-x_{m}}{x_{j}-x_{m}}}.$ Expand this product. Since the product omits the factor where $m=j$, if $i=j$ then every factor that appears is ${\frac {x_{j}-x_{m}}{x_{j}-x_{m}}}=1$. If $i\neq j$, then the factor for $m=i$ is ${\frac {x_{i}-x_{i}}{x_{j}-x_{i}}}=0$, zeroing the entire product. So,

$\ell _{j}(x_{i})=\delta _{ji}={\begin{cases}1,&{\text{if }}j=i\\0,&{\text{if }}j\neq i,\end{cases}}$

where $\delta _{ji}$ is the Kronecker delta. So:

$L(x_{i})=\sum _{j=0}^{k}y_{j}\ell _{j}(x_{i})=\sum _{j=0}^{k}y_{j}\delta _{ji}=y_{i}.$

Thus the function $L(x)$ is a polynomial of degree at most $k$ with $L(x_{i})=y_{i}$ for every $i$.
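The interpolation property just proved can be checked numerically. A minimal sketch (the function name and the data, taken from the figure above, are illustrative assumptions):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate L(x) = sum_j y_j * ell_j(x) for distinct nodes xs."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = yj
        for m, xm in enumerate(xs):
            if m != j:
                term *= (x - xm) / (xj - xm)
        total += term
    return total

xs = [-9, -4, -1, 7]
ys = [5, 2, -2, 9]
# L reproduces each data point exactly, since ell_j(x_i) = delta_ji:
print([lagrange_interpolate(xs, ys, xi) for xi in xs])  # [5.0, 2.0, -2.0, 9.0]
```

At each node, all but one term of the sum vanishes and the surviving term equals $y_i$, mirroring the derivation above.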

Additionally, the interpolating polynomial is unique, as shown by the unisolvence theorem at the polynomial interpolation article.

It is also true that

$\sum _{j=0}^{k}\ell _{j}(x)=1\qquad \forall x,$

since the left-hand side is a polynomial of degree at most $k$ that passes through all $k+1$ data points

$(x_{0},1),\ldots ,(x_{j},1),\ldots ,(x_{k},1).$

By the uniqueness of the interpolating polynomial, it must therefore equal the constant polynomial 1, the only polynomial of degree at most $k$ through these $k+1$ horizontally aligned points.
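This partition-of-unity property is easy to verify numerically. A sketch (the nodes and the evaluation point are arbitrary choices for illustration):

```python
def basis_sum(xs, x):
    """Sum of all Lagrange basis polynomials ell_j(x) over distinct nodes xs."""
    total = 0.0
    for j, xj in enumerate(xs):
        p = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                p *= (x - xm) / (xj - xm)
        total += p
    return total

# The sum is 1 at any x, not just at the nodes (up to rounding error):
print(basis_sum([1.0, 2.0, 3.0, 4.0], 2.7))  # ≈ 1.0
```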

## A perspective from linear algebra

Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using the standard monomial basis for our interpolation polynomial ${\textstyle L(x)=\sum _{j=0}^{k}m_{j}x^{j}}$, we must invert the Vandermonde matrix $(x_{i}^{j})$ to solve $L(x_{i})=y_{i}$ for the coefficients $m_{j}$ of $L(x)$. By choosing a better basis, the Lagrange basis ${\textstyle L(x)=\sum _{j=0}^{k}y_{j}\ell _{j}(x)}$, we merely get the identity matrix $\delta _{ij}$, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.
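The monomial-basis route can be sketched with NumPy; the sample data (three points of $f(x)=x^{2}$, as in Example 1 below) is an assumption for illustration:

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0])
ys = np.array([1.0, 4.0, 9.0])           # samples of f(x) = x^2

# Monomial basis: solve the Vandermonde system V m = y for coefficients m_j.
V = np.vander(xs, increasing=True)       # rows are (1, x_i, x_i^2)
m = np.linalg.solve(V, ys)
print(m)                                 # ≈ [0, 0, 1], i.e. L(x) = x^2
```

In the Lagrange basis, by contrast, the system matrix is $\ell_j(x_i)=\delta_{ij}$, so no solve is needed: the "coefficients" are the data values $y_j$ themselves.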

This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linear polynomials.

Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.

## Examples

### Example 1

We wish to interpolate $f(x)=x^{2}$ over the range $1\leq x\leq 3$, given these three points:

${\begin{aligned}x_{0}&=1&&&f(x_{0})&=1\\x_{1}&=2&&&f(x_{1})&=4\\x_{2}&=3&&&f(x_{2})&=9.\end{aligned}}$

The interpolating polynomial is:

${\begin{aligned}L(x)&={1}\cdot {x-2 \over 1-2}\cdot {x-3 \over 1-3}+{4}\cdot {x-1 \over 2-1}\cdot {x-3 \over 2-3}+{9}\cdot {x-1 \over 3-1}\cdot {x-2 \over 3-2}\\[10pt]&=x^{2}.\end{aligned}}$

### Example 2

We wish to interpolate $f(x)=x^{3}$ over the range $1\leq x\leq 4$, given these four points:

${\begin{aligned}x_{0}&=1&&&f(x_{0})&=1\\x_{1}&=2&&&f(x_{1})&=8\\x_{2}&=3&&&f(x_{2})&=27\\x_{3}&=4&&&f(x_{3})&=64.\end{aligned}}$

The interpolating polynomial is:

${\begin{aligned}L(x)&={1}\cdot {x-2 \over 1-2}\cdot {x-3 \over 1-3}\cdot {x-4 \over 1-4}+{8}\cdot {x-1 \over 2-1}\cdot {x-3 \over 2-3}\cdot {x-4 \over 2-4}+{27}\cdot {x-1 \over 3-1}\cdot {x-2 \over 3-2}\cdot {x-4 \over 3-4}+{64}\cdot {x-1 \over 4-1}\cdot {x-2 \over 4-2}\cdot {x-3 \over 4-3}\\[8pt]&=x^{3}.\end{aligned}}$
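Example 2 can be checked numerically: the cubic through (1, 1), (2, 8), (3, 27), (4, 64) should reproduce $f(x)=x^{3}$ between the nodes as well. A sketch (the evaluation point 2.5 is an arbitrary choice):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange-form interpolant through points (xs[j], ys[j])."""
    total = 0.0
    for j, xj in enumerate(xs):
        term = ys[j]
        for m, xm in enumerate(xs):
            if m != j:
                term *= (x - xm) / (xj - xm)
        total += term
    return total

xs, ys = [1, 2, 3, 4], [1, 8, 27, 64]
print(lagrange(xs, ys, 2.5))   # 15.625 = 2.5**3
```

Since four points determine a unique cubic, the interpolant agrees with $x^{3}$ everywhere, not only at the nodes.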