This image shows, for four points (−9, 5), (−4, 2), (−1, −2), (7, 9), the (cubic) interpolation polynomial $L(x)$ (dashed, black), which is the sum of the scaled basis polynomials $y_0\ell_0(x)$, $y_1\ell_1(x)$, $y_2\ell_2(x)$ and $y_3\ell_3(x)$. The interpolation polynomial passes through all four control points, and each scaled basis polynomial passes through its respective control point and is 0 where $x$ corresponds to the other three control points.
In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of points $(x_0, y_0), \dots, (x_k, y_k)$ with no two $x_j$ values equal, the Lagrange polynomial is the polynomial of lowest degree that assumes at each value $x_j$ the corresponding value $y_j$ (i.e. the functions coincide at each point).
The interpolating polynomial of least degree is unique, however, and since it can be arrived at through multiple methods, referring to "the Lagrange polynomial" is perhaps less correct than referring to "the Lagrange form" of that unique polynomial.
Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation. As changing the points requires recalculating the entire interpolant, it is often easier to use Newton polynomials instead.
Here we plot the Lagrange basis functions of 1st, 2nd, and 3rd order on a bi-unit domain. Linear combinations of Lagrange basis functions are used to construct Lagrange interpolating polynomials. Lagrange basis functions are commonly used in finite element analysis as the bases for the element shape functions. Furthermore, it is common to use a bi-unit domain as the natural space for the finite element's definition.
Given a set of $k + 1$ data points $(x_0, y_0), \dots, (x_j, y_j), \dots, (x_k, y_k)$
where no two $x_j$ are the same, the interpolation polynomial in the Lagrange form is a linear combination
$$L(x) = \sum_{j=0}^{k} y_j \ell_j(x)$$
of Lagrange basis polynomials
$$\ell_j(x) = \prod_{\substack{0 \le m \le k \\ m \ne j}} \frac{x - x_m}{x_j - x_m} = \frac{x - x_0}{x_j - x_0} \cdots \frac{x - x_{j-1}}{x_j - x_{j-1}} \cdot \frac{x - x_{j+1}}{x_j - x_{j+1}} \cdots \frac{x - x_k}{x_j - x_k},$$
where $0 \le j \le k$. Note how, given the initial assumption that no two $x_j$ are the same, $x_j - x_m \ne 0$, so this expression is always well-defined. The reason pairs $x_i = x_j$ with $y_i \ne y_j$ are not allowed is that no interpolation function $L$ such that $L(x_i) = y_i$ would exist; a function can only get one value for each argument $x_i$. On the other hand, if also $y_i = y_j$, then those two points would actually be one single point.
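The definition above can be sketched in plain Python. The function names `lagrange_basis` and `lagrange_interpolate` are illustrative, not taken from any library:

```python
def lagrange_basis(xs, j, x):
    """Evaluate the j-th Lagrange basis polynomial l_j at x."""
    result = 1.0
    for m, xm in enumerate(xs):
        if m != j:  # the product skips m = j
            result *= (x - xm) / (xs[j] - xm)
    return result

def lagrange_interpolate(xs, ys, x):
    """Evaluate L(x) = sum_j y_j * l_j(x)."""
    return sum(y * lagrange_basis(xs, j, x) for j, y in enumerate(ys))

# The four control points from the figure above:
xs = [-9, -4, -1, 7]
ys = [5, 2, -2, 9]
```

For example, `lagrange_interpolate(xs, ys, -4)` returns exactly `2.0`, since the interpolant passes through the control point (−4, 2).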
For all $j \ne i$, $\ell_j(x)$ includes the term $(x - x_i)$ in the numerator, so the whole product will be zero at $x = x_i$:
$$\ell_{j \ne i}(x_i) = \prod_{m \ne j} \frac{x_i - x_m}{x_j - x_m} = \frac{x_i - x_0}{x_j - x_0} \cdots \frac{x_i - x_i}{x_j - x_i} \cdots \frac{x_i - x_k}{x_j - x_k} = 0.$$
On the other hand,
$$\ell_i(x_i) = \prod_{m \ne i} \frac{x_i - x_m}{x_i - x_m} = 1.$$
In other words, all basis polynomials are zero at $x = x_i$, except $\ell_i(x)$, for which it holds that $\ell_i(x_i) = 1$, because it lacks the $(x - x_i)$ term.
It follows that $y_i \ell_i(x_i) = y_i$, so at each point $x_i$, $L(x_i) = y_i + 0 + 0 + \dots + 0 = y_i$, showing that $L$ interpolates the function exactly.
The function $L(x)$ being sought is a polynomial in $x$ of the least degree that interpolates the given data set; that is, it assumes the value $y_j$ at the corresponding $x_j$ for all data points $j$:
$$L(x_j) = y_j, \qquad j = 0, \dots, k.$$
In $\ell_j(x)$ there are $k$ factors in the product and each factor contains one $x$, so $L(x)$ (which is a sum of these degree-$k$ polynomials) must be a polynomial of degree at most $k$.
We consider what happens when this product is expanded. Because the product skips $m = j$, if $i = j$ then all factors are $\frac{x_j - x_m}{x_j - x_m} = 1$ (the case $x_j = x_m$ with $m \ne j$ cannot occur, since, as pointed out in the definition section, no two $x_j$ are the same).
Also, if $i \ne j$ then, since $m \ne j$ does not preclude $m = i$, one factor in the product will be $\frac{x_i - x_i}{x_j - x_i} = 0$, zeroing the entire product. So
$$\ell_j(x_i) = \delta_{ji} = \begin{cases} 1, & \text{if } j = i, \\ 0, & \text{if } j \ne i, \end{cases}$$
where $\delta_{ji}$ is the Kronecker delta.
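The Kronecker-delta property of the basis polynomials can be checked numerically. This is a minimal sketch using the node values from the figure; `lagrange_basis` is an illustrative name, not a library function:

```python
def lagrange_basis(xs, j, x):
    """Evaluate the j-th Lagrange basis polynomial l_j at x."""
    result = 1.0
    for m, xm in enumerate(xs):
        if m != j:
            result *= (x - xm) / (xs[j] - xm)
    return result

xs = [-9, -4, -1, 7]
# l_j(x_i) equals 1 when i == j and 0 otherwise -- exactly, even in
# floating point, because each factor is either 0 or a ratio of equal values.
for j in range(len(xs)):
    for i, xi in enumerate(xs):
        expected = 1.0 if i == j else 0.0
        assert lagrange_basis(xs, j, xi) == expected
```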
Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial $L(x) = \sum_{j=0}^{k} x^j m_j$, we must invert the Vandermonde matrix $(x_i)^j$ to solve $L(x_i) = y_i$ for the coefficients $m_j$ of $L(x)$. By choosing a better basis, the Lagrange basis, $L(x) = \sum_{j=0}^{k} \ell_j(x) y_j$, we merely get the identity matrix, $\delta_{ij}$, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.
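The contrast between the two bases can be sketched with NumPy (assumed available here). In the monomial basis we solve a Vandermonde system; in the Lagrange basis the matrix $\ell_j(x_i)$ is the identity, so the data values are themselves the coefficients:

```python
import numpy as np

xs = np.array([-9.0, -4.0, -1.0, 7.0])
ys = np.array([5.0, 2.0, -2.0, 9.0])

# Monomial basis: V[i, j] = x_i**j; solve V m = y for the coefficients m.
V = np.vander(xs, increasing=True)
m = np.linalg.solve(V, ys)

# Lagrange basis: the matrix l_j(x_i) is the identity matrix.
def lagrange_basis(xs, j, x):
    return np.prod([(x - xm) / (xs[j] - xm)
                    for idx, xm in enumerate(xs) if idx != j])

A = np.array([[lagrange_basis(xs, j, xi) for j in range(len(xs))] for xi in xs])
assert np.allclose(A, np.eye(len(xs)))

# Both forms evaluate to the same (unique) interpolating polynomial.
x = 2.5
monomial_value = sum(c * x**p for p, c in enumerate(m))
lagrange_value = sum(y * lagrange_basis(xs, j, x) for j, y in enumerate(ys))
assert np.isclose(monomial_value, lagrange_value)
```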
This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linear polynomials.
Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.
Example of interpolation divergence for a set of Lagrange polynomials.
The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant.
But, as can be seen from the construction, each time a node $x_k$ changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see below) or Newton polynomials.
Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes.
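Runge's phenomenon can be illustrated with the classic example $f(x) = 1/(1 + 25x^2)$ on $[-1, 1]$. This sketch (NumPy assumed available, degree 16 chosen arbitrarily) compares the maximum error of interpolation at equally spaced nodes against Chebyshev nodes:

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)   # Runge's example function

def interpolate(xs, ys, x):
    """Evaluate the Lagrange-form interpolant through (xs, ys) at x."""
    total = 0.0
    for j, yj in enumerate(ys):
        basis = np.prod([(x - xm) / (xs[j] - xm)
                         for m, xm in enumerate(xs) if m != j])
        total += yj * basis
    return total

n = 16  # polynomial degree
equi = np.linspace(-1.0, 1.0, n + 1)
cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))  # Chebyshev nodes

sample = np.linspace(-1.0, 1.0, 501)
err_equi = max(abs(interpolate(equi, f(equi), x) - f(x)) for x in sample)
err_cheb = max(abs(interpolate(cheb, f(cheb), x) - f(x)) for x in sample)
# The equispaced interpolant oscillates wildly near the interval ends,
# while the Chebyshev-node interpolant stays close to f everywhere.
assert err_cheb < err_equi
```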
Clearly, the remainder $R(x) = f(x) - L(x)$ is zero at the nodes. Suppose we want to find $R(x)$ at a point $x_p$. Define a new function $F(x) = R(x) - \tilde{R}(x) = f(x) - L(x) - \tilde{R}(x)$ and choose $\tilde{R}(x) = C \cdot \prod_{i=0}^{k} (x - x_i)$ (this product vanishes at every node, which ensures $F = 0$ at the nodes), where $C$ is the constant we are required to determine for a given $x_p$. Now $F(x)$ has $k + 2$ zeroes (at all nodes and at $x_p$) between $x_0$ and $x_k$ (including endpoints). Let us assume that $f(x)$ is $(k + 1)$-times differentiable; since $L(x)$ and $\tilde{R}(x)$ are polynomials, and hence infinitely differentiable, by Rolle's theorem $F^{(1)}(x)$ has $k + 1$ zeroes, $F^{(2)}(x)$ has $k$ zeroes, ..., and $F^{(k+1)}(x)$ has 1 zero, say $\xi$, with $x_0 < \xi < x_k$. Explicitly writing $F^{(k+1)}(\xi)$: