Divided differences: Difference between revisions

From Wikipedia, the free encyclopedia
→‎Alternative characterizations: removed some trivial and unhelpful consequences
partial fractions formula not helpful; more information & ref on Peano form


With the help of the [[polynomial function]] <math>\omega(\xi) = (\xi-x_0) \cdots (\xi-x_n)</math> this can be written as


:<math>
f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega'(x_j)}.
</math>


===Peano form===
If <math>x_0<x_1<\cdots<x_n</math> and <math>n\geq 1</math>, the divided differences can be expressed as<ref>{{Cite book |last=Skof |first=Fulvia |url=https://books.google.com/books?id=ulUM2GagwacC&newbks=0&printsec=frontcover&pg=PA41&dq=This+is+called+the+Peano+form+of+the+divided+differences&hl=en |title=Giuseppe Peano between Mathematics and Logic: Proceeding of the International Conference in honour of Giuseppe Peano on the 150th anniversary of his birth and the centennial of the Formulario Mathematico Torino (Italy) October 2-3, 2008 |date=2011-04-30 |publisher=Springer Science & Business Media |isbn=978-88-470-1836-5 |pages=40 |language=en}}</ref>

:<math>f[x_0,\ldots,x_n] = \frac{1}{(n-1)!} \int_{x_0}^{x_n} f^{(n)}(t)\,B_{n-1}(t) \, dt</math>

where <math>f^{(n)}</math> is the <math>n</math>-th [[derivative]] of the function <math>f</math> and <math>B_{n-1}</math> is a certain [[B-spline]] of degree <math>n-1</math> for the data points <math>x_0,\dots,x_n</math>, given by the formula

:<math>B_{n-1}(t)=\sum_{k=0}^n\frac{(\max(0,x_k-t))^{n-1}}{\omega'(x_k)}</math>

This is a consequence of the [[Peano kernel theorem]]; it is called the ''Peano form'' of the divided differences and <math>B_{n-1}</math> is the ''Peano kernel'' for the divided differences, both named after [[Giuseppe Peano]].


===Taylor form===

==Forward differences==
When the data points are equidistantly distributed we get the special case called '''forward differences'''. They are easier to calculate than the more general divided differences.


Given ''n+1'' data points

:<math>(x_0, y_0),\ldots,(x_n, y_n)</math>

with

:<math>x_{k} = x_0 + k h,\ \text{ for } \ k=0,\ldots,n \text{ and fixed } h>0</math>

the forward differences are defined as

:<math>\Delta^{(0)} y_k := y_k,\qquad k=0,\ldots,n</math>
:<math>\Delta^{(j)}y_k := \Delta^{(j-1)}y_{k+1} - \Delta^{(j-1)}y_k,\qquad k=0,\ldots,n-j,\ j=1,\dots,n.</math>

The relationship between divided differences and forward differences is<ref>{{cite book|last1=Burden|first1=Richard L.|last2=Faires|first2=J. Douglas|title=Numerical Analysis|url=https://archive.org/details/numericalanalysi00rlbu|url-access=limited|date=2011|page=[https://archive.org/details/numericalanalysi00rlbu/page/n146 129]|isbn=9780538733519|edition=9th}}</ref>

:<math>f[x_0, x_1, \ldots , x_k] = \frac{1}{k!h^k}\Delta^{(k)}f(x_0).</math>


== See also ==

Revision as of 22:23, 1 May 2022

In mathematics, divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions.[citation needed] Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.[1]

Divided differences is a recursive division process. Given a sequence of data points (x_0, y_0), …, (x_n, y_n), the method calculates the coefficients of the interpolation polynomial of these points in the Newton form.

Definition

Given n + 1 data points

    (x_0, y_0), …, (x_n, y_n),

where the x_j are assumed to be pairwise distinct, the forward divided differences are defined as:

    f[x_j] := y_j,    j = 0, …, n,
    f[x_j, …, x_{j+k}] := (f[x_{j+1}, …, x_{j+k}] − f[x_j, …, x_{j+k−1}]) / (x_{j+k} − x_j),    k = 1, …, n,  j = 0, …, n − k.

To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of j above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by a difference of corresponding x-values:
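The tabular scheme described above can be sketched in Python (the function name and example nodes are illustrative, not from the article): column j of the table holds the divided differences of order j, and each entry is formed from two neighbouring entries of the previous column.

```python
def divided_differences(xs, ys):
    """Return the divided difference table as a list of columns.

    table[j][i] holds f[x_i, ..., x_{i+j}]; column j is built from
    column j-1 exactly as in the tabular scheme described above.
    """
    n = len(xs)
    table = [list(ys)]  # column 0: f[x_i] = y_i
    for j in range(1, n):
        prev = table[j - 1]
        col = [(prev[i + 1] - prev[i]) / (xs[i + j] - xs[i])
               for i in range(n - j)]
        table.append(col)
    return table

xs = [0.0, 1.0, 3.0]
ys = [x**3 for x in xs]          # sample data from f(x) = x^3
table = divided_differences(xs, ys)
print(table[2][0])               # f[x_0, x_1, x_2] = 4.0
```

The top entry of the last column, table[n][0], is the highest-order divided difference f[x_0, …, x_n]; the first entries of all columns are exactly the Newton-form coefficients.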

Notation

Note that the divided difference [y_0, …, y_n] depends on the values y_0, …, y_n and x_0, …, x_n, but the notation hides the dependency on the x-values. If the data points are given by a function ƒ,

    y_k = ƒ(x_k),   k = 0, …, n,

one sometimes writes

    ƒ[x_0, …, x_n]

for the divided difference

    [ƒ(x_0), …, ƒ(x_n)].

Several other notations for the divided difference of the function ƒ on the nodes x_0, ..., x_n are also used, for instance:

    [x_0, …, x_n]ƒ,   [x_0, …, x_n; ƒ],   D[x_0, …, x_n]ƒ.

Example

Divided differences for the first few orders:

    [y_0] = y_0
    [y_0, y_1] = (y_1 − y_0) / (x_1 − x_0)
    [y_0, y_1, y_2] = ([y_1, y_2] − [y_0, y_1]) / (x_2 − x_0)

Properties

  • Divided differences are symmetric: If σ is a permutation of {0, …, n}, then

        f[x_0, …, x_n] = f[x_{σ(0)}, …, x_{σ(n)}].

  • If f is a polynomial function of degree n, then f[x_0, …, x_n] equals its leading coefficient; divided differences of order higher than the degree vanish.
  • Mean value theorem for divided differences: if f is n times differentiable, then

        f[x_0, …, x_n] = f^{(n)}(ξ) / n!

    for a number ξ in the open interval determined by the smallest and largest of the x_k's.
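The symmetry and polynomial properties can be checked numerically; a small sketch (helper name, nodes, and polynomial are my own choices, not from the article):

```python
import itertools

def divdiff(xs, ys):
    # recursive divided difference f[x_0, ..., x_n]
    if len(xs) == 1:
        return ys[0]
    return (divdiff(xs[1:], ys[1:]) - divdiff(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

xs = [0.0, 1.0, 3.0, 4.0]
f = lambda x: 2 * x**3 - x        # cubic with leading coefficient 2
ys = [f(x) for x in xs]

# Symmetry: every permutation of the nodes gives the same value,
# which here is also the leading coefficient of f (degree 3, 4 nodes).
vals = {round(divdiff(list(p), [f(x) for x in p]), 9)
        for p in itertools.permutations(xs)}
print(vals)                       # {2.0}
```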

Matrix form

The divided difference scheme can be put into an upper triangular matrix:

    T_f(x) =
        ⎛ f[x_0]   f[x_0, x_1]   f[x_0, x_1, x_2]   …   f[x_0, …, x_n] ⎞
        ⎜   0        f[x_1]        f[x_1, x_2]      …   f[x_1, …, x_n] ⎟
        ⎜   0          0             f[x_2]         …   f[x_2, …, x_n] ⎟
        ⎜   ⋮          ⋮               ⋮            ⋱         ⋮        ⎟
        ⎝   0          0               0            …       f[x_n]     ⎠.

Then it holds

  • T_{f+g}(x) = T_f(x) + T_g(x)
  • T_{λ·f}(x) = λ·T_f(x)  if λ is a scalar
  • T_{f·g}(x) = T_f(x)·T_g(x)

This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. Summarised, the matrices of divided difference schemes with respect to the same set of nodes x form a commutative ring.

  • Since T_f(x) is a triangular matrix, its eigenvalues are obviously f(x_0), …, f(x_n).
  • Let δ_ξ be a Kronecker delta-like function, that is

        δ_ξ(t) = 1 if t = ξ, and 0 otherwise.

    Obviously f·δ_ξ = f(ξ)·δ_ξ, thus δ_ξ is an eigenfunction of the pointwise function multiplication. That is, T_{δ_{x_i}}(x) is somehow an "eigenmatrix" of T_f(x): T_f(x)·T_{δ_{x_i}}(x) = f(x_i)·T_{δ_{x_i}}(x). However, all columns of T_{δ_{x_i}}(x) are multiples of each other; the matrix rank of T_{δ_{x_i}}(x) is 1. So the matrix of all eigenvectors of T_f(x) can be composed from the i-th column of each T_{δ_{x_i}}(x). Denote the matrix of eigenvectors by U. Then the diagonalization of T_f(x) can be written as

        T_f(x)·U = U·diag(f(x_0), …, f(x_n)).

Polynomials and power series

The matrix

    J =
        ⎛ x_0   1                      ⎞
        ⎜      x_1   1                 ⎟
        ⎜            ⋱     ⋱           ⎟
        ⎜               x_{n−1}   1    ⎟
        ⎝                        x_n   ⎠

contains the divided difference scheme for the identity function with respect to the nodes x_0, …, x_n: the diagonal holds id[x_i] = x_i, the superdiagonal holds id[x_i, x_{i+1}] = 1, and all higher divided differences of the identity vanish. Thus J^m contains the divided differences for the power function with exponent m. Consequently, you can obtain the divided differences for a polynomial function p by applying p to the matrix J: If

    p(ξ) = a_0 + a_1·ξ + ⋯ + a_m·ξ^m

and

    p(J) = a_0·I + a_1·J + ⋯ + a_m·J^m,

then

    T_p(x) = p(J).

This is known as Opitz' formula.[2][3]
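Opitz' formula can be checked numerically; a sketch for p(ξ) = ξ³ on the illustrative nodes 0, 1, 3 (my own example, not from the article):

```python
import numpy as np

# J: nodes on the diagonal, ones on the superdiagonal
# (divided differences of the identity are 1, higher ones 0)
xs = np.array([0.0, 1.0, 3.0])
J = np.diag(xs) + np.diag(np.ones(len(xs) - 1), k=1)

# Opitz: p(J) is the divided difference scheme of p, so its top-right
# entry is the highest-order divided difference p[x_0, ..., x_n].
P = np.linalg.matrix_power(J, 3)   # p(xi) = xi^3
print(P[0, -1])                    # 4.0 = p[0, 1, 3]
```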

Now consider increasing the degree of p to infinity, i.e. turn the Taylor polynomial into a Taylor series. Let f be a function which corresponds to a power series. You can compute the divided difference scheme for f by applying the corresponding matrix series to J: If

    f(ξ) = Σ_{k=0}^∞ a_k·ξ^k

and

    f(J) = Σ_{k=0}^∞ a_k·J^k,

then

    T_f(x) = f(J).

Alternative characterizations

Expanded form

    f[x_0, …, x_n] = Σ_{j=0}^{n} f(x_j) / Π_{k≠j} (x_j − x_k)

With the help of the polynomial function ω(ξ) = (ξ − x_0) ⋯ (ξ − x_n) this can be written as

    f[x_0, …, x_n] = Σ_{j=0}^{n} f(x_j) / ω′(x_j).
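The expanded form is easy to evaluate directly; a sketch with illustrative nodes and function (my own choices), which reproduces the value obtained by the recursive definition:

```python
import math

xs = [0.0, 1.0, 3.0]
f = lambda x: x**3

# expanded form: sum of f(x_j) / omega'(x_j), where
# omega'(x_j) = product over k != j of (x_j - x_k)
total = sum(f(xj) / math.prod(xj - xk for k, xk in enumerate(xs) if k != j)
            for j, xj in enumerate(xs))
print(total)                     # 4.0, the recursive value f[0, 1, 3]
```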

Peano form

If x_0 < x_1 < ⋯ < x_n and n ≥ 1, the divided differences can be expressed as[4]

    f[x_0, …, x_n] = (1/(n−1)!) ∫_{x_0}^{x_n} f^{(n)}(t)·B_{n−1}(t) dt,

where f^{(n)} is the n-th derivative of the function f and B_{n−1} is a certain B-spline of degree n − 1 for the data points x_0, …, x_n, given by the formula

    B_{n−1}(t) = Σ_{k=0}^{n} (max(0, x_k − t))^{n−1} / ω′(x_k).

This is a consequence of the Peano kernel theorem; it is called the Peano form of the divided differences and is the Peano kernel for the divided differences, all named after Giuseppe Peano.
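The Peano form can be verified numerically. This sketch (my own nodes and function, not from the article) takes f(t) = t³ on nodes 0, 1, 3, where the divided difference f[x_0, x_1, x_2] is 4, and integrates f″ against the kernel B_1 with a midpoint rule:

```python
import math

xs = [0.0, 1.0, 3.0]
n = len(xs) - 1                               # n = 2
d2f = lambda t: 6.0 * t                       # f(t) = t^3, so f''(t) = 6t

def omega_prime(j):
    return math.prod(xs[j] - xk for k, xk in enumerate(xs) if k != j)

def B(t):                                     # Peano kernel of degree n-1
    return sum(max(0.0, xk - t) ** (n - 1) / omega_prime(k)
               for k, xk in enumerate(xs))

# midpoint rule on a fine grid over [x_0, x_n]
steps = 30000
h = (xs[-1] - xs[0]) / steps
ts = (xs[0] + (i + 0.5) * h for i in range(steps))
integral = sum(d2f(t) * B(t) for t in ts) * h
print(integral / math.factorial(n - 1))       # ≈ 4.0 = f[0, 1, 3]
```

With this kernel the 1/(n−1)! normalization reproduces the divided difference exactly (the 1/1! factor for n = 2); a 1/n! factor would give half the correct value in this example.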

Taylor form

First order

If nodes lie close together, then the numerical computation of the divided differences is inaccurate, because you divide two numbers that are both almost zero, each of which carries a high relative error, being a difference of approximations to similar values. However, we know that difference quotients approximate the derivative and vice versa:

    (f(y) − f(x)) / (y − x) ≈ f′(x)   for   y ≈ x.

This approximation can be turned into an identity whenever Taylor's theorem applies.

You can eliminate the odd powers of y − x by expanding the Taylor series at the center m between x and y:

    x = m − h,  y = m + h,   that is

    (f(y) − f(x)) / (y − x) = f′(m) + (f‴(m)/3!)·h² + (f⁽⁵⁾(m)/5!)·h⁴ + ⋯
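The midpoint expansion can be observed numerically: since only even powers of h remain, halving h should cut the error of the first-order divided difference by roughly a factor of 4. A sketch for f = exp at an arbitrary point (my own example):

```python
import math

f, df = math.exp, math.exp        # for exp the derivative is exp itself
m = 1.0

# divided difference of the nodes m-h, m+h, compared with f'(m)
errs = []
for h in (1e-1, 5e-2):
    quotient = (f(m + h) - f(m - h)) / (2 * h)
    errs.append(abs(quotient - df(m)))
print(errs[0] / errs[1])          # close to 4: second-order accuracy
```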

Higher order

The Taylor series or any other representation by a function series can in principle be used to approximate divided differences. Taylor series are infinite sums of power functions. The mapping from a function f to a divided difference f[x_0, …, x_n] is a linear functional, so we can just as well apply this functional to the individual summands.

Express power notation with an ordinary function:

    p_n(x) = x^n.

The regular Taylor series is a weighted sum of power functions:

    f = f(0)·p_0 + f′(0)·p_1 + (f″(0)/2!)·p_2 + ⋯

The Taylor series for divided differences:

    f[x_0, …, x_n] = f(0)·p_0[x_0, …, x_n] + f′(0)·p_1[x_0, …, x_n] + (f″(0)/2!)·p_2[x_0, …, x_n] + ⋯

We know that the first n terms vanish, because the difference order is higher than the polynomial order, and in the following term the divided difference is one:

    p_j[x_0, …, x_n] = 0  for  j < n,   and   p_n[x_0, …, x_n] = 1.

It follows that the Taylor series for the divided difference essentially starts with f^{(n)}(0)/n!, which is also a simple approximation of the divided difference, according to the mean value theorem for divided differences.

If we had to compute the divided differences for the power functions in the usual way, we would encounter the same numerical problems that we had when computing the divided difference of f. The nice thing is that there is a simpler way: the divided differences of the power functions can be obtained by a division of formal power series, which reduces to the successive computation of powers when p_j[x_0, …, x_n] is needed for several exponents j.

If you need to compute a whole divided difference scheme with respect to a Taylor series, see the section on polynomials and power series above.

Forward differences

When the data points are equidistantly distributed we get the special case called forward differences. They are easier to calculate than the more general divided differences.

Given n+1 data points

    (x_0, y_0), …, (x_n, y_n)

with

    x_k = x_0 + k·h,  for k = 0, …, n and fixed h > 0,

the forward differences are defined as

    Δ^(0) y_k := y_k,   k = 0, …, n,
    Δ^(j) y_k := Δ^(j−1) y_{k+1} − Δ^(j−1) y_k,   k = 0, …, n − j,  j = 1, …, n.

The relationship between divided differences and forward differences is[5]

    f[x_0, x_1, …, x_k] = (1/(k!·h^k))·Δ^(k) f(x_0).
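The relationship can be checked on equidistant nodes; a sketch with my own choice of h, nodes, and test function (for f(x) = x³ the third divided difference is the leading coefficient, 1):

```python
import math

h = 0.5
xs = [1.0 + i * h for i in range(4)]
ys = [x**3 for x in xs]

def divdiff(xs, ys):
    # recursive divided difference f[x_0, ..., x_n]
    if len(xs) == 1:
        return ys[0]
    return (divdiff(xs[1:], ys[1:]) - divdiff(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

# forward differences by repeated differencing of the y-column
deltas = ys
for k in range(1, 4):
    deltas = [deltas[i + 1] - deltas[i] for i in range(len(deltas) - 1)]

# both should equal 1.0: f[x_0,...,x_3] = Δ^(3) y_0 / (3! h^3)
print(divdiff(xs, ys), deltas[0] / (math.factorial(3) * h**3))
```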

See also

References

  1. ^ Isaacson, Walter (2014). The Innovators. Simon & Schuster. p. 20. ISBN 978-1-4767-0869-0.
  2. ^ de Boor, Carl, Divided Differences, Surv. Approx. Theory 1 (2005), 46–69, [1]
  3. ^ Opitz, G. Steigungsmatrizen, Z. Angew. Math. Mech. (1964), 44, T52–T54
  4. ^ Skof, Fulvia (2011-04-30). Giuseppe Peano between Mathematics and Logic: Proceeding of the International Conference in honour of Giuseppe Peano on the 150th anniversary of his birth and the centennial of the Formulario Mathematico Torino (Italy) October 2-3, 2008. Springer Science & Business Media. p. 40. ISBN 978-88-470-1836-5.
  5. ^ Burden, Richard L.; Faires, J. Douglas (2011). Numerical Analysis (9th ed.). p. 129. ISBN 9780538733519.
  • Louis Melville Milne-Thomson (2000) [1933]. The Calculus of Finite Differences. American Mathematical Soc. Chapter 1: Divided Differences. ISBN 978-0-8218-2107-7.
  • Myron B. Allen; Eli L. Isaacson (1998). Numerical Analysis for Applied Science. John Wiley & Sons. Appendix A. ISBN 978-1-118-03027-1.
  • Ron Goldman (2002). Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling. Morgan Kaufmann. Chapter 4:Newton Interpolation and Difference Triangles. ISBN 978-0-08-051547-2.

External links