In mathematics, divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.[1]
Divided differences is a recursive division process. The method can be used to calculate the coefficients of the interpolation polynomial in Newton form.
Definition
Given $k + 1$ data points
$$(x_0, y_0), \ldots, (x_k, y_k)$$
The forward divided differences are defined as:
$$[y_\nu] := y_\nu, \qquad \nu \in \{0, \ldots, k\}$$
$$[y_\nu, \ldots, y_{\nu+j}] := \frac{[y_{\nu+1}, \ldots, y_{\nu+j}] - [y_\nu, \ldots, y_{\nu+j-1}]}{x_{\nu+j} - x_\nu}, \qquad \nu \in \{0, \ldots, k-j\},\ j \in \{1, \ldots, k\}.$$
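A direct transcription of this forward recursion into code can make the scheme concrete. The following is a minimal Python sketch; the function name and the sample data are our own illustration, not part of any standard library:

```python
def divided_differences(xs, ys):
    """Triangular scheme: dd[j][nu] = [y_nu, ..., y_(nu+j)].

    dd[0] holds the raw values [y_nu] = y_nu; each further row applies
    the forward recursion from the definition above.
    """
    n = len(xs)
    dd = [list(ys)]
    for j in range(1, n):
        dd.append([(dd[j - 1][nu + 1] - dd[j - 1][nu]) / (xs[nu + j] - xs[nu])
                   for nu in range(n - j)])
    return dd

# f(x) = x^2 sampled at 1, 2, 4:
table = divided_differences([1.0, 2.0, 4.0], [1.0, 4.0, 16.0])
print(table[1])  # [3.0, 6.0]  ->  [y_0, y_1] and [y_1, y_2]
print(table[2])  # [1.0]       ->  [y_0, y_1, y_2]
```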
The backward divided differences are defined as:
$$[y_\nu] := y_\nu, \qquad \nu \in \{0, \ldots, k\}$$
$$[y_\nu, \ldots, y_{\nu-j}] := \frac{[y_\nu, \ldots, y_{\nu-j+1}] - [y_{\nu-1}, \ldots, y_{\nu-j}]}{x_\nu - x_{\nu-j}}, \qquad \nu \in \{j, \ldots, k\},\ j \in \{1, \ldots, k\}.$$
Notation
If the data points are given as values of a function f,
$$(x_0, f(x_0)), \ldots, (x_k, f(x_k))$$
one sometimes writes
$$f[x_\nu] := f(x_\nu), \qquad \nu \in \{0, \ldots, k\}$$
$$f[x_\nu, \ldots, x_{\nu+j}] := \frac{f[x_{\nu+1}, \ldots, x_{\nu+j}] - f[x_\nu, \ldots, x_{\nu+j-1}]}{x_{\nu+j} - x_\nu}, \qquad \nu \in \{0, \ldots, k-j\},\ j \in \{1, \ldots, k\}.$$
Several notations for the divided difference of the function $f$ on the nodes $x_0, \ldots, x_n$ are used:
$$[x_0, \ldots, x_n]f, \qquad [x_0, \ldots, x_n; f], \qquad D[x_0, \ldots, x_n]f,$$
etc.
Example
Divided differences for $\nu = 0$ and the first few values of $j$:
$$\begin{aligned}
[y_0] &= y_0 \\
[y_0, y_1] &= \frac{y_1 - y_0}{x_1 - x_0} \\
[y_0, y_1, y_2] &= \frac{[y_1, y_2] - [y_0, y_1]}{x_2 - x_0} = \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_1 - y_0}{x_1 - x_0}}{x_2 - x_0} = \frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)} \\
[y_0, y_1, y_2, y_3] &= \frac{[y_1, y_2, y_3] - [y_0, y_1, y_2]}{x_3 - x_0}
\end{aligned}$$
To make the recursive process clearer, the divided differences can be arranged in tabular form:
$$\begin{matrix}
x_0 & y_0 = [y_0] & & & \\
 & & [y_0, y_1] & & \\
x_1 & y_1 = [y_1] & & [y_0, y_1, y_2] & \\
 & & [y_1, y_2] & & [y_0, y_1, y_2, y_3] \\
x_2 & y_2 = [y_2] & & [y_1, y_2, y_3] & \\
 & & [y_2, y_3] & & \\
x_3 & y_3 = [y_3] & & &
\end{matrix}$$
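Printed numerically, the same triangle appears row by row. A small self-contained sketch (the sample data is our own; the leftmost entry of each row is a coefficient of the Newton form of the interpolation polynomial):

```python
def dd_table(xs, ys):
    # dd[j][nu] = [y_nu, ..., y_(nu+j)], exactly as in the definition
    dd = [list(ys)]
    for j in range(1, len(xs)):
        dd.append([(dd[-1][i + 1] - dd[-1][i]) / (xs[i + j] - xs[i])
                   for i in range(len(xs) - j)])
    return dd

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 5.0, 10.0]  # f(x) = x^2 + 1
for j, row in enumerate(dd_table(xs, ys)):
    print(f"order {j}:", row)
# order 0: [1.0, 2.0, 5.0, 10.0]
# order 1: [1.0, 3.0, 5.0]
# order 2: [1.0, 1.0]
# order 3: [0.0]
```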
Properties
Divided differences are linear:
$$(f+g)[x_0,\dots,x_n] = f[x_0,\dots,x_n] + g[x_0,\dots,x_n]$$
$$(\lambda \cdot f)[x_0,\dots,x_n] = \lambda \cdot f[x_0,\dots,x_n]$$
They satisfy the Leibniz rule:
$$(f \cdot g)[x_0,\dots,x_n] = f[x_0] \cdot g[x_0,\dots,x_n] + f[x_0,x_1] \cdot g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n] \cdot g[x_n] = \sum_{r=0}^{n} f[x_0,\ldots,x_r] \cdot g[x_r,\ldots,x_n]$$
Divided differences are symmetric: if $\sigma : \{0,\dots,n\} \to \{0,\dots,n\}$ is a permutation, then
$$f[x_0,\dots,x_n] = f[x_{\sigma(0)},\dots,x_{\sigma(n)}]$$
By the mean value theorem for divided differences, if $f$ is $n$ times differentiable, then
$$f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!}$$
where $\xi$ is in the open interval determined by the smallest and largest of the $x_k$'s.
Matrix form
The divided difference scheme can be put into an upper triangular matrix. Let
$$T_f(x_0,\dots,x_n) = \begin{pmatrix} f[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \ldots & f[x_0,\dots,x_n] \\ 0 & f[x_1] & f[x_1,x_2] & \ldots & f[x_1,\dots,x_n] \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & f[x_n] \end{pmatrix}.$$
Then it holds that
$$T_{f+g}\,x = T_f\,x + T_g\,x$$
$$T_{f \cdot g}\,x = T_f\,x \cdot T_g\,x$$
This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. In summary, the matrices of divided difference schemes with respect to the same set of nodes form a commutative ring.
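This matrix identity can be checked numerically. A sketch using NumPy (the helper `T` and the sample nodes and functions are our own choices):

```python
import numpy as np

def T(f, xs):
    """Upper triangular matrix T_f(x_0,...,x_n): entry (i, j) = f[x_i, ..., x_j]."""
    n = len(xs)
    M = np.zeros((n, n))
    for i in range(n):
        M[i, i] = f(xs[i])
    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            M[i, j] = (M[i + 1, j] - M[i, j - 1]) / (xs[j] - xs[i])
    return M

xs = [0.0, 1.0, 3.0]
f = lambda x: x * x
g = lambda x: x + 2
# Leibniz rule in matrix form: T_{f.g} = T_f . T_g, and the product commutes
print(np.allclose(T(lambda x: f(x) * g(x), xs), T(f, xs) @ T(g, xs)))  # True
print(np.allclose(T(f, xs) @ T(g, xs), T(g, xs) @ T(f, xs)))          # True
```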
Since $T_f\,x$ is a triangular matrix, its eigenvalues are obviously $f(x_0), \dots, f(x_n)$.
Let $\delta_\xi$ be a Kronecker delta-like function, that is
$$\delta_\xi(t) = \begin{cases} 1 &: t = \xi, \\ 0 &: \text{else}. \end{cases}$$
Obviously $f \cdot \delta_\xi = f(\xi) \cdot \delta_\xi$, so $\delta_\xi$ is an eigenfunction of pointwise function multiplication. That is, $T_{\delta_{x_i}}\,x$ is in a sense an "eigenmatrix" of $T_f\,x$:
$$T_f\,x \cdot T_{\delta_{x_i}}\,x = f(x_i) \cdot T_{\delta_{x_i}}\,x.$$
However, all columns of $T_{\delta_{x_i}}\,x$ are multiples of each other, so the matrix rank of $T_{\delta_{x_i}}\,x$ is 1. One can therefore compose the matrix of all eigenvectors from the $i$-th column of each $T_{\delta_{x_i}}\,x$. Denote this matrix of eigenvectors by $U\,x$. Example:
$$U(x_0, x_1, x_2, x_3) = \begin{pmatrix}
1 & \frac{1}{x_1 - x_0} & \frac{1}{(x_2 - x_0)(x_2 - x_1)} & \frac{1}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} \\
0 & 1 & \frac{1}{x_2 - x_1} & \frac{1}{(x_3 - x_1)(x_3 - x_2)} \\
0 & 0 & 1 & \frac{1}{x_3 - x_2} \\
0 & 0 & 0 & 1
\end{pmatrix}$$
The diagonalization of $T_f\,x$ can be written as
$$U\,x \cdot \operatorname{diag}(f(x_0), \dots, f(x_n)) = T_f\,x \cdot U\,x.$$
Alternative definitions
Expanded form
$$\begin{aligned}
f[x_0] &= f(x_0) \\
f[x_0, x_1] &= \frac{f(x_0)}{x_0 - x_1} + \frac{f(x_1)}{x_1 - x_0} \\
f[x_0, x_1, x_2] &= \frac{f(x_0)}{(x_0 - x_1)(x_0 - x_2)} + \frac{f(x_1)}{(x_1 - x_0)(x_1 - x_2)} + \frac{f(x_2)}{(x_2 - x_0)(x_2 - x_1)} \\
f[x_0, x_1, x_2, x_3] &= \frac{f(x_0)}{(x_0 - x_1)(x_0 - x_2)(x_0 - x_3)} + \frac{f(x_1)}{(x_1 - x_0)(x_1 - x_2)(x_1 - x_3)} + \frac{f(x_2)}{(x_2 - x_0)(x_2 - x_1)(x_2 - x_3)} + \frac{f(x_3)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} \\
f[x_0, \dots, x_n] &= \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k \in \{0,\dots,n\} \setminus \{j\}} (x_j - x_k)}
\end{aligned}$$
With the help of the polynomial function $q(\xi) = (\xi - x_0) \cdots (\xi - x_n)$ this can be written as
$$f[x_0, \dots, x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{q'(x_j)}.$$
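The expanded form can also be evaluated directly in code when the nodes are pairwise distinct. A minimal sketch (the function name and sample values are ours):

```python
from math import prod

def dd_expanded(f, xs):
    """f[x_0, ..., x_n] via the expanded form (nodes must be pairwise distinct)."""
    return sum(
        f(xj) / prod(xj - xk for k, xk in enumerate(xs) if k != j)
        for j, xj in enumerate(xs)
    )

# For f(x) = x^3 and any four distinct nodes, f[x_0, x_1, x_2, x_3] equals the
# leading coefficient 1, by the mean value theorem for divided differences.
print(dd_expanded(lambda x: x**3, [0.0, 1.0, 2.0, 4.0]))  # 1.0
```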
Alternatively, we can allow counting backwards from the start of the sequence by defining $x_k = x_{k+n+1} = x_{k-(n+1)}$ whenever $k < 0$ or $n < k$. This definition allows $x_{-1}$ to be interpreted as $x_n$, $x_{-2}$ as $x_{n-1}$, $x_{-n}$ as $x_0$, etc. The expanded form of the divided difference thus becomes
$$f[x_0, \dots, x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\prod\limits_{k=j-n}^{j-1} (x_j - x_k)} = \sum_{j=0}^{n} \frac{f(x_j)}{\prod\limits_{k=j+1}^{j+n} (x_j - x_k)}$$
Yet another characterization utilizes limits:
$$f[x_0, \dots, x_n] = \sum_{j=0}^{n} \lim_{x \to x_j} \left[ \frac{f(x_j)(x - x_j)}{\prod\limits_{k=0}^{n} (x - x_k)} \right]$$
Partial fractions
Partial fractions can be represented using the expanded form of divided differences. (This does not simplify computation, but is interesting in itself.) If $p$ and $q$ are polynomial functions, where $\deg p < \deg q$ and $q$ is given in terms of linear factors by $q(\xi) = (\xi - x_1) \cdots (\xi - x_n)$, then it follows from partial fraction decomposition that
$$\frac{p(\xi)}{q(\xi)} = \left( t \to \frac{p(t)}{\xi - t} \right)[x_1, \dots, x_n].$$
If limits of divided differences are accepted, then this connection also holds when some of the $x_j$ coincide.
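For pairwise distinct roots, the identity is easy to check numerically with the expanded form. A sketch (names and sample values are our own):

```python
from math import prod

def dd(g, xs):
    # divided difference via the expanded form; nodes pairwise distinct
    return sum(g(xj) / prod(xj - xk for k, xk in enumerate(xs) if k != j)
               for j, xj in enumerate(xs))

xs = [1.0, 2.0, 5.0]                      # roots of q
p = lambda t: 3 * t + 1                   # deg p < deg q
q = lambda t: prod(t - x for x in xs)
xi = 0.25                                 # any point that is not a root of q
lhs = p(xi) / q(xi)
rhs = dd(lambda t: p(t) / (xi - t), xs)
print(abs(lhs - rhs) < 1e-12)             # True
```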
If $f$ is a polynomial function of arbitrary degree that is decomposed as $f(x) = p(x) + q(x) \cdot d(x)$ using polynomial division of $f$ by $q$, then
$$\frac{p(\xi)}{q(\xi)} = \left( t \to \frac{f(t)}{\xi - t} \right)[x_1, \dots, x_n].$$
Peano form
The divided differences can be expressed as
$$f[x_0, \ldots, x_n] = \frac{1}{n!} \int_{x_0}^{x_n} f^{(n)}(t)\, B_{n-1}(t) \, dt$$
where $B_{n-1}$ is a B-spline of degree $n-1$ for the data points $x_0, \dots, x_n$ and $f^{(n)}$ is the $n$-th derivative of the function $f$. This is called the Peano form of the divided differences, and $B_{n-1}$ is called the Peano kernel for the divided differences; both are named after Giuseppe Peano.
Taylor form
First order
If the nodes are clustered, then the numerical computation of the divided differences is inaccurate, because one divides two quantities that are both almost zero, each with a high relative error due to cancellation between similar values. However, we know that difference quotients approximate the derivative, and vice versa:
$$\frac{f(y) - f(x)}{y - x} \approx f'(x) \qquad \text{for } x \approx y$$
This approximation can be turned into an identity whenever Taylor's theorem applies.
$$f(y) = f(x) + f'(x)(y - x) + f''(x)\frac{(y - x)^2}{2!} + f'''(x)\frac{(y - x)^3}{3!} + \dots$$
$$\Rightarrow \frac{f(y) - f(x)}{y - x} = f'(x) + f''(x)\frac{y - x}{2!} + f'''(x)\frac{(y - x)^2}{3!} + \dots$$
One can eliminate the odd powers of $y - x$ by expanding the Taylor series at the midpoint between $x$ and $y$: set $x = m - h$ and $y = m + h$, that is, $m = \frac{x + y}{2}$ and $h = \frac{y - x}{2}$. Then
$$f(m + h) = f(m) + f'(m)\,h + f''(m)\frac{h^2}{2!} + f'''(m)\frac{h^3}{3!} + \dots$$
$$f(m - h) = f(m) - f'(m)\,h + f''(m)\frac{h^2}{2!} - f'''(m)\frac{h^3}{3!} + \dots$$
$$\frac{f(y) - f(x)}{y - x} = \frac{f(m + h) - f(m - h)}{2h} = f'(m) + f'''(m)\frac{h^2}{3!} + \dots$$
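A small numerical sketch of this effect: the centered quotient gains an order of accuracy over the one-sided one. The function and step sizes below are our own illustrative choices:

```python
import math

f, exact = math.exp, math.exp(1.0)  # f = exp, so f'(1) = e is known exactly
x = 1.0

for h in [1e-1, 1e-2, 1e-3]:
    one_sided = (f(x + h) - f(x)) / h            # error ~ f''(x) * h / 2
    centered = (f(x + h) - f(x - h)) / (2 * h)   # error ~ f'''(x) * h**2 / 6
    print(f"h={h:.0e}  one-sided err={abs(one_sided - exact):.2e}  "
          f"centered err={abs(centered - exact):.2e}")
```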
Higher order
The Taylor series, or any other representation by a function series, can in principle be used to approximate divided differences. A Taylor series is an infinite sum of power functions, and the mapping from a function $f$ to the divided difference $f[x_0, \dots, x_n]$ is a linear functional, so we can apply this functional to the summands individually.
Denote the power functions by
$$p_n(x) = x^n.$$
The regular Taylor series is a weighted sum of power functions:
$$f = f(0) \cdot p_0 + f'(0) \cdot p_1 + \frac{f''(0)}{2!} \cdot p_2 + \frac{f'''(0)}{3!} \cdot p_3 + \dots$$
The Taylor series for divided differences:
$$f[x_0, \dots, x_n] = f(0) \cdot p_0[x_0, \dots, x_n] + f'(0) \cdot p_1[x_0, \dots, x_n] + \frac{f''(0)}{2!} \cdot p_2[x_0, \dots, x_n] + \frac{f'''(0)}{3!} \cdot p_3[x_0, \dots, x_n] + \dots$$
We know that the first $n$ terms vanish, because the difference order is higher than the polynomial order, and in the following term the divided difference is one:
$$\begin{array}{llcl}
\forall j < n: & p_j[x_0, \dots, x_n] & = & 0 \\
& p_n[x_0, \dots, x_n] & = & 1 \\
& p_{n+1}[x_0, \dots, x_n] & = & x_0 + \dots + x_n \\
& p_{n+m}[x_0, \dots, x_n] & = & \sum\limits_{a \in \{0, \dots, n\}^m \text{ with } a_1 \leq a_2 \leq \dots \leq a_m} \; \prod\limits_{j \in a} x_j.
\end{array}$$
It follows that the Taylor series for the divided difference essentially starts with $\frac{f^{(n)}(0)}{n!}$, which is also a simple approximation of the divided difference, by the mean value theorem for divided differences.
If we had to compute the divided differences of the power functions in the usual way, we would encounter the same numerical problems as when computing the divided difference of $f$. Fortunately, there is a simpler way.
It holds that
$$t^n = (1 - x_0 t) \cdots (1 - x_n t) \cdot \left( p_0[x_0, \dots, x_n] + p_1[x_0, \dots, x_n]\, t + p_2[x_0, \dots, x_n]\, t^2 + \dots \right).$$
Consequently, we can compute the divided differences of $p_n$ by a division of formal power series. Note how this reduces to the successive computation of powers when we compute $p_n[h]$ for several $n$.
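A sketch of that formal power series division in Python (the helper name and sample nodes are ours): the identity above says $\sum_m p_{n+m}[x_0,\dots,x_n]\, t^m = 1 / \prod_i (1 - x_i t)$, which can be evaluated by dividing by one factor at a time.

```python
def dd_powers(xs, terms):
    """Coefficients c[m] = p_(n+m)[x_0, ..., x_n] for m = 0, ..., terms-1.

    From t^n = (1 - x_0 t)...(1 - x_n t) * (sum of c[m] t^(n+m)), the series
    sum of c[m] t^m equals 1 / prod(1 - x_i t), so we divide the constant
    series 1 by each factor (1 - x_i t) in turn.
    """
    c = [1.0] + [0.0] * (terms - 1)
    for x in xs:
        for m in range(1, terms):   # division by (1 - x t): c[m] += x * c[m-1]
            c[m] += x * c[m - 1]
    return c

# Nodes x_0, x_1 = 2, 3 (so n = 1): p_1[2,3] = 1, p_2[2,3] = 2 + 3 = 5,
# p_3[2,3] = 2^2 + 2*3 + 3^2 = 19, matching the symmetric sums above.
print(dd_powers([2.0, 3.0], 4))  # [1.0, 5.0, 19.0, 65.0]
```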
To compute a whole divided difference scheme with respect to a Taylor series, see the section on polynomials and power series below.
Polynomials and power series
Divided differences of polynomials are particularly interesting, because they can benefit from the Leibniz rule.
The matrix $J$ with
$$J = \begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 & & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 & & x_n
\end{pmatrix}$$
contains the divided difference scheme for the identity function with respect to the nodes $x_0, \dots, x_n$; thus $J^n$ contains the divided differences for the power function with exponent $n$.
Consequently, you can obtain the divided differences for a polynomial function $\varphi(p)$ with respect to the polynomial $p$ by applying $p$ (more precisely: its corresponding matrix polynomial function $\varphi_{\mathrm{M}}(p)$) to the matrix $J$:
$$\varphi(p)(\xi) = a_0 + a_1 \xi + \dots + a_n \xi^n$$
$$\varphi_{\mathrm{M}}(p)(J) = a_0 + a_1 J + \dots + a_n J^n = \begin{pmatrix}
\varphi(p)[x_0] & \varphi(p)[x_0,x_1] & \varphi(p)[x_0,x_1,x_2] & \ldots & \varphi(p)[x_0,\dots,x_n] \\
0 & \varphi(p)[x_1] & \varphi(p)[x_1,x_2] & \ldots & \varphi(p)[x_1,\dots,x_n] \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & 0 & \varphi(p)[x_n]
\end{pmatrix}$$
This is known as Opitz' formula.[2][3]
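A quick numerical sketch of Opitz' formula using NumPy; the nodes and the polynomial are our own illustrative choices:

```python
import numpy as np

xs = np.array([0.0, 1.0, 3.0])                 # nodes
n = len(xs)
J = np.diag(xs) + np.diag(np.ones(n - 1), 1)   # scheme for the identity function

# Apply p(x) = x^3 - 2x to J: the top-right entry should be p[x_0, ..., x_{n-1}]
pJ = np.linalg.matrix_power(J, 3) - 2.0 * J

def dd_leading(xs, ys):
    """Leading divided difference via the recursive scheme, for comparison."""
    for j in range(1, len(xs)):
        ys = [(ys[i + 1] - ys[i]) / (xs[i + j] - xs[i]) for i in range(len(ys) - 1)]
    return ys[0]

p = lambda x: x**3 - 2 * x
print(pJ[0, n - 1], dd_leading(xs, [p(x) for x in xs]))  # both 4.0
```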
Now consider increasing the degree of $p$ to infinity, i.e., turning the Taylor polynomial into a Taylor series. Let $f$ be a function which corresponds to a power series. Then a divided difference scheme for $f$ can be computed by applying the corresponding matrix series to $J$.
If the nodes $x_0, \dots, x_n$ are all equal, then $J$ is a Jordan block, and the computation boils down to generalizing a scalar function to a matrix function using the Jordan decomposition.
Forward differences
When the data points are equidistant, we get the special case called forward differences. They are easier to calculate than the more general divided differences. Note that the "divided portion" must still be computed to recover the divided difference from the forward difference.
Definition
Given $n$ data points
$$(x_0, y_0), \ldots, (x_{n-1}, y_{n-1})$$
with
$$x_\nu = x_0 + \nu h, \qquad h > 0, \qquad \nu = 0, \ldots, n-1,$$
the divided differences can be calculated via forward differences defined as
$$\Delta^{(0)} y_i := y_i$$
$$\Delta^{(k)} y_i := \Delta^{(k-1)} y_{i+1} - \Delta^{(k-1)} y_i, \qquad k \geq 1.$$
The relationship between divided differences and forward differences is[4]
$$f[x_0, x_1, \ldots, x_k] = \frac{1}{k!\, h^k} \Delta^{(k)} f(x_0).$$
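A sketch verifying this relationship numerically on equidistant nodes (the helper names and sample function are ours):

```python
import math

def forward_diff(ys, k):
    """k-th forward difference of y_0 for the list ys."""
    for _ in range(k):
        ys = [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]
    return ys[0]

def dd_leading(xs, ys):
    """Leading divided difference f[x_0, ..., x_k] via the recursive scheme."""
    for j in range(1, len(xs)):
        ys = [(ys[i + 1] - ys[i]) / (xs[i + j] - xs[i]) for i in range(len(ys) - 1)]
    return ys[0]

h, k = 0.5, 3
xs = [1.0 + i * h for i in range(k + 1)]       # equidistant nodes
ys = [math.sin(x) for x in xs]
lhs = dd_leading(xs, ys)
rhs = forward_diff(ys, k) / (math.factorial(k) * h**k)
print(abs(lhs - rhs) < 1e-12)                  # True
```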
Example
$$\begin{matrix}
y_0 & & & \\
 & \Delta y_0 & & \\
y_1 & & \Delta^2 y_0 & \\
 & \Delta y_1 & & \Delta^3 y_0 \\
y_2 & & \Delta^2 y_1 & \\
 & \Delta y_2 & & \\
y_3 & & &
\end{matrix}$$
References
^ Isaacson, Walter (2014). The Innovators. Simon & Schuster. p. 20. ISBN 978-1-4767-0869-0.
^ de Boor, Carl (2005). "Divided Differences". Surveys in Approximation Theory. 1: 46–69.
^ Opitz, G. (1964). "Steigungsmatrizen". Zeitschrift für Angewandte Mathematik und Mechanik. 44: T52–T54.
^ Burden, Richard L.; Faires, J. Douglas (2011). Numerical Analysis (9th ed.). p. 129.
Milne-Thomson, Louis Melville (2000) [1933]. The Calculus of Finite Differences. American Mathematical Society. Chapter 1: Divided Differences. ISBN 978-0-8218-2107-7.
Allen, Myron B.; Isaacson, Eli L. (1998). Numerical Analysis for Applied Science. John Wiley & Sons. Appendix A. ISBN 978-1-118-03027-1.
Goldman, Ron (2002). Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling. Morgan Kaufmann. Chapter 4: Newton Interpolation and Difference Triangles. ISBN 978-0-08-051547-2.