Truncation error (numerical integration)


Truncation errors in numerical integration are of two kinds:

  • local truncation errors – the error caused by one iteration, and
  • global truncation errors – the cumulative error caused by many iterations.


Suppose we have a continuous differential equation

 y' = f(t,y), \qquad y(t_0) = y_0, \qquad t \geq t_0

and we wish to compute an approximation  y_n of the true solution  y(t_n) at discrete time steps  t_1,t_2,\ldots,t_N . For simplicity, assume the time steps are equally spaced:

 h = t_n - t_{n-1}, \qquad n=1,2,\ldots,N.

Suppose we compute the sequence  y_n with a one-step method of the form

 y_n = y_{n-1} + h A(t_{n-1}, y_{n-1}, h, f).

The function  A is called the increment function, and can be interpreted as an estimate of the slope of the solution  y on the interval from  t_{n-1} to  t_n .
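The simplest choice of increment function is  A(t,y,h,f) = f(t,y) , which gives the forward Euler method. A minimal Python sketch (function names here are illustrative, not part of the article):

```python
import math

# Forward Euler as a one-step method: the increment function is
# A(t, y, h, f) = f(t, y), so each iteration is y_n = y_{n-1} + h*f(t_{n-1}, y_{n-1}).

def euler_step(t, y, h, f):
    """One iteration y_n = y_{n-1} + h * A(t_{n-1}, y_{n-1}, h, f) with A = f."""
    return y + h * f(t, y)

def solve(f, t0, y0, h, n_steps):
    """Compute y_1, ..., y_N at equally spaced times t_n = t0 + n*h."""
    ts, ys = [t0], [y0]
    for _ in range(n_steps):
        ys.append(euler_step(ts[-1], ys[-1], h, f))
        ts.append(ts[-1] + h)
    return ts, ys

# Example: y' = y, y(0) = 1, whose exact solution is y(t) = e^t.
ts, ys = solve(lambda t, y: y, 0.0, 1.0, 0.1, 10)
err = abs(ys[-1] - math.exp(1.0))  # global error at t = 1
```

With  h = 0.1 the approximation at  t = 1 is  (1.1)^{10} \approx 2.594 , against the exact value  e \approx 2.718 .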

Local truncation error

The local truncation error  \tau_n is the error that our increment function,  A , causes during a single iteration, assuming perfect knowledge of the true solution at the previous iteration.

More formally, the local truncation error,  \tau_n , at step  n is computed from the difference between the left- and the right-hand side of the equation for the increment  y_n \approx y_{n-1} + h A(t_{n-1}, y_{n-1}, h, f) :

 \tau_n = y(t_n) - y(t_{n-1}) - h A(t_{n-1}, y(t_{n-1}), h, f). [1][2]

The numerical method is consistent if the local truncation error is  o(h) (this means that for every  \varepsilon > 0 there exists an  H such that  |\tau_n| < \varepsilon h for all  h < H ; see big O notation). If the increment function  A is differentiable, then the method is consistent if, and only if,  A(t,y,0,f) = f(t,y) .[3]

Furthermore, we say that the numerical method has order  p if for any sufficiently smooth solution of the initial value problem, the local truncation error is  O(h^{p+1}) (meaning that there exist constants  C and  H such that  |\tau_n| < Ch^{p+1} for all  h < H ).[4]
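For example, forward Euler has order  p = 1 : its local truncation error shrinks like  h^2 . This can be checked numerically on  y' = y (a sketch; the test problem and tolerances are chosen here for illustration):

```python
import math

# Measure the local truncation error of forward Euler (A = f) on
# y' = y, y(0) = 1, with exact solution y(t) = e^t. For a method of
# order p, tau ~ h^{p+1}; Euler has p = 1, so tau should scale like h^2.

def local_error(h, t=0.5):
    y = math.exp  # exact solution
    # tau = y(t+h) - y(t) - h*A(t, y(t), h, f), with A = f and f(t, y) = y
    return y(t + h) - y(t) - h * y(t)

taus = [local_error(h) for h in (0.1, 0.05, 0.025)]
ratios = [taus[i] / taus[i + 1] for i in range(2)]
# Halving h should cut tau by about 2^2 = 4.
```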

Global truncation error

The global truncation error is the accumulation of the local truncation error over all of the iterations, assuming perfect knowledge of the true solution at the initial time step.

More formally, the global truncation error,  e_n , at time  t_n is defined by:

 e_n = y(t_n) - y_n = y(t_n) - \Big( y_0 + h A(t_0,y_0,h,f) + h A(t_1,y_1,h,f) + \cdots + h A(t_{n-1},y_{n-1},h,f) \Big). [5]

The numerical method is convergent if the global truncation error goes to zero as the step size goes to zero; in other words, the numerical solution converges to the exact solution:  \lim_{h\to0} \max_n |e_n| = 0 .[6]
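Convergence can be observed numerically. For forward Euler (order 1), the global error at a fixed time shrinks roughly linearly in  h (a sketch on the illustrative test problem  y' = y ):

```python
import math

# Global error of forward Euler on y' = y, y(0) = 1 over [0, 1].
# The method has order p = 1, so the error at t = 1 should be O(h).

def global_error(h):
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y += h * y          # Euler step with f(t, y) = y
    return abs(math.e - y)  # e_N = y(t_N) - y_N

errs = [global_error(h) for h in (0.1, 0.05, 0.025)]
# Halving h should roughly halve the global error.
```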

Relationship between local and global truncation errors

Sometimes it is possible to calculate an upper bound on the global truncation error if the local truncation error is already known. This requires the increment function to be sufficiently well-behaved.

The global truncation error satisfies the recurrence relation:

 e_{n+1} = e_n + h \Big( A(t_n, y(t_n), h, f) - A(t_n, y_n, h, f) \Big) + \tau_{n+1}.

This follows immediately from the definitions. Now assume that the increment function is Lipschitz continuous in the second argument, that is, there exists a constant L such that for all t and y_1 and y_2, we have:

 | A(t,y_1,h,f) - A(t,y_2,h,f) | \le L |y_1-y_2|.

Then the global error satisfies the bound

 | e_n | \le \frac{\max_j |\tau_j|}{hL} \left( \mathrm{e}^{L(t_n-t_0)} - 1 \right). [7]

It follows from the above bound for the global error that if the function  f in the differential equation is continuous in the first argument and Lipschitz continuous in the second argument (the condition from the Picard–Lindelöf theorem), and the increment function  A is continuous in all arguments and Lipschitz continuous in the second argument, then the global error tends to zero as the step size  h approaches zero (in other words, the numerical method converges to the exact solution).[8]
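The bound can be checked numerically. For  y' = y , the function  f(t,y) = y is Lipschitz in  y with constant  L = 1 , and for forward Euler  A = f , so the hypothesis applies (a sketch on this illustrative problem, over  [0,1] ):

```python
import math

# Check |e_n| <= (max_j |tau_j| / (h*L)) * (e^{L(t_n - t0)} - 1) for forward
# Euler on y' = y, y(0) = 1 over [0, 1]. Here A = f and L = 1.

h, L = 0.1, 1.0
n = round(1.0 / h)

# Local truncation errors tau_j = y(t_j) - y(t_{j-1}) - h*f(t_{j-1}, y(t_{j-1})),
# with exact solution y(t) = e^t.
taus = [math.exp((j + 1) * h) - math.exp(j * h) - h * math.exp(j * h)
        for j in range(n)]

# Actual global error at t_n = 1 from the Euler recursion y <- y + h*y.
y = 1.0
for _ in range(n):
    y += h * y
actual = abs(math.e - y)

bound = (max(abs(t) for t in taus) / (h * L)) * (math.exp(L * 1.0) - 1.0)
assert actual <= bound  # the theoretical bound holds (though it is not tight)
```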

Extension to linear multistep methods

Now consider a linear multistep method, given by the formula

 y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr).

Thus, the next value for the numerical solution is computed according to

 y_{n+s} = - \sum_{k=0}^{s-1} a_k y_{n+k} + h \sum_{k=0}^{s} b_k f(t_{n+k}, y_{n+k}).

The next iterate of a linear multistep method depends on the previous s iterates. Thus, in the definition for the local truncation error, it is now assumed that the previous s iterates all correspond to the exact solution:

 \tau_n = y(t_{n+s}) + \sum_{k=0}^{s-1} a_k y(t_{n+k}) - h \sum_{k=0}^{s} b_k f(t_{n+k}, y(t_{n+k})). [9]

Again, the method is consistent if  \tau_n = o(h) and it has order p if  \tau_n = O(h^{p+1}) . The definition of the global truncation error is also unchanged.

The relation between local and global truncation errors is slightly different from that in the simpler setting of one-step methods. For linear multistep methods, an additional concept, zero-stability, is needed to explain the relation between local and global truncation errors. Linear multistep methods that satisfy the condition of zero-stability have the same relation between local and global errors as one-step methods. In other words, if a linear multistep method is zero-stable and consistent, then it converges; and if it is zero-stable and has local error  \tau_n = O(h^{p+1}) , then its global error satisfies  e_n = O(h^p) .[10]
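As an example, the two-step Adams–Bashforth method fits the general form with  s = 2 ,  a_1 = -1 ,  a_0 = 0 ,  b_2 = 0 ,  b_1 = 3/2 ,  b_0 = -1/2 . It is zero-stable and has order  p = 2 , so its global error should be  O(h^2) . A sketch checking this on the illustrative problem  y' = y (the start-up value is taken from the exact solution):

```python
import math

# Two-step Adams–Bashforth:
#   y_{n+2} = y_{n+1} + h*(3/2 * f(t_{n+1}, y_{n+1}) - 1/2 * f(t_n, y_n)).
# Zero-stable and order p = 2, so the global error at t = 1 should be O(h^2).

def ab2_error(h, f=lambda t, y: y):
    n = round(1.0 / h)
    ys = [1.0, math.exp(h)]  # start-up: second value taken from the exact solution
    for k in range(n - 1):
        ys.append(ys[-1] + h * (1.5 * f((k + 1) * h, ys[-1])
                                - 0.5 * f(k * h, ys[-2])))
    return abs(math.e - ys[-1])

errs = [ab2_error(h) for h in (0.1, 0.05)]
# Halving h should cut the error by about 2^2 = 4.
```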

Notes


  1. ^ Gupta, G. K.; Sacks-Davis, R.; Tischer, P. E. (March 1985). "A review of recent developments in solving ODEs". Computing Surveys 17 (1): 5–47. doi:10.1145/4078.4079.
  2. ^ Süli & Mayers 2003, p. 317, calls  \tau_n/h the truncation error.
  3. ^ Süli & Mayers 2003, pp. 321 & 322
  4. ^ Iserles 1996, p. 8; Süli & Mayers 2003, p. 323
  5. ^ Süli & Mayers 2003, p. 317
  6. ^ Iserles 1996, p. 5
  7. ^ Süli & Mayers 2003, p. 318
  8. ^ Süli & Mayers 2003, p. 322
  9. ^ Süli & Mayers 2003, p. 337, uses a different definition, dividing this essentially by h
  10. ^ Süli & Mayers 2003, p. 340
