Truncation error

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Autarkaw (talk | contribs) at 15:57, 1 January 2021. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In numerical analysis and scientific computing, truncation error is the error caused by approximating a mathematical process, typically by cutting an infinite process short after finitely many steps. The three examples below illustrate the definition so that common misconceptions about truncation error can be laid to rest.

Example 1:

A summation series for \(e^x\) is given by an infinite series such as

\[e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots\]

In reality, we can only use a finite number of these terms, as it would take an infinite amount of computational time to use all of them. So suppose we use only the first three terms of the series; then

\[e^x \approx 1 + x + \frac{x^2}{2!}.\]

In this case, the truncation error is

\[\mathrm{TE} = e^x - \left(1 + x + \frac{x^2}{2!}\right) = \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots\]
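As a concrete numerical check, the three-term approximation and its truncation error can be evaluated directly. The sketch below uses Python's standard math module; the choice \(x = 1\) is an illustrative assumption, not taken from the text.

```python
import math

def exp_series(x, n_terms):
    """Partial sum of the Maclaurin series for e^x using the first n_terms terms."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0                                  # illustrative choice of x
approx = exp_series(x, 3)                # 1 + x + x^2/2! = 2.5 for x = 1
truncation_error = math.exp(x) - approx  # the dropped tail x^3/3! + x^4/4! + ...
print(approx, truncation_error)          # 2.5 and about 0.2183
```

Keeping more terms shrinks the truncation error, since the dropped tail of the series gets smaller.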

Example A:

Given the following infinite series, find the truncation error for \(x = 0.75\) if only the first three terms of the series are used:

\[S = 1 + x + x^2 + x^3 + \cdots\]

Solution

Using only the first three terms of the series gives

\[S_3 = 1 + 0.75 + 0.75^2 = 2.3125.\]

The sum of an infinite geometric series

\[S = a + ar + ar^2 + ar^3 + \cdots\]

is given by

\[S = \frac{a}{1-r}, \qquad |r| < 1.\]

For our series, \(a = 1\) and \(r = 0.75\), to give

\[S = \frac{1}{1 - 0.75} = 4.\]

The truncation error hence is

\[\mathrm{TE} = 4 - 2.3125 = 1.6875.\]
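The worked example above can be reproduced in a few lines of Python (a sketch; the variable names are my own):

```python
# Geometric series truncated after three terms, as in the example above.
a, r = 1.0, 0.75

partial = sum(a * r**k for k in range(3))  # 1 + 0.75 + 0.75^2 = 2.3125
exact = a / (1 - r)                        # closed form, valid for |r| < 1
truncation_error = exact - partial
print(partial, exact, truncation_error)    # 2.3125 4.0 1.6875
```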


Example 2:

The definition of the exact first derivative of the function \(f(x)\) is given by

\[f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.\]

However, if we are calculating the derivative numerically, \(h\) has to be finite. The error caused by choosing \(h\) to be finite is a truncation error in the mathematical process of differentiation.

Example A:

Find the truncation error in calculating the first derivative of \(f(x) = 5x^3\) at \(x = 7\) using a step size of \(h = 0.25\).

Solution:

The first derivative of \(f(x) = 5x^3\) is

\[f'(x) = 15x^2,\]

and at \(x = 7\),

\[f'(7) = 15(7)^2 = 735.\]

The approximate value is given by

\[f'(7) \approx \frac{f(7.25) - f(7)}{0.25} = \frac{5(7.25)^3 - 5(7)^3}{0.25} = 761.5625.\]

The truncation error hence is

\[\mathrm{TE} = 735 - 761.5625 = -26.5625.\]
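A short Python sketch reproduces this calculation with a forward-difference approximation (the function and variable names are illustrative):

```python
def forward_difference(f, x, h):
    """Forward-difference approximation of f'(x) with finite step size h."""
    return (f(x + h) - f(x)) / h

f = lambda x: 5 * x**3
exact = 15 * 7.0**2                        # f'(x) = 15x^2, so f'(7) = 735
approx = forward_difference(f, 7.0, 0.25)  # 761.5625
print(exact - approx)                      # truncation error: -26.5625
```

Shrinking the step size \(h\) reduces this truncation error (until round-off error eventually dominates).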


Example 3:

The definition of the exact integral of a function \(f(x)\) from \(a\) to \(b\) is given as follows.

Let \(f(x)\) be a function defined on a closed interval of the real numbers, \([a, b]\), and

\[P = \{x_0, x_1, x_2, \ldots, x_n\}\]

be a partition of \(I\), where

\[a = x_0 < x_1 < x_2 < \cdots < x_{n-1} < x_n = b.\]

Then

\[\int_a^b f(x)\,dx = \lim_{\max \Delta x_i \to 0} \sum_{i=1}^{n} f(x_i^*)\,\Delta x_i,\]

where

\[\Delta x_i = x_i - x_{i-1}\]

and

\[x_i^* \in [x_{i-1}, x_i].\]

This implies that we are finding the area under the curve using infinite rectangles. However, if we are calculating the integral numerically, we can only use a finite number of rectangles. The error caused by choosing a finite number of rectangles as opposed to an infinite number of them is a truncation error in the mathematical process of integration.
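To illustrate, the sketch below approximates an integral with left-endpoint rectangles; the integrand \(x^2\) on \([0, 1]\) (exact integral \(1/3\)) is an assumed example, not from the text.

```python
def left_riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n left-endpoint rectangles."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda x: x**2                  # exact integral over [0, 1] is 1/3
for n in (10, 100, 1000):
    approx = left_riemann_sum(f, 0.0, 1.0, n)
    print(n, approx, 1/3 - approx)  # truncation error shrinks as n grows
```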

Occasionally, by mistake, round-off error (the consequence of using finite-precision floating-point numbers on computers) is also called truncation error, especially if the number is rounded by chopping.
