Discretization error

In numerical analysis, computational physics, and simulation, discretization error (or truncation error) is the error inherent in discretization. It results from the fact that a function of a continuous variable is represented in the computer by a finite number of evaluations, for example, on a lattice. Discretization error can usually be reduced by using a more finely spaced lattice, with an increased computational cost. Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand.
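As a sketch of this trade-off, the following Python snippet (using a hypothetical `midpoint_integral` helper, not from the article) approximates the integral of f(x) = x² over [0, 1] on lattices of decreasing spacing; the gap from the exact value 1/3 is the discretization error, and it shrinks as the lattice is refined while the number of evaluations grows.

```python
# Midpoint-rule approximation of the integral of f(x) = x**2 on [0, 1].
# The exact value is 1/3; the remaining gap is discretization error,
# which shrinks as the lattice spacing h = (b - a) / n decreases,
# at the cost of more function evaluations.

def midpoint_integral(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

exact = 1.0 / 3.0
for n in (10, 100, 1000):
    approx = midpoint_integral(lambda x: x * x, 0.0, 1.0, n)
    print(n, abs(approx - exact))
```

For the midpoint rule the error scales as h², so refining the lattice tenfold reduces the discretization error by roughly a factor of one hundred.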

Examples

Discretization error is the principal source of error in methods of finite differences and the pseudo-spectral method of computational physics.

When the derivative of f(x), defined as f'(x) = \lim_{h\rightarrow0}\frac{f(x+h)-f(x)}{h}, is approximated by the finite difference f'(x)\approx\frac{f(x+h)-f(x)}{h} for a small but finite h, the difference between the true derivative and this approximation is known as discretization error.
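A minimal sketch of this forward-difference approximation in Python (the `forward_difference` helper is illustrative, not part of the article), using f(x) = sin(x) at x = 1, where the exact derivative is cos(1):

```python
import math

# Forward-difference approximation f'(x) ~ (f(x+h) - f(x)) / h.
# For smooth f the discretization error is O(h): shrinking h by a
# factor of ten shrinks the error by roughly the same factor,
# until round-off error eventually dominates for very small h.

def forward_difference(f, x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # true derivative of sin at x
for h in (1e-1, 1e-2, 1e-3):
    approx = forward_difference(math.sin, x, h)
    print(h, abs(approx - exact))
```

By Taylor expansion the leading error term is (h/2)·f''(x), which is why the error here decreases roughly linearly in h.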

Related phenomena

In signal processing, the analog of discretization is sampling, which results in no loss if the conditions of the sampling theorem are satisfied; otherwise the resulting error is called aliasing.
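As a small illustration of aliasing (the frequencies and sampling rate below are chosen for this sketch, not taken from the article): a 7 Hz sine sampled at only 5 Hz, well below its Nyquist rate of 14 Hz, produces exactly the same sample values as a 2 Hz sine, so the two signals become indistinguishable after sampling.

```python
import math

# Sampling below the Nyquist rate causes aliasing: a 7 Hz sine
# sampled at 5 Hz yields the same samples as a 2 Hz sine, because
# 7n/5 and 2n/5 differ by the integer n for every sample index n.

fs = 5.0  # sampling rate in Hz, below the Nyquist rate for 7 Hz
samples_7hz = [math.sin(2 * math.pi * 7 * n / fs) for n in range(10)]
samples_2hz = [math.sin(2 * math.pi * 2 * n / fs) for n in range(10)]

for a, b in zip(samples_7hz, samples_2hz):
    assert abs(a - b) < 1e-9  # identical up to round-off
```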

Discretization error, which arises from finite resolution in the domain, should not be confused with quantization error, which arises from finite resolution in the range (the values), nor with round-off error, which arises from floating-point arithmetic. Discretization error would occur even if it were possible to represent the values exactly and use exact arithmetic – it is the error from representing a function by its values at a discrete set of points, not an error in these values.[1]
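This distinction can be demonstrated with exact rational arithmetic (a sketch using Python's `fractions` module; the function and step size are illustrative). With no round-off and no quantization of stored values, the forward difference of f(x) = x² at x = 1 with step h still misses the true derivative 2 by exactly h – the residual is pure discretization error.

```python
from fractions import Fraction

# Exact rational arithmetic: no round-off error, no quantization of
# the stored values. The forward difference of f(x) = x**2 at x = 1
# is (f(1+h) - f(1)) / h = 2 + h, so even computed exactly it is
# off by h from the true derivative 2: pure discretization error.

def f(x):
    return x * x

h = Fraction(1, 10)
approx = (f(1 + h) - f(1)) / h
print(approx)  # 21/10: exact, yet it differs from 2 by exactly h
assert approx - 2 == h
```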

References

  1. Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. p. 5.