In numerical analysis, computational physics, and simulation, discretization error (or truncation error) is error resulting from the fact that a function of a continuous variable is represented in the computer by a finite number of evaluations, for example, on a lattice. Discretization error can usually be reduced by using a more finely spaced lattice, with an increased computational cost.
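The trade-off above can be sketched numerically. The snippet below is a minimal illustration (not from the original article): it approximates the integral of sin on [0, π] with the trapezoid rule on a coarse and a fine lattice, showing that refining the lattice shrinks the discretization error while requiring more function evaluations.

```python
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f on [a, b] using n lattice intervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

exact = 2.0  # the integral of sin over [0, pi] is exactly 2
coarse = abs(trapezoid(math.sin, 0.0, math.pi, 10) - exact)
fine = abs(trapezoid(math.sin, 0.0, math.pi, 100) - exact)

# Refining the lattice 10x shrinks the error roughly 100x here
# (the trapezoid rule is second-order), at the cost of 10x more evaluations.
assert fine < coarse / 50
```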
When we define the derivative of f(x) as f′(x) = lim_{h → 0} (f(x + h) − f(x))/h and approximate it by f′(x) ≈ (f(x + h) − f(x))/h, where h is a finitely small number, the difference between the first formula and this approximation is known as discretization error.
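A short sketch of this forward-difference approximation (an illustrative example, not part of the original article): for f(x) = x², the exact derivative at x = 1 is 2, and the approximation error shrinks in proportion to h.

```python
def forward_difference(f, x, h):
    """Approximate f'(x) by the forward difference (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# For f(x) = x**2, algebra gives ((x + h)**2 - x**2) / h = 2*x + h,
# so the approximation at x = 1 overshoots the true value 2 by exactly h:
# this overshoot is the discretization error for this f.
for h in (0.1, 0.01, 0.001):
    error = forward_difference(lambda x: x * x, 1.0, h) - 2.0
    print(h, error)  # error tracks h (up to tiny floating-point noise)
```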
Discretization error, which arises from finite resolution in the domain, should not be confused with quantization error, which arises from finite resolution in the range (the values), nor with round-off error arising from floating-point arithmetic. Discretization error would occur even if it were possible to represent the values exactly and use exact arithmetic – it is the error from representing a function by its values at a discrete set of points, not an error in those values.
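The point that discretization error survives exact arithmetic can be demonstrated directly (an illustrative sketch, not from the original article) by evaluating the forward difference in exact rational arithmetic with Python's fractions module, so that no round-off error occurs at all.

```python
from fractions import Fraction

def forward_difference(f, x, h):
    """Forward difference (f(x + h) - f(x)) / h, here in exact arithmetic."""
    return (f(x + h) - f(x)) / h

f = lambda x: x * x       # exact derivative is f'(x) = 2*x
x = Fraction(1)
h = Fraction(1, 100)
approx = forward_difference(f, x, h)

# With rationals there is no round-off whatsoever, yet the result is
# 2 + h = 201/100 rather than 2: the leftover h is pure discretization error.
assert approx - 2 == h
```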
- Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. p. 5.