# Order of approximation


In science, engineering, and other quantitative disciplines, order of approximation refers to formal or informal expressions for how accurate an approximation is.

## Usage in science and engineering

In formal expressions, the ordinal number used before the word order refers to the highest power in the series expansion used in the approximation. The expressions: a zeroth-order approximation, a first-order approximation, a second-order approximation, and so forth are used as fixed phrases. The expression a zero order approximation is also common. Cardinal numerals are occasionally used in expressions like an order zero approximation, an order one approximation, etc.

The omission of the word order leads to phrases that have a less formal meaning. Phrases like first approximation or to a first approximation may refer to a roughly approximate value of a quantity.[1][2] The phrase to a zeroth approximation indicates a wild guess.[3] The expression order of approximation is sometimes used informally to mean the number of significant figures, in increasing order of accuracy, or the order of magnitude. This can be confusing, however, because these informal usages do not refer to the order of the series expansion that defines the formal expressions.

The choice of series expansion depends on the scientific method used to investigate a phenomenon. The expression order of approximation is expected to indicate progressively more refined approximations of a function in a specified interval. The choice of order of approximation depends on the research purpose. One may wish to simplify a known analytic expression to devise a new application or, on the contrary, try to fit a curve to data points. Higher order of approximation is not always more useful than the lower one. For example, if a quantity is constant within the whole interval, approximating it with a second-order Taylor series will not increase the accuracy.

In the case of a smooth function, the nth-order approximation is a polynomial of degree n, which is obtained by truncating the Taylor series to this degree. The formal usage of order of approximation corresponds to the omission of some terms of the series used in the expansion (usually the higher terms). This affects accuracy. The error usually varies within the interval. Thus the numbers zeroth, first, second etc. used formally in the above meaning do not directly give information about percent error or significant figures.
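For a function f that is smooth near a point a, the nth-order approximation described above is the truncated Taylor series:

```latex
% nth-order Taylor approximation of f about the point a;
% terms of degree higher than n are omitted.
f(x) \;\approx\; \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^{k}
```

The omitted terms begin at degree n + 1, which is why the error generally varies across the interval and is not captured by a single count of significant figures.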

### Zeroth-order

Zeroth-order approximation is the term scientists use for a first rough answer. Many simplifying assumptions are made, and when a number is needed, an order-of-magnitude answer (or zero significant figures) is often given. For example, you might say "the town has a few thousand residents", when it has 3,914 people in actuality. This is also sometimes referred to as an order-of-magnitude approximation. The zero of "zeroth-order" represents the fact that even the only number given, "a few", is itself loosely defined.

A zeroth-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be constant, or a flat line with no slope: a polynomial of degree 0. For example,

$x = [0, 1, 2]$
$y = [3, 3, 5]$
$y \sim f(x) = 3.67$

could be an approximate fit to the data (if the accuracy of the data points were reported), obtained simply by averaging the y-values. However, data points represent the results of measurements, and so they differ from points in Euclidean geometry. Quoting an average value with three significant digits in the output, when the input data carry just one significant digit, could be recognized as an example of false precision. With the implied accuracy of the data points of ±0.5, the zeroth-order approximation could at best yield a result for y of ~3.7 ± 2.0 over the interval of x from −0.5 to 2.5, considering the standard deviation.
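As a concrete check, the constant fit above can be reproduced numerically. This is a minimal sketch assuming NumPy is available; the data are the three points given in the text.

```python
# Zeroth-order fit: a degree-0 polynomial, i.e. a single constant.
# For a least-squares fit, that constant is simply the mean of the y-values.
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([3.0, 3.0, 5.0])

coeffs = np.polyfit(x, y, deg=0)   # least-squares fit of a constant
print(round(coeffs[0], 2))         # 3.67, the mean of y
```

The same value follows from `y.mean()`, which is exactly what a degree-0 least-squares fit reduces to.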

If the data points are reported as

$x = [0.00, 1.00, 2.00]$
$y = [3.00, 3.00, 5.00]$

the zeroth-order approximation results in

$y \sim f(x) = 3.67$

The accuracy of the result justifies an attempt to derive a function of x that reproduces that average, for example,

$y \sim f(x) = x + 2.67$

One should be careful, though, because such a function is defined over the whole interval. If only three data points are available, nothing is known about the rest of the interval, which may be a large part of it. This means that y could have another component that equals 0 at the ends and in the middle of the interval; a number of functions have this property, for example y = sin πx. A Taylor series is useful and helps predict an analytic solution, but the approximation alone does not provide conclusive evidence.

### First-order

First-order approximation is the term scientists use for a slightly better answer.[3] Some simplifying assumptions are made, and when a number is needed, an answer with only one significant figure is often given ("the town has 4×10³, or four thousand, residents"). In a first-order approximation, at least one number given is exact: in the zeroth-order example above, the quantity was "a few", but in the first-order example, the number "4" is given.

A first-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a linear approximation: a straight line with a slope, i.e. a polynomial of degree 1. For example,

$x = [0.00, 1.00, 2.00]$
$y = [3.00, 3.00, 5.00]$
$y \sim f(x) = x + 2.67$

is an approximate fit to the data. In this example, the zeroth-order analysis ultimately produced the same formula as the first-order fit, but the method of getting there was different: a rough guess at a relationship, derived from the average, happened to be as good as an educated guess fit directly to the data.
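The straight-line fit can be checked the same way. This is a minimal sketch assuming NumPy, using an ordinary least-squares fit of a degree-1 polynomial:

```python
# First-order fit: a degree-1 polynomial y = a*x + b, fit by least squares.
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([3.0, 3.0, 5.0])

slope, intercept = np.polyfit(x, y, deg=1)
print(round(slope, 2), round(intercept, 2))   # 1.0 2.67, i.e. y ~ x + 2.67
```

The exact least-squares values are a slope of 1 and an intercept of 8/3 ≈ 2.67, matching the formula quoted in the text.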

### Second-order

Second-order approximation is the term scientists use for a decent-quality answer. Few simplifying assumptions are made, and when a number is needed, an answer with two or more significant figures ("the town has 3.9×10³, or thirty-nine hundred, residents") is generally given. In mathematical finance, second-order approximations are known as convexity corrections. As in the examples above, the term "second order" refers to the number of exact numerals given for the imprecise quantity: here, "3" and "9" are given as two successive levels of precision, instead of simply the "4" from the first-order example, or "a few" from the zeroth-order example.

A second-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a quadratic polynomial, geometrically, a parabola: a polynomial of degree 2. For example,

$x = [0.00, 1.00, 2.00]$
$y = [3.00, 3.00, 5.00]$
$y \sim f(x) = x^{2} - x + 3$

is an approximate fit to the data. In this case, with only three data points, a parabola is an exact fit based on the data provided. However, data points over most of the interval are not available, which advises caution (see "Zeroth-order" above).
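Because three points determine a quadratic exactly, a least-squares fit of a degree-2 polynomial reproduces the parabola above. A minimal sketch assuming NumPy:

```python
# Second-order fit: a degree-2 polynomial through three points is exact.
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([3.0, 3.0, 5.0])

coeffs = np.polyfit(x, y, deg=2)              # [a, b, c] for a*x^2 + b*x + c
print(np.allclose(coeffs, [1.0, -1.0, 3.0]))  # True: y = x^2 - x + 3
print(np.allclose(np.polyval(coeffs, x), y))  # True: passes through every point
```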

### Higher-order

While higher-order approximations exist and are crucial to a better understanding and description of reality, they are not typically referred to by number.

Continuing the above, a third-order approximation would be required to perfectly fit four data points, and so on. See polynomial interpolation.
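The degree rule can be illustrated directly: n + 1 data points determine a polynomial of degree at most n. The fourth data point below is hypothetical, introduced only to show the cubic (third-order) case; a sketch assuming NumPy:

```python
# Third-order fit: a degree-3 polynomial through four points is exact.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([3.0, 3.0, 5.0, 4.0])   # hypothetical fourth value y(3) = 4

coeffs = np.polyfit(x, y, deg=3)              # cubic through 4 points
print(np.allclose(np.polyval(coeffs, x), y))  # True: exact interpolation
```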

## Colloquial usage

These terms are also used colloquially by scientists and engineers to describe phenomena that can be neglected as not significant (e.g. "Of course the rotation of the Earth affects our experiment, but it's such a high-order effect that we wouldn't be able to measure it" or "At these velocities, relativity is a fourth-order effect that we only worry about at the annual calibration"). In this usage, the ordinality of the approximation is not exact, but is used to emphasize the effect's insignificance: the higher the number used, the less important the effect. In this context, a high order signifies the level of precision that would be required to measure the effect at all, and therefore how small the effect is compared to the overall measurement.