Error analysis

Error analysis is the study of the kinds and quantities of error that occur, particularly in the fields of applied mathematics (especially numerical analysis), applied linguistics, and statistics.

Error analysis in numerical modeling

In numerical simulation or modeling of real systems, error analysis is concerned with the changes in the output of the model as the parameters to the model vary about a mean.

For instance, consider a system modeled as a function of two variables $\scriptstyle z \,=\, f(x,y)$. Error analysis deals with the propagation of the numerical errors in $\scriptstyle x$ and $\scriptstyle y$ (around mean values $\scriptstyle\bar{x}$ and $\scriptstyle\bar{y}$) to the error in $\scriptstyle z$ (around a mean $\scriptstyle\bar{z}$).[1]
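A common way to study such propagation numerically is Monte Carlo sampling: draw the inputs from distributions centered on their means and measure the resulting spread of the output. The sketch below is illustrative only; the particular function $f(x,y) = xy$, the means, and the error sizes are assumptions chosen for the example, not taken from the text.

```python
import math
import random

def propagate_error_mc(f, x_mean, x_err, y_mean, y_err, n=20_000, seed=0):
    """Monte Carlo error propagation: sample x and y around their means
    and measure the mean and spread of z = f(x, y)."""
    rng = random.Random(seed)
    zs = [f(rng.gauss(x_mean, x_err), rng.gauss(y_mean, y_err))
          for _ in range(n)]
    z_mean = sum(zs) / n
    z_var = sum((z - z_mean) ** 2 for z in zs) / (n - 1)
    return z_mean, math.sqrt(z_var)

# Illustrative example: z = x * y with small Gaussian input errors.
z_mean, z_err = propagate_error_mc(lambda x, y: x * y, 2.0, 0.01, 3.0, 0.02)
```

For small input errors the result can be checked against the first-order propagation formula $\sigma_z^2 \approx (\partial f/\partial x)^2\sigma_x^2 + (\partial f/\partial y)^2\sigma_y^2$, which for this example gives $\sigma_z \approx \sqrt{(3 \cdot 0.01)^2 + (2 \cdot 0.02)^2} = 0.05$.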

In numerical analysis, error analysis comprises both forward error analysis and backward error analysis. Forward error analysis involves the analysis of a function $\scriptstyle z' \,=\, f'(a_0,\,a_1,\,\dots,\,a_n)$ which is an approximation (usually a finite polynomial) to a function $\scriptstyle z \,=\, f(a_0,\,a_1,\,\dots,\,a_n)$ to determine the bounds on the error in the approximation; i.e., to find $\scriptstyle\epsilon$ such that $\scriptstyle 0 \,\le\, |z - z'| \,\le\, \epsilon$. Backward error analysis involves the analysis of the approximation function $\scriptstyle z' \,=\, f'(a_0,\,a_1,\,\dots,\,a_n)$, to determine the bounds on the parameters $\scriptstyle a_i \,=\, \bar{a_i} \,\pm\, \epsilon_i$ such that the result $\scriptstyle z' \,=\, z$.[2]
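The two viewpoints can be seen side by side on a concrete approximation. The choice below, a truncated Taylor polynomial for $e^x$, is an illustrative assumption, not an example from the text: the forward error asks how far the computed value is from the true one, while the backward error asks for which perturbed input the computed value would be exact.

```python
import math

def exp_taylor(x, n_terms=5):
    """Truncated Taylor polynomial for e^x (the approximating f')."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

x = 0.5
z_true = math.exp(x)          # the true z = f(x)
z_approx = exp_taylor(x)      # the approximation z' = f'(x)

# Forward error: bound on |z - z'|.
forward_error = abs(z_true - z_approx)

# Backward error: the perturbation delta such that e^(x + delta) equals
# z' exactly, i.e. x + delta = ln(z').
x_backward = math.log(z_approx)
backward_error = abs(x_backward - x)
```

Here the approximation is the exact answer to a slightly perturbed question: $f(x_{\text{backward}})$ reproduces $z'$ exactly, and both error measures are small because the truncated series is accurate near $x = 0.5$.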

Error analysis in second language acquisition

In second language acquisition, error analysis studies the types and causes of language errors; errors are classified according to several criteria.[3]

Error analysis in SLA was established in the 1960s by Stephen Pit Corder and colleagues.[4] Error analysis was an alternative to contrastive analysis, an approach influenced by behaviorism through which applied linguists sought to use the formal distinctions between the learners' first and second languages to predict errors. Error analysis showed that contrastive analysis was unable to predict a great majority of errors, although its more valuable aspects have been incorporated into the study of language transfer. A key finding of error analysis has been that many learner errors are produced by learners making faulty inferences about the rules of the new language.

Error analysis in molecular dynamics simulation

In molecular dynamics (MD) simulations, errors arise from inadequate sampling of phase space or from infrequently occurring events; these lead to statistical error due to random fluctuations in the measurements.

For a series of M measurements of a fluctuating property A, the mean value is:

$\langle A \rangle = \frac{1}{M} \sum_{\mu=1}^M A_{\mu}.$

When these M measurements are independent, the variance of the mean $\langle A \rangle$ is:

$\sigma^{2}( \langle A \rangle ) = \frac{1}{M} \sigma^{2}( A ),$

but in most MD simulations the quantity A is correlated at different times, so the variance of the mean $\langle A \rangle$ will be underestimated, as the effective number of independent measurements is actually less than M. In such situations we rewrite the variance as:

$\sigma^{2}( \langle A \rangle ) = \frac{1}{M} \sigma^{2}( A ) \left[ 1 + 2 \sum_{\mu=1}^{M-1} \left( 1 - \frac{\mu}{M} \right) \phi_{\mu} \right],$

where $\phi_{\mu}$ is the autocorrelation function defined by

$\phi_{\mu} = \frac{ \langle A_{\mu}A_{0} \rangle - \langle A \rangle^{2} }{ \langle A^{2} \rangle - \langle A \rangle^{2}}.$
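A direct way to apply this correction is to estimate $\phi_{\mu}$ from the data itself and truncate the lag sum once the autocorrelation has decayed. The Python sketch below does this on synthetic data; the series, the lag cutoff, and the repeat-each-value-twice construction are illustrative assumptions, not from the text.

```python
import random

def variance_of_mean(a, cutoff):
    """Variance of the mean of a correlated series, following
    sigma^2(<A>) = (sigma^2(A)/M) [1 + 2 sum_mu (1 - mu/M) phi_mu],
    with phi_mu estimated from the data.  The lag sum is truncated at
    `cutoff`: summing over *all* lags of the sample autocorrelation
    cancels the correction identically, so in practice the sum must
    stop once phi_mu has decayed to zero."""
    M = len(a)
    mean = sum(a) / M
    var = sum((x - mean) ** 2 for x in a) / M              # sigma^2(A)

    def phi(mu):                                           # sample autocorrelation
        cov = sum((a[i] - mean) * (a[i + mu] - mean)
                  for i in range(M - mu)) / (M - mu)
        return cov / var

    corr = 1 + 2 * sum((1 - mu / M) * phi(mu) for mu in range(1, cutoff + 1))
    return var / M * corr

# Synthetic data (illustrative): an uncorrelated series, and a correlated
# one built by repeating each value twice, halving the effective sample size.
rng = random.Random(1)
iid = [rng.random() for _ in range(4000)]
dup = [x for x in iid[:2000] for _ in range(2)]
est_iid = variance_of_mean(iid, cutoff=5)
est_dup = variance_of_mean(dup, cutoff=5)
```

For the uncorrelated series the correction factor stays near 1; for the duplicated series it approaches 2, reflecting that only half the measurements are effectively independent.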

We can then use the autocorrelation function to estimate the error bar. In practice, a much simpler method based on block averaging is often used.[5]
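Block averaging can be sketched as follows: split the series into blocks longer than the correlation time, average each block, and treat the block means as approximately independent samples whose scatter gives the error bar. The block count and the synthetic test series below are illustrative assumptions.

```python
import random

def block_average_error(a, n_blocks=20):
    """Standard error of <A> by block averaging.  Each block mean is
    treated as an (approximately) independent measurement, which is
    valid once the blocks are longer than the correlation time."""
    block_len = len(a) // n_blocks
    means = [sum(a[i * block_len:(i + 1) * block_len]) / block_len
             for i in range(n_blocks)]
    grand = sum(means) / n_blocks
    var_blocks = sum((m - grand) ** 2 for m in means) / (n_blocks - 1)
    return (var_blocks / n_blocks) ** 0.5   # standard error of <A>

# Synthetic data (illustrative): an uncorrelated series, and a correlated
# one built by repeating each value twice.
rng = random.Random(1)
iid = [rng.random() for _ in range(4000)]
dup = [x for x in iid[:2000] for _ in range(2)]
err_iid = block_average_error(iid)
err_dup = block_average_error(dup)
```

Unlike the autocorrelation route, no explicit estimate of $\phi_{\mu}$ is needed; in practice the block length is increased until the error estimate plateaus.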