In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors) on the uncertainty of a function based on them. Typically, the variables are measured in an experiment and have uncertainties due to measurement limitations (e.g. instrument precision), which propagate to the result.
The uncertainty is usually defined by the absolute error: a variable that is likely to take the values x ± Δx is said to have an uncertainty (or margin of error) of Δx. In other words, for a measured value x, it is probable that the true value lies in the interval [x − Δx, x + Δx]. Uncertainties can also be defined by the relative error Δx/x, which is usually written as a percentage. In many cases it is assumed that the difference between a measured value and the true value is normally distributed, with the standard deviation of the distribution being the uncertainty of the measurement.
This article explains how to calculate the uncertainty of a function if the variables' uncertainties are known.
Let $f(x_{1},x_{2},\dots ,x_{n})$ be a function which depends on $n$ variables $x_{1},x_{2},\dots ,x_{n}$. The uncertainty of each variable is given by $\Delta x_{j}$:
$$x_{j}\pm \Delta x_{j}\,.$$
If the variables are uncorrelated, we can calculate the uncertainty Δf of f that results from the uncertainties of the variables:
$$\Delta f=\Delta f\left(x_{1},x_{2},\dots ,x_{n},\Delta x_{1},\Delta x_{2},\dots ,\Delta x_{n}\right)=\left(\sum _{i=1}^{n}\left({\frac {\partial f}{\partial x_{i}}}\,\Delta x_{i}\right)^{2}\right)^{1/2},$$
where $\frac{\partial f}{\partial x_{j}}$ denotes the partial derivative of $f$ with respect to the $j$-th variable.
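As an illustration, the uncorrelated formula can be evaluated numerically by approximating each partial derivative with a central finite difference. The following Python sketch applies the formula above; the helper name propagate_uncorrelated, the step size, and the example numbers are illustrative choices, not from the article's sources:

```python
import math

def propagate_uncorrelated(f, x, dx):
    """Propagate uncorrelated uncertainties dx through f at point x.

    Implements Delta f = sqrt(sum_i (df/dx_i * Delta x_i)^2), with each
    partial derivative approximated by a central finite difference.
    """
    total = 0.0
    for i, (xi, dxi) in enumerate(zip(x, dx)):
        h = 1e-6 * max(abs(xi), 1.0)                 # small derivative step
        x_plus = list(x); x_plus[i] = xi + h
        x_minus = list(x); x_minus[i] = xi - h
        dfdxi = (f(x_plus) - f(x_minus)) / (2 * h)   # central difference
        total += (dfdxi * dxi) ** 2
    return math.sqrt(total)

# Example: f(x1, x2) = x1 * x2 with x1 = 10 +/- 1 and x2 = 20 +/- 2
f = lambda x: x[0] * x[1]
print(propagate_uncorrelated(f, [10.0, 20.0], [1.0, 2.0]))  # ~28.28
```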
If the variables are correlated, the covariance between variable pairs, $C_{i,k}:=\operatorname{cov}(x_{i},x_{k})$, enters the formula with a double sum over all pairs $(i,k)$:
$$\Delta f=\left(\sum _{i=1}^{n}\sum _{k=1}^{n}{\frac {\partial f}{\partial x_{i}}}\,{\frac {\partial f}{\partial x_{k}}}\,C_{i,k}\right)^{1/2},$$
where $C_{i,i}=\operatorname{var}(x_{i})=(\Delta x_{i})^{2}$.
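In matrix form the double sum is the quadratic form $g^{\mathsf T}C\,g$, where $g$ is the gradient of $f$. A minimal NumPy sketch (the function name and the example covariance values are our own illustrative choices), assuming the gradient is supplied analytically:

```python
import numpy as np

def propagate_correlated(grad, cov):
    """Delta f = sqrt(g^T C g) for gradient g and covariance matrix C."""
    g = np.asarray(grad, dtype=float)
    C = np.asarray(cov, dtype=float)
    return float(np.sqrt(g @ C @ g))

# Example: f = x1 * x2 at (10, 20); the gradient is (x2, x1) = (20, 10).
# Variances 1^2 and 2^2, with covariance 0.5 between x1 and x2.
C = np.array([[1.0, 0.5],
              [0.5, 4.0]])
print(propagate_correlated([20.0, 10.0], C))  # includes the cross term
```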
After calculating $\Delta f$, we can say that the value of the function with its uncertainty is:
$$f\pm \Delta f\,.$$
This table shows the uncertainty of simple functions, resulting from uncorrelated variables A, B, C with uncertainties ΔA, ΔB, ΔC, and a precisely known constant c.
Function: $X=A\pm B$
Uncertainty: $\Delta X=\Delta A+\Delta B$; $\sigma _{X}^{2}=\sigma _{A}^{2}+\sigma _{B}^{2}$. Non-independent: $\sigma _{X}^{2}=\sigma _{A}^{2}+\sigma _{B}^{2}+2\operatorname{cov}(A,B)$, where $\operatorname{cov}(A,B)$ is the covariance.

Function: $X=cA$
Uncertainty: $\Delta X=|c|\cdot \Delta A$; $\sigma _{X}=|c|\cdot \sigma _{A}$.

Function: $X=A\cdot B$
Uncertainty: $\frac{\Delta X}{X}=\frac{\Delta A}{A}+\frac{\Delta B}{B}+\frac{\Delta A\cdot \Delta B}{A\cdot B}$ [1], or approximately $\frac{\Delta X}{X}=\frac{\Delta A}{A}+\frac{\Delta B}{B}$ [2]; $\left(\frac{\sigma _{X}}{X}\right)^{2}=\left(\frac{\sigma _{A}}{A}\right)^{2}+\left(\frac{\sigma _{B}}{B}\right)^{2}+\left(\frac{\sigma _{A}\cdot \sigma _{B}}{A\cdot B}\right)^{2}$ [1], or approximately $\left(\frac{\sigma _{X}}{X}\right)^{2}=\left(\frac{\sigma _{A}}{A}\right)^{2}+\left(\frac{\sigma _{B}}{B}\right)^{2}$ [2]. Non-independent: $\sigma _{X}^{2}=B^{2}\sigma _{A}^{2}+A^{2}\sigma _{B}^{2}+2A\cdot B\cdot E_{11}+2A\cdot E_{12}+2B\cdot E_{21}+E_{22}-E_{11}^{2}$, where $E_{ij}=E\left((\Delta A)^{i}\cdot (\Delta B)^{j}\right)$ and $\Delta A=a-A$ [1].

Function: $X=A\cdot B\cdot C$
Uncertainty: $\frac{\Delta X}{X}=\frac{\Delta A}{A}+\frac{\Delta B}{B}+\frac{\Delta C}{C}+\frac{\Delta A\cdot \Delta B}{A\cdot B}+\frac{\Delta A\cdot \Delta C}{A\cdot C}+\frac{\Delta B\cdot \Delta C}{B\cdot C}+\frac{\Delta A\cdot \Delta B\cdot \Delta C}{A\cdot B\cdot C}$ [3][1], or approximately $\frac{\Delta X}{X}=\frac{\Delta A}{A}+\frac{\Delta B}{B}+\frac{\Delta C}{C}$ [2]; $\left(\frac{\sigma _{X}}{X}\right)^{2}=\left(\frac{\sigma _{A}}{A}\right)^{2}+\left(\frac{\sigma _{B}}{B}\right)^{2}+\left(\frac{\sigma _{C}}{C}\right)^{2}+\left(\frac{\sigma _{A}\cdot \sigma _{B}}{A\cdot B}\right)^{2}+\left(\frac{\sigma _{A}\cdot \sigma _{C}}{A\cdot C}\right)^{2}+\left(\frac{\sigma _{B}\cdot \sigma _{C}}{B\cdot C}\right)^{2}+\left(\frac{\sigma _{A}\cdot \sigma _{B}\cdot \sigma _{C}}{A\cdot B\cdot C}\right)^{2}$ [4][1], or approximately $\left(\frac{\sigma _{X}}{X}\right)^{2}=\left(\frac{\sigma _{A}}{A}\right)^{2}+\left(\frac{\sigma _{B}}{B}\right)^{2}+\left(\frac{\sigma _{C}}{C}\right)^{2}$ [2].

Function: $X=A^{i}\cdot B^{j}$
Uncertainty: $\frac{\Delta X}{X}=|i|\frac{\Delta A}{A}+|j|\frac{\Delta B}{B}$ [2]; $\left(\frac{\sigma _{X}}{X}\right)^{2}=\left(i\frac{\sigma _{A}}{A}\right)^{2}+\left(j\frac{\sigma _{B}}{B}\right)^{2}$ [2], equivalently $\sigma _{X}^{2}=\left(i\cdot A^{i-1}\cdot B^{j}\right)^{2}\cdot \sigma _{A}^{2}+\left(j\cdot A^{i}\cdot B^{j-1}\right)^{2}\cdot \sigma _{B}^{2}$ [2].

Function: $X=\ln(A)$
Uncertainty: $\Delta X=\frac{\Delta A}{A}$ [2]; $\sigma _{X}=\frac{\sigma _{A}}{A}$ [2].

Function: $X=e^{A}$
Uncertainty: $\Delta X=e^{A}\cdot \Delta A$ [2]; $\frac{\sigma _{X}}{X}=\sigma _{A}$ [2].
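The exact product-rule variance above can be checked numerically. A small Monte Carlo sketch (the sample size, seed, and example values are arbitrary choices of ours) compares the simulated standard deviation of X = A·B against the table's exact formula for independent normal A and B:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = 10.0, 20.0            # true means
sA, sB = 1.0, 2.0            # standard deviations

# Simulate X = A * B for independent normal A and B.
a = rng.normal(A, sA, 1_000_000)
b = rng.normal(B, sB, 1_000_000)
sim_std = (a * b).std()

# Exact table formula: (sigma_X/X)^2 = (sA/A)^2 + (sB/B)^2 + (sA*sB/(A*B))^2
exact_std = A * B * np.sqrt((sA / A) ** 2 + (sB / B) ** 2
                            + (sA * sB / (A * B)) ** 2)

print(sim_std, exact_std)    # both close to ~28.35
```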
Partial derivatives
Given $X=f(A,B,C,\cdots)$:
Absolute error: $$\Delta X=\left|{\frac {\partial f}{\partial A}}\right|\cdot \Delta A+\left|{\frac {\partial f}{\partial B}}\right|\cdot \Delta B+\left|{\frac {\partial f}{\partial C}}\right|\cdot \Delta C+\cdots$$
Variance: $$\sigma _{X}^{2}=\left({\frac {\partial f}{\partial A}}\sigma _{A}\right)^{2}+\left({\frac {\partial f}{\partial B}}\sigma _{B}\right)^{2}+\left({\frac {\partial f}{\partial C}}\sigma _{C}\right)^{2}+\cdots$$ [5]
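These general formulas lend themselves to symbolic evaluation. A sketch using SymPy (the helper name propagated_sigma is our own; any differentiable expression works) that builds the first-order variance formula automatically:

```python
import sympy as sp

def propagated_sigma(expr, sigmas):
    """Return sqrt(sum((d expr/d v * sigma_v)^2)) for {variable: sigma} pairs."""
    var = sum((sp.diff(expr, v) * s) ** 2 for v, s in sigmas.items())
    return sp.sqrt(var)

A, B = sp.symbols('A B', positive=True)
sA, sB = sp.symbols('sigma_A sigma_B', positive=True)

# First-order variance formula for X = A*B with independent A and B
sigma_X = propagated_sigma(A * B, {A: sA, B: sB})
print(sp.simplify(sigma_X))   # sqrt(A**2*sigma_B**2 + B**2*sigma_A**2)
```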
Example calculation: Inverse tangent function
We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error.
Define
$$f(\theta )=\arctan \theta ,$$
where $\sigma _{\theta }$ is the absolute uncertainty on our measurement of $\theta $. The partial derivative of $f(\theta )$ with respect to $\theta $ is
$$\frac {\partial f}{\partial \theta }=\frac {1}{1+\theta ^{2}}.$$
Therefore, our propagated uncertainty is
$$\sigma _{f}=\frac {\sigma _{\theta }}{1+\theta ^{2}},$$
where $\sigma _{f}$ is the absolute propagated uncertainty.
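For instance, with illustrative values θ = 1 and σ_θ = 0.05 (numbers of our own choosing, not from the article), the propagated uncertainty can be computed directly:

```python
import math

theta, sigma_theta = 1.0, 0.05            # illustrative measurement and uncertainty

f = math.atan(theta)                      # arctan(1) = pi/4 ~ 0.7854
sigma_f = sigma_theta / (1 + theta ** 2)  # propagated uncertainty

print(f"f = {f:.4f} +/- {sigma_f:.4f}")   # f = 0.7854 +/- 0.0250
```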
Example application: Resistance measurement
A practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm's law,
$$R=V/I.$$
Given the measured variables with uncertainties, I ± ΔI and V ± ΔV, the uncertainty in the computed quantity, ΔR, is
$$\Delta R=\left(\left({\frac {\Delta V}{I}}\right)^{2}+\left({\frac {V}{I^{2}}}\,\Delta I\right)^{2}\right)^{1/2}=R{\sqrt {\left({\frac {\Delta V}{V}}\right)^{2}+\left({\frac {\Delta I}{I}}\right)^{2}}}.$$
Thus, in this simple case, the relative error ΔR/R is simply the square root of the sum of the squares of the two relative errors of the measured variables.
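With illustrative measurements (values of our own choosing) V = 12.0 ± 0.1 V and I = 2.00 ± 0.02 A, the computation looks like:

```python
import math

V, dV = 12.0, 0.1     # voltage and its uncertainty (volts)
I, dI = 2.00, 0.02    # current and its uncertainty (amperes)

R = V / I                                           # Ohm's law
dR = R * math.sqrt((dV / V) ** 2 + (dI / I) ** 2)   # propagated uncertainty

print(f"R = {R:.3f} +/- {dR:.3f} ohm")  # R = 6.000 +/- 0.078 ohm
```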
Notes
^ a b c d e Goodman, Leo A. (December 1960). "On the Exact Variance of Products". Journal of the American Statistical Association. 55 (292): 708–713. Retrieved 2007-04-20.
^ a b c d e f g h i j k Only an approximation. For example, $X=A^{-3}\cdot B^{2}$ with $A=10$, $B=20$, $\Delta A=1$, $\Delta B=2$ gives
$$\frac {\Delta X}{X}=|-3|\,\frac {1}{10}+|2|\,\frac {2}{20}=\frac {1}{2},$$
so
$$\begin{aligned}(A-\Delta A)^{-3}\cdot (B+\Delta B)^{2}&\approx (A^{-3}\cdot B^{2})\cdot \left(1+{\frac {\Delta X}{X}}\right)\\9^{-3}\cdot 22^{2}&\approx (10^{-3}\cdot 20^{2})\cdot {\frac {3}{2}}\\{\frac {484}{729}}&\approx {\frac {3}{5}}\\0.6639&\approx 0.6\end{aligned}$$
As such, these formulas should be used with caution and, if possible, replaced with a more exact formulation.
^ For $X=A\cdot B\cdot C$:
$$\frac {\Delta X}{X}=\frac {\Delta (A\cdot B)}{A\cdot B}+\frac {\Delta C}{C}+\frac {\Delta (A\cdot B)\cdot \Delta C}{A\cdot B\cdot C}$$
$$\frac {\Delta (A\cdot B)}{A\cdot B}=\frac {\Delta A}{A}+\frac {\Delta B}{B}+\frac {\Delta A\cdot \Delta B}{A\cdot B}$$
$$\frac {\Delta X}{X}=\frac {\Delta A}{A}+\frac {\Delta B}{B}+\frac {\Delta A\cdot \Delta B}{A\cdot B}+\frac {\Delta C}{C}+\left(\frac {\Delta A}{A}+\frac {\Delta B}{B}+\frac {\Delta A\cdot \Delta B}{A\cdot B}\right)\cdot \frac {\Delta C}{C}$$
$$\frac {\Delta X}{X}=\frac {\Delta A}{A}+\frac {\Delta B}{B}+\frac {\Delta C}{C}+\frac {\Delta A\cdot \Delta B}{A\cdot B}+\frac {\Delta A\cdot \Delta C}{A\cdot C}+\frac {\Delta B\cdot \Delta C}{B\cdot C}+\frac {\Delta A\cdot \Delta B\cdot \Delta C}{A\cdot B\cdot C}$$
^ For $X=A\cdot B\cdot C$:
$$\sigma _{X}^{2}=C^{2}\cdot \sigma _{A\cdot B}^{2}+(A\cdot B)^{2}\cdot \sigma _{C}^{2}+\sigma _{A\cdot B}^{2}\cdot \sigma _{C}^{2}$$
$$\sigma _{A\cdot B}^{2}=B^{2}\cdot \sigma _{A}^{2}+A^{2}\cdot \sigma _{B}^{2}+\sigma _{A}^{2}\cdot \sigma _{B}^{2}$$
$$\sigma _{X}^{2}=C^{2}\cdot (B^{2}\sigma _{A}^{2}+A^{2}\sigma _{B}^{2}+\sigma _{A}^{2}\sigma _{B}^{2})+(A\cdot B)^{2}\sigma _{C}^{2}+(B^{2}\sigma _{A}^{2}+A^{2}\sigma _{B}^{2}+\sigma _{A}^{2}\sigma _{B}^{2})\cdot \sigma _{C}^{2}$$
$$\sigma _{X}^{2}=B^{2}C^{2}\sigma _{A}^{2}+A^{2}C^{2}\sigma _{B}^{2}+A^{2}B^{2}\sigma _{C}^{2}+C^{2}\sigma _{A}^{2}\sigma _{B}^{2}+B^{2}\sigma _{A}^{2}\sigma _{C}^{2}+A^{2}\sigma _{B}^{2}\sigma _{C}^{2}+\sigma _{A}^{2}\sigma _{B}^{2}\sigma _{C}^{2}$$
$$\left(\frac {\sigma _{X}}{X}\right)^{2}=\left(\frac {\sigma _{A}}{A}\right)^{2}+\left(\frac {\sigma _{B}}{B}\right)^{2}+\left(\frac {\sigma _{C}}{C}\right)^{2}+\left(\frac {\sigma _{A}\cdot \sigma _{B}}{A\cdot B}\right)^{2}+\left(\frac {\sigma _{A}\cdot \sigma _{C}}{A\cdot C}\right)^{2}+\left(\frac {\sigma _{B}\cdot \sigma _{C}}{B\cdot C}\right)^{2}+\left(\frac {\sigma _{A}\cdot \sigma _{B}\cdot \sigma _{C}}{A\cdot B\cdot C}\right)^{2}$$
^ Lindberg, Vern (2000-07-01). "Uncertainties and Error Propagation". Uncertainties, Graphing, and the Vernier Caliper. Rochester Institute of Technology. p. 1. Retrieved 2007-04-20.