# Mean signed deviation

In statistics, the mean signed difference, deviation, or error (MSD) is a sample statistic that summarises how well a set of estimates ${\displaystyle {\hat {\theta }}_{i}}$ matches the quantities ${\displaystyle \theta _{i}}$ that they are supposed to estimate. It is one of a number of statistics that can be used to assess an estimation procedure, and it is often used in conjunction with a sample version of the mean squared error.
For example, suppose a linear regression model has been estimated over a sample of data, and is then used to extrapolate predictions of the dependent variable out of sample after the out-of-sample data points have become available. Then ${\displaystyle \theta _{i}}$ would be the i-th out-of-sample value of the dependent variable, and ${\displaystyle {\hat {\theta }}_{i}}$ would be its predicted value. The mean signed deviation is the average value of ${\displaystyle {\hat {\theta }}_{i}-\theta _{i}.}$
The mean signed difference is derived from a set of n pairs, ${\displaystyle ({\hat {\theta }}_{i},\theta _{i})}$, where ${\displaystyle {\hat {\theta }}_{i}}$ is an estimate of the parameter ${\displaystyle \theta }$ in a case where it is known that ${\displaystyle \theta =\theta _{i}}$. In many applications, all the quantities ${\displaystyle \theta _{i}}$ will share a common value. When applied to forecasting in a time series analysis context, a forecasting procedure might be evaluated using the mean signed difference, with ${\displaystyle {\hat {\theta }}_{i}}$ being the predicted value of a series at a given lead time and ${\displaystyle \theta _{i}}$ being the value of the series eventually observed for that time-point. The mean signed difference is defined to be
${\displaystyle \operatorname {MSD} ({\hat {\theta }})={\frac {1}{n}}\sum _{i=1}^{n}({\hat {\theta }}_{i}-\theta _{i}).}$
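The definition above translates directly into code. The following is a minimal sketch in plain Python (the function name and example values are illustrative, not from the article); it averages the signed differences between each estimate and its corresponding true value, so positive and negative errors can cancel:

```python
def mean_signed_deviation(estimates, actuals):
    """Average of (estimate - actual) over all pairs: the MSD."""
    if len(estimates) != len(actuals):
        raise ValueError("estimates and actuals must have the same length")
    return sum(e - a for e, a in zip(estimates, actuals)) / len(estimates)

# Hypothetical forecasts versus eventually observed values.
predictions = [2.5, 0.0, 2.1, 7.8]
observed = [3.0, -0.5, 2.0, 7.5]

# A positive MSD means the estimates overshoot on average;
# a negative MSD means they undershoot. Here the MSD is about 0.1.
print(mean_signed_deviation(predictions, observed))
```

Note that, unlike the mean squared error, the MSD can be small even when individual errors are large, because errors of opposite sign cancel; it measures bias rather than overall accuracy.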