# Mean absolute percentage error

The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of prediction accuracy of a forecasting method in statistics. It usually expresses the accuracy as a ratio defined by the formula:

${\displaystyle {\mbox{MAPE}}={\frac {100\%}{n}}\sum _{t=1}^{n}\left|{\frac {A_{t}-F_{t}}{A_{t}}}\right|}$

where At is the actual value and Ft is the forecast value. Their difference is divided by the actual value At. The absolute value of this ratio is summed for every forecasted point in time and divided by the number of fitted points n.
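The formula above can be implemented directly; the following is a minimal sketch using NumPy (the function name `mape` and the example values are ours):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, expressed in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    # |A_t - F_t| / |A_t|, averaged over all points, scaled by 100%
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

print(mape([100, 200, 400], [110, 180, 400]))  # ≈ 6.67 (percent)
```

Note that the division by `actual` already hints at the main practical limitation discussed later: the measure is undefined when any actual value is zero.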

## MAPE in regression problems

Mean absolute percentage error is commonly used as a loss function for regression problems and in model evaluation, because of its very intuitive interpretation in terms of relative error.

### Definition

Consider a standard regression setting in which the data are fully described by a random pair ${\displaystyle Z=(X,Y)}$ with values in ${\displaystyle \mathbb {R} ^{d}\times \mathbb {R} }$, and n i.i.d. copies ${\displaystyle (X_{1},Y_{1}),...,(X_{n},Y_{n})}$ of ${\displaystyle (X,Y)}$. Regression modeling aims at finding a good model for the pair, that is, a measurable function g from ${\displaystyle \mathbb {R} ^{d}}$ to ${\displaystyle \mathbb {R} }$ such that ${\displaystyle g(X)}$ is close to Y.

In the classical regression setting, the closeness of ${\displaystyle g(X)}$ to Y is measured via the L2 risk, also called the mean squared error (MSE). In the MAPE regression context,[1] the closeness of ${\displaystyle g(X)}$ to Y is measured via the MAPE, and the aim of MAPE regressions is to find a model ${\displaystyle g_{\text{MAPE}}}$ such that:

${\displaystyle g_{\text{MAPE}}(x)=\arg \min _{g\in {\mathcal {G}}}\mathbb {E} \left[\left|{\frac {g(X)-Y}{Y}}\right||X=x\right]}$

where ${\displaystyle {\mathcal {G}}}$ is the class of models considered (e.g. linear models).

### In practice

In practice ${\displaystyle g_{\text{MAPE}}(x)}$ can be estimated by the empirical risk minimization strategy, leading to

${\displaystyle {\widehat {g}}_{\text{MAPE}}(x)=\arg \min _{g\in {\mathcal {G}}}\sum _{i=1}^{n}\left|{\frac {g(X_{i})-Y_{i}}{Y_{i}}}\right|}$

From a practical point of view, the use of the MAPE as a quality function for regression models is equivalent to doing weighted mean absolute error (MAE) regression, also known as quantile regression. This property follows immediately, since

${\displaystyle {\widehat {g}}_{\text{MAPE}}(x)=\arg \min _{g\in {\mathcal {G}}}\sum _{i=1}^{n}\omega (Y_{i})\left|g(X_{i})-Y_{i}\right|{\mbox{ with }}\omega (Y_{i})=\left|{\frac {1}{Y_{i}}}\right|}$

As a consequence, the MAPE is very easy to use in practice, for example with existing libraries for quantile regression that allow observation weights.
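The equivalence between empirical MAPE minimization and weighted MAE regression can be checked numerically. The sketch below restricts ${\displaystyle {\mathcal {G}}}$ to constant models, for which the weighted-MAE minimizer is a weighted median of the responses; the sample, seed, and grid resolution are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=0.5, size=2000)  # positive responses

def empirical_mape(c):
    """Empirical MAPE risk of the constant prediction c."""
    return np.mean(np.abs((c - y) / y))

# (1) Direct empirical risk minimization over a fine grid of constants
grid = np.linspace(y.min(), y.max(), 4001)
c_direct = grid[int(np.argmin([empirical_mape(c) for c in grid]))]

# (2) Weighted MAE: sum_i (1/|y_i|) * |c - y_i| is minimized by a
#     weighted median of the y_i with weights omega(y_i) = 1/|y_i|
w = 1.0 / np.abs(y)
order = np.argsort(y)
cum_w = np.cumsum(w[order])
c_weighted_median = y[order][np.searchsorted(cum_w, 0.5 * cum_w[-1])]

# Both strategies yield (numerically) the same minimizer
print(c_direct, c_weighted_median)
```

In practice one would of course use a richer model class than constants, e.g. a weighted quantile regression at the median, but the mechanism is the same.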

### Consistency

The use of the MAPE as a loss function for regression analysis is feasible both from a practical point of view and from a theoretical one, since the existence of an optimal model and the consistency of the empirical risk minimization can be proved.[1]

## Issues

Although the concept of MAPE sounds very simple and convincing, it has major drawbacks in practical application,[2] and there are many studies on shortcomings and misleading results from MAPE.[3][4]

• It cannot be used if there are zero or close-to-zero values (which sometimes happens, for example in demand data) because there would be a division by zero or values of MAPE tending to infinity.[5]
• For forecasts which are too low the percentage error cannot exceed 100%, but for forecasts which are too high there is no upper limit to the percentage error.
• MAPE puts a heavier penalty on negative errors, ${\displaystyle A_{t}<F_{t}}$, than on positive errors.[6] As a consequence, when MAPE is used to compare the accuracy of prediction methods it is biased in that it will systematically select a method whose forecasts are too low. This little-known but serious issue can be overcome by using an accuracy measure based on the logarithm of the accuracy ratio (the ratio of the predicted to actual value), given by ${\textstyle \log \left({\frac {\text{predicted}}{\text{actual}}}\right)}$. This approach leads to superior statistical properties and also leads to predictions which can be interpreted in terms of the geometric mean.[2]
• It is often assumed that the MAPE is optimized at the median. However, for example, a log-normal distribution has a median of ${\displaystyle e^{\mu }}$, whereas its MAPE-optimal prediction is ${\displaystyle e^{\mu -\sigma ^{2}}}$.
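The log-normal claim in the last point can be checked by simulation. The sketch below finds the best constant prediction under the empirical MAPE by grid search and compares it to the median ${\displaystyle e^{\mu }}$ and to ${\displaystyle e^{\mu -\sigma ^{2}}}$; the parameters, seed, and grid are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.0, 0.8
y = rng.lognormal(mu, sigma, size=100_000)

# Best constant prediction under the empirical MAPE, via grid search
grid = np.linspace(0.2, 2.0, 601)
risks = [np.mean(np.abs((c - y) / y)) for c in grid]
c_best = grid[int(np.argmin(risks))]

print("median              :", np.exp(mu))             # 1.0
print("empirical optimum   :", c_best)                 # close to exp(mu - sigma^2)
print("exp(mu - sigma^2)   :", np.exp(mu - sigma**2))  # ≈ 0.527
```

The empirical MAPE optimum lands well below the median, near ${\displaystyle e^{\mu -\sigma ^{2}}}$, illustrating the systematic low bias of MAPE-optimal forecasts.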

To overcome these issues with MAPE, several alternative measures have been proposed in the literature: