# Forecasting


Forecasting is the process of making statements about events whose actual outcomes (typically) have not yet been observed. A commonplace example might be estimation of some variable of interest at some specified future date. Prediction is a similar, but more general term. Both might refer to formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods. Usage can differ between areas of application: for example, in hydrology, the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.

Risk and uncertainty are central to forecasting and prediction; it is generally considered good practice to indicate the degree of uncertainty attaching to forecasts. In any case, the data must be up to date in order for the forecast to be as accurate as possible.[1]

## Categories of forecasting methods

### Qualitative vs. quantitative methods

Qualitative forecasting techniques are subjective, based on the opinions and judgments of consumers and experts; they are appropriate when past data are not available. They are usually applied to intermediate- or long-range decisions. Examples of qualitative forecasting methods are[citation needed] informed opinion and judgment, the Delphi method, market research, and historical life-cycle analogy.

Quantitative forecasting models are used to forecast future data as a function of past data; they are appropriate when past data are available. These methods are usually applied to short- or intermediate-range decisions. Examples of quantitative forecasting methods are[citation needed] last period demand, simple and weighted N-Period moving averages, simple exponential smoothing, and multiplicative seasonal indexes.
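As an illustration, the moving averages and simple exponential smoothing mentioned above can be sketched in a few lines of Python (a minimal sketch with hypothetical demand data; the function names are my own, not from any standard library):

```python
def moving_average_forecast(history, n):
    """Forecast the next period as the mean of the last n observations."""
    return sum(history[-n:]) / n

def weighted_moving_average_forecast(history, weights):
    """Forecast from the last len(weights) observations; weights should sum
    to 1 and are ordered oldest to newest."""
    recent = history[-len(weights):]
    return sum(w * y for w, y in zip(weights, recent))

def exponential_smoothing_forecast(history, alpha):
    """Simple exponential smoothing: each observation updates the level by a
    fraction alpha; the forecast for the next period is the final level."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

demand = [100, 110, 105, 115, 120]          # hypothetical demand history
print(moving_average_forecast(demand, 3))   # mean of the last 3 periods
print(exponential_smoothing_forecast(demand, 0.5))
```

Last-period demand is the special case of a 1-period moving average, i.e. `moving_average_forecast(history, 1)`.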

### Naïve approach

Naïve forecasts are the most cost-effective objective forecasting model, and provide a benchmark against which more sophisticated models can be compared. For stationary time series data, this approach says that the forecast for any period equals the historical average. For time series data that are stationary in terms of first differences, the naïve forecast equals the previous period's actual value.
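The two naïve rules described above translate directly into code (a minimal sketch; function names are my own):

```python
def naive_forecast_level(history):
    """Naïve forecast for a series that is stationary in level:
    the historical average."""
    return sum(history) / len(history)

def naive_forecast_differences(history):
    """Naïve forecast for a series stationary in first differences:
    the previous period's actual value."""
    return history[-1]

sales = [20, 22, 21, 23]                 # hypothetical series
print(naive_forecast_level(sales))       # 21.5
print(naive_forecast_differences(sales)) # 23
```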

### Time series methods

Time series methods use historical data as the basis of estimating future outcomes.

A well-known example is the Box–Jenkins (ARIMA) methodology.

### Causal / econometric forecasting methods

Some forecasting methods try to identify the underlying factors that might influence the variable that is being forecast. For example, including information about climate patterns might improve the ability of a model to predict umbrella sales. Forecasting models often take account of regular seasonal variations. In addition to climate, such variations can also be due to holidays and customs: for example, one might predict that sales of college football apparel will be higher during the football season than during the off season.[2]

Several informal methods used in causal forecasting do not employ strict algorithms[clarification needed], but instead use the judgment of the forecaster. Some forecasts take account of past relationships between variables: if one variable has, for example, been approximately linearly related to another for a long period of time, it may be appropriate to extrapolate such a relationship into the future, without necessarily understanding the reasons for the relationship.

Causal methods include:

• Regression analysis includes a large group of methods for predicting future values of a variable using information about other variables. These methods include both parametric (linear or non-linear) and non-parametric techniques.
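In the simplest parametric case, a linear regression relates the forecast variable to an explanatory one. A sketch with hypothetical data, echoing the umbrella-sales example above:

```python
def linear_regression(x, y):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical data: monthly rainfall (mm) vs. umbrella sales (units)
rainfall = [30, 45, 60, 80, 100]
sales = [120, 150, 185, 230, 270]
a, b = linear_regression(rainfall, sales)

expected_rainfall = 70                  # forecast for next month's rainfall
print(a + b * expected_rainfall)        # implied umbrella-sales forecast
```

Note that, as the text warns, such an extrapolation relies on the past linear relationship continuing to hold, not on any causal understanding.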

Quantitative forecasting models are often judged against each other by comparing their in-sample or out-of-sample mean square error, although some researchers have advised against this.[4]

### Judgmental methods

Judgmental forecasting methods incorporate intuitive judgements, opinions and subjective probability estimates.

### Artificial intelligence methods

Today such forecasts are often produced by specialized programs loosely labeled as machine learning or data mining software.

## Forecasting accuracy

The forecast error is the difference between the actual value and the forecast value for the corresponding period.

$E_t = Y_t - F_t$

where $E_t$ is the forecast error at period $t$, $Y_t$ is the actual value at period $t$, and $F_t$ is the forecast for period $t$.

Measures of aggregate error:

• Mean absolute error (MAE): $MAE = \frac{\sum_{t=1}^{N} |E_t|}{N}$

• Mean absolute percentage error (MAPE): $MAPE = \frac{\sum_{t=1}^{N} \left|\frac{E_t}{Y_t}\right|}{N}$

• Mean absolute deviation (MAD): $MAD = \frac{\sum_{t=1}^{N} |E_t|}{N}$

• Percent mean absolute deviation (PMAD): $PMAD = \frac{\sum_{t=1}^{N} |E_t|}{\sum_{t=1}^{N} |Y_t|}$

• Mean squared error (MSE) or mean squared prediction error (MSPE): $MSE = \frac{\sum_{t=1}^{N} E_t^2}{N}$

• Root mean squared error (RMSE): $RMSE = \sqrt{\frac{\sum_{t=1}^{N} E_t^2}{N}}$

• Forecast skill (SS): $SS = 1 - \frac{MSE_{forecast}}{MSE_{ref}}$

• Average of errors ($\bar{E}$): $\bar{E} = \frac{\sum_{i=1}^{N} E_i}{N}$
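The aggregate error measures above translate directly into code (a minimal sketch; the function name is my own):

```python
from math import sqrt

def error_measures(actual, forecast):
    """Compute aggregate error measures for paired actual/forecast values."""
    errors = [y - f for y, f in zip(actual, forecast)]   # E_t = Y_t - F_t
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n                       # MAE (= MAD)
    mape = sum(abs(e / y) for e, y in zip(errors, actual)) / n  # MAPE
    pmad = sum(abs(e) for e in errors) / sum(abs(y) for y in actual)
    mse = sum(e * e for e in errors) / n                        # MSE
    rmse = sqrt(mse)                                            # RMSE
    mean_error = sum(errors) / n                                # average error
    return {"MAE": mae, "MAPE": mape, "PMAD": pmad,
            "MSE": mse, "RMSE": rmse, "MeanError": mean_error}

actual = [100, 110, 120, 130]     # hypothetical actuals
forecast = [98, 112, 118, 135]    # hypothetical forecasts
print(error_measures(actual, forecast))
```

Forecast skill would then be `1 - mse_forecast / mse_reference`, comparing the model's MSE against that of a reference forecast such as the naïve method.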

Business forecasters and practitioners sometimes use different terminology: they refer to the PMAD as the MAPE, although they compute it as a volume-weighted MAPE.[citation needed] For more information see Calculating demand forecast accuracy.

## Applications

Climate change and increasing energy prices have led to the use of Egain Forecasting for buildings. This attempts to reduce the energy needed to heat the building, thus reducing the emission of greenhouse gases. Forecasting is used in Customer Demand Planning in everyday business for manufacturing and distribution companies.

Forecasting has also been used to predict the development of conflict situations. Forecasters perform research that uses empirical results to gauge the effectiveness of certain forecasting models.[5] However research has shown that there is little difference between the accuracy of the forecasts of experts knowledgeable in the conflict situation and those by individuals who knew much less.[6]

Similarly, experts in some studies argue that role thinking[clarification needed] does not contribute to the accuracy of the forecast.[7] The discipline of demand planning, also sometimes referred to as supply chain forecasting, embraces both statistical forecasting and a consensus process. An important, albeit often ignored, aspect of forecasting is its relationship with planning: forecasting predicts what the future will look like, whereas planning prescribes what the future should look like.[8][9] There is no single right forecasting method; selection of a method should be based on the forecaster's objectives and conditions (data, etc.).[10] Decision aids such as selection trees can assist in choosing a method.[11] Forecasting has applications in many situations.

## Limitations

Limitations pose barriers beyond which forecasting methods cannot reliably predict.

### Performance limits of fluid dynamics equations

As Edward Lorenz proposed in 1963, long-range weather forecasts (those made at a range of two weeks or more) cannot definitively predict the state of the atmosphere, owing to the chaotic nature of the fluid-dynamics equations involved. Extremely small errors in the initial input, such as temperatures and winds, double roughly every five days within numerical models.[13]
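Doubling every five days implies exponential growth of the initial error: after $t$ days an initial error $\epsilon_0$ has grown to $\epsilon_0 \cdot 2^{t/5}$, so over a two-week horizon it is amplified roughly sevenfold. A back-of-the-envelope check:

```python
def error_growth_factor(days, doubling_days=5):
    """Amplification of an initial error that doubles every doubling_days days."""
    return 2 ** (days / doubling_days)

print(round(error_growth_factor(14), 2))  # ~6.96x over a two-week horizon
```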

### Complexity introduced by the technological singularity

The technological singularity is the theoretical emergence of superintelligence through technological means.[14] Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the technological singularity is seen as an occurrence beyond which events cannot be predicted.

Ray Kurzweil predicts the singularity will occur around 2045 while Vernor Vinge predicts it will happen some time before 2030.

## References

1. ^ Scott Armstrong, Fred Collopy, Andreas Graefe and Kesten C. Green. "Answers to Frequently Asked Questions". Retrieved May 15, 2013.
2. ^ Nahmias, Steven (2009). Production and Operations Analysis.
3. ^ Ellis, Kimberly (2008). Production Planning and Inventory Control Virginia Tech. McGraw Hill. ISBN 978-0-390-87106-0.
4. ^ J. Scott Armstrong and Fred Collopy (1992). "Error Measures For Generalizing About Forecasting Methods: Empirical Comparisons". International Journal of Forecasting 8: 69–80.
5. ^ J. Scott Armstrong, Kesten C. Green and Andreas Graefe (2010). "Answers to Frequently Asked Questions".
6. ^ Kesten C. Green and J. Scott Armstrong (2007). "The Ombudsman: Value of Expertise for Forecasting Decisions in Conflicts". Interfaces (INFORMS) 0: 1–12.
7. ^ Kesten C. Green and J. Scott Armstrong. "Role thinking: Standing in other people's shoes to forecast decisions in conflicts" 39: 111–116.
8. ^ "FAQ". Forecastingprinciples.com. 1998-02-14. Retrieved 2012-08-28.
9. ^ Kesten C. Green and J. Scott Armstrong. [http://www.qbox.wharton.upenn.edu/documents/mktg/research/INTFOR3581%20-%20Publication%2015.pdf "Structured analogies for forecasting"] (PDF). qbox.wharton.upenn.edu.
10. ^ "FAQ". Forecastingprinciples.com. 1998-02-14. Retrieved 2012-08-28.
11. ^ "Selection Tree". Forecastingprinciples.com. 1998-02-14. Retrieved 2012-08-28.
12. ^ J. Scott Armstrong (1983). "Relative Accuracy of Judgmental and Extrapolative Methods in Forecasting Annual Earnings". Journal of Forecasting 2: 437–447.
13. ^ Cox, John D. (2002). Storm Watchers. John Wiley & Sons, Inc. pp. 222–224. ISBN 0-471-38108-X.
14. ^ Superintelligence. Answer to the 2009 EDGE QUESTION: "WHAT WILL CHANGE EVERYTHING?": http://www.nickbostrom.com/views/superintelligence.pdf