# Forecast skill

Forecast skill (also called skill score[1] or prediction skill), in the fields of forecasting and prediction, is any measure of the accuracy and/or degree of association of a prediction with an observation or estimate of the actual value of what is being predicted (formally, the predictand).

In meteorology, a motivating application, forecast skill measures the superiority of a weather forecast over a simple historical baseline of past observations. The same forecast methodology can result in different skill measurements at different places, or even in the same place for different seasons (e.g. spring weather might be driven by erratic local conditions, whereas winter cold snaps might correlate with observable polar winds). Forecast skill is often presented in the form of seasonal geographical maps.

Forecast skill for single-value forecasts is commonly represented in terms of metrics such as correlation, root mean squared error, mean absolute error, relative mean absolute error, bias, and the Brier score, among others. A number of scores associated with the concept of entropy in information theory are also used.[2][3]
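As a minimal sketch of how a few of these single-value and probabilistic metrics might be computed (illustrative only, not tied to any particular verification package; the function names and toy data are assumptions):

```python
import math

def verification_metrics(forecasts, observations):
    """Bias, MAE, and RMSE for single-value forecasts (illustrative sketch)."""
    n = len(forecasts)
    errors = [f - o for f, o in zip(forecasts, observations)]
    bias = sum(errors) / n                             # mean error
    mae = sum(abs(e) for e in errors) / n              # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errors) / n)   # root mean squared error
    return bias, mae, rmse

def brier_score(prob_forecasts, outcomes):
    """Brier score for probabilistic forecasts of a binary event (outcomes are 0 or 1)."""
    n = len(prob_forecasts)
    return sum((p - o) ** 2 for p, o in zip(prob_forecasts, outcomes)) / n

# Toy data: forecast vs. observed temperatures, plus event probabilities.
bias, mae, rmse = verification_metrics([21.0, 19.5, 23.0], [20.0, 20.0, 22.0])
bs = brier_score([0.9, 0.1, 0.7], [1, 0, 1])
```

Lower MAE, RMSE, and Brier score values indicate better forecasts; bias near zero indicates the forecast is not systematically high or low.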

The term 'forecast skill' can be used both quantitatively and qualitatively. In the former case, skill could be equal to a statistic describing forecast performance, such as the correlation of the forecast with observations. In the latter case, it could either refer to forecast performance according to a single metric or to the overall forecast performance based on multiple metrics.

## Metrics

Probabilistic forecast skill scores may use metrics such as the Ranked Probability Skill Score (RPSS) or the Continuous RPSS (CRPSS), among others. Categorical skill metrics such as the False Alarm Ratio (FAR), the Probability of Detection (POD), the Critical Success Index (CSI), and the Equitable Threat Score (ETS) are also relevant for some forecasting applications. Skill is often, but not exclusively, expressed as a relative measure that compares the performance of a particular forecast to that of a reference, benchmark prediction, a formulation called a 'skill score'.
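The categorical metrics above can be sketched from the counts of a 2x2 contingency table (a minimal illustration; the toy counts are assumptions, and the ETS is omitted because it additionally requires the correct-rejection count and a random-hit correction):

```python
def categorical_scores(hits, misses, false_alarms):
    """Categorical skill metrics from a 2x2 contingency table (illustrative sketch).

    hits: event forecast and observed; misses: event observed but not forecast;
    false_alarms: event forecast but not observed.
    """
    pod = hits / (hits + misses)                  # Probability of Detection
    far = false_alarms / (hits + false_alarms)    # False Alarm Ratio
    csi = hits / (hits + misses + false_alarms)   # Critical Success Index
    return pod, far, csi

# Toy counts: 80 hits, 20 misses, 40 false alarms.
pod, far, csi = categorical_scores(80, 20, 40)
```

POD and CSI are best at 1, while FAR is best at 0, so the three must be read together: a forecaster can inflate POD simply by forecasting the event more often, at the cost of a worse FAR.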

Forecast skill metric and score calculations should be made over a large enough sample of forecast-observation pairs to be statistically robust. A sample of predictions for a single predictand (e.g., temperature at one location, or a single stock value) typically includes forecasts made on a number of different dates. A sample could also pool forecast-observation pairs across space, for a prediction made on a single date, as in the forecast of a weather event that is verified at many locations.

## Example skill calculation

An example of a skill calculation which uses the error metric 'Mean Squared Error (MSE)' and the associated skill score is given in the table below. In this case, a perfect forecast results in a forecast skill metric of zero, and skill score value of 1.0. A forecast with equal skill to the reference forecast would have a skill score of 0.0, and a forecast which is less skillful than the reference forecast would have unbounded negative skill score values.[4][5]

Skill metric, the mean squared error (MSE):

$$\mathit{MSE} = \frac{\sum_{t=1}^{N} E_t^{2}}{N}$$

The associated skill score (SS):

$$\mathit{SS} = 1 - \frac{\mathit{MSE}_{\text{forecast}}}{\mathit{MSE}_{\text{ref}}}$$
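The MSE-based skill score can be computed in a few lines (a minimal sketch; the use of a constant climatology as the reference forecast and the toy values are assumptions):

```python
def mse(forecasts, observations):
    """Mean squared error of forecasts against observations."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / len(forecasts)

def skill_score(forecasts, reference, observations):
    """SS = 1 - MSE_forecast / MSE_ref.

    1.0 = perfect forecast; 0.0 = no improvement over the reference;
    negative (unbounded below) = worse than the reference.
    """
    return 1.0 - mse(forecasts, observations) / mse(reference, observations)

obs = [20.0, 22.0, 19.0, 24.0]
fcst = [20.5, 21.5, 19.5, 23.5]   # model forecast
clim = [21.0, 21.0, 21.0, 21.0]   # constant climatology as the reference forecast
ss = skill_score(fcst, clim, obs)
```

Here a positive `ss` means the model forecast improves on the climatological reference; reproducing the observations exactly would give `ss = 1.0`.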

A broad range of forecast metrics can be found in published and online resources. A good starting point is the Australian Bureau of Meteorology's longstanding verification web pages, maintained for the WWRP/WGNE Joint Working Group on Forecast Verification Research.

A popular textbook and reference that discusses forecast skill is Statistical Methods in the Atmospheric Sciences.[6]

## References

1. ^ Glossary of Meteorology, American Meteorological Society
2. ^ Gneiting, Tilmann; Raftery, Adrian E (2007-03-01). "Strictly Proper Scoring Rules, Prediction, and Estimation". Journal of the American Statistical Association. 102 (477): 359–378. doi:10.1198/016214506000001437. ISSN 0162-1459. Retrieved 2018-02-09.
3. ^ Riccardo Benedetti (2010-01-01). "Scoring Rules for Forecast Verification". Monthly Weather Review. 138 (1): 203–211. Bibcode:2010MWRv..138..203B. doi:10.1175/2009MWR2945.1. Retrieved 2018-02-07.
4. ^ Roebber, Paul J. (1998), "The Regime Dependence of Degree Day Forecast Technique, Skill, and Value", Weather and Forecasting, 13 (3): 783–794, Bibcode:1998WtFor..13..783R, doi:10.1175/1520-0434(1998)013<0783:TRDODD>2.0.CO;2, retrieved 2009-01-19
5. ^ Murphy, Allen H. (1988), "Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient", Monthly Weather Review, 116 (12): 2417–2424, Bibcode:1988MWRv..116.2417M, doi:10.1175/1520-0493(1988)116<2417:SSBOTM>2.0.CO;2, retrieved 2009-01-19
6. ^ Wilks, Daniel. "Statistical Methods in the Atmospheric Sciences". store.elsevier.com (3rd ed.). ISBN 9780123850225. Retrieved 2016-02-01.