Brier score

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by AndrewTutt (talk | contribs) at 18:22, 28 July 2007. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The Brier score measures the accuracy of a set of probability assessments. Proposed by Brier (1950), it is the mean squared difference between the predicted probabilities for a set of events and their actual outcomes, so a lower score represents higher accuracy.


Definition of the Brier Score

Suppose a probability forecast must be given for each of N binary events – such as forecasts of rain. For event i, the forecast issued says that there is a probability Pi that the event will occur. Let Xi = 1 if the event occurs and Xi = 0 if it does not.

Then the Brier score is given by:

BS = (1/N) Σi=1..N (Pi − Xi)²

  • If you forecast 100% (Pi = 1) and there is at least 0.01 inches of rain in the bucket, your Brier score is 0, or "perfect".
  • If you forecast 100% (Pi = 1) and there is no rain in the bucket, your Brier score is (1-0)^2 = 1, or "awful".
  • If you forecast 70% (Pi = 0.70) and there is at least 0.01 inches of rain in the bucket, your Brier score is (0.70-1)^2 = 0.09, or "not too shabby".
  • If you forecast 30% (Pi = 0.30) and there is at least 0.01 inches of rain in the bucket, your Brier score is (0.30-1)^2 = 0.49, or "needs work".
  • If you hedge your forecast with 50% (Pi = 0.50), your Brier score is 0.25 whether or not there is at least 0.01 inches of rain in the bucket, or "no courage".
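The definition and the single-event examples above can be checked with a short sketch (the function name and argument layout are illustrative, not from any standard library):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities Pi
    and binary outcomes Xi (1 if the event occurred, else 0)."""
    n = len(forecasts)
    return sum((p - x) ** 2 for p, x in zip(forecasts, outcomes)) / n

# A perfect forecast of rain that verifies:
print(brier_score([1.0], [1]))        # 0.0
# A 70% forecast of rain that verifies:
print(brier_score([0.7], [1]))        # (0.70 - 1)^2 = 0.09
# Hedged 50% forecasts score 0.25 regardless of the outcome:
print(brier_score([0.5, 0.5], [1, 0]))
```

With more than one event, the score is simply the average of the per-event squared errors, as in the last call.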

In weather forecasting, a trace of rain (less than 0.01 inches) is counted as 0.0, i.e. the event did not occur.

References

  • Brier, G. W. (1950). "Verification of forecasts expressed in terms of probability". Monthly Weather Review, 78, 1–3.