Bayesian information criterion

In statistics, the Bayesian information criterion (BIC) or Schwarz criterion (also SBC, SBIC) is a criterion for model selection among a finite set of models. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).

When fitting models, it is possible to increase the likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC.

The BIC was developed by Gideon E. Schwarz, who gave a Bayesian argument for adopting it.[1]

Mathematically

The BIC is an asymptotic result derived under the assumption that the data distribution is in the exponential family. Let:

  * x = the observed data;
  * n = the number of data points in x, i.e. the sample size;
  * k = the number of free parameters to be estimated;
  * p(x|M) = the marginal likelihood of the observed data x given the model M;
  * \hat{L} = the maximized value of the likelihood function for the model M.

The formula for the BIC is:[2]

 -2 \cdot \ln p(x \mid M) \approx \mathrm{BIC} = -2 \cdot \ln \hat{L} + k \cdot (\ln(n) - \ln(2\pi)).

For large n, this can be approximated by:

 \mathrm{BIC} = -2 \cdot \ln \hat{L} + k \cdot \ln(n).
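For concreteness, the large-n formula can be evaluated directly from the maximized log-likelihood of a fitted model. The following minimal Python sketch uses hypothetical log-likelihood values and parameter counts; it illustrates the formula and is not part of the derivation.

    import numpy as np

    def bic(log_likelihood_max, k, n):
        """Large-n approximation: BIC = -2*ln(L-hat) + k*ln(n)."""
        return -2.0 * log_likelihood_max + k * np.log(n)

    # Hypothetical example: two models fitted to the same n = 100 observations.
    bic_simple = bic(log_likelihood_max=-420.3, k=3, n=100)   # ~854.4
    bic_rich = bic(log_likelihood_max=-417.9, k=5, n=100)     # ~858.8
    # The simpler model has the lower BIC here, despite its lower likelihood.
    print(bic_simple, bic_rich)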

Under the assumption that the model errors (disturbances) are independent and identically distributed according to a normal distribution, and that the boundary condition that the derivative of the log-likelihood with respect to the true variance is zero holds, this becomes (up to an additive constant, which depends only on n and not on the model):[3]

 \mathrm{BIC} = n \cdot \ln(\widehat{\sigma_e^2}) + k \cdot \ln(n)

where \widehat{\sigma_e^2} is the error variance.

The error variance in this case is defined as

\widehat{\sigma_e^2} = \frac{1}{n} \sum_{i=1}^n (x_i-\hat{x_i})^2.

Note that \widehat{\sigma_e^2} is a biased estimator of the true error variance, \sigma_e^2. Let \widehat{\widehat{\sigma_e^2}} denote the unbiased estimator of the error variance, defined as

\widehat{\widehat{\sigma_e^2}} = \frac{1}{n-1} \sum_{i=1}^n (x_i-\hat{x_i})^2.
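As a worked illustration of the normal-error form above, the following Python sketch fits two polynomial regressions with NumPy on synthetic data and computes BIC = n ln(\widehat{\sigma_e^2}) + k ln(n) from the biased (maximum-likelihood) residual variance. The data, the choice of polynomial fits, and the convention that k counts only the regression coefficients are assumptions made for the example.

    import numpy as np

    def gaussian_bic(y, y_hat, k):
        """BIC = n*ln(sigma_hat^2) + k*ln(n) (up to an additive constant),
        where sigma_hat^2 is the biased (1/n) estimate of the error variance."""
        n = len(y)
        resid = y - y_hat
        sigma2_hat = np.mean(resid ** 2)      # (1/n) * sum of squared residuals
        return n * np.log(sigma2_hat) + k * np.log(n)

    # Synthetic data from a straight line with Gaussian noise.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)

    # Compare a linear fit (k = 2 coefficients) with a quadratic fit (k = 3).
    for degree in (1, 2):
        coeffs = np.polyfit(x, y, degree)
        y_hat = np.polyval(coeffs, x)
        print(degree, gaussian_bic(y, y_hat, k=degree + 1))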

The following version may be more tractable:[citation needed]

 \mathrm{BIC} = \chi^2 + k \cdot \ln(n) + C,

for some constant C, which does not vary between candidate models but depends only upon the data points.[citation needed]

Given any two estimated models, the model with the lower value of BIC is the one to be preferred. The BIC is an increasing function of \widehat{\sigma_e^2} and an increasing function of k. That is, unexplained variation in the dependent variable and the number of explanatory variables both increase the value of BIC. Hence, a lower BIC implies either fewer explanatory variables, better fit, or both. The strength of the evidence against the model with the higher BIC value can be summarized as follows:[4]

  ΔBIC      Evidence against the model with higher BIC
  0 to 2    Not worth more than a bare mention
  2 to 6    Positive
  6 to 10   Strong
  > 10      Very strong
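A small helper can make this interpretation mechanical. The sketch below (Python; the threshold labels follow the table above, and the two BIC values are the hypothetical numbers from the earlier sketch) assumes both models were fitted to the same data.

    def delta_bic_evidence(bic_a, bic_b):
        """Return the preferred model, the BIC difference, and a rough
        evidence label for two BIC values computed on the same data."""
        delta = abs(bic_a - bic_b)
        preferred = "A" if bic_a < bic_b else "B"
        if delta <= 2:
            label = "not worth more than a bare mention"
        elif delta <= 6:
            label = "positive"
        elif delta <= 10:
            label = "strong"
        else:
            label = "very strong"
        return preferred, delta, label

    # Hypothetical BIC values from the earlier sketch: ΔBIC ≈ 4.4 -> "positive".
    print(delta_bic_evidence(854.4, 858.8))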

The BIC generally penalizes free parameters more strongly than the Akaike information criterion does, though this depends on the size of n and the relative magnitude of n and k.

It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all estimates being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.

Characteristics of the Bayesian information criterion

  1. It is independent of the prior, or the prior is "vague" (a constant).
  2. It can measure the efficiency of the parameterized model in terms of predicting the data.
  3. It penalizes the complexity of the model, where complexity refers to the number of parameters in the model.
  4. It is approximately equal to the minimum description length criterion, but with negative sign.
  5. It can be used to choose the number of clusters according to the intrinsic complexity present in a particular dataset (see the sketch after this list).
  6. It is closely related to other penalized likelihood criteria such as RIC and the Akaike information criterion.
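As an illustration of point 5, the following Python sketch (which assumes scikit-learn is available) fits Gaussian mixture models with one to five components to synthetic two-cluster data and selects the number of components that minimizes the BIC reported by GaussianMixture.bic.

    import numpy as np
    from sklearn.mixture import GaussianMixture   # assumes scikit-learn is installed

    # Synthetic 1-D data drawn from two well-separated Gaussian clusters.
    rng = np.random.default_rng(0)
    X = np.concatenate([rng.normal(-3.0, 1.0, 200),
                        rng.normal(3.0, 1.0, 200)]).reshape(-1, 1)

    # Fit mixtures with 1..5 components and keep the number that minimizes BIC.
    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(1, 6)}
    best_k = min(bics, key=bics.get)
    print("number of clusters chosen by BIC:", best_k)   # typically 2 for this data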

Applications

BIC has been widely used for model identification in time series and linear regression. It can, however, be applied quite widely to any set of maximum-likelihood-based models. In many applications (for example, selecting between a black-body and a power-law spectrum for an astronomical source), BIC simply reduces to maximum-likelihood selection because the number of parameters is equal for the models of interest.
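As one example of model identification in time series, the following self-contained Python sketch fits autoregressions of increasing order by least squares on synthetic AR(2) data and picks the order with the smallest BIC, using the normal-error form given earlier. All candidate orders are fitted to the same target observations so that their BIC values are comparable; the data-generating coefficients are chosen arbitrarily for the example.

    import numpy as np

    def ar_bic(y, p, max_p):
        """Least-squares AR(p) fit on the common sample y[max_p:], returning
        BIC = n*ln(sigma_hat^2) + k*ln(n) with k = p + 1 (lags plus intercept)."""
        n = len(y) - max_p
        target = y[max_p:]
        lags = [y[max_p - j - 1 : len(y) - j - 1] for j in range(p)]
        X = np.column_stack(lags + [np.ones(n)])
        coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ coeffs
        sigma2_hat = np.mean(resid ** 2)
        return n * np.log(sigma2_hat) + (p + 1) * np.log(n)

    # Synthetic AR(2) series: y_t = 0.6*y_{t-1} - 0.3*y_{t-2} + noise.
    rng = np.random.default_rng(0)
    y = np.zeros(500)
    for t in range(2, 500):
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=1.0)

    orders = {p: ar_bic(y, p, max_p=5) for p in range(1, 6)}
    print("AR order chosen by BIC:", min(orders, key=orders.get))   # typically 2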

Notes

  1. ^ Schwarz, Gideon E. (1978). "Estimating the dimension of a model". Annals of Statistics 6 (2): 461–464. doi:10.1214/aos/1176344136. MR 468014.
  2. ^ Wit, Ernst; van den Heuvel, Edwin; Romeyn, Jan-Willem (2012). "‘All models are wrong...’: an introduction to model uncertainty". Statistica Neerlandica 66 (3): 217–236. doi:10.1111/j.1467-9574.2012.00530.x.
  3. ^ Priestley, M. B. (1981). Spectral Analysis and Time Series. Academic Press. ISBN 0-12-564922-3. (p. 375)
  4. ^ Kass, Robert E.; Raftery, Adrian E. (1995). "Bayes Factors". Journal of the American Statistical Association 90 (430): 773–795. doi:10.2307/2291091. ISSN 0162-1459.

References

^ Schwarz, Gideon E. (1978). "Estimating the dimension of a model". Annals of Statistics 6 (2): 461–464. doi:10.1214/aos/1176344136. MR 468014.
