Bayes factor


In statistics, the use of Bayes factors is a Bayesian alternative to classical hypothesis testing.[1][2] Bayesian model comparison is a method of model selection based on Bayes factors.

Definition

The posterior probability Pr(M|D) of a model M given data D is given by Bayes' theorem:

\Pr(M|D) = \frac{\Pr(D|M)\Pr(M)}{\Pr(D)}.

The key data-dependent term Pr(D|M) is a marginal likelihood, sometimes called the evidence: it represents the probability that the data are produced under the assumption of the model M, and evaluating it correctly is the key to Bayesian model comparison.

Given a model selection problem in which we must choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors \theta_1 and \theta_2, is assessed by the Bayes factor K given by

 K = \frac{\Pr(D|M_1)}{\Pr(D|M_2)}
= \frac{\int \Pr(\theta_1|M_1)\Pr(D|\theta_1,M_1)\,d\theta_1}
{\int \Pr(\theta_2|M_2)\Pr(D|\theta_2,M_2)\,d\theta_2} .
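
Since the normalising term Pr(D) is common to both models, a standard consequence of Bayes' theorem is that the Bayes factor converts the prior odds of the two models into their posterior odds:

\frac{\Pr(M_1|D)}{\Pr(M_2|D)} = K \cdot \frac{\Pr(M_1)}{\Pr(M_2)} .

In particular, when the two models are equally probable a priori, K equals the ratio of their posterior probabilities.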

If, instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each model is used, the test becomes a classical likelihood-ratio test.[citation needed] Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). An advantage of using Bayes factors is that they automatically, and quite naturally, include a penalty for including too much model structure,[3] and thus guard against overfitting. For models where an explicit version of the likelihood is not available or is too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework,[4] with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.[5]
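
As a minimal sketch of this distinction (not taken from the article), the marginal-likelihood integrals for two one-parameter models can be approximated by numerical quadrature; the helper functions below are hypothetical and written only for illustration.

    # Hypothetical sketch: Bayes factor for two one-parameter models, each given
    # by a likelihood function of its parameter and a prior density on [lo, hi].
    from scipy.integrate import quad

    def marginal_likelihood(likelihood, prior_pdf, lo, hi):
        """Integrate Pr(D | theta, M) * Pr(theta | M) over the parameter range."""
        value, _ = quad(lambda t: likelihood(t) * prior_pdf(t), lo, hi)
        return value

    def bayes_factor(lik1, prior1, lik2, prior2, lo=0.0, hi=1.0):
        """K = Pr(D | M1) / Pr(D | M2), integrating out each model's parameter."""
        return (marginal_likelihood(lik1, prior1, lo, hi) /
                marginal_likelihood(lik2, prior2, lo, hi))

No maximisation over the parameters appears anywhere: every plausible parameter value contributes in proportion to its prior density, which is the source of the automatic penalty for superfluous model structure mentioned above.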

Other approaches to model comparison also exist.

Interpretation

A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. Harold Jeffreys gave a scale for interpretation of K:[6]

K                dB          bits          Strength of evidence
< 1:1            < 0         < 0           Negative (supports M2)
1:1 to 3:1       0 to 5      0 to 1.6      Barely worth mentioning
3:1 to 10:1      5 to 10     1.6 to 3.3    Substantial
10:1 to 30:1     10 to 15    3.3 to 5.0    Strong
30:1 to 100:1    15 to 20    5.0 to 6.6    Very strong
> 100:1          > 20        > 6.6         Decisive

The second column gives the corresponding weights of evidence in decibans (tenths of a power of 10); bits are added in the third column for clarity. According to I. J. Good a change in a weight of evidence of 1 deciban or 1/3 of a bit (i.e. a change in an odds ratio from evens to about 5:4) is about as finely as humans can reasonably perceive their degree of belief in a hypothesis in everyday use.[7]
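
The decibans and bits columns are simply logarithms of K to different bases; a small illustrative snippet (not part of the original table) makes the conversion explicit.

    import math

    def decibans(K):
        return 10 * math.log10(K)   # weight of evidence in decibans

    def bits(K):
        return math.log2(K)         # the same evidence measured in bits

    # One deciban is about a third of a bit, i.e. odds of roughly 5:4.
    print(decibans(5 / 4), bits(5 / 4))   # ~0.97 dB, ~0.32 bits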

An alternative table, widely cited, is provided by Kass and Raftery (1995):[3]

2 ln K     K            Strength of evidence
0 to 2     1 to 3       Not worth more than a bare mention
2 to 6     3 to 20      Positive
6 to 10    20 to 150    Strong
> 10       > 150        Very strong

The use of Bayes factors or classical hypothesis testing takes place in the context of inference rather than decision-making under uncertainty. That is, we merely wish to find out which hypothesis is true, rather than actually making a decision on the basis of this information. Frequentist statistics draws a strong distinction between these two because classical hypothesis tests are not coherent in the Bayesian sense. Bayesian procedures, including Bayes factors, are coherent, so there is no need to draw such a distinction. Inference is then simply regarded as a special case of decision-making under uncertainty in which the resulting action is to report a value. For decision-making, Bayesian statisticians might use a Bayes factor combined with a prior distribution and a loss function associated with making the wrong choice. In an inference context the loss function would take the form of a scoring rule. Use of a logarithmic score function, for example, leads to the expected utility taking the form of the Kullback–Leibler divergence.
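
Concretely, if p denotes the true sampling distribution and q the distribution being reported, the expected difference in logarithmic score is given by the standard identity

\operatorname{E}_{x \sim p}\left[\log \frac{p(x)}{q(x)}\right] = D_{\mathrm{KL}}(p \parallel q) ,

which is the Kullback–Leibler divergence referred to above.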

Example

Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = ½, and another model M2 where q is completely unknown and we take a prior distribution for q which is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:

{{200 \choose 115}q^{115}(1-q)^{85}}.

So we have

P(X=115 \mid M_1)={200 \choose 115}\left({1 \over 2}\right)^{200}=0.005956...,\,

but

P(X=115 \mid M_2)=\int_{0}^1{200 \choose 115}q^{115}(1-q)^{85}dq = {1 \over 201} = 0.004975....

The ratio is then 1.197..., which is "barely worth mentioning" even if it points very slightly towards M1.
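
These two quantities, and their ratio, can be checked with a short numerical sketch (added here for illustration; the closed form 1/201 follows from the beta integral above).

    from scipy.stats import binom

    n, k = 200, 115
    m1 = binom.pmf(k, n, 0.5)   # Pr(X = 115 | M1) ~ 0.005956
    m2 = 1.0 / (n + 1)          # uniform prior over q integrates to 1/201 ~ 0.004975
    print(m1, m2, m1 / m2)      # Bayes factor K ~ 1.197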

This is not the same as a classical likelihood ratio test, which would have found the maximum likelihood estimate for q, namely 115/200 = 0.575, and used it directly rather than averaging over all possible q; the maximised likelihood under M2 is then {200 \choose 115}(0.575)^{115}(0.425)^{85} = 0.056991. That gives a likelihood ratio of 0.005956/0.056991 = 0.1045, so the comparison points towards M2.

The modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its AIC value is 2·0 − 2·ln(0.005956) = 10.2467. Model M2 has 1 parameter, and so its AIC value is 2·1 − 2·ln(0.056991) = 7.7297. Hence M1 is about exp((7.7297 − 10.2467)/2) = 0.284 times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
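
A short numerical sketch (not from the article) reproduces these AIC values and the relative likelihood of M1.

    import math
    from scipy.stats import binom

    n, k = 200, 115
    lik_m1 = binom.pmf(k, n, 0.5)            # maximised likelihood under M1 ~ 0.005956
    lik_m2 = binom.pmf(k, n, k / n)          # maximised likelihood under M2 ~ 0.056991

    aic_m1 = 2 * 0 - 2 * math.log(lik_m1)    # ~ 10.2467 (0 free parameters)
    aic_m2 = 2 * 1 - 2 * math.log(lik_m2)    # ~ 7.7297  (1 free parameter)
    rel_m1 = math.exp((aic_m2 - aic_m1) / 2) # ~ 0.284, relative likelihood of M1
    print(aic_m1, aic_m2, rel_m1)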

A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a more dramatic result, saying that M1 could be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = ½ is 0.0200, and the two-tailed probability of a result as extreme as or more extreme than 115 is 0.0400. Note that 115 is more than two standard deviations away from 100.
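
These tail probabilities can be confirmed directly (a sketch for illustration, not part of the article):

    from scipy.stats import binom

    # One-tailed: probability of 115 or more successes in 200 trials with q = 1/2.
    p_one = binom.sf(114, 200, 0.5)   # ~ 0.0200
    p_two = 2 * p_one                 # two-tailed, by symmetry of the binomial ~ 0.0400
    print(p_one, p_two)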

M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.[8]

See also

Statistical ratios

References

  1. ^ Goodman S (1999). "Toward evidence-based medical statistics. 1: The P value fallacy" (PDF). Ann Intern Med 130 (12): 995–1004. doi:10.7326/0003-4819-130-12-199906150-00008. PMID 10383371. 
  2. ^ Goodman S (1999). "Toward evidence-based medical statistics. 2: The Bayes factor" (PDF). Ann Intern Med 130 (12): 1005–13. doi:10.7326/0003-4819-130-12-199906150-00019. PMID 10383350. 
  3. ^ a b Robert E. Kass and Adrian E. Raftery (1995). "Bayes Factors". Journal of the American Statistical Association 90 (430): 791. 
  4. ^ Toni, T.; Stumpf, M.P.H. (2009). "Simulation-based model selection for dynamical systems in systems and population biology" (PDF). Bioinformatics 26 (1): 104–10. doi:10.1093/bioinformatics/btp619. PMC 2796821. PMID 19880371. 
  5. ^ Robert, C.P., J. Cornuet, J. Marin and N.S. Pillai (2011). "Lack of confidence in approximate Bayesian computation model choice". Proceedings of the National Academy of Sciences 108 (37): 15112–15117. doi:10.1073/pnas.1102900108. PMC 3174657. PMID 21876135. 
  6. ^ Jeffreys, H. (1961). The Theory of Probability (3rd ed.). Oxford. p. 432.
  7. ^ Good, I.J. (1979). "Studies in the History of Probability and Statistics. XXXVII A. M. Turing's statistical work in World War II". Biometrika 66 (2): 393–396. doi:10.1093/biomet/66.2.393. MR 82c:01049. 
  8. ^ Sharpening Ockham's Razor On a Bayesian Strop
