The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other.[1] The models in question can have a common set of parameters, such as a null hypothesis and an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to its linear approximation. The Bayes factor can be thought of as a Bayesian analog to the likelihood-ratio test, although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. As such, the two quantities coincide only under simple hypotheses (e.g., two specific parameter values).[2] Also, in contrast with null hypothesis significance testing, Bayes factors support evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.[3]
Although conceptually simple, the computation of the Bayes factor can be challenging depending on the complexity of the model and the hypotheses.[4] Since closed-form expressions of the marginal likelihood are generally not available, numerical approximations based on MCMC samples have been suggested.[5] For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio in the case of a precise (equality constrained) hypothesis against an unrestricted alternative.[6][7] Another approximation, derived by applying Laplace's approximation to the integrated likelihoods, is known as the Bayesian information criterion (BIC);[8] in large data sets the Bayes factor will approach the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not be improper, since the Bayes factor will be undefined if either of the two integrals in its ratio is not finite.
The Bayes factor is the ratio of two marginal likelihoods; that is, the likelihoods of two statistical models integrated over the prior probabilities of their parameters.[9]
The posterior probability Pr(M|D) of a model M given data D is given by Bayes' theorem:

Pr(M|D) = Pr(D|M) Pr(M) / Pr(D)
The key data-dependent term Pr(D|M) represents the probability that some data are produced under the assumption of the model M; evaluating it correctly is the key to Bayesian model comparison.
Given a model selection problem in which one wishes to choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors θ1 and θ2, is assessed by the Bayes factor K given by

K = Pr(D|M1) / Pr(D|M2) = [∫ Pr(θ1|M1) Pr(D|θ1, M1) dθ1] / [∫ Pr(θ2|M2) Pr(D|θ2, M2) dθ2]
When the two models have equal prior probability, so that Pr(M1) = Pr(M2), the Bayes factor is equal to the ratio of the posterior probabilities of M1 and M2. If, instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each statistical model is used, then the test becomes a classical likelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure.[10] It thus guards against overfitting. For models where an explicit version of the likelihood is not available or too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework,[11] with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.[12]
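As an illustration, when the parameter space is low-dimensional the marginal likelihoods in the Bayes factor can be approximated by direct numerical integration. The sketch below is hypothetical (the data values and the choice of a standard-normal prior are assumptions for the example, not taken from the text): it compares a Gaussian model with a fixed mean against one whose mean carries a prior, integrating the likelihood over that prior on a grid.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood(data, theta):
    # Product of N(theta, 1) densities over the observations
    p = 1.0
    for x in data:
        p *= normal_pdf(x, theta, 1.0)
    return p

data = [0.8, 1.1, 0.3, 1.6, 0.9]  # invented example data

# M1: theta fixed at 0 -- the marginal likelihood is just the likelihood
m1 = likelihood(data, 0.0)

# M2: theta ~ N(0, 1) prior -- integrate likelihood * prior over a grid on [-5, 5]
step = 0.01
grid = [i * step for i in range(-500, 501)]
m2 = sum(likelihood(data, t) * normal_pdf(t, 0.0, 1.0) * step for t in grid)

K = m2 / m1  # Bayes factor for M2 over M1
print(K)
```

Because the data lie near 1 rather than 0, the integral under M2 exceeds the fixed-mean likelihood and K comes out above 1, favouring the model with the free mean.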
A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. The fact that a Bayes factor can produce evidence for and not just against a null hypothesis is one of the key advantages of this analysis method.[13]
Harold Jeffreys gave a scale (Jeffreys' scale) for the interpretation of K:[14]
| K | dHart | bits | Strength of evidence |
|---|---|---|---|
| < 10^0 | < 0 | < 0 | Negative (supports M2) |
| 10^0 to 10^(1/2) | 0 to 5 | 0 to 1.6 | Barely worth mentioning |
| 10^(1/2) to 10^1 | 5 to 10 | 1.6 to 3.3 | Substantial |
| 10^1 to 10^(3/2) | 10 to 15 | 3.3 to 5.0 | Strong |
| 10^(3/2) to 10^2 | 15 to 20 | 5.0 to 6.6 | Very strong |
| > 10^2 | > 20 | > 6.6 | Decisive |
The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. The table continues in the other direction, so that, for example, K < 10^(−2) is decisive evidence for M2.
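The conversions between K, decihartleys, and bits used in the table are simple logarithms; a minimal sketch:

```python
import math

K = 20.0  # an example Bayes factor

dhart = 10 * math.log10(K)  # weight of evidence in decihartleys (decibans)
bits = math.log2(K)         # the same evidence expressed in bits

print(dhart, bits)  # ~13.0 dHart and ~4.32 bits: "strong" on Jeffreys' scale
```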
An alternative table, widely cited, is provided by Kass and Raftery (1995):[10]
| log10 K | K | Strength of evidence |
|---|---|---|
| 0 to 1/2 | 1 to 3.2 | Not worth more than a bare mention |
| 1/2 to 1 | 3.2 to 10 | Substantial |
| 1 to 2 | 10 to 100 | Strong |
| > 2 | > 100 | Decisive |
According to I. J. Good, the just-noticeable difference of humans in their everyday life, when it comes to a change in degree of belief in a hypothesis, is about a factor of 1.3, or 1 deciban, or 1/3 of a bit, or a change from 1:1 to 5:4 in odds ratio.[15]
Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = 1/2, and another model M2 where q is unknown and we take a prior distribution for q that is uniform on [0, 1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:

Pr(115 successes, 85 failures | q) = (200 choose 115) q^115 (1 − q)^85
Thus we have for M1

Pr(X = 115 | M1) = (200 choose 115) (1/2)^200 ≈ 0.005956,

whereas for M2 we have

Pr(X = 115 | M2) = ∫₀¹ (200 choose 115) q^115 (1 − q)^85 dq = 1/201 ≈ 0.004975.

The ratio is then 1.2, which is "barely worth mentioning" even if it points very slightly towards M1.
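The two marginal likelihoods in this example can be checked numerically; the uniform-prior integral has the closed form 1/(n + 1), a standard Beta-integral result:

```python
from math import comb

n, k = 200, 115

# M1: q fixed at 1/2
m1 = comb(n, k) * 0.5 ** n

# M2: q ~ Uniform(0, 1); the integral of C(n, k) q^k (1-q)^(n-k) over [0, 1]
# equals 1 / (n + 1) (a Beta integral)
m2 = 1 / (n + 1)

K = m1 / m2
print(m1, m2, K)  # ~0.005956, ~0.004975, K ~ 1.2
```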
A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that M1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = 1/2 is 0.02, and as a two-tailed test the probability of getting a figure as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield significant results at the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example, one that reflects the expectation that the numbers of successes and failures are of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test.
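The frequentist p-values quoted above can be reproduced from the exact binomial distribution under q = 1/2:

```python
from math import comb

n = 200
# Exact binomial pmf under the null q = 1/2
pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]

p_upper = sum(pmf[115:])                      # one-tailed: P(X >= 115)
p_two_sided = sum(pmf[115:]) + sum(pmf[:86])  # two-tailed: add P(X <= 85) (symmetric)

print(p_upper, p_two_sided)  # ~0.020 and ~0.040
```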
A classical likelihood-ratio test would have found the maximum likelihood estimate for q, namely q̂ = 115/200 = 0.575, whence

Pr(X = 115 | q = 0.575) = (200 choose 115) (0.575)^115 (0.425)^85 ≈ 0.057

(rather than averaging over all possible q). That gives a likelihood ratio of 0.005956/0.057 ≈ 0.1 and points towards M2.
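This maximized-likelihood calculation can be reproduced directly:

```python
from math import comb

n, k = 200, 115
q_hat = k / n  # maximum likelihood estimate of q, i.e. 0.575

l_max = comb(n, k) * q_hat ** k * (1 - q_hat) ** (n - k)  # maximized likelihood (M2)
l_null = comb(n, k) * 0.5 ** n                            # likelihood under q = 1/2 (M1)

print(l_max, l_null / l_max)  # ~0.057 and a likelihood ratio of ~0.1
```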
M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.[16]
On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its Akaike information criterion (AIC) value is 2·0 − 2 ln(0.005956) ≈ 10.2467. Model M2 has 1 parameter, and so its AIC value is 2·1 − 2 ln(0.056991) ≈ 7.7297. Hence M1 is about exp((7.7297 − 10.2467)/2) ≈ 0.284 times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
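The AIC comparison for this example can be sketched as follows, reusing the two likelihoods computed earlier:

```python
import math
from math import comb

n, k = 200, 115

l1 = comb(n, k) * 0.5 ** n  # M1: q = 1/2, no free parameters
q_hat = k / n
l2 = comb(n, k) * q_hat ** k * (1 - q_hat) ** (n - k)  # M2: one free parameter (q)

# AIC = 2 * (number of parameters) - 2 * ln(maximized likelihood)
aic1 = 2 * 0 - 2 * math.log(l1)
aic2 = 2 * 1 - 2 * math.log(l2)

rel = math.exp((aic2 - aic1) / 2)  # relative likelihood of M1 vs M2
print(aic1, aic2, rel)  # ~10.25, ~7.73, ~0.284
```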