Bayes factor

From Wikipedia, the free encyclopedia
Statistical factor used to compare competing hypotheses
The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other.[1] The models in question can have a common set of parameters, such as a null hypothesis and an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to its linear approximation. The Bayes factor can be thought of as a Bayesian analog to the likelihood-ratio test, although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. As such, the two quantities coincide only under simple hypotheses (e.g., two specific parameter values).[2] Also, in contrast with null hypothesis significance testing, Bayes factors support evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.[3]

Although conceptually simple, the computation of the Bayes factor can be challenging depending on the complexity of the model and the hypotheses.[4] Since closed-form expressions of the marginal likelihood are generally not available, numerical approximations based on MCMC samples have been suggested.[5] For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio in the case of a precise (equality constrained) hypothesis against an unrestricted alternative.[6][7] Another approximation, derived by applying Laplace's approximation to the integrated likelihoods, is known as the Bayesian information criterion (BIC);[8] in large data sets the Bayes factor will approach the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not be improper, since the Bayes factor will be undefined if either of the two integrals in its ratio is not finite.
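As a minimal illustration of why this integral is demanding, the marginal likelihood can be approximated naively by averaging the likelihood over draws from the prior (simple Monte Carlo). The sketch below assumes an invented toy model (normally distributed data with unknown mean, unit variance, and a standard normal prior on the mean); it is not one of the MCMC-based estimators cited above, which are far more efficient when the likelihood is concentrated.

```python
# Naive Monte Carlo estimate of a marginal likelihood Pr(D|M): average the
# likelihood of the data over samples drawn from the prior. Toy model only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=50)   # assumed (illustrative) data set

def marginal_likelihood_mc(data, prior_draws):
    """Average the data likelihood over prior draws of the unknown mean (unit variance)."""
    loglik = np.array([stats.norm.logpdf(data, loc=mu, scale=1.0).sum()
                       for mu in prior_draws])
    m = loglik.max()                              # log-mean-exp for numerical stability
    return np.exp(m) * np.exp(loglik - m).mean()

prior_draws = rng.normal(loc=0.0, scale=1.0, size=20_000)  # assumed N(0, 1) prior on the mean
print(marginal_likelihood_mc(data, prior_draws))
```

The estimate is noisy whenever the likelihood is much more concentrated than the prior, which is precisely the situation in which the more sophisticated estimators cited above become necessary.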

Definition


The Bayes factor is the ratio of two marginal likelihoods; that is, the likelihoods of two statistical models integrated over the prior probabilities of their parameters.[9]

The posterior probability Pr(M|D) of a model M given data D is given by Bayes' theorem:

{\displaystyle \Pr(M|D)={\frac {\Pr(D|M)\Pr(M)}{\Pr(D)}}.}

The key data-dependent term Pr(D|M) represents the probability that some data are produced under the assumption of the model M; evaluating it correctly is the key to Bayesian model comparison.

Given a model selection problem in which one wishes to choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors θ1 and θ2, is assessed by the Bayes factor K given by

{\displaystyle K={\frac {\Pr(D|M_{1})}{\Pr(D|M_{2})}}={\frac {\int \Pr(\theta _{1}|M_{1})\Pr(D|\theta _{1},M_{1})\,d\theta _{1}}{\int \Pr(\theta _{2}|M_{2})\Pr(D|\theta _{2},M_{2})\,d\theta _{2}}}={\frac {\frac {\Pr(M_{1}|D)\Pr(D)}{\Pr(M_{1})}}{\frac {\Pr(M_{2}|D)\Pr(D)}{\Pr(M_{2})}}}={\frac {\Pr(M_{1}|D)}{\Pr(M_{2}|D)}}{\frac {\Pr(M_{2})}{\Pr(M_{1})}}.}

When the two models have equal prior probability, so that Pr(M1) = Pr(M2), the Bayes factor is equal to the ratio of the posterior probabilities of M1 and M2. If, instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each statistical model is used, then the test becomes a classical likelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure.[10] It thus guards against overfitting. For models where an explicit version of the likelihood is not available or too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework,[11] with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.[12]
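A minimal sketch of this definition in Python, assuming an invented toy setting (unit-variance normal data; M1 fixes the mean at 0, M2 places a standard normal prior on it): each marginal likelihood is obtained by integrating likelihood × prior over the model's parameter, and K is their ratio.

```python
# Bayes factor K = Pr(D|M1) / Pr(D|M2), each term a marginal likelihood.
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(1)
data = rng.normal(loc=0.4, scale=1.0, size=30)           # assumed observations

def likelihood(mu):
    """Pr(D | mu): likelihood of the whole data set for a given mean (unit variance)."""
    return np.exp(stats.norm.logpdf(data, loc=mu, scale=1.0).sum())

# M1: the mean is fixed at 0, so the marginal likelihood is just the likelihood there.
evidence_m1 = likelihood(0.0)

# M2: the mean is unknown with an (assumed) N(0, 1) prior; integrate prior * likelihood.
evidence_m2, _ = quad(lambda mu: stats.norm.pdf(mu, 0.0, 1.0) * likelihood(mu), -10, 10)

K = evidence_m1 / evidence_m2
print(f"K = Pr(D|M1)/Pr(D|M2) = {K:.3f}")
```

Because the integral for M2 averages the likelihood over the whole prior rather than taking its maximum, the extra flexibility of M2 is automatically penalised, which is the anti-overfitting behaviour described above.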

Other approaches are:

Interpretation


A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. The fact that a Bayes factor can produce evidence for and not just against a null hypothesis is one of the key advantages of this analysis method.[13]

Harold Jeffreys gave a scale (Jeffreys' scale) for interpretation of K:[14]

K                 dHart       bits         Strength of evidence
< 10^0            < 0         < 0          Negative (supports M2)
10^0 to 10^1/2    0 to 5      0 to 1.6     Barely worth mentioning
10^1/2 to 10^1    5 to 10     1.6 to 3.3   Substantial
10^1 to 10^3/2    10 to 15    3.3 to 5.0   Strong
10^3/2 to 10^2    15 to 20    5.0 to 6.6   Very strong
> 10^2            > 20        > 6.6        Decisive

The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. The table continues in the other direction, so that, for example, K ≤ 10^−2 is decisive evidence for M2.

An alternative table, widely cited, is provided by Kass and Raftery (1995):[10]

log10 K     K            Strength of evidence
0 to 1/2    1 to 3.2     Not worth more than a bare mention
1/2 to 1    3.2 to 10    Substantial
1 to 2      10 to 100    Strong
> 2         > 100        Decisive

According to I. J. Good, the just-noticeable difference of humans in their everyday life, when it comes to a change in degree of belief in a hypothesis, is about a factor of 1.3, or 1 deciban, or 1/3 of a bit, or a shift from 1:1 to 5:4 in odds.[15]
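For illustration, a small helper (using the bin edges of Jeffreys' table above) that converts a Bayes factor K into decihartleys and bits and reads off the qualitative label:

```python
# Convert a Bayes factor K into weight of evidence and Jeffreys' qualitative label.
import math

def jeffreys_label(K: float) -> str:
    """Return Jeffreys' qualitative description of the evidence for M1 over M2."""
    if K < 1:
        return "negative (supports M2)"
    dhart = 10 * math.log10(K)        # weight of evidence in decihartleys (decibans)
    if dhart < 5:
        return "barely worth mentioning"
    if dhart < 10:
        return "substantial"
    if dhart < 15:
        return "strong"
    if dhart < 20:
        return "very strong"
    return "decisive"

for K in (1.2, 5, 30, 150):
    print(K, f"{10 * math.log10(K):.1f} dHart", f"{math.log2(K):.1f} bits", jeffreys_label(K))
```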

Example


Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = 1/2, and another model M2 where q is unknown and we take a prior distribution for q that is uniform on [0, 1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:

{\displaystyle {200 \choose 115}q^{115}(1-q)^{85}.}

Thus we have for M1

{\displaystyle P(X=115\mid M_{1})={200 \choose 115}\left({1 \over 2}\right)^{200}\approx 0.006}

whereas for M2 we have

{\displaystyle P(X=115\mid M_{2})=\int _{0}^{1}{200 \choose 115}q^{115}(1-q)^{85}\,dq={1 \over 201}\approx 0.005.}

The ratio is then 1.2, which is "barely worth mentioning" even if it points very slightly towards M1.
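The two marginal likelihoods and their ratio can be checked directly; a short Python sketch of the numbers above (the uniform-prior integral is a Beta integral equal to exactly 1/201):

```python
# Reproduce the worked example: marginal likelihoods under M1 and M2, and their ratio K.
from math import comb
from scipy.integrate import quad

n, k = 200, 115

p_m1 = comb(n, k) * 0.5**n                                             # ≈ 0.005956
p_m2, _ = quad(lambda q: comb(n, k) * q**k * (1 - q)**(n - k), 0, 1)   # = 1/201 ≈ 0.004975

print(p_m1, p_m2, p_m1 / p_m2)                                         # K ≈ 1.2
```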

A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that M1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = 1/2 is 0.02, and the two-tailed probability of getting a figure as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield significant results at the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example, one that reflects an expectation that the numbers of successes and failures are of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test.
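The tail probabilities quoted here can be verified numerically; a brief sketch using the exact binomial distribution:

```python
# One-sided and two-sided tail probabilities under the null q = 1/2 with n = 200.
from scipy.stats import binom

one_sided = binom.sf(114, 200, 0.5)   # P(X >= 115) ≈ 0.02
two_sided = 2 * one_sided             # ≈ 0.04 (doubled, since the null distribution is symmetric)
print(one_sided, two_sided)
```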

A classical likelihood-ratio test would have found the maximum likelihood estimate for q, namely q̂ = 115/200 = 0.575, whence

{\displaystyle \textstyle P(X=115\mid M_{2})={200 \choose 115}{\hat {q}}^{115}(1-{\hat {q}})^{85}\approx 0.06}

(rather than averaging over all possible q). That gives a likelihood ratio of 0.1 and points towards M2.
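A short check of this calculation, plugging the maximum likelihood estimate into the binomial likelihood rather than integrating over q:

```python
# Classical likelihood-ratio comparison: maximized likelihood under M2 versus M1.
from math import comb

n, k = 200, 115
q_hat = k / n                                                # 0.575

lik_m1 = comb(n, k) * 0.5**n                                 # ≈ 0.006
lik_m2_max = comb(n, k) * q_hat**k * (1 - q_hat)**(n - k)    # ≈ 0.057
print(lik_m1 / lik_m2_max)                                   # ≈ 0.1, pointing towards M2
```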

M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.[16]

On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its Akaike information criterion (AIC) value is 2·0 − 2·ln(0.005956) ≈ 10.2467. Model M2 has 1 parameter, and so its AIC value is 2·1 − 2·ln(0.056991) ≈ 7.7297. Hence M1 is about exp((7.7297 − 10.2467)/2) ≈ 0.284 times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
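The AIC values and the resulting relative likelihood can be verified with a few lines; a brief sketch of the arithmetic above:

```python
# AIC = 2k - 2 ln(L) for each model; exp((AIC_min - AIC_i)/2) gives the relative likelihood.
from math import log, exp

aic_m1 = 2 * 0 - 2 * log(0.005956)    # ≈ 10.2467 (0 free parameters, likelihood at q = 1/2)
aic_m2 = 2 * 1 - 2 * log(0.056991)    # ≈ 7.7297  (1 free parameter, maximized likelihood)

rel_m1 = exp((aic_m2 - aic_m1) / 2)   # ≈ 0.284: relative likelihood of M1 versus M2
print(aic_m1, aic_m2, rel_m1)
```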

See also

Statistical ratios

References

  1. Morey, Richard D.; Romeijn, Jan-Willem; Rouder, Jeffrey N. (2016). "The philosophy of Bayes factors and the quantification of statistical evidence". Journal of Mathematical Psychology. 72: 6–18. doi:10.1016/j.jmp.2015.11.001.
  2. Lesaffre, Emmanuel; Lawson, Andrew B. (2012). "Bayesian hypothesis testing". Bayesian Biostatistics. Somerset: John Wiley & Sons. pp. 72–78. doi:10.1002/9781119942412.ch3. ISBN 978-0-470-01823-1.
  3. Ly, Alexander; et al. (2020). "The Bayesian Methodology of Sir Harold Jeffreys as a Practical Alternative to the P Value Hypothesis Test". Computational Brain & Behavior. 3 (2): 153–161. doi:10.1007/s42113-019-00070-x. hdl:2066/226717.
  4. Llorente, Fernando; et al. (2023). "Marginal likelihood computation for model selection and hypothesis testing: an extensive review". SIAM Review (to appear): 3–58. arXiv:2005.08334. doi:10.1137/20M1310849. S2CID 210156537.
  5. Congdon, Peter (2014). "Estimating model probabilities or marginal likelihoods in practice". Applied Bayesian Modelling (2nd ed.). Wiley. pp. 38–40. ISBN 978-1-119-95151-3. A widely used approach is the method proposed by Chib, Siddhartha (1995). "Marginal Likelihood from the Gibbs Output". Journal of the American Statistical Association. 90 (432): 1313–1321. doi:10.2307/2291521. This method was later extended to handle cases where Metropolis–Hastings samplers are used: Chib, Siddhartha; Jeliazkov, Ivan (2001). "Marginal Likelihood from the Metropolis–Hastings Output". Journal of the American Statistical Association. 96 (453): 270–281. doi:10.1198/016214501750332848.
  6. Koop, Gary (2003). "Model Comparison: The Savage–Dickey Density Ratio". Bayesian Econometrics. Somerset: John Wiley & Sons. pp. 69–71. ISBN 0-470-84567-8.
  7. Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul (2010). "Bayesian hypothesis testing for psychologists: A tutorial on the Savage–Dickey method" (PDF). Cognitive Psychology. 60 (3): 158–189. doi:10.1016/j.cogpsych.2009.12.001. PMID 20064637. S2CID 206867662.
  8. Ibrahim, Joseph G.; Chen, Ming-Hui; Sinha, Debajyoti (2001). "Model Comparison". Bayesian Survival Analysis. Springer Series in Statistics. New York: Springer. pp. 246–254. doi:10.1007/978-1-4757-3447-8_6. ISBN 0-387-95277-2.
  9. Gill, Jeff (2002). "Bayesian Hypothesis Testing and the Bayes Factor". Bayesian Methods: A Social and Behavioral Sciences Approach. Chapman & Hall. pp. 199–237. ISBN 1-58488-288-3.
  10. Kass, Robert E.; Raftery, Adrian E. (1995). "Bayes Factors" (PDF). Journal of the American Statistical Association. 90 (430): 791. doi:10.2307/2291091. JSTOR 2291091.
  11. Toni, T.; Stumpf, M.P.H. (2009). "Simulation-based model selection for dynamical systems in systems and population biology". Bioinformatics. 26 (1): 104–10. arXiv:0911.1705. doi:10.1093/bioinformatics/btp619. PMC 2796821. PMID 19880371.
  12. Robert, C.P.; Cornuet, J.; Marin, J.; Pillai, N.S. (2011). "Lack of confidence in approximate Bayesian computation model choice". Proceedings of the National Academy of Sciences. 108 (37): 15112–15117. Bibcode:2011PNAS..10815112R. doi:10.1073/pnas.1102900108. PMC 3174657. PMID 21876135.
  13. Williams, Matt; Bååth, Rasmus; Philipp, Michael (2017). "Using Bayes Factors to Test Hypotheses in Developmental Research". Research in Human Development. 14 (4): 321–337. doi:10.1080/15427609.2017.1370964.
  14. Jeffreys, Harold (1998) [1961]. The Theory of Probability (3rd ed.). Oxford. p. 432. ISBN 9780191589676.
  15. Good, I.J. (1979). "Studies in the History of Probability and Statistics. XXXVII. A. M. Turing's statistical work in World War II". Biometrika. 66 (2): 393–396. doi:10.1093/biomet/66.2.393. MR 0548210.
  16. Sharpening Ockham's Razor On a Bayesian Strop
