Bayesian information criterion

From Wikipedia, the free encyclopedia
Criterion for model selection

In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).

When fitting models, it is possible to increase the maximum likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC for sample sizes greater than 7, since the BIC penalty $k\ln(n)$ exceeds the AIC penalty $2k$ once $\ln(n) > 2$, i.e. for $n \geq 8$.[1]

The BIC was developed by Gideon E. Schwarz and published in a 1978 paper,[2] as a large-sample approximation to the Bayes factor.

Definition


The BIC is formally defined as[3][a]

$$\mathrm{BIC} = k\ln(n) - 2\ln(\widehat{L}),$$

where

  • $\widehat{L}$ = the maximized value of the likelihood function of the model $M$, i.e. $\widehat{L} = p(x \mid \widehat{\theta}, M)$, where $\widehat{\theta}$ are the parameter values that maximize the likelihood function and $x$ is the observed data;
  • $n$ = the number of data points in $x$, i.e. the sample size;
  • $k$ = the number of parameters estimated by the model.
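As a minimal illustration of the formula (not from the cited sources; the sample size, parameter count, and log-likelihood value below are hypothetical), the BIC can be computed directly from the maximized log-likelihood:

```python
import math

def bic(log_L_hat: float, k: int, n: int) -> float:
    """Bayesian information criterion: BIC = k*ln(n) - 2*ln(L^)."""
    return k * math.log(n) - 2 * log_L_hat

# Hypothetical example: 100 observations, 3 estimated parameters,
# maximized log-likelihood ln(L^) = -250.0.
print(bic(log_L_hat=-250.0, k=3, n=100))  # 3*ln(100) + 500 ≈ 513.8
```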

Derivation


The BIC can be derived by integrating out the parameters of the model using Laplace's method, starting with the following model evidence:[5][6]: 217

$$p(x\mid M) = \int p(x\mid \theta, M)\,\pi(\theta\mid M)\,d\theta$$

where $\pi(\theta\mid M)$ is the prior for $\theta$ under model $M$.

The log-likelihood, $\ln(p(x\mid\theta,M))$, is then expanded to a second-order Taylor series about the MLE, $\widehat{\theta}$, assuming it is twice differentiable, as follows:

$$\ln(p(x\mid\theta,M)) = \ln(\widehat{L}) - \frac{n}{2}(\theta - \widehat{\theta})^{\mathsf{T}}\,\mathcal{I}(\widehat{\theta})\,(\theta - \widehat{\theta}) + R(x,\theta),$$

where $\mathcal{I}(\theta)$ is the average observed information per observation, and $R(x,\theta)$ denotes the residual term. To the extent that $R(x,\theta)$ is negligible and $\pi(\theta\mid M)$ is relatively linear near $\widehat{\theta}$, we can integrate out $\theta$ to get the following:

$$p(x\mid M) \approx \widehat{L}\left(\frac{2\pi}{n}\right)^{\frac{k}{2}} |\mathcal{I}(\widehat{\theta})|^{-\frac{1}{2}}\,\pi(\widehat{\theta})$$

As $n$ increases, we can ignore $|\mathcal{I}(\widehat{\theta})|$ and $\pi(\widehat{\theta})$ as they are $O(1)$. Thus,

$$p(x\mid M) = \exp\left(\ln\widehat{L} - \frac{k}{2}\ln(n) + O(1)\right) = \exp\left(-\frac{\mathrm{BIC}}{2} + O(1)\right),$$

where BIC is defined as above, and $\widehat{L}$ either (a) is the Bayesian posterior mode or (b) uses the MLE and the prior $\pi(\theta\mid M)$ has nonzero slope at the MLE. Then the posterior is

$$p(M\mid x) \propto p(x\mid M)\,p(M) \approx \exp\left(-\frac{\mathrm{BIC}}{2}\right) p(M).$$
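A practical consequence of this relation is that, under equal prior model probabilities $p(M)$, the quantities $\exp(-\mathrm{BIC}/2)$ can be normalized into approximate posterior model probabilities. A minimal sketch (the BIC values below are hypothetical):

```python
import numpy as np

# Hypothetical BIC values for three candidate models.
bic = np.array([1012.3, 1015.8, 1009.1])

# With equal prior probabilities p(M), the derivation gives
# p(M | x) approximately proportional to exp(-BIC / 2).
delta = bic - bic.min()              # shift by the minimum for numerical stability
weights = np.exp(-delta / 2)
posterior = weights / weights.sum()  # approximate posterior model probabilities
print(posterior)                     # the lowest-BIC model gets the largest weight
```

Shifting by the minimum BIC leaves the normalized probabilities unchanged while avoiding underflow in the exponentials.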

Use


When picking from several models, ones with lower BIC values are generally preferred. The BIC is an increasing function of the error variance $\sigma_e^2$ and an increasing function of $k$. That is, unexplained variation in the dependent variable and the number of explanatory variables increase the value of BIC. However, a lower BIC does not necessarily indicate one model is better than another. Because it involves approximations, the BIC is merely a heuristic. In particular, differences in BIC should never be treated like transformed Bayes factors.

It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable[b] are identical for all models being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.[citation needed]
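For example, two non-nested distributional models fitted to the same data can be compared directly by their BIC values. A sketch under assumed data (the simulated sample and the choice of candidate distributions below are illustrative, not from the cited sources):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_t(df=5, size=500)   # hypothetical data, identical for both fits
n = x.size

def bic(log_likelihood: float, k: int, n: int) -> float:
    return k * np.log(n) - 2 * log_likelihood

# Model 1: normal distribution, k = 2 parameters, fitted by maximum likelihood.
mu, sigma = x.mean(), x.std()
bic_normal = bic(stats.norm.logpdf(x, mu, sigma).sum(), k=2, n=n)

# Model 2: Laplace distribution, k = 2 parameters -- a non-nested alternative.
loc, scale = stats.laplace.fit(x)
bic_laplace = bic(stats.laplace.logpdf(x, loc, scale).sum(), k=2, n=n)

print(bic_normal, bic_laplace)       # the model with the lower BIC is preferred
```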

Properties

  • The BIC generally penalizes free parameters more strongly than the Akaike information criterion, though it depends on the size of n and the relative magnitude of n and k.
  • It is independent of the prior.
  • It can measure the efficiency of the parameterized model in terms of predicting the data.
  • It penalizes the complexity of the model, where complexity refers to the number of parameters in the model.
  • It is approximately equal to the minimum description length criterion but with negative sign.
  • It can be used to choose the number of clusters according to the intrinsic complexity present in a particular dataset.
  • It is closely related to other penalized likelihood criteria such as the deviance information criterion and the Akaike information criterion.

Limitations


The BIC suffers from two main limitations:[7]

  1. the above approximation is only valid for sample size $n$ much larger than the number $k$ of parameters in the model.
  2. the BIC cannot handle complex collections of models, as in the variable selection (or feature selection) problem in high dimensions.[7]

Gaussian special case


Under the assumption that the model errors or disturbances are independent and identically distributed according to a normal distribution and the boundary condition that the derivative of the log likelihood with respect to the true variance is zero, this becomes (up to an additive constant, which depends only on n and not on the model):[8]

$$\mathrm{BIC} = n\ln(\widehat{\sigma_e^2}) + k\ln(n)$$

where $\widehat{\sigma_e^2}$ is the error variance. The error variance in this case is defined as

$$\widehat{\sigma_e^2} = \frac{1}{n}\sum_{i=1}^{n}(x_i - \widehat{x}_i)^2,$$

which is a biased estimator for the true variance.

In terms of the residual sum of squares (RSS) the BIC is

$$\mathrm{BIC} = n\ln(\mathrm{RSS}/n) + k\ln(n)$$
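To illustrate the RSS form, the following sketch fits two ordinary-least-squares regressions to simulated data and computes their BIC values (the data-generating process, and the convention of counting only the regression coefficients in $k$, are assumptions for the example; counting the error variance as an extra parameter adds $\ln(n)$ to every model and leaves the comparison unchanged):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 150
x = rng.uniform(0, 10, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=3.0, size=n)    # hypothetical simulated data

def bic_from_rss(rss: float, k: int, n: int) -> float:
    # BIC = n*ln(RSS/n) + k*ln(n), up to an additive constant depending only on n.
    return n * np.log(rss / n) + k * np.log(n)

# Model 1: straight line, k = 2 regression coefficients.
X1 = np.column_stack([np.ones(n), x])
_, rss1, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(bic_from_rss(rss1[0], k=2, n=n))

# Model 2: quadratic, k = 3 regression coefficients.
X2 = np.column_stack([np.ones(n), x, x**2])
_, rss2, *_ = np.linalg.lstsq(X2, y, rcond=None)
print(bic_from_rss(rss2[0], k=3, n=n))                # extra ln(n) penalty for k = 3
```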

When testing multiple linear models against a saturated model, the BIC can be rewritten in terms of the deviance $\chi^2$ as:[9]

$$\mathrm{BIC} = \chi^2 + k\ln(n)$$

where $k$ is the number of model parameters in the test.


Notes

  1. ^ The AIC, AICc and BIC defined by Claeskens and Hjort[4] are the negatives of those defined in this article and in most other standard references.
  2. ^ A dependent variable is also called a response variable or an outcome variable. See Regression analysis.

References

  1. ^ See the review paper: Stoica, P.; Selen, Y. (2004), "Model-order selection: a review of information criterion rules", IEEE Signal Processing Magazine (July): 36–47, doi:10.1109/MSP.2004.1311138, S2CID 17338979.
  2. ^ Schwarz, Gideon E. (1978), "Estimating the dimension of a model", Annals of Statistics, 6 (2): 461–464, doi:10.1214/aos/1176344136, MR 0468014.
  3. ^ Wit, Ernst; Edwin van den Heuvel; Jan-Willem Romeyn (2012). "'All models are wrong...': an introduction to model uncertainty" (PDF). Statistica Neerlandica. 66 (3): 217–236. doi:10.1111/j.1467-9574.2012.00530.x. S2CID 7793470. Archived from the original (PDF) on 2020-07-26. Retrieved 2019-12-11.
  4. ^ Claeskens, G.; Hjort, N. L. (2008), Model Selection and Model Averaging, Cambridge University Press.
  5. ^ Raftery, A. E. (1995). "Bayesian model selection in social research". Sociological Methodology. 25: 111–196. doi:10.2307/271063. JSTOR 271063.
  6. ^ Konishi, Sadanori; Kitagawa, Genshiro (2008). Information Criteria and Statistical Modeling. Springer. ISBN 978-0-387-71886-6.
  7. ^ a b Giraud, C. (2015). Introduction to High-Dimensional Statistics. Chapman & Hall/CRC. ISBN 9781482237948.
  8. ^ Priestley, M. B. (1981). Spectral Analysis and Time Series. Academic Press. ISBN 978-0-12-564922-3. (p. 375).
  9. ^ Kass, Robert E.; Raftery, Adrian E. (1995), "Bayes Factors", Journal of the American Statistical Association, 90 (430): 773–795, doi:10.2307/2291091, ISSN 0162-1459, JSTOR 2291091.
