In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).
When fitting models, it is possible to increase the maximum likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC for sample sizes greater than 7.[1]
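This threshold follows directly from comparing the standard forms of the two penalty terms, k ln(n) for BIC and 2k for AIC:

    k ln(n) > 2k  ⟺  ln(n) > 2  ⟺  n > e² ≈ 7.39,

so the BIC penalty is the larger of the two for every sample size n ≥ 8.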
The BIC was developed by Gideon E. Schwarz and published in a 1978 paper,[2] as a large-sample approximation to the Bayes factor.
The BIC is formally defined as

    BIC = k ln(n) − 2 ln(L̂),

where:

L̂ = the maximized value of the likelihood function of the model M, i.e. L̂ = p(x | θ̂, M), where θ̂ are the parameter values that maximize the likelihood function and x is the observed data;
n = the number of data points in x, the number of observations, or equivalently, the sample size;
k = the number of parameters estimated by the model. For example, in multiple linear regression, the estimated parameters are the intercept, the q slope parameters, and the constant variance of the errors; thus, k = q + 2.
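As a minimal sketch of this definition (an illustrative example, not from the source; the Poisson data, the scipy dependency, and all variable names are assumptions), BIC can be computed directly from the maximized log-likelihood of any parametric model:

    import numpy as np
    from scipy.stats import poisson

    counts = np.array([2, 3, 0, 4, 1, 2, 5, 3, 2, 1])   # hypothetical observed data x

    n = len(counts)                 # sample size
    lam_hat = counts.mean()         # MLE of the Poisson rate, the only estimated parameter
    k = 1                           # number of estimated parameters

    log_lik = poisson.logpmf(counts, lam_hat).sum()     # ln(L̂) evaluated at the MLE
    bic = k * np.log(n) - 2 * log_lik

    print(round(bic, 3))

The same two ingredients, the maximized log-likelihood and the parameter count, are all that the general definition requires; no normality assumption is needed at this level.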
The BIC can be derived by integrating out the parameters of the model using Laplace's method, starting with the following model evidence:[5][6]: 217

    p(x | M) = ∫ p(x | θ, M) π(θ | M) dθ,
where π(θ | M) is the prior for θ under model M.
The log-likelihood, ln(p(x | θ, M)), is then expanded to a second-order Taylor series about the MLE, θ̂, assuming it is twice differentiable, as follows:

    ln(p(x | θ, M)) = ln(L̂) − (n/2) (θ − θ̂)ᵀ I(θ) (θ − θ̂) + R(x, θ),
where I(θ) is the average observed information per observation, and R(x, θ) denotes the residual term. To the extent that R(x, θ) is negligible and π(θ | M) is relatively linear near θ̂, we can integrate out θ to get the following:

    p(x | M) ≈ L̂ (2π/n)^(k/2) |I(θ̂)|^(−1/2) π(θ̂ | M).
As n increases, we can ignore |I(θ̂)| and π(θ̂ | M) as they are O(1). Thus,

    p(x | M) = exp(ln(L̂) − (k/2) ln(n) + O(1)) = exp(−BIC/2 + O(1)),
where BIC is defined as above, and θ̂ either (a) is the Bayesian posterior mode or (b) uses the MLE and the prior π(θ | M) has nonzero slope at the MLE. Then the posterior

    p(M | x) ∝ p(x | M) p(M) ∝ exp(−BIC/2) p(M).
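To see how well exp(−BIC/2) tracks the model evidence, the following numerical sketch (an illustrative example, not from the source; the Beta-Bernoulli setup, the scipy dependency, and all names are assumptions) compares the BIC approximation of the log evidence with the exact closed-form value for a Bernoulli model under a uniform Beta(1, 1) prior:

    import numpy as np
    from scipy.special import betaln

    rng = np.random.default_rng(0)

    def exact_log_evidence(x):
        """Exact log marginal likelihood under a uniform Beta(1, 1) prior:
        p(x | M) = B(s + 1, n - s + 1), where s is the number of successes."""
        n, s = len(x), int(x.sum())
        return betaln(s + 1, n - s + 1)

    def bic_log_evidence(x):
        """BIC approximation: ln p(x | M) ≈ -BIC/2 = ln(L̂) - (k/2) ln(n), with k = 1."""
        n, s = len(x), int(x.sum())
        theta_hat = np.clip(s / n, 1e-12, 1 - 1e-12)   # MLE, clipped only as a numerical guard
        log_lik = s * np.log(theta_hat) + (n - s) * np.log(1 - theta_hat)
        return -(1 * np.log(n) - 2 * log_lik) / 2

    for n in (10, 100, 1000, 10000):
        x = rng.binomial(1, 0.3, size=n)
        print(n, round(exact_log_evidence(x), 2), round(bic_log_evidence(x), 2))

As n grows, the two quantities differ only by a bounded term, consistent with the O(1) error in the approximation above.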
When picking from several models, ones with lower BIC values are generally preferred. The BIC is an increasing function of the error variance and an increasing function of k. That is, unexplained variation in the dependent variable and the number of explanatory variables increase the value of BIC. However, a lower BIC does not necessarily indicate one model is better than another. Because it involves approximations, the BIC is merely a heuristic. In particular, differences in BIC should never be treated like transformed Bayes factors.
It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable[b] are identical for all models being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.[citation needed]
The BIC generally penalizes free parameters more strongly than the Akaike information criterion, though it depends on the size of n and the relative magnitude of n and k.
It is independent of the prior.
It can measure the efficiency of the parameterized model in terms of predicting the data.
It penalizes the complexity of the model where complexity refers to the number of parameters in the model.
Under the assumption that the model errors or disturbances are independent and identically distributed according to a normal distribution and the boundary condition that the derivative of the log likelihood with respect to the true variance is zero, this becomes (up to an additive constant, which depends only on n and not on the model):[8]

    BIC = n ln(σ̂_e²) + k ln(n),
where σ̂_e² is the error variance. The error variance in this case is defined as

    σ̂_e² = (1/n) Σ_{i=1}^{n} (x_i − x̂_i)²,

which is a biased estimator of the true variance.
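For concreteness, the sketch below (an illustrative example only; the synthetic data and variable names are assumptions, not from the source) computes BIC for an ordinary least-squares fit both from the general form, k ln(n) − 2 ln(L̂) with a Gaussian log-likelihood, and from the error-variance form, n ln(σ̂_e²) + k ln(n), confirming that the two differ only by an additive constant that does not depend on the model:

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic regression data: y = 2 + 3*x + noise (hypothetical example).
    n = 200
    x = rng.normal(size=n)
    y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=n)

    # Ordinary least-squares fit with an intercept.
    X = np.column_stack([np.ones(n), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    sigma2_hat = np.mean(resid**2)          # MLE of the error variance (1/n, not 1/(n-1))

    k = X.shape[1] + 1                      # intercept, slope, and error variance

    # General form: BIC = k*ln(n) - 2*ln(L̂), with the maximized Gaussian log-likelihood.
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1)
    bic_general = k * np.log(n) - 2 * log_lik

    # Gaussian special case: BIC = n*ln(sigma2_hat) + k*ln(n), up to an additive constant.
    bic_gaussian = n * np.log(sigma2_hat) + k * np.log(n)

    print(bic_general - bic_gaussian)       # equals n*(ln(2*pi) + 1), independent of the model

Because the printed difference depends only on n, either form ranks a set of candidate models fitted to the same data in the same order.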
^ See the review paper: Stoica, P.; Selen, Y. (2004), "Model-order selection: a review of information criterion rules", IEEE Signal Processing Magazine (July): 36–47, doi:10.1109/MSP.2004.1311138, S2CID 17338979.