The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov–Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-square test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares.
In assessing whether a given distribution is suited to a data set, the following tests and their underlying measures of fit can be used:
In regression analysis, more specifically regression validation, the following topics relate to goodness of fit:
The following are examples that arise in the context of categorical data.
Pearson's chi-square test uses a measure of goodness of fit which is the sum of differences between observed and expected outcome frequencies (that is, counts of observations), each squared and divided by the expectation:

\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}

where O_i is the observed count in bin i and E_i is the expected count in bin i, asserted by the null hypothesis.

The expected frequency is calculated by:

E_i = \bigl(F(Y_u) - F(Y_l)\bigr)\, N

where F is the cumulative distribution function for the probability distribution being tested, Y_u is the upper limit for bin i, Y_l is the lower limit for bin i, and N is the sample size.
The resulting value can be compared with a chi-square distribution to determine the goodness of fit. The chi-square distribution has (k − c) degrees of freedom, where k is the number of non-empty bins and c is the number of estimated parameters (including location, scale, and shape parameters) for the distribution plus one. For example, for a 3-parameter Weibull distribution, c = 4.
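As an illustration, the statistic can be computed directly from observed and expected bin counts and referred to the chi-square distribution. The following Python sketch uses made-up counts and assumes no parameters were estimated from the data (so c = 1); the numbers are not drawn from the article's sources.

```python
# Minimal sketch: Pearson's chi-square goodness-of-fit statistic.
# The counts below are hypothetical, chosen only for illustration.
import numpy as np
from scipy import stats

observed = np.array([18, 25, 30, 17])          # observed counts O_i
expected = np.array([22.5, 22.5, 22.5, 22.5])  # expected counts E_i (same total as observed)

chi2_stat = np.sum((observed - expected) ** 2 / expected)  # sum of (O - E)^2 / E

k = len(observed)   # number of non-empty bins
c = 1               # no parameters estimated from the data, so c = 1
dof = k - c         # degrees of freedom

p_value = stats.chi2.sf(chi2_stat, dof)  # upper-tail probability under chi-square(dof)
print(chi2_stat, dof, p_value)
```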
A binomial experiment is a sequence of independent trials in which the trials can result in one of two outcomes, success or failure. There are n trials each with probability of success, denoted by p. Provided that np_i ≫ 1 for every i (where i = 1, 2, ..., k), then

\chi^2 = \sum_{i=1}^{k} \frac{(N_i - np_i)^2}{np_i} = \sum_{\text{all cells}} \frac{(O - E)^2}{E},

where N_i is the observed count for category i.
This has approximately a chi-square distribution with k − 1 degrees of freedom. The fact that there are k − 1 degrees of freedom is a consequence of the restriction \sum_{i=1}^{k} N_i = n. We know there are k observed bin counts; however, once any k − 1 are known, the remaining one is uniquely determined. Basically, one can say there are only k − 1 freely determined bin counts, thus k − 1 degrees of freedom.
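A short Python sketch of this case (with hypothetical trial counts and cell probabilities, not taken from the article) evaluates the statistic above and refers it to a chi-square distribution with k − 1 degrees of freedom.

```python
# Minimal sketch: chi-square statistic for the binomial/multinomial case.
# n, p, and the observed counts are hypothetical illustration values.
import numpy as np
from scipy import stats

n = 120                                  # number of trials
p = np.array([0.5, 0.3, 0.2])            # hypothesized cell probabilities p_i
counts = np.array([55, 40, 25])          # observed cell counts N_i

assert counts.sum() == n                 # the restriction that removes one degree of freedom

chi2_stat = np.sum((counts - n * p) ** 2 / (n * p))
dof = len(counts) - 1                    # k - 1 degrees of freedom
p_value = stats.chi2.sf(chi2_stat, dof)
print(chi2_stat, dof, p_value)
```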
G-tests are likelihood-ratio tests of statistical significance that are increasingly being used in situations where Pearson's chi-square tests were previously recommended.[7]
The general formula for G is

G = 2 \sum_{i} O_i \ln\!\left(\frac{O_i}{E_i}\right),

where O_i and E_i are the same as for the chi-square test, \ln denotes the natural logarithm, and the sum is taken over all non-empty bins. Furthermore, the total observed count should be equal to the total expected count:

\sum_{i} O_i = \sum_{i} E_i = N,

where N is the total number of observations.
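For comparison with the Pearson statistic above, a minimal Python sketch of the G statistic (again with hypothetical counts) is shown below; SciPy also exposes this calculation as stats.power_divergence with lambda_="log-likelihood".

```python
# Minimal sketch: the G statistic, G = 2 * sum_i O_i * ln(O_i / E_i).
# Counts are hypothetical; observed and expected totals must match.
import numpy as np
from scipy import stats

observed = np.array([18.0, 25.0, 30.0, 17.0])   # O_i
expected = np.array([22.5, 22.5, 22.5, 22.5])   # E_i, same total as observed

g_stat = 2.0 * np.sum(observed * np.log(observed / expected))
dof = len(observed) - 1                          # degrees of freedom as in the chi-square test
p_value = stats.chi2.sf(g_stat, dof)
print(g_stat, dof, p_value)
```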
G-tests have been recommended at least since the 1981 edition of the popular statistics textbook by Robert R. Sokal and F. James Rohlf.[8]