Furthermore, the total observed count must be equal to the total expected count:

$\sum_i O_i = \sum_i E_i = N,$

where $N$ is the total number of observations.
Both the G-test statistic and the chi-squared test statistic are special cases of a general family of power divergence statistics introduced by Cressie and Read.[2] For a real parameter $\lambda$, set

$2nI^{\lambda} = \frac{2}{\lambda(\lambda+1)} \sum_i O_i \left[ \left( \frac{O_i}{E_i} \right)^{\lambda} - 1 \right].$

Pearson's chi-squared statistic corresponds to $\lambda = 1$, and the G statistic is obtained in the limit $\lambda \to 0$.
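As a rough illustration of this family, the following base-R sketch evaluates the power divergence statistic for a chosen $\lambda$; the function name power_div and the example counts are illustrative only, not taken from the source. The $\lambda = 1$ and $\lambda \to 0$ cases reduce to the chi-squared and G statistics as noted above.

    # Cressie-Read power divergence statistic for observed counts O and
    # expected counts E; lambda = 1 gives Pearson's chi-squared statistic,
    # and the limit lambda -> 0 gives the G statistic.
    power_div <- function(O, E, lambda) {
      if (abs(lambda) < 1e-8) {
        2 * sum(O * log(O / E))          # limiting case: G statistic
      } else {
        (2 / (lambda * (lambda + 1))) * sum(O * ((O / E)^lambda - 1))
      }
    }

    O <- c(20, 30, 25, 25)
    E <- rep(sum(O) / 4, 4)              # uniform null hypothesis

    power_div(O, E, 1)                   # equals Pearson's chi-squared statistic
    power_div(O, E, 0)                   # equals the G statistic
    2 * sum(O * log(O / E))              # direct G for comparison
    sum((O - E)^2 / E)                   # direct chi-squared for comparison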
Suppose we had a sample $x = (x_1, \ldots, x_m)$ where each $x_i$ is the number of times that an object of type $i$ was observed. Furthermore, let $n = \sum_{i=1}^m x_i$ be the total number of observations. If we assume that the underlying model is multinomial, then the test statistic is defined by

$G = -2 \ln \frac{L(\tilde{\theta} \mid x)}{L(\hat{\theta} \mid x)},$

where $\tilde{\theta}$ is the null hypothesis and $\hat{\theta}$ is the maximum likelihood estimate (MLE) of the parameters given the data. Recall that for the multinomial model, the MLE $\hat{\theta}_i$ given some data is given by

$\hat{\theta}_i = \frac{x_i}{n}.$

Furthermore, we may represent each null hypothesis parameter $\tilde{\theta}_i$ as

$\tilde{\theta}_i = \frac{e_i}{n},$

where $e_i$ is the expected count of objects of type $i$ under the null hypothesis. Thus, by substituting the representations of $\tilde{\theta}$ and $\hat{\theta}$ in the log-likelihood ratio, the equation simplifies to

$G = -2 \ln \prod_{i=1}^m \left( \frac{\tilde{\theta}_i}{\hat{\theta}_i} \right)^{x_i} = 2 \sum_{i=1}^m x_i \ln \frac{x_i}{e_i}.$
Heuristically, one can imagine the term $x_i \ln \frac{x_i}{e_i}$ as continuous and approaching zero as $x_i \to 0$, so that terms with zero observations can simply be dropped. However, the expected count must be strictly greater than zero in every cell ($e_i > 0$ for all $i$) for the method to apply.
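As a concrete illustration of this formula, here is a minimal base-R sketch; the function name g_stat, the example counts, and the uniform null hypothesis are illustrative and not part of the source. It forms the expected counts from null probabilities, drops zero-count cells as described above, and refers G to a chi-squared distribution.

    # G statistic for a multinomial goodness-of-fit test:
    #   G = 2 * sum_i x_i * ln(x_i / e_i),  with e_i = n * p_i.
    g_stat <- function(x, p) {
      n <- sum(x)                      # total number of observations
      e <- n * p                       # expected counts under the null
      stopifnot(all(e > 0))            # expected counts must be strictly positive
      keep <- x > 0                    # terms with zero observations are dropped
      G <- 2 * sum(x[keep] * log(x[keep] / e[keep]))
      df <- length(x) - 1              # degrees of freedom as in the chi-squared test
      p_value <- pchisq(G, df = df, lower.tail = FALSE)
      list(G = G, df = df, p.value = p_value)
    }

    # Example: are the four categories equally likely?
    x <- c(12, 0, 9, 19)
    g_stat(x, p = rep(1/4, 4))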
Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the distribution of the test statistic $G$ is approximately a chi-squared distribution, with the same number of degrees of freedom as in the corresponding chi-squared test.
For very small samples the multinomial test for goodness of fit and Fisher's exact test for contingency tables, or even Bayesian hypothesis selection, are preferable to the G-test.[3] McDonald recommends always using an exact test (exact test of goodness-of-fit, Fisher's exact test) if the total sample size is less than 1,000.
There is nothing magical about a sample size of 1,000, it's just a nice round number that is well within the range where an exact test, chi-square test, and G-test will give almost identical p values. Spreadsheets, web-page calculators, and SAS shouldn't have any problem doing an exact test on a sample size of 1,000.
The general formula for Pearson's chi-squared test statistic is

$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}.$
The approximation of the G-test statistic by the chi-squared test statistic is obtained by a second order Taylor expansion of the natural logarithm around 1 (see the derivation below). We have $G \approx \chi^2$ when the observed counts $O_i$ are close to the expected counts $E_i$. When this difference is large, however, the approximation by the chi-squared test statistic begins to break down. Here, the effects of outliers in data will be more pronounced, and this explains why chi-squared tests fail in situations with little data.
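A small numerical illustration of this point, using made-up counts: when the observed counts are close to the expected counts the two statistics nearly coincide, and they drift apart when the counts deviate strongly.

    # Compare the G statistic with Pearson's chi-squared statistic when the
    # observed counts are close to, and then far from, the expected counts.
    G_of  <- function(O, E) 2 * sum(O * log(O / E))
    chisq <- function(O, E) sum((O - E)^2 / E)

    E <- c(50, 50, 50, 50)

    O_close <- c(47, 55, 52, 46)         # close to expectation: G ~ chi-squared
    c(G = G_of(O_close, E), X2 = chisq(O_close, E))

    O_far <- c(5, 120, 60, 15)           # far from expectation: approximation degrades
    c(G = G_of(O_far, E), X2 = chisq(O_far, E))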
For samples of a reasonable size, the G-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution for the G-test is better than for Pearson's chi-squared test.[6] In cases where $O_i > 2 E_i$ for some cell, the G-test is always better than the chi-squared test.[citation needed]
For testing goodness-of-fit the G-test is infinitely more efficient than the chi-squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann.[7][8]
Consider

$G = 2 \sum_i O_i \ln \frac{O_i}{E_i},$

and let $O_i = E_i + \delta_i$ with $\sum_i \delta_i = 0$, so that the total number of counts remains the same. Assume that each $\delta_i$ is small in comparison to $E_i$; under the null hypothesis this holds for large $E_i$ because of the central limit theorem, which gives $\delta_i \in O(\sqrt{E_i})$. A second order Taylor expansion of the logarithm, $\ln(1+u) \approx u - u^2/2$ with $u = \delta_i / E_i$, then yields

$G = 2 \sum_i (E_i + \delta_i) \ln\left(1 + \frac{\delta_i}{E_i}\right) \approx 2 \sum_i \left( \delta_i + \frac{\delta_i^2}{2 E_i} \right) = \sum_i \frac{\delta_i^2}{E_i} = \chi^2,$

where terms of third and higher order in $\delta_i / E_i$ have been dropped, $\sum_i \delta_i = 0$ has been used in the penultimate equality, and the final expression is exactly Pearson's chi-squared statistic because $\delta_i = O_i - E_i$.
The G-test statistic is proportional to the Kullback–Leibler divergence of the empirical distribution of the observed data from the theoretical distribution of the null hypothesis:

$G = 2 \sum_i O_i \ln \frac{O_i}{E_i} = 2N \sum_i o_i \ln \frac{o_i}{e_i} = 2N \, D_{\mathrm{KL}}(o \,\|\, e),$

where $N$ is the total number of observations and $e_i = E_i / N$ and $o_i = O_i / N$ are the theoretical and empirical probabilities of objects of type $i$, respectively.
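The identity can be checked numerically; in the following base-R sketch the counts are illustrative.

    # G as 2*N times the Kullback-Leibler divergence of the empirical
    # distribution from the theoretical (null) distribution.
    O <- c(30, 70, 50, 50)               # observed counts
    E <- c(40, 60, 50, 50)               # expected counts (same total)
    N <- sum(O)

    o <- O / N                           # empirical probabilities
    e <- E / N                           # theoretical probabilities

    kl <- sum(o * log(o / e))            # KL divergence D(o || e) in nats
    c(G = 2 * sum(O * log(O / E)), from_KL = 2 * N * kl)   # identical values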
In this case objects with two-dimensional types are considered. Let $x_{ij}$ be the count of objects of type $(i,j)$, i.e., $x_{ij}$ is the entry of the contingency table in row $i$ and column $j$. Set

$x_{i\cdot} = \sum_j x_{ij}, \qquad x_{\cdot j} = \sum_i x_{ij}, \qquad n = \sum_{i,j} x_{ij}.$

Then the estimated expected count of objects of type $(i,j)$ assuming independence is given by

$e_{ij} = \frac{x_{i\cdot}\, x_{\cdot j}}{n}.$

Finally, the G-test statistic in this case is given by

$G = 2 \sum_{i,j} x_{ij} \ln \frac{x_{ij}}{e_{ij}}.$
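A minimal base-R sketch of this computation for an illustrative 2×2 table (the counts are made up for the example):

    # G-test of independence for a contingency table: expected counts under
    # independence are e_ij = (row total * column total) / n.
    x <- matrix(c(30, 10,
                  20, 40), nrow = 2, byrow = TRUE)

    n  <- sum(x)
    e  <- outer(rowSums(x), colSums(x)) / n   # estimated expected counts
    G  <- 2 * sum(x * log(x / e))             # G statistic (no zero cells here)
    df <- (nrow(x) - 1) * (ncol(x) - 1)

    c(G = G, df = df, p.value = pchisq(G, df, lower.tail = FALSE))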
Let $X$ and $Y$ be random variables with joint distribution given by the empirical distribution of the contingency table, i.e.,

$P(X = i, Y = j) = \frac{x_{ij}}{n}.$

Then the G-test statistic can be expressed in several alternative forms:

$G = 2n \sum_{i,j} \frac{x_{ij}}{n} \ln \frac{x_{ij}\, n}{x_{i\cdot}\, x_{\cdot j}} = 2n \, I(X;Y) = 2n \left[ H(X) + H(Y) - H(X,Y) \right],$

where $I(X;Y)$ denotes the mutual information of $X$ and $Y$ and $H$ denotes the Shannon entropy computed with natural logarithms. Thus, the G-test statistic is $2n$ times the mutual information of the empirical joint distribution.
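The equivalence with the mutual information form can be checked numerically; the following sketch reuses the same illustrative 2×2 table and computes the entropies with natural logarithms.

    # Check that G equals 2*n*I(X;Y), where I(X;Y) = H(X) + H(Y) - H(X,Y) is
    # the mutual information of the empirical joint distribution.
    x <- matrix(c(30, 10,
                  20, 40), nrow = 2, byrow = TRUE)
    n <- sum(x)
    p <- x / n                                   # empirical joint distribution

    H  <- function(prob) -sum(prob * log(prob))  # Shannon entropy in nats
    MI <- H(rowSums(p)) + H(colSums(p)) - H(p)   # mutual information I(X;Y)

    e <- outer(rowSums(x), colSums(x)) / n
    c(G = 2 * sum(x * log(x / e)), from_MI = 2 * n * MI)   # identical values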
It can also be shown[citation needed] that the inverse document frequency weighting commonly used for text retrieval is an approximation of G applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus. Similarly, the result of Bayesian inference applied to a choice of a single multinomial distribution for all rows of the contingency table taken together, versus the more general alternative of a separate multinomial per row, produces results very similar to the G-test statistic.[citation needed]
In R, fast implementations can be found in the AMR and Rfast packages. For the AMR package, the command is g.test, which works exactly like chisq.test from base R. R also has the likelihood.test function in the Deducer package. Note: Fisher's G-test in the GeneCycle package of the R programming language (fisher.g.test) does not implement the G-test as described in this article, but rather Fisher's exact test of Gaussian white-noise in a time series.[11]
Another R implementation to compute the G-test statistic and corresponding p-values is provided by the R package entropy. The commands are Gstat for the standard G statistic and the associated p-value, and Gstatindep for the G statistic applied to comparing joint and product distributions to test independence.
In SAS, one can conduct a G-test by applying the /chisq option after the proc freq statement.[12]
In Stata, one can conduct a G-test by applying the lr option after the tabulate command.
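When none of these packages or procedures is at hand, the statistic is straightforward to compute directly in base R; the following sketch uses an illustrative 2×2 table and shows Pearson's test from chisq.test alongside it for comparison.

    # Base-R fallback: G-test of independence for a contingency table,
    # with Pearson's chi-squared test from chisq.test for comparison.
    tab <- matrix(c(12, 5,
                     7, 16), nrow = 2, byrow = TRUE)

    e  <- outer(rowSums(tab), colSums(tab)) / sum(tab)
    G  <- 2 * sum(tab * log(tab / e))
    df <- (nrow(tab) - 1) * (ncol(tab) - 1)

    pchisq(G, df, lower.tail = FALSE)    # p-value for the G-test
    chisq.test(tab, correct = FALSE)     # Pearson's chi-squared, for comparison
    # Per the description above, AMR's g.test(tab) takes the same kind of
    # input as chisq.test(tab).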
^ McDonald, J. H. (2014). "G-test of goodness-of-fit". Handbook of Biological Statistics (3rd ed.). Baltimore, Maryland: Sparky House Publishing. pp. 53–58.
^ a b Cressie, Noel; Read, Timothy R. C. (1984). "Multinomial goodness-of-fit tests". Journal of the Royal Statistical Society, Series B (Methodological). 46 (3): 440–464. Retrieved 14 January 2026.
^ Hoey, J. (2012). "The Two-Way Likelihood Ratio (G) Test and Comparison to Two-Way Chi-Squared Test". arXiv:1206.4881 [stat.ME].
^ Harremoës, P.; Tusnády, G. (2012). "Information divergence is more chi squared distributed than the chi squared statistic". Proceedings ISIT 2012. pp. 538–543. arXiv:1202.1125. Bibcode:2012arXiv1202.1125H.
^ G-test of independence, G-test for goodness-of-fit in Handbook of Biological Statistics, University of Delaware. (pp. 46–51, 64–69 in: McDonald, J. H. (2009). Handbook of Biological Statistics (2nd ed.). Sparky House Publishing, Baltimore, Maryland.)