G-test


In statistics, G-tests are likelihood-ratio or maximum likelihood statistical significance tests that are increasingly being used in situations where chi-squared tests were previously recommended.[1]

Formulation


The general formula for the G-test statistic is

G = 2 \sum_i O_i \ln\left( \frac{O_i}{E_i} \right),

where $O_i \geq 0$ is the observed count in a cell, $E_i > 0$ is the expected count under the null hypothesis, $\ln$ denotes the natural logarithm, and the sum is taken over all non-empty cells. The resulting $G$ is asymptotically chi-squared distributed as the total number of observations tends to infinity (convergence in distribution[2]).

Furthermore, the total observed count must be equal to the total expected count:

\sum_i O_i = \sum_i E_i = N,

where $N$ is the total number of observations.

Both the G-test statistic $G$ and the chi-squared test statistic $\chi^2$ are special cases of a general family of power-divergence statistics due to Cressie and Read.[2] For $\lambda \notin \{0, -1\}$, set

\operatorname{CR}_\lambda = \frac{2}{\lambda(\lambda+1)} \sum_i O_i \left( \left( \frac{O_i}{E_i} \right)^{\lambda} - 1 \right).

Then,

G = \lim_{\lambda \to 0} \operatorname{CR}_\lambda, \qquad \chi^2 = \operatorname{CR}_1.
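
As a numerical illustration, the following is a minimal sketch with made-up counts; SciPy's power_divergence implements the Cressie–Read family, with lambda_=0 giving $G$ and lambda_=1 giving Pearson's $\chi^2$:

```python
import numpy as np
from scipy.stats import power_divergence

# Hypothetical observed counts and matching expected counts (equal totals).
obs = np.array([30, 14, 34, 45, 27])
exp = np.full(5, obs.sum() / 5)

# Direct evaluation of G = 2 * sum_i O_i * ln(O_i / E_i).
G = 2 * np.sum(obs * np.log(obs / exp))

# The same statistics via the power-divergence family CR_lambda:
# lambda_ -> 0 gives G, lambda_ = 1 gives Pearson's chi-squared.
G_cr, _ = power_divergence(obs, f_exp=exp, lambda_=0)
chi2_cr, _ = power_divergence(obs, f_exp=exp, lambda_=1)

print(G, G_cr)    # agree up to floating-point error
print(chi2_cr)    # Pearson's chi-squared statistic, CR_1
```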

Derivation


We can derive the value of the G-test from the log-likelihood ratio test where the underlying model is a multinomial model.

Suppose we had a sample $O = (O_1, \ldots, O_m)$ where each $O_i$ is the number of times that an object of type $i$ was observed. Furthermore, let $N = \sum_{i=1}^m O_i$ be the total number of observations. If we assume that the underlying model is multinomial, then the test statistic is defined by

\ln\left( \frac{L(\tilde{p} \mid O)}{L(\hat{p} \mid O)} \right) = \ln\left( \frac{\prod_{i=1}^m \tilde{p}_i^{\,O_i}}{\prod_{i=1}^m \hat{p}_i^{\,O_i}} \right),

where $\tilde{p} = (\tilde{p}_1, \ldots, \tilde{p}_m)$ is the null hypothesis and $\hat{p} = (\hat{p}_1, \ldots, \hat{p}_m)$ is the maximum likelihood estimate (MLE) of the parameters given the data. Recall that for the multinomial model, the MLE $\hat{p}_i$ given the data is given by

\hat{p}_i = \frac{O_i}{N}.

Furthermore, we may represent each null hypothesis parameter $\tilde{p}_i$ as

\tilde{p}_i = \frac{E_i}{N},

where $E_i$ is the expected count of objects of type $i$ under the null hypothesis. Thus, by substituting the representations of $\tilde{p}_i$ and $\hat{p}_i$ into the log-likelihood ratio, the equation simplifies to

\ln\left( \frac{L(\tilde{p} \mid O)}{L(\hat{p} \mid O)} \right) = \ln\left( \prod_{i=1}^m \left( \frac{E_i}{O_i} \right)^{O_i} \right) = \sum_{i=1}^m O_i \ln\left( \frac{E_i}{O_i} \right)

Finally, multiply by a factor of $-2$ (used to make the G-test formula asymptotically equivalent to Pearson's chi-squared test statistic) to achieve the form

G = -2 \sum_{i=1}^m O_i \ln\left( \frac{E_i}{O_i} \right) = 2 \sum_{i=1}^m O_i \ln\left( \frac{O_i}{E_i} \right)

Heuristically, one can imagine $O_i$ as continuous and approaching zero, in which case $O_i \ln O_i \to 0$, and terms with zero observations can simply be dropped. However, the expected count in each cell must be strictly greater than zero ($E_i > 0$ for all $i$) for the method to apply.
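
The equivalence above can be checked numerically. A minimal sketch follows, with hypothetical counts and a uniform null hypothesis; the multinomial coefficient cancels in the likelihood ratio, so the difference of log-pmfs reduces to the sum shown above:

```python
import numpy as np
from scipy.stats import multinomial

# Hypothetical observed counts and a uniform null hypothesis.
obs = np.array([12, 25, 8, 35])
N = obs.sum()
p_null = np.array([0.25, 0.25, 0.25, 0.25])   # tilde-p, the null hypothesis
p_mle = obs / N                               # hat-p, the multinomial MLE

# G from observed and expected counts.
exp = N * p_null
G_direct = 2 * np.sum(obs * np.log(obs / exp))

# G as -2 times the log-likelihood ratio of the two multinomial models.
llr = multinomial.logpmf(obs, n=N, p=p_null) - multinomial.logpmf(obs, n=N, p=p_mle)
G_llr = -2 * llr

print(G_direct, G_llr)   # agree up to floating-point error
```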

Distribution and use


Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the distribution of the test statistic $G$ is approximately a chi-squared distribution, with the same number of degrees of freedom as in the corresponding chi-squared test.
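
For example, a p-value for a goodness-of-fit G-test can be obtained from the upper tail of the chi-squared distribution; a minimal sketch with hypothetical counts:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical goodness-of-fit test with k = 4 categories and a uniform null.
obs = np.array([43, 52, 54, 31])
exp = np.full(4, obs.sum() / 4)

G = 2 * np.sum(obs * np.log(obs / exp))
df = len(obs) - 1            # degrees of freedom, as for the chi-squared test
p_value = chi2.sf(G, df)     # upper-tail probability of the chi-squared distribution

print(G, df, p_value)
```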

For very small samples the multinomial test for goodness of fit, Fisher's exact test for contingency tables, or even Bayesian hypothesis selection are preferable to the G-test.[3] McDonald recommends always using an exact test (exact test of goodness-of-fit, Fisher's exact test) if the total sample size is less than 1,000.

There is nothing magical about a sample size of 1,000, it's just a nice round number that is well within the range where an exact test, chi-square test, and G-test will give almost identical p values. Spreadsheets, web-page calculators, and SAS shouldn't have any problem doing an exact test on a sample size of 1,000.
— John H. McDonald (2014)[3]

G-tests have been recommended at least since the 1981 edition of Biometry, a statistics textbook by Robert R. Sokal and F. James Rohlf.[4]

Relation to other metrics


Relation to the chi-squared test


The commonly used chi-squared tests for goodness of fit to a distribution and for independence in contingency tables are in fact approximations of the log-likelihood ratio on which the G-tests are based.[5]

The general formula for Pearson's chi-squared test statistic is

\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}.

The approximation of the G-test statistic by the chi-squared test statistic is obtained by a second-order Taylor expansion of the natural logarithm around 1 (see the derivation below). We have $G \approx \chi^2$ when the observed counts $O_i$ are close to the expected counts $E_i$. When this difference is large, however, the approximation begins to break down. Here, the effects of outliers in the data will be more pronounced, and this explains why chi-squared tests fail in situations with little data.

For samples of a reasonable size, the G-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution is better for the G-test than for Pearson's chi-squared test.[6] In cases where $O_i > 2 E_i$ for some cell, the G-test is always better than the chi-squared test.[citation needed]

For testing goodness of fit the G-test is infinitely more efficient than the chi-squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann.[7][8]

Derivation (chi-squared)


Consider

G = 2 \sum_i O_i \ln\left( \frac{O_i}{E_i} \right),

and let $O_i = E_i + \delta_i$ with $\sum_i \delta_i = 0$, so that the total number of counts remains the same. Assume that $\delta_i = O_i - E_i$ is small in comparison to $E_i$ for all $i$. To be more precise, notice that $E_i = \Theta(n)$ using big $\Theta$ notation. If $O_i = E_i + \mathcal{O}(n^{1/2})$ using big $\mathcal{O}$ notation for large $n$, which should be true under the null hypothesis because of the central limit theorem, then $\delta_i = \mathcal{O}(n^{1/2})$ and

\frac{\delta_i^3}{E_i^2} = \mathcal{O}\left( \frac{n^{3/2}}{n^2} \right) = \mathcal{O}(n^{-1/2})

follow, which will be used later.

Upon substitution we find,

G = 2 \sum_i (E_i + \delta_i) \ln\left( 1 + \frac{\delta_i}{E_i} \right).

Using the Taylor expansion $\ln(1+x) = x - \tfrac{1}{2}x^2 + \mathcal{O}(x^3)$ yields

G = 2 \sum_i (E_i + \delta_i) \left( \frac{\delta_i}{E_i} - \frac{1}{2} \frac{\delta_i^2}{E_i^2} + \mathcal{O}\left( \frac{\delta_i^3}{E_i^3} \right) \right),

and distributing terms we find,

G = 2 \sum_i \left( \delta_i + \frac{1}{2} \frac{\delta_i^2}{E_i} + \mathcal{O}\left( \frac{\delta_i^3}{E_i^2} \right) \right).

Now, using $\sum_i \delta_i = 0$ and $\delta_i = O_i - E_i$ and $\mathcal{O}(\delta_i^3 / E_i^2) = \mathcal{O}(n^{-1/2})$ for large $n$, we can write the result,

G \approx \sum_i \frac{(O_i - E_i)^2}{E_i}.
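
A quick numerical check of this approximation, using hypothetical counts with equal totals: when the observed counts stay close to the expected counts the two statistics nearly coincide, and they diverge as the deviations grow.

```python
import numpy as np

def g_stat(obs, exp):
    return 2 * np.sum(obs * np.log(obs / exp))

def chi2_stat(obs, exp):
    return np.sum((obs - exp) ** 2 / exp)

exp = np.array([100.0, 100.0, 100.0, 100.0])

near = np.array([104.0, 97.0, 101.0, 98.0])      # small deviations, same total
print(g_stat(near, exp), chi2_stat(near, exp))   # nearly identical

far = np.array([220.0, 40.0, 90.0, 50.0])        # large deviations, same total
print(g_stat(far, exp), chi2_stat(far, exp))     # the approximation breaks down
```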

Relation to Kullback–Leibler divergence


The G-test statistic is proportional to the Kullback–Leibler divergence of the theoretical distribution $\tilde{p} = (\tilde{p}_1, \ldots, \tilde{p}_m)$ of the null hypothesis from the empirical distribution $\hat{p} = (\hat{p}_1, \ldots, \hat{p}_m)$ of the observed data:

G = 2 \sum_i O_i \ln\left( \frac{O_i}{E_i} \right) = 2N \sum_i \hat{p}_i \ln\left( \frac{\hat{p}_i}{\tilde{p}_i} \right) = 2N \, D_{\mathrm{KL}}(\hat{p} \,\|\, \tilde{p}),

where $N$ is the total number of observations and $\tilde{p}_i = E_i / N$ and $\hat{p}_i = O_i / N$ are the theoretical and empirical probabilities of objects of type $i$, respectively.
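
A short numerical check of this identity, with hypothetical counts; scipy.special.rel_entr computes the elementwise terms of the Kullback–Leibler divergence:

```python
import numpy as np
from scipy.special import rel_entr   # rel_entr(p, q) = p * log(p / q), elementwise

# Hypothetical counts and a non-uniform null hypothesis.
obs = np.array([18, 52, 30])
N = obs.sum()
p_null = np.array([0.2, 0.5, 0.3])   # tilde-p
p_hat = obs / N                      # hat-p

G_direct = 2 * np.sum(obs * np.log(obs / (N * p_null)))
G_kl = 2 * N * np.sum(rel_entr(p_hat, p_null))   # 2 N D_KL(hat-p || tilde-p)

print(G_direct, G_kl)   # agree up to floating-point error
```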

Relation to mutual information


For analysis of contingency tables the value of the G-test statistic can also be expressed in terms of mutual information.

In this case objects with two-dimensional types $(i,j)$ are considered. Let $O_{ij}$ be the count of objects of type $(i,j)$, i.e., $O_{ij}$ is the entry of the contingency table in row $i$ and column $j$. Set

N = \sum_{ij} O_{ij}, \qquad \hat{p}_{ij} = \frac{O_{ij}}{N}, \qquad \hat{p}_{i\bullet} = \frac{\sum_j O_{ij}}{N}, \qquad \hat{p}_{\bullet j} = \frac{\sum_i O_{ij}}{N}.

Then the estimated expected count of objects of type $(i,j)$ assuming independence is given by

E_{ij} = N \hat{p}_{i\bullet} \hat{p}_{\bullet j}.

Finally, the G-test statistic in this case is given by

G = 2 \sum_{ij} O_{ij} \ln\left( \frac{O_{ij}}{E_{ij}} \right).

Let $X, Y$ be random variables with joint distribution given by the empirical distribution $\hat{p}_{ij}$ of the contingency table, i.e.,

P(X = i, Y = j) = \hat{p}_{ij}, \qquad P(X = i) = \hat{p}_{i\bullet}, \qquad P(Y = j) = \hat{p}_{\bullet j}.

Then the G-test statistic can be expressed in several alternative forms:

G = 2N \sum_{ij} \hat{p}_{ij} \left( \ln(\hat{p}_{ij}) - \ln(\hat{p}_{i\bullet}) - \ln(\hat{p}_{\bullet j}) \right) = 2N \bigl( H(X) + H(Y) - H(X,Y) \bigr) = 2N \cdot \operatorname{MI}(X,Y),

where the entropies $H(X)$ and $H(Y)$ are given by

H(X) = -\sum_i \hat{p}_{i\bullet} \ln(\hat{p}_{i\bullet}), \qquad H(Y) = -\sum_j \hat{p}_{\bullet j} \ln(\hat{p}_{\bullet j})

and the joint entropy $H(X,Y)$ is given by

H(X,Y) = -\sum_{ij} \hat{p}_{ij} \ln(\hat{p}_{ij})

and the mutual information of $X$ and $Y$ is

\operatorname{MI}(X,Y) = H(X) + H(Y) - H(X,Y).
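
These identities are easy to verify numerically; a minimal sketch using a hypothetical 2×3 contingency table, with all entropies in natural-logarithm (nat) units:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]                 # drop empty cells; 0 * ln 0 is taken as 0
    return -np.sum(p * np.log(p))

# Hypothetical contingency table of counts O_ij.
O = np.array([[10.0, 20.0, 30.0],
              [25.0, 15.0,  5.0]])
N = O.sum()

p_joint = O / N                  # hat-p_ij
p_row = p_joint.sum(axis=1)      # hat-p_{i.}
p_col = p_joint.sum(axis=0)      # hat-p_{.j}

# G from expected counts under independence, E_ij = N * p_row_i * p_col_j.
E = N * np.outer(p_row, p_col)
G_counts = 2 * np.sum(O * np.log(O / E))

# G as 2 * N * MI(X, Y) via marginal and joint entropies.
MI = entropy(p_row) + entropy(p_col) - entropy(p_joint.ravel())
G_mi = 2 * N * MI

print(G_counts, G_mi)   # agree up to floating-point error
```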


It can also be shown[citation needed] that the inverse document frequency weighting commonly used for text retrieval is an approximation of G applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus. Similarly, the result of Bayesian inference applied to a choice of single multinomial distribution for all rows of the contingency table taken together versus the more general alternative of a separate multinomial per row produces results very similar to the G-test statistic.[citation needed]

Application


Statistical software

  • In R, fast implementations can be found in the AMR and Rfast packages. For the AMR package, the command is g.test, which works exactly like chisq.test from base R. R also has the likelihood.test function in the Deducer package. Note: Fisher's G-test in the GeneCycle package of the R programming language (fisher.g.test) does not implement the G-test as described in this article, but rather Fisher's exact test of Gaussian white noise in a time series.[11]
  • Another R implementation to compute the G-test statistic and corresponding p-values is provided by the R package entropy. The commands are Gstat for the standard G statistic and the associated p-value, and Gstatindep for the G statistic applied to comparing joint and product distributions to test independence.
  • In SAS, one can conduct a G-test by applying the /chisq option after proc freq.[12]
  • In Stata, one can conduct a G-test by applying the lr option after the tabulate command.
  • In Java, use org.apache.commons.math3.stat.inference.GTest.[13]
  • In Python, use scipy.stats.power_divergence with lambda_=0,[14] as in the sketch after this list.
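
A brief SciPy usage sketch with made-up counts; the contingency-table call relies on chi2_contingency accepting the same lambda_ argument of the Cressie–Read family, so passing "log-likelihood" selects the G-test:

```python
import numpy as np
from scipy.stats import power_divergence, chi2_contingency

# Goodness of fit: hypothetical observed counts against a uniform expectation.
obs = np.array([16, 18, 16, 14, 12, 12])
g, p = power_divergence(obs, lambda_="log-likelihood")   # lambda_ = 0, the G-test
print(g, p)

# Test of independence on a hypothetical 2x2 table (continuity correction off
# so the statistic matches the formula for G given above).
table = np.array([[268, 199], [807, 759]])
g2, p2, dof, expected = chi2_contingency(table, correction=False,
                                         lambda_="log-likelihood")
print(g2, p2, dof)
```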

References

  1. ^ McDonald, J. H. (2014). "G-test of goodness-of-fit". Handbook of Biological Statistics (3rd ed.). Baltimore, Maryland: Sparky House Publishing. pp. 53–58.
  2. ^ Cressie, Noel; Read, Timothy R. C. (1984). "Multinomial goodness-of-fit tests". Journal of the Royal Statistical Society, Series B (Methodological). 46 (3): 440–464. Retrieved 14 January 2026.
  3. ^ McDonald, John H. (2014). "Small numbers in chi-square and G-tests". Handbook of Biological Statistics (3rd ed.). Baltimore, MD: Sparky House Publishing. pp. 86–89.
  4. ^ Sokal, R. R.; Rohlf, F. J. (1981). Biometry: The Principles and Practice of Statistics in Biological Research (2nd ed.). New York: Freeman. ISBN 978-0-7167-2411-7.
  5. ^ Hoey, J. (2012). "The Two-Way Likelihood Ratio (G) Test and Comparison to Two-Way Chi-Squared Test". arXiv:1206.4881 [stat.ME].
  6. ^ Harremoës, P.; Tusnády, G. (2012). "Information divergence is more chi squared distributed than the chi squared statistic". Proceedings ISIT 2012. pp. 538–543. arXiv:1202.1125. Bibcode:2012arXiv1202.1125H.
  7. ^ Quine, M. P.; Robinson, J. (1985). "Efficiencies of chi-square and likelihood ratio goodness-of-fit tests". Annals of Statistics. 13 (2): 727–742. doi:10.1214/aos/1176349550.
  8. ^ Harremoës, P.; Vajda, I. (2008). "On the Bahadur-efficient testing of uniformity by means of the entropy". IEEE Transactions on Information Theory. 54 (1): 321–331. Bibcode:2008ITIT...54..321H. CiteSeerX 10.1.1.226.8051. doi:10.1109/tit.2007.911155. S2CID 2258586.
  9. ^ Dunning, Ted (1993). "Accurate Methods for the Statistics of Surprise and Coincidence". Computational Linguistics. 19 (1), March 1993.
  10. ^ Rivas, Elena (30 October 2020). "RNA structure prediction using positive and negative evolutionary information". PLOS Computational Biology. 16 (10): e1008387. Bibcode:2020PLSCB..16E8387R. doi:10.1371/journal.pcbi.1008387. PMC 7657543. PMID 33125376.
  11. ^ Fisher, R. A. (1929). "Tests of significance in harmonic analysis". Proceedings of the Royal Society of London A. 125 (796): 54–59. Bibcode:1929RSPSA.125...54F. doi:10.1098/rspa.1929.0151. hdl:2440/15201.
  12. ^ G-test of independence, G-test for goodness-of-fit in Handbook of Biological Statistics, University of Delaware. (pp. 46–51, 64–69 in: McDonald, J. H. (2009). Handbook of Biological Statistics (2nd ed.). Sparky House Publishing, Baltimore, Maryland.)
  13. ^ "org.apache.commons.math3.stat.inference.GTest". Archived from the original on 2018-07-26. Retrieved 2018-07-11.
  14. ^ "scipy.stats.power_divergence — SciPy v1.7.1 Manual".
