
Test statistic

From Wikipedia, the free encyclopedia
Statistic used in statistical hypothesis testing

A test statistic is a quantity derived from the sample for statistical hypothesis testing.[1] A hypothesis test is typically specified in terms of a test statistic, considered as a numerical summary of a data set that reduces the data to one value that can be used to perform the hypothesis test. In general, a test statistic is selected or defined in such a way as to quantify, within observed data, behaviours that would distinguish the null from the alternative hypothesis, where such an alternative is prescribed, or that would characterize the null hypothesis if there is no explicitly stated alternative hypothesis.

An important property of a test statistic is that its sampling distribution under the null hypothesis must be calculable, either exactly or approximately, which allows p-values to be calculated. A test statistic shares some of the same qualities of a descriptive statistic, and many statistics can be used as both test statistics and descriptive statistics. However, a test statistic is specifically intended for use in statistical testing, whereas the main quality of a descriptive statistic is that it is easily interpretable. Some informative descriptive statistics, such as the sample range, do not make good test statistics since it is difficult to determine their sampling distribution.

Two widely used test statistics are the t-statistic and the F-statistic.

Example

Suppose the task is to test whether a coin is fair (i.e. has equal probabilities of producing a head or a tail). If the coin is flipped 100 times and the results are recorded, the raw data can be represented as a sequence of 100 heads and tails. If there is interest in the marginal probability of obtaining a tail, only the number T out of the 100 flips that produced a tail needs to be recorded. But T can also be used as a test statistic in one of two ways:

  • the exact sampling distribution of T under the null hypothesis is the binomial distribution with parameters 0.5 and 100.
  • the value of T can be compared with its expected value under the null hypothesis of 50, and since the sample size is large, a normal distribution can be used as an approximation to the sampling distribution either for T or for the revised test statistic T − 50.

Using one of these sampling distributions, it is possible to compute either a one-tailed or two-tailed p-value for the null hypothesis that the coin is fair. The test statistic in this case reduces a set of 100 numbers to a single numerical summary that can be used for testing.
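
The calculation can be sketched in a few lines of Python (a minimal sketch, assuming SciPy is available; the count of 58 tails is a hypothetical value chosen only for illustration):

    import math
    from scipy import stats

    n = 100   # number of flips
    t = 58    # hypothetical observed number of tails

    # Exact test: under the null hypothesis, T ~ Binomial(100, 0.5).
    p_exact = stats.binomtest(t, n=n, p=0.5, alternative='two-sided').pvalue

    # Normal approximation: under the null, T has mean 50 and variance 25.
    z = (t - 0.5 * n) / math.sqrt(n * 0.5 * 0.5)
    p_approx = 2 * stats.norm.sf(abs(z))

    print(p_exact, p_approx)   # close to each other for n this large

Both p-values are computed from the sampling distribution of the single summary T, which is the point of the example.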

Common test statistics

One-sample tests are appropriate when a sample is being compared to the population from a hypothesis. The population characteristics are known from theory or are calculated from the population.

Two-sample tests are appropriate for comparing two samples, typically experimental and control samples from a scientifically controlled experiment.

Paired tests are appropriate for comparing two samples where it is impossible to control important variables. Rather than comparing two sets, members are paired between samples so the difference between the members becomes the sample. Typically the mean of the differences is then compared to zero. The common example scenario for when a paired difference test is appropriate is when a single set of test subjects has something applied to them and the test is intended to check for an effect.
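
As a minimal sketch in Python (with made-up before/after measurements; SciPy assumed available), the paired test is literally a one-sample test on the differences:

    import numpy as np
    from scipy import stats

    before = np.array([12.1, 11.4, 13.0, 12.7, 11.9])   # hypothetical data
    after  = np.array([11.6, 11.0, 12.4, 12.5, 11.5])

    # The paired differences become the single sample; test their mean against 0.
    t_stat, p_value = stats.ttest_1samp(after - before, popmean=0.0)

    # SciPy's dedicated paired test gives the same result.
    t_stat2, p_value2 = stats.ttest_rel(after, before)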

Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation.

A t-test is appropriate for comparing means under relaxed conditions (less is assumed).
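
The contrast can be sketched as follows (hypothetical data; σ = 0.3 is an assumed known value used only for the z-test branch of the sketch):

    import math
    import numpy as np
    from scipy import stats

    x = np.array([5.2, 4.8, 5.5, 5.1, 4.9, 5.3])   # hypothetical sample
    mu0 = 5.0                                       # hypothesized mean

    # z-test: the population standard deviation must be known.
    sigma = 0.3
    z = (x.mean() - mu0) / (sigma / math.sqrt(len(x)))
    p_z = 2 * stats.norm.sf(abs(z))

    # t-test: the standard deviation is estimated from the sample instead,
    # and the t distribution with n - 1 degrees of freedom is used.
    t_stat, p_t = stats.ttest_1samp(x, popmean=mu0)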

Tests of proportions are analogous to tests of means (the 50% proportion).
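
For example, a one-proportion z-test standardizes the sample proportion just as a z-test of a mean standardizes the sample mean; a sketch with hypothetical counts, using the formula from the table below:

    import math
    from scipy import stats

    x, n, p0 = 340, 600, 0.5     # hypothetical successes, trials, null proportion
    p_hat = x / n

    # Standardize the sample proportion under the null hypothesis.
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    p_value = 2 * stats.norm.sf(abs(z))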

Chi-squared tests use the same calculations and the same probability distribution for different applications:

  • Chi-squared tests for variance are used to determine whether a normal population has a specified variance. The null hypothesis is that it does.
  • Chi-squared tests of independence are used for deciding whether two variables are associated or are independent. The variables are categorical rather than numeric. The test can be used to decide whether left-handedness is correlated with height (or not); the null hypothesis is that the variables are independent. The numbers used in the calculation are the observed and expected frequencies of occurrence (from contingency tables); see the sketch after this list.
  • Chi-squared goodness of fit tests are used to determine the adequacy of curves fit to data. The null hypothesis is that the curve fit is adequate. It is common to determine curve shapes to minimize the mean square error, so it is appropriate that the goodness-of-fit calculation sums the squared errors.
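
A sketch of the independence test with a made-up handedness-by-height contingency table (the counts are invented; SciPy computes the expected frequencies and the statistic):

    import numpy as np
    from scipy import stats

    # Hypothetical 2x2 contingency table: rows = left/right-handed,
    # columns = short/tall.
    observed = np.array([[12, 18],
                         [40, 30]])

    chi2, p_value, dof, expected = stats.chi2_contingency(observed)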

F-tests (analysis of variance, ANOVA) are commonly used when deciding whether groupings of data by category are meaningful. If the variance of test scores of the left-handed in a class is much smaller than the variance of the whole class, then it may be useful to study lefties as a group. The null hypothesis is that two variances are the same – so the proposed grouping is not meaningful.
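
A minimal sketch of the variance comparison with invented test scores; putting the larger sample variance in the numerator follows the convention given in the table below:

    import numpy as np
    from scipy import stats

    whole_class = np.array([72.0, 85, 91, 68, 77, 83, 89, 74, 80, 95])  # hypothetical
    lefties     = np.array([79.0, 81, 80, 82, 78])                      # hypothetical

    v_class = whole_class.var(ddof=1)
    v_left  = lefties.var(ddof=1)

    # Arrange so the larger sample variance is on top, giving F >= 1.
    if v_class >= v_left:
        F, df1, df2 = v_class / v_left, len(whole_class) - 1, len(lefties) - 1
    else:
        F, df1, df2 = v_left / v_class, len(lefties) - 1, len(whole_class) - 1

    # Two-sided p-value for H0: the two variances are equal.
    p_value = min(2 * stats.f.sf(F, df1, df2), 1.0)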

In the table below, the symbols used are defined at the bottom of the table. Many other tests can be found in other articles. Proofs exist that the test statistics are appropriate.[2]

Each entry below gives the name of the test, its formula, and the assumptions or notes that apply.

One-sample z-test
  Formula: $z = \dfrac{\bar{x} - \mu_0}{\sigma/\sqrt{n}}$
  Assumptions: (Normal population or n large) and σ known. Here z is the distance from the mean in relation to the standard deviation of the mean. For non-normal distributions it is possible to calculate a minimum proportion of a population that falls within k standard deviations for any k (see Chebyshev's inequality).

Two-sample z-test
  Formula: $z = \dfrac{(\bar{x}_1 - \bar{x}_2) - d_0}{\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}}$
  Assumptions: Normal populations and independent observations and σ1 and σ2 known, where $d_0$ is the value of $\mu_1 - \mu_2$ under the null hypothesis.

One-sample t-test
  Formula: $t = \dfrac{\bar{x} - \mu_0}{s/\sqrt{n}}$, with $df = n - 1$
  Assumptions: (Normal population or n large) and σ unknown.

Paired t-test
  Formula: $t = \dfrac{\bar{d} - d_0}{s_d/\sqrt{n}}$, with $df = n - 1$
  Assumptions: (Normal population of differences or n large) and σ unknown.

Two-sample pooled t-test, equal variances
  Formula: $t = \dfrac{(\bar{x}_1 - \bar{x}_2) - d_0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$, where $s_p^2 = \dfrac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}$ and $df = n_1 + n_2 - 2$.[3]
  Assumptions: (Normal populations or n1 + n2 > 40) and independent observations and σ1 = σ2 unknown.

Two-sample unpooled t-test, unequal variances (Welch's t-test)
  Formula: $t = \dfrac{(\bar{x}_1 - \bar{x}_2) - d_0}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}$, with $df = \dfrac{\left(\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}\right)^2}{\dfrac{\left(s_1^2/n_1\right)^2}{n_1 - 1} + \dfrac{\left(s_2^2/n_2\right)^2}{n_2 - 1}}$.[3]
  Assumptions: (Normal populations or n1 + n2 > 40) and independent observations and σ1 ≠ σ2 both unknown.

One-proportion z-test
  Formula: $z = \dfrac{\hat{p} - p_0}{\sqrt{p_0(1 - p_0)/n}}$
  Assumptions: n · p0 > 10 and n(1 − p0) > 10 and the data are a simple random sample (SRS).

Two-proportion z-test, pooled for $H_0\colon p_1 = p_2$
  Formula: $z = \dfrac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1 - \hat{p})\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}$, where $\hat{p} = \dfrac{x_1 + x_2}{n_1 + n_2}$
  Assumptions: n1p1 > 5 and n1(1 − p1) > 5 and n2p2 > 5 and n2(1 − p2) > 5 and independent observations.

Two-proportion z-test, unpooled for $|d_0| > 0$
  Formula: $z = \dfrac{(\hat{p}_1 - \hat{p}_2) - d_0}{\sqrt{\dfrac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}}$
  Assumptions: n1p1 > 5 and n1(1 − p1) > 5 and n2p2 > 5 and n2(1 − p2) > 5 and independent observations.

Chi-squared test for variance
  Formula: $\chi^2 = (n - 1)\dfrac{s^2}{\sigma_0^2}$, with $df = n - 1$
  Assumptions: Normal population.

Chi-squared test for goodness of fit
  Formula: $\chi^2 = \sum_k \dfrac{(\text{observed} - \text{expected})^2}{\text{expected}}$, with df = k − 1 − (number of parameters estimated)
  Assumptions: one of the following must hold: all expected counts are at least 5,[4]: 350  or all expected counts are > 1 and no more than 20% of expected counts are less than 5.[5]

Two-sample F-test for equality of variances
  Formula: $F = \dfrac{s_1^2}{s_2^2}$
  Assumptions: Normal populations. Arrange so that $s_1^2 \geq s_2^2$ and reject H0 for $F > F(\alpha/2, n_1 - 1, n_2 - 1)$.[6]

Regression t-test of $H_0\colon R^2 = 0$
  Formula: $t = \sqrt{\dfrac{R^2(n - k - 1^*)}{1 - R^2}}$
  Notes: Reject H0 for $t > t(\alpha/2, n - k - 1^*)$.[4]: 288  (*Subtract 1 for the intercept; the k terms contain independent variables.)
In general, the subscript 0 indicates a value taken from the null hypothesis, H0, which should be used as much as possible in constructing its test statistic. Definitions of other symbols:

  • α = probability of a Type I error (rejecting a null hypothesis when it is in fact true)
  • n = sample size (n1 and n2 for samples 1 and 2)
  • x̄ = sample mean; μ0 = hypothesized population mean
  • σ = population standard deviation; σ² = population variance
  • s = sample standard deviation; s² = sample variance
  • d̄ = sample mean of differences; d0 = hypothesized population mean difference; sd = sample standard deviation of differences
  • sp = pooled standard deviation
  • p̂ = x/n = sample proportion; p0 = hypothesized population proportion
  • df = degrees of freedom
  • k = number of categories (goodness of fit) or number of predictor terms (regression)
  • R² = coefficient of determination
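
As a cross-check of the pooled two-sample t formula above, the following sketch (hypothetical samples, SciPy assumed available) computes the statistic directly and compares it with SciPy's pooled implementation:

    import math
    import numpy as np
    from scipy import stats

    x1 = np.array([20.4, 21.1, 19.8, 22.0, 20.9])        # hypothetical samples
    x2 = np.array([19.2, 20.1, 18.9, 19.8, 20.3, 19.5])

    # Pooled variance and t statistic, straight from the table's formula.
    n1, n2 = len(x1), len(x2)
    sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    t_manual = (x1.mean() - x2.mean()) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

    # equal_var=True selects the pooled-variance version of the test.
    t_scipy, p_value = stats.ttest_ind(x1, x2, equal_var=True)
    assert math.isclose(t_manual, t_scipy)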


References

  1. Berger, R. L.; Casella, G. (2001). Statistical Inference (PDF) (2nd ed.). Duxbury Press. p. 374.
  2. Loveland, Jennifer L. (2011). Mathematical Justification of Introductory Hypothesis Tests and Development of Reference Materials (M.Sc. (Mathematics)). Utah State University. Retrieved April 30, 2013. Abstract: "The focus was on the Neyman–Pearson approach to hypothesis testing. A brief historical development of the Neyman–Pearson approach is followed by mathematical proofs of each of the hypothesis tests covered in the reference material." The proofs do not reference the concepts introduced by Neyman and Pearson; instead, they show that traditional test statistics have the probability distributions ascribed to them, so that significance calculations assuming those distributions are correct. The thesis information is also posted at mathnstats.com as of April 2013.
  3. NIST/SEMATECH. "Two-Sample t-test for Equal Means". e-Handbook of Statistical Methods.
  4. Steel, R. G. D.; Torrie, J. H. (1960). Principles and Procedures of Statistics with Special Reference to the Biological Sciences. McGraw Hill. doi:10.1002/bimj.19620040313.
  5. Weiss, Neil A. (1999). Introductory Statistics (5th ed.). Addison Wesley. p. 802. ISBN 0-201-59877-9.
  6. NIST/SEMATECH. "F-Test for Equality of Two Standard Deviations". e-Handbook of Statistical Methods. (Testing standard deviations is the same as testing variances.)