p-value

From Wikipedia, the free encyclopedia
Function of the observed sample results
Not to be confused with the P-factor.

In null-hypothesis significance testing, the p-value[note 1] is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct.[2][3] A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Even though reporting p-values of statistical tests is common practice in academic publications of many quantitative fields, misinterpretation and misuse of p-values is widespread and has been a major topic in mathematics and metascience.[4][5]

In 2016, the American Statistical Association (ASA) made a formal statement that "p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone" and that "a p-value, or statistical significance, does not measure the size of an effect or the importance of a result", and "does not provide a good measure of evidence regarding a model or hypothesis" without "context or other evidence".[6] That said, a task force convened by the ASA in 2019 issued a statement on statistical significance and replicability, concluding: "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data".[7]

Basic concepts


In statistics, every conjecture concerning the unknown probability distribution of a collection of random variables representing the observed data X in some study is called a statistical hypothesis. If we state one hypothesis only and the aim of the statistical test is to see whether this hypothesis is tenable, but not to investigate other specific hypotheses, then such a test is called a null hypothesis test.

As our statistical hypothesis will, by definition, state some property of the distribution, the null hypothesis is the default hypothesis under which that property does not exist. The null hypothesis is typically that some parameter (such as a correlation or a difference between means) in the populations of interest is zero. Our hypothesis might specify the probability distribution of X precisely, or it might only specify that it belongs to some class of distributions. Often, we reduce the data to a single numerical statistic, e.g., T, whose marginal probability distribution is closely connected to a main question of interest in the study.

The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic T.[note 2] The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis.

Loosely speaking, rejection of the null hypothesis implies that there is sufficient evidence against it.

As a particular example, if a null hypothesis states that a certain summary statistic T follows the standard normal distribution N(0, 1), then the rejection of this null hypothesis could mean that (i) the mean of T is not 0, or (ii) the variance of T is not 1, or (iii) T is not normally distributed. Different tests of the same null hypothesis would be more or less sensitive to different alternatives. However, even if we do manage to reject the null hypothesis for all 3 alternatives, and even if we know that the distribution is normal and the variance is 1, the null hypothesis test does not tell us which non-zero values of the mean are now most plausible. The more independent observations from the same probability distribution one has, the more accurate the test will be, and the higher the precision with which one will be able to determine the mean value and show that it is not equal to zero; but this will also increase the importance of evaluating the real-world or scientific relevance of this deviation.

Definition and interpretation


Definition


The p-value is the probability under the null hypothesis of obtaining a real-valued test statistic at least as extreme as the one obtained. Consider an observed test statistic t drawn from a random variable T whose distribution is unknown. Then the p-value p is the prior probability of observing a test-statistic value at least as "extreme" as t if the null hypothesis H0 were true. That is:

  • p = Pr(T ≥ t | H0) for a one-sided right-tail test,
  • p = Pr(T ≤ t | H0) for a one-sided left-tail test,
  • p = 2 min{Pr(T ≥ t | H0), Pr(T ≤ t | H0)} for a two-sided test.
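These three cases can be sketched in Python for the common situation of a standard normal test statistic, using only the standard library (the survival function of the normal is expressed through the complementary error function; the function names are illustrative, not from any cited source):

```python
from math import erfc, sqrt

def p_right(z):
    """One-sided right-tail p-value: Pr(Z >= z) under a standard normal null."""
    return 0.5 * erfc(z / sqrt(2))

def p_left(z):
    """One-sided left-tail p-value: Pr(Z <= z)."""
    return 0.5 * erfc(-z / sqrt(2))

def p_two_sided(z):
    """Two-sided p-value: twice the smaller of the two tail probabilities."""
    return 2 * min(p_right(z), p_left(z))

print(round(p_two_sided(1.96), 3))  # ≈ 0.05
```

For a continuous statistic the two one-sided p-values sum to 1, which is why the two-sided value is defined via the smaller tail.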

Interpretations


The error that a practising statistician would consider the more important to avoid (which is a subjective judgment) is called the error of the first kind. The first demand of the mathematical theory is to deduce such test criteria as would ensure that the probability of committing an error of the first kind would equal (or approximately equal, or not exceed) a preassigned number α, such as α = 0.05 or 0.01, etc. This number is called the level of significance.

— Jerzy Neyman, "The Emergence of Mathematical Statistics"[8]

In a significance test, the null hypothesis H0 is rejected if the p-value is less than a predefined threshold value α, which is referred to as the alpha level or significance level. α is not derived from the data, but rather is set by the researcher before examining the data. α is commonly set to 0.05, though lower alpha levels are sometimes used. The 0.05 value (equivalent to a 1-in-20 chance) was originally proposed by Ronald Fisher in 1925 in his famous book Statistical Methods for Research Workers.[9]

Different p-values based on independent sets of data can be combined, for instance using Fisher's combined probability test.

Distribution


The p-value is a function of the chosen test statistic T and is therefore a random variable. If the null hypothesis fixes the probability distribution of T precisely (e.g. H0: θ = θ0, where θ is the only parameter), and if that distribution is continuous, then when the null hypothesis is true, the p-value is uniformly distributed between 0 and 1. Regardless of the truth of H0, the p-value is not fixed; if the same test is repeated independently with fresh data, one will typically obtain a different p-value in each iteration.
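The uniformity of the p-value under a true null can be checked with a small simulation (an illustrative sketch; the sample size, seed, and number of trials are arbitrary choices, not from the article):

```python
import random
from math import erfc, sqrt

random.seed(1)  # fixed seed, chosen only for reproducibility
n, trials = 30, 2000
pvals = []
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]  # the null is true here
    z = sum(sample) / sqrt(n)                        # z-statistic, known variance 1
    p = erfc(abs(z) / sqrt(2))                       # two-sided p-value
    pvals.append(p)

# Under the null, p-values are approximately Uniform(0, 1):
print(round(sum(pvals) / trials, 2))          # mean near 0.5
print(sum(p < 0.05 for p in pvals) / trials)  # about 5% fall below 0.05
```

The second printed figure illustrates the operational meaning of α: when the null is true, a test at level 0.05 rejects in roughly 5% of repetitions.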

Usually only a single p-value relating to a hypothesis is observed, so the p-value is interpreted by a significance test, and no effort is made to estimate the distribution it was drawn from. When a collection of p-values are available (e.g. when considering a group of studies on the same subject), the distribution of significant p-values is sometimes called a p-curve.[10] A p-curve can be used to assess the reliability of scientific literature, such as by detecting publication bias or p-hacking.[10][11]

Distribution for composite hypothesis


In parametric hypothesis testing problems, a simple or point hypothesis refers to a hypothesis where the parameter's value is assumed to be a single number. In contrast, in a composite hypothesis the parameter's value is given by a set of numbers. When the null hypothesis is composite (or the distribution of the statistic is discrete), then when the null hypothesis is true the probability of obtaining a p-value less than or equal to any number between 0 and 1 is still less than or equal to that number. In other words, it remains the case that very small values are relatively unlikely if the null hypothesis is true, and that a significance test at level α is obtained by rejecting the null hypothesis if the p-value is less than or equal to α.[12][13]

For example, when testing the null hypothesis that a distribution is normal with a mean less than or equal to zero against the alternative that the mean is greater than zero (H0: μ ≤ 0, variance known), the null hypothesis does not specify the exact probability distribution of the appropriate test statistic. In this example, that would be the Z-statistic belonging to the one-sided one-sample Z-test. For each possible value of the theoretical mean, the Z-test statistic has a different probability distribution. In these circumstances, the p-value is defined by taking the least favorable null-hypothesis case, which is typically on the border between null and alternative. This definition ensures the complementarity of p-values and alpha levels: α = 0.05 means one only rejects the null hypothesis if the p-value is less than or equal to 0.05, and the hypothesis test will indeed have a maximum type-1 error rate of 0.05.

Usage


The p-value is widely used in statistical hypothesis testing, specifically in null hypothesis significance testing. In this method, before conducting the study, one first chooses a model (the null hypothesis) and the alpha level α (most commonly 0.05). After analyzing the data, if the p-value is less than α, that is taken to mean that the observed data is sufficiently inconsistent with the null hypothesis for the null hypothesis to be rejected. However, that does not prove that the null hypothesis is false. The p-value does not, in itself, establish probabilities of hypotheses. Rather, it is a tool for deciding whether to reject the null hypothesis.[14]

Misuse

Main article: Misuse of p-values

According to the ASA, there is widespread agreement that p-values are often misused and misinterpreted.[3] One practice that has been particularly criticized is accepting the alternative hypothesis for any p-value nominally less than 0.05 without other supporting evidence. Although p-values are helpful in assessing how incompatible the data are with a specified statistical model, contextual factors must also be considered, such as "the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis".[3] Another concern is that the p-value is often misunderstood as being the probability that the null hypothesis is true.[3][15] p-values and significance tests also say nothing about the possibility of drawing conclusions from a sample to a population.

Some statisticians have proposed abandoning p-values and focusing more on other inferential statistics,[3] such as confidence intervals,[16][17] likelihood ratios,[18][19] or Bayes factors,[20][21][22] but there is heated debate on the feasibility of these alternatives.[23][24] Others have suggested removing fixed significance thresholds and interpreting p-values as continuous indices of the strength of evidence against the null hypothesis.[25][26] Yet others have suggested reporting, alongside p-values, the prior probability of a real effect that would be required to keep the false positive risk (i.e. the probability that there is no real effect) below a pre-specified threshold (e.g. 5%).[27]

That said, in 2019 a task force convened by the ASA considered the use of statistical methods in scientific studies, specifically hypothesis tests and p-values, and their connection to replicability.[7] Its statement notes that "Different measures of uncertainty can complement one another; no single measure serves all purposes", citing the p-value as one of these measures. It also stresses that p-values can provide valuable information both when the specific value is considered and when it is compared to some threshold, and, in general, that "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data". This sentiment was echoed in a comment in Nature Human Behaviour which, in response to recommendations to redefine statistical significance to p ≤ 0.005, proposed that "researchers should transparently report and justify all choices they make when designing a study, including the alpha level."[28]

Calculation


Usually, T is a test statistic. A test statistic is the output of a scalar function of all the observations. This statistic provides a single number, such as a t-statistic or an F-statistic. As such, the test statistic follows a distribution determined by the function used to define that test statistic and the distribution of the input observational data.

For the important case in which the data are hypothesized to be a random sample from a normal distribution, different null hypothesis tests have been developed, depending on the nature of the test statistic and the hypotheses of interest about its distribution. Such tests include the z-test for hypotheses concerning the mean of a normal distribution with known variance, the t-test (based on Student's t-distribution of a suitable statistic) for hypotheses concerning the mean of a normal distribution when the variance is unknown, and the F-test (based on the F-distribution of yet another statistic) for hypotheses concerning the variance. For data of another nature, for instance categorical (discrete) data, test statistics might be constructed whose null hypothesis distribution is based on normal approximations to appropriate statistics obtained by invoking the central limit theorem for large samples, as in the case of Pearson's chi-squared test.

Thus computing a p-value requires a null hypothesis, a test statistic (together with deciding whether the researcher is performing a one-tailed test or a two-tailed test), and data. Even though computing the test statistic on given data may be easy, computing the sampling distribution under the null hypothesis, and then computing its cumulative distribution function (CDF), is often a difficult problem. Today, this computation is done using statistical software, often via numeric methods (rather than exact formulae), but in the early and mid 20th century it was instead done via tables of values, and one interpolated or extrapolated p-values from these discrete values[citation needed]. Rather than using a table of p-values, Fisher instead inverted the CDF, publishing a list of values of the test statistic for given fixed p-values; this corresponds to computing the quantile function (inverse CDF).
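Both directions of the computation, from statistic to p-value via the CDF, and from a fixed p-value back to a critical value via the inverse CDF (Fisher's tabulation), can be sketched for the normal case with Python's standard-library `statistics.NormalDist` (the choice of z = 2.5 is an arbitrary illustration):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

# p-value from the CDF: two-sided p for an observed z-statistic of 2.5
z = 2.5
p = 2 * (1 - nd.cdf(abs(z)))
print(round(p, 4))  # ≈ 0.0124

# Fisher's inversion: the critical value of z for a fixed p (quantile function)
z_crit = nd.inv_cdf(1 - 0.05 / 2)  # two-sided cutoff at p = 0.05
print(round(z_crit, 2))  # 1.96
```

For distributions without a closed-form CDF, statistical software performs the same two steps with numeric approximations, exactly as described above.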

Example

Main article: Checking whether a coin is fair

Testing the fairness of a coin


As an example of a statistical test, an experiment is performed to determine whether a coin flip is fair (equal chance of landing heads or tails) or unfairly biased (one outcome being more likely than the other).

Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The full data X would be a sequence of twenty of the symbols "H" or "T". The statistic on which one might focus could be the total number T of heads. The null hypothesis is that the coin is fair, and coin tosses are independent of one another. If a right-tailed test is considered, which would be the case if one is actually interested in the possibility that the coin is biased towards falling heads, then the p-value of this result is the chance of a fair coin landing on heads at least 14 times out of 20 flips. That probability can be computed from binomial coefficients as

Pr(14 heads) + Pr(15 heads) + ⋯ + Pr(20 heads) = (1/2^20) [C(20,14) + C(20,15) + ⋯ + C(20,20)] = 60460/1048576 ≈ 0.058.

This probability is the p-value, considering only extreme results that favor heads. This is called a one-tailed test. However, one might be interested in deviations in either direction, favoring either heads or tails. The two-tailed p-value, which considers deviations favoring either heads or tails, may instead be calculated. As the binomial distribution is symmetrical for a fair coin, the two-sided p-value is simply twice the above calculated single-sided p-value: the two-sided p-value is 0.115.

In the above example:

  • Null hypothesis (H0): The coin is fair, with Pr(heads) = 0.5.
  • Test statistic: Number of heads.
  • Alpha level (designated threshold of significance): 0.05.
  • Observation O: 14 heads out of 20 flips.
  • Two-tailed p-value of observation O given H0 = 2 × min(Pr(no. of heads ≥ 14), Pr(no. of heads ≤ 14)) = 2 × min(0.058, 0.978) = 2 × 0.058 = 0.115.

The Pr(no. of heads ≤ 14) = 1 − Pr(no. of heads ≥ 14) + Pr(no. of heads = 14) = 1 − 0.058 + 0.036 = 0.978; however, the symmetry of this binomial distribution makes finding the smaller of the two probabilities an unnecessary computation. Here, the calculated p-value exceeds 0.05, meaning that the data falls within the range of what would happen 95% of the time, if the coin were fair. Hence, the null hypothesis is not rejected at the 0.05 level.

However, had one more head been obtained, the resultingp-value (two-tailed) would have been 0.0414 (4.14%), in which case the null hypothesis would be rejected at the 0.05 level.
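The whole coin example can be reproduced with a few lines of Python using only the standard library (`math.comb`); the helper function name is illustrative:

```python
from math import comb

def right_tail_p(heads, flips):
    """Pr(at least `heads` heads in `flips` flips of a fair coin)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

p_one = right_tail_p(14, 20)
print(round(p_one, 3))      # 0.058  (one-tailed)
print(round(2 * p_one, 3))  # 0.115  (two-tailed, by symmetry)

# With one more head, the two-tailed p-value drops below 0.05:
print(round(2 * right_tail_p(15, 20), 4))  # 0.0414
```

This makes concrete how a single extra head moves the result across the conventional 0.05 threshold.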

Optional stopping


The difference between the two meanings of "extreme" appears when we consider sequential hypothesis testing, or optional stopping, for the fairness of the coin. In general, optional stopping changes how the p-value is calculated.[29][30] Suppose we design the experiment as follows:

  • Flip the coin twice. If both come up heads or both come up tails, end the experiment.
  • Else, flip the coin 4 more times.

This experiment has 7 types of outcomes: 2 heads, 2 tails, 5 heads 1 tail, ..., 1 head 5 tails. We now calculate the p-value of the "3 heads 3 tails" outcome.

If we use the test statistic heads − tails, then under the null hypothesis the two-sided p-value of the observed value 0 is exactly equal to 1, and both the one-sided left-tail p-value and the one-sided right-tail p-value are exactly equal to 19/32.

If we consider every outcome that has equal or lower probability than "3 heads 3 tails" as "at least as extreme", then the p-value is exactly 1/2.

However, suppose we had planned to simply flip the coin 6 times no matter what happened; then the second definition of the p-value would mean that the p-value of "3 heads 3 tails" is exactly 1.

Thus, the "at least as extreme" definition of the p-value is deeply contextual and depends on what the experimenter planned to do even in situations that did not occur.
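The stopping design above has few enough outcomes to enumerate exactly. The sketch below does so with exact rational arithmetic, assuming the heads − tails statistic for the tail-probability definition (the outcome encoding is an illustrative choice):

```python
from fractions import Fraction
from math import comb

seq_p = Fraction(1, 64)  # probability of any particular full 6-flip sequence
# Design: flip twice; stop on HH or TT, otherwise flip 4 more times.
# Map each outcome (heads, tails) to (heads - tails, probability).
outcomes = {
    (2, 0): (2, Fraction(1, 4)),   # HH, stopped early
    (0, 2): (-2, Fraction(1, 4)),  # TT, stopped early
}
for extra_heads in range(5):       # 4 further flips after a mixed start (2 ways)
    h, t = 1 + extra_heads, 1 + (4 - extra_heads)
    outcomes[(h, t)] = (h - t, 2 * comb(4, extra_heads) * seq_p)

stat_obs, prob_obs = outcomes[(3, 3)]  # observed: 3 heads, 3 tails
right = sum(pr for s, pr in outcomes.values() if s >= stat_obs)
left = sum(pr for s, pr in outcomes.values() if s <= stat_obs)
low_prob = sum(pr for s, pr in outcomes.values() if pr <= prob_obs)

print(right, left)                   # 19/32 19/32  (one-sided tails)
print(min(1, 2 * min(right, left)))  # two-sided p = 1
print(low_prob)                      # 1/2  ("equal or lower probability")
```

The enumeration confirms both values quoted in the text: 19/32 for each one-sided tail, and 1/2 under the "equal or lower probability" definition.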

History

[Portraits: John Arbuthnot, Pierre-Simon Laplace, Karl Pearson, Ronald Fisher]

P-value computations date back to the 1700s, when they were computed for the human sex ratio at birth and used to compute statistical significance relative to the null hypothesis of equal probability of male and female births.[31] John Arbuthnot studied this question in 1710,[32][33][34][35] and examined birth records in London for each of the 82 years from 1629 to 1710. In every year, the number of males born in London exceeded the number of females. Considering more male or more female births as equally likely, the probability of the observed outcome is 1/2^82, or about 1 in 4,836,000,000,000,000,000,000,000; in modern terms, the p-value. This is vanishingly small, leading Arbuthnot to conclude that this was not due to chance, but to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the p = 1/2^82 significance level. This and other work by Arbuthnot is credited as "… the first use of significance tests …",[36] the first example of reasoning about statistical significance,[37] and "… perhaps the first published report of a nonparametric test …",[33] specifically the sign test; see details at Sign test § History.

The same question was later addressed by Pierre-Simon Laplace, who instead used a parametric test, modeling the number of male births with a binomial distribution:[38]

In the 1770s Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls. He concluded by calculation of ap-value that the excess was a real, but unexplained, effect.

The p-value was first formally introduced by Karl Pearson, in his Pearson's chi-squared test,[39] using the chi-squared distribution and notated as capital P.[39] The p-values for the chi-squared distribution (for various values of χ2 and degrees of freedom), now notated as P, were calculated in (Elderton 1902), collected in (Pearson 1914, pp. xxxi–xxxiii, 26–28, Table XII).

Ronald Fisher formalized and popularized the use of the p-value in statistics,[40][41] with it playing a central role in his approach to the subject.[42] In his highly influential book Statistical Methods for Research Workers (1925), Fisher proposed the level p = 0.05, or a 1 in 20 chance of being exceeded by chance, as a limit for statistical significance, and applied this to a normal distribution (as a two-tailed test), thus yielding the rule of two standard deviations (on a normal distribution) for statistical significance (see 68–95–99.7 rule).[43][note 3][44]

He then computed a table of values, similar to Elderton's but, importantly, with the roles of χ2 and p reversed. That is, rather than computing p for different values of χ2 (and degrees of freedom n), he computed values of χ2 that yield specified p-values, specifically 0.99, 0.98, 0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.02, and 0.01.[45] That allowed computed values of χ2 to be compared against cutoffs and encouraged the use of p-values (especially 0.05, 0.02, and 0.01) as cutoffs, instead of computing and reporting p-values themselves. The same type of tables were then compiled in (Fisher & Yates 1938), which cemented the approach.[44]

As an illustration of the application of p-values to the design and interpretation of experiments, in his following book The Design of Experiments (1935), Fisher presented the lady tasting tea experiment,[46] which is the archetypal example of the p-value.

To evaluate a lady's claim that she (Muriel Bristol) could distinguish by taste how tea is prepared (first adding the milk to the cup and then the tea, or first the tea and then the milk), she was sequentially presented with 8 cups: 4 prepared one way, 4 prepared the other, and asked to determine the preparation of each cup (knowing that there were 4 of each). In that case, the null hypothesis was that she had no special ability, the test was Fisher's exact test, and the p-value was 1/C(8,4) = 1/70 ≈ 0.014, so Fisher was willing to reject the null hypothesis (consider the outcome highly unlikely to be due to chance) if all were classified correctly. (In the actual experiment, Bristol correctly classified all 8 cups.)

Fisher reiterated thep = 0.05 threshold and explained its rationale, stating:[47]

It is usual and convenient for experimenters to take 5 per cent as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results.

He also applied this threshold to the design of experiments, noting that had only 6 cups been presented (3 of each), a perfect classification would have yielded only a p-value of 1/C(6,3) = 1/20 = 0.05, which would not have met this level of significance.[47] Fisher also underlined the interpretation of p as the long-run proportion of values at least as extreme as the data, assuming the null hypothesis is true.
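Both of Fisher's design calculations amount to one binomial coefficient each, which can be checked directly in Python:

```python
from math import comb

# 8 cups, 4 of each preparation: probability of a perfect score by guessing
p_8 = 1 / comb(8, 4)
print(round(p_8, 3))  # 0.014  (i.e. 1/70)

# With only 6 cups (3 of each), a perfect score is not significant at 0.05:
p_6 = 1 / comb(6, 3)
print(p_6)  # 0.05  (i.e. 1/20)
```

This is why Fisher insisted on 8 cups: the smaller design cannot, even in the best case, produce a p-value below his own 0.05 threshold.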

In later editions, Fisher explicitly contrasted the use of thep-value for statistical inference in science with the Neyman–Pearson method, which he terms "Acceptance Procedures".[48] Fisher emphasizes that while fixed levels such as 5%, 2%, and 1% are convenient, the exactp-value can be used, and the strength of evidence can and will be revised with further experimentation. In contrast, decision procedures require a clear-cut decision, yielding an irreversible action, and the procedure is based on costs of error, which, he argues, are inapplicable to scientific research.

Related indices

[edit]

The E-value can refer to two concepts, both of which are related to the p-value and both of which play a role in multiple testing. First, it corresponds to a generic, more robust alternative to the p-value that can deal with optional continuation of experiments. Second, it is also used to abbreviate "expect value", which is the expected number of times that one expects to obtain a test statistic at least as extreme as the one actually observed if one assumes that the null hypothesis is true.[49] This expect value is the product of the number of tests and the p-value.

The q-value is the analog of the p-value with respect to the positive false discovery rate.[50] It is used in multiple hypothesis testing to maintain statistical power while minimizing the false positive rate.[51]

The probability of direction (pd) is the Bayesian numerical equivalent of the p-value.[52] It corresponds to the proportion of the posterior distribution that has the same sign as the median, typically varying between 50% and 100%, and represents the certainty with which an effect is positive or negative.

Second-generation p-values extend the concept of p-values by not considering extremely small, practically irrelevant effect sizes as significant.[53]

The S-value, also known as the surprisal value, has been defined as a logarithmic transformation of the p-value: S-value = −log2(p-value). The transformation into S-values is intended to facilitate the interpretation of p-values by using a more intuitive logarithmic scale that indicates how “surprised” one is by a result.[54][55][56]
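The S-value transformation is a one-liner; the interpretation is that s bits of surprisal correspond to seeing s heads in a row from a fair coin:

```python
from math import log2

def s_value(p):
    """Surprisal: bits of information against the null, -log2(p)."""
    return -log2(p)

print(round(s_value(0.05), 2))  # 4.32 — roughly as surprising as ~4 heads in a row
print(s_value(0.25))            # 2.0  — two heads in a row
```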


Notes

  1. ^ Italicisation, capitalisation and hyphenation of the term vary. For example, AMA style uses "P value", APA style uses "p value", and the American Statistical Association uses "p-value". In all cases, the "p" stands for probability.[1]
  2. ^ The statistical significance of a result does not imply that the result also has real-world relevance. For instance, a medication might have a statistically significant effect that is too small to be interesting.
  3. ^ To be more specific, p = 0.05 corresponds to about 1.96 standard deviations for a normal distribution (two-tailed test), and 2 standard deviations corresponds to about a 1 in 22 chance of being exceeded by chance, or p ≈ 0.045; Fisher notes these approximations.

References

  1. ^"ASA House Style"(PDF).Amstat News. American Statistical Association.
  2. ^Aschwanden C (2015-11-24)."Not Even Scientists Can Easily Explain P-values".FiveThirtyEight. Archived fromthe original on 25 September 2019. Retrieved11 October 2019.
  3. ^abcdeWasserstein RL, Lazar NA (7 March 2016)."The ASA's Statement on p-Values: Context, Process, and Purpose".The American Statistician.70 (2):129–133.doi:10.1080/00031305.2016.1154108.
  4. ^Hubbard R, Lindsay RM (2008). "WhyP Values Are Not a Useful Measure of Evidence in Statistical Significance Testing".Theory & Psychology.18 (1):69–88.doi:10.1177/0959354307086923.S2CID 143487211.
  5. ^Munafò MR, Nosek BA, Bishop DV, Button KS, Chambers CD, du Sert NP, et al. (January 2017)."A manifesto for reproducible science".Nature Human Behaviour.1 (1) 0021.doi:10.1038/s41562-016-0021.PMC 7610724.PMID 33954258.S2CID 6326747.
  6. ^Wasserstein, Ronald L.; Lazar, Nicole A. (2016-04-02)."The ASA Statement on p -Values: Context, Process, and Purpose".The American Statistician.70 (2):129–133.doi:10.1080/00031305.2016.1154108.ISSN 0003-1305.S2CID 124084622.
  7. ^abBenjamini, Yoav; De Veaux, Richard D.; Efron, Bradley; Evans, Scott; Glickman, Mark; Graubard, Barry I.; He, Xuming; Meng, Xiao-Li; Reid, Nancy M.; Stigler, Stephen M.; Vardeman, Stephen B.; Wikle, Christopher K.; Wright, Tommy; Young, Linda J.; Kafadar, Karen (2021-10-02)."ASA President's Task Force Statement on Statistical Significance and Replicability".Chance.34 (4). Informa UK Limited:10–11.doi:10.1080/09332480.2021.2003631.ISSN 0933-2480.
  8. ^Neyman, Jerzy (1976). "The Emergence of Mathematical Statistics: A Historical Sketch with Particular Reference to the United States". In Owen, D.B. (ed.).On the History of Statistics and Probability. Textbooks and Monographs. New York: Marcel Dekker Inc. p. 161.
  9. ^Fisher, R. A. (1992), Kotz, Samuel; Johnson, Norman L. (eds.), "Statistical Methods for Research Workers",Breakthroughs in Statistics: Methodology and Distribution, Springer Series in Statistics, New York, NY: Springer, pp. 66–70,doi:10.1007/978-1-4612-4380-9_6,ISBN 978-1-4612-4380-9
  10. ^abHead ML, Holman L, Lanfear R, Kahn AT, Jennions MD (March 2015)."The extent and consequences of p-hacking in science".PLOS Biology.13 (3) e1002106.doi:10.1371/journal.pbio.1002106.PMC 4359000.PMID 25768323.
  11. ^Simonsohn U, Nelson LD, Simmons JP (November 2014). "p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results".Perspectives on Psychological Science.9 (6):666–681.doi:10.1177/1745691614553988.PMID 26186117.S2CID 39975518.
  12. ^Bhattacharya B, Habtzghi D (2002). "Median of the p value under the alternative hypothesis".The American Statistician.56 (3):202–6.doi:10.1198/000313002146.S2CID 33812107.
  13. ^Hung HM, O'Neill RT, Bauer P, Köhne K (March 1997)."The behavior of the P-value when the alternative hypothesis is true".Biometrics (Submitted manuscript).53 (1):11–22.doi:10.2307/2533093.JSTOR 2533093.PMID 9147587.
  14. ^Nuzzo R (February 2014)."Scientific method: statistical errors".Nature.506 (7487):150–152.Bibcode:2014Natur.506..150N.doi:10.1038/506150a.hdl:11573/685222.PMID 24522584.
  15. ^Colquhoun D (November 2014)."An investigation of the false discovery rate and the misinterpretation of p-values".Royal Society Open Science.1 (3) 140216.arXiv:1407.5296.Bibcode:2014RSOS....140216C.doi:10.1098/rsos.140216.PMC 4448847.PMID 26064558.
  16. ^Lee DK (December 2016)."Alternatives to P value: confidence interval and effect size".Korean Journal of Anesthesiology.69 (6):555–562.doi:10.4097/kjae.2016.69.6.555.PMC 5133225.PMID 27924194.
  17. ^Ranstam J (August 2012)."Why the P-value culture is bad and confidence intervals a better alternative".Osteoarthritis and Cartilage.20 (8):805–808.doi:10.1016/j.joca.2012.04.001.PMID 22503814.
  18. ^Perneger TV (May 2001)."Sifting the evidence. Likelihood ratios are alternatives to P values".BMJ.322 (7295):1184–1185.doi:10.1136/bmj.322.7295.1184.PMC 1120301.PMID 11379590.
  19. ^Royall R (2004). "The Likelihood Paradigm for Statistical Evidence".The Nature of Scientific Evidence. pp. 119–152.doi:10.7208/chicago/9780226789583.003.0005.ISBN 978-0-226-78957-6.
  20. ^Schimmack U (30 April 2015)."Replacing p-values with Bayes-Factors: A Miracle Cure for the Replicability Crisis in Psychological Science".Replicability-Index. Retrieved7 March 2017.
  21. ^Marden JI (December 2000). "Hypothesis Testing: From p Values to Bayes Factors".Journal of the American Statistical Association.95 (452):1316–1320.doi:10.2307/2669779.JSTOR 2669779.
  22. ^Stern HS (16 February 2016)."A Test by Any Other Name: P Values, Bayes Factors, and Statistical Inference".Multivariate Behavioral Research.51 (1):23–29.doi:10.1080/00273171.2015.1099032.PMC 4809350.PMID 26881954.
  23. ^Murtaugh PA (March 2014)."In defense of P values".Ecology.95 (3):611–617.Bibcode:2014Ecol...95..611M.doi:10.1890/13-0590.1.PMID 24804441.
  24. ^Aschwanden C (7 March 2016)."Statisticians Found One Thing They Can Agree On: It's Time To Stop Misusing P-Values".FiveThirtyEight.
  25. ^Amrhein V, Korner-Nievergelt F, Roth T (2017)."The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research".PeerJ.5 e3544.doi:10.7717/peerj.3544.PMC 5502092.PMID 28698825.
  26. ^Amrhein V, Greenland S (January 2018). "Remove, rather than redefine, statistical significance".Nature Human Behaviour.2 (1): 4.doi:10.1038/s41562-017-0224-0.PMID 30980046.S2CID 46814177.
  27. ^Colquhoun D (December 2017)."The reproducibility of research and the misinterpretation ofp-values".Royal Society Open Science.4 (12) 171085.Bibcode:2017RSOS....471085C.doi:10.1098/rsos.171085.PMC 5750014.PMID 29308247.
  28. ^Lakens D, Adolfi FG, Albers CJ, et al. (2018). "Justify your alpha". Nature Human Behaviour. 2: 168–171. doi:10.1038/s41562-018-0311-x.
  29. ^Goodman, Steven (2008-07-01)."A Dirty Dozen: Twelve P-Value Misconceptions".Seminars in Hematology. Interpretation of Quantitative Research.45 (3):135–140.doi:10.1053/j.seminhematol.2008.04.003.ISSN 0037-1963.PMID 18582619.
  30. ^Wagenmakers, Eric-Jan (October 2007)."A practical solution to the pervasive problems of p values".Psychonomic Bulletin & Review.14 (5):779–804.doi:10.3758/BF03194105.ISSN 1069-9384.PMID 18087943.
  31. ^Brian E, Jaisson M (2007). "Physico-Theology and Mathematics (1710–1794)".The Descent of Human Sex Ratio at Birth. Springer Science & Business Media. pp. 1–25.ISBN 978-1-4020-6036-6.
  32. ^Arbuthnot J (1710)."An argument for Divine Providence, taken from the constant regularity observed in the births of both sexes"(PDF).Philosophical Transactions of the Royal Society of London.27 (325–336):186–190.doi:10.1098/rstl.1710.0011.S2CID 186209819.
  33. ^abConover WJ (1999). "Chapter 3.4: The Sign Test".Practical Nonparametric Statistics (Third ed.). Wiley. pp. 157–176.ISBN 978-0-471-16068-7.
  34. ^Sprent P (1989).Applied Nonparametric Statistical Methods (Second ed.). Chapman & Hall.ISBN 978-0-412-44980-2.
  35. ^Stigler SM (1986).The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press. pp. 225–226.ISBN 978-0-674-40341-3.
  36. ^Bellhouse P (2001). "John Arbuthnot". InHeyde CC,Seneta E (eds.).Statisticians of the Centuries. Springer. pp. 39–42.ISBN 978-0-387-95329-8.
  37. ^Hald A (1998). "Chapter 4. Chance or Design: Tests of Significance".A History of Mathematical Statistics from 1750 to 1930. Wiley. p. 65.
  38. ^Stigler SM (1986).The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press. p. 134.ISBN 978-0-674-40341-3.
  39. ^abPearson K (1900)."On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling"(PDF).Philosophical Magazine. Series 5.50 (302):157–175.doi:10.1080/14786440009463897.
  40. ^Biau, David Jean; Jolles, Brigitte M.; Porcher, Raphaël (2010)."P Value and the Theory of Hypothesis Testing: An Explanation for New Researchers".Clinical Orthopaedics and Related Research.468 (3):885–892.doi:10.1007/s11999-009-1164-4.ISSN 0009-921X.PMC 2816758.PMID 19921345.
  41. ^Brereton, Richard G. (2021)."P values and multivariate distributions: Non-orthogonal terms in regression models".Chemometrics and Intelligent Laboratory Systems.210 104264.doi:10.1016/j.chemolab.2021.104264.
  42. ^Hubbard R, Bayarri MJ (2003), "Confusion Over Measures of Evidence (p′s) Versus Errors (α′s) in Classical Statistical Testing",The American Statistician,57 (3): 171–178 [p. 171],doi:10.1198/0003130031856,S2CID 55671953
  43. ^Fisher 1925, p. 47, Chapter III. Distributions.
  44. ^abDallal 2012, Note 31:Why P=0.05?.
  45. ^Fisher 1925, pp. 78–79, 98, Chapter IV. Tests of Goodness of Fit, Independence and Homogeneity; with Table of χ2, Table III. Table of χ2.
  46. ^Fisher 1971, II. The Principles of Experimentation, Illustrated by a Psycho-physical Experiment.
  47. ^abFisher 1971, Section 7. The Test of Significance.
  48. ^Fisher 1971, Section 12.1 Scientific Inference and Acceptance Procedures.
  49. ^"Definition of E-value".National Institutes of Health.
  50. ^Storey JD (2003)."The positive false discovery rate: a Bayesian interpretation and the q-value".The Annals of Statistics.31 (6):2013–2035.doi:10.1214/aos/1074290335.
  51. ^Storey JD, Tibshirani R (August 2003)."Statistical significance for genomewide studies".Proceedings of the National Academy of Sciences of the United States of America.100 (16):9440–9445.Bibcode:2003PNAS..100.9440S.doi:10.1073/pnas.1530509100.PMC 170937.PMID 12883005.
  52. ^Makowski D, Ben-Shachar MS, Chen SH, Lüdecke D (10 December 2019)."Indices of Effect Existence and Significance in the Bayesian Framework".Frontiers in Psychology.10 2767.doi:10.3389/fpsyg.2019.02767.PMC 6914840.PMID 31920819.
  53. ^Blume JD, Greevy RA, Welty VF, Smith JR, Dupont WD. "An Introduction to Second-Generation p-Values".The American Statistician.https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1537893
  54. ^"Die Testentscheidung" [The Test Decision]. Bio Data Science.
  55. ^Rafi, Zad; Greenland, Sander (2020-09-30)."Semantic and cognitive tools to aid statistical science: replace confidence and significance by compatibility and surprise".BMC Medical Research Methodology.20 (1): 244.arXiv:1909.08579.doi:10.1186/s12874-020-01105-9.ISSN 1471-2288.
  56. ^Rovetta, Alessandro (2024-01-12)."S-values and Surprisal intervals to Replace P-values and Confidence Intervals: Accepted - January 2024".REVSTAT-Statistical Journal.ISSN 2183-0371. Archived fromthe original on 2024-12-04.
