In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm rate[1]) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification).
The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio.
The false positive rate (false alarm rate) is[1]

$\mathrm{FPR} = \frac{FP}{FP + TN} = \frac{FP}{N}$

where $FP$ is the number of false positives, $TN$ is the number of true negatives and $N = FP + TN$ is the total number of ground truth negatives.
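The ratio above can be computed directly from a classifier's confusion counts. The following sketch uses made-up counts (not taken from the article) purely to illustrate the formula:

```python
# Hypothetical counts from a classifier evaluated on 1000 actual-negative
# cases (illustrative numbers only, not from any referenced source).
FP = 30          # negatives wrongly flagged as positive
TN = 970         # negatives correctly left unflagged
N = FP + TN      # total ground-truth negatives

FPR = FP / N     # false positive rate = FP / (FP + TN)
print(FPR)       # 0.03
```

Note that the denominator counts all actual negatives, regardless of how the classifier labeled them.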
The significance level used to test each hypothesis is set based on the form of inference (simultaneous inference vs. selective inference) and its supporting criteria (for example FWER or FDR), which were pre-determined by the researcher.
When performing multiple comparisons in a statistical framework such as above, the false positive ratio (also known as the false alarm rate, as opposed to the false alarm ratio, FAR) usually refers to the probability of falsely rejecting the null hypothesis for a particular test. Using the terminology suggested here, it is simply $V/m_0$.
Since $V$ is a random variable and $m_0$ is a constant ($V \le m_0$), the false positive ratio is also a random variable, ranging between 0 and 1.
The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio, expressed by $E(V/m_0)$.
It is worth noticing that the two definitions ("false positive ratio" / "false positive rate") are somewhat interchangeable. For example, in the referenced article[2] $V/m_0$ serves as the false positive "rate" rather than as its "ratio".
The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number $m$ of null hypotheses, denoted by: $H_1, H_2, \ldots, H_m$. Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant. Summing each type of outcome over all $H_i$ yields the following random variables:
| | Null hypothesis is true (H0) | Alternative hypothesis is true (HA) | Total |
|---|---|---|---|
| Test is declared significant | $V$ | $S$ | $R$ |
| Test is declared non-significant | $U$ | $T$ | $m - R$ |
| Total | $m_0$ | $m - m_0$ | $m$ |
In $m$ hypothesis tests of which $m_0$ are true null hypotheses, $R$ is an observable random variable, and $S$, $T$, $U$, and $V$ are unobservable random variables.
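The table's bookkeeping can be made concrete with a small simulation. The setup below is a hypothetical sketch (the choice of $m = 100$, $m_0 = 80$, and the p-value distributions are illustrative assumptions, not from the article): p-values are uniform under true nulls and pushed toward zero under false nulls, and each outcome count is tallied in the table's notation.

```python
import random

random.seed(0)

alpha = 0.05      # per-test significance level
m, m0 = 100, 80   # assumed: 80 of the 100 null hypotheses are true

# p-values: uniform on [0, 1] under true nulls; skewed small under false nulls
p_null = [random.random() for _ in range(m0)]
p_alt = [random.random() ** 4 for _ in range(m - m0)]

V = sum(p < alpha for p in p_null)   # true nulls rejected (false positives)
S = sum(p < alpha for p in p_alt)    # false nulls rejected (true positives)
U = m0 - V                           # true nulls not rejected
T = (m - m0) - S                     # false nulls not rejected
R = V + S                            # total rejections (the observable count)

print(V, S, U, T, R)
```

In practice only $R$ is observed; $V$, $S$, $U$, and $T$ are visible here only because the simulation knows which nulls are true.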
While the false positive rate is mathematically equal to the type I error rate, it is viewed as a separate term for the following reasons:[citation needed]
The false positive rate should also not be confused with the family-wise error rate, which is defined as $\mathrm{FWER} = \Pr(V \ge 1)$. As the number of tests grows, the familywise error rate usually converges to 1 while the false positive rate remains fixed.
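The divergence between the two quantities can be seen numerically. Under the simplifying assumption of independent tests of true nulls, each at level $\alpha$, the FWER is $1 - (1 - \alpha)^m$, while the per-test false positive rate stays at $\alpha$:

```python
alpha = 0.05  # per-test significance level

# FWER = P(V >= 1); for m independent tests of true nulls this is 1 - (1 - alpha)^m
for m in (1, 10, 100, 1000):
    fwer = 1 - (1 - alpha) ** m
    print(f"m={m:4d}  FWER={fwer:.4f}  per-test FPR={alpha}")
```

Already at $m = 100$ the FWER exceeds 0.99, illustrating the convergence to 1 noted above.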
Lastly, it is important to note the profound difference between the false positive rate and the false discovery rate: while the first is defined as $E(V/m_0)$, the second is defined as $E(V/R)$.
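The difference in denominators is the whole story: the false positive rate divides by the number of true nulls, while the false discovery rate divides by the number of rejections. With some hypothetical outcome counts in the table's notation (illustrative values, not from the article), the two proportions come out quite differently:

```python
# Hypothetical outcome counts (illustrative only)
V = 4      # false positives among the rejections
S = 16     # true positives among the rejections
m0 = 80    # number of true null hypotheses
R = V + S  # total rejections

false_positive_proportion = V / m0  # expectation over repetitions gives the FPR
false_discovery_proportion = V / R  # expectation over repetitions gives the FDR
print(false_positive_proportion, false_discovery_proportion)  # 0.05 0.2
```

Here only 5% of the true nulls were falsely rejected, yet 20% of the reported discoveries are false, which is why controlling one quantity does not control the other.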