The Mann–Whitney test (also called the Mann–Whitney–Wilcoxon (MWW/MWU), Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test) is a nonparametric statistical test of the null hypothesis that randomly selected values X and Y from two populations have the same distribution.
All the observations from both groups are independent of each other,
The responses are at least ordinal (i.e., one can at least say, of any two observations, which is the greater),
Under the null hypothesis H0, the distributions of both populations are identical.[3]
The alternative hypothesis H1 is that the distributions are not identical.
Under the general formulation, the test is only consistent when the following occurs under H1:
The probability of an observation from population X exceeding an observation from population Y is different (larger, or smaller) than the probability of an observation from Y exceeding an observation from X; i.e., P(X > Y) ≠ P(Y > X), or P(X > Y) + 0.5 · P(X = Y) ≠ 0.5.
Under more strict assumptions than the general formulation above, e.g., if the responses are assumed to be continuous and the alternative is restricted to a shift in location, i.e., F1(x) = F2(x + δ), we can interpret a significant Mann–Whitney U test as showing a difference in medians. Under this location shift assumption, we can also interpret the Mann–Whitney U test as assessing whether the Hodges–Lehmann estimate of the difference in central tendency between the two populations differs from zero. The Hodges–Lehmann estimate for this two-sample problem is the median of all possible differences between an observation in the first sample and an observation in the second sample.
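The Hodges–Lehmann estimate described above is straightforward to compute directly. A minimal Python sketch (the sample values below are invented for illustration):

```python
import numpy as np

def hodges_lehmann(x, y):
    """Two-sample Hodges-Lehmann estimate: the median of all pairwise differences x_i - y_j."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    diffs = x[:, None] - y[None, :]   # n1 x n2 matrix of every possible difference
    return np.median(diffs)

# Invented data: estimated shift of group 1 relative to group 2
print(hodges_lehmann([1.8, 2.3, 2.9, 3.1, 3.6], [1.1, 1.9, 2.2, 2.4, 3.0]))
```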
Otherwise, if both the dispersions and the shapes of the distributions of the two samples differ, the Mann–Whitney U test fails as a test of medians: it is possible to construct examples in which the medians are numerically equal while the test rejects the null hypothesis with a small p-value.[4][5][6]
The Mann–Whitney U test / Wilcoxon rank-sum test is not the same as the Wilcoxon signed-rank test, although both are nonparametric and involve summation of ranks. The Mann–Whitney U test is applied to independent samples. The Wilcoxon signed-rank test is applied to matched or dependent samples.
Let X_1, …, X_{n_1} be group 1, an i.i.d. sample from X, and Y_1, …, Y_{n_2} be group 2, an i.i.d. sample from Y, and let both samples be independent of each other. The corresponding Mann–Whitney U statistic is defined as the smaller of

$$U_1 = R_1 - \frac{n_1(n_1+1)}{2} \qquad \text{and} \qquad U_2 = R_2 - \frac{n_2(n_2+1)}{2},$$

with R_1 and R_2 being the sums of the ranks in groups 1 and 2, after ranking all samples from both groups such that the smallest value obtains rank 1 and the largest obtains rank n_1 + n_2.[7]
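As a minimal sketch of this definition (the sample values are invented; ties are handled with midranks via SciPy's rankdata):

```python
import numpy as np
from scipy.stats import rankdata

def mann_whitney_u(sample1, sample2):
    """Return (U1, U2) computed from the rank sums R1 and R2."""
    n1, n2 = len(sample1), len(sample2)
    ranks = rankdata(np.concatenate([sample1, sample2]))  # smallest value gets rank 1
    r1, r2 = ranks[:n1].sum(), ranks[n1:].sum()
    u1 = r1 - n1 * (n1 + 1) / 2
    u2 = r2 - n2 * (n2 + 1) / 2
    return u1, u2                                         # note U1 + U2 = n1 * n2

u1, u2 = mann_whitney_u([19, 22, 16, 29, 24], [20, 11, 17, 12])
print(u1, u2, min(u1, u2))   # 17.0 3.0 3.0; the reported U is the smaller of the two
```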
The U statistic is related to the area under the receiver operating characteristic curve (AUC): AUC_1 = U_1/(n_1 n_2). Note that this is the same definition as the common language effect size, i.e. the probability that a classifier will rank a randomly chosen instance from the first group higher than a randomly chosen instance from the second group.[9]
Because of its probabilistic form, the U statistic can be generalized to a measure of a classifier's separation power for more than two classes:[10]

$$M = \frac{1}{c(c-1)} \sum_{k \neq \ell} \mathrm{AUC}_{k,\ell},$$

where c is the number of classes, and the R_{k,ℓ} term of AUC_{k,ℓ} considers only the ranking of the items belonging to classes k and ℓ (i.e., items belonging to all other classes are ignored) according to the classifier's estimates of the probability of those items belonging to class k. AUC_{k,k} will always be zero but, unlike in the two-class case, generally AUC_{k,ℓ} ≠ AUC_{ℓ,k}, which is why the M measure sums over all (k, ℓ) pairs, in effect using the average of AUC_{k,ℓ} and AUC_{ℓ,k}.
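A sketch of this multiclass generalization, under the assumption that AUC_{k,ℓ} is obtained by ranking only the items of classes k and ℓ by the classifier's estimated probability of class k and applying the two-class AUC = U/(n_k n_ℓ) identity; the label vector and probability matrix below are invented:

```python
import numpy as np
from scipy.stats import rankdata

def multiclass_m(y_true, proba):
    """Average AUC_{k,l} over all ordered pairs of distinct classes (a Hand-and-Till-style measure).

    y_true: integer class labels in 0..c-1, shape (n,)
    proba:  estimated class-membership probabilities, shape (n, c)
    """
    classes = np.unique(y_true)
    c = len(classes)
    total = 0.0
    for k in classes:
        for l in classes:
            if k == l:
                continue                                  # AUC_{k,k} contributes nothing
            mask = (y_true == k) | (y_true == l)          # ignore items from other classes
            ranks = rankdata(proba[mask, k])              # rank by estimated P(class k)
            labels = y_true[mask]
            n_k, n_l = (labels == k).sum(), (labels == l).sum()
            r_k = ranks[labels == k].sum()
            total += (r_k - n_k * (n_k + 1) / 2) / (n_k * n_l)   # AUC_{k,l}
    return total / (c * (c - 1))

# Invented example: 3 classes, 6 items
y = np.array([0, 0, 1, 1, 2, 2])
p = np.array([[0.7, 0.2, 0.1], [0.5, 0.3, 0.2], [0.2, 0.6, 0.2],
              [0.3, 0.4, 0.3], [0.1, 0.2, 0.7], [0.2, 0.3, 0.5]])
print(multiclass_m(y, p))
```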
It is also easily calculated by hand, especially for small samples. There are two ways of doing this.
Method one:
For comparing two small sets of observations, a direct method is quick, and gives insight into the meaning of the U statistic, which corresponds to the number of wins out of all pairwise contests (see the tortoise and hare example under Examples below). For each observation in one set, count the number of times this first value wins over any observations in the other set (the other value loses if this first one is larger). Count 0.5 for any ties. The sum of wins and ties is U_1 for the first set; U for the other set is the converse, U_2 = n_1 n_2 − U_1.
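A sketch of this direct counting method (invented sample values; the result matches the rank-sum computation shown earlier):

```python
def u_by_counting(sample1, sample2):
    """Direct method: count wins (1) and ties (0.5) of sample1 values over sample2 values."""
    u1 = 0.0
    for x in sample1:
        for y in sample2:
            if x > y:
                u1 += 1.0        # a win for the first set
            elif x == y:
                u1 += 0.5        # a tie counts half
    u2 = len(sample1) * len(sample2) - u1   # the converse count for the other set
    return u1, u2

print(u_by_counting([19, 22, 16, 29, 24], [20, 11, 17, 12]))   # (17.0, 3.0)
```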
Method two:
For larger samples:
Assign numeric ranks to all the observations (pooling the observations from both groups into one set), beginning with 1 for the smallest value. Where there are groups of tied values, assign a rank equal to the midpoint of the unadjusted rankings (e.g., the ranks of (3, 5, 5, 5, 5, 8) are (1, 3.5, 3.5, 3.5, 3.5, 6), where the unadjusted ranks would be (1, 2, 3, 4, 5, 6)).
Now, add up the ranks for the observations which came from sample 1. The sum of ranks in sample 2 is then determined, since the sum of all the ranks equals N(N + 1)/2, where N is the total number of observations.
Suppose that Aesop is dissatisfied with his classic experiment in which one tortoise was found to beat one hare in a race, and decides to carry out a significance test to discover whether the results could be extended to tortoises and hares in general. He collects a sample of 6 tortoises and 6 hares, and makes them all run his race at once. The order in which they reach the finishing post (their rank order, from first to last crossing the finish line) is as follows, writing T for a tortoise and H for a hare:
T H H H H H T T T T T H
What is the value of U?
Using the direct method, we take each tortoise in turn, and count the number of hares it beats, getting 6, 1, 1, 1, 1, 1, which means that UT = 11. Alternatively, we could take each hare in turn, and count the number of tortoises it beats. In this case, we get 5, 5, 5, 5, 5, 0, so UH = 25. Note that the sum of these two values of U is 36, which is 6 × 6.
Using the indirect method:
Rank the animals by the time they take to complete the course, so give the first animal home rank 12, the second rank 11, and so forth.
The sum of the ranks achieved by the tortoises is 12 + 6 + 5 + 4 + 3 + 2 = 32, so U = 32 − 6(6 + 1)/2 = 32 − 21 = 11, the same value as given by the direct method.
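Both methods for this example can be checked in a few lines of Python (the finishing order is the one given above):

```python
import numpy as np

order = np.array(list("THHHHHTTTTTH"))    # finishing order: T H H H H H T T T T T H
finish = np.arange(1, 13)                 # finishing position, 1 = first home
tortoise = finish[order == "T"]           # positions 1, 7, 8, 9, 10, 11
hare = finish[order == "H"]               # positions 2, 3, 4, 5, 6, 12

# Direct method: a tortoise beats a hare if it finishes earlier (smaller position)
u_t = sum(t < h for t in tortoise for h in hare)
u_h = sum(h < t for t in tortoise for h in hare)
print(u_t, u_h, u_t + u_h)                # 11 25 36 (= 6 * 6)

# Indirect method: the first animal home gets rank 12, the last gets rank 1
ranks = 13 - finish
r_t = ranks[order == "T"].sum()           # 12 + 6 + 5 + 4 + 3 + 2 = 32
print(r_t - 6 * 7 // 2)                   # 32 - 21 = 11, agreeing with the direct method
```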
In reporting the results of a Mann–Whitney U test, it is important to state:[12]
A measure of the central tendencies of the two groups (means or medians; since the Mann–Whitney U test is an ordinal test, medians are usually recommended)
In practice some of this information may already have been supplied and common sense should be used in deciding whether to repeat it. A typical report might run,
"Median latencies in groups E and C were 153 and 247 ms; the distributions in the two groups differed significantly (Mann–WhitneyU = 10.5,n1 =n2 = 8,P < 0.05 two-tailed)."
A statement that does full justice to the statistical status of the test might run,
"Outcomes of the two treatments were compared using the Wilcoxon–Mann–Whitney two-sample rank-sum test. The treatment effect (difference between treatments) was quantified using the Hodges–Lehmann (HL) estimator, which is consistent with the Wilcoxon test.[13] This estimator (HLΔ) is the median of all possible differences in outcomes between a subject in group B and a subject in group A. A non-parametric 0.95 confidence interval for HLΔ accompanies these estimates as does ρ, an estimate of the probability that a randomly chosen subject from population B has a higher weight than a randomly chosen subject from population A. The median [quartiles] weight for subjects on treatment A and B respectively are 147 [121, 177] and 151 [130, 180] kg. Treatment A decreased weight by HLΔ = 5 kg (0.95 CL [2, 9] kg,2P = 0.02,ρ = 0.58)."
However it would be rare to find such an extensive report in a document whose major topic was not statistical inference.
For large samples, U is approximately normally distributed. In that case, the standardized value

$$z = \frac{U - m_U}{\sigma_U},$$

where m_U and σ_U are the mean and standard deviation of U, is approximately a standard normal deviate whose significance can be checked in tables of the normal distribution. m_U and σ_U are given by

$$m_U = \frac{n_1 n_2}{2} \qquad \text{and} \qquad \sigma_U = \sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}}.$$

The formula for the standard deviation is more complicated in the presence of tied ranks; in that case it becomes

$$\sigma_{\text{ties}} = \sqrt{\frac{n_1 n_2 (n + 1)}{12} - \frac{n_1 n_2 \sum_{k=1}^{K} (t_k^3 - t_k)}{12\, n (n - 1)}},$$

where the left side is simply the variance and the right side is the adjustment for ties, t_k is the number of ties for the kth rank, and K is the total number of unique ranks with ties. A more computationally efficient form, with n_1 n_2 / 12 factored out, is

$$\sigma_{\text{ties}} = \sqrt{\frac{n_1 n_2}{12}\left((n + 1) - \frac{\sum_{k=1}^{K} (t_k^3 - t_k)}{n (n - 1)}\right)},$$

where n = n_1 + n_2.
If the number of ties is small (and especially if there are no large tie bands), ties can be ignored when doing calculations by hand. Computer statistical packages will use the correctly adjusted formula as a matter of routine.
Note that since U1 + U2 = n1n2, the mean n1n2/2 used in the normal approximation is the mean of the two values of U. Therefore, the absolute value of the z-statistic calculated will be the same whichever value of U is used.
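The normal approximation with the tie adjustment above can be written out as a short Python sketch (the data below are invented and include a tie; no continuity correction is applied):

```python
import numpy as np
from scipy.stats import norm, rankdata

def u_normal_approx(sample1, sample2):
    """z-statistic and two-sided p-value for U from the normal approximation, with tie adjustment."""
    n1, n2 = len(sample1), len(sample2)
    n = n1 + n2
    pooled = np.concatenate([sample1, sample2])
    u1 = rankdata(pooled)[:n1].sum() - n1 * (n1 + 1) / 2
    m_u = n1 * n2 / 2
    _, t = np.unique(pooled, return_counts=True)       # t_k = size of each group of tied values
    tie_term = (t**3 - t).sum() / (n * (n - 1))
    sigma_u = np.sqrt(n1 * n2 / 12 * ((n + 1) - tie_term))
    z = (u1 - m_u) / sigma_u
    return z, 2 * norm.sf(abs(z))

print(u_normal_approx([1.1, 2.4, 2.4, 3.0, 4.5, 5.1], [2.4, 3.3, 3.9, 4.1, 4.6, 6.0]))
```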
One method of reporting the effect size for the Mann–Whitney U test is with f, the common language effect size.[18][19] As a sample statistic, the common language effect size is computed by forming all possible pairs between the two groups, then finding the proportion of pairs that support a direction (say, that items from group 1 are larger than items from group 2).[19] To illustrate, in a study with a sample of ten hares and ten tortoises, the total number of ordered pairs is ten times ten or 100 pairs of hares and tortoises. Suppose the results show that the hare ran faster than the tortoise in 90 of the 100 sample pairs; in that case, the sample common language effect size is 90%.[20]
The relationship between f and the Mann–Whitney U (specifically U_1) is as follows:

$$f = \frac{U_1}{n_1 n_2}.$$
A statistic called ρ that is linearly related to U and widely used in studies of categorization (discrimination learning involving concepts), and elsewhere,[21] is calculated by dividing U by its maximum value for the given sample sizes, which is simply n1 × n2. ρ is thus a non-parametric measure of the overlap between two distributions; it can take values between 0 and 1, and it estimates P(Y > X) + 0.5 P(Y = X), where X and Y are randomly chosen observations from the two distributions. Both extreme values represent complete separation of the distributions, while a ρ of 0.5 represents complete overlap. The usefulness of the ρ statistic can be seen in the case of the odd example used above, where two distributions that were significantly different on a Mann–Whitney U test nonetheless had nearly identical medians: the ρ value in this case is approximately 0.723 in favour of the hares, correctly reflecting the fact that even though the median tortoise beat the median hare, the hares collectively did better than the tortoises collectively.[citation needed]
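Since f (and likewise ρ and the AUC) is simply U divided by n1n2, it is a one-line computation once U is available. A minimal sketch, reusing the invented samples from earlier:

```python
import numpy as np
from scipy.stats import rankdata

def common_language_effect_size(sample1, sample2):
    """f = U1 / (n1*n2): proportion of pairs (ties counted as half) favouring sample 1."""
    n1, n2 = len(sample1), len(sample2)
    u1 = rankdata(np.concatenate([sample1, sample2]))[:n1].sum() - n1 * (n1 + 1) / 2
    return u1 / (n1 * n2)

print(common_language_effect_size([19, 22, 16, 29, 24], [20, 11, 17, 12]))   # 17 / 20 = 0.85
```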
A method of reporting the effect size for the Mann–Whitney U test is with a measure of rank correlation known as the rank-biserial correlation. Edward Cureton introduced and named the measure.[22] Like other correlational measures, the rank-biserial correlation can range from minus one to plus one, with a value of zero indicating no relationship.
There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e., the proportion that is unfavorable, u). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:[18]

$$r = f - u.$$
Consider, for example, the case in which hares run faster than tortoises in 90 of 100 pairs. The common language effect size is 90%, so the rank-biserial correlation is 90% minus 10%, and the rank-biserial r = 0.80.
An alternative formula for the rank-biserial correlation can be used to calculate it from the Mann–Whitney U (either U_1 or U_2) and the sample sizes of each group:[23]

$$r = 1 - \frac{2U}{n_1 n_2}.$$
This formula is useful when the data are not available but a published report is, because U and the sample sizes are routinely reported. Using the example above with 90 pairs that favor the hares and 10 pairs that favor the tortoises, U2 is the smaller of the two, so U2 = 10. This formula then gives r = 1 − (2 × 10) / (10 × 10) = 0.80, which is the same result as with the simple difference formula above.
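Both routes to the rank-biserial correlation can be checked against the 90-out-of-100 example (a small sketch; the numbers are those given above):

```python
def rank_biserial_from_proportions(f, u):
    """Simple difference formula: r = f - u."""
    return f - u

def rank_biserial_from_u(u, n1, n2):
    """Formula from U and the sample sizes: r = 1 - 2U / (n1*n2)."""
    return 1 - 2 * u / (n1 * n2)

print(rank_biserial_from_proportions(0.90, 0.10))   # 0.8
print(rank_biserial_from_u(10, 10, 10))             # 1 - 20/100 = 0.8
```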
The Mann–Whitney U test tests a null hypothesis that the probability distribution of a randomly drawn observation from one group is the same as the probability distribution of a randomly drawn observation from the other group against an alternative that those distributions are not equal (see Assumptions and formal statement of hypotheses above). In contrast, a t-test tests a null hypothesis of equal means in two groups against an alternative of unequal means. Hence, except in special cases, the Mann–Whitney U test and the t-test do not test the same hypotheses and should be compared with this in mind.
Ordinal data
The Mann–Whitney U test is preferable to the t-test when the data are ordinal but not interval scaled, in which case the spacing between adjacent values of the scale cannot be assumed to be constant.
Robustness
As it compares the sums of ranks,[24] the Mann–Whitney U test is less likely than the t-test to spuriously indicate significance because of the presence of outliers. However, the Mann–Whitney U test may have worse type I error control when data are both heteroscedastic and non-normal.[25]
Efficiency
When normality holds, the Mann–Whitney U test has an (asymptotic) efficiency of 3/π, or about 0.95, when compared to the t-test.[26] For distributions sufficiently far from normal and for sufficiently large sample sizes, the Mann–Whitney U test is considerably more efficient than the t-test.[27] This comparison in efficiency, however, should be interpreted with caution, as the Mann–Whitney test and the t-test do not test the same quantities. If, for example, a difference of group means is of primary interest, the Mann–Whitney test is not an appropriate test.[28]
The Mann–Whitney U test will give very similar results to performing an ordinary parametric two-sample t-test on the rankings of the data.[29]
Relative efficiencies of the Mann–Whitney test versus the two-sample t-test, when f = g, have been tabulated for a number of distributions.[30]
The Mann–Whitney U test is not valid for testing the null hypothesis P(Y > X) + 0.5 · P(X = Y) = 0.5 against the alternative hypothesis P(Y > X) + 0.5 · P(X = Y) ≠ 0.5, without assuming that the distributions are the same under the null hypothesis (i.e., assuming F1 = F2).[2] To test between those hypotheses, better tests are available. Among those are the Brunner–Munzel and the Fligner–Policello test.[31] Specifically, under the more general null hypothesis, the Mann–Whitney U test can have inflated type I error rates even in large samples (especially if the variances of the two populations are unequal and the sample sizes are different), a problem the better alternatives solve.[32] As a result, it has been suggested to use one of the alternatives (specifically the Brunner–Munzel test) if it cannot be assumed that the distributions are equal under the null hypothesis.[32]
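SciPy provides both the Mann–Whitney and the Brunner–Munzel tests, so the comparison is easy to try; a sketch with invented samples of unequal size and spread:

```python
from scipy.stats import brunnermunzel, mannwhitneyu

a = [1.0, 2.1, 2.3, 2.8, 3.0, 3.1, 3.4, 3.7]   # invented data
b = [0.2, 0.9, 4.1, 4.5, 6.0]

print(mannwhitneyu(a, b, alternative="two-sided"))   # assumes equal distributions under H0
print(brunnermunzel(a, b))                           # does not require equal distributions under H0
```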
If one desires a simple shift interpretation, the Mann–Whitney U test should not be used when the distributions of the two samples are very different, as it can lead to an erroneous interpretation of significant results.[33] In that situation, the unequal-variances version of the t-test may give more reliable results.
Similarly, some authors (e.g., Conover, W. J. (1999). Practical Nonparametric Statistics (3rd ed.). New York: John Wiley & Sons. pp. 272–281. ISBN 0-471-16068-7) suggest transforming the data to ranks (if they are not already ranks) and then performing the t-test on the transformed data, the version of the t-test used depending on whether or not the population variances are suspected to be different. Rank transformations do not preserve variances, but variances are recomputed from the samples after rank transformation.
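A sketch of this rank-transform approach, using the Welch (unequal-variance) t-test on the pooled ranks (the data are the invented samples used earlier):

```python
import numpy as np
from scipy.stats import rankdata, ttest_ind

def rank_transform_t_test(sample1, sample2, equal_var=False):
    """t-test on the pooled ranks; equal_var=False gives the unequal-variance (Welch) version."""
    n1 = len(sample1)
    ranks = rankdata(np.concatenate([sample1, sample2]))
    return ttest_ind(ranks[:n1], ranks[n1:], equal_var=equal_var)

print(rank_transform_t_test([19, 22, 16, 29, 24], [20, 11, 17, 12]))
```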
The Mann–Whitney U test is related to a number of other non-parametric statistical procedures. For example, it is equivalent to Kendall's tau correlation coefficient if one of the variables is binary (that is, it can only take two values).[citation needed]
In many software packages, the Mann–Whitney U test (of the hypothesis of equal distributions against appropriate alternatives) has been poorly documented. Some packages incorrectly treat ties or fail to document asymptotic techniques (e.g., correction for continuity). A 2000 review discussed some of the following packages:[36]
The statistic appeared in a 1914 article[40] by the German Gustav Deuchler (with a missing term in the variance).
In a single paper in 1945, Frank Wilcoxon proposed[41] both the one-sample signed rank and the two-sample rank sum test, in a test of significance with a point null-hypothesis against its complementary alternative (that is, equal versus not equal). However, he only tabulated a few points for the equal-sample size case in that paper (though in a later paper he gave larger tables).
A thorough analysis of the statistic, which included a recurrence allowing the computation of tail probabilities for arbitrary sample sizes and tables for sample sizes of eight or less, appeared in the article by Henry Mann and his student Donald Ransom Whitney in 1947.[1] This article discussed alternative hypotheses, including a stochastic ordering (where the cumulative distribution functions satisfied the pointwise inequality FX(t) < FY(t)). This paper also computed the first four moments and established the limiting normality of the statistic under the null hypothesis, so establishing that it is asymptotically distribution-free.
^ See Table 2.1 of Pratt (1964). "Robustness of Some Procedures for the Two-Sample Location Problem". Journal of the American Statistical Association. 59 (307): 655–680. If the two distributions are normal with the same mean but different variances, then Pr[X > Y] = Pr[Y < X] but the size of the Mann–Whitney test can be larger than the nominal level. So we cannot define the null hypothesis as Pr[X > Y] = Pr[Y < X] and get a valid test.
^ Mason, S. J.; Graham, N. E. (2002). "Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: Statistical significance and interpretation". Quarterly Journal of the Royal Meteorological Society. 128 (584): 2145–2166. doi:10.1256/003590002320603584. ISSN 1477-870X.
^ Hollander, Myles; Wolfe, Douglas A. (1999). Nonparametric Statistical Methods (2nd ed.). Wiley-Interscience. ISBN 978-0-471-19045-5.
^ Siegel, Sidney (1956). Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill. p. 121.
^ Lehmann, Erich; D'Abrera, Howard (1975). Nonparametrics: Statistical Methods Based on Ranks. Holden-Day. p. 20.
^ Wilkinson, Leland (1999). "Statistical methods in psychology journals: Guidelines and explanations". American Psychologist. 54 (8): 594–604. doi:10.1037/0003-066X.54.8.594.
^ Nakagawa, Shinichi; Cuthill, Innes C. (2007). "Effect size, confidence interval and statistical significance: a practical guide for biologists". Biological Reviews of the Cambridge Philosophical Society. 82 (4): 591–605. doi:10.1111/j.1469-185X.2007.00027.x. PMID 17944619. S2CID 615371.
^ Herrnstein, Richard J.; Loveland, Donald H.; Cable, Cynthia (1976). "Natural Concepts in Pigeons". Journal of Experimental Psychology: Animal Behavior Processes. 2 (4): 285–302. doi:10.1037/0097-7403.2.4.285. PMID 978139.
^ Wendt, H. W. (1972). "Dealing with a common problem in social science: A simplified rank-biserial coefficient of correlation based on the U statistic". European Journal of Social Psychology. 2 (4): 463–465. doi:10.1002/ejsp.2420020412.
^ Motulsky, Harvey J. (2007). Statistics Guide. San Diego, CA: GraphPad Software. p. 123.
^ Zimmerman, Donald W. (1998). "Invalidation of Parametric and Nonparametric Statistical Tests by Concurrent Violation of Two Assumptions". The Journal of Experimental Education. 67 (1): 55–68. doi:10.1080/00220979809598344. ISSN 0022-0973.
^ Lehmann, Erich L. (1999). Elements of Large Sample Theory. Springer. p. 176.
^ Bergmann, Reinhard; Ludbrook, John; Spooren, Will P. J. M. (2000). "Different Outcomes of the Wilcoxon–Mann–Whitney Test from Different Statistics Packages". The American Statistician. 54 (1): 72–77. doi:10.1080/00031305.2000.10474513. JSTOR 2685616. S2CID 120473946.
^ "scipy.stats.mannwhitneyu". SciPy v0.16.0 Reference Guide. The SciPy community. 24 July 2015. Retrieved 11 September 2015. scipy.stats.mannwhitneyu(x, y, use_continuity=True): Computes the Mann–Whitney rank test on samples x and y.
^ Kruskal, William H. (September 1957). "Historical Notes on the Wilcoxon Unpaired Two-Sample Test". Journal of the American Statistical Association. 52 (279): 356–360. doi:10.2307/2280906. JSTOR 2280906.