Nonparametric statistics is a type of statistical analysis that makes minimal assumptions about the underlying distribution of the data being studied. Often these models are infinite-dimensional, rather than finite-dimensional, as in parametric statistics.[1] Nonparametric statistics can be used for descriptive statistics or statistical inference. Nonparametric tests are often used when the assumptions of parametric tests are evidently violated.[2]
The term "nonparametric statistics" has been defined imprecisely in the following two ways, among others:
The first meaning of nonparametric involves techniques that do not rely on data belonging to any particular parametric family of probability distributions. These include, among others, order statistics, which are based on the ordinal ranking of observations.
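As a concrete illustration of working with ordinal ranks, the sketch below (assuming Python with NumPy and SciPy; the scores are made-up example data) computes the order statistics and ranks of a small sample, which is all that rank-based procedures ever use.

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical sample of ordinal scores (e.g. 1-5 "star" ratings).
scores = np.array([3, 5, 2, 5, 4, 1, 3])

# Order statistics: the sample sorted into ascending order.
order_statistics = np.sort(scores)

# Rank of each observation; tied values receive their average rank.
ranks = rankdata(scores)

print("order statistics:", order_statistics)  # [1 2 3 3 4 5 5]
print("ranks:", ranks)                         # [3.5 6.5 2.  6.5 5.  1.  3.5]
```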
The following discussion is taken from Kendall's Advanced Theory of Statistics.[3]
Statistical hypotheses concern the behavior of observable random variables.... For example, the hypothesis (a) that a normal distribution has a specified mean and variance is statistical; so is the hypothesis (b) that it has a given mean but unspecified variance; so is the hypothesis (c) that a distribution is of normal form with both mean and variance unspecified; finally, so is the hypothesis (d) that two unspecified continuous distributions are identical.
It will have been noticed that in the examples (a) and (b) the distribution underlying the observations was taken to be of a certain form (the normal) and the hypothesis was concerned entirely with the value of one or both of its parameters. Such a hypothesis, for obvious reasons, is called parametric.
Hypothesis (c) was of a different nature, as no parameter values are specified in the statement of the hypothesis; we might reasonably call such a hypothesis non-parametric. Hypothesis (d) is also non-parametric but, in addition, it does not even specify the underlying form of the distribution and may now be reasonably termed distribution-free. Notwithstanding these distinctions, the statistical literature now commonly applies the label "non-parametric" to test procedures that we have just termed "distribution-free", thereby losing a useful classification.
The second meaning of non-parametric involves techniques that do not assume that the structure of a model is fixed. Typically, the model grows in size to accommodate the complexity of the data. In these techniques, individual variables are typically assumed to belong to parametric distributions, and assumptions about the types of associations among variables are also made. These techniques include, among others, kernel density estimation and non-parametric regression.
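Kernel density estimation illustrates this second sense well: the estimate is built directly from the observations, so its effective complexity grows with the sample rather than being fixed in advance. A minimal sketch, assuming Python with NumPy and SciPy and simulated data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Simulated sample from an unknown, bimodal distribution.
sample = np.concatenate([rng.normal(-2.0, 1.0, 300),
                         rng.normal(3.0, 0.5, 200)])

# The kernel density estimate retains every observation; no fixed
# parametric form (e.g. "normal with mean mu and variance sigma^2")
# is imposed on the data.
kde = gaussian_kde(sample)

grid = np.linspace(-6.0, 6.0, 7)
print(np.round(kde(grid), 4))  # estimated density at a few points
```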
Non-parametric methods are widely used for studying populations that have a ranked order (such as movie reviews receiving one to five "stars"). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods result in ordinal data.
As non-parametric methods make fewer assumptions, their applicability is much broader than that of the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, because they rely on fewer assumptions, non-parametric methods are more robust.
Non-parametric methods are sometimes considered simpler to use and more robust than parametric methods, even when the assumptions of parametric methods are justified. This is due to their more general nature, which may make them less susceptible to misuse and misunderstanding. Non-parametric methods can be considered a conservative choice, as they will work even when their assumptions are not met, whereas parametric methods can produce misleading results when their assumptions are violated.
The wider applicability and increased robustness of non-parametric tests come at a cost: in cases where a parametric test's assumptions are met, non-parametric tests have less statistical power. In other words, a larger sample size may be required to draw conclusions with the same degree of confidence.
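This trade-off can be seen in a small simulation. The sketch below (assuming Python with NumPy and SciPy; the sample size, effect size, significance level, and number of repetitions are arbitrary choices) compares rejection rates of a two-sample t-test and the Mann-Whitney U test when the parametric assumption of normality actually holds; the non-parametric test typically rejects the false null hypothesis slightly less often, i.e. has somewhat lower power.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(1)
n, shift, alpha, reps = 20, 0.8, 0.05, 2000
reject_t = reject_u = 0

for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)      # group 1: N(0, 1)
    y = rng.normal(shift, 1.0, n)    # group 2: N(0.8, 1), so the null is false
    reject_t += ttest_ind(x, y).pvalue < alpha
    reject_u += mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha

# Under normality the parametric t-test is expected to show slightly higher power.
print("t-test power estimate:      ", reject_t / reps)
print("Mann-Whitney power estimate:", reject_u / reps)
```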
Non-parametric models differ from parametric models in that the model structure is not specified a priori but is instead determined from data. The term non-parametric is not meant to imply that such models completely lack parameters but that the number and nature of the parameters are flexible and not fixed in advance.
Non-parametric (or distribution-free) inferential statistical methods are mathematical procedures for statistical hypothesis testing which, unlike parametric statistics, make no assumptions about the probability distributions of the variables being assessed. The most frequently used tests include the sign test, the Wilcoxon signed-rank test, the Mann-Whitney U test, and the Kruskal-Wallis test, among others.
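As an illustration of how such tests are typically applied in practice, the sketch below (assuming Python with NumPy and SciPy; the data are simulated purely for demonstration) runs three widely used non-parametric tests: the Wilcoxon signed-rank test for paired samples, the Mann-Whitney U test for two independent samples, and the Kruskal-Wallis H test for three or more groups.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, kruskal

rng = np.random.default_rng(2)

# Simulated data for illustration only.
before = rng.normal(50.0, 10.0, 15)
after = before + rng.normal(2.0, 5.0, 15)                   # paired measurements
group_a, group_b, group_c = rng.exponential(1.0, (3, 30))   # skewed groups

# Wilcoxon signed-rank test: paired samples, no normality assumption.
print(wilcoxon(before, after))

# Mann-Whitney U test: two independent samples.
print(mannwhitneyu(group_a, group_b, alternative="two-sided"))

# Kruskal-Wallis H test: three or more independent samples.
print(kruskal(group_a, group_b, group_c))
```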
Early nonparametric statistics include the median (13th century or earlier, use in estimation by Edward Wright, 1599; see Median § History) and the sign test by John Arbuthnot (1710) in analyzing the human sex ratio at birth (see Sign test § History).[5][6]