Median

From Wikipedia, the free encyclopedia
Middle quantile of a data set or probability distribution
This article is about the statistical concept. For other uses, see Median (disambiguation).
Calculating the median in data sets of odd (above) and even (below) observations

The median of a set of numbers is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. For a data set, it may be thought of as the "middle" value. The basic feature of the median in describing data compared to the mean (often simply described as the "average") is that it is not skewed by a small proportion of extremely large or small values, and therefore provides a better representation of the center. Median income, for example, may be a better way to describe the center of the income distribution because increases in the largest incomes alone have no effect on the median. For this reason, the median is of central importance in robust statistics.

Finite set of numbers

The median of a finite list of numbers is the "middle" number, when those numbers are listed in order from smallest to greatest.

If the data set has an odd number of observations, the middle one is selected (after arranging in ascending order). For example, the following list of seven numbers,

1, 3, 3, 6, 7, 8, 9

has a median of 6, which is the fourth value.

If the data set has an even number of observations, there is no distinct middle value and the median is usually defined to be the arithmetic mean of the two middle values.[1][2] For example, this data set of 8 numbers

1, 2, 3, 4, 5, 6, 8, 9

has a median value of 4.5, that is, $(4+5)/2$. (In more technical terms, this interprets the median as the fully trimmed mid-range.)

In general, with this convention, the median can be defined as follows: For a data set $x$ of $n$ elements, ordered from smallest to greatest,

if $n$ is odd, $\operatorname{med}(x) = x_{(n+1)/2}$
if $n$ is even, $\operatorname{med}(x) = \dfrac{x_{(n/2)} + x_{(n/2)+1}}{2}$
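
This convention is straightforward to compute by sorting; a minimal sketch (the function name and example lists are ours, mirroring the two examples above):

```python
def median(values):
    """Middle element for odd n; mean of the two middle elements for even n."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

assert median([1, 3, 3, 6, 7, 8, 9]) == 6       # odd n: the fourth value
assert median([1, 2, 3, 4, 5, 6, 8, 9]) == 4.5  # even n: (4 + 5) / 2
```
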
Comparison of common averages of the values [1, 2, 2, 3, 4, 7, 9]

| Type | Description | Example | Result |
|---|---|---|---|
| Midrange | Midway point between the minimum and the maximum of a data set | (1 + 9) / 2 | 5 |
| Arithmetic mean | Sum of values of a data set divided by number of values: $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ | (1 + 2 + 2 + 3 + 4 + 7 + 9) / 7 | 4 |
| Median | Middle value separating the greater and lesser halves of a data set | 1, 2, 2, 3, 4, 7, 9 | 3 |
| Mode | Most frequent value in a data set | 1, 2, 2, 3, 4, 7, 9 | 2 |

Definition and notation

Formally, a median of a population is any value such that at least half of the population is less than or equal to the proposed median and at least half is greater than or equal to the proposed median. As seen above, medians may not be unique. If each set contains more than half the population, then some of the population is exactly equal to the unique median.

The median is well-defined for any ordered (one-dimensional) data and is independent of any distance metric. The median can thus be applied to school classes which are ranked but not numerical (e.g. working out a median grade when student test scores are graded from F to A), although the result might be halfway between classes if there is an even number of classes. (For an odd number of classes, one specific class is determined as the median.)

A geometric median, on the other hand, is defined in any number of dimensions. A related concept, in which the outcome is forced to correspond to a member of the sample, is the medoid.

There is no widely accepted standard notation for the median, but some authors represent the median of a variable $x$ as $\operatorname{med}(x)$,[3] as $\mu_{1/2}$,[1] or as $M$.[3][4] In any of these cases, the use of these or other symbols for the median needs to be explicitly defined when they are introduced.

The median is a special case of other ways of summarizing the typical values associated with a statistical distribution: it is the 2nd quartile, 5th decile, and 50th percentile.

Uses

The median can be used as a measure of location when one attaches reduced importance to extreme values, typically because a distribution is skewed, extreme values are not known, or outliers are untrustworthy, i.e., may be measurement or transcription errors.

For example, consider the multiset

1, 2, 2, 2, 3, 14.

The median is 2 in this case, as is the mode, and it might be seen as a better indication of the center than the arithmetic mean of 4, which is larger than all but one of the values. However, the widely cited empirical relationship that the mean is shifted "further into the tail" of a distribution than the median is not generally true. At most, one can say that the two statistics cannot be "too far" apart; see § Inequality relating means and medians below.[5]

As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time, a median can still be calculated.[6]

Because the median is simple to understand and easy to calculate, while also a robust approximation to the mean, the median is a popular summary statistic in descriptive statistics. In this context, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation.

For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the efficiency of candidate estimators shows that the sample mean is more statistically efficient when, and only when, data are uncontaminated by data from heavy-tailed distributions or from mixtures of distributions.[citation needed] Even then, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be ~50% greater than the variance of the mean.[7][8]

Probability distributions

For any real-valued probability distribution with cumulative distribution function $F$, a median is defined as any real number $m$ that satisfies the inequalities

$$\lim_{x \to m^-} F(x) \leq \frac{1}{2} \leq F(m)$$

(cf. the drawing in the definition of expected value for arbitrary real-valued random variables). An equivalent phrasing uses a random variable $X$ distributed according to $F$:

$$\operatorname{P}(X \leq m) \geq \frac{1}{2} \quad\text{ and }\quad \operatorname{P}(X \geq m) \geq \frac{1}{2}.$$

Mode, median and mean (expected value) of a probability density function[9]

Note that this definition does not require $X$ to have an absolutely continuous distribution (which has a probability density function $f$), nor does it require a discrete one. In the former case, the inequalities can be upgraded to equality: a median satisfies

$$\operatorname{P}(X \leq m) = \int_{-\infty}^{m} f(x)\,dx = \frac{1}{2} \quad\text{ and }\quad \operatorname{P}(X \geq m) = \int_{m}^{\infty} f(x)\,dx = \frac{1}{2}.$$
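
For instance, for an exponential distribution with rate $\lambda$, solving $F(m) = 1/2$ gives the median in closed form:

$$F(m) = 1 - e^{-\lambda m} = \frac{1}{2} \quad\Longrightarrow\quad m = \frac{\ln 2}{\lambda}.$$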

Any probability distribution on the real number set $\mathbb{R}$ has at least one median, but in pathological cases there may be more than one median: if $F$ is constant 1/2 on an interval (so that $f = 0$ there), then any value of that interval is a median.

Medians of particular distributions

The medians of certain types of distributions can be easily calculated from their parameters; furthermore, they exist even for some distributions lacking a well-defined mean, such as the Cauchy distribution, whose median equals its location parameter.

Properties

Optimality property

The mean absolute error of a real variable $c$ with respect to the random variable $X$ is

$$\operatorname{E}\left[\,\left|X - c\right|\,\right]$$

Provided that the probability distribution of $X$ is such that the above expectation exists, then $m$ is a median of $X$ if and only if $m$ is a minimizer of the mean absolute error with respect to $X$.[11] In particular, if $m$ is a sample median, then it minimizes the arithmetic mean of the absolute deviations.[12] Note, however, that in cases where the sample contains an even number of elements, this minimizer is not unique.

More generally, a median is defined as a minimizer of

$$\operatorname{E}\left[\,\left|X - c\right| - \left|X\right|\,\right],$$

as discussed below in the section on multivariate medians (specifically, the spatial median); subtracting $|X|$ keeps the expectation well-defined even when $\operatorname{E}[|X|]$ is infinite.

This optimization-based definition of the median is useful in statistical data analysis, for example, in k-medians clustering.
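
A quick numerical check of the optimality property, scanning candidate values of $c$ over a grid (a sketch; the sample is the one from § Uses, and the grid resolution is an arbitrary choice of ours):

```python
import numpy as np

x = np.array([1, 2, 2, 2, 3, 14])
grid = np.linspace(0, 15, 3001)               # candidate values of c
mae = [np.mean(np.abs(x - c)) for c in grid]  # mean absolute error at each c
print(grid[np.argmin(mae)], np.median(x))     # both are 2.0 here
```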

Inequality relating means and medians

Comparison of mean, median and mode of two log-normal distributions with different skewness

If the distribution has finite variance, then the distance between the median $\tilde{X}$ and the mean $\bar{X}$ is bounded by one standard deviation.

This bound was proved by Book and Sher in 1979 for discrete samples,[13] and more generally by Page and Murty in 1982.[14] In a comment on a subsequent proof by O'Cinneide,[15] Mallows in 1991 presented a compact proof that uses Jensen's inequality twice,[16] as follows. Using |·| for the absolute value, we have

$$
\begin{aligned}
\left|\mu - m\right| = \left|\operatorname{E}(X - m)\right| &\leq \operatorname{E}\left(\left|X - m\right|\right) \\
&\leq \operatorname{E}\left(\left|X - \mu\right|\right) \\
&\leq \sqrt{\operatorname{E}\left(\left(X - \mu\right)^2\right)} = \sigma.
\end{aligned}
$$

The first and third inequalities come from Jensen's inequality applied to the absolute-value function and the square function, which are each convex. The second inequality comes from the fact that a median minimizes the absolute deviation function $a \mapsto \operatorname{E}[|X - a|]$.

Mallows's proof can be generalized to obtain a multivariate version of the inequality[17] simply by replacing the absolute value with a norm:

$$\left\|\mu - m\right\| \leq \sqrt{\operatorname{E}\left(\left\|X - \mu\right\|^2\right)} = \sqrt{\operatorname{trace}\left(\operatorname{var}(X)\right)}$$

where $m$ is a spatial median, that is, a minimizer of the function $a \mapsto \operatorname{E}(\|X - a\|)$. The spatial median is unique when the data set's dimension is two or more.[18][19]

An alternative proof uses the one-sided Chebyshev inequality; it appears in an inequality on location and scale parameters. This formula also follows directly from Cantelli's inequality.[20]

Unimodal distributions

For the case of unimodal distributions, one can achieve a sharper bound on the distance between the median and the mean:[21]

$$\left|\tilde{X} - \bar{X}\right| \leq \left(\frac{3}{5}\right)^{1/2} \sigma \approx 0.7746\,\sigma.$$

A similar relation holds between the median and the mode:

$$\left|\tilde{X} - \mathrm{mode}\right| \leq 3^{1/2}\,\sigma \approx 1.732\,\sigma.$$

The mean is greater than the median for monotonic distributions.

Mean, median, and skew

A typical heuristic is that positively skewed distributions have mean > median. This is true for all members of the Pearson distribution family. However, it is not always true: the Weibull distribution family, for example, has members with positive skew but mean < median. Violations of the rule are particularly common for discrete distributions. For example, any Poisson distribution has positive skew, but its mean < median whenever $\mu \bmod 1 > \ln 2$.[22] See [23] for a proof sketch.

When the distribution has a monotonically decreasing probability density, then the median is less than the mean, as shown in the figure.

Jensen's inequality for medians

Jensen's inequality states that for any random variable $X$ with a finite expectation $\operatorname{E}[X]$ and for any convex function $f$,

$$f(\operatorname{E}(X)) \leq \operatorname{E}(f(X))$$

This inequality generalizes to the median as well. We say a function $f:\mathbb{R}\to\mathbb{R}$ is a C function if, for any $t$,

$$f^{-1}\left((-\infty, t]\right) = \{x \in \mathbb{R} \mid f(x) \leq t\}$$

is a closed interval (allowing the degenerate cases of a single point or an empty set). Every convex function is a C function, but the reverse does not hold. If $f$ is a C function, then

$$f(\operatorname{med}[X]) \leq \operatorname{med}[f(X)]$$

If the medians are not unique, the statement holds for the corresponding suprema.[24]

Medians for samples

This section discusses the theory of estimating a population median from a sample. To calculate the median of a sample "by hand", see § Finite set of numbers above.

Efficient computation of the sample median

Even though comparison-sorting $n$ items requires $\Omega(n \log n)$ operations, selection algorithms can compute the $k$th-smallest of $n$ items with only $\Theta(n)$ operations. This includes the median, which is the $n/2$th order statistic (or, for an even number of samples, the arithmetic mean of the two middle order statistics).[25]
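
A sketch of randomized quickselect, the standard expected-linear-time selection algorithm (a generic implementation of ours, not any particular library's):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-indexed) of a, in expected O(n) time."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        pivot = a[random.randint(lo, hi)]
        # three-way partition of a[lo:hi+1] around the pivot
        lt = [v for v in a[lo:hi + 1] if v < pivot]
        eq = [v for v in a[lo:hi + 1] if v == pivot]
        gt = [v for v in a[lo:hi + 1] if v > pivot]
        a[lo:hi + 1] = lt + eq + gt
        if k < lo + len(lt):
            hi = lo + len(lt) - 1
        elif k < lo + len(lt) + len(eq):
            return pivot
        else:
            lo = lo + len(lt) + len(eq)
    return a[lo]

data = [7, 1, 9, 8, 3, 6, 3]
print(quickselect(data, len(data) // 2))  # 6, the median of the 7 items
```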

Selection algorithms still have the downside of requiring $\Omega(n)$ memory, that is, they need to have the full sample (or a linear-sized portion of it) in memory. Because this, as well as the linear time requirement, can be prohibitive, several estimation procedures for the median have been developed. A simple one is the median of three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the quicksort sorting algorithm, which uses an estimate of its input's median. A more robust estimator is Tukey's ninther, which is the median of three rule applied with limited recursion:[26] if $A$ is the sample laid out as an array, and

med3(A) = med(A[1], A[n/2], A[n]),

then

ninther(A) = med3(med3(A[1 ... n/3]), med3(A[n/3 ... 2n/3]), med3(A[2n/3 ... n]))
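
In code (a sketch with 0-based indexing; `med3` here is the branch-free median-of-three formula):

```python
def med3(a, b, c):
    """Median of three values."""
    return max(min(a, b), min(max(a, b), c))

def ninther(a):
    """Tukey's ninther: med3 of the med3s of the three thirds of a."""
    n = len(a)
    def m3(lo, hi):  # first, middle, and last element of a[lo:hi]
        return med3(a[lo], a[(lo + hi - 1) // 2], a[hi - 1])
    return med3(m3(0, n // 3), m3(n // 3, 2 * n // 3), m3(2 * n // 3, n))

print(ninther(list(range(27))))  # 13, the true median of 0..26
```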

The remedian is an estimator for the median that requires linear time but sub-linear memory, operating in a single pass over the sample.[27]

Sampling distribution

The distributions of both the sample mean and the sample median were determined by Laplace.[28] The distribution of the sample median from a population with a density function $f(x)$ is asymptotically normal with mean $m$ (the population median) and variance[29]

$$\frac{1}{4 n f(m)^2}$$

where $m$ is the median of $f(x)$ and $n$ is the sample size:

$$\text{Sample median} \sim \mathcal{N}\!\left(\mu = m,\ \sigma^2 = \frac{1}{4 n f(m)^2}\right)$$

A modern proof follows below. Laplace's result is now understood as a special case of the asymptotic distribution of arbitrary quantiles.

For normal samples, the density is $f(m) = 1/\sqrt{2\pi\sigma^2}$; thus for large samples the variance of the median equals $(\pi/2)\cdot(\sigma^2/n)$.[7] (See also § Efficiency below.)
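
A small simulation is consistent with this factor (a sketch; NumPy assumed, sample counts arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1001, 5000
x = rng.standard_normal((reps, n))   # 'reps' normal samples of size n, sigma = 1
v_med = np.median(x, axis=1).var()   # empirical variance of the sample median
v_mean = x.mean(axis=1).var()        # empirical variance of the sample mean
print(v_med / v_mean)                # close to pi/2 ~ 1.571
```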

Derivation of the asymptotic distribution

We take the sample size to be an odd number $N = 2n+1$ and assume our variable continuous; the formula for the case of discrete variables is given below in § Empirical local density. The sample can be summarized as "below median", "at median", and "above median", which corresponds to a trinomial distribution with probabilities $F(v)$, $f(v)$ and $1 - F(v)$. For a continuous variable, the probability of multiple sample values being exactly equal to the median is 0, so one can calculate the density of the median at the point $v$ directly from the trinomial distribution:

$$\Pr[\operatorname{med} = v]\,dv = \frac{(2n+1)!}{n!\,n!} F(v)^n (1 - F(v))^n f(v)\,dv.$$

Now we introduce the beta function. For integer arguments $\alpha$ and $\beta$, this can be expressed as $\mathrm{B}(\alpha,\beta) = \frac{(\alpha-1)!\,(\beta-1)!}{(\alpha+\beta-1)!}$. Also, recall that $f(v)\,dv = dF(v)$. Using these relationships and setting both $\alpha$ and $\beta$ equal to $n+1$ allows the last expression to be written as

$$\frac{F(v)^n (1 - F(v))^n}{\mathrm{B}(n+1, n+1)}\,dF(v)$$

Hence the density function of the median is a symmetric beta distribution pushed forward by $F$. Its mean, as we would expect, is 0.5 and its variance is $1/(4(N+2))$. By the chain rule, the corresponding variance of the sample median is

$$\frac{1}{4(N+2) f(m)^2}.$$

The additional 2 is negligible in the limit.

Empirical local density

In practice, the functions $f$ and $F$ above are often not known or assumed. However, they can be estimated from an observed frequency distribution. In this section, we give an example. Consider the following table, representing a sample of 3,800 (discrete-valued) observations:

| v | 0 | 0.5 | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| f(v) | 0.000 | 0.008 | 0.010 | 0.013 | 0.083 | 0.108 | 0.328 | 0.220 | 0.202 | 0.023 | 0.005 |
| F(v) | 0.000 | 0.008 | 0.018 | 0.031 | 0.114 | 0.222 | 0.550 | 0.770 | 0.972 | 0.995 | 1.000 |

Because the observations are discrete-valued, constructing the exact distribution of the median is not an immediate translation of the above expression for $\Pr(\operatorname{med} = v)$; one may (and typically does) have multiple instances of the median in one's sample. So we must sum over all these possibilities:

$$\Pr(\operatorname{med} = v) = \sum_{i=0}^{n}\sum_{k=0}^{n} \frac{N!}{i!\,(N-i-k)!\,k!}\, F(v-1)^i \,(1 - F(v))^k \, f(v)^{N-i-k}$$

Here, $i$ is the number of points strictly less than the median and $k$ the number strictly greater.
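
A direct computation of this sum (a sketch of ours, reading $F(v-1)$ as the probability of falling strictly below $v$, i.e. $F(v) - f(v)$; the values for $v = 3$ come from the table above):

```python
from math import comb

def pr_median(F_v, f_v, N):
    """P(sample median of N = 2n+1 i.i.d. draws equals the category with
    point mass f_v and cdf F_v), summing over i below and k above."""
    n = N // 2
    p_below, p_above = F_v - f_v, 1.0 - F_v
    total = 0.0
    for i in range(n + 1):
        for k in range(n + 1):
            j = N - i - k  # draws exactly at the median value (always >= 1)
            total += comb(N, i) * comb(N - i, k) * p_below**i * p_above**k * f_v**j
    return total

print(pr_median(0.550, 0.328, 15))  # category v = 3: f(3) = 0.328, F(3) = 0.550
```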

Using these preliminaries, it is possible to investigate the effect of sample size on the standard errors of the mean and median. The observed mean is 3.16, the observed raw median is 3 and the observed interpolated median is 3.174. The following table gives some comparison statistics.

| Statistic | Sample size 3 | 9 | 15 | 21 |
|---|---|---|---|---|
| Expected value of median | 3.198 | 3.191 | 3.174 | 3.161 |
| Standard error of median (above formula) | 0.482 | 0.305 | 0.257 | 0.239 |
| Standard error of median (asymptotic approximation) | 0.879 | 0.508 | 0.393 | 0.332 |
| Standard error of mean | 0.421 | 0.243 | 0.188 | 0.159 |

The expected value of the median falls slightly as sample size increases while, as would be expected, the standard errors of both the median and the mean are proportionate to the inverse square root of the sample size. The asymptotic approximation errs on the side of caution by overestimating the standard error.

Estimation of variance from sample data

The value of $(2f(x))^{-2}$, the asymptotic value of $n^{-1/2}(\nu - m)$ where $\nu$ is the population median, has been studied by several authors. The standard "delete one" jackknife method produces inconsistent results.[30] An alternative, the "delete k" method, where $k$ grows with the sample size, has been shown to be asymptotically consistent.[31] This method may be computationally expensive for large data sets. A bootstrap estimate is known to be consistent,[32] but converges very slowly (order of $n^{-\frac{1}{4}}$).[33] Other methods have been proposed but their behavior may differ between large and small samples.[34]

Efficiency

The efficiency of the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of size $N = 2n+1$ from the normal distribution, the efficiency for large $N$ is

$$\frac{2}{\pi}\cdot\frac{N+2}{N}$$

The efficiency tends to $\frac{2}{\pi} \approx 0.64$ as $N$ tends to infinity.

In other words, the relative variance of the median will be $\pi/2 \approx 1.57$, or 57% greater than the variance of the mean, and the relative standard error of the median will be $(\pi/2)^{1/2} \approx 1.25$, or 25% greater than the standard error of the mean, $\sigma/\sqrt{n}$ (see also § Sampling distribution above).[35]

Other estimators

For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median.[36]

If data are represented by a statistical model specifying a particular family of probability distributions, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution. Pareto interpolation is an application of this when the population is assumed to have a Pareto distribution.

Multivariate median

Previously, this article discussed the univariate median, where the sample or population is one-dimensional. When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one.[36][37][38][39]

Marginal median

The marginal median is defined for vectors with respect to a fixed set of coordinates: it is the vector whose components are the univariate medians of the corresponding coordinates. The marginal median is easy to compute, and its properties were studied by Puri and Sen.[36][40]
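
Computed componentwise, e.g. with NumPy (a sketch):

```python
import numpy as np

points = np.array([[1.0, 7.0],
                   [2.0, 3.0],
                   [9.0, 5.0]])
print(np.median(points, axis=0))  # [2., 5.]: per-coordinate medians
```

Note that (2, 5) is not one of the three sample points: the marginal median need not belong to the sample (the medoid is the related concept that does).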

Geometric median

The geometric median of a discrete set of sample points $x_1, \ldots, x_N$ in a Euclidean space is the[a] point minimizing the sum of distances to the sample points:

$$\hat{\mu} = \underset{\mu \in \mathbb{R}^{m}}{\operatorname{arg\,min}} \sum_{n=1}^{N} \left\|\mu - x_n\right\|_2$$

In contrast to the marginal median, the geometric median is equivariant with respect to Euclidean similarity transformations such as translations and rotations.
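
There is no closed form in general; Weiszfeld's fixed-point iteration is the classic way to approximate the geometric median (a sketch that sidesteps the corner case of an iterate landing exactly on a sample point):

```python
import numpy as np

def geometric_median(points, iters=200):
    """Weiszfeld's iteration: repeatedly re-average with weights 1/distance."""
    y = points.mean(axis=0)             # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(d, 1e-12)  # guard against division by zero
        y = (points * w[:, None]).sum(axis=0) / w.sum()
    return y

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
print(geometric_median(pts))  # pulled far less toward (10, 10) than the mean is
```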

Median in all directions

If the marginal medians for all coordinate systems coincide, then their common location may be termed the "median in all directions".[42] This concept is relevant to voting theory on account of the median voter theorem. When it exists, the median in all directions coincides with the geometric median (at least for discrete distributions).

Centerpoint

Main article: Centerpoint (geometry)
In statistics and computational geometry, the notion of centerpoint is a generalization of the median to data in higher-dimensional Euclidean space. Given a set of points in d-dimensional space, a centerpoint of the set is a point such that any hyperplane that goes through that point divides the set of points in two roughly equal subsets: the smaller part should have at least a 1/(d + 1) fraction of the points. Like the median, a centerpoint need not be one of the data points. Every non-empty set of points (with no duplicates) has at least one centerpoint.

Conditional median

The conditional median occurs in the setting where we seek to estimate a random variable $X$ from a random variable $Y$, which is a noisy version of $X$. The conditional median in this setting is given by

$$m(X \mid Y = y) = F_{X \mid Y=y}^{-1}\!\left(\frac{1}{2}\right)$$

where $t \mapsto F_{X \mid Y=y}^{-1}(t)$ is the inverse of the conditional cdf (i.e., the conditional quantile function) of $x \mapsto F_{X \mid Y}(x \mid y)$. For example, a popular model is $Y = X + Z$ where $Z$ is standard normal independent of $X$. The conditional median is the optimal Bayesian $L_1$ estimator:

$$m(X \mid Y = y) = \arg\min_{f} \operatorname{E}\left[\,|X - f(Y)|\,\right]$$

It is known that for the model $Y = X + Z$, where $Z$ is standard normal independent of $X$, the estimator is linear if and only if $X$ is Gaussian.[43]
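
A Monte Carlo sketch of the Gaussian case (assumptions ours: $X \sim \mathcal{N}(0,1)$, and conditioning approximated by keeping samples with $Y$ in a narrow window). Here $X \mid Y = y$ is normal with mean $y/2$, so the conditional median is the linear map $y/2$, as the theorem predicts:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)   # X ~ N(0, 1): the Gaussian case
y = x + rng.standard_normal(x.size)  # Y = X + Z, Z standard normal
y0 = 1.5
sel = np.abs(y - y0) < 0.02          # crude kernel: keep samples with Y near y0
print(np.median(x[sel]), y0 / 2)     # both are approximately 0.75
```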

Other median-related concepts

Interpolated median

When dealing with a discrete variable, it is sometimes useful to regard the observed values as being midpoints of underlying continuous intervals. An example of this is a Likert scale, on which opinions or preferences are expressed on a scale with a set number of possible responses. If the scale consists of the positive integers, an observation of 3 might be regarded as representing the interval from 2.50 to 3.50. It is possible to estimate the median of the underlying variable. If, say, 22% of the observations are of value 2 or below and 55% are of 3 or below (so 33% have the value 3), then the median $m$ is 3, since the median is the smallest value of $x$ for which $F(x)$ is greater than a half. But the interpolated median is somewhere between 2.50 and 3.50. First we add half of the interval width $w$ to the median to get the upper bound of the median interval. Then we subtract that proportion of the interval width which equals the proportion of the 33% which lies above the 50% mark. In other words, we split up the interval width pro rata to the numbers of observations. In this case, the 33% is split into 28% below the median and 5% above it, so we subtract 5/33 of the interval width from the upper bound of 3.50 to give an interpolated median of 3.35. More formally, if the values $f(x)$ are known, the interpolated median can be calculated from

$$m_{\text{int}} = m + w\left[\frac{1}{2} - \frac{F(m) - \frac{1}{2}}{f(m)}\right].$$

Alternatively, if in an observed sample there are $k$ scores above the median category, $j$ scores in it and $i$ scores below it, then the interpolated median is given by

$$m_{\text{int}} = m + \frac{w}{2}\left[\frac{k - i}{j}\right].$$
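
Both formulas in code, reproducing the worked example above (a sketch; names ours):

```python
def interp_median_cdf(m, w, F_m, f_m):
    """Interpolated median from the cdf F(m) and mass f(m) of the median category."""
    return m + w * (0.5 - (F_m - 0.5) / f_m)

def interp_median_counts(m, w, k, j, i):
    """Same estimate from counts: k above, j in, and i below the median category."""
    return m + (w / 2) * (k - i) / j

print(interp_median_cdf(3, 1.0, 0.55, 0.33))     # ~3.35
print(interp_median_counts(3, 1.0, 45, 33, 22))  # same value
```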

Pseudo-median

Main article: Pseudomedian

For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population pseudo-median, which is the median of a symmetrized distribution and which is close to the population median.[44] The Hodges–Lehmann estimator has been generalized to multivariate distributions.[45]

Variants of regression

The Theil–Sen estimator is a method for robust linear regression based on finding medians of slopes.[46]

Median filter

The median filter is an important tool of image processing that can effectively remove salt-and-pepper noise from grayscale images.
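
A one-dimensional sketch of the idea (real pipelines use 2-D windows over images, e.g. scipy.ndimage.median_filter; here a width-3 window removes a single impulse):

```python
import numpy as np

def median_filter_1d(signal, width=3):
    """Replace each sample by the median of its window; edges are clamped."""
    half = width // 2
    padded = np.pad(signal, half, mode="edge")
    return np.array([np.median(padded[i:i + width]) for i in range(len(signal))])

noisy = np.array([10, 10, 10, 255, 10, 10, 10])  # one "salt" sample
print(median_filter_1d(noisy))                   # the impulse is gone
```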

Cluster analysis

Main article: k-medians clustering

In cluster analysis, the k-medians clustering algorithm provides a way of defining clusters: the k-means criterion of minimizing the sum of squared distances between points and their cluster means is replaced by minimizing the sum of distances between points and their cluster medians.

Median–median line

This is a method of robust regression. The idea dates back to Wald in 1940, who suggested dividing a set of bivariate data into two halves depending on the value of the independent variable $x$: a left half with values less than the median and a right half with values greater than the median.[47] He suggested taking the means of the dependent ($y$) and independent ($x$) variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set.

Nair and Shrivastava in 1942 suggested a similar idea but instead advocated dividing the sample into three equal parts before calculating the means of the subsamples.[48] Brown and Mood in 1951 proposed the idea of using the medians of two subsamples rather than the means.[49] Tukey combined these ideas and recommended dividing the sample into three equal-size subsamples and estimating the line based on the medians of the subsamples.[50]

Median-unbiased estimators

Main article: Bias of an estimator § Median-unbiased estimators

Any mean-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function, as observed by Gauss. A median-unbiased estimator minimizes the risk with respect to the absolute-deviation loss function, as observed by Laplace. Other loss functions are used in statistical theory, particularly in robust statistics.

The theory of median-unbiased estimators was revived by George W. Brown in 1947:[51]

An estimate of a one-dimensional parameter θ will be said to be median-unbiased if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation.

— page 584

Further properties of median-unbiased estimators have been reported.[52][53][54][55]

There are methods of constructing median-unbiased estimators that are optimal (in a sense analogous to the minimum-variance property for mean-unbiased estimators). Such constructions exist for probability distributions having monotone likelihood functions.[56][57] One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: the procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure, but for a larger class of loss functions.[58]

History

Scientific researchers in the ancient Near East appear not to have used summary statistics at all, instead choosing values that offered maximal consistency with a broader theory that integrated a wide variety of phenomena.[59] Within the Mediterranean (and, later, European) scholarly community, statistics like the mean are fundamentally a medieval and early modern development. (The history of the median outside Europe and its predecessors remains relatively unstudied.)

The idea of the median appeared in the 6th century in the Talmud, in order to fairly analyze divergent appraisals.[60][61] However, the concept did not spread to the broader scientific community.

Instead, the closest ancestor of the modern median is the mid-range, invented by Al-Biruni.[62]: 31 [63] Transmission of his work to later scholars is unclear. He applied his technique to assaying currency metals, but, after he published his work, most assayers still adopted the most unfavorable value from their results, lest they appear to cheat.[62]: 35–8 [64] However, increased navigation at sea during the Age of Discovery meant that ships' navigators increasingly had to attempt to determine latitude in unfavorable weather against hostile shores, leading to renewed interest in summary statistics. Whether rediscovered or independently invented, the mid-range is recommended to nautical navigators in Harriot's "Instructions for Raleigh's Voyage to Guiana, 1595".[62]: 45–8

The idea of the median may have first appeared in Edward Wright's 1599 book Certaine Errors in Navigation, in a section about compass navigation.[65] Wright was reluctant to discard measured values, and may have felt that the median, incorporating a greater proportion of the dataset than the mid-range, was more likely to be correct. However, Wright did not give examples of his technique's use, making it hard to verify that he described the modern notion of median.[59][63][b] The median (in the context of probability) certainly appeared in the correspondence of Christiaan Huygens, but as an example of a statistic that was inappropriate for actuarial practice.[59]

The earliest recommendation of the median dates to 1757, when Roger Joseph Boscovich developed a regression method based on the L1 norm and therefore implicitly on the median.[59][66] In 1774, Laplace made this desire explicit: he suggested the median be used as the standard estimator of the value of a posterior PDF. The specific criterion was to minimize the expected magnitude of the error, $|\alpha - \alpha^*|$, where $\alpha^*$ is the estimate and $\alpha$ is the true value. To this end, Laplace determined the distributions of both the sample mean and the sample median in the early 1800s.[28][67] However, a decade later, Gauss and Legendre developed the least squares method, which minimizes $(\alpha - \alpha^*)^2$ to obtain the mean; the strong justification of this estimator by reference to maximum likelihood estimation based on a normal distribution means it has mostly replaced Laplace's original suggestion.[68]

Antoine Augustin Cournot in 1843 was the first[69] to use the term median (valeur médiane) for the value that divides a probability distribution into two equal halves. Gustav Theodor Fechner used the median (Centralwerth) in sociological and psychological phenomena;[70] it had earlier been used only in astronomy and related fields. Fechner popularized the median in the formal analysis of data, although it had been used previously by Laplace,[70] and the median appeared in a textbook by F. Y. Edgeworth.[71] Francis Galton used the term median in 1881,[72][73] having earlier used the terms middle-most value in 1869 and medium in 1880.[74][75]


See also

  • Absolute deviation – Difference between a variable's observed value and a reference value
  • Bias of an estimator – Statistical property
  • Central tendency – Statistical value representing the center or average of a distribution
  • Concentration of measure – Strong form of uniform continuity for Lipschitz functions
  • Median graph – Graph with a median for each three vertices
  • Median of medians – Algorithm to calculate the approximate median in linear time
  • Median search – Method for finding the kth smallest value
  • Median slope – Statistical method for fitting a line
  • Median voter theory – Theorem in political science
  • Medoid – Generalization of the median in higher dimensions
  • Moving median – Type of statistical measure over subsets of a dataset
  • Median absolute deviation – Statistical measure of variability

Notes

  1. ^ The geometric median is unique unless the sample is collinear.[41]
  2. ^ Subsequent scholars appear to concur with Eisenhart that Boroughs' 1580 figures, while suggestive of the median, in fact describe an arithmetic mean.[62]: 62–3  Boroughs is mentioned in no other work.

References

  1. ^ Weisstein, Eric W. "Statistical Median". MathWorld.
  2. ^ Simon, Laura J. "Descriptive statistics". Statistical Education Resource Kit, Pennsylvania State Department of Statistics. Archived 2010-07-30 at the Wayback Machine.
  3. ^ Bissell, Derek (1994). Statistical Methods for Spc and Tqm. CRC Press. pp. 26–. ISBN 978-0-412-39440-9. Retrieved 25 February 2013.
  4. ^ Sheskin, David J. (27 August 2003). Handbook of Parametric and Nonparametric Statistical Procedures (Third ed.). CRC Press. p. 7. ISBN 978-1-4200-3626-8. Retrieved 25 February 2013.
  5. ^ von Hippel, Paul T. (2005). "Mean, Median, and Skew: Correcting a Textbook Rule". Journal of Statistics Education. 13 (2). Archived from the original on 2008-10-14. Retrieved 2015-06-18.
  6. ^ Robson, Colin (1994). Experiment, Design and Statistics in Psychology. Penguin. pp. 42–45. ISBN 0-14-017648-9.
  7. ^ Williams, D. (2001). Weighing the Odds. Cambridge University Press. p. 165. ISBN 052100618X.
  8. ^ Maindonald, John; Braun, W. John (2010-05-06). Data Analysis and Graphics Using R: An Example-Based Approach. Cambridge University Press. p. 104. ISBN 978-1-139-48667-5.
  9. ^ "AP Statistics Review - Density Curves and the Normal Distributions". Archived from the original on 8 April 2015. Retrieved 16 March 2015.
  10. ^ Newman, M. E. J. (2005). "Power laws, Pareto distributions and Zipf's law". Contemporary Physics. 46 (5): 323–351. arXiv:cond-mat/0412004. Bibcode:2005ConPh..46..323N. doi:10.1080/00107510500052444. S2CID 2871747.
  11. ^ Stroock, Daniel (2011). Probability Theory. Cambridge University Press. p. 43. ISBN 978-0-521-13250-3.
  12. ^ DeGroot, Morris H. (1970). Optimal Statistical Decisions. McGraw-Hill Book Co., New York-London-Sydney. p. 232. ISBN 9780471680291. MR 0356303.
  13. ^ Book, Stephen A.; Sher, Lawrence (1979). "How close are the mean and the median?". The Two-Year College Mathematics Journal. 10 (3): 202–204. doi:10.2307/3026748. JSTOR 3026748. Retrieved 12 March 2022.
  14. ^ Page, Warren; Murty, Vedula N. (1982). "Nearness Relations Among Measures of Central Tendency and Dispersion: Part 1". The Two-Year College Mathematics Journal. 13 (5): 315–327. doi:10.1080/00494925.1982.11972639 (inactive 1 November 2024). Retrieved 12 March 2022.
  15. ^ O'Cinneide, Colm Art (1990). "The mean is within one standard deviation of any median". The American Statistician. 44 (4): 292–293. doi:10.1080/00031305.1990.10475743. Retrieved 12 March 2022.
  16. ^ Mallows, Colin (August 1991). "Another comment on O'Cinneide". The American Statistician. 45 (3): 257. doi:10.1080/00031305.1991.10475815.
  17. ^ Piché, Robert (2012). Random Vectors and Random Sequences. Lambert Academic Publishing. ISBN 978-3659211966.
  18. ^ Kemperman, Johannes H. B. (1987). Dodge, Yadolah (ed.). "The median of a finite measure on a Banach space: Statistical data analysis based on the L1-norm and related methods". Papers from the First International Conference Held at Neuchâtel, August 31–September 4, 1987. Amsterdam: North-Holland Publishing Co.: 217–230. MR 0949228.
  19. ^ Milasevic, Philip; Ducharme, Gilles R. (1987). "Uniqueness of the spatial median". Annals of Statistics. 15 (3): 1332–1333. doi:10.1214/aos/1176350511. MR 0902264.
  20. ^ Van Steen, K. Notes on probability and statistics.
  21. ^ Basu, S.; Dasgupta, A. (1997). "The Mean, Median, and Mode of Unimodal Distributions: A Characterization". Theory of Probability and Its Applications. 41 (2): 210–223. doi:10.1137/S0040585X97975447. S2CID 54593178.
  22. ^ von Hippel, Paul T. (January 2005). "Mean, Median, and Skew: Correcting a Textbook Rule". Journal of Statistics Education. 13 (2). doi:10.1080/10691898.2005.11910556. ISSN 1069-1898.
  23. ^ Groeneveld, Richard A.; Meeden, Glen (August 1977). "The Mode, Median, and Mean Inequality". The American Statistician. 31 (3): 120–121. doi:10.1080/00031305.1977.10479215. ISSN 0003-1305.
  24. ^ Merkle, M. (2005). "Jensen's inequality for medians". Statistics & Probability Letters. 71 (3): 277–281. doi:10.1016/j.spl.2004.11.010.
  25. ^ Aho, Alfred V.; Hopcroft, John E.; Ullman, Jeffrey D. (1974). The Design and Analysis of Computer Algorithms. Reading, MA: Addison-Wesley. ISBN 0-201-00029-6. Here: Section 3.6 "Order Statistics", pp. 97–99, in particular Algorithm 3.6 and Theorem 3.9.
  26. ^ Bentley, Jon L.; McIlroy, M. Douglas (1993). "Engineering a sort function". Software: Practice and Experience. 23 (11): 1249–1265. doi:10.1002/spe.4380231105. S2CID 8822797.
  27. ^ Rousseeuw, Peter J.; Bassett, Gilbert W. Jr. (1990). "The remedian: a robust averaging method for large data sets" (PDF). J. Amer. Statist. Assoc. 85 (409): 97–104. doi:10.1080/01621459.1990.10475311.
  28. ^ Stigler, Stephen (December 1973). "Studies in the History of Probability and Statistics. XXXII: Laplace, Fisher and the Discovery of the Concept of Sufficiency". Biometrika. 60 (3): 439–445. doi:10.1093/biomet/60.3.439. JSTOR 2334992. MR 0326872.
  29. ^ Rider, Paul R. (1960). "Variance of the median of small samples from several special populations". J. Amer. Statist. Assoc. 55 (289): 148–150. doi:10.1080/01621459.1960.10482056.
  30. ^ Efron, B. (1982). The Jackknife, the Bootstrap and other Resampling Plans. Philadelphia: SIAM. ISBN 0898711797.
  31. ^ Shao, J.; Wu, C. F. (1989). "A General Theory for Jackknife Variance Estimation". Ann. Stat. 17 (3): 1176–1197. doi:10.1214/aos/1176347263. JSTOR 2241717.
  32. ^ Efron, B. (1979). "Bootstrap Methods: Another Look at the Jackknife". Ann. Stat. 7 (1): 1–26. doi:10.1214/aos/1176344552. JSTOR 2958830.
  33. ^ Hall, P.; Martin, M. A. (1988). "Exact Convergence Rate of Bootstrap Quantile Variance Estimator". Probab Theory Related Fields. 80 (2): 261–268. doi:10.1007/BF00356105. S2CID 119701556.
  34. ^ Jiménez-Gamero, M. D.; Munoz-García, J.; Pino-Mejías, R. (2004). "Reduced bootstrap for the median". Statistica Sinica. 14 (4): 1179–1198.
  35. ^ Maindonald, John; Braun, W. John (2010-05-06). Data Analysis and Graphics Using R: An Example-Based Approach. Cambridge University Press. ISBN 9781139486675.
  36. ^ Hettmansperger, Thomas P.; McKean, Joseph W. (1998). Robust nonparametric statistical methods. Kendall's Library of Statistics. Vol. 5. London: Edward Arnold. ISBN 0-340-54937-8. MR 1604954.
  37. ^ Small, Christopher G. (1990). "A survey of multidimensional medians". International Statistical Review / Revue Internationale de Statistique: 263–277. doi:10.2307/1403809. JSTOR 1403809.
  38. ^ Niinimaa, A.; Oja, H. (1999). "Multivariate median". Encyclopedia of Statistical Sciences.
  39. ^ Mosler, Karl (2012). Multivariate Dispersion, Central Regions, and Depth: The Lift Zonoid Approach. Vol. 165. Springer Science & Business Media.
  40. ^ Puri, Madan L.; Sen, Pranab K. (1971). Nonparametric Methods in Multivariate Analysis. New York, NY: John Wiley & Sons. (Reprinted by Krieger Publishing.)
  41. ^ Vardi, Yehuda; Zhang, Cun-Hui (2000). "The multivariate L1-median and associated data depth". Proceedings of the National Academy of Sciences of the United States of America. 97 (4): 1423–1426 (electronic). Bibcode:2000PNAS...97.1423V. doi:10.1073/pnas.97.4.1423. MR 1740461. PMC 26449. PMID 10677477.
  42. ^ Davis, Otto A.; DeGroot, Morris H.; Hinich, Melvin J. (January 1972). "Social Preference Orderings and Majority Rule" (PDF). Econometrica. 40 (1): 147–157. doi:10.2307/1909727. JSTOR 1909727. The authors, working in a topic in which uniqueness is assumed, actually use the expression "unique median in all directions".
  43. ^ Barnes, Leighton; Dytso, Alex J.; Liu, Jingbo; Poor, H. Vincent (2024-08-22). "L1 Estimation: On the Optimality of Linear Estimators". IEEE Transactions on Information Theory. 70 (11): 8026–8039. doi:10.1109/TIT.2024.3440929.
  44. ^ Pratt, William K.; Cooper, Ted J.; Kabir, Ihtisham (1985-07-11). Corbett, Francis J. (ed.). "Pseudomedian Filter". Architectures and Algorithms for Digital Image Processing II. 0534: 34. Bibcode:1985SPIE..534...34P. doi:10.1117/12.946562. S2CID 173183609.
  45. ^ Oja, Hannu (2010). Multivariate nonparametric methods with R: An approach based on spatial signs and ranks. Lecture Notes in Statistics. Vol. 199. New York, NY: Springer. pp. xiv+232. doi:10.1007/978-1-4419-0468-3. ISBN 978-1-4419-0467-6. MR 2598854.
  46. ^ Wilcox, Rand R. (2001). "Theil–Sen estimator". Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy. Springer-Verlag. pp. 207–210. ISBN 978-0-387-95157-7.
  47. ^ Wald, A. (1940). "The Fitting of Straight Lines if Both Variables are Subject to Error" (PDF). Annals of Mathematical Statistics. 11 (3): 282–300. doi:10.1214/aoms/1177731868. JSTOR 2235677.
  48. ^ Nair, K. R.; Shrivastava, M. P. (1942). "On a Simple Method of Curve Fitting". Sankhyā: The Indian Journal of Statistics. 6 (2): 121–132. JSTOR 25047749.
  49. ^ Brown, G. W.; Mood, A. M. (1951). "On Median Tests for Linear Hypotheses". Proc Second Berkeley Symposium on Mathematical Statistics and Probability. Berkeley, CA: University of California Press. pp. 159–166. Zbl 0045.08606.
  50. ^ Tukey, J. W. (1977). Exploratory Data Analysis. Reading, MA: Addison-Wesley. ISBN 0201076160.
  51. ^ Brown, George W. (1947). "On Small-Sample Estimation". Annals of Mathematical Statistics. 18 (4): 582–585. doi:10.1214/aoms/1177730349. JSTOR 2236236.
  52. ^ Lehmann, Erich L. (1951). "A General Concept of Unbiasedness". Annals of Mathematical Statistics. 22 (4): 587–592. doi:10.1214/aoms/1177729549. JSTOR 2236928.
  53. ^ Birnbaum, Allan (1961). "A Unified Theory of Estimation, I". Annals of Mathematical Statistics. 32 (1): 112–135. doi:10.1214/aoms/1177705145. JSTOR 2237612.
  54. ^ van der Vaart, H. Robert (1961). "Some Extensions of the Idea of Bias". Annals of Mathematical Statistics. 32 (2): 436–447. doi:10.1214/aoms/1177705051. JSTOR 2237754. MR 0125674.
  55. ^ Pfanzagl, Johann; with the assistance of R. Hamböker (1994). Parametric Statistical Theory. Walter de Gruyter. ISBN 3-11-013863-8. MR 1291393.
  56. ^ Pfanzagl, Johann (1979). "On optimal median unbiased estimators in the presence of nuisance parameters". The Annals of Statistics: 187–193.
  57. ^ Brown, L. D.; Cohen, Arthur; Strawderman, W. E. (1976). "A Complete Class Theorem for Strict Monotone Likelihood Ratio With Applications". Ann. Statist. 4 (4): 712–722. doi:10.1214/aos/1176343543.
  58. ^ Brown, L. D.; Cohen, Arthur; Strawderman, W. E. (1976). "A Complete Class Theorem for Strict Monotone Likelihood Ratio With Applications". Ann. Statist. 4 (4): 712–722. doi:10.1214/aos/1176343543.
  59. ^ Bakker, Arthur; Gravemeijer, Koeno P. E. (2006-06-01). "An Historical Phenomenology of Mean and Median". Educational Studies in Mathematics. 62 (2): 149–168. doi:10.1007/s10649-006-7099-8. ISSN 1573-0816. S2CID 143708116.
  60. ^ Adler, Dan (31 December 2014). "Talmud and Modern Economics". Jewish American and Israeli Issues. Archived from the original on 6 December 2015. Retrieved 22 February 2020.
  61. ^ Modern Economic Theory in the Talmud by Yisrael Aumann.
  62. ^ Eisenhart, Churchill (24 August 1971). The Development of the Concept of the Best Mean of a Set of Measurements from Antiquity to the Present Day (PDF) (Speech). 131st Annual Meeting of the American Statistical Association. Colorado State University.
  63. ^ "How the Average Triumphed Over the Median". Priceonomics. 5 April 2016. Retrieved 2020-02-23.
  64. ^ Sangster, Alan (March 2021). "The Life and Works of Luca Pacioli (1446/7–1517), Humanist Educator". Abacus. 57 (1): 126–152. doi:10.1111/abac.12218. hdl:2164/16100. ISSN 0001-3072. S2CID 233917744.
  65. ^ Wright, Edward; Parsons, E. J. S.; Morris, W. F. (1939). "Edward Wright and His Work". Imago Mundi. 3: 61–71. doi:10.1080/03085693908591862. ISSN 0308-5694. JSTOR 1149920.
  66. ^ Stigler, S. M. (1986). The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press. ISBN 0674403401.
  67. ^ Laplace, P. S. de (1818). Deuxième supplément à la Théorie Analytique des Probabilités. Paris: Courcier.
  68. ^ Jaynes, E. T. (2007). Probability Theory: The Logic of Science (5th printing). Cambridge: Cambridge University Press. p. 172. ISBN 978-0-521-59271-0.
  69. ^ Howarth, Richard (2017). Dictionary of Mathematical Geosciences: With Historical Notes. Springer. p. 374.
  70. ^ Keynes, J. M. (1921). A Treatise on Probability. Pt II, Ch XVII, §5 (p. 201). (2006 reprint, Cosimo Classics, ISBN 9781596055308; multiple other reprints.)
  71. ^ Stigler, Stephen M. (2002). Statistics on the Table: The History of Statistical Concepts and Methods. Harvard University Press. pp. 105–7. ISBN 978-0-674-00979-0.
  72. ^ Galton, F. (1881). "Report of the Anthropometric Committee". pp. 245–260. Report of the 51st Meeting of the British Association for the Advancement of Science.
  73. ^ David, H. A. (1995). "First (?) Occurrence of Common Terms in Mathematical Statistics". The American Statistician. 49 (2): 121–133. doi:10.2307/2684625. ISSN 0003-1305. JSTOR 2684625.
  74. ^ encyclopediaofmath.org
  75. ^ personal.psu.edu

External links

This article incorporates material from Median of a distribution on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
