Order statistic

From Wikipedia, the free encyclopedia
Kth smallest value in a statistical sample
[Figure: Probability density functions of the order statistics for a sample of size n = 5 from an exponential distribution with unit scale parameter.]

In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value.[1] Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.

Notation and examples


For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

6, 9, 3, 7,

the order statistics would be denoted

x_{(1)} = 3, \quad x_{(2)} = 6, \quad x_{(3)} = 7, \quad x_{(4)} = 9,

where the subscript (i) enclosed in parentheses indicates the ith order statistic of the sample.
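
Since the order statistics are just the sorted sample, they are straightforward to compute; a minimal sketch in Python (NumPy assumed available, values purely illustrative):

    import numpy as np

    sample = np.array([6, 9, 3, 7])
    order_stats = np.sort(sample)    # array([3, 6, 7, 9]) = x_(1), ..., x_(4)
    print(order_stats[0])            # x_(1) = 3, the sample minimum
    print(order_stats[-1])           # x_(4) = 9, the sample maximum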

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

X_{(1)} = \min\{X_1, \ldots, X_n\}

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size n, the nth order statistic (or largest order statistic) is the maximum, that is,

X_{(n)} = \max\{X_1, \ldots, X_n\}.

The sample range is the difference between the maximum and minimum. It is a function of the order statistics:

\operatorname{Range}\{X_1, \ldots, X_n\} = X_{(n)} - X_{(1)}.

A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.

The sample median may or may not be an order statistic, since there is a single middle value only when the number n of observations is odd. More precisely, if n = 2m + 1 for some integer m, then the sample median is X_{(m+1)} and so is an order statistic. On the other hand, when n is even, n = 2m and there are two middle values, X_{(m)} and X_{(m+1)}, and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.

Probabilistic analysis


Given any random variables X_1, X_2, \ldots, X_n, the order statistics X_{(1)}, X_{(2)}, \ldots, X_{(n)} are also random variables, defined by sorting the values (realizations) of X_1, \ldots, X_n in increasing order.

When the random variables X_1, X_2, \ldots, X_n form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables X_1, \ldots, X_n can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.

Cumulative distribution function of order statistics


For a random sample as above, with cumulative distribution F_X(x), the order statistics for that sample have cumulative distributions as follows[2] (where r specifies which order statistic):

F_{X_{(r)}}(x) = \sum_{j=r}^{n} \binom{n}{j} [F_X(x)]^j [1 - F_X(x)]^{n-j}

The proof of this formula is pure combinatorics: for the rth order statistic to be ≤ x, the number of samples that are > x has to be between 0 and n − r. In the case that X_{(j)} is the largest order statistic ≤ x, there have to be j samples ≤ x (each with an independent probability of F_X(x)) and n − j samples > x (each with an independent probability of 1 − F_X(x)). Finally, there are \binom{n}{j} different ways of choosing which of the n samples are of the ≤ x kind.

The corresponding probability density function may be derived from this result, and is found to be

f_{X_{(r)}}(x) = \frac{n!}{(r-1)!\,(n-r)!}\, f_X(x) \left[F_X(x)\right]^{r-1} \left[1 - F_X(x)\right]^{n-r}.
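
Both formulas are easy to check numerically. The sketch below (assuming NumPy and SciPy are available; the choice of a standard normal parent and of n, r, x is arbitrary) compares the exact CDF of the 3rd order statistic of five standard normal variables with a Monte Carlo estimate:

    import numpy as np
    from math import comb, factorial
    from scipy import stats

    def order_stat_cdf(F, x, n, r):
        """CDF of the r-th order statistic of an i.i.d. sample of size n."""
        p = F(x)
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(r, n + 1))

    def order_stat_pdf(F, f, x, n, r):
        """PDF of the r-th order statistic of an i.i.d. sample of size n."""
        c = factorial(n) / (factorial(r - 1) * factorial(n - r))
        return c * f(x) * F(x)**(r - 1) * (1 - F(x))**(n - r)

    # Exact CDF vs Monte Carlo for the 3rd order statistic of 5 standard normals.
    n, r, x = 5, 3, 0.5
    sims = np.sort(np.random.default_rng(0).standard_normal((100_000, n)), axis=1)
    print(order_stat_cdf(stats.norm.cdf, x, n, r))   # exact
    print(np.mean(sims[:, r - 1] <= x))              # empirical, should agree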

Moreover, there are two special cases, which have CDFs that are easy to compute.

F_{X_{(n)}}(x) = \Pr(\max\{X_1, \ldots, X_n\} \le x) = [F_X(x)]^n

F_{X_{(1)}}(x) = \Pr(\min\{X_1, \ldots, X_n\} \le x) = 1 - [1 - F_X(x)]^n

Both can be derived by careful consideration of probabilities: the maximum is at most x exactly when all n observations are at most x (an event of probability [F_X(x)]^n), and the minimum exceeds x exactly when all n observations exceed x (an event of probability [1 − F_X(x)]^n).

Probability distributions of order statistics


Order statistics sampled from a uniform distribution


In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that X_1, X_2, \ldots, X_n is a random sample drawn from a continuous distribution with cdf F_X. Denoting U_i = F_X(X_i), we obtain the corresponding random sample U_1, \ldots, U_n from the standard uniform distribution. Note that the order statistics also satisfy U_{(i)} = F_X(X_{(i)}).

The probability density function of the order statistic U_{(k)} is equal to[3]

f_{U_{(k)}}(u) = \frac{n!}{(k-1)!\,(n-k)!}\, u^{k-1} (1-u)^{n-k}

that is, the kth order statistic of the uniform distribution is a beta-distributed random variable.[3][4]

U_{(k)} \sim \operatorname{Beta}(k,\, n+1-k).

The proof of these statements is as follows. For U_{(k)} to be between u and u + du, it is necessary that exactly k − 1 elements of the sample are smaller than u, and that at least one is between u and u + du. The probability that more than one is in this latter interval is already O(du^2), so we have to calculate the probability that exactly k − 1, 1 and n − k observations fall in the intervals (0, u), (u, u + du) and (u + du, 1) respectively. This equals (refer to multinomial distribution for details)

\frac{n!}{(k-1)!\,(n-k)!}\, u^{k-1} \cdot du \cdot (1-u-du)^{n-k}

and the result follows.

The mean of this distribution is k / (n + 1).
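
As a quick numerical check of the beta-distribution claim (a sketch assuming NumPy and SciPy, with arbitrary choices of n and k), one can compare the empirical mean and variance of a simulated U_{(k)} with those of Beta(k, n+1−k):

    import numpy as np
    from scipy import stats

    n, k = 5, 2
    u_k = np.sort(np.random.default_rng(1).uniform(size=(200_000, n)), axis=1)[:, k - 1]

    beta = stats.beta(k, n + 1 - k)
    print(u_k.mean(), beta.mean())   # both ≈ k/(n+1) = 1/3
    print(u_k.var(), beta.var())     # should agree closely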

The joint distribution of the order statistics of the uniform distribution


Similarly, for i < j, the joint probability density function of the two order statistics U_{(i)} < U_{(j)} can be shown to be

f_{U_{(i)},U_{(j)}}(u,v) = n!\, \frac{u^{i-1}}{(i-1)!} \cdot \frac{(v-u)^{j-i-1}}{(j-i-1)!} \cdot \frac{(1-v)^{n-j}}{(n-j)!}

which is (up to terms of higher order than O(du\,dv)) the probability that i − 1, 1, j − 1 − i, 1 and n − j sample elements fall in the intervals (0, u), (u, u + du), (u + du, v), (v, v + dv), (v + dv, 1) respectively.

One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:

f_{U_{(1)},U_{(2)},\ldots,U_{(n)}}(u_1, u_2, \ldots, u_n) = n!.

One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/n! is the volume of the region 0 < u_1 < \cdots < u_n < 1. It is also related to another particularity of order statistics of uniform random variables: it follows from the BRS-inequality that the maximum expected number of uniform U(0,1] random variables one can choose from a sample of size n whose sum does not exceed s (with 0 < s < n/2) is bounded above by \sqrt{2sn}, and is thus invariant on the set of all s, n with constant product sn.

Using the above formulas, one can derive the distribution of the range of the order statistics, that is, the distribution of U_{(n)} − U_{(1)}, i.e. maximum minus minimum. More generally, for n ≥ k > j ≥ 1, U_{(k)} − U_{(j)} also has a beta distribution:

U_{(k)} - U_{(j)} \sim \operatorname{Beta}(k-j,\, n-(k-j)+1)

From these formulas we can derive the covariance between two order statistics:

\operatorname{Cov}(U_{(k)}, U_{(j)}) = \frac{j(n-k+1)}{(n+1)^2(n+2)}

The formula follows from noting that

\operatorname{Var}(U_{(k)} - U_{(j)}) = \operatorname{Var}(U_{(k)}) + \operatorname{Var}(U_{(j)}) - 2\operatorname{Cov}(U_{(k)}, U_{(j)})
 = \frac{k(n-k+1)}{(n+1)^2(n+2)} + \frac{j(n-j+1)}{(n+1)^2(n+2)} - 2\operatorname{Cov}(U_{(k)}, U_{(j)})

and comparing that with

\operatorname{Var}(U) = \frac{(k-j)(n-(k-j)+1)}{(n+1)^2(n+2)}

where U \sim \operatorname{Beta}(k-j,\, n-(k-j)+1), which is the actual distribution of the difference.
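
The covariance formula can likewise be verified by simulation; the following sketch (NumPy assumed, with illustrative values of n, j, k) compares the empirical covariance of two uniform order statistics with the closed form above:

    import numpy as np

    n, j, k = 8, 2, 5
    u = np.sort(np.random.default_rng(2).uniform(size=(500_000, n)), axis=1)

    empirical = np.cov(u[:, k - 1], u[:, j - 1])[0, 1]
    exact = j * (n - k + 1) / ((n + 1) ** 2 * (n + 2))
    print(empirical, exact)   # should agree to a few decimal places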

Order statistics sampled from an exponential distribution


For X_1, X_2, \ldots, X_n a random sample of size n from an exponential distribution with parameter λ, the order statistics X_{(i)} for i = 1, 2, \ldots, n each have distribution

X_{(i)} \stackrel{d}{=} \frac{1}{\lambda}\left(\sum_{j=1}^{i} \frac{Z_j}{n-j+1}\right)

where the Z_j are i.i.d. standard exponential random variables (i.e. with rate parameter 1). This result was first published by Alfréd Rényi.[5][6]
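
The representation can be checked by comparing order statistics obtained by direct sorting with those built from scaled standard exponentials; a minimal sketch (NumPy assumed, illustrative n and λ):

    import numpy as np

    n, lam, reps = 6, 2.0, 200_000
    rng = np.random.default_rng(3)

    # Direct: sort i.i.d. exponential samples.
    direct = np.sort(rng.exponential(scale=1 / lam, size=(reps, n)), axis=1)

    # Renyi representation: partial sums of scaled standard exponentials,
    # with denominators n, n-1, ..., 1 for j = 1, ..., n.
    z = rng.exponential(size=(reps, n))
    renyi = np.cumsum(z / (n - np.arange(n)), axis=1) / lam

    print(direct.mean(axis=0))   # E[X_(i)], i = 1..n
    print(renyi.mean(axis=0))    # should match the line above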

Order statistics sampled from an Erlang distribution


The Laplace transform of order statistics sampled from an Erlang distribution may be obtained via a path-counting method.[7]

The joint distribution of the order statistics of an absolutely continuous distribution


If F_X is absolutely continuous, it has a density such that dF_X(x) = f_X(x)\,dx, and we can use the substitutions

u = F_X(x)

and

du = f_X(x)\,dx

to derive the following probability density functions for the order statistics of a sample of size n drawn from the distribution of X:

f_{X_{(k)}}(x) = \frac{n!}{(k-1)!\,(n-k)!} [F_X(x)]^{k-1} [1-F_X(x)]^{n-k} f_X(x)

f_{X_{(j)},X_{(k)}}(x,y) = \frac{n!}{(j-1)!\,(k-j-1)!\,(n-k)!} [F_X(x)]^{j-1} [F_X(y)-F_X(x)]^{k-1-j} [1-F_X(y)]^{n-k} f_X(x) f_X(y), \quad x \le y

f_{X_{(1)},\ldots,X_{(n)}}(x_1, \ldots, x_n) = n!\, f_X(x_1) \cdots f_X(x_n), \quad x_1 \le x_2 \le \cdots \le x_n.

Application: confidence intervals for quantiles


An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution.

A small-sample-size example


The simplest case to consider is how well the sample median estimates the population median.

As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. This interval contains the population median if and only if exactly 3 of the 6 observations fall below the median; since, for a continuous distribution, each observation independently falls below the median with probability 1/2, we know from the preceding discussion that the probability of this event is

\binom{6}{3}(1/2)^6 = \frac{5}{16} \approx 31\%.

Although the sample median is probably among the best distribution-independent point estimates of the population median, what this example illustrates is that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability

\left[\binom{6}{2} + \binom{6}{3} + \binom{6}{4}\right](1/2)^6 = \frac{25}{32} \approx 78\%.

With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
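
These coverage probabilities are binomial tail sums and are simple to compute for any interval [X_{(j)}, X_{(k)}]; a short sketch (standard library only; the helper name is just for illustration):

    from math import comb

    def median_coverage(n: int, j: int, k: int) -> float:
        """P(X_(j) <= population median <= X_(k)) for a continuous parent:
        the count of observations below the median is Binomial(n, 1/2), and
        the interval covers the median iff that count is in {j, ..., k-1}."""
        return sum(comb(n, i) for i in range(j, k)) / 2 ** n

    print(median_coverage(6, 3, 4))   # 5/16  ≈ 0.3125
    print(median_coverage(6, 2, 5))   # 25/32 ≈ 0.78125
    print(median_coverage(6, 1, 6))   # 31/32 ≈ 0.96875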

Large sample sizes


For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by

U_{(\lceil np \rceil)} \sim AN\left(p, \frac{p(1-p)}{n}\right).

For a general distribution F with a continuous non-zero density at F^{-1}(p), a similar asymptotic normality applies:

X_{(\lceil np \rceil)} \sim AN\left(F^{-1}(p), \frac{p(1-p)}{n\,[f(F^{-1}(p))]^2}\right)

where f is the density function, and F^{-1} is the quantile function associated with F. One of the first people to mention and prove this result was Frederick Mosteller in his seminal paper in 1946.[8] Further research led in the 1960s to the Bahadur representation, which provides information about the error bounds. The convergence to normal distribution also holds in a stronger sense, such as convergence in relative entropy or KL divergence.[9]
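
The asymptotic normality of a sample quantile can be illustrated numerically; the sketch below (NumPy and SciPy assumed, with arbitrary choices of n and p and a standard normal parent) compares the empirical variance of the 25th-percentile order statistic with the asymptotic formula:

    import numpy as np
    from scipy import stats

    n, p, reps = 1_000, 0.25, 5_000
    sims = np.sort(np.random.default_rng(4).standard_normal((reps, n)), axis=1)
    sample_quantile = sims[:, int(np.ceil(n * p)) - 1]

    xq = stats.norm.ppf(p)                                  # F^{-1}(p)
    asym_var = p * (1 - p) / (n * stats.norm.pdf(xq) ** 2)
    print(sample_quantile.mean(), xq)                       # centers agree
    print(sample_quantile.var(), asym_var)                  # variances agree for large n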

An interesting observation can be made in the case where the distribution is symmetric, and the population median equals the population mean. In this case, the sample mean, by the central limit theorem, is also asymptotically normally distributed, but with variance σ²/n instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low kurtosis, and vice versa. For example, the median achieves better confidence intervals for the Laplace distribution, while the mean performs better for X that are normally distributed.

Proof


It can be shown that

B(k, n+1-k)\ \stackrel{d}{=}\ \frac{X}{X+Y},

where

X = \sum_{i=1}^{k} Z_i, \quad Y = \sum_{i=k+1}^{n+1} Z_i,

with Z_i being independent identically distributed exponential random variables with rate 1. Since X/n and Y/n are asymptotically normally distributed by the CLT, our results follow by application of the delta method.

Mutual information of order statistics


The mutual information and f-divergence between order statistics have also been considered.[10] For example, if the parent distribution is continuous, then for all 1 ≤ r, m ≤ n

I(X_{(r)}; X_{(m)}) = I(U_{(r)}; U_{(m)}).

In other words, the mutual information is independent of the parent distribution. For discrete random variables the equality need not hold, and we only have

I(X_{(r)}; X_{(m)}) \le I(U_{(r)}; U_{(m)}).

The mutual information between uniform order statistics is given, for r ≤ m, by

I(U_{(r)}; U_{(m)}) = T_{m-1} + T_{n-r} - T_{m-r-1} - T_n

where

T_k = \log(k!) - k H_k

and H_k is the kth harmonic number.
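
The formula translates directly into code; a minimal sketch (standard library only; function names are illustrative), including the exactly computable case n = 2, where the formula gives I(U_{(1)}; U_{(2)}) = 1 − log 2 ≈ 0.307 nats:

    from math import lgamma, log

    def T(k: int) -> float:
        """T_k = log(k!) - k * H_k, with H_k the k-th harmonic number."""
        harmonic = sum(1.0 / i for i in range(1, k + 1))
        return lgamma(k + 1) - k * harmonic

    def mi_uniform_order_stats(r: int, m: int, n: int) -> float:
        """I(U_(r); U_(m)) in nats, for 1 <= r < m <= n."""
        return T(m - 1) + T(n - r) - T(m - r - 1) - T(n)

    print(mi_uniform_order_stats(1, 2, 2), 1 - log(2))   # both ≈ 0.30685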

Application: non-parametric density estimation


Moments of the distribution of the first order statistic can be used to develop a non-parametric density estimator.[11] Suppose we want to estimate the density f_X at the point x^*. Consider the random variables Y_i = |X_i - x^*|, which are i.i.d. with density function g_Y(y) = f_X(x^* + y) + f_X(x^* - y). In particular, f_X(x^*) = g_Y(0)/2.

The expected value of the first order statistic Y_{(1)} from a sample of N total observations is

E(Y_{(1)}) = \frac{1}{(N+1)\,g(0)} + \frac{1}{(N+1)(N+2)} \int_0^1 Q''(z)\,\delta_{N+1}(z)\,dz

where Q is the quantile function associated with the distribution g_Y, and δ_N(z) = (N+1)(1-z)^N. This equation, in combination with a jackknifing technique, becomes the basis for the following density estimation algorithm.

  Input: A sample of N observations; M points of density evaluation {x_ℓ}, ℓ = 1, …, M; tuning parameter a ∈ (0, 1) (usually 1/3).
  Output: Estimated density {f̂_ℓ}, ℓ = 1, …, M, at the points of evaluation.
   1: Set m_N = round(N^(1−a))
   2: Set s_N = N / m_N
   3: Create an s_N × m_N matrix M_ij which holds m_N subsets with s_N observations each
   4: Create a vector f̂ to hold the density evaluations
   5: for ℓ = 1 → M do
   6:     for k = 1 → m_N do
   7:         Find the nearest distance d_ℓk to the current point x_ℓ within the kth subset
   8:     end for
   9:     Compute the subset average of distances to x_ℓ: d_ℓ = Σ_{k=1}^{m_N} d_ℓk / m_N
  10:     Compute the density estimate at x_ℓ: f̂_ℓ = 1 / (2(1 + s_N) d_ℓ)
  11: end for
  12: return f̂
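
A compact NumPy rendering of the algorithm above (a sketch under assumptions: the subsets are formed by randomly partitioning the sample, and all names are illustrative):

    import numpy as np

    def mld_density(sample, eval_points, a=1/3, seed=0):
        """Order-statistic (minimum local distance) density estimate."""
        sample = np.asarray(sample, dtype=float)
        N = sample.size
        m_N = max(1, round(N ** (1 - a)))      # number of subsets
        s_N = N // m_N                          # observations per subset
        rng = np.random.default_rng(seed)
        subsets = rng.permutation(sample)[: m_N * s_N].reshape(m_N, s_N)

        out = []
        for x in eval_points:
            d = np.abs(subsets - x).min(axis=1).mean()   # average nearest distance
            out.append(1.0 / (2.0 * (1.0 + s_N) * d))
        return np.array(out)

    # Example: a heavy-tailed Cauchy density near the origin (true value 1/pi ≈ 0.318).
    data = np.random.default_rng(1).standard_cauchy(10_000)
    print(mld_density(data, [0.0]))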

In contrast to the bandwidth/length-based tuning parameters for histogram and kernel-based approaches, the tuning parameter for the order statistic based density estimator is the size of the sample subsets. Such an estimator is more robust than histogram and kernel-based approaches; for example, densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR-based bandwidths. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true.[12]

Dealing with discrete variables


Suppose X_1, X_2, \ldots, X_n are i.i.d. random variables from a discrete distribution with cumulative distribution function F(x) and probability mass function f(x). To find the probabilities of the kth order statistic, three values are first needed, namely

p_1 = \Pr(X < x) = F(x) - f(x),
p_2 = \Pr(X = x) = f(x), and
p_3 = \Pr(X > x) = 1 - F(x).

The cumulative distribution function of the kth order statistic can be computed by noting that

\Pr(X_{(k)} \le x) = \Pr(\text{there are at least } k \text{ observations less than or equal to } x)
 = \Pr(\text{there are at most } n-k \text{ observations greater than } x)
 = \sum_{j=0}^{n-k} \binom{n}{j} p_3^j (p_1+p_2)^{n-j}.

Similarly, \Pr(X_{(k)} < x) is given by

\Pr(X_{(k)} < x) = \Pr(\text{there are at least } k \text{ observations less than } x)
 = \Pr(\text{there are at most } n-k \text{ observations greater than or equal to } x)
 = \sum_{j=0}^{n-k} \binom{n}{j} (p_2+p_3)^j p_1^{n-j}.

Note that the probability mass function of X_{(k)} is just the difference of these values, that is to say

\Pr(X_{(k)} = x) = \Pr(X_{(k)} \le x) - \Pr(X_{(k)} < x)
 = \sum_{j=0}^{n-k} \binom{n}{j} \left[ p_3^j (p_1+p_2)^{n-j} - (p_2+p_3)^j p_1^{n-j} \right]
 = \sum_{j=0}^{n-k} \binom{n}{j} \left[ (1-F(x))^j F(x)^{n-j} - (1-F(x)+f(x))^j (F(x)-f(x))^{n-j} \right].
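
The pmf formula can be checked on a small discrete example; the sketch below (NumPy assumed; helper names illustrative) computes the distribution of the median of three fair dice and compares it with simulation:

    import numpy as np
    from math import comb

    def order_stat_pmf(cdf, pmf, x, n, k):
        """P(X_(k) = x) for n i.i.d. discrete random variables."""
        p1 = cdf(x) - pmf(x)      # P(X < x)
        p2 = pmf(x)               # P(X = x)
        p3 = 1 - cdf(x)           # P(X > x)
        return sum(comb(n, j) * (p3**j * (p1 + p2)**(n - j)
                                 - (p2 + p3)**j * p1**(n - j))
                   for j in range(n - k + 1))

    # Median (k = 2) of n = 3 fair six-sided dice.
    cdf = lambda x: min(x, 6) / 6.0
    pmf = lambda x: 1.0 / 6.0
    exact = [order_stat_pmf(cdf, pmf, x, n=3, k=2) for x in range(1, 7)]
    rolls = np.sort(np.random.default_rng(5).integers(1, 7, size=(200_000, 3)), axis=1)
    print(exact)
    print([float(np.mean(rolls[:, 1] == x)) for x in range(1, 7)])  # should agree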

Computing order statistics

Main articles: Selection algorithm and Sampling in order

The problem of computing the kth smallest (or largest) element of a list is called the selection problem, and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n).
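
In practice, selection without a full sort is available in standard libraries; for instance, a NumPy-based sketch (illustrative data and k):

    import numpy as np

    data = np.random.default_rng(6).standard_normal(1_000_000)
    k = 10                                        # k-th smallest, 1-indexed

    # Selection in expected O(n) time, without fully sorting:
    kth_smallest = np.partition(data, k - 1)[k - 1]

    # Sorting gives all order statistics at once in O(n log n):
    all_order_stats = np.sort(data)
    assert kth_smallest == all_order_stats[k - 1]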

Applications


Order statistics have many applications in areas such as reliability theory, financial mathematics, survival analysis, epidemiology, sports, quality control, and actuarial risk. There is an extensive literature devoted to studies on applications of order statistics in these fields.

For example, a recent application in actuarial risk can be found in [13], where some weighted premium principles in terms of record claims and kth record claims are provided.


References

  1. David, H. A.; Nagaraja, H. N. (2003). Order Statistics. Wiley Series in Probability and Statistics. doi:10.1002/0471722162. ISBN 9780471722168.
  2. Casella, George; Berger, Roger (2002). Statistical Inference (2nd ed.). Cengage Learning. p. 229. ISBN 9788131503942.
  3. Gentle, James E. (2009). Computational Statistics. Springer. p. 63. ISBN 9780387981444.
  4. Jones, M. C. (2009). "Kumaraswamy's distribution: A beta-type distribution with some tractability advantages". Statistical Methodology. 6 (1): 70–81. doi:10.1016/j.stamet.2008.04.001. "As is well known, the beta distribution is the distribution of the m'th order statistic from a random sample of size n from the uniform distribution (on (0,1))."
  5. David, H. A.; Nagaraja, H. N. (2003). "Chapter 2. Basic Distribution Theory". Order Statistics. Wiley Series in Probability and Statistics. p. 9. doi:10.1002/0471722162.ch2. ISBN 9780471722168.
  6. Rényi, Alfréd (1953). "On the theory of order statistics". Acta Mathematica Hungarica. 4 (3): 191–231. doi:10.1007/BF02127580.
  7. Hlynka, M.; Brill, P. H.; Horn, W. (2010). "A method for obtaining Laplace transforms of order statistics of Erlang random variables". Statistics & Probability Letters. 80: 9–18. doi:10.1016/j.spl.2009.09.006.
  8. Mosteller, Frederick (1946). "On Some Useful "Inefficient" Statistics". Annals of Mathematical Statistics. 17 (4): 377–408. doi:10.1214/aoms/1177730881.
  9. Cardone, M.; Dytso, A.; Rush, C. (2023). "Entropic Central Limit Theorem for Order Statistics". IEEE Transactions on Information Theory. 69 (4): 2193–2205. doi:10.1109/TIT.2022.3219344.
  10. Dytso, A.; Cardone, M.; Rush, C. (2021). "Measuring Dependencies of Order Statistics: An Information Theoretic Perspective". 2020 IEEE Information Theory Workshop. doi:10.1109/ITW46852.2021.9457617.
  11. Garg, Vikram V.; Tenorio, Luis; Willcox, Karen (2017). "Minimum local distance density estimation". Communications in Statistics – Theory and Methods. 46 (1): 148–164. arXiv:1412.2851. doi:10.1080/03610926.2014.988260. S2CID 14334678.
  12. David, H. A.; Nagaraja, H. N. (2003). "Chapter 3. Expected Values and Moments". Order Statistics. Wiley Series in Probability and Statistics. p. 34. doi:10.1002/0471722162.ch3. ISBN 9780471722168.
  13. Castaño-Martínez, A.; López-Blázquez, F.; Pigueiras, G.; Sordo, M. A. (2020). "A method for constructing and interpreting some weighted premium principles". ASTIN Bulletin. 50 (3): 1037–1064. doi:10.1017/asb.2020.15.
