
In probability theory, Donsker's theorem (also known as Donsker's invariance principle, or the functional central limit theorem), named after Monroe D. Donsker, is a functional extension of the central limit theorem for empirical distribution functions. Specifically, the theorem states that an appropriately centered and scaled version of the empirical distribution function converges to a Gaussian process.
Let $X_1, X_2, X_3, \ldots$ be a sequence of independent and identically distributed (i.i.d.) random variables with mean 0 and variance 1. Let $S_n := \sum_{i=1}^{n} X_i$. The stochastic process $S := (S_n)_{n \in \mathbb{N}}$ is known as a random walk. Define the diffusively rescaled random walk (partial-sum process) by

$$W^{(n)}(t) := \frac{S_{\lfloor nt \rfloor}}{\sqrt{n}}, \qquad t \in [0,1].$$
The central limit theorem asserts that $W^{(n)}(1)$ converges in distribution to a standard Gaussian random variable $W(1)$ as $n \to \infty$. Donsker's invariance principle[1][2] extends this convergence to the whole function $W^{(n)} := (W^{(n)}(t))_{t \in [0,1]}$. More precisely, in its modern form, Donsker's invariance principle states that: as random variables taking values in the Skorokhod space $\mathcal{D}[0,1]$, the random function $W^{(n)}$ converges in distribution to a standard Brownian motion $W := (W(t))_{t \in [0,1]}$ as $n \to \infty$.
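As a numerical illustration (not part of the theorem's statement), the rescaled partial-sum process is easy to simulate. The sketch below, assuming NumPy and using Rademacher ($\pm 1$) steps as one convenient mean-0, variance-1 choice, builds $W^{(n)}$ on a time grid and checks that the endpoint $W^{(n)}(1)$ is approximately standard normal, as the central limit theorem predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

def rescaled_walk(n, steps):
    """Return W^(n)(t) = S_{floor(nt)} / sqrt(n) on a grid of t in [0, 1]."""
    s = np.concatenate([[0.0], np.cumsum(steps)])  # S_0, S_1, ..., S_n
    t = np.linspace(0.0, 1.0, 201)
    return s[np.floor(n * t).astype(int)] / np.sqrt(n)

# Rademacher steps: mean 0, variance 1, as the theorem requires.
n, reps = 1000, 2000
endpoints = []
for _ in range(reps):
    steps = rng.choice([-1.0, 1.0], size=n)
    endpoints.append(rescaled_walk(n, steps)[-1])
endpoints = np.array(endpoints)

# By the CLT, W^(n)(1) should be approximately N(0, 1).
print(round(endpoints.mean(), 2), round(endpoints.var(), 2))
```

Donsker's principle says more than this endpoint statement: the whole random path, not just its value at $t = 1$, converges in law to Brownian motion.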


Let $F_n$ be the empirical distribution function of the sequence of i.i.d. random variables $X_1, X_2, X_3, \ldots$ with distribution function $F$. Define the centered and scaled version of $F_n$ by

$$G_n(x) := \sqrt{n}\,(F_n(x) - F(x)),$$
indexed by $x \in \mathbb{R}$. By the classical central limit theorem, for fixed $x$, the random variable $G_n(x)$ converges in distribution to a Gaussian (normal) random variable $G(x)$ with zero mean and variance $F(x)(1 - F(x))$ as the sample size $n$ grows.
Theorem (Donsker, Skorokhod, Kolmogorov). The sequence of processes $G_n$, as random elements of the Skorokhod space, converges in distribution to a Gaussian process $G$ with zero mean and covariance given by

$$\operatorname{Cov}[G(s), G(t)] = \mathbb{E}[G(s)\,G(t)] = \min\{F(s), F(t)\} - F(s)\,F(t).$$
The process $G(x)$ can be written as $B(F(x))$, where $B$ is a standard Brownian bridge on the unit interval.
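The limiting covariance can be checked by simulation. The following sketch (an illustration, assuming NumPy; the sample sizes and the evaluation points $s = 0.3$, $t = 0.7$ are arbitrary choices) draws Uniform[0,1] samples, for which $F(x) = x$, and compares the empirical covariance of $G_n(s)$ and $G_n(t)$ with $\min\{s, t\} - st$:

```python
import numpy as np

rng = np.random.default_rng(1)

n, reps = 500, 4000
s, t = 0.3, 0.7

# G_n(x) = sqrt(n) * (F_n(x) - x) for Uniform[0,1] data, where F(x) = x.
samples = rng.random((reps, n))
gn_s = np.sqrt(n) * ((samples <= s).mean(axis=1) - s)
gn_t = np.sqrt(n) * ((samples <= t).mean(axis=1) - t)

# Both G_n(s) and G_n(t) have mean 0, so the product mean estimates the covariance.
emp_cov = np.mean(gn_s * gn_t)
print(round(emp_cov, 2), min(s, t) - s * t)  # should be close to 0.09
```

For the Brownian bridge $B$, $\operatorname{Cov}[B(s), B(t)] = \min\{s, t\} - st$, which is consistent with the representation $G(x) = B(F(x))$.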
For continuous probability distributions, it reduces to the case where the distribution is uniform on $[0,1]$ by the inverse transform.
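The inverse-transform reduction can be illustrated numerically. In this sketch (an assumption-laden example: the Exponential(1) distribution is chosen only because its inverse CDF has a closed form), if $U$ is Uniform[0,1], then $F^{-1}(U)$ has distribution $F$, so the empirical process for $F$-distributed data at $x$ agrees with the uniform empirical process at $F(x)$:

```python
import numpy as np

rng = np.random.default_rng(4)

# Inverse transform for Exponential(1): F(x) = 1 - exp(-x), F^{-1}(u) = -log(1 - u).
u = rng.random(100000)
x = -np.log(1.0 - u)

# Empirical check that X = F^{-1}(U) has distribution F:
# P(X <= 1) should be F(1) = 1 - exp(-1), about 0.632.
print(round((x <= 1.0).mean(), 2))
```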
Given any finite sequence of times $0 < t_1 < t_2 < \cdots < t_n < 1$, we have that $N F_N(t_1)$ is distributed as a binomial distribution with mean $N t_1$ and variance $N t_1 (1 - t_1)$.
Similarly, the joint distribution of $F_N(t_1), F_N(t_2), \ldots, F_N(t_n)$ is a multinomial distribution. Now, the central limit approximation for multinomial distributions shows that $\sqrt{N}\,(F_N(t_i) - t_i)_i$ converges in distribution to a Gaussian process with covariance matrix with entries $\min(t_i, t_j) - t_i t_j$, which is precisely the covariance matrix for the Brownian bridge.
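The binomial marginal above is straightforward to verify by simulation; the sketch below (assuming NumPy; $N = 200$, $t_1 = 0.4$ are illustrative choices) counts how many Uniform[0,1] samples fall below $t_1$ and compares the sample mean and variance with $N t_1$ and $N t_1 (1 - t_1)$:

```python
import numpy as np

rng = np.random.default_rng(2)

N, reps, t1 = 200, 20000, 0.4

# N * F_N(t1) counts how many of N Uniform[0,1] samples fall at or below t1,
# so it is Binomial(N, t1): mean N*t1, variance N*t1*(1 - t1).
counts = (rng.random((reps, N)) <= t1).sum(axis=1)

print(round(counts.mean(), 1), round(counts.var(), 1))
# Theoretical values: mean N*t1 = 80, variance N*t1*(1 - t1) = 48.
```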
Kolmogorov (1933) showed that when $F$ is continuous, the supremum $\sup_t G_n(t)$ and the supremum of absolute value $\sup_t |G_n(t)|$ converge in distribution to the laws of the same functionals of the Brownian bridge $B(t)$; see the Kolmogorov–Smirnov test. In 1949 Doob asked whether the convergence in distribution held for more general functionals, thus formulating a problem of weak convergence of random functions in a suitable function space.[3]
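Kolmogorov's result can be seen numerically: $\sqrt{n}\,D_n = \sup_x |G_n(x)|$ converges in law to $\sup_t |B(t)|$, whose distribution is the Kolmogorov distribution with mean $\sqrt{\pi/2}\,\ln 2 \approx 0.8687$. A self-contained sketch (assuming NumPy; the sample sizes are arbitrary) computes the Kolmogorov–Smirnov statistic for Uniform[0,1] data and checks the mean of its scaled law:

```python
import numpy as np

rng = np.random.default_rng(3)

def ks_stat(u):
    """sup_x |F_n(x) - x| for Uniform[0,1] data u, via the sorted-sample formula."""
    n = len(u)
    u = np.sort(u)
    i = np.arange(1, n + 1)
    return np.maximum(i / n - u, u - (i - 1) / n).max()

n, reps = 1000, 2000
vals = np.array([np.sqrt(n) * ks_stat(rng.random(n)) for _ in range(reps)])

# sqrt(n) * D_n converges in law to sup_t |B(t)| (Kolmogorov distribution),
# whose mean is sqrt(pi/2) * ln 2, approximately 0.8687.
print(round(vals.mean(), 2))
```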
In 1952 Donsker stated and proved (not quite correctly)[4] a general extension of the Doob–Kolmogorov heuristic approach. In the original paper, Donsker proved that the convergence in law of $G_n$ to the Brownian bridge holds for Uniform[0,1] distributions with respect to uniform convergence in $t$ over the interval $[0,1]$.[2]
However, Donsker's formulation was not quite correct because of the problem of measurability of the functionals of discontinuous processes. In 1956 Skorokhod and Kolmogorov defined a separable metric $d$, called the Skorokhod metric, on the space of càdlàg functions on $[0,1]$, such that convergence for $d$ to a continuous function is equivalent to convergence for the sup norm, and showed that $G_n$ converges in law in the Skorokhod space to the Brownian bridge.
Later, Dudley reformulated Donsker's result to avoid the problem of measurability and the need for the Skorokhod metric. One can prove[4] that there exist $X_i$, i.i.d. uniform in $[0,1]$, and a sequence of sample-continuous Brownian bridges $B_n$, such that

$$\|G_n - B_n\|_\infty = \sup_x |G_n(x) - B_n(x)|$$
is measurable and converges in probability to 0. An improved version of this result, providing more detail on the rate of convergence, is the Komlós–Major–Tusnády approximation.