Divergence (statistics)

From Wikipedia, the free encyclopedia
Function that measures dissimilarity between two probability distributions
Not to be confused with Deviance (statistics), Deviation (statistics), or Discrepancy (statistics).

In information geometry, a divergence is a kind of statistical distance: a binary function which establishes the separation from one probability distribution to another on a statistical manifold.

The simplest divergence is squared Euclidean distance (SED), and divergences can be viewed as generalizations of SED. The other most important divergence is relative entropy (also called Kullback–Leibler divergence), which is central to information theory. There are numerous other specific divergences and classes of divergences, notably f-divergences and Bregman divergences (see § Examples).

Definition


Given a differentiable manifold[a] $M$ of dimension $n$, a divergence on $M$ is a $C^2$-function $D : M \times M \to [0, \infty)$ satisfying:[1][2]

  1. $D(p, q) \geq 0$ for all $p, q \in M$ (non-negativity),
  2. $D(p, q) = 0$ if and only if $p = q$ (positivity),
  3. At every point $p \in M$, $D(p, p + dp)$ is a positive-definite quadratic form for infinitesimal displacements $dp$ from $p$.

In applications to statistics, the manifold $M$ is typically the space of parameters of a parametric family of probability distributions.

Condition 3 means that $D$ defines an inner product on the tangent space $T_p M$ for every $p \in M$. Since $D$ is $C^2$ on $M$, this defines a Riemannian metric $g$ on $M$.

Locally at $p \in M$, we may construct a local coordinate chart with coordinates $x$, in which the divergence is
$$D(x(p), x(p) + dx) = \tfrac{1}{2} dx^T g_p(x)\, dx + O(|dx|^3),$$
where $g_p(x)$ is an $n \times n$ matrix. It is the Riemannian metric at the point $p$, expressed in coordinates $x$.

Dimensional analysis of condition 3 shows that divergence has the dimension of squared distance.[3]
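To make the quadratic expansion concrete, here is a minimal numerical sketch (not part of the article; function names are hypothetical) using the Kullback–Leibler divergence on the one-parameter Bernoulli family, for which the induced metric is the Fisher information $g(\theta) = 1/(\theta(1-\theta))$.

```python
import numpy as np

def kl_bernoulli(theta_p, theta_q):
    """KL divergence D(p, q) between Bernoulli(theta_p) and Bernoulli(theta_q)."""
    return (theta_p * np.log(theta_p / theta_q)
            + (1 - theta_p) * np.log((1 - theta_p) / (1 - theta_q)))

theta = 0.3
fisher = 1.0 / (theta * (1.0 - theta))   # Fisher information g(theta) of the Bernoulli family

for dtheta in [1e-1, 1e-2, 1e-3]:
    exact = kl_bernoulli(theta, theta + dtheta)
    quadratic = 0.5 * fisher * dtheta**2  # (1/2) dx^T g dx for a one-dimensional parameter
    print(dtheta, exact, quadratic)       # the two agree up to O(|dtheta|^3)
```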

The dual divergence $D^*$ is defined as

$$D^*(p, q) = D(q, p).$$

When we wish to contrast $D$ against $D^*$, we refer to $D$ as the primal divergence.

Given any divergence $D$, its symmetrized version is obtained by averaging it with its dual divergence:[3]

$$D_S(p, q) = \tfrac{1}{2}\big(D(p, q) + D(q, p)\big).$$
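As a small illustration (a sketch, not from the article; helper names are hypothetical), the following computes the Kullback–Leibler divergence between two discrete distributions together with its dual and symmetrized versions; up to the factor of 1/2, the symmetrized KL divergence is the Jeffreys divergence mentioned in § History.

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence D(p, q) = sum_i p_i log(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

def dual(D):
    """Dual divergence D*(p, q) = D(q, p)."""
    return lambda p, q: D(q, p)

def symmetrize(D):
    """Symmetrized divergence D_S(p, q) = (D(p, q) + D(q, p)) / 2."""
    return lambda p, q: 0.5 * (D(p, q) + D(q, p))

p = [0.6, 0.3, 0.1]
q = [0.2, 0.5, 0.3]
print(kl(p, q), dual(kl)(p, q))                    # generally different: KL is asymmetric
print(symmetrize(kl)(p, q), symmetrize(kl)(q, p))  # the symmetrized version is symmetric
```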

Difference from other similar concepts


Unlike metrics, divergences are not required to be symmetric, and the asymmetry is important in applications.[3] Accordingly, one often refers asymmetrically to the divergence "of q from p" or "from p to q", rather than "between p and q". Secondly, divergences generalize squared distance, not linear distance, and thus do not satisfy the triangle inequality, but some divergences (such as the Bregman divergence) do satisfy generalizations of the Pythagorean theorem.

In general statistics and probability, "divergence" refers broadly to any kind of function $D(p, q)$, where $p, q$ are probability distributions or other objects under consideration, such that conditions 1 and 2 are satisfied. Condition 3 is required for "divergence" as used in information geometry.

As an example, the total variation distance, a commonly used statistical divergence, does not satisfy condition 3.
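A hypothetical sketch of why condition 3 fails for total variation: along a small parameter displacement $d\theta$ in the Bernoulli family, the total variation distance shrinks only linearly in $|d\theta|$ (it behaves like a distance, not an infinitesimal quadratic form), whereas the KL divergence shrinks quadratically.

```python
import numpy as np

def tv_bernoulli(a, b):
    """Total variation distance between Bernoulli(a) and Bernoulli(b); equals |a - b|."""
    return 0.5 * (abs(a - b) + abs((1 - a) - (1 - b)))

def kl_bernoulli(a, b):
    """KL divergence between Bernoulli(a) and Bernoulli(b)."""
    return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

theta = 0.4
for dtheta in [1e-1, 1e-2, 1e-3]:
    # TV is first order in dtheta; KL is second order.
    print(dtheta, tv_bernoulli(theta, theta + dtheta), kl_bernoulli(theta, theta + dtheta))
```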

Notation


Notation for divergences varies significantly between fields, though there are some conventions.

Divergences are generally notated with an uppercase 'D', as in $D(x, y)$, to distinguish them from metric distances, which are notated with a lowercase 'd'. When multiple divergences are in use, they are commonly distinguished with subscripts, as in $D_{\text{KL}}$ for Kullback–Leibler divergence (KL divergence).

Often a different separator between parameters is used, particularly to emphasize the asymmetry. In information theory, a double bar is commonly used: $D(p \parallel q)$; this is similar to, but distinct from, the notation for conditional probability, $P(A \mid B)$, and emphasizes interpreting the divergence as a relative measurement, as in relative entropy; this notation is common for the KL divergence. A colon may be used instead,[b] as in $D(p : q)$; this emphasizes the relative information supporting the two distributions.

The notation for parameters varies as well. Uppercase $P, Q$ interprets the parameters as probability distributions, while lowercase $p, q$ or $x, y$ interprets them geometrically as points in a space, and $\mu_1, \mu_2$ or $m_1, m_2$ interprets them as measures.

Geometrical properties

Further information: Information geometry

Many properties of divergences can be derived if we restrict $S$ to be a statistical manifold, meaning that it can be parametrized with a finite-dimensional coordinate system $\theta$, so that for a distribution $p \in S$ we can write $p = p(\theta)$.

For a pair of points $p, q \in S$ with coordinates $\theta_p$ and $\theta_q$, denote the partial derivatives of $D(p, q)$ as

$$\begin{aligned}
D((\partial_i)_p, q)\ &\stackrel{\mathrm{def}}{=}\ \frac{\partial}{\partial\theta_p^i} D(p, q),\\
D((\partial_i \partial_j)_p, (\partial_k)_q)\ &\stackrel{\mathrm{def}}{=}\ \frac{\partial}{\partial\theta_p^i}\,\frac{\partial}{\partial\theta_p^j}\,\frac{\partial}{\partial\theta_q^k} D(p, q),\quad\text{etc.}
\end{aligned}$$

Now we restrict these functions to the diagonal $p = q$, and denote[4]

$$\begin{aligned}
D[\partial_i, \cdot]\ &:\ p \mapsto D((\partial_i)_p, p),\\
D[\partial_i, \partial_j]\ &:\ p \mapsto D((\partial_i)_p, (\partial_j)_p),\quad\text{etc.}
\end{aligned}$$

By definition, the function $D(p, q)$ is minimized at $p = q$, and therefore

$$\begin{aligned}
&D[\partial_i, \cdot] = D[\cdot, \partial_i] = 0,\\
&D[\partial_i \partial_j, \cdot] = D[\cdot, \partial_i \partial_j] = -D[\partial_i, \partial_j]\ \equiv\ g_{ij}^{(D)},
\end{aligned}$$

where the matrix $g^{(D)}$ is positive semi-definite and defines a unique Riemannian metric on the manifold $S$.
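As a check of this construction (a sketch under the assumption of a one-parameter Bernoulli model; the function names are hypothetical), the mixed second derivative $-\partial^2 D / \partial\theta_p\, \partial\theta_q$ of the KL divergence, approximated by finite differences on the diagonal, recovers the Fisher information $1/(\theta(1-\theta))$.

```python
import numpy as np

def kl_bernoulli(tp, tq):
    """KL divergence between Bernoulli(tp) and Bernoulli(tq)."""
    return tp * np.log(tp / tq) + (1 - tp) * np.log((1 - tp) / (1 - tq))

def metric_from_divergence(D, theta, h=1e-4):
    """g = -d^2 D(theta_p, theta_q) / (d theta_p d theta_q) at theta_p = theta_q = theta,
    approximated by a central finite difference of the mixed partial derivative."""
    return -(D(theta + h, theta + h) - D(theta + h, theta - h)
             - D(theta - h, theta + h) + D(theta - h, theta - h)) / (4 * h * h)

theta = 0.3
print(metric_from_divergence(kl_bernoulli, theta))  # approx. 4.7619
print(1 / (theta * (1 - theta)))                    # Fisher information 1/(theta(1-theta))
```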

A divergence $D(\cdot, \cdot)$ also defines a unique torsion-free affine connection $\nabla^{(D)}$ with coefficients

$$\Gamma_{ij,k}^{(D)} = -D[\partial_i \partial_j, \partial_k],$$

and the dual of this connection, $\nabla^*$, is generated by the dual divergence $D^*$.

Thus, a divergence $D(\cdot, \cdot)$ generates on a statistical manifold a unique dualistic structure $\big(g^{(D)}, \nabla^{(D)}, \nabla^{(D^*)}\big)$. The converse is also true: every torsion-free dualistic structure on a statistical manifold is induced from some globally defined divergence function (which, however, need not be unique).[5]

For example, when $D$ is an f-divergence[6] for some function $f(\cdot)$, then it generates the metric $g^{(D_f)} = c \cdot g$ and the connection $\nabla^{(D_f)} = \nabla^{(\alpha)}$, where $g$ is the canonical Fisher information metric, $\nabla^{(\alpha)}$ is the $\alpha$-connection, $c = f''(1)$, and $\alpha = 3 + 2 f'''(1)/f''(1)$.

Examples


The two most important divergences are the relative entropy (Kullback–Leibler divergence, KL divergence), which is central to information theory and statistics, and the squared Euclidean distance (SED). Minimizing these two divergences is the main way that linear inverse problems are solved, via the principle of maximum entropy and least squares, notably in logistic regression and linear regression.[7]

The two most important classes of divergences are the f-divergences and Bregman divergences; however, other types of divergence functions are also encountered in the literature. The only divergence for probabilities over a finite alphabet that is both an f-divergence and a Bregman divergence is the Kullback–Leibler divergence.[8] The squared Euclidean divergence is a Bregman divergence (corresponding to the function $x^2$) but not an f-divergence.
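The claim that the Kullback–Leibler divergence is both an f-divergence and a Bregman divergence can be checked numerically on a finite alphabet. The following is a minimal sketch (hypothetical names), using the f-divergence convention $D_f(p, q) = \int p\, f(q/p)$ of the next section, so that KL corresponds to $f(t) = -\ln t$, and using the negative entropy as the Bregman generator.

```python
import numpy as np

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.2, 0.5, 0.3])

# Direct definition of the KL divergence.
kl_direct = np.sum(p * np.log(p / q))

# As an f-divergence D_f(p, q) = sum_i p_i f(q_i / p_i) with f(t) = -log t.
f = lambda t: -np.log(t)
kl_as_f_divergence = np.sum(p * f(q / p))

# As a Bregman divergence with generator F(u) = sum_i u_i log u_i (negative entropy),
# whose gradient is grad F(u)_i = log u_i + 1; on probability vectors this reduces to KL.
F = lambda u: np.sum(u * np.log(u))
gradF = lambda u: np.log(u) + 1
kl_as_bregman = F(p) - F(q) - np.dot(gradF(q), p - q)

print(kl_direct, kl_as_f_divergence, kl_as_bregman)  # all three agree
```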

f-divergences

Main article: f-divergence

Given a convex function $f : [0, +\infty) \to (-\infty, +\infty]$ such that $f(0) = \lim_{t \to 0^+} f(t)$ and $f(1) = 0$, the f-divergence generated by $f$ is defined as

$$D_f(p, q) = \int p(x)\, f\!\left(\frac{q(x)}{p(x)}\right) dx.$$

Kullback–Leibler divergence: $D_{\mathrm{KL}}(p, q) = \int p(x) \ln\!\left(\frac{p(x)}{q(x)}\right) dx$
squared Hellinger distance: $H^2(p, q) = 2 \int \big(\sqrt{p(x)} - \sqrt{q(x)}\big)^2 dx$
Jensen–Shannon divergence: $D_{JS}(p, q) = \frac{1}{2} \int p(x) \ln(p(x)) + q(x) \ln(q(x)) - (p(x) + q(x)) \ln\!\left(\frac{p(x) + q(x)}{2}\right) dx$
$\alpha$-divergence: $D^{(\alpha)}(p, q) = \frac{4}{1 - \alpha^2} \left(1 - \int p(x)^{\frac{1 - \alpha}{2}} q(x)^{\frac{1 + \alpha}{2}} dx\right)$
chi-squared divergence: $D_{\chi^2}(p, q) = \int \frac{(p(x) - q(x))^2}{p(x)} dx$
$(\alpha, \beta)$-product divergence[citation needed]: $D_{\alpha,\beta}(p, q) = \frac{2}{(1 - \alpha)(1 - \beta)} \int \Big(1 - \big(\tfrac{q(x)}{p(x)}\big)^{\frac{1 - \alpha}{2}}\Big)\Big(1 - \big(\tfrac{q(x)}{p(x)}\big)^{\frac{1 - \beta}{2}}\Big) p(x)\, dx$
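The following sketch (not from the article; helper names are hypothetical) implements the f-divergence for discrete distributions under the convention above and checks two of the listed generators against their direct formulas: $f(t) = (1 - t)^2$ for the chi-squared divergence and $f(t) = 2(1 - \sqrt{t})^2$ for the squared Hellinger distance in the factor-2 convention used here.

```python
import numpy as np

def f_divergence(f, p, q):
    """Discrete f-divergence D_f(p, q) = sum_i p_i f(q_i / p_i),
    matching the convention D_f(p, q) = integral of p f(q/p) used above."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * f(q / p))

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.2, 0.5, 0.3])

# f(t) = (1 - t)^2 generates the chi-squared divergence sum_i (p_i - q_i)^2 / p_i.
print(f_divergence(lambda t: (1 - t)**2, p, q), np.sum((p - q)**2 / p))

# f(t) = 2 (1 - sqrt(t))^2 generates the squared Hellinger distance
# 2 * sum_i (sqrt(p_i) - sqrt(q_i))^2 in the factor-2 convention of the list above.
print(f_divergence(lambda t: 2 * (1 - np.sqrt(t))**2, p, q),
      2 * np.sum((np.sqrt(p) - np.sqrt(q))**2))
```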

Bregman divergences

Main article: Bregman divergence

Bregman divergences correspond to convex functions on convex sets. Given a strictly convex, continuously differentiable function $F$ on a convex set, known as the Bregman generator, the Bregman divergence measures the convexity of $F$: the error of the linear approximation of $F$ from $q$ as an approximation of the value at $p$:

$$D_F(p, q) = F(p) - F(q) - \langle \nabla F(q),\, p - q \rangle.$$

The dual divergence to a Bregman divergence is the divergence generated by the convex conjugate $F^*$ of the Bregman generator of the original divergence. For example, for the squared Euclidean distance, the generator is $x^2$, while for the relative entropy the generator is the negative entropy $x \log x$.
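A minimal sketch of these definitions (hypothetical names; the conjugate of the negative entropy, $F^*(y) = \sum_i e^{y_i - 1}$, and the duality identity $D_F(p, q) = D_{F^*}(\nabla F(q), \nabla F(p))$ are standard facts assumed here, not stated in the article): the generator $\|x\|^2$ reproduces the squared Euclidean distance, and the duality can be checked numerically for the negative-entropy generator.

```python
import numpy as np

def bregman(F, gradF, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return F(p) - F(q) - np.dot(gradF(q), p - q)

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.2, 0.5, 0.3])

# Generator ||x||^2 yields the squared Euclidean distance.
sq, grad_sq = lambda x: np.dot(x, x), lambda x: 2 * x
print(bregman(sq, grad_sq, p, q), np.sum((p - q) ** 2))          # equal

# Negative-entropy generator F(x) = sum_i x_i log x_i yields the relative entropy on
# probability vectors; its convex conjugate is F*(y) = sum_i exp(y_i - 1).
negent, grad_negent = lambda x: np.sum(x * np.log(x)), lambda x: np.log(x) + 1
conj, grad_conj = lambda y: np.sum(np.exp(y - 1)), lambda y: np.exp(y - 1)

# Duality check: D_F(p, q) equals the conjugate-generated divergence at dual coordinates.
print(bregman(negent, grad_negent, p, q),
      bregman(conj, grad_conj, grad_negent(q), grad_negent(p)))  # equal
```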

History


The use of the term "divergence" – both what functions it refers to, and what various statistical distances are called – has varied significantly over time, but by c. 2000 had settled on the current usage within information geometry, notably in the textbook Amari & Nagaoka (2000).[1]

The term "divergence" for a statistical distance was used informally in various contexts from c. 1910 to c. 1940. Its formal use dates at least toBhattacharyya (1943), entitled "On a measure of divergence between two statistical populations defined by their probability distributions", which defined theBhattacharyya distance, andBhattacharyya (1946), entitled "On a Measure of Divergence between Two Multinomial Populations", which defined theBhattacharyya angle. The term was popularized by its use for theKullback–Leibler divergence inKullback & Leibler (1951) and its use in the textbookKullback (1959). The term "divergence" was used generally byAli & Silvey (1966) for statistically distances. Numerous references to earlier uses ofstatistical distances are given inAdhikari & Joshi (1956) andKullback (1959, pp. 6–7, §1.3 Divergence).

Kullback & Leibler (1951) actually used "divergence" to refer to thesymmetrized divergence (this function had already been defined and used byHarold Jeffreys in 1948[9]), referring to the asymmetric function as "the mean information for discrimination ... per observation",[10] whileKullback (1959) referred to the asymmetric function as the "directed divergence".[11]Ali & Silvey (1966) referred generally to such a function as a "coefficient of divergence", and showed that many existing functions could be expressed asf-divergences, referring to Jeffreys' function as "Jeffreys' measure of divergence" (today "Jeffreys divergence"), and Kullback–Leibler's asymmetric function (in each direction) as "Kullback's and Leibler's measures of discriminatory information" (today "Kullback–Leibler divergence").[12]

The information geometry definition of divergence (the subject of this article) was initially referred to by alternative terms, including "quasi-distance" (Amari 1982, p. 369) and "contrast function" (Eguchi 1985), though "divergence" was used in Amari (1985) for the $\alpha$-divergence, and has become standard for the general class.[1][2]

The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality.[13] For example, the term "Bregman distance" is still found, but "Bregman divergence" is now preferred.

Notationally, Kullback & Leibler (1951) denoted their asymmetric function as $I(1:2)$, while Ali & Silvey (1966) denote their functions with a lowercase 'd' as $d(P_1, P_2)$.

Notes

  1. Throughout, we only require differentiability class $C^2$ (continuous with continuous first and second derivatives), since only second derivatives are required. In practice, commonly used statistical manifolds and divergences are infinitely differentiable ("smooth").
  2. A colon is used in Kullback & Leibler (1951, p. 80), where the KL divergence between measures $\mu_1$ and $\mu_2$ is written as $I(1:2)$.

References

  1. Amari & Nagaoka 2000, chapter 3.2.
  2. Amari 2016, p. 10, Definition 1.1.
  3. Amari 2016, p. 10.
  4. Eguchi (1992).
  5. Matumoto (1993).
  6. Nielsen, F.; Nock, R. (2013). "On the Chi square and higher-order Chi distances for approximating f-divergences". IEEE Signal Processing Letters. 21: 10–13. arXiv:1309.3029. doi:10.1109/LSP.2013.2288355. S2CID 4152365.
  7. Csiszar 1991.
  8. Jiao, Jiantao; Courtade, Thomas; No, Albert; Venkat, Kartik; Weissman, Tsachy (December 2014). "Information Measures: the Curious Case of the Binary Alphabet". IEEE Transactions on Information Theory. 60 (12): 7616–7626. arXiv:1404.6810. Bibcode:2014ITIT...60.7616J. doi:10.1109/TIT.2014.2360184. ISSN 0018-9448. S2CID 13108908.
  9. Jeffreys 1948, p. 158.
  10. Kullback & Leibler 1951, p. 80.
  11. Kullback 1959, p. 7.
  12. Ali & Silvey 1966, p. 139.
  13. Kullback 1959, p. 6.
