Dirichlet distribution

From Wikipedia, the free encyclopedia
Probability distribution

Parameters: $K \geq 2$ number of categories (integer); $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_K)$ concentration parameters, where $\alpha_i > 0$
Support: $x_1, \ldots, x_K$ where $x_i \in [0, 1]$ and $\sum_{i=1}^K x_i = 1$ (i.e. a $(K-1)$-simplex)
PDF: $\frac{1}{\mathrm{B}(\boldsymbol{\alpha})} \prod_{i=1}^K x_i^{\alpha_i - 1}$, where $\mathrm{B}(\boldsymbol{\alpha}) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma(\alpha_0)}$ and $\alpha_0 = \sum_{i=1}^K \alpha_i$
Mean: $\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0}$; $\operatorname{E}[\ln X_i] = \psi(\alpha_i) - \psi(\alpha_0)$, where $\psi$ is the digamma function
Mode: $x_i = \frac{\alpha_i - 1}{\alpha_0 - K}$, for $\alpha_i > 1$
Variance: $\operatorname{Var}[X_i] = \frac{\tilde{\alpha}_i (1 - \tilde{\alpha}_i)}{\alpha_0 + 1}$, $\operatorname{Cov}[X_i, X_j] = \frac{\delta_{ij}\,\tilde{\alpha}_i - \tilde{\alpha}_i \tilde{\alpha}_j}{\alpha_0 + 1}$, where $\tilde{\alpha}_i = \frac{\alpha_i}{\alpha_0}$ and $\delta_{ij}$ is the Kronecker delta
Entropy: $H(X) = \log \mathrm{B}(\boldsymbol{\alpha}) + (\alpha_0 - K)\,\psi(\alpha_0) - \sum_{j=1}^K (\alpha_j - 1)\,\psi(\alpha_j)$, with $\alpha_0$ defined as for the variance
Method of moments: $\alpha_i = E[X_i] \left( \frac{E[X_j](1 - E[X_j])}{V[X_j]} - 1 \right)$, where $j$ is any index, possibly $i$ itself

In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted $\operatorname{Dir}(\boldsymbol{\alpha})$, is a family of continuous multivariate probability distributions parameterized by a vector α of positive reals. It is a multivariate generalization of the beta distribution,[1] hence its alternative name of multivariate beta distribution (MBD).[2] Dirichlet distributions are commonly used as prior distributions in Bayesian statistics; in fact, the Dirichlet distribution is the conjugate prior of the categorical distribution and the multinomial distribution.

The infinite-dimensional generalization of the Dirichlet distribution is the Dirichlet process.

Definitions


Probability density function

Figure: how the log of the density function changes when $K = 3$ as we change the vector $\boldsymbol{\alpha}$ from $(0.3, 0.3, 0.3)$ to $(2.0, 2.0, 2.0)$, keeping all the individual $\alpha_i$ equal to each other.

The Dirichlet distribution of order $K \geq 2$ with parameters $\alpha_1, \ldots, \alpha_K > 0$ has a probability density function given by

$$f(x_1, \ldots, x_K; \alpha_1, \ldots, \alpha_K) = \frac{1}{\mathrm{B}(\boldsymbol{\alpha})} \prod_{i=1}^K x_i^{\alpha_i - 1}$$

where $x_i \in [0, 1]$ for all $i \in \{1, \dots, K\}$ and $\sum_{i=1}^K x_i = 1$. That is, the probability density function is defined on the standard $(K-1)$-simplex embedded in $K$-dimensional Euclidean space, $\mathbb{R}^K$.

The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:

$$\mathrm{B}(\boldsymbol{\alpha}) = \frac{\prod\limits_{i=1}^K \Gamma(\alpha_i)}{\Gamma\left(\sum\limits_{i=1}^K \alpha_i\right)}, \qquad \boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_K).$$
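As a numerical illustration (a minimal Python sketch, not part of the article; the function name is illustrative), the density can be evaluated in log space with the standard library's `lgamma`, which avoids overflow for large concentration parameters:

```python
import math

def dirichlet_logpdf(x, alpha):
    """Log-density of Dir(alpha) at a point x on the probability simplex."""
    if abs(sum(x) - 1.0) > 1e-9:
        raise ValueError("x must lie on the probability simplex")
    # log B(alpha) = sum_i log Gamma(alpha_i) - log Gamma(alpha_0)
    log_beta = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    return -log_beta + sum((a - 1.0) * math.log(xi) for a, xi in zip(alpha, x))

# The flat Dirichlet (all alpha_i = 1) has constant density (K-1)! on the simplex:
print(math.exp(dirichlet_logpdf([0.2, 0.3, 0.5], [1.0, 1.0, 1.0])))  # 2.0 (up to rounding)
```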

Support


The support of the Dirichlet distribution is the set of $K$-dimensional vectors $\boldsymbol{x}$ whose entries are real numbers in the interval $[0, 1]$ such that $\|\boldsymbol{x}\|_1 = 1$, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of a $K$-way categorical event. Another way to express this is that the domain of the Dirichlet distribution is itself a set of probability distributions, specifically the set of $K$-dimensional discrete distributions. The technical term for the set of points in the support of a $K$-dimensional Dirichlet distribution is the open standard $(K-1)$-simplex,[3] which is a generalization of a triangle, embedded in the next-higher dimension. For example, with $K = 3$, the support is an equilateral triangle embedded in a downward-angle fashion in three-dimensional space, with vertices at $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$, i.e. touching each of the coordinate axes at a point 1 unit away from the origin.

Special cases


A common special case is the symmetric Dirichlet distribution, where all of the elements making up the parameter vector α have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value α, called the concentration parameter. In terms of α, the density function has the form

$$f(x_1, \dots, x_K; \alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{i=1}^K x_i^{\alpha - 1}.$$

When α = 1,[1] the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard $(K-1)$-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 favor dense, evenly distributed variates, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 favor sparse variates, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of them.

When α = 1/2, the distribution is the same as would be obtained by choosing a point uniformly at random from the surface of a $(K-1)$-dimensional unit hypersphere and squaring each coordinate. The α = 1/2 distribution is the Jeffreys prior for the Dirichlet distribution.

More generally, the parameter vector is sometimes written as the product $\alpha \boldsymbol{n}$ of a (scalar) concentration parameter α and a (vector) base measure $\boldsymbol{n} = (n_1, \dots, n_K)$, where $\boldsymbol{n}$ lies within the $(K-1)$-simplex (i.e. its coordinates $n_i$ sum to one). The concentration parameter in this case is larger by a factor of $K$ than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing Dirichlet processes and is often used in the topic modelling literature.

^ If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameterK, the dimension of the distribution, is the uniform distribution on the(K − 1)-simplex.

Properties


Moments


Let $X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\boldsymbol{\alpha})$, and let

$$\alpha_0 = \sum_{i=1}^K \alpha_i.$$

Then[4][5]

$$\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0}, \qquad \operatorname{Var}[X_i] = \frac{\alpha_i (\alpha_0 - \alpha_i)}{\alpha_0^2 (\alpha_0 + 1)}.$$

Furthermore, if $i \neq j$,

$$\operatorname{Cov}[X_i, X_j] = \frac{-\alpha_i \alpha_j}{\alpha_0^2 (\alpha_0 + 1)}.$$

The covariance matrix is singular.
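These moment formulas are easy to check by simulation. The sketch below (illustrative Python, not from the article) draws Dirichlet variates via the gamma-normalization construction described under § Related distributions, and compares Monte Carlo estimates against the closed forms:

```python
import random

def sample_dirichlet(alpha, rng=random):
    """Draw one Dirichlet variate by normalizing independent Gamma(alpha_i, 1) draws."""
    y = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(y)
    return [v / s for v in y]

random.seed(0)
alpha = [2.0, 3.0, 5.0]
a0 = sum(alpha)
n = 100_000
samples = [sample_dirichlet(alpha) for _ in range(n)]

mean_x0 = sum(s[0] for s in samples) / n
mean_x1 = sum(s[1] for s in samples) / n
cov01 = sum(s[0] * s[1] for s in samples) / n - mean_x0 * mean_x1

print(mean_x0, alpha[0] / a0)                            # both close to 0.2
print(cov01, -alpha[0] * alpha[1] / (a0**2 * (a0 + 1)))  # both close to -0.00545
```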

More generally, moments of Dirichlet-distributed random variables can be expressed in the following way. For $\boldsymbol{t} = (t_1, \dotsc, t_K) \in \mathbb{R}^K$, denote by $\boldsymbol{t}^{\circ i} = (t_1^i, \dotsc, t_K^i)$ its $i$-th Hadamard power. Then,[6]

$$\operatorname{E}\left[(\boldsymbol{t} \cdot \boldsymbol{X})^n\right] = \frac{n! \, \Gamma(\alpha_0)}{\Gamma(\alpha_0 + n)} \sum \frac{t_1^{k_1} \cdots t_K^{k_K}}{k_1! \cdots k_K!} \prod_{i=1}^K \frac{\Gamma(\alpha_i + k_i)}{\Gamma(\alpha_i)} = \frac{n! \, \Gamma(\alpha_0)}{\Gamma(\alpha_0 + n)} Z_n(\boldsymbol{t}^{\circ 1} \cdot \boldsymbol{\alpha}, \ldots, \boldsymbol{t}^{\circ n} \cdot \boldsymbol{\alpha}),$$

where the sum is over non-negative integers $k_1, \ldots, k_K$ with $n = k_1 + \cdots + k_K$, and $Z_n$ is the cycle index polynomial of the symmetric group of degree $n$.

We have the special case $\operatorname{E}[\boldsymbol{t} \cdot \boldsymbol{X}] = \frac{\boldsymbol{t} \cdot \boldsymbol{\alpha}}{\alpha_0}$.

The multivariate analogue $\operatorname{E}\left[(\boldsymbol{t}_1 \cdot \boldsymbol{X})^{n_1} \cdots (\boldsymbol{t}_q \cdot \boldsymbol{X})^{n_q}\right]$ for vectors $\boldsymbol{t}_1, \dotsc, \boldsymbol{t}_q \in \mathbb{R}^K$ can be expressed[7] in terms of a color pattern of the exponents $n_1, \dotsc, n_q$ in the sense of the Pólya enumeration theorem.

Particular cases include the simple computation[8]

$$\operatorname{E}\left[\prod_{i=1}^K X_i^{\beta_i}\right] = \frac{\mathrm{B}(\boldsymbol{\alpha} + \boldsymbol{\beta})}{\mathrm{B}(\boldsymbol{\alpha})} = \frac{\Gamma\left(\sum\limits_{i=1}^K \alpha_i\right)}{\Gamma\left[\sum\limits_{i=1}^K (\alpha_i + \beta_i)\right]} \times \prod_{i=1}^K \frac{\Gamma(\alpha_i + \beta_i)}{\Gamma(\alpha_i)}.$$
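This identity can also be verified numerically. The following sketch (illustrative Python; helper names are not from the article) evaluates the beta-function ratio with `lgamma` and compares it against a Monte Carlo estimate of the expectation:

```python
import math
import random

def sample_dirichlet(alpha, rng=random):
    """Draw one Dirichlet variate by normalizing independent Gamma(alpha_i, 1) draws."""
    y = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(y)
    return [v / s for v in y]

def log_beta(alpha):
    """Log of the multivariate beta function B(alpha)."""
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

random.seed(1)
alpha = [1.5, 2.0, 2.5]
beta = [1.0, 2.0, 0.0]

# Closed form: E[prod_i X_i^beta_i] = B(alpha + beta) / B(alpha)
exact = math.exp(log_beta([a + b for a, b in zip(alpha, beta)]) - log_beta(alpha))

# Monte Carlo estimate of the same expectation
n = 100_000
mc = sum(
    math.prod(x ** b for x, b in zip(sample_dirichlet(alpha), beta)) for _ in range(n)
) / n
print(exact, mc)  # the two agree to within Monte Carlo error
```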

Mode


The mode of the distribution is[9] the vector $(x_1, \ldots, x_K)$ with

$$x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \qquad \alpha_i > 1.$$

Marginal distributions


The marginal distributions are beta distributions:[10]

$$X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i).$$

Also see § Related distributions below.

Conjugate to categorical or multinomial


The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic discrete probability distribution with a given number of possible outcomes) and the multinomial distribution (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we can update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.

Formally, this can be expressed as follows. Given a model

$$\begin{aligned} \boldsymbol{\alpha} &= (\alpha_1, \ldots, \alpha_K) && \text{concentration hyperparameter} \\ \mathbf{p} \mid \boldsymbol{\alpha} &= (p_1, \ldots, p_K) \sim \operatorname{Dir}(K, \boldsymbol{\alpha}) \\ \mathbb{X} \mid \mathbf{p} &= (\mathbf{x}_1, \ldots, \mathbf{x}_N) \sim \operatorname{Cat}(K, \mathbf{p}) \end{aligned}$$

then the following holds:

$$\begin{aligned} \mathbf{c} &= (c_1, \ldots, c_K) && \text{where } c_i \text{ is the number of occurrences of category } i \\ \mathbf{p} \mid \mathbb{X}, \boldsymbol{\alpha} &\sim \operatorname{Dir}(K, \mathbf{c} + \boldsymbol{\alpha}) = \operatorname{Dir}(K, c_1 + \alpha_1, \ldots, c_K + \alpha_K) \end{aligned}$$

This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of N samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.
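The pseudocount update is a one-liner in practice. The sketch below (illustrative Python, not from the article) adds observed category counts to the prior concentration vector:

```python
from collections import Counter

def dirichlet_posterior(alpha, observations, num_categories):
    """Posterior concentration vector after categorical observations: alpha_k + c_k."""
    counts = Counter(observations)
    return [alpha[k] + counts.get(k, 0) for k in range(num_categories)]

# Symmetric Dir(1,1,1) prior over 3 categories, then five observed draws:
prior = [1.0, 1.0, 1.0]
data = [0, 2, 2, 1, 2]
post = dirichlet_posterior(prior, data, 3)
print(post)  # [2.0, 2.0, 4.0]; the posterior mean is (0.25, 0.25, 0.5)
```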

In Bayesian mixture models and other hierarchical Bayesian models with mixture components, Dirichlet distributions are commonly used as the prior distributions for the categorical variables appearing in the models. See the section on applications below for more information.

Relation to Dirichlet-multinomial distribution


In a model where a Dirichlet prior distribution is placed over a set of categorical-valued observations, the marginal joint distribution of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution. This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.

Entropy


If $X$ is a $\operatorname{Dir}(\boldsymbol{\alpha})$ random variable, the differential entropy of $X$ (in nat units) is[11]

$$h(\boldsymbol{X}) = \operatorname{E}[-\ln f(\boldsymbol{X})] = \ln \mathrm{B}(\boldsymbol{\alpha}) + (\alpha_0 - K)\,\psi(\alpha_0) - \sum_{j=1}^K (\alpha_j - 1)\,\psi(\alpha_j)$$

where $\psi$ is the digamma function.

The following formula for $\operatorname{E}[\ln(X_i)]$ can be used to derive the differential entropy above. Since the functions $\ln(X_i)$ are the sufficient statistics of the Dirichlet distribution, the exponential family differential identities can be used to get an analytic expression for the expectation of $\ln(X_i)$ (see equation (2.62) in[12]) and its associated covariance matrix:

$$\operatorname{E}[\ln(X_i)] = \psi(\alpha_i) - \psi(\alpha_0)$$

and

$$\operatorname{Cov}[\ln(X_i), \ln(X_j)] = \psi'(\alpha_i)\,\delta_{ij} - \psi'(\alpha_0)$$

where $\psi$ is the digamma function, $\psi'$ is the trigamma function, and $\delta_{ij}$ is the Kronecker delta.

The spectrum of Rényi information for values other than $\lambda = 1$ is given by[13]

$$F_R(\lambda) = (1 - \lambda)^{-1} \left( -\lambda \log \mathrm{B}(\boldsymbol{\alpha}) + \sum_{i=1}^K \log \Gamma(\lambda(\alpha_i - 1) + 1) - \log \Gamma(\lambda(\alpha_0 - K) + K) \right)$$

and the information entropy is the limit as $\lambda$ goes to 1.

Another related interesting measure is the entropy of a discrete categorical (one-of-$K$ binary) vector $\boldsymbol{Z}$ with probability-mass distribution $\boldsymbol{X}$, i.e., $P(Z_i = 1, Z_{j \neq i} = 0 \mid \boldsymbol{X}) = X_i$. The conditional information entropy of $\boldsymbol{Z}$, given $\boldsymbol{X}$, is

$$S(\boldsymbol{X}) = H(\boldsymbol{Z} \mid \boldsymbol{X}) = \operatorname{E}_{\boldsymbol{Z}}[-\log P(\boldsymbol{Z} \mid \boldsymbol{X})] = \sum_{i=1}^K -X_i \log X_i$$

This function of $\boldsymbol{X}$ is a scalar random variable. If $\boldsymbol{X}$ has a symmetric Dirichlet distribution with all $\alpha_i = \alpha$, the expected value of the entropy (in nat units) is[14]

$$\operatorname{E}[S(\boldsymbol{X})] = \sum_{i=1}^K \operatorname{E}[-X_i \ln X_i] = \psi(K\alpha + 1) - \psi(\alpha + 1)$$

Kullback–Leibler divergence


The Kullback–Leibler (KL) divergence between two Dirichlet distributions, $\operatorname{Dir}(\boldsymbol{\alpha})$ and $\operatorname{Dir}(\boldsymbol{\beta})$, over the same simplex is:[15]

$$D_{\mathrm{KL}}\big(\operatorname{Dir}(\boldsymbol{\alpha}) \,\|\, \operatorname{Dir}(\boldsymbol{\beta})\big) = \log \frac{\Gamma\left(\sum_{i=1}^K \alpha_i\right)}{\Gamma\left(\sum_{i=1}^K \beta_i\right)} + \sum_{i=1}^K \left[ \log \frac{\Gamma(\beta_i)}{\Gamma(\alpha_i)} + (\alpha_i - \beta_i) \left( \psi(\alpha_i) - \psi\left(\sum_{j=1}^K \alpha_j\right) \right) \right]$$
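The formula translates directly into code. The sketch below (illustrative Python) implements it with `lgamma`; since the standard library has no digamma, a small helper approximates $\psi$ via the recurrence $\psi(x) = \psi(x+1) - 1/x$ and an asymptotic series:

```python
import math

def digamma(x):
    """Digamma function via recurrence plus an asymptotic series;
    accurate enough for this illustration (x > 0)."""
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def dirichlet_kl(alpha, beta):
    """KL divergence D(Dir(alpha) || Dir(beta)), following the formula above."""
    a0, b0 = sum(alpha), sum(beta)
    kl = math.lgamma(a0) - math.lgamma(b0)
    for a, b in zip(alpha, beta):
        kl += math.lgamma(b) - math.lgamma(a) + (a - b) * (digamma(a) - digamma(a0))
    return kl

print(dirichlet_kl([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0 for identical distributions
print(dirichlet_kl([2.0, 2.0], [1.0, 1.0]))            # log 6 - 5/3, about 0.125
```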

Aggregation


If

$$X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K)$$

then, if the random variables with subscripts $i$ and $j$ are dropped from the vector and replaced by their sum,

$$X' = (X_1, \ldots, X_i + X_j, \ldots, X_K) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_i + \alpha_j, \ldots, \alpha_K).$$

This aggregation property may be used to derive the marginal distribution of $X_i$ mentioned above.
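The aggregation property can be checked by simulation. In this sketch (illustrative Python, not from the article), the sum of the first two coordinates of Dir(1, 2, 3) should match the first coordinate of Dir(3, 3), i.e. a Beta(3, 3) marginal with mean 1/2 and variance 1/28:

```python
import random

def sample_dirichlet(alpha, rng=random):
    """Draw one Dirichlet variate by normalizing independent Gamma(alpha_i, 1) draws."""
    y = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(y)
    return [v / s for v in y]

random.seed(2)
n = 100_000
agg = [s[0] + s[1] for s in (sample_dirichlet([1.0, 2.0, 3.0]) for _ in range(n))]
mean = sum(agg) / n
var = sum((v - mean) ** 2 for v in agg) / n
print(mean, var)  # close to 0.5 and 1/28 = 0.0357
```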

Neutrality

Main article: Neutral vector

If $X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\boldsymbol{\alpha})$, then the vector $X$ is said to be neutral[16] in the sense that $X_K$ is independent of $X^{(-K)}$[3] where

$$X^{(-K)} = \left( \frac{X_1}{1 - X_K}, \frac{X_2}{1 - X_K}, \ldots, \frac{X_{K-1}}{1 - X_K} \right),$$

and similarly for removing any of $X_2, \ldots, X_{K-1}$. Observe that any permutation of $X$ is also neutral (a property not possessed by samples drawn from a generalized Dirichlet distribution).[17]

Combining this with the property of aggregation, it follows that $X_j + \cdots + X_K$ is independent of $\left( \frac{X_1}{X_1 + \cdots + X_{j-1}}, \frac{X_2}{X_1 + \cdots + X_{j-1}}, \ldots, \frac{X_{j-1}}{X_1 + \cdots + X_{j-1}} \right)$. In fact it is true, further, for the Dirichlet distribution, that for $3 \leq j \leq K - 1$, the pair $\left( X_1 + \cdots + X_{j-1},\; X_j + \cdots + X_K \right)$ and the two vectors $\left( \frac{X_1}{X_1 + \cdots + X_{j-1}}, \ldots, \frac{X_{j-1}}{X_1 + \cdots + X_{j-1}} \right)$ and $\left( \frac{X_j}{X_j + \cdots + X_K}, \ldots, \frac{X_K}{X_j + \cdots + X_K} \right)$, viewed as a triple of normalised random vectors, are mutually independent. The analogous result is true for partition of the indices $\{1, 2, \ldots, K\}$ into any other pair of non-singleton subsets.

Characteristic function


The characteristic function of the Dirichlet distribution is a confluent form of the Lauricella hypergeometric series. It is given by Phillips as[18]

$$CF(s_1, \ldots, s_{K-1}) = \operatorname{E}\left( e^{i(s_1 X_1 + \cdots + s_{K-1} X_{K-1})} \right) = \Psi^{[K-1]}(\alpha_1, \ldots, \alpha_{K-1}; \alpha_0; is_1, \ldots, is_{K-1})$$

where

$$\Psi^{[m]}(a_1, \ldots, a_m; c; z_1, \ldots, z_m) = \sum \frac{(a_1)_{k_1} \cdots (a_m)_{k_m} \, z_1^{k_1} \cdots z_m^{k_m}}{(c)_k \, k_1! \cdots k_m!}.$$

The sum is over non-negative integers $k_1, \ldots, k_m$ with $k = k_1 + \cdots + k_m$. Phillips goes on to state that this form is "inconvenient for numerical calculation" and gives an alternative in terms of a complex path integral:

$$\Psi^{[m]} = \frac{\Gamma(c)}{2\pi i} \int_L e^t \, t^{a_1 + \cdots + a_m - c} \prod_{j=1}^m (t - z_j)^{-a_j} \, dt$$

where $L$ denotes any path in the complex plane originating at $-\infty$, encircling in the positive direction all the singularities of the integrand and returning to $-\infty$.

Inequality


The probability density function $f(x_1, \ldots, x_{K-1}; \alpha_1, \ldots, \alpha_K)$ plays a key role in a multifunctional inequality which implies various bounds for the Dirichlet distribution.[19]

Another inequality relates the moment-generating function of the Dirichlet distribution to the convex conjugate of the scaled reversed Kullback–Leibler divergence:[20]

$$\log \operatorname{E}\left( \exp \sum_{i=1}^K s_i X_i \right) \leq \sup_p \sum_{i=1}^K \left( p_i s_i - \alpha_i \log\left( \frac{\alpha_i}{\alpha_0 p_i} \right) \right),$$

where the supremum is taken over $p$ spanning the $(K-1)$-simplex.

Related distributions


When $\boldsymbol{X} = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K)$, the marginal distribution of each component is $X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i)$, a beta distribution. In particular, if $K = 2$ then $X_1 \sim \operatorname{Beta}(\alpha_1, \alpha_2)$ is equivalent to $\boldsymbol{X} = (X_1, 1 - X_1) \sim \operatorname{Dir}(\alpha_1, \alpha_2)$.

For $K$ independently distributed gamma distributions:

$$Y_1 \sim \operatorname{Gamma}(\alpha_1, \theta), \ldots, Y_K \sim \operatorname{Gamma}(\alpha_K, \theta)$$

we have:[21]: 402 

$$V = \sum_{i=1}^K Y_i \sim \operatorname{Gamma}(\alpha_0, \theta), \qquad X = (X_1, \ldots, X_K) = \left( \frac{Y_1}{V}, \ldots, \frac{Y_K}{V} \right) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K).$$

Although the $X_i$ are not independent from one another, they can be seen to be generated from a set of $K$ independent gamma random variables.[21]: 594  Unfortunately, since the sum $V$ is lost in forming $X$ (in fact it can be shown that $V$ is stochastically independent of $X$), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution.

Conjugate prior of the Dirichlet distribution


Because the Dirichlet distribution is an exponential family distribution, it has a conjugate prior. The conjugate prior is of the form:[22]

$$\operatorname{CD}(\boldsymbol{\alpha} \mid \boldsymbol{v}, \eta) \propto \left( \frac{1}{\mathrm{B}(\boldsymbol{\alpha})} \right)^{\eta} \exp\left( -\sum_k v_k \alpha_k \right).$$

Here $\boldsymbol{v}$ is a $K$-dimensional real vector and $\eta$ is a scalar parameter. The domain of $(\boldsymbol{v}, \eta)$ is restricted to the set of parameters for which the above unnormalized density function can be normalized. The (necessary and sufficient) condition is:[23]

$$\forall k \;\; v_k > 0 \quad \text{and} \quad \eta > -1 \quad \text{and} \quad \left( \eta \leq 0 \quad \text{or} \quad \sum_k \exp\left( -\frac{v_k}{\eta} \right) < 1 \right)$$

The conjugation property can be expressed as

if [prior: $\boldsymbol{\alpha} \sim \operatorname{CD}(\cdot \mid \boldsymbol{v}, \eta)$] and [observation: $\boldsymbol{x} \mid \boldsymbol{\alpha} \sim \operatorname{Dirichlet}(\cdot \mid \boldsymbol{\alpha})$] then [posterior: $\boldsymbol{\alpha} \mid \boldsymbol{x} \sim \operatorname{CD}(\cdot \mid \boldsymbol{v} - \log \boldsymbol{x}, \eta + 1)$].

In the published literature there is no practical algorithm to efficiently generate samples from $\operatorname{CD}(\boldsymbol{\alpha} \mid \boldsymbol{v}, \eta)$.

Generalization by scaling and translation of log-probabilities


As noted above, Dirichlet variates can be generated by normalizing independent gamma variates. If instead one normalizes generalized gamma variates, one obtains variates from the simplicial generalized beta distribution (SGB).[24] On the other hand, SGB variates can also be obtained by applying the softmax function to scaled and translated logarithms of Dirichlet variates. Specifically, let $\mathbf{x} = (x_1, \ldots, x_K) \sim \operatorname{Dir}(\boldsymbol{\alpha})$ and let $\mathbf{y} = (y_1, \ldots, y_K)$, where, applying the logarithm elementwise,

$$\mathbf{y} = \operatorname{softmax}(a^{-1} \log \mathbf{x} + \log \mathbf{b}) \iff \mathbf{x} = \operatorname{softmax}(a \log \mathbf{y} - a \log \mathbf{b})$$

or

$$y_k = \frac{b_k x_k^{1/a}}{\sum_{i=1}^K b_i x_i^{1/a}} \iff x_k = \frac{(y_k / b_k)^a}{\sum_{i=1}^K (y_i / b_i)^a}$$

where $a > 0$ and $\mathbf{b} = (b_1, \ldots, b_K)$, with all $b_k > 0$; then $\mathbf{y} \sim \operatorname{SGB}(a, \mathbf{b}, \boldsymbol{\alpha})$. The SGB density function can be derived by noting that the transformation $\mathbf{x} \mapsto \mathbf{y}$, which is a bijection from the simplex to itself, induces a differential volume change factor[25] of

$$R(\mathbf{y}, a, \mathbf{b}) = a^{1-K} \prod_{k=1}^K \frac{y_k}{x_k}$$

where it is understood that $\mathbf{x}$ is recovered as a function of $\mathbf{y}$, as shown above. This facilitates writing the SGB density in terms of the Dirichlet density, as

$$f_{\text{SGB}}(\mathbf{y} \mid a, \mathbf{b}, \boldsymbol{\alpha}) = \frac{f_{\text{Dir}}(\mathbf{x} \mid \boldsymbol{\alpha})}{R(\mathbf{y}, a, \mathbf{b})}$$

This generalization of the Dirichlet density, via a change of variables, is closely related to a normalizing flow, although the differential volume change is not given by the Jacobian determinant of $\mathbf{x} \mapsto \mathbf{y}: \mathbb{R}^K \to \mathbb{R}^K$, which is zero, but by the Jacobian determinant of $(x_1, \ldots, x_{K-1}) \mapsto (y_1, \ldots, y_{K-1})$, as explained in more detail at Normalizing flow § Simplex flow.

For further insight into the interaction between the Dirichlet shape parameters $\boldsymbol{\alpha}$ and the transformation parameters $a, \mathbf{b}$, it may be helpful to consider the logarithmic marginals, $\log \frac{x_k}{1 - x_k}$, which follow the logistic-beta distribution, $B_\sigma(\alpha_k, \sum_{i \neq k} \alpha_i)$. See in particular the sections on tail behaviour and generalization with location and scale parameters.

Application


When $b_1 = b_2 = \cdots = b_K$, the transformation simplifies to $\mathbf{x} \mapsto \operatorname{softmax}(a^{-1} \log \mathbf{x})$, which is known as temperature scaling in machine learning, where it is used as a calibration transform for multiclass probabilistic classifiers.[26] Traditionally the temperature parameter ($a$ here) is learnt discriminatively by minimizing multiclass cross-entropy over a supervised calibration data set with known class labels. But the above PDF transformation mechanism can also be used to facilitate the design of generatively trained calibration models with a temperature scaling component.
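The temperature-scaling transform itself is simple to implement. The sketch below (illustrative Python, not from the article; here `temperature` plays the role of $a$) applies $\operatorname{softmax}(a^{-1} \log \mathbf{x})$ to a probability vector:

```python
import math

def temperature_scale(probs, temperature):
    """Temperature-scale a probability vector: softmax(log(p) / T).

    T > 1 flattens the distribution toward uniform; T < 1 sharpens it.
    """
    logits = [math.log(p) / temperature for p in probs]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

p = [0.7, 0.2, 0.1]
print(temperature_scale(p, 1.0))   # T = 1 leaves the vector unchanged
print(temperature_scale(p, 10.0))  # nearly uniform
print(temperature_scale(p, 0.5))   # sharper: even more mass on the first class
```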

Occurrence and applications


Bayesian models


Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models. (In many fields, such as natural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when Bernoulli distributions and binomial distributions are commonly conflated.)

Inference over hierarchical Bayesian models is often done using Gibbs sampling, and in such a case, instances of the Dirichlet distribution are typically marginalized out of the model by integrating out the Dirichlet random variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a Dirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (the concentration parameters). One of the reasons for doing this is that Gibbs sampling of the Dirichlet-multinomial distribution is extremely easy; see that article for more information.


Intuitive interpretations of the parameters


The concentration parameter


Dirichlet distributions are very often used as prior distributions in Bayesian inference. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value α to which all parameters are set is called the concentration parameter. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of a sample is likely to be: with a value much less than 1, the mass will be highly concentrated in a few components, and all the rest will have almost no mass, while with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter for further discussion.

String cutting


One example use of the Dirichlet distribution is cutting strings (each of initial length 1.0) into K pieces with different lengths, where each piece has a designated average length but some variation in the relative sizes of the pieces is allowed. Recall that α0=i=1Kαi.{\displaystyle \alpha _{0}=\sum _{i=1}^{K}\alpha _{i}.} The αi/α0{\displaystyle \alpha _{i}/\alpha _{0}} values specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely with α0{\displaystyle \alpha _{0}}.

Example of Dirichlet(1/2, 1/3, 1/6) distribution
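The string-cutting setup can be sketched as follows (the helper name cut_string is illustrative, and uses the gamma-ratio sampling method given below): choosing αi = α0·mi for target mean lengths mi makes αi/α0 = mi, and increasing α0 shrinks the variance around those means.

```python
import random
import statistics

def cut_string(mean_lengths, alpha0):
    # alpha_i = alpha0 * mean_i, so alpha_i / alpha_0 is the target mean length.
    ys = [random.gammavariate(alpha0 * m, 1) for m in mean_lengths]
    total = sum(ys)
    return [y / total for y in ys]

random.seed(1)
means = [1/2, 1/3, 1/6]  # as in the Dirichlet(1/2, 1/3, 1/6) example, where alpha_0 = 1
first_loose = [cut_string(means, 1.0)[0] for _ in range(2000)]    # alpha_0 = 1
first_tight = [cut_string(means, 100.0)[0] for _ in range(2000)]  # alpha_0 = 100

var_loose = statistics.pvariance(first_loose)  # about (1/2)(1/2)/(1 + 1) = 0.125
var_tight = statistics.pvariance(first_tight)  # about 0.25 / 101, much smaller
```

Both runs keep the first piece's average length near 1/2, but the larger α0 concentrates the pieces much more tightly around their designated means.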

Pólya's urn


Consider an urn containing balls of K different colors. Initially, the urn contains α1 balls of color 1, α2 balls of color 2, and so on. Now perform N draws from the urn, where after each draw, the ball is placed back into the urn together with an additional ball of the same color. In the limit as N approaches infinity, the proportions of differently colored balls in the urn will be distributed as Dir(α1, ..., αK).[27]

For a formal proof, note that the proportions of the different colored balls form a bounded [0, 1]K-valued martingale; hence, by the martingale convergence theorem, these proportions converge almost surely and in mean to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed moments agree.

Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
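A minimal simulation of the urn scheme (the helper name polya_urn_proportions is illustrative): with initial counts (2, 3, 5), the limiting proportion of color 1 has mean α1/α0 = 0.2, and averaging over many independent urns recovers this value approximately.

```python
import random

def polya_urn_proportions(initial_counts, n_draws):
    # counts[i] = number of balls of color i; each drawn ball is returned
    # together with one extra ball of the same color.
    counts = list(initial_counts)
    for _ in range(n_draws):
        r = random.uniform(0, sum(counts))
        acc = 0.0
        for i, c in enumerate(counts):
            acc += c
            if r <= acc:
                counts[i] += 1
                break
    total = sum(counts)
    return [c / total for c in counts]

random.seed(2)
alphas = [2, 3, 5]  # alpha_0 = 10
first = [polya_urn_proportions(alphas, 500)[0] for _ in range(300)]
mean_first = sum(first) / len(first)  # close to alpha_1 / alpha_0 = 0.2
```

Any single urn converges to a random limit; it is the distribution of that limit across urns that is Dir(2, 3, 5).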


Random variate generation

Further information: Non-uniform random variate generation

From gamma distribution


With a source of gamma-distributed random variates, one can easily sample a random vector x=(x1,…,xK){\displaystyle x=(x_{1},\ldots ,x_{K})} from the K-dimensional Dirichlet distribution with parameters (α1,…,αK){\displaystyle (\alpha _{1},\ldots ,\alpha _{K})}. First, draw K independent random samples y1,…,yK{\displaystyle y_{1},\ldots ,y_{K}} from gamma distributions, each with density

Gamma(αi,1)=yiαi1eyiΓ(αi),{\displaystyle \operatorname {Gamma} (\alpha _{i},1)={\frac {y_{i}^{\alpha _{i}-1}\;e^{-y_{i}}}{\Gamma (\alpha _{i})}},\!}

and then set

xi=yij=1Kyj.{\displaystyle x_{i}={\frac {y_{i}}{\sum _{j=1}^{K}y_{j}}}.}

Proof

The joint distribution of the independently sampled gamma variates, {yi}{\displaystyle \{y_{i}\}}, is given by the product:

eiyii=1Kyiαi1Γ(αi){\displaystyle e^{-\sum _{i}y_{i}}\prod _{i=1}^{K}{\frac {y_{i}^{\alpha _{i}-1}}{\Gamma (\alpha _{i})}}}

Next, one uses a change of variables, parametrising {yi}{\displaystyle \{y_{i}\}} in terms of y1,y2,…,yK−1{\displaystyle y_{1},y_{2},\ldots ,y_{K-1}} and ∑i=1Kyi{\displaystyle \sum _{i=1}^{K}y_{i}}, that is, a change of variables y→x{\displaystyle y\to x} such that x¯=∑i=1Kyi, x1=y1/x¯, x2=y2/x¯, …, xK−1=yK−1/x¯{\displaystyle {\bar {x}}=\textstyle \sum _{i=1}^{K}y_{i},x_{1}={\frac {y_{1}}{\bar {x}}},x_{2}={\frac {y_{2}}{\bar {x}}},\ldots ,x_{K-1}={\frac {y_{K-1}}{\bar {x}}}}. Each of the new variables satisfies 0≤x1,x2,…,xK−1≤1{\displaystyle 0\leq x_{1},x_{2},\ldots ,x_{K-1}\leq 1}, and likewise 0≤∑i=1K−1xi≤1{\displaystyle 0\leq \textstyle \sum _{i=1}^{K-1}x_{i}\leq 1}. One must then use the change of variables formula P(x)=P(y(x))|∂y/∂x|{\displaystyle P(x)=P(y(x)){\bigg |}{\frac {\partial y}{\partial x}}{\bigg |}}, in which |∂y/∂x|{\displaystyle {\bigg |}{\frac {\partial y}{\partial x}}{\bigg |}} is the transformation Jacobian. Writing y explicitly as a function of x, one obtains y1=x¯x1, y2=x¯x2, …, yK−1=x¯xK−1, yK=x¯(1−∑i=1K−1xi){\displaystyle y_{1}={\bar {x}}x_{1},y_{2}={\bar {x}}x_{2},\ldots ,y_{K-1}={\bar {x}}x_{K-1},y_{K}={\bar {x}}(1-\textstyle \sum _{i=1}^{K-1}x_{i})}. The Jacobian now looks like {\displaystyle {\begin{vmatrix}{\bar {x}}&0&\ldots &x_{1}\\0&{\bar {x}}&\ldots &x_{2}\\\vdots &\vdots &\ddots &\vdots \\-{\bar {x}}&-{\bar {x}}&\ldots &1-\sum _{i=1}^{K-1}x_{i}\end{vmatrix}}}

The determinant can be evaluated by noting that it remains unchanged if a multiple of one row is added to another row. Adding each of the first K − 1 rows to the bottom row gives

|x¯0x10x¯x2001|{\displaystyle {\begin{vmatrix}{\bar {x}}&0&\ldots &x_{1}\\0&{\bar {x}}&\ldots &x_{2}\\\vdots &\vdots &\ddots &\vdots \\0&0&\ldots &1\end{vmatrix}}}

which can be expanded about the bottom row to obtain the determinant value x¯K−1{\displaystyle {\bar {x}}^{K-1}}. Substituting for x in the joint pdf and including the Jacobian determinant, one obtains:

{\displaystyle {\begin{aligned}&{\frac {\left[\prod _{i=1}^{K-1}({\bar {x}}x_{i})^{\alpha _{i}-1}\right]\left[{\bar {x}}(1-\sum _{i=1}^{K-1}x_{i})\right]^{\alpha _{K}-1}}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}{\bar {x}}^{K-1}e^{-{\bar {x}}}\\=&{\frac {\Gamma ({\bar {\alpha }})\left[\prod _{i=1}^{K-1}(x_{i})^{\alpha _{i}-1}\right]\left[1-\sum _{i=1}^{K-1}x_{i}\right]^{\alpha _{K}-1}}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}\times {\frac {{\bar {x}}^{{\bar {\alpha }}-1}e^{-{\bar {x}}}}{\Gamma ({\bar {\alpha }})}}\end{aligned}}} where α¯=∑i=1Kαi{\displaystyle {\bar {\alpha }}=\textstyle \sum _{i=1}^{K}\alpha _{i}}. The right-hand side can be recognized as the product of a Dirichlet pdf for the xi{\displaystyle x_{i}} and a gamma pdf for x¯{\displaystyle {\bar {x}}}. The product form shows that the Dirichlet and gamma variables are independent, so the latter can be integrated out by simply omitting it, to obtain: x1,x2,…,xK−1{\displaystyle x_{1},x_{2},\ldots ,x_{K-1}\sim {\frac {(1-\sum _{i=1}^{K-1}x_{i})^{\alpha _{K}-1}\prod _{i=1}^{K-1}x_{i}^{\alpha _{i}-1}}{B({\boldsymbol {\alpha }})}}}

which is equivalent to

{\displaystyle {\frac {\prod _{i=1}^{K}x_{i}^{\alpha _{i}-1}}{B({\boldsymbol {\alpha }})}}} with support ∑i=1Kxi=1{\displaystyle \sum _{i=1}^{K}x_{i}=1}.

Below is example Python code to draw the sample:

params = [a1, a2, ..., ak]
sample = [random.gammavariate(a, 1) for a in params]
sample = [v / sum(sample) for v in sample]

This formulation is correct regardless of how the gamma distributions are parameterized (shape/scale vs. shape/rate), because the two parameterizations coincide when the scale (or rate) parameter equals 1.

From marginal beta distributions


A less efficient algorithm[28] relies on the univariate marginal and conditional distributions being beta, and proceeds as follows. Simulate x1{\displaystyle x_{1}} from

Beta(α1,i=2Kαi){\displaystyle {\textrm {Beta}}\left(\alpha _{1},\sum _{i=2}^{K}\alpha _{i}\right)}

Then simulate x2,…,xK−1{\displaystyle x_{2},\ldots ,x_{K-1}} in order, as follows. For j=2,…,K−1{\displaystyle j=2,\ldots ,K-1}, simulate ϕj{\displaystyle \phi _{j}} from

Beta(αj,i=j+1Kαi),{\displaystyle {\textrm {Beta}}\left(\alpha _{j},\sum _{i=j+1}^{K}\alpha _{i}\right),}

and let

xj=(1i=1j1xi)ϕj.{\displaystyle x_{j}=\left(1-\sum _{i=1}^{j-1}x_{i}\right)\phi _{j}.}

Finally, set

xK=1i=1K1xi.{\displaystyle x_{K}=1-\sum _{i=1}^{K-1}x_{i}.}

This iterative procedure corresponds closely to the "string cutting" intuition described above.

Below is example Python code to draw the sample:

params = [a1, a2, ..., ak]
xs = [random.betavariate(params[0], sum(params[1:]))]
for j in range(1, len(params) - 1):
    phi = random.betavariate(params[j], sum(params[j + 1:]))
    xs.append((1 - sum(xs)) * phi)
xs.append(1 - sum(xs))

When each alpha is 1


When α1 = ... = αK = 1, a sample from the distribution can be found by randomly drawing a set of K − 1 values independently and uniformly from the interval [0, 1], adding the values 0 and 1 to the set to make it have K + 1 values, sorting the set, and computing the difference between each pair of order-adjacent values, to give x1, ..., xK.
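A sketch of this uniform-spacings procedure (the helper name dirichlet_flat is illustrative):

```python
import random

def dirichlet_flat(K):
    # Draw K - 1 uniform cut points, sort them, and return the gaps
    # between adjacent points of {0, cuts..., 1}.
    cuts = sorted(random.random() for _ in range(K - 1))
    points = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(points, points[1:])]

random.seed(3)
x = dirichlet_flat(4)
first = [dirichlet_flat(4)[0] for _ in range(2000)]
mean_first = sum(first) / len(first)  # E[X_1] = 1/4 under Dirichlet(1, 1, 1, 1)
```

The gaps are non-negative and sum to 1 by construction, and each component has mean 1/K.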

When each alpha is 1/2 and relationship to the hypersphere


When α1 = ... = αK = 1/2, a sample from the distribution can be found by randomly drawing K values independently from the standard normal distribution, squaring these values, and normalizing them by dividing by their sum, to give x1, ..., xK.

A point (x1, ..., xK) can be drawn uniformly at random from the (K−1)-dimensional unit hypersphere (which is the surface of a K-dimensional hyperball) via a similar procedure: randomly draw K values independently from the standard normal distribution and normalize these coordinate values by dividing each by the square root of the sum of their squares.
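Both constructions can be sketched together (function names are illustrative); squaring the coordinates of a uniform point on the sphere yields exactly the Dirichlet(1/2, ..., 1/2) sample described above:

```python
import math
import random

def dirichlet_half(K):
    # Square K standard normal draws and normalize by their sum.
    sq = [random.gauss(0, 1) ** 2 for _ in range(K)]
    total = sum(sq)
    return [s / total for s in sq]

def uniform_on_sphere(K):
    # Normalize K standard normal draws by their Euclidean norm.
    g = [random.gauss(0, 1) for _ in range(K)]
    norm = math.sqrt(sum(v * v for v in g))
    return [v / norm for v in g]

random.seed(4)
x = dirichlet_half(3)
p = uniform_on_sphere(3)
x_from_p = [v * v for v in p]  # also a Dirichlet(1/2, 1/2, 1/2) sample
```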


References

  1. ^S. Kotz; N. Balakrishnan; N. L. Johnson (2000).Continuous Multivariate Distributions. Volume 1: Models and Applications. New York: Wiley.ISBN 978-0-471-18387-7. (Chapter 49: Dirichlet and Inverted Dirichlet Distributions)
  2. ^Olkin, Ingram; Rubin, Herman (1964)."Multivariate Beta Distributions and Independence Properties of the Wishart Distribution".The Annals of Mathematical Statistics.35 (1):261–269.doi:10.1214/aoms/1177703748.JSTOR 2238036.
  3. ^abBela A. Frigyik; Amol Kapila; Maya R. Gupta (2010)."Introduction to the Dirichlet Distribution and Related Processes"(PDF). University of Washington Department of Electrical Engineering. Archived fromthe original(Technical Report UWEETR-2010-006) on 2015-02-19.
  4. ^Eq. (49.9) on page 488 ofKotz, Balakrishnan & Johnson (2000). Continuous Multivariate Distributions. Volume 1: Models and Applications. New York: Wiley.
  5. ^Balakrishnan, N.; Nevzorov, V. B. (2005). "Chapter 27. Dirichlet Distribution". A Primer on Statistical Distributions. Hoboken, NJ: John Wiley & Sons, Inc. p. 274.ISBN 978-0-471-42798-8.
  6. ^Dello Schiavo, Lorenzo (2019)."Characteristic functionals of Dirichlet measures".Electron. J. Probab.24:1–38.arXiv:1810.09790.doi:10.1214/19-EJP371.
  7. ^Dello Schiavo, Lorenzo; Quattrocchi, Filippo (2023). "Multivariate Dirichlet Moments and a Polychromatic Ewens Sampling Formula".arXiv:2309.11292 [math.PR].
  8. ^Hoffmann, Till."Moments of the Dirichlet distribution". Archived fromthe original on 2016-02-14. Retrieved14 February 2016.
  9. ^Christopher M. Bishop (17 August 2006).Pattern Recognition and Machine Learning. Springer.ISBN 978-0-387-31073-2.
  10. ^Farrow, Malcolm."MAS3301 Bayesian Statistics"(PDF).Newcastle University. Retrieved10 April 2013.
  11. ^Lin, Jiayu (2016).On The Dirichlet Distribution(PDF). Kingston, Canada: Queen's University. pp. § 2.4.9.
  12. ^Nguyen, Duy (15 August 2023)."AN IN DEPTH INTRODUCTION TO VARIATIONAL BAYES NOTE".SSRN 4541076. Retrieved15 August 2023.
  13. ^Song, Kai-Sheng (2001). "Rényi information, loglikelihood, and an intrinsic distribution measure".Journal of Statistical Planning and Inference.93 (325). Elsevier:51–69.doi:10.1016/S0378-3758(00)00169-5.
  14. ^Nemenman, Ilya; Shafee, Fariel; Bialek, William (2002).Entropy and Inference, revisited(PDF). NIPS 14., eq. 8
  15. ^Joram Soch (2020-05-10)."Kullback–Leibler divergence for the Dirichlet distribution".The Book of Statistical Proofs. StatProofBook. Retrieved2025-06-23.
  16. ^Connor, Robert J.; Mosimann, James E (1969). "Concepts of Independence for Proportions with a Generalization of the Dirichlet Distribution".Journal of the American Statistical Association.64 (325). American Statistical Association:194–206.doi:10.2307/2283728.JSTOR 2283728.
  17. ^See Kotz, Balakrishnan & Johnson (2000), Section 8.5, "Connor and Mosimann's Generalization", pp. 519–521.
  18. ^Phillips, P. C. B. (1988)."The characteristic function of the Dirichlet and multivariate F distribution"(PDF).Cowles Foundation Discussion Paper 865.
  19. ^Grinshpan, A. Z. (2017)."An inequality for multiple convolutions with respect to Dirichlet probability measure".Advances in Applied Mathematics.82 (1):102–119.doi:10.1016/j.aam.2016.08.001.
  20. ^Perrault, P. (2024). "A New Bound on the Cumulant Generating Function of Dirichlet Processes".arXiv:2409.18621 [math.PR]. Theorem 3.3
  21. ^abDevroye, Luc (1986).Non-Uniform Random Variate Generation. Springer-Verlag.ISBN 0-387-96305-7.
  22. ^Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George (2009). "Bayesian Inference on Multiscale Models for Poisson Intensity Estimation: Applications to Photon-Limited Image Denoising".IEEE Transactions on Image Processing.18 (8):1724–1741.Bibcode:2009ITIP...18.1724L.doi:10.1109/TIP.2009.2022008.PMID 19414285.S2CID 859561.
  23. ^Andreoli, Jean-Marc (2018). "A conjugate prior for the Dirichlet distribution".arXiv:1811.05266 [cs.LG].
  24. ^Graf, Monique (2019)."The Simplicial Generalized Beta distribution - R-package SGB and applications".Libra. Retrieved26 May 2025.
  25. ^Sorrenson, Peter; et al. (2023). "Learning Distributions on Manifolds with Free-Form Flows".arXiv:2312.09852 [cs.LG].
  26. ^Ferrer, Luciana; Ramos, Daniel (2025)."Evaluating Posterior Probabilities: Decision Theory, Proper Scoring Rules, and Calibration".Transactions on Machine Learning Research.
  27. ^Blackwell, David; MacQueen, James B. (1973)."Ferguson distributions via Polya urn schemes".Ann. Stat.1 (2):353–355.doi:10.1214/aos/1176342372.
  28. ^A. Gelman; J. B. Carlin; H. S. Stern; D. B. Rubin (2003).Bayesian Data Analysis (2nd ed.). Chapman & Hall/CRC. pp. 582.ISBN 1-58488-388-X.
