Generalized inverse Gaussian distribution

From Wikipedia, the free encyclopedia
Family of continuous probability distributions
Generalized inverse Gaussian
[Figure: probability density plots of GIG distributions]
Parameters: $a>0$, $b>0$, $p$ real
Support: $x>0$
PDF: $f(x)=\frac{(a/b)^{p/2}}{2K_p(\sqrt{ab})}x^{p-1}e^{-(ax+b/x)/2}$
Mean: $\operatorname{E}[x]=\frac{\sqrt{b}\,K_{p+1}(\sqrt{ab})}{\sqrt{a}\,K_p(\sqrt{ab})}$
      $\operatorname{E}[x^{-1}]=\frac{\sqrt{a}\,K_{p+1}(\sqrt{ab})}{\sqrt{b}\,K_p(\sqrt{ab})}-\frac{2p}{b}$
      $\operatorname{E}[\ln x]=\ln\frac{\sqrt{b}}{\sqrt{a}}+\frac{\partial}{\partial p}\ln K_p(\sqrt{ab})$
Mode: $\frac{(p-1)+\sqrt{(p-1)^2+ab}}{a}$
Variance: $\left(\frac{b}{a}\right)\left[\frac{K_{p+2}(\sqrt{ab})}{K_p(\sqrt{ab})}-\left(\frac{K_{p+1}(\sqrt{ab})}{K_p(\sqrt{ab})}\right)^2\right]$
MGF: $\left(\frac{a}{a-2t}\right)^{p/2}\frac{K_p(\sqrt{b(a-2t)})}{K_p(\sqrt{ab})}$
CF: $\left(\frac{a}{a-2it}\right)^{p/2}\frac{K_p(\sqrt{b(a-2it)})}{K_p(\sqrt{ab})}$

In probability theory and statistics, the generalized inverse Gaussian distribution (GIG) is a three-parameter family of continuous probability distributions with probability density function

$f(x)=\frac{(a/b)^{p/2}}{2K_p(\sqrt{ab})}x^{p-1}e^{-(ax+b/x)/2},\qquad x>0,$

where $K_p$ is a modified Bessel function of the second kind, $a>0$, $b>0$, and $p$ is a real parameter. It is used extensively in geostatistics, statistical linguistics, finance, etc. This distribution was first proposed by Étienne Halphen.[1][2][3] It was rediscovered and popularised by Ole Barndorff-Nielsen, who called it the generalized inverse Gaussian distribution. Its statistical properties are discussed in Bent Jørgensen's lecture notes.[4]
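As a concreteness check, the density above can be transcribed directly using SciPy's modified Bessel function of the second kind; the parameter values below are arbitrary illustrations, not from the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the second kind, K_p


def gig_pdf(x, a, b, p):
    """GIG(a, b, p) density for x > 0, transcribed from the formula above."""
    norm = (a / b) ** (p / 2) / (2 * kv(p, np.sqrt(a * b)))
    return norm * x ** (p - 1) * np.exp(-(a * x + b / x) / 2)


# Sanity check: the density integrates to 1 over (0, inf).
total, _ = quad(gig_pdf, 0, np.inf, args=(2.0, 3.0, 0.5))
print(total)  # ≈ 1.0
```

The same quadrature approach can be used to confirm the moment formulas in the table against their Bessel-function closed forms.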

Properties

Alternative parametrization

By setting $\theta=\sqrt{ab}$ and $\eta=\sqrt{b/a}$, we can alternatively express the GIG distribution as

$f(x)=\frac{1}{2\eta K_p(\theta)}\left(\frac{x}{\eta}\right)^{p-1}e^{-\theta(x/\eta+\eta/x)/2},$

where $\theta$ is the concentration parameter while $\eta$ is the scaling parameter.
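This $(\theta,\eta)$ form matches the parametrization used by `scipy.stats.geninvgauss`, whose density is $x^{p-1}e^{-b(x+1/x)/2}/(2K_p(b))$, with its shape parameter `b` playing the role of $\theta$ and `scale` the role of $\eta$. A quick sketch of the correspondence (sample parameter values are arbitrary):

```python
import numpy as np
from scipy.special import kv
from scipy.stats import geninvgauss

a, b, p = 2.0, 3.0, 0.5
theta, eta = np.sqrt(a * b), np.sqrt(b / a)  # concentration and scale


def gig_pdf_alt(x):
    # density in the (theta, eta) parametrization above
    return (1 / (2 * eta * kv(p, theta)) * (x / eta) ** (p - 1)
            * np.exp(-theta * (x / eta + eta / x) / 2))


x = np.linspace(0.1, 5.0, 50)
# geninvgauss(p, theta, scale=eta) should reproduce the same density
print(np.allclose(gig_pdf_alt(x), geninvgauss.pdf(x, p, theta, scale=eta)))  # True
```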

Summation

Barndorff-Nielsen and Halgreen proved that the GIG distribution isinfinitely divisible.[5]

Entropy

The entropy of the generalized inverse Gaussian distribution is given as[citation needed]

$$\begin{aligned}H=\frac{1}{2}\log\left(\frac{b}{a}\right)&+\log\left(2K_p\left(\sqrt{ab}\right)\right)-(p-1)\frac{\left[\frac{d}{d\nu}K_\nu\left(\sqrt{ab}\right)\right]_{\nu=p}}{K_p\left(\sqrt{ab}\right)}\\&+\frac{\sqrt{ab}}{2K_p\left(\sqrt{ab}\right)}\left(K_{p+1}\left(\sqrt{ab}\right)+K_{p-1}\left(\sqrt{ab}\right)\right)\end{aligned}$$

where $\left[\frac{d}{d\nu}K_\nu\left(\sqrt{ab}\right)\right]_{\nu=p}$ is the derivative of the modified Bessel function of the second kind with respect to the order $\nu$, evaluated at $\nu=p$.
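SciPy exposes no order-derivative of $K_\nu$, so one way to check the entropy expression numerically is to approximate that term by a central finite difference and compare against the quadrature-based entropy of `scipy.stats.geninvgauss`. A sketch with arbitrary parameters, in nats:

```python
import numpy as np
from scipy.special import kv
from scipy.stats import geninvgauss


def gig_entropy(a, b, p, h=1e-5):
    """Entropy formula above; dK_nu/dnu at nu = p is approximated by a
    central difference (an assumption of this sketch)."""
    w = np.sqrt(a * b)
    dk = (kv(p + h, w) - kv(p - h, w)) / (2 * h)
    return (0.5 * np.log(b / a) + np.log(2 * kv(p, w))
            - (p - 1) * dk / kv(p, w)
            + w * (kv(p + 1, w) + kv(p - 1, w)) / (2 * kv(p, w)))


a, b, p = 2.0, 3.0, 0.5
theta, eta = np.sqrt(a * b), np.sqrt(b / a)
print(gig_entropy(a, b, p))                        # closed form
print(geninvgauss(p, theta, scale=eta).entropy())  # numerical cross-check
```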

Characteristic function

The characteristic function of a random variable $X\sim\operatorname{GIG}(p,a,b)$ is given as (for a derivation of the characteristic function, see supplementary materials of [6])

$E(e^{itX})=\left(\frac{a}{a-2it}\right)^{\frac{p}{2}}\frac{K_p\left(\sqrt{(a-2it)b}\right)}{K_p\left(\sqrt{ab}\right)}$

for $t\in\mathbb{R}$, where $i$ denotes the imaginary unit.
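The closed form can be cross-checked by computing $E[e^{itX}]$ directly by quadrature; note that SciPy's `kv` accepts complex arguments, so the analytic continuation is immediate (parameter and $t$ values below are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

a, b, p = 2.0, 3.0, 0.5
w = np.sqrt(a * b)


def gig_pdf(x):
    return (a / b) ** (p / 2) / (2 * kv(p, w)) * x ** (p - 1) * np.exp(-(a * x + b / x) / 2)


def cf_closed(t):
    # characteristic function formula above; kv handles the complex argument
    z = a - 2j * t
    return (a / z) ** (p / 2) * kv(p, np.sqrt(z * b)) / kv(p, w)


def cf_numeric(t):
    # E[exp(itX)] via quadrature of the real and imaginary parts
    re, _ = quad(lambda x: np.cos(t * x) * gig_pdf(x), 0, np.inf)
    im, _ = quad(lambda x: np.sin(t * x) * gig_pdf(x), 0, np.inf)
    return re + 1j * im


print(cf_closed(0.7), cf_numeric(0.7))  # the two should agree closely
```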

Related distributions

Special cases

The inverse Gaussian and gamma distributions are special cases of the generalized inverse Gaussian distribution for $p=-1/2$ and $b=0$, respectively.[7] Specifically, an inverse Gaussian distribution of the form

$f(x;\mu,\lambda)=\left[\frac{\lambda}{2\pi x^3}\right]^{1/2}\exp\left(\frac{-\lambda(x-\mu)^2}{2\mu^2 x}\right)$

is a GIG with $a=\lambda/\mu^2$, $b=\lambda$, and $p=-1/2$. A gamma distribution of the form

$g(x;\alpha,\beta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}$

is a GIG with $a=2\beta$, $b=0$, and $p=\alpha$.

Other special cases include the inverse-gamma distribution, for $a=0$.[7]
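The inverse Gaussian special case can be verified numerically against `scipy.stats.invgauss`; the mapping used below (SciPy's shape parameter is the mean divided by the scale) is an assumption of this sketch, and the values of $\mu$ and $\lambda$ are arbitrary:

```python
import numpy as np
from scipy.special import kv
from scipy.stats import invgauss


def gig_pdf(x, a, b, p):
    return ((a / b) ** (p / 2) / (2 * kv(p, np.sqrt(a * b)))
            * x ** (p - 1) * np.exp(-(a * x + b / x) / 2))


mu, lam = 1.5, 2.0
x = np.linspace(0.1, 5.0, 40)
# scipy's invgauss(mu/lam, scale=lam) is the IG with mean mu and shape lam
ig = invgauss.pdf(x, mu / lam, scale=lam)
print(np.allclose(gig_pdf(x, lam / mu ** 2, lam, -0.5), ig))  # True
```

The gamma case $b=0$ cannot be evaluated by this function directly, since $K_p(\sqrt{ab})$ is singular there; it arises as the limit $b\to 0$ for $p>0$.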

Conjugate prior for Gaussian

The GIG distribution is conjugate to the normal distribution when serving as the mixing distribution in a normal variance-mean mixture.[8][9] Let the prior distribution for some hidden variable, say $z$, be GIG:

$P(z\mid a,b,p)=\operatorname{GIG}(z\mid a,b,p)$

and let there be $T$ observed data points, $X=x_1,\ldots,x_T$, with normal likelihood function, conditioned on $z$:

$P(X\mid z,\alpha,\beta)=\prod_{i=1}^{T}N(x_i\mid\alpha+\beta z,z)$

where $N(x\mid\mu,v)$ is the normal distribution with mean $\mu$ and variance $v$. Then the posterior for $z$, given the data, is also GIG:

$P(z\mid X,a,b,p,\alpha,\beta)=\operatorname{GIG}\left(z\mid a+T\beta^2,\;b+S,\;p-\frac{T}{2}\right)$

where $S=\sum_{i=1}^{T}(x_i-\alpha)^2$.[note 1]
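The posterior update amounts to three sufficient-statistic computations; a minimal sketch (the function name and example numbers are illustrative, not from the source):

```python
import numpy as np


def gig_posterior(a, b, p, xs, alpha, beta):
    """Map GIG prior parameters (a, b, p) to the posterior parameters
    given data xs under the normal likelihood N(x_i | alpha + beta*z, z)."""
    xs = np.asarray(xs, dtype=float)
    T = len(xs)
    S = np.sum((xs - alpha) ** 2)
    return a + T * beta ** 2, b + S, p - T / 2


# three observations with alpha = 0.2, beta = 0.4
print(gig_posterior(1.0, 2.0, 0.5, [0.3, 1.1, 0.7], alpha=0.2, beta=0.4))
# ≈ (1.48, 3.07, -1.0)
```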

Sichel distribution

The Sichel distribution results when the GIG is used as the mixing distribution for the Poisson parameter $\lambda$.[10][11]
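A Sichel probability mass function can be obtained by integrating the Poisson pmf against the GIG mixing density. The quadrature sketch below (with arbitrary parameters) checks that the resulting probabilities sum to one:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln, kv

a, b, p = 2.0, 3.0, 0.5


def gig_pdf(lam):
    return ((a / b) ** (p / 2) / (2 * kv(p, np.sqrt(a * b)))
            * lam ** (p - 1) * np.exp(-(a * lam + b / lam) / 2))


def sichel_pmf(n):
    # P(N = n) = integral of Poisson(n | lam) * GIG(lam | a, b, p) d lam
    poisson = lambda lam: np.exp(n * np.log(lam) - lam - gammaln(n + 1))
    val, _ = quad(lambda lam: poisson(lam) * gig_pdf(lam), 0, np.inf)
    return val


probs = [sichel_pmf(n) for n in range(60)]
print(sum(probs))  # ≈ 1 (the tail beyond n = 60 is negligible here)
```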

Notes

  1. ^ Due to the conjugacy, these details can be derived without solving integrals, by noting that
    $P(z\mid X,a,b,p,\alpha,\beta)\propto P(z\mid a,b,p)\,P(X\mid z,\alpha,\beta)$.
    Omitting all factors independent of $z$, the right-hand side can be simplified to give an un-normalized GIG distribution, from which the posterior parameters can be identified.

References

  1. ^ Seshadri, V. (1997). "Halphen's laws". In Kotz, S.; Read, C. B.; Banks, D. L. (eds.). Encyclopedia of Statistical Sciences, Update Volume 1. New York: Wiley. pp. 302–306.
  2. ^ Perreault, L.; Bobée, B.; Rasmussen, P. F. (1999). "Halphen Distribution System. I: Mathematical and Statistical Properties". Journal of Hydrologic Engineering. 4 (3): 189. doi:10.1061/(ASCE)1084-0699(1999)4:3(189).
  3. ^ Étienne Halphen was the grandson of the mathematician Georges Henri Halphen.
  4. ^ Jørgensen, Bent (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Lecture Notes in Statistics. Vol. 9. New York–Berlin: Springer-Verlag. ISBN 0-387-90665-7. MR 0648107.
  5. ^ Barndorff-Nielsen, O.; Halgreen, Christian (1977). "Infinite Divisibility of the Hyperbolic and Generalized Inverse Gaussian Distributions". Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete. 38: 309–311. doi:10.1007/BF00533162.
  6. ^ Pal, Subhadip; Gaskins, Jeremy (23 May 2022). "Modified Pólya-Gamma data augmentation for Bayesian analysis of directional data". Journal of Statistical Computation and Simulation. 92 (16): 3430–3451. doi:10.1080/00949655.2022.2067853. ISSN 0094-9655. S2CID 249022546.
  7. ^ a b Johnson, Norman L.; Kotz, Samuel; Balakrishnan, N. (1994), Continuous Univariate Distributions. Vol. 1, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics (2nd ed.), New York: John Wiley & Sons, pp. 284–285, ISBN 978-0-471-58495-7, MR 1299979
  8. ^ Karlis, Dimitris (2002). "An EM type algorithm for maximum likelihood estimation of the normal–inverse Gaussian distribution". Statistics & Probability Letters. 57 (1): 43–52. doi:10.1016/S0167-7152(02)00040-8.
  9. ^ Barndorff-Nielsen, O. E. (1997). "Normal Inverse Gaussian Distributions and stochastic volatility modelling". Scand. J. Statist. 24 (1): 1–13. doi:10.1111/1467-9469.00045.
  10. ^ Sichel, Herbert S. (1975). "On a distribution law for word frequencies". Journal of the American Statistical Association. 70 (351a): 542–547. doi:10.1080/01621459.1975.10482469.
  11. ^ Stein, Gillian Z.; Zucchini, Walter; Juritz, June M. (1987). "Parameter estimation for the Sichel distribution and its multivariate extension". Journal of the American Statistical Association. 82 (399): 938–944. doi:10.1080/01621459.1987.10478520.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Generalized_inverse_Gaussian_distribution&oldid=1328033395"