Generalized normal distribution

From Wikipedia, the free encyclopedia

The generalized normal distribution (GND) or generalized Gaussian distribution (GGD) is either of two families of parametric continuous probability distributions on the real line. Both families add a shape parameter to the normal distribution. To distinguish the two families, they are referred to below as "symmetric" and "asymmetric"; however, this is not a standard nomenclature.

Symmetric version

Symmetric Generalized Normal
Probability density function: probability density plots of generalized normal distributions (figure)
Cumulative distribution function: cumulative distribution function plots of generalized normal distributions (figure)
Parameters: $\mu$ — location (real); $\alpha$ — scale (positive, real); $\beta$ — shape (positive, real)
Support: $x \in (-\infty, +\infty)$
PDF: $\dfrac{\beta}{2\alpha\Gamma(1/\beta)}\, e^{-(|x-\mu|/\alpha)^{\beta}}$, where $\Gamma$ denotes the gamma function
CDF: $\dfrac{1}{2} + \operatorname{sign}(x-\mu)\,\dfrac{1}{2\Gamma(1/\beta)}\,\gamma\!\left(1/\beta,\, \left|\dfrac{x-\mu}{\alpha}\right|^{\beta}\right)$, where $\beta$ is a shape parameter, $\alpha$ is a scale parameter and $\gamma$ is the unnormalized lower incomplete gamma function
Quantile: $\operatorname{sign}(p-0.5)\left[\alpha^{\beta}\, F^{-1}\!\left(2|p-0.5|;\, \tfrac{1}{\beta}\right)\right]^{1/\beta} + \mu$, where $F^{-1}(p; a)$ is the quantile function of the gamma distribution[1]
Mean: $\mu$
Median: $\mu$
Mode: $\mu$
Variance: $\dfrac{\alpha^{2}\Gamma(3/\beta)}{\Gamma(1/\beta)}$
Skewness: 0
Excess kurtosis: $\dfrac{\Gamma(5/\beta)\,\Gamma(1/\beta)}{\Gamma(3/\beta)^{2}} - 3$
Entropy: $\dfrac{1}{\beta} - \log\!\left[\dfrac{\beta}{2\alpha\Gamma(1/\beta)}\right]$[2]

The symmetric generalized normal distribution, also known as the exponential power distribution or the generalized error distribution, is a parametric family of symmetric distributions. It includes all normal and Laplace distributions, and as limiting cases it includes all continuous uniform distributions on bounded intervals of the real line.

This family includes the normal distribution when $\beta = 2$ (with mean $\mu$ and variance $\frac{\alpha^{2}}{2}$) and it includes the Laplace distribution when $\beta = 1$. As $\beta \rightarrow \infty$, the density converges pointwise to a uniform density on $(\mu - \alpha,\, \mu + \alpha)$.

This family allows for tails that are either heavier than normal (when $\beta < 2$) or lighter than normal (when $\beta > 2$). It is a useful way to parametrize a continuum of symmetric, platykurtic densities spanning from the normal ($\beta = 2$) to the uniform density ($\beta = \infty$), and a continuum of symmetric, leptokurtic densities spanning from the Laplace ($\beta = 1$) to the normal density ($\beta = 2$). The shape parameter $\beta$ also controls the peakedness in addition to the tails.
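As a concrete illustration, the following minimal sketch evaluates the PDF given in the table above and checks the $\beta = 2$ (normal) and $\beta = 1$ (Laplace) special cases numerically. It assumes NumPy and SciPy; SciPy's scipy.stats.gennorm uses the same (shape, location, scale) parameterization as this article.

    import numpy as np
    from scipy.special import gamma
    from scipy.stats import gennorm, laplace, norm

    def gnd_pdf(x, mu, alpha, beta):
        """Symmetric generalized normal density from the table above."""
        return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x - mu) / alpha) ** beta)

    x = np.linspace(-4.0, 4.0, 9)
    mu, alpha = 0.5, 1.3

    # beta = 2 reduces to a normal density with mean mu and variance alpha^2 / 2
    print(np.allclose(gnd_pdf(x, mu, alpha, 2.0), norm.pdf(x, loc=mu, scale=alpha / np.sqrt(2))))
    # beta = 1 reduces to a Laplace density with location mu and scale alpha
    print(np.allclose(gnd_pdf(x, mu, alpha, 1.0), laplace.pdf(x, loc=mu, scale=alpha)))
    # general beta agrees with SciPy's gennorm under the same parameterization
    print(np.allclose(gnd_pdf(x, mu, alpha, 3.5), gennorm.pdf(x, 3.5, loc=mu, scale=alpha)))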

Parameter estimation


Parameter estimation via maximum likelihood and the method of moments has been studied.[3] The estimates do not have a closed form and must be obtained numerically. Estimators that do not require numerical calculation have also been proposed.[4]

The generalized normal log-likelihood function has infinitely many continuous derivatives (i.e. it belongs to the class $C^{\infty}$ of smooth functions) only if $\beta$ is a positive, even integer. Otherwise, the function has $\lfloor \beta \rfloor$ continuous derivatives. As a result, the standard results for consistency and asymptotic normality of maximum likelihood estimates of $\beta$ only apply when $\beta \geq 2$.

Maximum likelihood estimator


It is possible to fit the generalized normal distribution adopting an approximate maximum likelihood method.[5][6] With $\mu$ initially set to the sample first moment $m_{1}$, $\beta$ is estimated by using a Newton–Raphson iterative procedure, starting from an initial guess of $\beta = \beta_{0}$,

$\beta_{0} = \frac{m_{1}}{\sqrt{m_{2}}},$

where

$m_{1} = \frac{1}{N}\sum_{i=1}^{N}|x_{i}|,$

is the first statistical moment of the absolute values and $m_{2}$ is the second statistical moment. The iteration is

$\beta_{i+1} = \beta_{i} - \frac{g(\beta_{i})}{g'(\beta_{i})},$

where

$g(\beta) = 1 + \frac{\psi(1/\beta)}{\beta} - \frac{\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}\log|x_{i}-\mu|}{\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}} + \frac{\log\!\left(\frac{\beta}{N}\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}\right)}{\beta},$

and

$g'(\beta) = -\frac{\psi(1/\beta)}{\beta^{2}} - \frac{\psi'(1/\beta)}{\beta^{3}} + \frac{1}{\beta^{2}} - \frac{\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}(\log|x_{i}-\mu|)^{2}}{\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}} + \frac{\left(\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}\log|x_{i}-\mu|\right)^{2}}{\left(\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}\right)^{2}} + \frac{\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}\log|x_{i}-\mu|}{\beta\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}} - \frac{\log\!\left(\frac{\beta}{N}\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}\right)}{\beta^{2}},$

and where $\psi$ and $\psi'$ are the digamma function and the trigamma function.

Given a value for $\beta$, it is possible to estimate $\mu$ by finding the minimum of:

$\min_{\mu}\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}$

Finally, $\alpha$ is evaluated as

$\alpha = \left(\frac{\beta}{N}\sum_{i=1}^{N}|x_{i}-\mu|^{\beta}\right)^{1/\beta}.$

For $\beta \leq 1$, the median is a more appropriate estimator of $\mu$. Once $\mu$ is estimated, $\beta$ and $\alpha$ can be estimated as described above.[7]
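The following is a minimal sketch of this estimation procedure, assuming NumPy and SciPy (digamma and polygamma supply $\psi$ and $\psi'$). It is illustrative rather than robust: the Newton–Raphson step is applied a fixed number of times, $\mu$ is simply fixed at the sample mean (or the median), and no safeguards against divergence are included. SciPy's gennorm.fit offers a general-purpose alternative.

    import numpy as np
    from scipy.special import digamma, polygamma

    def fit_symmetric_gnd(x, n_iter=50, use_median=False):
        """Approximate ML estimates (mu, alpha, beta) via the Newton-Raphson scheme above."""
        x = np.asarray(x, dtype=float)
        N = x.size
        mu = np.median(x) if use_median else x.mean()    # location estimate
        m1 = np.mean(np.abs(x))                          # first absolute moment
        m2 = np.mean(x ** 2)                             # second moment
        beta = m1 / np.sqrt(m2)                          # initial guess beta_0

        a = np.abs(x - mu)
        a = np.where(a == 0, 1e-12, a)                   # guard log(0)
        log_a = np.log(a)
        for _ in range(n_iter):                          # Newton-Raphson on g(beta) = 0
            w = a ** beta
            S, L1, L2 = w.sum(), (w * log_a).sum(), (w * log_a ** 2).sum()
            T = np.log(beta / N * S)
            g = 1 + digamma(1 / beta) / beta - L1 / S + T / beta
            gp = (-digamma(1 / beta) / beta ** 2 - polygamma(1, 1 / beta) / beta ** 3
                  + 1 / beta ** 2 - L2 / S + (L1 / S) ** 2 + L1 / (beta * S) - T / beta ** 2)
            beta -= g / gp
        alpha = (beta / N * np.sum(a ** beta)) ** (1 / beta)
        return mu, alpha, beta

    # toy usage on a synthetic sample (parameter values chosen arbitrarily)
    from scipy.stats import gennorm
    sample = gennorm.rvs(1.7, loc=0.3, scale=1.2, size=20000, random_state=0)
    print(fit_symmetric_gnd(sample))                     # roughly (0.3, 1.2, 1.7)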

Applications


The symmetric generalized normal distribution has been used in modeling when the concentration of values around the mean and the tail behavior are of particular interest.[8][9] Other families of distributions can be used if the focus is on other deviations from normality. If the symmetry of the distribution is the main interest, the skew normal family or the asymmetric version of the generalized normal family discussed below can be used. If the tail behavior is the main interest, the Student's t family can be used, which approximates the normal distribution as the degrees of freedom grow to infinity. The t distribution, unlike this generalized normal distribution, obtains heavier than normal tails without acquiring a cusp at the origin. It finds use in plasma physics under the name of the Langdon distribution, resulting from inverse bremsstrahlung.[10]

In a linear regression problem modeled as $y \sim \mathrm{GeneralizedNormal}(X\cdot\theta, \alpha, p)$, the MLE will be $\arg\min_{\theta}\|X\cdot\theta - y\|_{p}$, where the $p$-norm is used.
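A brief sketch of this equivalence on synthetic data, assuming NumPy and SciPy: the maximum-likelihood coefficients under generalized-normal errors of shape $p$ are obtained by minimizing the $p$-norm of the residuals (equivalently, its $p$-th power).

    import numpy as np
    from scipy.optimize import minimize

    def p_norm_regression(X, y, p):
        """Return argmin_theta ||X @ theta - y||_p."""
        theta0 = np.linalg.lstsq(X, y, rcond=None)[0]    # least-squares warm start
        objective = lambda theta: np.sum(np.abs(X @ theta - y) ** p)
        return minimize(objective, theta0, method="Nelder-Mead").x

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = X @ np.array([1.0, -2.0]) + rng.laplace(scale=0.5, size=200)
    print(p_norm_regression(X, y, p=1.0))                # p = 1: least absolute deviations
    print(p_norm_regression(X, y, p=2.0))                # p = 2: ordinary least squares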

Properties


Moments


Let $X_{\beta}$ be a zero-mean generalized Gaussian distribution of shape $\beta$ and scaling parameter $\alpha$. The moments of $X_{\beta}$ exist and are finite for any $k$ greater than −1. For any non-negative integer $k$, the plain central moments are[2]

$\operatorname{E}\left[X_{\beta}^{k}\right] = \begin{cases} 0 & \text{if } k \text{ is odd,} \\ \alpha^{k}\,\Gamma\!\left(\frac{k+1}{\beta}\right)\!\Big/\Gamma\!\left(\frac{1}{\beta}\right) & \text{if } k \text{ is even.} \end{cases}$
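The formula can be checked numerically; the sketch below compares it against direct integration of $x^{k}$ times the density, assuming SciPy's quad integrator (the parameter values are arbitrary).

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma

    alpha, beta = 1.4, 1.5                               # arbitrary test parameters

    def pdf(x):
        """Zero-mean symmetric generalized normal density."""
        return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

    for k in range(7):
        numeric, _ = quad(lambda x: x ** k * pdf(x), -np.inf, np.inf)
        closed = 0.0 if k % 2 else alpha ** k * gamma((k + 1) / beta) / gamma(1 / beta)
        print(k, round(numeric, 6), round(closed, 6))    # the two columns should agree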

Connection to Stable Count Distribution


From the viewpoint of the Stable count distribution, $\beta$ can be regarded as Lévy's stability parameter. This distribution can be decomposed into an integral of kernel densities where the kernel is either a Laplace distribution or a Gaussian distribution:

$\frac{1}{2}\,\frac{1}{\Gamma\!\left(\frac{1}{\beta}+1\right)}\, e^{-z^{\beta}} = \begin{cases} \displaystyle\int_{0}^{\infty}\frac{1}{\nu}\left(\frac{1}{2}e^{-|z|/\nu}\right)\mathfrak{N}_{\beta}(\nu)\,d\nu, & 1 \geq \beta > 0;\ \text{or} \\[6pt] \displaystyle\int_{0}^{\infty}\frac{1}{s}\left(\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(z/s)^{2}}\right)V_{\beta}(s)\,ds, & 2 \geq \beta > 0; \end{cases}$

where $\mathfrak{N}_{\beta}(\nu)$ is the Stable count distribution and $V_{\beta}(s)$ is the Stable vol distribution.

Connection to Positive-Definite Functions


The probability density function of the symmetric generalized normal distribution is a positive-definite function for $\beta \in (0, 2]$.[11][12]

Infinite divisibility


The symmetric generalized Gaussian distribution is an infinitely divisible distribution if and only if $\beta \in (0, 1] \cup \{2\}$.[11]

Generalizations


The multivariate generalized normal distribution, i.e. the product of $n$ exponential power distributions with the same $\beta$ and $\alpha$ parameters, is the only probability density that can be written in the form $p(\mathbf{x}) = g(\|\mathbf{x}\|_{\beta})$ and has independent marginals.[13] The result for the special case of the multivariate normal distribution is originally attributed to Maxwell.[14]

Asymmetric version

Asymmetric Generalized Normal
Probability density function: probability density plots of generalized normal distributions (figure)
Cumulative distribution function: cumulative distribution function plots of generalized normal distributions (figure)
Parameters: $\xi$ — location (real); $\alpha$ — scale (positive, real); $\kappa$ — shape (real)
Support: $x \in (-\infty,\, \xi + \alpha/\kappa)$ if $\kappa > 0$; $x \in (-\infty, \infty)$ if $\kappa = 0$; $x \in (\xi + \alpha/\kappa,\, +\infty)$ if $\kappa < 0$
PDF: $\dfrac{\phi(y)}{\alpha - \kappa(x-\xi)}$, where $y = -\dfrac{1}{\kappa}\log\!\left[1 - \dfrac{\kappa(x-\xi)}{\alpha}\right]$ if $\kappa \neq 0$ and $y = \dfrac{x-\xi}{\alpha}$ if $\kappa = 0$; $\phi$ is the standard normal pdf
CDF: $\Phi(y)$, with $y$ as above; $\Phi$ is the standard normal CDF
Mean: $\xi - \dfrac{\alpha}{\kappa}\left(e^{\kappa^{2}/2} - 1\right)$
Median: $\xi$
Variance: $\dfrac{\alpha^{2}}{\kappa^{2}}\, e^{\kappa^{2}}\left(e^{\kappa^{2}} - 1\right)$
Skewness: $\dfrac{3e^{\kappa^{2}} - e^{3\kappa^{2}} - 2}{\left(e^{\kappa^{2}} - 1\right)^{3/2}}\,\operatorname{sign}(\kappa)$
Excess kurtosis: $e^{4\kappa^{2}} + 2e^{3\kappa^{2}} + 3e^{2\kappa^{2}} - 6$
Not to be confused with the skew normal distribution.

The asymmetric generalized normal distribution is a family of continuous probability distributions in which the shape parameter can be used to introduce asymmetry or skewness.[15][16] When the shape parameter is zero, the normal distribution results. Positive values of the shape parameter yield left-skewed distributions bounded to the right, and negative values of the shape parameter yield right-skewed distributions bounded to the left. Only when the shape parameter is zero is the density function for this distribution positive over the whole real line: in this case the distribution is a normal distribution; otherwise the distributions are shifted and possibly reversed log-normal distributions.
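A small sketch of the asymmetric density and CDF defined in the table above, assuming NumPy and SciPy for the standard normal $\phi$ and $\Phi$; the density is set to zero outside the support.

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def agn_pdf(x, xi, alpha, kappa):
        """Asymmetric generalized normal density phi(y) / (alpha - kappa*(x - xi))."""
        x = np.asarray(x, dtype=float)
        if kappa == 0:
            return norm.pdf((x - xi) / alpha) / alpha
        u = 1 - kappa * (x - xi) / alpha
        inside = u > 0                                   # support condition
        y = np.where(inside, -np.log(np.where(inside, u, 1.0)) / kappa, 0.0)
        return np.where(inside, norm.pdf(y) / (alpha * u), 0.0)

    def agn_cdf(x, xi, alpha, kappa):
        """Asymmetric generalized normal CDF Phi(y)."""
        x = np.asarray(x, dtype=float)
        if kappa == 0:
            return norm.cdf((x - xi) / alpha)
        u = 1 - kappa * (x - xi) / alpha
        inside = u > 0
        y = np.where(inside, -np.log(np.where(inside, u, 1.0)) / kappa, 0.0)
        limit = 1.0 if kappa > 0 else 0.0                # value beyond the finite endpoint
        return np.where(inside, norm.cdf(y), limit)

    # the density integrates to one over its support, here (-inf, xi + alpha/kappa)
    xi, alpha, kappa = 0.0, 1.0, 0.5
    print(quad(lambda t: float(agn_pdf(t, xi, alpha, kappa)), -np.inf, xi + alpha / kappa)[0])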

Parameter estimation


Parameters can be estimated via maximum likelihood estimation or the method of moments. The parameter estimates do not have a closed form, so numerical calculations must be used to compute the estimates. Since the sample space (the set of real numbers where the density is non-zero) depends on the true value of the parameter, some standard results about the performance of parameter estimates will not automatically apply when working with this family.
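A minimal numerical maximum-likelihood sketch for this family, assuming SciPy's minimize. The log-likelihood follows directly from the density $\phi(y)/(\alpha - \kappa(x-\xi))$; parameter values whose support excludes any observation are rejected by returning an infinite penalty. This is illustrative only, not a robust estimator.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def negative_log_likelihood(params, x):
        xi, log_alpha, kappa = params
        alpha = np.exp(log_alpha)                        # keep the scale positive
        if abs(kappa) < 1e-12:                           # kappa = 0: normal special case
            return -np.sum(norm.logpdf((x - xi) / alpha) - np.log(alpha))
        u = 1 - kappa * (x - xi) / alpha
        if np.any(u <= 0):                               # some observation outside the support
            return np.inf
        y = -np.log(u) / kappa
        return -np.sum(norm.logpdf(y) - np.log(alpha * u))

    # toy usage: a synthetic sample from the kappa = 0 (normal) special case
    rng = np.random.default_rng(1)
    x = rng.normal(loc=2.0, scale=1.5, size=5000)
    start = [np.median(x), np.log(x.std()), 0.1]
    result = minimize(negative_log_likelihood, start, args=(x,), method="Nelder-Mead")
    xi_hat, alpha_hat, kappa_hat = result.x[0], np.exp(result.x[1]), result.x[2]
    print(xi_hat, alpha_hat, kappa_hat)                  # roughly 2.0, 1.5, 0.0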

Applications


The asymmetric generalized normal distribution can be used to model values that may be normally distributed, or that may be either right-skewed or left-skewed relative to the normal distribution. The skew normal distribution is another distribution that is useful for modeling deviations from normality due to skew. Other distributions used to model skewed data include the gamma, lognormal, and Weibull distributions, but these do not include the normal distributions as special cases.

Kullback-Leibler divergence between two PDFs


The Kullback–Leibler divergence (KLD) is a measure of the divergence, or dissimilarity, between two probability density functions.[17]

Let $P(x)$ and $Q(x)$ be two generalized Gaussian distributions with parameters $\alpha_{1}, \beta_{1}, \mu_{1}$ and $\alpha_{2}, \beta_{2}, \mu_{2}$, subject to the constraint $\mu_{1} = \mu_{2} = 0$.[18] Then this divergence is given by:

$\mathrm{KLD}_{\mathrm{pdf}}(P(x)\,\|\,Q(x)) = -\frac{1}{\beta_{1}} + \left(\frac{\alpha_{1}}{\alpha_{2}}\right)^{\beta_{2}}\frac{\Gamma\!\left(\frac{1+\beta_{2}}{\beta_{1}}\right)}{\Gamma\!\left(\frac{1}{\beta_{1}}\right)} + \log\!\left(\frac{\alpha_{2}\,\Gamma\!\left(1+\frac{1}{\beta_{2}}\right)}{\alpha_{1}\,\Gamma\!\left(1+\frac{1}{\beta_{1}}\right)}\right)$
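As a sanity check, the closed form can be compared against direct numerical integration of $P(x)\log\!\big(P(x)/Q(x)\big)$; the sketch below assumes SciPy and uses arbitrary test parameters.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma

    def ggd_pdf(x, alpha, beta):
        """Zero-mean generalized Gaussian density."""
        return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

    def kld_closed_form(a1, b1, a2, b2):
        return (-1 / b1
                + (a1 / a2) ** b2 * gamma((b2 + 1) / b1) / gamma(1 / b1)
                + np.log(a2 * gamma(1 + 1 / b2) / (a1 * gamma(1 + 1 / b1))))

    a1, b1, a2, b2 = 1.0, 1.5, 2.0, 2.0                  # arbitrary test parameters
    integrand = lambda x: ggd_pdf(x, a1, b1) * np.log(ggd_pdf(x, a1, b1) / ggd_pdf(x, a2, b2))
    numeric, _ = quad(integrand, -np.inf, np.inf)
    print(numeric, kld_closed_form(a1, b1, a2, b2))      # the two values should agree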

Other distributions related to the normal


The two generalized normal families described here, like the skew normal family, are parametric families that extend the normal distribution by adding a shape parameter. Due to the central role of the normal distribution in probability and statistics, many distributions can be characterized in terms of their relationship to the normal distribution. For example, the log-normal, folded normal, and inverse normal distributions are defined as transformations of a normally distributed value, but unlike the generalized normal and skew-normal families, these do not include the normal distributions as special cases.

In fact, all distributions with finite variance are, in the limit, closely related to the normal distribution. The Student's t-distribution, the Irwin–Hall distribution and the Bates distribution also extend the normal distribution, and include the normal distribution as a limiting case. There is therefore no strong reason to prefer the "generalized" normal distribution of type 1 over, for example, a combination of Student's t and a normalized extended Irwin–Hall: the latter would include, for example, the triangular distribution (which cannot be modeled by the generalized Gaussian type 1).

A symmetric distribution which can model both tail behavior (long and short) and center behavior (flat, triangular or Gaussian) completely independently could be derived, for example, by using X = IH/chi.

The Tukey g- and h-distribution also allows for a deviation from normality, both through skewness and fat tails.[19]


References

1. Griffin, Maryclare. "Working with the Exponential Power Distribution Using gnorm". GitHub, gnorm package. Retrieved 26 June 2020.
2. Nadarajah, Saralees (September 2005). "A generalized normal distribution". Journal of Applied Statistics. 32 (7): 685–694. Bibcode:2005JApSt..32..685N. doi:10.1080/02664760500079464. S2CID 121914682.
3. Varanasi, M.K.; Aazhang, B. (October 1989). "Parametric generalized Gaussian density estimation". Journal of the Acoustical Society of America. 86 (4): 1404–1415. Bibcode:1989ASAJ...86.1404V. doi:10.1121/1.398700.
4. Domínguez-Molina, J. Armando; González-Farías, Graciela; Rodríguez-Dagnino, Ramón M. "A practical procedure to estimate the shape parameter in the generalized Gaussian distribution" (PDF). Archived from the original (PDF) on 2007-09-28. Retrieved 2009-03-03.
5. Varanasi, M.K.; Aazhang, B. (1989). "Parametric generalized Gaussian density estimation". J. Acoust. Soc. Am. 86 (4): 1404–1415. Bibcode:1989ASAJ...86.1404V. doi:10.1121/1.398700.
6. Do, M.N.; Vetterli, M. (February 2002). "Wavelet-based Texture Retrieval Using Generalised Gaussian Density and Kullback-Leibler Distance". IEEE Transactions on Image Processing. 11 (2): 146–158. Bibcode:2002ITIP...11..146D. doi:10.1109/83.982822. PMID 18244620.
7. Varanasi, Mahesh K.; Aazhang, Behnaam (1989-10-01). "Parametric generalized Gaussian density estimation". The Journal of the Acoustical Society of America. 86 (4): 1404–1415. Bibcode:1989ASAJ...86.1404V. doi:10.1121/1.398700. ISSN 0001-4966.
8. Liang, Faming; Liu, Chuanhai; Wang, Naisyin (April 2007). "A robust sequential Bayesian method for identification of differentially expressed genes". Statistica Sinica. 17 (2): 571–597. Archived from the original on 2007-10-09. Retrieved 2009-03-03.
9. Box, George E. P.; Tiao, George C. (1992). Bayesian Inference in Statistical Analysis. New York: Wiley. ISBN 978-0-471-57428-6.
10. Milder, Avram L. (2021). Electron velocity distribution functions and Thomson scattering (PhD thesis). University of Rochester. hdl:1802/36536.
11. Dytso, Alex; Bustin, Ronit; Poor, H. Vincent; Shamai, Shlomo (2018). "Analytical properties of generalized Gaussian distributions". Journal of Statistical Distributions and Applications. 5 (1): 6. doi:10.1186/s40488-018-0088-5.
12. Bochner, Salomon (1937). "Stable laws of probability and completely monotone functions". Duke Mathematical Journal. 3 (4): 726–728. doi:10.1215/s0012-7094-37-00360-0.
13. Sinz, Fabian; Gerwinn, Sebastian; Bethge, Matthias (May 2009). "Characterization of the p-Generalized Normal Distribution". Journal of Multivariate Analysis. 100 (5): 817–820. doi:10.1016/j.jmva.2008.07.006.
14. Kac, M. (1939). "On a characterization of the normal distribution". American Journal of Mathematics. 61 (3): 726–728. doi:10.2307/2371328. JSTOR 2371328.
15. Hosking, J.R.M.; Wallis, J.R. (1997). Regional Frequency Analysis: An Approach Based on L-Moments. Cambridge University Press. ISBN 0-521-43045-3. Section A.8.
16. Documentation for the lmomco R package.
17. Kullback, S.; Leibler, R.A. (1951). "On information and sufficiency". The Annals of Mathematical Statistics. 22 (1): 79–86. doi:10.1214/aoms/1177729694.
18. Quintero-Rincón, A.; Pereyra, M.; D'Giano, C.; Batatia, H.; Risk, M. (2017). "A visual EEG epilepsy detection method based on a wavelet statistical representation and the Kullback-Leibler divergence". IFMBE Proceedings. 60: 13–16. doi:10.1007/978-981-10-4086-3_4. hdl:11336/77054. ISBN 978-981-10-4085-6.
19. Yan, Yuan; Genton, Marc G. (June 2019). "The Tukey g-and-h distribution". Significance. 16 (3): 12–13. doi:10.1111/j.1740-9713.2019.01273.x.