Multivariate stable distribution

From Wikipedia, the free encyclopedia
Concept in probability theory
multivariate stable
Probability density function: heatmap showing a multivariate (bivariate) stable distribution with α = 1.1

Parameters: $\alpha \in (0,2]$ – exponent; $\delta \in \mathbb{R}^{d}$ – shift/location vector; $\Lambda$ – a finite spectral measure on the sphere
Support: $u \in \mathbb{R}^{d}$
PDF: no analytic expression
CDF: no analytic expression
Variance: infinite when $\alpha < 2$
CF: see text

The multivariate stable distribution is a multivariate probability distribution that generalises the univariate stable distribution: every linear combination of the components of a multivariate stable random vector is univariate stable. As in the univariate case, the distribution is defined in terms of its characteristic function.

The multivariate stable distribution can also be thought of as an extension of the multivariate normal distribution. It has a parameter α, defined over the range 0 < α ≤ 2; the case α = 2 is equivalent to the multivariate normal distribution. The distribution also has a skewness parameter that allows for asymmetric distributions, whereas the multivariate normal distribution is always symmetric.

Definition


Let $\mathbb{S}$ be the Euclidean unit sphere in $\mathbb{R}^{d}$, that is, $\mathbb{S}=\{u\in\mathbb{R}^{d}\colon |u|=1\}$. A random vector $X$ has a multivariate stable distribution, denoted $X\sim S(\alpha,\Lambda,\delta)$, if the joint characteristic function of $X$ is[1]

$$\operatorname{E}\exp(iu^{T}X)=\exp\left\{-\int_{\mathbb{S}}\left\{|u^{T}s|^{\alpha}+i\nu(u^{T}s,\alpha)\right\}\,\Lambda(ds)+iu^{T}\delta\right\},$$

where 0 < α < 2, and for $y\in\mathbb{R}$

$$\nu(y,\alpha)=\begin{cases}-\operatorname{sign}(y)\tan(\pi\alpha/2)\,|y|^{\alpha}&\alpha\neq 1,\\(2/\pi)\,y\ln|y|&\alpha=1.\end{cases}$$

This is essentially the result of Feldheim,[2] that any stable random vector can be characterized by a spectral measure $\Lambda$ (a finite measure on $\mathbb{S}$) and a shift vector $\delta\in\mathbb{R}^{d}$.
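The piecewise function ν above is straightforward to evaluate numerically. A minimal NumPy sketch (the function name is illustrative; the convention 0 · ln 0 = 0 is used at y = 0):

```python
import numpy as np

def nu(y, alpha):
    """Evaluate nu(y, alpha) from the characteristic-function exponent above."""
    y = np.asarray(y, dtype=float)
    if alpha != 1:
        return -np.sign(y) * np.tan(np.pi * alpha / 2) * np.abs(y) ** alpha
    # alpha == 1 branch: (2/pi) * y * ln|y|, with the convention 0 * ln 0 = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (2 / np.pi) * y * np.log(np.abs(y))
    return np.where(y == 0, 0.0, out)
```

Note that ν is an odd function of y in both branches, which is what makes the characteristic function Hermitian.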

Parametrization using projections


Another way to describe a stable random vector is in terms of projections. For any vector $u$, the projection $u^{T}X$ is univariate $\alpha$-stable with some skewness $\beta(u)$, scale $\gamma(u)$, and shift $\delta(u)$. The notation $X\sim S(\alpha,\beta(\cdot),\gamma(\cdot),\delta(\cdot))$ is used if $X$ is stable with $u^{T}X\sim s(\alpha,\beta(u),\gamma(u),\delta(u))$ for every $u\in\mathbb{R}^{d}$. This is called the projection parametrization.

The spectral measure determines the projection parameter functions by:

$$\gamma(u)=\Bigl(\int_{\mathbb{S}}|u^{T}s|^{\alpha}\,\Lambda(ds)\Bigr)^{1/\alpha}$$

$$\beta(u)=\gamma(u)^{-\alpha}\int_{\mathbb{S}}|u^{T}s|^{\alpha}\operatorname{sign}(u^{T}s)\,\Lambda(ds)$$

$$\delta(u)=\begin{cases}u^{T}\delta&\alpha\neq 1\\u^{T}\delta-\int_{\mathbb{S}}\frac{\pi}{2}u^{T}s\ln|u^{T}s|\,\Lambda(ds)&\alpha=1\end{cases}$$
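When the spectral measure is discrete (point masses λ_j at unit vectors s_j), the integrals above reduce to weighted sums over the atoms, and the projection parameters can be computed directly. A sketch under that assumption (function and argument names are illustrative):

```python
import numpy as np

def projection_params(u, s, lam, alpha, delta):
    """gamma(u), beta(u), delta(u) for a discrete spectral measure with
    mass lam[j] at unit vector s[j] (rows of s); integrals over the
    sphere become weighted sums over the atoms."""
    p = s @ u                                   # u^T s_j for each atom
    g_alpha = np.sum(np.abs(p) ** alpha * lam)  # gamma(u)^alpha
    gamma = g_alpha ** (1 / alpha)
    beta = np.sum(np.abs(p) ** alpha * np.sign(p) * lam) / g_alpha
    if alpha != 1:
        shift = u @ delta
    else:
        # alpha == 1 correction term, with the convention 0 * ln 0 = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            term = (np.pi / 2) * p * np.log(np.abs(p))
        shift = u @ delta - np.sum(np.where(p == 0, 0.0, term) * lam)
    return gamma, beta, shift
```

For example, a unit mass at a standard basis vector makes the projection onto that axis totally skewed (β = 1) with unit scale.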

Special cases


There are special cases where the multivariate characteristic function takes a simpler form. Define ω, the exponent of the characteristic function of a stable marginal, as

$$\omega(y|\alpha,\beta)=\begin{cases}|y|^{\alpha}\left[1-i\beta\tan\left({\tfrac{\pi\alpha}{2}}\right)\operatorname{sign}(y)\right]&\alpha\neq 1\\|y|\left[1+i\beta{\tfrac{2}{\pi}}\operatorname{sign}(y)\ln|y|\right]&\alpha=1\end{cases}$$

Isotropic multivariate stable distribution


Here the characteristic function is $\operatorname{E}\exp(iu^{T}X)=\exp\{-\gamma_{0}^{\alpha}|u|^{\alpha}+iu^{T}\delta\}$. The spectral measure is a scalar multiple of the uniform distribution on the sphere, leading to radial/isotropic symmetry.[3] For the Gaussian case α = 2 this corresponds to independent components, but this is not the case when α < 2. Isotropy is a special case of ellipticity (see the next section): simply take $\Sigma$ to be a multiple of the identity matrix.

Elliptically contoured multivariate stable distribution


The elliptically contoured multivariate stable distribution is a special symmetric case of the multivariate stable distribution. X is α-stable and elliptically contoured iff it has joint characteristic function $\operatorname{E}\exp(iu^{T}X)=\exp\{-(u^{T}\Sigma u)^{\alpha/2}+iu^{T}\delta\}$ for some shift vector $\delta\in\mathbb{R}^{d}$ (equal to the mean when it exists) and some positive semidefinite matrix $\Sigma$ (akin to a correlation matrix, although the usual definition of correlation fails to be meaningful). Note the relation to the characteristic function of the multivariate normal distribution, $\operatorname{E}\exp(iu^{T}X)=\exp\{-u^{T}\Sigma u+iu^{T}\delta\}$, obtained when α = 2.
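Elliptically contoured stable vectors admit the well-known sub-Gaussian construction X = δ + √A · G, where A is a positive (α/2)-stable variable with Laplace transform exp(−t^{α/2}) and G ~ N(0, 2Σ) is independent of A; conditioning on A then reproduces the characteristic function above. The sketch below assumes this construction and a Chambers–Mallows–Stuck-type sampler for A; the scaling conventions are assumptions made here, not taken from the article:

```python
import numpy as np

def positive_stable(alpha_half, size, rng):
    """Sample A >= 0 with Laplace transform E[exp(-t*A)] = exp(-t**alpha_half),
    for 0 < alpha_half < 1 (Chambers-Mallows-Stuck form; a sketch)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    b = np.pi / 2  # skewness angle for a totally skewed (beta = 1) law
    return (np.sin(alpha_half * (V + b)) / np.cos(V) ** (1 / alpha_half)
            * (np.cos(V - alpha_half * (V + b)) / W) ** ((1 - alpha_half) / alpha_half))

def elliptical_stable(alpha, Sigma, delta, n, rng):
    """n draws of an elliptically contoured alpha-stable vector with
    CF exp{-(u^T Sigma u)^(alpha/2) + i u^T delta} via X = delta + sqrt(A) G,
    G ~ N(0, 2*Sigma): E[exp(-A u^T Sigma u)] = exp(-(u^T Sigma u)^(alpha/2))."""
    d = len(delta)
    A = positive_stable(alpha / 2, n, rng)
    G = rng.multivariate_normal(np.zeros(d), 2 * Sigma, size=n)
    return delta + np.sqrt(A)[:, None] * G
```

The construction makes the heavy tails explicit: a single positive stable mixing variable scales an entire Gaussian draw, so all components share the same extreme events.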

Independent components


The marginals are independent with $X_{j}\sim S(\alpha,\beta_{j},\gamma_{j},\delta_{j})$ iff the characteristic function is

$$\operatorname{E}\exp(iu^{T}X)=\exp\left\{-\sum_{j=1}^{m}\omega(u_{j}|\alpha,\beta_{j})\gamma_{j}^{\alpha}+iu^{T}\delta\right\}.$$

Observe that when α = 2 this reduces again to the multivariate normal; note that the i.i.d. case and the isotropic case do not coincide when α < 2. Independent components are a special case of a discrete spectral measure (see the next section), with the spectral measure supported on the standard unit vectors.

Heatmap showing a multivariate (bivariate) independent stable distribution with α = 1
Heatmap showing a multivariate (bivariate) independent stable distribution with α = 2
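Sampling from the independent-components case reduces to drawing each coordinate from a univariate stable law. A sketch using the standard Chambers–Mallows–Stuck method for α ≠ 1 (1-parametrization; function and parameter names are illustrative):

```python
import numpy as np

def rstable(alpha, beta, gamma, delta, size, rng):
    """Univariate stable sampler, Chambers-Mallows-Stuck method (alpha != 1)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    t = beta * np.tan(np.pi * alpha / 2)
    B = np.arctan(t) / alpha
    S = (1 + t ** 2) ** (1 / (2 * alpha))
    X = (S * np.sin(alpha * (V + B)) / np.cos(V) ** (1 / alpha)
         * (np.cos(V - alpha * (V + B)) / W) ** ((1 - alpha) / alpha))
    return gamma * X + delta

# Independent components: one univariate draw per coordinate.
rng = np.random.default_rng(1)
X = np.column_stack([rstable(1.7, 0.5, 1.0, 0.0, 1000, rng),
                     rstable(1.7, -0.5, 2.0, 1.0, 1000, rng)])
```

At α = 2 the formula collapses to 2 sin(V) √W, a normal variable with variance 2, matching the Gaussian limit noted above.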

Discrete


If the spectral measure is discrete with mass $\lambda_{j}$ at $s_{j}\in\mathbb{S}$, $j=1,\ldots,m$, the characteristic function is

$$\operatorname{E}\exp(iu^{T}X)=\exp\left\{-\sum_{j=1}^{m}\omega(u^{T}s_{j}|\alpha,1)\lambda_{j}^{\alpha}+iu^{T}\delta\right\}.$$
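The discrete-measure characteristic function can be evaluated directly from this sum. A NumPy sketch, with ω as defined in the special-cases section (function names are illustrative):

```python
import numpy as np

def omega(y, alpha, beta):
    """The exponent omega(y | alpha, beta) of a univariate stable CF."""
    y = np.asarray(y, dtype=float)
    if alpha != 1:
        return np.abs(y) ** alpha * (1 - 1j * beta * np.tan(np.pi * alpha / 2) * np.sign(y))
    # alpha == 1 branch, with the convention 0 * ln 0 = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        log_y = np.where(y == 0, 0.0, np.log(np.abs(y)))
    return np.abs(y) * (1 + 1j * beta * (2 / np.pi) * np.sign(y) * log_y)

def discrete_stable_cf(u, s, lam, alpha, delta):
    """E exp(i u^T X) for a discrete spectral measure: weight lam[j] at s[j]."""
    p = s @ u  # u^T s_j for each atom
    return np.exp(-np.sum(omega(p, alpha, 1.0) * lam ** alpha) + 1j * (u @ delta))
```

With atoms on the standard basis vectors this reproduces the independent-components characteristic function of the previous section (with β_j = 1 and γ_j = λ_j).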

Linear properties


If $X\sim S(\alpha,\beta(\cdot),\gamma(\cdot),\delta(\cdot))$ is d-dimensional α-stable, A is an m × d matrix, and $b\in\mathbb{R}^{m}$, then AX + b is m-dimensional α-stable with scale function $\gamma\circ A^{T}$, skewness function $\beta\circ A^{T}$, and location function $\delta\circ A^{T}+b^{T}$.

Inference in the independent-component model


Bickson and Guestrin showed how to perform inference in closed form in a linear model (or, equivalently, a factor analysis model) with independent stable components.[4]

More specifically, let $X_{i}\sim S(\alpha,\beta_{x_{i}},\gamma_{x_{i}},\delta_{x_{i}})$, $i=1,\ldots,n$, be a family of independent unobserved univariate random variables drawn from a stable distribution. Given a known linear relation matrix A of size $n\times n$, the observations $Y_{i}=\sum_{j=1}^{n}A_{ij}X_{j}$ are distributed as a convolution of the hidden factors $X_{j}$, hence $Y_{i}\sim S(\alpha,\beta_{y_{i}},\gamma_{y_{i}},\delta_{y_{i}})$. The inference task is to compute the most likely $X_{i}$ given the matrix A and the observations $Y_{i}$; this can be done in closed form in O(n³) operations.

An application of this construction is multiuser detection with stable, non-Gaussian noise.

Notes

  1. ^ J. Nolan, "Multivariate stable densities and distribution functions: general and elliptical case", BundesBank Conference, Eltville, Germany, 11 November 2005. See also http://academic2.american.edu/~jpnolan/stable/stable.html
  2. ^ Feldheim, E. (1937). Étude de la stabilité des lois de probabilité. Ph.D. thesis, Faculté des Sciences de Paris, Paris, France.
  3. ^ User manual for STABLE 5.1 Matlab version, Robust Analysis Inc., http://www.RobustAnalysis.com
  4. ^ D. Bickson and C. Guestrin, "Inference in linear models with multivariate heavy-tails", Neural Information Processing Systems (NIPS) 2010, Vancouver, Canada, Dec. 2010. https://www.cs.cmu.edu/~bickson/stable/
Retrieved from "https://en.wikipedia.org/w/index.php?title=Multivariate_stable_distribution&oldid=1310415302"