
Gaussian function

From Wikipedia, the free encyclopedia

In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the base form

f(x) = \exp(-x^2)

and with parametric extension

f(x) = a \exp\left(-\frac{(x-b)^2}{2c^2}\right)

for arbitrary real constants a, b and non-zero c. It is named after the mathematician Carl Friedrich Gauss. The graph of a Gaussian is a characteristic symmetric "bell curve" shape. The parameter a is the height of the curve's peak, b is the position of the center of the peak, and c (the standard deviation, sometimes called the Gaussian RMS width) controls the width of the "bell".
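The role of the three parameters is easy to confirm numerically. The following short Python sketch (parameter values are illustrative) checks that the peak has height a at x = b, and that one standard deviation away the value drops to a·e^{-1/2}:

```python
import math

def gaussian(x, a, b, c):
    """Parametric Gaussian f(x) = a * exp(-(x - b)^2 / (2 c^2))."""
    return a * math.exp(-(x - b) ** 2 / (2 * c ** 2))

# Peak height a at the center x = b ...
peak = gaussian(2.0, 3.0, 2.0, 0.5)

# ... and a * exp(-1/2) one standard deviation c away from the center.
one_sigma = gaussian(2.5, 3.0, 2.0, 0.5)
```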

Gaussian functions are often used to represent the probability density function of a normally distributed random variable with expected value μ = b and variance σ^2 = c^2. In this case, the Gaussian is of the form[1]

g(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}\right).

Gaussian functions are widely used in statistics to describe the normal distributions, in signal processing to define Gaussian filters, in image processing where two-dimensional Gaussians are used for Gaussian blurs, and in mathematics to solve heat equations and diffusion equations and to define the Weierstrass transform. They are also abundantly used in quantum chemistry to form basis sets.

Properties


Gaussian functions arise by composing the exponential function with a concave quadratic function:

f(x) = \exp(\alpha x^2 + \beta x + \gamma),

where

\alpha = -\frac{1}{2c^2}, \qquad \beta = \frac{b}{c^2}, \qquad \gamma = \ln a - \frac{b^2}{2c^2}.

(Note: a = 1/(\sigma\sqrt{2\pi}) in \ln a, not to be confused with \alpha = -1/(2c^2).)

The Gaussian functions are thus those functions whose logarithm is a concave quadratic function.

The parameter c is related to the full width at half maximum (FWHM) of the peak according to

\text{FWHM} = 2\sqrt{2\ln 2}\, c \approx 2.35482\, c.

The function may then be expressed in terms of the FWHM, represented by w:

f(x) = a e^{-4(\ln 2)(x-b)^2/w^2}.
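Both the FWHM relation and the re-parameterization by w can be verified directly; a short Python sketch with illustrative parameter values:

```python
import math

a, b, c = 2.0, 1.0, 0.7
fwhm = 2 * math.sqrt(2 * math.log(2)) * c    # approximately 2.35482 * c

def f(x):
    # Gaussian parameterized by the standard deviation c
    return a * math.exp(-(x - b) ** 2 / (2 * c ** 2))

def f_w(x, w):
    # The same Gaussian re-parameterized by its FWHM w
    return a * math.exp(-4 * math.log(2) * (x - b) ** 2 / w ** 2)

half_max = f(b - fwhm / 2)      # equals a / 2, by definition of the FWHM
agree = f_w(b + 0.3, fwhm)      # matches f(b + 0.3)
```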

Alternatively, the parameter c can be interpreted by saying that the two inflection points of the function occur at x = b ± c.

The full width at tenth of maximum (FWTM) for a Gaussian could be of interest and is

\text{FWTM} = 2\sqrt{2\ln 10}\, c \approx 4.29193\, c.

Gaussian functions are analytic, and their limit as x → ∞ is 0 (for the above case of b = 0).

Gaussian functions are among those functions that are elementary but lack elementary antiderivatives; the integral of the Gaussian function is the error function:

\int e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2} \operatorname{erf} x + C.

Nonetheless, their improper integrals over the whole real line can be evaluated exactly, using the Gaussian integral

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi},

and one obtains

\int_{-\infty}^{\infty} a e^{-(x-b)^2/(2c^2)}\,dx = ac\cdot\sqrt{2\pi}.

Normalized Gaussian curves with expected value μ and variance σ^2. The corresponding parameters are a = 1/(\sigma\sqrt{2\pi}), b = μ and c = σ.

This integral is 1 if and only if a = 1/(c\sqrt{2\pi}) (the normalizing constant), and in this case the Gaussian is the probability density function of a normally distributed random variable with expected value μ = b and variance σ^2 = c^2:

g(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(x-\mu)^2}{2\sigma^2}\right).

These Gaussians are plotted in the accompanying figure.
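The closed form ∫ a e^{-(x-b)^2/(2c^2)} dx = ac√(2π) is easy to check numerically; a Python sketch using a midpoint Riemann sum (grid size and parameter values chosen for illustration):

```python
import math

a, b, c = 1.5, -2.0, 0.8

# Midpoint Riemann sum over [b - 10c, b + 10c]; the tails beyond
# ten standard deviations contribute negligibly.
n = 200_000
lo, hi = b - 10 * c, b + 10 * c
dx = (hi - lo) / n
total = sum(a * math.exp(-(lo + (i + 0.5) * dx - b) ** 2 / (2 * c ** 2))
            for i in range(n)) * dx

exact = a * c * math.sqrt(2 * math.pi)
# Choosing a = 1 / (c sqrt(2 pi)) would make the same integral exactly 1,
# i.e. the normalized probability density.
```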

The product of two Gaussian functions is a Gaussian, and the convolution of two Gaussian functions is also a Gaussian, with variance being the sum of the original variances: c^2 = c_1^2 + c_2^2. The product of two Gaussian probability density functions (PDFs), though, is not in general a Gaussian PDF.
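The variance-addition property of convolution can be confirmed with a direct discrete convolution in Python (grid spacing and the widths c1 = 1, c2 = 0.5 are illustrative):

```python
import math

def gauss(x, c):
    # Unnormalized zero-mean Gaussian with width parameter c
    return math.exp(-x * x / (2 * c * c))

dx = 0.05
xs = [i * dx for i in range(-160, 161)]          # grid on [-8, 8]
g1 = [gauss(x, 1.0) for x in xs]

# Direct discrete convolution h(y) = sum_x g1(x) g2(y - x) dx,
# evaluated on the same grid (g2 is re-evaluated off-grid for brevity).
h = []
for y in xs:
    acc = 0.0
    for x, v in zip(xs, g1):
        acc += v * gauss(y - x, 0.5)
    h.append(acc * dx)

# Second moment of the convolution: should be c1^2 + c2^2 = 1.0 + 0.25.
norm = sum(h) * dx
var = sum(y * y * v for y, v in zip(xs, h)) * dx / norm
```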

The Fourier uncertainty principle becomes an equality if and only if (modulated) Gaussian functions are considered.[2]

Taking the Fourier transform (unitary, angular-frequency convention) of a Gaussian function with parameters a = 1, b = 0 and c yields another Gaussian function, with parameters c, b = 0 and 1/c.[3] So in particular the Gaussian functions with b = 0 and c = 1 are kept fixed by the Fourier transform (they are eigenfunctions of the Fourier transform with eigenvalue 1). A physical realization is that of the diffraction pattern: for example, a photographic slide whose transmittance has a Gaussian variation is also a Gaussian function.

The fact that the Gaussian function is an eigenfunction of the continuous Fourier transform allows us to derive the following identity from the Poisson summation formula:

\sum_{k\in\mathbb{Z}} \exp\left(-\pi\cdot\left(\frac{k}{c}\right)^2\right) = c \cdot \sum_{k\in\mathbb{Z}} \exp\left(-\pi\cdot(kc)^2\right).
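This identity is straightforward to verify numerically, since the terms decay like exp(-πk²) and a modest truncation is far below double precision; a Python sketch with an arbitrary c:

```python
import math

def theta_sum(c, terms=200):
    # Truncated version of sum over k in Z of exp(-pi * (k / c)^2)
    return sum(math.exp(-math.pi * (k / c) ** 2)
               for k in range(-terms, terms + 1))

c = 1.7
lhs = theta_sum(c)           # left-hand side of the identity
rhs = c * theta_sum(1 / c)   # right-hand side: note (k c)^2 = (k / (1/c))^2
```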

Integral of a Gaussian function


The integral of an arbitrary Gaussian function is

\int_{-\infty}^{\infty} a\, e^{-(x-b)^2/(2c^2)}\,dx = a\,|c|\,\sqrt{2\pi}.

An alternative form is

\int_{-\infty}^{\infty} k\, e^{-fx^2+gx+h}\,dx = \int_{-\infty}^{\infty} k\, e^{-f(x-g/(2f))^2 + g^2/(4f) + h}\,dx = k\,\sqrt{\frac{\pi}{f}}\,\exp\left(\frac{g^2}{4f}+h\right),

where f must be strictly positive for the integral to converge.
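The alternative form can also be checked by a midpoint sum; a Python sketch with illustrative constants k, f, g, h (the peak sits at x = g/(2f)):

```python
import math

k, f, g, h = 2.0, 0.5, 1.0, -0.3

# Midpoint sum on a range wide enough around the shifted peak at x = 1
n, lo, hi = 100_000, -30.0, 32.0
dx = (hi - lo) / n
num = sum(k * math.exp(-f * x * x + g * x + h)
          for x in (lo + (i + 0.5) * dx for i in range(n))) * dx

# Closed form after completing the square
closed = k * math.sqrt(math.pi / f) * math.exp(g * g / (4 * f) + h)
```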

Relation to standard Gaussian integral


The integral

\int_{-\infty}^{\infty} a e^{-(x-b)^2/(2c^2)}\,dx

for some real constants a, b and c > 0 can be calculated by putting it into the form of a Gaussian integral. First, the constant a can simply be factored out of the integral. Next, the variable of integration is changed from x to y = x − b:

a \int_{-\infty}^{\infty} e^{-y^2/(2c^2)}\,dy,

and then to z = y/\sqrt{2c^2}:

a\sqrt{2c^2} \int_{-\infty}^{\infty} e^{-z^2}\,dz.

Then, using the Gaussian integral identity

\int_{-\infty}^{\infty} e^{-z^2}\,dz = \sqrt{\pi},

we have

\int_{-\infty}^{\infty} a e^{-(x-b)^2/(2c^2)}\,dx = a\sqrt{2\pi c^2}.

Two-dimensional Gaussian function

3D plot of a Gaussian function with a two-dimensional domain

Base form:

f(x, y) = \exp(-x^2 - y^2)

In two dimensions, the power to which e is raised in the Gaussian function is any negative-definite quadratic form. Consequently, the level sets of the Gaussian will always be ellipses.

A particular example of a two-dimensional Gaussian function is

f(x,y) = A \exp\left(-\left(\frac{(x-x_0)^2}{2\sigma_X^2} + \frac{(y-y_0)^2}{2\sigma_Y^2}\right)\right).

Here the coefficient A is the amplitude, (x_0, y_0) is the center, and σ_X, σ_Y are the x and y spreads of the blob. The figure on the right was created using A = 1, x_0 = 0, y_0 = 0, σ_X = σ_Y = 1.

The volume under the Gaussian function is given by

V = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,dx\,dy = 2\pi A \sigma_X \sigma_Y.
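Because the axis-aligned 2D Gaussian separates into a product of two 1D Gaussians, the volume formula can be checked with two 1D midpoint sums; a Python sketch with illustrative A, σ_X, σ_Y:

```python
import math

A, sx, sy = 1.3, 0.9, 1.4

def line_integral(sigma, n=100_000, span=10.0):
    # 1D midpoint-rule integral of exp(-t^2 / (2 sigma^2))
    lo, hi = -span * sigma, span * sigma
    dt = (hi - lo) / n
    return sum(math.exp(-(lo + (i + 0.5) * dt) ** 2 / (2 * sigma ** 2))
               for i in range(n)) * dt

# The double integral factors into a product of two line integrals.
volume = A * line_integral(sx) * line_integral(sy)
expected = 2 * math.pi * A * sx * sy
```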

In general, a two-dimensional elliptical Gaussian function is expressed as

f(x,y) = A \exp\left(-\left(a(x-x_0)^2 + 2b(x-x_0)(y-y_0) + c(y-y_0)^2\right)\right),

where the matrix

\begin{bmatrix} a & b \\ b & c \end{bmatrix}

is positive-definite.

Using this formulation, the figure on the right can be created using A = 1, (x_0, y_0) = (0, 0), a = c = 1/2, b = 0.

Meaning of parameters for the general equation


For the general form of the equation the coefficient A is the height of the peak and (x_0, y_0) is the center of the blob.

If we set

\begin{aligned}
a &= \frac{\cos^2\theta}{2\sigma_X^2} + \frac{\sin^2\theta}{2\sigma_Y^2}, \\
b &= -\frac{\sin\theta\cos\theta}{2\sigma_X^2} + \frac{\sin\theta\cos\theta}{2\sigma_Y^2}, \\
c &= \frac{\sin^2\theta}{2\sigma_X^2} + \frac{\cos^2\theta}{2\sigma_Y^2},
\end{aligned}

then we rotate the blob by a positive, counter-clockwise angle θ (for negative, clockwise rotation, invert the signs in the b coefficient).[4]

To get back the coefficients θ, σ_X and σ_Y from a, b and c, use

\begin{aligned}
\theta &= \frac{1}{2}\arctan\left(\frac{2b}{a-c}\right), \quad \theta \in [-45^\circ, 45^\circ], \\
\sigma_X^2 &= \frac{1}{2(a\cos^2\theta + 2b\cos\theta\sin\theta + c\sin^2\theta)}, \\
\sigma_Y^2 &= \frac{1}{2(a\sin^2\theta - 2b\cos\theta\sin\theta + c\cos^2\theta)}.
\end{aligned}
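A Python round trip through both maps shows that the widths are recovered exactly; note that with the sign conventions as written, the arctan can return the angle with opposite sign (the two conventions mentioned above differ in the sign of b), so the sketch only compares the magnitude of θ:

```python
import math

theta0, sx, sy = 0.3, 1.0, 2.0

# Forward map: (theta, sigma_X, sigma_Y) -> (a, b, c)
a = math.cos(theta0) ** 2 / (2 * sx ** 2) + math.sin(theta0) ** 2 / (2 * sy ** 2)
b = (-math.sin(theta0) * math.cos(theta0) / (2 * sx ** 2)
     + math.sin(theta0) * math.cos(theta0) / (2 * sy ** 2))
c = math.sin(theta0) ** 2 / (2 * sx ** 2) + math.cos(theta0) ** 2 / (2 * sy ** 2)

# Inverse map: (a, b, c) -> (theta, sigma_X^2, sigma_Y^2)
theta_r = 0.5 * math.atan(2 * b / (a - c))
ct, st = math.cos(theta_r), math.sin(theta_r)
sx2 = 1 / (2 * (a * ct ** 2 + 2 * b * ct * st + c * st ** 2))
sy2 = 1 / (2 * (a * st ** 2 - 2 * b * ct * st + c * ct ** 2))
```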

Example rotations of Gaussian blobs can be seen in the following examples:

θ = 0
θ = −π/6
θ = −π/3

Using the following Octave code, one can easily see the effect of changing the parameters:

A = 1;
x0 = 0; y0 = 0;
sigma_X = 1;
sigma_Y = 2;

[X, Y] = meshgrid(-5:.1:5, -5:.1:5);

for theta = 0:pi/100:pi
    a = cos(theta)^2 / (2 * sigma_X^2) + sin(theta)^2 / (2 * sigma_Y^2);
    b = sin(2 * theta) / (4 * sigma_X^2) - sin(2 * theta) / (4 * sigma_Y^2);
    c = sin(theta)^2 / (2 * sigma_X^2) + cos(theta)^2 / (2 * sigma_Y^2);

    Z = A * exp(-(a * (X - x0).^2 + 2 * b * (X - x0) .* (Y - y0) + c * (Y - y0).^2));

    surf(X, Y, Z);
    shading interp;
    view(-36, 36);
    waitforbuttonpress
end

Such functions are often used in image processing and in computational models of visual system function; see the articles on scale space and affine shape adaptation.

Also see multivariate normal distribution.

Higher-order Gaussian or super-Gaussian function or generalized Gaussian function


A more general formulation of a Gaussian function with a flat top and Gaussian fall-off can be taken by raising the content of the exponent to a power P:

f(x) = A \exp\left(-\left(\frac{(x-x_0)^2}{2\sigma_X^2}\right)^P\right).

This function is known as a super-Gaussian function and is often used for Gaussian beam formulation.[5] This function may also be expressed in terms of the full width at half maximum (FWHM), represented by w:

f(x) = A \exp\left(-\ln 2\left(4\frac{(x-x_0)^2}{w^2}\right)^P\right).
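A useful property of the FWHM parameterization is that the value at x_0 ± w/2 is A/2 for every exponent P (the inner factor is exactly 1 there, so the exponent is −ln 2 regardless of P); a short Python sketch with illustrative values:

```python
import math

A, x0, w, P = 1.0, 0.0, 2.0, 3.0

def super_gauss(x):
    # Super-Gaussian parameterized by its FWHM w and exponent P
    return A * math.exp(-math.log(2) * (4 * (x - x0) ** 2 / w ** 2) ** P)

half = super_gauss(x0 + w / 2)   # A / 2 for any P: that is what makes w the FWHM
center = super_gauss(x0)         # peak value A
```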

In a two-dimensional formulation, a Gaussian function along x and y can be combined[6] with potentially different P_X and P_Y to form a rectangular Gaussian distribution:

f(x,y) = A \exp\left(-\left(\frac{(x-x_0)^2}{2\sigma_X^2}\right)^{P_X} - \left(\frac{(y-y_0)^2}{2\sigma_Y^2}\right)^{P_Y}\right),

or an elliptical Gaussian distribution:

f(x,y) = A \exp\left(-\left(\frac{(x-x_0)^2}{2\sigma_X^2} + \frac{(y-y_0)^2}{2\sigma_Y^2}\right)^P\right).

Multi-dimensional Gaussian function

Main article: Multivariate normal distribution

In an n-dimensional space a Gaussian function can be defined as

f(x) = \exp(-x^\mathsf{T} C x),

where x = [x_1 \cdots x_n] is a column of n coordinates, C is a positive-definite n × n matrix, and ^\mathsf{T} denotes matrix transposition.

The integral of this Gaussian function over the whole n-dimensional space is given as

\int_{\mathbb{R}^n} \exp(-x^\mathsf{T} C x)\,dx = \sqrt{\frac{\pi^n}{\det C}}.

It can be easily calculated by diagonalizing the matrixC{\displaystyle C} and changing the integration variables to the eigenvectors ofC{\displaystyle C}.
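The determinant formula is easy to confirm numerically in two dimensions; a Python sketch with an illustrative symmetric positive-definite C and a midpoint rule on a square grid:

```python
import math

C = [[1.0, 0.3], [0.3, 2.0]]          # symmetric positive-definite
det_C = C[0][0] * C[1][1] - C[0][1] * C[1][0]

# Midpoint rule on a square grid; the integrand decays fast enough
# that [-6, 6]^2 captures the integral to high accuracy.
n, lo, hi = 400, -6.0, 6.0
d = (hi - lo) / n
total = 0.0
for i in range(n):
    x = lo + (i + 0.5) * d
    for j in range(n):
        y = lo + (j + 0.5) * d
        q = C[0][0] * x * x + 2 * C[0][1] * x * y + C[1][1] * y * y
        total += math.exp(-q)
total *= d * d

exact = math.sqrt(math.pi ** 2 / det_C)
```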

More generally, a shifted Gaussian function is defined as

f(x) = \exp(-x^\mathsf{T} C x + s^\mathsf{T} x),

where s = [s_1 \cdots s_n] is the shift vector and the matrix C can be assumed to be symmetric, C^\mathsf{T} = C, and positive-definite. The following integrals with this function can be calculated with the same technique:

\int_{\mathbb{R}^n} e^{-x^\mathsf{T} C x + v^\mathsf{T} x}\,dx = \sqrt{\frac{\pi^n}{\det C}} \exp\left(\frac{1}{4} v^\mathsf{T} C^{-1} v\right) \equiv \mathcal{M}.

\int_{\mathbb{R}^n} e^{-x^\mathsf{T} C x + v^\mathsf{T} x} (a^\mathsf{T} x)\,dx = (a^\mathsf{T} u) \cdot \mathcal{M}, \text{ where } u = \frac{1}{2} C^{-1} v.

\int_{\mathbb{R}^n} e^{-x^\mathsf{T} C x + v^\mathsf{T} x} (x^\mathsf{T} D x)\,dx = \left(u^\mathsf{T} D u + \frac{1}{2} \operatorname{tr}(D C^{-1})\right) \cdot \mathcal{M}.

\begin{aligned}
&\int_{\mathbb{R}^n} e^{-x^\mathsf{T} C' x + s'^\mathsf{T} x} \left(-\frac{\partial}{\partial x} \Lambda \frac{\partial}{\partial x}\right) e^{-x^\mathsf{T} C x + s^\mathsf{T} x}\,dx \\
&\qquad = \left(2\operatorname{tr}(C' \Lambda C B^{-1}) + 4 u^\mathsf{T} C' \Lambda C u - 2 u^\mathsf{T} (C' \Lambda s + C \Lambda s') + s'^\mathsf{T} \Lambda s\right) \cdot \mathcal{M},
\end{aligned}

where u = \frac{1}{2} B^{-1} v,\ v = s + s',\ B = C + C'.

Estimation of parameters

See also: Normal distribution § Estimation of parameters

A number of fields such as stellar photometry, Gaussian beam characterization, and emission/absorption line spectroscopy work with sampled Gaussian functions and need to accurately estimate the height, position, and width parameters of the function. There are three unknown parameters for a 1D Gaussian function (a, b, c) and five for a 2D Gaussian function (A; x_0, y_0; σ_X, σ_Y).

The most common method for estimating the Gaussian parameters is to take the logarithm of the data and fit a parabola to the resulting data set.[7][8] While this provides a simple curve fitting procedure, the resulting algorithm may be biased by excessively weighting small data values, which can produce large errors in the profile estimate. One can partially compensate for this problem through weighted least squares estimation, reducing the weight of small data values, but this too can be biased by allowing the tail of the Gaussian to dominate the fit. In order to remove the bias, one can instead use an iteratively reweighted least squares procedure, in which the weights are updated at each iteration.[8] It is also possible to perform non-linear regression directly on the data, without involving the logarithmic data transformation; for more options, see probability distribution fitting.
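The log-then-parabola idea can be sketched in a few lines of Python. With noiseless, equally spaced samples three points determine the parabola exactly, and the Gaussian parameters follow from the parabola coefficients (ln f = p0 + p1 x + p2 x², with p2 = −1/(2c²), p1 = b/c²); real, noisy data would instead need the (weighted) least squares fits described above:

```python
import math

a_true, b_true, c_true = 2.0, 5.0, 1.5

def g(x):
    return a_true * math.exp(-(x - b_true) ** 2 / (2 * c_true ** 2))

# Three noiseless, equally spaced samples of ln f(x)
h = 1.0
x1, x2, x3 = 4.0, 5.0, 6.0
y1, y2, y3 = (math.log(g(x)) for x in (x1, x2, x3))

# Parabola coefficients from finite differences
p2 = (y1 - 2 * y2 + y3) / (2 * h * h)
p1 = (y3 - y1) / (2 * h) - 2 * p2 * x2
p0 = y2 - p1 * x2 - p2 * x2 * x2

# Recover the Gaussian parameters from the parabola coefficients
c_est = math.sqrt(-1 / (2 * p2))
b_est = -p1 / (2 * p2)
a_est = math.exp(p0 - p1 * p1 / (4 * p2))
```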

Parameter precision


Once one has an algorithm for estimating the Gaussian function parameters, it is also important to know how precise those estimates are. Any least squares estimation algorithm can provide numerical estimates for the variance of each parameter (i.e., the variance of the estimated height, position, and width of the function). One can also use Cramér–Rao bound theory to obtain an analytical expression for the lower bound on the parameter variances, given certain assumptions about the data.[9][10]

  1. The noise in the measured profile is either i.i.d. Gaussian, or the noise is Poisson-distributed.
  2. The spacing between each sampling (i.e. the distance between pixels measuring the data) is uniform.
  3. The peak is "well-sampled", so that less than 10% of the area or volume under the peak (area if a 1D Gaussian, volume if a 2D Gaussian) lies outside the measurement region.
  4. The width of the peak is much larger than the distance between sample locations (i.e. the detector pixels must be at least 5 times smaller than the Gaussian FWHM).

When these assumptions are satisfied, the following covariance matrix K applies for the 1D profile parameters a, b, and c under i.i.d. Gaussian noise and under Poisson noise:[9]

\mathbf{K}_\text{Gauss} = \frac{\sigma^2}{\sqrt{\pi}\,\delta_X Q^2} \begin{pmatrix} \frac{3}{2c} & 0 & \frac{-1}{a} \\ 0 & \frac{2c}{a^2} & 0 \\ \frac{-1}{a} & 0 & \frac{2c}{a^2} \end{pmatrix}, \qquad \mathbf{K}_\text{Poiss} = \frac{1}{\sqrt{2\pi}} \begin{pmatrix} \frac{3a}{2c} & 0 & -\frac{1}{2} \\ 0 & \frac{c}{a} & 0 \\ -\frac{1}{2} & 0 & \frac{c}{2a} \end{pmatrix},

where \delta_X is the width of the pixels used to sample the function, Q is the quantum efficiency of the detector, and σ indicates the standard deviation of the measurement noise. Thus, the individual variances for the parameters are, in the Gaussian noise case,

\begin{aligned}
\operatorname{var}(a) &= \frac{3\sigma^2}{2\sqrt{\pi}\,\delta_X Q^2 c} \\
\operatorname{var}(b) &= \frac{2\sigma^2 c}{\delta_X \sqrt{\pi}\, Q^2 a^2} \\
\operatorname{var}(c) &= \frac{2\sigma^2 c}{\delta_X \sqrt{\pi}\, Q^2 a^2}
\end{aligned}

and in the Poisson noise case,

\begin{aligned}
\operatorname{var}(a) &= \frac{3a}{2\sqrt{2\pi}\,c} \\
\operatorname{var}(b) &= \frac{c}{\sqrt{2\pi}\,a} \\
\operatorname{var}(c) &= \frac{c}{2\sqrt{2\pi}\,a}.
\end{aligned}

For the 2D profile parameters giving the amplitude A, position (x_0, y_0), and width (σ_X, σ_Y) of the profile, the following covariance matrices apply:[10]

\mathbf{K}_\text{Gauss} = \frac{\sigma^2}{\pi \delta_X \delta_Y Q^2} \begin{pmatrix} \frac{2}{\sigma_X \sigma_Y} & 0 & 0 & \frac{-1}{A\sigma_Y} & \frac{-1}{A\sigma_X} \\ 0 & \frac{2\sigma_X}{A^2\sigma_Y} & 0 & 0 & 0 \\ 0 & 0 & \frac{2\sigma_Y}{A^2\sigma_X} & 0 & 0 \\ \frac{-1}{A\sigma_Y} & 0 & 0 & \frac{2\sigma_X}{A^2\sigma_Y} & 0 \\ \frac{-1}{A\sigma_X} & 0 & 0 & 0 & \frac{2\sigma_Y}{A^2\sigma_X} \end{pmatrix}

\mathbf{K}_\text{Poisson} = \frac{1}{2\pi} \begin{pmatrix} \frac{3A}{\sigma_X \sigma_Y} & 0 & 0 & \frac{-1}{\sigma_Y} & \frac{-1}{\sigma_X} \\ 0 & \frac{\sigma_X}{A\sigma_Y} & 0 & 0 & 0 \\ 0 & 0 & \frac{\sigma_Y}{A\sigma_X} & 0 & 0 \\ \frac{-1}{\sigma_Y} & 0 & 0 & \frac{2\sigma_X}{3A\sigma_Y} & \frac{1}{3A} \\ \frac{-1}{\sigma_X} & 0 & 0 & \frac{1}{3A} & \frac{2\sigma_Y}{3A\sigma_X} \end{pmatrix},

where the individual parameter variances are given by the diagonal elements of the covariance matrix.

Discrete Gaussian

Main article: Discrete Gaussian kernel
The discrete Gaussian kernel (solid), compared with the sampled Gaussian kernel (dashed) for scales t = 0.5, 1, 2, 4.

One may ask for a discrete analog to the Gaussian; this is necessary in discrete applications, particularly digital signal processing. A simple answer is to sample the continuous Gaussian, yielding the sampled Gaussian kernel. However, this discrete function does not have the discrete analogs of the properties of the continuous function, and can lead to undesired effects, as described in the article scale space implementation.

An alternative approach is to use the discrete Gaussian kernel:[11]

T(n, t) = e^{-t} I_n(t),

where I_n(t) denotes the modified Bessel function of integer order.

This is the discrete analog of the continuous Gaussian in that it is the solution to the discrete diffusion equation (discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation.[11][12]
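Two properties of the discrete Gaussian kernel can be checked numerically: it sums to 1 over all n, and its variance equals the scale parameter t. The Python sketch below evaluates I_n(t) from its power series (a hand-rolled series rather than a library call, to stay self-contained):

```python
import math

def bessel_i(n, t, terms=60):
    # Modified Bessel function of integer order via its power series:
    # I_n(t) = sum_m (t/2)^(2m+n) / (m! (m+n)!), with I_{-n} = I_n.
    n = abs(n)
    return sum((t / 2) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def discrete_gaussian(n, t):
    return math.exp(-t) * bessel_i(n, t)

t = 2.0
ks = range(-30, 31)                 # truncation: I_30(2) is negligibly small
kernel = [discrete_gaussian(k, t) for k in ks]

total = sum(kernel)                                    # sums to 1
variance = sum(k * k * v for k, v in zip(ks, kernel))  # equals t
```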

Applications


Gaussian functions appear in many contexts in the natural sciences, the social sciences, mathematics, and engineering.


References

  1. Squires, G. L. (2001). Practical Physics (4th ed.). Cambridge University Press. doi:10.1017/cbo9781139164498. ISBN 978-0-521-77940-1.
  2. Folland, Gerald B.; Sitaram, Alladi (1997). "The uncertainty principle: A mathematical survey". The Journal of Fourier Analysis and Applications. 3 (3): 207–238. Bibcode:1997JFAA....3..207F. doi:10.1007/BF02649110. ISSN 1069-5869.
  3. Weisstein, Eric W. "Fourier Transform – Gaussian". MathWorld. Retrieved 19 December 2013.
  4. Nawri, Nikolai. "Berechnung von Kovarianzellipsen" [Computation of covariance ellipses] (PDF). Archived from the original (PDF) on 2019-08-14. Retrieved 14 August 2019.
  5. Parent, A.; Morin, M.; Lavigne, P. (1992). "Propagation of super-Gaussian field distributions". Optical and Quantum Electronics. 24 (9): S1071–S1079.
  6. "GLAD optical software commands manual, entry on GAUSSIAN command" (PDF). Applied Optics Research. 2016-12-15.
  7. Caruana, Richard A.; Searle, Roger B.; Heller, Thomas; Shupack, Saul I. (1986). "Fast algorithm for the resolution of spectra". Analytical Chemistry. 58 (6): 1162–1167. doi:10.1021/ac00297a041. ISSN 0003-2700.
  8. Guo, Hongwei (2011). "A simple algorithm for fitting a Gaussian function". IEEE Signal Processing Magazine. 28 (9): 134–137.
  9. Hagen, N.; Kupinski, M.; Dereniak, E. L. (2007). "Gaussian profile estimation in one dimension". Applied Optics. 46: 5374–5383.
  10. Hagen, N.; Dereniak, E. L. (2008). "Gaussian profile estimation in two dimensions". Applied Optics. 47: 6842–6851.
  11. Lindeberg, T. (1990). "Scale-space for discrete signals". IEEE Transactions on Pattern Analysis and Machine Intelligence. 12 (3): 234–254.
  12. Campbell, J. (2007). "The SMM model as a boundary value problem using the discrete diffusion equation". Theoretical Population Biology. 72 (4): 539–546.
  13. Haddad, R. A.; Akansu, A. N. (1991). "A Class of Fast Gaussian Binomial Filters for Speech and Image Processing". IEEE Transactions on Signal Processing. 39 (3): 723–727.
  14. Honarkhah, M.; Caers, J. (2010). "Stochastic Simulation of Patterns Using Distance-Based Pattern Modeling". Mathematical Geosciences. 42: 487–517.

Further reading

  • Haberman, Richard (2013). "10.3.3 Inverse Fourier transform of a Gaussian". Applied Partial Differential Equations. Boston: Pearson. ISBN 978-0-321-79705-6.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Gaussian_function&oldid=1328641153"