In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the base form
$$f(x) = \exp(-x^2)$$
and with parametric extension
$$f(x) = a \exp\left(-\frac{(x - b)^2}{2c^2}\right)$$
for arbitrary real constants a, b and non-zero c. It is named after the mathematician Carl Friedrich Gauss. The graph of a Gaussian is a characteristic symmetric "bell curve" shape. The parameter a is the height of the curve's peak, b is the position of the center of the peak, and c (the standard deviation, sometimes called the Gaussian RMS width) controls the width of the "bell".
These Gaussians are plotted in the accompanying figure.
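For concreteness, a minimal sketch of evaluating this parametric form numerically; the use of NumPy and the function name `gaussian` are illustrative assumptions, not part of the definition:

```python
import numpy as np

def gaussian(x, a, b, c):
    """Parametric Gaussian: a * exp(-(x - b)^2 / (2 c^2))."""
    return a * np.exp(-((x - b) ** 2) / (2.0 * c ** 2))

x = np.linspace(-5, 5, 1001)
y = gaussian(x, a=1.0, b=0.0, c=1.0)
print(y.max(), x[y.argmax()])   # peak height a, located at x = b
```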
The product of two Gaussian functions is a Gaussian, and the convolution of two Gaussian functions is also a Gaussian, with variance being the sum of the original variances: $c^2 = c_1^2 + c_2^2$. The product of two Gaussian probability density functions (PDFs), though, is not in general a Gaussian PDF.
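The variance-addition property of the convolution can be checked numerically. The following sketch (assuming NumPy; grid spacing and widths are arbitrary illustrative choices) convolves two sampled Gaussian densities and compares the resulting variance with $c_1^2 + c_2^2$:

```python
import numpy as np

dx = 0.01
x = np.arange(-20, 20, dx)

def pdf(x, sigma):
    """Normalized zero-mean Gaussian density."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

c1, c2 = 1.0, 2.0
conv = np.convolve(pdf(x, c1), pdf(x, c2)) * dx      # discrete approximation of the convolution
xc = np.arange(len(conv)) * dx + 2 * x[0]            # support of the convolution
var = np.sum(xc**2 * conv) * dx - (np.sum(xc * conv) * dx) ** 2
print(var, c1**2 + c2**2)                            # both ~5.0
```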
The Fourier uncertainty principle becomes an equality if and only if (modulated) Gaussian functions are considered.[2]
Taking the Fourier transform (unitary, angular-frequency convention) of a Gaussian function with parameters a = 1, b = 0 and c yields another Gaussian function, with parameters c, b = 0 and 1/c.[3] So in particular the Gaussian functions with b = 0 and c = 1 are kept fixed by the Fourier transform (they are eigenfunctions of the Fourier transform with eigenvalue 1). A physical realization is that of the diffraction pattern: for example, a photographic slide whose transmittance has a Gaussian variation is also a Gaussian function.
The fact that the Gaussian function is an eigenfunction of the continuous Fourier transform allows us to derive the following identity from the Poisson summation formula:
$$\sum_{k\in\mathbb{Z}} \exp\left(-\pi\left(\frac{k}{c}\right)^2\right) = c \sum_{k\in\mathbb{Z}} \exp\left(-\pi (kc)^2\right).$$
The integral
$$\int_{-\infty}^{\infty} a\, e^{-(x-b)^2/(2c^2)}\, dx$$
for some real constants a, b and c > 0 can be calculated by putting it into the form of a Gaussian integral. First, the constant a can simply be factored out of the integral. Next, the variable of integration is changed from x to y = x − b:
$$a \int_{-\infty}^{\infty} e^{-y^2/(2c^2)}\, dy,$$
and then to $z = \dfrac{y}{\sqrt{2}\,c}$:
$$a\sqrt{2}\,c \int_{-\infty}^{\infty} e^{-z^2}\, dz = a\sqrt{2}\,c\,\sqrt{\pi} = a c \sqrt{2\pi}.$$
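The closed form can be checked against numerical quadrature; a small sketch assuming NumPy and SciPy, with arbitrary parameter values:

```python
import numpy as np
from scipy.integrate import quad

a, b, c = 3.0, 1.5, 0.7
val, _ = quad(lambda x: a * np.exp(-(x - b)**2 / (2 * c**2)), -np.inf, np.inf)
print(val, a * c * np.sqrt(2 * np.pi))   # both ~5.26
```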
3d plot of a Gaussian function with a two-dimensional domain
Base form:
$$f(x, y) = \exp\left(-(x^2 + y^2)\right)$$
In two dimensions, the power to which e is raised in the Gaussian function is any negative-definite quadratic form. Consequently, the level sets of the Gaussian will always be ellipses.
A particular example of a two-dimensional Gaussian function is
$$f(x, y) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2} + \frac{(y - y_0)^2}{2\sigma_Y^2}\right)\right).$$
Here the coefficient A is the amplitude, $(x_0, y_0)$ is the center, and $\sigma_X$, $\sigma_Y$ are the x and y spreads of the blob. The figure on the right was created using A = 1, $x_0 = 0$, $y_0 = 0$, $\sigma_X = \sigma_Y = 1$.
The volume under the Gaussian function is given by
$$V = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\, dx\, dy = 2\pi A \sigma_X \sigma_Y.$$
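A quick numerical check of the volume formula; grid spacing and parameter values are arbitrary choices for illustration:

```python
import numpy as np

A, x0, y0, sx, sy = 1.0, 0.0, 0.0, 1.0, 2.0
d = 0.02
xs = np.arange(-10, 10, d)
X, Y = np.meshgrid(xs, xs)
F = A * np.exp(-((X - x0)**2 / (2 * sx**2) + (Y - y0)**2 / (2 * sy**2)))
print(F.sum() * d * d, 2 * np.pi * A * sx * sy)   # both ~12.57
```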
In general, a two-dimensional elliptical Gaussian function is expressed as
$$f(x, y) = A \exp\left(-\left(a(x - x_0)^2 + 2b(x - x_0)(y - y_0) + c(y - y_0)^2\right)\right),$$
where the matrix
$$\begin{pmatrix} a & b \\ b & c \end{pmatrix}$$
is positive-definite.
Using this formulation, the figure on the right can be created using A = 1, $(x_0, y_0)$ = (0, 0), a = c = 1/2, b = 0.
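One common way to obtain such a positive-definite coefficient matrix, used here purely for illustration (the rotation construction is an assumption, not stated above), is to rotate a diagonal matrix built from axis-aligned spreads; its entries then supply a, b, and c of the form above:

```python
import numpy as np

A, x0, y0 = 1.0, 0.0, 0.0
sx, sy, theta = 1.0, 2.0, np.deg2rad(30)

# Rotate a diagonal matrix of the axis-aligned spreads to get [[a, b], [b, c]].
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = R @ np.diag([1 / (2 * sx**2), 1 / (2 * sy**2)]) @ R.T
a, b, c = M[0, 0], M[0, 1], M[1, 1]

def f(x, y):
    dx, dy = x - x0, y - y0
    return A * np.exp(-(a * dx**2 + 2 * b * dx * dy + c * dy**2))

print(np.linalg.eigvalsh(M))   # both eigenvalues positive: M is positive-definite
print(f(x0, y0))               # peak value A at the center
```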
A more general formulation of a Gaussian function with a flat-top and Gaussian fall-off can be taken by raising the content of the exponent to a power P:
$$f(x) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2}\right)^P\right).$$
This function is known as a super-Gaussian function and is often used for Gaussian beam formulation.[5] This function may also be expressed in terms of the full width at half maximum (FWHM), represented by w:
$$f(x) = A \exp\left(-\ln 2 \left(4\,\frac{(x - x_0)^2}{w^2}\right)^{P}\right).$$
In a two-dimensional formulation, a Gaussian function along x and y can be combined[6] with potentially different $P_X$ and $P_Y$ to form a rectangular Gaussian distribution:
$$f(x, y) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2}\right)^{P_X} - \left(\frac{(y - y_0)^2}{2\sigma_Y^2}\right)^{P_Y}\right),$$
or an elliptical Gaussian distribution:
$$f(x, y) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2} + \frac{(y - y_0)^2}{2\sigma_Y^2}\right)^{P}\right).$$
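A minimal sketch of the one-dimensional super-Gaussian in its FWHM form, verifying that the curve falls to half its maximum at $x_0 \pm w/2$ regardless of P; the function name and parameter values are illustrative assumptions:

```python
import numpy as np

def super_gaussian(x, A, x0, w, P):
    """Flat-top Gaussian expressed through its FWHM w and exponent P."""
    return A * np.exp(-np.log(2) * (4 * (x - x0)**2 / w**2) ** P)

A, x0, w, P = 1.0, 0.0, 2.0, 3.0
print(super_gaussian(np.array([x0, x0 + w / 2, x0 - w / 2]), A, x0, w, P))
# -> [1.0, 0.5, 0.5]: the curve drops to half maximum exactly w apart
```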
In an n-dimensional space a Gaussian function can be defined as
$$f(x) = \exp\left(-x^\mathsf{T} C x\right),$$
where $x$ is a column of n coordinates, $C$ is a positive-definite $n \times n$ matrix, and ${}^\mathsf{T}$ denotes matrix transposition.
The integral of this Gaussian function over the whole n-dimensional space is given as
$$\int_{\mathbb{R}^n} \exp\left(-x^\mathsf{T} C x\right)\, dx = \sqrt{\frac{\pi^n}{\det C}}.$$
It can be easily calculated by diagonalizing the matrix $C$ and changing the integration variables to the eigenvectors of $C$.
More generally, a shifted Gaussian function is defined as
$$f(x) = \exp\left(-x^\mathsf{T} C x + s^\mathsf{T} x\right),$$
where $s$ is the shift vector and the matrix $C$ can be assumed to be symmetric, $C^\mathsf{T} = C$, and positive-definite. The following integrals with this function can be calculated with the same technique:
$$\int_{\mathbb{R}^n} e^{-x^\mathsf{T} C x + s^\mathsf{T} x}\, dx = \sqrt{\frac{\pi^n}{\det C}} \exp\left(\tfrac{1}{4} s^\mathsf{T} C^{-1} s\right) \equiv \mathcal{M},$$
$$\int_{\mathbb{R}^n} e^{-x^\mathsf{T} C x + s^\mathsf{T} x} \left(a^\mathsf{T} x\right) dx = \left(a^\mathsf{T} u\right) \cdot \mathcal{M},$$
$$\int_{\mathbb{R}^n} e^{-x^\mathsf{T} C x + s^\mathsf{T} x} \left(x^\mathsf{T} D x\right) dx = \left(u^\mathsf{T} D u + \tfrac{1}{2}\operatorname{tr}\left(D C^{-1}\right)\right) \cdot \mathcal{M},$$
where $u = \tfrac{1}{2} C^{-1} s$.
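The n-dimensional integral formula can be verified numerically in a low-dimensional case. The sketch below (assuming NumPy, with an arbitrary random positive-definite $C$ in two dimensions) compares a grid sum against $\sqrt{\pi^n / \det C}$:

```python
import numpy as np

# Random symmetric positive-definite matrix C in n = 2 dimensions
rng = np.random.default_rng(0)
B = rng.normal(size=(2, 2))
C = B @ B.T + np.eye(2)

d = 0.02
xs = np.arange(-8, 8, d)
X, Y = np.meshgrid(xs, xs)
pts = np.stack([X, Y])                                 # shape (2, N, N)
quad_form = np.einsum('ijk,il,ljk->jk', pts, C, pts)   # x^T C x at every grid point
numeric = np.exp(-quad_form).sum() * d * d
analytic = np.sqrt(np.pi**2 / np.linalg.det(C))
print(numeric, analytic)                               # agree closely
```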
A number of fields such as stellar photometry, Gaussian beam characterization, and emission/absorption line spectroscopy work with sampled Gaussian functions and need to accurately estimate the height, position, and width parameters of the function. There are three unknown parameters for a 1D Gaussian function (a, b, c) and five for a 2D Gaussian function.
The most common method for estimating the Gaussian parameters is to take the logarithm of the data and fit a parabola to the resulting data set.[7][8] While this provides a simple curve fitting procedure, the resulting algorithm may be biased by excessively weighting small data values, which can produce large errors in the profile estimate. One can partially compensate for this problem through weighted least squares estimation, reducing the weight of small data values, but this too can be biased by allowing the tail of the Gaussian to dominate the fit. In order to remove the bias, one can instead use an iteratively reweighted least squares procedure, in which the weights are updated at each iteration.[8] It is also possible to perform non-linear regression directly on the data, without involving the logarithmic data transformation; for more options, see probability distribution fitting.
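As an illustration of the logarithm-and-parabola idea (a minimal sketch on noise-free synthetic data, not the exact algorithm of the cited references; all names and values are assumptions):

```python
import numpy as np

# Synthetic, noise-free samples of a Gaussian with parameters to recover
a_true, b_true, c_true = 2.0, 1.0, 0.5
x = np.linspace(-2, 4, 61)
y = a_true * np.exp(-(x - b_true)**2 / (2 * c_true**2))

# Fit a parabola to log(y): log y = A + B x + C x^2
keep = y > 1e-12                       # avoid taking the log of ~0 in the far tails
A, B, C = np.polyfit(x[keep], np.log(y[keep]), 2)[::-1]

# Invert the parabola coefficients back to the Gaussian parameters
c_est = np.sqrt(-1 / (2 * C))
b_est = B * c_est**2
a_est = np.exp(A + b_est**2 / (2 * c_est**2))
print(a_est, b_est, c_est)             # ~2.0, 1.0, 0.5
```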
Once one has an algorithm for estimating the Gaussian function parameters, it is also important to know how precise those estimates are. Any least squares estimation algorithm can provide numerical estimates for the variance of each parameter (i.e., the variance of the estimated height, position, and width of the function). One can also use Cramér–Rao bound theory to obtain an analytical expression for the lower bound on the parameter variances, given certain assumptions about the data.[9][10]
These assumptions include:
- The spacing between each sampling (i.e. the distance between pixels measuring the data) is uniform.
- The peak is "well-sampled", so that less than 10% of the area or volume under the peak (area if a 1D Gaussian, volume if a 2D Gaussian) lies outside the measurement region.
- The width of the peak is much larger than the distance between sample locations (i.e. the detector pixels must be at least 5 times smaller than the Gaussian FWHM).
When these assumptions are satisfied, the following covariance matrix K applies for the 1D profile parameters a, b, and c under i.i.d. Gaussian noise and under Poisson noise,[9] where $\delta_X$ is the width of the pixels used to sample the function, $Q$ is the quantum efficiency of the detector, and $\sigma$ indicates the standard deviation of the measurement noise. Thus, the individual variances for the parameters are, in the Gaussian noise case,
and in the Poisson noise case,
For the 2D profile parameters giving the amplitude, position, and width of the profile, the following covariance matrices apply:[10]
where the individual parameter variances are given by the diagonal elements of the covariance matrix.
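Such lower bounds can also be obtained numerically. The following hedged sketch builds the Jacobian of a sampled 1D Gaussian model with respect to (a, b, c), forms the Fisher information for additive i.i.d. Gaussian noise, and inverts it; the sampling grid and noise level are illustrative assumptions, and this is not the closed-form result of the cited references:

```python
import numpy as np

a, b, c = 1.0, 0.0, 1.0          # true profile parameters
sigma = 0.05                      # i.i.d. Gaussian measurement noise (assumed)
x = np.arange(-5, 5, 0.1)         # uniform sampling grid (assumed)

g = np.exp(-(x - b)**2 / (2 * c**2))
# Jacobian of the model a*g with respect to (a, b, c), evaluated at the true values
J = np.column_stack([g,
                     a * g * (x - b) / c**2,
                     a * g * (x - b)**2 / c**3])

fisher = J.T @ J / sigma**2       # Fisher information for additive Gaussian noise
crb = np.linalg.inv(fisher)       # Cramér–Rao lower bound on the parameter covariance
print(np.sqrt(np.diag(crb)))      # lower bounds on the std of the (a, b, c) estimates
```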
One may ask for a discrete analog to the Gaussian; this is necessary in discrete applications, particularly digital signal processing. A simple answer is to sample the continuous Gaussian, yielding the sampled Gaussian kernel. However, this discrete function does not have the discrete analogs of the properties of the continuous function, and can lead to undesired effects, as described in the article scale space implementation.
An alternative is the discrete Gaussian kernel $T(n, t) = e^{-t} I_n(t)$, where $I_n(t)$ denotes the modified Bessel function of integer order n. This is the discrete analog of the continuous Gaussian in that it is the solution to the discrete diffusion equation (discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation.[11][12]
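A small sketch of computing this kernel with SciPy's exponentially scaled Bessel function; the helper name and parameter values are assumptions:

```python
import numpy as np
from scipy.special import ive   # exponentially scaled Bessel function: I_n(t) * exp(-t)

def discrete_gaussian_kernel(t, radius):
    """Discrete Gaussian kernel T(n, t) = exp(-t) * I_n(t) for n in [-radius, radius]."""
    n = np.arange(-radius, radius + 1)
    return ive(n, t)

k = discrete_gaussian_kernel(t=2.0, radius=10)
print(k.sum())                                     # ~1: the kernel sums to unity
print(np.sum(np.arange(-10, 11)**2 * k))           # ~2: its variance equals t
```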
Gaussian functions are the Green's function for the (homogeneous and isotropic) diffusion equation (and for the heat equation, which is the same thing), a partial differential equation that describes the time evolution of a mass-density under diffusion. Specifically, if the mass-density at time t = 0 is given by a Dirac delta, which essentially means that the mass is initially concentrated in a single point, then the mass-distribution at time t will be given by a Gaussian function, with the parameter a being linearly related to 1/√t and c being linearly related to √t; this time-varying Gaussian is described by the heat kernel. More generally, if the initial mass-density is φ(x), then the mass-density at later times is obtained by taking the convolution of φ with a Gaussian function. The convolution of a function with a Gaussian is also known as a Weierstrass transform.
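A minimal sketch of this convolution view of diffusion, assuming NumPy/SciPy and an arbitrary box-shaped initial density: the density is smoothed with a Gaussian of standard deviation √(2Dt), the width of the heat kernel at time t for diffusivity D.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dx, D, t = 0.01, 1.0, 0.5
x = np.arange(-10, 10, dx)
phi0 = np.where(np.abs(x) < 1, 1.0, 0.0)                  # initial box-shaped density

# Evolve under u_t = D u_xx by convolving with the Gaussian heat kernel
sigma = np.sqrt(2 * D * t)                                # kernel width at time t
phi_t = gaussian_filter1d(phi0, sigma / dx, mode='constant')

print(phi0.sum() * dx, phi_t.sum() * dx)                  # total mass ~2 is conserved
```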
Mathematically, the derivatives of the Gaussian function can be represented using Hermite functions. For unit variance, the n-th derivative of the Gaussian is the Gaussian function itself multiplied by the n-th Hermite polynomial, up to scale.
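Concretely, for unit variance the n-th derivative of $e^{-x^2/2}$ equals $(-1)^n \mathrm{He}_n(x)\, e^{-x^2/2}$ with the probabilists' Hermite polynomial $\mathrm{He}_n$. A small numerical check of this, assuming NumPy (the finite-difference comparison is only for illustration):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# n-th derivative of exp(-x^2/2) = (-1)^n He_n(x) exp(-x^2/2)
n = 3
x = np.linspace(-3, 3, 7)
hermite_form = (-1)**n * hermeval(x, [0] * n + [1]) * np.exp(-x**2 / 2)

# Compare with a central finite-difference estimate of the third derivative
h = 1e-3
g = lambda t: np.exp(-t**2 / 2)
numeric = (g(x + 2*h) - 2*g(x + h) + 2*g(x - h) - g(x - 2*h)) / (2 * h**3)

print(np.max(np.abs(hermite_form - numeric)))   # small, ~1e-5 or less
```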
Gaussian beams are used in optical systems, microwave systems and lasers.
In scale space representation, Gaussian functions are used as smoothing kernels for generating multi-scale representations in computer vision and image processing. Specifically, derivatives of Gaussians (Hermite functions) are used as a basis for defining a large number of types of visual operations.
In geostatistics they have been used for understanding the variability between the patterns of a complex training image. They are used with kernel methods to cluster the patterns in the feature space.[14]
Caruana, Richard A.; Searle, Roger B.; Heller, Thomas; Shupack, Saul I. (1986). "Fast algorithm for the resolution of spectra". Analytical Chemistry. 58 (6). American Chemical Society (ACS): 1162–1167. doi:10.1021/ac00297a041. ISSN 0003-2700.
Haberman, Richard (2013). "10.3.3 Inverse Fourier transform of a Gaussian". Applied Partial Differential Equations. Boston: Pearson. ISBN 978-0-321-79705-6.