Integral of the Gaussian function, equal to sqrt(π)
This integral from statistics and physics is not to be confused with Gaussian quadrature, a method of numerical integration.
[Figure: graph of $f(x) = e^{-x^2}$ and the area between the curve and the $x$-axis (over the entire real line), which equals $\sqrt{\pi}$.]

The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function $f(x) = e^{-x^2}$ over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is
$$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.$$
Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809,[1] attributing its discovery to Laplace. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently: for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function.
Although no elementary function exists for the error function, as can be proven by the Risch algorithm,[2] the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary indefinite integral for
$$\int e^{-x^2}\,dx,$$
but the definite integral
$$\int_{-\infty}^{\infty} e^{-x^2}\,dx$$
can be evaluated. The definite integral of an arbitrary Gaussian function is
$$\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.$$
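As a quick numerical sanity check of these two values (a sketch added here, not part of the original article; it assumes NumPy and SciPy are available, and the test values of $a$ and $b$ are arbitrary):

```python
import numpy as np
from scipy import integrate

# Integral of exp(-x^2) over the whole real line should equal sqrt(pi).
value, _ = integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(value, np.sqrt(np.pi))            # both ~1.7724538509

# Arbitrary Gaussian: integral of exp(-a*(x+b)^2) should equal sqrt(pi/a).
a, b = 2.5, -1.3                        # arbitrary test values, a > 0
value, _ = integrate.quad(lambda x: np.exp(-a * (x + b)**2), -np.inf, np.inf)
print(value, np.sqrt(np.pi / a))
```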
By polar coordinates

A standard way to compute the Gaussian integral, the idea of which goes back to Poisson,[3] is to make use of the property that:
$$\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \int_{-\infty}^{\infty} e^{-x^2}\,dx \int_{-\infty}^{\infty} e^{-y^2}\,dy = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy.$$

Consider the function $e^{-(x^2+y^2)} = e^{-r^2}$ on the plane $\mathbb{R}^2$, and compute its integral two ways: on the one hand, by double integration in the Cartesian coordinate system, its integral is a square, $\left(\int e^{-x^2}\,dx\right)^2$; on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be $\pi$. Comparing these two computations yields the integral, though one should take care about the improper integrals involved.

$$\begin{aligned}\iint_{\mathbb{R}^2} e^{-(x^2+y^2)}\,dx\,dy &= \int_0^{2\pi}\int_0^{\infty} e^{-r^2}\,r\,dr\,d\theta \\ &= 2\pi\int_0^{\infty} r e^{-r^2}\,dr \\ &= 2\pi\int_{-\infty}^{0} \tfrac{1}{2} e^{s}\,ds && (s = -r^2) \\ &= \pi\int_{-\infty}^{0} e^{s}\,ds \\ &= \pi\left[e^{s}\right]_{-\infty}^{0} \\ &= \pi(1 - 0) = \pi,\end{aligned}$$

where the factor of $r$ is the Jacobian determinant that appears because of the transformation to polar coordinates ($r\,dr\,d\theta$ being the standard measure on the plane expressed in polar coordinates), and the substitution involves taking $s = -r^2$, so $ds = -2r\,dr$.

Combining these yields
$$\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \pi,$$
so
$$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.$$
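The polar-coordinate reduction can be cross-checked numerically; this sketch is not in the original text and assumes SciPy is available:

```python
import numpy as np
from scipy import integrate

# Square of the one-dimensional Gaussian integral ...
one_d, _ = integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)

# ... versus the polar form 2*pi * integral of r*exp(-r^2) over [0, inf).
radial, _ = integrate.quad(lambda r: r * np.exp(-r**2), 0, np.inf)

print(one_d**2, 2 * np.pi * radial, np.pi)   # all three ~3.14159265
```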
To justify the improper double integrals and equating the two expressions, we begin with an approximating function:
$$I(a) = \int_{-a}^{a} e^{-x^2}\,dx.$$

If the integral $\int_{-\infty}^{\infty} e^{-x^2}\,dx$ were absolutely convergent we would have that its Cauchy principal value, that is, the limit $\lim_{a\to\infty} I(a)$, would coincide with $\int_{-\infty}^{\infty} e^{-x^2}\,dx$. To see that this is the case, consider that
$$\int_{-\infty}^{\infty}\left|e^{-x^2}\right|dx < \int_{-\infty}^{-1} -x e^{-x^2}\,dx + \int_{-1}^{1} e^{-x^2}\,dx + \int_{1}^{\infty} x e^{-x^2}\,dx < \infty.$$

So we can compute $\int_{-\infty}^{\infty} e^{-x^2}\,dx$ by just taking the limit $\lim_{a\to\infty} I(a)$.
Taking the square of $I(a)$ yields
$$\begin{aligned}I(a)^2 &= \left(\int_{-a}^{a} e^{-x^2}\,dx\right)\left(\int_{-a}^{a} e^{-y^2}\,dy\right) \\ &= \int_{-a}^{a}\left(\int_{-a}^{a} e^{-y^2}\,dy\right) e^{-x^2}\,dx \\ &= \int_{-a}^{a}\int_{-a}^{a} e^{-(x^2+y^2)}\,dy\,dx.\end{aligned}$$

Using Fubini's theorem, the above double integral can be seen as an area integral
$$\iint_{[-a,a]\times[-a,a]} e^{-(x^2+y^2)}\,d(x,y),$$
taken over a square with vertices $\{(-a, a), (a, a), (a, -a), (-a, -a)\}$ in the $xy$-plane.
Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than $I(a)^2$, and similarly the integral taken over the square's circumcircle must be greater than $I(a)^2$. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:
$$x = r\cos\theta,\qquad y = r\sin\theta,$$
$$\mathbf{J}(r,\theta) = \begin{bmatrix}\dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial\theta}\\[1ex]\dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial\theta}\end{bmatrix} = \begin{bmatrix}\cos\theta & -r\sin\theta\\ \sin\theta & r\cos\theta\end{bmatrix},$$
$$d(x,y) = \left|J(r,\theta)\right|\,d(r,\theta) = r\,d(r,\theta),$$
$$\int_0^{2\pi}\int_0^{a} r e^{-r^2}\,dr\,d\theta < I^2(a) < \int_0^{2\pi}\int_0^{a\sqrt{2}} r e^{-r^2}\,dr\,d\theta.$$
(See the transformation from Cartesian to polar coordinates for help with this change of variables.)
Integrating,
$$\pi\left(1 - e^{-a^2}\right) < I^2(a) < \pi\left(1 - e^{-2a^2}\right).$$

By the squeeze theorem, this gives the Gaussian integral
$$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.$$
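The incircle/circumcircle bounds can be spot-checked numerically; a minimal sketch (not from the original article), assuming SciPy is available:

```python
import numpy as np
from scipy import integrate

# Check pi*(1 - exp(-a^2)) < I(a)^2 < pi*(1 - exp(-2*a^2)) for a few values of a.
for a in (0.5, 1.0, 2.0, 4.0):
    I_a, _ = integrate.quad(lambda x: np.exp(-x**2), -a, a)
    lower = np.pi * (1 - np.exp(-a**2))
    upper = np.pi * (1 - np.exp(-2 * a**2))
    print(a, lower < I_a**2 < upper)    # True for every a > 0
```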
By Cartesian coordinates

A different technique, which goes back to Laplace (1812),[3] is the following. Let
$$y = xs,\qquad dy = x\,ds.$$

Since the limits on $s$ as $y \to \pm\infty$ depend on the sign of $x$, it simplifies the calculation to use the fact that $e^{-x^2}$ is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,
$$\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_0^{\infty} e^{-x^2}\,dx.$$

Thus, over the range of integration, $x \ge 0$, and the variables $y$ and $s$ have the same limits. This yields
$$\begin{aligned}I^2 &= 4\int_0^{\infty}\int_0^{\infty} e^{-(x^2+y^2)}\,dy\,dx \\ &= 4\int_0^{\infty}\left(\int_0^{\infty} e^{-(x^2+y^2)}\,dy\right)dx \\ &= 4\int_0^{\infty}\left(\int_0^{\infty} e^{-x^2(1+s^2)}\,x\,ds\right)dx.\end{aligned}$$
Then, using Fubini's theorem to switch the order of integration:
$$\begin{aligned}I^2 &= 4\int_0^{\infty}\left(\int_0^{\infty} e^{-x^2(1+s^2)}\,x\,dx\right)ds \\ &= 4\int_0^{\infty}\left[\frac{e^{-x^2(1+s^2)}}{-2(1+s^2)}\right]_{x=0}^{x=\infty}ds \\ &= 4\left(\frac{1}{2}\int_0^{\infty}\frac{ds}{1+s^2}\right) \\ &= 2\arctan(s)\Big|_0^{\infty} \\ &= \pi.\end{aligned}$$
Therefore, $I = \sqrt{\pi}$, as expected.
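As a numerical cross-check of Laplace's reduction (a sketch added here, not part of the original derivation; it assumes SciPy is available):

```python
import numpy as np
from scipy import integrate

# The substitution reduces I^2 to 2 * integral of 1/(1+s^2) over [0, inf), which equals pi.
reduced, _ = integrate.quad(lambda s: 1.0 / (1.0 + s**2), 0, np.inf)
I, _ = integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(2 * reduced, I**2, np.pi)    # all ~3.14159265
```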
By Laplace's method

In the Laplace approximation, we deal only with terms up to second order in the Taylor expansion, so we consider $e^{-x^2} \approx 1 - x^2 \approx (1+x^2)^{-1}$.
In fact, since $(1+t)e^{-t} \le 1$ for all $t$, we have the exact bounds
$$1 - x^2 \le e^{-x^2} \le (1+x^2)^{-1}.$$
Then we can take the bound at the Laplace-approximation limit:
$$\int_{[-1,1]}(1-x^2)^n\,dx \le \int_{[-1,1]} e^{-nx^2}\,dx \le \int_{[-1,1]}(1+x^2)^{-n}\,dx.$$
That is, multiplying through by $\sqrt{n}$, substituting $x \mapsto x/\sqrt{n}$ in the middle integral, and using that the integrands are even,
$$2\sqrt{n}\int_{[0,1]}(1-x^2)^n\,dx \le \int_{[-\sqrt{n},\sqrt{n}]} e^{-x^2}\,dx \le 2\sqrt{n}\int_{[0,1]}(1+x^2)^{-n}\,dx.$$
By trigonometric substitution ($x = \sin\theta$ for the lower bound, $x = \tan\theta$ for the upper), the lower bound evaluates exactly to $2\sqrt{n}\,(2n)!!/(2n+1)!!$, while the upper bound, after extending its range of integration from $[0,1]$ to $[0,\infty)$ (which can only increase it), is at most $2\sqrt{n}\,(\pi/2)\,(2n-3)!!/(2n-2)!!$.
By taking the square root of the Wallis formula,
$$\frac{\pi}{2} = \prod_{n=1}^{\infty}\frac{(2n)^2}{(2n-1)(2n+1)},$$
we have
$$\sqrt{\pi} = 2\lim_{n\to\infty}\sqrt{n}\,\frac{(2n)!!}{(2n+1)!!},$$
the desired lower-bound limit. Similarly, we can get the desired upper-bound limit. Conversely, if we first compute the integral with one of the other methods above, we obtain a proof of the Wallis formula.
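These double-factorial bounds can be checked numerically; the following sketch is not part of the original text and assumes SciPy is available:

```python
import numpy as np
from scipy import integrate
from scipy.special import factorial2

# Check 2*sqrt(n)*(2n)!!/(2n+1)!! <= integral over [-sqrt(n), sqrt(n)] of exp(-x^2)
#       <= 2*sqrt(n)*(pi/2)*(2n-3)!!/(2n-2)!!, and watch both bounds approach sqrt(pi).
for n in (2, 5, 20, 100):
    middle, _ = integrate.quad(lambda x: np.exp(-x**2), -np.sqrt(n), np.sqrt(n))
    lower = 2 * np.sqrt(n) * factorial2(2 * n) / factorial2(2 * n + 1)
    upper = 2 * np.sqrt(n) * (np.pi / 2) * factorial2(2 * n - 3) / factorial2(2 * n - 2)
    print(n, lower <= middle <= upper, lower, middle, upper)
```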
Proof by complex integral

Several proofs have been discovered using Cauchy's integral formula, despite the integral being initially thought to be ill-suited to the residue calculus.[3][4]
Relation to the gamma function

The integrand is an even function,
$$\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_0^{\infty} e^{-x^2}\,dx.$$
Thus, after the change of variable $x = \sqrt{t}$, this turns into the Euler integral
$$2\int_0^{\infty} e^{-x^2}\,dx = 2\int_0^{\infty}\tfrac{1}{2}\,e^{-t}\,t^{-\frac{1}{2}}\,dt = \Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi},$$
where $\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\,dt$ is the gamma function. More generally,
$$\int_0^{\infty} x^{n} e^{-ax^{b}}\,dx = \frac{\Gamma\left((n+1)/b\right)}{b\,a^{(n+1)/b}},$$
which can be obtained by substituting $t = ax^{b}$ in the integrand of the gamma function to get $\Gamma(z) = a^{z} b\int_0^{\infty} x^{bz-1} e^{-ax^{b}}\,dx$.
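A short numerical check of $\Gamma(1/2) = \sqrt{\pi}$ and of the general formula (an added sketch, assuming SciPy is available; the test values of $n$, $a$, $b$ are arbitrary):

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma

# Gamma(1/2) equals sqrt(pi).
print(gamma(0.5), np.sqrt(np.pi))       # both ~1.7724538509

# integral over [0, inf) of x^n * exp(-a*x^b) = Gamma((n+1)/b) / (b * a^((n+1)/b)).
n, a, b = 3, 1.7, 2.5                   # arbitrary test values
numeric, _ = integrate.quad(lambda x: x**n * np.exp(-a * x**b), 0, np.inf)
closed_form = gamma((n + 1) / b) / (b * a**((n + 1) / b))
print(numeric, closed_form)             # agree to quadrature accuracy
```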
The integral of a Gaussian function

The integral of an arbitrary Gaussian function is
$$\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.$$

An alternative form is
$$\int_{-\infty}^{\infty} e^{-(ax^2-bx+c)}\,dx = \sqrt{\frac{\pi}{a}}\,e^{\frac{b^2}{4a}-c}.$$
This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution.
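The completed-square form can be verified numerically; this sketch is an addition (assuming SciPy; the coefficients are arbitrary test values with $a > 0$):

```python
import numpy as np
from scipy import integrate

# integral of exp(-(a*x^2 - b*x + c)) over the real line = sqrt(pi/a) * exp(b^2/(4a) - c).
a, b, c = 1.5, 0.7, 0.2
numeric, _ = integrate.quad(lambda x: np.exp(-(a * x**2 - b * x + c)), -np.inf, np.inf)
closed_form = np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a) - c)
print(numeric, closed_form)             # agree to quadrature accuracy
```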
The analogous imaginary (Fresnel-type) Gaussian integrals are
$$\int_{-\infty}^{\infty} e^{\frac{1}{2}it^2}\,dt = e^{i\pi/4}\sqrt{2\pi}$$
and, more generally,
$$\int_{\mathbb{R}^N} e^{\frac{1}{2}i\mathbf{x}^\mathsf{T}A\mathbf{x}}\,d\mathbf{x} = \det(A)^{-\frac{1}{2}}\left(e^{i\pi/4}\sqrt{2\pi}\right)^{N}$$
for any positive-definite symmetric matrix $A$.
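This oscillatory integral is only conditionally convergent; one common way to make sense of it numerically is to insert a damping factor $e^{-\varepsilon t^2}$ and let $\varepsilon \to 0^{+}$. The sketch below is an addition (assuming NumPy); it evaluates the damped integral via the complex Gaussian formula and shows it approaching $e^{i\pi/4}\sqrt{2\pi}$:

```python
import numpy as np

# Damped version: integral of exp((i/2 - eps) t^2) dt = sqrt(pi / (eps - i/2)) for eps > 0.
# As eps -> 0+ this approaches exp(i*pi/4) * sqrt(2*pi) ~ 1.7725 + 1.7725j.
target = np.exp(1j * np.pi / 4) * np.sqrt(2 * np.pi)
for eps in (1e-1, 1e-3, 1e-6):
    regularized = np.sqrt(np.pi / (eps - 0.5j))
    print(eps, regularized, target)
```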
n-dimensional and functional generalization

Suppose $A$ is a symmetric positive-definite (hence invertible) $n\times n$ precision matrix, which is the matrix inverse of the covariance matrix. Then,
$$\begin{aligned}\int_{\mathbb{R}^n}\exp\left(-\tfrac{1}{2}\mathbf{x}^\mathsf{T}A\mathbf{x}\right)d^n\mathbf{x} &= \int_{\mathbb{R}^n}\exp\left(-\tfrac{1}{2}\sum_{i,j=1}^{n}A_{ij}x_i x_j\right)d^n\mathbf{x} \\ &= \sqrt{\frac{(2\pi)^n}{\det A}} = \sqrt{\frac{1}{\det(A/2\pi)}} = \sqrt{\det\left(2\pi A^{-1}\right)}.\end{aligned}$$

By completing the square, this generalizes to
$$\int_{\mathbb{R}^n}\exp\left(-\tfrac{1}{2}\mathbf{x}^\mathsf{T}A\mathbf{x} + \mathbf{b}^\mathsf{T}\mathbf{x} + c\right)d^n\mathbf{x} = \sqrt{\det\left(2\pi A^{-1}\right)}\,\exp\left(\tfrac{1}{2}\mathbf{b}^\mathsf{T}A^{-1}\mathbf{b} + c\right).$$
This fact is applied in the study of the multivariate normal distribution.
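For a concrete low-dimensional check of the determinant formula (with the linear and constant terms included), here is a sketch that is not part of the original article; it assumes SciPy, and the matrix $A$, vector $\mathbf{b}$, and constant $c$ are arbitrary test values:

```python
import numpy as np
from scipy import integrate

# n = 2 test of: integral exp(-0.5 x^T A x + b^T x + c)
#              = sqrt(det(2*pi*A^{-1})) * exp(0.5 * b^T A^{-1} b + c).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([0.3, -0.2])
c = 0.1
Ainv = np.linalg.inv(A)

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v + b @ v + c)

numeric, _ = integrate.dblquad(integrand, -np.inf, np.inf,
                               lambda x: -np.inf, lambda x: np.inf)
closed_form = np.sqrt(np.linalg.det(2 * np.pi * Ainv)) * np.exp(0.5 * b @ Ainv @ b + c)
print(numeric, closed_form)     # agree to quadrature accuracy
```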
Also,
$$\int x_{k_1}\cdots x_{k_{2N}}\exp\left(-\tfrac{1}{2}\sum_{i,j=1}^{n}A_{ij}x_i x_j\right)d^n x = \sqrt{\frac{(2\pi)^n}{\det A}}\,\frac{1}{2^N N!}\sum_{\sigma\in S_{2N}}\left(A^{-1}\right)_{k_{\sigma(1)}k_{\sigma(2)}}\cdots\left(A^{-1}\right)_{k_{\sigma(2N-1)}k_{\sigma(2N)}},$$
where $\sigma$ is a permutation of $\{1,\dots,2N\}$ and the extra factor on the right-hand side is the sum over all combinatorial pairings of $\{1,\dots,2N\}$ of $N$ copies of $A^{-1}$.
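The permutation sum can be evaluated literally for a small case and compared against direct integration; the following sketch is an addition (assuming SciPy, with an arbitrary $2\times 2$ matrix and the fourth moment $x_1^2 x_2^2$):

```python
import itertools
from math import factorial

import numpy as np
from scipy import integrate

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
Ainv = np.linalg.inv(A)
k = (0, 0, 1, 1)            # indices k_1 ... k_{2N}, i.e. the moment x_1^2 * x_2^2
N = len(k) // 2

def integrand(y, x):
    v = np.array([x, y])
    return v[k[0]] * v[k[1]] * v[k[2]] * v[k[3]] * np.exp(-0.5 * v @ A @ v)

numeric, _ = integrate.dblquad(integrand, -np.inf, np.inf,
                               lambda x: -np.inf, lambda x: np.inf)

# Sum over all permutations of {1, ..., 2N}, pairing consecutive entries.
pairing_sum = sum(
    np.prod([Ainv[k[s[2 * i]], k[s[2 * i + 1]]] for i in range(N)])
    for s in itertools.permutations(range(2 * N))
)
closed_form = np.sqrt((2 * np.pi)**2 / np.linalg.det(A)) * pairing_sum / (2**N * factorial(N))
print(numeric, closed_form)  # agree to quadrature accuracy
```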
Alternatively,[5]
$$\int f(\mathbf{x})\exp\left(-\tfrac{1}{2}\sum_{i,j=1}^{n}A_{ij}x_i x_j\right)d^n\mathbf{x} = \sqrt{\frac{(2\pi)^n}{\det A}}\,\left.\exp\left(\tfrac{1}{2}\sum_{i,j=1}^{n}\left(A^{-1}\right)_{ij}\frac{\partial}{\partial x_i}\frac{\partial}{\partial x_j}\right)f(\mathbf{x})\right|_{\mathbf{x}=0}$$
for some analytic function $f$, provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others; polynomials are fine.) The exponential over a differential operator is understood as a power series.
While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case.[citation needed] There is still the problem, though, that $(2\pi)^{\infty}$ is infinite and, also, the functional determinant would in general be infinite. This can be taken care of if we only consider ratios:
$$\begin{aligned}&\frac{\displaystyle\int f(x_1)\cdots f(x_{2N})\exp\left[-\iint\tfrac{1}{2}A(x_{2N+1},x_{2N+2})f(x_{2N+1})f(x_{2N+2})\,d^d x_{2N+1}\,d^d x_{2N+2}\right]\mathcal{D}f}{\displaystyle\int\exp\left[-\iint\tfrac{1}{2}A(x_{2N+1},x_{2N+2})f(x_{2N+1})f(x_{2N+2})\,d^d x_{2N+1}\,d^d x_{2N+2}\right]\mathcal{D}f}\\[6pt]={}&\frac{1}{2^N N!}\sum_{\sigma\in S_{2N}}A^{-1}(x_{\sigma(1)},x_{\sigma(2)})\cdots A^{-1}(x_{\sigma(2N-1)},x_{\sigma(2N)}).\end{aligned}$$

In the DeWitt notation, the equation looks identical to the finite-dimensional case.
n-dimensional with linear term

If $A$ is again a symmetric positive-definite matrix, then (assuming all vectors are column vectors)
$$\begin{aligned}\int\exp\left(-\tfrac{1}{2}\sum_{i,j=1}^{n}A_{ij}x_i x_j + \sum_{i=1}^{n}b_i x_i\right)d^n\mathbf{x} &= \int\exp\left(-\tfrac{1}{2}\mathbf{x}^\mathsf{T}A\mathbf{x} + \mathbf{b}^\mathsf{T}\mathbf{x}\right)d^n\mathbf{x} \\ &= \sqrt{\frac{(2\pi)^n}{\det A}}\exp\left(\tfrac{1}{2}\mathbf{b}^\mathsf{T}A^{-1}\mathbf{b}\right).\end{aligned}$$
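Dividing this identity by the purely quadratic integral shows it is exactly the moment-generating function of a normal distribution with covariance $A^{-1}$, which gives an easy Monte Carlo check. The sketch below is an addition (assuming NumPy; the $3\times 3$ matrix $A$ and vector $\mathbf{b}$ are arbitrary test values):

```python
import numpy as np

# E[exp(b.x)] for x ~ N(0, A^{-1}) should equal exp(0.5 * b^T A^{-1} b).
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.5, 0.3],
              [0.0, 0.3, 1.0]])
b = np.array([0.4, -0.1, 0.2])
cov = np.linalg.inv(A)

samples = rng.multivariate_normal(np.zeros(3), cov, size=1_000_000)
monte_carlo = np.mean(np.exp(samples @ b))
closed_form = np.exp(0.5 * b @ cov @ b)
print(monte_carlo, closed_form)     # agree to a few decimal places
```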
Integrals of similar form

$$\int_0^{\infty} x^{2n} e^{-x^2/a^2}\,dx = \sqrt{\pi}\,\frac{a^{2n+1}(2n-1)!!}{2^{n+1}}$$
$$\int_0^{\infty} x^{2n+1} e^{-x^2/a^2}\,dx = \frac{n!}{2}a^{2n+2}$$
$$\int_0^{\infty} x^{2n} e^{-bx^2}\,dx = \frac{(2n-1)!!}{b^n 2^{n+1}}\sqrt{\frac{\pi}{b}}$$
$$\int_0^{\infty} x^{2n+1} e^{-bx^2}\,dx = \frac{n!}{2b^{n+1}}$$
$$\int_0^{\infty} x^{n} e^{-bx^2}\,dx = \frac{\Gamma\!\left(\frac{n+1}{2}\right)}{2b^{\frac{n+1}{2}}}$$
where $n$ is a positive integer.
An easy way to derive these is by differentiating under the integral sign:
$$\begin{aligned}\int_{-\infty}^{\infty} x^{2n} e^{-\alpha x^2}\,dx &= (-1)^n\int_{-\infty}^{\infty}\frac{\partial^n}{\partial\alpha^n} e^{-\alpha x^2}\,dx \\ &= (-1)^n\frac{\partial^n}{\partial\alpha^n}\int_{-\infty}^{\infty} e^{-\alpha x^2}\,dx \\ &= \sqrt{\pi}\,(-1)^n\frac{\partial^n}{\partial\alpha^n}\alpha^{-\frac{1}{2}} \\ &= \sqrt{\frac{\pi}{\alpha}}\,\frac{(2n-1)!!}{(2\alpha)^n}.\end{aligned}$$
One could also integrate by parts and find a recurrence relation to solve this.
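A couple of the listed formulas can be spot-checked numerically; the following sketch is an addition (assuming SciPy, with an arbitrary value of $b$):

```python
import numpy as np
from scipy import integrate
from scipy.special import factorial2
from math import factorial

# Check the even and odd moment formulas for b = 1.3 and a few n.
b = 1.3
for n in (1, 2, 3):
    even, _ = integrate.quad(lambda x: x**(2 * n) * np.exp(-b * x**2), 0, np.inf)
    odd, _ = integrate.quad(lambda x: x**(2 * n + 1) * np.exp(-b * x**2), 0, np.inf)
    print(even, factorial2(2 * n - 1) / (b**n * 2**(n + 1)) * np.sqrt(np.pi / b))
    print(odd, factorial(n) / (2 * b**(n + 1)))
```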
Higher-order polynomials

Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in $n$ variables may depend only on SL($n$)-invariants of the polynomial. One such invariant is the discriminant, the zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants.[6]
Exponentials of other even polynomials can be solved numerically using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is[citation needed]
$$\int_{-\infty}^{\infty} e^{ax^4+bx^3+cx^2+dx+f}\,dx = \frac{1}{2}e^{f}\sum_{\begin{smallmatrix}n,m,p=0\\ n+p\,\equiv\,0\ (\mathrm{mod}\ 2)\end{smallmatrix}}^{\infty}\frac{b^n}{n!}\frac{c^m}{m!}\frac{d^p}{p!}\frac{\Gamma\!\left(\frac{3n+2m+p+1}{4}\right)}{(-a)^{\frac{3n+2m+p+1}{4}}}.$$

The $n+p \equiv 0 \pmod{2}$ requirement is because the integral from $-\infty$ to $0$ contributes a factor of $(-1)^{n+p}/2$ to each term, while the integral from $0$ to $+\infty$ contributes a factor of $1/2$ to each term. These integrals turn up in subjects such as quantum field theory.
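For a convergent case ($a < 0$) the truncated series can be compared against direct quadrature. The sketch below is an addition (assuming SciPy; the coefficients and the truncation order are arbitrary test choices):

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma
from math import factorial

a, b, c, d, f = -1.0, 0.2, 0.3, 0.1, 0.0
numeric, _ = integrate.quad(lambda x: np.exp(a*x**4 + b*x**3 + c*x**2 + d*x + f),
                            -np.inf, np.inf)

# Truncated triple series from the formula above.
M = 30                                   # truncation order; ample for these small coefficients
series = 0.0
for n in range(M):
    for m in range(M):
        for p in range(M):
            if (n + p) % 2:
                continue
            k = (3*n + 2*m + p + 1) / 4
            series += (b**n / factorial(n)) * (c**m / factorial(m)) * (d**p / factorial(p)) \
                      * gamma(k) / (-a)**k
series *= 0.5 * np.exp(f)
print(numeric, series)                   # agree to quadrature accuracy
```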