Dirac delta function

From Wikipedia, the free encyclopedia
Generalized function whose value is zero everywhere except at zero
"Delta function" redirects here. For other uses, see Delta function (disambiguation).

Schematic representation of the Dirac delta function by a line surmounted by an arrow. The height of the arrow is usually meant to specify the value of any multiplicative constant, which will give the area under the function. The other convention is to write the area next to the arrowhead.
The Dirac delta as the limit as a → 0 (in the sense of distributions) of the sequence of zero-centered normal distributions \delta_a(x) = \frac{1}{|a|\sqrt{\pi}} e^{-(x/a)^2}.

In mathematical analysis, the Dirac delta function (or δ distribution), also known as the unit impulse,[1] is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one.[2][3][4] Thus it can be represented heuristically as

\delta(x) = \begin{cases} 0, & x \neq 0 \\ \infty, & x = 0 \end{cases}

such that

\int_{-\infty}^{\infty} \delta(x)\,dx = 1.

Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions.

The delta function was introduced by physicist Paul Dirac, and has since been applied routinely in physics and engineering to model point masses and instantaneous impulses. It is called the delta function because it is a continuous analogue of the Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1. The mathematical rigor of the delta function was disputed until Laurent Schwartz developed the theory of distributions, where it is defined as a linear form acting on functions.

Motivation and overview


The graph of the Dirac delta is usually thought of as following the whole x-axis and the positive y-axis.[5] The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge or point mass.[6][7] For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a Dirac delta. In doing so, one can simplify the equations and calculate the motion of the ball by only considering the total impulse of the collision, without a detailed model of all of the elastic energy transfer at subatomic levels (for instance).

To be specific, suppose that a billiard ball is at rest. At time t = 0 it is struck by another ball, imparting it with a momentum P, with units kg⋅m⋅s−1. The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is Pδ(t); the units of δ(t) are s−1.

To model this situation more rigorously, suppose that the force instead is uniformly distributed over a small time interval Δt = [0, T]. That is,

F_{\Delta t}(t) = \begin{cases} P/\Delta t & 0 < t \leq T, \\ 0 & \text{otherwise}. \end{cases}

Then the momentum at any time t is found by integration:

p(t) = \int_0^t F_{\Delta t}(\tau)\,d\tau = \begin{cases} P & t \geq T \\ P\,t/\Delta t & 0 \leq t \leq T \\ 0 & \text{otherwise}. \end{cases}

Now, the model situation of an instantaneous transfer of momentum requires taking the limit as Δt → 0, giving a result everywhere except at 0:

p(t) = \begin{cases} P & t > 0 \\ 0 & t < 0. \end{cases}

Here the functions F_{\Delta t} are thought of as useful approximations to the idea of instantaneous transfer of momentum.

The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of pointwise convergence) \lim_{\Delta t \to 0^+} F_{\Delta t} is zero everywhere but a single point, where it is infinite. To make proper sense of the Dirac delta, we should instead insist that the property

\int_{-\infty}^{\infty} F_{\Delta t}(t)\,dt = P,

which holds for all Δt > 0, should continue to hold in the limit. So, in the equation F(t) = P\,\delta(t) = \lim_{\Delta t \to 0} F_{\Delta t}(t), it is understood that the limit is always taken outside the integral.
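The finite-duration force model can be sketched directly in code. A minimal check (the values of P and Δt below are illustrative, not from the article) that the momentum ramp steepens toward a step as Δt shrinks:

```python
def momentum(t, P=1.0, dt=0.1):
    """Closed-form integral from 0 to t of the uniform force P/dt acting on [0, dt]."""
    if t <= 0:
        return 0.0
    if t >= dt:
        return P          # the full impulse has been delivered
    return P * t / dt     # partial transfer while the force is still acting

# Shrinking dt steepens the ramp; in the limit the momentum is the step 0 -> P.
for dt in (0.1, 0.01, 0.001):
    assert momentum(0.5, dt=dt) == 1.0
```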

In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero. (In some applications, however, highly oscillatory functions are used as approximations to the delta function; see below.)
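This weak limit can be observed numerically. The sketch below pairs the zero-centered Gaussians δ_a from the figure caption against a test function by a midpoint Riemann sum; the helper names and grid parameters are illustrative choices:

```python
import math

def gaussian_delta(x, a):
    # zero-centered normal: delta_a(x) = exp(-(x/a)^2) / (|a| sqrt(pi))
    return math.exp(-(x / a) ** 2) / (abs(a) * math.sqrt(math.pi))

def pairing(f, a, lo=-10.0, hi=10.0, n=200_000):
    # midpoint Riemann sum of f(x) * delta_a(x) dx
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) * gaussian_delta(lo + (i + 0.5) * h, a)
               for i in range(n)) * h

# As a -> 0 the pairing tends to f(0); with f = cos, f(0) = 1.
```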

The Dirac delta, given the desired properties outlined above, cannot be a function with domain and range in the real numbers.[4] For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical.[8] A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right uses measure theory or the theory of distributions.[9]

History


In physics, the Dirac delta function was popularized by Paul Dirac in his book The Principles of Quantum Mechanics, published in 1930.[3] However, Oliver Heaviside, 35 years before Dirac, described an impulsive function called the Heaviside step function for purposes and with properties analogous to Dirac's work. Even earlier, several mathematicians and physicists used limits of sharply peaked functions in derivations.[10] An infinitesimal formula for an infinitely tall, unit impulse delta function (an infinitesimal version of the Cauchy distribution) explicitly appears in an 1827 text of Augustin-Louis Cauchy.[11] Siméon Denis Poisson considered the issue in connection with the study of wave propagation, as did Gustav Kirchhoff somewhat later. Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source.[12] The Dirac delta function as such was introduced by Paul Dirac in his 1927 paper The Physical Interpretation of the Quantum Dynamics.[13] He called it the "delta function" since he used it as a continuum analogue of the discrete Kronecker delta.[3]

Mathematicians refer to the same concept as a distribution rather than a function.[14] Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form:[15]

f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\alpha\, f(\alpha) \int_{-\infty}^{\infty} dp\, \cos(px - p\alpha),

which is tantamount to the introduction of the δ-function in the form:[16]

\delta(x - \alpha) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dp\, \cos(px - p\alpha).

Later, Augustin Cauchy expressed the theorem using exponentials:[17][18]

f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ipx} \left( \int_{-\infty}^{\infty} e^{-ip\alpha} f(\alpha)\,d\alpha \right) dp.

Cauchy pointed out that in some circumstances the order of integration is significant in this result (contrast Fubini's theorem).[19][20]

As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the δ-function as

f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ipx} \left( \int_{-\infty}^{\infty} e^{-ip\alpha} f(\alpha)\,d\alpha \right) dp
     = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} e^{ipx} e^{-ip\alpha}\,dp \right) f(\alpha)\,d\alpha
     = \int_{-\infty}^{\infty} \delta(x - \alpha)\,f(\alpha)\,d\alpha,

where the δ-function is expressed as

\delta(x - \alpha) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ip(x - \alpha)}\,dp.

A rigorous interpretation of the exponential form and the various limitations upon the function f necessary for its application extended over several centuries. The problems with a classical interpretation are explained as follows:[21]

The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) to ensure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed and this removed many obstacles.

Further developments included generalization of the Fourier integral, "beginning with Plancherel's pathbreaking L2-theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. Schwartz's theory of distributions (1945) ...",[22] and leading to the formal development of the Dirac delta function.

Definitions


The Dirac delta function δ(x) can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,

\delta(x) \simeq \begin{cases} +\infty, & x = 0 \\ 0, & x \neq 0 \end{cases}

and which is also constrained to satisfy the identity[23]

\int_{-\infty}^{\infty} \delta(x)\,dx = 1.

This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense, as no extended-real-number valued function defined on the real numbers has these properties.[24]

As a measure


One way to rigorously capture the notion of the Dirac delta function is to define a measure, called the Dirac measure, which accepts a subset A of the real line R as an argument, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise.[25] If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies

\int_{-\infty}^{\infty} f(x)\,\delta(dx) = f(0)

for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure; in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative with respect to the Lebesgue measure: there is no true function for which the property

\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)

holds.[26] As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral.[27]

As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function:[28]

H(x) = \begin{cases} 1 & \text{if } x \geq 0 \\ 0 & \text{if } x < 0. \end{cases}

This means that H(x) is the integral of the indicator function 1_{(-\infty, x]} with respect to the measure δ; to wit,

H(x) = \int_{\mathbf{R}} \mathbf{1}_{(-\infty, x]}(t)\,\delta(dt) = \delta((-\infty, x]),

the latter being the measure of this interval. Thus in particular the integration of the delta function against a continuous function can be properly understood as a Riemann–Stieltjes integral:[29]

\int_{-\infty}^{\infty} f(x)\,\delta(dx) = \int_{-\infty}^{\infty} f(x)\,dH(x).
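The Riemann–Stieltjes view lends itself to a direct numerical check. In the sketch below (helper names and the partition size are illustrative), the entire increment of H falls in the single cell of the partition containing 0, so the Stieltjes sum returns the integrand's value there:

```python
def H(x):
    # unit step function: the CDF of the Dirac measure at 0
    return 1.0 if x >= 0 else 0.0

def stieltjes_against_H(f, lo=-1.0, hi=1.0, n=1000):
    # Riemann-Stieltjes sum  sum_i f(x_i) * (H(x_i) - H(x_{i-1}));
    # only the cell containing 0 contributes, giving f evaluated near 0
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        a, b = lo + i * h, lo + (i + 1) * h
        total += f(b) * (H(b) - H(a))
    return total
```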

All higher moments of δ are zero. In particular, the characteristic function and moment generating function are both equal to one.[30]

As a distribution


In the theory of distributions, a generalized function is considered not a function in itself but only through how it affects other functions when "integrated" against them.[31] In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function is against a sufficiently "good" test function φ.[4] If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral.[32]

A typical space of test functions consists of all smooth functions on R with compact support that have as many derivatives as required. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by[33]

\delta[\varphi] = \varphi(0) \qquad (1)

for every test function φ.

For δ to be properly a distribution, it must be continuous in a suitable topology on the space of test functions. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N, there is an integer M_N and a constant C_N such that for every test function φ, one has the inequality[34]

|S[\varphi]| \leq C_N \sum_{k=0}^{M_N} \sup_{x \in [-N, N]} |\varphi^{(k)}(x)|,

where sup represents the supremum. With the δ distribution, one has such an inequality (with C_N = 1) with M_N = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}).

The delta distribution can also be defined in several equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that for every test function φ, one has

\delta[\varphi] = -\int_{-\infty}^{\infty} \varphi'(x)\,H(x)\,dx.

Intuitively, if integration by parts were permitted, then the latter integral should simplify to

\int_{-\infty}^{\infty} \varphi(x)\,H'(x)\,dx = \int_{-\infty}^{\infty} \varphi(x)\,\delta(x)\,dx,

and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case, one does have

-\int_{-\infty}^{\infty} \varphi'(x)\,H(x)\,dx = \int_{-\infty}^{\infty} \varphi(x)\,dH(x).

In the context of measure theory, the Dirac measure gives rise to a distribution by integration. Conversely, equation (1) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.[35]

Generally, when the term Dirac delta function is used, it is in the sense of distributions rather than measures, the Dirac measure being among several terms for the corresponding notion in measure theory. Some sources may also use the term Dirac delta distribution.

Generalizations


The delta function can be defined in n-dimensional Euclidean space Rn as the measure such that

\int_{\mathbf{R}^n} f(\mathbf{x})\,\delta(d\mathbf{x}) = f(\mathbf{0})

for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x1, x2, ..., xn), one has[36]

\delta(\mathbf{x}) = \delta(x_1)\,\delta(x_2) \cdots \delta(x_n). \qquad (2)

The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case.[37] However, despite widespread use in engineering contexts, (2) should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances.[38][39]

The notion of a Dirac measure makes sense on any set.[40] Thus if X is a set, x0 ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by

\delta_{x_0}(A) = \begin{cases} 1 & \text{if } x_0 \in A \\ 0 & \text{if } x_0 \notin A \end{cases}

is the delta measure or unit mass concentrated at x0.

Another common generalization of the delta function is to a differentiable manifold, where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold M centered at the point x0 ∈ M is defined as the following distribution:

\delta_{x_0}[\varphi] = \varphi(x_0) \qquad (3)

for all compactly supported smooth real-valued functions φ on M.[41] A common special case of this construction is that in which M is an open set in the Euclidean space Rn.

On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral (3) on compactly supported continuous functions φ.[42] At this level of generality, calculus as such is no longer possible; however, a variety of techniques from abstract analysis are available. For instance, the mapping x_0 \mapsto \delta_{x_0} is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X.[43]

Properties


Scaling and symmetry


The delta function satisfies the following scaling property for a non-zero scalar α:[3][44]

\int_{-\infty}^{\infty} \delta(\alpha x)\,dx = \int_{-\infty}^{\infty} \delta(u)\,\frac{du}{|\alpha|} = \frac{1}{|\alpha|}

and so

\delta(\alpha x) = \frac{\delta(x)}{|\alpha|}. \qquad (4)

Scaling property proof: for α > 0,

\int_{-\infty}^{\infty} dx\; g(x)\,\delta(\alpha x) = \frac{1}{\alpha} \int_{-\infty}^{\infty} dx'\; g\!\left(\frac{x'}{\alpha}\right) \delta(x') = \frac{1}{\alpha}\,g(0),

where the change of variable x′ = αx is used. If α is negative, i.e. α = −|α|, then

\int_{-\infty}^{\infty} dx\; g(x)\,\delta(\alpha x) = \frac{1}{-|\alpha|} \int_{\infty}^{-\infty} dx'\; g\!\left(\frac{x'}{\alpha}\right) \delta(x') = \frac{1}{|\alpha|} \int_{-\infty}^{\infty} dx'\; g\!\left(\frac{x'}{\alpha}\right) \delta(x') = \frac{1}{|\alpha|}\,g(0).

Thus \delta(\alpha x) = \frac{1}{|\alpha|}\,\delta(x).
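The scaling identity can be sketched numerically by replacing δ with a narrow normalized Gaussian; the helper names, the width eps, and the grid parameters below are illustrative choices:

```python
import math

def eta(x, eps=1e-2):
    # narrow normalized Gaussian standing in for the delta function
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def pair_scaled(g, alpha, eps=1e-2, lo=-1.0, hi=1.0, n=100_000):
    # midpoint Riemann sum of g(x) * eta(alpha * x) dx
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) * eta(alpha * (lo + (i + 0.5) * h), eps)
               for i in range(n)) * h

# For g(x) = 2 + x the pairing approaches g(0)/|alpha| = 2/|alpha|.
```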

In particular, the delta function is an even distribution (symmetry), in the sense that

\delta(-x) = \delta(x),

and it is homogeneous of degree −1.

Algebraic properties


The distributional product of δ with x is equal to zero:

x\,\delta(x) = 0.

More generally, (x - a)^n\,\delta(x - a) = 0 for all positive integers n.

Conversely, if xf(x) = xg(x), where f and g are distributions, then

f(x) = g(x) + c\,\delta(x)

for some constant c.[45]

Translation


The integral of any function multiplied by the time-delayed Dirac delta δ_T(t) = δ(t − T) is

\int_{-\infty}^{\infty} f(t)\,\delta(t - T)\,dt = f(T).

This is sometimes referred to as the sifting property[46] or the sampling property.[47] The delta function is said to "sift out" the value of f(t) at t = T.[48]

It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount:[49]

(f * \delta_T)(t) \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f(\tau)\,\delta(t - T - \tau)\,d\tau
 = \int_{-\infty}^{\infty} f(\tau)\,\delta(\tau - (t - T))\,d\tau \qquad \text{since } \delta(-x) = \delta(x) \text{ by (4)}
 = f(t - T).
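The discrete analogue is easy to verify: convolving a sequence with a shifted unit impulse (the Kronecker delta delayed by k samples) shifts the sequence by k. A minimal sketch with illustrative data:

```python
def convolve(f, g):
    # full discrete convolution of two finite sequences
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

signal = [1.0, 2.0, 3.0]
delayed_impulse = [0.0, 0.0, 1.0]            # unit impulse delayed by 2 samples
shifted = convolve(signal, delayed_impulse)  # the signal, delayed by 2 samples
```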

The sifting property holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense)

\int_{-\infty}^{\infty} \delta(\xi - x)\,\delta(x - \eta)\,dx = \delta(\eta - \xi).

Composition with a function


More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds (where u = g(x)), namely

\int_{\mathbb{R}} \delta(g(x))\,f(g(x))\,|g'(x)|\,dx = \int_{g(\mathbb{R})} \delta(u)\,f(u)\,du,

provided that g is a continuously differentiable function with g′ nowhere zero.[50] That is, there is a unique way to assign meaning to the distribution δ ∘ g so that this identity holds for all compactly supported test functions f. Therefore, the domain must be broken up to exclude the g′ = 0 point. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise, if g has a real root at x0, then

\delta(g(x)) = \frac{\delta(x - x_0)}{|g'(x_0)|}.

It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by

\delta(g(x)) = \sum_i \frac{\delta(x - x_i)}{|g'(x_i)|},

where the sum extends over all roots of g(x), which are assumed to be simple. Thus, for example,

\delta\left(x^2 - \alpha^2\right) = \frac{1}{2|\alpha|} \Big[ \delta(x + \alpha) + \delta(x - \alpha) \Big].

In the integral form, the generalized scaling property may be written as

\int_{-\infty}^{\infty} f(x)\,\delta(g(x))\,dx = \sum_i \frac{f(x_i)}{|g'(x_i)|}.
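The x² − α² example can be checked numerically by replacing δ with a narrow Gaussian; with α = 2 and f(x) = x + 3, the sum over the roots ±2 gives (f(2) + f(−2))/4 = 1.5. Helper names and parameters below are illustrative:

```python
import math

def eta(x, eps=1e-2):
    # narrow normalized Gaussian standing in for the delta function
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def pair_composed(f, alpha=2.0, eps=1e-2, lo=-5.0, hi=5.0, n=200_000):
    # midpoint Riemann sum of f(x) * eta(x^2 - alpha^2) dx
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += f(x) * eta(x * x - alpha * alpha, eps)
    return total * h

# Expect approximately (f(alpha) + f(-alpha)) / |g'(alpha)| with g' = 2x at the roots.
```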

Indefinite integral


For a constant a ∈ R and a "well-behaved" arbitrary real-valued function y(x),

\int y(x)\,\delta(x - a)\,dx = y(a)\,H(x - a) + c,

where H(x) is the Heaviside step function and c is an integration constant.

Properties inn dimensions


The delta distribution in an n-dimensional space satisfies the following scaling property instead:

\delta(\alpha \mathbf{x}) = |\alpha|^{-n}\,\delta(\mathbf{x}),

so that δ is a homogeneous distribution of degree −n.

Under any reflection or rotation ρ, the delta function is invariant:

\delta(\rho \mathbf{x}) = \delta(\mathbf{x}).

As in the one-variable case, it is possible to define the composition of δ with a bi-Lipschitz function[51] g: Rn → Rn uniquely so that

\int_{\mathbb{R}^n} \delta(g(\mathbf{x}))\,f(g(\mathbf{x}))\,\left|\det g'(\mathbf{x})\right|\,d\mathbf{x} = \int_{g(\mathbb{R}^n)} \delta(\mathbf{u})\,f(\mathbf{u})\,d\mathbf{u}

for all compactly supported functions f.

Using the coarea formula from geometric measure theory, one can also define the composition of the delta function with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In the special case of a continuously differentiable function g: Rn → R such that the gradient of g is nowhere zero, the following identity holds[52]

\int_{\mathbb{R}^n} f(\mathbf{x})\,\delta(g(\mathbf{x}))\,d\mathbf{x} = \int_{g^{-1}(0)} \frac{f(\mathbf{x})}{|\nabla g|}\,d\sigma(\mathbf{x}),

where the integral on the right is over g−1(0), the (n − 1)-dimensional surface defined by g(x) = 0 with respect to the Minkowski content measure. This is known as a simple layer integral.

More generally, if S is a smooth hypersurface of Rn, then we can associate to S the distribution that integrates any compactly supported smooth function g over S:

\delta_S[g] = \int_S g(\mathbf{s})\,d\sigma(\mathbf{s}),

where σ is the hypersurface measure associated to S. This generalization is associated with the potential theory of simple layer potentials on S. If D is a domain in Rn with smooth boundary S, then δS is equal to the normal derivative of the indicator function of D in the distribution sense,

-\int_{\mathbb{R}^n} g(\mathbf{x})\,\frac{\partial 1_D(\mathbf{x})}{\partial n}\,d\mathbf{x} = \int_S g(\mathbf{s})\,d\sigma(\mathbf{s}),

where n is the outward normal.[53][54]

In three dimensions, the delta function is represented in spherical coordinates by:

\delta(\mathbf{r} - \mathbf{r}_0) = \begin{cases}
\dfrac{1}{r^2 \sin\theta}\,\delta(r - r_0)\,\delta(\theta - \theta_0)\,\delta(\phi - \phi_0) & x_0, y_0, z_0 \neq 0 \\
\dfrac{1}{2\pi r^2 \sin\theta}\,\delta(r - r_0)\,\delta(\theta - \theta_0) & x_0 = y_0 = 0,\ z_0 \neq 0 \\
\dfrac{1}{4\pi r^2}\,\delta(r - r_0) & x_0 = y_0 = z_0 = 0
\end{cases}

Derivatives


The derivative of the Dirac delta distribution, denoted δ′ and also called the Dirac delta prime or Dirac delta derivative, is defined on compactly supported smooth test functions φ by[55]

\delta'[\varphi] = -\delta[\varphi'] = -\varphi'(0).

The first equality here is a kind of integration by parts, for if δ were a true function then

\int_{-\infty}^{\infty} \delta'(x)\,\varphi(x)\,dx = \delta(x)\,\varphi(x)\Big|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} \delta(x)\,\varphi'(x)\,dx = -\int_{-\infty}^{\infty} \delta(x)\,\varphi'(x)\,dx = -\varphi'(0).

By mathematical induction, the k-th derivative of δ is defined similarly as the distribution given on test functions by

\delta^{(k)}[\varphi] = (-1)^k\,\varphi^{(k)}(0).

In particular,δ is an infinitely differentiable distribution.

The first derivative of the delta function is the distributional limit of the difference quotients:[56]

\delta'(x) = \lim_{h \to 0} \frac{\delta(x + h) - \delta(x)}{h}.

More properly, one has

\delta' = \lim_{h \to 0} \frac{1}{h} (\tau_h \delta - \delta),

where τh is the translation operator, defined on functions by τhφ(x) = φ(x + h), and on a distribution S by

(\tau_h S)[\varphi] = S[\tau_{-h}\varphi].

In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole situated at the origin. Accordingly, it is referred to as a dipole or the doublet function.[57]

The derivative of the delta function satisfies a number of basic properties, including:[58]

\delta'(-x) = -\delta'(x), \qquad x\,\delta'(x) = -\delta(x),

which can be shown by applying a test function and integrating by parts.
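The pairing δ′[φ] = −φ′(0) can be observed numerically by differentiating a narrow Gaussian approximation of δ analytically and integrating it against a test function; names and parameters below are illustrative:

```python
import math

def eta_prime(x, eps=1e-2):
    # exact derivative of the normalized Gaussian eta(x) = exp(-(x/eps)^2) / (eps*sqrt(pi))
    return (-2.0 * x / eps**2) * math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def pair_derivative(phi, eps=1e-2, lo=-1.0, hi=1.0, n=100_000):
    # midpoint Riemann sum of eta'(x) * phi(x) dx; tends to -phi'(0) as eps -> 0
    h = (hi - lo) / n
    return sum(eta_prime(lo + (i + 0.5) * h, eps) * phi(lo + (i + 0.5) * h)
               for i in range(n)) * h
```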

The latter of these properties can also be demonstrated by applying the distributional derivative definition, Leibniz's theorem and linearity of the inner product:[59][better source needed]

\langle x\delta', \varphi \rangle = \langle \delta', x\varphi \rangle = -\langle \delta, (x\varphi)' \rangle = -\langle \delta, x'\varphi + x\varphi' \rangle = -\langle \delta, x'\varphi \rangle - \langle \delta, x\varphi' \rangle = -x'(0)\varphi(0) - x(0)\varphi'(0)
 = -x'(0)\langle \delta, \varphi \rangle - x(0)\langle \delta, \varphi' \rangle = -x'(0)\langle \delta, \varphi \rangle + x(0)\langle \delta', \varphi \rangle = \langle x(0)\delta' - x'(0)\delta, \varphi \rangle

\Longrightarrow x(t)\,\delta'(t) = x(0)\,\delta'(t) - x'(0)\,\delta(t) = -x'(0)\,\delta(t) = -\delta(t)

Furthermore, the convolution of δ′ with a compactly supported, smooth function f is

\delta' * f = \delta * f' = f',

which follows from the properties of the distributional derivative of a convolution.

Higher dimensions


More generally, on an open set U in the n-dimensional Euclidean space Rn, the Dirac delta distribution centered at a point a ∈ U is defined by[60]

\delta_a[\varphi] = \varphi(a)

for all φ ∈ C_c^\infty(U), the space of all smooth functions with compact support on U. If α = (α1, ..., αn) is any multi-index with |α| = α1 + ⋯ + αn and ∂^α denotes the associated mixed partial derivative operator, then the α-th derivative ∂^α δ_a of δ_a is given by[60]

\left\langle \partial^\alpha \delta_a, \varphi \right\rangle = (-1)^{|\alpha|} \left\langle \delta_a, \partial^\alpha \varphi \right\rangle = (-1)^{|\alpha|}\,\partial^\alpha \varphi(x)\Big|_{x = a} \quad \text{for all } \varphi \in C_c^\infty(U).

That is, the α-th derivative of δ_a is the distribution whose value on any test function φ is the α-th derivative of φ at a (with the appropriate positive or negative sign).

The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as multipoles.[61]

Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions with point support. If S is any distribution on U supported on the set {a} consisting of a single point, then there is an integer m and coefficients c_α such that[60][62]

S = \sum_{|\alpha| \leq m} c_\alpha\,\partial^\alpha \delta_a.

Representations


The delta function can be viewed as the limit of a sequence of functions

δ(x)=limε0+ηε(x).{\displaystyle \delta (x)=\lim _{\varepsilon \to 0^{+}}\eta _{\varepsilon }(x).}This limit is meant in a weak sense: either that

limε0+ηε(x)f(x)dx=f(0){\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{-\infty }^{\infty }\eta _{\varepsilon }(x)f(x)\,dx=f(0)}5

for allcontinuous functionsf havingcompact support, or that this limit holds for allsmooth functionsf with compact support. The former is convergence in thevague topology of measures, and the latter is convergence in the sense ofdistributions.

Approximations to the identity


An approximate delta functionηε can be constructed in the following manner. Letη be an absolutely integrable function onR of total integral1, and defineηε(x)=ε1η(xε).{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\eta \left({\frac {x}{\varepsilon }}\right).}

Inn dimensions, one uses instead the scalingηε(x)=εnη(xε).{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-n}\eta \left({\frac {x}{\varepsilon }}\right).}

Then a simple change of variables shows thatηε also has integral1. One may show that (5) holds for all continuous compactly supported functionsf,[63] and soηε converges weakly toδ in the sense of measures.
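This weak convergence is easy to observe numerically. The sketch below (the Gaussian profile and the test function f(x) = cos x are arbitrary choices, not taken from the article) pairs the rescaled family ηε with f and watches the integral approach f(0) = 1 as ε shrinks:

```python
import numpy as np

def eta_eps(x, eps):
    # unit-mass Gaussian profile eta(x) = exp(-x^2)/sqrt(pi), rescaled as eta(x/eps)/eps
    return np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))

x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]
f = np.cos(x)  # bounded continuous test function with f(0) = 1

errors = []
for eps in (1.0, 0.1, 0.01):
    pairing = np.sum(eta_eps(x, eps) * f) * dx  # Riemann sum approximating the pairing
    errors.append(abs(pairing - 1.0))
```

The successive errors shrink toward zero, illustrating that the pairing converges to f(0) rather than that ηε converges pointwise.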

Theηε constructed in this way are known as anapproximation to the identity.[64] This terminology reflects the fact that the spaceL1(R) of absolutely integrable functions is closed under the operation ofconvolution of functions:f ∗ g ∈L1(R) wheneverf andg are inL1(R). However, there is no identity inL1(R) for the convolution product: no elementh such thatf ∗ h =f for allf. Nevertheless, the sequenceηε does approximate such an identity in the sense that

fηεfas ε0.{\displaystyle f*\eta _{\varepsilon }\to f\quad {\text{as }}\varepsilon \to 0.}

This limit holds in the sense ofmean convergence (convergence inL1). Further conditions on theηε, for instance that it be a mollifier associated to a compactly supported function,[65] are needed to ensure pointwise convergencealmost everywhere.

If the initialη =η1 is itself smooth and compactly supported then the sequence is called amollifier. The standard mollifier is obtained by choosingη to be a suitably normalizedbump function, for instance

η(x)={1Inexp(11|x|2)if |x|<10if |x|1.{\displaystyle \eta (x)={\begin{cases}{\frac {1}{I_{n}}}\exp {\Big (}-{\frac {1}{1-|x|^{2}}}{\Big )}&{\text{if }}|x|<1\\0&{\text{if }}|x|\geq 1.\end{cases}}}(In{\displaystyle I_{n}} ensuring that the total integral is 1).

In some situations such asnumerical analysis, apiecewise linear approximation to the identity is desirable. This can be obtained by takingη1 to be ahat function. With this choice ofη1, one has

ηε(x)=ε1max(1|xε|,0){\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\max \left(1-\left|{\frac {x}{\varepsilon }}\right|,0\right)}

which are all continuous and compactly supported, although not smooth and so not a mollifier.
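A brief numerical sketch of this hat-function family (the widths and the test function f(x) = eˣ are arbitrary choices): each ηε has unit mass, and pairing a narrow member with f recovers f(0).

```python
import numpy as np

def hat(x, eps):
    # piecewise-linear approximation to the identity: max(1 - |x/eps|, 0)/eps
    return np.maximum(1.0 - np.abs(x / eps), 0.0) / eps

x = np.linspace(-2.0, 2.0, 200001)
dx = x[1] - x[0]

mass = np.sum(hat(x, 0.25)) * dx          # total integral, should be ~1
f = np.exp(x)                             # continuous test function, f(0) = 1
sift = np.sum(hat(x, 0.01) * f) * dx      # ~ f(0)
```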

Probabilistic considerations


In the context ofprobability theory, it is natural to impose the additional condition that the initialη1 in an approximation to the identity should be positive, as such a function then represents aprobability distribution. Convolution with a probability distribution is sometimes favorable because it does not result inovershoot or undershoot, as the output is aconvex combination of the input values, and thus falls between the maximum and minimum of the input function. Takingη1 to be any probability distribution at all, and lettingηε(x) =η1(x/ε)/ε as above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition,η has mean0 and has small higher moments. For instance, ifη1 is theuniform distribution on[12,12]{\textstyle \left[-{\frac {1}{2}},{\frac {1}{2}}\right]}, also known as therectangular function, then:[66]ηε(x)=1εrect(xε)={1ε,ε2<x<ε2,0,otherwise.{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x}{\varepsilon }}\right)={\begin{cases}{\frac {1}{\varepsilon }},&-{\frac {\varepsilon }{2}}<x<{\frac {\varepsilon }{2}},\\0,&{\text{otherwise}}.\end{cases}}}

Another example is with theWigner semicircle distributionηε(x)={2πε2ε2x2,ε<x<ε,0,otherwise.{\displaystyle \eta _{\varepsilon }(x)={\begin{cases}{\frac {2}{\pi \varepsilon ^{2}}}{\sqrt {\varepsilon ^{2}-x^{2}}},&-\varepsilon <x<\varepsilon ,\\0,&{\text{otherwise}}.\end{cases}}}

This is continuous and compactly supported, but not a mollifier because it is not smooth.

Semigroups


Approximations to the delta functions often arise as convolutionsemigroups.[67] This amounts to the further constraint that the convolution ofηε withηδ must satisfyηεηδ=ηε+δ{\displaystyle \eta _{\varepsilon }*\eta _{\delta }=\eta _{\varepsilon +\delta }}

for allε,δ > 0. Convolution semigroups inL1 that approximate the delta function are always an approximation to the identity in the above sense; however, the semigroup condition is quite a strong restriction.

In practice, semigroups approximating the delta function arise asfundamental solutions orGreen's functions to physically motivatedelliptic orparabolicpartial differential equations. In the context ofapplied mathematics, semigroups arise as the output of alinear time-invariant system. Abstractly, ifA is a linear operator acting on functions ofx, then a convolution semigroup arises by solving theinitial value problem

{tη(t,x)=Aη(t,x),t>0limt0+η(t,x)=δ(x){\displaystyle {\begin{cases}{\dfrac {\partial }{\partial t}}\eta (t,x)=A\eta (t,x),\quad t>0\\[5pt]\displaystyle \lim _{t\to 0^{+}}\eta (t,x)=\delta (x)\end{cases}}}

in which the limit is as usual understood in the weak sense. Settingηε(x) =η(ε,x) gives the associated approximate delta function.

Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.

The heat kernel


Theheat kernel, defined by[68]

ηε(x)=12πεex22ε{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\sqrt {2\pi \varepsilon }}}\mathrm {e} ^{-{\frac {x^{2}}{2\varepsilon }}}}

represents the temperature in an infinite wire at timet > 0, if a unit of heat energy is stored at the origin of the wire at timet = 0. This semigroup evolves according to the one-dimensionalheat equation:

ut=122ux2.{\displaystyle {\frac {\partial u}{\partial t}}={\frac {1}{2}}{\frac {\partial ^{2}u}{\partial x^{2}}}.}

Inprobability theory,ηε(x) is anormal distribution ofvarianceε and mean0. It represents theprobability density at timet =ε of the position of a particle starting at the origin following a standardBrownian motion. In this context, the semigroup condition is then an expression of theMarkov property of Brownian motion.

In higher-dimensional Euclidean spaceRn, the heat kernel isηε=1(2πε)n/2exx2ε,{\displaystyle \eta _{\varepsilon }={\frac {1}{(2\pi \varepsilon )^{n/2}}}\mathrm {e} ^{-{\frac {x\cdot x}{2\varepsilon }}},}and has the same physical interpretation,mutatis mutandis. It also represents an approximation to the delta function in the sense thatηεδ in the distribution sense asε → 0.
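The semigroup law for the heat kernel can be checked numerically: convolving the Gaussians of variances ε and δ should reproduce the Gaussian of variance ε + δ. A sketch (the grid and tolerance are arbitrary choices):

```python
import numpy as np

def heat(x, eps):
    # one-dimensional heat kernel: Gaussian of variance eps
    return np.exp(-x ** 2 / (2.0 * eps)) / np.sqrt(2.0 * np.pi * eps)

x = np.linspace(-20.0, 20.0, 8001)   # symmetric grid, odd point count, centered at 0
dx = x[1] - x[0]
eps, delta = 0.5, 1.5

# discrete convolution with dx scaling approximates the continuous convolution;
# mode="same" keeps the result aligned with the symmetric grid
conv = np.convolve(heat(x, eps), heat(x, delta), mode="same") * dx
target = heat(x, eps + delta)
max_err = np.max(np.abs(conv - target))
```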

The Poisson kernel


ThePoisson kernelηε(x)=1πIm{1xiε}=1πεε2+x2=12πeiξx|εξ|dξ{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi }}\mathrm {Im} \left\{{\frac {1}{x-\mathrm {i} \varepsilon }}\right\}={\frac {1}{\pi }}{\frac {\varepsilon }{\varepsilon ^{2}+x^{2}}}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\mathrm {e} ^{\mathrm {i} \xi x-|\varepsilon \xi |}\,d\xi }

is the fundamental solution of theLaplace equation in the upper half-plane.[69] It represents theelectrostatic potential in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is also closely related to theCauchy distribution andEpanechnikov and Gaussian kernel functions.[70] This semigroup evolves according to the equationut=(2x2)12u(t,x){\displaystyle {\frac {\partial u}{\partial t}}=-\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}u(t,x)}

where the operator is rigorously defined as theFourier multiplierF[(2x2)12f](ξ)=|2πξ|Ff(ξ).{\displaystyle {\mathcal {F}}\left[\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}f\right](\xi )=|2\pi \xi |{\mathcal {F}}f(\xi ).}

Oscillatory integrals


In areas of physics such aswave propagation andwave mechanics, the equations involved arehyperbolic and so may have more singular solutions. As a result, the approximate delta functions that arise as fundamental solutions of the associatedCauchy problems are generallyoscillatory integrals. An example, which comes from a solution of theEuler–Tricomi equation oftransonicgas dynamics,[71] is the rescaledAiry functionε1/3Ai(xε1/3).{\displaystyle \varepsilon ^{-1/3}\operatorname {Ai} \left(x\varepsilon ^{-1/3}\right).}

Although the Fourier transform shows that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many approximate delta functions constructed as oscillatory integrals only converge in the sense of distributions (an example is theDirichlet kernel below), rather than in the sense of measures.

Another example is the Cauchy problem for thewave equation inR1+1:[72]c22ut2Δu=0u=0,ut=δfor t=0.{\displaystyle {\begin{aligned}c^{-2}{\frac {\partial ^{2}u}{\partial t^{2}}}-\Delta u&=0\\u=0,\quad {\frac {\partial u}{\partial t}}=\delta &\qquad {\text{for }}t=0.\end{aligned}}}

The solutionu represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin.

Other approximations to the identity of this kind include thesinc function (used widely in electronics and telecommunications)ηε(x)=1πxsin(xε)=12π1ε1εcos(kx)dk{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi x}}\sin \left({\frac {x}{\varepsilon }}\right)={\frac {1}{2\pi }}\int _{-{\frac {1}{\varepsilon }}}^{\frac {1}{\varepsilon }}\cos(kx)\,dk}

and theBessel functionηε(x)=1εJ1ε(x+1ε).{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}J_{\frac {1}{\varepsilon }}\left({\frac {x+1}{\varepsilon }}\right).}
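Even though the sinc family is not absolutely integrable, its distributional sifting action can still be observed numerically against a smooth, rapidly decaying test function (a sketch; the Gaussian f and the value ε = 0.05 are arbitrary choices):

```python
import numpy as np

def sinc_kernel(x, eps):
    # sin(x/eps)/(pi*x), written via np.sinc (= sin(pi t)/(pi t)) to handle x = 0 cleanly
    return np.sinc(x / (np.pi * eps)) / (np.pi * eps)

x = np.linspace(-8.0, 8.0, 160001)   # fine grid resolving the oscillation period ~2*pi*eps
dx = x[1] - x[0]
f = np.exp(-x ** 2)                  # smooth, rapidly decaying, f(0) = 1

val = np.sum(sinc_kernel(x, 0.05) * f) * dx   # ~ f(0) despite the oscillation
```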

Plane wave decomposition


One approach to the study of a linear partial differential equationL[u]=f,{\displaystyle L[u]=f,}

whereL is adifferential operator onRn, is to seek first a fundamental solution, which is a solution of the equationL[u]=δ.{\displaystyle L[u]=\delta .}

WhenL is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier first to consider an equation of the formL[u]=h{\displaystyle L[u]=h}

whereh is aplane wave function, meaning that it has the formh=h(xξ){\displaystyle h=h(x\cdot \xi )}

for some vectorξ. Such an equation can be resolved (if the coefficients ofL areanalytic functions) by theCauchy–Kovalevskaya theorem or (if the coefficients ofL are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations.

Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially byJohann Radon, and then developed in this form byFritz John (1955).[73] Choosek so thatn +k is an even integer, and for a real numbers, putg(s)=Re[sklog(is)k!(2πi)n]={|s|k4k!(2πi)n1n odd|s|klog|s|k!(2πi)nn even.{\displaystyle g(s)=\operatorname {Re} \left[{\frac {-s^{k}\log(-is)}{k!(2\pi i)^{n}}}\right]={\begin{cases}{\frac {|s|^{k}}{4k!(2\pi i)^{n-1}}}&n{\text{ odd}}\\[5pt]-{\frac {|s|^{k}\log |s|}{k!(2\pi i)^{n}}}&n{\text{ even.}}\end{cases}}}

Thenδ is obtained by applying a power of theLaplacian to the integral with respect to the unitsphere measure ofg(x ·ξ) forξ in theunit sphereSn−1:δ(x)=Δx(n+k)/2Sn1g(xξ)dωξ.{\displaystyle \delta (x)=\Delta _{x}^{(n+k)/2}\int _{S^{n-1}}g(x\cdot \xi )\,d\omega _{\xi }.}

The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test functionφ,φ(x)=Rnφ(y)dyΔxn+k2Sn1g((xy)ξ)dωξ.{\displaystyle \varphi (x)=\int _{\mathbf {R} ^{n}}\varphi (y)\,dy\,\Delta _{x}^{\frac {n+k}{2}}\int _{S^{n-1}}g((x-y)\cdot \xi )\,d\omega _{\xi }.}

The result follows from the formula for theNewtonian potential (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for theRadon transform because it recovers the value ofφ(x) from its integrals over hyperplanes.[74] For instance, ifn is odd andk = 1, then the integral on the right hand side iscnΔxn+12Sn1φ(y)|(yx)ξ|dωξdy=cnΔx(n+1)/2Sn1dωξ|p|Rφ(ξ,p+xξ)dp{\displaystyle {\begin{aligned}&c_{n}\Delta _{x}^{\frac {n+1}{2}}\iint _{S^{n-1}}\varphi (y)|(y-x)\cdot \xi |\,d\omega _{\xi }\,dy\\[5pt]&\qquad =c_{n}\Delta _{x}^{(n+1)/2}\int _{S^{n-1}}\,d\omega _{\xi }\int _{-\infty }^{\infty }|p|R\varphi (\xi ,p+x\cdot \xi )\,dp\end{aligned}}}

whereRφ(ξ,p) is the Radon transform ofφ:Rφ(ξ,p)=xξ=pφ(x)dn1x.{\displaystyle R\varphi (\xi ,p)=\int _{x\cdot \xi =p}\varphi (x)\,d^{n-1}x.}

An alternative equivalent expression of the plane wave decomposition is:[75]δ(x)={(n1)!(2πi)nSn1(xξ)ndωξn even12(2πi)n1Sn1δ(n1)(xξ)dωξn odd.{\displaystyle \delta (x)={\begin{cases}{\frac {(n-1)!}{(2\pi i)^{n}}}\displaystyle \int _{S^{n-1}}(x\cdot \xi )^{-n}\,d\omega _{\xi }&n{\text{ even}}\\{\frac {1}{2(2\pi i)^{n-1}}}\displaystyle \int _{S^{n-1}}\delta ^{(n-1)}(x\cdot \xi )\,d\omega _{\xi }&n{\text{ odd}}.\end{cases}}}

Fourier transform


The delta function is atempered distribution, and therefore it has a well-definedFourier transform. Formally, one finds[76]

δ^(ξ)=e2πixξδ(x)dx=1.{\displaystyle {\widehat {\delta }}(\xi )=\int _{-\infty }^{\infty }e^{-2\pi ix\xi }\,\delta (x)dx=1.}

Properly speaking, the Fourier transform of a distribution is defined by imposingself-adjointness of the Fourier transform under theduality pairing,{\displaystyle \langle \cdot ,\cdot \rangle } of tempered distributions withSchwartz functions. Thusδ^{\displaystyle {\widehat {\delta }}} is defined as the unique tempered distribution satisfying

δ^,φ=δ,φ^{\displaystyle \langle {\widehat {\delta }},\varphi \rangle =\langle \delta ,{\widehat {\varphi }}\rangle }

for all Schwartz functionsφ. And indeed it follows from this thatδ^=1.{\displaystyle {\widehat {\delta }}=1.}
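The identity δ̂ = 1 can be illustrated numerically by transforming a narrow unit-mass Gaussian spike in place of δ (a sketch; the width a and the sampled frequencies are arbitrary choices):

```python
import numpy as np

a = 0.005                                   # spike width; smaller -> flatter transform
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
spike = np.exp(-(x / a) ** 2) / (a * np.sqrt(np.pi))   # unit-mass stand-in for delta

xi = np.array([0.0, 1.0, 5.0])
# delta_hat(xi) = integral of e^{-2 pi i x xi} * spike(x) dx, expected ~1 for each xi
transform = np.array([np.sum(np.exp(-2j * np.pi * x * freq) * spike) * dx for freq in xi])
```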

As a result of this identity, theconvolution of the delta function with any other tempered distributionS is simplyS:

Sδ=S.{\displaystyle S*\delta =S.}

That is to say thatδ is anidentity element for the convolution on tempered distributions, and in fact, the space of compactly supported distributions under convolution is anassociative algebra with identity the delta function. This property is fundamental insignal processing, as convolution with a tempered distribution is alinear time-invariant system, and applying the linear time-invariant system to the delta function measures itsimpulse response. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation forδ, and once it is known, it characterizes the system completely. SeeLTI system theory § Impulse response and convolution.

The inverse Fourier transform of the tempered distributionf(ξ) = 1 is the delta function. Formally, this is expressed as1e2πixξdξ=δ(x){\displaystyle \int _{-\infty }^{\infty }1\cdot e^{2\pi ix\xi }\,d\xi =\delta (x)}and more rigorously, it follows since1,f^=f(0)=δ,f{\displaystyle \langle 1,{\widehat {f}}\rangle =f(0)=\langle \delta ,f\rangle }for all Schwartz functionsf.

In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel onR. Formally, one hasei2πξ1t[ei2πξ2t]dt=ei2π(ξ2ξ1)tdt=δ(ξ2ξ1).{\displaystyle \int _{-\infty }^{\infty }e^{i2\pi \xi _{1}t}\left[e^{i2\pi \xi _{2}t}\right]^{*}\,dt=\int _{-\infty }^{\infty }e^{-i2\pi (\xi _{2}-\xi _{1})t}\,dt=\delta (\xi _{2}-\xi _{1}).}

This is, of course, shorthand for the assertion that the Fourier transform of the tempered distributionf(t)=ei2πξ1t{\displaystyle f(t)=e^{i2\pi \xi _{1}t}}isf^(ξ2)=δ(ξ1ξ2){\displaystyle {\widehat {f}}(\xi _{2})=\delta (\xi _{1}-\xi _{2})}which again follows by imposing self-adjointness of the Fourier transform.

Byanalytic continuation of the Fourier transform, theLaplace transform of the delta function is found to be[77]0δ(ta)estdt=esa.{\displaystyle \int _{0}^{\infty }\delta (t-a)\,e^{-st}\,dt=e^{-sa}.}

Fourier kernels

See also:Convergence of Fourier series

In the study ofFourier series, a major question consists of determining whether and in what sense the Fourier series associated with aperiodic function converges to the function. Then-th partial sum of the Fourier series of a functionf of period 2π is defined by convolution (on the interval[−π,π]) with theDirichlet kernel:DN(x)=n=NNeinx=sin((N+12)x)sin(x/2).{\displaystyle D_{N}(x)=\sum _{n=-N}^{N}e^{inx}={\frac {\sin \left(\left(N+{\frac {1}{2}}\right)x\right)}{\sin(x/2)}}.}Thus,sN(f)(x)=DNf(x)=n=NNaneinx{\displaystyle s_{N}(f)(x)=D_{N}*f(x)=\sum _{n=-N}^{N}a_{n}e^{inx}}wherean=12πππf(y)einydy.{\displaystyle a_{n}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }f(y)e^{-iny}\,dy.}A fundamental result of elementary Fourier series states that the Dirichlet kernel restricted to the interval [−π,π] tends to a multiple of the delta function asN → ∞. This is interpreted in the distribution sense, that2πsN(f)(0)=ππDN(x)f(x)dx2πf(0){\displaystyle 2\pi s_{N}(f)(0)=\int _{-\pi }^{\pi }D_{N}(x)f(x)\,dx\to 2\pi f(0)}for every compactly supportedsmooth functionf. Thus, formally one hasδ(x)=12πn=einx{\displaystyle \delta (x)={\frac {1}{2\pi }}\sum _{n=-\infty }^{\infty }e^{inx}}on the interval[−π,π].

Despite this, the result does not hold for all compactly supportedcontinuous functions: that isDN does not converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction of a variety ofsummability methods to produce convergence. The method ofCesàro summation leads to theFejér kernel[78]

FN(x)=1Nn=0N1Dn(x)=1N(sinNx2sinx2)2.{\displaystyle F_{N}(x)={\frac {1}{N}}\sum _{n=0}^{N-1}D_{n}(x)={\frac {1}{N}}\left({\frac {\sin {\frac {Nx}{2}}}{\sin {\frac {x}{2}}}}\right)^{2}.}

TheFejér kernels tend to the delta function in the stronger sense that[79]

ππFN(x)f(x)dx2πf(0){\displaystyle \int _{-\pi }^{\pi }F_{N}(x)f(x)\,dx\to 2\pi f(0)}

for every compactly supportedcontinuous functionf. The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point.
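This Fejér-mean recovery of f(0) can be checked numerically (a sketch; the order N, the grid, and the test function f(x) = exp(cos x) are arbitrary choices):

```python
import numpy as np

N = 400
m = 200_000
# midpoint grid on (-pi, pi) avoids the removable singularity of the formula at x = 0
x = -np.pi + (np.arange(m) + 0.5) * (2.0 * np.pi / m)
dx = 2.0 * np.pi / m

fejer = (np.sin(N * x / 2.0) / np.sin(x / 2.0)) ** 2 / N   # Fejer kernel F_N
f = np.exp(np.cos(x))                   # continuous 2*pi-periodic test function, f(0) = e

# (1/2pi) * integral of F_N * f over [-pi, pi], expected to approach f(0)
mean_at_0 = np.sum(fejer * f) * dx / (2.0 * np.pi)
err = abs(mean_at_0 - np.exp(1.0))
```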

Hilbert space theory


The Dirac delta distribution is adensely definedunboundedlinear functional on theHilbert spaceL2 ofsquare-integrable functions.[80] Indeed, smooth compactly supported functions aredense inL2, and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces ofL2 and to give a strongertopology on which the delta function defines abounded linear functional.

Sobolev spaces


TheSobolev embedding theorem forSobolev spaces on the real lineR implies that any square-integrable functionf such that

fH12=|f^(ξ)|2(1+|ξ|2)dξ<{\displaystyle \|f\|_{H^{1}}^{2}=\int _{-\infty }^{\infty }|{\widehat {f}}(\xi )|^{2}(1+|\xi |^{2})\,d\xi <\infty }

is automatically continuous, and satisfies in particular

|δ[f]|=|f(0)|CfH1.{\displaystyle |\delta [f]|=|f(0)|\leq C\|f\|_{H^{1}}.}

Thusδ is a bounded linear functional on the Sobolev spaceH1.[81] Equivalentlyδ is an element of thecontinuous dual spaceH−1 ofH1. More generally, inn dimensions, one hasδHs(Rn) provideds >n/2.

Spaces of holomorphic functions


Incomplex analysis, the delta function enters viaCauchy's integral formula, which asserts that ifD is a domain in thecomplex plane with smooth boundary, then

f(z)=12πiDf(ζ)dζζz,zD{\displaystyle f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}},\quad z\in D}

for allholomorphic functionsf inD that are continuous on the closure ofD. As a result, the delta functionδz is represented in this class of holomorphic functions by the Cauchy integral:

δz[f]=f(z)=12πiDf(ζ)dζζz.{\displaystyle \delta _{z}[f]=f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}}.}

Moreover, letH2(∂D) be theHardy space consisting of the closure inL2(∂D) of all holomorphic functions inD continuous up to the boundary ofD. Then functions inH2(∂D) uniquely extend to holomorphic functions inD, and the Cauchy integral formula continues to hold. In particular forzD, the delta functionδz is a continuous linear functional onH2(∂D). This is a special case of the situation inseveral complex variables in which, for smooth domainsD, theSzegő kernel plays the role of the Cauchy integral.[82]

Another representation of the delta function in a space of holomorphic functions is on the spaceH(D)L2(D){\displaystyle H(D)\cap L^{2}(D)} of square-integrable holomorphic functions in an open setDCn{\displaystyle D\subset \mathbb {C} ^{n}}. This is a closed subspace ofL2(D){\displaystyle L^{2}(D)}, and therefore is a Hilbert space. On the other hand, the functional that evaluates a holomorphic function inH(D)L2(D){\displaystyle H(D)\cap L^{2}(D)} at a pointz{\displaystyle z} ofD{\displaystyle D} is a continuous functional, and so by the Riesz representation theorem, is represented by integration against a kernelKz(ζ){\displaystyle K_{z}(\zeta )}, theBergman kernel.[83] This kernel is the analog of the delta function in this Hilbert space. A Hilbert space having such a kernel is called areproducing kernel Hilbert space. In the special case of the unit disc, one hasδw[f]=f(w)=1π|z|<1f(z)dxdy(1z¯w)2.{\displaystyle \delta _{w}[f]=f(w)={\frac {1}{\pi }}\iint _{|z|<1}{\frac {f(z)\,dx\,dy}{(1-{\bar {z}}w)^{2}}}.}

Resolutions of the identity


Given a completeorthonormal basis set of functions{φn} in a separable Hilbert space, for example, the normalizedeigenvectors of acompact self-adjoint operator, any vectorf can be expressed asf=n=1αnφn.{\displaystyle f=\sum _{n=1}^{\infty }\alpha _{n}\varphi _{n}.}The coefficients {αn} are found asαn=φn,f,{\displaystyle \alpha _{n}=\langle \varphi _{n},f\rangle ,}which may be represented by the notation:αn=φnf,{\displaystyle \alpha _{n}=\varphi _{n}^{\dagger }f,}a form of thebra–ket notation of Dirac.[84] Adopting this notation, the expansion off takes thedyadic form:[85]

f=n=1φn(φnf).{\displaystyle f=\sum _{n=1}^{\infty }\varphi _{n}\left(\varphi _{n}^{\dagger }f\right).}

LettingI denote theidentity operator on the Hilbert space, the expression

I=n=1φnφn,{\displaystyle I=\sum _{n=1}^{\infty }\varphi _{n}\varphi _{n}^{\dagger },}

is called aresolution of the identity. When the Hilbert space is the spaceL2(D) of square-integrable functions on a domainD, the quantity:

φnφn,{\displaystyle \varphi _{n}\varphi _{n}^{\dagger },}

is an integral operator, and the expression forf can be rewritten

f(x)=n=1D(φn(x)φn(ξ))f(ξ)dξ.{\displaystyle f(x)=\sum _{n=1}^{\infty }\int _{D}\,\left(\varphi _{n}(x)\varphi _{n}^{*}(\xi )\right)f(\xi )\,d\xi .}

The right-hand side converges tof in theL2 sense. It need not hold in a pointwise sense, even whenf is a continuous function. Nevertheless, it is common to abuse notation and write

f(x)=δ(xξ)f(ξ)dξ,{\displaystyle f(x)=\int \,\delta (x-\xi )f(\xi )\,d\xi ,}

resulting in the representation of the delta function:[86]

δ(xξ)=n=1φn(x)φn(ξ).{\displaystyle \delta (x-\xi )=\sum _{n=1}^{\infty }\varphi _{n}(x)\varphi _{n}^{*}(\xi ).}

With a suitablerigged Hilbert space(Φ,L2(D), Φ*) whereΦ ⊂L2(D) contains all compactly supported smooth functions, this summation may converge inΦ*, depending on the properties of the basisφn. In most cases of practical interest, the orthonormal basis comes from an integral or differential operator (e.g. theheat kernel), in which case the series converges in thedistribution sense.[87]
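A concrete sketch of such a resolution of the identity (the basis φn(x) = √(2/π) sin(nx) on [0, π], the eigenfunctions of the Dirichlet Laplacian, and the test function are illustrative choices): truncating the formal kernel Σ φn(x)φn(ξ) and applying it to f reproduces f, with the L² error shrinking as more terms are kept.

```python
import numpy as np

m = 2000
x = np.linspace(0.0, np.pi, m + 1)
dx = x[1] - x[0]
f = x * (np.pi - x)      # vanishes at the endpoints, matching the sine basis

def partial_sum(n_terms):
    # sum_{n<=n_terms} phi_n(x) * <phi_n, f>  -- the truncated kernel applied to f
    out = np.zeros_like(x)
    for n in range(1, n_terms + 1):
        phi = np.sqrt(2.0 / np.pi) * np.sin(n * x)
        coeff = np.sum(phi * f) * dx
        out += coeff * phi
    return out

err_5 = np.sqrt(np.sum((partial_sum(5) - f) ** 2) * dx)    # L^2 error with 5 terms
err_50 = np.sqrt(np.sum((partial_sum(50) - f) ** 2) * dx)  # L^2 error with 50 terms
```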

Infinitesimal delta functions


Cauchy used an infinitesimalα to write down a unit impulse, infinitely tall and narrow Dirac-type delta functionδα satisfyingF(x)δα(x)dx=F(0){\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)} in a number of articles in 1827.[88] Cauchy defined an infinitesimal in hisCours d'Analyse (1821) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's andLazare Carnot's terminology.

Non-standard analysis allows one to rigorously treat infinitesimals. The article byYamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by thehyperreals. Here the Dirac delta can be given by an actual function, having the property that for every real functionF one hasF(x)δα(x)dx=F(0){\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)} as anticipated by Fourier and Cauchy.

Dirac comb

Main article:Dirac comb
A Dirac comb is an infinite series of Dirac delta functions spaced at intervals ofT

A so-called uniform "pulse train" of Dirac delta measures, which is known as aDirac comb, or as theSha distribution, creates asampling function, often used indigital signal processing (DSP) and discrete time signal analysis. The Dirac comb is given as theinfinite sum, whose limit is understood in the distribution sense,

Ш(x)=n=δ(xn),{\displaystyle \operatorname {\text{Ш}} (x)=\sum _{n=-\infty }^{\infty }\delta (x-n),}

which is a sequence of point masses at each of the integers.

Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant because iff is anySchwartz function, then theperiodization off is given by the convolution(fШ)(x)=n=f(xn).{\displaystyle (f*\operatorname {\text{Ш}} )(x)=\sum _{n=-\infty }^{\infty }f(x-n).}In particular,(fШ)=f^Ш^=f^Ш{\displaystyle (f*\operatorname {\text{Ш}} )^{\wedge }={\widehat {f}}{\widehat {\operatorname {\text{Ш}} }}={\widehat {f}}\operatorname {\text{Ш}} }is precisely thePoisson summation formula.[89][90]More generally, this formula remains true iff is a tempered distribution of rapid descent or, equivalently, iff^{\displaystyle {\widehat {f}}} is a slowly growing, ordinary function within the space of tempered distributions.
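The Poisson summation formula can be checked directly for a Gaussian, whose Fourier transform is known in closed form (a sketch; the parameter a = 2 and the truncation at |n| ≤ 50 are arbitrary choices):

```python
import numpy as np

# f(x) = exp(-a x^2) has Fourier transform fhat(xi) = sqrt(pi/a) * exp(-pi^2 xi^2 / a)
# under the e^{-2 pi i x xi} convention; Poisson summation says sum f(n) = sum fhat(k).
a = 2.0
n = np.arange(-50, 51)     # both sums converge extremely fast, so a short range suffices

lhs = np.sum(np.exp(-a * n ** 2))
rhs = np.sum(np.sqrt(np.pi / a) * np.exp(-np.pi ** 2 * n ** 2 / a))
gap = abs(lhs - rhs)
```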

Sokhotski–Plemelj theorem


TheSokhotski–Plemelj theorem, important in quantum mechanics, relates the delta function to the distributionp.v.1/x, theCauchy principal value of the function1/x, defined by

p.v.1x,φ=limε0+|x|>εφ(x)xdx.{\displaystyle \left\langle \operatorname {p.v.} {\frac {1}{x}},\varphi \right\rangle =\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {\varphi (x)}{x}}\,dx.}

Sokhotsky's formula states that[91]

limε0+1x±iε=p.v.1xiπδ(x),{\displaystyle \lim _{\varepsilon \to 0^{+}}{\frac {1}{x\pm i\varepsilon }}=\operatorname {p.v.} {\frac {1}{x}}\mp i\pi \delta (x),}

Here the limit is understood in the distribution sense: for all compactly supported smooth functionsf,

limε0+f(x)x±iεdx=iπf(0)+limε0+|x|>εf(x)xdx.{\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{-\infty }^{\infty }{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi f(0)+\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {f(x)}{x}}\,dx.}
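Sokhotsky's formula can be probed numerically. With an even test function the principal-value term vanishes, so the integral should approach ∓iπ f(0); a sketch with a Gaussian f and ε = 10⁻³ (both arbitrary choices), using the lower sign 1/(x − iε):

```python
import numpy as np

eps = 1e-3
x = np.linspace(-10.0, 10.0, 2_000_001)   # fine symmetric grid resolving the width eps
dx = x[1] - x[0]
f = np.exp(-x ** 2)                        # even test function, f(0) = 1

integral = np.sum(f / (x - 1j * eps)) * dx
im_err = abs(integral.imag - np.pi)        # imaginary part should approach pi * f(0)
re_err = abs(integral.real)                # principal value vanishes by symmetry
```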

Relationship to the Kronecker delta


TheKronecker deltaδij is the quantity defined by

δij={1i=j0ij{\displaystyle \delta _{ij}={\begin{cases}1&i=j\\0&i\not =j\end{cases}}}

for all integersi,j. This function then satisfies the following analog of the sifting property: ifai (fori in the set of all integers) is anydoubly infinite sequence, then

i=aiδik=ak.{\displaystyle \sum _{i=-\infty }^{\infty }a_{i}\delta _{ik}=a_{k}.}

Similarly, for any real or complex valued continuous functionf onR, the Dirac delta satisfies the sifting property

f(x)δ(xx0)dx=f(x0).{\displaystyle \int _{-\infty }^{\infty }f(x)\delta (x-x_{0})\,dx=f(x_{0}).}

This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.[92]
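Both sifting properties are easy to demonstrate numerically (a sketch; the sequence, the index, and the test function are arbitrary choices): the Kronecker delta sifts by plain indexing, while the Dirac delta is mimicked by a narrow unit-mass spike.

```python
import numpy as np

# discrete sifting: sum_i a_i * delta_{ik} picks out a_k
a = np.array([3.0, -1.0, 4.0, 1.0, 5.0])
k = 2
kron = np.eye(len(a))                     # kron[i, j] = delta_{ij}
discrete = np.sum(a * kron[:, k])         # equals a[k] = 4.0

# continuous sifting: integral f(x) * delta(x - x0) dx ~ f(x0)
x = np.linspace(-5.0, 5.0, 200001)
dx = x[1] - x[0]
x0 = 1.5
spike = np.exp(-((x - x0) / 0.01) ** 2) / (0.01 * np.sqrt(np.pi))  # unit-mass spike at x0
continuous = np.sum(np.sin(x) * spike) * dx                        # ~ sin(x0)
```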

Applications


Probability theory

See also:Probability distribution § Dirac delta representation

Inprobability theory andstatistics, the Dirac delta function is often used to represent adiscrete distribution, or a partially discrete, partiallycontinuous distribution, using aprobability density function (which is normally used to represent absolutely continuous distributions). For example, the probability density functionf(x) of a discrete distribution consisting of pointsx = {x1, ...,xn}, with corresponding probabilitiesp1, ...,pn, can be written as[93]

f(x)=i=1npiδ(xxi).{\displaystyle f(x)=\sum _{i=1}^{n}p_{i}\delta (x-x_{i}).}

As another example, consider a distribution that 6/10 of the time returns a value drawn from a standardnormal distribution, and 4/10 of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discretemixture distribution). The density function of this distribution can be written as

f(x)=0.612πex22+0.4δ(x3.5).{\displaystyle f(x)=0.6\,{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {x^{2}}{2}}}+0.4\,\delta (x-3.5).}
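Sampling from this mixture makes the delta component visible as an atom (a sketch; the sample size and seed are arbitrary choices): a fraction ≈ 0.4 of the draws equal 3.5 exactly, and the sample mean approaches 0.6·0 + 0.4·3.5 = 1.4.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
is_atom = rng.random(n) < 0.4                          # pick the discrete component w.p. 0.4
samples = np.where(is_atom, 3.5, rng.standard_normal(n))

atom_fraction = np.mean(samples == 3.5)   # the atom at 3.5 carries probability ~0.4
sample_mean = samples.mean()              # ~ 0.6*0 + 0.4*3.5 = 1.4
```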

The delta function is also used to represent the resulting probability density function of a random variable that is transformed by a continuously differentiable function. IfY =g(X) for a continuously differentiable functiong, then the density ofY can be written as

fY(y)=+fX(x)δ(yg(x))dx.{\displaystyle f_{Y}(y)=\int _{-\infty }^{+\infty }f_{X}(x)\delta (y-g(x))\,dx.}

The delta function is also used in a completely different way to represent thelocal time of adiffusion process (likeBrownian motion).[94] The local time of a stochastic processB(t) is given by(x,t)=0tδ(xB(s))ds{\displaystyle \ell (x,t)=\int _{0}^{t}\delta (x-B(s))\,ds}and represents the amount of time that the process spends at the pointx in the range of the process. More precisely, in one dimension this integral can be written(x,t)=limε0+12ε0t1[xε,x+ε](B(s))ds{\displaystyle \ell (x,t)=\lim _{\varepsilon \to 0^{+}}{\frac {1}{2\varepsilon }}\int _{0}^{t}\mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}(B(s))\,ds}where1[xε,x+ε]{\displaystyle \mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}} is theindicator function of the interval[xε,x+ε].{\displaystyle [x-\varepsilon ,x+\varepsilon ].}

Quantum mechanics


The delta function is expedient inquantum mechanics. Thewave function of a particle gives theprobability amplitude of finding a particle within a given region of space. Wave functions are assumed to be elements of the Hilbert spaceL2 ofsquare-integrable functions, and the total probability of finding a particle within a given interval is the integral of the magnitude of the wave function squared over the interval. A set{|φn} of wave functions is orthonormal if

φnφm=δnm,{\displaystyle \langle \varphi _{n}\mid \varphi _{m}\rangle =\delta _{nm},}

whereδnm is the Kronecker delta. A set of orthonormal wave functions is complete in the space of square-integrable functions if any wave function|ψ⟩ can be expressed as a linear combination of the{|φn} with complex coefficients:

ψ=cnφn,{\displaystyle \psi =\sum c_{n}\varphi _{n},}

wherecn =φn|ψ. Complete orthonormal systems of wave functions appear naturally as theeigenfunctions of theHamiltonian (of abound system) in quantum mechanics, the operator that measures the energy levels; these levels are called the eigenvalues. The set of eigenvalues, in this case, is known as thespectrum of the Hamiltonian. Inbra–ket notation this equality implies theresolution of the identity:

I=|φnφn|.{\displaystyle I=\sum |\varphi _{n}\rangle \langle \varphi _{n}|.}

Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of anobservable can also be continuous. An example is theposition operator,Qψ(x) =xψ(x). The spectrum of the position (in one dimension) is the entire real line and is called acontinuous spectrum. However, unlike the Hamiltonian, the position operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of available functions by allowing distributions as well, i.e., to replace the Hilbert space with arigged Hilbert space.[95] In this context, the position operator has a complete set ofgeneralized eigenfunctions,[96] labeled by the pointsy of the real line, given by

φy(x)=δ(xy).{\displaystyle \varphi _{y}(x)=\delta (x-y).}

The generalized eigenfunctions of the position operator are called theeigenkets and are denoted byφy =|y.[97]

Similar considerations apply to any other(unbounded) self-adjoint operator with continuous spectrum and no degenerate eigenvalues, such as themomentum operatorP. In that case, there is a setΩ of real numbers (the spectrum) and a collection of distributionsφy withy ∈ Ω such that

Pφy=yφy.{\displaystyle P\varphi _{y}=y\varphi _{y}.}

That is, the φy are the generalized eigenvectors of P. If they form an "orthonormal basis" in the distribution sense, that is:

φy,φy=δ(yy),{\displaystyle \langle \varphi _{y},\varphi _{y'}\rangle =\delta (y-y'),}

then for any test function ψ,

ψ(x)=Ωc(y)φy(x)dy{\displaystyle \psi (x)=\int _{\Omega }c(y)\varphi _{y}(x)\,dy}

where c(y) = ⟨ψ, φy⟩. That is, there is a resolution of the identity

I=Ω|φyφy|dy{\displaystyle I=\int _{\Omega }|\varphi _{y}\rangle \,\langle \varphi _{y}|\,dy}

where the operator-valued integral is again understood in the weak sense. If the spectrum ofP has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous spectrum.
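The continuous expansion above can also be illustrated numerically. In the sketch below (the test function and the Gaussian width a are illustrative choices, not from the article), the generalized eigenfunction δ(x − y) is replaced by a narrow Gaussian, and the coefficient ⟨φy, ψ⟩ is seen to approach ψ(y) as the width shrinks, which is the sifting property underlying the expansion:

```python
import numpy as np

# Approximate the position "eigenket" phi_y(x) = delta(x - y) by a
# Gaussian nascent delta of width a and pair it with a smooth psi.
x = np.linspace(-10.0, 10.0, 40001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.cos(x)          # a smooth, decaying test function

def c(y, a):
    """Approximate <phi_y, psi> with phi_y a Gaussian of width a."""
    phi_y = np.exp(-((x - y) / a) ** 2) / (a * np.sqrt(np.pi))
    return np.sum(phi_y * psi) * dx          # simple Riemann sum

y = 1.0
exact = np.exp(-y**2 / 2) * np.cos(y)        # psi(1.0)
for a in (0.5, 0.1, 0.02):
    print(a, c(y, a), "->", exact)           # c(y, a) approaches psi(y)
```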

The delta function also has many more specialized applications in quantum mechanics, such as the delta potential models for single and double potential wells.

Structural mechanics


The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. The governing equation of a simple mass–spring system excited by a sudden force impulse I at time t = 0 can be written[98][99]

md2ξdt2+kξ=Iδ(t),{\displaystyle m{\frac {d^{2}\xi }{dt^{2}}}+k\xi =I\delta (t),}

where m is the mass, ξ is the deflection, and k is the spring constant.
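A short numerical sketch (parameter values are arbitrary, chosen only for illustration) confirms the standard reading of the impulse term: I δ(t) imparts momentum I to the mass, so for t > 0 the system oscillates freely from ξ(0) = 0 with initial velocity I/m:

```python
import numpy as np

# For m*xi'' + k*xi = I*delta(t) starting at rest, the impulse gives
# xi(0) = 0, xi'(0) = I/m, hence xi(t) = (I/(m*omega))*sin(omega*t),
# with omega = sqrt(k/m).
m, k, I = 2.0, 8.0, 3.0          # illustrative parameter values
omega = np.sqrt(k / m)

def xi(t):
    return (I / (m * omega)) * np.sin(omega * t)

# Cross-check by time-stepping the homogeneous equation m*xi'' + k*xi = 0
# from the post-impulse state with a symplectic Euler scheme.
dt, steps = 1e-4, 20000          # integrate up to t = 2
pos, vel = 0.0, I / m
for _ in range(steps):
    vel += -(k / m) * pos * dt
    pos += vel * dt
print(pos, xi(2.0))              # the two values agree closely
```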

As another example, the equation governing the static deflection of a slender beam is, according to Euler–Bernoulli theory,

EId4wdx4=q(x),{\displaystyle EI{\frac {d^{4}w}{dx^{4}}}=q(x),}

where EI is the bending stiffness of the beam, w is the deflection, x is the spatial coordinate, and q(x) is the load distribution. If a beam is loaded by a point force F at x = x0, the load distribution is written

q(x)=Fδ(xx0).{\displaystyle q(x)=F\delta (x-x_{0}).}

As the integration of the delta function results in the Heaviside step function, it follows that the static deflection of a slender beam subject to multiple point loads is described by a set of piecewise polynomials.
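This can be checked numerically. In the sketch below (grid, width, and load values are arbitrary, EI is set to 1, and boundary-condition constants are ignored), a narrow Gaussian stands in for F δ(x − x0); four successive integrations reproduce the Heaviside step in the shear force and then the piecewise cubic behaviour of the deflection:

```python
import numpy as np

# A narrow Gaussian approximates the point load F*delta(x - x0);
# repeated integration of w'''' = q (EI = 1) yields a Heaviside step,
# a ramp, and finally a piecewise cubic.
x0, F = 0.7, 1.0
x = np.linspace(0.0, 2.0, 200001)
dx = x[1] - x[0]
a = 0.005                                    # width of the nascent delta
q = F * np.exp(-((x - x0) / a) ** 2) / (a * np.sqrt(np.pi))

def antiderivative(f):
    return np.cumsum(f) * dx                 # running integral, zero at x = 0

shear = antiderivative(q)                    # ~ F * Heaviside(x - x0)
moment = antiderivative(shear)               # ~ F * (x - x0) * Heaviside(x - x0)
slope = antiderivative(moment)
defl = antiderivative(slope)                 # ~ F * (x - x0)**3 / 6 beyond x0

i = np.searchsorted(x, 1.5)                  # a point to the right of the load
print(defl[i], F * (x[i] - x0) ** 3 / 6)     # nearly equal
```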

Also, a point moment acting on a beam can be described by delta functions. Consider two opposing point forces F a distance d apart. They then produce a moment M = Fd acting on the beam. Now, let the distance d approach the limit zero, while M is kept constant. The load distribution, assuming a clockwise moment acting at x = 0, is written

q(x)=limd0(Fδ(x)Fδ(xd))=limd0(Mdδ(x)Mdδ(xd))=Mlimd0δ(x)δ(xd)d=Mδ(x).{\displaystyle {\begin{aligned}q(x)&=\lim _{d\to 0}{\Big (}F\delta (x)-F\delta (x-d){\Big )}\\[4pt]&=\lim _{d\to 0}\left({\frac {M}{d}}\delta (x)-{\frac {M}{d}}\delta (x-d)\right)\\[4pt]&=M\lim _{d\to 0}{\frac {\delta (x)-\delta (x-d)}{d}}\\[4pt]&=M\delta '(x).\end{aligned}}}

Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again results in piecewise polynomial deflection.
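The limit above can be verified numerically. In this sketch (the test function, widths, and M are illustrative choices), pairing the two-force load with a smooth ψ approaches −M ψ′(0), the defining distributional action of M δ′:

```python
import numpy as np

# Pairing (M/d)*(delta(x) - delta(x - d)) with a smooth psi tends to
# -M*psi'(0) as d -> 0, i.e. the action of M*delta'(x).
x = np.linspace(-5.0, 5.0, 200001)
dx = x[1] - x[0]
a = 0.01                                     # width of the nascent deltas
psi = np.exp(-x**2) * np.sin(x + 0.3)        # smooth test function
M = 2.0

def delta_a(shift):
    return np.exp(-((x - shift) / a) ** 2) / (a * np.sqrt(np.pi))

def pairing(d):
    q = (M / d) * (delta_a(0.0) - delta_a(d))
    return np.sum(q * psi) * dx

# For this psi, psi'(0) = cos(0.3), so the limit is -M*cos(0.3).
for d in (0.5, 0.1, 0.02):
    print(d, pairing(d))
```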

Notes

  1. ^Jeffrey 1993, p. 639.
  2. ^Arfken & Weber 2000, p. 84.
  3. ^Dirac 1930, §22 The δ function.
  4. ^Gelfand & Shilov 1966–1968, Volume I, §1.1.
  5. ^Zhao 2011, p. 174.
  6. ^Bracewell 2000, p. 74.
  7. ^Snieder 2004, p. 212.
  8. ^Schwartz 1950, p. 19.
  9. ^Schwartz 1950, p. 5.
  10. ^Jackson, J. D. (2008-08-01)."Examples of the zeroth theorem of the history of science".American Journal of Physics.76 (8):704–719.arXiv:0708.4249.Bibcode:2008AmJPh..76..704J.doi:10.1119/1.2904468.ISSN 0002-9505.
  11. ^Laugwitz 1989, p. 230.
  12. ^A more complete historical account can be found in van der Pol & Bremmer 1987, §V.4.
  13. ^Dirac, P. A. M. (January 1927)."The physical interpretation of the quantum dynamics".Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character.113 (765):621–641.Bibcode:1927RSPSA.113..621D.doi:10.1098/rspa.1927.0012.ISSN 0950-1207.S2CID 122855515.
  14. ^Zee, Anthony (2013).Einstein Gravity in a Nutshell. In a Nutshell Series (1st ed.). Princeton: Princeton University Press. p. 33.ISBN 978-0-691-14558-7.
  15. ^Fourier, JB (1822).The Analytical Theory of Heat (English translation by Alexander Freeman, 1878 ed.). The University Press. p. [1]., cf.https://books.google.com/books?id=-N8EAAAAYAAJ&pg=PA449 and pp. 546–551.Original French text.
  16. ^Komatsu, Hikosaburo (2002)."Fourier's hyperfunctions and Heaviside's pseudodifferential operators". InTakahiro Kawai; Keiko Fujita (eds.).Microlocal Analysis and Complex Fourier Analysis. World Scientific. p. [2].ISBN 978-981-238-161-3.
  17. ^Myint-U, Tyn; Debnath, Lokenath (2007). Linear Partial Differential Equations for Scientists And Engineers (4th ed.). Springer. p. [3]. ISBN 978-0-8176-4393-5.
  18. ^Debnath, Lokenath; Bhatta, Dambaru (2007).Integral Transforms And Their Applications (2nd ed.).CRC Press. p. [4].ISBN 978-1-58488-575-7.
  19. ^Grattan-Guinness, Ivor (2009).Convolutions in French Mathematics, 1800–1840: From the Calculus and Mechanics to Mathematical Analysis and Mathematical Physics, Volume 2. Birkhäuser. p. 653.ISBN 978-3-7643-2238-0.
  20. ^See, for example, Cauchy, Augustin-Louis (1882–1974). "Des intégrales doubles qui se présentent sous une forme indéterminée". Oeuvres complètes d'Augustin Cauchy. Série 1, tome 1 / publiées sous la direction scientifique de l'Académie des sciences et sous les auspices de M. le ministre de l'Instruction publique.
  21. ^Mitrović, Dragiša; Žubrinić, Darko (1998).Fundamentals of Applied Functional Analysis: Distributions, Sobolev Spaces. CRC Press. p. 62.ISBN 978-0-582-24694-2.
  22. ^Kracht, Manfred; Kreyszig, Erwin (1989). "On singular integral operators and generalizations". In Themistocles M. Rassias (ed.). Topics in Mathematical Analysis: A Volume Dedicated to the Memory of A.L. Cauchy. World Scientific. p. 553. ISBN 978-9971-5-0666-7.
  23. ^Halperin & Schwartz 1952, p. 1.
  24. ^Dirac 1930, p. 63.
  25. ^Rudin 1966, §1.20
  26. ^Hewitt & Stromberg 1963, §19.61.
  27. ^Gelfand & Shilov 1966–1968, Volume I, §1.3.
  28. ^Driggers 2003, p. 2321. See also Bracewell 1986, Chapter 5 for a different interpretation. Other conventions for assigning the value of the Heaviside function at zero exist, and some of these are not consistent with what follows.
  29. ^Hewitt & Stromberg 1963, §9.19.
  30. ^Billingsley 1986, p. 356.
  31. ^Hazewinkel 2011, p. 41.
  32. ^Stein & Shakarchi 2007, p. 285.
  33. ^Strichartz 1994, §2.2.
  34. ^Hörmander 1983, Theorem 2.1.5.
  35. ^Schwartz 1950.
  36. ^Bracewell 1986, Chapter 5.
  37. ^Hörmander 1983, §3.1.
  38. ^Strichartz 1994, §2.3.
  39. ^Hörmander 1983, §8.2.
  40. ^Rudin 1966, §1.20.
  41. ^Dieudonné 1972, §17.3.3.
  42. ^Krantz, Steven G.; Parks, Harold R. (2008-12-15).Geometric Integration Theory. Springer Science & Business Media.ISBN 978-0-8176-4679-0.
  43. ^Federer 1969, §2.5.19.
  44. ^Strichartz 1994, Problem 2.6.2.
  45. ^Vladimirov 1971, Chapter 2, Example 3(d).
  46. ^Weisstein, Eric W."Sifting Property".MathWorld.
  47. ^Karris, Steven T. (2003).Signals and Systems with MATLAB Applications. Orchard Publications. p. 15.ISBN 978-0-9709511-6-8.
  48. ^Roden, Martin S. (2014-05-17).Introduction to Communication Theory. Elsevier. p. [5].ISBN 978-1-4831-4556-3.
  49. ^Rottwitt, Karsten; Tidemand-Lichtenberg, Peter (2014-12-11). Nonlinear Optics: Principles and Applications. CRC Press. p. 276. ISBN 978-1-4665-6583-8.
  50. ^Gelfand & Shilov 1966–1968, Vol. 1, §II.2.5.
  51. ^Further refinement is possible, namely to submersions, although these require a more involved change of variables formula.
  52. ^Hörmander 1983, §6.1.
  53. ^Lange 2012, pp. 29–30.
  54. ^Gelfand & Shilov 1966–1968, p. 212.
  55. ^Gelfand & Shilov 1966–1968, p. 26.
  56. ^Gelfand & Shilov 1966–1968, §2.1.
  57. ^Weisstein, Eric W."Doublet Function".MathWorld.
  58. ^Bracewell 2000, p. 86.
  59. ^"Gugo82's comment on the distributional derivative of Dirac's delta".matematicamente.it. 12 September 2010.
  60. ^Hörmander 1983, p. 56.
  61. ^Namias, Victor (July 1977). "Application of the Dirac delta function to electric charge and multipole distributions".American Journal of Physics.45 (7):624–630.doi:10.1119/1.10779.
  62. ^Rudin 1991, Theorem 6.25.
  63. ^Stein & Weiss 1971, Theorem 1.18.
  64. ^Rudin 1991, §II.6.31.
  65. ^More generally, one only needsη =η1 to have an integrable radially symmetric decreasing rearrangement.
  66. ^Saichev & Woyczyński 1997, §1.1 The "delta function" as viewed by a physicist and an engineer, p. 3.
  67. ^Milovanović, Gradimir V.; Rassias, Michael Th (2014-07-08).Analytic Number Theory, Approximation Theory, and Special Functions: In Honor of Hari M. Srivastava. Springer. p. 748.ISBN 978-1-4939-0258-3.
  68. ^Stein & Shakarchi 2005, p. 111.
  69. ^Stein & Weiss 1971, §I.1.
  70. ^Mader, Heidy M. (2006).Statistics in Volcanology. Geological Society of London. p. 81.ISBN 978-1-86239-208-3.
  71. ^Vallée & Soares 2004, §7.2.
  72. ^Hörmander 1983, §7.8.
  73. ^Courant & Hilbert 1962, §14.
  74. ^John 1955.
  75. ^Gelfand & Shilov 1966–1968, I, §3.10.
  76. ^The numerical factors depend on the conventions for the Fourier transform.
  77. ^Bracewell 1986.
  78. ^Lang 1997, p. 312.
  79. ^In the terminology of Lang (1997), the Fejér kernel is a Dirac sequence, whereas the Dirichlet kernel is not.
  80. ^Reed & Simon 1980, Ch. II–III, VIII.
  81. ^Adams & Fournier 2003, p. 71.
  82. ^Hazewinkel 1995, p. 357.
  83. ^Zhu 2007, Ch. 4.
  84. ^Levin 2002, p. 109.
  85. ^Davis & Thomson 2000, p. 343.
  86. ^Davis & Thomson 2000, p. 344.
  87. ^de la Madrid, Bohm & Gadella 2002.
  88. ^Laugwitz 1989.
  89. ^Córdoba 1988.
  90. ^Hörmander 1983, §7.2.
  91. ^Vladimirov 1971, §5.7.
  92. ^Hartmann 1997, pp. 154–155.
  93. ^Kanwal, Ram P. (1997). "15.1. Applications to Probability and Random Processes".Generalized Functions Theory and Technique. Boston, MA: Birkhäuser Boston.doi:10.1007/978-1-4684-0035-9.ISBN 978-1-4684-0037-3.
  94. ^Karatzas & Shreve 1998, p. 204.
  95. ^Isham 1995, §6.2.
  96. ^Gelfand & Shilov 1966–1968, Vol. 4, §I.4.1.
  97. ^de la Madrid Modino 2001, pp. 96, 106.
  98. ^Arfken & Weber 2005, pp. 975–976.
  99. ^Boyce, DiPrima & Meade 2017, pp. 270–273.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Dirac_delta_function&oldid=1317722562"