(2016-10-25) Always well-defined as a two-variable function.
This is actually a special case of a tensor product, so defined:
$$ (f \otimes g)(x,y) \;=\; f(x)\, g(y) $$
(2008-10-21) Motivational introduction to convolution products and distributions.
Whenever it makes sense, the following integral (from −∞ to +∞) is known as the value at point x of the convolution product of f and g.
$$ (f * g)(x) \;=\; \int_{-\infty}^{+\infty} f(u)\, g(x-u)\; du $$
So defined, the convolution operator is commutative (to see this, change the variable of integration from u to w = x−u, as spelled out below). It's also associative:
$$ (f * g * h)(x) \;=\; \iint f(u)\, g(v)\, h(x-u-v)\; du\, dv $$
More generally, a convolution product of several functions at a certain value x is the integral of their ordinary (pointwise) product over the (oriented) hyperplane where the sum of their arguments is equal to the constant x. This viewpoint makes the commutativity and associativity of convolution obvious.
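To spell out the change of variable mentioned above (w = x−u), here is the one-line verification of commutativity, added for clarity (the sign of dw is absorbed by swapping the bounds of integration):

$$ (f * g)(x) \;=\; \int_{-\infty}^{+\infty} f(u)\, g(x-u)\; du \;=\; \int_{-\infty}^{+\infty} f(x-w)\, g(w)\; dw \;=\; (g * f)(x) $$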
Loosely speaking, a key feature of convolution is that a convolution product of two functions is at least as nice a function as either of its factors.
This happens trivially when the other factor is Dirac's δ distribution (a unit spike at point zero) which is, by definition, the neutral element for the convolution operation:
$$ \delta * f \;=\; f * \delta \;=\; f $$
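As a quick sanity check, here is a discrete analogue of these two facts using NumPy's convolution of finite sequences (an illustration added here, not part of the original text):

```python
import numpy as np

f = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
g = np.array([0.5, 0.5])
spike = np.array([1.0])            # discrete counterpart of Dirac's delta

# Convolving with the unit spike returns the sequence unchanged:
print(np.convolve(f, spike))       # [1. 4. 6. 4. 1.]

# Discrete convolution is commutative:
print(np.allclose(np.convolve(f, g), np.convolve(g, f)))   # True
```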
One night in 1944, my late teacher Laurent Schwartz (1915-2002) had the idea that extended functions (now called distributions) could be fully specified by their convolution products over a set of suitable very well-behaved test functions, up to irrelevant differences over a set of measure zero. (Not all functions correspond to a distribution, but locally summable ones do.) Schwartz was awarded a Fields Medal (in 1950) for the successful development of that idea.
Test functions are always chosen to be smooth and so that the derivative of a test function is always a test function. Because a convolution can be differentiated by differentiating either factor, we may define the derivative of any distribution f via its convolution into any test function φ:
$$ f' * \varphi \;=\; f * \varphi' $$
There's no such thing as a non-differentiable distribution !
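For an ordinary differentiable function f, the rule can be checked directly (a worked step added for clarity). Differentiating under the integral sign acts on whichever factor carries the variable x:

$$ \frac{d}{dx}\,(f * \varphi)(x) \;=\; \int f(u)\, \varphi'(x-u)\; du \;=\; (f * \varphi')(x) $$

and, after the change of variable w = x−u,

$$ \frac{d}{dx}\,(f * \varphi)(x) \;=\; \frac{d}{dx}\int f(x-w)\, \varphi(w)\; dw \;=\; \int f'(x-w)\, \varphi(w)\; dw \;=\; (f' * \varphi)(x) $$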
Although a convolution product is well-defined when one factor (a function or a distribution) is nice enough, it's not at all defined in some cases, ultimately because the right-hand side of the following defining equation can be problematic:
$$ \int (f * g)(u)\; \varphi(u)\; du \;=\; \iint f(x)\, g(y)\; \varphi(x+y)\; dx\, dy $$
This is so because φ(x+y) is never a suitable two-dimensional test function on the plane (note that it's constant over any anti-diagonal).
However, for example, when the support of either of the two distributions f or g is bounded, the right-hand side makes perfect sense for any single-dimensional test function and, therefore, the convolution product f * g is well-defined.
For distributions which do not have that kind of restriction, the above right-hand side is not guaranteed to make sense and the convolution product is not necessarily well-defined.
A distribution is uniquely specified when its convolution into any test function is known, but much less than that is required. Indeed, if we assume that any translation of a test function is a test function, all we need to specify a distribution is the value at zero of its convolution product into any test function. This means that a distribution is fully specified by a functional over the set of test functions (by definition, a numerical functional maps a function to a number). Not all such functionals are acceptable, though. A distribution is actually a continuous linear functional over test functions.
The above goes to show that all test functions must be infinitely smooth and sufficiently concentrated about a point to make the relevant integrals converge. Also, making the set of test functions stable by translation is an easy way to ensure that all parts of a distribution are relevant.
(2008-10-23) An Hermitian product defined over dual spaces ("bras" and "kets").
Let's consider a pair f and g of complex functions of a real variable.
In the same spirit as the above convolution product, we may define an inner product (endowed with Hermitian symmetry) for some such pairs of functions via the following definite integral (from −∞ to +∞) whenever it makes sense. Here, f (u)* denotes the complex conjugate of f (u).
$$ \langle f \,|\, g \rangle \;=\; \int_{-\infty}^{+\infty} f(u)^*\, g(u)\; du $$
This notation (Dirac's notation) is firmly linked with Hermitian symmetry in the minds of all physicists familiar with Dirac's bra-ket notation (pun intended). Here, kets are well-behaved test functions and bras, which we shall define as the duals of kets, are the new mathematical animals called distributions, presented in the next article.
Other introductions to the Theory of distributions usually forgo the complex conjugation and use a mere pairing rather than a full-fledged Hermitian product. They use a comma rather than a vertical bar as a separator:
$$ \langle f \,,\, g \rangle \;=\; \int_{-\infty}^{+\infty} f(u)\, g(u)\; du $$
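A small numerical sketch of both pairings (the example functions are my own, chosen only for illustration):

```python
import numpy as np
from scipy.integrate import quad

f = lambda u: np.exp(-u**2) * np.exp(1j * u)   # a complex-valued, rapidly decreasing function
g = lambda u: np.exp(-u**2)

def pairing(f, g, conjugate=True):
    """<f|g> with conjugation (Hermitian product), or <f,g> without it (plain pairing)."""
    h = (lambda u: np.conj(f(u)) * g(u)) if conjugate else (lambda u: f(u) * g(u))
    re, _ = quad(lambda u: h(u).real, -np.inf, np.inf)
    im, _ = quad(lambda u: h(u).imag, -np.inf, np.inf)
    return re + 1j * im

print(pairing(f, g))                         # Hermitian product <f|g>
print(np.conj(pairing(g, f)))                # same value, by Hermitian symmetry
print(pairing(f, g, conjugate=False))        # the plain (bilinear) pairing <f,g>
```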
(2008-10-23) The set of distributions is the dual of the set of test functions.
A linear form is a linear function which associates a scalar to a vector.
For a vector space of finite dimension, the linear forms constitute another vector space of the same dimension, dubbed the dual of the original one.
On the other hand, the dual of a space of infinitely many dimensions need not be isomorphic to it. Actually, something strange happens among spaces of infinitely many dimensions: The smaller the space, the larger its dual...
Thus, loosely speaking, the dual of a very restricted space of test functions is a very large space of new mathematical objects called distributions.
The most restricted space of test functions conceived by Laurent Schwartz (in 1944) is that of the smooth functions of compact support. It is thus the space which yields, by duality, the most general type of distributions.
The support of a function is the closure of the set of all points for which it's nonzero. Compactness is a very general topological concept (a subset of a topological space is compact when every open cover contains a finite subcover). In Euclidean spaces of finitely many dimensions, a set is compact if and only if it's both closed and bounded (that's the Heine-Borel Theorem). Thus, the support of a function of a real variable is compact when that function is zero outside of a finite interval. Examples of smooth functions (i.e., infinitely differentiable functions) of compact support are not immediately obvious. Here is one:
$$ \varphi(x) \;=\; \begin{cases} \exp\!\left( \dfrac{-1}{1-x^2} \right) & \text{if } x \text{ is between } -1 \text{ and } 1 \\[1ex] 0 & \text{elsewhere} \end{cases} $$
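Here is a direct numerical evaluation of that bump function (a small illustrative sketch, not part of the original text):

```python
import math

def bump(x):
    """Smooth function of compact support: exp(-1/(1-x^2)) for |x| < 1, and 0 elsewhere."""
    if abs(x) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

# The values (and, in fact, all derivatives) vanish as x approaches +1 or -1:
for x in (0.0, 0.9, 0.99, 0.999, 1.0, 1.5):
    print(x, bump(x))
```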
(2008-10-22) The smooth functions whose derivatives are all rapidly decreasing.
A function of a vector x is said to be rapidly decreasing when its product into any power of ||x|| tends to zero as ||x|| tends to infinity.
Schwartz functions are smooth functions whose partial derivatives of any order are all rapidly decreasing in the above sense. The set of all Schwartz functions is called the Schwartz Space.
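For example (standard examples, spelled out here for concreteness): exp(−x²) is a Schwartz function, and so is P(x) exp(−x²) for any polynomial P, as is any smooth function of compact support. On the other hand, 1/(1+x²) is not rapidly decreasing, since its product with x² tends to 1 rather than 0.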
(2008-10-22) The natural domain of definition of the Fourier transform.
Not all distributions have a Fourier transform but tempered ones do. The Fourier transform of a tempered distribution is a tempered distribution.
Two functions which differ in only finitely many points (or, more generally, over any set of vanishing Lebesgue measure) represent the same tempered distribution. However, the explicit pointwise formulas giving the inverse transform of the Fourier transform of a function, if they yield a function at all, can only yield a function verifying the following relation:
$$ f(x) \;=\; \tfrac{1}{2}\,\big[\, f(x^-) \,+\, f(x^+) \,\big] $$
If it's not continuous, such a function only has discrete jump discontinuities, where its value is equal to the average of its left and right limits.
When a distribution can be represented by a function, it's wise to equate it to the representative function which has the above property, because it's the only one which can be retrieved pointwise from its Fourier transform without using dubious ad hoc methods.
I am introducing a viewpoint (the involutive convention) which defines the Fourier transform so it's equal to its own inverse (i.e., it's an involution).
The Involutive Fourier Transform F
$$ \mathcal{F}(f)(s) \;=\; \int_{-\infty}^{+\infty} e^{2\pi i\, s x}\; f(x)^*\; dx $$
As usual, the integral is understood to be a definite integral from −∞ to +∞.  f (x)* is the complex conjugate of f (x).
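The involution property can be checked numerically on a concrete example (a sketch of mine, using a modulated Gaussian as test function; the truncation of the integration range to [−8, 8] is harmless because of the Gaussian decay):

```python
import numpy as np
from scipy.integrate import quad

def involutive_ft(f, s):
    """F(f)(s) = integral of exp(2*pi*i*s*x) * conj(f(x)) dx, computed numerically."""
    re, _ = quad(lambda x: (np.exp(2j * np.pi * s * x) * np.conj(f(x))).real, -8, 8)
    im, _ = quad(lambda x: (np.exp(2j * np.pi * s * x) * np.conj(f(x))).imag, -8, 8)
    return re + 1j * im

a = 0.3
f = lambda x: np.exp(-np.pi * x**2) * np.exp(2j * np.pi * a * x)   # modulated Gaussian

g = lambda s: involutive_ft(f, s)
print(g(1.0), np.exp(-np.pi * (1.0 - a)**2))     # first transform: a shifted real Gaussian

print(involutive_ft(g, 0.5), f(0.5))             # second transform recovers the original f
```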
Example: Square function (Π) and sine cardinal (sinc)
The square function Π(x) = ½ [ sgn(½+x) + sgn(½−x) ] and the sampling function f (s) = sinc(s) are Fourier transforms of each other.
As the square function vanishes outside the interval [-½, ½] and is equal to 1 on the interior of that interval, the Fourier transform f of Π is given by:
$$ f(s) \;=\; \int_{-1/2}^{\,1/2} e^{2\pi i s x}\; dx \;=\; \frac{e^{\pi i s} - e^{-\pi i s}}{2\pi i s} \;=\; \frac{\sin \pi s}{\pi s} \;=\; \operatorname{sinc} s $$
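A quick numerical cross-check of that computation (an added illustration; note that np.sinc is the normalized sinc, sin(πs)/(πs)):

```python
import numpy as np
from scipy.integrate import quad

def ft_of_square(s):
    """Fourier transform of the square function: integral of exp(2*pi*i*s*x) over [-1/2, 1/2]."""
    re, _ = quad(lambda x: np.cos(2 * np.pi * s * x), -0.5, 0.5)
    return re            # the imaginary (sine) part vanishes by symmetry

for s in (0.25, 1.0, 2.5):
    print(ft_of_square(s), np.sinc(s))
```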
(2008-11-02) Several definitions of the Fourier transform have been used.
Only the above definition makes the Fourier transform its own inverse. (Well, technically, you could replace i by -i in that definition and still obtain an involution, but this amounts to switching right and left in the orientation of the complex plane.)
A few competing definitions are tabulated below as pairs of transforms which are inverses of each other. The first of each pair is usually called the direct Fourier transform and the other one is the matching inverse Fourier transform, but the opposite convention can also be used. The last column gives expressions in terms of the involutive Fourier transform F introduced above (and listed first).
Competing Definitions of the Fourier Transform and its Inverse

1.  $g(s) \,=\, \int e^{2\pi i s x}\, f(x)^*\, dx$  (its own inverse)  —  $g = \mathcal{F}(f)$,  $f = \mathcal{F}(g)$

2.  $g(\nu) \,=\, \int e^{-2\pi i \nu t}\, f(t)\, dt$  and  $f(t) \,=\, \int e^{2\pi i \nu t}\, g(\nu)\, d\nu$  —  $g = \mathcal{F}(f)^*$,  $f = \mathcal{F}(g^*)$

3.  $g(\omega) \,=\, \int e^{-i \omega t}\, f(t)\, dt$  and  $f(t) \,=\, \dfrac{1}{2\pi} \int e^{i \omega t}\, g(\omega)\, d\omega$
(2008-10-23) In modern terms: The Fourier transform is unitary.
The Swiss mathematician Michel Plancherel (1885-1967) is credited with the modern formulation of the theorem. The core idea occurs in a statement about series published in 1799 by Antoine Parseval (1755-1836).
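For the record (a standard statement of the result, added here; it holds as written for the normalizations #1 and #2 tabulated above), if g is the Fourier transform of f then:

$$ \int_{-\infty}^{+\infty} |f(x)|^2\; dx \;=\; \int_{-\infty}^{+\infty} |g(s)|^2\; ds $$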
(2016-10-10) A time-delay corresponds to a phase shift in the frequency domain.
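A short worked statement (added for clarity), using the conventional transform #2 from the table above, with g(ν) = ∫ e^{−2πiνt} f(t) dt: delaying the signal by a multiplies its transform by a pure phase factor,

$$ \int_{-\infty}^{+\infty} e^{-2\pi i \nu t}\; f(t-a)\; dt \;=\; e^{-2\pi i \nu a}\; g(\nu) $$

as is seen by substituting u = t−a in the integral.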
(2008-10-24) Our definition makes the Fourier transform equal to its own inverse.
The following table can be read both ways: The right entry is the Fourier transform of the left one and vice-versa.
Pairs of tempered distributions which are Fourier transforms of each other:
(2016-10-24) It's the Fourier transform of the convolution of their Fourier transforms.
If we weren't using the involutive definition of the Fourier transform, we would have to replace one of the occurrences of "Fourier transform" in the above definition by "inverse Fourier transform".
Because the convolution of two tempered distributions isn't always defined, neither is their product in the above sense.
However, the above is consistent with the ordinary (pointwise) product of continuous functions and does extend well beyond that.
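In the discrete setting, the analogous statement is exact and easy to verify: the DFT of a circular convolution is the pointwise product of the DFTs (an added illustration using NumPy's FFT):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(8)
g = rng.standard_normal(8)

# Circular convolution of the two sequences:
circ = np.array([sum(f[k] * g[(n - k) % 8] for k in range(8)) for n in range(8)])

# Its DFT equals the pointwise product of the individual DFTs:
print(np.allclose(np.fft.fft(circ), np.fft.fft(f) * np.fft.fft(g)))   # True
```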
(2008-10-24) The unit Gaussian distribution is a fixed-point of the Fourier involution.
f (x) = exp(−π x²) is its own Fourier transform.
Let g be the Fourier transform of f. We have:
$$ g(s) \;=\; \int_{-\infty}^{+\infty} e^{2\pi i s x}\; e^{-\pi x^2}\; dx $$

Differentiating both sides and integrating by parts, we obtain:

$$ g'(s) \;=\; \int_{-\infty}^{+\infty} 2\pi i x\; e^{2\pi i s x}\; e^{-\pi x^2}\; dx \;=\; -2\pi s \int_{-\infty}^{+\infty} e^{2\pi i s x}\; e^{-\pi x^2}\; dx \;=\; -2\pi s\; g(s) $$

So, g satisfies the differential equation dg = −2πs g ds, whose solution is:

$$ g(s) \;=\; g(0)\; e^{-\pi s^2} $$

Because of a well-known miracle, g(0) = ∫ exp(−π x²) dx = 1.  So g(s) = exp(−π s²).
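A direct numerical confirmation (an added sketch; the function is real, so the conjugation in the involutive definition is immaterial):

```python
import numpy as np
from scipy.integrate import quad

def ft_gaussian(s):
    """Fourier transform of exp(-pi*x^2) at s; the sine part vanishes by symmetry."""
    re, _ = quad(lambda x: np.cos(2 * np.pi * s * x) * np.exp(-np.pi * x**2), -np.inf, np.inf)
    return re

for s in (0.0, 0.5, 1.0):
    print(ft_gaussian(s), np.exp(-np.pi * s**2))   # the two columns agree
```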
(2019-10-10) Sum of Independent Identically Distributed (IID) random variables
The distribution of a sum of two independent random variables is the convolution product of their respective distributions.
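A tiny discrete illustration (my own example): the distribution of the sum of two fair dice is the convolution of the two individual distributions.

```python
import numpy as np

die = np.full(6, 1/6)                    # P(1), ..., P(6) for one fair die
sum_dist = np.convolve(die, die)         # P(2), ..., P(12) for the sum of two dice
print(np.round(sum_dist * 36))           # [1. 2. 3. 4. 5. 6. 5. 4. 3. 2. 1.]

# Monte-Carlo cross-check on 100,000 throws:
rng = np.random.default_rng(1)
s = rng.integers(1, 7, 100_000) + rng.integers(1, 7, 100_000)
print(np.round(np.bincount(s, minlength=13)[2:] / 100_000, 3))
```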
(2008-10-24) Under coherent monochromatic light, a translucent film produces a distant light whose intensity is the Fourier transform of the film's opacity.
One practical way to observe the "distant" monochromatic image of a translucent plane is to put it at the focal point of a lens. The Fourier image is observed at any chosen distance past the lens. From that image, an identical lens can be used to reconstruct the original light in its own focal plane.
Interestingly, that type of setup provides an easy way to observe the convolution of two images... Just take a photographic picture of the Fourier transform of the first image by putting it in the focal plane of your camera and shining a broad laser beam through it. Make a transparent slide from that picture. This slide may then be used as a sort of spatial filter...
(2008-10-24) The unit Dirac comb (shah function) is its own Fourier transform.
The unit comb Ш(x) is an infinite sum of Dirac distributions:

$$ Ш(x) \;=\; \sum_{n=-\infty}^{+\infty} \delta(x-n) $$
This corresponds to a well-defined tempered distribution whose (Hermitian) pairing with a Schwartz test function is the sum of a convergent series.
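Spelling that out (a standard identity, added for clarity): for any Schwartz test function φ,

$$ \langle\, Ш \,|\, \varphi\, \rangle \;=\; \sum_{n=-\infty}^{+\infty} \varphi(n) $$

and the series converges absolutely because φ is rapidly decreasing.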
The Cyrillic symbol Ш (shah) has been used to denote the unit Dirac comb for at least 45 years (and probably a lot more). François Roddier was using that notation in 1971 as if it was already well-established.
(2016-10-24) Crystals behave as diffraction patterns for X-rays.
(2016-10-30) The spectrum of a distribution is the support of its Fourier transform.
The support of a numerical function defined over some topological space is simply the closure of the set of points where that function isn't zero.
The support of a distribution ...
(2016-10-30) Discrete distributions with discrete spectra.
(2009-07-12) The relation between a tomographic scan and two-dimensional density.
This transform and its inverse were introduced in 1917 by the Austrian mathematician Johann Radon (1887-1956).
In the plane, a ray (a nonoriented straight line) is uniquely specified by:
The inclination θ of the [upward] normal unit vector n  (0 ≤ θ < π).
The value ρ = n·OM  (the same for any point M on the line).
The cartesian equation of such a ray depends on the parameters θ and ρ:
$$ x \cos\theta \;+\; y \sin\theta \;=\; \rho $$
In the approximation of geometrical optics, the optical density of a bounded transmission medium along a given "ray of light" is the logarithm of the ratio of the incoming light intensity to the outgoing intensity.
This traditional vocabulary forces us to state that the light-blocking capability of a substrate around a given point is its optical density per unit of length, which we denote by the symbol μ. It varies with location:
$$ \mu \;=\; \mu(x,y) \;\ge\; 0 $$
The (total) density along a straight ray specified by the parameters θ and ρ defined above can be expressed by a two-dimensional integral using a one-dimensional Dirac distribution:
$$ R(\rho,\theta) \;=\; \iint \mu(x,y)\; \delta( x \cos\theta + y \sin\theta - \rho )\; dx\, dy $$
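A minimal numerical sketch of that formula (the example density, the indicator of the unit disk, is my own choice; the line integral is computed by sampling along the ray rather than through the δ distribution):

```python
import numpy as np

def mu(x, y):
    """Example density: 1 inside the unit disk, 0 outside."""
    return np.where(x**2 + y**2 <= 1.0, 1.0, 0.0)

def radon(rho, theta, n=4000):
    """Line integral of mu along the ray x*cos(theta) + y*sin(theta) = rho."""
    t = np.linspace(-2.0, 2.0, n)                    # arc-length parameter along the ray
    x = rho * np.cos(theta) - t * np.sin(theta)
    y = rho * np.sin(theta) + t * np.cos(theta)
    return mu(x, y).sum() * (t[1] - t[0])

for rho in (0.0, 0.5, 0.9):
    print(radon(rho, 0.7), 2 * np.sqrt(1 - rho**2))  # matches the chord length of the disk
```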
(2016-10-25)
The great simplicity of defining distributions as continuous functionals on some suitable set of well-behaved test functions is entirely due to the fact that continuity can indeed be well-defined in the realm of functional analysis.
That notion is the most difficult part of our subject. Fortunately, the beauty and the simplicity of the theory of distributions can be enjoyed by taking this deep part of its foundations for granted. The following discussion is therefore entirely optional for most readers.