In mathematics, an orthogonal polynomial sequence is a family of polynomials such that any two different polynomials in the sequence are orthogonal to each other under some inner product.
The most widely used orthogonal polynomials are the classical orthogonal polynomials, consisting of the Hermite polynomials, the Laguerre polynomials and the Jacobi polynomials. The Gegenbauer polynomials form the most important class of Jacobi polynomials; they include the Chebyshev polynomials and the Legendre polynomials as special cases. These are frequently given by a Rodrigues' formula.
The field of orthogonal polynomials developed in the late 19th century from a study of continued fractions by P. L. Chebyshev and was pursued by A. A. Markov and T. J. Stieltjes. They appear in a wide variety of fields: numerical analysis (quadrature rules), probability theory, representation theory (of Lie groups, quantum groups, and related objects), enumerative combinatorics, algebraic combinatorics, mathematical physics (the theory of random matrices, integrable systems, etc.), and number theory. Some of the mathematicians who have worked on orthogonal polynomials include Gábor Szegő, Sergei Bernstein, Naum Akhiezer, Arthur Erdélyi, Yakov Geronimus, Wolfgang Hahn, Theodore Seio Chihara, Mourad Ismail, Waleed Al-Salam, Richard Askey, and Rehuel Lobatto.
Given any non-decreasing function $\alpha$ on the real numbers, we can define the Lebesgue–Stieltjes integral

$$\int f(x) \, d\alpha(x)$$

of a function $f$. If this integral is finite for all polynomials $f$, we can define an inner product on pairs of polynomials $f$ and $g$ by

$$\langle f, g \rangle = \int f(x) g(x) \, d\alpha(x).$$
This operation is a positive semidefinite inner product on the vector space of all polynomials, and it is positive definite if the function α has an infinite number of points of growth. It induces a notion of orthogonality in the usual way, namely that two polynomials are orthogonal if their inner product is zero.
Then the sequence $(P_n)_{n=0}^{\infty}$ of orthogonal polynomials is defined by the relations

$$\deg P_n = n, \qquad \langle P_m, P_n \rangle = 0 \quad \text{for } m \neq n.$$
In other words, the sequence is obtained from the sequence of monomials $1, x, x^2, \dots$ by the Gram–Schmidt process with respect to this inner product.
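As an illustration, the Gram–Schmidt construction can be carried out numerically. The sketch below (an assumed example using NumPy's `Polynomial` class, with the Lebesgue measure on [−1, 1] as the inner product) orthogonalizes the first few monomials and recovers the Legendre polynomials up to scaling.

```python
import numpy as np
from numpy.polynomial import Polynomial

def inner(f, g):
    """Inner product <f, g> = integral of f*g over [-1, 1] (Lebesgue measure)."""
    h = (f * g).integ()          # antiderivative of the product
    return h(1.0) - h(-1.0)

def gram_schmidt(n):
    """Orthogonalize the monomials 1, x, x^2, ..., x^(n-1) on [-1, 1]."""
    basis = []
    for k in range(n):
        p = Polynomial.basis(k)                      # the monomial x^k
        for q in basis:
            p = p - (inner(p, q) / inner(q, q)) * q  # subtract projections
        basis.append(p)
    return basis

ps = gram_schmidt(4)
# ps[2] is proportional to the Legendre polynomial P_2: x^2 - 1/3.
```

Any other choice of measure in `inner` yields the orthogonal family for that measure instead.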
Usually the sequence is required to be orthonormal, namely,

$$\langle P_n, P_n \rangle = 1;$$

however, other normalisations are sometimes used.
Sometimes we have

$$d\alpha(x) = W(x) \, dx,$$

where $W$ is a non-negative function with support on some interval $[x_1, x_2]$ in the real line (where $x_1 = -\infty$ and $x_2 = \infty$ are allowed). Such a $W$ is called a weight function.[1] Then the inner product is given by

$$\langle f, g \rangle = \int_{x_1}^{x_2} f(x) g(x) W(x) \, dx.$$

However, there are many examples of orthogonal polynomials where the measure $d\alpha(x)$ has points with non-zero measure where the function $\alpha$ is discontinuous, so it cannot be given by a weight function $W$ as above.
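For a concrete weight function, the Chebyshev polynomials are orthogonal for $W(x) = 1/\sqrt{1 - x^2}$ on $(-1, 1)$. The sketch below (an illustrative example, not from the source) checks this numerically using Gauss–Chebyshev quadrature, whose nodes and weights are known in closed form for exactly this weight.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def cheb_inner(f, g, n_nodes=32):
    """<f, g> with weight W(x) = 1/sqrt(1 - x^2) on (-1, 1), evaluated by
    Gauss-Chebyshev quadrature: nodes cos((2k-1)pi/2N), equal weights pi/N.
    Exact for polynomial integrands of degree < 2*n_nodes."""
    k = np.arange(1, n_nodes + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n_nodes))
    return np.pi / n_nodes * np.sum(f(x) * g(x))

T2 = Chebyshev.basis(2)   # Chebyshev polynomial T_2
T3 = Chebyshev.basis(3)   # Chebyshev polynomial T_3
# <T_2, T_3> = 0 by orthogonality, while <T_n, T_n> = pi/2 for n >= 1.
```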
The most commonly used orthogonal polynomials are orthogonal for a measure with support in a real interval. This includes:
Discrete orthogonal polynomials are orthogonal with respect to some discrete measure. Sometimes the measure has finite support, in which case the family of orthogonal polynomials is finite, rather than an infinite sequence. The Racah polynomials are examples of discrete orthogonal polynomials, and include as special cases the Hahn polynomials and dual Hahn polynomials, which in turn include as special cases the Meixner polynomials, Krawtchouk polynomials, and Charlier polynomials.
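A minimal sketch of the finite-support case (an assumed example): for a discrete measure with binomial weights on five points, Gram–Schmidt produces only as many orthogonal polynomials as there are support points, and the resulting family is, up to scaling, a Krawtchouk family.

```python
import numpy as np

# A discrete measure supported on five points, with binomial(4, k) weights.
nodes = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
weights = np.array([1.0, 4.0, 6.0, 4.0, 1.0])

def inner(p, q):
    """<p, q> = weighted sum over the support points of the discrete measure.
    p, q are coefficient arrays, highest degree first (np.polyval convention)."""
    return np.sum(weights * np.polyval(p, nodes) * np.polyval(q, nodes))

# Gram-Schmidt on the monomials; with N support points, only N polynomials
# can be produced before the inner product degenerates.
basis = []
for k in range(len(nodes)):
    p = np.zeros(k + 1)
    p[0] = 1.0                              # coefficients of x^k
    for q in basis:
        p = np.polysub(p, (inner(p, q) / inner(q, q)) * q)
    basis.append(p)
# basis[1] is x - 2 (the mean of the measure is 2), basis[2] is x^2 - 4x + 3.
```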
Meixner classified all the orthogonal Sheffer sequences: there are only Hermite, Laguerre, Charlier, Meixner, and Meixner–Pollaczek. In some sense Krawtchouk should be on this list too, but the Krawtchouk polynomials form a finite sequence. These six families correspond to the NEF-QVFs and are martingale polynomials for certain Lévy processes.
Sieved orthogonal polynomials, such as the sieved ultraspherical polynomials, sieved Jacobi polynomials, and sieved Pollaczek polynomials, have modified recurrence relations.
One can also consider orthogonal polynomials for some curve in the complex plane. The most important case (other than real intervals) is when the curve is the unit circle, giving orthogonal polynomials on the unit circle, such as the Rogers–Szegő polynomials.
There are some families of orthogonal polynomials that are orthogonal on plane regions such as triangles or disks. They can sometimes be written in terms of Jacobi polynomials. For example, Zernike polynomials are orthogonal on the unit disk.
The orthogonality between Hermite polynomials of different orders is exploited in the generalized frequency division multiplexing (GFDM) structure, so that more than one symbol can be carried in each grid point of the time–frequency lattice.[2]
Orthogonal polynomials of one variable defined by a non-negative measure on the real line have the following properties.
The orthogonal polynomials $P_n$ can be expressed in terms of the moments

$$m_n = \int x^n \, d\alpha(x)$$

as follows:

$$P_n(x) = c_n \, \det \begin{bmatrix} m_0 & m_1 & m_2 & \cdots & m_n \\ m_1 & m_2 & m_3 & \cdots & m_{n+1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ m_{n-1} & m_n & m_{n+1} & \cdots & m_{2n-1} \\ 1 & x & x^2 & \cdots & x^n \end{bmatrix},$$

where the constants $c_n$ are arbitrary (they depend on the normalization of $P_n$).
This comes directly from applying the Gram–Schmidt process to the monomials, imposing each polynomial to be orthogonal with respect to the previous ones. For example, orthogonality with $P_0$ prescribes that $P_1$ must have the form

$$P_1(x) = c_1 \left( x - \frac{m_1}{m_0} \right),$$

which can be seen to be consistent with the previously given expression with the determinant.
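The determinant expression can be checked numerically. The sketch below (an illustrative example, assuming the Lebesgue measure on [−1, 1]) builds the coefficients of $P_n$ by expanding the moment determinant along its last row $(1, x, \dots, x^n)$.

```python
import numpy as np

def moment(k):
    """k-th moment of the Lebesgue measure on [-1, 1]: integral of x^k dx."""
    return 2.0 / (k + 1) if k % 2 == 0 else 0.0

def poly_from_moments(n):
    """Coefficients of P_n (lowest degree first, up to scaling) from the
    moment determinant, expanded along the last row (1, x, ..., x^n)."""
    # Top block of the determinant: rows of moments m_{i+j}.
    M = np.array([[moment(i + j) for j in range(n + 1)] for i in range(n)])
    coeffs = np.empty(n + 1)
    for j in range(n + 1):
        minor = np.delete(M, j, axis=1)               # delete column j
        coeffs[j] = (-1) ** (n + j) * np.linalg.det(minor)  # cofactor of x^j
    return coeffs

p2 = poly_from_moments(2)
# Up to scaling, p2 is the Legendre polynomial P_2, proportional to x^2 - 1/3.
```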
The polynomials $P_n$ satisfy a three-term recurrence relation of the form

$$P_{n+1}(x) = (A_n x + B_n) P_n(x) + C_n P_{n-1}(x),$$

where $A_n$ is not 0. The converse is also true; see Favard's theorem. These recurrence relations are key for deriving properties of orthogonal polynomials.
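As an illustration, the Legendre polynomials satisfy the three-term recurrence with $A_n = (2n+1)/(n+1)$, $B_n = 0$, and $C_n = -n/(n+1)$. The sketch below generates them with NumPy's `Polynomial` class.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Legendre recurrence: P_{n+1} = ((2n+1)/(n+1)) x P_n - (n/(n+1)) P_{n-1}.
x = Polynomial([0, 1])
P = [Polynomial([1]), x]               # P_0 = 1, P_1 = x
for n in range(1, 5):
    P.append(((2 * n + 1) / (n + 1)) * x * P[n] - (n / (n + 1)) * P[n - 1])
# P[2] = (3x^2 - 1)/2, P[3] = (5x^3 - 3x)/2, and P_n(1) = 1 for all n.
```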
If the measure $d\alpha$ is supported on an interval $[a, b]$, all the zeros of $P_n$ lie in $[a, b]$. Moreover, the zeros have the following interlacing property: if $m < n$, there is a zero of $P_n$ between any two zeros of $P_m$.
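Both properties can be checked numerically for the Legendre polynomials (an illustrative choice; their measure is supported on [−1, 1]), here for the zeros of $P_4$ and $P_5$:

```python
import numpy as np
from numpy.polynomial import legendre

# Roots of the Legendre polynomials P_4 and P_5, from their Legendre-series
# coefficients (P_4 = 0*L_0 + ... + 1*L_4, etc.).
z4 = np.sort(legendre.legroots([0, 0, 0, 0, 1]))
z5 = np.sort(legendre.legroots([0, 0, 0, 0, 0, 1]))

# Interlacing: each zero of P_4 is strictly separated by consecutive zeros
# of P_5, so a zero of P_5 lies between any two zeros of P_4.
interlaced = all(z5[i] < z4[i] < z5[i + 1] for i in range(len(z4)))
```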
From the 1980s, with the work of X. G. Viennot, J. Labelle, Y.-N. Yeh, D. Foata, and others, combinatorial interpretations were found for all the classical orthogonal polynomials.[3]
The Macdonald polynomials are orthogonal polynomials in several variables, depending on the choice of an affine root system. They include many other families of multivariable orthogonal polynomials as special cases, including the Jack polynomials, the Hall–Littlewood polynomials, the Heckman–Opdam polynomials, and the Koornwinder polynomials. The Askey–Wilson polynomials are the special case of Macdonald polynomials for a certain non-reduced root system of rank 1.
Multiple orthogonal polynomials are polynomials in one variable that are orthogonal with respect to a finite family of measures.
These are orthogonal polynomials with respect to a Sobolev inner product, i.e. an inner product involving derivatives. Including derivatives has major consequences for the polynomials; in general, they no longer share some of the nice features of the classical orthogonal polynomials.
Orthogonal polynomials with matrices come in two popular variants: either the coefficients are matrices, or the indeterminate is a matrix.
Quantum polynomials or q-polynomials are the q-analogs of orthogonal polynomials.
Orthogonal polynomials can be defined as a vector basis set of a symmetric bilinear form on polynomials. In the basis of the orthogonal polynomials, the bilinear form diagonalizes as $\langle P_m, P_n \rangle = h_n \delta_{mn}$. Similarly, given a nondegenerate skew-symmetric bilinear form on polynomials, we can find a pair of vector basis sets $(P_n)$ and $(Q_n)$ such that the bilinear form skew-diagonalizes as $\langle P_m, Q_n \rangle = \delta_{mn}$, with $\langle P_m, P_n \rangle = \langle Q_m, Q_n \rangle = 0$.