
In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral. The term numerical quadrature (often abbreviated to quadrature) is more or less a synonym for "numerical integration", especially as applied to one-dimensional integrals. Some authors refer to numerical integration over more than one dimension as cubature;[1] others take "quadrature" to include higher-dimensional integration.
The basic problem in numerical integration is to compute an approximate solution to a definite integral

$$\int_a^b f(x)\,dx$$

to a given degree of accuracy. If f(x) is a smooth function integrated over a small number of dimensions, and the domain of integration is bounded, there are many methods for approximating the integral to the desired precision.
Numerical integration has roots in the geometrical problem of finding a square with the same area as a given plane figure (quadrature or squaring), as in the quadrature of the circle. The term is also sometimes used to describe the numerical solution of differential equations.
There are several reasons for carrying out numerical integration, as opposed to analytical integration by finding the antiderivative:

- The integrand f(x) may be known only at certain points, such as when it is obtained by sampling.
- A formula for the integrand may be known, but it may be difficult or impossible to find an antiderivative that is an elementary function.
- Even when an antiderivative can be found symbolically, it may be easier to compute a numerical approximation of the integral than to evaluate the antiderivative.
The term "numerical integration" first appears in 1915 in the publication A Course in Interpolation and Numeric Integration for the Mathematical Laboratory by David Gibb.[2]
"Quadrature" is a historical mathematical term that means calculating area. Quadrature problems have served as one of the main sources of mathematical analysis. Mathematicians of Ancient Greece, according to the Pythagorean doctrine, understood calculation of area as the process of geometrically constructing a square having the same area (squaring); that is why the process was named "quadrature". Examples include the quadrature of the circle, the lune of Hippocrates, and the treatise Quadrature of the Parabola. This construction must be performed only by means of compass and straightedge.
The ancient Babylonians used the trapezoidal rule to integrate the motion of Jupiter along the ecliptic.[3]

For a quadrature of a rectangle with sides a and b it is necessary to construct a square with side $\sqrt{ab}$ (the geometric mean of a and b). For this purpose one can use the following fact: if we draw a circle with the sum of a and b as its diameter, then the height BH (from the point where the two segments meet to its intersection with the circle) equals their geometric mean. A similar geometrical construction solves the problem of quadrature for a parallelogram and for a triangle.

Problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge was proved in the 19th century to be impossible. Nevertheless, for some figures (for example the lune of Hippocrates) a quadrature can be performed. The quadratures of the surface of a sphere and of a parabola segment, carried out by Archimedes, became the highest achievements of ancient analysis. To prove the results, Archimedes used the method of exhaustion of Eudoxus.
In medieval Europe quadrature meant the calculation of area by any method. Most often the method of indivisibles was used; it was less rigorous but simpler and more powerful. With its help Galileo Galilei and Gilles de Roberval found the area of a cycloid arch, Grégoire de Saint-Vincent investigated the area under a hyperbola (Opus Geometricum, 1647), and Alphonse Antonio de Sarasa, de Saint-Vincent's pupil and commentator, noted the relation of this area to logarithms.
John Wallis algebrised this method: in his Arithmetica Infinitorum (1656) he wrote series that we now call definite integrals, and he calculated their values. Isaac Barrow and James Gregory made further progress: quadratures for some algebraic curves and spirals. Christiaan Huygens successfully performed a quadrature of some solids of revolution.
The quadrature of the hyperbola by Saint-Vincent and de Sarasa provided a new function, the natural logarithm, of critical importance.
With the invention of integral calculus came a universal method for area calculation. As a result, the term "quadrature" has become somewhat old-fashioned, and the modern phrase "computation of a univariate definite integral" is more common.
A quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration.
Numerical integration methods can generally be described as combining evaluations of the integrand to get an approximation to the integral. The integrand is evaluated at a finite set of points called integration points and a weighted sum of these values is used to approximate the integral. The integration points and weights depend on the specific method used and the accuracy required from the approximation.
An important part of the analysis of any numerical integration method is to study the behavior of the approximation error as a function of the number of integrand evaluations. A method that yields a small error for a small number of evaluations is usually considered superior. Reducing the number of evaluations of the integrand reduces the number of arithmetic operations involved, and therefore reduces the total round-off error. Also, each evaluation takes time, and the integrand may be arbitrarily complicated.
A "brute force" kind of numerical integration can be done, if the integrand is reasonably well-behaved (i.e. piecewise continuous and of bounded variation), by evaluating the integrand with very small increments.
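As a minimal sketch, this brute-force approach amounts to a Riemann sum with a very fine step; the function name below is illustrative, not a standard API:

```python
import math

def brute_force_integrate(f, a, b, n=100_000):
    """Left Riemann sum: evaluate f at n evenly spaced points and
    weight each value by the small increment dx = (b - a) / n."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

# Example: the integral of sin over [0, pi] is exactly 2.
approx = brute_force_integrate(math.sin, 0.0, math.pi)
print(abs(approx - 2.0) < 1e-3)  # True
```

The cost is high: accuracy improves only linearly in the number of evaluations, which is why the structured rules below are preferred in practice.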

This simplest method approximates the function by a step function (a piecewise constant function, or a segmented polynomial of degree zero) that passes through the point $\left(\frac{a+b}{2}, f\left(\frac{a+b}{2}\right)\right)$. This is called the midpoint rule or rectangle rule:

$$\int_a^b f(x)\,dx \approx (b-a)\, f\!\left(\frac{a+b}{2}\right).$$
A large class of quadrature rules can be derived by constructing interpolating functions that are easy to integrate. Typically these interpolating functions are polynomials. In practice, since polynomials of very high degree tend to oscillate wildly, only polynomials of low degree are used, typically linear and quadratic.

The interpolating function may be a straight line (an affine function, i.e. a polynomial of degree 1) passing through the points $(a, f(a))$ and $(b, f(b))$. This is called the trapezoidal rule:

$$\int_a^b f(x)\,dx \approx (b-a)\,\frac{f(a) + f(b)}{2}.$$
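A minimal sketch of these two single-interval rules (the function names are illustrative):

```python
def midpoint_rule(f, a, b):
    # Approximate f by a constant: its value at the interval midpoint.
    return (b - a) * f((a + b) / 2)

def trapezoidal_rule(f, a, b):
    # Approximate f by the straight line through (a, f(a)) and (b, f(b)).
    return (b - a) * (f(a) + f(b)) / 2

# Integral of x^2 over [0, 1] is exactly 1/3.
print(midpoint_rule(lambda x: x * x, 0.0, 1.0))     # 0.25
print(trapezoidal_rule(lambda x: x * x, 0.0, 1.0))  # 0.5
```

Note that the two rules err on opposite sides of the true value here; this is typical for convex integrands and is one motivation for combining them into higher-order rules.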

For either one of these rules, we can make a more accurate approximation by breaking up the interval $[a, b]$ into some number $n$ of subintervals, computing an approximation for each subinterval, then adding up all the results. This is called a composite rule, extended rule, or iterated rule. For example, the composite trapezoidal rule can be stated as

$$\int_a^b f(x)\,dx \approx \frac{b-a}{n}\left(\frac{f(a)}{2} + \sum_{k=1}^{n-1} f\!\left(a + k\,\frac{b-a}{n}\right) + \frac{f(b)}{2}\right),$$

where the subintervals have the form $[a + kh, a + (k+1)h]$, with $h = \frac{b-a}{n}$ and $k = 0, 1, \ldots, n-1$. Here we used subintervals of the same length $h$, but one could also use intervals of varying length.
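The composite trapezoidal rule can be sketched directly from the formula above (the function name is illustrative):

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals:
    interior points get full weight, the endpoints half weight."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2
    for k in range(1, n):
        total += f(a + k * h)
    return h * total

# Integral of sin over [0, pi] is 2; the error shrinks like O(h^2).
approx = composite_trapezoid(math.sin, 0.0, math.pi, 1000)
print(abs(approx - 2.0) < 1e-5)  # True
```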
Interpolation with polynomials evaluated at equally spaced points in $[a, b]$ yields the Newton–Cotes formulas, of which the rectangle rule and the trapezoidal rule are examples. Simpson's rule, which is based on a polynomial of degree 2, is also a Newton–Cotes formula.
Quadrature rules with equally spaced points have the very convenient property of nesting. The corresponding rule with each interval subdivided includes all the current points, so those integrand values can be re-used.
If we allow the intervals between interpolation points to vary, we find another group of quadrature formulas, such as the Gaussian quadrature formulas. A Gaussian quadrature rule is typically more accurate than a Newton–Cotes rule that uses the same number of function evaluations, if the integrand is smooth (i.e., if it is sufficiently differentiable). Other quadrature methods with varying intervals include Clenshaw–Curtis quadrature (also called Fejér quadrature) methods, which do nest.
Gaussian quadrature rules do not nest, but the related Gauss–Kronrod quadrature formulas do.
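A Gauss–Legendre rule on an arbitrary interval can be sketched using NumPy's `numpy.polynomial.legendre.leggauss`, which returns nodes and weights for $[-1, 1]$; the wrapper function name is illustrative:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b], obtained by an
    affine map of the standard nodes and weights on [-1, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# An n-point rule is exact for polynomials up to degree 2n - 1,
# so even 5 nodes integrate sin over [0, pi] extremely accurately.
print(abs(gauss_legendre(np.sin, 0.0, np.pi, 5) - 2.0) < 1e-6)  # True
```

Compare this with the composite trapezoidal rule, which needed on the order of a thousand evaluations for similar accuracy on the same smooth integrand.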
The accuracy of a quadrature rule of the Newton–Cotes type is generally a function of the number of evaluation points. The result is usually more accurate as the number of evaluation points increases, or, equivalently, as the width of the step size between the points decreases. It is natural to ask what the result would be if the step size were allowed to approach zero. This can be answered by extrapolating the result from two or more nonzero step sizes, using series acceleration methods such as Richardson extrapolation. The extrapolation function may be a polynomial or rational function. Extrapolation methods are described in more detail by Stoer and Bulirsch (Section 3.4) and are implemented in many of the routines in the QUADPACK library.
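One Richardson extrapolation step can be sketched as follows: since the trapezoidal rule's error is $O(h^2)$, combining results at step sizes $h$ and $h/2$ as $(4T_{h/2} - T_h)/3$ cancels the leading error term (the function names are illustrative):

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + k * h) for k in range(1, n)) + f(b) / 2)

def richardson(f, a, b, n):
    """One Richardson extrapolation step on the composite trapezoidal
    rule: cancel the O(h^2) term using results at h and h/2."""
    t1 = trapezoid(f, a, b, n)
    t2 = trapezoid(f, a, b, 2 * n)
    return (4 * t2 - t1) / 3

coarse = trapezoid(math.sin, 0.0, math.pi, 8)
better = richardson(math.sin, 0.0, math.pi, 8)
print(abs(better - 2.0) < abs(coarse - 2.0))  # True: extrapolation helps
```

Repeating this step at successively halved step sizes gives Romberg integration.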
Let $f$ have a bounded first derivative over $[a, b]$, i.e. $f \in C^1([a, b])$. The mean value theorem for $f$, where $x \in [a, b)$, gives

$$(x - a) f'(v_x) = f(x) - f(a)$$

for some $v_x \in (a, x]$ depending on $x$. If we integrate in $x$ from $a$ to $b$ on both sides and take the absolute values, we obtain

$$\left| \int_a^b f(x)\,dx - (b - a) f(a) \right| = \left| \int_a^b (x - a) f'(v_x)\,dx \right|.$$

We can further approximate the integral on the right-hand side by bringing the absolute value into the integrand, and replacing the term in $f'$ by an upper bound:

$$\left| \int_a^b f(x)\,dx - (b - a) f(a) \right| \leq \frac{(b - a)^2}{2} \sup_{a \leq x \leq b} \left| f'(x) \right|, \qquad (1)$$

where the supremum was used to approximate.

Hence, if we approximate the integral $\int_a^b f(x)\,dx$ by the quadrature rule $(b - a) f(a)$, our error is no greater than the right-hand side of (1). We can convert this into an error analysis for the Riemann sum, giving an upper bound of

$$\frac{n^{-1}}{2} \sup_{0 \leq x \leq 1} |f'(x)|$$

for the error term of that particular approximation. (Note that this is precisely the error we calculated for the example.) Using more derivatives, and by tweaking the quadrature, we can do a similar error analysis using a Taylor series (using a partial sum with remainder term) for $f$. This error analysis gives a strict upper bound on the error, if the derivatives of $f$ are available.
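The bound in (1) can be checked numerically for a concrete case; here $f(x) = \sin x$ on $[0, 1]$, where $\sup |f'| = \sup |\cos x| = 1$ (a sanity check, not a proof):

```python
import math

# Verify |∫_a^b f - (b - a) f(a)| <= (b - a)^2 / 2 * sup|f'|
# for f(x) = sin(x) on [0, 1].
a, b = 0.0, 1.0
exact = math.cos(a) - math.cos(b)   # ∫_0^1 sin(x) dx = 1 - cos(1)
rule = (b - a) * math.sin(a)        # one-point quadrature (b - a) f(a) = 0
bound = (b - a) ** 2 / 2 * 1.0      # sup of |cos| on [0, 1] is 1
print(abs(exact - rule) <= bound)   # True
```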
This integration method can be combined with interval arithmetic to produce computer proofs and verified calculations.
Several methods exist for approximate integration over unbounded intervals. The standard technique involves specially derived quadrature rules, such as Gauss–Hermite quadrature for integrals on the whole real line and Gauss–Laguerre quadrature for integrals on the positive reals.[4] Monte Carlo methods can also be used, or a change of variables to a finite interval; e.g., for the whole line one could use

$$\int_{-\infty}^{\infty} f(x)\,dx = \int_{-1}^{1} f\!\left(\frac{t}{1 - t^2}\right) \frac{1 + t^2}{(1 - t^2)^2}\,dt,$$

and for semi-infinite intervals one could use

$$\int_a^{\infty} f(x)\,dx = \int_0^1 f\!\left(a + \frac{t}{1 - t}\right) \frac{dt}{(1 - t)^2}$$

as possible transformations.
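As a small illustration of the first technique, NumPy's `numpy.polynomial.hermite.hermgauss` gives nodes and weights for Gauss–Hermite quadrature, which targets integrals of the form $\int_{-\infty}^{\infty} e^{-x^2} g(x)\,dx$ directly:

```python
import math
import numpy as np

# Gauss-Hermite quadrature: nodes x_i and weights w_i such that
# ∫ e^{-x^2} g(x) dx ≈ Σ w_i g(x_i), exact for polynomial g of
# degree up to 2n - 1.
x, w = np.polynomial.hermite.hermgauss(10)
approx = np.sum(w * x**2)            # g(x) = x^2
exact = math.sqrt(math.pi) / 2       # ∫ e^{-x^2} x^2 dx = sqrt(pi)/2
print(abs(approx - exact) < 1e-12)   # True
```

For integrands without the built-in Gaussian weight, the change-of-variables formulas above reduce the problem to a finite interval, where any of the earlier rules apply.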
The quadrature rules discussed so far are all designed to compute one-dimensional integrals. To compute integrals in multiple dimensions, one approach is to phrase the multiple integral as repeated one-dimensional integrals by applying Fubini's theorem (the tensor product rule). This approach requires the function evaluations to grow exponentially as the number of dimensions increases. Three methods are known to overcome this so-called curse of dimensionality.
A great many additional techniques for forming multidimensional cubature integration rules for a variety of weighting functions are given in the monograph by Stroud.[5] Integration on the sphere has been reviewed by Hesse et al. (2015).[6]
Monte Carlo methods and quasi-Monte Carlo methods are easy to apply to multi-dimensional integrals. They may yield greater accuracy for the same number of function evaluations than repeated integrations using one-dimensional methods.[citation needed]
A large class of useful Monte Carlo methods are the so-called Markov chain Monte Carlo algorithms, which include the Metropolis–Hastings algorithm and Gibbs sampling.
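A plain (non-Markov-chain) Monte Carlo estimate of a two-dimensional integral can be sketched as follows; its error shrinks like $O(1/\sqrt{N})$ regardless of dimension, which is what makes it attractive in high dimensions:

```python
import random

# Monte Carlo estimate of ∫∫ over [0,1]^2 of (x^2 + y^2) dx dy = 2/3:
# average the integrand at uniformly random sample points.
random.seed(0)  # fixed seed so the run is reproducible
n = 100_000
total = 0.0
for _ in range(n):
    x, y = random.random(), random.random()
    total += x * x + y * y
estimate = total / n
print(abs(estimate - 2 / 3) < 0.01)  # True
```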
Sparse grids were originally developed by Smolyak for the quadrature of high-dimensional functions. The method is always based on a one-dimensional quadrature rule, but performs a more sophisticated combination of univariate results. However, whereas the tensor product rule guarantees that the weights of all of the cubature points will be positive if the weights of the quadrature points were positive, Smolyak's rule does not guarantee that the weights will all be positive.
Bayesian quadrature is a statistical approach to the numerical problem of computing integrals and falls under the field of probabilistic numerics. It can provide a full handling of the uncertainty over the solution of the integral expressed as a Gaussian process posterior variance.
The problem of evaluating the definite integral

$$F(x) = \int_a^x f(u)\,du$$

can be reduced to an initial value problem for an ordinary differential equation by applying the first part of the fundamental theorem of calculus. By differentiating both sides of the above with respect to the argument $x$, it is seen that the function $F$ satisfies

$$\frac{dF}{dx} = f(x), \qquad F(a) = 0.$$

The desired integral is then $F(b)$.
Numerical methods for ordinary differential equations, such as Runge–Kutta methods, can be applied to the restated problem and thus be used to evaluate the integral. For instance, the standard fourth-order Runge–Kutta method applied to the differential equation yields Simpson's rule from above.
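This equivalence is easy to verify directly: when the right-hand side does not depend on $y$, the two middle Runge–Kutta stages coincide, and one RK4 step collapses to Simpson's rule (the function names are illustrative):

```python
import math

def rk4_step(f, x, h, y):
    """One classical fourth-order Runge-Kutta step for y' = f(x),
    i.e. a right-hand side that ignores the dependent variable."""
    k1 = f(x)
    k2 = f(x + h / 2)
    k3 = f(x + h / 2)  # identical to k2 since f ignores y
    k4 = f(x + h)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# One RK4 step across [0, 1] reproduces Simpson's rule for exp.
a, b = 0.0, 1.0
print(math.isclose(rk4_step(math.exp, a, b - a, 0.0),
                   simpson(math.exp, a, b)))  # True
```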
The differential equation has a special form: the right-hand side contains only the independent variable (here $x$) and not the dependent variable (here $F$). This simplifies the theory and algorithms considerably. The problem of evaluating integrals is thus best studied in its own right.
Conversely, the term "quadrature" may also be used for the solution of differential equations: "solving by quadrature" or "reduction to quadrature" means expressing the solution in terms of integrals.