Iterative method

From Wikipedia, the free encyclopedia
Algorithm in which each approximation of the solution is derived from prior approximations

In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the i-th approximation (called an "iterate") is derived from the previous ones.

A specific implementation with termination criteria for a given iterative method, like gradient descent, hill climbing, Newton's method, or quasi-Newton methods like BFGS, is an algorithm of an iterative method or a method of successive approximation. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common.

In contrast, direct methods attempt to solve the problem by a finite sequence of operations. In the absence of rounding errors, direct methods would deliver an exact solution (for example, solving a linear system of equations $A\mathbf{x}=\mathbf{b}$ by Gaussian elimination). Iterative methods are often the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving many variables (sometimes on the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power.[1]

Attractive fixed points

If an equation can be put into the form $f(x) = x$, and a solution $x$ is an attractive fixed point of the function $f$, then one may begin with a point $x_1$ in the basin of attraction of $x$, and let $x_{n+1} = f(x_n)$ for $n \geq 1$; the sequence $\{x_n\}_{n \geq 1}$ will converge to the solution $x$. Here $x_n$ is the $n$th approximation or iterate of $x$, and $x_{n+1}$ is the next, or $(n+1)$th, iterate. Alternately, superscripts in parentheses are often used in numerical methods, so as not to interfere with subscripts that carry other meanings (for example, $x^{(n+1)} = f(x^{(n)})$). If the function $f$ is continuously differentiable, a sufficient condition for convergence is that the spectral radius of the derivative is strictly bounded by one in a neighborhood of the fixed point. If this condition holds at the fixed point, then a sufficiently small neighborhood (basin of attraction) must exist.[citation needed]
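
For illustration, here is a minimal Python sketch of this scheme (not code from the article), applied to $x = \cos x$, whose solution is an attractive fixed point of the cosine function; the starting point and tolerance are arbitrary choices:

```python
import math

def fixed_point_iteration(f, x, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive iterates agree to tol."""
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# Example: the attractive fixed point of f(x) = cos(x), about 0.739085.
print(fixed_point_iteration(math.cos, x=1.0))
```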

Linear systems

In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods and the more general Krylov subspace methods.

Stationary iterative methods

Introduction

Stationary iterative methods solve a linear system with an operator approximating the original one; based on a measurement of the error in the result (the residual), they form a "correction equation", and the process is repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices.

Definition

An iterative method is defined by
$$\mathbf{x}^{k+1} := \Psi(\mathbf{x}^k), \quad k \geq 0,$$
and, for a given linear system $A\mathbf{x}=\mathbf{b}$ with exact solution $\mathbf{x}^*$, the error by
$$\mathbf{e}^k := \mathbf{x}^k - \mathbf{x}^*, \quad k \geq 0.$$
An iterative method is called linear if there exists a matrix $C \in \mathbb{R}^{n \times n}$ such that
$$\mathbf{e}^{k+1} = C\mathbf{e}^k \quad \forall k \geq 0,$$
and this matrix is called the iteration matrix. An iterative method with a given iteration matrix $C$ is called convergent if
$$\lim_{k \to \infty} C^k = 0.$$

An important theorem states that a given iterative method with iteration matrix $C$ is convergent if and only if its spectral radius $\rho(C)$ is smaller than unity, that is,
$$\rho(C) < 1.$$

The basic iterative methods work by splitting the matrix $A$ into
$$A = M - N,$$
where the matrix $M$ should be easily invertible. The iterative methods are now defined as
$$M\mathbf{x}^{k+1} = N\mathbf{x}^k + \mathbf{b}, \quad k \geq 0,$$
or, equivalently,
$$\mathbf{x}^{k+1} = \mathbf{x}^k + M^{-1}\left(\mathbf{b} - A\mathbf{x}^k\right), \quad k \geq 0.$$
From this it follows that the iteration matrix is given by
$$C = I - M^{-1}A = M^{-1}N.$$
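
As a minimal NumPy sketch (an illustration, not code from the article), the iteration above can be implemented for any splitting by supplying $M$, and the spectral radius of $C = I - M^{-1}A$ can be checked against the convergence criterion; the 2x2 system and the diagonal choice of $M$ are arbitrary examples:

```python
import numpy as np

def splitting_iteration(A, b, M, x0, tol=1e-10, max_iter=500):
    """Iterate x^{k+1} = x^k + M^{-1}(b - A x^k) for a splitting A = M - N."""
    # The method converges iff rho(C) < 1 for the iteration matrix C = I - M^{-1} A.
    C = np.eye(A.shape[0]) - np.linalg.solve(M, A)
    print("spectral radius of iteration matrix:", max(abs(np.linalg.eigvals(C))))
    x = x0.astype(float)
    for _ in range(max_iter):
        x = x + np.linalg.solve(M, b - A @ x)
        if np.linalg.norm(b - A @ x) < tol:  # stop once the residual is small
            break
    return x

# Illustrative 2x2 system with the diagonal choice M = diag(A).
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(splitting_iteration(A, b, M=np.diag(np.diag(A)), x0=np.zeros(2)))
```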

Examples

Basic examples of stationary iterative methods use a splitting of the matrix $A$ such as
$$A = D + L + U, \quad D := \operatorname{diag}((a_{ii})_i),$$
where $D$ is only the diagonal part of $A$, $L$ is the strict lower triangular part of $A$, and $U$ is the strict upper triangular part of $A$.
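
For instance, the Jacobi method takes $M = D$, while the Gauss–Seidel method takes $M = D + L$. A self-contained sketch of both choices (the matrix and right-hand side are illustrative values, not from the article):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))   # diagonal part of A
L = np.tril(A, k=-1)      # strict lower triangular part
U = np.triu(A, k=1)       # strict upper triangular part

def solve_by_splitting(M, sweeps=50):
    """Iterate x^{k+1} = x^k + M^{-1}(b - A x^k) for the splitting A = M - N."""
    x = np.zeros(len(b))
    for _ in range(sweeps):
        x = x + np.linalg.solve(M, b - A @ x)
    return x

print(solve_by_splitting(M=D))      # Jacobi: M = D
print(solve_by_splitting(M=D + L))  # Gauss-Seidel: M = D + L
```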

Linear stationary iterative methods are also called relaxation methods.

Krylov subspace methods

Krylov subspace methods[2] work by forming a basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed. The prototypical method in this class is the conjugate gradient method (CG), which assumes that the system matrix $A$ is symmetric positive-definite. For symmetric (and possibly indefinite) $A$ one works with the minimal residual method (MINRES). In the case of non-symmetric matrices, methods such as the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG) have been derived.
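
The following is a minimal sketch of the textbook conjugate gradient algorithm (an illustration, not code from the article); the small symmetric positive-definite system is an arbitrary example:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Minimal conjugate gradient sketch for symmetric positive-definite A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length minimizing along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next A-conjugate search direction
        rs = rs_new
    return x

# Illustrative symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))  # approx. [0.0909, 0.6364]
```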

Convergence of Krylov subspace methods

Since these methods form a basis, it is evident that the method converges in N iterations, where N is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice N can be very large, and the iterative process reaches sufficient accuracy far sooner. The analysis of these methods is hard, depending on a complicated function of the spectrum of the operator.[citation needed]

Preconditioners

The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such as GMRES (alternatively, preconditioned Krylov methods can be considered as accelerations of stationary iterative methods), where they become transformations of the original operator to a presumably better conditioned one. The construction of preconditioners is a large research area.[citation needed]
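
As an illustrative sketch, assuming SciPy is available: a simple diagonal (Jacobi) preconditioner can be passed to SciPy's GMRES through its M parameter, which expects an approximation of the inverse of A; the system shown is an arbitrary example:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])

# Jacobi preconditioner: approximate A^{-1} by inverting only diag(A).
d = np.diag(A)
M = LinearOperator(A.shape, matvec=lambda v: v / d)

x, info = gmres(A, b, M=M)  # info == 0 indicates successful convergence
print(x, info)
```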

Methods of successive approximation

Mathematical methods relating to successive approximation include:

- The Babylonian method, for finding square roots of numbers[3]
- Fixed-point iteration[4]

History

Jamshīd al-Kāshī used iterative methods to calculate the sine of 1° and π in The Treatise of Chord and Sine to high precision. An early iterative method for solving a linear system appeared in a letter of Gauss to a student of his, in which he proposed solving a 4-by-4 system of equations by repeatedly solving the component in which the residual was the largest.[citation needed]

The theory of stationary iterative methods was solidly established with the work of D. M. Young starting in the 1950s. The conjugate gradient method was also invented in the 1950s, with independent developments by Cornelius Lanczos, Magnus Hestenes, and Eduard Stiefel, but its nature and applicability were misunderstood at the time. Only in the 1970s was it realized that conjugacy-based methods work very well for partial differential equations, especially those of elliptic type.[citation needed]

See also

References

  1. ^ Amritkar, Amit; de Sturler, Eric; Świrydowicz, Katarzyna; Tafti, Danesh; Ahuja, Kapil (2015). "Recycling Krylov subspaces for CFD applications and a new hybrid recycling solver". Journal of Computational Physics. 303: 222. arXiv:1501.03358. Bibcode:2015JCoPh.303..222A. doi:10.1016/j.jcp.2015.09.040.
  2. ^ Broyden, Charles George; Vespucci, Maria Teresa (2004). Krylov Solvers for Linear Algebraic Systems. Elsevier. ISBN 0-444-51474-0.
  3. ^ "Babylonian mathematics". December 1, 2000.
  4. ^ Day, Mahlon M. (November 2, 1960). Fixed-point theorems for compact convex sets.

External links

Wikimedia Commons has media related to Iterative methods.