In numerical analysis, a quasi-Newton method is an iterative numerical method used either to find zeroes or to find local maxima and minima of functions via an iterative recurrence formula much like the one for Newton's method, except using approximations of the derivatives of the functions in place of exact derivatives. Newton's method requires the Jacobian matrix of all partial derivatives of a multivariate function when used to search for zeros, or the Hessian matrix when used for finding extrema. Quasi-Newton methods, on the other hand, can be used when the Jacobian matrices or Hessian matrices are unavailable or are impractical to compute at every iteration.
Some iterative methods that reduce to Newton's method, such as sequential quadratic programming, may also be considered quasi-Newton methods.
Newton's method to find zeroes of a function $g$ of multiple variables is given by $x_{n+1} = x_n - [J_g(x_n)]^{-1} g(x_n)$, where $[J_g(x_n)]^{-1}$ is the left inverse of the Jacobian matrix $J_g(x_n)$ of $g$ evaluated for $x_n$.
Strictly speaking, any method that replaces the exact Jacobian $J_g(x_n)$ with an approximation is a quasi-Newton method.[1] For instance, the chord method (where $J_g(x_n)$ is replaced by $J_g(x_0)$ for all iterations) is a simple example. The methods given below for optimization refer to an important subclass of quasi-Newton methods, secant methods.[2]
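As a rough illustration of this distinction, the following minimal sketch (the function name and the example system are illustrative, not taken from the article) contrasts a plain multivariate Newton iteration with the chord method, which keeps the Jacobian from the starting point:

```python
import numpy as np

def newton_system(g, jac, x0, tol=1e-10, max_iter=50, chord=False):
    """Solve g(x) = 0 by Newton's method; with chord=True the Jacobian is
    frozen at x0 (the chord method, a simple quasi-Newton scheme)."""
    x = np.asarray(x0, dtype=float)
    J = jac(x)                       # Jacobian at the starting point
    for _ in range(max_iter):
        if not chord:
            J = jac(x)               # full Newton: re-evaluate every iteration
        step = np.linalg.solve(J, -g(x))
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example system: g(x, y) = (x^2 + y^2 - 1, x - y), with a root at (1/sqrt(2), 1/sqrt(2))
g = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton_system(g, jac, [1.0, 0.5]))               # exact Jacobian each step
print(newton_system(g, jac, [1.0, 0.5], chord=True))   # frozen Jacobian (chord method)
```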
Using methods developed to find extrema in order to find zeroes is not always a good idea, as the majority of the methods used to find extrema require that the matrix that is used is symmetric. While this holds in the context of the search for extrema, it rarely holds when searching for zeroes. Broyden's "good" and "bad" methods are two quasi-Newton methods commonly used to find zeroes of systems of equations. Other methods that can be used are the column-updating method, the inverse column-updating method, the quasi-Newton least squares method and the quasi-Newton inverse least squares method.
More recently quasi-Newton methods have been applied to find the solution of multiple coupled systems of equations (e.g. fluid–structure interaction problems or interaction problems in physics). They allow the solution to be found by solving each constituent system separately (which is simpler than the global system) in a cyclic, iterative fashion until the solution of the global system is found.[2][3]
The search for a minimum or maximum of a scalar-valued function is closely related to the search for the zeroes of the gradient of that function. Therefore, quasi-Newton methods can be readily applied to find extrema of a function. In other words, if $g$ is the gradient of $f$, then searching for the zeroes of the vector-valued function $g$ corresponds to the search for the extrema of the scalar-valued function $f$; the Jacobian of $g$ now becomes the Hessian of $f$. The main difference is that the Hessian matrix is a symmetric matrix, unlike the Jacobian when searching for zeroes. Most quasi-Newton methods used in optimization exploit this symmetry.
In optimization, quasi-Newton methods (a special case of variable-metric methods) are algorithms for finding local maxima and minima of functions. Quasi-Newton methods for optimization are based on Newton's method to find the stationary points of a function, points where the gradient is 0. Newton's method assumes that the function can be locally approximated as a quadratic in the region around the optimum, and uses the first and second derivatives to find the stationary point. In higher dimensions, Newton's method uses the gradient and the Hessian matrix of second derivatives of the function to be minimized.
In quasi-Newton methods the Hessian matrix does not need to be computed. The Hessian is updated by analyzing successive gradient vectors instead. Quasi-Newton methods are a generalization of the secant method to find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation is under-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian.
The first quasi-Newton algorithm was proposed by William C. Davidon, a physicist working at Argonne National Laboratory, who developed it in 1959: the DFP updating formula, which was later popularized by Fletcher and Powell in 1963, but is rarely used today. The most common quasi-Newton algorithms are currently the SR1 formula (for "symmetric rank-one"), the BHHH method, the widespread BFGS method (suggested independently by Broyden, Fletcher, Goldfarb, and Shanno, in 1970), and its low-memory extension L-BFGS. The Broyden class is a linear combination of the DFP and BFGS methods.
The SR1 formula does not guarantee the update matrix to maintain positive-definiteness and can be used for indefinite problems. Broyden's method does not require the update matrix to be symmetric and is used to find the root of a general system of equations (rather than the gradient) by updating the Jacobian (rather than the Hessian).
One of the chief advantages of quasi-Newton methods over Newton's method is that the Hessian matrix (or, in the case of quasi-Newton methods, its approximation) does not need to be inverted. Newton's method, and its derivatives such as interior point methods, require the Hessian to be inverted, which is typically implemented by solving a system of linear equations and is often quite costly. In contrast, quasi-Newton methods usually generate an estimate of the inverse Hessian directly.
As in Newton's method, one uses a second-order approximation to find the minimum of a function $f(x)$. The Taylor series of $f(x)$ around an iterate $x_k$ is

$$f(x_k + \Delta x) \approx f(x_k) + \nabla f(x_k)^{\mathrm T}\,\Delta x + \tfrac{1}{2}\,\Delta x^{\mathrm T} B\,\Delta x,$$

where $\nabla f(x_k)$ is the gradient, and $B$ an approximation to the Hessian matrix.[4] The gradient of this approximation (with respect to $\Delta x$) is

$$\nabla f(x_k + \Delta x) \approx \nabla f(x_k) + B\,\Delta x,$$

and setting this gradient to zero (which is the goal of optimization) provides the Newton step:

$$\Delta x = -B^{-1}\,\nabla f(x_k).$$

The Hessian approximation $B$ is chosen to satisfy

$$\nabla f(x_k + \Delta x) = \nabla f(x_k) + B\,\Delta x,$$
which is called the secant equation (the Taylor series of the gradient itself). In more than one dimension $B$ is underdetermined. In one dimension, solving for $B$ and applying the Newton step with the updated value is equivalent to the secant method. The various quasi-Newton methods differ in their choice of the solution to the secant equation (in one dimension, all the variants are equivalent). Most methods (but with exceptions, such as Broyden's method) seek a symmetric solution ($B^{\mathrm T} = B$); furthermore, the variants listed below can be motivated by finding an update $B_{k+1}$ that is as close as possible to $B_k$ in some norm; that is, $B_{k+1} = \operatorname{argmin}_B \|B - B_k\|_V$, where $V$ is some positive-definite matrix that defines the norm. An approximate initial value $B_0 = \beta I$ is often sufficient to achieve rapid convergence, although there is no general strategy to choose $\beta$.[5] Note that $B_0$ should be positive-definite. The unknown $x_k$ is updated applying the Newton step calculated using the current approximate Hessian matrix $B_k$:

- $\Delta x_k = -\alpha_k B_k^{-1} \nabla f(x_k)$, with $\alpha_k$ chosen to satisfy the Wolfe conditions;
- $x_{k+1} = x_k + \Delta x_k$;
- the gradient computed at the new point $\nabla f(x_{k+1})$, and

$$y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$$

is used to update the approximate Hessian $B_{k+1}$, or directly its inverse $H_{k+1} = B_{k+1}^{-1}$ using the Sherman–Morrison formula.
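A minimal sketch of this iteration in Python follows (the function name is illustrative; the BFGS formula is used as one particular solution of the secant equation, and a simple backtracking line search stands in for the Wolfe conditions):

```python
import numpy as np

def quasi_newton_minimize(f, grad, x0, max_iter=200, tol=1e-8):
    """Minimize a smooth function f given its gradient, maintaining an
    approximation H of the inverse Hessian with the BFGS update."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                          # H_0: initial inverse-Hessian guess
    g = grad(x)
    for _ in range(max_iter):
        p = -H @ g                         # quasi-Newton search direction
        alpha = 1.0
        for _ in range(30):                # backtracking (Armijo) line search,
            if f(x + alpha * p) <= f(x) + 1e-4 * alpha * (g @ p):
                break                      # a simple stand-in for the Wolfe conditions
            alpha *= 0.5
        s = alpha * p                      # step Δx_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                      # gradient difference y_k
        sy = y @ s
        if sy > 1e-12:                     # skip the update to keep H positive-definite
            rho = 1.0 / sy
            I = np.eye(n)
            # BFGS update of the inverse Hessian (so that H_new @ y == s)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

# Example: the Rosenbrock function, whose minimizer is (1, 1).
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                           200 * (v[1] - v[0]**2)])
print(quasi_newton_minimize(f, grad, [-1.2, 1.0]))
```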
The most popular update formulas are the BFGS, DFP, SR1, and Broyden-family updates.
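For example, with $\Delta x_k$ and $y_k$ as defined above, the BFGS update of the Hessian approximation is

$$B_{k+1} = B_k + \frac{y_k y_k^{\mathrm T}}{y_k^{\mathrm T} \Delta x_k} - \frac{B_k \Delta x_k \Delta x_k^{\mathrm T} B_k}{\Delta x_k^{\mathrm T} B_k \Delta x_k},$$

while the symmetric rank-one (SR1) update is

$$B_{k+1} = B_k + \frac{(y_k - B_k \Delta x_k)(y_k - B_k \Delta x_k)^{\mathrm T}}{(y_k - B_k \Delta x_k)^{\mathrm T} \Delta x_k}.$$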
Other methods are Pearson's method, McCormick's method, the Powell symmetric Broyden (PSB) method and Greenstadt's method.[2] These recursive low-rank matrix updates can also be represented as an initial matrix plus a low-rank correction. This is the compact quasi-Newton representation, which is particularly effective for constrained and/or large problems.
When is a convex quadratic function with positive-definite Hessian, one would expect the matrices generated by a quasi-Newton method to converge to the inverse Hessian. This is indeed the case for the class of quasi-Newton methods based on least-change updates.[6]
Implementations of quasi-Newton methods are available in many programming languages.
Notable open source implementations include:
GNU Octave uses a form of BFGS in its fsolve function, with trust region extensions.
R's optim general-purpose optimizer routine uses the BFGS method by using method="BFGS".[7]
SciPy's scipy.optimize.minimize function includes, among other methods, a BFGS implementation[8] (see the usage sketch below).

Notable proprietary implementations include:
MATLAB's fminunc function uses (among other methods) the BFGS quasi-Newton method.[12] Many of the constrained methods of the Optimization Toolbox use BFGS and the variant L-BFGS.[13]
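As a minimal usage sketch of the SciPy routine listed above (using the Rosenbrock test function and gradient that ship with scipy.optimize):

```python
from scipy.optimize import minimize, rosen, rosen_der

# Minimize the Rosenbrock function with the BFGS quasi-Newton method,
# supplying the analytic gradient; the minimizer is (1, 1).
result = minimize(rosen, x0=[-1.2, 1.0], jac=rosen_der, method="BFGS")
print(result.x, result.nit)    # solution and number of iterations taken
```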