Powell's dog leg method


Powell's dog leg method, also called Powell's hybrid method, is an iterative optimisation algorithm for the solution of non-linear least squares problems, introduced in 1970 by Michael J. D. Powell.[1] Similarly to the Levenberg–Marquardt algorithm, it combines the Gauss–Newton algorithm with gradient descent, but it uses an explicit trust region. At each iteration, if the step from the Gauss–Newton algorithm is within the trust region, it is used to update the current solution. If not, the algorithm searches for the minimum of the objective function along the steepest descent direction, known as the Cauchy point. If the Cauchy point is outside of the trust region, it is truncated to the boundary of the latter and taken as the new solution. If the Cauchy point is inside the trust region, the new solution is taken at the intersection between the trust region boundary and the line joining the Cauchy point and the Gauss–Newton step (dog leg step).[2]

The name of the method derives from the resemblance between the construction of the dog leg step and the shape of a dogleg hole in golf.[2]

Formulation

[Figure: Construction of the dog leg step]

Given a least squares problem in the form

\[ F(\boldsymbol{x}) = \frac{1}{2}\left\|\boldsymbol{f}(\boldsymbol{x})\right\|^{2} = \frac{1}{2}\sum_{i=1}^{m}\left(f_{i}(\boldsymbol{x})\right)^{2} \]

with $f_{i} : \mathbb{R}^{n} \to \mathbb{R}$, Powell's dog leg method finds the optimal point $\boldsymbol{x}^{*} = \operatorname{argmin}_{\boldsymbol{x}} F(\boldsymbol{x})$ by constructing a sequence $\boldsymbol{x}_{k} = \boldsymbol{x}_{k-1} + \boldsymbol{\delta}_{k}$ that converges to $\boldsymbol{x}^{*}$. At a given iteration, the Gauss–Newton step is given by

\[ \boldsymbol{\delta}_{gn} = -\left(\boldsymbol{J}^{\top}\boldsymbol{J}\right)^{-1}\boldsymbol{J}^{\top}\boldsymbol{f}(\boldsymbol{x}) \]

where $\boldsymbol{J} = \left(\frac{\partial f_{i}}{\partial x_{j}}\right)$ is the Jacobian matrix, while the steepest descent direction is given by

\[ \boldsymbol{\delta}_{sd} = -\boldsymbol{J}^{\top}\boldsymbol{f}(\boldsymbol{x}). \]

The objective function is linearised along the steepest descent direction

\[ \begin{aligned} F(\boldsymbol{x} + t\boldsymbol{\delta}_{sd}) &\approx \frac{1}{2}\left\|\boldsymbol{f}(\boldsymbol{x}) + t\boldsymbol{J}(\boldsymbol{x})\boldsymbol{\delta}_{sd}\right\|^{2} \\ &= F(\boldsymbol{x}) + t\,\boldsymbol{\delta}_{sd}^{\top}\boldsymbol{J}^{\top}\boldsymbol{f}(\boldsymbol{x}) + \frac{1}{2}t^{2}\left\|\boldsymbol{J}\boldsymbol{\delta}_{sd}\right\|^{2}. \end{aligned} \]

To compute the value of the parameter $t$ at the Cauchy point, the derivative of the last expression with respect to $t$ is set equal to zero, giving

\[ t = -\frac{\boldsymbol{\delta}_{sd}^{\top}\boldsymbol{J}^{\top}\boldsymbol{f}(\boldsymbol{x})}{\left\|\boldsymbol{J}\boldsymbol{\delta}_{sd}\right\|^{2}} = \frac{\left\|\boldsymbol{\delta}_{sd}\right\|^{2}}{\left\|\boldsymbol{J}\boldsymbol{\delta}_{sd}\right\|^{2}}, \]

where the second equality follows from $\boldsymbol{\delta}_{sd} = -\boldsymbol{J}^{\top}\boldsymbol{f}(\boldsymbol{x})$; the Cauchy point is then $\boldsymbol{x} + t\boldsymbol{\delta}_{sd}$.
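These quantities can be computed directly with NumPy. The following is a minimal sketch, not taken from the cited sources: `J` and `f` stand for the Jacobian matrix and the residual vector evaluated at the current iterate, and the function name is a hypothetical helper.

```python
import numpy as np

def dogleg_ingredients(J, f):
    """Compute the Gauss-Newton step, the steepest descent direction
    and the Cauchy step length t for residuals f with Jacobian J,
    both evaluated at the current iterate."""
    # Gauss-Newton step: delta_gn = -(J^T J)^{-1} J^T f
    delta_gn = -np.linalg.solve(J.T @ J, J.T @ f)
    # Steepest descent direction: delta_sd = -J^T f
    delta_sd = -J.T @ f
    # Cauchy step length: t = ||delta_sd||^2 / ||J delta_sd||^2
    t = (delta_sd @ delta_sd) / np.linalg.norm(J @ delta_sd) ** 2
    return delta_gn, delta_sd, t
```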


Given a trust region of radius $\Delta$, Powell's dog leg method selects the update step $\boldsymbol{\delta}_{k}$ as equal to:

  • $\boldsymbol{\delta}_{gn}$, if the Gauss–Newton step is within the trust region ($\|\boldsymbol{\delta}_{gn}\| \le \Delta$);
  • $\frac{\Delta}{\|\boldsymbol{\delta}_{sd}\|}\boldsymbol{\delta}_{sd}$, if both the Gauss–Newton step and the Cauchy point are outside the trust region ($\|t\boldsymbol{\delta}_{sd}\| \ge \Delta$);
  • $t\boldsymbol{\delta}_{sd} + s\left(\boldsymbol{\delta}_{gn} - t\boldsymbol{\delta}_{sd}\right)$, with $s$ chosen such that $\|\boldsymbol{\delta}_{k}\| = \Delta$, otherwise: the dog leg step proper, which moves from the Cauchy point towards the Gauss–Newton step until it reaches the trust region boundary, as in the sketch below.
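Putting the three cases together, here is a hedged NumPy sketch of the step selection (function and variable names are hypothetical; the strategy for growing or shrinking $\Delta$ between iterations, which completes the trust-region method, is omitted):

```python
import numpy as np

def dogleg_step(delta_gn, delta_sd, t, radius):
    """Select the dog leg update step for a trust region of the
    given radius, following the three cases above."""
    if np.linalg.norm(delta_gn) <= radius:
        # Gauss-Newton step lies inside the trust region: take it.
        return delta_gn
    if t * np.linalg.norm(delta_sd) >= radius:
        # Even the Cauchy point is outside: truncate the steepest
        # descent step to the trust-region boundary.
        return (radius / np.linalg.norm(delta_sd)) * delta_sd
    # Dog leg step: walk from the Cauchy point a = t * delta_sd
    # towards the Gauss-Newton step until the boundary is hit,
    # i.e. find s >= 0 with ||a + s*b|| = radius.
    a = t * delta_sd
    b = delta_gn - a
    # Positive root of the quadratic ||a + s*b||^2 = radius^2.
    c = a @ b
    bb = b @ b
    disc = c ** 2 + bb * (radius ** 2 - a @ a)
    s = (-c + np.sqrt(disc)) / bb
    return a + s * b
```

For production use, SciPy's `scipy.optimize.least_squares` offers a related rectangular-trust-region variant of this idea under `method='dogbox'`.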

References

  1. Powell (1970)
  2. Yuan (2000)

Sources

  • Lourakis, M. L. A.; Argyros, A. A. (2005). "Is Levenberg-Marquardt the most efficient optimization algorithm for implementing bundle adjustment?". Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1. pp. 1526–1531. doi:10.1109/ICCV.2005.128. ISBN 0-7695-2334-X. S2CID 16542484.
  • Yuan, Ya-xiang (2000). "A review of trust region algorithms for optimization". ICIAM. Vol. 99.
  • Powell, M. J. D. (1970). "A new algorithm for unconstrained optimization". In Rosen, J. B.; Mangasarian, O. L.; Ritter, K. (eds.). Nonlinear Programming. New York: Academic Press. pp. 31–66.
  • Powell, M. J. D. (1970). "A hybrid method for nonlinear equations". In Robinowitz, P. (ed.). Numerical Methods for Nonlinear Algebraic Equations. London: Gordon and Breach Science. pp. 87–144.
