scipy.optimize.fmin

scipy.optimize.fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, initial_simplex=None)

Minimize a function using the downhill simplex algorithm.

This algorithm only uses function values, not derivatives or second derivatives.

Parameters:

func : callable func(x, *args)

The objective function to be minimized.

x0 : ndarray

Initial guess.

args : tuple, optional

Extra arguments passed to func, i.e., f(x, *args).

xtol : float, optional

Absolute error in xopt between iterations that is acceptable for convergence.

ftol : number, optional

Absolute error in func(xopt) between iterations that is acceptable for convergence.

maxiter : int, optional

Maximum number of iterations to perform.

maxfun : number, optional

Maximum number of function evaluations to make.

full_output : bool, optional

Set to True if fopt and warnflag outputs are desired.

disp : bool, optional

Set to True to print convergence messages.

retall : bool, optional

Set to True to return a list of solutions at each iteration.

callback : callable, optional

Called after each iteration, as callback(xk), where xk is the current parameter vector.

initial_simplex : array_like of shape (N + 1, N), optional

Initial simplex. If given, overrides x0. initial_simplex[j, :] should contain the coordinates of the jth vertex of the N+1 vertices in the simplex, where N is the dimension. A sketch combining args, callback, and initial_simplex follows this list.
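
As a sketch of how these options combine (the quadratic objective and the recorded-iterates list are illustrative, not part of the API):

>>> import numpy as np
>>> from scipy import optimize
>>> def f(x, a, b):
...     # Extra parameters a and b arrive through `args`.
...     return (x[0] - a)**2 + (x[1] - b)**2
>>> iterates = []   # callback(xk) appends the current vector each iteration
>>> simplex = np.array([[0.0, 0.0],   # shape (N + 1, N) = (3, 2) for N = 2
...                     [0.5, 0.0],
...                     [0.0, 0.5]])
>>> xopt = optimize.fmin(f, x0=[0.0, 0.0], args=(1.0, 2.0),
...                      callback=iterates.append,
...                      initial_simplex=simplex, disp=False)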

Returns:

xopt : ndarray

Parameter that minimizes function.

fopt : float

Value of function at minimum: fopt = func(xopt).

iter : int

Number of iterations performed.

funcalls : int

Number of function calls made.

warnflag : int

1 : Maximum number of function evaluations made.
2 : Maximum number of iterations reached.

allvecs : list

Solution at each iteration.
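
Note that fopt, iter, funcalls, and warnflag are returned only when full_output is true, and allvecs only when retall is true. A minimal sketch of the unpacking order, with an illustrative one-variable quadratic:

>>> from scipy import optimize
>>> def f(x):
...     return (x[0] - 3.0)**2
>>> xopt, fopt, n_iter, funcalls, warnflag, allvecs = optimize.fmin(
...     f, x0=[0.0], full_output=True, retall=True, disp=False)
>>> warnflag   # 0 on normal convergence; 1 and 2 flag the limits above
0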

See also

minimize

Interface to minimization algorithms for multivariate functions. See the ‘Nelder-Mead’ method in particular.
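
For new code, the same minimization is usually written through minimize, where the Nelder-Mead counterparts of xtol and ftol are the xatol and fatol options:

>>> from scipy.optimize import minimize
>>> def f(x):
...     return (x[0] - 1.0)**2 + (x[1] + 2.0)**2
>>> res = minimize(f, x0=[0.0, 0.0], method='Nelder-Mead',
...                options={'xatol': 1e-4, 'fatol': 1e-4})
>>> bool(res.success)
True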

Notes

Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.

This algorithm has a long history of successful use in applications. But it will usually be slower than an algorithm that uses first or second derivative information. In practice, it can have poor performance in high-dimensional problems and is not robust to minimizing complicated functions. Additionally, there currently is no complete theory describing when the algorithm will successfully converge to the minimum, or how fast it will if it does. Both the ftol and xtol criteria must be met for convergence.
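
One consequence of the joint criterion is that tightening either tolerance can only increase the work done. A sketch on the Rosenbrock test function (the exact counts depend on the SciPy version, so only their ordering is checked):

>>> from scipy import optimize
>>> def rosen(x):   # classic banana-valley test function
...     return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
>>> evals = []
>>> for tol in (1e-2, 1e-8):
...     out = optimize.fmin(rosen, x0=[-1.2, 1.0], xtol=tol, ftol=tol,
...                         full_output=True, disp=False)
...     evals.append(out[3])   # funcalls
>>> evals[0] < evals[1]
True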


Examples

>>> def f(x):
...     return x**2
>>> from scipy import optimize
>>> minimum = optimize.fmin(f, 1)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 17
         Function evaluations: 34
>>> minimum[0]
-8.8817841970012523e-16
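
The same pattern extends to several variables; a sketch with an illustrative two-variable quadratic, continuing from the import above (disp=False suppresses the convergence message):

>>> import numpy as np
>>> def g(x):
...     return (x[0] - 1.0)**2 + (x[1] + 1.0)**2
>>> xmin = optimize.fmin(g, [0, 0], disp=False)
>>> bool(np.allclose(xmin, [1.0, -1.0], atol=1e-3))
True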